Are C++ compilers actually compliant with zero-size array SFINAE rule?
About a year or two ago I read about SFINAE rules in C++. They state, in particular,

The following type errors are SFINAE errors:

...

attempting to create an array of void, array of reference, array of function, array of negative size, array of non-integral size, or array of size zero

I decided to use this rule in my homework, but it wouldn't work. Gradually reducing my code, I arrived at this small example, whose behavior I don't understand:

#include <iostream>

template<int I>
struct Char {};

template<int I>
using Failer = Char<I>[0];

template<int I>
void y(Failer<I> = 0) {
    std::cout << "y<" << I << ">, Failer version\n";
}

template<int I>
void y(int = 0) {
    std::cout << "y<" << I << ">, int version\n";
}

int main() {
    y<0>();
    y<1>();
    y<2>();
    y<3>();
}

Moreover, several C++ compilers seem not to understand it either. I created a Godbolt example, where you can see three different compilers resolving the y overload differently:

  • gcc reports a compilation error;
  • clang chooses the int version (this is what I would think complies with the SFINAE rule);
  • icc chooses the Failer version.

Which among them is correct, and what is actually going on?

Willywillynilly answered 20/9, 2023 at 11:15 Comment(4)
If you change [0] to [I-5], icc will choose the int version. – Smithereens

I'm not sure what = 0 means when it comes after a type rather than the name of an argument. int foo = 0 in a function declaration makes sense, since foo is a parameter that can have a default value of zero. But int = 0 makes no sense to me, since int is a type, and there's no parameter of type int that can be set to zero. I don't know if there's some obscure C++ rule that allows it, but it at least looks wrong. – Calaverite

@Calaverite – You are not required to name a parameter. So int = 0 is the default value for a parameter without a name. Of limited usefulness, except in made-up test cases. – Hemimorphic

@Hemimorphic and a source of headaches, because some compilers historically didn't support it and exposed inconsistent behaviour in newer versions (when I say some, I mean Microsoft). – Ungraceful

Validity of zero-size arrays

[dcl.array] p1 states that:

[The constant-expression] N specifies the array bound, i.e., the number of elements in the array; N shall be greater than zero.

Zero-size arrays are thus disallowed in principle. Note that your zero-size array appears in a function parameter, and this may be relevant according to [dcl.fct] p5:

After determining the type of each parameter, any parameter of type “array of T” or of function type T is adjusted to be “pointer to T”.

However, this type adjustment rule only kicks in after the types of the parameters have been determined, and here one parameter has type Char<I>[0], which is invalid. This should disqualify the first overload from being a candidate. In fact, your program is IFNDR (ill-formed, no diagnostic required) because no specialization of the first overload of y would be well-formed (see [temp.res.general] p6).

It is not totally clear from the wording, but the first overload would be ill-formed despite the type adjustment, and both GCC and clang agree on this (see -pedantic-errors diagnostic triggering for char[0] parameters).

Even if the compiler supports zero-size arrays as an extension, this isn't allowed to affect valid overload resolution according to [intro.compliance.general] p8:

A conforming implementation may have extensions (including additional library functions), provided they do not alter the behavior of any well-formed program. Implementations are required to diagnose programs that use such extensions that are ill-formed according to this document. Having done so, however, they can compile and execute such programs.

Conclusion

Your program is IFNDR because no specialization of the first overload of y is valid. Since IFNDR imposes no requirements, each compiler's behavior, driven by its own extensions, is technically conforming.

However, if we assume that the first overload of y is valid, then it should not be a viable candidate during the call y<N>(), and should be removed from the overload set, even if zero-size arrays are supported as a compiler extension. Only clang implements this correctly.

Interaction of default arguments and overload resolution

In this section, let's assume that zero-size arrays were allowed. This is just for the sake of understanding the observed compiler behavior better.

Then, hypothetically, a call y<N>(0) is unambiguous, and all compilers agree and call the int overload. This is because int is an exact match for the argument 0, whereas binding 0 to a pointer parameter would require a pointer conversion, which ranks worse.

Overload resolution does not consider default arguments; see Are default argument conversions considered in overload resolution?. Thus hypothetically, both overloads of y are viable candidates for y<N>() and neither is a better match because neither is more specialized according to the rule of partial ordering of function templates. This is GCC's behavior.


Note: both GCC's and Clang's behavior can be explained, where Clang is more correct looking past the IFNDR issue. I am unable to explain ICC's behavior; it makes no sense.

Choriamb answered 20/9, 2023 at 12:14 Comment(6)
So the program is ill-formed because it has a function with a parameter of zero-size array type. But why then does the example at en.cppreference.com/w/cpp/language/sfinae work, despite also forcing the compiler to consider a zero-size array (in one of the functions, depending on the parity of n)? – Willywillynilly

@Willywillynilly because the array size is I % 2 == 0 in the cppreference example, which is not ill-formed for all I. The IFNDR issue is only relevant because in your case you have [0], and thus no valid template specialization can be generated. – Choriamb

What if I write I % 1 or I * 0? It always equals zero, but it will not make the program ill-formed, will it? – Willywillynilly

@Willywillynilly it would make the program ill-formed, no diagnostic required. The compiler isn't required to prove that int[I * 0] is always a zero-size array, but it is. Obviously, it's impossible to prove this for arbitrarily complex expressions, but the lack of diagnosability doesn't make it valid. – Choriamb

ICC is perfectly valid! The program is IFNDR, which permits doing anything at all. – Jeminah

@Jeminah yes, but not intentionally. I highly doubt that ICC formally proved that all instantiations of y are IFNDR and is therefore allowed to deviate from the usual overload resolution behavior here. ICC also picks the wrong overload when using [1] instead of [0], so it's just a compiler bug. – Choriamb

Good question. The discrepancy you're seeing is due to how compilers interpret the standard. The C++ Standard does state that arrays of zero size should result in SFINAE errors. However, this is somewhat open to interpretation by compilers.

  1. GCC: Strictest interpretation. Sees zero-size array as an error and doesn't allow SFINAE to kick in.

  2. Clang: Follows the standard more closely. Sees zero-size array, triggers SFINAE, and falls back to the int version.

  3. ICC: This is definitely unexpected behavior and probably not compliant with the standard. It shouldn't be choosing the zero-size array version.

In terms of the standard, Clang is the closest to what's expected. GCC is strict but not wrong per se, and ICC is most likely incorrect.

For portable code, it's better not to rely on this particular SFINAE mechanism, given the varying compiler behavior.

Hottempered answered 20/9, 2023 at 11:20 Comment(3)
SFINAE is "Substitution Failure Is Not An Error". How is GCC not wrong if it came across a substitution failure and reported an error? – Willywillynilly

@Willywillynilly Because SFINAE in this particular case is a misnomer? "Substitution Failure Is Not An Error" is not per se a standardese term. – Imago

This answer does not provide any reasoning for its claims. – Fax
