😃 (and other Unicode characters) in identifiers not allowed by g++

I am 😞 to find that I cannot use 😃 as a valid identifier with g++ 4.7, even with the -fextended-identifiers option enabled:

int main(int argc, const char* argv[])
{
  const char* 😃 = "I'm very happy";
  return 0;
}

main.cpp:3:3: error: stray ‘\360’ in program
main.cpp:3:3: error: stray ‘\237’ in program
main.cpp:3:3: error: stray ‘\230’ in program
main.cpp:3:3: error: stray ‘\203’ in program

After some googling, I discovered that UTF-8 characters are not yet supported in identifiers, but a universal-character-name should work. So I convert my source to:

int main(int argc, const char* argv[])
{
  const char* \U0001F603 = "I'm very happy";
  return 0;
}

main.cpp:3:15: error: universal character \U0001F603 is not valid in an identifier

So apparently 😃 isn't a valid identifier character. However, the standard specifically allows characters in the range 10000-1FFFD in Annex E.1 and doesn't disallow them as initial characters in E.2.

My next effort was to see if any other allowed Unicode characters worked, but none that I tried did. Not even the ever-important PILE OF POO (💩) character.

So, for the sake of meaningful and descriptive variable names, what gives? Does -fextended-identifiers do as it advertises or not? Is it only supported in the very latest build? And what kind of support do other compilers have?

Comintern asked 2/10, 2012 at 14:17 Comment(6)
Read this. – Harp
@Harp Unfortunately that page doesn't mention that an identifier can contain a universal-character-name, so whatever advice they give on naming conventions doesn't take into account the importance of using smiley faces as variable names. See §2.11 of ISO/IEC 14882:2011(E). – Comintern
Hmm, it seems the program static const char* x😃 = "I'm very happy"; crashes clang 3.1... – Forejudge
See this example. – Harp
clang has supported this since 3.3 with no special options, but gcc 4.8.1 still doesn't. Related: #26660680 – Fasciation
You can, but you need C++11, and your source should be encoded as Unicode. – Ardellearden

As of 4.8, GCC does not support characters outside the BMP in identifiers. This seems to be an unnecessary restriction. Also, GCC only supports the very restricted set of characters described in ucnid.tab, based on C99 and C++98 (it has not been updated to C11 and C++11 yet, it seems).

As described in the manual, -fextended-identifiers is experimental, so there is a good chance it won't work as expected.
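That said, BMP characters from the restricted ucnid.tab set do compile in this era of GCC when spelled as UCNs. A minimal sketch (assuming GCC 4.8 invoked as g++ -std=c++11 -fextended-identifiers main.cpp; the identifier caf\u00e9 is just an illustration):

int main()
{
  // U+00E9 (é) is in the C99 Annex D set, so its UCN spelling is
  // accepted with -fextended-identifiers; U+1F603 is outside the BMP
  // and is rejected by GCC <= 4.8.
  const char* caf\u00e9 = "coffee";
  return 0;
}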


GCC has supported the C11 character set since 4.9.0 (SVN r204886, to be precise), so the OP's second piece of code using \U0001F603 does work. I still can't get the actual code using 😃 to work, even with -finput-charset=UTF-8, with GCC 8.2 on https://gcc.godbolt.org (you may want to follow this bug report, provided by @DanielWolf).

Meanwhile, both pieces of code work on Clang 3.3 without any options other than -std=c++11.
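Side by side, the state of affairs as of these versions (a sketch; treat the comments as the claims made above, not an exhaustive compatibility table):

int main()
{
  // UCN spelling: accepted by GCC >= 4.9 and Clang >= 3.3 with -std=c++11.
  const char* \U0001F603 = "I'm very happy";

  // Raw UTF-8 spelling: accepted by Clang >= 3.3, but still rejected by
  // GCC at least through 8.2, even with -finput-charset=UTF-8.
  // const char* 😃 = "I'm very happy";

  return 0;
}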

Forejudge answered 2/10, 2012 at 15:24 Comment(1)
How about main.cpp:3:15: error: universal character \u00a8 is not valid in an identifier? This is with 4.7, though. – Comintern

This was a known bug in GCC 9 and earlier; it has been fixed in GCC 10.

The official changelog for GCC 10 contains this section:

Extended characters in identifiers may now be specified directly in the input encoding (UTF-8, by default), in addition to the UCN syntax (\uNNNN or \UNNNNNNNN) that is already supported:

static const int π = 3;
int get_naïve_pi() {
  return π;
}
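That means the original program from the question should compile as-is (a sketch, assuming GCC 10 in a C++11-or-later mode, with the source saved as UTF-8, GCC's default input encoding; note that later standard revisions changed the allowed identifier set, so emoji specifically may be rejected again by newer compilers):

// g++ -std=c++11 main.cpp  -- no -fextended-identifiers needed anymore
int main(int argc, const char* argv[])
{
  const char* 😃 = "I'm very happy";
  return 0;
}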
Photocomposition answered 10/2, 2017 at 11:47 Comment(2)
There isn't a GCC 10 (10.0), only GCC 10.1(?). – Kovrov
@PeterMortensen I'm assuming that GCC follows some form of SemVer, so a statement about features in GCC 10 should hold for any 10.x version. – Photocomposition

However, the standard specifically allows characters in the range 10000-1FFFD in Annex E.1 and doesn't disallow them as initial characters in E.2.

One thing to keep in mind is that just because the C++ standard allows (or disallows) some feature, does not necessarily mean that your compiler supports (or doesn't support) that feature.

Hugohugon answered 2/10, 2012 at 15:31 Comment(3)
Yes, allowing the full set of Unicode characters specified by the standard is something that, as far as I know, no compiler supports yet, either literally or with UCNs. – Rum
Of course! I only meant to find some documentation or source that shows they don't support this feature. – Comintern
@sftrabbit Okay, maybe my answer is pointing out the obvious. KennyTM gave the link re: gcc. – Hugohugon
