Unicode identifiers and source code in C++11

I find this in the new C++ Standard:

2.11 Identifiers                  [lex.name]
identifier:
    identifier-nondigit
    identifier identifier-nondigit
    identifier digit
identifier-nondigit:
    nondigit
    universal-character-name
    other implementation-defined character

with the additional text

An identifier is an arbitrarily long sequence of letters and digits. Each universal-character-name in an identifier shall designate a character whose encoding in ISO 10646 falls into one of the ranges specified in E.1. [...]

I cannot quite comprehend what this means. From the old standard I am used to a "universal character name" being written as \u89ab, for example. But using those in an identifier...? Really?

Is the new standard more open with respect to Unicode? And I am not referring to the new literal types like u"Hello \u89ab thing", I think I understood those. But:

  • Can (portable) source code be in any Unicode encoding, like UTF-8, UTF-16 or any (how-ever-defined) codepage?

  • Can I write an identifier with \u1234 in it, like myfu\u1234ntion (for whatever purpose)?

  • Or can I use the "character names" that Unicode defines, as in ICU, i.e.

     const auto x = "German Braunb\U{LOWERCASE LETTER A WITH DIARESIS}r."u32;
    

    or even in an identifier in the source itself? That would be a treat... cough...

I think the answer to all these questions is no, but I cannot map this reliably to the wording in the standard... :-)

I found "2.2 Phases of translation [lex.phases]", Phase 1:

Physical source file characters are mapped, in an implementation-defined manner, to the basic source character set [...] if necessary. The set of physical source file characters accepted is implementation-defined. [...] Any source file character not in the basic source character set (2.3) is replaced by the universal-character-name that designates that character. (An implementation may use any internal encoding, so long as an actual extended character encountered in the source file, and the same extended character expressed in the source file as a universal-character-name (i.e., using the \uXXXX notation), are handled equivalently except where this replacement is reverted in a raw string literal.)

By reading this, I now think that a compiler may choose to accept UTF-8, UTF-16 or any codepage it wishes (by meta information or user configuration). In Phase 1 it translates this into an ASCII form (the "basic source character set"), in which the Unicode characters are replaced by their \uNNNN notation (or the compiler may choose to keep working in its Unicode representation, but then it has to handle explicitly written \uNNNN sequences the same way).
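
To make this concrete: if my reading is right, then on a compiler that accepts UTF-8 source these two spellings should name the very same variable (an untested sketch on my part, with the identifier written once literally and once as a UCN):

int main()
{
    long pörk = 1;          // extended character written literally
    long same = p\u00F6rk;  // the same identifier, spelled as a UCN
    return (int)same;       // returns 1
}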

What do you think?

Tandie answered 15/4, 2011 at 12:49 Comment(1)
Also see https://mcmap.net/q/261133/-g-unicode-variable-name – Hunterhunting

Is the new standard more open with respect to Unicode?

With respect to allowing universal character names in identifiers the answer is no: UCNs were already allowed in identifiers back in C99 and C++98. However, compilers did not implement that particular requirement until recently. Clang 3.3, I think, introduces support for this, and GCC has had an experimental feature for it for some time. Herb Sutter also mentioned during his Build 2013 talk "The Future of C++" that the feature would be coming to VC++ at some point. (Although IIRC Herb referred to it as a C++11 feature; it is in fact a C++98 feature.)

It's not expected that identifiers will be written using UCNs. Instead the expected behavior is to write the desired character using the source encoding. E.g., source will look like:

long pörk;

not:

long p\u00F6rk;

However, UCNs are also useful for another purpose: compilers are not all required to accept the same source encodings, but modern compilers all support some encoding scheme where at least the basic source characters have the same encoding (that is, modern compilers all support some ASCII-compatible encoding).

UCNs allow you to write source code with only the basic characters and yet still name extended characters. This is useful in, for example, writing a string literal "°" in source code that will be compiled both as CP1252 and as UTF-8:

char const *degree_sign = "\u00b0";

This string literal is encoded into the appropriate execution encoding on multiple compilers, even when the source encodings differ, as long as the compilers at least share the same encoding for basic characters.
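
For instance, a minimal sketch: the file below contains only basic source characters, so it means the same thing whether the compiler reads it as CP1252 or as UTF-8; only the execution encoding of the string differs.

#include <cstdio>

char const *degree_sign = "\u00b0";

int main()
{
    std::printf("45%s\n", degree_sign);  // writes the degree sign in the execution encoding
    return 0;
}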

Can (portable) source code be in any Unicode encoding, like UTF-8, UTF-16 or any (how-ever-defined) codepage?

It's not required by the standard, but most compilers will accept UTF-8 source. Clang supports only UTF-8 source (although it has some compatibility for non-UTF-8 data in character and string literals), gcc allows the source encoding to be specified and includes support for UTF-8, and VC++ will guess at the encoding and can be made to guess UTF-8.

(Update: Visual Studio 2015 now provides an option to force the source and execution character sets to be UTF-8.)
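
(For illustration, a hedged sketch of the usual switches; check your compiler's documentation for the exact spellings. Clang needs no flag, since it always reads UTF-8.)

g++ -finput-charset=UTF-8 test.cpp   # tell GCC the source encoding
cl /utf-8 test.cpp                   # VC++ 2015: UTF-8 source and execution sets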

Can I write an identifier with \u1234 in it, like myfu\u1234ntion (for whatever purpose)?

Yes, the specification mandates this, although as I said not all compilers implement this requirement yet.
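
A minimal sketch of what that mandate permits, assuming a compiler that actually implements UCNs in identifiers (a recent Clang, for example):

#include <iostream>

int main()
{
    int myfu\u1234ntion = 42;              // \u1234 is U+1234, an Ethiopic syllable
    std::cout << myfu\u1234ntion << '\n';  // prints 42
    return 0;
}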

Or can I use the "character names" that Unicode defines, as in ICU, i.e.

const auto x = "German Braunb\U{LOWERCASE LETTER A WITH DIARESIS}r."u32;

No, you cannot use Unicode long names.

or even in an identifier in the source itself? That would be a treat... cough...

If the compiler supports a source encoding that contains the extended character you want, then that character written literally in the source must be treated exactly the same as the equivalent UCN. So yes, if you use a compiler that supports this requirement of the C++ specification, you may write any character in its source character set directly in the source without bothering with UCNs.
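
As a hedged sketch of that equivalence (assuming UTF-8 source and a compiler implementing this rule):

int \u00E5ngstr\u00F6m = 10;  // identifier declared using UCNs only
int x = ångström;             // the same identifier written literally; x == 10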

Ledge answered 2/7, 2013 at 17:19 Comment(5)
I hope you will get a "Late Answer Accepted" badge for this. The "if the compiler supports a source code encoding" part seems to be the tricky one. As I understand you, compilers are not required to support any of these lovely encodings, only to understand UCNs -- and not all compilers currently do even that (and are therefore not standard-compliant in this respect). – Tandie
This means that for portable code, assuming all involved compilers are compliant (and in theory), I have to write my source in ASCII, using UCNs for Unicode characters, right? Relying on any particular source code encoding is not "portable". – Tandie
@Tandie The spec doesn't even require ASCII; a compliant compiler could support only, say, EBCDIC, and so no source code is truly portable unless we count manual encoding conversion. Also, not all ASCII characters are in the basic source character set; you would have to avoid the characters '$', '`' and '@' (except, of course, as UCNs). – Ledge
@Ledge "Yes, the specification mandates this, although as I said not all compilers implement this requirement yet." Where specifically (in the most recent draft) does it mandate this? – Dd
@Dd Look at [lex.name], § 5.10 Identifiers, in N4835. The grammar specifies identifiers to include UCNs, and Table 2 specifies what characters are allowed, including high codepoints such as would be written \uXXXX; in fact \u1234 specifically is allowed in identifiers. It's an Ethiopic character, ሴ, apparently. – Ledge

I think the intent is to allow Unicode characters in identifiers, such as:

long pöjk;
ostream* å;
Ruperto answered 15/4, 2011 at 13:12 Comment(2)
I did not downvote, but I don't think your answer is quite correct. By now I found "2.2.(1) Phases of translation": Physical source file characters [e.g. Unicode] are mapped, in an implementation-defined manner, to the basic source character set. [...] The set of physical source file characters accepted is implementation-defined. [...] Any source file character not in the basic source character set (2.3) is replaced by the universal-character-name that designates that character. [...] Thus I now believe that \u1234 is the intended notation for an identifier in ASCII form after Phase 1. – Tandie
@Tandie The reason it's specified that way is because universal-character-names, i.e. \uXXXX and \UXXXXXXXX, are the only way that the grammar refers to characters outside the basic character set. The 'as-if' rule allows the compiler to avoid actually converting extended characters to UCNs, and the spec actually says this explicitly: "An implementation may use any internal encoding, so long as an actual extended character encountered in the source file, and the same extended character expressed in the source file as a universal-character-name [...] are handled equivalently [...]". – Ledge

I suggest using clang++ instead of g++. Clang is designed to be highly compatible with GCC (see the Wikipedia article), so you can most likely just substitute the command.

I wanted to use Greek symbols in my source code. If code readability is the goal, it seems reasonable to use (for example) α rather than alpha, especially in larger mathematical formulas, where the symbols are easier to read.

To achieve this, here is a minimal working example:

File /tmp/test.cpp

#include <iostream>

int main()
{
    int α = 10;
    std::cout << "α = " << α << std::endl;
    return 0;
}

Compile and run

clang++ /tmp/test.cpp -o /tmp/test
/tmp/test

Output:

α = 10
Merrimerriam answered 25/9, 2016 at 15:41 Comment(0)

The article PRE30-C, "Do not create a universal character name through concatenation", works from the idea that int \u0401; is compliant code, though it is based on C99 rather than C++0x.
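
For context, the pattern the rule forbids looks roughly like this (adapted loosely from the CERT article, so treat it as a sketch): using a UCN directly in an identifier is fine, while forming one via token pasting is not.

#define assign(uc1, uc2, val) uc1##uc2 = val

int \u0401;            /* compliant: a UCN used directly in an identifier */
/* assign(\u04, 01, 4);   violation: ## would form \u0401 from two pieces */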

Necktie answered 2/8, 2011 at 16:49 Comment(2)
Very good point. There is a C++ rule, too: securecoding.cert.org/confluence/display/cplusplus/…. I agree that I can write identifiers using the \u... notation, yes. But the file itself is then ASCII, and has been produced by an early, implementation-defined step that maps an encoded file to this ASCII representation. Do you agree? – Tandie
I think usually the file is ASCII. "Physical source file characters (ASCII) are mapped, in an implementation-defined manner, to the basic source character set (some variant of Unicode)..." – Necktie

Present versions of GCC (up to version 5.2 so far) only support ASCII and, in some cases, EBCDIC input files. Therefore, Unicode characters in identifiers have to be represented using \uXXXX and \UXXXXXXXX escape sequences in ASCII-encoded files. While it may be possible to represent Unicode characters as ??/uXXXX and ??/UXXXXXXXX in EBCDIC-encoded input files, I have not tested this. At any rate, a simple one-line patch to cpp allows direct reading of UTF-8 input, provided a recent version of iconv is installed. Details are in UTF-8 Identifiers in GCC.

The change may be summarized by the following patch:

diff -cNr gcc-5.2.0/libcpp/charset.c gcc-5.2.0-ejo/libcpp/charset.c

Output:

*** gcc-5.2.0/libcpp/charset.c  Mon Jan  5 04:33:28 2015
--- gcc-5.2.0-ejo/libcpp/charset.c  Wed Aug 12 14:34:23 2015
***************
*** 1711,1717 ****
    struct _cpp_strbuf to;
    unsigned char *buffer;

!   input_cset = init_iconv_desc (pfile, SOURCE_CHARSET, input_charset);
    if (input_cset.func == convert_no_conversion)
      {
        to.text = input;
--- 1711,1717 ----
    struct _cpp_strbuf to;
    unsigned char *buffer;

!   input_cset = init_iconv_desc (pfile, "C99", input_charset);
    if (input_cset.func == convert_no_conversion)
      {
        to.text = input;
Charley answered 15/8, 2015 at 0:20 Comment(1)
UCS characters in EBCDIC seem interesting. Thanks for the reference. – Kessel
