I am revising some very old (10 years) C code. The code compiles on Unix/Mac with GCC and cross-compiles for Windows with MinGW. Currently there are TCHAR strings throughout. I'd like to get rid of the TCHAR and use a C++ string instead. Is it still necessary to use the Windows wide functions, or can I do everything now with Unicode and UTF-8?
Windows still uses UTF-16, and most likely always will, so you need to use std::wstring rather than std::string there. The Windows APIs don't offer direct support for UTF-8, largely because Windows supported Unicode before UTF-8 was invented. It is thus rather painful to write Unicode code that will compile on both Windows and Unix platforms.
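For example, here is a minimal sketch of std::wstring feeding a wide API call (MessageBoxW is a real wide entry point; the strings themselves are just illustrative):

    #include <string>
    #include <windows.h>

    int main() {
        // std::wstring holds UTF-16 code units on Windows, matching the W APIs.
        std::wstring message = L"café";  // non-ASCII text works out of the box
        MessageBoxW(nullptr, message.c_str(), L"Example", MB_OK);
        return 0;
    }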
Windows in practice uses a mixture of UCS-2 and UTF-16. Using characters outside the BMP is somewhat hit-or-miss. – Dacy
WideCharToMultiByte and MultiByteToWideChar only handle UCS-2 (returning the number of UTF-16 characters is useless for buffer allocation; see the sizing sketch after these comments). GetWindowTextLength is similarly broken, returning a number of characters (there's a footnote that alludes to multibyte character sets, but it states that this special behavior only occurs when mixing ANSI and Unicode). – Dacy
Only in UCS-2 are they the same. Look at the other parameter... it is specified in bytes and not characters. The author knows about variable-length encodings, and chose to provide for them on the "MultiByte" side and not on the "WideChar" side. – Dacy
One character can occupy multiple chars, in the case of UTF-8, or multiple wchar_ts, in the case of UTF-16. The assumption that one character is one wchar_t only holds for (1) a 32-bit wchar_t, which is not the case on Windows, or (2) UCS-2. This is using the word "character" the way it's used in the Unicode literature. When MS uses the word differently, they make a horrible mess, which only works out for UCS-2. – Dacy
wstring is available anywhere that has C++, because it's in the standard library, but it's useless on UNIX, since UNIX uses UTF-8. – Allative
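On the buffer-allocation point raised in the comments above: the conventional workaround is the two-call pattern, where the first call only computes the required size. A minimal sketch using the real MultiByteToWideChar API, with error handling omitted for brevity:

    #include <string>
    #include <windows.h>

    // Two-call pattern: the first call returns the required buffer size in
    // wchar_t code units (not "characters"); the second does the conversion.
    std::wstring utf8_to_wide(const std::string& utf8) {
        if (utf8.empty()) return std::wstring();
        int len = MultiByteToWideChar(CP_UTF8, 0, utf8.data(),
                                      (int)utf8.size(), nullptr, 0);
        std::wstring wide(len, L'\0');
        MultiByteToWideChar(CP_UTF8, 0, utf8.data(), (int)utf8.size(),
                            &wide[0], len);
        return wide;
    }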
Is it still necessary to use the Windows wide functions, or can I do everything now with Unicode and UTF-8?
Yes, it is. Unfortunately, Windows does not have native support for UTF-8. If you want proper Unicode support, you need to use the wchar_t versions of the Windows API functions, not the char versions.
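For instance, a sketch calling the wide version explicitly (CreateFileW is the real entry point; the path is just an example):

    #include <windows.h>

    HANDLE open_log() {
        // CreateFileW always takes wide strings, regardless of the UNICODE
        // macro that steers the TCHAR-based CreateFile alias.
        return CreateFileW(L"C:\\logs\\app.log", GENERIC_READ, FILE_SHARE_READ,
                           nullptr, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL,
                           nullptr);
    }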
Should I eliminate TCHAR from Windows code?
Yes, you should. The reason TCHAR exists is to support both the Unicode and non-Unicode versions of Windows. Non-Unicode support may have been a major concern back in 2001, when Windows 98 was still popular, but not today. And it's highly unlikely that any non-Windows-specific library would have the same kind of char/wchar_t overloading that makes TCHAR usable.
So go ahead and replace all your TCHARs with wchar_ts.
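To illustrate, a before-and-after sketch (the TCHAR lines show typical old-style code; the function name is made up):

    #include <string>
    #include <windows.h>

    // Before (TCHAR era): resolves to the A or W API via the UNICODE macro.
    //   TCHAR title[256];
    //   GetWindowText(hwnd, title, 256);
    //
    // After: commit to wchar_t and call the W version explicitly.
    std::wstring window_title(HWND hwnd) {
        wchar_t title[256];
        int n = GetWindowTextW(hwnd, title, 256);
        return std::wstring(title, n);
    }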
The code compiles on Unix/Mac with GCC and cross-compiles for Windows with MinGW.
I've had to write cross-platform C++ code before. (Now my job is writing cross-platform C# code.) Character encoding is rather painful when Windows doesn't support UTF-8 and Un*x doesn't support UTF-16. I ended up using UTF-8 as our main encoding and converting as necessary on Windows.
Yes, writing non-Unicode applications nowadays is shooting yourself in the foot. Just use the wide API everywhere, and you won't have to cry about it later. You can still use UTF-8 on UNIX and wchar_t on Windows if you don't need (network) communication between platforms (or convert the wchar_t strings to UTF-8 with the Win32 API). Or go the hard way: use UTF-8 everywhere and convert to wchar_t whenever you call a Win32 API function (that's what I do).
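For the UTF-8-everywhere approach, the conversion back out of the Win32 boundary mirrors the sketch above (WideCharToMultiByte is the real API; error handling again omitted for brevity):

    #include <string>
    #include <windows.h>

    // Convert UTF-16 results from W APIs back to the UTF-8 used internally.
    std::string wide_to_utf8(const std::wstring& wide) {
        if (wide.empty()) return std::string();
        int len = WideCharToMultiByte(CP_UTF8, 0, wide.data(), (int)wide.size(),
                                      nullptr, 0, nullptr, nullptr);
        std::string utf8(len, '\0');
        WideCharToMultiByte(CP_UTF8, 0, wide.data(), (int)wide.size(),
                            &utf8[0], len, nullptr, nullptr);
        return utf8;
    }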
To directly answer your question:
Is it still necessary to use the Windows wide functions, or can I do everything now with Unicode and UTF-8?
No, (non-ASCII) UTF-8 is not accepted by the vast majority of Windows API functions. You still have to use the wide APIs.
One could similarly bemoan that other OSes still have no support for wchar_t. So you also have to support UTF-8.
The other answers provide some good advice on how to manage this in a cross-platform codebase, but it sounds as if you already have an implementation supporting different character types. As desirable as ripping that out to simplify the code might sound, don't.
And I predict that someday, although probably not before the year 2020, Windows will add UTF-8 support, simply by adding U versions of all the API functions, alongside A and W, plus the same kind of linker hack. The 8-bit A functions are just a translation layer over the native W (UTF-16) functions. I bet they could generate a U-layer semi-automatically from the A-layer.
Once they've been teased enough, long enough, about their '20th century' Unicode support...
They'll still manage to make it awkward to write, ugly to read and non-portable by default, by using carefully chosen macros and default Visual Studio settings.
I've used TCHAR to get several smallish tools to compile under Windows, Linux, and Solaris, each using its native Unicode format (UTF-16 or UTF-8). But it does involve making your own tchar.h for the *nix platforms. – Disorient
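A hypothetical sketch of what such a tchar.h shim for the *nix side might look like (the specific mappings are assumptions; a real shim would cover whichever _t* names the tools actually use):

    /* portable tchar.h shim for non-Windows builds (hypothetical sketch) */
    #ifndef PORTABLE_TCHAR_H
    #define PORTABLE_TCHAR_H

    #ifdef _WIN32
      #include <tchar.h>          /* the real header: TCHAR, _T(), _tcslen, ... */
    #else
      #include <string.h>
      #include <stdio.h>
      typedef char TCHAR;         /* *nix side: native narrow/UTF-8 strings */
      #define _T(s)     s         /* string literals stay narrow */
      #define _tcslen   strlen    /* map only the _t* names the code uses */
      #define _tcscpy   strcpy
      #define _tcscmp   strcmp
      #define _tprintf  printf
    #endif

    #endif /* PORTABLE_TCHAR_H */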