So Qt is compiled with /Zc:wchar_t- on Windows. What this means is that instead of wchar_t being a distinct built-in type (the native __wchar_t), it becomes a typedef for unsigned short. The really fun part is that the MSVC default is the opposite, which of course means that the libraries you're using were likely compiled with a wchar_t that is a different type from Qt's wchar_t.
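To make the difference concrete, here's a tiny check I put together (my own sketch, not anything from Qt, and it assumes a compiler new enough to have static_assert); _NATIVE_WCHAR_T_DEFINED is the macro MSVC defines when the native type is in effect:

    // check_wchar.cpp -- compile once with /Zc:wchar_t and once with /Zc:wchar_t-
    #include <cstddef>      // pulls in the wchar_t typedef under /Zc:wchar_t-
    #include <type_traits>

    #ifdef _NATIVE_WCHAR_T_DEFINED
    // /Zc:wchar_t (the MSVC default): wchar_t is its own built-in type.
    static_assert(!std::is_same<wchar_t, unsigned short>::value,
                  "wchar_t is a distinct native type");
    #else
    // /Zc:wchar_t- (what Qt uses): wchar_t is literally unsigned short.
    static_assert(std::is_same<wchar_t, unsigned short>::value,
                  "wchar_t is unsigned short");
    #endif

    int main() {}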
This doesn't become an issue, of course, until you try to use something like std::wstring in your code, especially when one or more libraries have functions that accept it as a parameter. What effectively happens is that your code happily compiles but then fails to link, because it's looking for definitions mangled as std::basic_string<unsigned short, ...> while the libraries only contain definitions mangled as std::basic_string<wchar_t, ...> (or whatever).
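A stripped-down version of what bites us, with a made-up library function just for illustration:

    // somelib.h -- third-party header; the library binary was built with the
    // MSVC default /Zc:wchar_t, so its wchar_t is the native type.
    #include <string>
    void LogMessage(const std::wstring& msg);   // hypothetical function

    // ours.cpp -- our code, built with /Zc:wchar_t- to match Qt, so here
    // std::wstring is std::basic_string<unsigned short, ...>.
    #include "somelib.h"

    void report()
    {
        // Compiles fine against the header, but the linker wants
        // LogMessage(std::basic_string<unsigned short,...> const&) while the
        // .lib only exports LogMessage(std::basic_string<wchar_t,...> const&):
        // unresolved external symbol.
        LogMessage(L"something went wrong");
    }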
So I did some web searching and ran into this link: https://bugreports.qt.io/browse/QTBUG-6345
Based on the statement by Thiago Macieira, "Sorry, we will not support building Qt like this," I've been worried that fixing Qt to work like everything else might cause some problem, and I've been trying to avoid it. We recompiled all of our support libraries with the /Zc:wchar_t- flag and were fairly content with that until a couple of days ago, when we started trying to port over some serialization code (we're in the process of switching from Wx to Qt).
Because of how Win32 works, and because Wx just wraps Win32, we've been using std::wstring to represent string data, with the intent of making our product as i18n-ready as possible. We did some testing, and Wx did not handle non-ASCII characters when printing anything special (even something as tame as the degree symbol was an issue). I'm not so sure Qt has this problem, since QString isn't just a wrapper around the underlying _TCHAR type but a real Unicode string class (it stores UTF-16 internally).
At any rate, the Boost serialization library has separately compiled parts. We've attempted to recompile Boost with /Zc:wchar_t-, but so far our attempts to tell bjam to do this have gone unheeded. We're at an impasse.
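For reference, the sort of invocation we've been feeding bjam looks roughly like this (from memory, so treat the exact properties as a guess; getting this right is part of what I'm asking):

    bjam toolset=msvc variant=release link=static threading=multi ^
         cxxflags="/Zc:wchar_t-" --with-serialization stage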
From where I'm sitting I have three options:
1. Recompile Qt and hope it works with /Zc:wchar_t (the MSVC default). There's some evidence around the web that others have done this, but I have no way of predicting what will happen. All attempts to ask Qt people on forums and such have gone unanswered; hell, even in that very bug report someone asks why and it just sat there for a year.
2. Keep fighting with bjam until it listens. Right now I've got someone under me doing that, and I have more experience fighting with things to get what I want, but I have to admit I'm getting rather tired of it. I'm also concerned that I'll KEEP running into this issue just because Qt wants to be a c**t.
3. Stop using wchar_t for anything. Unfortunately my i18n experience is pretty much zero, but it seems to me that I just need to find the right to/from functions in QString (it has a BUNCH) to encode the Unicode data into 8-bit characters and vice versa; see the sketch after this list. The UTF-8 functions look promising, but I really want to be sure that no data gets lost if someone from somewhere with a more symbolic language starts writing in their own script, and the QString documentation frightens me a little into thinking that could happen. Of course, I could always run into some library that insists I use wchar_t, and then I'm back to 1 or 2, but I rather doubt that will happen.
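To show what I mean for option 3, here's the kind of round trip I'm picturing (a sketch I threw together, not code we actually run):

    #include <QString>
    #include <QByteArray>
    #include <string>
    #include <cassert>

    // Push a QString through UTF-8 held in a std::string, which is roughly
    // what our serialization layer would do if we dropped wchar_t entirely.
    int main()
    {
        // Degree sign plus some CJK -- the kind of text that tripped up Wx.
        QString original = QString::fromUtf8("90\xC2\xB0 \xE6\xB8\xA9\xE5\xBA\xA6");

        // QString -> UTF-8 bytes -> std::string (no wchar_t anywhere).
        QByteArray utf8 = original.toUtf8();
        std::string stored(utf8.constData(), utf8.size());

        // ...hand 'stored' to boost serialization, write it out, read it back...

        // std::string -> QString again; nothing should be lost.
        QString restored = QString::fromUtf8(stored.data(), int(stored.size()));
        assert(restored == original);
        return 0;
    }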
So, what's my question...
Which of these options is my best bet? Is Qt going to eventually cause me to gouge out my own eyes because I decided to compile it with /Zc:wchar_t anyway?
What's the magic incantation to get boost to build with /Zc:wchar_t- and will THAT cause permanent mental damage?
Can I get away with just using the standard 8-bit (well, 'common' anyway) character classes and be i18n compliant/ready?
How do other Qt developers deal with this mess?