How to get the code page of the current keyboard layout?

My non-Unicode application needs to be able to process Unicode keyboard input (WM_CHAR etc.): it receives the 8-bit character code and then converts it to Unicode internally. 9x compatibility is required, so using most Unicode APIs is not an option.

Currently it looks at the language returned by PRIMARYLANGID(GetKeyboardLayout(0)), and looks up the relevant code page in a hard-coded table. I couldn't find a function to get the code page used by a particular language or keyboard layout. Converting a character/string can then be done with MultiByteToWideChar.
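
A minimal sketch of that kind of lookup (the table entries and helper names here are illustrative, not the application's actual code):

// Illustrative only: map the primary language of the active keyboard layout
// to an ANSI code page, then convert an 8-bit WM_CHAR character to UTF-16.
#include <windows.h>

static UINT CodePageForCurrentLayout(void)
{
    switch (PRIMARYLANGID(LOWORD(GetKeyboardLayout(0))))
    {
    case LANG_RUSSIAN: return 1251;  // Cyrillic
    case LANG_GREEK:   return 1253;
    case LANG_HEBREW:  return 1255;
    default:           return 1252;  // Western European fallback
    }
}

static WCHAR AnsiCharToUnicode(char ch)
{
    WCHAR wch = 0;
    MultiByteToWideChar(CodePageForCurrentLayout(), 0, &ch, 1, &wch, 1);
    return wch;
}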

Is there a way to get the current keyboard layout's code page? GetACP returns the default system code page, which isn't affected by the current keyboard layout.

Maidie answered 18/8, 2009 at 2:32 Comment(0)

Here's another way to do it:

// The low word of the keyboard layout handle is the input language identifier.
WORD languageID = LOWORD(GetKeyboardLayout(0));

// Ask for the default ANSI code page of that language's default locale.
char szLCData[6+1];
GetLocaleInfoA(MAKELCID(languageID, SORT_DEFAULT), LOCALE_IDEFAULTANSICODEPAGE,
               szLCData, _countof(szLCData));
int codepage = atoi(szLCData);
Getupandgo answered 23/8, 2009 at 4:8 Comment(0)

Although this is an old thread, I just spent most of this morning searching for a method for identifying the Windows codepage given a specific charset ID (when the current keyboard layout/locale is NOT set to that charset). I figured that the sample code might be of use to others looking for similar information.

In my case I wanted to map a charset value such as 161 (Greek) to the equivalent Windows code page, 1253. After a lot of digging I came up with the following:

/*
 * Convert a font charset value (e.g. 161 - Greek) into a Windows codepage (1253 for Greek)
 */

UINT CodepageFromCharset(UINT nCharset)
{
    CHARSETINFO csi = {0};

    // Note, the symbol charset (2, CS_SYMBOL) translates to the symbol codepage (42, CP_SYMBOL).
    // However, this codepage does NOT produce valid character translations so the ANSI charset
    // (ANSI_CHARSET) is used instead. This appears to be a known problem.
    // See this discussion: "More than you ever wanted to know about CP_SYMBOL"
    // (http://www.siao2.com/2005/11/08/490495.aspx)

    if (nCharset == SYMBOL_CHARSET) nCharset = ANSI_CHARSET;

    // For TCI_SRCCHARSET the charset value itself is passed in place of a pointer.
    DWORD* lpdw = (DWORD*)(DWORD_PTR)nCharset;

    // Non-zero return value indicates success...
    if (TranslateCharsetInfo(lpdw, &csi, TCI_SRCCHARSET) == 0)
    {
        // This should *not* happen but just in case make sure we use a valid default codepage.
    #ifdef _UNICODE
        csi.ciACP = 1200;   // the UTF-16 code page
    #else
        csi.ciACP = CP_ACP;
    #endif
    }

    return csi.ciACP;
}
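
For instance, a quick (hypothetical) check of the Greek example mentioned above could look like this:

// Illustrative only: GREEK_CHARSET is 161; on a system with the Greek
// code page tables installed this should yield 1253.
UINT cp = CodepageFromCharset(GREEK_CHARSET);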

Hope this is useful for others!

John

Nautch answered 24/1, 2011 at 13:6 Comment(0)

I've had a similar problem in an application that needed to run on Windows 9x. The solution I eventually came up with was to listen for WM_INPUTLANGCHANGE notification messages, which are sent to top-level windows when the user changes the input language. In my message procedure I have something like this:

case WM_INPUTLANGCHANGE:
  {
    // wParam is the character set of the new input locale.
    CHARSETINFO cs;
    if (TranslateCharsetInfo((DWORD*)wParam, &cs, TCI_SRCCHARSET))
      m_codePage = cs.ciACP;
    return DefWindowProc(hWnd, WM_INPUTLANGCHANGE, wParam, lParam);
  }

where m_codePage is a UINT that is initialized as

  m_codePage = CP_ACP;

I then use m_codePage in calls to MultiByteToWideChar() to handle characters from WM_CHAR etc., roughly as sketched below.
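
For example, the WM_CHAR handler's conversion might look something like this (a sketch only; the surrounding message handling is illustrative):

case WM_CHAR:
  {
    // Convert the 8-bit character using the code page of the active input language.
    char ch = (char)wParam;
    WCHAR wch = 0;
    MultiByteToWideChar(m_codePage, 0, &ch, 1, &wch, 1);
    // ... process the UTF-16 character wch ...
  }
  break;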

Westberry answered 18/8, 2009 at 11:58 Comment(1)
This method has a flaw: if the default keyboard layout doesn't correspond to the system code page (CP_ACP), then the code page will not be correct when the application starts. – Maidie
