New UTF-8 features in Windows 10 1903

UTF-8 everywhere just got a whole lot easier in Windows 10 1903.
Two big additions:
1) A per-process UTF-8 code page. Previously this was only available as a Beta feature under Control Panel > Region > Administrative, and it affected all processes.
This is the new ActiveCodePage manifest value (it works for regular Win32 x86 and x64 apps).
This means you can make a UTF-8 app, with a UTF-8 code page, that doesn’t rely on any system locale setting. That, along with the C Runtime library changes for the UTF-8 locale in recent Windows 10 SDKs, will help you write apps entirely in UTF-8.
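To opt in, you add the new activeCodePage element to your application manifest. Here is a minimal sketch based on Microsoft’s documentation (verify the namespace against the current docs):

<?xml version="1.0" encoding="utf-8"?>
<assembly manifestVersion="1.0" xmlns="urn:schemas-microsoft-com:asm.v1">
  <application>
    <windowsSettings>
      <activeCodePage xmlns="http://schemas.microsoft.com/SMI/2019/WindowsSettings">UTF-8</activeCodePage>
    </windowsSettings>
  </application>
</assembly>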
2) Querying the charset of the UTF-8 code page returns something new under Windows 10 1903!
BYTE GetUTF8Charset()
{
    // Ask GDI which charset corresponds to the UTF-8 code page (65001).
    // On Windows 10 1903+ this yields the special (undocumented) value 254;
    // on older systems the translation fails and we fall back to DEFAULT_CHARSET.
    CHARSETINFO cs = {0};
    DWORD dwCodePage = 65001; // CP_UTF8
    if (TranslateCharsetInfo((DWORD *)(ULONG_PTR)dwCodePage, &cs, TCI_SRCCODEPAGE))
    {
        return (BYTE)cs.ciCharset;
    }
    return DEFAULT_CHARSET;
}
Check the value of cs.ciCharset: it is a special value, 254. This is undocumented and behaves differently from DEFAULT_CHARSET.
If you use this charset in a font (lfCharSet) and select that font into a DC, GDI will use the UTF-8 code page to convert from A to W. That means you can call functions such as ExtTextOutA and DrawTextA with UTF-8 strings, and they will convert correctly to UTF-16. The length you pass in should be in bytes.
Note: TranslateCharsetInfo only returns 254 on Windows 10 1903 or higher, and this behavior is not even tied to the per-process UTF-8 code page in (1). It can be used any time.
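To illustrate, here is a sketch of my own (not from any official sample) of drawing a UTF-8 string through the A APIs; the font face and size are arbitrary choices:

// Draw a UTF-8 string via GDI's "A" entry points by selecting a font whose
// lfCharSet is the special UTF-8 charset (254 on Windows 10 1903+).
void DrawUtf8Text(HDC hdc, const char* utf8, int cbBytes)
{
    LOGFONTA lf = {0};
    lf.lfHeight = -16;
    lf.lfCharSet = GetUTF8Charset(); // 254 on 1903+, else DEFAULT_CHARSET
    strcpy_s(lf.lfFaceName, "Segoe UI");

    HFONT hFont = CreateFontIndirectA(&lf);
    HFONT hOld = (HFONT)SelectObject(hdc, hFont);

    // Length is in bytes, not characters; GDI converts UTF-8 to UTF-16 for us.
    ExtTextOutA(hdc, 10, 10, 0, NULL, utf8, cbBytes, NULL);

    SelectObject(hdc, hOld);
    DeleteObject(hFont);
}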

New VS2015 Update 3 Runtime breaks MFC apps built with VS 2015 Update 2

Update: 07/21/2016: This issue is fixed! As noted on the connect bug in a comment made by James McNelis, “this is fixed by the update to Visual Studio 2015 Update 3 that was made available on July 20, 2016. We would advise all of our developer customers to move forward to this update.”

An earlier commenter noted, “This issue has been fixed by KB3165756 version 14.0.25424.00, released on 07/20/2016. The member m_bIsDragged has been removed from CMFCToolBarButton. The fix contains the new VC Runtime 14.0.24212.0. When you install this on a client machine MFC apps built with VS 2015 Update 2 will run without issues.” The comment further asks when the new runtime would be deployed via Windows Update. James replied that there are no plans to do so.

So if you deploy apps built with VS 2015 Update 3, please ensure you use the 14.0.24212 runtime included with this July 20th update (the previous one was 14.0.24210).

How do you get this patch? If you’ve already installed Update 3, do not run the Update 3 installer again; it won’t detect that anything needs installing. You must use the patch located here instead:

https://msdn.microsoft.com/en-us/library/mt752379.aspx

As noted on the above page, the fix for this issue is included in the July 20th update.

Original blog post (written before the fix was made available):

The following is courtesy of a connect bug that was recently filed.

https://connect.microsoft.com/VisualStudio/feedback/details/2892501/new-vc-runtime-14-0-24210-0-breaks-mfc-app-built-with-vs-2015-update-2

There is also an MSDN forum thread about this:

https://social.msdn.microsoft.com/Forums/vstudio/en-US/5e565499-b855-4300-83cd-46be2a126519/app-compiled-with-redistributable-140239180-crashes-on-machine-updated-to-redistributable?forum=vcgeneral

Credit for the below goes to the original poster; none of this is mine (the text has been copied verbatim from the connect bug). I’m just passing the info forward to show that binary breaks (due to DLL Hell) still occur after all these years.

When you build an MFC app with Visual Studio 2015 Update 2 that creates a temporary CMFCToolBarButton object on the stack, and run it on a machine with VC Runtime 14.0.24210.0 (which comes with VS 2015 Update 3), the app is broken.
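A hypothetical minimal repro, based on the bug description (the variable name matches the error message quoted below):

// Hypothetical repro sketch. Built against Update 2 headers, the compiler
// reserves sizeof(CMFCToolBarButton) (Update 2 layout) on the stack; the
// Update 3 MFC DLL's constructor then initializes the new m_bIsDragged
// member, writing past the end of the reserved space.
void CMainFrame::SomeHandler()
{
    CMFCToolBarButton ToolbarButton(ID_FILE_NEW, 0); // temporary, on the stack
    // ... use ToolbarButton ...
}   // in a Debug build, the stack-cookie check fires here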

In a Debug build you get this error:
“Run-Time Check Failure #2 – Stack around the variable ‘ToolbarButton’ was corrupted”

In a Release build, the behavior depends on which part of the stack gets overwritten. In my case the app doesn’t start at all.

The problem is caused by the new BOOL member m_bIsDragged in class CMFCToolBarButton, so the class’s memory layout differs between Update 2 and Update 3. When the Update 3 runtime initializes m_bIsDragged in the constructor, the (stack) memory behind the Update 2-sized ToolBarButton object is overwritten.

My thoughts: this type of bug is difficult to fix because some people may have already taken a dependency on the new MFC that shipped with VS 2015 Update 3. What happens if they revert to the Update 2 header signature for existing apps (i.e. get rid of m_bIsDragged)? To really solve this properly, they would have to make MFC cope with both layouts dynamically: somehow detect at runtime which MFC version the app was built with, and adjust accordingly. I don’t think that’s going to be easy. Alternatively, they could sacrifice the few for the many (and just backtrack to the Update 2 definition), or pretend this never happened and force all apps to upgrade to the new signatures (the worst solution).

The fact that this problem occurred at all tells me that binary compatibility is not being checked actively. Adding members to classes in the MFC headers is a big no-no.

That is kind of scary to me: you could have a critical app that is shipping, and it just breaks because new DLLs were provided by some third party, or even by Windows Update (for security updates). Again, the only way to avoid this is using app-local DLLs, but that hurts security and is error prone, and it doesn’t help the people who have already shipped, expecting MFC to preserve binary compatibility between updates.


How do I service the Universal CRT if a bug is encountered?

Recently a serious bug in the Universal CRT was discovered that breaks all MFC apps due to MFC’s use of _sntscanf_s in its DDX_Text routine for doubles.

https://connect.microsoft.com/VisualStudio/feedback/details/1773279/bug-in-sntscanf-s
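The failing pattern boils down to something like the following sketch (my own simplification; the exact buffer size and format string MFC uses may differ):

// Simplified sketch of the kind of call MFC's DDX_Text makes for a double.
// With the buggy ucrtbase.dll, the parse fails for doubles.
TCHAR szBuffer[32] = _T("1.25");
double value = 0.0;
int nFields = _sntscanf_s(szBuffer, _countof(szBuffer), _T("%lf"), &value);
// nFields should be 1, with value == 1.25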

This raises an interesting point. The bug is in the Universal CRT. You can no longer just grab a new vcredist_x86.exe or a new runtime DLL and plop it into your app folder (along with an app-local MFC, of course). You now have to worry about the fact that this bug is in a system component, ucrtbase.dll. This is due to the “great refactoring” of the CRT:

http://blogs.msdn.com/b/vcblog/archive/2014/06/10/the-great-crt-refactoring.aspx

So then, how do we service ucrtbase.dll?  Do we just wait for it to show up in Windows Update?  Get the Universal CRT SDK and build a redist?

One possible answer lies here:

http://stackoverflow.com/questions/31811597/visual-c-2015-redistributable-dlls-for-app-local-deployment

The answer is: app-local distribution (in the same folder as your app). Microsoft originally prohibited the Universal CRT from app-local distribution, but then changed their minds. This is still a problem for apps that have multiple folders.

Note: in order to do this app-local distribution properly, you cannot simply include ucrtbase.dll. You have to include a series of 23 other files, named api-ms-win-*.dll, a list of which can be found here. Ugly, but it works.

But, according to Microsoft, from the second comment on this blog post:

http://blogs.msdn.com/b/vcblog/archive/2015/07/20/visual-studio-2015-rtm-now-available.aspx

“On Windows 10, the real Universal CRT in the system directory will always be used, even if you have the Universal CRT DLLs included app-locally”

So on Windows 10, how do I fix a bug in ucrtbase.dll? Do I have to wait for Windows Update to service it? It seems so. In other words, it’s not possible to ship an app that is totally self-contained and has all the bug fixes.

Can we call this DLL Hell 3.0?

How to target XP with VC2012 or VC2013 and continue to use the Windows 8.x SDK

One of the limitations of the Microsoft-provided solution for targeting XP with Visual Studio 2012 (Update 1 and above) or Visual Studio 2013 is that you must use a special “platform toolset” in project properties, which forces usage of the Windows SDK 7.1 (instead of the Windows 8.x SDK, the default). The other thing the platform toolset does is set the linker’s “Minimum Required Version” setting to 5.01 (instead of 6, the default). But that part can just as easily be done manually in project properties.

So what about the first main function of the platform toolset? Setting the platform toolset to one that targets XP does the following:

(1) Changes the Platform SDK being used from Windows SDK 8.x (8.1 with VC2013 and 8.0 with VC2012) back to Windows SDK 7.1

(2) Adds a preprocessor define, _USING_V110_SDK71_, to the build

The second one turns out to be important, due to a piece of code in atlwinverapi.h, namely the following:


extern inline BOOL __cdecl _AtlInitializeCriticalSectionEx(__out LPCRITICAL_SECTION lpCriticalSection, __in DWORD dwSpinCount, __in DWORD Flags)
{
#if (NTDDI_VERSION >= NTDDI_VISTA) && !defined(_USING_V110_SDK71_) && !defined(_ATL_XP_TARGETING)
     // InitializeCriticalSectionEx is available in Vista or later, desktop or store apps
     return ::InitializeCriticalSectionEx(lpCriticalSection, dwSpinCount, Flags);
#else
     UNREFERENCED_PARAMETER(Flags);
     // ...otherwise fall back to using InitializeCriticalSectionAndSpinCount.
     return ::InitializeCriticalSectionAndSpinCount(lpCriticalSection, dwSpinCount);
#endif
}

As you can see, if we do not use the platform toolset that defines _USING_V110_SDK71_ (i.e. we don’t use the Windows SDK 7.1), then we don’t get the fallback and end up calling InitializeCriticalSectionEx, a function only available on Vista and above. This will cause your binary to fail to load on XP.

But what if we really want to use the Windows 8.x SDK (taking care, of course, not to call any Windows 8.x functions directly, to keep support for older operating systems)? Why would we want this? For example, we may want a structure definition, a preprocessor define, or a function declaration; i.e. we may want to support some feature of Windows 8.x when our app actually runs on that OS.
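For instance, here is a sketch (my own illustrative pattern, not from the original post) of the usual runtime-detection fallback, which is essentially what the #else branch of atlwinverapi.h above does statically:

// Bind to InitializeCriticalSectionEx at runtime so there is no static import
// of the Vista-only function; fall back to the XP-era API when it's absent.
typedef BOOL (WINAPI *PFN_ICSEX)(LPCRITICAL_SECTION, DWORD, DWORD);

BOOL InitCritSecCompat(LPCRITICAL_SECTION pcs, DWORD dwSpin, DWORD dwFlags)
{
    PFN_ICSEX pfn = (PFN_ICSEX)GetProcAddress(
        GetModuleHandleW(L"kernel32.dll"), "InitializeCriticalSectionEx");
    if (pfn)
        return pfn(pcs, dwSpin, dwFlags);                        // Vista and later
    return ::InitializeCriticalSectionAndSpinCount(pcs, dwSpin); // XP fallback
}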

Say we’ve decided to use the Windows 8.x SDK while still allowing our app to run on XP. Are there any options available? That is, can you keep using the v110/v120 toolsets instead of the v110_xp/v120_xp toolsets? Yes, it turns out that Microsoft left a nice loophole in the code to do exactly that. Notice the mysterious define in the block of code above named _ATL_XP_TARGETING. It turns out this is an alternative way to support XP targeting while _USING_V110_SDK71_ is NOT defined. So if you really want to support XP while using the Windows 8.x SDK, you simply need to ensure your code is built with _ATL_XP_TARGETING defined. The easiest way to do this is to add /D_ATL_XP_TARGETING to the C/C++ command-line options in project properties.

Then the only other step is to set “Minimum Required Version” to 5.01 in project properties under Linker > System, and we’re all set: a simple way to target XP and still use the Windows 8.1 SDK without the platform toolset that Microsoft provided for XP targeting.

In summary, the _ATL_XP_TARGETING define, while undocumented, is an interesting way to keep supporting XP while continuing to use the Windows 8.x SDK (rather than being stuck permanently on the older Windows SDK 7.1).

Was forbidding desktop applications on Windows RT the correct move?

Every day, someone at work bugs me about the news reports on how the Surface RT was not the success Microsoft had hoped it would be (Asus reached the same conclusion). I tell them the Surface RT was a great piece of hardware; it had the misfortune of not being able to run desktop apps.

But what if Windows RT (Windows on ARM) had allowed desktop applications (recompiled for the ARM instruction set) to run, instead of just the ones that Microsoft allowed (i.e. Office and several built-in apps, such as Notepad, Calculator, and some remote debugging tools)?

I’ve been following the threads on xda-developers about how the digital signature check for desktop apps was circumvented, in effect opening the platform up for desktop development.

Apart from the exploit, how was this possible at a tools level? Well, if we go back to //build (in 2011) and look at the Beta version of Visual Studio 2012, you’ll notice something interesting: a complete version of MFC for ARM (including static lib files). You’ll also notice that several key Windows SDK libraries were excluded (such as common controls), making these MFC libraries more difficult to take advantage of.

The question remains: why would they have included MFC if they hadn’t planned on allowing developers to make desktop apps targeting ARM? My theory is that the original plan was to allow development of desktop apps for ARM, but at some point it was decided that desktop apps should be controlled, with their creation available only to Microsoft itself. Hence, when the first developer preview came out at //build, there were still remnants of the original plans.

After this first build, subsequent builds removed the MFC libraries that were included in that first Beta.

So coming back to the original question: was forbidding desktop apps on ARM the correct move? At the surface level (pardon the pun) it looks like a good decision. The WinRT (Metro) environment provided for store apps is tightly controlled, which can mean better battery life, less exposure to viruses, more app revenue, etc. But it does stifle competition. Look at VLC for ARM right now: they can’t make a desktop app, so they’ve been forced to go to Kickstarter to fund a Windows 8 (Metro) version of their app. It’s still under development due to the tight controls over which APIs are allowed in WinRT apps. It’ll be interesting to see when, if ever, they release something.

Imagine having Chrome or Firefox on your Surface RT. Good thing? I’m not sure. Or your favorite app, which would only need to be recompiled for ARM using Visual Studio and could be released directly by the software developer rather than through the store. Good thing? Hmm, the pros and cons are hard to weigh. If you are starting from a zero ecosystem and trying to build it up, maybe desktop apps would have been a good thing. On the other hand, it might have caused developers to focus less on Windows Store apps and more on their legacy desktop apps.

But judging from the developer buy-in for WinRT (Windows Store apps), it’s a moot point. The fact is, there hasn’t been enough buy-in. And this is the key to the success of the ecosystem: developers must weigh the risk of getting no revenue from a store app against continuing with their legacy desktop apps on Intel only. If desktop apps had been allowed on ARM, it’s possible more of those developers would have been excited about Windows RT.

New in Windows 8.1 store apps: a way to separate your app from your resources

One of the biggest complaints about the Windows 8 store app approach to localization (separate translations for each language you decide to support) was the inability to decouple the various localizations from the main app.

As I’ve talked about in previous blog posts, the satellite DLL approach for Windows desktop apps is an excellent one that can be used successfully with a lot of manual work (and can be automated quite easily when targeting Vista and above). But in Windows 8 store apps, there was no real analogue to this.

Windows 8.1 introduces a new type of package: a resource package. MSDN describes it well here; I’ll provide a brief summary:

A resource package is a subset of your app that provides language, scale, and DirectX feature-level resources. When you deploy an app to a machine, a decision is made as to whether that machine needs one, several, or all of the resource packages. The app package itself can be deployed to a user’s machine with none of the resource packages, some of them, or all of them, depending on the particular needs of that machine. This is great for two reasons: it potentially speeds up downloads, and it reduces disk space.

An app bundle manifest (.appxbundlemanifest) is what describes your app’s package and all its resource packages.

The great thing about this new system is that Visual Studio 2013 handles it for you automatically (it separates the resources into their own resource packages).

There is also a package API that allows you to get information about packages; Microsoft has prepared a sample, found here:

http://code.msdn.microsoft.com/windowsapps/Package-sample-46e239fa
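As a rough idea of what that API looks like from C++/CX (my own sketch; see the sample above for the real thing), you can walk the current package’s dependencies and check which ones are resource packages:

// Log the resource packages among the current app's dependencies.
// Package::IsResourcePackage is new in Windows 8.1.
using namespace Windows::ApplicationModel;

void ListResourcePackages()
{
    auto deps = Package::Current->Dependencies;
    for (unsigned int i = 0; i < deps->Size; ++i)
    {
        Package^ p = deps->GetAt(i);
        if (p->IsResourcePackage)
        {
            OutputDebugString(p->Id->FullName->Data());
            OutputDebugString(L"\n");
        }
    }
}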

as well as another great sample that shows you how this resource package approach could be used in a game:

http://code.msdn.microsoft.com/windowsapps/Games-with-resource-62bd72aa

If you’re ok with targeting Windows 8.1 for a future Windows Store app (see previous blog posts on pros and cons of targeting Windows 8 vs Windows 8.1), this is an excellent new system that I believe will be a great boon for developers.

A note about Stroustrup’s The C++ Programming Language 4th Edition

As far as C++ books are concerned, this is the definitive reference, from the inventor of C++, Bjarne Stroustrup.  I grew up with his 2nd, 3rd, and Special Editions, and I highly recommend that you take a look at the 4th for its great C++11 content.

Now, I don’t recommend you buy it right away. Yes, I know you may be surprised by that statement, but let me explain. Stroustrup is one of those authors who take accuracy seriously. Because of that, he tends to go through many “printings” of his books, making changes (corrections) in each printing based on reader feedback.

Take a look at the errata for his 3rd and special editions:

http://www.stroustrup.com/3rd_errata.html

He went through 21 printings of the 3rd edition and 14 printings of the special edition (which was basically a continuation of the 3rd edition, except in hard cover), so a total of 35 different printings of the 3rd edition when you include the special editions (disclaimer: some of these overlapped, i.e. early special edition printings were equivalent to later 3rd edition printings, but you get the idea).

Why am I mentioning this?  Because the 4th edition is very young.  He’s already up to the 3rd printing after a couple of months:

http://www.stroustrup.com/4th_printing3.html

Also, complaints have been made about the flimsy physical nature of the original release of the 4th edition. Yes, it’s a paperback. However, it looks like Addison-Wesley has heard the complaints, because a hardcover version of the book will be released on July 29th!

So I recommend you consider buying the hardcover version, after a few months of revisions. You’ll have the majority of the “big” fixes, and you can then follow the errata pages for future fixes.

How do you know what printing you’re going to get if you order online? It’s hard to know; you could get a very early one depending on stock, though Amazon tends to go through stock quickly. Another option is to get one from your local bookseller and look inside to see which printing you’re getting.

Visual Studio 2013 support for targeting Windows 8

I’ve been working my way through the multitude of //build 2013 session videos on channel9, and I came across an interesting presentation:

Upgrading Windows 8 Apps to Windows 8.1

There is a lot of really great information about the gotchas for deciding on making an app that is specific to Windows 8.1.

The main point you need to remember: once you make your app target Windows 8.1 (e.g. by converting to an 8.1 app and taking advantage of 8.1 specific APIs), your app will not run on Windows 8.  On the other hand, if you target Windows 8, your app will run under both Windows 8 and Windows 8.1

Here’s the kicker: Visual Studio 2013 will NOT support creating new Windows 8 store apps; you’ll only be able to create Windows 8.1 store apps. However, you will be able to edit and build existing Windows 8 projects with Visual Studio 2013.

So if you want to continue to target Windows 8 when creating new store apps, you are going to need both Visual Studio 2012 and Visual Studio 2013 installed. You’ll really only need Visual Studio 2012 to create the project, and once it’s created you can switch over to Visual Studio 2013.

This seems to me to not be a technical limitation, but more of a way to encourage developers to target Windows 8.1 from the get go if creating a new app.

To me it would make more sense to support creating Windows 8 store apps in Visual Studio 2013, since the infrastructure is already there to continue to edit and build existing Windows 8 projects.

Why MUIRCT is so cool (aka separating Win32 resources into satellite DLLs, the easy way)

MUIRCT is a utility that Microsoft made available starting with the Windows Vista SDK. It’s a localization tool that allows you to “split” resources out of a binary that has already been built.

Let’s give an example. You have a large legacy app with dozens of DLLs, all using the model of code+resources in the same module. After all, up to this point there have been no really good techniques within Visual Studio itself for using the satellite DLL approach without a lot of manual work. Things like creating dialog boxes are simply easier to do with the built-in wizards (class wizards, event handlers, etc.) when the code and the resources reside in the same EXE/DLL. Unfortunately, this is the exact opposite of what we need from a translation/localization perspective. The satellite DLL approach lets you keep your code and your resources separate, but it involves a lot of manual work, especially if you have a complex app with many EXEs and DLLs.

What if there were a way to keep the old-style code+resources-in-the-same-module approach during development, but then separate out the resources after the fact, with only minor changes to your code? How would that even be possible? When you load a resource from a handle (say your app is test.exe, and you pass the HINSTANCE of test.exe to LoadString), how does the operating system know to look elsewhere for the resources? A single handle can’t represent two separate modules behind the scenes, can it?

It turns out that in Vista and higher, MUIRCT can separate these resources for you, and the operating system will load them automatically when you do a resource load. And you have control at the individual resource level over what is treated as language neutral (what stays in the EXE) and what is localizable (what goes into the satellite DLL).

Example:

Create a new MFC app, test.exe using all the defaults

Compile the app, then run the following on the output:

muirct -q test.rcconfig test.exe test2.exe test2.exe.mui

The above command splits the app test.exe into two components: a language-neutral part (test2.exe) and a satellite file named test2.exe.mui, which contains the localizable parts. Translate test2.exe.mui with a resource editor, and your code stays separate from your resources.
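For reference, test.rcconfig is a small XML file listing which resource types stay language neutral and which get split out as localizable. A minimal sketch along these lines (reconstructed from memory of Microsoft’s MUI walkthrough; treat the samples linked below as the authoritative schema):

<?xml version="1.0" encoding="utf-8"?>
<localization>
  <resources>
    <win32Resources fileType="Application">
      <neutralResources>
        <resourceType typeNameId="#16"/>  <!-- VERSION stays in test2.exe -->
      </neutralResources>
      <localizedResources>
        <resourceType typeNameId="#4"/>   <!-- MENU -->
        <resourceType typeNameId="#5"/>   <!-- DIALOG -->
        <resourceType typeNameId="#6"/>   <!-- STRINGTABLE -->
      </localizedResources>
    </win32Resources>
  </resources>
</localization>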

Then you would ship your app in this layout, with the .mui file in a language subfolder:

test2.exe
en-US\test2.exe.mui

When you run your app, the operating system knows to look for your resources in the en-US folder (or whatever your ultimate fallback language is configured to be).

So if you have a bug fix for test2.exe, you don’t need to redo all the languages, because the resources and the code are separate.

I’ve only just scratched the surface of this topic; there are many more technical details you’ll need to learn. Fortunately, the whole process (including a full example of an rcconfig file) is covered by Microsoft in a walkthrough and sample code here:

http://archive.msdn.microsoft.com/hellomui

and

http://archive.msdn.microsoft.com/MUIIzer

Note: this approach works in Vista and higher. If you need XP support, you’ll need much more extensive changes to your application, as there is no support for this technique at the OS level; you end up writing a lot of extra code, which makes the whole thing pointless. So I would recommend using this only if you are able to drop XP from your list of supported operating systems.

Finding the kernel32.dll module handle in a Windows Store app using approved APIs

As there are a lot of forbidden Win32 APIs in Windows Store apps (i.e. APIs that, if you call them, will cause your app to fail app certification), there are often alternative APIs that you have to call instead. For example, the CreateFile API is banned, but for Windows Store apps Microsoft made CreateFile2.

But what if I want to get the module handle of a DLL, specifically kernel32? Looking at the help for GetModuleHandle, we see the unfortunate info:

Minimum supported client Windows XP [desktop apps only]

So we can only use this in desktop apps. For your own packaged libraries you can use the LoadPackagedLibrary API, but that doesn’t work for system DLLs. So how can you get the handle to kernel32.dll, for example, using only approved store APIs?

This is where VirtualQuery comes in.  Interestingly, the API’s help page lists the following info:

Minimum supported client Windows XP [desktop apps | Windows Store apps]

This is great news, because VirtualQuery can get you the module handle of any DLL just by querying the address of any known function in the DLL you want the handle to.

I discovered this trick a while ago; previously I used it to find the module handle of the DLL the executing code lives in. See:

http://www.codeguru.com/cpp/w-p/dll/tips/article.php/c3635/Tip-Detecting-a-HMODULEHINSTANCE-Handle-Within-the-Module-Youre-Running-In.htm

You probably know where I’m going here, but VirtualQuery itself is a function exported from kernel32.dll!

So all we need to do to get the module handle of kernel32.dll is to do a VirtualQuery of VirtualQuery:

// Returns the HMODULE of kernel32.dll. An HMODULE is just the module's base
// address, and VirtualQuery reports the base of the allocation containing
// any address we hand it (here, the address of VirtualQuery itself).
HMODULE GetKernelModule()
{
    MEMORY_BASIC_INFORMATION mbi = {0};
    VirtualQuery( reinterpret_cast<LPCVOID>(&VirtualQuery), &mbi, sizeof(mbi) );
    return reinterpret_cast<HMODULE>(mbi.AllocationBase);
}

And then from your own code:

HMODULE kernelHandle = GetKernelModule();

You can now pass this handle into functions such as GetProcAddress (which is also approved). As you can see, we have a powerful way to get the module handle of any DLL in our process address space, and then use it to get function pointers to any particular function.

Note: in a real Windows Store app you should only use this technique on approved APIs. But for debugging purposes (and just to have some fun), it might be cool to do something like the following:

Generate a blank XAML (C++ Windows Store) app and add a button to the blank form. Double-click the button, then replace the generated event handler with this code:


typedef int (WINAPI *pMessageBox)( __in_opt HWND hWnd,
  __in_opt LPCTSTR lpText, __in_opt LPCTSTR lpCaption, __in UINT uType);

typedef HWND (WINAPI *pGetActiveWindow)(void);

typedef HMODULE (WINAPI *pGetModuleHandle)(__in_opt LPCTSTR lpModuleName);

void App1::MainPage::Button_Click_1(Platform::Object^ sender,
  Windows::UI::Xaml::RoutedEventArgs^ e)
{
 static pMessageBox MessageBox_p = 0;
 static pGetActiveWindow GetActiveWindow_p = 0;
 static pGetModuleHandle GetModuleHandle_p = 0;

 // Locate kernel32.dll using the VirtualQuery trick from above.
 HMODULE kmod = GetKernelModule();

 // Grab kernel32's GetModuleHandleW so we can reach other system DLLs.
 GetModuleHandle_p = (pGetModuleHandle)GetProcAddress(kmod, "GetModuleHandleW");

 HMODULE mod = GetModuleHandle_p(L"user32.dll");

 // Pull two desktop-only functions out of user32.
 MessageBox_p = (pMessageBox)GetProcAddress(mod, "MessageBoxW");
 GetActiveWindow_p = (pGetActiveWindow)GetProcAddress(mod, "GetActiveWindow");

 MessageBox_p(GetActiveWindow_p(), L"Hello", L"Hello", MB_OK);
}

Build and deploy your app, run it, then press the button and see what happens. Something you don’t ever want to do in a production app :)