2012-12-09 Summary:
- In a normal mixed-mode application, global native C++ destructors run as finalizers. It is not possible to change that behavior or the associated timeout.
- A mixed-mode assembly DLL runs C++ constructors/destructors during DLL load/unload - exactly as a native DLL.
- Hosting the CLR in a native executable using the COM interface allows both letting the destructors behave as in a native DLL (the behavior I desire) and setting the timeout for finalizers (an added bonus).
- As far as I can tell the above applies to at least Visual Studio 2008, 2010 and 2012. (Only tested with .NET 4)
The actual CLR hosting executable I plan on using is very similar to the one outlined in this question except for a few minor changes:
- Setting OPR_FinalizerRun to some value (60 seconds currently, but subject to change) as suggested by Hans Passant (see the SetTimeout sketch after the questions below for the shape of the call).
- Using the ATL COM smart pointer classes (these aren't available in the Express editions of Visual Studio, so I omitted them from this post).
- Loading CLRCreateInstance from mscoree.dll dynamically, to allow better error messages when no compatible CLR is installed (a sketch follows this list).
- Passing the command line on from the host to the designated Main function in the assembly DLL.
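For reference, a minimal sketch of the dynamic loading change. It assumes only LoadLibrary/GetProcAddress and the documented CLRCreateInstance export; the helper name CreateMetaHost is mine, not part of any API:
// Sketch: resolve CLRCreateInstance at run time instead of linking mscoree.lib,
// so a missing or pre-.NET 4 runtime can be reported with a friendly message.
#include <windows.h>
#include <metahost.h>
typedef HRESULT (STDAPICALLTYPE *CreateInstanceFn)(REFCLSID, REFIID, LPVOID*);
HRESULT CreateMetaHost(ICLRMetaHost** ppMetaHost)
{
    HMODULE hMscoree = LoadLibraryW(L"mscoree.dll");
    if (!hMscoree)
        return HRESULT_FROM_WIN32(GetLastError()); // no CLR installed at all
    CreateInstanceFn pfn = (CreateInstanceFn)GetProcAddress(hMscoree, "CLRCreateInstance");
    if (!pfn)
        return HRESULT_FROM_WIN32(GetLastError()); // pre-.NET 4 mscoree.dll lacks this export
    return pfn(CLSID_CLRMetaHost, IID_PPV_ARGS(ppMetaHost));
}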
Thanks to all who took the time to read the question and/or comment.
2012-12-02 Update at the bottom of the post.
I'm working on a mixed-mode C++/CLI application using Visual Studio 2012 with .NET 4 and was surprised to discover that the destructors for some of the native global objects weren't getting called. Investigating the issue, it turns out that they behave like managed objects, as explained in this post.
I was quite surprised by this behavior (I understand it for managed objects) and couldn't find it documented anywhere, neither in the C++/CLI standard nor in the description of destructors and finalizers.
Following the suggestion in a comment by Hans Passant, I compiled the program as an assembly DLL and hosted it in a small native executable, and that does give me the desired behavior (destructors are given ample time to finish and run in the same thread in which the objects were constructed)!
My questions:
- Can I get the same behavior in a standalone executable?
- If (1) isn't feasible, is it possible to configure the process exit timeout policy (i.e. basically calling ICLRPolicyManager->SetTimeout(OPR_ProcessExit, INFINITE)) for the executable? This would be an acceptable workaround (see the sketch after this list).
- Where is this documented / how can I educate myself more on the topic? I'd rather not rely on behavior that's liable to change.
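To illustrate question 2, this is the shape of the call I mean: a minimal sketch assuming an ICLRRuntimeHost obtained as in CLRHost.cpp below that has not been started yet (my understanding is that host policy should be configured before Start()). The helper name DisableExitTimeout is mine:
// Sketch: disable the process exit timeout through the hosting API.
// Call this before pRuntimeHost->Start().
#include <windows.h>
#include <metahost.h>
HRESULT DisableExitTimeout(ICLRRuntimeHost* pRuntimeHost)
{
    ICLRControl* pControl = 0;
    ICLRPolicyManager* pPolicy = 0;
    HRESULT hr = pRuntimeHost->GetCLRControl(&pControl);
    if (SUCCEEDED(hr))
        hr = pControl->GetCLRManager(IID_PPV_ARGS(&pPolicy));
    if (SUCCEEDED(hr))
        hr = pPolicy->SetTimeout(OPR_ProcessExit, INFINITE);
    if (pPolicy) pPolicy->Release();
    if (pControl) pControl->Release();
    return hr;
}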
To reproduce compile the below files as follows:
cl /EHa /MDd CLRHost.cpp
cl /EHa /MDd /c Native.cpp
cl /EHa /MDd /c /clr CLR.cpp
link /out:CLR.exe Native.obj CLR.obj
link /out:CLR.dll /DLL Native.obj CLR.obj
Unwanted behavior:
C:\Temp\clrhost>clr.exe
[1210] Global::Global()
[d10] Global::~Global()
C:\Temp\clrhost>
Running hosted:
C:\Temp\clrhost>CLRHost.exe clr.dll
[1298] Global::Global()
2a returned.
[1298] Global::~Global()
[1298] Global::~Global() - Done!
C:\Temp\clrhost>
Used files:
// CLR.cpp
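// Note: T::M matches the only signature ICLRRuntimeHost::ExecuteInDefaultAppDomain
// can invoke: static int Method(String^).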
public ref class T {
static int M(System::String^ arg) { return 42; }
};
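// Dummy entry point; only needed so the same objects also link into CLR.exe.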
int main() {}
// Native.cpp
#include <windows.h>
#include <iostream>
#include <iomanip>
using namespace std;
struct Global {
Global() {
wcout << L"[" << hex << GetCurrentThreadId() << L"] Global::Global()" << endl;
}
~Global() {
wcout << L"[" << hex << GetCurrentThreadId() << L"] Global::~Global()" << endl;
Sleep(3000);
wcout << L"[" << hex << GetCurrentThreadId() << L"] Global::~Global() - Done!" << endl;
}
} g;
// CLRHost.cpp
#include <windows.h>
#include <metahost.h>
#pragma comment(lib, "mscoree.lib")
#include <iostream>
#include <iomanip>
using namespace std;
int wmain(int argc, const wchar_t* argv[])
{
HRESULT hr = S_OK;
ICLRMetaHost* pMetaHost = 0;
ICLRRuntimeInfo* pRuntimeInfo = 0;
ICLRRuntimeHost* pRuntimeHost = 0;
DWORD dwRetVal = E_NOTIMPL; // declared before the first goto so no initialization is skipped
wchar_t version[MAX_PATH];
DWORD versionSize = _countof(version);
if (argc < 2) {
wcout << L"Usage: " << argv[0] << L" <assembly.dll>" << endl;
return 0;
}
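// CLRCreateInstance is bound statically here; the final host loads it from
// mscoree.dll dynamically instead (see the summary above).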
if (FAILED(hr = CLRCreateInstance(CLSID_CLRMetaHost, IID_PPV_ARGS(&pMetaHost)))) {
goto out;
}
if (FAILED(hr = pMetaHost->GetVersionFromFile(argv[1], version, &versionSize))) {
goto out;
}
if (FAILED(hr = pMetaHost->GetRuntime(version, IID_PPV_ARGS(&pRuntimeInfo)))) {
goto out;
}
if (FAILED(hr = pRuntimeInfo->GetInterface(CLSID_CLRRuntimeHost, IID_PPV_ARGS(&pRuntimeHost)))) {
goto out;
}
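// The final host (see summary above) calls pRuntimeHost->GetCLRControl() at this
// point to set the OPR_FinalizerRun timeout before starting the runtime.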
if (FAILED(hr = pRuntimeHost->Start())) {
goto out;
}
if (FAILED(hr = pRuntimeHost->ExecuteInDefaultAppDomain(argv[1], L"T", L"M", L"", &dwRetVal))) {
wcerr << hex << hr << endl;
goto out;
}
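// Note: prints "2a" (0x2a == 42) because Global's constructor left the shared
// wcout in hex mode.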
wcout << dwRetVal << " returned." << endl;
if (FAILED(hr = pRuntimeHost->Stop())) {
goto out;
}
out:
if (pRuntimeHost) pRuntimeHost->Release();
if (pRuntimeInfo) pRuntimeInfo->Release();
if (pMetaHost) pMetaHost->Release();
return hr;
}
2012-12-02:
As far as I can tell the behavior seems to be as follows:
- In a mixed-mode EXE file, global destructors are run as finalizers during DomainUnload regardless of whether they are placed in native code or CLR code. This is the case in Visual Studio 2008, 2010 and 2012.
- In a mixed-mode DLL hosted by a native application, destructors for global native objects are run during DLL_PROCESS_DETACH, after the managed method has run and all other cleanup has occurred. They run in the same thread as the constructor and there is no timeout associated with them (the desired behavior). As expected, the timeout for destructors of global managed objects (non-ref classes placed in files compiled with /clr) can be controlled using ICLRPolicyManager->SetTimeout(OPR_ProcessExit, <timeout>).
Hazarding a guess, I think the reason global native constructors/destructors function "normally" (defined as behaving as I would expect) in the DLL scenario is to allow using LoadLibrary and GetProcAddress on native functions, as sketched below. I would thus expect that it is relatively safe to rely on this not changing in the foreseeable future, but would appreciate having some kind of confirmation/denial from official sources/documentation either way.
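To make that guess concrete, this is the kind of purely native client the loader presumably has to keep working. The NativeAdd export is hypothetical, added only for illustration (say, extern "C" __declspec(dllexport) int NativeAdd(int, int) in Native.cpp):
// Hypothetical native consumer of the mixed-mode CLR.dll.
#include <windows.h>
#include <iostream>
typedef int (*NativeAddFn)(int, int);
int main()
{
    // Global::Global() runs on this thread during DLL_PROCESS_ATTACH.
    HMODULE hDll = LoadLibraryW(L"CLR.dll");
    if (!hDll)
        return 1;
    NativeAddFn pfnAdd = (NativeAddFn)GetProcAddress(hDll, "NativeAdd");
    if (pfnAdd)
        std::cout << pfnAdd(2, 40) << std::endl;
    // At process exit Global::~Global() runs during DLL_PROCESS_DETACH,
    // untimed and on the shutting-down thread (see the stack trace below).
    return 0;
}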
Update 2:
Reproduced in Visual Studio 2012 (tested with the Express and Premium editions; I unfortunately don't have access to earlier versions on this machine). It should work the same way on the command line (building as outlined above), but here's how to reproduce from within the IDE.
Building CLRHost.exe:
- File -> New Project
- Visual C++ -> Win32 -> Win32 Console Application (Name the project "CLRHost")
- Application Settings -> Additional Options -> Empty project
- Press "Finish"
- Right click on Source Files in the solution explorer. Add -> New Item -> Visual C++ -> C++ File. Name it CLRHost.cpp and paste the content of CLRHost.cpp from the post.
- Project -> Properties. Configuration Properties -> C/C++ -> Code Generation -> Change "Enable C++ Exceptions" to "Yes with SEH Exceptions (/EHa)" and "Basic Runtime Checks" to "Default"
- Build.
Building CLR.DLL:
- File -> New Project
- Visual C++ -> CLR -> Class Library (Name the project "CLR")
- Delete all the autogenerated files
- Project -> Properties. Configuration Properties -> C/C++ -> Precompiled Headers -> Precompiled Header. Change to "Not Using Precompiled Headers".
- Right click on Source Files in the solution explorer. Add -> New Item -> Visual C++ -> C++ File. Name it CLR.cpp and paste the content of CLR.cpp from the post.
- Add a new C++ file named Native.cpp and paste the code from the post.
- Right click on "Native.cpp" in the solution explorer and select properties. Change C/C++ -> General -> Common Language RunTime Support to "No Common Language RunTime Support"
- Project -> Properties -> Debugging. Change "Command" to point to CLRhost.exe, "Command Arguments" to "$(TargetPath)" including the quotes, "Debugger Type" to "Mixed"
- Build and debug.
Placing a breakpoint in the destructor of Global gives the following stack trace:
> clr.dll!Global::~Global() Line 11 C++
clr.dll!`dynamic atexit destructor for 'g''() + 0xd bytes C++
clr.dll!_CRT_INIT(void * hDllHandle, unsigned long dwReason, void * lpreserved) Line 416 C
clr.dll!__DllMainCRTStartup(void * hDllHandle, unsigned long dwReason, void * lpreserved) Line 522 + 0x11 bytes C
clr.dll!_DllMainCRTStartup(void * hDllHandle, unsigned long dwReason, void * lpreserved) Line 472 + 0x11 bytes C
mscoreei.dll!__CorDllMain@12() + 0x136 bytes
mscoree.dll!_ShellShim__CorDllMain@12() + 0xad bytes
ntdll.dll!_LdrpCallInitRoutine@16() + 0x14 bytes
ntdll.dll!_LdrShutdownProcess@0() + 0x141 bytes
ntdll.dll!_RtlExitUserProcess@4() + 0x74 bytes
kernel32.dll!74e37a0d()
mscoreei.dll!RuntimeDesc::ShutdownAllActiveRuntimes() + 0x10e bytes
mscoreei.dll!_CorExitProcess@4() + 0x27 bytes
mscoree.dll!_ShellShim_CorExitProcess@4() + 0x94 bytes
msvcr110d.dll!___crtCorExitProcess() + 0x3a bytes
msvcr110d.dll!___crtExitProcess() + 0xc bytes
msvcr110d.dll!__unlockexit() + 0x27b bytes
msvcr110d.dll!_exit() + 0x10 bytes
CLRHost.exe!__tmainCRTStartup() Line 549 C
CLRHost.exe!wmainCRTStartup() Line 377 C
kernel32.dll!@BaseThreadInitThunk@12() + 0x12 bytes
ntdll.dll!___RtlUserThreadStart@8() + 0x27 bytes
ntdll.dll!__RtlUserThreadStart@8() + 0x1b bytes
Running as a standalone executable I get a stack trace that is very similar to the one observed by Hans Passant (though it isn't using the managed version of the CRT):
> clrexe.exe!Global::~Global() Line 10 C++
clrexe.exe!`dynamic atexit destructor for 'g''() + 0xd bytes C++
msvcr110d.dll!__unlockexit() + 0x1d3 bytes
msvcr110d.dll!__cexit() + 0xe bytes
[Managed to Native Transition]
clrexe.exe!<CrtImplementationDetails>::LanguageSupport::_UninitializeDefaultDomain(void* cookie) Line 577 C++
clrexe.exe!<CrtImplementationDetails>::LanguageSupport::UninitializeDefaultDomain() Line 594 + 0x8 bytes C++
clrexe.exe!<CrtImplementationDetails>::LanguageSupport::DomainUnload(System::Object^ source, System::EventArgs^ arguments) Line 628 C++
clrexe.exe!<CrtImplementationDetails>::ModuleUninitializer::SingletonDomainUnload(System::Object^ source, System::EventArgs^ arguments) Line 273 + 0x6e bytes C++
kernel32.dll!@BaseThreadInitThunk@12() + 0x12 bytes
ntdll.dll!___RtlUserThreadStart@8() + 0x27 bytes
ntdll.dll!__RtlUserThreadStart@8() + 0x1b bytes