We have a C++ project containing several large static data tables (arrays of structs) generated by a preprocessing tool and compiled into the project. We've been using VC++ 2008 up to now, but are preparing to move to 2010, and these data tables have suddenly started taking a very long time to compile.
As an example, one such table has about 3,000 entries, each of which is a struct containing several ints and pointers, all initialized statically. This one file took ~15 seconds to compile in VC++ 2008, but is taking 30 minutes in VC++ 2010!
As an experiment, I tried splitting this table evenly into 8 tables, each in its own .cpp file, and they compile in 20-30 seconds each. This makes me think that something inside the compiler is O(n^2) in the length of these tables.
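For what it's worth, the split amounts to something like the sketch below. The names are hypothetical and the row type is trimmed to two fields; in the real project each part array would live in its own .cpp file (with extern declarations in a shared header) so that no single translation unit holds all ~3,000 initializers:

```cpp
#include <cstddef>

// Trimmed-down stand-in for the real row struct (hypothetical).
struct COptionInfo {
    int         oid;
    const char* name;
};

// Each of these would be in its own .cpp file in the real project.
static const COptionInfo s_aPart0[] = {
    { 0, "cid" },
    { 1, "IS_DERIVED_FROM" },
};
static const COptionInfo s_aPart1[] = {
    { 2, "FIRE_TRIGGER_EVENT" },
    { 3, "FIRE_UNTRIGGER_EVENT" },
};

struct Part { const COptionInfo* p; std::size_t n; };
static const Part s_aParts[] = {
    { s_aPart0, sizeof s_aPart0 / sizeof s_aPart0[0] },
    { s_aPart1, sizeof s_aPart1 / sizeof s_aPart1[0] },
};

// Index across the parts as if they were still one flat table.
const COptionInfo* GetOptionInfo(std::size_t i) {
    for (std::size_t p = 0; p < sizeof s_aParts / sizeof s_aParts[0]; ++p) {
        if (i < s_aParts[p].n)
            return s_aParts[p].p + i;
        i -= s_aParts[p].n;
    }
    return 0;  // out of range
}
```

The only cost of the split is the extra indirection at lookup time; the data itself is unchanged.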
Memory usage for cl.exe plateaus at around 400 MB (my machine has 12 GB of RAM), and I see no I/O activity once it reaches that point, so I don't believe this is a disk-caching issue.
Does anyone have an idea what could be going on here? Is there some compiler feature I can turn off to get back to sane compile times?
Here is a sample of the data in the table:
// cid (0 = 0x0)
{
    OID_cid,
    OTYP_Cid,
    0 | FOPTI_GetFn,
    NULL,
    0,
    NULL,
    (PFNGET_VOID) static_cast<PFNGET_CID>(&CBasic::Cid),
    NULL,
    CID_Basic,
    "cid",
    OID_Identity,
    0,
    NULL,
},
// IS_DERIVED_FROM (1 = 0x1)
{
    OID_IS_DERIVED_FROM,
    OTYP_Bool,
    0 | FOPTI_Fn,
    COptThunkMgr::ThunkOptBasicIS_DERIVED_FROM,
    false,
    NULL,
    NULL,
    NULL,
    CID_Basic,
    "IS_DERIVED_FROM",
    OID_Nil,
    0,
    &COptionInfoMgr::s_aFnsig[0],
},
// FIRE_TRIGGER_EVENT (2 = 0x2)
{
    OID_FIRE_TRIGGER_EVENT,
    OTYP_Void,
    0 | FOPTI_Fn,
    COptThunkMgr::ThunkOptBasicFIRE_TRIGGER_EVENT,
    false,
    NULL,
    NULL,
    NULL,
    CID_Basic,
    "FIRE_TRIGGER_EVENT",
    OID_Nil,
    0,
    NULL,
},
// FIRE_UNTRIGGER_EVENT (3 = 0x3)
{
    OID_FIRE_UNTRIGGER_EVENT,
    OTYP_Void,
    0 | FOPTI_Fn,
    COptThunkMgr::ThunkOptBasicFIRE_UNTRIGGER_EVENT,
    false,
    NULL,
    NULL,
    NULL,
    CID_Basic,
    "FIRE_UNTRIGGER_EVENT",
    OID_Nil,
    0,
    NULL,
},
As you can see, it includes various ints and enums as well as a few literal strings, function pointers and pointers into other static data tables.
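For context, the initializers above imply a row type shaped roughly like the following. Every enum, typedef, and field name below is a guess reconstructed from the sample, not the project's real declaration:

```cpp
// All declarations here are hypothetical reconstructions from the sample
// initializers; the real ones live in the project's headers.
enum OID  { OID_Nil, OID_Identity, OID_cid, OID_IS_DERIVED_FROM };
enum OTYP { OTYP_Cid, OTYP_Bool, OTYP_Void };
enum CID  { CID_Basic };
enum { FOPTI_GetFn = 0x1, FOPTI_Fn = 0x2 };

typedef void (*PFNGET_VOID)();   // getter, cast to a common erased type
typedef void (*PFNTHUNK)();      // the COptThunkMgr::ThunkOptBasic* thunks

struct FnSig;                    // rows may point into a signature table

struct COptionInfo {
    OID          oid;        // option id (OID_cid, ...)
    OTYP         otyp;       // value type (OTYP_Cid, OTYP_Bool, ...)
    unsigned     fFlags;     // 0 | FOPTI_GetFn, 0 | FOPTI_Fn, ...
    PFNTHUNK     pfnThunk;   // call thunk, or NULL
    int          defVal;     // 0 / false in the sample
    const void*  pUnknown1;  // always NULL in the sample
    PFNGET_VOID  pfnGet;     // getter, or NULL
    const void*  pUnknown2;  // always NULL in the sample
    CID          cid;        // owning class id
    const char*  szName;     // option name
    OID          oidRelated; // OID_Identity / OID_Nil
    int          unknown3;   // always 0 in the sample
    const FnSig* pFnSig;     // &COptionInfoMgr::s_aFnsig[0], or NULL
};
```

The point is only that each entry is a flat aggregate of scalars and pointers, so there is nothing in the data itself that should be expensive to compile.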
Comments:

Perhaps it's not an O(n^2) thing so much as a caching thing. If it can't hold it all in memory it sends some to disk... major slowdown, that doesn't come up when it's one eighth the size. – Zales

That wouldn't be O(N^2), though, if the number of disk accesses is O(N) and the quantity of data written is also O(N). – Quackery