I found that compile time for large arrays does improve somewhat when the array is broken into smaller chunks, yet the string approach is still significantly faster. With such a scheme, the true array could be a union with this array of arrays, as sketched below.
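A minimal sketch of that union idea (my own illustration, not part of the measurements below; the Blob name, the 4096-byte chunk size, and the tiny initializer are placeholders). Note that reading a union member other than the one last written is formally undefined behavior in C++, unlike in C, although mainstream compilers tolerate this pattern for plain byte arrays:

#include <cstdint>
#include <iostream>

// The chunked view and the flat view share the same storage.
union Blob {
    std::uint8_t chunks[3][4096];   // the array of arrays, e.g. 4 KiB blocks
    std::uint8_t flat[3 * 4096];    // the "true" flat array over the same bytes
};

// Hypothetical small initializer; the real data would come from the ZZ* macros.
constexpr Blob blob = { { {1, 2, 3}, {4, 5, 6}, {7, 8, 9} } };

int main() {
    // First byte of the second chunk, read through the flat view (prints 4).
    std::cout << static_cast<int>(blob.flat[4096]) << '\n';
    return 0;
}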
Posting the code below as an example of how to test the OP's problem without hand-coding multi-million-byte source files. Since this is not so much an answer as a resource for investigation, I am marking it community wiki.
#include <cstdint>
#include <iostream>
using namespace std;
#define METHOD 5
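// Note: each method tags its data with a different leading byte (65..68, 0x45, i.e. 'A' through 'E'),
// apparently so the printed output identifies which variant was compiled.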
#if METHOD == 1
// Method 1: flat byte array, 28 s compile time
#define ZZ16 65, 255, 0, 255, 0, 255, 0, 255, 0, 255, 0, 255, 0, 255, 0, 255,
#define ZZ256 ZZ16 ZZ16 ZZ16 ZZ16 ZZ16 ZZ16 ZZ16 ZZ16 ZZ16 ZZ16 ZZ16 ZZ16 ZZ16 ZZ16 ZZ16 ZZ16
#define ZZ4K ZZ256 ZZ256 ZZ256 ZZ256 ZZ256 ZZ256 ZZ256 ZZ256 ZZ256 ZZ256 ZZ256 ZZ256 ZZ256 ZZ256 ZZ256 ZZ256
#define ARR constexpr uint8_t arr[]
#define COUT cout << arr << endl
#elif METHOD == 2
// Method 2: 16-byte blocks, 16 s compile time
#define ZZ16 {66, 255, 0, 255, 0, 255, 0, 255, 0, 255, 0, 255, 0, 255, 0, 255},
#define ZZ256 ZZ16 ZZ16 ZZ16 ZZ16 ZZ16 ZZ16 ZZ16 ZZ16 ZZ16 ZZ16 ZZ16 ZZ16 ZZ16 ZZ16 ZZ16 ZZ16
#define ZZ4K ZZ256 ZZ256 ZZ256 ZZ256 ZZ256 ZZ256 ZZ256 ZZ256 ZZ256 ZZ256 ZZ256 ZZ256 ZZ256 ZZ256 ZZ256 ZZ256
#define ARR constexpr uint8_t arr[][16]
#define COUT cout << arr[0] << endl
#elif METHOD == 3
// Method 3: 256-byte blocks, 16 s compile time
#define ZZ16 67, 255, 0, 255, 0, 255, 0, 255, 0, 255, 0, 255, 0, 255, 0, 255,
#define ZZ256 {ZZ16 ZZ16 ZZ16 ZZ16 ZZ16 ZZ16 ZZ16 ZZ16 ZZ16 ZZ16 ZZ16 ZZ16 ZZ16 ZZ16 ZZ16 ZZ16},
#define ZZ4K ZZ256 ZZ256 ZZ256 ZZ256 ZZ256 ZZ256 ZZ256 ZZ256 ZZ256 ZZ256 ZZ256 ZZ256 ZZ256 ZZ256 ZZ256 ZZ256
#define ARR constexpr uint8_t arr[][256]
#define COUT cout << arr[0] << endl
#elif METHOD == 4
// Method 4: 4096-byte blocks, 13 s compile time
#define ZZ16 68, 255, 0, 255, 0, 255, 0, 255, 0, 255, 0, 255, 0, 255, 0, 255,
#define ZZ256 ZZ16 ZZ16 ZZ16 ZZ16 ZZ16 ZZ16 ZZ16 ZZ16 ZZ16 ZZ16 ZZ16 ZZ16 ZZ16 ZZ16 ZZ16 ZZ16
#define ZZ4K {ZZ256 ZZ256 ZZ256 ZZ256 ZZ256 ZZ256 ZZ256 ZZ256 ZZ256 ZZ256 ZZ256 ZZ256 ZZ256 ZZ256 ZZ256 ZZ256},
#define ARR constexpr uint8_t arr[][4096]
#define COUT cout << arr[0] << endl
#elif METHOD == 5
// Method 5: string literal, 4 s compile time
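// Note: the concatenated string literal gets a terminating '\0', so sizeof(arr)
// is one byte larger here than in methods 1 through 4.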
#define ZZ16 "\x45\xFF\x00\xFF\x00\xFF\x00\xFF\x00\xFF\x00\xFF\x00\xFF\x00\xFF"
#define ZZ256 ZZ16 ZZ16 ZZ16 ZZ16 ZZ16 ZZ16 ZZ16 ZZ16 ZZ16 ZZ16 ZZ16 ZZ16 ZZ16 ZZ16 ZZ16 ZZ16
#define ZZ4K ZZ256 ZZ256 ZZ256 ZZ256 ZZ256 ZZ256 ZZ256 ZZ256 ZZ256 ZZ256 ZZ256 ZZ256 ZZ256 ZZ256 ZZ256 ZZ256
#define ARR constexpr uint8_t arr[]
#define COUT cout << arr << endl
#endif
#define ZZ64K ZZ4K ZZ4K ZZ4K ZZ4K ZZ4K ZZ4K ZZ4K ZZ4K ZZ4K ZZ4K ZZ4K ZZ4K ZZ4K ZZ4K ZZ4K ZZ4K
#define ZZ1M ZZ64K ZZ64K ZZ64K ZZ64K ZZ64K ZZ64K ZZ64K ZZ64K ZZ64K ZZ64K ZZ64K ZZ64K ZZ64K ZZ64K ZZ64K ZZ64K
#define ZZ16M ZZ1M ZZ1M ZZ1M ZZ1M ZZ1M ZZ1M ZZ1M ZZ1M ZZ1M ZZ1M ZZ1M ZZ1M ZZ1M ZZ1M ZZ1M ZZ1M
// 3 MiB = 3,145,728 bytes
ARR = {
ZZ1M ZZ1M ZZ1M
};
int main() {
cout << "!!!Hello World!!!" << endl;
COUT;
cout << sizeof(arr) << endl;
return 0;
}
Comments:

- Is constexpr always static, or is it more likely to be optimized away than just using static? Will the compile time with constexpr vary compared to the compile time of static? – Chatwin
- constexpr suggests to the compiler "try to replace uses of this inline with the according value", which causes quite some compile overhead. How about using plain const instead? – Fadden
- Why constexpr at all? Do you understand the difference between const and constexpr? Did you actually put this into a header file? How often is that header file included? That said, please extract a minimal example that at least answers the above questions. Also, please clarify what HUGE_SIZE really is. – Fadden
- constexpr does not apply to C. Suggest deleting the tag. – Unswear
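To illustrate the const/constexpr distinction raised in the comments, here is a minimal sketch of my own (the names c_arr and ce_arr are made up for illustration): both declarations give read-only data, but only the constexpr array is guaranteed to be usable in constant expressions.

#include <cstdint>

const std::uint8_t     c_arr[]  = {0xFF, 0x00};  // read-only, but not required to be a compile-time constant
constexpr std::uint8_t ce_arr[] = {0xFF, 0x00};  // compile-time constant, usable in constant expressions

static_assert(ce_arr[0] == 0xFF, "evaluated at compile time");
// static_assert(c_arr[0] == 0xFF, "");  // rejected by mainstream compilers: c_arr is not constexpr

int main() { return c_arr[1] + ce_arr[1]; }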