As shown in https://gcc.godbolt.org/z/_PjVDd, icpc seems to use 32-bit registers for the offset calculation when the loop index is a 32-bit type, i.e. when the array has fewer than 4G elements. With 4-byte ints that is fine up to about 1 billion elements, but for anything larger the byte offset exceeds 4GB and 64-bit registers are needed to address memory past the 4GB mark. The result is that we can get crashes, as the repro in the above link shows.
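To make the arithmetic concrete, here is a small standalone illustration (my own sketch, not compiler output) of how a 32-bit byte offset wraps for an array of this size:

#include <cstdio>

int main() {
    // An element index that is valid for an array of (1ULL << 31) + 1000 ints:
    unsigned int i = (1u << 31) + 500;
    // 32-bit arithmetic: the byte offset wraps modulo 2^32 and comes out as 2000.
    unsigned int offset32 = i * 4u;
    // 64-bit arithmetic: the real byte offset is 8589936592, i.e. well past 4GB.
    unsigned long long offset64 = (unsigned long long)i * 4u;
    printf("32-bit offset: %u\n64-bit offset: %llu\n", offset32, offset64);
    return 0;
}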
I've confirmed this bug exists in icc 17 all the way to 19.0.245 (the most recent version I can find). I've only tested this on Windows.
In case the above link doesn't work, here's the code:
#include <cstdio>   // printf
#include <cstddef>  // size_t

void buggy(unsigned int count, int* a) {
    // note that the index is 32-bit
    for (unsigned i = 0; i < count; ++i)
        *a++ = 123;
}

void good(unsigned int count, int* a) {
    // note that the index is 64-bit
    for (unsigned long long i = 0; i < count; ++i)
        *a++ = 123;
}

int main(int argc, char** argv) {
    const size_t array_size = argc == 123456 ? (size_t)argv : (1ULL << 31) + 1000; // prevent compiler optimizations
    printf("array_size == %zu\n", array_size); // double check we really did use the correct size
    int* out = new int[array_size];
    buggy(array_size, out);
    // good(array_size, out);
    printf("no crash!\n");
    return 0;
}
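For completeness, this is roughly how I built and ran the repro on Windows (assuming the classic Intel compiler driver icl and its MSVC-style /O2 flag; repro.cpp is just an illustrative file name for the code above):

    icl /O2 repro.cpp
    repro.exe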