Channel: Intel® Software - Intel® C++ Compiler

Can I rely on floating-point integer arithmetic working across all optimizations?


By a floating-point integer I mean a normalised floating-point number whose absolute value (in the case of double) is less than 2^53, and for which

#include <cmath>  // std::modf

double int_part, fractional_part;
fractional_part = std::modf(z, &int_part);

will always yield zero for fractional_part and, equivalently, equality between z and int_part. For float one makes the obvious change, replacing 2^53 with 2^24, and so on for other formats.

My basic question is this: if I do arithmetic with this subclass of floating-point numbers (-, +, *, FMA), can I be completely assured that, provided no intermediate result is too large in absolute value, then regardless of compiler optimisation switches or the use of scalar versus vector operations, the answers will always be predictable, identical, and equal to the answers of ordinary commutative and associative integer arithmetic?

I assume the answer is yes, but I cannot find a reference that guarantees the results in an optimised setting. Are there explicit statements as to when operations are or are not consistent across different compiler options or machines? Please note I am asking whether the binary (or, for FMA, ternary) hardware operations are consistent, and whether, as a result, the variations that normally arise from the lack of associativity of these operations (which constrain compiler optimisations around ordering) disappear. Of course I am not asking about division.

After answering this question it becomes interesting to consider the same question with 0 included as an integer (with sign chosen so that 0 - 0 == 0), which is +0 unless round-down is enabled, in which case it is -0 (https://en.wikipedia.org/wiki/Signed_zero).

Thanks

Terry 

 





