Hello awesome people of bytes.com. After a year working on a thesis project, I stumbled upon an interesting phenomenon. When I set a float variable to 10060.000000, its value never changed after some floating-point arithmetic involving a very small number (10060f - 0.00027333f == 10060f?). Changing the value to about 1000 or less gave the correct result, and changing the variable from float to double also fixed the problem (10060d - 0.00027333f gives a result slightly below 10060, which is correct).

Is there a theoretical basis for why the float-only subtraction fails while the double - float one works? I thought 32-bit floats had enough significant digits to represent most of the result of 10060 - 0.00027333. At the time I ran into this, I was testing my code on 32-bit Windows Vista; I have not yet tested the Mac and Linux builds to see whether they show the same behavior.

Thanks for your time!
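P.S. Here is a minimal standalone sketch of the kind of arithmetic I mean (written in Java purely for illustration, not my actual thesis code; the exact printed digits may vary):

public class FloatPrecisionDemo {
    public static void main(String[] args) {
        float  big  = 10060f;       // a 32-bit float carries roughly 7 significant decimal digits
        double bigD = 10060d;       // a 64-bit double carries roughly 15-16 significant decimal digits
        float  tiny = 0.00027333f;

        // The spacing (ULP) between adjacent floats near 10060 is about 0.00098,
        // so subtracting 0.00027333 rounds straight back to 10060.0.
        System.out.println(big - tiny);    // prints 10060.0

        // In double precision the small difference survives.
        System.out.println(bigD - tiny);   // prints roughly 10059.99972667
    }
}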