Hi,
I set the precision to 100 for both cases. I'm wondering why the number
of digits is different.
Also, for a double, I think any digits beyond the 15th (or 16th) are
not meaningful, because they exceed the double's precision limit. Even
if I set the precision to 100, shouldn't it truncate the number to 15
(or 16) digits?
Thanks,
Peng
$ cat main.cc
#include <iostream>
#include <iomanip>
int main() {
  double a = .01;
  std::cout << std::setprecision(100) << a << std::endl;
  std::cout << std::scientific << std::setprecision(100) << a << std::endl;
}
$ ./main-g.exe
0.01000000000000000020816681711721685132943093776702880859375
1.0000000000000000208166817117216851329430937767028808593750000000000000000000000000000000000000000000e-02