Linux oddity

Hi Folks,

When converting a double to an int, the result is not as
I'd expect on Linux:

#include <stdio.h>

int main(int argc, char **argv)
{
    double val = 0.24;
    int multiplier = 2000;
    int result = static_cast<int>(val * multiplier);
    printf("result = %d (should be 480)\n", result);
    return 0;
}

The above code prints 480 on SunOS and Windows, but on
Linux with gcc 3.2 it prints 479. Is there a valid
explanation for this difference?

- Keith

Jul 19 '05 #1
Keith S. wrote:
Hi Folks,

When converting a double to an int, the result is not as
I'd expect on Linux:

#include <stdio.h>
int
main(int argc, char **argv)
{
double val = 0.24;
int multiplier = 2000;
int result = static_cast<int> (val * multiplier);
printf("result = %d (should be 480)\n", result);
return 0;
}

The above code prints 480 on SunOS and Windows, but on
Linux with gcc 3.2 it prints 479. Is there a valid
explanation for this difference?

- Keith


Hmmm, seems to be a numerical problem:

#include <cstdio>

int main(int argc, char **argv)
{
    double val = 0.24;
    int multiplier = 2000;
    int result = static_cast<int>(val * multiplier);
    printf("val * multiplier - 480 = %20.20f\n", val * multiplier - 480);
    return 0;
}

but it's strange. I assumed that the result would only depend on the CPU,
not the operating system...

Jul 19 '05 #2
Keith S. wrote in news:bm************@ID-169434.news.uni-berlin.de:
Hi Folks,

When converting a double to an int, the result is not as
I'd expect on Linux:

#include <stdio.h>
int
main(int argc, char **argv)
{
double val = 0.24;
int multiplier = 2000;
int result = static_cast<int> (val * multiplier);
printf("result = %d (should be 480)\n", result);
return 0;
}

The above code prints 480 on SunOS and Windows, but on
Linux with gcc 3.2 it prints 479. Is there a valid
explanation for this difference?


The compilers are using different rounding rules, as I believe
they are allowed to do. Note that 0.24 can't on most machines
be represented as an exact value; what you actually get is
something like 0.23999999999997. So the two compilers that
give 480 are rounding up or to the nearest value, and the
Linux box is rounding down. It's the static_cast<int>() that
is doing the rounding.
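
You can see the representation issue for yourself by printing the stored
value of 0.24 with more digits than a double really holds. A minimal
sketch (the exact digits printed can vary by platform):

#include <cstdio>

int main()
{
    double val = 0.24;
    // A double carries roughly 16 significant decimal digits, so the
    // tail printed below is representation error, not real data.
    std::printf("0.24 stored as a double = %.20f\n", val);
    return 0;
}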

Look up the functions std::floor() and std::ceil(), declared
in <cmath>.

HTH

Rob.
--
http://www.victim-prime.dsl.pipex.com/
Jul 19 '05 #3
Rob Williscroft wrote:
The compilers are using different rounding rules, as I believe
they are allowed to do. Note that 0.24 can't on most machines
be represented as an exact value; what you actually get is
something like 0.23999999999997. So the two compilers that
give 480 are rounding up or to the nearest value, and the
Linux box is rounding down. It's the static_cast<int>() that
is doing the rounding.


Understood, although I would have expected that static_cast<int>'s
behaviour was the same (with respect to rounding method) on
different platforms, especially when using the same compiler...

- Keith

Jul 19 '05 #4
Rob Williscroft wrote:
Keith S. wrote in news:bm************@ID-169434.news.uni-berlin.de:

Hi Folks,

When converting a double to an int, the result is not as
I'd expect on Linux:

#include <stdio.h>
int
main(int argc, char **argv)
{
double val = 0.24;
int multiplier = 2000;
int result = static_cast<int> (val * multiplier);
printf("result = %d (should be 480)\n", result);
return 0;
}

The above code prints 480 on SunOS and Windows, but on
Linux with gcc 3.2 it prints 479. Is there a valid
explanation for this difference?
The compilers are using different rounding rules, as I believe
they are allowed to do. Note that 0.24 can't on most machines
be represented as an exact value; what you actually get is
something like 0.23999999999997.


That is probably what is happening on the Linux machine.

So the two compilers that give 480 are rounding up or to the nearest value, and the
Linux box is rounding down. It's the static_cast<int>() that
is doing the rounding.

AFAIK you always round toward zero when converting to int.

Look up the functions std::floor() and std::ceil(), declared
in <cmath>.

HTH

Rob.


BTW, in GNOME Calculator on Linux I get 480. Check out their code and
see why.

--
Noah Roberts
- "If you are not outraged, you are not paying attention."

Jul 19 '05 #5

"Rob Williscroft" <rt*@freenet.REMOVE.co.uk> wrote in message news:Xn**********************************@195.129. 110.201...
It's the static_cast<int>() that is doing the rounding.

It is NOT. The floating point to int conversion always ignores the fractional part.
It is what you originally said: the conversion of the literal .24 to its double value
is picking which of the two representable values it falls between.
Look up the functions std::floor() and std::ceil(), declared
in <cmath>.


These aren't going to help.
Jul 19 '05 #6
Ron Natalie wrote:
It's the static_cast<int>() that is doing the rounding.

It is NOT. The floating point to int conversion always ignores the fractional part.


Sorry, but this is not true. Try the code with gcc on Solaris
and you'll find that static_cast<int> *rounds* to the nearest
int, rather than *truncating* as gcc on Linux does.
Look up the functions std::floor() and std::ceil(), declared
in <cmath>.


int result = static_cast<int> (round(val * multiplier));

is the workaround I use to get consistent behaviour on all compilers
(i.e. to get Linux gcc to behave like everything else).

- Keith

Jul 19 '05 #7

"Keith S." <fa***@ntlworld.com> wrote in message news:bm************@ID-169434.news.uni-berlin.de...
Sorry, but this is not true. Try the code with gcc on Solaris
and you'll find that static_cast<int> *rounds* to the nearest
int, rather than *truncating* as gcc on Linux does.
Nonsense. I just tried it and it truncates. It has to. I've been writing
code for over a decade that relies on this behavior and I've never
come across a compiler that gets it wrong yet.

While rounding behavior in the FLOATING POINT calculations is at the
discretion of the compiler (and IEEE FP defaults to round-to-nearest), the
floating point to integer conversion in C and C++ is mandated to be
truncation. This runs into fun and games on the Pentium, as g++ as
well as several other compilers do something really stupid to accomplish
the truncation that kills performance (setting the FPU control word to change
the rounding mode).
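
For reference, a tiny illustration of that mandated behaviour: conversion
from floating point to int discards the fractional part and truncates
toward zero, for negative values as well (a minimal sketch, nothing
implementation-specific):

#include <cstdio>

int main()
{
    std::printf("%d %d %d %d\n",
                static_cast<int>(0.9),     // 0
                static_cast<int>(1.9),     // 1
                static_cast<int>(-1.9),    // -1  (toward zero, not floor)
                static_cast<int>(479.9));  // 479
    return 0;
}
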
Look up the functions std::floor() and std::ceil(), declared
in <cmath>.


int result = static_cast<int> (round(val * multiplier));


round() is neither floor() nor ceil().
The static_cast, by the way, is totally unnecessary (other than to suppress a possible
compiler warning).
Jul 19 '05 #8
Ron Natalie wrote:
Nonsense. I just tried it and it truncates. It has to. I've been writing
code for over a decade that relies on this behavior and I've never
come across a compiler that gets it wrong yet.
Oh all right then. Here is the result on Linux:

[keith@pc-keiths keith]$ uname -a
Linux pc-keiths 2.4.19-16mdkenterprise #1 SMP Fri Sep 20 17:34:59 CEST
2002 i686 unknown unknown GNU/Linux
[keith@pc-keiths keith]$ gcc test.cpp
[keith@pc-keiths keith]$ a.out
result = 479 (should be 480)

and here is the same code run on SunOS (VC6 gives the same result too):

45 otto% uname -a
SunOS otto 5.8 Generic_108528-09 sun4u sparc SUNW,Sun-Blade-100
46 otto% gcc test.cpp
47 otto% a.out
result = 480 (should be 480)

The static_cast, by the way, is totally unnecessary (other than to suppress a possible
compiler warning).


which is exactly why it's there :)

- Keith

Jul 19 '05 #9

"Keith S." <fa***@ntlworld.com> wrote in message news:bm************@ID-169434.news.uni-berlin.de...
Ron Natalie wrote:
Nonsense. I just tried it and it truncates. It has to. I've been writing
code for over a decade that relies on this behavior and I've never
come across a compiler that gets it wrong yet.


Oh all right then. Here is the result on Linux:

No, no, no. The conversion to int is NOT rounding. Try looking at the
floating point value BEFORE it is converted to int. In your Sun case it
is slightly more than 480; in the Linux case it is slightly less than 480.
THEN, when it is truncated to int, one gives 480 and the other 479.

The imprecision occurred when .24 was converted to a floating point number.
Try comparing the floating value val*multiplier with 480.0. It's less than 480.0
on Linux and greater than 480.0 on the Sun.
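
A small check along those lines (a sketch only; on x87 builds the
comparison may be done on the 80-bit register value, so the outcome can
depend on compiler and optimisation settings):

#include <cstdio>

int main()
{
    double val = 0.24;
    int multiplier = 2000;

    // Compare the product against 480.0 before any conversion to int.
    if (val * multiplier < 480.0)
        std::printf("product is just below 480 -> truncates to 479\n");
    else
        std::printf("product is 480 (or above) -> truncates to 480\n");

    std::printf("result = %d\n", static_cast<int>(val * multiplier));
    return 0;
}
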
Jul 19 '05 #10
Keith S. wrote:
Hi Folks,

When converting a double to an int, the result is not as
I'd expect on Linux:

#include <stdio.h>
int
main(int argc, char **argv)
{
double val = 0.24;
int multiplier = 2000;
int result = static_cast<int> (val * multiplier);
printf("result = %d (should be 480)\n", result);
return 0;
}

The above code prints 480 on SunOS and Windows, but on
Linux with gcc 3.2 it prints 479. Is there a valid
explanation for this difference?


Check the status of the floating point chip. The calculation
may be being done with 80-bit precision in the first two
cases and 64-bit precision in the latter case.

Jul 19 '05 #11
Ron Natalie wrote:
No, no, no. The conversion to int is NOT rounding. Try looking at the
floating point value BEFORE it is converted to int. In your Sun case it
is slightly more than 480; in the Linux case it is slightly less than 480.
THEN, when it is truncated to int, one gives 480 and the other 479.

The imprecision occurred when .24 was converted to a floating point number.
Try comparing the floating value val*multiplier with 480.0. It's less than 480.0
on Linux and greater than 480.0 on the Sun.


Hmm, you're right. However, this doesn't help the original problem,
which is that the behaviour is different on different platforms,
and gcc on Linux seems to be the odd one out. Every other
platform/compiler I've tried gives the expected answer
(including my 27-year-old pocket calculator).
- Keith

Jul 19 '05 #12
lilburne wrote:
Check the status of the floating point chip. The calculation
may be being done with 80-bit precision in the first two
cases and 64-bit precision in the latter case.


How do you do this?

- Keith

Jul 19 '05 #13
Keith S. wrote:
lilburne wrote:
Check the status of the floating point chip. The calculation
may be being done with 80-bit precision in the first two
cases and 64-bit precision in the latter case.

How do you do this?


Examine the FPU control word
http://www.wrcad.com/linux_numerics.txt
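
On x86 Linux with glibc you can read (and, if you want, narrow) the x87
precision bits in the control word. A minimal sketch, assuming glibc's
<fpu_control.h> and that the multiply actually happens on the x87 at run
time rather than being folded at compile time; other platforms expose
this differently, or not at all:

#include <cstdio>
#include <fpu_control.h>

int main()
{
    fpu_control_t cw;
    _FPU_GETCW(cw);                                  // read the control word
    std::printf("control word = 0x%x\n", (unsigned)cw);

    // Drop from 80-bit extended precision to 64-bit double precision,
    // roughly what the SunOS and Windows builds are computing with.
    cw = (cw & ~_FPU_EXTENDED) | _FPU_DOUBLE;
    _FPU_SETCW(cw);

    double val = 0.24;
    int multiplier = 2000;
    std::printf("result = %d\n", static_cast<int>(val * multiplier));
    return 0;
}

With the precision control narrowed, the product should be rounded to a
64-bit double at the multiply, which in this case lands on exactly 480.0,
so the truncation then gives 480 rather than 479.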

Jul 19 '05 #14
lilburne wrote:
Examine the FPU control word
http://www.wrcad.com/linux_numerics.txt


Thanks, very interesting article. A shame that the
Linux developers couldn't see that pedantic accuracy
is less important than sensible results.

- Keith

Jul 19 '05 #15
Keith S. wrote:
lilburne wrote:
Examine the FPU control word
http://www.wrcad.com/linux_numerics.txt

Thanks, very interesting article. A shame that the
Linux developers couldn't see that pedantic accuracy
is less important than sensible results.


Well, whether you do 64-bit or 80-bit FP operations isn't
really the issue. The problem is that code like

int i = 0.24*2000;

or

if (x == y) {
...
}

where x and y are doubles, are actually bugs if you care
about accuracy. FP calculations are essentially inaccurate
and great care needs to be taken to ensure the stability of
FP results. This is one of the reasons why we test our
application on more than one architecture.

Jul 19 '05 #16

"lilburne" <li******@godzilla.net> wrote in message news:bm************@ID-203936.news.uni-berlin.de...

where x and y are doubles, are actually bugs if you care
about accuracy. FP calculations are essentially inaccurate


They are not "essentially inaccurate" unless you've got a really sloppy
implementation. The issue is that numbers which appear to be exactly representable
in decimal are NOT in floating point, yielding small errors.
Jul 19 '05 #17
Ron Natalie wrote:
"lilburne" <li******@godzilla.net> wrote in message news:bm************@ID-203936.news.uni-berlin.de...

where x and y are doubles, are actually bugs if you care
about accuracy. FP calculations are essentially inaccurate

They are not "essentially inaccurate" unless you've got a really sloppy
implementation. The issue is that numbers which appear to be exactly representable
in decimal are NOT in floating point, yielding small errors.


Seems like you're saying that FP calculations are
"essentially inaccurate" too. The small error exhibited here
resulted in a gross difference in result when the integer
conversion took place.

Those who care about the maths go to great pains to avoid
instability in the expressions used, and are particularly
careful about rounding errors and loss of significance.

Jul 19 '05 #18
Ron Natalie wrote in news:3f*********************@news.newshosting.com:

"Rob Williscroft" <rt*@freenet.REMOVE.co.uk> wrote in message
news:Xn**********************************@195.129. 110.201...
It's the static_cast<int>() that is doing the rounding.


It is NOT. The floating point to int conversion always ignores the
fractional part. It is what you originally said: the conversion of the
literal .24 to its double value is picking which of the two
representable values it falls between.


Right, thanks for the correction.

Rob.
--
http://www.victim-prime.dsl.pipex.com/
Jul 19 '05 #19

It is strange, though, that static_cast changes the result in that
unexpected way.
On Linux, r1 and r2 will differ:

double tmp = val * multiplier;
int r1 = static_cast<int> (tmp);
int r2 = static_cast<int> (val * multiplier);

/Mattias
lilburne wrote:
Keith S. wrote:
Hi Folks,

When converting a double to an int, the result is not as
I'd expect on Linux:

#include <stdio.h>
int
main(int argc, char **argv)
{
double val = 0.24;
int multiplier = 2000;
int result = static_cast<int> (val * multiplier);
printf("result = %d (should be 480)\n", result);
return 0;
}

The above code prints 480 on SunOS and Windows, but on
Linux with gcc 3.2 it prints 479. Is there a valid
explanation for this difference?


Check the status of the floating point chip. The calculation
may be being done with 80-bit precision in the first two
cases and 64-bit precision in the latter case.


Jul 19 '05 #20
Mattias Ekholm wrote:

It is strange, though, that static_cast changes the result in that
unexpected way.
On Linux, r1 and r2 will differ:

double tmp = val * multiplier;
int r1 = static_cast<int> (tmp);
int r2 = static_cast<int> (val * multiplier);


Shrug - you'd expect the optimizer to only do the
calculation once, and in any case to produce the same code.
The only way that you can really determine what is going on
is to examine the compiler output (on g++ use the -S
option). But all FP calculations, and particularly
comparisons, should be done with caution; you should always
be prepared for inaccuracy and code for it.

An example of FP problems with division on my system:

#include <iostream>
using namespace std;

int main()
{
    double d = 123456.789012345;
    double e = d;
    cout.precision(20);
    cout << d << endl;
    d *= 10.0;
    cout << d << endl;
    cout << "attempting multiplication" << endl;
    double f = d;
    d *= 0.1;
    cout << d << endl;
    if (e != d) {
        cout << "Not equal" << endl;
    }
    cout << "attempting division" << endl;
    f /= 10.0;
    cout << f << endl;
    if (e != f) {
        cout << "Not equal" << endl;
    }
    return 0;
}

Jul 19 '05 #21

"Mattias Ekholm" <ne**@ekholm.se> wrote in message news:4U*******************@newsb.telia.net...

It is strange, though, that static_cast changes the result in that
unexpected way.
On Linux, r1 and r2 will differ:

double tmp = val * multiplier;
int r1 = static_cast<int> (tmp);
int r2 = static_cast<int> (val * multiplier);

This is most likely a bug that nobody is worked up enough about to argue
about.

The Intel floating point processor is able to handle 80-bit floats. When it does
the store to the 64-bit double, it rounds (because MOST of the time
the processor is left in round-to-nearest mode), so the stored 64-bit value is
rounded up to exactly 480. The direct conversion from the 80-bit register in
truncation mode sees a value less than 480.0, so it stores 479. The conversion
from 80-bit float to 32-bit int does not use a 64-bit intermediary.
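
A sketch that makes the store-versus-register difference visible.
Behaviour depends on compiler, optimisation level, and whether x87
extended precision is in use, so treat the comments as what this
particular setup is expected to show, not a guarantee:

#include <cstdio>

int main()
{
    double val = 0.24;
    int multiplier = 2000;

    // Converting the expression directly may take the x87's 80-bit
    // register value, which is just under 480, so it truncates to 479.
    int direct = static_cast<int>(val * multiplier);

    // Storing to a double first forces a round to 64 bits, which lands
    // on exactly 480.0, so the truncation then gives 480.
    double stored = val * multiplier;
    int via_store = static_cast<int>(stored);

    std::printf("direct = %d, via_store = %d\n", direct, via_store);
    return 0;
}
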
Jul 19 '05 #22
lilburne wrote:
Well whether you do 64 bit or 80 bit FP operations isn't really the
issue. The problem is that code like

int i = 0.24*2000;

or

if (x == y) {
...
}

where x and y are doubles, are actually bugs if you care about accuracy.
FP calculations are essentially inaccurate and great care needs to be
taken to ensure the stability of FP results. This is one of the reasons
why we test our application on more than one architecture.


Code like int i = 0.24*2000 is not a bug. The user requested a valid
calculation; the fact that the hardware/software couldn't give the
correct result is the bug.

Anyhow, why does float-to-int conversion truncate? It makes no sense
to me; it should round. 0.999999 should convert to 1, not 0.
- Keith

Jul 19 '05 #23
Keith S. wrote:
lilburne wrote:
Well whether you do 64 bit or 80 bit FP operations isn't really the
issue. The problem is that code like

int i = 0.24*2000;

or

if (x == y) {
...
}

where x and y are doubles, are actually bugs if you care about
accuracy. FP calculations are essentially inaccurate and great care
needs to be taken to ensure the stability of FP results. This is one
of the reasons why we test our application on more than one architecture.

Code like int i = 0.24*2000 is not a bug. The user requested a valid
calculation, the fact that the hardware/software couldn't give the
correct result is the bug.


That is the nature of FP calculations: if you don't like it,
either don't use them, or program in such a way that
inaccurate calculations are expected and dealt with. See
"Seminumerical Algorithms" by Knuth. Digital is not analog.

Anyhow, why does float-to-int conversion truncate? It makes no sense
to me; it should round. 0.999999 should convert to 1, not 0.

It's been that way since way before languages like C or C++
were invented. If you want to round to the nearest integer
you add 0.5 first.

Jul 19 '05 #24
lilburne wrote:
It's been that way since way before languages like C or C++ were
invented. If you want to round to the nearest integer you add 0.5 first.


Err, stupidity may well go back a long way but why encourage it?

- Keith

Jul 19 '05 #25
Keith S. wrote:
lilburne wrote:
It's been that way since way before languages like C or C++ were
invented. If you want to round to the nearest integer you add 0.5 first.

Err, stupidity may well go back a long way but why encourage it?


Because it has predictable behaviour, and there is a simple
solution of adding 0.5 before converting to an int.
Jul 19 '05 #26
lilburne wrote:
Because it has predictable behaviour, and there is a simple solution of
adding 0.5 before converting to an int.


But rounding is predictable. If the fractional part is
less than 0.5, round down, else round up. What's difficult
about that?

- Keith

Jul 19 '05 #27
On Mon, 20 Oct 2003 07:29:58 +0100, Keith S. <fa***@ntlworld.com> wrote:
lilburne wrote:
Because it has predictable behaviour, and there is a simple solution of
adding 0.5 before converting to an int.

double x = -10.9;
std::cout static_cast<int>(x+0.5) << std::endl;


But rounding is predictable. If the fractional part is
less than 0.5, round down, else round up. What's difficult
about that?


Many accountants would disagree with your simple rounding scheme.
As would many statisticians. As would many computer scientists.

--
Sam Holden

Jul 19 '05 #28
On 20 Oct 2003 07:26:39 GMT, Sam Holden <sh*****@flexal.cs.usyd.edu.au> wrote:

std::cout static_cast<int>(x+0.5) << std::endl;
          ^
Insert "<< ", and call me an idiot.

--
Sam Holden

Jul 19 '05 #29


"Keith S." wrote:

lilburne wrote:
Because it has predictable behaviour, and there is a simple solution of
adding 0.5 before converting to an int.
But rounding is predictable.


Really?

The same problem you saw at the border of 0.999999998 to 1.0 occurs
at the border from 0.499999999 to 0.5.

That's the way it is and there is nothing you (or I) can do about it.
It is an inherent property of how floating point calculations are done on
a computer. Learn to live with it.

The next pitfall waiting for you is the comparison.
An experienced programmer doesn't write

if( SomeDoubleNumber == 0.24 )
...

for the very same reason. Depending on the history of SomeDoubleNumber
(what you did to that variable beforehand), the number in it may be greater
than 0.24 or may be less than 0.24, but it is almost never exactly 0.24.

The way to deal with it is to change your way of thinking. With floating point
numbers you never ask: are they equal? Instead you ask: is the difference small
enough that I can treat them as being equal?

if( fabs( SomeDoubleNumber - 0.24 ) < SomeEpsilon )
// they are equal

What you insert for SomeEpsilon is again dependent on what you previously
did to SomeDoubleNumber.
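
Put together as a small, self-contained example (the epsilon of 1e-9 here
is an arbitrary illustrative choice; a suitable tolerance always depends
on the computation that produced the values):

#include <cstdio>
#include <cmath>

// Treat two doubles as equal if they differ by less than some tolerance.
bool nearly_equal(double a, double b, double epsilon)
{
    return std::fabs(a - b) < epsilon;
}

int main()
{
    double val = 0.24;
    double product = val * 2000.0;

    std::printf("product == 480.0      : %s\n", (product == 480.0) ? "yes" : "no");
    std::printf("nearly_equal(product) : %s\n",
                nearly_equal(product, 480.0, 1e-9) ? "yes" : "no");
    return 0;
}
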
If the fractional part is
less than 0.5, round down, else round up. What's difficult
about that?


Nothing. In the time it has taken you to write your reply you
could have implemented a round() function which does exactly what you
want. This round function is sufficient for you, but there are other
ways of rounding too and that's why there is no such function
prebuilt.

--
Karl Heinz Buchegger
kb******@gascad.at
Jul 19 '05 #30
Karl Heinz Buchegger wrote:
Really?

The same problem you saw at the border of 0.999999998 to 1.0 occurs
at the border from 0.499999999 to 0.5.


That's a different problem. The specific one I had was with the
fact that Linux gcc is doing the conversion from a floating-point value with
80-bit accuracy rather than 64-bit accuracy. By trying to be
more accurate, it is non-portable. Interestingly even M$ VC6
gets it right, along with the other platforms I need to support.

Oh well, more #ifdef linux_x86...

- Keith

Jul 19 '05 #31


"Keith S." wrote:

Karl Heinz Buchegger wrote:
Really?

The same problem you saw at the border of 0.999999998 to 1.0 occurs
at the border from 0.499999999 to 0.5.
That's a different problem. The specific one I had was with the
fact that Linux gcc is doing the conversion from a floating-point value with
80-bit accuracy rather than 64-bit accuracy. By trying to be
more accurate, it is non-portable.


You are under a wrong impression:
floating point arithmetic is never portable.
That's because C++ (as well as C) doesn't specify how floating
point arithmetic has to be done. There are various schemes around,
and all of them suffer from the same problem (stuffing infinitely
many numbers into a fixed number of bytes) and have different
ways of dealing with it.

Interestingly even M$ VC6
gets it right, along with the other platforms I need to support.

There is no right or wrong.
Change the specific numbers and what you feel is 'right' turns around.

Oh well, more #ifdef linux_x86...


No. That's the wrong way.
Adding epsilons, rounding corrections and accounting for
numerical problems is the way to go.

double tmp = ...;

int i = tmp + 0.5;

or, to take sign into account:

int round( double d )
{
    // add 0.5 (or -0.5 for negative values) and let the
    // conversion to int truncate toward zero
    if( d > 0.0 )
        return d + 0.5;
    return d - 0.5;
}

"Working with floating point numbers is like moving
piles of sand. Every time you do it, you lose a
little sand and add a little dirt."

It is this dirt you need to take into account. No floating
point hardware or system can change that. And a few #ifdef's
are certainly not the tool to deal with that fact.

--
Karl Heinz Buchegger
kb******@gascad.at
Jul 19 '05 #32
"Keith S." wrote:

lilburne wrote:
Because it has predictable behaviour, and there is a simple solution of
adding 0.5 before converting to an int.


But rounding is predictable. If the fractional part is
less than 0.5, round down, else round up. What's difficult
about that?


That's one way to round. There are several more. You can round toward
zero (i.e. truncate, in typical implementations), round down (negative
numbers get more negative), round up. Then there's the question of what
to do with that 0.5. Most people round that one up. Banker's rounding
rounds to the nearest even value (1.5 goes to 2.0, 2.5 also goes to
2.0). That removes a slight bias.

For more details, see www.petebecker.com/js200007.html.
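
A sketch of the round-half-to-even rule Pete describes, written by hand
so the tie-breaking is explicit (library facilities for this exist on
many platforms, but the hand-rolled version shows the rule itself):

#include <cstdio>
#include <cmath>

// Round to the nearest integer, breaking ties toward the even neighbour
// ("banker's rounding").
double round_half_even(double x)
{
    double lower = std::floor(x);
    double frac = x - lower;
    if (frac > 0.5) return lower + 1.0;
    if (frac < 0.5) return lower;
    // exactly halfway: choose whichever neighbour is even
    return (std::fmod(lower, 2.0) == 0.0) ? lower : lower + 1.0;
}

int main()
{
    std::printf("%g %g %g %g\n",
                round_half_even(1.5),    // 2
                round_half_even(2.5),    // 2
                round_half_even(3.5),    // 4
                round_half_even(-0.5));  // 0
    return 0;
}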

--

Pete Becker
Dinkumware, Ltd. (http://www.dinkumware.com)
Jul 19 '05 #33

"Keith S." <fa***@ntlworld.com> wrote in message news:bm************@ID-169434.news.uni-berlin.de...
lilburne wrote:
Because it has predictable behaviour, and there is a simple solution of
adding 0.5 before converting to an int.


But rounding is predictable. If the fractional part is
less than 0.5, round down, else round up. What's difficult
about that?


Truncation is predictable as well. Predictability is good, but it's not
the reason.
Jul 19 '05 #34

"Karl Heinz Buchegger" <kb******@gascad.at> wrote in message news:3F***************@gascad.at...
You are under a wrong impression:
floating point arithmetic is never portable.


Still, the fact that storing a double to a variable changes its value is
really bogus.
Jul 19 '05 #35
