On 6 Apr 2004 11:52:32 GMT, Duncan Booth <me@privacy.net> wrote:
> If int(string) was changed to have this behaviour as well, then those
> of us who don't want any rounding wouldn't have any way to get the
> current behaviour. Users may be surprised when they enter 2.1 and find
> the program accepted it but didn't use the value they entered; I don't
> like surprising users.
Even this is debatable, as it is possible to spot the error.
>>> '.' in '2.1'
True
Or, to be sure about it...
>>> numstr = '2.1'
>>> ('.' in numstr) or ('E' in numstr.upper())
True
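To make that test reusable, it could be wrapped in a small helper that rejects float-looking strings up front, so you keep strict behaviour even if int() were ever changed to accept them. (A sketch; `parse_int` is a hypothetical name, not anything from the thread.)

```python
def parse_int(numstr):
    """Convert a string to int, explicitly rejecting float-like input.

    Checks for a decimal point or an exponent marker ('e'/'E')
    before handing the string to int().
    """
    if ('.' in numstr) or ('e' in numstr.lower()):
        raise ValueError('float-like string not accepted: %r' % (numstr,))
    return int(numstr)
```

With today's int() this is largely belt-and-braces, since int('2.1') already raises ValueError; the helper just gives a clearer error message.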
You claim it's a case of "Explicit is better than implicit", but I
don't know of any typecast that is explicit about what it is casting
from, in any language, Python included.
One way or the other, some users get a slight extra hassle. In this
case I think Python got it right for two reasons...
1. Wanting to implicitly accept a float in a string as an integer is
relatively unusual, so better to have the small extra hassle in
this case.
2. Accidentally accepting a float in a string as an integer when you
shouldn't is a bad thing - it is usually better to get a highly
visible exception early in development rather than releasing a
program which gives bad results without warning.
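For anyone who does want the one-step behaviour, the explicit two-step conversion spells the intent out in the code. (A sketch; `lenient_int` is a hypothetical name. Note that int(float(s)) truncates toward zero rather than rounding.)

```python
def lenient_int(numstr):
    """Accept both integer and float strings, truncating the latter."""
    try:
        return int(numstr)          # plain integer string, e.g. '7'
    except ValueError:
        # Explicit about the extra step: parse as float, then truncate.
        return int(float(numstr))   # '2.1' -> 2.1 -> 2
```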
But it wouldn't matter that much either way. I've used at least one
language that did the conversion in one step, and that never created a
serious problem. As Mel Wilson said...
: You could argue that unit testing, the universal solvent,
: would clean up this too.
--
Steve Horne
steve at ninereeds dot fsnet dot co dot uk