
Incorrect number of arguments

I'm trying to keep an open mind, but I am perplexed
about something in Python that strikes me as a poor design.

py> def func(a, b):
...     print a, b
...
py> func(1)
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
TypeError: func() takes exactly 2 arguments (1 given)
Why is the exception raised by passing the wrong number
of arguments a TypeError instead of a more specific
exception?

I'm asking because of a practical problem I had. I have
written some test suites for a module, and wanted a
test to ensure that the functions were accepting the
correct number of arguments, e.g. if the specs say a
function takes three arguments, that it actually does
fail as advertised if you pass it the wrong number.

That should be simple stuff to do. Except that I have
to distinguish between TypeErrors raised because of
wrong argument counts, and TypeErrors raised inside the
function.
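
For instance, a naive check along these lines can't tell
the two cases apart:

def fails_with_wrong_args(func, args):
    # Hypothetical helper, just to show the problem.
    try:
        func(*args)
    except TypeError:
        return True   # but was it the call itself, or the body?
    return False

def dodgy(a, b):
    return a + None   # the body raises TypeError too

print fails_with_wrong_args(dodgy, (1, 2))
# prints True, even though the argument count was correct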

To add an extra layer of complication, the error string
from the TypeError differs according to how many
parameters were expected and how many were supplied, e.g.:

func() takes exactly 2 arguments (1 given)
func() takes at least 2 arguments (1 given)
func() takes at most 1 argument (2 given)
etc.

I worked around this problem by predicting what error
message to expect given N expected arguments and M
supplied arguments. Yuck: this is a messy, inelegant,
ugly hack :-( Thank goodness that functions are first
class objects that support introspection :-)
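
Roughly, for the simple case of a function with no
defaults and no * arguments, the sort of thing needed is:

def expected_message(funcname, n_expected, n_given):
    # Predict CPython's wording (simple case only; the real thing
    # also has to cope with "at least" and "at most").
    if n_expected == 1:
        noun = "argument"
    else:
        noun = "arguments"
    return "%s() takes exactly %d %s (%d given)" % (
        funcname, n_expected, noun, n_given)

def fails_as_advertised(func, n_expected, args):
    try:
        func(*args)
    except TypeError, err:
        return str(err) == expected_message(
            func.__name__, n_expected, len(args))
    return False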

So, I'm wondering if there is a good reason why
TypeError is generated instead of (say) ArgumentError,
or if it is just a wart of the language for historical
reasons?
--
Steven.
Jul 19 '05 #1


Steven D'Aprano wrote:
I worked around this problem by predicting what error message to expect
given N expected arguments and M supplied arguments. Yuck: this is a
messy, inelegant, ugly hack :-( Thank goodness that functions are first
class objects that support introspection :-)
*eureka moment*

I can use introspection on the function directly to see
how many arguments it accepts, instead of actually
calling the function and trapping the exception.
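
Something along these lines ought to do it (Python 2.x,
using either the code object directly or the inspect module):

import inspect

def func(a, b):
    print a, b

# Number of positional parameters, straight off the code object:
print func.func_code.co_argcount            # -> 2

# Or, more readably:
args, varargs, varkw, defaults = inspect.getargspec(func)
print len(args), varargs, varkw, defaults   # -> 2 None None None
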
So, I'm wondering if there is a good reason why TypeError is generated
instead of (say) ArgumentError, or if it is just a wart of the language
for historical reasons?


Still a good question though. Why is it TypeError?

--
Steven.

Jul 19 '05 #2

Steven D'Aprano wrote:
*eureka moment*

I can use introspection on the function directly to see
how many arguments it accepts, instead of actually
calling the function and trapping the exception.
For funsies, the function 'can_call' below takes a function 'f'
and returns a new function 'g'. Calling 'g' with a set of
arguments returns True if 'f' would take the arguments,
otherwise it returns False. See the test case for an
example of use.
import new

def noop():
    pass

def can_call(func):
    # Make a new function with the same signature

    # code(argcount, nlocals, stacksize, flags, codestring, constants, names,
    #      varnames, filename, name, firstlineno, lnotab[, freevars[, cellvars]])
    code = func.func_code
    new_code = new.code(code.co_argcount,
                        code.co_nlocals,
                        noop.func_code.co_stacksize,
                        code.co_flags,
                        noop.func_code.co_code,   # don't do anything
                        code.co_consts,
                        code.co_names,
                        code.co_varnames,
                        code.co_filename,
                        "can_call_" + code.co_name,
                        code.co_firstlineno,
                        noop.func_code.co_lnotab, # for line number info
                        code.co_freevars,
                        # Do I need to set cellvars? Don't think so.
                        )

    # function(code, globals[, name[, argdefs[, closure]]])
    new_func = new.function(new_code, func.func_globals,
                            "can_call_" + func.func_name,
                            func.func_defaults)

    # Uses a static scope
    def can_call_func(*args, **kwargs):
        try:
            new_func(*args, **kwargs)
        except TypeError, err:
            return False
        return True

    try:
        can_call_func.__name__ = "can_call_" + func.__name__
    except TypeError:
        # Can't change the name in Python 2.3 or earlier
        pass

    return can_call_func

#### test

def spam(x, y, z=4):
    raise AssertionError("Don't call me!")

can_spam = can_call(spam)

for (args, kwargs) in (
        ((1, 2), {}),
        ((1,), {}),
        ((1,), {"x": 2}),
        ((), {"x": 1, "y": 2}),
        ((), {"x": 1, "z": 2}),
        ((1, 2, 3), {}),
        ((1, 2, 3), {"x": 3}),
        ):
    can_spam_result = can_spam(*args, **kwargs)
    try:
        spam(*args, **kwargs)
    except AssertionError:
        could_spam = True
    except TypeError:
        could_spam = False

    if can_spam_result == could_spam:
        continue

    print "Failure:", repr(args), repr(kwargs)
    print "Could I call spam()?", could_spam
    print "Did I think I could?", can_spam_result
    print

print "Done."

Still a good question though. Why is it TypeError?


My guess: in most languages with types, functions are
typed not only on "is callable" but on the parameter
signature. For example, in C:
dalke% awk '{printf("%3d %s\n", NR, $0)}' tmp.c
  1
  2 int f(int x, int y) {
  3 }
  4
  5 int g(int x) {
  6 }
  7
  8 main() {
  9     int (*func_ptr)(int, int);
 10     func_ptr = f;
 11     func_ptr = g;
 12 }
% cc tmp.c
tmp.c: In function `main':
tmp.c:11: warning: assignment from incompatible pointer type
%

'Course the next question might be "then how about an
ArgumentError which is a subclass of TypeError?"
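
A subclass would at least be backwards compatible, since
existing code that catches TypeError would keep working:

class ArgumentError(TypeError):
    # hypothetical; not a real built-in
    pass

try:
    raise ArgumentError("spam() takes exactly 2 arguments (1 given)")
except TypeError:
    print "still caught as a TypeError"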

Andrew
da***@dalkescientific.com

Jul 19 '05 #3
