Here are some notes I've collected during the last weeks on the language (please note that some (most?) of them are probably wrong/useless/silly, but I've seen that such notes help me understand a lot of things and find my weak spots).

1) I've seen that list pop/append is amortised O(1) at its tail, but not at its head. For that there is deque. But I think dynamic arrays can be made with some buffer at both head and tail (you need to keep an index S to skip the head buffer, and you have to use this index on every access to the elements of the list). I think the most important design goal of Python's built-in data types is flexibility (and safety), instead of just speed (but dictionaries are speedy ^_^), so why are there deques instead of lists with amortised head & tail operations? (My hypothesis: to keep the list implementation a bit simpler, to avoid wasting memory on the head buffer, and to keep lists a little faster by avoiding the skip index S.)
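The head-buffer idea can be sketched like this (my own toy code, certainly nothing like the real CPython list implementation):

```python
# Toy sketch of a list with a head buffer and a skip index S (hypothetical;
# CPython's real list works differently). Head pops/appends become amortised
# O(1), but every element access pays one extra index addition.
class HeadTailList:
    def __init__(self, items=()):
        self._buf = list(items)
        self._skip = 0  # the index S that hides the consumed head slots

    def append(self, x):
        self._buf.append(x)

    def appendleft(self, x):
        if self._skip == 0:
            # Grow the head buffer geometrically, like list growth at the tail,
            # so the O(n) shift is paid only rarely (amortised O(1)).
            grow = max(1, len(self._buf))
            self._buf[:0] = [None] * grow
            self._skip = grow
        self._skip -= 1
        self._buf[self._skip] = x

    def popleft(self):
        x = self._buf[self._skip]
        self._buf[self._skip] = None  # drop the reference
        self._skip += 1
        return x

    def __getitem__(self, i):
        return self._buf[self._skip + i]  # every access uses S

    def __len__(self):
        return len(self._buf) - self._skip
```

collections.deque already gives O(1) operations at both ends without taxing every element access (at the cost of O(n) indexing in the middle), which fits the hypothesis above.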

2) I usually prefer explicit verbose syntax to cryptic symbols (like the decorator syntax), but I like the infix Pascal syntax ".." to specify a closed interval (a tuple?) of integers or letters (this syntax isn't meant to deprecate the range function). It reminds me of the "..." syntax sometimes used in mathematics to define a sequence. Examples:

assert 1..9 == tuple(range(1, 10))
for i in 1..12: pass
for c in "a".."z": pass
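The proposed syntax isn't legal Python, but a plain function (the name interval is mine) gives the same closed intervals today:

```python
def interval(a, b):
    """Closed interval a..b of integers, or of single characters."""
    if isinstance(a, str) and isinstance(b, str):
        return tuple(chr(c) for c in range(ord(a), ord(b) + 1))
    return tuple(range(a, b + 1))

assert interval(1, 9) == tuple(range(1, 10))
assert interval("a", "e") == ("a", "b", "c", "d", "e")
```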

3) I think a way to define infix functions can be useful, like this imaginary decorator syntax:

@infix
def interval(x, y): return range(x, y+1)  # 2 parameters needed

This may allow:

assert 5 interval 9 == interval(5, 9)
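The bare `5 interval 9` form can't be had, but a known Python recipe fakes infix calls today with operator overloading (the `|` delimiters are the price to pay):

```python
# A known recipe for pseudo-infix operators via | overloading;
# the @infix decorator name above becomes this wrapper class.
class Infix:
    def __init__(self, f):
        self.f = f

    def __ror__(self, left):              # handles:  left |op
        return Infix(lambda right: self.f(left, right))

    def __or__(self, right):              # handles:  op| right
        return self.f(right)

    def __call__(self, *args):            # plain calls still work
        return self.f(*args)

@Infix
def interval(x, y):
    return list(range(x, y + 1))

assert (5 |interval| 9) == interval(5, 9) == [5, 6, 7, 8, 9]
```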

4) The printf-style formatting is powerful, but I still think it's quite complex for usual purposes, and I usually have to look up its syntax in the docs. I think the Pascal syntax is nice and simpler to remember (especially for someone with a little Pascal/Delphi experience ^_^); it uses two ":" to format floating point numbers (the second :number is optional). For example this Delphi program:

{$APPTYPE CONSOLE}
const a = -12345.67890;
begin
  writeln(a);
  writeln(a:2:0);
  writeln(a:4:2);
  writeln(a:4:20);
  writeln(a:12:2);
end.

Gives:

-1.23456789000000E+0004

-12346

-12345.68

-12345.67890000000000000000

-12345.68

(The last line starts with 3 spaces)
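For comparison, the Delphi colon syntax a:width:digits maps fairly directly onto Python's printf-style %width.digitsf:

```python
# The same value formatted with Python's % operator; each line mirrors
# one of the Delphi writeln calls above.
a = -12345.67890
assert "%4.2f" % a == "-12345.68"      # like writeln(a:4:2)
assert "%12.2f" % a == "   -12345.68"  # like writeln(a:12:2), 3 leading spaces
assert "%.0f" % a == "-12346"          # roughly like writeln(a:2:0)
```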

5) From the docs about round:

Values are rounded to the closest multiple of 10 to the power minus n; if two multiples are equally close, rounding is done away from 0 (so, for example, round(0.5) is 1.0 and round(-0.5) is -1.0).

Example:

a = [0.05 + x/10.0 for x in range(10)]
for x in a: print x,
print
for x in a: print str(round(x, 1)) + " ",

Gives:

0.05 0.15 0.25 0.35 0.45 0.55 0.65 0.75 0.85 0.95

0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0

But to avoid a bias toward rounding up there is another way to do this:

If the digit immediately to the right of the last sig. fig. is more than 5, you round up.

If the digit immediately to the right of the last sig. fig. is less than 5, you round down.

If the digit immediately to the right of the last sig. fig. is equal to 5, you round up if the last sig. fig. is odd and round down if the last sig. fig. is even; you round up if the 5 is followed by nonzero digits, regardless of whether the last sig. fig. is odd or even.

http://www.towson.edu/~ladon/roundo~1.html

http://mathforum.org/library/drmath/view/58972.html

http://mathforum.org/library/drmath/view/58961.html
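That round-half-to-even rule ("banker's rounding") is already available in Python's decimal module; values are given as strings here to sidestep binary-float representation issues:

```python
# Round-half-to-even via the decimal module; this follows exactly the
# three rules quoted above.
from decimal import Decimal, ROUND_HALF_EVEN

def round_even(s, exp="0.1"):
    return str(Decimal(s).quantize(Decimal(exp), rounding=ROUND_HALF_EVEN))

assert round_even("0.25") == "0.2"   # last kept digit even: round down
assert round_even("0.35") == "0.4"   # last kept digit odd: round up
assert round_even("0.251") == "0.3"  # nonzero digits after the 5: round up
```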

6) map(function, list, ...) applies function to every item of list and returns a list of the results. If list is a nested data structure, map applies function just to the top-level objects.

In Mathematica there is another parameter to specify the "level" of the application.

So:

map(str, [[1,[2]], 3])

==>

['[1, [2]]', '3']

With a hypothetical level parameter (the default level = 1 gives the normal Python map):

map(str, [[1,[2]], 3], level=1)

==>

['[1, [2]]', '3']

map(str, [[1,[2]], 3], level=2)

==>

['1', '[2]', '3']

I think this semantics can be extended:

level=0 means that the map is performed down to the leaves (using 0 to mean infinity isn't nice, but it can be useful because I think Python doesn't contain a built-in Infinity).

level=-1 means that the map is performed at the level just before the leaves.

level=-n means that the map is performed n levels before the leaves.
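A sketch of the positive levels (hypothetical function, not in Python's map; negative levels are omitted). level=1 reproduces the ordinary flat map, level=0 recurses down to the leaves:

```python
# Hypothetical "map with levels": descend into nested lists up to the
# given level, apply f there, and collect the results in a flat list.
def map_level(f, seq, level=1):
    out = []
    def walk(x, depth):
        # depth None means unlimited descent (the level=0 case)
        if isinstance(x, list) and (depth is None or depth > 1):
            for e in x:
                walk(e, None if depth is None else depth - 1)
        else:
            out.append(f(x))
    for e in seq:
        walk(e, None if level == 0 else level)
    return out

assert map_level(str, [[1, [2]], 3], level=1) == ['[1, [2]]', '3']
assert map_level(str, [[1, [2]], 3], level=2) == ['1', '[2]', '3']
assert map_level(str, [[1, [2]], 3], level=0) == ['1', '2', '3']
```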

7) Maybe it can be useful to extend the reload(module) semantics:

reload(module [, recurse=False])

If recurse=True it reloads the module and, recursively, all the modules it imports.

8) Why is reload a function while import is a statement? (So why isn't reload a statement too, or why aren't both functions?)

9) Functions without a return statement return None:

def foo(x): print x

I think the compiler/interpreter can give a "compilation warning" where such function results are assigned to something:

y = foo(x)

(I know that some of such cases cannot be spotted at compile time, but catching the other cases can be useful too.)

I don't know if PyChecker does this already. Generally speaking I'd like to see some of those checks built into the normal interpreter. Instructions like:

open = "hello"

are legal, but maybe a "compilation warning" can be useful here too (and maybe even a runtime warning if a verbose flag is set).
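The second check can be prototyped in a few lines (hypothetical helper name, using the builtins module of today's Python):

```python
# Detect assignments that would shadow a built-in name. A real linter
# would walk the AST; this just checks candidate names.
import builtins

def shadowed_builtins(names):
    """Return the names that would shadow a built-in if assigned to."""
    return [n for n in names if hasattr(builtins, n)]

assert shadowed_builtins(["open", "total", "id"]) == ["open", "id"]
```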

10) There can be something in the middle between the def statement and the lambda. For example it can be called "fun" (or it can still be called "def"). With it maybe both def and lambdas aren't necessary anymore. Examples:

cube = fun x:
    return x**3

sorted(data, fun x,y: return x-y)

(Probably now it's almost impossible to modify this in the language.)

11) This is just a wild idea for an alternative syntax to specify a global variable inside a function. From:

def foo(x):
    global y
    y = y + 2

To:

def foo(x): global.y = global.y + 2

Besides global.y, maybe a syntax like upper.y or caller.y can exist, meaning the name y in the enclosing context; upper.upper.y, etc.

12) Mathematica's interactive IDE suggests possible spelling errors; this feature is often useful, it works with built-in function names too, and it can be switched off.

In[1]:= sin2 = N[Sin[2]]
Out[1]= 0.909297

In[2]:= sina2
General::spell1 : Possible spelling error: new symbol name "sina2" is similar to existing symbol "sin2".
Out[2]= sina2

I don't know if some Python IDEs (or IPython) do this already, but it can be useful in Pythonwin.
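The standard difflib module can already power such suggestions, so an IDE or shell hook could do this with very little code:

```python
# Fuzzy name suggestions, as an IDE might offer for an unknown symbol.
import difflib

known_names = ["sin2", "print", "range"]
suggestions = difflib.get_close_matches("sina2", known_names)
assert suggestions == ["sin2"]
```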

13) In the Mathematica language = has the same meaning as in Python, but := is different:

lhs := rhs assigns rhs to be the delayed value of lhs. rhs is maintained in an unevaluated form. When lhs appears, it is replaced by rhs, evaluated afresh each time.

I don't know if this can be useful...
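One rough Python analogue of the delayed := is to bind a zero-argument callable and call it at each use, so the right-hand side is re-evaluated every time:

```python
# Delayed evaluation by hand: lhs holds the unevaluated rhs,
# and each lhs() call re-evaluates it afresh.
calls = [0]

def rhs():
    calls[0] += 1
    return calls[0] * 10

lhs = rhs           # "delayed": nothing has been evaluated yet
assert lhs() == 10  # evaluated now...
assert lhs() == 20  # ...and afresh each time, unlike lhs = rhs()
```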

------------------

14) In one of my last emails of notes I've tried to explain the pattern matching programming paradigm of Mathematica. Josiah Carlson answered me:

http://groups-beta.google.com/group/...5600094cb281c1

"In the C/C++ world, that is called polymorphism. You can do polymorphism with Python, and decorators may make it easier..."

This kind of programming is like using a kind of regular expression on the parameters of functions. Here is a quick list of those operators, from the (copyrighted) online help:

_ or Blank[ ] is a pattern object that can stand for any Mathematica expression (this info comes from e.g. http://documents.wolfram.com/mathema...unctions/Blank). It is used for example in the definition of functions:

f[x_] := x^2

__ (two _ characters) or BlankSequence[ ] is a pattern object that can

stand for any sequence of one or more Mathematica expressions.

___ (three _ characters) or BlankNullSequence[ ] is a pattern object

that can stand for any sequence of zero or more Mathematica

expressions.

___h or BlankNullSequence[h] can stand for any sequence of expressions,

all of which have head h.

p1 | p2 | ... is a pattern object which represents any of the patterns

pi

s:obj represents the pattern object obj, assigned the name s. When a transformation rule is used, any occurrence of s on the right-hand side is replaced by whatever expression it matched on the left-hand side. The operator : has a comparatively low precedence; the expression x:_+_ is thus interpreted as x:(_+_), not (x:_)+_.

p:v is a pattern object which represents an expression of the form p,

which, if omitted, should be replaced by v. Optional is used to specify

"optional arguments" in functions represented by patterns. The pattern

object p gives the form the argument should have, if it is present. The

expression v gives the "default value" to use if the argument is

absent. Example: the pattern f[x_, y_:1] is matched by f[a], with x

taking the value a, and y taking the value 1. It can also be matched by

f[a, b], with y taking the value b.

p.. is a pattern object which represents a sequence of one or more

expressions, each matching p.

p... is a pattern object which represents a sequence of zero or more

expressions, each matching p.

patt /; test is a pattern which matches only if the evaluation of test

yields True.

Example: f[x_] := fp[x] /; x > 1 defines a function in the case when x > 1.

lhs := Module[{vars}, rhs /; test] allows local variables to be shared

between test and rhs. You can use the same construction with Block and

With.

p?test is a pattern object that stands for any expression which matches

p, and on which the application of test gives True. Ex:

p1[x_?NumberQ] := Sqrt[x]

p2[x_?NumericQ] := Sqr[x]

Verbatim[expr] represents expr in pattern matching, requiring that expr

be matched exactly as it appears, with no substitutions for blanks or

other transformations. Verbatim[x_] will match only the actual

expression x_. Verbatim is useful in setting up rules for transforming

other transformation rules.

HoldPattern[expr] is equivalent to expr for pattern matching, but

maintains expr in an unevaluated form.

Orderless is an attribute that can be assigned to a symbol f to indicate that the elements ei in expressions of the form f[e1, e2, ...] should automatically be sorted into canonical order. This property is accounted for in pattern matching.

Flat is an attribute that can be assigned to a symbol f to indicate

that all expressions involving nested functions f should be flattened

out. This property is accounted for in pattern matching.

OneIdentity is an attribute that can be assigned to a symbol f to

indicate that f[x], f[f[x]], etc. are all equivalent to x for the

purpose of pattern matching.

Default[f], if defined, gives the default value for arguments of the

function f obtained with a _. pattern object.

Default[f, i] gives the default value to use when _. appears as the

i-th argument of f.

Cases[{e1, e2, ...}, pattern] gives a list of the ei that match the pattern.

Cases[{e1, e2, ...}, pattern -> rhs] gives a list of the values of rhs

corresponding to the ei that match the pattern.

Position[expr, pattern] gives a list of the positions at which objects

matching pattern appear in expr.

Select[list, crit] picks out all elements ei of list for which crit[ei] is True.

DeleteCases[expr, pattern] removes all elements of expr which match

pattern.

DeleteCases[expr, pattern, levspec] removes all parts of expr on levels

specified by levspec which match pattern.

Example : DeleteCases[{1, a, 2, b}, _Integer] ==> {a, b}

Count[list, pattern] gives the number of elements in list that match

pattern.

MatchQ[expr, form] returns True if the pattern form matches expr, and

returns False otherwise.

It may look strange, but an expert can even use these to write small full programs... but usually they are used only when necessary. Note that I'm not suggesting adding (all of) those to Python.
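As a taste of Josiah Carlson's point, today's Python can already dispatch on the type of the first argument, which covers the f[x_Integer]-style patterns, via functools.singledispatch:

```python
# Type-based dispatch, the nearest stock-Python relative of these
# Mathematica patterns ("polymorphism" in Carlson's terms).
from functools import singledispatch

@singledispatch
def describe(x):            # the fallback, like f[x_]
    return "expr"

@describe.register(int)
def _(x):                   # like f[x_Integer]
    return "integer"

@describe.register(list)
def _(x):                   # like f[x_List]
    return "list"

assert [describe(e) for e in [1, "a", [2]]] == ["integer", "expr", "list"]
```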

------------------

15) NetLogo is a kind of Logo derived from StarLogo, implemented in Java.

http://ccl.northwestern.edu/netlogo/

I think it contains some ideas that can be useful for Python too.

- It has some high-level data structures built in, like the usual turtle (but you usually use LOTS of turtles at the same time, in parallel) and the patch (programmable cellular-automata layers; each cell can be programmed and can interact with nearby cells or nearby turtles).

- It contains built-in graphics, because that's often useful for people who are starting to program, and it's useful for lots of other things too. For Python a tiny and easy "fast" graphics library can be useful (Tkinter can be used too, but something simpler can be handy for quick&dirty graphics; maybe such a library can also be faster than Tkinter's pixel plotting and pixel-matrix visualisation).

- It contains a few built-in graph types to plot variables, etc. (for Python there are many external plotters).

- Its built-in widgets are really easy to use (they are defined inside the NetLogo and StarLogo source), but they probably look too toy-like for Python programs...

- This language contains lots of other nice ideas. Some of them probably look too toy-like, but still, the examples at

http://ccl.northwestern.edu/netlogo/models/

show that this language is only partially a toy, and that it can be useful to understand and learn the nonlinear dynamics of many systems. This is a source file; usually some parts of it (like widget positioning and parameters) are managed by the IDE:

http://ccl.northwestern.edu/netlogo/...logy/Fur.nlogo

Bye,

bear hugs,

Bearophile