
SimplePrograms challenge

Hi, I'm offering a challenge to extend the following
page by one good example:

http://wiki.python.org/moin/SimplePrograms

Right now the page starts off with 15 examples that
cover lots of ground in Python, but they're still
scratching the surface. (There are also two Eight
Queens implementations, but I'm looking to fill the
gap in lines-of-code, and they're a little long now.)

I'm looking for a good 16-line code example with the
following qualities:

1) It introduces some important Python concept that
the first 15 programs don't cover.

2) It's not too esoteric. Python newbies are the
audience (but you can assume they're not new to
programming in general).

3) It runs on Python 2.4.

4) It doesn't just demonstrate a concept; it solves
a problem at face value. (It can solve a whimsical
problem, like counting rabbits, but the program itself
should be "complete" and "suitably simple" for the
problem at hand.)

5) You're willing to have your code reviewed by the
masses.

6) No major departures from PEP 8.

Any takers?

-- Steve



Jun 11 '07 #1
# reading CSV files, tuple-unpacking
import csv

#pacific.csv contains:
#1,CA,California
#2,AK,Alaska
#3,OR,Oregon
#4,WA,Washington
#5,HI,Hawaii

reader = csv.reader(open('pacific.csv'))
for id, abbr, name in reader:
    print '%s is abbreviated: "%s"' % (name, abbr)

Jun 11 '07 #2
On Jun 11, 6:56 pm, Steve Howell <showel...@yahoo.com> wrote:
[original challenge post snipped]
Ok, doctest-based version of the Unit test example added; so much more
Pythonic ;-)

André

P.S. Congrats for starting this!

Jun 11 '07 #3
On Jun 12, 8:51 am, infidel <saint.infi...@gmail.com> wrote:
# reading CSV files, tuple-unpacking
import csv

#pacific.csv contains:
#1,CA,California
#2,AK,Alaska
#3,OR,Oregon
#4,WA,Washington
#5,HI,Hawaii

reader = csv.reader(open('pacific.csv'))
For generality and portability, this should be:
reader = csv.reader(open('pacific.csv', 'rb'))
for id, abbr, name in reader:
    print '%s is abbreviated: "%s"' % (name, abbr)
and this example doesn't demonstrate why one should use the csv module
instead of:
for line in open('pacific.csv'):
    id, abbr, name = line.rstrip().split(',')
    # etc
which is quite adequate for the simplistic example file.
Jun 11 '07 #4
On Jun 12, 9:16 am, Steve Howell <showel...@yahoo.com> wrote:
>
One more suggestion--maybe it could exercise a little
more of the csv module, i.e. have something in the
data that would trip up the ','.split() approach?
The what approach?? Do you mean blah.split(',') ??

Perhaps like an example I posted a few days ago:

"Jack ""The Ripper"" Jones","""Eltsac Ruo"", 123 Smith St",,Paris TX
12345
(name and 3 address fields)
[for avoidance of doubt caused by line wrapping, repr(last_field) is
'Paris TX 12345', and the 2nd-last is '']
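
For the curious, here's a minimal sketch (the 'tricky.csv' filename is just
for illustration) of how the csv module copes with a line like that while a
plain split(',') doesn't:

import csv

# write the tricky line out, then read it back both ways
f = open('tricky.csv', 'wb')
f.write('"Jack ""The Ripper"" Jones","""Eltsac Ruo"", 123 Smith St",,Paris TX 12345\r\n')
f.close()

for row in csv.reader(open('tricky.csv', 'rb')):
    print row    # 4 fields: embedded commas and doubled quotes handled

print open('tricky.csv', 'rb').read().split(',')    # naive split breaks the quoted fields apart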

Jun 11 '07 #5
On Jun 11, 4:56 pm, Steve Howell <showel...@yahoo.com> wrote:
[original challenge post snipped]

I just posted a 30-line generator function
on your site. Should I have posted it here
first? Also, why do you count comments and
blank lines instead of lines of executable
code? Are you trying to encourage obfuscation?

Jun 12 '07 #6

Steve Howell wrote:
Hi, I'm offering a challenge to extend the following
page by one good example:

http://wiki.python.org/moin/SimplePrograms
What about simple HTML parsing? As a matter of fact this is not a
language concept, but it shows the power of the Python standard library.
Besides, it's a very popular problem among newbies. This program,
for example, shows all the linked URLs in an HTML document:

<code>
from HTMLParser import HTMLParser

page = '''
<html><head><title>URLs</title></head>
<body>
<ul>
<li><a href="http://domain1/page1">some page1</a></li>
<li><a href="http://domain2/page2">some page2</a></li>
</ul>
</body></html>
'''

class URLLister(HTMLParser):
    def reset(self):
        HTMLParser.reset(self)
        self.urls = []

    def handle_starttag(self, tag, attrs):
        try:
            # get handler for tag and call it e.g. self.start_a
            getattr(self, "start_%s" % tag)(attrs)
        except AttributeError:
            pass

    def start_a(self, attrs):
        href = [v for k, v in attrs if k == "href"]
        if href:
            self.urls.extend(href)

parser = URLLister()
parser.feed(page)
parser.close()
for url in parser.urls: print url
</code>

--
Regards,
Rob

Jun 12 '07 #7
Rob Wolfe wrote:
Steve Howell wrote:
>Hi, I'm offering a challenge to extend the following
page by one good example:

http://wiki.python.org/moin/SimplePrograms

What about simple HTML parsing? As a matter of fact this is not a
language concept, but it shows the power of the Python standard library.
Besides, it's a very popular problem among newbies. This program,
for example, shows all the linked URLs in an HTML document:

<code>
from HTMLParser import HTMLParser
[Sorry if this comes twice, it didn't seem to be showing up]

I'd hate to steer a potential new Python developer to a clumsier library
when Python 2.5 includes ElementTree::

import xml.etree.ElementTree as etree

page = '''
<html><head><title>URLs</title></head>
<body>
<ul>
<li><a href="http://domain1/page1">some page1</a></li>
<li><a href="http://domain2/page2">some page2</a></li>
</ul>
</body></html>
'''

tree = etree.fromstring(page)
for a_node in tree.getiterator('a'):
    url = a_node.get('href')
    if url is not None:
        print url

I know that the wiki page is supposed to be Python 2.4 only, but I'd
rather have no example than an outdated one.

STeVe
Jun 12 '07 #8
Steven Bethard <st************@gmail.com> writes:
I'd hate to steer a potential new Python developer to a clumsier
"clumsier"???
Try to parse this with your program:

page2 = '''
<html><head><title>URLs</title></head>
<body>
<ul>
<li><a href="http://domain1/page1">some page1</a></li>
<li><a href="http://domain2/page2">some page2</a></li>
</body></html>
'''
library when Python 2.5 includes ElementTree::

import xml.etree.ElementTree as etree

page = '''
<html><head><title>URLs</title></head>
<body>
<ul>
<li><a href="http://domain1/page1">some page1</a></li>
<li><a href="http://domain2/page2">some page2</a></li>
</ul>
</body></html>
'''

tree = etree.fromstring(page)
for a_node in tree.getiterator('a'):
    url = a_node.get('href')
    if url is not None:
        print url
It might even be a one-liner:
print "\n".join((url.get('href', '') for url in tree.findall(".//a")))

But as far as HTML (not XML) is concerned, this is not a very realistic solution.
>
I know that the wiki page is supposed to be Python 2.4 only, but I'd
rather have no example than an outdated one.
This example is by no means "outdated".

--
Regards,
Rob
Jun 12 '07 #9
Rob Wolfe wrote:
Steven Bethard <st************@gmail.com> writes:
>I'd hate to steer a potential new Python developer to a clumsier

"clumsier"???
Try to parse this with your program:

page2 = '''
<html><head><title>URLs</title></head>
<body>
<ul>
<li><a href="http://domain1/page1">some page1</a></li>
<li><a href="http://domain2/page2">some page2</a></li>
</body></html>
'''
If you want to parse invalid HTML, I strongly encourage you to look into
BeautifulSoup. Here's the updated code:

import ElementSoup # http://effbot.org/zone/element-soup.htm
import cStringIO

tree = ElementSoup.parse(cStringIO.StringIO(page2))
for a_node in tree.getiterator('a'):
    url = a_node.get('href')
    if url is not None:
        print url
>I know that the wiki page is supposed to be Python 2.4 only, but I'd
rather have no example than an outdated one.

This example is by no means "outdated".
Given the simplicity of the ElementSoup code above, I'd still contend
that using HTMLParser here shows too complex an answer to too simple a
problem.

STeVe
Jun 12 '07 #10
Steven Bethard wrote:
[snip]

If you want to parse invalid HTML, I strongly encourage you to look into
BeautifulSoup. Here's the updated code:

import ElementSoup # http://effbot.org/zone/element-soup.htm
import cStringIO

tree = ElementSoup.parse(cStringIO.StringIO(page2))
for a_node in tree.getiterator('a'):
    url = a_node.get('href')
    if url is not None:
        print url
I should also have pointed out that the ElementSoup code above can
parse the following text::

<html><head><title>URLs</title></head>
<body>
<ul>
<li<a href="http://domain1/page1">some page1</a></li>
<li><a href="http://domain2/page2">some page2</a></li>
</body></html>

where the HTMLParser code raises an HTMLParseError.

STeVe
Jun 12 '07 #11
On Jun 11, 5:56 pm, Steve Howell <showel...@yahoo.com> wrote:
[original challenge post snipped]
I love the 7-line version of the prime number generator by Tim
Hochberg at the last comment of http://aspn.activestate.com/ASPN/Coo...Recipe/117119:

from itertools import count, ifilter
def sieve():
    seq = count(2)
    while True:
        p = seq.next()
        seq = ifilter(p.__rmod__, seq)
        yield p
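
For anyone who wants to play with it, a quick usage sketch: itertools.islice
will pull the first few primes off the generator.

from itertools import islice
print list(islice(sieve(), 10))    # -> [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]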
I suspect that it violates your second rule though :)

George

Jun 12 '07 #12
Steven Bethard wrote:
[snip]
Given the simplicity of the ElementSoup code above, I'd still contend
that using HTMLParser here shows too complex an answer to too simple a
problem.
Here's an lxml version:

from lxml import etree as et # http://codespeak.net/lxml
html = et.HTML(page2)
for href in html.xpath("//a/@href[string()]"):
    print href

Doesn't count as a 15-liner, though, even if you add the above HTML code to it.

Stefan
Jun 13 '07 #13
Stefan Behnel wrote:
Steven Bethard wrote:
[snip]
Here's an lxml version:

from lxml import etree as et # http://codespeak.net/lxml
html = et.HTML(page2)
for href in html.xpath("//a/@href[string()]"):
    print href

Doesn't count as a 15-liner, though, even if you add the above HTML code to it.
Definitely better than the HTMLParser code. =) Personally, I still
prefer the xpath-less version, but that's only because I can never
remember what all the line noise characters in xpath mean. ;-)

STeVe
Jun 13 '07 #14

Steve Howell wrote:
I suggested earlier that maybe we post multiple
solutions. That makes me a little nervous, to the
extent that it shows that the Python community has a
hard time coming to consensus on tools sometimes.
We agree that BeautifulSoup is the best for parsing HTML. :)
This is not a completely unfair knock on Python,
although I think the reason multiple solutions tend to
emerge for this type of thing is precisely due to the
simplicity and power of the language itself.

So I don't know. What about trying to agree on an XML
parsing example instead?

Thoughts?
I vote for an example with ElementTree (without xpath),
with a mention of using ElementSoup for invalid HTML.

--
Regards,
Rob

Jun 13 '07 #15
Rob Wolfe wrote:
[snip]

I vote for an example with ElementTree (without xpath),
with a mention of using ElementSoup for invalid HTML.
Sounds good to me. Maybe something like::

import xml.etree.ElementTree as etree
dinner_recipe = '''
<ingredients>
<ing><amt><qty>24</qty><unit>slices</unit></amt><item>baguette</item></ing>
<ing><amt><qty>2+</qty><unit>tbsp</unit></amt><item>olive_oil</item></ing>
<ing><amt><qty>1</qty><unit>cup</unit></amt><item>tomatoes</item></ing>
<ing><amt><qty>1-2</qty><unit>tbsp</unit></amt><item>garlic</item></ing>
<ing><amt><qty>1/2</qty><unit>cup</unit></amt><item>Parmesan</item></ing>
<ing><amt><qty>1</qty><unit>jar</unit></amt><item>pesto</item></ing>
</ingredients>'''
pantry = set(['olive oil', 'pesto'])
tree = etree.fromstring(dinner_recipe)
for item_elem in tree.getiterator('item'):
    if item_elem.text not in pantry:
        print item_elem.text

Though I wouldn't know where to put the ElementSoup link in this one...

STeVe
Jun 13 '07 #16
Steven Bethard <st************@gmail.com> writes:
>I vote for an example with ElementTree (without xpath),
with a mention of using ElementSoup for invalid HTML.

Sounds good to me. Maybe something like::

import xml.etree.ElementTree as etree
dinner_recipe = '''
<ingredients>
<ing><amt><qty>24</qty><unit>slices</unit></amt><item>baguette</item></ing>
<ing><amt><qty>2+</qty><unit>tbsp</unit></amt><item>olive_oil</item></ing>
^^^^^^^^^

Is that a typo here?
<ing><amt><qty>1</qty><unit>cup</unit></amt><item>tomatoes</item></ing>
<ing><amt><qty>1-2</qty><unit>tbsp</unit></amt><item>garlic</item></ing>
<ing><amt><qty>1/2</qty><unit>cup</unit></amt><item>Parmesan</item></ing>
<ing><amt><qty>1</qty><unit>jar</unit></amt><item>pesto</item></ing>
</ingredients>'''
pantry = set(['olive oil', 'pesto'])
tree = etree.fromstring(dinner_recipe)
for item_elem in tree.getiterator('item'):
    if item_elem.text not in pantry:
        print item_elem.text
That's a nice example. :)
Though I wouldn't know where to put the ElementSoup link in this one...
I had regular HTML in mind, something like:

<code>
# HTML page
dinner_recipe = '''
<html><head><title>Recipe</title></head><body>
<table>
<tr><th>amt</th><th>unit</th><th>item</th></tr>
<tr><td>24</td><td>slices</td><td>baguette</td></tr>
<tr><td>2+</td><td>tbsp</td><td>olive_oil</td></tr>
<tr><td>1</td><td>cup</td><td>tomatoes</td></tr>
<tr><td>1-2</td><td>tbsp</td><td>garlic</td></tr>
<tr><td>1/2</td><td>cup</td><td>Parmesan</td></tr>
<tr><td>1</td><td>jar</td><td>pesto</td></tr>
</table>
</body></html>'''

# program
import xml.etree.ElementTree as etree
tree = etree.fromstring(dinner_recipe)

#import ElementSoup as etree # for invalid HTML
#from cStringIO import StringIO # use this
#tree = etree.parse(StringIO(dinner_recipe)) # wrapper for BeautifulSoup

pantry = set(['olive oil', 'pesto'])

for ingredient in tree.getiterator('tr'):
    amt, unit, item = ingredient.getchildren()
    if item.tag == "td" and item.text not in pantry:
        print "%s: %s %s" % (item.text, amt.text, unit.text)
</code>

But if that's too complicated I will not insist on this. :)
Your example is good enough.

--
Regards,
Rob
Jun 13 '07 #17
Rob Wolfe wrote:
Steven Bethard <st************@gmail.com> writes:
>>I vote for an example with ElementTree (without xpath),
with a mention of using ElementSoup for invalid HTML.
Sounds good to me. Maybe something like::

import xml.etree.ElementTree as etree
dinner_recipe = '''
<ingredients>
<ing><amt><qty>24</qty><unit>slices</unit></amt><item>baguette</item></ing>
<ing><amt><qty>2+</qty><unit>tbsp</unit></amt><item>olive_oil</item></ing>
^^^^^^^^^
Is that a typo here?
Just trying to make Thunderbird line-wrap correctly. ;-) It's better
with a space instead of an underscore.
><ing><amt><qty>1</qty><unit>cup</unit></amt><item>tomatoes</item></ing>
<ing><amt><qty>1-2</qty><unit>tbsp</unit></amt><item>garlic</item></ing>
<ing><amt><qty>1/2</qty><unit>cup</unit></amt><item>Parmesan</item></ing>
<ing><amt><qty>1</qty><unit>jar</unit></amt><item>pesto</item></ing>
</ingredients>'''
pantry = set(['olive oil', 'pesto'])
tree = etree.fromstring(dinner_recipe)
for item_elem in tree.getiterator('item'):
    if item_elem.text not in pantry:
        print item_elem.text

That's a nice example. :)
>Though I wouldn't know where to put the ElementSoup link in this one...

I had regular HTML in mind, something like:

[snip]

But if that's too complicated I will not insist on this. :)
Your example is good enough.
Sure, that looks fine to me. =)

Steve
Jun 13 '07 #18
# writing/reading CSV files, tuple-unpacking, cmp() built-in
import csv

writer = csv.writer(open('stocks.csv', 'wb'))
writer.writerows([
    ('GOOG', 'Google, Inc.', 505.24, 0.47, 0.09),
    ('YHOO', 'Yahoo! Inc.', 27.38, 0.33, 1.22),
    ('CNET', 'CNET Networks, Inc.', 8.62, -0.13, -1.49)
])

stocks = csv.reader(open('stocks.csv', 'rb'))
for ticker, name, price, change, pct in stocks:
    print '%s is %s (%s%%)' % (
        name,
        {-1: 'down', 0: 'unchanged', 1: 'up'}[cmp(float(change), 0.0)],
        pct
    )

Jun 13 '07 #19
Rob Wolfe wrote:
[snip]
I posted a slight variant of this, trimmed down a bit to 21 lines.

STeVe
Jun 14 '07 #20
André <an***********@gmail.com> writes:
Ok, doctest-based version of the Unit test example added; so much
more Pythonic ;-)
Sorry for being a bit picky but there are a number of things that I'm
unhappy with in that example.

1) It's the second example with 13 lines. Though I suppose that the
   pragmatism of pairing the examples, overriding an implicit goal of
   the page, is itself Pythonic.

2) assert is not the simplest example of doctest. The style should be
   >>> add_money([0.13, 0.02])
   0.15
   >>> add_money([100.01, 99.99])
   200.0
   >>> add_money([0, -13.00, 13.00])
   0.0

3) which fails :-( So both the unittest and doctest examples ought to
be redone to emphasize what they are doing without getting bogged
down by issues of floating point representations.
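
One way to sidestep the float-representation issue (a minimal sketch;
add_money here is just a stand-in for the wiki's helper) is to have the
doctest compare formatted strings rather than raw floats:

import doctest

def add_money(amounts):
    """Add up dollar amounts; format the result so the doctest is stable.

    >>> print '%.2f' % add_money([0.13, 0.02])
    0.15
    >>> print '%.2f' % add_money([100.01, 99.99])
    200.00
    >>> print '%.2f' % add_money([0, -13.00, 13.00])
    0.00
    """
    return sum(amounts)

if __name__ == '__main__':
    doctest.testmod()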

http://wiki.python.org/moin/SimplePrograms

--
Pete Forman -./\.- Disclaimer: This post is originated
WesternGeco -./\.- by myself and does not represent
pe*********@westerngeco.com -./\.- the opinion of Schlumberger or
http://petef.port5.com -./\.- WesternGeco.
Jun 20 '07 #21
*** New Thread

#5 has been bothering me.

def greet(name):
    print 'hello', name
greet('Jack')
greet('Jill')
greet('Bob')

Using greet() three times is cheating; it doesn't teach much and
doesn't have any real-world use that #1 can't fulfill.

I offer this replacement:

def greet(name):
    """This function prints an email signature."""  # optional docstring highly recommended
    print name + " can be reached at",  # comma prevents newline from being printed
    print '@'.join([name, "google.com"])
greet('Jill')

I think it's important to teach new pythonistas about good
documentation from the start. A few new print options are introduced.
And as far as functionality goes, at least it does something.
Jun 21 '07 #22
And while I'm at it...

Although Guido's tutorial was a great place to start when I first came
to Python, I would have learned more and faster had SimplePrograms
existed. My only complaint with the Python documentation is the dearth
of examples. The PHP documentation is chock full of them.
Steve,

You introduced this as a challenge. Why not make it so? JUST FOR FUN I
propose that blank lines and comments not be counted. There should
probably be an upper and lower limit on new concepts introduced. There
should be an emphasis on common, real world functionality. The
standard library should be used freely (although limited to the most
common modules).

I don't mean to turn this game into something too formal and serious,
but it's obvious from the enthusiasm shown on this thread that
pythonistas take their fun seriously.

Jun 21 '07 #23
Ah, I mistook you for someone who gives a shit.

- You DID see my post on comp.lang.python and
deliberately ignored it.

- You then lied and claimed there was no discussion.

- You then lied and claimed my example merely
duplicated other examples.

- You claimed to be offended by my characterization
of your obfuscation policy as foolish and then
turned around and proved it was foolish by
admitting you couldn't comprehend the example
because it didn't have enough comments. Duh!

You're going to end up with a really short fucking
list if you ignore and delete that which you can't
understand.

Great way to encourage contributions.

Jun 22 '07 #24
