Bytes | Software Development & Data Engineering Community

Is it just me, or is Sqlite3 goofy?

Probably just me. I've only been using Access and SQL Server
for 12 years, so I'm sure my opinions don't count for anything.

I was, nevertheless, looking forward to Sqlite3. And now
that gmpy has been upgraded, I can go ahead and install
Python 2.5.

So I open the manual to Section 13.13 where I find the first
example of how to use Sqlite3:

<code>
conn = sqlite3.connect(':memory:')

c = conn.cursor()

# Create table
c.execute('''create table stocks
(date timestamp, trans varchar, symbol varchar,
qty decimal, price decimal)''')

# Insert a row of data
c.execute("""insert into stocks
values ('2006-01-05','BUY','RHAT',100,35.14)""")
</code>

Seems pretty simple, yet I was befuddled and bewildered
by this example. So much so that I added a query to see exactly
what was going on.

<code>
# added by me
c.execute('select * from stocks')
d = c.fetchone()
for t in d:
    print type(t), t
</code>

Original code - what's wrong with this picture?
<type 'unicode'> 2006-01-05
<type 'unicode'> BUY
<type 'unicode'> RHAT
<type 'int'> 100
<type 'float'> 35.14

Why is the date returning as a string? Aren't the built in
converters supposed to handle that? Yes, if you enable
detect_types.

Added detect_types=sqlite3.PARSE_DECLTYPES

Traceback (most recent call last):
File "C:\Python25\sqlite_first_example.py", line 30, in <module>
c.execute('select * from stocks')
File "C:\Python25\lib\sqlite3\dbapi2.py", line 66, in convert_timestamp
datepart, timepart = val.split(" ")
ValueError: need more than 1 value to unpack

Aha, that explains why they weren't enabled.

This failed because:
- the value inserted was in the wrong format?
- and the built-in converter can't split it because it has no spaces?
And when it worked, it was because:
- detect_types was not set, so the converter was never invoked on the query?

Yes: the value in the example was in datetime.date format, so the field
should have been cast [date], not [timestamp], which needs an HH:MM:SS
part for the converter to work properly (and then only if detect_types
is enabled).

If a correctly formatted value had been inserted, the converter would have worked:

<type 'datetime.datetime'> 2006-09-04 13:30:00
<type 'unicode'> BUY
<type 'unicode'> RHAT
<type 'int'> 100
<type 'float'> 35.14

Or, had the field been properly cast as [date] instead of [timestamp],
it would also have worked:

<type 'datetime.date'> 2006-09-04
<type 'unicode'> BUY
<type 'unicode'> RHAT
<type 'int'> 100
<type 'float'> 35.14

Ah, this now partly explains the original result: since detect_types
is off by default, the field cast, not being a native Sqlite3 type,
was ignored and the data was queried back as TEXT.
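To see the converter path end to end, here's a minimal sketch in modern Python 3 syntax (the thread's examples are Python 2); the explicit register_converter call and the cut-down table layout are my own illustration, not taken from the docs example:

```python
import sqlite3
import datetime

# Register a converter for columns declared as [date]. (Python's sqlite3
# also ships default date/timestamp converters; registering our own makes
# the mechanism visible.)
sqlite3.register_converter(
    "date", lambda b: datetime.date(*map(int, b.split(b"-"))))

# PARSE_DECLTYPES tells the module to look at the declared column type
# and run the matching converter on every value queried back.
conn = sqlite3.connect(":memory:", detect_types=sqlite3.PARSE_DECLTYPES)
c = conn.cursor()
c.execute("create table stocks (date date, trans text, qty integer)")
c.execute("insert into stocks values ('2006-01-05', 'BUY', 100)")
row = c.execute("select * from stocks").fetchone()
print(type(row[0]), row[0])  # <class 'datetime.date'> 2006-01-05
```

Without detect_types, the same query hands back a plain string, which is exactly what the original example showed.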

<type 'unicode'> 2006-09-04
<type 'unicode'> BUY
<type 'unicode'> RHAT
<type 'int'> 100
<type 'float'> 35.14

Ok, next issue: what the fuck are [varchar] and [decimal]?
They are certainly not Sqlite3 native types. If they are
user-defined types, where are the converters and adapters?
Does Sqlite3 simply ignore a cast that isn't registered?

Note that although both qty and price were cast as [decimal],
they queried back as int and float.
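That behaviour follows from SQLite's type-affinity rules: [decimal] contains no recognized keyword, so the column gets NUMERIC affinity, while [varchar] contains "CHAR" and gets TEXT affinity. A small sketch (table and column names mine) that makes the stored storage classes visible with typeof():

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()
# [decimal] -> NUMERIC affinity; [varchar] -> TEXT affinity. NUMERIC
# affinity converts well-formed numeric text to integer/real storage;
# TEXT affinity converts numbers to text.
c.execute("create table t (qty decimal, price decimal, sym varchar)")
c.execute("insert into t values ('100', '35.14', 7)")
row = c.execute(
    "select typeof(qty), typeof(price), typeof(sym) from t").fetchone()
print(row)  # ('integer', 'real', 'text')
```

So the cast isn't ignored outright; it's mapped to one of five affinities, which is why '100' comes back as an int and 35.14 as a float.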

Note also that it's "obvious" from the query example in section 13.13
that the example is FUBAR:
- the date is a unicode string, not a datetime.date type
- the price is binary floating point, not decimal

<quote>
This example uses the iterator form:
>>> c = conn.cursor()
>>> c.execute('select * from stocks order by price')
>>> for row in c:
...     print row
...
(u'2006-01-05', u'BUY', u'RHAT', 100, 35.140000000000001)
(u'2006-03-28', u'BUY', u'IBM', 1000, 45.0)
(u'2006-04-06', u'SELL', u'IBM', 500, 53.0)
(u'2006-04-05', u'BUY', u'MSOFT', 1000, 72.0)
</quote>

Here we have an example of things apparently working
for the wrong reason. A classic example of the programmer
who *thinks* he knows how it works because he wrote it.
This kind of sloppiness wouldn't last 5 minutes in a production
environment.

But why did Sqlite3 make qty an int and price a float?
Hard to say since THE FURTHER EXAMPLES IN THE DOCS don't
even bother to cast the field types. I guess I'm supposed
to guess how things are supposed to work. Can I trust that
default settings will be what I want?

Ha! Can I trust the baby with a hammer?

First, note that the script example in section 13.13.3

<quote>
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
create table person(
firstname,
lastname,
age
);

create table book(
title,
author,
published
);

insert into book(title, author, published)
values (
'Dirk Gently''s Holistic Detective Agency
'Douglas Adams',
1987
);
""")
</quote>

contains not one but TWO syntax errors! A single quote after
the word Agency is missing, as is the comma that should be at
the end of that line. It seems that no one ever actually tried
this example.

That's easily fixed. But I was curious about why the fields
don't have type casts. After the previous debacle and
knowing that this code was never tested, I am not going to
assume it works. Better add a query to make sure.

cur.execute("select title, author, published from book")
d = cur.fetchall()
for i in d: print i
print

(u"Dirk Gently's Holistic Detective Agency", u'Douglas Adams', 1987)

Ok, so if not cast, the fields must take some default (and probably
also when a cast is made that hasn't been defined).

But watch this: being clueless (but not stupid) is a gift I have
for troubleshooting. I tried (incorrectly) to insert another record:

cur.execute("insert into book(title, author, published) values ('Dirk
Gently''s Holistic Detective Agency','Douglas Adams','1987')")

(u"Dirk Gently's Holistic Detective Agency", u'Douglas Adams', 1987)
(u"Dirk Gently's Holistic Detective Agency", u'Douglas Adams', u'1987')

Uhh...how can a database have a different field type for each record?

Simple: without a cast when the table is created, the field type is
whatever you insert into it. That's how the default must work:
each record has a data structure independent of every other record!

Wow. Just think of the kind of bugs *that* must cause.
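That per-record behaviour can be confirmed directly: a column with no declared type gets no affinity, so each value keeps the storage class it was inserted with. A minimal sketch in Python 3 syntax (table name mine):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()
# 'published' has no declared type, so values are stored exactly as
# inserted: one row holds an integer, the next a text string.
c.execute("create table book (title, published)")
c.execute("insert into book values ('first', 1987)")
c.execute("insert into book values ('second', '1987')")
rows = c.execute("select title, typeof(published) from book").fetchall()
print(rows)  # [('first', 'integer'), ('second', 'text')]
```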

Bugs?

Here's MY example, creating a Cartesian Product

<code>
import sqlite3
letters = [(2,), ('10',), ('20',), (200,)]
con = sqlite3.connect(":memory:")
con.text_factory = str
con.execute("create table letter(c integer)")
con.executemany("insert into letter(c) values (?)", letters)
print 'Queried: ',
for row in con.execute("select c from letter"):
    print row,
print
print
print 'Sorted: ',
for row in con.execute("select c from letter order by c"):
    print row[0],
print
print
print 'Cartesian Product: ',
for row in con.execute("select a.c, b.c, c.c from letter as a, letter as b, letter as c"):
    print row[0]+row[1]+row[2],
</code>

Note that the list of data to be inserted contains both strings and
ints. But because the field was correctly cast as [integer], Sqlite3
actually stored integers in the db. We can tell that from how the
"order by" returned the records.

Queried: (2,) (10,) (20,) (200,)

Sorted: 2 10 20 200

Cartesian Product: 6 14 24 204 14 22 32 212 24 32 42 222 204 212
222 402 14 22 32 212 22 30 40 220 32 40 50 230 212 220 230 410 24
32 42 222 32 40 50 230 42 50 60 240 222 230 240 420 204 212 222
402 212 220 230 410 222 230 240 420 402 410 420 600

But if I cast them as [text], the sort order changes (and my
Cartesian Product becomes concatenation instead of summation).

Queried: ('2',) ('10',) ('20',) ('200',)

Sorted: 10 2 20 200

Cartesian Product: 222 2210 2220 22200 2102 21010 21020 210200
2202 22010 22020 220200 22002 220010 220020 2200200 1022 10210
10220 102200 10102 101010 101020 1010200 10202 102010 102020
1020200 102002 1020010 1020020 10200200 2022 20210 20220 202200
20102 201010 201020 2010200 20202 202010 202020 2020200 202002
2020010 2020020 20200200 20022 200210 200220 2002200 200102
2001010 2001020 20010200 200202 2002010 2002020 20020200 2002002
20020010 20020020 200200200

But if I omit the cast altogether, then the db stores the input
exactly as it was inserted, so the c field contains both
text and integers, wreaking havoc with my sort order, making
records un-queryable using "where", and causing my Cartesian
Product to crash.

Queried: (2,) ('10',) ('20',) (200,)

Sorted: 2 200 10 20

Cartesian Product: 6

Traceback (most recent call last):
File "C:\Python25\user\sqlite_test2.py", line 225, in <module>
print row[0]+row[1]+row[2],
TypeError: unsupported operand type(s) for +: 'int' and 'str'
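The "havoc" in the sort order is itself deterministic: SQLite's cross-type comparison rules place all numeric values before all text values. A condensed sketch of the no-cast case in Python 3 syntax:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.execute("create table letter (c)")  # no cast: values stored as inserted
c.executemany("insert into letter(c) values (?)",
              [(2,), ('10',), ('20',), (200,)])
rows = [r[0] for r in c.execute("select c from letter order by c")]
# Integers sort numerically among themselves, text sorts lexically,
# and every integer sorts before every string: 2, 200, '10', '20'.
print(rows)  # [2, 200, '10', '20']
```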

Yeah, I know, I've heard it before.

"This behavior is by design."

It's still fuckin' goofy.

Sep 5 '06
Paul McNett wrote:
me********@aol.com wrote:
Do you know what INNER JOIN means?

Do you know how important it is to a relational database?

Can you explain how an INNER JOIN can even work, in theory,
with dynamic data types?

Let's stop the pissing contest and just see how it works. After all,
this is Python and we can easily just try it out. Here's my example.
Please tell me how this causes unexpected results,
When we use a best case scenario, we get what we expect.
and why it isn't SQL.
It isn't SQL simply because SQL won't let you insert text
into a numeric field.
Please modify my example to get it to cause a catastrophe,
Make it worst case? Sure, I can do that.
and post it
here so we can see the errors of our ways and be properly humbled.
#-- Preliminaries:
## from pysqlite2 import dbapi2 as sqlite
import sqlite3 as sqlite
## con = sqlite.connect("test.db")
con = sqlite.connect(":memory:")
cur = con.cursor()

#-- Create 3 tables for a M:M relationship between customers
#-- and categories:
cur.execute("create table customers (id integer primary key
autoincrement, name char)")
cur.execute("create table categories (id integer primary key
autoincrement, name char)")
cur.execute("create table cust_cat (id integer primary key
autoincrement, cust_id integer, cat_id integer)")

## cur.execute("create table cust_cat (id integer, cust_id, cat_id)")
## cur.execute("create table customers (id, name char)")
## cur.execute("create table categories (id, name char)")
##
## # Ok, THIS fails. Because the JOINs were originally made against
## # fields that were cast as integers, so the 'hinting' of sqlite must
## # operate in a JOIN, allowing ints to JOIN against strings. Take
## # away the casts and the JOIN fails. Unfortunately, not every
## # situation will involve JOINing primary keys.
##
## [(1, u'Ziggy Marley'), (2, u'David Bowie')]
## [(1, u'Glam Rock'), (2, u'Nuevo Reggae'), (3, u'Male Singers')]
## [(None, 1, 3), (None, 1, u'2'), (None, u'2', u'1'), (None, u'2', u'3')]
##
## [(1, u'Ziggy Marley', 3, u'Male Singers')]
## []
## []
## []
## []
## [(1, u'Ziggy Marley', 3, u'Male Singers')]
#-- Insert some test data into customer and categories:
cur.execute("insert into customers (name) values ('Ziggy Marley')")
cur.execute("insert into customers (name) values ('David Bowie')")
cur.execute("insert into categories (name) values ('Glam Rock')")
cur.execute("insert into categories (name) values ('Nuevo Reggae')")
cur.execute("insert into categories (name) values ('Male Singers')")

## # if int cast removed, manually insert ids
##cur.execute("insert into customers (id, name) values (1,'Ziggy Marley')")
##cur.execute("insert into customers (id, name) values (2,'David Bowie')")
##cur.execute("insert into categories (id, name) values (1,'Glam Rock')")
##cur.execute("insert into categories (id, name) values (2,'Nuevo Reggae')")
##cur.execute("insert into categories (id, name) values (3,'Male Singers')")

cur.execute("select * from customers")
#-- Take a look at the data (and retrieve the pk's):
print cur.fetchall()
#[(1, u'Ziggy Marley'), (2, u'David Bowie')]
cur.execute("select * from categories")
print cur.fetchall()
#[(1, u'Glam Rock'), (2, u'Nuevo Reggae'), (3, u'Male Singers')]
#-- Relate some customers to some categories. Note how I send strings
#-- in some places and ints in others:
##cur.execute("insert into cust_cat (cust_id, cat_id) values (1, 3)")
##cur.execute("insert into cust_cat (cust_id, cat_id) values (1, '2')")
##cur.execute("insert into cust_cat (cust_id, cat_id) values ('2', '1')")
##cur.execute("insert into cust_cat (cust_id, cat_id) values ('2', 3)")
##cc = [(1,3),(1,'2'),('2','1'),('2','3')]
cc = [(1,3),(1,'>2'),('>2','>1'),('>2','>3')]

## # And this also fails (differently). The 'hinting' of sqlite that
## # operates during a JOIN only works if the string looks like an
## # integer. And, no, the use of the '>' is not a contrived example
## # like 'fred'. I often get a result like '>200' in what is supposed
## # to be a numeric field.
##
## [(1, u'Ziggy Marley'), (2, u'David Bowie')]
## [(1, u'Glam Rock'), (2, u'Nuevo Reggae'), (3, u'Male Singers')]
## [(1, 1, 3), (2, 1, u'>2'), (3, u'>2', u'>1'), (4, u'>2', u'>3')]
##
## [(1, u'Ziggy Marley', 3, u'Male Singers')]
## []
## []
## []
## [(1, u'Ziggy Marley', 3, u'Male Singers')]
## [(1, u'Ziggy Marley', 3, u'Male Singers')]

cur.executemany("insert into cust_cat (cust_id, cat_id) values (?,?)", cc)
cur.execute("select * from cust_cat")
print cur.fetchall()

print

#-- Run some queries:
cur.execute("""
select customers.id as cust_id,
customers.name as cust_name,
categories.id as cat_id,
categories.name as cat_name
from customers
inner join cust_cat
on cust_cat.cust_id = customers.id
inner join categories
on categories.id = cust_cat.cat_id
order by 2,4""")

print cur.fetchall()
#[(2, u'David Bowie', 1, u'Glam Rock'),
# (2, u'David Bowie', 3, u'Male Singers'),
# (1, u'Ziggy Marley', 3, u'Male Singers'),
# (1, u'Ziggy Marley', 2, u'Nuevo Reggae')]

cur.execute("""
select customers.id as cust_id,
customers.name as cust_name,
categories.id as cat_id,
categories.name as cat_name
from customers
inner join cust_cat
on cust_cat.cust_id = customers.id
inner join categories
on categories.id = cust_cat.cat_id
where categories.id = 1
order by 2,4""")

print cur.fetchall()
#[(2, u'David Bowie', 1, u'Glam Rock')]

cur.execute("""
select customers.id as cust_id,
customers.name as cust_name,
categories.id as cat_id,
categories.name as cat_name
from customers
inner join cust_cat
on cust_cat.cust_id = customers.id
inner join categories
on categories.id = cust_cat.cat_id
where categories.id = '1'
order by 2,4""")

print cur.fetchall()
#[(2, u'David Bowie', 1, u'Glam Rock')]

cur.execute("""
select customers.id as cust_id,
customers.name as cust_name,
categories.id as cat_id,
categories.name as cat_name
from customers
inner join cust_cat
on cust_cat.cust_id = customers.id
inner join categories
on categories.id = cust_cat.cat_id
where categories.id = '2'
order by 2,4""")

print cur.fetchall()
#[(1, u'Ziggy Marley', 2, u'Nuevo Reggae')]

cur.execute("""
select customers.id as cust_id,
customers.name as cust_name,
categories.id as cat_id,
categories.name as cat_name
from customers
inner join cust_cat
on cust_cat.cust_id = customers.id
inner join categories
on categories.id = cust_cat.cat_id
where categories.id = '3'
order by 2,4""")

print cur.fetchall()
#[(2, u'David Bowie', 3, u'Male Singers'),
# (1, u'Ziggy Marley', 3, u'Male Singers')]

cur.execute("""
select customers.id as cust_id,
customers.name as cust_name,
categories.id as cat_id,
categories.name as cat_name
from customers
inner join cust_cat
on cust_cat.cust_id = customers.id
inner join categories
on categories.id = cust_cat.cat_id
where categories.id = 3
order by 2,4""")

print cur.fetchall()
#[(2, u'David Bowie', 3, u'Male Singers'),
# (1, u'Ziggy Marley', 3, u'Male Singers')]

Sep 9 '06 #31

sk**@pobox.com wrote:
I wouldn't be at all surprised if the pysqlite author operated under that
assumption. That the Python developers didn't pick up on the issue is not
surprising. I'm not sure how many of them are (py)sqlite users, probably
relatively few.

Skip
Who has reviewed sqlite/pysqlite after all? Reading the passage in the
sqlite FAQ, I can hardly believe that passing errors silently, and
coercing everything to string when it fails to be coerced to INTEGER
(although INTEGER was an invariant declared in the create command), is
on par with Python's design philosophy. In other cases doctoral
dissertations are written about whether a keyword or some punctuation
shall be used for decorator syntax, and in this case everything must be
rushed into the code base of the standard library?

Sep 9 '06 #32
Kay Schluehr wrote:
>
[Quoting Marc 'BlackJack' Rintsch...]
If you are so fond of static typing, why are you using Python in the first
place? Just see it as consistency -- dynamically typed language →
dynamically typed DB columns. ;-)

I have to admit I find this bogus too. It has by no means anything to
do with static typing, but with letting "errors pass silently", i.e.
deactivating runtime type checks as well.
If the questioner had been talking about Perl, most respondents would
rightly have said that Perl's silent coercion or conversion of values
is an irritating source of potential errors. Here, the behaviour of
SQLite, whilst not as bad as Perl - values are not converted into
something different which is then erroneous - does contradict the naive
expectations of users who expect the data type declarations to mean
something and for such constraints to be enforced.

Of course, the criticism of the questioner should be more forgiving in
this debate, since I imagine that most people with experience of SQLite
know of its "loose" data typing model, and that many assume that
everyone else is aware of this feature or limitation, even though that
may not be the case. Thus, the priority should be on emphasizing this
in the module documentation (in order to avoid unnecessary confusion),
along with mentioning other possible strategies for emulating other
database system behaviours: "Other Affinity Modes" in the SQLite
documentation [1] may be helpful here, if the future tense can be
replaced with the present tense whilst preserving the factual content
of that section.

Paul

[1] http://www.sqlite.org/datatype3.html
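One concrete way to make the declarations "mean something" today, short of the affinity modes discussed above, is a CHECK constraint on typeof(); this is my own workaround sketch, not something the module documentation suggests:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()
# The CHECK constraint runs after affinity conversion, so convertible
# text like '100' still passes, but genuinely non-integer data fails.
c.execute(
    "create table stocks (qty integer check (typeof(qty) = 'integer'))")
c.execute("insert into stocks values (100)")        # accepted
try:
    c.execute("insert into stocks values ('oops')")  # rejected
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
print(rejected)  # True
```

This pushes the validation into the database itself rather than into the driver or application code.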

Sep 9 '06 #33
me********@aol.com wrote:
Paul McNett wrote:
>>me********@aol.com wrote:
>>>Do you know what INNER JOIN means?

Do you know how important it is to a relational database?

Can you explain how an INNER JOIN can even work, in theory,
with dynamic data types?

Let's stop the pissing contest and just see how it works. After all,
this is Python and we can easily just try it out. Here's my example.
Please tell me how this causes unexpected results,


When we use a best case scenario, we get what we expect.

>>and why it isn't SQL.


It isn't SQL simply because SQL won't let you insert text
into a numeric field.
Yup, I have to agree that's pretty crappy. (Makes mental note to limit
use of SQLite).

regards
Steve
--
Steve Holden +44 150 684 7255 +1 800 494 3119
Holden Web LLC/Ltd http://www.holdenweb.com
Skype: holdenweb http://holdenweb.blogspot.com
Recent Ramblings http://del.icio.us/steve.holden

Sep 9 '06 #34
Dennis Lee Bieber wrote:
>
Talking to myself again, I see...
Not quite. ;-)

[...]
How interesting... With MySQL/MySQLdb I did NOT get exceptions or
error results on inserting bad numeric data supplied as character string
format (ie, as read from the CSV). Instead, MySQL SILENTLY converted
them to ZEROS

A price of "Priceless" becomes Decimal("0.00").

The Customer number of ">68" became 0L
This kind of thing is "classic" MySQL behaviour.
Which would one rather have to work with -- a database that copied
invalid numerics as string literals (which, in my mind, makes it much
easier to correct the data later, using "update .... set field = correct
where field = invalid") or a database that silently converts them all to
0 values. (Of course, I now expect to have a rejoinder about "Using a
REAL database instead of MySQL" -- but unless said person wishes to
start making the same comments about SQLite on at least as regular a
basis, I believe the objection itself is invalid for this example).
Given subsequent research into SQLite's affinity modes and their
presumed status as future features, the solution in that database
system's case is to apply validation in the driver/module or through
driver extensions, and there is apparently some flexibility in the
pysqlite2 modules for changing the way data types are handled, although
a cursory inspection of the documentation doesn't appear to suggest a
convenient, ready-made solution.

As for MySQL, the situation is possibly more awkward: one expects the
database system to complain about certain things, which it probably
does from time to time, but it would seem wasteful to duplicate
whatever validation the database system does do just to cover those
situations where the system misbehaves.

Paul

Sep 10 '06 #35

Dennis Lee Bieber wrote:
Guess I lied...

On Sat, 09 Sep 2006 05:22:20 GMT, Dennis Lee Bieber
<wl*****@ix.netcom.com> declaimed the following in comp.lang.python:

Talking to myself again, I see...

<snip>
rs = cr.execute("""insert into invoice_1
(CustNo, Title, Author, Year, Price)
values (?,?,?,?,?)""",
r)

Whoops, r = rv, though the exceptions raised made it moot

rs = cr.execute("""insert into invoice_2
(CustNo, Title, Author, Year, Price)
values (?,?,?,?,?)""",
r)

Same comment

Out of curiosity, I converted to using MySQL(db) as a test. As
expected, the pre-insert validation code worked with the same results
(well, the price was declared decimal, and Python 2.4 appears to handle
that as a Decimal("value") on return <G>)

Now, taking out the pre-validation and type conversion, supplying
all data as it came from the CSV file:

-=-=-=-=-=-=-
Inserting:
['066', '101 Ways to Start A Fight', 'some Irish gentleman', '1919',
'19.95']
['032', 'A Sale of Two Titties', 'Charles Dikkens', '1855', '20.00']
['001', 'Olsens Standard Book of British Birds (Expurgated)"', 'Olsen',
'None', '99.95']
['066', 'Ethel the Aardvark Goes Quantity Surveying', 'None', '1975',
'9.99']
['032', 'David Coperfield', 'Edmund Wells', '1955', '3.95']
['>68', 'Strawmen and Dorothy', '', '2006', '49.89']
['033', "The Emperor's Old Clothes", 'Grimm Hound', '1887', 'Priceless']

Select all from Invoice_1 (CustNo is CHARACTER)
(1L, '066', '101 Ways to Start A Fight', 'some Irish gentleman', 1919L,
Decimal("19.95"))
(2L, '032', 'A Sale of Two Titties', 'Charles Dikkens', 1855L,
Decimal("20.00"))
(3L, '001', 'Olsens Standard Book of British Birds (Expurgated)"',
'Olsen', 0L, Decimal("99.95"))
(4L, '066', 'Ethel the Aardvark Goes Quantity Surveying', 'None', 1975L,
Decimal("9.99"))
(5L, '032', 'David Coperfield', 'Edmund Wells', 1955L, Decimal("3.95"))
(6L, '>68', 'Strawmen and Dorothy', '', 2006L, Decimal("49.89"))
(7L, '033', "The Emperor's Old Clothes", 'Grimm Hound', 1887L,
Decimal("0.00"))

Select all from Invoice_2 (CustNo is INTEGER)
(1L, 66L, '101 Ways to Start A Fight', 'some Irish gentleman', 1919L,
Decimal("19.95"))
(2L, 32L, 'A Sale of Two Titties', 'Charles Dikkens', 1855L,
Decimal("20.00"))
(3L, 1L, 'Olsens Standard Book of British Birds (Expurgated)"', 'Olsen',
0L, Decimal("99.95"))
(4L, 66L, 'Ethel the Aardvark Goes Quantity Surveying', 'None', 1975L,
Decimal("9.99"))
(5L, 32L, 'David Coperfield', 'Edmund Wells', 1955L, Decimal("3.95"))
(6L, 0L, 'Strawmen and Dorothy', '', 2006L, Decimal("49.89"))
(7L, 33L, "The Emperor's Old Clothes", 'Grimm Hound', 1887L,
Decimal("0.00"))
-=-=-=-=-=-=-

How interesting... With MySQL/MySQLdb I did NOT get exceptions or
error results on inserting bad numeric data supplied as character string
format (ie, as read from the CSV). Instead, MySQL SILENTLY converted
them to ZEROS

A price of "Priceless" becomes Decimal("0.00").

The Customer number of ">68" became 0L
Which would one rather have to work with -- a database that copied
invalid numerics as string literals (which, in my mind, makes it much
easier to correct the data later, using "update .... set field = correct
where field = invalid") or a database that silently converts them all to
0 values. (Of course, I now expect to have a rejoinder about "Using a
REAL database instead of MySQL" -- but unless said person wishes to
start making the same comments about SQLite on at least as regular a
basis, I believe the objection itself is invalid for this example).

(Apparently we have fallen afoul of this clause from the old
O'Reilly/MySQL black/brown book: "When asked to store a value in a
numeric column that is outside the column type's allowable range, MySQL
clips the value to the appropriate endpoint of the range and stores the
resulting value instead." -- seems character data "clips" to zero.)
Are you saying that MySQL is goofy? ;-)

Based on these replies, I'm pulling back and retrenching.

As I said before, I'm not entering 500,000 records by writing
INSERT statements for each record, so reading csv files is
a more realistic test. Nevertheless, I am still convinced that
the documentation (or lack thereof) is mainly responsible for
my confusion. I was, after all, mimicking the examples given
(which still have errors).

I think an explanation of how Sqlite3 differs from SQL and
a better set of examples is still warranted.

Sep 10 '06 #36
me********@aol.com wrote:
I think an explanation of how Sqlite3 differs from SQL and
a better set of examples is still warranted.
In general, Python standard library modules that are wrappers
for third party libraries are very thinly documented, and they
should probably remain that way, because it's really too much of
a burden on the Python developers to develop this documentation
and keep it up to date. The idea is to document the wrapper, not
the wrapped library.

If I had a choice concerning these wrapper libraries, I'd much
rather see more docs on the tkinter and xml libraries. There you
need to guess a lot. There is no shortage of Tcl/Tk docs, but it
doesn't look the same within Python. For the Python xml libraries,
I've had to experiment a lot, and I have this nagging feeling
that I don't do things the way I should. (From Python 2.5, we
have ElementTree, which is much better from this perspective,
but we've just started using 2.4 at work, and we try to keep
the amount of third party libraries to a minimum here.)

It seems to me that the sqlite3 module is fairly decent in
this regard, particularly since it's not even included in a
completed Python release yet. Concerning the API, I'm surprised
to see magic method naming such as __conform__ introduced in
a library like that. It seems to me that this is a violation
of PEP 8. I'm sure there are further details that could be worth
mentioning in the docs, but I think it's important that we don't
try to duplicate the SQLite docs in the Python docs.
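For reference, __conform__ is the adaptation hook being discussed: an object that implements it can tell the module how to store itself. A short sketch in Python 3 syntax (the Point class is illustrative):

```python
import sqlite3

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

    # sqlite3 calls __conform__ when it needs to adapt an object it
    # doesn't natively support; returning a supported type (here, a
    # string) is what gets stored.
    def __conform__(self, protocol):
        if protocol is sqlite3.PrepareProtocol:
            return "%s;%s" % (self.x, self.y)

conn = sqlite3.connect(":memory:")
val = conn.execute("select ?", (Point(4.0, -3.2),)).fetchone()[0]
print(val)  # 4.0;-3.2
```

The alternative, sqlite3.register_adapter(), achieves the same thing without putting a double-underscore method on the class, which is the PEP 8 concern raised above.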
Sep 11 '06 #37
me********@aol.com wrote:
Bruno Desthuilliers wrote:
>me********@aol.com wrote:
>>Probably just me. I've only been using Access and SQL Server
for 12 years, so I'm sure my opinions don't count for anything.
SQLite never pretended to be a full-blown RDBMS - just a lightweight
simple embedded database as SQL-compliant as possible.

Ah, *you* haven't read the documentation either!
I use SQLite and PgSQL (and, alas, MySQL) on a daily basis. So I've *of
course* read the doc (and frequently re-read it).
"as SQL-compliant as possible"?
Yes. And yes, I know about the typing issue. If you want to make sure
you can migrate your data to a real RDBMS, then it's up to you to take
appropriate precautions (FormEncode can be a real helper here...).

This (documented enough) issue apart, we have far less troubles with
SQLite than we have with MySQL, on both the dev and admin sides.

And FWIW, insulting people won't buy you much.

(snip useless rant again)

--
bruno desthuilliers
python -c "print '@'.join(['.'.join([w[::-1] for w in p.split('.')]) for
p in 'o****@xiludom.gro'.split('@')])"
Sep 11 '06 #38
me********@aol.com wrote:
(snip)
Are you saying that MySQL is goofy? ;-)
This is an understatement.
--
bruno desthuilliers
python -c "print '@'.join(['.'.join([w[::-1] for w in p.split('.')]) for
p in 'o****@xiludom.gro'.split('@')])"
Sep 11 '06 #39
Kay Schluehr wrote:
sk**@pobox.com wrote:
>I wouldn't be at all surprised if the pysqlite author operated under that
assumption. That the Python developers didn't pick up on the issue is not
surprising. I'm not sure how many of them are (py)sqlite users, probably
relatively few.

Skip

Who has reviewed sqlite/pysqlite after all?
pysqlite was actually reviewed by several Python core developers before
becoming a part of the Python standard library, most prominently Neil
Norwitz and Anthony Baxter.
Reading the passage in the
sqlite FAQ I can hardly believe that passing errors silently and
coercing everything to string when it fails to be coerced to INTEGER
although INTEGER was an invariant declared in the create command is on
par with Python's design philosophy. [...]
Unfortunately, third-party library authors don't first check with
Python's design philosophy in case their software will be wrapped as a
Python module ;-)

I did my best to combine both SQLite's and Python's type system though,
including a discussion with pysqlite 1.x users before the grand rewrite
for version 2, which is the one that ended up in the Python standard
library now.
In other cases doctoral dissertations are written about whether a
keyword or some punctuation shall be used for decorator syntax and in
this case everything must be rushed into the code base of the
standard library?
There was no rush at all.

-- Gerhard

Sep 11 '06 #40
