Bytes IT Community

Is it just me, or is Sqlite3 goofy?

Probably just me. I've only been using Access and SQL Server
for 12 years, so I'm sure my opinions don't count for anything.

I was, nevertheless, looking forward to Sqlite3. And now
that gmpy has been upgraded, I can go ahead and install
Python 2.5.

So I open the manual to Section 13.13 where I find the first
example of how to use Sqlite3:

<code>
conn = sqlite3.connect(':memory:')

c = conn.cursor()

# Create table
c.execute('''create table stocks
(date timestamp, trans varchar, symbol varchar,
qty decimal, price decimal)''')

# Insert a row of data
c.execute("""insert into stocks
values ('2006-01-05','BUY','RHAT',100,35.14)""")
</code>

Seems pretty simple, yet I was befuddled and bewildered
by this example. So much so that I added a query to see exactly
what was going on.

<code>
# added by me
c.execute('select * from stocks')
d = c.fetchone()
for t in d:
    print type(t), t
</code>

Original code - what's wrong with this picture?
<type 'unicode'> 2006-01-05
<type 'unicode'> BUY
<type 'unicode'> RHAT
<type 'int'> 100
<type 'float'> 35.14

Why is the date returning as a string? Aren't the built-in
converters supposed to handle that? Yes, if you enable
detect_types.

Added detect_types=sqlite3.PARSE_DECLTYPES

Traceback (most recent call last):
File "C:\Python25\sqlite_first_example.py", line 30, in <module>
c.execute('select * from stocks')
File "C:\Python25\lib\sqlite3\dbapi2.py", line 66, in convert_timestamp
datepart, timepart = val.split(" ")
ValueError: need more than 1 value to unpack

Aha, that explains why they weren't enabled.

This failed because:
- the value inserted was in the wrong format?
- and the built-in converter can't split it because it has no spaces?
And when it worked, it was because:
- detect_types was not set, so the converter was never invoked on the query?

Yes, the value in the example was a bare date, so the field should
have been declared [date], not [timestamp], which needs an HH:MM:SS
part for the converter to work properly (and only if detect_types is
enabled).

If a correctly formatted value had been inserted, the converter would
have worked:

<type 'datetime.datetime'> 2006-09-04 13:30:00
<type 'unicode'> BUY
<type 'unicode'> RHAT
<type 'int'> 100
<type 'float'> 35.14

Or, had the field been properly declared as [date] instead of
[timestamp], it would also have worked:

<type 'datetime.date'> 2006-09-04
<type 'unicode'> BUY
<type 'unicode'> RHAT
<type 'int'> 100
<type 'float'> 35.14

Ah, this now partly explains the original result: since detect_types
is off by default, the field declaration (not being a native Sqlite3
type) was ignored and the date was queried back as TEXT.

<type 'unicode'> 2006-09-04
<type 'unicode'> BUY
<type 'unicode'> RHAT
<type 'int'> 100
<type 'float'> 35.14
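[Editor's note] Both behaviors can be reproduced directly. A minimal sketch in modern Python 3 syntax (the mechanism is unchanged); the converter here is registered explicitly, equivalent to the built-in one, to show why it needs the space between date and time:

```python
import sqlite3
import datetime

# Without detect_types (the default), the declared column type is
# ignored on the Python side and the bare date comes back as raw text.
con = sqlite3.connect(":memory:")
con.execute("create table stocks (date timestamp)")
con.execute("insert into stocks values ('2006-09-04')")
val = con.execute("select date from stocks").fetchone()[0]
print(type(val), val)   # a plain string

# With detect_types enabled, the "timestamp" converter runs on query.
# It splits on the space between date and time, so it fails on a bare
# date but converts a full 'YYYY-MM-DD HH:MM:SS' value cleanly.
def convert_timestamp(b):
    datepart, timepart = b.split(b" ")   # fails on a bare date
    return datetime.datetime.combine(
        datetime.date.fromisoformat(datepart.decode()),
        datetime.time.fromisoformat(timepart.decode()))

sqlite3.register_converter("timestamp", convert_timestamp)
con2 = sqlite3.connect(":memory:", detect_types=sqlite3.PARSE_DECLTYPES)
con2.execute("create table stocks (date timestamp)")
con2.execute("insert into stocks values ('2006-09-04 13:30:00')")
ts = con2.execute("select date from stocks").fetchone()[0]
print(type(ts), ts)   # datetime.datetime
```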

Ok, next issue, what the fuck are [varchar] and [decimal]?
They are certainly not Sqlite3 native types. If they are
user defined types, where are the converters and adapters?
Does Sqlite3 simply ignore a cast that isn't registered?

Note that although both qty and price were cast as [decimal]
they queried back as int and float.
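[Editor's note] The int/float result can be checked with SQLite's own typeof(). A sketch in Python 3 syntax (same mechanism): SQLite maps an unrecognized declared type such as "decimal" to NUMERIC affinity, so each value is stored in whichever storage class fits.

```python
import sqlite3

# "decimal" is not a native SQLite type; the column gets NUMERIC
# affinity and each value is stored as integer or real as appropriate.
# typeof() reports the storage class actually used.
con = sqlite3.connect(":memory:")
con.execute("create table t (qty decimal, price decimal)")
con.execute("insert into t values (100, 35.14)")
print(con.execute("select typeof(qty), typeof(price) from t").fetchone())
# ('integer', 'real')
```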

Note also that it's "obvious" from the query example in section 13.13
that the example is FUBAR:
- the date is a unicode string, not a datetime.date type
- the price is binary floating point, not decimal

<quote>
This example uses the iterator form:
>>> c = conn.cursor()
>>> c.execute('select * from stocks order by price')
>>> for row in c:
...     print row
...
(u'2006-01-05', u'BUY', u'RHAT', 100, 35.140000000000001)
(u'2006-03-28', u'BUY', u'IBM', 1000, 45.0)
(u'2006-04-06', u'SELL', u'IBM', 500, 53.0)
(u'2006-04-05', u'BUY', u'MSOFT', 1000, 72.0)
</quote>

Here we have an example of things apparently working
for the wrong reason. A classic example of the programmer
who *thinks* he knows how it works because he wrote it.
This kind of sloppiness wouldn't last 5 minutes in a production
environment.

But why did Sqlite3 make qty an int and price a float?
Hard to say since THE FURTHER EXAMPLES IN THE DOCS don't
even bother to cast the field types. I guess I'm supposed
to guess how things are supposed to work. Can I trust that
default settings will be what I want?

Ha! Can I trust the baby with a hammer?

First, note that the script example in section 13.13.3

<quote>
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
create table person(
firstname,
lastname,
age
);

create table book(
title,
author,
published
);

insert into book(title, author, published)
values (
'Dirk Gently''s Holistic Detective Agency
'Douglas Adams',
1987
);
""")
</quote>

contains not one but TWO syntax errors! A single quote after
the word Agency is missing as is the comma that should be at
the end of that line. Seems that no one actually ever tried
this example.
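[Editor's note] For reference, here is the script with the two errors repaired exactly as the text describes (closing quote after "Agency" restored, trailing comma added), in Python 3 syntax:

```python
import sqlite3

# Corrected version of the docs' executescript example: the missing
# closing quote after "Agency" and the comma after it are restored.
con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
    create table person(
        firstname,
        lastname,
        age
    );

    create table book(
        title,
        author,
        published
    );

    insert into book(title, author, published)
    values (
        'Dirk Gently''s Holistic Detective Agency',
        'Douglas Adams',
        1987
    );
""")
row = cur.execute("select title, author, published from book").fetchone()
print(row)
```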

That's easily fixed. But I was curious about why the fields
don't have type casts. After the previous debacle and
knowing that this code was never tested, I am not going to
assume it works. Better add a query to make sure.

cur.execute("select title, author, published from book")
d = cur.fetchall()
for i in d: print i
print

(u"Dirk Gently's Holistic Detective Agency", u'Douglas Adams', 1987)

Ok, so if no type is declared, the fields must fall back to a default
(and probably also when a declared type isn't recognized).

But watch this: being clueless (but not stupid) is a gift I have
for troubleshooting. I tried (incorrectly) to insert another record:

cur.execute("insert into book(title, author, published) values ('Dirk Gently''s Holistic Detective Agency','Douglas Adams','1987')")

(u"Dirk Gently's Holistic Detective Agency", u'Douglas Adams', 1987)
(u"Dirk Gently's Holistic Detective Agency", u'Douglas Adams', u'1987')

Uhh...how can a database have a different field type for each record?

Simple: without a declared type when the table is created, the field
type is whatever you insert into it. That's how the default must work:
each record has a data structure independent of every other record!

Wow. Just think of the kind of bugs *that* must cause.
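[Editor's note] The per-record typing can be confirmed with typeof(). A sketch in Python 3 syntax:

```python
import sqlite3

# With no declared type, the column takes each value exactly as
# inserted, so the int 1987 and the string '1987' coexist in the same
# column with different storage classes.
con = sqlite3.connect(":memory:")
con.execute("create table book (published)")
con.executemany("insert into book values (?)", [(1987,), ('1987',)])
print(con.execute("select published, typeof(published) from book").fetchall())
# [(1987, 'integer'), ('1987', 'text')]
```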

Bugs?

Here's MY example, creating a Cartesian Product

<code>
import sqlite3
letters = [(2,), ('10',), ('20',), (200,)]
con = sqlite3.connect(":memory:")
con.text_factory = str
con.execute("create table letter(c integer)")
con.executemany("insert into letter(c) values (?)", letters)
print 'Queried: ',
for row in con.execute("select c from letter"):
    print row,
print
print
print 'Sorted: ',
for row in con.execute("select c from letter order by c"):
    print row[0],
print
print
print 'Cartesian Product: ',
for row in con.execute("select a.c, b.c, c.c from letter as a, letter as b, letter as c"):
    print row[0]+row[1]+row[2],
</code>

Note that the list of data to be inserted contains both strings and
ints. But because the field was correctly cast as [integer], Sqlite3
actually stored integers in the db. We can tell that from how the
"order by" returned the records.

Queried: (2,) (10,) (20,) (200,)

Sorted: 2 10 20 200

Cartesian Product: 6 14 24 204 14 22 32 212 24 32 42 222 204 212
222 402 14 22 32 212 22 30 40 220 32 40 50 230 212 220 230 410 24
32 42 222 32 40 50 230 42 50 60 240 222 230 240 420 204 212 222
402 212 220 230 410 222 230 240 420 402 410 420 600

Because if I cast them as [text] the sort order changes (and my
Cartesian Product becomes concatenation instead of summation).

Queried: ('2',) ('10',) ('20',) ('200',)

Sorted: 10 2 20 200

Cartesian Product: 222 2210 2220 22200 2102 21010 21020 210200
2202 22010 22020 220200 22002 220010 220020 2200200 1022 10210
10220 102200 10102 101010 101020 1010200 10202 102010 102020
1020200 102002 1020010 1020020 10200200 2022 20210 20220 202200
20102 201010 201020 2010200 20202 202010 202020 2020200 202002
2020010 2020020 20200200 20022 200210 200220 2002200 200102
2001010 2001020 20010200 200202 2002010 2002020 20020200 2002002
20020010 20020020 200200200

But if I omit the cast altogether, then the db stores the input
exactly as it was inserted, so the c field contains both
text and integers wreaking havoc with my sort order, making
records un-queryable using "where" and causing my Cartesian
Product to crash.

Queried: (2,) ('10',) ('20',) (200,)

Sorted: 2 200 10 20

Cartesian Product: 6

Traceback (most recent call last):
File "C:\Python25\user\sqlite_test2.py", line 225, in <module>
print row[0]+row[1]+row[2],
TypeError: unsupported operand type(s) for +: 'int' and 'str'

Yeah, I know, I've heard it before.

"This behavior is by design."

It's still fuckin' goofy.

Sep 5 '06 #1
66 Replies


In <11**********************@p79g2000cwp.googlegroups .com>,
me********@aol.com wrote:
But watch this: being clueless (but not stupid) is a gift I have
for troubleshooting. I tried (incorrectly) to insert another record:

cur.execute("insert into book(title, author, published) values ('Dirk Gently''s Holistic Detective Agency','Douglas Adams','1987')")

(u"Dirk Gently's Holistic Detective Agency", u'Douglas Adams', 1987)
(u"Dirk Gently's Holistic Detective Agency", u'Douglas Adams', u'1987')

Uhh...how can a database have a different field type for each record?

Simple, without a cast when the table is created, the field type is
whatever you insert into it. That's how the default must work,
each record has a data structure independent of every other record!

Wow. Just think of the kind of bugs *that* must cause.

Bugs?
It's not a bug, it's a feature. And it's answered as the third point in the FAQ:

http://www.sqlite.org/faq.html#q3

I think your whole experience is based on it. Live with it or use a
real RDBMS.

If you are so fond of static typing, why are you using Python in the first
place? Just see it as consistency -- dynamically typed language →
dynamically typed DB columns. ;-)

Ciao,
Marc 'BlackJack' Rintsch
Sep 5 '06 #2

me********@aol.com wrote:
Probably just me. I've only been using Access and SQL Server
for 12 years, so I'm sure my opinions don't count for anything.
[...]
>
Ok, next issue, what the fuck are [varchar] and [decimal]?
[..]
>
It's still fuckin' goofy.
Language ...

regards
Steve
--
Steve Holden +44 150 684 7255 +1 800 494 3119
Holden Web LLC/Ltd http://www.holdenweb.com
Skype: holdenweb http://holdenweb.blogspot.com
Recent Ramblings http://del.icio.us/steve.holden

Sep 5 '06 #3

me********@aol.com wrote:
Probably just me. I've only been using Access and SQL Server
for 12 years, so I'm sure my opinions don't count for anything.
SQLite never pretended to be a full-blown RDBMS - just a lightweight
simple embedded database as SQL-compliant as possible. In its category,
it beats Access and MySQL hands down. Now if you want a real RDBMS,
you've just failed to choose the right tool. May I suggest PostgreSQL ?

(snip useless rant)

--
bruno desthuilliers
python -c "print '@'.join(['.'.join([w[::-1] for w in p.split('.')]) for
p in 'o****@xiludom.gro'.split('@')])"
Sep 5 '06 #4


Marc 'BlackJack' Rintsch wrote:
In <11**********************@p79g2000cwp.googlegroups .com>,
me********@aol.com wrote:
But watch this: being clueless (but not stupid) is a gift I have
for troubleshooting. I tried (incorrectly) to insert another record:

cur.execute("insert into book(title, author, published) values ('Dirk Gently''s Holistic Detective Agency','Douglas Adams','1987')")

(u"Dirk Gently's Holistic Detective Agency", u'Douglas Adams', 1987)
(u"Dirk Gently's Holistic Detective Agency", u'Douglas Adams', u'1987')

Uhh...how can a database have a different field type for each record?

Simple, without a cast when the table is created, the field type is
whatever you insert into it. That's how the default must work,
each record has a data structure independent of every other record!

Wow. Just think of the kind of bugs *that* must cause.

Bugs?

It's not a bug, it's a feature. And it's answered as the third point in the FAQ:

http://www.sqlite.org/faq.html#q3
Oh, so it is "This behaviour is by design".
>
I think your whole experience is based on it.
But shouldn't a significant feature like that be explained
in the Python manuals? Why should I go dig up Sqlite
FAQs to learn what should have been in the manuals?
Live with it or use a real RDBMS.
I don't mind living with it as long as it's documented.
>
If you are so fond of static typing, why are you using Python in the first
place? Just see it as consistency -- dynamically typed language →
dynamically typed DB columns. ;-)
Did you miss this statement in Section 13.13?

"If switching to a larger database such as PostgreSQL or Oracle
is later necessary, the switch should be relatively easy. "

Obviously, this is a new usage of "relatively easy" of which
I was previously unaware.

Ciao,
Marc 'BlackJack' Rintsch
Sep 5 '06 #5

>I think your whole experience is based on it.
>But shouldn't a significant feature like that be explained in the
Python manuals? Why should I go dig up Sqlite FAQs to learn what
should have been in the manuals?
I don't know, but I will take a stab at a plausible explanation. First,
sqlite support has only been in Python for a month or three. Its first
official unveiling will be when 2.5 is released. Second, it's common when
wrapping functionality into Python to rely on the documentation for the
thing being wrapped. The thinner the wrapper, the more you tend to rely on
the underlying documentation. Also, the more functionally rich the thing
you've wrapped, the more you rely on the underlying documentation. I
wouldn't be at all surprised if the pysqlite author operated under that
assumption. That the Python developers didn't pick up on the issue is not
surprising. I'm not sure how many of them are (py)sqlite users, probably
relatively few.

Skip
Sep 5 '06 #6


sk**@pobox.com wrote:
I think your whole experience is based on it.
>But shouldn't a significant feature like that be explained in the
>Python manuals? Why should I go dig up Sqlite FAQs to learn what
>should have been in the manuals?

I don't know, but I will take a stab at a plausible explanation. First,
sqlite support has only been in Python for a month or three. Its first
official unveiling will be when 2.5 is released.
Although possibly too late for the final release, now would be a
good time to straighten out the documentation.
Second, it's common when
wrapping functionality into Python to rely on the documentation for the
thing being wrapped. The thinner the wrapper, the more you tend to rely on
the underlying documentation. Also, the more functionally rich the thing
you've wrapped, the more you rely on the underlying documentation. I
wouldn't be at all surprised if the pysqlite author operated under that
assumption.
Ok, that's certainly plausible. But it's not an excuse. The thinner the
documentation, the greater the emphasis should be made to point
the reader to a more adequate source. Simply listing the Sqlite home
page at the bottom of the page is hardly good enough. It should be
explicitly stated in bold letters that the reader should go read the
Sqlite FAQ because it radically differs from *real* databases, and
provide a separate link to it in the body of the documentation.
That the Python developers didn't pick up on the issue is not
surprising. I'm not sure how many of them are (py)sqlite users, probably
relatively few.
I would be surprised if they had never used ANY database. A little
thing like dynamic field typing will simply make it impossible to
migrate your Sqlite data to a *real* database.

What I'll do is re-format my rant, suggest how *I* would do the
documentation, fix the errors I found in the examples and send it
off to the Python bug tracking as suggested in the manuals.

How's that as a plan?
>
Skip
Sep 5 '06 #7

>What I'll do is re-format my rant, suggest how *I* would do the
documentation, fix the errors I found in the examples and send it off
to the Python bug tracking as suggested in the manuals.
>How's that as a plan?
That's fine. Reformat your rant as a documentation bug report on
SourceForge:

http://sourceforge.net/projects/python

If you mention specific sqlite documentation urls you think should be
referenced in the pysqlite docs it would make it easier to accept.

Skip

Sep 5 '06 #8

me********@aol.com wrote:
sk**@pobox.com wrote:
>>>>I think your whole experience is based on it.
> >But shouldn't a significant feature like that be explained in the
>Python manuals? Why should I go dig up Sqlite FAQs to learn what
>should have been in the manuals?

I don't know, but I will take a stab at a plausible explanation. First,
sqlite support has only been in Python for a month or three. Its first
official unveiling will be when 2.5 is released.


Although possibly too late for the final release, now would be a
good time to straighten out the documentation.
And you would be the best person to do it, since you're the one this has
bitten in the tender parts.
>
>>Second, it's common when
wrapping functionality into Python to rely on the documentation for the
thing being wrapped. The thinner the wrapper, the more you tend to rely on
the underlying documentation. Also, the more functionally rich the thing
you've wrapped, the more you rely on the underlying documentation. I
wouldn't be at all surprised if the pysqlite author operated under that
assumption.


Ok, that's certainly plausible. But it's not an excuse. The thinner the
documentation, the greater the emphasis should be made to point
the reader to a more adequate source. Simply listing the Sqlite home
page at the bottom of the page is hardly good enough. It should be
explicitly stated in bold letters that the reader should go read the
Sqlite FAQ because it radically differs from *real* databases, and
provide a separate link to it in the body of the documentation.
Whoa, there! This isn't commercial software we are talking about. While
I appreciate the need to continually better Python's documentation, the
"should" implies a moral imperative that the (volunteer) developers are
unlikely to find compelling.
>
>>That the Python developers didn't pick up on the issue is not
surprising. I'm not sure how many of them are (py)sqlite users, probably
relatively few.


I would be surprised if they had never used ANY database. A little
thing like dynamic field typing will simply make it impossible to
migrate your Sqlite data to a *real* database.

What I'll do is re-format my rant, suggest how *I* would do the
documentation, fix the errors I found in the examples and send it
off to the Python bug tracking as suggested in the manuals.

How's that as a plan?
That's the ticket. Great idea. Changes to the documentation can be
suggested in plain ASCII, you don't have to grok the LaTeX markup.

regards
Steve
--
Steve Holden +44 150 684 7255 +1 800 494 3119
Holden Web LLC/Ltd http://www.holdenweb.com
Skype: holdenweb http://holdenweb.blogspot.com
Recent Ramblings http://del.icio.us/steve.holden

Sep 6 '06 #9


me********@aol.com wrote:
(snip)
But shouldn't a significant feature like that be explained
in the Python manuals?
Why should it ? It's a SQLite feature, not a Python one.
Why should I go dig up Sqlite
FAQs to learn what should have been in the manuals?
Why should you read the manuals at all then ?

>Live with it or use a real RDBMS.

I don't mind living with it as long as it's documented.
It is. In SQLite manual. Or do you hope the Python manual to also fully
document PostgreSQL, MySQL, Oracle, Apache, Posix, Win32 etc ?
--
bruno desthuilliers
python -c "print '@'.join(['.'.join([w[::-1] for w in p.split('.')]) for
p in 'o****@xiludom.gro'.split('@')])"
Sep 6 '06 #11

Bruno Desthuilliers wrote:
me********@aol.com wrote:
I don't mind living with it as long as it's documented.

It is. In SQLite manual. Or do you hope the Python manual to also fully
document PostgreSQL, MySQL, Oracle, Apache, Posix, Win32 etc ?
With those other applications, you have a separate download. With
sqlite, you don't, on Windows at least. Surely all the 'included
batteries' should have local documentation, especially with the type
conversions.

--
Ben Sizer

Sep 6 '06 #12

Bruno Desthuilliers <on***@xiludom.growrote:
>
me********@aol.com wrote:
(snip)
>But shouldn't a significant feature like that be explained
in the Python manuals?

Why should it ? It's a SQLite feature, not a Python one.
You have missed the key point that, as of Python 2.5, SQLite 3 is part of
the Python 2.5 standard library. So, at this point, it IS a Python
feature.
>I don't mind living with it as long as it's documented.

It is. In SQLite manual. Or do you hope the Python manual to also fully
document PostgreSQL, MySQL, Oracle, Apache, Posix, Win32 etc ?
Warning: misplaced sarcasm detected....
--
- Tim Roberts, ti**@probo.com
Providenza & Boekelheide, Inc.
Sep 7 '06 #13

Bruno Desthuilliers wrote:
me********@aol.com wrote:
Probably just me. I've only been using Access and SQL Server
for 12 years, so I'm sure my opinions don't count for anything.

SQLite never pretended to be a full-blown RDBMS - just a lightweight
simple embedded database as SQL-compliant as possible.
Ah, *you* haven't read the documentation either!

"as SQL-compliant as possible"?

ROTFLMAO!
From SQLite Homepage
documentation
Available Documentation
Distinctive Features

<quote *emphasis added*>
This page highlights some of the characteristics of SQLite
that are unusual and which make SQLite different from many
other SQL database engines.

Manifest typing

Most SQL database engines use static typing. A datatype is
associated with each column in a table and only values of
that particular datatype are allowed to be stored in that
column. SQLite relaxes this restriction by using manifest
typing.

In manifest typing, the datatype is a property of the
value itself, not of the column in which the value is
stored. SQLite thus allows the user to store any value
of any datatype into any column regardless of the declared
type of that column. (There are some exceptions to this
rule: An INTEGER PRIMARY KEY column may only store
integers. And SQLite attempts to coerce values into the
declared datatype of the column when it can.)

***********************************************************
* The SQL language specification calls for static typing. *
***********************************************************

So some people feel that the use of manifest typing is a
bug in SQLite. But the authors of SQLite feel very
strongly that this is a feature.

**********************************************************
* The authors argue that static typing is a bug in the   *
* SQL specification that SQLite has fixed in a backwards *
* compatible way.                                        *
**********************************************************
</quote>
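[Editor's note] The coercion exception mentioned at the end of the quote ("SQLite attempts to coerce values into the declared datatype of the column when it can") is easy to demonstrate. A sketch in Python 3 syntax:

```python
import sqlite3

# A column declared [integer] coerces numeric text on insert, while
# text that cannot be coerced is stored as-is with its own type.
con = sqlite3.connect(":memory:")
con.execute("create table t (c integer)")
con.executemany("insert into t values (?)", [('10',), ('abc',)])
print(con.execute("select c, typeof(c) from t").fetchall())
# [(10, 'integer'), ('abc', 'text')]
```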

"Fixed"? Up until now, I didn't think it was possible for
crackpot theories to be implemented in computer science.
This is absolutely the craziest thing I've ever heard.

Manifest typing doesn't work *in theory*!! Sure, through
diligent data integrity management it can be made to work
in practice. But when that's applied, guess what? All your
fields end up having the same data type anyway. Duh.

And Python's inclusion of this into the standard library
is like the AMA endorsing homeopathy. It should have
been pointed out in the Python Manuals that SQLite3 is

----NOT <----

SQL-compliant, never will be, and true SQL-compliant code

----CANNOT <----

be made to work in this environment. So much for
"If switching to a larger database such as PostgreSQL or Oracle
is later necessary, the switch should be relatively easy."

That does not, of course, mean I don't have a use for a light,
simple database that's part of the standard library. But I could
have saved myself a lot of wailing and gnashing of teeth if I
hadn't been lied to by the Python documentation that, like you,
has the mistaken belief that SQLite3 is SQL-compliant.

Fixing the documentation is now becoming an enormous task.

What are the chances that anything I send in as a bug report
will simply be ignored? Kind of like the Emperor's New Clothes, eh?
It would be an admission of ignorance and stupidity on the part
of the Python Development Team, wouldn't it?
In its category,
it beats Access and MySQL hands down. Now if you want a real RDBMS,
you've just failed to choose the right tool. May I suggest PostgreSQL ?

(snip useless rant)

--
bruno desthuilliers
python -c "print '@'.join(['.'.join([w[::-1] for w in p.split('.')]) for
p in 'o****@xiludom.gro'.split('@')])"
Sep 7 '06 #14

me********@aol.com wrote:
Bruno Desthuilliers wrote:
>>me********@aol.com wrote:
>>>Probably just me. I've only been using Access and SQL Server
for 12 years, so I'm sure my opinions don't count for anything.

SQLite never pretended to be a full-blown RDBMS - just a lightweight
simple embedded database as SQL-compliant as possible.


Ah, *you* haven't read the documentation either!

"as SQL-compliant as possible"?

ROTFLMAO!
>From SQLite Homepage
documentation
Available Documentation
Distinctive Features

<quote *emphasis added*>
This page highlights some of the characteristics of SQLite
that are unusual and which make SQLite different from many
other SQL database engines.

Manifest typing

Most SQL database engines use static typing. A datatype is
associated with each column in a table and only values of
that particular datatype are allowed to be stored in that
column. SQLite relaxes this restriction by using manifest
typing.

In manifest typing, the datatype is a property of the
value itself, not of the column in which the value is
stored. SQLite thus allows the user to store any value
of any datatype into any column regardless of the declared
type of that column. (There are some exceptions to this
rule: An INTEGER PRIMARY KEY column may only store
integers. And SQLite attempts to coerce values into the
declared datatype of the column when it can.)

***********************************************************
* The SQL language specification calls for static typing. *
***********************************************************

So some people feel that the use of manifest typing is a
bug in SQLite. But the authors of SQLite feel very
strongly that this is a feature.

**********************************************************
* The authors argue that static typing is a bug in the   *
* SQL specification that SQLite has fixed in a backwards *
* compatible way.                                        *
**********************************************************
</quote>

"Fixed"? Up until now, I didn't think it was possible for
crackpot theories to be implemented in computer science.
This is absolutely the craziest thing I've ever heard.

Manifest typing doesn't work *in theory*!! Sure, through
diligent data integrity management it can be made to work
in practice. But when that's applied, guess what? All your
fields end up having the same data type anyway. Duh.

And Python's inclusion of this into the standard library
is like the AMA endorsing homeopathy. It should have
been pointed out in the Python Manuals that SQLite3 is

----NOT <----

SQL-compliant, never will be, and true SQL-compliant code

----CANNOT <----

be made to work in this environment. So much for
"If switching to a larger database such as PostgreSQL or Oracle
is later necessary, the switch should be relatively easy."
Sorry. I have written code that started out on SQLite and migrated it to
other databases. Sure, your mileage may vary. But the wailing and
gnashing of teeth is completely unnecessary.
That does not, of course, mean I don't have a use for a light,
simple database that's part of the standard library. But I could
have saved myself a lot of wailing and gnashing of teeth if I
hadn't been lied to by the Python documentation that, like you,
has the mistaken belief that SQLite3 is SQL-compliant.

Fixing the documentation is now becoming an enormous task.

What are the chances that anything I send in as a bug report
will simply be ignored? Kind of like the Emperor's New Clothes, eh?
It would be an admission of ignorance and stupidity on the part
of the Python Development Team, wouldn't it?
No, when they need ignorance and stupidity they normally look to the
user base ;-)

I'm sure your doc patch will receive the same courteous consideration
that my early suggestions for amendment did, and your name will join
mine in the list of contributors.

regards
Steve
--
Steve Holden +44 150 684 7255 +1 800 494 3119
Holden Web LLC/Ltd http://www.holdenweb.com
Skype: holdenweb http://holdenweb.blogspot.com
Recent Ramblings http://del.icio.us/steve.holden

Sep 7 '06 #15

me********@aol.com wrote:
Bruno Desthuilliers wrote:
SQLite never pretended to be a full-blown RDBMS - just a lightweight
simple embedded database as SQL-compliant as possible.

Ah, *you* haven't read the documentation either!

"as SQL-compliant as possible"?

ROTFLMAO!
No need to be rude really. In this context "as SQL-compliant as
possible" means, "as SQL-compliant as it is possible to be within the
project's restrictions", which presumably refer to code size and speed.
It's a reasonable trade-off.
**********************************************************
* The authors argue that static typing is a bug in the   *
* SQL specification that SQLite has fixed in a backwards *
* compatible way.                                        *
**********************************************************
</quote>

"Fixed"? Up until now, I didn't think it was possible for
crackpot theories to be implemented in computer science.
This is absolutely the craziest thing I've ever heard.
It's not a crackpot theory. It's a completely reasonable theory. SQL is
based on relational algebra, which provides a mathematical set of
operators for grouping data that is stored in separate sets. That data
is selected and projected according to its value, and nothing else. The
concept of it having a 'type' has been overlaid on top of this,
presumably to facilitate efficient implementation, which tends to
require fixed-width rows (and hence columns). It's not necessary in any
sense, and it's reasonable to argue that if it was trivial to implement
variable width columns as efficiently as fixed width columns, that
explicit data types might never have needed to exist.
So much for
"If switching to a larger database such as PostgreSQL or Oracle
is later necessary, the switch should be relatively easy."
If you rely too much on a language-enforced data type rather than the
values of the underlying data, perhaps Python is not for you!
Personally I've migrated from SQLite to MySQL a couple of times (on
small projects, granted) and not found it to be a problem at all.
Fixing the documentation is now becoming an enormous task.
I don't think so... it doesn't take much to say that the module
implements a subset of SQL but ignores declared data types.
What are the chances that anything I send in as a bug report
will simply be ignored? Kind of like the Emperor's New Clothes, eh?
It would be an admission of ignorance and stupidity on the part
of the Python Development Team, wouldn't it?
Why get so bitter over this? I agree the docs need fixing but you make
it sound like this was a deliberate attempt to make you waste your
time.

--
Ben Sizer

Sep 8 '06 #16

P: n/a
While I can understand your frustration, I think it is
important to think about the tone in our postings here.
Hydrocephalus is one of the most common birth defects,
and it's not terribly unlikely that someone who reads
this has a family member or someone else in his proximity
who suffers from this condition.

me********@aol.com wrote:
"Fixed"? Up until now, I didn't think it was possible for
crackpot theories to be implemented in computer science.
This is absolutely the craziest thing I've ever heard.
Still, many people with lots of experience in databases
use it, and prefer it for certain kinds of applications.
All systems have limitations and deviations, and those
limitations and deviations are stated more clearly for
SQLite than for most commercial products at least. The
market leader Oracle still can't store empty strings in
VARCHAR fields for instance. They are silently converted
to NULL. I'm pretty sure that has been in clear violation
to the official spec since 1986 at least.

As far as I understand, no one here is forcing you to use
SQLite, and with your long experience of MS Access I'd
expect you to be fairly used to "almost SQL"... It's
some time since I used Jet/Access now, but I had much
more problems with that than I've had with SQLite.

SQLite is built in Tcl, by someone who appreciates the
way Tcl works, with its weak typing. I don't think Tcl's
type handling is nearly as clever as Python's, but I
think it's a good thing that Python's standard lib finally
has a DB-API compliant module, and while I would have
preferred something that was closer to standard SQL, I
don't know of a better candidate than SQLite.

It's good that it's usable without a server setup, and
that it's very light weight. A Jet engine is obviously
not an option, and I would have preferred SQLite even
if Jet was open source and worked on all platforms.
(Well, if JET *was* open source, I suspect it would
have been fixed by now.) It's possible that one could
have used the embedded version of Firebird instead, but
in my experience that's not nearly as lean or easy to
deploy.

With your long experience of Access and SQL Server I'm
sure you know well that any attempt to build a working
database application requires extensive knowledge of
the backend to understand its peculiarities and
limitations.

The list of software projects where not quite competent
developers built Access applications that worked ok in
small scale tests and failed catastrophically in real
life is looong...

Of course, if you've stayed with one vendor for 15 years,
I can imagine that you've forgotten how long it took you
Having worked with half a dozen backends or so, I'm no
longer surprised that SQL can be interpreted in so many
ways... I agree that SQLite is unique in its approach
to typing, but if you are aware of this, it's really not
a big problem.

Sep 8 '06 #17

P: n/a
I've made the following edits:

Index: whatsnew25.tex
===================================================================
--- whatsnew25.tex (revision 51828)
+++ whatsnew25.tex (working copy)
@@ -2116,14 +2116,16 @@
SQLite embedded database, has been added to the standard library under
the package name \module{sqlite3}.

-SQLite is a C library that provides a SQL-language database that
-stores data in disk files without requiring a separate server process.
+SQLite is a C library that provides a lightweight disk-based database
+that doesn't require a separate server process and allows accessing
+the database using a nonstandard variant of the SQL query language.
+Some applications can use SQLite for internal data storage. It's also
+possible to prototype an application using SQLite and then port the
+code to a larger database such as PostgreSQL or Oracle.
+
pysqlite was written by Gerhard H\"aring and provides a SQL interface
compliant with the DB-API 2.0 specification described by
-\pep{249}. This means that it should be possible to write the first
-version of your applications using SQLite for data storage. If
-switching to a larger database such as PostgreSQL or Oracle is
-later necessary, the switch should be relatively easy.
+\pep{249}.

If you're compiling the Python source yourself, note that the source
tree doesn't include the SQLite code, only the wrapper module.
Sep 8 '06 #18

P: n/a
Ben Sizer wrote:
me********@aol.com wrote:
Bruno Desthuilliers wrote:
SQLite never pretended to be a full-blown RDBMS - just a lightweight
simple embedded database as SQL-compliant as possible.
Ah, *you* haven't read the documentation either!

"as SQL-compliant as possible"?

ROTFLMAO!

No need to be rude really. In this context "as SQL-compliant as
possible" means, "as SQL-compliant as it is possible to be within the
project's restrictions", which presumably refer to code size and speed.
It's a reasonable trade-off.
And I accept those restrictions. I haven't complained about SQLite
being a subset of SQL. It's just that it's not SQL, so it can't even be
a subset.
>
**********************************************************
* The authors argue that static typing is a bug in the   *
* SQL specification that SQLite has fixed in a backwards *
* compatible way.                                        *
**********************************************************
</quote>

"Fixed"? Up until now, I didn't think it was possible for
crackpot theories to be implemented in computer science.
This is absolutely the craziest thing I've ever heard.

It's not a crackpot theory. It's a completely reasonable theory. SQL is
based on relational algebra, which provides a mathematical set of
operators for grouping data that is stored in separate sets. That data
is selected and projected according to its value, and nothing else.
Ok, it's not crackpot with respect to relational algebra.
The
concept of it having a 'type' has been overlaid on top of this,
presumably to facilitate efficient implementation, which tends to
require fixed-width rows (and hence columns). It's not necessary in any
sense, and it's reasonable to argue that if it was trivial to implement
variable width columns as efficiently as fixed width columns, that
explicit data types might never have needed to exist
But they do exist as far as the SQL Language Specification is
concerned. Isn't this just like the parallel postulate in Euclidean
Geometry? Sure, the parallel postulate isn't absolute truth in
geometry theory, but once you make it an axiom, then calling
it a bug and trying to fix it while claiming it's still Euclidean
Geometry is crackpot math.

If the SQLite author wants to make a new system based on a
different set of relational algebra axioms, that's fine. But claiming
the SQL Language Specification axioms are "bugs" that need to be
"fixed" is crackpot.
>
So much for
"If switching to a larger database such as PostgreSQL or Oracle
is later necessary, the switch should be relatively easy."

If you rely too much on a language-enforced data type rather than the
values of the underlying data, perhaps Python is not for you!
Suppose someone said "this is a new implementation of Python".
And all your programs crashed in it. Then, buried deep inside the
documentation (so as not to affect sales), you find that the author
has an issue with dynamic data types and in HIS version of Python,
all data types are static. Wouldn't you argue that what he has
isn't Python?
Personally I've migrated from SQLite to MySQL a couple of times (on
small projects, granted) and not found it to be a problem at all.
And yet, I, knowing how SQL is supposed to work, had all
kinds of problems. Why do you think I started this thread?
Because the kind of things that work in SQL-compliant systems
weren't working in SQLite.
>
Fixing the documentation is now becoming an enormous task.

I don't think so... it doesn't take much to say that the module
implements a subset of SQL but ignores declared data types.
So I can't just say

WHERE qty=12

I have to say

WHERE (qty=12) OR (qty="12")

otherwise I can't guarantee that my query will return the records
I want. Oh, and that will cause a Type mis-match error in
MS-Access meaning I literally can't migrate this query from
SQLite to MS-Access (or any other SQL-compliant database).
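For what it's worth, the double-comparison problem is easy to reproduce with the sqlite3 module (a minimal sketch; the table and column names are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# A column declared without a type gets no affinity, so values keep
# whatever storage class they arrive with.
cur.execute("CREATE TABLE orders (qty)")
cur.execute("INSERT INTO orders VALUES (12)")    # stored as INTEGER
cur.execute("INSERT INTO orders VALUES ('12')")  # stored as TEXT

# The numeric comparison sees only the integer row, because in SQLite
# an INTEGER value never compares equal to a TEXT value...
print(cur.execute("SELECT COUNT(*) FROM orders WHERE qty = 12").fetchone()[0])  # 1

# ...so catching both storage classes takes both comparisons.
print(cur.execute(
    "SELECT COUNT(*) FROM orders WHERE qty = 12 OR qty = '12'").fetchone()[0])  # 2
```

(With a declared numeric type the affinity rules would coerce '12' to 12 on insert; it is genuinely non-numeric text that stays text.)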

Do you know what INNER JOIN means?

Do you know how important it is to a relational database?

Can you explain how an INNER JOIN can even work, in theory,
with dynamic data types?

The implications of violating the SQL Language Specification
are much more serious than you think.
>
What are the chances that anything I send in as a bug report
will simply be ignored? Kind of like the Emperor's New Clothes, eh?
It would be an admission of ignorance and stupidity on the part
of the Python Development Team, wouldn't it?

Why get so bitter over this? I agree the docs need fixing but you make
it sound like this was a deliberate attempt to make you waste your
time.
I'm not bitter, just being a squeaky wheel.

And I'd be more than happy to make PROPER corrections to the
docs if I thought the effort wouldn't be wasted. But in looking over
this thread, it certainly appears that there are very few who
understand the issue.
>
--
Ben Sizer
Sep 8 '06 #19

P: n/a
me********@aol.com wrote:
Ben Sizer wrote:
>>me********@aol.com wrote:
[...]
>>>What are the chances that anything I send in as a bug report
will simply be ignored? Kind of like the Emperor's New Clothes, eh?
It would be an admission of ignorance and stupidity on the part
of the Python Development Team, wouldn't it?

Why get so bitter over this? I agree the docs need fixing but you make
it sound like this was a deliberate attempt to make you waste your
time.


I'm not bitter, just being a squeaky wheel.
Ain't that the truth :-)
And I'd be more than happy to make PROPER corrections to the
docs if I thought the effort wouldn't be wasted. But in looking over
this thread, it certainly appears that there are very few who
understand the issue.
Well, as I've already suggested, sound corrections and/or additions to
the documentation are *very* unlikely to be refused. I suspect you need
to be a little more pragmatic with regard to SQLite. If it doesn't suit
your needs, that's fine. If you can suggest changes that will avoid
others experiencing the same disappointment you have, even better.

regards
Steve
--
Steve Holden +44 150 684 7255 +1 800 494 3119
Holden Web LLC/Ltd http://www.holdenweb.com
Skype: holdenweb http://holdenweb.blogspot.com
Recent Ramblings http://del.icio.us/steve.holden

Sep 8 '06 #20

P: n/a
Magnus Lycka wrote:
While I can understand your frustration, I think it is
important to think about the tone in our postings here.
Hydrocephalus is one of the most common birth defects,
and it's not terribly unlikely that someone who reads
this has a family member or someone else in his proximity
who suffers from this condition.
Ok, I apologize. No more Larry the Cable Guy humor.
I'll stick with Monty Python and Douglas Adams.
>
me********@aol.com wrote:
"Fixed"? Up until now, I didn't think it was possible for
crackpot theories to be implemented in computer science.
This is absolutely the craziest thing I've ever heard.

Still, many people with lots of experience in databases
use it, and prefer it for certain kinds of applications.
All systems have limitations and deviations, and those
limitations and deviations are stated more clearly for
SQLite than for most commercial products at least.
But they're not stated up front. The deviations from
standard SQL are extreme enough to warrant mentioning
in the Python docs.
The
market leader Oracle still can't store empty strings in
VARCHAR fields for instance. They are silently converted
to NULL. I'm pretty sure that has been in clear violation
to the official spec since 1986 at least.
But does Oracle claim the specification is a bug?
>
As far as I understand, noone here is forcing you to use
SQLite,
As long as it's included in the standard library, I'm going
to use it. There is nothing wrong with the idea of a "lite"
database. It is very misleading, though, to claim it's SQL.
and with your long experience of MS Access I'd
expect you to be fairly used to "almost SQL"... It's
some time since I used Jet/Access now, but I had much
more problems with that than I've had with SQLite.

SQLite is built in Tcl, by someone who appreciates the
way Tcl works, with its weak typing. I don't think Tcl's
type handling is nearly as clever as Python's, but I
think it's a good thing that Python's standard lib finally
has a DB-API compliant module, and while I would have
preferred something that was closer to standard SQL, I
don't know of a better candidate than SQLite.

It's good that it's usable without a server setup, and
that it's very light weight. A Jet engine is obviously
not an option, and I would have preferred SQLite even
if Jet was open source and worked on all platforms.
(Well, if JET *was* open source, I suspect it would
have been fixed by now.) It's possible that one could
have used the embedded version of Firebird instead, but
in my experience that's not nearly as lean or easy to
deploy.

With your long experience of Access and SQL Server I'm
sure you know well that any attempt to build a working
database application requires extensive knowledge of
the backend to understand its peculiarities and
limitations.

The list of software projects where not quite competent
developers built Access applications that worked ok in
small scale tests and failed catastrophically in real
life is looong...

Of course, if you've stayed with one vendor for 15 years,
I can imagine that you've forgotten how long it took you
Having worked with half a dozen backends or so, I'm no
longer surprised that SQL can be interpreted in so many
ways... I agree that SQLite is unique in its approach
to typing, but if you are aware of this, it's really not
a big problem.
Ok, I'm now aware of it.

I'm aware that all my WHERE clauses will have to be
modified to allow for text that may show up in my numeric
fields.

I'm aware that all my ORDER BY clauses will have to
be modified for the same reason.

And I'm aware that INNER JOIN doesn't even work in
theory with dynamic typed fields and can't even be
worked around like the WHERE and ORDER BY clauses
can.
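The ORDER BY worry is easy to check: SQLite sorts every number before every piece of text, so stray strings in a numeric column cluster at the end in lexicographic order. A minimal sketch with an invented table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE readings (val)")  # untyped: values keep their storage class
cur.executemany("INSERT INTO readings VALUES (?)",
                [(9,), ('100',), (25,), ('3',)])

# Numbers sort numerically among themselves, text sorts lexicographically,
# and the whole TEXT group lands after the whole numeric group.
rows = [r[0] for r in cur.execute("SELECT val FROM readings ORDER BY val")]
print(rows)  # [9, 25, '100', '3']
```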

And given a typical query I use:

SELECT tblLocations.SiteID,
tblSites.SiteName,
tblLocations.IEPALocationNumber,
tblZones.Zone,
tblSampleEvent.SampleEventCode,
tblSampleAnalyses.SampleDate,
tblSamples.SampleMatrixID,
tblSamples.SampleNumber,
tblRefQAQCTypes.QAQCType,
tblResults.LabSampleNumber,
tblRefParameterGroups.ParameterGroupCode,
tblSampleAnalyses.AnalysisID,
tblRefAnalyses.AnalysisTypeCode,
tblRefParameters.ParameterReportLabel,
tblRefParameters.CASNumber,
tblResults.Result,
tblResults.Qualifier,
tblRefUnitOfMeasure.Unit,
Val(Format$(IIf(([tblResults].[unitid]=5) Or
([tblResults].[unitid]=7),
[result]/1000,[result]),
"0.0000"))
AS the_ppm_result,
IIf([tblResults].[unitid]=7,"mg/kg",
IIf([tblResults].[unitid]=5,"mg/L",
[unit]))
AS the_ppm_unit,
Val(Format$(IIf(([tblResults].[unitid]=5) Or
([tblResults].[unitid]=7),
[quantitationlimit]/1000,
[quantitationlimit]),"0.0000"))
AS the_ppm_dl,
IIf(IsNull([qualifier]) Or
([qualifier] Not Like "*U*"),1,0)
AS detect,
IIf([detect] And
[the_ppm_result]>[tbl_GW_classi],"1","0")
AS x,
IIf([detect] And
[the_ppm_result]>[tbl_GW_classi],"!","_")
AS xflag,
Master_Lookup.Tbl_pH_range,
Master_Lookup.Tbl_GW_units,
Master_Lookup.Tbl_GW_ClassI,
tblResults.Comment,
IIf([detect],[the_ppm_result],[the_ppm_result]/2)
AS detected_result,
IIf([detect],[the_ppm_result],Null)
AS detected_max_result
FROM tblSites
INNER JOIN ((tblSampleEvent
INNER JOIN (tblRefQAQCTypes
INNER JOIN ((tblSamples
INNER JOIN tblLocations
ON tblSamples.TrueLocation = tblLocations.LocationID)
INNER JOIN tblZones
ON tblLocations.LocationID = tblZones.LocationID)
ON tblRefQAQCTypes.QAQCID = tblSamples.QAQCID)
ON tblSampleEvent.EventID = tblSamples.EventID)
INNER JOIN ((tblRefAnalyses
INNER JOIN tblSampleAnalyses
ON tblRefAnalyses.AnalysisID = tblSampleAnalyses.AnalysisID)
INNER JOIN (tblRefUnitOfMeasure
INNER JOIN ((tblRefParameterGroups
INNER JOIN (tblRefParameters
LEFT JOIN Master_Lookup
ON tblRefParameters.CASNumber = Master_Lookup.Tbl__B_cas)
ON tblRefParameterGroups.ParameterGroupID =
tblRefParameters.ParameterGroupID)
INNER JOIN tblResults
ON tblRefParameters.ParameterID = tblResults.ParameterID)
ON tblRefUnitOfMeasure.UnitID = tblResults.UnitID)
ON tblSampleAnalyses.SampleAnalysisID = tblResults.SampleAnalysisID)
ON tblSamples.SampleID = tblSampleAnalyses.SampleID)
ON tblSites.SiteID = tblLocations.SiteID
WHERE (((tblLocations.SiteID)<51)
AND (Not (tblLocations.IEPALocationNumber)="G116")
AND ((tblZones.Zone)="UMAquifer")
AND ((tblSampleEvent.SampleEventCode) Like "200[2-6]Q*")
AND ((tblSamples.SampleMatrixID)=1)
AND ((tblSampleAnalyses.AnalysisID)>1)
AND ((tblResults.Qualifier) Is Null)
AND ((Master_Lookup.Tbl_pH_range)="pH 6.9 to 7.24" Or
(Master_Lookup.Tbl_pH_range) Is Null)
AND ((tblSamples.QAQCID)=1 Or (tblSamples.QAQCID)=4))
OR (((tblLocations.SiteID)<51)
AND (Not (tblLocations.IEPALocationNumber)="G116")
AND ((tblZones.Zone)="UMAquifer")
AND ((tblSampleEvent.SampleEventCode) Like "200[2-6]Q*")
AND ((tblSamples.SampleMatrixID)=1)
AND ((tblSampleAnalyses.AnalysisID)>1)
AND ((tblResults.Qualifier) Not Like "*Z*" And
(tblResults.Qualifier) Not Like "*R*")
AND ((Master_Lookup.Tbl_pH_range)="pH 6.9 to 7.24" Or
(Master_Lookup.Tbl_pH_range) Is Null)
AND ((tblSamples.QAQCID)=1 Or
(tblSamples.QAQCID)=4))
ORDER BY tblLocations.SiteID,
tblLocations.IEPALocationNumber,
tblSampleEvent.SampleEventCode,
tblRefParameterGroups.ParameterGroupCode,
tblRefParameters.ParameterReportLabel,
tblSampleAnalyses.SampleDate;

you're saying I don't have a big problem?

Sep 8 '06 #21

P: n/a
"me********@aol.com" <me********@aol.comwrites:
As long as it's included in the standard library, I'm going
to use it. There is nothing wrong with the idea of a "lite"
database. It is very misleading, though, to claim it's SQL.
Maybe it could be renamed by changing the "t" in "lite" to "k".
Sep 8 '06 #22

P: n/a
In <11**********************@m73g2000cwd.googlegroups .com>,
me********@aol.com wrote:
Ben Sizer wrote:
>I don't think so... it doesn't take much to say that the module
implements a subset of SQL but ignores declared data types.

So I can't just say

WHERE qty=12

I have to say

WHERE (qty=12) OR (qty="12")
No, you just write the first and don't stuff strings into that column.
It's your choice after all.
Do you know what INNER JOIN means?

Do you know how important it is to a relational database?

Can you explain how an INNER JOIN can even work, in theory,
with dynamic data types?
The same as with static types!? Where's the problem?

table_a:

id name
----- ----------
42 bob
'foo' methusalem
'42' gaylord

table_b:

id age
----- ------
42 23
'foo' 1000
0.5 None

Join on `id`:

id name age
----- ---------- ----
42 bob 23
'foo' methusalem 1000
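Those tables can be fed straight into the sqlite3 module to check the claim (a sketch; the text '42' row drops out of the join because a TEXT value never compares equal to an INTEGER):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE table_a (id, name)")
cur.execute("CREATE TABLE table_b (id, age)")
cur.executemany("INSERT INTO table_a VALUES (?, ?)",
                [(42, 'bob'), ('foo', 'methusalem'), ('42', 'gaylord')])
cur.executemany("INSERT INTO table_b VALUES (?, ?)",
                [(42, 23), ('foo', 1000), (0.5, None)])

# Only rows whose ids share a storage class (and value) join.
rows = cur.execute("""
    SELECT a.id, a.name, b.age
    FROM table_a a INNER JOIN table_b b ON a.id = b.id
    ORDER BY a.name""").fetchall()
print(rows)  # [(42, 'bob', 23), ('foo', 'methusalem', 1000)]
```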
The implications of violating the SQL Language Specification
are much more serious than you think.
Or not so serious than you think.

Ciao,
Marc 'BlackJack' Rintsch
Sep 8 '06 #23

P: n/a
me********@aol.com wrote:
Do you know what INNER JOIN means?

Do you know how important it is to a relational database?

Can you explain how an INNER JOIN can even work, in theory,
with dynamic data types?
Let's stop the pissing contest and just see how it works. After all,
this is Python and we can easily just try it out. Here's my example.
Please tell me how this causes unexpected results, and why it isn't SQL.
Please modify my example to get it to cause a catastrophe, and post it
here so we can see the errors of our ways and be properly humbled.

#-- Preliminaries:
>>> from pysqlite2 import dbapi2 as sqlite
>>> con = sqlite.connect("test.db")
>>> cur = con.cursor()

#-- Create 3 tables for a M:M relationship between customers
#-- and categories:
>>> cur.execute("create table customers (id integer primary key autoincrement, name char)")
>>> cur.execute("create table categories (id integer primary key autoincrement, name char)")
>>> cur.execute("create table cust_cat (id integer primary key autoincrement, cust_id integer, cat_id integer)")

#-- Insert some test data into customers and categories:
>>> cur.execute("insert into customers (name) values ('Ziggy Marley')")
>>> cur.execute("insert into customers (name) values ('David Bowie')")
>>> cur.execute("insert into categories (name) values ('Glam Rock')")
>>> cur.execute("insert into categories (name) values ('Nuevo Reggae')")
>>> cur.execute("insert into categories (name) values ('Male Singers')")

#-- Take a look at the data (and retrieve the pk's):
>>> cur.execute("select * from customers")
>>> cur.fetchall()
[(1, u'Ziggy Marley'), (2, u'David Bowie')]
>>> cur.execute("select * from categories")
>>> cur.fetchall()
[(1, u'Glam Rock'), (2, u'Nuevo Reggae'), (3, u'Male Singers')]

#-- Relate some customers to some categories. Note how I send strings
#-- in some places and ints in others:
>>> cur.execute("insert into cust_cat (cust_id, cat_id) values (1, 3)")
>>> cur.execute("insert into cust_cat (cust_id, cat_id) values (1, '2')")
>>> cur.execute("insert into cust_cat (cust_id, cat_id) values ('2', '1')")
>>> cur.execute("insert into cust_cat (cust_id, cat_id) values ('2', 3)")

#-- Run some queries:
>>> cur.execute("select customers.id as cust_id, customers.name as cust_name, categories.id as cat_id, categories.name as cat_name from customers inner join cust_cat on cust_cat.cust_id = customers.id inner join categories on categories.id = cust_cat.cat_id order by 2,4")
>>> cur.fetchall()
[(2, u'David Bowie', 1, u'Glam Rock'), (2, u'David Bowie', 3, u'Male Singers'),
 (1, u'Ziggy Marley', 3, u'Male Singers'), (1, u'Ziggy Marley', 2, u'Nuevo Reggae')]
>>> cur.execute("select customers.id as cust_id, customers.name as cust_name, categories.id as cat_id, categories.name as cat_name from customers inner join cust_cat on cust_cat.cust_id = customers.id inner join categories on categories.id = cust_cat.cat_id where categories.id = 1 order by 2,4")
>>> cur.fetchall()
[(2, u'David Bowie', 1, u'Glam Rock')]
>>> cur.execute("select customers.id as cust_id, customers.name as cust_name, categories.id as cat_id, categories.name as cat_name from customers inner join cust_cat on cust_cat.cust_id = customers.id inner join categories on categories.id = cust_cat.cat_id where categories.id = '1' order by 2,4")
>>> cur.fetchall()
[(2, u'David Bowie', 1, u'Glam Rock')]
>>> cur.execute("select customers.id as cust_id, customers.name as cust_name, categories.id as cat_id, categories.name as cat_name from customers inner join cust_cat on cust_cat.cust_id = customers.id inner join categories on categories.id = cust_cat.cat_id where categories.id = '2' order by 2,4")
>>> cur.fetchall()
[(1, u'Ziggy Marley', 2, u'Nuevo Reggae')]
>>> cur.execute("select customers.id as cust_id, customers.name as cust_name, categories.id as cat_id, categories.name as cat_name from customers inner join cust_cat on cust_cat.cust_id = customers.id inner join categories on categories.id = cust_cat.cat_id where categories.id = '3' order by 2,4")
>>> cur.fetchall()
[(2, u'David Bowie', 3, u'Male Singers'),
 (1, u'Ziggy Marley', 3, u'Male Singers')]
>>> cur.execute("select customers.id as cust_id, customers.name as cust_name, categories.id as cat_id, categories.name as cat_name from customers inner join cust_cat on cust_cat.cust_id = customers.id inner join categories on categories.id = cust_cat.cat_id where categories.id = 3 order by 2,4")
>>> cur.fetchall()
[(2, u'David Bowie', 3, u'Male Singers'),
 (1, u'Ziggy Marley', 3, u'Male Singers')]

If I have skipped the test case that will fail, please enlighten me.

--
Paul McNett
http://paulmcnett.com
http://dabodev.com

Sep 8 '06 #24

P: n/a
Marc 'BlackJack' Rintsch wrote:
In <11**********************@m73g2000cwd.googlegroups .com>,
me********@aol.com wrote:
Ben Sizer wrote:
I don't think so... it doesn't take much to say that the module
implements a subset of SQL but ignores declared data types.
So I can't just say

WHERE qty=12

I have to say

WHERE (qty=12) OR (qty="12")

No you just write the first and don't stuff strings into that column.
It's your choice after all.
Not when I don't control the source of the data.
Despite the laboratory having been given a static
type definition for data deliverables, I sporadically
find text in my numeric fields.

And surely you don't think I write INSERT statements
for each of the over 500,000 records I have? The records
are inserted programmatically.

Static types mean the exceptions...raise exceptions.
How am I supposed to identify the exceptions if SQLite
simply quietly converts the non-numeric data to text?
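One way to surface those exceptions instead of losing them to the silent conversion is to validate in the insert path. A hypothetical guard function (table and column names invented for illustration):

```python
import sqlite3

def checked_insert(cur, symbol, qty):
    # Hypothetical guard: refuse a non-numeric qty loudly instead of
    # letting SQLite quietly store it as text.
    if not isinstance(qty, (int, float)):
        raise TypeError("qty must be numeric, got %r" % (qty,))
    cur.execute("INSERT INTO stocks (symbol, qty) VALUES (?, ?)",
                (symbol, qty))

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE stocks (symbol TEXT, qty INTEGER)")

checked_insert(cur, 'RHAT', 100)         # fine
try:
    checked_insert(cur, 'RHAT', 'lots')  # bad lab record: rejected loudly
except TypeError as exc:
    print(exc)
```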
>
Do you know what INNER JOIN means?

Do you know how important it is to a relational database?

Can you explain how an INNER JOIN can even work, in theory,
with dynamic data types?

The same as with static types!? Where's the problem?

table_a:

id name
----- ----------
42 bob
'foo' methusalem
'42' gaylord

table_b:

id age
----- ------
42 23
'foo' 1000
0.5 None

Join on `id`:

id name age
----- ---------- ----
42 bob 23
'foo' methusalem 1000
Numbers JOINing numbers and text JOINing text doesn't illustrate
the problem. The problem is when SQLite allows bad data to be
inserted. The problem isn't that fields JOIN when they are not
supposed to, it's when they fail to JOIN when they are supposed
to.
>
The implications of violating the SQL Language Specification
are much more serious than you think.

Or not so serious than you think.
Well, I can make up examples also.

Bill for customer: Tom Smith
------------------------------------------------------------------
0003 Olsen's Standard Book of British Birds (Expurgated) $ 99.95
Bill for customer: Dick Smith
------------------------------------------------------------------
0002 A Sale of Two Titties $ 20.00
0005 David Coperfield $ 3.95
Bill for customer: Harry Smith
------------------------------------------------------------------
0001 101 Ways to Start A Fight $ 19.95
Hmm...looks like item 4 isn't being billed to anyone. That's no
way to run a business. If I do a LEFT JOIN instead of an INNER JOIN:
Bill for customer: None None
------------------------------------------------------------------
0004 Ethel the Aardvark Goes Quantity Surveying $ 9.99
Bill for customer: Tom Smith
------------------------------------------------------------------
0003 Olsen's Standard Book of British Birds (Expurgated) $ 99.95
Bill for customer: Dick Smith
------------------------------------------------------------------
0002 A Sale of Two Titties $ 20.00
0005 David Coperfield $ 3.95
Bill for customer: Harry Smith
------------------------------------------------------------------
0001 101 Ways to Start A Fight $ 19.95
I see the missing item, but still don't know who to bill it to.
What if I dumped the invoice table?

4
54
Ethel the Aardvark Goes Quantity Surveying
None
1975
9.99

3
001
Olsen's Standard Book of British Birds (Expurgated)
Olsen
None
99.95

2
032
A Sale of Two Titties
Charles Dikkens
1855
20.0

5
032
David Coperfield
Edmund Wells
1955
3.95

1
066
101 Ways to Start A Fight
some Irish gentleman
1919
19.95

Aha, it was sold to customer 54, so now I just need to dump
the customer table:

001
Tom
Smith
42

032
Dick
Smith
28

066
Harry
Smith
66

only to discover there is no customer 54! How can that happen?

invoices = [(1,'066','101 Ways to Start A Fight','some Irish
gentleman',1919,19.95), \
(2,'032','A Sale of Two Titties','Charles Dikkens',1855,20.00), \
(3,'001',"Olsen's Standard Book of British Birds
(Expurgated)","Olsen",None,99.95), \
(4,066,'Ethel the Aardvark Goes Quantity Surveying',None,1975,9.99), \
(5,'032','David Coperfield','Edmund Wells',1955,3.95)]

Oops! Forgot the quotes around the customer id for item 4.
But why didn't it become 66? Because the leading 0 made
it octal! A little static typing would have helped here.
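The octal trap itself is a Python 2 literal rule, not a database issue: an integer literal with a leading zero was read as base 8. Python 3 spells the same literal 0o66, which makes the arithmetic easy to verify:

```python
# Python 2 read the unquoted 066 as an octal literal; Python 3's
# spelling of the same literal is 0o66.
print(0o66)           # 54  (6*8 + 6)
print(int('066', 8))  # 54, the conversion Python 2 applied silently

# Quoted, the value would have stayed the text key '066' and the
# join on customer id would have matched.
print('066' == '66')  # False
```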

Now, IF the same error is repeated in EVERY table that's JOINed,
THEN the dynamic typing won't be a problem. But that never happens
in practice. I populate one table and ensure it's correct. Someone
else populates the other table and screws it up so the JOIN never
works.

Sure, errors happen with static typing. After all, the values still
have to match. Dynamic typing allows for more potential errors and,
thanks to Murphy's Law, I will have a much bigger problem with data
integrity.
>
Ciao,
Marc 'BlackJack' Rintsch
Sep 9 '06 #25

P: n/a
Ben Sizer wrote:
It's not a crackpot theory. It's a completely reasonable theory. SQL is
based on relational algebra, which provides a mathematical set of
operators for grouping data that is stored in separate sets. That data
is selected and projected according to its value, and nothing else. The
concept of it having a 'type' has been overlaid on top of this,
presumably to facilitate efficient implementation, which tends to
require fixed-width rows (and hence columns). It's not necessary in any
sense, and it's reasonable to argue that if it was trivial to implement
variable width columns as efficiently as fixed width columns, that
explicit data types might never have needed to exist.
The mathematical definition of the relational model includes
that data values are drawn from specific sets. Implementing
variable width columns has nothing to do with it. Here's the
reference:

1.3. A Relational View of Data

The term relation is used here in its accepted mathematical
sense. Given sets S1, S2, ···, Sn, (not necessarily
distinct), R is a relation on these n sets if it is a set
of n-tuples each of which has its first element from S1,
its second element from S2, and so on [1]. We shall refer to
Sj as the jth domain of R.

[Footnote 1:] More concisely, R is a subset of the Cartesian
product S1* S2 * ... * Sn.

-- E. F. Codd. "A Relational Model of Data for Large Shared
Data Banks." /Communications of the ACM/, Vol. 13, No. 6,
June 1970, pp. 377-387.
--
--Bryan
Sep 9 '06 #26

P: n/a
me********@aol.com wrote:
Sure, errors happen with static typing. After all, the values still
have to match. Dynamic typing allows for more potential errors and,
thanks to Murphy's Law, I will have a much bigger problem with data
integrity.
If this was a java or c++ list, all this rant would be more
understandable, but bashing dynamic typing in a dynamic language list
seems pointless at best (as this has been beaten to death over and over
again), flamebait at worst. It should be clear by now that there are
two (at least) alternatives:
1. Validate the data in python before (or at the same time when)
feeding the DB.
2. Forget sqlite and use a statically typed DBMS; it's not like there
is a shortage of them.

Massaging your SQL statements to make up for the lack of type checking
(even if this is always possible) would be a bad idea for more than
one reasons (complexity,portability,performance), so you'd better not
go down this road.
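George's first alternative, validating in Python before the data reaches the DB, can be sketched in a few lines (modern Python 3 syntax; the table layout and the validate() helper are invented for illustration):

```python
import sqlite3

def validate(row):
    """Coerce each field to its intended type; raises ValueError on bad data."""
    cust_no, title, year, price = row
    return (str(cust_no), str(title), int(year), float(price))

con = sqlite3.connect(":memory:")
con.execute("create table invoice (custno text, title text, year integer, price real)")

rows = [("066", "101 Ways to Start A Fight", "1919", "19.95"),
        (">68", "Strawmen and Dorothy", "2006", "Priceless")]

for row in rows:
    try:
        con.execute("insert into invoice values (?,?,?,?)", validate(row))
    except ValueError as exc:
        print("rejected:", row[1], "-", exc)   # 'Priceless' fails float()

print(con.execute("select count(*) from invoice").fetchone()[0])  # 1
```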

George

Sep 9 '06 #27

It's not a bug, it's a feature. And answered as third point in the
FAQ:
>
http://www.sqlite.org/faq.html#q3

I think your whole experience is based on it. Live with it or use a
real RDBMS.

If you are so fond of static typing, why are you using Python in the first
place? Just see it as consistency -- dynamically typed language →
dynamically typed DB columns. ;-)

Ciao,
Marc 'BlackJack' Rintsch
I have to admit I find this bogus too. It has by no means anything to
do with static typing but letting "errors pass silently" i.e.
deactivating runtime type checks as well. The problem here is that
fields are not dynamically type checked but completely untyped and only
coercion hints are present. Using a clever coercion / "type-affinity"
does not justify that there is no error case handling when the coercion
fails. This might be handled by user code ( or better by the pysqlite
wrapper ) but it appears to be redundant.
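The behaviour described here is easy to reproduce (a minimal sketch in modern Python 3; the table and values are invented): a value that can be coerced to the declared type is, and one that cannot is stored as text with no error raised.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table t (qty integer)")

# '123' can be coerced to INTEGER, so it is stored as one...
con.execute("insert into t values ('123')")
# ...but '>200' cannot, and SQLite silently stores it as text.
con.execute("insert into t values ('>200')")

for (qty,) in con.execute("select qty from t"):
    print(type(qty).__name__, qty)
# int 123
# str >200
```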

Sep 9 '06 #28


George Sakkis wrote:
me********@aol.com wrote:
Sure, errors happen with static typing. After all, the values still
have to match. Dynamic typing allows for more potential errors and,
thanks to Murphy's Law, I will have a much bigger problem with data
integrity.

If this was a java or c++ list, all this rant would be more
understandable, but bashing dynamic typing in a dynamic language list
seems pointless at best (as this has been beaten to death over and over
again), flamebait at worst.
But I'm not bashing Python's use of dynamic typing. If the
SQL Language Specification says static typing, then static typing
it is. Period.
It should be clear by now that there are
two (at least) alternatives:
1. Validate the data in python before (or at the same time when)
feeding the DB.
Data integrity is an issue even with static typing. It's a bigger
issue with dynamic typing.
2. Forget sqlite and use a statically typed DBMS; it's not like there
is a shortage of them.
I have no intention of forgetting sqlite simply because it's
now part of the standard library. I have no qualms about
using it *now* because I understand it better. But reaching
that level of understanding was like pulling teeth.

Documentation shouldn't talk down to the reader. It's always
bad when you confuse the smart people. The ignorant are
supposed to be confused. It's the job of the documentation to
educate the ignorant. Hiding the idiosyncrasies of Sqlite3
from the user who's already familiar with SQL is simply
unacceptable.
>
Massaging your SQL statements to make up for the lack of type checking
(even if this is always possible) would be a bad idea for more than
one reasons (complexity,portability,performance), so you'd better not
go down this road.

George
Sep 9 '06 #29


Dennis Lee Bieber wrote:
On 8 Sep 2006 16:46:03 -0700, "me********@aol.com" <me********@aol.com>
declaimed the following in comp.lang.python:

<snip>

After a sequence of hypothetical results of occult SQL you show
this...

invoices = [(1,'066','101 Ways to Start A Fight','some Irish gentleman',1919,19.95), \
(2,'032','A Sale of Two Titties','Charles Dikkens',1855,20.00), \
(3,'001',"Olsen's Standard Book of British Birds (Expurgated)","Olsen",None,99.95), \
(4,066,'Ethel the Aardvark Goes Quantity Surveying',None,1975,9.99), \
(5,'032','David Coperfield','Edmund Wells',1955,3.95)]

... A Python list of tuples!
Oops! Forgot the quotes around the customer id for item 4.
But why didn't it become 66? Because the leading 0 made
it octal! A little static typing would have helped here.
Do you routinely populate your databases by editing python lists of
tuples?
I don't routinely do anything, as I've only been playing with it for
4 days.
And if you do, why not show us the table definition and insert
statements that go with the above data?
I was simply following the examples in the Python manuals.
If the examples are poor usage, maybe they shouldn't be in
the manuals.
>
Or do you get the data from some file... What format is that file?
No file. Point is moot.
>
Would you agree that the following is a clean representation of your
example data, when considered as a CSV data source? (I left off the
primary key -- we'll just let SQLite add that value). I DID, however,
normalize the quoting by changing ' to ". (please ignore the line wrap
on the (Expurgated) -- it does not exist in the data file)

invoice.csv
-=-=-=-=-=-
"066","101 Ways to Start A Fight","some Irish gentleman",1919,19.95
"032","A Sale of Two Titties","Charles Dikkens",1855,20.00
"001","Olsen"s Standard Book of British Birds
(Expurgated)","Olsen",None,99.95
066,"Ethel the Aardvark Goes Quantity Surveying",None,1975,9.99
"032","David Coperfield","Edmund Wells",1955,3.95
-=-=-=-=-=-

Now, since you seem to believe that a "customer number" is a string
data type (implied by the presence of quotes around all but the "bad
data" record), I've so defined it in the table definition... Ah, but
what the heck, let's create a table with it defined as an integer too...

SQLiteTest.py
-=-=-=-=-=-=-=-
from pysqlite2 import dbapi2 as sql
import csv

TABLE_DEF_1 = """
create table invoice_1
(
ID integer primary key,
CustNo char,
Title char,
Author char,
Year integer,
Price float
) """

TABLE_DEF_2 = """
create table invoice_2
(
ID integer primary key,
CustNo integer,
Title char,
Author char,
Year integer,
Price float
) """

db = sql.connect("test.db")
cr = db.cursor()
try:
rs = cr.execute(TABLE_DEF_1)
db.commit()
except: #I know, I should be explicit
pass #assume table already exists

try:
rs = cr.execute(TABLE_DEF_2)
db.commit()
except: #I know, I should be explicit
pass #assume table already exists
fin = open("invoice.csv", "rb")
indata = csv.reader(fin)

print "\nInserting:"
for r in indata:
print r
if len(r) != 5:
print "^^^Bad Record"
else:
rs = cr.execute("""insert into invoice_1
(CustNo, Title, Author, Year, Price)
values (?,?,?,?,?)""",
r)
rs = cr.execute("""insert into invoice_2
(CustNo, Title, Author, Year, Price)
values (?,?,?,?,?)""",
r)
db.commit()

fin.close()

print "\nSelect all from Invoice_1 (CustNo is CHARACTER)"
rs = cr.execute("select * from invoice_1")
for r in cr:
print r

print "\nSelect all from Invoice_2 (CustNo is INTEGER)"
rs = cr.execute("select * from invoice_2")
for r in cr:
print r

db.close()
-=-=-=-=-=-=-=-

Now, let us run the above program, using the above data file! Again,
watch out for line wrapping (my comments will be blocked off with """ )

E:\UserData\Dennis Lee Bieber\My Documents\Python Progs>python SQLiteTest.py

Inserting:
['066', '101 Ways to Start A Fight', 'some Irish gentleman', '1919', '19.95']
['032', 'A Sale of Two Titties', 'Charles Dikkens', '1855', '20.00']
['001', 'Olsens Standard Book of British Birds (Expurgated)"', 'Olsen', 'None', '99.95']
['066', 'Ethel the Aardvark Goes Quantity Surveying', 'None', '1975', '9.99']
['032', 'David Coperfield', 'Edmund Wells', '1955', '3.95']

"""
Well, look at that... ALL the data from the file is coming in as
character strings... customer number, year, price, title, author... It's
ALL character! The difference between quoted and unquoted numbers has
been lost.
"""
How 'bout that? Maybe I should try harder to make a
better example.
>
Select all from Invoice_1 (CustNo is CHARACTER)
(1, u'066', u'101 Ways to Start A Fight', u'some Irish gentleman', 1919, 19.949999999999999)
(2, u'032', u'A Sale of Two Titties', u'Charles Dikkens', 1855, 20.0)
(3, u'001', u'Olsens Standard Book of British Birds (Expurgated)"', u'Olsen', u'None', 99.950000000000003)
(4, u'066', u'Ethel the Aardvark Goes Quantity Surveying', u'None', 1975, 9.9900000000000002)
(5, u'032', u'David Coperfield', u'Edmund Wells', 1955, 3.9500000000000002)

"""
No strange results there -- the year and price aren't stored as
strings, even though they were string data when inserted.
"""
Select all from Invoice_2 (CustNo is INTEGER)
(1, 66, u'101 Ways to Start A Fight', u'some Irish gentleman', 1919, 19.949999999999999)
(2, 32, u'A Sale of Two Titties', u'Charles Dikkens', 1855, 20.0)
(3, 1, u'Olsens Standard Book of British Birds (Expurgated)"', u'Olsen', u'None', 99.950000000000003)
(4, 66, u'Ethel the Aardvark Goes Quantity Surveying', u'None', 1975, 9.9900000000000002)
(5, 32, u'David Coperfield', u'Edmund Wells', 1955, 3.9500000000000002)

"""
And look here... Again no strange results -- SQLite didn't even
treat the leading 0 as a signal that the customer number is octal. They
come out as integers without leading 0s though -- but then again, I'm
not using a formatting statement on the output...
E:\UserData\Dennis Lee Bieber\My Documents\Python Progs>
"""

Sure, errors happen with static typing. After all, the values still
have to match. Dynamic typing allows for more potential errors and,
thanks to Murphy's Law, I will have a much bigger problem with data
integrity.
SQLite's "dynamic typing" does not mean "random" or "unpredictable"
typing.

The basic rules are fairly simple.

IF the data field is declared as a numeric type, AND the input data
can be coerced to numeric without error, it is stored and returned as a
numeric value -- one would have to pass in a data value that contained
non-numeric characters for it to become a character string.
As if that never happens.
>
IF the data field is declared as a character type, AND the input
data is a numeric, it is converted to a character representation and
stored/returned as character.

(less basic involves the application of data type converters which are
probably user supplied)
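Both rules can be checked directly (a minimal sketch in modern Python 3; the column names are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table t (n integer, c char)")

# Rule 1: a numeric column coerces numeric-looking text to a number.
# Rule 2: a character column converts a number to its text form.
con.execute("insert into t values (?, ?)", ("066", 35.14))

n, c = con.execute("select n, c from t").fetchone()
print(repr(n))   # 66      -- stored numerically; leading zero dropped, no octal
print(repr(c))   # '35.14' -- stored as text
```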
As for your example of invoices and customers, surely the
application isn't trusting the user to type in a raw "customer number"
for the invoice without first validating it by attempting to retrieve
that customer from a customer table. Or, more likely, using the customer
name to look up the number in the customer table, meaning the customer
number -- whatever it is -- /will/ match the invoice data as it was
taken directly from the database.

Any data supplied to you in file format, if humanly readable, is
going to be character strings when fed to SQLite UNLESS your code first
performs some sort of conversion on it -- and I suspect you'll detect
the bad data when doing that conversion. And, by the way, the octal
detection only happens for numeric literals IN a Python statement, not
in conversion of an external string data item to numeric.
Ok, it was a bad example.
>
>>> int("066")
66
>>> int(066)
54

Of course, if you're being supplied binary data files, you are
probably using the struct module to extract the numeric data fields...
But how, I wonder, would one get a non-numeric value using a numeric
specification on a string of raw bytes?
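As a sketch of that closing point: struct's numeric format codes can only ever yield numbers from raw bytes (modern Python 3; the values are invented):

```python
import struct

# Pack an int and a float32 into raw bytes, then extract them back out.
raw = struct.pack("<if", 66, 35.14)
qty, price = struct.unpack("<if", raw)

print(type(qty).__name__, qty)                 # int 66
print(type(price).__name__, round(price, 2))   # float 35.14
```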
--
Wulfraed Dennis Lee Bieber KD6MOG
wl*****@ix.netcom.com wu******@bestiaria.com
HTTP://wlfraed.home.netcom.com/
(Bestiaria Support Staff: we******@bestiaria.com)
HTTP://www.bestiaria.com/
Sep 9 '06 #30

Paul McNett wrote:
me********@aol.com wrote:
Do you know what INNER JOIN means?

Do you know how important it is to a relational database?

Can you explain how an INNER JOIN can even work, in theory,
with dynamic data types?

Let's stop the pissing contest and just see how it works. After all,
this is Python and we can easily just try it out. Here's my example.
Please tell me how this causes unexpected results,
When we use a best case scenario, we get what we expect.
and why it isn't SQL.
It isn't SQL simply because SQL won't let you insert text
into a numeric field.
Please modify my example to get it to cause a catastrophe,
Make it worst case? Sure, I can do that.
and post it
here so we can see the errors of our ways and be properly humbled.
#-- Preliminaries:
## from pysqlite2 import dbapi2 as sqlite
import sqlite3 as sqlite
## con = sqlite.connect("test.db")
con = sqlite.connect(":memory:")
cur = con.cursor()

#-- Create 3 tables for a M:M relationship between customers
#-- and categories:
cur.execute("create table customers (id integer primary key autoincrement, name char)")
cur.execute("create table categories (id integer primary key autoincrement, name char)")
cur.execute("create table cust_cat (id integer primary key autoincrement, cust_id integer, cat_id integer)")

## cur.execute("create table cust_cat (id integer, cust_id, cat_id)")
## cur.execute("create table customers (id, name char)")
## cur.execute("create table categories (id, name char)")
##
## # Ok, THIS fails. Because the JOINs were originally made against fields
## # that were cast as integers so the 'hinting' of sqlite must operate in a JOIN
## # allowing ints to JOIN against strings. Take away the casts and the JOIN
## # fails. Unfortunately, not every situation will involve JOINing primary keys.
##
## [(1, u'Ziggy Marley'), (2, u'David Bowie')]
## [(1, u'Glam Rock'), (2, u'Nuevo Reggae'), (3, u'Male Singers')]
## [(None, 1, 3), (None, 1, u'2'), (None, u'2', u'1'), (None, u'2', u'3')]
##
## [(1, u'Ziggy Marley', 3, u'Male Singers')]
## []
## []
## []
## []
## [(1, u'Ziggy Marley', 3, u'Male Singers')]
#-- Insert some test data into customer and categories:
cur.execute("insert into customers (name) values ('Ziggy Marley')")
cur.execute("insert into customers (name) values ('David Bowie')")
cur.execute("insert into categories (name) values ('Glam Rock')")
cur.execute("insert into categories (name) values ('Nuevo Reggae')")
cur.execute("insert into categories (name) values ('Male Singers')")

## # if int cast removed, manually insert ids
##cur.execute("insert into customers (id, name) values (1,'Ziggy Marley')")
##cur.execute("insert into customers (id, name) values (2,'David Bowie')")
##cur.execute("insert into categories (id, name) values (1,'Glam Rock')")
##cur.execute("insert into categories (id, name) values (2,'Nuevo Reggae')")
##cur.execute("insert into categories (id, name) values (3,'Male Singers')")

cur.execute("select * from customers")
#-- Take a look at the data (and retrieve the pk's):
print cur.fetchall()
#[(1, u'Ziggy Marley'), (2, u'David Bowie')]
cur.execute("select * from categories")
print cur.fetchall()
#[(1, u'Glam Rock'), (2, u'Nuevo Reggae'), (3, u'Male Singers')]
#-- Relate some customers to some categories. Note how I send strings
#-- in some places and ints in others:
##cur.execute("insert into cust_cat (cust_id, cat_id) values (1, 3)")
##cur.execute("insert into cust_cat (cust_id, cat_id) values (1, '2')")

##cur.execute("insert into cust_cat (cust_id, cat_id) values ('2', '1')")
##cur.execute("insert into cust_cat (cust_id, cat_id) values ('2', 3)")
##cc = [(1,3),(1,'2'),('2','1'),('2','3')]
cc = [(1,3),(1,'>2'),('>2','>1'),('>2','>3')]

## # And this also fails (differently). The 'hinting' of sqlite that operates
## # during a JOIN only works if the string looks like an integer. And, no,
## # the use of the '>' is not a contrived example like 'fred'. I often get a
## # result like '>200' in what is supposed to be a numeric field.
##
## [(1, u'Ziggy Marley'), (2, u'David Bowie')]
## [(1, u'Glam Rock'), (2, u'Nuevo Reggae'), (3, u'Male Singers')]
## [(1, 1, 3), (2, 1, u'>2'), (3, u'>2', u'>1'), (4, u'>2', u'>3')]
##
## [(1, u'Ziggy Marley', 3, u'Male Singers')]
## []
## []
## []
## [(1, u'Ziggy Marley', 3, u'Male Singers')]
## [(1, u'Ziggy Marley', 3, u'Male Singers')]

cur.executemany("insert into cust_cat (cust_id, cat_id) values (?,?)",cc)
cur.execute("select * from cust_cat")
print cur.fetchall()

print

#-- Run some queries:
cur.execute("""
select customers.id as cust_id,
customers.name as cust_name,
categories.id as cat_id,
categories.name as cat_name
from customers
inner join cust_cat
on cust_cat.cust_id = customers.id
inner join categories
on categories.id = cust_cat.cat_id
order by 2,4""")

print cur.fetchall()
#[(2, u'David Bowie', 1, u'Glam Rock'),
# (2, u'David Bowie', 3, u'Male Singers'),
# (1, u'Ziggy Marley', 3, u'Male Singers'),
# (1, u'Ziggy Marley', 2, u'Nuevo Reggae')]

cur.execute("""
select customers.id as cust_id,
customers.name as cust_name,
categories.id as cat_id,
categories.name as cat_name
from customers
inner join cust_cat
on cust_cat.cust_id = customers.id
inner join categories
on categories.id = cust_cat.cat_id
where categories.id = 1
order by 2,4""")

print cur.fetchall()
#[(2, u'David Bowie', 1, u'Glam Rock')]

cur.execute("""
select customers.id as cust_id,
customers.name as cust_name,
categories.id as cat_id,
categories.name as cat_name
from customers
inner join cust_cat
on cust_cat.cust_id = customers.id
inner join categories
on categories.id = cust_cat.cat_id
where categories.id = '1'
order by 2,4""")

print cur.fetchall()
#[(2, u'David Bowie', 1, u'Glam Rock')]

cur.execute("""
select customers.id as cust_id,
customers.name as cust_name,
categories.id as cat_id,
categories.name as cat_name
from customers
inner join cust_cat
on cust_cat.cust_id = customers.id
inner join categories
on categories.id = cust_cat.cat_id
where categories.id = '2'
order by 2,4""")

print cur.fetchall()
#[(1, u'Ziggy Marley', 2, u'Nuevo Reggae')]

cur.execute("""
select customers.id as cust_id,
customers.name as cust_name,
categories.id as cat_id,
categories.name as cat_name
from customers
inner join cust_cat
on cust_cat.cust_id = customers.id
inner join categories
on categories.id = cust_cat.cat_id
where categories.id = '3'
order by 2,4""")

print cur.fetchall()
#[(2, u'David Bowie', 3, u'Male Singers'),
# (1, u'Ziggy Marley', 3, u'Male Singers')]

cur.execute("""
select customers.id as cust_id,
customers.name as cust_name,
categories.id as cat_id,
categories.name as cat_name
from customers
inner join cust_cat
on cust_cat.cust_id = customers.id
inner join categories
on categories.id = cust_cat.cat_id
where categories.id = 3
order by 2,4""")

print cur.fetchall()
#[(2, u'David Bowie', 3, u'Male Singers'),
# (1, u'Ziggy Marley', 3, u'Male Singers')]

Sep 9 '06 #31


sk**@pobox.com wrote:
I wouldn't be at all surprised if the pysqlite author operated under that
assumption. That the Python developers didn't pick up on the issue is not
surprising. I'm not sure how many of them are (py)sqlite users, probably
relatively few.

Skip
Who has reviewed sqlite/pysqlite after all? Reading the passage in the
sqlite FAQ, I can hardly believe that passing errors silently
(coercing a value to string when it fails to be coerced to INTEGER,
even though INTEGER was declared as an invariant in the create
command) is on par with Python's design philosophy. In other cases
doctoral dissertations are written about whether a keyword or some
punctuation shall be used for decorator syntax, and in this case
everything must be rushed into the code base of the standard library?

Sep 9 '06 #32

Kay Schluehr wrote:
>
[Quoting Marc 'BlackJack' Rintsch...]
If you are so fond of static typing, why are you using Python in the first
place? Just see it as consistency -- dynamically typed language →
dynamically typed DB columns. ;-)

I have to admit I find this bogus too. It has by no means anything to
do with static typing but letting "errors pass silently" i.e.
deactivating runtime type checks as well.
If the questioner had been talking about Perl, most respondents would
rightly have said that Perl's silent coercion or conversion of values
is an irritating source of potential errors. Here, the behaviour of
SQLite, whilst not as bad as Perl - values are not converted into
something different which is then erroneous - does contradict the naive
expectations of users who expect the data type declarations to mean
something and for such constraints to be enforced.

Of course, the criticism of the questioner should be more forgiving in
this debate, since I imagine that most people with experience of SQLite
know of its "loose" data typing model, and that many assume that
everyone else is aware of this feature or limitation, even though that
may not be the case. Thus, the priority should be on emphasizing this
in the module documentation (in order to avoid unnecessary confusion),
along with mentioning other possible strategies for emulating other
database system behaviours: "Other Affinity Modes" in the SQLite
documentation [1] may be helpful here, if the future tense can be
replaced with the present tense whilst preserving the factual content
of that section.

Paul

[1] http://www.sqlite.org/datatype3.html

Sep 9 '06 #33

me********@aol.com wrote:
Paul McNett wrote:
>>me********@aol.com wrote:
>>>Do you know what INNER JOIN means?

Do you know how important it is to a relational database?

Can you explain how an INNER JOIN can even work, in theory,
with dynamic data types?

Let's stop the pissing contest and just see how it works. After all,
this is Python and we can easily just try it out. Here's my example.
Please tell me how this causes unexpected results,


When we use a best case scenario, we get what we expect.

>>and why it isn't SQL.


It isn't SQL simply because SQL won't let you insert text
into a numeric field.
Yup, I have to agree that's pretty crappy. (Makes mental note to limit
use of SQLite).

regards
Steve
--
Steve Holden +44 150 684 7255 +1 800 494 3119
Holden Web LLC/Ltd http://www.holdenweb.com
Skype: holdenweb http://holdenweb.blogspot.com
Recent Ramblings http://del.icio.us/steve.holden

Sep 9 '06 #34

Dennis Lee Bieber wrote:
>
Talking to myself again, I see...
Not quite. ;-)

[...]
How interesting... With MySQL/MySQLdb I did NOT get exceptions or
error results on inserting bad numeric data supplied as character string
format (ie, as read from the CSV). Instead, MySQL SILENTLY converted
them to ZEROS

A price of "Priceless" becomes Decimal("0.00").

The Customer number of ">68" became 0L
This kind of thing is "classic" MySQL behaviour.
Which would one rather have to work with -- a database that copied
invalid numerics as string literals (which, in my mind, makes it much
easier to correct the data later, using "update .... set field = correct
where field = invalid") or a database that silently converts them all to
0 values. (Of course, I now expect to have a rejoinder about "Using a
REAL database instead of MySQL" -- but unless said person wishes to
start making the same comments about SQLite on at least as regular a
basis, I believe the objection itself is invalid for this example).
Given subsequent research into SQLite's affinity modes and their
presumed status as future features, the solution in that database
system's case is to apply validation in the driver/module or through
driver extensions, and there is apparently some flexibility in the
pysqlite2 modules for changing the way data types are handled, although
a cursory inspection of the documentation doesn't appear to suggest a
convenient, ready-made solution.

As for MySQL, the situation is possibly more awkward: one expects the
database system to complain about certain things, which it probably
does from time to time, but it would seem wasteful to duplicate
whatever validation the database system does do just to cover those
situations where the system misbehaves.

Paul

Sep 10 '06 #35


Dennis Lee Bieber wrote:
Guess I lied...

On Sat, 09 Sep 2006 05:22:20 GMT, Dennis Lee Bieber
<wl*****@ix.netcom.com> declaimed the following in comp.lang.python:

Talking to myself again, I see...

<snip>
rs = cr.execute("""insert into invoice_1
(CustNo, Title, Author, Year, Price)
values (?,?,?,?,?)""",
r)

Whoops, r =rv, though the exceptions raised made it moot

rs = cr.execute("""insert into invoice_2
(CustNo, Title, Author, Year, Price)
values (?,?,?,?,?)""",
r)

Same comment

Out of curiosity, I converted to using MySQL(db) as a test. As
expected, the pre-insert validation code worked with same results (well,
the price was declared decimal, and Python 2.4 appears to handle that as
a Decimal("value") on return <G>)

Now, taking out the pre-validation and type conversion, supplying
all data as it came from the CSV file:

-=-=-=-=-=-=-
Inserting:
['066', '101 Ways to Start A Fight', 'some Irish gentleman', '1919', '19.95']
['032', 'A Sale of Two Titties', 'Charles Dikkens', '1855', '20.00']
['001', 'Olsens Standard Book of British Birds (Expurgated)"', 'Olsen', 'None', '99.95']
['066', 'Ethel the Aardvark Goes Quantity Surveying', 'None', '1975', '9.99']
['032', 'David Coperfield', 'Edmund Wells', '1955', '3.95']
['>68', 'Strawmen and Dorothy', '', '2006', '49.89']
['033', "The Emperor's Old Clothes", 'Grimm Hound', '1887', 'Priceless']

Select all from Invoice_1 (CustNo is CHARACTER)
(1L, '066', '101 Ways to Start A Fight', 'some Irish gentleman', 1919L, Decimal("19.95"))
(2L, '032', 'A Sale of Two Titties', 'Charles Dikkens', 1855L, Decimal("20.00"))
(3L, '001', 'Olsens Standard Book of British Birds (Expurgated)"', 'Olsen', 0L, Decimal("99.95"))
(4L, '066', 'Ethel the Aardvark Goes Quantity Surveying', 'None', 1975L, Decimal("9.99"))
(5L, '032', 'David Coperfield', 'Edmund Wells', 1955L, Decimal("3.95"))
(6L, '>68', 'Strawmen and Dorothy', '', 2006L, Decimal("49.89"))
(7L, '033', "The Emperor's Old Clothes", 'Grimm Hound', 1887L, Decimal("0.00"))

Select all from Invoice_2 (CustNo is INTEGER)
(1L, 66L, '101 Ways to Start A Fight', 'some Irish gentleman', 1919L, Decimal("19.95"))
(2L, 32L, 'A Sale of Two Titties', 'Charles Dikkens', 1855L, Decimal("20.00"))
(3L, 1L, 'Olsens Standard Book of British Birds (Expurgated)"', 'Olsen', 0L, Decimal("99.95"))
(4L, 66L, 'Ethel the Aardvark Goes Quantity Surveying', 'None', 1975L, Decimal("9.99"))
(5L, 32L, 'David Coperfield', 'Edmund Wells', 1955L, Decimal("3.95"))
(6L, 0L, 'Strawmen and Dorothy', '', 2006L, Decimal("49.89"))
(7L, 33L, "The Emperor's Old Clothes", 'Grimm Hound', 1887L, Decimal("0.00"))
-=-=-=-=-=-=-

How interesting... With MySQL/MySQLdb I did NOT get exceptions or
error results on inserting bad numeric data supplied as character string
format (ie, as read from the CSV). Instead, MySQL SILENTLY converted
them to ZEROS

A price of "Priceless" becomes Decimal("0.00").

The Customer number of ">68" became 0L
Which would one rather have to work with -- a database that copied
invalid numerics as string literals (which, in my mind, makes it much
easier to correct the data later, using "update .... set field = correct
where field = invalid") or a database that silently converts them all to
0 values. (Of course, I now expect to have a rejoinder about "Using a
REAL database instead of MySQL" -- but unless said person wishes to
start making the same comments about SQLite on at least as regular a
basis, I believe the objection itself is invalid for this example).

(Apparently we have fallen afoul of this clause from the old
O'Reilly/MySQL black/brown book: "When asked to store a value in a
numeric column that is outside the column type's allowable range, MySQL
clips the value to the appropriate endpoint of the range and stores the
resulting value instead." -- seems character data "clips" to zero.
Are you saying that MySQL is goofy? ;-)

Based on these replies, I'm pulling back and retrenching.

As I said before, I'm not entering 500,000 records by writing
INSERT statements for each record, so reading csv files is
a more realistic test. Nevertheless, I am still convinced that
the documentation (or lack thereof) is mainly responsible for
my confusion. I was, after all, mimicking the examples given
(which still have errors).

I think an explanation of how Sqlite3 differs from SQL and
a better set of examples is still warranted.

Sep 10 '06 #36

me********@aol.com wrote:
I think an explanation of how Sqlite3 differs from SQL and
a better set of examples is still warranted.
In general, Python standard library modules that are wrappers
for third party libraries are very thinly documented, and they
should probably remain that way, because it's really too much of
a burden on the Python developers to develop this documentation
and keep it up to date. The idea is to document the wrapper, not
the wrapped library.

If I had a choice concerning these wrapper libraries, I'd much
rather see more docs on the tkinter and xml libraries. There you
need to guess a lot. There is no shortage of Tcl/Tk docs, but it
doesn't look the same within Python. For the Python xml libraries,
I've had to experiment a lot, and I have this nagging feeling
that I don't do things the way I should. (From Python 2.5, we
have ElementTree, which is much better from this perspective,
but we've just started using 2.4 at work, and we try to keep
the amount of third party libraries to a minimum here.)

It seems to me that the sqlite3 module is fairly decent in
this regard, particularly since it's not even included in a
completed Python release yet. Concerning the API, I'm surprised
to see magic method naming such as __conform__ introduced in
a library like that. It seems to me that this is a violation
of PEP 8. I'm sure there are further details that could be worth
mentioning in the docs, but I think it's important that we don't
try to duplicate the SQLite docs in the Python docs.
Sep 11 '06 #37

me********@aol.com wrote:
Bruno Desthuilliers wrote:
>me********@aol.com wrote:
>>Probably just me. I've only been using Access and SQL Server
for 12 years, so I'm sure my opinions don't count for anything.
SQLite never pretended to be a full-blown RDBMS - just a lightweight
simple embedded database as SQL-compliant as possible.

Ah, *you* haven't read the documentation either!
I use SQLite and PgSQL (and, alas, MySQL) on a daily basis. So I've *of
course* read the doc (and frequently re-read it).
"as SQL-compliant as possible"?
Yes. And yes, I know about the typing issue. If you want to make sure
you can migrate your data to a real RDBMS, then it's up to you to take
appropriate precautions (FormEncode can be a real helper here...).

This (documented enough) issue apart, we have far less troubles with
SQLite than we have with MySQL, on both the dev and admin sides.

And FWIW, insulting people won't buy you much.

(snip useless rant again)

--
bruno desthuilliers
python -c "print '@'.join(['.'.join([w[::-1] for w in p.split('.')]) for
p in 'o****@xiludom.gro'.split('@')])"
Sep 11 '06 #38

me********@aol.com wrote:
(snip)
Are you saying that MySQL is goofy? ;-)
This is an understatement.
--
bruno desthuilliers
python -c "print '@'.join(['.'.join([w[::-1] for w in p.split('.')]) for
p in 'o****@xiludom.gro'.split('@')])"
Sep 11 '06 #39

Kay Schluehr wrote:
sk**@pobox.com wrote:
>I wouldn't be at all surprised if the pysqlite author operated under that
assumption. That the Python developers didn't pick up on the issue is not
surprising. I'm not sure how many of them are (py)sqlite users, probably
relatively few.

Skip

Who has reviewed sqlite/pysqlite after all?
pysqlite was actually reviewed by several Python core developers before
becoming a part of the Python standard library, most prominently Neal
Norwitz and Anthony Baxter.
Reading the passage in the
sqlite FAQ I can hardly believe that passing errors silently and
coercing everything to string when it fails to be coerced to INTEGER
although INTEGER was an invariant declared in the create command is on
par with Pythons design philosophy. [...]
Unfortunately, third-party library authors don't first check with
Python's design philosophy in case their software will be wrapped as a
Python module ;-)

I did my best to combine both SQLite's and Python's type system though,
including a discussion with pysqlite 1.x users before the grand rewrite
for version 2, which is the one that ended up in the Python standard
library now.
In other cases doctoral dissertations are written about whether a
keyword or some punctuation shall be used for decorator syntax and in
this case everything must be rushed into the code base of the
standard library?
There was no rush at all.

-- Gerhard

Sep 11 '06 #40

P: n/a
Unfortunately, I don't think they are going to duplicate the 200 or
so page O'Reilly SQLite book as part of the help system (even if that
book is quite out-of-date; there is one skinny chapter near the end that
explains what changes "will appear" in the version that has been
available for Python for over a year now).
--
Just to let you (and everyone else) know, there is a new SQLite book
out from APress that covers SQLite 3

http://www.apress.com/book/bookDisplay.html?bID=10130

It actually has a section that covers what a lot of these postings
have been discussing, check constraints. You can actually implement
type checking constraints in SQLite with very little additional code.
That way it will give you an error message if you try to insert
something of the wrong type.
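For anyone curious, here's roughly what such a check constraint looks
like through the sqlite3 module. (The table and column names are just
made up for illustration; the trick is SQLite's typeof() function
inside a CHECK clause.)

```python
import sqlite3

conn = sqlite3.connect(':memory:')
c = conn.cursor()

# Hypothetical table: typeof() in a CHECK constraint rejects any
# value whose storage class doesn't match the declared type.
c.execute('''create table stocks
             (qty integer check (typeof(qty) = 'integer'),
              price real check (typeof(price) = 'real'))''')

c.execute("insert into stocks values (100, 35.14)")   # accepted

try:
    c.execute("insert into stocks values ('lots', 35.14)")
except sqlite3.IntegrityError:
    # 'lots' can't be coerced to an integer, so it would have been
    # stored as text -- the constraint turns that into a hard error.
    print('rejected')
```

Note that '100' (a string that looks like an integer) would still be
accepted, because column affinity converts it to an integer before the
constraint is checked.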

HTH,

Preston
Sep 11 '06 #41

P: n/a
At Tuesday 5/9/2006 16:23, me********@aol.com wrote:
>I would be surprised if they had never used ANY database. A little
thing like dynamic field typing will simply make it impossible to
migrate your Sqlite data to a *real* database.
Why not? Because it breaks the relational model rules? That model
certainly was great 30 years ago, but now things are different. (In
fact, you didn't mention the word "relational", but I presume you
were thinking of that).
Even what you call *real* databases have a lot of incompatibilities
among them (e.g. ORACLE does not provide an "autoincrement" type, but
has sequences, and so on...). Of course you could restrict yourself
to, by example, SQL92 entry level and be a lot more compatible.
But if I'm using a nice OO language like Python which lets me bind
*any* object to *any* name, why should be wrong to bind *any* object
to *any* database column? Looks a lot more "pythonic" for me. Of
course, a true object database (like ZODB) is better.

Gabriel Genellina
Softlab SRL

__________________________________________________
Preguntá. Respondé. Descubrí.
Todo lo que querías saber, y lo que ni imaginabas,
está en Yahoo! Respuestas (Beta).
¡Probalo ya!
http://www.yahoo.com.ar/respuestas

Sep 12 '06 #42

P: n/a

Gabriel Genellina wrote:
At Tuesday 5/9/2006 16:23, me********@aol.com wrote:
I would be surprised if they had never used ANY database. A little
thing like dynamic field typing will simply make it impossible to
migrate your Sqlite data to a *real* database.

Why not? Because it breaks the relational model rules?
That's part of it.
That model
certainly was great 30 years ago, but now things are different.
Different only in "lite" databases.
(In
fact, you didn't mention the word "relational", but I presume you
were thinking of that).
Even what you call *real* databases have a lot of incompatibilities
among them (e.g. ORACLE does not provide an "autoincrement" type, but
has sequences, and so on...).
But it was stated in the sqlite docs that ALL SQL databases
use static types, implying that sqlite will be incompatible
with any "heavy" database should the need arise to migrate
upwards. The issue is not that there will be compatibility
problems with any data migration but that the truth is exactly
the opposite of what's claimed in Section 13.13.

I'm not saying sqlite can't be used, what I'm asking for
is that the documentation lay the facts out and I'll decide
whether I can make this work in my application. Lying about
it makes you sound like Microsoft.
Of course you could restrict yourself
to, for example, SQL92 entry level and be a lot more compatible.
But if I'm using a nice OO language like Python which lets me bind
*any* object to *any* name, why should be wrong to bind *any* object
to *any* database column?
But SQL isn't OO, it's relational. That means JOINing tables
together on a common field. In theory, due to the comparison
hierarchy, it is impossible to do JOINs with dynamic typing
since different types can never be equal. In practice, the type
affinity kluge tries to work around this but can't do anything
if the string doesn't look like an integer when a text field
attempts to JOIN to an integer field.
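To make that concrete, here's a small sketch (table names invented) of
where the affinity rules save a join and where they give up:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
c = conn.cursor()
c.execute("create table accounts (id integer)")
c.execute("create table log (id text)")
c.execute("insert into accounts values (51)")
c.executemany("insert into log values (?)", [('51',), ('>56',)])

# When an INTEGER-affinity column is compared to a TEXT-affinity
# column, SQLite applies numeric affinity to the text side: '51'
# converts cleanly and matches, but '>56' isn't a number and never
# joins to anything.
c.execute("select log.id from accounts join log on accounts.id = log.id")
print(c.fetchall())   # only the row that looked like an integer
```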
Looks a lot more "pythonic" for me.
If all you have is a hammer, everything looks like a nail.
Of
course, a true object database (like ZODB) is better.

Gabriel Genellina
Softlab SRL

Sep 12 '06 #43

P: n/a
me********@aol.com wrote:
But it was stated in the sqlite docs that ALL SQL databases
use static types, implying that sqlite will be incompatible
with any "heavy" database should the need arise to migrate
upwards. The issue is not that there will be compatibility
problems with any data migration but that the truth is exactly
the opposite of what's claimed in Section 13.13.

I'm not saying sqlite can't be used, what I'm asking for
is that the documentation lay the facts out and I'll decide
whether I can make this work in my application. Lying about
it makes you sound like Microsoft.
I thought your qualm was with the pysqlite docs, not the sqlite docs
(which apparently do make it plain how the database handles typing)?

Also, as others have mentioned, there are a number of ways to ensure
type safety, as long as you know how the database works (which as I
understand was your original point -- that it should be better
documented how it works in the pysqlite docs; and I am inclined to
agree -- at least a mention with link to the sqlite docs would be
helpful). But given that type safety is not an issue if you use those
ways of ensuring it, then the move to a fuller database _will_ be
relatively easy. If you don't want to change anything in your database
creation/update code a la check constraints, you can always explicitly
validate from python, which can be done programmatically (very simple
example -- you could also use regexp patterns to validate; e.g., string
fields not only must be type str, but must not match '^\d+$', &c):

rows = [
    ['1', 'fred', '0051', '/home/fred'],
    ['2', 'bob', '0054', '/home/bob'],
    ['3', 'bork', '>056', '/home/bork']
]

def validate(row):
    return [int(row[0]), str(row[1]), int(row[2]), str(row[3])]

for i in xrange(len(rows)):
    rows[i] = validate(rows[i])  # <- throws an exception on the third row
# database stuff here...

Regards,
Jordan

Sep 12 '06 #44

P: n/a
On 11 Sep 2006 23:29:28 -0700, me********@aol.com <me********@aol.com> wrote:
But it was stated in the sqlite docs that ALL SQL databases
use static types, implying that sqlite will be incompatible
with any "heavy" database should the need arise to migrate
upwards. The issue is not that there will be compatibility
problems with any data migration but that the truth is exactly
the opposite of what's claimed in Section 13.13.
Implying? There's a solid word. Migrating data from SQLite to other
databases is no more difficult or easy than migrating data to any
other database. Do you think this is ever trivial? It's as hard or as
easy as you make it. No database can just take any schema and the data
you put in it and just magically convert that schema/data to
flawlessly work in any arbitrary database of your choosing. Some
databases have tools to help with this, but they still are not
perfect.
I'm not saying sqlite can't be used, what I'm asking for
is that the documentation lay the facts out and I'll decide
whether I can make this work in my application. Lying about
it makes you sound like Microsoft.
Lying? Who's lying? Where on the website is there a lie about
anything? From what I can tell, you've not taken the time to read the
documentation or post anything to the mailing list. You've just posted
jeremiads on the Python list.

Don't like the documentation? Ever volunteered to help out? Ever
posted any suggestions on the list or report a bug? Do you really
think that open source projects exists to serve you and meet your
standards? Do you think that free code and documentation just falls
like manna from heaven? Do you honestly think the two core developers
of SQLite have some secret agenda to deceive you or the world into
using SQLite?
Of course you could restrict yourself
to, by example, SQL92 entry level and be a lot more compatible.
But if I'm using a nice OO language like Python which lets me bind
*any* object to *any* name, why should be wrong to bind *any* object
to *any* database column?

But SQL isn't OO, it's relational. That means JOINing tables
together on a common field. In theory, due to the comparison
hierarchy, it is impossible to do JOINs with dynamic typing
since different types can never be equal. In practice, the type
affinity kluge trys to work around this but can't do anything
if the string doesn't look like an integer when a text field
attempts to JOIN to an interger field.
Unless you ensure that the correct types are put in the column to
begin with, which is entirely possible with SQLite, as I've already
demonstrated. And if that's just too much to bear, you can still do an
inner join by explicitly casting the two columns in the join
constraint to a common desired type. Want to know how? Read the
documentation.
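In case anyone wants to see it, casting in the join constraint looks
something like this (column names are made up; both columns are text
here, so '0051' and '51' don't compare equal until they're cast):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
c = conn.cursor()
c.execute("create table a (uid text)")
c.execute("create table b (uid text)")
c.execute("insert into a values ('0051')")
c.execute("insert into b values ('51')")

# As text, '0051' != '51', so the plain join finds nothing...
c.execute("select count(*) from a join b on a.uid = b.uid")
print(c.fetchone()[0])

# ...but casting both sides to a common type makes them comparable.
c.execute("""select count(*) from a join b
             on cast(a.uid as integer) = cast(b.uid as integer)""")
print(c.fetchone()[0])
```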
Sep 12 '06 #45

P: n/a

Mike Owens wrote:
On 11 Sep 2006 23:29:28 -0700, me********@aol.com <me********@aol.com> wrote:
But it was stated in the sqlite docs that ALL SQL databases
use static types, implying that sqlite will be incompatible
with any "heavy" database should the need arise to migrate
upwards. The issue is not that there will be compatibility
problems with any data migration but that the truth is exactly
the opposite of what's claimed in Section 13.13.

Implying? There's a solid word. Migrating data from SQLite to other
databases is no more difficult or easy than migrating data to any
other database. Do you think this is ever trivial? It's as hard or as
easy as you make it. No database can just take any schema and the data
you put in it and just magically convert that schema/data to
flawlessly work in any arbitrary database of your choosing. Some
databases have tools to help with this, but they still are not
perfect.
So, knowing that, would you agree that

<quote Python Library Reference 13.13>
If switching to a larger database such as PostgreSQL or Oracle
is later necessary, the switch should be relatively easy.
</quote>

is misleading if not outright untruthful?
>
I'm not saying sqlite can't be used, what I'm asking for
is that the documentation lay the facts out and I'll decide
whether I can make this work in my application. Lying about
it makes you sound like Microsoft.

Lying? Who's lying?
See above quote. And while you're at it, see the sqlite docs
about how the SQL Language Specification of static typing
is a bug.
Where on the website is there a lie about
anything? From what I can tell, you've not taken the time to read the
documentation or post anything to the mailing list. You've just posted
jeremiads on the Python list.

Don't like the documentation?
No, it's misleading and full of errors (this is the Python docs
I'm referring to).
Ever volunteered to help out?
That's what this thread was about, testing the waters.
No point making bug reports if I'm the one who's wrong.
But it turns out I'm not wrong, sqlite IS goofy and this
should be pointed out.
Ever posted any suggestions on the list or report a bug?
I'm still considering it. This thread has been very useful
towards that.
Do you really
think that open source projects exists to serve you and meet your
standards? Do you think that free code and documentation just falls
like manna from heaven?
But why does it have to be wrong? It's just as easy to get
things right. Isn't that your complaint, that if I read the sqlite
docs first, the ridiculous examples in the Python docs would
have made more sense? Why didn't the guy writing the Python
docs read the sqlite docs first?
Do you honestly think the two core developers
of SQLite have some secret agenda to deceive you or the world into
using SQLite?
Why do they claim that the SQL Language Specification of
static typing is a bug? That's simply a lie. Why do they claim
they've "fixed" it in a backwards compatible way? That's another
lie.

Why didn't they simply say they have an alternative to static
typing? Because part of the deception is to make people think
there is something wrong with static typing.
>
Of course you could restrict yourself
to, for example, SQL92 entry level and be a lot more compatible.
But if I'm using a nice OO language like Python which lets me bind
*any* object to *any* name, why should be wrong to bind *any* object
to *any* database column?
But SQL isn't OO, it's relational. That means JOINing tables
together on a common field. In theory, due to the comparison
hierarchy, it is impossible to do JOINs with dynamic typing
since different types can never be equal. In practice, the type
affinity kluge tries to work around this but can't do anything
if the string doesn't look like an integer when a text field
attempts to JOIN to an integer field.

Unless you ensure that the correct types are put in the column to
begin with, which is entirely possible with SQLite, as I've already
demonstrated. And if that's just too much to bear, you can still do an
inner join by explicitly casting the two columns in the join
constraint to a common desired type. Want to know how? Read the
documentation.
And what do you get when you implement all these kluges?
A database that is effectively statically typed. Do you still think
static typing is a bug?

Sep 12 '06 #46

P: n/a
me********@aol.com wrote:
So, knowing that, would you agree that

<quote Python Library Reference 13.13>
If switching to a larger database such as PostgreSQL or Oracle
is later necessary, the switch should be relatively easy.
</quote>

is misleading if not outright untruthful?
eh? if you've never migrated *from* SQLite to some other database, how
can *you* possibly know *anything* about how hard or easy it is?

</F>

Sep 12 '06 #47

P: n/a

Fredrik Lundh wrote:
me********@aol.com wrote:
So, knowing that, would you agree that

<quote Python Library Reference 13.13>
If switching to a larger database such as PostgreSQL or Oracle
is later necessary, the switch should be relatively easy.
</quote>

is misleading if not outright untruthful?

eh? if you've never migrated *from* SQLite to some other database, how
can *you* possibly know *anything* about how hard or easy it is?
Because I can extrapolate. I *know* before even trying it that
if I export all my data from a sqlite db to a csv file and then try
to import it into Access that there will be problems if the fields
aren't static typed.

That's one of the reasons why I was such a good test engineer.
I could anticipate problems the design engineers didn't think of
and I would deliberately provoke those problems during testing
and crash their hardware/software.

I wasn't very popular.
>
</F>
Sep 12 '06 #48

P: n/a
On 12 Sep 2006 10:24:00 -0700, me********@aol.com <me********@aol.com> wrote:
So, knowing that, would you agree that

<quote Python Library Reference 13.13>
If switching to a larger database such as PostgreSQL or Oracle
is later necessary, the switch should be relatively easy.
</quote>

is misleading if not outright untruthful?
Not in the least.

If you know what you are doing from a database perspective (not just a
SQLite perspective), migrating data to another database is exactly
that -- relatively easy. That means, you may have to recreate or
modify your schema for the target database -- and this is true in ALL
databases. Native datatypes vary from system to system, and some
systems support user-defined data types, in which case your schema
will definitely have to be modified. How would you migrate a CIDR type
in PostgreSQL to a numeric field in Oracle? You have to work at it.

Next, as far as transferring your data, you most likely have to resort
to some delimited format, or INSERT statements, which is no different
than any other database.

So, I would call that relatively easy without a stretch, and
certainly no different than migrating data with any other database.

Really, how is this different than migrating data to/from any other database?
whether I can make this work in my application. Lying about
it makes you sound like Microsoft.
Lying? Who's lying?

See above quote. And while you're at it, see the sqlite docs
about how the SQL Language Specification of static typing
is a bug.
Both of which have been addressed in detail. The above quote is not
even stretching the truth, and the latter fact is a deviation that
SQLite has every right to make because they, and not you, wrote the
software. Furthermore, it is very clearly stated on the website.

So how is that a lie?
No, it's misleading and full of errors (this is the Python docs
I'm referring to).
I didn't join this thread because of Python's documentation, and I've
made that clear. I am here because you are unjustly vilifying the
SQLite project.
Ever volunteered to help out?

That's what this thread was about, testing the waters.
No point making bug reports if I'm the one who's wrong.
But it turns out I'm not wrong, sqlite IS goofy and this
should be pointed out.
Then be a man and point it out on the SQLite mailing list, where you
can be called on it, rather than ranting about it here.
But why does it have to be wrong? It's just as easy to get
things right. Isn't that your complaint, that if I read the sqlite
docs first, the ridiculous examples in the Python docs would
have made more sense? Why didn't the guy writing the Python
docs read the sqlite docs first?
First, SQLite's approach is no more wrong than any other database's
deviation from the standard. Second, as I've said, I'm not here for
the Python issues. I think they'll get things sorted out in due time,
and people on this list have been very receptive to your feedback.
Do you honestly think the two core developers
of SQLite have some secret agenda to deceive you or the world into
using SQLite?

Why do they claim that the SQL Language Specification of
static typing is a bug? That's simply a lie. Why do they claim
they've "fixed" it in a backwards compatible way? That's another
lie.
It's not a lie at all. Are you incapable of comprehending the context
of that text? Do you not understand that it effectively says "This is
the way we do things. It's not in agreement with the SQL standard. We
know that, we are doing it this way, and here's how it works, take it
or leave it." And the whole bug in the SQL standard, if you can't
tell, is called humor.
Why didn't they simply say they have an alternative to static
typing?
They did. You couldn't understand that from the documentation?
Because part of the deception is to make people think
there is something wrong with static typing.
Yes, it's really an underhanded conspiracy designed to deceive and
mislead on a global scale. I can just see the developers sitting
around plotting:

"Hey, let's write some free software. Yeah let's give the code away
for free and not make a dime from it. Yeah, and then let's make up a
bunch of lies to make people want to use it, so we can continue to not
make a dime from it. And let's slander the SQL standard, and say all
sorts of nasty things about it. Yeah, that's how we'll spend our nights
and weekends."

You really need to find some fault that will stick at this point, don't you?

They're really up to something.
And what do you get when you implement all these kluges?
A database that is effectively statically typed.
Only if you want one. Otherwise, you have the freedom of dynamic
typing, which other databases don't afford. So, you in fact have more
freedom than you do with databases that only offer strict typing.
Do you still think static typing is a bug?
Did I say this, ever? I am not the SQLite website.

I don't think either is a bug. Both are two different viewpoints used
to solve a particular problem. Dynamic typing and type affinity can be
incredibly useful, especially for prototyping and scripting languages.
I don't have to dump and reload my tables if I want to change a column
I am testing out. I just delete my data, reload it, and get my program
to store a different representation in the column. And when I need
strict typing, I simply declare a check constraint. So no, you can't
say that your particular need for static typing meets all criteria for
all developers using SQLite. And frankly, if SQLite doesn't meet my
requirements, rather than badmouthing SQLite, I use another database,
such as PostgreSQL.

And they are entirely correct in saying that you are provided the
facilities to implement strict typing. You are given five native
storage classes and the means to ensure that only data of those
classes is stored in columns. What's all the fuss?

Again, SQLite is completely up front about its approach to typing. I
still don't see how is it that anyone is lying to you.
Sep 12 '06 #49

P: n/a
me********@aol.com wrote:
Fredrik Lundh wrote:
>>me********@aol.com wrote:

>>>So, knowing that, would you agree that

<quote Python Library Reference 13.13>
If switching to a larger database such as PostgreSQL or Oracle
is later necessary, the switch should be relatively easy.
</quote>

is misleading if not outright untruthful?

eh? if you've never migrated *from* SQLite to some other database, how
can *you* possibly know *anything* about how hard or easy it is?


Because I can extrapolate. I *know* before even trying it that
if I export all my data from a sqlite db to a csv file and then try
to import it into Access that there will be problems if the fields
aren't static typed.

That's one of the reasons why I was such a good test engineer.
I could anticipate problems the design engineers didn't think of
and I would deliberately provoke those problems during testing
and crash their hardware/software.

I wasn't very popular.
Strange, that, what with your excess of personal charm and great powers
of diplomacy. People can be so touchy, can't they?

regards
Steve
--
Steve Holden +44 150 684 7255 +1 800 494 3119
Holden Web LLC/Ltd http://www.holdenweb.com
Skype: holdenweb http://holdenweb.blogspot.com
Recent Ramblings http://del.icio.us/steve.holden

Sep 12 '06 #50

66 Replies
