Bytes | Software Development & Data Engineering Community

Is it just me, or is Sqlite3 goofy?

Probably just me. I've only been using Access and SQL Server
for 12 years, so I'm sure my opinions don't count for anything.

I was, nevertheless, looking forward to Sqlite3. And now
that gmpy has been upgraded, I can go ahead and install
Python 2.5.

So I open the manual to Section 13.13 where I find the first
example of how to use Sqlite3:

<code>
import sqlite3

conn = sqlite3.connect(':memory:')

c = conn.cursor()

# Create table
c.execute('''create table stocks
(date timestamp, trans varchar, symbol varchar,
 qty decimal, price decimal)''')

# Insert a row of data
c.execute("""insert into stocks
values ('2006-01-05','BUY','RHAT',100,35.14)""")
</code>

Seems pretty simple, yet I was befuddled and bewildered
by this example. So much so that I added a query to see exactly
what was going on.

<code>
# added by me
c.execute('select * from stocks')
d = c.fetchone()
for t in d:
    print type(t), t
</code>

Original code - what's wrong with this picture?
<type 'unicode'> 2006-01-05
<type 'unicode'> BUY
<type 'unicode'> RHAT
<type 'int'> 100
<type 'float'> 35.14

Why is the date returning as a string? Aren't the built-in
converters supposed to handle that? Yes, if you enable
detect_types.

Added detect_types=sqlite3.PARSE_DECLTYPES

Traceback (most recent call last):
  File "C:\Python25\sqlite_first_example.py", line 30, in <module>
    c.execute('select * from stocks')
  File "C:\Python25\lib\sqlite3\dbapi2.py", line 66, in convert_timestamp
    datepart, timepart = val.split(" ")
ValueError: need more than 1 value to unpack

Aha, that explains why they weren't enabled.

This failed because
- the value inserted was in the wrong format?
- and the built-in converter can't split it cuz it has no spaces?
And when it worked it was because
- detect_types was not set, so the converter wasn't invoked on the query?

Yes, the value in the example was a bare date and the field should
have been cast [date], not [timestamp], which needs an HH:MM:SS part
as well for the converter to work properly (but only if detect_types
is enabled).

If a correctly formatted timestamp had been inserted, the converter would have worked:

<type 'datetime.datetime'> 2006-09-04 13:30:00
<type 'unicode'> BUY
<type 'unicode'> RHAT
<type 'int'> 100
<type 'float'> 35.14

Or, had the field been properly cast as [date] instead of [timestamp]
it would also have worked.

<type 'datetime.date'> 2006-09-04
<type 'unicode'> BUY
<type 'unicode'> RHAT
<type 'int'> 100
<type 'float'> 35.14

Ah, this now partly explains the original result: since detect_types
is off by default, the field cast, not being a native sqlite3 type,
was ignored and the data came back as TEXT.

<type 'unicode'> 2006-09-04
<type 'unicode'> BUY
<type 'unicode'> RHAT
<type 'int'> 100
<type 'float'> 35.14
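
(To wrap that up, here's the example the way I'd have expected it to appear: the
column declared [date] and detect_types turned on. This is my own sketch, not
something from the docs.)

<code>
import sqlite3

conn = sqlite3.connect(':memory:', detect_types=sqlite3.PARSE_DECLTYPES)
c = conn.cursor()

# declare the column [date] so the built-in date converter applies on queries
c.execute('''create table stocks
(date date, trans varchar, symbol varchar,
 qty decimal, price decimal)''')

c.execute("""insert into stocks
values ('2006-01-05','BUY','RHAT',100,35.14)""")

c.execute('select * from stocks')
for t in c.fetchone():
    print type(t), t
# the first field now comes back as datetime.date(2006, 1, 5)
</code>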

Ok, next issue, what the fuck are [varchar] and [decimal]?
They are certainly not Sqlite3 native types. If they are
user defined types, where are the converters and adapters?
Does Sqlite3 simply ignore a cast that isn't registered?

Note that although both qty and price were cast as [decimal]
they queried back as int and float.
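
If [decimal] is supposed to mean anything, apparently you have to define it
yourself. Here's my own minimal sketch (not from the docs) of how the
adapter/converter hooks are evidently meant to be used for that, assuming
detect_types is enabled:

<code>
import sqlite3
from decimal import Decimal

# adapter: how Decimal values are stored going in
sqlite3.register_adapter(Decimal, lambda d: str(d))
# converter: how values from a column declared [decimal] are rebuilt coming out
sqlite3.register_converter('decimal', lambda s: Decimal(s))

conn = sqlite3.connect(':memory:', detect_types=sqlite3.PARSE_DECLTYPES)
c = conn.cursor()
c.execute('create table stocks (qty decimal, price decimal)')
c.execute('insert into stocks values (?, ?)', (Decimal('100'), Decimal('35.14')))

c.execute('select * from stocks')
print map(type, c.fetchone())   # both fields come back as Decimal
</code>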

Note also that it's "obvious" from the query example in section 13.13
that the example is FUBAR
- the date is a unicode string, not a datetime.date type
- the price is binary floating point, not decimal

<quote>
This example uses the iterator form:

>>> c = conn.cursor()
>>> c.execute('select * from stocks order by price')
>>> for row in c:
...     print row
...
(u'2006-01-05', u'BUY', u'RHAT', 100, 35.140000000000001)
(u'2006-03-28', u'BUY', u'IBM', 1000, 45.0)
(u'2006-04-06', u'SELL', u'IBM', 500, 53.0)
(u'2006-04-05', u'BUY', u'MSOFT', 1000, 72.0)
</quote>

Here we have an example of things apparently working
for the wrong reason. A classic example of the programmer
who *thinks* he knows how it works because he wrote it.
This kind of sloppiness wouldn't last 5 minutes in a production
environment.

But why did Sqlite3 make qty an int and price a float?
Hard to say since THE FURTHER EXAMPLES IN THE DOCS don't
even bother to cast the field types. I guess I'm supposed
to guess how things are supposed to work. Can I trust that
default settings will be what I want?

Ha! Can I trust the baby with a hammer?

First, note that the script example in section 13.13.3

<quote>
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
    create table person(
        firstname,
        lastname,
        age
    );

    create table book(
        title,
        author,
        published
    );

    insert into book(title, author, published)
    values (
        'Dirk Gently''s Holistic Detective Agency
        'Douglas Adams',
        1987
    );
    """)
</quote>

contains not one but TWO syntax errors! A single quote after
the word Agency is missing as is the comma that should be at
the end of that line. Seems that no one actually ever tried
this example.

That's easily fixed. But I was curious about why the fields
don't have type casts. After the previous debacle and
knowing that this code was never tested, I am not going to
assume it works. Better add a query to make sure.

cur.execute("se lect title, author, published from book")
d = cur.fetchall()
for i in d: print i
print

(u"Dirk Gently's Holistic Detective Agency", u'Douglas Adams', 1987)

Ok, so if no cast is given, the fields must fall back to some default
(and probably also when a cast is made that hasn't been defined).

But watch this: being clueless (but not stupid) is a gift I have
for troubleshooting. I tried (incorrectly) to insert another record:

cur.execute("in sert into book(title, author, published) values ('Dirk
Gently''s Holistic Detective Agency','Dougla s Adams','1987')" )

(u"Dirk Gently's Holistic Detective Agency", u'Douglas Adams', 1987)
(u"Dirk Gently's Holistic Detective Agency", u'Douglas Adams', u'1987')

Uhh...how can a database have a different field type for each record?

Simple: without a cast when the table is created, the field type is
whatever you insert into it. That's how the default must work:
each record has a data structure independent of every other record!

Wow. Just think of the kind of bugs *that* must cause.

Bugs?

Here's MY example, creating a Cartesian Product

<code>
import sqlite3

letters = [(2,), ('10',), ('20',), (200,)]
con = sqlite3.connect(":memory:")
con.text_factory = str
con.execute("create table letter(c integer)")
con.executemany("insert into letter(c) values (?)", letters)

print 'Queried: ',
for row in con.execute("select c from letter"):
    print row,
print
print

print 'Sorted: ',
for row in con.execute("select c from letter order by c"):
    print row[0],
print
print

print 'Cartesian Product: ',
for row in con.execute("select a.c, b.c, c.c from letter as a, letter as b, letter as c"):
    print row[0]+row[1]+row[2],
</code>

Note that the list of data to be inserted contains both strings and
ints. But because the field was correctly cast as [integer], Sqlite3
actually stored integers in the db. We can tell that from how the
"order by" returned the records.

Queried: (2,) (10,) (20,) (200,)

Sorted: 2 10 20 200

Cartesian Product: 6 14 24 204 14 22 32 212 24 32 42 222 204 212
222 402 14 22 32 212 22 30 40 220 32 40 50 230 212 220 230 410 24
32 42 222 32 40 50 230 42 50 60 240 222 230 240 420 204 212 222
402 212 220 230 410 222 230 240 420 402 410 420 600

Because if I cast them as [text] the sort order changes (and my
Cartesian Product becomes concatenation instead of summation).

Queried: ('2',) ('10',) ('20',) ('200',)

Sorted: 10 2 20 200

Cartesian Product: 222 2210 2220 22200 2102 21010 21020 210200
2202 22010 22020 220200 22002 220010 220020 2200200 1022 10210
10220 102200 10102 101010 101020 1010200 10202 102010 102020
1020200 102002 1020010 1020020 10200200 2022 20210 20220 202200
20102 201010 201020 2010200 20202 202010 202020 2020200 202002
2020010 2020020 20200200 20022 200210 200220 2002200 200102
2001010 2001020 20010200 200202 2002010 2002020 20020200 2002002
20020010 20020020 200200200

But if I omit the cast altogether, then the db stores the input
exactly as it was inserted, so the c field contains both
text and integers wreaking havoc with my sort order, making
records un-queryable using "where" and causing my Cartesian
Product to crash.

Queried: (2,) ('10',) ('20',) (200,)

Sorted: 2 200 10 20

Cartesian Product: 6

Traceback (most recent call last):
  File "C:\Python25\user\sqlite_test2.py", line 225, in <module>
    print row[0]+row[1]+row[2],
TypeError: unsupported operand type(s) for +: 'int' and 'str'
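
For what it's worth, SQLite's typeof() function will show directly what got
stored in each row; a quick sketch (roughly) against the same no-cast table:

<code>
import sqlite3

con = sqlite3.connect(':memory:')
con.text_factory = str
# no type declared on the column this time
con.execute('create table letter(c)')
con.executemany('insert into letter(c) values (?)',
                [(2,), ('10',), ('20',), (200,)])

# typeof() reports the storage class actually used for each row
for row in con.execute('select c, typeof(c) from letter'):
    print row
# (2, 'integer')
# ('10', 'text')
# ('20', 'text')
# (200, 'integer')
</code>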

Yeah, I know, I've heard it before.

"This behavior is by design."

It's still fuckin' goofy.

Sep 5 '06
Unfortunately, I don't think they are going to duplicate the 200 or
so page O'Reilly SQLite book as part of the help system (even if that
book is quite out-of-date; there is one skinny chapter near the end that
explains what changes "will appear" in the version that has been
available for Python for over a year now).
--
Just to let you (and everyone else know) there is a new SQLite book
out from APress that covers SQLite 3

http://www.apress.com/book/bookDisplay.html?bID=10130

It actually has a section that covers what a lot of these postings
have been discussing, check constraints. You can actually implement
type checking constraints in SQLite with very little additional code.
That way it will give you an error message if you try to insert
something of the wrong type.
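
Something along these lines, I'd guess -- a minimal sketch using a CHECK on
typeof(), not taken from the book:

<code>
import sqlite3

con = sqlite3.connect(':memory:')
con.execute('''create table stocks
               (qty integer check (typeof(qty) = 'integer'),
                price real check (typeof(price) = 'real'))''')

con.execute('insert into stocks values (?, ?)', (100, 35.14))     # fine
con.execute('insert into stocks values (?, ?)', ('lots', 35.14))
# the second insert raises sqlite3.IntegrityError, since 'lots' would be
# stored as text and the typeof() check rejects it
</code>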

HTH,

Preston
Sep 11 '06 #41
At Tuesday 5/9/2006 16:23, me********@aol.com wrote:
>I would be surprised if they had never used ANY database. A little
thing like dynamic field typing will simply make it impossible to
migrate your Sqlite data to a *real* database.
Why not? Because it breaks the relational model rules? That model
certainly was great 30 years ago, but now things are different. (In
fact, you didn't mention the word "relational", but I presume you
were thinking of that).
Even what you call *real* databases have a lot of incompatibilities
among them (e.g. ORACLE does not provide an "autoincrement" type, but
has sequences, and so on...). Of course you could restrict yourself
to, for example, SQL92 entry level and be a lot more compatible.
But if I'm using a nice OO language like Python which lets me bind
*any* object to *any* name, why should it be wrong to bind *any* object
to *any* database column? Looks a lot more "pythonic" to me. Of
course, a true object database (like ZODB) is better.

Gabriel Genellina
Softlab SRL


Sep 12 '06 #42

Gabriel Genellina wrote:
At Tuesday 5/9/2006 16:23, me********@aol.com wrote:
I would be surprised if they had never used ANY database. A little
thing like dynamic field typing will simply make it impossible to
migrate your Sqlite data to a *real* database.

Why not? Because it breaks the relational model rules?
That's part of it.
That model
certainly was great 30 years ago, but now things are different.
Different only in "lite" databases.
(In
fact, you didn't mention the word "relational", but I presume you
were thinking of that).
Even what you call *real* databases have a lot of incompatibilities
among them (e.g. ORACLE does not provide an "autoincrement" type, but
has sequences, and so on...).
But it was stated in the sqlite docs that ALL SQL databases
use static types implying that sqlite will be incompatible
with any "heavy" database should the need arise to migrate
upwards. The issue is not that there will be compatibilty
problems with any data migration but that the truth is exactly
opposite of what's claimed in Section 13.13.

I'm not saying sqlite can't be used, what I'm asking for
is that the documentation lay the facts out and I'll decide
whether I can make this work in my application. Lying about
it makes you sound like Microsoft.
Of course you could restrict yourself
to, for example, SQL92 entry level and be a lot more compatible.
But if I'm using a nice OO language like Python which lets me bind
*any* object to *any* name, why should it be wrong to bind *any* object
to *any* database column?
But SQL isn't OO, it's relational. That means JOINing tables
together on a common field. In theory, due to the comparison
hierarchy, it is impossible to do JOINs with dynamic typing
since different types can never be equal. In practice, the type
affinity kluge tries to work around this but can't do anything
if the string doesn't look like an integer when a text field
attempts to JOIN to an integer field.
Looks a lot more "pythonic" to me.
If all you have is a hammer, everything looks like a nail.
Of
course, a true object database (like ZODB) is better.

Gabriel Genellina
Softlab SRL

Sep 12 '06 #43
me********@aol.com wrote:
But it was stated in the sqlite docs that ALL SQL databases
use static types implying that sqlite will be incompatible
with any "heavy" database should the need arise to migrate
upwards. The issue is not that there will be compatibility
problems with any data migration but that the truth is exactly
opposite of what's claimed in Section 13.13.

I'm not saying sqlite can't be used, what I'm asking for
is that the documentation lay the facts out and I'll decide
whether I can make this work in my application. Lying about
it makes you sound like Microsoft.
I thought your qualm was with the pysqlite docs, not the sqlite docs
(which apparently do make it plain how the database handles typing)?

Also, as others have mentioned, there are a number of ways to ensure
type safety, as long as you know how the database works (which as I
understand was your original point -- that it should be better
documented how it works in the pysqlite docs; and I am inclined to
agree -- at least a mention with link to the sqlite docs would be
helpful). But given that type safety is not an issue if you use those
ways of ensuring it, then the move to a fuller database _will_ be
relatively easy. If you don't want to change anything in your database
creation/update code a la check constraints, you can always explicitly
validate from python, which can be done programmatically (very simple
example -- you could also use regexp patterns to validate; e.g., string
fields not only must be type str, but must not match '^\d+$', &c):

rows = [
    ['1', 'fred', '0051', '/home/fred'],
    ['2', 'bob', '0054', '/home/bob'],
    ['3', 'bork', '>056', '/home/bork']
]

def validate(row):
    return [int(row[0]), str(row[1]), int(row[2]), str(row[3])]

for i in xrange(len(rows)):
    rows[i] = validate(rows[i])  # <- throws an exception on the third row
# database stuff here...

Regards,
Jordan

Sep 12 '06 #44
On 11 Sep 2006 23:29:28 -0700, me********@aol.com <me********@aol.com> wrote:
But it was stated in the sqlite docs that ALL SQL databases
use static types implying that sqlite will be incompatible
with any "heavy" database should the need arise to migrate
upwards. The issue is not that there will be compatibility
problems with any data migration but that the truth is exactly
opposite of what's claimed in Section 13.13.
Implying? There's a solid word. Migrating data from SQLite to other
databases is no more difficult or easy than migrating data to any
other database. Do you think this is ever trivial? It's as hard or as
easy as you make it. No database can just take any schema and the data
you put in it and just magically convert that schema/data to
flawlessly work in any arbitrary database of your choosing. Some
databases have tools to help with this, but they still are not
perfect.
I'm not saying sqlite can't be used, what I'm asking for
is that the documentation lay the facts out and I'll decide
whether I can make this work in my application. Lying about
it makes you sound like Microsoft.
Lying? Who's lying? Where on the website is there a lie about
anything? From what I can tell, you've not taken the time to read the
documentation or post anything to the mailing list. You've just posted
jeremiads on the Python list.

Don't like the documentation? Ever volunteered to help out? Ever
posted any suggestions on the list or report a bug? Do you really
think that open source projects exist to serve you and meet your
standards? Do you think that free code and documentation just falls
like manna from heaven? Do you honestly think the two core developers
of SQLite have some secret agenda to deceive you or the world into
using SQLite?
Of course you could restrict yourself
to, for example, SQL92 entry level and be a lot more compatible.
But if I'm using a nice OO language like Python which lets me bind
*any* object to *any* name, why should it be wrong to bind *any* object
to *any* database column?

But SQL isn't OO, it's relational. That means JOINing tables
together on a common field. In theory, due to the comparison
hierarchy, it is impossible to do JOINs with dynamic typing
since different types can never be equal. In practice, the type
affinity kluge tries to work around this but can't do anything
if the string doesn't look like an integer when a text field
attempts to JOIN to an integer field.
Unless you ensure that the correct types are put in the column to
begin with, which is entirely possible with SQLite, as I've already
demonstrated. And if that's just too much to bear, you can still do an
inner join by explicitly casting the two columns in the join
constraint to a common desired type. Want to know how? Read the
documentation.
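
(A quick sketch of the shape of that, with made-up tables, just to show
where the casts go:)

<code>
import sqlite3

con = sqlite3.connect(':memory:')
con.executescript("""
    create table a(id);        -- no declared type, so '1' is stored as text
    create table b(id, name);  -- no declared type, so 1 is stored as an integer
    insert into a values ('1');
    insert into b(id, name) values (1, 'one');
""")

# text '1' and integer 1 are different storage classes and will not compare
# equal, so cast both sides of the join condition to a common type
rows = con.execute("""select a.id, b.name
                      from a join b
                      on cast(a.id as integer) = cast(b.id as integer)""").fetchall()
print rows   # [(u'1', u'one')]
</code>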
Sep 12 '06 #45

Mike Owens wrote:
On 11 Sep 2006 23:29:28 -0700, me********@aol.com <me********@aol.com> wrote:
But it was stated in the sqlite docs that ALL SQL databases
use static types implying that sqlite will be incompatible
with any "heavy" database should the need arise to migrate
upwards. The issue is not that there will be compatibility
problems with any data migration but that the truth is exactly
opposite of what's claimed in Section 13.13.

Implying? There's a solid word. Migrating data from SQLite to other
databases is no more difficult or easy than migrating data to any
other database. Do you think this is ever trivial? It's as hard or as
easy as you make it. No database can just take any schema and the data
you put in it and just magically convert that schema/data to
flawlessly work in any arbitrary database of your choosing. Some
databases have tools to help with this, but they still are not
perfect.
So, knowing that, would you agree that

<quote Python Library Reference 13.13>
If switching to a larger database such as PostgreSQL or Oracle
is later necessary, the switch should be relatively easy.
</quote>

is misleading if not outright untruthful?
>
I'm not saying sqlite can't be used, what I'm asking for
is that the documentation lay the facts out and I'll decide
whether I can make this work in my application. Lying about
it makes you sound like Microsoft.

Lying? Who's lying?
See above quote. And while you're at it, see the sqlite docs
about how the SQL Language Specification of static typing
is a bug.
Where on the website is there a lie about
anything? From what I can tell, you've not taken the time to read the
documentation or post anything to the mailing list. You've just posted
jeremiads on the Python list.

Don't like the documentation?
No, it's misleading and full of errors (this is the Python docs
I'm referring to).
Ever volunteered to help out?
That's what this thread was about, testing the waters.
No point making bug reports if I'm the one who's wrong.
But it turns out I'm not wrong, sqlite IS goofy and this
should be pointed out.
Ever posted any suggestions on the list or report a bug?
I'm still considering it. This thread has been very useful
towards that.
Do you really
think that open source projects exist to serve you and meet your
standards? Do you think that free code and documentation just falls
like manna from heaven?
But why does it have to be wrong? It's just as easy to get
things right. Isn't that your complaint, that if I read the sqlite
docs first, the ridiculous examples in the Python docs would
have made more sense? Why didn't the guy writing the Python
docs read the sqlite docs first?
Do you honestly think the two core developers
of SQLite have some secret agenda to deceive you or the world into
using SQLite?
Why do they claim that the SQL Language Specification of
static typing is a bug? That's simply a lie. Why do they claim
they've "fixed" it in a backwards compatible way? That's another
lie.

Why didn't they simply say they have an alternative to static
typing? Because part of the deception is to make people think
there is something wrong with static typing.
>
Of course you could restrict yourself
to, for example, SQL92 entry level and be a lot more compatible.
But if I'm using a nice OO language like Python which lets me bind
*any* object to *any* name, why should it be wrong to bind *any* object
to *any* database column?
But SQL isn't OO, it's relational. That means JOINing tables
together on a common field. In theory, due to the comparison
hierarchy, it is impossible to do JOINs with dynamic typing
since different types can never be equal. In practice, the type
affinity kluge tries to work around this but can't do anything
if the string doesn't look like an integer when a text field
attempts to JOIN to an integer field.

Unless you ensure that the correct types are put in the column to
begin with, which is entirely possible with SQLite, as I've already
demonstrated. And if that's just too much to bear, you can still do an
inner join by explicitly casting the two columns in the join
constraint to a common desired type. Want to know how? Read the
documentation.
And what do you get when you implement all these kluges?
A database that effectively is static typed. Do you still think
static typing is a bug?

Sep 12 '06 #46
me********@aol.com wrote:
So, knowing that, would you agree that

<quote Python Library Reference 13.13>
If switching to a larger database such as PostgreSQL or Oracle
is later necessary, the switch should be relatively easy.
</quote>

is misleading if not outright untruthful?
eh? if you've never migrated *from* SQLite to some other database, how
can *you* possibly know *anything* about how hard or easy it is?

</F>

Sep 12 '06 #47

Fredrik Lundh wrote:
me********@aol.com wrote:
So, knowing that, would you agree that

<quote Python Library Reference 13.13>
If switching to a larger database such as PostgreSQL or Oracle
is later necessary, the switch should be relatively easy.
</quote>

is misleading if not outright untruthful?

eh? if you've never migrated *from* SQLite to some other database, how
can *you* possibly know *anything* about how hard or easy it is?
Because I can extrapolate. I *know* before even trying it that
if I export all my data from a sqlite db to a csv file and then try
to import it into Access that there will be problems if the fields
aren't static typed.

That's one of the reasons why I was such a good test engineer.
I could anticipate problems the design engineers didn't think of
and I would deliberately provoke those problems during testing
and crash their hardware/software.

I wasn't very popular.
>
</F>
Sep 12 '06 #48
On 12 Sep 2006 10:24:00 -0700, me********@aol.com <me********@aol.com> wrote:
So, knowing that, would you agree that

<quote Python Library Reference 13.13>
If switching to a larger database such as PostgreSQL or Oracle
is later necessary, the switch should be relatively easy.
</quote>

is misleading if not outright untruthful?
Not in the least.

If you know what you are doing from a database perspective (not just a
SQLite perspective), migrating data to another database is exactly
that -- relatively easy. That means, you may have to recreate or
modify your schema for the target database -- and this is true in ALL
databases. Native datatypes vary from system to system, and some
systems support user-defined data types, in which case your schema
will definitely have to be modified. How would you migrate a CIDR type
in PostgreSQL to a numeric field in Oracle? You have to work at it.

Next, as far as transferring your data, you most likely have to resort
to some delimited format, or INSERT statements, which is no different
than any other database.

So, I would call that relatively easy without a stretch, and
certainly no different than migrating data with any other database.

Really, how is this different than migrating data to/from any other database?
whether I can make this work in my application. Lying about
it makes you sound like Microsoft.
Lying? Who's lying?

See above quote. And while you're at it, see the sqlite docs
about how the SQL Language Specification of static typing
is a bug.
Both of which have been addressed in detail. The above quote is not
even stretching the truth, and the latter fact is a deviation that
SQLite has every right to make because they, and not you, wrote the
software. Furthermore, it is very clearly stated on the website.

So how is that a lie?
No, it's misleading and full of errors (this is the Python docs
I'm referring to).
I didn't join this thread because of Python's documentation, and I've
made that clear. I am here because you are unjustly vilifying the
SQLite project.
Ever volunteered to help out?

That's what this thread was about, testing the waters.
No point making bug reports if I'm the one who's wrong.
But it turns out I'm not wrong, sqlite IS goofy and this
should be pointed out.
Then be a man and point it out on the SQLite mailing list, where you
can be called on it, rather than ranting about it here.
But why does it have to be wrong? It's just as easy to get
things right. Isn't that your complaint, that if I read the sqlite
docs first, the ridiculous examples in the Python docs would
have made more sense? Why didn't the guy writing the Python
docs read the sqlite docs first?
First, SQLite's approach is no more wrong than any other database's
deviation from the standard. Second, as I've said, I'm not here for
the Python issues. I think they'll get things sorted out in due time,
and people on this list have been very receptive to your feedback.
Do you honestly think the two core developers
of SQLite have some secret agenda to deceive you or the world into
using SQLite?

Why do they claim that the SQL Language Specification of
static typing is a bug? That's simply a lie. Why do they claim
they've "fixed" it in a backwards compatible way? That's another
lie.
It's not a lie at all. Are you incapable of comprehending the context
of that text? Do you not understand that it effectively says "This is
the way we do things. It's not in agreement with the SQL standard. We
know that, we are doing it this way, and here's how it works, take it
or leave it." And the whole bug in the SQL standard, if you can't
tell, is called humor.
Why didn't they simply say they have an alternative to static
typing?
They did. You couldn't understand that from the documentation?
Because part of the deception is to make people think
there is something wrong with static typing.
Yes, it's really an underhanded conspiracy designed to deceive and
mislead on a global scale. I can just see the developers sitting
around plotting:

"Hey, let's write some free software. Yeah let's give the code away
for free and not make a dime from it. Yeah, and then let's make up a
bunch of lies to make people want to use it, so we can continue to not
make a dime from it. And let's slander the SQL standard, and say all
sorts nasty things about it. Yeah, that's how we'll spend our nights
and weekends."

You really need to find some fault that will stick at this point, don't you?

They're really up to something.
And what do you get when you implement all these kluges?
A database that effectively is static typed.
Only if you want one. Otherwise, you have the freedom of dynamic
typing, which other databases don't afford. So, you in fact have more
freedom than you do with databases that only offer strict typing.
Do you still think static typing is a bug?
Did I say this, ever? I am not the SQLite website.

I don't think either is a bug. Both are two different viewpoints used
to solve a particular problem. Dynamic typing and type affinity can be
incredibly useful, especially for prototyping and scripting languages.
I don't have to dump and reload my tables if I want to change a column
I am testing out. I just delete my data, reload it, and get my program
to store a different representation in the column. And when I need
strict typing, I simply declare a check constraint. So no, you can't
say that your particular need for static typing meets all criteria for
all developers using SQLite. And frankly, if SQLite doesn't meet my
requirements, rather than badmouthing SQLite, I use another database,
such as PostgreSQL.

And they are entirely correct in saying that you are provided the
facilities to implement strict typing. You are given five native
storage classes and the means to ensure that only data of those
classes is stored in columns. What's all the fuss?

Again, SQLite is completely up front about its approach to typing. I
still don't see how it is that anyone is lying to you.
Sep 12 '06 #49
me********@aol.com wrote:
Because I can extrapolate. I *know* before even trying it that
if I export all my data from a sqlite db to a csv file and then try
to import it into Access that there will be problems if the fields
aren't static typed.
that's just the old "C++/Java is better than Smalltalk/Python/Ruby"
crap. we've seen it before, and it's no more true when it comes from
you than when it comes from some Java head. people who've actually used
dynamic typing know that it doesn't mean that all objects have random
types all the time.
That's one of the reasons why I was such a good test engineer.
I could anticipate problems the design engineers didn't think of
and I would deliberately provoke those problems during testing
and crash their hardware/software.

I wasn't very popular.
no wonder, if you kept running around telling your colleagues that they
were liars and crackpots and slanderers when things didn't work as you
expected. what's your current line of work, btw?

</F>

Sep 12 '06 #50
