Bytes | Developer Community
Efficient way of querying DataSets in C#

Guys,

I have an XML file which is 233MB in size. It was created by loading 6
tables from a SQL Server database into a DataSet object and then
writing the contents of this dataset out to an XML file.

When my application starts, I load this XML file into a DataSet object
using the ReadXml method. This creates a dataset in memory with 6
tables.

I then run a query against a table like this: ds.Tables[i].Select("Query
String");

When I run a query against a small table (i.e. < 10,000 rows), I get
results in under a second. However, when I run a simple select query
against a large table (~2 million records), the query takes almost a
minute. Is there a way to optimize this approach to achieve the same
result?

Basically, I don't want to deal with a database, but rather have XML
files loaded as datasets (i.e. in-memory objects) and run queries
against the datasets.
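For reference, here is a minimal, self-contained version of the pattern I described (the table and column names are just placeholders, and the real file holds 6 much larger tables):

```csharp
using System;
using System.Data;
using System.IO;

class Program
{
    static void Main()
    {
        // Build a tiny stand-in table; in the real app the XML was
        // exported from SQL Server.
        var source = new DataSet("Export");
        var table = new DataTable("Items");
        table.Columns.Add("Id", typeof(int));
        table.Columns.Add("Name", typeof(string));
        table.Rows.Add(1, "alpha");
        table.Rows.Add(2, "beta");
        source.Tables.Add(table);

        // Round-trip through XML: WriteXml on export, ReadXml at startup.
        // WriteSchema preserves the column types across the round trip.
        string path = Path.Combine(Path.GetTempPath(), "export.xml");
        source.WriteXml(path, XmlWriteMode.WriteSchema);

        var ds = new DataSet();
        ds.ReadXml(path);

        // Select walks the table row by row on every call, which is why
        // it is O(n) per query on a 2-million-row table.
        DataRow[] rows = ds.Tables[0].Select("Name = 'beta'");
        Console.WriteLine(rows.Length); // prints 1
    }
}
```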

Is there a more efficient way of doing this?
Thank you very much

Dec 19 '06 #1


"sql_er" <sq****@yahoo.comwrote in message
news:11*********************@73g2000cwn.googlegrou ps.com...
> Guys,
>
> I have an XML file which is 233MB in size. It was created by loading 6
> tables from a SQL Server database into a dataset object and then
> writing out the contents from this dataset into an XML file.
>
> Once my application starts, I load this XML file into a DataSet object
> using the ReadXml method. This creates a dataset in memory with 6
> tables.
>
> I then run a query against a table as such: ds.Tables[i].Select("Query
> String");
>
> When I run a query against a small table (i.e. < 10,000 rows), I get
> results under a second. However, when I run a simple select query
> against a large table (~ 2 million records), the query takes almost a
> minute. Is there a way to optimize this approach to achieve the same
> result?

Yes. Load the data into SQL Server. It's quite good at querying data.

> Basically, I don't want to deal with a database, but rather have XML
> files loaded as datasets (i.e. in-memory objects) and run queries
> against the datasets.

You could try SQL Server Compact Edition, which is a lightweight, embedded
database engine, if you don't want to install SQL Server Express Edition.

> Is there a more efficient way of doing this?

You can navigate large in-memory DataSets efficiently, but you shouldn't
expect queries to be efficient. If you know the access paths for the data
tables you can build indexes on them, or you can populate external
in-memory structures, like a Dictionary<string, IList<DataRow>>, to
provide quick lookups. But you shouldn't depend on fantastic performance
from ADO.NET processing queries against datasets with millions of rows.
That's just not the problem they were designed to solve.
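As a rough sketch of that Dictionary lookup idea (the table, column, and key names here are hypothetical):

```csharp
using System;
using System.Collections.Generic;
using System.Data;

class Program
{
    static void Main()
    {
        var table = new DataTable("Orders");
        table.Columns.Add("CustomerId", typeof(string));
        table.Columns.Add("Amount", typeof(int));
        table.Rows.Add("C1", 100);
        table.Rows.Add("C1", 250);
        table.Rows.Add("C2", 75);

        // One O(n) pass builds the lookup; after that, each query is a
        // constant-time dictionary hit instead of a full table scan.
        var index = new Dictionary<string, IList<DataRow>>();
        foreach (DataRow row in table.Rows)
        {
            var key = (string)row["CustomerId"];
            if (!index.TryGetValue(key, out IList<DataRow> list))
                index[key] = list = new List<DataRow>();
            list.Add(row);
        }

        Console.WriteLine(index["C1"].Count); // prints 2
    }
}
```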
David

Dec 19 '06 #2
"sql_er" <sq****@yahoo.comwrote in message
news:11*********************@73g2000cwn.googlegrou ps.com...
> I have an XML file which is 233MB in size.

This is a joke, right...?

> When I run a query against a small table (i.e. < 10,000 rows), I get
> results under a second.

Consider yourself very fortunate...

> However, when I run a simple select query against a large table
> (~ 2 million records), the query takes almost a minute.

You're surely not surprised by that...?

> Is there a way to optimize this approach to achieve the same result?

There is - use a database. That's what they're for. They have all sorts of
functionality, such as indexes and stored procedures, which you can never
hope to achieve with XML.

Without any doubt whatsoever, the biggest mistake you can make with XML is
to try to use it as a replacement for a database. That's not what it's for
at all.

> Basically, I don't want to deal with a database,

Are you perhaps a masochist...?

> Is there a more efficient way of doing this?

See above.
Dec 20 '06 #3
When you state, "Basically I don't want to deal with a database", what is the
rationale behind this statement? Are databases evil? Or do you just have a
setup for your application that makes it difficult to install a database?

If you want really high performance from an in-memory database, you may want
to take a look at SQLite with its ADO.NET 2.0 provider. It's all in a single
assembly that requires zero installation, and it has an in-memory database
option that is lightning fast. You would load your tables at application
startup, create any in-memory indexes you need to speed up queries, and off
you go with regular SQL instead of the limited pseudo-SQL that DataSets
offer.
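To make that concrete, here is a rough sketch of the in-memory approach, assuming the System.Data.SQLite ADO.NET provider (the connection string and API details should be checked against the provider's own documentation):

```csharp
using System;
using System.Data.SQLite; // third-party ADO.NET provider, not part of the BCL

class Program
{
    static void Main()
    {
        // ":memory:" keeps the whole database in RAM for the life of the connection.
        using (var conn = new SQLiteConnection("Data Source=:memory:"))
        {
            conn.Open();
            using (var cmd = conn.CreateCommand())
            {
                // Create a table, add an index to back the query, and load rows;
                // in the real app this load happens once at application startup.
                cmd.CommandText =
                    "CREATE TABLE Items (Id INTEGER PRIMARY KEY, Name TEXT);" +
                    "CREATE INDEX IX_Items_Name ON Items (Name);" +
                    "INSERT INTO Items (Name) VALUES ('alpha');" +
                    "INSERT INTO Items (Name) VALUES ('beta');";
                cmd.ExecuteNonQuery();

                // Regular SQL with index support, instead of DataTable.Select.
                cmd.CommandText = "SELECT COUNT(*) FROM Items WHERE Name = 'beta'";
                Console.WriteLine(cmd.ExecuteScalar()); // prints 1
            }
        }
    }
}
```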

Peter

--
Co-founder, Eggheadcafe.com developer portal:
http://www.eggheadcafe.com
UnBlog:
http://petesbloggerama.blogspot.com


"sql_er" wrote:
Guys,

I have an XML file which is 233MB in size. It was created by loading 6
tables from an sql server database into a dataset object and then
writing out the contents from this dataset into an XML file.

Once my application starts, I load this XML file into a DataSet object
using "ReadXML" function. This creates a dataset in memory with 6
tables.

I then run a query against a table as such: ds.Tables[i].Select("Query
String");

When I run a query against a small table (i.e. < 10,000 rows), I get
results under a second. However, when I run a simple select query
against a large table (~ 2 million records), the query takes almost a
minute. Is there a way to optimize this approach to achieve the same
result?

Basically, I don't want to deal with a database, but rather have XML
files loaded as datasets (i.e. in-memory objects) and run queries
against the datasets.

Is there a more efficient way of doing this?
Thank you very much

Dec 20 '06 #4

"David Browne" <davidbaxterbrowne no potted me**@hotmail.comwrote in
message news:eL**************@TK2MSFTNGP02.phx.gbl...
> Yes. Load the data into SQL Server. It's quite good at querying data.
Just to add to what David told you: with SQL Server 2005 you can also create
XML indexes on XML columns, which can improve query performance significantly.
I've seen XQuery queries drop from 70+ seconds on a very large XML document
to under 2 seconds on the same document by using XML indexes on SQL 2K5.
Dec 20 '06 #5
> If you know the access paths for the data tables you can build indexes on
> the data tables, or you could populate external memory structures, like a
> Dictionary<string, IList<DataRow>>, to provide quick lookups. But you
> shouldn't depend on fantastic performance
Guys,

Just to clarify: I have nothing against databases. I am actually a
database programmer. I know that databases are efficient and were
designed specifically to deal with lots of data. It is just that we
have 5 databases, each a copy of the others, sitting on different
machines. The reason for having 5 identical databases on different
machines is load balancing.

I was just wondering if it would be possible to replace the databases
with an XML file, knowing that the queries we run against them are
extremely simple (i.e. simple selects). This approach was purely
exploratory.

As it turned out from my own experimentation, and from what I have heard
from all of you, XML is not the way to go. I guess I'll stick with
databases in that case.

However, since I deal with XML files anyway but had never heard about
indexing them until now, I would like to know how it can be done.
Could you give me more detail about how an XML file can be indexed?
And once indexed, do I need to use XQuery for the indexes to be used,
or could I still load the file into a dataset in C# and have the
indexing information loaded with it as well?
Thank you very much

Dec 20 '06 #6

This discussion thread is closed

Replies have been disabled for this discussion.
