I have a 20 MB in-memory stack-based array and I'd like to normalise it in the
client's memory... then add foreign keys, and display the results on a datagrid.
I discovered that when I convert the stack to a DataTable, my memory
utilisation increases to 120 MB (a lot of overhead).
A couple of questions:
1) Is that memory increase typical? All I want to do is bind it to a
datagrid.
2) Suppose I dump it to a CSV or to SQL. Is it possible to retrieve only a
subset of all that data, and page through the data when the user scrolls
(given that this is not an ASP application)?
3) Lastly, and **perhaps most importantly**: how do people in the real
world access and update normalised data in large databases like this one
from C#? Is the data joined by a stored procedure, by in-memory DataTable
foreign keys, or by a Transact-SQL command issued by the client?
Scottie_do,
I'd ask first why you want to bind the equivalent of a 20 MB array to a
datagrid.
Realistically, is the user going to need to view all this data in one go?
DataTables have a lot of extra baggage and features, so yes, you would expect
the memory consumption to be higher.
If you store it in SQL Server, you can of course write a paging algorithm.
There are a number of examples of stored procedures that will return "per
page" result sets for display in a grid via a DataTable.
Typically, the data in the database is already normalized, so the results
are obtained through a multi-table join query.
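For illustration, here's a minimal sketch of the client side of that pattern. The stored procedure name, parameter names, and connection string are all hypothetical placeholders, not from your project:

```csharp
using System.Data;
using System.Data.SqlClient;

class PagedFetcher
{
    // Fetch one "page" of rows via a paging stored procedure and
    // return just that page for binding to the grid.
    public static DataTable GetPage(string connStr, int pageNumber, int pageSize)
    {
        DataTable page = new DataTable();
        using (SqlConnection conn = new SqlConnection(connStr))
        using (SqlCommand cmd = new SqlCommand("usp_GetResultPage", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("@PageNumber", pageNumber);
            cmd.Parameters.AddWithValue("@PageSize", pageSize);
            new SqlDataAdapter(cmd).Fill(page); // Fill opens/closes the connection
        }
        return page;
    }
}

// usage: dataGridView1.DataSource = PagedFetcher.GetPage(connStr, 1, 50);
```

The point is that only one page's worth of rows ever lives in the client's DataTable, so the 120 MB problem never arises.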
Peter
--
Co-founder, Eggheadcafe.com developer portal: http://www.eggheadcafe.com
UnBlog: http://petesbloggerama.blogspot.com
I'm analysing data at runtime and need to provide a few filtered views. I
don't have to bind all the tables to the grid, and I'm planning on tossing
irrelevant data, or exporting it for future analysis.
Most of the paging solutions I've seen target a web-based model. Since I
want my app to be more of an Avalon/interactive app, I'd like to ship it as
an .exe.
The target database has not been created yet; this aspect of the project is
still in design. I'm just trying to come up with a solution that will allow
several real-time feeds to write to the database, normalise the data (to save
space), obtain unique keys, and allow clients to instantly see the
aggregated data.
I'm thinking I could do the normalisation on the "stream reader" client,
which can feed a JIT display engine that discards unnecessary data, or fork
off a stream to the SQL DB. Should the forked stream be normalised by the
client/stream reader, or should it be processed by the SQL engine? This can
be a lot of data, so I'm trying to offload as much as possible.
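A rough sketch of the client-side half of that split, assuming the stream reader does the key assignment itself. Every type and member name here is made up for illustration; this is not a real feed API:

```csharp
using System.Collections.Generic;

// Client-side normalisation: map repeated string values (e.g. symbols)
// to surrogate integer keys before forwarding rows anywhere, so the
// forked stream to SQL carries small ints instead of long strings.
public class StreamNormaliser
{
    private readonly Dictionary<string, int> keys = new Dictionary<string, int>();

    public int GetOrAddKey(string value)
    {
        int id;
        if (!keys.TryGetValue(value, out id))
        {
            id = keys.Count + 1;   // next surrogate key
            keys.Add(value, id);
        }
        return id;                 // same value always yields the same key
    }
}

// reader loop (pseudostructure):
//   int symbolId = normaliser.GetOrAddKey(record.Symbol);
//   display.Show(symbolId, record.Value);    // JIT display path
//   sqlQueue.Enqueue(symbolId, record.Value); // forked path to the SQL DB
```

The trade-off is that the client must hold the key dictionary in memory; if the distinct-value count is huge, letting the SQL engine do the lookups instead may be cheaper.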
-Chris
Scottie_do wrote: 1) Is that memory increase typical? All I want to do is bind it to a datagrid.
Yes, although the data in a DataTable isn't stored with a lot of overhead,
binding it to a grid is.
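One way to sidestep that binding overhead, assuming a .NET 2.0 WinForms DataGridView, is virtual mode: the grid asks for cell values on demand instead of materialising a bound copy. The `rawData` array and its loader are placeholders for the existing in-memory data:

```csharp
using System.Windows.Forms;

// Inside the form's setup code. In virtual mode the grid never
// copies the data; it raises CellValueNeeded per visible cell.
void SetUpGrid(DataGridView grid, string[,] rawData)
{
    grid.VirtualMode = true;
    grid.ColumnCount = rawData.GetLength(1);
    grid.RowCount = rawData.GetLength(0);
    grid.CellValueNeeded += delegate(object s, DataGridViewCellValueEventArgs e)
    {
        e.Value = rawData[e.RowIndex, e.ColumnIndex];
    };
}
```

With this approach the only per-row cost is the grid's row metadata, not a second copy of the values.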
2) Suppose I dump it to a CSV or to SQL. Is it possible to retrieve only a subset of all that data, and page through the data when the user scrolls (given that this is not an ASP application)?
Yes, paging is easily done on every database out there, except MS Access
(Access doesn't offer 'server side' paging). You typically retrieve a page
of data from the DB, bind it to a grid, the user works on it, the user
clicks a 'next' button (or whatever), and the next page is retrieved.
This is of course also doable in an .exe.
Most databases offer very easy paging; SQL Server has problems in this
area though, except SQL Server 2005, which offers a slightly better way of
doing paging (though it still isn't optimal, as you have to specify a sort
column for the row number). If reading is your primary concern, I'd take
this into account when selecting the DB.
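On SQL Server 2005, that row-number approach looks roughly like this; the table name, sort column, and parameter names are invented for the example:

```csharp
using System.Data;
using System.Data.SqlClient;

class RowNumberPaging
{
    // Server-side paging with ROW_NUMBER(): only rows @First..@Last
    // cross the wire. Note the mandatory sort column (here "Id")
    // inside OVER(...).
    public static DataTable FetchRows(string connStr, int first, int last)
    {
        string pagingSql = @"
            SELECT * FROM (
                SELECT ROW_NUMBER() OVER (ORDER BY Id) AS RowNum, *
                FROM Readings
            ) AS numbered
            WHERE RowNum BETWEEN @First AND @Last;";

        DataTable page = new DataTable();
        using (SqlConnection conn = new SqlConnection(connStr))
        using (SqlCommand cmd = new SqlCommand(pagingSql, conn))
        {
            cmd.Parameters.AddWithValue("@First", first);
            cmd.Parameters.AddWithValue("@Last", last);
            new SqlDataAdapter(cmd).Fill(page);
        }
        return page;
    }
}
```

Wired to a 'next' button, each click just calls `FetchRows` with the next window of row numbers.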
3) Lastly, and perhaps most importantly: how do people in the real world access and update normalised data in large databases like this one from C#? Is the data joined by a stored procedure, by in-memory DataTable foreign keys, or by a Transact-SQL command issued by the client?
Large databases are treated like small databases: retrieve the data you
want to work with at time T, and only that data, not the data you will want
to work with at T+t. Use the RDBMS features available to retrieve the data
you want to work with; don't do things in memory, as that's always slower
and you always have to keep more data around. So if you have to process
10,000 rows, no one will load them all up front and bind them to a grid.
Instead, load 50 up front, show them in the grid, and let the user proceed
from there.
FB
--
------------------------------------------------------------------------
Get LLBLGen Pro, productive O/R mapping for .NET: http://www.llblgen.com
My .NET blog: http://weblogs.asp.net/fbouma
Microsoft MVP (C#)
------------------------------------------------------------------------