Bytes IT Community

Creating Tables on the Fly

I have an Access 2000 MDB with ODBC linked tables to a SQL Server 7 back
end. I currently have a selections table in the front end file which the
users use to make selections of records. The table has two fields -- primary
key (which matches primary key of main, SQL Server table), and a boolean
field. The table is linked to the main table in a heterogeneous inner join.

I'm looking to move the table to the back end, while still giving each
machine a unique set of selections. Using one large table with machine name
as part of the primary key actually slows things down. So I'm considering
using a series of tables, where each machine has its own table in the back
end for selections. The machine name would be incorporated in the particular
selections table name, and the front end link would be modified on the fly
when the database is opened to point to that machine's back end selections
table.

This would require having about 50-100 individual selections tables in the
back end database. Also, if a machine doesn't have a table when the database
is opened on that machine, then that table would be created on the fly,
populated, and pointed to via the ODBC link.
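
As a rough sketch, the on-the-fly creation might look like the following in T-SQL (the table and column names here are illustrative placeholders, not from the actual database):

```sql
-- Hypothetical per-machine table creation, run when the front end opens.
-- 'Selections_WKSTN01' stands in for a name built from the machine name.
IF NOT EXISTS (SELECT * FROM sysobjects
               WHERE name = 'Selections_WKSTN01' AND type = 'U')
BEGIN
    CREATE TABLE Selections_WKSTN01 (
        MainID   int NOT NULL PRIMARY KEY,  -- matches the main table's PK
        Selected bit NOT NULL DEFAULT 0     -- the boolean selection flag
    )
END
```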

Anyone see any problems with this approach, specifically creating the table
on the fly and then immediately using it, as well as having that many little
tables running around? Thanks for any input!

Neil
Oct 29 '05 #1
31 Replies


> machine a unique set of selections. Using one large table with machine
> name as part of the primary key actually slows things down. So I'm
> considering
> ...
>
> This would require having about 50-100 individual selections tables in the
> back end database.


You have 50-100 rows in the table and you think that slows it down? Even
without a clustered index or any index at all, I find it hard to believe
that you can perceive any slowness whatsoever based on a scan of rows in the
three figure range. And to sacrifice manageability for that seems absurd,
at least to me.
Oct 29 '05 #2

Neil (no****@nospam.net) writes:
I have an Access 2000 MDB with ODBC linked tables to a SQL Server 7 back
end. I currently have a selections table in the front end file which the
users use to make selections of records. The table has two fields --
primary key (which matches primary key of main, SQL Server table), and a
boolean field. The table is linked to the main table in a heterogeneous
inner join.

I'm looking to move the table to the back end, while still giving each
machine a unique set of selections. Using one large table with machine
name as part of the primary key actually slows things down. So I'm
considering using a series of tables, where each machine has its own
table in the back end for selections. The machine name would be
incorporated in the particular selections table name, and the front end
link would be modified on the fly when the database is opened to point
to that machine's back end selections table.
...
Anyone see any problems with this approach, specifically creating the
table on the fly and then immediately using it, as well as having that
many little tables running around? Thanks for any input!


Yes, I see problems. Simply don't go there. This is not the way you use a
relational database. Make that machine name part of the PK in the single
table. If having one single table slows things down, investigate why
instead of resorting to kludges.

--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techinf...2000/books.asp

Oct 29 '05 #3

You misread my post. I said 50-100 tables, not 50-100 rows.

"Aaron Bertrand [SQL Server MVP]" <te*****@dnartreb.noraa> wrote in message
news:ew**************@TK2MSFTNGP09.phx.gbl...
machine a unique set of selections. Using one large table with machine
name as part of the primary key actually slows things down. So I'm
considering
...

This would require having about 50-100 individual selections tables in
the back end database.


You have 50-100 rows in the table and you think that slows it down? Even
without a clustered index or any index at all, I find it hard to believe
that you can perceive any slowness whatsoever based on a scan of rows in
the three figure range. And to sacrifice manageability for that seems
absurd, at least to me.

Oct 29 '05 #4

OK, point well taken. I don't see any workaround, though, since I'm using
ODBC linked tables. I can't use a pass-through query and pass the table name
as a parameter, because pass-throughs return read-only sets. And a view
wouldn't be able to return just the records for that machine.

Here's an idea. What if there were one table, but a series of views, with
each view returning records for a particular unique ID? When the app is
opened, it's assigned an available ID and its link is set to the view that
returns records for that ID. Still kind of a kludge, but perhaps a little
better.

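
A sketch of that idea, assuming a shared table keyed by an assigned slot ID (all names here are hypothetical):

```sql
-- One shared selections table, keyed by an assigned slot ID.
CREATE TABLE Selections (
    SlotID   int NOT NULL,
    MainID   int NOT NULL,
    Selected bit NOT NULL DEFAULT 0,
    PRIMARY KEY (SlotID, MainID)
)
GO

-- One view per slot; the front end links to the view for its assigned ID.
CREATE VIEW Selections_Slot1 AS
    SELECT MainID, Selected
    FROM   Selections
    WHERE  SlotID = 1
GO
```
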
"Erland Sommarskog" <es****@sommarskog.se> wrote in message
news:Xn**********************@127.0.0.1...
Neil (no****@nospam.net) writes:
I have an Access 2000 MDB with ODBC linked tables to a SQL Server 7 back
end. I currently have a selections table in the front end file which the
users use to make selections of records. The table has two fields --
primary key (which matches primary key of main, SQL Server table), and a
boolean field. The table is linked to the main table in a heterogeneous
inner join.

I'm looking to move the table to the back end, while still giving each
machine a unique set of selections. Using one large table with machine
name as part of the primary key actually slows things down. So I'm
considering using a series of tables, where each machine has its own
table in the back end for selections. The machine name would be
incorporated in the particular selections table name, and the front end
link would be modified on the fly when the database is opened to point
to that machine's back end selections table.
...
Anyone see any problems with this approach, specifically creating the
table on the fly and then immediately using it, as well as having that
many little tables running around? Thanks for any input!


Yes, I see problems. Simply don't go there. This is not the way you use a
relational database. Make that machine name part of the PK in the single
table. If having one single table slows things down, investigate why
instead of resorting to kludges.

--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techinf...2000/books.asp

Oct 29 '05 #5

You are sooooooo screwed up I have to use this in a book! I am looking
for anything you did right and I cannot find it.
>> I currently have a selections table in the front end file [sic] which the users use to make selections of records [sic]. The table has two fields [sic] -- primary key (which matches primary key of main [sic] SQL Server table), and a Boolean [sic] field [sic]. The table is linked to the main table in a heterogeneous inner join [sic]. <<
Let's get back to the basics of an RDBMS. Rows are not records; fields
are not columns; tables are not files. These are FUNDAMENTALLY
DIFFERENT CONCEPTS. SQL does not have Boolean data types -- they would
screw up the 3VL that is the foundation of SQL's model.
>> I'm looking to move the table to the back end, while still giving each machine a unique set of selections. <<
Then these will not be the same table, will they? Duh!
>> Using one large table with machine name as part of the primary key actually slows things down. <<
Machine name?? Have you ever read anything on data modeling? Logical
versus Physical?? The basics!! The basics!!
>> So I'm considering using a series of tables, where each machine has its own table in the back end for selections. <<
Sure, split the LOGICAL design over the PHYSICAL implementation. To
hell with the foundations of RDBMS, Data Modeling, etc. You are doing
a 1950's tape system in SQL.
>> The machine name would be incorporated in the particular selections table name, and the front end link would be modified on the fly when the database is opened to point to that machine's back end selections table. <<
You have just implemented a design we found to be a nightmare in the
1950's with magnetic tape file names. It is one of the reasons we went
to disk systems, then to navigational databases, and then to RDBMS
systems.
>> This would require having about 50-100 individual selections tables in the back end database. Also, if a machine doesn't have a table when the database is opened on that machine, then that table would be created on the fly, populated, and pointed to via the ODBC link. <<
Back to basics. A table models a set of one kind of entity or a
relationship. The schema models the reality of the data model in
those terms. Creating them on the fly is the logical equivalent of
having an elephant drop out of the sky.
>> Anyone see any problems with this approach, specifically creating the table on the fly and then immediately using it, as well as having that many little tables running around? <<


Dr. Codd, Chris Date, me, anyone in RDBMS?

Please get some help before you hurt people. Everything you are doing
is wrong.

Oct 29 '05 #6

Neil (no****@nospam.net) writes:
> OK, point well taken. I don't see any workaround, though, since I'm
> using ODBC linked tables. I can't use a pass-through query and pass the
> table name as a parameter, because pass-throughs return read-only sets.
> And a view wouldn't be able to return just the records for that
> machine.
As for how to sort out the linked tables, you will have to ask in an
Access newsgroup.
> Here's an idea. What if there were one table, but a series of views, with
> each view returning records for a particular unique ID? When the app is
> opened, it's assigned an available ID and its link is set to the view that
> returns records for that ID. Still kind of a kludge, but perhaps a little
> better.


A table-valued function with the machine name as parameter would be better.
But without knowing Access, I'm quite sure that this can be solved by
adding the machine name on the Access side as well. After all, that is
also an RDBMS, and should be fitted for relational solutions.
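
For reference, such a function might look like the following; note that table-valued functions require SQL Server 2000 or later, so they would not be available on the SQL Server 7 back end described in this thread (schema and names are illustrative):

```sql
-- Inline table-valued function filtering a shared selections table
-- by machine name (SQL Server 2000+ only; illustrative schema).
CREATE FUNCTION dbo.MachineSelections (@MachineName nvarchar(128))
RETURNS TABLE
AS
RETURN (
    SELECT MainID, Selected
    FROM   dbo.Selections
    WHERE  MachineName = @MachineName
)
```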

--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techinf...2000/books.asp

Oct 29 '05 #7

--CELKO-- (jc*******@earthlink.net) writes:
You are sooooooo screwed up I have to use this in a book! I am looking
for anything you did right and I cannot find it.


He is nowhere near as screwed up as you are. At least Neil knows how
to behave properly and politely.
--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techinf...2000/books.asp

Oct 29 '05 #8

Yes, it can be done on the Access side, but, as noted, things are slowed
down. When I had a main SQL table (50,000 records) joined to a local Access
table (also 50,000 records, with an index and a boolean field), the form
took about 13 seconds to open (with the join being, of course, in the front
end).

With the new solution, the main SQL table still has 50,000 records, but, as
a sample, I populated the selections table with 50,000 x 25 records,
simulating 25 machines having records in the table. I created a view,
joining the two tables on the back end, and linked the view (with the
machine name as a field) to the front end. Using that 1,250,000 record view
with a parameter to only return the 50,000 records that match the machine
name, the form took about 17 seconds to open.

One of the reasons to move the selections table to the back end was for
speed in opening the form. But the resulting 1.25 mil record table is
slowing things down more than the smaller table in the front end did, even
with a heterogeneous join.

Thus, I'm left with no solution, unless I can use a smaller table, or if
there's something I haven't seen re. using the view in Access (which I don't
think there is).

Neil

"Erland Sommarskog" <es****@sommarskog.se> wrote in message
news:Xn*********************@127.0.0.1...
Neil (no****@nospam.net) writes:
OK, point well taken. I don't see any workaround, though, since I'm
using ODBC linked tables. I can't use a pass-through query and pass the
table name as a parameter, because pass-throughs return read-only sets.
And a view wouldn't be able to return just the records for that
machine.


As for how to sort out the linked tables, you will have to ask in an
Access newsgroup.
Here's an idea. What if there were one table, but a series of views, with
each view returning records for a particular unique ID. When the app is
opened, it's assigned an available ID and its link is set to the view
that
returns records for that ID. Still kind of a kludge, but perhaps a little
better.


A table-valued function with the machine name as parameter would be
better.
But without knowing Access, I'm quite sure that this can be solved by
adding the machine name on the Access side as well. After all, that is
also an RDBMS, and should be fitted for relational solutions.

--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techinf...2000/books.asp

Oct 30 '05 #9

Hi

I probably know less about Access than Erland, but you may want to look at
just what is happening on the SQL Server when you have everything on the
server. SQL Profiler may give you some information that we may be missing
about how each solution works. Also look at the query plans for the
executed SQL, and check that your indexes are being used and that you have
up-to-date statistics.

You may be able to simplify things with the HOST_NAME function in your view.
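
A minimal sketch of such a view (column and object names are assumptions; HOST_NAME() returns the host name of the connecting client):

```sql
-- Each client sees only its own rows, with no parameter to pass.
CREATE VIEW dbo.MySelections AS
    SELECT MainID, MachineName, Selected
    FROM   dbo.Selections
    WHERE  MachineName = HOST_NAME()
```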

John

"Neil" <no****@nospam.net> wrote in message
news:cZ*****************@newsread1.news.pas.earthlink.net...
Yes, it can be done on the Access side, but, as noted, things are slowed
down. When I had a main SQL table (50,000 records) joined to a local
Access table (also 50,000 records, with an index and a boolean field), the
form took about 13 seconds to open (with the join being, of course, in the
front end).

With the new solution, the main SQL table still has 50,000 records, but,
as a sample, I populated the selections table with 50,000 x 25 records,
simulating 25 machines having records in the table. I created a view,
joining the two tables on the back end, and linked the view (with the
machine name as a field) to the front end. Using that 1,250,000 record
view with a parameter to only return the 50,000 records that match the
machine name, the form took about 17 seconds to open.

One of the reasons to move the selections table to the back end was for
speed in opening the form. But the resulting 1.25 mil record table is
slowing things down more than the smaller table in the front end did, even
with a heterogeneous join.

Thus, I'm left with no solution, unless I can use a smaller table, or if
there's something I haven't seen re. using the view in Access (which I
don't think there is).

Neil

"Erland Sommarskog" <es****@sommarskog.se> wrote in message
news:Xn*********************@127.0.0.1...
Neil (no****@nospam.net) writes:
OK, point well taken. I don't see any workaround, though, since I'm
using ODBC linked tables. I can't use a pass-through query and pass the
table name as a parameter, because pass-throughs return read-only sets.
And a view wouldn't be able to return just the records for that
machine.


As for how to sort out the linked tables, you will have to ask in an
Access newsgroup.
Here's an idea. What if there were one table, but a series of views,
with
each view returning records for a particular unique ID. When the app is
opened, it's assigned an available ID and its link is set to the view
that
returns records for that ID. Still kind of a kludge, but perhaps a
little
better.


A table-valued function with the machine name as parameter would be
better.
But without knowing Access, I'm quite sure that this can be solved by
adding the machine name on the Access side as well. After all, that is
also an RDBMS, and should be fitted for relational solutions.

--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techinf...2000/books.asp


Oct 30 '05 #10

Neil (no****@nospam.net) writes:
Yes, it can be done on the Access side, but, as noted, things are slowed
down. When I had a main SQL table (50,000 records) joined to a local
Access table (also 50,000 records, with an index and a boolean field),
the form took about 13 seconds to open (with the join being, of course,
in the front end).

With the new solution, the main SQL table still has 50,000 records, but,
as a sample, I populated the selections table with 50,000 x 25 records,
simulating 25 machines having records in the table. I created a view,
joining the two tables on the back end, and linked the view (with the
machine name as a field) to the front end. Using that 1,250,000 record
view with a parameter to only return the 50,000 records that match the
machine name, the form took about 17 seconds to open.

One of the reasons to move the selections table to the back end was for
speed in opening the form. But the resulting 1.25 mil record table is
slowing things down more than the smaller table in the front end did, even
with a heterogeneous join.

Thus, I'm left with no solution, unless I can use a smaller table, or if
there's something I haven't seen re. using the view in Access (which I
don't think there is).


You will have to bear with me, since my knowledge of Access is so poor.

But if I understand this correctly, you have a local table in Access
with selections that typically has 50000 rows. And since each user
has his own Access instance, this means that today there are some 25-50
instances of this table.

13 seconds to open a form is indeed a long time, and many users would
say too long.

But moving this data to SQL Server may not be a very good idea at all.
Sending 50,000 rows over the wire is not done in a snap. On a local
network it may be decently fast, but if you have a user that works from
home, it will be a pain.

So I would suggest that you rather look into loading fewer rows
into the form initially, and then fetch more depending on what action the
user takes. I can't believe that the user is looking at all 50,000 at
a time.

If this data is only related to the user's selection, it's probably a
good idea to keep it local anyway. The only point I can see with moving
it to the server is that it could permit the user to get back his
selection if he moves to another machine.

--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techinf...2000/books.asp

Oct 30 '05 #11

On Sun, 30 Oct 2005 01:03:04 GMT, "Neil" <no****@nospam.net> wrote:
Yes, it can be done on the Access side, but, as noted, things are slowed
down. When I had a main SQL table (50,000 records) joined to a local Access
table (also 50,000 records, with an index and a boolean field), the form
took about 13 seconds to open (with the join being, of course, in the front
end).


You really need to read up on the performance issues of using Access as a
front-end to SQL Server. There are a few basic rules to know, (such as making
sure join fields are indexed, and every table has a primary key) but if you
follow them all, you should not have the performance issues you are
describing.

I routinely just build plain ol' Access queries that use criteria and join
multiple linked tables together. JET is smart enough to build prepared
statements and process the joins on the server side, so performance is pretty
good.

Oct 30 '05 #12

I don't think I did. But have fun.


"Neil" <no****@nospam.net> wrote in message
news:mg****************@newsread2.news.pas.earthlink.net...
You misread my post. I said 50-100 tables, not 50-100 rows.

"Aaron Bertrand [SQL Server MVP]" <te*****@dnartreb.noraa> wrote in
message news:ew**************@TK2MSFTNGP09.phx.gbl...
machine a unique set of selections. Using one large table with machine
name as part of the primary key actually slows things down. So I'm
considering
...

This would require having about 50-100 individual selections tables in
the back end database.


You have 50-100 rows in the table and you think that slows it down? Even
without a clustered index or any index at all, I find it hard to believe
that you can perceive any slowness whatsoever based on a scan of rows in
the three figure range. And to sacrifice manageability for that seems
absurd, at least to me.


Oct 30 '05 #13

You wrote: "You have 50-100 rows in the table and you think that slows it
down?"

I never wrote that I have 50-100 rows in the table. I said I was considering
50-100 tables.

"Aaron Bertrand [SQL Server MVP]" <te*****@dnartreb.noraa> wrote in message
news:uJ**************@TK2MSFTNGP12.phx.gbl...
I don't think I did. But have fun.


"Neil" <no****@nospam.net> wrote in message
news:mg****************@newsread2.news.pas.earthlink.net...
You misread my post. I said 50-100 tables, not 50-100 rows.

"Aaron Bertrand [SQL Server MVP]" <te*****@dnartreb.noraa> wrote in
message news:ew**************@TK2MSFTNGP09.phx.gbl...
machine a unique set of selections. Using one large table with machine
name as part of the primary key actually slows things down. So I'm
considering
...

This would require having about 50-100 individual selections tables in
the back end database.

You have 50-100 rows in the table and you think that slows it down?
Even without a clustered index or any index at all, I find it hard to
believe that you can perceive any slowness whatsoever based on a scan of
rows in the three figure range. And to sacrifice manageability for that
seems absurd, at least to me.



Oct 30 '05 #14


"Erland Sommarskog" <es****@sommarskog.se> wrote in message
news:Xn**********************@127.0.0.1...
> You will have to bear with me, since my knowledge of Access is so poor.
>
> But if I understand this correctly, you have a local table in Access
> with selections that typically has 50000 rows. And since each user
> has his own Access instance, this means that today there are some 25-50
> instances of this table.
Yes, each on its own machine.

> 13 seconds to open a form is indeed a long time, and many users would
> say too long.
>
> But moving this data to SQL Server may not be a very good idea at all.
> Sending 50,000 rows over the wire is not done in a snap. On a local
> network it may be decently fast, but if you have a user that works from
> home, it will be a pain.
Yes, the situation is over a T1 line. The LAN users don't have any
significant delays. But the WAN users are getting long load times.

> So I would suggest that you rather look into loading fewer rows
> into the form initially, and then fetch more depending on what action the
> user takes. I can't believe that the user is looking at all 50,000 at
> a time.
Well, they *work* with all 50,000, even if they don't use them. They like
being able to work in datasheet view (a spreadsheet-like representation of
data) and do sorting, filtering, editing, etc. In form view (the traditional
representation of data), I could give them one record at a time; but in
datasheet view they like to have all the records there.
> If this data is only related to the user's selection, it's probably a
> good idea to keep it local anyway. The only point I can see with moving
> it to the server is that it could permit the user to get back his
> selection if he moves to another machine.
Well, the idea was to eliminate the heterogeneous join. And, indeed, I have
seen a performance increase in other areas of the form with the trial
back-end selections table I put in place (sorting on a field, for example,
is much faster with the selections table in the back end). The problem,
though, is that in this one area, the initial opening of the form, it
actually slows things down. And that's a key area, since users need to be
able to open the form quickly when they need the data.

Neil



Oct 30 '05 #15

Thanks for the tips.

Regarding HOST_NAME, yes, that would be better than passing a parameter. I
found, though, that even with the host name selected on the back end, it was
still slow with that large selections table. Even with SQL selecting the
needed 50,000 records to return (from the 1.25 mil record view), it was
still slower than having a 50,000 record table on the front end and having a
heterogeneous join with the 50,000 record table on the back end.

Neil
"John Bell" <jb************@hotmail.com> wrote in message
news:43***********************@news.zen.co.uk...
Hi

I probably know less about access than Erland, but you may want to look at
just what is happening on the SQL Server when you have everything on the
server. SQL Profiler may give you some information that we may be missing
and how each solution works. Also look at the Query Plans for the executed
SQL and check your indexes are being used and that you have up-to-date
statistics.

You may be able to simplify things with the HOST_NAME function in your
view.

John

"Neil" <no****@nospam.net> wrote in message
news:cZ*****************@newsread1.news.pas.earthlink.net...
Yes, it can be done on the Access side, but, as noted, things are slowed
down. When I had a main SQL table (50,000 records) joined to a local
Access table (also 50,000 records, with an index and a boolean field),
the form took about 13 seconds to open (with the join being, of course,
in the front end).

With the new solution, the main SQL table still has 50,000 records, but,
as a sample, I populated the selections table with 50,000 x 25 records,
simulating 25 machines having records in the table. I created a view,
joining the two tables on the back end, and linked the view (with the
machine name as a field) to the front end. Using that 1,250,000 record
view with a parameter to only return the 50,000 records that match the
machine name, the form took about 17 seconds to open.

One of the reasons to move the selections table to the back end was for
speed in opening the form. But the resulting 1.25 mil record table is
slowing things down more than the smaller table in the front end did, even
with a heterogeneous join.

Thus, I'm left with no solution, unless I can use a smaller table, or if
there's something I haven't seen re. using the view in Access (which I
don't think there is).

Neil

"Erland Sommarskog" <es****@sommarskog.se> wrote in message
news:Xn*********************@127.0.0.1...
Neil (no****@nospam.net) writes:
OK, point well taken. I don't see any workaround, though, since I'm
using ODBC linked tables. I can't use a pass-through query and pass the
table name as a parameter, because pass-throughs return read-only sets.
And a view wouldn't be able to return just the records for that
machine.

As for how to sort out the linked tables, you will have to ask in an
Access newsgroup.

Here's an idea. What if there were one table, but a series of views,
with
each view returning records for a particular unique ID. When the app is
opened, it's assigned an available ID and its link is set to the view
that
returns records for that ID. Still kind of a kludge, but perhaps a
little
better.

A table-valued function with the machine name as parameter would be
better.
But without knowing Access, I'm quite sure that this can be solved by
adding the machine name on the Access side as well. After all, that is
also an RDBMS, and should be fitted for relational solutions.

--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techinf...2000/books.asp



Oct 30 '05 #16

Both tables have PKs, and the join is on the PKs (the main table has a
single-field PK, and the selections table has a two-field PK -- the
corresponding ID value from the main table combined with the machine name).
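
That structure, sketched in T-SQL (table and column names are illustrative):

```sql
-- Shared selections table with the composite PK described above.
CREATE TABLE dbo.Selections (
    MainID      int           NOT NULL,  -- matches the main table's PK
    MachineName nvarchar(128) NOT NULL,  -- client workstation name
    Selected    bit           NOT NULL DEFAULT 0,
    PRIMARY KEY (MainID, MachineName)
)
```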

The 13 second form open time (with the 50k record selections table in the
front end joined heterogeneously with the 50k record main table in the back
end) is over a WAN. The LAN users get much better performance.

I tried a solution with *no* selection table (tracking selections in code,
since they don't need to remain past the current session), and it cut it
down to about 6 seconds. But, unfortunately, I ran into another issue with
that (currently discussing that in comp.databases.ms-access, "Trapping Click
With Calculated Check Box" thread). So if I can remove the selections table
altogether, that would be better. But, if I need to keep it, as noted
elsewhere in this thread, overall I get better performance with the huge
selections table (for all users) in the back end, rather than a small
selections table (for single user) in the front end. But the load time is
longer, and the load time is the thing that users have been complaining
about the most.

Thus, it would be great if I could bring down the load time while still
having the selections table in the back end. But I don't see what else I can
do to improve performance.

Thanks,

Neil

"Steve Jorgensen" <no****@nospam.nospam> wrote in message
news:sc********************************@4ax.com...
On Sun, 30 Oct 2005 01:03:04 GMT, "Neil" <no****@nospam.net> wrote:
Yes, it can be done on the Access side, but, as noted, things are slowed
down. When I had a main SQL table (50,000 records) joined to a local
Access
table (also 50,000 records, with an index and a boolean field), the form
took about 13 seconds to open (with the join being, of course, in the
front
end).


You really need to read up on the performance issues of using Access as a
front-end to SQL Server. There are a few basic rules to know, (such as
making
sure join fields are indexed, and every table has a primary key) but if
you
follow them all, you should not have the performance issues you are
describing.

I routinely just build plain ol' Access queries that use criteria and join
multiple linked tables together. JET is smart enough to build prepared
statements and process the joins on the server side, so performance is
pretty good.

Oct 30 '05 #17

> You may be able to simplify things with the HOST_NAME function in your
> view.


Had strange results with using HOST_NAME. I added "=HOST_NAME()" to the
criteria column of the view's MachineName field. The view worked fine.
Similarly, if I open the view using a pass-through query in the front end,
it opens fine. But if I link the view to the front end, it opens with
"#Deleted" showing in each field (that's what Access displays when records
in a recordset have been deleted, but the recordset hasn't been requeried).

If, on the other hand, I hard-code the machine name into the view, then it
opens fine when linked to the front end. But something about the HOST_NAME
function is causing it to not be able to get the records (or it gets the
records and then loses them).

Neil
Oct 30 '05 #18

Neil (no****@nospam.net) writes:
Had strange results with using HOST_NAME. I added "=HOST_NAME()" to the
criteria column of the view's MachineName field. The view worked fine.
Similarly, if I open the view using a pass-through query in the front
end, it opens fine. But if I link the view to the front end, it opens
with "#Deleted" showing in each field (that's what Access displays when
records in a recordset have been deleted, but the recordset hasn't been
requeried).

If, on the other hand, I hard-code the machine name into the view, then
it opens fine when linked to the front end. So something about the
HOST_NAME function is causing it to not be able to get the records
(or it gets the records and then loses them).


Not sure I understand this (I still haven't learned Access :-), but there
is a gotcha here: which datatype is MachineName? It should be nvarchar
to avoid implicit conversions that preclude the use of indexes. Not that I
can see how this explains the behaviour you see above.
--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techinf...2000/books.asp

Oct 30 '05 #19

On Sun, 30 Oct 2005 16:59:45 GMT, "Neil" <no****@nospam.net> wrote:
Both tables have PKs, and the join is on the PKs (the main table has a
single-field PK, and the selections table has a two-field PK -- the corresponding
ID value from the main table combined with the machine name).

The 13 second form open time (with the 50k record selections table in the
front end joined heterogeneously with the 50k record main table in the back
end) is over a WAN. The LAN users get much better performance.


How many records are being returned to the form, and what kind of recordset
are you using? Are any of your links to a view? If linking to a view, did you
select a logical primary key when you created the link?

Oct 30 '05 #20


"Steve Jorgensen" <no****@nospam.nospam> wrote in message
news:um********************************@4ax.com...
On Sun, 30 Oct 2005 16:59:45 GMT, "Neil" <no****@nospam.net> wrote:
Both tables have PKs, and the join is on the PKs (the main table has a
single-field PK, and the selections table has a two-field PK -- the
corresponding
ID value from the main table combined with the machine name).

The 13 second form open time (with the 50k record selections table in the
front end joined heterogeneously with the 50k record main table in the
back
end) is over a WAN. The LAN users get much better performance.


How many records are being returned to the form, and what kind of
recordset
are you using? Are any of your links to a view? If linking to a view, did
you
select a logical primary key when you created the link?


50,000; dynaset; yes, the two tables are combined in the back end using a
view; yes.

But regarding the slowness issue, I think it's resolved. See my other post
in this thread. Thanks.

Neil
Oct 30 '05 #21

Erland and everyone else who has been helping me here with this.

First, I want to thank you for steering me in the right direction. My
original idea about creating a series of tables was somewhat hare-brained,
but I was feeling desperate at the time and was willing to try anything.

Moving the table to the back end and making it a true relational table was
the right way to go. I noted that using this approach (with a resulting 1.25
mil record view) was actually slower than the original configuration with
the heterogeneous join. However, I opened the view, moved to the last
record, and now it's fast! So it seems that the indexes hadn't been filled
(or something like that). The form now opens in about 7 seconds (down from
13), which should be OK.
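Concretely, the back-end selections table described above is shaped something like this (a sketch; the names are placeholders for the real schema):

```sql
-- Sketch with placeholder names: one row per (record, machine) pair,
-- with the composite primary key the join depends on.
CREATE TABLE dbo.Selections (
    RecordID    int         NOT NULL,  -- matches the main table's PK
    MachineName varchar(32) NOT NULL,  -- client machine name
    Selected    bit         NOT NULL DEFAULT 0,
    CONSTRAINT PK_Selections PRIMARY KEY (RecordID, MachineName)
)
```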

So I'm wondering if there is some command that one can use to accomplish
what I accomplished by moving to the last record.
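My guess -- and it is only a guess -- is that the usual statistics/index maintenance commands would have the same effect; a sketch, with a placeholder table name:

```sql
-- Sketch: refresh the optimizer's statistics and rebuild the indexes
-- on the selections table (dbo.Selections is a placeholder name).
UPDATE STATISTICS dbo.Selections
DBCC DBREINDEX ('dbo.Selections')
```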

Also, the other issue I recently noted regarding the HOST_NAME function is
still outstanding, and I should try to resolve it, as I've seen better
performance when using that than when passing the host name from the front
end. I started a new thread to discuss that, entitled, "Problem with
HOST_NAME Function with Linked View." If any of you would care to contribute
there, that would be great.

Thanks again to everyone for your help! It's much appreciated!!!

Neil
"Erland Sommarskog" <es****@sommarskog.se> wrote in message
news:Xn**********************@127.0.0.1...
Neil (no****@nospam.net) writes:
Yes, it can be done on the Access side, but, as noted, things are slowed
down. When I had a main SQL table (50,000 records) joined to a local
Access table (also 50,000 records, with an index and a boolean field),
the form took about 13 seconds to open (with the join being, of course,
in the front end).

With the new solution, the main SQL table still has 50,000 records, but,
as a sample, I populated the selections table with 50,000 x 25 records,
simulating 25 machines having records in the table. I created a view,
joining the two tables on the back end, and linked the view (with the
machine name as a field) to the front end. Using that 1,250,000 record
view with a parameter to only return the 50,000 records that match the
machine name, the form took about 17 seconds to open.

One of the reasons to move the selections table to the back end was for
speed in opening the form. But the resulting 1.25 mil record view is
slowing things down more than the smaller table in the front end did, even
with a heterogeneous join.

Thus, I'm left with no solution, unless I can use a smaller table, or if
there's something I haven't seen re. using the view in Access (which I
don't think there is).


You will have to bear with me, since my knowledge of Access is so poor.

But if I understand this correctly, you have a local table in Access
with selections that typically has 50000 rows. And since each user
has his own Access instance, this means that today there are some 25-50
instances of this table.

13 seconds to open a form is indeed a long time, and many users would
say too long.

But moving this data to SQL Server may not be a very good idea at all.
Sending
50,000 rows over the wire is not done in a snap. On a local network it may be
decently fast, but if you have a user who works from home, it will be
a pain.

So I would suggest that you should rather look into to load fewer rows
into the form initially, and then fetch depending on what action the
user takes. I can't believe that the user is looking at all 50000 at
a time.

If this data is only related to the user's selection, it's probably a
good idea to keep it local anyway. The only point I can see in moving
it to the server is that it could permit the user to get back his
selection if he moves to another machine.

--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techinf...2000/books.asp

Oct 30 '05 #22

I'm using varchar. Don't think that should make a difference, especially
since the view seems to work fine otherwise. But, anyway, I'll try nvarchar
and will let you know if it makes a difference.

If you reply to this, please reply in the "Problem with HOST_NAME Function
with Linked View" as noted in my other message. Thanks! And thanks again for
your help!

Neil
"Erland Sommarskog" <es****@sommarskog.se> wrote in message
news:Xn**********************@127.0.0.1...
Neil (no****@nospam.net) writes:
Had strange results with using HOST_NAME. I added "=HOST_NAME()" to the
criteria column of the view's MachineName field. The view worked fine.
Similarly, if I open the view using a pass-through query in the front
end, it opens fine. But if I link the view to the front end, it opens
with "#Deleted" showing in each field (that's what Access displays when
records in a recordset have been deleted, but the recordset hasn't been
requeried).

If, on the other hand, I hard-code the machine name into the view, then
it opens fine when linked to the front end. So something about the
HOST_NAME function is causing it to not be able to get the records
(or it gets the records and then loses them).


Not sure I understand this (I still haven't learned Access :-), but there
is a gotcha here: which datatype is MachineName? It should be nvarchar
to avoid implicit conversions that preclude the use of indexes. Not that I
can see how this explains the behaviour you see above.
--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techinf...2000/books.asp

Oct 30 '05 #23

On Sun, 30 Oct 2005 20:33:30 GMT, "Neil" <no****@nospam.net> wrote:

"Steve Jorgensen" <no****@nospam.nospam> wrote in message
news:um********************************@4ax.com...
On Sun, 30 Oct 2005 16:59:45 GMT, "Neil" <no****@nospam.net> wrote:
Both tables have PKs, and the join is on the PKs (the main table has a
single-field PK, and the selections table has a two-field PK -- the
corresponding
ID value from the main table combined with the machine name).

The 13 second form open time (with the 50k record selections table in the
front end joined heterogeneously with the 50k record main table in the
back
end) is over a WAN. The LAN users get much better performance.
How many records are being returned to the form, and what kind of
recordset
are you using? Are any of your links to a view? If linking to a view, did
you
select a logical primary key when you created the link?


50,000; dynaset; yes, the two tables are combined in the back end using a
view; yes.


You're returning 50,000 rows to a form! No - that's not how client/server
works. You give users an interface that lets them pick a reasonable subset
or roll-up of the data they want to see, then navigate or drill down from
there.

The point of SQL Server is that searching and filtering happens on the server
side and small result sets are returned to the client as requested.
But regarding the slowness issue, I think it's resolved. See my other post
in this thread. Thanks.


Solved? Perhaps, with a single user at a time and no competing traffic on
the network. A design that returns 50,000 rows to a client in one result set
is fundamentally flawed.
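The shape of that approach, sketched with hypothetical names: a parameterized procedure (called from a pass-through query) so that only the requested slice crosses the wire.

```sql
-- Sketch (hypothetical names): the client asks for one customer's
-- orders; the server filters and returns only that small result set.
CREATE PROCEDURE dbo.GetOrdersForCustomer
    @CustomerID int
AS
SELECT OrderID, OrderDate, Total
FROM dbo.Orders
WHERE CustomerID = @CustomerID
```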
Oct 30 '05 #24

Neil (no****@nospam.net) writes:
Moving the table to the back end and making it a true relational table
was the right way to go. I noted that using this approach (with a
resulting 1.25 mil record view) was actually slower than the original
configuration with the heterogeneous join. However, I opened the view,
moved to the last record, and now it's fast! So it seems that the
indexes hadn't been filled (or something like that). The form now opens
in about 7 seconds (down from 13), which should be OK.

So I'm wondering if there is some command that one can use to accomplish
what I accomplished by moving to the last record.


Well, I still don't know Access, so I still have problems understanding
this.

There is no last record in a view. A view, just like a table, is unordered.
But if you moved to what is the last record in what Access presents to
you, it is likely that all rows in the view went into the cache on SQL
Server. (Or a cache on the Access side, if there is one.) One way to
force the view into the cache is to run a query like "SELECT COUNT(DISTINCT
col) FROM view", making sure to pick a column without an index. Then again,
as users access the data, it will make it into the cache. Pages that are not
referenced will be removed from the cache if the cache fills up. All that is
beyond your control. (OK, so there is PINTABLE, but don't use it.)

Since I don't know your application, nor do I know Access, I cannot really
suggest alternatives to the application design, but it just does not sound
right to me to fetch 50,000 rows before the users can start working.
--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techinf...2000/books.asp

Oct 30 '05 #25


Erland Sommarskog wrote:
Neil (no****@nospam.net) writes: <<>>
So I'm wondering if there is some command that one can use to accomplish
what I accomplished by moving to the last record.


Well, I still don't know Access, so I have still problems understanding
this.

There is no last record in a view. A view, just like a table is unordered.
But if you moved to what is the last record in what Access presents to

<<>> Since I don't know your application, nor do I know Access, I cannot really
suggest alternatives to the application design, but it just does not sound
right to me to fetch 50,000 rows before the users can start working.
Indeed.

I do know Access pretty well. You could change the ODBC settings and
perhaps improve performance a bit. That's just papering over the
underlying problem though, IMO.

I've seen users who are experienced with Excel presented with Access
solutions that are direct conversions. These were a right mess. In every
case, working with users to analyse how they used the data resulted in
more practical system designs.
It seems likely that a similar redesign could be done in this instance.
Nobody really just looks through 50,000 records. By the time you'd paged
through them all, you'd need an eidetic memory to understand anything
across 50,000 records. What users will do is look at the top 20 by
price or sales, the bottom 20 by sales, the longest outstanding invoices,
and stuff like that.
Providing user-selected sort criteria and top/bottom-n selection
criteria is often the simplest approach.
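For instance (hypothetical names), a top-n query like this hands the user 20 rows instead of 50,000:

```sql
-- Sketch: longest-outstanding unpaid invoices, filtered server-side.
SELECT TOP 20 InvoiceID, CustomerName, AmountDue, DueDate
FROM dbo.Invoices
WHERE Paid = 0
ORDER BY DueDate ASC
```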



--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techinf...2000/books.asp


Oct 31 '05 #26

Just to add to the other responses saying some of the same: you have
the wrong approach. The functionality to filter, sort, etc. should go in
the database, not at the client. There is probably no point in using SQL
Server at all if you just want to return the entire set of data and
manipulate it client-side. I too have seen projects that failed
entirely because of such a totally naive approach to client-server
design.

--
David Portas
SQL Server MVP
--

Oct 31 '05 #27

Thanks for your input. There are some issues with how users are currently
using some of the built-in Access tools that make it difficult to implement
some of these ideas, though I agree they would be good. I'll respond to your
comments and the similar ones by others shortly, when I have some time to
explain the situation more.

In the meantime, I was wondering what ODBC settings you've had success
with in the past that improved performance. That would be helpful.

Thanks,

Neil

"andy" <ao***@lycos.co.uk> wrote in message
news:11**********************@g47g2000cwa.googlegroups.com...

Erland Sommarskog wrote:
Neil (no****@nospam.net) writes:

<<>>
> So I'm wondering if there is some command that one can use to
> accomplish
> what I accomplished by moving to the last record.


Well, I still don't know Access, so I have still problems understanding
this.

There is no last record in a view. A view, just like a table is
unordered.
But if you moved to what is the last record in what Access presents to

<<>>
Since I don't know your application, nor do I know Access, I cannot
really
suggest alternatives to the application design, but it just does not
sound
right to me to get 50000 rows, before the users can start working.


Indeed.

I do know Access pretty well. You could change the ODBC settings and
perhaps improve performance a bit. That's just papering over the
underlying problem though, IMO.

I've seen users who are experienced with Excel presented with Access
solutions that are direct conversions. These were a right mess. In every
case, working with users to analyse how they used the data resulted in
more practical system designs.
It seems likely that a similar redesign could be done in this instance.
Nobody really just looks through 50,000 records. By the time you'd paged
through them all, you'd need an eidetic memory to understand anything
across 50,000 records. What users will do is look at the top 20 by
price or sales, the bottom 20 by sales, the longest outstanding invoices,
and stuff like that.
Providing user-selected sort criteria and top/bottom-n selection
criteria is often the simplest approach.



--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techinf...2000/books.asp

Oct 31 '05 #28

>> At least Neil knows how to
.. <<

What was I thinking?? Of course 2 + 2 = 5 if you are formally polite.

Yes, I am angry. I volunteer for a lot of non-profits. It is sooooo
nice that if you behave properly and politely, the idiots will not kill
people. Where did a sense of professionalism get overridden by
behaving properly and politely -- whatever that means?

Nov 1 '05 #29

--CELKO-- (jc*******@earthlink.net) writes:
At least Neil knows how to

. <<

What was I thinking?? Of course 2 + 2 = 5 if you are formally polite.

Yes, I am angry. I volunteer for a lot of non-profits. It is sooooo
nice that if you behave properly and politely, the idiots will not kill
people. Where did a sense of professionalism get overridden by
behaving properly and politely -- whatever that means?


Talking about going back to basics: before you start to tell people to
learn RDBMS and all that, how about learning grammar, how to spell, and
how to punctuate? :-)

Anyway, behaving properly and politely is certainly very much a part of
being professional. Again, let's get back to basics before you start
talking about RDBMS...
--
Erland Sommarskog, SQL Server MVP, es****@sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techinf...2000/books.asp

Nov 1 '05 #30

With complex Access 2 apps I used to increase the buffer size. As you're
pulling a shed load of data at a time, that could well speed things up a
bit. I've never tried it, but maybe increasing threads would also help.
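For what it's worth, a sketch of the buffer-size tweak from code (the value is illustrative, not a recommendation):

```vba
' Sketch: raise Jet's cache size for the current session via DAO's
' SetOption. This is a session-wide override of the registry setting,
' so it also affects work done through ODBC linked tables.
Sub TuneJetSession()
    DBEngine.SetOption dbMaxBufferSize, 8192   ' size in KB; illustrative
End Sub
```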

I would think replacing the way the users are using built-in Access
tools with functionality in a front end designed to reduce the amount
of data pulled is your best bet. This can of course be tricky, as you'd
have to justify the work and re-educate people.

The way it works at the moment sounds like you may as well just dump
the database out of SQL Server to an Access copy overnight; then the
user just copies this across and works off their C: drive.

Nov 1 '05 #31


"andy" <ao***@lycos.co.uk> wrote in message
news:11**********************@g49g2000cwa.googlegroups.com...
With complex Access 2 apps I used to increase the buffer size. As you're
pulling a shed load of data at a time, that could well speed things up a
bit. I've never tried it, but maybe increasing threads would also help.
Sounds like it's worth a try. But there's no way to change it with linked
tables, only with recordsets in code, right?

I would think replacing the way the users are using built-in Access
tools with functionality in a front end designed to reduce the amount
of data pulled is your best bet. This can of course be tricky, as you'd
have to justify the work and re-educate people.
I've been throwing around some ideas in my head, and I think we might be
able to do something like that. With enough explanation of how it would
work, I might be able to get the users on board, especially when they
see the performance gains.

The way it works at the moment sounds like you may as well just dump
the database out of SQL Server to an Access copy overnight; then the
user just copies this across and works off their C: drive.


No can do. Users need to be able to edit as well as view data, and see
changes in real time.

Plus, the database is more than just this one form that brings up all
records in the recordset. There are many reports, functions, and so forth,
all of which use back-end objects or pass-throughs. It's just in this one
area (and primarily because of the way they use Datasheet View) that things
are less than optimum.

Neil
Nov 2 '05 #32
