
How to insert into joined tables

Expert 5K+
P: 8,434
Hi all.

I'm almost embarrassed to ask this one, but in fact most of my dabbling in Access has been at a fairly simple level, via the GUI. Now I need to do something slightly deeper, and don't know where to start looking. As most of you would be aware by now, I'm using Access 2003 to work with some fairly large archive databases - each is basically a log of events which happened over an entire year. Once the year has finished, that year's data should never be changed.

Anyway, to save space in these rather large databases, I want to extract all the unique values from a somewhat repetitive 20-byte description field and store them in a separate table (one central one, not one per DB) with an autonumber PK. I'll then replace the description field on each of the big databases with an indexed long integer field whose values come from the new table. Any access to these tables in future will be through a query which joins them with the description table.
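
Roughly speaking, the structure I have in mind looks something like this in Jet DDL terms (just a sketch, not the exact statements I'd run - the table and field names match the query further down, and the constraint and index names are made up):

CREATE TABLE Descriptions (
    ID AUTOINCREMENT CONSTRAINT pkDescriptions PRIMARY KEY,
    Description TEXT(20),
    CONSTRAINT uqDescription UNIQUE (Description)
);

and on each archive database's log table:

ALTER TABLE Log ADD COLUMN DescID LONG;
CREATE INDEX idxLogDescID ON Log (DescID);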

So far, so good. I have done this on a trial basis once or twice in the past, and everything worked fine. If I remember correctly the performance did suffer, but only slightly. Performance is not my concern at this point.

What I want to know is, given my query which joins an archive table and the new descriptions table to fill in the textual description, how would I go about inserting new descriptions? I mean, can I insert into the joined query (I'm probably expressing this poorly) and have the new descriptions automagically appear in the descriptions table, or will I need to add the description first, then insert the record into the other table with the matching ID?

In fact I've just set up another trial database with the kind of setup I'm talking about. Here is the join which returns the "reconstituted" log record.
SELECT Log.Field1, Log.Field2, Descriptions.Description, Log.Field3,
       Log.Field4, ... Log.Field8
FROM Log INNER JOIN Descriptions ON Log.DescID = Descriptions.ID;
This produces exactly what I want in terms of output, but how do I go about inserting new records?

Oh, one more thing - is there any way to return the records in the original sequence? This query, and every variation I have tried, always returns them sorted by the description. In fact, I expect it's more likely sorted by the ID field which joins them, but it amounts to about the same thing since they were loaded to the new table in sorted sequence.

(I was tempted to do the same with another very repetitive text field, but given that it is 8 characters long and usually at least 7 characters populated, I doubt I'd save anything.)
Nov 13 '06 #1
21 Replies


100+
P: 1,646
Hi K.
The way that I have done this before is to consider the two tables as separate for updating. Have the log table open at the new record. Open the description table and add the description. Get the id number of the last record and add this to the log table.
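
In SQL terms, the same two-step idea would be something along these lines (a rough sketch only - the table and field names are the ones from your query, the literal values are just placeholders, and it looks the ID up by the text rather than by "last record", which amounts to the same thing here):

INSERT INTO Descriptions (Description)
VALUES ('new description text');

INSERT INTO Log (Field1, Field2, DescID)
SELECT 'value 1', 'value 2', Descriptions.ID
FROM Descriptions
WHERE Descriptions.Description = 'new description text';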
Nov 13 '06 #2

Expert 5K+
P: 8,434
The way that I have done this before is to consider the two tables as separate for updating. Have the log table open at the new record. Open the description table and add the description. Get the id number of the last record and add this to the log table.
I'm really hoping to avoid that sort of processing. The data is loaded via a simple import of a daily text file, and I'd like to avoid any change to this if possible.

I have to ensure that the entire thing can be re-generated from scratch if necessary, so if I change the way the data is loaded, I will be required to go back and re-extract and re-process all the years' worth of data from the beginning of recorded history (a few years ago). Not that I haven't had to do this once or twice, but it is rather a pain.
Nov 13 '06 #3

100+
P: 1,646
I'm really hoping to avoid that sort of processing. The data is loaded via a simple import of a daily text file, and I'd like to avoid any change to this if possible.

I have to ensure that the entire thing can be re-generated from scratch if necessary, so if I change the way the data is loaded, I will be required to go back and re-extract and re-process all the years' worth of data from the beginning of recorded history (a few years ago). Not that I haven't had to do this once or twice, but it is rather a pain.
Sorry, I don't understand. I know you are not loading the data this way now, since you are asking how, so you are not tied to the way you did it before. Do you mean that you need to be able to reconstitute the text files? If so, this process will not affect that. As long as the data is stored you can recall it in any way you like. The data does not need to be loaded differently either, just recorded in this particular way.

BTW you can only retrieve the archived descriptions if you redo the entire thing and store them in the order in which you wish to retrieve them.
Nov 13 '06 #4

MMcCarthy
Expert Mod 10K+
P: 14,534

Anyway, to save space in these rather large databases, I want to extract all the unique values from a somewhat repetitive 20-byte description field and store them in a separate table (one central one, not one per DB) with an autonumber PK.
OK, I think I know what you mean. Create a new table structure with the following fields.

tblStore
StoreID (PK autonumber)
Description (Text Field)

Then run an

"INSERT INTO tblStore (Description) SELECT DISTINCT Description FROM UnionQueryName;"

query to put all the data into this new table.

Before changing this data as per the next part, you will have to update the original tables with the primary key of this new table, based on a relationship using the description. This will probably take a while because it's a text field. You won't be able to use the union query, as it is not updateable, so you will have to do each table individually.
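
Something like this for each log table (a sketch only - I'm assuming the old text column on the log table is called Description and that you've added a new long integer column called DescID to hold the key):

UPDATE Log INNER JOIN tblStore
    ON Log.Description = tblStore.Description
SET Log.DescID = tblStore.StoreID;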


I'll then replace the description field on each of the big databases with an indexed long integer field whose values come from the new table. Any access to these tables in future will be through a query which joins them with the description table.
You can now go ahead and change description to whatever you want as the hard work is done.

What I want to know is, given my query which joins an archive table and the new descriptions table to fill in the textual description, how would I go about inserting new descriptions? I mean, can I insert into the joined query (I'm probably expressing this poorly) and have the new descriptions automagically appear in the descriptions table, or will I need to add the description first, then insert the record into the other table with the matching ID?
You will have to have the new tblStore table available as a lookup to all the databases; you can link it. In each database you will have to create a fully cascading relationship between the PK of the new table and the old lookup description field, which will now have a number datatype.

Oh, one more thing - is there any way to return the records in the original sequence? This query, and every variation I have tried, always returns them sorted by the description. In fact, I expect it's more likely sorted by the ID field which joins them, but it amounts to about the same thing since they were loaded to the new table in sorted sequence.
If you append them in the correct order then you can order by the primary key when returning them.
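
For example, something like this (assuming the Log table's own primary key is a field called LogID - substitute whatever the real key field is):

SELECT Log.Field1, Log.Field2, Descriptions.Description, Log.Field3
FROM Log INNER JOIN Descriptions ON Log.DescID = Descriptions.ID
ORDER BY Log.LogID;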
Nov 13 '06 #5

Expert 5K+
P: 8,434
Sorry, I don't understand. I know you are not loading the data this way now, since you are asking how, so you are not tied to the way you did it before. Do you mean that you need to be able to reconstitute the text files? If so, this process will not affect that. As long as the data is stored you can recall it in any way you like. The data does not need to be loaded differently either, just recorded in this particular way.
What I mean is that there is a requirement that the entire process, from original source to final database, must be completely reproducible. In other words, someone should be able to walk in tomorrow, take the original source files from which the data was extracted, run them through my processing, and end up with the final database(s) ready to run searches etc.

To ensure that this happens, and to ensure there is no possible difference between the multiple databases (apart from the data, obviously:)), any change to the process requires that it be performed again from the very beginning. (I may need to do that anyway, if/when I get the splitting properly done.)

BTW you can only retrieve the archived descriptions if you redo the entire thing and store them in the order in which you wish to retrieve them.
I'm not sure what you mean by this. I have already extracted the descriptions (from a smallish sample, of course) to the new table, added the ID from that table into the original table, removed the text field from the original table, and used a query (the one posted) to join them together. This successfully pulls out the whole original record, with the description text coming from the separate descriptions table. It works, that's not the issue. I just don't know how to go about adding new data without having to do a separate store on the descriptions table first. And how to stop Access sorting by the descriptions (or the ID from the description table, anyway).

I feel reasonably certain that it should be possible, just don't know how. When I have time, I'll have a fiddle and see whether I can work it out.
Nov 13 '06 #6

Expert 5K+
P: 8,434
OK, I think I know what you mean. Create a new table structure with the following fields.
...
You've misunderstood me here.

I have already done all of this. It works. I now have (in a small sample setup, as the usual millions of records make things too slow) the descriptions in the new table, which just has ID (autonumber PK) and the text field (unique key). This is joined via query to a corresponding DescID (long int key with duplicates) on the main table.

It all works fine. The issues I have are:
  • Results always sorted by the <bleep> description (sorry, it really annoys me).
  • How do I import new data to the main table, now that the description field is on a separate table?

You will have to have the new tblStore table available as a lookup to all the databases; you can link it. In each database you will have to create a fully cascading relationship between the PK of the new table and the old lookup description field, which will now have a number datatype.
Will this allow me to do STOREs?

If you append them in the correct order then you can order by the primary key when returning them.
Hm... the records are already stored in the order of the primary key on the main table, or very close to it. I was hoping to avoid an ORDER BY clause, but it looks as though I may need one. Or is it just a matter of getting the JOIN correct?
Nov 13 '06 #7

Expert 5K+
P: 8,434
... and the text field (unique key). This is joined via query to a corresponding DescID (long int key with duplicates) on the main table.
The way I worded this may seem to imply that the text field is JOINed to the new numeric field. This is, of course, not the case - DescID on the main table is JOINed (inner, I think) to the ID on the new table.
Nov 13 '06 #8

MMcCarthy
Expert Mod 10K+
P: 14,534

Will this allow me to do STOREs?
Not sure what you mean, but your new table has effectively become a lookup table and can be added to just like any other lookup table. Depending on how the data is entered, you can run an insert query in code for values not in the list. Remember, it's only linked into the databases from your common source database, so any records added would be common to all the databases.

Hm... the records are already stored in the order of the primary key on the main table, or very close to it. I was hoping to avoid an ORDER BY clause, but it looks as though I may need one. Or is it just a matter of getting the JOIN correct?
Unfortunately, if you don't impose an ORDER BY clause then Access will decide on its own, not always according to any known logic, how to order the records.
Nov 13 '06 #9

100+
P: 1,646
I can't imagine why you would avoid an ORDER BY clause unless you have decided that there is some unacceptable performance penalty. The clause will apply to the main table and not the lookup so you can reproduce the original order.

You are going to change the database design, moving some data to a new table. This will not be reproducible once you complete the process. Done and dusted. The original text files can, of course, be reconstructed at any time.

The approach I suggested originally still stands. It is not complex and anyone familiar with Access should easily pick it up when you are nothing but a faint memory hovering by the coffee machine :)
Nov 13 '06 #10

Expert 5K+
P: 8,434
I can't imagine why you would avoid an ORDER BY clause unless you have decided that there is some unacceptable performance penalty. The clause will apply to the main table and not the lookup so you can reproduce the original order.
I have found that using ORDER BY produced huge performance penalties at times. Don't remember the circumstances, though.

You are going to change the database design, moving some data to a new table. This will not be reproducible once you complete the process. Done and dusted. The original text files can, of course, be reconstructed at any time.
This is what prompted my original question - I think I've worked out a simpler way to word it now. How can I have new values inserted into a lookup table as I import records?

The approach I suggested originally still stands. It is not complex and anyone familiar with Access should easily pick it up when you are nothing but a faint memory hovering by the coffee machine :)
Fair enough. But the main idea is for the loading to be via a single import. If I have to go to more complex coding in VBA and so on, it probably won't be worth the effort.
Nov 13 '06 #11

MMcCarthy
Expert Mod 10K+
P: 14,534
I can't remember what order you are doing things in but, presuming your union query comes first, something along these lines should work.

INSERT INTO NewTableName (Description)
SELECT DISTINCT Description FROM UnionQueryName
WHERE Description NOT IN (SELECT Description FROM NewTableName);
Nov 13 '06 #12

NeoPa
Expert Mod 15k+
P: 31,494
Killer42,

Here are some recommendations - some may seem quite strange and unorthodox - but I did consider your particular situation when I came up with these.

1. Instead of one central table to store the descriptions, have a (Descriptions) table per archive database.
When working on an individual db this will work entirely within the db as before. When working across dbs you would UNION the results of the pre-processed data as before, only now the data would be the (probably sorted) results of your query, rather than just your (Log) table.
If this is not practical for any reason then I suppose you would want to use a (Descriptions) table in an entirely new database. Avoid linking in the description data after the UNION though as, to link recordsets together, it will want them in a similar order. This will cause very severe delays in your situation.

2. It is not possible to import data from one source to two separate locations.
You will have to do something along the lines of :
A) Import new data to temporary table (Doesn't need to be deleted and recreated but this is one option. The other is to use a table which is already defined but you clear down the contents every time before use.)
B) Before adding the data to your Log table, make sure you add any new descriptions to the Descriptions table.
C) Link the temporary table to your Descriptions table with an INNER JOIN and append the records to your Log table.

Although you may choose to use VBA to link these queries together, that is all that should be required.
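
In query terms, steps B) and C) would be something along these lines (a sketch only - "TempLog" stands for whatever you call the temporary import table, the field names are taken from your earlier query, and the remaining fields go through in the same way):

Step B - add any descriptions not already in the lookup table:

INSERT INTO Descriptions (Description)
SELECT DISTINCT TempLog.Description
FROM TempLog
WHERE TempLog.Description NOT IN (SELECT Description FROM Descriptions);

Step C - append the log records, with the description text swapped for its ID:

INSERT INTO Log (Field1, Field2, DescID, Field3)
SELECT TempLog.Field1, TempLog.Field2, Descriptions.ID, TempLog.Field3
FROM TempLog INNER JOIN Descriptions
    ON TempLog.Description = Descriptions.Description;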

I don't see any way to avoid sorting the data, either while appending the new, or while extracting existing.
A way of minimising the delay caused though, is to ensure where possible (length of a piece of string) that the data as stored is as closely ordered as possible to that which you require.

Good luck - I know how straightforward this isn't.
Nov 13 '06 #13

Expert 5K+
P: 8,434
...some may seem quite strange and unorthodox
I like it already. :)

1. Instead of one central table to store the descriptions, have a (Descriptions) table per archive database.
When working on an individual db this will work entirely within the db as before. When working across dbs you would UNION the results of the pre-processed data as before, only now the data would be the (probably sorted) results of your query, rather than just your (Log) table.
If this is not practical for any reason then I suppose you would want to use a (Descriptions) table in an entirely new database. Avoid linking in the description data after the UNION though as, to link recordsets together, it will want them in a similar order. This will cause very severe delays in your situation.
I have already done this at a single-database level in the past, and was quite pleased with the results. In this case, I want to save maximum space by placing the lookup table in a separate database to reduce redundancy.

As far as I'm concerned, UNION is not an issue in this case. The UNION stuff is to do with sticking together multiple years' logs. This is a separate issue.

I certainly (correction, probably) would not be joining to the lookup table after the initial query, because the description is part of the criteria.

2. It is not possible to import data from one source to two separate locations.
Damn!

You will have to do something along the lines of :
A) Import new data to temporary table (Doesn't need to be deleted and recreated but this is one option. The other is to use a table which is already defined but you clear down the contents every time before use.)
B) Before adding the data to your Log table, make sure you add any new descriptions to the Descriptions table.
C) Link the temporary table to your Descriptions table with an INNER JOIN and append the records to your Log table.
Hm... as far as adding new descriptions to the lookup table, I might try to import just this field, straight into the lookup table. The "unique-keyness" of the description field will allow only the new ones to be loaded. The remaining question then, of course, is how to import the rest of the data to the log table with the ID from the description table substituted for the actual description.

Upon re-reading the above, I see you've addressed this. I'll have a fiddle in this area - thanks.

I don't see any way to avoid sorting the data, either while appending the new, or while extracting existing.
It would definitely have to be upon extraction. The data is already stored in the order I want it; it's just that Access is (I assume) returning them in the order of the joining field. It looks as though they are sorted by the description text, but that's unlikely and presumably a side-effect of the fact that the descriptions were loaded into the lookup table in that order.

A way of minimising the delay caused though, is to ensure where possible (length of a piece of string) that the data as stored is as closely ordered as possible to that which you require.
At least that's one thing in my favour - they are already stored in the correct sequence.

Good luck - I know how straightforward this isn't.
That's for sure. On the other hand, it wouldn't be much fun otherwise. :)
Nov 14 '06 #14

NeoPa
Expert Mod 15k+
P: 31,494
Access likes tables to be sorted in a similar order before joining fields. I would expect it to process through both datasets via that order (maybe by predefined index - sometimes by a pre-sort :( ).
This can often be the cause of extreme delays.
These are the sorts of issues (sorting on the fly) where having a PK of a Long Integer can save time as creating an index referring to a Long is less resource hungry (therefore faster) than creating one referring to a more complicated field, or even a group of fields.
Nov 14 '06 #15

Expert 5K+
P: 8,434
Access likes tables to be sorted in a similar order before joining fields. I would expect it to process through both datasets via that order (maybe by predefined index - sometimes by a pre-sort :( ).
This can often be the cause of extreme delays.
These are the sorts of issues (sorting on the fly) where having a PK of a Long Integer can save time as creating an index referring to a Long is less resource hungry (therefore faster) than creating one referring to a more complicated field, or even a group of fields.
Thanks, I'll keep it in mind.
Nov 14 '06 #16

Expert 5K+
P: 8,434
For those who might be interested, I have managed to get my import-to-2-tables going, more or less. I created a copy of the original table, with the full description field, called TempLog. Then I created a macro which performs the following actions...
  • RunSQL: DELETE FROM TempLog
  • TransferText: Import text file to TempLog, using my usual import spec.
  • RunSQL: Copy the descriptions from TempLog to table Descriptions. (Unique key, so no dupes)
  • RunCode: Join the lookup table with TempLog, copy to Log (the SQL was more than 255 characters, so it wouldn't fit in a RunSQL action).
  • RunSQL: DELETE FROM TempLog
This seems to achieve what I want, though I would have preferred a single step.
Nov 15 '06 #17

MMcCarthy
Expert Mod 10K+
P: 14,534
I'm glad you finally got it worked out.

Mary

Nov 15 '06 #18

Expert 5K+
P: 8,434
I'm glad you finally got it worked out.
Mary
:( It was really not so much a case of working it out, as giving up and going the route I had been trying to avoid.

But thanks, anyway, especially to you and NeoPa for all sorts of input along the way.

I'm still in the process of populating the lookup ID and zapping the text field on the big tables - probably will be the rest of the week.
Nov 15 '06 #19

MMcCarthy
Expert Mod 10K+
P: 14,534
There are a lot of useful code snippets on that site I sent you the two links to. You just might find something useful.

Mary
Nov 15 '06 #20

NeoPa
Expert Mod 15k+
P: 31,494
No probs for the help.
Some more :
When duplicating a table it is possible, using Copy & Paste, to pass across just the table layout.
This would have saved some time and the next step.

While copying the descriptions across, use the SELECT DISTINCT predicate to avoid duplicates before returning results rather than relying on the append to fail for those records. In a large dataset this can make a performance difference.
Good Luck.
-Adrian.
Nov 15 '06 #21

Expert 5K+
P: 8,434
No probs for the help.
Some more :
When duplicating a table it is possible, using Copy & Paste, to pass across just the table layout.
This would have saved some time and the next step.
You lost me there. I know I can copy and paste table layouts, but how would that have saved me time? I did use an import to pull in the table layout for TempLog, but could have created it manually in a couple of minutes. It's only a small one, and I know it by heart.

While copying the descriptions across, use the SELECT DISTINCT predicate to avoid duplicates before returning results rather than relying on the append to fail for those records.
I'm already doing both, thanks. That is, using DISTINCT to pull just unique values, and relying on the unique key to discard ones that are already there. The performance is OK.

Correction: I've just had another look at the RunSQL and I didn't have DISTINCT in there - d'oh! Well, I certainly had intended to. When I was doing it manually to work out the process, I turned on the aggregate functions and did GROUP BY on the description to get just the unique values. (Actually, I appended the counts to another field too, but just for my own curiosity).
Nov 15 '06 #22
