Bytes | Software Development & Data Engineering Community
Trouble with DAO "SEEK" in converting application to SQL Express back end.

Hello,

I have an application that I'm converting to Access 2003 and SQL Server 2005
Express. The application makes extensive use of DAO and the SEEK method on
indexes. I'm having an issue when the recordset opens a table. When I
write

Set rst = db.OpenRecordset("MyTable",dbOpenTable, dbReadOnly)

I get an error. I believe it's "invalid operation" or "invalid parameter"; I'm
not in front of the application at the moment, but will let you know if it's
important.

When I remove the dbOpenTable, it works but I can't use the SEEK method on
the index.
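For context, a minimal sketch of what works and what doesn't (MyTable, PrimaryKey and ID are hypothetical names, not the actual application's): dbOpenTable recordsets, and therefore Index/Seek, only work on tables stored locally in the Jet MDB. Against an ODBC-linked SQL Server table the usual substitute is a dynaset with FindFirst or, often faster, a restricted SQL statement so the server uses its own index.

```vba
' Sketch only; assumes a DAO reference and hypothetical names.
Dim db As DAO.Database
Dim rst As DAO.Recordset
Set db = CurrentDb()

' Works only when MyTable is a local Jet table:
'   Set rst = db.OpenRecordset("MyTable", dbOpenTable, dbReadOnly)
'   rst.Index = "PrimaryKey"
'   rst.Seek "=", 42
'   If Not rst.NoMatch Then Debug.Print rst!ID

' Against a linked SQL Server table, restrict in SQL instead,
' so the server's index does the work:
Set rst = db.OpenRecordset( _
    "SELECT * FROM MyTable WHERE ID = 42", dbOpenDynaset, dbReadOnly)
If Not rst.EOF Then Debug.Print rst!ID
rst.Close
Set rst = Nothing
```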

Any ideas?

Thanks!
Mar 30 '06 (59 replies, 7402 views)
Bri


David W. Fenton wrote:
Bri <no*@here.com> wrote in news:m6VZf.9870$gO.808@pd7tw3no:

So, to avoid this confusion I reran the tests with 10 loops for
all three methods on the smallest table using the unique long
field index:
FindFirst (10) - 0.2500000
Query (10) - 0.0195313
Seek (10) - 0.0078125

Are you changing the searched-for value for each loop? Or just
searching for the same value 10 times? Or doing a .MoveFirst before
the find operation?


Each loop creates the recordset, finds the record (except the query,
where the query does the finding), assigns the value to a variable,
closes and sets the recordset to Nothing. I am only doing multiple loops
so that there was something to measure for the Seek and query, which can
do a single loop in less than the measurable resolution of the Single
returned by Timer().
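For reference, a hypothetical sketch of a Seek timing loop of that shape (table, index and field names are invented for illustration, not the actual test code):

```vba
' Hypothetical reconstruction of the timing loop described above.
Dim rst As DAO.Recordset
Dim sngStart As Single
Dim lngValue As Long
Dim i As Integer

sngStart = Timer
For i = 1 To 10
    Set rst = CurrentDb.OpenRecordset("tblSmall", dbOpenTable)
    rst.Index = "IDX_LongField"
    rst.Seek "=", 12345
    If Not rst.NoMatch Then lngValue = rst!SomeField
    rst.Close
    Set rst = Nothing
Next i
Debug.Print "Seek (10) - "; Timer - sngStart
```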

--
Bri

Apr 9 '06 #51
Bri


David W. Fenton wrote:
Bri <no*@here.com> wrote in news:m6VZf.9870$gO.808@pd7tw3no:

I would be interested in the
results of your test. I hypothesize that you will have a chunk of
time to open the recordset, since it is a query that must run to
completion before it opens. The subsequent FindFirst should be
similar to my mid-sized table test (you did say 20k records in the
results? That's about 2/3 the size of my mid-sized table at 32k
records).

No, I expect something like 200K records. I know that the initial
loading of the persistent recordset takes a while (c. 20 seconds),
but there is no noticeable lag once the form and the persistent
summary recordset have been initialized. Every OnCurrent event does
a FindFirst and FindNext's until no more records for that current
record are found. If there were a 3-second lag for that FindFirst
operation, I would have noticed it, unless it's only in the first
one (which would be obscured in the overhead of opening the summary
recordset).

Again, I'll have to run some benchmarks on it.


Then you may have a similar lag. You could do like I did on the Single
FindFirst and put an intermediate time after the recordset is opened and
before the FindFirst.

--
Bri

Apr 9 '06 #52
Bri <no*@here.com> wrote in news:ikZZf.9906$_u1.2539@pd7tw2no:
several messages about seek and findfirst.


Bri

I am hoping that you and others will address the question, "What is the
fastest way to find a record for which the criteria finding fields are
indexed?" using this example:

http://www.ffdba.com/NutritionalData2K.zip (7866 kb) contains
NutritionalData2K.mdb (61860 kb) a db in 2K format.

NutritionalData2K.mdb contains NutritionalData (488241 records) Table.

NutritionalData is indexed on ndb_no, nutr_no (primary),
and num_data_pts (not unique),
and num_studies (not unique).

There is nothing else in the db.

Actually it would be helpful if we could have two solutions:

a. show the fastest way to find a (unique) record of that table where
ndb_no = 44259 and nutr_no = 430
and to [debug.print] the value of each field in that record;

b. show the fastest way to open a recordset of all the records in that
table and to find a (unique) record of that table where
ndb_no = 44259 and nutr_no = 430
and to [debug.print] the value of each field in that record;

(Of course, if you or someone else believes there are better tests we
could use these).

Solutions a and b may be identical.
Timing should include as much of the module as possible, that is include
opening the recordset etc.
DAO and ADO or other solutions could be used.
This is not my table nor my database. It's a table from a publicly
available database provided by a department of the US government.
I'll remove the file in a week or so.

I'll undertake to test in access 2003 any solution submitted, on my
laptop, providing a control for the variables of cpu, memory etc, and to
publish the results here in CDMA.

--
Lyle Fairfield
Apr 9 '06 #53
Bri <no*@here.com> wrote in news:fiZZf.9217$nf7.1185@pd7tw1no:
David W. Fenton wrote:
Bri <no*@here.com> wrote in news:m6VZf.9870$gO.808@pd7tw3no:
So, to avoid this confusion I reran the tests with 10 loops for
all three methods on the smallest table using the unique long
field index:
FindFirst (10) - 0.2500000
Query (10) - 0.0195313
Seek (10) - 0.0078125


Are you changing the searched-for value for each loop? Or just
searching for the same value 10 times? Or doing a .MoveFirst
before the find operation?


Each loop creates the recordset, finds the record (except the
query, where the query does the finding), assigns the value to a
variable, closes and sets the recordset to Nothing. I am only
doing multiple loops so that there was something to measure for
the Seek and query, which can do a single loop in less than the
measurable resolution of the Single returned by Timer().


OK. I tried to look up your original code, and it was too far back
in the references chain for me to easily find it.

It does suggest to me even more strongly that perhaps there's an
issue with Access's data caching that's causing the FindFirst to be
so much more inefficient.

OK, after writing that, I finally just got around to doing a test on
my app that uses the persistent totals recordset. Here's what I
found out:

Recordset initialized: 24.112
First FindFirst: 1.655
Next FindFirst: 0.282
Next : 1.471
Next : 0.39
Next : 1.348
Next : 0.173
Next : 1.333
Next : 1.386
Next : 1.723
Next : 1.5
Next : 0.365
Next : 1.39

There is no pattern. My guess is that the value is proportional to
the distance traversed in the recordset, which is in FK order (while
the form calling the recordset navigation is in a different order;
the FK is a long, BTW).

I'm surprised it's taking this long (it's not enough to matter in
the UI for users, though -- other processes can take much longer,
e.g., the de-duping routines, which can be turned off, though they
are on by default; I tested with it off, of course).

The recordset that's being navigated has 367,983 records in it
(summed on the foreign key that is the PK of the table that is the
base data source for the form it's being called from, though that
form is not in PK order, but in name order; that's irrelevant, of
course).

The table that the recordset summarizes has 531,488 records.

Each call to the navigation will attempt to find a matching record
with one FindFirst. If none are found, it stops and returns to the
original context. If a first record is found, it does a FindNext
until there are no more matches. On average, just eyeballing my test
data, there was only one matching record in most cases (0 or 2 or
more in a handful).

I can see no proportional relationship at all between the number of
records matched in the FindFirst/FindNext operations and the elapsed
time. There isn't any really clear relationship with location in the
recordset, either.
Just eyeballing it, jumping to a high FK number, then jumping to a
low FK number is no slower or faster than jumping between nearby FK
values. There is also no apparent speedup from caching, since
jumping to a record, then to another and back to the original can be
longer or shorter, in no obvious pattern.

So, I'm rather puzzled about all of this.

In a recordset with 368K records, the index *has* to be in use or
the search times would be much longer, and linear to the location in
the recordset. I can't really test without the index, as it would
require deleting RI on a production database. I'd do that if it were
a smaller file, but it's 328MBs and I just don't want to muck around
with that.

Just for completeness, I'm running this on a Windows Terminal Server
with all data local in Access 2000 SR1 with SP8.

I realized that the data file hasn't been recently compacted,
either, so since it was Sunday afternoon and I knew I was the only
logged on, I compacted and re-ran the tests:

Initialize Recordset: 29.586
First FindFirst: 0.436
Next FindFirst: 1.631
Next : 0.184
Next : 1.336
Next : 0.179
Next : 1.378
Next : 1.703
Next : 0.095

Aha! A pattern *has* emerged. The lower FK values take less time to
find, which shows that the recordset pointer is being reset to the
start, so there's a linear relationship between the magnitude of the
FK value and the time it takes to find it. Values in the 500K range
are taking about 1.5 seconds, while values in the 100K range are
taking .5 seconds or less.

In any event, my results are not very consistent with yours, seems
to me.

But I think it does show that SEEK is more efficient in part because
it just repositions the record pointer, rather than always returning
to the beginning of the index and scanning through it.

Of course, that's exactly what the Help file says it does. D'oh!

I wonder if it would be more efficient to use FindNext, and if
NoMatch, then try FindPrevious? Statistically speaking, surely this
would reduce the distance traversed by an average of 50% (I think),
assuming there is no performance penalty for .NoMatch.

Instead of:

    rs.FindFirst
    If rs.NoMatch Then GoTo ExitHere
    rs.FindNext        ' <- in a loop until no matches

I would instead do this:

    rs.FindNext
    If rs.NoMatch Then
        rs.FindPrevious
        If rs.NoMatch Then GoTo ExitHere
    End If
    rs.FindNext        ' <- in a loop until no matches
I was going to quit working on this at this point, but now I've just
got to test this!

Well, here are the results avoiding FindFirst -- it's actually quite
encouraging (though I'm not sure where the negative result is coming
from; an inaccuracy in GetTickCount?):

FK      Amt (Count)   FindNext/FindPrevious   FindFirst
237552 600 (9) 0.39 0.398
521142 150 (1) 1.128 1.389
191276 1,000 (1) 1.553 0.202
521538 25 (1) 1.203 1.334
520956 75 (2) 0.359 1.394
593669 100 (1) 0.329 1.766
579508 100 (1) 0.036 1.767
105368 0 (0) 1.796 2.208
520611 20 (2) 0.375 1.464
521274 70 (3) 0.002 1.474
522271 50 (1) -0.012 1.468
574826 100 (1) 0.239 1.724
173916 700 (2) 1.793 0.098
148957 0 (0) 1.826 1.82
190670 2,000 (2) 0.077 0.165
177070 600 (2) 1.801 0.109
216224 400 (2) 0.147 0.256
179895 150 (1) 1.757 0.116
563124 1,000 (1) 1.618 1.67
522265 25 (1) 0.337 1.48
Totals: 16.754 22.302
Averages: 1.595619 2.124

So it looks like it's just less than a 1/3 reduction in the amount
of time it takes, simply because there's a linear relationship
between the order of the index being searched and the amount of time
it takes.

[note that the COUNT column does not indicate the number of
recordset repositionings, because any single record of the summary
recordset might be a total of more than one record from the source
table, because the summary recordset is grouped on the FK given and
subgrouped on a second foreign key. So for any DonorID (the FK
that's being navigated) there is a subgroup on FundID and that
subgroup could have one or more records totalled in it, even though
it is only one record in the summary query. This has some effect on
the timings listed here, as when there are subsequent records those
are jumped to with FindNext. As you can see there is no apparent
difference between the cases with 0 matching records or any other
count, so I don't expect this has any major effect on the elapsed
time because you'll note that nearby record navigation is very fast,
e.g., in the case of the jump from 520611 to 521274, at .002
seconds. For adjacent records, the time would surely be less than a
millisecond. But I could rewrite my code to test this if anyone
thinks it's a major issue that makes my results too flawed to be
dependable]

The only time the FindNext/FindPrevious takes longer is when you
need to find the previous record. This could be optimized by caching
the previously searched value and choosing FindFirst or FindNext
based on whether or not the newly requested value is greater or less
than the previous one.
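That conditional choice could be sketched as follows (mlngLastID is a hypothetical module-level Long caching the previously searched value; lngNewID and rst are assumed to exist already):

```vba
' Sketch: pick the cheaper Find based on the cached previous key.
Dim strCriteria As String
strCriteria = "DonorID = " & lngNewID

If lngNewID >= mlngLastID Then
    ' Target lies ahead of (or at) the pointer: scan forward only.
    rst.FindNext strCriteria
Else
    ' Target lies behind: restart from the top of the index order.
    rst.FindFirst strCriteria
End If
mlngLastID = lngNewID
```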

Well, it turns out doing that results in a fairly impressive
performance improvement (the test results from above are repeated in
the columns under FIRST TEST):

                     |----First Test----|  |----------Second Test-----------|
         Amount      FindNext                FindNext
FK       /Count      /Previous   FindFirst   /Previous  Conditional  FindFirst
237552 600 (9) 0.39 0.398 0.405 0.412 0.4
521142 150 (1) 1.128 1.389 1.038 1.035 1.557
191276 1,000 (1) 1.553 0.202 1.555 1.209 0.202
521538 25 (1) 1.203 1.334 1.242 1.239 1.312
520956 75 (2) 0.359 1.394 0.351 0.014 1.377
593669 100 (1) 0.329 1.766 0.301 0.336 1.729
579508 100 (1) 0.036 1.767 0.047 0.039 1.716
105368 0 (0) 1.796 2.208 1.806 1.784 1.813
520611 20 (2) 0.375 1.464 0.384 0.315 1.454
521274 70 (3) 0.002 1.474 0.006 0.03 1.506
522271 50 (1) -0.012 1.468 0.006 -0.014 1.448
574826 100 (1) 0.239 1.724 0.249 0.254 1.78
173916 700 (2) 1.793 0.098 1.78 1.727 0.078
148957 0 (0) 1.826 1.82 1.835 0.089 1.808
190670 2,000 (2) 0.077 0.165 0.094 0.064 0.164
177070 600 (2) 1.801 0.109 1.737 0.067 0.103
216224 400 (2) 0.147 0.256 0.564 0.128 0.248
179895 150 (1) 1.757 0.116 1.748 0.139 0.141
563124 1,000 (1) 1.618 1.67 1.609 1.596 1.658
522265 25 (1) 0.337 1.48 0.343 0.224 1.472
Totals: 16.754 22.302 17.1 10.687 21.966
Averages: 1.595619 2.124 1.628571 1.0178095 2.092

As you see, FindFirst took around 22 seconds total for these 20
navigation operations, whereas conditionally choosing which to use
based on the previous value cut that in half, almost exactly, at
just under 11 seconds.

Well, this has been quite an eye opener.

Obviously, if SEEK were an option, I'd use it, but since this
particular operation is on a summary query, SEEK wouldn't be
available. But it does show that you can definitely get significant
improvement out of the DAO Find operations by writing your code
carefully.

--
David W. Fenton http://www.dfenton.com/
usenet at dfenton dot com http://www.dfenton.com/DFA/
Apr 9 '06 #54
> believes Seek is so great, why isn't it available in MS-SQL Server?

Seek is the native method of Jet. Why isn't SQL server
President Bush, or Microsoft Word?

SQL Server uses a server side caching strategy, which results in
indexes pre-loading and remaining in memory.

Jet uses a distributed database engine, which means that indexes
have to be invalidated regularly and re-loaded from the server.

(david)
"Lyle Fairfield" <ly***********@aim.com> wrote in message
news:Xn*********************************@216.221.8 1.119...
"david epsom dot com dot au" <david@epsomdotcomdotau> wrote in
news:44***********************@lon-reader.news.telstra.net:

The 'OpenRecordSet' method is used in both cases, but opening
a SnapShot retrieves all data: Opening a TableType retrieves
only meta-data.

A table type recordset is not an entire recordset.


Would you, please, define "metadata" for this particular situation.

So that I can understand better perhaps you could use the Employees or
other Table of the Northwind Database. I believe (although I've messed
with mine and can't be sure) that the Employees has nine records. Suppose
we add 2047991 records (imagine it's some trivial civil service
department) and open a table type recordset.

What exactly is loaded? I would assume it's some schema, including a
description of columns, their size and type. Since the table-type
recordset is RecordCount-aware, does JET do an SQL-type count behind the
scenes, or does it move to the end of the recordset (and back, as the
record pointer is at record one), as we must do with other types of
recordsets?

If it loads only descriptive metadata about columns then one assumes
opening a 204800 record recordset is as fast as opening a 2048 record
recordset? Is this the case?

What happens when we set the index? What additional data is loaded then?
Is the index loaded into a separate memory space?

If someone were seriously interested in this problem (I am not), he or
she could create a 2K table, post it somewhere on the net for download,
and then we could all, in our spare time, experiment with various ways
to "find" the, say, 10th-last record.

We could then compare coding time (time for us to write the code) and
finding time. When we published our methods and results others could try
them and eventually we would come to some consensus.

Until someone does this and we have an open and calm conversation about
it, I will stick by my previous observations:
I have used Seek extensively in the past. I have championed Seek in the
past. Except in very unusual circumstances Seek is irrelevant today.

Microsoft kb articles have been cited here in support of Seek. If MS
believes Seek is so great, why isn't it available in MS-SQL Server?

--
Lyle Fairfield

Apr 10 '06 #55
"david epsom dot com dot au" <david@epsomdotcomdotau> wrote in
news:44***********************@lon-reader.news.telstra.net:
believes Seek is so great, why isn't it available in MS-SQL Server?


Seek is the native method of Jet. Why isn't SQL server
President Bush, or Microsoft Word?

SQL Server uses a server side caching strategy, which results in
indexes pre-loading and remaining in memory.

Jet uses a distributed database engine, which means that indexes
have to be invalidated regularly and re-loaded from the server.


Suppose we have ten records to find (IN JET).

On the one hand we can:

"load" a tabletype recordset, and set its index to IDX_something. The
records are not loaded, just metadata, including the schema of the
columns included in the recordset, as you have pointed out; I found this
statement of yours helpful in that it made me think about the issue.

So loading this type of recordset takes almost no time at all.

We seek the first record. And find it. The whole operation is very quick.

On the other hand, we can use an sql statement to find the same record.
JET will make the best use of indexes that it can decide upon against the
schema of the columns requested.

Will there be any measurable difference in the speed of these two
operations? My experience and less than rigorous testing say no.

I don't find this surprising.

The data is all in the same place, the local table, at our feet or on our
desk or on another computer in the basement. Are these two operations
then, not conceptually identical?

Does one have an advantage over the other?

With seek and the recordset we have similar records nearby. A MoveNext
may be used to see if there are other records meeting the same criteria.

With sql there is less coding overhead; JET makes some of our decisions
for us. There must be some reason we want to find this record. If it's
update or delete, the SQL does the find and the job all in one. The
notion of similar records nearby is trumped perhaps by the notion of
doing multiple updates or deletes with the one sql statement.

Is calling ten SQL statements less efficient than opening a tabletype
recordset and doing ten seeks? I used to think so, but after thinking
about your statements I no longer do, because I think they are
essentially the same thing.

As daily practice the sql is simpler, shorter and applicable against all
or almost all of the data we use, while the seek is not.

As an aside, I note that sometimes I use a recordset transformed into an
array to do many scans of data (sometimes factorially many scans of a
small subset of records). This is many times faster than any recordset
manipulation that I have used.
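One common way to do that transform is DAO's GetRows, which pulls a recordset into a 2-D Variant array in one call (table and field names below are invented):

```vba
' Sketch: pull a recordset into a Variant array with GetRows.
' Note GetRows returns avarData(field, row), not (row, field).
Dim rst As DAO.Recordset
Dim avarData As Variant
Dim lngRow As Long

Set rst = CurrentDb.OpenRecordset( _
    "SELECT ID, Amount FROM tblDetail", dbOpenSnapshot)
If Not rst.EOF Then
    rst.MoveLast    ' populate RecordCount fully
    rst.MoveFirst
    avarData = rst.GetRows(rst.RecordCount)
End If
rst.Close
Set rst = Nothing

' Many passes over the array are now cheap:
If Not IsEmpty(avarData) Then
    For lngRow = 0 To UBound(avarData, 2)
        Debug.Print avarData(0, lngRow), avarData(1, lngRow)
    Next lngRow
End If
```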

If seek were removed from the new engine (no longer called Jet) that is
to appear in Access 12, what things could we not do so well as we can do
today?

--
Lyle Fairfield
Apr 10 '06 #56
I agree with everything you just said :~).

If seek were to disappear from A12/Jet 5, I would be
forced to re-write some code I inherited 10 years ago.

I think that a recordset is a general purpose object
with much more overhead than an array, in particular
the connection to the database is often not required.

Which leads me to think that if I had to re-write my
fast array code, I would consider an ADO disconnected
recordset, or a dictionary object. My array code does
not have to be really fast, and I could use some of the
extra features. I would certainly try if dictionary
objects were native. I've tried collection objects,
and there aren't enough extra features to justify the
switch from arrays.

(david)

"Lyle Fairfield" <ly***********@aim.com> wrote in message
news:Xn*********************************@216.221.8 1.119...
[snip]
Apr 11 '06 #57
"david epsom dot com dot au" <david@epsomdotcomdotau> wrote in
news:44***********************@lon-reader.news.telstra.net:
I agree with everything you just said :~).

If seek were to disappear from A12/Jet 5, I would be
forced to re-write some code I inherited 10 years ago.

I think that a recordset is a general purpose object
with much more overhead than an array, in particular
the connection to the database is often not required.

Which leads me to think that if I had to re-write my
fast array code, I would consider an ADO disconnected
recordset, or a dictionary object. My array code does
not have to be really fast


As an aside ...

I use the disconnected ADO recordset when writing ASP with JavaScript as
the ASP language. The array's elements are (each) [primarykey][array of
other field values]. They can be populated very quickly with GetString and
split. So array[23][6] gives me the value of the 7th field in the record
having primary key 23, instantaneously. pish, pop, shift, splice give me
great power when it comes to manipulating these. As JavaScript arrays are
sparse, the absence of elements 24 to, say, 30 is not a memory loss; there's
just nothing there; in fact a[24] becomes a boolean as to the existence of
a record with primary key 24.

--

Lyle Fairfield
Apr 11 '06 #58

"Lyle Fairfield" <ly***********@aim.com> wrote
pish, pop, shift, splice give me great power
when it comes to manipulatin these. As Java-
script arrays are sparse, the absense of ele-
ments 24 to say 30 is not a memory loss; there's
just nothing there; in fact a[24] becomes a boolean
as to the existence of a record with primary key 24.


Is that the dismissive "pish", as in "Pish and tosh, don't be silly,
Heathcoate! There are no hostile warrio. . . "?
Apr 13 '06 #59
"Larry Linson" <bo*****@localhost.not> wrote in news:KjB%f.6666$7Z6.1904
@trnddc06:

"Lyle Fairfield" <ly***********@aim.com> wrote
[snip]


Is that the dismissive "pish", as in "Pish and tosh, don't be silly,
Heathcoate! There are no hostile warrio. . . "?


Nah, it's programming to music:

Yes, I was a-splishin' and a splashin'
I was a-rollin' and a-strollin'
Yeah, I was a-movin' and a-groovin'...woo!
We was a-reelin' with the feelin'..ha!
We was a-rollin' and a-strollin'
Movin with the groovin'
Splish splash, yeah!

--
Lyle Fairfield
Apr 14 '06 #60
