Import/Normalize approach - column-DML, or loop

Hi all,

I've just finished almost all of what has turned out to be a real bear of a
project. It has to import data from a monthly spreadsheet export from another
program, and convert that into normalized data. The task is made more
difficult by the fact that the structure itself can vary from month to month
(in well defined ways).

So, I used the SQL-centric approach, taking vertical stripes at a time so
that, for instance, for each field with simple, repeating text data, I make a
group-by pass, inserting into the destination lookup table, then I do another
query, joining from the input table to the text fields in the lookups to get
the foreign keys for the main destination table. When I have multiple columns
that need to become 1-M, I make a pass for each of those columns, inserting a
lookup record that identifies the split, then inserting the rows into the
many-side table, yadda, yadda, yadda.
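For concreteness, one of those vertical passes looks roughly like the following
DAO sketch. All table and field names (tblImport, tluColor, tblWidget) are
invented for illustration, not taken from the real project:

Dim db As DAO.Database
Set db = CurrentDb()

' Pass 1: build the lookup from the distinct text values
db.Execute _
    "INSERT INTO tluColor (ColorName) " & _
    "SELECT DISTINCT i.Color FROM tblImport AS i " & _
    "WHERE i.Color Is Not Null;", dbFailOnError

' Pass 2: fill the destination, joining back to the lookup
' to translate each text value into its new foreign key
db.Execute _
    "INSERT INTO tblWidget (SourceRowID, ColorID) " & _
    "SELECT i.RowID, c.ColorID " & _
    "FROM tblImport AS i INNER JOIN tluColor AS c " & _
    "ON i.Color = c.ColorName;", dbFailOnError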

All that was going swimmingly, and performing pretty well until I got to the
fields containing multiple, delimited values. My whole design is based on
using SQL/DML passes for everything, but the only way I could figure out to
make that work was to call a user defined function from within the query to
pull out, in successive query slices, argument 1, argument 2, etc. I was
going to use a where clause to exclude null results (input doesn't have
arguments n or above), and quit after the first pass with .RecordsAffected = 0.
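In other words, something along these lines, where GetArg() is a stand-in for
the UDF (it returns item N of the delimited field, or Null when the row has
fewer than N items), and the table and field names are invented:

Dim db As DAO.Database
Dim lngArg As Long

Set db = CurrentDb()
lngArg = 1
Do
    ' Each pass slices out argument N for every input row
    db.Execute _
        "INSERT INTO tblDetail (SourceRowID, ItemText) " & _
        "SELECT i.RowID, GetArg(i.MultiField, " & lngArg & ") " & _
        "FROM tblImport AS i " & _
        "WHERE GetArg(i.MultiField, " & lngArg & ") Is Not Null;", _
        dbFailOnError
    lngArg = lngArg + 1
Loop Until db.RecordsAffected = 0

Note that the UDF gets called for every row on every pass, and here it appears
in both the SELECT and the WHERE, so the per-call overhead multiplies quickly.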

Sounds good, but with a mere 8000 input rows, I had to cancel the first pass
after waiting about 20 minutes. Note that a whole import process takes about
3 minutes if this part is omitted, and that includes about 40 vertical split
passes. The UDF is very simple, and performs quite fast from VB, but I guess
the overhead of calling this function from a query is VERY SEVERE. Just to
get this thing out the door, I've decided to just not split these for now
since we're not doing queries of that data yet.

So, my next thought is that, perhaps this step is better done by simply,
brute-force, cycling through a recordset of the input table, and inserting
rows into the destination tables. If I'm doing that, though, then am I really
getting any benefit worth speaking of by doing everything -else- as SQL-DML in
vertical slices, or would I have been much better off, just doing it the
procedural way, walking through the input table, and creating destination rows
one row at a time? I know it would have been easier to write, but here I was
trying to do things the "right" way.

If I do something like this again, would my code end up performing just as
well, and being easier to write and maintain using simple iteration rather
than SQL-DML?

Thanks for any opinions,

- Steve J
Nov 12 '05
Steve Jorgensen <no****@nospam.nospam> wrote in
news:0v********************************@4ax.com:
On Fri, 30 Jan 2004 16:34:18 GMT, "David W. Fenton"
<dX********@bway.net.invalid> wrote:
Steve Jorgensen <no****@nospam.nospam> wrote in
news:o6********************************@4ax.com:
Well, I'm not sure that's much help in this case. It's not a
field that was originally intended to be single-valued, then had
some multi-valued data put in; it is an export from a system
with no other way to represent 1-to-many data in a spreadsheet
than to put it in a multi-valued field. Out of 8000 records,
there are about 2500 unique combinations (using GROUP BY).


With 8000->2500, I'm not sure there's any real benefit in
processing unique values.


Well, the Group By seems to run pretty fast. Intuitively, it
seems like walking more than 3x as many rows to parse would waste
more time than the Group By. Of course, it's trivial to try it
both ways, and time it.


But you'll end up with unique values that have no meaning, and
really only save you a tiny amount of time. On the other hand, it
doesn't necessarily take very much time to do it, so perhaps it's a
wash.
Either way, though, I'd first process the values into columns and
then process the columns into rows. But if more than half the
records have more than 10 values in them, then it would become
more unwieldy, I think, than just walking the table and creating
the new records.


I think it's borderline on that one. I'm not certain I know what
you are advocating, though. Are you saying to add columns to the
import table, and step-wise run queries to peel off argument 1 to
the first added column, removing it from the original source
column (leaving argument 2 as argument 1), then query again to
peel off argument 1 to the second added column, etc.?
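One such peel-off pass might look like this as Jet SQL run from DAO. The column
names are hypothetical, and "/" stands in for the real delimiter:

Dim db As DAO.Database
Set db = CurrentDb()

' Copy the first delimited item into Arg1, then trim it off the front of
' the source column; later passes do the same for Arg2, Arg3, ...
' (assumes an Arg1 text column has already been added to tblImport)
db.Execute _
    "UPDATE tblImport SET " & _
    "Arg1 = IIf(InStr(MultiField, '/') > 0, " & _
    "Left(MultiField, InStr(MultiField, '/') - 1), MultiField), " & _
    "MultiField = IIf(InStr(MultiField, '/') > 0, " & _
    "Mid(MultiField, InStr(MultiField, '/') + 1), Null) " & _
    "WHERE MultiField Is Not Null;", dbFailOnError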


That's one way to do it. The other way is to have a single query do
it. Or to write code that will write the SQL for you.
I'd do that in two steps of course, as you're likely to want a
lookup here, so you'll need both a list of the values connected to
the source record and a list of the unique values to pick from in
new records. So, I'd create the 1:N records (FK + Value), then
create the lookup table with a DISTINCT, then join on the value to
populate the lookup key.
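In Jet terms, that might look something like the following, assuming the parse
step has already produced a tblPair table of (SourceID, ItemText) rows. All of
the names are invented:

Dim db As DAO.Database
Set db = CurrentDb()

' Build the lookup from the distinct values
db.Execute "SELECT DISTINCT ItemText INTO tluItem FROM tblPair;", dbFailOnError

' Give the lookup an autonumber key and the pair table a matching FK column
' (assumes Jet numbers the existing rows when the COUNTER column is added)
db.Execute "ALTER TABLE tluItem ADD COLUMN ItemID COUNTER;", dbFailOnError
db.Execute "ALTER TABLE tblPair ADD COLUMN ItemID LONG;", dbFailOnError

' Join on the value to stamp the lookup key back onto the pairs
db.Execute _
    "UPDATE tblPair INNER JOIN tluItem " & _
    "ON tblPair.ItemText = tluItem.ItemText " & _
    "SET tblPair.ItemID = tluItem.ItemID;", dbFailOnError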


Right, that's pretty much how I'm handling all the other
single-valued fields with repeated text already. It's working
great.
But with only 8000 records, I think I'd do the record creation by
walking the recordset and then walking the field and parsing.


Yeah. If I get what you're saying, I don't think ripping
arguments into 10 or 40 new columns will be a good thing, and
anything short of the 40 or so means writing more code to handle
the arguments past #10 or so differently than the rest.


Well, I'm rather shocked at the idea that there'd be 40 separate
values stored in a single field. I know you said it was a
spreadsheet, but what kind of morons would write a spreadsheet that
complex when they obviously need a database?
I think I've figured out that a hybrid approach might be best for
my current situation, given that I already have all the
infrastructure to build the DML parts. . . .


DML? Perhaps I should not have ignored the fact that I don't know
what DML means?


Oh, it's just SQL that's not DDL. SQL consists of DDL and DML.
To me, use of the term DML implies a focus on INSERT/UPDATE
queries rather than simple SELECTS, though I believe SELECT is
still considered part of DML. I welcome anyone's corrections on
these facts.


I've never heard that terminology before.

Of course, I never use DDL, either.
. . . The idea is that I query a
recordset with all the unique combinations, cycle through that,
and parse out the arguments in each one, build a collection of
unique values (2500 combinations, but probably only a few
hundred unique items), then insert the collection items into the
lookup table. From there, I should be able to join from the source
table to the lookup using a Like expression ("/" + combination +
"/" Like "*/" + delimiter + item-text + delimiter + "/*") to
populate the junction table.
I think/hope this will do the job.
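That Like-based join would be shaped something like the following, written here
with the delimiter itself ("/" for the sake of the sketch) as the wrapper so
the first and last items still match. The names are invented:

Dim db As DAO.Database
Set db = CurrentDb()

' Cartesian join of import rows to lookup items, filtered by the Like test
db.Execute _
    "INSERT INTO tblJunction (SourceRowID, ItemID) " & _
    "SELECT i.RowID, t.ItemID " & _
    "FROM tblImport AS i, tluItem AS t " & _
    "WHERE '/' & i.MultiField & '/' Like '*/' & t.ItemText & '/*';", _
    dbFailOnError

Because the pattern is built per row, no index can help; every import row gets
tested against every lookup item.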


Why join in that fashion?

Why not do this: walk the source table, parse each value and
insert a record with the source PK and the value into a temporary
table, then from that table, create another table with the unique
values (this will be your lookup table, with a PK), then join
these two new tables on the value field and insert the PK from the
source table and the PK from the lookup table into your junction
table.
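A bare-bones version of that first step could look like the following, with
invented names (tblImport, tblPair) and Split() doing the parsing on an assumed
"/" delimiter:

Dim db As DAO.Database
Dim rsIn As DAO.Recordset
Dim rsOut As DAO.Recordset
Dim varItems As Variant
Dim i As Long

Set db = CurrentDb()
Set rsIn = db.OpenRecordset( _
    "SELECT RowID, MultiField FROM tblImport " & _
    "WHERE MultiField Is Not Null;", dbOpenSnapshot)
Set rsOut = db.OpenRecordset("tblPair", dbOpenDynaset)

Do Until rsIn.EOF
    varItems = Split(rsIn!MultiField, "/")
    For i = LBound(varItems) To UBound(varItems)
        If Len(Trim$(varItems(i))) > 0 Then
            ' One (source PK, value) row per parsed item
            rsOut.AddNew
            rsOut!SourceID = rsIn!RowID
            rsOut!ItemText = Trim$(varItems(i))
            rsOut.Update
        End If
    Next i
    rsIn.MoveNext
Loop

rsOut.Close
rsIn.Close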


I'm guessing that this join, even though it would be an extra
query that will not benefit from any indexes, could be quicker
than executing 10 to 40 individual inserts from code per source
row as I go. I dunno, though. Your way is probably easier to
write.


A join on an expression faster than individual queries? I don't
think so!
With very large numbers of records, I'd break the first step into
2 parts, first parsing the values into columns and then inserting
the records from the columns.
What is your opinion if I do another project like this in the
future? DML, or loops?


Depends on the number of records and the number of values in the
multi-value field. With larger numbers of records and/or larger
numbers of values in the multi-value field, I'd tend to process
the values into columns first and then create records from the
columns. With smaller recordcounts (as in your case), I might do
it directly,
especially if the number of values stored in each field is large.

The last time I did this I was processing a phone number field
that had multiple values in it. Most of the fields had 1, 2 or 3
numbers, a handful had 4, and a very small number had more than that.
I processed into 4 columns, with the 4th column still being
multi-valued, but with only a small number of records to process
(indeed, it was such a small number that I believe I created the
real records manually!). My experience with this kind of data is
that you have lots of records with 1 or 2 values, about half as
many with 3, and then a quick falloff from there, with 10% or less
having more than that. But it depends entirely on the kind of
data. Phone numbers have pretty much an upper limit, but other
kinds of data will not.

I guess what I'm saying is that with large numbers of records, I'd
separate the process of parsing the multi-value field from the
process of creating the records, for performance purposes.


I guess that sounds reasonable. I think I would have done pretty
much the same thing for a field with mostly one or 2 values per
row. It was only this strange case of having a 1-M stuffed into a
column that brought me this headache to begin with.

The other issue I'm seeing, though, is that the code would
probably have been simpler, cost less to write, and would now be
simpler to maintain if I had used loops instead of queries. Since
this is a monthly import, if the processing took less than twice
as long as now, perhaps, the simplicity alone would have been
sufficient reason to just do the loops, eh? Of course, a slower
process means testing less often during development, and that
means more difficult debugging sessions each time, so perhaps it
would not have cost less to write, but I still think I could have
made it more legible for maintenance that way.


What kind of source application is built around such a poorly
designed spreadsheet?

Is there any plan to get away from such a bad design?

--
David W. Fenton http://www.bway.net/~dfenton
dfenton at bway dot net http://www.bway.net/~dfassoc
Nov 12 '05 #11
On Fri, 30 Jan 2004 19:39:25 GMT, "David W. Fenton"
<dX********@bway.net.invalid> wrote:

....
Yeah. If I get what you're saying, I don't think ripping
arguments into 10 or 40 new columns will be a good thing, and
anything short of the 40 or so means writing more code to handle
the arguments past #10 or so differently than the rest.


Well, I'm rather shocked at the idea that there'd be 40 separate
values stored in a single field. I know you said it was a
spreadsheet, but what kind of morons would write a spreadsheet that
complex when they obviously need a database?


It probably is stored in a database inside the proprietary system it comes
from. What I get is the spreadsheet exported from that system. Also, the
items in the export are very short codes.
I think I've figured out that a hybrid approach might be best for
my current situation, given that I already have all the
infrastructure to build the DML parts. . . .
....
. . . The idea is that I query a
recordset with all the unique combinations, cycle through that,
and parse out the arguments in each one, build a collection of
unique values (2500 combinations, but probably only a few
hundred unique items), then insert the collection items into the
lookup table. From there, I should be able to join from the source
table to the lookup using a Like expression ("/" + combination +
"/" Like "*/" + delimiter + item-text + delimiter + "/*") to
populate the junction table.
I think/hope this will do the job.

Why join in that fashion?

Why not do this: walk the source table, parse each value and
insert a record with the source PK and the value into a temporary
table, then from that table, create another table witht he unique
values (this will be your lookup table, with a PK), then join
these two new tables on the value field and insert the PK from the
source table and the PK from the lookup table into your junction
table.


I'm guessing that this join, even though it would be an extra
query that will not benefit from any indexes, could be quicker
than executing 10 to 40 individual inserts from code per source
row as I go. I dunno, though. Your way is probably easier to
write.


A join on an expression faster than individual queries? I don't
think so!


Huh? Each of those inserts has to go through the whole VBA/DAO/JET layer, 10-40
times per input row. Should I really expect that to be quicker than running one
query and letting JET do the whole thing at the JET engine level?
With very large numbers of records, I'd break the first step into
2 parts, first parsing the values into columns and then inserting
the records from the columns.

What is your opinion if I do another project like this in the
future? DML, or loops?
....
I guess what I'm saying is that with large numbers of records, I'd
separate the process of parsing the multi-value field from the
process of creating the records, for performance purposes.


I guess that sounds reasonable. I think I would have done pretty
much the same thing for a field with mostly one or 2 values per
row. It was only this strange case of having a 1-M stuffed into a
column that brought me this headache to begin with.

The other issue I'm seeing, though, is that the code would
probably have been simpler, cost less to write, and would now be
simpler to maintain if I had used loops instead of queries. Since
this is a monthly import, if the processing took less than twice
as long as now, perhaps, the simplicity alone would have been
sufficient reason to just do the loops, eh? Of course, a slower
process means testing less often during development, and that
means more difficult debugging sessions each time, so perhaps it
would not have cost less to write, but I still think I could have
made it more legible for maintenance that way.


What kind of source application is built around such a poorly
designed spreadsheet?

Is there any plan to get away from such a bad design?


No. It's the export format provided from the application we need to get the
data from. The vendor provides another version of the program with a querying
interface for analysis, but it's missing much of the data we need that is
exported in the spreadsheet.
Nov 12 '05 #12
On Thu, 29 Jan 2004 08:18:26 GMT, Steve Jorgensen
<no****@nospam.nospam> wrote:

[snip]
All that was going swimmingly, and performing pretty well until I got to the
fields containing multiple, delimited values. My whole design is based on
using SQL/DML passes for everything, but the only way I could figure out to
make that work was to call a user defined function from within the query

[snip]

Did you consider using code to generate SQL statements, then executing
them? (Possibly within explicit transactions?)

--
Mike Sherrill
Information Management Systems
Nov 12 '05 #13
Steve Jorgensen <no****@nospam.nospam> wrote in
news:75********************************@4ax.com:
On Fri, 30 Jan 2004 19:39:25 GMT, "David W. Fenton"
<dX********@bway.net.invalid> wrote:

...
Yeah. If I get what you're saying, I don't think ripping
arguments into 10 or 40 new columns will be a good thing, and
anything short of the 40 or so means writing more code to handle
the arguments past #10 or so differently than the rest.


Well, I'm rather shocked at the idea that there'd be 40 separate
values stored in a single field. I know you said it was a
spreadsheet, but what kind of morons would write a spreadsheet
that complex when they obviously need a database?


It probably is stored in a database inside the proprietary system
it comes from. What I get is the spreadsheet exported from that
system. Also, the items in the export are very short codes.


So, you're normalizing data that has been denormalized for export?

How stupid is that?

Wouldn't it be better to have someone skip the spreadsheet and have
a normalized export process instead?

[]
What kind of source application is built around such a poorly
designed spreadsheet?

Is there any plan to get away from such a bad design?


No. It's the export format provided from the application we need
to get the data from. The vendor provides another version of the
program with a querying interface for analysis, but it's missing
much of the data we need that is exported in the spreadsheet.


Are you sure there's absolutely no access to the underlying data
structures?

This is the kind of thing that drives me crazy, having to program
something to undo something that has been extensively programmed
already. Any changes to the export will break your import routine,
for instance.

I recently replaced a client's system for importing data from MYOB
with direct connections via the MYOB ODBC, and vastly improved the
quality of data (previously, certain kinds of data were just not
available). I don't know if the application in question has any such
capability, but I would certainly let the client know that anything
you program is heavily contingent on there being no changes in the
output format at all.

--
David W. Fenton http://www.bway.net/~dfenton
dfenton at bway dot net http://www.bway.net/~dfassoc
Nov 12 '05 #14
On Fri, 30 Jan 2004 18:08:32 -0500, Mike Sherrill
<MS*************@compuserve.com> wrote:
On Thu, 29 Jan 2004 08:18:26 GMT, Steve Jorgensen
<no****@nospam.nospam> wrote:

[snip]
All that was going swimmingly, and performing pretty well until I got to the
fields containing multiple, delimited values. My whole design is based on
using SQL/DML passes for everything, but the only way I could figure out to
make that work was to call a user defined function from within the query

[snip]

Did you consider using code to generate SQL statements, then executing
them? (Possibly within explicit transactions?)


Yes, that's precisely what I am doing.
Nov 12 '05 #15
On Sat, 31 Jan 2004 00:06:28 GMT, "David W. Fenton"
<dX********@bway.net.invalid> wrote:
Steve Jorgensen <no****@nospam.nospam> wrote in
news:75********************************@4ax.com:
On Fri, 30 Jan 2004 19:39:25 GMT, "David W. Fenton"
<dX********@bway.net.invalid> wrote:

...
Yeah. If I get what you're saying, I don't think ripping
arguments into 10 or 40 new columns will be a good thing, and
anything short of the 40 or so means writing more code to handle
the arguments past #10 or so differently than the rest.

Well, I'm rather shocked at the idea that there'd be 40 separate
values stored in a single field. I know you said it was a
spreadsheet, but what kind of morons would write a spreadsheet
that complex when they obviously need a database?


It probably is stored in a database inside the proprietary system
it comes from. What I get is the spreadsheet exported from that
system. Also, the items in the export are very short codes.


So, you're normalizing data that has been denormalized for export?

How stupid is that?

Wouldn't it be better to have someone skip the spreadsheet and have
a normalized export process instead?


Of course, that was the first thing I asked the client after they gave me the
requirements. It's a totally closed system. Since it provides valuable
market data on a subscription basis, presumably, they think giving customers
too much access would compete with their own analysis consulting business - so
we end up working around them.

It's not the first time I've seen this sort of thing, and it probably won't be
the last.

Nov 12 '05 #16

Steve,

On Sat, 31 Jan 2004 04:03:07 GMT, Steve Jorgensen
<no****@nospam.nospam> wrote in comp.databases.ms-access:
Of course, that was the first thing I asked the client after they gave me the
requirements. It's a totally closed system. Since it provides valuable
market data on a subscription basis, presumably, they think giving customers
too much access would compete with their own analysis consulting business - so
we end up working around them.

It's not the first time I've seen this sort of thing, and it probably won't be
the last.


I've run into this before, under similar circumstances.

I would suggest the following. Just because the vendor only provides
a largely unsuitable export routine and doesn't expose the data
through standard means (ODBC, a proprietary driver, an embedded programming
language, etc.) does not mean that the data is not accessible by more reliable
means.

Consider the following. If there is any logical construction to the
data stored by this other app (and if there is any complexity at all
to the app, its data storage must be logically structured or it would
have been unmanageable), then the vendor probably used a third-party
solution for storage. In other words, the data files probably are
some recognizable format, even if this is not acknowledged by the
vendor. Open the data file in an editor. What does it look like?
Jet? dBASE? SQL Server? MySQL? Are there any processes that need
to be run in order for this other app to work (ie, is there a db
server of some sort)? If you can determine what they used for
storage, you can access the data directly without using their
poorly-designed export interface. I would recommend strongly though
that this export be enforced as an offline read-only process. You do
not want to impact performance of the other app, or interfere with
your licensing or provision for vendor support of the existing app.

But that said, in most cases, although the vendor protects the means
of obtaining and using the information (ie the protocols for updating
and the 'program' itself) this does not necessarily mean that the data
itself is owned or protected legally by the vendor. They may have
obfuscated its storage format and provided a controlled export process
to limit your ability to access the data, but in most cases, a
licensed user would not be prohibited from using the data via an
external routine, should you be able to devise one for them.

That said, everything is, of course, dependent upon the actual license
involved here. But in numerous cases where the license wasn't
prohibitive in this regard, and the data storage architecture was
obfuscated and the programs for accessing it limited, I have written
external routines for directly reading data from such systems for
clients who needed faster and more flexible access to their data. I'd
encourage you to take a closer look at both the license involved and
the proprietary (but almost inevitably non-custom) data storage
formats and decide for yourself whether there isn't a better way,
without taking your client's word for it. You may well find a lot
better solution than the one you are putting together now.

HTH

Peter Miller
____________________________________________________________
PK Solutions -- Data Recovery for Microsoft Access/Jet/SQL
Free quotes, Guaranteed lowest prices and best results
www.pksolutions.com 1.866.FILE.FIX 1.760.476.9051
Nov 12 '05 #17
On Fri, 30 Jan 2004 19:39:25 GMT, "David W. Fenton"
<dX********@bway.net.invalid> wrote:
Well, I'm rather shocked at the idea that there'd be 40 separate
values stored in a single field. I know you said it was a
spreadsheet, but what kind of morons would write a spreadsheet that
complex when they obviously need a database?


"Obvious" is in the eye of the beholder.

One of my first system conversions involved building a SQL database
out of 30 megs of WordPerfect (not a typo) tables. Only forty values
in a cell would have been a real treat.

--
Mike Sherrill
Information Management Systems
Nov 12 '05 #18
On Sat, 31 Jan 2004 04:00:27 GMT, Steve Jorgensen
<no****@nospam.nospam> wrote:
Did you consider using code to generate SQL statements, then executing
them? (Possibly within explicit transactions?)


Yes, that's precisely what I am doing.


I got the impression your SQL statements were executing VBA functions.
I was talking (unclearly) of building SQL statements that contain only
literal values, no function calls.
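Something along these lines, that is: the parsing happens in VBA, the INSERTs
Jet sees contain nothing but literals, and the whole batch runs inside one
explicit transaction. Names are invented, and error handling with Rollback is
omitted for brevity:

Dim wrk As DAO.Workspace
Dim db As DAO.Database
Dim rs As DAO.Recordset
Dim varItems As Variant
Dim i As Long

Set wrk = DBEngine.Workspaces(0)
Set db = CurrentDb()
Set rs = db.OpenRecordset( _
    "SELECT RowID, MultiField FROM tblImport " & _
    "WHERE MultiField Is Not Null;", dbOpenSnapshot)

wrk.BeginTrans
Do Until rs.EOF
    varItems = Split(rs!MultiField, "/")
    For i = LBound(varItems) To UBound(varItems)
        If Len(varItems(i)) > 0 Then
            ' Literal-value INSERT, with no function calls for Jet to evaluate
            db.Execute _
                "INSERT INTO tblPair (SourceID, ItemText) VALUES (" & _
                rs!RowID & ", '" & Replace(varItems(i), "'", "''") & "');", _
                dbFailOnError
        End If
    Next i
    rs.MoveNext
Loop
wrk.CommitTrans

rs.Close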

--
Mike Sherrill
Information Management Systems
Nov 12 '05 #19
On Sat, 31 Jan 2004 17:12:35 -0500, Mike Sherrill
<MS*************@compuserve.com> wrote:
On Sat, 31 Jan 2004 04:00:27 GMT, Steve Jorgensen
<no****@nospam.nospam> wrote:
Did you consider using code to generate SQL statements, then executing
them? (Possibly within explicit transactions?)


Yes, that's precisely what I am doing.


I got the impression your SQL statements were executing VBA functions.
I was talking (unclearly) of building SQL statements that contain only
literal values, no function calls.


Oh, right. I see what you're getting at.

The issue was that the only way I could figure out to use a query to split up
the multi-valued fields was to call a UDF. That's because the expressions that
could be built to do that using native functions would (from what I can see)
become longer and more deeply nested for each successive argument number, and
would quickly exceed the limits of what JET could reasonably be expected to
parse (long before argument 40). If this were SQL Server, I think I could use
PATINDEX to find, say, the text between the 12th and 13th delimiter characters.
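For reference, the UDF in question is roughly this shape (a reconstruction, not
the original code). It returns the Nth item of a delimited string, or Null when
there are fewer than N items, so the WHERE clause can filter out the rows that
have run out of arguments:

Public Function GetDelimitedItem(ByVal varValue As Variant, _
                                 ByVal lngIndex As Long, _
                                 Optional ByVal strDelim As String = "/") As Variant
    ' Returns item number lngIndex (1-based) from a delimited string,
    ' or Null if the value is Null or has fewer than lngIndex items.
    Dim varItems As Variant

    GetDelimitedItem = Null
    If IsNull(varValue) Then Exit Function

    varItems = Split(CStr(varValue), strDelim)
    If lngIndex >= 1 And lngIndex <= UBound(varItems) + 1 Then
        If Len(varItems(lngIndex - 1)) > 0 Then
            GetDelimitedItem = varItems(lngIndex - 1)
        End If
    End If
End Function

Quick when called from VBA, but when Jet calls it, every row of every pass has
to be marshalled out to the expression service and back, which is presumably
where the 20-minute pass went.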
Nov 12 '05 #20
