DB2 Unload (to record sequential file) on NT possible?

I'm trying to unload DB2 data from a table into a record sequential
file on NT. I can do an unload on the mainframe, but it doesn't seem
to be an option on NT.

On NT, all I can do is export/import. I can do a Load too, but not an
Unload. I just want the data from the table, and nothing else, in a
record sequential file. Export only seems to give the option of a
delimited line sequential file or a record sequential file where the
data is preceded by table setup information.

So is there a way to unload data on NT? Thanks.

Feb 1 '07 #1
You could try exporting that table as delimited data, with no character
delimiter and an unusual column delimiter (EXPORT TO file OF DEL
MODIFIED BY NOCHARDEL COLDEL~ SELECT ...), and then find and replace
all of the column delimiters with nothing. Although I don't see why a
delimited file couldn't work just as well as a record layout. It's the
same data; you just need to treat it differently to import it back
again. With a little program/script this should be easy.
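Something along these lines would do the find-and-replace step. This is
just a rough Python sketch; the file names are made up, and it assumes ~
as the column delimiter to match the export command above:

# strip_delims.py - remove the column delimiter from a DB2 DEL export file.
# Assumes the export was run with MODIFIED BY NOCHARDEL COLDEL~; the file
# names here are hypothetical.
IN_FILE = "export.del"
OUT_FILE = "export.dat"

with open(IN_FILE, "r", encoding="latin-1") as src, \
     open(OUT_FILE, "w", encoding="latin-1") as dst:
    for line in src:
        # Drop every ~ column delimiter and keep the rest of the record as-is.
        dst.write(line.replace("~", ""))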

Feb 2 '07 #2
What is your end goal? Do you want to import the data into another DB2
instance running on NT? Try WinSQL
(http://synametrics.com/winsql). This program allows moving data from
one source to another.

Feb 2 '07 #3
No, I'm not trying to import the data into another DB2 instance.
Instead, we have a COBOL program that reads exported data and creates
a VSAM backup file from the data. Normally, it would read data
directly from the DB2 tables to create the backup.

However, I've created something very similar to what andyhe suggested,
where I export the data to a delimited file and have a COBOL program
parse out the delimiters. This is about twice as fast as the method of
reading directly from the tables.

But rather than continue using delimited export data, a record
sequential export file would be ideal. The UNLOAD command works on
the mainframe, but apparently not on NT. The NT option of an IXF format
export does seem to be record sequential, but there is a header
containing column information that I don't want. I can't think of an
easy way to skip over or remove that column data either. So I was just
looking for a way to get a pure record sequential file with only data.

Thanks for the replies so far.

Feb 5 '07 #4

Are you running the COBOL program on NT to create a VSAM sequential
file, and then loading it to the mainframe?

Reading data directly from the database and creating the sequential
file should be faster than exporting it and then reading the export file.

Can you post the SQL statement you are using in the COBOL program?

--
Hemant Shah
Feb 6 '07 #5
I'm not porting anything to a different system. The NT and mainframe
systems are separate. I'm using the mainframe unload as an example of
how the unload option does what I want it to do on the mainframe. It
gives me a record sequential file that can be read directly into a
copybook.

On NT, unload isn't an option. Instead, I have to use a delimited
export file (a line sequential file).

On MVS, I have:

UNLD DATA FORMAT ( DSNTIAUL )
FROM TABLE DB2T.OAAAA
WHERE (OAA_FILESET_CD = '%$JFSET%')
ORDER BY CLUSTER

On NT I have:

DB2 EXPORT TO %$DB2FILE% OF DEL MODIFIED BY COLDEL~ DECPT` NOCHARDEL
MESSAGES %$DB2LOG% SELECT * FROM DB2ADMIN.AAAA WHERE (OAP_FILESET_CD =
'%$JFSET%') ORDER BY 1, 2, 3

The COBOL programs read these files to create a custom/proprietary
backup file. This has proven to be about twice as fast as reading
directly from the DB2 tables. It may be because many cursors have to
be opened for each table; I don't know. But the problem with the NT
export is that it's not as reliable as the mainframe unload, since it
requires that there are no delimiters in the actual column data, and
that our compiler has COBSW=(-N) set so that Low-Values/Null-Values
are handled correctly.

Feb 6 '07 #6
How about:

db2 -x "SELECT ... FROM ... WHERE ..." > %DB2FILE%

That would generate something comparable to a FB data set.
The COBOL programs read these files to create a custom/proprietary
backup file. This has proven to be about twice as fast as reading
directly from the DB2 tables. It may be because many cursors have to
be opened for each table; I don't know. But the problem with the NT
export is that it's not as reliable as the mainframe unload, since it
requires that there are no delimiters in the actual column data, and
that our compiler has COBSW=(-N) set so that Low-Values/Null-Values
are handled correctly.
I don't get this. Why don't you just escape such delimiters or remove them?

Is this a problem because you are using fixed-length records on MVS and
variable-length records (with delimiters) on UDB?

--
Knut Stolze
DB2 z/OS Utilities Development
IBM Germany
Feb 6 '07 #7
What's an "FB" data set?

How would I get rid of delimiters? If I don't specify the delimiters,
it uses the defaults, such as double quotes as the character-string
delimiter and a period as the decimal point.

Anyway, I have copybooks that can read in a record-sequential data
file and immediately have the correct values in the fields. This is
what happens on MVS when reading in the unloaded DB2 tables.

On NT, an unload to a record sequential file doesn't seem to be
possible because unload is not supported on NT. However, I did see
that the IXF export type creates a record-sequential file. But the
problem with that is it also includes column information in a header,
since this file is meant to be imported into a database, creating the
tables if they don't already exist. Your suggestion with the SELECT
statement did get rid of the headers, I see. But unfortunately, that
doesn't eliminate the headers in the IXF export.

Feb 6 '07 #8

O.K. Now I understand what you are trying to do. But I still do not
understand why reading directly from the database is slower than
reading the export file. Basically, they both should be using the same
SQL statement:

SELECT COL1, COL2, ....
FROM TABLENAME
WHERE WHERE_CLAUSE
ORDER BY ORDER_BY_CLAUSE
FOR FETCH ONLY

Do you perform RUNSTATS/REBIND before you run the COBOL program?

We use COBOL, C, and Perl programs to read directly from the database,
and my experience is that it is faster than export/read and requires
less disk space.
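Just to illustrate the direct-read idea outside of COBOL, here is a rough
Python sketch (the DSN, credentials, column names, and field widths are
all made up; a real program would match your copybook layout and use your
actual table and predicate):

# direct_read.py - fetch rows read-only and write fixed-width records.
import pyodbc

SQL = ("SELECT COL1, COL2, COL3 FROM DB2ADMIN.AAAA "
       "WHERE OAP_FILESET_CD = ? ORDER BY 1, 2, 3 FOR FETCH ONLY")

conn = pyodbc.connect("DSN=SAMPLE;UID=db2admin;PWD=secret")  # hypothetical DSN
cur = conn.cursor()
cur.execute(SQL, "FS01")  # hypothetical fileset code

with open("backup.dat", "w", encoding="latin-1") as out:
    for row in cur:
        # Pad/truncate each column to a fixed width so every record has the
        # same layout a copybook could map onto (widths here are invented).
        fields = [str(v) if v is not None else "" for v in row]
        out.write(f"{fields[0]:<10.10}{fields[1]:<20.20}{fields[2]:>9.9}\n")

conn.close()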

--
Hemant Shah
Feb 6 '07 #9
I wonder if he's not using FOR FETCH ONLY in the COBOL program.
I mention this only because I am doing some 'performance' testing and
ran into exactly this.

Just a thought....

Frank

---
Frank Swarbrick
Senior Developer/Analyst - Mainframe Applications
FirstBank Data Corporation - Lakewood, CO USA

Feb 7 '07 #10
Timofmars wrote:
What's an "FB" data set?
FB = fixed-length records and blocked. That tells z/OS (MVS) how the data
is stored on the volumes. Data sets on MVS are comparable to files.
How would I get rid of delimiters? If I don't specify the delimiters,
it uses the defaults, such as double quotes as the character-string
delimiter and a period as the decimal point.
You could use a "sed" or a Perl script (or whatever you have available on
your platform) to massage the output of EXPORT in any way you like.
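For example, a rough Python sketch of that kind of massaging, turning the
~-delimited export into fixed-width records, might look like this (the
file names, the ~ delimiter, and the field widths are just assumptions;
you would match them to your copybook):

# del_to_fixed.py - pad each ~-delimited column to a fixed width so the
# output resembles a record sequential file (the widths below are made up).
FIELD_WIDTHS = [10, 20, 8]  # hypothetical widths, one per exported column

with open("export.del", "r", encoding="latin-1") as src, \
     open("fixed.dat", "w", encoding="latin-1") as dst:
    for line in src:
        cols = line.rstrip("\r\n").split("~")
        # Left-justify and truncate each column; numeric columns would
        # normally be right-justified or zero-filled instead.
        record = "".join(c.ljust(w)[:w] for c, w in zip(cols, FIELD_WIDTHS))
        dst.write(record + "\n")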
Anyway, I have copybooks that can read in a record-sequential data
file and immediately have the correct values in the fields. This is
what happens on MVS when reading in the unloaded DB2 tables.
You haven't answered the question regarding fixed/variable length records
yet...
On NT, an unload to a record sequential file doesn't seem to be
possible because unload is not supported on NT.
Well, it is. It's just named EXPORT there.
However, I did see that the IXF export type creates a
record-sequential file.
All files generated by EXPORT are "record-sequential". The difference
is just the internal encoding in the files. So what exactly do you
need there? Plain text in ASCII? (IXF is binary, not text.)
But the problem
with that is it also includes column information in a header, since
this file is meant to be imported into a database, creating the tables
if they don't already exist. Your suggestion with the SELECT statement
did get rid of the headers, I see. But unfortunately, that doesn't
eliminate the headers in the IXF export.
What I posted above only redirects the output of a query into a file.
The -x removes everything that is not data. Thus, it has nothing to do
with EXPORT or UNLOAD.

--
Knut Stolze
DB2 z/OS Utilities Development
IBM Germany
Feb 7 '07 #11
Hemant Shah wrote:
O.K. Now I understand what you are trying to do. But I still do not
understand why reading directly from the database is slower than
reading the export file. Basically, they both should be using the same
SQL statement:

SELECT COL1, COL2, ....
FROM TABLENAME
WHERE WHERE_CLAUSE
ORDER BY ORDER_BY_CLAUSE
FOR FETCH ONLY
I agree here. Exporting and then reading from the exported file will
always be slower than reading directly from the DB. The question is
whether the application is properly coded to do that.

--
Knut Stolze
DB2 z/OS Utilities Development
IBM Germany
Feb 7 '07 #12
