
GC with lots of small ones

Hi,

I wonder if anybody can comment on whether what I see is normal in FW 1.1 and
how to avoid it.

I have a .NET assembly which creates literally thousands of temporary strings
and other objects while running. Usually it is something like:
{
    string s = <some value>;
    // some local processing here
    ...
}
so the expectation is that the GC will collect it some time afterwards as an
unused reference. However, it looks like in many cases where strings are
returned to the calling method, the GC has trouble finding such references
and cleaning them up, especially when objects are created on one thread and
processed on another.

Same with arrays, hashtables etc.

I've seen some of these temporary objects survive through tens of GC cycles.
However, when the number of such temporary objects is low - less than 20,000
or so - the GC seems able to do the job.

Because of this, the assembly starts choking on memory within 2-4 hours of
heavy-duty use; virtual memory easily grows by a factor of 10 or more. My
intuition is that the GC times out before finding the majority of freed
references - maybe because it relocates a lot of data during the first phase?

Are there any "real" recommendations on which techniques should be used to
make an app more GC-friendly? For example: don't create more than 1000
objects per minute, or always set temp strings to null after use, or
something like that?

Thanks
Alex
Jul 21 '05 #1

"AlexS" <sa***********@ SPAMsympaticoPL EASE.ca> wrote in message
news:O$******** ******@tk2msftn gp13.phx.gbl...
> Hi,
>
> I wonder if anybody can comment on whether what I see is normal in FW 1.1
> and how to avoid it.
>
> I have a .NET assembly which creates literally thousands of temporary
> strings and other objects while running. Usually it is something like:
> {
>     string s = <some value>;
>     // some local processing here
>     ...
> }
> so the expectation is that the GC will collect it some time afterwards as
> an unused reference. However, it looks like in many cases where strings
> are returned to the calling method, the GC has trouble finding such
> references and cleaning them up, especially when objects are created on
> one thread and processed on another.
>
> Same with arrays, hashtables etc.
>
> I've seen some of these temporary objects survive through tens of GC
> cycles. However, when the number of such temporary objects is low - less
> than 20,000 or so - the GC seems able to do the job.

It sounds like a lot of objects are being promoted. Are you just working with
strings, collections, etc., or are you using other, more complicated objects?

> Because of this, the assembly starts choking on memory within 2-4 hours
> of heavy-duty use; virtual memory easily grows by a factor of 10 or more.
> My intuition is that the GC times out before finding the majority of
> freed references - maybe because it relocates a lot of data during the
> first phase?

The GC usually only does a generation 0 sweep, doing gen 1 and 2 sweeps less
often. If your objects are living just long enough to make it to gen 1, then
you may end up with longer cleanup times. Are you using any objects with
finalizers? An object with a finalizer causes its entire object graph to be
promoted, making it effectively an automatic gen 1. If you are using objects
with finalizers, make sure you are disposing them (or, if you wrote them,
provide an IDisposable implementation that calls GC.SuppressFinalize).
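
For reference, a minimal sketch of that dispose pattern (the class name and
the handle field are purely illustrative):

using System;

// Minimal dispose-pattern sketch: Dispose() cleans up deterministically and
// calls GC.SuppressFinalize so the finalizer (and the extra promotion it
// causes) never has to run.
public class ResourceHolder : IDisposable
{
    private IntPtr handle;      // stand-in for some unmanaged resource
    private bool disposed;

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);   // no finalization needed any more
    }

    protected virtual void Dispose(bool disposing)
    {
        if (disposed) return;
        if (disposing)
        {
            // release other managed IDisposable members here
        }
        // release the unmanaged handle here
        disposed = true;
    }

    ~ResourceHolder()
    {
        Dispose(false);              // finalizer is only a safety net
    }
}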

> Are there any "real" recommendations on which techniques should be used
> to make an app more GC-friendly? For example: don't create more than
> 1000 objects per minute, or always set temp strings to null after use,
> or something like that?

It'd be hard to be sure about the object creation rate, and I doubt that's
the case. I've seen benchmarks of millions of allocations and deallocations
in a minute's time, so I don't think object-count restrictions are going to
help. Nor will setting temp variables to null, unless you are in a particular
circumstance.

In situations like this (ignoring string interning):

{
    string s = "a very large string indeed";
    DoSomething(s);
    DoSomethingTimeConsumingButUnrelatedToS();
    s = "another huge string"; // the first instance of s can be collected here
    DoSomethingElse(s);
}

whereas
{
    string s = "a very large string indeed";
    DoSomething(s);
    s = null; // the first instance of s can be collected here
    DoSomethingTimeConsumingButUnrelatedToS();
    s = "another huge string";
    DoSomethingElse(s);
}

but I expect that situation to be rather rare, and unless the strings are
truly huge (several megs at the least) I wouldn't bother with it.
> Thanks
> Alex

Jul 21 '05 #2
Hi Alex,

Jay B. Harlow supplied me with this link; I find it very interesting:

http://msdn.microsoft.com/architectu...l/scalenet.asp

There is a lot written about how the Garbage Collector functions.

Cor
Jul 21 '05 #3
Daniel,

see my replies inline

Thanks
Alex

"Daniel O'Connell [C# MVP]" <onyxkirx@--NOSPAM--comcast.net> wrote in
message news:%2******** *******@TK2MSFT NGP11.phx.gbl.. .

"AlexS" <sa***********@ SPAMsympaticoPL EASE.ca> wrote in message
news:O$******** ******@tk2msftn gp13.phx.gbl...
Hi,

I wonder if anybody can comment if what I see is normal in FW 1.1 and how to
avoid this.

I have .Net assembly, which creates literally thousands of temporary
strings
and other objects when running. Usually it is something like
{
string s=some value;
some local processing here
...
}
so, expectation is that GC will collect it some time after as unused
reference. However, it looks like in lots of cases when strings are
returned
to calling method, GC has problems with finding such references and
cleaning
them up. Especially, when objects are created on one thread and processed in
another.

Same with arrays, hashtables etc.

I've seen that some such temporary objects survive through tens of GC
cycles. However, when number of such temporary objects is low - less than 20000 or so, GC seems to be able to do the job.
It sounds like alot of objects are being promoted. Are you just working

with strings, collections, etc or are you using other, more complicated objects?

Mostly it's strings and various collections - hashtables, arraylists, simple
arrays. I wouldn't say they are "more complicated". Is a hashtable of classes
which contain collections of strings more complicated? I am not sure. But
according to the CLR profiler the problem does seem to be with promotions.


> > Because of this, the assembly starts choking on memory within 2-4
> > hours of heavy-duty use; virtual memory easily grows by a factor of 10
> > or more. My intuition is that the GC times out before finding the
> > majority of freed references - maybe because it relocates a lot of
> > data during the first phase?
>
> The GC usually only does a generation 0 sweep, doing gen 1 and 2 sweeps
> less often. If your objects are living just long enough to make it to
> gen 1, then you may end up with longer cleanup times. Are you using any
> objects with finalizers? An object with a finalizer causes its entire
> object graph to be promoted, making it effectively an automatic gen 1.
> If you are using objects with finalizers, make sure you are disposing
> them (or, if you wrote them, provide an IDisposable implementation that
> calls GC.SuppressFinalize).

No finalizers. It was easy to find and fight leaks like SolidBrushes, where I
can use Dispose - not for strings.

> > Are there any "real" recommendations on which techniques should be
> > used to make an app more GC-friendly? For example: don't create more
> > than 1000 objects per minute, or always set temp strings to null after
> > use, or something like that?
>
> It'd be hard to be sure about the object creation rate, and I doubt
> that's the case. I've seen benchmarks of millions of allocations and
> deallocations in a minute's time, so I don't think object-count
> restrictions are going to help. Nor will setting temp variables to null,
> unless you are in a particular circumstance.
>
> In situations like this (ignoring string interning):
>
> {
>     string s = "a very large string indeed";
>     DoSomething(s);
>     DoSomethingTimeConsumingButUnrelatedToS();
>     s = "another huge string"; // the first instance of s can be collected here
>     DoSomethingElse(s);
> }
>
> whereas
>
> {
>     string s = "a very large string indeed";
>     DoSomething(s);
>     s = null; // the first instance of s can be collected here
>     DoSomethingTimeConsumingButUnrelatedToS();
>     s = "another huge string";
>     DoSomethingElse(s);
> }
>
> but I expect that situation to be rather rare, and unless the strings
> are truly huge (several megs at the least) I wouldn't bother with it.


Some of the strings were collected more efficiently when I used the second
variant. If s=null; is absent, the strings show up as floating around in the
heap - relocated and live. It doesn't always happen, but it happens a lot,
especially in loops and, it seems, in recursive calls. It's strange, though,
that you think it matters only for big strings - my heap is full of small
ones, 0.1-10K. I've also seen lots of chunks from String.Split.

I wonder if there is a real difference for the GC between

return <string expression>;

and

string str = <string expression>;
return str;

?

Thanks
Alex


Jul 21 '05 #4

> > It sounds like a lot of objects are being promoted. Are you just
> > working with strings, collections, etc., or are you using other, more
> > complicated objects?
>
> Mostly it's strings and various collections - hashtables, arraylists,
> simple arrays. I wouldn't say they are "more complicated". Is a hashtable
> of classes which contain collections of strings more complicated? I am
> not sure. But according to the CLR profiler the problem does seem to be
> with promotions.


Hrm, nothing that needs finalization or disposal, so I wouldn't consider
anything complicated here. By the sounds of it your objects are living long
enough to survive to generation 2, which could be a problem as the program
runs for a while.

How long does the processing take on these strings? And are there a lot of
duplicated strings?

> > <snip earlier reply and examples>
>
> Some of the strings were collected more efficiently when I used the
> second variant. If s=null; is absent, the strings show up as floating
> around in the heap - relocated and live. It doesn't always happen, but
> it happens a lot, especially in loops and, it seems, in recursive calls.
> It's strange, though, that you think it matters only for big strings -
> my heap is full of small ones, 0.1-10K. I've also seen lots of chunks
> from String.Split.

Without knowing the specifics, I do have some thoughts. Is your design such
that you create a string, do some work that allocates a good many objects
(enough to trigger a gen 1 collection), then create another string in the
same variable? If so, it's possible you are prolonging the life of your
strings into a higher generation, and nulling would fix that. If this
algorithm is highly recursive, it might actually be a serious source of
memory problems.

> I wonder if there is a real difference for the GC between
>
> return <string expression>;
>
> and
>
> string str = <string expression>;
> return str;


There shouldn't be. The JIT would probably generate similar or identical
code.
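
Just to make the comparison concrete, the two forms look like this
(BuildSomething is a stand-in for whatever expression produces the string):

class ReturnForms
{
    static string Direct()
    {
        return BuildSomething();          // return <string expression>;
    }

    static string ViaLocal()
    {
        string str = BuildSomething();    // string str = <string expression>;
        return str;                       // return str;
    }

    // Either way exactly one string reference is returned; the temporary
    // local should not change what the GC sees.
    static string BuildSomething()
    {
        return string.Concat("some ", "value");
    }
}
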
Jul 21 '05 #5
Daniel, thanks

Looks like you've confirmed some of my suspicions.

> How long does the processing take on these strings? And are there a lot
> of duplicated strings?

In terms of absolute time - less than 1 second. In terms of how many objects
could be created during this period - hundreds if not thousands. Also,
strings can be created in one thread and passed to another before becoming
obsolete.

> > I've also seen lots of chunks from String.Split.
>
> Without knowing the specifics, I do have some thoughts. Is your design
> such that you create a string, do some work that allocates a good many
> objects (enough to trigger a gen 1 collection), then create another
> string in the same variable? If so, it's possible you are prolonging the
> life of your strings into a higher generation, and nulling would fix
> that. If this algorithm is highly recursive, it might actually be a
> serious source of memory problems.

Lots of the code has such behavior. Because I have thousands of objects and
calls, the CLR profiler literally chokes. A standard profile log is 50-100 MB,
which usually kills it. Exceptions, hanging, not enough memory - I've seen it
all :-(

> > I wonder if there is a real difference for the GC between
> >
> > return <string expression>;
> >
> > and
> >
> > string str = <string expression>;
> > return str;
>
> There shouldn't be. The JIT would probably generate similar or identical
> code.


Some small consolation :-)

Thanks, Daniel.

I am now trying to think of some way to clean up this mess.

Jul 21 '05 #6
> > > I've also seen lots of chunks from String.Split.
> >
> > Without knowing the specifics, I do have some thoughts. Is your design
> > such that you create a string, do some work that allocates a good many
> > objects (enough to trigger a gen 1 collection), then create another
> > string in the same variable? If so, it's possible you are prolonging
> > the life of your strings into a higher generation, and nulling would
> > fix that. If this algorithm is highly recursive, it might actually be
> > a serious source of memory problems.
>
> Lots of the code has such behavior. Because I have thousands of objects
> and calls, the CLR profiler literally chokes. A standard profile log is
> 50-100 MB, which usually kills it. Exceptions, hanging, not enough
> memory - I've seen it all :-(


Hrmm, this isn't good.

> I am now trying to think of some way to clean up this mess.


From what I understand, I'm afraid the best course may be to redesign your
app. I think the problem is inherent to the design. You either need to
serialize processing so objects disappear quickly, or change the object
allocation code so that allocations occur just before the calculations and
disappear right after. Multithreading may be a big part of this.

Are a lot of your strings identical?
Jul 21 '05 #7

"Daniel O'Connell [C# MVP]" <onyxkirx@--NOSPAM--comcast.net> wrote in
message news:%2******** ********@tk2msf tngp13.phx.gbl. ..
> > <snip>
>
> Hrmm, this isn't good.
>
> > I am now trying to think of some way to clean up this mess.
>
> From what I understand, I'm afraid the best course may be to redesign
> your app. I think the problem is inherent to the design. You either need
> to serialize processing so objects disappear quickly, or change the
> object allocation code so that allocations occur just before the
> calculations and disappear right after. Multithreading may be a big part
> of this.
>
> Are a lot of your strings identical?


If you add the same string to 2 different collections - are the collection
items identical? I think not.

Could you expand a bit on serializing processing to make objects disappear
quickly? I am not sure I see how it could be done when collections are
filled by recursion or strings are passed between threads.

Unfortunately redesign is out of the question - thousands of lines of code,
developed by several people.

So, to sum up:
- if string creation and the processing before releasing the reference take
longer than a GC0 and GC1 cycle, the strings could be lost in the heap
- if a big string is replaced by another big string, like str=<big string>;
<process>; str=<another big string>, it is better to use str=null or
str=String.Empty before the next assignment
- if objects could exist for a long time, they should be nulled explicitly
- if an object implements IDisposable, it must be disposed explicitly before
the next assignment
- when objects are passed between threads or async methods - null them
explicitly
- when objects are passed between recursive calls - null them explicitly
- try to avoid String.Concat as much as possible

It doesn't look very convincing - what do you think? I've never seen most of
this in simple applications, where objects are not highly volatile. And I see
all of it now when doing real processing on real files - parsing, editing. I
mean the negative impact on the heap.

Did I miss anything?

Thanks
Alex

Jul 21 '05 #8

> > From what I understand, I'm afraid the best course may be to redesign
> > your app. I think the problem is inherent to the design. You either
> > need to serialize processing so objects disappear quickly, or change
> > the object allocation code so that allocations occur just before the
> > calculations and disappear right after. Multithreading may be a big
> > part of this.
> >
> > Are a lot of your strings identical?
>
> If you add the same string to 2 different collections - are the
> collection items identical? I think not.


Generally not, but if you tend to have large numbers of strings that are
identical, you may save memory by interning (or you may not - I forget what
happens when you intern a string that will never be referenced again).
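
If the duplicates really are identical, a rough sketch of interning (the data
and the table here are purely illustrative):

using System;
using System.Collections;

class InternDemo
{
    static void Main()
    {
        Hashtable counts = new Hashtable();
        string[] lines = { "GET /index", "GET /index", "POST /save" };

        foreach (string line in lines)
        {
            // Split allocates fresh string instances every time, even when
            // the text is identical; interning collapses duplicates so only
            // one copy of each distinct value stays on the heap.
            string verb = String.Intern(line.Split(' ')[0]);
            counts[verb] = (counts[verb] == null) ? 1 : (int)counts[verb] + 1;
        }

        // Caveat: interned strings live for the lifetime of the AppDomain,
        // so only intern values that genuinely recur.
        Console.WriteLine(counts.Count);   // 2 distinct verbs
    }
}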

> Could you expand a bit on serializing processing to make objects
> disappear quickly? I am not sure I see how it could be done when
> collections are filled by recursion or strings are passed between
> threads.

It really isn't - it was a suggestion for a potential redesign.

> Unfortunately redesign is out of the question - thousands of lines of
> code, developed by several people.

That's unfortunate... I hope you can figure out a way to reduce memory usage.
I would rarely recommend this, but perhaps you should insert a GC.Collect(2)
call on a timer that fires every half hour and see if it clears gen 2 for
you. It is a hack, but if the problem is over-promotion, it just might work.
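
A rough sketch of that hack, just so it's concrete (the class name and the
half-hour interval are arbitrary):

using System;
using System.Threading;

class GcNudge
{
    // Keep a reference to the timer so it isn't collected itself.
    private static Timer gcTimer;

    public static void Start()
    {
        // Force a full (gen 2) collection every half hour. This is a
        // workaround, not a fix: it only helps if over-promotion really is
        // the problem, and it pauses the app while it runs.
        gcTimer = new Timer(new TimerCallback(ForceFullCollect), null,
                            TimeSpan.FromMinutes(30), TimeSpan.FromMinutes(30));
    }

    private static void ForceFullCollect(object state)
    {
        GC.Collect(2);                   // collect generations 0 through 2
        GC.WaitForPendingFinalizers();
    }
}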

> So, to sum up:
> - if string creation and the processing before releasing the reference
> take longer than a GC0 and GC1 cycle, the strings could be lost in the
> heap
> - if a big string is replaced by another big string, like str=<big
> string>; <process>; str=<another big string>, it is better to use
> str=null or str=String.Empty before the next assignment
> - if objects could exist for a long time, they should be nulled
> explicitly

No - if *variables* could exist for a long time, they should be nulled. It's
not possible to null an object ;).

> - if an object implements IDisposable, it must be disposed explicitly
> before the next assignment

Yes.

> - when objects are passed between threads or async methods - null them
> explicitly
> - when objects are passed between recursive calls - null them explicitly

Neither of these is true. Nulling the variable won't change anything there.

> - try to avoid String.Concat as much as possible

Maybe, maybe not. String.Concat can be efficient if you are dealing with 2 or
3 strings, but if you are doing more than that, definitely go with
StringBuilder.
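
For the many-pieces case, something along these lines (the method and the
initial capacity are just illustrative):

using System.Text;

class ConcatDemo
{
    static string JoinPieces(string[] pieces)
    {
        // A loop of s += piece allocates a brand-new string on every
        // iteration; StringBuilder appends into one growable buffer instead.
        StringBuilder sb = new StringBuilder(pieces.Length * 16);
        for (int i = 0; i < pieces.Length; i++)
        {
            sb.Append(pieces[i]);
        }
        return sb.ToString();
    }
}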

> It doesn't look very convincing - what do you think? I've never seen
> most of this in simple applications, where objects are not highly
> volatile. And I see all of it now when doing real processing on real
> files - parsing, editing. I mean the negative impact on the heap.
>
> Did I miss anything?
>
> Thanks
> Alex


Jul 21 '05 #9
Hi Alex

> So, to sum up:
> - if string creation and the processing before releasing the reference
> take longer than a GC0 and GC1 cycle, the strings could be lost in the
> heap

True. If your object survives a generation 0 collection and then dies, you've
got a "mid-life crisis", and that object (in your case a string) will be in
memory much longer than you need it.

> - if a big string is replaced by another big string, like str=<big
> string>; <process>; str=<another big string>, it is better to use
> str=null or str=String.Empty before the next assignment

By changing the reference from the first string, you've abandoned it in
memory and the GC will take care of it. So nulling won't help, but it won't
hurt either.

> - if objects could exist for a long time, they should be nulled
> explicitly

Only if they are member variables and the container object is still alive.

> - if an object implements IDisposable, it must be disposed explicitly
> before the next assignment

That's generally good practice. Consider using the C# using statement where
appropriate.
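
For example, with the SolidBrush case mentioned earlier in the thread (a
minimal sketch):

using System.Drawing;

class BrushExample
{
    static void Paint(Graphics g)
    {
        // The using statement calls Dispose even if an exception is thrown,
        // so the brush never has to wait for a finalizer.
        using (SolidBrush brush = new SolidBrush(Color.Black))
        {
            g.FillRectangle(brush, 0, 0, 10, 10);
        }
    }
}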

> - when objects are passed between threads or async methods - null them
> explicitly
> - when objects are passed between recursive calls - null them explicitly

Nulling them won't help them get collected any faster unless they are
members.

> - try to avoid String.Concat as much as possible

Um, maybe. StringBuilder is your friend if you are creating many large
strings.

If you haven't already, check out:
Rico Mariani's blog (http://weblogs.asp.net/ricom/)
Brad Abrams' blog (http://weblogs.asp.net/brada/archive...24/140645.aspx)
Improving .NET Application Performance and Scalability, Chapter 5
(http://msdn.microsoft.com/library/de...etchapt05.asp)
Hope that helps
-Chris


Jul 21 '05 #10

