I see several source codes, and I found that they almost always use the "new"
and "delete" keywords to create their objects.
Why should I do that, when as far as I know the object is going to be destroyed
by itself at the end of the app?
For example:

class test
{
public:
    int x;
};

int main(int argc, char **argv)
{
    test *n = new test;
    ...
    delete n;
    return 0;
}

I know that an object created this way lives on the heap, which has much more
memory than the stack, but why do they always define objects that way? Why not
just say "test n", so the object is destroyed by itself at the end of its
scope, instead of using "new" and maybe forgetting to "delete" at the end?
Medvedev wrote:
[original post snipped]
Several reasons.
1. They're coming from Java and they don't know any better
2. They're storing polymorphic objects inside containers
3. They need the lifetime of the object to exceed the scope in which
it was declared.
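The second and third reasons can be sketched together; a minimal illustration (Shape, Triangle, Square, and the helper functions are invented names, not from the thread):

```cpp
#include <cstddef>
#include <vector>

// Polymorphic objects held through base-class pointers must be new'ed:
// a std::vector<Shape> wouldn't even compile (Shape is abstract), and
// with a concrete base it would slice the objects down to Shape.
struct Shape {
    virtual ~Shape() {}
    virtual int sides() const = 0;
};
struct Triangle : Shape { int sides() const { return 3; } };
struct Square   : Shape { int sides() const { return 4; } };

std::vector<Shape*> make_shapes() {
    std::vector<Shape*> shapes;
    shapes.push_back(new Triangle);
    shapes.push_back(new Square);
    return shapes;                // the objects outlive this scope
}

void destroy_shapes(std::vector<Shape*>& shapes) {
    for (std::size_t i = 0; i < shapes.size(); ++i)
        delete shapes[i];         // and the programmer must delete them
    shapes.clear();
}
```
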
On Jul 5, 11:59 am, red floyd <no.spam.h...@example.com> wrote:
Medvedev wrote:
[original post snipped]
Several reasons.
1. They're coming from Java and they don't know any better
2. They're storing polymorphic objects inside containers
3. They need the lifetime of the object to exceed the scope in which
it was declared.
how u can use object after it's scope ends!!
class CFoo
{
};
CFoo* GetFoo()
{
return new CFoo;
}
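A sketch of the caller's side (the value member is invented here for illustration): the object GetFoo creates remains usable after GetFoo returns, and the caller inherits the duty to delete it.

```cpp
struct CFoo {
    int value;
    CFoo() : value(42) {}
};

CFoo* GetFoo() {
    CFoo* p = new CFoo;   // created in this scope...
    return p;             // ...but its lifetime continues after we return
}
```
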
Still, I agree with you: people overuse new and delete. I recently saw a
class library written in C++ that tried to be all C#'ish and required you to
write code like:
classa->InsertItem(new classB(param1));
classa->InsertItem(new classC(param2));
*STUPID*... if your code is too stupid to decide what kind of object should
be created (and keep in mind, I could see doing this for user-defined
objects, but these were all internal objects), then why would you expect a
user of your class to?
Plus, that hurts performance.
"Medvedev" <3D********@gmail.comwrote in message
news:95**********************************@k30g2000 hse.googlegroups.com...
On Jul 5, 11:59 am, red floyd <no.spam.h...@example.comwrote:
>Medvedev wrote:
i see serveral source codes , and i found they almost only use "new"
and "delete" keywords to make they object.
Why should i do that , and as i know the object is going to be destroy
by itself at the end of the app
for example:
class test
{
public:
int x;
}
int main(int argc, char **argv)
{
test *n= new test;
.
.
...
delete n;
return 0;
}
i know that the object created this way is in the heap which have much
memory than stack but why they always define objects that way , why
not just say "test n" and the object will be destroyed by itself at
the end of the program! , instead of using "new" and maybe u will
forget to "delete" at the end
Several reasons.
1. They're coming from Java and they don't know any better 2. They're storing polymorphic objects inside containers 3. They need the lifetime of the object to exceed the scope in which it was declared.
how u can use object after it's scope ends!!
Medvedev wrote:
On Jul 5, 11:59 am, red floyd <no.spam.h...@example.comwrote:
>3. They need the lifetime of the object to exceed the scope in which it was declared.
how u can use object after it's scope ends!!
I don't know about "u", but the rest of us don't.
A pointer to a dynamically allocated object can be returned from the
function that created it, or declared in an outer scope and assigned in
an inner one.
--
Ian Collins.
On Jul 5, 1:13 pm, "Somebody" <someb...@cox.net> wrote:
Still, I agree with you, people over use new and delete. I recently saw a
class library written in C++ that tried to be all C#'ish and required you to
write code like:
classa->InsertItem(new classB(param1));
classa->InsertItem(new classC(param2));
*STUPID*...
It's not obvious why you say that. Is it because the dynamic objects
are not handed to smart pointers right away? Or do you propose
something like the following?
classa->InsertItem(&classB(param1));
classa->InsertItem(&classC(param2));
You realize that the temporaries would be terminated too soon to be
useful?
Plus, that hurts performance.
Compared to what? If it's needed, then there is no alternative.
Ali
Medvedev wrote:
i see serveral source codes , and i found they almost only use "new"
and "delete" keywords to make they object.
If you write an OO program, you will find yourself needing a base-class pointer
that points to a derived class. (That is the very goal of "OO": to override
some critical method in that derived class.)
Otherwise, you could construct an object on the stack, use it, and let it
destroy itself when its scope ends.
Because you need dynamically sized and typed objects, you must sometimes new them.
Why should i do that , and as i know the object is going to be destroy
by itself at the end of the app
You should code as if you don't know that. Always clean up after yourself. One
good way is with "smart pointers".
for example: [example snipped]
i know that the object created this way is in the heap which have much
memory
Not necessarily. On modern architectures with virtual memory, both the heap and
stack can grow arbitrarily.
than stack but why they always define objects that way , why
not just say "test n" and the object will be destroyed by itself at
the end of the program! , instead of using "new" and maybe u will
forget to "delete" at the end
Because that's one of the many things C++ will let you do that are sloppy. Don't
do any of them, because any one of them could come back to bite you on the butt.
For example, you could refactor the n = new test and move it inside a working
loop. Then the loop would silently leak (virtual!) memory. You would not notice
until your program ran for hours, and got very slow.
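Phlip's hazard can be sketched like this (test is the class from the original post, given a counter here purely for illustration): move the new into a loop, forget the delete, and each iteration silently leaks one object.

```cpp
struct test {
    static int live;            // illustration only: counts live objects
    int x;
    test()  { ++live; }
    ~test() { --live; }
};
int test::live = 0;

// The refactoring hazard: new moved inside a loop, delete forgotten.
int leaky_loop(int iterations) {
    for (int i = 0; i < iterations; ++i) {
        test* n = new test;
        n->x = i;               // ... do some work ...
    }                           // n goes out of scope; the object doesn't
    return test::live;          // how many objects are (still) alive
}
```
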
--
Phlip
On Jul 5, 4:05 pm, Medvedev <3D.v.Wo...@gmail.com> wrote:
On Jul 5, 11:59 am, red floyd <no.spam.h...@example.com> wrote:
[original post snipped]
Several reasons.
1. They're coming from Java and they don't know any better
2. They're storing polymorphic objects inside containers
3. They need the lifetime of the object to exceed the scope in which
it was declared.
how u can use object after it's scope ends!!
That's not what he stated. The object is _declared_ in a finite scope.
If you allocate the object on the heap, its lifetime no longer relies
on the declaring scope.
Basically, new and new[] transfer to you, the programmer, the
responsibility to delete and delete[].
Is using new and new[] a good habit? No, it's not.
You'll find C++ programmers reluctant to use them; they would
rather rely on a smart pointer if heap allocation is indeed required.
Java programmers don't really have a choice, but in C++ an automatic
variable should be, and usually is, the default.
Generally speaking, if you see new and new[] everywhere, you aren't
reading a programmer who has his roots in modern C++. Allocating on the
heap what should be automatic is frowned upon here.
And the reason for that is that smart pointers have much to offer,
as long as you know their limitations.
Good examples are std::auto_ptr and boost::shared_ptr, to name
a few.
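A sketch of what that buys you, using std::shared_ptr (the standardized C++11 descendant of boost::shared_ptr, so this post-dates the thread; Widget is an invented name): the object is still new'ed, but the delete happens automatically when the last owner goes away.

```cpp
#include <memory>
#include <vector>

struct Widget {
    static int live;              // illustration only: counts live objects
    Widget()  { ++live; }
    ~Widget() { --live; }
};
int Widget::live = 0;

// The container owns the widgets; no explicit delete anywhere.
int make_and_drop() {
    {
        std::vector<std::shared_ptr<Widget> > v;
        v.push_back(std::shared_ptr<Widget>(new Widget));
        v.push_back(std::shared_ptr<Widget>(new Widget));
        // two widgets alive here
    }   // vector destroyed: each shared_ptr deletes its Widget
    return Widget::live;
}
```
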
<ac******@gmail.com> wrote in message
news:c0**********************************@m44g2000hsc.googlegroups.com...
On Jul 5, 1:13 pm, "Somebody" <someb...@cox.net> wrote:
[quoted text snipped]
It's not obvious what you say that. Is it because the dynamic objects
are not handed to smart pointers right away? Or do you propose
something like the following?
classa->InsertItem(&classB(param1));
classa->InsertItem(&classC(param2));
You realize that the temporaries would be terminated too soon to be
useful?
>Plus, that hurts performance.
Compared to what? If it's needed, then there is no alternative.
Ali
Well, let me de-anonymize the code a bit :)...
They had a bunch of stuff like:
ctrl->InsertItem(new ButtonTypeA(param1, param2));
ctrl->InsertItem(new ButtonTypeB(param3, param4));
ctrl->InsertItem(new ButtonTypeC(param4, param5));
So they had a UI control, and they were inserting items into it. ButtonTypeX
was a class *the class library* defined... not something that a *user* of
the class library would (or could) define.
What I was saying was: I failed to see why I should do the dirty work for
the library and not only determine what class to use, but also allocate
it for them. They should determine whether to create the internal
ButtonTypeA, ButtonTypeB, ButtonTypeC, etc. in some other way (like a
param, for example)... so I'd rather see something like:
ctrl->InsertItem(param1, param2);
ctrl->InsertItem(param3, param4);
ctrl->InsertItem(param4, param5);
or
ctrl->InsertButtonTypeA(param1, param2);
ctrl->InsertButtonTypeB(param3, param4);
ctrl->InsertButtonTypeC(param4, param5);
or
ctrl->InsertItem(TYPE_A, param1, param2);
ctrl->InsertItem(TYPE_B, param3, param4);
ctrl->InsertItem(TYPE_C, param4, param5);
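The third alternative can be sketched as a simple internal factory (all names here are invented stand-ins for the anonymized library): the control does the new, keyed off a type tag, and also owns the delete.

```cpp
#include <cstddef>
#include <vector>

enum ButtonType { TYPE_A, TYPE_B, TYPE_C };

struct Button {
    virtual ~Button() {}
    virtual ButtonType type() const = 0;
};
struct ButtonA : Button { ButtonType type() const { return TYPE_A; } };
struct ButtonB : Button { ButtonType type() const { return TYPE_B; } };
struct ButtonC : Button { ButtonType type() const { return TYPE_C; } };

class Control {
public:
    // The factory hides the new: callers pass a tag, not an allocation.
    void InsertItem(ButtonType t) {
        switch (t) {
        case TYPE_A: items_.push_back(new ButtonA); break;
        case TYPE_B: items_.push_back(new ButtonB); break;
        case TYPE_C: items_.push_back(new ButtonC); break;
        }
    }
    std::size_t size() const { return items_.size(); }
    const Button* item(std::size_t i) const { return items_[i]; }
    ~Control() {
        for (std::size_t i = 0; i < items_.size(); ++i)
            delete items_[i];    // the control also owns the delete
    }
private:
    std::vector<Button*> items_;
};
```
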
Like other posters said, that style of code is C#'ish or Java'ish... it is
not typical C++ style, unless you are doing some kind of class-factory
thing.
Allocating memory is slow... which is why this particular UI library was
commonly regarded as having poor performance.
On Jul 5, 9:59 pm, red floyd <no.spam.h...@example.com> wrote:
Medvedev wrote:
[original post snipped]
Several reasons.
1. They're coming from Java and they don't know any better
2. They're storing polymorphic objects inside containers
Not just storing them inside containers. I have a couple of
places where I've coded something like:

std::auto_ptr< Base > obj(
    someCondition
        ? static_cast< Base* >( new D1 )
        : static_cast< Base* >( new D2 ) ) ;

It's not that common, however.
3. They need the lifetime of the object to exceed the scope in which
it was declared.
Often, the last two reasons go together: although there's no
formal link between them, in practice, polymorphic objects tend
to have arbitrary lifetimes.
Note that you normally would prefer copying an object to
extending its lifetime, if the object supports copy.
--
James Kanze (GABI Software) email:ja*********@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
On Jul 5, 10:05 pm, Medvedev <3D.v.Wo...@gmail.com> wrote:
On Jul 5, 11:59 am, red floyd <no.spam.h...@example.com> wrote:
[quoted text snipped]
how u can use object after it's scope ends!!
Objects don't have scope, they have lifetime. Scope concerns
the visibility of a declaration (and is linked with the
structure of the program). Lifetime concerns when the object
comes into and goes out of being. C++ defines several different
types of lifetime, some linked to scope (e.g. automatic), and
others not (e.g. dynamic). If you create an object with new, it
has dynamic lifetime, and exists until you delete it. Which
could be somewhere else entirely.
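That distinction can be sketched directly (Tracked and the helper functions are invented names): the pointer variable's scope ends with the function, but the object's dynamic lifetime continues until a delete that may happen somewhere else entirely.

```cpp
struct Tracked {
    static int live;             // illustration only: counts live objects
    Tracked()  { ++live; }
    ~Tracked() { --live; }
};
int Tracked::live = 0;

Tracked* g_escaped = 0;          // "somewhere else entirely"

void create() {
    Tracked* p = new Tracked;    // p's scope is this function...
    g_escaped = p;
}                                // ...and ends here; the object lives on

void destroy() {
    delete g_escaped;            // dynamic lifetime ends here
    g_escaped = 0;
}
```
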
--
James Kanze
On Jul 6, 8:19 am, "Somebody" <someb...@cox.net> wrote:
<acehr...@gmail.com> wrote in message
news:c0**********************************@m44g2000hsc.googlegroups.com...
[quoted text snipped]
Well, let me de-anonimize the code a bit :)...
They had a bunch of stuff like:
ctrl->InsertItem(new ButtonTypeA(param1, param2));
ctrl->InsertItem(new ButtonTypeB(param3, param4));
ctrl->InsertItem(new ButtonTypeC(param4, param5));
So they had a UI control, and they were inserting items into
it. ButtonTypeX was a class *the class library* defined... not
something that a *user* of the class library would (or could)
define.
But he could almost certainly define a class which derived from
it, and use that instead.
What I was saying... was I failed to see why I should do the
dirty work for the library and not only determine what class
to use, but also to allocate it for them. They should
determine whether to create the internal ButtonTypeA,
ButtonTypeB, ButtonTypeC, etc. in some other way (like a param
for example)...
It's very likely that all three calls above invoke the same
InsertItem. Even if they don't, it's almost certain that the
user code could derive from the different ButtonType's, and pass
an instance of the derived class, rather than the base class.
so I'd rather see something like:
ctrl->InsertItem(param1, param2);
ctrl->InsertItem(param3, param4);
ctrl->InsertItem(param4, param5);
or
ctrl->InsertButtonTypeA(param1, param2);
ctrl->InsertButtonTypeB(param3, param4);
ctrl->InsertButtonTypeC(param4, param5);
or
ctrl->InsertItem(TYPE_A, param1, param2);
ctrl->InsertItem(TYPE_B, param3, param4);
ctrl->InsertItem(TYPE_C, param4, param5);
Like other posters said, that style of code is C#'ish or
Java'ish... it is not typical C++ style.
It's very typical C++ where polymorphism and arbitrary lifetime
is involved, which is almost certainly the case here.
C++ is a multi-paradigm language. There are cases when the
above paradigm is appropriate, C++ supports it, and it should be
used in such cases.
Unless you are doing some type of class factory type
thing.
Allocating memory is slow... which is why this particular UI
library was commonly regarded as having poor performance.
Practically speaking, things like buttons in a GUI must have
arbitrary lifetime; they can't be automatic variables. So if
you don't do the new, the library must. And you immediately
lose the flexibility of deriving from the object. And of
course, things like Button's typically also have identity; if
you attach an event handler to one instance, but a copy receives
the event, then your code isn't going to work.
--
James Kanze
"Somebody" <so******@cox.netwrites:
Allocating memory is slow... which is why this particular UI library was
commonly regarded as having poor performance.
Only in C++, because there's no garbage collector. In languages with
automatic memory management, allocating is free.
--
__Pascal Bourguignon__
Pascal J. Bourguignon writes:
"Somebody" <so******@cox.netwrites:
>Allocating memory is slow... which is why this particular UI library was commonly regarded as having poor performance.
Only in C++, because there's no garbage collector. In languages with
automatic memory management, allocating is free.
Right. Which is why, at $day job$, a recently accomplished task was to dump
a market data feed application that was written in Java, and replace it with
C++ code -- because its garbage collection is just sooooooooo… fast, and
heap usage is soooooo… little.
Not.
In article <7c************@pbourguignon.anevia.com>,
Pascal J. Bourguignon <pj*@informatimago.com> wrote:
> Only in C++, because there's no garbage collector. In languages with automatic memory management, allocating is free.
Sorry, it isn't free.
The performance/cost characteristic of allocating and releasing memory
in an automatic memory managed language is different. Under some
circumstances, it can be orders of magnitude faster than C++
explicit memory management, but it certainly isn't free, and things can
happen if you get out of a particular set of "some circumstances".
Yannick
Yannick Tremblay wrote:
Under some
circumstances, it can be orders of magnitude faster than C++
explicit memory management but it certainly isn't free and things can
happen if you get out of a particualr set of "some circumstances"
This is interesting. What cases?
In article <7c************@pbourguignon.anevia.com>, pj*@informatimago.com says...
"Somebody" <so******@cox.netwrites:
Allocating memory is slow... which is why this particular UI library was
commonly regarded as having poor performance.
Only in C++, because there's no garbage collector. In languages with
automatic memory management, allocating is free.
While it's apparent that you're much more interested in advocacy than
accuracy, could you at least attempt to maintain some _minimal_ level of
accuracy? Allocating is _never_ free. Furthermore, automatic memory
management doesn't necessarily even make it cheap.
There are garbage collectors (those that compact the heap) that make
allocation cheap. There are others (that don't compact the heap) with
which allocation can still be quite expensive.
On the other side, there are manual allocators (those that do
compaction) that make allocation cheap. There are others (those that
don't do compaction) with which allocation can be quite expensive.
The _overall_ cost of a full cycle of allocating and freeing memory is
slightly lower (on average) with manually controlled allocators than
with automatically controlled ones -- but there's enough overlap between
the two that the average means virtually nothing. You need to know quite
a bit about usage patterns and the exact implementations of the manual
and automatic memory managers in question before you can make any
meaningful prediction about their relative speed under specific
circumstances. Having done that, you still know next to nothing about
even slightly different circumstances -- generalizing the results is
very difficult.
--
Later,
Jerry.
The universe is a figment of its own imagination.
Jerry Coffin <jc*****@taeus.com> writes:
In article <7c************@pbourguignon.anevia.com>, pj*@informatimago.com says...
[quoted text snipped]
Allocating is _never_ free. Furthermore, automatic memory
management doesn't necessarily even make it cheap.
Of course. I just meant to shatter the notion that new and new[] must
be slow. There are implementations where allocation is fast (a
pointer increment and a test). All right, some time is traded in the
deallocation or garbage collection, and you can always find a worst-case
situation. The point is that a program with lots of new is not
bad per se.
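The "pointer increment and a test" allocator can be sketched as a bump allocator (a simplified illustration, not any particular collector's scheme; alignment is ignored for brevity):

```cpp
#include <cstddef>

// A fixed arena; real collectors grow and compact it, this sketch doesn't.
class BumpAllocator {
public:
    BumpAllocator(char* arena, std::size_t size)
        : next_(arena), end_(arena + size) {}

    // Allocation is one pointer increment and one bounds test.
    void* allocate(std::size_t n) {
        if (end_ - next_ < static_cast<std::ptrdiff_t>(n))
            return 0;            // out of arena (a GC would collect here)
        void* p = next_;
        next_ += n;
        return p;
    }

    std::size_t used(const char* arena) const {
        return static_cast<std::size_t>(next_ - arena);
    }
private:
    char* next_;
    char* end_;
};
```
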
--
__Pascal Bourguignon__
On Jul 7, 3:18 pm, ytrem...@nyx.nyx.net (Yannick Tremblay) wrote:
In article <7cd4lqf3az....@pbourguignon.anevia.com>,
Pascal J. Bourguignon <p...@informatimago.com> wrote:
Only in C++, because there's no garbage collector. In
languages with automatic memory management, allocating is
free.
Sorry, it isn't free.
The performance/cost characteristic of allocating and
releasing memory in an automatic memory managed language is
different. Under some circumstances, it can be orders of
magnitude faster than C++ explicit memory management but it
certainly isn't free and things can happen if you get out of a
particular set of "some circumstances"
Well, the *allocation* can be very, very cheap---two or three
machine instructions. And if the program doesn't run long
enough to trigger the collector, it will be about as close to
free as you can get.
More realistically, of course, I agree with your evaluation.
The total cost of allocating and freeing the memory can be
significantly less, or significantly more, or (most of the time)
something more or less similar. The cost can also come at a
less convenient moment, or a more convenient moment, as well,
depending on the application. (GUI applications, for example,
work very well with garbage collection, because you can usually
arrange for almost all of the cost to occur when you'd otherwise
be waiting for an event.)
Where garbage collection is always a win is in development time
and program security and robustness.
--
James Kanze
In article <7c************@pbourguignon.anevia.com>, pj*@informatimago.com says...
[ ... ]
Of course. I just meant to shatter the notion that new and new[] must
be slow. There are implementations where allocation is fast (a
pointer increment and a test). All right, some time is traded in the
deallocation or garbage collection, and you can always find a worse
case situation. The point is that a program with lots of new is not
bad per se.
You have argued that it's not necessarily _slow_. While you've provided
little real evidence of that, I'm not going to argue the point, because
even when/if the allocation itself is slow, a program can use it heavily
and still run at a perfectly acceptable speed.
Pointers are the data-structure equivalent of the GOTO. What ends up as
gotos (jumps) in assembly language should normally be hidden, with
what's visible being only high level block structures. Likewise,
pointers should typically be hidden, used to implement high level block
structures.
Programs where there's a lot of (visible) use of new, rather than most
use being hidden inside the implementation of higher level structures
aren't _necessarily_ evil -- but it's definitely a _strong_ indication
of a likely problem.
--
Later,
Jerry.
In article <bc6c98b2-b72a-420f-bdc0-15**********@d45g2000hsc.googlegroups.com>,
ja*********@gmail.com says...
[ ... ]
Where garbage collection is always a win is in development time
and program security and robustness.
Always...except when it's not.
--
Later,
Jerry.
On Jul 7, 5:59 pm, Jerry Coffin <jcof...@taeus.com> wrote:
In article <7cwsjxenwa....@pbourguignon.anevia.com>,
p...@informatimago.com says...
[ ... ]
Pointers are the data structure equivalent of the GOTO. What
end up as gotos (jumps) in assembly language should normally
be hidden, with what's visible being only high level block
structures. Likewise, pointers should typically be hidden,
used to implement high level block structures.
The difference with regards to goto is that the language
supports the necessary high level structures directly. This is
less true with regards to data structures; the library certainly
supports some (e.g. you don't need pointers to implement a list,
because the structure is already there), but certainly not all.
So you end up in the situation you were in in assembler, or,
say, Fortran IV. You use goto (pointers), but in a structured
manner.
Note that some languages (e.g. lisp) don't need pointers, since
they do have the high level structures for manipulating them.
At least in the ways that are appropriate in that language.
Programs where there's a lot of (visible) use of new, rather
than most use being hidden inside the implementation of higher
level structures aren't _necessarily_ evil -- but it's
definitely a _strong_ indication of a likely problem.
The new isn't a problem; if the data has dynamic lifetime, you
need a new. The pointers themselves aren't really evil either,
in such cases, provided they're used correctly. At least, at
present, there's not much alternative---I've yet to see any
library or smart pointers which are effective for general
navigation. (Back when I started C++, "relationship management"
was an in thing, and everyone was trying to define classes to
manage relationships. None of the ones I saw really succeeded,
however, and I have the impression that people have pretty much
given up developing a generic solution.)
--
James Kanze
In article <bc**********************************@d45g2000hsc.googlegroups.com>,
James Kanze <ja*********@gmail.com> wrote:
>On Jul 7, 3:18 pm, ytrem...@nyx.nyx.net (Yannick Tremblay) wrote:
>In article <7cd4lqf3az....@pbourguignon.anevia.com>, Pascal J. Bourguignon <p...@informatimago.com> wrote:
>Only in C++, because there's no garbage collector. In languages with automatic memory management, allocating is free.
>Sorry, it isn't free.
>The performance/cost characteristic of allocating and releasing memory in an automatic memory managed language is different. Under some circumstances, it can be orders of magnitude faster than C++ explicit memory management but it certainly isn't free and things can happen if you get out of a particular set of "some circumstances"
Well, the *allocation* can be very, very cheap---two or three machine instructions. And if the program doesn't run long enough to trigger the collector, it will be about as close to free as you can get.
2-3 machine instructions still isn't "free". However, agreed, it is
very, very cheap. Maybe even insignificant.
However, unless one is in the special case of a short-running
program, one shouldn't ignore the cost of running the collector. I
mean, at that point, a non-garbage-collected language could simply
reply with a proof of speed that consists of a small program that
doesn't allocate any memory at all.
>More realistically, of course, I agree with your evaluation. The total cost of allocating and freeing the memory can be signficantly less, or significantly more, or (most of the time) something more or less similar. The cost can also come at a less convenient moment, or a more convenient moment, as well, depending on the application. (GUI applications, for example, work very well with garbage collection, because you can usually arrange for almost all of the cost to occur when you'd otherwise be waiting for an event.)
Yup, I'd agree with that.
>Where garbage collection is always a win is in development time and program security and robustness.
You know, that's the funny thing: when I write C++, I very rarely feel
that a garbage collector would help me write code faster or result in
more secure and robust code. STL containers are used extensively.
I rarely use "new"; when I do, it's generally inside an object and
managed using RAII, or ownership is clear anyway. The few cases left
are rare enough that I don't find them a burden nor a security risk.
Even with the said GUI application, to me there's a clear ownership
relationship between a window and a button on that window, so lifetime
management is simple.
However, when I write C, I do find it problematic. The absence of
both garbage collection and RAII makes memory management a constant
pain, slows development, and creates risks.
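That window/button ownership is plain RAII; a sketch (class names invented): the Window acquires its Button in the constructor and releases it in the destructor, so nobody ever writes a bare delete at the call site.

```cpp
struct Button {
    static int live;              // illustration only: counts live buttons
    Button()  { ++live; }
    ~Button() { --live; }
};
int Button::live = 0;

// RAII: the Window owns its Button for exactly the Window's own lifetime.
class Window {
public:
    Window() : ok_(new Button) {}
    ~Window() { delete ok_; }     // released exactly when the window dies
private:
    Button* ok_;
    // copying disabled: two Windows must not delete the same Button
    Window(const Window&);
    Window& operator=(const Window&);
};
```
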
Unfortunately, when I look at some other code that is meant to be
compiled with a C++ compiler, things are not always as nice.
Inappropriate habits (maybe imported from C or Java, or simply never
learnt better) can introduce poor memory management patterns, which
lead to slow development and to security and robustness issues.
So a couple of possible solutions:
- Training: teach people to write good code. Writing good code
doesn't take longer than writing poor code; it just requires more
skill.
- Garbage collection: has clear advantages in some situations. But is
it a substitute for poor skills?
- Remove flexibility from the C++ language: is one of the weaknesses of
C++ its great flexibility? It brings, as a cost, the capability of
writing poor code, which is somewhat limited in a less flexible
language. (Witness, for example, Python's attempts to enforce "nice"
formatting by making formatting and whitespace part of the syntax.)
Yannick
In article <906369c8-8ef3-42e8-b90e- 6e**********@l64g2000hse.googlegroups.com>, ja*********@gmail.com
says...
[ ... ]
The difference with regards to goto is that the language
supports the necessary high level structures directly. This is
less true with regards to data structures; the library certainly
supports some (e.g. you don't need pointers to implement a list,
because the structure is already there), but certainly not all.
So you end up in the situation you were in, in assembler or,
say, Fortran IV. You use goto (pointers), but in a structured
manner.
I mostly agree. I think the ability to create smart pointers (for one
example) gives _more_ capability for ensuring against many of the
problems that can and do arise, however.
Note that some languages (e.g. lisp) don't need pointers, since
they do have the high level structures for manipulating them.
At least in the ways that are appropriate in that language.
Though the syntax is certainly different, dotted expressions in Lisp
make the pointers much more explicit.
Programs where there's a lot of (visible) use of new, rather
than most use being hidden inside the implementation of higher
level structures aren't _necessarily_ evil -- but it's
definitely a _strong_ indication of a likely problem.
The new isn't a problem; if the data has dynamic lifetime, you
need a new. The pointers themselves aren't really evil either,
in such cases, provided they're used correctly.
I'm not saying that the new or the pointer is the problem -- the
_visibility_ is. The new should nearly always be hidden inside of the
implementation of a data structure, preferably the most specialized,
restricted data structure that can do the job. When you see uses of new
all over the program, it's usually (usually, not always) a sign that the
data structure being built and maintained isn't well encapsulated.
--
Later,
Jerry.
The universe is a figment of its own imagination.
yt******@nyx.nyx.net (Yannick Tremblay) writes:
In article <bc**********************************@d45g2000hsc. googlegroups.com>,
James Kanze <ja*********@gmail.com> wrote:
>>On Jul 7, 3:18 pm, ytrem...@nyx.nyx.net (Yannick Tremblay) wrote:
>>In article <7cd4lqf3az....@pbourguignon.anevia.com>, Pascal J. Bourguignon <p...@informatimago.com> wrote:
>>Only in C++, because there's no garbage collector. In languages with automatic memory management, allocating is free.
>>Sorry, it isn't free.
>>The performance/cost characteristic of allocating and releasing memory in an automatic memory managed language is different. Under some circumstances, it can be orders of magnitude faster than C++ explicit memory management but it certainly isn't free and things can happen if you get out of a particular set of "some circumstances"
Well, the *allocation* can be very, very cheap---two or three machine instructions. And if the program doesn't run long enough to trigger the collector, it will be about as close to free as you can get.
2-3 machine instructions still isn't "free". However, agreed, it is
very, very cheap. Maybe even insignificant.
However, unless one is in the special case of a short-running
program, one shouldn't ignore the cost of running the collector. I
mean, at that point, a non-garbage-collected language could simply
reply with a proof of speed consisting of a small program that
doesn't allocate any memory at all.
Let's be a little more forward-looking. We've got multicore
processors nowadays, and in a few years we'll have thousands of cores
at our disposal. There are garbage collectors that can work constantly
in parallel, in a background thread. In elapsed time, if you allocate
memory in a few instructions and have the garbage collected in
parallel, the total elapsed-time cost is only that of allocating. On
the other hand, if you keep a heavyweight allocator, you cannot run it
in parallel, and your elapsed time will be worse, even if you offload
the free to a background thread.
>>Where garbage collection is always a win is in development time and program security and robustness.
You know, that's the funny thing: When I write C++, I very rarely feel
that a garbage collector would help me write code faster or result in
more secure and robust code.
Yes, that's because there are other problems with C++ that are
blocking us as well.
STL containers are used extensively.
I rarely use "new"; when I do, it's generally inside an object and
managed using RAII, or ownership is clear anyway.
STL being one of them.
The few cases left
are rare enough that I don't find them a burden or a security risk.
Even with the said GUI application, to me there's a clear ownership
relationship between a window and a button on that window, so lifetime
management is simple.
As long as your application only has tree-like dependencies between
objects, that's sufficient indeed. But in OO, you often get hold of an
object deep inside the tree and would like to move up to some parent.
For example, you may get an event on the Button and need to know what
window it is in to send it an appropriate message. Therefore you will
need bidirectional relationships, and then the lack of a garbage
collector is really a PITA.
However, when I write C, I do find it problematic. The absence of
both garbage collection and RAII makes memory management a constant
pain, slows development and creates risks.
Unfortunately, when I look at some other code that is meant to be
compiled with a C++ compiler, things are not always as nice.
Inappropriate habits (maybe imported from C or Java, or simply never
having learnt better) can introduce poor memory management patterns,
which lead to slow development and to security and robustness issues.
So a couple of possible solutions:
- Training: teach people to write good code. Writing good code
doesn't take longer than writing poor code; it just requires more
skill.
- Garbage collection: has clear advantages in some situations. But is
it a substitute for poor skills?
- Remove flexibility from the C++ language: Is one of the weaknesses of
C++ its great flexibility? It brings, as a cost, the capability of
writing poor code, which is somewhat limited in a less flexible
language (witness, for example, Python's attempt to enforce "nice"
formatting by making whitespace part of the syntax).
Well, I wouldn't say that C++ is really "flexible" (compared to Lisp),
but indeed, it feels more like you have to finish it yourself before
you can use it, implementing the language features you need to be able
to work in some adequate C++ library.
For example, if you don't like new/delete (after all, they give you
low-level *pointers*), you will have to implement your own 'dynamic
object reference' embedded into 'smart pointers'.
--
__Pascal Bourguignon__
Jerry Coffin <jc*****@taeus.com> writes:
In article <906369c8-8ef3-42e8-b90e- 6e**********@l64g2000hse.googlegroups.com>, ja*********@gmail.com
says...
>Note that some languages (e.g. lisp) don't need pointers, since they do have the high level structures for manipulating them. At least in the ways that are appropriate in that language.
Though the syntax is certainly different, dotted expressions in Lisp
make the pointers much more explicit.
The dot in lisp is a syntactic marker to represent CONS cells, not pointers.
(a . b) is a record of two slots containing the symbols a and b.
It's equivalent to something like: std::pair<Object,Object>{a,b}
(cons e1 e2) being equivalent to std::pair<Object,Object>(e1,e2), but
with the pair allocated on the heap.
Lists are implemented as chains of these CONS cells, in which the
last cell's second slot contains the symbol NIL. A list containing
a, b and c would be stored as:
(a . (b . (c . nil))) == pair<Object,Object>(a, pair<Object,Object>(b, pair<Object,Object>(c, nil)))
it can be printed and read as:
(a b c)
and constructed either with (cons 'a (cons 'b (cons 'c nil))) or
(list 'a 'b 'c).
While there may be pointers at the implementation level, there's no
such notion at the Lisp level. You can work at the level of
lists, or at the level of the cons cells of which lists
(and other data structures) are made.
I'm not saying that the new or the pointer is the problem -- the
_visibility_ is. The new should nearly always be hidden inside of the
implementation of a data structure, preferably the most specialized,
restricted data structure that can do the job. When you see uses of new
all over the program, it's usually (usually, not always) a sign that the
data structure being built and maintained isn't well encapsulated.
--
__Pascal Bourguignon__
In article <7c************@pbourguignon.anevia.com>,
Pascal J. Bourguignon <pj*@informatimago.com> wrote:
>yt******@nyx.nyx.net (Yannick Tremblay) writes:
>2-3 machine instructions still isn't "free". However, agreed, it is very, very cheap. Maybe even insignificant.
However, unless one is in the special case of a short-running program, one shouldn't ignore the cost of running the collector. I mean, at that point, a non-garbage-collected language could simply reply with a proof of speed consisting of a small program that doesn't allocate any memory at all.
Let's be a little more forward-looking. We've got multicore processors nowadays, and in a few years we'll have thousands of cores at our disposal. There are garbage collectors that can work constantly in parallel, in a background thread. In elapsed time, if you allocate memory in a few instructions and have the garbage collected in parallel, the total elapsed-time cost is only that of allocating. On the other hand, if you keep a heavyweight allocator, you cannot run it in parallel, and your elapsed time will be worse, even if you offload the free to a background thread.
I think you are unfairly highlighting the cost of explicit
allocation while giving the garbage collector the right to run in a
separate thread "for free".
The total resource usage of an application is the total across all the
threads it runs, including the garbage collector thread. The resources
used by the garbage collector thread are not available for other
uses. They haven't been created out of thin air "for free".
Multi-threading on multi-core can, under certain circumstances,
allow one to make an application that is more responsive. However,
this is not reserved to garbage-collected applications. One can
write, and I have written, non-garbage-collected multi-threaded
applications.
If we are forward-looking, a lot of applications are becoming
multi-threaded (or multi-process), regardless of whether they are
garbage collected or not. So at that point, while one thread is busy
freeing memory somewhere, another thread might be doing something
else. This is possible whether the application is garbage collected
or not.
If I write... say an image/video processing application for multicore
processors aiming at maximum performance, I would write it so
that it uses all the cores available to their maximum capability, so
there wouldn't be an "unused" core on which the garbage collector
could run "for free".
I grant you that garbage collection allows you to more easily set
up a system where memory cleanup happens during dead time (low-load
time).
However, in practice, it is not uncommon, while using some application,
to notice that garbage collection is happening at just the "wrong
time" and is using a lot of resources that one would rather were
not spent on garbage collection at that particular moment. If garbage
collection is such a silver bullet, why does this happen?
Yannick
On Jul 8, 2:52 pm, ytrem...@nyx.nyx.net (Yannick Tremblay) wrote:
In article
<bc6c98b2-b72a-420f-bdc0-15ecd55e2...@d45g2000hsc.googlegroups.com>,
James Kanze <james.ka...@gmail.com> wrote:
[...]
Where garbage collection is always a win is in development time
and program security and robustness.
You know, that's the funny thing: When I write C++, I very
rarely feel that a garbage collector would help me write code
faster or result in more secure and robust code.
The security issue is clear: garbage collection reduces the risk
of a dangling pointer actually pointing to some other object
(which can be a security leak, much like buffer overflow), and
more generally, allows reliable checking for dangling pointers.
The win in development time depends largely on what you're
doing; it's true that there are many cases in C++ where it isn't
that relevant. But there are still a few cases where you'll
prefer pointers to dynamically allocated objects to pure value
semantics:
-- when the objects are big enough to make the cost of copying
an issue,
-- when the object is used in an exception, and copying isn't
guaranteed not to throw (if such an object is to be thrown, you'll
use std::string*, rather than std::string), and
-- when the object must be polymorphic, although otherwise
value semantics apply.
Now it's true that tr1::shared_ptr can be used in the first two,
but that does mean extra effort on your part, albeit minimal.
In the third case, you may have to worry about cycles as well.
And in all cases, it means thinking about it, rather than just
using a pointer and being done with it.
(There are also special cases. I recently had to deal with a
case where I needed a complex data structure which could be
statically initialized. But wasn't always. Because of the
requirement of static initialization, the elements in the
structure couldn't have constructors or destructors, which meant
that in the case of dynamic management, I had to visit the
structure manually to do the deletes. Getting the code
exception safe was a real pain, whereas with garbage collection,
it would have represented 0 effort.)
STL containers are used extensively. I rarely use "new";
when I do, it's generally inside an object and managed using
RAII, or ownership is clear anyway. The few cases left are
rare enough that I don't find them a burden or a security risk.
Even with the said GUI application, to me there's a clear
ownership relationship between a window and a button on that
window so lifetime management is simple.
The security risk is still there, as soon as you have objects
with arbitrary lifetimes. (Like buffer overflow, it can only
occur as a result of a programming error. But programming
errors exist, and the fact remains that using a dangling pointer
which points to a new object can result in your code being
compromised.) For the rest, it's certain that you don't need
garbage collection in the way that Java needs it; that it's
impossible to write correct code without it. But there are
enough cases where it does help that one or the other will
inevitably show up in most applications.
However, when I write C, I do find it problematic. The
absence of both garbage collection and RAII makes memory
management a constant pain, slows development and creates
risks.
Yes. Without RAII, garbage collection is an absolute necessity.
Unfortunately, when I look at some other code that is meant to
be compiled with a C++ compiler, things are not always as
nice. Inappropriate habits (maybe imported from C or Java, or
simply never having learnt better) can introduce poor memory
management patterns, which lead to slow development and
security and robustness issues.
So a couple of possible solutions:
- Training: teach people to write good code. Writing good code
doesn't take longer than writing poor code, it just requires more
skill.
I'd say different skills, rather than more skill. It actually
takes more skill to write correct C, but if you apply those
skills to C++, the results aren't going to be very good.
- Garbage collection: has clear advantages in some situations.
But is it a substitute for poor skills?
No. You need both (and the correct skills are the more important
issue).
- Remove flexibility from the C++ language: Is one of the
weaknesses of C++ its great flexibility? It brings, as a cost,
the capability of writing poor code, which is somewhat limited
in a less flexible language (witness, for example, Python's
attempt to enforce "nice" formatting by making whitespace part
of the syntax).
The reason C++ is widespread is not its elegance and its
simplicity. Nor even its safety, per se. The reason C++ is
widespread is its flexibility. And the fact that as the global
knowledge base expands, we learn new and better ways of doing
things. So that, correctly written, C++ is considerably more
robust and secure than Java, despite Java's having been designed
with those issues in mind. Java cast in stone the established
concepts of robustness and security at the time it was
developed; since then, we know more, and can do better. At
least in C++, because nothing has been cast in stone.
--
James Kanze (GABI Software) email:ja*********@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
In article <7c************@pbourguignon.anevia.com>,
Pascal J. Bourguignon <pj*@informatimago.com> wrote:
>yt******@nyx.nyx.net (Yannick Tremblay) writes:
>The few cases left are rare enough that I don't find it a burden nor a security risk. Even with the said GUI application, to me there's a clear ownership relationship between a window and a button on that window so lifetime management is simple.
While your application only has tree-like dependencies between objects, it's sufficient indeed. But in OO, you often get at one object deep inside the tree, and would like to move up to some parent. For example, you may get an event on the Button, and need to know what window it is in to send it an appropriate message. Therefore you will need bidirectional relationships, and then lack of a garbage collector is really a PITA.
So the Button needs to send a message to its parent Window. Yes, that
means a bi-directional collaboration, but that doesn't change the fact
that a Button belongs to its parent Window and should be
shorter-lived than its parent Window. I hate it when I close a
window in an application and it leaves some button lying around,
floating in the middle of nowhere :-(
Personally, I don't see the need to know one's parent as a situation
that *requires* garbage collection. Sure, it might mean that simple
reference counting is not applicable, but there are a lot of other
solutions. I see garbage collection as one possible solution.
Yannick
yt******@nyx.nyx.net (Yannick Tremblay) writes:
In article <7c************@pbourguignon.anevia.com>,
Pascal J. Bourguignon <pj*@informatimago.com> wrote:
>>yt******@nyx.nyx.net (Yannick Tremblay) writes:
>>2-3 machine instructions still isn't "free". However, agreed, it is very, very cheap. Maybe even insignificant.
However, unless one is in the special case of a short-running program, one shouldn't ignore the cost of running the collector. I mean, at that point, a non-garbage-collected language could simply reply with a proof of speed consisting of a small program that doesn't allocate any memory at all.
Let's be a little more forward-looking. We've got multicore processors nowadays, and in a few years we'll have thousands of cores at our disposal. There are garbage collectors that can work constantly in parallel, in a background thread. In elapsed time, if you allocate memory in a few instructions and have the garbage collected in parallel, the total elapsed-time cost is only that of allocating. On the other hand, if you keep a heavyweight allocator, you cannot run it in parallel, and your elapsed time will be worse, even if you offload the free to a background thread.
I think you are unfairly highlighting the cost of explicit
allocation while giving the garbage collector the right to run in a
separate thread "for free".
The total resource usage of an application is the total across all the
threads it runs, including the garbage collector thread. The resources
used by the garbage collector thread are not available for other
uses. They haven't been created out of thin air "for free".
True. But I am merely transmitting the point of Intel's engineers,
who forewarn us programmers that we will soon have a very large
number of cores at our disposal; when you have more cores than
threads to run, processing becomes as free as the air (on a planet).
Multi-threading on multi-core can, under certain circumstances,
allow one to make an application that is more responsive. However,
this is not reserved to garbage-collected applications. One can
write, and I have written, non-garbage-collected multi-threaded
applications.
If we are forward-looking, a lot of applications are becoming
multi-threaded (or multi-process), regardless of whether they are
garbage collected or not. So at that point, while one thread is busy
freeing memory somewhere, another thread might be doing something
else. This is possible whether the application is garbage collected
or not.
But the criterion, for the final user, is response time, not the
percentage of cores you can keep busy.
If I write... say an image/video processing application for multicore
processors aiming at maximum performance, I would write it so
that it uses all the cores available to their maximum capability, so
there wouldn't be an "unused" core on which the garbage collector
could run "for free".
Assume you have one core per pixel, each processing its own pixel, and
a few other cores free.
Is it better to spend time, in all the threads, in a complex
allocator, all in parallel, or is it better to have all the worker
threads/cores allocate in a few instructions, right away, and leave
the collection of the garbage to the free cores?
I grant you that garbage collection allows you to more easily set
up a system where memory cleanup happens during dead time (low-load
time).
Even without low load times. That's the point of parallel garbage
collectors in multiprocessors.
However, in practice, it is not uncommon, while using some application,
to notice that garbage collection is happening at just the "wrong
time" and is using a lot of resources that one would rather were
not spent on garbage collection at that particular moment. If garbage
collection is such a silver bullet, why does this happen?
Well, given that we have to wait on all the applications, and most of
them don't have a garbage collector, I don't see from my experience
that the situation is any better without them. But again, this is
already becoming very old history, given the advent of massive
multicore processors and effective implementations of parallel
garbage collectors.
This is also the direction Apple's following.
--
__Pascal Bourguignon__
On Jul 6, 3:39 am, Medvedev <3D.v.Wo...@gmail.com> wrote:
i see several source codes, and i found they almost only use the "new"
and "delete" keywords to make their objects.
Why should i do that, when as far as i know the object is going to be
destroyed by itself at the end of the app?
for example:
class test
{
public:
    int x;
};
int main(int argc, char **argv)
{
    test *n = new test;
    ...
    delete n;
    return 0;
}
i know that the object created this way is in the heap, which has much
more memory than the stack, but why do they always define objects that
way? why not just say "test n" and let the object be destroyed by
itself at the end of the program, instead of using "new" when maybe
you will forget to "delete" at the end?
It's a question of design. You can write a smart pointer that wraps the
'new'/'delete' operations in its constructor/destructor; the compiler
will then insert those calls in your functions automatically.