Teaching new tricks to an old dog (C++ -->Ada)

I'm following various postings in the comp.lang.ada, comp.lang.c++,
comp.realtime, and comp.software-eng groups regarding the selection of a
programming language (C, C++, or Ada) for safety-critical real-time
applications. The majority of experts recommend Ada for safety-critical
real-time applications. I have many years of experience in C/C++ (and
Delphi) but no Ada knowledge.

May I ask if it is too difficult to move from C/C++ to Ada?
What is the best way of learning Ada for a C/C++ programmer?

Jul 23 '05
On Wed, 16 Mar 2005 15:11:46 +0100, Georg Bauhaus wrote:
Dmitry A. Kazakov wrote:
Access type is a type as any other. If it has to be a
member of Set_Element'Class why shouldn't it be?
Wrapping occurs, for example a set of ints will become a
set of int_also_set_element.


What's wrong with that? Or, better: how could ints be in a set without being
elements of the set? (:-))
As there might have to be
some implementation for int_also_set_element,
it can be much less convenient than having a set of plain
ints.
I'm afraid I don't fully understand what you mean here. Probably, that the
implementation of the set will use class-wide objects instead of specific
ones. It is definitely an issue to be addressed. This is why I wish a
better ADT for Ada.
This is a chapter in OOSC2. It's good to have a copy anyway.
http://archive.eiffel.com/doc/oosc/
From this page there is a link to a sample chapter every now
and then. See the bottom of the page.


Presently it is: HOW TO FIND THE CLASSES

--
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
Jul 23 '05 #501

Ioannis Vranos <iv*@remove.this.grad.com> writes:
I had Ada template library in mind as
No problem, it is built in plain Ada.
also as "information systems ("money
computing"), string processing, Ada subsetting definitions (what's this?)"
from those mentioned.


I think this is the way to restrict the language. In some areas it is very
important to avoid memory allocation; you can add a pragma like:

pragma Restrictions (No_Allocators);

Here is the current (read: Ada 95) set of restrictions:

* Safety and Security Restrictions
====================

1. This clause defines restrictions that can be used with pragma
Restrictions (see *Note 13.12::); these facilitate the
demonstration of program correctness by allowing tailored versions
of the run-time system.
_Static Semantics_

2. The following restrictions, the same as in *Note D.7::, apply in
this Annex: No_Task_Hierarchy, No_Abort_Statements,
No_Implicit_Heap_Allocations, Max_Task_Entries is 0,
Max_Asynchronous_Select_Nesting is 0, and Max_Tasks is 0. The last
three restrictions are checked prior to program execution.

3. The following additional restrictions apply in this Annex.

4. Tasking-related restriction:

5. No_Protected_Types

There are no declarations of protected types or protected
objects.

6. Memory-management related restrictions:

7. No_Allocators

There are no occurrences of an allocator.

8. No_Local_Allocators

Allocators are prohibited in subprograms, generic subprograms,
tasks, and entry bodies; instantiations of generic
packages are also prohibited in these contexts.

9. No_Unchecked_Deallocation

Semantic dependence on Unchecked_Deallocation is not allowed.

10. Immediate_Reclamation

Except for storage occupied by objects created by allocators
and not deallocated via unchecked deallocation, any storage
reserved at run time for an object is immediately reclaimed
when the object no longer exists.

11. Exception-related restriction:

12. No_Exceptions

Raise_statements and exception_handlers are not allowed. No
language-defined run-time checks are generated; however, a
run-time check performed automatically by the hardware is
permitted.

13. Other restrictions:

14. No_Floating_Point

Uses of predefined floating point types and operations, and
declarations of new floating point types, are not allowed.

15. No_Fixed_Point

Uses of predefined fixed point types and operations, and
declarations of new fixed point types, are not allowed.

16. No_Unchecked_Conversion

Semantic dependence on the predefined generic
Unchecked_Conversion is not allowed.

17. No_Access_Subprograms

The declaration of access-to-subprogram types is not allowed.

18. No_Unchecked_Access

The Unchecked_Access attribute is not allowed.

19. No_Dispatch

Occurrences of T'Class are not allowed, for any (tagged)
subtype T.

20. No_IO

Semantic dependence on any of the library units
Sequential_IO, Direct_IO, Text_IO, Wide_Text_IO, or Stream_IO
is not allowed.

21. No_Delay

Delay_Statements and semantic dependence on package Calendar
are not allowed.

22. No_Recursion

As part of the execution of a subprogram, the same subprogram
is not invoked.

23. No_Reentrancy

During the execution of a subprogram by a task, no other task
invokes the same subprogram.
* The Tasking Restrictions
====================

1. This clause defines restrictions that can be used with a pragma
Restrictions (see *Note 13.12::) to facilitate the construction of
highly efficient tasking run-time systems.
_Static Semantics_

2. The following restriction_identifiers are language defined:

3. No_Task_Hierarchy

All (nonenvironment) tasks depend directly on the environment
task of the partition.

4. No_Nested_Finalization

Objects with controlled parts and access types that designate
such objects shall be declared only at library level.

5. No_Abort_Statements

There are no abort_statements, and there are no calls on
Task_Identification.Abort_Task.

6. No_Terminate_Alternatives

There are no selective_accepts with terminate_alternatives.

7. No_Task_Allocators

There are no allocators for task types or types containing
task subcomponents.

8. No_Implicit_Heap_Allocations

There are no operations that implicitly require heap storage
allocation to be performed by the implementation. The
operations that implicitly require heap storage allocation
are implementation defined.

9. No_Dynamic_Priorities

There are no semantic dependences on the package
Dynamic_Priorities.

10. No_Asynchronous_Control

There are no semantic dependences on the package
Asynchronous_Task_Control.

11. The following restriction_parameter_identifiers are language
defined:

12. Max_Select_Alternatives

Specifies the maximum number of alternatives in a
selective_accept.

13. Max_Task_Entries

Specifies the maximum number of entries per task. The bounds
of every entry family of a task unit shall be static, or
shall be defined by a discriminant of a subtype whose
corresponding bound is static. A value of zero indicates
that no rendezvous are possible.

14. Max_Protected_Entries

Specifies the maximum number of entries per protected type.
The bounds of every entry family of a protected unit shall be
static, or shall be defined by a discriminant of a subtype
whose corresponding bound is static.
_Dynamic Semantics_

15. If the following restrictions are violated, the behavior is
implementation defined. If an implementation chooses to detect
such a violation, Storage_Error should be raised.

16. The following restriction_parameter_identifiers are language
defined:

17. Max_Storage_At_Blocking

Specifies the maximum portion (in storage elements) of a
task's Storage_Size that can be retained by a blocked task.

18. Max_Asynchronous_Select_Nesting

Specifies the maximum dynamic nesting level of
asynchronous_selects. A value of zero prevents the use of any
asynchronous_select.

19. Max_Tasks

Specifies the maximum number of task creations that may be
executed over the lifetime of a partition, not counting the
creation of the environment task.

20. It is implementation defined whether the use of pragma Restrictions
results in a reduction in executable program size, storage
requirements, or execution time. If possible, the implementation
should provide quantitative descriptions of such effects for each
restriction.
Pascal.

--

--|------------------------------------------------------
--| Pascal Obry Team-Ada Member
--| 45, rue Gabriel Peri - 78114 Magny Les Hameaux FRANCE
--|------------------------------------------------------
--| http://www.obry.org
--| "The best way to travel is by means of imagination"
--|
--| gpg --keyserver wwwkeys.pgp.net --recv-key C1082595
Jul 23 '05 #502
Ioannis Vranos wrote:
Ada subsetting definitions
(what's this?)"


Use language-defined pragmas to declare that your program
doesn't use some parts of Ada. These are instructions to
the compilation system. Then, the run-time system
can be tailored to meet the restricted set of requirements.
In addition, some formal properties of the program are
easier to demonstrate.
For example, a program might not need protected types,
a program may have no nested tasks, or it may have no
allocators (no "new"), or no unchecked_access.
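To make this concrete, here is a minimal sketch (the particular restriction
choices are just an illustration, not from any real project): with GNAT, for
example, such configuration pragmas can go into a gnat.adc file and then
apply to the whole partition.

pragma Restrictions (No_Allocators);           -- no "new" anywhere in the partition
pragma Restrictions (No_Unchecked_Conversion); -- no bit-level reinterpretation
pragma Restrictions (No_Dispatch);             -- no T'Class, hence no dynamic dispatch

A unit that still writes X := new Integer; is then rejected before the
program ever runs, and the run-time system can drop the corresponding
machinery.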

Georg
Jul 23 '05 #503
Georg Bauhaus wrote:
Seems like some ColdFusion restructuring is happening.

ftp://ftp.usafa.af.mil/pub/dfcs/carlisle/asharp/
ftp://sunsite.informatik.rwth-aachen...rlisle/asharp/

Perhaps you can help. How do I set up a .NET Ada development environment?
I have downloaded from various sites so far:

7.266.221 adagide-install.exe
30.259.357 asharp-setup.exe

19.146.036 gnat-3.15p-nt.exe
5.752.374 rapid301.zip
I suppose the last two should not be needed. Are the first two the only
ones needed? AdaGIDE asked for GNAT, while asharp (mgnat) was installed.
In this cache:
http://66.102.9.104/search?q=cache:3...ient=firefox-a

it says that AdaGIDE integrates fully with A#.
I am providing the cache here because as I said the original site is not
accessible.
Not to mention that I receive anonymous security error for the site
provided here:

"If you do not have GNAT 3.11p (or later), get it from:
ftp://ftp.cs.nyu.edu/pub/gnat/winnt

AdaGIDE uses features specific to GNAT 3.11"
So much security, and my interest in Ada has already declined to almost 0.
Being amused by this "paranoia", I checked some local sites and found
the latest GNAT here (just to help if someone else is interested in Ada):

ftp://ftp.ntua.gr/pub/lang/gnat/3.15p/winnt/
Any guidance for setting up an Ada .NET development environment?

--
Ioannis Vranos

http://www23.brinkster.com/noicys
Jul 23 '05 #504
Ioannis Vranos wrote:
The current Ada standard library includes for example
distributed systems,
information systems ("money computing"),
string processing,
interfaces to other languages,
real-time facilities, and
Ada subsetting definitions.

Ada 2005 adds more features to the standard, including
linear algebra support, and
more file and network I/O.
One question is, can these facilities be implemented with the language
itself, or does someone have to use another language to do the real work?


In theory all features could be programmed in Ada itself, as the Ada ISO
standard contains system programming and "inline assembler" in one of the
optional Annexes.

In practice it depends on the target operating system. If you have a Linux
distribution you can check the "practice" part yourself - just install
the GCC sources and check the "gcc/ada" directory - there are 34 C files
and 755 Ada files there. That's for the library and the compiler.

Martin

--
mailto://kr******@users.sourceforge.net
Ada programming at: http://ada.krischik.com

Jul 23 '05 #505
Ioannis Vranos wrote:
"If you do not have GNAT 3.11p (or later), get it from:
ftp://ftp.cs.nyu.edu/pub/gnat/winnt

AdaGIDE uses features specific to GNAT 3.11"
So much security, and my interest in Ada has already declined to almost 0.
Not sure it has anything to do with a programming language.
The sftp at cs.nyu.edu is new, and affects the whole site.

Try http://libre.act-europe.fr/
Also, there is a mirror at ftp.informatik.rwth-aachen.de
Any guidance for setting up an Ada .NET development environment?


Depends on what you define to be a .NET DE. You can get the GNAT
Programming System (GPS), supporting Ada (and C++) in various ways.
You have AdaGIDE, it seems. I like GLIDE which is based on Emacs
and speedbar. Use whatever else you have for WIMP.NET programming
when needed.

Jul 23 '05 #506
Georg Bauhaus wrote:
Not sure it has anything to do with a programming language.
The sftp at cs.nyu.edu is new, and affects the whole site.

Try http://libre.act-europe.fr/
Also, there is a mirror at ftp.informatik.rwth-aachen.de

OK, thanks for the links.

Depends on what you define to be a .NET DE.

An editor (with a Designer - aka RAD - preferably), that works with "A#"
(and not only for Win32 API).
--
Ioannis Vranos

http://www23.brinkster.com/noicys
Jul 23 '05 #507
Randy Brukardt wrote:

[ the AJPO delegating testing authority before it was shut down ... ]
I don't know whether it was done formally or not; I wasn't involved
with that.
Such a thing was either done formally, or not at all.
In any case, it is irrelevant to the existence of the ACAA, operated
under the requirements of an ISO standard, for performing conformity
assessment of Ada compilers. Or do you want to deny that I actually
exist, that the ACAA and its websites, test suites, and authorized
laboratories exist?


The question is not one of existence. The original claim, however, was
one of their being "official" authorities. The claim that this testing
is somehow "official" seems to be based on two claims: 1) that the
AJPO delegated the authority before it was shut down, and 2) that to
some extent or other, you're acting as part of, under the sanction of,
or are in some other way connected with the ISO (e.g. "Although the Ada
tool vendors finance it through the ARA, the ACAA's real boss is the
ISO" from http://www.adaic.com/compilers/acaa.html).

At this point, it seems quite doubtful to me that the AJPO did any such
thing.

The ISO has this to say:

ISO itself does not carry out conformity assessment. However,
in partnership with IEC (International Electrotechnical
Commission), ISO develops ISO/IEC guides and standards to be
used by organizations which carry out conformity assessment
activities. The voluntary criteria contained in these guides
and standards represent an international consensus on what
constitutes best practice.

(taken from:
http://www.iso.org/iso/en/aboutiso/i...on/index.html).

The bottom line seems to be that the ISO will take responsibility for
the fact that they wrote the standard you claim to follow -- but that
they disavow any and all other involvement with, or any more than the
most peripheral knowledge of you.

So, 1) one of the statements on your web site is probably outright
false, and another is saved from outright falsehood primarily by being
sufficiently vague as to only qualify as "misleading"; 2) your claim
about my making a false statement was _probably_ itself false, and at
best certainly devoid of support; and 3) your statements about the quality
of an Ada compiler are no more "official" than mine or anybody else's.

The basic idea of an "official" verification process is that the
testing be done by a disinterested party -- one who gains _only_ by the
tests being accurate, NOT by them producing any particular result. What
we have here seems to be exactly the opposite: the Ada vendors have put
together a couple of puppet groups to give an illusion of there being
some distance between themselves and the testers, but in reality the
testers realize full well exactly where all their money comes from, and
at least some of them are even still directly associated with the very
Ada vendors they claim to be policing!

That isn't intended to imply, nor do I claim, that any of the testing
involved has ever been falsified or even mildly inaccurate. OTOH, it
throws considerable doubt on the claim that these tests should be
trusted because they're conducted by an "official" authority operating
with the ISO as its boss.

So, the question is not one of whether you exist -- but of whether you
and/or your testing should be trusted. In my view, your own posts have
thrown this into considerable doubt (at best).

--
Later,
Jerry.

The universe is a figment of its own imagination.

Jul 23 '05 #508
"Jerry Coffin" <jc*****@taeus.com> wrote in message
news:11**********************@o13g2000cwo.googlegroups.com...
....
So, the question is not one of whether you exist -- but of whether you
and/or your testing should be trusted. In my view, your own posts have
thrown this into considerable doubt (at best).


The reason Jerry comes to this conclusion is based on a single summary
statement from a 5 year-old summary article; he never looks at the testing
procedures or the ISO standard that governs those procedures. Moreover, he
seems to claim reasons for this conclusion that are clearly stated in the
article (at least I hope they're clear; I'm not that great a writer...).

I'm always amazed at the lengths that some opponents of Ada will go to in
order to discredit any advantage of the language. Is this because they feel
threatened by the language? (I can't imagine why.) Or do they have a
deep-seated hatred of things that start with 'A'? I don't know. You can't
please (or even reason with) all of the people all of the time.

Anyway, for the benefit of anyone else still reading this thread, I'll
explain how the testing is done, and why it can be trusted. (Whether such
testing has value in any case is another issue that could be argued forever,
but that's not the question.)

The testing process is an instantiation of the ISO testing standard (ISO/IEC
18009). The detailed procedures that are followed are available on the AdaIC
website, along with the test suite itself. They were developed from the
preexisting AJPO procedures with the oversight of a panel of users,
laboratories, and vendors.

Actual testing is done by testing laboratories (ACALs). These have to be
independent of any Ada vendor. Vendors contract directly with the
laboratories for testing; this part of the process is unchanged from when
AJPO and NIST ran it.

The ACAA is the oversight agency. We provide maintenance to the test suite,
and judge disputes between the vendors and the testing labs. We also
spot-check the laboratories' work to ensure that they have followed the
procedures. This is the role that previously was handled by AJPO (in an
office called the AVO).

The only real difference (other than the ISO standard, which codified the
rules that AJPO had followed) is that the ACAA is funded in part by the ARA.
The ACAA is primarily funded by certificate fees, but the ARA does make up
any shortfall. The ARA is an association of Ada product vendors (not just
compiler vendors).

While there is a potential conflict of interest, it is irrelevant in
practice for three reasons:
(1) The ACAA is only a judge. Moreover, technical issues are resolved by
discussion with a panel of Ada experts; if that is not satisfactory, the
ruling can be appealed to the ARG (the Ada standard maintenance group within
the Ada working group of ISO/IEC SC22 WG9). At most, the ACAA can influence
this process, not decide it.
(2) ARA includes most of the major Ada vendors. It would be hard to have
a conformity issue that they all would agree on that would not also be in
the interest of all Ada vendors.
(3) The testing in question is conformity assessment. It is not about
testing usability in any way, so it has little to do with the factors on
which vendors compete. The testing is designed to provide a yes/no answer
(well, it's a little more nuanced than that) as to whether a compiler is
compliant. There isn't a lot of leeway in this process for outside
influence.

While I could imagine an outcome from this process that would favor Ada
vendors over Ada users, that would seem to be counter-productive (why hurt
your customers?). Moreover, as the testing is about conformity and not
usability, most issues that matter to users don't even fall under the
testing umbrella. Of course, the main question (Does this compiler properly
implement the Ada Standard?) is of significant interest to users,
particularly those that will need to use more than one compiler over the
life of their project.

As to whether someone could cheat in the process, I would say it would not
be the least bit hard. The results of the tests are always handled by the
vendors before the testers see them, so there is always the possibility of
games. There is nothing new about this possibility (I know or suspect
several cases where results were falsified under the AJPO testing). I think it
actually would be harder now, because (1) I've been on the other side of the
fence, so I have some idea of what to look for; and (2) we require more
information in the test report than AJPO did. We also post the test reports
publicly, so anyone can check on them. (The only AJPO test reports I ever
saw, before I was sent several cartons of old AVO material, were the ones we
had done.) We even have a public test results dispute procedure (which
thankfully never has been used); it's unclear whether AJPO even had one.

In any event, the real benefit is that there is a *single* conformity test
suite that *all* Ada vendors use for testing. That means that there can be
no confusion about what has and has not been tested. Most differences
between Ada compilers (beyond those allowed by the Standard) are those that
come from untested combinations of Ada features.

Randy Brukardt


Jul 23 '05 #509
Randy Brukardt wrote:

[ ... ]
The reason Jerry comes to this conclusion is based on a single
summary statement from a 5 year-old summary article;
First of all, I came to no solid conclusions. Second, there was far
more reasoning behind what I said than you admit here. Fortunately,
anybody who wishes to do so can easily go back and read about the
Federal Register -- which is available for searching by essentially
anybody who wishes to do so. If your statements are really true, why
don't you quit trying to put words in my mouth and simply show the
proof?

[ ... ]
Moreover, he
seems to claim reasons for this conclusion that are clearly stated
in the article (at least I hope they're clear; I'm not that great a
writer...).
Except for the part about your (lack of) writing ability, I have no
idea what this sentence is even attempting to say. Commenting on your
own writing ability would _seem_ to imply that you're discussing the
writing on your web site, though. If that's the case, I have to wonder
about the logic involved -- it _sounds_ more or less as if you may be
attempting to offer your own statements on your own web site as
providing "proof" of the statements you've made here!
I'm always amazed at the lengths that some opponents of Ada will go
in order to discredit any advantage of the language. Is this because
they feel threatened by the language? (I can't imagine why.) Or do they
have a deep-seated hatred of things that start with 'A'? I don't
know.
This has nothing to do with discrediting the language.

Consider, for example, buying ICs. I can find not only who
made the ICs, but exactly which fabrication facility they were made at.
I can find ISO 900x certification(s) on that facility, and I can find
out who did that certification. I can find who they use as a failure
analysis lab, whether they're ISO 900x certified, and if so who
certified THEM. In short, I don't simply depend on their own claims --
I can find a chain of documentation that leads to people they hope I
trust.

The question is the degree to which that situation holds here. Thus
far, you've provided essentially nothing in the way of independent
corroboration of any of your statements at all. You make a statement
here, and then offer your own statement on your own web site as the
"corroborating evidence."
You can't please
(or even reason with) all of the people all of the time.


At least in this thread, you don't seem to have even attempted to
_reason_ with anybody at all. You've made unsupported claims, and when
perfectly legitimate concerns are raised, you've resorted to suggesting
that they result from fear and/or irrational hatred.

--
Later,
Jerry.

The universe is a figment of its own imagination.

Jul 23 '05 #510
> The question is the degree to which that situation holds here. Thus
far, you've provided essentially nothing in the way of independent
corroboration of any of your statements at all.


So how does one provide independent corroboration over usenet? If such
a method exists, is the method's cost reasonable? Is there an ISO
standardized method?

Jul 23 '05 #511
quote 1
(e.g. "Although the Ada
tool vendors finance it through the ARA, the ACAA's real boss is the
ISO" (from http://www.adaic.com/compilers/acaa.html).
-and-

quote 2 The ISO has this to say:


When I read your post I recognized that the phrase "the ISO" in quotes
one and two has distinct meanings. That is, they don't refer to the
same thing. However, the rest of your post seems to rely on them
meaning the same thing. How do you account for this?

Jul 23 '05 #512
"Jerry Coffin" <jc*****@taeus.com> wrote in message
news:11**********************@o13g2000cwo.googlegroups.com...
....
The question is the degree to which that situation holds here. Thus
far, you've provided essentially nothing in the way of independent
corroboration of any of your statements at all. You make a statement
here, and then offer your own statement on your own web site as the
"corroborating evidence."


I can't imagine what sort of "independent corroboration" would satisfy you
(or why it matters, for that matter). I've already pointed you at the ISO
standard and the WG9 web site. Pretty much all of the other material on
testing was written by me. There's nothing sinister about that: I'm the only
one that is paid here, and thus I end up writing it. Of course, the
volunteer review board reviewed and approved that material. But even if that
was done publicly, it wouldn't satisfy you, because *I* am the webmaster of
the site and therefore would have probably formatted and posted the
material. Our budget isn't large enough to have multiple people doing
overlapping jobs just to prove some sort of independence.

Our way of proving reliability is to conduct the entire process in public.
Thus, the procedures, test reports, and test suite are all publicly
available. And I insist that these have enough information in them so that
any member of the public can reproduce the testing. (This isn't like the
destructive testing of ICs that only a few specialists can run -- anyone who
knows how to run a compiler can run the tests on it; and they had better get
the same results.) That means that any fault in the process can be exposed,
and that should provide a strong incentive to avoid bending the rules.

If that is not good enough for you, then there is nothing else I can say.
Personally, I would trust an open process over one that is not open, no
matter how independent the latter is. After all, the supposedly independent
government run process was cheated several times; because the reports
weren't readily available to the public, there was little chance that those
would be detected.

Randy.


Jul 23 '05 #513
Chad R. Meiners wrote:

[ ... ]
So how does one provide independent corroboration over usenet?
Usually by pointing to a reasonably recognizable reference. Given that
Usenet is carried primarily over the Internet, something like the
address of a recognizable web site is an obvious possibility.
If such a method exists, is the method's cost reasonable?
It's usually free.
Is there an ISO standardized method?


I'm not sure what you're asking here -- if you're asking whether the
ISO has methods for standardizing things, of course they do.

--
Later,
Jerry.

The universe is a figment of its own imagination.

Jul 23 '05 #514
Chad R. Meiners wrote:
quote 1
(e.g. "Although the Ada
tool vendors finance it through the ARA, the ACAA's real boss is the ISO" from http://www.adaic.com/compilers/acaa.html).


-and-

quote 2
The ISO has this to say:


When I read your post I recognized that the phrase "the ISO" in quotes
one and two has distinct meanings. That is, they don't refer to the
same thing. However, the rest of your post seems to rely on them
meaning the same thing. How do you account for this?


Is this a troll? What in the world do you think "the ISO" could refer
to other than the ISO?

--
Later,
Jerry.

The universe is a figment of its own imagination.

Jul 23 '05 #515
Robert A Duff wrote:

[ ... ]
Sorry, I don't know how to say it politely, but this argument (on
both sides) is patently ridiculous. For one thing, neither C++ nor
Ada 95 existed at the time the Arpanet was created.
Previously noted.
For another thing, Ada has never been "a language the DoD considered
its own" -- the design has always been entirely open.
For goodness sakes, they even chose a
foreigner (Jean Ichbiah, who is French) to do the original 1983
language design!


Hmm...so your position is that the DoD _couldn't_ feel possessive about
it because at least part of that would be irrational?

Perhaps we should start over from the beginning. Hi. Welcome to our
planet. We like to call it "earth." We call ourselves "people".
Rationality in individuals is rare and in large groups, nonexistent.
Unfortunately, even though you seem to speak (or at least type) English
just like the TV shows said you would, your thinking is apparently so
different from ours that we'll probably never be able to truly
communicate. :-)

--
Later,
Jerry.

The universe is a figment of its own imagination.

Jul 23 '05 #516
Wes Groleau wrote:

[ ... ]
Interesting. Ada is bad because you _think_ it won't let you do what
you want to do.
First of all, I've never said Ada was bad -- at worst, I've said that I
found some things about it frustrating. Second, my opinions are based
on use, not just what I think might be true about it. Finally, I've
openly admitted that some (perhaps all) of my opinions may be obsolete
since my experience predates Ada 95.
But it would be better if it prevented other people
from doing something you think they shouldn't do.


It's been close to 10 years now that Java has had its multi-level
break, and in that time I'm not at all certain I've seen even _one_
instance of its use that wouldn't have been better off without it.

In the end, designing a (good) programming language requires a lot of
judgement calls, not simply applying some simplistic rule like "allow
everything". A theoretically ideal programming language would allow
everything that was intended and desired, but stop everything that
wasn't.

Real languages deal in compromises -- one is that permitting more of
what people want to do also generally allows more things they didn't
really want to do.

I think this "feature" has an even greater cost though. While most
people recognize that the undisguised goto is something to generally
avoid, many (especially those too young to have dealt with FORTRAN II
computed gotos) rarely consciously realize that a break (of any sort)
is simply a restricted form of a goto -- and the less restricted it is,
the more it's just a goto by a different name.

The result is that when people are faced with using a goto to exit from
a deeply nested loop, they usually have second thoughts -- and with a
little more thought, realize that the code can be re-structured to
eliminate the real problem that led them to want to do that in the
first place.

OTOH, when that goto is disguised as a "multi-level break", it appears
(at least to me) that they're much more willing to just use the
goto^H^H^H^Hmulti-level break instead of fixing the code.

In fairness, the multi-level goto is marginally more justifiable in
Java than in Ada -- Java copied all of C's biggest mistakes, including
its screwed up switch/case statement. It's quite often useful to have a
switch statement in a loop, and exit the loop as one of the legs of the
switch -- but the screwed-up switch makes this a multi-level break.

Regardless of other shortcomings I might see in its design, Ada did get
its case statement right, so a single-level break suffices for this
situation.
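To make that concrete, here is a small Ada sketch of the loop-plus-case
situation (the procedure name, loop name, and end-of-input marker are
invented for illustration, not taken from anyone's code): case alternatives
never fall through, so exiting the loop from inside an alternative is still
only a single-level exit.

with Ada.Text_IO; use Ada.Text_IO;
procedure Count_Letters is
   Ch    : Character;
   Count : Natural := 0;
begin
   Scan : loop
      Get (Ch);
      case Ch is
         when '.'        => exit Scan;           -- leaves the loop, not just the case
         when 'a' .. 'z' => Count := Count + 1;  -- no break needed; no fall-through either
         when others     => null;
      end case;
   end loop Scan;
   Put_Line ("Letters:" & Natural'Image (Count));
end Count_Letters;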

--
Later,
Jerry.

The universe is a figment of its own imagination.

Jul 23 '05 #517
Jerry Coffin wrote:
In fairness, the multi-level goto is marginally more justifiable in
Java than in Ada -- Java copied all of C's biggest mistakes, including
its screwed up switch/case statement. It's quite often useful to have a
switch statement in a loop, and exit the loop as one of the legs of the
switch -- but the screwed-up switch makes this a multi-level break.

I think that wanting to break a loop from inside a case block is very
rare; after all, one places (should place) minimal code in switch cases.
On the other hand, although the designers of C#/CLI claimed their switch
implementation to be better than C++'s (as with every other language
detail, they were very excited about C#/CLI in the beginning and viewed
it as a cure-all kind of thing; now they have switched back to C++ as
the systems programming language of .NET), I think it is inferior. It
goes something like this if you want to use more than one case:
switch(value)
{
case 1:
{
// ...

goto case 2;
}

case 2:
{
// ...
break;
}
case 3:
{
// ...
break;
}
}
with no implicit fall-through between cases. Personally I do not like this goto stuff.

--
Ioannis Vranos

http://www23.brinkster.com/noicys
Jul 23 '05 #518
the ISO = the ISO standard for Ada
the ISO = the International Organization for Standardization

Jul 23 '05 #519

"Jerry Coffin" <jc*****@taeus.com> wrote in message
news:11**********************@l41g2000cwc.googlegroups.com...
The mindset that embraces Ada simply would never have designed things
that way. Heck, I'm clearly on the C++ side of the fence, and I still
find many of them at least mildly distasteful. Had they been designed
by Ada programmers, the hackish character would be gone. Instead, the
system would be designed to operate in harmony as a coordinated system.

Possibly. However, I have seen more than enough "hacked" Ada to wonder
whether, in the hands of the larger community, the unruly would still run amuck,
even with Ada as their language. Although the default for most constructs in
Ada tends toward the notion of "safe," there are lots of ways to "relax" the
language's emphasis on safety. I recall that when I first learned Ada, I used
to first read Appendix F of any compiler manual so I could find ways to get
down to the machine level, do workarounds, and migrate back to my original
assembler programmer roots. It took me quite a while to finally understand
the value of limited types, the strict model that separates scope and
visibility,
and other important [engineering] ideas unique to Ada.

As to C and C++, the rules are much more relaxed at the outset than they
are in Ada. It is more difficult to take a language that is not type-safe and
get programmers to make it type-safe than it is to take a language that is
type-safe and selectively relax the rules - something we do routinely
in Ada.
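For instance, here is a minimal sketch of what "selectively relaxing the
rules" can look like (the names are invented, and it assumes a target where
Float is 32 bits wide, as on most machines): the unsafe reinterpretation has
to be instantiated explicitly, so it is easy to find and audit, while
everything around it stays type-safe.

with Ada.Unchecked_Conversion;
with Interfaces;
procedure Peek_Bits is
   -- Explicit, named instantiation of the unchecked conversion.
   function Bits_Of is new Ada.Unchecked_Conversion
     (Source => Float, Target => Interfaces.Unsigned_32);
   Raw : constant Interfaces.Unsigned_32 := Bits_Of (1.5);
begin
   null;  -- Raw now holds the machine representation of 1.5
end Peek_Bits;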

Richard Riehle
Jul 23 '05 #520

"Ioannis Vranos" <iv*@remove.this.grad.com> wrote in message
news:1110645475.497596@athnrd02...

There are compliance tests for C++ too.

True. However, I have never known a single DoD contractor
to require a validated C++ compiler. I have never heard of
a single DoD program manager who required a validated C++
compiler. If anyone did demand this level of rigor, there would
be far less C++ used by DoD contractors, but there might also
be some improvement, ultimately, in C++ compilers.

If someone chooses Ada for a military project, one of the first
questions from the program manager is, "Is the compiler validated?"
Ada is almost always held to a higher standard than other
languages - and this is as it should be.

Still, when we hold one development environment (an Ada compiler)
to a higher standard, we need to understand that difference when
making comparisons. It costs a great deal more money to produce
a compiler that must pass validation (now conformity) than to
create one in which the defects are intended to be discovered
by the users and fixed in some future release.

Richard Riehle
Jul 23 '05 #521

"Robert A Duff" <bo*****@shell01.TheWorld.com> wrote in message
news:wc*************@shell01.TheWorld.com...

...but of course there's a lot of market pressure to produce
standard-conforming compilers, for those languages that have official
standards (Ada, C, C++, Fortran, Cobol, etc).

Not Eiffel?

Richard Riehle
Jul 23 '05 #522
ad******@sbcglobal.net wrote:
"Robert A Duff" <bo*****@shell01.TheWorld.com> wrote in message
news:wc*************@shell01.TheWorld.com...

...but of course there's a lot of market pressure to produce
standard-conforming compilers, for those languages that have
official standards (Ada, C, C++, Fortran, Cobol, etc).

Not Eiffel?


TTBOMK, Eiffel doesn't have an official standard. If there was an ISO
standard, it would naturally fall under JTC1/SC22, but the ISO
JTC1/SC22 web site: http://www.open-std.org/jtc1/sc22/, doesn't mention
Eiffel anywhere.

Searching for Eiffel on the ANSI, ECMA and IEEE web sites fails to turn
up anything positive either -- ANSI and ECMA come up completely dry and
while IEEE cites a number of papers on Eiffel, none that I could find
looked like an official standard.

--
Later,
Jerry.

The universe is a figment of its own imagination.

Jul 23 '05 #523
Jerry Coffin wrote:
ad******@sbcglobal.net wrote:
"Robert A Duff" <bo*****@shell01.TheWorld.com> wrote in message
news:wc*************@shell01.TheWorld.com...
...but of course there's a lot of market pressure to produce
standard-conforming compilers, for those languages that have
official standards (Ada, C, C++, Fortran, Cobol, etc).


Not Eiffel?

TTBOMK, Eiffel doesn't have an official standard. If there was an ISO
standard, it would naturally fall under JTC1/SC22, but the ISO
JTC1/SC22 web site: http://www.open-std.org/jtc1/sc22/, doesn't mention
Eiffel anywhere.

Searching for Eiffel on the ANSI, ECMA and IEEE web sites fails to turn
up anything positive either -- ANSI and ECMA come up completely dry and
while IEEE cites a number of papers on Eiffel, none that I could find
looked like an official standard.


http://www.eiffel-nice.org/standards/

Eiffel The Language 2nd Printing
Bertrand Meyer's book describes the Eiffel language, and was adopted by
NICE as the language standard.

--
Yermat
Jul 23 '05 #524
"Jerry Coffin" <jc*****@taeus.com> writes:
ad******@sbcglobal.net wrote:
"Robert A Duff" <bo*****@shell01.TheWorld.com> wrote in message
news:wc*************@shell01.TheWorld.com...

...but of course there's a lot of market pressure to produce
standard-conforming compilers, for those languages that have
official standards (Ada, C, C++, Fortran, Cobol, etc).

Not Eiffel?


TTBOMK, Eiffel doesn't have an official standard. If there was an ISO
standard, it would naturally fall under JTC1/SC22, but the ISO
JTC1/SC22 web site: http://www.open-std.org/jtc1/sc22/, doesn't mention
Eiffel anywhere.

Searching for Eiffel on the ANSI, ECMA and IEEE web sites fails to turn
up anything positive either -- ANSI and ECMA come up completely dry and
while IEEE cites a number of papers on Eiffel, none that I could find
looked like an official standard.


I think ECMA is working on an Eiffel standard.

See:

http://www.ecma-international.org/memento/TC39-TG4.htm

- Bob
Jul 23 '05 #525
az****@yahoo.es (Alberto) writes:
"Dmitry A. Kazakov" <ma*****@dmitry-kazakov.de> wrote in message news:<1l******************************@40tude.net> ...
But even then bounds checking is not needed for every access. Example:

procedure Dynamic (A : Some_Array) is
subtype Index is Array_Index range A'Range;
J : Index := A'First;

for I in A'Range loop
A (I) := .. -- No checks
...
J := I;
end loop;
A (J) := ... -- Still no checks

C++ completely lacks the notion of constrained subtypes which makes the
above possible.


If Some_Array is any array supplied at run-time, how can the compiler
know what values are stored in A'Range? (I mean, what the bounds of
the array are) It *has* to check these bounds at run-time.


No, that's not correct. Some compilers can and do prove that I is
within the bounds of A, and therefore avoid generating any checking
code. Neither the array bounds, nor the value of I, need be known
at compile time.

In general, if *you* can prove something about a program, then it is
possible to teach a compiler to do so. Not necessarily *feasible*, but
*possible*. This is related to what Appel calls the "full employment
act for compiler writers" or some such -- no matter how smart a compiler
is, it's always possible to write a smarter one.

But in the above code, it is both possible and feasible to eliminate
*all* of the bounds checking. Thus, there's no need to suppress checks
on the above code.
2. C++ is unable to allocate objects of indefinite size on the stack.


Well, we have alloca() for this ;)


Is alloca part of the C++ standard? I thought not, but I could be
wrong. Anyway, alloca doesn't really do all that Ada can do in this
regard (avoiding heap usage for dynamic-sized things). In this case,
the Ada feature is both more efficient and safer than alloca.
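A tiny sketch of the Ada side (the names here are hypothetical): N is only
known at run time, yet the buffer is an ordinary stack-allocated local
object, with no heap and no alloca, and it is reclaimed automatically when
the procedure returns.

procedure Demo (N : in Positive) is
   Buffer : array (1 .. N) of Integer := (others => 0);  -- bounds fixed when Demo is entered
begin
   Buffer (N) := 42;
end Demo;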

- Bob
Jul 23 '05 #526
Ioannis Vranos <iv*@remove.this.grad.com> writes:
Ioannis Vranos wrote:
Actually the Boost type that you are probably looking for is:
"uint_value_t: The smallest built-in unsigned integral type that
supports the given value as a maximum. The parameter should be a
positive number."


So the example becomes:

#include <boost/integer.hpp>
int main()
{
using namespace boost;

// Value range 0..16
uint_value_t<16>::least my_var;

my_var= 9;
}
This is the equivalent of 0..whatever.
Myself thinks though that this whole range specialisation thing is
non-sense for regular application programming at least.


Indeed. :-)


Ah, so Ioannis Vranos agrees with Ioannis Vranos here,
so it must be true? ;-)

Actually, integer subranges are quite useful. For example, in some
cases they allow the compiler to ensure (without any run time checking)
that the integer index into an array matches the bounds of that array.
Including in cases where the bounds are not known at compile time.
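A small sketch of that idea (types and names invented for illustration): the
loop parameter can, by construction, only hold valid indexes of R, so the
compiler may omit the per-element check even though R's bounds are not known
until run time.

type Day is range 1 .. 366;
type Rainfall is array (Day range <>) of Float;

function Total (R : Rainfall) return Float is
   Sum : Float := 0.0;
begin
   for I in R'Range loop   -- I ranges exactly over R's actual bounds
      Sum := Sum + R (I);  -- so no run-time bounds check is required here
   end loop;
   return Sum;
end Total;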

- Bob
Jul 23 '05 #527
ad******@sbcglobal.net wrote:
Still, when we hold one development environment (an Ada compiler)
to a higher standard, we need to understand that difference when
making comparions. It costs a great deal more money to produce
a compiler that must pass validation (now conformity) than to
create one in which the defects are intended to be discovered
by the users and fixed in some future release.


If one has a copy of the validation test suite, why is it necessary
to wait for the users to find the bugs? Is it that you have to pay
for the cost of fixing the bugs that the users would never encounter?

Paul
Jul 23 '05 #528
Robert A Duff wrote:
Actually, integer subranges are quite useful. For example, in some
cases they allow the compiler to ensure (without any run time checking)
that the integer index into an array matches the bounds of that array.
Including in cases where the bounds are not known at compile time.


OK, however as I have mentioned before, one can also provide this type
of guarantees in C++, without any run-time checks:
#include <vector>
int main()
{
using namespace std;

vector<int> intVector;

for(vector<int>::size_type i=0; i<intVector.size(); ++i)
; // ...
}

#include <vector>
int main()
{
using namespace std;

vector<int> intVector;

for(vector<int>::const_iterator p= intVector.begin();
p!=intVector.end(); ++p)
;// ...
}


// Not real value, for display purposes only
#include <algorithm>
#include <vector>

inline int doubleF(const int &arg) { return 2*arg; }

// ...

vector<int> someArray(10, 1);

// ...

transform( someArray.begin(), someArray.end(),
someArray.begin(), doubleF );

etc.


--
Ioannis Vranos

http://www23.brinkster.com/noicys
Jul 23 '05 #529

"Ioannis Vranos" <iv*@remove.this.grad.com> wrote in message
news:1110326720.837893@athnrd02...

It began as "C with classes", but this does not mean "it was designed
for" that only.

And that is the unfortunate aspect of the language: its foundation on C.

I once heard Stroustrup confess that, if he had had his druthers, he would
not have started with C as the seed language.

Many of the original design ideas for C++ were abandoned to keep the
language more manageable. For example, delegation was an early idea
that failed to make it into the final design. Templates were not in the
C++ language until much later.

C++ continues to evolve, but much of that evolution seems to follow a
course of shoring up things already in the language that don't quite work
as one might prefer, or adding a truss here and a buttress there to
prevent or enable deficiencies in the language; e.g., cast-away const,
a truly silly addition to the language.

Finally, one must give credit and admiration to Dr. Stroustrup for the superb
work he did in leveraging a mediocre language such as C into a language
that has caught on with such a wide audience. One can only wonder, given
his genius, what a great language he might have created if he had not been
an AT&T employee, forced to start with C, those many years ago.

Richard Riehle

Jul 23 '05 #530

"Falk Tannhäuser" <fa*************@crf.canon.fr> wrote in message
news:d0**********@s1.news.oleane.net...

Perhaps the closest way you can get to this in C++ is

std::vector<foo_type> Data;
...
std::for_each(Data.begin(), Data.end(), DoSomething);

where "DoSomething" evaluates to a so-called "function object"
having an "operator()" accepting a (reference to) "foo_type".

OK. Try this in straight C++.

type Index is range -800_000..12_000_000;
type Decimal_Fraction is digits 12 range -473.0 .. 2_000.0;

type Vector is array (Index range <>) of Decimal_Fraction;

V1 : Vector ( -47..600); -- note the negative index range
V2 : Vector (-1.. 10); -- also a negative index range
V3 : Vector (42..451); -- index starts higher than zero;
...............................

function Scan (The_Vector : Vector ; Scan_Value : Decimal_Fraction )
return Natural;
-- This function can process any of those vector instances
without modification
.............................
-- possible implementation of the function
function Scan (The_Vector : Vector; Scan_Value : Decimal_Fraction)
   return Natural is
   Scan_Count : Natural := 0;          -- Natural begins at zero
begin
   for I in The_Vector'Range loop      -- no way to index off the end of the array parameter
      if The_Vector (I) = Scan_Value then   -- looking for an exact match
         Scan_Count := Scan_Count + 1;      -- increment the count
      end if;
   end loop;
   return Scan_Count;                  -- return required; never an implicit return
end Scan;
.....................................................

I submit this in response to the observation someone made about the alleged
added difficulty Ada imposes on the programmer in flexibility of
expressiveness. In my own experience, Ada is far more expressive of a larger
range of idioms than C++. Note my use of the word "expressive." We can
express any idea in any language, but some languages are more expressive of
some ideas than others.

This is just one example of Ada's flexibility in managing arrays. I could
fill many pages with more examples. Of interest, here, is how easy it is to
have an array index that begins at a value other than zero, and how easy it
is to create a function that will accept any array of any range defined for
that index. Yes, I know you can do this in C++, but from my perspective, it
is not nearly as easy, expressive, or readable.

Counter-examples to expressiveness can be illustrated in C++. For example,
many programmers prefer += to the x := x + 1 syntax. This is a minor
convenience compared to the final program in Ada.

Richard Riehle

Disclaimer: I did not compile the code before submitting it, so there might
be some minor errors. RR

Jul 23 '05 #531
ad******@sbcglobal.net wrote:
OK. Try this in straight C++.

type Index is range -800_000..12_000_000;
type Decimal_Fraction is digits 12 range -473.0 .. 2_000.0;

type Vector is array (Index range <>) of Decimal_Fraction;

V1 : Vector ( -47..600); -- note the negative index range
V2 : Vector (-1.. 10); -- also a negative index range
V3 : Vector (42..451); -- index starts higher than zero;
...............................

function Scan (The_Vector : Vector ; Scan_Value : Decimal_Fraction )
return Natural;
-- This function can process any of those vector instances
without modification
.............................
-- possible implementation of the function
function Scan (The_Vector : Vector; Scan_Value : Decimal_Fraction)
   return Natural is
   Scan_Count : Natural := 0;          -- Natural begins at zero
begin
   for I in The_Vector'Range loop      -- no way to index off the end of the array parameter
      if The_Vector (I) = Scan_Value then   -- looking for an exact match
         Scan_Count := Scan_Count + 1;      -- increment the count
      end if;
   end loop;
   return Scan_Count;                  -- return required; never an implicit return
end Scan;
.....................................................

I submit this in response to the observation someone made about the alleged
added difficulty Ada imposes on the programmer in flexibility of
expressiveness. In my own experience, Ada is far more expressive of a larger
range of idioms than C++. Note my use of the word "expressive." We can
express any idea in any language, but some languages are more expressive of
some ideas than others.

I think you mean something like:
#include <map>
#include <iostream>

int main()
{
using namespace std;

map<int, double> id;

id[-400]= 7.1;

id[-2500]= -1;

id[10]= 9;

id[-300]= 7.1;

unsigned counter= 0;

for(map<int, double>::const_iterator p= id.begin(); p!=id.end(); ++p)
{
if(p->second== 7.1)
{
cout<<"7.1 was found at index "<<p->first<<"\n";

++counter;
}
}

cout<<"\n7.1 was found "<<counter<<" times in total.\n";
}
C:\c>temp
7.1 was found at index -400
7.1 was found at index -300

7.1 was found 2 times in total.

C:\c>
I think this is even faster than yours, which scans the entire range.
And a more elegant solution with the same efficiency:
#include <map>
#include <iostream>
#include <algorithm>
class comp: public std::unary_function<std::pair<int, double>, bool>
{
const double searchValue;

public:

comp(const double value):searchValue(value) {}

bool operator() (const std::pair<int, double> &arg) const
{
if(arg.second==searchValue)
return true;

return false;
}
};

int main()
{
using namespace std;

map<int, double> id;

id[-400]= 7.1;

id[-2500]= -1;

id[10]= 9;

id[-300]= 7.1;

cout<<"\n7.1 was found "<<count_if(id.begin(), id.end(), comp(7.1))
<<" times in total.\n";
}
C:\c>temp

7.1 was found 2 times in total.

C:\c>


This is just one example of Ada's flexibility in managing arrays. I could
fill many pages with more examples. Of interest, here, is how easy it is to
have an array index that begins at a value other than zero, and how easy it
is to create a function that will accept any array of any range defined for
that index.

In C++ you can create whatever you want. Even a container with abstract
conceptual features (you are limited only by your imagination).

Yes, I know you can do this in C++, but from my perspective, it is not
nearly as easy, expressive, or readable.

I think it is even better. That said, I like Ada enough too. :-)

--
Ioannis Vranos

http://www23.brinkster.com/noicys
Jul 23 '05 #532
ad******@sbcglobal.net wrote:
Possibly. However, I have seen more than enough "hacked" Ada to wonder
whether, in the hands of the larger community, the unruly would still run amuck,
even with Ada as their language. Although the default for most constructs in


I don't know how much effort I've put into persuading colleagues
that an Ada access type and a C pointer are NOT always interchangeable.

Or how many times they turned a deaf ear to my recommendation to
think about a warning (that the source and target of an
unchecked_conversion instantiation were not the same size).

--
Wes Groleau

He that is good for making excuses, is seldom good for anything else.
-- Benjamin Franklin
Jul 23 '05 #533
In article <d1**********@avnika.corp.mot.com>, Paul Dietz <pa**********@motorola.com> writes:
ad******@sbcglobal.net wrote:
Still, when we hold one development environment (an Ada compiler)
to a higher standard, we need to understand that difference when
making comparisons. It costs a great deal more money to produce
a compiler that must pass validation (now conformity) than to
create one in which the defects are intended to be discovered
by the users and fixed in some future release.


If one has a copy of the validation test suite, why is it necessary
to wait for the users to find the bugs? Is it that you have to pay
for the cost of fixing the bugs that the users would never encounter?


One of the costs of fixing defects not reported by a user is the non-zero
probability that making a change will introduce additional defects.

Clearly that is unacceptable if the chance of some user discovering
the original defect can be proven to be zero.

It is required (someday) if the chance of some user discovering
the original defect can be proven to be one.

Unfortunately, most such probabilities lie somewhere in the vast middle.
Jul 23 '05 #534
Ioannis Vranos wrote:
ad******@sbcglobal.net wrote:
OK. Try this in straight C++.
type Index is range -800_000..12_000_000;
type Decimal_Fraction is digits 12 range -473.0 .. 2_000.0;

type Vector is array (Index range <>) of Decimal_Fraction;

V1 : Vector ( -47..600); -- note the negative index range
function Scan (The_Vector : Vector ; Scan_Value : Decimal_Fraction )
return Natural;
for I in The_Vector'Range -- No way to index off the
I think you mean something like:
[redesigning the problem?]
map<int, double> id;

id[-400]= 7.1;
for(map<int, double>::const_iterator p= id.begin(); p!=id.end(); ++p)
I think this is even faster than yours, which scans the entire range.
Hm. Searching a *map* of entries at numeric keys is different
from scanning an array of values and counting occurrences. What
are you trying to do here?
The std::vector is missing an instantiation argument which adds
the guarantee that no index value is outside the range
-800_000..12_000_000;
std::map<int, double> is a different beast entirely, with
unknown size. Consider Vector ( -47..600);

(How do you make a subrange of double? That part is missing from
your example.)

Imagine an array shared between a number of threads. The program's
task is to count the number of occurrences of a particular value
in the array. Examples:
1) A shop has 10 unique doors (use an enum). For each door 4 states
can be measured: open/closed, van/no van at the door.
2) Teams of 5 players, each team identified by a number drawn from a fixed
set of team numbers. An array (an array, not some other data
structure) measures the number of players from each team present in
a room. Count the number of odd-team players in the room.

I hope these examples illustrate some points. They are not meant to
trigger a discussion as to whether an array is the best data
structure for everything. (Note that it might be necessary to read
values from the array/Vector using random access in O(1), and to
store and replace values in O(1), another reason to use an array.)
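A rough Ada sketch of example 1 (all type and function names here are
invented, not Georg's): the array is indexed directly by the enumeration,
reads and writes are O(1), and counting a value is a plain scan.

type Door is (D1, D2, D3, D4, D5, D6, D7, D8, D9, D10);
type Door_State is (Closed_No_Van, Closed_Van, Open_No_Van, Open_Van);
type Shop_Front is array (Door) of Door_State;

function Vans_Present (S : Shop_Front) return Natural is
   Count : Natural := 0;
begin
   for D in Door loop
      if S (D) = Closed_Van or S (D) = Open_Van then
         Count := Count + 1;
      end if;
   end loop;
   return Count;
end Vans_Present;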

And a more elegant solution with the same efficiency:
A solution to a different problem, I think.

In C++ you can create whatever you want. Even a container with abstract
conceptual features (you are limited only by your imagination).


You can do that using assembly language or SmallTalk, whatever.
I think this was not the point, but I should have Richard Riehle's
message speak for itself.
For example, look here, and reconsider the programming task as
originally stated:
Yes, I know you can do this in C++, but from my perspective, it is not
nearly as easy, expressive, or readable.


Georg
Jul 23 '05 #535

"T Beck" <Tr********@Infineon.com> wrote in message
news:11**********************@g14g2000cwa.googlegroups.com...

Adrien Plisson wrote:

Ada is also used in particular rockets called missiles, and its job
inside is just to make them fall... but fall gracefully.


Would that happen to include the now-infamous Patriot Missiles (the ones
that couldn't hit a barn after they'd been on for a while due to a
bug)?


No. Those missiles were not programmed in Ada.

On the other hand, there have been software failures written
in Ada, just as there have been software failures written in C,
C++, Jovial, Fortran, etc.

The use of Ada does not eliminate the potential for software failure.
No language, by itself, can guarantee there will be no defects. The
best we can hope for, in the current state of software engineering,
is to minimize the risk of failure.

Ada is designed to help reduce risk. That is a primary concern when
choosing Ada. Still, we are always dealing with tradeoffs.

A language that allows compilers to generate some kind of result for
every syntactic expression is likely to also permit erroneous constructs.
This is OK when that language is used in the hands of a person who
never makes mistakes. Of course, as the size of the code grows
larger, and the number of functions increases, human comprehension
begins to face more challenges in managing the complexity.

A language that provides a set of constraints at the outset, such as
Ada, can be annoying for little programming jobs carried out by
one person. As the software product requires more and more
people, more and more code units, and more and more time to
complete, these constraints can be useful because they prohibit
certain practices that lead to unexpected errors.

Software risk management is becoming a more and more respectable
part of software engineering practice. Language choice is only one
part of the risk management decision matrix, and we are not going to
enumerate the many other facets of risk here. However, language
can have an impact on risk.

I have seen software developed in many different languages over a
long period of time. In my own view, and I don't have formal
research to support this, Ada has had value in reducing risk of
failure "when used as directed." C++, and the C family in general,
increases the risk of failure simply because one does not have the
built-in constraints that exist in Ada. A careful programmer can
create excellent, defect-reduced, software in C++. A sloppy
programmer can grind out horrible, unsafe code in Ada (by
relaxing the built-in constraints). All in all, though, I see Ada as
more risk averse than most of its competitors.

What is the cost of being risk averse? That question is open to many
different interpretations. There certainly is a cost. Is that cost worth
the level of risk avoidance we hope to achieve with Ada? The answer
to this question will also vary according to the experience, biases, and
economic decisions favored by those who must make the choices.

Richard Riehle
Jul 23 '05 #536
In article <jR***************@newssvr21.news.prodigy.com>, <ad******@sbcglobal.net> writes:
A language that provides a set of constraints at the outset, such as
Ada, can be annoying for little programming jobs carried out by
one person. As the software product requires more and more
people, more and more code units, and more and more time to
complete, these constraints can be useful because they prohibit
certain practices that lead to unexpected errors.


I think there is another figure of merit which should be considered
along with the number of programmers -- the proximity of users to a
single programmer.

To support a hobby of mine, I will single-handedly program in TECO
macros (a strong competitor to APL in the write-only-programming race).
I am the only one who runs the program, and there is nowhere else
to point the finger if it fails.

But for real commercial code (one program now at about 200,000 SLOC), I use
Ada as a single programmer, since I want my programming failures to
be evident on my computer, not on that of the customer of my customer.
Jul 23 '05 #537
Ioannis Vranos wrote:
ad******@sbcglobal.net wrote:
OK. Try this in straight C++.
[skip Ada program]

I think you mean something like:
[skip C++ programs]
This is just one example of Ada's flexibility in managing arrays. I could fill many pages with
more examples. Of interest, here, is how easy it is to have an array index that begins at
a value other than zero, and how easy it is to create a function that will accept any array
of any range defined for that index.

In C++ you can create whatever you want. Even a container with
abstract conceptual features (you are limited only by your imagination).

Yes, I know you can do this in C++, but from my
perspective, it is not nearly as easy, expressive, or readable.

I think even better. That said, I like Ada well enough too. :-)

--
Ioannis Vranos


Ciao,

Ioannis wrote a C++ program that, in his intention, would operate
like the Ada code that was provided as an example of language
expressiveness.

Unfortunately that C++ code doesn't resemble the Ada one. Have a
closer look at what Richard wrote:

type Index is range -800_000..12_000_000;
type Decimal_Fraction is digits 12 range -473.0 .. 2_000.0;
type Vector is array (Index range <>) of Decimal_Fraction;

V1 : Vector ( -47..600); -- note the negative index range
V2 : Vector (-1.. 10); -- also a negative index range
V3 : Vector (42..451); -- index starts higher than zero;

He declared three arrays that contain elements of type
Decimal_Fraction; each element carries at least 12 decimal digits of
precision on whatever machine the program compiles on, and each variable
of the type is checked to be in the range -473.0 .. 2_000.0.
Each array has a different number of elements, and no array with
more than 12_800_001 elements is permitted to be created.

Furthermore you used a tree (std::map<> is often implemented as a
red-black tree) and read it sequentially, searching on the second
element of each entry. I don't think this is the most efficient way to
implement what is better thought of as an array. I mean that if I
always have to read every element from beginning to end, I would prefer
to insert them in a vector.
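
Just to illustrate what I mean, a minimal sketch (the values and names
here are only illustrative):

#include <vector>
#include <algorithm>
#include <cstddef>
#include <iostream>

int main()
{
    std::vector<double> values; // sequential storage, scanned front to back

    values.push_back(7.1);
    values.push_back(-1.0);
    values.push_back(9.0);
    values.push_back(7.1);

    // std::count walks the whole sequence once; exact comparison with 7.1
    // works here only because the same literal was stored above.
    std::ptrdiff_t n = std::count(values.begin(), values.end(), 7.1);

    std::cout << "7.1 was found " << n << " times in total.\n";
}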

Now I remember that you wrote "This is faster than yours, which scans
the entire range". I can be wrong, since I don't know enough of either
C++ or Ada, but it seems that you scanned the entire range too. Isn't
that true? Anyway, I suppose that Richard's attention was on the array
declarations and not on the functions operating on them.

Regards,

fabio de francesco

Jul 23 '05 #538
Georg Bauhaus wrote:
Hm. Searching a *map* of entries at numeric keys is different
from scanning an array of values and counting occurrences. What
are you trying to do here?

Just using the appropriate available container. :-)

The std::vector is missing an instantiation argument which adds
the guarantee that no index value is outside the range
-800_000..12_000_000;

vector provides the member function at() that performs range checking.
Also, if you want a vector that has signed integer subscripts, or even
floating-point ones, you can easily write one. However this does not
"feel" like C++, whose built-in arrays always store elements from index
0 upwards.
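
For instance, a rough sketch of such a wrapper (the class name and
interface are invented for this illustration; range violations are
reported at run time, like vector::at()):

#include <vector>
#include <stdexcept>
#include <iostream>

// A vector-like container indexed by values in [low, high] instead of
// [0, size). Purely illustrative; assumes low <= high.
template<class T>
class RangedVector
{
    long low_, high_;
    std::vector<T> data_;

public:
    RangedVector(long low, long high)
        : low_(low), high_(high), data_(high - low + 1) {}

    T& at(long i) // range-checked access, like vector::at()
    {
        if(i < low_ || i > high_)
            throw std::out_of_range("index outside declared range");
        return data_[i - low_];
    }

    long low()  const { return low_; }
    long high() const { return high_; }
};

int main()
{
    RangedVector<double> v(-47, 600); // roughly like V1 : Vector (-47 .. 600)

    v.at(-47)= 7.1;
    v.at(600)= 2.5;

    try
    {
        v.at(601)= 0; // outside the declared range
    }
    catch(std::out_of_range &e)
    {
        std::cout<<e.what()<<"\n";
    }
}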
std::map<int, double> is a different beast entirely, with
unknown size. Consider Vector ( -47..600);

(How do you make a subrange of double, which is missing from
your example.)
Do you mean like this?

#include <map>
#include <iostream>

int main()
{
using namespace std;

map<double, double> id;

id[-400.1]= 7.1;

id[-2500.6]= -1;

id[10.43]= 9;

id[-300.65]= 7.1;

unsigned counter= 0;

for(map<double, double>::const_iterator p= id.begin(); p!=id.end(); ++p)
{
if(p->second== 7.1)
{
cout<<"7.1 was found at index "<<p->first<<"\n";

++counter;
}
}

cout<<"\n7.1 was found "<<counter<<" times in total.\n";
}
C:\c>temp
7.1 was found at index -400.1
7.1 was found at index -300.65

7.1 was found 2 times in total.

C:\c>
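(One caveat with the double keys and values: the comparison p->second== 7.1
matches only because the very same literal was stored above; with computed
values, exact floating-point equality would be fragile.)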

You can also check the entire range if you want. Here is the previous
style along with your full-range (expensive) style:
#include <map>
#include <iostream>

int main()
{
using namespace std;

map<int, double> id;

id[-400]= 7.1;

id[-2500]= -1;

id[10]= 9;

id[-300]= 7.1;

unsigned counter= 0;

for(map<int, double>::const_iterator p= id.begin(); p!=id.end(); ++p)
{
if(p->second== 7.1)
{
cout<<"7.1 was found at index "<<p->first<<"\n";

++counter;
}
}

cout<<"\n7.1 was found "<<counter<<" times in total.\n\n\n";
// Checks *every* index from -2500 to 10 inclusive.
// Note: map::operator[] default-inserts an entry for every missing key,
// so this loop also grows the map as a side effect.
counter=0;
for(int i= -2500; i<=10; ++i)
{
if(id[i]== 7.1)
{
cout<<"7.1 was found at index "<<i<<"\n";

++counter;
}
}

cout<<"\n7.1 was found "<<counter<<" times in total.\n";
}
"for(int i= -2500; i<10; ++i)" can also be written as

for(int i= id.begin()->first; i<(--id.end())->first; ++i)

if you want this kind of abstraction. But better abstraction is the
iterator approach and the *best* the count family which (iterator and
count family) work with *all* containers.
These are bullet-proof code approaches.
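
For example, the 7.1 count in the map above can also be written with
std::count_if and a small predicate (just a sketch; the predicate name
is mine):

#include <map>
#include <utility>
#include <algorithm>
#include <cstddef>
#include <iostream>

// True when the mapped value (the pair's second member) equals 7.1
bool second_is_7_1(const std::pair<const int, double> &p)
{
    return p.second== 7.1;
}

int main()
{
    std::map<int, double> id;

    id[-400]= 7.1;
    id[-2500]= -1;
    id[10]= 9;
    id[-300]= 7.1;

    // The same call works with any container's iterator range.
    std::ptrdiff_t counter= std::count_if(id.begin(), id.end(), second_is_7_1);

    std::cout<<"7.1 was found "<<counter<<" times in total.\n";
}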
Imagine an array shared between a number of threads. The program's
task is to count the number of occurrences of a particular value
in the array. Examples:
1) A shop has 10 unique doors (use an enum).

We can use whatever we like, perhaps strings. What is wrong with "Door 1"
etc.? :-)

For each door 4 states
can be measured: open/closed, van/no van at the door.

OK, this sounds easy. What do you think? Since you want an array:
#include <vector>
#include <algorithm>
#include <functional>
#include <iostream>

class Door
{
bool open, van;

public:
Door(){ open= van= true; }

bool IsOpen() const { return open; }
bool IsClosed() const { return !open; }
bool IsVan() const { return van; }
bool IsNoVan() const { return !van; }

Door &SetOpen() { open= true; return *this; }
Door &SetClosed() { open= false; return *this; }
Door &SetVan() { van= true; return *this; }
Door &SetNoVan() { van= false; return *this; }
};

class CheckOpen
{
bool open, van;

public:
CheckOpen(bool isopen, bool isvan):open(isopen), van(isvan) {}

bool operator() (const Door &arg)
{
return open== arg.IsOpen() && van== arg.IsVan();
}
};
int main()
{
using namespace std;

vector<Door> doors(10);

doors[4].SetOpen().SetNoVan();

unsigned counter= count_if(doors.begin(), doors.end(),
CheckOpen(true, false));

cout<<"Counted "<<counter<<" occurrences.\n";
}
C:\c>temp
Counted 1 occurrences.

C:\c>

2) 5-player teams, each identified by a number drawn from a fixed
set of team numbers. An array (an array, not some other data
structure) measures the number of players from each team present in
a room. Count the number of odd-team players in a room.

This is easy too. I do not get your point. This can be done with arrays
as well as with other containers. Why should we be restricted to one type
of container?
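
A minimal sketch, with made-up team numbers and per-team counts (only
the counting idea matters here):

#include <iostream>

int main()
{
    // players_in_room[t] holds the number of players of team number t
    // currently in the room; teams 0..7 are an assumption of this sketch.
    const int NUM_TEAMS= 8;
    int players_in_room[NUM_TEAMS]= { 0, 5, 2, 0, 3, 1, 0, 4 };

    int odd_team_players= 0;

    for(int t= 0; t<NUM_TEAMS; ++t)
        if(t % 2 != 0) // odd team number
            odd_team_players+= players_in_room[t];

    std::cout<<"Odd-team players in the room: "<<odd_team_players<<"\n";
}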
I hope these examples illustrate some points. They are not meant to
trigger a discussion as to whether an array is the best data
structure for everything. (Note that it might be necessary to read
values from the array/Vector using random access in O(1), and to
store and replace values in O(1), another reason to use an array.)

OK.

In C++ you can create whatever you want. Even a container with
abstract conceptual features (you are limited only by your imagination).

You can do that using assembly language or SmallTalk, whatever.
I think this was not the point, but I should have Richard Riehle's
message speak for itself.


I am not talking about assembly. It is not difficult to write a
container that takes a signed integer as a subscript and performs range checking.

For example, look here, and reconsider the programming task as
originally stated:
Yes, I know you can do this in C++, but from my
perspective, it is not nearly as easy, expressive, or readable.

I think C++ is probably more so, and at least as much. That said, I like Ada
somewhat. :-)

--
Ioannis Vranos

http://www23.brinkster.com/noicys
Jul 23 '05 #539
fabio de francesco wrote:
Ciao,

Ioannis wrote a C++ program that, in his intention, would operate
like the Ada code that was provided as an example of language
expressiveness.

Unfortunately that C++ code doesn't resemble the Ada one. Have a
closer look at what Richard wrote:

type Index is range -800_000..12_000_000;
type Decimal_Fraction is digits 12 range -473.0 .. 2_000.0;
type Vector is array (Index range <>) of Decimal_Fraction;

V1 : Vector ( -47..600); -- note the negative index range
V2 : Vector (-1.. 10); -- also a negative index range
V3 : Vector (42..451); -- index starts higher than zero;

It isn't difficult to implement a vector that accepts signed ranges too,
*with* range checking; the default ones begin from 0 because this is the
natural C++ feel. vector::at() performs range-checked access.
#include <vector>
#include <exception>
#include <iostream>
int main() try
{
std::vector<int> vec(10);

vec.at(12)=4;
}

catch(std::exception &e)
{
std::cerr<<e.what()<<"\n";
}
C:\c>temp
invalid vector<T> subscript

C:\c>

In other words, in C++ it does not make much sense to have signed indexes.
Otherwise it is easy and possible.
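
As a sketch of the "easy and possible" part, the bounds can even be made
part of the type with non-type template parameters (the names are
invented for this illustration; the bounds are compile-time constants,
but the index itself is still checked at run time):

#include <stdexcept>
#include <string>
#include <iostream>

// Bounds are part of the type: FixedRangeArray<double, -1, 10> and
// FixedRangeArray<double, 42, 451> are different types. Invented for
// this sketch; it assumes Low <= High.
template<class T, long Low, long High>
class FixedRangeArray
{
    T data_[High - Low + 1];

public:
    T& operator()(long i) // range-checked element access
    {
        if(i < Low || i > High)
            throw std::out_of_range("index outside Low..High");
        return data_[i - Low];
    }
};

int main()
{
    FixedRangeArray<double, -1, 10> v2; // roughly like V2 : Vector (-1 .. 10)

    v2(-1)= 7.1;
    v2(10)= 9;

    try
    {
        v2(11)= 0; // rejected at run time
    }
    catch(std::out_of_range &e)
    {
        std::cerr<<e.what()<<"\n";
    }
}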

--
Ioannis Vranos

http://www23.brinkster.com/noicys
Jul 23 '05 #540
<ad******@sbcglobal.net> wrote in message
news:9l**************@newssvr21.news.prodigy.com...
And that is the unfortunate aspect of the language: its foundation on C.

I once heard Stroustrup confess that, if he had had his druthers, he would
not have started with C as the seed language.


Stroustrup said in an interview that his preference would have been Algol
with classes.

I think that I probably would have preferred that as well!
Jul 23 '05 #541
"Frank J. Lhota" <NO******************@verizon.net> writes:
<ad******@sbcglobal.net> wrote in message
news:9l**************@newssvr21.news.prodigy.com...
And that is the unfortunate aspect of the language: its foundation on C.

I once heard Stroustrup confess that, if he had had his druthers, he would
not have started with C as the seed language.
Stroustrup said in an interview that his preference would have been Algol
with classes.


Hmm... Isn't that language called Simula 67? ;-)
I think that I probably would have preferred that as well!


- Bob
Jul 23 '05 #542
Georg Bauhaus wrote:
I hope these examples illustrate some points. They are not meant to
trigger a discussion as to whether an array is the best data
structure for everything. (Note that it might be necessary to read
values from the array/Vector using random access in O(1), and to
store and replace values in O(1), another reason to use an array.)

I forgot to say here that the cost of map's operator[] is O(log(n))
which is fairly cheap for large amounts of data.

--
Ioannis Vranos

http://www23.brinkster.com/noicys
Jul 23 '05 #543
Robert A Duff wrote:
"Frank J. Lhota" <NO******************@verizon.net> writes:
Stroustrup said in an interview that his preference would have been Algol
with classes.
Hmm... Isn't that language called Simula 67? ;-)

Darn! You beat me to it!
Jul 23 '05 #544
Ioannis Vranos wrote:
Georg Bauhaus wrote:
I forgot to say here that the cost of map's operator[] is O(log(n))
which is fairly cheap for large amounts of data.


compare O(log(n)) to O(1) where n is 1, 1000, 1_000_000.
Make this access a part of an inner loop.

Georg
Jul 23 '05 #545
Ioannis Vranos wrote:
Georg Bauhaus wrote:
Hm. Searching a *map* of entries at numeric keys is different
from scanning an array of values and counting occurrences. What
are you trying to do here?
Just using the appropriate available container. :-)


Nope.
The std::vector is missing an instantiation argument which adds
the guarantee that no index value is outside the range
-800_000..12_000_000;


vector provides method at() that performs range checking.


I think we've been through this. Again, this is not the point.
Also if you
want a vector that has signed integer subscripts or even floating point,
We want a vector type indexed by values between M and N only *and* we
want the compiler + tools to help us make data structures in accord
with these ranges. We want it to take advantage of what it can learn
from the type system which allows index range checking _at compile time_.
(How do you make a subrange of double, which is missing from
your example.)



Do you mean like this?


No. I mean a double subtype whose values range from N.M to N'.M'.

I think you are missing the point here. As someone else said this is
not about functions operating on containers.

[STL for constructs] These are bullet-proof code approaches.
Unfortunately, this is not relevant in this discussion, which is not
just about algorithms.
Imagine an array shared between a number of threads. The program's
task is to count the number of occurrences of a particular value
in the array. Examples:
1) A shop has 10 unique doors (use an enum).

We can use whatever we like, perhaps strings. What is wrong with "Door 1"
etc.? :-)
I see the smiley but this is what might be wrong with your
approach: a map indexed by strings is not the same as an array
indexed by ten and only ten numbers. (Actually the numbers are in a
type which includes only 10 numbers and their operations.)

Imagine a Matrix. String indexing of matrix elements seems possible,
but you'd rather have proper arrays. There is something
wrong with ("Row 5", "Column 1") indexing unless you have plenty of
time and a lot of RAM.
For each door 4 states
can be measured: open/closed, van/no van at the door.


OK, this sounds easy. What do you think? Since you want an array:

bool operator() (const Door &arg)
Nice. However, operator() doesn't return one of the four states,
it returns a Boolean, but OK.
(If the door is closed and there is a van, you still want some action to
take place. Likewise for the other cases. I know you can write this;
no need, as this is not the point, see below.)
{
return open== arg.IsOpen() && van== arg.IsVan();
}
vector<Door> doors(10);
Here you can see one point that you might want to demonstrate:
The compiler won't tell you that there is something wrong
with

doors[10].SetOpen().SetNoVan();

Worse, the program won't tell you either. This shows the missing
link between vector indexing and the base type system in your
approach. You could use

doors.at(10).SetOpen().SetNoVan();

and handle the exception _at run time_.
In Ada, the compiler will tell you: "index value 10 is no good!"
because the array "doors" can only be indexed by values effectively
between 0 .. 9. These and only these are the values of the type
enumerating the ten doors, and only these are allowed as index
values x in expressions doors(x).
No exception handling, no .at() needed when you listen to your
compiler and fix the indexing error before you deliver a program.
You get this for free as a result of the language's type handling
at compile time.

2) 5-player teams, each identified by a number drawn from a fixed
set of team numbers. An array (an array, not some other data
structure) measures the number of players from each team present in
a room. Count the number of odd-team players in a room.


This is easy too. I do not get your point. This can be done with arrays
as also with other containers. Why should we be restricted to one type
of container?


There is a reason that arrays still exist. One of the reasons
should be obvious when comp.realtime is on the recipient list.
Again, imagine a wave file manipulation process.
A map indexed by strings is probably not the recommended container
when you need fast matrix computations. In fact, a map might not be
suitable at all irrespective of its key type, when r/w should be in O(1).

- Given an enum, and
- given a language that allows the enum as a basis for the construction
of an array type in the type system (not using some run time computation
method, like those you have shown here, IINM)
- given that the compiler can use its knowledge of the enum
+ when it sees an array type based on the enum
+ when it sees an array
+ when it sees an array indexed by a statically known enum value
+ etc.,
you have
(a) useful names for objects in your problem domain, checked at compile-time
(b) a conceptual link between the enum (naming the single items) and
a container _type_ (containing these items); you cannot use anything
but these named numbers for indexing
(c) the fastest possible access, for both reading and writing, possibly
checked at compile time
(d) etc.

The STL descriptions provide further reasoning on why there can be restrictions
on the uses of specific containers in specific situations, viz. O(f(n)).
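
To make the contrast concrete, here is about the closest plain-C++
approximation I can think of (just a sketch, not a claim about any
library): it keeps the enum and the array, but loses the compile-time
link between them.

#include <iostream>

// Ten doors named by an enum; the array is sized from the enum.
enum Door { Door_1, Door_2, Door_3, Door_4, Door_5,
            Door_6, Door_7, Door_8, Door_9, Door_10, NUM_DOORS };

enum DoorState { ClosedNoVan, ClosedVan, OpenNoVan, OpenVan };

int main()
{
    DoorState state[NUM_DOORS]= { ClosedNoVan }; // O(1) read/write by door name

    state[Door_5]= OpenNoVan;

    // Nothing stops state[10] or state[some_int]: to the compiler the index
    // is just an integer, unlike an Ada array indexed by the enum type.
    int open_no_van= 0;
    for(int d= Door_1; d<NUM_DOORS; ++d)
        if(state[d]== OpenNoVan)
            ++open_no_van;

    std::cout<<"Doors open with no van: "<<open_no_van<<"\n";
}

The enum gives readable names and the array gives O(1) access, so (a) and
(c) are partly there, but the index is just an integer to the compiler, so
the compile-time checking in (b) and (c) is missing.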

I hope these examples illustrate some points. They are not meant to
trigger a discussion as to whether an array is the best data
structure for everything. (Note that it might be necessary to read
values from the array/Vector using random access in O(1), and to
store and replace values in O(1), another reason to use an array.)


What if a compiler or other tool can show that the following expression
(pseudo notation)

array_variable.at_index [n + m] <- f(x)

does not need an index range check on the lhs? (Again, yes, it is possible
to write correct programs. The question is, does one notation + compilation
system have advantages when compared to another? What is the price to pay?)
It is not difficult to write a
container getting a signed integer as a subscript and have range checking.


Which is not the point.

Georg
Jul 23 '05 #546
"Paul Dietz" <pa**********@motorola.com> wrote in message
news:d1**********@avnika.corp.mot.com...
ad******@sbcglobal.net wrote:
Still, when we hold one development environment (an Ada compiler)
to a higher standard, we need to understand that difference when
making comparions. It costs a great deal more money to produce
a compiler that must pass validation (now conformity) than to
create one in which the defects are intended to be discovered
by the users and fixed in some future release.


If one has a copy of the validation test suite, why is it necessary
to wait for the users to find the bugs? Is it that you have to pay
for the cost of fixing the bugs that the users would never encounter?


I'm not quite sure what your point is. But I think Richard was comparing a
compiler that was conformity tested against one that was not. So, such a
system wouldn't have been tested against any test suite.

Of course, in actual practice, there is a continuum of testing practice,
from formal conformity assessment to completely untested. I doubt that there
are many commercial systems on either extreme of that continuum, but Ada
systems tend to be close to the formally tested end.

Randy.

Jul 23 '05 #547
Georg Bauhaus wrote:
We want a vector type indexed by values between M and N only *and* we
want the compiler + tools to help us make data structures in accord
with these ranges. We want it to take advantage of what it can learn
from the type system which allows index range checking _at compile time_.

First of all, it is easy to write a container in C++ that accepts a
specified range of indexes. The only reason that there is not one is
that it does not make sense in C++. (When I learned some Pascal in the
past, I could not understand what the use of negative indexes in arrays
was.) The [0, +] style maps closely to what is happening in the
machine.

Also it is possible to define operations that are range-checked at run
time; vector::at() is an example.
vector is also a dynamically sized array, so compile-time bounds
checking isn't possible for it / doesn't make sense.
For fixed-size arrays and containers it is true that the compiler does
not catch out-of-bounds access at compile time. However we can do
this explicitly, by using compile-time checked assertions. For example:
#include <vector>
#include <boost/static_assert.hpp>

int main()
{
using namespace std;

const int MAX_SIZE=10;

vector<int> vec(MAX_SIZE);
// Many lines of code
BOOST_STATIC_ASSERT(10<MAX_SIZE); // fails to compile: 10 is not less than MAX_SIZE

vec[10]=4;
}
C:\c\temp.cpp In function `int main()':

14 C:\c\temp.cpp incomplete type `boost::STATIC_ASSERTION_FAILURE<
false>' used in nested name specifier

No. I mean a double subtype whose values range from N.M to N'.M'.

Could you give an example of a container along with such a subtype?
Here you can see one point that you might want to demonstrate:
The compiler won't tell you that there is something wrong
with

doors[10].SetOpen().SetNoVan();

Worse, the program won't tell you either. This shows the missing
link between vector indexing and the base type system in your
approach. You could use

doors.at(10).SetOpen().SetNoVan();

and handle the exception _at run time_.
In Ada, the compiler will tell you: "index value 10 is no good!"
because the array "doors" can only be indexed by values effectively
between 0 .. 9. These and only these are the values of the type
enumerating the ten doors, and only these are allowed as index
values x in expressios doors(x).
No exception handling, no .at() needed when you listen to your
compiler and fix the indexing error before you deliver a program.
You get this for free as a result of the language's type handling
at compile time.

Will the Ada compiler tell you this for user-defined types too? Or is
this restricted to built-in arrays? If the latter is true, then its
value isn't that great.

There is a reason that arrays still exist. One of the reasons
should be obvious when comp.realtime is on the recipient list.
Again, imagine a wave file manipulation process. A map indexed by
strings is probably not the recommended container
when you need fast matrix computations. In fact, a map might not be
suitable at all irrespective of its key type, when r/w should be in O(1).

OK, although O(log(n)) is fairly cheap, let's stick to O(1). However,
personally I think that subranges defined in the style
-1000 .. -400 have hardly any practical use.

- Given an enum, and
- given a language that allows the enum as a basis for the construction
of an array type in the type system (not using some run time computation
method, like those you have shown here, IINM)
- given that the compiler can use its knowledge of the enum
+ when it sees an array type based on the enum
+ when it sees an array
+ when it sees an array indexed by a statically known enum value
+ etc.,
you have
(a) useful names for objects in your problem domain, checked at
compile-time

What do you mean by names?

(b) a conceptual link between the enum (naming the single items) and
a container _type_ (containing these items); you cannot use anything
but these named numbers for indexing

Which has no value in the world of C++, but I guess many things in Ada
depend on this.

(c) the fastest possible access, for both reading and writing, possibly
checked at compile time

Fastest possible access in C++. Checked at run time. Explicitly checked
at compile time with compile-time assertions (which are restricted to
constants). But is the Ada compile-time bounds checking available
to user-defined containers? If not, can compile-time assertions be used?
(d) etc.

The STL descriptions provide further reasoning on why there can be
restrictions on the uses of specific containers in specific situations,
viz. O(f(n)).

The access of vector, deque, string, valarray and bitset (and built-in
arrays) is O(1), and of map O(log(n)), which is fairly cheap.

If you have access to TC++PL 3, you may check section 17.1.2 on page 464.
I hope these examples illustrate some points. They are not meant to
trigger a discussion as to whether an array is the best data
structure for everything. (Note that it might be necessary to read
values from the array/Vector using random access in O(1), and to
store and replace values in O(1), another reason to use an array.)

What if a compiler or other tool can show that the following expression
(pseudo notation)

array_variable.at_index [n + m] <- f(x)

does not need an index range check on the lhs? (Again, yes, it is possible
to write correct programs. The question is, does one notation + compilation
system have advantages when compared to another? What is the price to pay?)
Apart from being eager to see whether this compile-time range checking
is available to user-defined containers, :-) in C++ there is not much
need for run-time bounds checking, if one programs properly. Of course
one can always program improperly. :-)

--
Ioannis Vranos

http://www23.brinkster.com/noicys
Jul 23 '05 #548
Georg Bauhaus wrote:
I forgot to say here that the cost of map's operator[] is O(log(n))
which is fairly cheap for large amounts of data.

compare O(log(n)) to O(1) where n is 1, 1000, 1_000_000.
Make this access a part of an inner loop.


If you do the maths, you will see that log(10^6) isn't that large.
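Concretely, log2(1_000_000) is about 20, so a balanced-tree lookup over a
million entries costs on the order of 20 comparisons, against a single
indexed access for an array.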

--
Ioannis Vranos

http://www23.brinkster.com/noicys
Jul 23 '05 #549

Ioannis Vranos <iv*@remove.this.grad.com> writes:
arrays. The [0, +] style maps closely what is happening in the machine.


Certainly. But the Ada model can map more closely to the problem domain.
And this is an important point. We are not writing software for the machine
but to solve problems. Most of the time you just don't care how such data will
be handled by the machine (i.e. how the compiler will generate the code).

Pascal.

--

--|------------------------------------------------------
--| Pascal Obry Team-Ada Member
--| 45, rue Gabriel Peri - 78114 Magny Les Hameaux FRANCE
--|------------------------------------------------------
--| http://www.obry.org
--| "The best way to travel is by means of imagination"
--|
--| gpg --keyserver wwwkeys.pgp.net --recv-key C1082595
Jul 23 '05 #550
