Bytes IT Community

Systems software versus applications software definitions

How do we define systems programs? When we say systems programming,
does it necessarily mean that the programs we write need to interact
with hardware directly: for example, an OS, a compiler, a kernel,
drivers, network protocols, and so on? A couple of years ago, yes, I
understood this to be definitely true. However, as software
applications become more and more complicated, some people argue
against that. Some argue that the definition of systems programs
depends on the level of abstraction. I have heard people say that a
web server is systems software, which confuses me. I think a web
server is application software. Yes, other applications run on top of
a web server.

Please advise and discuss. thanks!!
Nov 14 '05 #1
54 Replies


>How do we define systems programs?

comp.lang.c does not.
when we say systems programming,
does it necessary mean that the programs we write need to interact
with hardware directly?
That term does not have a precise definition and there is not
a sharp line between systems and applications programming.
The line gets particularly fuzzy when you are talking about
computers embedded into other devices, like cell phones.

No, I'd consider a lot of the low-level network stuff like the
TCP stack to be systems programming, even if it's not specific
to a particular type of network hardware (and there may not BE
any specific network hardware beyond a serial port. There are
also "tunnel drivers" which have a network stack but don't
actually use any hardware at all).
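As an aside, the hardware-free case Gordon describes is easy to
demonstrate: two TCP endpoints can talk through the kernel's full
network stack over the loopback interface with no physical network
device involved. A minimal sketch (my own illustration, not from the
post):

```python
import socket

def loopback_roundtrip(payload: bytes) -> bytes:
    """Send payload between two TCP sockets over 127.0.0.1 and return
    what the other side received. The kernel's TCP/IP stack does all
    the work, but no physical network hardware is touched."""
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.bind(("127.0.0.1", 0))   # ephemeral port on loopback
    listener.listen(1)
    client = socket.create_connection(listener.getsockname())
    server, _ = listener.accept()
    try:
        client.sendall(payload)
        return server.recv(len(payload))
    finally:
        for s in (client, server, listener):
            s.close()

print(loopback_roundtrip(b"ping"))  # b'ping'
```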
<snip>
I heard people saying that web server is a systems software, which I
feel confused.
The video driver in Microsoft Windows probably calls Internet
Explorer to actually access the hardware :-( In any case, Microsoft
claims IE is so tightly bound into the OS you can't remove it.
I think they said that under oath in court, too.
I think web server is an application software. Yes,
other applications run on top of web server.


I consider the web server in my VOIP terminal adapter to be systems
programming, in large part because it is burned into flash and is
used to configure the adapter. The same thing applies to the Web
Management Card (which runs a web server, SNMP server, and a few
other things) that plugs into my APC UPS, for much the same reason.

On the other hand, Apache running on my PC I consider to be an
application (which, as you said, has other applications running
under it, like PHP and under that various web pages which do various
things, like present an index of my CD collection.)

Gordon L. Burditt
Nov 14 '05 #2

On 24 Nov 2004 16:00:55 -0800, Matt wrote:
<snip>


I personally would say programming an application is getting easier with
time (C#, Java, VB,...). Hardly anyone still bothers with Assembly...

On the topic, for me a system program is one that either a) interacts
with the hardware directly (and is written at a very low level) or b)
interacts with the OS at a low level (PartitionMagic, for example).
Nov 14 '05 #3

On 25 Nov 2004 00:26:42 GMT, Gordon Burditt wrote:
The video driver in Microsoft Windows probably calls Internet
Explorer to actually access the hardware :-( In any case, Microsoft
claims IE is so tightly bound into the OS you can't remove it.
I think they said that under oath in court, too.


Kind of off topic, but... IE can be "safely" removed from the OS.
Nov 14 '05 #4


"Matej Barac" <ma*********@gmail.com> wrote in message
news:1n*****************************@40tude.net...
On 24 Nov 2004 16:00:55 -0800, Matt wrote:
<snip>
I personally would say programming an application is getting easier with
time (C#, Java, VB,...). Hardly anyone still bothers with Assembly...


Unless, of course, you are into embedded software, where it's
regularly necessary in order to implement ISRs, for one instance.
Implementing support for many CPU/HW features can also hardly do
without it.

<snip>
Nov 14 '05 #5

> The video driver in Microsoft Windows probably calls Internet
Explorer to actually access the hardware :-( In any case, Microsoft
claims IE is so tightly bound into the OS you can't remove it.
I think they said that under oath in court, too.


Correct. The Windows shell (the code which manages the taskbar, start menu,
file folder windows and things like the My Computer window) is hard-linked
to MSHTML and is dependent on it.

As for "system" vs. "app" level development - I would judge from the
practical standpoint of a) the tools used and b) the stability
requirements. From such a point of view, all embedded code is system code :)

--
Maxim Shatskih, Windows DDK MVP
StorageCraft Corporation
ma***@storagecraft.com
http://www.storagecraft.com

Nov 14 '05 #6

> Kind of off topic, but... IE can be "safely" removed from the OS.

No.

You can install another browser and use it as the default handler for URLs.
Nothing more. Help files, shell folders and much of XP's new-fashioned UI
are still rendered by MSHTML.

--
Maxim Shatskih, Windows DDK MVP
StorageCraft Corporation
ma***@storagecraft.com
http://www.storagecraft.com

Nov 14 '05 #7

Matt wrote:
<snip>


I'd say that system software generally shares the following characteristics:
- it's performance-sensitive
- it has some dependence on hardware
Now, the kernel and compiler definitely satisfy both requirements,
network protocol stacks mostly the first, drivers mostly the second.

Nov 14 '05 #8

> I'd say that system software generally shares the following
> characteristics:
> - it's performance-sensitive
> - it has some dependence on hardware


I would disagree. For me system software is more or less equivalent to
being part of the trusted computing base, with the more or less implied
side effect that if something unexpected goes wrong with it, you need to
crash the system. Handling hardware is only part of that, and there are
scenarios where you can have software driving hardware directly without
being part of the TCB - rare, but it has happened. Performance - sure,
you would want that, but correctness is top priority. As a counterpoint,
how many OSs have you seen that have been compiled with optimization for
the particular processor model they will run on?

Jan
Nov 14 '05 #9

Hi Matt, I'm Matt too,

Great question, I always wondered that too. I met an embedded programmer in
San Diego years ago. He said the more down to the hardware level he got, the
more exciting it was. Also, maybe somebody could explain this: I've seen a
lot of mainframe positions advertised as "system programmer".

Matt

"Matt" <jr********@hotmail.com> wrote in message
news:ba**************************@posting.google.com...
<snip>

Nov 14 '05 #10

Jan Vorbrüggen <jv**************@mediasec.de> writes:
<snip>
As a counterpoint, how many OSs have you seen that
have been compiled with optimization for the particular processor
model they will run on?


when i did the resource manager ... there were something like 2000
(automated) tests that took 3 months of elapsed time to run as part of
calibrating and verifying the resource manager.
http://www.garlic.com/~lynn/subtopic.html#bench

the standard system maint. process was monthly update (patch?)
distribution called PLC (program level change). It would ship the
cumulative source updates as well as the executable binaries.

I was asked to put out monthly PLC for the resource manager on the
same schedule as the standard system PLC. I looked at the process, and
made a counter-offer of quarterly PLC for the resource manager
.... since I would have to take all the accumulated patches (for the
whole system) and rerun some significant number of the original
validation suite .... and there just weren't the resources to do that
on a monthly basis.
http://www.garlic.com/~lynn/subtopic.html#fairshare
http://www.garlic.com/~lynn/subtopic.html#wsclock

reference to the original resource manager product announcement
http://www.garlic.com/~lynn/2001e.html#45

note that many of the bits & pieces that were in the resource manager
had been available in earlier kernels ... but were dropped over a
period of years. it was eventually decided to collect them up and
package them as a separate distribution.

this was in the period of some transition from free to charged-for
software. at the time, there had been some distinction that
application software could be charged for ... but kernel/system
software (as part of supporting the machine) was free.

the resource manager got to be the guinea pig for the first charged-for
kernel software component ... with a new distinction that kernel
software directly related to hardware support would still be
free ... but other types of kernel software could be charged for.

an interesting paradox then showed up for the next release. the
resource manager shipped with the release prior to the release that
shipped smp support.
http://www.garlic.com/~lynn/subtopic.html#smp

however, much of the SMP design and implementation was predicated on
various features that were part of the resource manager. the problem
was that SMP support obviously directly supported hardware and
therefore was free ... but it now had integral dependencies on features
in the resource manager ... which was priced.

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/
Nov 14 '05 #11

On Thu, 25 Nov 2004 14:52:10 +0000, Matt wrote:
Hi Matt, I'm Matt too,

Great question, I always wondered that too. I met an embedded programmer in
San Diego years ago. He said the more down to the hardware level he got, the
more exciting it was. Also maybe somebody could explain this, I've seen a
lot of mainframe positions advertised as "system programmer"
I remember seeing a nice little table in Datamation many years ago
relating the approximate difficulty of implementing software:

| single-user multi-user
----------------+----------------------------
application | 1 3
system | 3 9 <== HARDEST

[Dunno if that survived formatting?]

One may quibble with the numbers, but they are roughly representative. The
simplest thing to program is a little application, just for yourself. If
one uses that as a reference, then it is very roughly 3x harder to write
something that is multi-user (simultaneously) or system level. The most
complex is something that is both system level and multi-user (like an
operating system kernel). I had to scratch my head to think of what
single-user system-type software would be. I guess that could be a simple (not
multi-user) interface or driver (like a single serial or parallel port?
locked for exclusive use?) or perhaps some embedded software which does
only one thing (while dealing with the real world, interrupts, etc.). The
foreground/background control applications we used to write for DEC
PDP-11s under the RT-11 O/S (er, actually more like a program launcher
like CP/M) might be an example of that kind of thing (or CP/M "drivers"?).

The hardest thing to write (and get right) is multi-user system-level
software, like the Linux kernel, or on mainframes a transaction-processing
monitor like CICS, or even virtual terminal handling stuff like VTAM. Now,
among the hardest of the hard, some are harder than others, so the 9x
multiplier in the table is to be taken with appropriate grains of salt.

Anyway, that comparison table has helped my thinking.

As for "getting closer to the hardware"? Sounds like a control freak. I
guess anyone that likes to program (make machines do what you instruct
them to) is a control freak to some extent. Personally, I think we might
not be as extreme as some Skinnerian psychologists that I have known.

"Matt" <jr********@hotmail.com> wrote in message
news:ba**************************@posting.google.com...
<snip>
I heard people saying that web server is a systems software, which I
feel confused. I think web server is an application software. Yes,
other applications run on top of web server.
I would say a web server is typically a multi-user application. You could
write a single thread server that handles only one request (some tiny
embedded microcomputers have this), but then what is it supposed to do if
more than one request comes in? A real web server, with performance
enhancements, should be designed multi-user, able to handle multiple
requests (pseudo)simultaneously. That is more complex than single-user,
maybe equivalent to simpler system-level (i.e. not kernel) programming.

Please advise and discuss. thanks!!


Starting to smell like homework?

--
Juhan Leemet
Logicognosis, Inc.

Nov 14 '05 #12

"Matt" <jr********@hotmail.com> wrote in message
news:ba**************************@posting.google.com...
<snip>


I would define 'application software' as the programs which directly
do something actually useful to the user. This might be formatting and
printing a document, calculating the results in a spreadsheet, or it
may be compiling a program (if the user is in fact a programmer). But
I'd say it doesn't include, for example, a program for setting the
modem transmission speed, since this isn't normally doing anything
directly useful to the user.

The 'system software' comprises all the programs (or modules or other bits
of software) necessary to enable the applications software to work.

Of course, there will be some programs which don't obviously fit into either
category. For example, what about Windows Explorer or an equivalent program?
Is file manipulation directly useful to the user? Not really; but it is,
sort of. So I think that's the kind of program which occupies a grey area
between 'application' and 'system'.

Obviously, this is a rough definition, but I don't think a more precise
definition would be appropriate.

--
HTH
Nick Roberts

[Cross-posted: comp.software-eng, comp.lang.c, comp.programming,
alt.os.development, comp.arch]

Nov 14 '05 #13

Juhan Leemet <ju***@logicognosis.com> writes:
I remember seeing a nice little table in Datamation many years ago:
relating the approximate difficulty of implementing software:

| single-user multi-user
----------------+----------------------------
application | 1 3
system | 3 9 <== HARDEST


i've frequently claimed that to take a straightforwardly written
application and turn it into a "service" ... it takes 4-10 times the
code and ten times the work.

example that i frequently used was taking the straight-forward stuff
written to support server handling financial transaction and upgrading
it to business critical integrity for electronic commerce
http://www.garlic.com/~lynn/aadsm5.htm#asrn2
http://www.garlic.com/~lynn/aadsm5.htm#asrn3

where service and system are somewhat related since they frequently
tend to have similar business critical integrity requirements.
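As a toy illustration of where that 4-10x goes (my own sketch, not
Lynn's code, with hypothetical names): the business logic stays one
line, while validation, logging, and retry scaffolding multiply around
it once the same logic has to run as a service.

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("svc")

# The "straightforwardly written" version: one line of real logic.
def convert(amount, rate):
    return amount * rate

# The "service" version of the same logic: input validation, logging,
# and retry with backoff around a possibly flaky dependency. The real
# logic is still one line; everything else is integrity scaffolding.
def convert_service(amount, rate, fetch_rate=None, retries=3):
    if amount < 0:
        raise ValueError("amount must be non-negative")
    last_err = None
    for attempt in range(retries):
        try:
            r = fetch_rate() if fetch_rate else rate
            if r <= 0:
                raise ValueError("rate must be positive")
            result = amount * r
            log.info("converted %s at %s -> %s", amount, r, result)
            return result
        except OSError as e:            # transient dependency failure
            last_err = e
            time.sleep(0.01 * (attempt + 1))
    raise RuntimeError("rate service unavailable") from last_err

print(convert(10, 2))                   # 20
print(convert_service(10, 2))           # 20, plus a log line
```

The ratio of scaffolding to logic here is already several to one, and
a real service adds metrics, audit trails, and graceful shutdown on
top of that.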

it used to be that system tended to be associated with things/services
that required elevated privileges .... and that other software
(typically applications) depended on those things. these days with
personal computing ... system could refer to everything involving the
computer.

misc. posts related to assurance:
http://www.garlic.com/~lynn/subpubkey.html#assruance

random past references to business critical integrity requirements and
frequently needing 4-10 times the amount of code:
http://www.garlic.com/~lynn/2001f.html#75 Test and Set (TS) vs Compare and Swap (CS)
http://www.garlic.com/~lynn/2001m.html#4 Smart Card vs. Magnetic Strip Market
http://www.garlic.com/~lynn/2001n.html#91 Buffer overflow
http://www.garlic.com/~lynn/2001n.html#93 Buffer overflow
http://www.garlic.com/~lynn/2002b.html#59 Computer Naming Conventions
http://www.garlic.com/~lynn/2002n.html#11 Wanted: the SOUNDS of classic computing
http://www.garlic.com/~lynn/2003g.html#62 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM
http://www.garlic.com/~lynn/2003j.html#15 A Dark Day
http://www.garlic.com/~lynn/2003n.html#52 Call-gate-like mechanism
http://www.garlic.com/~lynn/2003p.html#37 The BASIC Variations
http://www.garlic.com/~lynn/2004b.html#8 Mars Rover Not Responding
http://www.garlic.com/~lynn/2004b.html#48 Automating secure transactions
http://www.garlic.com/~lynn/2004k.html#20 Vintage computers are better than modern crap !
http://www.garlic.com/~lynn/2004l.html#49 "Perfect" or "Provable" security both crypto and non-crypto?

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/
Nov 14 '05 #14

On Thu, 25 Nov 2004 14:31:20 -0700, Anne & Lynn Wheeler wrote:
<snip>
i've frequently claimed that to take a straight forward written
application and turn it into a "service" ... it takes 4-10 times the
code and ten times the work.


Thanks, that puts some meat on the bones.
example that i frequently used was taking the straight-forward stuff
written to support server handling financial transaction and upgrading
it to business critical integrity for electronic commerce
http://www.garlic.com/~lynn/aadsm5.htm#asrn2
http://www.garlic.com/~lynn/aadsm5.htm#asrn3
Some interesting stuff there. You folks been busy!

<snip>


--
Juhan Leemet
Logicognosis, Inc.
Nov 14 '05 #15

On Thu, 25 Nov 2004 23:25:48 +0100, Andi Kleen wrote:
<snip>


I would consider a state-of-the-art optimizing compiler to be of equal or higher
complexity than a kernel. But it would be 1 in your scheme.

There are probably lots of other counter examples. How about a single-user
application that solves some incredibly complex problem?


Sure, there are always counterexamples, like there is always sniping.

BTW, it is not my scheme. I attributed it to Datamation (from decades ago,
the prehistoric mainframe age, as many today seem to think of it?).

The much simplified table was just something to start (from nothing) to
get an understanding of the flavour of distinctions. Counter-examples
don't really help anyone understand anything, because there is not yet any
structure to counter with the example(s). Do you have any rules and/or
generalizations to help the OP? It does no good to say "complicated,
complicated, you couldn't possibly understand" when someone asks for help.

I would be inclined to "cheat" if/when trying to classify your
counter-examples, which still would not do them justice, but might lessen
the disparity. Most people would consider compilers to be system level
stuff: because they deal with interfaces? to device models? file systems?
in their generalities, and not for just one specific application. One
might even argue that they are kind of multi-user, in that they have to
solve a wide class of problems, using the same approach(es). In fact, they
are sort of multi-user in that programmers compile using each other's
shared dynamic data (headers, libraries, etc.). Whatever...

BTW, I am not sure that an optimizing compiler is more complex than a
(good) operating system kernel. There's a lot of tricky stuff in both.

For the "incredibly complex problem" we should probably be careful. Are
they incredibly complex because scientific research is lacking? theory is
lacking? new paradigms have yet to be designed? That is no longer
"programming" as the OP was asking about. I consider programming to be the
design and implementation of generally known methods to specific problems.
Like implementing a random number generator for a specific application,
using one of Knuth's texts. Otherwise one is doing research (like for a
PhD, or for a "real" patent, not a software hoax). Wouldn't you agree? Not
trying to be insulting, but I would consider programming generally to
require cleverness, not genius. Otherwise, most of us would be unemployed.

However, your definitions may differ... How would you help the OP?

As for "getting closer to the hardware"? Sounds like a control freak. I
guess anyone that likes to program (make machines do what you instruct
them to) is a control freak to some extent. Personally, I think we might


If that was true then libraries would be a lot less popular than they are.


Huh? What kind of libraries? What does that have to do with someone
wanting to control things (as opposed to wanting to control people). Why
does someone start programming? Certainly not to solve world hunger.

--
Juhan Leemet
Logicognosis, Inc.

Nov 14 '05 #16

"Matt" <jr********@hotmail.com> wrote in message
news:ba**************************@posting.google.com...
<snip>


Best start with the people rather than the end product. Now...

Systems programmers dislike end users. Users need software. Ergo
if the user specifies it, the systems programmer won't write it;
the application programmer does that.

Systems programmers only write programs for themselves or other
systems programmers. Rarely they will write programs to support
applications, but only under protest at the inefficency of the
applications running on their finely crafted code accessible only
from the command line. Systems programmers count in hex.

Application programmers like end users, and are only too
delighted to use a mouse to create a GUI. Efficiency isn't a
primary consideration as long as the interface has shaded buttons
and the bitmaps look good together. Application programmers count
in decimal.

Systems programmers tend to eat high fat sugary foods, keep odd
hours and often have poor hygiene because they work alone and
rarely need to spruce up to attend meetings. None aspire to
management. Dyslexia, deafness and atrocious handwriting using
blunt pencils or leaky biros are common. Spell checkers are never
used, as word processors are considered applications. Large
stretches of the code they write are uncommented. Comments are
mainly for the amusement of other systems programmers, or to
illustrate a particularly clever technique. Variables are
normally named i or j, occasionally x and y; function names
contain the words foo and bar, very often together.

Application programmers prefer quiche and salads, work regular
hours and brush up regularly for the large numbers of meetings
with large numbers of other application programmers. Most have
management aspirations. All can handle a pen without inking their
fingers, are attentive, and can spell long words like
specification and acceptance criteria without using a spell
checker. All use a spell checker anyway, just in case. They write
code with comments on every line, and use simple techniques in
case they fall out of favour with management and are asked to do
maintenance. Variable names are often mistaken for comments, so
comments are coloured differently to avoid confusion, and the
word Variable is added to the end of the name just to make sure.
Function names normally contain the words Report or Total or
Function, very often together. They too are colour coded.

Systems programs are the things written by systems programmers.
Will that do?

--
Regards
Alex McDonald
Nov 14 '05 #17

Here in comp.lang.c,
"Alex McDonald" <al******@btopenworld.com> spake unto us, saying:
Systems programmers count in hex.

[Snip]

Application programmers count in decimal.


I count in octal -- what does that make me? :-)
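For what it's worth, the three counting systems in this exchange are a
single format character apart; a throwaway Python check:

```python
n = 2004
# Decimal for application programmers, hex for systems programmers,
# octal for the antediluvian.
print(f"decimal {n} = hex {n:x} = octal {n:o}")
# decimal 2004 = hex 7d4 = octal 3724
```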

--
-Rich Steiner >>>---> http://www.visi.com/~rsteiner >>>---> Smyrna, GA USA
OS/2 + eCS + Linux + Win95 + DOS + PC/GEOS + Executor = PC Hobbyist Heaven!
WARNING: I've seen FIELDATA FORTRAN V and I know how to use it!
The Theorem Theorem: If If, Then Then.
Nov 14 '05 #18

Richard Steiner wrote:
Here in comp.lang.c,
"Alex McDonald" <al******@btopenworld.com> spake unto us, saying:

Systems programmers count in hex.

[Snip]

Application programmers count in decimal.

I count in octal -- what does that make me? :-)


Almost as old as I am?

NPL

--
"It is impossible to make anything foolproof
because fools are so ingenious"
- A. Bloch
Nov 14 '05 #19

On Thu, 25 Nov 2004 20:57:44 -0500, Richard Steiner wrote:
Here in comp.lang.c,
"Alex McDonald" <al******@btopenworld.com> spake unto us, saying:
Systems programmers count in hex.

[Snip]

Application programmers count in decimal.


I count in octal -- what does that make me? :-)


antediluvian? (like me?)

FWIW, I seem to recall being able to multiply hex digits (rote memory, I
know) when I was crawling through IBM mainframe dumps as a uni student. I
had a professor who joked about how I was playing the front panel switches
of a PDP-8 "like a piano" when I toggled in the bootstrap loader. Ah...

p.s. hated octal, esp. on 8-bit byte machines like PDP-11 ! or VAX ?!?

--
Juhan Leemet
Logicognosis, Inc.

Nov 14 '05 #20

"Alex McDonald" <al******@btopenworld.com> writes:
Systems programmers dislike end users. Users need software. Ergo
if the user specifies it, the systems programmer won't write it;
the application programmer does that.

Systems programmers only write programs for themselves or other
systems programmers. Rarely will they write programs to support
applications, and then only under protest at the inefficiency of the
applications running on their finely crafted code, accessible only
from the command line. Systems programmers count in hex.


some of the old collection

http://www.garlic.com/~lynn/2001e.html#31 High Level Language Systems was Re: computer books/authors (Re: FA:
http://www.garlic.com/~lynn/2002e.html#39 Why Use *-* ?

above has:

* real programmers don't eat quiche
* real software engineers don't read dumps
* real programmers don't write specs

long ago and far away ... i have vague memories of being able to read
the holes in cards that were executable output of assembler ...
(12-2-9/x'02' "TXT" cards) and modifying the program by repunching the
binary data in the cards (actually using 026 and later 029 to do copy
of the card up until the column(s) i needed to change ... and then
using multi-punch to punch the correct holes for the hex that i
needed).

minor archeological references:
http://www.garlic.com/~lynn/95.html#4 1401 overlap instructions
http://www.garlic.com/~lynn/2001.html#0 First video terminal?
http://www.garlic.com/~lynn/2001.html#60 Text (was: Review of Steve McConnell's AFTER THE GOLD RUSH)
http://www.garlic.com/~lynn/2001k.html#27 Is anybody out there still writting BAL 370.
http://www.garlic.com/~lynn/2001k.html#28 Is anybody out there still writting BAL 370.
http://www.garlic.com/~lynn/2001k.html#31 Is anybody out there still writting BAL 370.
http://www.garlic.com/~lynn/2001m.html#45 Commenting style (was: Call for folklore)
http://www.garlic.com/~lynn/2001n.html#49 PC/370
http://www.garlic.com/~lynn/2002o.html#26 Relocation, was Re: Early computer games
--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/
Nov 14 '05 #21

P: n/a
jr********@hotmail.com (Matt) writes:
How do we define systems programs? When we say systems programming,
does it necessarily mean that the programs we write need to interact
with hardware directly? For example, OS, compiler, kernel, drivers,
network protocols, etc.? A couple of years ago, yes, I understood this was
definitely true. However, as software applications become more and
more complicated, some people try to argue with that. Some people argue
that the definition of systems programs depends on the level of abstraction.
I heard people saying that a web server is systems software, which I
find confusing. I think a web server is application software. Yes,
other applications run on top of a web server.

Please advise and discuss. Thanks!!


Well, our prof counted a couple of kinds of software as system level:

0. Operating Systems.
1. Software which widely uses system calls. (A socket is a kind of
system call, and that's why we talk about web servers as system software.)
2. Compilers and interpreters.

Hope it helps.
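The "widely uses system calls" criterion can be seen in miniature in a listener setup; this is only a sketch under my own naming (make_listener is not from any particular server), with minimal error handling:

```c
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

/* A web server's core is largely a chain of system calls:
   socket(), bind(), listen(), accept(), read(), write().
   This sketch stops after listen(); port 0 lets the kernel pick one. */
int make_listener(unsigned short port)
{
    struct sockaddr_in addr;
    int fd = socket(AF_INET, SOCK_STREAM, 0);   /* system call #1 */

    if (fd < 0)
        return -1;

    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = htons(port);

    if (bind(fd, (struct sockaddr *)&addr, sizeof addr) < 0 ||  /* #2 */
        listen(fd, 8) < 0) {                                    /* #3 */
        close(fd);
        return -1;
    }
    return fd;
}
```

Whether that syscall density makes the whole server "system software" is exactly the question under discussion here.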

--
Lin Mi (Rin Fuku)
Faculty of Environmental Information
Hagino-Hattori Laboratory
Keio University.
Nov 14 '05 #22

P: n/a
Here in comp.lang.c,
Juhan Leemet <ju***@logicognosis.com> spake unto us, saying:
On Thu, 25 Nov 2004 20:57:44 -0500, Richard Steiner wrote:
I count in octal -- what does that make me? :-)
antediluvian? (like me?)


Harumph!
p.s. hated octal, esp. on 8-bit byte machines like PDP-11 ! or VAX ?!?


It works very well in the 36-bit word-oriented environment I still play
in at work, though. 9-bit ASCII bytes. :-)

--
-Rich Steiner >>>---> http://www.visi.com/~rsteiner >>>---> Smyrna, GA USA
OS/2 + eCS + Linux + Win95 + DOS + PC/GEOS + Executor = PC Hobbyist Heaven!
WARNING: I've seen FIELDATA FORTRAN V and I know how to use it!
The Theorem Theorem: If If, Then Then.
Nov 14 '05 #23

P: n/a

"Richard Steiner" <rs******@visi.com> wrote in message
news:Y2***************@visi.com...
Here in comp.lang.c,
"Alex McDonald" <al******@btopenworld.com> spake unto us, saying:
Systems programmers count in hex.

[Snip]

Application programmers count in decimal.


I count in octal -- what does that make me? :-)


obsolete?
Nov 14 '05 #24

P: n/a
In article <m3************@averell.firstfloor.org>,
Andi Kleen <fr*****@alancoxonachip.com> wrote:

I would consider a state-of-the-art optimizing compiler to be of equal or higher
complexity than a kernel. But it would be 1 in your scheme.


There is internal complexity and interface complexity, and it is
the latter that generally causes more trouble, needs more design,
and usually gets less of both.

The most optimising compiler has very little more interface complexity
than a basic compiler, and the interface can be anything from simple
to fiendishly complex, depending. For example, under IA-64, it is
necessarily at least of medium complexity.

Similarly, a kernel for some of the more extreme microkernel system
designs can be very simple, both internally and at its interface,
but one for POSIX and derivatives necessarily has a fiendish
interface complexity.
Regards,
Nick Maclaren.
Nov 14 '05 #25

P: n/a
On Thu, 25 Nov 2004 23:25:48 +0100, Andi Kleen
<fr*****@alancoxonachip.com> wrote:
Juhan Leemet <ju***@logicognosis.com> writes:

I remember seeing a nice little table in Datamation many years ago:
relating the approximate difficulty of implementing software:

            | single-user   multi-user
------------+--------------------------
application |      1            3
system      |      3            9     <== HARDEST

[Dunno if that survived formatting?]

One may quibble with the numbers, but they are roughly representative.
I would consider a state-of-the-art optimizing compiler to be of equal or higher
complexity than a kernel. But it would be 1 in your scheme.


Yes, it would, because that is the definition. A multi-user
state-of-the-art optimizing compiler would be three times as difficult
as a standalone one, a compiler which had to handle hardware
('compiling' directly into an ASIC for instance) would be about the
same, etc.
There are probably lots of other counter examples. How about a single-user
application that solves some incredibly complex problem?


Same thing. The multi-user version of that would be three times as
hard (probably worse, having worked on some distributed applications).
As for "getting closer to the hardware"? Sounds like a control freak. I
guess anyone that likes to program (make machines do what you instruct
them to) is a control freak to some extent. Personally, I think we might


If that was true then libraries would be a lot less popular than they are.


Well, he's right in that sense, when we program we do indeed want to
make the machine do what we want, and we get annoyed when they don't do
it, which is pretty much the definition of "control freak". I get
especially annoyed when libraries don't do what they say they are
supposed to do...

(Note followups and override if desired)

Chris C
Nov 14 '05 #26

P: n/a
On Thu, 25 Nov 2004 20:57:44 -0500, Richard Steiner
<rs******@visi.com> wrote:
Here in comp.lang.c,
"Alex McDonald" <al******@btopenworld.com> spake unto us, saying:
Systems programmers count in hex.

[Snip]

Application programmers count in decimal.


I count in octal -- what does that make me? :-)


Old, like me <g>.

(If I'm counting on my fingers I use Gray code...)

Chris C
Nov 14 '05 #27

P: n/a

Matt asked:

| How do we define systems programs?
| when we say systems programming, does it necessarily mean that the
| programs we write need to interact with hardware directly?
| For example, OS, compiler, kernel, drivers,
| network protocols, etc...?
I see two main parts of 'system' code: the first covers all the
hardware needs (the drivers).
The other part holds OS/FS-specific code, and it depends on the
'security level' of an OS which functions are to be protected
from user and/or admin access and included in the system.

Network protocols are well known in detail, so there are many
different web-browser applications around, even if they just call
system functions for the password cache and the connection.

Compilers often just use API functions and libraries for a
certain target-OS/CPU pair and are limited to user level,
but a few tools also allow writing 'unprotected' system code.

| A couple of years ago, yes, I understand this is definitely true.

| However, as the software applications become more and
| more complicated, some people try to argue that.
| Some people argue the definition of systems programs depends on
| the level of abstraction.

I'd call it the level of paranoia about security :)

| I heard people saying that a web server is systems software,
| which I find confusing.
| I think a web server is application software.
| Yes, other applications run on top of a web server.

If you don't mean 'net-linkers' within a GP-OS like windoze:
I have no experience with web servers, but I think they would be
well advised to use their very own system rather than a GP-OS.

__
wolfgang
http://web.utanet.at/schw1285/KESYS/index.htm

Nov 14 '05 #28

P: n/a

In article <m3************@averell.firstfloor.org>,
Andi Kleen <fr*****@alancoxonachip.com> writes:
|> nm**@cus.cam.ac.uk (Nick Maclaren) writes:
|>
|> >>I would consider a state-of-the-art optimizing compiler to be of equal or higher
|> >>complexity than a kernel. But it would be 1 in your scheme.
|> >
|> > There is internal complexity and interface complexity, and it is
|> > the latter that generally causes more trouble, needs more design,
|> > and usually gets less of both.
|>
|> The interface of a compiler is much more complex than
|> the input language and the command line options.

Of course. I was primarily referring to the external interfaces,
but they include the code generated, the calling conventions, the
object file formats and so on.

|> An optimizing compiler consists of many passes that talk to each other
|> using complex data structures and even a special intermediate
|> language. ...

Yes, but at least that is within a single product. It gets much
hairier (managerially and technically) when the interfaces are
between separate products, perhaps even developed by separate
organisations.
Regards,
Nick Maclaren.
Nov 14 '05 #29

P: n/a

Great question, I always wondered that too. I met an embedded programmer in
San Diego years ago. He said the closer he got to the hardware level, the
more exciting it was. Also, maybe somebody could explain this: I've seen a
lot of mainframe positions advertised as "system programmer".


Well, as chips become more and more complex, the embedded people who
deal directly with the hardware need to learn the "personality" of the
chip, e.g. how various register settings affect each other and which
register combinations do not work, etc. Some of these chips have
hundreds of registers, each bit controlling a different aspect. It
can be quite a challenge to learn the chip.

It can be fascinating to work at this level.
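A taste of that register work, sketched with an invented control-register layout (the names and bit positions below are made up for illustration, not taken from any real chip):

```c
#include <stdint.h>

/* Invented layout: bit 0 enables the block, bit 4 enables transmit,
   bits 8..10 hold a 3-bit mode field. On real hardware the register
   would be accessed through a volatile pointer to a fixed
   memory-mapped address. */
#define CTRL_ENABLE     (1u << 0)
#define CTRL_TXEN       (1u << 4)
#define CTRL_MODE_MASK  (0x7u << 8)
#define CTRL_MODE(m)    (((uint32_t)(m) << 8) & CTRL_MODE_MASK)

/* Read-modify-write: change one field without disturbing its
   neighbours, the everyday discipline of driver code. */
uint32_t set_mode(uint32_t reg, unsigned mode)
{
    reg &= ~CTRL_MODE_MASK;     /* clear the old mode bits */
    reg |= CTRL_MODE(mode);     /* insert the new value    */
    return reg;
}
```

The hard part on a real chip is not the bit twiddling but knowing which combinations of such fields are legal, which is the "personality" mentioned above.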
Wayne Woodruff
http://www.jtan.com/~wayne
Nov 14 '05 #30

P: n/a
In article <m3************@lhwlinux.garlic.com>,
Anne & Lynn Wheeler <ly**@garlic.com> wrote:
Jan Vorbrüggen <jv**************@mediasec.de> writes:
I would disagree. For me system software is more or less equivalent
to being part of the trusted computing base, with the more or less
implied side effect that if something unexpected goes wrong with it,
you need to crash the system. Handling hardware is only part of
that, and there are scenarios where you can have software driving
hardware directly without being part of the TCB - rare, but it has
happened. Performance - sure, you would want that, but correctness
is top priority. As a counterpoint, how many OSs have you seen that
have been compiled with optimization for the particular processor
model they will run on?
when i did the resource manager ... there were something like 2000
calibrating and verifying the resource manager.
http://www.garlic.com/~lynn/subtopic.html#bench

the standard system maint. process was monthly update (patch?)
distribution called PLC (program level change). It would ship the
cumulative source updates as well as the executable binaries.

I was asked to put out monthly PLC for the resource manager on the
same schedule as the standard system PLC. I looked at the process, and
made a counter-offer of quarterly PLC for the resource manager
.... since I would have to take all the accumulated patches (for the
whole system) and rerun some significant number of the original
validation suite .... and there just weren't the resources to do that
on a monthly basis.
http://www.garlic.com/~lynn/subtopic.html#fairshare
http://www.garlic.com/~lynn/subtopic.html#wsclock


Yup. TW always said that the only way to ship bugless software was
to ship every day or not ship at all. Boy! Finding the compromise
to that one was difficult, a PITA, and different with each and every
project we ever did. And that was _without_ PHB and NIH interference.
Add the last two, and I still am amazed that we ever shipped anything
at all.

reference to the original resource manager product announcement
http://www.garlic.com/~lynn/2001e.html#45

note that much of the bits & pieces that were in the resource manager
had been available in earlier kernels ... but were dropped over a period
of years. it was eventually decided to collect them up and package them
as a separate distribution.

this was in the period of some transition from free to charged for
software. at the time, there had been some distinction that
application software could be charged for ... but kernel/system
software (as part of supporting the machine) was free.

the resource manager got to be the guinea pig for the first charged
for kernel software component ... with a new distinction that kernel
software that was directly related to hardware software would still be
free ... but other types of kernel software could be charged for.

an interesting paradox then showed up for the next release. the
resource manager shipped with the release prior to the release that
shipped smp support.
http://www.garlic.com/~lynn/subtopic.html#smp

however much of the SMP design and implementation was predicated on
various features that were part of the resource manager. the problem
was that SMP support was obviously directly supported hardware and
therefore was free ... but now had integral dependencies on features in
the resource manager ... which was priced.


That's how you would make money for an SMP. Our way was to
put the "service driver" for SMP on one magtape. When the customer
paid for the support, he would automatically get the tape
containing CPNSER.MAC as a part of the distribution. Each
and every other monitor module had SMP code in it under a
feature test switch and we shipped that on our monitor
distribution tape which went to all customers.

The way JMF designed the marketing change to support three
instead of two CPUs was to very carefully never mention the
word "two", using "multi" instead. Thus, we never had to
remaster a tape, had all the testing done with the previous
release, and just change the PD-something. No documentation
mentioned two (it also said multi).

To get DEC to officially "support" more than two CPUs on a system
took nine fucking months; this was a measurement of the internal
processes of product prevention that had completely infected DEC
by 1982.

/BAH
Subtract a hundred and four for e-mail.
Nov 14 '05 #31

P: n/a
Richard Steiner wrote:
Here in comp.lang.c,
Juhan Leemet <ju***@logicognosis.com> spake unto us, saying:

p.s. hated octal, esp. on 8-bit byte machines like PDP-11 ! or VAX ?!?

It works very well in the 36-bit word-oriented environment I still play
in at work, though. 9-bit ASCII bytes. :-)


Or, for the real stuff: 60-bit words ... the original CDC Cyber series !
Real Programmers can find bugs buried in 6 Megabyte core dumps :-)

--
Toon Moene - e-mail: to**@moene.indiv.nluug.nl - phone: +31 346 214290
Saturnushof 14, 3738 XG Maartensdijk, The Netherlands
Maintainer, GNU Fortran 77: http://gcc.gnu.org/onlinedocs/g77_news.html
A maintainer of GNU Fortran 95: http://gcc.gnu.org/fortran/
Nov 14 '05 #32

P: n/a
Here in comp.lang.c,
Juhan Leemet <ju***@logicognosis.com> spake unto us, saying:
p.s. hated octal, esp. on 8-bit byte machines like PDP-11 ! or VAX ?!?


The VAX was DEC's first hex machine. There was a story that
around the time of the announcement they published a
calendar with the dates in hex. The Fortran compiler supported
hex constants and format codes. The instruction fields were
in groups of four bits, unlike the PDP-11 where the instruction
fields, especially register numbers, grouped in threes.

(Though I still prefer hex for the PDP-11.)

-- glen

Nov 14 '05 #33

P: n/a
In comp.arch pt**@tom.sfc.keio.ac.jp wrote:
jr********@hotmail.com (Matt) writes:
How do we define systems programs? When we say systems programming,
does it necessarily mean that the programs we write need to interact
with hardware directly? For example, OS, compiler, kernel, drivers,
network protocols, etc.? A couple of years ago, yes, I understood this was
definitely true. However, as software applications become more and
more complicated, some people try to argue with that. Some people argue
that the definition of systems programs depends on the level of abstraction.
I heard people saying that a web server is systems software, which I
find confusing. I think a web server is application software. Yes,
other applications run on top of a web server.

Please advise and discuss. Thanks!!
Well, our prof counted a couple of kinds of software as system level:

0. Operating Systems.
1. Software which widely uses system calls. (A socket is a kind of
system call, and that's why we talk about web servers as system software.)


This is a bad definition. For example, it potentially leaves backup
software and most of the system maintenance stuff out but lets web servers
and other solidly application-level stuff in.
2. Compilers and interpreters.

Hope it helps.


--
Sander

+++ Out of cheese error +++
Nov 14 '05 #34

P: n/a
Sander Vesik wrote:
In comp.arch pt**@tom.sfc.keio.ac.jp wrote:
jr********@hotmail.com (Matt) writes:

How do we define systems programs? When we say systems programming,
does it necessarily mean that the programs we write need to interact
with hardware directly? For example, OS, compiler, kernel, drivers,
network protocols, etc.? A couple of years ago, yes, I understood this was
definitely true. However, as software applications become more and
more complicated, some people try to argue with that. Some people argue
that the definition of systems programs depends on the level of abstraction.
I heard people saying that a web server is systems software, which I
find confusing. I think a web server is application software. Yes,
other applications run on top of a web server.

Please advise and discuss. Thanks!!
Well, our prof counted a couple of kinds of software as system level:

0. Operating Systems.
1. Software which widely uses system calls. (A socket is a kind of
system call, and that's why we talk about web servers as system software.)

This is a bad definition. For example, it potentially leaves backup
software and most of the system maintenance stuff out but lets web servers
and other solidly application-level stuff in.


Please explain your first point.
As for web servers, in my opinion they make heavy use of the TCP/IP stack,
which consists of OS built-in functions. Therefore, at least they are not
*such* high-level apps.

2. Compilers and interpreters.

Hope it helps.


--
Lin Mi (Rin Fuku)
pt**@tom.sfc.keio.ac.jp
Faculty of Environmental Information
Hagino-Hattori Laboratory
Keio University, Shounan-Fujisawa Campus
Nov 14 '05 #35

P: n/a
glen herrmannsfeldt wrote:
Here in comp.lang.c,
Juhan Leemet <ju***@logicognosis.com> spake unto us, saying:
p.s. hated octal, esp. on 8-bit byte machines like PDP-11 ! or VAX ?!?

The VAX was DEC's first hex machine. There was a story that
around the time of the announcement they published a
calendar with the dates in hex. The Fortran compiler supported
hex constants and format codes. The instruction fields were
in groups of four bits, unlike the PDP-11 where the instruction
fields, especially register numbers, grouped in threes.

(Though I still prefer hex for the PDP-11.)

-- glen


Octal (0..7) came about because of a need to 'say' binary. In the
day, the 'character' was six bits wide, all upper case and 'A' was
something like '100001' if I recall correctly. Split into groups of
3 we get '100' and '001'. Everybody could keep that much binary in
their heads and could 'see' four and one.

If asked the value of 'A' it was a blessing to say 'four one' rather
than 'one zero zero zero zero one'.
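That "say it in triples" trick is easy to demonstrate; the value here is only an illustration, not a claim about the actual BCD code for 'A':

```c
#include <stdio.h>

/* Split a six-bit value into its two octal triples so it can be
   "said" aloud: 041 is binary 100 001, i.e. "four one". */
void say_in_octal(unsigned int c)
{
    printf("%02o -> high triple %o, low triple %o\n",
           c, (c >> 3) & 07, c & 07);
}
```

Calling say_in_octal(041) prints "41 -> high triple 4, low triple 1", which is exactly the spoken shorthand described above.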

Then in the 1960's everything changed. Fairchild Semiconductor
invented the Integrated Circuit, the IC. They could now put hundreds
of transistors on one piece of silicon. The whole idea of designing
circuits from individual transistors died an almost instant death.

The new IC's were virtually circuit boards on a chip. In 1963 IBM's
new 7094 CPU was nearly the size of a city bus, required tons of air
conditioning, and had a 36-bit memory word, a 6-bit character, 7-channel
magnetic tape, and a fixed record size of 80 characters per record (punched
card). Perhaps the youngest (last) dinosaur.

IBM at the same time had embraced the IC and in 1964 introduced
their System/360, the first machine of this generation. The
old six-bit character (BCD) was severely limiting and 7-bit ASCII
was nipping at their heels. Digital Equipment, Data General, Pr1me,
etc. were giving them hell with ASCII. Hence in an attempted leap
forward, IBM uses IC's. The IC designers think 2, 4, 8 and don't
know what to do with 3 or 6.

So the new character is 8 bits and becomes Extended Binary Coded
Decimal Interchange Code (EBCDIC). New terms, byte (the 8-bit thingy)
and nybble (half a byte), were current in 1964. I'm not sure IBM did this.

The IC counters were 4-bits wide, not 3. The modulus was 16, not 8
and Octal died. Hexadecimal is born. Simply add six alpha characters
to the ten decimal ones and we have '0123456789ABCDEF' for our set.

--
Joe Wright mailto:jo********@comcast.net
"Everything should be made as simple as possible, but not simpler."
--- Albert Einstein ---
Nov 14 '05 #36

P: n/a
Nick Maclaren wrote:

In article <m3************@averell.firstfloor.org>,
Andi Kleen <fr*****@alancoxonachip.com> writes: |> An optimizing compiler consists of many passes that talk to each other
|> using complex data structures and even a special intermediate
|> language. ...

Yes, but at least that is within a single product. It gets much
hairier (managerially and technically) when the interfaces are
between separate products, perhaps even developed by separate
organisations.


Tell me about it! As someone who writes a third-party debugger for a
living I certainly feel the pain of dealing with many ill-specified
external interfaces. A debugger is certainly dependent on more such
interfaces than a compiler.

--
-- Jim
--
James Cownie <jc*****@etnus.com>
Etnus, LLC. +44 117 9071438
http://www.etnus.com
Nov 14 '05 #37

P: n/a
jr********@hotmail.com (Matt) wrote in message news:<ba**************************@posting.google. com>...
How do we define systems programs?


One view (certainly not the only one) distinguishes between programs that need
or expect some privilege (e.g. supervisor state) from those that don't (though
they may use privilege-requiring services through a system-call interface).

In this view, system programs *must* be written carefully, since a malfunction
can have global effects; application programs should never be able to do worse
than shoot themselves in the foot (local and bounded side-effects only). This
last assumption depends of course on a properly fenced execution environment,
which most people don't enjoy.

Michel.
Nov 14 '05 #38

P: n/a

In article <61**************************@posting.google.com >,
ha**@watson.ibm.com (Michel Hack) writes:
|> jr********@hotmail.com (Matt) wrote in message news:<ba**************************@posting.google. com>...
|> > How do we define systems programs?
|>
|> One view (certainly not the only one) distinguishes between programs that need
|> or expect some privilege (e.g. supervisor state) from those that don't (though
|> they may use privilege-requiring services through a system-call interface).
|>
|> In this view, system programs *must* be written carefully, since a malfunction
|> can have global effects; application programs should never be able to do worse
|> than shoot themselves in the foot (local and bounded side-effects only). This
|> last assumption depends of course on a properly fenced execution environment,
|> which most people don't enjoy.

And which most systems don't enable :-(

Even when they do, there is the concept of programs that neither
need nor expect ANY privilege, but are likely to be used at a
relatively high privilege level. Even the proponents of that
view don't claim that a shell or basic utility need not be
implemented to the same standard as a 'system' program.
Regards,
Nick Maclaren.
Nov 14 '05 #39

P: n/a
In comp.arch Lin Mi <pt**@tom.sfc.keio.ac.jp> wrote:
Sander Vesik wrote:
In comp.arch pt**@tom.sfc.keio.ac.jp wrote:
jr********@hotmail.com (Matt) writes:
How do we define systems programs? When we say systems programming,
does it necessarily mean that the programs we write need to interact
with hardware directly? For example, OS, compiler, kernel, drivers,
network protocols, etc.? A couple of years ago, yes, I understood this was
definitely true. However, as software applications become more and
more complicated, some people try to argue with that. Some people argue
that the definition of systems programs depends on the level of abstraction.
I heard people saying that a web server is systems software, which I
find confusing. I think a web server is application software. Yes,
other applications run on top of a web server.

Please advise and discuss. Thanks!!

Well, our prof counted a couple of kinds of software as system level:

0. Operating Systems.
1. Software which widely uses system calls. (A socket is a kind of
system call, and that's why we talk about web servers as system software.)

This is a bad definition. For example, it potentially leaves backup
software and most of the system maintenance stuff out but lets web servers
and other solidly application-level stuff in.


Please explain your first point.
As for web servers, in my opinion they make heavy use of the TCP/IP stack,
which consists of OS built-in functions. Therefore, at least they are not
*such* high-level apps.


In which case so are (n)curses text editors, as they use low-level terminal
i/o a lot. Your definition basically makes any network app systems software,
something that I don't think is a good thing.

--
Sander

+++ Out of cheese error +++
Nov 14 '05 #40

P: n/a
"Chris Croughton" <ch***@keristor.net> wrote in message
news:sl******************@ccserver.keris.net...
On Thu, 25 Nov 2004 20:57:44 -0500, Richard Steiner
<rs******@visi.com> wrote:
Here in comp.lang.c,
"Alex McDonald" <al******@btopenworld.com> spake unto us, saying:
Systems programmers count in hex.

[Snip]

Application programmers count in decimal.


I count in octal -- what does that make me? :-)


Old, like me <g>.

(If I'm counting on my fingers I use Gray code...)


I find that hard to believe !

--
Chqrlie.
Nov 14 '05 #41

P: n/a
On Wed, 1 Dec 2004 16:33:21 +0100, Charlie Gordon
<ne**@chqrlie.org> wrote:
"Chris Croughton" <ch***@keristor.net> wrote in message
news:sl******************@ccserver.keris.net...
On Thu, 25 Nov 2004 20:57:44 -0500, Richard Steiner
<rs******@visi.com> wrote:
> Here in comp.lang.c,
> "Alex McDonald" <al******@btopenworld.com> spake unto us, saying:
>
>>Systems programmers count in hex.
>>
>>[Snip]
>>
>>Application programmers count in decimal.
>
> I count in octal -- what does that make me? :-)


Old, like me <g>.

(If I'm counting on my fingers I use Gray code...)


I find that hard to believe !


For numbers expected to be greater than five, it's easier to use a
binary method of some kind (I dislike having to use the other hand to
keep track of how many fives I've had). Gray code has the advantage
that only one bit changes at a time.

I learnt it as a teenager (I 'inherited' some optical shaft encoders
which used it), and have done it automatically for so many years that I
find it harder to use binary. The conversion back to 'straight' binary
(if one has forgotten the tables, only 32 values for a single hand) is
simple:

Find the leftmost '1' in the value to be converted and invert each bit
to the right of it.

Repeat with the digits of the original value (not the partially
converted one) until you run out of digits.

Converting binary to Gray code is similar (and slightly simpler, because
you don't need to remember the original value):

Find the leftmost '1' and invert each bit to the right of it.

Repeat with the digits of the partially converted value until you run
out of digits.

Incrementing a value doesn't need propagation of 'carry', it's just:

If the parity is even, invert the bottom bit.

If the parity is odd, invert the bit immediately to the left of the
rightmost '1' bit;

Or in C:

#include <limits.h>   /* for CHAR_BIT */

unsigned int fromGray(unsigned int val)
{
    unsigned int i = 1U << (sizeof(val)*CHAR_BIT - 1);
    unsigned int b = val;

    while (i)
    {
        if (val & i)
            b ^= i - 1;
        i >>= 1;
    }
    return b;
}

unsigned int toGray(unsigned int b)
{
    unsigned int i = 1U << (sizeof(b)*CHAR_BIT - 1);

    while (i)
    {
        if (b & i)
            b ^= i - 1;
        i >>= 1;
    }
    return b;
}

(Is there an easier portable way of setting the top bit of a uint than
calculating the number of bits using sizeof and CHAR_BIT? If UINT_MAX
is guaranteed to be 2^n - 1 then (UINT_MAX - UINT_MAX/2) would work, but
that's almost as non-transparent.)
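The no-carry increment rule above can be sketched directly as well; grayIncrement is my name for it, not something from the original post:

```c
/* Increment a Gray-coded value using the parity rule:
   even parity -> invert the bottom bit;
   odd parity  -> invert the bit just left of the rightmost '1'. */
unsigned int grayIncrement(unsigned int g)
{
    unsigned int parity = 0;
    unsigned int t;

    for (t = g; t; t >>= 1)     /* population count, mod 2 */
        parity ^= t & 1U;

    if (parity == 0)
        return g ^ 1U;          /* even: flip bit 0 */

    t = g & (~g + 1U);          /* isolate the rightmost set bit */
    return g ^ (t << 1);        /* odd: flip the bit above it */
}
```

Starting from 0, repeated calls walk the Gray sequence 0, 1, 3, 2, 6, 7, 5, 4, 12, ... with exactly one bit changing per step.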

Chris C
Nov 14 '05 #42

P: n/a
On Thu, 25 Nov 2004 01:44:20 +0100, Matej Barac
<ma*********@gmail.com> wrote:
On 25 Nov 2004 00:26:42 GMT, Gordon Burditt wrote:
The video driver in Microsoft Windows probably calls Internet
Explorer to actually access the hardware :-( In any case, Microsoft
claims IE is so tightly bound into the OS you can't remove it.
I think they said that under oath in court, too.


Kind of off topic, but... IE can be "safely" removed from the OS.


You can remove the IE executable, but that program is not Internet
Explorer. What you interact with is just a thin shell around a COM
control that provides the browser functions. The IE program does
little more than instantiate that control and provide a window for it
to scribble on.

You can hunt down and remove the various incarnations of the
"WebBrowser" control, but doing so may break the Explorer shell if
"Active Desktop" functions are enabled and can render the shell
completely unusable. The help systems in 2K and XP, MS Office and
many 3rd party applications also use the browser control. And the
browser control itself relies on other controls which can't be
removed.

Microsoft exaggerated but they were not lying. Removing IE results in
reduced functionality ... that is if the system still works at all.

George
--
for email reply remove "/" from address
Nov 14 '05 #43

P: n/a
Anne & Lynn Wheeler <ly**@garlic.com> writes:
i've frequently claimed that to take a straight forward written
application and turn it into a "service" ... it takes 4-10 times the
code and ten times the work.


.... recent slashdot
http://developers.slashdot.org/devel...?tid=221&tid=1

points to the acm queue interview article
http://acmqueue.com/modules.php?name...owpage&pid=233

Designing for failure may be the key to success.
Engineering for Failure

it was called system/r ... developed on vm/370 base and initially
transferred from sjr to endicott for sql/ds. later tech transfer
to stl for db2. misc past system/r, sql, etc posts
http://www.garlic.com/~lynn/subtopic.html#systemr

and related to the subject was ha/cmp
http://www.garlic.com/~lynn/subtopic.html#hacmp

where we had to do detailed total system vulnerability and failure
mode investigation.

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/
Nov 14 '05 #44

P: n/a
In article <uu***********@mail.comcast.net>,
Anne & Lynn Wheeler <ly**@garlic.com> wrote:
Anne & Lynn Wheeler <ly**@garlic.com> writes:
i've frequently claimed that to take a straight forward written
application and turn it into a "service" ... it takes 4-10 times the
code and ten times the work.

My viewpoint is slightly different, and I dissent. If you spend
3 times the effort in design, BEFORE you start, you can reduce the
amount of code to about twofold and the amount to effort to about
threefold.

But I agree that your figures are correct if the application wasn't
designed to be a service in the first place. And at least 3 other
people (including, I think, Fred Brooks) have said the same.
... recent slashdot
http://developers.slashdot.org/devel...?tid=221&tid=1

points to the acm queue interview article
http://acmqueue.com/modules.php?name...owpage&pid=233

Designing for failure may be the key to success.
Engineering for Failure


Well, as I know, I agree with that. But you also know that I am
convinced that the IT world is STILL heading away from there :-(
Regards,
Nick Maclaren.
Nov 14 '05 #45

P: n/a
nm**@cus.cam.ac.uk (Nick Maclaren) writes:
Well, as I know, I agree with that. But you also know that I am
convinced that the IT world is STILL heading away from there :-(


when i got to do the resource manager
http://www.garlic.com/~lynn/subtopic.html#fairshare
http://www.garlic.com/~lynn/subtopic.html#wsclock

one of the supporting processes was an automated test & benchmarking
process
http://www.garlic.com/~lynn/subtopic.html#bench

and eventually did one sequence of 2000 benchmarks taking 3 months
elapsed time before first customer ship.

with the benchmarking process ... was able to define almost any sort
of workload characteristics ... including very stressful ones that
turned out were guaranteed to crash the system. early on as a result of
this ... one of the side-tracks was to go in and completely redesign
and rewrite the kernel serialization infrastructure ... that resulted
in the elimination of all stress-test induced system failures as well
as all observed situations of hung/zombie processes .... some of
hung/zombies as well as other fault diagnostic stuff
http://www.garlic.com/~lynn/subtopic.html#dumprx

the main product release after the release of the resource manager
included SMP support ... which had a lot of dependencies on the
resource manager code ... which then created a business problem.
http://www.garlic.com/~lynn/subtopic.html#smp

the resource manager was the first foray into priced kernel software
with guidelines that direct hardware support kernel software was
still free. having smp "free" software dependent on priced (resource
manager) software violated those business guidelines. As a result ..
something like 80 percent of the code in the original resource manager was
removed and incorporated into the "base, free" kernel software.

so the problem reporting and fix process had a procedure that assigned
a number (that was incremented sequentially), and fixes for specific
problems were given the same number. with the very first version
of vm370, the numbering started ... and as far as i know may never
have been reset (i think i've recently seen references to sequential
numbers in the 60k range?).

Anyway ... you have to be diligent to maintain kernel integrity
(avoiding failures because of dangling activities after processes have
gone away) and/or to avoid hung/zombie processes (waiting for some
activity to complete). A couple of releases after the resource manager went out,
there was a fix introduced into the dispatcher to ignore various
kinds of events under certain circumstances; this had a fix number of
something like 15,3xx (or maybe 15,1xx?). In any case, it resulted
in re-introducing hung/zombie processes.

This occurred after i had redone the i/o system to make it bulletproof
so the disk engineering lab could do their work in an operating
system environment ... instead of doing everything with dedicated,
stand-alone machines:
http://www.garlic.com/~lynn/subtopic.html#disk

I created an update that removed the effects of the fix that
re-introduced hung/zombie processes ... and tried to find out what was
the original justification for generating it in the first place.

of course, all the ha/cmp work was pretty much trying to figure out
how to retrofit availability and assurance to existing
infrastructures
http://www.garlic.com/~lynn/subtopic.html#hacmp

the stuff for electronic commerce was someplace in-between ... characterize
all possible failure modes anywhere in the infrastructure ... and then
define recovery and/or diagnostic processes to handle every possible
scenario. ... aka the previous electronic commerce ref
http://www.garlic.com/~lynn/2004p.html#23

however, in approximately the same electronic commerce time-frame ... we
looked at taking a more systemic approach. we had a jad with taligent
about their environment ... taking the analogy from the original
os/360 system services .... what taligent characteristics would be
needed for assurance and availability system services. we had a one
week JAD where we walked thru all of the taligent infrastructure,
specifying what needed to be added/changed for assurance and
availability. at the end, we had come up with two new
availability/assurance frameworks ... and, in addition, about a 30%
hit to the existing taligent frameworks

you would still need the up-front failure analysis ... but could
possibly reduce application code lines from a 4-10 times increase to
possibly only a 1.5 to 2.0 times increase (with some higher level
assurance and availability abstractions being provided by the taligent
infrastructure).

random past taligent references:
http://www.garlic.com/~lynn/2000e.html#46 Where are they now : Taligent and Pink
http://www.garlic.com/~lynn/2000e.html#48 Where are they now : Taligent and Pink
http://www.garlic.com/~lynn/2000.html#10 Taligent
http://www.garlic.com/~lynn/2001j.html#36 Proper ISA lifespan?
http://www.garlic.com/~lynn/2001n.html#93 Buffer overflow
http://www.garlic.com/~lynn/2002.html#24 Buffer overflow
http://www.garlic.com/~lynn/2002i.html#60 Unisys A11 worth keeping?
http://www.garlic.com/~lynn/2002j.html#76 Difference between Unix and Linux?
http://www.garlic.com/~lynn/2002m.html#60 The next big things that weren't
http://www.garlic.com/~lynn/2003d.html#45 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM
http://www.garlic.com/~lynn/2003e.html#28 A Speculative question
http://www.garlic.com/~lynn/2003g.html#62 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM
http://www.garlic.com/~lynn/2003j.html#15 A Dark Day
http://www.garlic.com/~lynn/2004c.html#53 defination of terms: "Application Server" vs. "Transaction Server"
http://www.garlic.com/~lynn/2004l.html#49 "Perfect" or "Provable" security both crypto and non-crypto?

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/
Nov 14 '05 #46

P: n/a

ref:
http://www.garlic.com/~lynn/2004p.html#63
http://www.garlic.com/~lynn/2004p.html#64

another dimension of assurance is this stuff:
http://www.software.org/quagmire/

something like 2167a can increase the straight-forward application
development costs by a factor of ten times ... and frequently this
sort of stuff can't be retrofitted after the fact (has to be done up
front before coding ever starts)

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/
Nov 14 '05 #47

P: n/a

In article <m3************@lhwlinux.garlic.com>,
Anne & Lynn Wheeler <ly**@garlic.com> writes:
|>
|> something like 2167a can increase the straight-forward application
|> development costs by a factor of ten times ... and frequently this
|> sort of stuff can't be retrofitted after the fact (has to be done up
|> front before coding ever starts)

Yes and no. I obviously agree with the latter, but not the former,
which is true but incorrect in context!

If you design for failure at the start, yes, it increases the
initial cost of both design and coding. But it very often reduces
the testing cost and it usually reduces the cost of finding bugs
in the field (i.e. support and maintenance) by a large factor. I
could give many examples.

The point is that the initial coding cost is often less than that
of even the initial testing, and is almost always a small fraction
of the support and maintenance costs. However, those are almost
always indirect, and typically come out of separate budgets anyway.
Regards,
Nick Maclaren.
Nov 14 '05 #48

P: n/a
Juhan Leemet <ju***@logicognosis.com> scribbled the following
on comp.lang.c:
FWIW, I seem to recall being able to multiply hex digits (rote memory, I
know) when I was crawling through IBM mainframe dumps as a uni student. I
had a professor who joked about how I was playing the front panel switches
of a PDP-8 "like a piano" when I toggled in the bootstrap loader. Ah...


You're much more learned than I am, then. The only thing almost a
decade of writing toy machine language programs to see what the
Commodore 64 can do has taught me in this regard is being able to
convert any integer from 0 to 255 from decimal to hexadecimal or back
in my head in a couple of seconds. Well, it amazed my little brother
for a couple of minutes.

--
/-- Joona Palaste (pa*****@cc.helsinki.fi) ------------- Finland --------\
\-------------------------------------------------------- rules! --------/
"How can we possibly use sex to get what we want? Sex IS what we want."
- Dr. Frasier Crane
Nov 14 '05 #49

P: n/a

Joona I Palaste <pa*****@cc.helsinki.fi> writes:
You're much more learned than I am, then. The only thing almost a
decade of writing toy machine language programs to see what the
Commodore 64 can do has taught me in this regard is being able to
convert any integer from 0 to 255 from decimal to hexadecimal or
back in my head in a couple of seconds. Well, it amazed my little
brother for a couple of minutes.


besides learning to read hex from mainframe dumps ... i also learned
to read it from the front console lights as well as the punch holes in
cards (output of assembler and compiler binary/txt decks) ... both
hex->instructions/addresses and hex->character (or in the case of
hex punch cards, holes->hex->instructions/addresses and
holes->hex->ebcdic).

In the past I had made (the mistake of?) posts about the TSM lineage
from a file backup/archive program
http://www.garlic.com/~lynn/subtopic.html#backup

that I had written for internal use that then went thru 3-4 (internal)
releases, eventually packaged as customer product called workstation
datasave facility, and then its morphing into ADSM and now TSM (tivoli
storage manager).

so a couple days ago ... i get email from somebody trying to decode a
TSM tape; included was hex dump of the first 1536 bytes off the tape
.... asking me to tell them what TSM had on the tape.

well way back in the dark ages ... you could choose your physical tape
block size ... and the "standard label" tape convention started with
three 80-byte records; vol1, hdr1, hdr2.

so the first 1536 bytes was three 512byte records ... and i recognize
the first 80 bytes of each (512byte) record as starting vol1, hdr1,
hdr2.

the hex dump had included the hex->character translation ... but for
ascii ... and of course the tsm heritage is from ebcdic mainframe
.... not ascii (aka it was the ebcdic hex for vol1, hdr1, hdr2) It
didn't even get to the TSM part of the tape data ... it was still all
the os standard label convention.

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/
Nov 14 '05 #50
