473,466 Members | 1,456 Online
Bytes | Software Development & Data Engineering Community
Create Post

Home Posts Topics Members FAQ

Mars Rover Controlled By Java

Java, the software developed by Sun Microsystems in the mid-1990s as a
universal operating system for Internet applications, gave NASA a low-cost
and easy-to-use option for running Spirit, the robotic rover that rolled
onto the planet's surface on Thursday in search of signs of water and life.

http://news.com.com/2100-1007_3-5142...l?tag=nefd_top

l8r, Mike N. Christoff

Jul 17 '05
198 18069
On Thu, 22 Jan 2004 17:24:19 -0800, Uncle Al <Un******@hate.spam.net>
wrote:
NASA is renowned for its antenna failures - the Hubble space
telescope, Ulysses at Jupiter, and now their little radio-controlled
go-cart on Mars.


I only know of one antennae failure. Gallileo's antenna failed to
open correctly. As I recall there was speculation that some of the
grease in the struts degraded (or something) during the long years of
storage following the Challenger explosion.

NASA has had it's up's and downs. That's understandable. They are
doing things that nobody has ever done before.

Jul 17 '05 #151
"Jan C. Vorbrüggen" wrote:
NASA is renowned for its antenna failures - the Hubble space
telescope,


HST didn't and doesn't have antenna problems.


It sure as Hell did. When they got it up there NASA discovered that
the high gain antenna feed cable intersected space swept out by the
high gain antenna. The technical term for this is "pookie pookie."
Ulysses at Jupiter


That one is going 'round the sun, and has had no problems of the sort.


The Jupiter orbiter could never deploy its high gain antenna. While
NASA spurted all over the Media for years, data was being received at
a rate recalling Radio Shack computers and their modems. Only a very
small fraction of the collected data was ever relayed. What a
successful mission.
and now their little radio-controlled go-cart on Mars.


...which isn't having an antenna problem, either.


No. It turns out $240 million won't custom-build a NASA FlashRAM card
as good as one can purchase at Radio Shack. That's no surprise. $1
billion could not grind and polish a NASA version of a Keyhole spy
satellite main optic available off the shelf. In pure NASA Korporate
Kulture tradition, the old fart optician who screamed about the
Hubble's mirrror blank being rough ground to the wrong spec was fired
very early on. It cost another $billion to fix it in orbit.

FOR THE EMPIRICALPRICE OF FINALLY DOING HUBBLE CORRECTLY, NASA COULD
HAVE BUILT AND ORBITED NEARLY THREE OF THEM USING KEYHOLE SATELLITE
OFF THE SHELF PLANS AND PARTS.
[snip]

Taking a picture of Martian dirt is not a major triumph. We already
have surface meterological and chemical data. Unless a nice block of
limestone turns up, with included fossils, this has been a huge and
hugely expensive bullshit party. How many Enron ex-executives are
working at NASA?

--
Uncle Al
http://www.mazepath.com/uncleal/
(Toxic URL! Unsafe for children and most mammals)
"Quis custodiet ipsos custodes?" The Net!
Jul 17 '05 #152
"Jan C. Vorbrüggen" wrote:
NASA is renowned for its antenna failures - the Hubble space
telescope,


HST didn't and doesn't have antenna problems.


It sure as Hell did. When they got it up there NASA discovered that
the high gain antenna feed cable intersected space swept out by the
high gain antenna. The technical term for this is "pookie pookie."
Ulysses at Jupiter


That one is going 'round the sun, and has had no problems of the sort.


The Jupiter orbiter could never deploy its high gain antenna. While
NASA spurted all over the Media for years, data was being received at
a rate recalling Radio Shack computers and their modems. Only a very
small fraction of the collected data was ever relayed. What a
successful mission.
and now their little radio-controlled go-cart on Mars.


...which isn't having an antenna problem, either.


No. It turns out $240 million won't custom-build a NASA FlashRAM card
as good as one can purchase at Radio Shack. That's no surprise. $1
billion could not grind and polish a NASA version of a Keyhole spy
satellite main optic available off the shelf. In pure NASA Korporate
Kulture tradition, the old fart optician who screamed about the
Hubble's mirrror blank being rough ground to the wrong spec was fired
very early on. It cost another $billion to fix it in orbit.

FOR THE EMPIRICALPRICE OF FINALLY DOING HUBBLE CORRECTLY, NASA COULD
HAVE BUILT AND ORBITED NEARLY THREE OF THEM USING KEYHOLE SATELLITE
OFF THE SHELF PLANS AND PARTS.
[snip]

Taking a picture of Martian dirt is not a major triumph. We already
have surface meterological and chemical data. Unless a nice block of
limestone turns up, with included fossils, this has been a huge and
hugely expensive bullshit party. How many Enron ex-executives are
working at NASA?

--
Uncle Al
http://www.mazepath.com/uncleal/
(Toxic URL! Unsafe for children and most mammals)
"Quis custodiet ipsos custodes?" The Net!
Jul 17 '05 #153
In article <o0********************************@4ax.com>,
un******@objectmentor.com says...
NASA has had it's up's and downs. That's understandable. They are
doing things that nobody has ever done before.


A very good point, and I am a bit surprised it hasn't been said
previously in this thread. The public seems to have convinced
themselves that space travel is no more difficult than going down
to the corner store for a loaf of bread. It's not even remotely
true, with or without a passenger on board.

NASA shouldn't be getting a "black eye" for such failures, people
should be saying "good try, when are you going to make another
attempt at solving such a difficult problem". Stop and think
for a minute about how effective you would be at debugging your
project if the link between your development machine and the
system under test was on another planet and the delay between
inputs commensurately slow. Have fun. How would you debug
hardware problems remotely if you could not have any physical
contact with the machine? EVER AGAIN?

--
Randy Howard
2reply remove FOOBAR

Jul 17 '05 #154
In article <o0********************************@4ax.com>,
un******@objectmentor.com says...
NASA has had it's up's and downs. That's understandable. They are
doing things that nobody has ever done before.


A very good point, and I am a bit surprised it hasn't been said
previously in this thread. The public seems to have convinced
themselves that space travel is no more difficult than going down
to the corner store for a loaf of bread. It's not even remotely
true, with or without a passenger on board.

NASA shouldn't be getting a "black eye" for such failures, people
should be saying "good try, when are you going to make another
attempt at solving such a difficult problem". Stop and think
for a minute about how effective you would be at debugging your
project if the link between your development machine and the
system under test was on another planet and the delay between
inputs commensurately slow. Have fun. How would you debug
hardware problems remotely if you could not have any physical
contact with the machine? EVER AGAIN?

--
Randy Howard
2reply remove FOOBAR

Jul 17 '05 #155
Robert C. Martin wrote:
NASA has had it's up's and downs.


Modulo the apostrophes, that's great tee-shirt material.

--
Richard Heathfield : bi****@eton.powernet.co.uk
"Usenet is a strange place." - Dennis M Ritchie, 29 July 1999.
C FAQ: http://www.eskimo.com/~scs/C-faq/top.html
K&R answers, C books, etc: http://users.powernet.co.uk/eton
Jul 17 '05 #156
Robert C. Martin wrote:
NASA has had it's up's and downs.


Modulo the apostrophes, that's great tee-shirt material.

--
Richard Heathfield : bi****@eton.powernet.co.uk
"Usenet is a strange place." - Dennis M Ritchie, 29 July 1999.
C FAQ: http://www.eskimo.com/~scs/C-faq/top.html
K&R answers, C books, etc: http://users.powernet.co.uk/eton
Jul 17 '05 #157
"Randy Howard"...,
| un******@objectmentor.com says...
| > NASA has had it's up's and downs. That's understandable. They are
| > doing things that nobody has ever done before.
|
| .... The public seems to have convinced
| themselves that space travel is no more difficult than going down
| to the corner store for a loaf of bread. It's not even remotely
| true, with or without a passenger on board.

Don't be silly, have you ever tried
to purchase bread on Mars?

Even if you can find a baker, it's
stale by the time you're Earthside. [ ;-) ]

| ..How would you debug
| hardware problems remotely if you could not have any physical
| contact with the machine? EVER AGAIN?

Assuming you can get it to obey _any_
commands.

a) Orient front of rover toward big rock.
b) Drive to 3 metres distant from rock, stop.
c) Drive forward 4 metres.
d) Back up 3 metres.
e) Repeat c), d) until problem fixed or rover destroyed.

Easy peasy.
You folks just do not think laterally. ;-)

--
Andrew Thompson
* http://www.PhySci.org/ PhySci software suite
* http://www.1point1C.org/ 1.1C - Superluminal!
* http://www.AThompson.info/andrew/ personal site
Jul 17 '05 #158
"Randy Howard"...,
| un******@objectmentor.com says...
| > NASA has had it's up's and downs. That's understandable. They are
| > doing things that nobody has ever done before.
|
| .... The public seems to have convinced
| themselves that space travel is no more difficult than going down
| to the corner store for a loaf of bread. It's not even remotely
| true, with or without a passenger on board.

Don't be silly, have you ever tried
to purchase bread on Mars?

Even if you can find a baker, it's
stale by the time you're Earthside. [ ;-) ]

| ..How would you debug
| hardware problems remotely if you could not have any physical
| contact with the machine? EVER AGAIN?

Assuming you can get it to obey _any_
commands.

a) Orient front of rover toward big rock.
b) Drive to 3 metres distant from rock, stop.
c) Drive forward 4 metres.
d) Back up 3 metres.
e) Repeat c), d) until problem fixed or rover destroyed.

Easy peasy.
You folks just do not think laterally. ;-)

--
Andrew Thompson
* http://www.PhySci.org/ PhySci software suite
* http://www.1point1C.org/ 1.1C - Superluminal!
* http://www.AThompson.info/andrew/ personal site
Jul 17 '05 #159
Randy Howard wrote:
NASA shouldn't be getting a "black eye" for such failures, people
should be saying "good try, when are you going to make another
attempt at solving such a difficult problem". Stop and think
for a minute about how effective you would be at debugging your
project if the link between your development machine and the
system under test was on another planet and the delay between
inputs commensurately slow. Have fun. How would you debug
hardware problems remotely if you could not have any physical
contact with the machine? EVER AGAIN?


While I hesitate to bash NASA (I do think they're doing very important
work, and I worked there myself for seven years), I regrettably prolong
a thread which has been OT from its inception with this excerpt from a
recent news article:

"[Mission manager Jennifer] Trosper said the problem appeared to be that
the rover's flash memory couldn't handle the number of files it was
storing. ... She pointed out that the scientists had thoroughly tested
the rover's systems on Earth, but that the longest trial for the file
system was nine days, half of the 18 days Spirit operated before running
into the problem."

http://www.cnn.com/2004/TECH/space/01/26/mars.rovers/

"Thoroughly tested"? If you're going to send any object, and especially
an object with a computer and software, to a distant planet where it is
supposed to survive for about 90 days, wouldn't it seem prudent to run
at least a 90 day test of the object on earth before liftoff?

-- Dave
Jul 17 '05 #160
On Mon, 26 Jan 2004 19:11:34 -0800, David C DiNucci wrote:
"[Mission manager Jennifer] Trosper said the problem appeared to be that
the rover's flash memory couldn't handle the number of files it was
storing. ...


It seems the Spirit is willing, but the flash is weak...
--
Dave Seaman
Judge Yohn's mistakes revealed in Mumia Abu-Jamal ruling.
<http://www.commoncouragepress.com/index.cfm?action=book&bookid=228>
Jul 17 '05 #161

"Uncle Al" <Un******@hate.spam.net> wrote in message
news:40***************@hate.spam.net...
"Jan C. Vorbrüggen" wrote:
NASA is renowned for its antenna failures - the Hubble space
telescope,


HST didn't and doesn't have antenna problems.


It sure as Hell did. When they got it up there NASA discovered that
the high gain antenna feed cable intersected space swept out by the
high gain antenna. The technical term for this is "pookie pookie."
Ulysses at Jupiter


That one is going 'round the sun, and has had no problems of the sort.


The Jupiter orbiter could never deploy its high gain antenna. While
NASA spurted all over the Media for years, data was being received at
a rate recalling Radio Shack computers and their modems. Only a very
small fraction of the collected data was ever relayed. What a
successful mission.
and now their little radio-controlled go-cart on Mars.


...which isn't having an antenna problem, either.


No. It turns out $240 million won't custom-build a NASA FlashRAM card
as good as one can purchase at Radio Shack.


Actually, it apparently was not the flash ram, but the particular software
that accessed the flash ram. Thats why they think they can fix it by
sending a software patch.

l8r, Mike N. Christoff

Jul 17 '05 #162
No. It turns out $240 million won't custom-build a NASA FlashRAM card
as good as one can purchase at Radio Shack.


Actually, it apparently was not the flash ram, but the particular software
that accessed the flash ram. Thats why they think they can fix it by
sending a software patch.

http://www.cnn.com/2004/TECH/space/01/26/mars.rovers/

I stand corrected.

l8r, Mike N. Christoff

Jul 17 '05 #163
Robert C. Martin wrote:
NASA has had it's up's and downs. That's understandable. They are
doing things that nobody has ever done before.


I don't agree. NASA does things that nobody has ever done before, but most
time it fails, it fails because it does things that a lot of people have
done before, just differently. My impression is that NASA has a big NIH
problem. They had it back before Gemini, because they didn't use their
German rocket experts, and their own rockets blew up again and again, until
the Sputnik shock forced them to rethink a bit. On the other hand, Russian
rocket science is dead conservative industrial production. They still use
slightly modified V2 aggregates in their rockets - lots of them, making
failures unlikely, and since they are "mass products" (several thousands a
year), they are cheap.

When NASA says their flash file system is having problems, I'd like to bang
my head against a wall. Why didn't they use a off-the-shelf flash file
system that's known to work? For all NASA engineers: My footer has an
obligation: "If you do it yourself, you better do it right!".

--
Bernd Paysan
"If you want it done right, you have to do it yourself"
http://www.jwdt.com/~paysan/
Jul 17 '05 #164
On Tue, 27 Jan 2004 04:41:14 -0500, "Michael N. Christoff"
<mc********@sympatico.caREMOVETHIS> wrote:
> No. It turns out $240 million won't custom-build a NASA FlashRAM card
> as good as one can purchase at Radio Shack.


Actually, it apparently was not the flash ram, but the particular software
that accessed the flash ram. Thats why they think they can fix it by
sending a software patch.

http://www.cnn.com/2004/TECH/space/01/26/mars.rovers/

I stand corrected.

Don't give up so easily ;-) From the article you reference:
"Trosper said the problem appeared to be that the rover's flash memory
couldn't handle the number of files it was storing."

That sounds like a (possibly) software or (more probably) design and
specification problem, not hardware.

In any case, I'm interested to discover that Radio Shack FlashRAM
cards are far superior. I wonder how the OP determined that?

--
Al Balmer
Balmer Consulting
re************************@att.net
Jul 17 '05 #165
David C DiNucci wrote:
[snip] "[Mission manager Jennifer] Trosper said the problem appeared to be that
the rover's flash memory couldn't handle the number of files it was
storing. ... She pointed out that the scientists had thoroughly tested
the rover's systems on Earth, but that the longest trial for the file
system was nine days, half of the 18 days Spirit operated before running
into the problem."

http://www.cnn.com/2004/TECH/space/01/26/mars.rovers/

"Thoroughly tested"? If you're going to send any object, and especially
an object with a computer and software, to a distant planet where it is
supposed to survive for about 90 days, wouldn't it seem prudent to run
at least a 90 day test of the object on earth before liftoff?


HERETIC!

Any child of BillGates' Windoze knows that

1) Memory is unlimited, and
2) You *want* crud files to massively accumulate without any
callable mechanism for erasure. When the partition mysteriously blows
out, the ignorant consumer is screwed into reinstalling a purchased
update. Check out th size of your Windoze *.log files. What do they
do?

Uncle Al partitions a 3 Gbyte C: drive to hold *only* Windoze. It can
be rewritten without sacrificing the rest of the hard drive. It is a
contained garbage dump that can be manually purged. Run ZTREE and get
control of goddamned Windoze file and subdirectry obscenities,

http://www.ztree.com/
http://www.mazepath.com/uncleal/zt.zip
For a nice screen configuration file. You can flip ZTREE underneath
your idiot Desktop by hitting the Windoze key then Esc.

Uncle Al's box came from Gateway (cheap) with Windoze98. Uncle Al
spent months deleting files. The 12,000 original files are now down
to 7200. When things are slow... out go another few dozen. Do you
think this is funny? Look for all *.png files in your Windoze
partition. Does that voluminous pile of crap do anything? No, it
does not. Purge the image files, get back a few MB of hard drive.
(But who cares about a few MB? Memory si unlimited...)

Back to the mission... FlashRAM degrades with use. Each cell is good
for maybe a half-million cycles in commercial chips. Did NASA custom
fabricate its own NASA-crappy FlashRAM instead of visiting Radio
Shack? Did they save 20 grams by putting in half as much memory as
they really needed?

There are no hard drives on Mars! With ambient air pressure being
only 7 torr, there isn't nearly enough air pressure to levitate the
read/write heads when the hard drive spins up. If you seal the hard
drive, it overheats.

--
Uncle Al
http://www.mazepath.com/uncleal/qz.pdf
http://www.mazepath.com/uncleal/eotvos.htm
(Do something naughty to physics)
Jul 17 '05 #166
On Tue, 27 Jan 2004 09:52:57 -0800, Uncle Al <Un******@hate.spam.net>
wrote:
There are no hard drives on Mars! With ambient air pressure being
only 7 torr, there isn't nearly enough air pressure to levitate the
read/write heads when the hard drive spins up. If you seal the hard
drive, it overheats.


But (some) hard drives are indeed sealed. E.g.
http://www.wmrc.com/businessbriefing...gy/mountop.pdf
--
Al Balmer
Balmer Consulting
re************************@att.net
Jul 17 '05 #167
Bernd Paysan <be**********@gmx.de> wrote in message news:<rg************@miriam.mikron.de>...

<snip>
When NASA says their flash file system is having problems, I'd like to bang
my head against a wall. Why didn't they use a off-the-shelf flash file
system that's known to work? For all NASA engineers: My footer has an
obligation: "If you do it yourself, you better do it right!".


As I understand, the Rover uses VxWorks, which is one of the more
popular off-the-shelf RT OSs. VxWorks supports a flash filesystem,
and I'd guess NASA would use it (although they could have written
their own).

In any case VxWorks, is not bug-free. It contains bugs that you'd be
very suprised at. But as it is a niche OS, bugs that might be
considered unacceptable in a general purpose OS are viewed
as funny quirks.

It is very possible that its flash file system sucks up large amounts of
RAM where there are many files, and this might not be considered a
problem as it might be uncommon for RT apps. Perhaps the most common
usage of flash memory is executable images and configuration files.

Thanks for reading,
Jatian
Jul 17 '05 #168
Alan Balmer wrote:

On Tue, 27 Jan 2004 09:52:57 -0800, Uncle Al <Un******@hate.spam.net>
wrote:
There are no hard drives on Mars! With ambient air pressure being
only 7 torr, there isn't nearly enough air pressure to levitate the
read/write heads when the hard drive spins up. If you seal the hard
drive, it overheats.


But (some) hard drives are indeed sealed. E.g.
http://www.wmrc.com/businessbriefing...gy/mountop.pdf


Point taken!

"MOI has successfully deployed these hermetically sealed drives in a
variety of Department of Defense and commercial applications."

Why would NASA tolerate any other department's success? Mars' ambient
temp being 0 C and below, cooling is not much of an issue. But the
power use! Except ultraminiature hard drives are not rare, either.
Some digital cameras have them. But the bouncing! Park the heads.
But there are no reliability studies! Hire some high school summer
interns and do them. But there are UNKNOWN HAZARDS! Ahhhhhh....

Repeat after Uncle Al, "NASA, SNAFU FUBAR." The first Space Scuttle
had woven ferrite core memory. But then, the Space Scuttle was
designed as a low-orbital nuclear bomber and woven core is
EMP-resistant. NASA is the gang who couldn't shoot straight, casting
their own guns out of pot metal and machining their bullets out of
Mil-Spec shitanium.

As I watch this sorry saga unfold I get the overwhelming impression
that NASA is strutting its stuff and demanding vast Media exposure for
what any competent civilian engineering group would consider to be
piffle. Given a $780 million budget, they could have air-dumped a
rover in an Antarctic dry high desert and done a real time real world
reliability test. (Long range P-3 Orion sub-chasers are currently
underemployed. C-5A cargo planes can fly mostly empty. Vandenberg is
pretty good at firing stuff over a pole.) But wait! NASA must file
an Environmental Impact Report! With somebody or other.

NASA, "It will require weeks to determine the true color of Martian
soil." Didn't any of our nation's best and brightest think to paint
on a color calibration chart? I can walk down the street to a camera
shop and buy one, cheap. I'm certain that Pantone could have supplied
something for 1000X civilian price.

Marines don't sit around crying into their inventory sheets. They
hide stuff in their socks, then go out and kill the enemy. A
perfectly budgeted war with digital readouts doesn't mean much if you
lose it to ragheads using wooden clubs. (Except the REMFS will all
get promotions while the body bags are stacked. Which of those two
sets is NASA?)

Last comment: We've got an apparently functional rover (until its
FlashRAM mysteriously craps out in two weeks) within pissing distance
of naked Martian bedrock. Said stone has a very large albedo compared
to regolith, perhaps like limestone. Does anybody here really believe
the rover will make it over to the rock and have a good look? Or will
there be "intervening priorities" that result in the rover's surprise
disablement?

Oh yeah... Is the horrible stupid thing of a rover capable of
translating its position over slabs of smooth dissected rock? You
know where Uncle Al's bet is placed. It wasn't in the specs!

--
Uncle Al
http://www.mazepath.com/uncleal/qz.pdf
http://www.mazepath.com/uncleal/eotvos.htm
(Do something naughty to physics)
Jul 17 '05 #169
"Michael N. Christoff" <mc********@sympatico.caREMOVETHIS> wrote in message news:<LJ********************@news20.bellglobal.com >...
"This is a serious problem. This is an extremely serious anomaly," said Pete
Theisinger Spirit project manager.
"There is no single fault that explains all the observables."

"...but Spirit was only transmitting "pseudo-noise", a random series of
zeroes and ones in binary code and not anything the scientists could
decipher."
http://news.bbc.co.uk/1/hi/sci/tech/3421071.stm

l8r, Mike N. Christoff


Just for information through my inbox...

---------------------------------------------

DC Agle (818) 393-9011
Jet Propulsion Laboratory, Pasadena, Calif.

Donald Savage (202) 358-1547
NASA Headquarters, Washington, D.C.

NEWS RELEASE: 2004-040 January 27, 2004

Martian Landmarks Dedicated to Apollo 1 Crew

NASA memorialized the Apollo 1 crew -- Gus Grissom, Ed White and Roger
Chaffee -- by dedicating the hills surrounding the Mars Exploration
Rover Spirit's landing site to the astronauts. The crew of Apollo 1
perished in flash fire during a launch pad test of their Apollo
spacecraft at Kennedy Space Center, Fla., 37 years ago today.

"Through recorded history explorers have had both the honor and
responsibility of naming significant landmarks," said NASA
administrator Sean O'Keefe. "Gus, Ed and Roger's contributions, as
much as their sacrifice, helped make our giant leap for mankind
possible. Today, as America strides towards our next giant leap, NASA
and the Mars Exploration Rover team created a fitting tribute to these
brave explorers and their legacy."

Newly christened "Grissom Hill" is located 7.5 kilometers (4.7 miles)
to the southwest of Spirit's position. "White Hill" is 11.2 kilometers
(7 miles) northwest of its position and "Chaffee Hill" is 14.3
kilometers (8.9 miles) south-southwest of rover's position.

Lt. Colonel Virgil I. "Gus" Grissom was a U.S. Air Force test pilot
when he was selected in 1959 as one of NASA's Original Seven Mercury
Astronauts. On July 21, 1961, Grissom became the second American and
third human in space when he piloted Liberty Bell 7 on a 15 minute
sub-orbital flight. On March 23, 1965 he became the first human to
make the voyage to space twice when he commanded the first manned
flight of the Gemini space program, Gemini 3. Selected as commander of
the first manned Apollo mission, Grissom perished along with White and
Chaffee in the Apollo 1 fire. He is buried at Arlington National
Cemetery, Va.

Captain Edward White was a US Air Force test pilot when selected in
1962 as a member of the "Next Nine," NASA's second astronaut
selection. On June 3, 1965, White became the first American to walk in
space during the flight of Gemini 4. Selected as senior pilot for the
first manned Apollo mission, White perished along with Grissom and
Chaffee in the Apollo 1 fire. He is buried at his alma mater, the
United States Military Academy, West Point, N.Y.

Selected in 1963 as a member of NASA's third astronaut class, U.S.
Navy Lieutenant Commander Roger Chaffee worked as a Gemini capsule
communicator. He also researched flight control communications
systems, instrumentation systems, and attitude and translation control
systems for the Apollo Branch of the Astronaut office. On March 21,
1966, he was selected as pilot for the first 3-man Apollo flight. He
is buried at Arlington National Cemetery, Va.

Images of the Grissom, White and Chaffee Hills can be found at:
http://www.jpl.nasa.gov/mer2004/rove...s/image-1.html

The Jet Propulsion Laboratory, Pasadena, Calif., manages the Mars
Exploration Rover project for NASA's Office of Space Science,
Washington, D.C. JPL is a division of the California Institute of
Technology, also in Pasadena. Additional information about the project
is available from JPL at http://marsrovers.jpl.nasa.gov and from
Cornell University, Ithaca, N.Y., at http://athena.cornell.edu.

---------------------------------------------

-Abhi.
Jul 17 '05 #170
David C DiNucci <da**@elepar.com> wrote in message news:<40***************@elepar.com>...
While I hesitate to bash NASA (I do think they're doing very important
work, and I worked there myself for seven years), I regrettably prolong
a thread which has been OT from its inception with this excerpt from a
recent news article:

"[Mission manager Jennifer] Trosper said the problem appeared to be that
the rover's flash memory couldn't handle the number of files it was
storing. ... She pointed out that the scientists had thoroughly tested
the rover's systems on Earth, but that the longest trial for the file
system was nine days, half of the 18 days Spirit operated before running
into the problem."

http://www.cnn.com/2004/TECH/space/01/26/mars.rovers/

"Thoroughly tested"? If you're going to send any object, and especially
an object with a computer and software, to a distant planet where it is
supposed to survive for about 90 days, wouldn't it seem prudent to run
at least a 90 day test of the object on earth before liftoff?


Oh my #$*&%'ing G*d.

Ok ... obligatory disclaimer about difficulty of ...

Screw it! This is a a languid courtesan, reclining seductively, and
saying "bash me, big boy"!

I am almost flabbergasted into textlessness. The fact that a system
.... any system, not just a computer ... may work correctly in some one
or two delta range yet fail in some 10 or 20 delta range ... some
newbie tyro university graduate wet behind the ears neophyte kid might
make this mistake in a small project, and the old seasoned pro salt
seen it all manager would take this as a teaching opportunity. But in
an entire organization, a huge project putting a robot on a distant
planet, and not once did this occur to anybody!? Nobody with
sufficient experience or a long enough memory reviewed the test plans?

Anybody who has operated MicroSoft windows has had the same
experience: crap accumulates with operation.

This is the type of thing which used to get me flamed when I chatted
in the all-computer-pro group of a local ISP: since I can no longer
write even a Basic program, how can I dare comment ... oish. Wet
nurse tyro-ish newbie error. Inbred self-congratulating community.
Industrial experience of a polywog. Etc.

Help me out here, Uncle Al: I'm running out of germane insults. ;-)

Size of institutional memory buffer needs to be increased. Political
ossification makes effective project review impossible.
Jul 17 '05 #171
On Tue, 27 Jan 2004 09:52:57 -0800, Uncle Al <Un******@hate.spam.net>
wrote:
David C DiNucci wrote:
[snip]

"[Mission manager Jennifer] Trosper said the problem appeared to be that
the rover's flash memory couldn't handle the number of files it was
storing. ... She pointed out that the scientists had thoroughly tested
the rover's systems on Earth, but that the longest trial for the file
system was nine days, half of the 18 days Spirit operated before running
into the problem."

http://www.cnn.com/2004/TECH/space/01/26/mars.rovers/

"Thoroughly tested"? If you're going to send any object, and especially
an object with a computer and software, to a distant planet where it is
supposed to survive for about 90 days, wouldn't it seem prudent to run
at least a 90 day test of the object on earth before liftoff?


HERETIC!

Any child of BillGates' Windoze knows that

1) Memory is unlimited, and


"64K should be enough for everybody."
Bill Gates (1981)

I do not think that the above contradicts in any way! (:-))

--
Regards,
Dmitry A. Kazakov
www.dmitry-kazakov.de
Jul 17 '05 #172
> Chaffee -- by dedicating the hills surrounding the Mars Exploration
Rover Spirit's landing site to the astronauts. The crew of Apollo 1
perished in flash fire during a launch pad test of their Apollo

^^^^^
-- Stefan
Jul 17 '05 #173
Dmitry A Kazakov sez:
On Tue, 27 Jan 2004 09:52:57 -0800, Uncle Al <Un******@hate.spam.net>
wrote:
David C DiNucci wrote:
[snip]

"[Mission manager Jennifer] Trosper said the problem appeared to be that
the rover's flash memory couldn't handle the number of files it was
storing. ... She pointed out that the scientists had thoroughly tested
the rover's systems on Earth, but that the longest trial for the file
system was nine days, half of the 18 days Spirit operated before running
into the problem."

http://www.cnn.com/2004/TECH/space/01/26/mars.rovers/

"Thoroughly tested"? If you're going to send any object, and especially
an object with a computer and software, to a distant planet where it is
supposed to survive for about 90 days, wouldn't it seem prudent to run
at least a 90 day test of the object on earth before liftoff?


HERETIC!

Any child of BillGates' Windoze knows that

1) Memory is unlimited, and


"64K should be enough for everybody."
Bill Gates (1981)


Now google for exact source of this quote.

Dima
--
We're sysadmins. Sanity happens to other people. -- Chris King
Jul 17 '05 #174
Dimitri Maziuk wrote:
Dmitry A Kazakov sez:
On Tue, 27 Jan 2004 09:52:57 -0800, Uncle Al <Un******@hate.spam.net>
wrote:

David C DiNucci wrote:

[snip]

"[Mission manager Jennifer] Trosper said the problem appeared to be that
the rover's flash memory couldn't handle the number of files it was
storing. ... She pointed out that the scientists had thoroughly tested
the rover's systems on Earth, but that the longest trial for the file
system was nine days, half of the 18 days Spirit operated before running
into the problem."

http://www.cnn.com/2004/TECH/space/01/26/mars.rovers/

"Thoroughly tested"? If you're going to send any object, and especially
an object with a computer and software, to a distant planet where it is
supposed to survive for about 90 days, wouldn't it seem prudent to run
at least a 90 day test of the object on earth before liftoff?

HERETIC!

Any child of BillGates' Windoze knows that

1) Memory is unlimited, and


"64K should be enough for everybody."
Bill Gates (1981)

Now google for exact source of this quote.

Dima


You'r right, we have an expression at school that "Memory is cheap".
Even our larger apps don't take much more than 100mb of memory (that's
for huge simulations).

But its a special case. When you have to lift up the memory where 1 lb
of equipment is 5lb of propellant, then its another issue. They have to
skim on weight, and that includes having small amounts of memory.. As a
tradeoff i bet they have abit of a faster processor and transmitter.

The mars rover is an example of Software Engineering vs Computer Science
vs Computer Engineering vs Electrical Engineering.
Jul 17 '05 #175
Uncle Al <Un******@hate.spam.net> wrote in message news:<40**************@hate.spam.net>...

There are no hard drives on Mars! With ambient air pressure being
only 7 torr, there isn't nearly enough air pressure to levitate the
read/write heads when the hard drive spins up. If you seal the hard
drive, it overheats.


This is a trivial engineering problem to address. Encase the HD in a
larger sealed and presurize container, with enough surface area and/or
internal air circulation to keep it cool. A low power 2.5" HD
shouldn't take that much larger of a container. What about the flash
sized microdrives?

-Bruce
Jul 17 '05 #176
Bruce Bowen wrote:
Uncle Al <Un******@hate.spam.net> wrote in message news:<40**************@hate.spam.net>...
There are no hard drives on Mars! With ambient air pressure being
only 7 torr, there isn't nearly enough air pressure to levitate the
read/write heads when the hard drive spins up. If you seal the hard
drive, it overheats.

This is a trivial engineering problem to address. Encase the HD in a
larger sealed and presurize container, with enough surface area and/or
internal air circulation to keep it cool. A low power 2.5" HD
shouldn't take that much larger of a container. What about the flash
sized microdrives?


From what i was tought all hard drives must be kept sealed. This is
because dust and air pollutants could accumulate on the read head, and
if disaster strikes, a particle of dust could get between the head and
the disk (which are almost touching, but not quite).

-Bruce

Jul 17 '05 #177
On 29 Jan 2004 16:20:29 -0800, br*****@my-deja.com (Bruce Bowen)
wrote:
Uncle Al <Un******@hate.spam.net> wrote in message news:<40**************@hate.spam.net>...

There are no hard drives on Mars! With ambient air pressure being
only 7 torr, there isn't nearly enough air pressure to levitate the
read/write heads when the hard drive spins up. If you seal the hard
drive, it overheats.


This is a trivial engineering problem to address. Encase the HD in a
larger sealed and presurize container, with enough surface area and/or
internal air circulation to keep it cool. A low power 2.5" HD
shouldn't take that much larger of a container. What about the flash
sized microdrives?


Yeah, and the 4+ Gs that the drive would experience during take-off
would do wonders for that drive! Not to mention the high levels of
radiation in space would probably fry any drive (to the best of my
knowledge, no one makes rad-hardened hard drives).

Just stick everything on a disk-on-chip, much easier and cheaper than
trying to jerry-rig some sort of hard drive contraption.

-------------
Tony Hill
hilla <underscore> 20 <at> yahoo <dot> ca
Jul 17 '05 #178
In article <35******************************@news.1usenet.com >,
Tony Hill <hi*************@yahoo.ca> wrote:
Yeah, and the 4+ Gs that the drive would experience during take-off
would do wonders for that drive!


Don't worry about the launch so much as the transients on landing
(which is to say, hitting and bouncing a couple of dozen times :-)
Jon
__@/
Jul 17 '05 #179
In sci.physics, Yoyoma_2
<Yoyoma_2@[>
wrote
on Fri, 30 Jan 2004 04:26:31 GMT
<X1lSb.333660$X%5.19364@pd7tw2no>:
Bruce Bowen wrote:
Uncle Al <Un******@hate.spam.net> wrote in message news:<40**************@hate.spam.net>...
There are no hard drives on Mars! With ambient air pressure being
only 7 torr, there isn't nearly enough air pressure to levitate the
read/write heads when the hard drive spins up. If you seal the hard
drive, it overheats.

This is a trivial engineering problem to address. Encase the HD in a
larger sealed and presurize container, with enough surface area and/or
internal air circulation to keep it cool. A low power 2.5" HD
shouldn't take that much larger of a container. What about the flash
sized microdrives?


From what i was tought all hard drives must be kept sealed. This is
because dust and air pollutants could accumulate on the read head, and
if disaster strikes, a particle of dust could get between the head and
the disk (which are almost touching, but not quite).


At the scale of a disk drive's heads a human hair would
be the equivalent of hitting a mountain -- but with a
slightly different result; instead of simply destroying
the item hitting the platter, it would leave a very nasty
gouge in the platter -- a head crash. Therefore all drives
of this type have to be sealed.

I was under the impression that drives had to survive
10-G impulse tests (e.g., dropping a laptop on the floor
from the height of a desk). So 4G wouldn't be much of a
problem to handle.

I don't know about radiation.

Uncle Al has a point; here on Earth there's a good
(relatively speaking) connection between ambient air and
the drive's internals. At 1 kPa the thermal flow is much
more tenuous; of course, one could deploy the drive with
a large cooling panel which doubles as its power supply
(the sun powers the drive; the water circulates among
the panels, heating them and cooling the drive). A large
forced-air fan might complete the ensemble, making for a
Frankenstein's monster that looks like a cross between a
jet engine and a tail-dragging Godzilla... :-) though I
suspect the primary cooling method is radiative.

Modern computers are starting to use water-cooling, which
could get interesting if people get the bright idea of
hooking the heat exchanger into the cold-water supply as
opposed to letting it exhaust noisily into the ambient air.
Of course all that does is transfer the heat elsewhere
(and possibly waste water), probably into the sea if the
water is allowed to drain, or to one's neighbors if the
water goes back into the line (not recommended for various
reasons; in fact, backflow check valves are required for
certain equipment).

I would think, though, that, absent radiation concerns,
flash memory is a nice solution -- and the radiation
presumably can be mitigated by proper shielding.



-Bruce

--
#191, ew****@earthlink.net
It's still legal to go .sigless.
Jul 17 '05 #180
Tony Hill wrote:
On 29 Jan 2004 16:20:29 -0800, br*****@my-deja.com (Bruce Bowen)
wrote:
Uncle Al <Un******@hate.spam.net> wrote in message news:<40**************@hate.spam.net>...

This is a trivial engineering problem to address. Encase the HD in a
larger sealed and presurize container, with enough surface area and/or
internal air circulation to keep it cool. A low power 2.5" HD
shouldn't take that much larger of a container. What about the flash
sized microdrives?

Yeah, and the 4+ Gs that the drive would experience during take-off
would do wonders for that drive! Not to mention the high levels of
radiation in space would probably fry any drive (to the best of my
knowledge, no one makes rad-hardened hard drives).


I actually know of some rad-proofing drives. Actually i hear there are
alot of them, they are mostly used in the military (mil-spec drives).
For example, hard drives on an aircraft carrier have to be able to take
a direct nuclear assult and still function. Little piece of cold war
trivia for you :)

Ran proofing isn't a very big deal.

According to this page i just pulled up at
random(http://www.westerndigital.com/en/pro...sp?DriveID=41),
normal desktop "run of the mill" drives can take impulses of up to 250G
when non-operating, and 60G while operating at a delta-t of 2 seconds.
That's pretty good.

So if that's for ordinary hard drives, immagine a mil-spec drive. I
doubt carriers kept all of their data on flash memory in the 1970s.
Even if they use mag tape reel, it implies that they have developed some
sort of rad-proofing for it.

Just stick everything on a disk-on-chip, much easier and cheaper than
trying to jerry-rig some sort of hard drive contraption.

-------------
Tony Hill
hilla <underscore> 20 <at> yahoo <dot> ca

Jul 17 '05 #181


Yoyoma_2 wrote:
Tony Hill wrote:
On 29 Jan 2004 16:20:29 -0800, br*****@my-deja.com (Bruce Bowen)
wrote:
Uncle Al <Un******@hate.spam.net> wrote in message
news:<40**************@hate.spam.net>...

This is a trivial engineering problem to address. Encase the HD in a
larger sealed and presurize container, with enough surface area and/or
internal air circulation to keep it cool. A low power 2.5" HD
shouldn't take that much larger of a container. What about the flash
sized microdrives?


Yeah, and the 4+ Gs that the drive would experience during take-off
would do wonders for that drive! Not to mention the high levels of
radiation in space would probably fry any drive (to the best of my
knowledge, no one makes rad-hardened hard drives).

I actually know of some rad-proofing drives. Actually i hear there are
alot of them, they are mostly used in the military (mil-spec drives).
For example, hard drives on an aircraft carrier have to be able to take
a direct nuclear assult and still function. Little piece of cold war
trivia for you :)

Ran proofing isn't a very big deal.

According to this page i just pulled up at
random(http://www.westerndigital.com/en/pro...sp?DriveID=41),
normal desktop "run of the mill" drives can take impulses of up to 250G
when non-operating, and 60G while operating at a delta-t of 2 seconds.
That's pretty good.

So if that's for ordinary hard drives, immagine a mil-spec drive. I
doubt carriers kept all of their data on flash memory in the 1970s. Even
if they use mag tape reel, it implies that they have developed some sort
of rad-proofing for it.


And the 4Gs thing is a non-issue. Most normal ATA drives can take around
300Gs when not operating or 30Gs when running without damage.

Jul 17 '05 #182
Howdy Edward
I am almost flabbergasted into textlessness. The fact that a system
... any system, not just a computer ... may work correctly in some one
or two delta range yet fail in some 10 or 20 delta range ... some
newbie tyro university graduate wet behind the ears neophyte kid might
make this mistake in a small project, and the old seasoned pro salt
seen it all manager would take this as a teaching opportunity. But in
an entire organization, a huge project putting a robot on a distant
planet, and not once did this occur to anybody!?


Yep, you nailed it.

My 5-word software testing book: Run 'er Hard & Long

-- stan


Jul 17 '05 #183
Re: Mars Rover Not Responding!
Maybe it's your cologne, you Martian perv!
Jul 17 '05 #184
In article <I4************************@twister2.starband.net> ,
Stanley Krute <St**@StanKrute.com> wrote:
Howdy Edward
I am almost flabbergasted into textlessness. The fact that a system
... any system, not just a computer ... may work correctly in some one
or two delta range yet fail in some 10 or 20 delta range ... some
newbie tyro university graduate wet behind the ears neophyte kid might
make this mistake in a small project, and the old seasoned pro salt
seen it all manager would take this as a teaching opportunity. But in
an entire organization, a huge project putting a robot on a distant
planet, and not once did this occur to anybody!?


Yep, you nailed it.

My 5-word software testing book: Run 'er Hard & Long


Sigh. That is very likely the CAUSE of the problem :-(

Any particular test schedule (artificial or natural) will create a
distribution of circumstances over the space of all that are handled
differently by the program. And, remember, we are talking about a
space of cardinality 10^(10^4) to 10^(10^8). Any particular, broken
logic (usually a combination of sections of code and data) may be
invoked only once ever millennium, or perhaps never.

Now, change the test schedule in an apparently trivial way, or use
the program for real, and that broken logic may be invoked once a
day. Ouch. Incidentally, another way of looking at this is the
probability of distinguishing two finite element automata by feeding
in test strings and comparing the results. It was studied some
decades back, and the conclusions are not pretty.

The modern, unsystematic approach to testing is hopeless as an
egnineering technique, though fine as a political or marketing one.
For high-reliability codes, we need to go back to the approaches
used in the days when most computer people were also mathematicians,
engineers or both.
Regards,
Nick Maclaren.
Jul 17 '05 #185
Howdy Nick
The modern, unsystematic approach to testing is hopeless as an
egnineering technique, though fine as a political or marketing one.
For high-reliability codes, we need to go back to the approaches
used in the days when most computer people were also mathematicians,
engineers or both.


Deep agreement as to importance of math-smart testing.

-- stan
Jul 17 '05 #186
nm**@cus.cam.ac.uk (Nick Maclaren) wrote in message news:<bv**********@pegasus.csx.cam.ac.uk>...
In article <I4************************@twister2.starband.net> ,
Stanley Krute <St**@StanKrute.com> wrote:
My 5-word software testing book: Run 'er Hard & Long


Sigh. That is very likely the CAUSE of the problem :-(

Any particular test schedule (artificial or natural) will create a
distribution of circumstances over the space of all that are handled
differently by the program. And, remember, we are talking about a
space of cardinality 10^(10^4) to 10^(10^8). Any particular, broken
logic (usually a combination of sections of code and data) may be
invoked only once ever millennium, or perhaps never.

Now, change the test schedule in an apparently trivial way, or use
the program for real, and that broken logic may be invoked once a
day. Ouch. Incidentally, another way of looking at this is the
probability of distinguishing two finite element automata by feeding
in test strings and comparing the results. It was studied some
decades back, and the conclusions are not pretty.

The modern, unsystematic approach to testing is hopeless as an
egnineering technique, though fine as a political or marketing one.
For high-reliability codes, we need to go back to the approaches
used in the days when most computer people were also mathematicians,
engineers or both.


Going back to the case at hand: it still sounds to me like the stated
cause of the bug -- more files were written to flash memory than were
anticipated in design -- is at least restrospectively obviously a
possibility which should have been addressed at the design stage, and
maybe should have been prospectively obvious also.

I'm not quite sure how to jive this with your theoretical insight that
we are searching a space of a cardinality of 10 followed by many, many
zeroes, and similar observations which make the problem sound
hopeless: maybe I'm naively wrong, or not (well that covers all the
possibilities ;-).

Is asking that when the program expends some resource it handles the
problem in some minimally damaging way really an impossibly hard
problem in the space of all the impossibly hard problems with which
computer science abounds, or is it merely a challenging but tractable
engineering problem?

For example, suppose we had a machine running around some play pen,
the the space of possible joint states of the machine and the play pen
were of cardinality 10 followed by some humungous number of zeroes.
And suppose that when the machine leaves the play pen, that is a
"crash". Now, we might ask why the machine crashed, and the designer
might respond with language about the cardinality of the joint state
space, and the impossibility of complete testing. But now we might
ask why he did not put a _fence_ around the play pen, and this answer
is no longer sufficient, and the answer "well, we let it run around
for a while, and it didn't seem likely to cross the boundaries, so we
didn't bother with a fence", is marginal.

Is the problem of building a number of internal fences in complex
systems sufficient to provided timely alert to unanticipated operating
conditions itself an intractably hard problem, or merely hard?
Jul 17 '05 #187
In article <2a**************************@posting.google.com >,
Edward Green <nu*******@aol.com> wrote:

Going back to the case at hand: it still sounds to me like the stated
cause of the bug -- more files were written to flash memory than were
anticipated in design -- is at least restrospectively obviously a
possibility which should have been addressed at the design stage, and
maybe should have been prospectively obvious also.
Perhaps. Without investigating the problem carefully, it is impossible
to tell.
I'm not quite sure how to jive this with your theoretical insight that
we are searching a space of a cardinality of 10 followed by many, many
zeroes, and similar observations which make the problem sound
hopeless: maybe I'm naively wrong, or not (well that covers all the
possibilities ;-).

Is asking that when the program expends some resource it handles the
problem in some minimally damaging way really an impossibly hard
problem in the space of all the impossibly hard problems with which
computer science abounds, or is it merely a challenging but tractable
engineering problem?


There are several aspects here. Minimising damage IS an impossibly
hard problem (not just exponentially expensive, but insoluble). But
that is often used as excuse to avoid even attempting to constrain
the consequent damages. Think of it this way.

Identifying and logging resource exhaustion takes a fixed time, so
there is no excuse not to do it. Yes, that can exhaust space in the
logging files, so there is a recursive issue, but there are known
partial solutions to that.

Identifying the cause is simple if there is one factor, harder if
there are two, and so on. To a great extent, that is also true of
predicting the resources needed, but that can be insoluble even with
one factor. This is confused by the fact that, the fewer the factors,
the more likely a bug is to be removed in initial testing.

Most of my bug-tracking time is spent on ones with 3-10 factors, on
a system with (say) 100-1,000 relevant factors. It isn't surprising
that the vendors' testing has failed to detect them. There are only
two useful approaches to such issues:

1) To design the system using a precise mathematical model, so
that you can eliminate, minimise or constrain interactions. This
also needs PRECISE interface specifications, of course, not the sloppy
rubbish that is almost universal.

2) To provide good detection and diagnostic facilities, to help
locating the causes and effects of such problems. This is even more
neglected nowadays, and is of limited help for systems like Mars
missions.
Regards,
Nick Maclaren.
Jul 17 '05 #188
In article <bv**********@pegasus.csx.cam.ac.uk>, Nick Maclaren
<nm**@cus.cam.ac.uk> writes
In article <I4************************@twister2.starband.net> ,
Stanley Krute <St**@StanKrute.com> wrote:
Howdy Edward
I am almost flabbergasted into textlessness. The fact that a system
... any system, not just a computer ... may work correctly in some one
or two delta range yet fail in some 10 or 20 delta range ... some
newbie tyro university graduate wet behind the ears neophyte kid might
make this mistake in a small project, and the old seasoned pro salt
seen it all manager would take this as a teaching opportunity. But in
an entire organization, a huge project putting a robot on a distant
planet, and not once did this occur to anybody!?


Yep, you nailed it.

My 5-word software testing book: Run 'er Hard & Long


Sigh. That is very likely the CAUSE of the problem :-(

Any particular test schedule (artificial or natural) will create a
distribution of circumstances over the space of all that are handled
differently by the program. And, remember, we are talking about a
space of cardinality 10^(10^4) to 10^(10^8). Any particular, broken
logic (usually a combination of sections of code and data) may be
invoked only once ever millennium, or perhaps never.

Now, change the test schedule in an apparently trivial way, or use
the program for real, and that broken logic may be invoked once a
day. Ouch. Incidentally, another way of looking at this is the
probability of distinguishing two finite element automata by feeding
in test strings and comparing the results. It was studied some
decades back, and the conclusions are not pretty.

The modern, unsystematic approach to testing is hopeless as an
egnineering technique, though fine as a political or marketing one.
For high-reliability codes, we need to go back to the approaches
used in the days when most computer people were also mathematicians,
engineers or both.
Regards,
Nick Maclaren.

I agree that more could be done before thorough testing, but I would not
attempt to replace even random and soak testing. Formal methods alone
will never be enough because they prove only the correctness of a
specification implemented by a model, not that the specification or the
model are accurate enough representations of the real world. The speed
with which NASA have claimed to replicate the problem suggests something
widely enough spread through the state space to be replicable by soak
testing. Furthermore, a planetary probe is a pretty good match to even
random testing, because (given the relative costs of putting something
on Mars and of conducting automatic testing in simulation or in a
warehouse) it may be possible to run for longer in test than in real
life, reducing the chance of bugs that show up only in real life.
Example: the priority inversion bug that hit a previous probe had
apparently shown up in testing but been ignored, because it wasn't what
they were looking for at the time. My impression of current good
practice is that black box testing, white box testing, and code review
are good at finding different sorts of bugs, and so should be used
together. I would lump pre-testing approaches into code review.
--
A. G. McDowell
Jul 17 '05 #189
In article <XB**************@mcdowella.demon.co.uk>,
A. G. McDowell <no****@nospam.co.uk> wrote:

I agree that more could be done before thorough testing, but I would not
attempt to replace even random and soak testing. Formal methods alone
will never be enough because they prove only the correctness of a
specification implemented by a model, not that the specification or the
model are accurate enough representations of the real world. The speed
with which NASA have claimed to replicate the problem suggests something
widely enough spread through the state space to be replicable by soak
testing. Furthermore, a planetary probe is a pretty good match to even
random testing, because (given the relative costs of putting something
on Mars and of conducting automatic testing in simulation or in a
warehouse) it may be possible to run for longer in test than in real
life, reducing the chance of bugs that show up only in real life.
Example: the priority inversion bug that hit a previous probe had
apparently shown up in testing but been ignored, because it wasn't what
they were looking for at the time. My impression of current good
practice is that black box testing, white box testing, and code review
are good at finding different sorts of bugs, and so should be used
together. I would lump pre-testing approaches into code review.


That is a classic example of what I say is mistaken methodology!
Yes, pretty well everything that you say is true, but you have missed
the fact that interactions WILL change the failure syndromes in ways
that mean untargetted testing will miss even the most glaringly
obvious errors. There are just TOO MANY possible combinations of
conditions to rely on random testing.

For a fairly simple or pervasive problem, unintelligent 'soak' testing
will work. For a more complex one, it won't. Unless you target the
testing fairly accurately, certain syndromes will either not occur or
be so rare as not to happen in a month of Sundays. I see this problem
on a daily basis :-(

An aspect of a mathematical design that I did not say explicitly (but
only hinted at) is that you can identify areas to check that the code
matches the mode and other areas where the analysis descends from
mathematics to hand-waving. You can then design precise tests for the
former, and targetted soak tests for the latter. It isn't uncommon
for such an approach to increase the effectiveness of testing beyond
all recognition.
Regards,
Nick Maclaren.
Jul 17 '05 #190
In article <bv**********@pegasus.csx.cam.ac.uk>, Nick Maclaren
<nm**@cus.cam.ac.uk> writes
(trimmed)

That is a classic example of what I say is mistaken methodology!
Yes, pretty well everything that you say is true, but you have missed
the fact that interactions WILL change the failure syndromes in ways
that mean untargetted testing will miss even the most glaringly
obvious errors. There are just TOO MANY possible combinations of
conditions to rely on random testing.
(trimmed)
An aspect of a mathematical design that I did not say explicitly (but
only hinted at) is that you can identify areas to check that the code
matches the mode and other areas where the analysis descends from
mathematics to hand-waving. You can then design precise tests for the
former, and targetted soak tests for the latter. It isn't uncommon
for such an approach to increase the effectiveness of testing beyond
all recognition.
Regards,
Nick Maclaren.

I would be very interested to hear more about increasing the
effectiveness of testing beyond all recognition. I am a professional
programmer in an area where we routinely estimate the testing effort as
about equal to the programming effort (in terms of staff time, but not
necessarily staff cost). Do you have references? As a token of sincerity
I will provide references for what we seem to agree is commercial
practice (whether it should be or not):

The main reference establishing commercial practice that I can find
online seems to be "A controlled Experiment in Program Testing and Code
Walkthroughs/Inspections" by Myers, CACM Volume 21, Number 9. The date
shows that some technology transfer might indeed be overdue - Volume 21
translates to 1978! However, I think automated testing, especially
regression testing, has become a lot easier, or at least more popular,
than it was (JUnit, Rational Robot, etc.). The notion of test coverage
tools seems to be of similar vintage and actually became less accessible
for a while as no version of tcov appeared for setups other than K&R C
on Unix. I spent part of last week trying out Hansel, an open source
test coverage tool for Java, available on www.sourceforge.net.

References from within books to hand:
Black box and white box complementary: "Testing Computer Software", by
Kamer, Falk, and Nguyen, Chapter 12 P271.
Code Review invaluable (but few details on how to do one): "Software
Assessments, Benchmarks, and Best Practices", by Capers Jones, e.g.
Chapter on Best Practices for Systems Software, P367
Mars Pathfinder bug ignored during pre-launch tests: "The Practice of
Programming", by Kernighan and Pike, Section 5.2 P121. (The next chapter
is a good short overview of commercial-practice test design circa 1999).
--
A. G. McDowell
Jul 17 '05 #191
"A. G. McDowell" <no****@nospam.co.uk> writes:
I would be very interested to hear more about increasing the
effectiveness of testing beyond all recognition. I am a professional
programmer in an area where we routinely estimate the testing effort as
about equal to the programming effort (in terms of staff time, but not
necessarily staff cost). Do you have references? As a token of sincerity
I will provide references for what we seem to agree is commercial
practice (whether it should be or not):


when we were doing the original payment gateway
http://www.garlic.com/~lynn/aadsm5.htm#asrn2
http://www.garlic.com/~lynn/aadsm5.htm#asrn3

we set up a test matrix ... not for the sotware ... but for the
service. nominal payment infrastructure trouble desk did 5 minute
problem first level problem determination ... however that was for an
infrastructure that was almost exclusively circuit based.

while it was possible to translate the (payment) message formats from
a circuit based infrastructure to a packet based infrastructure ...
translating the circuit-based service operation to a packet-based
infrastructure was less clear cut (merchant/webhost complains that
payments aren't working ... expects internet/packet connection to be
much less expensive than direct circuit ... but at the same time
expects compareable availability).

The claim has been that coding for a service operation is 4-10 times
that of a straight application implementation and ten times the effort
because of needing to understand all possible failure modes
.... regardless of whether they are characteristic of the software or
hardware or some totally unrelated environmental characteristic.

in any case, one of the issues was detailed analysis of existing
trouble desk circuit-based problem determination procedures and being
able to translate that into a packet-based (internet) environment and
still attempt to come close to the goal of being able to perform first
level problem determination in five minutes. When we started there
were cases of trouble ticket being close NTF (no trouble found) after
3hrs of manual investigation.

of course this was also at a time ... when it was difficult to find
any ISP that even knew how to spell "service level agreement".

aka ... it is possible for software to perform flawlessly and still be
useless.

some of this came from doing ha/cmp
http://www.garlic.com/~lynn/subtopic.html#hacmp

misc. related past threads
http://www.garlic.com/~lynn/2000g.html#50 Egghead cracked, MS IIS again
http://www.garlic.com/~lynn/2001e.html#48 Where are IBM z390 SPECint2000 results?
http://www.garlic.com/~lynn/2001f.html#75 Test and Set (TS) vs Compare and Swap (CS)
http://www.garlic.com/~lynn/2001k.html#18 HP-UX will not be ported to Alpha (no surprise)exit
http://www.garlic.com/~lynn/2001n.html#85 The demise of compaq
http://www.garlic.com/~lynn/2001n.html#91 Buffer overflow
http://www.garlic.com/~lynn/2001n.html#93 Buffer overflow
http://www.garlic.com/~lynn/2002.html#28 Buffer overflow
http://www.garlic.com/~lynn/2002.html#29 Buffer overflow
http://www.garlic.com/~lynn/2002e.html#73 Blade architectures
http://www.garlic.com/~lynn/2002f.html#24 Computers in Science Fiction
http://www.garlic.com/~lynn/2002h.html#11 Why did OSI fail compared with TCP-IP?
http://www.garlic.com/~lynn/2002h.html#12 Why did OSI fail compared with TCP-IP?
http://www.garlic.com/~lynn/2002n.html#11 Wanted: the SOUNDS of classic computing
http://www.garlic.com/~lynn/2003b.html#53 Microsoft worm affecting Automatic Teller Machines
http://www.garlic.com/~lynn/2003g.html#62 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM
http://www.garlic.com/~lynn/2003j.html#15 A Dark Day
http://www.garlic.com/~lynn/2003l.html#49 Thoughts on Utility Computing?
http://www.garlic.com/~lynn/2003p.html#37 The BASIC Variations

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/
Internet trivia 20th anv http://www.garlic.com/~lynn/rfcietff.htm
Jul 17 '05 #192

"A. G. McDowell" <no****@nospam.co.uk> wrote in message
news:AZ**************@mcdowella.demon.co.uk...
In article <bv**********@pegasus.csx.cam.ac.uk>, Nick Maclaren
<nm**@cus.cam.ac.uk> writes
(trimmed)

That is a classic example of what I say is mistaken methodology


I hate to barge in and short-circuit a really very interesting
discussion, but
I think a point has been missed.

The SW on the Rover was designed from the start to be remotely
debugged
and patched. This is a truely wonderful thing. Why spend the effort
it takes
to design bullet-proof SW when you can spend it designing fixable SW?

--arne (I'll take my answer off the air... ;-)
Jul 17 '05 #193
> Example: the priority inversion bug that hit a previous probe

....Mars Pathfinder...
had apparently shown up in testing but been ignored, because it
wasn't what they were looking for at the time.


Of course, that project was run on a shoe-string budget (relatively),
the priority-inversion-caused resets occured sporadically (only a few
times during maybe a year's worth of testing) and unreproducibly, and
most importantly: those guys had a strict deadline, and a lot of other
problems to solve. A rasonable trade-off, all things considered, IMO.

Jan
Jul 17 '05 #194

In article <AZ**************@mcdowella.demon.co.uk>,
"A. G. McDowell" <no****@nospam.co.uk> writes:
|> >
|> I would be very interested to hear more about increasing the
|> effectiveness of testing beyond all recognition. I am a professional
|> programmer in an area where we routinely estimate the testing effort as
|> about equal to the programming effort (in terms of staff time, but not
|> necessarily staff cost). Do you have references? As a token of sincerity
|> I will provide references for what we seem to agree is commercial
|> practice (whether it should be or not):

Regrettably not :-( I have seen references, yes, but they were
usually incidental remarks and I can't now remember exactly where I
saw them. The trouble is that they dated from the days when such
things were 'just done' and the techniques were often passed by
word of mouth, or were regarded as so obvious as to be not worth
describing.

One aspect you may be able to find is in the testing of numeric
code. One standard form of targetting is to increase the coverage
of the 'difficult' or boundary areas, because a relatively small
number of tests is adequate for the main sections. I know that
something similar has also been done in compiler test suites.

The only 'theoretical' reference I know of is to a related area:
Monte-Carlo methods. But, if you regard the problem as estimating
the number of bugs, then that is immediately applicable. I use
Hammersley and Handscomb, but that may be out of print.

Thanks for the references. There is certainly stuff dating from
the 1960s and early 1970s, but it could be the devil to track down.
NAG has certainly been doing it since it started (early 1970s),
in the numeric context.
Regards,
Nick Maclaren.
Jul 17 '05 #195
Uncle Al schrieb:
Local atmospheric pressure is 7-10 torr. Earth sea level is 760
torr.


1 torr = 4/3 hPa (hekto Pascal), for those of us who are more familiar with
the metric system.

Carsten
Jul 17 '05 #196
In article <IL******************@news.cpqcorp.net>,
"arne thormodsen" <ar***************@REMOVE.hp.com> wrote:

"A. G. McDowell" <no****@nospam.co.uk> wrote in message
news:AZ**************@mcdowella.demon.co.uk...
In article <bv**********@pegasus.csx.cam.ac.uk>, Nick Maclaren
<nm**@cus.cam.ac.uk> writes
(trimmed)
>
>That is a classic example of what I say is mistaken methodology


I hate to barge in and short-circuit a really very interesting
discussion, but
I think a point has been missed.

The SW on the Rover was designed from the start to be remotely
debugged
and patched. This is a truely wonderful thing. Why spend the effort
it takes
to design bullet-proof SW when you can spend it designing fixable SW?


.... well ... the fixability has to be bullet-proof.
Jul 17 '05 #197
"A. G. McDowell" <no****@nospam.co.uk> writes:
I would be very interested to hear more about increasing the
effectiveness of testing beyond all recognition. I am a professional
programmer in an area where we routinely estimate the testing effort
as about equal to the programming effort (in terms of staff time,
but not necessarily staff cost). Do you have references? As a token
of sincerity I will provide references for what we seem to agree is
commercial practice (whether it should be or not):


this might also be considered a characteristic difference between
platforms derived from batch oriented systems and platforms derived
from interactive oriented systems.

for 40 years or more, batch systems have tended to provide relatively
clear diagnostic information for the application owner ... since the
application owner wasn't around went their program ran; one specificly
clearly diagnosed & reported item for those 40 some year period has
been space full condition.

interactive platforms have tended to be much more laissez-faire about
providing diagnostics for such things. i've seen payroll application
ported from batch platfrom to an interactive oriented platform
.... where the sort would fail because of space full condition ... but
the error didn't get propogated appropriately thru the rest of the
infrastructure. As a result, checks got printed ... but not with
exactly the values expected. some post mortem analysis seemed to
indicate that assumptions were made about individual applications
indicating interactive error message to an human in attendance ... and
the human taking the appropriate action.

now some number of the batch platforms for possibly 20 years now
.... have had facilities that could take advantage of batch paradigm
error infrastructure and for conditions like space full ... take
automated proscriped graceful recovery actions (i.e. there is deadline
for getting checks out and can't rely on the vagaries of being able to
count on some human based mediation).

some fundamental issue about not only trying to turn out perfect code
.... but also providing an instrumented infrastructure that recognizes
errors will probably happen ... and in the absence of direct human
mediation ... other types of facilities need to be provided
(frequently a characteristic differentiation between batch-oriented
platforms and interactive-oriented platforms).

some random posts on batch vis-a-vis interactive paradigms
http://www.garlic.com/~lynn/96.html#8 Why Do Mainframes Exist ???
http://www.garlic.com/~lynn/98.html#4 VSE or MVS
http://www.garlic.com/~lynn/98.html#18 Reviving the OS/360 thread (Questions about OS/360)
http://www.garlic.com/~lynn/98.html#51 Mainframes suck? (was Re: Possibly OT: Disney Computing)
http://www.garlic.com/~lynn/99.html#16 Old Computers
http://www.garlic.com/~lynn/99.html#197 Computing As She Really Is. Was: Re: Life-Advancing Work of Timothy Berners-Lee
http://www.garlic.com/~lynn/2000.html#81 Ux's good points.
http://www.garlic.com/~lynn/2000.html#83 Ux's good points.
http://www.garlic.com/~lynn/2000f.html#58 360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)
http://www.garlic.com/~lynn/2001d.html#71 Pentium 4 Prefetch engine?
http://www.garlic.com/~lynn/2001k.html#14 HP-UX will not be ported to Alpha (no surprise)exit
http://www.garlic.com/~lynn/2001l.html#4 mainframe question
http://www.garlic.com/~lynn/2001n.html#90 Buffer overflow
http://www.garlic.com/~lynn/2002.html#1 The demise of compaq
http://www.garlic.com/~lynn/2002.html#24 Buffer overflow
http://www.garlic.com/~lynn/2002f.html#37 Playing Cards was Re: looking for information on the IBM 7090
http://www.garlic.com/~lynn/2002h.html#73 Where did text file line ending characters begin?
http://www.garlic.com/~lynn/2002n.html#41 Home mainframes
http://www.garlic.com/~lynn/2002o.html#0 Home mainframes
http://www.garlic.com/~lynn/2002o.html#14 Home mainframes
http://www.garlic.com/~lynn/2002p.html#54 Newbie: Two quesions about mainframes
http://www.garlic.com/~lynn/2003e.html#11 PDP10 and RISC
http://www.garlic.com/~lynn/2003h.html#56 The figures of merit that make mainframes worth the price
http://www.garlic.com/~lynn/2003j.html#46 Fast TCP
http://www.garlic.com/~lynn/2003n.html#46 What makes a mainframe a mainframe?
http://www.garlic.com/~lynn/2004.html#40 AMD/Linux vs Intel/Microsoft
http://www.garlic.com/~lynn/2004.html#41 AMD/Linux vs Intel/Microsoft
http://www.garlic.com/~lynn/2004.html#43 [Fwd: Re: Mainframe not a good architecture for interactive w
http://www.garlic.com/~lynn/2004.html#47 Mainframe not a good architecture for interactive workloads

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/
Internet trivia 20th anv http://www.garlic.com/~lynn/rfcietff.htm
Jul 17 '05 #198
This is all remarkably fascinating!

Just one question:

What does it have to do with Java?

Thanks,
Greg

Edward Green wrote:
Uncle Al <Un******@hate.spam.net> wrote in message news:<40***************@hate.spam.net>...
mitch wrote:
Uncle Al wrote:

Local atmospheric pressure is 7-10 torr. Earth sea level is 760
torr. How many planes do you know that cruise at 100,000 feet absent
any oxygen at all? Martian aircraft are a bad dream.

Hmm. Then the test of a Mars glider plane back in August of 2001 was
just a bad dream? ;-) Work has begun on a propellered version of the
glider cited below. Enjoy.

AMES COMPLETES SUCCESSFUL TEST OF MARS AIRPLANE PROTOTYPE


The empirical fact is that lowland Martian air pressure is 7-10 torr.
The is equivalent to 120,000-100,000 feet terrestrial altitude.

Read the article: the glider was released at 101,000 feet.

If
the silly thing will be diddling at even 1000 ft altitude Martian, the
air will be thinner.

Martian gravity is less, hence the pressure relative pressure
difference between 0 and 1000 feet will be less than that on Earth:
less than 5%.

"Ye canna break the laws of physics."

The Concorde flew at 60,000 feet and gulped air like a madman. The
U-2 did 75,000 feet, breathed air, and it was a bitch to fly. The
SR-71 Blackbird could barely do 100,000 feet while at Mach 3+ with its
cockpit windshield simmering at 620 F. It drank 8000 gallons/hr of
fuel. It breathed 6 million ft^3 of air/minute.

Al ... organizational bashing is fun and rewarding, but must be taken
with up with taste. Sending flawed subtly mirrors into space while
good ones sit in storage, and launching on colder and colder days
until disaster strikes: these are both errors of judgement well within
the capability of the political machine. But making fundamental
science errors in the preliminary design stages, and saying something
(whose gross design parameters are available to anybody willing to
take the time to look) can work when it not only can't but, according
to you, grossly can't?

That is down at the 5 sigma tail of Bayesian probability, and you know
it.

Of your three examples, only the U-2 is remotely relevant, since it
was essentially a powered glider; and it did not gulp air and fuel,
which you seem fixated on. Who the hell said anything about
air-breathing flight, anyway?

The basic principles and parameters are well known: you have your
Martian atmosphere, you have your structural requirements, you have
your power requirements, you have your known solar cell efficiency.
The engineering either comes together or it doesn't. Have you run the
figures? The issue is whether you can build a large enough and light
enough airframe to move enough rarefied gas to generate sufficient
lift to sustain flight at a drag sustainable by some reasonable power
make-up from solar cells. There are people who could do this on the
back of an envelope.

No ... I haven't run the calculations either. But knowing that high
altitude long dwell time solar powered sail planes have been seriously
considered on Earth, that flight costs less power with slower flight
and larger lifting area, knowing the experience with very light weight
miniminally powered structures accumulated by the human-powered flight
school ... all this give credibility to the idea and tends to suggest
that Al is making an ill-considered shot from the hip, as usual.

And this is not to mention aerostats ... you may have noticed also how
the test glider was carried to 101,000 ft? I suppose that was a
physical impossibility too?

Jul 17 '05 #199

This thread has been closed and replies have been disabled. Please start a new discussion.

Similar topics

33
by: jacob navia | last post by:
Mankind has now two robots wandering about in the planet Mars. No, it wasn't Mars that invaded Earth, but Earth that landed in Mars more than two years ago. After some initial OS glitches (Chris...
0
by: Hystou | last post by:
There are some requirements for setting up RAID: 1. The motherboard and BIOS support RAID configuration. 2. The motherboard has 2 or more available SATA protocol SSD/HDD slots (including MSATA, M.2...
0
marktang
by: marktang | last post by:
ONU (Optical Network Unit) is one of the key components for providing high-speed Internet services. Its primary function is to act as an endpoint device located at the user's premises. However,...
0
by: Hystou | last post by:
Most computers default to English, but sometimes we require a different language, especially when relocating. Forgot to request a specific language before your computer shipped? No problem! You can...
0
Oralloy
by: Oralloy | last post by:
Hello folks, I am unable to find appropriate documentation on the type promotion of bit-fields when using the generalised comparison operator "<=>". The problem is that using the GNU compilers,...
0
jinu1996
by: jinu1996 | last post by:
In today's digital age, having a compelling online presence is paramount for businesses aiming to thrive in a competitive landscape. At the heart of this digital strategy lies an intricately woven...
1
by: Hystou | last post by:
Overview: Windows 11 and 10 have less user interface control over operating system update behaviour than previous versions of Windows. In Windows 11 and 10, there is no way to turn off the Windows...
0
agi2029
by: agi2029 | last post by:
Let's talk about the concept of autonomous AI software engineers and no-code agents. These AIs are designed to manage the entire lifecycle of a software development project—planning, coding, testing,...
0
by: adsilva | last post by:
A Windows Forms form does not have the event Unload, like VB6. What one acts like?
0
by: 6302768590 | last post by:
Hai team i want code for transfer the data from one system to another through IP address by using C# our system has to for every 5mins then we have to update the data what the data is updated ...

By using Bytes.com and it's services, you agree to our Privacy Policy and Terms of Use.

To disable or enable advertisements and analytics tracking please visit the manage ads & tracking page.