
Mars Rover Controlled By Java

Java, the software platform developed by Sun Microsystems in the mid-1990s
to run Internet applications on any operating system, gave NASA a low-cost
and easy-to-use option for running Spirit, the robotic rover that rolled
onto the planet's surface on Thursday in search of signs of water and life.

http://news.com.com/2100-1007_3-5142...l?tag=nefd_top

l8r, Mike N. Christoff

Jul 17 '05
Yoyoma_2 wrote:
Tony Hill wrote:
On 29 Jan 2004 16:20:29 -0800, br*****@my-deja.com (Bruce Bowen)
wrote:
Uncle Al <Un******@hate.spam.net> wrote in message news:<40**************@hate.spam.net>...

This is a trivial engineering problem to address. Encase the HD in a
larger, sealed and pressurized container, with enough surface area and/or
internal air circulation to keep it cool. A low-power 2.5" HD
shouldn't take that much larger of a container. What about the flash-sized
microdrives?

Yeah, and the 4+ Gs that the drive would experience during take-off
would do wonders for that drive! Not to mention the high levels of
radiation in space would probably fry any drive (to the best of my
knowledge, no one makes rad-hardened hard drives).


I actually know of some rad-proofed drives. Actually, I hear there are
a lot of them; they are mostly used in the military (mil-spec drives).
For example, hard drives on an aircraft carrier have to be able to take
a direct nuclear assault and still function. Little piece of Cold War
trivia for you :)

Rad-proofing isn't a very big deal.

According to this page I just pulled up at random
(http://www.westerndigital.com/en/pro...sp?DriveID=41),
normal desktop "run of the mill" drives can take impulses of up to 250G
when non-operating, and 60G while operating at a delta-t of 2 seconds.
That's pretty good.

So if that's for ordinary hard drives, imagine a mil-spec drive. I
doubt carriers kept all of their data on flash memory in the 1970s.
Even if they used mag tape reels, it implies that they had developed some
sort of rad-proofing for it.

Just stick everything on a disk-on-chip, much easier and cheaper than
trying to jerry-rig some sort of hard drive contraption.

-------------
Tony Hill
hilla <underscore> 20 <at> yahoo <dot> ca

Jul 17 '05 #181


Yoyoma_2 wrote:
Tony Hill wrote:
On 29 Jan 2004 16:20:29 -0800, br*****@my-deja.com (Bruce Bowen)
wrote:
Uncle Al <Un******@hate.spam.net> wrote in message
news:<40**************@hate.spam.net>...

This is a trivial engineering problem to address. Encase the HD in a
larger, sealed and pressurized container, with enough surface area and/or
internal air circulation to keep it cool. A low-power 2.5" HD
shouldn't take that much larger of a container. What about the flash-sized
microdrives?


Yeah, and the 4+ Gs that the drive would experience during take-off
would do wonders for that drive! Not to mention the high levels of
radiation in space would probably fry any drive (to the best of my
knowledge, no one makes rad-hardened hard drives).

I actually know of some rad-proofed drives. Actually, I hear there are
a lot of them; they are mostly used in the military (mil-spec drives).
For example, hard drives on an aircraft carrier have to be able to take
a direct nuclear assault and still function. Little piece of Cold War
trivia for you :)

Rad-proofing isn't a very big deal.

According to this page I just pulled up at random
(http://www.westerndigital.com/en/pro...sp?DriveID=41),
normal desktop "run of the mill" drives can take impulses of up to 250G
when non-operating, and 60G while operating at a delta-t of 2 seconds.
That's pretty good.

So if that's for ordinary hard drives, imagine a mil-spec drive. I
doubt carriers kept all of their data on flash memory in the 1970s. Even
if they used mag tape reels, it implies that they had developed some sort
of rad-proofing for it.


And the 4Gs thing is a non-issue. Most normal ATA drives can take around
300Gs when not operating or 30Gs when running without damage.

Jul 17 '05 #182
Howdy Edward
I am almost flabbergasted into textlessness. The fact that a system
... any system, not just a computer ... may work correctly in some one
or two delta range yet fail in some 10 or 20 delta range ... some
newbie tyro university graduate wet behind the ears neophyte kid might
make this mistake in a small project, and the old seasoned pro salt
seen it all manager would take this as a teaching opportunity. But in
an entire organization, a huge project putting a robot on a distant
planet, and not once did this occur to anybody!?


Yep, you nailed it.

My 5-word software testing book: Run 'er Hard & Long

-- stan


Jul 17 '05 #183
Re: Mars Rover Not Responding!
Maybe it's your cologne, you Martian perv!
Jul 17 '05 #184
In article <I4************************@twister2.starband.net>,
Stanley Krute <St**@StanKrute.com> wrote:
Howdy Edward
I am almost flabbergasted into textlessness. The fact that a system
... any system, not just a computer ... may work correctly in some one
or two delta range yet fail in some 10 or 20 delta range ... some
newbie tyro university graduate wet behind the ears neophyte kid might
make this mistake in a small project, and the old seasoned pro salt
seen it all manager would take this as a teaching opportunity. But in
an entire organization, a huge project putting a robot on a distant
planet, and not once did this occur to anybody!?


Yep, you nailed it.

My 5-word software testing book: Run 'er Hard & Long


Sigh. That is very likely the CAUSE of the problem :-(

Any particular test schedule (artificial or natural) will create a
distribution of circumstances over the space of all that are handled
differently by the program. And, remember, we are talking about a
space of cardinality 10^(10^4) to 10^(10^8). Any particular, broken
logic (usually a combination of sections of code and data) may be
invoked only once every millennium, or perhaps never.

Now, change the test schedule in an apparently trivial way, or use
the program for real, and that broken logic may be invoked once a
day. Ouch. Incidentally, another way of looking at this is the
probability of distinguishing two finite state automata by feeding
in test strings and comparing the results. It was studied some
decades back, and the conclusions are not pretty.
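
To make that concrete, here is a small Java sketch. The FlightSoftware class, the MAX_FILES limit and the operation mix are invented for illustration (they are not taken from the rover code); the point is only that a fault guarded by accumulated state is nearly invisible to a test schedule that resets state between short runs, yet fires within weeks under a schedule that lets state build up.

import java.util.Random;

// Sketch only: a fault guarded by accumulated state. Names, threshold and
// probabilities are invented for illustration.
class FlightSoftware {
    private int filesOnFlash = 0;               // state that accumulates across operations
    private static final int MAX_FILES = 1000;  // designed-for limit

    // One operation that may or may not leave a file behind on flash.
    void doOperation(boolean leavesFile) {
        if (leavesFile) {
            filesOnFlash++;
        }
        if (filesOnFlash > MAX_FILES) {
            // The "broken logic": reachable only after enough state has built up.
            throw new IllegalStateException("file table overflow at " + filesOnFlash + " files");
        }
    }

    void reset() { filesOnFlash = 0; }
}

public class TestScheduleDemo {
    public static void main(String[] args) {
        Random rnd = new Random(42);

        // Schedule A: "lab" testing. State is reset between short runs,
        // so the threshold is never approached and the fault never fires.
        FlightSoftware lab = new FlightSoftware();
        for (int run = 0; run < 10000; run++) {
            for (int op = 0; op < 50; op++) {
                lab.doOperation(rnd.nextDouble() < 0.3);
            }
            lab.reset();                          // fresh state every run
        }
        System.out.println("Schedule A: no failure in 10000 short runs");

        // Schedule B: "mission" usage. Same operation mix, but state
        // accumulates day after day, and the fault fires fairly soon.
        FlightSoftware mission = new FlightSoftware();
        int day = 0;
        try {
            while (true) {
                day++;
                for (int op = 0; op < 50; op++) {
                    mission.doOperation(rnd.nextDouble() < 0.3);
                }
            }
        } catch (IllegalStateException e) {
            System.out.println("Schedule B: failed on day " + day + ": " + e.getMessage());
        }
    }
}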

The modern, unsystematic approach to testing is hopeless as an
engineering technique, though fine as a political or marketing one.
For high-reliability codes, we need to go back to the approaches
used in the days when most computer people were also mathematicians,
engineers or both.
Regards,
Nick Maclaren.
Jul 17 '05 #185
Howdy Nick
The modern, unsystematic approach to testing is hopeless as an
engineering technique, though fine as a political or marketing one.
For high-reliability codes, we need to go back to the approaches
used in the days when most computer people were also mathematicians,
engineers or both.


Deep agreement as to importance of math-smart testing.

-- stan
Jul 17 '05 #186
nm**@cus.cam.ac.uk (Nick Maclaren) wrote in message news:<bv**********@pegasus.csx.cam.ac.uk>...
In article <I4************************@twister2.starband.net>,
Stanley Krute <St**@StanKrute.com> wrote:
My 5-word software testing book: Run 'er Hard & Long


Sigh. That is very likely the CAUSE of the problem :-(

Any particular test schedule (artificial or natural) will create a
distribution of circumstances over the space of all that are handled
differently by the program. And, remember, we are talking about a
space of cardinality 10^(10^4) to 10^(10^8). Any particular, broken
logic (usually a combination of sections of code and data) may be
invoked only once every millennium, or perhaps never.

Now, change the test schedule in an apparently trivial way, or use
the program for real, and that broken logic may be invoked once a
day. Ouch. Incidentally, another way of looking at this is the
probability of distinguishing two finite state automata by feeding
in test strings and comparing the results. It was studied some
decades back, and the conclusions are not pretty.

The modern, unsystematic approach to testing is hopeless as an
engineering technique, though fine as a political or marketing one.
For high-reliability codes, we need to go back to the approaches
used in the days when most computer people were also mathematicians,
engineers or both.


Going back to the case at hand: it still sounds to me like the stated
cause of the bug -- more files were written to flash memory than were
anticipated in design -- is, at least retrospectively, obviously a
possibility which should have been addressed at the design stage, and
maybe should have been prospectively obvious as well.

I'm not quite sure how to square this with your theoretical insight that
we are searching a space of a cardinality of 10 followed by many, many
zeroes, and similar observations which make the problem sound
hopeless: maybe I'm naively wrong, or not (well, that covers all the
possibilities ;-).

Is asking that when the program exhausts some resource it handles the
problem in some minimally damaging way really an impossibly hard
problem in the space of all the impossibly hard problems with which
computer science abounds, or is it merely a challenging but tractable
engineering problem?

For example, suppose we had a machine running around some play pen,
and the space of possible joint states of the machine and the play pen
were of cardinality 10 followed by some humongous number of zeroes.
And suppose that when the machine leaves the play pen, that is a
"crash". Now, we might ask why the machine crashed, and the designer
might respond with language about the cardinality of the joint state
space, and the impossibility of complete testing. But now we might
ask why he did not put a _fence_ around the play pen, and that answer
is no longer sufficient, and the answer "well, we let it run around
for a while, and it didn't seem likely to cross the boundaries, so we
didn't bother with a fence" is marginal.

Is the problem of building a number of internal fences into complex
systems, sufficient to provide timely alerts to unanticipated operating
conditions, itself an intractably hard problem, or merely hard?
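
For what it is worth, the fence idea is cheap to express in code. The Java sketch below is invented for illustration (FlashStore, its capacity and its shedding policy are not how the rover actually works): once a hard limit is reached it refuses or sheds low-value entries and counts what it dropped, so exhaustion is detected and contained at one boundary instead of surfacing wherever it happens to.

import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of an "internal fence": the store never grows past a hard limit.
// Non-critical writes are refused at the limit; critical writes shed the
// oldest entry to make room. All names and numbers are invented.
class FlashStore {
    private static final int CAPACITY = 256;
    private final Deque<String> files = new ArrayDeque<>();
    private int droppedFiles = 0;

    // Returns true if the file was stored, false if it was refused.
    boolean writeFile(String name, boolean critical) {
        if (files.size() >= CAPACITY) {
            if (!critical) {
                droppedFiles++;          // fence: refuse, record, carry on
                return false;
            }
            files.pollFirst();           // fence: shed the oldest entry instead
            droppedFiles++;
        }
        files.addLast(name);
        return true;
    }

    int dropped() { return droppedFiles; }
}

public class FenceDemo {
    public static void main(String[] args) {
        FlashStore store = new FlashStore();
        for (int i = 0; i < 1000; i++) {
            store.writeFile("telemetry-" + i, false);   // bulk, low-value data
        }
        store.writeFile("fault-log", true);             // critical data still gets in
        System.out.println("entries dropped so far: " + store.dropped());
    }
}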
Jul 17 '05 #187
In article <2a**************************@posting.google.com>,
Edward Green <nu*******@aol.com> wrote:

Going back to the case at hand: it still sounds to me like the stated
cause of the bug -- more files were written to flash memory than were
anticipated in design -- is, at least retrospectively, obviously a
possibility which should have been addressed at the design stage, and
maybe should have been prospectively obvious as well.

Perhaps. Without investigating the problem carefully, it is impossible
to tell.

I'm not quite sure how to square this with your theoretical insight that
we are searching a space of a cardinality of 10 followed by many, many
zeroes, and similar observations which make the problem sound
hopeless: maybe I'm naively wrong, or not (well, that covers all the
possibilities ;-).

Is asking that when the program exhausts some resource it handles the
problem in some minimally damaging way really an impossibly hard
problem in the space of all the impossibly hard problems with which
computer science abounds, or is it merely a challenging but tractable
engineering problem?


There are several aspects here. Minimising damage IS an impossibly
hard problem (not just exponentially expensive, but insoluble). But
that is often used as an excuse to avoid even attempting to constrain
the consequent damage. Think of it this way.

Identifying and logging resource exhaustion takes a fixed time, so
there is no excuse not to do it. Yes, that can exhaust space in the
logging files, so there is a recursive issue, but there are known
partial solutions to that.
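
One of those known partial solutions is simply to give the log a fixed footprint: a circular buffer of pre-allocated slots, so that recording a resource-exhaustion event can never itself run out of space. A rough Java sketch, with the slot count and record format invented:

// Fixed-footprint event log: a circular buffer of slots allocated up front,
// so logging an exhaustion event cannot itself exhaust storage. The slot
// count and the record format are invented for illustration.
class BoundedEventLog {
    private final String[] slots;
    private long totalEvents = 0;    // counts every event, including overwritten ones

    BoundedEventLog(int capacity) {
        this.slots = new String[capacity];
    }

    void record(String event) {
        slots[(int) (totalEvents % slots.length)] = event;   // oldest entry is overwritten
        totalEvents++;
    }

    long total()   { return totalEvents; }
    int retained() { return (int) Math.min(totalEvents, slots.length); }
}

public class BoundedLogDemo {
    public static void main(String[] args) {
        BoundedEventLog log = new BoundedEventLog(64);
        for (int i = 0; i < 10000; i++) {
            log.record("flash write refused, attempt " + i);
        }
        // 10000 events recorded, yet the log never grew beyond its 64 slots.
        System.out.println(log.total() + " events, " + log.retained() + " retained");
    }
}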

Identifying the cause is simple if there is one factor, harder if
there are two, and so on. To a great extent, that is also true of
predicting the resources needed, but that can be insoluble even with
one factor. This is confused by the fact that, the fewer the factors,
the more likely a bug is to be removed in initial testing.

Most of my bug-tracking time is spent on bugs with 3-10 factors, on
a system with (say) 100-1,000 relevant factors. It isn't surprising
that the vendors' testing has failed to detect them. There are only
two useful approaches to such issues:

1) To design the system using a precise mathematical model, so
that you can eliminate, minimise or constrain interactions. This
also needs PRECISE interface specifications, of course, not the sloppy
rubbish that is almost universal.

2) To provide good detection and diagnostic facilities, to help
locate the causes and effects of such problems. This is even more
neglected nowadays, and is of limited help for systems like Mars
missions.
Regards,
Nick Maclaren.
Jul 17 '05 #188
In article <bv**********@pegasus.csx.cam.ac.uk>, Nick Maclaren
<nm**@cus.cam.ac.uk> writes
In article <I4************************@twister2.starband.net>,
Stanley Krute <St**@StanKrute.com> wrote:
Howdy Edward
I am almost flabbergasted into textlessness. The fact that a system
... any system, not just a computer ... may work correctly in some one
or two delta range yet fail in some 10 or 20 delta range ... some
newbie tyro university graduate wet behind the ears neophyte kid might
make this mistake in a small project, and the old seasoned pro salt
seen it all manager would take this as a teaching opportunity. But in
an entire organization, a huge project putting a robot on a distant
planet, and not once did this occur to anybody!?


Yep, you nailed it.

My 5-word software testing book: Run 'er Hard & Long


Sigh. That is very likely the CAUSE of the problem :-(

Any particular test schedule (artificial or natural) will create a
distribution of circumstances over the space of all that are handled
differently by the program. And, remember, we are talking about a
space of cardinality 10^(10^4) to 10^(10^8). Any particular, broken
logic (usually a combination of sections of code and data) may be
invoked only once every millennium, or perhaps never.

Now, change the test schedule in an apparently trivial way, or use
the program for real, and that broken logic may be invoked once a
day. Ouch. Incidentally, another way of looking at this is the
probability of distinguishing two finite state automata by feeding
in test strings and comparing the results. It was studied some
decades back, and the conclusions are not pretty.

The modern, unsystematic approach to testing is hopeless as an
engineering technique, though fine as a political or marketing one.
For high-reliability codes, we need to go back to the approaches
used in the days when most computer people were also mathematicians,
engineers or both.
Regards,
Nick Maclaren.

I agree that more could be done before thorough testing, but I would not
attempt to replace even random and soak testing. Formal methods alone
will never be enough because they prove only the correctness of a
specification implemented by a model, not that the specification or the
model are accurate enough representations of the real world.

The speed with which NASA have claimed to replicate the problem suggests
something widely enough spread through the state space to be replicable
by soak testing. Furthermore, a planetary probe is a pretty good match to
even random testing, because (given the relative costs of putting
something on Mars and of conducting automatic testing in simulation or in
a warehouse) it may be possible to run for longer in test than in real
life, reducing the chance of bugs that show up only in real life.
Example: the priority inversion bug that hit a previous probe had
apparently shown up in testing but been ignored, because it wasn't what
they were looking for at the time.

My impression of current good practice is that black box testing, white
box testing, and code review are good at finding different sorts of bugs,
and so should be used together. I would lump pre-testing approaches into
code review.
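
For readers who have not met it, the priority inversion mentioned above follows this pattern: a low-priority task holds a lock that a high-priority task needs, while a medium-priority task keeps the CPU busy, so the high-priority task misses its deadline even though nothing it depends on has failed. The Java sketch below (task names and timings invented) only models the shape of the problem; whether inversion is actually observable depends on the scheduler and the number of cores, and standard Java locks offer no priority inheritance, which is the mechanism reportedly used to fix the Pathfinder case on VxWorks.

import java.util.concurrent.locks.ReentrantLock;

// Structural sketch of priority inversion (all names and timings invented).
// On a single core with strict priority scheduling, HIGH can be starved:
// LOW holds the lock, MEDIUM hogs the CPU, so LOW never runs to release it.
// Desktop JVMs and OSes rarely schedule strictly by priority, so treat this
// as an illustration rather than a reliable reproduction.
public class PriorityInversionSketch {
    private static final ReentrantLock busLock = new ReentrantLock();

    public static void main(String[] args) throws InterruptedException {
        Thread low = new Thread(() -> {
            busLock.lock();                 // LOW grabs the shared resource
            try {
                spin(200);                  // ... and is slow to finish with it
            } finally {
                busLock.unlock();
            }
        }, "LOW");
        low.setPriority(Thread.MIN_PRIORITY);

        Thread medium = new Thread(() -> spin(500), "MEDIUM");   // pure CPU hog
        medium.setPriority(Thread.NORM_PRIORITY);

        Thread high = new Thread(() -> {
            long start = System.nanoTime();
            busLock.lock();                 // HIGH blocks until LOW releases
            busLock.unlock();
            System.out.printf("HIGH waited %.0f ms for the lock%n",
                    (System.nanoTime() - start) / 1e6);
        }, "HIGH");
        high.setPriority(Thread.MAX_PRIORITY);

        low.start();
        Thread.sleep(10);                   // let LOW take the lock first
        medium.start();
        high.start();

        low.join();
        medium.join();
        high.join();
    }

    // Busy-wait for roughly the given number of milliseconds.
    private static void spin(long millis) {
        long end = System.nanoTime() + millis * 1000000L;
        while (System.nanoTime() < end) { /* burn CPU */ }
    }
}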
--
A. G. McDowell
Jul 17 '05 #189
In article <XB**************@mcdowella.demon.co.uk>,
A. G. McDowell <no****@nospam.co.uk> wrote:

I agree that more could be done before thorough testing, but I would not
attempt to replace even random and soak testing. Formal methods alone
will never be enough because they prove only the correctness of a
specification implemented by a model, not that the specification or the
model are accurate enough representations of the real world. The speed
with which NASA have claimed to replicate the problem suggests something
widely enough spread through the state space to be replicable by soak
testing. Furthermore, a planetary probe is a pretty good match to even
random testing, because (given the relative costs of putting something
on Mars and of conducting automatic testing in simulation or in a
warehouse) it may be possible to run for longer in test than in real
life, reducing the chance of bugs that show up only in real life.
Example: the priority inversion bug that hit a previous probe had
apparently shown up in testing but been ignored, because it wasn't what
they were looking for at the time. My impression of current good
practice is that black box testing, white box testing, and code review
are good at finding different sorts of bugs, and so should be used
together. I would lump pre-testing approaches into code review.


That is a classic example of what I call mistaken methodology!
Yes, pretty well everything that you say is true, but you have missed
the fact that interactions WILL change the failure syndromes in ways
that mean untargeted testing will miss even the most glaringly
obvious errors. There are just TOO MANY possible combinations of
conditions to rely on random testing.

For a fairly simple or pervasive problem, unintelligent 'soak' testing
will work. For a more complex one, it won't. Unless you target the
testing fairly accurately, certain syndromes will either not occur or
be so rare as not to happen in a month of Sundays. I see this problem
on a daily basis :-(

An aspect of a mathematical design that I did not state explicitly (but
only hinted at) is that you can identify areas where you can check that
the code matches the model, and other areas where the analysis descends
from mathematics to hand-waving. You can then design precise tests for
the former, and targeted soak tests for the latter. It isn't uncommon
for such an approach to increase the effectiveness of testing beyond
all recognition.
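
A toy illustration of the difference, with the suspect limit of 1000 files invented and standing in for a boundary identified by such an analysis: a uniform generator almost never exercises the region around the limit, while a generator that deliberately biases half of its samples toward that boundary hits it constantly, for the same testing budget.

import java.util.Random;

// Sketch of targeted vs. untargeted input generation for a soak test.
// The "suspect" boundary (a file-count limit of 1000) is invented; in
// practice it would come out of the design analysis described above.
public class TargetedSoakSketch {
    private static final int LIMIT = 1000;
    private static final Random RND = new Random(7);

    // Untargeted: uniform over the whole range, so boundary cases are rare.
    static int uniformFileCount() {
        return RND.nextInt(10 * LIMIT);
    }

    // Targeted: half the samples land within +/-5 of the suspect boundary.
    static int targetedFileCount() {
        if (RND.nextBoolean()) {
            return LIMIT - 5 + RND.nextInt(11);
        }
        return RND.nextInt(10 * LIMIT);
    }

    public static void main(String[] args) {
        int uniformHits = 0, targetedHits = 0;
        for (int i = 0; i < 100000; i++) {
            if (Math.abs(uniformFileCount() - LIMIT) <= 5) uniformHits++;
            if (Math.abs(targetedFileCount() - LIMIT) <= 5) targetedHits++;
        }
        System.out.println("near-boundary cases, uniform generator:  " + uniformHits);
        System.out.println("near-boundary cases, targeted generator: " + targetedHits);
    }
}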
Regards,
Nick Maclaren.
Jul 17 '05 #190

This thread has been closed and replies have been disabled. Please start a new discussion.
