Bytes | Software Development & Data Engineering Community

Making Fatal Hidden Assumptions

We often find hidden, and totally unnecessary, assumptions being
made in code. The following leans heavily on one particular
example, which happens to be in C. However similar things can (and
do) occur in any language.

These assumptions are generally made because of familiarity with
the language. As a non-code example, consider the idea that the
faulty code is written by blackguards bent on fouling the
language. The term blackguards is not in favor these days, and for
good reason. However, the older you are, the more likely you are
to have used it since childhood, and to use it again, barring
specific thought on the subject. The same type of thing applies to
writing code.

I hope, with this little monograph, to encourage people to examine
some hidden assumptions they are making in their code. As ever, in
dealing with C, the reference standard is the ISO C standard.
Versions can be found in text and PDF format by searching for N869
and N1124. [1] The latter has no text version, but is more
up to date.

We will always have innocent-appearing code with these kinds of
assumptions built in. However, it would be wise to annotate such
code to make the assumptions explicit, which can avoid a great deal
of agony when the code is reused under other systems.

In the following example, the code is as downloaded from the
referenced URL, and the comments are entirely mine, including the
'every 5' line-number references.

/* Making fatal hidden assumptions */
/* Paul Hsieh's version of strlen.
http://www.azillionmonkeys.com/qed/asmexample.html

Some sneaky hidden assumptions here:
1. p = s - 1 is valid. Not guaranteed. Careless coding.
2. cast (int) p is meaningful. Not guaranteed.
3. Use of 2's complement arithmetic.
4. ints have no trap representations or hidden bits.
5. 4 == sizeof(int) && 8 == CHAR_BIT.
6. size_t is actually int.
7. sizeof(int) is a power of 2.
8. int alignment depends on a zeroed bit field.

Since strlen is normally supplied by the system, the system
designer can guarantee all but item 1. Otherwise this is
not portable. Item 1 can probably be beaten by suitable
code reorganization to avoid the initial p = s - 1. This
is a serious bug which, for example, can cause segfaults
on many systems. It is most likely to foul when (int)s
has the value 0, and is meaningful.

He fails to make the valid assumption: 1 == sizeof(char).
*/

#define hasNulByte(x) ((x - 0x01010101) & ~x & 0x80808080)
#define SW (sizeof (int) / sizeof (char))

int xstrlen (const char *s) {
const char *p; /* 5 */
int d;

p = s - 1;
do {
p++; /* 10 */
if ((((int) p) & (SW - 1)) == 0) {
do {
d = *((int *) p);
p += SW;
} while (!hasNulByte (d)); /* 15 */
p -= SW;
}
} while (*p != 0);
return p - s;
} /* 20 */

Let us start with line 1! The constants appear to require that
sizeof(int) be 4, and that CHAR_BIT be precisely 8. I haven't
really looked too closely, and it is possible that the ~x term
allows for larger sizeof(int), but nothing allows for larger
CHAR_BIT. A further hidden assumption is that there are no trap
values in the representation of an int. Its functioning is
doubtful when sizeof(int) is less than 4. At the least it will
force promotion to long, which will seriously affect the speed.

This is an ingenious and speedy way of detecting a zero byte within
an int, provided the preconditions are met. There is nothing wrong
with it, PROVIDED we know when it is valid.
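To see how the trick behaves when the preconditions hold, here is a minimal sketch of mine (not from the original post): the same expression, but written with a fixed-width unsigned type so the 4 == sizeof(int) and 8 == CHAR_BIT preconditions are explicit rather than hidden.

```c
#include <stdint.h>

/* hasNulByte rewritten with a fixed-width unsigned type, so the
   4-byte, 8-bit-char preconditions are explicit and the subtraction
   cannot overflow a signed int.  The result is nonzero exactly when
   some byte of x is zero. */
static uint32_t has_nul_byte(uint32_t x)
{
    return (x - 0x01010101u) & ~x & 0x80808080u;
}
```

The subtraction borrows into a byte's high bit when that byte was zero, and the & ~x term discards bytes whose own high bit was already set, so 0x80808080 picks out exactly the zero bytes. For example, has_nul_byte(0x41424344) ("ABCD") is zero, while has_nul_byte(0x41420044) is not.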

In line 2 we have the confusing use of sizeof(char), which is 1 by
definition. This just serves to obscure the fact that SW is
actually sizeof(int) later. No hidden assumptions have been made
here, but the usage helps to conceal later assumptions.

Line 4. Since this is intended to replace the system's strlen()
function, it would seem advantageous to use the appropriate
signature for the function. In particular strlen returns a size_t,
not an int. size_t is always unsigned.
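As a baseline, a version with the standard signature and none of the hidden assumptions is trivial (my sketch, not from the post), at the cost of losing the word-at-a-time speed trick:

```c
#include <stddef.h>

/* The signature the standard gives strlen: it returns size_t, not
   int.  This byte-at-a-time version makes no hidden assumptions. */
size_t plain_strlen(const char *s)
{
    const char *p = s;

    while (*p != '\0')
        p++;
    return (size_t)(p - s);
}
```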

In line 8 we come to a biggie. The standard specifically does not
guarantee the action of a pointer below an object. The only real
purpose of this statement is to compensate for the initial
increment in line 10. This can be avoided by rearrangement of the
code, which will then let the routine function where the
assumptions are valid. This is the only real error in the code
that I see.
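One possible reorganization (a sketch of mine, not Hsieh's code) starts p at s and increments at the bottom of the loop, so no pointer below the object is ever formed. The other assumptions, word size, alignment bits, and reading chars through a word pointer, are deliberately left in place; only item 1 is fixed here, though unsigned arithmetic and C99's uintptr_t are used to avoid the signed-overflow and pointer-cast issues discussed below.

```c
#include <stddef.h>
#include <stdint.h>

/* Unsigned constants so the subtraction cannot overflow a signed int. */
#define hasNulByte(x) (((x) - 0x01010101u) & ~(x) & 0x80808080u)
#define SW sizeof(unsigned)

size_t ystrlen(const char *s)
{
    const char *p = s;          /* never moves below s */

    for (;;) {
        if (((uintptr_t)p & (SW - 1)) == 0) {   /* word-aligned? */
            unsigned d;
            do {
                d = *(const unsigned *)p;       /* still assumes this
                                                   read is permitted */
                p += SW;
            } while (!hasNulByte(d));
            p -= SW;            /* back up into the word with the nul */
        }
        if (*p == 0)
            return (size_t)(p - s);
        p++;
    }
}
```

This still assumes 4 == sizeof(unsigned), 8 == CHAR_BIT, and that aligned pointers have zero low bits; it only removes the formation of s - 1, and defines the return type as size_t while it is at it.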

In line 11 we have several hidden assumptions. The first is that
the cast of a pointer to an int is valid. This is never
guaranteed. A pointer can be much larger than an int, and may have
all sorts of non-integer like information embedded, such as segment
id. If sizeof(int) is less than 4 the validity of this is even
less likely.

Then we come to the purpose of the statement, which is to discover
if the pointer is suitably aligned for an int. It does this by
bit-anding with SW-1, which is the concealed sizeof(int)-1. This
won't be very useful if sizeof(int) is, say, 3 or any other
non-power-of-two. In addition, it assumes that an aligned pointer
will have those bits zero. While this last is very likely in
today's systems, it is still an assumption. The system designer is
entitled to assume this, but user code is not.
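In C99 the least-bad spelling of this test uses uintptr_t from <stdint.h> rather than int (a sketch of mine): the pointer-to-integer conversion is then well-defined for any valid pointer, though the standard still does not promise that the low bits of the result reflect alignment, so the low-bits-zero caveat above stands.

```c
#include <stdint.h>

/* Alignment test via uintptr_t: the conversion itself is defined,
   unlike the cast to int.  That aligned pointers convert to values
   with zero low bits remains an implementation assumption. */
static int word_aligned(const void *p)
{
    return ((uintptr_t)p & (sizeof(int) - 1)) == 0;
}
```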

Line 13 again uses the unwarranted cast of a pointer to an int.
This enables the use of the already suspicious macro hasNulByte in
line 15.

If all these assumptions are correct, line 19 finally calculates a
pointer difference (which is valid, and of the signed type
ptrdiff_t). It then does a concealed conversion of this into an
int, which could cause undefined or implementation-defined
behaviour if the value exceeds what will fit into an int.
This one is also unnecessary, since it is trivial to define the
return type as size_t and guarantee success.

I haven't even mentioned the assumption of 2's complement
arithmetic, which I believe to be embedded in the hasNulByte
macro. I haven't bothered to think this out.

Would you believe that so many hidden assumptions can be embedded
in such innocent looking code? The sneaky thing is that the code
appears trivially correct at first glance. This is the stuff that
Heisenbugs are made of. Yet use of such code is fairly safe if we
are aware of those hidden assumptions.

I have cross-posted this without setting follow-ups, because I
believe that discussion will be valid in all the newsgroups posted.

[1] The draft C standards can be found at:
<http://www.open-std.org/jtc1/sc22/wg14/www/docs/>

--
"If you want to post a followup via groups.google.com, don't use
the broken "Reply" link at the bottom of the article. Click on
"show options" at the top of the article, then click on the
"Reply" at the bottom of the article headers." - Keith Thompson
More details at: <http://cfaj.freeshell.org/google/>
Also see <http://www.safalra.com/special/googlegroupsreply/>

Mar 6 '06
Stephen Sprunk wrote:
.... snip ...
Why IBM did it that way, I'm not sure, but my guess is they found
it was cheaper to do validity/permission checks when the address
was loaded than when it was used since the latter has a latency
impact.


A single pointer check can validate the pointer for multiple
dereferences. This is much cheaper than checking it at each
dereference.

--
"Churchill and Bush can both be considered wartime leaders, just
as Secretariat and Mr Ed were both horses." - James Rhodes.
"We have always known that heedless self-interest was bad
morals. We now know that it is bad economics" - FDR
Mar 22 '06 #281
On Wed, 22 Mar 2006 03:17:38 +0000, Dik T. Winter wrote:
In article <sl***********************@random.yi.org> Jordan Abel <ra*******@gmail.com> writes:
> On 2006-03-22, Keith Thompson <ks***@mib.org> wrote:
> > Andrew Reilly <an*************@areilly.bpc-users.org> writes: ...
> >> And I still say that constraining C for everyone so that it could fit the
> >> AS/400, rather than making C-on-AS/400 jump through a few more hoops to
> >> match traditional C behaviour, was the wrong trade-off. I accept that
> >> this may well be a minority view.
> >
> > It is. The C standard wouldn't just have to forbid an implementation
> > from trapping when it loads an invalid address; it would have to
> > define the behavior of any program that uses such an address.

>
> Why? It's not that difficult to define the behavior of a program that
> "uses" such an address other than by dereferencing, and no problem to
> leave the behavior undefined for dereferencing


But that would have locked out machines that strictly separate pointers
and non-pointers, in the sense that you can not load a pointer in a
non-pointer register and the other way around. Note also that on the
AS/400 a pointer is longer than any integer, so doing arithmetic on them
in integer registers would require quite a lot.


You don't have to do that at all. As you said, AS/400 uses long,
decorative pointers that are longer than integers. So no one's going
to notice if what your C compiler calls a pointer is actually a (base,
index) tuple, underneath. Being object/capability machines, these
tuples point to whole arrays, not just individual bytes or words. The
compilers could quite easily have managed all of C's pointer arithmetic as
actual arithmetic, using integers and indices, and only used or formed
real AS/400 pointers when the code did memory references (as base[index]).
There's no need for pointer arithmetic outside this model, against
different "base" pointers, so that's a straw-man argument.

Cheers,

--
Andrew

Mar 22 '06 #282
On Wed, 22 Mar 2006 03:41:07 +0000, Keith Thompson wrote:
Jordan Abel <ra*******@gmail.com> writes:
On 2006-03-22, Keith Thompson <ks***@mib.org> wrote:
Andrew Reilly <an*************@areilly.bpc-users.org> writes:
On Tue, 21 Mar 2006 19:02:55 -0600, Stephen Sprunk wrote:
> If a system traps on a prefetch, it's fundamentally broken.
> However, a system that traps when an invalid pointer is loaded is
> not broken, and the AS/400 is the usual example. Annoying, but
> not broken.

And I still say that constraining C for everyone so that it could fit the
AS/400, rather than making C-on-AS/400 jump through a few more hoops to
match traditional C behaviour, was the wrong trade-off. I accept that
this may well be a minority view.

It is. The C standard wouldn't just have to forbid an implementation
from trapping when it loads an invalid address; it would have to
define the behavior of any program that uses such an address.
Why? It's not that difficult to define the behavior of a program that
"uses" such an address other than by dereferencing, and no problem to
leave the behavior undefined for dereferencing


The problem is pointer arithmetic. For example, given:

#define BUFFER_SIZE /* some big number */
int buf[BUFFER_SIZE];
int *ptr = buf + BUFFER_SIZE;
int offset = /* some number */

Requiring (ptr + offset - offset == ptr) probably wouldn't be too much
of a burden for most systems, but requiring (ptr + offset > ptr) could
cause problems.


This is a really lame argument, IMO.

Given that we're working with fixed-word-length machines, rather than
scheme's bignums, p + offset > p doesn't even necessarily hold for
integers, so why should it hold more rigorously for pointers? Wrap-around
or overflow is a fact of life for fixed-range machines. You just deal
with it. Don't go too close to the edge, or make damn sure you're
checking for it when you do.
Given the current requirements, buf can be placed
anywhere in memory; there just (on some systems) needs to be a single
extra byte just past the end of the array. Requiring out-of-bounds
pointer arithmetic to work "correctly" in all cases could be much more
burdensome.
Correctness depends on what you're trying to do. The one-byte-extra
argument doesn't help the fact that the flat memory model of C will still
"work OK" even if buf[BUFFER_SIZE-1] occupies the very last word in the
address space of the machine: there's no room for even that single byte
extra. Sure, in that instance, ptr = buf + BUFFER_SIZE will equal 0, and
your pointer comparison may break if you do it carelessly, but ptr[-1]
will still point to the last word in the array, and there are no dumb
restrictions against iterating backwards through the array, or forwards
with non-unit stride.
And avoiding creating invalid pointers really isn't all that difficult.


No, of course it isn't. Just use pointers as object handles, and do your
pointer arithmetic with integers. Whoopee: exactly the same syntax and
semantics as Pascal and Java. I wonder why we bothered with pointers in
the first place?
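For concreteness, the guarantee both sides are arguing from can be shown in a few lines (my sketch): forming, comparing, and subtracting the one-past-the-end pointer is defined, while going further past the end, or below the start, is not.

```c
#include <assert.h>
#include <stddef.h>

#define BUFFER_SIZE 100

/* Demonstrates the defined cases of out-of-object pointer use:
   one past the end may be formed, compared, and subtracted. */
static ptrdiff_t demo_one_past_end(void)
{
    int buf[BUFFER_SIZE] = {0};
    int *ptr = buf + BUFFER_SIZE;   /* one past the end: valid */

    assert(ptr > buf);              /* comparison is defined     */
    assert(ptr[-1] == 0);           /* last element is reachable */
    /* buf + BUFFER_SIZE + 1 and buf - 1 would both be undefined. */
    return ptr - buf;               /* subtraction is defined    */
}
```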

--
Andrew

Mar 22 '06 #283
On Wed, 22 Mar 2006 03:48:19 +0000, Keith Thompson wrote:
Andrew Reilly <an*************@areilly.bpc-users.org> writes:
On Wed, 22 Mar 2006 03:03:47 +0000, Dik T. Winter wrote:
In article
<Pi**********************************@unix42.andrew.cmu.edu>
"Arthur J. O'Dwyer" <aj*******@andrew.cmu.edu> writes: [...]
> I believe Andrew means
>
> void foo(int & x)

I thought we were talking about C.


We were. Sure, you can put a pointer into a register argument.


"void foo(int & x)" is a syntax error in C.


I didn't use that syntax. I believe that Arthur was probably confused by
the use of the term "pass by reference", for which the C idiom is to pass
(by value) a pointer to the argument, which doesn't work if the argument
is a register variable.

Now: did you have a useful point to make?

--
Andrew

Mar 22 '06 #284

On Wed, 22 Mar 2006, Andrew Reilly wrote:
On Wed, 22 Mar 2006 03:48:19 +0000, Keith Thompson wrote:
Andrew Reilly <an*************@areilly.bpc-users.org> writes:
On Wed, 22 Mar 2006 03:03:47 +0000, Dik T. Winter wrote:
"Arthur J. O'Dwyer" <aj*******@andrew.cmu.edu> writes:

[...]
> I believe Andrew means
>
> void foo(int & x)

I thought we were talking about C.

We were. Sure, you can put a pointer into a register argument.


"void foo(int & x)" is a syntax error in C.


I didn't use that syntax. I believe that Arthur was probably confused
by the use of the term "pass by reference"


Well, I certainly wasn't confused by it! :) As far as I could tell,
you hadn't been talking about C since several posts back --- in my post,
I quoted context referring to "multiple return values" and reference
parameters --- the latter present in C++ but not C, and the former
present in very few C-like languages.

In case you hadn't noticed, this thread has for a long time been
crossposted to two groups in which C++ is topical, and one group in
which its apparent subject (language design) is topical. If you guys
want to talk about standard C, why don't you do that, and forget all
about pass-by-reference, multiple return values, AS/400 machine code,
and whatever other topics this thread has drifted through on its way
here?

I've removed c.p and c.a.e from the crosspost list. Feel free to go
back to discussing standard C now. ;)

-Arthur
Mar 22 '06 #285
On Wed, 22 Mar 2006 02:23:44 -0500, Arthur J. O'Dwyer wrote:
I've removed c.p and c.a.e from the crosspost list. Feel free to go
back to discussing standard C now. ;)


Aah, well I'm only reading the thread in comp.arch.embedded.

Cheers,

--
Andrew

Mar 22 '06 #286
CBFalconer wrote:

<snip>
#define hasNulByte(x) ((x - 0x01010101) & ~x & 0x80808080)
#define SW (sizeof (int) / sizeof (char))

<snip>

This is an ingenious and speedy way of detecting a zero byte within
an int, provided the preconditions are met. There is nothing wrong
with it, PROVIDED we know when it is valid.

<snip>

Just in case it hasn't been mentioned [a rather long thread to check!], and
might be useful, Google has an interesting summary on finding a nul in a
word by one Scott Douglass - posted to c.l.c back in 1993.

"Here's the summary of the responses I got to my query asking for the
trick of finding a nul in a long word and other bit tricks."

http://tinyurl.com/m7uw9
--
==============
Not a pedant
==============
Mar 22 '06 #287

In article <sl***********************@random.yi.org>, Jordan Abel <ra*******@gmail.com> writes:
On 2006-03-22, Dik T. Winter <Di********@cwi.nl> wrote:
In article <sl***********************@random.yi.org> Jordan Abel <ra*******@gmail.com> writes:
On 2006-03-22, Keith Thompson <ks***@mib.org> wrote:
Andrew Reilly <an*************@areilly.bpc-users.org> writes:
> And I still say that constraining C for everyone so that it could fit the
> AS/400, rather than making C-on-AS/400 jump through a few more hoops to
> match traditional C behaviour, was the wrong trade-off.
I must have missed the bit in the C Rationale where the committee
wrote, "We did this for the AS/400". They probably thought it was
obvious, since no other architecture could ever have the same
requirements and support C.
It is. The C standard wouldn't just have to forbid an implementation
from trapping when it loads an invalid address; it would have to
define the behavior of any program that uses such an address.

Why? It's not that difficult to define the behavior of a program that
"uses" such an address other than by dereferencing, and no problem to
leave the behavior undefined for dereferencing

OK, define the behavior of all non-dereferencing accesses on
invalid pointers. Be sure to account for systems with non-linear
address spaces, since nothing else in the C standard excludes them.
But that would have locked out machines that strictly separate pointers
and non-pointers, in the sense that you can not load a pointer in a
non-pointer register and the other way around. Note also that on the
AS/400 a pointer is longer than any integer, so doing arithmetic on them
in integer registers would require quite a lot.

Yup. The AS/400 has a set of opcodes for manipulating integers, and
a different set for manipulating pointers. Nothing in C currently
requires it to treat the latter like the former, and I don't see any
reason why it should. (Indeed, I admit to being mystified by Andrew
Reilly's position; what would be gained by requiring that C
implementations have defined behavior for invalid pointers? How is
leaving invalid pointer access undefined by the standard "constraining" C?)
Surely there's some way to catch and ignore the trap from loading an
invalid pointer, though.
No, there is not. The "trap" (a machine check, actually) can be
caught, and it can be responded to, by application code; but ignoring
it is not one of the options. On the AS/400, only LIC (Licensed
Internal Code) can bypass memory protection, and the C implementation
is not LIC.

The AS/400 uses a Single-Level Store. It has *one* large virtual
address space for all user-mode objects in the system: all jobs (the
equivalent of processes), all files, all resources of whatever sort.
It enforces access restrictions not by giving each process its own
virtual address space, but by dynamically granting jobs access to
"materialized" subspaces. (This doesn't apply to processes running
under PACE, AIUI, but that's a special case.)
I mean, it stops _somewhere_ even as it is now,


Yes, it stops: if the machine check isn't handled by the application,
the job is paused and a message is sent to the appropriate message
queue, where a user or operator can respond to it.

That happens under LIC control. The C implementation can't override
it; if it could, it'd be violating the system's security model.

Of course, the C implementation could emulate some other machine with
less-vigilant pointer handling by generating some intermediate
representation and interpreting it at runtime. That would have made
the early AS/400s unusably slow, rather than just annoyingly slow,
for C programs.

But in any case a favorite maxim of comp.lang.c applies here: what
the AS/400, or any other extant implementation, does *does not
matter* to the C standard. If we decommissioned all AS/400s today,
there might be a new architecture tomorrow with some other good
reason for disallowing operations on invalid pointers in C.

--
Michael Wojcik mi************@microfocus.com

The lecturer was detailing a proof on the blackboard. He started to say,
"From the above it is obvious that ...". Then he stepped back and thought
deeply for a while. Then he left the room. We waited. Five minutes
later he returned smiling and said, "Yes, it is obvious", and continued
to outline the proof. -- John O'Gorman
Mar 23 '06 #288
On 2006-03-23, Michael Wojcik <mw*****@newsguy.com> wrote:

In article <sl***********************@random.yi.org>, Jordan Abel <ra*******@gmail.com> writes:
On 2006-03-22, Dik T. Winter <Di********@cwi.nl> wrote:
In article <sl***********************@random.yi.org> Jordan Abel <ra*******@gmail.com> writes:
On 2006-03-22, Keith Thompson <ks***@mib.org> wrote:
> Andrew Reilly <an*************@areilly.bpc-users.org> writes:
>> And I still say that constraining C for everyone so that it could
>> fit the AS/400, rather than making C-on-AS/400 jump through a few
>> more hoops to match traditional C behaviour, was the wrong
>> trade-off.
I must have missed the bit in the C Rationale where the committee
wrote, "We did this for the AS/400". They probably thought it was
obvious, since no other architecture could ever have the same
requirements and support C.
It is. The C standard wouldn't just have to forbid an
> implementation from trapping when it loads an invalid address; it
> would have to define the behavior of any program that uses such an
> address.

Why? It's not that difficult to define the behavior of a program
that "uses" such an address other than by dereferencing, and no
problem to leave the behavior undefined for dereferencing
OK, define the behavior of all non-dereferencing accesses on invalid
pointers. Be sure to account for systems with non-linear address
spaces, since nothing else in the C standard excludes them.
unspecified result, implementation-defined, compares equal, unspecified
result, unspecified result.

There, that was easy.
But that would have locked out machines that strictly separate
pointers and non-pointers, in the sense that you can not load a
pointer in a non-pointer register and the other way around. Note
also that on the AS/400 a pointer is longer than any integer, so
doing arithmetic on them in integer registers would require quite a
lot.


Yup. The AS/400 has a set of opcodes for manipulating integers, and a
different set for manipulating pointers. Nothing in C currently
requires it to treat the latter like the former, and I don't see any
reason why it should. (Indeed, I admit to being mystified by Andrew
Reilly's position; what would be gained by requiring that C
implementations have defined behavior for invalid pointers? How is
leaving invalid pointer access undefined by the standard "constraining" C?)


It constrains code, in a way. Existing code is more important than
existing implementations, right?
Surely there's some way to catch and ignore the trap from loading an
invalid pointer, though.
No, there is not. The "trap" (a machine check, actually) can be
caught, and it can be responded to, by application code; but ignoring
it is not one of the options.


You can't "catch it and do nothing"? What are you expected to _do_ about
an invalid or protected address being loaded [not dereferenced], anyway?
What _can_ you do, having caught the machine check? What responses are
typical?
On the AS/400, only LIC (Licensed Internal Code) can bypass memory
protection, and the C implementation is not LIC.

The AS/400 uses a Single-Level Store. It has *one* large virtual
address space for all user-mode objects in the system: all jobs (the
equivalent of processes), all files, all resources of whatever sort.
It enforces access restrictions not by giving each process its own
virtual address space, but by dynamically granting jobs access to
"materialized" subspaces. (This doesn't apply to processes running
under PACE, AIUI, but that's a special case.)
And why is anything but a dereference an "access" to the protected
address?
I mean, it stops _somewhere_ even as it is now,
Yes, it stops: if the machine check isn't handled by the application,


What can the application do in the handler? Why couldn't a C
implementation cause all C programs to have a handler that does
something reasonable?
the job is paused and a message is sent to the appropriate message
queue, where a user or operator can respond to it.

That happens under LIC control. The C implementation can't override
it; if it could, it'd be violating the system's security model.


I didn't say override. I said ignore. Since it's not a dereference, no
harm actually done. Why does loading a protected address into a register
violate security?
Mar 23 '06 #289

In article <pa*********************************@areilly.bpc-users.org>, Andrew Reilly <an*************@areilly.bpc-users.org> writes:

You don't have to do that at all. As you said, AS/400 uses long,
decorative pointers that are longer than integers. So no one's going
to notice if what your C compiler calls a pointer is actually a (base,
index) tuple, underneath. Being object/capability machines, these
tuples point to whole arrays, not just individual bytes or words. The
compilers could quite easily have managed all of C's pointer arithmetic as
actual arithmetic, using integers and indices, and only used or formed
real AS/400 pointers when the code did memory references (as base[index]).


That would break inter-language calls, which were an absolute
necessity in early AS/400 C implementations (notably EPM C), as they
were unable to use some system facilities (such as communications)
directly.

Prior to the ILE environment, there was no "linker" as such for most
(all?) AS/400 application programming languages. Source files were
compiled into separate program objects (*PGM objects) in the
filesystem. Calls with external linkage were resolved dynamically.
(This is closer to the external-call model COBOL uses, actually, so
it made sense for the 400's primary audience.)

It would have been a real mess if the C implementation had to figure
out, on every external call passing a pointer, whether the target was
C (and so could use special fake C pointers) or not (and so needed
real AS/400 pointers). Putting this burden on the C programmer would
not have improved the situation.

And, of course, pointers in aggregate data types would pose a real
problem. If a C program wanted to define a struct that corresponded
to a COBOL group item, that would've been a right pain. Obviously,
it's an implementation-specific task anyway, but on most
implementations it's pretty straightforward provided the COBOL item
doesn't use any of COBOL's oddball data types.

That doesn't mean it couldn't have been done, of course, but it would
have made C - already not a member of the popular crowd on the '400
playground - too cumbersome for all but the most determined fans.

As the Rationale notes, one of the guiding principles behind C is to
do things the way the machine wants to do them. That introduces many
incompatibilities between implementations, but has rewards of its own.
Since C is rather unusual among HLLs in this respect, why not let it
stick to its guns rather than asking it to ape all those other
languages by hiding the machine behind its own set-dressing?

--
Michael Wojcik mi************@microfocus.com

Aw, shucks. And I was just trying to be rude. -- P.J. Plauger
Mar 23 '06 #290
