Bytes | Software Development & Data Engineering Community

Updating only fields which users have changed

Let's say I have an html form with 20 or 30 fields in it. The form
submits the fields via POST to a php page which updates a table in a
database with the $_POST vars. Which makes more sense?

1) simply UPDATING the values for all fields in the table, whether or
not any particular field has actually changed

2) running a second SELECT statement and comparing the $_POST vars to
the returned values, and only UPDATING those that have changed.

3) including a hidden form field for each field with the original
values (so for each <input type="text" name="field1" size="10"
value="blah"> there would also be a <input type="hidden"
name="field1_orig" value="blah">), comparing $_POST['field1'] with
$_POST['field1_orig'] and only updating those that have changed.

There's very little chance that two users will be updating the same
table at the same time with this particular site, but anyone see any
particular problems with any of the above three methods? Would
doubling the number of $_POST vars (option 3) create a problem?
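For concreteness, option 3's comparison might be sketched like this (a hypothetical sketch; field names are invented, and it assumes the `_orig` hidden-field convention described above):

```php
<?php
// Option 3 sketch: the form posts both field1 and field1_orig,
// and only fields whose values differ are collected for the UPDATE.

// Return only the fields whose submitted value differs from the
// "_orig" copy carried along in hidden inputs.
function changed_fields(array $post, array $fields): array
{
    $changed = [];
    foreach ($fields as $name) {
        if (($post[$name] ?? null) !== ($post[$name . '_orig'] ?? null)) {
            $changed[$name] = $post[$name] ?? null;
        }
    }
    return $changed;
}

// Build the SET clause of an UPDATE from the changed fields only,
// using named placeholders so the statement can be prepared.
function build_set_clause(array $changed): string
{
    $parts = [];
    foreach (array_keys($changed) as $name) {
        $parts[] = "$name = :$name";
    }
    return implode(', ', $parts);
}

$post = [
    'field1' => 'new value', 'field1_orig' => 'blah',
    'field2' => 'same',      'field2_orig' => 'same',
];
$changed = changed_fields($post, ['field1', 'field2']);
// Only field1 survives, so the UPDATE would touch one column.
```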

Thanks in advance.
Jul 17 '05 #1
Jason:
Let's say I have an html form with 20 or 30 fields in it. The form
submits the fields via POST to a php page which updates a table in a
database with the $_POST vars. Which makes more sense?

1) simply UPDATING the values for all fields in the table, whether or
not any particular field has actually changed

2) running a second SELECT statement and comparing the $_POST vars to
the returned values, and only UPDATING those that have changed.

3) including a hidden form field for each field with the original
values (so for each <input type="text" name="field1" size="10"
value="blah"> there would also be a <input type="hidden"
name="field1_orig" value="blah">), comparing $_POST['field1'] with
$_POST['field1_orig'] and only updating those that have changed.


I see no reason not to choose option 1. This is the simplest solution, and I
can't see any problems with it. Do you? Seeing as you list the other two
options I mean...

André Næss
Jul 17 '05 #2
In article <cf**************************@posting.google.com >, Jason
<pm*******@4lpi.com> wrote:
Let's say I have an html form with 20 or 30 fields in it. The form
submits the fields via POST to a php page which updates a table in a
database with the $_POST vars. Which makes more sense?

1) simply UPDATING the values for all fields in the table, whether or
not any particular field has actually changed

2) running a second SELECT statement and comparing the $_POST vars to
the returned values, and only UPDATING those that have changed.

3) including a hidden form field for each field with the original
values (so for each <input type="text" name="field1" size="10"
value="blah"> there would also be a <input type="hidden"
name="field1_orig" value="blah">), comparing $_POST['field1'] with
$_POST['field1_orig'] and only updating those that have changed.

There's very little chance that two users will be updating the same
table at the same time with this particular site, but anyone see any
particular problems with any of the above three methods? Would
doubling the number of $_POST vars (option 3) create a problem?

Thanks in advance.


If you're using MySQL as your database, then 1) is effectively what
happens anyway - if an UPDATE sets a column to the value it already
contains, MySQL detects this and doesn't actually rewrite that column.
I'm not sure whether the other databases that can be used with PHP
behave the same way.

You can also pre-fill your html form fields with current data from your
database by running a query before the form is displayed - the user
can then see the current values and edit as necessary. The html form
code would be:
<input type="text" name="field1" value="<?= htmlspecialchars($row['field1']) ?>">
This is a similar idea to your option 3) above, but less messy, imo.

This is the technique I usually use for doing form based database
updates. An add/insert is similar but there's no data to pre-fill the
form with.
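A minimal sketch of the pre-fill technique (here `$row` stands in for the result of the SELECT run before the form is displayed; a real app would fetch it with mysqli or PDO):

```php
<?php
// Render a pre-filled text input from the current database row.
function text_input(string $name, string $value): string
{
    // Escape the value so quotes or angle brackets in the data
    // cannot break out of the attribute.
    return '<input type="text" name="' . htmlspecialchars($name, ENT_QUOTES)
         . '" value="' . htmlspecialchars($value, ENT_QUOTES) . '">';
}

$row = ['field1' => 'blah "quoted"'];
echo text_input('field1', $row['field1']), "\n";
```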

hth.
Jul 17 '05 #3
Hi !

On 6 Jan 2004 08:28:43 -0800, pm*******@4lpi.com (Jason) wrote:
Let's say I have an html form with 20 or 30 fields in it. The form
submits the fields via POST to a php page which updates a table in a
database with the $_POST vars. Which makes more sense?

1) simply UPDATING the values for all fields in the table, whether or
not any particular field has actually changed
This is in most cases the simplest solution, if you do not keep a
journal of who changed what in this table. In the case of such a
journal (which we run here on pretty much all tables), a blanket
UPDATE would change the journal timestamp even if nothing was
actually changed.
2) running a second SELECT statement and comparing the $_POST vars to
the returned values, and only UPDATING those that have changed.

3) including a hidden form field for each field with the original
values (so for each <input type="text" name="field1" size="10"
value="blah"> there would also be a <input type="hidden"
name="field1_orig" value="blah">), comparing $_POST['field1'] with
$_POST['field1_orig'] and only updating those that have changed.
That's what I prefer to use.

There's very little chance that two users will be updating the same
table at the same time with this particular site, but anyone see any
particular problems with any of the above three methods? Would
doubling the number of $_POST vars (option 3) create a problem?
No, this is a completely separate issue, and you will have a complete
mess and data loss if it happens (and it will). You need to
implement a locking strategy. There is pessimistic locking, which
works like this:
If someone opens the record for editing, you set a flag to 'lock'
and no one else can edit that record.
This is in most cases unsuitable for the web, because you don't want to
have to call the guy in Siberia who does your editing, and who went for
a toilet break in the middle of editing, to unlock it.

Instead, you can use optimistic locking, which just puts a timestamp
on the record marking when it was last saved. Then, when saving, you
check that timestamp against the one you had when you opened the
record for editing. If it is newer - that is, if someone saved while
you were editing - you have to resolve that conflict. The simple and
stupid way is to overwrite the other person's changes, or to discard
your own. Better: implement a field-by-field merging strategy. For
this you also need to know what the contents were before (option 3).
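A minimal sketch of the optimistic check (table and column names are invented for illustration; real code would bind the id and timestamp as parameters rather than interpolating them):

```php
<?php
// Optimistic locking sketch: the UPDATE only matches if the row
// still carries the timestamp we read when editing began.
function optimistic_update_sql(string $table, array $set, int $id, string $seenAt): string
{
    $parts = [];
    foreach (array_keys($set) as $col) {
        $parts[] = "$col = :$col";
    }
    return "UPDATE $table SET " . implode(', ', $parts)
         . ", last_saved = NOW()"
         . " WHERE id = $id AND last_saved = '$seenAt'";
    // If the driver reports 0 affected rows, someone saved first
    // and the conflict has to be resolved (merge, retry, or report).
}
```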

HTH, Jochen

Thanks in advance.


--
Jochen Daum - CANS Ltd.
PHP DB Edit Toolkit -- PHP scripts for building
database editing interfaces.
http://sourceforge.net/projects/phpdbedittk/
Jul 17 '05 #4
André Næss <an*********************@ifi.uio.no> wrote in message news:<bt**********@maud.ifi.uio.no>...

I see no reason not to choose option 1. This is the simplest solution, and I
can't see any problems with it. Do you? Seeing as you list the other two
options I mean...


Thanks everyone for the responses. I think I was a little too generic
in my original post. Right now I'm trying to write some classes for
updating a database. Sometimes users will be submitting a form that
has fields for one record in one table, but sometimes they will be
using a form with 30 or 40 rows of fields for 30 or 40 records.

I understand in the first case that my option one (simply constructing
an UPDATE statement with all the fields) is the simplest, and that
building an UPDATE statement of only the fields that have actually
changed probably won't save much on the database end. But what about
the latter case, where there will be 30 or 40 UPDATE statements? In
many cases only 1 or 2 of the records may be changed at all, so if the
script only submits 1 or 2 UPDATEs instead of 30 or 40 there should be
some savings, right?
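As a rough illustration of that multi-row case (table and column names invented, values inlined with var_export only for readability; a real script would use prepared statements), each submitted row is compared against its original snapshot and a statement is emitted only when something changed:

```php
<?php
// Collect UPDATE statements only for the rows that actually changed.
function rows_to_update(array $submitted, array $originals): array
{
    $sql = [];
    foreach ($submitted as $id => $row) {
        if ($row == ($originals[$id] ?? null)) {
            continue; // nothing changed, skip the UPDATE entirely
        }
        $set = [];
        foreach ($row as $col => $val) {
            $set[] = "$col = " . var_export($val, true);
        }
        $sql[] = "UPDATE items SET " . implode(', ', $set) . " WHERE id = $id";
    }
    return $sql;
}

$orig = [1 => ['qty' => 2], 2 => ['qty' => 5]];
$post = [1 => ['qty' => 2], 2 => ['qty' => 9]];
$sql  = rows_to_update($post, $orig);
// Only row 2 changed, so only one statement is generated.
```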
And here's another question. I've been thinking about the whole
separating business logic from presentation logic deal, and I've got
an idea, but I'm not sure if I'm overthinking all of this.

Let's say I have an entity called thing. I'll have a file called
thing.php which the users will actually call up in their browser,
usually following links like thing.php?action=view&thing_id=123 or
some such. Depending on the $_GET vars, thing.php will include
another file called thing_form.php, thing_view.php, thing_list.php,
etc., which has all the display logic and html they will see. It will
also include a file thing.inc.php which has a class with all the
functions and business logic for thing.

Does this make sense, or am I pulling things apart too much? When
I've done little play pages in the past I've lumped all this stuff
together in one page, so will all the includes (at least two each time
a user goes to thing.php) slow everything down? Also, I'd be
interested in hearing how everyone likes to keep their
directories--lots of little, specific purpose files or a few large
files which do everything?
Thanks again. This newsgroup is such a great resource.
Jul 17 '05 #5
Jason:
André Næss <an*********************@ifi.uio.no> wrote in message
news:<bt**********@maud.ifi.uio.no>...

I see no reason not to choose option 1. This is the simplest solution,
and I can't see any problems with it. Do you? Seeing as you list the
other two options I mean...
Thanks everyone for the responses. I think I was a little too generic
in my original post. Right now I'm trying to write some classes for
updating a database. Sometimes users will be submitting a form that
has fields for one record in one table, but sometimes they will be
using a form with 30 or 40 rows of fields for 30 or 40 records.

I understand in the first case that my option one (simply constructing
an UPDATE statement with all the fields) is the simplest, and that
building an UPDATE statement of only the fields that have actually
changed probably won't save much on the database end. But what about
the latter case, where there will be 30 or 40 UPDATE statements? In
many cases only 1 or 2 of the records may be changed at all, so if the
script only submits 1 or 2 UPDATEs instead of 30 or 40 there should be
some savings, right?


Yes, there will be savings, but I regard it as a case of premature
optimization; the update statements in question should finish fairly
quickly. On the other hand, if you can avoid the redundant updates
without much extra code, then I guess you might as well go ahead and
do it.
And here's another question. I've been thinking about the whole
separating business logic from presentation logic deal, and I've got
an idea, but I'm not sure if I'm overthinking all of this.

Let's say I have an entity called thing. I'll have a file called
thing.php which the users will actually call up in their browser,
usually following links like thing.php?action=view&thing_id=123 or
some such. Depending on the $_GET vars, thing.php will include
another file called thing_form.php, thing_view.php, thing_list.php,
etc., which has all the display logic and html they will see. It will
also include a file thing.inc.php which has a class with all the
functions and business logic for thing.

Does this make sense, or am I pulling things apart too much? When
I've done little play pages in the past I've lumped all this stuff
together in one page, so will all the includes (at least two each time
a user goes to thing.php) slow everything down? Also, I'd be
interested in hearing how everyone likes to keep their
directories--lots of little, specific purpose files or a few large
files which do everything?


Sounds quite similar to schemes I've been using. I generally have
modularized systems (where a typical module is "user" or "document" or some
other entity in the domain of discourse). Each module offers the normal
CRUD (Create Read Update Delete) operations, and I usually have a handler
or driver which takes care of the mapping from _GET variables to what
should actually happen. How you do things from here depends on the
complexity on your application. In general, start simple, and break things
up on demand.

File sizes and what you put in what file really don't matter that much if
you ask me, the most important thing is that you have a well designed
application with fairly small specialized functions that are composed
properly and that have clear responsibilities. A good editor should make it
simple to browse the code.

André Nęss
Jul 17 '05 #6
Jason:
Yes, there will be savings, but I regard it as a case of premature
optimization; the update statements in question should finish fairly
quickly. On the other hand, if you can avoid the redundant updates
without much extra code, then I guess you might as well go ahead and
do it.
In general, do most people just run everything in $_POST through a
regular expression or something to look out for malicious entry? I
guess I'm really wondering whether what I'm saving by trimming
UPDATEs and not validating unimportant data outweighs whatever cost
there is to a larger $_POST.
I use a form handling system I've written which takes care of validation.
All I do is declare the types of the different fields and what constraints
apply to them (for example, an integer between 1 and 10 or more complex
types like date and credit card). This validation shouldn't be very time
consuming, so again there is little to save.
One more question, if it's not too much bother--do you find yourself
representing your database fields as a bunch of class variables or an
array or some such, or do you end up just using $_POST and the array
returned by the database? This seems to defeat the 3-tier-ishness a
little since adding a field requires updating the database and the
class, but I don't really see any way around it that I like. I don't
want to query the database for the table definition every time I
create a new instance of an object.


There is no way around it; changes to the database must somehow be
reflected by changes in the code. If there were a closer fit between
the DB and the programming language then that might have been
avoidable, and I guess it's possible to write a generic integration
system which acts as a database abstraction layer and allows you to
design the database using some sort of schema language which can then
be used in PHP, but to me it seems like a bit of overkill :)

I have used both solutions: classes representing entities in the DB,
and just using functions and passing the data along in arrays. I find
the latter simpler in PHP because of its rather weak OOP support, but
I guess it's largely a matter of taste. In most cases the difference
is merely notational, e.g. you write $person->update($newData) (or
possibly Person::update($newData)) instead of update_person($newData).
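That notational equivalence can be sketched in a few lines (Person and update_person are invented names; the "business logic" is a stub):

```php
<?php
// The same update logic exposed as a plain function...
function update_person(array $data): string
{
    return 'updating ' . $data['name'];
}

// ...and as an entity method that merely delegates to it.
class Person
{
    public function update(array $data): string
    {
        // Only the call site differs; the logic is shared.
        return update_person($data);
    }
}
```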

André Næss
Jul 17 '05 #7
André Næss <an*********************@ifi.uio.no> wrote in message news:<bt**********@maud.ifi.uio.no>...
Jason:
André Næss <an*********************@ifi.uio.no> wrote in message
news:<bt**********@maud.ifi.uio.no>...

I see no reason not to choose option 1. This is the simplest solution,
and I can't see any problems with it. Do you? Seeing as you list the
other two options I mean...
Thanks everyone for the responses. I think I was a little too generic
in my original post. Right now I'm trying to write some classes for
updating a database. Sometimes users will be submitting a form that
has fields for one record in one table, but sometimes they will be
using a form with 30 or 40 rows of fields for 30 or 40 records.

I understand in the first case that my option one (simply constructing
an UPDATE statement with all the fields) is the simplest, and that
building an UPDATE statement of only the fields that have actually
changed probably won't save much on the database end. But what about
the latter case, where there will be 30 or 40 UPDATE statements? In
many cases only 1 or 2 of the records may be changed at all, so if the
script only submits 1 or 2 UPDATEs instead of 30 or 40 there should be
some savings, right?


Yes, there will be savings, but I regard it as a case of premature
optimization; the update statements in question should finish fairly
quickly. On the other hand, if you can avoid the redundant updates
without much extra code, then I guess you might as well go ahead and
do it.


I get your point. I'm actually re-writing a bunch of poorly-organized
code so I'm trying to think ahead. There are other issues as well.
For example, if I'm already checking to see if a field has changed, I
don't need to validate any of the non-changing data that's submitted.

In general, do most people just run everything in $_POST through a
regular expression or something to look out for malicious entry? I
guess I'm really wondering whether what I'm saving by trimming
UPDATEs and not validating unimportant data outweighs whatever cost
there is to a larger $_POST.
And here's another question. I've been thinking about the whole
separating business logic from presentation logic deal, and I've got
an idea, but I'm not sure if I'm overthinking all of this.

Let's say I have an entity called thing. I'll have a file called
thing.php which the users will actually call up in their browser,
usually following links like thing.php?action=view&thing_id=123 or
some such. Depending on the $_GET vars, thing.php will include
another file called thing_form.php, thing_view.php, thing_list.php,
etc., which has all the display logic and html they will see. It will
also include a file thing.inc.php which has a class with all the
functions and business logic for thing.

Does this make sense, or am I pulling things apart too much? When
I've done little play pages in the past I've lumped all this stuff
together in one page, so will all the includes (at least two each time
a user goes to thing.php) slow everything down? Also, I'd be
interested in hearing how everyone likes to keep their
directories--lots of little, specific purpose files or a few large
files which do everything?


Sounds quite similar to schemes I've been using. I generally have
modularized systems (where a typical module is "user" or "document" or some
other entity in the domain of discourse). Each module offers the normal
CRUD (Create Read Update Delete) operations, and I usually have a handler
or driver which takes care of the mapping from _GET variables to what
should actually happen. How you do things from here depends on the
complexity on your application. In general, start simple, and break things
up on demand.


One more question, if it's not too much bother--do you find yourself
representing your database fields as a bunch of class variables or an
array or some such, or do you end up just using $_POST and the array
returned by the database? This seems to defeat the 3-tier-ishness a
little since adding a field requires updating the database and the
class, but I don't really see any way around it that I like. I don't
want to query the database for the table definition every time I
create a new instance of an object.
File sizes and what you put in what file really don't matter that much if
you ask me, the most important thing is that you have a well designed
application with fairly small specialized functions that are composed
properly and that have clear responsibilities. A good editor should make it
simple to browse the code.


True. I'm just trying to wrap my head around some of the big-picture
stuff. I tend to think in terms of user needs and requirements,
interface and information design, etc. first and then write code that
does whatever I need it to do. I'm trying to think more in terms of
code organization and reusability, though, as I work on more projects.

Thanks.
Jul 17 '05 #8
pm*******@4lpi.com (Jason) wrote in message news:<cf*************************@posting.google.c om>...
André Næss <an*********************@ifi.uio.no> wrote in message news:<bt**********@maud.ifi.uio.no>...
And here's another question. I've been thinking about the whole
separating business logic from presentation logic deal, and I've got
an idea, but I'm not sure if I'm overthinking all of this.


What you are talking about here is the 3-tier architecture, in which
you completely separate the presentation, business and data access
logic. I have created an entire development framework/infrastructure
using this architecture, which is described in
http://www.tonymarston.net/php-mysql...plication.html. You can
run the application online to see it working, then download all the
code to see how it's done.

Tony Marston
http://www.tonymarston.net/
Jul 17 '05 #9
pm*******@4lpi.com (Jason) wrote in message news:<cf**************************@posting.google. com>...
Let's say I have an html form with 20 or 30 fields in it. The form
submits the fields via POST to a php page which updates a table in a
database with the $_POST vars. Which makes more sense?

1) simply UPDATING the values for all fields in the table, whether or
not any particular field has actually changed
This will work with MySQL as it will only update those fields which
have actually changed.
2) running a second SELECT statement and comparing the $_POST vars to
the returned values, and only UPDATING those that have changed.
This is the method I use. It may seem overkill, but the reason is that
in the near future I am going to implement an audit trail system in
which case I will only want to log those fields which have actually
changed. This means that I must have a copy of the original data as
well as the new data submitted by the user. My audit trail will then
be able to show both the "before" and "after" values for individual
columns.

Another possible reason for redoing the SELECT before an update is
that you can include some of the original values in the WHERE clause
(in addition to the primary key) as this will not return an entry if
any of those values have been changed by another user. This prevents a
2nd update from overwriting the 1st update.
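A small sketch of that WHERE-clause guard (table name and value inlining are for illustration only; real code would use bound parameters):

```php
<?php
// Include the original values (as read before editing) alongside the
// primary key, so the UPDATE matches zero rows if another user
// changed them in the meantime.
function guarded_update_sql(array $set, array $origWhere, int $id): string
{
    $setParts = [];
    foreach ($set as $col => $val) {
        $setParts[] = "$col = " . var_export($val, true);
    }
    $whereParts = ["id = $id"];
    foreach ($origWhere as $col => $val) {
        $whereParts[] = "$col = " . var_export($val, true);
    }
    return 'UPDATE t SET ' . implode(', ', $setParts)
         . ' WHERE ' . implode(' AND ', $whereParts);
}
// Zero affected rows then means "someone else got there first".
```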
3) including a hidden form field for each field with the original
values (so for each <input type="text" name="field1" size="10"
value="blah"> there would also be a <input type="hidden"
name="field1_orig" value="blah">), comparing $_POST['field1'] with
$_POST['field1_orig'] and only updating those that have changed.
This is NOT a good idea as it bloats the size of the HTML file which
you send out, thus slowing down the transmission times. It also
requires code to add these extra values to the HTML output, then
another set of code to pick out and compare these values when the form
is submitted. With all this extra code the potential for accidentally
introducing bugs is enormous. It is easier and more reliable to simply
redo the SQL SELECT, and it will not be any slower than executing all
that extra code to insert/extract all those extra hidden fields from
the HTML document.
There's very little chance that two users will be updating the same
table at the same time with this particular site, but anyone see any
particular problems with any of the above three methods? Would
doubling the number of $_POST vars (option 3) create a problem?

Thanks in advance.


No problem.

Tony Marston
http://www.tonymarston.net/
Jul 17 '05 #10
Jochen Daum <jo*********@cans.co.nz> wrote in message news:<kd********************************@4ax.com>. ..
Hi !

On 6 Jan 2004 08:28:43 -0800, pm*******@4lpi.com (Jason) wrote:
Let's say I have an html form with 20 or 30 fields in it. The form
submits the fields via POST to a php page which updates a table in a
database with the $_POST vars. Which makes more sense?

1) simply UPDATING the values for all fields in the table, whether or
not any particular field has actually changed

2) running a second SELECT statement and comparing the $_POST vars to
the returned values, and only UPDATING those that have changed.

3) including a hidden form field for each field with the original
values (so for each <input type="text" name="field1" size="10"
value="blah"> there would also be a <input type="hidden"
name="field1_orig" value="blah">), comparing $_POST['field1'] with
$_POST['field2'] and only updating those that have changed.

There's very little chance that two users will be updating the same
table at the same time with this particular site, but anyone see any
particular problems with any of the above three methods? Would
doubling the number of $_POST vars (option 3) create a problem?


No, this is a completely separate issue, and you will have a complete
mess and data loss if it happens (and it will). You need to
implement a locking strategy. There is pessimistic locking, which
works like this:
If someone opens the record for editing, you set a flag to 'lock'
and no one else can edit that record.
This is in most cases unsuitable for the web, because you don't want to
have to call the guy in Siberia who does your editing, and who went for
a toilet break in the middle of editing, to unlock it.

Instead, you can use optimistic locking, which just puts a timestamp
on the record marking when it was last saved. Then, when saving, you
check that timestamp against the one you had when you opened the
record for editing. If it is newer - that is, if someone saved while
you were editing - you have to resolve that conflict. The simple and
stupid way is to overwrite the other person's changes, or to discard
your own. Better: implement a field-by-field merging strategy. For
this you also need to know what the contents were before (option 3).

HTH, Jochen


This is a good idea, and easy to implement. Before you issue the
UPDATE you issue a SELECT using the primary key plus the timestamp
value from the original SELECT. If the timestamp value has changed
(i.e. somebody else has updated the record since you last read it)
then the SELECT will not return a row and you can take whatever action
you deem necessary, such as telling the user that his update was
rejected due to a concurrent update.

Tony Marston
http://www.tonymarston.net/
Jul 17 '05 #11
I noticed that Message-ID:
<cf**************************@posting.google.com > from Jason contained
the following:
Let's say I have an html form with 20 or 30 fields in it. The form
submits the fields via POST to a php page which updates a table in a
database with the $_POST vars. Which makes more sense?

1) simply UPDATING the values for all fields in the table, whether or
not any particular field has actually changed

2) running a second SELECT statement and comparing the $_POST vars to
the returned values, and only UPDATING those that have changed.

3) including a hidden form field for each field with the original
values (so for each <input type="text" name="field1" size="10"
value="blah"> there would also be a <input type="hidden"
name="field1_orig" value="blah">), comparing $_POST['field1'] with
$_POST['field1_orig'] and only updating those that have changed.


I recently did one where I put a checkbox next to the record and
required the user to check the box if they want to update it. This also
serves as a visual reminder to the user of the records they have edited.
To save effort on JavaScript-enabled browsers I added some JS that
automatically checks the box when the value is changed.
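That checkbox-per-record idea might be rendered something like this (field names and markup are invented for illustration):

```php
<?php
// Each row gets a "save me" checkbox, and a one-line onchange handler
// ticks it whenever the text field is edited.
function editable_row(int $id, string $value): string
{
    $v = htmlspecialchars($value, ENT_QUOTES);
    return '<input type="checkbox" name="save[' . $id . ']">'
         . '<input type="text" name="field[' . $id . ']" value="' . $v . '"'
         . ' onchange="this.form.elements[\'save[' . $id . ']\'].checked = true">';
}
// Server side, only the records whose save[] box is checked get an UPDATE.
```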

--
Geoff Berrow (put thecat out to email)
It's only Usenet, no one dies.
My opinions, not the committee's, mine.
Simple RFDs http://www.ckdog.co.uk/rfdmaker/
Jul 17 '05 #12
