
Double Clock Experiment

The following experiment is a demonstration of TIME TRAVEL. When
writing this program and testing it out, I found that sometimes the
program would engage in time travel and other times it would not.
After careful testing and reprogramming I have optimized the code so
it should work every time; however, I recommend that you leave the
room while it is running and think of something else. It takes about 5
minutes to run, so it is better if you don't stare at a blank screen.

Here is how the double clock experiment operates:

The program loops like this:

LOOP

wait

a = a + time

wait

b = b + time

GOTO LOOP

c = time

Then it checks to see if the difference between a and b is correct
compared to c. In normal physics it should be, but if the program time
travels then the output will be different. To show you what I mean,
let me give you an example of the program in operation:

wait 10

a = a + time 10

wait 10

b = b + time 20

wait 10

a = a + time 30

wait 10

b = b + time 40

wait 10

a = a + time 50

wait 10

b = b + time 60
-

Now A = 90 & B = 120

-immediately check time-

C = time 60

Using logic we can deduce that A should be exactly C / 2 smaller than
B, but when you run the program the time difference can vary
considerably in either direction. On top of that, the time it takes to
run the program is not constant either. What conclusions can we draw
from this?

I'll show you some example output and then give you the source:

Time A: 146124514
Time B: 146270790
Time C: 293157

C / 2 = 146578.5
Actual Difference 145276

Time A was 1302.6 ticks into the future, or maybe time b was 1302.6
ticks into the past.

Here is some more real output:

Time A: 144371012
Time B: 144515341
Time C: 289188
C / 2 = 144594
real difference: 144329

time A was 265 ticks into the future, or time b was 265 ticks into the
past.

I wish I had an atomic clock to experiment with, but here is the
source:

#include <sys/types.h>
#include <time.h>
#include <unistd.h>
#include <stdio.h>
#include <math.h>

int main()
{

clock_t c0, c1, c2; /* clock_t is defined on <time.h> and
<sys/types.h> as int */
long count;
printf ("using UNIX function clock to measure CPU time ... \n");

for (count = 0; count < 1000; count++){

for(int cnt1 =0; cnt1 < 100000; cnt1++){
c1 = clock();
}

c0 = c0 + clock();

for(int cnt2 =0; cnt2 < 100000; cnt2++){
c1 = clock();
}

c2 = c2 + clock();

}

c1 = clock();

printf ("\tend (CPU); %d\n", (int) c0);
printf ("\tend (CPU2); %d\n", (int) c2);
printf ("\tend (NOW); %d\n", (int) c1);
return 0;

}

Apr 3 '06 #1

<Co********@gmail.com> wrote in message
news:11**********************@t31g2000cwb.googlegr oups.com...
The following experiment is a demonstration of TIME TRAVEL. When
writing this program, and testing it out I found that sometimes the
program would engage itself in time travel but other times it would
not.


Take your meds - better still triple the dose and take them.

Rest of troll junk snipped.

Bill
Apr 3 '06 #2
>Take your meds - better still triple the dose and take them.

So because you think I need medication, that makes me wrong?
Why do you even bother posting if you aren't intelligent enough to
offer an opinion about why I am wrong?

Apr 3 '06 #3
Co********@gmail.com wrote:
The following experiment is a demonstration of TIME TRAVEL. When
writing this program, and testing it out I found that sometimes the
program would engage itself in time travel but other times it would
not.
I answered this last week, didn't you read it?
I'll re-post for your benefit:
wait 10
a = a + time 10
wait 10
b = b + time 20
wait 10
a = a + time 30
wait 10
b = b + time 40
wait 10
a = a + time 50
wait 10
b = b + time 60
Now A = 90 & B = 120
-immediately check time-
C = time 60

Using logic we can deduce that A should be exactly C / 2
smaller than B,
Only if "time" was exactly 0 before the first wait.
but when you run the program the time difference can vary
considerably in either direction. On top of that, the time it takes to
run the program is not constant either.


NAME
clock - Determine processor time

DESCRIPTION
The clock() function returns an approximation of processor time
used by the program.

CONFORMING TO
ANSI C. POSIX requires that CLOCKS_PER_SEC equals 1000000
independent of the actual resolution.

1) Your program takes time to start up
2) Your program takes time to do all of its other processing
besides calling the clock function
3) The operating system may take varying amounts of time
to execute the clock() function
4) The results of clock() are only an approximation
5) If you run this for more than 2148 seconds then you get
an integer overflow.

Who'd a thunk it?
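For what it's worth, the usual way to use clock() is to take two readings
and divide their difference by CLOCKS_PER_SEC; the absolute values mean
nothing on their own. A minimal sketch of that idiom (not the original
program, just the common pattern):

#include <stdio.h>
#include <time.h>

int main(void)
{
    clock_t start = clock();            /* CPU time used so far, in ticks */

    volatile long sink = 0;
    long i;
    for (i = 0; i < 50000000L; i++)     /* some busy work to be timed */
        sink += i;

    clock_t end = clock();

    /* CLOCKS_PER_SEC converts ticks to seconds; with a 32-bit clock_t and
       CLOCKS_PER_SEC == 1000000 the counter wraps after roughly 2147 s. */
    printf("elapsed CPU time: %.3f seconds\n",
           (double)(end - start) / CLOCKS_PER_SEC);
    return 0;
}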

NB. clock() is not an ANSI C function, if you want to debate its
behaviour further, then post in a group such as comp.unix.programmer

Apr 4 '06 #4
Old Wolf wrote:
NB. clock() is not an ANSI C function, if you want to debate its
behaviour further, then post in a group such as comp.unix.programmer


Sorry -- clock() is an ANSI C function, but its behaviour is
implementation-defined (ie. what it returns exactly depends
on what system you are running it on).
So if you want knowledgable answers about why clock() is
returning what it is, should ask on a NG dedicated to your
platform (such as the one I mentioned above).

Apr 4 '06 #5
Co********@gmail.com wrote:
The following experiment is a demonstration of TIME TRAVEL.
Not really. It is a demonstration of limited resolution of the system
timer and effects of interrupts on timing accuracy.

This is off-topic in comp.lang.c and probably the two sci.physics groups.
Then it checks to see if the differance between a and b, is correct
compared to c. In normal physics it should be, but if the program time
travels then the output will be different.


The physics is holding up well, but you might need to increase your
understanding of working with limited measurement resolution and elapsed
time not accounted for by a simple model.

--
Thad
Apr 4 '06 #6
Thad wrote:
CoreyWh...@gmail.com wrote:
The following experiment is a demonstration of TIME TRAVEL.
Not really. It is a demonstration of limited resolution of the system
timer and effects of interrupts on timing accuracy. This is off-topic in comp.lang.c and probably the two sci.physics groups.

Then it checks to see if the differance between a and b, is correct
compared to c. In normal physics it should be, but if the program time
travels then the output will be different.


The physics is holding up well, but you might need to increase your
understanding of working with limited measurement resolution and elapsed
time not accounted for by a simple model.


If you had a more objective way of measuring time, then you could tell
me which of the final times in the program was the better of the two
approximations. Because there is no more objective way to measure time
than this, both approximations are equally accurate. In a subjective
way they are both entirely real, since whatever we use to measure time
will not be perfectly accurate.

Apr 4 '06 #7

<Co********@gmail.com> wrote in message
news:11**********************@u72g2000cwu.googlegr oups.com...
Take your meds - better still triple the dose and take them.
So because you think I need medication that makes me wrong?


When a person claims something totally absurd they will be diagnosed as
delusional. Either that or, as is much more likely in your case, they
are a malicious troll.
Why do you even bother posting if you aren't intelligent enough to
offer an opinion about why I am wrong?


Because you are obviously a troll, and replying to trolls is simply a matter
of either ignoring them or responding with whatever amuses you, just as you
are posting to be amused. And do not reply with the usual crank/troll rot of
'how do you know I am wrong' etc., which is simply a ruse to engage someone
to get your jollies - I have seen it all before. One does not need to
deliver a dissertation to know flat earthers are idiots. But to take your
challenge at face value, someone on another of your 'deadly serious' posts
was already 'silly' enough to actually attempt a serious reply:
http://groups.google.com/group/alt.s...9bcfca61f557c0
I only hope he did it in full knowledge of what you are - an idiot troll.

Bill
Apr 4 '06 #8
Co********@gmail.com wrote:

<snip - full text of the original post; only the source is kept below>

#include <sys/types.h>
#include <time.h>
#include <unistd.h>
#include <stdio.h>
#include <math.h>

int main()
{

clock_t c0, c1, c2; /* clock_t is defined on <time.h> and
<sys/types.h> as int */
long count;

printf ("using UNIX function clock to measure CPU time ... \n");

for (count = 0; count < 1000; count++){

for(int cnt1 =0; cnt1 < 100000; cnt1++){
c1 = clock();
}

c0 = c0 + clock();
c0 has no initial value.

for(int cnt2 =0; cnt2 < 100000; cnt2++){
c1 = clock();
}

c2 = c2 + clock();
c2 has no initial value.
This code is undefined.
This is not a C program.
This is a bunch of gibberish.

}

c1 = clock();

printf ("\tend (CPU); %d\n", (int) c0);
printf ("\tend (CPU2); %d\n", (int) c2);
printf ("\tend (NOW); %d\n", (int) c1);
return 0;

}


--
pete
Apr 4 '06 #9
So give it an initial value; it will behave the same.
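For anyone who wants to try it, here is the same loop with the accumulators
zeroed - a minimal sketch of the fix being pointed out above, with the only
real change being the initialisation (plus comments):

#include <stdio.h>
#include <time.h>

int main(void)
{
    clock_t c0 = 0, c1 = 0, c2 = 0;   /* accumulators now start at zero */
    long count;

    printf ("using UNIX function clock to measure CPU time ... \n");

    for (count = 0; count < 1000; count++){

        for(int cnt1 = 0; cnt1 < 100000; cnt1++){
            c1 = clock();             /* busy loop, repeatedly reading the clock */
        }

        c0 = c0 + clock();            /* accumulate reading A */

        for(int cnt2 = 0; cnt2 < 100000; cnt2++){
            c1 = clock();
        }

        c2 = c2 + clock();            /* accumulate reading B */

    }

    c1 = clock();                     /* final reading */

    /* Note: the sums can still overflow clock_t, as pointed out above. */
    printf ("\tend (CPU); %d\n", (int) c0);
    printf ("\tend (CPU2); %d\n", (int) c2);
    printf ("\tend (NOW); %d\n", (int) c1);
    return 0;
}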

Apr 4 '06 #10
Comparing me to a flat earther is like comparing you to Hitler. By
insulting me instead of looking at the issue, you are showing your
disrespect for logic.

Apr 4 '06 #11

<Co********@gmail.com> wrote in message
news:11*********************@i40g2000cwc.googlegro ups.com...
comparing me to a flat earther is like comparing you to hitler. By
insulting me instead of looking at the issue you are showing your
disrespect for logic.


Flat earthers can fully logically substantiate their position as well, and
you cannot prove them wrong. But anyone with an actual brain recognizes it
as unreasonable rubbish. The same goes for a claim that 'The following
experiment is a demonstration of TIME TRAVEL' and the even sillier word
salads of your other posts.

Stating clearly what you are serves two purposes - first, it turns the table
on you by providing me with some amusement, the same kind of amusement you
want to get by engaging others in silly semantics. And secondly, it
demonstrates to less experienced posters how to deal with your ilk. The
next step, after the few posts I have made outlining what is going on, is
simply to ignore you until engaging you again takes my fancy. So for now

Bye
Bill
Apr 4 '06 #12
Co********@gmail.com wrote:
<snip - full quote of the original post and its source>

The problem is you're running a program that expects perfect
measurement of time on a general purpose OS where time measurement is
only approximate. I ran your code on an embedded platform running a
real-time non-preemptive OS with interrupts turned off but replaced
your clock() with reading from a free running hardware counter (1kHz -
much finer resolution than a typical Unix tick). The result I got is
exactly what you expected:

First run:

end (CPU); 5002992
end (CPU2); 5008000
end (NOW); 10016

Second run:

end (CPU); 5002992
end (CPU2); 5008000
end (NOW); 10016

As you can see, both runs produced identical results. This is because
the system I'm running on is perfectly deterministic - no matter how
many times I run it I will get the same result. Also, you'll notice
that NOW/2 = 5008 which is the exact difference between CPU2 and CPU
times. This is what you get when nothing interferes with your
"experiment" - no interrupts, no task switching etc. which causes time
readings to be approximated. It also helps a little that my CPU is a
simple microcontroller with no pipelining or branch prediction or
out-of-order execution or instruction caching, which may cause code to
take different amounts of time to execute depending on the CPU state.

Apr 4 '06 #13
Co********@gmail.com wrote:
The following experiment is a demonstration of TIME TRAVEL. When
writing this program, and testing it out I found that sometimes the
program would engage itself in time travel but other times it would
not.


Your dog is slobbering on the thiotimoline.

Also, you know Sweet Fanny Adams about either programming, physics, or
sanity.

Richard
Apr 4 '06 #14
Hi CoreyWhite, Win_XP is running lots of code you don't know about,
see: SysInternals.COM/Utilities/ProcessExplorer.html

....and you yourself introduce lags.

So the ticks your code reports are affected by all sorts of mysterious activity.

As I told you before...

How, on God's green earth, could your CPU keep more accurate time
than the National_Institute_of_Standards_and_Technology in Colorado?!

Being Bose-Einstein condensates, the masers they use are _Very_ cold,
and a powerful vacuum and cooling system must be employed.

Remember the SI standard of speed of light is defined for an _Ideal_ vacuum,
and no place in nature actually has, or could ever have, such a vacuum.
It's merely approximated... fudge factors have to be included.
Apr 4 '06 #15

<Co********@gmail.com> wrote in message news:11**********************@t31g2000cwb.googlegr oups.com...
The following experiment is a demonstration of TIME TRAVEL.


Time travel is easy; just wait.

Martin Hogbin
Apr 4 '06 #16
<yawn>
Androcles

<Co********@gmail.com> wrote in message
news:11**********************@t31g2000cwb.googlegr oups.com...
| <snip - full quote of the original post>
Apr 4 '06 #17

<Co********@gmail.com> wrote in message
news:11**********************@u72g2000cwu.googlegr oups.com...
Take your meds - better still triple the dose and take them.


So because you think I need medication that makes me wrong?
Why do you even bother posting if you aren't intelligent enough to
offer an opinion about why I am wrong?


He's pandering. He's got some heroes here in this NG, and because
that's their style and he knows your post is the kind of thing THEY
would trash, he tries to beat them to the punch. In doing so, he
hopes to show that he's almost as smart as they are, but in any case,
he hopes it shows that he's one among them.

It's as transparent as it gets, and oh so pathetic.
"Cranks are usually big on using adjectives like 'real' as if it had
some actual meaning."
---Hobba

"SR has nothing to do with 'perception', is has to do with time and
distances as indicated by real apparatus eg rulers and clocks."
---- Hobba

Apr 4 '06 #18
<Co********@gmail.com> wrote in message news:11*********************@i40g2000cwc.googlegro ups.com...
comparing me to a flat earther is like comparing you to hitler. By
insulting me instead of looking at the issue you are showing your
disrespect for logic.


Godwin's Law is invoked. Thread is finished.
Apr 4 '06 #19
sal
On Mon, 03 Apr 2006 15:04:13 -0700, CoreyWhite wrote:

int main()
{

clock_t c0, c1, c2; /* clock_t is defined on <time.h> and
<sys/types.h> as int */
c0 and c2 are uninitialized variables. They'll get whatever garbage
happened to be on the stack when the program started. When you take their
difference you're including the difference between two essentially random
values.

Try initializing them to zero and see what you get.

c1's uninitialized too, of course, but you never use its initial value so
that's OK.

Aside from that, the time every operation will take will vary all over the
place, as the system spends a varying percentage of its time doing what
you want it to do versus doing other stuff, like taking page faults,
checking for cron jobs, processing requests from other users (if it's
timesharing), dealing with random garbage coming in from the ethernet
card, and so forth. So you'll never see a consistent result from a
program like this.

(If it's a Windows box you also need to consider the time it takes it to
phone Bill Gates and tell him what you're doing today.)
<snip - remainder of the quoted source>


--
Nospam becomes physicsinsights to fix the email
I can be also contacted through http://www.physicsinsights.org

Apr 4 '06 #20
slebetman wrote:
The problem is you're running a program that expects perfect
measurement of time on a general purpose OS where time measurement is
only approximate. <snip>


Hey okay, that really helps and thank you. Where can I get a system
like this to run some tests on?

Apr 4 '06 #21
On Tue, 4 Apr 2006 08:46:20 -0400, "AllYou!" <Id****@conversent.net>
wrote:

<Co********@gmail.com> wrote in message
news:11**********************@u72g2000cwu.googleg roups.com...
>Take your meds - better still triple the dose and take them.
So because you think I need medication that makes me wrong?
Why do you even bother posting if you aren't intelligent enough to
offer an opinion about why I am wrong?


He's pandering. He's got some heroes here in this NG, and because
that's their style and he knows your post is the kind of thing THEY
would trash, he tries to beat them to the punch. In doing so, he
hopes to show that he's almost as smart as they are, but in any case,
he hopes it shows that he's one among them.

It's as transparent as it gets, and oh so pathetic.

Sock puppets are pretty transparent, also.
"Cranks are usually big on using adjectives like 'real' as if it had
some actual meaning."
---Hobba

"SR has nothing to do with 'perception', is has to do with time and
distances as indicated by real apparatus eg rulers and clocks."
---- Hobba


--
Al Balmer
Sun City, AZ
Apr 4 '06 #22

"Al Balmer" <al******@att.net> wrote in message
news:5q********************************@4ax.com...
On Tue, 4 Apr 2006 08:46:20 -0400, "AllYou!" <Id****@conversent.net>
wrote:

<snip>

Sock puppets are pretty transparent, also.


I'm sorry, but were you trying to make a point?

"Cranks are usually big on using adjectives like 'real' as if it had
some actual meaning."
---Hobba

"SR has nothing to do with 'perception', is has to do with time and
distances as indicated by real apparatus eg rulers and clocks."
---- Hobba


--
Al Balmer
Sun City, AZ


Apr 4 '06 #23
Co********@gmail.com wrote:
slebetman wrote:
<snip>


Hey okay, that really helps and thank you. Where can I get a system
like this to run some tests on?


For the CPU I used a PIC microcontroller. Almost any microcontroller
will do: AVR, 8051 etc. Although a microprocessor generally executes
code faster, processors tend to use statistical techniques like branch
prediction and caching to do this, hence they are not as deterministic
as microcontrollers.

For the clock I simply fed a 1kHz clock through two cascaded 74HC4040
12 bit counters giving me a 24 bit value which turns out to be enough
for the experiment. The counters are interfaced with the CPU via three
octal tristate buffers which allows me to read the 24 bit value with a
single 8 bit port.
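Purely as an illustration of combining those three bytes, and not the
actual firmware: read_counter_byte() below is a made-up stand-in, backed
here by a plain variable so the sketch compiles on its own; on the real
board it would read the port through the tristate buffers.

#include <stdint.h>
#include <stdio.h>

static uint32_t simulated_counter;        /* stand-in for the cascaded counters */

static uint8_t read_counter_byte(int which)   /* 0 = low, 1 = mid, 2 = high */
{
    return (uint8_t)(simulated_counter >> (8 * which));
}

static uint32_t read_counter24(void)
{
    /* Read the three bytes and combine them into one 24-bit tick count.
       A real implementation would also have to guard against the counter
       rolling over between the byte reads, e.g. read twice and compare. */
    uint8_t lo  = read_counter_byte(0);
    uint8_t mid = read_counter_byte(1);
    uint8_t hi  = read_counter_byte(2);
    return ((uint32_t)hi << 16) | ((uint32_t)mid << 8) | lo;
}

int main(void)
{
    simulated_counter = 100000;           /* pretend 100000 ticks have elapsed */
    printf("counter reads %lu ticks\n", (unsigned long) read_counter24());
    return 0;
}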

As to where can you get/buy the components I personally get lots of my
stuff at the shops along Pasar Road in Kuala Lumpur. For uncommon parts
like the 74HC4040 I get them from http://www.farnell.com. As Malaysia
is probably quite far from where your are at and I don't really know
about your local distributors I'd suggest going straight to Farnell.

And since this is getting a bit off topic I'd suggest dropping this
thread from comp.lang.c.

Apr 4 '06 #24
I've just installed an RTOS that runs on PCs called OnTime. I'm going
to do some experimenting, but don't you think that given enough time
the program will still continue to perform as it does on a general OS?
Try increasing the size of the loops, and leave it running overnight.
Do you think you could do that for me and tell me how it performs? I
need to know if it works or not.

Apr 4 '06 #25
On 3 Apr 2006 19:05:23 -0700, in comp.lang.c , Co********@gmail.com
wrote:

If you had a more objective way of measuring time then you could tell
me which of the final times in the program was the better of the two
aproximations. Because there is no more objective way to measure time
than this, both aproximations are equally accurate. In a subjective
way they are both entirely real since whatever we use to measure time
will not be perfectly accurate.


I think radioactive decay rates are considered a pretty good objective
way of measuring time. Given that this is how it's defined...
Mark McIntyre
--
"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it."
--Brian Kernighan
Apr 4 '06 #26

Mark McIntyre wrote:
On 3 Apr 2006 19:05:23 -0700, in comp.lang.c , Co********@gmail.com
wrote:

If you had a more objective way of measuring time then you could tell
me which of the final times in the program was the better of the two
aproximations. Because there is no more objective way to measure time
than this, both aproximations are equally accurate. In a subjective
way they are both entirely real since whatever we use to measure time
will not be perfectly accurate.


I think radioactive decay rates are considered a pretty good objective
way of measuring time. Given that this is how its defined...
Mark McIntyre
--
"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it."
--Brian Kernighan

Sorry, but the time standard is based on FREQUENCY of radiation, not on
radioactive decay. Decay is better for measuring longer periods of time
(e.g. Carbon dating).

hth,
ed

Apr 4 '06 #27
but this still happens with radioactive decay!!!

Apr 4 '06 #28

<Co********@gmail.com> wrote in message
news:11**********************@v46g2000cwv.googlegr oups.com...
but this still happens with radioactive decay!!!


What does?
Apr 4 '06 #29
Corey,
Your post is kinda fun and interesting, irrespective of its accuracy.
If you get ahold of an English version of "On the Electrodynamics of
Moving Bodies" (1905) by Albert Einstein, and peruse it for its
equations and explanations, you
will be able to generalize your idea into more interesting vistas.
Even if you had ideal clocks running under ideal conditions there still
exists a finite amount of time for "signal-sync" or communication.
Also, since there (scientifically) exists NO simultaneous NOW and the
infinitely small is as vast as the infinitely large, then the idea of
time travel becomes a bit illusory. This is because there could be NO
ABSOLUTE moment for multiple observers (or yourself at multiple
different "times") and "revisiting" the so-called past would really be
a new time for the observer.

Apr 4 '06 #30
In article <11**********************@e56g2000cwe.googlegroups .com>,
<Co********@gmail.com> wrote:
If you had a more objective way of measuring time then you could tell
me which of the final times in the program was the better of the two
aproximations. Because there is no more objective way to measure time
than this, both aproximations are equally accurate. In a subjective
way they are both entirely real since whatever we use to measure time
will not be perfectly accurate.


There was a recent article in either Scientific American or
American Scientist (I forget which), which indicated that clocks
are now approaching sufficient precision that it would be impossible
to synchronize any two of the ultra-precision clocks. Apparently
on those timescales, any movement of the clocks has noticeable relativistic
effects.

On the other hand, the resolution of the clock() call is not high
enough for such matters to be worth considering.
--
"law -- it's a commodity"
-- Andrew Ryan (The Globe and Mail, 2005/11/26)
Apr 4 '06 #31
ro******@ibd.nrc-cnrc.gc.ca (Walter Roberson) writes:
In article <11**********************@e56g2000cwe.googlegroups .com>,
<Co********@gmail.com> wrote:
<snip>


There was a recent article in either Scientific American or
American Scientist (I forget which), which indicated that clocks
are now approaching sufficient precision that it would be impossible
to synchronize any two of the ultra-precision clocks. Apparently
on those timescales, any movement of the clocks has noticable relatively
effects.

On the other hand, the resolution of the clock() call is not high
enough for such matters to be worth considering.


In any case, clock() measures CPU time, not real time (to make this at
least *vaguely* topical in at least one of these newsgroups).

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Apr 4 '06 #32

"Walter Roberson" <ro******@ibd.nrc-cnrc.gc.ca> wrote in message
news:e0**********@canopus.cc.umanitoba.ca...
In article <11**********************@e56g2000cwe.googlegroups .com>,
<Co********@gmail.com> wrote:
<snip>
There was a recent article in either Scientific American or
American Scientist (I forget which), which indicated that clocks
are now approaching sufficient precision that it would be impossible
to synchronize any two of the ultra-precision clocks. Apparently
on those timescales, any movement of the clocks has noticable relatively
effects.


Interesting. I do know that it is predicted modern ultra precision clocks
will demonstrate relativistic effects just by driving them around in a car.

Thanks
Bill

On the other hand, the resolution of the clock() call is not high
enough for such matters to be worth considering.
--
"law -- it's a commodity"
-- Andrew Ryan (The Globe and Mail, 2005/11/26)

Apr 5 '06 #33

"Keith Thompson" <ks***@mib.org> wrote in message
news:ln************@nuthaus.mib.org...
ro******@ibd.nrc-cnrc.gc.ca (Walter Roberson) writes:
In article <11**********************@e56g2000cwe.googlegroups .com>,
<Co********@gmail.com> wrote:
<snip>
There was a recent article in either Scientific American or
American Scientist (I forget which), which indicated that clocks
are now approaching sufficient precision that it would be impossible
to synchronize any two of the ultra-precision clocks. Apparently
on those timescales, any movement of the clocks has noticable relatively
effects.

On the other hand, the resolution of the clock() call is not high
enough for such matters to be worth considering.


In any case, clock() measures CPU time, not real time (to make this at
least *vaguely* topical in at least one of these newsgroups).


'Real time'????? In physics, especially relativistic physics, time is what
a clock reads. Clock accuracy is a statistical thing based on comparisons
with other clocks, astronomical data etc. At present atomic clocks are the
most accurate.

Thanks
Bill

--
Keith Thompson (The_Other_Keith) ks***@mib.org
<http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*>
<http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.

Apr 5 '06 #34
Co********@gmail.com wrote:
I've just installed a rtos that runs on PCs called OnTime. I'm going
to do some experimenting, but don't you think that given enough time
the program will still continue to preform as it does on a general os?
Try increasing the size of the loops, and leave it running over night.
Do you think you could do that for me and tell me how it preforms? I
need to know if it works or not.


A 24 bit counter running at 1kHz will overflow after about 4 1/2 hours.
In my case the maximum error you can get is +/- 1 tick. I guess it was
my luck that both runs produced the same result. The +/- 1 tick error
is due to the possibility of sampling the counter at an in-between
transition. Say for example you sample the counter just when it is
incrementing from 1000 to 1001. In which case you have a 50% chance of
getting either 1000 or 1001. But just like your Unix experiment,
this says nothing about time travelling but more about sampling theory.

I can actually construct a set-up that can guarantee the same result
for every run simply by using the same clock source to drive both the
counter and the CPU. In which case the CPU is running in-sync with the
clock regardless of the accuracy of the clock source. Such a setup even
works if you keep varying the clock frequency because the CPU executes
instructions synchronously with the clock.

Think of it this way. If the CPU needs to execute exactly 100
instructions for each round of loop and each instruction executes in
exactly 2 clock cycles then each round of the loop will execute in
exactly 200 clock cycles. Now, when talking about 'clock' here we are
talking about the square wave used to drive the CPU. If we now use this
same square wave as the basis for the CPU to measure time then of
course the CPU will never disagree with its time measurement assuming
nothing else introduces delays or jitter to our instruction stream such
as interrupts.

If, like my experiment above, we use two different clock sources: one to
drive the CPU and another to drive the counter then what you are
measuring is not "time travel" but simply the relative accuracy between
the square waveforms which can indeed be seen visually if the two
square waves are fed into an oscilloscope. In this case an error can
occur if you happen to sample the counter at a harmonic interval
between the two square waves:

(view using fixed width font or the alignment will be all wrong)

clockA 000111000111000111

clockB 00001111000011110000
^
|
if you happen to sample here
then you may get clockB's reading
as either 0 or 1
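The same boundary effect is easy to see in software with nothing but the
standard clock() function - this is only a toy sketch, not the hardware
setup above: read the clock twice back to back and count how often the
two reads straddle a tick.

#include <stdio.h>
#include <time.h>

int main(void)
{
    long boundary_hits = 0;
    long i;

    for (i = 0; i < 1000000L; i++) {
        clock_t a = clock();
        clock_t b = clock();
        if (a != b)
            boundary_hits++;     /* the counter ticked between the two reads */
    }

    printf("reads that straddled a tick: %ld out of 1000000\n", boundary_hits);
    return 0;
}

Any nonzero count there is just the reader landing on a tick boundary, not
time travel.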

When interrupts come into play then the reading may be delayed by as
much time as it takes for the interrupt service routine to complete. So
if you're going to use OnTime's RTOS make sure you're not using
preemptive multitasking or time-sliced multitasking. And make sure
you're not using the real-time kernel. Use simple cooperative
multitasking and turn off all interrupts. Actually for the best result
use DOS and a DOS compiler like DJGPP.

Now finally, from a physical standpoint, what exactly *is* time
travelling? Your CPU? There is only one CPU, what is it travelling to
or away from, itself? This experiment does not show the CPU time
travelling but rather the software running on the CPU to be "time
travelling". In which case you need to understand that software is not
physical at all so all bets are off. Software is just like words coming
out of my mouth. If I say:

The quick brown fox jumps over the lazy dog.

and then later say:

The quick brown dog fox jumps over the lazy.

then did the word "dog" time travel in the second instance since it now
appears before the word "fox". Of course not. It is just how I decided
to utter the string of words. Just like how a CPU decides which
instruction to execute on a modern PC. On a modern PC groups of
instructions are scheduled pre-emptively with higher priority groups
being able to interrupt those with lower priority and instructions
themselves are often executed out of order.

You can conduct the same experiment like your code using a human
instead of a CPU. Ask your friend to say "The quick brown fox jumps
over the lazy dog" and measure the time between the word "fox" and
"dog". Each run will give out slightly different results not because
you measured time inaccurately, and not because the word "dog" time
travelled into the past or future, but because your friend takes
different amounts of time to utter the sentence with different length
of pauses between words and different things distracting him. This is
exactly what happens in a multitasking OS like Unix or Windows.

Apr 5 '06 #35
"Bill Hobba" <ru*****@junk.com> writes:
"Keith Thompson" <ks***@mib.org> wrote in message
news:ln************@nuthaus.mib.org...

[...]
In any case, clock() measures CPU time, not real time (to make this at
least *vaguely* topical in at least one of these newsgroups).


'Real time'????? In physics, especially relativistic physics, time is what
a clock reads. Clock accuracy is a statistical thing based on comparisons
with other clocks, astronomical data etc. At present atomic clocks are the
most accurate.


Ok, the clock() function is intended to measure the CPU time consumed
by a program, rather than (any approximation of) the time that might
be measured by a clock that measures so-called "real time" (such as an
atomic clock, sun dial, or whatever).
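A small sketch of that distinction using only standard C - clock() for CPU
time, time() and difftime() for wall-clock time. With a busy loop the two
come out similar; a program that sleeps or gets preempted would show them
diverge:

#include <stdio.h>
#include <time.h>

int main(void)
{
    clock_t c_start = clock();       /* CPU time consumed by this process */
    time_t  t_start = time(NULL);    /* calendar ("wall-clock") time */

    volatile unsigned long sink = 0;
    unsigned long i;
    for (i = 0; i < 200000000UL; i++)    /* busy work so both clocks advance */
        sink += i;

    clock_t c_end = clock();
    time_t  t_end = time(NULL);

    printf("CPU time : %.2f s\n", (double)(c_end - c_start) / CLOCKS_PER_SEC);
    printf("Wall time: %.0f s\n", difftime(t_end, t_start));
    return 0;
}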

And this is why cross-posts between comp.lang.c and
sci.physics.relativity are a bad idea. (I have no idea why
comp.lang.c has been getting cross-posts from alt.magick lately.)

--
Keith Thompson (The_Other_Keith) ks***@mib.org <http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*> <http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.
Apr 5 '06 #36

"Walter Roberson" <ro******@ibd.nrc-cnrc.gc.ca> wrote in message
news:e0**********@canopus.cc.umanitoba.ca...
| In article <11**********************@e56g2000cwe.googlegroups .com>,
| <Co********@gmail.com> wrote:
| >If you had a more objective way of measuring time then you could tell
| >me which of the final times in the program was the better of the two
| >aproximations. Because there is no more objective way to measure time
| >than this, both aproximations are equally accurate. In a subjective
| >way they are both entirely real since whatever we use to measure time
| >will not be perfectly accurate.
|
| There was a recent article in either Scientific American or
| American Scientist (I forget which), which indicated that clocks
| are now approaching sufficient precision that it would be impossible
| to synchronize any two of the ultra-precision clocks. Apparently
| on those timescales, any movement of the clocks has noticable relatively
| effects.
|
| On the other hand, the resolution of the clock() call is not high
| enough for such matters to be worth considering.
| --
| "law -- it's a commodity"
| -- Andrew Ryan (The Globe and Mail, 2005/11/26)

There was a recent article in either the New York Times, the Chicago
Tribune,
the London Times or the National Enquirer ( I forget which ) which indicated
that the Pope was an ardent relativist who believed prayers could reach the
throne of God (9.0 light years away) no faster than the speed of light.
Apparently this speed limit was imposed by St. Einstein who will be
canonised
as soon as he is accepted as the one and only true God by fuckin' idiots
everywhere.

Androcles.


Apr 5 '06 #37

"Keith Thompson" <ks***@mib.org> wrote in message
news:ln************@nuthaus.mib.org...
"Bill Hobba" <ru*****@junk.com> writes:
"Keith Thompson" <ks***@mib.org> wrote in message
news:ln************@nuthaus.mib.org... [...]
In any case, clock() measures CPU time, not real time (to make this at
least *vaguely* topical in at least one of these newsgroups).


'Real time'????? In physics, especially relativistic physics, time is
what
a clock reads. Clock accuracy is a statistical thing based on
comparisons
with other clocks, astronomical data etc. At present atomic clocks are
the
most accurate.


Ok, the clock() function is intended to measure the CPU time consumed
by a program, rather than (any approximation of) the time that might
be measured by a clock that measures so-called "real time" (such as an
atomic clock, sun dial, or whatever).


Its accuracy is not that good - but accuracy is not what defines a clock -
at least in physics.

And this is why cross-posts between comp.lang.c and
sci.physics.relativity are a bad idea. (I have no idea why
comp.lang.c has been getting cross-posts from alt.magick lately.)
If you look at Corey White's posts (he also posts under a number of other
names such as Virtual Adepts), you will see he trolls across a number of
groups including those you mentioned. And yes, he is a troll without question:
http://groups.google.com/group/alt.m...b39607eb0c30b5

Posting to a number of unrelated newsgroups is a typical troll tactic.
Except for this post I will try to remove computing forums from my responses
in future.

Best of luck with your newsgroup. Hope you don't have the trouble with
trolls/cranks that we at sci.physics.relativity do, which is why most of its
legitimate posters long ago developed their own ways of handling them -
with varying degrees of success.

One regular poster here maintains a site of the worst - always good for a
laugh;
http://users.pandora.be/vdmoortel/di...alFumbles.html

Thanks
Bill

--
Keith Thompson (The_Other_Keith) ks***@mib.org
<http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*>
<http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.

Apr 5 '06 #38
Keith Thompson wrote:

"Bill Hobba" <ru*****@junk.com> writes:
"Keith Thompson" <ks***@mib.org> wrote in message
news:ln************@nuthaus.mib.org...

[...]
In any case, clock() measures CPU time,
not real time (to make this at
least *vaguely* topical in at least one of these newsgroups).


'Real time'????? In physics,
especially relativistic physics, time is what
a clock reads.
Clock accuracy is a statistical thing based on comparisons
with other clocks, astronomical data etc.
At present atomic clocks are the
most accurate.


Ok, the clock() function is intended to measure the CPU time consumed
by a program, rather than (any approximation of) the time that might
be measured by a clock that measures so-called "real time" (such as an
atomic clock, sun dial, or whatever).

And this is why cross-posts between comp.lang.c and
sci.physics.relativity are a bad idea. (I have no idea why
comp.lang.c has been getting cross-posts from alt.magick lately.)


Yes. "Real time" actually means something else in programming.

http://www.cs.york.ac.uk/rts/RTSBookThirdEdition.html

--
pete
Apr 5 '06 #39

"pete" <pf*****@mindspring.com> wrote in message
news:44***********@mindspring.com...
Keith Thompson wrote:

"Bill Hobba" <ru*****@junk.com> writes:
> "Keith Thompson" <ks***@mib.org> wrote in message
> news:ln************@nuthaus.mib.org... [...]
>> In any case, clock() measures CPU time,
>> not real time (to make this at
>> least *vaguely* topical in at least one of these newsgroups).
>
> 'Real time'????? In physics,
> especially relativistic physics, time is what
> a clock reads.
> Clock accuracy is a statistical thing based on comparisons
> with other clocks, astronomical data etc.
> At present atomic clocks are the
> most accurate.


Ok, the clock() function is intended to measure the CPU time consumed
by a program, rather than (any approximation of) the time that might
be measured by a clock that measures so-called "real time" (such as an
atomic clock, sun dial, or whatever).

And this is why cross-posts between comp.lang.c and
sci.physics.relativity are a bad idea. (I have no idea why
comp.lang.c has been getting cross-posts from alt.magick lately.)


Yes. "Real time" actually means something else in programming.


Ahhhhh. Got it.

Thanks
Bill

http://www.cs.york.ac.uk/rts/RTSBookThirdEdition.html

--
pete

Apr 5 '06 #40
>A 24 bit counter running at 1kHz will overflow after about 4 1/2 hours.
In my case the maximum error you can get is +/- 1 tick. I guess it was
my luck that both runs produced the same result. The +/- 1 tick error
is due to the possibility of sampling the counter at an in-between
transition. Say for example you sample the counter just when it is
incrementing from 1000 to 1001. In which case you have a 50% chance of
getting either 1000 or 1001. But just like your Unix experiment,
this says nothing about time travelling but more about sampling theory.

To me, what that says is that the moment we sample is probabilistic, and
there is a 50% chance of it being either 1000 or 1001. I think we
would find this probability exists even in atomic clocks. What I meant
by time travel wasn't that the computer was literally time traveling,
but that time was behaving as if it were uncertain about what time it
was.

I can actually construct a set-up that can guarantee the same result
for every run simply by using the same clock source to drive both the
counter and the CPU. In which case the CPU is running in-sync with the
clock regardless of the acuracy of the clock source. Such a setup even
works if you keep varying the clock frequency because the CPU executes
instructions synchronously with the clock.
Think of it this way. If the CPU needs to execute exactly 100
instructions for each round of loop and each instruction executes in
exactly 2 clock cycles then each round of the loop will execute in
exactly 200 clock cycles. Now, when talking about 'clock' here we are
talking about the square wave used to drive the CPU. If we now use this
same square wave as the basis for the CPU to measure time then of
course the CPU will never disagree with its time measurement assuming
nothing else introduces delays or jitter to our instruction stream such
as interrupts.

I believe you that it would always be consistent, but I don't think it
would really measure time. Let's say that it was taking longer to
execute some of the instructions but they still took only one clock
cycle. We would just be counting clock cycles then and not time.

If, like my experiment above, we use two different clock source: one to
drive the CPU and another to drive the counter then what you are
measuring is not "time travel" but simply the relative accuracy between
the square waveforms which can indeed be seen visually if the two
square waves are fed into an oscilloscope. In this case an error can
occur if you happen to sample the counter at a harmonic interval
between the two square waves:


This actually sounds like the same idea that I had when writing the
program for my expirement. I like it :).
I guess it is a matter of philosophy to ask if the accuracy of the
clocks says anything about the nature of time. Even the best atomic
clock NIST has is off by .00000000002 seconds per second. Although
I'm sure even the accuracy of NIST's clock varies considerably. If we
ran this experiment with one of their clocks, I believe we could get
considerable interest in the experiment because it would show much
more obvious variations in the atomic clock, and would suggest time was
behaving unusually.

Apr 5 '06 #41
Co********@gmail.com wrote:
<snip>


To me what that says is that the moment we sample is a probability and
there is a 50% chance of it being either 1000 or 1001. I think we
would find this probability exists even in atomic clocks. What I meant
by time travel wasn't that the computer was literally time traveling,
but that time was behaving as if it was uncertain about what time it
was.


If you are trying to explore the concept of time itself then be aware
that:

1. We currently use the periodic vibrations of excited crystals to measure
time.
2. We measure the period/frequency of the vibrations of excited
crystals in units of time.

Hence what we mean by time is defined recursively. Physics per se
currently does not have any concept of time which relates to the "time"
that we experience. The closest thing is the concept of entropy which
is often used to indicate an "arrow of time". Time is just a concept we
use to measure events. What we are really measuring is the events
themselves: the hand of the clock moving, crystals vibrating, radio
waves propagating. Any real attempt to measure time ends up measuring
events defined in relation to time itself - again we end up with a
cyclic/recursive definition.

To be pedantic, in these experiments we are not really measuring
against time. Instead we are measuring against the number of times a
crystal oscillates under excitation. So the count value of 1000 does
not really represent "time" but rather that the crystal has oscillated
1000 times since the beginning of the experiment. We only assume it
measures "time" since the oscillations are specified in terms of time:
1kHz = 1000 times each second. So what is a "second": the time it takes
the crystal to oscillate 1000 times -- again we end up with a cyclical
definition.

I can actually construct a set-up that can guarantee the same result
for every run simply by using the same clock source to drive both the
counter and the CPU. In which case the CPU is running in-sync with the
clock regardless of the acuracy of the clock source. Such a setup even
works if you keep varying the clock frequency because the CPU executes
instructions synchronously with the clock.
Think of it this way. If the CPU needs to execute exactly 100
instructions for each round of the loop, and each instruction executes in
exactly 2 clock cycles, then each round of the loop will execute in
exactly 200 clock cycles. Now, when talking about the 'clock' here we are
talking about the square wave used to drive the CPU. If we now use this
same square wave as the basis for the CPU to measure time, then of
course the CPU will never disagree with its own time measurement, assuming
nothing else (such as interrupts) introduces delays or jitter into our
instruction stream.
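
As a back-of-the-envelope check of that arithmetic -- the 4 MHz clock and
the loop count below are assumptions chosen for illustration, not figures
from any actual setup:

#include <stdio.h>

int main(void)
{
    double f_clock = 4.0e6;          /* assumed 4 MHz CPU clock */
    long cycles_per_loop = 200;      /* 100 instructions x 2 cycles each */
    long loops = 1000000L;

    /* with nothing perturbing the instruction stream, elapsed time is
       fixed entirely by the cycle count and the clock frequency */
    double seconds = (double) loops * cycles_per_loop / f_clock;

    printf("%ld loops = %.0f cycles = %.3f s\n",
           loops, (double) loops * cycles_per_loop, seconds);
    return 0;
}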


I believe you that it would always be consistent, but I don't think it
would really measure time. Let's say that some of the instructions were
taking longer to execute but still registered as only one clock
cycle. We would then just be counting clock cycles and not time.


But on a simple CPU like the PIC, indeed on most simple RISC CPUs, most
instructions take exactly the same number of clock cycles to execute.
Hence counting instruction cycles IS a measure of time in terms of
instruction cycles. Besides, you misunderstand me: I am not measuring
against INSTRUCTION CYCLES, I am measuring against CLOCK CYCLES. Indeed
some instructions take multiple clock cycles to complete, but it
doesn't matter. My clock is fed directly to hardware counters from the
clock source, not from the instruction clock. So my setup is a perfect
measure of time against the oscillations of a quartz crystal.

In short, counting clock cycles is how we measure time; even in atomic
clocks we use the vibrations of the atom as the source of the clock
cycle. It's just that in my case I'm using the vibrations of a quartz
crystal. If I had an atomic clock as the clock source of my CPU,
then my CPU would never disagree with the atomic clock, because the CPU
would execute in lock-step with the vibrations of the atomic clock and
the result would always be consistent.

Apr 5 '06 #42
"Hexenmeister" <va******@broom.Mickey_a> wrote:
"Walter Roberson" <ro******@ibd.nrc-cnrc.gc.ca> wrote in message
news:e0**********@canopus.cc.umanitoba.ca...
| There was a recent article in either Scientific American or
| American Scientist (I forget which), which indicated that clocks
| are now approaching sufficient precision that it would be impossible
| to synchronize any two of the ultra-precision clocks. Apparently
| on those timescales, any movement of the clocks has noticeable
| relativistic effects.

There was a recent article in either the New York Times, the Chicago
Tribune, the London Times or the National Enquirer (I forget which) which
indicated that the Pope was an ardent relativist who believed prayers could
reach the throne of God (i9.0 light years away) no faster than the speed
of light.


Recent, as in, last Saturday's issue, I presume?

YMHBTIRL.

Richard
Apr 5 '06 #43

"Greg Neill" <gn*******@OVE.THIS.netcom.ca> wrote in message
news:80*******************@news20.bellglobal.com...
<Co********@gmail.com> wrote in message
news:11*********************@i40g2000cwc.googlegroups.com...
comparing me to a flat earther is like comparing you to Hitler. By
insulting me instead of looking at the issue you are showing your
disrespect for logic.


Godwin's Law is invoked. Thread is finished.


what is Godwin's Law?

A
Apr 5 '06 #44

<Lo*****@gmail.com> wrote in message
news:11**********************@z34g2000cwc.googlegroups.com...
Corey,
Your post is kinda fun and interesting, irrespective of its accuracy.
If you get ahold of an English version of "On the Electrodynamics of
Moving Bodies" (1905) by Albert Einstein, and peruse it for its
equations and explanations, you will be able to generalize your idea
into more interesting vistas.
Even if you had ideal clocks running under ideal conditions, there still
exists a finite amount of time for "signal-sync" or communication.
Also, since there (scientifically) exists NO simultaneous NOW, and the
infinitely small is as vast as the infinitely large, the idea of
time travel becomes a bit illusory. This is because there could be NO
ABSOLUTE moment for multiple observers (or yourself at multiple
different "times"), and "revisiting" the so-called past would really be
a new time for the observer.


Interesting, but I wish you hadn't mentioned Einstein. Now that idiot
Hexenmeister is likely to put in another ridiculous appearance.

A
Apr 5 '06 #45

Archangel wrote:
"Greg Neill" <gn*******@OVE.THIS.netcom.ca> wrote in message
news:80*******************@news20.bellglobal.com...
<Co********@gmail.com> wrote in message
news:11*********************@i40g2000cwc.googlegroups.com...
comparing me to a flat earther is like comparing you to Hitler. By
insulting me instead of looking at the issue you are showing your
disrespect for logic.


Godwin's Law is invoked. Thread is finished.


what is Godwin's Law?

A


Greg might add more detail, but it basically says that as the argument in a
thread progresses, once name-calling begins, the one who invokes Nazi
references loses, and the argument is over.

Apr 5 '06 #46

"Bill Hobba" <ru*****@junk.com> wrote in message
news:b3*******************@news-server.bigpond.net.au...

"Keith Thompson" <ks***@mib.org> wrote in message
news:ln************@nuthaus.mib.org...
ro******@ibd.nrc-cnrc.gc.ca (Walter Roberson) writes:
In article <11**********************@e56g2000cwe.googlegroups.com>,
<Co********@gmail.com> wrote:
If you had a more objective way of measuring time then you could tell
me which of the final times in the program was the better of the two
approximations. Because there is no more objective way to measure time
than this, both approximations are equally accurate. In a subjective
way they are both entirely real since whatever we use to measure time
will not be perfectly accurate.

There was a recent article in either Scientific American or
American Scientist (I forget which), which indicated that clocks
are now approaching sufficient precision that it would be impossible
to synchronize any two of the ultra-precision clocks. Apparently
on those timescales, any movement of the clocks has noticeable
relativistic effects.

On the other hand, the resolution of the clock() call is not high
enough for such matters to be worth considering.


In any case, clock() measures CPU time, not real time (to make this at
least *vaguely* topical in at least one of these newsgroups).
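
A quick way to see that distinction, as a rough sketch: clock() accumulates
processor time, so it barely advances while the process sleeps, whereas
time() follows the wall clock. (This assumes a POSIX sleep(), as in the
original program's <unistd.h>.)

#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    clock_t cpu_start = clock();
    time_t wall_start = time(NULL);

    sleep(3);   /* process is idle: wall time passes, CPU time barely does */

    printf("CPU time used:  %f s\n",
           (double) (clock() - cpu_start) / CLOCKS_PER_SEC);
    printf("wall time used: %ld s\n",
           (long) (time(NULL) - wall_start));
    return 0;
}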


'Real time'????? In physics, especially relativistic physics, time is
what a clock reads. Clock accuracy is a statistical thing based on
comparisons with other clocks, astronomical data etc. At present atomic
clocks are the most accurate.

Thanks
Bill

Indeed so, one of the major implications of Relativity is that there is no
such thing as real (or absolute) time. There is only relative time.

A
Apr 5 '06 #47

<ed*******@gmail.com> wrote in message
news:11*********************@i39g2000cwa.googlegroups.com...

Archangel wrote:
"Greg Neill" <gn*******@OVE.THIS.netcom.ca> wrote in message
news:80*******************@news20.bellglobal.com...
> <Co********@gmail.com> wrote in message
> news:11*********************@i40g2000cwc.googlegroups.com...
>> comparing me to a flat earther is like comparing you to Hitler. By
>> insulting me instead of looking at the issue you are showing your
>> disrespect for logic.
>
> Godwin's Law is invoked. Thread is finished.
>
>


what is Godwin's Law?

A


Greg might add more detail, but it basically says that as the argument in a
thread progresses, once name-calling begins, the one who invokes Nazi
references loses, and the argument is over.


Lol, that rings a bell, I suspect I have heard of it before. Either that or
I have just had an amazing case of deja-vu. This is alt.magick after all...

Thanks.

A
Apr 5 '06 #48
ed*******@gmail.com wrote:
Archangel wrote:
"Greg Neill" <gn*******@OVE.THIS.netcom.ca> wrote in message
news:80*******************@news20.bellglobal.com...
Godwin's Law is invoked. Thread is finished.


what is Godwin's Law?


Greg might add more detail, but it basically says that as the argument in a
thread progresses, once name-calling begins, the one who invokes Nazi
references loses, and the argument is over.


No, it's not. RTFJF.

Richard
Apr 5 '06 #49
Bill Hobba wrote:
"Keith Thompson" <ks***@mib.org> wrote in message
news:ln************@nuthaus.mib.org...
ro******@ibd.nrc-cnrc.gc.ca (Walter Roberson) writes:
In article <11**********************@e56g2000cwe.googlegroups.com>,
<Co********@gmail.com> wrote:

If you had a more objective way of measuring time then you could tell
me which of the final times in the program was the better of the two
approximations. Because there is no more objective way to measure time
than this, both approximations are equally accurate. In a subjective
way they are both entirely real since whatever we use to measure time
will not be perfectly accurate.

There was a recent article in either Scientific American or
American Scientist (I forget which), which indicated that clocks
are now approaching sufficient precision that it would be impossible
to synchronize any two of the ultra-precision clocks. Apparently
on those timescales, any movement of the clocks has noticeable
relativistic effects.

On the other hand, the resolution of the clock() call is not high
enough for such matters to be worth considering.
In any case, clock() measures CPU time, not real time (to make this at
least *vaguely* topical in at least one of these newsgroups).

'Real time'????? In physics, especially relativistic physics, time is what
a clock reads. Clock accuracy is a statistical thing based on comparisons
with other clocks, astronomical data etc. At present atomic clocks are the
most accurate.

Thanks
Bill


Even atomic clocks are measuring an arbitrary standard. In order
to calculate 'real time', whatever that might mean, one would have
to compute the astronomical position of the Earth, since the
standards we use for time are derived from that.
--
Keith Thompson (The_Other_Keith) ks***@mib.org
<http://www.ghoti.net/~kst>
San Diego Supercomputer Center <*>
<http://users.sdsc.edu/~kst>
We must do something. This is something. Therefore, we must do this.


Apr 5 '06 #50
