
Calculate Clock Speed (in an emulator)

Greetings,

I am writing a 65c02 processor simulator. One thing that I do not know how
to do is emulate clock speed. First, the processor is a 1 MHz CPU that I'm
attempting to simulate (modern 6502s operate at 14 MHz -- unless I'm
thinking of 65816s).

In either case, I need to calculate how many cycles to consume per second
depending on the speed of the simulated processor. I'm at a complete loss
as to how to go about figuring out how long a cycle should take on the
host processor that the program is executing on. What I'm going to do,
whether this is the way it should be or not, is: when each instruction is
initially decoded, also keep track of when the next instruction can be
executed, based on the number of clock cycles the current instruction
requires. This way, I can simulate the speed. Anyway, is there a resource
or example somewhere of how I can determine this? I'm on an x86 processor.
Thanks,
Shawn
Jul 22 '05 #1
"Shawn B." <le****@html.com> wrote...
> I am writing a 65c02 processor simulator. One thing that I do not know
> how to do is emulate clock speed. First, the processor is a 1 MHz CPU
> that I'm attempting to simulate (modern 6502s operate at 14 MHz --
> unless I'm thinking of 65816s).

> In either case, I need to calculate how many cycles to consume per second
> depending on the speed of the simulated processor. I'm at a complete loss
> as to how to go about figuring out how long a cycle should take on the
> host processor that the program is executing on.
You are just as any of us would be: there are no means in C++ to "consume"
any particular number of "cycles per second". The only thing remotely
related is the macro CLOCKS_PER_SEC, which expands to a constant expression
of type 'clock_t' and gives the number of 'ticks' that the function
'clock()' counts in one second.

Without going into hardware specifics, and thus resorting to non-standard
C++ means (if available on your platform), there is no way to do what you
want. It is therefore off-topic here.
> What I'm going to do, whether this is the way it should be or not, is:
> when each instruction is initially decoded, also keep track of when the
> next instruction can be executed, based on the number of clock cycles
> the current instruction requires. This way, I can simulate the speed.
> Anyway, is there a resource or example somewhere of how I can determine
> this? I'm on an x86 processor.