Computer Processor Concepts

Clock Rate

The clock rate typically refers to the frequency at which the clock circuit of a processor generates pulses, which synchronize the operations of its components (such as adding two numbers or transferring a value from one register to another). It is used as an indicator of the processor's speed and is measured in clock cycles per second or its equivalent, the SI unit hertz (Hz).
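Since the clock rate is a frequency, its reciprocal gives the duration of a single cycle. A minimal sketch (the 3.2 GHz figure is a hypothetical example):

```python
# Cycle time is the reciprocal of the clock rate.
clock_rate_hz = 3.2e9            # hypothetical 3.2 GHz processor
cycle_time_s = 1 / clock_rate_hz # seconds per cycle
print(f"{cycle_time_s * 1e9:.4f} ns per cycle")
```

At 3.2 GHz each cycle lasts about 0.3125 nanoseconds, which is why clock rates in the gigahertz range correspond to sub-nanosecond cycle times.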

The clock rate of a CPU is most useful for providing comparisons between CPUs in the same family. The clock rate is only one of several factors that can influence performance when comparing processors in different families.

There are many other factors to consider when comparing the performance of CPUs, like the width of the CPU’s data bus, the latency of the memory, and the cache architecture.

Clock rates can sometimes be misleading since the amount of work different CPUs can do in one cycle varies.

1 Clock = 1 Cycle

MAC

In computing, especially digital signal processing, the multiply–accumulate operation is a common step that computes the product of two numbers and adds that product to an accumulator. The hardware unit that performs the operation is known as a multiplier–accumulator (MAC, or MAC unit); the operation itself is also often called a MAC or a MAC operation. The MAC operation modifies an accumulator a:

a ← a + (b × c)
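The operation above can be sketched in a few lines; a dot product, the core of many DSP kernels, is just a chain of MACs over one accumulator:

```python
def mac(a, b, c):
    """Multiply-accumulate: returns a + (b * c)."""
    return a + b * c

# A dot product is a sequence of MAC operations on an accumulator.
xs = [1.0, 2.0, 3.0]
ys = [4.0, 5.0, 6.0]
acc = 0.0
for b, c in zip(xs, ys):
    acc = mac(acc, b, c)
print(acc)  # 32.0
```

Hardware MAC units perform the multiply and add in a single step, often with a wider accumulator to limit rounding or overflow.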

IPS (Instructions per second)

Instructions per second (IPS) is a measure of a computer's processor speed. For CISC computers, different instructions take different amounts of time, so the value measured depends on the instruction mix; even when comparing processors in the same family, the IPS measurement can be problematic.

The term is commonly used in association with a numeric value such as thousand/kilo instructions per second (TIPS/KIPS), million instructions per second (MIPS), and billion instructions per second (GIPS).

IPS can be calculated using this equation:

IPS = sockets × (cores / socket) × clock × (instructions / cycle)

However, the instructions/cycle measurement depends on the instruction sequence, the data and external factors.
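The IPS equation can be written directly as a function; the machine configuration below (2 sockets, 8 cores per socket, 3 GHz, 4 instructions per cycle) is a hypothetical example:

```python
def peak_ips(sockets, cores_per_socket, clock_hz, instructions_per_cycle):
    """Peak IPS = sockets x (cores/socket) x clock x (instructions/cycle)."""
    return sockets * cores_per_socket * clock_hz * instructions_per_cycle

# Hypothetical 2-socket machine, 8 cores per socket, 3 GHz, 4 instructions/cycle
ips = peak_ips(2, 8, 3e9, 4)  # 1.92e11 instructions/s, i.e. 192,000 MIPS
```

Note that this gives a theoretical peak: the instructions-per-cycle term is not constant in practice, as the surrounding text points out.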

Note: raw IPS has fallen into disuse.

Flops/Ops

In computing, floating point operations per second (FLOPS, flops or flop/s) is a measure of computer performance, useful in fields of scientific computations that require floating-point calculations. For such cases it is a more accurate measure than measuring instructions per second.

FLOPS is the number of floating-point calculations performed per second. FLOPS can be calculated using this equation:

FLOPS = sockets × (cores / socket) × (cycles / second) × (FLOPs / cycle)

More than one floating-point operation can be executed per cycle; for example, the Arm Cortex-A73 can execute 2 FLOPs per cycle in double precision (64-bit) and 8 FLOPs per cycle in single precision (32-bit).
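Plugging the Cortex-A73 figures into the FLOPS equation gives peak throughput for one core (the 2.4 GHz clock is a hypothetical value; actual Cortex-A73 implementations vary):

```python
def peak_flops(sockets, cores_per_socket, cycles_per_second, flops_per_cycle):
    """Peak FLOPS = sockets x (cores/socket) x (cycles/second) x (FLOPs/cycle)."""
    return sockets * cores_per_socket * cycles_per_second * flops_per_cycle

# One Cortex-A73 core at a hypothetical 2.4 GHz clock:
dp = peak_flops(1, 1, 2.4e9, 2)  # double precision: 2 FLOPs/cycle
sp = peak_flops(1, 1, 2.4e9, 8)  # single precision: 8 FLOPs/cycle
```

This shows why single-precision peak figures are typically several times higher than double-precision ones for the same core and clock.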

Instructions per cycle

In computer architecture, instructions per cycle (IPC), commonly called instructions per clock, is one aspect of a processor's performance: the average number of instructions executed for each clock cycle.

The number of instructions per second and floating point operations per second for a processor can be derived by multiplying the number of instructions per cycle with the clock rate (cycles per second given in Hertz) of the processor in question.

IPS = IPC × Clock Rate

The number of instructions executed per clock is not a constant for a given processor; it depends on how the particular software being run interacts with the processor, and indeed the entire machine, particularly the memory hierarchy.

Reference: performance equation