Parallel Computing

Cores vs CPU speed

Posted by ed_word on Oct. 21, 2017, 1:03 p.m.

For years, software developers have relied on hardware improvements to mask the inefficiency of their programs. From a few kHz of processing speed, to MHz, to the GHz of processing power at our disposal today, the performance gains delivered by hardware technology have never failed to amaze us, and we expect the same to continue for years to come. But hold on a second… let us leave our world of comfort for a moment and look at things from a hardware engineer’s perspective.

Moore’s Law states that the number of transistors in an IC doubles approximately every two years, and it has held for over 50 years. However, the days of Moore’s Law may be coming to an end, as the gaps between new, innovative architectures grow wider. Processor performance has certainly kept increasing over the years, but another important aspect of processors that we easily forget is power consumption. If you plot power consumption against clock speed, you’ll find that higher clock speeds come at the cost of disproportionately higher power consumption, because raising the frequency typically also requires raising the supply voltage. This is where the concept of multi-core processors comes into play.

To understand this better, consider a processor with a single core running at frequency f, with switched capacitance C and supply voltage V. Its dynamic power is P1 = C(V^2)f. Now consider a processor with two cores working in parallel, each running at frequency f/2, so the throughput of the system is the same as before. Because a lower clock frequency permits a lower supply voltage, each core can ideally run at V/2, giving P2 = 2C(V/2)^2(f/2) = P1/4. Without compromising on performance, we got the job done with, ideally, a quarter of the power; in practice it comes out to roughly 0.4*P1, since voltage cannot be scaled down quite that aggressively.
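The arithmetic above can be checked directly. A minimal sketch, using illustrative values for C, V and f (not figures from any real processor):

```python
def dynamic_power(capacitance, voltage, frequency):
    """Dynamic power of one core: P = C * V^2 * f."""
    return capacitance * voltage ** 2 * frequency

# Hypothetical numbers, chosen only to illustrate the formula.
C = 1e-9   # effective switched capacitance in farads
V = 1.2    # supply voltage in volts
f = 3e9    # clock frequency in hertz

# Single core at full frequency.
p1 = dynamic_power(C, V, f)

# Two cores at half the frequency; halving f ideally lets us halve V too.
p2 = 2 * dynamic_power(C, V / 2, f / 2)

print(p2 / p1)  # 0.25 — same throughput at a quarter of the power (ideal case)
```

Whatever values you plug in for C, V and f, the ratio stays 1/4, because the constants cancel: 2 * (1/2)^2 * (1/2) = 1/4.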

All we need to do is identify the subset of programs running on our system that are concurrent, and then, from that subset, identify the programs that can be run in parallel. Concurrent programs are logically active at the same time, whereas parallel programs are actually active and run simultaneously. Simple problems like finding the area under a curve, matrix multiplication, etc. can all be executed in parallel, saving a lot of computational time. With processors these days shipping with 8, 10, 22, and up to 72 cores (Xeon Phi), it’s time we learned how to use them.
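The area-under-the-curve problem mentioned above is easy to parallelise because each sub-interval can be integrated independently. A minimal sketch using Python's standard multiprocessing module; the function f and the interval [0, 1] are illustrative choices, not anything from a particular library:

```python
from multiprocessing import Pool

def f(x):
    return x * x  # integrate x^2 over [0, 1]; the exact answer is 1/3

def partial_area(args):
    """Trapezoidal rule over one sub-interval [a, b] with n steps."""
    a, b, n = args
    h = (b - a) / n
    total = (f(a) + f(b)) / 2
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

if __name__ == "__main__":
    workers = 4
    # Split [0, 1] into one independent sub-interval per worker.
    edges = [i / workers for i in range(workers + 1)]
    chunks = [(edges[i], edges[i + 1], 25_000) for i in range(workers)]
    with Pool(workers) as pool:
        area = sum(pool.map(partial_area, chunks))
    print(area)  # close to 1/3
```

Each worker integrates its own quarter of the interval, and the partial areas are summed at the end; no worker ever needs another's intermediate results, which is what makes this problem "embarrassingly parallel".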


Oct. 26, 2017, 3:15 p.m.
pixelexel

Nice Blog Edward!