
Moore’s Law: what it means now, and the challenges the industry faces.

Source: LinkedIn | By Paul Graham

For this post, I will be writing about Moore’s Law. Originally I was going to include a section about what we can expect in the future. However, I’ve decided to move that material into a separate post that I’ll publish in a couple of weeks.

Moore’s Law is the observation that, over the history of computing hardware, the number of transistors in a dense integrated circuit has doubled approximately every two years.

Gordon Moore made the observation in 1965, in the thirty-fifth-anniversary issue of Electronics magazine, and it has dominated the IT industry ever since.

Smaller and Smaller

The roots of this phenomenal growth can be seen in a 1959 lecture by the physicist Richard Feynman, called There’s Plenty of Room at the Bottom. In it, Feynman talks about the difference in scale between us and the atom, and the potential if we could build at that scale.

For instance, a human hair is roughly 100,000 nm wide, whereas the current generation of transistors is only 14 nm, or about 70 atoms, wide, and a modern Intel Skylake processor contains 1.75 billion of them. This has allowed for a 4,000,000-times increase in processing power over Intel’s 4004, produced in 1971.
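To get a feel for the doubling, here is a minimal Python sketch that compares a strict two-year doubling against the figures above. The 4004’s count of roughly 2,300 transistors and Skylake’s 2015 launch year are commonly cited figures added for this example; they are not from the post itself.

```python
# A rough check of Moore's law: transistors doubling every two years.
# The 4004's ~2,300 transistors and Skylake's 2015 launch year are
# commonly cited figures added for this example.
start_year, start_count = 1971, 2_300        # Intel 4004
end_year, end_count = 2015, 1_750_000_000    # Intel Skylake

doublings = (end_year - start_year) / 2      # 22 two-year periods
predicted = start_count * 2 ** doublings     # ~9.6 billion transistors

print(f"Strict doubling predicts: {predicted:,.0f} transistors")
print(f"Actual transistor growth: {end_count / start_count:,.0f}x")
```

The actual growth works out to roughly 761,000 times, a doubling closer to every two and a quarter years, which is why “approximately every two years” is the honest phrasing.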

Reaching the Limit

By 2021, it’s predicted that transistors will reach 5 nm. At that stage, we will have reached the limit of Moore’s Law. Go any smaller and nature starts to play by different rules: here quantum mechanics is king and queen.

However, we have already reached another limit of processor design: clock speed. The clock, the metronome of a processor, was increased steadily until it peaked at 3.7 GHz in 2004. Since then it has dropped back to around 3 GHz. This was caused by the difficulty of radiating away the heat generated, and it has forced the industry into alternative approaches.

Parallel Processing

Intel introduced its first (x86) multi-core processor in 2006, following Hyper-Threading in 2002; both allow more programmes to run at the same time. However, software needs to be explicitly designed to exploit parallelism.

Languages like Erlang and F# work well with parallel processing, and their features are being incorporated into mainstream languages like C# and Java. But writing parallel software is still harder, and it is not suitable for all tasks.
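To show what “explicitly designed” means in practice, here is a minimal Python sketch of an embarrassingly parallel task split across cores. The prime-counting task and its numbers are invented purely for illustration:

```python
# A minimal sketch of explicit parallelism: the programmer must split
# the work into chunks; the runtime will not parallelise it for us.
from concurrent.futures import ProcessPoolExecutor

def count_primes(limit: int) -> int:
    """CPU-bound toy task: count primes below `limit` by trial division."""
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    limits = [50_000, 60_000, 70_000, 80_000]  # one chunk per worker
    # Each chunk runs in its own process, so chunks can use separate cores.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(count_primes, limits))
    print(dict(zip(limits, results)))
```

Note that the chunking is the programmer’s job: a task that can’t be cut up this way gains nothing from the extra cores.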

Commoditizing Compute

Processors spend the majority of their time idle or even off, and this fact forms the basis of cloud services. By utilising resources more efficiently, it’s possible to save money. However, the most important benefit is scaling: by allowing a service to scale, it’s possible to cope better with peaks in demand.
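As a toy illustration of the scaling idea, here is a minimal Python sketch of the kind of rule an autoscaler might apply: add capacity when demand peaks, release it when demand falls. The threshold and the demand figures are invented for the example:

```python
# A toy autoscaling rule: keep per-instance load inside a target band.
# All numbers here are invented for illustration.
import math

TARGET_LOAD_PER_INSTANCE = 100  # requests/sec one instance handles comfortably

def instances_needed(requests_per_sec: float) -> int:
    """Scale out for peaks, scale in when idle, but never below one instance."""
    return max(1, math.ceil(requests_per_sec / TARGET_LOAD_PER_INSTANCE))

for demand in [30, 250, 1_000, 80]:  # demand samples over a day
    print(f"{demand:>5} req/s -> {instances_needed(demand)} instance(s)")
```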
