
Earlier this month, at the International Solid State Circuits Conference, Intel made headlines by declaring that future processors built on next-generation technologies like spintronics or quantum wells would probably be slower than current hardware, not faster.

The death of Moore's law is a topic we've discussed before, but I want to revisit it from a somewhat different angle. Instead of walking through the specifics of Intel's research, or the varying proposals for how we might "fix" the problem, let's talk about some of the myths that cloud this issue and warp perception of the topic. Some of these issues are specific to Intel; others affect the broader semiconductor industry.

Myth: CPUs stopped scaling because Intel isn't facing competition

One common misconception around Moore's Law and performance scaling is that we haven't really hit a wall. Intel just hasn't had any reason to keep selling us faster microprocessors, since AMD hasn't been competitive enough to seriously threaten its business.

Here's why it isn't true.

Intel's clock speeds didn't stop rising when AMD launched Bulldozer. They mostly stopped growing by mid-2004 with the launch of Intel's Pentium 4 570J — a 3.8GHz single-core CPU that drew roughly 240W of system power under load. The chart below shows how Intel's high-end CPU configurations changed between 2004 and 2015.

CPU-Progression

In 2004, Intel's highest-clocked chip was a low-efficiency CPU designed for high clock speeds, not efficient operation. Over the course of 11 years, Intel drastically slashed per-core power consumption, cut overall TDP across its product lines, and vastly improved performance. I can't find any modern data sets that compare a Pentium 4 with current hardware, but Anandtech Bench has a comparison between the Pentium 4 660 (a 3.6GHz CPU with HT) and the Intel Core i7-990X. The 2011 chip is 2.4x faster in single-threaded tests, even when operating at a modest clock speed deficit of 200MHz.

The only thing the Pentium 4 and a modern-day Skylake have (roughly) in common is their maximum frequencies. Scaling didn't entirely stop, but even the Core i7-4790K is clocked only 15% higher than the old Pentium 4.
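As a quick back-of-the-envelope check on that roughly-15% figure, here's a minimal sketch. The 4790K's 4.4GHz maximum turbo clock is my assumption for the comparison point; the 570J's 3.8GHz comes from the article.

```python
# Clock-speed gain between the 3.8GHz Pentium 4 570J (2004) and the
# Core i7-4790K's 4.4GHz maximum turbo clock (assumed stock figure).
p4_ghz = 3.8
i7_ghz = 4.4
gain = i7_ghz / p4_ghz - 1
print(f"Clock gain over a decade: {gain:.1%}")  # roughly 15%
```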

Myth: Intel could clock its CPUs higher, if it wanted to

Like most myths, there's a kernel of truth to this. Yes, Intel could run its CPUs at higher frequencies, but not much higher, and not without enormous costs. Take a look at Skylake power consumption as clock speeds rise.

Moving from 4.2GHz to 4.7GHz gets you 11% more clock, but requires 53% more power. That's an abysmal performance-per-watt ratio. I don't doubt that there are ways Intel could improve the situation — it could de-lid the CPUs (or at least use solder instead of paste between the shield and the core). It could probably tune its fabs for higher frequencies. It could bin chips and only sell the very best of the best for overclocking, at increased prices. It could require motherboard manufacturers to use extremely robust board designs if they wanted to support its overclocking processors.

Even if all those changes combined gave Skylake another 600MHz of headroom, the Core i7-6700K would still swallow 53% more power for a 26% frequency increase. It's not worth it. The return on investment is too low.
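To put numbers on that tradeoff, here's a minimal sketch using the article's approximations (4.2GHz to 4.7GHz for 53% more power), assuming performance scales at best linearly with frequency:

```python
# Perf-per-watt penalty implied by the Skylake overclocking figures.
# Assumes performance scales (at best) linearly with clock speed.
base_ghz, oc_ghz = 4.2, 4.7
power_multiplier = 1.53             # ~53% more power at 4.7GHz

clock_gain = oc_ghz / base_ghz - 1  # ~11.9% more frequency
perf_per_watt = (oc_ghz / base_ghz) / power_multiplier

print(f"Clock gain: {clock_gain:.1%}")
print(f"Perf/watt falls to {perf_per_watt:.0%} of the 4.2GHz baseline")
```

In other words, even under the most generous assumption, every unit of energy buys you only about three-quarters as much work as before the overclock.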

While we've been discussing these trends as they bear on Intel and Intel x86 chips, there is no magic solution for ARM, AMD, or Nvidia. Graphics cards have evolved more rapidly than CPUs these past few years, but the pace of improvement has slowed for them too. Eleven years ago, Nvidia's GeForce 7800 GTX was roughly twice as fast as the old GeForce 6800 Ultra, which had launched 12 months previously. More recently, it took Nvidia roughly 30 months to double performance, from the 2012 launch of the GTX 680 to the late 2014 debut of the Nvidia GTX 980. As always, these figures are approximate, but the train has slowed down for everyone, not just Intel.
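One way to see the slowdown is to convert those doubling times into an implied annual improvement rate. This is a rough sketch; the 2x figures and launch spacings are the article's approximations:

```python
# Annualized GPU performance growth implied by a 2x gain over N months:
# doubling every `months_to_double` months compounds to 2**(12/months) per year.
def annual_rate(months_to_double: float) -> float:
    return 2 ** (12 / months_to_double) - 1

print(f"6800 Ultra -> 7800 GTX (2x in ~12 months): {annual_rate(12):.0%}/yr")
print(f"GTX 680 -> GTX 980 (2x in ~30 months): {annual_rate(30):.0%}/yr")
```

A 12-month doubling is a 100% annual improvement; stretching the doubling to 30 months drops that to roughly 32% per year.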

Myth: A solution must exist right around the corner

A belief in the inevitability of progress and continual improvement is welded into most Western philosophy and embedded in American culture. Semiconductor scaling was, perhaps, the most potent real-world demonstration of progress that the human race has ever experienced. For decades, every generation of technology was meaningfully, noticeably better than the last. Computers went from building-sized to pocket-friendly. The news cycle often feeds this perception, with a never-ending run of stories promoting the idea that a bright semiconductor future is right around the corner.

The actual teams of engineers, scientists, researchers, and corporate fellows that develop roadmaps and research new technologies have a different view. The International Technology Roadmap for Semiconductors released a report in July 2015 covering the major changes to Moore's Law since before the group was founded, as well as data on how long it has taken between when a technology is first proposed and when it comes to market.

IncubationTime

Incubation time of recent semiconductor breakthroughs

The fastest path between proposal and implementation was high-k metal gate, which took "only" nine years to come to market. The slowest was raised source/drain, which took almost 20.

There's another nuance to this point that's often missed in discussions of the topic. From the 1960s through to around 2000, process node scaling actually reflected true, geometric shrinks. From 2000 forward, we've used what the ITRS calls "equivalent scaling." That translates to "dump a chum bucket of new innovation in design, manufacturing, and materials into the node and call it a new process."

The use of "equivalent" scaling replaced geometric scaling by the mid-1990s.


Intel's clock speed limits are a reflection of this shift. The properties of silicon have made it extremely difficult to clock at high speeds, and shrinking geometric features no longer yields the power consumption and voltage reductions that it did before 2000.

If graphene, which was first isolated in 2004, or carbon nanotubes, which first came to prominent attention in 1991, were going to save our bacon from the collapse of Moore's Law, we'd already know about it. For decades, the ITRS has functioned as a mechanism for identifying which technologies would be introduced at which nodes and how those nodes would be characterized.

There have always been differences — Intel introduced FinFET at 22nm; TSMC and Samsung / GF waited until 14nm. AMD adopted immersion lithography at 45nm and double-patterning at 32nm; Intel waited until 32nm to adopt immersion lithography but was using double-patterning as early as 45nm. In many ways, however, the industry has advanced together, with multiple semiconductor and technology conferences a year where papers are presented and best practices discussed. Researchers have been hunting for conventional techniques that would restart the old scaling engine for decades. They haven't found any.

This chart, from a paper comparing beyond-CMOS technologies, shows the switching delay and power consumption of both CMOS and other technologies that have been proposed as replacements.

Delay (frequency) at the bottom, power consumption at the left.


It's drawn from an extensive comparison of next-generation, post-CMOS technologies that found only one technology shift, to van der Waals FETs, offered even a small performance improvement and a simultaneous power reduction.

Intel and other firms continue to research III-V semiconductors, but there are no easy materials waiting in the wings, no next-generation semiconductor structures to boost clock speeds, and next-generation lithography is making painfully slow progress.

Myth: The death of Moore's law means the death of performance improvements

The reason Intel is talking about trading raw performance for rock-bottom power consumption in future technology is because it looks like we can hack a path in that direction, as opposed to staring at the rubble of conventional scaling for another decade.

The following graphs show relative performance between designs in a thermally constrained environment.

ThermalConstraints

These graphs show how next-generation semiconductor structures could improve performance in power-constrained environments — and all computing, these days, is considered to take place in a power-constrained environment due to hot spot formation on the die, as well as the general popularity of mobile devices.

I don't want to replace one myth with another, especially since two of these techniques — ExFETs and graphene n-p junctions — rely directly on graphene, a material that hasn't proven willing to bend to our will of late. Given that modern high-powered CMOS already operates under power limitations at the device level (typically 140W for CPUs and 250-300W for GPUs), it's possible that some next-generation materials might be used to create devices which, while not faster than CMOS in an absolute sense, achieve higher throughputs under real-world testing conditions.

Myth: Transistor density can continue scaling indefinitely

Semiconductor densities have kept growing long after clocks and power consumption stopped, but it's a short-term reprieve. As Paolo Gargini, chair of the ITRS, told Nature: "Even with super-aggressive efforts, we'll get to the 2–3-nanometre limit, where features are just 10 atoms across. Is that a device at all?"
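Gargini's figure is easy to sanity-check. Silicon's Si-Si bond length is roughly 0.235nm (a textbook value, used here as an assumption about atomic spacing along a feature edge):

```python
# Back-of-the-envelope check on the "10 atoms across" figure.
# Neighboring silicon atoms sit ~0.235nm apart (Si-Si bond length).
SI_BOND_NM = 0.235

for feature_nm in (2.0, 3.0):
    atoms_across = feature_nm / SI_BOND_NM
    print(f"A {feature_nm:.0f}nm feature spans ~{atoms_across:.0f} atoms")
```

A 2nm feature works out to roughly 9 atoms across and a 3nm feature to roughly 13, so "10 atoms" is the right order of magnitude.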

At some point, it's going to become too difficult to continue building structures side-by-side because we won't have enough atoms to construct transistors. Building 3D structures is another potential loophole we can exploit to improve densities by stacking dies, but no one has figured out how to do the same with CPUs or GPUs — far too much power becomes trapped in the die.

The multi-faceted future

So what comes next? The short answer is "everything."

Heterogeneous compute. Specialized embedded sensors. New techniques for lowering ability consumption. New materials and technologies. 3D stacking. Moving RAM on-package or integrating larger DRAM buffers. New memory technologies.

IoT-Future

Research into cutting-edge semiconductor techniques and revolutionary all-new methods of computing (from quantum machines to mimicking the brain) will continue. So will research into materials like graphene and carbon nanotubes. While there's no serious potential for a short-term comeback (remember, it typically takes 10-15 years for a major breakthrough to manifest as a shipping product), the entire industry is pivoting to focus on low power precisely because consumers have identified low-power devices as the future of entertainment, computing, and cutting-edge design.

The desktops and laptops of 10 years from now may be iterative improvements — faster, certainly, with higher resolution displays and lower costs overall, but perfectly recognizable compared to the hardware we have today. Further research into smartphones and wearables, however, could finally produce the elusive holy grail of all-day battery life and high performance that eludes current-day products. Either way, the pace of development won't be zero.

Now read: What is Moore's Law?