Intel was ambitious with its 10nm process technology plans — maybe too ambitious. Complications led to substantial delays, and to losing its reputation as the most advanced IC manufacturer in the world.
When you are the world’s largest supplier of microprocessors and one of its largest semiconductor makers, you tend to set ambitious targets in a bid to retain your position and stay ahead of the competition. With its 10nm manufacturing technology, Intel Corp. set goals so ambitious that it had to delay high-volume production using this fabrication process, make changes to its roadmap, and even reconsider some aspects of its strategy. Intel is making progress with its 10nm process, but with TSMC and Samsung working at nodes they’ve labeled 7nm, 6nm, 5nm, and smaller, where exactly is Intel today?
When a company designs a new process technology, it sets certain goals for performance, power, and area (PPA). Contract makers of semiconductors at times sacrifice one aspect in favor of another because of their highly iterative approach to design and because they have to offer a new process every year or so to enable their customers to advance their SoCs on an annual cadence. Some of these nodes are usually called ‘short nodes’ and, unlike ‘long nodes’, are used for only a couple of years. By contrast, Intel used to advance its process technologies across all PPA aspects approximately every two years under its Tick-Tock (process-architecture) tenet. In the case of its 10nm node (also known as Intel 1274), the company was looking at up to a 2.7x transistor density improvement (when a 6.2T high-density [HD] library is used), along with a 25% performance improvement (at the same power) or a nearly 50% reduction in power consumption (at the same frequency) compared to its 14nm node.
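As a back-of-the-envelope check, the 2.7x figure lines up with transistor-density estimates widely cited for these nodes. The density values below are industry estimates assumed for illustration, not figures from Intel or from this article:

```python
# Widely cited transistor-density estimates (million transistors per mm^2)
# for Intel's HD standard-cell libraries; assumed here for illustration.
DENSITY_14NM = 37.5
DENSITY_10NM = 100.8

scaling = DENSITY_10NM / DENSITY_14NM
print(f"Density scaling, 14nm -> 10nm: {scaling:.2f}x")  # ~2.69x
```

The ratio comes out to roughly 2.7x, matching Intel’s Hyper Scaling claim for the 6.2T library.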
Many of Intel’s advertised 10nm characteristics are similar to those of Taiwan Semiconductor Manufacturing Co.’s (TSMC) first-generation 7nm fabrication process (N7). Yet Intel originally planned to start high-volume manufacturing (HVM) of its 10nm devices in 2016, about two years ahead of TSMC’s N7 HVM, which would have given Intel a strong advantage over its rivals, particularly in the HPC space.
Intel called its ambitious transistor density gains ‘Hyper Scaling’ and later blamed these aggressive goals for lower-than-expected yields and higher-than-14nm costs. Meanwhile, Intel needed higher-than-usual scaling for its 10nm process not only to sustain the Moore’s Law paradigm (despite longer cycles), but also to keep its die sizes small and its costs low (i.e., to get more product units per dollar). With each process generation, chip costs per square millimeter tend to increase, so for markets like PCs, you want the chips to get smaller with each node, either to lower costs or to maintain them.
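The economics behind that argument can be sketched with a simple dies-per-wafer calculation. The wafer costs and die areas below are hypothetical numbers chosen to show the mechanism, not actual Intel figures:

```python
import math

# Hypothetical illustration: a smaller die can lower per-die cost even
# when the newer node's wafers cost more per square millimeter.
WAFER_DIAMETER_MM = 300.0

def gross_dies_per_wafer(die_area_mm2: float) -> int:
    """Standard gross-die approximation: wafer area over die area, minus
    an edge-loss term proportional to the wafer circumference."""
    r = WAFER_DIAMETER_MM / 2.0
    return int(math.pi * r * r / die_area_mm2
               - math.pi * WAFER_DIAMETER_MM / math.sqrt(2.0 * die_area_mm2))

# Older node: cheaper wafer, larger die; newer node: pricier wafer, smaller die.
old_cost = 7000.0 / gross_dies_per_wafer(150.0)   # ~$16.83 per die
new_cost = 9000.0 / gross_dies_per_wafer(100.0)   # ~$14.06 per die
print(f"${old_cost:.2f} -> ${new_cost:.2f} per die (yield ignored)")
```

In this sketch, a ~29% more expensive wafer still yields a cheaper die because the shrink packs ~54% more dies onto the wafer. The calculation ignores yield, which is exactly the variable 10nm’s defect density problems put at risk.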
At a high level, Intel’s 10nm node is a process technology that uses FinFET transistors and relies on a 13-layer metallization stack. The key technologies meant to enable Hyper Scaling are contact over active gate (COAG); cobalt interconnects (fills) for the first two layers, which reduce resistance in that area by 50% (versus tungsten) and cut electromigration by 5x to 10x, allowing these interconnects to shrink; and self-aligned quadruple patterning (SAQP) for fin formation and self-aligned double patterning (SADP) for gate formation in the front end of the line (FEOL), as well as SAQP for select metal layers in the back end of the line (BEOL). Other techniques include a ‘single dummy gate’, though the three aforementioned were the most advertised.
All leading-edge process technologies these days rely on multi-patterning, and in the case of its 10nm node, Intel had to use quad (4x), penta (5x), or even hexa (6x) patterning for select features. In the most complex case, Intel had to expose a 10nm wafer six times to ‘draw’ a single feature. Multi-patterning not only lengthens production cycles, but also tends to increase defect density, which lowers yields and greatly increases costs (reducing profitability and margins). Extensive use of multi-patterning to achieve Hyper Scaling instead of waiting for extreme ultraviolet (EUV) lithography was a risk, but EUV was never going to be ready for prime time in 2016.
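The need for multi-patterning falls out of simple pitch arithmetic. Assuming a single 193nm-immersion exposure resolves pitches down to roughly 80nm (an approximate, commonly cited limit, not a figure from this article), each self-aligned patterning step halves the achievable pitch:

```python
# Rough sketch of why sub-40nm pitches force SAQP on a 193i scanner.
# Assumption: single-exposure 193nm-immersion resolution limit of ~80nm pitch.
LITHO_MIN_PITCH_NM = 80.0

def scheme_needed(target_pitch_nm: float) -> str:
    """Pick the lightest self-aligned patterning scheme that reaches
    the target pitch, given the single-exposure limit above."""
    if target_pitch_nm >= LITHO_MIN_PITCH_NM:
        return "single exposure"
    if target_pitch_nm >= LITHO_MIN_PITCH_NM / 2:
        return "SADP (pitch halved)"
    return "SAQP (pitch quartered)"

print(scheme_needed(34))  # Intel 10nm fin pitch  -> SAQP
print(scheme_needed(54))  # Intel 10nm gate pitch -> SADP
```

That split matches the article: fins at a 34nm pitch need SAQP, while the 54nm gate pitch gets by with SADP.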
Furthermore, no semiconductor maker except Intel used SAQP in the BEOL of its 7nm or 10nm technologies, which is why some observers blame SAQP for the high defect density. The use of cobalt or ruthenium in the lower layers of sub-10nm nodes looks inevitable for many reasons, but cobalt was a relatively new material for Intel at the time it did the R&D work for its 10nm node, so some believe that cobalt could be to blame for the high defect density. The latter camp definitely has a point: the use of cobalt requires new inspection tools that use electron beams.
“The shrinking geometries, in turn, place elevated demands on the metallization process and typical yield-related fail modes include incomplete gap-fill or voiding,” said Nicolas Breil, director of technical projects at Applied Materials, in an IEDM presentation two years ago. “As voids in cobalt are usually smaller than the cobalt line width, the detection of voids as small as 5nm is critical. A spot size smaller than 3nm is required to detect sub-10nm voids.”
Single-beam e-beam inspection tools are slow compared to traditional optical inspection tools (multi e-beam tools are not quite here yet, and they are also slow), but optical tools do not have enough resolution for new and upcoming process technologies. As a result, e-beam tools are currently used only for process qualification and calibration.
It is not uncommon for Intel to take risks and implement new technologies ahead of the industry, but in the case of its 10nm process, the company went above and beyond with innovations, and those innovations meant risks.
“The people I have talked to think, in retrospect, it was too aggressive overall,” said Nathan Brookwood, Research Fellow at Insight 64.
Changes of plans & strategy
Intel first confirmed issues with its 10nm technology in July 2015 and blamed multi-patterning for high defect density and low yields. Back then, the company promised to start volume shipments of its first 10nm products, codenamed Cannon Lake, in the second half of 2017, around a year later than planned. In early 2018, Intel said that it had started revenue shipments of Cannon Lake CPUs and would ramp production later in the year, but in April 2018 the company admitted that due to poor yields it would have to move volume production of 10nm CPUs to 2019. Later on, it turned out that the second generation of Intel’s 10nm fabrication process (not to be confused with 10nm+) that went into production in 2019 had a number of significant improvements over the initial 10nm manufacturing technology.
Intel, obviously, knew more about the issues with its 10nm process well before it made any public announcements in 2015. Understanding the risks, the company needed to ensure that it could produce CPUs that met cost, performance, and time-to-market requirements in the following years even without using its leading-edge node. To that end, in early 2016 the chip giant announced a new tenet for introducing new process technologies and microarchitectures. Instead of the Tick-Tock model that had worked for Intel for about 10 years, the company switched to a new ‘Process-Architecture-Optimization’ (PAO) model that involved longer usage of microarchitectures as well as iterative improvements of process technologies and product design.
“The Tick-Tock model was mostly a risk mitigation strategy,” said Brookwood. “Use a known microarchitecture to debug the new process, and bring up a new microarchitecture on a proven process. Resulted in an improved product on a predictable, annual cadence.”
“I think the Tick-Tock grew out of a desire to gain additional reputation advantage from a marketing perspective,” said an ex-Intel employee. “As management looked at it, the rhythm appeared to happen at a regular pace. Therefore, some people believed there was no reason to doubt it and, to that end, continued. They forgot how incredibly hard the tasks were.”
The new PAO principle was meant to ensure the three aforementioned things: the timely introduction by Intel of products competitive on cost and performance, along with the financial viability of those products. Starting in 2016, Intel has been improving its process technologies iteratively (something Intel calls intra-node improvements) and has not had to wait for a major new node to launch a new processor. But something that looks plausible at first may not look so good eventually, especially if the competition is aggressive.
“The Tick-Tock worked well for over a decade,” said Brookwood. “It broke a bit at 14nm, which was about a year late, and then collapsed completely at 10nm. Meanwhile, TSMC has been able to maintain a two-year cadence. More modest improvements, but way more predictable. Who would have thought that AMD could roll its entire line over to TSMC’s 7nm, while Intel is still mostly on 14nm?”
Intel’s first optimized 14nm-class process was its 14nm+ fabrication technology, which allowed the company to increase the frequency of its Kaby Lake CPUs by 15% over Skylake processors without increasing their power consumption. An even more advanced version of the technology — 14nm++ — has a relaxed gate pitch of 84nm (up from 70nm in the original 14nm) as well as a ~24% higher drive current, or around 50% lower power. Intel’s 14nm++ is used to build the company’s Coffee Lake and Comet Lake processors for premium gaming desktops as well as higher-end notebooks. Going forward, Intel will continue to advance its fabrication technologies iteratively, so we are going to see 10nm+ and 10nm++ as well as 7nm, 7nm+, and 7nm++.
Meanwhile, Intel’s CEO hopes that the company will get back to a 2 to 2.5-year cadence with major nodes, but only time will tell how that works out for Intel.
“Our goal is to deliver an annual cadence of process improvements to support our product roadmap,” said an Intel spokesperson. “We achieve this through a combination of node scaling and intranode enhancements to deliver the right combination of performance, power, and area improvements.”
An iterative approach to the development of manufacturing processes is not the only major change Intel had to make. Back in the day, the company aligned its product designs and manufacturing technologies, so a particular design was destined to be made using a particular fabrication process. By now, Intel has decoupled its product and node development and says that it can produce its upcoming CPUs or GPUs using the most viable technology it has. Such an approach somewhat resembles the interaction between a fabless chip developer and its foundry partner, but on a more intimate level, of course. To ensure that Intel’s chip engineers have everything they need to port their designs to a particular node, Intel last year hired Gary Patton, the former CTO of GlobalFoundries and a former head of IBM’s microelectronics business. Patton will oversee the development of process design kits (PDKs), IP, and tools.
Intel: 10nm is not our best node
Intel will keep its iterative approach to process technology advancements in the future. The chipmaker plans to introduce two enhanced versions of its 10nm node — 10nm+ and 10nm++ — in 2020 and 2021, respectively. Based on a slide shown by Mark Bohr (Intel’s former senior fellow and director of process architecture and integration) in 2017, Intel’s 10nm+ promises to increase transistor performance over 10nm, but its frequency potential is still below that of 14nm++, which makes the technology a little less attractive for desktop CPUs (especially those aimed at gamers). Keeping in mind that Intel faced tough defect density problems with its 10nm technology, defect density was likely one of the primary things it addressed with 10nm+.
In the coming quarters, Intel plans to start using its 10nm++ technology, which promises to significantly boost transistor performance; this is probably when Intel will be able to use the node for applications that benefit from high clocks. Meanwhile, Intel admits that there are fundamental reasons why its 10nm family of nodes will not be as profitable as its 22nm and 14nm nodes. Earlier this year, George Davis, Intel’s chief financial officer, stated the following:
“This just is not going to be the best node that Intel has ever had,” he said. “It is going to be less productive than 14nm, less productive than 22nm, but we are excited about the improvements that we are seeing. We expect to start the 7nm period with a much better profile of performance over that starting at the end of 2021.”
Going forward, Intel will offer 7nm, 7nm+, and 7nm++ fabrication technologies that will rely on extreme ultraviolet lithography (EUVL), which will help Intel solve a variety of multi-patterning-related issues. Iterative development has a number of benefits, though it requires additional resources, which probably means somewhat higher R&D costs. Still, since manufacturing processes are getting more expensive to develop in general, it is hard to estimate how high these additional R&D costs are. Meanwhile, Intel’s CFO warned that the overlap between various process technologies (R&D, equipment costs, startup costs, etc.) will have an effect on gross margins:
“The fact is, like I said, it is not going to be as strong a node as people would expect from 14nm or what they will see in 7nm. We are at a time when in order to regain process leadership we had to accelerate the overlap between 10nm and 7nm, and then 7nm and 5nm. So, the cost that you are observing, starting in particularly 2021, you have got this intersection of the performance of 10nm, the investment in 7nm, and we are also well into starting the investment in 5nm, all of these elements just combine to impact the gross margin.”
Intel’s admission that its 10nm process technology will not perform financially as well as its 14nm node has over the latter’s seven years in production, even as numerous 10nm+ and 10nm++ projects remain on the roadmap, may have some interesting implications.
“The best margins come from process nodes that are a year or two old, because the yields are usually much higher and the cost of the tools in the fab has been depreciated,” said a person familiar with semiconductor production.
Intel’s 10nm node will have been in HVM for about two years by the second half of 2021, when Intel’s 7nm production starts to ramp. Of course, depreciated equipment used for 10nm will be reused for 7nm, but that means the financial success of the latter will, to some degree, rest on the shoulders of the former.
[This is the Part 1 story of a 2-part series on Intel’s fitful progress on a 10nm process. Please see Part 2 here. – ed.]
— Anton Shilov is a veteran technology writer who has covered many aspects of the electronics industry, including semiconductors, computers, displays, and consumer electronics.