Moore’s Law, the tenet that the number of transistors on a chip will double every 18-24 months, has driven the electronics industry for decades. Today, there’s no denying that Moore’s Law is showing its age, with some semiconductor industry leaders going so far as to rewrite its definition. In this era of More-than-Moore, chipmakers are turning to new materials, 3D wafer stacking and heterogeneous integration – die with different manufacturing process nodes and technologies integrated within a single package – to keep driving the pace of advancement.
As semiconductor technology continues to advance, the time to transition from one process node to the next has shrunk significantly. Where node transitions once followed a fairly predictable cadence of around four years, the transitions from 28nm to 20nm and then to 16nm each took approximately 18 months, despite the industry’s simultaneous move from planar to FinFET transistors at 16nm. Moving to FinFET was a big deal because it put the industry back on the Moore’s Law scaling curve.
The relentless drive for the next big thing in smartphones and the annual, sometimes bi-annual, product release cadence of major smartphone suppliers have pushed the pace of process migration down to 7nm. We are on the cusp of seeing 5nm applications processors in the next generation of smartphones, and it’s likely that many premium smartphones for the 2020 holiday season will be powered by 5nm chips. Regardless of predictions of worldwide smartphone market saturation and the reality that unit shipment growth is hovering in the single digits, it remains a very lucrative, high-volume market.
Times they are a-changin’
Advanced process adoption has typically been driven by smartphone and PC demand, but forces from a different vector are now causing huge disruption. We had expected major fabless semiconductor companies (companies that do not operate their own semiconductor fabrication plants) to be the early adopters of the most advanced silicon manufacturing processes, as has been the case for many generations of innovation. Yet when you examine the data, it shows a tectonic shift.
In a move that couldn’t have been predicted 10 years ago, major OEMs and cash-rich Internet companies are racing to build chip design teams in house and adopting the most bleeding-edge process technologies. This is true whether companies are designing chips for data centers and hyperscale computing, chips for next-generation smartphones, or ADAS and autonomous driving chips. And, let’s not forget that applications like smart appliances (Hey Alexa!) and AI accelerator chips are moving to high-performance custom silicon. The most famous examples of this shift include Google’s TPU powering the data center (AI training and inference) and HW3, the SoC (system on a chip) designed by Tesla for use in the Model 3.
Systems companies are changing the economic equation of semiconductor ROI. Previously, you’d take the total cost of designing a chip and calculate how many chips you could sell over the product’s lifetime before deciding whether you could afford to start such a design project. It was all about amortizing design costs over a large volume of chips. If the cost of design was $100 million and you could sell 100 million chips, the amortized design cost was $1 per chip sold. With an average selling price (ASP) of $40, that was a pretty good deal. But it doesn’t work if you can only sell 1 million chips; then the amortization works out to $100 per chip.
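The amortization math above can be sketched in a few lines; the figures are the article’s illustrative numbers, not real industry data.

```python
def amortized_design_cost(design_cost: float, units_sold: int) -> float:
    """Design cost spread evenly across every chip sold."""
    return design_cost / units_sold

DESIGN_COST = 100_000_000  # $100M total design cost (illustrative)
ASP = 40                   # average selling price per chip, in dollars

# High-volume case: 100M units -> $1 of design cost per chip,
# a small fraction of the $40 ASP.
high_volume = amortized_design_cost(DESIGN_COST, 100_000_000)

# Low-volume case: 1M units -> $100 of design cost per chip,
# far above the $40 ASP, so the traditional model breaks down.
low_volume = amortized_design_cost(DESIGN_COST, 1_000_000)

print(high_volume)  # 1.0
print(low_volume)   # 100.0
```

At low volume the amortized design cost alone exceeds the selling price, which is exactly why the recurring-revenue model described next changes the calculus.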
Shaking up this equation is the current reality that Internet companies and cloud services companies are not selling chips; they are using these ultra-high-performance chips to power their own servers. They sell cloud computing services, and the chips are just a hardware investment. The money is made in recurring fees for using the cloud services. All of a sudden, the business model works! This is one reason for the acceleration in designing chips in the most advanced silicon processes.
The other driver is the “must-have” performance that can only be provided by bleeding-edge process technology. If you need it, you are willing to pay the premium. This is where you see a lot of AI SoC companies and cryptocurrency SoC companies, and high-end server and networking applications driving the adoption of 7nm and soon to be 5nm.
Many different types of companies and chips are taking center stage for this silicon renaissance. We now have chips powering drones of all sizes and configurations: from toy-like drones, consumer drones, autonomous drones, drones for movie-making, to autonomous farming, surveillance and security, firefighting, and other industrial applications. The possibilities are endless.
These applications need smart camera chips with varying ranges of resolution and fidelity, along with wireless and GPS connectivity. Autonomous drones have reached a level of sophistication where tremendous compute power is required to process ML/AI algorithms. Additionally, security cameras for home and enterprise use, campus-wide systems, and city-wide systems have enabled a huge business segment. This is especially true for companies in China, where the smartphone camera doubles as a means of identification via facial recognition, enabling e-commerce and cashless payment systems and upending the need for credit cards or cash.
For a brief period, SoCs for bitcoin mining consumed a huge share of wafer consumption at 7nm. Even as the bitcoin mining business has slowed in recent years, demand for SoCs from the cryptocurrency industry is here to stay and will continue to evolve. The excitement around robotaxis and autonomous driving is also having a major impact on companies designing advanced SoCs that can be tailored to the exact requirements of their specific applications. While some automotive companies use off-the-shelf options, others want to develop custom chips to meet their specific AI training and inference requirements. All of these new SoCs demand the most advanced process technologies.
So, what does this all mean?
Today is an interesting and exciting time for chip development. While growing, these applications are nowhere near maturity. Despite constant chatter about 5G, in reality we have only seen the tip of the iceberg. We have not yet begun to see the impact of 5G on semiconductor consumption, especially as 5G base stations start to roll out over the next few years. Imagine the enormous number of chips needed to support the deployment of 5G-enabled smartphones and technology worldwide.
Propelled by a number of generational technology trends, strong design activity and innovation continue in 2020, both at advanced nodes and on the More-than-Moore front. 5G, AI and machine learning, autonomous vehicles, hyperscale computing, industrial IoT, and more all rely on semiconductors at their foundation, driving the need for next-generation computing, connectivity, and storage. This is creating tremendous opportunities for semiconductors in a host of vertical markets including consumer, hyperscale, mobile, communications, automotive, aerospace and defense, industrial, and health. As semiconductor technology advances, development continues to become more difficult, and design costs increase correspondingly. That said, we are in the midst of a technology revolution, with silicon enjoying its time in the spotlight as a clear innovation enabler.