FPGAs must shape up to accelerate growth
Although the industry initially saw FPGAs as a substitute for ASIC technology, the huge returns expected of the FPGA market have yet to materialize. FPGA market growth remains unmoved by escalating ASIC mask-set costs and related NRE, with FPGAs proving bulkier and slower than any standard-cell implementation. MonolithIC 3D's Zvi Or-Bach argues that this growth stagnation stems from the inherent inefficiency of FPGA technology.
In my recent blog, 28nm – The Last Node of Moore's Law, I outlined the dramatic change that has occurred after many years of cost reduction driven by dimensional scaling. It is now clear that the 28nm technology node will provide the lowest cost-per-gate for years to come. In this blog we will assess the potential implications for the ASIC and FPGA markets.
Over the last two decades, we have seen escalating mask-set costs associated with dimensional scaling, along with correspondingly escalating NRE costs. At the recent 2014 SEMI Industry Strategy Symposium (ISS), Ivo Bolsens, Xilinx CTO, presented the following chart illustrating ASIC design cost escalation:
The dramatic increase in ASIC design costs has had a real effect on the ASIC market, reducing the number of new designs and dramatically reducing the number of vendors serving the ASIC market. One would expect such a trend to have a very positive effect on the FPGA market, because an FPGA design carries no mask-set cost and, accordingly, far lower NRE per design. The following illustrative chart depicts these expectations:
Surprisingly, this did not really happen. The following chart presents the overall FPGA market during the last decade according to the financial results of Xilinx, Altera, and Actel:
FPGA market growth can be compared to overall semiconductor market growth as presented in the chart below, where the market in 2013 stood at $305 billion. Clearly, FPGA market growth during the last decade tracks overall semiconductor market growth, and there is no indication of any benefit accruing to the FPGA market from the escalating ASIC mask-set costs and associated NRE.
FPGA technology began in the mid-1980s as an alternative to the popular ASIC technology of that time—the Gate Array (GA). The acronym FPGA stands for Field-Programmable Gate Array. During the 1990s, the Gate Array ASIC technology lost its appeal as more sophisticated ASIC technologies came to the fore, and the $20B Gate Array market shrank dramatically until it effectively ceased to exist. Analysts expected that this would have a dramatic positive impact on the FPGA market, which did grow to some extent, but far less than expected. The trend of escalating NRE driven by dimensional scaling and rising lithography costs continued into the 2000s and drove down the number of ASIC designs. Once again, analysts expected a huge surge in the FPGA market, but—clearly—this did not happen.
In the remainder of this column I will present my company's theory of why this did not happen, and discuss some potential implications for the future. We believe that the stagnation of FPGA growth is mostly due to the inefficiency of FPGA technology. Most FPGAs use SRAM as the programming or "switch" technology. Interconnects are the dominant resource in modern designs. Within an SRAM-based FPGA, a programmable interconnect is implemented by an SRAM cell that controls a pass-transistor driver or bidirectional driver. The following chart illustrates the diffusion area associated with such a Programmable Interconnect Cell (PIC), assessed in 45nm technology and compared to the size of its mask-defined equivalent—the via. The results indicate that the cell area overhead of the SRAM PIC is over 30X that of a via; and this does not include the additional circuit overhead area needed to program and control the SRAM PIC.
This number has been reported in the industry for many years. A 2007 research paper by Ian Kuon and Prof. Jonathan Rose (IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems) stated this clearly: "In this paper, we have presented empirical measurements quantifying the gap between FPGAs and ASICs for core logic. We found that for circuits implemented purely using the LUT based logic elements, an FPGA is approximately 35 times larger and between 3.4 to 4.6 times slower on average than a standard-cell implementation."
This high programmability overhead suggests that many current ASIC designs cannot be replaced by their FPGA equivalents. Consequently, when advanced-technology NRE is too high, the alternative is to use an older-node ASIC technology. Since the primary driver of mask-set and NRE costs is the associated capital equipment, which depreciates over time, the cost of older technologies falls dramatically with age. The 30X area penalty means that one could use a node five generations older and still have a competitive solution compared to a current-node FPGA, since each generation roughly doubles transistor density and 2^5 = 32. Taking into account the 60 percent gross margin of the FPGA companies, along with the overhead of using a fixed-size device from an FPGA family rather than a custom-tailored standard-cell device, these factors could compensate for an additional two nodes. Looking again at the design costs illustrated in the Xilinx chart above, we can see that at 180nm the design costs are quite low and the mask-set costs are too small even to register on the chart.
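The node arithmetic above can be sanity-checked with a quick back-of-envelope calculation. This is a minimal sketch, assuming an idealized 2X density gain per process generation (a 0.7X linear shrink); the 30X figure comes from the PIC-versus-via comparison discussed earlier.

```python
import math

# Assumed inputs (idealized, per the discussion above):
fpga_area_penalty = 30.0      # SRAM PIC area vs. a mask-defined via, ~30X
density_gain_per_node = 2.0   # each generation roughly doubles density

# Number of generations an older-node standard-cell design can fall
# behind while matching a current-node FPGA on area:
nodes_compensated = math.log(fpga_area_penalty, density_gain_per_node)

print(round(nodes_compensated, 1))  # prints 4.9, i.e. roughly five generations
```

Under these idealized assumptions, a 30X area overhead is worth log2(30) ≈ 4.9 generations, which is where the "five generations older" figure comes from; the FPGA vendors' gross margin and fixed device sizes are then argued to cover roughly two more.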
What has really happened is that many designs chose older-node standard cells instead of an FPGA. In his most recent keynote at the Synopsys user group (SNUG 2014), Aart de Geus, Synopsys CEO, presented multiple slides illustrating the value of Synopsys' newer tools in improving older-node design effectiveness. The following chart is one of them; its left-hand side shows the current distribution of design starts. One can easily see that the most popular current design node is 180nm. Clearly, even such an old node provides a better product than the state-of-the-art FPGA.