Examining the sparse matrix problem
As a result, the planning and execution of an SoC have lengthened instead of shortened, and the process has become riskier, not less so. It's important to examine the genesis of this sparse matrix, analyse the ramifications, and understand how ecosystem participants can successfully advance the industry.
At the dawn of the new millennium, the industry struggled with 130nm process technology and a lack of market clarity. Processor choices for SoCs were rather clearly delineated by application. Customer requirements and competitors' datasheets weren't nearly as available, so information was a highly prized differentiator. Life wasn't easy, but it wasn't complex.
Given this environment, arbitrage of market information to win in the marketplace was common and preferred. For example, Broadcom, through the acquisition of ServerWorks, correctly bet on DDR-DRAM instead of RDRAM and ended up generating a third of its 2002 revenue through this product line alone.
Fast forward to 2013, and it's scary. With relatively few tweaks in the process integration, TSMC has delivered four different variations of highly advanced 28nm process technologies, each reaching stable yields and each delivering clear differentiations on cost, performance, or power. ARM's recent introduction of the A12 processor has made clear the abundance of application processors depending on the time-horizon, performance, and power budget (A7, A9, A15, A53, A57, and now A12).
The design team just needs market information to make the right choices in this richly enabled ecosystem, and the Internet delivers. What was once highly guarded Intel Developer Forum (IDF) presentation material is not only widely available, but also just one of many sources of market, customer, and competitive data.
Thanks to blogs and tweets, rumours and gossip about Silicon Valley are only hours away, even if one lives 12 hours (or 12 1/2 hours, to be precise) away in India. The Internet contains more information today than most development teams can consume—and it's available for free.
The theory is that the transparency brought by the Internet enables us to make the most efficient choice and deliver just the right performance, power, and cost solution to a fickle customer base demanding the best at 10% lower cost every year. Right?
Well, in theory the eigenvalue should be easy to find, as the market requirement has become razor sharp. In practice, however, because the matrix is sparsely populated, the design team quickly finds that the solution space doesn't converge to a nicely defined set and may be only locally optimal.
To illustrate the assertion about the sparse matrix, let's examine a couple of data points. The separation among 28nm process variants is roughly 25% in cost, 10X in power, and 30% in performance from worst to best. In theory, there is an ideal process for each application. However, given the huge ecosystem of IP required, both from within the company and from third parties, availability starts to impact the choice and sometimes steers the decision away from the optimum.
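The dynamic above can be sketched in a few lines of code. The variant names, figures of merit, and availability flags below are all hypothetical, loosely scaled to the spreads quoted in the text (~25% in cost, ~10X in power, ~30% in performance); the point is only that when the choice matrix is sparsely populated, the feasible optimum can differ from the theoretical one.

```python
# Hypothetical (cost, power, performance) for four 28nm process variants,
# normalised so the cheapest/leakiest/slowest variant is the baseline.
variants = {
    "28-A": (1.00, 10.0, 1.00),   # cheapest, highest power, slowest
    "28-B": (1.10,  4.0, 1.10),
    "28-C": (1.20,  1.5, 1.25),
    "28-D": (1.25,  1.0, 1.30),   # costliest, lowest power, fastest
}

# The sparsity: which variants actually have the required IP ported.
ip_available = {"28-A": True, "28-B": True, "28-C": False, "28-D": False}

def score(cost, power, perf):
    """Toy figure of merit: performance per unit of cost times power."""
    return perf / (cost * power)

# The ideal choice, ignoring IP availability...
ideal = max(variants, key=lambda v: score(*variants[v]))

# ...versus the best choice among variants the IP ecosystem supports.
feasible = [v for v in variants if ip_available[v]]
actual = max(feasible, key=lambda v: score(*variants[v]))

print(ideal, actual)  # with these made-up numbers: 28-D 28-B
```

With these illustrative figures, the unconstrained optimum is the fastest, lowest-power variant, but the sparse IP matrix steers the team to a clearly inferior local optimum instead.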