“People who are really serious about software should make their own hardware.” — Alan Kay

As part of EE Times’ look at Apple’s march to becoming the first trillion-dollar company, I will look at their “modern” semiconductor work.

To do this, I first need to think about the iPhone. It is the defining product on the road to the trillion-dollar milestone. When the iPhone was unveiled on Jan. 9, 2007, it was a departure from the existing phone paradigm.

Steve Jobs described it as “a widescreen iPod with touch controls, a revolutionary mobile phone, and a breakthrough internet communicator.” That day, he also introduced a new operating system: iPhoneOS, aka iOS.


What about Alan Kay’s quote above? Steve Jobs included it in the 2007 iPhone launch. At face value, it captured Apple’s need to design its own phone, to run its new iOS software, which brought about a new way of interacting with phones — think touchscreen, virtual keyboard, and swipes.

We did not know that this quote would soon apply to Apple’s jump into a much larger semiconductor effort. Apple was about to go vertical.

Early days and the A4
Why would Apple want to bring large-scale semiconductor design in-house? It is an expensive endeavor.

Would it be able to design comparable, let alone better, ICs than the semiconductor companies already in the space? Surely, it was too big a risk.

But in the spring of 2008, Apple announced the acquisition of PA Semi, a low-profile processor design house founded by veterans of Digital’s StrongARM effort. Apple said that it wanted to further differentiate its products. Later, rumors emerged that Apple had also acquired Intrinsity, another processor designer, which had been marketing its Hummingbird Arm-based CPU. That acquisition was confirmed in April 2010.

On Jan. 27, 2010, Steve Jobs took to the stage to introduce the iPad and the Apple-designed A4 SoC. A week before the keynote, I had speculated about the possibly impending tablet and its silicon, reasoning that a tablet would require silicon somewhere between the iPhone’s and a MacBook’s. I also asked, “…what happens if you could design your own processor with blocks tailored solely to your device and its application?”

The A-series, and what I consider the “modern” Apple semiconductor effort, was underway.

When my co-authors and I examined the A4 for EE Times, we found considerable block-level similarity with Samsung’s S5PC110, a contemporary Arm-based SoC; the Arm CPUs were, in fact, identical. Our A4 die photo from the EE Times article is shown below. The A4 did not appear to be a differentiating SoC, but there was little time between the above acquisitions and the 2009 date code on the package in the publicity shot.

Annotated A4 die photo (Source: MuAnalysis)


A-series milestones
After the jolt of actually introducing an in-house SoC, I would say that the next major milestone came with the A6. With it, Apple introduced an in-house-designed CPU. This is no small feat.

At the time, Chipworks commented that the CPU appeared to be a manual, hand-crafted layout. A die photo of the A6, with the CPU annotated, is presented below. It showed the depth of Apple’s commitment to its semiconductor group. The A7 then brought the so-called Secure Enclave to store and process fingerprint data from the TouchID sensor. Along the way, Apple also integrated an Image Signal Processor and its Motion Co-Processor, to name but two blocks.

Annotated A6 die photo (Source: Chipworks)


Let’s fast-forward to 2017’s A11. A die photo of this SoC is shown below. The A11 incorporated Apple’s first in-house-designed GPU and its so-called Neural Engine, both significant pieces of design. In the press release for the iPhone X, Apple indicated that both take part in machine-learning functionality; in particular, FaceID and Animoji are said to be enabled by the Neural Engine.

I have glossed over quite a bit, including the performance of the major blocks. What matters here is the design and inclusion of two more SoC blocks, the GPU and the Neural Engine, that are central to iPhone performance and experience. I see both as important going forward because they contribute to machine learning.

Annotated A11 die photo (Source: TechInsights)


Custom circuits and vertical integration
There is plenty of evidence of Apple’s growing design prowess. Is this design differentiating the iPhone from other phones, as Steve Jobs hoped when he launched the A4?

Let’s consider FaceID as an example.

Apple might identify a large feature, such as facial recognition, as being of interest. It might acquire one or more companies to assemble IP around at least some of the technology. Some of the technology will be implemented in software, and some in hardware. A block such as the Neural Engine is identified and designed. At the same time, other blocks, such as the Secure Enclave, would be identified as useful, now processing facial image data instead of fingerprints.

Apple has stressed the coordination of software and hardware engineers at many keynotes over the years.

One can almost hear the meetings between the teams:

“I need HW that can run routines A and B.”

“I can give you most of that, but could you modify the routines so that they run this way in my block?”

And on it goes until the hardware and software are built and brought together. The circuit blocks may well be useful only to Apple after such a process, but that is fine as the semiconductor group only has one customer.

On an even finer level, and at a higher degree of integration, one might envision certain routines, or bits of routines, of the OS actually encoded at the transistor level. “I could put that calculation that is called regularly into transistors to save software cycles.”

Yes, one loses flexibility, but the performance gain could be worth it. I have mused about this from time to time, as it seems the pinnacle of hardware–software integration.

Designing for one customer
The role of the iPhone is clear: iPhones are sold to customers, and revenue is earned.

The role of semiconductors is not as clear because there are no external sales; there is no direct revenue. All of the semiconductor products are for “internal use only.” Does everyone buy an iPhone because it includes an A-series processor? Surely not. Does the iPhone have capabilities that would be impossible, would perform poorly, or would be more expensive without the A-series? I think the answer is likely yes.

Again, let’s look at FaceID.

It is safe to say that FaceID is an important feature of the iPhone X. If Apple designed the SoC but not its own blocks, it would need to source an appropriate machine-learning IP core. It may also need to rethink the GPU, as the GPU is said to share in some machine-learning tasks. Furthermore, Apple would need to write software that runs on these blocks. The scenario becomes even harder with less vertical integration, where Apple sources a whole SoC. Here, in a bad case, a secondary IC may be required to perform machine learning, implying more board space and cost.

Do the A-series SoCs differentiate the final product? I would have to say yes. On one level, Apple can focus on design that is specific to its needs and desired features. It is designing for one customer, presumably reducing the compromises needed to appeal to many. Furthermore, Apple does not have to design the best cores out there; it only needs to design cores that perform best with iOS. Apple can tailor circuit and block design to meet its needs.

It is worth noting that Apple has made some bold steps along the way. As they move major blocks to their own designs, they are placing more responsibility onto their designers. Also, even though semiconductor revenue is indirect, a poorly performing A-series chip would most likely result in reduced sales.

Looking forward
So far, I have only talked about the iPhone and the A-series SoCs. This is only part of the equation.

Apple has expanded its semiconductor portfolio into quite a few products. One can look to the S-series in the Apple Watch, the W-series in the AirPods (and Apple Watch), and the T-series in a growing number of macOS systems. These all bring interesting features to the final product.

Would the AirPods have been possible without the W1? At their launch, the AirPods strayed from the standard Bluetooth practice of running a wire between the left and right earpieces. Did the W1 allow the transmission of the left and right channels to the untethered AirPods? When I considered the AirPods, I noted an eerie resemblance between their features and the IP Apple acquired with Passif. I think the W1 was differentiating.

I do not know what Apple is dreaming up in its development labs, but one can envision some interesting possibilities with the semiconductor building blocks now in place. When the T1 emerged, I thought about its possible application to the Apple TV remote. Is there a market for instant, secure shopping from your TV?

Apple has shown its semiconductor design prowess and established an interesting bit of semiconductor IP. One might think of their design work as creating a library of IP cores that they can use to bring features to their end products. The vertical integration of their semiconductor design efforts has become central to numerous revenue streams and the Apple ecosystem. The semiconductor group does not need to be the best designer of everything they touch; they need to be the best designer for their customer.


— Paul Boldt is president at ned, maude, todd & rod inc., an Ottawa-based information service company focused on technology. Its target markets range from finance and investing to IP law and management consulting. Prior to this, he spent six years drafting and prosecuting patent applications at one of Canada’s largest IP law firms and three years reverse-engineering integrated circuits. Paul holds a Ph.D. from the Department of Materials Science at McMaster University.