EE Times' exclusive interview with Carver Mead delves into his achievements in the early days of semiconductors, and his legacy.
While in Silicon Valley for the 59th Design Automation Conference, I was offered the opportunity for an exclusive interview for EE Times with the legendary Carver Mead. Not one to say no to such a request, I was certainly glad to talk through some of what will be considered his legacy for generations to come.
An electronics engineer and applied physicist, Carver Mead, a Gordon and Betty Moore Professor of Engineering and Applied Science, Emeritus, at California Institute of Technology, was last month named the 2022 Kyoto Prize laureate in Advanced Technology. This is an international award bestowed by the non-profit Inamori Foundation to honor those who have contributed significantly to humankind’s scientific, cultural, and spiritual betterment; the Kyoto Prize was also bestowed upon Mead’s fellow Synaptics founder Federico Faggin in 1997.
According to the Inamori Foundation, Mead’s pioneering contributions to the field of electronics include proposing and promoting a new methodology to divide the design process of very-large-scale integration (VLSI) systems into logic, circuit, and layout designs, and separating them from the manufacturing process. He also contributed greatly to the advancement of computer-aided design technology, paving the way to VLSI design automation which, in turn, led to the rapid development of the semiconductor industry.
So, I sat down to talk through what he felt about this, to explain the things he has achieved that the award acknowledges, what are the moments that he felt proud about, and to take us through some of his history, including being badge number five at Intel, founding Synaptics, and what he is up to now. I also asked his views on today’s neuromorphic computing and whether it can ever reach anywhere near the efficiency levels of the human brain.
Here’s the interview.
Nitin Dahad: First of all, congratulations on the Kyoto award. How do you feel about that?
Carver Mead: Well, it’s very satisfying because it’s the first time that it’s been noticed that there was a lot of work early on to get the content that went into the VLSI courses. That was hard work and there was nobody around watching. The way people were doing it was nuts. I mean, they would figure out some system definition and then they’d hand that off to somebody who’d make some logic equations for it, and then they’d hand it off to somebody who’d go and make logic diagrams for it. And then they’d hand that off to somebody who’d turn those into circuit diagrams. And they’d hand that off to somebody who’d go and make a layout for that circuit diagram. And it was all done by hand, on Mylar.
It was totally nuts. And then when they went to make a mask, they had this drawing that had all the layers, all the process layers on it, and they had to make masks for each separate one. So, they would put down a rubylith (that’s a sheet of Mylar that has a very thin layer of red on it). You can see through the red, but then, for each layer, they would go and cut along the edges of the shapes very precisely with, basically, a razor blade — for the whole chip, just that layer! And then they’d give that to someone who had to go around with tweezers and pull out the little strips that had been cut. It was absolutely insane.
I took a look at that, and I said, “A, There’s no way that I could do that myself and, B, there’s no way that scales.” I had just done the scaling stuff for how far you could go because Gordon had asked me how small we could make the transistors.
Nitin Dahad: That’s Gordon Moore?
Carver Mead: Yes. And I had figured out we could at least get down to the 10 nanometer range. Well, what I actually did was figure out we could get to maybe 3 nanometer thick gate oxides. They were at 100 [nanometers] at the time. So we could go a factor of 30 in scale, a factor of a thousand in density. And so that meant we were going to make integrated circuits with millions of transistors on them. Well, there’s no way you’re going to be able to do that with the process that they were doing it with. So, I had to think through not only how are you going to make masks, but how are you going to do the whole design process? You’re not going to draw a logic diagram for something with a million transistors. You need a more structured approach to the whole design process. So, I had to figure that out for myself. And I chose to do my own chip. And that was all in the late sixties.
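Mead’s scaling arithmetic can be sketched in a few lines (an editorial illustration, not from the interview; the 4004 transistor count is added only as a period reference point):

```python
# Back-of-the-envelope version of the scaling argument: gate oxides could
# shrink from roughly 100 nm to roughly 3 nm, a factor of ~30 in linear
# scale; density goes as the square of linear scale, so roughly 1000x.
oxide_then_nm = 100
oxide_limit_nm = 3

linear_scale = oxide_then_nm / oxide_limit_nm   # ~33x
density_scale = linear_scale ** 2               # ~1100x

print(f"linear scale factor : ~{linear_scale:.0f}x")
print(f"density scale factor: ~{density_scale:.0f}x")

# Early-1970s chips held a few thousand transistors (the Intel 4004 had
# about 2,300), so a ~1000x density gain implies millions per chip.
transistors_then = 2_300
print(f"implied transistor count: ~{transistors_then * density_scale:,.0f}")
```

The square in the density line is the whole argument: shrink every linear dimension by 30 and roughly a thousand times as many devices fit in the same area.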
So finally by 1971, I had figured out enough to make my own chip. And then I got it fab’ed. Fortunately, I had some former students at Intel that would run it through the fab for me. And when that chip worked, it was just astounding because there’s so many levels of abstraction. [At Intel] I was a consultant, I wasn’t an employee, but I was badge number five. It was Bob Noyce and Gordon Moore, and Jean Jones was the admin, and Arthur Rock. So that was the original founding group. It wasn’t called Intel, then. They didn’t get the name Intel until Andy joined and then he worked with Gordon on what to call it.
It was thrilling to be part of that, but when I saw the way they were doing the design, it didn’t make any sense. It was not going to scale.
Nitin Dahad: So, it’s actually EE Times’ 50th anniversary this year. Was there a landmark you achieved in 1972 when the EE Times was born?
Carver Mead: In ‘71 I got my first chip working and there’s so many levels of abstraction to get from a system idea to a working piece of silicon. Until you do it, you’re not sure if you’ve missed something someplace. So, when that chip worked, it gave me confidence. And then, of course, the students had been watching what I was doing, and they said, “We want to learn how to do that.” Dick Pashley came and said, “Will you teach a course in that?” And I said, “Well, if you can get a dozen students, sure, I’ll do it.” Well, he got eight students. Of course, if you’re going to teach people how to do it, you have to enable them to DO it. So, we did this multi–project chip in 1971, which came back in January, 1972. And all the students had this big “aha” when their chip actually worked – that was the first VLSI class, ‘71-‘72.
And actually, that class had the seeds for what became the structured design methodology and the use of pattern generators instead of hand drawn things. And all of that was done because I had just done it myself with my own chip. So, I was fresh. I could teach the students how to do that. So I put each of their little projects on one chip. I couldn’t have a separate chip for each student, so I just put all the projects on one chip.
Nitin Dahad: People today accept chips as a normal part of their daily life, but what would’ve interested people in designing chips then, let’s take one step back, what inspired you to get into either electronics or chip design?
Carver Mead: That’s a really good question. For me, it was in 1968. I got invited to give a talk at the Device Research Conference, a little workshop that was done every year by the IEEE. They invited people doing leading-edge device work in the U.S. There were only maybe 30 of us then, and we could all sit in one room, and we got to hear about the newest stuff that people were doing. They forbade you from taking pictures or anything, so it was just people talking about the latest stuff. That year, they invited me to give a talk, so I talked about the scaling, and in the process, I discovered this thing that I told you about — the scaling and how it was going to go: the devices got smaller and they didn’t draw any more power per unit area. And they got faster. I mean, it was the biggest violation of Murphy’s law that I think there’s ever been! And on the flight on the way home, I was thinking, I’ve been working on the physics of the transistors, but that’s not the problem. The problem is how do you make a thing with a million moving parts? It’s never been done. It just changed my life. I HAD to do it and I had to figure it out.
Nitin Dahad: That is quite visionary. I mean, who would imagine we can get a million transistors on a chip at that time when the geometries were so large?
Carver Mead: Well, I had lots of arguments of course, because people didn’t believe it. So, I actually spent quite a bit of my time going around giving talks, just to try to get people to believe that it was within the laws of physics that you could make transistors that size. Therefore, it would happen because of the economics. You know, Gordon had made this compelling case for the economics of scaling and when you just go through it, it makes sense. But they didn’t want to believe it for some reason. They said, “Murphy will get you somehow, you know.” It was a hard sell.
Nitin Dahad: And then what made your eight students come on the course? Was it because they thought this is a fascinating subject, or did they have the same kind of passions that you did?
Carver Mead: Well, I think it was a combination. A normal class that I was teaching would have been 30 or 40 students. So, this was the gutsy group that saw that this was the future. It wasn’t like it was immediately obvious to everybody that this was the future.
Nitin Dahad: Tell us about the birth of Synaptics.
Carver Mead: That was a long time later. That story starts in ‘81 when Dick Feynman and John Hopfield and I started the Physics of Computation course at Caltech, because we thought that there were deeper ways of understanding computation than just Turing machines. We were having lunch one day and arguing about this and they said, “The sure way to learn about it is to teach a joint course on it.” So in ‘81 and ‘82 and ’83, for three years, we taught a joint course where we rotated who would give the lecture. And, of course, none of us had finished ideas. This was all trying to get our heads around an impossibly enormous question. But it was thrilling. And once again, the students were completely bewildered, but also, they got kind of caught up in the fact that this was how you figure stuff out that nobody knows. And they got to be part of that. The ones that got it went off and did amazing stuff because it inspired them to think beyond where people were just grinding away.
Nitin Dahad: So, you became their mentor and inspirer.
Carver Mead: Yes. The three of us did. After that, each of us went our own way with the part that we had figured was a way forward. Dick went off and did quantum computing. And I went off and did the VLSI, analog, neuromorphic stuff. And John Hopfield went off and did his spin glasses. Those things were all very interesting directions that led to amazing things.
Nitin Dahad: You’d been teaching. So, what was the reason for starting a company like Synaptics then?
Carver Mead: From my former lifetime I had known all the people at Fairchild and the people that had moved over to Intel. I had become friends with Federico when he was working with M. Shima on the 4004. Then, when they formed Zilog, I had kept track of them and would go by and see what they were doing and try to talk them into doing structured design.
Nitin Dahad: And have more arguments?
Carver Mead: Yes. It’s the way it is. Federico and I had been friends for years and one night we went up to, I think maybe it was the Mountain House, and had dinner and driving back we were talking, and Federico had already kind of gotten it in his head that there’s a company here. And I think he had a little start on one. So, he said, “Well, let’s do this together.” And, so, we decided we would do it together. And because of the old friendship, it was easy to communicate.
Nitin Dahad: Was it easy to raise money? Did you have to raise money then? Or did you say, okay, well, we’ll figure it out?
Carver Mead: Federico had a good set of connections. Art Rock had a company called Davis & Rock with Tommy Davis. I knew Art better than I did Tommy. But anyway, it just clicked. We felt that in the sensory area, there had to be something that people were just missing with the user interface. That would be vision or hearing or touch. We sort of dabbled with all of it, and the first one to really click was touch. It was actually quite remarkable all on its own. Synaptics has a whole bunch of information. I think they have two hours of my oral history.
Nitin Dahad: Well, we’ll take that as read. Let’s move on to some other stuff. One of the things we talk about a lot is how everyone is doing all these neural networks. How closely should we be copying the neuron in silicon? Neurons evolved within the constraints of their biology. Is it wise to copy that given the constraints of silicon? And how do we know we aren’t just copying neuron housekeeping functions that keep the neuron alive?
Carver Mead: Well, it’s an excellent set of questions of course, because nobody knows the answer. I mean the simplest idea of a neural network could be something that learns with examples, and back propagation was a brilliant insight. It was around 1985 or [’87] or [’89], somewhere in there, and in fact Terry Sejnowski’s journal, Neural Computation, is having a commemorative issue coming out soon, I think, and it has in it some insights from over the years it’s been published.
Well, let me answer your question. The one idea there was to learn from examples, which we do in neurons, and that one idea, with a bunch of insights having to do with implementation and all that, has turned into big business in mainstream computing today because things got to a scale and the techniques got good enough that it could do useful work. And so that’s one idea, and it’s a rather simple idea. People are just getting to the point where they’re using vision chips that actually look for the relevant information in the image, instead of just scanning out every image and then trying to figure stuff out from that, which is insane – it doesn’t scale well. It took 30 years for there to be an urgent felt need for vision systems that didn’t have big latency. But once people decided they wanted to make self-driving cars, then you needed vision systems that don’t have big latency, because it’s obvious you don’t wait for a frame to come around to find out there’s something moving in the image — that’s not going to work. It takes that long before there’s some connection between the technique and a commercially viable product direction. Those things can happen fast in software because everything’s digitized already.
But even in software, it took that long before the deep learning stuff took off. There were basically no new ideas there — the path of evolution of how you do it and how you use silicon to do it — which wasn’t at all obvious to people 30 years ago. And, there’s starting to be now some hearing systems that recognize words and that sort of thing, and those have used some of the things we’ve learned about the hearing system of animals. But it seems to take a very long time to make that connection when there’s anything really new.
If it’s just the application of stuff we know already, that can happen very fast, because the platform is there. But if you have to build the platform, the intellectual platform as well as the physical platform, then it takes longer because, as part of that evolution process, there has to be a commercial product at each step or else it can’t keep going. So, that’s a constraint on the evolution process and that’s why it takes longer.
Nitin Dahad: So, where do you think we are with neuromorphic chips today? The most famous is Intel Loihi, but there are others out there, and there are people doing spiking neural networks and all kinds of things. Where do you think we are? How far do you think we’ve got?
Carver Mead: Well, the vision systems have pioneered an important idea, and that is that it’s the changes in the information that are meaningful, not the mass of it. So, like in your visual image, the picture is nice, but actually what you act on is the changes. And that then became the beginning of event-driven computing. Now event-driven computing has been known for a long time in principle. In terms of really making real-time things that do that — I think they are now called dynamic vision sensors.
And that’s a deep idea. It sounds trivial, but to actually make it work well, we’re at the very beginning and it’s very hopeful that people are now, like you mentioned Intel and some of the others, building the event in as part of the way it works and that’s a very important new direction. And it sounds obvious, but it isn’t at all obvious how you actualize that, and it has to go find places where it works. And the dynamic vision thing is the first place where it’s kind of hooked in. But it takes that long. It’s amazing.
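The change-driven idea Mead describes can be sketched in a few lines (an editorial illustration; the function name and threshold are invented for this sketch, and real dynamic vision sensors do this per pixel in analog hardware rather than by comparing frames):

```python
import math

# Sketch of the event-driven idea: instead of shipping every full frame,
# report an event only for pixels whose log-intensity changes by more than
# a threshold -- the sensor reports changes, not images.
def events_between(prev_frame, next_frame, threshold=0.15):
    """Return (row, col, polarity) events where brightness changed enough."""
    events = []
    for r, (prev_row, next_row) in enumerate(zip(prev_frame, next_frame)):
        for c, (p, n) in enumerate(zip(prev_row, next_row)):
            delta = math.log1p(n) - math.log1p(p)
            if abs(delta) > threshold:
                events.append((r, c, 1 if delta > 0 else -1))
    return events

# A static scene produces no events; one changed pixel produces one event.
prev = [[0] * 4 for _ in range(4)]
nxt = [row[:] for row in prev]
nxt[1][2] = 200                      # one pixel brightens
print(events_between(prev, nxt))     # -> [(1, 2, 1)]
print(events_between(prev, prev))    # -> []
```

The point of the sketch is the asymmetry: a static scene costs nothing to report, so latency and bandwidth scale with activity in the scene rather than with frame rate.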
Nitin Dahad: With neuromorphic computing and trying to emulate neurons, can you get to the efficiency level of the brain? I mean, you can never get that with conventional computing, but do you think analog computing might be able to help there?
Carver Mead: It is astounding how much effective computation gets done in the 20 watts in our brain. And that is really what we set out to try to figure out when we started the whole neuromorphic thing. We wanted to understand that phenomenon: how can it possibly be? You see it once you’ve tried to make applications that do anything even remotely like what animals do — even insects. An insect can do better than any of our self-navigating robotic things. And they’re little bitty things and they run on a milliwatt.
It’s astounding. We still don’t understand it. We’ve got some insights and it’s helped the interface between neurobiology and synthetic computing — making chips that do stuff is a very rich area. It has just begun to generate things that are commercially viable, but to evolve rapidly, they have to become commercially viable.
Nitin Dahad: So, does analog computing play an important part in that?
Carver Mead: That’s a good question. It’s difficult to see what should be done in analog and what should be done in digital. In the neural system in brains of animals, the signals that go over any appreciable distance are all digital — the nerve spikes. The computation in the dendritic tree of neurons is all analog, or it’s a combination. You have signals that come from the nerve spikes of other neurons and you’re aggregating those in an analog way, but they’re quasi-digital in nature.
No-one has yet been successful in building a thing that works like the dendritic tree of neurons. It’s a little surprising, but it’s a very difficult thing. The challenge, as a technical achievement, of realizing a thing that works like a real dendritic tree requires a level of gain control and stability that’s beyond anything that has been done. That’s what I was trying to do when I finally gave up; the technology we had at the time wasn’t good enough to be able to do it.
And, of course, the technology has evolved to be more digital. We still have analog stuff in the sensory end of things, so maybe that’s where the next thing is going to happen. But it’s fits and starts.
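The split Mead describes, digital spikes over distance with analog aggregation in between, is often illustrated with a leaky integrate-and-fire model. Here is a minimal sketch (an editorial illustration with invented parameters, far simpler than the real dendritic tree he says no one has yet built):

```python
# Minimal leaky integrate-and-fire sketch of the analog/digital split:
# input spikes are digital events, the neuron aggregates them in an
# analog way (a leaky sum), and it emits a digital spike of its own
# when a threshold is crossed.
def leaky_integrate_and_fire(spike_train, leak=0.9, weight=0.3, threshold=1.0):
    """Return output spike times for a stream of 0/1 input spikes."""
    v, out = 0.0, []
    for t, s in enumerate(spike_train):
        v = leak * v + weight * s     # analog accumulation with decay
        if v >= threshold:            # digital event sent over "distance"
            out.append(t)
            v = 0.0                   # reset after firing
    return out

# Sparse input decays away before reaching threshold; a burst fires.
print(leaky_integrate_and_fire([1, 0, 0, 0, 1, 0, 0, 0]))   # -> []
print(leaky_integrate_and_fire([1, 1, 1, 1, 1, 1, 1, 1]))   # -> [3, 7]
```

The leak is what makes the aggregation analog in spirit: timing matters, because contributions fade unless reinforced, which no purely digital counter captures.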
Nitin Dahad: Let me come to something about you now, for today. What do you get up to today and what gives you the most joy?
Carver Mead: Well, the thing I did after the neuromorphic stuff was a simpler and more unified way of looking at electrodynamics and quantum physics, which are really ONE discipline, but they’ve been taught as separate disciplines. And so they have the disease that they don’t really fit together. So, people end up teaching two disciplines and then the students never quite get them to fit. I’ve done a first pass at what you would do to make that one discipline, and it turns out you can do it at the first level. It works much better for both disciplines and they fit together. So that was very satisfying, but that was in the year 2000.
I did a new version of that a couple of years ago during the pandemic with John Cramer. We have a paper on that in a journal called Symmetry. It came out two years ago (arXiv:2006.11365) and has some insights in it beyond what’s in the little book, Collective Electrodynamics, that I wrote in 2000. So that’s still a thing I’m actively working on. It’s actually very deep, and it ends up that the ideas can be simple if you don’t lose the key concept.
Nitin Dahad: You’ve been awarded this lifetime achievement prize. What is the one thing that you feel really proud of as your legacy, your achievement?
Carver Mead: Well, for the period that was addressed by the Kyoto Prize, it was the development of a new way of looking at digital design that recognized that it would scale to a very large scale. So, it had to be a more system-level design. It had to incorporate the physical properties of microelectronic technology, and that had to be done as a unified thing, not as separate disciplines. Each step of the way had to fit with the one before it, or you didn’t end up with a thing that worked, and getting that all to fit together was really what was honored in the Kyoto Prize. That was very satisfying because it was a period when nobody cared about it.
It just had to be done. And once it was done, then it looked obvious. So that was actually very satisfying. But the thing I’m the most pleased about is what I call collective electrodynamics — the development of electrodynamics from a quantum basis rather than from some funny mechanical ideas.
Nitin Dahad: Now my final question. You were quite a visionary when you understood the potential of scaling transistors and materials. What’s your vision now for the future of silicon: how far can we go, are we doing the right things, or is there something that we should be doing differently?
Carver Mead: I’m not close enough to everything that’s going on in microelectronics to make a cogent statement about that. It’s become a huge field. It’s wonderful what’s happening. But, as always, there needs to be a next important idea. And, if I knew what that was, I’d be doing it.
Nitin Dahad: On that note, thank you very much, Carver.
This article was originally published on EE Times.
Nitin Dahad is a correspondent for EE Times, EE Times Europe and also Editor-in-Chief of embedded.com. With 35 years in the electronics industry, he’s had many different roles: from engineer to journalist, and from entrepreneur to startup mentor and government advisor. He was part of the startup team that launched 32-bit microprocessor company ARC International in the US in the late 1990s and took it public, and co-founder of The Chilli, which influenced much of the tech startup scene in the early 2000s. He’s also worked with many of the big names—including National Semiconductor, GEC Plessey Semiconductors, Dialog Semiconductor and Marconi Instruments.