Synopsys let its AI loose on all aspects of the chip design process for the first time. The results are more than human.
Is entirely autonomous chip design possible? Can AI behave as an “artificial architect,” designing and optimizing entire chips?
This is the question Synopsys CEO Aart de Geus set out to answer in his keynote presentation at Hot Chips. The answer is a resounding “yes”.
Synopsys has long been working on using AI in its EDA tools (according to De Geus, all Synopsys tools today use AI in one form or another). Its flagship AI-powered tool, DSO.ai, launched last year. DSO tackles the geometry of chip design, that is, all the physical aspects of the design, where some other AI-based tools address only parts of this task. DSO stands for design space optimization, Synopsys' vision of the next step beyond today's design space exploration process.
The size of the task cannot be overstated — the search space for place and route alone is 10 to the power of 90,000, orders of magnitude greater than the search space for the fiendishly complicated Chinese game of Go at 10 to the power of 360.
Synopsys' technology is based on a technique called reinforcement learning — a branch of machine learning that doesn't require a massive influx of training data. Instead, the system starts from zero, designing chips randomly, and is given a score each time for how well it does. Over time, it designs many, many chips by trial and error and tries to optimize its score, effectively learning chip design from scratch. This technique means Synopsys doesn't need access to huge amounts of data (in this case, chip designs, which are its customers' IP) to train its algorithms.
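The trial-and-error loop described above can be sketched in miniature. The snippet below is a toy illustration only, not Synopsys' algorithm: a simple epsilon-greedy agent starts with zero knowledge, tries hypothetical "placement strategies" (their hidden quality scores are invented for the example), and learns purely from the noisy reward it receives for each attempt.

```python
import random

def run_trial_and_error(n_episodes=2000, seed=0):
    """Toy reinforcement-learning loop: the agent knows nothing at the
    start and learns only from a per-attempt reward score."""
    rng = random.Random(seed)
    true_quality = [0.2, 0.5, 0.8]   # hidden quality of 3 made-up strategies
    value = [0.0, 0.0, 0.0]          # learned estimates, starting from zero
    counts = [0, 0, 0]
    eps = 0.1                        # fraction of attempts spent exploring
    for _ in range(n_episodes):
        if rng.random() < eps:
            a = rng.randrange(3)                       # explore randomly
        else:
            a = max(range(3), key=lambda i: value[i])  # exploit best so far
        reward = true_quality[a] + rng.gauss(0, 0.1)   # noisy score
        counts[a] += 1
        value[a] += (reward - value[a]) / counts[a]    # incremental mean
    return value

est = run_trial_and_error()
best = max(range(3), key=lambda i: est[i])
print(f"learned values: {est}, best strategy: {best}")
```

After a few thousand attempts the agent's value estimates converge toward the hidden qualities and it reliably picks the best strategy — the same learn-from-score principle, at a vastly smaller scale than a 10^90,000 design space.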
However, customers can take this a step further, using supervised learning on top of Synopsys’ reinforcement learning algorithm for an even better result. The AI can then learn from chips the customer has previously taped out. De Geus’ examples in his Hot Chips presentation showed this technique can help the AI converge much faster on an optimum design. A customer CPU design done by trained DSO achieved 9-13% less total power, 30% less leakage power and converged 2-5 times faster than the best human effort by a team of experts. The process was overseen by a single engineer.
In a recent interview with EE Times, De Geus discussed the broader implications of AI-designed chips.
“I have heard so many times: design has kept up with Moore’s law,” he said. “Let me flip that around — technology has kept up with what design can do. EDA and Synopsys are co-responsible for making something extraordinary happen.”
De Geus sees huge parallels between the AI revolution and the dawn of electronic design automation as we know it.
“In many ways, I consider that we cracked the code on opening a whole new phase of how design is occurring,” he said. “35 years ago we cracked the code and it changed design – the similarities are uncanny.”
In the mid-1980s, Synopsys came out with design automation software that could do, in a few hours, a job that took a designer weeks, and get a better result.
“The reaction was always the same: It can’t be true!” he said. “What was interesting is that maybe just a few percent were thinking: this is dangerous, it’s taking my job away. The majority said: this is great, I can run with that because now I can do much more.”
What’s different about AI-powered tools like DSO is the shift from individual tools to the whole design flow — taking the tools for the individual tasks and fusing them together. While Synopsys had started to do this 35 years ago – as De Geus says, its synthesis technologies were a mini-fusion between synthesis, timing and technology — DSO is now adding a layer of intelligence.
“Helping the human designer move up to be more of an architect is not any different than it was 35 years ago,” he said. “[It’s about] having the designer move up to deal with bigger problems because certain pieces are being solved in a way that’s even better than they could do, even if they had infinite time.”
Customers want the same increases in quality of results, time to results and cost of results that they did back then, but complexity has increased many times over due to new technologies like gate-all-around transistors and chiplets. Power and thermal considerations, and others, have also become critical.
“Half a dozen other requirements came in, and of course, that makes this optimization problem so much more multidimensional,” De Geus said. “Therein lies the power of AI, because AI has an ability to look at many things at the same time.”
For the first time, alongside DSO for physical layout optimization, Synopsys has allowed AI to take control of some of the other design domains. Aside from the physical space (cell optimization, etc.), there's the structural/architectural space (clock scheme optimization) and the behavioral space (application software optimization).
On one customer design, DSO alone reduced power consumption around 5% versus a best-effort design by a team of human experts. Giving AI concurrent access to the clocks resulted in a substantial improvement – another 13%. And allowing AI to optimize the application software – the workload running on the chip – to avoid dynamic power peaks resulted in another 10% reduction in the total power consumed by the design. The result was a 25.6% overall reduction in total power consumption, all done by AI. In other words, not only had AI rapidly reduced the time it takes to design a chip, it had also come up with a dramatically better result.
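The three figures compound multiplicatively rather than add, which is why 5%, 13% and 10% yield 25.6% rather than 28%: each stage cuts a share of the power that remains after the previous stage. A few lines of Python (figures taken from the article; the stage names are just labels) make the arithmetic explicit:

```python
# Each stage scales the remaining power, so the savings multiply, not add.
stages = {
    "DSO physical optimization": 0.05,  # 5% reduction
    "concurrent clock access": 0.13,    # another 13%
    "application software": 0.10,       # another 10%
}
remaining = 1.0
for name, cut in stages.items():
    remaining *= (1.0 - cut)            # power left after this stage
total_saving = 1.0 - remaining
print(f"total power saving: {total_saving:.1%}")  # prints "total power saving: 25.6%"
```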
Allowing the AI to change clocking schemes is a huge milestone; the AI has advanced from the strictly physical domain into microarchitecture decisions normally reserved for human architects.
Despite AI’s new microarchitecture capabilities, De Geus was clear that humans will always be involved in the design process; his view is we should think of AI more as the next level of automation, or as a more advanced tool we can use.
The speedups achieved by Synopsys' AI architect are extremely significant in an industry where 18-24 months to tape out is the norm. What are the knock-on effects of drastically condensing this process?
“All good news!” De Geus said. “If you can do something, just for argument’s sake, in half of the time, and in addition it’s better than what you could have done manually with fewer people, what that says is we’re now going to do chips for each of the verticals.”
The ASSP/ASIC model will become more attractive to silicon vendors and will see a rapid gain in popularity, De Geus predicts.
“Some people would say this is software defined chips, because it is what you want to run on this chip that determines what the criteria for success are,” he said, noting that the AI is also capable of tweaking the application software to achieve success within power budgets.
With a total power saving of 25.6% demonstrated in a short time using AI tools, the effect is equivalent to going down one, or even two, process nodes. Will AI tools help us prolong Moore’s law, or help us squeeze more performance out of less advanced process nodes?
Both, said De Geus.
“When you suddenly get the type of improvements that we can get, there’s a real question: Do you need to go to the next node?” he said. “I know the answer of the most advanced guys. They say well, of course I’ll go to the next node and use DSO.”
As well as those continually pushing the state of the art in chip design, AI-powered tools can also greatly accelerate entry to the market for companies that don't have chip design pedigree.
“I think that’s going to be particularly relevant when we think about those verticals that can invest themselves in doing chips, or at least being close to it,” he said.
Here, De Geus has an interesting example: hyperscale data centers.
We have seen hyperscalers build chip design expertise from zero to make chips tailored to their own workloads – specifically, AI acceleration at scale. Could an AI architect build better AI accelerator chips to accelerate AI chip design, to build better AI accelerator chips… this may sound like science fiction, but Synopsys’ (and its customers’) work has brought us closer to this singularity than ever before.
As De Geus said in his keynote: “We often say that success is the sum of our efforts. No, it’s not. Success is the product of our efforts – a single zero and everybody gets zero. That’s just another way of saying it’s about trust, and teamwork.”
In other words, designers will have to learn to trust AI tools before AI is granted permission to take over design tasks that are today done by humans.
“If you can do something in half the time, theoretically you can do twice as much,” De Geus earlier told EE Times. “But if you can do something in half the time and it gets better, and you trust it, you’re going to decide, I’m no longer going to do these things, I’m going to focus on the next level up.”
De Geus ended his keynote by saying that whether humans allow AI into the chip design tool chain or not, chip design and EDA will have a massive impact on humanity in the next 20 years.
This article was originally published on EE Times.
Sally Ward-Foxton covers AI technology and related issues for EETimes.com and all aspects of the European industry for EETimes Europe magazine. Sally has spent more than 15 years writing about the electronics industry from London, UK. She has written for Electronic Design, ECN, Electronic Specifier: Design, Components in Electronics, and many more. She holds a master's degree in Electrical and Electronic Engineering from the University of Cambridge.