In one of nearly 200 papers, researchers from the Georgia Institute of Technology (GIT) demonstrated that low-power devices at the edge of the network can run reinforcement learning on power budgets similar to those they now use for inference jobs.

GIT described a 65nm chip delivering up to 9.1 TOPS/W for swarm robotics. Using the chip, robots communicated with each other to map out a previously unknown course.

The team used time-domain processing to slash energy requirements. In a separate paper, a group from the University of Texas applied the same technique to cut power consumption on a 40nm CNN accelerator.

The biggest challenge with reinforcement learning is defining effective rewards for the system, said Arijit Raychowdhury, an associate professor at GIT. With so many parameters to choose from, it’s easy to go wrong. In the short term, engineers likely will customize just the last layer of a neural network at the edge, something GIT has shown on drones, he said.

Next, the group hopes to build a processor-in-memory chip if it can get access to TSMC’s resistive RAM cells, perhaps with support from Samsung.

The Korean giant is clearly among those interested in such a project. At the event, a Samsung engineer described an inference block on its latest Exynos chip and said adding some learning capabilities was a future goal for handsets.

Reinforcement learning with time-domain processing hits edge network power budgets. Click to enlarge. (Source: Georgia Tech)

Sony tells the history, shows internals of Aibo

One of the engineers who worked on Sony’s original Aibo gave an invited talk on the history and outlook for consumer robots.

Twenty years ago, Aibo ran on a 300 MHz processor, saw through a fish-eye lens, and flashed LED eyes to show emotion. The latest version runs on a 2.4 GHz quad-core Snapdragon from Qualcomm, uses imagers with a more natural, wide field of vision, and conveys more nuanced feelings with OLED eyes.

Aibo hears his master’s voice. Click to enlarge. (Source: EE Times, ISSCC)

R&D vice president Masahiro Fujita (above left) said he has continued working on robots while versions of Aibo have come and gone. In 2010, he showed demos of robots that could carry a glass of wine and pick up a toothpick. A year later, a model used a simultaneous location and mapping camera to find and pick up trash in a room.

In 2015, Sony set up a joint venture called Aerosense to build drones that could handle inspections and surveys. Today, Sony shows a vision of a robotic kitchen that can help an elderly person prepare for a dinner party.

Fujita described three big leaps ahead. Robots will share experiences with each other over 5G and cloud links. They will use higher-speed cameras and SoCs to lower response times. And they will add finer-grained force detection to enable more lifelike interactions, like playing with a dog.

Inside Sony’s latest robotic dog. Click to enlarge. (Source: ISSCC)

SK Hynix guns for Intel Optane with new DIMM

SK Hynix described work on a managed DIMM card geared to compete with the 3D XPoint cards Intel is now sampling.

The MDS DIMM can hold 512 GBytes of DDR4 memory. It will run at 12W per card, compared to 15W for Intel’s Optane. And it will have write/read latency of 100/75ns, compared to 500/100ns for Optane.

SK Hynix also aims to make the MDS card a low-cost alternative by using wire-bonding packaging rather than the through-silicon vias used in Samsung’s 128 GByte DIMMs. It aims to tape out the MDS controller chip this year and start shipping cards in 2020.

The company should not have been allowed to give a paper before it taped out a chip, said memory analyst Jim Handy of Objective Analysis. However, he said the DRAM die it described was intriguing because it was 26% smaller than a standard production chip. That could make for a significantly cheaper card, if the company can ramp sales to volume levels approaching existing parts.

“I believe that SK Hynix was trying to get the capacity of the Samsung DIMM but at a lower cost. They also want to tap into Intel’s new processor support of slow things on the memory bus, which Intel needs for Optane DIMMs and SK Hynix needs so that it can use ECC on their managed DRAM,” Handy added.

The MDS DIMM just lacks a working controller. Click to enlarge. (Source: ISSCC)

eSilicon hits 56 Gbits/s at less than 250 mW

Engineers from eSilicon showed a 7nm transceiver drawing less than 250 mW while running at up to 56 Gbits/second using PAM-4. To hit the low power, they co-optimized the analog and DSP blocks and exploited the capabilities of the TSMC node, which they praised.

The block is a key ingredient for leading-edge ASICs the company hopes to make for data center, networking and telecom customers. MediaTek showed a roughly similar block aimed at similar designs.

The engineers behind the design were from a former Marvell design team in Pavia, Italy, hired by eSilicon.

56G links aim to drive comms ASICs. Click to enlarge. (Source: ISSCC)

Researchers from Imec showed a Bluetooth 4 SoC to power a wearable patch. The 55nm chip can deliver medical-grade readings on several metrics including heart rate and respiration for more than three weeks using two 630 mAh zinc-air batteries.

The hardware, the latest of several medical patches from Imec, looks good, said Allison Burdett, CEO of Toumaz Technology, which may license the design. The problem is the market, said Burdett, whose startup has pioneered the field for years.

Hospitals have not yet developed processes and practices around digital health. Nor have they embraced Bluetooth, which may not have enough range for their needs. In addition, anything that touches patients needs to be incinerated, so there’s no point in making reusable devices, she said.

So far, big companies are standing back, letting small companies pioneer the field. But changing hospital practices requires major surgery, she added.

Wearable patch is ready for digital health, hospitals—not so much. Click to enlarge. (Source: ISSCC)

Hitachi hits new level in sensitivity for accelerometers

Takashi Oshima (below), a senior Hitachi researcher, claims his team has developed the most sensitive accelerometer to date. The three-chip set cuts noise to roughly a ninth of the level affecting today’s chips, thanks to the group’s work on seven noise sources, some not previously explored.

The milestone is just a research project so far, but Hitachi hopes to commercialize it. One application is embedding the chips into buildings and bridges to provide early warnings of structural failures.

Hitachi accelerometer could fend off bridge disasters. Click to enlarge. (Source: ISSCC)

TI accelerometer hits new lows in power, cost

Engineers at Texas Instruments had an idea for using an FRAM capacitor in a low-end microcontroller as a strain sensor to create a basic accelerometer function. The resulting 16 MHz chip carries essentially no additional cost and draws just 1.4 microamps, a new low. The one-dimensional accelerometer, shown by design engineer Sudhanshu Khanna, could be used in a security fob for car keys, a wake-up device for a remote controller or other apps.

TI takes accelerometer to new lows. Click to enlarge. (Source: ISSCC)

Intel takes 5G beam forming to 70+GHz on 22FFL

Intel used its 22FFL FinFET process to design a 71-76 GHz beam-forming transceiver module. Engineers dubbed it a first step to a fully digital beam forming device. It also is an early effort in the kind of 90+GHz experiments U.S. regulators are encouraging for a potential 6G network someday.

The 64-element phased array uses 2x2 direct conversion and supports a 60-degree coverage area. Its link budget is good enough to handle data rates up to 10 Gbits/second over two meters in the lab, suggesting it could expand to cover tens of meters, an Intel researcher said.

The paper was one of several Intel used to showcase its 22nm node geared to compete with FD-SOI. It was one of several papers on millimeter-wave transceivers given the rise of the 24-39 GHz bands for some 5G cellular deployments and even higher bands for ADAS chips driving automotive radar.

Intel packs 64 transceivers in phased array. Click to enlarge. (Source: ISSCC)

IBM researchers drive wired link to 100 Gbits/second

Researchers at IBM’s Zurich lab showed a 100 Gbit/s link drawing 1.1 picojoules/bit and using a combination of PAM-4 and NRZ modulation. The links are already being planned for uses such as chip-to-chip connections. Six of the eleven engineers behind the 14nm design have joined Cisco Systems.
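As a back-of-the-envelope check, the eSilicon and IBM figures reported above can be put on a common energy-per-bit footing. The snippet below is a minimal sketch using only the numbers stated in the article; the function name is my own.

```python
def picojoules_per_bit(power_watts, rate_gbps):
    """Energy per bit in pJ: power [W] divided by data rate [bits/s]."""
    return power_watts / (rate_gbps * 1e9) / 1e-12

# eSilicon: under 250 mW at 56 Gbit/s -> an upper bound of about 4.5 pJ/bit
esilicon_pj = picojoules_per_bit(0.250, 56)

# IBM: 1.1 pJ/bit at 100 Gbit/s implies roughly 110 mW of link power
ibm_power_mw = 1.1e-12 * 100e9 * 1e3
```

By this rough measure, the IBM research link is about four times more energy-efficient per bit than the eSilicon transceiver, though the two target different use cases.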

IBM transceiver hits 56G on a power budget. Click to enlarge. (Source: ISSCC)