Google bets big on lower latency for cloud services
At the International Solid-State Circuits Conference (ISSCC), a Google fellow revealed that data center chips need lower latencies to keep up with the rise of sensor data. In addition, new kinds of computer architectures and security procedures will emerge to handle the challenges, said technical executives from ARM, Intel, and Fujitsu.
Google's data centers process "big data with little time," said Luiz Barroso, technical lead of the search giant's data center group. For example, during a Google search "almost as fast as you can type we are searching ridiculous amounts of data computed on the fly to give you a seamless experience with a system that almost guesses what you will do next," he said.
As wearables such as Google Glass emerge, imagine "how much bigger the problem will be when each user can talk to their services in addition to all the sensor data that will be available," he said. "Latency hiccups will compromise the performance of your system," he added, urging engineers to explore new circuit techniques to handle the problem.
Specifically, Barroso called for help with an emerging problem of microsecond-class latencies between two systems communicating inside a data center.
"We haven't been paying attention to it," he said, noting that flash and emerging memory technologies may sport latencies in the tens of microseconds. "We don't have the underlying mechanisms to make it easy for programmers to deal with microsecond-level latencies. What if you could deal with them in processors?" he asked.
The kinds of latencies between non-volatile memory and a GPU, for example, "could be supported in microarchitecture in a much more fundamental way," he said.
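Barroso's point can be illustrated with a minimal Python sketch (an illustration, not from the talk): today's software either blocks through the OS scheduler, whose wakeup granularity is typically far coarser than a microsecond, or busy-polls the clock, which hits microsecond targets but burns an entire core while doing so.

```python
import time

def os_sleep_wait(seconds):
    """Block via the OS scheduler. Wakeup granularity is usually tens of
    microseconds to milliseconds, so a 10-microsecond wait overshoots badly."""
    start = time.perf_counter()
    time.sleep(seconds)
    return time.perf_counter() - start

def busy_poll_wait(seconds):
    """Spin on the clock until the deadline passes. Accurate at microsecond
    scale, but the CPU does no useful work while spinning."""
    start = time.perf_counter()
    deadline = start + seconds
    while time.perf_counter() < deadline:
        pass
    return time.perf_counter() - start

target = 10e-6  # a 10-microsecond wait, roughly a fast flash read
slept = os_sleep_wait(target)
spun = busy_poll_wait(target)
```

Neither option is attractive at microsecond scale, which is the gap Barroso suggests microarchitecture could fill.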
Tech execs from ARM, Intel, and Fujitsu did not directly address the issue of latency Barroso raised. However, they agreed that advances in memory and security will reshape tomorrow's computer architectures.
John Goodacre, a director in ARM's processor group, showed research in Europe on a microserver based on arrays of 2.5D chips that put 128 CPUs next to memory on a substrate. A separate array of I/O chips will allow I/O and processor technologies to scale independently, he said.
The Euro Server research program is using 2.5D stacks of CPUs and memory along with separate shared, virtual I/O chips.
Steve Pawlowski, an Intel senior fellow, said new memory architectures on the horizon will give birth to new computing architectures for the data center. He shared with ARM's Goodacre the goal of getting memory accesses down to as little as five picojoules/byte.
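To put the 5 picojoules/byte goal in perspective, a back-of-the-envelope calculation (the 100 GB/s bandwidth figure below is an assumption for illustration, not from the talk) shows the memory power it would imply:

```python
PICOJOULE = 1e-12          # joules per picojoule

energy_per_byte_pj = 5      # the stated 5 pJ/byte target
bandwidth_bytes_per_s = 100e9  # hypothetical 100 GB/s memory stream

# Power = energy per byte x bytes per second
power_watts = energy_per_byte_pj * PICOJOULE * bandwidth_bytes_per_s
# 5e-12 J/B * 100e9 B/s = 0.5 W for the memory accesses alone
```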
Pawlowski, who is about to take on a new role at Intel heading up security research, called for a quantum leap in work on security. "At some point everything will have to be encrypted, and we will have to have a safe place to save a strong key protected in hardware," he said.
Yasunori Kimura, president of Fujitsu Labs of America, agreed. He showed a technique for accelerating homomorphic encryption by a stunning 2,048-fold using batch encryption and batch encrypted calculations.
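Kimura did not detail the batching scheme, but the underlying idea of homomorphic encryption, computing on data without decrypting it, can be sketched with a toy additively homomorphic scheme (Paillier, not the lattice-based fully homomorphic systems such research typically targets). The primes below are deliberately tiny and completely insecure; they exist only to make the math visible.

```python
import math
import random

# Toy Paillier parameters: tiny primes for illustration only, NOT secure.
p, q = 17, 19
n = p * q            # public modulus
n2 = n * n
g = n + 1            # standard generator choice
lam = math.lcm(p - 1, q - 1)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # precomputed decryption helper

def encrypt(m):
    """Encrypt plaintext m (0 <= m < n) with a fresh random nonce r."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# The homomorphic property: multiplying ciphertexts adds the plaintexts,
# so a server can compute a sum without ever seeing the inputs.
a, b = 42, 7
c_sum = (encrypt(a) * encrypt(b)) % n2
total = decrypt(c_sum)  # equals a + b
```

Batch techniques like the one Kimura described pack many plaintexts into each ciphertext so one encrypted operation acts on all of them at once; this toy shows only the single-value homomorphic property.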
Separately, Fujitsu is working on a sensor hub called Sprout to address the rise of data from wearable systems. "Personalized big data is the value of wearables," Kimura said.
He showed two generations of a handheld version of the Sprout hub. Ultimately the hub will be integrated in smartphones, he said.
- Rick Merritt