Machine learning for the future

By Jessica Lipsky

Google Senior Fellow Jeff Dean outlines the history of machine learning (ML) and neural networks, and various ways to program models to take advantage of raw data coming through in the form of images or audio.

With the growing number of applications that rely heavily on computer vision, language understanding and robotics, it can't be denied that we are now living in the era of deep learning and large-scale neural networks. What we now want most from machine learning, Google Senior Fellow Jeff Dean told the audience at his SIGMOD 2016 keynote yesterday (Tuesday, June 28), is “understanding.”

“We now have sufficient computation resources, large enough interesting data sets,” Dean told SIGMOD attendees. “We can store tons of interesting data but what we really want is understanding about that data.”

In his keynote talk, Dean outlined the history of machine learning (ML) and neural networks, and various ways to program models to take advantage of raw data coming through in the form of images or audio. He also detailed how ML has taken shape at Google, which recently announced that it will open a machine learning center in Europe. The company has also developed its own accelerator chips for artificial intelligence, which it calls tensor processing units (TPUs), named after the open-source TensorFlow library it released last year.
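
To make the "raw data in, predictions out" idea concrete, here is a minimal sketch of the kind of model TensorFlow lets developers express. The 28×28 grayscale input, layer sizes and ten-way classification are illustrative assumptions, not anything Dean described:

```python
# A toy image classifier expressed in TensorFlow (illustrative only;
# the 28x28 grayscale input and ten output classes are assumptions).
import tensorflow as tf

model = tf.keras.Sequential([
    # Learn local visual features directly from raw pixel values.
    tf.keras.layers.Conv2D(32, 3, activation="relu",
                           input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    # Map the features to a probability over ten illustrative classes.
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model.fit(images, labels, epochs=5)  # images: arrays of raw pixels
```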

“Over time we saw more and more successes in applying these techniques to different kinds of problems. This has led to really incredible growth in use of the technology across hundreds of teams at Google,” Dean said.

__Figure 1:__ *Deep learning trends at Google. (Image source: SIGMOD/Jeff Dean)*

Dean pointed to Google’s speech recognition team, which reduced word errors by 30% through the use of neural networks. The team used the networks to replace the acoustic model in its speech recognition pipeline — the stage that maps raw audio waveforms to sounds and words — and achieved “the biggest single improvement in two decades.”
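
The talk did not give the architecture behind that number, but the shape of a neural acoustic model is easy to sketch: a network that scores each short window of audio features against a phoneme inventory, with a decoder turning those scores into words. All sizes below are assumptions for illustration, not Google's configuration:

```python
# Hedged sketch of a neural acoustic model: score windows of audio
# features against a phoneme inventory. Sizes are assumptions.
import tensorflow as tf

FRAME_DIM = 40      # e.g. 40 log-mel filterbank energies per frame
CONTEXT = 11        # frames of acoustic context fed to the network
NUM_PHONEMES = 40   # illustrative phoneme-inventory size

acoustic_model = tf.keras.Sequential([
    tf.keras.layers.Dense(512, activation="relu",
                          input_shape=(CONTEXT * FRAME_DIM,)),
    tf.keras.layers.Dense(512, activation="relu"),
    # One probability per phoneme for the centre frame; a decoder
    # later turns these per-frame scores into word hypotheses.
    tf.keras.layers.Dense(NUM_PHONEMES, activation="softmax"),
])
```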

__Figure 2:__ *Google Senior Fellow Jeff Dean. (Image source: SIGMOD)*

The fundamental problems being solved by ML and neural networks can be found in other fields such as medical and satellite imaging. In those cases, a house may need to be identified on a map for a solar-panel consultation, or a diabetic patient may need to be screened for ocular degeneration. The same models used for speech recognition can easily be tweaked to tackle these problems.
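
One hedged sketch of that "same model, different task" idea is transfer learning: freeze a network pretrained on generic images and retrain only a small new head. The base network, input size and two-class rooftop example below are illustrative assumptions:

```python
# Hedged sketch of reusing one vision model for another task:
# freeze a pretrained base network and retrain only a new head.
# The base network and two-class rooftop example are assumptions.
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # keep the generic visual features fixed

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),  # e.g. rooftop / not
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```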

“There is a lot of parallelism in these models,” Dean added, pointing to the Google Translate app that can now translate signs into a different language in real time using pixel identification.

A handful of hurdles still stand on the path to ML and neural-network understanding. Models must be able to learn unsupervised, engage in multi-task and transfer learning, and take actions and learn from the world (also known as reinforcement learning). Dean said researchers are beginning to look at privacy-preserving techniques in machine learning, and added that model structure — the part of machine learning where human interaction plays a big role in managing model weights — is of great import.
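
For the reinforcement-learning item on that list, the textbook illustration is a tabular Q-learning update, in which an agent improves its action-value estimates from observed rewards. This toy sketch is generic, not a Google system:

```python
# Toy tabular Q-learning update: an agent nudges its estimate of an
# action's value toward the observed reward plus the discounted best
# future value. Generic textbook form, purely illustrative.
def q_update(Q, state, action, reward, next_state,
             alpha=0.1, gamma=0.99):
    best_next = max(Q[next_state].values())   # best future value
    target = reward + gamma * best_next       # what we just observed
    Q[state][action] += alpha * (target - Q[state][action])

# Example table: Q = {"s0": {"left": 0.0, "right": 0.0}, ...}
```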

“It’s important to make sure the data you provide actually reflects the policies you want, or you can impose higher-level policies on top of the model,” he said.

From a systems perspective, the next challenge is how to “use high-level descriptions of machine learning computations and map these efficiently onto a wide variety of different hardware.” Dean said he also wants to integrate machine learning into more traditional data processing.
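
TensorFlow already gestures at this mapping problem: the same high-level operations can be placed on different devices without rewriting the computation. A small sketch (the device strings are TensorFlow's standard names; which devices actually exist depends on the machine):

```python
# The same high-level TensorFlow expression, placed on different
# hardware by the runtime. Device availability depends on the machine.
import tensorflow as tf

a = tf.random.normal((1024, 1024))
b = tf.random.normal((1024, 1024))

with tf.device("/CPU:0"):
    c = tf.matmul(a, b)   # executed on the CPU

# With an accelerator present, the identical expression can run there:
# with tf.device("/GPU:0"):
#     c = tf.matmul(a, b)
```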

The tail end of Moore’s Law also provides an interesting direction for ML, where Dean expects more heterogeneous or specialised hardware to do ML computations. Google has trialled this with the TPU, its TensorFlow ASIC, and recently attempted a large-scale data collection using an array of TensorFlow-powered robots.

“I think this time neural nets are here to stay. In the nineties they had a lot of excitement, but I think they were just lacking computation resources. Now I think they’re showing they can solve really relevant and interesting problems,” Dean concluded. “If you’re not considering how to apply deep neural networks to your data, you probably should be.”
