In 2014, Hong Kong-based venture capital firm Deep Knowledge Ventures revealed it had appointed VITAL, a machine learning program capable of making investment decisions, to its board of directors.
Built to analyze financial trends in the databases of life science companies and predict successful investments, VITAL has already been used to inform two investment decisions. While the company’s newest board member is just an algorithm, its appointment has prompted many to question the role that artificial intelligence (AI) could play in our lives.
Yoshua Bengio, a professor in the Department of Computer Science and Operations Research at the University of Montreal, is one of the pioneers of a powerful breed of AI known as deep learning. The principles behind deep learning were first developed in the 1980s, but recent improvements in computer algorithms and processing power have led Bengio and other scientists to build machines with the potential to mimic the behavior of the human brain – machines that can recognize people, words, objects and a whole lot more.
“Today’s much more powerful computers allow us to use much larger and more diverse datasets to train networks, which helps them to perform better,” Bengio said. “The most impressive advances have been in speech recognition and image-based object-recognition systems. In 2012, before deep learning was used as extensively, the error rate on a standard benchmark was around 26%; it has now been reduced to around 6%, which is a game changer.”
SMARTER COMPUTING
Apple uses deep-learning and speech-recognition applications to power its personal assistant, Siri. Facebook leverages the technology to automate photo tagging and customize news feeds. Movie-streaming provider Netflix (Los Gatos, California) employs deep learning to provide personalized film recommendations to users. Google is applying it to services that include Street View, Google Now and Google+ Photos.
While impressive, these advancements are just the tip of the iceberg. Recognizing the potential of this field of AI, some of the world’s leading technology companies have hired deep-learning experts and redoubled their efforts to achieve further advances. “Deep learning is more efficient than older generations of algorithms at soaking up huge amounts of data and making predictions,” said Adam Coates, director of the Silicon Valley AI Lab of Baidu, a Web company headquartered in Beijing. “Graphics-processing units enable us to perform massive amounts of computation, allowing us to create neural networks that can learn from very large quantities of data in a way that was inconceivable 10 years ago.”
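The training loop Coates describes, a neural network repeatedly adjusting its weights to better fit the data it is shown, can be illustrated at toy scale. The sketch below is not Baidu’s code; it is a generic two-layer network in pure Python that learns the XOR function by gradient descent:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# XOR dataset: not linearly separable, so a hidden layer is required.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

H = 4  # hidden units
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def forward(x):
    h = [sigmoid(sum(w1[j][i] * x[i] for i in range(2)) + b1[j]) for j in range(H)]
    y = sigmoid(sum(w2[j] * h[j] for j in range(H)) + b2)
    return h, y

def mse():
    return sum((forward(x)[1] - t) ** 2 for x, t in data) / len(data)

lr = 2.0
initial = mse()
for epoch in range(3000):
    for x, t in data:
        h, y = forward(x)
        # Backpropagation for squared error with sigmoid activations.
        dy = 2 * (y - t) * y * (1 - y)
        for j in range(H):
            dh = dy * w2[j] * h[j] * (1 - h[j])  # uses w2[j] before it is updated
            w2[j] -= lr * dy * h[j]
            for i in range(2):
                w1[j][i] -= lr * dh * x[i]
            b1[j] -= lr * dh
        b2 -= lr * dy

final = mse()
print(f"loss: {initial:.3f} -> {final:.3f}")
```

At realistic scale the same loop runs over millions of images or utterances rather than four examples, which is exactly the workload that the GPUs Coates mentions make tractable.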
Baidu is harnessing deep-learning capabilities to improve Shitu, its content-based image-retrieval search engine. Launched in beta in 2010, the system was initially designed as an online facial recognition system. In 2013, Shitu was expanded to group similar images and supply information about them. For example, the system can identify similar images of flowers, as well as their species, and link to corresponding online encyclopedia information.
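Content-based retrieval of the kind Shitu performs typically reduces each image to a feature vector and ranks indexed images by similarity to the query. A minimal sketch follows, with made-up three-dimensional vectors standing in for the features a deep network would actually extract (the image names and values are illustrative only):

```python
import math

# Hypothetical feature vectors, standing in for embeddings a deep
# network would compute from each indexed image.
index = {
    "rose_1":  [0.9, 0.1, 0.0],
    "rose_2":  [0.8, 0.2, 0.1],
    "tulip_1": [0.1, 0.9, 0.2],
    "car_1":   [0.0, 0.1, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def most_similar(query, k=2):
    """Rank indexed images by similarity to the query vector."""
    ranked = sorted(index, key=lambda name: cosine(index[name], query), reverse=True)
    return ranked[:k]

print(most_similar([0.85, 0.15, 0.05]))  # a rose-like query vector
```

Because similar flowers map to nearby vectors, a rose-like query retrieves the two rose images; linking each indexed image to encyclopedia metadata then yields the species information the article describes.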
Shitu is similar to Microsoft’s latest deep-learning venture: an object-recognition system called Project Adam, which Microsoft unveiled in July 2014 and claims is twice as accurate as, and 50 times faster than, other AI systems. After tapping into the ImageNet database, which contains more than 14 million images in 22,000 categories, Microsoft announced that Project Adam had successfully taught itself to recognize specific dog breeds.
“We have reduced error rates by one-third on conversational speech recognition benchmark tests using deep-learning techniques,” said Dong Yu, principal researcher for Microsoft’s Speech & Dialog Research Group. “Over the next few years, we will work to further improve speech processing, image classification, natural-language processing and handwritten-character recognition.”
Although Microsoft has not revealed how it intends to harness the capabilities of Project Adam to provide business-centric applications, the company predicts it has the potential to revolutionize how people interact with the world, using techniques such as augmented reality.
WIDER APPLICATIONS
New developments around deep learning are popping up in industries ranging from automotive to security and defense.
Tokyo-based Preferred Networks (PFN), for example, has partnered with Toyota Motor Corporation to explore how its deep-learning technology could be used to develop autopilot systems for self-driving cars. PFN has also produced a prototype video-analytics solution that detects, tracks and predicts the behavior of people in real time, categorizing them according to aspects such as gender, age, clothing or bodily movements. “It will enable retailers to monitor their customers’ buying patterns, or it could be combined with GPS and in-vehicle sensors to optimize road traffic management,” said Daisuke Okanohara, founder and executive president of PFN. The solution is expected to launch this year.
Meanwhile, Australia’s largest communications and technology research organization, National Information Communications and Technology Australia (NICTA), is developing a real-time, deep-learning-based visual tracker capable of following an object of interest over a sustained period.
“We also believe that this technology can be used for human-computer interactions, image search, intelligent transportation systems (from road maintenance to driving-assistance systems), or even to create a bionic eye, which mimics the function of the retina to restore sight for people with severe vision loss,” said Yi Li, a senior researcher at NICTA and adjunct research fellow at the Australian National University, located in Canberra.
On the opposite side of the Pacific, Enlitic, a start-up based in San Francisco, has health-care-related plans for deep learning. It aims to leverage the technology to develop an integrated image- and data-based system that would enable doctors to diagnose common and complex illnesses more quickly and accurately.
“We want our software to offer doctors various pieces of evidence that, when combined, would indicate the likelihood that the patient is suffering from a certain disease,” said Ahna Girshick, head of product and partnerships at Enlitic. “For example, if a scan showed growths on a patient’s lung, the software would identify examples of people whose scans showed similar abnormalities, as well as a breakdown of their symptoms, lab results, gender and the success of their treatment plans. This would help doctors to make swift, informed and accurate decisions that make a real impact on patients’ lives.”
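The evidence retrieval Girshick describes can be thought of as nearest-neighbor search over a database of past cases. The toy sketch below is not Enlitic’s system; the case table, feature vectors and labels are invented to show the pattern of finding the most similar prior scans and summarizing their recorded diagnoses:

```python
import math

# Hypothetical case database: each record pairs a scan-feature vector
# (a stand-in for features a deep network would extract from the image)
# with the case's recorded metadata.
cases = [
    {"id": "A", "features": [2.1, 0.4], "diagnosis": "benign",    "treated_ok": True},
    {"id": "B", "features": [2.0, 0.5], "diagnosis": "benign",    "treated_ok": True},
    {"id": "C", "features": [5.8, 3.1], "diagnosis": "malignant", "treated_ok": False},
    {"id": "D", "features": [6.1, 2.9], "diagnosis": "malignant", "treated_ok": True},
]

def nearest_cases(query, k=2):
    """Return the k cases whose scan features are closest to the query."""
    return sorted(cases, key=lambda c: math.dist(query, c["features"]))[:k]

def evidence_summary(query, k=2):
    """Tally the diagnoses recorded among the most similar past cases."""
    counts = {}
    for c in nearest_cases(query, k):
        counts[c["diagnosis"]] = counts.get(c["diagnosis"], 0) + 1
    return counts

print(evidence_summary([2.05, 0.45]))  # query near cases A and B
```

A real system would return much richer evidence, symptoms, lab results and treatment outcomes for each retrieved case, as Girshick outlines, leaving the diagnostic judgment to the doctor.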
LOOKING TO THE FUTURE
While many of these deep-learning applications are still in the R&D phases, they have the potential to revolutionize how humans interact with both machines and the world around them.
“Today our sensors can easily measure things and our computers can now be taught to recognize objects by themselves, but the problem of perception is far from being solved,” Bengio said. “There is so much more we can expect, and we still have a long way to go before deep learning reaches a stage where machines can truly perceive and understand what they are seeing, as humans do. When we reach this stage, the possibilities for commercial applications are endless.”
REMAINING CAUTIOUS
Many of the world’s eminent scientists have cautioned that science fiction may not be far from the mark when it warns of a world run by intelligent computers. Speaking to the BBC in December 2014, theoretical physicist Stephen Hawking said: “The development of full AI could spell the end of the human race. It would take off on its own and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.”
In January 2015, Hawking joined a number of other scientists, professors and researchers at universities and organizations across the globe – including Facebook’s Yann LeCun and Google’s Geoffrey Hinton and Peter Norvig – in signing an open letter calling for new research priorities in the AI field. Noting that the potential benefits of AI systems are “huge,” Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter warned that it is important to “research how to reap its benefits while avoiding potential pitfalls” so that AI technology works in ways that benefit humanity. A supporting research-priorities document offered examples of research projects that could help maximize the societal, economic and health benefits of AI while ensuring that it remains “robust and beneficial, and aligned with human interests.” The document also cautions that the “development of systems that embody significant amounts of intelligence and autonomy leads to important legal and ethical questions.”
Elon Musk, founder and CEO of Tesla Motors and US space transport company SpaceX, also signed the letter, pledging US$10 million to fund a global AI research program run by the Future of Life Institute (FLI), a Boston-based non-profit organization. “Here are all these leading AI researchers saying that AI safety is important,” Musk said. “I agree with them, so I’m committing US$10 million to support research aimed at keeping AI beneficial for humanity.” FLI will award the majority of the grant to AI researchers and the remainder to AI-related research involving other fields, such as economics, law, ethics and policy.