The 11th annual MIT Tech Conference, a student-led event organized by the MIT Sloan Tech Club, had “exponential technologies” as its theme this year. Here’s what I learned from the event’s morning sessions, which covered artificial intelligence, robotics, quantum computing, and biotechnology.
Look Alexa, No Hands! Screen-free interaction forced Amazon to solve the most difficult problems of speech recognition
Developing computer technology that does not require the use of eyes or hands is “extremely liberating,” said Rohit Prasad, Vice President and Head Scientist for Alexa at Amazon. The absence of a computer screen forced Prasad and his team to tackle the most difficult problems of speech recognition. The inspiration for Alexa was a cloud-based AI “similar to the Star Trek computer,” letting users walk around the house and get a question answered or an action performed. The rapid adoption of Alexa was “very humbling,” said Prasad. A crucial factor in its success was opening it up to third-party developers. Today, there are more than 9,000 “skills” Alexa users can select, “for every moment and every occasion,” according to Prasad. (It’s hard to keep up with Alexa, even for her creator: a few days after the event, Amazon disclosed there are now over 10,000 Alexa skills.)
We can expect Alexa to visit new countries (it recently launched in the UK and Germany) and new places, namely the enterprise. For each new country, Alexa needs not only to master the local language but also local information. To succeed in the business world, it needs enterprise software developers. With this in mind, Amazon has made it easier to build skills by providing “a massive catalog of semantic elements,” so developers do not need to be proficient in machine learning.
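To make that “catalog of semantic elements” concrete, here is a rough sketch, rendered as a Python dict, of what an Alexa skill’s interaction model looks like. The built-in intent and slot types (AMAZON.HelpIntent, AMAZON.DATE, AMAZON.US_CITY) are part of the pre-built vocabulary Amazon supplies; the custom intent and sample utterances are invented for illustration.

```python
# A sketch of an Alexa skill's interaction model, expressed as a Python dict.
# Built-in types such as AMAZON.DATE come from Amazon's pre-built catalog;
# the custom intent, slot names, and sample utterances below are hypothetical.
interaction_model = {
    "intents": [
        {
            "name": "GetCommuteIntent",  # hypothetical custom intent
            "slots": [
                {"name": "departureDate", "type": "AMAZON.DATE"},  # built-in slot type
                {"name": "city", "type": "AMAZON.US_CITY"},        # built-in slot type
            ],
            "samples": [
                "what's my commute to {city} on {departureDate}",
                "how long to get to {city} {departureDate}",
            ],
        },
        {"name": "AMAZON.HelpIntent"},   # built-in intent, no ML expertise required
    ]
}
```

The point of the catalog is that a developer declares intents and slots like these and lets Amazon’s models handle the speech recognition and language understanding underneath.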
Prasad said that with the help of “massive deep learning” Alexa adapts to speech patterns and continuously improves. “We are at a golden age for AI,” he said, but we are only “at the very beginning of what conversation can do.” Winners of the Alexa Prize, a grand challenge of “building a socialbot that can converse coherently and engagingly with humans on popular topics for 20 minutes,” will be announced in November.
Artificial intelligence is human intelligence without creativity
Conversation with computers is a challenge that has preoccupied Richard Socher, Chief Scientist at Salesforce, for some time. Given lots of examples, computers can today learn to identify words and answer questions. But Socher agrees with Prasad that there is still a lot of progress to be made before computers can actually understand what we say. There is typically a lot of hype and excitement “when we solve a subset of the AI problem,” he said, but we should not “extrapolate too far.”
Talking intelligently about artificial intelligence, without distracting hype and excitement, calls for a definition of what we are talking about. When we think about AI, we often think about the smartest people in the room, said Socher. Smart people play chess, but computers not only play chess better, they also do many smart things better than smart people, such as calculating faster or memorizing more. “There are other things we do with our brain that people don’t consider intelligence, but maybe we should,” said Socher.
This reminded me of a conversation I had with roboticist Rodney Brooks a couple of years ago at another MIT conference. Brooks talked about the people who founded AI as a research discipline sixty years ago. “They were very intelligent people,” said Brooks. “They were good at playing games, they were good at mathematics—for them that was the essence of intelligence.” But there is very little difference, in terms of “intelligence,” between a chess master and a construction worker. “They completely missed out that the stuff that seemed easy was actually the hard stuff,” said Brooks. “I suspect it was their own introspection and their own views of themselves as being intelligent that led them astray.”
For Brooks, starting in the mid-1980s, the real challenge became figuring out “how something like an insect with a hundred thousand neurons was much better at navigation than any of our robots at the time.” The sign on his office door at the MIT AI Lab read “Artificial Insect Lab.” Focusing on how insects (and humans) move in the world led to iRobot and the Roomba, the most popular robot in the world at 15 million sold, and today, to Rethink Robotics and the collaborative industrial robots Baxter and Sawyer.
Socher also counts motor skills among the “other things we do with our brains” that maybe we should include in our definition of intelligence. “Motor intelligence is much harder for computers,” he said. “It’s difficult to sample all the events in a complex environment and represent them in a reasonable way.” Another distinguishing characteristic of human intelligence that is very challenging for computers is dealing with ambiguity. That is why, in the age of computerized stock trading based on parsing the news, a positive review of an Anne Hathaway film can move up the shares of Berkshire Hathaway.
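To see how that kind of confusion can arise, here is a toy sketch (my construction, not an actual trading system) of a news-parsing trader that keys on surface strings instead of disambiguating entities. The ticker mapping and scoring rules are made up for illustration.

```python
# A toy version of the ambiguity problem: a news-parsing "trader" that
# matches raw words. "hathaway" matches both the actress and the company.
WATCHLIST = {"hathaway": "BRK.A", "apple": "AAPL"}  # hypothetical mapping
POSITIVE = {"rave", "stellar", "delightful", "record"}

def naive_signal(headline: str):
    """Return a buy signal if a watched name co-occurs with positive words."""
    words = set(headline.lower().replace(",", "").split())
    for name, ticker in WATCHLIST.items():
        if name in words and words & POSITIVE:
            return ("BUY", ticker)
    return None

# A film review, not financial news -- yet the naive parser buys Berkshire.
print(naive_signal("Rave reviews for Anne Hathaway, stellar in new film"))
# ('BUY', 'BRK.A')
```

Resolving “Hathaway” to the right entity requires context, which is exactly the kind of ambiguity Socher says computers still handle poorly.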
The biggest challenge of all, and what makes us human, is creativity. “AI will continue to struggle with creativity because it is outside the training data,” said Socher. Computers can learn from examples, can replicate, and often can replicate better than humans. But they cannot create, cannot come up with something new and unique. All the major advances so far in AI, said Socher, came from processing “a large amount of known training data” to “do things [the computer] has seen before.”
“AI teaches us who we are,” concluded Socher.
Don’t boil the ocean: To make progress, focus on limited challenges
Just as deep learning succeeded by focusing on specific tasks such as speech recognition instead of vainly pursuing the holy grail of human-like intelligence, roboticists have made progress by focusing on problems they can solve in their lifetimes. All three roboticists on stage at the MIT event do exactly that: Stefanie Tellex, Professor at Brown University, focuses on pick-and-place tasks; Ryan Gariepy, co-founder and CTO of Clearpath Robotics, focuses on self-driving in confined spaces; and Helen Greiner, co-founder of iRobot in 1990 and today founder of CyPhy Works, focuses on drones, going up in the air to make navigation easier.
As an example of “the problem of trying to do everything,” Gariepy brought up self-driving cars. “Everybody is going after 3 billion people,” he said. This is the market represented by all current drivers worldwide, and it’s “perfect from a company building perspective” to go after them, he said. But trying to get to level 5 of autonomous driving on city streets is attempting to do too much too quickly. Instead, Clearpath Robotics is focused on industrial self-driving vehicles, operating in controlled environments where people are trained to follow certain procedures. Another plus is that human-robot interaction can be controlled and managed in such settings, something self-driving cars on public streets cannot count on at all. Clearpath Robotics has proved it can get a self-driving vehicle to work in a factory within an hour.
Tellex is also focused on controlled environments and on the “pieces you can carve out that will become possible in 5 to 10 years.” Solving the pick-and-place challenge could be helpful in hospitals, or in delivering parts and tools in factories. Working on robots makes people realize, Tellex said, that the “intelligence” in “artificial intelligence” is much more than playing chess or “cognitive computing.” It takes a child two years to become a “mobile manipulator,” she observed, and “human-scale manipulation is incredibly challenging.” The crux of the challenge is that we expect at least 99.99% reliability; it’s mostly a question of how much risk we are willing to tolerate.
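To see why the reliability bar sits so high, consider a back-of-the-envelope calculation (my arithmetic, not Tellex’s): if each pick succeeds independently with probability p, a job that chains k picks succeeds with probability p to the k, so small per-step error rates compound quickly.

```latex
% Reliability compounds multiplicatively over a k-step task:
\[
P(\text{task succeeds}) \;=\; p^{k},
\qquad
0.99^{100} \approx 0.37,
\qquad
0.9999^{100} \approx 0.99 .
\]
% At 99% per-pick reliability, a 100-pick job usually fails somewhere along
% the way; at 99.99%, it almost always completes.
```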
Focusing on drones allows CyPhy Works to avoid some of the challenges facing self-driving cars, such as dealing with construction sites. Greiner: “Up above the treetops is a highway waiting to be populated.” The focus on solving limited and better-controlled AI challenges does not preclude a broad vision of AI’s potential impact on society. For Greiner, the vision is an end-to-end automation of the supply chain, making it much more efficient than it is today.
The brute force of deep learning: Big data, GPUs, the cloud, and quantum computing
During a lengthy “AI winter,” Moore’s Law kept hope alive, rekindling from time to time the dream of artificial human-like intelligence. The constant increase in computing power eventually ensured that a computer would beat a chess champion, not by mimicking human thinking but by applying “brute force.”
Similarly, deep learning, today’s reason for excitement (and hype) about the possibilities of AI, owes much to brute force, but with some interesting tweaks. The breakthrough came with the application of a new computing architecture using Graphics Processing Units (GPUs) instead of traditional CPUs: “5,000 cores, each doing a simple calculation in parallel,” explained Socher. There was a lot of data to process in parallel, and that was the other new aspect of “brute force”: the force of “big data,” crowdsourced, i.e., labeled by millions of internet users, which helped train the deep learning algorithms. These algorithms, according to Socher, advanced constantly in small steps, adding another 2 to 5 percent of accuracy each time.
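Here is a minimal sketch of the data-parallel style GPUs exploit, using NumPy on the CPU purely as an illustration (the array sizes and timing comparison are mine, not Socher’s): the same multiply-and-add is expressed once over a whole array instead of element by element, which is exactly the shape of work thousands of simple cores can do at once.

```python
# Illustrating the data-parallel style behind GPU "brute force" with NumPy.
# NumPy runs on the CPU, but the contrast between a scalar loop and a single
# whole-array operation mirrors what GPU cores do in parallel.
import numpy as np
import time

x = np.random.rand(5_000_000).astype(np.float32)

# Scalar loop: one multiply-add at a time, the way a single core works.
t0 = time.perf_counter()
y_loop = [2.0 * v + 1.0 for v in x[:100_000]]  # only a slice; the full loop is slow
t_loop = time.perf_counter() - t0

# Vectorized: the same multiply-add expressed over the whole array at once,
# the form that thousands of simple cores can execute simultaneously.
t0 = time.perf_counter()
y_vec = 2.0 * x + 1.0
t_vec = time.perf_counter() - t0

print(f"loop over 100k elements: {t_loop:.3f}s; vectorized over 5M: {t_vec:.3f}s")
```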
This perfect storm of hardware and software developments led to the current “inflection point,” said Greiner. “I have to admit I have said this before. But connecting to the cloud and deep learning is the inflection point today. With deep learning we can have the next generation of robots.”
The cloud represents yet another aspect of “brute force,” combining the computing and storage power of many systems and allowing for the pooling of large sets of data. The cloud also provides an opportunity for robots to collect and provide data to deep learning systems, noted Tellex. And through the cloud, robots can learn from other robots, said Greiner, pointing to another benefit of putting data in a central, accessible repository. Tellex: “Once we know how to make the robot do something, we can teach many other robots.”
In 5 to 10 years, we may have a completely new notion of the meaning of “brute force.” John Martinis, Research Scientist at the Quantum Computing Group at Google, talked about his 30-year quest to build a computer that can process as many data points as there are atoms in the universe. A quantum bit, or qubit, stores 1 and 0 at the same time, and a quantum computer can process those states in parallel. Every qubit you add doubles the processing power, so a 300-qubit machine could represent 2 to the power of 300 states, roughly the number of atoms in the universe. The Google team is making steady progress, and its “stretch goal” for the end of this year is 50 qubits.
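The doubling argument is simple counting; here is the standard back-of-the-envelope version (my sketch, not Martinis’ figures):

```latex
% An n-qubit register is described by 2^n complex amplitudes,
% one per basis state:
\[
|\psi\rangle \;=\; \sum_{i=0}^{2^{n}-1} \alpha_i\,|i\rangle ,
\]
% so each added qubit doubles the count. For n = 300, using 2^{10} \approx 10^3:
\[
2^{300} = \left(2^{10}\right)^{30} \approx \left(10^{3}\right)^{30} = 10^{90},
\]
% in the neighborhood of (indeed beyond) the commonly cited estimate of
% roughly 10^{80} atoms in the observable universe.
```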
Martinis: “We are still at the demo stage. You can cook up special problems where the quantum computer is faster, but these are not practical problems. We will try to do something useful in 5 to 10 years.” New algorithms may shorten the path to useful quantum computers. “We are only one smart idea away from doing something useful,” said Martinis. But, he added, “‘Only’ is the hard part.”
In the meantime, Socher can see where “extra computation,” whether from traditional or quantum computers, could help. One AI area that could benefit is “multi-task learning,” or getting computers to be more like humans by not forgetting what they learned before. For this, you need to combine, ingest, and process many data sets for vision, speech, and other cognitive tasks. That means lots of data and a lot of computing power.
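One common way to set up multi-task learning is a shared encoder with task-specific heads, so gradients from every task shape a single representation. Here is a minimal, hypothetical sketch in PyTorch; the architecture, dimensions, and task split are mine, not Socher’s.

```python
# A minimal sketch of one common multi-task setup: a shared encoder with
# task-specific heads, so learning one task need not overwrite another.
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self, in_dim=128, hidden=64, n_classes_a=10, n_classes_b=5):
        super().__init__()
        # Shared representation, trained by gradients from every task.
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        # One lightweight head per task (say, a vision task and a speech task).
        self.head_a = nn.Linear(hidden, n_classes_a)
        self.head_b = nn.Linear(hidden, n_classes_b)

    def forward(self, x):
        z = self.encoder(x)
        return self.head_a(z), self.head_b(z)

model = MultiTaskNet()
x = torch.randn(32, 128)              # a batch of 32 feature vectors
logits_a, logits_b = model(x)
# Joint training sums the per-task losses so the encoder serves both tasks.
loss = nn.functional.cross_entropy(logits_a, torch.randint(0, 10, (32,))) \
     + nn.functional.cross_entropy(logits_b, torch.randint(0, 5, (32,)))
loss.backward()
print(logits_a.shape, logits_b.shape)  # torch.Size([32, 10]) torch.Size([32, 5])
```

Scaling this pattern to many tasks and many data sets is where Socher’s “lots of data and a lot of computing power” comes in.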
The brain is ground zero… if you believe neuroscience can answer everything
Another area where more computational power can be beneficial is the human brain. So thinks Bryan Johnson, founder and CEO of Kernel, a startup developing “a tiny chip that can be implanted in the brain to help people suffering from neurological damage caused by strokes, Alzheimer’s or concussions.”
But diseased brains are only an “entry point” for Johnson and his vision of computer-augmented humanity, “opening up the possibilities of what we can become.” Early in his life, Johnson pledged to himself to retire at the age of 30 with a “bunch of money to do something meaningful in the world.” He missed his goal a bit, having sold online payment company Braintree to PayPal for $800 million in 2013, when he was 34. Johnson then set up a $100 million fund that invests in science and technology start-ups that could improve quality of life or, primarily, “unlock human intelligence.”
By this he apparently means that we should apply Moore’s Law to humans. He “felt incredibly frustrated being human,” specifically with our slow bit rate and the “limitations in our bandwidth.” Now that “we have the programmability that we didn’t have before,” Johnson is also frustrated with people’s lack of imagination, telling the audience a number of times that when printing was invented, people “never expected the range of things that could be published.” Today, he thinks, we suffer from a similar failure of imagination regarding what’s possible with synthetic biology and genetics.
Johnson admits that “we don’t know a lot about the brain,” but he is an optimist. We have made a lot of progress in understanding how a few neurons work, he asserts. So what kind of progress, Johnson asks, will we make when we understand how thousands of neurons work?
Is it possible that reducing imagination, or any other product of our minds, to how neurons fire in the brain is a failure of imagination on Johnson’s part, as it is on the part of many other very intelligent people? The event took place at the MIT Media Lab, one of whose luminaries, the late Marvin Minsky, once said: “The human brain is just a computer that happens to be made out of meat.”
In this (very popular) worldview, speeding up the meat machine is not that far-fetched. But given the progress that has been made by roboticists and deep learning researchers by focusing on specific cognitive tasks and abandoning the quest for artificial human-like intelligence, wouldn’t it be better to stick to just trying to help diseased brains?
[Helping with moving things along at the conference were journalists Barb Darrow, Steven Levy, and Jennifer Ouellette, and Liam Paull, Research Scientist at MIT’s Distributed Robotics Lab. See here for the list of students organizing the event.]
Originally published on Forbes.com