O’Reilly AI Conference: 12 Observations About Artificial Intelligence

At the inaugural O’Reilly AI conference, 66 artificial intelligence practitioners and researchers from 39 organizations presented the current state of AI: from chatbots and deep learning to self-driving cars and emotion recognition, from the automation of jobs and the obstacles to AI progress to saving lives and new business opportunities. There is no better place to imbibe the most up-to-date tech zeitgeist than an O’Reilly Media event, as has been proven again and again ever since the company put together the first Web-related meeting (the WWW Wizards Workshop in July 1993).

The conference was organized by Ben Lorica and Roger Chen, with Peter Norvig and Tim O’Reilly acting as honorary program chairs. Here’s a summary of what I heard there, embellished with a few references to recent AI news and commentary:

AI is a black box—just like humans

In contrast to traditional software, explained Peter Norvig, Director of Research at Google, “what is produced [by machine learning] is not code but more or less a black box—you can peek in a little bit, we have some idea of what’s going on, but not a complete idea.”

Tim O’Reilly recently wrote in “The great question of the 21st century: Whose black box do you trust?”:

Because many of the algorithms that shape our society are black boxes… because they are, in the world of deep learning, inscrutable even to their creators – [the] question of trust is key. Understanding how to evaluate algorithms without knowing the exact rules they follow is a key discipline in today’s world.

O’Reilly offers four rules for a trust-but-verify approach to algorithms: the expected outcomes are known and external observers can verify them; it is clear how to measure “success”; there is an alignment between the goals of the creators and consumers of the algorithm; and the algorithm helps both creators and consumers make better long-term decisions.

AI is difficult—“we wanted Rosie the robot, instead we got the Roomba”

Encountering a slide projector that refused to display his presentation, Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, quipped: “How can we figure out AI if we can’t figure out AV?”

Once humans got the dumb machine to work, Etzioni proceeded to enumerate the difficulties in getting machines, even “intelligent” ones, to be more like us. For example, “people breathe air” is a short, simple statement, but one that is difficult to “represent to a machine.”

Etzioni left his tenured position at the University of Washington to lead the Paul Allen-funded institute, whose mission is to get AI to show at least some level of understanding of words or images, not just calculate proximity to labeled objects. He thinks that deep learning is useful but limited (see below), so it was interesting to find out that his list of AI challenges is shared, at least partially, by Yann LeCun, one of the handful of individuals celebrated in recent years for their role in the breakthroughs associated with deep learning.

As a professor at New York University and Director of AI Research for Facebook, LeCun is deep in applying deep learning to practical problems. 1 to 1.5 billion photos are uploaded daily to Facebook (not including Instagram, WhatsApp or Messenger), he told the audience, and every single one of these photos goes immediately through two convolutional neural networks. One recognizes the objects in the image and the other one detects and recognizes people. Videos go through a similar process.
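
Facebook’s production systems are, of course, proprietary, but the basic step LeCun describes (pushing an uploaded photo through a pretrained convolutional network to score the objects in it) can be sketched in a few lines. The model choice, file name, and top-5 readout below are illustrative assumptions, not Facebook’s pipeline.

```python
# A minimal sketch (not Facebook's pipeline): run one photo through a
# pretrained convolutional neural network and read off the likely objects.
import torch
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet50(pretrained=True)   # stand-in for a production ConvNet
model.eval()

image = Image.open("photo.jpg").convert("RGB")  # hypothetical uploaded photo
batch = preprocess(image).unsqueeze(0)          # shape: (1, 3, 224, 224)

with torch.no_grad():
    logits = model(batch)                       # scores for 1,000 ImageNet classes

top5 = torch.topk(logits.softmax(dim=1), k=5)
print(top5.indices[0].tolist())                 # indices of the five likeliest objects
```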

This experience with what smart machines can do now highlights the challenges that need to be overcome to get them closer to human-level intelligence. For a machine to act in an intelligent way, said LeCun, it needs “to have a copy of the world and its objective function in such a way that it can roll out a sequence of actions and predict their impact on the world.” To do this, machines need to understand how the world works, learn a large amount of background knowledge, perceive the state of the world at any given moment, and be able to reason and plan. In short, the holy grail for LeCun is getting from the current state of “supervised learning” (feeding the machine with labeled objects so it “learns” from processing and analyzing them) to “unsupervised learning” (learning from unlabeled data). “The crux of the problem,” said LeCun, “is prediction under uncertainty.”
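
The supervised/unsupervised distinction LeCun draws can be made concrete with a toy scikit-learn sketch (an illustration of the terminology, not of his research): the same synthetic data is learned from once with its labels and once without them.

```python
# Toy contrast between supervised and unsupervised learning (illustrative only).
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X, y = make_blobs(n_samples=300, centers=3, random_state=0)  # synthetic data

# Supervised: labels y are provided, and the model learns to predict them.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("supervised training accuracy:", clf.score(X, y))

# Unsupervised: the same points without labels; the model must discover
# structure (here, three clusters) on its own.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("first ten cluster assignments:", km.labels_[:10])
```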

Citing “Machine Learning: The High Interest Credit Card of Technical Debt,” Peter Norvig explained the reasons why machine learning is more difficult than traditional software: “lack of clear abstraction barriers”—debugging is harder because it’s difficult to isolate a bug; “non-modularity”—if you change anything, you end up changing everything; “nonstationarity”—the need to account for new data; “whose data is this?”—issues around privacy, security, and fairness; and lack of adequate tools and processes—existing ones were developed for traditional software. “Machine learning allows you to go fast,” concluded Norvig, “but when you go fast, problems can develop and the crashes can be more spectacular than when you are going slow.”

New York University’s Gary Marcus summed up the challenges of developing AI: “We wanted Rosie the robot, instead we got the Roomba.”

The AI driving driverless cars is going to make driving a hobby. Or maybe not.

“We could be the last car-owning generation,” U.S. Transportation Secretary Anthony Foxx announced recently. At the conference, Jim McHugh, vice president and general manager at NVIDIA, announced that “the first AI robot is the car.” Autonomous driving, promised McHugh, will usher in a world that is safer as “a car with AI will be always thinking, attentive, aware of the environment around it, and it never gets tired. It will have super-human powers that will keep us out of harm’s way.”

Similarly, Shahin Farshchi of Lux Capital argued that by postponing government approval of self-driving cars we endanger the many people who will be killed driving themselves. He thinks approval will happen soon—humans are so bad at driving that “the bar for AI is low.” We will end up saving lives by “adopting imperfect driverless cars.”

Still, as Tom Davenport noted, “many of us would rather be killed by another human than by machines.” Are the exceptions to driverless cars’ safety record just “corner cases” to be ignored when compared to humans’ abysmal record, as Farshchi argued, or will a few serious accidents considerably slow down their adoption and government approval? In his presentation, Gary Marcus highlighted reliability issues with driverless cars, their “poor performance in the long tail.” Said Marcus: “The machines we have now are only good at the high-frequency examples.”

Peter Norvig used self-driving cars to illustrate two of the AI safety problems he discussed in his presentation. The first is “safe exploration”—a self-driving car out in the real world (as opposed to being simulated in the lab) had better make only safe decisions. But given the “black box” nature of these AI-driven cars (see above), how can we verify that they are indeed safe? (See also “Why AI Makes It Hard to Prove That Self-Driving Cars Are Safe.”)

The second AI safety problem, per Norvig, is what he called the “inattention valley.” If a self-driving car is only 50% accurate, you stay alert and ready to take over. But 99% accuracy is a problem, because the driver is not ready when it’s time to take over. “We have to figure out the user interface for when the system says ‘you have to take over and I really mean it’,” said Norvig.

Oren Etzioni, who thinks “autonomous cars” is a misnomer because they don’t choose where to drive, highlighted the unsafe choices people make when they are behind the wheel. Can we really prevent people from texting, he asked, and answered: “Whatever solution we come up with, people will find ways to circumvent that.” (And it’s not just texting—RoadLoans.com reports that 20% of drivers said they watched a video, 52% ate a full meal, 43% took off or put on clothing, and 25% have fallen asleep while driving.)

Intelligent cars are already reducing accidents and can do much more, said Etzioni, who predicted that in 25 years driving will become a hobby. But “the cars will still go where we tell them to go,” he said.

AI must consider culture and context—“training shapes learning”

“Many of the current algorithms have already built in them a country and a culture,” said Genevieve Bell, Intel Fellow and Director of Interaction and Experience Research at Intel. As today’s smart machines are (still) created and used only by humans, culture and context are important factors to consider in their development. Both Rana El Kaliouby (CEO of Affectiva, a startup developing emotion-aware AI) and Aparna Chennapragada (Director of Product Management at Google) stressed the importance of using diverse training data—if you want your smart machine to work everywhere on the planet it must be attuned to cultural norms.

“Training shapes learning—the training data you put in determines what you get out,” said Chennapragada. And it’s not just culture that matters, but also context, as she illustrated with what she called the “I love you” problem. Many conversations end this way, but such sign-offs should not be included in the training data for an AI-driven corporate email system.
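
As a hypothetical sketch of Chennapragada’s point, a workplace reply model might have sign-offs that are common in personal conversation filtered out of its training data before learning begins. The phrase list and helper function below are invented purely for illustration.

```python
# Hypothetical data-cleaning step: drop replies that are fine in personal
# chat but should never be suggested by a corporate email assistant.
PERSONAL_SIGN_OFFS = {"i love you", "love you", "xoxo"}

def filter_training_replies(replies):
    return [r for r in replies
            if r.strip().lower().rstrip("!.") not in PERSONAL_SIGN_OFFS]

corpus = ["Sounds good, see you at 3.", "I love you!", "Thanks!", "xoxo"]
print(filter_training_replies(corpus))
# -> ['Sounds good, see you at 3.', 'Thanks!']
```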

Lili Cheng, Distinguished Engineer and General Manager at Microsoft Research, talked about Microsoft’s successful bot Xiaoice (40 million users in China and Japan) and its not-so-successful bot Tay (released on Twitter and taken down after being trained by Twitter users to spout inflammatory tweets). It turns out context matters—a public conversation (in Tay’s case) vs. a small-group conversation; and culture matters—a “very human-centric, man vs. machine” U.S. (Western?) culture as opposed to an Asian culture where “you have ghosts and living trees.”

AI is not going to take all our jobs—“we are not going to run out of problems”

Tim O’Reilly enumerated all the reasons we will still have jobs in the future: 1. We are not going to run out of work because we are not going to run out of problems. 2. When some things become commodities, other things become more valuable. As AI turns more and more of what we do today into a commodity, we should expect that new things will become valuable—“rich economies indulge in things that appear to be useless, but are really all about status.” 3. Economic transformation takes time and effort (Amazon is still only 20% of Wal-Mart).

Similarly, Tom Davenport, professor at Babson College and co-founder of the International Institute for Analytics, pointed out that there were half a million bank tellers in the U.S. in 1980. The number in 2016? Also half a million bank tellers. “If you are building your career on wiping out a class of jobs, I hope you are very young because it takes a long time,” said Davenport in his conference presentation and in Only Humans Need Apply, the book he recently published on how we are going to add value to smart machines rather than being replaced by them. Don’t be too optimistic or too pessimistic about AI, he told the audience, just don’t be complacent.

President Obama agrees: “If properly harnessed, [AI] can generate enormous prosperity and opportunity. But it also has some downsides that we’re gonna have to figure out in terms of not eliminating jobs. It could increase inequality. It could suppress wages.”

AI is not going to kill us—“AI is going to empower us”

In “Designing AI Systems that Obey Our Laws and Values,” Oren (and Amitai) Etzioni have suggested developing multiple AI systems that check and counterbalance one another. At the conference, Etzioni quoted Andrew Ng: “Working to prevent AI from turning evil is like disrupting the space program to prevent over-population on Mars.” And Rodney Brooks: “If you are worried about the Terminator, just keep the door closed” (showing a photo of a robot failing to open a closed door). Etzioni concluded: “AI is not going to exterminate us, AI is going to empower us… A very real concern is AI’s impact on jobs. That’s what we should discuss, not terminator scenarios.”

A recent survey conducted by YouGov on behalf of the British Science Association, however, found that 36% of the British public believe that the development of AI poses a threat to the long-term survival of humanity. Asked “Why do so many well-respected scientists and engineers warn that AI is out to get us?” Etzioni responded: “It’s hard for me to speculate about what motivates somebody like Stephen Hawking or Elon Musk to talk so extensively about AI. I’d have to guess that talking about black holes gets boring after a while—it’s a slowly developing topic.”

One way to fight boredom is to speak at the launch of new and ground-breaking research centers, as Hawking did recently. The £10 million Leverhulme Centre for the Future of Intelligence will explore “the opportunities and challenges of this potentially epoch-making technological development,” namely AI. According to The Guardian, Hawking said at the opening of the Centre, “We spend a great deal of time studying history, which, let’s face it, is mostly the history of stupidity. So it’s a welcome change that people are studying instead the future of intelligence.” No word on whether Hawking quoted Scott Adams (in The Dilbert Principle): “Everyone is an idiot, not just the people with low SAT scores. The only differences among us is that we’re idiots about different things at different times. No matter how smart you are, you spend much of your day being an idiot.”

AI isn’t magic and deep learning is a useful but limited tool—“a better ladder does not necessarily get you to the moon”

“Deep Learning is a bigger lever for data,” said Naveen Rao, cofounder and CEO of Nervana. “The part that seems to me ‘intelligent’ is the ability to find structure in data.” NVIDIA’s Jim McHugh was more expansive: “Deep learning is a new computing model.”

Both Rao and McHugh work for companies providing the hardware underlying deep learning. But for the people who write about deep learning it’s much more than a new computing model or a bigger lever for data. “The Google machine made a move that no human ever would… [It] so perfectly demonstrated the enormously powerful and rather mysterious talents of modern artificial intelligence,” gushed Wired about AlphaGo. To which Oren Etzioni replied, also in Wired: “…the pundits often describe deep learning as an imitation of the human brain. But it’s really just simple math executed on an enormous scale.” And Tom Davenport added, at the conference: “Deep learning is not profound learning.”

In his talk, Etzioni suggested asking AlphaGo the following questions: Can you play again? (no, unless someone pushes a button); Can you play poker? (no); Can you cross the street? (no, it’s a narrowly targeted program); Can you tell us about the game? (no).

Deep learning, said Etzioni, is a “narrow machine learning technology that has achieved outstanding results on a series of narrow tasks like speech recognition or playing Go. It’s particularly effective when we have massive amounts of labeled data… Super-human performance on a narrow task does not translate to human-level performance in general… Machine learning today is 99% the work of humans.”
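
Etzioni’s “simple math executed on an enormous scale” is easy to make concrete: a network’s forward pass is little more than repeated matrix multiplication followed by elementwise nonlinearities. The toy two-layer network below (random weights, NumPy only) is purely illustrative.

```python
# Bare-bones forward pass: the "simple math" inside a neural network,
# just scaled up enormously in real systems. Weights are random here.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)            # elementwise nonlinearity

x = rng.standard_normal(784)             # one flattened 28x28 "image"
W1, b1 = 0.01 * rng.standard_normal((256, 784)), np.zeros(256)
W2, b2 = 0.01 * rng.standard_normal((10, 256)), np.zeros(10)

h = relu(W1 @ x + b1)                    # hidden layer: multiply, add, clip
scores = W2 @ h + b2                     # output layer: ten class scores
probs = np.exp(scores) / np.exp(scores).sum()   # softmax
print(probs.round(3))                    # a probability over ten classes
```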

Gary Marcus, professor of psychology and neural science at New York University and cofounder and CEO of Geometric Intelligence, also objects to the common description of deep learning as “mimicking the brain.” “Real neuroscience doesn’t look anything like the models we use,” argued Marcus. “There is a lot of complexity in the basic layout of the brain. There are probably a thousand different kinds of neurons in the brain; in deep learning there is one, maybe two. … The widespread commitment to neural networks with minimal instruction sets is utterly at odds with biology. … The core problem is an excessive love of parsimony.”

Referencing “Why does deep and cheap learning work so well?,” Marcus observed that “a lot of smart people are convinced that deep learning is almost magical—I’m not one of them.” Deep learning, he explained, lacks ways of representing causal relationships; it has no obvious ways of performing logical inferences; and it is a long way from integrating abstract knowledge. “All of this is still true despite all the hype and billions of dollars invested. A better ladder does not necessarily get you to the moon,” said Marcus.

AI is Augmented Intelligence—“using the strengths of both humans and machines”

Tom Davenport, who devoted his presentation (and his recent book) to advising humans on how to race with the machines rather than against them, also had an important suggestion for organizations: establish a new position, that of the Chief Augmentation Officer. That executive should be in charge of picking the right AI technology for a specific task, designing work processes in which humans and machines work together and complement each other, and providing employees with the right options and the time to transition to them.

Tim O’Reilly suggested getting into a contest with the machine that will make both humans and machines excel. And Peter Norvig, in his list of AI safety problems, mentioned the challenge of “scalable oversight”—how and where to inject human oversight and expertise into what the AI system is doing.

Jay Wang and Jasmine Nettiksimmons, data scientists at Stitch Fix, a startup that uses artificial intelligence and human experts for a personalized shopping experience, talked about augmenting their recommendation algorithm with human stylists. “Having a human in the loop allows us to more holistically leverage unstructured data,” they said. Humans are better at ingesting customers’ online notes or Pinterest boards and understanding their meaning, thus improving customer relations and freeing the algorithm from having to anticipate edge cases. “We are trying to use the strengths of both humans and machines for an optimal result,” concluded Wang and Nettiksimmons.

AI changes how we interact with computers—and it needs a dose of empathy

“We are approaching a tipping point where speech user interfaces are going to change the entire balance of power in the technology industry,” Tim O’Reilly wrote recently.

More specifically, we need to “rethink the basic fundamentals of navigation through conversation,” said Microsoft’s Lili Cheng. The “Back” and “Home” buttons are critical for every system we use today but in a conversation, “back, back” feels “really weird.” Cheng talked about conversations as waves, “always going forward,” and as such they are very different from a user controlling a desktop. To get AI to better resemble the way people think about the world around them (see LeCun above), we could use conversations as “a great test case,” said Cheng.

Part of understanding the world as humans do is to understand (or at least detect) human emotion. To that end, Affectiva has amassed the world’s largest emotion data repository, having analyzed 4.7 million faces and 50 billion emotion data points from 75 countries. Its vision is to embed real-time emotion sensing and analytics in devices, apps, and digital experiences. “People are building relations with their digital companions but right now these companions do not have empathy,” said CEO Rana El Kaliouby.

Emotion is a burgeoning AI field—see also Microsoft’s Emotion API or the work of Maja Pantic at Imperial College—but I would suggest that all its practitioners stick to “empathy” rather than “emotion” so as not to confuse the masses (and themselves?) about the true capabilities (and human-like qualities) of AI today.

AI should graduate from the Turing Test to smarter tests

Gary Marcus complained about paying too much attention to short-term progress instead of trying to solve the “really hard problems.” There has been exponential progress in some areas, but in strong, general artificial intelligence, “there has been almost no progress.” He urged the AI community to pursue more ambitious goals—“the classic Turing test is too easily gamed,” he averred. Instead, Marcus suggested “The Domino’s Test”: deliver a pizza to an arbitrary location, with a drone or a driverless car, as well as an average teenager could.

LeCun mentioned another test of “intelligence” or natural language understanding—the Winograd Schema—as a measure of the machine’s knowledge of how the world works. Etzioni gave two examples of Winograd Schema: “The large ball crashed right through the table because it was made of styrofoam” and “The large ball crashed right through the table because it was made of steel.” What does “it” refer to? This is pronoun resolution that a 7-year-old can do, said Etzioni, adding “common-sense knowledge and tractable reasoning are necessary for basic language understanding.”

A few months ago, Nuance Communications sponsored the first round of the Winograd Schema Challenge, an alternative to the Turing Test. The results: machines were 58.33% correct in their pronoun resolution, compared to humans at 90.9% accuracy.

AI According to Winston Churchill

Peter Norvig: “You can say about machine learning what Winston Churchill said about democracy—it is the worst possible system except all the others that have been tried.”

Oren Etzioni: “To paraphrase Winston Churchill—deep learning is not the end, it’s not the beginning of the end, it’s not even the end of the beginning.”

AI may still be hampered by a futile search for human-level intelligence while locked into a materialist paradigm

Gary Marcus complained about research papers presented at the Neural Information Processing Systems (NIPS) conference, saying that they are like alchemy, adding a layer or two to a neural network, “a little fiddle here or there.” Instead, he suggested “a richer base of instruction set of basic computations,” arguing that “it’s time for genuinely new ideas.”

When asked “when will we see human-level AI?” Etzioni answered “I have no clue.” It turns out that in his own survey of AI experts on when we will see human-level AI, his answer was “more than 25 years.” He explained: “I’m a materialist, I believe in a world made of atoms, therefore I’m not in the ‘never’ camp.”

That thoughts (and “intelligence”) are produced only by atoms and are “computable” has been a dominant paradigm before and after Edmund Berkeley wrote at the dawn of the computer age in Giant Brains, or Machines That Think (1949): “Recently there have been a good deal of news about strange giant machines that can handle information with vast speed and skill… These machines are similar to what a brain would be if it were made of hardware and wire instead of flesh and nerves… A machine can handle information; it can calculate, conclude, and choose; it can perform reasonable operations with information. A machine, therefore, can think.” Thirty years later, Marvin Minsky famously stated: “The human brain is just a computer that happens to be made out of meat.”

Is it possible that this paradigm—and the driving ambition at its core to play God and develop human-like machines—has led to the infamous “AI Winter”? And that continuing to adhere to it and refusing to consider “genuinely new ideas,” out-of-the-dominant-paradigm ideas, will lead to yet another AI Winter? Maybe, just maybe, our minds are not computers and computers do not resemble our brains? And maybe, just maybe, if we finally abandon the futile pursuit of replicating “human-level AI” in computers, we will find many additional—albeit “narrow”—applications of computers to enrich and improve our lives?

To continue following this fascinating and exciting stage in the life of artificial intelligence, you can watch excerpts from the keynotes at the O’Reilly AI conference here and download presentation slides here, attend the next O’Reilly AI conference in New York, June 27-29, 2017, or sign up for the O’Reilly AI newsletter.

Originally published on Forbes.com

About Gil Press

I'm Managing Partner at gPress, a marketing, publishing, research and education consultancy, and a Senior Contributor at forbes.com/sites/gilpress/. Previously, I held senior marketing and research management positions at NORC, DEC and EMC. Most recently, I was Senior Director, Thought Leadership Marketing at EMC, where I launched the Big Data conversation with the “How Much Information?” study (2000, with UC Berkeley) and the Digital Universe study (2007, with IDC). Twitter: @GilPress