A Sane Discussion of the Rising Fears of Artificial Intelligence (AI)

[vimeo 138319099 w=500 h=281]

Rise of Concerns about AI: Reflections and Directions

Discussions about artificial intelligence (AI) have jumped into the public eye over the past year, with several luminaries speaking about the threat of AI to the future of humanity. Over the last several decades, AI—automated perception, learning, reasoning, and decision making—has become commonplace in our lives. We plan trips using GPS systems that rely on the A* algorithm to optimize the route. Our smartphones understand our speech, and Siri, Cortana, and Google Now are getting better at understanding our intentions. Machine vision detects faces as we take pictures with our phones and recognizes the faces of individual people when we post those pictures to Facebook. Internet search engines rely on a fabric of AI subsystems. On any day, AI provides hundreds of millions of people with search results, traffic predictions, and recommendations about books and movies. AI translates among languages in real time and speeds up the operation of our laptops by guessing what we will do next. Several companies are working on cars that can drive themselves—either with partial human oversight or entirely autonomously. Beyond the influences in our daily lives, AI techniques are playing roles in science and medicine. AI is already at work in some hospitals helping physicians understand which patients are at highest risk for complications, and AI algorithms are finding important needles in massive data haystacks, such as identifying rare but devastating side effects of medications.
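As a concrete illustration of the route-planning algorithm mentioned above, here is a minimal A* sketch in Python over a toy grid map. Real GPS systems search weighted road networks rather than grids; this is illustrative only.

```python
import heapq
import itertools

def a_star(grid, start, goal):
    """A* search on a 2D grid of 0 (free) / 1 (blocked) cells.

    Manhattan distance is the heuristic; it is admissible for
    4-connected movement, so the returned path is shortest.
    """
    rows, cols = len(grid), len(grid[0])
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])
    tie = itertools.count()                      # break f-score ties deterministically
    frontier = [(h(start), next(tie), 0, start, None)]
    parents = {}                                 # cell -> predecessor, set when finalized
    while frontier:
        _, _, g, cell, parent = heapq.heappop(frontier)
        if cell in parents:
            continue                             # already finalized via a shorter path
        parents[cell] = parent
        if cell == goal:                         # reconstruct by walking predecessors
            path = []
            while cell is not None:
                path.append(cell)
                cell = parents[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and nxt not in parents:
                heapq.heappush(frontier, (g + 1 + h(nxt), next(tie), g + 1, nxt, cell))
    return None  # goal unreachable
```

On a 3x3 grid with the center blocked, the optimal detour from corner to corner takes four moves (five cells).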

The AI in our lives today provides a small glimpse of more profound contributions to come. For example, the fielding of currently available technologies could save many thousands of lives, including those lost to accidents on our roadways and to errors made in medicine. Over the longer term, advances in machine intelligence will have deeply beneficial influences on healthcare, education, transportation, commerce, and the overall march of science. Beyond the creation of new applications and services, the pursuit of insights about the computational foundations of intelligence promises to reveal new principles about cognition that can help provide answers to longstanding questions in neurobiology, psychology, and philosophy.

On the research front, we have been making slow, yet steady progress on “wedges” of intelligence, including work in machine learning, speech recognition, language understanding, computer vision, search, optimization, and planning. However, we have made surprisingly little progress to date on building the kinds of general intelligence that experts and the lay public envision when they think about “Artificial Intelligence.” Nonetheless, advances in AI—and the prospect of new AI-based autonomous systems—have stimulated thinking about the potential risks associated with AI.

A number of prominent people, mostly from outside of computer science, have shared their concerns that AI systems could threaten the survival of humanity.1 Some have raised concerns that machines will become superintelligent and thus be difficult to control. Several of these speculations envision an “intelligence chain reaction,” in which an AI system is charged with the task of recursively designing progressively more intelligent versions of itself and this produces an “intelligence explosion.”4 While formal work has not been undertaken to deeply explore this possibility, such a process runs counter to our current understandings of the limitations that computational complexity places on algorithms for learning and reasoning. However, processes of self-design and optimization might still lead to significant jumps in competencies.

Other scenarios can be imagined in which an autonomous computer system is given access to potentially dangerous resources (for example, devices capable of synthesizing billions of biologically active molecules, major portions of world financial markets, large weapons systems, or generalized task markets9). The reliance on any computing systems for control in these areas is fraught with risk, but an autonomous system operating without careful human oversight and failsafe mechanisms could be especially dangerous. Such a system would not need to be particularly intelligent to pose risks.

We believe computer scientists must continue to investigate and address concerns about the possibilities of the loss of control of machine intelligence via any pathway, even if we judge the risks to be very small and far in the future. More importantly, we urge the computer science research community to focus intensively on a second class of near-term challenges for AI. These risks are becoming salient as our society comes to rely on autonomous or semiautonomous computer systems to make high-stakes decisions. In particular, we call out five classes of risk: bugs, cybersecurity, the “Sorcerer’s Apprentice,” shared autonomy, and socioeconomic impacts.

The first set of risks stems from programming errors in AI software. We are all familiar with errors in ordinary software; bugs frequently arise in the development and fielding of software applications and services. Some software errors have been linked to extremely costly outcomes and deaths. The verification of software systems is challenging and critical, and much progress has been made—some relying on AI advances in theorem proving. Many non-AI software systems have been developed and validated to achieve high degrees of quality assurance. For example, the software in autopilot and spacecraft systems is carefully tested and validated. Similar practices must be applied to AI systems. One technical challenge is to guarantee that systems built via machine learning methods behave properly. Another challenge is to ensure good behavior when an AI system encounters unforeseen situations. Our automated vehicles, home robots, and intelligent cloud services must perform well even when they receive surprising or confusing inputs. Achieving such robustness may require self-monitoring architectures in which a meta-level process continually observes the actions of the system, checks that its behavior is consistent with the core intentions of the designer, and intervenes or alerts if problems are identified. Research on real-time verification and monitoring of systems is already exploring such layers of reflection, and these methods could be employed to ensure the safe operation of autonomous systems.3,6
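The self-monitoring architecture described above can be sketched in a few lines. The following Python toy, with hypothetical names and a thermostat example of our own invention (not from the cited systems), shows a meta-level wrapper that checks each proposed action against the designer's intentions and intervenes on violation:

```python
def make_monitored(policy, invariants, fallback):
    """Wrap an action policy in a meta-level monitor.

    Every proposed action is checked against designer-specified
    invariants before execution; on any violation the monitor
    intervenes, reporting the violated invariants and substituting
    a safe fallback action.
    """
    def monitored(state):
        action = policy(state)
        violated = [name for name, ok in invariants.items()
                    if not ok(state, action)]
        if violated:
            return fallback(state), violated     # intervene and alert
        return action, []
    return monitored

# Illustrative use: a thermostat agent whose setpoint must stay in bounds.
raw_policy = lambda s: s["requested_setpoint"]            # naive: obey literally
invariants = {"in_bounds": lambda s, a: 10 <= a <= 30}    # designer's core intent
fallback = lambda s: min(max(s["requested_setpoint"], 10), 30)

safe_policy = make_monitored(raw_policy, invariants, fallback)
```

The wrapper pattern keeps the meta-level check separate from the base policy, so the same monitor can be reused across learned or hand-coded policies.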

A second set of risks is cyberattacks: criminals and adversaries are continually attacking our computers with viruses and other forms of malware. AI algorithms are as vulnerable as any other software to cyberattack. As we roll out AI systems, we need to consider the new attack surfaces that these expose. For example, by manipulating training data or preferences and trade-offs encoded in utility models, adversaries could alter the behavior of these systems. We need to consider the implications of cyberattacks on AI systems, especially when AI methods are charged with making high-stakes decisions. U.S. funding agencies and corporations are supporting a wide range of cybersecurity research projects, and artificial intelligence techniques will themselves provide novel methods for detecting and defending against cyberattacks. For example, machine learning can be employed to learn the fingerprints of malware, and new layers of reflection can be employed to detect abnormal internal behaviors, which can reveal cyberattacks. Before we put AI algorithms in control of high-stakes decisions, we must be confident these systems can survive large-scale cyberattacks.
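As a toy illustration of using learned baselines to detect abnormal internal behaviors, the following sketch fits per-feature statistics on known-good runs and flags large deviations. Feature names and the z-score threshold are illustrative assumptions, not from any cited system:

```python
from statistics import mean, stdev

def fit_baseline(normal_runs):
    """Per-feature mean and standard deviation over known-good runs."""
    features = normal_runs[0].keys()
    return {f: (mean(run[f] for run in normal_runs),
                stdev(run[f] for run in normal_runs))
            for f in features}

def is_anomalous(baseline, observation, z_threshold=3.0):
    """Flag an observation if any feature lies > z_threshold sigmas from baseline."""
    return any(sigma > 0 and abs(observation[f] - mu) / sigma > z_threshold
               for f, (mu, sigma) in baseline.items())
```

Production systems use far richer models than per-feature z-scores, but the structure is the same: learn what "normal" looks like, then treat sharp deviations as a signal worth investigating.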

A third set of risks echoes the tale of the Sorcerer’s Apprentice. Suppose we tell a self-driving car to “get us to the airport as quickly as possible!” Would the autonomous driving system put the pedal to the metal and drive at 125 mph, putting pedestrians and other drivers at risk? Troubling scenarios of this form have appeared recently in the press. Many of the dystopian scenarios of out-of-control superintelligences are variations on this theme. All of these examples refer to cases where humans have failed to correctly instruct the AI system on how it should behave. This is not a new problem. An important aspect of any AI system that interacts with people is that it must reason about what people intend rather than carrying out commands literally. An AI system must analyze and understand whether the behavior that a human is requesting is likely to be judged as “normal” or “reasonable” by most people. In addition to relying on internal mechanisms to ensure proper behavior, AI systems need to have the capability—and responsibility—of working with people to obtain feedback and guidance. They must know when to stop and “ask for directions”—and always be open to feedback.
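The "ask for directions" behavior can be sketched as a simple intent filter: execute a literal command only when it is both legal and within the range most people would judge reasonable, and otherwise stop and ask. All numbers and names below are illustrative assumptions:

```python
def interpret_command(requested_speed, legal_limit, typical_range):
    """Decide whether to execute a literal driving command or ask for guidance.

    A request above the legal limit, or outside the range of speeds
    most drivers would consider normal here, triggers a clarifying
    question rather than literal execution.
    """
    lo, hi = typical_range
    if requested_speed > legal_limit:
        return ("ask", f"{requested_speed} mph exceeds the {legal_limit} mph limit; proceed at {legal_limit}?")
    if not lo <= requested_speed <= hi:
        return ("ask", f"{requested_speed} mph is unusual here (typical {lo}-{hi} mph); confirm?")
    return ("execute", requested_speed)
```

The hard research problem, of course, is learning what "normal" and "reasonable" mean from data and context rather than hand-coding thresholds as done here.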

Some of the most exciting opportunities for deploying AI bring together the complementary talents of people and computers.5 AI-enabled devices are allowing the blind to see, the deaf to hear, and the disabled and elderly to walk, run, and even dance. AI methods are also being developed to augment human cognition. As an example, prototypes have been aimed at predicting what people will forget and helping them to remember and plan. Moving to the realm of scientific discovery, people working together with the Foldit online game8 were able to discover the structure of the virus that causes AIDS in only three weeks, a feat that neither people nor computers working alone could match. Other studies have shown how the massive space of galaxies can be explored hand-in-hand by people and machines, where the tireless AI astronomer understands when it needs to reach out and tap the expertise of human astronomers.7 There are many opportunities ahead for developing real-time systems that involve a rich interleaving of problem solving by people and machines.

However, building these collaborative systems raises a fourth set of risks stemming from challenges with fluidity of engagement and clarity about states and goals. Creating real-time systems where control needs to shift rapidly between people and AI systems is difficult. For example, airline accidents have been linked to misunderstandings arising when pilots took over from autopilots.a The problem is that unless the human operator has been paying very close attention, he or she will lack a detailed understanding of the current situation and can make poor decisions. Here again, AI methods can help solve these problems by anticipating when human control will be required and providing people with the critical information that they need.

A fifth set of risks concerns the broad influences of increasingly competent automation on socioeconomics and the distribution of wealth.2 Several lines of evidence suggest AI-based automation is at least partially responsible for the growing gap between per capita GDP and median wages. We need to understand the influences of AI on the distribution of jobs and on the economy more broadly. These questions move beyond computer science into the realm of economic policies and programs that might ensure that the benefits of AI-based productivity increases are broadly shared.

Achieving the tremendous potential benefits of AI for people and society will require ongoing and vigilant attention to the near- and longer-term challenges to fielding robust and safe computing systems. Each of the first four challenges listed in this Viewpoint (software quality, cyberattacks, “Sorcerer’s Apprentice,” and shared autonomy) is being addressed by current research, but even greater efforts are needed. We urge our research colleagues and industry and government funding agencies to devote even more attention to software quality, cybersecurity, and human-computer collaboration on tasks as we increasingly rely on AI in safety-critical functions.

At the same time, we believe scholarly work is needed on the longer-term concerns about AI. Working with colleagues in economics, political science, and other disciplines, we must address the potential of automation to disrupt the economic sphere. Deeper study is also needed to understand the potential of superintelligence or other pathways to result in even temporary losses of control of AI systems. If we find there is significant risk, then we must work to develop and adopt safety practices that neutralize or minimize that risk. We should study and address these concerns, and the broader constellation of risks that might come to the fore in the short- and long-term, via focused research, meetings, and special efforts such as the Presidential Panel on Long-Term AI Futuresb organized by the AAAI in 2008–2009 and the One Hundred Year Study on Artificial Intelligence,10,c which is planning a century of ongoing studies about advances in AI and its influences on people and society.

The computer science community must take a leadership role in exploring and addressing concerns about machine intelligence. We must work to ensure that AI systems responsible for high-stakes decisions will behave safely and properly, and we must also examine and respond to concerns about potential transformational influences of AI. Beyond scholarly studies, computer scientists need to maintain an open, two-way channel for communicating with the public about opportunities, concerns, remedies, and realities of AI.

References

1. Bostrom, N. Superintelligence: Paths, Dangers, Strategies. Oxford University Press, 2014.

2. Brynjolfsson, E. and McAfee, A. The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. W.W. Norton & Company, New York, 2014.

3. Chen, F. and Rosu, G. Toward monitoring-oriented programming: A paradigm combining specification and implementation. Electr. Notes Theor. Comput. Sci. 89, 2 (2003), 108–127.

4. Good, I.J. Speculations concerning the first ultraintelligent machine. In Advances in Computers, Vol. 6. F.L. Alt and M. Rubinoff, Eds., Academic Press, 1965, 31–88.

5. Horvitz, E. Principles of mixed-initiative user interfaces. In Proceedings of CHI ’99, ACM SIGCHI Conference on Human Factors in Computing Systems (Pittsburgh, PA, May 1999); http://bit.ly/1DN039y.

6. Huang, J. et al. ROSRV: Runtime verification for robots. Runtime Verification, (2014), 247–254.

7. Kamar, E., Hacker, S., and Horvitz, E. Combining human and machine intelligence in large-scale crowdsourcing. AAMAS 2012 (Valencia, Spain, June 2012); http://bit.ly/1h6gfbU.

8. Khatib, F. et al. Crystal structure of a monomeric retroviral protease solved by protein folding game players. Nature Structural and Molecular Biology 18 (2011), 1175–1177.

9. Shahaf, D. and Horvitz, E. Generalized task markets for human and machine computation. AAAI 2010, (Atlanta, GA, July 2010), 986–993; http://bit.ly/1gDIuho.

10. You, J. A 100-year study of artificial intelligence? Science (Jan. 9, 2015); http://bit.ly/1w664U5.

Authors

Thomas G. Dietterich ([email protected]) is a Distinguished Professor in the School of Electrical Engineering and Computer Science at Oregon State University in Corvallis, OR, and president of the Association for the Advancement of Artificial Intelligence (AAAI).

Eric J. Horvitz ([email protected]) is Distinguished Scientist and Director of the Microsoft Research lab in Redmond, Washington. He is the former president of AAAI and continues to serve on AAAI’s Strategic Planning Board and Committee on Ethics in AI.

Footnotes

a. See http://en.wikipedia.org/wiki/China_Airlines_Flight_006.

b. See http://www.aaai.org/Organization/presidential-panel.php.

c. See https://ai100.stanford.edu.

Posted in AI

EMC’s David Goulden on the Reasons for not Breaking up with VMware (Video)

http://player.theplatform.com/p/PhfuRC/vNP4WUiQeJFa/embed/select/FGx9a5if6xCz?autoPlay=true&t=259

David Goulden, EMC: “Our corporate clients are looking for fewer, more strategic technology partners, not more small vendors.”

re/code: Goulden is the CEO of EMC’s information infrastructure business unit, the biggest portion of the federation that includes its main business of selling equipment used to store information in corporate data centers, and which accounted for $18 billion in revenue last year. Previously, he was EMC’s COO and is considered a possible successor to current EMC CEO Joe Tucci, who has been working without a contract since February and is expected to retire by the end of the year.

Does he want to be CEO? “The short answer is that the timing and the selection of a CEO, that’s up to the board of directors,” Goulden said.

“As it relates to me,” he added, “I love my job, it’s a great job, the best one I’ve actually had and I want to help the federation in any way I can to make sure it stays in the winner’s column. Beyond that you’ll have to get back to the board on how they’ll manage the process.”

Posted in Misc

The Future of Online Shopping (Infographic)

ecommerce_future_infographic

Source: The Chat Shop Blog

Posted in Digitization

Data Science and Measuring Happiness

Is it possible to measure happiness? Can we compare countries on the basis of a universal yardstick for collective happiness similar to Gross National Product (GNP), the accepted measure for a country’s material well-being? How could data science contribute to practical applications of happiness analysis?

The Kingdom of Bhutan, a 770,000-strong nation landlocked between China and India at the eastern end of the Himalayas, has been developing what it calls Gross National Happiness (GNH) and applying it to government policies for more than 40 years. Influenced by Buddhism, happiness as measured by GNH is different from the way it is perceived in the West in two ways. It is “multidimensional – not measured only by subjective well-being,” according to the Short Guide to GNH. Furthermore, it is “not focused narrowly on happiness that begins and ends with oneself and is concerned for and with oneself.”

A new data science workshop, to take place November 6th to 11th in Bhutan, will discuss and debate questions related to the measurement of happiness  with experts in data science, Buddhist leadership and Gross National Happiness. The Data Happy conference and workshop will involve a high level of participatory process, collaboratively exploring ways by which individuals can contribute to the measurement of Gross National Happiness on a daily basis.

Troy Sadkowsky, the lead organizer of the event and the founder of Data Scientists Pty Ltd and creator of DataScientists.Net, says: “Developing a more holistic and accurate measure of wealth that includes more than just financial aspects would benefit us all. This could be a powerful tool for monitoring a healthy global growth in sustainable prosperity. Data Science is purpose-built for exploring this new terrain.”

The multiple dimensions of happiness and its collective or community orientation are reflected in the nine domains that comprise the GNH index: psychological wellbeing, time use, community vitality, cultural diversity, ecological resilience, living standards, health, education, and good governance. These domains break down into 33 clustered indicators, each composed of several variables, for a total of 124 variables.

The government of Bhutan administers a survey based on the GNH index every four years, with respondents rating themselves on each of the variables. The 2010 survey found that 10.4% of the population was ‘unhappy’ (defined as achieving sufficiency in 50% or less of the weighted indicators), 48.7% were ‘narrowly happy,’ 32.6% were ‘extensively happy,’ and 8.3% were ‘deeply happy’ (showing sufficiency in 77% or more of the weighted indicators).
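A minimal sketch of how such a classification might be computed, assuming the 50% and 77% cutoffs quoted above and an invented 66% boundary between the two middle groups (the survey's actual middle cutoff is not given here):

```python
def sufficiency_share(indicators, weights):
    """Weighted share of indicators in which a respondent achieved sufficiency.

    `indicators` maps indicator name -> True/False (sufficiency achieved);
    `weights` maps indicator name -> its weight in the index.
    """
    total = sum(weights.values())
    achieved = sum(w for name, w in weights.items() if indicators[name])
    return achieved / total

def gnh_category(share, mid_cutoff=0.66):
    """Map a sufficiency share to the four GNH groups.

    The 0.50 and 0.77 cutoffs follow the 2010 survey description; the
    mid_cutoff separating 'narrowly' from 'extensively' happy is an
    assumed illustrative value.
    """
    if share <= 0.50:
        return "unhappy"
    if share < mid_cutoff:
        return "narrowly happy"
    if share < 0.77:
        return "extensively happy"
    return "deeply happy"
```

A continuous online process like the one proposed at the conference would feed streams of such indicator data into the same kind of aggregation, rather than a once-every-four-years snapshot.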

At the Data Happy conference, participants will discuss a proposed system for going beyond the periodic paper-based survey to an online process that will run continuously and will be integrated into other services in Bhutan. Says Sadkowsky: “We will be looking to introduce as much new technology as feasible to help increase accessibility and usability. A major goal is to convert attitudes around the GNH measurement tools from something that people feel they have to do, like mandatory census surveys, to something that they want to do.”

Sadkowsky hopes that participants in the Data Happy conference will contribute to designing a tool for measuring Gross Individual Happiness that can be integrated into the daily lives of the people of Bhutan. Find out more about the program and registration here.

Originally published on Forbes.com

Posted in Data Science

The Internet of Things (IoT) Comes to the NFL

nfl_tech_infographic-100612792-large.idge

CIO.com:

…each player will be equipped with a set of RFID sensors about the size of a quarter embedded in his shoulder pads, each emitting unique radio frequencies. Gillette Stadium (and every other stadium used by the NFL) has been equipped with 20 receivers to pick up those radio frequencies and pinpoint every player’s field position, speed, distance traveled and acceleration in real time.

By using two sensors for each player — one embedded in the left shoulder pad and one on the right — the system will also be able to identify the facing of each player.
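From such timestamped position samples, the per-player metrics are straightforward geometry. A sketch, assuming (t, x, y) samples in seconds and yards, and a two-sensor facing computation whose coordinate conventions are our own assumptions (not the NFL system's actual pipeline):

```python
import math

def player_metrics(track):
    """Distance, average speed, and peak speed from timestamped (t, x, y) samples."""
    dist = 0.0
    peak = 0.0
    for (t0, x0, y0), (t1, x1, y1) in zip(track, track[1:]):
        step = math.hypot(x1 - x0, y1 - y0)   # length of this segment
        dist += step
        peak = max(peak, step / (t1 - t0))    # per-segment speed
    elapsed = track[-1][0] - track[0][0]
    return {"distance": dist, "avg_speed": dist / elapsed, "peak_speed": peak}

def facing(left_pad, right_pad):
    """Facing direction in degrees from the two shoulder-pad sensor positions.

    The forward direction is the left->right shoulder axis rotated 90
    degrees counterclockwise; angles are measured from the +x axis.
    """
    dx, dy = right_pad[0] - left_pad[0], right_pad[1] - left_pad[1]
    return math.degrees(math.atan2(dx, -dy)) % 360
```

A real system would also smooth the raw RFID fixes before differencing, since dividing noisy positions by small time steps amplifies error.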

The NFL plans to use the data generated to power the NFL 2015 app for Xbox One and Windows 10, allowing for things like “Next Gen Replay” that will allow fans to call up stats for each player tied into highlight clips posted on the app. But that’s just the beginning. The data will be fed to broadcasters, leveraged for in-stadium displays and provided to coaching staff and players.

Posted in Internet of Things

The global UAV drones market to grow 32% annually to reach $5.59 billion by 2020

The $599.99 IRIS+ from 3D Robotics, “a robot that will automatically fly itself where you tell it to go, while keeping a camera dead steady”

MarketsAndMarkets:

Before 2014, the use of UAV drones for commercial applications was highly regulated in countries such as the United States. Since 2014, the Federal Aviation Administration (FAA) has relaxed the norms for the commercial use of UAV drones and has begun granting companies exemptions for commercial drone operations under Section 333, with some restrictions. As of September 2, 2015, the FAA had granted 1,439 exemptions for the commercial use of drones. The global UAV drones market is expected to reach $5.59 billion by 2020, growing at a CAGR of 32.22% between 2015 and 2020. Among commercial applications, precision agriculture is expected to grow at the highest CAGR, 42.25%, during the forecast period.
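The quoted figures can be sanity-checked with the standard compound-growth formula. A quick sketch; the implied 2015 base value is our own back-calculation, not a figure from the report:

```python
def cagr(begin_value, end_value, years):
    """Compound annual growth rate implied by begin and end values."""
    return (end_value / begin_value) ** (1 / years) - 1

def project(begin_value, rate, years):
    """Value after compounding `rate` annually for `years` years."""
    return begin_value * (1 + rate) ** years

# Back-of-the-envelope check of the quoted figures: $5.59B in 2020 at a
# 32.22% CAGR over 2015-2020 implies a 2015 market of about $1.38B.
implied_2015 = 5.59 / (1 + 0.3222) ** 5
```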

The rotary blade drones market held the largest market share in 2014, and it is expected to reach $4.93 billion by 2020, growing at a CAGR of 31.42% between 2015 and 2020. Rotary blade drones are widely preferred by the professionals from the media and entertainment industry for video shooting and photography.

The UAV drones market is dominated by the key players such as DJI (China), Parrot S.A. (France), 3D Robotics Inc. (U.S.), and PrecisionHawk (U.S.), among others.

Posted in Robotics

2 Solutions to Demand for Tech Skills: Coding Bootcamps and Computer Science Education Starting in Elementary Schools

CodingBootcamps

LinkedIn Official Blog:

Technical talent is in high demand. As of publishing this post, a LinkedIn job search for “Software Engineers” in the US reveals more than 100,000 open jobs. Adding a couple more tech-related roles (“User Designer,” “Data Scientist”) increases the total to more than 200,000 job openings. Job seekers looking to meet job requirements can enroll in a Master’s degree program, but that comes with a 2-year opportunity cost. Now, a shorter path is emerging: fully immersive coding bootcamps.

Coding bootcamps typically last 6-12 weeks and require participants to show up to a class in person. Bootcamps are a relatively new model, but they’re a growing trend that could help close the skills gap. Tapping into the Economic Graph, we compiled aggregated data on over 150 bootcamp programs and more than 25,000 LinkedIn members who have indicated they are attending or have attended bootcamps to identify emerging trends.

In 2011, fewer than one hundred LinkedIn members indicated they had graduated from bootcamp programs. In 2014, more than 8,000 members completed coding bootcamps and added them to their profiles, reflecting a rise in acceptance of the bootcamp model. The number of bootcamp graduates in the first six months of 2015 has nearly surpassed all of 2014. At this rate, we can expect to see more than 16,000 graduates by the end of 2015 — more than double the total number of 2014 graduates.

An Open Letter from the Nation’s Tech and Business Leaders: Why Computer Science for All is good for all

Yesterday morning, Mayor Bill de Blasio took to the stage to announce the nation’s most ambitious and far-reaching technology education effort to date: New York City will deliver computer science education to every student in each of the City’s public elementary, middle, and high schools by 2025 through its new Computer Science for All initiative.

The impact of this investment cannot be overstated.

One in five New York City businesses employs tech talent, fueling the growth of a tech sector that today represents nearly 300,000 jobs and $30 billion in annual wages. Between 2007 and 2014, tech employment in the City grew 57 percent, nearly six times faster than overall citywide employment. And by the year 2020, the U.S. Bureau of Labor Statistics projects more than 1.4 million computer specialist job openings nationwide.

Posted in Misc

Steve Papa: Data volume is cumulative, analysis possibilities are combinatoric (video)

[youtube https://www.youtube.com/watch?v=YgKSCFBZzlM?rel=0]

Steve Papa is an active founder and venture capitalist. As founder of Parallel Wireless he is making the deployment of the carrier-grade LTE RAN as easy as deploying enterprise Wi-Fi. As investor, he is active both as a founding investor and a board partner for Andreessen Horowitz.

Previously he founded and led the big data database company Endeca until Oracle acquired it in 2011 for a reported $1.1 billion, at the time Oracle’s largest acquisition of a private company. Endeca launched at Demo 2001 and pioneered the now ubiquitous faceted search and query. He got his high-tech career started at Inktomi, where he created the company’s carrier-class caching business and then helped launch Akamai. Steve studied at Princeton and Harvard, and advises the entrepreneurship programs at both.

Posted in Big Data Analytics, Data Growth

The Virtual Reality Landscape (Infographic)

VirtualReality

Phil Orr and Max Wessel, Sapphire Ventures:

…we believe three characteristics of VR will help it carve out its position in the world of computing. These characteristics hint at Neal Stephenson’s metaverse. Namely, they are presence, scale, and interaction design.

  • Presence is an Oculus-coined term that effectively translates to a sense of being there. With presence, comes focus and attention. It’s impossible to ignore a piece of content when it’s your reality. It’s also proven that presence changes the way people internalize information and react emotionally to stimulus. That level of intellectual involvement will impact what we can do with software.
  • Scale is something we all understand. It’s difficult to imagine what the interior of a house will look like from a diagram on the screen of an iPhone. It’s easy for an architect to help you imagine how the interior of a house might look when you can walk around inside of it.
  • Interaction design changes fundamentally when your body becomes the controller. Today, when you use a computer for training or simulation, you interact with it using a mouse or a keyboard. It can inform you about different environments, but it does little to teach you how to execute tasks physically in the real world. VR changes that. When a system is designed to monitor movement on hundreds of different points on your body, it can help train you to do everything from perform surgical motions to throwing a football.

Posted in Misc

Startups Disrupting Apple (Infographic)

CBInsights_ios

CB Insights:

…we used CB Insights data and analysis to dig in and see which startups have been peeling away at some of the categories served by default iOS apps… We found dozens of investor-backed private companies developing apps that would like to displace Apple’s stock apps and knock them off the home screen. In all, we identified 44 startups attacking iOS, with music, messaging, and health seeming to be the three most contested categories.

Posted in startups