Compatibility Research Inc., granddaddy of online dating (Video)

Video: http://espn.go.com/video/clip?id=espn:13415984

Look Magazine, February 1966

FiveThirtyEight:

In our modern age of Tinder, OkCupid and Match.com, we’re used to the idea that algorithms can help us find love. But while the algorithms may have improved as the market for online dating has expanded, the inputs — the questions these computer matchmakers ask dating hopefuls — haven’t changed much since the 1960s, when Compatibility Research Inc. launched the first computerized dating service.
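Those questionnaire inputs lend themselves to a very simple matching scheme, which can be sketched in a few lines. This is a toy illustration only: the answer vectors, weights, and scoring rule here are invented, and Operation Match's actual program is not public.

```python
# Toy questionnaire matchmaking: score pairs of dating hopefuls by
# (weighted) agreement on their multiple-choice answers, then pick the
# candidate with the highest score. All data here is made up.

def compatibility(answers_a, answers_b, weights=None):
    """Weighted fraction of questions on which two people agree."""
    if weights is None:
        weights = [1.0] * len(answers_a)
    agree = sum(w for a, b, w in zip(answers_a, answers_b, weights) if a == b)
    return agree / sum(weights)

def best_match(person, candidates):
    """Return the candidate whose answers agree most with `person`."""
    return max(candidates, key=lambda c: compatibility(person, c))

alice = [1, 3, 2, 2, 4]
pool = [[2, 2, 2, 1, 3], [1, 3, 1, 2, 4], [1, 3, 2, 1, 1]]
print(best_match(alice, pool))   # → [1, 3, 1, 2, 4] (agrees on 4 of 5)
```

Sixty years of progress have mostly refined the scoring function and scaled the candidate pool; the input is still a questionnaire.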


See also the Harvard Crimson article from November 3, 1965, about Compatibility Research’s Operation Match, which includes this ditty:

Well, I filled out my form and I sent it along,

Never hoping I’d get anything like this.

But now when I see her,

Whenever I see her,

I want to give her one great big I.B.M. kiss.

She’s my I.B.M. baby, the ideal lady,

She’s my I.B.M. baby.

From the first time I met her I couldn’t forget her,

She’s my I.B.M. baby.

Well we’ve dated sometime,

Things are going just fine, and I’d like to settle down with her.

Just like birds of a feather

We put 2 and 2 together, and we came up with an I.B.M. affair.

She’s my I.B.M. baby, I don’t mean maybe,

She’s my I.B.M. baby.

One in 10 adults now spends, on average, an hour a day on a dating website or app, according to Nielsen. Online dating in the U.S. was a $2.2 billion industry last year.



IBM Watson and Ken Jennings Compete Again, This Time for Title of Most Productive (Video)

[youtube https://www.youtube.com/watch?v=lszB8muRqQA?rel=0]


The Road to Zillions of Connected Things (IoT)


Fresh off topping Gartner’s most-hyped technologies list for the second year in a row, the Internet of Things (IoT) has kept its buzz going over the past couple of weeks with a series of announcements and new market analysis reports. First, here’s a sample of recent announcements:

  • September 20: Dialog Semiconductor has agreed to acquire Atmel Corporation for approximately $4.6 billion, combining forces in the mobile power, IoT and automotive markets, and addressing a “market opportunity of approximately $20 billion by 2019.”
  • September 18: Orange announced it is building a Low Power Wide Area (LPWA) network covering the whole of France, in line with its “ambition to become the number one operator for the Internet of Things.”
  • September 17: Alcatel-Lucent announced the acquisition of Mformation to provide service providers and enterprises with a secure, scalable, application-independent IoT security and control platform for use across multiple industries.
  • September 16: HCL Technologies announced it will jointly develop Internet of Things solutions with IBM, and that the two companies will set up an incubation center for that purpose in Noida, India.
  • September 15: Salesforce has entered the Internet of Things market with its IoT Cloud.  Marc Benioff, Salesforce CEO, told the attendees of its Dreamforce annual conference: “With the Internet of Things, I’m more connected than ever. It’s truly a customer revolution.”
  • September 14: GE announced the creation of GE Digital, a new business unit led by Chief Digital officer (CDO) Bill Ruh, with the mission to “win in the Industrial Internet” (GE’s term for the Internet of Things).
  • September 14: IBM also announced a new business unit dedicated to conquering the Internet of Things market, led by new-hire Harriet Green.

We also learned more this month about the state-of-the-market for IoT and its potential impact from a number of new reports:

IDC (also here) shared the results of its survey of 2,350 IT and business decision makers in (mostly) large and medium-size enterprises worldwide:

  • Enterprise decision makers see the IoT as “strategic” (58%, especially in the health, transportation, and manufacturing industries) or “transformative” (24%, especially in IT and professional services); 13% are still “considering” it (especially in government and financial services) and 4.1% think it’s “not important.”
  • IoT momentum is real and quantifiable:  17% of participants in the survey have already deployed IoT and 31% plan to do so this year. Less than 5% “have considered but decided against it.”
  • IoT strategies are global in scope, with enterprises in Asia/Pacific leading other regions at almost 55% of survey participants (compared, for example, with about 45% in North America).
  • Survey participants see B2B as where IoT will grow, a reversal from last year, when a majority thought consumer IoT was where the action would be.
  • A shift in where the data is processed: More survey participants this year will process the data generated by IoT sensors at the “edge” rather than in the data center, a reversal from last year’s survey.
  • Top drivers for creating an IoT strategy:  Increased productivity (14.2%), time to market (11.8%), and process automation (10.1%).
  • Top challenges for the IoT: Security, upfront costs, ongoing costs.
  • IoT is anyone’s game in 2016: Hardware and networking vendors have lost ground in their perception as “leaders,” while software vendors, analytics vendors, and device/component vendors have gained in market awareness/perception. “Industrial Internet companies,” a category that was not asked about last year, registered at less than 10%; GE and other companies using this term have their work cut out for them to make it synonymous with IoT.

Gartner reiterated its forecast of more than 30 billion installed IoT units and estimated it will result in a 20% increase in potential revenue generated from software for manufacturers running ‘intelligent devices’.  The Internet of Things (IoT), in Gartner’s view, turns every manufacturer into a software provider, a transformation which will have profound impact on application strategy, architecture, development and integration.

Gartner recommends that manufacturers differentiate with software, increase the intelligence in their devices by adding software, and ensure they have the licensing and entitlements tools to manage the software.

Accenture estimates that based on current policy and investment trends, the IoT could add about $500 billion to China’s cumulative GDP by 2030. This would result in China’s GDP being 0.3 percent higher in that year compared with current projections. However, by taking additional measures to improve its capacity to absorb IoT technologies and increase IoT investment, China could boost its annual GDP by 1.3 percent by 2030, cumulatively adding $1.8 trillion to the economy by that time.

Last but not least, Harvard Business School professor Michael Porter and PTC CEO Jim Heppelmann published in the Harvard Business Review “How Smart, Connected Products Are Transforming Companies.” They describe the impact of the IoT on the organizational structure of manufacturing companies and conclude that “Smart, connected products reshape not only competition, as we detailed in our previous article, but the very nature of the manufacturing firm, its work, and how it is organized. They are creating the first true discontinuity in the organization of manufacturing firms in modern business history.” They also see broader benefits of the IoT, including changing consumption patterns: “Smart, connected products will free us to purchase only the goods and services we need, to share products that we do not use much, and to get more out of the products that we already have. Instead of tossing out old products for the next generation, we will hold on to products that are continually improved, upgraded, and modernized.”

Originally published on Forbes.com


What Makes a Good Data Scientist?

Infographic: what makes a good data scientist


Creative Destruction and the ‘Uber Effect’

Chart: Uber’s valuation vs. Medallion Financial (TAXI) stock price (CB Insights)

CB Insights:

We used CB Insights’ valuation data to look at how the rise of Uber’s valuation correlates with the market capitalization of Medallion Financial Corp (NASDAQ: TAXI). Medallion Financial is a publicly-traded company that originates, acquires, and services loans used to purchase taxi medallions in several large US urban markets that Uber is also active in, including New York. We charted the stock price of TAXI versus the valuations for many of Uber’s rounds since 2010.

We found that TAXI has also been hammered by an “Uber Effect,” with its price down even more than the decline seen by New York City medallions. TAXI’s stock price is down nearly 49% since Uber raised its breakout $258M Series C at a $3.5B valuation. (The NASDAQ is up ~26% in the same time period.)

Uber’s valuation is up over 13x.

Mark J. Perry at the American Enterprise Institute:

Charts: NYC taxi medallion asking prices, and Medallion Financial (TAXI) share price (Mark J. Perry, AEI)

In 1942, economist Joseph Schumpeter described “creative destruction” as a “process of industrial mutation that incessantly revolutionizes the economic structure from within, incessantly destroying the old one, incessantly creating a new one.” There probably hasn’t been a better example of Schumpeterian creative destruction in the last decade or more than the recent ascendance of app-based ride-sharing services like Uber (and Lyft, Sidecar, Gett, Via, etc.)  challenging traditional, legacy taxi cartels in cities like New York, San Francisco, Chicago and more than 160 other US cities. Market-based evidence of the gale of creative destruction in the transportation industry is displayed in the two charts above. The top chart above shows how the increasing popularity of ride-sharing apps like Uber has caused the price of New York City individual taxi medallions to collapse by at least 37%, from a peak of more than $1 million in August 2013 to only about $650,000 in recent months (based on advertised asking prices here, not actual sales).

Further evidence of the “Uber effect” is displayed in the bottom chart above, showing the collapse in the stock price of Medallion Financial Corporation, from $16.45 in November 2013 to below $7 per share in the last few days. Medallion Financial Corporation (NASDAQ: TAXI) is a NYC-based specialty finance company that originates, acquires, and services loans that finance taxicab medallions. Just as the sky-high taxi medallion prices have been significantly eroded due to competition from the upstart ride-sharing services, so has the value of Medallion Financial Corporation’s stock price been significantly dropping. After tracking the S&P 500 Index closely for many decades, the share price of Medallion Financial has fallen by a whopping 58% from its November 2013 peak, during a time when the S&P 500 has increased by 7.1%.

As the traditional, legacy taxi industry continues to collapse under the Schumpeterian forces of market disruption, the taxi cartels like the one in NYC are asking for taxpayer bailouts, or at least taxpayer-supported guarantees for taxi medallion loans. Consumers are the obvious winners from the creative destruction in the transportation industry – we now have more choice, better and faster service, friendlier drivers, cleaner cars, and maybe most importantly — lower prices. Traditional taxi drivers and medallion owners, after being protected from competition by government regulations for many generations, are the obvious losers from the “Uber effect.” Medallion prices will continue to fall as the taxi cartels continue to crumble and collapse.

NPR Planet Money: Listen to Episode 643, July 31, 2015, on Gene Freidman, the “Taxi King” and how his empire is starting to crumble. Also, “Why Does A Taxi Medallion Cost $1 Million?” from 2011.


10 Predictions for Digital and IT Transformation: Gartner


Gartner released today its top predictions for “the digital future… an algorithmic and smart machine-driven world where people and machines must define harmonious relationships”:

1)    By 2018, 20 percent of business content will be authored by machines.
Technologies with the ability to proactively assemble and deliver information through automated composition engines are fostering a movement from human- to machine-generated business content. Data-based and analytical information can be turned into natural-language writing using these emerging tools. Documents such as shareholder reports, legal documents, market reports, press releases, articles and white papers are all candidates for automated writing tools.
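The data-to-text idea behind these composition engines can be illustrated with a deliberately minimal sketch: fill a narrative template from structured figures. Commercial NLG products are far more sophisticated; the function name and phrasing below are invented.

```python
# Minimal illustration of machine-authored business content: turn
# structured figures into a natural-language sentence via a template.

def quarterly_summary(company, revenue, prior_revenue):
    """Render one revenue data point as a press-release-style sentence."""
    change = (revenue - prior_revenue) / prior_revenue * 100
    direction = "rose" if change >= 0 else "fell"
    return (f"{company} revenue {direction} {abs(change):.1f}% "
            f"to ${revenue:,.0f}M from ${prior_revenue:,.0f}M a year earlier.")

print(quarterly_summary("Acme Corp", 1250, 1100))
# → Acme Corp revenue rose 13.6% to $1,250M from $1,100M a year earlier.
```

Scale this to thousands of templates and a richer document planner, and you have the skeleton of an automated earnings-report writer.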

2)    By 2018, six billion connected things will be requesting support.
In the era of digital business, when physical and digital lines are increasingly blurred, enterprises will need to begin viewing things as customers of services — and to treat them accordingly. Mechanisms will need to be developed for responding to significantly larger numbers of support requests communicated directly by things. Strategies will also need to be developed for responding to them that are distinctly different from traditional human-customer communication and problem-solving. Responding to service requests from things will spawn entire service industries, and innovative solutions will emerge to improve the efficiency of many types of enterprise.
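What a support request "communicated directly by a thing" might look like can be sketched as structured data plus machine-to-machine triage. The field names, fault codes, and routing rule below are invented for illustration.

```python
# Hypothetical sketch of a "thing as customer": a connected device files
# its own support ticket as structured data, and a triage function
# routes it with no human-style conversation involved.
import json

def device_support_request(device_id, fault_code, severity):
    """The device, not a person, is the customer opening the ticket."""
    return json.dumps({
        "customer_type": "thing",
        "device_id": device_id,
        "fault_code": fault_code,
        "severity": severity,          # 1 (minor) .. 5 (critical)
    })

def triage(request_json):
    """Route purely on structured fields, at machine speed and scale."""
    req = json.loads(request_json)
    return "dispatch_technician" if req["severity"] >= 3 else "auto_remediate"

ticket = device_support_request("hvac-0042", "FAN_STALL", severity=4)
print(triage(ticket))   # → dispatch_technician
```

The point of the prediction is volume: six billion of these tickets cannot be answered by call centers, only by pipelines like this one.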

3)    By 2020, autonomous software agents outside of human control will participate in five percent of all economic transactions.
Algorithmically driven agents are already participating in our economy. However, while these agents are automated, they are not fully autonomous, because they are directly tethered to a robust collection of mechanisms controlled by humans — in the domains of our corporate, legal, economic and fiduciary systems. New autonomous software agents will hold value themselves, and function as the fundamental underpinning of a new economic paradigm that Gartner calls the programmable economy. The programmable economy has potential for great disruption to the existing financial services industry. We will see algorithms, often developed in a transparent, open-source fashion and set free on the blockchain, capable of banking, insurance, markets, exchanges, crowdfunding — and virtually all other types of financial instruments

4)    By 2018, more than 3 million workers globally will be supervised by a “robo-boss.”
Robo-bosses will increasingly make decisions that previously could only have been made by human managers. Supervisory duties are increasingly shifting into monitoring worker accomplishment through measurements of performance that are directly tied to output and customer evaluation. Such measurements can be consumed more effectively and swiftly by smart machine managers tuned to learn based on staffing decisions and management incentives.

5)    By year-end 2018, 20 percent of smart buildings will have suffered from digital vandalism.
Inadequate perimeter security will increasingly result in smart buildings being vulnerable to attack. With exploits ranging from defacing digital signage to plunging whole buildings into prolonged darkness, digital vandalism is a nuisance, rather than a threat. There are, nonetheless, economic, health and safety, and security consequences. The severity of these consequences depend on the target. Smart building components cannot be considered independently, but must be viewed as part of the larger organizational security process. Products must be built to offer acceptable levels of protection and hooks for integration into security monitoring and management systems.

6)    By 2018, 45 percent of the fastest-growing companies will have fewer employees than instances of smart machines.
Gartner believes the initial group of companies that will leverage smart machine technologies most rapidly and effectively will be startups and other newer companies. The speed, cost savings, productivity improvements and ability to scale of smart technology for specific tasks offer dramatic advantages over the recruiting, hiring, training and growth demands of human labor. Some possible examples are a fully automated supermarket or a security firm offering drone-only surveillance services. The “old guard” (existing) companies, with large amounts of legacy technologies and processes, will not necessarily be the first movers, but the savvier companies among them will be fast followers, as they will recognize the need for competitive parity for either speed or cost.

7)    By year-end 2018, customer digital assistants will recognize individuals by face and voice across channels and partners.
The last mile for multichannel and exceptional customer experiences will be seamless two-way engagement with customers and will mimic human conversations, with both listening and speaking, a sense of history, in-the-moment context, timing and tone, and the ability to respond, add to and continue with a thought or purpose at multiple occasions and places over time. Although facial and voice recognition technologies have been largely disparate across multiple channels, customers are willing to adopt these technologies and techniques to help them sift through increasingly large amounts of information, choice and purchasing decisions. This signals an emerging demand for enterprises to deploy customer digital assistants to orchestrate these techniques and to help “glue” continual company and customer conversations.

8)    By 2018, two million employees will be required to wear health and fitness tracking devices as a condition of employment.
The health and fitness of people employed in jobs that can be dangerous or physically demanding will increasingly be tracked by employers via wearable devices. Emergency responders, such as police officers, firefighters and paramedics, will likely comprise the largest group of employees required to monitor their health or fitness with wearables. The primary reason for wearing them is for their own safety. Their heart rates and respiration, and potentially their stress levels, could be remotely monitored and help could be sent immediately if needed. In addition to emergency responders, a portion of employees in other critical roles will be required to wear health and fitness monitors, including professional athletes, political leaders, airline pilots, industrial workers and remote field workers.

9)    By 2020, smart agents will facilitate 40 percent of mobile interactions, and the postapp era will begin to dominate.
Smart agent technologies, in the form of virtual personal assistants (VPAs) and other agents, will monitor user content and behavior in conjunction with cloud-hosted neural networks to build and maintain data models from which the technology will draw inferences about people, content and contexts. Based on these information-gathering and model-building efforts, VPAs can predict users’ needs, build trust and ultimately act autonomously on the user’s behalf.

10) Through 2020, 95 percent of cloud security failures will be the customer’s fault.
Security concerns remain the most common reason for avoiding the use of public cloud services. However, only a small percentage of the security incidents impacting enterprises using the cloud have been due to vulnerabilities that were the provider’s fault. This does not mean that organizations should assume that using a cloud means that whatever they do within that cloud will necessarily be secure. The characteristics of the parts of the cloud stack under customer control can make cloud computing a highly efficient way for naive users to leverage poor practices, which can easily result in widespread security or compliance failures. The growing recognition of the enterprise’s responsibility for the appropriate use of the public cloud is reflected in the growing market for cloud control tools. By 2018, 50 percent of enterprises with more than 1,000 users will use cloud access security broker products to monitor and manage their use of SaaS and other forms of public cloud, reflecting the growing recognition that although clouds are usually secure, the secure use of public clouds requires explicit effort on the part of the cloud customer.


GE’s Internet of Things (IoT): The software platform, Predix, and new business model, GE Digital (Video)

[youtube https://www.youtube.com/watch?v=f4-pFZEv3QQ?rel=0]

On September 14, 2015, GE announced the creation of GE Digital, “a transformative move that brings together all of the digital capabilities from across the company into one organization.” It integrates GE’s Software Center, the expertise of GE’s global IT and commercial software teams, and the industrial security strength of Wurldtech. This “new model” (not a business unit, apparently) is led by Bill Ruh, chief digital officer.

In the video above, Ruh talked briefly about GE Digital, preceded by GE Digital’s CTO Harel Kodesh talking about Predix, GE’s software platform for the “Industrial Internet” or IoT.

See also Internet Of Things (IoT) News Roundup


The World’s #1 Data Scientist Talks about Data Science Skills and Tools

[youtube https://www.youtube.com/watch?v=dpzxW6buh9Y]

Owen Zhang is ranked #1 on Kaggle, the online stadium for data science competitions. An engineer by training, Zhang says that data science is finding “practical solutions to not very well-defined problems,” similar to engineering. He believes that good data scientists, “otherwise known as unicorn data scientists,” have three types of expertise. Since data science deals with practical problems, the first one is being familiar with a specific domain and knowing how to solve a problem in that domain. The second is the ability to distinguish signal from noise, or understanding statistics. The third skill is software engineering.

[youtube https://www.youtube.com/watch?v=7YnVZrabTA8]

Zhang, Chief Product Officer at DataRobot, shares in this talk his experience with open source tools in data science competitions.  Slides here.


A Sane Discussion of the Rising Fears of Artificial Intelligence (AI)

[vimeo 138319099 w=500 h=281]

Rise of Concerns about AI: Reflections and Directions

Discussions about artificial intelligence (AI) have jumped into the public eye over the past year, with several luminaries speaking about the threat of AI to the future of humanity. Over the last several decades, AI—automated perception, learning, reasoning, and decision making—has become commonplace in our lives. We plan trips using GPS systems that rely on the A* algorithm to optimize the route. Our smartphones understand our speech, and Siri, Cortana, and Google Now are getting better at understanding our intentions. Machine vision detects faces as we take pictures with our phones and recognizes the faces of individual people when we post those pictures to Facebook. Internet search engines rely on a fabric of AI subsystems. On any day, AI provides hundreds of millions of people with search results, traffic predictions, and recommendations about books and movies. AI translates among languages in real time and speeds up the operation of our laptops by guessing what we will do next. Several companies are working on cars that can drive themselves—either with partial human oversight or entirely autonomously. Beyond the influences in our daily lives, AI techniques are playing roles in science and medicine. AI is already at work in some hospitals helping physicians understand which patients are at highest risk for complications, and AI algorithms are finding important needles in massive data haystacks, such as identifying rare but devastating side effects of medications.
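The route-planning example is the most concrete item in that list, and A* itself fits in a page. Here is a compact sketch over a small grid (0 = open, 1 = wall) with Manhattan distance as the admissible heuristic; production routing engines work over road networks with far richer cost models.

```python
# A* search on a grid: expand nodes in order of f = g (cost so far)
# plus h (admissible heuristic estimate of remaining cost).
import heapq

def astar(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
    frontier = [(h(start), 0, start, [start])]   # (f, g, position, path)
    seen = set()
    while frontier:
        f, g, pos, path = heapq.heappop(frontier)
        if pos == goal:
            return path
        if pos in seen:
            continue
        seen.add(pos)
        r, c = pos
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(frontier, (g + 1 + h((nr, nc)), g + 1,
                                          (nr, nc), path + [(nr, nc)]))
    return None   # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))   # 7-step path around the wall
```

With an admissible heuristic, A* is guaranteed to return a minimum-cost path while typically expanding far fewer nodes than uninformed search.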

The AI in our lives today provides a small glimpse of more profound contributions to come. For example, the fielding of currently available technologies could save many thousands of lives, including those lost to accidents on our roadways and to errors made in medicine. Over the longer-term, advances in machine intelligence will have deeply beneficial influences on healthcare, education, transportation, commerce, and the overall march of science. Beyond the creation of new applications and services, the pursuit of insights about the computational foundations of intelligence promises to reveal new principles about cognition that can help provide answers to longstanding questions in neurobiology, psychology, and philosophy.

On the research front, we have been making slow, yet steady progress on “wedges” of intelligence, including work in machine learning, speech recognition, language understanding, computer vision, search, optimization, and planning. However, we have made surprisingly little progress to date on building the kinds of general intelligence that experts and the lay public envision when they think about “Artificial Intelligence.” Nonetheless, advances in AI—and the prospect of new AI-based autonomous systems—have stimulated thinking about the potential risks associated with AI.

A number of prominent people, mostly from outside of computer science, have shared their concerns that AI systems could threaten the survival of humanity.1 Some have raised concerns that machines will become superintelligent and thus be difficult to control. Several of these speculations envision an “intelligence chain reaction,” in which an AI system is charged with the task of recursively designing progressively more intelligent versions of itself and this produces an “intelligence explosion.”4 While formal work has not been undertaken to deeply explore this possibility, such a process runs counter to our current understandings of the limitations that computational complexity places on algorithms for learning and reasoning. However, processes of self-design and optimization might still lead to significant jumps in competencies.

Other scenarios can be imagined in which an autonomous computer system is given access to potentially dangerous resources (for example, devices capable of synthesizing billions of biologically active molecules, major portions of world financial markets, large weapons systems, or generalized task markets9). The reliance on any computing systems for control in these areas is fraught with risk, but an autonomous system operating without careful human oversight and failsafe mechanisms could be especially dangerous. Such a system would not need to be particularly intelligent to pose risks.

We believe computer scientists must continue to investigate and address concerns about the possibilities of the loss of control of machine intelligence via any pathway, even if we judge the risks to be very small and far in the future. More importantly, we urge the computer science research community to focus intensively on a second class of near-term challenges for AI. These risks are becoming salient as our society comes to rely on autonomous or semiautonomous computer systems to make high-stakes decisions. In particular, we call out five classes of risk: bugs, cybersecurity, the “Sorcerer’s Apprentice,” shared autonomy, and socioeconomic impacts.

The first set of risks stems from programming errors in AI software. We are all familiar with errors in ordinary software; bugs frequently arise in the development and fielding of software applications and services. Some software errors have been linked to extremely costly outcomes and deaths. The verification of software systems is challenging and critical, and much progress has been made—some relying on AI advances in theorem proving. Many non-AI software systems have been developed and validated to achieve high degrees of quality assurance. For example, the software in autopilot and spacecraft systems is carefully tested and validated. Similar practices must be applied to AI systems. One technical challenge is to guarantee that systems built via machine learning methods behave properly. Another challenge is to ensure good behavior when an AI system encounters unforeseen situations. Our automated vehicles, home robots, and intelligent cloud services must perform well even when they receive surprising or confusing inputs. Achieving such robustness may require self-monitoring architectures in which a meta-level process continually observes the actions of the system, checks that its behavior is consistent with the core intentions of the designer, and intervenes or alerts if problems are identified. Research on real-time verification and monitoring of systems is already exploring such layers of reflection, and these methods could be employed to ensure the safe operation of autonomous systems.3,6
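The self-monitoring architecture described above can be sketched in miniature: a meta-level monitor observes each action a system proposes, checks it against invariants encoding the designer's intentions, and intervenes (blocks and alerts) when one is violated. The invariant and action fields below are invented for illustration.

```python
# Meta-level monitoring sketch: invariants are named predicates over
# proposed actions; vet() passes conforming actions through, and blocks
# and records an alert for any action that violates an invariant.

class Monitor:
    def __init__(self, invariants):
        self.invariants = invariants   # name -> predicate over actions
        self.alerts = []

    def vet(self, action):
        """Return the action if it passes every invariant, else block it."""
        for name, holds in self.invariants.items():
            if not holds(action):
                self.alerts.append((name, action))   # alert the operator
                return None                          # intervene: block
        return action

monitor = Monitor({
    "throttle_in_range": lambda a: -1.0 <= a.get("throttle", 0.0) <= 1.0,
})
assert monitor.vet({"cmd": "accelerate", "throttle": 0.4}) is not None
assert monitor.vet({"cmd": "accelerate", "throttle": 7.0}) is None   # blocked
```

Real-time verification research pursues the same layered pattern with formal guarantees rather than hand-written predicates.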

A second set of risks is cyberattacks: criminals and adversaries are continually attacking our computers with viruses and other forms of malware. AI algorithms are as vulnerable as any other software to cyberattack. As we roll out AI systems, we need to consider the new attack surfaces that these expose. For example, by manipulating training data or preferences and trade-offs encoded in utility models, adversaries could alter the behavior of these systems. We need to consider the implications of cyberattacks on AI systems, especially when AI methods are charged with making high-stakes decisions. U.S. funding agencies and corporations are supporting a wide range of cybersecurity research projects, and artificial intelligence techniques will themselves provide novel methods for detecting and defending against cyberattacks. For example, machine learning can be employed to learn the fingerprints of malware, and new layers of reflection can be employed to detect abnormal internal behaviors, which can reveal cyberattacks. Before we put AI algorithms in control of high-stakes decisions, we must be confident these systems can survive large-scale cyberattacks.
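The "layers of reflection" that detect abnormal internal behavior can be illustrated with a deliberately simple baseline-and-deviation check (a z-score here; real intrusion detectors use far richer models and features, and the measurements below are invented).

```python
# Anomaly-detection sketch: learn a baseline (mean, stdev) of some
# internal measurement from known-clean runs, then flag runtime values
# that deviate by more than a threshold number of standard deviations.
import statistics

def make_detector(baseline, threshold=3.0):
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    def is_abnormal(value):
        return abs(value - mean) > threshold * stdev
    return is_abnormal

# e.g., system calls issued per second during known-clean runs
clean_runs = [102, 98, 105, 99, 101, 100, 97, 103]
suspicious = make_detector(clean_runs)
print(suspicious(104))   # normal variation
print(suspicious(450))   # e.g., malware driving a burst of activity
```

The same pattern, applied to many internal signals at once, is one way a reflective layer can notice that an AI system's behavior no longer matches its history.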

A third set of risks echo the tale of the Sorcerer’s Apprentice. Suppose we tell a self-driving car to “get us to the airport as quickly as possible!” Would the autonomous driving system put the pedal to the metal and drive at 125 mph, putting pedestrians and other drivers at risk? Troubling scenarios of this form have appeared recently in the press. Many of the dystopian scenarios of out-of-control superintelligences are variations on this theme. All of these examples refer to cases where humans have failed to correctly instruct the AI system on how it should behave. This is not a new problem. An important aspect of any AI system that interacts with people is that it must reason about what people intend rather than carrying out commands literally. An AI system must analyze and understand whether the behavior that a human is requesting is likely to be judged as “normal” or “reasonable” by most people. In addition to relying on internal mechanisms to ensure proper behavior, AI systems need to have the capability—and responsibility—of working with people to obtain feedback and guidance. They must know when to stop and “ask for directions”—and always be open for feedback.
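The airport example can be made concrete in a few lines: a hypothetical planner optimizes the stated objective only within background norms the rider never spelled out. The 65 mph limit below is an invented stand-in for that normative knowledge.

```python
# Intent-over-literalism sketch: "as quickly as possible" is treated as
# an objective to maximize *subject to* unstated background norms, not
# as a literal command to floor the accelerator.

SPEED_LIMIT_MPH = 65          # background norm, never part of the request

def plan_speed(requested_mph):
    """Honor the spirit of the request within normal, reasonable bounds."""
    return min(requested_mph, SPEED_LIMIT_MPH)

assert plan_speed(125) == 65   # "as fast as possible" still obeys norms
assert plan_speed(40) == 40    # modest requests pass through unchanged
```

The hard research problem is that real norms are contextual and implicit, which is why the authors argue systems must also know when to stop and ask for directions.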

Some of the most exciting opportunities for deploying AI bring together the complementary talents of people and computers.5 AI-enabled devices are allowing the blind to see, the deaf to hear, and the disabled and elderly to walk, run, and even dance. AI methods are also being developed to augment human cognition. As an example, prototypes have been aimed at predicting what people will forget and helping them to remember and plan. Moving to the realm of scientific discovery, people working together with the Foldit online game8 were able to discover the structure of the virus that causes AIDS in only three weeks, a feat that neither people nor computers working alone could match. Other studies have shown how the massive space of galaxies can be explored hand-in-hand by people and machines, where the tireless AI astronomer understands when it needs to reach out and tap the expertise of human astronomers.7 There are many opportunities ahead for developing real-time systems that involve a rich interleaving of problem solving by people and machines.

However, building these collaborative systems raises a fourth set of risks stemming from challenges with fluidity of engagement and clarity about states and goals. Creating real-time systems where control needs to shift rapidly between people and AI systems is difficult. For example, airline accidents have been linked to misunderstandings arising when pilots took over from autopilots.a The problem is that unless the human operator has been paying very close attention, he or she will lack a detailed understanding of the current situation and can make poor decisions. Here again, AI methods can help solve these problems by anticipating when human control will be required and providing people with the critical information that they need.
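A minimal sketch of the anticipation idea (hypothetical; the function and field names are illustrative, not from any real autopilot): the system compares the time until manual control will be required against the time a human needs to regain situational awareness, and if the window is too short, it alerts early with a briefing of the critical state.

```python
# Hypothetical sketch: anticipate a human-control handoff and brief
# the operator before control shifts, so they are not surprised.

def plan_handoff(seconds_until_manual, briefing_seconds_needed, context):
    """Return (alert_now, briefing_message) for an impending handoff."""
    if seconds_until_manual <= briefing_seconds_needed:
        # Summarize the critical state the operator will need.
        briefing = "; ".join(f"{k}={v}" for k, v in sorted(context.items()))
        return True, f"Prepare to take control: {briefing}"
    return False, ""

alert, msg = plan_handoff(
    seconds_until_manual=20,
    briefing_seconds_needed=30,
    context={"altitude_ft": 31000, "engine_3": "thrust loss", "autopilot": "disengaging"},
)
```

The design choice mirrors the text: rather than handing over control abruptly, the AI side predicts when human control will be required and pushes the critical information ahead of the transfer.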

A fifth set of risks concerns the broad influences of increasingly competent automation on socioeconomics and the distribution of wealth.2 Several lines of evidence suggest AI-based automation is at least partially responsible for the growing gap between per capita GDP and median wages. We need to understand the influences of AI on the distribution of jobs and on the economy more broadly. These questions move beyond computer science into the realm of economic policies and programs that might ensure that the benefits of AI-based productivity increases are broadly shared.

Achieving the tremendous potential benefits of AI for people and society will require ongoing and vigilant attention to the near- and longer-term challenges of fielding robust and safe computing systems. Each of the first four challenges listed in this Viewpoint (software quality, cyberattacks, “Sorcerer’s Apprentice,” and shared autonomy) is being addressed by current research, but even greater efforts are needed. We urge our research colleagues and industry and government funding agencies to devote even more attention to software quality, cybersecurity, and human-computer collaboration on tasks as we increasingly rely on AI in safety-critical functions.

At the same time, we believe scholarly work is needed on the longer-term concerns about AI. Working with colleagues in economics, political science, and other disciplines, we must address the potential of automation to disrupt the economic sphere. Deeper study is also needed to understand the potential of superintelligence or other pathways to result in even temporary losses of control of AI systems. If we find there is significant risk, then we must work to develop and adopt safety practices that neutralize or minimize that risk. We should study and address these concerns, and the broader constellation of risks that might come to the fore in the short- and long-term, via focused research, meetings, and special efforts such as the Presidential Panel on Long-Term AI Futuresb organized by the AAAI in 2008–2009 and the One Hundred Year Study on Artificial Intelligence,10,c which is planning a century of ongoing studies about advances in AI and its influences on people and society.

The computer science community must take a leadership role in exploring and addressing concerns about machine intelligence. We must work to ensure that AI systems responsible for high-stakes decisions will behave safely and properly, and we must also examine and respond to concerns about potential transformational influences of AI. Beyond scholarly studies, computer scientists need to maintain an open, two-way channel for communicating with the public about opportunities, concerns, remedies, and realities of AI.

References

1. Bostrom, N. Superintelligence: Paths, Dangers, Strategies. Oxford University Press, 2014.

2. Brynjolfsson, E. and McAfee, A. The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. W.W. Norton & Company, New York, 2014.

3. Chen, F. and Rosu, G. Toward monitoring-oriented programming: A paradigm combining specification and implementation. Electr. Notes Theor. Comput. Sci. 89, 2 (2003), 108–127.

4. Good, I.J. Speculations concerning the first ultraintelligent machine. In Advances in Computers, Vol. 6. F.L. Alt and M. Rubinoff, Eds., Academic Press, 1965, 31–88.

5. Horvitz, E. Principles of mixed-initiative user interfaces. In Proceedings of CHI ’99, ACM SIGCHI Conference on Human Factors in Computing Systems (Pittsburgh, PA, May 1999); http://bit.ly/1DN039y.

6. Huang, J. et al. ROSRV: Runtime verification for robots. Runtime Verification, (2014), 247–254.

7. Kamar, E., Hacker, S., and Horvitz, E. Combining human and machine intelligence in large-scale crowdsourcing. AAMAS 2012 (Valencia, Spain, June 2012); http://bit.ly/1h6gfbU.

8. Khatib, F. et al. Crystal structure of a monomeric retroviral protease solved by protein folding game players. Nature Structural and Molecular Biology 18 (2011), 1175–1177.

9. Shahaf, D. and Horvitz, E. Generalized task markets for human and machine computation. AAAI 2010, (Atlanta, GA, July 2010), 986–993; http://bit.ly/1gDIuho.

10. You, J. A 100-year study of artificial intelligence? Science (Jan. 9, 2015); http://bit.ly/1w664U5.

Authors

Thomas G. Dietterich ([email protected]) is a Distinguished Professor in the School of Electrical Engineering and Computer Science at Oregon State University in Corvallis, OR, and president of the Association for the Advancement of Artificial Intelligence (AAAI).

Eric J. Horvitz ([email protected]) is Distinguished Scientist and Director of the Microsoft Research lab in Redmond, Washington. He is the former president of AAAI and continues to serve on AAAI’s Strategic Planning Board and Committee on Ethics in AI.

Footnotes

a. See http://en.wikipedia.org/wiki/China_Airlines_Flight_006.

b. See http://www.aaai.org/Organization/presidential-panel.php.

c. See https://ai100.stanford.edu.

Posted in AI | Tagged | Leave a comment

EMC’s David Goulden on the Reasons for not Breaking up with VMware (Video)

http://player.theplatform.com/p/PhfuRC/vNP4WUiQeJFa/embed/select/FGx9a5if6xCz?autoPlay=true&t=259

David Goulden, EMC: “Our corporate clients are looking for fewer, more strategic technology partners, not more small vendors.”

re/code: Goulden is the CEO of EMC’s information infrastructure business unit, the biggest portion of the federation that includes its main business of selling equipment used to store information in corporate data centers, and which accounted for $18 billion in revenue last year. Previously, he was EMC’s COO and is considered a possible successor to current EMC CEO Joe Tucci, who has been working without a contract since February and is expected to retire by the end of the year.

Does he want to be CEO? “The short answer is that the timing and the selection of a CEO, that’s up to the board of directors,” Goulden said.

“As it relates to me,” he added, “I love my job, it’s a great job, the best one I’ve actually had and I want to help the federation in any way I can to make sure it stays in the winner’s column. Beyond that you’ll have to get back to the board on how they’ll manage the process.”

Posted in Misc | Leave a comment