
Source: BLS
HT: CB Insights

[youtube https://www.youtube.com/watch?v=zBCOMm_ytwM]
Machine Learning Research Institute:
As previously announced, we recently ran a 22-day Colloquium Series on Robust and Beneficial AI (CSRBAI) at the MIRI office, co-hosted with the Oxford Future of Humanity Institute. The colloquium was aimed at bringing together safety-conscious AI scientists from academia and industry to share their recent work. The event served that purpose well, initiating some new collaborations and a number of new conversations between researchers who hadn’t interacted before or had only talked remotely.
Over 50 people attended from 25 different institutions, with an average of 15 people present on any given talk or workshop day. In all, there were 17 talks and four weekend workshops on the topics of transparency, robustness and error-tolerance, preference specification, and agent models and multi-agent dilemmas. The full schedule and talk slides are available on the event page.
Stuart Russell, professor of computer science at UC Berkeley and co-author of Artificial Intelligence: A Modern Approach, gave the opening keynote. Russell spoke on “AI: The Story So Far” (slides). Abstract:
I will discuss the need for a fundamental reorientation of the field of AI towards provably beneficial systems. This need has been disputed by some, and I will consider their arguments. I will also discuss the technical challenges involved and some promising initial results.
Russell discusses his recent work on cooperative inverse reinforcement learning 36 minutes in. This paper and Dylan Hadfield-Menell’s related talk on corrigibility (slides) inspired lots of interest and discussion at CSRBAI.

M2:
1. The worldwide Internet of Things market is predicted to grow to $1.7 trillion by 2020, marking a compound annual growth rate of 16.9%. – IDC Worldwide Internet of Things Forecast, 2015 – 2020.
2. An estimated 25 billion connected “things” will be in use by 2020. – Gartner Newsroom
3. Wearable technology vendors shipped 78.1 million wearable devices in 2015, an increase of 171.6% from 2014. Shipment predictions for this year are 111 million, increasing to 215 million in 2019. – IDC Worldwide Quarterly Wearable Device Tracker
4. By 2020, each person is likely to have an average of 5.1 connected devices. – Frost and Sullivan Power Management in IoT and Connected Devices
5. In a 2016 PwC survey of 1,000 U.S. consumers, 45% say they now own a fitness band, 27% a smartwatch, and 12% smart clothing. 57% say they are excited about the future of wearable technology as part of everyday life. 80% say wearable devices make them more efficient at home, 78% more efficient at work. – PwC The Wearable Life 2.0: Connected Living in a Wearable World
6. By 2020, more than half of major new business processes and systems will incorporate some element, large or small, of the Internet of Things. – Gartner Predicts 2016: Unexpected Implications Arising from the Internet of Things
7. 65% of approximately 1,000 global business executives surveyed agree that organizations that leverage the Internet of Things will have a significant advantage; 19%, however, say they have never heard of the Internet of Things. – Internet of Things Institute 2016 IoT Trends Survey
8. 80% of retailers worldwide say they agree that the Internet of Things will drastically change the way companies do business in the next three years. – Retail Systems Research: The Internet of Things in Retail: Great Expectations
9. By 2018, six billion things will have the ability to request support. – Gartner Predicts 2016: CRM Customer Service and Support
10. By 2020, 47% of devices will have the necessary intelligence to request support. – Gartner Predicts 2016: CRM Customer Service and Support
11. By 2025, the Internet of Things could generate more than $11 trillion a year in economic value through improvements in energy efficiency, public transit, operations management, smart customer relationship management and more. – McKinsey Global Institute Report: The Internet of Things: Mapping the Value Behind the Hype
12. Barcelona estimates that IoT systems have helped the city save $58 million a year from connected water management and $37 million a year via smart street lighting alone. – Harvard University Report
13. General Electric estimates that the “Industrial Internet” market (connected industrial machinery) will add $10 to $15 trillion to the global GDP within the next 20 years. – GE Reports
14. General Electric believes that using connected industrial machinery to make oil and gas exploration and development just 1% more efficient would result in a savings of $90 billion. – GE Reports
15. The connected health market is predicted to grow to $117 billion by 2020. Remote patient monitoring is predicted to be a $46 billion market by 2017. – ACT Report
16. Connected homes will be a major part of the Internet of Things. By 2019, companies will ship 1.9 billion connected home devices, marking an estimated $490 billion in revenue (Business Insider Intelligence). By 2020, even the connected kitchen will contribute at least 15 percent savings in the food and beverage industry, leveraging data analytics. – Gartner Research Predicts 2015: The Internet of Things

If your organization is moving toward digital too slowly, your people may be looking to leave. That’s one of the findings highlighted here and explored more fully in Aligning the organization for its digital future.


Source: Ericsson
The Internet of Things (IoT) is expected to surpass mobile phones as the largest category of connected devices in 2018.
Between 2015 and 2021, IoT is expected to increase at a compound annual growth rate (CAGR) of 23 percent, making up close to 16 billion of the total forecast 28 billion connected devices by 2021.
HT: LTP

The global collaborative robots market is expected to grow at a CAGR of 60.04% between 2016 and 2022, from $110 million in 2015 to $3.3 billion by 2022.
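As a quick sanity check on these figures (a rough sketch only; the report's exact base year and rounding conventions are not stated here), the endpoints imply a growth rate in the same ballpark as the quoted CAGR, and projecting the quoted rate forward lands close to the quoted 2022 total:

```python
# Sanity-checking the quoted collaborative-robots figures.
# Assumptions: $110M base in 2015, 7 growth years to 2022.
start_2015 = 110e6        # $110 million in 2015
target_2022 = 3.3e9       # $3.3 billion by 2022
years = 7                 # 2015 -> 2022

# CAGR implied by the two endpoints (~62.6%)
implied_cagr = (target_2022 / start_2015) ** (1 / years) - 1

# Forward projection using the report's quoted 60.04% CAGR (~$2.96B,
# close to but not exactly the quoted $3.3B; the gap is consistent
# with rounding or a different base-year convention in the report)
projected_2022 = start_2015 * (1 + 0.6004) ** years
```

Both numbers agree with the report to within the precision one would expect from rounded market-research figures.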
The collaborative robots market is application driven; the automotive sector accounted for the largest share in 2015.
The global collaborative robots market is driven by application in industries such as automotive, metal and machining, furniture and equipment, food and beverages, plastic and polymers, and others.
Collaborative robots used in the automotive sector accounted for the largest share of the global collaborative robots market in 2015; this market is expected to grow at a significant rate between 2016 and 2022.
In developed regions such as North America and Europe, growth in the collaborative robots market in the automotive sector is expected to be driven by the rise of safety-rated manufacturing and by growing demand for a level of precision that was previously unattainable due to common human errors.
Collaborative robots are also used in the furniture and equipment industry, a segment expected to witness rapid growth during the forecast period. Acceptance and installation rates in this industry are increasing and are expected to continue growing rapidly, with particularly significant growth expected in the Rest of World (RoW) region for a new fleet of applications.
Asia-Pacific is expected to hold a large share of the collaborative robots market by 2022.
Europe was the largest market in 2015, followed by Asia-Pacific and North America. Regulations have driven the market for collaborative robots, which reduce the need for safety fences between humans and robots and mitigate the effects of collisions (accidents).
Europe was an early adopter, which resulted in a large market for collaborative robots in 2015.
The collaborative robots market in Asia-Pacific is expected to surpass that of Europe by 2018 and hold a large market share through 2022.
The major companies in the global collaborative robots market are:
ABB Ltd. (Switzerland)
KUKA AG (Germany)
FANUC Corporation (Japan)
Robert Bosch GmbH (Germany)
Universal Robots (Denmark)
Rethink Robotics (U.S.)
Energid Technologies (U.S.)
MRK-Systeme GmbH (Germany)
“Everything is about time. It’s your lead time, your speed to market, and that’s the leveling wind in our industry. Short run manufacturing means that you’re going to have a fairly high touch model. We were really looking to increase productivity and improve our delivery in our service and our quality, mainly from a standpoint of error-proofing, because there’s a number of things that have to be done 100% correctly,” explained Ron Kirscht, President of Donnelly Custom Manufacturing. “But, if that’s what your job is and you’re doing it as a person, it becomes a little mundane, and that’s when people can become inattentive.”
Baxter collaborative robots are on the job at Donnelly’s plant in Alexandria, MN, taking on those time consuming, repetitive tasks where there’s no room for errors. This includes removing parts from a conveyor belt and stacking each one on customized stacking devices. By automating these jobs with robotics, Donnelly employees are assigned to more valuable work.
Kirscht added, “Baxter has some qualities that he brings to Donnelly that creates excitement, innovation and enthusiasm, allowing people to come up with ideas in ways for utilizing Baxter. I think that the Baxter robot is a game changer in modern manufacturing, because it really creates an opportunity for people on the manufacturing floor to innovate. It spawns creativity.”
[youtube https://www.youtube.com/watch?v=ant9adbTK5M]
[youtube https://www.youtube.com/watch?v=3-MZ288onbs]
Moderator:
George Westerman, MIT Initiative on the Digital Economy (@gwesterman)
Speakers:
Gerald Chertavian, Year Up (@yearup)
Prof. Tom Davenport, Fellow at MIT Initiative on the Digital Economy (@tdav)
Karen Kocher, Cigna (@kkocher)
Steve Phillips, Avnet, Inc. (@Steven_phillips)
Are AI and robots eating jobs? Yes, some jobs more than others. But even as automation replaces some workers, it will enhance the roles of others. Companies will need people who can work closely with technology, as well as those who can do what computers cannot. How can CIOs develop a workforce that will thrive in the digital age? Which skills will be valued and which ones will be replaced? Does college still matter? Will on-demand workers replace full-time employees? Join our eclectic panel of experts in AI and jobs, human resources, alternative skill development, and digital leadership as they describe what the coming changes in skills, jobs, and careers mean for CIOs and their companies.
Forrester forecasts that cognitive technologies such as robots, artificial intelligence (AI), machine learning, and automation will replace 7% of US jobs by 2025.

Truck drivers dominate the map for a few reasons.
The rise and fall of secretaries: Through much of the ’80s, as the U.S. economy shifted away from factories that make goods and toward offices that provide services, secretary became the most common job in more and more states. But a second shift — the rise of the personal computer — reversed this trend, as machines did more and more secretarial work.

Bob Rogers, Chief Data Scientist, Intel
“Business leaders want ‘the answer,’” says Bob Rogers, Chief Data Scientist for Big Data Solutions at Intel. But data scientists must understand what “the answer” means in the specific business context and communicate the expected impact in the language of the business executives. They need to explain the results of their analysis in “terms of the risk to the business” and “translate uncertainty into outcomes,” says Rogers. “If you show error bars on a number in a business presentation, you are probably going down the wrong path.”
When the data scientist emerged as a new business role about a decade ago, the emphasis was on how it combined two disciplines and skill sets: computer science and statistics. More recently, the discussion of this evolving role has followed the lines of Rogers’ observation, describing it as one combining technical and business expertise and emphasizing the importance of communication skills. Drew Conway’s 2010 definition of a data scientist as a Venn diagram of computer science, statistics and domain expertise has now been updated to include communications as a stand-alone set of required skills.
“Statisticians have missed the initial boat of data science,” says Rogers. “They tend to be very specific about the way they discuss data, ways that are not necessarily amenable to a broader discussion with a business audience.”
What we have here is a re-definition of what was previously perceived as a highly technical job into a more generalized business role. The rise of the Sexiest Job of the 21st Century has spawned numerous undergraduate and graduate programs focused on imparting technical skills and knowledge, aiming to address the widely discussed shortage of experts in managing and mining the avalanche of big data. We now see business schools (e.g., Wharton) establishing a major in analytics, combining data science training with general business education. The next step, I would argue, will be the complete integration of the two types of training: business education as data science education.
Rogers’ varied work experience over the last twenty-five years is a prime example of the amalgam of skills and expertise that will be the hallmark of successful business leaders in the years to come. It’s a unique combination of scientific curiosity and acumen, facility with computer programming and data manipulation, entrepreneurial drive and experimental inclination. All of these are wrapped in a deep understanding, derived from direct experience, of the business context—the requirements, challenges, human motivations and attitudes that drive business success.
Like some of the leading data scientists of recent vintage, Rogers started his working life after earning a PhD in Physics. But in 1991, when he got his degree from Harvard University, there was not much data to support his thesis work in astrophysics, so he and others like him “were doing a lot more simulations.” Today, “there is a lot of data associated with cosmology,” says Rogers, but then and now, knowing how “to model the data” has been a crucial requirement in this and other scientific fields. A new training ground today for budding data scientists, according to Rogers, is computational neuroscience, where the “amount and shape of data” coming from functional MRI requires “advanced modeling thinking.”
While doing a post-doc at a research institute, his own experience with computer modeling and simulations led Rogers to co-author a book on using artificial neural networks for time series forecasting. All of a sudden he was getting phone calls from people asking him about forecasting the stock market, a subject he didn’t know much about.
Serendipity plays a major role in many illustrious careers and Rogers’ was no exception. The husband of a friend of his wife’s owned a trading firm in Chicago, and with his help, Rogers started a company rather than pursue an academic career, just like many latter-day data scientists. “I was 28 at the time,” he explained when I asked him why he made such a risky career switch.
In another similarity to today’s data scientists, Rogers did not limit his involvement with the startup to developing forecasting models for the Chicago futures market, but also got down and dirty building a research platform for collecting data on transactions and the back-office systems for executing trades, accounting, and other functions.
This went on for about a dozen years, in the last four of which Rogers switched from R&D work to selling the company’s services when it opened up to new—international—investors.
“What was really profound for me as a data scientist,” says Rogers, “was actually the marketing side—I started to appreciate that there was a huge difference between having a technology that performed well and having a product that was tailored to fit the specific business needs of the customer. International investors had very specific needs around how the product was configured.”
Recalling his own experience leads Rogers to yet another observation about how understanding the business context and being able to communicate with business leaders are such important components of the data scientist’s job today:
“What I’ve seen change between the pre-data science period and the current era is that analytics in the enterprise used to be very focused on a business leader asking a business analyst for a report on X—that was the process. Now, it’s much more of a conversation. Storytelling skills, sensitivity to what the business needs are—successful data scientists tend to have this conversation.” In addition, there is more sensitivity to the uncertainty associated with data—“awareness that a number is not just a number”—even data that comes from a structured database should be handled with care.
By 2006, it was time to move on and “get into something that was more personally satisfying to me,” says Rogers, as “our computational and technological advantages have started to decline.” Healthcare turned out to be the more personally satisfying domain and he became the global product manager for the Humphrey Visual Field Analyser, widely used in Glaucoma care.
In yet another application of adding a time dimension to data, Rogers worked with a research team to move beyond a single, one-time measurement of the patient’s peripheral vision and compute the rate of change and the progression of blindness over time. “It became an important tool for tracking these patients and their response to therapy,” he says. And in yet another immersion in practical, hands-on computing, the solution involved adding networking software (licensed from Apple) to multiple devices in a clinic to facilitate the collection of data from past measurements.
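The progression-rate idea described above can be sketched as a simple least-squares slope over repeated measurements. This is only an illustration of the general technique, not the Humphrey device's actual algorithm, and the measurement values below are invented for the example:

```python
# Hypothetical sketch: estimating a visual-field progression rate as the
# least-squares slope of a summary measurement (in dB) against time (years).
def progression_rate(times, values):
    """Return the ordinary least-squares slope of values vs. times."""
    n = len(times)
    mean_t = sum(times) / n
    mean_v = sum(values) / n
    num = sum((t - mean_t) * (v - mean_v) for t, v in zip(times, values))
    den = sum((t - mean_t) ** 2 for t in times)
    return num / den  # dB per year; more negative = faster progression

# Illustrative follow-up visits over two years (values are made up)
years = [0.0, 0.5, 1.0, 1.5, 2.0]
measurements = [-2.0, -2.6, -3.1, -3.4, -4.1]
rate = progression_rate(years, measurements)  # about -1.0 dB/year
```

The slope turns a series of one-time snapshots into a trend, which is the shift Rogers describes: tracking response to therapy rather than a single reading.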
Better access to data, Rogers understood from that experience, was crucial for improving healthcare. In 2009, when the US federal government started to give incentives for healthcare providers to use Electronic Medical Records (EMR) systems, he saw how the original paper silos were simply replaced with electronic silos, with each EMR system becoming a stand-alone database. Not only was there no physical connection, there was no interoperability “from a semantic point of view—descriptions in one system could not be directly compared with those in another.”
The solution was a cloud-based system that pulled data from a variety of sources and machine learning software that constructed a table of all the codes and concepts in the clinical data and mapped them to each other. “The more data we got, the better we got at mapping these concepts and building a robust set of associations,” says Rogers. “That allowed us to build a clinically intelligent search engine.”
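To make the mapping idea concrete, here is a deliberately naive sketch of aligning codes across two EMR systems by canonicalizing their text descriptions. This is not Apixio's actual method; the codes, descriptions, and the `normalize` helper are all invented for illustration, and a real system would need far more robust text analysis:

```python
# Hypothetical sketch: mapping clinical codes across two EMR systems by
# reducing each description to a crude canonical form (lowercase, strip
# punctuation, sort tokens) and matching on that form.
def normalize(desc):
    cleaned = desc.lower().replace(",", " ").replace("-", " ")
    return " ".join(sorted(cleaned.split()))

# Invented example vocabularies for two systems
system_a = {"I50.9": "Heart failure, unspecified",
            "E11.9": "Type 2 diabetes mellitus"}
system_b = {"428.0": "Unspecified heart failure",
            "250.00": "Diabetes mellitus type 2"}

# Index system B by canonical description, then map A's codes onto it
index_b = {normalize(d): code for code, d in system_b.items()}
mapping = {a_code: index_b.get(normalize(d)) for a_code, d in system_a.items()}
```

Even this toy version shows why more data helps: each additional description pair either confirms an association or exposes a mismatch the canonicalization missed.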
You may think that this is “big data” in a nutshell—more data equals better learning or as some have called it, “the unreasonable effectiveness of data.” But you may want to reconsider admiring data quantity for quantity’s sake, given what Rogers and his colleagues found out while mining electronic medical records.
“63% of the key information that the doctor needs to know about you is not in your coded data at all,” says Rogers. “And 30% of the time, if you have a heart failure in your code, it’s not heart failure” and could have been a mistake or a related entry (e.g., a test for heart failure) in the billing system. As a result, most of the learning in Rogers’ machine learning system was dedicated to analysis of the text to “understand what information about the patient is actually correct.” An important big data lesson or what one may call the unreasonable effectiveness of data quality.
That system became the foundation of another startup, Apixio, which has recently raised $19.3 million in Series D venture capital funding. After serving there as Chief Scientist for 5 years, Rogers moved on again, in January 2015, this time from the world of startups to the corporate world and his current role at Intel.
As Chief Data Scientist he works internally on product road maps, providing input related to his expertise and the trends he sees. Externally, he works with the customers of Intel’s customers, helping them in “conceptualizing their entire analytics pipeline.” Providing free advice to consumers of analytics “helps keep Intel at the center of the computational world” and helps keep Rogers abreast of the latest data mining trends and developments. He learns about ongoing concerns regarding whether a “new architecture” is required to accommodate the most recent data science tools and observes the rise of new challenges such as “monitoring many different real-time data streams.” And he reports that recently there has been a lot of interest in deep learning. Here, too, a key concern is integration: is it possible to build these new capabilities within the existing big data infrastructure?
Rogers’ role as a trusted advisor also includes working with partners. For example, the Collaborative Cancer Cloud, an Intel-developed precision medicine analytics platform. Currently, it is used by the Knight Cancer Institute at Oregon Health & Science University, Dana-Farber Cancer Institute and Ontario Institute for Cancer Research, to securely share patient genomic, imaging and clinical data to accelerate their research into potentially lifesaving discoveries.
Extrapolating from his current and previous work, Rogers sees the future of AI as “the development of machine learning systems that are good at figuring out the context.” A lot of the recent AI news has been about what he perceives as immature work—“image captioning is a sort of parlor trick,” says Rogers. “We will start to see an emerging AI capability” when we have machine or deep learning capable of identifying the context of the image.
Unlike others who see the machines as potentially replacing humans, Rogers envisions human-machine collaboration: “AI capabilities are most interesting when they are used to amplify human capabilities. There are things that we are good at cognitively but we cannot do at scale. We [should use machine learning] to surface the information from a large volume of data so we can do the next level of inference ourselves,” he says.
Understanding the context. Accepting and managing uncertainty. Linking pieces of data to uncover new insights. Like good data scientists, future business leaders will not look for “the answer.” With the right attitude, experience, and training, they will actively search for data to refute their assumptions, question their most beloved initiatives, and challenge their established career trajectories.
Originally published on Forbes.com

[August 1, 2016] was something of a red-letter day for the tech industry. When the stock market closed, the five most valuable companies on the planet were, for the first time, technology concerns. And they all hailed from the West Coast of the US, whether the San Francisco Bay Area (Apple, Alphabet and Facebook) or in and around Seattle (Microsoft and Amazon).
In subsequent days, ExxonMobil — which held the title of world’s most valuable company until it was overhauled by Apple — edged back above Facebook and Amazon. But it may only be a temporary reprieve. A seemingly inexorable shift in business and stock market momentum is under way, as today’s technology leaders assume a more central place in personal and business life.
Ten years ago, at the height of the PC era, Microsoft was the only tech company in the top 20. Now, though, the big five control a much wider array of digital platforms around which life and work revolve — from smartphones and cloud computing data centres to mobile messaging apps.
They are also racing each other to build the next platforms, from virtual reality headsets to driverless cars and digital assistants powered by artificial intelligence.
Only China, thanks to a domestic market that is hard for outsiders to penetrate, can lay claim to tech companies with the scale and ambition to compete.
That has been underlined by this week’s detente in the ride-hailing wars, which has seen Uber’s global expansion halted and a new Chinese digital champion crowned, in the shape of Didi Chuxing.
Today, the key question is: which markets are next in the big five’s sights, as they cast around more widely for growth?

The market value rankings over the last couple of decades offer a glimpse at the world’s changing economy. At the peak of the dot-com bubble in March 2000, tech companies including Microsoft, Intel and Cisco were among the biggest companies in the world — and they are still giants. Ten years ago, big banks, Chinese industrial and financial companies and global commodities firms crowded the market cap big leagues. Five years ago, Apple became the biggest company in the world by stock value, a position it has occupied with some interruptions since then.
In addition to Apple, the growing might of Google, Amazon and Facebook has lifted those companies to new heights, and Microsoft’s market value has rebounded under a new CEO. Non-tech titans like Exxon and GE have slipped a bit. Stock investors are now willing to pay more for a dollar of future earnings from the tech superpowers than from most other corporations. Of course, technology’s Fab Five may not keep their lofty perch. The streak could end after one day. Good times never last. But for the moment, technology is on top of the world.
Every industry uses computers, software, and internet services. If that’s what “technology” means, then every company is in the technology business—a useless distinction. But it’s more likely that “technology” has become so overused, and so carelessly associated with Silicon Valley-style computer software and hardware startups, that the term has lost all meaning. Perhaps finance has exacerbated the problem by insisting on the generic industrial term “technology” as a synonym for computing.
There are companies that are firmly planted in the computing sector. Microsoft and Apple are two. Intel is another—it makes computer parts for other computer makers. But it’s also time to recognize that some companies—Alphabet, Amazon, and Facebook among them—aren’t primarily in the computing business anyway. And that’s no slight, either. The most interesting thing about companies like Alphabet, Amazon, and Facebook is that they are not (computing) technology companies. Instead, they are using computing infrastructure to build new—and enormous—businesses in other sectors. If anything, that’s a fair take on what “technology” might mean as a generic term: manipulating one set of basic materials to realize goals that exceed those materials.

US consumer adoption of wearable devices will reach 29% in 2021, up from 18% last year, according to a new Forrester Data forecast. Today the vast majority of consumers who own and use wearables have a health or wellness device (17%), while monitoring, retail, notifications, and travel are expected to grow…