Collaborative Robotics Market

[Image: collaborative robotics market chart]

MarketsAndMarkets:

The global collaborative robots market is expected to grow from $110 million in 2015 to $3.3 billion by 2022, at a CAGR of 60.04% between 2016 and 2022.
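As a sanity check, the implied growth rate can be recomputed from the two endpoints with the standard CAGR formula. The figure below uses the 2015 base, so it comes out slightly above the reported 60.04%, which is defined on the 2016–2022 window:

```python
def cagr(start, end, years):
    """Compound annual growth rate implied by start and end values."""
    return (end / start) ** (1 / years) - 1

# $110M in 2015 to $3.3B in 2022 (7 years)
print(f"implied CAGR: {cagr(110, 3300, 7):.1%}")  # implied CAGR: 62.6%
```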

The collaborative robots market is application driven; the automotive sector accounted for the largest share in 2015.

The global collaborative robots market is driven by application in industries such as automotive, metal and machining, furniture and equipment, food and beverages, plastic and polymers, and others.

Collaborative robots used in the automotive sector accounted for the largest share of the global collaborative robots market in 2015; this market is expected to grow at a significant rate between 2016 and 2022.

In developed regions such as North America and Europe, growth in the automotive collaborative robots market is expected to be driven by the rise of safety-rated manufacturing and by growing demand for levels of precision that manual work, prone to common human errors, could not achieve.

Collaborative robots are also used in the furniture and equipment industry, a market expected to witness rapid growth during the forecast period. The acceptance and installation rate of collaborative robots in this industry is increasing and is expected to keep growing rapidly, with particularly significant growth expected in the Rest of World (RoW) region for a new fleet of applications.

Asia-Pacific is expected to hold a large share of the collaborative robots market by 2022.

Europe was the largest market in 2015, followed by Asia-Pacific and North America. Regulations have driven the market for collaborative robots, which reduce the need for safety fences between humans and robots and mitigate the effects of collisions (accidents).

Europe was an early adopter, which resulted in a large market for collaborative robots in 2015.

The collaborative robots market in Asia-Pacific is expected to surpass that of Europe by 2018 and hold a large market share through 2022.

The major companies in the global collaborative robots market are:

ABB Ltd. (Switzerland)

KUKA AG (Germany)

FANUC Corporation (Japan)

Robert Bosch GmbH (Germany)

Universal Robots (Denmark)

Rethink Robotics (U.S.)

Energid Technologies (U.S.)

MRK-Systeme GmbH (Germany)

Rethink Robotics:

“Everything is about time. Your lead time, your speed to market: that’s the leveling wind in our industry. Short run manufacturing means that you’re going to have a fairly high touch model. We were really looking to increase productivity and improve our delivery in our service and our quality, mainly from a standpoint of error-proofing, because there’s a number of things that have to be done 100% correctly,” explained Ron Kirscht, President of Donnelly Custom Manufacturing. “But, if that’s what your job is and you’re doing it as a person, it becomes a little mundane, and that’s when people can become inattentive.”

Baxter collaborative robots are on the job at Donnelly’s plant in Alexandria, MN, taking on those time-consuming, repetitive tasks where there’s no room for error. This includes removing parts from a conveyor belt and stacking each one on customized stacking devices. By automating these jobs with robotics, Donnelly can assign employees to more valuable work.

Kirscht added, “Baxter has some qualities that he brings to Donnelly that creates excitement, innovation and enthusiasm, allowing people to come up with ideas in ways for utilizing Baxter. I think that the Baxter robot is a game changer in modern manufacturing, because it really creates an opportunity for people on the manufacturing floor to innovate. It spawns creativity.”
[youtube https://www.youtube.com/watch?v=ant9adbTK5M]


Skills and Jobs for the Digital Future

[youtube https://www.youtube.com/watch?v=3-MZ288onbs]

Moderator:
George Westerman, MIT Initiative on the Digital Economy (@gwesterman)

Speakers:
Gerald Chertavian, Year Up (@yearup)
Prof. Tom Davenport, Fellow at MIT Initiative on the Digital Economy (@tdav)
Karen Kocher, Cigna (@kkocher)
Steve Phillips, Avnet, Inc. (@Steven_phillips)

Are AI and robots eating jobs? Yes–some jobs more than others. But even as automation replaces some workers, it will enhance the roles of others. Companies will need people who can work closely with technology, as well as those who can do what computers cannot. How can CIOs develop a workforce that will thrive in the digital age? Which skills will be valued and which ones will be replaced? Does college still matter? Will on-demand workers replace full-time employees? Join our eclectic panel-–experts in AI and jobs, Human Resources, alternative skill development, and digital leadership–as they describe what the coming changes in skills, jobs, and careers mean for CIOs and their companies.

Forrester forecasts that cognitive technologies such as robots, artificial intelligence (AI), machine learning, and automation will result in a net loss of 7% of US jobs by 2025.

 

  • 16% of US jobs will be replaced, while the equivalent of 9% will be created — a net loss of 7% of US jobs by 2025.
  • Office and administrative support staff will be the most rapidly disrupted.
  • The cognitive era will create new jobs, such as robot monitoring professionals, data scientists, automation specialists, and content curators: Forrester forecasts 8.9 million new jobs in the US by 2025.
  • 93% of automation technologists feel unprepared or only partially prepared to tackle the challenges associated with smart machine technologies.
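The arithmetic behind the first bullet is simple enough to spell out (percentages are the Forrester figures above):

```python
replaced = 16  # % of US jobs replaced by 2025
created = 9    # equivalent % of new jobs created
print(f"net change: -{replaced - created}%")  # net change: -7%
```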

[Image: most common job in each US state]

NPR Planet Money:

Truck drivers dominate the map for a few reasons.

  • Driving a truck has been immune to two of the biggest trends affecting U.S. jobs: globalization and automation. A worker in China can’t drive a truck in Ohio, and machines can’t drive cars (yet).
  • Regional specialization has declined. So jobs that are needed everywhere — like truck drivers and schoolteachers — have moved up the list of most-common jobs.
  • The prominence of truck drivers is partly due to the way the government categorizes jobs. It lumps together all truck drivers and delivery people, creating a very large category. Other jobs are split more finely; for example, primary school teachers and secondary school teachers are in separate categories.

The rise and fall of secretaries: Through much of the ’80s, as the U.S. economy shifted away from factories that make goods and toward offices that provide services, secretary became the most common job in more and more states. But a second shift — the rise of the personal computer — reversed this trend, as machines did more and more secretarial work.


A Career in Data Science: Bob Rogers, Chief Data Scientist, Intel


Bob Rogers, Chief Data Scientist, Intel

 

“Business leaders want ‘the answer,’” says Bob Rogers, Chief Data Scientist for Big Data Solutions at Intel. But data scientists must understand what “the answer” means in the specific business context and communicate the expected impact in the language of the business executives.  They need to explain the results of their analysis in “terms of the risk to the business” and “translate uncertainty into outcomes,” says Rogers. “If you show error bars on a number in a business presentation, you are probably going down the wrong path.”
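One way to “translate uncertainty into outcomes” is to turn a point forecast with an error bar into the probability of a business-relevant event. A minimal sketch, assuming a normally distributed forecast; all the numbers are hypothetical:

```python
import math

def prob_below(threshold, mean, std):
    """P(outcome < threshold) under a normal forecast, via the error function."""
    return 0.5 * (1 + math.erf((threshold - mean) / (std * math.sqrt(2))))

# Instead of presenting "revenue: $10.0M +/- $1.5M", report the downside risk:
risk = prob_below(8.0, mean=10.0, std=1.5)
print(f"chance of missing the $8M target: {risk:.0%}")  # about 9%
```

A sentence like “there is roughly a 9% risk of missing the $8M target” carries the same uncertainty as the error bar, but in the language of business risk.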

When the data scientist emerged as a new business role about a decade ago, the emphasis was on how it combined two disciplines and skill sets: computer science and statistics. More recently, the discussion of this evolving role has followed Rogers’ observation, treating it as one combining technical and business expertise and emphasizing the importance of communication skills. Drew Conway’s 2010 definition of a data scientist as a Venn diagram of computer science, statistics and domain expertise has now been updated to include communications as a stand-alone set of required skills.

“Statisticians have missed the initial boat of data science,” says Rogers. “They tend to be very specific about the way they discuss data, ways that are not necessarily amenable to a broader discussion with a business audience.”

What we have here is the re-definition of a role previously perceived as highly technical into a more generalized business one. The rise of the Sexiest Job of the 21st Century has spawned numerous undergraduate and graduate programs focused on imparting technical skills and knowledge, aiming to address the widely discussed shortage of experts in managing and mining the avalanche of big data. We now see business schools (e.g., Wharton) establishing a major in analytics, combining data science training with general business education. The next step, I would argue, will be the complete integration of the two types of training: business education as data science education.

Rogers’ varied work experience over the last twenty-five years is a prime example of the amalgam of skills and expertise that will be the hallmark of successful business leaders in the years to come. It’s a unique combination of scientific curiosity and acumen, facility with computer programming and data manipulation, entrepreneurial drive and experimental inclination. All of these are wrapped in a deep understanding, derived from direct experience, of the business context—the requirements, challenges, human motivations and attitudes that drive business success.

Like some of the leading data scientists of recent vintage, Rogers started his working life after earning a PhD in Physics. But in 1991, when he got his degree from Harvard University, there was not much data to support his thesis work in astrophysics, so he and others like him “were doing a lot more simulations.”  Today, “there is a lot of data associated with cosmology,” says Rogers, but then and now, knowing how “to model the data” has been a crucial requirement in this and other scientific fields. A new training ground today for budding data scientists, according to Rogers, is computational neuroscience, where the “amount and shape of data” coming from functional MRI requires “advanced modeling thinking.”

While doing a post-doc at a research institute, his own experience with computer modeling and simulations led Rogers to co-author a book on using artificial neural networks for time series forecasting. All of a sudden he was getting phone calls from people asking him about forecasting the stock market, a subject he didn’t know much about.

Serendipity plays a major role in many illustrious careers and Rogers’ was no exception. The husband of a friend of his wife’s owned a trading firm in Chicago, and with his help, Rogers started a company rather than pursue an academic career, just like many latter-day data scientists. “I was 28 at the time,” he explained when I asked him why he made such a risky career switch.

In another similarity to today’s data scientists, Rogers did not limit his involvement with the startup to developing forecasting models for the Chicago futures market, but also got down and dirty building a research platform for collecting data on transactions and the back-office systems for executing trades, accounting, and other functions.

This went on for about a dozen years, in the last four of which Rogers switched from R&D work to selling the company’s services when it opened up to new—international—investors.

“What was really profound for me as a data scientist,” says Rogers, “was actually the marketing side—I started to appreciate that there was a huge difference between having a technology that performed well and having a product that was tailored to fit the specific business needs of the customer. International investors had very specific needs around how the product was configured.”

Recalling his own experience leads Rogers to yet another observation about how understanding the business context and being able to communicate with business leaders are such important components of the data scientist’s job today:

“What I’ve seen change between the pre-data science period and the current era is that analytics in the enterprise used to be very focused on a business leader asking a business analyst for a report on X—that was the process. Now, it’s much more of a conversation. Storytelling skills, sensitivity to what the business needs are—successful data scientists tend to have this conversation.” In addition, there is more sensitivity to the uncertainty associated with data—“awareness that a number is not just a number”—even data that comes from a structured database should be handled with care.

By 2006, it was time to move on and “get into something that was more personally satisfying to me,” says Rogers, as “our computational and technological advantages have started to decline.” Healthcare turned out to be the more personally satisfying domain and he became the global product manager for the Humphrey Visual Field Analyser, widely used in Glaucoma care.

In yet another application of adding a time dimension to data, Rogers worked with a research team to move beyond a single, one-time measurement of the patient’s peripheral vision and compute the rate of change and the progression of blindness over time.  “It became an important tool for tracking these patients and their response to therapy,” he says. And in yet another immersion in practical, hands-on computing, the solution involved adding networking software (licensed from Apple) to multiple devices in a clinic to facilitate the collection of data from past measurements.
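At its core, a “rate of change” over repeated measurements is a slope fitted to a short time series. A toy illustration of the idea (not the actual Humphrey algorithm; the scores are invented):

```python
def fit_slope(xs, ys):
    """Least-squares slope of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Hypothetical mean-deviation scores (dB) from five annual visual-field tests.
years = [0, 1, 2, 3, 4]
scores = [-2.0, -2.6, -3.1, -3.9, -4.4]
print(f"progression: {fit_slope(years, scores):.2f} dB/year")  # -0.61 dB/year
```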

Better access to data, Rogers understood from that experience, was crucial for improving healthcare. In 2009, when the US federal government started to give incentives for healthcare providers to use Electronic Medical Records (EMR) systems, he saw how the original paper silos were simply replaced with electronic silos, with each EMR system becoming a stand-alone database. Not only was there no physical connection, there was no interoperability “from a semantic point of view—descriptions in one system could not be directly compared with those in another.”

The solution was a cloud-based system that pulled data from a variety of sources, plus machine learning software that constructed a table of all the codes and concepts in the clinical data and mapped them to each other. “The more data we got, the better we got at mapping these concepts and building a robust set of associations,” says Rogers. “That allowed us to build a clinically intelligent search engine.”
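The article doesn’t detail the mapping algorithm, but a common way to associate codes from different systems is by how often they co-occur on the same patients. A deliberately tiny sketch of that idea (all codes and records are hypothetical):

```python
from collections import defaultdict

# Hypothetical patient records: (patient_id, coding_system, code)
records = [
    (1, "sysA", "HF-01"), (1, "sysB", "I50.9"),
    (2, "sysA", "HF-01"), (2, "sysB", "I50.9"),
    (3, "sysA", "HF-01"), (3, "sysB", "E11.9"),
]

# Group each patient's codes by coding system.
by_patient = defaultdict(lambda: {"sysA": set(), "sysB": set()})
for pid, system, code in records:
    by_patient[pid][system].add(code)

# Count per-patient co-occurrences between the two systems.
cooccur = defaultdict(int)
for sets in by_patient.values():
    for a in sets["sysA"]:
        for b in sets["sysB"]:
            cooccur[(a, b)] += 1

# The strongest association is the candidate mapping.
best = max(cooccur, key=cooccur.get)
print(best)  # ('HF-01', 'I50.9')
```

With more data, weak accidental pairings get swamped by consistent ones, which is one way to read Rogers’ “the more data we got, the better we got at mapping these concepts.”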

You may think that this is “big data” in a nutshell—more data equals better learning or as some have called it, “the unreasonable effectiveness of data.” But you may want to reconsider admiring data quantity for quantity’s sake, given what Rogers and his colleagues found out while mining electronic medical records.

“63% of the key information that the doctor needs to know about you is not in your coded data at all,” says Rogers. “And 30% of the time, if you have a heart failure in your code, it’s not heart failure” and could have been a mistake or a related entry (e.g., a test for heart failure) in the billing system. As a result, most of the learning in Rogers’ machine learning system was dedicated to analysis of the text to “understand what information about the patient is actually correct.” An important big data lesson or what one may call the unreasonable effectiveness of data quality.

That system became the foundation of another startup, Apixio, which has recently raised $19.3 million in Series D venture capital funding. After serving there as Chief Scientist for 5 years, Rogers moved on again, in January 2015, this time from the world of startups to the corporate world and his current role at Intel.

As Chief Data Scientist he works internally on product road maps, providing input related to his expertise and the trends he sees. Externally, he works with the customers of Intel’s customers, helping them in “conceptualizing their entire analytics pipeline.” Providing free advice to consumers of analytics “helps keep Intel at the center of the computational world” and helps keep Rogers abreast of the latest data mining trends and developments. He learns about on-going concerns regarding whether a “new architecture” is required to accommodate the most recent data science tools and observes the rise of new challenges such as “monitoring many different real-time data streams.” And he reports that recently there has been a lot of interest in deep learning. Here, too, a key concern is integration–is it possible to build these new capabilities within the existing big data infrastructure?

Rogers’ role as a trusted advisor also includes working with partners. One example is the Collaborative Cancer Cloud, an Intel-developed precision medicine analytics platform, currently used by the Knight Cancer Institute at Oregon Health & Science University, the Dana-Farber Cancer Institute and the Ontario Institute for Cancer Research to securely share patient genomic, imaging and clinical data and accelerate their research into potentially lifesaving discoveries.

Extrapolating from his current and previous work, Rogers sees the future of AI as “the development of machine learning systems that are good at figuring out the context.” A lot of the recent AI news has been about what he perceives as immature work—“image captioning is a sort of parlor trick,” says Rogers. “We will start to see an emerging AI capability” when we have machine or deep learning capable of identifying the context of the image.

Unlike others who see the machines as potentially replacing humans, Rogers envisions human-machine collaboration: “AI capabilities are most interesting when they are used to amplify human capabilities. There are things that we are good at cognitively but we cannot do at scale. We [should use machine learning] to surface the information from a large volume of data so we can do the next level of inference ourselves,” he says.

Understanding the context. Accepting and managing uncertainty. Linking pieces of data to uncover new insights. Like good data scientists, future business leaders will not look for “the answer.” With the right attitude, experience, and training, they will actively search for data to refute their assumptions, question their most beloved initiatives, and challenge their established career trajectories.

Originally published on Forbes.com


The Rise of the ‘Tech’ Giants


Financial Times:

[August 1, 2016] was something of a red-letter day for the tech industry. When the stock market closed, the five most valuable companies on the planet were, for the first time, technology concerns. And they all hailed from the West Coast of the US, whether the San Francisco Bay Area (Apple, Alphabet and Facebook) or in and around Seattle (Microsoft and Amazon).

In subsequent days, ExxonMobil — which held the title of world’s most valuable company until it was overhauled by Apple — edged back above Facebook and Amazon. But it may only be a temporary reprieve. A seemingly inexorable shift in business and stock market momentum is under way, as today’s technology leaders assume a more central place in personal and business life.

Ten years ago, at the height of the PC era, Microsoft was the only tech company in the top 20. Now, though, the big five control a much wider array of digital platforms around which life and work revolve — from smartphones and cloud computing data centres to mobile messaging apps.

They are also racing each other to build the next platforms, from virtual reality headsets to driverless cars and digital assistants powered by artificial intelligence.

Only China, thanks to a domestic market that is hard for outsiders to penetrate, can lay claim to tech companies with the scale and ambition to compete.

That has been underlined by this week’s detente in the ride-hailing wars, which has seen Uber’s global expansion halted and a new Chinese digital champion crowned, in the shape of Didi Chuxing.

Today, the key question is: which markets are next in the big five’s sights, as they cast around more widely for growth?

[Image: Bloomberg chart of the world’s most valuable companies]

Bloomberg:

The market value rankings over the last couple of decades offer a glimpse at the world’s changing economy. At the peak of the dot-com bubble in March 2000, tech companies including Microsoft, Intel and Cisco were among the biggest companies in the world — and they are still giants. Ten years ago, big banks, Chinese industrial and financial companies and global commodities firms crowded the market cap big leagues. Five years ago, Apple became the biggest company in the world by stock value, a position it has occupied with some interruptions since then.

In addition to Apple, the growing might of Google, Amazon and Facebook have lifted those companies to new heights and Microsoft’s market value has rebounded under a new CEO. Non-tech titans like Exxon and GE have slipped a bit. Stock investors are now willing to pay more for a dollar of future earnings for the tech superpowers than they are for most other corporations. Of course, technology’s Fab Five may not last in their lofty perch. The streak could end after one day. Good times never last. But for the moment, technology is on top of the world.

The Atlantic:

Every industry uses computers, software, and internet services. If that’s what “technology” means, then every company is in the technology business—a useless distinction. But it’s more likely that “technology” has become so overused, and so carelessly associated with Silicon Valley-style computer software and hardware startups, that the term has lost all meaning. Perhaps finance has exacerbated the problem by insisting on the generic industrial term “technology” as a synonym for computing.

There are companies that are firmly planted in the computing sector. Microsoft and Apple are two. Intel is another—it makes computer parts for other computer makers. But it’s also time to recognize that some companies—Alphabet, Amazon, and Facebook among them—aren’t primarily in the computing business anyway. And that’s no slight, either. The most interesting thing about companies like Alphabet, Amazon, and Facebook is that they are not (computing) technology companies. Instead, they are using computing infrastructure to build new—and enormous—businesses in other sectors. If anything, that’s a fair take on what “technology” might mean as a generic term: manipulating one set of basic materials to realize goals that exceed those materials.


Wearables are today all about wellness

[Image: Forrester US wearables forecast, July 2016]

Forrester Research:

US consumer adoption of wearable devices will reach 29% in 2021, up from 18% last year, according to a new Forrester Data forecast. Today the vast majority of consumers who own and use wearables have a health or wellness device (17%), while monitoring, retail, notifications, and travel are expected to grow…

  • Wearables sales will grow from $4.2 billion in 2015 to $9.8 billion in 2021. Both device volume and a larger mix of more expensive wearable devices such as smartwatches, apparel, and glasses will fuel the revenue number.
  • Smartwatch sales will hit 16 million units in 2021. By 2021, over one-third of the 46 million wearables sold will be smartwatches. Higher consumer adoption of mobile payments, voice, notifications, and wellness applications will spur growth.
  • While some digital businesses have wearables strategies today, these are nascent with a focus on piloting watch apps in order to learn. Just over a third (34%) of businesses do not have a wearables strategy today and do not plan on implementing one in the future, while only 11% currently follow a wearables strategy.

 


No End to Moore’s Law


IEEE Spectrum:

This final ITRS report is titled ITRS 2.0. The name reflects the idea that improvements in computing are no longer driven from the bottom-up, by tinier switches and denser or faster memories. Instead, it takes a more top-down approach, focusing on the applications that now drive chip design, such as data centers, the Internet of Things, and mobile gadgets.

The new IEEE roadmap—the International Roadmap for Devices and Systems—will also take this approach, but it will add computer architecture to the mix, allowing for “a comprehensive, end-to-end view of the computing ecosystem, including devices, components, systems, architecture, and software,” according to a recent press release.

Transistor miniaturization was still a part of the long-term forecast as recently as 2014, when the penultimate ITRS report was released. That report predicted that the physical gate length of transistors—an indicator of how far current must travel in the device—and other key logic chip dimensions would continue to shrink until at least 2028. But since then, 3D concepts have gained momentum. The memory industry has already turned to 3D architectures to ease miniaturization pressure and boost the capacity of NAND Flash. Monolithic 3D integration, which would build layers of devices one on top of another, connecting them with a dense forest of wires, has also been an increasingly popular subject of discussion.

The new report embraces these trends, predicting an end to traditional scaling—the shrinking of chip features—by the early 2020s. But the idea that we’re now facing an end to Moore’s Law “is completely wrong,” Gargini says. “The press has invented multiple ways of defining Moore’s Law but there is only one way: The number of transistors doubles every two years.”

Moore’s Law, he emphasizes, is simply a prediction about how many transistors can fit in a given area of IC—whether it’s done, as it has been for decades, in a single layer or by stacking multiple layers. If a company really wanted to, Gargini says, it could continue to make transistors smaller well into the 2020s, “but it’s more economic to go 3-D. That’s the message we wanted to send.”
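Gargini’s one-line definition translates directly into a formula: N(t) = N0 · 2^(t/2), with t in years, regardless of whether the transistors sit in a single layer or a 3D stack. A quick illustration (the starting count is hypothetical):

```python
def transistors(n0, years):
    """Transistor count after `years` years, doubling every two years."""
    return n0 * 2 ** (years / 2)

# A 1-billion-transistor chip, projected a decade out: 2**5 = 32x
print(f"{transistors(1e9, 10):.1e}")  # 3.2e+10
```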


Internet of Things Market Landscape

[Image: Internet of Things market landscape, 2016]

Matt Turck:

As in previous versions, the chart is organized into building blocks, horizontals and verticals. Pretty much every segment is seeing a lot of activity, but it is worth noting that those parts are not particularly well integrated just yet, meaning in particular that vertical applications are not necessarily built on top of horizontals. To the contrary, we’re very much in the era of the “full stack” IoT startup – because there is no dominant horizontal platform, and not enough mature, cheap and fully reliable components just yet, startups tend to build a lot themselves: hardware, software, data/analytics, etc. Some enterprise IoT companies, such as our portfolio company Helium, also have a professional services organization on top, as enterprise customers are at the stage where they try to make sense of the IoT opportunity and are looking for something that “just works”, as opposed to mixing and matching best-of-breed components. This is a typical characteristic of startups operating in an early market, and I would expect many of those companies to evolve over time, and possibly ditch the hardware component of their business entirely.

 


Internet of Things in 2025: $3 Trillion Market, 27 Billion Connections


M2M Application Groups covered in Machina Research’s M2M Forecast Database

Machina Research:

  • The total number of IoT connections will grow from 6 billion in 2015 to 27 billion in 2025, a CAGR of 16%.
  • Today 71% of all IoT connections use a short-range technology (e.g. WiFi, Zigbee, or in-building PLC); by 2025 that share will have grown slightly to 72%. The big short-range applications, which make this the dominant technology category, are Consumer Electronics, Building Security and Building Automation.
  • Cellular connections will grow from 334 million at the end of 2015 to 2.2 billion by 2025, of which the majority will be LTE. 45% of those cellular connections will be in the ‘Connected Car’ sector, including both factory-fit embedded connections and aftermarket devices.
  • 11% of connections in 2025 will use Low Power Wide Area (LPWA) connections such as Sigfox, LoRa and LTE-NB1.
  • China and the US will be neck-and-neck for dominance of the global market by 2025. China will account for 21% of global IoT connections, ahead of the US at 20%, with similar proportions for cellular connections. However, the US leads in IoT revenue (22% vs 19%). The third-largest market is Japan, with 7% of all connections, 7% of cellular connections and 6% of global revenue.
  • The total IoT revenue opportunity will be USD3 trillion in 2025 (up from USD750 billion in 2015). Of this figure, USD1.3 trillion will be accounted for by revenue directly derived from end users in the form of devices, connectivity and application revenue. The remainder comes from upstream and downstream IoT-related sources such as application development, systems integration, hosting and data monetisation.
  • By 2025, IoT will generate over 2 zettabytes of data, mostly from consumer electronics devices. However, it will account for less than 1% of cellular data traffic; that cellular traffic comes particularly from digital billboards, in-vehicle connectivity and CCTV.
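Compounding the 2015 base forward at the stated 16% CAGR lands within rounding of the 27 billion headline (figures from the bullets above):

```python
connections_2015 = 6e9
projected_2025 = connections_2015 * 1.16 ** 10  # ten years at 16% CAGR
print(f"{projected_2025 / 1e9:.1f} billion")  # 26.5 billion, i.e. ~27 billion
```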

The Digital Ecosystem

[Images: Deloitte digital ecosystem chart and methodology]

Deloitte University Press

What is the digital ecosystem? It’s a subset of TMT companies that specialize in the development of hardware, content, and software applications and provide a platform for the creation, distribution, and consumption of content, applications, and services.


Machine Learning, Artist’s Rendition

[vimeo 176389707 w=640 h=360]

Motherboard:

Jakob Werner, a 22-year-old animator and visual design student in Germany, decided to interpret the idea of machine learning a bit more literally “in order to create a sarcastic view at our society and into the future.”

The result is a wooden automaton that appears to “read” by holding a book, moving its eyes as if scanning the words, and then flipping the page. “The secret behind machine learning,” he wrote in the video’s description. “This is how machines collect data.”

Werner’s artist statement shows he’s thought deeply about this topic, even though his project is tongue-in-cheek.

“Machine learning distinguishes itself from memorisation because it has the ability to recognize patterns which makes the algorithm seem more human-like,” he wrote. “This is essential for machines in order to be considered as artificial intelligence. Though there are a lot of programs and machines that aren’t intelligent but just seem to be. They don’t solve problems, they are programmed to work around them.”

 
