
Source: Singular Vision

As a refresh to my 2014 blog and report, here are the next 15 emerging technologies Forrester thinks you need to follow closely. We organize this year’s list into three groups: systems of engagement technologies that will help you become customer-led, systems of insight technologies that will help you become insights-driven, and supporting technologies that will help you become fast and connected.
Why these 15? You might have noticed a few glaring omissions. Certainly blockchain has garnered a lot of attention, and 3D printing is on most of our competitors’ lists. The answer goes back to being customer-led, insights-driven, fast, and connected. Those of you who follow our research will recognize these as the four principles of customer-obsessed operations. The technologies we selected will have the biggest impact on your ability to win, serve, and retain customers whose expectations of service through technology are only going up. Furthermore, our list focuses on those technologies that will have the biggest business impact in the next five years. We think blockchain’s big impact outside of financial services, for example, is further out, so it didn’t make our list, even though it is important. Maybe by 2018, when I update our list next.
Since I don’t have room here for details about all of our technologies, I’ll focus on five that we think have the potential to change the world. That’s a third of our list, by the way, which means a lot of change is coming; it’s time to make your technology bets.

To get a more accurate assessment of the opinion of leading researchers in the field, I turned to the Fellows of the Association for the Advancement of Artificial Intelligence (AAAI), a group of researchers who are recognized as having made significant, sustained contributions to the field.
In early March 2016, AAAI sent out an anonymous survey on my behalf, posing the following question to 193 fellows:
“In his book, Nick Bostrom has defined Superintelligence as ‘an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.’ When do you think we will achieve Superintelligence?”
…In essence, according to 92.5 percent of the respondents, superintelligence is beyond the foreseeable horizon.

The smart home market is expected to grow from $46.97 billion in 2015 to $121.73 billion by 2022, at a CAGR of 14.07% between 2016 and 2022.
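The growth rate implied by those endpoints can be checked directly with the standard formula CAGR = (end/start)^(1/years) − 1. (The 14.07% figure presumably uses a 2016 base-year value that isn’t quoted here; computed from the 2015 figure over seven years, the implied rate comes out slightly higher, around 14.6%.)

```python
# Compound annual growth rate: CAGR = (end / start) ** (1 / years) - 1
def cagr(start: float, end: float, years: int) -> float:
    return (end / start) ** (1 / years) - 1

# Smart home market figures quoted above: $46.97B (2015) to $121.73B (2022)
rate = cagr(46.97, 121.73, 2022 - 2015)
print(f"Implied CAGR 2015-2022: {rate:.2%}")  # ~14.57%
```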


Network & Endpoint Security: This is the largest category in our market map and includes startups like Red Canary, which specializes in protecting an enterprise’s computer networks from vulnerabilities that arise when users remotely connect their laptops, tablets, and other electronic devices. Other companies, like Cylance, apply artificial intelligence algorithms to predictively identify and stop malware and advanced threats in defense of endpoints.
IoT/IIoT Security: Startups in this category include Argus Cyber Security, an automotive cybersecurity company that enables car manufacturers to protect connected vehicles, and Indegy, which provides security for the Industrial Control Systems (ICS) used across critical infrastructure such as energy and water utilities, petrochemical plants, and manufacturing facilities.
Threat Intelligence: Companies include Flashpoint, which illuminates targeted malicious activities on the deep web and dark web to uncover potential threats and thwart attacks.
Mobile Security: Companies in this category include Zimperium, which delivers enterprise mobile threat protection for Android and iOS devices.
Behavioral Detection: Included are companies like Darktrace, which detects abnormal behavior in organizations in order to identify threats and manage risks from cyberattacks.
Cloud Security: Startups like Tigera offer solutions for enterprises looking for secure application and workload delivery across private, public, and hybrid clouds.
Deception Security: Companies like illusive networks can identify and proactively deceive and disrupt attackers before they can cause harm.
Continuous Network Visibility: ProtectWise and others offer solutions for visualizing network activity and responding to cyberattacks in real time.
Risk Remediation: Companies, including AttackIQ, offer solutions for pinpointing vulnerabilities in technologies, people, and processes, with recommendations on how to effectively fill security gaps.
Website Security: Distil Networks and Shape Security offer website developers the ability to identify and police malicious website traffic, including malicious bots and more.
Quantum Encryption: Startups like Post-Quantum offer encrypted wireless and data communications technology that relies on the science underlying quantum mechanics.

Intel Analytics Summit, August 15, 2016 (Source: @MissAmaraKay)
From the most recent edition of the tech bible: Moore’s Law begat faster processing and cheap storage which begat machine learning and big data which begat deep learning and today’s AI Spring. In her opening keynote at the Intel Analytics Summit, which was mostly about machine learning, Intel’s executive vice president Diane Bryant said that we are now “reaching a tipping point where data is the game changer.” (Disclosure: Intel paid my travel expenses).
With the rapid growth of machine-to-machine data exchange we should expect more, according to Bryant: Autonomous vehicles will produce 4 terabytes of data each day, a connected plane will transmit 40 terabytes of data, and the automated, connected factory will generate one petabyte (one million gigabytes) daily.
Another presenter, CB Bohn, Senior Database Engineer at Etsy, the online marketplace, speculated that the tipping point has already happened—when the value of the data exceeded the cost of its storage. Historical data has lots of value left in it, so “why throw it away?” asked Bohn. Cheap storage, added Debora Donato, Director of R&D at Mix Tech, a content discovery platform, has changed the attitudes of businesses towards data and what they can do with it.
Leading-edge enterprises today apply machine learning algorithms to mine and find insights in the ever-expanding data store. Jason Waxman, corporate vice president and general manager of the data center solutions group at Intel, described how Penn Medicine is improving patient care using Intel’s TAP open analytics platform. One pilot study focused on sepsis, or blood infection, which affects more than a million Americans annually and is the ninth leading cause of disease-related deaths and the #1 cause of deaths in intensive care units, according to the Centers for Disease Control and Prevention (CDC). Penn Medicine was able to correctly identify about 85% of sepsis cases (up from 50%) and made such identifications as much as 30 hours before the onset of septic shock, as opposed to just two hours prior using traditional identification methods.
Candid is a new app that launched recently, applying AI to solve the challenges previous anonymous social platforms could not overcome. CEO Bindu Reddy explained how machine learning helps identify and remove “bad apples”—both inappropriate content and abusers—and recommend relevant groups to Candid users.
Clear Labs differentiates itself by conducting DNA tests that are untargeted and unbiased, aiming to index the world’s food supply and set worldwide standards for “food integrity.” Maria Fernandez Guajardo, vice president of product, described how their molecular analysis of 345 hot dog samples from 75 brands and 10 retailers discovered that 14.4% of the products tested were “problematic in some way,” mostly because of added ingredients that did not show up on the label. Some consumers, she reported, were especially concerned about hot dogs that claimed to be vegetarian but actually contained meat.
In answer to a question on the future of machine learning from O’Reilly Media’s Ben Lorica, moderator of a panel on distributed analytics, Intel fellow Pradeep Dubey suggested focusing on deep learning as it has been demonstrably successful recently. Michael Franklin of UC Berkeley recommended focusing on machine learning approaches that are usable, understandable and robust, whether of the deep or shallow kind. If an automated system is going to make a decision, he said, “You’d better understand what are the assumptions that went into the data and the algorithms, where does the data you collected differ from those assumptions and how robust is the answer that popped out of the system.”
This was, I believe, a swipe at some of the deep learning practitioners who have admitted publicly that they don’t really understand how their system comes up with its successful results (e.g., Yoshua Bengio: “very often we are in a situation where we do not understand the results of an experiment”). But nothing succeeds like success, whether it is understood or not, and over the last few years deep learning has become a force of climate change, transforming the AI Winter into the AI Spring.
Pedro Domingos of the University of Washington, in his talk at the event, put the recent resurgence of deep learning in the historical perspective of five different approaches (and solutions) to artificial intelligence: Symbolists (inverse deduction), Connectionists (backpropagation—popular with the deep learning crowd), Evolutionaries (genetic programming), Bayesians (probabilistic inference) and Analogizers (kernel machines). Domingos’ book, The Master Algorithm, is a rallying cry for finding the best of all worlds, the one algorithm that will unite all approaches and provide the answer to life, the universe, and everything.
Before we get to the time when the Master Algorithm will tell us what to do whether we understand it or not, humans are still needed to make sense of all the data they—and the machines—generate. The last panel of the Analytics Summit was, appropriately, a discussion of educating future data scientists. The panelists, executives with Coursera (Emily Glassberg Sands), Kaggle (Anthony Goldbloom), Continuum Analytics (Travis Oliphant), Metis (Rumman Chowdhury), and Galvanize (Ryan Orban), moderated by Edd Wilder-James of Silicon Valley Data Science, represented the burgeoning world of data science education.
The good news is that one got the impression that they are now training a vastly expanded pool of people, with very diverse backgrounds and experiences, who either want to become proficient in data analysis or want to be able to speak, as general business managers, the data scientist’s language. The challenge today is not so much the widely discussed shortage of data scientists as the failure by many companies to effectively integrate and support the work of data scientists. The right internal champion, the panelists agreed, one who understands the potential of analytics and machine learning and knows how to get the required resources, is key to the success of the data science team.
Originally published on Forbes.com

Screen time increasingly dominates our leisure hours, too, according to the annual American Time Use Survey, which is based on a nationally representative sample of individuals aged 15 and older.
“Screen time” and “active leisure” are necessarily hard to define precisely. (For example, reading for leisure was included in “active leisure,” but reading on an e-reader could be thought of as time spent on a screen.)
Nonetheless, these data show a clear trend—adult Americans are spending more of their non-work/education time on a screen.
The first iPhone was released June 29, 2007.
By Hal Varian, Chief Economist, Google
Published on IMF.org
A computer now sits in the middle of virtually every economic transaction in the developed world. Computing technology is rapidly penetrating the developing world as well, driven by the rapid spread of mobile phones. Soon the entire planet will be connected, and most economic transactions worldwide will be computer mediated.
Data systems that were once put in place to help with accounting, inventory control, and billing now have other important uses that can improve our daily life while boosting the global economy.
Computer mediation can impact economic activity through five important channels.
Data collection and analysis: Computers can record many aspects of a transaction, which can then be collected and analyzed to improve future transactions. Automobiles, mobile phones, and other complex devices collect engineering data that can be used to identify points of failure and improve future products. The result is better products and lower costs.
Personalization and customization: Computer mediation allows services that were previously one-size-fits-all to become personalized to satisfy individual needs. Today we routinely expect that online merchants we have dealt with previously possess relevant information about our purchase history, billing preferences, shipping addresses, and other details. This allows transactions to be optimized for individual needs.
Experimentation and continuous improvement: Online systems can experiment with alternative algorithms in real time, continually improving performance. Google, for example, runs over 10,000 experiments a year dealing with many different aspects of the services it provides, such as ranking and presentation of search results. The infrastructure for running such experiments is also available to the company’s advertisers, who can use it to improve their own offerings.
Contractual innovation: Contracts are critical to economic transactions, but without computers it was often difficult or costly to monitor contractual performance. Verifying performance can help alleviate problems with asymmetric information, such as moral hazard and adverse selection, which can interfere with efficient transactions. There is no longer a risk of purchasing a “lemon” car if vehicular monitoring systems can record history of use and vehicle health at minimal cost.
Coordination and communication: Today even tiny companies with a handful of employees have access to communication services that only the largest multinationals could afford 20 years ago. These micro-multinationals can operate on a global scale because the cost of computation and communication has fallen dramatically. Mobile devices have enabled global coordination of economic activity that was extremely difficult just a decade ago. For example, today authors can collaborate on documents simultaneously even when they are located thousands of miles apart. Videoconferencing is now essentially free, and automated document translation is improving dramatically. As mobile technology becomes ubiquitous, organizations will become more flexible and responsive, allowing them to improve productivity.
Let us dig deeper into these five channels through which computers are changing our lives and our economy.
We hear a lot about “big data” (see “Big Data’s Big Muscle,” in this issue of F&D), but “small data” can be just as important, if not more so. Twenty years ago only large companies could afford sophisticated inventory management systems. But now every mom-and-pop corner store can track its sales and inventory using intelligent cash registers, which are basically just personal computers with a drawer for cash. Small business owners can handle their own accounting using packaged software or online services, allowing them to better track their business performance. Indeed, these days data collection is virtually automatic. The challenge is to translate that raw data into information that can be used to improve performance.
Large businesses have access to unprecedented amounts of data, but many industries have been slow to use it, due to lack of experience in data management and analysis. Music and video entertainment have been distributed online for more than a decade, but the entertainment industry has been slow to recognize the value of the data collected by servers that manage this distribution (see “Music Going for a Song,” in this issue of F&D). The entertainment industry, driven by competition from technology companies, is now waking up to the possibility of using this data to improve their products.
The automotive industry is also evolving quickly by adding sensors and computing power to its products. Self-driving cars are rapidly becoming a reality. In fact, we would have self-driving cars now if it weren’t for the randomness introduced by human drivers and pedestrians. One solution to this problem would be restricted lanes for autonomous vehicles only. Self-driving cars can communicate among themselves and coordinate in ways that human drivers are (alas) unable to. Autonomous vehicles don’t get tired, they don’t get inebriated, and they don’t get distracted. These features of self-driving cars will save millions of lives in the coming years.
Twenty years ago it was a research challenge for computers to recognize pictures containing human beings. Now free photo storage systems can find pictures with animals, mountains, castles, flowers, and hundreds of other items in seconds. Improved facial recognition technology and automated indexing allow the photos to be found and organized easily and quickly.
Similarly, just in the past few years voice recognition systems have become significantly more accurate. Voice communication with electronic devices is possible now and will soon become the norm. Real-time verbal language translation is a reality in the lab and will be commonplace in the near future. Removing language barriers will lead to increased foreign trade, including, of course, tourism.
Observational data can uncover interesting patterns and correlations in data. But the gold standard for discovering causal relationships is experimentation, which is why online companies like Google routinely experiment and continuously improve their systems. When transactions are mediated by computers, it is easy to divide users into treatment and control groups, deploy treatment, and analyze outcomes in real time.
Companies now routinely use this kind of experimentation for marketing purposes, but these techniques can be used in many other contexts. For example, institutions such as the Massachusetts Institute of Technology’s Abdul Latif Jameel Poverty Action Lab have been able to run controlled experiments of proposed interventions in developing economies to alleviate poverty, improve health, and raise living standards. Randomized controlled experiments can be used to resolve questions about what sorts of incentives work best for increasing saving, educating children, managing small farms, and a host of other policies.
The traditional business model for advertising was “You pay me to show your ad to people, and some of them might come to your store.” Now in the online world, the model is “I’ll show your ad to people, and you only have to pay me if they come to your website.” The fact that advertising transactions are computer mediated allows merchants to pay only for the outcome they care about.
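The economics of that shift can be illustrated with hypothetical numbers (the rates below are invented for illustration): under pay-per-impression the advertiser bears the risk that nobody clicks, while under pay-per-click the cost scales directly with the outcome the advertiser cares about.

```python
# Hypothetical campaign: 1,000,000 ad impressions, 0.5% click-through rate.
impressions = 1_000_000
ctr = 0.005
clicks = int(impressions * ctr)

# Old model: pay per thousand impressions (CPM), e.g. a $2 CPM.
cpm = 2.00
cost_impressions = impressions / 1000 * cpm

# New model: pay per click (CPC), e.g. $0.40 per click.
cpc = 0.40
cost_clicks = clicks * cpc

print(f"{clicks} clicks; CPM cost ${cost_impressions:.0f}, "
      f"CPC cost ${cost_clicks:.0f}")
# The two models cost the same only when CPC = CPM / (1000 * CTR);
# what changes is who bears the risk when the click-through rate varies.
```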
Consider the experience of taking a taxi in a strange city. Is this an honest driver who will take the best route and charge me the appropriate fee? At the same time, the driver may well have to worry whether the passenger is honest and will pay for the ride. This is a one-time interaction, with limited information on both sides and potential for abuse. But now consider technology such as that used by Lyft, Uber, and other ride services. Both parties can see rating history, both parties can access estimates of expected fares, and both parties have access to maps and route planning. The transaction has become more transparent to all parties, enabling more efficient and effective transactions. Riders can enjoy cheaper and more convenient trips, and drivers can enjoy a more flexible schedule.
Smartphones have disrupted the taxi industry by enabling these improved transactions, and every player in the industry is now offering such capabilities—or will soon. Many people see the conflict between ride services and the taxi industry as one of innovators versus regulators. However, from a broader perspective, what matters is which technology wins. The technology used by rideshare companies clearly provides a better experience for both drivers and passengers, so it will likely be widely adopted by traditional taxi services.
Simply being able to capture transaction history can improve contracts (see “Two Faces of Change,” in this issue of F&D). It is remarkable that I can walk into a bank in a new city, where I know no one and no one knows me, and arrange for a mortgage worth millions of dollars. This is enabled by credit rating services, which dramatically reduce risk on both sides of the transaction, making loans possible for people who otherwise could not get them.
Recently I had some maintenance work done on my house. The team of workers used their mobile phones to photograph items that needed replacement, communicate with their colleagues at the hardware store, find their way to the job site, use as a flashlight to look in dark places, order lunch for delivery, and communicate with me. All of these formerly time-consuming tasks can now be done quickly and easily. Workers spend less time waiting for instructions, information, or parts. The result is reduced transaction costs and improved efficiency.
Today only the wealthy can afford to employ executive assistants. But in the future everyone will have access to digital assistant services that can search through vast amounts of information and communicate with other assistants to coordinate meetings, maintain records, locate data, plan trips, and do the dozens of other things necessary to get things done (see “Robots, Growth, and Inequality,” in this issue of F&D). All of the big tech companies are investing heavily in this technology, and we can expect to see rapid progress thanks to competitive pressure.
Today’s mobile phones are many times more powerful and much less expensive than those that powered Apollo 11, the 1969 manned expedition to the moon. These mobile phone components have become “commoditized.” Screens, processors, sensors, GPS chips, networking chips, and memory chips cost almost nothing these days. You can buy a reasonable smartphone now for $50, and prices continue to fall. Smartphones are becoming commonplace even in very poor regions.
The availability of those cheap components has enabled innovators to combine and recombine these components to create new devices—fitness monitors, virtual reality headsets, inexpensive vehicular monitoring systems, and so on. The Raspberry Pi is a $35 computer designed at Cambridge University that uses mobile phone parts with a circuit board the size of a pack of playing cards. It is far more powerful than the Unix workstations of just 15 years ago.
The same forces of standardization, modularization, and low prices are driving progress in software. The hardware created using mobile phone parts often uses open-source software for its operating system. At the same time, the desktop motherboards from the personal computer era have now become components in vast data centers, also running open-source software. The mobile devices can hand off relatively complex tasks such as image recognition, voice recognition, and automated translation to the data centers on an as-needed basis. The availability of cheap hardware, free software, and inexpensive access to data services has dramatically cut entry barriers for software development, leading to millions of mobile phone applications becoming available at nominal cost.
I have painted an optimistic picture of how technology will impact the global economy. But how will this technological progress show up in conventional economic statistics? Here the picture is somewhat mixed. Take GDP, for example. This is usually defined as the market value of all final goods and services produced in a given country in a particular time period. The catch is “market value”—if a good isn’t bought and sold, it generally doesn’t show up in GDP.
This has many implications. Household production, ad-supported content, transaction costs, quality changes, free services, and open-source software are dark matter as far as GDP is concerned, since technological progress in these areas does not show up directly in GDP. Take, for example, ad-supported content, which is widely used to support provision of online media. In the U.S. Bureau of Economic Analysis National Economic Accounts, advertising is treated as a marketing expense—an intermediate product—so it isn’t counted as part of GDP. A content provider that switches from a pay-per-view business model to an ad-supported model reduces GDP.
One example of technology making a big difference to productivity is photography. Back in 2000, about 80 billion photos were taken worldwide—a good estimate since only three companies produced film then. In 2015, it appears that more than 1.5 trillion photos were taken worldwide, roughly 20 times as many. At the same time the volume exploded, the cost of photos fell from about 50 cents each for film and developing to essentially zero.
So over 15 years the price fell to zero and output went up 20 times. Surely that is a huge increase in productivity. Unfortunately, most of this productivity increase doesn’t show up in GDP, since the measured figures depend on the sales of film, cameras, and developing services, which are only a small part of photography these days.
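The arithmetic behind those claims, using the figures quoted above:

```python
photos_2000 = 80e9          # ~80 billion photos taken worldwide, 2000
photos_2015 = 1.5e12        # ~1.5 trillion photos, 2015
cost_per_photo_2000 = 0.50  # film plus developing, in dollars

volume_ratio = photos_2015 / photos_2000
print(f"Volume grew {volume_ratio:.1f}x")  # 18.8x, i.e. roughly 20 times

# What 2015's volume would have cost at 2000-era prices -- a rough
# illustration of the unmeasured value, since nobody would actually
# have taken 1.5 trillion photos at 50 cents apiece.
implied_spend = photos_2015 * cost_per_photo_2000
print(f"Implied spend at old prices: ${implied_spend / 1e9:.0f} billion")
```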
In fact, when digital cameras were incorporated into smartphones, GDP decreased, camera sales fell, and smartphone prices continued to decline. Ideally, quality adjustments would be used to measure the additional capabilities of mobile phones. But figuring out the best way to do this and actually incorporating these changes into national income accounts is a challenge.
Even if we could accurately measure the number of photos now taken, most are produced at home and distributed to friends and family at zero cost; they are not bought and sold and don’t show up in GDP. Nevertheless, those family photos are hugely valuable to the people who take them.
The same thing happened with global positioning systems (GPS). In the late 1990s, the trucking industry adopted expensive GPS and vehicular monitoring systems and saw significant increases in productivity as a result. In the past 10 years, consumers have adopted GPS for home use. The price of the systems has fallen to zero, since they are now bundled with smartphones, and hundreds of millions of people use such systems on a daily basis. But as with cameras, the integration of GPS with smartphones has likely reduced GDP, since sales of stand-alone GPS systems have fallen.
As in the case of cameras, this measurement problem could be solved by implementing a quality adjustment for smartphones. But it is tricky to know exactly how to do this, and statistical agencies want a system that will stand the test of time. Even after the quality adjustment problem is worked out, the fact that most photos are not exchanged for cash will remain—that isn’t a part of GDP, and technological improvements in that area are just not measured by conventional statistics.
When the entire planet is indeed connected, everyone in the world will, in principle, have access to virtually all human knowledge. The barriers to full access are not technological but legal and economic. Assuming that these issues can be resolved, we can expect to see a dramatic increase in human prosperity.
But will these admittedly utopian hopes be realized? I believe that technology is generally a force for good—but there is a dark side to the force (see “The Dark Side of Technology,” in this issue of F&D). Improvements in coordination technology may help productive enterprises but at the same time improve the efficiency of terrorist organizations. The cost of communication may drop to zero, but people will still disagree, sometimes violently. In the long run, though, if technology enables broad improvement in human welfare, people might devote more time to enlarging the pie and less to squabbling over the size of the pieces.

A new Ponemon Institute survey has found that 76% of IT practitioners in the U.S. and Europe say their organizations have suffered the loss or theft of important data over the past two years. This is a significant increase from the 67% reporting data loss or theft in the same survey two years ago.
Here are the other key findings of the survey of 3,027 employees and IT practitioners in the U.S. and Europe, conducted in April and May, 2016, and sponsored by Varonis Systems:

58% of IT practitioners see outside attackers who compromise insider credentials as the #1 threat, followed by 55% who point to insiders who are negligent, and 44% worrying about malware.
62% of end users say they have access to company data they probably shouldn’t see, with 47% saying such access happens very frequently or frequently. The overall figure (62%) is down from 71% reporting too much access to confidential data in 2014.
Only 29% of IT practitioners report that their organizations enforce a strict least-privilege model to ensure insiders have access to company data on a need-to-know basis.

88% of end users say their jobs require them to access and use proprietary information such as customer data, contact lists, employee records, financial reports, confidential business documents, or other sensitive information assets. This is significantly higher than the 76% recorded in the 2014 survey.
43% of respondents say they retain and store documents or files they created or worked on forever (up from 40% in 2014). Another 25% of respondents say they keep documents or files for one year or longer.
Only 25% of the organizations surveyed monitor all employee and third-party email and file activity, while 38% do not monitor any file and email activity.
78% of IT practitioners are very concerned about ransomware. 15% of organizations have experienced ransomware, and barely half of those detected the attack within the first 24 hours.
35% of the organizations surveyed have no searchable records of file system activity, leaving them unable to determine, among other things, which files have been encrypted by ransomware.
The Ponemon Institute concludes:
The continuing increase in data loss and theft is due in large part to two troubling factors:
Originally published on Forbes.com