Forrester’s Top Emerging Technologies To Watch: 2017-2021

forrester_technologies

Forrester:

As a refresh to my 2014 blog and report, here are the next 15 emerging technologies Forrester thinks you need to follow closely. We organize this year’s list into three groups — systems of engagement technologies will help you become customer-led, systems of insight technologies will help you become insights-driven, and supporting technologies will help you become fast and connected.

Why these 15? You might have noticed a few glaring omissions. Certainly blockchain has garnered a lot of attention, and 3D printing is on most of our competitors’ lists. The answer goes back to being customer-led, insights-driven, fast, and connected. Those of you who follow our research will recognize these as the four principles of customer-obsessed operations. The technologies we selected will have the biggest impact on your ability to win, serve, and retain customers whose expectations of service through technology are only going up. Furthermore, our list focuses on those technologies that will have the biggest business impact in the next five years. We think blockchain’s big impact outside of financial services, for example, is further out, so it didn’t make our list, even though it is important. Maybe by 2018, when I update our list next.

Since I don’t have room here for details about all of our technologies, I’ll focus on five that we think have the potential to change the world. That’s one-third of our list, by the way – which means a lot of change is coming; it’s time to make your technology bets.

  • IoT software and solutions bring customer engagement potential within reach. These software platforms and solutions act as a bridge between highly specialized sensor, actuator, compute, and networking technology for real-world objects and related business software. This technology gives firms visibility into and control of customer and operational realities. By 2021, technology for specific use cases will be mature, but protocol diversity, immature standards, and the need for organizational changes will still stymie or delay many firms. …
  • Intelligent agents coupled with AI/cognitive technologies will automate engagement and solve tasks. Intelligent agents represent a set of AI-powered solutions that understand users’ behavior and are discerning enough to interpret needs and make decisions on their behalf. By 2021, we think that automation, supported by intelligent software agents driven by an evolution in AI and cognitive technology, will have eliminated a net 6% of US jobs. But the loss won’t be uniform. There will be an 11% loss of jobs that are vulnerable and a 5% creation of jobs in industries that stand to benefit. …
  • Augmented reality overlays digital information and experiences on the physical world using combinations of cameras and displays. While we cover both VR and AR, we find that although much attention has been placed on VR, AR has more play for enterprises in the short term and eventually for consumers as well. By 2021, we will be fully into a transition period between separated and tightly blended physical and digital experiences in our work and lives. …
  • Hybrid wireless technology will eventually create connected everything. Hybrid wireless technologies are the interfaces and software that allow devices to simultaneously leverage and translate between two or more different wireless providers, protocols, and frequency bands, such as light, radio, Wi-Fi, cellular, and Sigfox. By 2021, a virtual network infrastructure will emerge to weave together wireless technologies that globally connect IoT and customer engagement platforms.
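Forrester’s net-jobs figure in the intelligent-agents prediction above nets gross losses against gross gains; a quick sketch of the arithmetic (the percentages are shares of total US jobs, as quoted in the forecast):

```python
# Forrester's 2021 automation forecast: gross changes net out to -6%.
gross_loss = 0.11  # share of US jobs lost in vulnerable categories
gross_gain = 0.05  # share of US jobs created in industries that benefit

net_change = gross_gain - gross_loss
print(f"Net change in US jobs: {net_change:.0%}")  # -6%
```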
Posted in AI

What AI Researchers Say About When Superintelligence will Arrive

ai_superintelligence

Oren Etzioni:

To get a more accurate assessment of the opinion of leading researchers in the field, I turned to the Fellows of the American Association for Artificial Intelligence, a group of researchers who are recognized as having made significant, sustained contributions to the field.

In early March 2016, AAAI sent out an anonymous survey on my behalf, posing the following question to 193 fellows:

“In his book, Nick Bostrom has defined Superintelligence as ‘an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.’ When do you think we will achieve Superintelligence?”

…In essence, according to 92.5 percent of the respondents, superintelligence is beyond the foreseeable horizon.

Posted in AI

What Happens in 60 Seconds in the Internet Economy

60-seconds_ecommerce.jpg

Source

Posted in Misc

Smart Home Today and in the Future

Futurism_HouseOf2016.jpg

MarketsAndMarkets:

The smart home market is expected to grow from $46.97 Billion in 2015 to $121.73 Billion by 2022, at a CAGR of 14.07% between 2016 and 2022.
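The quoted growth figures follow from the standard compound annual growth rate formula; here is a quick check. (Note the quoted 14.07% is stated for 2016–2022, an unstated 2016 base, so a calculation from the 2015 figure over seven years lands slightly higher.)

```python
def cagr(start_value: float, end_value: float, years: float) -> float:
    """Compound annual growth rate: (end/start) ** (1/years) - 1."""
    return (end_value / start_value) ** (1 / years) - 1

# $46.97B in 2015 to $121.73B in 2022 is seven years of compounding:
growth = cagr(46.97, 121.73, 7)
print(f"{growth:.1%}")  # about 14.6% per year
```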

Leading vendors:

  1. Honeywell International Inc. (U.S.)
  2. Legrand (France)
  3. Ingersoll-Rand plc. (Ireland)
  4. Johnson Controls Inc. (U.S.)
  5. Schneider Electric SE (France)
  6. Siemens AG (Germany)
  7. ABB Ltd. (Switzerland)
  8. Acuity Brands, Inc. (U.S.)
  9. United Technologies Corporation (U.S.)
  10. Samsung Electronics Co., Ltd. (South Korea)
  11. Nest Labs, Inc. (U.S.)
  12. Crestron Electronics, Inc. (U.S.)

CB Insights:

CBInsights_SmartHome.png

  • Appliances & Audio Devices: These include household products that function as a conventional appliance or device, yet offer advantages through connectivity, such as Sectorqube’s MAID Oven and Sonos’ smart home speakers. Sonos is the most well-funded smart home startup in terms of equity financing.
  • Device Controllers: While most startups produce individual smart home products, these companies produce the devices controlling them. Examples are Peel’s universal remote and Ivee’s personal voice assistant, advertised as “Siri for the home.” Both of these companies have received VC funding from Lightspeed Venture Partners and Foundry Group. Most of these products are able to control smart home products from other companies such as Philips and Nest.
  • Energy & Utilities: These are companies that utilize sensors, monitoring tech, and data to conserve water and energy. Ecobee and Rachio, for instance, develop products that monitor and control AC and water sprinkler systems respectively, to help make consumption more efficient. Interestingly, several startups in this category have received funding from corporations and corporate venture capital firms, such as Carrier Corporation, which backed Ecobee, and Amazon’s Alexa Fund, which backed Rachio.
  • Gardening: These companies focus on producing smart products for watering and monitoring household yards, gardens, and plants. This is one of the smaller categories in terms of number of companies. The most well-funded startup in this category is Edyn, which recently raised a $2M Series A round.
  • General Smart Home Solutions: Instead of producing a single smart gadget, these companies build or distribute multi-device systems that automate several parts of your home, such as ecoVent’s custom vent/sensor system or Vivint’s third-party device bundles. Vivint, specifically, has secured $145M in equity funding — second in smart homes only to Sonos.
  • Health & Wellness: These are products that assist home occupants in maintaining their health and lifestyle, such as MedMinder Systems’ smart medicine containers and Beddit’s under-the-bed health sensor. A notable deal in this category is Hello’s $40M Series A round last year, which made it the most well-funded smart home startup in health & wellness, with over $50M in equity funding.
  • Home Robots: This category is home to companies that produce robots specifically for maintenance and assistance in a home environment. These include robotic assistant Jibo, whose total equity funding is currently at $52M, and home cleaning robot Neato.
  • Lighting: Taking cues from products such as the Philips Hue, companies like Sequoia Capital-backed LIFX are coming up with their own app-controlled lightbulbs. Others such as Switchmate are going beyond the bulb and building app-controllable light switches.
  • Pet/Baby Monitors: These companies focus on producing video monitors and sensors to monitor pets and babies through the comfort of a smartphone. Most startups in this space, such as Y Combinator alumni Lully and Petcube, are young and still in their early stages of funding.
  • Safety & Security: These companies utilize the internet and home automation technologies to help protect you and your home with monitors, internet-enabled locks, smart smoke detectors, and more. This is one of the larger and more well-funded categories, as companies in this space include Ring, Simplisafe, August Home, and Canary, which have all received over $40M in equity funding.
  • Miscellaneous: Startups in this category have particularly unique offerings, such as Electric Objects’ dynamic art display, Kamarq’s sound table, and Notion’s universal sensor.


Posted in Misc

Cybersecurity market map

cbinsights_cybersecurity-map

CB Insights:

Network & Endpoint Security: This is the largest category in our market map and includes startups like Red Canary which specializes in protecting an enterprise’s computer networks from vulnerabilities that arise as a result of remotely connecting users’ laptops, tablets, and other electronic devices. Other companies, like Cylance, apply artificial intelligence algorithms to predictively identify and stop malware and advanced threats in defense of endpoints.

IoT/IIoT Security: Startups in this category include Argus Cyber Security, an automotive cybersecurity company enabling car manufacturers to protect connected vehicles. Another is Indegy, which provides security for industrial control systems (ICS) used across critical infrastructure, e.g., energy, water utilities, petrochemical plants, and manufacturing facilities.

Threat Intelligence: Companies include Flashpoint, which illuminates targeted malicious activities on the deep web and dark web to uncover potential threats and thwart attacks.

Mobile Security: Companies in this category include Zimperium, which delivers enterprise mobile threat protection for Android and iOS devices.

Behavioral Detection: Included are companies like Darktrace, which detects abnormal behavior in organizations in order to identify threats and manage risks from cyberattacks.

Cloud Security: Startups like Tigera offer solutions for enterprises looking for secure application and workload delivery across private, public, and hybrid clouds.

Deception Security: Companies like illusive networks can identify and proactively deceive and disrupt attackers before they can cause harm.

Continuous Network Visibility: Protectwise and others offer solutions for visualizing network activity and responding to cyberattacks in real time.

Risk Remediation: Companies, including AttackIQ, offer solutions for pinpointing vulnerabilities in technologies, people, and processes, with recommendations on how to effectively fill security gaps.

Website Security: Distil Networks and Shape Security offer website developers the ability to identify and police malicious website traffic, including malicious bots and more.

Quantum Encryption: Startups like Post-Quantum offer encrypted wireless and data communications technology that relies on the science underlying quantum mechanics.

Posted in Misc

Artificial Intelligence and Machine Learning in Focus at Intel Analytics Summit

Intel_WelcomePoster1.jpg

Intel Analytics Summit, August 15, 2016 (Source: @MissAmaraKay)

From the most recent edition of the tech bible: Moore’s Law begat faster processing and cheap storage which begat machine learning and big data which begat deep learning and today’s AI Spring. In her opening keynote at the Intel Analytics Summit, which was mostly about machine learning, Intel’s executive vice president Diane Bryant said that we are now “reaching a tipping point where data is the game changer.” (Disclosure: Intel paid my travel expenses).

With the rapid growth of machine-to-machine data exchange we should expect more, according to Bryant: Autonomous vehicles will produce 4 terabytes of data each day, a connected plane will transmit 40 terabytes of data, and the automated, connected factory will generate one petabyte (one million gigabytes) daily.

Another presenter, CB Bohn, Senior Database Engineer at Etsy, the online marketplace, speculated that the tipping point has already happened—when the value of the data exceeded the cost of its storage. Historical data has lots of value left in it, so “why throw it away?” asked Bohn. Cheap storage, added Debora Donato, Director of R&D at Mix Tech, a content discovery platform, has changed the attitudes of businesses towards data and what they can do with it.

Leading-edge enterprises today apply machine learning algorithms to mine and find insights in the ever-expanding data store. Jason Waxman, corporate vice president and general manager of the data center solutions group at Intel, described how Penn Medicine is improving patient care using Intel’s TAP open analytics platform. One pilot study focused on sepsis, or blood infection, which affects more than a million Americans annually and is the ninth leading cause of disease-related deaths and the #1 cause of deaths in intensive care units, according to the Centers for Disease Control (CDC). Penn Medicine was able to correctly identify about 85% of sepsis cases (up from 50%), and made such identifications as much as 30 hours before the onset of septic shock, as opposed to just two hours prior using traditional identification methods.

Sanghamitra Deb, Chief Data Scientist at Accenture Technology Labs, talked about using AI to read and annotate documents, a useful form of machine assistance in many situations. She highlighted textual analysis of clinical trial data that correlates various conditions, leading to new insights and improved personalized medicine.

Candid is a new app that launched recently, applying AI to solve the challenges previous anonymous social platforms could not overcome. CEO Bindu Reddy explained how machine learning helps identify and remove “bad apples”—both inappropriate content and abusers—and recommend relevant groups to Candid users.

Clear Labs differentiates itself by conducting DNA tests that are untargeted and unbiased, aiming to index the world’s food supply and set worldwide standards for “food integrity.” Maria Fernandez Guajardo, vice president of product, described how their molecular analysis of 345 hot dog samples from 75 brands and 10 retailers found that 14.4% of the products tested were “problematic in some way,” mostly because of added ingredients that did not show up on the label. Some consumers, she reported, were especially concerned about hot dogs that claimed to be vegetarian but actually contained meat.

In answer to a question on the future of machine learning from O’Reilly Media’s Ben Lorica, moderator of a panel on distributed analytics, Intel fellow Pradeep Dubey suggested focusing on deep learning as it has been demonstrably successful recently. Michael Franklin of UC Berkeley recommended focusing on machine learning approaches that are usable, understandable and robust, whether of the deep or shallow kind. If an automated system is going to make a decision, he said, “You’d better understand what are the assumptions that went into the data and the algorithms, where does the data you collected differ from those assumptions and how robust is the answer that popped out of the system.”

This was, I believe, a swipe at some of the deep learning practitioners who have admitted publicly that they don’t really understand how their systems come up with their successful results (e.g., Yoshua Bengio: “very often we are in a situation where we do not understand the results of an experiment”). But nothing succeeds like success, whether it is understood or not, and for the last few years deep learning has become a force of climate change, transforming the AI Winter into the AI Spring.

Pedro Domingos of the University of Washington, in his talk at the event, put the recent resurgence of deep learning in the historical perspective of five different approaches (and solutions) to artificial intelligence: Symbolists (inverse deduction), Connectionists (backpropagation—popular with the deep learning crowd), Evolutionaries (genetic programming), Bayesians (probabilistic inference) and Analogizers (kernel machines). Domingos’ book, The Master Algorithm, is a rallying cry for finding the best of all worlds, the one algorithm that will unite all approaches and provide the answer to life, the universe, and everything.

Before we get to the time when the master algorithm will tell us what to do, whether we understand it or not, humans are still needed to make sense of all the data they—and the machines—generate. The last panel of the Analytics Summit was, appropriately, a discussion of educating future data scientists. The panelists, executives with Coursera (Emily Glassberg Sands), Kaggle (Anthony Goldbloom), Continuum Analytics (Travis Oliphant), Metis (Rumman Chowdhury), and Galvanize (Ryan Orban), moderated by Edd Wilder-James of Silicon Valley Data Science, represented the burgeoning world of data science education.

The good news is that they are now training a vastly expanded pool of people, with very diverse backgrounds and experiences, who either want to become proficient in data analysis or want to be able to speak, as general business managers, the data scientist’s language. The challenge today is not so much the widely discussed shortage of data scientists as the failure of many companies to effectively integrate and support their work. The panelists agreed that the right internal champion, one who understands the potential of analytics and machine learning and knows how to get the required resources, is key to the success of the data science team.

Originally published on Forbes.com

Posted in AI, Machine Learning

iPhone Digital Tipping Point: Screen time took over active leisure in 2007

screentimetippingpoint

Brookings:

Screen time increasingly dominates our leisure hours, too, according to the annual American Time Use Survey, which is based on a nationally representative sample of individuals aged 15 and older.

“Screen time” and “active leisure” are necessarily hard to precisely define. (For example, reading for leisure was included in “active leisure,” but reading on an e-reader could be thought of as time spent on the screen).

Nonetheless, these data show a clear trend—adult Americans are spending more of their non-work/education time on a screen.

The first iPhone was released June 29, 2007.

Posted in Misc

Hal Varian on Intelligent Technology

By Hal Varian, Chief Economist, Google

Published on IMF.org

A computer now sits in the middle of virtually every economic transaction in the developed world. Computing technology is rapidly penetrating the developing world as well, driven by the rapid spread of mobile phones. Soon the entire planet will be connected, and most economic transactions worldwide will be computer mediated.

Data systems that were once put in place to help with accounting, inventory control, and billing now have other important uses that can improve our daily life while boosting the global economy.

Transmission routes

Computer mediation can impact economic activity through five important channels.

Data collection and analysis: Computers can record many aspects of a transaction, which can then be collected and analyzed to improve future transactions. Automobiles, mobile phones, and other complex devices collect engineering data that can be used to identify points of failure and improve future products. The result is better products and lower costs.

Personalization and customization: Computer mediation allows services that were previously one-size-fits-all to become personalized to satisfy individual needs. Today we routinely expect that online merchants we have dealt with previously possess relevant information about our purchase history, billing preferences, shipping addresses, and other details. This allows transactions to be optimized for individual needs.

Experimentation and continuous improvement: Online systems can experiment with alternative algorithms in real time, continually improving performance. Google, for example, runs over 10,000 experiments a year dealing with many different aspects of the services it provides, such as ranking and presentation of search results. The experimental infrastructure to run such experiments is also available to the company’s advertisers, who can use it to improve their own offerings.

Contractual innovation: Contracts are critical to economic transactions, but without computers it was often difficult or costly to monitor contractual performance. Verifying performance can help alleviate problems with asymmetric information, such as moral hazard and adverse selection, which can interfere with efficient transactions. There is no longer a risk of purchasing a “lemon” car if vehicular monitoring systems can record history of use and vehicle health at minimal cost.

Coordination and communication: Today even tiny companies with a handful of employees have access to communication services that only the largest multinationals could afford 20 years ago. These micro-multinationals can operate on a global scale because the cost of computation and communication has fallen dramatically. Mobile devices have enabled global coordination of economic activity that was extremely difficult just a decade ago. For example, today authors can collaborate on documents simultaneously even when they are located thousands of miles apart. Videoconferencing is now essentially free, and automated document translation is improving dramatically. As mobile technology becomes ubiquitous, organizations will become more flexible and responsive, allowing them to improve productivity.

Let us dig deeper into these five channels through which computers are changing our lives and our economy.

Data collection and analysis

We hear a lot about “big data” (see “Big Data’s Big Muscle,” in this issue of F&D), but “small data” can be just as important, if not more so. Twenty years ago only large companies could afford sophisticated inventory management systems. But now every mom-and-pop corner store can track its sales and inventory using intelligent cash registers, which are basically just personal computers with a drawer for cash. Small business owners can handle their own accounting using packaged software or online services, allowing them to better track their business performance. Indeed, these days data collection is virtually automatic. The challenge is to translate that raw data into information that can be used to improve performance.

Large businesses have access to unprecedented amounts of data, but many industries have been slow to use it, due to lack of experience in data management and analysis. Music and video entertainment have been distributed online for more than a decade, but the entertainment industry has been slow to recognize the value of the data collected by servers that manage this distribution (see “Music Going for a Song,” in this issue of F&D). The entertainment industry, driven by competition from technology companies, is now waking up to the possibility of using this data to improve their products.

The automotive industry is also evolving quickly by adding sensors and computing power to its products. Self-driving cars are rapidly becoming a reality. In fact, we would have self-driving cars now if it weren’t for the randomness introduced by human drivers and pedestrians. One solution to this problem would be restricted lanes for autonomous vehicles only. Self-driving cars can communicate among themselves and coordinate in ways that human drivers are (alas) unable to. Autonomous vehicles don’t get tired, they don’t get inebriated, and they don’t get distracted. These features of self-driving cars will save millions of lives in the coming years.

Personalization and customization

Twenty years ago it was a research challenge for computers to recognize pictures containing human beings. Now free photo storage systems can find pictures with animals, mountains, castles, flowers, and hundreds of other items in seconds. Improved facial recognition technology and automated indexing allow the photos to be found and organized easily and quickly.

Similarly, just in the past few years voice recognition systems have become significantly more accurate. Voice communication with electronic devices is possible now and will soon become the norm. Real-time verbal language translation is a reality in the lab and will be commonplace in the near future. Removing language barriers will lead to increased foreign trade, including, of course, tourism.

Continuous improvement

Observational data can uncover interesting patterns and correlations in data. But the gold standard for discovering causal relationships is experimentation, which is why online companies like Google routinely experiment and continuously improve their systems. When transactions are mediated by computers, it is easy to divide users into treatment and control groups, deploy treatment, and analyze outcomes in real time.
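The mechanics Varian describes can be sketched in a few lines: computer-mediated systems typically bucket users deterministically by hashing, so assignment stays stable across visits and independent across experiments, then compare outcome rates between groups. This is a minimal illustration; the experiment label and event format are assumptions, not any particular company’s API.

```python
import hashlib

def assign_group(user_id: str, experiment: str = "ranking-v2") -> str:
    """Deterministically assign a user to 'treatment' or 'control'.

    Hashing (experiment, user_id) keeps a user's assignment stable
    across visits and independent across different experiments.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "treatment" if int(digest, 16) % 2 == 0 else "control"

def conversion_rates(events):
    """events: iterable of (user_id, converted) pairs -> rate per group."""
    totals = {"treatment": [0, 0], "control": [0, 0]}  # [conversions, users]
    for user_id, converted in events:
        bucket = totals[assign_group(user_id)]
        bucket[0] += int(converted)
        bucket[1] += 1
    return {g: (conv / n if n else 0.0) for g, (conv, n) in totals.items()}
```

In practice the rate comparison would be paired with a significance test before the winning variant is shipped to all users.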

Companies now routinely use this kind of experimentation for marketing purposes, but these techniques can be used in many other contexts. For example, institutions such as the Massachusetts Institute of Technology’s Abdul Latif Jameel Poverty Action Lab have been able to run controlled experiments of proposed interventions in developing economies to alleviate poverty, improve health, and raise living standards. Randomized controlled experiments can be used to resolve questions about what sorts of incentives work best for increasing saving, educating children, managing small farms, and a host of other policies.

Contractual innovation

The traditional business model for advertising was “You pay me to show your ad to people, and some of them might come to your store.” Now in the online world, the model is “I’ll show your ad to people, and you only have to pay me if they come to your website.” The fact that advertising transactions are computer mediated allows merchants to pay only for the outcome they care about.

Consider the experience of taking a taxi in a strange city. Is this an honest driver who will take the best route and charge me the appropriate fee? At the same time, the driver may well have to worry whether the passenger is honest and will pay for the ride. This is a one-time interaction, with limited information on both sides and potential for abuse. But now consider technology such as that used by Lyft, Uber, and other ride services. Both parties can see rating history, both parties can access estimates of expected fares, and both parties have access to maps and route planning. The transaction has become more transparent to all parties, enabling more efficient and effective transactions. Riders can enjoy cheaper and more convenient trips, and drivers can enjoy a more flexible schedule.

Smartphones have disrupted the taxi industry by enabling these improved transactions, and every player in the industry is now offering such capabilities—or will soon. Many people see the conflict between ride services and the taxi industry as one of innovators versus regulators. However, from a broader perspective, what matters is which technology wins. The technology used by rideshare companies clearly provides a better experience for both drivers and passengers, so it will likely be widely adopted by traditional taxi services.

Simply being able to capture transaction history can improve contracts (see “Two Faces of Change,” in this issue of F&D). It is remarkable that I can walk into a bank in a new city, where I know no one and no one knows me, and arrange for a mortgage worth millions of dollars. This is enabled by credit rating services, which dramatically reduce risk on both sides of the transaction, making loans possible for people who otherwise could not get them.

Communication and coordination

Recently I had some maintenance work done on my house. The team of workers used their mobile phones to photograph items that needed replacement, communicate with their colleagues at the hardware store, find their way to the job site, serve as flashlights in dark places, order lunch for delivery, and communicate with me. All of these formerly time-consuming tasks can now be done quickly and easily. Workers spend less time waiting for instructions, information, or parts. The result is reduced transaction costs and improved efficiency.

Today only the wealthy can afford to employ executive assistants. But in the future everyone will have access to digital assistant services that can search through vast amounts of information and communicate with other assistants to coordinate meetings, maintain records, locate data, plan trips, and do the dozens of other things necessary to get things done (see “Robots, Growth, and Inequality,” in this issue of F&D). All of the big tech companies are investing heavily in this technology, and we can expect to see rapid progress thanks to competitive pressure.

Putting it all together

Today’s mobile phones are many times more powerful and much less expensive than the computers that powered Apollo 11, the 1969 manned expedition to the moon. These mobile phone components have become “commoditized.” Screens, processors, sensors, GPS chips, networking chips, and memory chips cost almost nothing these days. You can buy a reasonable smartphone now for $50, and prices continue to fall. Smartphones are becoming commonplace even in very poor regions.

The availability of those cheap components has enabled innovators to combine and recombine these components to create new devices—fitness monitors, virtual reality headsets, inexpensive vehicular monitoring systems, and so on. The Raspberry Pi is a $35 computer designed at Cambridge University that uses mobile phone parts with a circuit board the size of a pack of playing cards. It is far more powerful than the Unix workstations of just 15 years ago.

The same forces of standardization, modularization, and low prices are driving progress in software. The hardware created using mobile phone parts often uses open-source software for its operating system. At the same time, the desktop motherboards from the personal computer era have now become components in vast data centers, also running open-source software. The mobile devices can hand off relatively complex tasks such as image recognition, voice recognition, and automated translation to the data centers on an as-needed basis. The availability of cheap hardware, free software, and inexpensive access to data services has dramatically cut entry barriers for software development, leading to millions of mobile phone applications becoming available at nominal cost.

The productivity puzzle

I have painted an optimistic picture of how technology will impact the global economy. But how will this technological progress show up in conventional economic statistics? Here the picture is somewhat mixed. Take GDP, for example. This is usually defined as the market value of all final goods and services produced in a given country in a particular time period. The catch is “market value”—if a good isn’t bought and sold, it generally doesn’t show up in GDP.

This has many implications. Household production, ad-supported content, transaction costs, quality changes, free services, and open-source software are dark matter as far as GDP is concerned, since technological progress in these areas does not show up directly in GDP. Take, for example, ad-supported content, which is widely used to support provision of online media. In the U.S. Bureau of Economic Analysis National Economic Accounts, advertising is treated as a marketing expense—an intermediate product—so it isn’t counted as part of GDP. A content provider that switches from a pay-per-view business model to an ad-supported model reduces GDP.

One example of technology making a big difference to productivity is photography. Back in 2000, about 80 billion photos were taken worldwide—a good estimate since only three companies produced film then. In 2015, it appears that more than 1.5 trillion photos were taken worldwide, roughly 20 times as many. At the same time the volume exploded, the cost of photos fell from about 50 cents each for film and developing to essentially zero.

So over 15 years the price fell to zero and output went up 20 times. Surely that is a huge increase in productivity. Unfortunately, most of this productivity increase doesn’t show up in GDP, since the measured figures depend on the sales of film, cameras, and developing services, which are only a small part of photography these days.
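The back-of-the-envelope arithmetic behind those claims, using the figures from the text:

```python
# Photography output grew roughly 20x while the price per photo fell to
# about zero -- yet the market value that GDP can see shrank toward zero.

photos_2000 = 80e9    # ~80 billion photos taken worldwide in 2000
photos_2015 = 1.5e12  # ~1.5 trillion photos taken worldwide in 2015
price_2000 = 0.50     # dollars per photo, film plus developing
price_2015 = 0.0      # marginal cost of a digital photo

growth = photos_2015 / photos_2000
print(growth)  # 18.75 -- "roughly 20 times as many"

# Market value, which is what GDP-style measurement captures:
print(photos_2000 * price_2000)  # ~$40 billion of photography in 2000
print(photos_2015 * price_2015)  # $0 in 2015 -- the output "disappears"
```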

In fact, when digital cameras were incorporated into smartphones, GDP decreased: camera sales fell, while smartphone prices continued to decline. Ideally, quality adjustments would be used to measure the additional capabilities of mobile phones. But figuring out the best way to do this, and actually incorporating these changes into national income accounts, is a challenge.

Even if we could accurately measure the number of photos now taken, most are produced at home and distributed to friends and family at zero cost; they are not bought and sold and don’t show up in GDP. Nevertheless, those family photos are hugely valuable to the people who take them.

The same thing happened with global positioning systems (GPS). In the late 1990s, the trucking industry adopted expensive GPS and vehicular monitoring systems and saw significant increases in productivity as a result. In the past 10 years, consumers have adopted GPS for home use. The price of the systems has fallen to zero, since they are now bundled with smartphones, and hundreds of millions of people use such systems on a daily basis. But as with cameras, the integration of GPS with smartphones has likely reduced GDP, since sales of stand-alone GPS systems have fallen.

As in the case of cameras, this measurement problem could be solved by implementing a quality adjustment for smartphones. But it is tricky to know exactly how to do this, and statistical agencies want a system that will stand the test of time. Even after the quality adjustment problem is worked out, the fact that most photos are not exchanged for cash will remain—that isn’t a part of GDP, and technological improvements in that area are just not measured by conventional statistics.

Will the promise of technology be realized?

When the entire planet is indeed connected, everyone in the world will, in principle, have access to virtually all human knowledge. The barriers to full access are not technological but legal and economic. Assuming that these issues can be resolved, we can expect to see a dramatic increase in human prosperity.

But will these admittedly utopian hopes be realized? I believe that technology is generally a force for good—but there is a dark side to the force (see “The Dark Side of Technology,” in this issue of F&D). Improvements in coordination technology may help productive enterprises but at the same time improve the efficiency of terrorist organizations. The cost of communication may drop to zero, but people will still disagree, sometimes violently. In the long run, though, if technology enables broad improvement in human welfare, people might devote more time to enlarging the pie and less to squabbling over the size of the pieces.


Sharp Increase in Data Loss Due to Insider Threats


A new Ponemon Institute survey has found that 76% of IT practitioners in the U.S. and Europe say their organizations have suffered the loss or theft of important data over the past two years. This is a significant increase from the 67% reporting data loss or theft in the same survey two years ago.

Here are the other key findings of the survey of 3,027 employees and IT practitioners in the U.S. and Europe, conducted in April and May, 2016, and sponsored by Varonis Systems:


58% of IT practitioners see outside attackers who compromise insider credentials as the top threat, followed by negligent insiders (55%) and malware (44%).

62% of end users say they have access to company data they probably shouldn’t see, with 47% saying such access happens very frequently or frequently. The overall figure (62%) is down from 71% reporting too much access to confidential data in 2014.

Only 29% of IT practitioners report that their organizations enforce a strict least-privilege model to ensure insiders have access to company data on a need-to-know basis.


88% of end users say their jobs require them to access and use proprietary information such as customer data, contact lists, employee records, financial reports, confidential business documents, or other sensitive information assets. This is significantly higher than the 76% recorded in the 2014 survey.

43% of respondents say they retain and store documents or files they created or worked on forever (up from 40% in 2014). Another 25% of respondents say they keep documents or files for a year or longer.

Only 25% of the organizations surveyed monitor all employee and third-party email and file activity, while 38% do not monitor any file and email activity.

78% of IT practitioners are very concerned about ransomware. 15% of organizations have experienced a ransomware attack, and barely half of those detected it within the first 24 hours.

35% of the organizations surveyed have no searchable records of file system activity, leaving them unable to determine, among other things, which files have been encrypted by ransomware.
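The point about searchable file-activity records can be illustrated with a minimal sketch (the log format and threshold below are hypothetical, not from the survey): with per-file event logs, the burst of modifications by a single account that typifies ransomware is straightforward to flag and scope.

```python
# Sketch of scoping a suspected ransomware outbreak from a file-activity
# log. Assumes a simple (user, action, path) event format; real monitoring
# products record richer data, but the principle is the same.

from collections import Counter

events = [
    ("alice", "modify", "/finance/q1.xlsx"),
    ("svc_backup", "read", "/hr/roster.csv"),
] + [("bob", "modify", f"/shared/doc{i}.docx") for i in range(500)]

THRESHOLD = 100  # modifications per window that warrant investigation

# Count modifications per account and flag heavy writers.
mods = Counter(user for user, action, _ in events if action == "modify")
suspects = {user for user, count in mods.items() if count >= THRESHOLD}

# With the log, the blast radius is a query; without it, it's guesswork.
affected = [path for user, action, path in events
            if user in suspects and action == "modify"]

print(suspects)       # {'bob'}
print(len(affected))  # 500 files to check for encryption
```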

The Ponemon Institute concludes:

The continuing increase in data loss and theft is due in large part to two troubling factors:

  • Compromises in insider accounts that are exacerbated by far wider employee and third-party access to sensitive information than is necessary
  • The continued failure to monitor access and activity around email and file systems – where most confidential and sensitive data moves and lives.

Originally published on Forbes.com


Mobile Banking and Millennials

[Chart: preferred device for banking, BI Intelligence]

BI Intelligence:

The smartphone is becoming the foundation of the future of mobile banking, especially among younger customers who will wield financial influence in the coming decades. Consider the following data from the BI Intelligence survey of millennials:

  • 71% of millennials say it’s very important to have a banking app, and 60% say it’s very important to have an app to make payments.
  • 51% say that they have made a purchase through a mobile website or through an app in the last month.
  • 27% say they have used their phone to make a payment at a checkout in a store in the last month.