Smart Home Today and in the Future

[Image: Futurism_HouseOf2016.jpg]

MarketsAndMarkets:

The smart home market is expected to grow from $46.97 billion in 2015 to $121.73 billion by 2022, at a CAGR of 14.07% between 2016 and 2022.
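A quick check of the arithmetic (a sketch: the quoted 14.07% CAGR runs from 2016, whose market size the quote doesn't state, so that base value is inferred here):

```python
# Back-calculate the 2016 base implied by the quoted 14.07% CAGR,
# and the CAGR implied by the 2015 and 2022 figures.
start_2015, end_2022 = 46.97, 121.73           # $ billions
implied_2016 = end_2022 / (1 + 0.1407) ** 6    # six compounding years, 2016-2022
print(f"implied 2016 market size: ${implied_2016:.2f}B")  # ~ $55.3B

cagr_2015_2022 = (end_2022 / start_2015) ** (1 / 7) - 1   # CAGR = (end/start)**(1/n) - 1
print(f"CAGR from the 2015 base: {cagr_2015_2022:.2%}")   # ~ 14.6%
```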

Leading vendors:

  1. Honeywell International Inc. (U.S.)
  2. Legrand (France)
  3. Ingersoll-Rand plc (Ireland)
  4. Johnson Controls Inc. (U.S.)
  5. Schneider Electric SE (France)
  6. Siemens AG (Germany)
  7. ABB Ltd. (Switzerland)
  8. Acuity Brands, Inc. (U.S.)
  9. United Technologies Corporation (U.S.)
  10. Samsung Electronics Co., Ltd. (South Korea)
  11. Nest Labs, Inc. (U.S.)
  12. Crestron Electronics, Inc. (U.S.)

CB Insights:

CBInsights_SmartHome.png

  • Appliances & Audio Devices: These include household products that function as a conventional appliance or device, yet offer advantages through connectivity, such as Sectorqube’s MAID Oven and Sonos’ smart home speakers. Sonos is the most well-funded smart home startup in terms of equity financing.
  • Device Controllers: While most startups produce individual smart home products, these companies produce the devices controlling them. Examples are Peel’s universal remote and Ivee’s personal voice assistant, advertised as “Siri for the home.” Both of these companies have received VC funding from Lightspeed Venture Partners and Foundry Group. Most of these products are able to control smart home products from other companies such as Philips and Nest.
  • Energy & Utilities: These are companies that utilize sensors, monitoring tech, and data to conserve water and energy. Ecobee and Rachio, for instance, develop products that monitor and control AC and water sprinkler systems, respectively, to help make consumption more efficient. Interestingly, several startups in this category have received funding from corporations and corporate venture capital firms, such as Carrier Corporation, which backed Ecobee, and Amazon’s Alexa Fund, which backed Rachio.
  • Gardening: These companies focus on producing smart products for watering and monitoring household yards, gardens, and plants. This is one of the smaller categories in terms of number of companies. The most well-funded startup in this category is Edyn, which recently raised a $2M Series A round.
  • General Smart Home Solutions: Instead of producing a single smart gadget, these companies build or distribute multi-device systems that automate several parts of your home, such as ecoVent’s custom vent/sensor system or Vivint’s third-party device bundles. Vivint, specifically, has secured $145M in equity funding — second in smart homes only to Sonos.
  • Health & Wellness: These are products that assist home occupants in maintaining their health and lifestyle, such as MedMinder Systems’ smart medicine containers and Beddit’s under-the-bed health sensor. A notable deal in this category is Hello’s $40M Series A round last year, which made it the most well-funded smart home startup in health & wellness, with over $50M in equity funding.
  • Home Robots: This category is home to companies that produce robots specifically for maintenance and assistance in a home environment. These include robotic assistant Jibo, whose total equity funding currently stands at $52M, and home cleaning robot Neato.
  • Lighting: Taking cues from products such as the Philips Hue, companies like Sequoia Capital-backed LIFX are coming up with their own app-controlled lightbulbs. Others such as Switchmate are going beyond the bulb and building app-controllable light switches.
  • Pet/Baby Monitors: These companies focus on producing video monitors and sensors to keep an eye on pets and babies from the comfort of a smartphone. Most startups in this space, such as Y Combinator alumni Lully and Petcube, are young and still in their early stages of funding.
  • Safety & Security: These companies utilize the internet and home automation technologies to help protect you and your home with monitors, internet-enabled locks, smart smoke detectors, and more. This is one of the larger and more well-funded categories: companies in this space include Ring, SimpliSafe, August Home, and Canary, which have all received over $40M in equity funding.
  • Miscellaneous: Startups in this category have particularly unique offerings, such as Electric Objects’ dynamic art display, Kamarq’s sound table, and Notion’s universal sensor.


Cybersecurity market map

[Image: cbinsights_cybersecurity-map]

CB Insights:

Network & Endpoint Security: This is the largest category in our market map and includes startups like Red Canary, which specializes in protecting an enterprise’s computer networks from vulnerabilities that arise as a result of remotely connecting users’ laptops, tablets, and other electronic devices. Other companies, like Cylance, apply artificial intelligence algorithms to predictively identify and stop malware and advanced threats in defense of endpoints.

IoT/IIoT Security: Startups in this category include Argus Cyber Security, an automotive cybersecurity company that enables car manufacturers to protect connected vehicles, and Indegy, which provides security for Industrial Control Systems (ICS) used across critical infrastructure such as energy, water utilities, petrochemical plants, and manufacturing facilities.

Threat Intelligence: Companies include Flashpoint, which illuminates targeted malicious activities on the deep web and dark web to uncover potential threats and thwart attacks.

Mobile Security: Companies in this category include Zimperium, which delivers enterprise mobile threat protection for Android and iOS devices.

Behavioral Detection: Included are companies like Darktrace, which detects abnormal behavior in organizations in order to identify threats and manage risks from cyberattacks.
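The underlying technique here is unsupervised anomaly detection: build a model of normal activity, then flag what deviates from it. A minimal sketch of that general idea (not Darktrace’s proprietary system; the features and numbers are invented):

```python
# Minimal sketch of behavioral anomaly detection on per-user activity
# features (MB transferred, hour of login, distinct hosts contacted).
# Illustrates the general technique only; all values are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Baseline: 1,000 observations of "normal" behavior.
normal = rng.normal(loc=[50, 13, 8], scale=[10, 2, 2], size=(1000, 3))

# Fit an unsupervised model of that baseline.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# Score new observations; -1 flags behavior unlike the baseline.
new = np.array([[52.0, 14.0, 9.0],     # ordinary workday
                [900.0, 3.0, 120.0]])  # bulk transfer at 3 a.m. to many hosts
print(model.predict(new))              # expected: [ 1 -1]
```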

Cloud Security: Startups like Tigera offer solutions for enterprises looking for secure application and workload delivery across private, public, and hybrid clouds.

Deception Security: Companies like illusive networks identify attackers and proactively deceive and disrupt them before they can cause harm.

Continuous Network Visibility: ProtectWise and others offer solutions for visualizing network activity and responding to cyberattacks in real time.

Risk Remediation: Companies, including AttackIQ, offer solutions for pinpointing vulnerabilities in technologies, people, and processes, with recommendations on how to effectively fill security gaps.

Website Security: Distil Networks and Shape Security offer website developers the ability to identify and police malicious website traffic, including bots.

Quantum Encryption: Startups like Post-Quantum offer encrypted wireless and data communications technology that relies on the science underlying quantum mechanics.


Artificial Intelligence and Machine Learning in Focus at Intel Analytics Summit

[Image: Intel_WelcomePoster1.jpg]

Intel Analytics Summit, August 15, 2016 (Source: @MissAmaraKay)

From the most recent edition of the tech bible: Moore’s Law begat faster processing and cheap storage which begat machine learning and big data which begat deep learning and today’s AI Spring. In her opening keynote at the Intel Analytics Summit, which was mostly about machine learning, Intel’s executive vice president Diane Bryant said that we are now “reaching a tipping point where data is the game changer.” (Disclosure: Intel paid my travel expenses).

With the rapid growth of machine-to-machine data exchange we should expect more, according to Bryant: Autonomous vehicles will produce 4 terabytes of data each day, a connected plane will transmit 40 terabytes a day, and the automated, connected factory will generate one petabyte (one million gigabytes) daily.

Another presenter, CB Bohn, Senior Database Engineer at Etsy, the online marketplace, speculated that the tipping point has already happened—when the value of the data exceeded the cost of its storage. Historical data has lots of value left in it, so “why throw it away?” asked Bohn. Cheap storage, added Debora Donato, Director of R&D at Mix Tech, a content discovery platform, has changed the attitudes of businesses towards data and what they can do with it.

Leading-edge enterprises today apply machine learning algorithms to mine and find insights in the ever-expanding data store. Jason Waxman, corporate vice president and general manager of the data center solutions group at Intel, described how Penn Medicine is improving patient care, using Intel’s TAP open analytics platform. One pilot study focused on sepsis, or blood infection, which affects more than a million Americans annually and is the ninth leading cause of disease-related deaths and the #1 cause of deaths in intensive care units, according to the Centers for Disease Control (CDC). Penn Medicine was able to correctly identify about 85% of sepsis cases (up from 50%), and made such identifications as much as 30 hours before the onset of septic shock, as opposed to just two hours prior using traditional identification methods.

Sanghamitra Deb, Chief Data Scientist at Accenture Technology Labs, talked about using AI to read and annotate documents, a useful form of machine assistance in many situations. She highlighted textual analysis of clinical trial data that correlates various conditions, leading to new insights and improved personalized medicine.

Candid is a new app that launched recently, applying AI to solve the challenges previous anonymous social platforms could not overcome. CEO Bindu Reddy explained how machine learning helps identify and remove “bad apples”—both inappropriate content and abusers—and recommend relevant groups to Candid users.

Clear Labs differentiates itself by conducting DNA tests that are untargeted and unbiased, aiming to index the world’s food supply and set worldwide standards for “food integrity.” Maria Fernandez Guajardo, vice president of product, described how the company’s molecular analysis of 345 hot dog samples from 75 brands and 10 retailers discovered that 14.4% of the products tested were “problematic in some way,” mostly because of added ingredients that did not show up on the label. Some consumers, she reported, were especially concerned about hot dogs that claimed to be vegetarian but actually contained meat.

In answer to a question on the future of machine learning from O’Reilly Media’s Ben Lorica, moderator of a panel on distributed analytics, Intel fellow Pradeep Dubey suggested focusing on deep learning as it has been demonstrably successful recently. Michael Franklin of UC Berkeley recommended focusing on machine learning approaches that are usable, understandable and robust, whether of the deep or shallow kind. If an automated system is going to make a decision, he said, “You’d better understand what are the assumptions that went into the data and the algorithms, where does the data you collected differ from those assumptions and how robust is the answer that popped out of the system.”

This was, I believe, a swipe at some of the deep learning practitioners who have admitted publicly that they don’t really understand how their systems come up with their successful results (e.g., Yoshua Bengio: “very often we are in a situation where we do not understand the results of an experiment”). But nothing succeeds like success, whether it is understood or not, and over the last few years deep learning has become a force of climate change, transforming the AI Winter into the AI Spring.

Pedro Domingos of the University of Washington, in his talk at the event, put the recent resurgence of deep learning in the historical perspective of five different approaches (and solutions) to artificial intelligence: Symbolists (inverse deduction), Connectionists (backpropagation—popular with the deep learning crowd), Evolutionaries (genetic programming), Bayesians (probabilistic inference) and Analogizers (kernel machines). Domingos’ book, The Master Algorithm, is a rallying cry for finding the best of all worlds, the one algorithm that will unite all approaches and provide the answer to life, the universe, and everything.

Before we get to the time when the master algorithm will tell us what to do whether we understand it or not, humans are still needed to make sense of all the data they—and the machines—generate. The last panel of the Analytics Summit was, appropriately, a discussion of educating future data scientists. The panelists, executives with Coursera (Emily Glassberg Sands), Kaggle (Anthony Goldbloom), Continuum Analytics (Travis Oliphant), Metis (Rumman Chowdhury), and Galvanize (Ryan Orban), moderated by Edd Wilder-James of Silicon Valley Data Science, represented the burgeoning world of data science education.

The good news is that one got the impression that they are now training a vastly expanded pool of people, with very diverse backgrounds and experiences, who either want to become proficient in data analysis or want to be able to speak, as general business managers, the data scientist’s language. The challenge today is not so much the widely discussed shortage of data scientists but the failure of many companies to effectively integrate and support the work of data scientists. The panelists agreed that the right internal champion, one who understands the potential of analytics and machine learning and knows how to get the required resources, is key to the success of the data science team.

Originally published on Forbes.com


iPhone Digital Tipping Point: Screen time took over active leisure in 2007

[Image: screentimetippingpoint]

Brookings:

Screen time increasingly dominates our leisure hours, too, according to the annual American Time Use Survey, which is based on a nationally representative sample of individuals aged 15 and older.

“Screen time” and “active leisure” are necessarily hard to precisely define. (For example, reading for leisure was included in “active leisure,” but reading on an e-reader could be thought of as time spent on the screen).

Nonetheless, these data show a clear trend—adult Americans are spending more of their non-work/education time on a screen.

The first iPhone was released June 29, 2007.


Hal Varian on Intelligent Technology

By Hal Varian, Chief Economist, Google

Published on IMF.org

A computer now sits in the middle of virtually every economic transaction in the developed world. Computing technology is rapidly penetrating the developing world as well, driven by the rapid spread of mobile phones. Soon the entire planet will be connected, and most economic transactions worldwide will be computer mediated.

Data systems that were once put in place to help with accounting, inventory control, and billing now have other important uses that can improve our daily life while boosting the global economy.

Transmission routes

Computer mediation can impact economic activity through five important channels.

Data collection and analysis: Computers can record many aspects of a transaction, which can then be collected and analyzed to improve future transactions. Automobiles, mobile phones, and other complex devices collect engineering data that can be used to identify points of failure and improve future products. The result is better products and lower costs.

Personalization and customization: Computer mediation allows services that were previously one-size-fits-all to become personalized to satisfy individual needs. Today we routinely expect that online merchants we have dealt with previously possess relevant information about our purchase history, billing preferences, shipping addresses, and other details. This allows transactions to be optimized for individual needs.

Experimentation and continuous improvement: Online systems can experiment with alternative algorithms in real time, continually improving performance. Google, for example, runs over 10,000 experiments a year dealing with many different aspects of the services it provides, such as ranking and presentation of search results. The experimental infrastructure to run such experiments is also available to the company’s advertisers, who can use it to improve their own offerings.

Contractual innovation: Contracts are critical to economic transactions, but without computers it was often difficult or costly to monitor contractual performance. Verifying performance can help alleviate problems with asymmetric information, such as moral hazard and adverse selection, which can interfere with efficient transactions. There is no longer a risk of purchasing a “lemon” car if vehicular monitoring systems can record history of use and vehicle health at minimal cost.

Coordination and communication: Today even tiny companies with a handful of employees have access to communication services that only the largest multinationals could afford 20 years ago. These micro-multinationals can operate on a global scale because the cost of computation and communication has fallen dramatically. Mobile devices have enabled global coordination of economic activity that was extremely difficult just a decade ago. For example, today authors can collaborate on documents simultaneously even when they are located thousands of miles apart. Videoconferencing is now essentially free, and automated document translation is improving dramatically. As mobile technology becomes ubiquitous, organizations will become more flexible and responsive, allowing them to improve productivity.

Let us dig deeper into these five channels through which computers are changing our lives and our economy.

Data collection and analysis

We hear a lot about “big data” (see “Big Data’s Big Muscle,” in this issue of F&D), but “small data” can be just as important, if not more so. Twenty years ago only large companies could afford sophisticated inventory management systems. But now every mom-and-pop corner store can track its sales and inventory using intelligent cash registers, which are basically just personal computers with a drawer for cash. Small business owners can handle their own accounting using packaged software or online services, allowing them to better track their business performance. Indeed, these days data collection is virtually automatic. The challenge is to translate that raw data into information that can be used to improve performance.

Large businesses have access to unprecedented amounts of data, but many industries have been slow to use it, due to lack of experience in data management and analysis. Music and video entertainment have been distributed online for more than a decade, but the entertainment industry has been slow to recognize the value of the data collected by the servers that manage this distribution (see “Music Going for a Song,” in this issue of F&D). The entertainment industry, driven by competition from technology companies, is now waking up to the possibility of using this data to improve its products.

The automotive industry is also evolving quickly by adding sensors and computing power to its products. Self-driving cars are rapidly becoming a reality. In fact, we would have self-driving cars now if it weren’t for the randomness introduced by human drivers and pedestrians. One solution to this problem would be restricted lanes for autonomous vehicles only. Self-driving cars can communicate among themselves and coordinate in ways that human drivers are (alas) unable to. Autonomous vehicles don’t get tired, they don’t get inebriated, and they don’t get distracted. These features of self-driving cars will save millions of lives in the coming years.

Personalization and customization

Twenty years ago it was a research challenge for computers to recognize pictures containing human beings. Now free photo storage systems can find pictures with animals, mountains, castles, flowers, and hundreds of other items in seconds. Improved facial recognition technology and automated indexing allow the photos to be found and organized easily and quickly.

Similarly, just in the past few years voice recognition systems have become significantly more accurate. Voice communication with electronic devices is possible now and will soon become the norm. Real-time verbal language translation is a reality in the lab and will be commonplace in the near future. Removing language barriers will lead to increased foreign trade, including, of course, tourism.

Continuous improvement

Observational data can uncover interesting patterns and correlations. But the gold standard for discovering causal relationships is experimentation, which is why online companies like Google routinely experiment and continuously improve their systems. When transactions are mediated by computers, it is easy to divide users into treatment and control groups, deploy treatment, and analyze outcomes in real time.

Companies now routinely use this kind of experimentation for marketing purposes, but these techniques can be used in many other contexts. For example, institutions such as the Massachusetts Institute of Technology’s Abdul Latif Jameel Poverty Action Lab have been able to run controlled experiments of proposed interventions in developing economies to alleviate poverty, improve health, and raise living standards. Randomized controlled experiments can be used to resolve questions about what sorts of incentives work best for increasing saving, educating children, managing small farms, and a host of other policies.
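A minimal sketch of the treatment/control arithmetic behind such experiments (the conversion counts are invented, and a two-proportion z-test is just one common choice):

```python
# Compare conversion rates between a control and a treatment group
# with a two-proportion z-test. All counts are invented.
from math import sqrt
from statistics import NormalDist

control_n, control_conv = 10_000, 1_020   # users, conversions
treat_n, treat_conv = 10_000, 1_110

p1, p2 = control_conv / control_n, treat_conv / treat_n
p_pool = (control_conv + treat_conv) / (control_n + treat_n)
se = sqrt(p_pool * (1 - p_pool) * (1 / control_n + 1 / treat_n))
z = (p2 - p1) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))
print(f"lift = {p2 - p1:+.4f}, z = {z:.2f}, p = {p_value:.3f}")
# Here the ~0.9-point lift is significant at the 5% level (p ≈ 0.04).
```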

Contractual innovation

The traditional business model for advertising was “You pay me to show your ad to people, and some of them might come to your store.” Now in the online world, the model is “I’ll show your ad to people, and you only have to pay me if they come to your website.” The fact that advertising transactions are computer mediated allows merchants to pay only for the outcome they care about.

Consider the experience of taking a taxi in a strange city. Is this an honest driver who will take the best route and charge me the appropriate fee? At the same time, the driver may well have to worry whether the passenger is honest and will pay for the ride. This is a one-time interaction, with limited information on both sides and potential for abuse. But now consider technology such as that used by Lyft, Uber, and other ride services. Both parties can see rating history, both parties can access estimates of expected fares, and both parties have access to maps and route planning. The transaction has become more transparent to all parties, enabling more efficient and effective transactions. Riders can enjoy cheaper and more convenient trips, and drivers can enjoy a more flexible schedule.

Smartphones have disrupted the taxi industry by enabling these improved transactions, and every player in the industry is now offering such capabilities—or will soon. Many people see the conflict between ride services and the taxi industry as one of innovators versus regulators. However, from a broader perspective, what matters is which technology wins. The technology used by rideshare companies clearly provides a better experience for both drivers and passengers, so it will likely be widely adopted by traditional taxi services.

Simply being able to capture transaction history can improve contracts (see “Two Faces of Change,” in this issue of F&D). It is remarkable that I can walk into a bank in a new city, where I know no one and no one knows me, and arrange for a mortgage worth millions of dollars. This is enabled by credit rating services, which dramatically reduce risk on both sides of the transaction, making loans possible for people who otherwise could not get them.

Communication and coordination

Recently I had some maintenance work done on my house. The team of workers used their mobile phones to photograph items that needed replacement, communicate with their colleagues at the hardware store, find their way to the job site, serve as a flashlight in dark places, order lunch for delivery, and communicate with me. All of these formerly time-consuming tasks can now be done quickly and easily. Workers spend less time waiting for instructions, information, or parts. The result is reduced transaction costs and improved efficiency.

Today only the wealthy can afford to employ executive assistants. But in the future everyone will have access to digital assistant services that can search through vast amounts of information and communicate with other assistants to coordinate meetings, maintain records, locate data, plan trips, and do the dozens of other things necessary to get things done (see “Robots, Growth, and Inequality,” in this issue of F&D). All of the big tech companies are investing heavily in this technology, and we can expect to see rapid progress thanks to competitive pressure.

Putting it all together

Today’s mobile phones are many times more powerful and much less expensive than the computers that powered Apollo 11, the 1969 manned expedition to the moon. Mobile phone components have become “commoditized.” Screens, processors, sensors, GPS chips, networking chips, and memory chips cost almost nothing these days. You can buy a reasonable smartphone now for $50, and prices continue to fall. Smartphones are becoming commonplace even in very poor regions.

The availability of those cheap components has enabled innovators to combine and recombine these components to create new devices—fitness monitors, virtual reality headsets, inexpensive vehicular monitoring systems, and so on. The Raspberry Pi is a $35 computer designed at Cambridge University that uses mobile phone parts with a circuit board the size of a pack of playing cards. It is far more powerful than the Unix workstations of just 15 years ago.

The same forces of standardization, modularization, and low prices are driving progress in software. The hardware created using mobile phone parts often uses open-source software for its operating system. At the same time, the desktop motherboards from the personal computer era have now become components in vast data centers, also running open-source software. The mobile devices can hand off relatively complex tasks such as image recognition, voice recognition, and automated translation to the data centers on an as-needed basis. The availability of cheap hardware, free software, and inexpensive access to data services has dramatically cut entry barriers for software development, leading to millions of mobile phone applications becoming available at nominal cost.
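That hand-off is simple in code. A minimal sketch of a device offloading image recognition to a data center (the endpoint URL and response field are hypothetical, for illustration only):

```python
# Sketch of a thin client offloading recognition to a remote service.
# The URL and the JSON response shape are hypothetical.
import requests

def recognize(image_path: str) -> list[str]:
    """Send an image to a (hypothetical) recognition service and
    return the labels it predicts."""
    with open(image_path, "rb") as f:
        resp = requests.post(
            "https://api.example.com/v1/recognize",  # hypothetical endpoint
            files={"image": f},
            timeout=10,
        )
    resp.raise_for_status()
    return resp.json()["labels"]  # hypothetical response field

# The device stays thin; the heavy model runs server-side:
# labels = recognize("photo.jpg")
```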

The productivity puzzle

I have painted an optimistic picture of how technology will impact the global economy. But how will this technological progress show up in conventional economic statistics? Here the picture is somewhat mixed. Take GDP, for example. This is usually defined as the market value of all final goods and services produced in a given country in a particular time period. The catch is “market value”—if a good isn’t bought and sold, it generally doesn’t show up in GDP.

This has many implications. Household production, ad-supported content, transaction costs, quality changes, free services, and open-source software are dark matter as far as GDP is concerned, since technological progress in these areas does not show up directly in GDP. Take, for example, ad-supported content, which is widely used to support provision of online media. In the U.S. Bureau of Economic Analysis National Economic Accounts, advertising is treated as a marketing expense—an intermediate product—so it isn’t counted as part of GDP. A content provider that switches from a pay-per-view business model to an ad-supported model reduces GDP.

One example of technology making a big difference to productivity is photography. Back in 2000, about 80 billion photos were taken worldwide—a good estimate since only three companies produced film then. In 2015, it appears that more than 1.5 trillion photos were taken worldwide, roughly 20 times as many. At the same time the volume exploded, the cost of photos fell from about 50 cents each for film and developing to essentially zero.

So over 15 years the price fell to zero and output went up 20 times. Surely that is a huge increase in productivity. Unfortunately, most of this productivity increase doesn’t show up in GDP, since the measured figures depend on the sales of film, cameras, and developing services, which are only a small part of photography these days.

In fact, when digital cameras were incorporated into smartphones, GDP decreased: camera sales fell, while smartphone prices continued to decline. Ideally, quality adjustments would be used to measure the additional capabilities of mobile phones. But figuring out the best way to do this and actually incorporating these changes into national income accounts is a challenge.

Even if we could accurately measure the number of photos now taken, most are produced at home and distributed to friends and family at zero cost; they are not bought and sold and don’t show up in GDP. Nevertheless, those family photos are hugely valuable to the people who take them.

The same thing happened with global positioning systems (GPS). In the late 1990s, the trucking industry adopted expensive GPS and vehicular monitoring systems and saw significant increases in productivity as a result. In the past 10 years, consumers have adopted GPS for home use. The price of the systems has fallen to zero, since they are now bundled with smartphones, and hundreds of millions of people use such systems on a daily basis. But as with cameras, the integration of GPS with smartphones has likely reduced GDP, since sales of stand-alone GPS systems have fallen.

As in the case of cameras, this measurement problem could be solved by implementing a quality adjustment for smartphones. But it is tricky to know exactly how to do this, and statistical agencies want a system that will stand the test of time. Even after the quality adjustment problem is worked out, the fact that most photos are not exchanged for cash will remain—that isn’t a part of GDP, and technological improvements in that area are just not measured by conventional statistics.

Will the promise of technology be realized?

When the entire planet is indeed connected, everyone in the world will, in principle, have access to virtually all human knowledge. The barriers to full access are not technological but legal and economic. Assuming that these issues can be resolved, we can expect to see a dramatic increase in human prosperity.

But will these admittedly utopian hopes be realized? I believe that technology is generally a force for good—but there is a dark side to the force (see “The Dark Side of Technology,” in this issue of F&D). Improvements in coordination technology may help productive enterprises but at the same time improve the efficiency of terrorist organizations. The cost of communication may drop to zero, but people will still disagree, sometimes violently. In the long run, though, if technology enables broad improvement in human welfare, people might devote more time to enlarging the pie and less to squabbling over the size of the pieces.


Sharp Increase in Data Loss Due to Insider Threats

[Image: figure-1-640]

A new Ponemon Institute survey has found that 76% of IT practitioners in the U.S. and Europe say their organizations have suffered the loss or theft of important data over the past two years. This is a significant increase from the 67% reporting data loss or theft in the same survey two years ago.

Here are the other key findings of the survey of 3,027 employees and IT practitioners in the U.S. and Europe, conducted in April and May, 2016, and sponsored by Varonis Systems:

[Image: figure-3-640]

58% of IT practitioners see outside attackers who compromise insider credentials as the #1 threat, followed by 55% who point to negligent insiders and 44% who worry about malware.

62% of end users say they have access to company data they probably shouldn’t see, with 47% saying such access happens very frequently or frequently. The overall figure (62%) is down from 71% reporting too much access to confidential data in 2014.

Only 29% of IT practitioners report that their organizations enforce a strict least-privilege model to ensure insiders have access to company data on a need-to-know basis.
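For context, least privilege means default-deny: access is an explicit grant per user, resource, and action, and anything not granted is refused. A toy sketch of the idea (all names invented):

```python
# Toy default-deny (least-privilege) access check; names are invented.
# Grants are explicit (user, resource, action) triples; all else is denied.
GRANTS = {
    ("alice", "payroll.xlsx", "read"),
    ("bob", "payroll.xlsx", "read"),
    ("bob", "payroll.xlsx", "write"),
}

def allowed(user: str, resource: str, action: str) -> bool:
    return (user, resource, action) in GRANTS

print(allowed("alice", "payroll.xlsx", "read"))   # True: explicitly granted
print(allowed("alice", "payroll.xlsx", "write"))  # False: never granted
```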

[Image: fig-8-640]

88% of end users say their jobs require them to access and use proprietary information such as customer data, contact lists, employee records, financial reports, confidential business documents, or other sensitive information assets. This is significantly higher than the 76% recorded in the 2014 survey.

43% of respondents say they retain and store documents or files they created or worked on forever (up from 40% in 2014). Another 25% of respondents say they keep documents or files for a year or longer.

Only 25% of the organizations surveyed monitor all employee and third-party email and file activity, while 38% do not monitor any file and email activity.

78% of IT practitioners are very concerned about ransomware. 15% of organizations have experienced a ransomware attack, and barely half of those detected it within the first 24 hours.

35% of the organizations surveyed have no searchable records of file system activity, leaving them unable to determine, among other things, which files have been encrypted by ransomware.

The Ponemon Institute concludes:

The continuing increase in data loss and theft is due in large part to two troubling factors:

  • Compromises in insider accounts that are exacerbated by far wider employee and third-party access to sensitive information than is necessary
  • The continued failure to monitor access and activity around email and file systems – where most confidential and sensitive data moves and lives.

Originally published on Forbes.com


Mobile Banking and Millennials

[Image: bi-preferred device banking.png]

BI Intelligence:

The smartphone is becoming the foundation of the future of mobile banking, especially among younger customers who will wield financial influence in the coming decades. Consider the following data from the BI Intelligence survey of millennials:

  • 71% of millennials say it’s very important to have a banking app, and 60% say it’s very important to have an app to make payments.
  • 51% say that they have made a purchase through a mobile website or through an app in the last month.
  • 27% say they have used their phone to make a payment at a checkout in a store in the last month.

AI by the Numbers: Funding, China, agribots, analytics, robo-surgery, driverless cars

[Image: ai_investment]

[Image: ai_vc_category]

[Image: ai-stats-by-sector]

Source: Raconteur


Internet of Things by the Numbers: Research Updates

[Image: iot_webinar_slide6]

IDC presented on August 4 its annual mid-year IoT review webcast, hosted by Vernon Turner, senior vice president and research fellow for IoT, and Carrie MacGillivray, vice president of IoT & Mobile. Here are the highlights:

  • An updated Digital Universe estimate of the amount of data created in the world annually (see above) forecasts 180 zettabytes (or 180 trillion gigabytes) in 2025, up from less than 10 zettabytes in 2015 and 44 zettabytes in 2020.
  • Reaching the analytics phase of IoT: The actionable IoT data—the IoT data that is analyzed and used to change business processes—in 2025 will be as big as all the data created in 2020. To make real-time decisions, says IDC, “machine learning becomes important for the machine.”
  • Growth rates: From 2020 to 2025, the volume of traditional data will grow by 2.3x; the volume of data that can be analyzed will grow by 4.8x; and the actionable data will grow by 9.6x.
  • Connected devices: From less than 20 billion today to 30 billion in 2020 to 80 billion in 2025; by 2025, there will be 152,200 new connected devices every minute. “Everything we have of value will be connected to the internet,” says IDC.
  • Current state of IoT in the U.S.: $230 billion will be invested in IoT in 2016, growing to $370 billion in 2018; 35% of U.S. companies are in the last two stages of IDC’s IoT maturity model; leading use cases are manufacturing operations, fleet management, and smart buildings.
  • Market share of IoT networks in 2020: Wireless LAN/Wi-Fi 60%, Low-Power WAN 25%, Cellular 15%.
  • The end of the self-built IoT platform? IDC thinks that there are between 300 and 400 company-specific IoT platforms, but it sees a trend of companies abandoning these efforts in favor of focusing on what they do really well. One of the winners this year has been Microsoft and its IoT platform, with GE Predix coming to Azure.
  • Other recent trends: The battle for the Low-Power WAN market between proprietary solutions such as SIGFOX, LoRa, and Ingenu and those favored by the cellular operators, such as Narrowband IoT; applying AI to IoT security and a variety of IoT use cases; 2016 as the year of the IoT developer—a prominent example being IBM and AT&T’s July announcement coupling IBM Watson with AT&T development tools.
  • Industry news: Cisco (its IoT group moving organizationally from hardware to software) now has a well-known and established IoT platform after buying Jasper Wireless for $1.4 billion; Softbank is buying ARM for $32 billion to take advantage of the healthy growth of ARM’s market—its challenge is to avoid compromising ARM’s neutrality while fending off possible counteroffers by Google or Microsoft.

IDC’s next IoT webinar, on September 22, will provide the results of its survey of 4,100 IoT decision makers in 25 countries.

In other research news, Machina Research released on August 3 its annual guidance on the size of the IoT market opportunity. Highlights:

  • The total number of IoT connections will grow from 6 billion in 2015 to 27 billion in 2025.
  • The total IoT revenue opportunity will be $3 trillion in 2025 (up from $750 billion in 2015). Of this figure, $1.3 trillion will be accounted for by revenue directly derived from end users in the form of devices, connectivity and application revenue. The remainder comes from upstream and downstream IoT-related sources such as application development, systems integration, hosting and data monetization.
  • By 2025, IoT will generate over 2 zettabytes of data, mostly from consumer electronics devices. However, IoT will account for less than 1% of cellular data traffic, which will be generated particularly by digital billboards, in-vehicle connectivity, and CCTV.
  • China and the U.S. will be neck-and-neck for dominance of the global market by 2025. China will account for 21% of global IoT connections, ahead of the U.S. with 20%, with similar proportions for cellular connections. However, the U.S. will still be ahead in terms of IoT revenue (22% vs. 19%). The third-largest market will be Japan, with 7% of all connections.
  • Today 71% of all IoT connections use a short-range technology (e.g., WiFi, Zigbee, or in-building PLC); by 2025 that share will have grown slightly to 72%. The big short-range applications, which make this the dominant technology category, are Consumer Electronics, Building Security, and Building Automation.
  • Cellular connections will grow from 334 million at the end of 2015 to 2.2 billion by 2025, of which the majority will be LTE. 45% of those cellular connections will be in the ‘Connected Car’ sector, including both factory-fit embedded connections and aftermarket devices.
  • 11% of connections in 2025 will use Low Power Wide Area (LPWA) connections such as Sigfox, LoRa and LTE-NB1.

Finally, ABI Research also sees machine learning playing an important role in the adoption of IoT by enterprises. It estimates that revenues generated by machine learning-based data analytics tools and services will reach nearly $20 billion in 2021 as Machine-Learning-as-a-Service (MLaaS) models take off.

Originally published on Forbes.com


The first mass deployment of driverless taxis will happen by 2020

[Image: bi_driverless_taxi]

BI Intelligence:

Since the start of 2016, automakers, tech companies, and ride-hailing services have been racing to create a driverless taxi service. This service would mirror how Uber works today, but there wouldn’t be a driver.

So far, the race has been brutal, as companies jockey for position by spending billions to acquire or invest in companies that will help make a driverless taxi service a reality. Uber recently took the pole position by announcing it would begin piloting its self-driving taxi service (with a driver still behind the wheel) in Pittsburgh later this month. But other companies, including almost every automaker, are quickly catching up as we reach the midway point in the driverless taxi race…

In a new report, we analyze the fast evolving driverless taxi model and examine the moves companies have made so far in creating a service. In particular, we distill the service into three main players: the automakers who produce the cars, the components suppliers who outfit them to become driverless, and the shared mobility services that provide the platform for consumers to order them.

Here are some of the key takeaways from the report:

  • Fully autonomous taxis are already here, but it will take a few years to reach the point where companies can remove the driver. Both Delphi and nuTonomy have been piloting fully autonomous taxi services in Singapore.
  • Driverless taxi services would significantly benefit the companies creating them, but they could also have a massive ripple effect on the overall economy. They could bring lower traffic levels, less pollution, and safer roads. They could also put millions of people who rely on the taxi industry, as well as the automotive market, out of a job.
  • We expect the first mass deployment of driverless taxis to happen by 2020. Some government officials have even more aggressive plans to deploy driverless taxis before then, but we believe they will be stymied by technology barriers, including the need for mass infrastructure changes.
  • But it will take 20-plus years for driverless taxi services to make a significant dent in the way people travel. We believe the services will launch in select pockets of the world but will not reach global scale in the time frame in which most technologies proliferate.
