40 Startups in the IoT in Retail Market Landscape


CB Insights:

Beacon- and sensor-based analytics – These companies provide hardware and software to help stores track visitors. They focus on data collection for internal analytics, such as merchandise tracking, adjusting staffing levels, monitoring promotions, etc. Euclid Analytics, for example, promises visitor tracking to monitor the impact of promotions on driving store visits and better understand stores’ busy times and aisles.

Beacon-based marketing – These companies also track visitors, but focus more on proximity marketing use cases (which may also include some analytic insights). Estimote provides small, colorful beacons that send push notifications about products or promotions to users’ phones when they sense someone nearby. Kimetric’s sensors aim to visually identify shoppers’ age, gender, eye focus, and clothing style to present them with personalized marketing.

Beacon analytics and marketing – These startups track visitors and provide a mix of internal analytics and proximity marketing services.

Inventory tracking – QueueHop provides connected anti-theft tags for items that automatically unclip after payment. Cosy is building inventory-tracking robots that use indoor mapping software.

Indoor mapping – These startups take advantage of connected devices to create detailed indoor maps of stores and malls. With these maps, stores can help shoppers find the right items and direct them to promotions.

Service robots – Simbe Robotics and Fellow Robots are designing robots for in-store use, helping customers find items and ensuring the shelves stay stocked. Fellow Robots worked with Lowe’s to launch the LoweBot in eleven stores this fall.

Loss prevention – Gatekeeper uses RFID tags that work with wheel-locking features to automatically stop shopping carts that leave the store area, helping prevent theft. Carttronics tracks baskets and carts with RFID tags and provides cameras that receive signals from the tags and can film anyone leaving the store with unauthorized items.

At-home shopping buttons – Kwik and Hiku offer connected devices that can automatically place online orders for goods from users’ homes, comparable to the Amazon Dash buttons. Kwik will provide branded buttons that let customers re-order items with one touch, while the Hiku device can also scan barcodes, to identify items for re-ordering, and can recognize users’ voices.

Smart dressing rooms – Oak Labs created an interactive, touchscreen mirror that lets shoppers request new items, adjust fitting room lighting, and see outfit recommendations. Using RFID technology, the mirror can sense which products the shopper brought into the room, then present related products or save the items to shoppers’ online accounts. Oak has worked with Polo Ralph Lauren.
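Beacon systems like those above typically infer shopper proximity from Bluetooth signal strength (RSSI). Here is a minimal, hypothetical sketch using the standard log-distance path-loss model; the function names, calibration constant, and notification threshold are illustrative, not any vendor's actual API.

```python
# Hypothetical beacon-proximity sketch using the log-distance path-loss
# model. The calibration constant and threshold are illustrative.

def estimate_distance(rssi, measured_power=-59, path_loss_exponent=2.0):
    """Estimate distance in meters from a beacon's RSSI reading.

    measured_power is the expected RSSI at 1 m (a per-beacon calibration
    constant); path_loss_exponent is ~2 in free space, higher indoors.
    """
    return 10 ** ((measured_power - rssi) / (10 * path_loss_exponent))

def should_push_promotion(rssi, threshold_m=2.0):
    """Trigger a push notification only when the shopper is close."""
    return estimate_distance(rssi) <= threshold_m
```

In practice, RSSI is noisy indoors, so real deployments smooth readings over time rather than acting on a single sample.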

 

Posted in Internet of Things | Tagged | Leave a comment

Artificial Intelligence: Enterprise Adoption by Industry


AI Trends:

In terms of plans to deploy AI commercially in the near future (before the end of 2018), healthcare is clearly the vertical sector in which the largest percentage of companies worldwide will take action, followed by the consumer content and apps industry (including gaming), manufacturing, retail and automotive.

Posted in AI | Tagged | Leave a comment

IDC Survey Finds IoT is All About The Data


IDC just released the results of its 3rd annual survey of IoT decision-makers (press release here and webcast with IDC’s Vernon Turner and Carrie MacGillivray here). The IoT market is maturing, says IDC, going beyond its initial focus on connecting more and more things. Data management is fast becoming the overarching theme, with analytics and the IoT platform emerging as the main requirements of the 31.4% of organizations surveyed that have already launched IoT solutions, and the additional 43% looking to deploy in the next 12 months.

The survey was conducted in July and August 2016, with 4,500 decision-makers from more than 25 countries participating, from enterprises with more than 100 employees in a wide range of industries. Here are the highlights:

  • 55% say IoT is strategic to their business as a means to compete more effectively. 21% regard it as “transformative”—they know it holds promise and are looking for the right investment, says IDC.
  • Top reasons to invest in IoT: Increase productivity (24%), time-to-market (22.5%), process automation (21.7%). IDC noted that internal and operational benefits are still the main drivers of IoT deployment. However, in a possible sign of market maturation, “reducing costs” was not mentioned much this year (in contrast to previous years) and “time-to-market” appeared for the first time.
  • The business side of the enterprise is responsible for most IoT initiatives. 62% of respondents said business units fund IoT as opposed to 3% being funded by IT and 35% where IT provides the funds and the business units are involved in managing the project. 18% of projects are run by specially created IoT business units. “It’s not a technology solution, it’s a business solution,” says IDC. Of the projects that are funded by the business, IT is involved in the project (36%) or IT is aware but not involved (25%). Only 1% of respondents reported “shadow IT for IoT” where IT is not aware of the IoT project.
  • Top IoT challenges include security (26%), upfront cost (22%), privacy (21%), on-going cost (19%), IT infrastructure (16%), and IoT skills (14%). The issue of IoT-related skills came up for the first time this year, in apparent response to the challenge of handling the influx of IoT data.
  • The security challenge is addressed in an ad-hoc manner, with enterprises opting for a variety of solutions: security processes are integrated into the IoT workflow (23%), a tiered approach where devices are secured by firewalls between tiers (21%), and as an extension of existing IT security policies (20%).
  • Where is the IoT data being processed? 54% collect data at the edge and transmit it to the enterprise; 29% collect and process data at the point of creation; 14% collect and process some data at the point of creation and transmit the rest to the enterprise.
  • Industries that lead in the adoption of IoT include financial services (including insurance), retail, and manufacturing. Lagging sectors include government, healthcare, and (surprising to me) utilities.

Survey results show that IBM and Microsoft have taken a leading role in almost all IoT segments, especially the ones ascending in importance—analytics, software, systems integration and providing an IoT platform.  This is due, IDC says, to their success in blending a cloud strategy with analytics and software capabilities. To IDC’s question about most important digital transformation projects, survey respondents cited cloud transformation/transition (66%), IoT (32%), and big data/cognitive solutions (27%). IDC noted that these transformational initiatives are interlinked: The cloud gives IoT a platform on which to scale and the IoT lays the foundation for investments in big data and cognitive solutions, to make sense of all data generated by the IoT and residing in the cloud.

Originally published on Forbes.com

 

Posted in Internet of Things, Misc | Tagged | Leave a comment

Who is Buying All the AI Startups? Google, Intel, Apple, Twitter and Salesforce


CB Insights:

Nearly 140 private companies working to advance artificial intelligence technologies have been acquired since 2011, with over 40 acquisitions taking place in 2016 alone (as of 10/7/2016). Corporate giants like Google, IBM, Yahoo, Intel, Apple, and Salesforce are competing in the race to acquire private AI companies, with Samsung emerging as a new entrant this month with its acquisition of startup Viv Labs, which is developing a Siri-like AI assistant.

Posted in AI | Tagged | Leave a comment

Only Humans Need Apply: Winners and Losers in the Age of Smart Machines

Under pressure to remove alleged human bias from its “Trending Topics” section, in August Facebook fired the editors who were selecting and writing headlines for the stories, explaining that this “will make the product more automated.” The results of trusting algorithms more than humans have continued to make headlines ever since, with the Trending “product” promoting a fake news story about Fox News’ Megyn Kelly, a conspiracy article claiming the 9/11 twin towers collapsed because of “controlled demolition,” and a satirical story in which Apple’s Tim Cook announces that Siri will physically come out of the phone and do all the household chores (from the Indian satire site Faking News; it was Trending’s top story on the day of the iPhone 7 launch event), to mention just a few of the more embarrassing machine failures.

Silicon Valley has never displayed much love for fallible humans, but has shown a lot of confidence in the continuous improvement and now, self-improvement, of machines. Do humans still have an important role to play in our automated lives which are increasingly controlled by sophisticated algorithms and seemingly smarter machines?


In Only Humans Need Apply: Winners and Losers in the Age of Smart Machines, knowledge work and analytics expert Tom Davenport and Julia Kirby, a contributing editor for the Harvard Business Review, offer optimistic, upbeat and practical answers to this much-debated question. “The upside potential of the advancing technology is the promise of augmentation—in which humans and computers combine their strengths to achieve more favorable outcomes than either could do alone,” they write.

There is not much difference, contend Davenport and Kirby, between technologies of automation and technologies of augmentation. The difference lies in the goals and attitudes behind the application of these technologies. Automation is unidirectional and focuses “primarily or exclusively on cost reduction” via the elimination of human labor. In contrast, “augmentation approaches tend to be more likely to achieve value and innovation” and they are bidirectional, making “humans more capable of what they are good at” and “machines even better at what they do.”

It is a shortsighted (and short-term) strategy for companies to favor automation over augmentation: “If the goal is to provide truly exceptional or differentiated products and services at scale, only an augmentation arrangement can accomplish that,” write Davenport and Kirby. They advocate a “workplace that combines sophisticated machines and humans in partnerships of mutual augmentation” and mutual benefit.

Competitive considerations apply not only to companies in the race against the machine, but also to their employees. The book addresses primarily the plight of knowledge workers who thought they would escape the fate of factory workers but are now increasingly automated out of a job. “The advice on avoiding that fate,” say Davenport and Kirby, “has been noticeably thin. For the most part, the experts boil it down to a single, daunting task: Keep getting smarter. We are going to argue that there are other strategies, all of them featuring augmentation of human work by machines.”

The authors describe in detail—with vivid and engaging examples—five “options for augmentation:” Stepping Up or moving a level above the machines and making high level decisions about augmentation; Stepping Aside or choosing to pursue a job that computers are not good at, such as selling or motivating; Stepping In or monitoring and improving the computer’s automated decisions; Stepping Narrowly or finding a specialty area in a specific profession that wouldn’t be economical to automate; and Stepping Forward or becoming involved in creating the very technology that supports intelligent decisions.

These strategies will work for knowledge workers (or all workers) who are “willing to work to add value to machines, and who are willing to have machines add value to them.” They will also work for organizations that understand that “no matter how smart these machines get, there is still some potential value from human augmentation.”

What a refreshing perspective in these times of machine-worship, where Silicon Valley’s automation addiction has spread far and wide. Mark Fields, the chief executive of Ford Motor Company, recently promised completely self-driving cars by about 2025, displaying a very Silicon Valley (and silly) attitude by saying “a driver is not going to be required.”

Fields and the many other executives of established companies racing against the disruptive Silicon Valley machine should read Only Humans Need Apply where Davenport and Kirby warn that companies investing in self-driving cars “could find that they have put a lot of energy into developing vehicles that drive themselves but are stuck with regulations that require an alert driver with hands on the steering wheel and feet on the pedals. If that happens, perhaps a company whose strategy all along has given careful thought to how to redeploy the human attention that is freed up by the technology—will win big.”

Just because you can automate, doesn’t mean you should. This is the important lesson of this contrarian, timely, and well-argued book. Augmentation, say Davenport and Kirby, is something “societies should encourage in ways big and small.” Hear! Hear!

Originally published on Forbes.com

Posted in Misc | Leave a comment

Neural Networks Typology

Source: The Asimov Institute

With new neural network architectures popping up every now and then, it’s hard to keep track of them all. Knowing all the abbreviations being thrown around (DCIGN, BiLSTM, DCGAN, anyone?) can be a bit overwhelming at first.

So I decided to compose a cheat sheet containing many of those architectures. Most of these are neural networks, but some are completely different beasts. Though all of these architectures are presented as novel and unique, when I drew the node structures… their underlying relations started to make more sense…

Composing a complete list is practically impossible, as new architectures are invented all the time. Even once published, they can be quite challenging to find, and sometimes you simply overlook some. So while this list may provide you with some insights into the world of AI, please by no means take it as comprehensive, especially if you read this post long after it was written.

For each of the architectures depicted in the picture, I wrote a very, very brief description. You may find some of these useful if you’re quite familiar with some architectures but not with a particular one.
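Nearly every acronym on such a chart elaborates on the same primitive: a feedforward network trained by backpropagation. As a rough illustration only, here is a toy, pure-Python 2-2-1 network learning XOR; real systems use frameworks and much larger models, and none of the charted architectures is reproduced in full.

```python
import math
import random

random.seed(0)  # deterministic toy example

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Weights: input->hidden (2x2 plus 2 biases), hidden->output (2 plus 1 bias)
w_ih = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b_h = [random.uniform(-1, 1) for _ in range(2)]
w_ho = [random.uniform(-1, 1) for _ in range(2)]
b_o = random.uniform(-1, 1)

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]  # XOR

def forward(x):
    h = [sigmoid(sum(w_ih[j][i] * x[i] for i in range(2)) + b_h[j])
         for j in range(2)]
    o = sigmoid(sum(w_ho[j] * h[j] for j in range(2)) + b_o)
    return h, o

def loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

def train(epochs=5000, lr=0.5):
    """Plain stochastic gradient descent on squared error."""
    global b_o
    for _ in range(epochs):
        for x, t in data:
            h, o = forward(x)
            d_o = (o - t) * o * (1 - o)      # output delta
            for j in range(2):
                d_h = d_o * w_ho[j] * h[j] * (1 - h[j])  # hidden delta
                w_ho[j] -= lr * d_o * h[j]
                for i in range(2):
                    w_ih[j][i] -= lr * d_h * x[i]
                b_h[j] -= lr * d_h
            b_o -= lr * d_o
```

After calling `train()`, the network’s squared error on the four XOR cases drops well below its random-initialization starting point.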

 

Posted in AI | Tagged | Leave a comment

60 Startups Active in the Deep Learning Market Landscape


CB Insights:

As recently as 2013, the [deep learning] space saw fewer than 10 deals. But since January 2015, deep learning startups have raised over 70 equity deals in aggregate, and over $600M in equity funding since 2012…

Computer Vision: Startups here are using deep learning for image recognition, analytics, and classification. Aerial image analytics startup Terraloupe was seed-funded this year by Germany-based Bayern Kapital. New York-based Clarifai — backed by investors including Google Ventures, Lux Capital, and Nvidia — entered the R/GA accelerator this year, after raising a $10M Series A in Q2’15. Captricity, which extracts information from hand-written data, has raised $49M in equity funding so far from investors including Social Capital, Accomplice, White Mountains Insurance Group, and New York Life Insurance Company.

Speech analytics/conversational interface: Google made news last year when it entered the Chinese market with its investment — a $60M Series C round — in Shanghai-based Mobvoi. The smart watch maker’s core tech includes speech recognition, text-to-speech conversion, and semantic analysis. Another startup that has recently been in the news is Viv Labs, which demonstrated its Siri-like AI assistant earlier this year. The company has raised $30M in equity funding from investors including Horizons Ventures and Pritzker Group Venture Capital.

Core AI: Startups here are developing algorithms that can be applied across multiple industries like finance, healthcare, and e-commerce. To name a few, Japan-based LeapMind raised $3.4M from Archetype Ventures, ITOCHU Technology Ventures, and Visionnaire Ventures in Q3’16; Teradeep, a neural network startup that “accelerates” deep learning via field programmable gate arrays (FPGAs) — integrated circuits that can be programmed for customer-specific applications — received funding in Q1’16 from the corporate venture arm of Xilinx.

Auto & Robotics: 5 out of 6 startups in this category raised their first equity funding rounds this year. Machine vision company Netradyne, which has developed a driver-safety platform called Driveri, received $16M in Series A from Reliance Industries in India. Two other auto tech startups, Andreessen Horowitz-backed Comma.ai and Oriza Ventures-backed Drive.ai, raised $3M and $12M in early-stage funding, respectively. China-based Turing Robot, which was initially focused on voice technologies, is now expanding into the consumer robotics market. It raised $7.6M in a corporate minority round from Alpha Animation & Culture. Another China-based startup, Rokid, which also has an office in the US, is developing a social robot. You can read more about robotics startups in China in our post here.

BI, Sales & CRM: Applications here include voice analytics to extract information from calls, automated customer response solutions, business data analytics, and sales targeting. To name a few, Palo Alto-based Mariana raised $2M in seed money from investors including Blumberg Capital; London-based True AI, previously seed funded by Entrepreneur First, entered the Microsoft Ventures Accelerator in Q3’16; another UK-based startup, Ripjar, raised funds from Winton Ventures in Q2’16.

Healthcare: As we discussed in our webinar recently, healthcare is the hottest area of investment compared to other industry-specific applications of artificial intelligence. Deep learning startups here include drug discovery platforms Insilico and Atomwise, IBM-backed precision medicine startup Pathway Genomics, and diagnostics companies Butterfly Network and Enlitic.

Security: Israel-based Deep Instinct, which claims to be the first startup to bring deep learning to cybersecurity, is backed by investors including Blumberg Capital, UST Global, and U.S. News & World Report. Other startups here include Seattle-based SignalSense, which applies deep learning to IT security, and smart camera startup Umbo CV.

E-Commerce: Deep learning in e-commerce was spotlighted recently by Etsy’s acquisition of Blackbird Technologies. Three startups in the private sector using AI in e-commerce raised funding rounds this year: Reflektion raised $18M in Q1’16 from investors including Intel Capital, Battery Ventures, and Marc Benioff; ViSenze raised $10.5M in Series B from investors including Rakuten Ventures, Enspire Capital, and Phillip Private Equity; India-based Staqu raised angel funds in Q2’16.

See also

Faster Artificial Intelligence: Baidu Benchmarks Hardware For Deep Learning

Deep Learning Is Still A No-Show In Gartner 2016 Hype Cycle For Emerging Technologies

AI And Machine Learning Take Center Stage At Intel Analytics Summit

Posted in Misc | Leave a comment

The Dell-EMC Merger and the Googlization of IT

Yes, Joe Tucci is a great salesman and Michael Dell is the ultimate entrepreneur, but it is Google that is really behind the $67 billion merger. Tucci: “The waves of change we now see in our industry are unprecedented and, to navigate this change, we must create a new company for a new era.” In other words, we must survive in the digital natives era, ushered in by Google, and magnified by the likes of Amazon and Facebook.

To understand what Tucci calls “the new world order,” let’s take a quick tour of the old one, to better understand how the digital natives forced Dell and EMC into the largest tech acquisition in history. Dell and EMC were the two most successful U.S. stocks in the 1990s, appreciating more than any other stock over that booming decade.  They rode on a new tidal wave of digital data, unleashed by the advent of the PC and the networking of PCs in 1980s.

As a result, between 1990 and 2000, the structure of the IT industry changed for the first—and so far, the last—time, expanding to include large vendors focused on one layer of the IT stack: Intel in semiconductors, EMC in storage, Cisco in networking, Microsoft in operating systems, Oracle in databases. IBM—the dominant player in the previous era of vertically-integrated, “one-stop-shopping” IT vendors—saved itself from the fate of DEC, Wang, and Prime (all, like EMC, based in Massachusetts) by focusing on services.

The restructured IT industry, and specifically, the focused, “best-in-class” vendors, answered a pressing business need. Digitization and the rapid growth of data unleashed new business requirements and opportunities that called for new ways to sell and buy IT.

There were new business needs for storing much larger volumes of data, mining the data for new market insights, and providing better service to customers by making increasingly “mission-critical” computer systems available 24/7. New IT buyers, such as executives in leading-edge IT departments, business executives impatient with their IT departments, or IT executives asked to take over the out-of-control IT systems acquired by the business units, eschewed the vertically-integrated IT vendors in favor of the new focused competitors, enthusiastically embracing the new “mix and match” IT mentality.

The 2000s were a decade of more-of-the-same with the industry and IT buyers recuperating for a long time from Y2K and the implosion of the dot-com bubble, and going through two recessions. IBM (minus its PC business) and HP (plus Compaq, a successful, focused, PC vendor, like Dell) were the only large “one-stop-shopping” vendors to survive (Sun Microsystems did not). Dell tried, not too successfully, to expand its business beyond PCs to become a one-stop-shopping enterprise IT vendor.

But IT was not the same. Yet another wave of digital data was unleashed by the advent of the World Wide Web (a.k.a. “the Internet”). Unlike the previous wave, this one gave rise to “digital natives,” a new breed of companies with new business models based on Web domination (i.e., mastering online advertising) and data mining (i.e., indexing, recommendations, linking, etc.). It also gave rise to a new breed of IT buyers.

In the early 2000s, Google’s business presented unprecedented IT requirements for performance, availability and scalability (IT jargon for “we have lots of data to store, process, and shuttle around”). They could buy computer storage, servers and networks from existing IT vendors but the cost was prohibitive. More important, Google’s engineers, as someone who was there at the time told me, always thought they could do a better job than anyone else. So they went ahead and built their own IT infrastructure, stringing together “commodity” (off-the-shelf) hardware components, and developing innovative software to manage it.

In a recently published paper, Google’s engineers described their approach to “overcoming the cost, operational complexity, and limited scale endemic to datacenter networks a decade ago.” This was the latest in a long string of influential papers that Google has published (starting, I think, in 2006), sharing with the world its experience and expertise in building an IT infrastructure for the 21st century. Moreover, it also released some of the code it has developed as open software, available for free for anyone dealing with similar IT requirements.

Other digital natives were the first to benefit from Google’s academic-like “publish or perish” mentality. They developed Google’s ideas further or came up with their own solutions, taking a page from Google’s business model—it’s a business where IT matters a lot, IT is a core competency. A prominent example is Hadoop, based on ideas originally published by Google as a solution to a storage bottleneck standing in the way of analyzing or manipulating large amounts of data, implemented and developed further by Yahoo engineers and released by them as open source software, eventually to become a foundational technology for big data analytics.

Facebook, absorbing some top Google engineering talent, went on further to invent an IT infrastructure handling not only petabytes of data every day but also providing an online service to more than 1 billion people worldwide. And it went further than Google in influencing how IT is done everywhere, by establishing the Open Compute Project, with companies such as Goldman Sachs, Bank of America, and Fidelity as members.

Amazon not only built an IT infrastructure for the 21st century, but went even further than Google and Facebook by making it available to the world for a fee, establishing the concept of IT-on-demand or cloud computing on a solid footing. In the process, it has convinced many digital natives, such as Netflix, to run their entire demanding IT infrastructure on Amazon Web Services. Now, Amazon is ready to take over the enterprise IT market, making clear at AWS re:Invent 2015 that it is going after the legacy IT business.

This is the supply side of the equation that forced Dell and EMC into this merger. But the demand side is no less important. Just like in the early 1990s, when cheaper hardware and software allowed business executives to do their own computing, by-passing the central IT department, we see today the rise of business executives building their fame and fortunes by buying computer services directly from cloud computing providers.

But the Googlization or Amazonization of IT is not limited to business executives.  It is impossible to overstate the impact Google and other digital natives had on IT executives. The new breed of IT executives is ready to “mix and match,” to buy “best-of-breed,” to experiment with off-the-shelf hardware and open source software.

All of this explains why Dell and EMC are merging but also hints at the enormous challenges they will have in convincing IT buyers to buy into their “back-to-the-future” strategy, that a business model that stopped working in the 1990s is the answer to winning in “a new world order.” All the Google-derived talk about “software-defined-everything” and “converged infrastructure” may not be enough for IT buyers looking to take charge of what is increasingly becoming, if not a core competency, a competitive differentiator and a new source of revenues for many companies. All businesses are now digital businesses and their IT requirements are starting to resemble those Google encountered a decade ago.

IBM, HP, Oracle, and Cisco also need to articulate why “one-stop-shopping” is the way forward for IT buyers. Their task is not made easier by the industry’s influential opinion makers, such as Gartner. In its recent Symposium, Gartner told the more than 8,000 CIOs and senior IT executives in attendance to choose as partners “digital accelerators” such as Amazon and Google, not “digital inhibitors” such as Dell and EMC.

Gartner, however, put VMware, the crown jewel in the EMC “federation,” somewhat ahead of the legacy vendors. Will the company that made cloud computing a reality (there will be no cloud computing without server virtualization) save the biggest technology-industry takeover ever?

Originally published on Forbes.com

Posted in Misc | Tagged , , , , , | Leave a comment

Internet Of Things By The Numbers: Results from New Surveys


Things are looking up for the Internet of Things. 80% of organizations have a more positive view of IoT today compared to a year ago, according to a survey of 512 IT and business executives by CompTIA. “This reflects greater levels of attention from the C-suite and a better understanding of how the many different elements of the IoT ecosystem are starting to come together,” says CompTIA. Here are the highlights from this and other recent surveys:

How big is the IoT and how fast is it growing? The number of connected things, from computers to household monitors to cars, is projected to grow at a compound annual rate of 23.1% between 2014 and 2020, reaching 50.1 billion things in 2020.
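As a quick sanity check of those figures (illustrative arithmetic only, not the survey's actual model), compounding at 23.1% per year and ending at 50.1 billion things in 2020 implies roughly 14.4 billion connected things in 2014:

```python
def project(start_value, cagr, years):
    """Compound a starting value forward at a fixed annual growth rate."""
    return start_value * (1 + cagr) ** years

# Working backward from the article's figures: 50.1 billion things in
# 2020 at a 23.1% CAGR over six years implies the 2014 baseline.
implied_2014 = 50.1e9 / (1 + 0.231) ** 6  # roughly 14.4 billion
```

That baseline is in the same ballpark as the mid-decade industry estimates of connected devices, which is all a CAGR check can tell you.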

What is the IoT? In the minds of the business and IT executives surveyed, the IoT is associated with “ever-greater levels of connectivity; more intelligence built into devices, objects, and systems; and a strong data and applied learning orientation.” These views “sync-up well with the macro trends of more powerful and pervasive computing and storage, the further blurring of the physical and the virtual and the harnessing of big data for real-world functional activities.”


What is the current level of IoT adoption? 60% of organizations have started an IoT initiative, 45% of which were funded by a new budget allocation. An additional 23% of companies plan to start an IoT initiative within a year. About 90% of the 500 executives Bain surveyed remain in the planning and proof-of-concept stage, and only about 20% expect to implement solutions at scale by 2020.


What is the perceived impact of the IoT compared to other new technologies? The IoT leads other much-discussed technologies, including robotics and artificial intelligence, as the technology that is having the most impact on the business.


What are the expected benefits from IoT and how do they relate to existing activities and operations? The top 5 expected benefits are:

  1. Cost savings from operational efficiencies
  2. New/better streams of data to improve decision-making
  3. Staff productivity gains
  4. Better visibility/monitoring of assets throughout the organization
  5. New/better customer experiences

While the expected benefits are roughly split between existing operations and new products or revenue streams, a majority of businesses (61%) regard their IoT initiative as an “enabling and extending” technology as opposed to a separate and distinct activity (37%).

Bain also found high expectations of the potential benefits of the IoT, including improving the quality of products or services, improving the productivity of the workforce, and increasing the reliability of operations.



Are they too optimistic or too pessimistic? 57% of respondents believe their organization is very well equipped or mostly well equipped to manage the security component of IoT. “Given the number of security unknowns with IoT,” says CompTIA, “especially in areas that may be beyond the control of the operator, this confidence may be misplaced.” Indeed, Bain found security at the top of the list of concerns about IoT, with 45% of respondents citing it as one of the top three barriers to IoT implementation. Similarly, when Forrester surveyed 232 companies developing IoT products, it found that 38% anticipated security to be the biggest challenge to IoT implementation, more than any other issue, and 64% cited data and device security as the most important capability for their IoT product. Finally, a Tripwire survey of 220 security professionals found that only 30% felt their organizations were prepared for security threats related to IoT devices.


Originally published on Forbes.com

Posted in Internet of Things | Tagged , , , | Leave a comment

Gartner Hype Cycle for Emerging Technologies 2016: Deep Learning Still Missing


For the 22nd year, Gartner has released its much-discussed hype cycle report on emerging technologies, “providing a cross-industry perspective on the technologies and trends that business strategists, chief innovation officers, R&D leaders, entrepreneurs, global market developers and emerging-technology teams should consider in developing emerging-technology portfolios.”

Reacting to last year’s hype cycle report (see below), I made the following comment:

Machine learning is making its first appearance on the chart this year, but already past the peak of inflated expectations. A glaring omission here is “deep learning,” the new label for and the new generation of machine learning, and one of the most hyped emerging technologies of the past couple of years.

gartner-hype-2015

Source: Gartner, August 2015

This year, Gartner has moved machine learning back a few notches, putting it at the peak of inflated expectations, still with 2 to 5 years until mainstream adoption. Is machine learning an emerging technology, and is there a better term to describe what most of the hype is about nowadays in tech circles?

Machine learning is best defined as the transition from feeding the computer programs containing specific instructions in the form of step-by-step rules or algorithms to feeding it algorithms that can learn from data and make inferences “on their own.” The computer is “trained” on data that is labeled or classified based on previous outcomes, and its software algorithms “learn” to predict the classification of new data that is not labeled or classified. For example, after a period of training in which the computer is presented with spam and non-spam email messages, a good machine learning program will successfully identify (i.e., predict) which email messages are spam and which are not, without human intervention. In addition to spam filtering, machine learning has been applied successfully to problems such as handwriting recognition, machine translation, fraud detection, and product recommendations.
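To make the spam-filtering example concrete, here is a minimal sketch (not from any product mentioned in this article) of the train-then-predict pattern: a naive Bayes classifier in plain Python, where the messages and labels are invented toy data and the program learns word statistics from labeled examples instead of following hand-written rules.

```python
from collections import Counter
import math

# Toy labeled training data -- hypothetical messages, for illustration only.
train = [
    ("win free money now", "spam"),
    ("free prize claim now", "spam"),
    ("meeting agenda for monday", "ham"),
    ("lunch with the project team", "ham"),
]

# "Training": count word frequencies per class. No spam-detection rules
# are written by hand; the statistics come entirely from the examples.
counts = {"spam": Counter(), "ham": Counter()}
totals = {"spam": 0, "ham": 0}
for text, label in train:
    for word in text.split():
        counts[label][word] += 1
        totals[label] += 1

vocab = {w for c in counts.values() for w in c}

def predict(text):
    # Score each class by summed log-probabilities of the message's words,
    # with Laplace (add-one) smoothing for unseen words.
    scores = {}
    for label in counts:
        score = 0.0
        for word in text.split():
            p = (counts[label][word] + 1) / (totals[label] + len(vocab))
            score += math.log(p)
        scores[label] = score
    return max(scores, key=scores.get)

print(predict("claim your free money"))  # classified as spam
print(predict("monday team meeting"))    # classified as ham
```

The same pattern scales up: real spam filters train on millions of messages, but the principle of predicting labels for new, unlabeled data from previously labeled outcomes is the one described above.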

Indeed, machine learning has been around for quite a while. In 1959, per Wikipedia, Arthur Samuel defined machine learning as a “Field of study that gives computers the ability to learn without being explicitly programmed.” Yes, 1959—not exactly what one would call “an emerging technology.”

“Artificial neural networks” and “deep learning” (and variations thereof) are the most hyped buzzwords today, more than any other tech buzzword, I would argue. They have also been around for a long time, but advances made by the “Canadian Mafia” (and others) over the last decade in training computers with big data using specialized processors have generated the latest craze. The tipping point, to borrow another buzzword, came in 2012, when two much-publicized breakthroughs occurred. Google’s “Brain Team” trained a cluster of 16,000 computers to recognize an image of a cat after processing 10 million digital images taken from YouTube videos. In the same year, a deep neural network achieved a 16% error rate at the annual ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a competition in which research teams submit programs that classify and detect objects and scenes, a significant improvement over previous results. The rapid progress this method has exhibited over the last four years has prompted countless PhD candidates to switch their research focus to the newly promising field, encouraged the application of deep neural nets to other areas requiring the processing and analysis of unstructured data, vastly increased funding from VCs, and set off fierce competition for artificial intelligence startups.

Deep learning is not the only approach to machine learning being pursued today; Pedro Domingos suggests five fundamental approaches, and the variations and combinations are numerous. Deep learning gets most of the attention, particularly after the triumph earlier this year of DeepMind’s neural net over one of the best Go players in the world, but other approaches (or a combination of approaches) may prove even more promising or more successful in solving other types of challenges. So maybe using “deep learning” in a hype cycle chart, regardless of the hype, is indeed not a good idea. Instead, maybe Gartner should have used “advanced machine learning” to describe the emerging technology or technologies that are at the core of the hype, excitement, and interest in machine learning (or “artificial intelligence,” which is also an old term, and a very ambiguous one).

Here’s a definition of Advanced Machine Learning:

In advanced machine learning, deep neural nets (DNNs) move beyond classic computing and information management to create systems that can autonomously learn to perceive the world, on their own. The explosion of data sources and complexity of information makes manual classification and analysis infeasible and uneconomic. DNNs automate these tasks and make it possible to address key challenges related to the information of everything trend.

DNNs (an advanced form of machine learning particularly applicable to large, complex datasets) is what makes smart machines appear “intelligent.” DNNs enable hardware- or software-based machines to learn for themselves all the features in their environment, from the finest details to broad sweeping abstract classes of content. This area is evolving quickly, and organizations must assess how they can apply these technologies to gain competitive advantage.
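The feature learning Gartner describes can be illustrated at toy scale (this sketch is mine, not Gartner’s). XOR is not linearly separable, so a network with no hidden layer cannot represent it; adding a hidden layer lets the network discover the intermediate features itself via backpropagation. All weight values here are random initial guesses, adjusted purely from the data.

```python
import math
import random

random.seed(1)

# XOR truth table: the target cannot be computed by any single linear unit,
# so the hidden layer must learn an intermediate representation on its own.
data = [((0.0, 0.0), 0.0), ((0.0, 1.0), 1.0),
        ((1.0, 0.0), 1.0), ((1.0, 1.0), 0.0)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

H = 4  # hidden units
# Randomly initialized weights: input->hidden and hidden->output (each with a bias).
w1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(H)]
w2 = [random.uniform(-1, 1) for _ in range(H + 1)]

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w1]
    o = sigmoid(sum(w2[j] * h[j] for j in range(H)) + w2[H])
    return h, o

def total_error():
    return sum((forward(x)[1] - y) ** 2 for x, y in data)

before = total_error()
lr = 0.5
for _ in range(20000):                    # plain stochastic gradient descent
    for x, y in data:
        h, o = forward(x)
        d_o = (o - y) * o * (1 - o)       # output-layer error signal
        for j in range(H):                # backpropagate to the hidden layer
            d_h = d_o * w2[j] * h[j] * (1 - h[j])
            w1[j][0] -= lr * d_h * x[0]
            w1[j][1] -= lr * d_h * x[1]
            w1[j][2] -= lr * d_h
        for j in range(H):
            w2[j] -= lr * d_o * h[j]
        w2[H] -= lr * d_o

print("squared error before/after training:", round(before, 3), round(total_error(), 3))
for x, _ in data:
    print(x, round(forward(x)[1], 2))
```

A “deep” network simply stacks many such hidden layers, and at scale (millions of images, thousands of processors, as in the 2012 breakthroughs) the learned intermediate features range from edges and textures up to abstract classes of content; no one programs those features by hand.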

Excellent definition (and advice), distinguishing what is “emerging” from “artificial intelligence” (coined in 1955 to describe human-like intelligence displayed by a computer program) and “machine intelligence.” Oh, the source of this excellent definition of “advanced machine learning” as an emerging technology? Check out Gartner Identifies the Top 10 Strategic Technology Trends for 2016, published in October 2015.

In the press release accompanying the new hype cycle chart, Gartner states categorically:

Smart machine technologies will be the most disruptive class of technologies over the next 10 years due to radical computational power, near-endless amounts of data, and unprecedented advances in deep neural networks [italics mine] that will allow organizations with smart machine technologies to harness data in order to adapt to new situations and solve problems that no one has encountered previously.

Originally published on Forbes.com
