The Internet of Things (IoT): 9 Predictions and Facts

A number of new reports on the Internet of Things (IoT) provide a fresh look at the state of this hot market and forecasts for its future impact on the world’s economy.

IDC discussed The Internet of Things Mid-Year Review at a webinar on July 23, including findings from a survey of 3,566 companies in North America. IDC defines IoT as “a network of uniquely identifiable ‘things’ that communicate without human interaction using IP connectivity.” Tata Consultancy Services (TCS) issued a report titled The Internet of Things: The Complete Reimaginative Force, based on a survey of 3,764 executives worldwide. TCS defines the IoT as “smart, connected products.” The McKinsey Global Institute (MGI) published The Internet of Things: Mapping the value beyond the hype. MGI defines IoT as “sensors and actuators connected by networks to computing systems” and excludes “systems in which all of the sensors’ primary purpose is to receive intentional human input, such as smartphone apps.” Finally, Business Insider (BI) issued The Smart City report on IoT initiatives in cities worldwide.

The economic impact of the IoT will re-shape the world’s economy

The IoT has a total potential economic impact of $3.9 trillion to $11.1 trillion a year by 2025. At the top end, that level of value—including the consumer surplus—would be equivalent to about 11 percent of the world economy (MGI). The Internet of Things market will expand from $780 billion this year to $1.68 trillion in 2020, growing at a CAGR of 16.9%. Sensors/modules and connectivity account for more than 50% of spending on IoT, followed by IT services at more than 25% and software at 15%. Traditional IT hardware accounts for less than 5% of total spending on IoT (IDC).
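As a rough sanity check, IDC's market figures are internally consistent: a quick compound-annual-growth-rate calculation over the stated span (assuming 2015 as the base year and a five-year horizon, which IDC does not spell out) lands close to the stated 16.9%:

```python
# Rough check of IDC's IoT market forecast, using the figures in the text.
# The 2015 base year and five-year horizon are assumptions.
start_value = 780e9    # $780 billion in 2015
end_value = 1.68e12    # $1.68 trillion in 2020
years = 5

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # ~16.6%, near IDC's stated 16.9%
```

The small gap between 16.6% and 16.9% likely reflects rounding in the published dollar figures.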

Investments in IoT technologies by cities worldwide will increase by $97 billion from 2015 to 2019. The cities’ IoT deployments will create $421 billion in economic value worldwide in 2019. That economic value will be derived from revenues from IoT device installations and sales and savings from efficiency gains in city services (BI).

There will be almost 30 billion IoT devices in 2020

In 2015, 4,800 connected endpoints are added every minute. This number will grow to 7,900 by 2020. The installed base of Internet of Things devices will grow from 10.3 billion devices in 2014 to 29.5 billion in 2020, with 19 billion of these devices installed in North America (IDC). The number of IoT devices installed in cities will increase by more than 5 billion in the next four years (BI).
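The installed-base figures imply an annual growth rate of roughly 19%. A short sketch interpolating the intermediate years under constant compound growth (an assumption; IDC published only the 2014 and 2020 endpoints) looks like this:

```python
# Interpolate the IoT installed base between IDC's 2014 and 2020 figures,
# assuming constant compound growth in between (IDC gives only endpoints).
base_2014, base_2020 = 10.3e9, 29.5e9
growth = (base_2020 / base_2014) ** (1 / 6) - 1  # ~19% per year

for year in range(2014, 2021):
    devices = base_2014 * (1 + growth) ** (year - 2014)
    print(f"{year}: {devices / 1e9:.1f} billion devices")
```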

The IoT will be primarily an enterprise market

In 2018, the IoT installed base will be split 70% in the enterprise and 30% in the consumer market, but enterprises will account for 90% of the spending (IDC). Business-to-business applications will probably capture more value—nearly 70 percent of it—than consumer uses, although consumer applications, such as fitness monitors and self-driving cars, attract the most attention and can create significant value, too (MGI).

Over the next few years, North America will still be the focal point for the IoT

The IoT has a large potential in developing economies, but it will have a higher overall value impact in advanced economies because of the higher value per use. However, developing economies could generate nearly 40 percent of the IoT’s value, and nearly half in some settings (MGI). 2020 will be a tipping point year for Asia, when it will become the geographical region with the largest installed base of IoT devices (IDC). North American companies will spend 0.45% of revenue this year on IoT initiatives, while European companies will spend 0.40%. Asia-Pacific companies will invest 0.34% of revenue in the IoT, and Latin American firms will spend 0.23% of revenue. North American and European companies are more frequently selling smart, connected products than are Asia-Pacific and Latin American companies (TCS).

The telecommunication industry leads other sectors in IoT investments

The telecommunications, banking, utilities, and securities/investment services industries are the leading sectors investing in the IoT in 2015 (IDC). Among companies gaining benefits from the IoT, industrial manufacturers reported the largest average revenue increase from their IoT initiatives last year (29%), and they forecast the largest revenue increase from the IoT by 2018 (27% over 2015). Industrial manufacturers were also in the lead in using sensors and other digital technologies to monitor the products they sold to customers, with 40% of the companies doing so (TCS).

IoT adoption is gaining momentum worldwide

36% of companies in North America have IoT initiatives in 2015 (IDC). 79% of companies worldwide already use IoT technologies, investing 0.4% of revenue on average. They expect their IoT budgets to rise by 20% by 2018, to $103 million (TCS).

Costs and customers are the key drivers of IoT investments

Lower operational costs and better customer service and support lead the list of significant drivers of current IoT initiatives. In large companies, business process efficiency/operations optimization and customer acquisition and/or retention also top the list (IDC). Companies with IoT programs in place reported an average revenue increase of 16% in 2014 in the areas of business where IoT initiatives were deployed. In addition, about 9% of firms had an average revenue increase of more than 60%. The biggest product and process improvements reported by companies were more customized offerings and tailored marketing campaigns, faster product improvements, and more effective customer service (TCS). Cities are adopting IoT technologies because they deliver a broad range of benefits, including reduced traffic congestion and air pollution, improved public safety, and new ways for governments to interact with their citizens (BI).

Security, culture change, determining priorities, and optimizing ROI are key IoT concerns

Security issues top the list of current barriers to IoT adoption (especially with larger companies), followed by funding the initial investment at the scale needed, determining the highest-priority use cases, and changing business processes (IDC). Identifying and pursuing new business and/or revenue opportunities that the IoT makes possible, and determining what data to collect, are key issues. Also important are getting managers and workers to change the way they think about customers, products, and processes, and having top executives who believe the IoT will have a profound impact and are willing to invest in it (TCS). Currently, most IoT data are not used. For example, on an oil rig that has 30,000 sensors, only 1 percent of the data are examined. That’s because this information is used mostly to detect and control anomalies—not for optimization and prediction, which provide the greatest value (MGI).

Microsoft leads the IoT market

The top five vendors mentioned as the providers companies “plan to work with within the next 2 years” are Microsoft, AT&T, Verizon, Cisco, and IBM. Among large companies (more than 1,000 employees), Microsoft and Cisco lead the list (IDC).

Originally published on Forbes.com

Posted in Internet of Things

What Policy Change Would Accelerate the Benefits of the Internet of Things (IoT)?

[youtube https://www.youtube.com/watch?v=y8zvkWWcUdA?rel=0]

McKinsey Global Institute:

Joi Ito: It gets back to open standards, interoperability, and a focus on non-IP-encumbered technology.

Jon Bruner: Everyone is looking for clarification on the rules on drones.

Renee DiResta: I don’t know that I feel that policy is really impeding anything right now. Maybe I’m wrong about that. I read through the FCC report and didn’t get the sense that there was anything [holding back the IoT] on a fundamental policy level.

Mark Hatch: Maybe it’s bandwidth-related: How do we handle the frequency and the radio waves and all the telecommunication requirements? This is a Qualcomm Technologies question maybe, along with the FCC. I may be completely wrong on that, but it’s one of the things I am curious about. How do you handle all of the communication data flow that’s going on and keep things from running into one another?

Mike Olson: The globe doesn’t have a data-privacy policy. Europe does broadly, but not in detail. In the United States, we have precisely two data-privacy laws: HIPAA, which protects your healthcare data, and the Fair Credit Reporting Act. Those are the only things that happen nationwide in terms of data privacy. Everything else is left to the states, and the states are pretty clueless about it. If we could elucidate policies and create laws that were uniform, it would be a lot easier for us to build and deploy these systems.

Dan Kaufman: If I had to guess, it’s the ability of people to protect their information. The Internet of Things is based on this fundamental ability to share information, and if we can’t do that in a safe and secure way, we’re going to need policies and laws so that everybody understands what’s within reason.

Cory Doctorow: I would reform the Digital Millennium Copyright Act, the 1998 statute whose language prohibits the circumvention of digital locks. I think with one step, we could make the future a better place. Ironically, the US Trade Representative has actually gone to all of America’s trading partners and gotten them to pass their own version of the Digital Millennium Copyright Act. So, every country in the world is liable to this problem. Now, the great news is that if the US stops enforcing it here, then all of those other countries will very quickly follow suit, because there’s money to be made in circumvention. The only reason to put a digital lock on is to extract maximum profits from your platform.

Tim O’Reilly: To me, policy makers need to not be trying to prevent the future from happening. They should be just policing bad actors. A good example is in healthcare. We are already producing vast reams of health data. HIPAA, the health-information privacy act, is a real obstacle. If you have a serious illness, you want to share your data with anybody who can help. You want to put your data together with other people’s data, because this collective amassing of data is one of the great keys to the future. And yet here we have these overreaching privacy laws that are going to make it difficult. So, punish bad actors—don’t prevent good actors.

Posted in Internet of Things

Most Hyped Technologies: Self-Driving Cars, Self-Service Analytics, IoT; No More Big Data Buzz

Gartner just released its 2015 Hype Cycle for Emerging Technologies report. It’s our most reliable buzz bellwether, annually defining what’s in and what’s out. At the peak of inflated expectations just two years ago, Big Data was dethroned by the Internet of Things last year (but it was still estimated to be five to ten years from the Plateau of Productivity), only to completely disappear from Gartner’s hype radar this year (the 2010-2014 hype cycles are at the bottom of this post). Big data is out. So what’s in?

Gartner, August 2015

The Internet of Things is still at the top of the list, with self-driving cars (“autonomous vehicles”) ascending from pre-peak to the peak of the hype cycle. But there is an intriguing new category—“advanced analytics with self-service delivery”—sharing top billing with them. I guess one could hype all three in one emerging-technology package of “The Internet of Autonomous Vehicles Delivering Advanced Analytics” as the solution to all our transportation problems.

These technologies at the peak of the hype cycle also highlighted for me what’s missing from this year’s report. Given that the most hyped news out of the Black Hat and Defcon conferences earlier this month was demonstrations of how to hack into cars (self-driving or not) and take control of them remotely, it is interesting that Gartner does not list any specific cybersecurity-related emerging technologies. It does mention, however, two general categories—“digital security” and “software-defined security”—both described as pre-peak, 5 to 10 years to the Plateau of Productivity. This may simply reflect the hype-less status of cybersecurity technologies. Given the daily news about data breaches, one can only hope that next year’s report will include some specific emerging solutions to what promises to be a growing economic burden.

Another emerging technology showing promise last year—data science—has disappeared from this year’s report. It is replaced by “citizen data science,” which Gartner thinks, as it did regarding data science last year, is only 2 to 5 years from the plateau. This could turn out to be the most optimistic prediction in this year’s report. A related category—machine learning—is making its first appearance on the chart this year, but already past the peak of inflated expectations. A glaring omission here is “deep learning,” the new label for and the new generation of machine learning, and one of the most hyped emerging technologies of the past couple of years.

It all boils down to what Gartner calls digital humanism: “New to the Hype Cycle this year is the emergence of technologies that support what Gartner defines as digital humanism—the notion that people are the central focus in the manifestation of digital businesses and digital workplaces.”

For the last 21 years Gartner has published the Hype Cycle report, of which Lee Rainie of the Pew Research Center has said: “There are sometimes disputes about where on the curve any individual innovation might rest, but there have been few challenges to the general trends it outlines.” I remember attending a Gartner Conference just before it started publishing this report and listening to a presentation by the analyst responsible at the time for Gartner’s emerging technologies research. He started his presentation by declaring: “Those who live by the crystal ball, die eating broken glass.”

The charts below show the evolution of Gartner’s crystal ball over the last five years and allow us to track the hype around Big Data over that period. It made its first appearance in August 2011 as “‘Big data’ and extreme information processing and management,” with 2 to 5 years to the Plateau of Productivity; it then just made it into the Peak of Inflated Expectations in 2012, rose to the top of the most hyped technologies (together with consumer 3D printing and Gamification) in 2013, started to descend into the Trough of Disillusionment in 2014, only to vanish completely in 2015. I guess Big Data is no longer an emerging technology.

Gartner Hype Cycle 2014


Gartner, August 2014

Gartner Hype Cycle 2013

Gartner, August 2013

Gartner Hype Cycle 2012

Gartner, August 2012

Gartner Hype Cycle 2011

Gartner, August 2011

Gartner Hype Cycle 2010

Gartner, August 2010

An earlier version of this post was published on Forbes.com

Posted in Big Data Analytics

How Much Data is Generated Every Minute?

Source: DOMO

Posted in Misc

What Makes the Internet of Things (IoT) Work (SlideShare)

[slideshare id=51024872&doc=9760-internet-of-things-part-2-slidesharev3-150728171647-lva1-app6892]

Posted in Internet of Things

On Brontobyte Data and Other Big Words


Source: Datafloq

Paul McFedries in IEEE Spectrum

When Gartner released its annual Hype Cycle for Emerging Technologies for 2014, it was interesting to note that big data was now located on the downslope from the “Peak of Inflated Expectations,” while the Internet of Things (often shortened to IoT) was right at the peak, and data science was on the upslope. This felt intuitively right. First, although big data—those massive amounts of information that require special techniques to store, search, and analyze—remains a thriving and much-discussed area, it’s no longer the new kid on the data block. Second, everyone expects that the data sets generated by the Internet of Things will be even more impressive than today’s big-data collections. And third, collecting data is one significant challenge, but analyzing and extracting knowledge from it is quite another, and the purview of data science.

Just how much information are we talking about here? Estimates vary widely, but big-data buffs sometimes speak of storage in units of brontobytes, a term that appears to be based on brontosaurus, one of the largest creatures ever to rattle the Earth. That tells you we’re dealing with a big number, but just how much data could reside in a brontobyte? I could tell you that it’s 1,000 yottabytes, but that likely won’t help. Instead, think of a terabyte, which these days represents an average-size hard drive. Well, you would need 1,000,000,000,000,000 (a thousand trillion) of them to fill a brontobyte. Oh, and for the record, yes, there’s an even larger unit tossed around by big-data mavens: the geopbyte, which is 1,000 brontobytes. Whatever the term, we’re really dealing in hellabytes, that is, a helluva lot of data.
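The ladder of decimal units McFedries describes can be laid out explicitly; note that "brontobyte" and "geopbyte" are informal big-data coinages, not official SI prefixes:

```python
# The decimal byte-unit ladder, each unit 1,000 times the previous one.
# "bronto" and "geop" are informal coinages, not SI prefixes.
units = ["kilo", "mega", "giga", "tera", "peta",
         "exa", "zetta", "yotta", "bronto", "geop"]

for step, name in enumerate(units, start=1):
    print(f"1 {name}byte = 10^{3 * step} bytes")

# Terabytes in a brontobyte: 10^27 / 10^12 = 10^15, the "thousand
# trillion" hard drives mentioned above.
print(10**27 // 10**12)
```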

Wrangling even petabyte-size data sets (a petabyte is 1,000 terabytes) and data lakes (data stored and readily accessible in its pure, unprocessed state) is a task for professionals, so not only are listings for big-data-related jobs thick on the ground, but the job titles themselves now display a pleasing variety: companies are looking for data architects (specialists in building data models), data custodians and data stewards (who manage data sources), data visualizers (who can translate data into visual form), data change agents and data explorers (who change how a company does business based on analyzing company data), and even data frackers (who use enhanced or hidden measures to extract or obtain data).

But it’s not just data professionals who are taking advantage of Brobdingnagian data sets to get ahead. Nowhere is that more evident than in the news, where a new type of journalism has emerged that uses statistics, programming, and other digital data and tools to produce or shape news stories. This data journalism (or data-driven journalism) is exemplified by Nate Silver’s FiveThirtyEight site, a wildly popular exercise in precision journalism and computer-assisted reporting (or CAR).

And everyone, professional and amateur alike, no longer has the luxury of dealing with just “big” data. Now there is also thick data (which combines both quantitative and qualitative analysis), long data (which extends back in time hundreds or thousands of years), hot data (which is used constantly, meaning it must be easily and quickly accessible), and cold data (which is used relatively infrequently, so it can be less readily available).

In the 1980s we were told we needed cultural literacy. Perhaps now we need big-data literacy, not necessarily to become proficient in analyzing large data sets but to become aware of how our everyday actions (our small data) contribute to many different big-data sets and what impact that might have on our privacy and security. Let’s learn how to become custodians of our own data.

Posted in Big Data Analytics

Top 10 Programming Languages 2015

Note: Left column shows 2015 ranking; right column shows 2014 ranking.

Source: IEEE Spectrum

The big five—Java, C, C++, Python, and C#—remain on top, with their ranking undisturbed, but C has edged to within a whisker of knocking Java off the top spot. The big mover is R, a statistical computing language that’s handy for analyzing and visualizing big data, which comes in at sixth place. Last year it was in ninth place, and its move reflects the growing importance of big data to a number of fields. A significant amount of movement has occurred further down in the rankings, as languages like Go, Perl, and even Assembly jockey for position…

A number of languages have entered the rankings for the first time. Swift, Apple’s new language, has already gained enough traction to make a strong appearance despite being released only 13 months ago. Cuda is another interesting entry—it’s a language created by graphics chip company Nvidia that’s designed for general-purpose computing using the company’s powerful but specialized graphics processors, which can be found in many desktop and mobile devices. Seven languages in all are appearing for the first time.

Posted in Misc

Why Humans Will Forever Rule Over the Machines

Everywhere you turn nowadays, you hear about the imminent triumph of intelligent machines over humans. They will take our jobs, they will make their own decisions, they will be even more intelligent than humans, they pose a threat to humanity (per Stephen Hawking, Bill Gates, and Elon Musk). Marc Andreessen recently summed up on Twitter the increased hubbub about the dangers of Artificial Intelligence: “From ‘It’s so horrible how little progress has been made’ to ‘It’s so horrible how much progress has been made’ in one step.”

Don’t worry. The machines will never take over, no matter how much progress is made in artificial intelligence. It will forever remain artificial, devoid of what makes us human (and intelligent in the full sense of the word), and of what accounts for our unlimited creativity, the fountainhead of ideas that will always keep us at least a few steps ahead of the machines.

In a word, intelligent machines will never have culture, our unique way of transmitting meanings and context over time, our continuously invented and re-invented inner and external realities.

When you stop to think about culture—the content of our thinking—it is amazing that it has been missing from the thinking of the people creating “thinking machines” and/or debating how much they will impact our lives for as long as this work and conversation has been going on. No matter what position they take in the debate and/or what path they follow in developing robots and/or artificial intelligence, they have collectively made a conscious or unconscious decision to reduce the incredible bounty and open-endedness of our thinking to computation, an exchange of information between billions of neurons, which they either hope or are afraid that we will eventually replicate in a similar exchange between increasingly powerful computers. It’s all about quantity and we know that Moore’s Law takes care of that.

Almost all the people participating in the debate about the rise of the machines have subscribed to the Turing Paradigm, which basically says “let’s not talk about what we cannot define or investigate and simply equate thinking with computation.”

The dominant thinking about thinking machines, whether of the artificial or the human kind, has not changed since Edward C. Berkeley wrote in Giant Brains, or Machines That Think, his 1949 book about the recently invented computers: “These machines are similar to what a brain would be if it were made of hardware and wire instead of flesh and nerves… A machine can handle information; it can calculate, conclude, and choose; it can perform reasonable operations with information. A machine, therefore, can think.” Thirty years later, MIT’s Marvin Minsky famously stated: “The human brain is just a computer that happens to be made out of meat.” Today, Harvard geneticist George Church goes further (reports Joichi Ito), suggesting that we should make brains as smart as computers, and not the other way around.

Still, from time to time we do hear new and original challenges to the dominant paradigm. In “Computers Versus Humanity: Do We Compete?” Liah Greenfeld and Mark Simes bring culture and the mind into the debate over artificial intelligence, concepts that do not exist in the prevailing thinking about thinking. They define culture as the symbolic process by which humans transmit their ways of life. It is a historical process, i.e., it occurs in time, and it operates on both the collective and individual levels simultaneously.

The mind, defined as “culture in the brain,” is a process representing an individualization of the collective symbolic environment. It is supported by the brain and, in turn, it organizes the connective complexity of the brain.  Greenfeld and Simes argue that “mapping and explaining the organization and biological processes in the human brain will only be complete when such symbolic, and therefore non-material, environment is taken into account.”

They conclude that what distinguishes humanity from all other forms of life “is its endless, unpredictable creativity. It does not process information: It creates. It creates information, misinformation, forms of knowledge that cannot be called information at all, and myriads of other phenomena that do not belong to the category of knowledge. Minds do not do computer-like things, ergo computers cannot outcompete us all.”

The mind, the continuous and dynamic creative process by which we live our conscious lives, is missing from the debates over the promise and perils of artificial intelligence. A recent example is a special section on robots in the July/August issue of Foreign Affairs, in which the editors brought together a number of authors with divergent opinions about the race against the machines. None of them, however, questions the assumption that we are in a race:

  • A roboticist, MIT’s Daniela Rus, writes about the “significant gaps” that have to be closed in order to make robots our little helpers and makes the case for robots and humans augmenting and complementing each other’s skills (in “The Robots Are Coming”).
  • Another roboticist, Carnegie Mellon’s Illah Reza Nourbakhsh, highlights robots’ “potential to produce dystopian outcomes” and laments the lack of required training in ethics, human rights, privacy, or security at the academic engineering programs that grant degrees in robotics (in “The Coming Robot Dystopia”).
  • The authors of The Second Machine Age, MIT’s Erik Brynjolfsson and Andrew McAfee, predict that human labor will not disappear anytime soon because “we humans are a deeply social species, and the desire for human connection carries over to our economic lives.” But the prediction is limited to “within the next decade,” after which “there is a real possibility… that human labor will, in aggregate, decline in relevance because of technological progress, just as horse labor did earlier” (in “Will Humans Go the Way of Horses?”).
  • The chief economics commentator at the Financial Times, Martin Wolf, dismisses the predictions regarding the imminent “breakthroughs in information technology, robotics, and artificial intelligence that will dwarf what has been achieved in the past two centuries” and the emergence of machines that are “supremely intelligent and even self-creating.” While also hedging his bets about the future, he states categorically “what we know for the moment is that there is nothing extraordinary in the changes we are now experiencing. We have been here before and on a much larger scale” (in “Same as It Ever Was: Why the Techno-optimists Are Wrong”).

Same as it ever was, indeed. A lively debate and lots of good arguments: Robots will help us, robots could harm us, robots may or may not take our jobs, robots—for the moment—are nothing special. Beneath the superficial disagreement lies a fundamental shared acceptance of the premise that we are no different from computers, and that we merely hold a temporary, fleeting advantage of greater computing power.

No wonder that the editor of Foreign Affairs, Gideon Rose, concludes that “something is clearly happening here, but we don’t know what it means. And by the time we do, authors and editors might well have been replaced by algorithms along with everybody else.”

Let me make a bold prediction. Algorithms will not create on their own a competitor to Foreign Affairs. No matter how intelligent machines become (and they will be much smarter than they are today), they will not create science or literature or any of the other components of our culture that we have created over the course of millennia and will continue to create, in some cases aided by technologies that we create and control.

And by “we,” I don’t mean only Einstein and Shakespeare. I mean the entire human race, engaged in creating, absorbing, manipulating, processing, communicating the symbols that make our culture, making sense of our reality. I doubt that we will ever have a machine creating Twitter on its own, not even the hashtag.

I’m sure we will have smart machines that could perform special tasks, augmenting our capabilities and improving our lives. That many jobs will be taken over by algorithms and robots, and many others will be created because of them, as we have seen over the last half-century. And that bad people will use these intelligent machines to harm other people and that we will make many mistakes relying too much on them and not thinking about all the consequences of what we are developing.

But intelligent machines will not have a mind of their own. Intelligent machines will not have our imagination, our creativity, our unique human culture. Intelligent machines will not take over because they will never be human.

Originally published on Forbes.com

Posted in Misc

John Markoff on automation, jobs, Deep Learning and AI limitations

My sense, after spending two or three years working on this, is that it’s a much more nuanced situation than the alarmists seem to believe. Brynjolfsson and McAfee, and Martin Ford, and Jaron Lanier have all written about the rapid pace of automation. There are two things to consider: One, the pace is not that fast. Deploying these technologies will take more time than people think. Two, the structure of the workforce may change in ways that means we need more robots than we think we do, and that the robots will have a role to play. The other thing is that the development of the technologies to make these things work is uneven.

Right now, we’re undergoing a rapid acceleration in pattern recognition technologies. Machines, for the first time, are learning how to recognize objects; they’re learning how to understand scenes, how to recognize the human voice, how to understand human language. That’s all happening; no question that the advances have been dramatic. It’s largely happened due to this technique called deep learning, which is a modern iteration of the artificial neural nets, which of course have been around since the 1950s and even before.

What hasn’t happened is the other part of the AI problem, which is called cognition. We haven’t made any breakthroughs in planning and thinking, so it’s not clear that you’ll be able to turn these machines loose in the environment to be waiters or flip hamburgers or do all the things that human beings do as quickly as we think. Also, in the United States the manufacturing economy has already left, by and large. Only 9 percent of the workers in the United States are involved in manufacturing.

There’s a wonderful counter to the popular belief that there will be no jobs. The last time someone wrote about this was in 1995, when a book titled The End of Work predicted it. In the decade after that, the US economy grew faster than the population. It’s not clear to me at all that things are going to work out the way the alarmists feel.

The classic example is that almost everybody cites this apparent juxtaposition of Instagram—thirteen programmers taking out a giant corporation, Kodak, with 140,000 workers. In fact, that’s not what happened at all. For one thing, Kodak wasn’t killed by Instagram. Kodak was a company that put a gun to its head and pulled the trigger multiple times until it was dead. It just made all kinds of strategic blunders. The simplest evidence of that is its competitor, Fuji, which did very well across this chasm of the Internet. The deeper thought is that Instagram, as a new-age photo-sharing system, couldn’t exist until the modern Internet was built, and that probably created somewhere between 2.5 and 5 million jobs, and made them good jobs. The notion that Instagram killed both Kodak and the jobs is just fundamentally wrong…

…What worries me about the future of Silicon Valley, is that one-dimensionality, that it’s not a Renaissance culture, it’s an engineering culture. It’s an engineering culture that believes that it’s revolutionary, but it’s actually not that revolutionary. The Valley has, for a long time, mined a couple of big ideas…

…In fact, things are slowing down. In 2045, it’s going to look more like it looks today than you think.

Source: Edge

Posted in AI, deep learning, Machine Learning

Hype Curve of (Hardware) Neural Networks


Source: Olivier Temam

Posted in AI