[youtube https://www.youtube.com/watch?v=wjmFrqyPnKc&rel=0]
The Rise of the Outsiders and the Future of the IT Industry
You have probably heard the news about the CFO of Morgan Stanley becoming the new Google CFO and the ongoing migration of Wall Street bankers to Silicon Valley (here and here). But did you know that this is just the surface manifestation of an underlying secular trend, nothing less than a new restructuring of the IT industry and the emergence of completely new developers and sellers of information technology products and services?
You could argue that at any point in the last half century, the fast-changing IT industry was not your father's IT industry. But remarkably, the structure of the IT industry has been very stable, with only one major redrawing of its landscape throughout its history. Today, the "IT industry" is being redefined, with two major clusters of companies increasingly competing with the incumbents and reshaping the old IT world order.
First, what did your father's IT industry look like? Until about 1990, the industry was dominated by IBM and a few smaller companies (e.g., DEC, HP, Wang, Prime). These companies offered one-stop shopping, providing enterprises with the entire IT stack, from hardware platforms (including proprietary chips) to peripherals, operating systems, networking, applications, and services.
The advent of the PC and, in particular, the networking of PCs in the 1980s gave rise to a new tidal wave of digital data. As a result, between 1990 and 2000, the industry's structure expanded to include large vendors focused on one layer of the IT stack: Intel in semiconductors, EMC in storage, Cisco in networking, Microsoft in operating systems, Oracle in databases. IBM saved itself from the fate of DEC, Wang, and Prime by focusing on services. DEC could have saved itself had it realized in time the value of focusing on its biggest competitive advantage, networking, instead of letting a number of focused players enter this market (which Cisco eventually dominated). At the same time, a number of focused PC vendors, primarily Compaq and Dell, carved out a larger role for themselves by gradually expanding their reach into other layers of the IT stack, emulating the old model of a vertically integrated IT vendor. By and large, they were less successful than the new, horizontally focused IT vendors.
The restructured IT industry, and specifically the focused, "best-in-class" vendors, answered a pressing business need. Digitization and the rapid growth of data unleashed new business requirements and opportunities that called for new ways to sell and buy IT. The new competitive and business pressures to keep more and more data online for longer durations, to mind and mine the data, and to share and move it around all contributed to the demand for a flexible IT infrastructure in which buyers assemble the pieces of their IT infrastructure from different vendors. Most important, in the late 1980s and early 1990s, businesses fundamentally changed their attitude towards data and the scope of what they did with it: from a primarily internal, back-office bookkeeping, "how did we do last quarter?" focus on the past, to external, customer-, partner-, and supplier-oriented data creation, collection, and mining, with a focus on the present and "let's understand how to serve our stakeholders better."
Now, how is today’s or tomorrow’s IT industry different from what it was just fifteen years ago? We still have more or less the same dominant players—IBM, Cisco, EMC, Oracle, HP, a reinvigorated Dell, etc. If there has been any marked change, it has been a shift back to a vertically-integrated model, with all of these players providing the whole IT stack (or what they call, with their insatiable appetite for new buzzwords, “converged infrastructures”).
I would argue that this “back to the future” model is on its last legs and the real future belongs to two clusters of companies: Digital natives (e.g., Google, Facebook, Amazon) and large, IT-intensive enterprises (e.g., financial services companies).
In the mid-2000s, a new business need emerged: "Let's make sense of the mountains of data we continue to accumulate," focusing on the future and the mining of data to better understand the likely results of different courses of action and decisions. Unlike the previous shift in the fortunes of the IT industry, the companies driving this shift were not old or new "IT vendors." They were the companies born on the Web, the digital natives, with data (and ever more data) at the core of everything they did. Google, Facebook, Amazon, Netflix—all built their IT infrastructure from scratch to their own specifications, inventing and re-imagining in the process the entire IT stack. They were the first outsiders insisting that IT was such a core competency for them that they could not trust it to "IT vendors" and could do a more efficient and effective job on their own, thank you.
Google and Amazon later became insiders by offering their infrastructure as a service to small and large businesses. But other companies born on the Web, such as Netflix, simply continued to amass unparalleled knowledge about state-of-the-art IT, develop innovative IT solutions which they then made available to the world as open source software, and hire (or train) the best and the brightest IT professionals. This became a new IT ecosystem, a whole new world of IT far surpassing what was happening in the “IT industry” on any measure of innovation and competitiveness.
Some of the large enterprises which used to be the most important customers of the traditional IT vendors are now joining this new IT ecosystem, reinforcing the reinvention of how IT is developed, sold and bought. Consider these recent news items:
- Santander is the first global bank to offer cloud data storage services to corporate customers. "As I think how I am going to compete with all these new technology players, I can offer the same services as some of these big guys," Santander’s chairman, Ana Botín, told the Financial Times.
- By 2018, Bank of America plans to have 80% of its workloads running on software-defined infrastructure inspired by Web companies, a process that began in 2013. “The transformation at Bank of America reflects the migration of state-of-the art information technology developed by Internet companies into the broader economy,” reports the Wall Street Journal.
- Facebook announced a new type of server, a collaboration with Intel based on its own design which it hopes other companies will adopt as well. The announcement was one of many at a recent gathering of the Open Compute Project, a nonprofit group formed by Facebook in 2011 to adapt principles of open-source software to hardware. “Members develop and share designs for servers, networking gear and storage devices that any company can build and sell, creating competition that helps hold down hardware costs,” reports the Wall Street Journal.
- Fidelity Investments reconfigured its data centers to better fit its business needs, engaging its engineering team in redesigning a revolutionary new rack, and reducing energy consumption by 20%. The announcement from the Open Compute Project reads in part: “Fidelity’s Enterprise Infrastructure team also wanted to transform the way its members worked. Instead of maintaining a closed shop, the team was looking to open up and engage with an external community of engineers as a way to keep up with the latest developments and innovate in-house… The Open Bridge Rack is a convertible datacenter rack that can physically adjust to hold any size server up to 23 inches across. It’s a key link in helping enterprises make the switch from proprietary gear to Open Compute… Fidelity designed (patent pending) and donated it to the Open Compute Foundation, making it available to different manufacturers.”
- Apple has acquired speedy and scalable NoSQL database company FoundationDB, TechCrunch reports. With this acquisition, Apple aims to bolster the reliability and speed of its cloud services, and possibly provide video streaming for its rumored TV service.
The new "IT vendors" are companies that see IT not only as an integral part of their business strategy but go even further to view IT—and the data they collect and mine—as what their company is all about. This new breed of IT vendors has emerged over the last decade from the ranks of the digital natives, and it is now joined by large companies that want to get further returns—and possibly new sources of revenue—from their large investments in IT. Furthermore, just like the digital natives, these large established companies don’t want to be beholden anymore to the lock-in tactics of traditional IT vendors.
The size of the IT industry worldwide in 2010 was about $1.5 trillion. Gartner predicts that the industry (including telecom) will grow to $3.8 trillion this year. The real "IT industry," however, is much larger than that, given all the IT-related activities happening outside the traditional boundaries of the industry. And if you include in the "IT industry" everything that’s being digitized—content, communications, consumer electronics, commerce—we are looking at an industry that will grow to at least $20 trillion by 2020. In this greatly expanded industry there will probably still be room for the traditional IT players. But the growth spurts, innovation, and new skills will come from today’s outsiders, from the companies whose core competency is the collection, processing, analysis, distribution, and use of data.
Originally published on Forbes.com
Data Science at Zillow (Slideshare)
[slideshare id=45132578&doc=pythondatascienceatzillow-150225104833-conversion-gate02]
Bruce Schneier’s Must-Read Book on Security and Privacy
“The surveillance society snuck up on us,” says Bruce Schneier in Data and Goliath: The Hidden Battles to Capture Your Data and Control Your World. It’s a thought-provoking, absorbing, and comprehensive guide to our new big data world. Most important, it’s a call for a serious discussion and urgent action to stop the harms caused by the mass collection and mining of data by governments and corporations. To paraphrase Schneier’s position on anonymity—we either need to develop more robust techniques for preserving our freedom, or give up on the idea entirely.
An expert on computer security, Schneier has written over a dozen books on the subject in the last 20 years, some highly technical, but this one is a call to action addressed to a mainstream audience. The impetus for writing such a book, it seems, was the 2013 revelations of NSA mass surveillance. Schneier worked with The Guardian’s Glenn Greenwald, helping in the analysis of some of the more technical documents that were leaked by Edward Snowden.
Schneier divides his guide to our big data world into three parts. The first covers the surveillance society: The massive amounts of data about ourselves we generate when we use computers, what governments and corporations do with this data separately and together, and the important difference between targeted and mass surveillance. The second part of the book is about the damage caused by government and corporate surveillance, including economic damage to U.S. businesses, and how the actions that are meant to protect us actually degrade privacy and security. The last part consists of a list of principles “to guide our thinking,” policy recommendations regarding government and corporate surveillance, and prescriptions for defensive behavior by individuals, ending with a general discussion of the trade-offs between big data’s value to society and its misuse and abuse.
“I’m not, and this book is not, anti-technology,” says Schneier. Declaring himself not even being anti-surveillance, he suggests that we need to design new ways for the NSA to perform its job while protecting our privacy. From that position, he proceeds to debunk and clarify some of the myths and misinformation spread by defenders of surveillance and to alert us to the consequences of our relaxed attitude towards what is being done with our data.
Corporate and government interests—and hunger for data—have converged in our time, Schneier argues. In exchange for free services from corporations and for protection from terrorists, we’ve agreed to mass surveillance. The convergence of government and corporate interests is now amplified by politicians who are lured by big data’s promises of targeted messaging and effective get-out-the-vote campaigns.
The problem is that mass surveillance doesn’t work as advertised, in both the public and private sectors. Schneier: “There’s no actual proof of any real successes against terrorism as a result of mass surveillance and significant evidence of harm” and, on targeted advertising, “what’s unclear is how much more data helps.” The NSA simply tapped into the “massive Internet eavesdropping system” already built by corporations, but failed to see that it’s a very ineffective way to catch terrorists.
“Data mining works best when you are searching for a well-defined profile, when there are a reasonable number of events per year, and when the cost of false alarms is low,” says Schneier. Alas, terrorists do not have a common profile (see U.S. Authorities Struggle to Find a Pattern Among Aspiring Islamic State Members), each attack is unique, and terrorists would do their best to avoid detection. “When you are watching everything, you are not seeing anything,” concludes Schneier.
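Schneier's point about false alarms is, at bottom, a base-rate argument: when the thing you are searching for is extremely rare, even a very accurate classifier buries the true positives under an avalanche of false ones. A back-of-the-envelope sketch makes this concrete (all the numbers below are illustrative assumptions, not figures from the book):

```python
# Hypothetical base-rate calculation illustrating why searching for very
# rare events produces mostly false alarms. All numbers are assumptions.

population = 300_000_000    # people under surveillance
threats = 1_000             # assumed actual threats in that population
sensitivity = 0.99          # chance the system flags a real threat
false_positive_rate = 0.01  # chance it flags an innocent person

true_alarms = threats * sensitivity
false_alarms = (population - threats) * false_positive_rate

# Probability that a flagged person is actually a threat (precision)
precision = true_alarms / (true_alarms + false_alarms)

print(f"False alarms: {false_alarms:,.0f}")
print(f"Share of flagged people who are real threats: {precision:.4%}")
```

Even with a generously accurate system, roughly three million innocent people are flagged for every thousand real threats, so fewer than one flag in three thousand is genuine. "When you are watching everything, you are not seeing anything."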
Mass surveillance by the NSA is not only ineffective, it also ensures reduced security and loss of privacy. All computer users today use basically the same hardware and software and when the NSA hacks into any of the components of the global computer network, it makes it more vulnerable. Schneier: “Because we all use the same products, technologies, protocols, and standards, we either make it easier for everyone to spy on everyone, or harder for anyone to spy on anyone.”
Data and Goliath is also a comprehensive guide to what’s to be done about our data. Here’s a sample of recommendations: Apply the same transparency principles that traditionally have governed law enforcement in the U.S. to national security; make government officials personally responsible for illegal behavior; overturn the “antiquated” third-party doctrine, recognizing that our information is our property and not the property of the service provider; reduce the NSA’s funding to pre-9/11 levels; establish an independent U.S. data protection agency; make intelligence-related whistleblowing a legal defense in the U.S.; block mass surveillance by encrypting your hard drive, chats, email, everything; and engage in the political process by noticing and talking about surveillance and by “giving copies of this book to all your friends as gifts.”
That the book is a great gift to one and all has already been recognized by many readers, as evidenced by the fact that it has made the New York Times Best Sellers list. But will it manage to make a dent in the complacency of the American public? Will it motivate all of us to do the work we need to do to stop mass surveillance?
Recent results of a Pew Research Center survey titled “Americans’ Privacy Strategies Post-Snowden” are not very encouraging. Almost nine-in-ten respondents say they have heard at least a bit about the government surveillance programs to monitor phone use and internet use (31% say they have heard a lot). But only 30% of all adults have taken at least one step to hide or shield their information from mass surveillance and many are not aware of the commonly available tools that could make their online activities more private.
While 57% say it is unacceptable for the government to monitor the communications of U.S. citizens, majorities support monitoring those particular individuals who use words like “explosives” and “automatic weapons” in their search engine queries (65% say that) and those who visit anti-American websites (67% say that). 46% describe themselves as “not very concerned” or “not at all concerned” about the surveillance. The lack of concern about mass surveillance is even more pronounced when people are asked about “electronic surveillance in various parts of their digital lives.”
I’m a good example of the general apathy about what is being done with our data. I’ve never bothered to look at the notices I often receive from banks and credit cards regarding their privacy policies. It so happened that I received one while reading Data and Goliath, a notice that I’m sure is a typical example of the privacy policies of many financial institutions as it simply follows what is allowed by U.S. federal law. The “privacy policy” (a more apt title would be “we do basically whatever we want to do with your personal information”) is outrageous in its entirety but this statement takes the cake: “When you are no longer our customer, we continue to share your information as described in this notice.”
In addition to demonstrating how many U.S. businesses couldn’t care less about their customers, these types of privacy policy notices—coming from staid, pre-Internet financial institutions—also show that while the scale of mass surveillance has reached unprecedented levels today, our indifference to what’s being done with our data has remained stable—and widespread—for ages. U.S. corporations have been legally collecting and sharing our data long before Google appeared on the scene and our data has been a fountain of enthusiasm for government officials and business executives for a long time. Here’s what Arthur R. Miller wrote in his 1971 The Assault on Privacy:
Too many information handlers seem to measure a man by the number of bits of storage capacity his dossier will occupy… The new information technologies seem to have given birth to a new social virus – ‘data mania.’ Its symptoms are shortness of breath and heart palpitations when contemplating a new computer application, a feeling of possessiveness about information and a deep resentment toward those who won’t yield it, a delusion that all information handlers can walk on water, and a highly advanced case of astigmatism that prevents the affected victim from perceiving anything but the intrinsic value of data.
Today’s “data mania” is called big data. Schneier’s book helped convince me that the first step, the first line of defense against the data deluge drowning our freedoms is to expose, explain and eradicate from public discourse the false tenets of big data religion. These include the incredible effectiveness of lots and lots of data, machines are better than humans in making data-driven decisions, let the data ask the questions, sampling is so 19th century, and privacy is dead, get over it. After 9/11, the NSA converted to the big data religion and went after “the whole haystack” because it provided a comforting set of rituals, I mean, action plans (see here and here for some of my previous discussions of big data religion).
Schneier devotes the last pages of his book to “the big data trade-off,” which he calls “the fundamental issue of the information age”: How do we make use of our data to benefit society as a whole while at the same time protecting our privacy? “Our data has enormous value when we put it all together,” says Schneier. But that flies in the face of his well-argued contentions that more data does not lead to better outcomes, either with targeted advertising or with protecting us from terrorists.
What he is talking about is the promise of big data, our hopes that more data, lots more data, will improve our lives. But why trade our security and privacy—and our present freedoms—for an unproven promise of some vague future benefit?
“I don’t think anyone can comprehend how much humanity will benefit from putting all our health data in a single database and letting researchers access it,” says Schneier. Indeed, it sounds plausible, at least intuitively, that big data could serve as a remedy to what ails us. I remember that growing up with a father who was a physician in private practice, I often thought about the wasted valuable data about his patients he meticulously recorded in his index card file, data that was never shared and analyzed with other physicians’ data. (Obviously, I started the big data conversation, at least with myself, very early on).
Data has facilitated progress in medicine, science, and other areas of inquiry and practice. But the collection and analysis of data has evolved in tandem with the development of tools that help ensure that only non-biased data is collected and that our questions drive the data collection (as opposed to the data driving our questions). The hype surrounding the availability of data generated by our wearables, and its presumed big promise for healthcare, for example, almost ensures that medical inquiry will forgo some of its critical and proven foundations: carefully designed samples, control groups, longitudinal studies, etc.
I used to be a data optimist. Here’s what I wrote in Big Data is Neither an Atomic Bomb Nor a Holy Grail: “Decisions based on non-biased data are almost always better than decisions that are not based on data. That’s the promise of big data or data analysis. No need to exaggerate its potential… Better focus on small steps where the collection and analysis of data measurably and demonstrably lead to better allocation of resources and improved quality of life.”
Bruce Schneier has made me a data radical. I don’t believe in trade-offs anymore, I’m firmly convinced that the risks associated with mass surveillance far outweigh any potential big data benefits. Instead of believing that decisions based on data are almost always better than decisions that are not based on data, I must admit now that decisions based on data are frequently dangerous, disruptive, ineffective and just plain stupid.
Don’t fall for the “promise” of big data. Better focus on the present and start getting our freedoms back, first and foremost by finally making our data our property. Occupy Data, anyone?
Originally published on Forbes.com
5 Tips for Leading a Digital Transformation
Digital transformation is what drives new investments in information technology today and what may finally get the U.S. economy growing at a faster pace. But while we hear a lot about digital transformation today, the term is rarely defined. Instead, we typically get a list of the latest digital technologies to impact enterprises—mobile devices, social networks, cloud computing, big data analytics, etc.—and very little guidance regarding how to go about the desired transformation. An exception that proves the rule is Isaac Sacolick, a CIO who has made sharing (on his blog) practical advice about digital transformation an important dimension of his professional life.
First, let’s define “transformation.” Business transformation is about finding and successfully pursuing a new business model. IT transformation is finding out how to make IT a strategic lever for the business in addition to being a robust business infrastructure. Digital transformation is finding out what data can do for your business (or non-profit or government agency).
“Transformation is a mindset,” Isaac Sacolick told me last week. He recently joined Greenwich Associates, a leading provider of global market intelligence and advisory services to the financial services industry, as its Global CIO and Managing Director. Before that he was CIO of McGraw Hill Construction, providing data and intelligence to the construction industry, CIO of BusinessWeek Magazine, a founder and COO at TripConnect, a travel industry social network, and CTO at PowerOne Media, providing software as a service to the newspaper industry.
With his unique background as both an entrepreneur and a senior information technology executive for companies that have made data the core of their business, Sacolick offers five tips for leading a digital transformation.
Define what digital transformation means to your business
“Digital transformation is not just about technology and its implementation,” says Sacolick, “it’s about looking at the business strategy through the lens of technical capabilities and how that changes how you are operating and generating revenues.”
He counsels starting at the top and looking at what the business is trying to accomplish, rather than focusing on the technology or how IT operates. This could be “a specific project or business initiative, a revenue growth objective, required costs savings or meeting competitive benchmarks.” Identifying what role the collection, analysis, and “monetization” of data play in these strategic initiatives is a key dimension of a digital transformation.
Establish a digital transformation process
If you don’t have a process, any talk about digital transformation remains just that—talk. For Sacolick, the process is Agile, a set of software development methods in which requirements and code evolve through disciplined, structured collaboration between self-organizing, cross-functional teams. Agile has become an increasingly popular process for efficiently developing high-quality software applications.
But Sacolick practices and views Agile in a larger perspective, as a process for business and IT teams to work on digital transformation, “a collaborative, disciplined practice for executing.” It is particularly useful where there is a lot of uncertainty about what is required and what solution the team will end up developing. “Agile allows everybody to see both the forest and the trees but focus on the trees,” says Sacolick. “It immediately puts business and IT in a room together and gets the team focused on the things that matter most.”
Initiate technology-based change
To do digital transformation right, IT has to be a source of innovative ideas and new practices driven by technology. An Agile process allows the IT team to explore and experiment, rather than being “order takers” only following what the business thinks the solution ought to look like.
“It’s not unusual to get in the middle of development a requirement that the IT team doesn’t know how to solve,” says Sacolick. “They then embark on what we call ‘Spikes,’ exploratory projects of short duration used to research a concept or create a simple prototype. It gives them the know-how or the confidence to go do something new. A lot of innovation comes from that mindset—encouraging experimentation.”
IT’s role as an enabler of change and innovation, however, should not be limited to the structured Agile process used to develop a specific application or solution. “There are always inflections when new technologies are going to give a business a transformational capability and that’s where IT leadership has to be smart about looking for opportunities or disruptions to the business,” says Sacolick.
In Sacolick’s book, an IT organization always has to be on the look-out for what’s happening with new technologies, and present to business executives the implications for their company and industry.
In these IT-initiated discussions with the business, says Sacolick, “you have to go through the economics of introducing new technologies, supporting them and transitioning to them.” The upshot of the discussion could be “this is interesting but we can’t afford to do this now,” but Sacolick thinks IT should be ready to present the implications both of embracing the new technology or tool and of not adopting it in a timely fashion. It’s important to discuss the what-if questions, as in: what if a competitor, possibly a smaller company that is not even on the competitive radar at that point, chooses to use the new technology to gain efficiencies or market share? “It’s important to have these conversations early before your competitors move and you are too far behind the curve,” says Sacolick.
Find the leads
“Find the leads” is Sacolick’s term for identifying the members of the IT team that will make the time to run with the digital transformation initiatives. It’s a challenge, because the IT organization is typically consumed by the management of the IT infrastructure. How do you channel the energies of the IT staff beyond day-to-day work (without resorting to the solution of the pointy-haired boss in a Dilbert cartoon, telling his staff: “If you come up with a good idea, I’ll let you take on the project in addition to your existing work”)?
Sacolick looks for individuals who are assertive and have leadership qualities. “More often than not,” he says, “once I throw them at these challenges, they start running with them and the things they were supporting tend to either go away or go to others in the group to handle or they bubble up and we start asking how do we solve this better—anything that is supported by a top engineer is not a good use of their time.”
Creating the right work environment for these IT leads is key. “Get them the tools to start,” says Sacolick, “allow them to fail a bit, give them time and cushion to work out the new technologies and challenges they’re trying to master.”
Establish a data-driven, continuous learning culture
A key to successful digital transformations is an IT organization that understands what data can do to its own successful performance. To become smarter about its operations, says Sacolick, the IT team needs “to collect meaningful data, convert it to metrics, look for trends, and prioritize improvements.” With practice, the IT organization becomes better and better at asking good questions about the data and deriving insights from it.
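Sacolick's sequence of "collect meaningful data, convert it to metrics, look for trends, and prioritize improvements" can be sketched in a few lines. The incident counts and the four-week comparison window below are hypothetical illustrations, not anything prescribed by Sacolick:

```python
# Hypothetical sketch: turn raw IT operations data into a metric and a trend.
# The incident counts and comparison window are illustrative assumptions.

from statistics import mean

# Raw data: weekly count of production incidents, oldest first
weekly_incidents = [12, 10, 11, 9, 8, 8, 7, 6]

# Metric + trend: compare the average of the most recent four weeks
# with the average of the first four weeks.
early = mean(weekly_incidents[:4])
late = mean(weekly_incidents[-4:])
trend = "improving" if late < early else "needs attention"

print(f"Early avg: {early:.1f}, recent avg: {late:.1f} -> {trend}")
```

The point is not the arithmetic but the habit: once the team routinely converts raw operational data into a small set of metrics, the questions ("why did incidents drop?", "which improvement should we prioritize next?") start suggesting themselves.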
Learning from data is important, but Sacolick thinks even more important is sharing the learning. He sees it as the essence of collaboration, among members of the IT team and between IT and other functions. “IT teams that hoard information, over-complicate things so that business users don’t understand how things work, or make it impossible for their colleagues to enhance their technical implementations, are impeding organizational growth and scalability,” he says.
And it starts at the top: “Leaders need to demonstrate collaborative practices so that they can be emulated by managers, teams, and individuals.”
Those are five things to keep in mind if you want to succeed as a leader of a digital transformation while at the same time excelling as the custodian of your company’s digital assets.
Originally published on Forbes.com
Internet’s Most-Read Stories (Infographic)
It turns out that the most-shared articles aren’t fluffy clickbait. Generally, they’re pieces that focus on grander themes: kids (“Schools Fail to Train Kids”), extreme wealth and poverty (“The World’s Poorest President,” “The Rich Alarmed by Homeless Jesus”), self-improvement (“What Mentally Strong People Avoid,” “How Not to Say the Wrong Thing”), God (“Science Increasingly Makes the Case for God”), and death (“Dying on Your Own Terms,” “Unmournable Bodies: Those We Kill Unknowingly”). Only some of the most universal aspects of human experience.
The visualization also reveals what types of storytelling are most engaging. Readers shared stories about other people’s lives the most when they were told from an intimate perspective instead of with impersonal statistics—as seen in the story of the life of Dasani, a homeless child from New York City, or a new mother who drove a Mercedes to pick up food stamps. If it’s not a personal, emotionally driven story, then it’s probably useful or service-y (“14 Habits That Drain Your Energy”) or entertaining (“Justin Timberlake Shows Us How Dumb We Sound When We Use Hashtags”).
6 Bad Digital Habits and How to Overcome Them (Infographic)
Source: Digital Information World
CEOs Embrace Digital Transformation, Compete in New Industries
CEOs no longer question the need to embrace technology at the core of their business in order to create value for customers, but 58% still see the rapid pace of technological change as a challenge. So we learn from the 18th annual PwC CEO survey. It’s based on 1,322 interviews with CEOs in 77 countries, conducted between September and December 2014.
Digital transformation is both a challenge and an opportunity. Digitization has blurred or even eliminated rigid industry boundaries starting with the media, content, and communications industries and now spreading everywhere. The new digital business world has no pre-defined boundaries, no industry-based rules or limitations.
Indeed, 54% of CEOs have entered a new sector or sub-sector, or considered it, in the past three years, according to PwC. More than half (56%) of CEOs think it likely that companies will increasingly compete in new industries over the next three years. Unlike in the past, when “unrelated diversification” was the business strategy of only large conglomerates, PwC found that 51% of the smaller firms included in the survey (revenues up to $100 million) have entered a new sector or subsector, or considered doing so, within the past three years, compared with 64% of the largest firms (revenues over $10 billion).
What technologies do CEOs think are the most strategic in facilitating the digital transformation of their companies and industries? Leading the list are mobile technologies for customer engagement (81%), data mining and analysis (80%), cyber security (78%), the Internet of Things (65%), socially enabled business processes (61%), and cloud computing (60%). Most interesting here is the inclusion of the Internet of Things, still somewhat new on the scene as a business buzzword; it’s possible that the survey respondents were referring either to what they see as its future potential or to the value they have already derived from established technologies such as RFID and machine-to-machine communications.
The pace of digital transformation has a lot to do with the return on investment CEOs and their companies have enjoyed from the digital technologies they have deployed in the past. 86% say a clear vision of how digital technologies can help achieve competitive advantage is key to the success of digital investments. 83% say the same for having a well-thought-out plan – including concrete measures of success – for digital investments.
The majority of CEOs think that digital technologies have created high value for their organizations in areas like data and data analytics, customer experience, digital trust and innovation capacity. Surprisingly, however, most CEOs point to operational efficiency as the area where they have seen the best return on digital investment. 82% think value has been created in this area, with half of these CEOs seeing “very high value.” The PwC report explains this finding as follows: “The transformation of cost structures is a symptom of the digital transformation that companies are undergoing as they align their business and operating models to new ways of delivering stakeholder value. Indeed, 71% of CEOs also tell us they’re cutting costs this year – the highest percentage since we began asking the question in 2010.”
It’s clear from these findings that digital technologies are used not only for generating new revenue streams from existing or new customers, sometimes in completely new lines of business, but also as tools CEOs use to automate existing workflows (and reduce headcount) and streamline existing business processes. To make sure digital technologies are deployed for both expansion and efficiency, CEOs now understand that they need to take charge: 86% think it’s important that they themselves champion the use of digital technologies.
This is certainly good news and what’s driving the acceleration of the digital transformation of all businesses.
For more, check out these survey reports
http://www.pwc.com/us/en/ceo-survey/index.html?WT.mc_id=cs_us-hero-home_CEO-survey for the full report
http://www.pwc.com/us/en/ceo-survey/technology-impact.jhtml for digital technology-specific stats
[Originally published on Forbes.com]
The Hadoop Bubble Quivers As Hortonworks Misses
Last month, Hortonworks announced quarterly results for the first time as a public company, and they came in below expectations. The company reported revenues of $12.7 million (up 55% year-over-year), while the average Wall Street estimate was $13.42 million. Similarly, Wall Street expected a loss of $2.04 per share; Hortonworks reported a loss of $2.19 per share.
The results could be attributed to a company new to the game of providing guidance to Wall Street. But the company’s management had accumulated substantial experience in that department throughout their impressive careers, so we must look elsewhere for an explanation. What if November 10, 2014, the day Hortonworks filed the paperwork for its IPO, was the beginning of the end of the Hadoop bubble, to quote your humble correspondent? What if December 12, 2014, the day Hortonworks went public, surprising many by its swift action, the bubble “began to quiver and shake preparatory to its bursting”? What if Hortonworks had decided to rush to the exit while expectations were high?
People who had over-inflated expectations—and may have grumbled yesterday “what were we thinking”—should have listened to Mike Stonebraker last August. Here’s what this foremost authority on databases (and serial entrepreneur) said about the new generation of Hadoop from Hortonworks competitor Cloudera:
Impala is architected exactly like all of the shared-nothing parallel SQL DBMSs, serving the data warehouse market. Specifically, notice clearly that the MapReduce layer has been removed, and for good reason. As some of us have been pointing out for years, MapReduce is not a useful internal interface inside a SQL (or Hive) DBMS. Impala was architected by savvy DBMS developers, who know the above pragma. In fact, development activity similar to Impala is being done by both HortonWorks and FaceBook. This, of course, presents the Hadoop vendors with a dilemma. Historically, “Hadoop” referred to the open source version of MapReduce written by Yahoo. However, Impala has thrown this layer out of the stack. How can one be a Hadoop vendor, when Hadoop is no longer in the mainstream stack? The answer is simple: redefine “Hadoop”, and that is exactly what the Hadoop vendors have done. The word “Hadoop” is now used to mean the entire stack.
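The “shared-nothing” architecture Stonebraker refers to can be illustrated with a minimal sketch (this is not Impala’s actual code, and the function names are invented for illustration): rows are hash-partitioned across nodes, each node aggregates its own partition independently, and a coordinator merges the partial results, with no MapReduce layer in between.

```python
# A minimal sketch of the shared-nothing parallel aggregation pattern
# Stonebraker describes (not Impala's implementation): each "node" owns
# a disjoint partition of the data, computes a local aggregate, and a
# coordinator merges the partials -- no MapReduce shuffle layer needed.
from collections import Counter

def partition(rows, num_nodes):
    """Hash-partition (key, value) rows so each node owns a disjoint slice."""
    shards = [[] for _ in range(num_nodes)]
    for key, value in rows:
        shards[hash(key) % num_nodes].append((key, value))
    return shards

def local_aggregate(shard):
    """Each node computes SUM(value) GROUP BY key over its own partition."""
    totals = Counter()
    for key, value in shard:
        totals[key] += value
    return totals

def coordinator_merge(partials):
    """The coordinator merges the per-node partial aggregates."""
    result = Counter()
    for partial in partials:
        result.update(partial)
    return dict(result)

rows = [("ads", 3), ("search", 5), ("ads", 2), ("mail", 1), ("search", 4)]
partials = [local_aggregate(shard) for shard in partition(rows, num_nodes=3)]
print(coordinator_merge(partials))  # totals: ads=5, search=9, mail=1
```

Because each node touches only its own partition and results stream back through a coordinator, queries can be answered interactively rather than as batch jobs, which is precisely the point of removing the MapReduce layer.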
In my post, I suggested “a few things to ponder when considering the potential success of the current leading Hadoop vendors and whether Hadoop in general is in the first stage of a rapid market expansion or the last stage of a bubble inflating.” One of them was the incorporation of Hadoop and similar tools by established software vendors into their traditional database and information management offerings. Stonebraker is highlighting the opposite, the recasting of Hadoop into what looks like a traditional database technology. He says: “Meanwhile most of the data warehouse vendors support HDFS, and many offer features to support semi-structured data. Hence, the data warehouse market and the Hadoop market will quickly converge.”
Another argument I made was “Hadoop is so 2004 (at least at Google).” Here’s Stonebraker on the subject:
Google must be “laughing in their beer” about now. They invented MapReduce to support the web crawl for their search engine in 2004. A few years ago they replaced MapReduce in this application with BigTable, because they wanted an interactive storage system and MapReduce was batch-only. Hence, the driving application behind MapReduce moved to a better platform a while ago. Now Google is reporting that they see little-to-no future need for MapReduce. It is indeed ironic that Hadoop is picking up support in the general community about five years after Google moved on to better things. Hence, the rest of the world followed Google into Hadoop with a delay of most of a decade. Google has long since abandoned it. I wonder how long it will take the rest of the world to follow Google’s direction and do likewise…
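Why was MapReduce “batch-only”? The model itself makes that clear. Here is a toy word-count sketch (not Hadoop’s API) of the three phases: every record must pass through the map phase, then the global shuffle, then the reduce phase before any result is available, so there is no way to get an interactive answer mid-job.

```python
# A toy illustration of the batch MapReduce model (not Hadoop's API):
# a map phase emits key-value pairs, a shuffle groups them by key, and
# a reduce phase folds each group. Each phase must finish over the
# whole dataset before the next begins -- hence batch-only.
from itertools import groupby

def map_phase(documents):
    # Emit (word, 1) for every word in every document.
    for doc in documents:
        for word in doc.split():
            yield (word, 1)

def shuffle(pairs):
    # Group all emitted pairs by key; in a real cluster this is the
    # network-heavy sort/shuffle step between mappers and reducers.
    for key, group in groupby(sorted(pairs), key=lambda kv: kv[0]):
        yield key, [v for _, v in group]

def reduce_phase(grouped):
    # Fold each group of values into a final count per key.
    for key, values in grouped:
        yield key, sum(values)

docs = ["the web crawl", "crawl the web the"]
counts = dict(reduce_phase(shuffle(map_phase(docs))))
print(counts)  # {'crawl': 2, 'the': 3, 'web': 2}
```

The global sort in the shuffle step is what makes the model powerful for crawl-scale batch processing and hopeless for the interactive, incremental workloads Google later moved to.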
No matter. Here’s what Matthew Hedberg, an analyst for RBC Capital Markets, wrote (according to Investor’s Business Daily) just before the Hortonworks quarterly earnings announcement: “We remain bullish on Hortonworks’ opportunity as a pure play on Hadoop and believe it to be one of the better-positioned disruptive vendors in what could be a once-in-a-decade data replatforming opportunity.” An analyst with Cowen and Co, Jesse Hulsing, expressed a similar bullish sentiment: “The Hadoop market is in early stages of adoption. Our view is that most large enterprises (5,000-plus employees) will have adopted or piloted the technology by fiscal year 2020. The underlying driver of this adoption is the growth in analytic applications, which is driven by rapid growth in new data types and new user types. Hortonworks should benefit from this.”
Maybe the market is indeed going gangbusters and Hortonworks is simply losing to better-equipped competitors, primarily Cloudera?
Apparently anticipating this question, Cloudera issued last week a “momentum press release,” announcing that its 2014 revenues “surpassed $100 million,” calling the results “an indicator of Hadoop’s strong momentum.” Derrick Harris at GigaOm had this to say about the news: “That the company, which is still privately held, would choose to disclose even that much information about its finances speaks to the fast maturation, growing competition and big egos in the Hadoop space.” Similarly, Arik Hesseldahl at Re/Code noted that a “likely motivation for the press release is a battle of optics between Cloudera and its primary rival, Hortonworks… Cloudera may simply be seeking to remind the marketplace which Hadoop company is bigger.”
It is also reminding the marketplace that it’s not going to be subject to the scrutiny accorded public companies anytime soon. Cloudera co-founder and chief strategy officer Mike Olson told Re/Code: “We have no timeline for an IPO, period.” CEO Reilly told Fortune “We’re of the size and scale that we could be a successful public company right now. But we’re so well backed that we don’t need to go public to have access to financing.” Indeed, after riding a bubble and raising a cool $1 billion, who needs Wall Street?
But they need Main Street. Despite the close to $1.5 billion in venture capital the key Hadoop competitors left standing—Cloudera, Hortonworks, and MapR—have raised, to survive and succeed they need enterprise customers to buy the current (and future) Hadoop incarnations they offer.
In my previous post on the Hadoop Bubble, I quoted a 2014 survey conducted by Wikibon which found that only 36% of the respondents were using Hadoop and the majority of those (64%) were using it in proof-of-concept environments. Even more important to the financial future of Hadoop vendors, Wikibon found that “only 25% of Hadoop practitioners are paying customers of one or another Hadoop vendor. 24% use a free distribution provided by a vendor, but the majority, 51%, roll their own Hadoop downloaded from the Apache Software Foundation.” Don’t you think this has something to do with Hortonworks’ quarterly results?
The author of the excellent Wikibon report, Jeff Kelly, gave a presentation last week, titled The Big Data Money Trail. About 23 minutes into the presentation, Kelly gets to a slide titled (surprise!) “Is this the beginning of the end of the bubble, or is there something next that matters?”
Kelly definitely thinks (or at least thought last week) that Hadoop still matters. He thinks Cloudera and Hortonworks will survive and doesn’t back down from his previous estimates of how big the big data market will get. He predicts three future developments, all helping accelerate big data adoption, but not necessarily (in my opinion) promising for Hadoop vendors: enterprises will overcome their process and culture obstacles for adopting big data technologies; innovation will continue to drive the market because it is based on open source software; and while Hadoop was the “low-hanging fruit,” offering cost saving opportunities, now enterprises will start building “data-driven applications.”
To illustrate the last point, specifically the value that can be created by all these new applications of big data, Kelly reproduced on the slide the results of previous work done by Wikibon which estimated the “spend and value delivered by industrial internet” to reach $1.2 trillion in 2020. Bert Latamore, his colleague at Silicon Angle, wrote in his summary of Kelly’s talk that “Vendors will do well in the Big Data market over the next decade, Kelly predicts, but the real winners will be the companies that harness the technology creatively. He estimates that practitioners will create $1.2 trillion in new value from Big Data over the coming decade.” (Italics mine)
So a 2013 report on the Industrial Internet has metamorphosed into current (and misattributed) estimates of how many dollars are swimming in the big data lake. This is how bubbles rise, and eventually, burst.
Or maybe not; maybe I’m wrong, and what we have is the beginning of a solid market for products from disruptive vendors going after a “once-in-a-decade data replatforming opportunity.” After all, Hortonworks provided above-consensus guidance for the current quarter, and they are much closer than I am to what is really happening in the marketplace.
In an interview with Derrick Harris conducted last week, Hortonworks CEO Rob Bearden said that he is not backing off his 2014 prediction that Hadoop will soon become a multi-billion-dollar market and Hortonworks will be a billion-dollar company in terms of revenue. Hadoop is actually just a part — albeit a big one — of a major evolution in the data-infrastructure space, he explained to Harris. As companies start replacing the pieces of their data environments, they’ll do so with the open source options that now dominate new technologies. These include Hadoop, NoSQL databases, Storm, Kafka, Spark and the like. “Open source companies can be very successful in terms of revenue growth and in terms of profitability faster than the old proprietary platforms got there,” Bearden said.
Originally posted on Forbes.com