Top 10 Predictions for $2.14 Trillion IT Market in 2014: IDC

IDC recently issued its top 10 predictions for 2014. IDC’s Frank Gens predicted that 2014 “will be about pitched battles” and a coming IT industry consolidation around a small number of big “winners.” The industry landscape will change as “incumbents will no longer be foolish enough to say we don’t compete with Amazon.”

Here’s my edited version of the predictions in the IDC press release and webcast:

Overall IT spending to grow 5.1% to $2.14 trillion, PC revenues to decline 6%

Worldwide sales of smartphones (12% growth) and tablets (18%) will continue at a “torrid pace” (accounting for over 60% of total IT market growth) at the expense of PC sales, which will continue to decline. Spending on servers, storage, networks, software, and services will “fare better” than in 2013.

Android vs Apple, round 6

The Samsung-led Android community “will maintain its volume advantage over Apple,” but Apple will continue to enjoy “higher average selling prices and an established ecosystem of apps.” Google Play (Android) app downloads and revenues, however, “are making dramatic gains.” IDC advises Microsoft to “quickly double mobile developer interest in Windows.” Or else?

Amazon (and possibly Google) to take on traditional IT suppliers

Amazon Web Services’ “avalanche of platform-as-a-service offerings for developers and higher value services for businesses” will force traditional IT suppliers to “urgently reconfigure themselves.” Google, IDC predicts, will join in the fight, as it realizes “it is at risk of being boxed out of a market where it should be vying for leadership.”***

Emerging markets will return to double-digit growth of 10%

Emerging markets will account for 35% of worldwide IT revenues and, for the first time, more than 60% of worldwide IT spending growth. “In dollar terms,” IDC says, “China’s IT spending growth will match that of the United States, even though the Chinese market is only one third the size of the U.S. market.” In 2014, the number of smart connected devices shipped in emerging markets will be almost double that shipped in developed markets, and emerging markets will be a hotbed of Internet of Things market development.

There’s a $100 billion cloud in our future

Spending on cloud services and the technology to enable these services “will surge by 25% in 2014, reaching over $100 billion.” IDC predicts “a dramatic increase in the number of datacenters as cloud players race to achieve global scale.”

Cloud service providers will increasingly drive the IT market

As cloud-dedicated datacenters grow in number and importance, the market for server, storage, and networking components “will increasingly be driven by cloud service providers, who have traditionally favored highly componentized and commoditized designs.” The incumbent IT hardware vendors will be forced to adopt a “cloud-first” strategy, IDC predicts. 25–30% of server shipments will go to datacenters managed by service providers, growing to 43% by 2017.

Bigger big data spending

IDC predicts spending of more than $14 billion on big data technologies and services, a 30% year-over-year increase, “as demand for big data analytics skills continues to outstrip supply.” The cloud will play a bigger role, with IDC predicting a race to develop cloud-based platforms capable of streaming data in real time. There will be increased use by enterprises of externally sourced data and applications, and “data brokers will proliferate.” IDC predicts explosive growth in big data analytics services, with the number of providers tripling in three years. 2014 spending on these services will exceed $4.5 billion, growing by 21%.

Here comes the social enterprise

IDC predicts increased integration of social technologies into existing enterprise applications. “In addition to being a strategic component in virtually all customer engagement and marketing strategies,” IDC says, “data from social applications will feed the product and service development process.” By 2017, 80% of Fortune 500 companies will have an active customer community, up from 30% today.

Here comes the Internet of Things

By 2020, the Internet of Things will generate 30 billion autonomously connected end points and $8.9 trillion in revenues. IDC predicts that in 2014 we will see new partnerships among IT vendors, service providers, and semiconductor vendors that will address this market. Again, China will be a key player: the average Chinese home in 2030 will have 40–50 intelligent devices/sensors, generating 200TB of data annually.
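
A rough back-of-the-envelope check of that last figure (my own arithmetic, not IDC’s) shows what 200TB a year spread across 40–50 devices would mean per device:

```python
# Back-of-the-envelope check on the Chinese-home figure (my arithmetic, not IDC's).
devices_per_home = 45        # midpoint of the 40-50 devices/sensors range
tb_per_home_per_year = 200   # 200TB of data per home per year

tb_per_device_per_year = tb_per_home_per_year / devices_per_home
gb_per_device_per_day = tb_per_device_per_year * 1000 / 365

print(f"~{tb_per_device_per_year:.1f} TB per device per year")   # ~4.4 TB
print(f"~{gb_per_device_per_day:.0f} GB per device per day")     # ~12 GB
```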

The digitization of all industries

By 2018, one third of share leaders in virtually all industries will be “Amazoned” by new and incumbent players. “A key to competing in these disrupted and reinvented industries,” IDC says, “will be to create industry-focused innovation platforms (like GE’s Predix) that attract and enable large communities of innovators – dozens to hundreds will emerge in the next several years.” Concomitant with this digitization-of-everything trend, “the IT buyer profile continues to shift to business executives. In 2014, and through 2017, IT spending by groups outside of IT departments will grow at more than 6% per year.”

***Can’t resist quoting my August 2011 post: “Consumer vs. enterprise is an old and soon-to-be obsolete distinction. If Google will not take away some of Microsoft’s (and IBM’s, etc. for that matter) “enterprise” revenues, someone else will. At stake are the $1.5 trillion spent annually by enterprises on hardware, software, and services. If you include what enterprises spend on IT internally (staff, etc.), you get at least $3 trillion. A big chunk of that will move to the cloud over the next fifteen years. Compare this $3 trillion to the $400 billion spent annually on all types of advertising worldwide.  Why leave money on the table?”

[Originally published on Forbes.com]

Posted in Big Data Analytics, Internet of Things

On Data Janitors, Engineers, and Statistics

Big Data Borat tweeted recently that “Data Science is 99% preparation, 1% misinterpretation.” Commenting on the 99% part, Cloudera’s Josh Wills says: “I’m a data janitor. That’s the sexiest job of the 21st century. It’s very flattering, but it’s also a little baffling.” Kaggle, the data-science-as-sport startup, takes care of the “1% misinterpretation” part by providing a matchmaking service between the sexiest of the sexy data janitors and the organizations requiring their hard-to-find skills. It charges $300 per hour for the service, of which $200 goes to the data janitor (at least in the case of Shashi Godbole, quoted in the Technology Review article). Kaggle justifies its mark-up by delivering “the best 0.5% of the 95,988 data scientists who compete in data mining competitions,” the top of its data science league table (the ranking of data scientists based on their performance in Kaggle’s competitions), presumably representing sound interpretation and top-notch productivity.
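
For a sense of scale, here is the simple arithmetic behind that mark-up, using only the hourly figures quoted above (an illustrative sketch, not Kaggle’s own accounting):

```python
# Kaggle's cut under the hourly rates quoted above (illustrative arithmetic only).
hourly_fee = 300        # what the client pays per hour
to_data_janitor = 200   # what the data scientist receives per hour

kaggle_cut = hourly_fee - to_data_janitor
print(f"Kaggle keeps ${kaggle_cut} per hour: "
      f"{kaggle_cut / hourly_fee:.0%} of the fee, "
      f"or a {kaggle_cut / to_data_janitor:.0%} markup on the scientist's rate")
```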

Kaggle’s co-founder Anthony Goldbloom tells The Atlantic’s Thomas Goetz that the ranking also represents a solution to a “market failure” in assessing the skills and relevant experience of the new breed of data scientists: “Kaggle represents a new sort of labor market, one where skills have been bifurcated from credentials.” Others see this as the creation of a new, $300-per-hour guild. In “Data Scientists Don’t Scale,” ZDNet’s Andrew Brust says that “’Data scientist’ is a title designed to be exclusive, standoffish and protective of a lucrative guild… The solution… isn’t legions of new data scientists. Instead, we need self-service tools that empower smart and tenacious business people to perform Big Data analysis themselves.”

Continue reading

Posted in Data Science Careers, Data Scientists, Statistics

Big Data in Context (Infographic)

Posted in Big Data Analytics, Infographics

What’s the Big Data? 12 Definitions

Last week I got an email from UC Berkeley’s Master of Information and Data Science program, asking me to respond to a survey of data science thought leaders on the question “What is big data?” I was especially delighted to be regarded as a “thought leader” by Berkeley’s School of Information, whose previous dean, Hal Varian (now chief economist at Google), answered my challenge fourteen years ago and produced the first study to estimate the amount of new information created in the world annually, a study I consider to be a major milestone in the evolution of our understanding of big data.

The Berkeley researchers estimated that the world had produced about 1.5 billion gigabytes of information in 1999, and a 2003 replication of the study found that amount had doubled in three years. Data was already getting bigger and bigger, and around that time, in 2001, industry analyst Doug Laney described the “3Vs”—volume, variety, and velocity—as the key “data management challenges” for enterprises, the same “3Vs” that have been used in the last four years by just about anyone attempting to define or describe big data.
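
A quick aside on what that doubling implies (my own arithmetic, not part of the Berkeley study): roughly a 26% compound annual growth rate.

```python
# Implied annual growth rate if new information doubled between 1999 and 2002
# (my own arithmetic based on the Berkeley figures, not part of the study itself).
new_info_1999_gb = 1.5e9    # ~1.5 billion gigabytes, i.e., ~1.5 exabytes, in 1999
years_to_double = 3

annual_growth = 2 ** (1 / years_to_double) - 1
print(f"1999 baseline: ~{new_info_1999_gb / 1e9:.1f} exabytes of new information")
print(f"Doubling in {years_to_double} years implies ~{annual_growth:.0%} annual growth")
```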

The first documented use of the term “big data” appeared in a 1997 paper by scientists at NASA, describing the problem they had with visualization (i.e., computer graphics), which “provides an interesting challenge for computer systems: data sets are generally quite large, taxing the capacities of main memory, local disk, and even remote disk. We call this the problem of big data. When data sets do not fit in main memory (in core), or when they do not fit even on local disk, the most common solution is to acquire more resources.”

In 2008, a number of prominent American computer scientists popularized the term, predicting that “big-data computing” will “transform the activities of companies, scientific researchers, medical practitioners, and our nation’s defense and intelligence operations.” The term “big-data computing,” however, is never defined in the paper.

The traditional database of authoritative definitions is, of course, the Oxford English Dictionary (OED). Here’s how the OED defines big data: (definition #1) “data of a very large size, typically to the extent that its manipulation and management present significant logistical challenges.”

But this is 2014, and maybe the first place to look for definitions should be Wikipedia. Indeed, it looks like the OED followed its lead. Wikipedia defines big data (and it did so before the OED) as (#2) “an all-encompassing term for any collection of data sets so large and complex that it becomes difficult to process using on-hand data management tools or traditional data processing applications.”

While a variation of this definition is what is used by most commentators on big data, its similarity to the 1997 definition by the NASA researchers reveals its weakness. “Large” and “traditional” are relative and ambiguous (and potentially self-serving for IT vendors selling either “more resources” of the “traditional” variety or new, non-“traditional” technologies).

The widely-quoted 2011 big data study by McKinsey highlighted that definitional challenge. Defining big data as (#3) “datasets whose size is beyond the ability of typical database software tools to capture, store, manage, and analyze,” the McKinsey researchers acknowledged that “this definition is intentionally subjective and incorporates a moving definition of how big a dataset needs to be in order to be considered big data.” As a result, all the quantitative insights of the study, including the updating of the UC Berkeley numbers by estimating how much new data is stored by enterprises and consumers annually, relate to digital data rather than just big data; no attempt was made, for example, to estimate how much of the data (or “datasets”) enterprises store is big data.

Another prominent source on big data is Viktor Mayer-Schönberger and Kenneth Cukier’s book on the subject. Noting that “there is no rigorous definition of big data,” they offer one that points to what can be done with the data and why its size matters:

(#4) “The ability of society to harness information in novel ways to produce useful insights or goods and services of significant value” and “…things one can do at a large scale that cannot be done at a smaller one, to extract new insights or create new forms of value.”

In Big Data@Work, Tom Davenport concludes that because of “the problems with the definition” of big data, “I (and other experts I have consulted) predict a relatively short life span for this unfortunate term.” Still, Davenport offers this definition:

(#5) “The broad range of new and massive data types that have appeared over the last decade or so.”

Let me offer a few other possible definitions:

(#6) The new tools helping us find relevant data and analyze its implications.

(#7) The convergence of enterprise and consumer IT.

(#8) The shift (for enterprises) from processing internal data to mining external data.

(#9) The shift (for individuals) from consuming data to creating data.

(#10) The merger of Madame Olympe Maxime and Lieutenant Commander Data.

(#11) The belief that the more data you have, the more insights and answers will rise automatically from the pool of ones and zeros.

(#12) A new attitude by businesses, non-profits, government agencies, and individuals that combining data from multiple sources could lead to better decisions.

I like the last two. #11 is a warning against blindly collecting more data for the sake of collecting more data (see NSA). #12 is an acknowledgment that storing data in “data silos” has been the key obstacle to getting the data to work for us, to improve our work and lives. It’s all about attitude, not technologies or quantities.

What’s your definition of big data?

See here for a compilation of big data definitions from 40+ thought leaders.

[Originally published on Forbes.com]

Posted in Big Data Analytics, Big Data History

A Very Short History Of The Internet Of Things

There have been visions of smart, communicating objects even before the global computer network was launched forty-five years ago. As the Internet has grown to link all signs of intelligence (i.e., software) around the world, a number of other terms associated with the idea and practice of connecting everything to everything have made their appearance, including machine-to-machine (M2M), Radio Frequency Identification (RFID), context-aware computing, wearables, ubiquitous computing, and the Web of Things. Here are a few milestones in the evolution of the mashing of the physical with the digital.

1932                                    Jay B. Nash writes in Spectatoritis: “Within our grasp is the leisure of the Greek citizen, made possible by our mechanical slaves, which far outnumber his twelve to fifteen per free man… As we step into a room, at the touch of a button a dozen light our way. Another slave sits twenty-four hours a day at our thermostat, regulating the heat of our home. Another sits night and day at our automatic refrigerator. They start our car; run our motors; shine our shoes; and cut our hair. They practically eliminate time and space by their very fleetness.”

January 13, 1946              The 2-Way Wrist Radio, worn as a wristwatch by Dick Tracy and members of the police force, makes its first appearance and becomes one of the comic strip’s most recognizable icons.

1949                                    The bar code is conceived when 27-year-old Norman Joseph Woodland draws four lines in the sand on a Miami beach. Woodland, who later became an IBM engineer, received (with Bernard Silver) the first patent for a linear bar code in 1952. More than twenty years later, another IBMer, George Laurer, was one of those primarily responsible for refining the idea for use by supermarkets.

1955                                    Edward O. Thorp conceives of the first wearable computer, a cigarette pack-sized analog device, used for the sole purpose of predicting roulette wheels. Developed further with the help of Claude Shannon, it was tested in Las Vegas in the summer of 1961, but its existence was revealed only in 1966.

October 4, 1960               Morton Heilig receives a patent for the first-ever head-mounted display.

1967                                    Hubert Upton invents an analog wearable computer with eyeglass-mounted display to aid in lip reading.

October 29, 1969             The first message is sent over the ARPANET, the predecessor of the Internet.

January 23, 1973              Mario Cardullo receives the first patent for a passive, read-write RFID tag.

June 26, 1974                    A Universal Product Code (UPC) label is used to ring up purchases at a supermarket for the first time.

1977                                    CC Collins develops an aid to the blind, a five-pound wearable with a head-mounted camera that converted images into a tactile grid on a vest.

Early 1980s                        Members of the Carnegie-Mellon Computer Science department install micro-switches in the Coke vending machine and connect them to the PDP-10 departmental computer so that they can see on their computer terminals how many bottles are in the machine and whether they are cold.

1981                                    While still in high school, Steve Mann develops a backpack-mounted “wearable personal computer-imaging system and lighting kit.”

1990                                    Olivetti develops an active badge system, using infrared signals to communicate a person’s location.

September 1991              Xerox PARC’s Mark Weiser publishes “The Computer in the 21st Century” in Scientific American, using the terms “ubiquitous computing” and “embodied virtuality” to describe his vision of how “specialized elements of hardware and software, connected by wires, radio waves and infrared, will be so ubiquitous that no one will notice their presence.”

1993                                    MIT’s Thad Starner starts using a specially-rigged computer and heads-up display as a wearable.

1993                                    Columbia University’s Steven Feiner, Blair MacIntyre, and Dorée Seligmann develop KARMA–Knowledge-based Augmented Reality for Maintenance Assistance. KARMA overlaid wireframe schematics and maintenance instructions on top of whatever was being repaired.

1994                                    Xerox EuroPARC’s Mik Lamming and Mike Flynn demonstrate the Forget-Me-Not, a wearable device that communicates via wireless transmitters and records interactions with people and devices, storing the information in a database.

1994                                    Steve Mann develops a wearable wireless webcam, considered the first example of lifelogging.

September 1994              The term ‘context-aware’ is first used by B.N. Schilit and M.M. Theimer in “Disseminating active map information to mobile hosts,” IEEE Network, Vol. 8, Issue 5.

1995                                    Siemens sets up a dedicated department inside its mobile phones business unit to develop and launch a GSM data module called “M1” for machine-to-machine (M2M) industrial applications, enabling machines to communicate over wireless networks. The first M1 module was used for point-of-sale (POS) terminals, in-vehicle telematics, remote monitoring, and tracking and tracing applications.

December 1995                MIT’s Nicholas Negroponte and Neil Gershenfeld write in “Wearable Computing” in Wired: “For hardware and software to comfortably follow you around, they must merge into softwear… The difference in time between loony ideas and shipped products is shrinking so fast that it’s now, oh, about a week.”

October 13-14, 1997       Carnegie-Mellon, MIT, and Georgia Tech co-host the first IEEE International Symposium on Wearable Computers, in Cambridge, MA.

1999                                    The Auto-ID (for Automatic Identification) Center is established at MIT. Sanjay Sarma, David Brock and Kevin Ashton turned RFID into a networking technology by linking objects to the Internet through the RFID tag.

1999                                    Neil Gershenfeld writes in When Things Start to Think: “Beyond seeking to make computers ubiquitous, we should try to make them unobtrusive…. For all the coverage of the growth of the Internet and the World Wide Web, a far bigger change is coming as the number of things using the Net dwarf the number of people. The real promise of connecting computers is to free people, by embedding the means to solve problems in the things around us.”

January 1, 2001                David Brock, co-director of MIT’s Auto-ID Center, writes in a white paper titled “The Electronic Product Code (EPC): A Naming Scheme for Physical Objects”: “For over twenty-five years, the Universal Product Code (UPC or ‘bar code’) has helped streamline retail checkout and inventory processes… To take advantage of [the Internet’s] infrastructure, we propose a new object identification scheme, the Electronic Product Code (EPC), which uniquely identifies objects and facilitates tracking throughout the product life cycle.”

March 18, 2002                Chana Schoenberger and Bruce Upbin publish “The Internet of Things” in Forbes. They quote Kevin Ashton of MIT’s Auto-ID Center: “We need an internet for things, a standardized way for computers to understand the real world.”

April 2002                          Jim Waldo writes in “Virtual Organizations, Pervasive Computing, and an Infrastructure for Networking at the Edge,” in the Journal of Information Systems Frontiers: “…the Internet is becoming the communication fabric for devices to talk to services, which in turn talk to other services. Humans are quickly becoming a minority on the Internet, and the majority stakeholders are computational entities that are interacting with other computational entities without human intervention.”

June 2002                          Glover Ferguson, chief scientist for Accenture, writes in “Have Your Objects Call My Objects” in the Harvard Business Review: “It’s no exaggeration to say that a tiny tag may one day transform your own business. And that day may not be very far off.”

January 2003                    Bernard Traversat et al. publish “Project JXTA-C: Enabling a Web of Things” in HICSS ’03 Proceedings of the 36th Annual Hawaii International Conference on System Sciences. They write: “The open-source Project JXTA was initiated a year ago to specify a standard set of protocols for ad hoc, pervasive, peer-to-peer computing as a foundation of the upcoming Web of Things.”

October 2003                    Sean Dodson writes in the Guardian: “Last month, a controversial network to connect many of the millions of tags that are already in the world (and the billions more on their way) was launched at the McCormick Place conference centre on the banks of Lake Michigan. Roughly 1,000 delegates from across the worlds of retail, technology and academia gathered for the launch of the electronic product code (EPC) network. Their aim was to replace the global barcode with a universal system that can provide a unique number for every object in the world. Some have already started calling this network ‘the internet of things’.”

August 2004                      Science-fiction writer Bruce Sterling introduces the concept of “Spime” at SIGGRAPH, describing it as “a neologism for an imaginary object that is still speculative. A Spime also has a kind of person who makes it and uses it, and that kind of person is somebody called a ‘Wrangler.’ … The most important thing to know about Spimes is that they are precisely located in space and time. They have histories. They are recorded, tracked, inventoried, and always associated with a story…  In the future, an object’s life begins on a graphics screen. It is born digital. Its design specs accompany it throughout its life. It is inseparable from that original digital blueprint, which rules the material world. This object is going to tell you – if you ask – everything that an expert would tell you about it. Because it WANTS you to become an expert.”

September 2004              G. Lawton writes in “Machine-to-machine technology gears up for growth” in Computer: “There are many more machines—defined as things with mechanical, electrical, or electronic properties—in the world than people. And a growing number of machines are networked… M2M is based on the idea that a machine has more value when it is networked and that the network becomes more valuable as more machines are connected.”

October 2004                    Neil Gershenfeld, Raffi Krikorian and Danny Cohen write in “The Internet of Things” in Scientific American: “Giving everyday objects the ability to connect to a data network would have a range of benefits: making it easier for homeowners to configure their lights and switches, reducing the cost and complexity of building construction, assisting with home health care. Many alternative standards currently compete to do just that—a situation reminiscent of the early days of the Internet, when computers and networks came in multiple incompatible types.”

October 25, 2004             Robert Weisman writes in the Boston Globe: “The ultimate vision, hatched in university laboratories at MIT and Berkeley in the 1990s, is an ‘Internet of things’ linking tens of thousands of sensor mesh networks. They’ll monitor the cargo in shipping containers, the air ducts in hotels, the fish in refrigerated trucks, and the lighting and heating in homes and industrial plants. But the nascent sensor industry faces a number of obstacles, including the need for a networking standard that can encompass its diverse applications, competition from other wireless standards, security jitters over the transmitting of corporate data, and some of the same privacy concerns that have dogged other emerging technologies.”

2005                                    A team of faculty members at the Interaction Design Institute Ivrea (IDII) in Ivrea, Italy, develops Arduino, a cheap and easy-to-use single-board microcontroller, for their students to use in developing interactive projects. Adrian McEwen and Hakim Cassimally write in Designing the Internet of Things: “Combined with an extension of the Wiring software environment, it made a huge impact on the world of physical computing.”

November 2005               The International Telecommunication Union publishes the 7th in its series of reports on the Internet, titled “The Internet of Things.”

June 22, 2009                    Kevin Ashton writes in “That ‘Internet of Things’ Thing” in RFID Journal: “I could be wrong, but I’m fairly sure the phrase ‘Internet of Things’ started life as the title of a presentation I made at Procter & Gamble (P&G) in 1999. Linking the new idea of RFID in P&G’s supply chain to the then-red-hot topic of the Internet was more than just a good way to get executive attention. It summed up an important insight—one that 10 years later, after the Internet of Things has become the title of everything from an article in Scientific American to the name of a European Union conference, is still often misunderstood.”

Thanks to Sanjay Sarma and Neil Gershenfeld for their comments on a draft of this timeline.

[Originally posted on Forbes.com]

Posted in Internet of Things

3 Big Data Milestones

If you were asked to name the top three events in the history of the IT industry, which ones would you choose? Here’s my list:

June 30, 1945: John von Neumann published the First Draft of a Report on the EDVAC, the first documented discussion of the stored program concept and the blueprint for computer architecture to this day.

May 22, 1973: Bob Metcalfe “banged out the memo inventing Ethernet” at Xerox Palo Alto Research Center (PARC).

March 1989: Tim Berners-Lee circulated “Information management: A proposal” at CERN in which he outlined a global hypertext system.

[Note: if round numbers are your passion, you may opt—without changing the substance of this condensed history—for the ENIAC proposal of April 1943, Ethernet in 1973, and CERN making the World Wide Web available to the world free of charge in April 1993, so that 2013 marks the 70th, 40th, and 20th anniversaries of these events.]

Why bother at all to look back? And why did I select these as the top three milestones in the evolution of information technology?

Most observers of the IT industry prefer and are expected to talk about what’s coming, not what’s happened. But to make educated guesses about the future of the IT industry, it helps to understand its past. Here I depart from most commentators who, if they talk at all about the industry’s past, divide it into hardware-defined “eras,” usually labeled “mainframes,” “PCs,” “Internet,” and “Post-PC.”

Another way of looking at the evolution of IT is to focus on the specific contributions of technological inventions and advances to the industry’s key growth driver: digitization and the resulting growth in the amount of digital data created, shared, and consumed. Each of these three events represents a leap forward, a quantitative and qualitative change in the growth trajectory of what we now call big data.

The industry was born with the first giant calculators digitally processing and manipulating numbers and then expanded to digitize other, mostly transaction-oriented activities, such as airline reservations. But until the 1980s, all computer-related activities revolved around interactions between a person and a computer. That did not change when the first PCs arrived on the scene.

The PC was simply a mainframe on your desk. Of course it unleashed a wonderful stream of personal productivity applications that in turn contributed greatly to the growth of enterprise data and the start of digitizing leisure-related, home-based activities. But I would argue that the major quantitative and qualitative leap occurred only when work PCs were connected to each other via Local Area Networks (LANs)—where Ethernet became the standard—and then long-distance via Wide Area Networks (WANs). With the PC, you could digitally create the memo you previously typed on a typewriter, but to distribute it, you still had to print it and make paper copies. Computer networks (and their “killer app,” email) made the entire process digital, ensuring the proliferation of the message, drastically increasing the amount of data created, stored, moved, and consumed.

Connecting people in a vast and distributed network of computers not only increased the amount of data generated but also led to numerous new ways of getting value out of it, unleashing many new enterprise applications and a new passion for “data mining.” This in turn changed the nature of competition and gave rise to new “horizontal” players, each focused on one IT component, as opposed to the vertically integrated, “end-to-end solution” business model that had dominated the industry until then. Intel in semiconductors, Microsoft in operating systems, Oracle in databases, Cisco in networking, Dell in PCs (or rather, build-to-order PCs), and EMC in storage made the 1990s the decade in which “best-of-breed” was what many IT buyers believed in, assembling their IT infrastructures from components sold by focused, specialized IT vendors.

The next phase in the evolution of the industry, the next quantitative and qualitative leap in the amount of data generated, came with the invention of the World Wide Web (commonly mislabeled as “the Internet”). It led to the proliferation of new applications which were no longer limited to enterprise-related activities but digitized almost any activity in our lives. Most important, it provided us with tools that greatly facilitated the creation and sharing of information by anyone with access to the Internet (the open and almost free wide area network only a few people cared or knew about before the invention of the World Wide Web). The work memo I had typed on a typewriter, which became a digital document sent across the enterprise and beyond, now became my life journal, which I could discuss with others, including people on the other side of the globe I had never met. While computer networks took IT from the accounting department to all corners of the enterprise, the World Wide Web took IT to all corners of the globe, connecting millions of people. Interactive conversations and sharing of information among these millions replaced and augmented broadcasting and drastically increased (again) the amount of data created, stored, moved, and consumed. And just as in the previous phase, a bunch of new players emerged, all of them born on the Web, all of them regarding “IT” not as a specific function responsible for running the infrastructure but as the essence of their business, with data and its analysis becoming their competitive edge.

We are probably going to see soon, and maybe are already experiencing, a new phase in the evolution of IT and a new quantitative and qualitative leap in the growth of data. The cloud (a new way to deliver IT), big data (a new attitude toward data and its potential value), and the Internet of Things (including wearable computers such as Google Glass), connecting billions of monitoring and measurement devices that quantify everything, combine to sketch for us the future of IT.

[Originally published on Forbes.com]

Posted in Big Data Analytics, Big Data Futures, Big Data History, Data Growth

When Will Human-Level AI Arrive? Ray Kurzweil (2029) and Rodney Brooks (2117++)

Source: IEEE Spectrum

See also:

AI Researchers Predict Automation of All Human Jobs in 125 Years

Robot Overlords: AI At Facebook, Amazon, Disney And Digital Transformation At GE, DBS, BNY Mellon

Posted in AI

The Real World of Big Data (Infographic)

The Real World of Big Data via Wikibon Infographics

Posted in Big Data Analytics, Infographics

The Digital Marketing Landscape: 2 Views

Gartner Digital Marketing Transit Map

Source: Gartner

Marketing Technology Landscape 2012

Source: chiefmartec.com

Posted in Misc

Big Data Bytes of the Week: The End of Big Data?

The end of Big Data? Based on his discussions with CIOs, reports Derrick Harris at GigaOm, Opera Solutions’ CEO Arnab Gupta “thinks the analytics market will crest around the end of next year as CIOs face enormous data spikes.” Is this what he means by “turning Big Data into Small Data”? Apparently saying “crest” is a very convincing way to get $84 million, but does he really believe that the Big Data flood is going to start tapering off next year?

Continue reading

Posted in Data Growth, Data Scientists, Predictions