What Makes the Internet of Things (IoT) Work (SlideShare)

[Embedded SlideShare presentation: What Makes the Internet of Things (IoT) Work, Part 2]

Posted in Internet of Things

On Brontobyte Data and Other Big Words

[Infographic: Big data and the brontobyte]

Source: Datafloq

Paul McFedries in IEEE Spectrum

When Gartner released its annual Hype Cycle for Emerging Technologies for 2014, it was interesting to note that big data was now located on the downslope from the “Peak of Inflated Expectations,” while the Internet of Things (often shortened to IoT) was right at the peak, and data science was on the upslope. This felt intuitively right. First, although big data—those massive amounts of information that require special techniques to store, search, and analyze—remains a thriving and much-discussed area, it’s no longer the new kid on the data block. Second, everyone expects that the data sets generated by the Internet of Things will be even more impressive than today’s big-data collections. And third, collecting data is one significant challenge, but analyzing and extracting knowledge from it is quite another, and the purview of data science.

Just how much information are we talking about here? Estimates vary widely, but big-data buffs sometimes speak of storage in units of brontobytes, a term that appears to be based on brontosaurus, one of the largest creatures ever to rattle the Earth. That tells you we’re dealing with a big number, but just how much data could reside in a brontobyte? I could tell you that it’s 1,000 yottabytes, but that likely won’t help. Instead, think of a terabyte, which these days represents an average-size hard drive. Well, you would need 1,000,000,000,000,000 (a thousand trillion) of them to fill a brontobyte. Oh, and for the record, yes, there’s an even larger unit tossed around by big-data mavens: the geopbyte, which is 1,000 brontobytes. Whatever the term, we’re really dealing in hellabytes, that is, a helluva lot of data.
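To get a feel for the arithmetic, here is a minimal Python sketch of the decimal (power-of-ten) unit ladder described above; keep in mind that brontobyte and geopbyte are informal big-data terms rather than standardized prefixes:

```python
# Decimal (power-of-ten) data units; "brontobyte" and "geopbyte" are
# informal big-data terms, not standardized prefixes.
UNITS = {
    "terabyte":   10**12,
    "petabyte":   10**15,
    "exabyte":    10**18,
    "zettabyte":  10**21,
    "yottabyte":  10**24,
    "brontobyte": 10**27,  # 1,000 yottabytes
    "geopbyte":   10**30,  # 1,000 brontobytes
}

# How many 1 TB drives would it take to hold one brontobyte?
drives = UNITS["brontobyte"] // UNITS["terabyte"]
print(f"{drives:,} terabyte drives per brontobyte")  # 1,000,000,000,000,000
```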

Wrangling even petabyte-size data sets (a petabyte is 1,000 terabytes) and data lakes (data stored and readily accessible in its pure, unprocessed state) is a task for professionals, so not only are listings for big-data-related jobs thick on the ground, but the job titles themselves now display a pleasing variety: companies are looking for data architects (specialists in building data models), data custodians and data stewards (who manage data sources), data visualizers (who can translate data into visual form), data change agents and data explorers (who change how a company does business based on analyzing company data), and even data frackers (who use enhanced or hidden measures to extract or obtain data).

But it’s not just data professionals who are taking advantage of Brobdingnagian data sets to get ahead. Nowhere is that more evident than in the news, where a new type of journalism has emerged that uses statistics, programming, and other digital data and tools to produce or shape news stories. This data journalism (or data-driven journalism) is exemplified by Nate Silver’s FiveThirtyEight site, a wildly popular exercise in precision journalism and computer-assisted reporting (or CAR).

And everyone, professional and amateur alike, no longer has the luxury of dealing with just “big” data. Now there is also thick data (which combines both quantitative and qualitative analysis), long data (which extends back in time hundreds or thousands of years), hot data (which is used constantly, meaning it must be easily and quickly accessible), and cold data (which is used relatively infrequently, so it can be less readily available).
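As a toy illustration of the hot/cold distinction, here is a minimal Python sketch that assigns a record to a fast or a cheap storage tier based on how recently it was accessed; the 30-day threshold and the tier names are purely illustrative assumptions:

```python
from datetime import datetime, timedelta

# Illustrative rule: data touched within the last 30 days is "hot" and stays
# on fast storage; anything older is "cold" and can be archived more cheaply.
HOT_WINDOW = timedelta(days=30)

def storage_tier(last_accessed, now=None):
    """Classify a record as 'hot' or 'cold' by recency of access."""
    now = now or datetime.utcnow()
    return "hot" if now - last_accessed <= HOT_WINDOW else "cold"

now = datetime(2015, 8, 1)
print(storage_tier(datetime(2015, 7, 25), now))  # -> hot
print(storage_tier(datetime(2014, 1, 1), now))   # -> cold
```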

In the 1980s we were told we needed cultural literacy. Perhaps now we need big-data literacy, not necessarily to become proficient in analyzing large data sets but to become aware of how our everyday actions—our small data—contribute to many different big-data sets and what impact that might have on our privacy and security. Let’s learn how to become custodians of our own data.

Posted in Big Data Analytics

Top 10 Programming Languages 2015

[Chart: Top ten programming languages.] Note: Left column shows 2015 ranking; right column shows 2014 ranking.

Source: IEEE Spectrum

The big five—Java, C, C++, Python, and C#—remain on top, with their ranking undisturbed, but C has edged to within a whisper of knocking Java off the top spot. The big mover is R, a statistical computing language that’s handy for analyzing and visualizing big data, which comes in at sixth place. Last year it was in ninth place, and its move reflects the growing importance of big data to a number of fields. A significant amount of movement has occurred further down in the rankings, as languages like Go, Perl, and even Assembly jockey for position…

A number of languages have entered the rankings for the first time. Swift, Apple’s new language, has already gained enough traction to make a strong appearance despite being released only 13 months ago. Cuda is another interesting entry—it’s a language created by graphics chip company Nvidia that’s designed for general-purpose computing using the company’s powerful but specialized graphics processors, which can be found in many desktop and mobile devices. Seven languages in all are appearing for the first time.

Posted in Misc

Why Humans Will Forever Rule Over the Machines

Everywhere you turn nowadays, you hear about the imminent triumph of intelligent machines over humans. They will take our jobs, they will make their own decisions, they will be even more intelligent than humans, they pose a threat to humanity (per Stephen Hawking, Bill Gates, and Elon Musk). Marc Andreessen recently summed up on Twitter the increased hubbub about the dangers of artificial intelligence: “From ‘It’s so horrible how little progress has been made’ to ‘It’s so horrible how much progress has been made’ in one step.”

Don’t worry. The machines will never take over, no matter how much progress is made in artificial intelligence. It will forever remain artificial, devoid of what makes us human (and intelligent in the full sense of the word) and of what accounts for our unlimited creativity, the fountainhead of ideas that will always keep us at least a few steps ahead of the machines.

In a word, intelligent machines will never have culture, our unique way of transmitting meanings and context over time, our continuously invented and re-invented inner and external realities.

When you stop to think about culture—the content of our thinking—it is amazing that it has been missing from the thinking of the people creating “thinking machines” and debating how much those machines will impact our lives, for as long as this work and conversation have been going on. No matter what position they take in the debate or what path they follow in developing robots or artificial intelligence, they have collectively made a conscious or unconscious decision to reduce the incredible bounty and open-endedness of our thinking to computation, an exchange of information between billions of neurons, which they either hope or fear we will eventually replicate in a similar exchange between increasingly powerful computers. It’s all about quantity, and we know that Moore’s Law takes care of that.

Almost all the people participating in the debate about the rise of the machines have subscribed to the Turing Paradigm which basically says “let’s not talk about what we cannot define or investigate and simply equate thinking with computation.”

The dominant thinking about thinking machines, whether of the artificial or the human kind, has not changed since Edmund C. Berkeley wrote in Giant Brains, or Machines That Think, his 1949 book about the recently invented computers: “These machines are similar to what a brain would be if it were made of hardware and wire instead of flesh and nerves… A machine can handle information; it can calculate, conclude, and choose; it can perform reasonable operations with information. A machine, therefore, can think.” Thirty years later, MIT’s Marvin Minsky famously stated: “The human brain is just a computer that happens to be made out of meat.” Today, Harvard geneticist George Church goes further (reports Joichi Ito), suggesting that we should make brains as smart as computers, and not the other way around.

Still, from time to time we do hear new and original challenges to the dominant paradigm. In “Computers Versus Humanity: Do We Compete?” Liah Greenfeld and Mark Simes bring culture and the mind into the debate over artificial intelligence, concepts that do not exist in the prevailing thinking about thinking. They define culture as the symbolic process by which humans transmit their ways of life. It is a historical process, i.e., it occurs in time, and it operates on both the collective and individual levels simultaneously.

The mind, defined as “culture in the brain,” is a process representing an individualization of the collective symbolic environment. It is supported by the brain and, in turn, it organizes the connective complexity of the brain.  Greenfeld and Simes argue that “mapping and explaining the organization and biological processes in the human brain will only be complete when such symbolic, and therefore non-material, environment is taken into account.”

They conclude that what distinguishes humanity from all other forms of life “is its endless, unpredictable creativity. It does not process information: It creates. It creates information, misinformation, forms of knowledge that cannot be called information at all, and myriads of other phenomena that do not belong to the category of knowledge. Minds do not do computer-like things, ergo computers cannot outcompete us all.”

The mind, the continuous and dynamic creative process by which we live our conscious lives, is missing from the debates over the promise and perils of artificial intelligence. A recent example is a special section on robots in the July/August issue of Foreign Affairs, in which the editors brought together a number of authors with divergent opinions about the race against the machines. None of them, however, questions the assumption that we are in a race:

  • A roboticist, MIT’s Daniela Rus, writes about the “significant gaps” that have to be closed in order to make robots our little helpers and makes the case for robots and humans augmenting and complementing each other’s skills (in “The Robots Are Coming”).
  • Another roboticist, Carnegie Mellon’s Illah Reza Nourbakhsh, highlights robots’ “potential to produce dystopian outcomes” and laments the lack of required training in ethics, human rights, privacy, or security at the academic engineering programs that grant degrees in robotics (in “The Coming Robot Dystopia”).
  • The authors of The Second Machine Age, MIT’s Erik Brynjolfsson and Andrew McAfee, predict that human labor will not disappear anytime soon because “we humans are a deeply social species, and the desire for human connection carries over to our economic lives.” But the prediction is limited to “within the next decade,” after which “there is a real possibility… that human labor will, in aggregate, decline in relevance because of technological progress, just as horse labor did earlier” (in “Will Humans Go the Way of Horses?”).
  • The chief economics commentator at the Financial Times, Martin Wolf, dismisses the predictions regarding the imminent “breakthroughs in information technology, robotics, and artificial intelligence that will dwarf what has been achieved in the past two centuries” and the emergence of machines that are “supremely intelligent and even self-creating.” While also hedging his bets about the future, he states categorically “what we know for the moment is that there is nothing extraordinary in the changes we are now experiencing. We have been here before and on a much larger scale” (in “Same as It Ever Was: Why the Techno-optimists Are Wrong”).

Same as it ever was, indeed. A lively debate and lots of good arguments: Robots will help us, robots could harm us, robots may or may not take our jobs, robots—for the moment—are nothing special. Beneath the superficial disagreement lies a fundamental shared acceptance of the general premise that we are no different from computers and only have the temporary and fleeting advantage of greater computing power.

No wonder that the editor of Foreign Affairs, Gideon Rose, concludes that “something is clearly happening here, but we don’t know what it means. And by the time we do, authors and editors might well have been replaced by algorithms along with everybody else.”

Let me make a bold prediction. Algorithms will not create on their own a competitor to Foreign Affairs. No matter how intelligent machines become (and they will be much smarter than they are today), they will not create science or literature or any of the other components of our culture that we have created over the course of millennia and will continue to create, in some cases aided by technologies that we create and control.

And by “we,” I don’t mean only Einstein and Shakespeare. I mean the entire human race, engaged in creating, absorbing, manipulating, processing, and communicating the symbols that make our culture, making sense of our reality. I doubt that we will ever have a machine creating Twitter on its own, not even the hashtag.

I’m sure we will have smart machines that could perform special tasks, augmenting our capabilities and improving our lives. That many jobs will be taken over by algorithms and robots, and many others will be created because of them, as we have seen over the last half-century. And that bad people will use these intelligent machines to harm other people and that we will make many mistakes relying too much on them and not thinking about all the consequences of what we are developing.

But intelligent machines will not have a mind of their own. Intelligent machines will not have our imagination, our creativity, our unique human culture. Intelligent machines will not take over because they will never be human.

Originally published on Forbes.com

Posted in Misc

John Markoff on automation, jobs, Deep Learning and AI limitations

My sense, after spending two or three years working on this, is that it’s a much more nuanced situation than the alarmists seem to believe. Brynjolfsson and McAfee, and Martin Ford, and Jaron Lanier have all written about the rapid pace of automation. There are two things to consider: One, the pace is not that fast. Deploying these technologies will take more time than people think. Two, the structure of the workforce may change in ways that mean we need more robots than we think we do, and that the robots will have a role to play. The other thing is that the development of the technologies to make these things work is uneven.

Right now, we’re undergoing a rapid acceleration in pattern recognition technologies. Machines, for the first time, are learning how to recognize objects; they’re learning how to understand scenes, how to recognize the human voice, how to understand human language. That’s all happening; no question the advances have been dramatic, and it’s largely happened due to this technique called deep learning, which is a modern iteration of the artificial neural nets, which of course have been around since the 1950s and even before.

What hasn’t happened is the other part of the AI problem, which is called cognition. We haven’t made any breakthroughs in planning and thinking, so it’s not clear that you’ll be able to turn these machines loose in the environment to be waiters or flip hamburgers or do all the things that human beings do as quickly as we think. Also, in the United States the manufacturing economy has already left, by and large. Only 9 percent of the workers in the United States are involved in manufacturing.

There’s this wonderful counterexample to the popular belief that there will be no jobs. The last time someone wrote about this was in 1995, when a book titled The End of Work predicted it. In the decade after that, the US economy grew faster than the population. It’s not clear to me at all that things are going to work out the way they felt.

The classic example is that almost everybody cites this apparent juxtaposition of Instagram—thirteen programmers taking out a giant corporation, Kodak, with 140,000 workers. In fact, that’s not what happened at all. For one thing, Kodak wasn’t killed by Instagram. Kodak was a company that put a gun to its head and pulled the trigger multiple times until it was dead. It just made all kinds of strategic blunders. The simplest evidence of that is its competitor, Fuji, which did very well across this chasm of the Internet. The deeper thought is that Instagram, as a new-age photo sharing system, couldn’t exist until the modern Internet was built, and that probably created somewhere between 2.5 and 5 million jobs, and made them good jobs. The notion that Instagram killed both Kodak and the jobs is just fundamentally wrong…

…What worries me about the future of Silicon Valley is that one-dimensionality, that it’s not a Renaissance culture, it’s an engineering culture. It’s an engineering culture that believes that it’s revolutionary, but it’s actually not that revolutionary. The Valley has, for a long time, mined a couple of big ideas…

…In fact, things are slowing down. In 2045, it’s going to look more like it looks today than you think.

Source: Edge
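As a minimal illustration of the artificial neural nets Markoff refers to, here is a toy two-layer network in Python, trained by gradient descent to recognize the XOR pattern. The architecture, data, and hyperparameters are deliberately tiny and purely illustrative; modern deep learning systems stack many more layers and train on vastly larger data sets:

```python
import numpy as np

# Toy pattern recognition: a tiny two-layer neural net learns XOR.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # input -> hidden weights
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # hidden -> output weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5                                         # learning rate

for step in range(5000):
    h = np.tanh(X @ W1 + b1)            # forward pass: hidden activations
    p = sigmoid(h @ W2 + b2)            # predicted probabilities
    grad_p = (p - y) * p * (1 - p)      # backpropagate the squared error
    grad_W2, grad_b2 = h.T @ grad_p, grad_p.sum(axis=0)
    grad_h = (grad_p @ W2.T) * (1 - h ** 2)
    grad_W1, grad_b1 = X.T @ grad_h, grad_h.sum(axis=0)
    W1 -= lr * grad_W1; b1 -= lr * grad_b1   # gradient descent update
    W2 -= lr * grad_W2; b2 -= lr * grad_b2

print(np.round(p.ravel(), 2))  # should approach [0, 1, 1, 0]
```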

Posted in AI, deep learning, Machine Learning

Hype Curve of (Hardware) Neural Networks

[Chart: Hype curve of hardware neural networks]

Source: Olivier Temam

Posted in AI

Text Analytics: Inaugural Speeches of All U.S. Presidents (Infographic)

[Infographic: Text analytics of the inaugural speeches of all U.S. presidents]

Posted in Infographics

The CDO Interview: Digital Transformation at CVS Health

Brian Tilzer, SVP and CDO, CVS Health

CVS Health has recently opened a new Digital Innovation Lab in Boston. Enabling digital access to healthcare and related services anytime and anywhere is a key component in the company’s transformation into a multi-faceted healthcare provider.

CVS also announced plans to expand its physical footprint by acquiring Target’s 1,660 pharmacies and 80 clinics for $1.9 billion. These will be added to its existing 7,800 retail drugstores and nearly 1,000 walk-in medical clinics. CVS also acts as pharmacy benefits manager for more than 70 million plan members and provides specialty pharmacy services. Rapidly expanding its portfolio—last month it acquired Omnicare Inc., a provider of pharmacy services, for $12.7 billion—CVS moved up to number 10 on this year’s Fortune 500 list with $139.4 billion in 2014 revenues.

Successful transformation from a drugstore chain to a provider of innovative approaches to delivering healthcare services takes a lot of moving parts, not least of which is information technology. CVS highlighted the digital dimension of this transformation when it hired Brian Tilzer in February 2013 as senior vice president and its first Chief Digital Officer (CDO), to “develop and lead teams” driving the company’s “digital innovation efforts.”

Tilzer’s IT experience goes all the way back to the 1960s, when his mother worked as a programmer for Equitable Life Insurance. Later, she taught her very young son how to write programs for the Apple II. This got Tilzer his first job in retail, showing customers at his local computer store what they could do with the personal computer.

What followed was a career as a technology and business strategy consultant, a senior vice president of strategy and business development at Linens n Things, and six years in senior eCommerce roles at Staples. Now Tilzer is defining the CDO role for CVS Health.

CVS was ahead of many other companies at the time: there were only 225 CDOs at the end of 2012, according to the CDO Club, and that number is estimated to grow by 800%, to 2,000, by the end of this year. Late last year, Korn Ferry predicted that CDOs will be among the most in-demand C-level positions in 2015, and IDC predicted that by 2020, “60% of CIOs in global organizations will be supplanted by the Chief Digital Officer (CDO) for the delivery of IT-enabled products and digital services.”

At the MIT Sloan CIO Symposium last month, Tilzer participated in a panel discussion titled “Leading Digital: A Manifesto for IT and Business Executives.” Also on the panel was Michael Nilles, CIO of the Schindler Group and CEO of Schindler Digital Business.  A global provider of elevators, escalators and related services, the Schindler Group has opted to entrust its CIO with a business unit responsible for connecting its products via the Internet of Things and providing its 20,000 field service employees with a “digital tool case.”

At the Symposium’s panel discussion, Nilles argued strongly for having only one executive perform the CIO and CDO roles, implying that CIOs are not worth their salt if they can’t take on the CDO role themselves.

CVS Health proves Nilles wrong. Its CIO, Stephen Gold, “has one of the best resumes in IT,” according to Forbes contributor Peter High. Gold is focused on IT transformation, which, in a company moving rapidly to expand its business portfolio and physical footprint, is a very broad and critical “focus.” CDO responsibilities would probably be just a distraction for him. Answering Nilles in the Symposium panel discussion, Tilzer stressed the importance of having a senior executive in charge of digital, explaining that this is a new marketing channel and an opportunity to engage customers like never before.

Different companies, given their unique business and competitive situation, would respond differently to the challenge of who to entrust with CDO responsibilities. Some would create a completely new business unit and would have the CIO run it in addition to his or her traditional IT responsibilities, as Schindler did. Other companies may combine the CIO and CDO responsibilities in one executive or in one team, but will not create a separate business unit. And some companies, like CVS Health, may opt for a clear division of responsibilities.

“We have separate teams but one team mentality,” Tilzer told me on the sidelines of the Symposium. “We challenge each other in healthy ways. We want to move fast and they want to be secure. We have to take care to do both,” he added.

Tilzer’s team is charged with figuring out what CVS’s customers want and has the expertise to design online experiences to meet these needs. They write the business requirements, and the IT team, with the knowledge of what tools to use and the right software development skills, builds the solution. “The relationship is seamless, we tend to start with the customer and IT tends to start with the technology but it’s got to come together,” said Tilzer.

One could think about the CDO as the chief customization officer. “Digital is the place where all the services CVS provides can come together,” said Tilzer, “connecting our customers and their families with what we do.” And just like information technology, “digital” touches everything today. Tilzer: “My job would be a lot easier if I was trying to build digital CVS, a separate team or unit. What I’m trying to do is to weave digital experiences and technologies into every aspect of our customers’ experience. This requires us to closely collaborate with other [functions].”

One area of collaboration across the CDO team, IT, and marketing is analytics. In the past, data was used primarily to justify projects. But now, “the bigger idea is to use data to personalize experiences,” according to Tilzer. IT builds the tools that Tilzer’s team uses to make customers’ experiences as relevant as possible. At the same time, his team needs to work with marketing to ensure that the personalized messages they develop for customers support the company’s overall communication strategy.

Tilzer has on his team digital strategists, “people who figure out what program we should go after and what great things we should do for our customers.” Other members of the team are product managers, who develop a road map of features and functions to meet the digital strategy, and user experience experts, who design the look and feel of the digital product and work with marketing to ensure compliance with the brand.

One reason for establishing the Digital Innovation Lab in Boston is to expand the pool of talent Tilzer can tap. But the key motivation for opening the lab is to develop an innovation ecosystem. “We are trying to accelerate our rate of innovation,” Tilzer told me. “We need to explore best-of-breed third party partners, bring them into our world, figure out how they can be combined with our digital experiences and test it rapidly with our consumers.”

In Boston, observed Tilzer, “the intersection between technology and healthcare is very strong.” The Digital Innovation Lab is located halfway between the Longwood Medical Area and the Kendall Square technology hub across the river in Cambridge. “We don’t need to do everything ourselves,” Tilzer said.

Originally published on Forbes.com

Posted in Digitization

The Price Declines Behind the Explosion of Data

[Charts: Price indexes for personal computers, software, cameras, and cell phone service]

Source: BloombergBusiness

Posted in Digitization

How Much Will You Pay to Protect Your Data?

[Chart: Data protection as valued by consumers]

Source: Harvard Business Review

See also: ‘We The People Have A Lot Of Work To Do’ Says Schneier In A Must-Read Book On Security And Privacy

Posted in Misc