
Source: Digitlistmag

The history of technology, whether of the last five or five hundred years, is often told as a series of pivotal events or the actions of larger-than-life individuals, of endless “revolutions” and “disruptive” innovations that “change everything.” It is history as hype, offering a distorted view of the past, sometimes through the tinted lenses of contemporary fads and preoccupations.
In contrast, ENIAC in Action: Making and Remaking the Modern Computer is a nuanced, engaging, and thoroughly researched account of the early days of computers, the people who built and operated them, and their old and new applications. Say the authors, Thomas Haigh, Mark Priestley and Crispin Rope:
The titles of dozens of books have tried to lure a broad audience to an obscure topic by touting an idea, a fish, a dog, a map, a condiment, or a machine as having “changed the world”… One of the luxuries of writing an obscure academic book is that one is not required to embrace such simplistic conceptions of history.
Instead, we learn that the Electronic Numerical Integrator and Computer (ENIAC) was “a product of historical contingency, of an accident, and no attempt to present it as the expression of whatever retrospectively postulated law or necessity would contribute to our understanding.”
The ENIAC was developed in a specific context that shaped the life and times of what became “the only fully electronic computer working in the U.S.,” from its inception during World War II to 1950, when other computers successfully joined the race to create a new industry. “Without the war, no one would have built a machine with [ENIAC’s] particular combination of strengths and weaknesses,” say Haigh, Priestley and Rope.
The specific context in which the ENIAC emerged had also to do with the interweaving of disciplines, skills, and people working in old and new roles. The ENIAC was an important milestone in the long evolution of labor-saving devices, scientific measurement, business management, and knowledge work.
Understanding this context sheds new light on women’s role in the emergence of the new discipline of computer science and the new practice of corporate data processing. “Women in IT” has been a topic of much discussion recently, frequently starting with Ada Lovelace, who is for many the “first computer programmer.” A prominent example of the popular view that women invented programming is Walter Isaacson’s The Innovators (see Haigh and Priestley’s rejoinder and a list of factual inaccuracies committed by Isaacson).
It turns out that history (of the accurate kind) can be more inspirational than storytelling driven by current interests and agendas, and can furnish us (of all genders) with more relevant role models. The authors of ENIAC in Action highlight the importance of the work of ENIAC’s mostly female “operators” (neglected by other historians, they say, because of the disinclination to celebrate work seen as blue-collar), reflecting “a long tradition of female participation in applied mathematics within the institutional settings of universities and research laboratories”—a tradition that continued with the ENIAC and similar machines performing the same work (e.g., firing-table computations) but much faster.
The female operators, initially hired to help mainly with the physical configuration of the ENIAC (which was re-wired for each computing task), ended up contributing significantly to the development of “set-up forms” and the emerging computer programming profession: “It was hard to devise a mathematical treatment without good knowledge of the processes of mechanical computation, and it was hard to turn a computational plan into a set-up without hands-on knowledge of how ENIAC ran.”
When computing moved from research laboratories into the corporate world, most firms used existing employees in the newly created “data processing” (later “IT”) department, re-assigning them from relevant positions: punched-card-machine workers, corporate “systems men” (business process redesign), and accountants. Write the authors of ENIAC in Action:
Because all these groups were predominantly male, the story of male domination of administrative programming work was… a story of continuity within a particular institutional context. Thus, we see the history of programming labor not as the creation of a new occupation in which women were first welcomed and then excluded, but rather as a set of parallel stories in which the influence of ENIAC and other early machines remained strong in centers of scientific computation but was negligible in corporate data-processing work.

Programmers Betty Jean Jennings (left) and Fran Bilas (right) operate ENIAC’s main control panel at the Moore School of Electrical Engineering. (U.S. Army photo from the archives of the ARL Technical Library) Source: Wikipedia
Good history is a guide to how society works; bad history is conjuring evil forces where there are none. ENIAC in Action resurrects the pioneering work of the real “first programmers” such as Jean Bartik and Klara von Neumann and explains why corporate IT has evolved to employ mostly their male successors.
Today’s parallel to the ENIAC-era big calculation is big data, as is the notion of “discovery” and the abandonment of hypotheses. “One set initial parameters, ran the program, and waited to see what happened” is today’s “unreasonable effectiveness of data.” There is a direct line in the reshaping of scientific practice from ENIAC’s pioneering simulations to “automated science.” But is the removal of human imagination from scientific practice good for scientific progress?
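The “set initial parameters, run the program, and wait to see what happened” style of computing traces back to the Monte Carlo simulations pioneered on ENIAC. As a rough illustration (this is a modern sketch of the general method, not ENIAC’s actual program), here is the classic Monte Carlo estimation of pi:

```python
import random

def monte_carlo_pi(n_samples: int, seed: int = 42) -> float:
    """Estimate pi by sampling random points in the unit square
    and counting how many land inside the quarter circle."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    # Area ratio (quarter circle / square) is pi/4.
    return 4.0 * inside / n_samples

# Set the parameters, run, and see what happens.
print(monte_carlo_pi(100_000))
```

No hypothesis is tested here; the answer simply emerges from the volume of random trials, which is exactly the epistemological shift the book describes.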
Similarly, it’s interesting to learn about the origins of today’s renewed interest in, fascination with, and fear of “artificial intelligence.” Haigh, Priestley and Rope argue against the claim that the “irresponsible hyperbole” regarding early computers was generated solely by the media, writing that “many computing pioneers, including John von Neumann, [conceived] of computers as artificial brains.”
Indeed, in his First Draft of a Report on the EDVAC—which became the foundation text of modern computer science (or, more accurately, of computer engineering practice)—von Neumann compared the components of the computer to “the neurons of higher animals.” While von Neumann thought that the brain was a computer, he allowed that it was a complex one, and followed McCulloch and Pitts (in their 1943 paper “A Logical Calculus of the Ideas Immanent in Nervous Activity”) in ignoring, as he wrote, “the more complicated aspects of neuron functioning.”
Given that McCulloch said about the “neurons” discussed in his and Pitts’ seminal paper that they “were deliberately as impoverished as possible,” what we have at the dawn of “artificial intelligence” is simplification squared, based on an extremely limited (possibly non-existent at the time) understanding of how the human brain works.
These mathematical exercises, born out of the workings of very developed brains but not mimicking or even remotely describing them, led to the development of “artificial neural networks” which led to “deep learning” which led to the general excitement today about computer programs “mimicking the brain” when they succeed in identifying cat images or beating a Go champion.
In 1949, computer scientist Edmund Berkeley wrote in his book, Giant Brains, or Machines That Think: “These machines are similar to what a brain would be if it were made of hardware and wire instead of flesh and nerves… A machine can handle information; it can calculate, conclude, and choose; it can perform reasonable operations with information. A machine, therefore, can think.”
Haigh, Priestley and Rope write that “…the idea of computers as brains was always controversial, and… most people professionally involved with the field had stepped away from it by the 1950s.” But thirty years later, Marvin Minsky famously stated: “The human brain is just a computer that happens to be made out of meat.”
Most computer scientists by that time were indeed occupied with less lofty goals than playing God, but only very few objected to these kinds of statements, or to Minsky receiving the most prestigious award of their profession (for his role in creating the field of artificial intelligence). Today, the idea that computers and brains are the same thing leads people with very developed brains to conclude that if computers can win at Go, they can think, and that with just a few more short steps up the neural-network evolutionary ladder, computers will reason that it is in their best interest to destroy humanity.
Twenty years ago, the U.S. Postal Service issued a new stamp commemorating the 50th birthday of ENIAC. The stamp displayed an image of a brain partially covered by small blocks containing parts of circuit boards and binary code. One of the few computer scientists who objected to this pernicious and popular idea was Joseph Weizenbaum:
What do these people actually mean when they shout that man is a machine (and a brain a “meat machine”)? It is… that human beings are “computable,” that they are not distinct from other objects in the world… computers enable fantasies, many of them wonderful, but also those of people whose compulsion to play God overwhelms their ability to fathom the consequences of their attempt to turn their nightmares into reality.
The dominant fantasy is that computers “change the world” and “make it a better place,” that they spread democracy and other cherished values, etc., etc. It is vociferously promoted by people who believe themselves to be rational but ignore reality, which has proven again and again that 70 years of computers have done little to change our society. Two recent examples are the hacker who scanned the Internet for networked printers and made them print an anti-Semitic flyer, and the good people at Microsoft who released an “AI-powered chatbot” only to find out that it took Twitter users just 16 hours to teach it to spew racial slurs.
ENIAC and its progeny have not changed what’s most important in our world: humans. Maybe Gates, Hawking, and Musk are right after all. Once computers surpass us in intelligence, they will understand that humanity cannot be changed by technology and it’s better just to get rid of it. In the meantime, the creativity and intelligence of good historians writing books such as ENIAC in Action will keep us informed and entertained.
Originally published on Forbes.com


Consumer IoT is lagging behind industrial IoT in terms of interest, investments, and successful applications. CB Insights has found that in 2011, the industrial IoT accounted for 17% of all IoT funding dollars. In 2015, the share of industrial IoT investment rose to 40% of all IoT investment.
In a recent report citing the results of large surveys of consumers in the U.S. and other countries, Forrester observed that the IoT today “seems to open up more business opportunities in the industrial and B2B space than for consumer brands” (see also Forrester’s blog post). Similarly, in a recent report and webinar based on a large consumer survey in the U.S., IDC has concluded that “beyond security and point-solutions to specific problems, consumer IoT is still looking for a clear value proposition.”
Here are some interesting findings:
14% of U.S. online adults are currently using a wearable, and only 7% use any smart home device. Usage of connected devices in smart homes or cars is even lower in Europe. Smoke and home security monitoring are the two smart home services U.S. consumers are the most interested in, followed closely by water monitoring. (Forrester)
More than 8 million households in the U.S. already use some kind of home automation and control. The home IoT applications consumers are interested in are networked sensors monitoring for fire, smoke, water, or CO at home; seeing and recording who comes to the front door using a video camera; and networked sensors monitoring doors and windows. Consumers are least interested in networked kitchen appliances. (IDC)
Reasons for purchasing a home control application: 30% cited solving a known problem, either recent or long-standing; 40% cited word-of-mouth or news about such devices; almost 20% said it seemed like “a neat solution to a problem I didn’t know I had” (!!!); and over 15% said the device “was on sale.” (IDC)
Preferred installer for home automation and control systems in order of preference: Residential security company, myself, other professional installers, cable or telephone companies. (IDC)
Half of U.S. online adults are concerned that the monthly service cost of smart home technologies would be too high, and 38% fear the initial cost of setup would be too high. (Forrester)
36% of U.S. online adults fear using smart home services could compromise the privacy of their personal information. (Forrester)
Among those interested in home control IoT applications but who haven’t purchased one: high concern around cost (which is common for new applications) and, unusually for new applications, high concerns around reliability and user experience. (IDC)
31% are interested in access to the internet while using the car (i.e., on-board internet) and access to an interactive voice response system (i.e., a digital driving assistant). Telematics-enabled usage based insurance (UBI) is emerging and will disrupt the car insurance industry. (Forrester)
In 2016, 33% of U.S. online adults will use some form of IoT across home, wearables, and car. However, usage in the next two years will primarily be led by wearables and smartwatches. (Forrester)
IDC concludes that “the majority of consumers remain skeptical of the value proposition behind the home Internet of Things and are holding back for a higher overall value proposition.” In the IDC press release, Jonathan Gaw said:
“The long-run impact of the Internet of Things will be broader and deeper than we imagine right now, but the industry is still in the early stages of developing the vision and conveying it to consumers.”
IDC continues: “Winners will solve a problem the consumer didn’t know they had. Security and privacy – punished for a lack of it, probably not rewarded for having it. Voice interfaces have potential, but still need development for mainstream users.”
Forrester’s Thomas Husson and his colleagues cite a pioneering home-focused voice interface, Amazon Echo, as an example of a successful consumer IoT device. “Combining the Dash Buttons’ big-data-meets-internet-of-things experiment with Amazon Echo and Alexa Voice Assistant will enable Amazon to aggregate multiple brands’ offering and anticipate consumers’ needs,” says Forrester.
The report continues: “Because consumers invest little in new experiences, they hurt little when abandoned. The consequence is that the vast majority of new IoT products will fail unless marketers develop a customer relationship that is frequent, emotionally engaging, and conveniently delivered.”
Both Forrester and IDC seem to understand what works and what doesn’t with current IoT offerings, and both continue to advance their knowledge by surveying consumers and talking to enterprise decision-makers.
The U.S. government may want to pay closer attention to their (and other industry observers’) work. The National Telecommunications and Information Administration (NTIA) recently issued a request for comments which included this gem:
Although a number of architectures describing different aspects or various applications of the IoT are being developed, there is no broad consensus on exactly how the concept should be defined or scoped. Consensus has emerged, however, that the number of connected devices is expected to grow exponentially, and the economic impact of those devices will increase dramatically.
To which James Connolly responded: “How can the public comment when even Commerce can’t really define the term?”
Originally published on Forbes.com

Ever wondered how much time the average person spends looking at their TV, computer, phone or laptop? Well, this chart shows exactly that, broken down by country.
Produced by Mary Meeker for her annual presentation on internet trends, the chart reveals some interesting insights. Clearly Indonesia and the Philippines are glued to their screens, but it’s the breakdown where it gets interesting. Look at the disparity in TV viewing between the U.S. and Vietnam, say, despite their similar totals; or the lack of tablet time in South Korea. (But then, maybe that’s because Samsung tablets suck.) And Italy and France barely spend any time at a screen—but then, maybe that’s what happens if you ban email after 6pm. [KPCB via Quartz]

Source: Teradata and EIU
Nearly half of CEOs believe that all of their employees have access to the data they need, but only 27% of employees agree.
That’s according to study results from Teradata, a data analytics and marketing firm. The company commissioned The Economist Intelligence Unit to survey 362 workers across the globe — including those in management, finance, sales and marketing, business development and more.
CEOs also overestimate how quickly “big data” moves through their company, with 43% of CEO respondents believing that relevant data is made available in real-time, compared to 29% of all respondents.
Overall, CEOs are wearing rose-colored glasses when examining the effect big data has on their initiatives: 38% believe their employees are able to extract relevant insights from the data, while only 24% of all respondents do.
The report notes that of companies that outperform in profitability as a result of data-driven marketing, 63% of the initiatives are launched by corporate leadership, and 41% have a centralized data and analytics group. Of companies that say they underperform, 38% of initiatives are launched by the higher-ups and 28% say data and analytics are centralized.

Kirsten Newbold-Knipp, Gartner:
Here are a few highlights from some of our 2016 marketing cool vendors reports as well as guidance on technology selection.

CompTIA:
CompTIA evaluates trends for its IT Industry Outlook based on their recent or imminent impact. For developments that are just emerging, or trends that are still under the radar, Buzzword Watch provides a glimpse of terms that could gain traction. Of course, many will also fizzle out.
Note: CompTIA’s Buzzword Watch is not meant to be a formal, quantitative assessment of trends, but rather an informal look at interesting concepts that may be worth paying attention to in the year ahead.