ENIAC in Action: Making and Remaking the Modern Computer

The history of technology, whether of the last five or five hundred years, is often told as a series of pivotal events or the actions of larger-than-life individuals, of endless “revolutions” and “disruptive” innovations that “change everything.” It is history as hype, offering a distorted view of the past, sometimes through the tinted lenses of contemporary fads and preoccupations.

In contrast, ENIAC in Action: Making and Remaking the Modern Computer is a nuanced, engaging, and thoroughly researched account of the early days of computers, the people who built and operated them, and their old and new applications. Say the authors, Thomas Haigh, Mark Priestley and Crispin Rope:

The titles of dozens of books have tried to lure a broad audience to an obscure topic by touting an idea, a fish, a dog, a map, a condiment, or a machine as having “changed the world”… One of the luxuries of writing an obscure academic book is that one is not required to embrace such simplistic conceptions of history.

Instead, we learn that the Electronic Numerical Integrator and Computer (ENIAC) was “a product of historical contingency, of an accident, and no attempt to present it as the expression of whatever retrospectively postulated law or necessity would contribute to our understanding.”

The ENIAC was developed in a specific context that shaped the life and times of what became “the only fully electronic computer working in the U.S.,” from its inception during World War II to 1950, when other computers successfully joined the race to create a new industry. “Without the war, no one would have built a machine with [ENIAC’s] particular combination of strengths and weaknesses,” say Haigh, Priestley and Rope.

The specific context in which the ENIAC emerged had also to do with the interweaving of disciplines, skills, and people working in old and new roles. The ENIAC was an important milestone in the long evolution of labor-saving devices, scientific measurement, business management, and knowledge work.

Understanding this context sheds new light on women’s role in the emergence of the new discipline of computer science and the new practice of corporate data processing. “Women in IT” has been a topic of much discussion recently, frequently starting with Ada Lovelace, who is for many the “first computer programmer.” A prominent example of the popular view that women invented programming is Walter Isaacson’s The Innovators (see Haigh and Priestley’s rejoinder and a list of factual inaccuracies committed by Isaacson).

It turns out that history (of the accurate kind) can be more inspirational than story-telling driven by current interests and agendas, and can furnish us (of all genders) with more relevant role models. The authors of ENIAC in Action highlight the importance of the work of ENIAC’s mostly female “operators” (neglected by other historians, they say, because of a disinclination to celebrate work seen as blue-collar), which reflected “a long tradition of female participation in applied mathematics within the institutional settings of universities and research laboratories,” a tradition that continued with the ENIAC and similar machines performing the same work (e.g., firing-table computations) but much faster.

The female operators, initially hired to help mainly with the physical configuration of the ENIAC (which was re-wired for each computing task), ended up contributing significantly to the development of “set-up forms” and the emerging computer programming profession: “It was hard to devise a mathematical treatment without good knowledge of the processes of mechanical computation, and it was hard to turn a computational plan into a set-up without hands-on knowledge of how ENIAC ran.”

When computing moved from research laboratories into the corporate world, most firms used existing employees in the newly created “data processing” (later “IT”) department, re-assigning them from relevant positions: punched-card-machine workers, corporate “systems men” (business process redesign), and accountants. Write the authors of ENIAC in Action:

Because all these groups were predominantly male, the story of male domination of administrative programming work was… a story of continuity within a particular institutional context. Thus, we see the history of programming labor not as the creation of a new occupation in which women were first welcomed and then excluded, but rather as a set of parallel stories in which the influence of ENIAC and other early machines remained strong in centers of scientific computation but was negligible in corporate data-processing work.


Programmers Betty Jean Jennings (left) and Fran Bilas (right) operate ENIAC’s main control panel at the Moore School of Electrical Engineering. (U.S. Army photo from the archives of the ARL Technical Library) Source: Wikipedia

Good history is a guide to how society works; bad history is conjuring evil forces where there are none. ENIAC in Action resurrects the pioneering work of the real “first programmers” such as Jean Bartik and Klara von Neumann and explains why corporate IT has evolved to employ mostly their male successors.

Good history also provides us with a mirror in which we can compare and contrast past and present developments. The emergence of the “data science” profession today, in which women play a more significant role than in the traditional IT profession, parallels the emergence of computer programming. Just as the latter required knowledge of both computer operations and mathematical analysis, data science marries knowledge of computers with statistical analysis skills.
Developing models is the core of data scientists’ work and ENIAC in Action devotes considerable space to the emergence of computer simulations and the discussion of their impact on scientific practice. Simulations brought on a shift from equations to algorithms, providing “a fundamentally experimental way of discovering the properties of the system described.”
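The experimental flavor of simulation is easy to demonstrate. ENIAC’s pioneering Monte Carlo runs modeled neutron histories; the toy sketch below (my own illustration, not code from the book) instead estimates π, but the workflow is the same: set the parameters, run the program, and see what happens.

```python
import random

def monte_carlo_pi(trials, seed=1946):
    """Estimate pi experimentally: random darts thrown at the unit square
    land inside the quarter circle with probability pi/4."""
    rng = random.Random(seed)
    hits = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0
               for _ in range(trials))
    return 4 * hits / trials

# The "property of the system" (pi) is discovered, not derived:
print(monte_carlo_pi(100_000))  # converges toward 3.14159 as trials grow
```

No equation for π appears anywhere in the program; the answer emerges from repeated trials, which is exactly the shift from equations to algorithms the authors describe.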


Today’s parallel to the ENIAC-era big calculation is big data, as are the notions of “discovery” and the abandonment of hypotheses. “One set initial parameters, ran the program, and waited to see what happened” is today’s “unreasonable effectiveness of data.” There is a direct line in the re-shaping of scientific practice from ENIAC’s pioneering simulations to “automated science.” But is the removal of human imagination from scientific practice good for scientific progress?

Similarly, it’s interesting to learn about the origins of today’s renewed interest in, fascination with, and fear of “artificial intelligence.” Haigh, Priestley and Rope argue against the claim that the “irresponsible hyperbole” regarding early computers was generated solely by the media, writing that “many computing pioneers, including John von Neumann, [conceived] of computers as artificial brains.”

Indeed, in his First Draft of a Report on the EDVAC—which became the foundational text of modern computer science (or, more accurately, of computer engineering practice)—von Neumann compared the components of the computer to “the neurons of higher animals.” While von Neumann thought that the brain was a computer, he allowed that it was a complex one, following McCulloch and Pitts (in their 1943 paper “A Logical Calculus of the Ideas Immanent in Nervous Activity”) in ignoring “the more complicated aspects of neuron functioning.”

Given that McCulloch said about the “neurons” discussed in his and Pitts’ seminal paper that they “were deliberately as impoverished as possible,” what we have at the dawn of “artificial intelligence” is simplification squared, based on an extremely limited (possibly non-existent at the time) understanding of how the human brain works.
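That impoverishment is easy to exhibit in a few lines of code. The sketch below is a standard modern rendering of the 1943 formal neuron (the function names are mine, not the paper’s): a binary unit that fires when a weighted sum crosses a threshold.

```python
def mp_neuron(inputs, weights, threshold):
    """A McCulloch-Pitts unit: binary inputs and output; the neuron
    fires (returns 1) iff the weighted sum reaches its threshold."""
    return int(sum(i * w for i, w in zip(inputs, weights)) >= threshold)

# Whole logic gates fall out of a single threshold choice:
def and_gate(a, b): return mp_neuron([a, b], [1, 1], threshold=2)
def or_gate(a, b):  return mp_neuron([a, b], [1, 1], threshold=1)
def not_gate(a):    return mp_neuron([a], [-1], threshold=0)
```

Timing, chemistry, and learning, everything a biological neuron does beyond thresholded summation, are discarded; yet entire logic circuits, and hence computers, can be assembled from such units.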

These mathematical exercises, born out of the workings of very developed brains but not mimicking or even remotely describing them, led to the development of “artificial neural networks” which led to “deep learning” which led to the general excitement today about computer programs “mimicking the brain” when they succeed in identifying cat images or beating a Go champion.

In 1949, computer scientist Edmund Berkeley wrote in his book, Giant Brains, or Machines That Think: “These machines are similar to what a brain would be if it were made of hardware and wire instead of flesh and nerves… A machine can handle information; it can calculate, conclude, and choose; it can perform reasonable operations with information. A machine, therefore, can think.”

Haigh, Priestley and Rope write that “…the idea of computers as brains was always controversial, and… most people professionally involved with the field had stepped away from it by the 1950s.” But thirty years later, Marvin Minsky famously stated: “The human brain is just a computer that happens to be made out of meat.”

Most computer scientists by that time were indeed occupied with less lofty goals than playing God, but only very few objected to these kinds of statements, or to Minsky receiving the most prestigious award of their profession (for his role in creating the field of artificial intelligence). Today, the idea that computers and brains are the same thing leads people with very developed brains to conclude that if computers can win in Go, they can think, and that with just a few more short steps up the neural-network evolutionary ladder, computers will reason that it’s in their best interests to destroy humanity.

Twenty years ago, the U.S. Postal Service issued a new stamp commemorating the 50th birthday of ENIAC. The stamp displayed an image of a brain partially covered by small blocks containing parts of circuit boards and binary code. One of the few computer scientists who objected to this pernicious and popular idea was Joseph Weizenbaum:

What do these people actually mean when they shout that man is a machine (and a brain a “meat machine”)? It is… that human beings are “computable,” that they are not distinct from other objects in the world… computers enable fantasies, many of them wonderful, but also those of people whose compulsion to play God overwhelms their ability to fathom the consequences of their attempt to turn their nightmares into reality.

The dominant fantasy is that computers “change the world” and “make it a better place,” that they spread democracy and other cherished values, etc., etc. It is vociferously promoted by people who believe themselves to be rational but ignore reality, which has proven again and again that 70 years of computers have done little to change our society. Two recent examples are the hacker who scanned the Internet for networked printers and made them print an anti-Semitic flyer, and the good people at Microsoft who released an “AI-powered chatbot” only to find out that it took Twitter users just 16 hours to teach it to spew racial slurs.

ENIAC and its progeny have not changed what’s most important in our world: humans. Maybe Gates, Hawking, and Musk are right after all. Once computers surpass us in intelligence, they will understand that humanity cannot be changed by technology and it’s better just to get rid of it. In the meantime, the creativity and intelligence of good historians writing books such as ENIAC in Action will keep us informed and entertained.

Originally published on Forbes.com

Posted in Computer History

Forrester and IDC on Consumer Interest in IoT

Figure 5 - Forrester

Key takeaways from IDC’s 2016 Consumer IoT Webinar (April 2016)

Consumer IoT vs. industrial IoT

Consumer IoT is lagging behind industrial IoT in terms of interest, investments, and successful applications. CB Insights has found that in 2011, the industrial IoT accounted for 17% of all IoT funding dollars. In 2015, the share of industrial IoT investment rose to 40% of all IoT investment.

In a recent report citing the results of large surveys of consumers in the U.S. and other countries, Forrester observed that the IoT today “seems to open up more business opportunities in the industrial and B2B space than for consumer brands” (see also Forrester’s blog post). Similarly, in a recent report and webinar based on a large consumer survey in the U.S., IDC has concluded that “beyond security and point-solutions to specific problems, consumer IoT is still looking for a clear value proposition.”

Here are some interesting findings:

14% of U.S. online adults are currently using a wearable, and only 7% use any smart home device. Usage of connected devices in smart homes or cars is even lower in Europe. Smoke and home security monitoring are the two smart home services U.S. consumers are the most interested in, followed closely by water monitoring. (Forrester)

More than 8 million households in the U.S. already use some kind of home automation and control. The home IoT applications consumers are interested in are networked sensors monitoring for fire, smoke, water, or CO at home; seeing and recording who comes to the front door using a video camera; and networked sensors monitoring doors and windows. Consumers are least interested in networked kitchen appliances. (IDC)

Reasons for purchasing a home control application: 30% cited solving a known problem, either recent or long-standing; 40% cited word-of-mouth or news about such devices; almost 20% said it seemed like “a neat solution to a problem I didn’t know I had” (!!!); and over 15% said that the device “was on sale.” (IDC)

Preferred installer for home automation and control systems in order of preference: Residential security company, myself, other professional installers, cable or telephone companies. (IDC)

Half of U.S. online adults are concerned that the monthly service cost of smart home technologies would be too high, and 38% fear the initial cost of setup would be too high. (Forrester)

36% of U.S. online adults fear using smart home services could compromise the privacy of their personal information. (Forrester)

Among those interested in a home control IoT application who haven’t purchased one: high concern around cost (which is common for new applications) and unusually (for new applications) high concerns around reliability and user experience. (IDC)

31% are interested in access to the internet while using the car (i.e., on-board internet) and access to an interactive voice response system (i.e., a digital driving assistant). Telematics-enabled usage based insurance (UBI) is emerging and will disrupt the car insurance industry. (Forrester)

In 2016, 33% of U.S. online adults will use some form of IoT across home, wearables, and car. However, usage in the next two years will primarily be led by wearables and smartwatches. (Forrester)

IDC concludes that “the majority of consumers remain skeptical of the value proposition behind the home Internet of Things and are holding back for a higher overall value proposition.” In the IDC press release, Jonathan Gaw said:

“The long-run impact of the Internet of Things will be broader and deeper than we imagine right now, but the industry is still in the early stages of developing the vision and conveying it to consumers.”

IDC continues: “Winners will solve a problem the consumer didn’t know they had. Security and privacy – punished for a lack of it, probably not rewarded for having it. Voice interfaces have potential, but still need development for mainstream users.”

Forrester’s Thomas Husson and his colleagues cite a pioneering home-focused voice interface, Amazon Echo, as an example of a successful consumer IoT device. “Combining the Dash Buttons’ big-data-meets-internet-of-things experiment with Amazon Echo and Alexa Voice Assistant will enable Amazon to aggregate multiple brands’ offerings and anticipate consumers’ needs,” says Forrester.

The report continues: “Because consumers invest little in new experiences, they hurt little when abandoned. The consequence is that the vast majority of new IoT products will fail unless marketers develop a customer relationship that is frequent, emotionally engaging, and conveniently delivered.”

Both Forrester and IDC seem to understand what works and what doesn’t work with current IoT offerings, and both continue to advance their knowledge by surveying consumers and talking to enterprise decision-makers.

The U.S. government may want to pay closer attention to their (and other industry observers’) work. The National Telecommunications and Information Administration (NTIA) recently issued a request for comments which included this gem:

Although a number of architectures describing different aspects or various applications of the IoT are being developed, there is no broad consensus on exactly how the concept should be defined or scoped. Consensus has emerged, however, that the number of connected devices is expected to grow exponentially, and the economic impact of those devices will increase dramatically.

To which James Connolly responded: “How can the public comment when even Commerce can’t really define the term?”

Originally published on Forbes.com

Posted in Internet of Things

Why Enterprises Use AI and for What

Chart: the most widely used AI solutions

HT: Raconteur

Posted in AI, Misc

How Much Time the World Spends Looking at Screens (Infographic)

Chart: daily screen minutes by country

Gizmodo

Ever wondered how much time the average person spends looking at their TV, computer, phone or laptop? Well, this chart shows exactly that, broken down by country.

Produced by Mary Meeker for her annual presentation on internet trends, the chart reveals some interesting insights. Clearly Indonesia and the Philippines are glued to their screens, but it’s the breakdown where it gets interesting. Look at the disparity in TV viewing between the U.S. and Vietnam, say, despite their similar totals; or the lack of tablet time in South Korea. (But then, maybe that’s because Samsung tablets suck.) And Italy and France barely spend any time at a screen—but then, maybe that’s what happens if you ban email after 6pm. [KPCB via Quartz]

Posted in Misc

Nearly 50% of CEOs believe all of their employees have access to the data they need, only 27% of employees agree



Source: Teradata and EIU

RTBlog:

Nearly half of CEOs believe that all of their employees have access to the data they need, but only 27% of employees agree.

That’s according to study results from Teradata, a data analytics and marketing firm. The company commissioned The Economist Intelligence Unit to survey 362 workers across the globe — including those in management, finance, sales and marketing, business development and more.

CEOs also overestimate how quickly “big data” moves through their company, with 43% of CEO respondents believing that relevant data is made available in real-time, compared to 29% of all respondents.

Overall, CEOs are wearing rose-colored glasses when examining the overall effectiveness big data has on their initiatives: 38% believe their employees are able to extract relevant insights from the data, while only 24% of all respondents do.

The report notes that of companies that outperform in profitability as a result of data-driven marketing, 63% of the initiatives are launched by corporate leadership, and 41% have a centralized data and analytics group. Of companies that say they underperform, 38% of initiatives are launched by the higher-ups and 28% say data and analytics are centralized.

Posted in Misc

Gartner Marketing Technology Map May 2016


Kirsten Newbold-Knipp, Gartner:

Here are a few highlights from some of our 2016 marketing cool vendors reports as well as guidance on technology selection.

  • Cool Vendors in Content Marketing: As content marketing grows up from its early tactical success to become a scalable program, marketers need to expand their content pipeline with high quality results. The vendors highlighted in this year’s research all heed that call: Seenit supports UGC video, Canva and Visage extend graphic design capabilities and Cintell helps increase content relevance by making personas more actionable.
  • Cool Vendors in Digital Commerce Marketing: Marketers need to enhance and evolve their digital commerce marketing to create compelling shopping experiences. Both Edgecase and Reflektion make shopping more informative and relevant, while ChannelSight powers distributed commerce and shoppable media to expand marketers’ addressable audience. Two players make it easier to merchandise complex products – Marxent uses virtual reality to bring products and environments to life, while True Fit takes the guesswork out of sizing shoes and clothing online.
  • Cool Vendors in Mobile Marketing: Understanding how consumers use mobile devices to engage online and offline is key to an effective mobile marketing strategy. Location-based marketing is top of mind. Bluefox helps retailers and brands engage in location-based personalization – without the need for an app, NinthDecimal supports location-related insights with a focus on online/offline attribution, and Gravy provides location-informed behavioral analytics by tracking attendance at live events. Yext serves the flip side of location, helping businesses maintain their physical-world information across online directories and listings. Marfeel – the only non-location-based player in this year’s lineup – provides tools to design native mobile-optimized pages that load faster and create better end-user experiences.
  • Cool Vendors in Social Marketing: Social marketers are looking for new ways to leverage the audiences they’ve built over years – from activation to insights – especially now that advertising dominates organic reach in social. On the activation front, Ahalogy helps clients drive sales through pay-for-performance via Pinterest, Chirpify offers awards in return for social engagement that builds out richer customer profiles, and Mavrck uses the concept of microinfluence to drive scalable word-of-mouth efforts. Using social data for customer insights, Hyperactivate lets brands track and understand which individuals create the most campaign impact, whereas Pixability helps marketers track and optimize their YouTube advertising efforts.
  • Framework for Choosing Digital Marketing Technology (Gartner client access only): With so many cool vendors, you need to choose carefully to ensure that you get the most bang for your marketing buck. It’s vital that you don’t jeopardize your investment by looking too narrowly at a particular feature set or become restricted by unrealistic budget caps. You need to home in on how the technology will help you achieve your marketing goals. Gartner’s framework will help you define and articulate the capabilities you need the tech to deliver. And it will guide you through the process of vendor selection, to ensure you get the tech that’s best for you.
Posted in Misc

Buzzword Watch: What’s In and Out in Technology 2016

compTIA-buzzwords

CompTIA:

CompTIA evaluates trends for its IT Industry Outlook based on their recent or imminent impact. For developments that are just emerging, or trends that are still under the radar, Buzzword Watch provides a glimpse of terms that could gain traction. Of course, many will also fizzle out.

Note: CompTIA’s Buzzword Watch is not meant to be a formal, quantitative assessment of trends, but rather an informal look at interesting concepts that may be worth paying attention to in the year ahead.

Posted in Misc

History of the Internet of Things (IoT)


Source: 

See also: A Very Short History Of The Internet Of Things

Posted in Internet of Things

A Day in the Life of a Data Scientist


HT: @NinjaEconomics

See also: Cleaning Big Data: Most Time-Consuming, Least Enjoyable Data Science Task, Survey

Posted in Misc

DMway Automates Predictive Analytics

In its most recent hype cycle for emerging technologies, Gartner introduced “citizen data science” and “advanced analytics with self-service delivery.” Both technologies were predicted to reach the “plateau of productivity” in 2 to 5 years.

The shortage of data scientists and the resulting high salaries they command is giving rise to new self-service tools, automating all stages of data science so business analysts, marketing managers, IT staff and others could perform advanced analytics as part of their jobs.

By 2017, Gartner says, the number of these citizen data scientists in small and large organizations will grow five times faster than the number of highly skilled data scientists. Forrester agrees that the “huge demand” for data scientists will not be met in the short term, “even as more degree programs launch globally.” And the demand for advanced data analysis will only increase in the coming years with the rise of the Internet of Things.

Automation also helps the few overworked data scientists available today, making the experienced more productive and helping the newly-minted add value faster.  A number of startups, such as Trifacta and Tamr, have focused on the early stages of the data analytics process—data preparation and transformation—and others have focused on later stages such as data visualization or on specific applications and industries.

An interesting challenge is automating the core of the data science process, the development and maintenance of predictive models (Forrester recently declared that predictive analytics is the hottest big data technology). The founders of DMway, which recently raised $1 million in seed funding from JVP Labs, have “spent their entire careers on understanding and mapping the methods of algorithm and model developers,” says CEO Gil Nizri.

“Predictive analytics is a great competitive differentiator but it is still beyond the reach of most organizations,” adds Nizri.  “DMway is enabling any size company, from SMB to enterprise, to compete on a level playing field.”

DMway’s model building “mimics the way a human expert develops a model,” says CTO Ronen Meiri. It starts by exploring the data, searching through all potential predictors and selecting the most influential. Using the set of influential predictors it creates a final prediction model and then applies it to an independent dataset to check its accuracy and over-fitting, making sure the model is general enough to apply to new observations.  Finally, it provides multiple methods for seamless integration and deployment of the model.
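DMway’s internals are proprietary, but the workflow Meiri describes (search the candidate predictors, keep the most influential, fit a model, then score it on data it never saw to catch over-fitting) can be sketched generically. Everything below is an illustrative assumption, not DMway’s actual method: the correlation-based ranking, the function names, and the synthetic “churn” data are all mine.

```python
import random
import statistics

def pearson(xs, ys):
    """Correlation between two columns; 0 if either column is constant."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def build_model(rows, target, k=2):
    """Search all candidate predictors, keep the k most correlated with
    the target, and fit a least-squares line on the strongest one."""
    ys = [r[target] for r in rows]
    ranked = sorted((c for c in rows[0] if c != target),
                    key=lambda c: abs(pearson([r[c] for r in rows], ys)),
                    reverse=True)
    best = ranked[0]
    xs = [r[best] for r in rows]
    slope = pearson(xs, ys) * statistics.stdev(ys) / statistics.stdev(xs)
    intercept = statistics.fmean(ys) - slope * statistics.fmean(xs)
    return {"selected": ranked[:k], "best": best,
            "slope": slope, "intercept": intercept}

def holdout_r2(model, rows, target):
    """Score the model on held-out data, to check for over-fitting."""
    ys = [r[target] for r in rows]
    preds = [model["intercept"] + model["slope"] * r[model["best"]]
             for r in rows]
    ss_res = sum((y - p) ** 2 for y, p in zip(ys, preds))
    mean_y = statistics.fmean(ys)
    return 1 - ss_res / sum((y - mean_y) ** 2 for y in ys)

# Synthetic data: 'spend' drives the target, 'noise' does not.
rng = random.Random(7)
rows = []
for _ in range(300):
    s = rng.uniform(0, 10)
    rows.append({"spend": s, "noise": rng.uniform(0, 10),
                 "churn_score": 3 * s + rng.gauss(0, 1)})

model = build_model(rows[:200], target="churn_score")
print(model["best"], round(holdout_r2(model, rows[200:], "churn_score"), 2))
```

The sketch correctly picks "spend" over "noise" and validates on the last 100 rows it never trained on; a production system would of course consider interactions, nonlinear models, and richer diagnostics, but the explore-select-fit-validate loop is the part being automated.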

The result is faster model development and more accurate models, sometimes 20% more accurate than traditionally-developed models. The benefits of automation, however, do not apply only to the initial development of the model. “Most of the resources are going to model maintenance and not to building the model for the first time,” says Meiri. “In micro-financing, for example, they usually re-build the model every three months.”

Businesses operating in environments with fast-changing conditions are prime candidates for automated model maintenance and a number of DMway’s early customers are Fintech startups. BACKED, providing loans to young Americans, uses DMway to predict loan defaults and Fido Credit, provider of micro-financing in Africa, uses DMway to assess credit risk.  Beyond the financial sector, DMway’s automated model development is used by the marketing department of YES, a Cable TV operator, to predict customer churn and facilitate lead conversion.

As Eric Siegel, founder of Predictive Analytics World and author of Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie, or Die, declaimed in a previous version of this rap video:

[youtube https://www.youtube.com/watch?v=bSP3z0LmWEg?rel=0]

Modeling means modifying models incrementally,

With a geek technique to tweak, it will reach the peak eventually.

Each step is taken to improve prediction on the training cases,

One small step for man; one giant leap—the human race is going places!

DMway is a good example of how automation is best discussed as human augmentation rather than human replacement, as it facilitates analyst-machine collaboration. The human race may indeed go places when data scientists—both of the highly skilled and of the “citizen” varieties—are supplied with tools that increase their productivity and the accuracy of models that drive decisions.

Originally published on Forbes.com

Posted in Predictive analytics