What Are Amazon Echo, Apple Siri and Google Now Good For?

Voice_assistance

Posted in Misc | Leave a comment

600+ companies investing in deep learning

DeepLearning_investment

DeepLearning_investments


O’Reilly Data:

…more than 600 companies have jumped into applying deep learning with real budgets. …about 90 companies (level 3), have made strategic investments in deep learning for their businesses. Another 177 companies (level 2) are developing projects using deep learning with dedicated resources in staff. And more than 350 companies (level 1) are experimenting with deep learning in their labs.

Given how early deep learning is as a technology, the majority of companies investing in deep learning are IT and software businesses. However, we discovered interesting champions in other industries that are adopting deep learning as well. [Above] are some examples of companies that are not in traditional software or IT businesses, but that are adopting deep learning. Given that deep learning has early roots in image processing, it is exciting to see health care companies like Siemens Healthcare and GE Healthcare leading the pack, along with research institutions like the NIH and Lawrence Livermore National Labs.


Posted in deep learning | Tagged | Leave a comment

A Few Internet of Things (IoT) Facts (Infographic)

IoT_facts_infographic.jpg

Posted in Internet of Things | Leave a comment

Why the Growing Threat of Ransomware is Good for You

“Unbelievable” is how FBI Cyber Division Assistant Director James Trainor last week described the increase in the number and sophistication of ransomware attacks in the first quarter of 2016, according to CIO Journal.

Last year, there were 2,453 reported ransomware incidents in the U.S., in which victims paid about $24.1 million. We can expect much more in 2016, says the FBI, defining ransomware as “an insidious type of malware that encrypts, or locks, valuable digital files and demands a ransom to release them.”

Yaki_Varonis

Yaki Faitelson, CEO, Varonis

Yaki Faitelson, CEO of Varonis, sees a silver lining in the changing threat environment. Ransomware, he argues, is the only type of cybersecurity infiltration where the attackers want their presence to be known, typically shortly after succeeding in obtaining access to the victim’s files and encrypting them.

“Ransomware is very vocal,” says Faitelson, “but it acts exactly like other malicious insider threats.” As such, it can serve as a sort of cybersecurity training exercise, exposing specific vulnerabilities in the victims’ defenses.

“This is what we call security from the inside out,” says Faitelson. “Nearly all data breaches come, in one form or another, from insiders.” Data breaches can originate with a disgruntled employee or one seeking a material gain. But for the most part, they are the result of inadequate management of data access permissions compounded by innocent mistakes committed by insiders, such as clicking on an e-mail with a malware attachment.

You may think that with all the publicity about “phishing” attempts, people are much more careful about opening email attachments from unknown sources. But the 2016 Data Breach Investigations Report found that 30% of phishing messages were opened, up from 24% last year, and that 12% of email users went on to click the malicious attachment.

Additional fuel to the ransomware fire is its increased sophistication: it now spreads to your organization not only via email but also through infected websites, taking advantage of unpatched software on end-user computers.

So what’s the best protection? “Ransomware is about backups, more so than anything else,” says the FBI’s Trainor. Faitelson begs to differ. “Most organizations don’t have effective backup,” he says. Their physical backups are not up to date and are costly to recover from, and their up-to-date backup files are increasingly targeted by ransomware attackers, who make sure to encrypt them as well.

Instead of relying solely on physical backup, Varonis recommends constant monitoring of the IT infrastructure, looking for mass encryption beyond a certain threshold and for the typical file extensions that ransomware creates.
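
To make that concrete, here is a minimal sketch of such monitoring in Python: scan a file share for recently modified files carrying extensions commonly appended by ransomware and raise an alert when their number crosses a threshold within a short window. The path, extension list, window, and threshold are illustrative assumptions, not Varonis’s actual detection logic.

```python
# A minimal sketch of the monitoring described above. All names and numbers
# here (path, extensions, window, threshold) are illustrative assumptions.
import os
import time
from pathlib import Path

WATCHED_ROOT = Path("/mnt/shared")          # hypothetical file share
SUSPECT_EXTENSIONS = {".locked", ".encrypted", ".crypt", ".locky"}  # illustrative list
WINDOW_SECONDS = 300                        # look at the last 5 minutes
ALERT_THRESHOLD = 50                        # "mass encryption" cutoff (assumed)

def count_suspect_files(root: Path, window: int) -> int:
    """Count recently modified files whose extension is on the watchlist."""
    cutoff = time.time() - window
    count = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = Path(dirpath) / name
            try:
                if path.suffix.lower() in SUSPECT_EXTENSIONS and path.stat().st_mtime >= cutoff:
                    count += 1
            except OSError:
                continue  # file disappeared or is unreadable; skip it
    return count

if __name__ == "__main__":
    hits = count_suspect_files(WATCHED_ROOT, WINDOW_SECONDS)
    if hits >= ALERT_THRESHOLD:
        print(f"ALERT: {hits} suspicious file modifications in the last {WINDOW_SECONDS}s")
    else:
        print(f"OK: {hits} suspicious file modifications in the last {WINDOW_SECONDS}s")
```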

“The best way to find today’s sophisticated attackers is user behavior analytics, understanding what is normal and what is not, identifying behavioral anomalies for accounts that are targeted by hackers,” says Faitelson.
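
Below is a minimal sketch of that “what is normal and what is not” idea, assuming a simple z-score over a made-up history of daily file-access counts; real user behavior analytics, Varonis’s included, are of course far more sophisticated than this.

```python
# Flag a user's day as anomalous if its file-access count sits far outside
# that user's historical baseline. Data and threshold are illustrative.
from statistics import mean, stdev

def is_anomalous(history: list[int], today: int, z_cutoff: float = 3.0) -> bool:
    """Flag today's activity if it is more than z_cutoff standard deviations
    above the user's historical mean."""
    if len(history) < 2:
        return False                      # not enough history to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today > mu                 # flat history: any increase is unusual
    return (today - mu) / sigma > z_cutoff

# Hypothetical example: a user who normally touches a few dozen files a day
# suddenly touches thousands (typical of ransomware encrypting a share).
history = [35, 42, 28, 51, 39, 44, 30]
print(is_anomalous(history, 41))    # False: within the normal range
print(is_anomalous(history, 4800))  # True: far above the baseline
```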

User behavior analytics is a relatively new cybercrime-fighting tool for Varonis and the industry. Realizing that protecting the perimeter and the endpoints of the IT infrastructure is not enough, the industry is moving rapidly to develop and provide machine learning tools that detect anomalies and alert security staff to unusual activity. Faitelson argues that Varonis has a head start in this field, as it has been monitoring and analyzing how users interact with data and file systems since 2005.

Before he and Ohad Korkus founded Varonis, they worked in professional services for NetApp. While they were implementing a project in Angola for a large energy exploration firm, someone deleted many critical files: images taken from the ocean floor at great expense. Finding out who deleted the files became a monumental task.

It was then that they realized that enterprises needed a much better way to track, visualize, analyze and protect their data. That led to Varonis’ initial focus on data management, on understanding, mapping, and organizing data ownership, rights, and responsibilities across the enterprise.

That decade-plus experience, specifically the gathering and analysis of metadata (data about the data, its use, and users’ interactions with it), now informs the algorithms and automated rules Varonis uses to identify abnormal behavior without generating lots of distracting “false positives,” alerts triggered by benign activity. Given the 33% revenue growth Varonis announced last week, the move to applying its data management expertise to cybersecurity seems to be working.

Ransomware may be changing the dynamics of cyber defense, but it may also change how organizations value their information. That may be another ransomware silver lining: it quantifies, in monetary terms, what it costs not to have access to specific records and files. Says Faitelson: “Ransomware shows the organization the value of the data.”

Originally published on Forbes.com

Posted in Misc | Tagged | Leave a comment

10 Inventions Predicted By The Simpsons (Video)

[youtube https://www.youtube.com/watch?v=ndTglH25dDY?rel=0]

Posted in Misc | Leave a comment

Where should you put your data scientists?

[slideshare id=61486991&doc=whereshouldyouputyourdatascientists-160429025555]

Posted in Data Science Careers, Data Scientists | Leave a comment

The Economic Impact of the Internet of Things

IoT_economic impact

Source: Digitalistmag

Posted in Internet of Things | Leave a comment

ENIAC in Action: Making and Remaking the Modern Computer

EniacInActionCover

The history of technology, whether of the last five or five hundred years, is often told as a series of pivotal events or the actions of larger-than-life individuals, of endless “revolutions” and “disruptive” innovations that “change everything.” It is history as hype, offering a distorted view of the past, sometimes through the tinted lenses of contemporary fads and preoccupations.

In contrast, ENIAC in Action: Making and Remaking the Modern Computer is a nuanced, engaging and thoroughly researched account of the early days of computers, the people who built and operated them, and their old and new applications. Say the authors, Thomas Haigh, Mark Priestley and Crispin Rope:

The titles of dozens of books have tried to lure a broad audience to an obscure topic by touting an idea, a fish, a dog, a map, a condiment, or a machine as having “changed the world”… One of the luxuries of writing an obscure academic book is that one is not required to embrace such simplistic conceptions of history.

Instead, we learn that the Electronic Numerical Integrator and Computer (ENIAC) was “a product of historical contingency, of an accident, and no attempt to present it as the expression of whatever retrospectively postulated law or necessity would contribute to our understanding.”

The ENIAC was developed in a specific context that shaped the life and times of what became “the only fully electronic computer working in the U.S.,” from its inception during World War II to 1950, when other computers successfully joined the race to create a new industry. “Without the war, no one would have built a machine with [ENIAC’s] particular combination of strengths and weaknesses,” say Haigh, Priestley and Rope.

The specific context in which the ENIAC emerged also had to do with the interweaving of disciplines, skills, and people working in old and new roles. The ENIAC was an important milestone in the long evolution of labor-saving devices, scientific measurement, business management, and knowledge work.

Understanding this context sheds new light on women’s role in the emergence of the new discipline of computer science and the new practice of corporate data processing. “Women in IT” has been a topic of much discussion recently, frequently starting with Ada Lovelace, who is for many the “first computer programmer.” A prominent example of the popular view that women invented programming is Walter Isaacson’s The Innovators (see Haigh and Priestley’s rejoinder and a list of factual inaccuracies committed by Isaacson).

It turns out that history (of the accurate kind) can be more inspirational than story-telling driven by current interests and agendas, and can furnish us (of all genders) with more relevant role models. The authors of ENIAC in Action highlight the importance of the work of ENIAC’s mostly female “operators” (neglected by other historians, they say, because of the disinclination to celebrate work seen as blue-collar), reflecting “a long tradition of female participation in applied mathematics within the institutional settings of universities and research laboratories,” a tradition that continued with the ENIAC and similar machines performing the same work (e.g., firing-table computations) but much faster.

The female operators, initially hired to help mainly with the physical configuration of the ENIAC (which was re-wired for each computing task), ended up contributing significantly to the development of “set-up forms” and the emerging computer programming profession: “It was hard to devise a mathematical treatment without good knowledge of the processes of mechanical computation, and it was hard to turn a computational plan into a set-up without hands-on knowledge of how ENIAC ran.”

When computing moved from research laboratories into the corporate world, most firms used existing employees in the newly created “data processing” (later “IT”) department, re-assigning them from relevant positions: punched-card-machine workers, corporate “systems men” (business process redesign), and accountants. Write the authors of ENIAC in Action:

Because all these groups were predominantly male, the story of male domination of administrative programming work was… a story of continuity within a particular institutional context. Thus, we see the history of programming labor not as the creation of a new occupation in which women were first welcomed and then excluded, but rather as a set of parallel stories in which the influence of ENIAC and other early machines remained strong in centers of scientific computation but was negligible in corporate data-processing work.

ENIAC

Programmers Betty Jean Jennings (left) and Fran Bilas (right) operate ENIAC’s main control panel at the Moore School of Electrical Engineering. (U.S. Army photo from the archives of the ARL Technical Library) Source: Wikipedia

Good history is a guide to how society works; bad history is conjuring evil forces where there are none. ENIAC in Action resurrects the pioneering work of the real “first programmers” such as Jean Bartik and Klara von Neumann and explains why corporate IT has evolved to employ mostly their male successors.

Good history also provides us with a mirror in which we can compare and contrast past and present developments. The emergence of the “data science” profession today, in which women play a more significant role than in the traditional IT profession, parallels the emergence of computer programming. Just like the latter required knowledge of both computer operations and mathematical analysis, data science marries knowledge of computers with statistical analysis skills.

Developing models is the core of data scientists’ work, and ENIAC in Action devotes considerable space to the emergence of computer simulations and the discussion of their impact on scientific practice. Simulations brought on a shift from equations to algorithms, providing “a fundamentally experimental way of discovering the properties of the system described.”
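
As a toy illustration of that experimental turn (the ENIAC’s own pioneering Monte Carlo runs simulated neutron behavior for Los Alamos; this stand-in merely estimates pi by random sampling), one sets the parameters, runs the program, and sees what the samples reveal:

```python
# Toy Monte Carlo: discover the value of pi experimentally by sampling random
# points in the unit square and counting how many fall in the quarter circle.
import random

def estimate_pi(samples: int, seed: int = 42) -> float:
    """Estimate pi from the fraction of random points inside the quarter circle."""
    rng = random.Random(seed)
    inside = sum(1 for _ in range(samples)
                 if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4 * inside / samples

if __name__ == "__main__":
    for n in (1_000, 100_000, 1_000_000):
        print(n, estimate_pi(n))  # the estimate sharpens as the run grows
```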


Today’s parallel to the ENIAC-era big calculation is big data, as are the notion of “discovery” and the abandonment of hypotheses. “One set initial parameters, ran the program, and waited to see what happened” is today’s “unreasonable effectiveness of data.” A direct line runs from the ENIAC’s pioneering simulations to the re-shaping of scientific practice as “automated science.” But is the removal of human imagination from scientific practice good for scientific progress?

Similarly, it’s interesting to learn about the origins of today’s renewed interest in, fascination with, and fear of “artificial intelligence.” Haigh, Priestley and Rope argue against the claim that the “irresponsible hyperbole” regarding early computers was generated solely by the media, writing that “many computing pioneers, including John von Neumann, [conceived] of computers as artificial brains.”

Indeed, in his First Draft of a Report on the EDVAC, which became the foundational text of modern computer science (or, more accurately, of computer engineering practice), von Neumann compared the components of the computer to “the neurons of higher animals.” While von Neumann thought that the brain was a computer, he allowed that it was a complex one, following McCulloch and Pitts (in their 1943 paper “A Logical Calculus of the Ideas Immanent in Nervous Activity”) in ignoring, as he wrote, “the more complicated aspects of neuron functioning.”

Given McCulloch’s remark that the “neurons” discussed in his and Pitts’ seminal paper “were deliberately as impoverished as possible,” what we have at the dawn of “artificial intelligence” is simplification squared, based on an extremely limited (possibly non-existent at the time) understanding of how the human brain works.
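
For a sense of just how impoverished those units are, here is a McCulloch-Pitts neuron in the standard textbook formulation (binary inputs, a fixed firing threshold, inhibitory inputs omitted for brevity); this is a sketch of the usual presentation, not a transcription of the 1943 paper:

```python
# A McCulloch-Pitts neuron: sum the binary inputs and fire if the sum
# reaches a fixed threshold. That is the whole "neuron."
def mp_neuron(inputs: list[int], threshold: int) -> int:
    """Fire (return 1) if the number of active binary inputs meets the threshold."""
    return 1 if sum(inputs) >= threshold else 0

# With the right threshold, a single unit computes simple logic functions:
print(mp_neuron([1, 1], threshold=2))  # AND of two inputs -> 1
print(mp_neuron([1, 0], threshold=2))  # AND -> 0
print(mp_neuron([1, 0], threshold=1))  # OR of two inputs -> 1
```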

These mathematical exercises, born out of the workings of very developed brains but not mimicking or even remotely describing them, led to the development of “artificial neural networks,” which led to “deep learning,” which led to the general excitement today about computer programs “mimicking the brain” when they succeed in identifying cat images or beating a Go champion.

In 1949, computer scientist Edmund Berkeley wrote in his book, Giant Brains, or Machines That Think: “These machines are similar to what a brain would be if it were made of hardware and wire instead of flesh and nerves… A machine can handle information; it can calculate, conclude, and choose; it can perform reasonable operations with information. A machine, therefore, can think.”

Haigh, Priestley and Rope write that “…the idea of computers as brains was always controversial, and… most people professionally involved with the field had stepped away from it by the 1950s.” But thirty years later, Marvin Minsky famously stated: “The human brain is just a computer that happens to be made out of meat.”

Most computer scientists by that time were indeed occupied with less lofty goals than playing God, but only very few objected to this kind of statement, or to Minsky receiving the most prestigious award of their profession (for his role in creating the field of artificial intelligence). Today, the idea that computers and brains are the same thing leads people with very developed brains to conclude that if computers can win at Go, they can think, and that with just a few more short steps up the neural network evolutionary ladder, computers will reason that it is in their best interest to destroy humanity.

eniacstamp50

Twenty years ago, the U.S. Postal Service issued a new stamp commemorating the 50th birthday of ENIAC. The stamp displayed an image of a brain partially covered by small blocks containing parts of circuit boards and binary code. One of the few computer scientists who objected to this pernicious and popular idea was Joseph Weizenbaum:

What do these people actually mean when they shout that man is a machine (and a brain a “meat machine”)? It is… that human beings are “computable,” that they are not distinct from other objects in the world… computers enable fantasies, many of them wonderful, but also those of people whose compulsion to play God overwhelms their ability to fathom the consequences of their attempt to turn their nightmares into reality.

The dominant fantasy is that computers “change the world” and “make it a better place,” that they spread democracy and other cherished values, etc., etc. It is vociferously promoted by people who believe themselves to be rational but ignore reality, which has proven again and again that 70 years of computers have done little to change our society. Two recent examples are the hacker who scanned the Internet for networked printers and made them print an anti-Semitic flyer, and the good people at Microsoft who released an “AI-powered chatbot” only to find out that it took Twitter users just 16 hours to teach it to spew racial slurs.

ENIAC and its progeny have not changed what’s most important in our world: humans. Maybe Gates, Hawking, and Musk are right after all. Once computers surpass us in intelligence, they will understand that humanity cannot be changed by technology and it’s better just to get rid of it. In the meantime, the creativity and intelligence of good historians writing books such as ENIAC in Action will keep us informed and entertained.

Originally published on Forbes.com

Posted in Computer History | Leave a comment

Forrester and IDC on Consumer Interest in IoT

Figure 5 - Forrester

IDC-IoTAPril2016 Key Takeaways from IDCs 2016 Consumer IoT Webinar_08

IoT-Vs-Industrial

Consumer IoT is lagging behind industrial IoT in terms of interest, investments, and successful applications. CB Insights has found that in 2011, the industrial IoT accounted for 17% of all IoT funding dollars. In 2015, the share of industrial IoT investment rose to 40% of all IoT investment.

In a recent report citing the results of large surveys of consumers in the U.S. and other countries, Forrester observed that the IoT today “seems to open up more business opportunities in the industrial and B2B space than for consumer brands” (see also Forrester’s blog post). Similarly, in a recent report and webinar based on a large consumer survey in the U.S., IDC has concluded that “beyond security and point-solutions to specific problems, consumer IoT is still looking for a clear value proposition.”

Here are some interesting findings:

14% of U.S. online adults are currently using a wearable, and only 7% use any smart home device. Usage of connected devices in smart homes or cars is even lower in Europe. Smoke and home security monitoring are the two smart home services U.S. consumers are the most interested in, followed closely by water monitoring. (Forrester)

More than 8 million households in the U.S. already use some kind of home automation and control. The home IoT applications consumers are interested in are networked sensors monitoring for fire, smoke, water, or CO at home; seeing and recording who comes to the front door using a video camera; and networked sensors monitoring doors and windows. Consumers are least interested in networked kitchen appliances. (IDC)

Reasons for purchasing a home control application: 30% cited solving a known problem, either recent or long-standing; 40% cited word-of-mouth or news about such devices; almost 20% said it seemed like “a neat solution to a problem I didn’t know I had” (!!!); and over 15% said that the device “was on sale.” (IDC)

Preferred installers for home automation and control systems, in order of preference: a residential security company, the consumers themselves, other professional installers, and cable or telephone companies. (IDC)

Half of U.S. online adults are concerned that the monthly service cost of smart home technologies would be too high, and 38% fear the initial cost of setup would be too high. (Forrester)

36% of U.S. online adults fear using smart home services could compromise the privacy of their personal information. (Forrester)

Among those interested in a home control IoT application but who haven’t purchased one: high concern about cost (which is common for new applications) and, unusually for new applications, high concern about reliability and user experience. (IDC)

31% are interested in access to the internet while using the car (i.e., on-board internet) and access to an interactive voice response system (i.e., a digital driving assistant). Telematics-enabled usage-based insurance (UBI) is emerging and will disrupt the car insurance industry. (Forrester)

In 2016, 33% of U.S. online adults will use some form of IoT across home, wearables, and car. However, usage in the next two years will primarily be led by wearables and smartwatches. (Forrester)

IDC concludes that “the majority of consumers remain skeptical of the value proposition behind the home Internet of Things and are holding back for a higher overall value proposition.” In the IDC press release, Jonathan Gaw said:

“The long-run impact of the Internet of Things will be broader and deeper than we imagine right now, but the industry is still in the early stages of developing the vision and conveying it to consumers.”

IDC continues: “Winners will solve a problem the consumer didn’t know they had. Security and privacy – punished for a lack of it, probably not rewarded for having it. Voice interfaces have potential, but still need development for mainstream users.”

Forrester’s Thomas Husson and his colleagues cite a pioneering home-focused voice interface, Amazon Echo, as an example of a successful consumer IoT device. “Combining the Dash Buttons’ big-data-meets-internet-of-things experiment with Amazon Echo and Alexa Voice Assistant will enable Amazon to aggregate multiple brands’ offering and anticipate consumers’ needs,” says Forrester.

The report continues: “Because consumers invest little in new experiences, they hurt little when abandoned. The consequence is that the vast majority of new IoT products will fail unless marketers develop a customer relationship that is frequent, emotionally engaging, and conveniently delivered.”

Both Forrester and IDC seem to understand what works and what doesn’t work with current IoT offerings, and they continue to advance their knowledge by surveying consumers and talking to enterprise decision-makers.

The U.S. government may want to pay closer attention to their (and other industry observers’) work. The National Telecommunications and Information Administration (NTIA) recently issued a request for comments which included this gem:

Although a number of architectures describing different aspects or various applications of the IoT are being developed, there is no broad consensus on exactly how the concept should be defined or scoped. Consensus has emerged, however, that the number of connected devices is expected to grow exponentially, and the economic impact of those devices will increase dramatically.

To which James Connolly responded: “How can the public comment when even Commerce can’t really define the term?”

Originally published on Forbes.com

Posted in Internet of Things | Tagged , , | Leave a comment

Why Enterprises Use AI and for What

AI_most-widely-used-solutions

HT: Raconteur

Posted in AI, Misc | Tagged | Leave a comment