Looking at the Dark Side of the Net

Etay Maor, IBM

Wired recently reported that hackers posted a “data dump, 9.7 gigabytes in size… to the dark web using an Onion address accessible only through the Tor browser.” The data included names, passwords, addresses, profile descriptions and several years of credit card data for 32 million users of Ashley Madison, a social network billing itself as the premier site for married individuals seeking partners for affairs.

“I want to show you the dark side of the net,” Etay Maor told me when we met last month at the IBM offices in Cambridge, Massachusetts. He then proceeded to give me a tour of the Internet’s underground, where cyber criminals and hackers exchange data, swap tips, and offer free and for-fee services. “Information sharing is a given on the dark side,” said Maor, “but for the good guys, it’s not that easy.”

Maor is a senior fraud prevention strategist at IBM. He has watched the dark side of the Web at RSA, where he led the cyber threats research lab, and later at Trusteer, a cybersecurity startup that IBM acquired in 2013 for a reported $1 billion. His focus is cybercrime intelligence, specifically malware—understanding how it is developed and the networks over which it is distributed. Maor is an expert on how cyber criminals think and act, and he shares his knowledge with IBM’s customers and with the world at large by speaking at conferences and blogging at securityintelligence.com.

The Web is like an iceberg divided into three segments, each with its own cluster of hangouts for cyber criminals and their digital breadcrumbs. The tip of the iceberg is the “Clear Web” (also called the Surface Web), indexed by Google and other search engines. The very large body of the iceberg, submerged under the virtual water, is the “Deep Web”—anything on the Web that’s not accessible to the search engines (e.g., your bank account). Within the Deep Web lies the “Dark Web,” a region of the iceberg that is difficult to access and can be reached only via specialized networks.

Maor first demonstrated to me how much cybercrime-related information is available on the Clear Web. Simply by searching for spreadsheets with the word “password” in them, you can get default password lists for many types of devices and for other things and places of interest to criminals. Some of this easily accessible information may have been posted to the Web innocently or by mistake. But there is also a lot of compromised information (e.g., stolen email addresses and their passwords) available on legitimate websites that provide a Web location for dumping data.

Then there are forums for criminals, some masquerading as a benign “hacking community” or “security research forum,” promoting themselves like any other business or community, complete with a Facebook page, and covering their costs or even making some money by displaying ads. One such forum had 1,200 other people accessing it when Maor showed it to me, demonstrating how, with a few clicks of the mouse, you can find lists of stolen credit card numbers, including all the requisite information about the card holders.

Maor proceeded to introduce me to Tor, the most popular specialized network providing anonymity for its users, including participants in the underground economy of the Dark Web. It was developed in the 1990s by researchers at the U.S. Naval Research Laboratory to protect U.S. intelligence communications online; the code was released under a free license in 2004. Tor has 2.5 million daily users, some with legitimate reasons to protect their identities, and others who are engaged in criminal activities.

Tor is based on Onion routing, where messages are encapsulated in layers of encryption. The encrypted data is transmitted through a series of network nodes called onion routers, each of which “peels” away a single layer, uncovering the data’s next destination. The sender remains anonymous because each intermediary knows only the location of the immediately preceding and following nodes. The final node in the chain, the “exit node,” decrypts the final layer and delivers the message to the recipient.
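
To make the layering concrete, here is a toy Python sketch of the peel-one-layer-per-hop idea. It is an illustration only, not Tor’s actual protocol (real Tor negotiates ephemeral keys per circuit and routes through independent relays); it uses the Fernet cipher from the third-party cryptography package as a stand-in:

```python
# Toy illustration of onion routing's layered encryption -- not Tor's protocol.
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

# Each relay holds its own symmetric key. (Real Tor negotiates ephemeral
# keys per circuit; static keys keep the sketch short.)
relay_keys = [Fernet.generate_key() for _ in range(3)]

def wrap(message: bytes, keys: list) -> bytes:
    """Sender encrypts for the exit node first, then wraps each earlier
    hop's layer around it, so the entry node's layer ends up outermost."""
    for key in reversed(keys):
        message = Fernet(key).encrypt(message)
    return message

def route(onion: bytes, keys: list) -> bytes:
    """Each relay peels exactly one layer; only the exit node
    recovers the plaintext."""
    for hop, key in enumerate(keys):
        onion = Fernet(key).decrypt(onion)
        print(f"relay {hop} peeled a layer, {len(onion)} bytes remain")
    return onion

onion = wrap(b"meet at the usual place", relay_keys)
print(route(onion, relay_keys))  # b'meet at the usual place'
```

In the real network, the property that matters is that no single relay sees both the sender and the plaintext; each relay knows only the one layer it removes.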

While Tor is used by people with legitimate reasons to hide their identity, it (and similar networks) also facilitates a thriving underground economy. This is where you can buy firearms, drugs, fake documents, and prescription medications, or encounter pedophilia networks, human trafficking, and organ trafficking. Maor paraphrases Oscar Wilde: “Give a man a mask and he will show his true face.”

Tor is also home to rapidly growing “startups” offering fraud-as-a-service. A decade ago, says Maor, cybercrime “was a one-man operation. Today, it’s teamwork.” Furthermore, the whole process, from coding the malware to distributing it to working with money mules, can be easily outsourced. Everything a cybercriminal might need is now available on the underground forums, some components of the process as a free download, others as a for-fee service, including cloud-based services with guaranteed service level agreements (SLAs). The menu of cybercrime options has grown beyond financial fraud tools to include advanced targeting tools, Remote Access Tools (RATs), and health care and insurance fraud tools and services.

The explosion of data about us, our lives, and our workplaces on the Clear Web has helped the denizens of the Dark Web circumvent traditional online defenses such as passwords. “Fifteen years ago,” says Maor, “it took a lot of work to breach a company. Today, I can go on LinkedIn and find out exactly what is the structure of the company I’m interested in.” Knowledge of the reporting structure of a specific company helps criminals’ “social engineering” efforts: manipulating people into performing compromising actions or divulging confidential information. Once criminals get to know their targets (e.g., by connecting on LinkedIn), the victims may open an email or attachment that will infect their computer and provide the desired access to the company’s IT infrastructure.

Cyber criminals are taking advantage of the abundance of data on the Web and its success at connecting and networking over 2 billion people around the world. According to a UN study on organized crime, 80% of cyber attacks are driven by highly organized crime rings in which data, tools, and expertise are widely shared; these rings generate $445 billion in illegal profits and broker more than a billion pieces of personally identifiable information annually.

Data and networking—aren’t they also great tools in the fight against cybercrime? Not so much. Corporations and security firms have been reluctant to share cybersecurity intelligence. Only 15% of respondents to a recent survey said that “participating in knowledge sharing” is a spending priority.

There have been some efforts to change that, such as the establishment of industry-specific Information Sharing and Analysis Centers (ISACs) and the cross-industry National Council of ISACs.  The Department of Homeland Security and other government agencies are working to promote specific, standardized message and communication formats to facilitate the sharing of cyber intelligence in real time. The Cybersecurity Information Sharing Act (CISA), a bill creating a framework for companies and federal agencies to coordinate against cyberattacks, is being debated in Congress.

Alejandro Mayorkas, the Deputy Secretary of Homeland Security, recently said: “Today’s threats require the engagement of our entire society. This shared responsibility means that we have to work with each other in ways that are often new for the government and the private sector. This means that we also have to trust each other and share information.”

IBM took a big step toward greater engagement and information sharing when it launched the IBM X-Force Exchange in April. It is a threat intelligence sharing platform where registered users can mine IBM’s data to research security threats, aggregate cyber intelligence, and collaborate with their peers. IBM says the exchange has quickly grown to 7,500 registered users, identifying sophisticated cybercrime campaigns in real time. “I’m a fan,” security guru Bruce Schneier responded when I asked him about X-Force Exchange.

“The security industry must share information, all the time, in real time,” says Maor. “It’s a change of mindset, but it has to be done if we want to have some sort of edge against the criminals.”

Originally published on Forbes.com


Startups Disrupting Education with New Technologies


CB Insights:

Venture capital funding to education technology startups passed $1.6B last year, across 217 deals. With ed tech startup investing becoming so competitive and crowded, it’s important to know where top VCs are placing their bets…

Accel Partners, Felicis Ventures, and New Enterprise Associates are the most active smart VCs in ed tech, with more than 10 unique portfolio companies each in the area. The least active investors are Index Ventures and Battery Ventures. Two startups had the most unique smart VC investors, with 5 each: classroom-based community tool Edmodo and online learning content platform Knewton.

We identified six different ed tech markets that smart VCs are moving into.

  • Online language learning: Companies providing online and mobile software to learn foreign languages or English as a second language. Firms in this category that have received smart money VC deals include Mindsnacks, Duolingo, and Open English.
  • Teacher-student collaboration & communication: These companies connect students and teachers through online and mobile software to share content, manage assignments, and communicate both in and out of the classroom. Firms that have received smart money investments include Piazza, Instructure, Remind, and Edmodo.
  • Education data and analytics: Companies providing data analytics software and solutions in and around the education industry and student performance. This category encompasses a few firms that have received smart money funding, including Civitas Learning and Declara.
  • Coding and programming education: Companies offering digital offerings aimed at coding, programming, or engineering skills and techniques. Companies with smart money VC backing include Codecademy, One Month, and Bloc.
  • MOOCs & online classrooms: Companies offering free or accredited online courses or tutorials in assorted subject areas. Two companies in this area with smart money VC funding are Udemy and Coursera.
  • Tutoring and test prep: Companies offering tutors, textbooks, notes, or study materials for specific standardized tests. Companies with smart money VC backing include WyZant and Desire2Learn.

The Internet of Things (IoT): 9 Predictions and Facts

A number of new reports on the Internet of Things (IoT) provide a fresh look at the state of this hot market and forecasts for its future impact on the world’s economy.

IDC discussed The Internet of Things Mid-Year Review at a webinar on July 23, including findings from a survey of 3,566 companies in North America. IDC defines IoT as “a network of uniquely identifiable ‘things’ that communicate without human interaction using IP connectivity.” Tata Consultancy Services (TCS) issued a report titled The Internet of Things: The Complete Reimaginative Force, based on a survey of 3,764 executives worldwide. TCS defines the IoT as “smart, connected products.” The McKinsey Global Institute (MGI) published The Internet of Things: Mapping the value beyond the hype. MGI defines IoT as “sensors and actuators connected by networks to computing systems” and excludes “systems in which all of the sensors’ primary purpose is to receive intentional human input, such as smartphone apps.” Finally, Business Insider (BI) issued The Smart City report on IoT initiatives in cities worldwide.

The economic impact of the IoT will re-shape the world’s economy

The IoT has a total potential economic impact of $3.9 trillion to $11.1 trillion a year by 2025. At the top end, that level of value—including the consumer surplus—would be equivalent to about 11 percent of the world economy (MGI). The IoT market will expand from $780 billion this year to $1.68 trillion in 2020, growing at a CAGR of 16.9%. Sensors/modules and connectivity account for more than 50% of spending on IoT, followed by IT services at more than 25% and software at 15%. Traditional IT hardware accounts for less than 5% of total spending on IoT (IDC).
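
As a quick check that IDC’s endpoints and growth rate cohere (my arithmetic, not IDC’s):

```python
# Compound IDC's 2015 IoT market figure forward at the quoted CAGR.
start, cagr, years = 780e9, 0.169, 5
end = start * (1 + cagr) ** years
print(f"2020 market: ${end / 1e12:.2f} trillion")  # ~$1.70T vs. the quoted $1.68T

# The CAGR implied by the two quoted endpoints -- the small gap is rounding.
implied = (1.68e12 / 780e9) ** (1 / years) - 1
print(f"implied CAGR: {implied:.1%}")  # ~16.6%
```

The two quoted figures agree to within rounding of the growth rate.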

Investments in IoT technologies by cities worldwide will increase by $97 billion from 2015 to 2019. The cities’ IoT deployments will create $421 billion in economic value worldwide in 2019. That economic value will be derived from revenues from IoT device installations and sales and savings from efficiency gains in city services (BI).

There will be almost 30 billion IoT devices in 2020

In 2015, 4,800 connected endpoints are added every minute; this number will grow to 7,900 by 2020. The installed base of Internet of Things devices will grow from 10.3 billion in 2014 to 29.5 billion in 2020, with 19 billion of these devices installed in North America (IDC). The number of IoT devices installed in cities will increase by more than 5 billion in the next four years (BI).
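
A back-of-the-envelope check shows these device figures hang together (again, my arithmetic, not IDC’s):

```python
# Convert IDC's per-minute endpoint additions into annual totals.
MINUTES_PER_YEAR = 60 * 24 * 365  # 525,600

added_2015 = 4_800 * MINUTES_PER_YEAR  # ~2.5 billion endpoints in 2015
added_2020 = 7_900 * MINUTES_PER_YEAR  # ~4.2 billion endpoints in 2020
print(f"{added_2015 / 1e9:.1f}B added in 2015, {added_2020 / 1e9:.1f}B in 2020")

# If the rate ramps roughly linearly, six years of additions land near the
# forecast growth of the installed base (10.3B in 2014 to 29.5B in 2020).
total_added = (added_2015 + added_2020) / 2 * 6
print(f"~{total_added / 1e9:.0f}B added vs. {29.5 - 10.3:.1f}B forecast growth")
```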

The IoT will be primarily an enterprise market

In 2018, the IoT installed base will be split 70% in the enterprise and 30% in the consumer market, but enterprises will account for 90% of the spending (IDC). Business-to-business applications will probably capture more value—nearly 70 percent of it—than consumer uses, although consumer applications, such as fitness monitors and self-driving cars, attract the most attention and can create significant value, too (MGI).

Over the next few years, North America will still be the focal point for the IoT

The IoT has a large potential in developing economies, but it will have a higher overall value impact in advanced economies because of the higher value per use. However, developing economies could generate nearly 40 percent of the IoT’s value, and nearly half in some settings (MGI). 2020 will be a tipping point year for Asia, when it will become the geographical region with the largest installed base of IoT devices (IDC). North American companies will spend 0.45% of revenue this year on IoT initiatives, while European companies will spend 0.40%. Asia-Pacific companies will invest 0.34% of revenue in the IoT, and Latin American firms will spend 0.23% of revenue. North American and European companies are more frequently selling smart, connected products than are Asia-Pacific and Latin American companies (TCS).

The telecommunications industry leads other sectors in IoT investments

The telecommunications, banking, utilities, and securities/investment services industries are the leading sectors investing in IoT in 2015 (IDC). In gaining benefits from the IoT, industrial manufacturers reported the largest average revenue increase from their IoT initiatives last year (29%), and they forecast they’d have the largest revenue increase from the IoT by 2018 (27% over 2015). Industrial manufacturers were also in the lead for using sensors and other digital technologies to monitor the products they sold to customers (with 40% of the companies doing so) (TCS).

IoT adoption is gaining momentum worldwide

36% of companies in North America have IoT initiatives in 2015 (IDC). 79% of companies worldwide already use IoT technologies, investing 0.4% of revenue on average. They expect their IoT budgets to rise by 20% by 2018 to $103 million (TCS).

Costs and customers are the key drivers of IoT investments

Lower operational costs and better customer service and support lead the list of significant drivers of current IoT initiatives. In large companies, business process efficiency/operations optimization and customer acquisition and/or retention also top the list (IDC). Companies with IoT programs in place reported an average revenue increase of 16% in 2014 in the areas of business where IoT initiatives were deployed. In addition, about 9% of firms had an average revenue increase of more than 60%. The biggest product and process improvements reported by companies were more customized offerings and tailored marketing campaigns, faster product improvements, and more effective customer service (TCS). Cities are adopting IoT technologies because they deliver a broad range of benefits, including reducing traffic congestion and air pollution, improving public safety, and providing new ways for governments to interact with their citizens (BI).

Security, culture change, determining priorities, and optimizing ROI are key IoT concerns

Security issues top the list of current barriers to IoT adoption (especially with larger companies), followed by funding the initial investment at the scale needed, determining the highest priority use cases, and changing business processes (IDC). Identifying and pursuing new business and/or revenue opportunities that the IoT makes possible, and determining what data to collect, are key issues. Also important are getting managers and workers to change the way they think about customers, products, and processes, and having top executives who believe the IoT will have a profound impact and are willing to invest in it (TCS). Currently, most IoT data are not used. For example, on an oil rig that has 30,000 sensors, only 1 percent of the data are examined. That’s because this information is used mostly to detect and control anomalies—not for optimization and prediction, which provide the greatest value (MGI).

Microsoft leads the IoT market

The top 5 vendors mentioned as IoT providers that companies “plan to work with within the next 2 years” are Microsoft, AT&T, Verizon, Cisco, and IBM. Among large companies (more than 1,000 employees), Microsoft and Cisco lead the list (IDC).

Originally published on Forbes.com


What Policy Change Would Accelerate the Benefits of the Internet of Things (IoT)?

[youtube https://www.youtube.com/watch?v=y8zvkWWcUdA?rel=0]

McKinsey Global Institute:

Joi Ito: It gets back to open standards, interoperability, and a focus on non-IP-encumbered technology.

Jon Bruner: Everyone is looking for clarification on the rules on drones.

Renee DiResta: I don’t know that I feel that policy is really impeding anything right now. Maybe I’m wrong about that. I read through the FCC report and didn’t get the sense that there was anything [holding back the IoT] on a fundamental policy level.

Mark Hatch: Maybe it’s bandwidth-related: How do we handle the frequency and the radio waves and all the telecommunication requirements? This is a Qualcomm Technologies question maybe, along with the FCC. I may be completely wrong on that, but it’s one of the things I am curious about. How do you handle all of the communication data flow that’s going on and keep things from running into one another?

Mike Olson: The globe doesn’t have a data-privacy policy. Europe does broadly, but not in detail. In the United States, we have precisely two data-privacy laws: HIPAA, which protects your healthcare data, and the Fair Credit Reporting Act. Those are the only things that happen nationwide in terms of data privacy. Everything else is left to the states, and the states are pretty clueless about it. If we could elucidate policies and create laws that were uniform, it would be a lot easier for us to build and deploy these systems.

Dan Kaufman: If I had to guess, it’s the ability of people to protect their information. The Internet of Things is based on this fundamental ability to share information, and if we can’t do that in a safe and secure way, we’re going to need policies and laws so that everybody understands what’s within reason.

Cory Doctorow: I would reform the Digital Millennium Copyright Act, the 1998 statute whose language prohibits the circumvention of digital locks. I think with one step, we could make the future a better place. Ironically, the US Trade Representative has actually gone to all of America’s trading partners and gotten them to pass their own version of the Digital Millennium Copyright Act. So, every country in the world is liable to this problem. Now, the great news is that if the US stops enforcing it here, then all of those other countries will very quickly follow suit, because there’s money to be made in circumvention. The only reason to put a digital lock on is to extract maximum profits from your platform.

Tim O’Reilly: To me, policy makers need to not be trying to prevent the future from happening. They should be just policing bad actors. A good example is in healthcare. We are already producing vast reams of health data. HIPAA, the health-information privacy act, is a real obstacle. If you have a serious illness, you want to share your data with anybody who can help. You want to put your data together with other people’s data, because this collective amassing of data is one of the great keys to the future. And yet here we have these overreaching privacy laws that are going to make it difficult. So, punish bad actors—don’t prevent good actors.


Most Hyped Technologies: Self-Driving Cars, Self-Service Analytics, IoT; No More Big Data Buzz

Gartner just released its 2015 Hype Cycle for Emerging Technologies report, our most reliable buzz bellwether, annually defining what’s in and what’s out. Big Data, at the Peak of Inflated Expectations just two years ago, was dethroned by the Internet of Things last year (though still estimated to be five to ten years from the Plateau of Productivity), only to disappear completely from Gartner’s hype radar this year (the 2010–2014 hype cycles are at the bottom of this post). Big data is out. So what’s in?

Gartner, August 2015


The Internet of Things is still at the top of the list, with self-driving cars (“autonomous vehicles”) ascending from pre-peak to the peak of the hype cycle. But there is an intriguing new category—“advanced analytics with self-service delivery”—sharing top billing with them. I guess one could hype all three in one emerging technology package of “The Internet of Autonomous Vehicles Delivering Advanced Analytics” as the solution to all our transportation problems.

These technologies at the peak of the hype cycle also highlighted for me what’s missing from this year’s report. Given that the most hyped news out of the Black Hat and Defcon conferences earlier this month was a series of demonstrations of how to hack into cars (self-driving or not) and take control of them remotely, it is interesting that Gartner does not list any specific cybersecurity-related emerging technologies. It does mention, however, two general categories—“digital security” and “software-defined security”—both described as pre-peak, 5 to 10 years from the Plateau of Productivity. This may simply reflect the hype-less status of cybersecurity technologies. Given the daily news about data breaches, one can only hope that next year’s report will include some specific emerging solutions to what promises to be a growing economic burden.

Another emerging technology showing promise last year—data science—has disappeared from this year’s report. It is replaced by “citizen data science,” which Gartner thinks, as it did regarding data science last year, is only 2 to 5 years from the plateau. This could turn out to be the most optimistic prediction in this year’s report. A related category—machine learning—is making its first appearance on the chart this year, but already past the Peak of Inflated Expectations. A glaring omission here is “deep learning,” the new label for and the new generation of machine learning, and one of the most hyped emerging technologies of the past couple of years.

It all boils down to what Gartner calls digital humanism: “New to the Hype Cycle this year is the emergence of technologies that support what Gartner defines as digital humanism—the notion that people are the central focus in the manifestation of digital businesses and digital workplaces.”

For the last 21 years, Gartner has published the Hype Cycle report, of which Lee Rainie of the Pew Research Center has said: “There are sometimes disputes about where on the curve any individual innovation might rest, but there have been few challenges to the general trends it outlines.” I remember attending a Gartner conference just before it started publishing this report and listening to a presentation by the analyst responsible at the time for Gartner’s emerging technologies research. He started his presentation by declaring: “Those who live by the crystal ball die eating broken glass.”

The charts below show the evolution of Gartner’s crystal ball over the last five years and allow us to track the hype around Big Data over that period. It made its first appearance in August 2011 as “‘Big data’ and extreme information processing and management,” with 2 to 5 years to the Plateau of Productivity, then just made it into the Peak of Inflated Expectations in 2012, then rose to the top of the most hyped technologies (together with consumer 3D printing and Gamification) in 2013, then started to descend into the Trough of Disillusionment in 2014, only to completely vanish in 2015. I guess Big Data is no longer an emerging technology.

Gartner Hype Cycle 2014


Gartner, August 2014

Gartner Hype Cycle 2013

Gartner, August 2013


Gartner Hype Cycle 2012

Gartner, August 2012


Gartner Hype Cycle 2011

Gartner, August 2011


Gartner Hype Cycle 2010

Gartner, August 2010


An earlier version of this post was published on Forbes.com


How Much Data is Generated Every Minute?

Source: DOMO


What Makes the Internet of Things (IoT) Work (SlideShare)

[slideshare id=51024872&doc=9760-internet-of-things-part-2-slidesharev3-150728171647-lva1-app6892]


On Brontobyte Data and Other Big Words


Source: Datafloq

Paul McFedries in IEEE Spectrum:

When Gartner released its annual Hype Cycle for Emerging Technologies for 2014, it was interesting to note that big data was now located on the downslope from the “Peak of Inflated Expectations,” while the Internet of Things (often shortened to IoT) was right at the peak, and data science was on the upslope. This felt intuitively right. First, although big data—those massive amounts of information that require special techniques to store, search, and analyze—remains a thriving and much-discussed area, it’s no longer the new kid on the data block. Second, everyone expects that the data sets generated by the Internet of Things will be even more impressive than today’s big-data collections. And third, collecting data is one significant challenge, but analyzing and extracting knowledge from it is quite another, and the purview of data science.

Just how much information are we talking about here? Estimates vary widely, but big-data buffs sometimes speak of storage in units of brontobytes, a term that appears to be based on brontosaurus, one of the largest creatures ever to rattle the Earth. That tells you we’re dealing with a big number, but just how much data could reside in a brontobyte? I could tell you that it’s 1,000 yottabytes, but that likely won’t help. Instead, think of a terabyte, which these days represents an average-size hard drive. Well, you would need 1,000,000,000,000,000 (a thousand trillion) of them to fill a brontobyte. Oh, and for the record, yes, there’s an even larger unit tossed around by big-data mavens: the geopbyte, which is 1,000 brontobytes. Whatever the term, we’re really dealing in hellabytes, that is, a helluva lot of data.
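
For reference, here is the whole decimal unit ladder in a few lines of Python (brontobyte and geopbyte, to be clear, are informal big-data coinages, not official SI prefixes):

```python
# The decimal byte-unit ladder, each rung a factor of 1,000.
# "Brontobyte" and "geopbyte" are informal big-data coinages, not SI prefixes.
units = ["byte", "kilobyte", "megabyte", "gigabyte", "terabyte",
         "petabyte", "exabyte", "zettabyte", "yottabyte",
         "brontobyte", "geopbyte"]
for step, name in enumerate(units):
    print(f"1 {name} = 10^{step * 3} bytes")

# How many 1 TB hard drives would a brontobyte fill?
brontobyte, terabyte = 10**27, 10**12
print(f"{brontobyte // terabyte:,}")  # 1,000,000,000,000,000 -- a thousand trillion
```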

Wrangling even petabyte-size data sets (a petabyte is 1,000 terabytes) and data lakes (data stored and readily accessible in its pure, unprocessed state) is a task for professionals, so not only are listings for big-data-related jobs thick on the ground, but the job titles themselves now display a pleasing variety: companies are looking for data architects (specialists in building data models), data custodians and data stewards (who manage data sources), data visualizers (who can translate data into visual form), data change agents and data explorers (who change how a company does business based on analyzing company data), and even data frackers (who use enhanced or hidden measures to extract or obtain data).

But it’s not just data professionals who are taking advantage of Brobdingnagian data sets to get ahead. Nowhere is that more evident than in the news, where a new type of journalism has emerged that uses statistics, programming, and other digital data and tools to produce or shape news stories. This data journalism (or data-driven journalism) is exemplified by Nate Silver’s FiveThirtyEight site, a wildly popular exercise in precision journalism and computer-assisted reporting (or CAR).

And everyone, professional and amateur alike, no longer has the luxury of dealing with just “big” data. Now there is also thick data (which combines both quantitative and qualitative analysis), long data (which extends back in time hundreds or thousands of years), hot data (which is used constantly, meaning it must be easily and quickly accessible), and cold data (which is used relatively infrequently, so it can be less readily available).

In the 1980s we were told we needed cultural literacy. Perhaps now we need big-data literacy, not necessarily to become proficient in analyzing large data sets but to become aware of how our everyday actions—our small data—contribute to many different big-data sets and what impact that might have on our privacy and security. Let’s learn how to become custodians of our own data.


Top 10 Programming Languages 2015

Note: Left column shows 2015 ranking; right column shows 2014 ranking.

Source: IEEE Spectrum

The big five—Java, C, C++, Python, and C#—remain on top, with their ranking undisturbed, but C has edged to within a whisker of knocking Java off the top spot. The big mover is R, a statistical computing language that’s handy for analyzing and visualizing big data, which comes in at sixth place. Last year it was in ninth place, and its move reflects the growing importance of big data to a number of fields. A significant amount of movement has occurred further down in the rankings, as languages like Go, Perl, and even Assembly jockey for position…

A number of languages have entered the rankings for the first time. Swift, Apple’s new language, has already gained enough traction to make a strong appearance despite being released only 13 months ago. Cuda is another interesting entry—it’s a language created by graphics chip company Nvidia that’s designed for general-purpose computing using the company’s powerful but specialized graphics processors, which can be found in many desktop and mobile devices. Seven languages in all are appearing for the first time.


Why Humans Will Forever Rule Over the Machines

Everywhere you turn nowadays, you hear about the imminent triumph of intelligent machines over humans. They will take our jobs, they will make their own decisions, they will be even more intelligent than humans, they pose a threat to humanity (per Stephen Hawking, Bill Gates, and Elon Musk). Marc Andreessen recently summed up on Twitter the increased hubbub about the dangers of Artificial Intelligence: “From ‘It’s so horrible how little progress has been made’ to ‘It’s so horrible how much progress has been made’ in one step.”

Don’t worry. The machines will never take over, no matter how much progress is made in artificial intelligence. It will forever remain artificial, devoid of what makes us human (and intelligent in the full sense of the word), and of what accounts for our unlimited creativity, the fountainhead of ideas that will always keep us at least a few steps ahead of the machines.

In a word, intelligent machines will never have culture, our unique way of transmitting meanings and context over time, our continuously invented and re-invented inner and external realities.

When you stop to think about culture—the content of our thinking—it is amazing that it has been missing from the thinking of the people creating “thinking machines” and debating their impact on our lives, for as long as this work and conversation have been going on. Whatever position they take in the debate and whatever path they follow in developing robots or artificial intelligence, they have collectively made a conscious or unconscious decision to reduce the incredible bounty and open-endedness of our thinking to computation: an exchange of information between billions of neurons, which they either hope or fear we will eventually replicate in a similar exchange between increasingly powerful computers. It is all about quantity, and we know that Moore’s Law takes care of that.

Almost all the people participating in the debate about the rise of the machines have subscribed to the Turing Paradigm, which basically says: “Let’s not talk about what we cannot define or investigate, and simply equate thinking with computation.”

The dominant thinking about thinking machines, whether of the artificial or the human kind, has not changed since Edmund C. Berkeley wrote in Giant Brains, or Machines That Think, his 1949 book about the recently invented computers: “These machines are similar to what a brain would be if it were made of hardware and wire instead of flesh and nerves… A machine can handle information; it can calculate, conclude, and choose; it can perform reasonable operations with information. A machine, therefore, can think.” Thirty years later, MIT’s Marvin Minsky famously stated: “The human brain is just a computer that happens to be made out of meat.” Today, Harvard geneticist George Church goes further (reports Joichi Ito), suggesting that we should make brains as smart as computers, and not the other way around.

Still, from time to time we do hear new and original challenges to the dominant paradigm. In “Computers Versus Humanity: Do We Compete?” Liah Greenfeld and Mark Simes bring culture and the mind into the debate over artificial intelligence, concepts that do not exist in the prevailing thinking about thinking. They define culture as the symbolic process by which humans transmit their ways of life. It is a historical process, i.e., it occurs in time, and it operates on both the collective and individual levels simultaneously.

The mind, defined as “culture in the brain,” is a process representing an individualization of the collective symbolic environment. It is supported by the brain and, in turn, it organizes the connective complexity of the brain.  Greenfeld and Simes argue that “mapping and explaining the organization and biological processes in the human brain will only be complete when such symbolic, and therefore non-material, environment is taken into account.”

They conclude that what distinguishes humanity from all other forms of life “is its endless, unpredictable creativity. It does not process information: It creates. It creates information, misinformation, forms of knowledge that cannot be called information at all, and myriads of other phenomena that do not belong to the category of knowledge. Minds do not do computer-like things, ergo computers cannot outcompete us all.”

The mind, the continuous and dynamic creative process by which we live our conscious lives, is missing from the debates over the promise and perils of artificial intelligence. A recent example is a special section on robots in the July/August issue of Foreign Affairs, in which the editors brought together a number of authors with divergent opinions about the race against the machines. None of them, however, questions the assumption that we are in a race:

  • A roboticist, MIT’s Daniela Rus, writes about the “significant gaps” that have to be closed in order to make robots our little helpers and makes the case for robots and humans augmenting and complementing each other’s skills (in “The Robots Are Coming”).
  • Another roboticist, Carnegie Mellon’s Illah Reza Nourbakhsh, highlights robots’ “potential to produce dystopian outcomes” and laments the lack of required training in ethics, human rights, privacy, or security at the academic engineering programs that grant degrees in robotics (in “The Coming Robot Dystopia”).
  • The authors of The Second Machine Age, MIT’s Erik Brynjolfsson and Andrew McAfee, predict that human labor will not disappear anytime soon because “we humans are a deeply social species, and the desire for human connection carries over to our economic lives.” But the prediction is limited to “within the next decade,” after which “there is a real possibility… that human labor will, in aggregate, decline in relevance because of technological progress, just as horse labor did earlier” (in “Will Humans Go the Way of Horses?”).
  • The chief economics commentator at the Financial Times, Martin Wolf, dismisses the predictions regarding the imminent “breakthroughs in information technology, robotics, and artificial intelligence that will dwarf what has been achieved in the past two centuries” and the emergence of machines that are “supremely intelligent and even self-creating.” While also hedging his bets about the future, he states categorically “what we know for the moment is that there is nothing extraordinary in the changes we are now experiencing. We have been here before and on a much larger scale” (in “Same as It Ever Was: Why the Techno-optimists Are Wrong”).

Same as it ever was, indeed. A lively debate and lots of good arguments: Robots will help us, robots could harm us, robots may or may not take our jobs, robots—for the moment—are nothing special. Beneath the superficial disagreement lies a fundamental shared acceptance of the general premise that we are no different from computers, except that we hold a temporary and fleeting advantage of greater computing power.

No wonder that the editor of Foreign Affairs, Gideon Rose, concludes that “something is clearly happening here, but we don’t know what it means. And by the time we do, authors and editors might well have been replaced by algorithms along with everybody else.”

Let me make a bold prediction. Algorithms will not create on their own a competitor to Foreign Affairs. No matter how intelligent machines become (and they will be much smarter than they are today), they will not create science or literature or any of the other components of our culture that we have created over the course of millennia and will continue to create, in some cases aided by technologies that we create and control.

And by “we,” I don’t mean only Einstein and Shakespeare. I mean the entire human race, engaged in creating, absorbing, manipulating, processing, communicating the symbols that make our culture, making sense of our reality. I doubt that we will ever have a machine creating Twitter on its own, not even the hashtag.

I’m sure we will have smart machines that could perform special tasks, augmenting our capabilities and improving our lives. That many jobs will be taken over by algorithms and robots, and many others will be created because of them, as we have seen over the last half-century. And that bad people will use these intelligent machines to harm other people and that we will make many mistakes relying too much on them and not thinking about all the consequences of what we are developing.

But intelligent machines will not have a mind of their own. Intelligent machines will not have our imagination, our creativity, our unique human culture. Intelligent machines will not take over because they will never be human.

Originally published on Forbes.com
