Will Google Own AI? (4)

Norm Jouppi, Google:

We’ve been using compute-intensive machine learning in our products for the past 15 years. We use it so much that we even designed an entirely new class of custom machine learning accelerator, the Tensor Processing Unit. Just how fast is the TPU, actually? Today, in conjunction with a TPU talk for a National Academy of Engineering meeting at the Computer History Museum in Silicon Valley, we’re releasing a study that shares new details on these custom chips, which have been running machine learning applications in our data centers since 2015. This first generation of TPUs targeted inference (the use of an already trained model, as opposed to the training phase of a model, which has somewhat different characteristics), and here are some of the results we’ve seen:

  • On our production AI workloads that utilize neural network inference, the TPU is 15x to 30x faster than contemporary GPUs and CPUs.
  • The TPU also achieves much better energy efficiency than conventional chips, achieving 30x to 80x improvement in TOPS/Watt measure (tera-operations [trillion or 10^12 operations] of computation per Watt of energy consumed).
  • The neural networks powering these applications require a surprisingly small amount of code: just 100 to 1500 lines. The code is based on TensorFlow, our popular open-source machine learning framework.
  • More than 70 authors contributed to this report. It really does take a village to design, verify, implement and deploy the hardware and software of a system like this.
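To make the inference/training distinction above concrete, here is a minimal sketch of an inference-only workload in TensorFlow, the kind of forward-pass computation the first-generation TPU was built to accelerate. The tiny Keras model and random input batch are stand-ins of my own, not Google's production code. (TOPS/Watt, the efficiency metric above, is simply sustained tera-operations per second divided by power draw.)

```python
import numpy as np
import tensorflow as tf

# Stand-in for an already trained network; production models would be
# trained elsewhere and loaded from disk.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])

batch = np.random.rand(32, 784).astype("float32")  # placeholder input batch

# Inference is just the forward pass: no gradients, no weight updates.
predictions = model(batch, training=False)
print(predictions.shape)  # (32, 10): class probabilities per input
```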

Industrial IoT Market to Reach $151.01 Billion by 2020

The Industrial Internet of Things (IIoT) market was valued at $93.99 billion in 2014 and is expected to reach $151.01 billion by 2020, growing at a CAGR of 8.03% between 2015 and 2020.
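For readers who want to check such projections: CAGR is simply the constant annual growth rate implied by two endpoint values, CAGR = (end/start)^(1/years) - 1. A quick sketch using the report's endpoints (note that the report states its 8.03% figure over 2015-2020, from a 2015 base value the summary does not give):

```python
# Compound annual growth rate implied by two endpoint values.
def cagr(start, end, years):
    return (end / start) ** (1 / years) - 1

# Endpoints from the report: $93.99B in 2014, $151.01B in 2020.
print(f"{cagr(93.99, 151.01, 2020 - 2014):.2%}")  # ~8.22% over 2014-2020
# The stated 8.03% CAGR covers 2015-2020, implying a 2015 base above $93.99B.
```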

IIoT is the integration of complex physical machinery with industrial networks and data analytics solutions to improve operational efficiency and reduce costs. It comprises advanced sensor technologies, machine-to-machine communication, real-time data analytics, and machine learning algorithms that enhance industries' decision-making capabilities. The need to identify potential machinery failures in advance and avoid unplanned outages through predictive maintenance is a key factor driving the adoption of IIoT solutions. Advances in sensor technologies, as well as the improved reliability, coverage, and bandwidth of cellular networks, are enabling IIoT in sectors such as manufacturing, energy & power, and healthcare, among others. The implementation of IIoT is expected to give rise to new business models and provide opportunities to a wide range of new and established companies in the market.

The key players in the market include General Electric (U.S.), Cisco Systems, Inc. (U.S.), Intel Corporation (U.S.), Rockwell Automation (U.S.), ARM Holdings plc (U.K.), ABB Ltd. (Switzerland), Siemens AG (Germany), Honeywell International Inc. (U.S.), Dassault Systèmes SA (France), Huawei Technologies Co., Ltd. (China), Zebra Technologies (U.S.), IBM Corporation (U.S.), and Robert Bosch GmbH (Germany), among others.


Data Scientists: The Definition of Sexy

I put “sexy” in the title because I’m told that the words in the title make all the difference in getting noticed on the Web. That has certainly proven true for the Harvard Business Review after it included the word “sexiest” in the title of a recent article. It even got the attention, probably for the first time ever, of Geekosystem, a website devoted to geeks:

The Harvard Business Review, a noted authority on “things that are sexy,” has declared “Data Scientist” to be the sexiest career of the 21st century. The article reflects the burgeoning mystique of the new and pocket-protector-friendly gig, which we have to assume narrowly edged out things like “Chippendales dancer” and “calendar firefighter” on its way to being named the sexiest of all possible careers. Because if there’s one thing that gives a job an indefinable allure, it is everybody else being kind of unsure what it is you really do — a quality that data scientists damn near embody.

Whether or not employers know what data scientists do, they have been using the term “data scientist” in job descriptions in rapidly growing numbers over the past two years, as Indeed.com’s data demonstrates.


62% of Enterprises Will Use AI by 2018


Artificial intelligence has replaced big data this year as the most talked-about new set of technologies. As with big data five years ago, behind the hype, the confusion generated by an ill-defined term, and the record VC funding, we are starting to see investments and practical applications emerging where they matter most: in enterprises.

A new report from Narrative Science, based on a survey of 235 business executives conducted by the National Business Research Institute (NBRI), sheds light on the state of AI in enterprises today and in the future: 38% of enterprises are already using AI technologies, and 62% will be using them by 2018. Keep in mind that “AI technologies” is a broad term that includes machine and deep learning, recommendation engines, predictive and prescriptive analytics, automated written reporting and communications, and voice recognition and response.

Here are some other key findings of the survey:

  • 26% are currently using AI technologies to automate manual, repetitive tasks, up from 15% in 2015
  • 20% of those who haven’t yet adopted AI cite lack of clarity regarding its value proposition
  • 58% are using predictive analytics
  • 25% are using automated written reporting and communications
  • 25% are using voice recognition and response
  • 38% see predictions on activity related to machines, customers or business health as the most important benefit of an AI solution
  • 27% see automation of manual and repetitive tasks as the most important benefit of an AI solution
  • 95% of those who indicated that they are skilled at using big data to solve business problems or generate insights also use AI technologies, up from 59% in 2015
  • 61% of enterprises with an innovation strategy are applying AI to their data to find previously missed opportunities such as process improvements or new revenue streams

Big data has spawned the current interest and increased investment in artificial intelligence. The availability of large volumes of data, plus new algorithms and more computing power, is behind the recent success of deep learning, finally pulling AI out of its long “winter.” More broadly, the enthusiasm around big data (and the success of data-driven digital natives such as Google and Facebook) has led many enterprises to invest heavily in the collection, storage, and organization of data.

But what is to be done with all that data? What is the value of having more data if it does not yield new business insights? To uncover new insights, you need hard-to-find data scientists. Indeed, 59% of the survey respondents see the shortage of data science talent as the primary barrier to realizing value from their big data technologies. These companies are now turning to AI technologies to augment their data science capabilities as a partial solution to the talent shortage.

Narrative Science, whose software transforms data into easy-to-read stories, is one of many startups trying to build a bridge between big data and artificial intelligence, between the massive generation and collection of data and the development and application of algorithms that make sense of it.
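To give a flavor of what “transforming data into stories” involves at its simplest, here is a minimal, hypothetical sketch of template-based natural language generation. It illustrates the general category only, not Narrative Science's actual product or methods:

```python
# A toy data-to-text generator: derive a verb from the data, fill a template.
def narrate(metric, current, previous):
    change = (current - previous) / previous
    direction = "rose" if change >= 0 else "fell"
    return f"{metric} {direction} {abs(change):.1%} to {current:,} from {previous:,}."

print(narrate("Quarterly revenue", 1_250_000, 1_100_000))
# -> Quarterly revenue rose 13.6% to 1,250,000 from 1,100,000.
```

Production systems layer statistical analysis, angle selection, and richer grammar on top of this idea, but the data-in, prose-out contract is the same.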

Gartner has coined a new term—Algorithmic Business—to describe the shift of digital businesses from big data to artificial intelligence. Says Gartner: “It is only when the organization shifts from a focus on big data to ‘big answers’ that value begins to emerge… Algorithms are more essential to the business than data alone. Algorithms define action.”

IDC, another analyst firm (and another coiner of new terms), talks about “Cognitive Services” and predicts they will be embedded in new apps, with the top new investment areas over the next couple of years being “Contextual Understanding” and “Automated Next Best Action” capabilities. Mastering “cognitive” is a must, says IDC, recommending that enterprises make machine learning a top priority for 2016: “lots of startups in your industry are already using it to disrupt you.”

Artificial Intelligence is the new big data, not just as the reigning buzzword, but also as a new set of technologies enterprises are exploring so they can turn data into smart actions.

Originally published on Forbes.com


Upcoming Big Data and Data Science Events

From Data to Knowledge

May 7-11, University of California, Berkeley


Google and Alphabet: Invention (and Commercial Success) Is Not Enough


It looks like most of the publications and pundits of the world had something to say about the surprise-of-the-decade: Google’s transformation into Alphabet (Techmeme provides a sample here). For me, the numerous questions they posed only triggered further questions: Is the new holding company going to be like Berkshire Hathaway, or GE, or AT&T, or an early-retirement playground for Page and Brin, playing God instead of golf? Is Larry Page saying he does not want to be Bill Gates, or does he want to be Thomas Edison Plus? Is it just a simple “re-org,” so typical of large and lumbering companies, masquerading as an “unconventional move”?

In his post (not in a conventional press release) announcing the surprising metamorphosis, Larry Page made sure to remind us that “As Sergey and I wrote in the original founders letter 11 years ago, ‘Google is not a conventional company. We do not intend to become one.’”

Google, now Alphabet, is indeed an unconventional company in many respects, not the least of which is that the aforementioned founders hold 54% of the stock’s voting rights, giving them full control of the company. But at its core, I would argue, it’s a conventional company in a conventional business.

“Invention is not enough,” Page has said (see James Altucher’s post). “You have to combine both things: invention and innovation focus, plus the company that can commercialize things and get them to people.”

For Page and Brin, the key invention was a better search engine. But with the help of the bright people they hired, they brilliantly coupled it with two other inventions that made the original one a commercial success: their own computing infrastructure, capable of handling Brontobyte-scale data, and a completely new approach to selling advertising.

By relying on advertising for its livelihood (advertising still accounts for over 90% of Google’s revenues), Google has become a conventional media company. It has enjoyed the growing stream of advertising dollars shifting from print and other channels to online. But it will be the victim of its own success: as online advertising becomes more dominant, growth will slow, and Google’s fortunes will rise and fall with the advertising market, which typically follows the rise and fall of the economy (online advertising in the U.S., growing at 13%, already accounts for 28% of the overall advertising market, which will grow only 3.2% this year).
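The parenthetical figures make the arithmetic of that argument easy to check: a segment growing faster than its market keeps gaining share, so its growth must eventually fall toward the market's rate. A quick sketch using the numbers above (13% online growth, 3.2% total growth, 28% starting share; the horizon and the flat rates are my simplifying assumptions):

```python
# Online ad share if online grows 13%/yr while the whole market grows 3.2%/yr.
online, total = 28.0, 100.0  # index values: online is 28% of the market today
for year in range(1, 15):
    online *= 1.13
    total *= 1.032
    print(f"year {year}: online share {min(online / total, 1.0):.0%}")
# The share passes 50% around year 7 and nears 100% around year 14, at which
# point online growth can no longer outrun the market: growth must slow.
```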

In addition, relying on a segment of the advertising market that is completely dependent on ever-changing technology is a challenge in and of itself, as we have already seen in the ups and downs of display advertising and the shift from desktop to mobile. If some bright young entrepreneur (or a PhD student) finds a way tomorrow to transmit advertising to our brains without the help of devices and the Internet, and we readily accept it in exchange for some new, can’t-live-without service, there will be no Google as we know it. Ditto if that proverbial kid in the garage invents the real “disruption”: a new way to promote companies and their offerings without what we have called “advertising” for centuries.

That may happen tomorrow or may not happen for a long time, so Page and Brin will continue to have the funds to fuel their ambitions. It’s just that now they will not have to deal at all with the day-to-day management of what has become for them a boring cash cow.

Brin has already done that for a number of years, focusing entirely on “moonshots.” But Page apparently wanted to prove to himself in 2011 (not to the world; he probably doesn’t care much about other people’s opinions) that he could also be the CEO of a large company and make it reinvent itself. In this (the reinvention part) he completely failed. It may not be a coincidence that we learned of the final demise of Google’s grand social experiment, Page’s attempt to out-Facebook Facebook, just before the surprise Alphabet announcement. (It may also not be a coincidence that the announcement came on the 20th anniversary of When Larry Met Sergey, the first milestone in the official Google history timeline.)

The failures are insignificant in light of the history Page and Brin have made by giving millions of people around the world very useful tools, at no cost, in exchange for their data. But brilliant inventions turned into commercial success are not enough for the likes of Page and Brin, and they never liked where the money supporting their free services came from, channeling (probably preceding) Jeff Hammerbacher’s sentiment: “The best minds of my generation are thinking about how to make people click ads.” Their version of a mid-life crisis is to remove themselves from their very successful one-trick advertising pony and immerse themselves in attempting to make very big history, or Brontobyte history.

Page and Brin are sometimes mentioned, and explained, together with Amazon’s Jeff Bezos as products of a Montessori education (see here and here). But I think there is something much more important at the root of Page, Brin, and Bezos’s ambitions and successful enterprises. In the words of Louis Sullivan, describing Chicago in 1875:

“Big” was the word. “Biggest” was preferred, and “the biggest in the world” was the braggart phrase on every tongue. Chicago had had the biggest conflagration “in the world.” It was the biggest grain and lumber market “in the world.” It slaughtered more hogs than any other city “in the world.” It was the greatest railroad center, the greatest this, and the greatest that… what they said was true; and had they said, in the din, we are the crudest, rawest, most savagely ambitious dreamers and would-be doers in the world, that also might be true… These men had vision. What they saw was real, they saw it as destiny.

Continuing an American tradition (how “unconventional”), Page, Brin, and Bezos saw “big” as their destiny. Page and Brin named their company after a very big number. Bezos chose the largest river in the world to stand for “the everything store.” But Bezos has taken a different route to world domination, one that depends not on advertising and using us as the product, but on changing the way we buy and sell goods and services, inventing new ways to consume while driving down the cost of consumption. His one-trick pony, selling books online, has metamorphosed into selling everything, including computer services, serving as a platform for other sellers, creating content, designing devices, and more.

Page has said, “especially in technology, we need revolutionary change, not incremental change,” and “I think as technologists we should have some safe places where we can try out new things and figure out the effect on society.” Bezos believes in incremental change and doesn’t talk much about Amazon’s impact on society. In about ten years, we should have a better idea of which approach, Alphabet’s or Amazon’s, has left a bigger and more positive impact on the world.

An earlier version of this post was published on Forbes.com


Top 10 Programming Languages 2016


IEEE Spectrum:

After two years in second place, C has finally edged out Java for the top spot. Staying in the top five, Python has swapped places with C++ to take the No. 3 position, and C# has fallen out of the top five to be replaced with R. R is following its momentum from previous years, as part of a positive trend in general for modern big-data languages…

Google and Apple are also making their presence felt, with Google’s Go just beating out Apple’s Swift for inclusion in the Top Ten. Still, Swift’s rise is impressive, as it’s jumped five positions to 11th place since last year, when it first entered the rankings. Several other languages also debuted last year, a marked difference from this year, with no new languages entering the rankings.

See also Top 10 Programming Languages 2015


The Simple Pictures Artificial Intelligence Still Can’t Recognize


Wired:

Earlier this month, Clune discussed these findings with fellow researchers at the Neural Information Processing Systems conference in Montreal. The event brought together some of the brightest thinkers working in artificial intelligence. The reactions sorted into two rough groups. One group—generally older, with more experience in the field—saw how the study made sense. They might’ve predicted a different outcome, but at the same time, they found the results perfectly understandable.

The second group, comprised of people who perhaps hadn’t spent as much time thinking about what makes today’s computer brains tick, were struck by the findings. At least initially, they were surprised these powerful algorithms could be so plainly wrong. Mind you, these were still people publishing papers on neural networks and hanging out at one of the year’s brainiest AI gatherings.

To Clune, the bifurcated response was telling: It suggested a sort of generational shift in the field. A handful of years ago, the people working with AI were building AI. These days, the networks are good enough that researchers are simply taking what’s out there and putting it to work. “In many cases you can take these algorithms off the shelf and have them help you with your problem,” Clune says. “There is an absolute gold rush of people coming in and using them.”

That’s not necessarily a bad thing. But as more stuff is built on top of AI, it will only become more vital to probe it for shortcomings like these. If it really just takes a string of pixels to make an algorithm certain that a photo shows an innocuous furry animal, think how easy it could be to slip pornography undetected through safe search filters. In the short term, Clune hopes the study will spur other researchers to work on algorithms that take images’ global structure into account. In other words, algorithms that make computer vision more like human vision.
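For a sense of how such fooling images are made, here is a minimal sketch in the spirit of the experiments Wired describes: start from noise and run gradient ascent on a classifier's confidence in one class, with no requirement that the result look like anything to a human. (The study used evolutionary algorithms and gradient ascent on trained ImageNet networks; the untrained toy model below is my stand-in, chosen only to keep the example self-contained and runnable.)

```python
import tensorflow as tf

# Toy classifier standing in for a trained network.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28, 1)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])

target = 3                                              # class we want it to "see"
image = tf.Variable(tf.random.uniform((1, 28, 28, 1)))  # start from pure noise

# Gradient ascent: nudge the pixels toward whatever maximizes the model's
# confidence in the target class, clipping to keep valid pixel values.
for _ in range(200):
    with tf.GradientTape() as tape:
        confidence = model(image)[0, target]
    grad = tape.gradient(confidence, image)
    image.assign(tf.clip_by_value(image + 0.1 * grad, 0.0, 1.0))

print(float(model(image)[0, target]))  # confidence climbs far above chance (0.1)
```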

But what does “recognize” mean? The two groups of AI researchers described above don’t include those researchers (e.g., Oren Etzioni) who argue that for a computer to be “intelligent,” it needs to understand what it “sees,” not just identify or classify it. “Recognize” means understanding concepts, not just pattern matching.

Here’s a video clip (HT Farnam Street) in which Richard Feynman explains why recognizing the difference between knowing the name of something and understanding it is so important for humans:

See that bird? It’s a brown-throated thrush, but in Germany it’s called a halzenfugel, and in Chinese they call it a chung ling and even if you know all those names for it, you still know nothing about the bird. You only know something about people; what they call the bird.

Video: https://www.youtube.com/watch?v=05WS0WN7zMQ


Gartner Hype Cycle for Emerging Technologies, 2019

Gartner:

The Hype Cycle for Emerging Technologies is unique among most Gartner Hype Cycles because it garners insights from more than 2,000 technologies into a succinct set of 29 emerging technologies and trends. This Hype Cycle specifically focuses on the set of technologies that show promise in delivering a high degree of competitive advantage over the next five to 10 years…

This year, Gartner refocused the Hype Cycle for Emerging Technologies to shift toward introducing new technologies that have not been previously highlighted in past iterations of this Hype Cycle. While this necessitates retiring most of the technologies that were highlighted in the 2018 version, it does not mean that those technologies have ceased to be important.

See also

2017 Gartner Hype Cycle for Emerging Technologies: AI, AR/VR, Digital Platforms

Gartner Hype Cycle for Emerging Technologies 2016: Deep Learning Still Missing


Data Is Eating the World: The New Face of Globalization


McKinsey:

The growth of trade compared with the growth of GDP in this decade has been half of that in the late 1990s and early 2000s, while global capital flows as a percentage of GDP have dropped precipitously since the 2008–09 financial crisis and have not returned to pre-crisis levels.

At the same time, there is evidence that other facets of globalization continue to advance, rapidly and at scale. Cross-border data flows are increasing at rates approaching 50 times those of last decade. Almost a billion social-networking users have at least one foreign connection, while 2.5 billion people have email accounts, and 200 billion emails are exchanged every day. About 250 million people are currently living outside of their home country, and more than 350 million people are cross-border e-commerce shoppers—expanding opportunities for small and medium-sized enterprises to become “micro-multinationals.”

See also

Data Is Eating the World: Supply Chain Innovation

Data is Eating the World: 163 Trillion Gigabytes Will Be Created in 2025

Data Is Eating the World: Enterprise Edition

Data Is Eating the World: Self-Driving Cars
