Best of 2019: How AI Killed Google’s Social Network

[February 4, 2019] Facebook turns 15 today, after announcing a record profit and 30% revenue growth last week. Also today, “you will no longer be able to create new Google+ profiles, pages, communities or events,” in anticipation of the April shutdown of Google+, Google’s bet-the-company challenge to Facebook.

Both Google and Facebook have proved many business mantras wrong, not the least of which is the one about “first-mover advantages.” In business, timing is everything. There is no first-mover advantage just as there is no late-mover advantage (and there are no “business laws,” regardless of what countless books, articles, and lectures tell you).

When Google launched on September 4, 1998, it had to compete with a handful of other search engines. Google vanquished all of them because instead of “organizing the world’s information” (in the words of its stated mission), it opted for automated self-organization. Google built its “search” business (what used to be called “information retrieval”) by closely tracking cross-references (i.e., links between web pages) as they appeared and correlating relevance with the quantity of those cross-references (i.e., the popularity of a page as judged by how many other pages linked to it). In contrast, the dominant player at the time, Yahoo, followed the traditional library model, attempting to build a card catalog (ontologies) of all the information on the web. Automated classification (i.e., Google) won.
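Google’s actual ranking system (PageRank and its many successors) is far more elaborate, but the core link-popularity idea can be sketched in a few lines. A minimal illustration, assuming a toy link graph and the textbook damping factor of 0.85 (not Google’s actual data or parameters):

```python
# Minimal sketch of link-popularity ranking in the spirit of PageRank.
# The link graph and damping factor are illustrative assumptions.

def rank_pages(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    score = {p: 1.0 / n for p in pages}          # start with uniform scores
    for _ in range(iterations):
        new = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:                      # dangling page: spread evenly
                for p in pages:
                    new[p] += damping * score[page] / n
            else:                                 # pass score along each link
                share = damping * score[page] / len(outlinks)
                for target in outlinks:
                    new[target] += share
        score = new
    return score

# A toy web of four pages citing one another.
toy_web = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
print(sorted(rank_pages(toy_web).items(), key=lambda kv: -kv[1]))
```

Page C ends up on top because the most pages point at it; no human ever had to classify anything, which is precisely the point.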

Similarly, Facebook wasn’t the first social network. The early days of the web saw SixDegrees.com and LiveJournal, and in 2002 Friendster reached 3 million users within just a few months. MySpace launched in 2003 and reached 25 million users two years later. These early movers conditioned consumers to the idea (and possible benefits) of social networking and helped encourage increased investment in broadband connections. They also provided Facebook with a long list of technical and business mistakes to avoid.

There was also a shining example of a successful web-born company–Google–for Facebook to emulate. Like Google, Facebook attracted clever engineers to build a smart and scalable infrastructure and, like Google, it established a successful and sustainable business model by re-inventing advertising. Facebook, however, went much further than its role model in responding to rising competition, either buying competitors or successfully copying them.

Facebook’s success also led Google to launch Google+, its most spectacular failure to date. The major culprit was the misleading concept of a “social signal.” Driven by the rise of Facebook (and Twitter), the conventional wisdom around 2010 was that the data Google was collecting, the data behind the success of its search engine, was missing the “social” dimension of finding and discovering information. People on the web (and on Facebook and Twitter) were increasingly relying on members of their social networks for relevant information, reducing their use of Google Search.

When Larry Page took over as Google CEO in 2011, adding a “social signal” to its search engine—and trying to beat Facebook at its own game—became his primary mission. In his first week as CEO in April 2011, Page sent a company-wide memo tying 25% of every employee’s bonus to Google’s success in social. Google introduced its answer to the Facebook “like” button, the Google “+1” recommendations, which, according to Danny Sullivan, the most astute Google watcher at the time, could “become an important new signal for Google to use as part of its overall ranking algorithm, during a time when it desperately needs new signals.”

The complete competitive answer to Facebook, Google+, was launched in June 2011 as “one of the most ambitious bets in the company’s history” and a “response to the disruption of Web 2.0 and the emergence of the social web,” per Eric Schmidt and Jonathan Rosenberg in How Google Works (2014). But in January 2012, comScore estimated that users averaged 3.3 minutes a month on the site, compared to 7.5 hours on Facebook. And it was all downhill from there. Why?

Part of the problem was that Google tried very hard to show the world it was not just copying Facebook but improving on it. Facebook’s approach to creating a social network was perceived as too simple: it designated (and still does) everyone in your network as a “friend,” from your grandmother to someone you never met in person who worked with you on a time-limited project. Google’s clever answer was “circles,” which let you classify “friends” into specific and meaningful sub-networks. This, of course, went against Google’s early great hunch that user (or librarian) classification does not work on the web because it does not “scale.” So what looked like a much-needed correction to Facebook ultimately failed: trained well by Google to expect and enjoy automated classification, users did not want to play librarians.

More important, my guess is that even the relatively small number of active participants in Google+ (90 million by the end of 2011) was enough for Google to discover pretty quickly that the belief that “Making use of social signals gives Google a valuable new signal closely tied with individuals and known accounts that it could use” was simply a mirage. “Social signals” did not improve search results. In addition, 2012 brought about the Deep Learning (what we now call “AI”) revolution that changed everything at Google, especially how it engineered its search algorithm.

Sophisticated statistical classification—finding hidden correlations in huge amounts of data and using them to put seemingly unrelated entities into common buckets—was the foundation of Google’s initial success. In 2012, a specific approach to this type of statistical analysis of vast quantities of data, variously called “machine learning,” “deep learning,” and “artificial intelligence (AI),” burst out of obscure academic papers and precincts and became the buzzword of the day.
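To make the “common buckets” idea concrete, here is a minimal sketch of one classic statistical bucketing technique, k-means clustering. It is only an illustration of the general approach; the points, the choice of k, and the algorithm itself are my assumptions, not anything Google has disclosed:

```python
# Toy illustration of bucketing entities by statistics alone (k-means).
import random

def kmeans(points, k, iterations=20, seed=0):
    random.seed(seed)
    centers = random.sample(points, k)            # pick k initial centers
    for _ in range(iterations):
        buckets = [[] for _ in range(k)]
        for x, y in points:
            # assign each point to its nearest center
            i = min(range(k),
                    key=lambda c: (x - centers[c][0])**2 + (y - centers[c][1])**2)
            buckets[i].append((x, y))
        for i, b in enumerate(buckets):
            if b:                                 # move center to bucket's mean
                centers[i] = (sum(p[0] for p in b) / len(b),
                              sum(p[1] for p in b) / len(b))
    return centers, buckets

pts = [(1, 1), (1.2, 0.9), (0.8, 1.1), (5, 5), (5.1, 4.8), (4.9, 5.2)]
centers, buckets = kmeans(pts, k=2)
print(centers)   # two centers emerge near (1, 1) and (5, 5)
```

No one tells the algorithm what the buckets mean; the structure falls out of the data, which is the same spirit, if not the same math, as the deep learning methods that followed.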

Two major milestones marked the emergence of what I prefer to call “statistics on steroids”: In June 2012, Google’s Jeff Dean and Stanford’s Andrew Ng reported an experiment in which they showed a deep learning neural network 10 million unlabeled images randomly taken from YouTube videos, and “to our amusement, one of our artificial neurons learned to respond strongly to pictures of… cats.” And in October of the same year, a deep learning neural network achieved an error rate of only 16% in the ImageNet Large Scale Visual Recognition Challenge, a significant improvement over the 25% error rate achieved by the best entry the year before. “AI” was off to the races.

The impact of the statistics on steroids revolution was such that even Google’s most sacred cow, its search algorithm, had to—after some resistance—incorporate the new, automated, scalable, not-user-dependent, “AI” signal, an improved way to statistically analyze the much bigger pile of data Google now collects. “RankBrain has moved in, a machine-learning artificial intelligence that Google’s been using to process a ‘very large fraction’ of search results per day,” observed Danny Sullivan in October 2015.
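RankBrain’s internals have never been published, so Sullivan’s description is about as much as outsiders know. Purely as a sketch of what “incorporating a learned signal” into a ranking function can look like, one might blend an embedding-similarity score with traditional link and text scores; every weight, field name, and vector below is an assumption for illustration, not Google’s method:

```python
# Hypothetical sketch: blending a learned relevance signal into a
# hand-tuned ranking score. All names, weights, and vectors are assumed.
from math import sqrt

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)) or 1.0)

def blended_score(query_vec, doc, w_links=0.5, w_text=0.2, w_learned=0.3):
    # doc carries a precomputed link-popularity score, a keyword-match
    # score, and an embedding produced by some learned model (assumed).
    learned = cosine(query_vec, doc["embedding"])
    return (w_links * doc["link_score"]
            + w_text * doc["text_score"]
            + w_learned * learned)

doc = {"link_score": 0.7, "text_score": 0.4, "embedding": [0.1, 0.9, 0.3]}
print(blended_score([0.2, 0.8, 0.1], doc))
```

The design point is that the learned signal is automated and scales with data, exactly the properties the hand-classified “social signal” lacked.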

AI killed Google+.

Good for Google. Analysts expect Google’s parent Alphabet to report, after the market close today, earnings of $11.08 per share and adjusted revenue of $31.3 billion. These results would represent year-over-year growth rates of 14% and 21%, respectively.

Update: Alphabet’s Q4 2018 revenues were up 22% at $39.3 billion, and earnings per share were $12.77, up 31.6%.

Originally published on Forbes.com


Machine Learning Algorithms

Murat Durmus: an overview of the most proven algorithms used mainly in predictive analytics


AI Chips, Anyone? On Surging Markets and Spectacular Exits

March 11, 2019: Nvidia buys Mellanox for $6.9 billion. The deal is the second-largest acquisition ever in the Israeli high-tech industry.

December 16, 2019: Intel Confirms $2 Billion Habana Labs Acquisition. The deal marks Intel’s second-largest acquisition of an Israeli company.

From How To Let Ordinary Investors Invest In Startups? OurCrowd Has The Answer:

“The arrival of autonomous cars is stuck in traffic,” says Medved. “But if you have a product that can be used in the near term you are going to be fine.” As an example, he mentions Hailo, a startup chipmaker OurCrowd has invested in.

Rethinking traditional computer architecture, Hailo invented an AI processor that enables smart devices to perform sophisticated machine (deep) learning tasks with minimal power consumption, size, and cost. Hailo’s specialized processor fits into a multitude of smart machines and devices, including autonomous vehicles, smart cameras, smartphones, drones, AR/VR platforms, and wearables.

The market for deep learning chipsets is estimated to increase from $5.1 billion in 2018 to $72.6 billion in 2025. The edge computing market, the target market for the Hailo processor, is expected to represent more than three-quarters of the total market opportunity, according to research firm Tractica. “There are a ton of applications where people need this today,” says Medved, “not only when there will be fully autonomous cars.”
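For scale, that forecast implies growth of roughly 46% a year. A quick back-of-the-envelope check (the dollar figures are the ones cited above; the arithmetic is mine):

```python
# Implied compound annual growth rate for the cited forecast:
# $5.1B in 2018 growing to $72.6B in 2025, i.e. seven years of growth.
start, end, years = 5.1, 72.6, 2025 - 2018
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.0%}")   # roughly 46% a year
```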


99 Predictions About AI in 2020

“Q: How worried do you think we humans should be that machines will take our jobs?

A: It depends what role machine intelligence will play. Machine intelligence in some cases will be useful for solving problems, such as translation. But in other cases, such as in finance or medicine, it will replace people.”

This Q&A is taken from Tom Standage’s description of how he interviewed an AI (the language model GPT-2) for The Economist’s The World in 2020. As readers of this column’s annual roundup of AI predictions know, this year’s first installment of 120 AI predictions for 2020 featured my interview with Amazon’s AI, in which Alexa performed slightly better than the previous year.

For the new list of 99 additional predictions, I repeated Standage’s question to Alexa, and got the response “Hmm, I’m not sure.” The following AI movers and shakers are a lot more confident in what the near future of machine intelligence will look like, from robotic process automation (RPA) to human intelligence augmentation (HIA) to natural language processing (NLP).

Read more here


Las Vegas Most Cyber-Insecure US Metro

Source: Cybersecurity in the City: Where Small Businesses Are Most Vulnerable to Attack


Online Advertising Revenues Expected to Grow by $60 Billion between 2019 and 2024

Infographic: The Changing Face of the U.S. Advertising Landscape | Statista


Extra! Extra! 42 Additional 2020 Cybersecurity Predictions

From disrupting elections to targeted ransomware to privacy regulations to deepfakes and malevolent AI, 141 cybersecurity predictions for 2020 did not exhaust the subject, so here are an additional 42 from senior cybersecurity executives.

Read more here


Aging and Automation

The “under-appreciated” workforce — experienced workers with long tenures at their companies, aged 50 and above — is estimated to have contributed $7.6 trillion to U.S. economic activity in 2015, a figure set to jump to over $13.5 trillion by 2032, according to a new report by Mercer and Oliver Wyman with Marsh & McLennan Advantage on aging and automation. Yet those employees also face the threat of having their work replaced by machines, with older workers in the U.S. doing jobs that are on average 52% automatable. However, a rapidly aging population and a falling birthrate mean that retraining this workforce is vital for the success of many companies, the report argues.


Robot Augmentation

You may have heard “AI” explained as “augmented intelligence,” i.e., robots supporting or enhancing human intelligence. Now, researchers at Amazon are experimenting with the reverse, robot augmentation:

“…researchers at Amazon’s Alexa AI division developed a framework that endows agents with the ability to ask for help in certain situations. Using what’s called a model-confusion-based method, the agents ask questions based on their level of confusion as determined by a predefined confidence threshold, which the researchers claim boosts the agents’ success by at least 15%.”
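The quote describes the mechanism precisely enough to sketch: the agent acts on its own only when its most likely action clears a predefined confidence bar, and asks a human otherwise. A minimal sketch, assuming a made-up threshold and action probabilities (Amazon’s actual framework is not public in this form):

```python
# Sketch of a model-confusion-based "ask for help" gate.
# The threshold value and the action probabilities are assumptions.

CONFIDENCE_THRESHOLD = 0.75  # predefined cutoff, assumed for illustration

def act_or_ask(action_probs):
    """action_probs: dict mapping candidate actions to model probabilities."""
    best_action, confidence = max(action_probs.items(), key=lambda kv: kv[1])
    if confidence >= CONFIDENCE_THRESHOLD:
        return ("act", best_action)
    # Low confidence = high confusion: ask a clarifying question instead.
    return ("ask", f"Did you mean '{best_action}'?")

print(act_or_ask({"go_left": 0.9, "go_right": 0.1}))    # ('act', 'go_left')
print(act_or_ask({"go_left": 0.55, "go_right": 0.45}))  # ('ask', ...)
```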

See here and here


The Digital Transformation of Recorded Music

Statistic: Distribution of music industry revenue in the United States in 2017 and 2018, by source | Statista


Source: The Hollywood Reporter
