Best of 2019: How Israel Became a Medical Cannabis Leader

[April 29, 2019]

Opening the CannaTech conference earlier this month, former Israeli prime minister Ehud Barak quipped that Israel is now the “land of milk, honey and cannabis.” Given the recent performance of cannabis-related stocks traded on the Tel Aviv Stock Exchange (Barak is Chairman of InterCure, whose stock appreciated 1,000% in 2018), are investors getting high on nothing more than a buzz bubble?

Behind the buzz about “marijuana millionaires,” Yuge market potential, and volatile stocks (InterCure’s stock nearly tripled earlier this year but is now 25% off its peak) is a serious 55-year-old Israeli enterprise of pioneering interdisciplinary research into the medical benefits of cannabis. Supported by a perfect climate for growing cannabis, it has led to a very supportive climate—academic, regulatory, and entrepreneurial—for developing botanically sourced, pharmaceutical-grade products. Like the rest of the world, Israel has considered (and still considers) cannabis a “dangerous drug,” but unlike the rest of the world, it has not let the stigma deter its insatiable curiosity about cannabis’s therapeutic potential.

The entrepreneurial poster child for this long-held belief in the efficacy of medical marijuana is Breath of Life (BOL) Pharma. Founded in 2008, today it is “the only company in Israel that is fully integrated throughout the value chain,” says its CEO, Tamir Gedo.

This means BOL Pharma is compliant with the GAP and GMP standards of the global pharmaceutical industry, governing all stages of production and distribution, from cultivation to processing to marketing of finished products such as tablets, capsules, inhalers, creams and oils. This unique competitive advantage is buttressed by BOL’s R&D function, currently involved in 32 Phase 2 clinical trials, as well as a 65,000-square-foot production plant and one million square feet of cultivation facilities.

“You don’t see this kind of consistency in products around the world,” says Gedo. “Flowers are not consistent and if you don’t have consistency, you run the risk of having side effects at different times.” The need to overcome the challenge of developing medicine from an inconsistent botanical source is why 60% of BOL’s 200 employees have previously worked in the pharmaceutical industry and why Gedo insists on staying focused on the company’s medical cannabis vision rather than developing products for recreational use. “Our advantage is time,” says Gedo. “We’ve been doing it for many years.”

This time- and experience-based competitive advantage applies to the Israeli cannabis ecosystem as a whole. In the early 1960s, looking to make his mark in the academic world, Israeli chemist Raphael Mechoulam decided to focus on cannabis research because “in a small country like Israel, if you want to do significant work, you should try to do something novel.” Moreover, “a scientist should find topics of importance,” he says in the documentary The Scientist. “Cannabis had been used for thousands of years both as a drug [and] as a recreational agent, but surprisingly, the active compound was never isolated in pure form.”

[youtube https://www.youtube.com/watch?v=csbJnBKqwIw]

Mechoulam and his colleagues isolated the chemical compounds of cannabis (which he called “cannabinoids”), specifically CBD (the main non-psychoactive component) and THC (the psychoactive component). In the early 1990s, they discovered the endocannabinoid system in the human body which is involved in regulating a variety of physiological and cognitive processes (including mood and memory), and in mediating the pharmacological effects of cannabis.

These discoveries have led to a vast body of research conducted in Israel and around the world on various aspects, medical and otherwise, of cannabinoids (see here, for example). With government support, both in terms of funding and regulation, Israel has become a center for medical cannabis R&D, with many academic institutions and companies “offshoring” research and clinical trials to Israel, having been prevented from doing it in the US and elsewhere.

Gedo calls this R&D-and-clinical-trials-as-a-service “open innovation,” providing the research and regulatory infrastructure for others to innovate and produce their own IP. But the infrastructure and accumulated experience and expertise also help BOL Pharma and other Israeli companies develop their own unique cannabis-related IP. For example, BOL has been working on unique new formulations which make the medical cannabis more effective by increasing its “bio-availability” (rate of absorption in the body), thus reducing the cost to the consumer and potential side effects.

When BOL entered the medical cannabis market a decade ago it did not have a lot of local (or global, for that matter) competition. Today, according to a recent survey published in Israeli business publication Globes, there are more than 100 Israeli companies contributing to the “current boiling point” of this market. These include companies growing and processing cannabis, or running pharma production facilities, or exporting Israeli know-how, or developing drug delivery mechanisms.

These companies are going after a worldwide medical cannabis market estimated to grow rapidly to $100.03 billion in 2025, according to Grand View Research. Most are private companies, but some may test the public markets before long—BOL Pharma may list on the Canadian stock exchange or in the US, and Canndoc, another pharma-grade medical cannabis pioneer (acquired last year by InterCure), has recently submitted a confidential prospectus for a Nasdaq IPO.

The ever-growing market size estimates and increased activity in the public markets have drawn the attention of venture capital firms. Funding for cannabis startups in the US more than doubled from 2017 to 2018, reaching more than $1.3 billion, according to Crunchbase. The first quarter of 2019 saw funding more than double year-over-year, and earlier this month, Pax Labs (vaporization technologies and devices) raised $420 million at a valuation of $1.7 billion.

The most recent funding data for cannabis-related Israeli startups shows that only $76 million was raised from 2013 to 2017, according to IVC Research. That number has probably increased considerably by now, as just one cannabis-related startup, Syqe Medical (inhalers), raised $50 million in its second funding round at the end of 2018.

And there’s more to come, including Israel-US collaborations. OurCrowd, Israel’s most active venture investor (including in Syqe Medical), announced in January that it will partner with Colorado-based 7thirty to create a new $30 million fund focused on emerging cannabis technology companies in Israel, Canada and the United States.

At its annual conference last month, OurCrowd awarded 88-year-old Professor Raphael Mechoulam its Maimonides Lifetime Achievement award (the other winner was 100-year-old Professor Avraham Baniel, the inventor and co-founder of DouxMatok). In accepting the award, Mechoulam talked about his current work, predicting that “within the next decade, maybe less, we shall have drugs for a variety of diseases based on the compounds, the constituents of the [cannabis] plant and the constituents of our own body, the endogenous cannabinoids, and compounds that we make, that will be effective for a large number of diseases.”

[youtube https://www.youtube.com/watch?v=LNhVhUXw5Ak]

Originally published on Forbes.com


Advancing Your AI Career

[Table: AI jobs]

“AI Career Pathways” is designed to guide aspiring AI engineers in finding jobs and building a career. The table above shows Workera’s key findings about AI roles and the tasks they perform. You’ll find more insights like this in the free PDF.

From the report:

People in charge of data engineering need strong coding and software engineering skills, ideally combined with machine learning skills to help them make good design decisions related to data. Most of the time, data engineering is done using database query languages such as SQL and object-oriented programming languages such as Python, C++, and Java. Big data tools such as Hadoop and Hive are also commonly used.

Modeling is usually programmed in Python, R, Matlab, C++, Java, or another language. It requires strong foundations in mathematics, data science, and machine learning. Deep learning skills are required by some organizations, especially those focusing on computer vision, natural language processing, or speech recognition.

People working in deployment need to write production code, possess strong back-end engineering skills (in Python, Java, C++, and the like), and understand cloud technologies (for example AWS, GCP, and Azure).

Team members working on business analysis need an understanding of mathematics and data science for analytics, as well as strong communication skills and business acumen. They sometimes use programming languages such as R, Python, and Tableau, although many tasks can be carried out in a spreadsheet, PowerPoint or Keynote, or A/B testing software.

Working on AI infrastructure requires broad software engineering skills to write production code and understand cloud technologies.
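
To make the data-engineering and modeling descriptions in the excerpt above a bit more concrete, here is a minimal Python sketch that pulls records out of a relational store with SQL and fits a simple model on them. It is only an illustration under invented assumptions: the in-memory database, the table schema, and the tiny dataset are all hypothetical, and real pipelines would use production databases, big data tools, and far more data.

    # A minimal, hypothetical sketch of the data-engineering and modeling tasks
    # described in the report: query data with SQL, then fit a model in Python.
    # The schema and data are invented for illustration only.
    import sqlite3
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Data engineering: pull features and labels out of a relational store with SQL.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (age INTEGER, visits INTEGER, churned INTEGER)")
    conn.executemany(
        "INSERT INTO users VALUES (?, ?, ?)",
        [(25, 30, 0), (40, 2, 1), (35, 12, 0), (52, 1, 1), (29, 20, 0), (61, 3, 1)],
    )
    rows = conn.execute("SELECT age, visits, churned FROM users").fetchall()

    # Modeling: fit a basic classifier on the queried data.
    X = np.array([[age, visits] for age, visits, _ in rows])
    y = np.array([churned for _, _, churned in rows])
    model = LogisticRegression().fit(X, y)

    # Deployment/analysis: score a new, unseen record.
    print(model.predict([[45, 4]]))  # predicts whether this hypothetical user churns

In practice, each step in this toy script maps to a different role in the report: building and querying the data store (data engineering), fitting and validating the model (modeling), and serving predictions reliably (deployment and AI infrastructure).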


Best of 2019: The Web at 30


[March 12, 2019] Tim Berners-Lee liberated data so it can eat the world. In his book Weaving the Web, he wrote:

I was excited about escaping from the straightjacket of hierarchical documentation systems…. By being able to reference everything with equal ease, the web could also represent associations between things that might seem unrelated but for some reason did actually share a relationship. This is something the brain can do easily, spontaneously. … The research community has used links between paper documents for ages: Tables of content, indexes, bibliographies and reference sections… On the Web… scientists could escape from the sequential organization of each paper and bibliography, to pick and choose a path of references that served their own interest.

With this one imaginative leap, Berners-Lee moved beyond a major stumbling block for all previous information retrieval systems: the pre-defined classification system at their core. This insight was so counter-intuitive that even during the early years of the Web, attempts were still made to classify (and organize in pre-defined taxonomies) all the information on the Web.

Thirty years ago, Tim Berners-Lee circulated a proposal for “Mesh” to his management at CERN. While the Internet started as a network for linking research centers, the World Wide Web started as a way to share information among researchers at CERN. Both have since expanded to touch more than half of the world’s population because they have been based on open standards.

Creating a closed and proprietary system has been the business model of choice for many great inventors and some of the greatest inventions of the computer age. That’s where we were headed in the early 1990s: the establishment of global proprietary networks owned by a few computer and telecommunications companies, whether old or new. Tim Berners-Lee’s invention and CERN’s decision to offer it to the world for free in 1993 changed the course of this proprietary march, giving a new—and much expanded—life to the Internet (itself a response to proprietary systems that did not inter-communicate) and establishing a new, open platform for a seemingly infinite number of applications and services.

As Bob Metcalfe told me in 2009: “Tim Berners-Lee invented the URL, HTTP, and HTML standards… three adequate standards that, when used together, ignited the explosive growth of the Web… What this has demonstrated is the efficacy of the layered architecture of the Internet. The Web demonstrates how powerful that is, both by being layered on top of things that were invented 17 years before, and by giving rise to amazing new functions in the following decades.”

Metcalfe also touched on the power and potential of an open platform: “Tim Berners-Lee tells this joke, which I hasten to retell because it’s so good. He was introduced at a conference as the inventor of the World Wide Web. As often happens when someone is introduced that way, there are at least three people in the audience who want to fight about that, because they invented it or a friend of theirs invented it. Someone said, ‘You didn’t. You can’t have invented it. There’s just not enough time in the day for you to have typed in all that information.’ That poor schlemiel completely missed the point that Tim didn’t create the World Wide Web. He created the mechanism by which many, many people could create the World Wide Web.”

Metcalfe’s comments were first published in ON magazine which I created and published for my employer at the time, EMC Corporation. For a special issue (PDF) commemorating the 20th anniversary of the invention of the Web, we asked some 20 digital influencers (as we would call them today) how the Web has changed their and our lives and what it will look like in the future. Here’s a sample:

Howard Rheingold: “The Web allows people to do things together that they weren’t allowed to do before. But… I think we are in danger of drowning in a sea of misinformation, disinformation, spam, porn, urban legends, and hoaxes.”

Chris Brogan: “We look at the Web as this set of tools that allow people to try any idea without a whole lot of expense… Anyone can start anything with very little money, and then it’s just a meritocracy in terms of winning the attention wars.”

Dany Levy (founder of DailyCandy): “With the Web, everything comes so easily. I wonder about the future and the human ability to research and to seek and to find, which is really an important skill. I wonder, will human beings lose their ability to navigate?”

We also interviewed Berners-Lee in 2009. He said that the Web has “changed in the last few years faster than it changed before, and it is crazy for us to imagine this acceleration will suddenly stop.” He pointed out the ongoing tendency to lock what we do with computers in a proprietary jail: “…there are aspects of the online world that are still fairly ‘pre-Web.’ Social networking sites, for example, are still siloed; you can’t share your information from one site with a contact on another site.”

But he remained both realistic and optimistic, the hallmarks of an entrepreneur: “The Web, after all, is just a tool…. What you see on it reflects humanity—or at least the 20% of humanity that currently has access to the Web… No one owns the World Wide Web, no one has a copyright for it, and no one collects royalties from it. It belongs to humanity, and when it comes to humanity, I’m tremendously optimistic.”

Originally published on Forbes.com

See also A Very Short History Of The Internet And The Web


Best of 2019: 60 Years of Progress in AI

New Zealand flatworm

[January 8, 2019] Today is the first day of CES 2019 and artificial intelligence (AI) “will pervade the show,” says Gary Shapiro, chief executive of the Consumer Technology Association. One hundred and thirty years ago today (January 8, 1889), Herman Hollerith was granted a patent titled “Art of Compiling Statistics.” The patent described a punched card tabulating machine which heralded the fruitful marriage of statistics and computer engineering—called “machine learning” since the late 1950s, and reincarnated today as “deep learning,” or more popularly as “artificial intelligence.”

Commemorating IBM’s 100th anniversary in 2011, The Economist wrote:

In 1886, Herman Hollerith, a statistician, started a business to rent out the tabulating machines he had originally invented for America’s census. Taking a page from train conductors, who then punched holes in tickets to denote passengers’ observable traits (e.g., that they were tall, or female) to prevent fraud, he developed a punch card that held a person’s data and an electric contraption to read it. The technology became the core of IBM’s business when it was incorporated as Computing Tabulating Recording Company (CTR) in 1911 after Hollerith’s firm merged with three others.

In his patent application, Hollerith explained the usefulness of his machine in the context of a population survey and the statistical analysis of what we now call “big data”:

The returns of a census contain the names of individuals and various data relating to such persons, as age, sex, race, nativity, nativity of father, nativity of mother, occupation, civil condition, etc. These facts or data I will for convenience call statistical items, from which items the various statistical tables are compiled. In such compilation the person is the unit, and the statistics are compiled according to single items or combinations of items… it may be required to know the numbers of persons engaged in certain occupations, classified according to sex, groups of ages, and certain nativities. In such cases persons are counted according to combinations of items. A method for compiling such statistics must be capable of counting or adding units according to single statistical items or combinations of such items. The labor and expense of such tallies, especially when counting combinations of items made by the usual methods, are very great.

In Before the Computer, James Cortada describes the results of the first large-scale machine learning project:

The U.S. Census of 1890… was a milestone in the history of modern data processing…. No other occurrence so clearly symbolized the start of the age of mechanized data handling…. Before the end of that year, [Hollerith’s] machines had tabulated all 62,622,250 souls in the United States. Use of his machines saved the bureau $5 million over manual methods while cutting sharply the time to do the job. Additional analysis of other variables with his machines meant that the Census of 1890 could be completed within two years, as opposed to nearly ten years taken for fewer data variables and a smaller population in the previous census.

But the efficient output of the machine was considered by some as “fake news.” In 1891, the Electrical Engineer reported (quoted in Patricia Cline Cohen’s A Calculating People):

The statement by Mr. Porter [the head of the Census Bureau, announcing the initial count of the 1890 census] that the population of this great republic was only 62,622,250 sent into spasms of indignation a great many people who had made up their minds that the dignity of the republic could only be supported on a total of 75,000,000. Hence there was a howl, not of “deep-mouthed welcome,” but of frantic disappointment.  And then the publication of the figures for New York! Rachel weeping for her lost children and refusing to be comforted was a mere puppet-show compared with some of our New York politicians over the strayed and stolen Manhattan Island citizens.

A century later, no matter how much more efficiently machines learned, they were still accused of creating and disseminating fake news. On March 24, 2011, the U.S. Census Bureau delivered “New York’s 2010 Census population totals, including first look at race and Hispanic origin data for legislative redistricting.” In response to the census data showing that New York had about 200,000 fewer people than originally thought, Senator Chuck Schumer said, “The Census Bureau has never known how to count urban populations and needs to go back to the drawing board. It strains credulity to believe that New York City has grown by only 167,000 people over the last decade.” Mayor Bloomberg called the numbers “totally incongruous” and Brooklyn borough president Marty Markowitz said “I know they made a big big mistake.” [The results of the 1990 census were also disappointing and were unsuccessfully challenged in court, according to the New York Times.]

Complaints by politicians and other people have not slowed down the continuing advances in using computers in ingenious ways for increasingly sophisticated statistical analysis. In 1959, Arthur Samuel experimented with teaching computers how to beat humans in checkers, calling his approach “machine learning.”

Later applied successfully to modern challenges such as spam filtering and fraud detection, the machine-learning approach relied on statistical procedures that found patterns in the data or classified the data into different buckets, allowing the computer to “learn” (e.g., optimize the performance—accuracy—of a certain task) and “predict” (e.g., classify or put in different buckets) the type of new data that is fed to it. Entrepreneurs such as Norman Nie (SPSS) and Jim Goodnight (SAS) accelerated the practical application of computational statistics by developing software programs that enabled the widespread use of machine learning and other sophisticated statistical analysis techniques.
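
As a rough illustration of that learn-and-predict pattern (a toy sketch of the general approach, not any particular product mentioned here), the following Python snippet turns a handful of invented messages into word-count statistics and classifies a new message as spam or not:

    # A toy sketch of the statistical machine-learning approach described above:
    # find patterns in word counts, then "predict" the bucket for new data.
    # The miniature dataset is invented for illustration.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    messages = ["win a free prize now", "meeting moved to 3pm",
                "free money click here", "lunch tomorrow?"]
    labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam

    vectorizer = CountVectorizer()
    X = vectorizer.fit_transform(messages)        # text -> word-count statistics
    classifier = MultinomialNB().fit(X, labels)   # "learn" patterns in the counts

    new = vectorizer.transform(["claim your free prize"])
    print(classifier.predict(new))                # "predict" the bucket, here spam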

In his 1959 paper, Samuel described machine learning as particularly suited for very specific tasks, in distinction to the “Neural-net approach,” which he thought could lead to the development of general-purpose learning machines. The neural networks approach was inspired by a 1943 paper by Warren S. McCulloch and Walter Pitts in which they described networks of idealized and simplified artificial “neurons” and how they might perform simple logical functions, leading to the popular description of today’s neural networks as “mimicking the brain.”
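
To see how simple those idealized neurons are, here is a minimal sketch (my own illustration, not McCulloch and Pitts’ notation): a unit that fires when the weighted sum of its binary inputs reaches a threshold, which is enough to implement elementary logic gates.

    # A minimal sketch of a McCulloch-Pitts-style idealized neuron: binary inputs,
    # fixed weights, and a firing threshold. Changing the threshold changes the gate.
    def neuron(inputs, weights, threshold):
        total = sum(i * w for i, w in zip(inputs, weights))
        return 1 if total >= threshold else 0

    def AND(a, b):
        return neuron([a, b], weights=[1, 1], threshold=2)

    def OR(a, b):
        return neuron([a, b], weights=[1, 1], threshold=1)

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))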

Over the years, the popularity of “neural networks” has gone up and down a number of hype cycles, starting with the Perceptron, a 2-layer neural network that was considered by the US Navy to be “the embryo of an electronic computer that… will be able to walk, talk, see, write, reproduce itself and be conscious of its existence.” In addition to failing to meet these lofty expectations—similar in tone to today’s perceived threat of “super-intelligence”—neural networks suffered from fierce competition from the academics who coined the term “artificial intelligence” in 1955 and preferred the manipulation of symbols rather than computational statistics as a sure path to creating a human-like machine.

It didn’t work, and “AI Winter” set in. With the invention and successful application of “backpropagation” as a way to overcome the limitations of simple neural networks, statistical analysis was once again ascendant, now cleverly labeled “deep learning.” In Neural Networks and Statistical Models (1994), Warren Sarle explained to his worried and confused fellow statisticians that the ominous-sounding artificial neural networks

are nothing more than nonlinear regression and discriminant models that can be implemented with standard statistical software… like many statistical methods, [artificial neural networks] are capable of processing vast amounts of data and making predictions that are sometimes surprisingly accurate; this does not make them “intelligent” in the usual sense of the word. Artificial neural networks “learn” in much the same way that many statistical algorithms do estimation, but usually much more slowly than statistical algorithms. If artificial neural networks are intelligent, then many statistical methods must also be considered intelligent.

Sarle provided his colleagues with a handy dictionary translating the terms used by “neural engineers” to the language of statisticians (e.g., “features” are “variables”). In anticipation of today’s “data science” and predictions of algorithms replacing statisticians (and even scientists), Sarle reassured them that no “black box” can substitute for human intelligence:

Neural engineers want their networks to be black boxes requiring no human intervention—data in, predictions out. The marketing hype claims that neural networks can be used with no experience and automatically learn whatever is required; this, of course, is nonsense. Doing a simple linear regression requires a nontrivial amount of statistical expertise.

In his April 2018 congressional testimony, Mark Zuckerberg agreed that relying blindly on black boxes is not a good idea: “I don’t think that in 10 or 20 years, in the future that we all want to build, we want to end up with systems that people don’t understand how they’re making decisions.” Still, Zuckerberg used the aura, the enigma, the mystery that masks inconvenient truths, everything that has been associated with the hyped marriage of computers and statistical analysis, to assure the public that the future will be great: “Over the long term, building AI tools is going to be the scalable way to identify and root out most of this harmful content.”

Facebook’s top AI researcher Yann LeCun is “less optimistic, and a lot less certain about how long it would take to improve AI tools.” In his assessment, “Our best AI systems have less common sense than a house cat.” An accurate description of today’s not very intelligent machines, and reminiscent of what Samuel said in his 1959 machine learning paper:

Warren S. McCulloch has compared the digital computer to the nervous system of a flatworm. To extend this comparison to the situation under discussion would be unfair to the worm since its nervous system is actually quite highly organized as compared to [the most advanced artificial neural networks of the day].

Over the past sixty years, artificial intelligence has advanced from being not as smart as a flatworm to having less common sense than a house cat.

Originally published on Forbes.com


Best of 2019: How AI Killed Google’s Social Network

[February 4, 2019] Facebook turns 15 today, after announcing last week a record profit and 30% revenue growth. Also today, “you will no longer be able to create new Google+ profiles, pages, communities or events,” in anticipation of the complete shutdown in April of Google’s social network, its bet-the-company challenge to Facebook.

Both Google and Facebook have proved many business mantras wrong, not the least of which is the one about “first-mover advantages.” In business, timing is everything. There is no first-mover advantage just as there is no late-mover advantage (and there are no “business laws,” regardless of what countless books, articles, and lectures tell you).

When Google was launched on September 4, 1998, it had to compete with a handful of other search engines. Google vanquished all of them because instead of “organizing the world’s information” (in the words of its stated mission), it opted for automated self-organization. Google built its “search” business (what used to be called “information retrieval”) by closely tracking cross-references (i.e., links between web pages) as they were happening and correlating relevance with quantity of cross-references (i.e., popularity of pages as judged by how many other pages linked to them). In contrast, the dominant player at the time, Yahoo, followed the traditional library model by attempting to build a card-catalog (ontologies) of all the information on the web. Automated classification (i.e., Google) won.
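
The link-counting idea can be sketched in a few lines of Python. This is only a simplified illustration of ranking pages by incoming links (in the spirit of PageRank), not Google's actual algorithm, and the tiny link graph is invented for the example:

    # A simplified sketch of ranking pages by the links pointing at them
    # (a PageRank-style power iteration); the link graph is invented.
    links = {            # page -> pages it links to
        "A": ["B", "C"],
        "B": ["C"],
        "C": ["A"],
        "D": ["C"],
    }

    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    damping = 0.85

    for _ in range(50):  # iterate until the ranks roughly settle
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            share = damping * rank[page] / len(outlinks)
            for target in outlinks:
                new_rank[target] += share
        rank = new_rank

    # Pages with more (and better-ranked) incoming links score higher.
    print(sorted(rank.items(), key=lambda kv: -kv[1]))

Web-scale ranking involves many more signals and far more engineering, but the core intuition the post describes, relevance inferred from the quantity and quality of incoming links, is captured in this small loop.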

Similarly, Facebook wasn’t the first social network. The early days of the web saw SixDegrees.com and LiveJournal, and in 2002 Friendster reached 3 million users in just a few months. MySpace launched in 2003 and two years later reached 25 million users. These early movers conditioned consumers to the idea (and possible benefits) of social networking and helped encourage increased investment in broadband connections. They also provided Facebook with a long list of technical and business mistakes to avoid.

There was also a shining example of a successful web-born company–Google–for Facebook to emulate. Like Google, it attracted clever engineers to build a smart and scalable infrastructure and, like Google, it established a successful and sustainable business model by re-inventing advertising. Facebook, however, went much further than its role model in responding to rising competition by either buying competitors or successfully copying them.

Facebook’s success also led Google to launch Google+, its most spectacular failure to date. The major culprit was the misleading concept of a “social signal.” Driven by the rise of Facebook (and Twitter), the conventional wisdom around 2010 was that the data Google was collecting, the data that was behind the success of its search engine, was missing the “social” dimension of finding and discovering information. People on the web (and on Facebook and Twitter) were increasingly relying on getting relevant information from the members of their social networks, reducing their use of Google Search.

When Larry Page took over as Google CEO in 2011, adding a “social signal” to its search engine—and trying to beat Facebook at its own game—became his primary mission. In his first week as CEO in April 2011, Page sent a company-wide memo tying 25% of every employee’s bonus to Google’s success in social. Google introduced its answer to the Facebook “like” button, the Google “+1” recommendations, which, according to Danny Sullivan, the most astute Google watcher at the time, could “become an important new signal for Google to use as part of its overall ranking algorithm, during a time when it desperately needs new signals.”

The complete competitive answer to Facebook, Google+, was launched in June 2011, as “one of the most ambitious bets in the company’s history,” and a “response to the disruption of Web 2.0 and the emergence of the social web,” per Eric Schmidt and Jonathan Rosenberg in How Google Works (2014). But in January 2012, ComScore estimated that users averaged 3.3 minutes on the site compared to 7.5 hours on Facebook. And it was all downhill from there. Why?

Part of the problem was that Google tried very hard to show the world it was not just copying Facebook but improving on it. Facebook’s approach to creating a social network was perceived as too simple, designating as “friends” (as it still does) everybody in your network, from your grandmother to someone you never met in person but worked with on a time-limited project. Google’s clever answer was “circles,” allowing you to classify “friends” into specific and meaningful sub-networks. This, of course, went against Google’s great early hunch that user (or librarian) classification does not work on the web because it does not “scale.” So what looked like a much-needed correction to Facebook ultimately failed. Trained well by Google to expect and enjoy automated classification, users did not want to play librarians.

More important, I guess that even the relatively small number of active participants in Google+ (90 million by the end of 2011) was enough for Google to discover pretty quickly that the belief that “Making use of social signals gives Google a valuable new signal closely tied with individuals and known accounts that it could use” was simply a mirage. “Social signals” did not improve search results. In addition, 2012 brought about the Deep Learning (what we now call “AI”) revolution that changed everything at Google, especially how it engineered its search algorithm.

Sophisticated statistical classification—finding hidden correlations in huge amounts of data and using them to put seemingly unrelated entities into common buckets—was the foundation of Google’s initial success. In 2012, a specific approach to this type of statistical analysis of vast quantities of data, variously called “machine learning,” “deep learning,” and “artificial intelligence (AI),” burst out of obscure academic papers and precincts and became the buzzword of the day.

Two major milestones marked the emergence of what I prefer to call “statistics on steroids”: In June 2012, Google’s Jeff Dean and Stanford’s Andrew Ng reported an experiment in which they showed a deep learning neural network 10 million unlabeled images randomly taken from YouTube videos, and “to our amusement, one of our artificial neurons learned to respond strongly to pictures of… cats.” And in October of the same year, a deep learning neural network achieved an error rate of only 16% in the ImageNet Large Scale Visual Recognition Challenge, a significant improvement over the 25% error rate achieved by the best entry the year before. “AI” was off to the races.

The impact of the statistics on steroids revolution was such that even Google’s most sacred cow, its search algorithm, had to—after some resistance—incorporate the new, automated, scalable, not-user-dependent, “AI” signal, an improved way to statistically analyze the much bigger pile of data Google now collects. “RankBrain has moved in, a machine-learning artificial intelligence that Google’s been using to process a ‘very large fraction’ of search results per day,” observed Danny Sullivan in October 2015.

AI killed Google+.

Good for Google. Analysts expect Google’s parent Alphabet to report, after the market close today, earnings of $11.08 per share and adjusted revenue of $31.3 billion. These results would represent year-over-year growth rates of 14% and 21%, respectively.

Update: Alphabet’s Q4 2018 revenues were up 22% at $39.3 billion, and earnings per share were $12.77, up 31.6%.

Originally published on Forbes.com


Machine Learning Algorithms

Murat Durmus: an overview of the most proven algorithms that are mainly used in the field of predictive analytics


AI Chips, Anyone? On Surging Markets and Spectacular Exits

March 11, 2019: Nvidia buys Mellanox for $6.9B. The deal is the second-largest acquisition ever in the Israeli high-tech industry.

December 16, 2019: Intel Confirms $2 Billion Habana Labs Acquisition. The deal marks Intel’s second-largest acquisition of an Israeli company.

From How To Let Ordinary Investors Invest In Startups? OurCrowd Has The Answer:

“The arrival of autonomous cars is stuck in traffic,” says Medved. “But if you have a product that can be used in the near term you are going to be fine.” As an example, he mentions Hailo, a startup chipmaker OurCrowd has invested in.

Rethinking the traditional computer architecture, Hailo invented an AI processor enabling smart devices to perform sophisticated machine (deep) learning tasks with minimal power consumption, size, and cost. Hailo’s specialized processor fits into a multitude of smart machines and devices, including autonomous vehicles, smart cameras, smartphones, drones, AR/VR platforms, and wearables.

The market for deep learning chipsets is estimated to increase from $5.1 billion in 2018 to $72.6 billion in 2025. The edge computing market, the target market for the Hailo processor, is expected to represent more than three-quarters of the total market opportunity, according to research firm Tractica. “There are a ton of applications where people need this today,” says Medved, “not only when there will be fully autonomous cars.”



99 Predictions About AI in 2020

“Q: How worried do you think we humans should be that machines will take our jobs?

A: It depends what role machine intelligence will play. Machine intelligence in some cases will be useful for solving problems, such as translation. But in other cases, such as in finance or medicine, it will replace people.”

This Q&A is taken from Tom Standage’s description of how he interviewed an AI (the language model GPT-2) for The Economist’s The World in 2020. As readers of this column’s annual roundup of AI predictions know, this year’s first installment of 120 AI predictions for 2020 featured my interview with Amazon’s AI, in which Alexa performed slightly better than the previous year.

For the new list of 99 additional predictions, I repeated Standage’s question to Alexa, and got the response “Hmm, I’m not sure.” The following AI movers and shakers are a lot more confident in what the near future of machine intelligence will look like, from robotic process automation (RPA) to human intelligence augmentation (HIA) to natural language processing (NLP).

Read more here


Las Vegas Most Cyber Insecure US Metro

Source: Cybersecurity in the City: Where Small Businesses Are Most Vulnerable to Attack


Online Advertising Revenues Expected to Grow by $60 Billion between 2019 and 2024

Infographic: The Changing Face of the U.S. Advertising Landscape (Statista)
