Country Ranking of IoT Preparedness

[Chart: IDC G20 IoT preparedness index rankings]

IDC:

[This is an] updated index ranking the Group of 20 (G20) nations on their preparedness for Internet of Things (IoT) development. The original index was first published in 2013; this updated version comprises 13 criteria that IDC views as necessary for sustained development of the IoT, reflecting each nation’s economic stature, technological preparedness, and business readiness to benefit from the efficiencies linked to IoT solutions.

The United States, South Korea, and the United Kingdom ranked as the three countries most ready to generate and benefit from the IoT. The U.S. scored particularly well on measures such as ease of doing business, government effectiveness, innovation, and cloud infrastructure, as well as GDP and technology spending as a percent of GDP. South Korea, despite a modest GDP, scored extremely well on IoT-specific spending and has a business environment that fosters innovation and promotes attractive investment opportunities. Similarly, the U.K. scored very highly on measures of ease of doing business, government effectiveness, regulatory quality, start-up procedures, innovation, and broadband penetration.

The standout country in the ranking proved to be Australia, which, despite its relatively small GDP, scored exceptionally high on ease of doing business and start-up procedures, government effectiveness and regulatory quality, and innovation and education. Australia’s scores point to a country that has the necessary ingredients for a business environment that is ready for the growth of IoT.

Posted in Internet of Things

Mobile advertising now accounts for nearly half of online ad budgets

[Chart: U.S. mobile advertising spending and share of online ad budgets]

Financial Times:

Spending on mobile advertising in the US soared 89 per cent to $15.5bn in the first half of the year, taking up nearly half of online ad budgets, new data show. Mobile makes up 47 per cent of all online ad expenditures — up from 30 per cent a year ago and far surpassing the 19 per cent share taken by banner ads, according to a report from the Interactive Advertising Bureau and PwC, the professional services firm.

Posted in Misc

9 Categories of Data Scientists

[Image: the nine categories of data scientists]

DataViz:

  • Those strong in statistics: they sometimes develop new statistical theories for big data that even traditional statisticians are not aware of. They are experts in statistical modeling, experimental design, sampling, clustering, data reduction, confidence intervals, testing, predictive modeling, and other related techniques.
  • Those strong in mathematics: NSA (National Security Agency) or defense/military people working on big data, astronomers, and operations research people doing analytic business optimization (inventory management and forecasting, pricing optimization, supply chain, quality control, yield optimization) as they collect, analyze, and extract value from data.
  • Those strong in data engineering, Hadoop, database/memory/file systems optimization and architecture, APIs, Analytics as a Service, optimization of data flows, and data plumbing.
  • Those strong in machine learning / computer science (algorithms, computational complexity).
  • Those strong in business, ROI optimization, and decision sciences, involved in some of the tasks traditionally performed by business analysts in bigger companies (dashboard design, metric mix selection and metric definitions, ROI optimization, high-level database design).
  • Those strong in production code development and software engineering (they know a few programming languages).
  • Those strong in visualization.
  • Those strong in GIS, spatial data, data modeled by graphs, and graph databases.
  • Those strong in a few of the above. After 20 years of experience across many industries, big and small companies (and lots of training), I’m strong in stats, machine learning, business, and mathematics, and more than just familiar with visualization and data engineering. This could happen to you as well over time, as you build experience. I mention this because so many people still think that it is not possible to develop a strong knowledge base across multiple domains that are traditionally perceived as separate (the silo mentality). Indeed, that’s the very reason why data science was created.
Posted in Data Science Careers

Fintech Financing Worldwide 2010-2015

[Charts: global fintech financing by segment and over time, 2010-2015]

Posted in Misc

Updated Laws of Robotics

[Infographic: updated laws of robotics]

Posted in Robotics

The Evolution of Data Scientists, 2006-2016

[Images: the data scientist in 2006, 2011, and 2016]

Gaurav Vohra, CEO and co-founder of Jigsaw Academy:

Here are my predictions for 2021.

  • Machine learning and deep learning will become much more popular. More data and better processing power will enable a lot more analysis of different kinds of data. Those who develop these skills will be in demand.
  • A lot more data will be generated through the Internet of Things. This data will be bigger and messier. Data scientists who develop skills to work with IoT data will have an advantage.
  • Specialized roles will continue to evolve. Specializations will become more logical, and some of the confusion around them today will disappear in the next 5 years.
  • Analytics will play an important role in hiring for analytics (and all other roles). We are already seeing evidence of this, and I think data-driven hiring is coming very soon.
Posted in Data Science Careers, Data Scientists

DeepBench from Baidu: Benchmarking Hardware for Deep Learning

[Slide: benchmarking deep learning workloads]

Source: Greg Diamos and Sharan Narang, “The need for speed: Benchmarking deep learning workloads,” O’Reilly AI Conference

At the O’Reilly Artificial Intelligence conference, Baidu Research announced DeepBench, an open source benchmarking tool for evaluating the performance of deep learning operations on different hardware platforms. Greg Diamos and Sharan Narang of Baidu Research’s Silicon Valley AI Lab talked at the conference about the motivation for developing the benchmark and why faster computers are crucial to the continued success of deep learning.

The harbinger of the current AI Spring, deep learning is a machine learning method that uses “artificial neural networks,” moving vast amounts of data through many layers of hardware and software, with each layer coming up with its own representation of the data and passing what it “learned” to the next layer. As a widely publicized deep learning project demonstrated four years ago, feeding such an artificial neural network with images extracted from 10 million videos can result in the computer (in this case, an array of 16,000 processors) learning to correctly identify and label an image of a cat. One of the leaders of that “Google Brain” project was Andrew Ng, who is today the Chief Scientist at Baidu and the head of Baidu Research.
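For readers who prefer code to metaphors, the layer-by-layer idea can be sketched in a few lines of Python. This is a toy forward pass with made-up layer sizes and random weights, not the Google Brain system or any production network:

```python
# A minimal sketch of the layered idea described above: each layer transforms
# its input into a new representation and passes that representation on to the
# next layer. Sizes and weights are arbitrary, chosen only for illustration.
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

rng = np.random.default_rng(0)

# Three hypothetical layers: 784-dim input (e.g. a small image) -> 256 -> 64 -> 10
layer_sizes = [(784, 256), (256, 64), (64, 10)]
weights = [rng.normal(scale=0.01, size=(n_in, n_out)) for n_in, n_out in layer_sizes]

x = rng.normal(size=(1, 784))      # one input example
representation = x
for W in weights:                  # each layer builds on the previous layer's output
    representation = relu(representation @ W)

print(representation.shape)        # (1, 10): the final, most abstract representation
```

Training consists of nudging those weights, over many passes through the data, so that the final representation becomes useful for the task at hand.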

Research areas of interest to Baidu Research include image recognition, speech recognition, natural language processing, robotics, and big data. Its Silicon Valley AI Lab has deep learning and systems research teams that work together “to explore the latest in deep learning algorithms as well as find innovative ways to accelerate AI research with new hardware and software technologies.”

DeepBench is an attempt to accelerate the development of the hardware foundation for deep learning, by helping hardware developers optimize their processors for deep learning applications, and specifically, for the “training” phase in which the system learns through trial and error. “There are many different types of applications in deep learning—if you are a hardware manufacturer, you may not understand how to build for them. We are providing a tool for people to help them see if a change to a processor [design] improves performance and how it affects the application,” says Diamos.  One of the exciting things about deep learning for him (and no doubt for many other researchers) is that “as the computer gets faster, the application gets better and the algorithms get smarter.”
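To make the benchmarking idea concrete, here is a minimal micro-benchmark in the same spirit: it times a dense matrix multiply, the operation that dominates neural network training. This is only an illustrative sketch, not the DeepBench harness itself, and the matrix sizes are arbitrary:

```python
# Illustrative micro-benchmark: time a GEMM (general matrix multiply) and report
# throughput, the kind of low-level measurement a hardware-oriented benchmark cares about.
import time
import numpy as np

def time_gemm(m, n, k, repeats=10):
    """Time an (m x k) by (k x n) matrix multiply and return seconds and GFLOP/s."""
    a = np.random.rand(m, k).astype(np.float32)
    b = np.random.rand(k, n).astype(np.float32)
    a @ b                                        # warm-up run
    start = time.perf_counter()
    for _ in range(repeats):
        a @ b
    elapsed = (time.perf_counter() - start) / repeats
    gflops = 2.0 * m * n * k / elapsed / 1e9     # 2*m*n*k floating-point operations per multiply
    return elapsed, gflops

elapsed, gflops = time_gemm(1024, 1024, 1024)
print(f"1024x1024x1024 GEMM: {elapsed * 1e3:.2f} ms, {gflops:.1f} GFLOP/s")
```

Run on different processors, numbers like these let a hardware designer see directly whether a design change helps the operations deep learning actually spends its time on.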

Case in point is speech recognition. Or more specifically, DeepSpeech, Baidu Research’s “state-of-the-art speech recognition system developed using end-to-end deep learning.” The most important aspect of this system is its simplicity, says Diamos, with audio on one end, text on the other end, and a single learning algorithm (a recurrent convolutional neural network) sitting in the middle. “We can take exactly the same architecture and apply it to both English and Mandarin with greater accuracy than systems we were building in the past,” says Diamos.
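Conceptually, “audio on one end, text on the other” means the network emits character scores for each slice of audio and a simple decoding step collapses them into a transcript. The sketch below illustrates that pipeline with a faked network (random scores) and a greedy CTC-style decode; it is an assumption-laden toy, not Baidu’s DeepSpeech code:

```python
# Toy end-to-end speech pipeline: audio frames -> per-frame character scores -> text.
# The "network" is a stand-in that returns random scores; a real system would be trained.
import numpy as np

ALPHABET = ["<blank>"] + list("abcdefghijklmnopqrstuvwxyz ")

def fake_network(audio_frames):
    """Stand-in for the trained model: one score per character per audio frame."""
    rng = np.random.default_rng(42)
    return rng.random((len(audio_frames), len(ALPHABET)))

def greedy_ctc_decode(frame_scores):
    """Pick the best character per frame, collapse repeats, and drop blanks."""
    best = frame_scores.argmax(axis=1)
    text, prev = [], None
    for idx in best:
        if idx != prev and idx != 0:     # index 0 is the blank symbol
            text.append(ALPHABET[idx])
        prev = idx
    return "".join(text)

audio = np.zeros(50)                     # placeholder for 50 audio frames
print(greedy_ctc_decode(fake_network(audio)))
```

The point of the end-to-end approach is that everything between the audio and the transcript is learned from data rather than hand-designed.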

In Mandarin, the system is more accurate in transcribing audio to text than native speakers, as the latter may have difficulty understanding what is said because of noise level or accent. Indeed, the data set used by DeepSpeech is very large because it was created by mixing hours of synthetic noise with the raw audio, explains Narang. The largest publicly available data set is about 2000 hours of audio recordings while the one used by DeepSpeech clocks in at 100,000 hours or 10 terabytes of data.
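A back-of-the-envelope calculation shows how 100,000 hours lands in the roughly 10-terabyte range, assuming uncompressed 16-bit, 16 kHz mono audio (an assumed format, not a detail given in the talk):

```python
# Rough sanity check on the data-set size quoted above, under an assumed audio format.
bytes_per_second = 16_000 * 2            # 16 kHz sample rate, 2 bytes per sample
hours = 100_000
total_bytes = bytes_per_second * 3600 * hours
print(f"{total_bytes / 1e12:.1f} TB")    # ~11.5 TB, on the order of the 10 TB cited
```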

The approach taken by the developers of DeepSpeech is superior to other approaches, argue Narang and Diamos. Traditional speech recognition systems, built on a “hand-designed algorithm,” get more accurate with more data but eventually saturate, requiring a domain expert to develop a new algorithm. The hybrid approach adds a deep convolutional neural network; the result is better scaling, but again the performance eventually saturates. DeepSpeech uses deep learning as the entire algorithm and achieves continuous improvement in performance (accuracy) with larger data sets and larger models (more and bigger layers).

Bigger is better. But to capitalize on this feature (pun intended) of deep learning, you need faster computers. “The biggest bottleneck,” says Narang, “is training the model.” He concludes: “Large data sets, a complex model with many layers, and the need to train the model many times are slowing down deep learning research. To make rapid progress, we need to reduce model training time. That’s why we need tools to benchmark the performance of deep learning training. DeepBench allows us to measure the time it takes to perform the underlying deep learning operation. It establishes a line in the sand that will encourage hardware developers to do better by focusing on the right issues.”

Originally published on Forbes.com

Posted in deep learning

Digital Tipping Point: Internet Advertising Surpassing TV Advertising in 2016

[Charts: Internet advertising revenues surpassing TV advertising revenues. Source: PwC]

On October 27, 1994, HotWired, the first commercial Web magazine, gave birth to the first Web banner ad and the Internet advertising industry. PwC predicts that Internet advertising revenues worldwide will surpass TV advertising in 2016 and reach $260.4 billion in 2020.

More about the first six online ads here. One of them, an ad for AT&T, simply said: “Have you ever clicked your mouse right HERE? You will!”

[Image: the first Web banner ad (AT&T, 1994)]

 

Posted in Misc

Maana Deploys AI to Optimize Enterprise Knowledge at Maersk

[Image: Wall Street Journal screenshot, December 10, 2015]

Maana Knowledge Platform for Oil and Gas

Does your company suffer from corporate amnesia? Palo Alto, California-based startup Maana has developed a cure for what ails organizations everywhere: Knowledge of how to perform a certain task or make a specific decision walks out the door with employees migrating to another job or retiring. Even when this tacit knowledge is captured, codified and stored in a database, it may not be accessible to the people who need it, when they need it. “We patented a unique and novel way of indexing and organizing the knowledge that is locked in data silos across the organization,” says founder and CEO Babur Ozden. Today, Maana released a new version of its AI-driven platform.

Failing organizational memory is particularly harmful when there is a “decision deadline,” explains Ozden: “These are decisions that need to take place along the workflow of an operation and need to be taken in a few hours or a few minutes.” Maana’s knowledge graph, which captures complex relations between actions, processes, and assets, coupled with advanced AI algorithms, semantic search, and deep learning, helps employees make faster and more relevant data-driven decisions by providing them with the relevant pieces of organizational memory at the moment they need them most.
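The underlying knowledge-graph idea can be sketched simply, even though Maana’s platform is far more sophisticated: entities such as assets, processes, and procedures become nodes, captured expertise becomes typed relations between them, and a query walks those relations at decision time. All names in the toy example below are hypothetical:

```python
# Toy knowledge graph (not Maana's platform): institutional knowledge stored as
# typed relations between entities, so it can be retrieved when a decision is due.
from collections import defaultdict

graph = defaultdict(list)

def add_relation(subject, relation, obj):
    graph[subject].append((relation, obj))

# Hypothetical facts an experienced engineer might otherwise carry in their head.
add_relation("pump_17", "part_of", "platform_A")
add_relation("pump_17", "failed_with", "cavitation")
add_relation("cavitation", "resolved_by", "procedure_legacy_042")
add_relation("procedure_legacy_042", "authored_by", "retired_engineer_jsmith")

def what_do_we_know(entity, depth=2):
    """Walk outgoing relations a few hops to surface relevant organizational memory."""
    if depth == 0:
        return []
    facts = []
    for relation, obj in graph[entity]:
        facts.append((entity, relation, obj))
        facts.extend(what_do_we_know(obj, depth - 1))
    return facts

for fact in what_do_we_know("pump_17"):
    print(fact)
```

A query about a failing asset surfaces not just the failure mode but the procedure that once fixed it and the person who wrote it, which is exactly the “head start” the quotes below describe.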

Maana’s technology “captures the knowledge people acquire on the job and enables other employees, who do not have a similar experience, to have a head start in making a decision instead of starting from zero,” says Ibrahim Gokcen, Head of Data Science & Analytics at Maersk. The Maersk Group is a worldwide conglomerate that operates in 130 countries with a workforce of over 89,000 employees. Headquartered in Copenhagen, Denmark, with 2015 revenues of $40.3 billion, it owns Maersk Line, the world’s largest container shipping company, and is involved in a wide range of activities in the shipping, logistics, and oil and gas industries.

“We want to make AI part of our digital journey,” says Gokcen. “Strong technology platforms with AI capabilities help the data science and analytics people focus on the business logic, on the algorithms, and on churning models very quickly. These platforms give a head start not just to employees making decisions but also to our data scientists.”

Adds Donald Thompson, Maana’s founder and president: “We capture in a pragmatic way the knowledge of subject matter experts and business users and make it explicit so more people can take advantage of it.”

In a statement, Thompson said that the new version of Maana’s platform is “introducing our first collection of Knowledge Assistants and Knowledge Applications that really bring out the value of our user-guided and machine-assisted approach. People at all levels are empowered to rapidly gain the understanding they need in order to make the best decisions, while generating new knowledge assets (models) that others can use or build upon.”
Originally published on Forbes.com
Posted in AI

Visually Linking AI, Machine Learning, Deep Learning, Big Data and Data Science

[Venn diagrams: data science and its relationship to AI, machine learning, deep learning, and big data]

Source: Battle of the Data Science Venn Diagrams

HT: KDnuggets

[Diagram: artificial intelligence vs. machine learning vs. deep learning]

What’s the Difference Between Artificial Intelligence, Machine Learning, and Deep Learning?

Over the past few years AI has exploded, especially since 2015. Much of that has to do with the wide availability of GPUs that make parallel processing ever faster, cheaper, and more powerful. It also has to do with the simultaneous one-two punch of practically infinite storage and a flood of data of every stripe (that whole Big Data movement) – images, text, transactions, mapping data, you name it.

 

Posted in AI, Data Science, deep learning, Machine Learning