
Source: Wale Akinfaderin

6 Hot Internet of Things (IoT) Security Technologies
Bruce Schneier on how an IoT-linked teddy bear leaked personal audio recordings, and a video interview with him (Schneier, not the teddy bear) about IoT security.


According to the tally Google provided to MIT Technology Review, it published 218 journal or conference papers on machine learning in 2016, nearly twice as many as it did two years ago… Compared to all companies that publish prolifically on artificial intelligence, Clarivate ranks Google No. 1 by a wide margin.
See also AI And Community Development Are Two Key Reasons Why Google May Win The Cloud Wars

Neural network created in SAS Visual Data Mining and Machine Learning 8.1
Artificial intelligence, machine learning and neural networks-based deep learning are concepts that have recently come to dominate venture capital funding, startup formation, promotion and exits, and policy discussions. The highly-publicized triumphs over humans in Go and Poker, rapid progress in speech recognition, image identification, and language translation, and the proliferation of talking and texting virtual assistants and chatbots, have helped inflate the market caps of Apple (#1 as of February 17), Google (#2), Microsoft (#3), Amazon (#5), and Facebook (#6).
While these companies dominate the headlines—and the war for the relevant talent—other companies that have been analyzing data or providing tools for analysis for years are also capitalizing on recent AI advances. Cases in point are Equifax and SAS: the former is developing deep learning tools to improve credit scoring, and the latter is adding new deep learning functionality to its data mining tools and offering a deep learning API.
Both companies have a lot of experience in what they do. Equifax, founded in 1899, is a credit reporting agency, collecting and analyzing data on more than 820 million consumers and more than 91 million businesses worldwide. SAS, founded in 1976, develops and sells data analytics and data management software.
The AI concepts that make headlines today also have a long history. Moving beyond speedy calculation, two approaches to applying early computers to other types of cognitive work emerged in the 1950s. One was labeled “artificial intelligence,” the other “machine learning” (a decidedly less sexy and attention-grabbing name). While the artificial intelligence approach was rooted in symbolic logic, a branch of mathematics, the machine-learning approach was rooted in statistics. There was another important distinction between the two: the artificial intelligence approach followed the dominant computer science paradigm, in which a programmer defines what the computer has to do by coding an algorithm—a model, a program—in a programming language. The machine-learning approach relied on data and on statistical procedures that found patterns in the data or classified the data into different buckets, allowing the computer to “learn” (i.e., optimize the performance—the accuracy—of a certain task) and “predict” (i.e., classify or sort into buckets) new data fed to it.
For traditional computer science, data was what the program processed and the output of that processing. With machine learning, the data itself defines what to do next. Says Oliver Schabenberger, Executive Vice President and Chief Technology Officer at SAS: “What sometimes gets overlooked is that it’s really the data that drives machine learning.”
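To make that data-driven distinction concrete, here is a minimal sketch of the machine-learning approach in Python. It uses scikit-learn, and the toy spam-filter messages and labels are purely illustrative assumptions, not examples from the article; the point is that the code only chooses a statistical procedure, while the data determines what the resulting model actually does.

```python
# Minimal sketch: a statistical procedure "learns" from labeled data and then
# "predicts" labels for new data. Toy messages and labels are illustrative.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

messages = ["win a free prize now", "meeting moved to 3pm",
            "claim your free reward today", "lunch tomorrow?"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(messages)      # turn text into word-count features

model = LogisticRegression()
model.fit(X, labels)                        # "learn": fit the model to the data

new_messages = vectorizer.transform(["your free prize is waiting"])
print(model.predict(new_messages))          # "predict": classify new data into a bucket
```

Feed the same few lines a different data set and they yield a different model, which is exactly the sense in which the data drives machine learning.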
Over the years, machine learning has been applied successfully to problems such as spam filtering, handwriting recognition, machine translation, fraud detection, and product recommendations. Many successful “digital natives,” such as Google, Amazon and Netflix, have built their fortunes with the help of machine learning algorithms. The real-world experiences of these companies have proved how successful machine learning can be at using lots of data from a variety of sources to predict consumer behavior. Using lots and lots of data makes predictive models more robust and predictions more accurate. “Big Data,” however, gave rise not only to a new type of data-driven company, but also to a new type of machine learning: “Deep Learning.”
Deep learning takes the machine-learning approach much further by applying it to multi-layer “artificial neural networks.” Influenced by a computational model of human neural networks first developed in 1943, artificial neural networks got their first software manifestation in the 1957 Perceptron, an algorithm for pattern recognition based on a two-layer network. Abandoned for a while because of the limited computing power of the day, deep neural networks have seen a remarkable revival over the last decade, fueled by advanced algorithms, big data, and increased computing power, specifically in the form of Graphics Processing Units (GPUs), which process data in parallel, thus cutting down on the time required to “train” the computer.
Today’s deep neural networks move vast amounts of data through many layers of hardware and software, each layer coming up with its own representation of the data and passing what it “learned” to the next layer. Artificial intelligence attempts “to make a machine that thinks like a human. Deep neural networks try to solve pretty narrow tasks,” says Schabenberger. Relinquishing the quest for human-like intelligence, deep learning has succeeded in vastly expanding the range of narrow tasks machines can learn and perform.
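As a rough illustration of that layered flow (not a depiction of any particular production system), the sketch below pushes one record through three layers in plain NumPy; the layer sizes, random weights, and input values are assumptions made for illustration, and in a real network the weights would be learned during training.

```python
# Illustrative only: data flows through successive layers, each producing its
# own representation and passing it to the next. Sizes and weights are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 8))            # one input record with 8 raw features

W1 = rng.normal(size=(8, 16))          # layer 1: 8 inputs -> 16 hidden units
W2 = rng.normal(size=(16, 8))          # layer 2: 16 -> 8
W3 = rng.normal(size=(8, 1))           # layer 3: 8 -> 1 output score

h1 = np.maximum(0, x @ W1)             # layer 1's representation of the data
h2 = np.maximum(0, h1 @ W2)            # layer 2 re-represents layer 1's output
score = 1 / (1 + np.exp(-(h2 @ W3)))   # final layer maps the representation to a prediction
print(score)
```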
“We noticed a couple of years ago,” says Peter Maynard, Senior Vice President of Global Analytics at Equifax, “that we were not getting enough statistical lift from our traditional credit scoring methodology.” The conventional wisdom in the credit scoring industry at the time was that it had to stick with traditional machine learning approaches such as logistic regression, because the results were interpretable and therefore in compliance with regulation. Modern machine-learning approaches such as deep neural networks, which promised more accurate results, presented a challenge in that regard: they were not interpretable. They are considered a “black box,” a process so complex that even its programmers do not fully understand how the learning machine reached the results it produced.
“My team decided to challenge that and find a way to make neural nets interpretable,” says Maynard. He explains: “We developed a mathematical proof that shows that we could generate a neural net solution that can be completely interpretable for regulatory purposes. Each of the inputs can map into the hidden layer of the neural network and we imposed a set of criteria that enable us to interpret the attributes coming into the final model. We stripped apart the black box so we can have an interpretable outcome. That was revolutionary, no one has ever done that before.”
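Equifax has not published the details of its patented approach, so the sketch below is emphatically not its method; it is only a generic illustration of one way a small scoring network can be kept inspectable: wire each hidden unit to a single group of related attributes, so that its output reads like a named sub-score whose contribution to the final score can be reported. The attribute names, groupings, weights, and data are all hypothetical.

```python
# Generic illustration of an interpretable network structure (not Equifax's method):
# each hidden unit sees only one group of related attributes, so its contribution
# to the final score can be reported attribute-group by attribute-group.
import numpy as np

groups = {                      # hypothetical attribute groups
    "payment_history": [0, 1, 2],
    "utilization":     [3, 4],
    "account_age":     [5],
}
x = np.array([0.9, 1.0, 0.8, 0.4, 0.3, 0.7])   # one applicant's attributes (illustrative)

# Per-group weights; in practice these would be learned under the interpretability
# constraints, but here they are fixed for illustration.
w = {"payment_history": np.array([0.5, 0.3, 0.2]),
     "utilization":     np.array([-0.6, -0.4]),
     "account_age":     np.array([0.8])}
v = {"payment_history": 1.2, "utilization": 1.0, "account_age": 0.5}  # output weights

sub_scores = {g: float(np.tanh(x[idx] @ w[g])) for g, idx in groups.items()}
score = sum(v[g] * s for g, s in sub_scores.items())

for g, s in sub_scores.items():   # each hidden unit's contribution is directly inspectable
    print(f"{g}: contribution {v[g] * s:+.3f}")
print(f"total score: {score:.3f}")
```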
Maynard reports that the neural net has improved the predictive ability of the model by up to 15%. The larger the data set analyzed and the more complex the analysis, the bigger the improvement. “In credit scoring,” says Maynard, “we spend a lot of time creating segments to build a model on. Determining the optimal segment could sometimes take 20% of the time that it takes to build a model. In the context of neural nets, those segments are the hidden layers—the neural net does it all for you. The machine is figuring out what the segments are and what the weights in a segment are, instead of having an analyst do that. I find it really powerful.”
The immediate benefit of using neural nets is faster model development as some of the work previously done by data scientists in building and testing a model is automated. But Maynard envisions “full automation,” especially regarding a big part of a data scientist’s job—the ongoing tweaking of the model. Maynard: ”You have a human reviewing it to make sure it’s executing as intended but the whole thing is done automatically. It’s similar to search optimization or product recommendations where the model gets tweaked every time you click. In credit scoring, when you have a neural network with superior predictability and interpretability, there is no reason to have a person in the middle of that process.”
In addition, the “attributes,” or factors affecting a credit score (e.g., the size of an individual’s checking account balance and how it was used over the last 6 months), are now “data-driven.” Instead of being hypotheses developed by data scientists, the attributes are now created by the deep learning process, on the basis of a much larger set of historical or “trended” data. “We are looking at 72 months of data and identifying patterns of consumer behavior over time, using machine learning to understand the signal and the strength of the signal over that time period,” says Maynard. “Now, instead of creating thousands of attributes, we can create hundreds of thousands of attributes for testing. The algorithms will determine what’s the most predictive in terms of the behavior we are trying to model.”
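To show roughly what data-driven attributes over trended data could look like, here is a hedged sketch, not Equifax's actual pipeline: generate many candidate attributes from a synthetic 72-month balance history, then rank them by a simple predictiveness measure. The column names, window lengths, synthetic data, and scoring rule are all illustrative assumptions.

```python
# Hedged sketch of data-driven attribute generation over trended data.
# All data here is synthetic; the windows and scoring rule are illustrative.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n_consumers, n_months = 500, 72
balances = pd.DataFrame(rng.gamma(2.0, 500.0, size=(n_consumers, n_months)),
                        columns=[f"m{m}" for m in range(n_months)])
defaulted = pd.Series(rng.integers(0, 2, n_consumers))   # synthetic outcome to predict

# Generate candidate attributes: rolling means, maxes, and trends over several windows.
candidates = {}
for window in (3, 6, 12, 24, 72):
    recent = balances.iloc[:, -window:]
    candidates[f"mean_{window}m"] = recent.mean(axis=1)
    candidates[f"max_{window}m"] = recent.max(axis=1)
    candidates[f"trend_{window}m"] = recent.iloc[:, -1] - recent.iloc[:, 0]
attributes = pd.DataFrame(candidates)

# Rank attributes by a crude predictiveness score (absolute correlation with the outcome).
strength = attributes.apply(lambda col: abs(np.corrcoef(col, defaulted)[0, 1]))
print(strength.sort_values(ascending=False).head())
```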
The result—and the most important benefit of using modern machine learning tools—is greater access to credit. Analyzing two years’ worth of U.S. mortgage data, Equifax determined that numerous loans that were declined could in fact have been made safely. That promises a considerable expansion of the universe of approved mortgages. “The use case we showed regulators,” says Maynard, “was in the telecom industry, where people had to put down a down payment to get a cell phone—with this model they don’t need to do that anymore.”
Equifax has filed for a patent for its work on improving credit scoring. “It’s the dawn of a new age—enabling greater access to credit is a huge opportunity,” says Maynard.
Originally published on Forbes.com

The market for artificial intelligence (AI) technologies is flourishing. Beyond the hype and the heightened media attention, the numerous startups and the internet giants racing to acquire them, there is a significant increase in investment and adoption by enterprises. A Narrative Science survey found last year that 38% of enterprises are already using AI, growing to 62% by 2018. Forrester Research predicted a greater than 300% increase in investment in artificial intelligence in 2017 compared with 2016. IDC estimated that the AI market will grow from $8 billion in 2016 to more than $47 billion in 2020.
Coined in 1955 to describe a new computer science sub-discipline, “Artificial Intelligence” today includes a variety of technologies and tools, some time-tested, others relatively new. To help make sense of what’s hot and what’s not, Forrester just published a TechRadar report on Artificial Intelligence (for application development professionals), a detailed analysis of 13 technologies enterprises should consider adopting to support human decision-making.
Based on Forrester’s analysis, here’s my list of the 10 hottest AI technologies:
There are certainly many business benefits gained from AI technologies today, but according to a survey Forrester conducted last year, there are also obstacles to AI adoption as expressed by companies with no plans of investing in AI:
| Obstacle to AI adoption | Respondents |
| --- | --- |
| There is no defined business case | 42% |
| Not clear what AI can be used for | 39% |
| Don’t have the required skills | 33% |
| Need first to invest in modernizing data mgt platform | 29% |
| Don’t have the budget | 23% |
| Not certain what is needed for implementing an AI system | 19% |
| AI systems are not proven | 14% |
| Do not have the right processes or governance | 13% |
| AI is a lot of hype with little substance | 11% |
| Don’t own or have access to the required data | 8% |
| Not sure what AI means | 3% |
Once enterprises overcome these obstacles, Forrester concludes, they stand to gain from AI driving accelerated transformation in customer-facing applications and developing an interconnected web of enterprise intelligence.
Originally published on Forbes.com

HT: ArchiTECHt
With its 150-year history, over $2.4 trillion in assets, 37 million customers, and 4,000-strong presence across 70 countries, [HSBC] is an important financial institution in a heavily-regulated industry. “We have to make sure our customers feel confident and trust in us to be the custodian of their assets,” stated Darryl West, Group Chief Information Officer at HSBC, at the recently held Google Cloud Next conference…
“Apart from our $2.4 trillion dollars of assets on our balance sheet, we have at the core of the company a massive asset in [the form of] our data. And what’s been happening in the last three years is a massive growth in the size of our data assets,” shared West, pointing out that data at HSBC has grown tremendously from 56 petabytes in 2014 to over 100 petabytes as of early 2017. “Our customers are adopting digital channels more aggressively and we’re collecting more data about how our customers interact with us. As a bank, we need to work with partners to enable us to understand what’s happening and draw out insights in order for us to run a better business and create some amazing customer experiences,” said West.
| Rank | Job Title | Number of Postings per Million | Average Base Salary | Average Growth in Postings 2013-2016 |
| --- | --- | --- | --- | --- |
| 1 | Machine Learning Engineer | 58 | $134,306 | 36% |
| 2 | Data Scientist | 360 | $129,938 | 108% |
| 3 | Computer Vision Engineer | 20 | $127,849 | 34% |
| 4 | Development Operations Engineer | 731 | $123,165 | 106% |
| 5 | Cloud Engineer | 217 | $118,878 | 67% |
| 6 | Senior Audit Manager | 53 | $118,692 | 52% |
| 7 | Penetration Tester | 317 | $115,557 | 52% |
| 8 | Oracle HCM Manager | 44 | $113,107 | 41% |
| 9 | Full Stack Developer | 641 | $110,770 | 122% |
| 10 | Salesforce Developer | 230 | $108,089 | 83% |
Source: IEEE Spectrum

Venture capital investors have lately taken a keen interest in the logistics, supply chain management and shipping market, which measures in the trillions of dollars globally and in the hundreds of billions of dollars in the US alone.
Based on data from 502 deals struck with US-based companies in the supply chain management, shipping and logistics industries, it’s easy to see that investor interest has been piqued by this supposedly “boring” space. Over the past several years, there’s been a significant upswing in the amount of capital deployed into upstart logistics, shipping and supply chain management companies… Between the start of 2013 and the end of 2016, the amount of venture capital money invested into these industries essentially tripled, a positive change of 297% in the space of four years.
The scale, scope and depth of the data supply chains are generating today are accelerating, providing ample data sets to drive contextual intelligence. The following graphic provides an overview of 52 different sources of big data generated in supply chains. Plotting the data sources by variety, volume and velocity against the relative level of structured/unstructured data, it’s clear that the majority of supply chain data is generated outside the enterprise. Forward-thinking manufacturers are looking at big data as a catalyst for greater collaboration. Source: Big Data Analytics in Supply Chain Management: Trends and Related Research. Presented at 6th International Conference on Operations and Supply Chain Management, Bali, 2014

At the 2017 MIT Tech Conference, Roboticist Helen Greiner, CTO of CyPhy Works, talked about her vision of an end-to-end automation of the supply chain, making it much more efficient than it is today. Greiner: “Up above the treetops is a highway waiting to be populated.”
CyPhy Works and Pilot Thomas Logistics are working together to bring innovative technology to the oil and gas industry. PTL is adding our persistent aerial solutions to their portfolio of services. We flew above PTL’s Mobile, Alabama facility to show their team what is possible with PARC.
This highlight reel includes targeted zoom to a bridge 3 miles away, standard and IR overviews of a tank farm, and vehicle tracking on both land and water. All this during roughly 25 mph winds, conditions that could down other drones.
PARC’s live feed was seen simultaneously at each company’s headquarters (in Massachusetts and Fort Worth, Texas) through CyPhy’s data platform. The next day a live feed was also streamed to IHS Markit’s CERAWeek – the energy industry’s premier annual conference – as part of CyPhy Works’ Energy Innovation Pioneers presentations.
Source: CB Insights


Source: Emily Barry

Travis Deyle, Why Indoor Robots for Commercial Spaces Are the Next Big Thing in Robotics:
Venture funding for robotics has exploded by more than 10x over the last six years and shows no signs of stopping. Most of this investment has been focused on the usual suspects: logistics, warehouse automation, robot arms for manufacturing, healthcare and surgical robots, drones, agriculture, and autonomous cars…
There’s a massive, untapped market… Commercial spaces such as hotels, hospitals, offices, retail stores, banks, schools, nursing homes, malls, and museums.