Creative Destruction and the ‘Uber Effect’

[Chart: Uber valuation vs. Medallion Financial (TAXI) stock price]

CB Insights:

We used CB Insights’ valuation data to look at how the rise of Uber’s valuation correlates with the market capitalization of Medallion Financial Corp (NASDAQ: TAXI). Medallion Financial is a publicly-traded company that originates, acquires, and services loans used to purchase taxi medallions in several large US urban markets that Uber is also active in, including New York. We charted the stock price of TAXI versus the valuations for many of Uber’s rounds since 2010.

We found that TAXI has also been hammered by an "Uber Effect," with its stock price falling even more steeply than New York City medallion prices. TAXI's stock is down nearly 49% since Uber raised its breakout $258M Series C at a $3.5B valuation. (The NASDAQ is up ~26% over the same period.)

Uber’s valuation is up over 13x.

Mark J. Perry at the American Enterprise Institute:

[Chart: New York City individual taxi medallion prices]

[Chart: Medallion Financial Corp. (TAXI) stock price]

In 1942, economist Joseph Schumpeter described “creative destruction” as a “process of industrial mutation that incessantly revolutionizes the economic structure from within, incessantly destroying the old one, incessantly creating a new one.” There probably hasn’t been a better example of Schumpeterian creative destruction in the last decade or more than the recent ascendance of app-based ride-sharing services like Uber (and Lyft, Sidecar, Gett, Via, etc.)  challenging traditional, legacy taxi cartels in cities like New York, San Francisco, Chicago and more than 160 other US cities. Market-based evidence of the gale of creative destruction in the transportation industry is displayed in the two charts above. The top chart above shows how the increasing popularity of ride-sharing apps like Uber has caused the price of New York City individual taxi medallions to collapse by at least 37%, from a peak of more than $1 million in August 2013 to only about $650,000 in recent months (based on advertised asking prices here, not actual sales).

Further evidence of the "Uber effect" is displayed in the bottom chart above, showing the collapse in the stock price of Medallion Financial Corporation, from $16.45 in November 2013 to below $7 per share in the last few days. Medallion Financial Corporation (NASDAQ: TAXI) is a NYC-based specialty finance company that originates, acquires, and services loans that finance taxicab medallions. Just as sky-high taxi medallion prices have been significantly eroded by competition from the upstart ride-sharing services, so has Medallion Financial's stock price dropped significantly. After tracking the S&P 500 Index closely for many decades, the share price of Medallion Financial has fallen by a whopping 58% from its November 2013 peak, during a time when the S&P 500 has increased by 7.1%.

As the traditional, legacy taxi industry continues to collapse under the Schumpeterian forces of market disruption, the taxi cartels like the one in NYC are asking for taxpayer bailouts, or at least taxpayer-supported guarantees for taxi medallion loans. Consumers are the obvious winners from the creative destruction in the transportation industry – we now have more choice, better and faster service, friendlier drivers, cleaner cars, and maybe most importantly — lower prices. Traditional taxi drivers and medallion owners, after being protected from competition by government regulations for many generations, are the obvious losers from the “Uber effect.” Medallion prices will continue to fall as the taxi cartels continue to crumble and collapse.

NPR Planet Money: Listen to Episode 643, July 31, 2015, on Gene Freidman, the “Taxi King” and how his empire is starting to crumble. Also, “Why Does A Taxi Medallion Cost $1 Million?” from 2011.

Posted in Misc

10 Predictions for Digital and IT Transformation: Gartner


Gartner released today its top predictions for “the digital future… an algorithmic and smart machine-driven world where people and machines must define harmonious relationships”:

1)    By 2018, 20 percent of business content will be authored by machines.
Technologies with the ability to proactively assemble and deliver information through automated composition engines are fostering a movement from human- to machine-generated business content. Data-based and analytical information can be turned into natural language writing using these emerging tools. Business content, such as shareholder reports, legal documents, market reports, press releases, articles and white papers, are all candidates for automated writing tools.

2)    By 2018, six billion connected things will be requesting support.
In the era of digital business, when physical and digital lines are increasingly blurred, enterprises will need to begin viewing things as customers of services — and to treat them accordingly. Mechanisms will need to be developed for responding to significantly larger numbers of support requests communicated directly by things. Strategies will also need to be developed for responding to them that are distinctly different from traditional human-customer communication and problem-solving. Responding to service requests from things will spawn entire service industries, and innovative solutions will emerge to improve the efficiency of many types of enterprise.

3)    By 2020, autonomous software agents outside of human control will participate in five percent of all economic transactions.
Algorithmically driven agents are already participating in our economy. However, while these agents are automated, they are not fully autonomous, because they are directly tethered to a robust collection of mechanisms controlled by humans — in the domains of our corporate, legal, economic and fiduciary systems. New autonomous software agents will hold value themselves, and function as the fundamental underpinning of a new economic paradigm that Gartner calls the programmable economy. The programmable economy has potential for great disruption to the existing financial services industry. We will see algorithms, often developed in a transparent, open-source fashion and set free on the blockchain, capable of banking, insurance, markets, exchanges, crowdfunding — and virtually all other types of financial instruments.

4)    By 2018, more than 3 million workers globally will be supervised by a “robo-boss.”
Robo-bosses will increasingly make decisions that previously could only have been made by human managers. Supervisory duties are increasingly shifting into monitoring worker accomplishment through measurements of performance that are directly tied to output and customer evaluation. Such measurements can be consumed more effectively and swiftly by smart machine managers tuned to learn based on staffing decisions and management incentives.

5)    By year-end 2018, 20 percent of smart buildings will have suffered from digital vandalism.
Inadequate perimeter security will increasingly result in smart buildings being vulnerable to attack. With exploits ranging from defacing digital signage to plunging whole buildings into prolonged darkness, digital vandalism is a nuisance, rather than a threat. There are, nonetheless, economic, health and safety, and security consequences. The severity of these consequences depends on the target. Smart building components cannot be considered independently, but must be viewed as part of the larger organizational security process. Products must be built to offer acceptable levels of protection and hooks for integration into security monitoring and management systems.

6)    By 2018, 45 percent of the fastest-growing companies will have fewer employees than instances of smart machines.
Gartner believes the initial group of companies that will leverage smart machine technologies most rapidly and effectively will be startups and other newer companies. The speed, cost savings, productivity improvements and ability to scale of smart technology for specific tasks offer dramatic advantages over the recruiting, hiring, training and growth demands of human labor. Some possible examples are a fully automated supermarket or a security firm offering drone-only surveillance services. The “old guard” (existing) companies, with large amounts of legacy technologies and processes, will not necessarily be the first movers, but the savvier companies among them will be fast followers, as they will recognize the need for competitive parity for either speed or cost.

7)    By year-end 2018, customer digital assistants will recognize individuals by face and voice across channels and partners.
The last mile for multichannel and exceptional customer experiences will be seamless two-way engagement with customers and will mimic human conversations, with both listening and speaking, a sense of history, in-the-moment context, timing and tone, and the ability to respond, add to and continue with a thought or purpose at multiple occasions and places over time. Although facial and voice recognition technologies have been largely disparate across multiple channels, customers are willing to adopt these technologies and techniques to help them sift through increasingly large amounts of information, choice and purchasing decisions. This signals an emerging demand for enterprises to deploy customer digital assistants to orchestrate these techniques and to help "glue" continual company and customer conversations.

8)    By 2018, two million employees will be required to wear health and fitness tracking devices as a condition of employment.
The health and fitness of people employed in jobs that can be dangerous or physically demanding will increasingly be tracked by employers via wearable devices. Emergency responders, such as police officers, firefighters and paramedics, will likely comprise the largest group of employees required to monitor their health or fitness with wearables. The primary reason for wearing them is for their own safety. Their heart rates and respiration, and potentially their stress levels, could be remotely monitored and help could be sent immediately if needed. In addition to emergency responders, a portion of employees in other critical roles will be required to wear health and fitness monitors, including professional athletes, political leaders, airline pilots, industrial workers and remote field workers.

9)    By 2020, smart agents will facilitate 40 percent of mobile interactions, and the postapp era will begin to dominate.
Smart agent technologies, in the form of virtual personal assistants (VPAs) and other agents, will monitor user content and behavior in conjunction with cloud-hosted neural networks to build and maintain data models from which the technology will draw inferences about people, content and contexts. Based on these information-gathering and model-building efforts, VPAs can predict users’ needs, build trust and ultimately act autonomously on the user’s behalf.

10) Through 2020, 95 percent of cloud security failures will be the customer's fault.
Security concerns remain the most common reason for avoiding the use of public cloud services. However, only a small percentage of the security incidents impacting enterprises using the cloud have been due to vulnerabilities that were the provider’s fault. This does not mean that organizations should assume that using a cloud means that whatever they do within that cloud will necessarily be secure. The characteristics of the parts of the cloud stack under customer control can make cloud computing a highly efficient way for naive users to leverage poor practices, which can easily result in widespread security or compliance failures. The growing recognition of the enterprise’s responsibility for the appropriate use of the public cloud is reflected in the growing market for cloud control tools. By 2018, 50 percent of enterprises with more than 1,000 users will use cloud access security broker products to monitor and manage their use of SaaS and other forms of public cloud, reflecting the growing recognition that although clouds are usually secure, the secure use of public clouds requires explicit effort on the part of the cloud customer.

Posted in Misc

GE’s Internet of Things (IoT): The software platform, Predix, and new business model, GE Digital (Video)

[youtube https://www.youtube.com/watch?v=f4-pFZEv3QQ?rel=0]

On September 14, 2015, GE announced the creation of GE Digital, “a transformative move that brings together all of the digital capabilities from across the company into one organization.” It integrates GE’s Software Center, the expertise of GE’s global IT and commercial software teams, and the industrial security strength of Wurldtech. This “new model” (not a business unit, apparently) is led by Bill Ruh, chief digital officer.

In the video above, Ruh talked briefly about GE Digital, preceded by GE Digital’s CTO Harel Kodesh talking about Predix, GE’s software platform for the “Industrial Internet” or IoT.

See also Internet Of Things (IoT) News Roundup

Posted in Internet of Things

The World’s #1 Data Scientist Talks about Data Science Skills and Tools

[youtube https://www.youtube.com/watch?v=dpzxW6buh9Y]

Owen Zhang is ranked #1 on Kaggle, the online stadium for data science competitions. An engineer by training, Zhang says that data science is finding “practical solutions to not very well-defined problems,” similar to engineering. He believes that good data scientists, “otherwise known as unicorn data scientists,” have three types of expertise. Since data science deals with practical problems, the first one is being familiar with a specific domain and knowing how to solve a problem in that domain. The second is the ability to distinguish signal from noise, or understanding statistics. The third skill is software engineering.

[youtube https://www.youtube.com/watch?v=7YnVZrabTA8]

Zhang, Chief Product Officer at DataRobot, shares in this talk his experience with open source tools in data science competitions.  Slides here.

Posted in Data Science, Data Science Careers

A Sane Discussion of the Rising Fears of Artificial Intelligence (AI)

[vimeo 138319099 w=500 h=281]

Rise of Concerns about AI: Reflections and Directions

Discussions about artificial intelligence (AI) have jumped into the public eye over the past year, with several luminaries speaking about the threat of AI to the future of humanity. Over the last several decades, AI—automated perception, learning, reasoning, and decision making—has become commonplace in our lives. We plan trips using GPS systems that rely on the A* algorithm to optimize the route. Our smartphones understand our speech, and Siri, Cortana, and Google Now are getting better at understanding our intentions. Machine vision detects faces as we take pictures with our phones and recognizes the faces of individual people when we post those pictures to Facebook. Internet search engines rely on a fabric of AI subsystems. On any day, AI provides hundreds of millions of people with search results, traffic predictions, and recommendations about books and movies. AI translates among languages in real time and speeds up the operation of our laptops by guessing what we will do next. Several companies are working on cars that can drive themselves—either with partial human oversight or entirely autonomously. Beyond the influences in our daily lives, AI techniques are playing roles in science and medicine. AI is already at work in some hospitals helping physicians understand which patients are at highest risk for complications, and AI algorithms are finding important needles in massive data haystacks, such as identifying rare but devastating side effects of medications.
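As an aside, the A* route planning mentioned above can be sketched in a few lines of Python. The toy road graph, costs, and straight-line-distance heuristic below are purely illustrative, not drawn from any production GPS system:

```python
import heapq

def a_star(graph, start, goal, h):
    """Lowest-cost path from start to goal.

    graph: dict mapping node -> list of (neighbor, edge_cost)
    h: heuristic estimating remaining cost to goal; it must never
       overestimate for the returned path to be optimal.
    """
    # Frontier entries: (estimated_total, cost_so_far, node, path)
    frontier = [(h(start), 0, start, [start])]
    best = {start: 0}  # cheapest known cost to each node
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for nbr, cost in graph.get(node, []):
            ng = g + cost
            if ng < best.get(nbr, float("inf")):
                best[nbr] = ng
                heapq.heappush(frontier, (ng + h(nbr), ng, nbr, path + [nbr]))
    return float("inf"), []

# Toy road network with straight-line distances as the heuristic
roads = {
    "A": [("B", 1), ("C", 4)],
    "B": [("C", 1), ("D", 5)],
    "C": [("D", 1)],
}
h_est = {"A": 2, "B": 2, "C": 1, "D": 0}
cost, path = a_star(roads, "A", "D", lambda n: h_est[n])
```

The heuristic lets the search expand promising nodes first, which is why A* scales to continent-sized road networks where uninformed search would not.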

The AI in our lives today provides a small glimpse of more profound contributions to come. For example, the fielding of currently available technologies could save many thousands of lives, including those lost to accidents on our roadways and to errors made in medicine. Over the longer-term, advances in machine intelligence will have deeply beneficial influences on healthcare, education, transportation, commerce, and the overall march of science. Beyond the creation of new applications and services, the pursuit of insights about the computational foundations of intelligence promises to reveal new principles about cognition that can help provide answers to longstanding questions in neurobiology, psychology, and philosophy.

On the research front, we have been making slow, yet steady progress on “wedges” of intelligence, including work in machine learning, speech recognition, language understanding, computer vision, search, optimization, and planning. However, we have made surprisingly little progress to date on building the kinds of general intelligence that experts and the lay public envision when they think about “Artificial Intelligence.” Nonetheless, advances in AI—and the prospect of new AI-based autonomous systems—have stimulated thinking about the potential risks associated with AI.

A number of prominent people, mostly from outside of computer science, have shared their concerns that AI systems could threaten the survival of humanity.1 Some have raised concerns that machines will become superintelligent and thus be difficult to control. Several of these speculations envision an “intelligence chain reaction,” in which an AI system is charged with the task of recursively designing progressively more intelligent versions of itself and this produces an “intelligence explosion.”4 While formal work has not been undertaken to deeply explore this possibility, such a process runs counter to our current understandings of the limitations that computational complexity places on algorithms for learning and reasoning. However, processes of self-design and optimization might still lead to significant jumps in competencies.

Other scenarios can be imagined in which an autonomous computer system is given access to potentially dangerous resources (for example, devices capable of synthesizing billions of biologically active molecules, major portions of world financial markets, large weapons systems, or generalized task markets9). The reliance on any computing systems for control in these areas is fraught with risk, but an autonomous system operating without careful human oversight and failsafe mechanisms could be especially dangerous. Such a system would not need to be particularly intelligent to pose risks.

We believe computer scientists must continue to investigate and address concerns about the possibilities of the loss of control of machine intelligence via any pathway, even if we judge the risks to be very small and far in the future. More importantly, we urge the computer science research community to focus intensively on a second class of near-term challenges for AI. These risks are becoming salient as our society comes to rely on autonomous or semiautonomous computer systems to make high-stakes decisions. In particular, we call out five classes of risk: bugs, cybersecurity, the “Sorcerer’s Apprentice,” shared autonomy, and socioeconomic impacts.

The first set of risks stems from programming errors in AI software. We are all familiar with errors in ordinary software; bugs frequently arise in the development and fielding of software applications and services. Some software errors have been linked to extremely costly outcomes and deaths. The verification of software systems is challenging and critical, and much progress has been made—some relying on AI advances in theorem proving. Many non-AI software systems have been developed and validated to achieve high degrees of quality assurance. For example, the software in autopilot and spacecraft systems is carefully tested and validated. Similar practices must be applied to AI systems. One technical challenge is to guarantee that systems built via machine learning methods behave properly. Another challenge is to ensure good behavior when an AI system encounters unforeseen situations. Our automated vehicles, home robots, and intelligent cloud services must perform well even when they receive surprising or confusing inputs. Achieving such robustness may require self-monitoring architectures in which a meta-level process continually observes the actions of the system, checks that its behavior is consistent with the core intentions of the designer, and intervenes or alerts if problems are identified. Research on real-time verification and monitoring of systems is already exploring such layers of reflection, and these methods could be employed to ensure the safe operation of autonomous systems.3,6
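A minimal sketch of such a self-monitoring architecture is a thin meta-level wrapper that observes an agent's proposed actions, checks them against a designer-supplied invariant, and intervenes on violation. The class, invariant, and fallback below are hypothetical illustrations, not from the cited runtime-verification work:

```python
class SafetyMonitor:
    """Meta-level layer: watches another component's actions, checks
    them against an invariant, and substitutes a safe fallback
    (logging an alert) when the check fails."""

    def __init__(self, agent_step, invariant, fallback):
        self.agent_step = agent_step  # the system being monitored
        self.invariant = invariant    # predicate over (state, action)
        self.fallback = fallback      # safe action when the check fails
        self.alerts = []              # intercepted actions, for the designer

    def step(self, state):
        action = self.agent_step(state)
        if self.invariant(state, action):
            return action
        self.alerts.append((state, action))  # record the violation
        return self.fallback(state)

# Toy cruise controller that proposes an unsafe acceleration
def controller(state):
    return state["speed"] + 20

monitor = SafetyMonitor(
    agent_step=controller,
    invariant=lambda s, a: a <= s["limit"],  # never exceed the speed limit
    fallback=lambda s: s["limit"],
)
safe_action = monitor.step({"speed": 50, "limit": 60})  # clamped to 60
```

The point of the design is separation of concerns: the agent can be arbitrarily complex or learned, while the monitor stays small enough to verify.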

A second set of risks is cyberattacks: criminals and adversaries are continually attacking our computers with viruses and other forms of malware. AI algorithms are as vulnerable as any other software to cyberattack. As we roll out AI systems, we need to consider the new attack surfaces that these expose. For example, by manipulating training data or preferences and trade-offs encoded in utility models, adversaries could alter the behavior of these systems. We need to consider the implications of cyberattacks on AI systems, especially when AI methods are charged with making high-stakes decisions. U.S. funding agencies and corporations are supporting a wide range of cybersecurity research projects, and artificial intelligence techniques will themselves provide novel methods for detecting and defending against cyberattacks. For example, machine learning can be employed to learn the fingerprints of malware, and new layers of reflection can be employed to detect abnormal internal behaviors, which can reveal cyberattacks. Before we put AI algorithms in control of high-stakes decisions, we must be confident these systems can survive large-scale cyberattacks.

A third set of risks echoes the tale of the Sorcerer's Apprentice. Suppose we tell a self-driving car to "get us to the airport as quickly as possible!" Would the autonomous driving system put the pedal to the metal and drive at 125 mph, putting pedestrians and other drivers at risk? Troubling scenarios of this form have appeared recently in the press. Many of the dystopian scenarios of out-of-control superintelligences are variations on this theme. All of these examples refer to cases where humans have failed to correctly instruct the AI system on how it should behave. This is not a new problem. An important aspect of any AI system that interacts with people is that it must reason about what people intend rather than carrying out commands literally. An AI system must analyze and understand whether the behavior that a human is requesting is likely to be judged as "normal" or "reasonable" by most people. In addition to relying on internal mechanisms to ensure proper behavior, AI systems need to have the capability—and responsibility—of working with people to obtain feedback and guidance. They must know when to stop and "ask for directions"—and always be open for feedback.

Some of the most exciting opportunities for deploying AI bring together the complementary talents of people and computers.5 AI-enabled devices are allowing the blind to see, the deaf to hear, and the disabled and elderly to walk, run, and even dance. AI methods are also being developed to augment human cognition. As an example, prototypes have been aimed at predicting what people will forget and helping them to remember and plan. Moving to the realm of scientific discovery, people working together with the Foldit online game8 were able to discover the structure of the virus that causes AIDS in only three weeks, a feat that neither people nor computers working alone could match. Other studies have shown how the massive space of galaxies can be explored hand-in-hand by people and machines, where the tireless AI astronomer understands when it needs to reach out and tap the expertise of human astronomers.7 There are many opportunities ahead for developing real-time systems that involve a rich interleaving of problem solving by people and machines.

However, building these collaborative systems raises a fourth set of risks stemming from challenges with fluidity of engagement and clarity about states and goals. Creating real-time systems where control needs to shift rapidly between people and AI systems is difficult. For example, airline accidents have been linked to misunderstandings arising when pilots took over from autopilots.a The problem is that unless the human operator has been paying very close attention, he or she will lack a detailed understanding of the current situation and can make poor decisions. Here again, AI methods can help solve these problems by anticipating when human control will be required and providing people with the critical information that they need.

A fifth set of risks concerns the broad influences of increasingly competent automation on socioeconomics and the distribution of wealth.2 Several lines of evidence suggest AI-based automation is at least partially responsible for the growing gap between per capita GDP and median wages. We need to understand the influences of AI on the distribution of jobs and on the economy more broadly. These questions move beyond computer science into the realm of economic policies and programs that might ensure that the benefits of AI-based productivity increases are broadly shared.

Achieving the potential tremendous benefits of AI for people and society will require ongoing and vigilant attention to the near- and longer-term challenges to fielding robust and safe computing systems. Each of the first four challenges listed in this Viewpoint (software quality, cyberattacks, “Sorcerer’s Apprentice,” and shared autonomy) is being addressed by current research, but even greater efforts are needed. We urge our research colleagues and industry and government funding agencies to devote even more attention to software quality, cybersecurity, and human-computer collaboration on tasks as we increasingly rely on AI in safety-critical functions.

At the same time, we believe scholarly work is needed on the longer-term concerns about AI. Working with colleagues in economics, political science, and other disciplines, we must address the potential of automation to disrupt the economic sphere. Deeper study is also needed to understand the potential of superintelligence or other pathways to result in even temporary losses of control of AI systems. If we find there is significant risk, then we must work to develop and adopt safety practices that neutralize or minimize that risk. We should study and address these concerns, and the broader constellation of risks that might come to the fore in the short- and long-term, via focused research, meetings, and special efforts such as the Presidential Panel on Long-Term AI Futuresb organized by the AAAI in 2008–2009 and the One Hundred Year Study on Artificial Intelligence,10,c which is planning a century of ongoing studies about advances in AI and its influences on people and society.

The computer science community must take a leadership role in exploring and addressing concerns about machine intelligence. We must work to ensure that AI systems responsible for high-stakes decisions will behave safely and properly, and we must also examine and respond to concerns about potential transformational influences of AI. Beyond scholarly studies, computer scientists need to maintain an open, two-way channel for communicating with the public about opportunities, concerns, remedies, and realities of AI.

References

1. Bostrom, N. Superintelligence: Paths, Dangers, Strategies. Oxford University Press, 2014.

2. Brynjolfsson, E. and McAfee, A. The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. W.W. Norton & Company, New York, 2014.

3. Chen, F. and Rosu, G. Toward monitoring-oriented programming: A paradigm combining specification and implementation. Electr. Notes Theor. Comput. Sci. 89, 2 (2003), 108–127.

4. Good, I.J. Speculations concerning the first ultraintelligent machine. In Advances in Computers, Vol. 6. F.L. Alt and M. Rubinoff, Eds., Academic Press, 1965, 31–88.

5. Horvitz, E. Principles of mixed-initiative user interfaces. In Proceedings of CHI ’99, ACM SIGCHI Conference on Human Factors in Computing Systems (Pittsburgh, PA, May 1999); http://bit.ly/1DN039y.

6. Huang, J. et al. ROSRV: Runtime verification for robots. Runtime Verification, (2014), 247–254.

7. Kamar, E., Hacker, S., and Horvitz, E. Combining human and machine intelligence in large-scale crowdsourcing. AAMAS 2012 (Valencia, Spain, June 2012); http://bit.ly/1h6gfbU.

8. Khatib, F. et al. Crystal structure of a monomeric retroviral protease solved by protein folding game players. Nature Structural and Molecular Biology 18 (2011), 1175–1177.

9. Shahaf, D. and Horvitz, E. Generalized task markets for human and machine computation. AAAI 2010, (Atlanta, GA, July 2010), 986–993; http://bit.ly/1gDIuho.

10. You, J. A 100-year study of artificial intelligence? Science (Jan. 9, 2015); http://bit.ly/1w664U5.

Authors

Thomas G. Dietterich ([email protected]) is a Distinguished Professor in the School of Electrical Engineering and Computer Science at Oregon State University in Corvallis, OR, and president of the Association for the Advancement of Artificial Intelligence (AAAI).

Eric J. Horvitz ([email protected]) is Distinguished Scientist and Director of the Microsoft Research lab in Redmond, Washington. He is the former president of AAAI and continues to serve on AAAI’s Strategic Planning Board and Committee on Ethics in AI.

Footnotes

a. See http://en.wikipedia.org/wiki/China_Airlines_Flight_006.

b. See http://www.aaai.org/Organization/presidential-panel.php.

c. See https://ai100.stanford.edu.

Posted in AI

EMC’s David Goulden on the Reasons for not Breaking up with VMware (Video)

http://player.theplatform.com/p/PhfuRC/vNP4WUiQeJFa/embed/select/FGx9a5if6xCz?autoPlay=true&t=259

David Goulden, EMC: "Our corporate clients are looking for fewer, more strategic technology partners, not more small vendors."

re/code: Goulden is the CEO of EMC’s information infrastructure business unit, the biggest portion of the federation that includes its main business of selling equipment used to store information in corporate data centers, and which accounted for $18 billion in revenue last year. Previously, he was EMC’s COO and is considered a possible successor to current EMC CEO Joe Tucci, who has been working without a contract since February and is expected to retire by the end of the year.

Does he want to be CEO? “The short answer is that the timing and the selection of a CEO, that’s up to the board of directors,” Goulden said.

“As it relates to me,” he added, “I love my job, it’s a great job, the best one I’ve actually had and I want to help the federation in any way I can to make sure it stays in the winner’s column. Beyond that you’ll have to get back to the board on how they’ll manage the process.”

Posted in Misc

The Future of Online Shopping (Infographic)

[Infographic: The Future of Online Shopping]

Source: The Chat Shop Blog

Posted in Digitization

Data Science and Measuring Happiness

Is it possible to measure happiness? Can we compare countries on the basis of a universal yardstick for collective happiness similar to Gross National Product (GNP), the accepted measure for a country's material well-being? How could data science contribute to practical applications of happiness analysis?

The Kingdom of Bhutan, a 770,000-strong nation landlocked between China and India at the eastern end of the Himalayas, has been developing what it calls Gross National Happiness (GNH) and applying it to government policies for more than 40 years. Influenced by Buddhism, happiness as measured by GNH is different from the way it is perceived in the West in two ways. It is “multidimensional – not measured only by subjective well-being,” according to the Short Guide to GNH. Furthermore, it is “not focused narrowly on happiness that begins and ends with oneself and is concerned for and with oneself.”

A new data science workshop, to take place November 6th to 11th in Bhutan, will discuss and debate questions related to the measurement of happiness with experts in data science, Buddhist leadership and Gross National Happiness. The Data Happy conference and workshop will involve a highly participatory process, collaboratively exploring ways by which individuals can contribute to the measurement of Gross National Happiness on a daily basis.

Troy Sadkowsky, the lead organizer of the event and the founder of Data Scientists Pty Ltd and creator of DataScientists.Net, says: “Developing a more holistic and accurate measure of wealth that includes more than just financial aspects would benefit us all. This could be a powerful tool for monitoring a healthy global growth in sustainable prosperity. Data Science is purpose-built for exploring this new terrain.”

The multiple dimensions of happiness and its collective or community orientation are reflected in the 9 domains that comprise the GNH index: psychological wellbeing, time use, community vitality, cultural diversity, ecological resilience, living standards, health, education, and good governance. These in turn comprise 33 clustered indicators, each composed of several variables, for a total of 124 variables.

Every four years, the government of Bhutan administers a survey based on the GNH index, in which respondents rate themselves on each of the variables. The 2010 survey found that 10.4% of the population was ‘unhappy’ (defined as achieving sufficiency in 50% or less of the weighted indicators), 48.7% were ‘narrowly happy,’ 32.6% were ‘extensively happy,’ and 8.3% were ‘deeply happy’ (showing sufficiency in 77% or more of the weighted indicators).
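The survey’s classification rule can be sketched in a few lines of code. The article gives only the 50% (“unhappy”) and 77% (“deeply happy”) cutoffs; the 66% boundary between “narrowly” and “extensively” happy, and the equal weighting of indicators, are illustrative assumptions.

```python
# Sketch of the GNH-style happiness classification described above.
# The 66% boundary and equal indicator weights are assumptions for
# illustration; only the 50% and 77% cutoffs come from the article.

def sufficiency_score(indicator_flags, weights=None):
    """Weighted share of indicators in which a respondent is 'sufficient'."""
    if weights is None:
        weights = [1.0] * len(indicator_flags)
    achieved = sum(w for flag, w in zip(indicator_flags, weights) if flag)
    return achieved / sum(weights)

def classify(score):
    """Map a sufficiency score (0-1) to the survey's four categories."""
    if score <= 0.50:
        return "unhappy"
    elif score < 0.66:          # assumed boundary
        return "narrowly happy"
    elif score < 0.77:
        return "extensively happy"
    else:
        return "deeply happy"

# A respondent sufficient in 26 of 33 equally weighted indicators (~79%)
flags = [True] * 26 + [False] * 7
print(classify(sufficiency_score(flags)))  # deeply happy
```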

At the Data Happy conference, participants will discuss a proposed system for going beyond the periodic paper-based survey to an online process that will run continuously and be integrated into other services in Bhutan. Says Sadkowsky: “We will be looking to introduce as much new technology as feasible to help increase accessibility and usability. A major goal is to convert attitudes around the GNH measurement tools from something that people feel they have to do, like mandatory census surveys, to something that they want to do.”

Sadkowsky hopes that participants in the Data Happy conference will contribute to designing a tool for measuring Gross Individual Happiness that can be integrated into the daily lives of the people of Bhutan. Find out more about the program and registration here.

Originally published on Forbes.com

Posted in Data Science | Tagged | Leave a comment

The Internet of Things (IoT) Comes to the NFL

nfl_tech_infographic-100612792-large.idge

CIO.com:

…each player will be equipped with a set of RFID sensors about the size of a quarter embedded in his shoulder pads, each emitting unique radio frequencies. Gillette Stadium (and every other stadium used by the NFL) has been equipped with 20 receivers to pick up those radio frequencies and pinpoint every player’s field position, speed, distance traveled and acceleration in real time.

By using two sensors for each player — one embedded in the left shoulder pad and one on the right — the system will also be able to identify the facing of each player.

The NFL plans to use the data generated to power the NFL 2015 app for Xbox One and Windows 10, enabling features like “Next Gen Replay,” which lets fans call up stats for each player tied to highlight clips posted on the app. But that’s just the beginning. The data will be fed to broadcasters, leveraged for in-stadium displays and provided to coaching staff and players.
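The facing computation described above can be sketched simply: with one sensor per shoulder pad, the direction a player faces is perpendicular to the left-shoulder-to-right-shoulder vector. This is an illustrative reconstruction, not the NFL system’s actual algorithm; coordinates are assumed to be 2-D field positions produced by the receivers’ triangulation.

```python
import math

# Illustrative sketch (not the NFL's actual algorithm): with one sensor
# in each shoulder pad, a player's facing is perpendicular to the
# left-shoulder -> right-shoulder vector.

def facing(left, right):
    """Unit vector the player is facing, from two shoulder positions."""
    dx, dy = right[0] - left[0], right[1] - left[1]
    length = math.hypot(dx, dy)
    # Rotate the shoulder vector 90 degrees counter-clockwise, so a
    # player whose left shoulder is on the left faces "forward".
    return (-dy / length, dx / length)

def heading_degrees(left, right):
    """Facing expressed as a compass-style angle in degrees."""
    fx, fy = facing(left, right)
    return math.degrees(math.atan2(fy, fx))

# Left shoulder at (10, 5), right at (12, 5): facing straight up the field
print(heading_degrees((10, 5), (12, 5)))  # 90.0
```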

Posted in Internet of Things | Tagged | Leave a comment

The global UAV drones market to grow 32% annually to reach $5.59 billion by 2020

The $599.99 IRIS+ from 3D Robotics, “a robot that will automatically fly itself where you tell it to go, while keeping a camera dead steady”

MarketsAndMarkets:

Before 2014, the use of UAV drones for commercial applications was highly regulated in countries such as the United States. Since 2014, the Federal Aviation Administration (FAA) has relaxed its rules for commercial drone use and has begun granting companies exemptions under Section 333, with some restrictions, to operate drones commercially. As of September 2, 2015, the FAA had granted 1,439 such exemptions. The global UAV drones market is expected to reach $5.59 billion by 2020, growing at a CAGR of 32.22% between 2015 and 2020. Among commercial applications, precision agriculture is expected to grow at the highest CAGR, 42.25%, during the forecast period.

The rotary-blade drones segment held the largest market share in 2014 and is expected to reach $4.93 billion by 2020, growing at a CAGR of 31.42% between 2015 and 2020. Rotary-blade drones are widely preferred by professionals in the media and entertainment industry for video shooting and photography.
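The projections can be sanity-checked with the standard compound-growth formula. The implied 2015 base values below are back-calculations from the stated 2020 figures and CAGRs, not numbers quoted in the report.

```python
# Back-of-the-envelope check of the compound-growth figures above.
# The implied 2015 base values are derived, not quoted from the report.

def implied_base(future_value, cagr, years):
    """Starting value implied by a future value growing at a given CAGR."""
    return future_value / (1 + cagr) ** years

# Overall market: $5.59B in 2020 at a 32.22% CAGR over 2015-2020
overall_2015 = implied_base(5.59, 0.3222, 5)
print(round(overall_2015, 2))  # 1.38 (billion USD, implied)

# Rotary-blade segment: $4.93B in 2020 at a 31.42% CAGR
rotary_2015 = implied_base(4.93, 0.3142, 5)
print(round(rotary_2015, 2))  # 1.26 (billion USD, implied)
```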

The UAV drones market is dominated by key players such as DJI (China), Parrot S.A. (France), 3D Robotics Inc. (U.S.), and PrecisionHawk (U.S.), among others.

Posted in Robotics | Tagged | Leave a comment