Ask people on the street how much AI affects their lives, and most would probably answer that it doesn’t affect them right now. Some might say that it’s pure science fiction. Others might say that it may affect our future but isn’t used in our world today. Some might correctly identify a few ways it’s used in modern technology, such as voice-powered personal assistants like Siri, Alexa and Cortana. But most would be surprised to find out how widely it is already woven into the fabric of daily life.

Personal assistants and other humanlike bots

Voice-powered personal assistants show how AI can expand human capabilities. Their ability to understand human speech and almost instantaneously interact with widespread data sources or cyber-physical systems to accomplish users’ goals gives people something akin to their own genie in a magic lamp.

Nor does the list of such personal assistants end with those well-known examples. Special-purpose applications such as Lucy[1] for marketers fill niches in more and more industries. Lucy uses IBM Watson to gather and analyze marketing data, communicating with marketers through the same kind of natural language interface that the popular home personal assistants use.

Other AI tools provide intelligent interfaces between stores and customers. Global office supplies retailer Staples turned its old “Easy Button”[2] advertising campaign into an AI personal assistant: customers tell the in-office Easy Button what they need, and it instantly places an order for speedy delivery.

The North Face uses an AI assistant[3] to help customers determine the right outerwear for their needs. It asks them natural language questions to understand what activities make up their active lifestyles and matches those needs to the store’s inventory. Healthcare organizations are developing AI systems that provide personalized answers to patient questions or that interface with doctors to bring them the latest data on clinical trials pertinent to their patients’ needs.

Companies like Cogito are working with bots in the customer service industry to expand the boundaries of bots’ emotional intelligence.[4] These bots can recognize cues in people’s facial expressions or tone of voice and identify the emotional state of the person they are interacting with.

In most cases, these bots will escalate a matter to a human when they detect emotional impediments to an interaction, but the more advanced bots are becoming able to respond appropriately to a growing variety of the human emotions they encounter. Many of the customer service interactions you currently have – and, unfortunately, the telemarketer calls[5] you receive – may find you interacting not with a human but with an AI-enabled bot, without you suspecting it in the least.

Such, too, may be the case with many simple news stories[6] that you read. Articles that are primarily data-driven, such as financial summaries and sports recaps, are increasingly written by AI.

Or consider for a moment the voice-to-text features on your smartphone. They, too, are increasingly taken for granted, but until the introduction of artificial neural networks, such transcription abilities were out of reach for even the most advanced computers. With neural networks, machine transcription has become even more accurate than human transcription.[7]

And if you’ve ever interacted through email with someone named Amy Ingram or Andrew Ingram to set up a meeting with a busy executive, you’ve been corresponding with a bot.[8] This bot uses machine learning (ML), a core component of AI, to learn the executive’s schedule and meeting preferences. Once the bot is trained, the executive can CC it on meeting-related emails, and it will communicate with the other recipients in humanlike language to arrange meetings that fit the executive’s preferences.
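To make the idea concrete, here is a minimal sketch of the kind of preference-based slot scoring such a scheduling bot might perform once its model is trained. Everything here – the preference fields, weights and penalty values – is hypothetical and hand-coded for illustration; a real assistant learns these preferences from the executive’s email and calendar rather than having them written out.

```python
from datetime import datetime, timedelta

# Hypothetical learned preferences for one executive (illustrative only).
PREFERENCES = {
    "preferred_hours": range(10, 16),   # favors meetings between 10:00 and 16:00
    "preferred_days": {0, 1, 2, 3},     # Monday through Thursday
    "buffer_minutes": 30,               # wants 30 minutes free around meetings
}

def score_slot(slot_start, existing_meetings, prefs=PREFERENCES):
    """Return a rough desirability score for a candidate meeting slot."""
    score = 0.0
    if slot_start.hour in prefs["preferred_hours"]:
        score += 1.0
    if slot_start.weekday() in prefs["preferred_days"]:
        score += 1.0
    # Penalize slots that crowd existing meetings.
    buffer = timedelta(minutes=prefs["buffer_minutes"])
    if any(abs(slot_start - m) < buffer for m in existing_meetings):
        score -= 2.0
    return score

calendar = [datetime(2018, 5, 7, 9, 0), datetime(2018, 5, 7, 14, 0)]
candidates = [datetime(2018, 5, 7, 11, 0), datetime(2018, 5, 7, 14, 15)]
best = max(candidates, key=lambda s: score_slot(s, calendar))
print(best)  # the candidate slot that best fits the learned preferences
```

A production bot also has to parse the incoming emails themselves, but the core loop – score candidate slots against learned preferences, pick the best, reply in natural language – is the same.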

Analyzing and structuring raw data

ML is at the heart of AI’s ability to identify user preferences and anticipate user needs. It analyzes large quantities of data and structures them into a usable form. As ever greater volumes of data become available, AI’s ability to analyze everything from manufacturing processes to customer behaviors to market trends keeps that growing body of data usable. Less constrained than humans in how much data it can analyze at once, AI can take more disparate types of data into consideration and produce far deeper and more comprehensive analyses.

Not only can AI analyze more data, it can do so without being distracted as humans so often are. It can monitor information at a level of detail, and over lengths of time, that humans would find mind-numbingly mundane.

The old image of a manufacturing plant technician watching a console covered with gauges, or a guard scanning a wall of security screens, is thus becoming obsolete. AI-driven systems can monitor more gauges, more closely, than a human technician could. And AI-enabled security systems are being trained to “see” what is happening on closed-circuit TV feeds and flag any anomalies that require human intervention. Such ML also drives a wide variety of applications that are ubiquitous in everyday life.

Google Maps uses anonymized data from smartphones and information from crowdsourced apps to analyze traffic conditions and suggest the fastest routes to commuters’ destinations. Ride-sharing apps like Uber and Lyft use some of the same techniques to enhance their predictive abilities, becoming increasingly precise at predicting arrival times, travel times and pickup locations, and even at detecting fraud.
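Under the hood, route suggestion reduces to a shortest-path search over a road graph whose edge weights are current travel-time estimates. The sketch below uses a toy, hand-built graph and Dijkstra’s algorithm; the place names and travel times are invented for illustration, and real routing engines work at vastly larger scale with continuously updated weights.

```python
import heapq

# Toy road graph: edge weights are current travel times in minutes,
# which a real system would estimate from anonymized phone/GPS traces.
road_graph = {
    "home":        {"main_st": 4, "highway_on": 2},
    "main_st":     {"office": 12},
    "highway_on":  {"highway_off": 9},   # congestion would raise this value
    "highway_off": {"office": 3},
    "office":      {},
}

def fastest_route(graph, start, goal):
    """Dijkstra's shortest-path search over current travel-time estimates."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        minutes, node, path = heapq.heappop(queue)
        if node == goal:
            return minutes, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, cost in graph[node].items():
            if nxt not in seen:
                heapq.heappush(queue, (minutes + cost, nxt, path + [nxt]))
    return float("inf"), []

print(fastest_route(road_graph, "home", "office"))
# -> (14, ['home', 'highway_on', 'highway_off', 'office'])
```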

Gmail uses ML to learn what you perceive as spam. Rather than relying only on specific keywords to reroute incoming mail to your spam folder, it analyzes how you treat your incoming email to predict what you will want to see and what you will immediately discard. This also applies to Gmail’s sorting of email into Primary, Social and Promotions inboxes. The more you confirm its analysis of a type of email, the more it will follow the same pattern. The more you correct its decisions, the more it will revise how it assesses the indicators in your emails.
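A stripped-down illustration of that feedback loop, assuming scikit-learn is available: a naive Bayes text classifier is updated incrementally each time the user marks messages as spam or not. The messages and labels below are invented, and Gmail’s actual filtering is far more sophisticated, but the pattern of learning from every correction is the same.

```python
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.naive_bayes import MultinomialNB

# Illustrative only: a tiny classifier that adapts as the user marks mail.
vectorizer = HashingVectorizer(n_features=2**16, alternate_sign=False)
classifier = MultinomialNB()

def learn(messages, labels):
    """Incrementally update the model; labels are 'spam' or 'keep'."""
    X = vectorizer.transform(messages)
    classifier.partial_fit(X, labels, classes=["spam", "keep"])

def predict(message):
    return classifier.predict(vectorizer.transform([message]))[0]

# The user marks a few messages; the model keeps adjusting to that feedback.
learn(["cheap watches buy now", "limited offer click here"], ["spam", "spam"])
learn(["meeting notes attached", "your invoice for March"], ["keep", "keep"])
print(predict("exclusive offer just for you"))   # likely 'spam'
```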

One of the industries into which AI has penetrated most deeply is finance. Checks can be scanned by smartphones and read with the help of ML and AI rather than being physically delivered to a bank. AI powers fraud detection systems, analyzing the vast number of daily transactions – and the vast number of variables that may combine to suggest a fraudulent one – to flag those that show suspicious signs.
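Flagging suspicious transactions is often framed as anomaly detection: learn what normal activity looks like, then score new transactions by how far they deviate. Here is a minimal sketch using scikit-learn’s IsolationForest on made-up features; the feature choices and thresholds are illustrative assumptions, not any bank’s actual model.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Illustrative features only: [amount, hour_of_day, distance_from_home_km].
rng = np.random.default_rng(0)
normal_txns = np.column_stack([
    rng.normal(60, 20, 500),    # everyday purchase amounts
    rng.normal(14, 3, 500),     # mostly daytime
    rng.normal(5, 3, 500),      # close to home
])

# Fit on historical, presumed-normal transactions.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal_txns)

suspicious = np.array([[4800, 3, 900]])   # large amount, 3 a.m., far from home
print(model.predict(suspicious))          # [-1] means flagged as an outlier
```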

Financial institutions increasingly use AI in making credit decisions, too. MIT researchers[9] found that loan defaults could be decreased by 25% through AI-enabled tools. A wide variety of companies also use such predictive abilities of AI to improve customer experience and engage customers more deeply.

Predictive engines

Personalized searches and recommendations on shopping sites have become so commonplace that most users don’t realize how AI drives them. Users simply take those features for granted. When you pick up your cell phone, it provides you with news headlines and information about your friends’ social lives based on its analysis of what has drawn your attention in the past.

Brick-and-mortar stores increasingly provide customers with coupons customized to their past purchases, thanks to the predictive powers of ML applied to customer loyalty-card data. Fashion ecommerce site Lyst uses ML and metadata tags to identify what the clothing in different images looks like and to match images that fit users’ tastes to their search text.

ML is becoming increasingly adept at powering such predictive features, and it does so extremely effectively. One study[10] claimed that recommender features can increase sales by as much as 30%.
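One common way to build such a recommender is item-based collaborative filtering: compute how similar items are based on who rated them, then score a user’s unrated items by their similarity to the items that user already liked. The toy ratings matrix below is invented purely to show the mechanics; production systems add many more signals.

```python
import numpy as np

# Toy user-item ratings matrix (rows are users, columns are products; 0 = unrated).
ratings = np.array([
    [5, 4, 0, 0],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

def item_similarity(matrix):
    """Cosine similarity between item (column) vectors."""
    norms = np.linalg.norm(matrix, axis=0, keepdims=True)
    return (matrix.T @ matrix) / (norms.T @ norms)

sim = item_similarity(ratings)

def recommend(user_ratings, similarity, top_n=1):
    """Score unrated items by similarity-weighted ratings of items the user rated."""
    scores = similarity @ user_ratings
    scores[user_ratings > 0] = -np.inf   # never re-recommend items already rated
    return np.argsort(scores)[::-1][:top_n]

print(recommend(ratings[0], sim))        # index of the top recommendation for user 0
```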

Amazon’s ML enables it to predict user needs with an almost scary degree of accuracy. It is even working on a system that would identify and deliver what users need before the users themselves realize they need it.

Social media sites use AI to analyze the content that users create or consume so that the site can serve them content and ads that fit their needs and interests. They also use surrounding context to more clearly distinguish users’ intent in what they write. One of the most advanced uses of AI on social media is the capability to “see” uploaded images and suggest related ones. Facebook, for example, uses facial recognition to identify the people in uploaded images and suggest their names as tags.

The behavioral algorithms in Nest’s home environmental control systems learn users’ heating and cooling preferences. The more data the systems obtain, the better they can anticipate those preferences, relieving users of the need to make manual adjustments.
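Conceptually, this is a small supervised-learning problem: past manual adjustments become training examples, and the model predicts the setpoint the user would have chosen. The sketch below uses a decision tree on invented data; Nest’s actual algorithms are proprietary and considerably more involved.

```python
from sklearn.tree import DecisionTreeRegressor

# Each past manual adjustment: [hour_of_day, is_weekend] -> chosen setpoint (deg C).
history_X = [[7, 0], [9, 0], [18, 0], [22, 0], [9, 1], [23, 1]]
history_y = [21.0, 19.5, 21.5, 18.0, 22.0, 18.5]

# Learn the pattern behind the user's adjustments.
model = DecisionTreeRegressor(max_depth=3).fit(history_X, history_y)

# Predict the setpoint for 7 a.m. on a weekday, before the user touches the dial.
print(model.predict([[7, 0]])[0])
```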

Netflix’s growing mastery of predictive technology enables it to satisfy customers with recommendations customized to what members have enjoyed in the past. Pandora’s predictive technology goes a step further: its combination of human curation and algorithms ensures that little-known songs and artists don’t get overlooked in favor of heavily marketed ones. In other words, it gets to know users’ musical tastes so well that it can identify music a user will like before the user even knows those artists and songs exist, giving consumers the added delight of discovery.

Autonomous operation

AI’s ability to analyze and predict has also enabled it to carry out complex tasks that people, even today, find hard to imagine a machine doing. Self-driving cars have long been predicted to be just over the horizon. The truth is that the horizon is already at our doorstep.

Tesla has been building cars with hardware designed to support full autonomy since 2016. A wide variety of competitors, ranging from major car companies to major technology companies like Google, are in advanced testing stages. Waymo, Google’s widely touted self-driving car project, has used simulated driving to teach its vehicles the basics the way people learn – by experience – before making on-road testing more widespread.

And although autonomous cars are not yet commonplace, that isn’t the case in all transportation industries. Autonomous operation is already used to a growing degree in airplane autopilot systems. The New York Times[11] reports that a typical commercial flight involves only about seven minutes of human pilot control, mainly during takeoff and landing.

Medical diagnosis

AI is also far more prevalent in the medical field than most people realize. At the more basic end of the complexity scale, AI is at the core of the Human Diagnosis Project (Human Dx),[12] which helps doctors whose patients have limited financial means give those patients more advanced care than they could otherwise afford.

Doctors who serve patients with limited means can submit patient symptoms and questions to this medical crowdsourcing app. Specialists whose services those patients could not otherwise afford respond, and Human Dx’s AI system then analyzes and refines the responses into a relevant consensus of advice for the submitting doctor.

Consolidating specialists’ diagnoses is not the upper limit of what AI can do in medicine, though. AI systems are increasingly used as diagnostic aids for doctors. Able to process and quickly analyze far more data than a human could, AI is proving to be a valuable tool in helping doctors make more effective diagnoses.

That data includes not only patients’ medical records but also anonymized results from similar cases, the latest clinical research and even studies of how treatment outcomes vary with patients’ genetic traits. This can help doctors detect life-threatening medical conditions at earlier stages than they could on their own and deliver more personalized treatments.

The much-acclaimed IBM Watson supercomputer is involved in a growing number of medical use cases.[13] These include genetically sequencing brain tumors, matching cancer patients to the clinical trials that offer the most promising treatments for their cancers and more precisely analyzing patients’ susceptibility to stroke or heart attack, to mention just a few. It has even succeeded in diagnosing some cases that had stumped human physicians,[14] although much more testing is needed before such capabilities see widespread use.

Takeaways

Clearly, AI and ML have already made far greater inroads into our lives than most people realize. They are expanding human capabilities and taking over a growing number of tasks.

Yet in many ways, the use of AI is still in its infancy. There is much more to come. How many of the tasks we do ourselves today will it take over? Which tasks? And how will that affect the workers who currently do them?

Contrary to popular beliefs about AI, it will not impact only blue collar workers. Look back at how AI is used right now and you’ll see that many of the tasks it currently performs involve white collar – or even professional – workers.

Marketers, data analysts, customer service representatives – even doctors – are seeing AI perform tasks they do today. It will not only be low-skilled workers who are affected. In many ways, AI stands to enhance the abilities of both middle-skill and high-skill workers; in other ways, it threatens to replace some of the very workers whose jobs it currently enhances. Before we can properly prepare for the coming AI disruption, we need a clearer idea of what kinds of shifts AI is likely to bring.

In the next three chapters, we will look at each of the three main – and dramatically different – views of the future that AI will bring. Those views, while often extreme, each point out important issues that we need to consider if we are going to move into that future with minimal negative impact on our lives.

[1] Barry Levine, IBM’s Watson now powers Lucy, a cognitive computing system built specifically for marketers, MarTech Today, 2016, Available: https://martechtoday.com/ibms-watson-begets-equals-3s-lucy-supercomputing-system-built-specifically-marketers-180950

[2] Chris Cancialosi, How Staples Is Making Its Easy Button Even Easier With A.I., Forbes, 2016, Available: https://www.forbes.com/sites/chriscancialosi/2016/12/13/how-staples-is-making-its-easy-button-even-easier-with-a-i/#433606c859ef

[3] Sharon Gaudin, The North Face sees A.I. as a perfect fit, ComputerWorld, 2016, Available: https://www.computerworld.com/article/3026449/retail-it/the-north-face-sees-ai-as-a-perfect-fit-video.html

[4] Ashley Minogue, Beyond the Marketing Hype: How Cogito Delivers Real Value Through AI, OpenView, 2017, Available: https://labs.openviewpartners.com/beyond-the-marketing-hype-how-cogito-delivers-real-value-through-ai/#.WqADxudG2Uk

[5] John Egan, What’s the Future of Robots in Telemarketing, DMA Nonprofit Federation, 2017, Available: https://chi.nonprofitfederation.org/blog/whats-future-robots-telemarketing/

[6] Matthew Jenkin, Written out of the story: the robots capable of making the news, The Guardian, 2017, Available: https://www.theguardian.com/small-business-network/2016/jul/22/written-out-of-story-robots-capable-making-the-news

[7] W. Xiong, J. Droppo, X. Huang, F. Seide, M. Seltzer, A. Stolcke, D. Yu, G. Zweig, Achieving Human Parity in Conversational Speech Recognition, Cornell University Library, 2016, revised 2017, Available: https://arxiv.org/abs/1610.05256

[8] Ingrid Lunden, Rise of the bots: X.ai raises $23m more for Amy, a bot that arranges appointments, TechCrunch, 2016, Available: https://techcrunch.com/2016/04/07/rise-of-the-bots-x-ai-raises-23m-more-for-amy-a-bot-that-arranges-appointments/

[9] Andrew Lo, Consumer Credit-Risk Models Via Machine-Learning Algorithms, MIT, 2009, Available: http://bigdata.csail.mit.edu/node/22

[10] Amit Sharma, Third-Party Recommendations System Industry: Current Trends and Future Directions, SSRN, 2013, Available: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2263983

[11] John Markoff, Planes Without Pilots, New York Times, 2015, Available: https://www.nytimes.com/2015/04/07/science/planes-without-pilots.html?_r=0

[12] Jeremy Hsu, Can a Crowdsourced AI Medical Diagnosis App Outperform Your Doctor?, Scientific American, 2017, Available: https://www.scientificamerican.com/article/can-a-crowdsourced-ai-medical-diagnosis-app-outperform-your-doctor/

[13] Jeremy Hsu, ibid.

[14] James Billington, IBM’s Watson cracks medical mystery with life-saving diagnosis for patient who baffled doctors, International Business Times, 2016, Available: http://www.ibtimes.co.uk/ibms-watson-cracks-medical-mystery-life-saving-diagnosis-patient-who-baffled-doctors-1574963


Marin Ivezic is a Partner at a Big 4 firm. He has worked with clients who adopted AI to eliminate thousands of jobs, increasing profits by cutting costs. And he has worked with clients who adopted AI to augment their workforce’s skills and increase profits while creating additional jobs. In both groups, some of the companies flourished, and others failed. These experiences led him to closely study the current debate on AI’s effect on the future of business.


Luka Ivezic is an independent consultant and author exploring the geopolitical and socioeconomic implications of emerging technologies such as 5G, Artificial Intelligence (AI) and the Internet of Things (IoT). To better observe policy discussions and societal attitudes towards the early adoption of emerging technologies, Luka has spent the last five years living between the US, UK, Denmark, Singapore, Japan and Canada. This has given him a unique perspective on how emerging technologies shape different societies, and how different cultures determine technological development.