Avoiding BOT Biases In Customer Experience
Submitted by The Taylor Reach Group

August 18, 2017

How pervasive is the impending impact of artificial intelligence (AI) on the customer experience (CX)? Well, to join a discussion on the topic, I’d be willing to traverse a room full of telemarketers pitching the latest timeshare deals in Orlando. And it would be worth it, as there are just so many possibilities to explore when pondering the ultimate takeover of the Contact Center by androids.
 
Robot-led workforce automation is not going away. Research from the McKinsey Global Institute (MGI) found that 45 percent of work activities could be automated using current technologies; 80 percent of that activity is attributable to existing machine-learning capabilities. While some of the most celebrated sci-fi authors and respected tech influencers predict a future struggle with synthetic organisms bent on the destruction of their less intelligent creators, my more immediate concern is an autonomous cab refusing me service because I’ve been segmented as a likely non-tipper. (I see you, Uber.)
 
This type of consumer segmentation is merely high-tech stereotyping in disguise. After all, stereotypes are essentially segmentations of a group that use the observed or perceived behaviors of a few to characterize the whole. AI systems have become supercharged at doing exactly this, at high speed and with as much data as is available. The cultural imperative, therefore, is to prevent biases from being imprinted onto AI systems in the first place, so that their outputs do not harden into the isms of prejudice.
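To see how little data it takes for a segment label to harden into a rule, consider the minimal Python sketch below. The segment names, observations, and threshold are all hypothetical; the point is that once the label exists, the logic never looks at the individual again.

from collections import defaultdict

# Observed interactions: (customer_segment, tipped). A handful of data
# points is all this toy model ever sees.
observations = [
    ("zip_75001", False),
    ("zip_75001", False),
    ("zip_75001", True),
    ("zip_90210", True),
]

# "Learn" a tip rate per segment from whatever little data is available.
tip_history = defaultdict(list)
for segment, tipped in observations:
    tip_history[segment].append(tipped)

def likely_tipper(segment: str) -> bool:
    """Label an entire segment from the behavior of a few members."""
    history = tip_history.get(segment)
    if not history:
        return True  # no data: give the benefit of the doubt
    return sum(history) / len(history) >= 0.5

# Every future rider from zip_75001 now inherits the label,
# regardless of how they personally behave.
print(likely_tipper("zip_75001"))  # False -- service could be denied
print(likely_tipper("zip_90210"))  # True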
 
So, how do we as a society ensure that a disposition toward fair and equal treatment is standard issue in the AI ecosystem? While far greater minds ponder that question from a broader viewpoint, machine learning continues to make its way into every facet of the customer experience. Contact Center companies must have both a sound strategy and the technical infrastructure in place to address this growing concern.
 
To better understand the ethical challenges that Contact Centers face as they begin to implement AI and machine learning, let’s take a look at two pernicious traits that can surface in these systems’ interactions with customers: manipulation and discrimination.
 
Manipulation
In the world of AI, the line between influence and manipulation is clouded with intent. In an early attempt to provide more targeted and personalized experiences for consumers, companies invested in data mining programs that promised a competitive advantage. Their ability to extract insights from big data, however, was only marginally successful; unfortunately, there were not enough hamsters to turn the wheel.
 
Enter artificial intelligence. Developments in this space now allow insights to surface in real time, within a system that is learning and improving as it goes. What companies do with those insights, however, can get ethically sketchy. As an example, let’s say it’s been proven that customers are more likely to upgrade their services during times of great duress or heightened euphoria. And let’s say a company has just installed the latest super bot-CRM with a feature that uses machine learning to track a customer’s emotional state.
 
Would it be ethically wrong to target customers whose emotional states indicate a high probability of a product upgrade? Or what about deliberately creating those emotional states in customers, who can then be pitched the upgrade?
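To make the first scenario concrete, here is a hypothetical sketch of an emotion-gated upsell trigger in Python. The scoring function is a crude stand-in for a machine-learned model, and every name, word list, and threshold is invented for illustration:

from dataclasses import dataclass

@dataclass
class Interaction:
    customer_id: str
    transcript: str

def emotion_score(transcript: str) -> float:
    """Stand-in for a machine-learned model returning 0.0 (calm)
    through 1.0 (highly distressed or euphoric)."""
    charged_words = {"amazing", "furious", "desperate", "thrilled"}
    words = transcript.lower().split()
    return min(1.0, sum(w in charged_words for w in words) / 3)

def should_pitch_upgrade(interaction: Interaction, threshold: float = 0.6) -> bool:
    # The ethically questionable step: gate the pitch on the customer's
    # emotional state rather than on what they actually need.
    return emotion_score(interaction.transcript) >= threshold

call = Interaction("c-102", "I am desperate about this outage and frankly furious")
if should_pitch_upgrade(call):
    print("Route to upsell script")  # fires precisely when the customer is most vulnerable

Nothing in this snippet is exotic, and that is the unsettling part: the gating logic is a one-line threshold sitting on top of a sentiment model.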
 
Both of these examples could easily be considered normal business practice (though the second is admittedly creepier than the first), but they represent the kinds of opportunities that arise when you can analyze thousands of touchpoints across millions of customers in the blink of an eye, and do it even better during the second blink.
 
Discrimination
While manipulation is clouded with intent, the line between judgment and discrimination is even hazier, in part due to pre-conceived biases. The personal data collected by AI systems may help a retailer determine the types of goods and services to offer consumers, but in the wrong hands, the same data could be used to disenfranchise a segment of buyers as well.
 
These “supply or deny” functions are standard fare for algorithms, a precursor to the AI we know today. It’s not uncommon to experience the downstream effects of their faulty judgment while interacting with content produced by entertainment, e-commerce, and social media giants:
- Netflix has recommended movie genres of no interest to viewers while hiding content more aligned with their tastes.
- Amazon has banned shoppers who returned items too frequently.
- Facebook has patented a technology through which your ability to repay a loan is determined by your social network. (You may want to unfriend a few acquaintances immediately.)
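A "supply or deny" rule of the Facebook-patent variety might look something like the hypothetical Python sketch below. The feature names and cutoff are invented, but they show how a proxy such as your social graph can quietly decide the outcome while your own record is never consulted:

def credit_eligible(applicant: dict, cutoff: float = 0.5) -> bool:
    # The average creditworthiness of the applicant's connections stands
    # in for the applicant's own record -- credit by association.
    friends = applicant.get("friend_scores", [])
    network_score = sum(friends) / len(friends) if friends else 0.0
    return network_score >= cutoff

applicant = {
    "name": "A. Shopper",
    "own_repayment_history": 1.0,  # flawless, but never consulted
    "friend_scores": [0.2, 0.4, 0.3],
}
print(credit_eligible(applicant))  # False: denied on the company they keep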
 
It doesn’t take much to see where things are heading. One day your personal data may be dissected and analyzed without ill intent, yet the resulting insights may still carry biases that are inaccurate, or that are missing context stripped away by machine learning and segmentation.
 
My human intuition tells me that Contact Center practices and policies will likely continue to stumble in a bot-assisted future, but that doesn’t mean the conversation should end. There are still lines to be drawn and rules to be put in place. Having these conversations today, openly and transparently, will provide a far better experience for customers and quite possibly benefit our society.
 
Join the conversation and share your views on the ethical challenges associated with implementing AI and machine learning.  

 
