In an age where digital technologies are increasingly interwoven into our daily interactions, businesses are capitalizing on the efficiency and scalability that artificial intelligence (AI) offers, particularly in customer service. While the shift from human to AI-powered assistance could eventually make customer experiences faster and more pleasant, significant obstacles remain. A case in point is Air Canada customer Jake Moffatt, who received incorrect fare-policy information while interacting exclusively with the company’s chatbot.

Air Canada’s Erroneous Chatbot
When Jake Moffatt needed to fly from Vancouver to Toronto for his grandmother’s funeral, he sought a bereavement fare from Air Canada—a special discounted rate intended for travelers in emergency family situations. Opting to interact with Air Canada’s chatbot rather than a human representative for assistance, Jake was mistakenly advised to purchase his ticket first and request the discount afterward. Trusting this guidance, he bought his ticket for nearly $600, only to learn later that Air Canada’s policy required these discounts to be arranged before the flight, not post-purchase.
After receiving this misadvice, Jake took his case to a Canadian tribunal, where Air Canada argued that the chatbot was a separate entity responsible for its own statements. The tribunal rejected that argument and ruled in Jake’s favor, holding that Air Canada was ultimately accountable for the accuracy of the information it provides, regardless of whether the source is human or AI. As a result, Air Canada was ordered to pay Jake more than $600, compensating him for the ticket and his tribunal expenses.
Legal and Ethical Considerations of AI in Customer Service
AI chatbots, powered by machine learning algorithms and natural language processing, can significantly enhance efficiency and customer satisfaction. They can handle a wide array of inquiries simultaneously, providing quick responses around the clock. However, these technologies are not perfect, and they carry inherent legal and business risks. As demonstrated by the Air Canada case, chatbots can expose companies to potential liability for misinformation. Moreover, deploying these technologies raises additional legal concerns, including issues related to customer privacy and data security.
As WilmerHale senior associate Ali Jessani highlights in his article “Chatbots, AI, and the Future of Privacy,” the proliferation of chatbots and related AI technologies poses real risks to consumer privacy. Data harvesting is central to how these tools improve: chatbots in particular scrape billions of data points from across the internet to train and update their predictive language models. And while many chatbot providers claim not to retain personal user information, the reality is more complicated: a user can inadvertently reveal sensitive details such as age, gender, profession, and interests over the course of a conversation. Under laws like the California Privacy Rights Act, inferences drawn from such data about a user’s preferences or behavior are themselves treated as personal information; even so, these insights could be shared with advertisers to build targeted campaigns tailored to individual users.

Although existing privacy regulations provide certain protections, the distinct characteristics of AI introduce new challenges that may call for more advanced regulatory approaches. Current legal structures concentrate on standard data privacy issues, such as defining what constitutes personal information, establishing protocols for notification and consent, and setting guidelines for data security. Yet the evolving and predictive nature of AI technologies suggests the need for more specialized measures to address the unique threats they present, especially around profiling and automated decision-making.
Furthermore, like any other software system, chatbots can contain vulnerabilities that malicious actors may exploit. These weaknesses can stem from several sources, including insufficient security protocols, subpar coding practices, and user error. The complexity of connecting chatbots to other systems and databases can introduce additional attack surfaces. While no system can be made completely secure against cyber threats, identifying and addressing these weaknesses is vital to the overall security of the system.
Specific threats that chatbots face include the possible dissemination of malware and ransomware. If these malicious programs manage to breach a chatbot, they can extend their reach to larger company networks or lock and hold data for ransom. Cybercriminals could exploit weaknesses in chatbots to send this malicious software straight to users’ devices.
Additionally, if chatbots lack robust data protection, such as strong encryption, there’s a risk that data could be stolen or tampered with, endangering the confidentiality, integrity, and availability of essential information. Another critical concern is the possibility of attackers posing as legitimate chatbots, tricking users into handing over sensitive data under the false belief they are communicating with a secure, verified source.
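The tampering risk described above can be reduced by authenticating each message the chatbot sends. As a minimal illustrative sketch (not Air Canada’s implementation — the key handling and message text here are hypothetical), a chatbot backend could attach an HMAC tag to each reply so the client can detect any modification in transit:

```python
import hashlib
import hmac
import secrets

# Hypothetical shared secret between the chatbot backend and the client SDK.
SECRET_KEY = secrets.token_bytes(32)

def sign_message(message: str, key: bytes) -> str:
    """Attach an HMAC-SHA256 tag so tampering in transit is detectable."""
    return hmac.new(key, message.encode(), hashlib.sha256).hexdigest()

def verify_message(message: str, tag: str, key: bytes) -> bool:
    """Use a constant-time comparison to avoid timing attacks on the tag."""
    expected = sign_message(message, key)
    return hmac.compare_digest(expected, tag)

reply = "Bereavement fares must be requested before travel."
tag = sign_message(reply, SECRET_KEY)

assert verify_message(reply, tag, SECRET_KEY)            # authentic message accepted
assert not verify_message(reply + "!", tag, SECRET_KEY)  # altered message rejected
```

Message authentication addresses integrity only; protecting confidentiality and preventing impersonation would additionally require encryption in transit (e.g., TLS with certificate validation).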

Conclusion
Jake Moffatt’s experience with Air Canada’s chatbot serves as a cautionary tale for businesses embarking on the AI journey in customer service. As companies become more dependent on AI for consumer interactions, it’s imperative they emphasize clarity, precision, and adherence to legal standards to build trust and ensure customer contentment. The shifting legal framework around AI underscores the importance of maintaining high ethical and regulatory standards, ensuring AI applications not only improve service but also respect user rights and maintain corporate responsibility.
At this crossroads of technology and regulation, businesses must remain alert and flexible, prepared to refine their AI practices in light of new legal and ethical guidelines. Embracing thorough AI governance allows companies to leverage AI’s advantages while addressing potential pitfalls, setting the stage for a future where AI and human efforts converge to provide superior customer experiences.





