Today, the Consumer Financial Protection Bureau (CFPB) issued a report analyzing the use of chatbots in consumer finance and their impact on customer service. The report notes that financial institutions are increasingly using chatbots to reduce the costs of human customer service agents, and are moving away from simple, rules-based chatbots toward more sophisticated technologies, such as large language models, generative chatbots, and other tools marketed as artificial intelligence (AI).

The report found that while chatbots may be useful for answering basic questions, their effectiveness lessens as the questions become more complex. According to the CFPB, “[r]eview of consumer complaints and of the current market show that some people experience significant negative outcomes due to the technical limitations of chatbot functionality.” Additionally, the CFPB warns that financial institutions may risk violating federal consumer protection law when deploying chatbot technology. In addition to potential privacy and security risks, the CFPB states that “[w]hen chatbots are poorly designed, or when customers are unable to get support, there can be widespread harm and customer trust can be significantly undermined.”

Chatbots are computer programs that mimic human interaction by processing a user’s input to produce an appropriate output. “Rule-based chatbots use either decision tree logic or a database of keywords to trigger preset, limited responses. These chatbots may present the user with a set menu of options to select from or navigate the user between options based on a set of keywords and generate replies using predetermined rules … More complex chatbots use additional technologies to generate responses. Specifically, these chatbots may be designed to use machine learning or technology often marketed as ‘artificial intelligence’ to simulate natural dialogue.” Chatbots have been widely adopted by banks, mortgage servicers, and debt collectors. In 2022, over 98 million consumers interacted with a bank’s chatbot, and that number is projected to grow to 110.9 million users by 2026.
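The rule-based design the CFPB describes — a keyword table that triggers preset responses — can be illustrated with a minimal sketch. The keywords, responses, and function names below are hypothetical, not drawn from any institution’s actual system:

```python
# Minimal sketch of a rule-based chatbot (hypothetical rules for illustration):
# a keyword table maps user input to preset responses, with a generic
# fallback when no rule matches.
RULES = {
    "balance": "Your current balance is available in the Accounts tab.",
    "dispute": "To dispute a transaction, reply with the transaction date.",
    "hours": "Customer service is available 9am-5pm, Monday-Friday.",
}

FALLBACK = "Sorry, I didn't understand. Type 'agent' to reach a representative."

def reply(user_input: str) -> str:
    """Return the first preset response whose keyword appears in the input."""
    text = user_input.lower()
    for keyword, response in RULES.items():
        if keyword in text:
            return response
    return FALLBACK
```

Even this toy example shows the limitation the report flags: any phrasing outside the preset keyword list (for example, “this charge is wrong”) falls through to the fallback, which is how customers end up in the repetitive loops described below.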

According to the CFPB, while the use of chatbots has increased, so have the consumer complaints, including complaints concerning:

  • Difficulties in recognizing and obtaining dispute resolution.
    • Because chatbots process only specific words or syntax to recognize a dispute and begin the dispute resolution process, their ability to discern disputes may be limited.
    • Even when chatbots do recognize a dispute, their ability to reach a resolution can be limited. “In some cases, customers are disputing transactions or information that is incorrect. Chatbots that are limited to simply regurgitating the same system information that the customer is attempting to dispute back to them are insufficient.”
    • Importantly for fair lending concerns, a chatbot’s limited syntax may be problematic for consumers with limited English proficiency.
  • Difficulties obtaining accurate or sufficient information.
    • Studies have shown that chatbots sometimes generate inaccurate information that goes undetected by consumers. Specifically, researchers have found that chatbots are ill-suited for tasks that require logic, specialized knowledge, or current data.
    • According to the CFPB, “[w]hen a chatbot is backed by unreliable technology, inaccurate data, or is little more than a gateway into the company’s public policies or FAQs, customers may be left without recourse. Providing reliable and accurate responses to people with regard to their financial lives is a critical function for financial institutions.”
  • Difficulties obtaining meaningful customer service.
    • A chatbot’s scripted responses may fail to answer a customer’s questions and instead lead to “doom loops” or “continuous loops of repetitive, unhelpful jargon or legalese without an offramp to a human customer service representative.”
  • Difficulties obtaining intervention from human customer service representatives.
    • Consumers complain about the lack of access to, or unreasonably long wait times for, human customer service representatives. The limitations of chatbots and the lack of access to a human representative may not be apparent to customers when they initially sign up with a financial institution.
    • Moreover, chatbots may be less likely to waive fees or negotiate prices.
  • Difficulties keeping personal information safe.
    • Fake chatbots can be used by fraudsters to conduct phishing attacks.
    • Because customers must authenticate themselves as the owner of a specific account by providing personally identifiable information, chat logs create another avenue for privacy attacks.

The report concludes by warning financial institutions that the difficulties listed above can not only lead to dissatisfied customers, but also to violations of federal consumer protection laws. “Deficient chatbots that prevent access to live, human support can lead to law violations, diminished service, and other harms. The shift away from relationship banking and toward algorithmic banking will have a number of long-term implications that the CFPB will continue to monitor closely.”

Our Take:

We believe this is the first regulatory statement about the use of chatbots by financial services companies, and it follows the now-familiar theme of hostility to AI and algorithms, underscoring that relationship banking is at risk of being replaced by algorithmic banking. Although the statement certainly appears aimed at prompting financial institutions to pay close attention to the problems the CFPB asserts, it remains to be seen whether the CFPB will truly elevate what seem to be customer service issues into alleged violations of law. We’ll be watching this issue closely.