CFPB Spotlights the Use of Chatbots
In June, the CFPB released a new issue spotlight on the use of chatbots by banks and other financial institutions. The report notes that banks have increasingly moved from “simple, rule-based chatbots towards more sophisticated technologies such as large language models (“LLMs”) and those marketed as ‘artificial intelligence.’” While these chatbots are intended to simulate human-like responses, they can end up frustrating consumers’ attempts to obtain answers and assistance with financial products or services.
Some of the CFPB’s listed concerns are:
• Limited ability to solve complex problems, resulting in inadequate levels of customer assistance (for example, difficulty understanding requests, requiring the use of particular phrases to trigger resolution, and difficulty knowing when to connect a consumer with a live agent). The CFPB argues this is particularly concerning in the context of financial services, where consumers’ need for assistance could be “dire and urgent.”
• The potential for inaccurate, unreliable, or insufficient information. Where financial institutions are legally required to provide consumers with accurate information, such lapses may also constitute violations of law.
• Security risks associated with bad actors’ use of impersonation chatbots to conduct phishing attacks at scale, as well as privacy risks in both securing customers’ inputted data and avoiding the illegal collection and use of personal data for chatbot training purposes.
The CFPB notes that it is actively monitoring the market to ensure financial institutions are using chatbots in a manner consistent with their obligations to customers and with applicable law.