Cutting Through the Hype: Why Regulators Must Define AI with Precision
Jason Kratovil
Published August 28, 2024
In its continuing effort to stay informed on how technology is changing financial services, the US Treasury Department recently solicited input on how artificial intelligence is used in the financial sector. This is a smart line of inquiry: If you've walked around the exhibitor hall of any financial services-focused conference in the last two years, you'd be convinced that 1) "AI" is the great salvation for every conceivable aspect of the business of banking, and 2) almost every vendor is the undisputed leader in incorporating AI into its version of a better mousetrap.
As a result, the term "artificial intelligence" has in many respects become a marketing buzzword, one that blurs the line between truly advanced generative models that simulate human creativity and more explainable, human-supervised data processing tools. This ambiguity creates risk for policymakers drafting future legislation or regulation, as well as for market participants who may find themselves subject to inappropriate one-size-fits-all rules.
Treasury's effort to craft an accurate definition of "artificial intelligence" for use in financial services is commendable. In our comments, SentiLink focused on this question, stressing the need to define "AI" in a way that appropriately captures the relative risk of a technology, rather than employing a definition that sweeps in any data processing technique that performs analytic computations and happens to be advanced.
Did you know there is already a definition of "artificial intelligence" in federal law? The version cited by Treasury is from a 2021 defense authorization bill. It reads:
[Artificial intelligence is] a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Artificial intelligence systems use machine and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action.
We believe this would be problematic as a generally applicable definition for the financial sector because of its broad scope: it could encompass technologies that are advanced but are not the less-understood "black box" systems that present the most risk. While certain AI models, particularly those used in generative and predictive applications, warrant close scrutiny, not all data processing and analytic tools present the same concerns.
SentiLink's current suite of identity fraud solutions, for example, uses supervised machine learning models such as regression algorithms and ensemble trees to assess fraud risk. These models analyze historical data and generate risk scores that indicate the likelihood of identity fraud. Unlike AI systems that attempt to predict or perceive future outcomes, SentiLink's models provide data points based on observed data at a specific point in time. While our machine learning systems are certainly advanced, they are highly explainable. Our models also do not substitute for human judgment; our outputs are used by our financial institution partners to make further risk assessments and take additional verification steps.
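To make the distinction concrete, here is a minimal sketch of the kind of supervised, explainable scoring described above, built with scikit-learn on entirely synthetic data. The feature names, data, and model choices are illustrative assumptions for this post, not SentiLink's actual models or features.

```python
# Illustrative sketch only: a supervised, explainable fraud-risk scorer.
# All features and data below are synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Hypothetical historical application data: each row is a past application,
# each label marks whether it was later confirmed as identity fraud.
n = 5000
X = np.column_stack([
    rng.integers(0, 2, n),    # ssn_name_mismatch (0/1) -- hypothetical feature
    rng.integers(0, 50, n),   # applications_last_90_days -- hypothetical feature
    rng.uniform(0, 1, n),     # address_velocity_score -- hypothetical feature
])
# Synthetic labels loosely correlated with the features (for the demo only).
logits = 2.0 * X[:, 0] + 0.05 * X[:, 1] + 1.5 * X[:, 2] - 3.0
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A regression-based model: coefficients map directly to each input feature,
# so the score can be explained to an analyst or an examiner.
lr = LogisticRegression().fit(X_train, y_train)
print("logistic regression coefficients:", lr.coef_[0])

# An ensemble-of-trees model: feature importances show what drives the score.
gb = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
print("gradient boosting feature importances:", gb.feature_importances_)

# The output is a risk score over observed data at a point in time; the
# institution, not the model, decides what verification steps follow.
risk_scores = gb.predict_proba(X_test)[:, 1]
print("example risk scores:", np.round(risk_scores[:5], 3))
```

The point of the sketch is that both model families expose what drives their scores, via coefficients or feature importances, which is precisely what separates this class of tooling from the opaque generative systems an AI definition should target.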
This distinction highlights why we encouraged Treasury to consider a more precise definition of AI within financial regulatory frameworks. An overly broad definition would be risk-agnostic, painting disparate technologies with the same broad brush, potentially imposing unnecessary costs, and inhibiting the adoption and effectiveness of tools that benefit consumers and financial institutions. A refined definition should focus on predictive and decision-making technologies that replace human judgment and whose outputs cannot be fully explained. Such clarity would help avoid misclassification and increase the likelihood that when legislation or regulation comes, it is focused on the least-understood technologies that present genuine risks to consumers.
As AI continues to evolve, it is essential for regulators to adopt a more nuanced approach to its definition. Rhetorical overuse combined with an overly broad understanding not only dilutes the meaning of AI but also creates confusion in the market, making it difficult for financial institutions and policymakers alike to discern which solutions warrant the closest examination. By refining the definition to focus on the technologies that pose the greatest risks, Treasury can strike a balance between regulation and innovation, ensuring that the financial sector can continue to benefit from the powerful tools that SentiLink and others provide.