In a recent speech, SEC Chair Gary Gensler highlighted the risks associated with generative AI, specifically chatbots, and their impact on financial markets. The increasing adoption of AI technology has raised concerns about institutions relying on a limited subset of information to make crucial decisions. Gensler emphasized the potential dangers of herding behavior and the risks posed by centralized datasets and models, comparing them to past financial crises.
Generative AI and the Risk of Herding Behavior:
Generative AI, with chatbots as its most prominent application, is a transformative technology with the potential to disrupt financial markets. Gensler expressed concern that the enormous demand for data and computing power in the AI sector could leave a handful of dominant tech platforms in control. Such concentration may restrict the variety of AI models available to companies, producing a situation in which many institutions base their decisions on the same, potentially flawed, dataset or model. The SEC Chair drew parallels to the 2008 financial crisis, when banks relied on credit raters' flawed assessments, and to the Twitter-driven run on Silicon Valley Bank.
The interconnected nature of the global financial system may exacerbate these risks. Gensler emphasized that the rise of AI and the dominance of deep-learning models could heighten fragility by promoting herding behavior among market actors who respond to a shared signal from a single base model or data aggregator. The potential consequences of such herding, combined with reliance on flawed information, point to the need for greater regulatory oversight and risk management in the use of generative AI in finance.
AI in the Financial Sector and the Role of Language Models:
The financial sector has long incorporated AI systems into its operations. Insurance companies and creditors use algorithms and natural language processing to analyze financial data when determining loan amounts, while trading firms leverage AI to detect fraud and interpret market signals quickly. However, Gensler singled out large language models (LLMs), including generative AI, for their transformative potential.
While generative AI is not yet widely used in finance, Gensler stressed the need for regulatory structures to address the risks and challenges arising from its adoption. The SEC, recognizing the impact of emerging technology on markets, established FinHub as a resource center in 2018 to field queries related to AI, cryptocurrencies, and fintech. The agency itself harnesses machine learning for market surveillance and policy enforcement. Gensler noted that existing risk management guidelines must be updated to keep pace with rapid advancements in AI, hinting at the need for an industry-wide reassessment of how the technology is used.
The SEC’s Ongoing Efforts in AI Regulation:
SEC Chair Gensler’s concerns regarding AI’s impact on financial markets align with his previous work at MIT, where he co-authored a paper highlighting the limitations of current regulatory structures in addressing AI’s implications in finance. The SEC has maintained a proactive stance on AI regulation, evident through the establishment of FinHub. Furthermore, the agency actively pursues cases against companies involved in emerging technologies, particularly in the cryptocurrency space, to uphold financial laws.
As regulatory frameworks adapt to advancements in AI, technology experts, industry stakeholders, and regulators must collaborate to navigate the challenges of widespread generative AI adoption. The SEC's own use of machine learning for market surveillance underscores the case for thoughtful regulation that ensures transparency, mitigates risks, and protects against potential pitfalls in the financial industry.