Financial firms are not safe from the threat of AI, says the New York State Department of Financial Services.
The recommendation was part of an 11-page guidance document the department released this week, which cited risks from social engineering, cyberattacks and the theft of nonpublic information, reports the Wall Street Journal. The NYDFS regulates about 3,000 financial institutions which collectively manage some $9.7 trillion.
“I think it’s really about making sure there’s expertise in the institution, making sure they’re engaging with lots of stakeholders, so they understand the development of the technology,” said Adrienne Harris, superintendent of the NYDFS. “It’s about making sure that you’ve got the right expertise in-house—or that you’re otherwise seeking it through external parties—to make sure your institution is equipped to deal with the risk presented.”
As the Insurance Journal notes, the NYDFS's guidance also encourages firms to integrate AI into their cybersecurity measures for "substantial" benefits. “AI’s ability to analyze vast amounts of data quickly and accurately is tremendously valuable for: automating routine repetitive tasks, such as reviewing security logs and alerts, analyzing behavior, detecting anomalies, and predicting potential security threats; efficiently identifying assets, vulnerabilities, and threats; responding quickly once a threat is detected; and expediting recovery of normal operations,” the guidance states.
But while New York state is tightening its regulation of AI, California is fumbling its response. Late last month, Governor Gavin Newsom vetoed an AI safety bill (SB 1047), arguing that "smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047 — at the potential expense of curtailing the very innovation that fuels advancement in favor of the public good," NPR details.
California State Senator Scott Wiener, who co-authored the bill, took to X to write that the veto "leaves us with the troubling reality that companies aiming to create an extremely powerful technology face no binding restrictions from U.S. policymakers, particularly given Congress's continuing paralysis around regulating the tech industry in any meaningful way."
AI & Big Law
An analysis by KPMG found that "legal departments will be on the front lines of defending against cyber attacks and upholding organizational resilience." Specifically, the professional services firm adds that legal teams will help organizations respond to these attacks by "working with in-house technology or operational teams to implement or adopt appropriate cybersecurity technology to protect the organization’s data (in compliance with stricter data protection/cyber security laws)."
Verdict
As we've written here before, AI's effects on society are still in their opening chapters. And while the benefits of the technology are being loudly touted by Silicon Valley and other industry leaders, its security risks should also be carefully considered and guarded against.
Be a smarter legal leader
Join 7,000+ subscribers getting the 4-minute monthly newsletter with fresh takes on the legal news and industry trends that matter.