As banks continue to adopt generative AI, Goldman Sachs is embracing the technology while remaining alert to its risks. The firm's most recent concern centres on the regulation surrounding LLMs.
In its annual 10-K filing with the SEC for 2024, Goldman described the regulatory environment for AI as “uncertain and rapidly evolving.” The firm has previously noted the propensity of LLMs to hallucinate, but said in the filing that LLMs also risk infringing intellectual property, releasing confidential or proprietary data, and incorporating biases from their training data. Goldman said it has a hard time using LLMs in departments “that require documentation or explanation of the basis on which decisions are made,” because neither its internal models nor (especially) those provided by third-party vendors are easily explainable.
Failing to comply with changing regulations doesn’t just expose the bank to legal penalties; it can also “harm [its] reputation and public perception” and cast doubt on “the effectiveness of [its] security measures.”
This hasn’t stopped it, and many other banks, from using the technology, including within their compliance teams themselves. Zachary Plotkin, managing director at recruitment firm Madison Davis, told us “most large financial services institutions are touching the surface on AI and establishing internal AI groups in risk and compliance.” These teams are generally given limited resources and “a small sample size to work with.” Elsewhere, firms rely on one of what Plotkin estimates to be over a thousand AI software vendors; in compliance, these include risk-decisioning platform Oscilar and compliance ‘co-pilot’ Sedric.ai. For now, this is good news for compliance staff, who regularly complain that they’re overworked and underfunded. There are fears that these tools will cost jobs, but they’re more likely to replace work that has already been offshored.
If banking compliance staff are using a risk-laden technology like AI, then who watches the watchmen? Plotkin predicts “there will be a rise in the next 1-3 years for specialists” in AI regulation, who will be paid above their peers. A 2024 report from Selby Jennings estimated that compliance directors in banking earn up to £175k ($225k) on average in London. Goldman seemed to support that notion in its filing; the bank said it may need to “increase [its] compliance costs” due to evolving regulation.
JPMorgan has already made multiple hires in ‘AI policy and governance’, including Ria Strasser-Galvis, formerly Google’s head of foreign policy partnerships, who joined as an executive director in February. Several JPMorgan employees have since moved internally into Strasser-Galvis’s team.
In time, Plotkin says, knowledge of AI regulation will be “ubiquitous for compliance officers” and there won’t be a large, dedicated team of specialists. The arc could resemble that of prompt engineers, once thought to be a potential new class of jobs, when in reality prompt engineering is now practised by pretty much everyone.
Have a confidential story, tip, or comment you’d like to share? Contact: Telegram: @AlexMcMurray, WhatsApp: (+1 269 237 3950). Signal: @AlexMcMurrayEFC.88 Click here to fill in our anonymous form, or email editortips@efinancialcareers.com.
Bear with us if you leave a comment at the bottom of this article: all our comments are moderated by human beings. Sometimes these humans might be asleep, or away from their desks, so it may take a while for your comment to appear. Eventually it will – unless it’s offensive or libellous (in which case it won’t.)