
AI is facilitating financial fraud, Treasury warns

Advances in artificial intelligence make it easier for criminals to impersonate customers and craft increasingly sophisticated email phishing attacks, according to a report.

Artificial intelligence is making it easier for fraudsters to carry out more sophisticated attacks on financial firms, the Treasury Department said in a report Wednesday. 

Recent advances in AI mean criminals can more realistically mimic voice or video to impersonate customers at financial institutions and gain access to their accounts, the agency wrote. The same advances let bad actors craft increasingly sophisticated email phishing attacks with better formatting and fewer typos, according to Treasury.

“Artificial intelligence is redefining cybersecurity and fraud in the financial services sector,” Nellie Liang, under secretary for domestic finance, said in a statement accompanying the report, which was mandated under a presidential executive order last year.

The agency is the latest to sound a warning about AI, which presents risks as well as opportunities. Key financial regulators, including the Federal Reserve, the Securities and Exchange Commission and the Consumer Financial Protection Bureau, have raised concerns about everything from discrimination to potential systemic risk. 

The Biden administration will work with financial firms to use emerging technologies while also “safeguarding against threats to operational resiliency and financial stability,” Liang said.

As part of the report, the Treasury Department conducted 42 interviews with individuals from the financial services and information-technology sectors, data providers, and anti-fraud and anti-money-laundering firms. One concern was potential “regulatory fragmentation” as federal and state agencies set ground rules for AI. 

Treasury said it will work with the industry-led Financial Services Sector Coordinating Council, and the Financial and Banking Information Infrastructure Committee — tasked with improving collaboration among financial regulators — to ensure regulatory efforts are in sync.

GAPS BETWEEN FIRMS

The report noted that smaller financial firms have fewer IT resources and less in-house expertise than larger companies to develop AI systems themselves, and often have to rely on third parties. They also have less internal data available to train AI models to prevent fraud.

To address the gap, the American Bankers Association is designing a pilot program to facilitate industry information-sharing on fraud and other illicit activities. The US government may also be able to help by providing access to historical fraud reports to help train AI models, Treasury said. 

Treasury also laid out a number of other steps that the government and industry should consider, including developing a common language around AI and using standardized descriptions for certain vendor-provided AI systems to identify what data was used to train the model and where it came from. 
