The U.S. Treasury acknowledges the benefits of responsible tech innovations across the financial industry and the numerous opportunities surrounding AI in banking.
New technology does, however, pose risks, and current risk management frameworks need to be revised to protect against emerging AI threats. To that end, the American Bankers Association (ABA), in cooperation with the Bank Policy Institute (BPI), is working to mitigate the risks of AI and emerging technology in banking. Here are a few strategic measures banks can implement to reduce the risks of generative AI.
Leading banking institutions have started developing customized AI frameworks to ensure the responsible use of AI. These frameworks follow internationally accepted guidance such as the OWASP AI Security and Privacy Guide, the OECD AI Principles, and NIST's AI Risk Management Framework (AI RMF), while catering to the unique needs of the banking sector.
These frameworks allow banks to assess and manage the potential risks of AI systems, discover gaps in their current controls, and develop robust mitigation strategies for AI risks in banking.
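A gap assessment against a framework like the NIST AI RMF can be sketched very simply: list the controls each core function expects, then diff them against what the bank has in place. The control names below are hypothetical examples for illustration, not the framework's official subcategories.

```python
# Illustrative gap analysis against the four NIST AI RMF core functions
# (Govern, Map, Measure, Manage). Control names are hypothetical examples.
REQUIRED_CONTROLS = {
    "Govern": {"ai_use_policy", "model_inventory", "accountability_roles"},
    "Map": {"use_case_classification", "data_lineage"},
    "Measure": {"bias_testing", "performance_monitoring"},
    "Manage": {"incident_response", "model_decommissioning"},
}

def find_gaps(implemented: set) -> dict:
    """Return the missing controls per RMF function (empty dict = no gaps)."""
    return {
        function: required - implemented
        for function, required in REQUIRED_CONTROLS.items()
        if required - implemented
    }

# Example: a bank with governance and mapping covered but weak measurement.
current = {"ai_use_policy", "model_inventory", "accountability_roles",
           "use_case_classification", "data_lineage", "incident_response"}
for function, missing in find_gaps(current).items():
    print(f"{function}: missing {sorted(missing)}")
```

The output highlights which RMF functions lack controls, which is exactly the "discover gaps, then mitigate" loop described above.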
Increasingly, banks are integrating risk management functions horizontally across the organization to reduce AI-related risks. For instance, many financial institutions place AI risk governance under a dedicated AI leadership role or an existing executive, such as the Chief Information Security Officer (CISO) or the Chief Technology Officer (CTO).
Other financial institutions have established competency centers to manage the opportunities and risks of artificial intelligence. Regardless of the structure, banks are advised to integrate their AI and technology strategies with enterprise-wide risk management processes and to work across departments to ensure comprehensive AI risk mitigation.
In certain domains, AI cannot fully replace human judgment, particularly when fairness is at stake. According to a white paper published by a team of executives and academics from the technology and financial services sectors, fair AI requires human intervention.
The paper notes that AI algorithms cannot fully replace the experience and generalized knowledge of a well-trained, diverse team analyzing automated systems for inherent discriminatory bias. Kartik Hosanagar, a professor at the Wharton School, says, “Everyone should be aware of the repercussions of AI making decisions on our behalf. Besides, institutions should incorporate key principles when creating and deploying customer-facing AI.” Such principles would make it easier for people to flag questionable AI decisions.
When it comes to vendors supplying AI technology, banks must extend third-party due diligence and monitoring to cover AI-specific considerations. Beyond the usual third-party inquiries, banks should ask about AI model validation, integration of the AI technology into their environment, and data privacy and retention policies.
FS-ISAC has recently published a Generative AI Vendor Evaluation & Qualitative Risk Assessment Tool and an accompanying Guide that banking-sector institutions can use when planning for and engaging with generative AI vendors.
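The vendor questions above can be operationalized as a weighted questionnaire with a simple residual-risk score. The questions and weights below are illustrative only and are not drawn from the FS-ISAC assessment tool.

```python
# Sketch of an AI-specific vendor due-diligence questionnaire with simple
# weighted risk scoring. Questions and weights are hypothetical examples.
QUESTIONS = [
    ("Does the vendor independently validate its AI models?", 3),
    ("Is customer data excluded from model training by default?", 3),
    ("Are data retention and deletion policies documented?", 2),
    ("Does the vendor support on-premises or private-cloud deployment?", 1),
]

def risk_score(answers: list) -> float:
    """Residual risk in [0, 1]: total weight of 'no' answers over total weight."""
    total = sum(weight for _, weight in QUESTIONS)
    missed = sum(weight for (_, weight), ok in zip(QUESTIONS, answers) if not ok)
    return missed / total

# A vendor answering "yes" to everything except private deployment:
print(risk_score([True, True, True, False]))  # 1/9, a low residual risk
```

A bank would of course tune the questions, weights, and acceptance threshold to its own risk appetite; the point is that AI-specific questions can slot into existing quantitative third-party review processes.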
Major banking stakeholders acknowledge that multi-factor authentication should be implemented and extended more widely to strengthen fraud and cybersecurity safeguards against AI-driven attacks. According to a recent MIT Technology Review report, criminals are already using generative AI to bypass the authentication systems currently used in the banking sector.
Rather than relying on spoofable biometric factors such as keystroke dynamics, voice, and video recognition, banks should adopt authentication measures with stronger security guarantees, such as phishing-resistant multi-factor authentication (for example, FIDO2 hardware security keys or passkeys).
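One widely deployed MFA building block that does not depend on voice or video biometrics is the time-based one-time password (TOTP) of RFC 6238. The sketch below implements it with only the Python standard library; phishing-resistant options such as FIDO2 keys offer stronger guarantees still, but TOTP illustrates the shift away from spoofable biometric signals.

```python
import base64
import hmac
import struct
import time

# Minimal RFC 6238 TOTP sketch (SHA-1, 30-second step, 6 digits) using only
# the standard library. Not hardened for production use.
def totp(secret_b32: str, for_time=None, step: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if for_time is None else for_time) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = mac[-1] & 0x0F                          # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890" (base32-encoded), T=59.
secret = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
print(totp(secret, for_time=59))  # 287082
```

On the server side, submitted codes should be checked with a constant-time comparison, e.g. `hmac.compare_digest(user_code, totp(secret))`, to avoid timing side channels.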
The opportunities and challenges presented by AI in the banking industry require an integrated risk management strategy. Banks should incorporate risk management frameworks across their enterprises, ask vendors the right questions regarding AI, and adopt human-centric approaches to safeguard their systems, customers, and clients.