AI application risks in banking
Acknowledging the difference between conventional deterministic (static) software applications vs. self-learning stochastic AI applications:
Conventional deterministic software applications follow predefined rules and logic, where the output is predictable and based on the input. These applications perform the same way every time given the same inputs and any changes in behavior are explicitly coded by the developer.
In contrast, self-learning stochastic AI applications, such as machine learning systems, involve probabilistic processes and adapt over time based on data. Their behavior can vary even with identical inputs, as they continuously learn and improve from experience, often producing different outputs based on the underlying patterns and uncertainties in the data. These systems can evolve and make decisions based on learned knowledge rather than being strictly controlled by predefined rules.
In a business context, Large Language Models (LLMs) used for AI applications can be vulnerable to several attack risks:
- Data Poisoning: Attackers can manipulate the training data to introduce biases or vulnerabilities, leading the model to produce incorrect or harmful outputs when deployed.
- Model Inversion: By querying the model extensively, attackers can potentially reverse-engineer sensitive information from the training data, revealing private or confidential data used during the model's development.
- Adversarial Inputs: Subtle modifications to input data can cause the model to generate unintended outputs, potentially bypassing security filters or leading to incorrect decision-making.
- Prompt Injection: Malicious actors may craft inputs or prompts that manipulate the model’s behavior, steering it towards harmful, biased or misleading outputs.
- Misuse for Phishing or Social Engineering: LLMs can be used to generate convincing phishing messages or other forms of social engineering, which can trick employees or customers into divulging sensitive information.
- Bias Exploitation: If the LLM is not properly controlled, attackers could exploit inherent biases in the model to manipulate outcomes in ways that favor their malicious objectives.
To mitigate these risks, businesses need to implement robust security measures such as model auditing, adversarial testing and access controls, alongside continuous monitoring and updates.
Can an AI loan agent be tricked into approving a loan not meeting approval criteria by malicious prompt injection by a loan officer ?
Remembering that any fraudulent losses at financial institutions were always attributed to criminal energy by banking staff and not due to a lack of management oversight or weaknesses in process controls, therefore attacks on AI applications are very likely with unprecedented financial losses and a lack of trust in the bank which might lead to another bank failure.
Explore the smart ProcessHorizon web app for holistic automated process mapping: https://processhorizon.com