Ethical AI: The Good, the Bad, and the Ugly

In 2018, Oxford philosopher Nick Bostrom proposed a thought experiment called “The Vulnerable World Hypothesis,” which considers humanity’s accelerating technological advances through an ethical lens. He imagines an urn filled with balls of various colors, each color representing a possible invention, discovery, or idea: white balls are unequivocally good, and gray balls present both dangers and benefits. The urn also contains one black ball, which has the power to cause large-scale destruction to humanity. Bostrom suggests that, as we continue to draw balls from the urn, we come ever closer to drawing the black ball.

Generative artificial intelligence (AI)—with its vast potential for good but also rife with danger—has so far proven to be a gray ball. However, as generative AI advances, society would be well served by far more robust regulation, lest the technology transform into the black ball that harms society.

Key ethical considerations stemming from AI, and generative AI in particular, include biased data and outcomes, transparency and accountability, data privacy and security, plagiarism, misinformation, and the future of work. Negative outcomes can arise even when development and deployment are executed with good intent. In the hands of bad actors, though, AI and generative AI can be misused, leaving a trail of dire consequences—facilitating financial crime, swaying public opinion and elections, or intentionally undermining human rights.

Several examples highlight the unintended adverse consequences of AI and its purposeful criminal uses:

  • In September 2023, a group of authors filed a lawsuit against OpenAI, the company behind ChatGPT, alleging that the company had committed copyright infringement by training its models on authored works without permission.
  • The accuracy of AI-driven facial recognition systems often varies significantly by race, which could lead to wrongful arrests and convictions of minorities.
  • Generative AI-created deepfakes, which include imagery and audio, are so sophisticated that it can be near-impossible to differentiate them from authentic information. Deepfakes run amok could deeply impact what we believe and who we believe, with severe implications for national and international security.
  • Phishing scams are accelerating with the help of generative AI, which has allowed fraudsters to craft more convincing messages and scale up their attacks.

As AI continues to refine itself, both good and bad applications will grow increasingly adept. Use cases not previously seen could arise, leading to even more ethical dilemmas.

In the world of fraud and money laundering risk management, ethical AI plays a pivotal role in enhancing the effectiveness and fairness of financial crime prevention measures. By leveraging advanced algorithms and machine learning, ethical or responsible AI-based systems can analyze immense datasets to identify anomalies indicative of fraudulent activities and money laundering while minimizing unfair or unintended consequences.
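The anomaly-detection approach described above can be illustrated with a minimal sketch. This is a hypothetical, simplified example (a z-score outlier check on transaction amounts, using only Python's standard library); production anti-money-laundering systems rely on far richer features and models, but the core idea—flagging observations that deviate sharply from the norm—is the same.

```python
from statistics import mean, stdev

def flag_anomalies(amounts, threshold=3.0):
    """Return indices of transaction amounts whose z-score exceeds the threshold.

    A z-score measures how many standard deviations a value lies from the mean;
    values beyond the threshold are flagged for human review, not auto-blocked,
    which supports accountability in an ethical-AI workflow.
    """
    mu = mean(amounts)
    sigma = stdev(amounts)
    if sigma == 0:
        return []  # all amounts identical; nothing stands out
    return [i for i, a in enumerate(amounts) if abs(a - mu) / sigma > threshold]

# Fifty routine $100 transactions followed by one $10,000 transfer:
transactions = [100.0] * 50 + [10_000.0]
print(flag_anomalies(transactions))  # the final transaction is flagged
```

In practice the threshold, features, and model would be tuned and continuously validated, since an overly aggressive detector generates the unfair outcomes the surrounding text warns about.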

Ethical AI principles foster accountability and responsible use of technology by incorporating mechanisms for continuous monitoring and auditing. Their application ensures that algorithms prioritize accuracy and transparency while guarding against biases that could unduly affect certain groups. Striking a balance between technological advancement and ethical considerations is vital in developing AI solutions that not only combat fraud and money laundering but also align with societal values and evolving regulatory requirements.
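One concrete form the continuous-monitoring mechanisms mentioned above can take is a periodic disparity audit: comparing a model's error rates across demographic groups and raising an alert when the gap exceeds a tolerance. The sketch below is a hypothetical illustration (the group names, data, and the 1.25 ratio tolerance are assumptions for the example, not an industry standard).

```python
def false_positive_rate(preds, labels):
    """Fraction of legitimate cases (label 0) wrongly flagged as fraud (pred 1)."""
    false_pos = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 0)
    negatives = sum(1 for y in labels if y == 0)
    return false_pos / negatives if negatives else 0.0

def audit_disparity(group_results, max_ratio=1.25):
    """Compare false-positive rates across groups.

    Returns the per-group rates and whether the worst-to-best ratio stays
    within the tolerance; a failing audit should trigger human review.
    """
    rates = {g: false_positive_rate(p, y) for g, (p, y) in group_results.items()}
    lo, hi = min(rates.values()), max(rates.values())
    ratio = hi / lo if lo > 0 else float("inf")
    return rates, ratio <= max_ratio

# Hypothetical audit: group B's legitimate customers are flagged twice as often.
results = {
    "group_a": ([1, 0, 0, 0], [0, 0, 0, 0]),  # 1 of 4 legitimate cases flagged
    "group_b": ([1, 1, 0, 0], [0, 0, 0, 0]),  # 2 of 4 legitimate cases flagged
}
rates, passed = audit_disparity(results)
print(rates, passed)  # the 2x disparity exceeds the tolerance, so the audit fails
```

Running such a check on every model release, and logging the results, is one way to make the accountability the text calls for operational rather than aspirational.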

Comprehensive regulations and international cooperation will be critical to navigating this increasingly complex landscape. Although regulatory regimes addressing AI are in their infancy, several jurisdictions and international organizations have made progress in their efforts to encourage innovation while safeguarding ethical standards.

  • The EU’s AI Act, provisionally agreed upon in early December 2023, takes a human rights-centered approach to governing AI and restricts specific AI systems designated as high-risk. Recognized by many as the world’s first comprehensive regulatory framework for AI, the AI Act—though imperfect—will likely greatly influence other legislative efforts. These regulations, however, may be insufficient to deter rogue actors and networks unbound by compliance concerns. Addressing this will require heightened cooperation among international organizations, national governments, and especially tech companies, whose safety guardrails embedded in their models are often easily bypassed.
  • The UN has issued guidelines and standards surrounding ethical AI, such as the Recommendation on the Ethics of Artificial Intelligence by the United Nations Educational, Scientific and Cultural Organization.
  • In July 2023, a group of the top players in tech, including Amazon, Google, Meta, and Microsoft, signed an AI safety pledge at the White House.

These are positive steps forward, but many of these actions are voluntary and non-binding, with no compliance requirements and no penalties. Symbolism will only go so far toward protecting rights when it comes to AI. Indeed, Google and Microsoft—two of the AI safety pledge signatories—have axed their ethical AI teams. In the balance between innovation and ethics, the former may win out absent incentives to do otherwise.

Still, AI’s potential for good is overwhelming; if leveraged correctly, it could promote a safer, more just world. The road ahead will be difficult but not impossible: As long as regulatory efforts continue to progress and tech companies face pressure to develop ethical AI, it is unlikely that humanity has drawn the black ball from Bostrom’s urn of invention. We have, though, clearly drawn a gray ball—one that is just as complicated and risk-laden as it is beneficial.

Be on the lookout for Datos Insights’ report on The Double-Edged Sword of Generative AI: Fraud Perpetration and Detection coming later in January. To hear our experts expound on the benefits and dangers of generative AI as well as other critical trends impacting risk functions going into 2024, register for our January 9 webinar on the Top Trends in Risk in 2024.