Ethics of AI, Deconstructed by Anton Vokrug, the Business Adviser of Dexola
Ethics refers to the study of what is morally right and wrong or a set of beliefs and rules that guide how a person behaves. It involves questions about justice, fairness, and the proper treatment of individuals in various situations.
The field often examines principles like honesty, integrity, and responsibility, providing a framework for individuals and organizations to make decisions. Ethical guidelines can vary between cultures, religions, and individuals, but they generally aim to establish a basis for moral conduct.
Humans learn ethics naturally from their social groups. AI models, by contrast, learn by consuming and analyzing massive amounts of data. In the process, they can absorb unethical patterns and reproduce them in their interactions.
The Confidentiality Problem
AI developers train their models on data they scrape from the web: books, science publications, articles, web forums, social media posts, etc. Additionally, developers may fine-tune the AI based on how users interact with it.
As long as this data is anonymized, its use may be considered acceptable. However, when the collected data can be used to identify its owner, ethical concerns arise: bad actors can exploit it for cyberbullying, doxing, phishing, and other unlawful practices. For instance, in the spring of 2023, a massive leak of ChatGPT credentials affected over 100,000 accounts.
Regular users do not know how AI tools use their data, how it is stored, or how it is protected. Imagine a manager who composes an email using ChatGPT: the confidential details in that conversation could inadvertently become public if the model later reproduces them in response to a crafted prompt. Apple, for example, forbids its employees from using ChatGPT over fears of leaking sensitive data.
To address this issue, AI applications need to adopt stringent privacy policies. They should inform users how their data will be used, let them opt out of data storage, and allow them to delete data that has already been stored.
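As a rough illustration of those two obligations, here is a minimal sketch of a conversation store that honors an opt-out and erases previously collected data. The class and method names are hypothetical, not any real service's API:

```python
from dataclasses import dataclass, field

@dataclass
class ConversationStore:
    """Hypothetical in-memory store illustrating opt-out and deletion."""
    records: dict = field(default_factory=dict)   # user_id -> list of messages
    opted_out: set = field(default_factory=set)   # users who refused storage

    def save(self, user_id: str, message: str) -> bool:
        # Respect the user's opt-out choice: never persist their data.
        if user_id in self.opted_out:
            return False
        self.records.setdefault(user_id, []).append(message)
        return True

    def opt_out(self, user_id: str) -> None:
        # Stop future storage and erase anything already collected.
        self.opted_out.add(user_id)
        self.records.pop(user_id, None)

store = ConversationStore()
store.save("alice", "draft of a confidential email")
store.opt_out("alice")            # Alice withdraws consent
print(store.save("alice", "hi"))  # False: nothing is stored
print(store.records)              # {}
```

The key design choice is that opting out both blocks future writes and deletes existing records, mirroring the two guarantees described above.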
The Discrimination Problem
Because AI analyzes a wide range of data, it can inadvertently adopt biases from certain social groups, leading to discrimination and prejudicial treatment of others.
For example, an AI can pick up racial, sexual, political, and other stereotypes. When used to evaluate job applicants, it might unfairly disadvantage candidates based on the biases it has "learned". Researchers have also found that ChatGPT can generate toxic and harmful responses when given specific prompts.
The discrimination problem is tied to the confidentiality problem: an AI is far less able to make a biased decision if it has no personal data about applicants beyond their CVs.
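One standard way such hiring bias is audited in practice is the "four-fifths rule" from US employment guidelines: a group whose selection rate falls below 80% of the most-selected group's rate is flagged for review. The sketch below shows the arithmetic; the group labels and applicant counts are invented for illustration:

```python
def selection_rates(outcomes):
    """outcomes: dict mapping group -> (selected, total applicants)."""
    return {g: sel / tot for g, (sel, tot) in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    """Return True per group if its selection rate is at least 80%
    of the best group's rate; False means the group is flagged."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: r / best >= threshold for g, r in rates.items()}

# Hypothetical screening results from an AI-assisted pipeline.
applicants = {
    "group_a": (45, 100),  # 45% selected
    "group_b": (30, 100),  # 30% selected -> 0.30/0.45 ~ 0.67, flagged
}
print(four_fifths_check(applicants))  # {'group_a': True, 'group_b': False}
```

A check like this does not explain *why* a model disadvantages a group, but it makes disparate outcomes measurable, which is the first step toward correcting them.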
The Bad Actor Problem
Skilled bad actors use the same concepts behind ChatGPT to build their own AI models without ethical restrictions. For example, FraudGPT is a phishing-focused model designed to compose convincing scam emails; other software generates deepfakes of famous people or bypasses KYC checks; and WormGPT can write custom malware.
To address this issue, cloud-computing companies could scrutinize the software they host, since large AI models are generally too resource-intensive to run on a home PC.
The Real-Life Privacy Problem
Governments can use AI to improve surveillance technology for the automatic identification of individuals. While this application sounds beneficial, it infringes on the right to privacy. When people know that cameras see them not as anonymous bystanders but as identified individuals, they may lose trust in the police.
This concern goes beyond street surveillance and covers any data that AI can capture and analyze. Some researchers have shown that keystrokes can be identified by the sound each key makes, recovering what is typed using nothing more than a laptop's built-in microphone.
To solve this problem, governments must establish strict rules for AI-assisted surveillance so that people know when and where they might be identified.
The Economic Problems
For certain tasks, AI may even outperform seasoned professionals. Widespread adoption of AI could result in job losses across both technical and artistic fields.
For example, Hollywood actors and screenwriters are on strike in part because they do not want to be replaced by AI. Goldman Sachs research indicates that AI could expose the equivalent of 300 million full-time jobs to automation.
How to Solve These Problems
At Dexola, we believe that the most effective solution is to regulate AI development, as well as the methods for collecting and storing data used for training.
Regulation and oversight of AI development are essential to ensuring that developers adhere to best practices: storing data securely, not discriminating against people based on their social groups, not replacing workers with AI, not using confidential data without consent, and not surveilling people unless necessary.
The issue of bad actors is the most challenging to resolve, as governments cannot simply prohibit them from using open-source AI or developing their own models. Nevertheless, this issue could be mitigated through educational initiatives or special filters that identify AI-generated content and alert users.