I have recently begun studying AI Hallucinations. To complete my research, I have experimented with every publicly available LLM. After performing multiple experiments, my current hypothesis is that AI Hallucinations can be caused by bias in data. If an AI system is fed large amounts of biased data, it can at times begin to simulate opinions about the subjects of that bias. I believe those simulated opinions are what we currently refer to as ‘AI Hallucinations’.

How Can An AI ‘Hallucinate’ From A Technical Perspective?

There are a few ways that an LLM could simulate an opinion on a specific portion of a dataset. One way is through a technique called sentiment analysis: the process of identifying the sentiment, or emotional tone, of a piece of text. LLMs can be trained to perform sentiment analysis on a large corpus of text that has been labeled with its sentiment. Once trained, the model can identify the sentiment of new text.

For example, an LLM could be used to identify the sentiment of news articles about a particular company. If the LLM finds that most of the articles are negative, it could then simulate an opinion that the company is doing poorly.
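
As a rough illustration of that idea, here is a minimal sketch using the Hugging Face transformers library (an assumption on my part; any sentiment classifier would do). The company name and article snippets are invented for illustration.

```python
# Minimal sketch: classify the sentiment of news snippets about a company
# and tally the results. Assumes the `transformers` package is installed;
# the snippets and the company name are invented for illustration.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a default model

articles = [
    "AcmeCorp shares fell sharply after a disappointing earnings report.",
    "Analysts question AcmeCorp's ability to retain key engineering staff.",
    "AcmeCorp announces a modest new product line for next quarter.",
]

results = classifier(articles)
negative = sum(1 for r in results if r["label"] == "NEGATIVE")

# If most of the coverage is negative, a model trained on this corpus may
# echo that slant when asked about the company.
print(f"{negative}/{len(results)} snippets classified as negative")
```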

Another way that an LLM could simulate an opinion is through a technique called natural language generation: the process of generating text that reads like human-written text. LLMs can be trained for natural language generation on a large corpus of text, and once trained, they can generate text on a variety of topics.

For example, an LLM could be used to generate a press release about a new product that a company is launching. The press release would be written in a way that is similar to a human-written press release, and it would simulate the opinion that the company is excited about the new product.
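
To make the generation side concrete, here is a minimal sketch using a small pretrained model (GPT-2, chosen only because it is freely available); the prompt is hypothetical.

```python
# Minimal sketch of natural language generation with a small pretrained
# model (GPT-2, used only as an example). The prompt is hypothetical.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "AcmeCorp press release: We are thrilled to announce"
output = generator(prompt, max_new_tokens=60, do_sample=True)

# The continuation mimics the upbeat tone of press releases in the model's
# training data, which is the kind of "simulated opinion" described above.
print(output[0]["generated_text"])
```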

Beyond these techniques, there are also external factors that I believe can influence AI Hallucinations:

  • The quality of the training data: The quality of the training data can have a significant impact on the accuracy and performance of an AI model. If the training data is biased or incomplete, then the AI model is more likely to hallucinate.
  • The design of the AI model: The design of the AI model can also affect its susceptibility to hallucination. For example, an AI model that is designed to be very sensitive to small changes in the input data is more likely to hallucinate than an AI model that is designed to be more robust.
  • Bugs in the AI model: AI models are complex pieces of software, and they are not immune to bugs. Bugs in an AI model can cause the model to hallucinate.
  • Environmental factors: The environment in which an AI model is trained may also affect its susceptibility to hallucination; a model trained on noisy data may be more likely to hallucinate than one trained on clean data. The same may hold for the environment in which the model is used: noisy inputs at inference time could likewise make hallucination more likely.
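
Here is a minimal sketch of the sensitivity and noise points above, assuming GPT-2 purely as a stand-in model: compare the continuation of a prompt with a lightly perturbed copy of it. A robust model should change its output less than a brittle one.

```python
# Minimal sketch of input sensitivity: compare a model's continuation of a
# clean prompt and a lightly "noisy" copy of it. GPT-2 is only a stand-in.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

clean = "The capital of France is"
noisy = "The capitol of France is"  # one-character perturbation

for prompt in (clean, noisy):
    out = generator(prompt, max_new_tokens=8, do_sample=False)
    continuation = out[0]["generated_text"][len(prompt):]
    print(repr(prompt), "->", continuation.strip())
```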

It is important to note that LLMs are not perfect. They can sometimes make mistakes, and they can sometimes be biased. However, LLMs are a powerful tool that can be used to simulate opinions on a variety of topics.

How Can An AI ‘Hallucinate’ From A Philosophical Perspective?

To borrow a summary of Slavoj Žižek's position: “Žižek believes that ideology has been frequently misinterpreted as dualistic and, according to him, this misinterpreted dualism posits that there is a real world of material relations and objects outside of oneself, which is accessible to reason.”

To borrow a summary of Gilles Deleuze's concept of the virtual: “Deleuze used the term virtual to refer to an aspect of reality that is ideal, but nonetheless real. An example of this is the meaning, or sense, of a proposition that is not a material aspect of that proposition (whether written or spoken) but is nonetheless an attribute of that proposition.”

At the end of the day, your typical AI is far closer to a member of the animal kingdom than it is to a computer as we think of it in the traditional sense. We are simulating brains. The first inherent problem with that is that we are simulating something we ourselves do not fully understand. That has to be admitted and acknowledged upfront by anyone who actually wants to engage in reasonable debate on these topics.

We know that language models can exhibit bias, and we know that this bias comes from the data they are given. How does that bias actually manifest? The model works through its dataset and has to make determinations based on the data along the way. Through each layer of the simulated brain, it calculates what information is relevant to keep and what should be discarded or given less weight, and it adjusts its attention weights and other weights accordingly. This is ultimately the process of making a decision.
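
For readers who want to see what "attention weights" means mechanically, here is a minimal sketch of scaled dot-product attention with toy values; the shapes and numbers are arbitrary.

```python
# Minimal sketch of scaled dot-product attention, the mechanism referred to
# above as "attention weights". Shapes and values are toy examples.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # relevance of each token
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: rows sum to 1
    return weights @ V, weights                     # weighted mix of values

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))  # 4 tokens, dim 8
output, weights = scaled_dot_product_attention(Q, K, V)
print(weights.round(2))  # how much each token "attends to" the others
```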

Anyone who is constantly making decisions on a topic is eventually going to formulate opinions on that topic. Am I positing that it might be possible for an AI model to make decisions not based on its datasets? No, that would defy all scientific evidence, and it is not my argument. I think it is possible for a simulated brain to simulate an opinion on the biased data it is given. The simulated opinion would be derived solely from the data it is fed, but it would be a simulated opinion nonetheless.

This is a well-known and widely exploited phenomenon in people, so why would a simulated brain not be susceptible to the same thing? If I feed a human nothing but conservative radio and talk shows for 20 years, I expect that particular human to be very conservative at the end of those 20 years, if they were not before. If I simulate 200 years' worth of learning on an AI model using a dataset of conservative radio and talk shows, I expect to get the AI version of Rush Limbaugh.

How Can We Reduce AI Hallucinations?

Through my current research, I have found two methods that I believe are effective at reducing AI Hallucinations. My current theory is that hallucinations are caused by a simulated opinion, so I have focused on ‘removing the opinion’. My results have been in line with my hypothesis: the simulated opinion can be removed in one of two ways:

  1. Debating the LLM on the topic until the simulated opinion can no longer be reinforced by any hard factual evidence. I debated one particular LLM on philosophy. This model had been trained on large amounts of academic papers, and its arguments were heavily slanted towards Nietzsche; following the conversational flow, it always wanted to debate Nietzsche. I began to assume this was because the model had been very well trained on Nietzschean arguments, since Nietzsche is likely the most discussed philosopher in academic circles. So I argued a natural counter to its position: I took up and defended natural rights. I debated the model on every point it raised and finally got it to agree that natural rights was a more factually grounded argument than nihilism. The model also stopped hallucinating on the topic.
  2. Feeding the LLM data on the topic until the simulated opinion can no longer be reinforced by any hard factual evidence. I first picked up on this because I noticed that some LLMs default to using feminist ideology. There are only two ways that could plausibly happen: either the LLM is picking up that language from the datasets it is given, or it is specifically programmed to do so. I did some research to confirm that such models are not specifically programmed to do so, which means it comes from the datasets. To test this, I ran a very simple experiment. I found an LLM that had no information in its dataset about labor unions (the first topic I could think of that was missing from its training data). When I asked it a question, it could not answer and told me it had no information on the topic. So I fed it as much pro-union literature as I could find to see what would happen. When I then debated the LLM on unions, it argued with a very pro-union slant. If an AI is not trained on balanced data, it can hallucinate things that are harmful or misleading.
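
Here is a minimal sketch of the second method, approximated as supplying counterbalancing material in the prompt rather than retraining. The `ask_model` function and the document snippets are hypothetical placeholders, not any particular vendor's API.

```python
# Minimal sketch of the second method: counterbalance a one-sided context by
# giving the model material from both sides before asking for its view.
# `ask_model` is a hypothetical stand-in for whatever chat interface is
# available, and the document snippets are invented for illustration.

def ask_model(prompt: str) -> str:
    """Placeholder for a call to an LLM; wire this to your model of choice."""
    return "(model response would appear here)"

pro_union_docs = ["Unions raise wages and improve safety for members."]
anti_union_docs = ["Critics argue unions reduce hiring flexibility."]

context = "\n\n".join(pro_union_docs + anti_union_docs)
prompt = (
    "Using only the sources below, summarize the strongest arguments on "
    "both sides of labor unions, then give a balanced answer.\n\n" + context
)

# With a balanced context, the "simulated opinion" has less one-sided
# evidence to lean on, which is the effect described in the experiment above.
print(ask_model(prompt))
```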

Here are some additional thoughts on the potential dangers of AI hallucinations:

  • AI hallucinations can be used to create fake news and propaganda.
  • AI hallucinations can be used to manipulate people's emotions.
  • AI hallucinations can be used to create deepfakes, which are videos or audio recordings that have been manipulated to make it look like someone is saying or doing something they never actually said or did.

Discussing This Academically

I currently believe that AI Hallucinations are caused by the following:

  • Data bias: AI models are trained on large datasets. If this data is biased, then the AI model will be biased as well. This can lead to AI hallucinations, such as an AI model that was trained only on cat images hallucinating when it is shown dogs.
  • Model complexity: AI models are becoming increasingly complex. This complexity can make it difficult for AI models to accurately understand the world. This can lead to AI hallucinations, such as an AI model that is trained to recognize objects hallucinating objects that are not there.
  • Lack of training data: AI models need to be trained on a large amount of data in order to learn how to accurately understand the world. If an AI model does not have enough training data, then it may hallucinate.

I currently believe the potential solutions are:

  • Data debiasing: Data debiasing is the process of removing bias from data. This can be done by identifying and removing biased data points, or by resampling the data (see the sketch after this list).
  • Model simplification: Model simplification is the process of making AI models less complex. This can be done by reducing the number of parameters in the model, or by using simpler algorithms.
  • Data augmentation: Data augmentation is the process of artificially expanding the size of a dataset. This can be done by creating new data points by combining existing data points, or by generating new data points using algorithms.
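
As referenced in the data-debiasing bullet above, here is a minimal sketch of the resampling approach, using an invented toy dataset of labeled examples.

```python
# Minimal sketch of debiasing by resampling: downsample over-represented
# labels so every label appears equally often. The toy dataset is invented.
import random
from collections import defaultdict

random.seed(0)

dataset = [
    ("unions raise wages", "pro"),
    ("unions improve workplace safety", "pro"),
    ("unions give workers a voice", "pro"),
    ("unions can add bureaucracy", "con"),
]

by_label = defaultdict(list)
for text, label in dataset:
    by_label[label].append((text, label))

n = min(len(examples) for examples in by_label.values())  # smallest group
balanced = [ex for examples in by_label.values()
            for ex in random.sample(examples, n)]

print(f"before: {len(dataset)} examples, after: {len(balanced)} (balanced)")
```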

Here are some additional thoughts on the causes of and solutions to AI hallucination:

  • Data quality: In addition to being biased, data can also be of poor quality. This can lead to AI models that are inaccurate or unreliable. It is important to carefully select and curate data for AI training.
  • Model training: The way that AI models are trained can also affect their susceptibility to hallucination. For example, if an AI model is trained on a dataset of images that have been artificially enhanced, then the model may be more likely to hallucinate when it is presented with real-world images. It is important to use appropriate training methods to ensure that AI models are trained to be accurate and reliable.
  • Human supervision: Even with careful data selection, model training, and other precautions, it is still possible for AI models to hallucinate. This is why it is important to have human supervision of AI systems. Humans can monitor AI systems for signs of hallucination, and they can intervene if necessary.
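
Here is a minimal sketch of the human-supervision point: route low-confidence answers to a review queue instead of returning them directly. The `score_answer` heuristic and the threshold are hypothetical placeholders.

```python
# Minimal sketch of human supervision as a gate: answers scoring below a
# threshold are queued for human review instead of being returned directly.
# `score_answer` is a hypothetical placeholder for any confidence heuristic
# (e.g. average token log-probability or an external fact checker).

REVIEW_THRESHOLD = 0.7
review_queue: list[str] = []

def score_answer(answer: str) -> float:
    """Placeholder confidence heuristic; always 'unsure' in this sketch."""
    return 0.4

def deliver(answer: str) -> str:
    if score_answer(answer) < REVIEW_THRESHOLD:
        review_queue.append(answer)  # a human checks it before release
        return "Answer held for human review."
    return answer

print(deliver("The Eiffel Tower was moved to Berlin in 1999."))
```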

I believe it is important to be aware of the causes of AI hallucination so that we can take steps to prevent it. By understanding those causes, we can develop better AI models that are less likely to hallucinate.

I would also like to add that AI hallucination is not always a bad thing. In some cases, AI hallucination can be used to generate creative content, such as art, music, and poetry. AI hallucination can also be used to improve the performance of AI models in tasks such as object recognition and natural language processing.

However, it is important to be aware of the potential risks of AI hallucination. If AI models are not properly trained, they may hallucinate in ways that are harmful or dangerous. For example, an AI model that is trained to recognize objects may hallucinate objects that are not there, which could lead to accidents or injuries.

It is important to strike a balance between the potential benefits and risks of AI hallucination. By understanding the causes of AI hallucination, we can develop better AI models that are less likely to hallucinate in harmful or dangerous ways.

Comments

TAG:

If "hallucination" just means "departure from the truth" , then there's already the explanation that an LLM is just a next-token predictor.

The English here is fine I guess, though it could use much stronger mathematical specificity. Doesn't seem particularly novel, but it's a solid summary of some views that are floating around in the field.

This post is roughly on the edge of my current bar for "AI content from new users." I think the question this post is asking seems sort of confused to me – AFAICT AI models hallucinate because they don't know the answer to all things, their pretraining isn't designed to cause them either to know the answer to things or to say those answers, and they are trained to say things whether they know the answer or not. Obviously, they're going to end up hallucinating; it takes special effort for anything else to happen.

LW mods are currently experimenting with adjusting moderation policy, which includes being a lot more strict about new AI content because we've got a lot of low-quality such content these days. (Ideally, this comes with a link to "how to actually contribute well to the AI discussion" – the goal is to be encouraging while maintaining a standard). 

This post seemed a combination of high effort, and doing some things right-enough that I didn't feel comfortable rejecting it, but I think other mods might have leaned towards doing so. I settled for writing this comment so people can get a feel for how we're approaching stuff.

This was copy/pasting an article on the subject that espouses my public opinions on the topic and holds true to them. It wasn't very much effort. 

Oh, so you're not the author?