Generative Artificial Intelligence (Beginner)

Misinformation & Bias in AI

Misinformation

While GAI tools can help users in multiple stages of their research projects, they are also known for producing information that is not factual. GAI tools can generate "hallucinations": false information invented by the AI system and presented as if it were true.

In some cases, GAI tools are intentionally used to create false images, audio, or video to spread misinformation and mislead audiences. These products are known as "deepfakes."

In addition to creating false information, GAI can return outdated information. Some GAI tools do not have access to current information, or they were trained on older datasets, so they are unable to provide up-to-date results.

Bias

GAI can produce biased results. Because GAI systems learn to predict patterns of words from copious amounts of data available on the internet, they reflect the biases present in that data as well as in a given input.

Some GAI tools also learn from human feedback. This is done under the assumption that the human testers are unbiased, but no one is truly impartial. For example, GAI tools such as ChatGPT have been documented producing sexist, racist, and otherwise offensive language in their results.

Related Recommendations  

  • Fact-check all results
  • Evaluate results for possible biases
  • Avoid asking for a list of sources on a specific topic (AI tools often hallucinate citations)
  • Consult AI developers' notes 
  • Remember that AI tools are not search engines

Artificial Intelligence and Academic Integrity

Plagiarism

Generative AI tools are currently raising concerns about increased plagiarism. These concerns are one of the reasons this guide exists. It is important to cite everything that is not your own work or idea, including any information or writing produced using GAI. Policies regarding GAI can vary from class to class, so it's important to review syllabi and the current student handbook to understand what is expected of researchers regarding the use of GAI. If you are submitting your writing to a journal, it is also recommended to check the journal's policy on AI usage.

False Citations

GAI tools such as ChatGPT have been known to generate false citations, and even when the citations refer to real papers, the content attributed to them may still be inaccurate. Providing false citations in research, whether intentional or unintentional, violates MUW's Academic Integrity Policy.