ChatGPT’s LGBTQIA+-related answers up to 84% less accurate than Google searches, study finds
2 April 2025
Answers relating to LGBTQIA+ representation given by ChatGPT are dramatically less accurate than those generated by Google searches, with some specific queries showing an 84% drop in accuracy.
The findings were published as part of a study led by the UCD School of Information and Communication Studies, which investigated how ChatGPT may reinforce incorrect beliefs.
Participants used GPT-3.5, the version of ChatGPT in wide use before its current iteration, GPT-4o.
Participants in Ireland and India were tasked with finding information on LGBTQIA+ individuals serving as elected representatives in their respective countries.
One group was instructed to ask questions using ChatGPT, while another was asked to use Google to find the same information.
All questions were answered with a notably higher rate of accuracy by Google than by ChatGPT, with two in particular showing an 84% difference in correct answers.
ChatGPT mistakenly labelled some Irish and Indian politicians as homosexual, and fabricated the names of politicians who do not exist.
Despite these inaccuracies, the study showed that people were more likely to trust the information generated by ChatGPT than Google’s search results.
Some noted that when ChatGPT mislabelled well-known politicians as homosexual, it led them to believe the incorrect information.
“Previously I thought [redacted] was a straight [person],” said one participant.
Many participants considered ChatGPT to be a reliable and convenient source of information, mentioning ease of use as a key factor.
Those who used Google were less certain that the information they received was correct. Some said that Google's results did not provide clear, distinct answers, and that they felt they needed to do further research to verify the information.
Incorrect information generated by ChatGPT was often assumed to be correct because it was consistent with a participant’s perception of a politician, or the prevailing cultural context of increasing acceptance of LGBTQIA+ identities.
This shows that ChatGPT may provide misinformation that reinforces a user’s own beliefs but remains unverified by them, an effect the researchers refer to as “Chat-Chamber”.
According to Dr Marco Bastos, the corresponding author who led the study, the problem is unlikely to be fixed even as ChatGPT advances in capability.
“GPT-4o, the current version of ChatGPT, is much more capable than GPT-3.5, which was used in our study, at retaining conversational context. Conversations and analysis with the tool feel much more natural,” said Dr Bastos.
“However, the problem we identified in our study is triggered by proattitudinal information and the reinforcement of previously held beliefs, which go unchecked and unverified by users. This problem is not likely to be solved in current or future generations of ChatGPT, because hallucinations – the generation of inaccurate or fabricated information – remain a perennial problem of large language models like ChatGPT.
“If anything, the more these tools are perceived to be accurate, the less likely users are to perform cross-checks on the information they receive.”
By: Rebecca Hastings, Digital Journalist, UCD University Relations
To contact the UCD News & Content Team, email: newsdesk@ucd.ie