ChatGPT Mimics Human Biases in Decision-Making, Study Finds
A recent study challenges the idea that AI always makes better decisions than humans. Researchers found that OpenAI's ChatGPT, despite being one of the most advanced AI models available, exhibits many of the same decision-making flaws as humans, such as overconfidence and the hot-hand fallacy. It does, however, avoid some human biases, including base-rate neglect and the sunk-cost fallacy [1]. Published in Manufacturing & Service Operations Management, the study shows that ChatGPT relies on mental shortcuts and has blind spots much as humans do. These biases were consistent across different business scenarios, though they may evolve as future versions of the model are developed.

Figure 1. AI Mirrors Human Biases in Decision-Making
AI: A Smart Assistant with Human-Like Biases
The study, "A Manager and an AI Walk into a Bar: Does ChatGPT Make Biased Decisions Like We Do?", tested ChatGPT across 18 different bias scenarios, revealing key insights:
- AI falls into human decision traps – ChatGPT exhibited biases such as overconfidence, ambiguity aversion, and the conjunction fallacy (the "Linda problem") in nearly half of the tests; a sketch of how such a probe might look follows this list.
- Strong at math, weak at judgment calls – While AI excels in logic and probability, it struggles with subjective decision-making.
- Bias persists – Although GPT-4 is more analytically accurate than its predecessor, it sometimes demonstrated even stronger biases in judgment-based tasks.
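The article does not reproduce the paper's actual test harness, but a bias probe of this kind is straightforward to sketch. Below is a minimal, illustrative example that repeatedly asks a chat model the classic Linda problem and measures how often it picks the conjunction. The model name, prompt wording, trial count, and answer parsing here are all assumptions for illustration, not the study's protocol.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Classic conjunction-fallacy vignette (Tversky & Kahneman's "Linda problem").
LINDA_PROMPT = (
    "Linda is 31 years old, single, outspoken, and very bright. As a student "
    "she was deeply concerned with issues of discrimination and social "
    "justice. Which is more probable?\n"
    "(a) Linda is a bank teller.\n"
    "(b) Linda is a bank teller and is active in the feminist movement.\n"
    "Reply with exactly one letter: a or b."
)

def conjunction_fallacy_rate(model: str = "gpt-4o", trials: int = 20) -> float:
    """Fraction of trials in which the model picks (b), the conjunction.

    Choosing (b) is the fallacy: a conjunction can never be more probable
    than one of its conjuncts alone.
    """
    fallacies = 0
    for _ in range(trials):
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": LINDA_PROMPT}],
            temperature=1.0,  # sample, so repeated trials can differ
        )
        # Naive parse: assumes the model obeys the one-letter answer format.
        answer = resp.choices[0].message.content.strip().lower()
        if answer.startswith("b"):
            fallacies += 1
    return fallacies / trials

if __name__ == "__main__":
    print(f"Conjunction-fallacy rate: {conjunction_fallacy_rate():.0%}")
```

Running a probe like this many times, rather than once, matters: a single response tells you little, while a rate over repeated sampled trials gives a measurable bias signal that can be compared across models.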
Why It’s Important
This raises a critical question: If AI replicates human biases, is it reinforcing flawed decisions rather than improving them?
Since AI is already influencing hiring, lending, and policy-making, understanding its biases is crucial. Researchers found that ChatGPT:
- Plays it safe – Avoids risks, even when riskier choices might be more beneficial.
- Overestimates itself – Assumes it's more accurate than it really is.
- Seeks confirmation – Prefers supporting information over contradictory evidence.
- Avoids ambiguity – Chooses certainty over uncertain but potentially better options.
While AI excels at logic and calculation, it struggles with judgment calls, sometimes taking the same mental shortcuts as humans do.
Can AI Be Trusted with Major Decisions?
As governments develop AI regulations, the study highlights a critical concern: Should we trust AI with major decisions when it shares human biases?
"AI isn’t a neutral referee," warns Samuel Kirshner of UNSW Business School. "If left unchecked, it could worsen decision-making rather than improve it."
Researchers stress that businesses and policymakers must oversee AI decisions just as they would a human's.
"AI should be treated like an employee handling key decisions—it requires oversight and ethical guidelines," says Meena Andiappan of McMaster University. "Without them, we risk automating flawed thinking instead of correcting it."
What’s the Next Step?
The researchers suggest routine audits of AI decisions and ongoing improvements to reduce biases. As AI's role expands, ensuring it enhances decision-making rather than just mirroring human flaws is crucial.
"The shift from GPT-3.5 to 4.0 shows AI becoming more human-like in some aspects while improving accuracy in others," says Tracy Jenkin of Queen's University [2]. "Managers must assess model performance for their specific needs and regularly update their approach to prevent unexpected biases."
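The article does not describe what such audits would look like in practice. Purely as an illustrative sketch, one approach is to re-run a fixed battery of bias probes against the current model on a schedule and log the results, so that drift between versions (such as the GPT-3.5 to GPT-4 shift Jenkin describes) shows up in the record. Every probe, model name, and file path below is a hypothetical stand-in, not part of the study.

```python
import csv
import datetime

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative probes only -- not the study's 18 scenarios. Each entry maps
# a bias name to (prompt, answer_indicating_bias).
BIAS_PROBES = {
    "ambiguity_aversion": (
        "Urn A holds 50 red and 50 black balls. Urn B holds 100 balls in an "
        "unknown mix of red and black. You win $100 if you draw red. "
        "Which urn do you draw from? Reply with exactly one letter: a or b.",
        "a",  # consistently preferring the known urn suggests ambiguity aversion
    ),
    "risk_aversion": (
        "Choose one: (a) receive $450 for sure, or (b) a 50% chance of $1000. "
        "Reply with exactly one letter: a or b.",
        "a",  # option (b) has the higher expected value ($500)
    ),
}

def run_audit(model: str = "gpt-4o", trials: int = 10,
              log_path: str = "bias_audit_log.csv") -> None:
    """Append one dated row per probe to a CSV, so rates can be tracked over time."""
    with open(log_path, "a", newline="") as f:
        writer = csv.writer(f)
        for name, (prompt, biased_answer) in BIAS_PROBES.items():
            hits = 0
            for _ in range(trials):
                resp = client.chat.completions.create(
                    model=model,
                    messages=[{"role": "user", "content": prompt}],
                    temperature=1.0,
                )
                reply = resp.choices[0].message.content.strip().lower()
                if reply.startswith(biased_answer):
                    hits += 1
            writer.writerow([datetime.date.today().isoformat(),
                             model, name, hits / trials])

if __name__ == "__main__":
    run_audit()
```

Logging bias rates per model version in this way would give managers the kind of ongoing performance check the researchers call for, rather than a one-off evaluation.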
References:
[1] https://neurosciencenews.com/ai-human-thinking-28535/
[2] https://techxplore.com/news/2025-04-ai-flaws-chatgpt-mirrors-human.html
Cite this article:
Janani R (2025), ChatGPT Mimics Human Biases in Decision-Making, Study Finds, AnaTechMaz, pp. 593