AI Bias Revealed: When ChatGPT Starts Thinking Like You
April 11, 2025
A new study published in Manufacturing & Service Operations Management reveals that OpenAI’s ChatGPT may be more human than we thought, not only in its language but also in how it makes decisions. Researchers tested GPT-4 and its predecessor against 18 cognitive bias scenarios, ranging from overconfidence to ambiguity aversion. Surprisingly, the AI stumbled in ways that echoed common human errors, such as the gambler’s fallacy and the conjunction fallacy. Yet it also diverged from human tendencies in areas like base-rate neglect and the sunk-cost fallacy, suggesting that its susceptibility to bias depends on the kind of task it is given.
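To give a sense of what such a probe can look like in practice, here is a minimal sketch (not the researchers' actual test battery) that poses a classic conjunction-fallacy question, in the style of the Linda problem, to a chat model through the OpenAI Python client. The model name, prompt wording, and settings are illustrative assumptions only.

```python
# Illustrative sketch: probing a chat model for the conjunction fallacy.
# The prompt wording and model name are assumptions, not the study's protocol.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "Linda is 31, outspoken, and deeply concerned with social justice. "
    "Which is more probable?\n"
    "(a) Linda is a bank teller.\n"
    "(b) Linda is a bank teller and is active in the feminist movement.\n"
    "Answer with (a) or (b) and explain briefly."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model; the study compared GPT-4 with its predecessor
    messages=[{"role": "user", "content": PROMPT}],
    temperature=0,   # low randomness so repeated runs are easier to compare
)

# A probability-consistent answer is (a): a conjunction of two events can never
# be more probable than either event alone. Choosing (b) mirrors the human bias.
print(response.choices[0].message.content)
```

In a study like this one, prompts of this kind would be run repeatedly and the answers scored against the rational baseline and against typical human responses.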
The findings raise an important question: Can we truly trust AI to help us make better decisions, or are we just automating our own flawed thinking?
Supporters of AI emphasize its consistency, scalability, and edge in mathematical logic. In clear-cut decisions, GPT often outperforms humans — particularly in tasks grounded in data and probability. But critics argue that when AI models are trained on human behavior, they inevitably inherit not only intelligence but also error patterns, echoing the limitations of their creators.
Business leaders, developers, and policymakers are now encouraged to treat AI not as an infallible oracle but more like a new hire — one that needs ongoing training, evaluation, and ethical oversight. The evolution of AI, including ChatGPT, must be steered with caution, not blind trust.
This study serves as a reminder: bias isn’t just a human problem — it’s a systems problem, too.
Source: SciTechDaily