Monday, February 16, 2026

STAT+: Why doctors and patients are having two totally different AI chatbot experiences

Welcome to the latest edition of STAT’s AI Prognosis, where we dive into artificial intelligence and its evolving applications in health care. In today’s feature, we explore how AI models perform differently depending on who is using them, and the risks that come with that variance. STAT’s Brittany Trang guides us through the topic and makes the case for greater awareness and accountability in AI technologies.

Artificial intelligence has reshaped numerous industries, from health care to finance, by offering faster data analysis, prediction, and decision support. But like any technology, it is not infallible, and its effectiveness often depends on the user. Brittany Trang, who covers AI in health care for STAT, has been examining how user diversity affects AI models’ performance, and her findings are both eye-opening and concerning.

According to Trang, AI models are designed and trained on specific data sets, which are often biased or lack diversity. For instance, a facial recognition system trained primarily on images of white individuals will be more accurate for that demographic than for people of color. That bias can lead to serious harms, including misidentification, discrimination, and wrongful arrests, as documented in several real-world cases.

But it’s not just about race; AI models can also be affected by factors like age, gender, language, and socio-economic background. For example, a chatbot designed to assist with mental health may not accurately understand or respond to individuals from different cultural backgrounds, leading to inadequate support and potential harm. This highlights the need for more diverse data sets and inclusive AI development, where all users are considered and represented.

Trang’s research has also shown that AI models’ performance can vary even within a particular demographic. For instance, an AI-powered diagnostic tool may be more accurate for men than for women, or for younger individuals than for older adults. This can have dangerous consequences, such as misdiagnosis or inadequate treatment plans, as in the reported case of a 39-year-old woman who died of undiagnosed heart disease after an AI risk model had classified her as low risk.
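The pattern described here can be made concrete with a simple subgroup audit: instead of reporting one overall accuracy number, break accuracy out by demographic group. The sketch below uses synthetic records and hypothetical group labels, and is illustrative only, not any specific tool’s implementation:

```python
# Compare a model's accuracy across demographic groups instead of
# reporting a single overall number. Records are synthetic.
from collections import defaultdict

def accuracy_by_group(records):
    """Each record is (group, true_label, predicted_label)."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, truth, pred in records:
        total[group] += 1
        if truth == pred:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Hypothetical predictions from a diagnostic model.
records = [
    ("men", 1, 1), ("men", 0, 0), ("men", 1, 1), ("men", 0, 0),
    ("women", 1, 0), ("women", 0, 0), ("women", 1, 1), ("women", 1, 0),
]
print(accuracy_by_group(records))  # {'men': 1.0, 'women': 0.5}
```

An overall accuracy of 75% would hide the fact that, in this toy data, the model is right every time for one group and only half the time for the other.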

So, why do AI models perform differently for different users? Trang explains that it’s because these models are often trained on a limited amount of data, which may not accurately represent the entire population. Additionally, AI models rely on algorithms that can reflect human biases, whether consciously or unconsciously. This means that if the data sets are biased, the algorithm will learn and replicate those biases, leading to unequal and potentially dangerous outcomes.
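The mechanism is easy to demonstrate. In this toy sketch (all numbers are synthetic), a simple threshold "model" is fit on data dominated by one group; because the two groups' feature distributions differ, the learned cutoff systematically misclassifies the underrepresented group:

```python
# A threshold "model" fit on data dominated by one group inherits that
# group's statistics and misclassifies the underrepresented group.
def fit_threshold(samples):
    """Learn a cutoff as the midpoint between the two class means."""
    pos = [x for x, y in samples if y == 1]
    neg = [x for x, y in samples if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def predict(threshold, x):
    return 1 if x >= threshold else 0

# Group A: healthy readings ~1.0-1.2, sick ~2.9-3.1 (well represented).
# Group B: healthy ~2.5, sick ~4.5 (barely represented).
group_a = [(1.0, 0), (1.2, 0), (2.9, 1), (3.1, 1)] * 10
group_b = [(2.5, 0), (4.5, 1)]

threshold = fit_threshold(group_a + group_b)  # lands near 2.1
print(predict(threshold, 2.5))  # 1: a healthy group-B case flagged as sick
print(predict(threshold, 1.0))  # 0: a healthy group-A case handled correctly
```

The model never "intended" to discriminate; it simply learned the statistics of the data it was given, which is exactly the failure mode described above.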

Fortunately, there are steps being taken to address this issue. Trang’s research has been instrumental in raising awareness and pushing for more diverse and inclusive data sets, as well as accountability for AI developers and companies. The Black Lives Matter movement has also shed light on the urgent need for more ethical and responsible AI development.

In addition, some organizations have taken it upon themselves to audit their AI systems for bias and work towards mitigating it. For example, Google has announced a plan to reduce the use of gender and race data in its AI systems to prevent biases in its products. Similarly, Amazon has introduced a new tool that can detect biased language in job postings and suggest more inclusive alternatives.
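One widely used audit of this kind, drawn from hiring analytics rather than from any specific company’s tooling, is the "four-fifths rule": a group whose selection rate falls below 80% of the highest group’s rate is flagged for review. A minimal sketch with made-up decisions:

```python
# The "four-fifths rule" bias check: flag any group whose selection
# rate is below 80% of the best-treated group's rate. Data is synthetic.
def selection_rates(decisions):
    """decisions: dict of group -> list of 0/1 selection outcomes."""
    return {g: sum(d) / len(d) for g, d in decisions.items()}

def four_fifths_flags(decisions, ratio=0.8):
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: rate / best < ratio for g, rate in rates.items()}

decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1, 1, 1],  # 80% selected
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0, 0, 0],  # 30% selected
}
print(four_fifths_flags(decisions))  # {'group_a': False, 'group_b': True}
```

Checks like this do not explain why a disparity exists, but they give auditors a concrete, repeatable signal that a system deserves closer scrutiny.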

But there is still a long way to go. As AI continues to expand its reach and impact, it is crucial to address the issue of performance variance based on user diversity. This requires collaboration between AI experts, data scientists, ethicists, and policymakers to develop fair and unbiased AI systems. It also requires a change in our mindset, where diversity and inclusion are given the same importance as technical accuracy and efficiency.

In conclusion, Brittany Trang’s research highlights the need for a more holistic approach to AI development, where diversity and inclusion are built-in from the start. As we continue to rely on AI for critical decision-making, we must strive for fairness, accountability, and transparency in its development and use. With the right efforts and collaboration, we can pave the way for a more equitable and ethical future for AI.
