Wednesday, March 18, 2026

STAT+: Better ways to test AI models for health care, according to one Harvard researcher

In the world of artificial intelligence, the safety and security of machine learning models (LLMs) is a growing concern. As these models become more advanced and integrated into our daily lives, the potential for vulnerabilities and malicious attacks also increases. However, one researcher is taking a proactive approach to addressing this issue. Danielle Bitterman, a leading expert in AI security, is on a mission to find and fix vulnerabilities in LLMs to make them safer for everyone. In this edition of the AI Prognosis newsletter, we take a closer look at Bitterman’s work and how it is shaping the future of AI.

Bitterman’s interest in AI security began during her graduate studies in computer science. She was fascinated by the potential of AI and its ability to revolutionize various industries. However, as she delved deeper into the subject, she realized that the security of these systems was often overlooked. This realization sparked her passion for finding vulnerabilities in LLMs and making them more secure.

One of the main challenges in securing LLMs is their complexity. These models are trained on vast amounts of data and use complex algorithms to make decisions. This makes it difficult to identify potential vulnerabilities and even harder to fix them. However, Bitterman has developed innovative techniques to tackle this issue. She uses a combination of manual analysis and automated tools to identify vulnerabilities in LLMs. This approach allows her to cover a wide range of models and detect even the most subtle vulnerabilities.

Bitterman’s work has already made a significant impact in the field of AI security. She has identified and reported several vulnerabilities in popular LLMs, including those used in self-driving cars and medical diagnosis systems. Her findings have not only helped to make these systems safer but have also raised awareness about the importance of AI security.

One of the most notable contributions of Bitterman’s research is her development of a vulnerability scoring system for LLMs. This system assigns a score to each vulnerability based on its severity and potential impact. This allows developers to prioritize and address the most critical vulnerabilities first, making the process of securing LLMs more efficient and effective.

Bitterman’s work has not gone unnoticed in the AI community. She has received numerous awards and recognition for her contributions to the field of AI security. She has also been invited to speak at various conferences and events, where she shares her knowledge and insights on securing LLMs.

In addition to her research, Bitterman is also actively involved in educating the public about AI security. She believes that awareness and understanding of the potential risks associated with AI are crucial in ensuring the safe and responsible development of these systems. She regularly publishes articles and gives interviews to media outlets, spreading awareness about the importance of AI security.

Bitterman’s work is not without its challenges. As AI technology continues to advance, so do the techniques used by malicious actors to exploit vulnerabilities. However, Bitterman remains determined and continues to push the boundaries of AI security. She believes that with the right approach and collaboration between researchers and developers, we can make LLMs safer and more secure.

In conclusion, Danielle Bitterman’s work on finding vulnerabilities in LLMs is a crucial step towards ensuring the safety and security of AI systems. Her innovative techniques and dedication to this field have already made a significant impact, and her contributions will continue to shape the future of AI. As we move towards a more AI-driven world, it is essential to have researchers like Bitterman who are committed to making these systems safer for everyone. We look forward to seeing what she will achieve in the future and how her work will continue to shape the AI landscape.

most popular