Detect Lung Disease with AI Breath Test

plus: Who's Liable for AI Medical Errors?

Happy Friday! It’s March 28th.

Dartmouth just wrapped up the first clinical trial of a therapy chatbot. Their AI-powered "Therabot" helped people with depression and anxiety as effectively as traditional therapy.

I’ve come across plenty of wellness chatbots before, but this is the first time I’ve seen one put through a real clinical trial. Are there even regulatory frameworks for this yet?

Our picks for the week:

  • Featured Research: Detect Lung Disease with AI Breath Test

  • Perspectives: Who's Liable for AI Medical Errors?

  • Product Pipeline: Smart Monitoring from Muscle Data

  • Policy & Ethics: EU Debates AI Regulation Limits

Read Time: 4.5 minutes

FEATURED RESEARCH

Early Silicosis Detection Now Possible with AI-Powered Breath Test

Illustration of three construction workers wearing hard hats and safety gear, engaged in discussion.

Silicosis, an incurable lung disease caused by inhaling silica dust, is affecting more and more construction workers, stonemasons, and tunnelers. Traditional methods like X-rays and CT scans only catch the disease after irreversible lung damage has occurred, making early detection nearly impossible.

Breath tests and AI are the answer: Researchers at the University of New South Wales (UNSW) have developed an AI-powered breath test that detects silicosis quickly and accurately. The test identifies the disease by analyzing hundreds of volatile organic compounds in exhaled breath.

This method stands out because it uses highly sensitive mass spectrometry combined with interpretable AI models.

Researchers were able to distinguish between silicosis patients and healthy individuals with over 90% accuracy, a big improvement over current tests.

Why it matters: The breath test takes under 5 minutes and no special patient preparation is required.

By detecting silicosis before lung damage occurs, affected workers can be removed from silica exposure earlier and potentially stop the disease in its tracks.

Next steps: While promising, the test still needs to be validated through larger trials and real-world clinical testing.

Researchers plan to refine the AI model and expand its use in workplaces, potentially enabling routine screening programs.

Early detection through breath analysis could become a valuable tool in occupational health, offering construction workers a chance for intervention before the disease takes hold.

For more details: Full Article

Brain Booster

Which of these is a legit reason construction workers are sometimes encouraged to groom facial hair carefully if they wear respirators?


Select the right answer! (See explanation below)

Opinion and Perspectives

AI LIABILITY

Who Should Be Responsible When AI Causes Medical Mistakes?

AI is entering medicine as a tool to reduce errors and ease physician workloads. But new research from Johns Hopkins and the University of Texas warns that this rapid adoption is creating a new burden: doctors get blamed when AI makes mistakes.

Unrealistic expectations: A recent JAMA Health Forum brief highlights the problem: physicians are being held to impossible standards when using AI.

One of the study’s findings is that people blame doctors more for medical errors made with AI than with a human colleague, even when it’s clear the AI is at fault.

Shefali Patil, lead author and professor at the University of Texas, put it bluntly: “AI was supposed to ease the burden, but instead, it’s shifting liability onto physicians, forcing them to flawlessly interpret technology even the creators can’t fully explain.”

The hidden cost of relying on AI: This expectation creates immense pressure. Physicians must decide when to trust or override AI without guidelines, increasing their risk of burnout and potentially more medical errors.

The authors likened this to expecting pilots to build their own planes mid-flight, a scenario that’s unrealistic and risky.

Rethinking the approach: To help doctors manage this pressure, the researchers suggest healthcare organizations establish clear rules and structured guidelines for AI use, along with ongoing training and support.

They say collective responsibility, not individual blame, is the way to safely integrate AI into clinical practice.

For more details: Full Article

Top Funded Healthcare Companies

For more startup funding, read our latest February Report.

Product Pipeline

WEARABLE HEALTH

Wearable Devices Expands AI Muscle Signal Platform into Predictive Health Monitoring

Wearable Devices is taking its AI-based bio-signal platform, LMM, beyond wearables into the health and wellness space.

Originally built for gesture control in extended reality, the tech is now being used for predictive health monitoring and cognitive state tracking to detect fatigue, stress, and early signs of illness.

Unlike traditional biosensors, LMM learns from muscle activity in real time, turning micro-movements into insights.

As the company opens LMM to partners in health and enterprise, it's building the foundation for AI-powered physiological monitoring that's personal, proactive, and scalable.

For more details: Full Article

Policy and Ethics

EU AI ACT

Europe Pushes Back on Big Tech Efforts to Soften AI Regulation

As the EU wraps up its AI Act, lawmakers are warning against watering down key provisions.

The European Commission is considering making parts of the law voluntary for big models like GPT-4 and Gemini after intense lobbying from Big Tech and US officials.

Meta called the rules “unworkable,” while US VP JD Vance complained about “ideological bias” in regulation.

Lawmakers say softening these rules would enable election interference and disinformation. In a letter to the Commission, they warned it would “deeply disrupt the European economy and democracy”.

They’re urging Brussels to keep transparency and accountability at the heart of AI governance as these systems get more powerful.

For more details: Full Article

Byte-Sized Break

📢 Three Things AI Did This Week

  • China’s DeepSeek released its upgraded V3 model, DeepSeek-V3-0324, on Hugging Face with improved reasoning and coding capabilities, intensifying its AI rivalry with OpenAI and Anthropic. [Link]

  • Musician and tech entrepreneur Will.i.am has partnered with LG and Qualcomm to advance audio technology and AI integration. He emphasizes the need for responsible AI usage, advocating for an "AI Constitution" to protect user data. [Link]

  • North Korea just rolled out what it says is an AI-powered suicide drone and its first airborne early-warning system—marking a major tech upgrade as it deepens military ties with Russia and continues deploying troops to support Russia's war in Ukraine. [Link]

Have a Great Weekend!

❤️ Help us create something you'll love—tell us what matters!

💬 We read all of your replies, comments, and questions.

👉 See you all next week! - Bauris

Trivia Answer: C) Facial hair can prevent a proper mask seal

It’s true! Facial hair can mess with the seal on respirators like N95s, making them way less effective at filtering out harmful dust like silica. That’s why some workplaces have grooming guidelines for people who need tight-fitting masks. Safety over style, folks!

How did we do this week?

