
The AI Solution Closing the Racial Gap in Glaucoma Detection

plus: Trump Axes Biden-Era AI Safeguards—What’s Next for Healthcare Tech?

Happy Friday! It’s January 24th.

It has been a big week for healthcare since Donald Trump's inauguration, with executive orders reshaping Medicare, withdrawing the U.S. from the WHO, and a newly announced $500 billion AI initiative. These moves could drive innovation, but they leave an uncertain future for global health collaboration and policy stability.

Our picks for the week:

  • Featured Research: The AI Solution Closing the Racial Gap in Glaucoma Detection

  • Perspectives: Why Most AI Tools in Primary Care Can't Be Trusted Yet

  • Product Pipeline: FDA Clears First-Ever Blood Test for Both Bacterial and Viral Infections

  • Policy & Ethics: Trump Axes Biden-Era AI Safeguards—What’s Next for Healthcare Tech?

FEATURED RESEARCH

The AI Solution Closing the Racial Gap in Glaucoma Detection


Glaucoma, a leading cause of blindness, affects 80 million people worldwide and disproportionately burdens racial and ethnic minorities. Black and Hispanic patients are 4.4 and 2.5 times more likely, respectively, to have undiagnosed and untreated glaucoma than White patients.

Traditional screening methods miss early signs, and existing AI tools don't perform equally well across demographic groups.

A step forward with Fair Identity Normalization (FIN): Researchers at the Harvard Ophthalmology AI Lab have developed a new AI module, Fair Identity Normalization (FIN), to address the inequities in glaucoma screening.

FIN adjusts how the model represents retinal images so that performance is comparable across racial, ethnic, and gender groups. Tested on over 7,000 retinal scans, FIN raised diagnostic performance (AUC) for Black patients from 0.77 to 0.82 and for Hispanic patients from 0.74 to 0.79.
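The core idea, learning identity-group-specific statistics and normalizing the model's internal features toward a shared distribution, can be sketched in a few lines of PyTorch. The class name, tensor shapes, and exact normalization form below are illustrative assumptions, not the authors' code:

```python
import torch
import torch.nn as nn

class GroupFeatureNorm(nn.Module):
    """Toy sketch of per-group feature normalization in the spirit of FIN.

    Learns a mean and scale for each identity group and maps every
    group's features toward a common distribution, so the downstream
    glaucoma classifier sees comparable inputs for all groups.
    """

    def __init__(self, num_groups: int, feat_dim: int, eps: float = 1e-5):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(num_groups, feat_dim))
        self.log_sigma = nn.Parameter(torch.zeros(num_groups, feat_dim))
        self.eps = eps

    def forward(self, feats: torch.Tensor, group_ids: torch.Tensor) -> torch.Tensor:
        # feats: (batch, feat_dim) features from a retinal-scan encoder
        # group_ids: (batch,) integer identity-group labels
        mu = self.mu[group_ids]                  # per-sample group mean
        sigma = self.log_sigma[group_ids].exp()  # per-sample group scale
        return (feats - mu) / (sigma + self.eps)

# Example: normalize 512-d encoder features for 3 identity groups.
fin = GroupFeatureNorm(num_groups=3, feat_dim=512)
feats = torch.randn(8, 512)
groups = torch.randint(0, 3, (8,))
normed = fin(feats, groups)  # then fed to the glaucoma classifier head
```

In the real system, a module like this would sit between the image encoder and the classifier head and be trained jointly with both.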

Why this matters: FIN tackles long-standing inequities in healthcare by making sure AI diagnostics work equally well for everyone, improving accuracy across racial, ethnic, and gender groups while narrowing disparities.

Unlike traditional methods, FIN gets to the root of the problem rather than applying band-aids.

For underserved communities, this represents a meaningful step toward fair and reliable care.

For more details: Full Article

Brain Booster

Which Monday, occurring on the third Monday of January, is often referred to as the “most depressing day of the year”?

(See the answer and explanation below.)

Opinion and Perspectives

HEALTHCARE TRANSPARENCY

Why Most AI Tools in Primary Care Can't Be Trusted Yet

A recent systematic review in JAMA Network Open highlighted troubling gaps in the evidence supporting 43 predictive machine learning algorithms used in primary care.

Most of these tools, which target chronic conditions like cardiovascular disease and diabetes, lack robust evidence of their quality and impact.

Transparency matters: The study found that only 28% of the algorithms fulfilled roughly half of the quality criteria outlined in the Dutch AIPA guideline, a framework for assessing AI tools in healthcare.

Evidence was strongest in the development phase (46%) and weakest in the preparation (19%) and impact-assessment (30%) phases.

Tools from peer-reviewed studies had more publicly available evidence than those listed in databases of FDA-cleared or CE-marked products.

The regulatory angle: Dr. María Villalobos-Quesada, one of the study’s authors, says we need better reporting across the AI lifecycle.

The EU’s AI Act could be a step forward, since it requires high-risk AI systems to be registered in standardized databases. But how that will play out in practice remains to be seen.

Why this matters: Without consistent evidence, clinicians are left to assess AI tools using limited or manufacturer-supplied data.

That’s a risk to patient safety and trust. Improved transparency and adherence to frameworks like the Dutch AIPA guideline are critical for ensuring that AI tools fulfill their promise in primary care.

For more details: Full Article

Top Funded Startups

For more startup funding, read our latest December Report.

Product Pipeline

INFECTIOUS DISEASE DETECTION

FDA Clears First-Ever Blood Test for Both Bacterial and Viral Infections

The FDA-cleared TriVerity™ Test by Inflammatix is the first and only molecular blood test that identifies both bacterial and viral infections and assesses the need for critical care.

Using machine learning to analyze 29 RNA biomarkers, it provides actionable insights within hours, helping emergency departments manage acute infections more effectively.
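Inflammatix has not published the underlying model, so the snippet below is only a toy sketch of the general pattern a test like this follows: train a classifier on a panel of 29 RNA biomarker expression values and report a likelihood score rather than a hard yes/no. The data, labels, and choice of logistic regression here are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy stand-in data: expression levels for 29 RNA biomarkers per patient.
# The real TriVerity panel, labels, and model weights are proprietary.
X_train = rng.normal(size=(200, 29))        # 200 patients x 29 markers
y_bacterial = rng.integers(0, 2, size=200)  # 1 = confirmed bacterial infection

model = LogisticRegression(max_iter=1000).fit(X_train, y_bacterial)

# For a new patient, report a probability-like score. The actual test
# returns separate scores for bacterial infection, viral infection, and
# illness severity, leaving the clinical decision to the physician.
new_patient = rng.normal(size=(1, 29))
print(f"Bacterial likelihood: {model.predict_proba(new_patient)[0, 1]:.2f}")
```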

Data from the SEPSIS-SHIELD study, which enrolled 1,222 patients across 22 sites, confirmed its high diagnostic accuracy across diverse populations.

TriVerity reduces unnecessary admissions, eases overcrowding, and ensures timely care for patients at risk of severe illness, addressing a critical gap in emergency care management.

For more details: Full Article

Policy and Ethics

AI SAFETY

Trump Axes Biden-Era AI Safeguards—What’s Next for Healthcare Tech?

In one of his first acts as President, Trump rescinded an October 2023 executive order that set safety guardrails for the use of AI in healthcare and other sectors.

The Biden order had tasked the U.S. Department of Health and Human Services (HHS) with monitoring unsafe AI healthcare practices and the National Institute of Standards and Technology (NIST) with creating an AI Safety Institute to develop standards, including red-team testing and bias mitigation.

The fate of NIST's efforts is now unknown. Instead, Trump is backing $500 billion in private AI infrastructure investment and a shift from government oversight to industry-led innovation.

Critics worry about risks from unchecked AI development.

For more details: Full Article

Byte-Sized Break

📢 Three Things AI Did This Week

  • Apple has paused its AI-generated news summary notifications due to widespread inaccuracies and backlash. [Link]

  • At the World Economic Forum, Pope Francis cautioned against AI worsening the "crisis of truth," especially in healthcare. He urged ethical oversight to prevent misuse and protect trust in medical AI advancements. [Link]

  • Chinese AI startup DeepSeek unveiled its open-source R1 model, rivaling OpenAI's o1 in reasoning capabilities at a fraction of the cost, highlighting China's rapid progress in AI. [Link]

Have a Great Weekend!

❤️ Help us create something you'll love—tell us what matters!

💬 We read all of your replies, comments, and questions.

👉 See you all next week! - Bauris

Trivia Answer: C) Blue Monday

Blue Monday, dubbed the “most depressing day of the year,” owes its reputation to a mix of post-holiday blues, cold and dreary weather, mounting holiday debt, and the realization that those New Year’s resolutions are already toast. Even though it’s a marketing invention from 2005 with no scientific basis, the name stuck because, let’s be honest, January can be rough!

How did we do this week?
