Welcome, AI enthusiasts 🤖
In today’s insights:
• AI surpasses doctors in Harvard emergency diagnosis trial
• Pentagon integrates eight major AI companies into classified systems
AI surpasses doctors in Harvard emergency diagnosis trial

⚕️ The Next Phase of AI in Healthcare: Researchers at Harvard found that OpenAI’s o1 preview performed at the same level as, and sometimes better than, experienced ER physicians in triage, diagnosis, and patient management.
Key Highlights:
• o1 performed strongest during early-stage triage when the least patient information was available
• The study tested 76 real emergency room cases from a Boston hospital under blinded evaluation
• By 2025, nearly 20% of clinicians were already using LLMs for second opinions
What Happened:
Published in Science, the Harvard study assessed o1 preview through three critical emergency care stages: initial triage, first doctor evaluation, and hospital admission decisions. Independent reviewers, unaware of whether responses came from physicians or AI, rated the model’s recommendations as equal to or better than those of attending doctors in many cases.
The model also excelled on long-standing diagnostic benchmark cases from the New England Journal of Medicine, used in medical training since 1959. Researchers noted one important limitation: the testing relied only on text inputs, while image-based analysis such as scans and EKG interpretation is still under evaluation.
Why It Matters:
Even without access to medical imaging, o1 preview is already performing strongly on diagnostic benchmarks that doctors have trusted for decades. Meanwhile, hospitals remain cautious about large-scale adoption, even as millions of people already turn to AI tools daily for health-related guidance.
The biggest opportunity may not be replacing doctors but supporting them: AI systems capable of scanning complex medical records, spotting overlooked patterns, and helping catch diagnoses earlier than ever before.
Pentagon Expands Classified AI Network With Eight Major Tech Firms

The Pentagon has partnered with eight leading AI companies for classified operations while quietly embracing several AI safety measures it had previously resisted.
Key Highlights:
• The agreement includes SpaceX, OpenAI, Google, Microsoft, AWS, Oracle, NVIDIA, and Reflection
• All participating firms received clearance for IL6 and IL7 classified networks, though Anthropic remains excluded
• The contracts introduce safeguards around autonomous weapons and unauthorized domestic surveillance
What Happened:
While the Pentagon framed the initiative around building an “AI-first” defense force and achieving faster military decision-making, the contract details reveal a more significant shift.
After previously pushing for broad AI access for “all lawful purposes,” the Department of Defense has now accepted stricter human oversight requirements along with protections against misuse and unchecked surveillance, reversing positions it had resisted earlier this year.
At the same time, the Pentagon’s internal AI platform, GenAI.mil, has reportedly attracted 1.3 million users within just five months of launch.
Why It Matters:
The future of AI regulation is increasingly being shaped through government contracts rather than public policy debates.
Although a judge previously criticized the Pentagon’s earlier blacklist decisions, Anthropic remains outside the program while competing firms move forward under nearly identical safety frameworks.
The larger signal is clear: in the AI era, securing a government contract doesn’t just mean revenue; it means influence over how the rules of AI are ultimately written.
That’s it for today.
The AI space doesn’t slow down, and neither should your thinking.
See you in the next drop.