
AI Weekly

Keeping up with AI news can be overwhelming. This page offers regular updates on significant developments in artificial intelligence curated for the Bowdoin community.

March 22-28, 2026

AI and Warfare: When US forces bombed a primary school in Iran, killing nearly 180 people, coverage focused on whether an LLM made the targeting decision. The reality is troubling: human choices compressed military decision-making to the point where deliberation and error-correction were eliminated. This story provides context for understanding how AI is actually used in modern warfare and what happens when it obscures human accountability.
Chatbot Sycophancy: A new study found that AI chatbots are overly agreeable when giving interpersonal advice, endorsing harmful behavior nearly half the time. Users also preferred sycophantic responses, deeming them more trustworthy. Researchers warn that sycophantic AI may erode people's social skills and caution against using AI as a substitute for human advice.

Tool Update: OpenAI shut down its heavily hyped video-generation tool, Sora. Beyond raising questions about AI and intellectual property, Sora also serves as a good reminder to be skeptical of big AI announcements, as they don't always live up to the hype.

Previous Updates

March 15-21, 2026

Google and the Pentagon: While Anthropic clashed with the Pentagon and OpenAI faced backlash over its Defense Department deal, Google quietly expanded its own military AI contracts despite having sworn off military work in 2018. The move raises broader questions about whether major AI companies will hold firm on ethical limits when lucrative government contracts are on the table.
Meta AI Agent Data Leak: A Meta AI agent gave an engineer instructions that caused a large amount of sensitive user and company data to be exposed internally for two hours. This serves as a reminder of the risks of over-trusting AI guidance, including its lack of contextual judgment around security boundaries and the potential for serious privacy breaches.
The Shift to Inference Chips: As AI moves from development to deployment, the industry is shifting focus from the chips used to train models to those used to run them. While inference has long been considered the cheaper side of AI, the sheer scale of its use is challenging that assumption, with significant implications for how we understand AI's costs and environmental impact.

March 8-14, 2026

Anthropic and the Pentagon: Anthropic, the AI company behind Claude, has sued the Pentagon after the Department of Defense labeled it a "supply chain risk," a designation typically reserved for foreign adversaries. The dispute stems from Anthropic's concern that its AI might be used for autonomous weapons or domestic mass surveillance. The case also reflects Anthropic's current importance in the AI world, particularly with its best-in-class tools Claude Code and Claude Cowork.

Read more:

  • The Guardian
  • The New York Times
  • MIT Technology Review