Trader sentiment on an Anthropic-Pentagon deal hinges on the fallout from the July 2025 $200 million Department of Defense contract, which collapsed in early 2026 after Anthropic refused to lift AI safety guardrails for military applications such as classified analysis and potential weapons systems. The Trump administration's February directive banned federal use of Anthropic's Claude models, followed by a March supply chain risk designation that prompted Anthropic's lawsuit against the Pentagon. Competitors such as OpenAI and Google have secured expanded AI pacts, filling the gap amid the DoD's push for unrestricted access. Key catalysts include lawsuit proceedings and possible backchannel talks, though Anthropic's safety-first stance signals low near-term resolution odds.
Experimental AI-generated summary referencing Polymarket data. This is not trading advice and plays no role in how this market resolves. · Updated
April 30: 2%
May 31: 30%
June 30: 50%
$124,315 Vol.
This market will resolve to “Yes” if Anthropic and the United States Department of Defense (DOD/Department of War) reach any commercial agreement to allow for the use of Claude or other Anthropic artificial intelligence models by DOD employees by April 30, 2026, 11:59 PM ET. Otherwise, this market will resolve to “No”.
A commercial agreement between Anthropic and a broader set of the US government that grants usage of Anthropic AI models to DOD employees will count. However, agreements or designations that merely allow Anthropic to offer its services to the DOD, without constituting an effective agreement to do so, will not count (e.g. the inclusion of Anthropic on a Master Service Agreement or an Indefinite Delivery/Indefinite Quantity contract would not count).
An official announcement of a qualifying agreement, made within this market’s timeframe, will count, regardless of whether or when the agreement actually goes into effect.
Official announcements that the previously agreed contract between Anthropic and the DOD will be fully or partially reinstated, or otherwise will continue without impediment, will count, so long as this includes extended use of Anthropic AI models by DOD employees beyond any designated phase-out period.
Continued use of Anthropic technologies by DOD employees without a qualifying agreement (e.g. during a 6 month phase-out period) will not count. A court ruling that the designation of Anthropic as a supply chain risk is unlawful will not qualify for a “Yes” resolution unless it is accompanied by a reinstatement of Anthropic's DOD contract or a new qualifying Anthropic-DOD agreement.
The primary resolution sources for this market will be official information from Anthropic and the United States federal government; however, a consensus of credible reporting will also be used.
Market Opened: Mar 6, 2026, 1:33 PM ET
Resolver
0x65070BE91...