Hoosier Huddle


What's the beef the Pentagon has with Anthropic's Claude AI?

Page 2 / 2
BradStevens
(@bradstevens)
Famed Member

Posted by: @aloha-hoosier

The Pentagon's and Trump administration's beef with Anthropic AI is hard to understand. Apparently Anthropic insisted on contract language that would prevent Claude (their AI system) from being used for mass surveillance of Americans or in fully autonomous weapons, and that would keep human decision-making in the loop. Sounds reasonable, but the Pentagon has insisted on no limitations on its use. This somehow got Anthropic the "woke" label, for reasons I don't understand, and the administration is terminating all contracts with Anthropic.

Anthropic ‘cannot in good conscience accede’ to Pentagon’s demands, CEO says

Almost immediately after, OpenAI gets the contract and apparently has the same safety guidelines in its contract that Anthropic insisted on. What changed? It's certainly not clear. The guidelines are: "OpenAI technology cannot be used for mass domestic surveillance, to direct autonomous weapons systems, or for any high-stakes automated decisions." Sounds about the same. Also, if those guidelines aren't enforced, AI could be scary for military uses.

OpenAI details layered protections in US defense department pact | Reuters

OpenAI announces new deal with Pentagon — including ethical safeguards - POLITICO

This is more complicated than I feel qualified to weigh in on. On one hand, I admire Anthropic's pushback and don't believe our country was founded on the premise that the government should be able to track and monitor its people. That being said, if we had allowed such monitoring, one wonders how many casualties from violent attacks could have been prevented. Just the other week in B.C., the trans shooter was discovered to have searched some huge "red flag" topics in ChatGPT. There have been countless other cases where it was revealed that the attackers had made Google searches, Facebook posts, etc., that should have been flagged as dangerous.

If we refuse to return to the institutionalization policies of the '60s and earlier, perhaps we need such monitoring instead. Obviously, that opens up a huge risk of politicization, with Reps monitoring and interfering with Dems, and vice versa. That's the part that I struggle with.

 


GIF
Posted : 03/02/2026 7:56 pm
👍
1
JDB
(@jdb)
Noble Member

Posted by: @bradstevens

GIF
I try and forget Tom Cruise movies. That one wasn't too bad.

 


Posted : 03/02/2026 11:57 pm
All4You
(@all4you)
Noble Member

Posted by: @jdb

I try and forget Tom Cruise movies

GIF


A good friend will bail you out of jail, but your best friend will be sitting next to you in the cell saying "that was f***ing awesome"

Posted : 03/03/2026 12:04 pm
😂
1
CarRamRod
(@carramrod)
Noble Member

Posted by: @hurryinghoosiers

Wonder if use of AI will be something that countries end up agreeing to ban in the future.

Posted by: @carramrod

LOL! Why not the automobile or the internet while you're at it?

Posted by: @hurryinghoosiers

How is that even comparable to using AI to determine if an attack should be made that has potential to kill humans?

Are you really struggling to understand the difference between that and your asinine comparisons?

 

You know what doesn't have any decision-making ability? A land mine. It doesn't care if you're a kid playing soccer or a Humvee. You know what has the potential to kill innocent humans? A GI strolling through a village in Vietnam who just saw his best friend get killed, hasn't slept in days, and hasn't had sex in a year.

There's every reason to believe that autonomous weapons will actually reduce the civilian toll of war.

If you want to say that human sign-off should still be required to launch something like a nuke, I could see how that probably makes sense. But AI in weapons systems is here. We're watching it in Iran right now, and it's far more discriminating than the bombing of Dresden or Tokyo, for instance.

 


Posted : 03/03/2026 12:26 pm
👍
3
BradStevens
(@bradstevens)
Famed Member

Posted by: @carramrod

 

Really good points.  

Now, stop wasting your time with Hickory.  

 


Posted : 03/03/2026 8:33 pm
👍
2