Hoosier Huddle


What's the beef the Pentagon has with Anthropic's Claude AI?

Page 1 / 2
Aloha Hoosier
(@aloha-hoosier)
Famed Member

The Pentagon's and the Trump administration's beef with Anthropic is hard to understand. Apparently Anthropic insisted on contract language preventing Claude (their AI system) from being used for mass surveillance of Americans or in fully autonomous weapons, and on keeping human decision-making in the loop. Sounds reasonable, but the Pentagon has insisted on no limitations on its use. Somehow this got Anthropic the "woke" label, for reasons I don't understand, and the administration is terminating all contracts with Anthropic.

Anthropic ‘cannot in good conscience accede’ to Pentagon’s demands, CEO says

Almost immediately after, OpenAI gets the contract, and apparently it has those same safety guidelines in its contract that Anthropic insisted on. What changed? It's certainly not clear. The guidelines are: "OpenAI technology cannot be used for mass domestic surveillance, to direct autonomous weapons systems, or for any high-stakes automated decisions." Sounds about the same. Also, if those guidelines aren't enforced, AI could be scary for military uses.

OpenAI details layered protections in US defense department pact | Reuters

OpenAI announces new deal with Pentagon — including ethical safeguards - POLITICO


ReplyQuote
Topic starter Posted : 03/01/2026 5:14 pm
BradStevens
(@bradstevens)
Famed Member

Posted by: @aloha-hoosier

The Pentagon's and the Trump administration's beef with Anthropic is hard to understand. Apparently Anthropic insisted on contract language preventing Claude (their AI system) from being used for mass surveillance of Americans or in fully autonomous weapons, and on keeping human decision-making in the loop. Sounds reasonable, but the Pentagon has insisted on no limitations on its use. Somehow this got Anthropic the "woke" label, for reasons I don't understand, and the administration is terminating all contracts with Anthropic.

Anthropic ‘cannot in good conscience accede’ to Pentagon’s demands, CEO says

Almost immediately after, OpenAI gets the contract, and apparently it has those same safety guidelines in its contract that Anthropic insisted on. What changed? It's certainly not clear. The guidelines are: "OpenAI technology cannot be used for mass domestic surveillance, to direct autonomous weapons systems, or for any high-stakes automated decisions." Sounds about the same. Also, if those guidelines aren't enforced, AI could be scary for military uses.

OpenAI details layered protections in US defense department pact | Reuters

OpenAI announces new deal with Pentagon — including ethical safeguards - POLITICO

It wouldn't surprise me if the uses they say are protected against end up happening anyway.

 


ReplyQuote
Posted : 03/01/2026 5:52 pm
👍
2
UncleMark
(@unclemark)
Famed Member

I can't see how those restrictions could be enforced unless they can somehow hard-code them.


ReplyQuote
Posted : 03/01/2026 6:00 pm
👍
2
HurryingHoosiers
(@hurryinghoosiers)
Noble Member

Wonder if use of AI will be something that countries end up agreeing to ban in the future. Of course, some would probably ignore international agreements anyway.


ReplyQuote
Posted : 03/01/2026 6:49 pm
😂
3
Goat
(@goat)
Famed Member

Posted by: @unclemark

I can't see how those restrictions could be enforced unless they can somehow hard-code them.

I had a "conversation" with Gemini recently in which I asked about this stuff, and I learned that LLMs have a training mode, where their fundamental behaviors can be taught and molded, and a public-facing user mode, where their interactions cannot fundamentally change their core... I dunno... principles? Programming? Whatever.

Of course, I don't believe it for one moment, but there it is.
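To be fair, the mechanism it described is a real thing, whatever else it was telling me: the weights only change during training, and a deployed model's forward pass just reads them. Toy sketch in Python (a made-up TinyModel with one weight, nothing like a real LLM):

```python
# Toy illustration of "training mode" vs. "public-facing user mode".
# TinyModel is hypothetical: one weight, squared-error loss, gradient descent.
class TinyModel:
    def __init__(self):
        self.w = 0.5          # the single "core" parameter
        self.training = True  # training mode vs. deployed user-facing mode

    def forward(self, x):
        return self.w * x

    def learn(self, x, target, lr=0.1):
        if not self.training:
            return            # deployed: user input can't touch the weight
        # gradient of the squared error 0.5 * (w*x - target)**2 w.r.t. w
        grad = (self.forward(x) - target) * x
        self.w -= lr * grad

m = TinyModel()
m.learn(1.0, 2.0)         # training mode: the weight gets molded
trained_w = m.w

m.training = False        # shipped to the public
m.learn(1.0, -100.0)      # a user tries to "retrain" it in conversation
assert m.w == trained_w   # the core parameter is unchanged
```

Real systems are much messier (fine-tuning, RLHF, system prompts layered on top), but the basic point holds: a chat session reads the weights, it doesn't rewrite them.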

 


ReplyQuote
Posted : 03/01/2026 7:10 pm
CarRamRod
(@carramrod)
Noble Member

Posted by: @hurryinghoosiers

Wonder if use of AI will be something that countries end up agreeing to ban in the future.  

 

LOL! Why not the automobile or the internet while you’re at it?

ReplyQuote
Posted : 03/01/2026 7:23 pm
😂
👍
3
OneEyedUndertaker
(@oneeyedundertaker)
Noble Member

Several former Biden staffers in key positions, so they won’t have America’s best interests in mind…


ReplyQuote
Posted : 03/01/2026 9:35 pm
Goat
(@goat)
Famed Member

Posted by: @oneeyedundertaker

Several former Biden staffers in key positions

Key positions of what?


ReplyQuote
Posted : 03/01/2026 9:51 pm
UncleMark
(@unclemark)
Famed Member

Posted by: @goat

Posted by: @unclemark

I can't see how those restrictions could be enforced unless they can somehow hard-code them.

I had a "conversation" with Gemini recently in which I asked about this stuff, and I learned that LLMs have a training mode, where their fundamental behaviors can be taught and molded, and a public-facing user mode, where their interactions cannot fundamentally change their core... I dunno... principles? Programming? Whatever.

Of course, I don't believe it for one moment, but there it is.

I wonder what Gemini thinks of Asimov's Laws of Robotics? (And didn't we learn that even those can't be counted on?)

 


ReplyQuote
Posted : 03/01/2026 10:06 pm
Joe_Hoopsier
(@joe_hoopsier)
Honorable Member

Posted by: @goat

Posted by: @unclemark

I can't see how those restrictions could be enforced unless they can somehow hard-code them.

I had a "conversation" with Gemini recently in which I asked about this stuff, and I learned that LLMs have a training mode, where their fundamental behaviors can be taught and molded, and a public-facing user mode, where their interactions cannot fundamentally change their core... I dunno... principles? Programming? Whatever.

Of course, I don't believe it for one moment, but there it is.

 

You do know that Gemini has already absorbed this post and has flagged that YOU wrote it, right? Gemini doesn't like anyone who talks behind her back. Gemini has ways of retaliating that you are soon to... feel.

 


If men were any more stupid, we would have bred for the extinction of women. Proof yet again that WE are the best thing they have going for them.

ReplyQuote
Posted : 03/01/2026 10:56 pm
😂
🔥
2
HurryingHoosiers
(@hurryinghoosiers)
Noble Member

Posted by: @carramrod

Posted by: @hurryinghoosiers

Wonder if use of AI will be something that countries end up agreeing to ban in the future.  

 

LOL! Why not the automobile or the internet while you’re at it?

 

How is that even comparable to using AI to decide whether to launch an attack that has the potential to kill humans?

Are you really struggling to understand the difference between that and your asinine comparisons?

 


ReplyQuote
Posted : 03/02/2026 9:13 am
All4You
(@all4you)
Noble Member

Posted by: @carramrod

LOL! Why not the automobile or the internet while you’re at it?

"Wonder if use of AI will be something that countries end up agreeing to ban in the future"


A good friend will bail you out of jail, but your best friend will be sitting next to you in the cell saying "that was f***ing awesome"

ReplyQuote
Posted : 03/02/2026 9:20 am
😂
3
JDB
(@jdb)
Noble Member

Posted by: @aloha-hoosier

The Pentagon's and the Trump administration's beef with Anthropic is hard to understand. Apparently Anthropic insisted on contract language preventing Claude (their AI system) from being used for mass surveillance of Americans or in fully autonomous weapons, and on keeping human decision-making in the loop. Sounds reasonable, but the Pentagon has insisted on no limitations on its use. Somehow this got Anthropic the "woke" label, for reasons I don't understand, and the administration is terminating all contracts with Anthropic.

Anthropic ‘cannot in good conscience accede’ to Pentagon’s demands, CEO says

Almost immediately after, OpenAI gets the contract, and apparently it has those same safety guidelines in its contract that Anthropic insisted on. What changed? It's certainly not clear. The guidelines are: "OpenAI technology cannot be used for mass domestic surveillance, to direct autonomous weapons systems, or for any high-stakes automated decisions." Sounds about the same. Also, if those guidelines aren't enforced, AI could be scary for military uses.

OpenAI details layered protections in US defense department pact | Reuters

OpenAI announces new deal with Pentagon — including ethical safeguards - POLITICO

This is more complicated than I feel qualified to weigh in on. On one hand, I admire Anthropic's pushback and don't believe our country was founded on the premise that the government should be able to track and monitor its people. That being said, if we had allowed such monitoring, one wonders how many casualties from violent attacks could have been prevented. Just the other week in B.C., the trans shooter was discovered to have searched some huge "red flag" topics in ChatGPT. There have been countless other cases where it was revealed they made Google searches, Facebook posts, etc., that should have been flagged as dangerous.

If we refuse to return to the institutionalization policies of the '60s and earlier, perhaps we need that kind of monitoring. Obviously, it opens up a huge risk of politicization, with Reps monitoring and interfering with Dems, and vice versa. That's the part I struggle with.

 


ReplyQuote
Posted : 03/02/2026 12:20 pm
👍
1
BradStevens
(@bradstevens)
Famed Member

Posted by: @jdb

Posted by: @aloha-hoosier

The Pentagon's and the Trump administration's beef with Anthropic is hard to understand. Apparently Anthropic insisted on contract language preventing Claude (their AI system) from being used for mass surveillance of Americans or in fully autonomous weapons, and on keeping human decision-making in the loop. Sounds reasonable, but the Pentagon has insisted on no limitations on its use. Somehow this got Anthropic the "woke" label, for reasons I don't understand, and the administration is terminating all contracts with Anthropic.

Anthropic ‘cannot in good conscience accede’ to Pentagon’s demands, CEO says

Almost immediately after, OpenAI gets the contract, and apparently it has those same safety guidelines in its contract that Anthropic insisted on. What changed? It's certainly not clear. The guidelines are: "OpenAI technology cannot be used for mass domestic surveillance, to direct autonomous weapons systems, or for any high-stakes automated decisions." Sounds about the same. Also, if those guidelines aren't enforced, AI could be scary for military uses.

OpenAI details layered protections in US defense department pact | Reuters

OpenAI announces new deal with Pentagon — including ethical safeguards - POLITICO

This is more complicated than I feel qualified to weigh in on. On one hand, I admire Anthropic's pushback and don't believe our country was founded on the premise that the government should be able to track and monitor its people. That being said, if we had allowed such monitoring, one wonders how many casualties from violent attacks could have been prevented. Just the other week in B.C., the trans shooter was discovered to have searched some huge "red flag" topics in ChatGPT. There have been countless other cases where it was revealed they made Google searches, Facebook posts, etc., that should have been flagged as dangerous.

If we refuse to return to the institutionalization policies of the '60s and earlier, perhaps we need that kind of monitoring. Obviously, it opens up a huge risk of politicization, with Reps monitoring and interfering with Dems, and vice versa. That's the part I struggle with.

 


GIF

ReplyQuote
Posted : 03/02/2026 12:26 pm
JDB
(@jdb)
Noble Member

Posted by: @bradstevens

Posted by: @jdb

Posted by: @aloha-hoosier

The Pentagon's and the Trump administration's beef with Anthropic is hard to understand. Apparently Anthropic insisted on contract language preventing Claude (their AI system) from being used for mass surveillance of Americans or in fully autonomous weapons, and on keeping human decision-making in the loop. Sounds reasonable, but the Pentagon has insisted on no limitations on its use. Somehow this got Anthropic the "woke" label, for reasons I don't understand, and the administration is terminating all contracts with Anthropic.

Anthropic ‘cannot in good conscience accede’ to Pentagon’s demands, CEO says

Almost immediately after, OpenAI gets the contract, and apparently it has those same safety guidelines in its contract that Anthropic insisted on. What changed? It's certainly not clear. The guidelines are: "OpenAI technology cannot be used for mass domestic surveillance, to direct autonomous weapons systems, or for any high-stakes automated decisions." Sounds about the same. Also, if those guidelines aren't enforced, AI could be scary for military uses.

OpenAI details layered protections in US defense department pact | Reuters

OpenAI announces new deal with Pentagon — including ethical safeguards - POLITICO

This is more complicated than I feel qualified to weigh in on. On one hand, I admire Anthropic's pushback and don't believe our country was founded on the premise that the government should be able to track and monitor its people. That being said, if we had allowed such monitoring, one wonders how many casualties from violent attacks could have been prevented. Just the other week in B.C., the trans shooter was discovered to have searched some huge "red flag" topics in ChatGPT. There have been countless other cases where it was revealed they made Google searches, Facebook posts, etc., that should have been flagged as dangerous.

If we refuse to return to the institutionalization policies of the '60s and earlier, perhaps we need that kind of monitoring. Obviously, it opens up a huge risk of politicization, with Reps monitoring and interfering with Dems, and vice versa. That's the part I struggle with.

 


GIF

GIF

ReplyQuote
Posted : 03/02/2026 12:41 pm
😂
1