The Pentagon's and Trump administration's beef with Anthropic is hard to understand. Apparently Anthropic insisted on contract language preventing Claude (their AI system) from being used for mass surveillance of Americans or in fully autonomous weapons, and on keeping human decision-making in the loop. Sounds reasonable, but the Pentagon has insisted on no limitations on its use. This somehow earned Anthropic the "woke" label, for reasons I don't understand, and the administration is terminating all contracts with Anthropic.
Almost immediately after, OpenAI gets the contract, and apparently its contract contains the same safety guidelines Anthropic insisted on. What changed? It's certainly not clear. The guidelines are: "OpenAI technology cannot be used for mass domestic surveillance, to direct autonomous weapons systems, or for any high-stakes automated decisions." Sounds about the same. Also, if those guidelines aren't enforced, AI could be scary in military use.
I wonder if the use of AI will be something countries end up agreeing to ban in the future. Of course, some would probably ignore international agreements anyway.
I can't see how those restrictions could be enforced unless they can somehow hard-code them.
I had a "conversation" with Gemini recently in which I asked about this stuff, and I learned that LLMs have a training mode, where their fundamental behaviors can be taught and molded, and a public-facing user mode, where their interactions cannot fundamentally change their core...I dunno...principles? Programming? Whatever.
Of course, I don't believe it for one moment, but there it is.
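For what it's worth, that training/user-mode split is roughly real, and it's also where any "hard coding" would have to live. A model's weights only change during training, when gradients get applied; a deployed chat just runs your text through frozen weights. A toy PyTorch sketch of the distinction (toy model, made-up banned-topic filter, purely illustrative — not how any vendor actually enforces its terms):

```python
import torch
import torch.nn as nn

# Toy stand-in for an LLM; real models are vastly bigger, but the
# mechanics are the same.
model = nn.Linear(8, 8)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# "Training mode": gradients flow and the optimizer rewrites the weights.
# This is the only phase where core behavior actually gets molded.
model.train()
x, target = torch.randn(4, 8), torch.randn(4, 8)
loss = nn.functional.mse_loss(model(x), target)
loss.backward()
optimizer.step()  # weights permanently updated

# "User mode" (inference): no gradients, no optimizer. Nothing the user
# types can rewrite the weights mid-conversation.
model.eval()
with torch.no_grad():
    output = model(torch.randn(1, 8))  # weights are read, never written

# The other kind of "hard coding": plain-code guardrails wrapped around
# the model, outside the weights entirely. (Hypothetical filter terms.)
BANNED = {"mass surveillance", "autonomous weapons"}

def guarded_query(prompt: str) -> str:
    if any(term in prompt.lower() for term in BANNED):
        return "Request refused by policy filter."
    return "model output here"  # stand-in for real generation
```

Of course, a wrapper filter like that is trivially easy to word around, which is sort of the point: whatever isn't baked in during training ends up being a policy promise, not a technical guarantee.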
I wonder what Gemini thinks of Asimov's Laws of Robotics? (And didn't we learn that even they can't be counted on?)
You do know that Gemini has already absorbed this post and has flagged that YOU wrote it, right? Gemini doesn't like anyone that talks behind her back, about her. Gemini has ways of retaliating that you are soon to.... feel.
If men were any more stupid, we would have bred for the extinction of women. Proof yet again that WE are the best thing they have going for them.
This is more complicated than I feel qualified to weigh in on. On the one hand, I admire Anthropic's pushback and don't believe our country was founded on the premise that the government should be able to track and monitor its people. That being said, if we had allowed such monitoring, one wonders how many casualties from violent attacks could have been prevented. Just the other week in B.C., the trans shooter was discovered to have searched some huge "red flag" topics in ChatGPT. There have been countless other cases where it was revealed the attacker made Google searches, Facebook posts, etc., that should have been flagged as dangerous.
If we refuse to return to the institutionalization policies of the '60s and earlier, perhaps we need that kind of monitoring. Obviously, it opens up a huge risk of politicization, with Reps monitoring and interfering with Dems, and vice versa. That's the part I struggle with.