The Pentagon's and Trump administration's beef with Anthropic is hard to understand. Apparently Anthropic insisted on contract language preventing Claude (their AI system) from being used for mass surveillance of Americans or in fully autonomous weapons, and on keeping human decision-making in the loop. Sounds reasonable, but the Pentagon has insisted on no limitations on its use. Somehow this got Anthropic the "woke" label, for reasons I don't understand, and the administration is terminating all contracts with Anthropic.
Almost immediately after, OpenAI got the contract, and apparently its contract has the same safety guidelines Anthropic insisted on. What changed? It's certainly not clear. The guidelines are: "OpenAI technology cannot be used for mass domestic surveillance, to direct autonomous weapons systems, or for any high-stakes automated decisions." Sounds about the same. Also, if those guidelines aren't enforced, AI could be scary for military uses.
This is more complicated than I feel qualified to weigh in on. On one hand, I admire Anthropic's pushback, and I don't believe our country was founded on the premise that the government should be able to track and monitor its people. That being said, if we had allowed such behavior, one wonders how many casualties from violent attacks could have been prevented. Just the other week in B.C., the trans shooter was discovered to have searched some huge "red flag" topics in ChatGPT. There have been countless other cases where it was revealed that the attackers made Google searches, Facebook posts, etc., that should have been flagged as dangerous.
If we refuse to return to the institutionalization policies of the '60s and earlier, perhaps we need surveillance tools like these. Obviously, that opens up a huge risk of politicization, with Reps monitoring and interfering with Dems, and vice versa. That's the part I struggle with.
I try to forget Tom Cruise movies. That one wasn't too bad.
Wonder if use of AI will be something that countries end up agreeing to ban in the future.
LOL! Why not the automobile or the internet while you’re at it?
How is that even comparable to using AI to determine if an attack should be made that has potential to kill humans?
Are you really struggling to understand the difference between that and your asinine comparisons?
You know what doesn't have any decision-making ability? A land mine. It doesn't care if you're a kid playing soccer or a Humvee. You know what has the potential to kill innocent humans? A GI strolling through a village in Vietnam who just saw his best friend get killed, hasn't slept in days, and hasn't had sex in a year.
There's every reason to believe that autonomous weapons will actually reduce the civilian toll of war.
If you want to say that human sign-off should still be required to launch something like a nuke, I could see how that probably makes sense. But AI in weapons systems is here. We're watching it in Iran right now, and it's far more discriminate than the bombing of Dresden or Tokyo, for instance.