
Unfocusedbrain t1_j7qys9x wrote

That's true enough. Considering people have died because of GPS, of all things, yeah, it's a non-negligible issue.

The more concerning issue is bad-faith actors and malicious agents. There are already examples of people using other AI software maliciously. Too many to list.

For ChatGPT, there is an example of cybersecurity researchers using it to make malware even with its filters in place. They were acting in good faith too, but that also means people with less academic pursuits could use it for similar, malicious ends.

−1

[deleted] t1_j7sdjau wrote

[deleted]

1

Unfocusedbrain t1_j7ssxdk wrote

True enough that malware is possible without ChatGPT, my snarky commenter. I'm more concerned with script kiddies being able to mass-produce polymorphic malware that makes mitigation cumbersome, with very little effort or investment by the creator.

Hackers already have the advantage of anonymity, so it's incredibly difficult to stop them proactively. This just makes it worse.

But that wasn't my point, my bad-faith chum, and you know that very well. Your posting history makes it clear you have a vested interest in ChatGPT being as unfettered as possible. So I don't think you and I can have a neutral discussion about this in the first place. Nor would you want one.

1