
prototyperspective t1_it6wneb wrote

Info like that should be contained in articles like https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence and https://en.wikipedia.org/wiki/Regulation_of_algorithms (and that page may be useful for expanding these, albeit the summaries should be shorter than that page).

There could also be a new article, but I think the "Existential risk" article is too narrowly focused on existential risk, which makes the common mistake of ignoring the more immediate, currently more realistic issues.

Also, it may often be a better approach to explore regulating potential AI-made products, analogous to the regulation of chemicals (or similar, fundamentally different approaches).


Endward22 t1_itcy7qw wrote

>"Existential risk"

I heard this from Bostrom, tbh.


prototyperspective t1_itd1ubv wrote

I'm not saying there is no existential risk, but it's only one part of the major risks, and it's not a good approach to try to disentangle these or to ignore the others. So it should be an article about risks in general, with info that some of them may be existential, and why/how so.
