
SomeLongWindedIdiot t1_jd07i7z wrote

Why is AI safety not a major topic of discussion here and in similar communities?

I apologize if the non-technical nature of my question is inappropriate for the sub, but as you'll see from my comment, I think this is very important.

I have been studying AI more and more over the past few months (for perspective on my level: Andrew Ng's Deep Learning course, Kaggle competitions and simple projects, reading a few landmark papers, and digging into transformers). The more I learn, the more I am both concerned and hopeful. It seems all but certain to me that AI will completely change life as we know it in the next few decades, quite possibly in the next few years if the current pace of progress continues. It could change life into something much, much better or much, much worse, depending on who develops it and how safely they do it.

To me, safety is far and away the most important subfield in AI right now, yet it is one of the least discussed. Even if you think there is a low chance of AI going haywire on its own, in my admittedly very non-expert view it's obvious that we should also be concerned about the judgment and motives of the people developing and controlling the most powerful AIs, and about the risks of such powerful tools being accessible to everyone. At the very least I would want discussion of actionable things we can all do as individuals.

I feel a strong sense of duty to do what I can, even if that's not much. I want to donate a percentage of my salary to fund AI safety, and I am looking into whether I can contribute effectively by working with any AI safety organizations. I have a few of my own ideas along these lines; does anyone have any suggestions? I think we should also discuss ways to shift the incentives of major AI organizations. Maybe there isn't a ton we can do (although with a LOT of people looking, there is room for a major movement), but it's certainly not zero.

3

Nyanraltotlapun t1_jdeltnm wrote

Long story short, a key property of complex systems is the ability to pretend and mimic. So the real safety of AI lies in its physical limitations (compute power, algorithms, etc.), the same limitations that make it less useful and less capable. The more powerful an AI is, the less safe it is, and the more danger it poses. And it is dangerous, all right. More dangerous than nuclear weapons.

1