_SputnicK_ t1_j7m28yh wrote

As someone who has been here since 2016, I consider myself a "legacy user." I think this sub focuses too much on commercial AI and not enough on the theory of artificial general intelligence, exponential growth, and intelligence explosion. December was nothing but ChatGPT screenshots. The userbase has shifted from readers of Kurzweil and Bostrom to people who are only here because of the endless AI news coverage. Many of the comments are to the effect of "wow! AGI tomorrow," and the place functions like an endless hype train. I suppose to some extent this was inevitable, but it's still a regression in my view.

People in this sub assume that we can go from word prediction engines (ChatGPT) to artificial general intelligence while dismissing the numerous breakthroughs needed to reach that milestone. Few here understand the theory behind how AI actually works, so the discussion is based more on sentiment than fact. I need to find a small community of people who actually enjoy reading AI papers.

Edit: This sub has done well by remaining apolitical and largely on topic, but I really fear it could devolve into a kind of hype-machine echo chamber, and we may already be there. Take someone like u/ideasware, who understood the development of AI as tragic and very possibly apocalyptic. No one here seems to want to get into the finer details of how things could go very wrong.


_SputnicK_ t1_itwm8qn wrote

I think AGI will arrive in the wake of rumors and speculation. For months, people will talk about how X company (e.g., Google, OpenAI, Meta) has "solved intelligence" but nothing will be confirmed for an exceptionally long time. Internally, there will be pandemonium about the ramifications of releasing the model to the public. Inevitably, the model will be leaked, and imitation models will follow.