Right now models seem to be getting much better when they are scaled up. They are also currently pretty dang cheap compared to any kind of real industrial infrastructure. Training runs in the single and double digit millions are nothing to governments and corporations like Google. Even without architecture improvements, what does a $10 billion AI look like?
So I'd honestly not be that shocked if we have a "universal knowledge worker" type service by 2025 that offers you an "employee" with the reasoning ability of an average undergraduate but with the knowledge base of the whole internet.
I totally agree regarding the mind. Unless the mind is truly just magic, it can be emulated.
The kind of AI I am starting to favor as one of the safer types would be a collective super-intelligence made up of many specialized, subhuman AI models working together, using language as a universal medium. That way we can literally read its thoughts at all times, and all of the human-level complexity happens in the open.
It would be smarter than its constituent AI models the way that a research team is smarter than a single researcher.
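To make the idea concrete, here is a minimal sketch of what that "collective" pattern could look like in code: each specialist only talks through a shared natural-language transcript, so every intermediate step stays human-readable. The specialists here are trivial stubs standing in for real narrow models, and names like `Researcher`, `Critic`, and `run_collective` are purely illustrative, not any real API.

```python
# Sketch of a collective of specialized, subhuman models that communicate
# only in plain language through a shared transcript, so the "thoughts"
# of the whole system are readable by a human at every step.
# The specialists are stubs; a real system would call actual models here.

from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class Transcript:
    """Shared log of every message -- the collective's visible 'thoughts'."""
    messages: List[str] = field(default_factory=list)

    def post(self, author: str, text: str) -> None:
        entry = f"[{author}] {text}"
        self.messages.append(entry)
        print(entry)  # a human can watch the reasoning as it happens


@dataclass
class Specialist:
    """One narrow model wrapped behind a plain-language interface."""
    name: str
    respond: Callable[[str], str]  # reads language in, writes language out

    def step(self, transcript: Transcript) -> None:
        context = "\n".join(transcript.messages)
        transcript.post(self.name, self.respond(context))


def run_collective(task: str, specialists: List[Specialist], rounds: int = 1) -> str:
    """Let the specialists take turns; the whole exchange happens in the open."""
    transcript = Transcript()
    transcript.post("User", task)
    for _ in range(rounds):
        for s in specialists:
            s.step(transcript)
    return transcript.messages[-1]


if __name__ == "__main__":
    # Stub behaviours standing in for real specialized models.
    team = [
        Specialist("Researcher", lambda ctx: "Relevant fact: current training runs cost far less than industrial infrastructure."),
        Specialist("Critic", lambda ctx: "Check that claim against published training budgets before relying on it."),
        Specialist("Summarizer", lambda ctx: "Draft answer: combine the fact with the critic's caveat."),
    ]
    run_collective("Estimate what a $10B training run would buy.", team)
```

The point of the sketch is the design choice, not the stubs: because every exchange goes through the transcript, the team-level reasoning is exactly as inspectable as a research group's email thread.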