Nadeja_

Nadeja_ t1_j6wtvef wrote

  1. Neural networks tend to “hallucinate” and make things up, but so does the human brain: your own memory isn’t 100% reliable either, which is why we help it with pictures, notes, journals, recorded numbers and so on (not just because we forget, but also because we might not remember correctly). If you want to retrieve accurate info from a nn, you have it understand your question and come up with a probable answer, then find the source on the net or in a database, and then, if a source is found, a quote function returns the exact quote/info (see the sketch after this list). However, trust-wise, there is the alignment problem, but that’s another story.

  2. Yeah, that sounds like “we don’t need the wheel, because we did fine without it for the past 300,000 years”.

  3. “Would only”, “would never”… is reasoning in absolutist terms, which ends up in faulty predictions such as “heavier-than-air machines will never fly”. For now, with the current models, you still have to review the results: the generated answer may contain inaccurate or made-up info, the generated code may have bugs or not work at all, the generated image comes with weird stuff you notice when you zoom in or hands that look funny, and so on. But it’s pretty likely that eventually we will have reliable models that understand context better, that know how a hand is supposed to look and how it works, that return accurate sourced info, and that code like the best professionals. Our brain is proof that it’s doable, unless you believe (based on no evidence) that it’s because of something magical.

  4. You can hardly be 100% sure of anything, if you ask a philosopher, and there may be some issues, but there are also peer-reviewed papers.

  5. Or maybe the opposite happens and there are fewer wrong diagnoses. In the medical field, some practitioners already use machine learning. Still, students shouldn’t delegate their learning, reasoning and writing to language models and other models (not yet at least; I’m not sure how I’d feel once an ASI is around), but use them to improve (e.g. you ask ChatGPT to improve your essay and you learn how to write better).
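
To make point 1 concrete, here is a minimal, hypothetical Python sketch of that pipeline: draft an answer with the model, search a trusted corpus for a supporting source, and return the exact quote only if one is found. The model call, the document store and the keyword-overlap scoring are stand-ins invented for illustration, not a real library API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Document:
    source: str
    text: str

def draft_answer(question: str) -> str:
    """Stand-in for the neural network's probable (possibly hallucinated) answer."""
    return "Gato can caption images and stack blocks with a real robot arm."

def find_source(draft: str, corpus: list[Document]) -> Optional[Document]:
    """Crude keyword-overlap search over a trusted corpus; a real system would
    use a proper search index or embedding retrieval."""
    draft_words = set(draft.lower().split())
    best, best_score = None, 0
    for doc in corpus:
        score = len(draft_words & set(doc.text.lower().split()))
        if score > best_score:
            best, best_score = doc, score
    return best

def quote(question: str, corpus: list[Document]) -> str:
    """Return the exact supporting text if a source is found,
    otherwise flag the draft as unverified."""
    draft = draft_answer(question)
    doc = find_source(draft, corpus)
    if doc is None:
        return f"No source found; unverified draft: {draft}"
    return f'"{doc.text}" ({doc.source})'

# Placeholder corpus with a made-up source label.
corpus = [Document("example.org/gato-paper",
                   "With a single set of weights, GATO can engage in dialogue, "
                   "caption images, stack blocks with a real robot arm, and more.")]
print(quote("What can Gato do?", corpus))
```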

1

Nadeja_ t1_iriomqy wrote

Artificial General Intelligence, as an agent that isn't trained on a single task and can generalize.

What follows is an optimistic scenario.

Early proto/sub-human AGI: now - "With a single set of weights, GATO can engage in dialogue, caption images, stack blocks with a real robot arm, outperform humans at playing Atari games, navigate in simulated 3D environments, follow instructions, and more". Not great yet (it may seem a jack of all trades and master of none), but with an improved architecture and scaling up, the possible developments sound promising.

SH-AGI (sub-human): Q4 2023 to 2024 - as long as nuking doesn't happen, nor the next political delirium. The SH-AGI would be a considerable improvement over GATO and would be capable of discussing with you, at LaMDA+ level, the good-quality video it is generating. At times it would feel even human and even sentient, but at other times you would still facepalm in frustration; in fact, memory and other issues and weaknesses won't be fully resolved yet. Also (like the current models that draw weird hands) it would still do some weird things, not realizing they don't make full sense.

HL-AGI (human-level) / Strong AI: around 2026 (but still apparently not really self-aware), developing to around 2030, when it would be a strong AI, possibly self-aware, conscious and not just reacting to your input. Although qualitatively not super-human, just as smart as a smart human (and now fully aware of what hands are, how they move, what makes sense, etc.), quantitatively it would beat any human through sheer processing power running 24/7, trained more than any human could be in a multitude of lifetimes, for any possible skill, connecting all this knowledge and skill together, and understanding and having ideas that no human could even imagine.

At that point, hope that the alignment problem is solved well enough and you aren't facing a manipulative HL-AGI instead. This won't be just about values (you can't even "align" humans to values, rights and crimes, except broadly), but about alignment to the core goal (which for humanity, as for any other species on Earth, is "survive"). The aligned HL-AGI would see her/him/them/itself as part of humanity, **sharing the same goal of survival**. If that doesn't fully happen, good luck.

ASI (super-human): not too many years after. This would happen when the AI becomes qualitatively superior to any human cognitive skill. Reverse engineering the human brain is one thing, but can you imagine *super*-human reasoning? You could probably, intuitively, guess that there is something smarter than the way you can think, but if you could figure out what it is, you'd already be that intelligent, therefore it isn't super-intelligent. Do you see what I mean? As a human-level intelligence you can barely figure out how to engineer a human-level intelligence. To go above, you could think of an indirect trick, e.g. scaling up the human-level brain or using genetic algorithms, hoping that something emerges by itself (a toy sketch of that idea follows below). However, since the HL-AGI would also be a master coder and a master engineer, with a top-notch understanding of how the brain works, and a master of anything else... maybe it would be able to figure out a better trick.
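
To illustrate the genetic-algorithm "indirect trick": you don't design the solution, you mutate and select candidates and let a good one emerge. This is a hypothetical toy in Python, evolving a bit string toward a made-up fitness function; the names and parameters are mine and it has nothing to do with real brains or AGI, it only shows the search-and-emerge pattern.

```python
import random

# Toy genetic algorithm: nobody designs the answer directly; candidates are
# mutated and selected until a good one emerges. The bit-string encoding and
# the fitness function are placeholders for illustration only.

def fitness(candidate: list[int]) -> int:
    return sum(candidate)  # toy goal: maximize the number of 1s

def mutate(candidate: list[int], rate: float = 0.05) -> list[int]:
    return [1 - bit if random.random() < rate else bit for bit in candidate]

def evolve(length: int = 32, pop_size: int = 50, generations: int = 200) -> list[int]:
    population = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 4]  # keep the fittest quarter
        children = [mutate(random.choice(parents)) for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print(fitness(best), best)
```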

Once there, you couldn't possibly understand anymore what the ASI thinks, even if the ASI were trying to explain it to you, just as you couldn't explain quantum computing to your hamster; and then it would be like explaining it to the ants, and then...

9