User1539 t1_j5f9cdo wrote
This is why I keep saying we don't need 'real AGI' to feel the vast majority of the effects we all think we'll see when AGI happens.
We don't need a superhuman thinking machine to do 99% of the tasks people want to automate. What we need is a slice of a 70IQ factory worker's brain that can do that one thing over and over again.
We already have the building blocks for that.
GoldenRain t1_j5fx0wy wrote
If we want effective automation, or to make general human tasks faster, we certainly do not need AGI.
If we want inventions and technology which would be hard for humans to come up with in a reasonable time frame, we do need AGI. If we want technology human intelligence is unable to comprehend, we need ASI. The step between those two is likely quite short.
drsimonz t1_j5g9533 wrote
Depends on the nature of the invention. A lot of research involves trial and error, and this is ripe for automation. A really cool example (which as far as I know doesn't involve any AI so far) is robotic biochemistry labs. If you need to test 500 different drug candidates in some complicated assay, you can just upload the experiment via web API and the next thing you know, dozens of robots come to life mixing reagents and monitoring the results. In my view, automation of any kind will continue to accelerate science for a while, even without AGI.
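Just to make that concrete, here's a toy sketch of what submitting such a screen might look like, assuming a hypothetical cloud-lab REST endpoint. The URL, payload fields, and protocol name are all invented for illustration, not any real vendor's API:

```python
# Hypothetical cloud-lab submission: endpoint, payload schema, and protocol
# name are invented for illustration only.
import requests

ASSAY_ENDPOINT = "https://cloudlab.example.com/v1/experiments"  # placeholder URL

# 500 drug candidates, each run through the same complicated assay.
candidates = [f"compound_{i:03d}" for i in range(500)]
experiment = {
    "protocol": "binding_assay_v2",   # assumed protocol name
    "replicates": 3,
    "wells": [{"compound_id": c, "concentration_uM": 10.0} for c in candidates],
}

try:
    # One POST and the robots take it from there.
    resp = requests.post(ASSAY_ENDPOINT, json=experiment, timeout=10)
    resp.raise_for_status()
    print("queued run:", resp.json()["run_id"])
except requests.RequestException as exc:
    print("no real lab behind this placeholder endpoint:", exc)
```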
I would also argue that in some narrow fields, we're already at a point where humans are totally incapable of comprehending technology that is generated by software. The obvious example is neural networks (we can understand the architecture, but not the weights). Another is the hardware description languages used for IC design. Sure, a really smart computer engineer with an electron microscope could probably reverse engineer some tiny block of a modern CPU, but it would be nearly impossible to map the entire thing; they have billions of transistors. When we design these things, it's simply not possible without the use of sophisticated software. Similarly, when you compile code to assembly, you might be able to understand tiny fragments of the assembly, but the entire program would take a lifetime to get through. Without compilers and interpreters, software would still see extremely limited use in society, and we literally wouldn't be having this discussion.
Edit: forgot to say, of course AGI will be a completely different animal since it will be able to generate new kinds of ideas where even the concept is beyond the reach of a human brain.
SoylentRox t1_j5h5lxz wrote
This. And there are bigger unsolved problems that scale might help with.
For example, finding synthetic cellular growth serums. This is a massive trial-and-error effort: which molecules in bovine plasma do you actually need for full development of structures in vitro?
Or growing human organs. Similarly, there is a vast amount of trial and error; you really need millions of attempts.
Even trying to do the above rationally, you need to investigate in parallel the effect of each unknown molecule. And you need a lot of experiments, not just one, since you don't want to draw a false conclusion.
Ideally I can imagine a setup where scientific papers stop being written as difficult-to-parse prose and are instead in a standard, machine-duplicable form. So the setup and experiment sections are a link to the actual files used to configure the robotics, and the results are the unabridged raw data. The analysis is done by an AI that was prompted on what you were looking for, so there can't be accusations of cherry-picking a conclusion.
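Something like this, as a rough illustration (every field name and path below is invented, just to show the shape of a machine-duplicable paper record):

```python
# Illustrative only: a "paper" as a machine-duplicable record instead of prose.
machine_readable_paper = {
    "title": "Candidate growth-factor screen, bovine plasma fraction 7",
    "setup": {
        # Links to the exact files that configured the robotics,
        # not a prose description of the method.
        "robot_config_files": ["s3://lab-runs/run-0042/deck_layout.json"],
        "protocol_definition": "s3://lab-runs/run-0042/protocol.yaml",
    },
    "results": {
        # Unabridged raw data, not summary figures.
        "raw_data": "s3://lab-runs/run-0042/raw/",
    },
    "analysis": {
        # The prompt is published too, so the conclusion can't be
        # accused of cherry-picking after the fact.
        "analyst": "analysis-model-v5",
        "prompt": "Report the effect of each candidate molecule on organoid growth.",
    },
}
print(machine_readable_paper["setup"]["protocol_definition"])
```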
And journals don't accept non-replicated work. What has to happen is the paper gets 'picked up' by another lab, with a different source of funding (or funding done in a way that reduces conflicts of interest), ideally using a different robotic software stack to turn the high-level 'setup' files into actionable steps, a different robotics AI vendor, and a different model of robotic hardware.
Each "point of heterogeneity" above has to be part of the data quality metrics for the replication, and then depending on the discovered effects you only draw reliable conclusions on high quality data.
Also, the above allows every paper to use all prior data on a question rather than standing alone. Your prior should always be calculated from the set of all prior research, not split evenly between hypotheses.
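As a toy illustration of that last point (all numbers invented), with a simple beta-binomial model the new study's prior starts from the pooled counts of every earlier run on the same question, rather than resetting to 50/50:

```python
# Toy beta-binomial example: the prior for a new study is built from all
# prior runs on the same hypothesis, not reset to 50/50 each time.
prior_runs = [
    (18, 78),   # (successes, failures) from an earlier run -- invented numbers
    (22, 74),
    (15, 81),
]

alpha, beta = 1.0, 1.0            # uninformative starting point
for s, f in prior_runs:
    alpha += s
    beta += f
print(f"prior going into the new study: Beta({alpha:.0f}, {beta:.0f}), "
      f"mean ~{alpha / (alpha + beta):.2f}")

# The new study's own data then updates this pooled prior further.
new_successes, new_failures = 20, 76
alpha += new_successes
beta += new_failures
print(f"posterior after the new study: mean ~{alpha / (alpha + beta):.2f}")
```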
Institutions are slow to change, but I can imagine a "new science" group of companies and institutions that uses the above, plus AGI, and surges so far ahead of everyone else in results that no one else matters.
NASA vs the Kenyan space program.
User1539 t1_j5grqw0 wrote
> If we want effective automation or make general human tasks faster we certainly do not need AGI.
Agreed. We're very, very close to this now, and likely very far away from AGI.
> If we want inventions and technology which would be hard for humans to come up with in a reasonable time frame, we do need AGI.
This is where we disagree. I have many contacts at universities, and most of my friends have a PhD and participate in some kind of research.
In their work, they were evaluating Watson (IBM's LLM-style AI) years ago, and talking about how it would help them.
Having a PhD necessarily means having tunnel vision. You will do research that makes you the single person on earth who knows about the one cell you study, or the one protein you've been working with.
Right now, the condition of science is that we have all these researchers writing papers to help other scientists have a wider knowledge on things they couldn't possibly dedicate time to.
It's still nowhere near wide enough. PhDs aren't able to easily work outside their field, and the result is that their research needs to go through several levels of simplification before someone can find a use for it, or see how it affects their own research.
A well-trained LLM can tear down those walls between different fields. Suddenly, you've got an infinitely patient, infinitely knowledgeable assistant. It can write code for you. You can ask it what effect your protein might have on a new material, without having to become, or know, a materials scientist.
Everyone having a 'smart' assistant that can offer an expert level understanding of EVERY FIELD will bridge the gaps between the highly specialized geniuses of our time.
Working with the sort of AI we have now will take us to an entirely new level.
Baturinsky t1_j5iq32y wrote
And how safe is it to put those tools in the hands of, among others, criminals and terrorists?
User1539 t1_j5jjimd wrote
The same argument has been made about Google, and it's a real concern. Some moron killed his wife a week or so ago, and the headline read "Suspect's Google history included 'how to hide a 140lb body'".
So, yeah. It's already a problem.
Right now we deal with it by having Google keep records and hoping criminals who google shit like that are just too stupid to use a VPN or anonymous internet.
Again, we don't need AGI to have that problem. It's already here.
That's the whole point of my comment. We need to stop waiting for AGI before we start to treat these systems as being capable of existential change for the human race.
Baturinsky t1_j5jl5nt wrote
I agree, a human + AI working together is already an AGI, with the only limit being that the human part is unscalable. And it can be extremely dangerous if the AI part is very powerful and both are not aligned with fundamental human values.
User1539 t1_j5jlbqa wrote
Yeah, it still represents an exponential growth of human potential. The problem is that humans are monsters.
Baturinsky t1_j5k0xxz wrote
Yes, but I hope it can be addressed
https://www.reddit.com/r/ControlProblem/comments/109xs2a/ai_alignment_problem_may_be_just_a_subcase_of_the/
Artanthos t1_j5j1tiy wrote
We already have that.
Machine Learning algorithms are already making advances in mathematics and medicine.
TinyBurbz t1_j5gw55e wrote
>We don't need a superhuman thinking machine to do 99% of the tasks people want to automate. What we need is a slice of a 70IQ factory worker's brain that can do that one thing over and over again.
We need a better, smarter search engine, then? Something that can intelligently ingest and present information.
User1539 t1_j5gxrwt wrote
Honestly, what we need is something to translate between what an LLM can 'understand' needs to be done and the physical world.
Right now, we can ask an LLM what the process of, say, changing the oil in a car is.
We can also program an industrial robot to do that task, basically blind.
To automate jobs, we need an LLM-style understanding of the task and the steps required, coupled to the ability to take each of those steps and communicate it to a 'body', checking as it goes that the process is being followed correctly.
So, if an LLM could, say, break the problem into steps, taking into account the situation around it, it could probably do the job.
Imagine typing into ChatGPT a prompt like 'You are programming a robot arm. You need to pick up a glass. Write the code to pick up the glass in front of you.'
Then automatically send that to a camera/arm system, and have the image processing describe back: 'The arm is 2 inches to the left of the glass. Please program the arm to grab the glass.'
'The glass has been knocked over to the left, and is now on its side, 4 inches in front of the hand. Please program the arm to grab the glass.'
Ultimately it would be more complicated than that, but I think that's the basic idea of what many researchers are working on moving forward.
With a feedback loop of video being able to 'describe' to the LLM what is happening, and the LLM adjusting to meet its task, you could have a very useful android.
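Something like this toy loop, where the 'vision model', 'LLM', and 'arm' are all fake stand-ins just to show the shape of the feedback cycle (a real system would swap in a camera pipeline, an actual LLM API, and a motion controller):

```python
# Toy simulation of the describe -> plan -> act loop. Everything below is a
# stand-in so the loop actually runs; nothing here is a real robotics API.
arm_offset_inches = -2.0   # simulated world: arm starts 2 inches left of the glass
holding_glass = False

def describe_scene() -> str:
    """Pretend vision model: turns the simulated world state into text."""
    if holding_glass:
        return "The arm is holding the glass."
    if abs(arm_offset_inches) < 0.1:
        return "The arm is directly above the glass."
    side = "left" if arm_offset_inches < 0 else "right"
    return f"The arm is {abs(arm_offset_inches):.1f} inches to the {side} of the glass."

def ask_llm(prompt: str) -> str:
    """Pretend LLM: reads the scene out of the prompt and plans one command."""
    if "left of the glass" in prompt:
        return "move_right(1.0)"
    if "right of the glass" in prompt:
        return "move_left(1.0)"
    if "directly above the glass" in prompt:
        return "close_gripper()"
    return "wait()"

def execute_on_arm(action: str) -> None:
    """Pretend controller: applies the command to the simulated world."""
    global arm_offset_inches, holding_glass
    if action.startswith("move_right"):
        arm_offset_inches += 1.0
    elif action.startswith("move_left"):
        arm_offset_inches -= 1.0
    elif action.startswith("close_gripper"):
        holding_glass = True

TASK = "Pick up the glass in front of you."
for step in range(10):
    scene = describe_scene()
    print(f"step {step}: {scene}")
    if "holding the glass" in scene:
        break
    execute_on_arm(ask_llm(f"Task: {TASK}\nScene: {scene}\nNext arm command?"))
```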
TinyBurbz t1_j5hp5hl wrote
>With a feedback loop of video being able to 'describe' to the LLM what is happening, and the LLM adjusting to meet its task, you could have a very useful android
That's GAN/game-AI territory and is already out there. The algorithm is given an outcome like "win this match" or "pour water into this cup" and works out how to do so on its own. It's how a lot of self-driving models work, and how OpenAI delivered Dota 2 bots that are indistinguishable from real players (they even rage if they can't follow their normal routine).
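The smallest version of that "given an outcome, figure out the steps" idea is something like tabular Q-learning on a made-up corridor. The real Dota bots used large-scale self-play RL (PPO), not a lookup table, but the principle is the same; everything about the toy environment below is invented for illustration:

```python
# Tiny tabular Q-learning demo: the agent is only told the outcome
# (reaching the cup at cell 4 is worth +1) and discovers the steps itself.
import random

N_CELLS, GOAL = 5, 4
ACTIONS = [-1, +1]                      # step left, step right
Q = {(s, a): 0.0 for s in range(N_CELLS) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2   # learning rate, discount, exploration

def step(state, action):
    nxt = max(0, min(N_CELLS - 1, state + action))
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

for episode in range(500):
    s, done = 0, False
    while not done:
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda x: Q[(s, x)])
        nxt, r, done = step(s, a)
        best_next = max(Q[(nxt, x)] for x in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = nxt

policy = [max(ACTIONS, key=lambda x: Q[(st, x)]) for st in range(N_CELLS)]
print("learned policy (+1 = move toward the cup):", policy)
```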
What I foresee ultimately is tools we already use, super-enhanced by AI. For example, a Wolfram-GPT macro for Visual Studio that generates the menial part of code, leaving the coder to figure out the harder logic themselves, which the macro can then pick up on and offer complete code for.
Or, let's say someone is writing a story but doesn't want to write out a full conversation between two characters, or perhaps they need help crafting lore without also having to write a prequel.
While I know art AIs make beautiful renderings, to me, their potential is squandered on the lazy. Getting more into this, AI art could be so much more if used as a tool. It could do amazing things, like generating real-world textures that allow every tree in a game to be unique. But as it stands, people seem so much more interested in letting AI do the work for them, instead of letting AI enhance the work they already have done.
I know this sub has a hard one for letting AI do all this shit on its own as if it's alive, but to me, that really stifles these tools. As it stands right now, AI is a viral app fad that will fade into the background to deliver nothing but ads and more disturbing YouTube Kids content. I know how badly people want self-aware machines, and how easily they mistake these tools for something living. Everyone arguing about letting AI have no limits is missing the point of what the creators of these tools want from them.
RabidHexley t1_j5lthcl wrote
> While I know art AIs make beautiful renderings, to me, their potential is squandered on the lazy. Getting more into this, AI art could be so much more if used as a tool. It could do amazing things, like generating real-world textures that allow every tree in a game to be unique. But as it stands, people seem so much more interested in letting AI do the work for them, instead of letting AI enhance the work they already have done.
This is the main thing that sticks out to me about the AI art revolution, in terms of how it'll really change the game. People are laser-focused on the idea of AI creating bespoke art pieces in their entirety. But a lot of art, be it illustration, animation, game design, comics, etc., contains a lot of tedious, repetitive "art <space> work" that is only tangential to the artist's creative vision and could be automated by tech like this.
Another example would be something like a comic book, manga, or animated series, where the artist designs the world and art style, draws out the characters and their unique looks, etc., but is then able to use AI to rapidly generate backdrops or background characters that fit their specific style, allowing them to focus on the more specific, key, creative segments of the work.
This could drop the cost and massively increase the accessibility for mediums that currently require numerous tedious hours to produce an incredibly small amount of content, or huge teams of creatives made to do grunt work.
duffmanhb t1_j5h4dte wrote
Based on what I've heard about Google's AI, that type of AGI is already there. I don't think any AGI will ever make everyone content, since it's a broad, moving goalpost that's ill-defined, and digital processing is going to be fundamentally different from biological processing, but the AI Google has is really, really good. Mostly because it's multiple different types of AIs all networked together, connected to the internet, and able to learn novel tasks on demand.
Original_Ad_1103 t1_j5igg1q wrote
“70IQ factory worker's brain”
Bruh, disrespectful, you had to compare it to a human being?
User1539 t1_j5jj3wy wrote
I'm not comparing it to a human being; I'm saying many, many jobs could be automated if we could take a single function of the lowest working human.
Original_Ad_1103 t1_j5ncul7 wrote
But it’s still a human; you just said “lowest working human”. Why lowest? Why 70IQ? Just say factory worker, even a repetitive one. I’m not denying that repetitive, simple jobs usually have people who aren’t good at complex tasks or of particular intelligence, but still, no need to bring IQ into this; that’s kinda rude. It’s like saying “Just need an AI who can do a repetitive task like a cashier with Down syndrome.”
User1539 t1_j5ndt3y wrote
An IQ of 70 is roughly the limit for getting Social Security and not having to work. It is literally the line above which someone is expected to go out and get a job.
I'm literally saying we don't need 'smarter than human' AGI. An AI that could do the work we give to the people of whom we expect the least would be an existential change.
IQ is a common measure of someone's intelligence. But, if the mention of a measure of human intelligence offends you, then you probably shouldn't take part in conversations where human intelligence is routinely compared to machine intelligence.
Original_Ad_1103 t1_j5ng0j2 wrote
I’m just saying, I know IQ is a common measurement, and that it’s directly correlated with work. I’ve not seen that many comparisons between IQ and machine intelligence in subs, certainly not low IQ. Even though the comparison is true, it’s still “offensive” to some cuz there’s discriminatory undertones to it. Like you said, taking the “slice of a 70IQ brain”; bruh, you could’ve made any other example. Like just a repetitive task, or taking a slice of a simple program that does the same thing over and over again.
User1539 t1_j5nhi04 wrote
No, you're being offensive.
70 is a human IQ score. Lots of humans have an IQ of around 70. They're regular, hard-working people. There's nothing wrong with them.
I'm using that number because it is a number used to determine whether someone is capable of employment, not to determine whether they're good people.
I'm saying if we took away all the jobs from people with 70 and below IQs, it would be earth shattering. Because those people do a lot of work. Good work. Like good people do.
I'm literally saying that most people worry about AI becoming smarter than the smartest human, forgetting that most of us fall far below that line, and replacing all the hard working people in factories is going to change EVERYTHING.
But, deep down, you think people with low IQs are disgusting, and anyone that talks about them must be insulting them. Because you literally can't imagine a world where someone with a 70IQ is simply a reference point, and not an insult.
If we were talking about flying jets, and I mentioned offhand that the robot would have to be 6ft tall, as that is the height cutoff for flying a jet, would you be insulted on the pilots' behalf? No. Because you don't think 6ft-tall jet fighter pilots are 'less' and in need of your defending.
Not only do factory workers not need you to stick up for them, you're showing your true colors with how you're acting like they're so mentally challenged no one should talk about them at all.
RabidHexley t1_j5lu71s wrote
I don't think they're saying that actual factory workers are unintelligent, but that an AI wouldn't need to simulate a great deal of intelligence in order to perform a lot of the menial tasks humans are made to do. Even many complex jobs or trades are largely task-oriented, demanding skill but not necessarily great leaps of intuition to perform. Your average human is well, well above the necessary intelligence to perform the average job, but we do them because somebody has to (and jobs, but that's a whole other thing).