Comments


User1539 t1_j5f9cdo wrote

This is why I keep saying we don't need 'real AGI' to feel the vast majority of the effects we all think we'll see when AGI happens.

We don't need a superhuman thinking machine to do 99% of the tasks people want to automate. What we need is a slice of a 70IQ factory worker's brain that can do that one thing over and over again.

We already have the building blocks for that.

75

GoldenRain t1_j5fx0wy wrote

If we want effective automation, or to make general human tasks faster, we certainly do not need AGI.

If we want inventions and technology which would be hard for humans to come up with in a reasonable time frame, we do need AGI. If we want technology human intelligence is unable to comprehend, we need ASI. The step between those two is likely quite short.

26

drsimonz t1_j5g9533 wrote

Depends on the nature of the invention. A lot of research involves trial and error, and this is ripe for automation. A really cool example (which as far as I know doesn't involve any AI so far) is robotic biochemistry labs. If you need to test 500 different drug candidates in some complicated assay, you can just upload the experiment via a web API, and the next thing you know, dozens of robots come to life, mixing reagents and monitoring the results. In my view, automation of any kind will continue to accelerate science for a while, even without AGI.
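
To make that concrete, here's a rough sketch of what "upload the experiment via a web API" could look like. The endpoint, payload schema, and field names are all invented for illustration; real cloud labs each have their own formats.

```python
# Sketch of submitting an assay to a hypothetical cloud-lab API.
# The URL, payload schema, and auth scheme are invented placeholders.
import requests

experiment = {
    "assay": "kinase-inhibition-panel",
    "candidates": [f"compound-{i:03d}" for i in range(500)],  # 500 drug candidates
    "replicates": 3,
    "readout": "fluorescence-polarization",
}

resp = requests.post(
    "https://cloudlab.example.com/v1/runs",  # placeholder endpoint
    json=experiment,
    headers={"Authorization": "Bearer YOUR_TOKEN"},
    timeout=30,
)
resp.raise_for_status()
print("Run queued:", resp.json()["run_id"])  # robots take it from here
```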

I would also argue that in some narrow fields, we're already at a point where humans are totally incapable of comprehending technology that is generated by software. The obvious example being neural networks (we can understand the architecture, but not the weights). Another would be the hardware description languages used for IC design. Sure, a really smart computer engineer with an electron microscope could probably reverse engineer some tiny block of a modern CPU, but it would be nearly impossible to map the entire thing. They have billions of transistors. When we design these things, it's simply not possible without the use of sophisticated software. Similarly when you compile code to assembly, you might be able to understand tiny fragments of assembly, but the entire program would take a lifetime to get through. Without compilers and interpreters, software would still see extremely limited use in society, and we literally wouldn't be having this discussion.

Edit: forgot to say, of course AGI will be a completely different animal since it will be able to generate new kinds of ideas where even the concept is beyond the reach of a human brain.

9

SoylentRox t1_j5h5lxz wrote

This. And there are bigger unsolved problems that scale might help with.

For example, finding synthetic cellular growth serums. This is a massive trial-and-error effort: which molecules in bovine plasma do you actually need for full development of structures in vitro?

Growing human organs. Similarly, there is a vast amount of trial and error; you really need millions of attempts.

Even trying to do the above rationally, you need to investigate the effect of each unknown molecule in parallel. And you need a lot of experiments, not just one; you don't want to arrive at a false conclusion.

Ideally, I can imagine a setup where scientific papers stop being locked up in difficult-to-parse text and are instead published in a standard, machine-duplicable form. The setup and experiment sections are links to the actual files used to configure the robotics, and the results are the unabridged raw data. The analysis is done by an AI that was prompted on what you were looking for, so there can't be accusations of cherry-picking a conclusion.
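
As a sketch, such a machine-duplicable paper might look less like prose and more like a structured record. Everything below (schema, field names, URLs) is invented for illustration, not an existing standard.

```python
# Invented schema for a machine-duplicable paper. Field names and
# URLs are illustrative placeholders, not any existing standard.
paper = {
    "title": "Serum-free growth factor screen, run 7",
    "setup": "https://repo.example.org/run7/robot-config.json",  # files that configure the robotics
    "experiment": "https://repo.example.org/run7/steps.yaml",    # executable protocol steps
    "results": "https://data.example.org/run7/raw.parquet",      # unabridged raw data
    "analysis": {
        "model": "analysis-ai-v2",  # the AI that ran the analysis
        "prompt": "Estimate the effect of each serum component on cell confluence.",
    },
    "replications": [],  # filled in when other labs pick the paper up
}
```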

And journals don't accept non-replicated work. What has to happen is that the paper gets 'picked up' by another lab with a different source of funding (or funding structured to reduce conflicts of interest), ideally using a different robotic software stack to turn the high-level 'setup' files into actionable steps, a different robotics-AI vendor, and a different model of robotic hardware.

Each "point of heterogeneity" above has to be part of the data-quality metrics for the replication, and then, depending on the discovered effects, you draw reliable conclusions only from high-quality data.

Also, the above allows every paper to draw on all prior data on a topic rather than stand alone. Your prior should always be calculated from the set of all prior research, not split evenly between hypotheses.
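
A toy illustration of that last point, with invented counts: pooling five earlier studies of the same effect gives a prior centered near the pooled rate instead of a flat 50/50 split between hypotheses.

```python
# Toy example: compute a prior from all earlier studies instead of
# splitting 50/50 between hypotheses. All counts are invented.
prior_studies = [(18, 2), (15, 5), (20, 0), (12, 8), (17, 3)]  # (successes, failures)

a = 1 + sum(s for s, _ in prior_studies)  # start from a flat Beta(1, 1),
b = 1 + sum(f for _, f in prior_studies)  # then fold in the pooled evidence

print(f"Pooled prior: Beta({a}, {b}), mean {a / (a + b):.2f}")  # ~0.81, not 0.50
```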

Institutions are slow to change, but I can imagine a "new science" group of companies and institutions that uses the above, plus AGI, and surges so far ahead of everyone else in results that no one else matters.

NASA vs the Kenyan space program.

6

User1539 t1_j5grqw0 wrote

> If we want effective automation, or to make general human tasks faster, we certainly do not need AGI.

Agreed. We're very, very close to this now, and likely very far away from AGI.

> If we want inventions and technology which would be hard for humans to come up with in a reasonable time frame, we do need AGI.

This is where we disagree. I have many contacts at universities, and most of my friends have a PhD and participate in some kind of research.

In their work, they were evaluating Watson (IBM's LLM-style AI) years ago and talking about how it would help them.

Having a PhD necessarily means having tunnel vision. You do research that makes you the single person on earth who knows about the one cell you study, or the one protein you've been working with.

Right now, the condition of science is that we have all these researchers writing papers to help other scientists gain wider knowledge of things they couldn't possibly dedicate time to.

It's still nowhere near wide enough. PhDs aren't able to easily work outside their field, and the result is that their research needs to go through several levels of simplification before someone can find a use for it, or see how it affects their own research.

A well-trained LLM can tear down those walls between different fields. Suddenly, you've got an infinitely patient, infinitely knowledgeable assistant. It can write code for you. You can ask it what effect your protein might have on a new material, without having to become, or know, a materials scientist.

Everyone having a 'smart' assistant that can offer expert-level understanding of EVERY FIELD will bridge the gaps between the highly specialized geniuses of our time.

Working with the sort of AI we have now will take us to an entirely new level.

9

Baturinsky t1_j5iq32y wrote

And how safe is it to put those tools in the hands of, among others, criminals and terrorists?

1

User1539 t1_j5jjimd wrote

The same argument has been made about Google, and it's a real concern. Some moron killed his wife a week or so ago, and the headline read 'Suspect's Google history included "how to hide a 140lb body"'.

So, yeah. It's already a problem.

Right now we deal with it by having Google keep records and hoping criminals who google shit like that are just too stupid to use a VPN or anonymous internet.

Again, we don't need AGI to have that problem. It's already here.

That's the whole point of my comment. We need to stop waiting for AGI before we start to treat these systems as being capable of existential change for the human race.

1

Baturinsky t1_j5jl5nt wrote

I agree, a human + AI working together is already an AGI, with the only limit being that the human part is unscalable. And it can be extremely dangerous if the AI part is very powerful and both are unaligned with fundamental human values.

1

Artanthos t1_j5j1tiy wrote

We already have that.

Machine Learning algorithms are already making advances in mathematics and medicine.

1

TinyBurbz t1_j5gw55e wrote

>We don't need a superhuman thinking machine to do 99% of the tasks people want to automate. What we need is a slice of a 70IQ factory worker's brain that can do that one thing over and over again.

We need a better, smarter search engine, then? Something that can intelligently ingest and present information.

3

User1539 t1_j5gxrwt wrote

Honestly, what we need is something to translate between what an LLM can 'understand' needs to be done and the physical world.

Right now, we can ask an LLM what the process of, say, changing the oil in a car is.

We can also program an industrial robot to do that task, basically blind.

To automate jobs, we need an LLM-style understanding of the task and the steps required, coupled with the ability to communicate each of those steps to a 'body', checking as it goes that the process is proceeding correctly.

So, if an LLM could, say, break the problem into steps, taking into account the situation around it, it could probably do the job.

Imagine typing into ChatGPT a prompt like 'You are programming a robot arm. You need to pick up a glass. Write the code to pick up the glass in front of you.'

Then automatically send that to a camera/arm setup, and have the image processing describe back: 'The arm is to the left of the glass by 2 inches. Please program the arm to grab the glass.'

'The glass has been knocked over to the left, and is now on its side, 4 inches in front of the hand. Please program the arm to grab the glass.'

Ultimately it would be more complicated than that, but I think that's the basic idea of what many researchers are working on moving forward.

With a feedback loop of video 'describing' to the LLM what is happening, and the LLM adjusting to meet its task, you could have a very useful android.
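
A minimal sketch of that loop, where `describe_scene`, `ask_llm`, and `run_on_arm` are hypothetical stand-ins for a vision model, an LLM API, and an arm controller (none of them are real libraries):

```python
# Hypothetical perceive -> prompt -> act loop. All three helpers are
# placeholders for a vision model, an LLM API, and an arm controller.
def describe_scene(camera) -> str:
    return "the arm is 2 inches to the left of the glass"  # a vision model would go here

def ask_llm(prompt: str) -> str:
    return "move_arm(dx=2.0); close_gripper()"  # an LLM API call would go here

def run_on_arm(command: str) -> None:
    print("executing:", command)  # the arm controller would go here

def attempt_task(camera, goal="pick up the glass", max_steps=10):
    for _ in range(max_steps):
        scene = describe_scene(camera)
        if "holding the glass" in scene:  # crude success check
            return True
        prompt = (f"You are programming a robot arm. Goal: {goal}. "
                  f"Current scene: {scene}. Write the next arm command.")
        run_on_arm(ask_llm(prompt))  # the LLM plans the step, the arm executes it
    return False

attempt_task(camera=None)  # stub camera; a real loop would pass a live video feed
```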

3

TinyBurbz t1_j5hp5hl wrote

>With a feedback loop of video 'describing' to the LLM what is happening, and the LLM adjusting to meet its task, you could have a very useful android

That's reinforcement-learning/game-AI territory, and it's already out there. The algorithm is given an outcome like "win this match" or "pour water into this cup" and works out how to achieve it on its own. It's how a lot of self-driving models work, and how OpenAI delivered Dota 2 bots that are indistinguishable from real players (they even rage if they can't follow their normal routine).
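
For the curious, here's that outcome-driven idea boiled down to a toy example, a made-up one-dimensional "reach the cup" task (nothing to do with the actual Dota 2 system): the agent is only rewarded for the outcome and works out the moves itself.

```python
# Toy outcome-driven learning: an agent on a line must reach the cup
# at position 9. It is only rewarded for the outcome, never told how.
import random

q = {(s, a): 0.0 for s in range(10) for a in (-1, 1)}  # state-action values

for _ in range(2000):  # training episodes
    s = 0
    for _ in range(50):
        a = random.choice((-1, 1))  # explore with random moves
        s2 = min(9, max(0, s + a))
        r = 1.0 if s2 == 9 else 0.0  # reward given only for the outcome
        # Q-learning update: learn the value of the best follow-up move
        q[(s, a)] += 0.5 * (r + 0.9 * max(q[(s2, -1)], q[(s2, 1)]) - q[(s, a)])
        s = s2
        if r:
            break

policy = [max((-1, 1), key=lambda a: q[(s, a)]) for s in range(9)]
print(policy)  # after training, typically all +1: head straight for the cup
```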

What I foresee ultimately is the tools we already use, super-enhanced by AI. For example, a Wolfram-GPT macro for Visual Studio that generates the menial parts of code, leaving the coder to figure out the harder logic themselves, which the macro can then pick up on and offer complete code for.

Or, let's say someone is writing a story but doesn't want to write out a full conversation between two characters, or perhaps they need help crafting lore without also having to write a prequel.

While I know art AIs make beautiful renderings, to me their potential is squandered on the lazy. Getting more into this, AI art could be so much more if used as a tool. It could do amazing things, like generating real-world textures so that every tree in a game could be unique. But as it stands, people seem much more interested in letting AI do the work for them instead of letting AI enhance the work they have already done.

I know this sub has a hard-on for letting AI do all this shit on its own as if it's alive, but to me, that really stifles these tools. As it stands right now, AI is a viral app fad that will fade into the background to deliver nothing but ads and more disturbing YouTube Kids content. I know how badly people want self-aware machines and mistake these tools for something living. Everyone arguing about letting AI have no limits is missing the point of what the creators of these tools want from them.

2

RabidHexley t1_j5lthcl wrote

> While I know art AIs make beautiful renderings, to me their potential is squandered on the lazy. Getting more into this, AI art could be so much more if used as a tool. It could do amazing things, like generating real-world textures so that every tree in a game could be unique. But as it stands, people seem much more interested in letting AI do the work for them instead of letting AI enhance the work they have already done.

This is the main thing that sticks out to me about the AI art revolution, in terms of how it'll really change the game. People are laser-focused on the idea of AI creating bespoke art pieces in their entirety. But a lot of art, be it illustration, animation, game design, comics, etc., contains a lot of tedious, repetitive "art <space> work" that is only tangential to the artists' creative vision and could be automated by tech like this.

Another example would be something like a comic book, manga, or animated series, where the artist designs the world and art style, draws out the characters and their unique looks, etc., but is then able to use AI to rapidly generate backdrops or background characters that fit their specific style, allowing them to focus on the more specific, key, creative segments of the work.

This could drop the cost and massively increase the accessibility of mediums that currently require countless tedious hours to produce an incredibly small amount of content, or huge teams of creatives made to do grunt work.

2

duffmanhb t1_j5h4dte wrote

Based on what I've heard about Google's AI, that type of AGI is already there. I don't think any AGI will ever make everyone content, as it's a broad, ill-defined, moving goalpost, and digital processing is fundamentally going to be different from biological processing, but the AI Google has is really, really good. Mostly because it's multiple different types of AIs all networked together, connected to the internet, and able to learn novel tasks on demand.

1

Original_Ad_1103 t1_j5igg1q wrote

“70IQ factory worker's brain”

Bruh, disrespectful, you had to compare it to a human being?

1

User1539 t1_j5jj3wy wrote

I'm not comparing it to a human being; I'm saying many, many jobs could be automated if we could take a single function of the lowest working human.

2

Original_Ad_1103 t1_j5ncul7 wrote

But it’s still a human, you just said “lowest working human”, why lowest? Why 70IQ? Just say factory worker, even a repetitive one, I’m not denying that repetitive simple jobs usually have people that aren’t good at complex tasking or of particular intelligence, but still, no need to bring IQ into this, that’s kinda rude. It’s like saying “Just need an AI who can do a repetitive task like a cashier with Down Syndrome’s.”

1

User1539 t1_j5ndt3y wrote

70 is the limit for getting Social Security and not having to work. It is literally the line where someone is expected to go out and get a job.

I'm literally saying we don't need 'smarter than human' AGI. An AI that could do the work we give to the people of whom we expect the least would be an existential change.

IQ is a common measure of someone's intelligence. But, if the mention of a measure of human intelligence offends you, then you probably shouldn't take part in conversations where human intelligence is routinely compared to machine intelligence.

1

Original_Ad_1103 t1_j5ng0j2 wrote

I’m just saying, I know IQ is a common measurement, and that it’s directly correlated with work. I’ve not seen that many comparisons between IQ and machine intelligence in subs, certainly not a low IQ. Even though the comparison is true, it’s still “offensive” to some cuz there’s discriminatory undertones to it. Like you said, taking the “slice of a 70IQ brain”, bruh, you could’ve made any other example. Like just a repetitive task, or taking a slice of a simple program that does the same thing over and over again.

1

User1539 t1_j5nhi04 wrote

No, you're being offensive.

70 is a measure of a human IQ. Lots of humans have an IQ of around 70. They're regular, hard working people. There's nothing wrong with them.

I'm using that number because it is a number used to determine if someone is capable of employment, not to determine if they're good people.

I'm saying if we took away all the jobs from people with IQs of 70 and below, it would be earth-shattering. Because those people do a lot of work. Good work. Like good people do.

I'm literally saying that most people worry about AI becoming smarter than the smartest human, forgetting that most of us fall far below that line, and replacing all the hard working people in factories is going to change EVERYTHING.

But, deep down, you think people with low IQs are disgusting, and anyone that talks about them must be insulting them. Because you literally can't imagine a world where someone with a 70IQ is simply a reference point, and not an insult.

If we were talking about flying jets, and I offhandedly mentioned the robot would have to be 6ft tall, as that is the height cutoff for flying a jet, would you be insulted on the pilots' behalf? No. Because you don't think 6ft-tall jet fighter pilots are 'less' and need your defending.

Not only do factory workers not need you to stick up for them, you're showing your true colors with how you're acting like they're so mentally challenged no one should talk about them at all.

1

RabidHexley t1_j5lu71s wrote

I don't think they're saying that actual factory workers are unintelligent, but that an AI wouldn't need to simulate a great deal of intelligence in order to perform a lot of the menial tasks humans are made to do. Even many complex jobs or trades are largely task-oriented, demanding skill but not necessarily great leaps of intuition. Your average human is well, well above the necessary intelligence to perform the average job, but we do them because somebody has to (and because we need jobs, but that's a whole other thing).

2

MacacoNu t1_j5erbwy wrote

I made a prompt to get Wolfram Alpha results directly in ChatGPT. I posted it here, and it has helped me a lot: https://www.reddit.com/r/ChatGPT/comments/10aaq5m/creating_a_superpowered_assistant_with_chatgpt/

46

YobaiYamete t1_j5irbw3 wrote

What does Wolfram Alpha offer that ChatGPT doesn't already do? Non-troll question; I've never used Wolfram Alpha before and only know about it from asking ChatGPT just now lol. It sounds like a primitive version of ChatGPT.

6

CanuckButt t1_j5sb0zd wrote

"Computational knowledge engine"

Maybe the last hurrah of trying to hardcode intelligence into computers.

2

hopelesslysarcastic t1_j5gjrn6 wrote

Thanks for this, man. I saw your post on ChatGPT and got it set up pretty quickly. Awesome.

5

HeinrichTheWolf_17 t1_j5eiep4 wrote

We’re catapulting right into the movie ‘Her’. Things are moving fast, and reality is based for that.

Accelerate 🚘

36

dmit0820 t1_j5fugbk wrote

Just wait until these models are multimodal and can simultaneously process video, text, and robotics data. DeepMind already created an early prototype called Gato.

18

Proc_Gene_Coll t1_j5fm38z wrote

This is the way the singularity happens
Not with a bang but a Reddit post

18

ecnecn t1_j5grari wrote

I know some people in a western EU country who are training Davinci to replace most of the personnel in their company's legal department and to eliminate contracts with big law firms. It's mainly trained on employment law, financial law, and EU regulations. The layoffs and contract cancellations will come in 2023/24. It's a big bank. The money saved is gigantic, especially on the big law firms that charged exorbitant hourly rates in the past. From time to time, the bank simply offered law-firm consultants better contracts, because hiring certain people was cheaper than paying the law firms in question. Now they want to replace them with AI. Things move faster behind the curtain than people see... Some of the big names in the law business will vanish or massively reduce personnel in the next few years.

14

Circ-Le-Jerk t1_j5h56zc wrote

I know a company that's currently training and tuning a model on its sales call center. They are transcribing all their sales calls and rating their quality, then using these to fine-tune the model. In return, salespeople will have dynamic scripts, optimized on tens of thousands of successful sales calls, that tell them exactly what to say.
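
The data-prep step for something like that might look like the sketch below. The record shape, field names, and rating threshold are generic placeholders, not any specific vendor's fine-tuning format.

```python
# Sketch: turn rated call transcripts into fine-tuning examples.
# The record shape and fields are placeholders, not a vendor format.
import json

calls = [  # invented examples; real data would come from the transcription pipeline
    {"transcript": "Customer: ...\nRep: ...", "rating": 4.8, "outcome": "sale"},
    {"transcript": "Customer: ...\nRep: ...", "rating": 2.1, "outcome": "no sale"},
]

with open("train.jsonl", "w") as f:
    for call in calls:
        if call["rating"] >= 4.0 and call["outcome"] == "sale":  # keep only strong calls
            record = {
                "prompt": "Suggest the rep's next line in this call:\n" + call["transcript"],
                "completion": call["transcript"].split("Rep:")[-1].strip(),
            }
            f.write(json.dumps(record) + "\n")
```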

4

BigShoots t1_j5hpa62 wrote

>salespeople will have dynamic scripts, optimized on tens of thousands of successful sales calls, that tell them exactly what to say.

That's nice, but it also means that none of those jobs will exist within two years.

2

visarga t1_j5gwe6f wrote

> Some of the big names in the law business will vanish or massively reduce personnel in the next few years.

So there are two choices here

  1. use AI to reduce costs, assuming AIs are perfect
  2. use AI to increase profits, assuming realistic AIs

You think 1 is more probable. I think people are still necessary to maximise profits. AI works better with people around.

3

ecnecn t1_j5gxbsv wrote

There will be a core team of their best lawyers (honors-exam lawyers), but the law firms and their consulting contracts will take a direct hit, along with some employees of the legal department. As of now, purpose-trained Davinci reaches an accuracy of about 75%, while the costly consultants had an accuracy of about 80% (they actually measured it)... 5% less accuracy (GPT-3.5) but far lower cost. If GPT-4 is just a bit better, it will change the workplace forever. You are right that it works better with people, but you just need the elite of each department. This bank has a core team of 5 highly paid syndicus lawyers and 30 contract lawyers from law firms. They will reduce the core team to 3 and cut the contracts. Now extrapolate this step to every bank in the EU...

6

BootyPatrol1980 t1_j5hyuuy wrote

This is how I suspect AI in the next decade is going to play out: APIs interacting with each other to create hybrid generative systems.

So imagine you have an AI assistant, "Fred". You ask Fred how many calories are in a tonne of Lucky Charms. Fred makes an API call to a recommender AI that tells him who to ask to calculate that, and it's Wolfram. Fred then makes an API call to Wolfram and parses the response back to you.
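
A bare-bones sketch of that routing pattern; the specialist registry, endpoints, and recommender logic below are all invented for illustration.

```python
# Hypothetical assistant-routing pattern ("Fred"). The specialist
# registry, endpoints, and recommender logic are all invented.
import requests

SPECIALISTS = {
    "computation": "https://wolfram.example.com/v1/query",  # placeholder URLs
    "general": "https://general.example.com/v1/ask",
}

def recommend_specialist(question: str) -> str:
    # Stand-in for the recommender AI; a real one would classify the query.
    return "computation" if "calories" in question else "general"

def fred(question: str) -> str:
    url = SPECIALISTS[recommend_specialist(question)]
    resp = requests.post(url, json={"query": question}, timeout=30)
    resp.raise_for_status()
    return resp.json()["answer"]  # Fred parses the response back for you

# fred("How many calories are in a tonne of Lucky Charms?")
```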

Rather than one company "dominating" I feel like there will be hundreds of these specialist AI systems.

7

lambolifeofficial OP t1_j5hzicy wrote

This sounds plausible. But wouldn't it be inefficient?

3

BootyPatrol1980 t1_j5ie25m wrote

Not much more so than today's web applications, IMO. Most web calls end up pulling data or resources from several APIs as it is when you request a page.

My take on the multiple-AI front is that these AI systems will be like apps themselves: too specialized to dominate, but potentially able to excel in an area of expertise better than a generalized monolithic AI, if properly fine-tuned.

3

BitPax t1_j5ej57m wrote

Would be cool to see it do some LeetCode questions.

4