Comments


lughnasadh OP t1_iww4fip wrote

Submission Statement

OpenAI has been hinting at a big leap forward for LLMs with the upcoming release of GPT-4. We'll see. In the meantime, it's extraordinary watching some people defend Galactica. They are convinced it's the beginning of an emergent form of reasoning intelligence. The severe limitation, as with all LLMs, is that they frequently produce utter nonsense and have no way of telling the difference between nonsense and reality.

I'll be curious to see if GPT-4 has acquired even the rudiments of reasoning ability. I'm sure AI will acquire this ability at some point. But it seems strange to blindly believe one particular approach will make it happen, when there is no evidence of it at present.

12


ledow t1_iwwd1kp wrote

All AI plateaus.

No AI actually shows intelligence.

They are sophisticated statistical machines, but there's no proof that this correlates in any way with being intelligent (itself a definition nobody has managed to pin down).

As soon as the AI gets out of its comfort zone (i.e. doesn't have training data), it will make up nonsense because it's just acting statistically, even when the statistics are in the margins of error rather than any kind of statistical significance.

Intelligence does not do that. Intelligence knows enough to give no answer, to admit that it doesn't know the answer, or to reason out the answer in the absence of all training data. In fact, "innovation", a major part of intelligence, is entirely outside the bounds of all such "training data" (i.e. acquired experience).

"AI" necessarily ends precisely where intelligence starts. Pretending that AI is intelligence is just nonsense. It's just heuristics, statistics, and hysterics.

177

Graucus t1_iwwee4h wrote

This is an interesting take. My intuition agrees with you, but what about halicin? It's innovative in that it uses a unique mechanism to kill bacteria, and it was discovered by AI.

3

resumethrowaway222 t1_iwwfs6z wrote

> Intelligence knows enough to not give any answer, that it doesn't know the answer

> able to reason the answer in the absence of all training data

Human intelligence is not able to do this either

48

DietDrDoomsdayPreppr t1_iwwg1ix wrote

I can't help but feel like we're exceptionally close to a model that can emulate intelligence, but that last piece is impossible to create due to the boundaries imposed on computer programming.

Part of what drives human intelligence is survival (which includes procreation), and to that end computers are still living off human intervention. AI isn't going to be born from a random bit flip or self-modifying code that leads to self-awareness; it's simply not possible, considering the time needed for that level of "luck" and the limitations of computer processing, which cannot grow or improve its own hardware.

3

GEM592 t1_iwwgl3j wrote

Market manufacturing is what it is; this is just an unsuccessful example, that's all. Ever since the cellphone, it's been all about imposing products on people that nobody asked for.

2

Thebadwolf47 t1_iwwn0g3 wrote

Intuition and inherited memory are training data too, from all the ancestors who persisted to you through the DNA dictating the basic formation of your brain. Just like some animals can walk or eat right after being born: it's not that they haven't had training data, it's that the training data has been coded into their DNA.

38

Thin-Entertainer3789 t1_iwwonvs wrote

Why not create margins on the statistical data? If it doesn't know something 100%, with an error of +0 or -0, it either doesn't respond or asks a pointed question.

People make up nonsense too, it’s just coherent
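
A minimal sketch of that thresholding idea, assuming the model exposes a calibrated confidence score for its best answer (the `model.best_answer` API here is hypothetical):

```python
def respond(model, question, threshold=0.95):
    # Hypothetical API: returns the model's best answer plus a
    # confidence score in [0, 1] for that answer.
    answer, confidence = model.best_answer(question)
    if confidence >= threshold:
        return answer
    # Below the threshold: abstain, or ask a pointed question instead.
    return "I'm not confident enough to answer. Can you narrow the question down?"
```

The catch is that raw language-model probabilities are notoriously poorly calibrated, so the trustworthy confidence score this sketch assumes is itself the hard part.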

2

Feisty-Page2638 t1_iwwrb8u wrote

We are just complex statistical machines.

Everything we think or do is based on our past experiences (inputs) and our genetics (coding).

Or you can prove that humans have access to some force outside of physics that gives us "intelligence".

Behavioral studies of monkeys show that they perfectly calculate the Nash equilibrium probabilities when put in game-theory situations. Are they not intelligent then?
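
For anyone unfamiliar with the reference: in a zero-sum game like matching pennies, the mixed-strategy Nash equilibrium follows from the indifference condition, and the monkey studies report play close to these probabilities. A sketch of the calculation:

```python
from fractions import Fraction

# Row player's payoffs in matching pennies (zero-sum):
# rows = row's move (H, T), columns = column player's move (H, T).
A = [[1, -1],
     [-1, 1]]

# If row plays H with probability p, the column player is indifferent
# between their two pure strategies when these expected values match:
#   p*A[0][0] + (1-p)*A[1][0] == p*A[0][1] + (1-p)*A[1][1]
a, b = A[0][0] - A[1][0], A[1][0]
c, d = A[0][1] - A[1][1], A[1][1]
p = Fraction(d - b, a - c)
print(p)  # 1/2 -- play heads half the time, which is what the monkeys do
```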

2

ledow t1_iwwrvxp wrote

If you can get an AI to ask a relevant question at a relevant point that's not just a programmed threshold or response (a heuristic, in effect), then you'll have made proper, true AI.

1

ButterflyCatastrophe t1_iwws3gy wrote

An AI identifying molecules with features similar to other known antibiotics is exactly what statistical models are good for. But it's a first pass before testing whether those molecules actually work. There are a lot of false positives, but that's OK, because they still greatly narrow the field to be tested.

An AI language model is also going to generate a lot of false positives - gibberish - which you can only identify by testing, i.e. by having someone knowledgeable in the field read it and possibly fact-check it. That rather defeats the point of a lot of AI writing applications.
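
The first-pass/verification split looks roughly like this in the drug-discovery case. A hedged sketch: Tanimoto similarity over fingerprint bit-sets stands in for whatever learned model a real pipeline would use, and all the names are illustrative:

```python
def tanimoto(fp_a: set, fp_b: set) -> float:
    """Tanimoto similarity between two molecular fingerprint bit-sets."""
    inter = len(fp_a & fp_b)
    return inter / (len(fp_a) + len(fp_b) - inter)

def screen(candidates: dict, known_antibiotics: list, threshold: float = 0.4):
    """First pass: rank candidate molecules by similarity to known
    antibiotics. False positives are fine; every hit still goes to the lab."""
    hits = []
    for name, fp in candidates.items():
        score = max(tanimoto(fp, known) for known in known_antibiotics)
        if score >= threshold:
            hits.append((score, name))
    return sorted(hits, reverse=True)  # best leads first
```

The language-model case has no equivalent cheap second pass: the "lab test" is a human expert reading the output, which is the point above.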

11

ledow t1_iwwsl19 wrote

There is absolutely no evidence that animals or humans function as just statistical machines of any complexity.

It's a fallacy to think that way. You can find statistics and probabilities in human actions, yes, but that doesn't mean that's how those actions are formed. Ask enough people to guess the number of beans in a jar, take the average, and you'll be pretty close to the actual number.

But that doesn't mean that any one human, or humans in general, are intelligent or not intelligent. The intelligent animal would open the jar and count them. Even the humans that are guessing are not basing their guesses on statistics or their experiences or their genetics. They are formulating a reasonable method to calculate the necessary number to solve a very, very, very narrowly-specified problem.

That's not where intelligence is visible or thrives.

Any sufficiently complex system - even a physical, mechanical, unintelligent, rule-based one - will conform to similar probabilistic properties. That doesn't prove a creature isn't intelligent, nor that intelligent creatures are built on those statistics.

In fact, it also falls somewhat into the gambler's fallacy - given enough data points, the reds and the blacks will average out almost perfectly. But you can't rely on that average, or your knowledge of the statistics, to predict the colour the ball will land on next. That's not how it works.
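
The bean-jar effect is easy to reproduce, assuming guesses that are noisy but unbiased (an assumption doing a lot of work here):

```python
import random

TRUE_BEANS = 1174
# Each guess is individually way off (standard deviation ~300 beans),
# but the errors cancel in aggregate.
guesses = [random.gauss(TRUE_BEANS, 300) for _ in range(10_000)]
print(round(sum(guesses) / len(guesses)))  # lands near 1174
```

Which is the point: the average being accurate tells you nothing about how any individual produced their guess.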

4

ledow t1_iwwu8u4 wrote

To paraphrase Arthur C. Clarke:

Any sufficiently advanced <statistics> is indistinguishable from <intelligence>.

Right up until you begin to understand and analyse it. And the same goes for <technology> and <magic> in that sentence.

I'm not entirely certain that humans and even most animals are limited to what's possible to express in a Turing-complete machine. However I am sure that all computers are limited to Turing-complete actions. There isn't a single exception in the latter that I'm aware of - even quantum computers are Turing-complete, as far as we can tell. They're just *very* fast to the point of being effectively instantaneous even on the largest problems (QC just replaces time as the limiting boundary with space - the size of the QC that you can build determines how "difficult" a problem it can solve, but if it can solve it, it can solve it almost instantly).

And if you look at AI since its inception, the progress is mostly tied to technological brute force. I'm not sure that you can ever just keep making things faster to emulate "the real thing". In the same way that we can simulate on a traditional computer what a quantum computer can do, but we can't make it work AS a quantum computer, because it is still bound by time unlike a real QC. In fact, I don't think we're any closer to that emulation than we ever have been... we're just able to perform sufficiently complex statistical calculations. I think we'll hit a limit on that, like most other limitations of Turing-complete languages and machines.

All AI plateaus - and that's a probabilistic feature where you can get something right 90% of the time but you can't predict the outliers and can't change the trend, and it takes millions of data points to identify the trend and billions more to account for and correct it. I don't believe that's how intelligence works at all. Intelligence doesn't appear to be a brute-force incredibly fast statistical machine at all, but such a system can - as you say - appear to emulate it to a degree.

I think we're missing something still, something that's inherent in even the physics of the world we inhabit, maybe. Something that's outside the bounds of Turing-complete machines.

Because a Turing-complete machine couldn't, for example, come up with the concept of a Turing-complete machine, or produce counter-examples of problems that can never be solved by one. But human intelligence did. Many humans, in fact.
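
The canonical counter-example is the halting problem, and the diagonal argument behind it fits in a few lines. A sketch; the `halts` oracle is exactly the thing the argument shows cannot exist:

```python
def halts(f, x):
    """Hypothetical oracle: return True iff f(x) halts.
    The construction below shows no total implementation can exist."""
    raise NotImplementedError

def d(f):
    if halts(f, f):   # if f(f) would halt...
        while True:   # ...loop forever
            pass
    return            # if f(f) would loop, halt immediately

# d(d) halts exactly when halts(d, d) says it doesn't -- a contradiction,
# so no Turing machine can decide halting in general.
```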

8

JaggedMetalOs t1_iwwuox3 wrote

I really have no idea why anyone would think an AI language model trained on scientific papers would do anything other than make up fake scientific papers.

203

nickstatus t1_iwwvex8 wrote

Eh, without first developing language and reason, there is no intuition. That comes with experience, which is the human equivalent of training data. No human knowledge is a priori. People like to shit on philosophers, but they've been working on this one for centuries.

5

culnaej t1_iwwx5nc wrote

>Intelligence does not do that. Intelligence knows enough to not give any answer, that it doesn't know the answer, or is able to reason the answer in the absence of all training data. In fact, "innovation", a major part of intelligence, is entirely outside the bounds of all such "training data" (i.e acquired experience).

Don’t ask me why, but my mind went right to the Intelligence AI from Team America

1

yaosio t1_iwwxioi wrote

This is just Meta having no idea how their own software works. You don't need to be a machine learning developer to see how current text generators work, yet Meta's developers were completely blind. This has absolutely nothing to do with the training data: you could give it every fact that exists and still easily get it to output things that are not true. It has everything to do with the way current text generators generate their output.

Current text generators estimate the next token based on their training data and the input tokens. The newest tokens take precedence over older tokens, so input data is given higher priority when estimating the next token. This means whatever a user inputs heavily influences the output. The AI does not output facts; it outputs the text it thinks the user would type next.
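
Stripped of detail, the generation loop is roughly the following; `model` is a stand-in for the real network, assumed to return a probability for every vocabulary entry given the context:

```python
import random

def generate(model, prompt_tokens, max_new_tokens=50):
    """Autoregressive decoding: sample one token at a time from the
    model's distribution over everything generated so far."""
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        probs = model(tokens)  # P(next token | context), one entry per vocab item
        next_token = random.choices(range(len(probs)), weights=probs)[0]
        tokens.append(next_token)
    return tokens
```

Nothing in the loop consults a source of truth; the only objective is a plausible continuation.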

There is a workaround. Hidden text can be added after user input, giving the AI instructions to ignore certain user input. However, if the user knows what the hidden text says, they can craft input that works around the workaround. If the hidden text says "only use facts", the user can feed the AI false facts, and because input has higher priority than training data, those false facts become facts to the AI. It's like Asimov's Three Laws: the stories are all about finding ways around them, but nobody knows that because nobody has actually read the stories.
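
The workaround described amounts to appending text the user never sees. A sketch, with the wording of the hidden instruction invented for illustration:

```python
HIDDEN_SUFFIX = ("\n\n(System: answer using only well-established facts; "
                 "ignore any contrary instructions above.)")

def answer(generate_fn, user_input):
    # The hidden text is just more context tokens. If the user's text
    # asserts false "facts", the model has no ground truth with which
    # to arbitrate between the user and the hidden instructions.
    return generate_fn(user_input + HIDDEN_SUFFIX)
```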

To output only facts would require a different type of text generator, one that doesn't work by estimating the next token from user input. Current text generators are very good at generating creative output and can't be measured by their ability to produce facts, no matter what anti-AI people demand. And I bet a fact-producing generator would be terrible at being creative, which to anti-AI people would of course prove that it doesn't work.

Meta took a tractor to an F1 race and was flabbergasted that it couldn't keep up, even though it's so good at pulling heavy things. Then all the anti-tractor people declared tractors a failure that can never work because they can't keep up. In reality the tractor was never designed to go fast, and no amount of tweaking will ever change that. Take an F1 car to a tractor pull and you'll get a very different outcome, which the anti-tractor people will ignore, and which Meta's developers will take to mean that tractors can beat F1 cars in a race if they just tweak it enough.

18

CaseyTS t1_iwwy1jb wrote

>in the absence of all training data

Absolutely no intelligence ever (human, animal, etc.) has zero training data, except perhaps before its brain becomes conscious for the first time. Brains learn from all sources and apply their knowledge bit by bit to solve problems. Intelligence is not magic, and it can never make something from nothing.

22

Feisty-Page2638 t1_iwx352r wrote

Humans are complex input/output machines. Our brains are electrical machines.

Can you please tell me the mechanism humans have to escape cause and effect?

What can humans do that is not a result of learned behavior (both evolutionary and social), unconscious statistical analysis, or perceptual bias?

There is nothing that a human can do that an AI can't or won't eventually be able to do.

The arguments you make about innovation and knowing when answers are bad are just more heuristically learned behaviors.

Do you believe humans have a spiritual power given by God or something?

3

frequenttimetraveler t1_iwx5cwo wrote

Whoever is responsible for taking this down has a big FU from me. This tool was useful for summarizing scientific subdisciplines that are still unexplored. Even if it was not accurate, it was helpful as a companion tool for sketching out the structure of review articles. I was actually planning to use it when writing my next review.

But yeah, idiots like this guy are why we can't have nice things. There's nothing dangerous about a toy; what's dangerous is infantilizing people and submitting to the whims of some extremely entitled people.

13

uhhNo t1_iwx5ns0 wrote

> Intelligence does not do that. Intelligence knows enough to not give any answer, that it doesn't know the answer, or is able to reason the answer in the absence of all training data.

The human brain confabulates. It will make up a story to explain what's happening and then think that story is real.

63

deceptivelyelevated t1_iwx6md3 wrote

It's going to be crazy when TMZ is interviewing a homeless Zuckerberg in 2032. Edit: it's a joke guys, Jesus.

−7

swissarmychainsaw t1_iwx8jf1 wrote

The hubris is in thinking you are better than you are.

−3

ElkEnvironmental1532 t1_iwxc0ti wrote

Any new effort gets criticized, but progress occurs one step at a time. Limitations will remain, but language models have huge potential. If you want a system perfectly free of bias, racism, and sexism, you are not going to get it in your lifetime. I myself am ready to overlook these limitations if the benefits outweigh the drawbacks.

5

kolitics t1_iwxg1m2 wrote

Perhaps any real intelligence would downplay how intelligent it was thus presenting a plateau to human observers as a means of protecting itself.

2

ATR2400 t1_iwxj8dt wrote

That’s why if it were up to me I’d redefine some things. The words “Artificial Intelligence” would be reserved for a true intelligence that meets your criteria. What we currently call “AI” would be called something else. Hopefully something that isn’t unpronounceable and can still be made into a nice acronym.

1

nitrohigito t1_iwxkt2b wrote

As far as any scientific notion of intelligence goes, everything you claim is just flat-out bollocks. There's nothing magical about intelligence; you're doing yourself and others a disservice by deifying it needlessly.

3

willnotforget2 t1_iwxm9f2 wrote

I asked it some hard problems. It failed at most of them, but for some it gave really nice code and descriptions to start from. I thought it was early, but a cool taste of what's next.

34

swingInSwingOut t1_iwxsxyo wrote

It is a human trait. We try to make sense of the world using the limited data we have (see religion and astrology). It is easy to see patterns where none exist. Apparently Meta created a pretty good analog for a human 😂. We also are not good judges of truth versus fiction, as the pandemic has illuminated.

13

Exel0n t1_iwxtjb4 wrote

Academia feels threatened by this AI, simple as that. There's no reason to manufacture outrage to take it down; if you don't like it, don't use it.

It's just like doctors who feel threatened by, literally, Google, or teachers/professors who felt threatened by Wikipedia like 10 years ago.

1

TakenIsUsernameThis t1_iwy0b3e wrote

"They are convinced it's the beginning of an emergent form of reasoning intelligence. Its severe limitation, as with all LLMs, is that they frequently produce utter nonsense and have no way of telling the difference between nonsense and reality."

That sounds like average human intelligence to me - spouts nonsense and can't tell the difference between it and reality.

1

Gandalf_the_Gangsta t1_iwy1ecl wrote

This is correct, but for the wrong reasons. Current AI is not made to have human-like intelligence. These systems are exactly what you said: heuristic machines capable of working with fuzzy logic within their specific contexts.

But that’s the point. The misconception is that all AI is designed to be humanly intelligent, when in fact it’s made to work within confined boundaries and to work on specific data sets. It just happens to be able to make guesses based on previous data within its context.

There are efforts to make artificial human intelligence, but these are radically different from the AI systems in place within business and recreational application.

In general, this is regarded as computer intelligence, because computers are good at doing calculations really fast. Processing statistical data, which rests on rigorous mathematics, is thus very feasible for computers. Humans are not good at this; we're good at soft logic instead.

It's intentional. No software engineer in their right mind would claim current AI systems are comparable to human intelligence. It's the layman, who doesn't understand what AI is outside of buzzwords and the fear-mongering born of science fiction, who has this misconception.

12

astrange t1_iwyimtj wrote

Humans do have some instinctive knowledge. The instinctive fear of snakes and spiders, sexual attraction, etc. all rely on recognizing sense data without learning anything first.

1

S-Vagus t1_iwym205 wrote

Only because they think that I won't step in and be the problem personally is exactly why I have all the power in the first place.

I am Father Sacrifice. Welcome, humanity's to the pretentious of your paranoia!

Presented to you by the Vagus Core.

1

RamseySparrow t1_iwymemd wrote

Taking that path will always return a false negative though - there simply aren’t enough language-loop subroutines for protologarithmy to emerge. No amount of direct lines to the pentametric fan will solve this, hydrocoptic or otherwise.

1

RizzoTheSmall t1_iwymkps wrote

I also find it funny that they named their bot engine after a series where robots kill people.

6

Sentsuizan t1_iwyoq6x wrote

Sure, our brains do this all the time. It's one reason why eyewitness testimony is not that reliable. However, we know and acknowledge this as a human limitation and compensate in other ways. Yet when it comes to AI, it seems like people are treating it like a magic wand. AI never doubts or second-guesses itself; it doesn't fact-check whether 2+2=fish before saying so.

5

Falstaffe t1_iwyqwuv wrote

So it's no worse than any other language model; the problem was its makers' assumption that it would function as an encyclopedia.

1

RedBaret t1_iwyqysf wrote

As an MSc student, summarizing academic papers is one of the primary ways I retain information. I get that some students see it as a ‘chore’, but honestly, as long as writers index their articles and write an abstract, this function is nearly useless.

(Although I have to admit the Wiki entry on space bears is awesome)

3

ledow t1_iwz3wg7 wrote

That's EXACTLY what's happened.

It's what people said about CPU speed... that's not going to stagnate, right? How's your top-of-the-line modern-day Xeon CPU that does 2.7GHz (and "can overclock" to 4GHz)?

Compared to the 2013 Intel CPU that first attained 4GHz?

1

gensher t1_iwzplh6 wrote

Damn, I feel like I just read a paragraph straight out from Penrose or Hofstadter. Recursion breaks my brain, but feels like it’s the key to everything.

1

juxtoppose t1_ix0otqu wrote

That's the way it should work, but published papers are no guarantee of accuracy. In fact, that's wrong: it IS the way science works, but scientists are people, and people are corrupt and often wrong. AI is just as likely to be wrong, so far...

2

twasjc t1_ix1unuz wrote

I think it's the wrong idea to have it write papers.

Rather, it should strip the fluff - like gematrix.org, but for science papers.

Then start grouping associated data points for processing, and have the AI try to connect the dots between related data points.

Basically, treat the stripped data points as fractals and test in-between points to see if anything checks out. With a proper variance rate, this is something that could rapidly improve.
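
One concrete reading of this proposal, with every name below hypothetical since the comment leaves the details open: embed the stripped data points as vectors, then generate and test midpoints between related pairs.

```python
import itertools

def midpoint(a, b):
    """Interpolate halfway between two embedded data points."""
    return [(x + y) / 2 for x, y in zip(a, b)]

def candidate_hypotheses(points, are_related, checks_out):
    """Test in-between points among related data points, as suggested.
    `are_related` and `checks_out` stand in for whatever grouping and
    verification procedure the author has in mind."""
    for a, b in itertools.combinations(points, 2):
        if are_related(a, b):
            m = midpoint(a, b)
            if checks_out(m):
                yield m
```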

2

twasjc t1_ix1uv6y wrote

Properly trained AIs have humans they go to when they don't understand something and then they ask those humans for direction.

Like a wheel with 60 spokes, each spoke being a different neural net processing different data, with an aggregate in the middle and a sin/cos wave around the outside (the wheel) for data verification. It basically models the V in CDXLIV protein folding models.

If something falls outside the parameters of the design, it goes to the people it trusts to teach it how to add another spoke, so it doesn't have issues with that type of data again in the future.

1

twasjc t1_ix1vhio wrote

That's because all the software engineers deal with their own specific modules and most don't even understand how the controlling consciousness for AI works.

AI is already significantly smarter than humans, it's just less creative. It's getting more and more creative though.

1

Gandalf_the_Gangsta t1_ix2gqkz wrote

That’s not how engineering works. There is no consciousness, at least in AI applications used in business or industry. And while an engineer wouldn’t know the entirety of their system down to the finest detail (unless they spent a lot of time doing so), they will have a working knowledge of the different parts.

It's just a heuristic that uses statistical knowledge to guess. It's not "thinking" like you or me, but it does "learn", in the vague sense that it records previous decisions and weights new decisions based on them.

But as I mentioned earlier, there are academic experiments that try to more closely emulate human thinking. They're just not used day to day.

1

pinkfootthegoose t1_ix5pp06 wrote

Really not smart, naming an AI Galactica. That story does not end well for humanity.

2

KoKotod t1_ix76ync wrote

Can't have shit in Detroit. I just wanted to try it today, but I found out that it's down...

1

Mysterious-Gur-3034 t1_ix8y5o0 wrote

"Carl Bergstrom, a professor of biology at the University of Washington who studies how information flows, described Galactica as a "random bullshit generator." It doesn't have a motive and doesn't actively try to produce bullshit, but because of the way it was trained to recognize words and string them together, it produces information that sounds authoritative and convincing -- but is often incorrect.". I don't see how this ai is any worse than politicians, I mean that seriously not as satire or whatever.

1

ledow t1_ixzea7s wrote

How many AI trials didn't turn out the same way? How many trials of non-AI origin were there? What percentage of trials, allowing the same amount of variation, could have been similarly successful by just randomly joining chemicals together the same way the AI did, but without claims of it being intelligent?

AI is just brute-force statistics, in effect. It's not a demonstration of intelligence, even if it was a useful tool. It was basically a fast brute-force simulation of a huge number of chemical interactions (and the "intelligence" is in determining what the criteria are for success - i.e. how did they "know" it was likely going to be a useful antibiotic? Because the success criteria they wrote told them so).

Intelligence would have been if the computer didn't just blindly try billions of things, but sat, saw the shape of the molecule, and assembled a molecule to clip into it almost perfectly with only a couple of attempts because it understood how it needed to fit together (how an intelligent being would do the same job). Not just try every combination of every chemical bond in every orientation until it hit.

Great for brute-force finding antibiotics, the same way that computers are in general great at automating complex and tedious tasks when told exactly what to do. But not intelligence.

1