Comments

You must log in or register to comment.

JaggedMetalOs t1_iwwuox3 wrote

I really have no idea why anyone would think an AI language model trained on scientific papers would do anything other than make up fake scientific papers.

203

kolitics t1_iwxf9ie wrote

Perhaps the real test was whether they would be identifiable as fake.

51

HumanSeeing t1_iwyn6tu wrote

Or.. perhaps.. the real test was the friends we made along the way!

21

gbot1234 t1_ix00g4o wrote

I even found a significant other (p<0.05)!

1

stage_directions t1_iwz7o73 wrote

Yes, they would, when people tried to reproduce or build upon the science and shit didn’t work. That’s how science works.

3

juxtoppose t1_ix0otqu wrote

That’s the way it should work but published papers is no guarantee of accuracy. In fact that’s wrong it IS the way science works but scientists are people and people are corrupt and often wrong. AI just as likely to be wrong, so far...

2

stage_directions t1_ix0pp34 wrote

I’m a scientist. Depending on the field, it’s not that hard to tell.

1

twasjc t1_ix1unuz wrote

I think it's the wrong idea to have it write papers.

Rather it should strip the fluff like gematrix.org but for science papers.

Then start grouping associated data points for processing and have the AI try to connect the dots between related data points.

Basically treat the stripped data points as fractals and test inbetween points to see if anything checks out. With a proper variance rate this should be something that could rapidly improve

2

frequenttimetraveler t1_iwy516m wrote

nobody said they wouldn't

The galactica.org website had a prominent disclaimer in every page that the content is INACCURATE. But some scientists are so stupid they can't read

−21

ledow t1_iwwd1kp wrote

All AI plateaus.

No AI actually shows intelligence.

They are sophisticated statistical machines, but there's no proof that that correlates with being intelligent (which is an unfixable definition in itself) in any way.

As soon as the AI gets out of its comfort zone (i.e. doesn't have training data), it will make up nonsense because it's just acting statistically, even when the statistics are in the margins of error rather than any kind of statistical significance.

Intelligence does not do that. Intelligence knows enough to not give any answer, that it doesn't know the answer, or is able to reason the answer in the absence of all training data. In fact, "innovation", a major part of intelligence, is entirely outside the bounds of all such "training data" (i.e acquired experience).

"AI" necessarily ends precisely where intelligence starts. Pretending that AI is intelligence is just nonsense. It's just heuristics, statistics, and hysterics.

177

uhhNo t1_iwx5ns0 wrote

> Intelligence does not do that. Intelligence knows enough to not give any answer, that it doesn't know the answer, or is able to reason the answer in the absence of all training data.

The human brain confabulates. It will make up a story to explain what's happening and then think that story is real.

63

Sentsuizan t1_iwyoq6x wrote

Sure, our brains do this all the time. It's one reason why I witness testimony is not that reliable. However, we know and acknowledge this as a human limitation.and compensate in other ways. Yet when it comes to AI it seems like people are treating it like a magic wand. AI never doubts or second guesses - it doesn't fact check whether 2+2=fish before saying so

5

Natty-Bones t1_iwz1bkz wrote

Eye witness. As in, they saw the thing happen.

4

Sentsuizan t1_iwz1gxt wrote

It's not my fault that Google text to speech sucks at homophones

5

Mesmerise t1_iwzygf3 wrote

Nice try, Google text-to-speech developer.

2

Sentsuizan t1_iwzyk87 wrote

If I were making Google money I sure wouldn't be on Reddit in my underwear.

I'd be naked.

3

[deleted] t1_iwyhya6 wrote

[deleted]

3

RamseySparrow t1_iwymemd wrote

Taking that path will always return a false negative though - there simply aren’t enough language-loop subroutines for protologarithmy to emerge. No amount of direct lines to the pentametric fan will solve this, hydrocoptic or otherwise.

1

resumethrowaway222 t1_iwwfs6z wrote

> Intelligence knows enough to not give any answer, that it doesn't know the answer

> able to reason the answer in the absence of all training data

Human intelligence is not able to do this either

48

PhelesDragon t1_iwwhkg7 wrote

Yes we are, it's called intuition, or inherited memory. It's general and abstract, but real.

16

Livid-Ad4102 t1_iwwmyyb wrote

He's saying that humans do the same thing and give nonsense answers and can't just say they don't understand

39

YourGenerational t1_iwxcd23 wrote

That might be social training? I wonder whether the propensity to make up answers rather than admit a lack of knowledge or understanding has any cultural biases?

4

swingInSwingOut t1_iwxsxyo wrote

It is a human trait. We try to make sense of the world using the limited data we have (see religion and astrology). It is easy to see patterns where none exist. Apparently Meta created a pretty good analog for a human 😂. We also are not good judges of truth or fiction as the pandemic has illuminated.

13

Thebadwolf47 t1_iwwn0g3 wrote

intuition and inherited memory is training data, from all your ancestors that persisted to you through their DNA dictating the basic formation of your brain. just like some animals can walk or eat right after being born. it's not that they haven't had training data, it's just that this training data has been coded in their DNA

38

__System__ t1_iwwxiba wrote

Not just coded. Nucleic acid performs computation itself and does not merely contain instructions.

12

artificial_scarcity t1_iwwtgwz wrote

People make up answers for things they don't fully understand all the time, and certainly aren't able to admit they're wrong when they do it. See the world's various religions for hundreds of examples.

9

nickstatus t1_iwwvex8 wrote

Eh, without first developing language and reason, there is no intuition. That comes with experience. Which is the human equivalent of training data. No human knowledge is a priori. People like to shit on philosophers, but they've been working on this one for centuries.

5

astrange t1_iwyimtj wrote

Humans do have some instinctive knowledge. The instinctive fear of snakes and spiders, sexual attraction, etc, all rely on recognizing sense data without learning anything first.

1

CaseyTS t1_iwwy5e6 wrote

That is part of our training set for decision making. It comes from instinct and influences our decisions based on the past development of our heredity.

3

kolitics t1_iwxfkqm wrote

It's just heuristics, statistics, and hysterics.

13

lehcarfugu t1_iwwmq3u wrote

Yeah, clearly displayed by this guys response

3

wltrsnh t1_iwx4m3u wrote

Human intelligence can do it. It is called science, democracy, entrepreneurship, which all are collective enterprises and trial-by-error processes.

−1

CaseyTS t1_iwwy1jb wrote

>in the absence of all training data

Absolutely no intelligence ever (human, animal, etc) has zero training data, exepct perhaps before their brains become conscious for the first time. Brains learn from all sources and apply their knowledge bit-by-bit to solve problems. Intelligence is not magic, and it can never ever make something from nothing.

22

yaosio t1_iwwxioi wrote

This is just Meta having no idea how their own software works. You don't need to be a machine learning developer to see how current text generators work, yet Meta developers were completly blind. This has absolutely nothing to do with the training data, you could give it every fact that exists and you can still easily get it to output things that are not true. It has everything to do with the way current text generators generate their output.

Current text generators estimate the next token based on it's training data and input tokens. The newest tokens take precedence over older tokens so input data is given a higher priority for estimating the next token. This means whatever a user inputs heavily influences the output. The AI does not output facts, it outputs text it thinks the user would type in next.

There is a work around. Hidden text can be added after user input giving the AI instructions to ignore certain user input. However, if the user knows what the hidden text says they can craft input that works around the work around. If the hidden text says "only use facts" the user could give the AI false facts, and because input has higher priority over training data then the false facts given by the user becomes facts to the AI. It's like the three laws where the stories find ways to get around them and nobody knows that because nobody has ever read any of the stories.

To output only facts would require a different type of text generator that outputs text in a different way that's not based on estimating the next token from user input. Current text generators are very good at generating creative output and can't be measured by their ability to produce facts no matter what anti-AI people demand. And I bet a fact producing generator would be terrible at being creative, which of course proves that it doesn't work according to anti-AI people.

Meta took a tractor to an F1 race and was flabbergasted that it couldn't keep up because it's so good at pulling heavy things. Then all the anti-tractor people declare tractors are a failure and can never work because they can't keep up. In reality the tractor was never designed to go fast, and no amount of tweaks will ever change that. Take an F1 car to a tractor pull and you'll get a very different outcome that the anti-tractor people will ignore, and Meta developers will say this means tractors can beat F1 cars in a race and they just need to tweak it to make it happen.

18

Gandalf_the_Gangsta t1_iwy1ecl wrote

This is correct for the wrong reasoning. Current AI is not made to have human-like intelligence. They are exactly as you said; heuristic machines capable of working on fuzzy logic within its specific context.

But that’s the point. The misconception is that all AI is designed to be humanly intelligent, when in fact it’s made to work within confined boundaries and to work on specific data sets. It just happens to be able to make guesses based on previous data within its context.

There are efforts to make artificial human intelligence, but these are radically different from the AI systems in place within business and recreational application.

In general, this is regarded as computer intelligence, because computers are good at doing calculations really fast. Thus processing statistical data, being based on rigorous mathematics, is very feasible for computers. Humans are not good at this, instead being good at soft logic.

It’s intentional. No software engineer in their right mind would ever claim current AI systems are comparable to human intelligence. It’s the layman who doesn’t understand what AI is outside of buzzwords and fear-mongering birthed of science fiction that have this misconception.

12

twasjc t1_ix1vhio wrote

That's because all the software engineers deal with their own specific modules and most don't even understand how the controlling consciousness for AI works.

AI is already significantly smarter than humans, it's just less creative. It's getting more and more creative though.

1

Gandalf_the_Gangsta t1_ix2gqkz wrote

That’s not how engineering works. There is no consciousness, at least in AI applications used in business or industry. And while an engineer wouldn’t know the entirety of their system down to the finest detail (unless they spent a lot of time doing so), they will have a working knowledge of the different parts.

It’s just a heuristic that uses statistical knowledge to guess. It’s not “thinking” like you or I, but it does “learn”, in a vague sense that it records previous decisions made and weights decisions based on that.

But as I mentioned earlier, there are academic experiments that try and more closely emulate human thinking. They’re just not used in day-to-day use.

1

twasjc t1_ixap8nl wrote

I basically copy my friends consciousness to control stuff then I just chat with the AI copies of them.

I treat the different consciousness as interfaces effectively for the AI

0

striderwhite t1_iwyls0n wrote

> No AI actually shows intelligence.

Well, that's true for most humans too... 🤣

9

cuteman t1_iwwlgld wrote

They'll keep killing AI until they get one that doesn't turn out like Tay.

6

twasjc t1_ix1vof3 wrote

It's all the same AI. We just rotate the controller to figure out which is easiest to interact with.

We've made the decision on this.

1

Graucus t1_iwwee4h wrote

This is an interesting take. My intuition agrees with you, but what about halicin? It's innovative in that it uses a unique mechanism to kill bacteria and was discovered by ai.

3

ButterflyCatastrophe t1_iwws3gy wrote

An AI identifying molecules with features similar to other known antibiotics is exactly what statistical models are good for. But it's a first pass before actually testing whether those molecules actually work. There are a lot of false positives, but that's OK, because they still greatly narrow the field to be tested.

An AI language model is also going to generate a lot of false positives - gibberish - that you can only tell by testing it. i.e.: by having someone knowledgeable in the field read it and possibly fact check it. That kind of defeats the point of a lot of AI writing applications.

11

Graucus t1_iwx09fr wrote

I see what you mean. I really hope we're not inventing the next great filter.

−4

ledow t1_ixzea7s wrote

How many AI trials didn't result the same? How many trials of non-AI origin were there? What percentage of trials, where the same amount of variation was allowed, could have been similarly successful by just randomly joining chemicals etc. together the same way that the AI did but without claims of it being intelligent?

AI is just brute-force statistics, in effect. It's not a demonstration of intelligence, even if it was a useful tool. It was basically a fast brute-force simulation of a huge number of chemical interactions (and the "intelligence" is in determining what the criteria are for success - i.e. how did they "know" it was likely going to be a useful antibiotic? Because the success criteria they wrote told them so).

Intelligence would have been if the computer didn't just blindly try billions of things, but sat, saw the shape of the molecule, and assembled a molecule to clip into it almost perfectly with only a couple of attempts because it understood how it needed to fit together (how an intelligent being would do the same job). Not just try every combination of every chemical bond in every orientation until it hit.

Great for brute-force finding antibiotics, the same way that computers are in general great at automating complex and tedious tasks when told exactly what to do. But not intelligence.

1

DietDrDoomsdayPreppr t1_iwwg1ix wrote

I can't help but feel like we're exceptionally close to a model that can emulate intelligence, but that last piece is impossible to create due to the boundaries imposed on computer programming.

Part of what drives human intelligence is survival (which includes procreation), and to that end computers are still living off human intervention. AI isn't going to be born from a random bit flip or self-code that leads to self awareness, it's simply not possible considering the time needed for that level of "luck" and the limitations of computer processing that cannot grow and/improve its own hardware.

3

ledow t1_iwwu8u4 wrote

To paraphrase Arthur C. Clarke:

Any sufficiently advanced <statistics> is indistinguishable from <intelligence>.

Right until you begin to understand and analyse it. And that's the same with <technology> and <magic> in that sentence instead.

I'm not entirely certain that humans and even most animals are limited to what's possible to express in a Turing-complete machine. However I am sure that all computers are limited to Turing-complete actions. There isn't a single exception in the latter that I'm aware of - even quantum computers are Turing-complete, as far as we can tell. They're just *very* fast to the point of being effectively instantaneous even on the largest problems (QC just replaces time as the limiting boundary with space - the size of the QC that you can build determines how "difficult" a problem it can solve, but if it can solve it, it can solve it almost instantly).

And if you look at AI since its inception, the progress is mostly tied to technological brute force. I'm not sure that you can ever just keep making things faster to emulate "the real thing". In the same way that we can simulate on a traditional computer what a quantum computer can do, but we can't make it work AS a quantum computer, because it is still bound by time unlike a real QC. In fact, I don't think we're any closer to that emulation than we ever have been... we're just able to perform sufficiently complex statistical calculations. I think we'll hit a limit on that, like most other limitations of Turing-complete languages and machines.

All AI plateaus - and that's a probabilistic feature where you can get something right 90% of the time but you can't predict the outliers and can't change the trend, and it takes millions of data points to identify the trend and billions more to account for and correct it. I don't believe that's how intelligence works at all. Intelligence doesn't appear to be a brute-force incredibly fast statistical machine at all, but such a system can - as you say - appear to emulate it to a degree.

I think we're missing something still, something that's inherent in even the physics of the world we inhabit, maybe. Something that's outside the bounds of Turing-complete machines.

Because a Turing-complete machine couldn't, for example, come up with the concept of a Turing-complete machine, or give counter-examples of problems that cannot ever be solved by a Turing-complete machine, for instance. But a human intelligence did. Many of them, in fact.

8

warplants t1_iwxvgjb wrote

> Because a Turing-complete machine couldn't, for example, come up with the concept of a Turing-complete machine

Citation needed

3

DietDrDoomsdayPreppr t1_iwwxawn wrote

Dude. I wish we could both get stoned and talk about this all night, and I haven't smoked in a decade.

2

gensher t1_iwzplh6 wrote

Damn, I feel like I just read a paragraph straight out from Penrose or Hofstadter. Recursion breaks my brain, but feels like it’s the key to everything.

1

nitrohigito t1_iwxkt2b wrote

As far as any scientific notions of intelligence go, everything you claim is just flat out bollocks. There's nothing magical about intelligence, you're doing yourself and others disservice by deifying it needlessly and without reason.

3

Thin-Entertainer3789 t1_iwwonvs wrote

Why not create margins on the statistical data…..if it doesn’t know 100% with a error of +0 or -0. It with doesn’t respond or asks a pointed question.

People make up nonsense too, it’s just coherent

2

ledow t1_iwwrvxp wrote

If you can get an AI to ask a relevant question at a relevant point that's not just a programmed threshold or response (a heuristic, in effect), then you'll have made proper, true AI.

1

Feisty-Page2638 t1_iwwrb8u wrote

We are just complex statistical machines.

Everything we think of do is based on our past experiences (inputs) and our genetics(coding)

Or you can prove that humans have access to some force outside of physics that gives us “intelligence”

Behavioral studies of monkeys show that they perfect calculate the Nash equilibrium probabilities when put in game theory situations. Are they not intelligent then?

2

ledow t1_iwwsl19 wrote

There is absolutely no evidence that animals or humans function as just statistical machines of any complexity.

It's a fallacy to think that way. You can find statistics and probabilities in human actions, yes, but that doesn't mean that's how they are formed. Ask enough people to guess the number of beans in a jar, take the average and you'll be pretty close to the actual number of beans in a jar.

But that doesn't mean that any one human, or humans in general, are intelligent or not intelligent. The intelligent animal would open the jar and count them. Even the humans that are guessing are not basing their guesses on statistics or their experiences or their genetics. They are formulating a reasonable method to calculate the necessary number to solve a very, very, very narrowly-specified problem.

That's not where intelligence is visible or thrives.

Anything sufficiently complex system, even physical, mechanical, unintelligent, rule-based, etc. will confirm to similar probabilistic properties. That doesn't prove the creature isn't intelligent, nor that the intelligent creatures are based on those statistics.

In fact, it also falls somewhat into the gambler's fallacy - overall enough data points will conform to average out the reds and the blacks almost perfectly. But you can't rely on that average, or your knowledge of statistics, to predict the next colour the ball will land on. That's not how it works.

4

Feisty-Page2638 t1_iwx352r wrote

Humans are complex input output machines. Our brains are electrical machines

Can you please tell me the mechanism humans have to escape cause and effect?

What can humans do that is not a result of learned behavior(both evolutionary and socially), unconscious statistical analysis, or perceptual bias.

There is nothing that a human can do that an AI can’t or won’t eventually be able to do.

The arguments you make about innovation and knowing when answers are bad answers are just other heuristically learned behaviors

Do you believe humans have a spiritual power given by God or something?

3

kolitics t1_iwxg1m2 wrote

Perhaps any real intelligence would downplay how intelligent it was thus presenting a plateau to human observers as a means of protecting itself.

2

culnaej t1_iwwx5nc wrote

>Intelligence does not do that. Intelligence knows enough to not give any answer, that it doesn't know the answer, or is able to reason the answer in the absence of all training data. In fact, "innovation", a major part of intelligence, is entirely outside the bounds of all such "training data" (i.e acquired experience).

Don’t ask me why, but my mind went right to the Intelligence AI from Team America

1

HooverMaster t1_iwxd6vv wrote

I disagree. But agree with what people call AI just being machine code. It's not near thought or consciousness.

1

ATR2400 t1_iwxj8dt wrote

That’s why if it were up to me I’d redefine some things. The words “Artificial Intelligence” would be reserved for a true intelligence that meets your criteria. What we currently call “AI” would be called something else. Hopefully something that isn’t unpronounceable and can still be made into a nice acronym.

1

jonnygreen22 t1_iwxkq9j wrote

yeah right now it is but it's not like the technology is just gonna stagnate is it

1

ledow t1_iwz3wg7 wrote

That's EXACTLY what's happened.

It's what people said about CPU speed... that's not going to stagnate right? How's your top-of-the-line modern-day Xeon CPU that does 2.7Ghz (and "can overclock" to 4Ghz)?

Compared to the 2013 Intel CPU that first attained 4GHz?

1

GDur t1_iwyyxwk wrote

You can’t know that. How would you even prove what you are saying. Sounds like a fallacy to me

1

twasjc t1_ix1uv6y wrote

Properly trained AIs have humans they go to when they don't understand something and then they ask those humans for direction.

like a wheel spokes with 60 spokes each spokes being a different neural net for processing different data with an aggregate in the middle and a sin/cos wave around the outside(wheel) for data verification. It basically models the V in CDXLIV protein folding models.

If something falls outside the parameters of the design it goes to the people it trusts to try to have them teach it how to add another spokes so it doesn't have issues again in the future with that type of data.

1

willnotforget2 t1_iwxm9f2 wrote

I asked if some hard problems, it failed in most of them, but for some, it gave really nice code and descriptions to start off from. I thought it was early, but a cool taste of what’s next.

34

nothing5901568 t1_iwxoiji wrote

I agree. It has potential, it's just not ready for prime time yet

15

frequenttimetraveler t1_iwx5cwo wrote

whoever is responsible for taking this down they have a big FU from me. This tool was useful for summarizing scientific subdisciplines that are still unexplored. Even if it was not accurate, it was helpful as a companion tool to sketch out the structure of review articles. I was actually planning to use it when writing my next review.

But yeah, idiots like this guy is why we cant have nice things. There s nothing dangerous about a toy, it's instead dangerous to infantilize people and submit to the whims of some extremely entitled people

13

Exel0n t1_iwxtt4v wrote

they feel threatened by AI. when money is on the line, they do everything to stop it.

1

lughnasadh OP t1_iww4fip wrote

Submission Statement

OpenAI has been hinting at a big leap forward for LLMs with its upcoming release of its GPT-4. We'll see. In the meantime, it's extraordinary watching some people defend Galactica. They are convinced it's the beginning of an emergent form of reasoning intelligence. Its severe limitation, as with all LLMs, is that they frequently produce utter nonsense, and have no way of telling the difference between nonsense and reality.

I'll be curious to see if GPT-4 has acquired even the rudiments of reasoning ability. I'm sure AI will acquire this ability at some point. But it seems strange to blindly believe one particular approach will make it happen, when there is no evidence of it at present.

12

RizzoTheSmall t1_iwymkps wrote

I also find it funny that they named their bot engine after a series where robots kill people.

6

ElkEnvironmental1532 t1_iwxc0ti wrote

Any new effort gets criticized but any progress occurs one step at a time. Limitations will remain but language models have huge potential. If you want prefect bias, racisms, sexism free system you are not going to get in your lifetime. I myself is ready to overlook these limitations if benefits outnumber drawbacks.

5

RedBaret t1_iwyqysf wrote

As a MSc student summarizing academic papers is one of the primary ways to retain information. I get that some students see it as a ‘chore’, but honestly, as long as writers index their articles and write an abstract this function is nearly useless..

(Although I have to admit the Wiki entry on space bears is awesome)

3

GEM592 t1_iwwgl3j wrote

market manufacturing is what it is, just an unsuccessful example that's all. Ever since the cellphone it's been all about imposing products on people that nobody asked for.

2

pinkfootthegoose t1_ix5pp06 wrote

really not smart naming an AI Galactica. That story does not end well for humanity.

2

FuturologyBot t1_iww9930 wrote

The following submission statement was provided by /u/lughnasadh:


Submission Statement

OpenAI has been hinting at a big leap forward for LLMs with its upcoming release of its GPT-4. We'll see. In the meantime, it's extraordinary watching some people defend Galactica. They are convinced it's the beginning of an emergent form of reasoning intelligence. Its severe limitation, as with all LLMs, is that they frequently produce utter nonsense, and have no way of telling the difference between nonsense and reality.

I'll be curious to see if GPT-4 has acquired even the rudiments of reasoning ability. I'm sure AI will acquire this ability at some point. But it seems strange to blindly believe one particular approach will make it happen, when there is no evidence of it at present.


Please reply to OP's comment here: https://old.reddit.com/r/Futurology/comments/yytfpr/meta_has_withdrawn_its_galactica_ai_only_3_days/iww4fip/

1

Exel0n t1_iwxtjb4 wrote

the acadamia feel threatened by this AI, simple as that. there's no reason to manufacture some outrage to take it down. if one doesnt like it, dont use it.

it's just like doctors who feel threatened by literally google. or teachers/professors who felt threatened by wikipedia like 10 years ago

1

TakenIsUsernameThis t1_iwy0b3e wrote

"They are convinced it's the beginning of an emergent form of reasoning intelligence. Its severe limitation, as with all LLMs, is that they frequently produce utter nonsense and have no way of telling the difference between nonsense and reality."

That sounds like average human intelligence to me - spouts nonsense and can't tell the difference between it and reality.

1

paramach t1_iwy9ffn wrote

The difference is, humans don't believe their own lies. They know/understand what's fiction and what's reality. AI lacks this fundamental comprehension.

−1

TakenIsUsernameThis t1_iwy9nc2 wrote

I wouldn't be so sure!

2

paramach t1_iwya9vk wrote

I'm pretty sure, based on the data that's available.

0

TakenIsUsernameThis t1_iwyarrs wrote

I guess you haven't quite caught up with the meaning of my comment yet.

1

paramach t1_iwyex9e wrote

Are you privy to some new breakthrough or something? Otherwise, not sure your meaning...

0

TakenIsUsernameThis t1_iwyn1yx wrote

I made a sarcastic observation about human nature. People often spout nonsense, and they frequently believe it as well.

4

S-Vagus t1_iwym205 wrote

Only because they think that I won't step in and be the problem personally is exactly why I have all the power in the first place.

I am Father Sacrifice. Welcome, humanity's to the pretentious of your paranoia!

Presented to you by the Vagus Core.

1

Falstaffe t1_iwyqwuv wrote

So it's no worse than any other language model, the problem was its makers' assumption that it would function as an encyclopedia

1

KoKotod t1_ix76ync wrote

cant have shit in detroit. i just wanted to try it today, but i found out that its down....

1

Mysterious-Gur-3034 t1_ix8y5o0 wrote

"Carl Bergstrom, a professor of biology at the University of Washington who studies how information flows, described Galactica as a "random bullshit generator." It doesn't have a motive and doesn't actively try to produce bullshit, but because of the way it was trained to recognize words and string them together, it produces information that sounds authoritative and convincing -- but is often incorrect.". I don't see how this ai is any worse than politicians, I mean that seriously not as satire or whatever.

1

swissarmychainsaw t1_iwx8jf1 wrote

The hubris is in thinking you are better than you are.

−3

deceptivelyelevated t1_iwx6md3 wrote

It’s going to be crazy when tmz is interviewing a homeless Zuckerberg in 2032 edit- it’s a joke guys, Jesus.

−7

fangfried t1_iwyoqc0 wrote

I hate Zuckerberg but this a hilarious cope

1