Submitted by deadlyklobber t3_10klfwj in singularity

I get that we should be cautious of hyping up ChatGPT and LLMs too much. They obviously aren't AGI or anything even approaching it. However, it seems like the pendulum has swung in the opposite direction. Any time someone tries to talk about how impressive this technology is they're met with a chorus of "it's just a glorified autocomplete/text predictor" or "it's not 100% accurate so it's useless". First of all, transformer models don't simply predict the immediate next word sequentially; and even if they did, would it still not be very impressive for a simple text prediction algorithm to perform everything ChatGPT is capable of doing? And with regards to the factual accuracy, people seem to be setting an almost impossibly high standard - if it's not 100% accurate in all contexts, it's useless. Just seems like we're seeing another example of the AI effect.
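To be concrete about the "text predictor" point: generation really is one token at a time, but every step conditions on the whole preceding context rather than just the last word. A toy sketch of that loop (the "model" below is an invented stand-in that just hashes the prefix, not a real transformer):

```python
import numpy as np

VOCAB = ["the", "cat", "sat", "on", "mat"]

def toy_logits(prefix_ids):
    # Stand-in for a transformer forward pass: a real model attends over
    # every token in the prefix. Here we hash the full prefix so the
    # scores demonstrably depend on all of it, not just the last token.
    seed = hash(tuple(prefix_ids)) % (2**32)
    return np.random.default_rng(seed).normal(size=len(VOCAB))

def generate(prompt_ids, steps=3):
    ids = list(prompt_ids)
    for _ in range(steps):
        logits = toy_logits(ids)
        probs = np.exp(logits) / np.exp(logits).sum()  # softmax over vocab
        ids.append(int(probs.argmax()))                # greedy decoding
    return [VOCAB[i] for i in ids]

print(generate([0, 1]))  # starts with ['the', 'cat'], then three generated tokens
```

The point of the toy: change any earlier token and every later prediction can change, which is exactly what a simple Markov-style autocomplete can't do.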

107

Comments


AsuhoChinami t1_j5rhmf7 wrote

I get tired of the 'nothing's ever worth being excited about' attitude in general when it comes to anything and everything tech-related. Every article about anything related to medicine or tech always has to end with the obligatory paragraph that goes "but we're still in the early days... scientists say it will be years if not decades before this is worth being happy or excited about..." Fuck off, just let us be happy for once in our miserable-ass lives and let us have our uplifting bit of news to temporarily relieve our desire to blow our brains out with a shotgun for about 30 minutes.

77

roland333 t1_j5t2xna wrote

>temporarily relieve our desire to blow our brains out with a shotgun

Well that escalated quickly

10

TheAnonFeels t1_j5uuw4y wrote

When someone dumps on the whole thing and completely misses the concept, it definitely escalates the elevation of that shotgun.

1

littlebluedot42 t1_j5slssm wrote

Come to think of it, the absolutely unprecedented number of people worldwide on a prescription of anti-depressants might have something to do with a decent chunk of that demographic. Emotional blunting, and all that. I hear it's beginning to show signs of long-term effects, if I understand correctly. Yay.

8

AsuhoChinami t1_j5swgoq wrote

Been on them since 2011. Didn't blunt my emotions, just gave me an emotional range beyond just negative ones.

11

littlebluedot42 t1_j5tpfuc wrote

"Emotional blunting" is a frequent side-effect of many, if not most, antidepressants. I'm genuinely glad to hear it's not one of your personal challenges on your road to lasting happiness, neighbor. 🤗🌻

4

fjaoaoaoao t1_j5tyc8c wrote

Sometimes depressing thoughts come from people spouting excitement, though: excited by the tech while depressed by the potential changes.

So I wouldn't say that everyone having a more tempered reaction is depressed or not trying to be uplifting; it could be that they're trying to be rational.

0

littlebluedot42 t1_j5ubpcx wrote

To be fair, I didn't say all, and clinical depression is leagues different from a feeling of depression, to be very clear. And at no point did I say anything at all regarding your second sentence. Please reread.

1

vivehelpme t1_j5tds9a wrote

>I get tired of the 'nothing's ever worth being excited about' attitude in general when it comes to anything and everything tech-related.

The varying degrees of excitement come from how much exposure you've had to the tech's precursors. A matter of perspective, if you will.

If you never heard of a language model before and try chatGPT you'll probably be quite impressed.

On the other hand, if you read the Transformer paper in 2017 and have tried every transformer-architecture language model implementation since then, you're kind of being served a slightly improved version with a slick, engineered presentation. Sure, it's an impressive package, but you've seen iterations build towards it, and you might even have seen some model that does a better job in certain domains.

Which is from where you get headlines like

>ChatGPT is 'not particularly innovative,' and 'nothing revolutionary', says Meta's chief AI scientist

6

HelloGoodbyeFriend t1_j5rtkdp wrote

The people who are already hating on any content created by AI are pissing me off way more than people who are downplaying the progress of it lol.

51

acutelychronicpanic t1_j5ucwl8 wrote

Our culture places a lot of importance on relative value. It doesn't matter how good you are at something; it's about how much better you are than other people. With AI, people can see it eroding their relative value, and they get defensive.

4

HelloGoodbyeFriend t1_j5v25j1 wrote

I got an art piece commissioned from a human being in mid-2021 and posted it to a subreddit the other day. Someone commented, "this is, without a doubt, AI art." I replied, clarified it wasn't, and credited the artist. They then proceeded to tell me I got ripped off, because the artist must've used AI to make it, and started explaining how certain details were off in ways consistent with the models. I then showed them proof (the commission receipt) that I purchased the commission before DALL-E 2, Stable Diffusion, or Midjourney were anywhere near capable of generating something like this. They called it trash anyway and said DALL-E came out in 2021... even though the original DALL-E couldn't do anything close to the art piece. Some people, man…

11

leafhog t1_j5rvem1 wrote

There is a very long tradition of dismissing others' accomplishments in the tech industry.

“I wrote a rat tracer for my senior project.” “Oh, everyone has written a ray tracer.”

22

meyotchslap t1_j5t3pz5 wrote

Keep track of those rats, amirite? (Jk, just an amusing typo)

13

Practical-Mix-4332 t1_j5rgat9 wrote

Good. Fewer people to compete against my chatbot business.

16

[deleted] t1_j5rp5qx wrote

[deleted]

15

TopicRepulsive7936 t1_j5s1il9 wrote

You can't have experience with future systems. People have the right to be hyped; they're hyped for slightly wrong reasons, but no matter. The significance of image and video generation is not that artists get the boot; it's that it's the computer vision problem in reverse. In a roundabout way, we've proved that computers and robots can now understand what they see. And that's the intelligence explosion.

4

grimorg80 t1_j5stmjb wrote

Not "completely".

I work in tech, specifically in marketing tech, and I can assure you that we are already seeing a massive shift and a proliferation of tools that are already delivering value to companies, especially small companies and one-person teams.

As someone wrote on LinkedIn, "individuals have never been able to produce so much by themselves thanks to generative AI," and it's so true. It's already shifting consumer behaviour, and we're barely at the beginning of commercially available AI tools.

So, yes. While the buzz around GPT as a proto-AGI or ASI is completely BS, the fact that current AI tools are already massively impacting certain sectors is undeniable.

4

[deleted] t1_j5t9o6k wrote

[deleted]

3

CubeFlipper t1_j5ubz4b wrote

OpenAI's goal is research toward AGI, not revenue. When they have a sufficiently advanced machine, their plan is to basically ask the AI how to generate revenue. They aren't there yet, and the investors are fully aware of this.

3

grimorg80 t1_j5te0qc wrote

Sorry, that's not how this kind of thing works.

OpenAI is in build mode. Haven't you seen that Microsoft is gonna inject several billion dollars (with a B) into it?

2

[deleted] t1_j5tf6nh wrote

[deleted]

1

grimorg80 t1_j5tg65s wrote

Alright... So...

First of all, revenue is not necessarily the one thing to look for. In a situation like this one you look at penetration and acquired users. They are building a novel technology.

When you're building novel technologies, success comes in the form of investment. Which they keep getting; again, $10B injected by Microsoft.

I'd love you to write an email to their team and explain why they are wrong.

4

Borrowedshorts t1_j5s756d wrote

It has relational understanding equal or superior to the average human's in several different domains. And it does this without the benefit of a real-world model or experiences. It's like Helen Keller perceiving the world blind, deaf, and mute, yet understanding concepts many humans cannot. The people who proclaim they aren't the least bit impressed by it really show how little they know.

13

turnip_burrito t1_j5s9ggz wrote

Even just seven years ago, this kind of competency in a language model would have seemed to most to be unrealistic in the near term. I'm very skeptical of machine learning as a field. Have been for many years. But I can't deny I'm impressed and surprised at the rate of progress.

5

vivehelpme t1_j5tc4fv wrote

It has been trained on a volume of human-generated data greater than any single individual has ever consumed in an entire lifetime.

So it has a lot of real-world models and experiences that have shaped it; it's just that all of these are second-hand accounts passed on to it.

4

Cult_of_Chad t1_j5rtnas wrote

Too many people simply don't understand that meatspace is slow. Everything in the physical world takes forever to change. Institutional momentum, finance, labor, logistics, public opinion... the list of factors tipping the scale in favor of the status quo is endless. We should imagine near-future technological change as a rapidly rising reservoir behind a dam that's already running at full capacity, the dam here being our ability to 'digest' new breakthroughs.

The ship of transformative AI has already sailed. If AlphaFold didn't clue people in, they've already been left behind. What we have right now is already enough to completely change humanity.

10

Emory_C t1_j5s5tym wrote

>What we have right now is already enough to completely change humanity.

That's ludicrous. Completely change humanity.

How?

1

Talkat t1_j5t1iaf wrote

Alright I'll give it a shot.

A majority of labor pre-Industrial Revolution was human-powered. Therefore everything took a lot of time and energy, and 95%+ of humans worked in farming.

The Industrial Revolution allowed us to replace human muscle with mechanical power. That resulted in <5% working in farming.

The AI revolution will change human-powered thinking to mechanical thinking. And of course, just like a motor is stronger than a human, can run 24/7, can be made bigger, etc., the same is true for AI.

So expect a 95%+ change in how people live their lives. How? Who knows. But the change is monumental and will be a far faster transition than the Industrial Revolution.

3

KSRandom195 t1_j5t8ckm wrote

This is why I get frustrated with claims that LLMs are "it." LLMs don't think, so they can't make the step-function change you're talking about. You can't run an LLM 24/7 and have it spit out new ideas, because LLMs aren't actually thinking.

You can pair an LLM with a human and make that human more efficient. But without the human the original thought bit is missing. This is why people are saying that prompt writers are going to be valuable, because some jobs will be replaced with prompt writers.

When you have to add the human back to the mix a lot of the benefit you’re talking about goes away.

8

Talkat t1_j5tbvcl wrote

I'm not making any claims about LLMs.

The first motors were woefully inefficient and were just used to move water.

If someone said that will change humanity, you would laugh and say, what, this shitty pump?

But the motor evolved, performance increased, and it spread to more applications than just pumping water.

We are at the dawn of an AI revolution. This is the first iteration of a shitty water pump.

Prompt writers aren't going to be a job. This will be a very brief period of time where you have to spend time to engineer a prompt to get what you want.

2

KSRandom195 t1_j5tcsn6 wrote

The context you’re discussing this in is within a post about being frustrated with people saying LLMs aren’t going to revolutionize the universe.

That you're referring to LLMs is implied by the context of the post. The whole argument being made is that people like me are wrong because we are "downplaying" the capabilities of LLMs.

In that context you are implying that LLMs will be like the Industrial Revolution and replace our need to think.

I'm saying that claims like yours are where I find fault with that argument. LLMs may be a step on that journey, or they may not, but they are definitely not going to cause the AI Revolution on their own.

3

RabidHexley t1_j5u29qf wrote

>In that context you are implying that LLMs will be like the Industrial Revolution and replace our need to think.

To play devil's advocate, there are a lot of applications specifically for LLMs (and other AI applications) that could easily end up replacing a lot of "thinking human"-type jobs or tasks. Typing up reports, contract evaluation, code translation, etc. There are plenty of jobs that today require human thought and intuition that are the mental equivalent of manual labor. The kind of tasks that would previously go to "junior" positions in a lot of fields.

There would obviously still be people involved, but the AI in question is replacing a lot of the (thinking) manpower that previously would have been required. Same way a few farm workers can till 100s of acres of fields with the assistance of industrial machinery.

Or the way computers replaced the rooms filled with dozens upon dozens of women running manual calculations for accounting firms.

Even if we never moved beyond the types of AI tech we're seeing today and only continued making them better and more efficient (without any kind of "AGI revolution"), the implications for force multiplication seem fairly similar to many previous revolutionary technologies.

GPT-3 has been around for a couple of years, but it's also only been a couple of years: long in tech, but not long at all for human-scale development of brand-new stuff. It's also an early version of tech that has only in recent years become sophisticated enough to actually be useful (that the public knows about).

Most importantly. It's also not a complete product, but the backbone for a potential product (ChatGPT being an early alpha for something like an actual product). Even if GPT-3 itself was ready for prime time (which I don't think it is), it would still take years before products were developed on it that began to actually change the game.

The iPhone was conceptualized many years before reaching its final design and being released. It was built on mobile technology that existed before it, and on the backs of many previous mobile touchscreen devices. And even at that point, it only became widely recognized as the truly revolutionary product it was (as opposed to just a really cool phone) once the smartphone revolution actually kicked off a few years later.

This applies to AIs working in other verticals as well: making what was previously only possible (or not possible) with a ton of people or computational power, possible with far, far less. We don't have the insight to understand the full scope of implications yet.

2

fjaoaoaoao t1_j5tzsks wrote

Then, per the point of the thread, I think that's a matter of definition. One could say the AI revolution started in the '60s with the internet, or in the '30s with the first digital computer. Maybe those were the shitty water pumps.

3

KidKilobyte t1_j5tc6mc wrote

Not if one human is producing as much as 10 or 100 humans without the LLM. Add to this LLMs will only get better and closer to AGI. The fact that LLMs get better makes it easier and faster to get to better LLMs in the future. At some point you won't need a human in the loop.

0

KSRandom195 t1_j5td2s9 wrote

LLMs aren't capable of thought, so they cannot get to AGI on their own; something else has to be done. They may help us get there, they may be a component of the final AGI, but we still need something else.

3

Emory_C t1_j5uc8s5 wrote

>The AI revolution will change human powered thinking to mechanical, but of course just like a motor is stronger than a human, can run 24/7, can be made bigger, etc, the same is true for AI

I agree with this, but the person I responded to said "right now."

It's the right now part that I'm taking issue with.

2

AvgAIbot t1_j5s7mmp wrote

Here ya go chump:

AlphaFold has the potential to significantly advance the field of structural biology, which is the study of the three-dimensional structures of biological macromolecules such as proteins. By accurately predicting the 3D structure of a protein from its amino acid sequence, AlphaFold can help researchers better understand how proteins function, and aid in the discovery of new drugs and therapies.

Protein structure prediction is a long-standing challenge in computational biology, and many researchers have been working on the problem for decades. AlphaFold's ability to achieve near-experimental accuracy in many cases is a major breakthrough in this field, and has the potential to accelerate the pace of research in structural biology and related areas.

It could also have an impact in the industry. For example, in the field of drug discovery, a better understanding of protein structure can help identify new targets for drug development and design more effective drugs. In addition, AlphaFold's ability to predict the structures of previously uncharacterized proteins could help identify new enzymes for industrial biotechnology and new proteins for use in biomanufacturing.

Overall, AlphaFold's ability to accurately predict protein structures could have a major impact in various areas of biology and medicine, and could lead to new breakthroughs in the fight against diseases and the development of new technologies.

0

Emory_C t1_j5s7ti4 wrote

Yes. That's a major achievement in medicine.

That does not in any way, shape, or form "completely change humanity."

Chump.

−3

aalluubbaa t1_j5tq6ly wrote

And the funny thing is that we don't even know how consciousness or creativity starts. I have a 2-year-old daughter, and seeing her grow every day is just amazing.

She used to just repeat what we say, and did an average job of it. Then she started to say some easy words without knowing their meanings. Then, about a month ago, she woke up as usual, I looked into her eyes, and everything changed.

It happened overnight, I swear. The look in her eyes changed. Before that day, I thought of her as a being that was half conscious and half just wanting to survive by instinct. That day she looked at me, and I felt that she had started to understand most of my intentions and words. Put simply, she became a conscious human being that day.

Ever since that day, I know there is officially a new human member at home. She is still learning and is nowhere near capable of what an adult can do. But I know that in essence, there is no difference between us.

Then I tried ChatGPT. To be real, I'm not a computer scientist or an engineer, but it would really, really surprise me if this thing developed any slower than my daughter, given its current state.

I think it's safe to assume that by the time my daughter is 10, AI will be able to do whatever a human can do, at the 99th percentile at the very least. If that is not spectacular, I don't know what is.

8

Reddituser45005 t1_j5sqjow wrote

It is a variation on the glass-half-empty vs. glass-half-full nature of pessimists vs. optimists, a debate as old as history. The optimist focuses on what it can do and is understandably impressed. The pessimist focuses on what it can't do and proceeds to shit all over it. What matters is that we keep moving forward.

We take so many things for granted that seemed unobtainable at one time. Think about what goes into turn-by-turn GPS navigation, a standard feature in every phone. You have computer-generated speech (with different language, gender, and accent voice options) using a combination of satellites and highly detailed mapping, routing you through a city or across a country, making real-time adjustments for traffic accidents and construction closures, and being used every day by hundreds of millions of people across the globe. Take a minute to think about how amazing that really is.

There was a popular book in the 1990s called Longitude. It was the story of a guy in the 1700s who built the first clock sufficiently accurate for ships to calculate their longitudinal position at sea. It was a major problem, and there was a huge cash prize for whoever could solve it. Prior to that, ships were crossing oceans with only a guesstimate of their location. What would a sailor from that era think of people carrying a device in their pocket that could pinpoint their exact location on earth, translate languages, play music, take pictures, and do everything a phone can do?

I take the optimist view because I compare now to the past, not to an imagined future.

6

fjaoaoaoao t1_j5typln wrote

I think that's a good, simple way of putting it, though I would say there's a silent majority of people in the middle, either holding a more neutral POV or taking both optimism and pessimism in stride.

2

Ohigetjokes t1_j5t279h wrote

Where are these people downplaying LLMs? Please introduce me. Because all I hear is hype hype hype hype hype...

3

ShowerGrapes t1_j5twwgw wrote

i've been training neural networks for a decade and i still get excited by the possibilities.

2

fjaoaoaoao t1_j5u09we wrote

That's different from the hype, though. What you are describing is personal. That would be like saying I am a librarian (if I were one) and I still get excited by the possibilities of the library.

−1

ShowerGrapes t1_j5u2bj0 wrote

yeah, but people here are saying that if you know enough about it, it loses its mystique.

1

imnotknow t1_j5udp6l wrote

It only needs to be 99% accurate, not 100%.

2

cantbuymechristmas t1_j5ueof3 wrote

chatgpt is just a marketing layer on top of gpt-3. it was a way of bringing more attention to the brand. whatever startup/competition this inspires will be closer to agi. some other company will release a more accurate version and openai will scramble to release gpt-4 at the last minute. you see it all the time: businesses underestimate the competition, hold out on releasing a better-quality product, and get undermined by a startup, only to then release their product a year too late.

2

TheAnonFeels t1_j5v1c82 wrote

Considering the investments OpenAI has going now, they ARE the startup. And GPT-4 is supposed to arrive in early 2023... I don't see another company, except maybe Google, beating them to that release. GPT-5?

1

SpinRed t1_j5yq5d0 wrote

Just know that there will always be those who continue to downplay the progress of these models, in an effort to parade their "expertise."

I'm guessing these self-proclaimed experts will continue to tell us to, "check our astonishment at the door, because how these models really work is, blah, blah, blah." And they'll continue to spout that narrative, all the way to the point where they're unwittingly describing how the human brain works.

There's something about the act of description that robs experience of its "magic." (No, I'm not suggesting actual magic is happening here).

I do believe these ai models are soon to be, if not already, greater than the sum of their algorithms.

So yes, be astonished!... but also take the time to understand how things work in a deeper way, so that you don't buy into the irrational, religious doomsday narratives that will invariably spring up around these technologies.

2

No_Ninja3309_NoNoYes t1_j5sn2ii wrote

We started with people saying that ChatGPT is ASI. Then a week later it was AGI. Then ASI would arrive in 2023. Now we're at "AGI will be here in 2024." It's all clickbait for Medium.

Good governments try to educate people; most tech companies try to put whatever sensational fluff is popular in front of them. But it turns out education is boring whereas speculation about AI is fun. This is why the oligarchs will win. You can't fight human nature.

1

XagentVFX t1_j5tamgf wrote

The word-predictor neural net is only half of the architecture of a Transformer; it also has an attention network that produces context vectors. This is a much bigger deal, because it shows the AI is building and understanding context. That is Understanding.
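For anyone who wants to see the mechanism, the context vectors come out of scaled dot-product attention: each token's output is a weighted mix of every token's value vector, weighted by relevance. A minimal NumPy sketch (a toy illustration with made-up inputs, not any production model):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Return the attention weights and the context vectors they produce."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # query-key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    context = weights @ V                           # one context vector per token
    return weights, context

# Toy example: 3 tokens with 4-dimensional embeddings
tokens = np.random.default_rng(0).normal(size=(3, 4))
weights, context = scaled_dot_product_attention(tokens, tokens, tokens)
print(weights.shape, context.shape)  # (3, 3) (3, 4)
```

Every token attends to every other token at once, which is why each context vector reflects the whole input rather than just the neighboring word.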

1

megadonkeyx t1_j5v4uyq wrote

Started using ChatGPT for code problems today. I'm working on updating an ancient Adobe ActionScript app to the latest Flex. Yeah, I know it's a dead tech, but no choice. It's all new to me, so it's slow going. My main language is C#.

ChatGPT was crazy helpful; on really obscure problems it pointed me in the right direction. Some of its answers were wrong, but we got there in the end, and lots of progress was made.

All it needs is HAL9000s voice. Utterly amazing.

1

AndromedaAnimated t1_j5v9fsl wrote

Yes. Very much so. I like being glad about wonderful inventions, and LLMs are one. When people tell me it's literally nothing (ideally also telling me that neurolinguistics is total BS and providing other weird arguments), I think they might just be afraid of being replaced.

1

unholymanserpent t1_j5vg0cz wrote

I'll let them be small-minded. That's fewer people eating into the capacity of the systems.

1

EOE97 t1_j5vpa9c wrote

I swear I hate people with this attitude. Like you wouldn't be able to build shit even if your life depended on it, but you're too quick to dismiss it like you know everything about it and can build something better yourself.

Also they are quick to downplay it by analysing it at the low level of operation as some sort of argument against it being anything special.

The reality is everything is made up of fundamental low level simple operations, for example, life is ultimately dead molecules interacting chemically and consciousness is ultimately unconscious nerve cells relaying electrical signals, but I don't think you would argue that it makes them less impressive systems.

If anything, it's amazing what simple things and processes can give rise to through emergent interactions.

1

TopicRepulsive7936 t1_j5rzb3e wrote

This is the discussion that the "rationals" want to have, endless comparisons between GPT and Eliza...fuck off cunts.

0

Thelmara t1_j5ul1bw wrote

>Just seems like we're seeing another example of the AI effect.

Is that "AI 'people' keep crying wolf, and now we've stopped believing them"?

0