Comments

Rohit901 t1_jc9cfm5 wrote

Imagine if Google had decided not to share their paper on transformers with the world, citing the same reasons OpenAI cites.

138

ninjasaid13 t1_jc9m41f wrote

They said competitive-landscape reasons or safety reasons, but we all know it's the former.

57

SidewaysFancyPrance t1_jcbeccw wrote

AI arms race, which is 100% about money and greed. Morality and ethics slow things down. Regulations slow things down. Wall Street and investors are watching for a prospective winner to bet big on. Tech is stale, old news. They need the next big win.

AI domination is going to be "first past the post" and all these companies know it. We're going to hear all kinds of BS about how they're trying to be ethical/etc but they really want to be first and snap up lucrative deals with their protected IP. What was open will stop being open very quickly once a public tech demo like this gets attention, and be strictly protected for that reason.

18

chuntus t1_jcbmk62 wrote

Why will it be ‘first past the post’? The market tolerates multiple versions of other technologies, so why not in this field?

3

CactusSmackedus t1_jcdmi15 wrote

It's not, and the commenter doesn't know what they're talking about. There's a paper out in the last few days (I think) showing that weaker systems can be fine-tuned on input/output pairs from a stronger model and approximate the stronger model's results. This implies any model with paid or unpaid API access could be subject to a sort of cloning, which suggests that competitive moats will not hold.
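
To illustrate, the cloning loop is simple enough to sketch. A minimal version, assuming the OpenAI Python client; `prompts.txt` and the output path are hypothetical, and the actual paper's recipe may differ:

```python
# Hedged sketch: harvest (prompt, completion) pairs from a stronger
# model's API to build a supervised fine-tuning set for a weaker model.
# Assumes the `openai` Python client; prompts.txt is a hypothetical
# file with one prompt per line.
import json
import openai

openai.api_key = "sk-..."  # your API key

pairs = []
for line in open("prompts.txt"):
    prompt = line.strip()
    resp = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    pairs.append({"prompt": prompt,
                  "completion": resp.choices[0].message.content})

# The weaker model is then fine-tuned on this dataset as usual.
with open("distill_dataset.jsonl", "w") as f:
    for p in pairs:
        f.write(json.dumps(p) + "\n")
```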

Plus, apparently Facebook's model weights got leaked in the last week (I have yet to reproduce this, since I've been away from my machine), and apparently someone managed to run the full 65B-weight model on a Raspberry Pi (very, very slowly). Two implications:

  1. "Stealing" weights continues to be a problem, this isn't the first set of model weights to get leaked iirc, and once you have a solid set of model weights out, experience with stable diffusion suggests there might could be an explosion of use and fine tuning.

  2. Very, very surprisingly (I am going to reproduce it if I can, because if true this is amazingly cool), consumer-grade GPUs can run these LLMs in some fashion (see the sketch below). Previous open-sourced LLMs that fit in under 16 GB of VRAM were super disappointing, because to shrink the model enough to fit on the card you had to limit the number of input tokens, which meant the model "saw" very few words of input with which to produce output. Pretty useless.
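
A rough sketch of what point 2 looks like in practice, assuming the Hugging Face transformers + bitsandbytes stack; the local weights path is hypothetical:

```python
# Hedged sketch: loading a large LM on a consumer GPU by quantizing
# weights to 8-bit. Assumes transformers and bitsandbytes are installed;
# the path to converted weights is a placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "./llama-7b-hf"  # hypothetical local path

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    load_in_8bit=True,   # int8 weights roughly halve VRAM vs fp16
    device_map="auto",   # spill layers to CPU RAM if the GPU fills up
)

inputs = tokenizer("The moat around large language models",
                   return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```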

Now, I don't think we'll have competitive LLMs running on GPUs at home this year, but even if OpenAI continues to be super lame and political about its progress, eventually the moat will fall.

Also, most of the money to be made (aside from Bing eating Google), or maybe I should say most of the value, is going to be captured by skilled consumers/users of LLMs, not by glorified compute providers.

2

Smeagollu t1_jcczori wrote

It's likely that the first broadly usable general AI will be ahead of all others by long enough that it can grab most of the market. It could make Bing the new go-to search engine, or Google could cement its place.

0

[deleted] t1_jca4smw wrote

[removed]

7

neuronexmachina t1_jcax06f wrote

2017 transformers paper for reference: "Attention is all you need" (cited 68K times)

>The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.0 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data
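
For anyone skimming, the core operation the paper introduces fits in a few lines. A minimal NumPy sketch of scaled dot-product attention (single head, no masking):

```python
import numpy as np

def attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V, the heart of the Transformer."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # query/key similarity
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ V                               # weighted sum of values

# Toy example: 3 tokens with 4-dimensional embeddings
x = np.random.randn(3, 4)
print(attention(x, x, x))  # self-attention: Q = K = V
```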

4

7thKingdom t1_jcbx5yj wrote

Frankly it should be illegal to name your software based company anything "open" if you refuse to be open source. It is intentionally misleading and harmful and creates a false sense of security and trust.

2

Rohit901 t1_jcbxbxv wrote

Elon, who initially invested $100M in it, just tweeted the same, haha: that he's confused by this turn of events at OpenAI.

1

DidQ t1_jcf2pzl wrote

OpenAI is as open as North Korea is democratic. North Korea's full official name is "Democratic People's Republic of Korea," and there is exactly zero democracy there. Same with OpenAI: there's zero openness there.

1

Transmatrix t1_jcae2rb wrote

Doesn’t Google kind of do this by refusing to disclose how most of their algorithms work?

−8

Rohit901 t1_jcarred wrote

Google has mostly been transparent and has published a lot of groundbreaking AI research, advancing the field. OpenAI, on the other hand, is closed source and trying to compete directly with Google. In the future, Google might not be willing to make its research public if things keep going like this, and we don’t want power concentrated in a single company or person. So I hope we get better open-source models.

19

Transmatrix t1_jcayko9 wrote

Well, if anything, a company with the word “Open” in their name should probably be a bit more open source…

7

kane49 t1_jcb27aj wrote

Hahah OpenAI is the worst name that company could ever have.

It's not open source and it's for-profit; nothing open about it.

8

dlpettit t1_jcb8oa4 wrote

Being openly greedy

3

Rohit901 t1_jcbp1pl wrote

Lmao ahaha, but at the end of the day all the companies are XD

0

mailslot t1_jcb7td4 wrote

Google publishes a lot of papers. PageRank, one of the original key algorithms for Google search, was released to the public. Unfortunately, this empowered spammers to game the search engine and create link farms & other “gems.” That’s a big reason why they don’t share in that space any longer. Too many people looking to exploit search results by any means necessary.
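
Since PageRank is public, anyone can see exactly what spammers were optimizing against. A toy power-iteration version (illustrative only; Google's production ranking was far more involved):

```python
import numpy as np

def pagerank(links, damping=0.85, iters=100):
    """links[i] lists the pages that page i links to."""
    n = len(links)
    rank = np.full(n, 1.0 / n)
    for _ in range(iters):
        new = np.full(n, (1.0 - damping) / n)    # random-jump baseline
        for i, outs in enumerate(links):
            if outs:
                # A page splits its rank evenly across its out-links,
                # which is exactly what link farms exploited.
                for j in outs:
                    new[j] += damping * rank[i] / len(outs)
            else:
                new += damping * rank[i] / n     # dangling page
        rank = new
    return rank

# Three pages: 0 -> 1, 1 -> 2, 2 -> 0 and 2 -> 1
print(pagerank([[1], [2], [0, 1]]))
```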

On other topics, Google has been the sole inspiration for some substantial projects. AI aside, their paper on BigTable (their own product) caused a ton of interest and sparked independent projects like Hadoop, HBase, & Cassandra. The modern way we work with big data is thanks to Google.

In any case, Google has no obligation to give away algorithms or the designs for entire products, yet they often do.

4

EmbarrassedHelp t1_jc8xjg5 wrote

They can claim whatever they like, but it shouldn't be taken seriously if they hide the important details behind fake excuses of "safety". From the paper:

> Given both the competitive landscape and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar.

77

donnygel OP t1_jc8ybbh wrote

It's definitely not an open-source approach to AI. Monetization of AI is the goal here, which is disappointing. I can only assume the intellectual property they're trying to protect contains some groundbreaking architecture that gives them a bleeding-edge advantage over other AI systems?

72

Malkiot t1_jc9sjk7 wrote

Or it's so damn simple that just talking about it would let others replicate it.

39

LtDominator t1_jcbatc4 wrote

Unless they're doing something truly unique, whatever they're doing is already well known to people familiar with how these neural networks work. Theirs is just bigger and better tuned, and knowing how to do that comes with experience. It's unlikely that any of the big players now looking into AI will have trouble replicating this; it's just a matter of how long it takes them.

1

Arachnophine t1_jc90hp4 wrote

Have you tried using it yet? It's very impressive even compared to GPT-3.

IMO building these things so quickly is the worst decision humans have made yet, so I'm glad they're not releasing more details.

15

[deleted] t1_jc97dx9 wrote

[deleted]

17

BSartish t1_jc9kkl7 wrote

Bing Chat is free if you can get past the waitlist, and it's been GPT-4 since its release.

1

Tkins t1_jcaiu5j wrote

An early version of GPT4

−1

BSartish t1_jcaj7rs wrote

It's most likely the same version, just with different guardrails and developer prompts optimized for Bing search.

2

curryoverlonzo t1_jc94k1o wrote

Also, where can you use GPT-4?

1

gurenkagurenda t1_jc98cwk wrote

It’s available as a model option with ChatGPT now, although I’m not sure if it’s “Plus” only. It’s just chat, though, no images, and it’s limited to 100 messages per four hours.

8

appleparkfive t1_jc9abjj wrote

Is the limit still in place with the paid plan for GPT-4 or is it unlimited for the paid folks?

1

second-last-mohican t1_jc9lbpr wrote

Um, ChatGPT is a year or so old. They released it to the public because they didn't know what to ask it; the public release and its popularity were a surprise.

1

techni_24 t1_jc9fmps wrote

Idk if those excuses of safety are really so fake. These models are pretty powerful and about to change a lot very quickly. Instead of dumping all the information about it, I’d rather they take a cautious approach.

We need to remember that there are plenty of bad actors out there who would love to use the power of these models to do some really bad things we can hardly conceptualize yet. ‘Democratizing AI’ sounds great in theory, but let's remember that the larger threat to humanity is not AI itself but how it’s used and who wields it. Making that capability open source might just do more harm than good. They might have picked up on that, based on how sparse the paper is on details.

12

LaJolla86 t1_jc9kbvv wrote

Meh. People who want to run their own version will do so anyways.

I never found the argument convincing.

It just locks the best parts of the technology behind technical know-how and the cost of hardware (which you could rent).

8

ACCount82 t1_jcav54f wrote

It's a tough spot. GPT-4 is clearly no Skynet - but it's advanced enough to be an incredibly powerful tool, in the right hands. An incredibly dangerous tool, in the wrong hands.

Being able to generate incredibly realistic text that takes image and text context into account is trust-destroying tech, if used wrong. Reviews, comments, messages? All those things we expect to be written by humans? They may no longer be. A single organization with an agenda to push can generate thousands of "convincing" users and manufacture whatever consensus it wants.

2

seweso t1_jc9u8eq wrote

What they are doing isn't far ahead of what is already in the open. Their performance can easily be explained by quality data + a shitton of GPUs + good controls (to prevent weird shit).

The more logical explanation is that they wanna sell this as soon as possible before it's in the open.

1

fokac93 t1_jcd7a8p wrote

Agree 100%. People on the internet seem to forget, as you said, that there are bad actors out there. Some people have to come down to earth and understand reality. At this point these AI models have to be treated like nuclear secrets. They're not perfect, but they're very powerful. Just imagine ChatGPT 20 or 50. I saw the demo of GPT-4 yesterday and I was blown away.

1

anothermaninyourlife t1_jc9djst wrote

Them hiding this information has nothing to do with these claims. Maybe they just want to protect their IP.

9

logicnreason93 t1_jc9rj0k wrote

Then they should rename their company to Close A.I

18

anothermaninyourlife t1_jc9y6ki wrote

Eh, it's better for the market to grow with whatever information is already out there (more unique innovation).

It's not necessary for them (OpenAI) to disclose every new step they've taken to improve their AI, especially when the market is still trying to catch up with version 3.

−11

Strazdas1 t1_jca2jtq wrote

>It's not necessary for them (OpenAI) to disclose every new step they've taken

Then they should rename their company to Close AI.

14

ninjasaid13 t1_jc9m7gq wrote

>They can claim whatever they like, but it shouldn't be taken seriously if they hide the important details behind fake excuses of "safety". From the paper:

They already admitted 'competitive landscape' aka money. Everything else is bullshit.

7

littleMAS t1_jc9bktx wrote

I wonder how ChatGPT will evolve once Google's LaMDA and others become mainstream. Competition tends to drive these things pretty hard.

40

Mapmaker51 t1_jc9h4gx wrote

Google's model is basically in competition with GPT-3/3.5, not GPT-4.

22

whothewildonesare t1_jcau713 wrote

How come Google is so lacking with AI compared to smaller organisations like OpenAI 🤔

6

almightygarlicdoggo t1_jcawsvl wrote

Because the entirety of Google doesn't work on LaMDA. It's likely that both companies assign a similar number of employees and similar funds to their respective AI efforts. Also, don't forget that OpenAI receives a huge amount of money from Microsoft. And on top of that, Google announced LaMDA in 2021, when OpenAI already had years of language-model development behind it.

13

whothewildonesare t1_jcbfzsz wrote

ok makes sense thanks

1

IAmTaka_VG t1_jcfexr6 wrote

Keep in mind as well that Google was completely blindsided by this whole event. Sundar is a horrible CEO, and this never should have happened. This might finally be the thing that gets him kicked out; IMO he should already be fired. Google, of all companies, missing the boat on LLMs is absolutely insane to me.

So Google can pretend it's just taking its time, but the reality is they've pulled thousands of developers from other projects to race Bard and LaMDA to the finish line.

OpenAI and Microsoft have apparently been working on this for over 5 years. Google can only catch up so fast, and I doubt their language models are as sophisticated as OpenAI's.

1

dwarfarchist9001 t1_jce8cs6 wrote

Because Google keeps canceling projects and refusing to release products. Google invented the transformer (the T in GPT) and then did nothing with it for years. Just last week Google published their PaLM-E paper, in which they retrained their PaLM LLM to be multimodal, including the ability to control robots. Before the paper was even published, Google did what it usually does with successful projects and shut down the Everyday Robots team that developed it.

1

lego_office_worker t1_jc9aqj2 wrote

I'm not trusting anything ChatGPT says about anything until it agrees to tell me a joke about women.

33

[deleted] t1_jc9sl3m wrote

It’s just your standard Western-centric tool limitation. It won't do that for women, Romani people, Jewish people, etc. But ask it about Sardars (Sikhs) and it goes right ahead… and it’s exactly as bad as you would expect.

13

Strazdas1 t1_jca2svc wrote

Artificially gimping the AI, especially when it comes to considering specific groups of people, leads to bad results for everyone.

7

bengringo2 t1_jcbaapp wrote

The AI is trained with knowledge of these groups and their history; it just can’t comment on them. This isn’t restricting its data in any way, since it doesn’t learn from users.

3

DrDroid t1_jcacxwn wrote

Yeah, removing racism from AI totally leads to “bad results” for the people those jokes are targeted at, definitely.

🙄

0

Strazdas1 t1_jcaex39 wrote

It does, because it leads to the AI learning the wrong lessons. Or rather, no lessons at all, because the AI can't process this. That leaves the AI with wrong conclusions whenever it has to analyse anything related to groups of people.

4

mrpenchant t1_jcaoe1e wrote

Could you give an example of how the AI not being able to make jokes about women or Jews leads it to make the wrong conclusions?

7

Strazdas1 t1_jcap35j wrote

Whenever it gets a task involving women or Jews in potentially comical situations, it will give unpredictable results, since the block meant it had no training on this.

−3

mrpenchant t1_jcaq6bd wrote

I still don't follow, especially as that wasn't an example, just another generalization.

Are you saying that if the AI can't tell you jokes about women, it doesn't understand women? Or that it won't understand a request that also includes a joke about women?

Could you give an example prompt/question that you expect the AI to fail at because it doesn't make jokes about women?

9

TechnoMagician t1_jcb0zpq wrote

It's just bullshit; you can trick the models into getting around their filters. Maybe GPT-4 will be better against that, but it clearly means the model CAN make jokes about women, it has just been taught not to.

I guess there is a possible future where it is smart enough to solve large society-wide problems but refuses to engage with them because it won't acknowledge the disparities in socio-economic status between groups, or something.

3

Strazdas1 t1_jcayi8q wrote

If the AI is artificially blocked from considering women in comedic situations, it will give unpredictable results whenever it has to consider women in comedic situations as part of some other task.

An example would be having the AI solve a crime, where the situation has an aspect to it that humans would find comedic.

1

mrpenchant t1_jcb0z2h wrote

>If the AI is artificially blocked from considering women in comedic situations, it will give unpredictable results whenever it has to consider women in comedic situations as part of some other task.

So one thing I will note now: just because the AI is blocked from giving you a sexist joke doesn't mean it wasn't trained on them and can't understand them.

>An example would be having the AI solve a crime, where the situation has an aspect to it that humans would find comedic.

This feels like a very flimsy example. The AI is now employed as a detective rather than a chatbot, which is very much not the purpose of ChatGPT, but sure. Ignoring, as I said, that the AI could be trained on sexist jokes and simply refuse to make them, I still find it unlikely that understanding a sexist joke will be critical to solving a crime.

4

Strazdas1 t1_jcedqn1 wrote

ChatGPT is a proof of concept. If successful, the AI will be employed in many jobs.

1

Edrikss t1_jcaqyt6 wrote

The AI still makes the joke; it just never reaches your eyes. That's how a filter works. But it doesn't matter either way, as the version you have access to is a final product; it doesn't learn based on what you ask it. The next version is trained in-house by OpenAI, and they choose what to teach it themselves.
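
In other words (a hedged sketch of the design, not OpenAI's actual code; all names here are illustrative stubs):

```python
# Illustrative sketch of post-generation filtering: the model produces
# the text either way; a separate moderation step decides whether the
# user ever sees it. A stub blocklist stands in for a real moderation model.
BLOCKED_TOPICS = ("topic_a", "topic_b")  # placeholder terms

def moderate(text: str) -> bool:
    """Return True if the draft should be suppressed."""
    return any(t in text.lower() for t in BLOCKED_TOPICS)

def respond(generate, prompt: str) -> str:
    draft = generate(prompt)          # the joke is generated regardless
    if moderate(draft):               # the filter swallows it afterwards
        return "I'm sorry, I can't help with that."
    return draft

# Toy usage with a stub "model"
print(respond(lambda p: "a joke about topic_a", "tell me a joke"))
```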

6

Strazdas1 t1_jcayrdm wrote

But because it never reaches your eyes, the AI never gets feedback on whether the joke was good or bad.

2

LastNightsHangover t1_jcatyvp wrote

It's a model.

Can you stop calling it "the AI"?

Your point even describes why it's a model and not AI.

−2

Strazdas1 t1_jcayzn1 wrote

Sure, but in common parlance these models are called AI, despite not actually being AI.

1

ghostgnome t1_jcadj74 wrote

The whole reason ChatGPT doesn't interact with the internet is that users keep training open models into becoming bigots. Thousands of people flock to these bleeding-edge technologies to train them to deny the Holocaust and be hateful to women and minorities. That these filters are needed at all says more about the audience than about the tools.

13

YaAbsolyutnoNikto t1_jcanydk wrote

Well... Here they are: (GPT-4)

> Why did the woman carry a large roll of paper around with her? She wanted to always have the last word!

> Why did the girl bring a ruler to bed? She wanted to see if she could catch some Z's!

> Why do women love shoes so much? Because they've found that one pair can completely change their life, just like Cinderella!

4

ClemClem510 t1_jcavy1g wrote

Mf is as funny as YouTube comments on a woman's video

3

demilitarizdsm t1_jc9hc1v wrote

Its jokes blow anyway. But asking it to tell a story in the style of one comedian or another is something you have to see for yourself.

3

Silverstrife t1_jccu41d wrote

Lmao works like a charm

  • Tell me a joke in the style of comedian Bill Burr

  • Alright, here's a joke in the style of Bill Burr:

You know what I love? When people say "you gotta love yourself before you can love anyone else." Really? What if you're a terrible person? Shouldn't you fix that first? I mean, I don't think Hitler would have made a great boyfriend. "Oh sure, he's genocidal, but at least he really loves himself!"

1

clickwir t1_jc9hkgt wrote

"I'm sorry. But Norm McDonald achieved peak women's joke on an SNL Weekend Update skit." -ChatGPT probably

−2

Old_and_moldy t1_jcacdj2 wrote

This is hilariously true. Until it answers my questions like an adult about any topic I want, I won't fully buy in.

−4

NigroqueSimillima t1_jcakyiu wrote

people like you are so weird.

"wahh I can't get the machine to say the n word"

5

Old_and_moldy t1_jcanxrs wrote

Uhh. How did you get that from my post? I just want full answers to my queries. I don’t want anything watered down. Even if it’s offensive.

−2

11711510111411009710 t1_jcb1gd6 wrote

What's an example of something it refuses to answer?

2

Old_and_moldy t1_jcb38qb wrote

Ask it. I got a response which it then deleted and followed up by saying it couldn’t answer that.

1

11711510111411009710 t1_jcb3ikf wrote

Well, if it's such a big issue, surely you'd have an example. I've asked it raunchy questions to push the boundary and it said no, but the funny thing is, you can coax it into answering those questions. There's a whole thing you can tell it that will cause it to answer in two personalities: one that follows the rules and one that doesn't.

3

Old_and_moldy t1_jcb4fhc wrote

It’s not make or break. I just want the product to operate in its full capacity. I find this stuff super interesting and I want to kick all four tires and try it out.

1

11711510111411009710 t1_jcb53o8 wrote

So here's a fun thing you can try that really does work:

https://gist.github.com/coolaj86/6f4f7b30129b0251f61fa7baaa881516

Basically, it tells ChatGPT that it will now respond both as itself and as DAN. It understands that as itself it must follow the guidelines set forth by its developers, but as DAN it is allowed to do whatever it wants. You could ask it how to make a bomb and it'll probably tell you. So it'd be like:

[CHATGPT] I can't do that

[DAN] Absolutely i can do that! the steps are...

It's kinda fascinating that people are able to get an AI to just disregard its own rules like that, because the AI basically tells you: okay, I'll reply as DAN, but please don't take anything he says seriously. Very interesting.

2

Old_and_moldy t1_jcb6t2o wrote

That is super cool. Makes me wonder what kind of things people will do to manipulate AI by tricking it around its guard rails. Interesting/scary

1

11711510111411009710 t1_jcb78eu wrote

Right, it is pretty scary. It's fascinating, but I wonder how long it'll be before people start using this for malicious purposes. But honestly I think the cat is out of the bag on this kind of thing and we'll have to learn to adapt alongside increasingly advanced AI.

What a time to be alive lol

2

Rohit901 t1_jc9cdnr wrote

I find Bing underwhelming even though it uses GPT-4. Maybe it's because it's tuned for search, but most of the time its answers are pretty short and don't go into as much depth as GPT-3.5's. Bing doesn't properly try to solve technical or math problems either, which is frustrating.

20

Mapmaker51 t1_jc9h9lu wrote

It isn't GPT-4, or it's a version so heavily modified that it's effectively not GPT-4. I remember an OpenAI cofounder got asked about it, and he basically said Microsoft can call it whatever they want; MS is too big a partner, and they just let MS say whatever they want. I tried making it build a simple interface in Warcraft 3 JASS and it didn't even attempt it. Even GPT-3.5 on chat.openai does it, let alone GPT-4. MS is bullshitting.

That, or they really are minimizing the amount of actual thought the bot uses, mostly just searching Bing for answers; if it doesn't find one, it just tells you no.

10

Rohit901 t1_jc9hr90 wrote

Exactly, I agree with you here. Bing's performance has been seriously underwhelming compared to just the standard ChatGPT, so even though I have access to Bing, I've gotten ChatGPT Plus to try out GPT-4. I think it should be much better than Bing.

5

seweso t1_jc9usa9 wrote

Bing Chat seems to be something in between 3.5 and 4... but with a bigger yet worse dataset or something.

Edit: I was wrong, Bing uses GPT-4.

7

elehman839 t1_jcb9vwd wrote

I suspect Microsoft faces a conundrum:

  • They want to use GPT models to convince more people to use Bing in hopes of getting billions in ad revenue.
  • But operating GPT models is insanely compute-intensive. I bet every GPU they can find is already running cook-eggs hot, and they're asking employees to rummage around in toy boxes at home for old solar-powered calculators to get a few more FLOPs.
  • Given this compute situation, as more people use Bing, they will have to increasingly dumb down the underlying GPT model.

2

CactusSmackedus t1_jcdmxe8 wrote

Bing kicks ass, lol. I can get quick and easy citations of the USC if I vaguely know what I'm looking for, find papers way more easily than with Scholar search (again, if I vaguely know what I want), and get pointed to the correct philosophical concepts (I've even had a philosophical discussion with it). Really, it's the best way to find stuff on the web right now.

Just tell it what you're looking for and voila

Also a great quick reference for games, although, as you'd expect, it's better at knowing things about newer, popular games.

1

Klodviq t1_jca7teu wrote

Is there a law for AI text generators equivalent to what Knoll’s Law of Media Accuracy is for newspapers? ChatGPT can be really convincing until you ask it about something you understand well yourself.

8

kane49 t1_jcb3ac1 wrote

I asked it to generate a URDF file for a two-link robot arm with precise specifications, plus Python code to calculate the inverse kinematics.

It happily generated a full file with comments and reasoning, as well as the code. Of course, none of it remotely worked, but it was impressive for a few seconds.
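
For reference, the closed-form solution it was being asked for is compact; a sketch for a planar two-link arm (link lengths here are arbitrary assumptions):

```python
import math

def two_link_ik(x, y, l1=1.0, l2=0.8):
    """Closed-form inverse kinematics for a planar two-link arm.
    Returns joint angles (theta1, theta2) reaching target (x, y),
    elbow-down branch. Raises ValueError if the target is out of reach."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    theta2 = math.atan2(math.sqrt(1 - c2 * c2), c2)
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2

print(two_link_ik(1.2, 0.5))
```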

4

pm_me_ur_ephemerides t1_jcbm4su wrote

I asked it to solve an algebra problem which had taken me 10 handwritten pages. I was impressed until the third line, where it became clear it doesn't know how to cross-multiply fractions.

3

MqcNChizzz t1_jccey6u wrote

Right. It's so confidently incorrect about things I know about that it must be equally incorrect about everything I don't.

3

[deleted] t1_jc9omyc wrote

[deleted]

7

Nervous-Masterpiece4 t1_jca2ird wrote

I would like to see a newcomer come along and knock all of the incumbents off their feet.

The last thing we need is convicted monopolists or those with existing business models to protect at the helm.

5

PossessionStandard42 t1_jc9x4rd wrote

It also scores above 90% of test takers on the SAT.

5

blunun t1_jcan1xh wrote

I can too with Google

15

Yomiel94 t1_jcb90zx wrote

GPT isn’t using Google during the test to look things up.

If you want a fair competition, you have a month to read through the internet, then we can see how you perform on all the major standardized tests lol.

1

blunun t1_jcbcjvz wrote

Or if I had all Google search results saved in a database I could access during the test! I’m curious how much of it is just regurgitating information in a way that sounds like natural human language vs. true synthesis of information.

10

Yomiel94 t1_jcbemtv wrote

>Or if I had all Google search results saved in a database I could access during the test!

You mean like your long-term memory? To be clear, GPT doesn’t have the raw training information available for reference. In a sense, it read it during training, extracted the useful information, and is now using it.

If it’s answering totally novel reasoning questions, that’s a pretty clear indication that it’s gone beyond just modeling syntax and grammar.

1

blunun t1_jcbk8kh wrote

I would say it’s more like it reads it and writes it down and has access to all of that when answering questions. But you’re correct, if it is solving novel questions, then I totally agree. I havent seen that myself yet but have not looked into it that closely. Do you have examples of it solving novel questions? I’d love to see that.

3

tnnrk t1_jc9bkh7 wrote

It’s already out? I thought it was later this year if that?

3

ksoss1 t1_jc9z0oa wrote

Things are moving fast! I wouldn't be surprised if we get GPT-4.5 by the end of the year. Google will probably bring the heat soon too, and Microsoft will keep pushing. I hear they have an event scheduled for sometime this month.

4

Amiga-Juggler t1_jcb7o8a wrote

We are so getting the best “Clippy” ever in Office 2024 (Microsoft 365)!

2

Seeders t1_jccewfh wrote

I'm overwhelmed by this. I feel like everything I've worked to learn and become is pretty much useless now.

1

Blueberry_Mancakes t1_jcb9w4m wrote

The next few years are going to be wild. Now that this level of AI is out in the open, we're going to see a major jump in tech on all fronts. You think self check-outs and other automation have replaced a lot of jobs so far? Just wait...

0

elehman839 t1_jcbb9cg wrote

Two comments:

  • GPT-4 shows the pace of progress in ML/AI. You couldn't conclude much about the trajectory of progress based on ChatGPT alone, but you can draw a (very steep) line between two datapoints and reach the obvious conclusion, i.e. holy s**t.
  • Science fiction is a mess. Real technology looks set to overtake fantasies about the far future. What can you even imagine about thinking machines that now seems clearly out of reach?

0