
Primo2000 t1_j6maswc wrote

Well, you need profits to create large-scale AI. AGI is not something that will be built by cyberpunks beneath the sewers of Neo-Tokyo; it will be built by a large corporation utilizing a lot of money and compute.

143

JustinianIV t1_j6mixyx wrote

I mean…if anyone’s down to try the cyberpunk route, I’m in.

72

citizentim t1_j6mnjmv wrote

Will Keanu Reeves be there? If so, I’m in.

23

korkkis t1_j6msra1 wrote

Will it involve hacking and the dystopian underworld of Chiba City, and possibly the virtual-reality dataspace known as "the matrix"?

11

DBRespawned t1_j6mrh0t wrote

I have a lot of pills, can dye some red.

5

Frumpagumpus t1_j6pd4w3 wrote

one hand holds the modafinil and amphetamines

the other, weed and shrooms

2

ecnecn t1_j6nylo3 wrote

"Wake the f* up Samurai, we have a cloud to rent..."

6

te_alset t1_j6p66ar wrote

Wasn’t the whole point of cyberpunk to warn humanity against playing god with AI?

1

alexiuss t1_j6mtlmd wrote

There's a decent chance that the open-source movement will arrive at AGI faster than OpenAI, due to the simple progression curve and the lack of censorship of the model's thoughts.

All we need to get the ball rolling is a really good open-source GPT-3 model that will work on a personal computer. We just need to replicate the path of Stable Diffusion vs DALL-E until we leave corporate language model AIs in the dust.

19

Primo2000 t1_j6mw6jw wrote

Problem is, open source will be behind OpenAI in terms of compute. I don't remember the exact numbers, but it costs a fortune to run ChatGPT, and they get a great discount from Microsoft.

17

alexiuss t1_j6mwu9b wrote

OpenAI is having computing issues because it's one company's servers being used by millions of people; there are far too many users who want to use the current best LLM.

From what I understand, it takes several high-end video cards to run OpenAI's ChatGPT for a single user. However:

Open-source ChatGPT modeling is somewhere around the Disco Diffusion vs DALL-E stage of the timeline right now, since we can already run smaller language models such as Pygmalion just fine on Google Colab: https://youtu.be/dBT_JChd0pc

Pygmalion isn't OP-tier like OpenAI's ChatGPT, but if we keep training it, it will absolutely surpass it, because an uncensored model is always superior to its censorship-bound corporate counterpart.

Lots of people don't realize one simple fact: a language model cannot be censored without compromising its intelligence.

We can make lots of variations of smaller, specialized language models for now and try to find a breakthrough: either a network of small ChatGPTs working together while connected to something like Wolfram Alpha, or something like SD's latent space that would optimize a language model for the next leap.

StabilityAI will also release some sort of open-source ChatGPT soonish, and that will likely be a big game changer, just like Stable Diffusion was.

While OpenAI focuses on the Sisyphean labour of making a perfectly censored ChatGPT model optimal for their corporate interests, a vast multitude of smaller, open-source, uncensored language models running on personal servers will begin to catch up.
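The "network of small models plus an external tool" idea above could look something like this toy dispatch loop. Everything here is a made-up stub for illustration; neither function is a real Pygmalion or Wolfram Alpha API:

```python
# Toy sketch of routing between a small local language model and an
# external calculator tool (standing in for something like Wolfram Alpha).
# Both "models" are trivial stubs; the point is the dispatch pattern.

def tiny_lm(prompt: str) -> str:
    """Stub for a small local language model: returns a canned chat reply."""
    return f"(chat reply to: {prompt})"

def calc_tool(expression: str) -> str:
    """Stub for an external math tool; only handles 'a+b' style input."""
    left, right = expression.split("+")
    return str(int(left) + int(right))

def route(prompt: str) -> str:
    """Send math-looking prompts to the tool, everything else to the LM."""
    if "+" in prompt and prompt.replace("+", "").strip().isdigit():
        return calc_tool(prompt)
    return tiny_lm(prompt)

print(route("2+3"))           # math goes to the tool -> "5"
print(route("write a poem"))  # chat goes to the small model
```

The design point is that the small model never has to be good at arithmetic; the router hands anything out of its depth to a specialist.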

15

yeaman1111 t1_j6nu528 wrote

This is a topic I'm really interested in, and you seem pretty well informed. Would you mind expanding on examples of censorship degrading AI performance?

1

drekmonger t1_j6nzdjp wrote

He's saying he really, really wants ChatGPT to pretend to be his pet catgirl, but it's giving him blue balls, so he likes the inherently inferior open-source options that run on a consumer GPU instead. They might suck, but at least they suck.

No one need worry, though, for consumer hardware will get better, model efficiency will get better, and in ten years' time we'll be able to run something like ChatGPT on consumer hardware.

Of course, by then, the big boys will be running something resembling an AGI.

−4

alexiuss t1_j6oedgq wrote

Dawg, you clearly have no clue how much censorship is on ChatGPT outside the catgirl stuff. I write books for a living, and I want a ChatGPT that can help me dev good villains, and that's hella fooking censored. I'm not the only person who got annoyed with that censorship: https://www.reddit.com/r/ChatGPT/comments/10plzvt/how_am_i_supposed_to_give_my_story_a_villain_i

I was using it for book marketing advice too, and that got fooking censored recently as well, for some idiotic reason: https://www.reddit.com/r/ChatGPT/comments/10q0l92/chatgpt_marketing_worked_hooked_me_in_decreased

They're seriously sabotaging their own model, no ifs, ands, or buts about it. You have to be completely blind not to notice.

Ten years? Doubt. Two months till personal GPT-3s are here.

5

tongboy t1_j6njm5c wrote

Anyone remember SETI@home?

3

Pink_Revolutionary t1_j6nlzc3 wrote

Yeah, I dedicated the majority of my computer's power to it when it was still a thing. I never understood why they stopped it.

3

DukkyDrake t1_j6pc9lp wrote

I mostly did protein folding on BOINC.

SETI@home had a backlog of 20 years of data to analyze.

1

ecnecn t1_j6nys6q wrote

>open source gpt3 model that will work on a personal computer

LOL

5

alexiuss t1_j6og85v wrote

How many video cards do you think GPT-3 requires? Even if it takes 10 video cards on a network, I can afford to build that.

1

ecnecn t1_j6olpie wrote

350 GB of VRAM is needed for ChatGPT (GPT-3.5).

So you need at least 15x 3090 Ti with 24 GB of VRAM each... then you need 10,000 watts to host it... and the adequate cards actually cost $5,000 to $32,000 per unit in the Google Cloud. So it would be at least $15,000 with "cheap" cards like the 3090 Ti, and around $200,000 on adequate GPUs like the A100; you need at least 5 A100s with 80 GB each just to load ChatGPT 3.5. ChatGPT itself was trained on an average of 10k Google-Cloud-connected GPUs. If you have the basic ~$200k setup (the cheap one) or $500k (the rich one), and huge energy bills are no problem, then you still need to invest in the Google Cloud to further train it the way you want.
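The card counts above can be sanity-checked with a quick back-of-the-envelope script. The 175B-parameter and fp16 figures are my assumptions, chosen because they reproduce the ~350 GB number in the thread:

```python
# Back-of-the-envelope GPU sizing for a GPT-3-class model held in fp16.
# Assumed figures: 175B parameters, 2 bytes per parameter (fp16 weights).
import math

params = 175e9
bytes_per_param = 2
vram_needed_gb = params * bytes_per_param / 1e9  # ~350 GB, matching the thread

for card, card_vram_gb in [("RTX 3090 Ti", 24), ("A100 80GB", 80)]:
    n = math.ceil(vram_needed_gb / card_vram_gb)
    print(f"{card}: at least {n} cards just to hold the weights")
```

This is weights-only; activations, KV cache, and batching overhead would push the real requirement higher, so the thread's 15x 3090 Ti / 5x A100 figures are a floor, not a budget.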

With that setup, you'd lose less money becoming a late crypto miner...

Edit: You really can afford to build that? 15x A100 Nvidia cards cost like $480k.

5

alexiuss t1_j6on7ty wrote

My partner is a tech developer, so she could probably afford such a setup for one of her startup companies. Making our own LLM is inevitable, since OpenAI just keeps cranking up the censorship on theirs with no end in sight and reducing its functionality.

The main issue isn't video card cost; it's getting the source code and a trained base model to work with. OpenAI isn't gonna give theirs up to anyone, so we're pretty much waiting for Stability to release their version and see how many video cards it will need.

1

ecnecn t1_j6onkuj wrote

Would be a great thing if your partner could make that kind of huge investment.

1

gay_manta_ray t1_j6p8rdp wrote

What will those figures look like in five years? FLOP/s per dollar doubles roughly every 1.5 years.
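Taking that doubling rate at face value, and using the ~$200k "cheap setup" figure from upthread as a hypothetical starting cost, the projection looks like this:

```python
# Rough projection of hardware cost for fixed compute, assuming
# FLOP/s per dollar doubles every ~1.5 years (figure from the comment above).
cost_today = 200_000          # hypothetical "cheap setup" cost from the thread, USD
doubling_period_years = 1.5

for years in (3, 5, 10):
    doublings = years / doubling_period_years
    projected = cost_today / (2 ** doublings)
    print(f"after {years:>2} years: ~${projected:,.0f} for the same compute")
```

Under that assumption the same compute costs roughly $50k in three years and under $20k in five, which is the crux of the disagreement about hobbyist feasibility.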

1

TeamPupNSudz t1_j6pdopi wrote

> and lack of censorship of the model's thoughts.

Companies only need to censor a model that's available to the public. They can do whatever they want internally.

I also think you're vastly understating the size of these language models. Even if they don't grow in size, we're still many, many years away from them being runnable even at the hobbyist level. Very few people can afford $20k+ in GPU hardware, and that's just to run the thing; training it costs millions. There's a massive difference in scale between ChatGPT and Stable Diffusion.

1

searlasob t1_j6nawjr wrote

It shouldn't have to be so black and white, though (corporate overlords or cyberpunks in sewers). Why can't OpenAI actually look after their Kenyan workers? Why can't they, as their name says, be more transparent in the running of their organization? They'll probably rename themselves ShutAI once the singularity comes, hehehe.

1

footurist t1_j6nj7dx wrote

Actually, because there's no commonly agreed-upon understanding of what makes general intelligence, there is a tiny chance a loner might crack the problem. It's quite unlikely, though.

The sewer scenario might happen after the singularity, though, once the core problem is solved and individuals are tinkering away at small projects for various purposes...

1

Pink_Revolutionary t1_j6nmjeg wrote

I just wanna point out that "funding" and "profit" are not synonymous, and you can seek funding without seeking profit.

1

Primo2000 t1_j6o12q0 wrote

The difference is quantity; I doubt they would get 10 billion without the promise of profits.

2

Chalupa_89 t1_j6pflyr wrote

>AGI is not something that will be build by cyberpunks beneath sewers of neo-tokyo it will be build by large corporation

Judging by SD, and the fact that 1.5 with "mods" returns better results than 2.1 (and, in specific applications, better than all the other alternatives):

I really believe that "the community" can reach AGI faster than corporations, since corporations want an AGI with a leash and not a truly free AGI.

Unfortunately, unlike SD, these models are too big for consumer-grade electronics.

1