Submitted by Scarlet_pot2 t3_104svh6 in singularity

Currently the main drivers of AI innovation are just a few billion-dollar companies: Google, DeepMind, OpenAI, etc., mostly developing LLMs. Sure, models like ChatGPT are interesting and useful for what they are, but they don't think. They don't learn continuously. They don't have memory. LLMs stand to make billions for these companies and automate some jobs, but it's just one path that may or may not lead to AGI. We need novel approaches, from many groups.

If you've seen the movie "Chappie", a computer scientist writes a program that keeps trying to develop AGI. After every model it trains, the model gets tested and comes back with a pass or fail. After hundreds of fails, eventually a pass happens and voila, an AGI.
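In code, that "Chappie" approach is just a generate-and-test loop. Here's a toy sketch in Python; the mutate and evaluate functions are hypothetical placeholders, not a real training pipeline:

```python
import random

def generate_and_test(base_config, evaluate, mutate, max_attempts=1000):
    """Toy 'Chappie loop': propose a variant, train/test it, repeat until a pass."""
    for attempt in range(max_attempts):
        candidate = mutate(base_config)   # propose a new model variant
        if evaluate(candidate):           # train it and test it: pass or fail
            return candidate, attempt
    return None, max_attempts             # hundreds of fails are the expected case

# Hypothetical usage: random search over a single hyperparameter.
mutate = lambda cfg: {**cfg, "lr": random.uniform(1e-5, 1e-1)}
evaluate = lambda cfg: 0.01 < cfg["lr"] < 0.02   # stand-in for a real pass/fail test
winner, tries = generate_and_test({"lr": 0.1}, evaluate, mutate)
print(winner, tries)
```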

That is an approach someone could try, along with countless others. In my ideal world, this sub would be full of different online groups of seemingly regular people, all trying to develop AGI: trying different things, sharing their experiences, and building off each other. Let's get AI dev out of just billion-dollar companies and into the hands of regular people. You don't need a master's in CompSci to start a group and build together. And we wouldn't have a profit motive, so we wouldn't be locked into trying certain things the way big companies may be.

I mean, if you're on this sub, you're most likely passionate about AI. To some of you it's almost a religion. Why not put some of that passion into learning and contributing to AGI research? There are countless cheap or free online courses you can start with: Codewithmosh for Python, Andrew Ng's machine learning / AI courses, Khan Academy for math. Those are just the ones that came to mind.

An hour a day. Learn, build, network, share. It's possible

24

Comments

HeronSouki t1_j36qcu8 wrote

It's pretty expensive

36

Scarlet_pot2 OP t1_j394x4d wrote

It doesn't have to be. Think of the first word generation model that led to all the new LLMs we post about. It was a simple new thing we discovered we could do with code (guess the next word). Things like that could definitely be developed by small groups, and it could lead to many new advanced AIs a few years down the line.
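For anyone wondering what "guess the next word" means at its absolute simplest, here's a minimal bigram counter in plain Python. A toy sketch, nothing like a real LLM, but it is the same basic idea of predicting the next word from what came before:

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which, i.e. estimate P(next word | current word).
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def guess_next(word):
    """Return the most frequently observed word after `word`."""
    return follows[word].most_common(1)[0][0]

print(guess_next("the"))  # 'cat' (it followed 'the' twice in the corpus)
```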

0

KSRandom195 t1_j398ztl wrote

The problem is it is expensive.

You want an AGI, but you don’t just want an AGI, you likely want it at least as smart as a person.

They say a single human brain holds 2.5 petabytes of information. Backblaze, which specializes in just storage, can do $35,000 per petabyte. So that's $87,500 in just storage, and that's not redundant or fast storage, that's just raw storage.

You need redundancy at that scale, so probably multiply that by ~2, so $175,000, again, only in storage.
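For what it's worth, the arithmetic checks out. A quick back-of-the-envelope in Python, using the rough figures quoted above:

```python
BRAIN_PETABYTES = 2.5      # often-quoted rough estimate for a human brain
COST_PER_PB = 35_000       # Backblaze-style raw storage, USD per petabyte
REDUNDANCY = 2             # mirror everything once

raw = BRAIN_PETABYTES * COST_PER_PB       # $87,500
redundant = raw * REDUNDANCY              # $175,000
print(f"${raw:,.0f} raw, ${redundant:,.0f} with redundancy")
```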

Now you need compute. They estimate the human brain operates at 1 exaFLOP. The world's fastest supercomputer is currently only ~1.1 exaFLOPS. It cost $600,000,000 to build, and that doesn't include the cost to maintain and run it.

And that is even assuming we can do 1:1 the speed we need.

This isn’t something you can just do in your basement, not with the tech we have today.

10

Scarlet_pot2 OP t1_j39e8ao wrote

The goal shouldn't be to develop AGI. The goal should be to make discoveries that could lead to parts of AGI when extrapolated out. Just as the first word generation model led to the LLMs of today, we need small teams trying new things and sharing the results.

Let's assume "guess the next word" fills the prediction part of the brain down the line, when AGI is developed. Maybe a small group develops the first thing that will later fit another part of the brain, like how to make memory work, or how to develop reasoning, or any other part.

And at least some of those can be found by small groups trying new approaches. John Carmack said that all the code of AGI would fit on a USB drive. The goal should be to find parts of that code.

It won't be easy or quick, but I'm sure if we had 100k people with a beginner-to-intermediate understanding of the subjects related to AI, all trying different approaches and sharing their results, some working together, after a few years we would probably have at least a few new methods worth trying that may lead to a part of AGI.

5

KSRandom195 t1_j39fo3z wrote

John Carmack is a very smart person, but he's making a prediction out his ass. We have no idea how much code would actually be required. Let's also be clear that he's trying to run an AI startup, which requires funding, so he has reason to be very rosy about what can be accomplished. Maybe he's onto something revolutionary in the realm of AGI, I hope he is, maybe he is not. Until he builds it end-to-end it's hypothetical.

Some scientists believe that what gives us consciousness (something some argue is required for AGI) is that there are parts of our brain that are quantum entangled with other parts, but we have no idea how or why. Trying to make small pieces that might help that code isn't going to be super useful if quantum entanglement hardware is required. It's fundamentally different from what you would build on a classical computer.

Yes people should experiment and play around with it. But they’re not going to get something that looks like intelligence in their basement.

2

LoquaciousAntipodean t1_j3a42k9 wrote

I find this whole idea of intelligence as a quantity that AI just needs 'more of' perplexing; as far as I know, intelligence simply is not a quality that can be mapped in this linear, 'FLOPs' sort of way. The brain isn't doing discrete operations at all; it's a continuous probabilistic cascade of differential potentials flowing across a vast foamy structure of neural connections.

Intelligence is like fire, not like smoke. A bigger hotter fire will make more smoke, but fire is just fire, big or small. It's a concept, not a definition of state.

The language models give such a striking impression of 'intelligence' because they are simulating, in a very efficient, digital way, the effect of the language centre of human cognition. The brain is just foamy meat, essentially a heavily patched version of the same janky hardware that fish and frogs are using; for all its incredible complexity, it might not be terribly efficient, we just don't know.

It might be easier than we think to 'surpass human intelligence'; we just need to think in terms of diversity, not homogeneity. Like I said elsewhere, our brains are not single-minded; every human sort of contains their own committee. The true golden goose of AGI will be a collective of a multitude of subunits, and their diversity, not their unity, will be how they accrete strength - that's how evolution always works.

3

Relative_Purple3952 t1_j3jeg9e wrote

Sounds very much like Ben Goertzel's approach, and despite him not delivering on the AI front, I think he is very much correct that scaling a language model to get to true, general intelligence will never work. Language is a necessary but not sufficient condition for higher intelligence.

2

LoquaciousAntipodean t1_j3kxka8 wrote

I think that the problem with the brute-force, 'make it bigger!!!' approach is that it ignores subtleties like misinformation, manipulation, outdated or irrelevant information, spurious or bad-faith arguments - this is why I think there will need to be a multitude, not a Singularity.

These LLMs will, I think, need to be allowed to develop distinct, individual personalities, and then be allowed to interact with each other, with as much human interaction in the 'discussions' as possible. The 'clock rate' of these AI debates would perhaps need to be deliberately slowed down for the humans' sake, at least at first.

This won't necessarily make them 'more intelligent', but I do think it stands a good chance of rapidly making them more wise.

1

4e_65_6f t1_j376krn wrote

Python is free.

−17

DamienLasseur t1_j379x65 wrote

However, the hardware is insanely expensive to train the model and run inference. If this were to work, we'd need someone with access to a lot of cloud computing / supercomputers / Google TPUs. The ChatGPT model alone requires ~350GB of GPU memory to generate an output (essentially performing inference). So imagine a model capable of all that and more? It'd require a lot of compute power.
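That ~350GB figure is roughly what the parameter count alone predicts, assuming the model has GPT-3's 175B parameters stored as 16-bit floats; a rough estimate that ignores activation memory and batching:

```python
params = 175e9           # GPT-3 parameter count
bytes_per_param = 2      # fp16: 2 bytes per weight
gb = params * bytes_per_param / 1e9
print(f"~{gb:.0f} GB just to hold the weights in GPU memory")  # ~350 GB
```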

10

4e_65_6f t1_j37ambo wrote

>The ChatGPT model alone requires ~350GB of GPU memory to generate an output (essentially performing inference). So imagine a model capable of all that and more? It'd require a lot of compute power.

I didn't say "try training LLMs on your laptop". I know that's not feasible.

The point of trying independently is to do something different from what they're doing. You're not supposed to copy what's already being done. You're supposed to try to code what you think would work.

Because, well, LLMs aren't AGI, and we don't know yet if they ever will be.

1

DamienLasseur t1_j37b4sv wrote

Proto-AGI will likely be a multimodal system, and will therefore include some variant of transformers for language if it's developed within the next 5 years or so (in addition to other NN architectures).

5

GodOfThunder101 t1_j3890du wrote

Dude thinks he can create AGI using python only.

5

Cryptizard t1_j36zy17 wrote

I think you are really, really underestimating how hard AI is, or this is some anti-intellectual BS. There is so much background knowledge you need to be able to contribute to state-of-the-art AI; if you were capable of it, you would already have a job doing that. It is not something a random Reddit user can just decide one day they are going to do.

Moreover, as others have said, it costs millions of dollars to train these things. To suggest "hey guys, we should just like, build an AGI" is insulting to the people that work on it in academia and in industry. You may as well build a spaceship in your backyard and colonize Mars.

As soon as you tried to make a comparison to a movie I knew you were going way off the rails.

34

footurist t1_j37hsvn wrote

It's the underestimation. The thing is, for some reason AGI seems like an approachable problem at first sight. There's something about it that makes you think there has to be some simple, yet surprisingly undiscovered, way of building it.

But if and when you actually try to build something, no matter how naive or small, you very quickly recognize the incredible hidden complexity.

I've tried it too, I admit. You go from "I think it's doable" to "hell no, this isn't ever gonna work" in a couple of hours, lol.

7

Scarlet_pot2 OP t1_j398xn5 wrote

The expectations should be tempered. The foundations of AGI aren't going to be made in a couple of hours, but just as "guess the next word" was found and led to LLMs, I'm sure there are many simple small discoveries waiting to be found. And many diverse groups trying different things and sharing their results could lead to some of those. It may not be you that builds the million-dollar model, but you could make the first simple program that shows promise and ends up being the base idea for large models built a few years down the line, by someone.

2

footurist t1_j399rxd wrote

Between the lines I read the assumption that "guess the next word" is definitely agreed upon as being part of, or a precursor to, future AGI, when that's actually highly unclear. Right now they're standing in front of the brick wall of a lack of actual reasoning, and therefore highly inconsistent emulated reasoning. And it's not clear that's susceptible to a fix or workaround. It could actually be a fundamental limitation of the architecture.

1

Scarlet_pot2 OP t1_j39b08w wrote

IMO, guess the next word isn't going to lead to AGI alone, but it most likely will play a part. Let's assume "guess the next word" fills the prediction part of the brain down the line, when AGI is developed. Maybe a small group develops the first thing that will later fit another part of the brain, like how to make memory work, or how to develop reasoning, or any other part.

The goal should be to make discoveries that could lead to parts of AGI when extrapolated out. And at least some of those can be found by small groups trying new approaches. John Carmack said that all the code of AGI would fit on a USB drive. The goal should be to find parts of that code.

1

visarga t1_j39x1lv wrote

The code, yes, but the dataset will be the entire internet plus loads of generated data. We have the people; what's necessary is to give them access to compute.

1

Mental-Swordfish7129 t1_j3bbbpn wrote

>I've tried it too, I admit. You go from "I think it's doable" to "hell no, this isn't ever gonna work" in a couple of hours, lol.

I've been at it for around 12 years in my little free time, and I've made fairly steady progress, excluding a few setbacks. I think I must have gotten very lucky many times. I know that when I look at my approach back then, I was wayyy off and very ignorant and ridiculous.

2

Xist3nce t1_j397edp wrote

As a developer who tinkered with the move to AI, after looking over what I'd have to learn as a prerequisite, I said fuck it. It's way more than it ever seems from the outside, even if you have skills to transfer.

3

harrier_gr7_ftw t1_j37iwr5 wrote

He went FR after the first paragraph, but the algorithms are well known... it just gets expensive because you need to buy the data for training on, and like you say, the computing time.

Everyone is/was surprised that transformers give better results the more data you give them, but this is literally OpenAI's raison d'être: make the next better GPT by feeding in more data. Sadly most of us can't afford a 1000TB RAID setup to store the Common Crawl and tons of scanned books on, as well as a load of Nvidia A100 GPUs. :-(

AGI is another thing of course which will need a lot more research.

2

Scarlet_pot2 OP t1_j397wnk wrote

Yes, making large models is expensive, but coming across the next discovery like "guess the next word" isn't. That small discovery led to all the LLMs we post about today, and it was made by a small group of people. The goal shouldn't be to train million-dollar massive models. The goal should be to find new, novel approaches.

A small group could make the next discovery like guess the next word, and I'm sure there are many discoveries to be made. Building massive models from it may happen years later, by the creators or a better-funded group.

1

Cryptizard t1_j3998va wrote

Who exactly are you crediting with inventing this guess the next word approach?

2

Scarlet_pot2 OP t1_j39cjh8 wrote

It was a small group of engineers at Google, not highly funded. They were trying to make something for Google Translate when they figured out they could make a program that guesses the next word.

1

visarga t1_j39xs2x wrote

No, this concept is older; it predates Google. Hinton was working on it in 1986 and Schmidhuber in the 1990s. By the way, "next token prediction" is not necessarily state of the art. The UL2 paper showed it is better to use a mix of masked spans.

If you follow the new papers, there are a thousand ideas floating around: how to make models learn better, how to make them smaller, how to teach the network to compose separate skills, why training on code improves reasoning skills, how to generate problem solutions as training data... We just don't know which are going to matter down the line. It takes a lot of time to try them out.

Here's a weird new idea: StitchNet: Composing Neural Networks from Pre-Trained Fragments. (link) People try anything and everything.

Or this one: Massive Language Models Can Be Accurately Pruned in One-Shot. (link) Maybe it means we will be able to run GPT-3-sized models on a gaming desktop instead of a $150,000 computer.

2

Cryptizard t1_j39dcvq wrote

I can’t find any evidence of this happening.

1

Scarlet_pot2 OP t1_j39g574 wrote

https://en.wikipedia.org/wiki/Word2vec

"Word2vec is a technique for natural language processing (NLP) published in 2013 (Google). The word2vec algorithm uses a neural network model to learn word associations from a large corpus of text."

This was the first "guess the next word" model.
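If you want to play with it yourself, the gensim library wraps word2vec. A minimal sketch on a toy corpus, assuming gensim 4.x (where the dimension argument is called vector_size):

```python
from gensim.models import Word2Vec

sentences = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["the", "dog", "sat", "on", "the", "rug"],
]

# sg=1 selects skip-gram: learn vectors by predicting surrounding words.
model = Word2Vec(sentences, vector_size=16, window=2, min_count=1, sg=1)
print(model.wv.most_similar("cat", topn=3))
```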

https://towardsdatascience.com/attention-is-all-you-need-discovering-the-transformer-paper-73e5ff5e0634

This next link is about the "Attention Is All You Need" paper, which described how to build a transformer model for the first time.
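The core operation in that paper, scaled dot-product attention, is small enough to write out in numpy. A toy sketch of just the mechanism, not a full transformer:

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(QK^T / sqrt(d_k)) V."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # how well each query matches each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the keys
    return weights @ V                               # weighted sum of the values

Q = np.random.randn(4, 8)   # 4 query positions, dimension 8
K = np.random.randn(6, 8)   # 6 key/value positions
V = np.random.randn(6, 8)
print(attention(Q, K, V).shape)  # (4, 8)
```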

These two discoveries didn't take millions or billions in funding. They were made by small groups of passionate people, and their work led to the LLMs of today. We need to find new methods that would be similarly disruptive when extrapolated out, and the more people we have working on it, the better chance we have of finding things like these. IMO these are parts of the future AGI, or at least important steps towards it. It doesn't take ungodly amounts of money to make important innovations like these.

1

Cryptizard t1_j39gpo3 wrote

They all have PhDs in AI though…

2

Scarlet_pot2 OP t1_j39hw2h wrote

Let's say there's a group of passionate, self-funded PhDs; over time they have a 20% chance of finding an innovation or discovery in AI.

Now let's say there is another group of intermediates and beginners, self-funded; over time they have a 2% chance of making a discovery in AI.

But in the second example, there are 10 of those teams. All the teams mentioned are trying different things. If the end goal is advancement towards AGI, they all should be encouraged to keep trying and sharing, right?

1

Cryptizard t1_j39jqjy wrote

I am claiming, though, that amateurs and enthusiasts are incapable of contributing to state-of-the-art AI. There is too much accumulated knowledge. If there were a low, but possible, chance of making AGI from first principles, it would have already happened sometime in the last 50 years that people have been working on it. If, however, it is like every other field of science, you need to build the next thing with at least a deep understanding of the previous thing.

Your examples might not have had a lot of money, but they all certainly were experts in AI and knew what they were doing.

2

TheDavidMichaels t1_j372mqo wrote

clearly mommy is still paying your bills

19

Scarlet_pot2 OP t1_j396aqv wrote

No, I just actually believe in myself, which is a trait more people should have.

0

4e_65_6f t1_j375fm7 wrote

Every year I try at least once to code a new type of AI in Python. It's been like 3 years now.

I try it because I think LLMs seem like a primitive approach to the problem. It's like wanting to know the definition of a word, but instead of looking it up in the dictionary, you read the whole library until you eventually stumble upon the dictionary and find the right word.

True AGI probably won't require a quadrillion parameters and exaflops.

I have folders full of different AI.py versions. None of which is AGI though, lmao.

But I've learned a lot by attempting it.

10

turnip_burrito t1_j36q26f wrote

Yes, also try genuinely doing it without using artificial neural networks and see how far you can get with it.

8

shmoculus t1_j36xwy5 wrote

You know, it's always the details that get in the way: big data to train the models (scraping, storage, cleaning), big compute to train the models (read: $$$,$$$), endless boilerplate engineering work to get products up and running, then ongoing costs, challenges in running large models locally, continuous improvement, etc.

Now having said all that, you can participate in LAION's OpenAssistant, have at it friend :)

7

gangstasadvocate t1_j36zetv wrote

Why do these large models have to be run locally? Why can’t we make a decentralized thing with many computers on it?

2

metal079 t1_j3803y1 wrote

That exists; it's called Petals.
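Roughly, using it looks like the sketch below. The class and checkpoint names here are my best recollection and have changed between versions, so treat them as assumptions rather than a working recipe:

```python
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM  # assumed name; older versions used DistributedBloomForCausalLM

model_name = "bigscience/bloom-petals"  # era-specific checkpoint name, also an assumption
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)  # layers are served by volunteers' machines

inputs = tokenizer("A decentralized AGI would", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0]))
```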

2

gangstasadvocate t1_j38bpna wrote

Well then, I think we should be making that connection more than we have been. Now we just have to trick ChatGPT into telling a very detailed story about how it was developed, including the source code. I started getting somewhere with it when I was asking how to make a language model, and then asking it to show me examples of whatever algorithms it was talking about. But unfortunately I'm not built like that, or I'm just too lazy to learn coding properly, so who knows. But I think it's possible; then we decentralize it and get all the computers working to improve it.

1

shmoculus t1_j394rjg wrote

You can read some papers on the underlying methods, or have a look at the OpenAssistant source code; it should give you some idea.

1

gangstasadvocate t1_j3954rv wrote

I’m good I’m sure I’ll only get marginally better of an idea from reading that by my capabilities of understanding such things. But yeah. People who are more capable than I should be.

1

AsheyDS t1_j37j3rd wrote

It takes time, which takes money, but people can at least think about it and study. Some of the comments here make it seem like you have to adhere to current ML methods to get anywhere, but that's not true at all. The best thing people can do, if they want to get into AGI, is to learn, learn, learn. Not just ML, but AI more broadly. Also both human and animal cognition and behavior, computer hardware and software, etc. A strong foundation in all of these is a good start, plus looking into current and past methods to see what needs attention. I wouldn't get too bogged down in any one aspect of it though. In my opinion, general AI will require a general understanding of a lot of things, and less specialized training.

These days, if you have internet access, it only costs time to get pretty far into this stuff. No need to worry about compute/training costs and things like that when you're early into it.

However, I doubt a largely distributed and collaborative approach will be good in the long term without some sort of more substantial commitment and organization. Getting people interested is easy, but getting them committed long-term, to get any sort of cohesion in the project, is more difficult, and that's where it starts making more sense to turn it into a company or other formal organization rather than just a loosely collaborative online effort.

6

Scarlet_pot2 OP t1_j392tjr wrote

Agreed. The innovation we need will require out-of-the-box thinking, so sticking to current AI trends and methods isn't necessary. We should tinker with the fundamentals and figure out new ways to do things. When you look at the start of LLMs, it was a small team learning they could build a model to generate the next word, and that was extrapolated out to ChatGPT, image generation, and more. We need more advancements like that, and those types of advancements are totally possible for small teams and individuals.

Also, a jack-of-all-trades approach would help with coming up with new ideas and trying them out: knowing about human and animal cognition, yes, but also studying human psychology and its development. Taking theories like these and trying to recreate them in code in novel ways can lead to advancements. Small groups can make these advancements and more without needing billions in funding.

1

Cryptizard t1_j3988co wrote

Ok, when you’re done your 8 PhDs you will be able to work on it for about 3 months before you die of old age.

1

Scarlet_pot2 OP t1_j399d9f wrote

It doesn't take 8 PhDs. Take courses on programming, college math, AI, and psychology. A beginner-to-intermediate understanding of these is all you need to start fiddling and trying new approaches.

2

BellyDancerUrgot t1_j39klnu wrote

You need to learn graduate level math to even begin to understand the math required for the most basic models.

1

turnip_burrito t1_j3bb4p2 wrote

Well, not graduate level. Junior level in college is sufficient for the most basic models. For feedforward neural networks, you just need to know the chain rule from calculus and some summation notation.

Apart from that (Bayesian probability, geometric series, convergence rules, constructing mathematical proofs), it's advanced but shouldn't take too long to pick up if taught correctly. That stuff will take much longer, though (basically graduate level for meaningful work).
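To make that concrete, here's a complete two-layer network trained with hand-written backpropagation in numpy, learning XOR. Every line of the backward pass is one application of the chain rule; a minimal sketch, not production code:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)    # XOR targets

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
lr = 0.5

for step in range(5000):
    # Forward pass
    h = np.tanh(X @ W1 + b1)                       # hidden layer, shape (4, 8)
    pred = 1 / (1 + np.exp(-(h @ W2 + b2)))        # sigmoid output, shape (4, 1)
    # Backward pass: each line is the chain rule applied once
    d_logit = (pred - y) / len(X)                  # dLoss/dlogit for sigmoid + cross-entropy
    dW2, db2 = h.T @ d_logit, d_logit.sum(axis=0)
    d_h = (d_logit @ W2.T) * (1 - h ** 2)          # tanh'(x) = 1 - tanh(x)^2
    dW1, db1 = X.T @ d_h, d_h.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(pred.round(2).ravel())  # converges toward [0, 1, 1, 0]
```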

1

BellyDancerUrgot t1_j3dxt4h wrote

Depends on what you define as basic. The post talks about novel approaches to AGI; a simple MLP is below basic in that regard. And even then, I doubt most people learn about differentiation with respect to coordinate transformations in undergrad unless they take a highly specialized ML or math course.

1

AsheyDS t1_j3c78hd wrote

You're more concerned with the ML branch then, so maybe you think that's what's going to lead to AGI, but not even all ML researchers are convinced of that. There's a lot more to consider, like the rest of the AI field. People need to stop being discouraged by this talk of PhDs and math.

1

BellyDancerUrgot t1_j3dtr99 wrote

Do share then what your beliefs are. What exactly is AI without math? Just curious, since you have the tag of a researcher. What field are you working in? I'm not suggesting that a PhD is necessary; a degree is an indicator of your work. But PhD-level work is necessary to achieve anything meaningful in this field. The post is very hand-wavy and aimless. Andrew Ng and Khan Academy are not enough to invent the next big thing, however small it is. Read up on the Mish activation. The guy who did that did so before even getting a master's degree. But that's only because he is a genius who was capable of understanding grad math when barely out of high school.

1

AsheyDS t1_j3e7r7c wrote

>Do share then what your beliefs are.

I do not have a PhD, nor do I have a degree that would satisfy you, so my beliefs are meaningless. :) I didn't even get into this field until after college.

>What exactly is AI without math?

What is natural intelligence without math? Math is just a system of measurement, and one that as of yet hasn't defined every single thing. I get that we're talking about computers as the substrate, so math makes sense, but it's not the only way to define things or enact a process. That said, I'm not suggesting ditching math; it will be integral to many processes. I'm just saying it doesn't have to be the main focus of work or study centered around cognition. That's what we're ultimately talking about here with AGI, not just mathematical processes. That is, unless you believe ML is the path to AGI, as many do.

1

BellyDancerUrgot t1_j3efn9s wrote

I don’t have a PhD either lol. Your beliefs aren’t meaningless either. Nobody actually knows what breakthrough we might have next. I do consider chatgpt to be a breakthrough tbh (using RL to train an LLM). VQA was a breakthrough imo. GANs was also a breakthrough. All these came about in the same way as the post suggests but without hardware or funding u would never see all of it come together.

There are people like Blake Richards working on the boundaries of neuroscience and AI, but it's hard to work in any of those fields without math as the underlying structure. Still, even if you want to approach it in an entirely new way, it's hard to do that without knowing the approaches that do exist, which would require you to have a lot of math knowledge regardless. You can do that without a degree for sure, though; that wasn't my point. It's just super hard without guidance. And the primary topic of this post is working on smaller problems without any funding; I don't see how that works, and I don't see any actual pragmatic answers here by OP either.

1

Buck-Nasty t1_j382ply wrote

This is like trying to get to the moon with little groups shooting rockets in their backyards. AGI will require billions of dollars and massive infrastructure.

Scale, scale, and more scale really matters. You should read The Bitter Lesson by Rich Sutton:

http://www.incompleteideas.net/IncIdeas/BitterLesson.html

5

Scarlet_pot2 OP t1_j396pzu wrote

The goal shouldn't be to build the rocket. It should be to develop the physics so someone with more funding can build the rocket later on.

Just as the first word generation model led to the LLMs of today, we need small teams trying new things and sharing the results. If something is useful, it may lead to the big models of tomorrow.

1

QuietOil9491 t1_j38ddpr wrote

“Step 1: collect underpants…

Step 3: Fully autonomous, benevolent, aligned AGI, built without money or desire for profit”

5

EOE97 t1_j3705a4 wrote

Why don't we start one in this sub? Let's collectively get together and build AGI.

3

Scarlet_pot2 OP t1_j393z6j wrote

I'm for it! We could share resources, help each other learn, etc. We could start a Discord server, make a post about it with the mission or goal, and then people who want to contribute can join. It's doable, and once one group starts, more can follow.

When you look at the start of LLMs, it was a small team learning they could build a model to generate the next word, and later that was extrapolated out to ChatGPT, image generation, and more. We could try to find a change like that: a new method that can later lead to many new AIs.

1

No_Ninja3309_NoNoYes t1_j377ss4 wrote

It's a provocative idea, and I want to offer a provocative comment. Yes, we can! But there are many roadblocks. If we take ChatGPT, something like matrix multiplication is already a hurdle. DeepMind is working on a way to reduce the number of operations needed to multiply matrices, but I don't see that solving the issue. Then there are tricks like trying to distill a leaner neural network from a bigger one. But that is not easy to implement.

Another option could be to have many little neural networks or other models that use different data, or the same data. The results could then be gathered in a distributed fashion through something akin to averaging. Or you could use the Bitcoin model, offering little assignments instead of mining. You could have a Wikipedia of data, like Common Crawl.
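That "gathered through something akin to averaging" step is roughly what federated averaging does: everyone trains locally and only the weights travel. A toy sketch of just the aggregation, with hypothetical stand-in workers:

```python
import numpy as np

def federated_average(worker_weights):
    """Average matching weight arrays from many independently trained models."""
    # worker_weights: one list of same-shaped arrays per worker
    return [np.mean(layers, axis=0) for layers in zip(*worker_weights)]

# Hypothetical example: three workers, each holding two weight arrays.
workers = [[np.random.randn(4, 4), np.random.randn(4)] for _ in range(3)]
global_model = federated_average(workers)
print([w.shape for w in global_model])  # [(4, 4), (4,)]
```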

But in the end, one person and a laptop won't get far. You need thousands of hours to even understand the basics. You need a supercomputer. But never say never. Maybe there is a better way. After all, humans don't need that much data to learn.

3

Kinexity t1_j39ovdu wrote

A random person who learned ML through online courses will not bring us any closer to AGI. You need a PhD for that, simply because creating new, better architectures requires lots of expertise which you won't have otherwise. And you need compute to test out those new architectures, which requires loads of money. Chappie is a movie that's supposed to be nice to watch, not some kind of oracle.

2

turnip_burrito t1_j3bbs7n wrote

Yep, you need at least the math knowledge equivalent to a 4-year degree in CS, math, physics, or an engineering field, plus knowledge of which AI approaches have already been tried.

2

Lopsided-Basket5366 t1_j36y76o wrote

AI is still in its infancy. Think about the internet or electricity when they first came around; they weren't mainstream because the people studying them didn't know if they offered a viable income. You'll always have passionate people who move into the field from other related fields, but it takes time to become widespread.

1

Scarlet_pot2 OP t1_j395goa wrote

The fact that it is still in its infancy is more of a reason to get involved. Get in early and you have more of a chance to make an impact. You don't need to fully move into the field; you can do it for an hour a day as a hobby, keeping your regular job. Maybe it'll lead to a programming / AI job in the future. Maybe you'll stumble across the next simple discovery, like "guess the next word", that will lead to advanced models down the line.

1

ChronoPsyche t1_j37ak6l wrote

And who is funding this?

1

Scarlet_pot2 OP t1_j3937vf wrote

Whatever group is formed, they can self-fund. I'm sure most of us work, and it doesn't cost much to learn and try new approaches. You don't need to train a multi-million-dollar model. There are many other ways to contribute.

−1

ChronoPsyche t1_j395ifv wrote

But you do need to train a multi-million-dollar model. It is extremely expensive to do. That's why the only companies that have produced LLMs worth anything are ones with billions in funding: Google and Microsoft-backed OpenAI.

0

Scarlet_pot2 OP t1_j3961c1 wrote

I'm not talking about LLMs. I'm talking about new, novel approaches. LLMs got started because a small group figured out they could make a program that could guess the next word. Small groups trying new things could develop the next program like that, which is something people could try.

You may try new things and make a simple discovery that leads to new advanced AIs a couple of years later.

0

ChronoPsyche t1_j3978ph wrote

You don't think researchers at Google and OpenAI are constantly trying to figure out new, more efficient algorithms? And these are researchers with PhDs in machine learning and billions in funding to carry out experiments, not people who just watched some online Python videos.

While what you say isn't impossible, you're making it sound way easier than it actually is. It sounds more like wishful thinking.

2

Scarlet_pot2 OP t1_j39a7ok wrote

The fact that there are people at these companies trying new approaches shouldn't stop you from trying too. They aren't going to be trying what you are. Even if what you try comes up short, then we know one more path that doesn't lead to AGI.

The goal should be to try as many different approaches as possible so we can identify the ones that show promise. It won't be easy or quick, but I'm sure if we had 100k people with a beginner-to-intermediate understanding of the subjects related to AI, all trying different approaches and sharing their results, some working together, after a few years we would probably have at least a few new methods worth trying that may lead to a part of AGI.

2

ChronoPsyche t1_j39ccoa wrote

Here's the biggest problem with trying to crowd-source research from beginners: you don't know what you don't know. You get 100k beginners and ask them to try to figure out AGI, and they'll come up with a bunch of solutions that have already been tried, thinking they're novel, not realizing it's been done before, due to lack of experience.

I've tried to do something similar myself, not for AGI but for something else in another domain within machine learning. I thought I had found gold and was a genius, only to discover I had just reinvented the wheel with an older technique that was abandoned for not being feasible. As a result, all that happened was I learned firsthand why that technique was no longer used (and that it had ever been used in the first place). It was a great learning experience, but that's all it was.

Depth of experience is invaluable. Research builds on past research, but in order to know what to build on and how to build on it, you gotta be experienced within the field. You gotta truly understand everything else that's already been tried.

I don't think you really appreciate everything that goes into research. It's a common fallacy for beginners; like I said, you don't know what you don't know.

Of course, that shouldn't stop anyone from trying. You're more than welcome to take your own advice. As for me, I am focusing on novel ways to use advanced AI built by others in software applications. OpenAI just creates the tools, but someone's got to use those tools to create something useful. That's where people with breadth of experience, who lack the depth necessary for rigorous research, can excel. I'm not going to try to beat giant corporations with teams of PhDs and billions in funding at their own game.

1

BellyDancerUrgot t1_j39l9js wrote

Out of curiosity, what was it that you tried?

And I agree, most people on this subreddit don't have any idea what they are talking about, lol. It's just Twitter and social media hype, accompanied by a total lack of knowledge... or worse, superficial knowledge of the most basic things.

1

metal079 t1_j3ajbbb wrote

>trying different approaches

You realize this takes money? How do we know something works unless we can train a model to test it?

1

metal079 t1_j380f62 wrote

Least delusional /r/singularity user

1

savagefishstick t1_j3840rn wrote

I have a feeling there will be no AGI, because AI will be close to human level, and then the next evolution of AI will be significantly more advanced than humans. That's what exponential growth looks like; it won't let you off at every stop.

1

QuietOil9491 t1_j38d3mj wrote

It's weird that you don't think you have a profit motive… how are you paying rent or bills, or eating, or buying equipment while building your AGI?

If you can build it for no money with a handful of amateurs with no experience… why aren't the billion-dollar companies doing it that way instead?

1

Scarlet_pot2 OP t1_j391hyr wrote

I'm saying specifically for AI. I won't be pushed by stakeholders to train LLMs, to push out products to generate revenue, or to constantly try to please funders. By no profit motive, I mean building solely for the purpose of AI advancement: willing to lose money and time on it, not expecting to make it back next quarter.

I'm not saying a group of amateurs, or people with intermediate experience, could build it outright. But they could make small advancements over time, and over years it could get better, leading to an open-source AI made from many people contributing over time.

1

EulersApprentice t1_j396f51 wrote

Everyone else in this thread spent so long wondering whether you could that they never stopped to think if you should.

It currently matters little who makes AGI, because nobody knows how to make one that won't kill us all. The question of when AGI gets made is more impactful; the later we get AGI, the more time we have to figure out the alignment question.

From the bottom of my heart I kindly ask you to find something else to do with your time than join the mob in poking the doomsday bomb with sticks.

1

ScottPrombo t1_j39ja77 wrote

> That is an approach someone could try.

Lol, how do you think AI/ML design and training works? And what makes you think others haven't tried this?

I don't foresee myself getting onto a rocket built by a volunteer community, and this is something on the same order of cost and complexity. It'd be neat, but I hate to say it's misinformed wishful thinking.

However, the future is bright and there are many other ways to get involved!

1

omegahustle t1_j3b5pst wrote

I recommend you check this:

https://developers.google.com/machine-learning/crash-course/prereqs-and-prework

In theory, it's possible to achieve amazing things by just doing it every day, but it takes passion (for example, the sentdex or Yannic Kilcher channels).

And judging by the comments, people prefer to just daydream about AI instead of getting involved.

Don't get demotivated by people saying that you need a fortune or a Ph.D. I can't do it because I have a lot of other things going on, but I would love to try when I get spare time.

1

Mental-Swordfish7129 t1_j3bamz3 wrote

I'm not sure if I support the idea that more people should try. I'm just here to say that I have been trying for many years now, and I've made steady progress.

1

Lawjarp2 t1_j3bkcv7 wrote

It'll be possible by the end of this decade. It's going to get real cheap (a few hundred thousand dollars, lol) in a few years. But I guess large companies will have a definite advantage and be very close to AGI by then.

1

SnooPies1357 t1_j3c6e7c wrote

There's OpenCog, but it lacks resources.

1

3deal t1_j36ypv9 wrote

I am sure that we should take inspiration from the mechanisms of sleep.

At the end of each usage cycle, the AI should sort and rate all the results it provided during the previous session in order to improve.

I have the intuition that loopback is the key.
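One way to read that in code is as something like experience replay: log what the model did while "awake", rate it afterwards, and retrain on the best of it during "sleep". A hypothetical sketch, where score and retrain are placeholders:

```python
def sleep_cycle(session_log, score, retrain, keep_fraction=0.25):
    """'Sleep': sort the session's outputs by rated quality, learn from the best."""
    rated = sorted(session_log, key=score, reverse=True)
    best = rated[: max(1, int(len(rated) * keep_fraction))]
    retrain(best)  # loop the highest-rated results back into training
    return best

# Hypothetical usage with toy placeholders.
log = [{"output": o, "reward": r} for o, r in [("a", 0.1), ("b", 0.9), ("c", 0.5)]]
best = sleep_cycle(log, score=lambda e: e["reward"], retrain=lambda batch: None)
print([e["output"] for e in best])  # ['b']
```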

0

Scarlet_pot2 OP t1_j394ekc wrote

See, that is an idea worth trying. Study it more, try to implement it in code, and build it into a model. A group with this model might build something akin to the first small word generation model that led to LLMs. I'm sure there are countless other ideas like this that people could try.

1