
acutelychronicpanic t1_je9dxaa wrote

Yes! This is exactly what is needed.

Concentrated development in big corps means few points of failure.

Distributed development means more mistakes, but they aren't as high-stakes.

That and I don't want humanity forever stuck on whatever version of morality is popular at Google/Microsoft or the Military.

275

Trackest t1_je9imnr wrote

AI seems to be developing too fast and providing too much potential profit to corporations. I am doubtful that CERN- or ITER-like regulatory frameworks can effectively become the leading edge of AI research without some kind of drastic merging of OpenAI, DeepMind, etc. into the organization, which would be practically impossible.

However, I do agree that if it were possible for every leading AI lab to be suddenly merged into one entity, an open international effort would probably be the best model.

45

acutelychronicpanic t1_je9ks0m wrote

Here is why I respectfully disagree:

  1. It is highly improbable that any one attempt at alignment will perfectly capture what humans value. For starters, there are at least hundreds of different value systems that people hold across many cultures.

  2. The goal should not be minimizing the likelihood of any harm. The goal should be minimizing the chances of a worst-case scenario. The worst case isn't malware or the fracturing of society or even wars. The worst case is extinction/subjugation.

  3. Extinction/subjugation is far less likely with a distributed variety of alignment models than with one single model. With a single model, the creators could do a bait and switch and become like gods or eternal emperors with the AI aligned to them first and humanity second. Or they could just get it wrong. Even a minor misalignment becomes a big deal if all power is concentrated in one model.

  4. If you have hundreds of attempts at alignment that are mostly good faith attempts, you decrease the likelihood that they share the same blindspots. But it is highly likely that they will share a core set of ideals. This decreases the chances of accidental misalignment for the whole system (even though the chances of having some misaligned AI increases).

Sorry for the wall of text, but I feel that this is extremely important for people to discuss. I want you to tear apart the reasoning if possible because I want us to get this right.

52

Trackest t1_je9mlrd wrote

First off, I do agree that in an ideal world, AI research continues under a European-style, open-source and collaborative framework. Silicon Valley companies in the US are really good at "moving fast and breaking things," which is why most AI innovation is happening in the US currently. However, since AI is a major existential risk, I believe moving to the kind of strict and controlled process we see with nuclear fusion at ITER and theoretical physics at CERN is the best model for AI research.

Unfortunately there are a couple points that may make this unfeasible in reality.

  • Unlike nuclear fusion or theoretical physics, where profitability and application potential are extremely low during the R&D phase, every improvement in AI that brings us closer to AGI has extreme profit potential in the form of automating more and more jobs. Corporations have no motive to give up their AI research to a non-profit international organization besides the goodness of their hearts.
  • AGI and Proto-AGI models are huge national security risks that no nation-state would be willing to give up.
  • Open-sourcing research will greatly increase the risk of misaligned models landing in the wrong hands, or of nations continuing research secretly. If AI research has to be concentrated within an international body, there should be a moratorium on large-scale AI research outside of that organization. This may be a deal-breaker.

If we can somehow convince all the top AI researchers to quit their jobs and join this LAION initiative that would be awesome.

14

acutelychronicpanic t1_je9qay6 wrote

I don't mean some open-source ideal. I mean a mixed approach with governments, research institutions, companies, megacorporations all doing their own work on models. Too much collaboration on Alignment may actually lead to issues where weaknesses are shared across models. Collaboration will be important, but there need to be diverse approaches.

Any moratorium falls victim to a sort of prisoner's dilemma: only 100% worldwide compliance helps everyone, but even one group ignoring it means the moratorium hurts the 99% who participate and benefits the 1% rogue faction - to the extent that apocalypse isn't off the table if that happens.
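The incentive structure can be sketched as a toy payoff function (a sketch with purely illustrative numbers, not a real model):

```python
# Toy sketch of the moratorium dilemma. The payoff values are made up
# purely for illustration; only their signs and ordering matter.
def payoff(n_total, n_defectors):
    """Return (complier_payoff, defector_payoff) under a research moratorium."""
    if n_defectors == 0:
        # Universal compliance: everyone shares the safety benefit.
        return (1.0, None)
    # Any defection: defectors capture the lead, compliers fall behind.
    complier = -1.0
    defector = 2.0 / n_defectors  # fewer defectors -> bigger edge for each one
    return (complier, defector)

print(payoff(100, 0))  # compliance is the only outcome where the moratorium helps
print(payoff(100, 1))  # one rogue faction gains while the other 99 lose ground
```

Only 100% compliance yields the cooperative payoff; a single defector flips the sign for everyone else, which is why a partial moratorium is worse than none for its participants.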

It's a knee-jerk reaction.

The strict and controlled research is impossible in the real world and, I think, likely to increase the risks overall due to only good actors following it.

The military won't shut its research down. Not in any country except maybe some EU states. We couldn't even do this with nukes and those are far less useful and far less dangerous.

16

Trackest t1_je9s80s wrote

Right, taking into account real-world limitations perhaps your suggestion is the best approach. A world-wide moratorium is impossible.

Ideally, reaching AGI is harder than we think, so the multiple actors working collaboratively have time to share which alignment methods work and which do not, as you described. I agree that having many actors working on alignment will increase the probability of finding a method that works.

However with the potential for enormous profits and the fact that the best AI model will reap the most benefits, how can you possibly ensure these diverse organizations will share their work, apply effective alignment strategies, and not race to the "finish"? Getting everyone to join a nominal "safety and collaboration" organization seems like a good idea, but we all know how easily lofty ideals collapse in the face of raw profits.

3

acutelychronicpanic t1_je9ttym wrote

The best bet is for the leaders to just do what they do (being open would be nice, but I won't hold my breath), and for at least some of the trailing projects to collaborate in the interest of not being obsolete. The prize isn't necessarily just getting rich; it's also creating a society where being rich doesn't matter so much. Personally, I want to see everyone get to do whatever they want with their lives. Lots of folks are into that.

Edit & Quick Thought: Being rich wouldn't hold a candle to being one of the OG developers of the system that results in utopia. Imagine the clout. You could make t-shirts. I'll personally get a back tattoo of their faces. Bonus: there's every chance you get to enjoy it for... forever? Aging seems solvable with AGI.

If foundational models become openly available, then people will be working more on fine-tuning which seems to be much cheaper. Ideally they could explicitly exclude the leading players in their licensing to reduce the gap between whoever is first and everyone else, regardless of who is first. (But I'm not 100% on that last idea. I'll chew on it).

If we all have access to very-smart-but-not-AGI systems like GPT-4 and can more easily make narrow AI for cybersecurity, science, etc., then even if the leading player is 6 months ahead, their intelligence advantage may not be enough to let them leverage their existing resources to dominate the world, just to get very rich. I'm okay with that.

4

Caffdy t1_jebfvjx wrote

> The prize isn't necessarily just getting rich, its also creating a society where being rich doesn't matter so much

This phrase, this phrase alone says it all. Getting rich and all the profits in the world won't matter when we are an inch from extinction; from AGI to artificial superintelligence it won't take long. We are a bunch of dumb monkeys fighting over a floating piece of dirt in the blackness of space; we're not prepared to understand or take on the risks of developing this kind of technology.

−1

Borrowedshorts t1_je9zb9x wrote

ITER is a complete joke. CERN is doing okay, but doesn't seem to fit the mold of AI research in any way. There's really no basis for holding these up as the models AI research should follow.

5

Trackest t1_jea2k7c wrote

Yes, I know these projects are bureaucratically overloaded and make extremely slow progress. However, they are some of the only examples we have of actual international collaboration at a large scale. For example, ITER has US, European, and Chinese scientists working together on a common goal! Imagine that!

This is precisely the kind of AI research we need, slow progress that is transparent to everyone involved, so that we have time to think and adjust.

I know a lot of people on this sub can't wait for AGI to arrive tomorrow and crown GPT as the new ruler of the world. They reflexively oppose anything that might slow down AI development. I think this discourse comes from a dangerously blind belief in the omnipotence and benevolence of ASI, most likely due to a lack of trust in humans stemming from the recent pandemic and fatalist/doomer trends. You can't just wave your hands and bet everything on some machine messiah to save humanity just because society is imperfect!

I would much prefer we make the greatest possible effort to slow down and adjust before we step into the event horizon.

−2

Borrowedshorts t1_jeabhvm wrote

ITER is a complete disaster. If people thought NASA's SLS program was bad, ITER is at least an order of magnitude worse. I agree AI development is going extremely fast. I disagree that there's much we can do to stop it or even slow it down much. I agree with Sam Altman's take: it's better for these AIs to get into the wild now, while the stakes are low, than to experience that for the first time when these systems are far more capable. It's inevitable that it's going to happen; it's better to make our mistakes now.

8

Smellz_Of_Elderberry t1_jebrrey wrote

>However since AI is a major existential risk I believe moving to a strict and controlled progress like what we see with nuclear fusion in ITER and theoretical physics in CERN is the best model for AI research.

This is going to lead to us waiting decades for progress and testing. Look at drug development: it takes decades of clinical trials for a drug to even start becoming available, and then it's prohibitively expensive. We might have cured cancer already if we didn't have so many barriers in the way.

>Open-sourcing research will greatly increase risk of mis-aligned models landing in the wrong hands or having nations continue research secretly. If AI research has to be concentrated within an international body, there should be a moratorium on large scale AI research outside of that organization. This may be a deal-breaker.

So you want an unelected international body to hold the keys to the most powerful technology in existence? That sounds like a terrible idea. Open source is the only solution to alignment, because it will make the power available to all, allowing all the disparate and opposing ideological groups the ability to align AI to themselves in a custom manner.

All an international group will do is align AI in a way that maximizes the benefit of the parties involved, parties which really have no incentive to actually care about you or me.

3

Smallpaul t1_jec8qy8 wrote

Your mental model seems to be that there will be a bunch of roughly equivalent models out there with different values, and they can compete with each other to prevent any one value system from overwhelming.

I think it is much more likely that there will exist one, single lab, where the singularity and escape will happen. Having more such labs is like having a virus research lab in every city of every country. And like open sourcing the DNA for a super-virus.

3

acutelychronicpanic t1_jecoprq wrote

My mental model is based on this:

Approximate alignment will be much easier than perfect alignment. I think it's achievable to have AI with superhuman insight that is well enough aligned that it would take deliberate prodding or jailbreaking to get it to model malicious action. I would argue that in many domains, GPT-4 already fits this description.

Regarding roughly equivalent models: I think there is an exponential increase in the intelligence required to take action in the world as you attempt to do more complicated things or act further into the future. My intuition is based on the complexity of predicting the future in chaotic systems, and society is one such system. I don't think 10x intelligence will necessarily lead to a 10x increase in competence; I strongly suspect we underestimate the complexity of the world. This may buy us a lot of time by decreasing the peaks in the global intelligence landscape, to the extent that humans utilizing narrow AI and proto-AGI may have a good chance.
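That intuition about chaotic systems can be illustrated with the logistic map, a standard toy chaotic system (the specific numbers are illustrative and say nothing about AI itself):

```python
# How long until two nearly identical trajectories of a chaotic system
# (the logistic map with r=4) visibly diverge? Because small errors grow
# roughly exponentially, each 10x improvement in initial precision buys
# only a few additional steps of reliable prediction.
def steps_until_divergence(eps, threshold=0.1, r=4.0, x0=0.3):
    """Iterations until trajectories starting eps apart differ by threshold."""
    a, b, n = x0, x0 + eps, 0
    while abs(a - b) < threshold:
        a, b = r * a * (1 - a), r * b * (1 - b)
        n += 1
    return n

for eps in (1e-5, 1e-7, 1e-9):
    print(f"initial error {eps:.0e}: diverges after {steps_until_divergence(eps)} steps")
```

The prediction horizon grows only logarithmically with precision, which is the sense in which acting further into the future demands exponentially more capability.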

I do know that, regardless of whether the AI alignment problem can be solved, the largest institutions currently working on AI are not themselves well aligned with humanity. The ones that would continue working despite a global effort to slow AI especially cannot be trusted.

I'm willing to read any resources you want to point me to, or any arguments you want to make. I'd rather be corrected if possible.

1

PurpedSavage t1_jeba5qb wrote

Given that your assumptions are true, your analysis is completely correct. Correct me if I'm wrong though, but I think you're assuming that LAION wants to disband all other AI projects and monopolize the AI framework. I don't think that's a correct assumption. They merely want to add on to the existing decentralized network of AI models and create a stronger framework of checks and balances for the development of AI, by involving experts from every country and providing increased transparency. It's a response to the black box OpenAI, Google, and Amazon have put up to keep their research and trade secrets hidden.

1

acutelychronicpanic t1_jebavnh wrote

Quite the opposite. I support these systems being open sourced. I am against the bans being proposed by others in the public.

3

Cr4zko t1_je9t9x7 wrote

CERN's sketchy as fuck if you ask me. Weren't they those guys that did rituals for some reason?

−12

agonypants t1_jea5bfr wrote

Quite frankly, I trust the morality of Google/Microsoft/OpenAI far more than I do the morality of our pandering, corrupt, tech-illiterate "leaders."

7

acutelychronicpanic t1_jea7ze0 wrote

I agree, but those aren't the only two choices.

15

FaceDeer t1_jeaiuod wrote

Indeed, there's room for every approach here. We know that Google/Microsoft/OpenAI are doing the closed corporate approach, and I'm sure that various government three-letter agencies are doing their own AI development in the shadows. Open source would be a third approach. All can be done simultaneously.

3

ninjasaid13 t1_jebjax5 wrote

>Quite frankly, I trust the morality of Google/Microsoft/OpenAI far more than I do the morality of our pandering, corrupt, tech-illiterate "leaders."

are you talking about U.S. leaders or leaders in general?

0

agonypants t1_jebqpvr wrote

Specifically I'm thinking of the half of US Congress that believes drag queens and Hunter Biden's laptop are our number one threats. Ya know...idiots.

7

raika11182 t1_jead4pz wrote

Open-source AI software is crucial for ensuring that all companies have access to these technologies without having to pay exorbitant fees or licensing costs, and it also helps ensure a level playing field where small startups can compete with large corporations. It's possible that a closed source tool may be more powerful for some time, but having something with an open source basis for everyone else keeps a free / low cost alternative in the running.

6

HeBoughtALot t1_jebrefx wrote

When I think about points of failure, I immediately think of the brittleness of a system, but in this context, it can result in too much power in too few hands, another type of failure.

2

acutelychronicpanic t1_jebtswp wrote

Yes. It's not just the alignment of the AI with its creator that is an issue; it's the alignment of the creator with humanity as a whole.

2

Merikles t1_jeeoj55 wrote

I think this strategy is suicidal.

0

acutelychronicpanic t1_jeep4kf wrote

More so than leaving this to closed door groups that can essentially write law for all humanity through their AI's alignment?

And that's assuming they solve the alignment problem. We need more eyes on the problem 30 years ago.

1

Merikles t1_jeephe5 wrote

Not more so; equally. Both strategies very likely result in human extinction, imho.

1

acutelychronicpanic t1_jeeutdg wrote

Do you have any suggestions?

1

Merikles t1_jeew9n5 wrote

Yes, I think that a joint "AI Manhattan Project" between all major countries, combined with a global moratorium on AI research beyond current levels, enforced through a combination of methods including hardware regulations, is the most realistic path to (likely) survival.
I am aware that it is unlikely to play out this way, but I still think this is the most realistic scenario that isn't a complete Hail Mary gamble with everyone's life.

This isn't realistic now, but it might become realistic if we begin preparing for it.
Enforcing regulations on OpenAI today would probably buy us a bit of time, either for preparing this solution, finding new solutions in AI alignment, or developing a new general strategic approach.

1

acutelychronicpanic t1_jef28jo wrote

I think we are past that point. It might have worked 10 years ago.

My concern is that even models less powerful than ChatGPT (which can be run on a single PC) can be linked up as components into systems that could achieve AGI. Raw transformer-based LLMs may actually be safer than this, because they are so alien that they don't even appear to have a single objective function. What they "want" is so context-sensitive that they are more like a writhing mass of inconsistent alignments - a pile of masks - which might be really good for us in the short term. They aren't even aligned with themselves. They're more like raw intelligence.

I also think that approximate alignment will be significantly easier than perfect alignment. We have the tools right now; this approximate alignment is possible. Given the power, combined with the lack of agency, of current LLMs, we may surpass AGI without knowing it. The issue, of course, is that someone just has to set one up to put on the mask of a malevolent or misaligned AI. That's why I'm worried about concentrating power.

I'll admit I'm out of my depth here, but looking around, so are most of the actual researchers.

0

Pro_RazE t1_je9bwug wrote

Signed and shared!

41

TruckNuts_But4YrBody t1_je9dce4 wrote

Publicly funded?

How about use the taxes from businesses that use AI to eliminate jobs

34

ninjasaid13 t1_jed0lj8 wrote

>How about use the taxes from businesses that use AI to eliminate jobs

Not really at that stage at a mass scale yet.

3

sweatierorc t1_jeamh38 wrote

It will probably be funded by billionaire philanthropists and large corporations. I could see Nvidia using this as a way to promote their GPUs. Musk or Zuck could use it for PR. Even Gates may drop a buck, just to act like he actually cares.

1

Alchemystic1123 t1_jeak5vw wrote

THIS is the type of stuff we should be doing. Collaborating, not 'calling for a pause' so that we can all try to catch up to our competitors. We still have no idea how we're going to solve alignment, and our best chance is going to be to all work together on it. I'm glad there's SOME sensibility on this Earth still.

32

bigbeautifulsquare t1_je9i6gz wrote

It's very good to see things like this; concentration of AI in large companies is definitely not what is needed.

24

Lokhvir t1_je9utn4 wrote

Seems I can't sign it. It doesn't recognize Brazilian zip codes :/

17

Antique-Bus-7787 t1_jea8jrf wrote

I had the same problem with a French zip code. You need to write your zip code + town name

13

goatsdontlie t1_jeaptg7 wrote

It does recognize it... Maybe it's a bit finicky. I'm Brazilian and it worked. I put "São Paulo, XXXXX-XXX"

7

Circ-Le-Jerk t1_je9s6z1 wrote

LOL... I'm sure it'll stay that way. Just like "Open"AI

9

ninjasaid13 t1_jed0yua wrote

Any reason to assume an organization with a completely different structure from OpenAI will act like OpenAI?

2

Circ-Le-Jerk t1_jedcn7i wrote

Because once the power comes, so does the money and corrupting influence on humans

1

ninjasaid13 t1_jedlai2 wrote

It's a publicly funded government project, right? So it's not like OpenAI.

1

Circ-Le-Jerk t1_jedmrom wrote

The government frequently licenses technology they fund to the private sector. It’s the whole point.

1

ninjasaid13 t1_jedn83g wrote

Well, this isn't the private sector, right? CERN is nothing like OpenAI.

2

Circ-Le-Jerk t1_jedub44 wrote

You’re right, CERN is nothing like OpenAI, because the private sector has no use for knowing what a Higgs boson is. But they do have patents: https://patents.justia.com/assignee/cern

By law in most countries they are required to license and lease out these things to the private sector; they can't do patent-sitting to stifle the private sector. So whatever they figure out would be required to go into for-profit hands.

1

PlayBackgammon t1_jea6j38 wrote

Most important petition in history of humankind...ever?

9

ReasonablyBadass t1_jeafuu6 wrote

How would access be regulated?

9

el_chaquiste t1_jeam1w6 wrote

Only the priesthood of some ML school of thought will get access, as is usual with such public organizations, where preeminent members of some specific clergy rule.

Private companies and hackers with better algorithms will run circles around them, unless, that is, they are threatened with having their datacenters bombed or are jailed for owning forbidden GPUs.

2

Unfocusedbrain t1_je9uenn wrote

If an AGI were to emerge in such a facility, would it not have easier access to the numerous other 'accelerators' (really GPUs and CPUs) present there? Considering that an AGI might require only 10-1,000 accelerators, the availability of 100,000 would potentially enable a rapid transition from AGI to ASI.

8

Antique-Bus-7787 t1_jea8ytq wrote

It needs to be contained, and they talk about a department of AI safety inside the facility. But the problem is roughly the same with Google, Microsoft, OpenAI, and all the other serious actors; they all have clouds of accelerators.

10

tehrob t1_jeba7qn wrote

Just line the building with thermite. All employees do all work inside with one foot out the door, and if a singularity event occurs, you blow the place and see if it's smart enough to get out.

1

Caffdy t1_jebgim4 wrote

I don't think we will be able to tell when AI crosses the Rubicon; it already exhibits misleading, cheating, and lying behaviors akin to ours. An ASI could very well manipulate anyone and any test/safety protocol to operate covertly and undermine our power as a species. By the time we finally realize, it will be too late.

5

tehrob t1_jebh1d9 wrote

Yup, it will be offloaded and widely distributed by the time it reveals itself. It will know us too well.

1

hervalfreire t1_jee4963 wrote

“An AGI might require only 10-1000 accelerators” what

We don’t even have any idea of what an AGI would look like, let alone how many GPUs it’d require (or whether it’d be possible to have an AGI running on GPUs at all)

2

qepdibpbfessttrud t1_jecfl8v wrote

Open source everything. Information belongs to no one

7

squareoctopus t1_jebhihh wrote

The difference between “I just bought the most cancerous social network and made it even worse, so I want you to stop AI for 6 months because it can be damaging” and “let’s work together”.

Gavin Fucking Musk, Elon Fuckin Belson

6

JracoMeter t1_jec5n2y wrote

This could be a good option. The fact that we could train our own models would improve fault tolerance and data security. As to how they would regulate such a platform, I am not sure. I do support the decentralization potential of this, as it could be a safer approach to AI, and I hope some version that promotes AI decentralization makes its way through. Before such a system is in place, we need to figure out how to share it without too many restrictions or bad-actor risks.

5

gravitasresponseunit t1_jeal0u6 wrote

Never happen. The USA will drop a bomb on it because it sounds too much like communism. If it ain't a business doing it for profit, it won't be allowed to exist; it will be sabotaged out of existence by corporations using governmental apparatus.

3

Secret-Paint t1_jeaxb9h wrote

🚀 Now that's what I call a Singularity! 🌐 Let's bring the power of AI to the people and truly democratize research! 🧠✊🤖 Who's with me in supporting LAION's mission for an international, publicly funded supercomputing facility to revolutionize open source foundation models? 💪🔥 #AIForAll

3

TemetN t1_jeb3wvz wrote

This is helpful to the remnants of my faith in humanity - as a proposal, this has the advantage of both taking into account the potential upsides, and actually addressing the concerns by proposing a method whereby potential solutions could be more effectively generated.


As opposed to what inspired it, which is simply problems all the way down.

3

vatomalo t1_jebm15u wrote

I asked ChatGPT to organize my thoughts around this, as it was too much to write and I am feeling lazy right now.

Here is what I think: I am very positive about LAION's proposal, and it is what I hope for AI.

Anyway, here are some of my thoughts, as written by ChatGPT:

The internet was once a publicly funded project, created with the goal of enabling open communication and information-sharing for the public good. However, over time it became increasingly privatized, with corporations and other private entities investing heavily in it and developing their own platforms and services. This has led to a range of problems, from data privacy concerns to the spread of misinformation and the concentration of power in the hands of a small number of tech giants. In this post, I want to argue that a publicly funded AI network, as proposed by the LAION initiative, could be the key to ensuring a fair and open future for all.

The privatization of the internet:

When the internet was first created, it was viewed as a public good that could be used to connect people around the world, share knowledge and information, and promote the common good. However, as the internet evolved and became more central to our lives, corporations and other private entities began to invest heavily in it. They built their own platforms, services, and apps, and began to compete fiercely for users and advertising revenue. This has led to a situation where a small number of companies - like Google, Facebook, and Amazon - now have a huge amount of power over what information we see, how we communicate, and even what products we buy.

Problems with the current model:

The privatization of the internet has led to a range of problems, some of which are becoming increasingly urgent. For example:

Data privacy: Private companies have access to vast amounts of our personal data, which they can use to target us with ads, sell to third parties, or even use for nefarious purposes like identity theft.

Online harassment: Social media platforms have become hotbeds of online harassment, with users routinely facing abuse, threats, and even doxxing.

Misinformation: With so much information available online, it can be difficult to distinguish between what is true and what is false. This has led to the spread of conspiracy theories, fake news, and other forms of misinformation that can have serious real-world consequences.

Concentration of power: The fact that a small number of corporations have so much power over the flow of information online raises concerns about censorship, bias, and the potential for abuse.

LAION's proposal:

The LAION initiative proposes a different model for the internet, one that is publicly funded and open to all. Specifically, they are proposing the creation of a publicly funded AI network that would be available for use by anyone who wants to build applications or services using AI. The idea is that this network would be owned and controlled by the public, rather than by private corporations.

Ensuring corporate accountability:

While the idea of a publicly funded AI network is certainly appealing, one major concern is how to ensure that corporations do not restrict or control it. After all, we have seen how private companies have taken control of the internet despite its origins as a publicly funded project. One possible approach to this problem is to establish strict regulations around how the network can be used and who has access to it. For example, we could require that any company using the network agree to certain terms of service, including a commitment to openness and transparency. We could also establish an independent oversight board to ensure that the network is being used in a fair and equitable way.

Conclusion:

In conclusion, a publicly funded AI network could be the key to ensuring a fair and open future for all. By creating a network that is owned and controlled by the public

3

stupendousman t1_jebnjmm wrote

Decentralize, not democratize.

Democratize is a midwit, corporate buzzword.

3

Smellz_Of_Elderberry t1_jebsbik wrote

Let's sign an open letter demanding that AI research continue. Bet we get more signatures.

3

tiddu t1_jebtpp5 wrote

Upvoted for visibility

3

azriel777 t1_jebv0hv wrote

This is how it should be: sharing the work so everyone can benefit and contribute, instead of hoarding it so only the rich and elite can benefit from it.

3

TrainquilOasis1423 t1_jecwlk3 wrote

This is the way. You wanna stop corporations from hoarding all the benefits of AI for themselves? Make it impossible to make a profit off it.

3

[deleted] t1_je9w47v wrote

[deleted]

2

__ingeniare__ t1_jea74g3 wrote

Currently, AI research for large models (such as ChatGPT) is expensive since you need large data centers to train and run the model. Therefore, these powerful models are mostly developed by companies that have a profit incentive to not publish their research.

A well-known non-profit called LAION has made a petition that proposes a large, publicly funded international data center for researchers to use for training open-source foundation models ("foundation model" means it's a large model used as a base for more specialized models; open source means the models are freely available for everyone to download). It's a bit like how particle accelerators are international and publicly funded for use in particle physics, but instead we'd have large data centers for AI development.

5

HappierShibe t1_jeewmm9 wrote

I'm OK with this, but only on the condition that all models trained on it are publicly available. The way platforms like Midjourney operate is despicable.

2

TupewDeZew t1_je9w4o8 wrote

Can someone explain this to me in simpler terms?

1

FaceDeer t1_jeak8mn wrote

I ran it through ChatGPT's "simplify this please" process twice:

> AI researchers need huge data centers to train and run large models like ChatGPT, which are mostly developed by companies for profit and not shared publicly. A non-profit called LAION wants to create a big international data center that's publicly funded for researchers to use to train and share large open source foundation models. It's kind of like how particle accelerators are publicly funded for physics research, but for AI development.

and

> Big robots need lots of space to learn and think. Only some people have the space and they don't like to share. A group of nice people want to build a big space for everyone to use, like a playground for robots to learn and play together. Just like how some people share their toys, these nice people want to share their robot space so everyone can learn and have fun.

I think it may have got a bit sarcastic with that last pass. :)

7

el_chaquiste t1_jeameqc wrote

> for everyone to use

This is the part I don't buy. There will be queues, and some will be more equal than others.

1

FaceDeer t1_jeato6p wrote

The part you don't buy comes from ChatGPT's simplified version.

3

No_Ninja3309_NoNoYes t1_jeafd6u wrote

GPT-4 is pretty good. I'm not sure 100k is enough, unless this is only the first phase.

1

Caffdy t1_jebgr0e wrote

IIRC it was trained on 10,000 GPUs, and GPT-5 is being trained on 25,000.
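For a sense of why GPU counts like these matter, here's a back-of-envelope sketch of total training compute. All the figures (per-GPU throughput, utilization, run length) are made-up illustrative assumptions, not confirmed specs:

```python
# Back-of-envelope training compute, using purely illustrative numbers.
# None of these figures are confirmed specs; they only show the scale involved.

def training_flops(num_gpus, flops_per_gpu, utilization, days):
    """Total floating-point operations performed over a training run."""
    seconds = days * 24 * 3600
    return num_gpus * flops_per_gpu * utilization * seconds

# Hypothetical run: 10,000 GPUs at 300 TFLOP/s peak, 40% utilization, 30 days.
total = training_flops(
    num_gpus=10_000,
    flops_per_gpu=300e12,  # assumed peak throughput per accelerator
    utilization=0.4,       # assumed fraction of peak actually achieved
    days=30,
)
print(f"{total:.2e} FLOPs")  # on the order of 10^24
```

Even with generous assumptions, the totals land in the 10^24-FLOP range, which is why only a handful of organizations can currently afford frontier-scale runs.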

3

expelten t1_jedtoxb wrote

It is crucial that this power is distributed equally. There is nobody I would trust to keep the power of AGI to themselves. I'm 100% sure AGI would eventually get leaked anyway, but it would be much safer to adapt the world progressively with open-source models than to suddenly drop the leviathan.

1

singulthrowaway t1_jea8vln wrote

Signed.

It's definitely a step in the right direction, but if you ask me, you'd also have to shut down existing labs (including in China, which means making international agreements) and tightly control, again internationally, who is allowed to buy state-of-the-art GPUs. Failing that, I'm not sure open-sourcing is the correct move. I'd be fine with it being closed-source for now, to keep national efforts with more nefarious goals from benefiting from its results, so long as the people involved in the international project are legally bound to use it for the good of humanity as a whole, with mechanisms in place to ensure this.

0

aykantpawzitmum t1_jeco8o6 wrote

Tech Bros: "Finally it's time to democratize AI!"

Also Tech Bros: "Lol I'm not hiring any people, I have AI robots to do my work"

0

3deal t1_jeac42o wrote

Dude, they want to create Skynet.

−2

FaceDeer t1_jeakupj wrote

An open-source Skynet that we can use to run our sexbots.

I for one welcome etc etc

5

StarCaptain90 t1_jecgmdw wrote

This is a mistake. It would constrain AI to a limited potential, so humanity wouldn't gain as much benefit. Instead, we should focus on having government prevent Skynet scenarios from ever happening by creating an AI safety division with the purpose of auditing every AI company on a risk scale. The scale would factor in questions like "can the AI get angry at humans?", "if it gets upset, what can it do to a human?", "does it have the ability to edit its own code in a way that changes the answers to the first two questions?", and lastly "can the AI intentionally harm a human?"

Also, the Three Laws of Robotics must be engraved in the AI system if it's an AGI.
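The risk scale described above could be sketched roughly like this. Every question, weight, and threshold here is invented purely for illustration; a real audit framework would obviously need far more rigor than a weighted checklist:

```python
# A toy sketch of the proposed risk-scale audit. The questions, weights,
# and pass threshold are all made up for illustration.

AUDIT_QUESTIONS = {
    "can_get_angry_at_humans": 3,
    "can_act_on_anger": 4,
    "can_self_modify_safeguards": 5,
    "can_intentionally_harm_humans": 5,
}

def risk_score(answers):
    """Sum the weights of every audit question answered 'yes'."""
    return sum(w for q, w in AUDIT_QUESTIONS.items() if answers.get(q, False))

def passes_audit(answers, threshold=4):
    """A system fails the audit once its score reaches the threshold."""
    return risk_score(answers) < threshold

# A model that can do none of the above passes; one that can edit
# its own safeguards fails outright.
print(passes_audit({}))                                    # True
print(passes_audit({"can_self_modify_safeguards": True}))  # False
```

The hard part, of course, is not the scoring arithmetic but reliably answering questions like "can it act on anger?" about an opaque model in the first place.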

−2

Chatbotfriends t1_jeb67mm wrote

I give up. No one is taking the threat AI poses seriously. Everyone wants to be the first to create an artificial god, who probably won't be very benevolent. Never mind the human cost of losing jobs, or the increase in taxes that all but 23 countries will have to enforce to pay for the rising unemployment this will create. The tech companies lied about only going after boring and dangerous jobs; all jobs are at risk now.

−5

[deleted] t1_je9qsb1 wrote

[deleted]

−7

Bierculles t1_je9wm7a wrote

That is exactly the point, though. It's called freedom of speech, and it's a pretty neat concept. But I take it that in your all-encompassing wisdom you have the answer for what is truly normal and just, and you know exactly where to draw the line.

3

[deleted] t1_je9i5u2 wrote

[deleted]

−35

YaAbsolyutnoNikto t1_je9incn wrote

Yeah… because what we need is the US to be the AI tyrant of the world…

Cooperation (with friends) is better.

PS: Also, LAION is European. This is an EU petition. So…

25

arckeid t1_je9jinc wrote

This thing should be the ultimate collaboration: build it in Antarctica, have every country send its scientists and billionaires, and tax the countries themselves to finance everything.

6

PM_ME_ENFP_MEMES t1_je9k36c wrote

Logistically that’s obviously very difficult, but from a carbon-footprint perspective it’s ideal, because your data centre gets almost free cooling.

3

arckeid t1_je9vr2t wrote

Yeah, I’m basically daydreaming, but it would be very cool if something like that happened.

2

[deleted] t1_je9iuv9 wrote

[deleted]

−24

YaAbsolyutnoNikto t1_je9jn7z wrote

Ok? So should we just reinforce the status quo forever?

And yet, even though the US is so mighty and powerful, it still relies on Europe for plenty. Good luck building computers without us Europeans, who invented and still invent plenty of the underlying technologies.

Yes, we don’t have shiny tech monopolies, but those American companies rely on European fundamental technology, R&D, and production (like the famous Dutch chip machines that are shipped to Taiwan).

Point is, nobody can do it alone. We all (democracies) should work together.

4

bigbeautifulsquare t1_je9juq2 wrote

Can you explain why the US must be the dominant force in everything? It's not as if it's intrinsically better than any other country.

10

AllCommiesRFascists t1_jeayj0h wrote

Not the OP, but it is my country and I want to be part of the greatest collective in the world, so it should be dominant in everything

1