Comments

SkyeandJett t1_jefl01s wrote

Cool. Now go back in time a couple years and that might actually mean something. Good that someone is even aware of what's up though.

126

flexaplext t1_jefmsfi wrote

Would have had more of a chance with an EU-backed one, trying to buy out an existing firm that's already gone a long way with LLM development.

Oh wait, Brexit happened 🤷🏻‍♂️🤦🏻‍♂️

And governments are useless.

13

flexaplext t1_jefnwy6 wrote

Yeah. The EU would provide vastly more money and resources, which could help make up for its inevitable incompetence and failings.

The UK, on the other hand, will supply a paltry budget that won't make a dent, along with its own fresh servings of incompetence, of course.

9

flexaplext t1_jefrxdi wrote

No. It's for them to rejoin the EU and probably be even more aligned inside it than they were before. So they're not this pathetic little island trying to take on the likes of the US and China 😂

5

Smallpaul t1_jefs4d6 wrote

It absolutely means something.

The goal wouldn’t be to make something better than GPT-5 to outcompete it.

The goal would be to have an AI that could be run locally, fine-tuned locally, and trusted with the data of UK citizens.

78

28mmAtF8 t1_jefsota wrote

I actually think this is a pretty good idea. We need to keep in mind that there won't be one "all powerful" AI incumbent for a long time. There's too much ground to cover and too many niche use cases for one system to truly dominate. Maybe if we see an actual AGI, but that's still quite a ways away.

31

flexaplext t1_jefsxdn wrote

Yeah, that's the alternative. And such people will probably win because governments are so useless.

However, I suspect the US government will just forcibly take over OpenAI at some point on the grounds of National Security. They may be useless, but they're good at taking things over.

The same option probably won't exist for the UK government, though, which is why they'd be better off rejoining the EU and trying something within that union. Of course, with the EU buying out a decent existing company to get themselves started, as I also suggested.

Or the EU could just fund many different companies and then take over the one that wins out, the US-style plan. Asking the UK to try this model alone dramatically reduces their funding, their company pool, and their odds of being successful.

5

nomadiclizard t1_jeftoid wrote

Sure, we'll get right on that, after you've surrendered to the Hague and had your trial for the war crime of aggression.

8

Iffykindofguy t1_jefu3qn wrote

Congrats, you fell for capitalist propaganda. Governments can actually be extremely effective, especially when compared to the "market": look at the operating costs of private insurance versus how Medicaid operates internally. Where do you see industry absolutely crushing it? Because I see it nowhere; they're all too self-serving and forced to show immediate growth or they're removed from power, which makes long-term planning impossible. Please, educate yourself.

2

broadenandbuild t1_jefv0mf wrote

The funny thing is that ChatGPT was developed entirely on the back of public data. Naturally it should be open.

30

28mmAtF8 t1_jefv96k wrote

It's only as dystopian as its use case. The important factor in implementation would be that whatever group runs it needs to maintain some independence from political whims.

I don't have much more faith in corporations to be benevolent with them, so I'd like to see public-service AIs act as a counterweight to AIs launched with pure greed in mind.

17

flexaplext t1_jefw4sn wrote

Yeah, because we don't just see governments knee-jerk reacting to AI now when private enterprise has been developing and investing in it for many years.

And it isn't the most important and dangerous technology that will ever exist, and yet they have little to no regulation of it or proper plans going forward for it, despite this being obvious and known for decades.

And MPs know so much about computer programming, I'm sure they'll know how to lead AI development and appoint the right people to it, doing so in an efficient and innovative manner.

And I'm sure the best programmers will be lining up to work for the government and their military rather than OpenAI and progressive companies.

4

Desi___Gigachad t1_jefy8xl wrote

So that the LLM only spews what the government wants the public to hear?

3

acutelychronicpanic t1_jefzkcd wrote

I bet it will be well aligned to human values and not just their legal system and government. /s

I do think more systems will be good. But I worry about governments putting in strict regulations in the name of safety that render us unable to actually solve alignment.

5

TheBoundFenrir t1_jefzmjo wrote

PoI already did this. They called theirs Samaritan. /s

1

Newhereeeeee t1_jeg0a0u wrote

That’s good. Would be nice for every country or region to have one and try to outcompete each other into automating work.

2

lovesdogsguy t1_jeg4s08 wrote

It's also the second time this week a leader of a major western country has talked openly about artificial intelligence. This is about to really blow up.

Edit: Seriously, world leaders are going to be flinging shit like apes for the next six months.

48

zendonium t1_jeg4ww9 wrote

AI Speech British Overlord. Ah, so this is what he really wanted from the ASBO.

2

drekmonger t1_jeg5eb6 wrote

There's plenty of justification. Puffy jacket pope pictures for a start.

The capability of modern AI to churn out disinformation campaigns should be a serious concern. And that's just the tip of the disruptive iceberg.

−1

WonderFactory t1_jeg5tbe wrote

With enough investment it wouldn't take long to catch up with OpenAI. I think by this time next year there will be multiple models better than GPT-4, maybe even hundreds. Almost anyone can do it. It's possibly the case that GPT-4 isn't even trained optimally. It's very slow, so it presumably wasn't built on the optimal data/parameter balance shown in the Chinchilla paper.
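
For reference, here's a rough back-of-the-envelope sketch of that Chinchilla rule of thumb. The "20 tokens per parameter" and "6 FLOPs per parameter per token" figures are the usual approximations from the scaling-law literature, and the 70B model size is just an assumed example, nothing from this thread.

```python
# Rough Chinchilla-style back-of-the-envelope (approximations, not exact values).

def chinchilla_optimal_tokens(n_params: float) -> float:
    """Approximate compute-optimal training tokens: ~20 tokens per parameter."""
    return 20 * n_params

def training_flops(n_params: float, n_tokens: float) -> float:
    """Very rough training compute estimate: ~6 FLOPs per parameter per token."""
    return 6 * n_params * n_tokens

n_params = 70e9                               # assumed 70B-parameter model
tokens = chinchilla_optimal_tokens(n_params)  # ~1.4 trillion tokens
flops = training_flops(n_params, tokens)      # ~5.9e23 training FLOPs
print(f"{tokens:.3e} tokens, {flops:.3e} FLOPs")
```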

−4

Freedom_Alive t1_jeggdja wrote

They're too far behind the curve. The horse has bolted and taken a rocket ship to Mars.

Imagine the gov trying to develop electric cars today.

The public sector will be 10x the cost to achieve 10% of the results.

And recently there was a good meme about the UK government paying cyber security experts £50k, so imagine who wants to go work for them instead of building a million-dollar company with this tech.

11

adamantium99 t1_jeggu2z wrote

Awright: Switchin’ to footie mode. Did you see that ludicrous display last night?

3

Smallpaul t1_jegi080 wrote

They wouldn’t do it in-house. They would fund some kind of coalition.

Also: it’s been proven that you can use one AI to train another so you can bootstrap more cheaply than starting from scratch. Lots of relevant open source out there.

In any case, a huge part of the problem is just having enough cash to rent GPUs, not necessarily deep technical problems.

Also, as I said above, it doesn’t have to be competitive. It doesn’t have to be a product they sell. It could be a tool they themselves use to run the UK government without sending citizen data to a black box in America.
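
To illustrate the bootstrapping point above, here's a minimal sketch of the "use one AI to train another" idea: sample text from a larger "teacher" model, then take a fine-tuning step on a smaller open "student" model using that text as training data. The model names, prompt and hyperparameters are placeholder assumptions, not anyone's actual recipe.

```python
# Sketch: bootstrap a small "student" model on text generated by a "teacher".
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

teacher_name = "gpt2-large"  # stand-in for whatever larger model supplies the data
student_name = "distilgpt2"  # stand-in for the smaller model being bootstrapped

# 1. Generate synthetic training text from the teacher.
teacher_tok = AutoTokenizer.from_pretrained(teacher_name)
teacher = AutoModelForCausalLM.from_pretrained(teacher_name)
prompt = "Explain why a country might want its own language model:"
ids = teacher_tok(prompt, return_tensors="pt").input_ids
sample = teacher.generate(ids, max_new_tokens=64, do_sample=True)
synthetic_text = teacher_tok.decode(sample[0], skip_special_tokens=True)

# 2. Take one fine-tuning step on the student with the standard causal-LM loss.
student_tok = AutoTokenizer.from_pretrained(student_name)
student = AutoModelForCausalLM.from_pretrained(student_name)
batch = student_tok(synthetic_text, return_tensors="pt", truncation=True)
optimizer = torch.optim.AdamW(student.parameters(), lr=1e-5)

student.train()
out = student(**batch, labels=batch["input_ids"])  # loss computed internally
out.loss.backward()
optimizer.step()
optimizer.zero_grad()
print(f"one bootstrap step done, loss = {out.loss.item():.3f}")
```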

11

Cr4zko t1_jegi8sv wrote

I'd dig it if everyone's able to use it and not just UK citizens.

1

visarga t1_jegkwr6 wrote

If you stop regular people from using AI, then only criminals and the government will use it. How is that better? And you can't stop it, because a good enough AI will run on edge/cheap hardware.

To be practical about disinformation, it would be better to work on human+AI solutions, like a network of journalists flagging stories and then AI extending that information to the rest of the media.

You should see the problem of disinformation as biology: the constant war between organisms and viruses, the evolving immune system. Constant war is the normal state; we should have the AI tools to bear the disinformation attack. Virus and anti-virus.

9

drekmonger t1_jegnvxe wrote

I don't think it's possible to put the genie back into the bottle.

I also think that once the extent of AI's present-day capabilities starts to click with the general population, governments might try.

1

WetForHer t1_jegp6tr wrote

That war criminal should just shut the f up.

5

Ihateseatbelts t1_jegrot2 wrote

I'm all for it... in principle. An actual public-service answer to for-profit LLMs should absolutely be an option, if not the go-to solution. But given our UK leadership and their flagrant disregard of said public, I'm not so sure.

London is an AI research hotspot, which is great, sure, but that's also what I'm worried about. The current state apparatus lends itself to a culture of dictatorship by consultancy, which ultimately stifles public interest and agency.

2

Borrowedshorts t1_jegu043 wrote

The British have always been near the forefront of computing technology. This seems to fit the mold.

3

RemyVonLion t1_jegusn2 wrote

Corporations will quickly flip on prioritizing greed once they realize AI will rapidly outpace our capabilities in all aspects, and we will become insignificant competition to a superior race.

3

PolishSoundGuy t1_jegx4cc wrote

I agree with you that it should be open, but it's not "entirely trained on public data". In order for the model to be useful, someone has to feed it prompts and ideal responses, which in practice was tens of thousands of people employed by OpenAI. They fine-tuned the model into what it is today.
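
For anyone wondering what that looks like in practice, here's a hedged illustration of the kind of prompt/ideal-response pairs human labellers write for supervised fine-tuning, on top of the publicly scraped pretraining text. The examples are made up, not OpenAI's actual data.

```python
# Made-up examples of supervised fine-tuning data: human-written prompts
# paired with the "ideal" responses the model is trained to imitate.
sft_examples = [
    {
        "prompt": "Explain photosynthesis to a ten-year-old.",
        "ideal_response": "Plants take sunlight, water and air and turn them into food...",
    },
    {
        "prompt": "Write a polite email declining a meeting invitation.",
        "ideal_response": "Hi, thanks for the invite. Unfortunately I can't make it that day...",
    },
]
```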

9

BS_Radar0 t1_jeh1utx wrote

Tony Blair was an idiot in office, and he’s an idiot now.

2

CMDR_BunBun t1_jeh23dh wrote

Just finished watching the Fridman/Yudkowsky interview and honestly... the man does make some good points. I'm not ready to jump off a cliff yet like he seems so hell-bent on, but damn, the situation is dicey atm. The alignment issue is not settled, and it seems everyone and their sister is racing towards strong AI... which may lead to AGI... an unaligned AI. We have got to get this right, because we will only get one shot at it.

2