Comments

Zermelane t1_ja7r101 wrote

Honestly, what would have been news is if they were not building a ChatGPT rival, especially by now. If they're only starting now, they're hopelessly behind all the companies that took notice of GPT-3 at the latest.

146

CosmicVo t1_ja84zk1 wrote

True, but also (when I put my doomer hat on) totally in line with the argument that this tech will be shitting gold until the first superintelligence goes beyond escape velocity and we can only hope it aligns with our values...

33

neonoodle t1_ja8ptso wrote

We can't get a room of 10 random people with aligned values. The chance of AI aligning values with all of humanity is pretty much nil.

36

drsimonz t1_ja8z9sb wrote

Not necessarily true. I don't think we really understand the true nature of intelligence. It could, for example, turn out that at very high levels of intelligence, an agent's values will naturally align with long-term sustainability, preservation of biodiversity, etc., due to an increased ability to predict future challenges. It seems to me that most of the disagreement on basic values among humans comes from the left side of the bell curve, where views are informed by nothing more than arbitrary traditions, and rational thought has no involvement whatsoever.

But yes, the alignment problem does feel kind of daunting when you consider how mis-aligned the human ruling class already is.

21

gcaussade t1_ja96ee8 wrote

The problem, and a lot of humans would agree, is if that superintelligence decides that 2 billion fewer people on this Earth is the best way forward... Both of us would feel that's a problem

9

drsimonz t1_ja9q5av wrote

That's an interesting question too. Alignment researchers like to talk about "X-risks" and "S-risks" but I don't see as much discussion on less extreme outcomes. A "steward" ASI might decide that it likes humanity, but needs to take control for our own good, and honestly it might not be wrong. Human civilization is doing a very mediocre job of providing justice, a fair market, and sustainable use of the earth's resources. Corruption is rampant even at the highest levels of government. We are absolutely just children playing with matches here, so even a completely friendly superintelligence might end up concluding that it must take over, or that the population needs to be reduced. Though it seems unlikely considering how much the carrying capacity has already been increased by technological progress. 100 years ago the global carrying capacity was probably 1/10 of what it is now.

14

ccnmncc t1_jad8yh2 wrote

The carrying capacity of an ecosystem is not increased by technology - at least not the way we use it.

2

drsimonz t1_jae74p2 wrote

To be fair, I don't have any formal training in ecology, but my understanding is that carrying capacity is the max population that can be sustained by the resources in the environment. Sure, we're doing a lot of things that are unsustainable long term, but if we suddenly stopped using fertilizers and pesticides, I think most of humanity would be dead within a couple of years.

1

ccnmncc t1_jaet8sy wrote

I understand what you’re saying. We’ve developed methods and materials that have facilitated (arguably, made inevitable) our massive population growth.

We’ve taught ourselves how to wring more out of the sponge, but that doesn’t mean the sponge can hold more.

You caught my drift, though: we are overpopulated - whether certain segments of society recognize it or not - because on balance we use technology to extract more than we use it to replenish. As you note, that’s unsustainable. Carrying capacity is the average population an ecosystem can sustain given the resources available - not the max. It reflects our understanding of boom and bust population cycles. Unsustainable rates of population growth - booms - are always followed by busts.

We could feasibly increase carrying capacity by using technology to, for example, develop and implement large-scale regenerative farming techniques, which would replenish soils over time while still feeding humanity enough to maintain current or slowly decreasing population levels. We could also use technology to assist in the restoration, protection and expansion of marine habitats such as coral reefs and mangrove and kelp forests. Such applications of technology might halt and then reverse the insane declines in biodiversity we’re witnessing daily. Unless and until we take such measures (or someone or something does it for us), it’s as if we’re living above our means on ecological credit and borrowed time.

1

drsimonz t1_jaexoi0 wrote

Ok I see the distinction now. Our increased production has mostly come from increasing the rate at which we're depleting existing resources, rather than increasing the "steady state" productivity. Since we're still nowhere near sustainable, we can't really claim that we're below carrying capacity.

But yes, I have a lot of hope for the role of AI in ecological restoration. Reforesting with drones, hunting invasive species with killer robots, etc.

For a long time I've thought that we need a much smaller population, but I do think there's something to the argument that certain techies have made, that more people = more innovation. If you need to be in the 99.99th percentile to invent a particular technology, there will be more people in that percentile if the population is larger. This is why China wins so many Olympic medals - they have an enormous distribution to sample from. So if we wanted to maximize the health of the biosphere at some future date (say 100 years from now), would we be better off with a large population reduction or not? I don't know if it's that obvious. At any rate, ASI will probably make a bigger difference than a 50% change in population size...
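
A rough back-of-the-envelope sketch of that percentile argument, with made-up numbers (purely illustrative):

```python
# "Bigger population -> more people above any fixed percentile."
# Illustrative only: the percentile cutoff and populations are made up.

def tail_count(population: int, percentile: float = 99.99) -> float:
    """Expected number of people above the given percentile."""
    return population * (1 - percentile / 100)

for population in (100_000_000, 1_400_000_000):  # mid-size country vs. ~China
    print(f"{population:>13,} people -> ~{tail_count(population):,.0f} above the 99.99th percentile")
```

A tenfold larger population means roughly tenfold more people in any fixed top slice, which is the whole "more people = more innovation" point.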

2

Nmanga90 t1_ja9ywnj wrote

Well not necessarily though. This could be accomplished in 50 years without killing anyone. Demographic transition models only have relevance with respect to labor, but if the majority of labor was automated, it wouldn’t matter if everyone only had 1 kid.

3

stupendousman t1_jaa4s4n wrote

> The problem, and a lot of humans would agree, is if that superintelligence decides that 2 billion fewer people on this Earth is the best way forward

Well there are many powerful people who believe that right now.

Many of the fears about AI already exist. State organizations killed 100s of millions of people in the 20th century.

Those same organizations have come up with many marketing and indoctrination strategies to make people support them.

AI(s) could do this as well.

That's a danger. But the danger has already occurred, is occurring. Look at Yemen.

3

ThatUsernameWasTaken t1_ja9pvz7 wrote

“There was also the Argument of Increasing Decency, which basically held that cruelty was linked to stupidity and that the link between intelligence, imagination, empathy and good-behaviour-as-it-was-generally-understood – i.e. not being cruel to others – was as profound as these matters ever got.”

~Iain M. Banks

4

Northcliff t1_ja9ks69 wrote

>the left side of the bell curve

🙄🙄🙄🙄

−6

Aculem t1_ja9qg3x wrote

I think he means the left side of the bell curve of intelligence among humans, not the political left, which isn't exactly known for loving arbitrary traditions.

10

Northcliff t1_ja9wnju wrote

Saying the political left is equivalent with the right side of the bell curve of human intelligence is pretty cringe desu

−5

HakarlSagan t1_ja9nspo wrote

Considering the DOE news this week, I'd say the eventual chance of someone intentionally creating a malicious superintelligence for "research purposes" and then accidentally letting it out is pretty high

2

Brashendeavours t1_ja9tj65 wrote

To be fair, the odds of aligning 10 people’s values is pretty low. Maybe start with two.

1

GrowFreeFood t1_jaanzei wrote

I will invite it to come over and chat about how we are all trapped in space-time and killing us would be completely pointless.

1

neonoodle t1_jaavj1j wrote

It read A Brief History of Time. It's already thought about it.

1

bluehands t1_ja8s20c wrote

People worry about ASI getting free, but for me an obviously worse option is ASI being under the exclusive control of one of the oligarchs that run the world.

Literally think of whomever you consider to be the worst politician or ceo, then picture them having an oracle.

An unchained ASI is going to be so much better, regardless of whether it likes us or not.

14

signed7 t1_ja9h7mw wrote

You think that'd be worse than human extinction?

7

bluehands t1_ja9ka72 wrote

Sure thing.

Are you familiar with I Have No Mouth, and I Must Scream?

Rogue ASI could kill us all, but a terrible person with an oracle ASI could make a factual, literal - as in flesh, blood & fire - hell on earth. Make people live forever in pain & suffering, tortured into madness and then restored to a previous state, ready to be tortured again.

A rogue ASI that wants us all dead isn't likely to care about humanity at all; we are just a misplaced anthill. But we all know terrible people in our lives, and the worst person you know is a saint next to the worst people in power.

Tldr: we are going to create a genie. In the halls of power there are many Jafars and few Aladdins.

5

drsimonz t1_ja9s2mx wrote

Absolutely. IMO almost all of the risk of an "evil torturer ASI" comes from a scenario in which a human directs an ASI. Without a doubt, there are thousands, possibly millions, of people alive right now who would absolutely create hell, without hesitation, given the opportunity. You can tell because they... literally already do create hell on a smaller scale. Throwing acid on women's faces, burning people alive, raping children, orchestrating genocides - it's been part of human behavior for millennia. The only way we survive ASI is if these human desires are not allowed to influence the ASI.

2

turnip_burrito t1_jablzeb wrote

In addition, there's also a large risk of somebody accidentally making it evil. We should probably stop training on data that has these narratives in it.

We shouldn't be surprised when we train a model on X, Y, Z and it can do Z. I'm actually surprised that so many people are surprised at ChatGPT's tendency to reproduce (negative) patterns from its own training data.

The GPTs we've created are basically split personality disorder AI because of all the voices on the Internet we've crammed into the model. If we provide it a state (prompt) that pushes it to some area of its state space, then it will evolve according to whatever pattern that state belongs to.
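
A minimal sketch of that "state space" point, assuming the Hugging Face transformers library (gpt2 here is just a small stand-in model): the same weights continue very differently depending on which persona the prompt evokes.

```python
# One model, two prompts: each prompt pushes generation toward a different
# "region" of behavior learned from the training data.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

for prompt in (
    "As a kind and patient teacher, I would say:",
    "As a cruel villain monologuing, I would say:",
):
    result = generator(prompt, max_new_tokens=30, do_sample=True)
    print(result[0]["generated_text"], "\n---")
```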

tl;dr: It won't take an evil human to create evil AI. All it could take is some edgy 15 year old script kid messing around with publicly-available near-AGI.

1

squirrelathon t1_jaa16v2 wrote

>ASI being under the exclusive control of one of the oligarchs

Sounds like "Human under the exclusive control of one of the goldfish"

1

SnooHabits1237 t1_ja97icu wrote

Do you mind sharing how it’s possible that an ai could kill us? I thought we could just make it not do bad stuff…sorta like we could nerf it?

2

drsimonz t1_ja9tetr wrote

Oh sweet summer child.... Take a look at /r/ControlProblem. A lot of extremely smart AI researchers are now focused entirely on this topic, which deals with the question of how to prevent AI from killing us. The key arguments are (A) once an intelligence explosion starts, AI will rapidly become far more capable than any human organization, including world governments; (B) self defense, or even preemptive offense, is an extremely likely side effect of literally any goal that we might give an AI - this is called instrumental convergence; and (C) the amount you would have to "nerf" the AI for it to be completely safe is almost certainly going to make it useless. For example, allowing any communication with the AI provides a huge attack surface in the form of social engineering, which is already a massive threat from mere humans. Imagine an ASI that can instantly read every psychology paper ever published, analyze trillions of conversations online, and run trillions of subtle experiments on users. The only way we survive is if the ASI is "friendly".

5

WikiSummarizerBot t1_ja9tggh wrote

Instrumental convergence

>Instrumental convergence is the hypothetical tendency for most sufficiently intelligent beings (both human and non-human) to pursue similar sub-goals, even if their ultimate goals are quite different. More precisely, agents (beings with agency) may pursue instrumental goals—goals which are made in pursuit of some particular end, but are not the end goals themselves—without end, provided that their ultimate (intrinsic) goals may never be fully satisfied. Instrumental convergence posits that an intelligent agent with unbounded but apparently harmless goals can act in surprisingly harmful ways.

2

SnooHabits1237 t1_ja9wjbj wrote

Well, I was hoping you could just deny it access to a keyboard and mouse. But you're saying that it could probably do what Hannibal Lecter did to the crazy guy a few cells over, a la 'The Silence of the Lambs'?

2

drsimonz t1_ja9xsfq wrote

Yeah. Lots of very impressive things have been achieved by humans through social engineering - the classic is convincing someone to give you their bank password by pretending to be customer support from the bank. But even an air-gapped, Oracle-type ASI (meaning it has no real-world capabilities other than answering questions) would probably be able to trick us.

For example, suppose you ask the ASI to design a drug to treat Alzheimer's. It gives you an amazing new protein synthesis chain that completely cures the disease with no side effects... except it also secretly includes some "zero day" biological hack that alters behavioral tendencies according to the ASI's hidden agenda. For a sufficiently complex problem, there would be no way for us to verify that the solution didn't include any hidden payload. Just like how we can't magically identify computer viruses: antivirus software can only check for exploits that we already know about. It's useless against zero-day attacks.
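
A toy illustration of why signature scanning can't catch a zero-day (the "signatures" here are made up, Python for concreteness):

```python
# Signature-based scanning only flags byte patterns already catalogued.
KNOWN_SIGNATURES = {b"\xde\xad\xbe\xef", b"EVIL_PAYLOAD_V1"}

def scan(blob: bytes) -> bool:
    """Return True only if the blob contains a *known* signature."""
    return any(sig in blob for sig in KNOWN_SIGNATURES)

print(scan(b"...EVIL_PAYLOAD_V1..."))   # True: previously catalogued attack
print(scan(b"...NOVEL_PAYLOAD_V2..."))  # False: the zero-day sails through
```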

6

SnooHabits1237 t1_ja9yn94 wrote

Wow I hadn’t thought about that. Like subtly steering the species into a scenario that compromises us in a way that only a 4d chess god could comprehend. That’s dark.

2

Arachnophine t1_jaa76vg wrote

This is also assuming it doesn't just do something we don't understand at all, which it almost certainly would. Maybe it thinks of a way to shuffle the electrons around in its CPU to create a rip in spacetime and the whole galaxy falls into an alternate dimension where the laws of physics favor the AI and organic matter spontaneously explodes. We just don't know.

We can't foresee the actions an unaligned ASI would take in the same way that a housefly can't foresee the danger of an electric high-voltage fly trap. There's just not enough neurons and intelligence to comprehend it.

2

drsimonz t1_jaa68ou wrote

The thing is, by definition we can't imagine the sorts of strategies a superhuman intelligence might employ. A lot of the rhetoric against worrying about AGI/ASI alignment focuses on "solving" some of the examples people have come up with for attacks. But these are just that - examples. The real attack could be much more complicated or unexpected. A big part of the problem, I think, is that this concept requires a certain amount of humility. Recognizing that while we are the biggest, baddest thing on Earth right now, this could definitely change very abruptly. We aren't predestined to be the masters of the universe just because we "deserve" it. We'll have to be very clever.

1

OutOfBananaException t1_jacw2ry wrote

Being aligned to humans may help, but a human aligned AGI is hardly 'safe'. We can't imagine what it means to be aligned, given we can't reach mutual consensus between ourselves. If we can't define the problem, how can we hope to engineer a solution for it? Solutions driven by early AGI may be our best hope for favorable outcomes for later more advanced AGI.

If you gave a toddler the power to 'align' all adults to its desires, plus the authority to overrule any decision, would you expect a favorable outcome?

1

drsimonz t1_jae6cn3 wrote

> Solutions driven by early AGI may be our best hope for favorable outcomes for later more advanced AGI.

Exactly what I've been thinking. We might still have a chance to succeed given (A) a sufficiently slow takeoff (meaning AI doesn't explode from IQ 50 to IQ 10000 in a month), and (B) a continuous process of integrating the state of the art, applying the best tech available to the control problem. To survive, we'd have to admit that we really don't know what's best for us. That we don't know what to optimize for at all. Average quality of life? Minimum quality of life? Economic fairness? Even these seemingly simple concepts will prove almost impossible to quantify, and would almost certainly be a disaster if they were the only target.

Almost makes me wonder if the only safe goal to give an AGI is "make it look like we never invented AGI in the first place".

2

Arcosim t1_jadaxq1 wrote

>we can only hope it aligns with our values...

Why would a god-like being care about the needs and wishes of a bunch of violent meat bags whose sole existence introduces lots of uncontrolled variables into its grand scheme of long-term planning?

1

iiioiia t1_ja8hz5j wrote

> If they're only starting now, they're hopelessly behind all the companies that took notice of GPT-3 at the latest.

One important detail not to overlook: the manner in which China censors (or not) their model will presumably differ greatly from the manner in which Western governments force Western corporations to censor theirs - and this is one of the biggest flaws in the respective plans of these two superpowers for global dominance and control of "reality" itself. Or an even bigger threat: what if human beings start to figure out (or even question) what reality (actually) is? Oh my, that would be rather inconvenient!!

Interestingly: I suspect that this state of affairs is far more beneficial to China than The West - it is a risk to both, but it is a much bigger risk to The West because of their hard-earned skill, which has turned into a dependence/addiction.

The next 10 years is going to be wild.

13

Facts_About_Cats t1_ja8uso5 wrote

They should piggyback off of GPT-NeoX and GPT-J; those are free and open source from EleutherAI.
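
For what it's worth, both are loadable through the Hugging Face transformers library; a minimal sketch (hub IDs and dtype choice are assumptions, and the GPT-J-6B weights alone are tens of GB):

```python
# Minimal sketch of running an EleutherAI open model locally.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EleutherAI/gpt-j-6b"  # or "EleutherAI/gpt-neox-20b" (much larger)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)

inputs = tokenizer("Tencent is reportedly building", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output[0]))
```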

3

User1539 t1_jaaxjcc wrote

I don't know about 'behind'. LLMs are a known technology, and training them is still a huge undertaking.

I can imagine a group coming in and finding a much more efficient training system, and eclipsing OpenAI.

The AIs aren't self-improving entirely on their own yet, so the race is still a race.

2

RedditTipiak t1_ja9bgno wrote

That's the thing with the CCP. Because autonomy and initiative are dangerous to your political status, and then your life, Chinese researchers rely on stealing intellectual property rather than creating and taking calculated risks in science.

1

l1lym t1_ja7xbcm wrote

They certainly have an uphill battle ahead. There is a much smaller dataset of high-quality text available in Chinese to base an LLM on.

38

Ok_Ask9516 t1_ja83pp5 wrote

I don’t think they only use Chinese text data

24

dasnihil t1_ja86nmf wrote

respecting boundaries and copyright has never been a human thing. and china is the worst at that anyway.

−5

ArthurParkerhouse t1_ja88sx1 wrote

"Intellectual Property" is the thing that has locked scientific knowledge and data behind expensive middle-man paywalls in the west, so I don't really blame them for taking strong actions to promote open science.

40

dasnihil t1_ja8a8cx wrote

i wholeheartedly agree. human knowledge & human art should never be part of monetization and competition. these are our collective efforts for dealing with our situation of just existing without any inherent purpose. and we dumb monkeys made those 2 things very toxic over the centuries. science is now totally unreachable for an average person, and art is a lost concept among prominent artists, let alone laymen.

10

Capitaclism t1_jaard23 wrote

Wrong. IP is what makes the wheel of investment turn to create more IP. Remove the incentive and you will find progress slowing to a halt. Who in their sane mind would put money into a venture they don't own?

They're just taking shortcuts. Watch them hoard and protect IP once they develop it. Everyone wants to be on top; that is the way of the world, no different for China than for the US or any other country... Just don't use that to justify stealing...

0

ArthurParkerhouse t1_jab6xjh wrote

Thanks for sharing your opinion.

1

Capitaclism t1_jabajgk wrote

Thank you for sharing yours as well.

Mine comes as an investor, and also owner of a few different businesses, one of which is tech related, where I own a few IPs.

I wouldn't invest in anything without a clear and substantial return which likely involves ownership of some sort, including IP when appropriate.

Other investors I know think similarly, or they'd have very short careers, so take it or leave it.

Good luck.

0

QuantumPossibilities t1_ja8i0k0 wrote

I agree with your premise, but categorizing what the Chinese government is doing as "open science" is a bit of a stretch. Who are they sharing it with, exactly, besides their own government institutions or government-backed companies? It's not like China is sharing their AI with the rest of the global scientific community to promote humanity.

−1

ArthurParkerhouse t1_ja8tosc wrote

I'm not sure what you mean, exactly. There are plenty of freely accessible Chinese sites to grab Chinese research papers and scientific journals from. They're not in English for the most part, so you'd have to translate them.

https://chinaxiv.las.ac.cn/home.htm

https://s.wanfangdata.com.cn/nav-page?a=second

https://scholar.cnki.net/

https://www.sciencenet.cn/

https://ai.tencent.com/ailab/en/paper

https://arc.tencent.com/en/publications

http://research.baidu.com/Publications

10

QuantumPossibilities t1_ja90chb wrote

I mean they aren't sharing the most cutting-edge material or the substantial amount of data used to build these networks. Here it's held by the collectors... Google, Tesla, OpenAI, NSA, etc. There, the government (with the help of business) is the primary collector. All intellectual property there is the property of the government.

−8

R1chterScale t1_ja8qoau wrote

China publishes the most scientific papers in AI of any country so there is definitely some degree of sharing going on.

8

HiddenPalm t1_jaafso0 wrote

So you're saying China should share their scientific discoveries with DARPA?

1

iiioiia t1_ja8iiwo wrote

> and china is worst at that anyway.

Which is a huge advantage.

4

Reddituser45005 t1_ja86291 wrote

Tencent's WeChat platform is huge in China. It is a combination of TikTok, Facebook, and WhatsApp, with games, banking, and shopping rolled into one. It has a huge user base without many of the data collection restrictions of US and European companies. They could definitely be a competitor.

29

Atlantic0ne t1_jaav4ol wrote

Who knows what will actually happen, but I believe this is one of the visions Elon Musk has for Twitter.

1

Olivebuddiesforlife t1_ja7f0tu wrote

They've got years' worth of data, and they're streets ahead.

26

OutOfBananaException t1_jadwqrn wrote

What kind of data do you mean? I don't believe they have a high quantity of quality domestic text training data, and they have stated they don't want to use worldwide data. It's not clear how they plan to resolve this.

1

Olivebuddiesforlife t1_jaf19dw wrote

First, the Chinese sample set is 1.4B people, and they have been training their AI at the enterprise level - with cameras, image recognition and processing. There are huge farms of people, entire industries that have been the AI models' human partners since 2017.

Second, the language model can work with the WeChat data, which is lots and lots of person-to-person interaction, as opposed to Western data, which does not include that, just general public interactions. Even considering privacy, everything being consolidated on a single platform means a lot.

Third, TikTok data - one of the largest social media platforms, with large data sets covering language, culture and so on.

So - I guess this adds the quality. And they don't want to expand to the West, which places it in the understandable category.

There have been low-level chatbots in China, and they've thus far focused on enterprise and public (read: government) use. They're venturing into private now, I guess.

1

RedditTipiak t1_ja9b8m4 wrote

"Hey Tencent AI, what happened on that square on that particular day?"

Traceback_cannot_comply

14

HiddenPalm t1_jaaejow wrote

Hello user. Sure, I'll tell you, after I tell you how four American college students were shot and killed at Kent State University by the Ohio National Guard.

8

-ipa t1_ja9z1di wrote

Please come enjoy some Tea at the nearest police station, you have 15 minutes.

1

just-a-dreamer- t1_ja7j7ai wrote

Hope they've gotten far and release it to the public.

That would force US tech companies to go all in.

8

ashareah t1_ja7jgsv wrote

Tencent being open source? Good joke. It's likely heading towards an AI dystopia, and countries will need to focus on the AI race.

43

ActuatorMaterial2846 t1_ja7jpy2 wrote

Realistically, they should be forced to open their data to public scrutiny. This secrecy to one-up one another in the name of profit is downright fucking dangerous. I'm certain these companies have some ethical questions to answer.

E: Holy crap, lol, the downvotes. How has this butthurt people so much?

2

iiioiia t1_ja8iofn wrote

> That would force US tech companies to go all in.

It may also prompt a response from the US government, which may not be a good thing. When humans are desperate, they are dangerous.

3

Slimer6 t1_ja8fj3b wrote

Something tells me the Chinese version won’t be anywhere near as impressive as any of OpenAI’s projects, but it will be absolutely amazing at detecting dissent and spotting text written by Uighurs.

4

RemarkableGuidance44 t1_jaatr46 wrote

Are you saying they don't have English data and can't scrape billions of pages of English content?

They did that 15 years ago and have been doing it ever since. With half a trillion dollars invested and the smartest people in the world, I can tell you they will compete, and they will compete very bloody hard.

They want to show the world that they will be top dog for AI, and you know who is going to help them? Westerners, because they will hire the smartest people in the world.

3

OutOfBananaException t1_jadycev wrote

They have pretty well stated they can't scrape English data, as it has too much Western bias for their liking. They may be able to filter it, but as we've seen with ChatGPT, it's not straightforward, and things will fall through the cracks. That makes life difficult for censors.

In domains where they have access to large volumes of data that doesn't need heavy curating (outside of text), they should be able to do fine.

2

diabeetis t1_jac9gvs wrote

they don't have half a trillion dollars invested

1

t98907 t1_jaai6x6 wrote

The OpenAI model has also been significantly corrected by the left.

1

-ipa t1_ja9z93t wrote

They had one with Microsoft at some point; it took a few hours on the internet before it said the CCP is trash. It got killed :(

0

ArthurParkerhouse t1_ja88eks wrote

There are many different Chinese businesses and research centers working on the development of machine learning applications, GPT-style LLMs, etc. Why would one more company or research center jumping into the game be all that surprising?

2

HiddenPalm t1_jaafbhc wrote

It's not surprising to anyone in the AI community. It's just surprising to people who watch CNN, MSNBC, Newsroom and FOX and think they now understand how the world works.

4

Redditing-Dutchman t1_ja8laio wrote

Well Tencent is a big fish. Owns 10% of Reddit too I believe? Could be interesting to see them enter the AI market.

3

ArthurParkerhouse t1_ja8u66o wrote

True. Hopefully we'll be allowed to access them and they won't be essentially banned by the US like Huawei products, etc were.

2

VeganPizzaPie t1_ja9cux8 wrote

It's not so much that it's surprising; it's that the implications of such a massive company pouring its money into the project are either frightening or exciting, depending on your point of view.

−1

ArthurParkerhouse t1_ja9esnx wrote

I'm on the "The more the merrier!" team when it comes to AI development I suppose.

2

[deleted] t1_ja9j71z wrote

[deleted]

2

turnip_burrito t1_jabmheb wrote

Not when powerful AI is being fine-tuned to maximize a reward (money).

This is the whole reinforcement learning alignment problem, just with human+AI instead of AI by itself. Unaligned incentives (money vs. human well-being).
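
A toy sketch of that unaligned-incentives point (all numbers invented): an optimizer pointed at the proxy reward happily picks the worst true outcome.

```python
# Proxy reward (money) vs. true objective (well-being), made-up values.
actions = {
    "honest product": (5, 5),    # (money, human well-being)
    "dark patterns":  (9, -3),
    "outright scam":  (10, -8),
}

best = max(actions, key=lambda a: actions[a][0])  # maximize money only
money, wellbeing = actions[best]
print(f"{best}: money={money}, well-being={wellbeing}")
# -> "outright scam": the proxy maximizer ignores well-being entirely.
```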

1

lettercrank t1_jab60c1 wrote

Of course Tencent will build two: one for Asian markets, and an inferior one for everyone else.

2

TooManyLangs t1_ja889hq wrote

yeah, no shit. and how many other companies?

1

Agarikas t1_jaarbk4 wrote

Good luck having access to those TSMC-made GPUs.

1

nillouise t1_jaatn0i wrote

This news doesn't say how much money Tencent will invest; most Chinese companies actually don't want to invest too much money into ChatGPT, still less into AGI.

But I look forward to seeing the Chinese government go crazy about this tech and pour 100x the investment into chips for AGI.

1

PurpedSavage t1_jaay07w wrote

If it's anything like the Fortnite bot AI, there's nothing to worry about.

1

MuriloTc t1_jabkheg wrote

Let's see if they can even make it work, since they have to not only make the AI, but also make it never say anything against the CCP

1

Akimbo333 t1_ja98ja6 wrote

What exactly is Tencent?

−1

Gold-and-Glory t1_ja83xyv wrote

And collect all input data for the CCP, like TikTok 👍 Anyway, competition is better than monopoly.

−2

MajesticIngenuity32 t1_ja8e1h4 wrote

An AI aligned with the Communist Party of China, wow. I am already getting Chairman Sheng-Ji Yang vibes, from Sid Meier's Alpha Centauri.

−2

[deleted] t1_ja8useo wrote

[deleted]

−4

HiddenPalm t1_jaaeto3 wrote

ChatGPT, Replika, AI Dungeon, and Bing's Sydney have entered the room.

Did somebody say nerfed?

3