ProShortKingAction

ProShortKingAction t1_jdxqskd wrote

Belt and Road Initiative loans come with relatively low interest rates compared to the other lenders you'd go to for a project like this, like the IMF, which would saddle you with a higher rate and a bunch of stipulations about how you can run your economy. The debt trap isn't anything on the surface; it's that China is giving these loans to historically unstable countries that no other lender would be willing to touch, because there's no guarantee they'll stay economically stable. So you take the loan, because of course you'll be the one to turn your country around and keep the boat from tipping over, and then a nearby harvest fails, bread prices go up, and everything goes to shit. Now here you are, stuck with this loan and no capacity to repay it

7

ProShortKingAction t1_ja95pc5 wrote

Honestly I think a big part of the difference between Cold War propaganda and modern propaganda on the subject is simply time. A large chunk of the people who were so into the idea of conflict with Russia also remembered the day the newspaper showed them images of a single bomb wiping a major city off the face of the Earth. People nowadays don't have that kind of reference point. I have a feeling that if a city like Kyiv (God forbid) got obliterated by a nuke, people would not be so dismissive of the idea of nukes being something to be scared of

Edit: I'm just using the city of Kyiv as an example because the idea of it being nuked has regularly been in the news and dismissed as an impossibility by regular people

1

ProShortKingAction t1_ja8ut3u wrote

I guess you are both correct, because I was very vague in my initial comment. You are right that, for example, 100 Hiroshima-sized nukes dropped in a desert would not end the world. However, those same 100 nukes, which are much smaller than what countries are capable of building today, dropped on heavily populated areas of Pakistan and India would cause a level of global famine that would bring every country on the planet to its knees, worse than any other famine in world history.

But even that might not be the end of the world to you. "The end of the world" was a pretty vague way for me to describe it, and for that I'm sorry. I meant more the collapse of everything we currently rely on to survive: countries falling apart, countless dead from starvation in even the wealthiest nations of the world, global trade collapsing, resource wars both regional and international, freak weather phenomena, etc.

And that's not even considering how much more powerful modern nukes are than the one dropped on Hiroshima

2

ProShortKingAction t1_j33gery wrote

Honestly it sounds like it'd lead to some Nazi shit. Intelligence is incredibly subjective by its very nature; if you ask a thousand people what intelligence is, you'll get a thousand different answers. On top of that, we already have a lot of evidence that someone's learning capability in childhood is directly related to how safe they felt growing up, presumably because the safer someone feels, the more willing their brain is to let them experiment and reach for new things. That typically means people who grow up in areas with higher violent crime rates, or people persecuted by their government, tend to score lower on tests and in general have a harder time at school.

If we take some numbers and claim they represent how intelligent a person is, and a group persecuted by a government routinely scores lower than the rest of that society, it becomes pretty easy for the government to claim that group is less of a person than everyone else. Not to mention the loss of personhood for people with mental disabilities. Whatever metric we tie to AI quality is going to be directly associated with how "human" the AI is in the eyes of the general public, which is all fine while the AIs are pretty meh, but once there are groups of humans scoring worse than the AI, it's going to bring up a whole host of issues

7

ProShortKingAction t1_j32sn33 wrote

Yeah, I also feel like we're reaching a point where moving the goalposts of what counts as consciousness any farther gets into some pretty dark territory. Sure, an AI expert can probably explain why ChatGPT isn't conscious, but at this point the explanation will be one that nobody outside the field can follow. I keep seeing takes like "oh, it isn't conscious because it doesn't keep its train of thought between conversations"... OK, so your grandfather with dementia isn't conscious? Is that really a point these people want to make?

I feel like it's getting close enough that we should stop putting so much weight on the idea of what is and isn't conscious, before we move this goalpost past what a lot of human beings can clear.

16

ProShortKingAction t1_j32rvyt wrote

It's not in a Discord debate with philosophy nerds; it was told to present itself as a high school teacher, which is why it makes a lot of sense for it to put forward multiple possible ideas and help the student figure things out for themselves instead of giving one definitive answer to an open-ended question

17

ProShortKingAction t1_iye9p5c wrote

Tell that to the folks working in transistor development. Sometimes modern industries have to create something where parts of it will never be directly observable but still need to be predictable in order to ship an end product.

2

ProShortKingAction t1_iy94ob1 wrote

That makes a lot more sense, apologies. I've been seeing a weirdly large amount of talk along the lines of "oh, insert nation with nuclear weapons here should watch out, they can't handle insert nation with nuclear weapons here," like this generation has fully forgotten that these weapons even exist. I thought you were probably referring to the increased tensions between the U.S. and China, or China and India

0

ProShortKingAction t1_iwvoojk wrote

Reply to comment by rnimmer in Full Self-Driving Twitter by [deleted]

Even with the farthest stretch of the imagination as to what the ML researchers/developers at Tesla are capable of, you would still need a significant amount of data on the tasks that need to be automated. If entire teams are laid off, how will their tasks even be explained to the model, let alone demonstrated enough times for the model to understand?

23

ProShortKingAction t1_iwvkss0 wrote

Automating software tasks is an entirely different skill set from building a machine learning model to automate something like driving. Automating software tasks involves slowly documenting and analyzing each best practice, finding what is repetitive, and creating scripts/pipelines for those repetitive tasks. It's not AI based; it's more a bunch of if/else statements that you slowly build up over years. Even if the skill set were the same and Elon brought on one engineer for every tech employee at Twitter, it would still take months if not years of documentation and slow replacement to make up for the kind of skill drain that has happened at Twitter over the past two weeks. Nine women can't make a baby in a month and all that
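
To make the contrast concrete, here's a minimal sketch of what that kind of rule-based automation looks like in practice. Every task name, path, and threshold in it is made up for illustration; the point is just that it's hand-written conditionals and chained scripts, not a learned model:

```python
# Hypothetical example of rule-based task automation: each function
# encodes one documented, repetitive task as plain conditionals.
import shutil
import subprocess
from pathlib import Path

def archive_old_logs(log_dir: Path, archive_dir: Path) -> None:
    """One repetitive task, written down after watching how it's done by hand."""
    archive_dir.mkdir(parents=True, exist_ok=True)
    for log_file in log_dir.glob("*.log"):
        # The "rule" is just an if statement someone added once they
        # understood the team's existing practice.
        if log_file.stat().st_size > 10 * 1024 * 1024:  # larger than 10 MB
            shutil.move(str(log_file), str(archive_dir / log_file.name))

def nightly_pipeline() -> None:
    """A pipeline is just these scripts chained in a fixed order."""
    archive_old_logs(Path("/var/log/app"), Path("/var/log/app/archive"))
    # Each step below captures one more best practice discovered over time.
    subprocess.run(["pytest", "--quiet"], check=True)

if __name__ == "__main__":
    nightly_pipeline()
```

Notice there's no model anywhere in that, just accumulated institutional knowledge turned into code, which is exactly what walks out the door when the team that wrote it is laid off.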

37

ProShortKingAction t1_iurmfoc wrote

Sorry, I took that as them saying they had built-in safety checks meant to prevent the robot from taking an unsafe physical action, not from writing vulnerable code. I might have misinterpreted that.

Another thing I'd like to bring up in favor of this approach is that vulnerabilities slip through in regular code all the time; it doesn't have to be perfect, just safer than the current way of doing things. It's like driverless cars: they don't have to be perfect, just safer than a car driven by a human, which seems like a low bar. I just don't see anything in this post that implies a safe way of doing this isn't still rather far off

Edit: In the Twitter thread by one of the researchers, posted elsewhere in this thread, they very vaguely mention "... and many potential safety risks need to be addressed". It's hard to tell whether this refers to the robot physically interacting with the world, to cybersecurity concerns, or to both.

6

ProShortKingAction t1_iurhi4z wrote

How do you prevent the robot from writing unsafe code? If it is continually adding new code without being checked by devs or a security team, it seems like you'd run into the problem that it's always one instruction away from generating code that contains a dangerous vulnerability
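
For context, the sort of automated check I have in mind would look something like the sketch below; bandit is a real Python security scanner, and everything else here (the function name, the temp-file setup) is hypothetical:

```python
# Sketch of a minimal automated gate: scan each generated snippet with a
# static security scanner before it's ever run. "bandit" is a real Python
# scanner; the surrounding setup is hypothetical.
import subprocess
import tempfile

def passes_security_scan(generated_code: str) -> bool:
    """Return False if the scanner flags anything in the generated code."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(generated_code)
        snippet_path = f.name
    # bandit exits with a nonzero status when it finds an issue.
    result = subprocess.run(["bandit", "--quiet", snippet_path],
                            capture_output=True)
    return result.returncode == 0
```

And even a gate like that only catches known vulnerability patterns, which is exactly the problem: one unusual instruction and something dangerous slips right past it.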

28

ProShortKingAction t1_irbibv8 wrote

I keep feeling like one of these discoveries is going to turn out to be the penicillin of our generation, leaving such an immense impact on human life and quality of life that it's borderline impossible to imagine life without it.

You can imagine life without cars, without phones, without computers. But writers, and people just looking back at the past, regularly forget how insanely different life was before penicillin. I personally would be dead four times over if it hadn't been discovered, and I bet most of the people reading this have similar stories. The impact is so immense that it's almost impossible for us to wrap our heads around, and so we oftentimes take it for granted, just as a part of human life

9