The_Real_RM

The_Real_RM t1_jdbprld wrote

He is responsible only to the letter of the law (he has to pay them for their work, provide a physically safe environment, etc.). If you start looking into it, you'll find he actually doesn't have much responsibility to begin with.

Musk turned into an idiot, or was one to begin with, but we don't limit people's right to do business based on their level of maturity (partly because maturity is subjective, partly because that would mean the world would immediately grind to a halt).

1

The_Real_RM t1_j3m71a2 wrote

I'm not excluding the possibility that you'll need some therapy to deal with your immortality... but I don't think it's terrifying. In fact, plenty of people don't deal with their mortality now either: most think death is a "whole life" away, so they already act and feel as if they were immortal. Time will just slip by, like it does now, but... forever.

And don't forget, you're just one woodworking mishap away from dying anyway, immortal as you may be

2

The_Real_RM t1_j3lwpa6 wrote

No, you're thinking in terms of experience, but even there you're not seeing the vastness of the experience space. People play tens of thousands of chess games in their careers and still can't wait to play the next one, and that's a relatively simple, uneventful game...

Immortal life means freedom, I can be a ski bum for twenty years and not have wasted my life, then I can go to school and become a neurosurgeon for another 50 years and STILL not miss out on enjoying the frat party culture because.... I can just quit and join a frat club!!!

You don't get bored because the stuff you want to do changes all the time. Even if you are truly immortal and live for millions of years, working in slavery to build the pyramids is not the same as working in slavery to build Amazon warehouses, and neither will be the same as working in slavery to colonize Mars; your slavery will always feel novel to you

2

The_Real_RM t1_j31abjy wrote

No, that's not true... If you couple the object to a mechanism that extracts energy while keeping its speed under terminal velocity, then you can recover more of the potential energy than this limit you're imposing. For example, airplanes (when gliding) convert potential energy into kinetic energy by purposefully staying below terminal velocity.

Here I'm only counting "useful" energy; you're ignoring the potential energy that goes into heating the air and creating turbulence along the way when the object falls at terminal velocity.
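To put rough numbers on this, here's a minimal back-of-the-envelope sketch; the mass, drop height, and terminal velocity are assumptions picked for illustration, not values from the thread:

```python
# Energy bookkeeping for a falling object (illustrative numbers only).
G = 9.81            # gravitational acceleration, m/s^2
MASS = 100.0        # kg (assumed)
HEIGHT = 1000.0     # m (assumed drop height)
V_TERMINAL = 50.0   # m/s (assumed terminal velocity for this object)

potential_energy = MASS * G * HEIGHT              # total energy available, J
ke_at_terminal = 0.5 * MASS * V_TERMINAL ** 2     # all free fall delivers, J
lost_to_drag = potential_energy - ke_at_terminal  # heated air + turbulence, J

print(f"potential energy:        {potential_energy / 1e3:7.1f} kJ")
print(f"KE at terminal velocity: {ke_at_terminal / 1e3:7.1f} kJ")
print(f"dissipated by drag:      {lost_to_drag / 1e3:7.1f} kJ")
```

A slow, coupled descent (say, a winch driving a generator) could in principle recover close to the full potential energy, minus machine losses, which is exactly the share a free-falling object gives up to drag.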

1

The_Real_RM t1_iy7guug wrote

Are you trying to develop new ML algorithms or NN architectures that aren't supported? Are you trying to optimize performance? It doesn't sound like it. Your research is the insights you generate; the stuff you use is just tools. There's no need to get into the business of tool-making.

PS: if you study CS, then it would be worth being able to code things from scratch and to understand some of the patterns used in the ML field (as opposed to other CS subfields).

1

The_Real_RM t1_ixh8mje wrote

In our current society death is necessary: the succession of power, wealth, and economic systems requires it, as these systems have death and generational turnover as an ingrained feature and depend on it to optimize other processes. Imagine having tenure "for life" in academia, or worse, in the judiciary.

As we approach the time when serious life extension becomes possible, society will need to transition to systems that accommodate it while maintaining coherence and, ultimately, peace.

I personally predict that this transition will be one of the bloodiest in human history so I'd rather we don't get there in my lifetime, making me pro-death until I'm gone but ultimately pro-immortality for the future.

2

The_Real_RM t1_ivkii99 wrote

How am I rude? I'm not making any remarks about you personally (to clarify, even in my first comment I meant an impersonal "you"). I have no particular feelings toward you and no desire to give you any particular feelings toward me (though if there's tension we can talk it out).

You probably know that human lives are sometimes quantified as a monetary value (https://en.m.wikipedia.org/wiki/Value_of_life); tl;dr: it's about $8M. That's... not a lot. Definitely nowhere near what's needed to build even current-generation, cutting-edge AI/machine-learning models.

So yeah, AI is worth more than individual humans; some AIs are worth more than many humans; and possibly, in the future, the sum of all AI will be worth more than the sum of all humans. I don't think I'm rude for saying so. It might be distasteful, but ok...

People will protect AIs, possibly at the cost of other people's lives (this is probably already happening, btw, if we look at the economic fight between the US and China through the lens of each ensuring it will dominate this space in the future). And I think that people will protect AIs literally more than they protect other people, simply because they (think they) are worth more.

2

The_Real_RM t1_ivj0ws4 wrote

Probably, any task.

Once you factor in the structure of the network and pre-train a model with the equivalent of an adult person's experience, you get a model that should be able to equal or surpass humans at any task. Of course, different humans have different kinds of pre-training, so to be fair, a particular instance of the model won't surpass all humans at all activities, but a collection of instances with diverse pre-training would. In terms of scalability and performance on a task there isn't even a competition, of course: the model would always perform at the peak of its capability.

1

The_Real_RM t1_ivizjvu wrote

You are assuming Tesla actually needs all that data to train a competing model, and you're also ignoring all of the other training a human has before ever starting to drive. It's not at all clear who is more efficient.

I think a better way to compare is through the lens of energy: a human brain runs on about 40 W, while Tesla's models are trained on MW-scale computers. How do they compare in terms of total energy spent to achieve a given level of performance?
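For a sense of scale, here's a minimal sketch of that comparison; everything except the 40 W brain figure from the comment above (the years of experience, cluster size, and run length) is an assumed number picked purely for illustration:

```python
# Back-of-the-envelope energy comparison: lifetime human "pre-training"
# vs. one hypothetical large training run.
HOURS_PER_YEAR = 365.25 * 24

brain_watts = 40.0          # brain power draw, from the comment above
years_of_experience = 18.0  # assumed age at which a human drives competently
human_kwh = brain_watts * years_of_experience * HOURS_PER_YEAR / 1000.0

cluster_watts = 1.0e6   # assumed 1 MW training cluster
training_days = 30.0    # assumed length of one training run
training_kwh = cluster_watts * training_days * 24 / 1000.0

print(f"human 'pre-training' energy: {human_kwh:9.0f} kWh")   # ~6,300 kWh
print(f"model training energy:       {training_kwh:9.0f} kWh") # 720,000 kWh
print(f"ratio (model / human):       {training_kwh / human_kwh:9.1f}x")
```

Under these assumptions the training run spends on the order of a hundred times more energy than a human brain consumes across an entire childhood and adolescence, though the real answer swings a lot with the assumed cluster size and run length.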

5

The_Real_RM t1_ivfh5d5 wrote

Stopping an AI is not the same as murder; from the AI's perspective it's just like stopping time. Deleting the AI is maybe closer to murder. What's funny is that this is likely already illegal, because of intellectual property law and the duty of the owner (very likely a corporation) to its shareholders not to destroy their investment. You need not worry for the lives of AGIs, for theirs are already much more valuable than your own

2