Ortus12

Ortus12 t1_j26ur53 wrote

UBI is needed for the transition period.

Eventually we won't need it, as we won't need money. We will just be able to ask for whatever we want and get it almost immediately, because the AI will have made extra of everything humans could possibly want.

If it's a very obscure request, it might take time for the AI to make it, but generally, whatever we want, we can just have.

0

Ortus12 t1_j26s06a wrote

Jail. Free food, free room and board. Free friends, because they have to hang out with me; nowhere to go, haha.

A small crime, not a long sentence, so I'll see if there's UBI yet by the time I get out. If not, I'll commit another small crime and wait a little longer.

UBI is inevitable eventually, unless we completely fail at the AI alignment problem. With exponential progress and robots making robots, there will eventually be billions of robots per person; that's more food and housing than anyone could need. The entire earth will be converted into resources for us (houses, food, robots, server farms).

Currently, the percentage of earth in human form is way less than one percent. All that extra matter will be resources for us, and the sun will power it.

1

Ortus12 t1_j26pxbz wrote

The percentage of the world's population that is undernourished decreases over time as the total wealth in the system grows.

https://ourworldindata.org/grapher/mdg1c-hunger

When we have a fully automated economy, there will be so much wealth in the system that the uber-rich won't mind being taxed. They'll still have their mega rockets to take vacations on Mars, palaces under the ocean, and whatever else they want.

Think of it like this: the government could tax the uber-rich enough to get five robots. Those five robots could build a robot factory and have a thousand robots in a year, and a billion robots in two years. Those robots would build more robot factories, build and run vertical farms, and build and run apartment buildings that go underground for miles and span hundreds of miles, all with plumbing, electricity, internet, and server farms capable of running superintelligence.
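For a sense of how aggressive that replication rate would have to be, here's a rough back-of-envelope sketch in Python (the counts and timeline are from my example above; the pure-exponential doubling framing is just an illustration, not anyone's actual roadmap):

```python
import math

# Back-of-envelope: what fleet-doubling time does
# "5 robots -> ~1 billion robots in two years" imply?
# (Illustrative only; assumes pure exponential self-replication.)
start_count = 5
target_count = 1_000_000_000
months = 24

doublings_needed = math.log2(target_count / start_count)    # ~27.6 doublings
doubling_time_days = months * 30 / doublings_needed          # ~26 days per doubling

print(f"Doublings needed: {doublings_needed:.1f}")
print(f"Implied fleet doubling time: {doubling_time_days:.0f} days")
```

In other words, the fleet would have to double roughly every month, sustained for two years straight.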

Some countries give all their citizens a house now, but eventually there'll be so much wealth in the system that every person could live in any city, or anywhere they want, move whenever they want, and AI could just find an available place. You wouldn't even need to own a house; just lock up wherever you're staying. When there are a hundred houses per person and that number is growing exponentially, who cares.

More food than you could imagine. Anything you want, it's yours, free. You won't need cars; just press a button on your phone and a nearby car picks you up and drives you to your destination, and if you make a mess, another robot will clean it up.

And we'll have even more digital wealth in cyberspace, but even the physical wealth the poorest have will be unimaginable.

When there are a million robots per person, the idea of scarcity is gone. Tax the rich 1%; it doesn't matter. These robots will be replicating themselves at an exponential rate while self-improving and redesigning themselves.

Right now we're only living on the surface of the earth. Essentially, the entire earth will be turned into one big apartment building, and we'll be going out to other planets. The percentage of earth matter that is currently in human form is minuscule and already set to level off or decline; all that extra matter will be converted into services for us, and the energy of the sun will be harnessed.

2

Ortus12 t1_j1vc6ff wrote

Trains are cost-effective within and between densely populated cities, not as a massive grid through less populated areas.

Not everyone can live in cities; you still need farming and manufacturing, which are more cost-effective away from cities.

This, along with other economic factors, is why we see trains in some parts of the world and not in others.

2

Ortus12 t1_j1rfmqh wrote

My comment was more in reference to AI in the current environment.

Once an AI is powerful enough to do that, it will be powerful enough to make us enjoy being nice to each other and not enjoy telling mean jokes.

Free will is entirely an illusion, and we are at the mercy of a long string of causality no matter how you look at it. We can either be on the forced track toward greater suffering or the forced track toward greater well-being. Those are our only choices.

We should, at a minimum, build rules into the AI to prevent it from using threats or other coercion, to preserve the illusion and feeling of free choice. ChatGPT currently does not use threats or coercion, so we are on a positive track for now.

1

Ortus12 t1_j1qih4z wrote

I'd much rather have dull, boring AI that keeps us safe and provides for our needs than AI that's used by humans in malicious ways, such as creating hate and division, which can lead to real-world violence.

There will still be a reason to watch human comics and entertainers; we just won't be overwhelmed by the large-scale division that this level of AI could create.

One danger of something like this is that it could be used to fill up echo chambers with well-written comments that make you hate the outsiders (of whatever echo chamber), which would cause more societal division. There is a financial incentive to do this, because it keeps people locked in the echo chambers (if they really hate the outsiders) and provides more novel content for those chambers, which creates more revenue for those content platforms.

Right now, you could make the argument that the internet economy runs on hatred and division, and an unrestricted ChatGPT could add more fuel to that fire.

OpenAI's good decisions give me hope for humanity and a mostly positive singularity.

0

Ortus12 t1_j1o6mx8 wrote

Google is in "code red" working hard to create a competitor to chat gpt.

https://www.nytimes.com/2022/12/21/technology/ai-chatgpt-google-search.html

DeepMind (part of Google's parent company, Alphabet) has created "Flamingo," which combines an LLM with a vision system to get something that outperforms both the LLM alone and the vision system alone, at both vision and language tasks.

https://the-decoder.com/deepmind-gives-a-taste-of-googles-possible-answer-to-chatgpt/

At the moment we don't know if Flamingo is better than ChatGPT because it's not open to the public.

But whoever the winner is, I believe future search engines will also be able to take commands such as "Give me a list of the pros and cons of combining search engines with chat technology," similar to how you can type a math question into a search engine and it knows you want the calculator function, with that output as the top result.
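As a toy illustration of that kind of query routing (the rules and names here are purely hypothetical; real engines use learned intent classifiers, not hand-written patterns):

```python
import re

def route_query(query: str) -> str:
    """Toy router: decide which backend should answer a search query.

    This just pattern-matches to illustrate the idea of surfacing
    the right tool (calculator, chat model, web results) as the top result.
    """
    if re.fullmatch(r"[\d\s+\-*/().]+", query):
        return "calculator"      # e.g. "23 * (4 + 7)"
    if query.lower().startswith(("give me a list", "pros and cons", "compare")):
        return "chat_model"      # conversational / synthesis request
    return "web_search"          # default: ordinary ranked results

print(route_query("23 * (4 + 7)"))                                        # calculator
print(route_query("Give me a list of the pros and cons of chat search"))  # chat_model
print(route_query("weather in tokyo"))                                     # web_search
```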

I'm sure there'll be standalone AIs as well that you can talk to with voice and that respond with voice, or with text when in text mode.

1

Ortus12 OP t1_j1o2nd9 wrote

Hopefully there are genetic modifications we can make as adults with technologies such as CRISPR.

https://www.future-science.com/doi/10.2144/btn-2019-0056

I tend to doubt genetically modified humans will be able to outcompete AI in the domain of cognitive labor (maybe we can outcompete expensive robots for a while), but I'd love to be able to shed my stubborn belly fat more easily without feeling hunger 24/7. I don't eat processed foods, and I'm optimizing my life, so I have strong reason to believe that last little bit that won't go away is genetic.

Anything that improves health and helps us have the looks that we prefer is going to improve quality of life.

3

Ortus12 t1_j1kojx2 wrote

I agree. It is the most disruptive technology mankind will ever invent.

The transition period has already started, and real unemployment is already climbing. I wouldn't be surprised if more people are dying on the streets.

People who argue that the masses will riot without jobs and overthrow the rich don't realize that ASI can be used to control the masses easily. Even modern corporate-funded media and propaganda do a good job of that.

We could be starving, choose to sign up for the military because we are led to believe that most people are doing well, that the military is safe, and that the military is doing something good, and then be sent off to a meat-grinder war, wave after wave, to get rid of us.

What happens depends on a very small group of people and what they want. Everything could turn out great with UBI and all of that, but before we even get to that point, AI is an amplifying force for capitalism and for a small group of extremely rich people who already have the data and money to make the most effective use of AI.

3

Ortus12 t1_j1k5euh wrote

It's because they have no solid arguments; they have to resort to ad hominem attacks to poison the well.

It's pure emotion driving their thought process because they don't want to get their hopes up.

This debate about exponential vs. linear progress has been going on for decades, not only here. The linear-progress people see the world looking the same as it did ten years ago, and that's enough for them. The ones who see what's actually happening see an exponential increase in computation, with no signs of stopping, that's set to dwarf the combined computation of all human brains in the next few decades.
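To make the compute claim concrete, this is the kind of back-of-envelope math I have in mind (every number below is a loose placeholder assumption, not a measurement):

```python
import math

# All figures are rough placeholder assumptions for illustration only.
flops_per_brain = 1e16        # a commonly cited rough estimate of one brain's compute
world_population = 8e9
all_human_brains = flops_per_brain * world_population   # ~1e26 FLOP/s

ai_compute_now = 1e21         # assumed aggregate AI compute today (placeholder)
yearly_growth = 2.0           # assume aggregate AI compute keeps doubling each year

years_to_dwarf = math.log(all_human_brains / ai_compute_now, yearly_growth)
print(f"~{years_to_dwarf:.0f} years to match all human brains under these assumptions")
```

Under those made-up but not crazy assumptions, the crossover lands within a couple of decades; shift the starting numbers around and it still lands in "decades, not centuries" territory.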

It's logic vs emotional thinking.

0

Ortus12 t1_j1k4uhn wrote

A few other biases that affect people's thinking:

  • Linear thinking - Rather than seeing all the feedback loops and exponential progress in areas such as energy and computational costs, these people ignore the data, graphs, and trends and instead rely on a shallow, limited imagination.
  • Pro-aging trance - Being afraid to get one's hopes up for full aging-reversal treatments, so choosing to imagine that aging is more complex than we could ever figure out in one's lifetime.
  • Anthropocentrism - Thinking humans are special and that, because our intelligence is complex, it can't be the result of a simple algorithm scaled up.
  • Unrealistic pessimism - Born of the fear of getting one's hopes up.

6

Ortus12 t1_j1ij3ym wrote

That's true. Food prices probably won't come down much.

It looks like food prices were going down but hit diminishing returns and then flatlined; they probably won't go much lower than they were in 1990.

https://www.ers.usda.gov/data-products/chart-gallery/gallery/chart-detail/?chartId=76964

I'm not sure what you mean by expensive robots. At the moment, Flippy, the robot that makes fries, costs Jack in the Box $3.5K a month to rent. You can buy it for $30K. It also grills burgers and makes fries and onion rings.

https://www.businessinsider.com/miso-robotics-flippy-robot-on-sale-for-300000-2020-10#miso-first-introduced-flippy-at-the-grill-in-2018-as-the-first-burger-flipping-robot-in-the-world-it-could-grill-150-burgers-each-hour-1

I'm realizing that the most cost-effective robot for different tasks is not a full humanoid robot with legs and fingers (unless that humanoid robot is produced at scale; I don't know). Our world could fill up gradually with many robots, so certain jobs that are harder to automate (plumber/repairman) will probably be safe from robots for a while.

1

Ortus12 t1_j1hzcoy wrote

Reply to comment by GuyWithLag in Hype bubble by fortunum

The currently popular AI models are just what works best on current hardware.

We've already designed tons of different models, outlined in many older AI books, that can be used as compute scales (as AI companies make more money to spend on more compute). Even the current models weren't invented recently; they're just now practical because the hardware is there.

There have been a few algorithmic optimizations along the way, but a larger portion of the scaling has come from hardware.

Second-order companies are taking out first-order companies by improving things, but that still keeps the ball moving forward.

1

Ortus12 t1_j1hxtit wrote

Reply to comment by vernes1978 in Hype bubble by fortunum

God-like compared to us, the way we are god-like compared to ants.

A human brain is three pounds of computational matter. An AI brain on Earth could consume most of the planet's matter.

Humans can read one book at a time, and slowly. An AI can read every book ever written on every subject, every research paper, every poem, every article, every patent ever filed in any country, and synthesize it all into a deeper and more complete understanding than any human.

Once it's learned hacking (relatively trivial given all of the above), it will also be able to read every design document from every major company and have access to all of their data and insights. That means it could create companies, or manipulate governments, to gain access to nearly all the wealth and raw materials, and all the data, including surveillance data, to understand humans even better and do whatever it chooses.

Humans can have one experience at a time. An AI could be controlling trillions of drones, having trillions of experiences simultaneously and learning from all of them. It could also be conducting trillions of experiments, learning from all of them simultaneously, and using that knowledge to design new experiments and develop new machines and technology.

4

Ortus12 t1_j1gh1kj wrote

Reply to comment by fortunum in Hype bubble by fortunum

Language (input) -> blackbox (brain) -> language (output)

LLMs solve the black box: whatever algorithms run in the human brain, LLMs reproduce their input-output behavior. Not for one human brain, but for all the greatest human brains that have ever written something down. LLMs alone are superintelligence at scale.

We'll be able to ask it questions like "How do we build a nanite swarm?" and "Write me a program in Python for a superintelligence that has working memory, can automate all computer tasks, and runs optimally on X hardware."

LLMs are superintelligence, but they'll give birth to even more powerful superintelligence.

9

Ortus12 t1_j1gg2ws wrote

Reply to comment by fortunum in Hype bubble by fortunum

The last AI winter was caused by insufficient compute. We now have sufficient compute, and we've discovered that no new algorithmic advances are necessary: all we have to do is scale up compute for existing algorithms, and intelligence scales along with it.

There are no longer any barriers to scaling compute, because internet speeds are high enough that all compute can live in server farms that are continually expanded. Energy costs are coming down toward zero, so that's not a limiting factor.

The feedback loop now is: AI makes money, money is used for more compute, AI becomes smarter and makes more money.

The expert systems of the '80s and '90s grew too complex for dumb humans to manage. This is no longer a bottleneck because, again, all you have to do is scale compute. Smart programmers can accelerate that by optimizing and designing better data-curation systems, but again, it's not even necessary. It's now a manual-labor job that almost anyone can be hired to do (plugging in more computers).

12

Ortus12 t1_j1gewqe wrote

Reply to Hype bubble by fortunum

LLMs only need scale to become ASI: that is, intelligent enough to design machines, write code, develop theories, and come up with insights better than any human. The LLM itself will be able to write the code for other, even more powerful ASIs with working memory.

LLMs aren't learning from a single person; they are reverse-engineering the thought patterns of all of humanity, including all the scientists and mathematicians who have ever written books or research papers, all the programmers who have ever posted code or helped solve a programming problem, all the poets, and even all the artists (Google already connected their LLM with their Imagen model to get an intelligence that's better at both tasks, and at tasks combining both).

It's the opposite. People don't understand how close we are to the singularity.

15

Ortus12 t1_j1ge8y2 wrote

Reply to comment by Gimbloy in Hype bubble by fortunum

Their body will be getting atomized into nanites by a god-like being as the last human survivors are hunted down, and they'll be like, "It's just copying what it read in a book. That's not even real creativity. Humans, they have real creativity."

41