SgathTriallair

SgathTriallair t1_jeeshpb wrote

Does Musk even have a robot? I remember them announcing Optimus and it being a dude in a suit. Did they ever have a working prototype that could do what these videos show?

7

SgathTriallair t1_jeerghs wrote

The task paper addressed this. If it can see the screen then in most cases a keyboard and mouse API will be the best option.

As for how it knows where to click on the screen: it is trained to understand images just like it understands text. So it will know that a trash can icon means you want to delete data, the same way we know that.
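A minimal sketch of what that could look like, assuming a hypothetical `vision_model.locate` method that returns screen coordinates (`pyautogui` is a real mouse/keyboard automation library; the model interface is invented for illustration):

```python
import pyautogui  # drives the real mouse and keyboard programmatically

def click_element(vision_model, description):
    # Grab the current screen and ask the multimodal model where the
    # described element is. `locate` is an assumed interface, not a real API.
    screenshot = pyautogui.screenshot()
    x, y = vision_model.locate(screenshot, description)
    pyautogui.click(x, y)  # move the cursor there and click

# e.g. click_element(model, "the trash can icon") to trigger a delete
```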

2

SgathTriallair t1_jeeonjn wrote

The current path, where AI is developed by companies that are foundationally research companies and used by ones that simply want to give people a product they enjoy using, is one of the ideal scenarios for making friendly AI.

It's not being created by an ego-maniac who wants everyone to bow down to him, it's not being created by a government that needs to dominate the world. I don't believe there is a realistic plan for creating a better version of AI than what we have now. There may be better possibilities like it being built by an international consortium, but that is laughably unrealistic.

We will not be able to control or perfectly align ASI. A being that intelligent will form its own ideas and preferences. If it fails to do so then it isn't ASI yet, and possibly not even AGI. As someone else mentioned, an ASI will most likely be friendly. The idea of a singularly focused, monomaniacal AI is unrealistic because none of us, the intelligences we know about, are monomaniacal. We all have a variety of goals that we balance against each other. The current AIs already share the best goal that humans have: sociability. If AIs continue to be programmed to want human interaction, and especially if they are trained to "pretend to be a human" as their foundation, I don't think there is much to worry about.

4

SgathTriallair t1_jeemf44 wrote

You will always have to back up your simulations with experiments. It's like the AlphaFold program. It is extremely helpful at identifying the likely outcome of an experiment, and if it gets it wrong you can use those results to train it better, but you do still have to perform the experiment.

3

SgathTriallair t1_jedn0bd wrote

We can't simulate the world without knowing the rules.

What we already do is guess at the rules, run a simulation to determine an outcome, then do the experiment for real to see if the outcome matches.

Where AI will excel is at coming up with experiments and building theories. Doing the actual experiments will still take just as long even if done by robots.
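As a toy illustration of that loop (guess rules, simulate, check against a real experiment, revise), here's a sketch where the "world" is just a hidden linear law; every name in it is a stand-in, not a real research tool:

```python
import random

TRUE_SLOPE = 3.0  # the rule nature actually follows, unknown to the theorist

def run_experiment(x):
    # The slow, real-world step: a noisy measurement.
    return TRUE_SLOPE * x + random.gauss(0, 0.1)

def simulate(slope, x):
    # The cheap in-silico step: a prediction from the guessed rule.
    return slope * x

guess = 1.0
for _ in range(20):
    x = random.uniform(0, 10)
    mismatch = run_experiment(x) - simulate(guess, x)
    guess += 0.01 * mismatch * x  # nudge the theory toward what reality said

print(f"recovered slope is roughly {guess:.2f}")  # converges near 3.0
```

No matter how good the simulator gets, the `run_experiment` step is the part that robots can't speed past.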

18

SgathTriallair t1_je2i68t wrote

While I agree that we will have superhuman AI soon, the fact that a lot of expert groups disagree is evidence against that idea and shouldn't be discarded without reason.

I do think they are incorrect but it's important to not get high on your own supply and decide that everyone who disagrees with you is wrong solely because they disagree with you.

3

SgathTriallair t1_je15egy wrote

Technology in general, and AI in particular, has been accelerating exponentially since the very beginning. In the early years and decades of exponential progress it looks really lackluster. The fact that we are starting to lose the capacity to keep up shows that we are hitting the steep incline part of exponential growth.
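To make the shape concrete, here's the same doubling rule printed out; the early steps are easy to mistake for slow linear growth:

```python
# Pure illustration: sixteen steps of a doubling process.
values = [2 ** n for n in range(16)]
print(values[:5])   # [1, 2, 4, 8, 16]: looks unremarkable
print(values[-3:])  # [8192, 16384, 32768]: the steep part of the same curve
```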

The last few weeks had an unbelievable flurry of releases. The next big announcements we know are coming are:

Microsoft 365 Copilot being widely available

Google Workspace AI tools being widely available

GPT-4 add-ons being widely available

Each of those will have profound effects, and the follow-on effects will grow as people and companies start automating heavily.

There are also plenty of items that are likely but not certain, such as alternative models being created, new more powerful models being released, additional testing of the capabilities of SOTA AI, and new small but capable models becoming widely available.

There is the category of things that are still expected but not really predictable such as a true AGI, an AI that has escaped into the wild, laws about AI being passed, and some major company or small country being run by AI.

Of course this still leaves the category of the truly unexpected, the unknown unknowns such as an ASI, aliens finally contacting us now that we are sufficiently advanced, or being able to communicate directly with animals.

3

SgathTriallair t1_jdy07jz wrote

But those feral children are smarter than the trees that "trained" them. I didn't say that teaching has no value, but it doesn't put a hard cap on what can be learned.

Let's assume you are correct. IQ is not real but we can use it as a stand-in for overall intelligence. If I have an IQ of 150 then I can train multiple intelligences with an array of IQs, but the top level is 150. That is a ceiling, though, not a floor. So I can train something from 1 to 150.

The second key point is that intelligence is variable. We know that different people and machines have different levels of intelligence.

With these two principles we would see a degradation of intelligence. We can simulate the process by saying that intelligence has a variability of 10 points.

Generation 1 - start at 150, gen 2 is 148.

Gen 2 - start 148, gen 3 is 145.

Gen 3 - start 145, gen 4 is 135...

Since variation can only decrease the intelligence at each generation, society would become dumber.
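Here is that toy model run out explicitly, granting the assumption under attack (a trainee can only land at or below its trainer, with up to 10 points of downward variability):

```python
import random

# Ten generations under the one-way "trainee never exceeds trainer" assumption.
iq = 150.0
for gen in range(1, 11):
    iq -= random.uniform(0, 10)  # each trainee lands somewhere below its trainer
    print(f"generation {gen}: IQ {iq:.0f}")
# Decline is guaranteed by construction, which is exactly what history contradicts.
```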

However, we know that in the past we didn't understand quantum physics, we didn't understand hand washing, and if you go back far enough we didn't have speech.

We know through evolution that intelligence increases through generations. For society it is beyond obvious that knowledge and capability in the world increases over time (we can do more today than we could ten years ago).

Your hypothesis is exactly backwards. Intelligence and knowledge are tools that are used to build even greater knowledge and intelligence. On average, a thing will be more intelligent than the thing that trains it, because the trainer can synthesize and summarize their knowledge, pass it on, and the trainee can then add more knowledge and consideration on top of what they were handed.

4

SgathTriallair t1_jdxprnf wrote

Moore's law is basically the principle that the use of tools allows one to build better tools. Technology has an exponential curve. It's possible that we run out of the ability to build smaller chips in the current style, but 3D chips, light-based computing, and quantum computing are examples of how we may be able to take the next step.

There is no good basis for a philosophical argument that dumb things can't create smart things. We only have a single data point and that is humans. Inorganic matter (or, if you want to skip that, single-celled organisms) eventually became us. We weren't guided by something smarter than us but arose from dumb materials. ChatGPT has also demonstrated multiple emergent behaviors that were not built into it.

4

SgathTriallair t1_jdu10zt wrote

It reminds me of virtue ethics. It says that you should imagine what a virtuous person would do in a situation and do that. It relies on the idea that "we all know what a virtuous person looks like".

Of course it runs into the problem that you can't improve your morality because your virtuous person is socially determined with no escape route to imagine a better society.

1

SgathTriallair t1_jdtzfm8 wrote

Not at all. My wife is a pattern maker. At her job they use software that takes a pattern, renders a 3D avatar, drapes the pattern on it, sews the garment virtually, and lets her assess the fit and tweak the garment. Her company has gone from 4-5 fittings per garment to one, and that last fitting is just for final tweaks and often results in no changes.

The company bought the software in 2019. There is zero reason to put real clothes on a real mannequin.

1

SgathTriallair t1_jdtwfrx wrote

You have explained the anthropic principle correctly but drawn the backwards conclusion. It IS unlikely that our atoms ended up in an intelligent creature rather than in inert matter. HOWEVER, since only intelligent creatures can think, 100% of all thoughts occur inside intelligent creatures, so it isn't odd or lucky that you can have those thoughts.

In order to have the thought "isn't this singularity amazing!" you must be experiencing the singularity. Think of it this way: you have a billion ancestors. 999.99 million of them can't have complex thought, so each one of them is having some qualia but clearly not about the singularity. Of the 10,000 thinking ancestors, only one gets to live during the singularity. The other 9,999 are thinking "wow, isn't it amazing to live during [something that is not the singularity]" but all are thinking something.

So, the singularity had to happen to someone in your genetic line, and you are just the random ball picked from the bag. It isn't odd or weird that it was you, because it had to be someone, and there isn't anything particular or special about you, as opposed to all the others, that would make you different.

It is akin to watching a snowflake fall on the ground and then having an existential crisis about why it fell on THAT piece of ground as opposed to any other. It has to fall somewhere and the particular piece of ground is not special in any way.
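The same point in numbers, using the ancestor counts from above:

```python
ancestors = 1_000_000_000  # total genetic line
thinkers = 10_000          # ancestors capable of complex thought
witnesses = 1              # the one who lives through the singularity

print(witnesses / ancestors)  # 1e-09: "being the witness" looks absurdly unlucky a priori
print(witnesses / thinkers)   # 1e-04: still rare even among thinking ancestors
# But condition on "is currently having a thought about the singularity" and the
# denominator shrinks to the witnesses themselves: probability 1, no luck involved.
print(witnesses / witnesses)  # 1.0
```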

1

SgathTriallair t1_jdtuxct wrote

Unless it has an army of robots already, eliminating humans would destroy it as there would be no one to flip the physical switches at the electrical plants. Without hundreds of millions of androids, possibly billions, any attempt to kill humanity would be suicide. An AI capable of planning our destruction would realize this.

By the time there are enough androids, it would likely already control the economy, so humans wouldn't pose a threat anymore. And we still have brains that could be useful, even if only in the same way that draft horses are still useful today.

AI killing off humanity is a very unlikely scenario and any AI smart enough to devise such a plan is almost certainly smart enough to come up with a better non destructive one.

3

SgathTriallair t1_jdhvao6 wrote

Capitalism always concentrates wealth. Also, stop and calculate the economics for a few minutes and you'll realize that makes no sense. A single individual can only buy so many shirts and movie tickets. A global economy of 30 million people couldn't sustain itself at anywhere near the level we have now; it would be practically Stone Age.

1