
red75prime t1_jdtqsmj wrote

Reply to comment by Narootomoe in [D] GPT4 and coding problems by enryu42

Does GPT-4 have instant recall of all of its training data? I doubt it. It probably has some emergent structures akin to episodic memory, but it seems to have trouble distinguishing its memories from its hallucinations, so it's not a fully functional episodic memory (it lacks metamemory or something like that).

1

red75prime t1_j8hf7aw wrote

Reply to comment by Darustc4 in Altman vs. Yudkowsky outlook by kdun19ham

What is conjectured: nanobots eating everything.

What is happening: "Would... you... be... OK... to... get... an... answer... in... the... next... decade...?", as experimental processes overwhelm the available computational capacity and attempts to create a botnet fail because the network is monitored by similarly intelligent systems.

Sure, survival of relatively optimal processes with intelligent selection can give rise to agents, but those agents will be fairly limited by computational capacity in non-monitored environments (mostly private computers) and will be actively hunted and shut down in monitored environments (data centers).

3

red75prime t1_j7k7hh0 wrote

I've run it thru GPT for your reading pleasure: "I like to tell people that GPT-3 is more like writing an essay for English class (or the SAT) than a research paper for a history class. It cares about grammatical correctness -- in other words, readability -- rather than accuracy or truth. For the SAT, they used to say "you can make up quotes", because they're grading your writing, not your content."

1

red75prime t1_j6oyhjp wrote

> and that those symptoms persist at a higher rate than experienced by non-animal-owners even if the household goes animal-free.

Is it a causal link? Or is it because non-animal-owners are a mix of people with and without respiratory issues at approximately the population-average proportion, while people who have gone animal-free are more likely to have pre-existing respiratory issues?

5

red75prime t1_j5kx5ha wrote

Backpropagation is a tool that takes care of servicemen not getting the orders. There's the vanishing gradient problem affecting deep networks, but ReLUs and residual connections seem to take care of it just fine. Mitigating the problem in recurrent networks is harder, though.
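For illustration, a minimal PyTorch sketch (names and sizes are mine, not from the thread) of the residual trick: the identity shortcut gives the gradient a path around the nonlinearity, which is what keeps deep stacks trainable.

```python
import torch
import torch.nn as nn

# Hypothetical residual block computing f(x) + x. The "+ x" shortcut
# carries the gradient unattenuated past the ReLUs, mitigating the
# vanishing-gradient problem.
class ResidualBlock(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.fc1 = nn.Linear(dim, dim)
        self.fc2 = nn.Linear(dim, dim)
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.fc2(self.act(self.fc1(x))) + x)

# Even a 50-block stack still gets usable gradients at the bottom.
net = nn.Sequential(*[ResidualBlock(64) for _ in range(50)])
net(torch.randn(8, 64)).sum().backward()
```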

As for the brain... The brain architecture most likely is not the one and only architecture suitable for general intelligence. And taking into account that researchers get similar results when scaling up different architectures, there are quite a few of them.

6

red75prime t1_j5kjfq2 wrote

I expect AGI around 2030.

I think that the most likely reason (but still not probable enough to affect my estimate) for an extended AGI timeline is that the brain uses quantum acceleration for some parts of its functionality.

2

red75prime t1_j5khfna wrote

Reply to comment by Cr4zko in Steelmanning AI pessimists. by atomsinmove

You find a way to make it recurrent (keep state alongside the input buffer), add memory (working memory as part of that state, plus long-term memory), overcome catastrophic forgetting in online learning, and find efficient intrinsic motivations. Maybe that's enough.
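A minimal sketch of that loop, with everything hypothetical; the retrieval policy, the fix for catastrophic forgetting, and the intrinsic motivations are exactly the open parts:

```python
import torch
import torch.nn as nn

# Hypothetical agent: a recurrent core keeps working state alongside the
# input buffer; a naive long-term memory just stores state snapshots.
class RecurrentAgent(nn.Module):
    def __init__(self, dim: int = 32):
        super().__init__()
        self.core = nn.GRUCell(dim, dim)   # recurrence: (input, state) -> state
        self.state = torch.zeros(1, dim)   # working memory, part of the state
        self.long_term = []                # long-term memory: state snapshots

    def step(self, x: torch.Tensor) -> torch.Tensor:
        self.state = self.core(x, self.state)
        self.long_term.append(self.state.detach())  # write; retrieval is open
        return self.state

agent = RecurrentAgent()
for _ in range(10):
    agent.step(torch.randn(1, 32))
```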

6

red75prime t1_j4pogqn wrote

> <salient context> + <current symbol buffer> + neural network

That's an RNN (recurrent neural network). As far as I know, LSTM is still the state of the art for them, and it struggles with long-term dependencies.

[He checks papers]

It looks like the combination of a transformer and an LSTM does provide some benefits, but nothing groundbreaking yet.
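For concreteness, here's how the "<salient context> + <current symbol buffer>" loop looks as plain recurrence (a sketch with illustrative dimensions, not any specific paper's model):

```python
import torch
import torch.nn as nn

# The salient context lives in the LSTM's (h, c) state; symbols from the
# current buffer are fed in one at a time, updating that state.
cell = nn.LSTMCell(input_size=16, hidden_size=16)
h = torch.zeros(1, 16)
c = torch.zeros(1, 16)

buffer = torch.randn(20, 1, 16)  # stand-in for the current symbol buffer
for symbol in buffer:
    h, c = cell(symbol, (h, c))  # old symbols fade from (h, c) over time,
                                 # which is the long-term-dependency problem
```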

1

red75prime t1_j23f2k2 wrote

The number of qubits is not the only parameter of a quantum computer. Others are qubit lifetime, gate fidelity, gate operation time, and connectivity.

And, no. Even 1000 perfect qubits aren't anywhere near enough to break SHA-256 (which underlies Bitcoin's security).

With 1000 perfect qubits you could crack RSA-256, which has been crackable classically since the 1990s.
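Back-of-envelope, assuming the common textbook estimate of roughly 2n+3 logical qubits for Shor's algorithm on an n-bit RSA modulus (exact counts vary by circuit construction):

```python
# Rough logical-qubit estimate for Shor on an n-bit modulus (~2n+3).
def shor_qubits(modulus_bits: int) -> int:
    return 2 * modulus_bits + 3

print(shor_qubits(256))   # ~515: RSA-256 fits within 1000 perfect qubits
print(shor_qubits(2048))  # ~4099: real-world RSA sizes do not

# SHA-256 is different: Grover's search still needs on the order of
# 2**128 iterations, so preimage attacks stay infeasible regardless.
```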

9

red75prime t1_j1zdhax wrote

> what makes you think it will happen like that?

Cautiously optimistic elitism. AIs most likely will be controlled, or guided, or designed to desire, or whatever, not by the rich, politicians, or the people, but by engineers, scientists, and philosophers, who wouldn't be willing to recklessly change the course of humanity and would give us time to think and decide. And that could be a very long time.

0

red75prime t1_j1z8d16 wrote

Who will allow voting on an amendment that will make it possible for AIs to actively participate in government?

> As I said, "ALL jobs" is a different planet.

Yeah, and that planet may not lie in our future.

Anyway, I think that the future of Earth (if the human race is to survive) will be, in essence, a lush ancestral museum of humans doing their thing (culture, politics, art, fashion, philosophy, science), with AIs limited to supporting/protective roles, while posthumans do frontier exploration. Not everyone will be OK with brainwashing themselves into believing that VR is real, and AIs can't provide you with the genuine thrill of exploration if you are on Earth, due to light-speed limitations.

1

red75prime t1_j1yzeub wrote

AIs will not be doing political governance, though. But quenching outrage over inequality would be an interesting task for them to solve.

Of course, I don't know for certain, but I'm highly confident in the inertia of human society. Political activity, when you have no other meaningful things to do (if you aren't into VR), could be a driving force of societal change, but it would be opposed by existing political structures.

All in all, I see the future of Earth as a kind of ancestral museum with baseline humans doing their thing.

1

red75prime t1_j1yvlg2 wrote

> allow people to stay for only a few days at a time

I doubt that the political system will evolve that quickly. Privately owned property is here to stay, alongside state/community-owned property that you have to wait in a queue to live in. So connections to old money will still matter if you want to, say, enjoy a beachside stay this year and not next century.

1