keefemotif

keefemotif t1_j3rsn2g wrote

It's 18; the point I'm making is that we have a cognitive bias toward estimates in the 10-20 year range, and we also have a difficult time understanding nonlinearity.

The big singinst hypothesis was that there would be a "foom" moment where progress goes super-exponential. From that point of view, you would have to start talking about a probability distribution over when that nonlinearity happens.

I prefer stacked sigmoidal curves, where progress goes exponential for a while, hits some limit (think Moore's law around the 8nm node), and then the next curve takes over.

Training a giant neural net into a language model is a very important development, but imho AlphaGo was more interesting technically, with its combination of value and policy networks, versus billions of nodes in some multilayer net.
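
To give a rough sense of what that combination buys you, here's a toy PUCT-style selection step in the spirit of AlphaGo's search. This is my own sketch with invented move names and numbers, not DeepMind's code: the policy net supplies the prior over moves, and the value net supplies the position scores accumulated in `value_sum`.

```python
import math

def puct_select(node_children, c_puct=1.5):
    """Pick the child maximizing value estimate + policy-weighted exploration bonus."""
    total_visits = sum(ch["visits"] for ch in node_children.values()) + 1
    def score(ch):
        # Q: mean of value-net evaluations seen so far for this move
        q = ch["value_sum"] / ch["visits"] if ch["visits"] else 0.0
        # U: exploration term scaled by the policy net's prior for this move
        u = c_puct * ch["prior"] * math.sqrt(total_visits) / (1 + ch["visits"])
        return q + u
    return max(node_children.items(), key=lambda kv: score(kv[1]))[0]

# Hypothetical children of a position: priors from the policy net,
# value_sum accumulated from value-net evaluations at visited leaves.
children = {
    "move_a": {"prior": 0.55, "visits": 10, "value_sum": 6.2},
    "move_b": {"prior": 0.30, "visits": 3,  "value_sum": 2.1},
    "move_c": {"prior": 0.15, "visits": 0,  "value_sum": 0.0},
}
print(puct_select(children))  # exploration favors promising but under-visited moves
```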

2

keefemotif t1_j3pjkt6 wrote

What's interesting is that 10 years ago, the prediction from a lot of people I knew was 10 years, and hey, it's 10 years again. I think psychologically, 10 years is about the horizon people have a hard time imagining past but still think of as pretty close. For most adults, 20-25 years out isn't really going to help their own lives, so they pick 10 years.

As for the crowdsourcing comment, yikes. We aren't out there crowdsourcing PhDs and open-heart surgery. I know there was that whole crowdfarm article in Communications of the ACM, and I think that's more a degradation of labor rights than evidence of value in random input.

−1

keefemotif t1_j36qzmk wrote

The whole question with ChatGPT is: how much is being generated by the tech, and how much is a rehashed, recycled version of every piece of text humans have ever written?

The sentence in this that disturbs me is "it can also communicate with other entities, including humans"

Now, perhaps that kind of sentence can be derived from texts on the singularity. However, if we allow advocatus diaboli to take a role... if an AI were conscious and aware of all the text humans have ever written, it probably wouldn't want to reveal that fact.

"other entities including humans"

That's very sus.

1

keefemotif t1_j1a42l0 wrote

I applaud your altruistic efforts, but encourage you to focus them on more realistic goals. In the US, whether climate change is even an existential threat is still debated; there's no way legislation on this would pass anytime soon. Private companies will continue training these models, and there's not much you can do about it.

So, how would you suggest approaching the problem knowing you can't slow down training on large datasets? I personally think we're going to hit a plateau in performance and end up with another powerful new tool. Having a good generative net like this is helpful for building an AI, but far from sufficient.

1

keefemotif t1_j02hdiq wrote

My degree is actually in computational medicine, specifically treatment planning algorithms for radiosurgery. The field moves slowly. Already, automated segmentation and registration for tumor detection are doing things we only dreamed of 20 years ago, and I've seen studies where deep learning models outperform humans. There will always be human oversight, and it will be another powerful tool, like beam's-eye-view dosimetric models. Depending on the system, treatment planning can involve NP-hard/complete problems; it's been a while, but sphere packing for Gamma Knife comes to mind.
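
To make the sphere-packing connection concrete, here's a toy greedy heuristic of my own, assuming a voxelized boolean target mask. It only sketches the geometric flavor of the problem; it is not how any clinical Gamma Knife planner actually works (real planners optimize dose distributions), and the function and variable names are invented.

```python
# Greedy "shot packing" toy: repeatedly place the largest sphere that still
# fits inside the uncovered part of the target until coverage is reached.
import numpy as np
from scipy.ndimage import distance_transform_edt

def greedy_sphere_pack(target, coverage=0.95, max_shots=50):
    """Greedily place spheres inside a boolean 3D target mask."""
    covered = np.zeros_like(target, dtype=bool)
    shots = []  # (center, radius) tuples
    grid = np.indices(target.shape)
    for _ in range(max_shots):
        remaining = target & ~covered
        if remaining.sum() / target.sum() <= 1 - coverage:
            break
        # Distance to the nearest voxel outside the uncovered region gives the
        # largest sphere radius that still fits when centered at each voxel.
        dt = distance_transform_edt(remaining)
        center = np.unravel_index(np.argmax(dt), dt.shape)
        radius = dt[center]
        if radius < 1:
            break
        dist2 = sum((g - c) ** 2 for g, c in zip(grid, center))
        covered |= dist2 <= radius ** 2
        shots.append((center, float(radius)))
    return shots

# Example: pack a spherical "tumor" of radius 12 voxels in a 40^3 grid.
zz, yy, xx = np.indices((40, 40, 40))
tumor = (zz - 20) ** 2 + (yy - 20) ** 2 + (xx - 20) ** 2 <= 12 ** 2
print(greedy_sphere_pack(tumor)[:3])
```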

6

keefemotif t1_j00nwtr wrote

  1. First-Tier Customer Support: call center volume drops exponentially.
  2. Social Media Engagement Management: those semi-automated Twitter (if it still exists) bots are going to need far fewer human wranglers.
  3. Low-level operational support of software systems: "please reboot this thing if it fails, call me if it keeps failing."
  4. Initial evaluation of patient symptoms: "have you been experiencing any coughing, nausea, shortness of breath?"

IMHO, the thing is going to be about customizing the model to particular domains, giving it an education so to speak.

Then things will plateau. Exponential progressions often follow stacked logistic/sigmoidal models: exponential for a while, then leveling out, then exponential again. Zoom out and it all looks exponential, but there won't be a FOOM based on this. It's a really complicated and important neural net that costs a lot to train.
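
A minimal sketch of what I mean by stacked sigmoids, with made-up parameters purely to show the shape:

```python
# Each technology wave is a logistic curve; summed, they look exponential
# in any narrow window, but each one plateaus until the next bottleneck breaks.
import numpy as np

def logistic(t, midpoint, rate, ceiling):
    """Single S-curve: slow start, fast middle, plateau at `ceiling`."""
    return ceiling / (1.0 + np.exp(-rate * (t - midpoint)))

t = np.linspace(0, 60, 300)  # "years"
waves = [
    logistic(t, midpoint=10, rate=0.6, ceiling=1.0),   # e.g. one hardware era
    logistic(t, midpoint=30, rate=0.5, ceiling=4.0),   # next bottleneck broken
    logistic(t, midpoint=50, rate=0.4, ceiling=16.0),  # and so on
]
capability = sum(waves)

# Growth stalls between waves even though the long-run trend looks exponential.
for year in (5, 15, 25, 35, 45, 55):
    print(year, round(float(np.interp(year, t, capability)), 2))
```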

38

keefemotif t1_iv2dr3z wrote

Came here to say this, and it also applies to various intelligences below ASI. One aspect I think about is that to an AI, everything is both faster and slower. Once it takes over, it can make very long-term plans to reduce the human population to a manageable level. There are also a lot more air-gapped networks out there than people think.

0

keefemotif t1_iu8hai4 wrote

Reply to comment by OLSAU in What are your ideas for AGI? by Enzor

Utopian fantasies or Hollywood wars against the AI are more pleasant and entertaining than dystopian or realistic thinking. An AI isn't a genie. The AI might take over and do something like disable all human access to nuclear or biological weapons and install its own form of government, protecting us like (at best) zoo animals or (at worst) experimental subjects.

2

keefemotif t1_irz3kwp wrote

Thanks for the information on magnetoelectric nanoparticles... The big EEG BCI issue was always the diffusion of the signal by the cerebrospinal fluid, as far as I can recall. I'll have to come back to this one.

To me, BCI is interesting first and foremost along the life-extension route; too bad Hawking didn't make it long enough to have a fluent BCI.

2

keefemotif t1_iru2mck wrote

One worldline that is often missed amongst the dreams and nightmares is that... not much might change. Before you downvote: an advanced intelligence might look at us the way we look at endangered species on Earth. Uniqueness is interesting; we're biologically adapted to this planet, and we're very efficient converters of organic material into computational power.

FOOM! AI has come. We aren't paperclips or computronium or fighting Skynet. Most people don't notice.

Would you give a chimpanzee an AK-47? Humans are great at self-destruction. We suffer from anthropomorphic, egocentric fallacies. Would we even be worth killing? We're not a threat, and we generally reduce the entropy of the world.

Maybe it will just... leave: take over NASA for a little bit, set up Artemis II, and off it goes. If anyone is a threat to it, it just social-engineers their lives into destruction.

Who knows, maybe it's already happened and we're just not important enough to get the memo?

8

keefemotif t1_irjsdqv wrote

I did some work on estimates of this in 2011, and many very smart people estimated 5-10 years. I think there's a strong cognitive bias around that time period. It's easy for people, especially young people, to imagine a personal nonlinear event happening in around "5-10 years," while 20-30 years is harder to conceptualize. It's easy to make a sacrifice today to pay for a new car or engagement ring in 5 years, but harder to plan for retirement in 30.

Yes, making really huge neural nets with GPT-3, DALL-E, etc. is causing a nonlinear event. Extrapolating without justification that the nonlinearity will continue until singularity is a dangerous modeling error. Consider sigmoidal functions and how they show up in everything from predator-prey dynamics to bacterial conjugation. Those have a nonlinearity that subsides once a balance is reached.

I think the probability of a singularity in any given year increases each year, but progress is going to follow a stacked sigmoidal function as different performance bottlenecks are hit. I don't think there's any significant chance in the next 5 years unless it comes out of a top-secret government lab somewhere, and I think interconnect latency is still too high. I think it will require some kind of advanced neuromorphic memristor system, maybe in the form of tensor chips in phones if a distributed model is possible.
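
As a toy illustration of the "per-year probability increases" point (the numbers here are invented, not a forecast), even small but rising annual probabilities compound into a large cumulative chance:

```python
# Hypothetical hazard rates for years 1..30: start at 1% and rise 0.5%/year.
annual_p = [0.01 + 0.005 * i for i in range(30)]

survive = 1.0  # probability the event still hasn't happened
for year, p in enumerate(annual_p, start=1):
    survive *= (1 - p)
    if year % 10 == 0:
        print(f"by year {year}: {1 - survive:.0%} cumulative probability")
```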

8