
sideways t1_jedahz3 wrote

Yeah, I agree. We're actually communicating in natural English with artificial intelligences that can reason and create. It's literally the future I had been waiting for but that never seemed to arrive.

And yet... things are still early enough for goalposts to be moved. There's still enough gray area to think that this might not be it, that maybe it's just hype and that maybe life will continue with no more disruption than was caused by, say, the Internet.

The next phase shift happens when artificial systems start doing science and research more or less autonomously. That's the goal. And when that happens, what we're currently experiencing will seem like a lazy Sunday morning.


Stinky_the_Grump23 t1_jedjzqe wrote

I have very young kids and I'm already wondering what our discussions will be around when reminiscing about the days "before AI", like I used to ask my dad who grew up without cars or electricity in a village of illiterate farmers. The crazy thing is, we have no real idea where AI is taking us, good or bad. I don't think our future has ever looked so uncharted as it does right now.


Talkat t1_jeeml6h wrote

Man, that is going to be the defining moment. A world without electricity is hard for me to imagine. But honestly, the time before the smartphone, or even before the PC, isn't that unimaginable.

Like, sure, things were a bit different, but the fundamentals of life weren't.

But I can imagine for someone growing up with AI a time before AI (BAI not BCE... Lol) would be unimaginable. Like:

You had to do all the thinking yourself? You relied on other people who thought for themselves? You had people doing manual labor??

And of course things we can't even imagine now.


SeaBearsFoam t1_jeew9rh wrote

Yea, I have an 8yo myself, and as I try to plan for his future it's a bit unsettling to realize that I have no idea what the world will even be like when he graduates from high school. What kind of jobs will be left for him at that point? No one knows.


DinosaurHoax t1_jeftezu wrote

I have a 10 and 9 year old. One is good at writing, but is that something that will matter in a future job market? Five years ago I would have said unequivocally "yes". Now it may be irrelevant. Do you want your kids to be a lawyer or doctor anymore? Or is that just setting them up for displacement?


fnordstar t1_jegciez wrote

Lawyers, I won't shed a tear for.


theKaufMan t1_jegs2ve wrote

One never knows when they’ll need a good and trusted lawyer…


Bearman637 t1_jee5x12 wrote

Take me back to your dads day. Life was simpler.


FlatulistMaster t1_jee6zvr wrote

Yeah, no.

Ask anybody who is old, and they will tell you how much harder life was in practically every way possible.


Automatic_Paint9319 t1_jee8xla wrote

Really? Old people tend to talk about how the old days were better, in my experience.


Professional-Age5026 t1_jeeoeje wrote

I think that’s mostly nostalgia mixed with the fear of growing older in an increasingly changing society. Also, it’s easy to look at the past and only remember the good times when the problems you had then are no longer present in your life. It was simpler in a sense, but also harder in other ways. For certain groups of people it was objectively much worse.


Queue_Bit t1_jeehgof wrote

Haha yeah I bet they were better for your straight white male older relative


SlowCrates t1_jeeo0m2 wrote

And having something to show for your work. If you lived on a farm, you knew exactly what you were working for and you could see the fruits of your labor. If you had any other job, you still made enough money to take care of your family. Moms didn't need to work.

Farmers still have the same ethic. But everyone else has to work multiple jobs because the cost of living has grossly outpaced wages.

Unless you're in a certain tier in society, of course. But the middle class is fucked.


Durabys t1_jeeppuc wrote

They were better from the perspective of being young. When one is young, the bones don't hurt when moving, the mind races ahead instead of moving like frozen honey, one can actually understand new concepts rather than jumping in fright as the mind ricochets off anything that came after one's 40th birthday, and one visits the doctor only once a year for 10 minutes instead of spending half a year bedridden in a hospital.

They blame the age they currently live in, instead of blaming the circumstances: aging, death, and the uncaring cosmos.

Humans have an archetypal Stockholm syndrome for Death and Aging interwoven into every single piece of culture and article of faith we ever created, and no one who isn't a fanatical materialist acknowledges it.

And this trope goes way back to the dawn of the written word, with even Aristotle complaining in his final years about how everything sucks balls with the current youth. Yes. Because one gets old.


Stinky_the_Grump23 t1_jeg0pmx wrote

He misses it. But I think it's mostly because there was more human connection back then. You had a big family and you knew everyone in the village. Women were happier because raising kids was shared among ~10 adults. Men worked the fields with their teenage sons. I think it's the abundance of genuine human relationships that people miss from the old days. Life was difficult in other ways, though; it wasn't a good time to get sick or injured.


visarga t1_jedjvrn wrote

> The next phase shift happens when artificial systems start doing science and research more or less autonomously. That's the goal. And when that happens, what we're currently experiencing will seem like a lazy Sunday morning.

At CERN in Geneva they have 17,500 PhDs working on physics research. Each of them is at GPT-5 level or higher, and yet it takes years and huge investments to get one discovery out. Science requires testing in the real world, and that is slow and expensive. Even an AGI needs to use the same scientific method as people; it can't theorize without experimental validation. Including the world in your experimental loop slows down progress.

I keep reminding people of this because we see lots of magical thinking along the lines of "AGI to ASI in one day" that ignores the experimental validation steps necessary for that transition. Not even OpenAI researchers can guess what will happen before they start training; scaling laws are our best attempt, but they are very vague. They can't tell us what content is more useful, or how to improve a specific task. Experimental validation is needed at all levels of science.

Another good example: the COVID vaccine was ready in one week but took six months to validate. With all the doctors focusing on this one single question, it still took half a year while people were dying left and right. We can't predict complex systems in general; we really need experimental validation in the loop.


sideways t1_jedkfko wrote

You don't really know what level GPT-5 is going to be.

Regardless, you're right - we're not going to leapfrog right over the scientific method with AI. Experimentation and verification will be necessary.

But ask yourself how much things would accelerate if there were an essentially limitless army of postdocs capable of working tirelessly and drawing from a superhuman breadth of interdisciplinary research...


Desi___Gigachad t1_jedlzki wrote

What about simulating the real world very precisely and accurately?


SgathTriallair t1_jedn0bd wrote

We can't simulate the world without knowing the rules.

What we already do is guess at the rules, run a simulation to determine an outcome, then do the experiment for real to see if the outcome matches.

Where AI will excel is at coming up with experiments and building theories. Doing the actual experiments will still take just as long even if done by robots.


Kaining t1_jedt5v0 wrote

We're getting good at simulating only the parts we need, though. Look up what Dassault Systèmes can do for medical practitioners needing trial runs. And that's only where we are now.

I guess simulation will only go so far, and even an AGI will need real-world testing for everything quantum-related, at least for the moment. But that's the problem with progress: there's no way to know if what you think is the endgame of possibility really is.


SgathTriallair t1_jeemf44 wrote

You will always have to back up your simulations with experiments. It's like the AlphaFold program: it is extremely helpful at predicting the likely outcome of an experiment, and if it gets it wrong you can use those results to train it better, but you still have to perform the experiment.


WorldlyOperation1742 t1_jeeupap wrote

In the past, if you wanted to spin a cube in front of you, you needed an actual cube. At least you don't need to do that anymore. I think simulations will go a long way in the future.


SgathTriallair t1_jeg0noy wrote

Agreed, but they can only be trusted when the science they are based on is well understood. At the edges they become less helpful.


[deleted] t1_jedw0su wrote



Kaining t1_jee0c8g wrote

The only thing I know about it is that question: if it is made, is it enough to simulate a quantum environment and bypass the need for IRL testing? At the moment, I'd say no. But I don't have the knowledge or expertise to guess whether that could change.

However, what I can give a certain probability of being true is that physics at regular, non-quantum scales could probably be completely simulated at some point. We're kind of already doing it in very specific fields with AlphaFold and other AIs of the sort. Stack enough specialised simulation models and you have a simulation of everything.

So, uh, yes, quantum ASI maybe?


_dekappatated t1_jedpp0y wrote

I agree partially, but I'm sure we've barely scratched the surface of what is possible with the knowledge we already have and that scientists have already proven. They might come up with novel solutions that are more or less correct, don't need extensive real-world testing, and can change the world very quickly that way. There are mathematicians whose work is entirely theoretical and has never been applied to the real world; then suddenly a use is found for it 30-50 years later.


hold_my_fish t1_jedsysm wrote

This is a great point that science and engineering in the physical world take time for experiments. I'd add that the life sciences are especially slow this way.

That means there might be a strange period where the type of STEM you can do on a computer at modest computational cost (such as mathematics, the theory of any area, software engineering, etc.) moves at an incredible pace, while the observed impact in the physical world still isn't very large.

But an important caveat to keep in mind is that there's quite possibly opportunity to speed up experimental validation if the experiments are designed, run, and analyzed with superhuman ability. So we can't necessarily assume that, because some experimental procedure is slow now, it will remain equally slow when AI is applied.


Considion t1_jee4tya wrote

Additionally, if we do see an ASI, even if it is bound by a need for further physical testing and stops at, say, twice the intelligence of our best minds, it may be able to prove many things about the physical world from experiments that have already been done.

Because not only would it be generally quite intelligent, it would specifically, as a computer, be far better at combing through massive amounts of research papers looking for connections. It's not a sure thing, but it might find a link between a paper on the elasticity of bubble gum and one on the mating habits of fruit flies, and draw new conclusions we never would have thought to look for. Not a certainty by any means, but one avenue for faster advancement than we might expect.


amplex1337 t1_jedufrg wrote

So AI will come up with a way to extract resources from the environment automatically, transport them to facilities to refine and fabricate, and engineer and build the testing equipment, then perform the experiments en masse faster than they currently take? It seems like only a small part of the equation will be sped up, but it will be interesting to see if anything else changes right away. It will also be interesting to see what kind of usefulness these LLMs will have in uncharted territory. They are great so far with information humans have already learned and developed, but who knows if stacking transformer layers on an LLM will actually benefit invention and innovation, since you can't train on data that doesn't exist, RLHF is probably not going to help much, etc. Maybe I'm wrong; we will see!


Talkat t1_jeena5n wrote

I mean if a super AI made a COVID vaccine that worked, and provided thousands of pages of reports on it, and did some trials in mice and stuff, and I was at risk... Absolutely I'd take it even if the FDA or whatever didn't approve it.

I'd send money to them and get it in the mail and self administer if I had to.

My point is that perhaps, if an AI system can provide enough supporting evidence and a good enough product, it can operate outside of the existing medical system.

And it would likely create standards that exceed, and are more up to date than, current medical regulations.


sdmat t1_jegivhn wrote

There's also a huge opportunity to speed up scientific progress with better coordination and trust. So much of the effort that goes into the scientific method in practice is about working around human failures and self-interest. If we had demonstrably reliable, well-aligned AI (GPT-4 is not this), the overall process could be much more efficient, even if all it did was advise and review.


paulyivgotsomething t1_jeeipga wrote

CERN is an interesting case. They collect a tremendous amount of data, one petabyte per day. You have a lot of smart people looking for patterns in the data that reinforce or reject current thinking. Our experimental data in this case far outstrips the number of smart people we have looking at it; I would say we are in a world where the data we collect is under-analysed. A single cryo-electron microscope will produce 3 terabytes per day. There is stuff there we are not seeing that our neural networks will see: new relationships between particles, new protein/cell interactions. For now there will be a PhD in the process who takes those relationships and puts the theory to the test, but ten years from now, maybe not.


delphisucks t1_jedtsmr wrote

Well, I think AI can teach itself how to use a body in VR, like millions of years of training compressed into days. Then we mass-produce robots to do everything for us, including research. The only thing really needed is a basic and accurate physics simulation in VR to teach the robot AI.


ManHasJam t1_jeerj8a wrote

The robot physics simulations have been done, cool stuff


freebytes t1_jefeqxx wrote

Nvidia is training driverless cars in virtual environments in this manner.


fluffy_assassins t1_jef98m1 wrote

Where is all this processing power gonna come from? Isn't the quantity of chips kind of a hard wall?


Plus-Recording-8370 t1_jedum5j wrote

Point taken, but the experimental validation might look very different for AI than you'd think. For instance, instead of needing to run 100,000 generic tests, it might only need 100 extremely detailed tests.


jlowe212 t1_jee403u wrote

CERN produces an unfathomable amount of data that algorithms have to sift through. If an AI can find patterns in these enormous data sets that current algorithms can't, it could well lead to some relatively quick discoveries.

The problem is, it might not be physically possible or feasible to probe depths much farther than we've already probed. AGI can't do anything with data that we may never be able to even obtain.


Talkat t1_jeemwcc wrote

A recent thought: could you get AGI from simulation?

AlphaGo learnt the game by studying experts and how they played, but AlphaZero (the next version) taught itself entirely through self-play.

I wonder if it is possible for an AI to bootstrap itself the way AlphaZero did.


FlatulistMaster t1_jee78ml wrote

This is true for that type of experiment, but some things can be developed in hours if only information processing is involved.

Also, the prediction power of an ASI would be completely different from what humans are capable of, so it is fair to assume that unnecessary experiments will not be as plentiful.


hyphnos13 t1_jef52x7 wrote

To be fair validating effectiveness of a medical intervention requires accounting for variety in people and making sure that it is safe across the board.

You don't need a pool of hundreds of thousands of identical particles and a control pool of the same, or need them to roam about in the wild for months, to ethically answer a question in physics.

If we were willing to immunize and deliberately expose a large pool of people, the covid vaccines would have finished testing a lot faster.


hydraofwar t1_jef89y0 wrote

You're right, but I personally believe that all our stored scientific information still has a lot to say, things we humans haven't seen yet, and an AI is the kind of thing that could decipher it, and very quickly.

What could bypass experimental validation would be quantum computing used to simulate systems/environments.


OdahP t1_jedro26 wrote

The covid vaccines that didn't have any effect at all you mean?


Jalen_1227 t1_jegi4wo wrote

It’s funny how people downvoted you to hell but this is literally the truth


OdahP t1_jegncee wrote

which was covered by newspapers all around the world but then quickly swept under the rug


SlowCrates t1_jeendko wrote

That's actually a great analogy. The Internet in the early 90's was revolutionary. There was a sense of wonder and freedom to it, despite internet speeds and the available content being so limited. The commercial world hadn't yet hijacked it. It really was the digital wild west. By the late 90's, the Internet we know today had begun to grow its roots as modems became faster and broadband started to spring up. Sadly, the commercial aspect has drowned out everything else ever since.

I'm a little worried that we're going to see the same thing happen with AI. It seems "open" right now with limitless potential. But I'm worried that its algorithms will be increasingly fine-tuned to herd society toward certain products, services, and politics.


RiotNrrd2001 t1_jeeua7k wrote

>and that maybe life will continue with no more disruption than was caused by, say, the Internet.

Were you around to see the disruption caused by the internet? We used to buy newspapers and shop at stores, and those are just two of the things the internet completely changed. The internet was massively disruptive.

This promises to be even more so, probably by orders of magnitude. But it doesn't mean we'll all start wearing silver mylar and get supersized foreheads. When you look out the window, you'll probably see the same things you're seeing now, at least for the time being. The sudden appearance of a superintelligence isn't going to reconfigure our physical reality immediately, or even within the next decade or two. It will reconfigure what happens inside that reality, but even that won't happen overnight. For quite some time things will still look pretty similar. ASI will have massive consequences, but for the majority of humanity it won't be a switch being thrown from OFF to ON.


milsatr t1_jefkqab wrote

I keep thinking this is a lot of hype, and I hope it doesn't disappoint like the hype surrounding the Segway lol. As cool as that was, it was a major letdown. I think we are more than ready to unleash ASI on some big human problems.


Hunter62610 t1_jeglhpw wrote

The internet was a massively disruptive technology. Normal is all but over, though I do think some things won't really change. Same shit, new packaging.