Belostoma

Belostoma t1_je1m50i wrote

It's not happening yet. There's accelerating growth due to increased interest and understanding from humans seeing what this stuff can do, but the exponential growth associated with a true singularity will come from the AI being capable of improving itself much better than humans can. The AI improves itself, gets better at improving itself due to the improvements, improves itself even more, and so on recursively.

The capability of AI in computer programming right now is impressive, but it's not at the level of understanding really complex programs (like itself) well enough to debug them, let alone reason about how to improve them. AI is scary good at one-off programming puzzles that are easy to specify fully and briefly, but that's a very different task from understanding how all the parts of a large, complex program work together and coming up with novel ideas to rearrange them to serve some over-arching goal.

I think some of the recursive self-improvement will begin with some combination of human and machine intelligence, but right now the AI is really just a time-saver to make human coders somewhat more efficient, rather than something that greatly expands their capabilities.

2

Belostoma t1_j9n79vk wrote

A hypothesis that could explain literally anything explains nothing. No matter how crazy the universe is, it’s always within the capabilities of an omnipotent being because that’s the definition of omnipotent. You don’t even need a being; you can just say it’s all magic. But you can say that about anything you don’t yet understand no matter what it is. There is no possible way the universe could be that you couldn’t explain by saying it’s magic. That makes the explanation meaningless.

1

Belostoma t1_j9e0siw wrote

No. I'm really interested to see what happens and whether the singularity really plays out in the wild exponential way this sub is expecting. I'm both worried and excited about what it will mean for society. But I also believe bug collecting is a pretty fun way to pass the time while we're waiting, and I'm sure it will still be interesting afterward. Really it's crazy how many fascinating adaptations the little buggers have and how much biodiversity there is. Get a decent microscope and have your mind blown.

Or take up literally any other form of entertainment. You are unfathomably lucky to be conscious in the first place, and it's only for a while, so you might as well enjoy it. We don't need any deeper meaning than finding things we enjoy; as the highest form of consciousness in the known Universe, we humans are the primary arbiters of meaning. Everything that happens is only meaningful to the extent that it means something to some conscious creature (that's us). Rather than pining for a higher consciousness to give things meaning, just do it yourself. And if you enjoy thinking about the singularity, cool. Keep that on your list of meaningful things. Just don't rely on it as the only thing that counts.

4

Belostoma t1_j6gewjj wrote

I know AGI can. I'm skeptical that it's only 6 years out, let alone only 6 years from being so widespread that any employer can hire it at will. My main point is that non-general AI can't even begin to compete with human programmers in the complex jobs most of them actually spend most of their time on. I think humans who can leverage non-general AI to make themselves more productive will be the best programmers for a pretty long time.

1

Belostoma t1_j6erizs wrote

You'll be fine.

AI is not going to be able to really replace programmers for a long time, and the people saying it will just don't have much experience with programming in the real world. It's going to take widespread AGI before programmers are obsolete, and it's hard to predict when that will happen, but it's not 2 years.

Learning a programming language is pretty easy, and AI can do that. The hard, time-consuming part is figuring out what specifically you want the computer to do in the first place. Once you do that, expressing it precisely in terms of a programming language is no more difficult (and often easier) than expressing it in natural language. The other huge part of the job is debugging or optimizing code, which requires some deep understanding of how the many pieces of a complex system work together. This is an enormous leap beyond the capabilities of the AI tech that is currently impressing everyone. It's not impossible for theoretical future AI, but it's not just an incremental improvement over models like GPT. It's a whole different kind of thing.

I expect AI to provide increasingly impressive autocomplete features to help make programmers more productive. That's still exciting stuff. It might reduce the market for programmers a little bit by making each one more productive, so people don't need to hire as many of them. But it won't replace them until we have true AGI that can actually reason and understand things.

To be clear, I'm not dissing what OpenAI is doing here. I'm excited for it. But people on this sub especially are badly misjudging some of its implications.

1

Belostoma t1_j5g66ge wrote

The problem is that the reasons ChatGPT gives dumb answers are buried deep in the opaque vagaries of its algorithm. Students need to learn that ChatGPT can simply screw up, but checking its output amounts to an exercise in rote fact-checking.

What they really need to learn about critical thinking are the myriad ways humans can mislead themselves and others, on purpose or by accident, from the tricks of malicious grifters to subtle biases we all have. ChatGPT isn't great for that, and it unfortunately might discourage the use of tools like long-form essays that are better for learning critical thinking and other essential skills like structuring ideas.

13

Belostoma t1_j3kte6l wrote

>won't the time taken to make scientific discoveries - or really discoveries of any kind - begin to tend to zero?

No.

Many discoveries are only enabled by somebody being in the right place at the right time to make an important observation, or somebody creatively getting interested in a question nobody else thought to ask. These serendipitous moments happen unpredictably. Many questions can't be answered without extensive data collection specifically to answer them, and that consumes time or in-person resources an AI might not have at its disposal, unless we want it to blanket the planet in drones it can use to study everything at once.

For example, I study salmon populations. Many of the questions we ask rely on knowing how many salmon of a particular kind came back from the ocean each year. We basically get one data point per year, and we can't say much from one data point. If we're carefully monitoring a population, which is expensive and time-consuming, it still takes decades to build up a dataset with a large enough sample size to draw useful conclusions. There is no way to speed that up.
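
To put rough numbers on that (all parameters made up purely for illustration), here's the kind of back-of-the-envelope power analysis this involves: simulate noisy annual counts with a known decline built in, and ask how often a simple trend test actually catches it.

```python
# Rough sketch (hypothetical numbers): how many annual data points does it
# take to reliably detect a ~3%/year decline in returns when counts vary
# by roughly 30% from year to year?
import numpy as np

rng = np.random.default_rng(42)

def detects_decline(n_years, trend=-0.03, noise_sd=0.30, n_sims=2000):
    """Fraction of simulated datasets in which a linear regression on
    log-counts finds a clearly negative slope (t-statistic < -2)."""
    years = np.arange(n_years)
    sxx = np.sum((years - years.mean()) ** 2)
    hits = 0
    for _ in range(n_sims):
        # one noisy log-count per year: true trend plus random variation
        log_counts = trend * years + rng.normal(0, noise_sd, n_years)
        slope, intercept = np.polyfit(years, log_counts, 1)
        resid = log_counts - (slope * years + intercept)
        se = np.sqrt(np.sum(resid ** 2) / (n_years - 2) / sxx)
        if slope / se < -2.0:
            hits += 1
    return hits / n_sims

for n in (5, 10, 20, 30):
    print(f"{n} years of data -> power ~ {detects_decline(n):.2f}")
```

With numbers like these, five or ten years of data catch the decline only a small fraction of the time; you need a couple of decades before the test becomes reliable. That's the one-data-point-per-year problem in a nutshell, and no amount of intelligence shortcuts it.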

Likewise, much of the basic research in medical science involves testing things in other organisms, from cell lines to nematodes to fruit flies to mice. Many testing methods rely on random chance, with biologists breeding tons of these critters until a particular genetic variant they want arises, and they can then use it to test some hypothesis. The time to run experiments is generally based on the life cycles of the study organisms, which aren't instantaneous.

Also, there is no chance that "everything is discovered," ever, because there are constantly new things to discover. There are new things happening that we want to understand. In my field, there might be a worrisome fluctuation in a salmon population one year, and the only way to find its cause is to go out and collect some time-consuming data. It can't be deduced from first principles or past data because something new and unusual is happening. New and unusual things are happening everywhere, all the time.

I expect AGI/ASI to be a transformative partner for scientists, capable of both facilitating and making great discoveries. But some of the predictions for ASI are just over-the-top and don't reflect how knowledge is really gained. No amount of digital genius can produce data that haven't been collected yet, nor draw correct conclusions from inadequate sample sizes, nor collect independent samples faster than the study subjects generate them.

Also, regarding the positive feedback loop of AI development and exponential growth / recursive self-improvement, know that pretty much nothing grows exponentially forever. The faster it grows, the faster it reaches asymptotes where progress is limited by something different from whatever was limiting it before.
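
A toy comparison makes the point (parameters invented purely for illustration): logistic growth is indistinguishable from exponential growth early on, then flattens as it approaches its ceiling.

```python
# Toy comparison of exponential vs. logistic growth (made-up parameters).
# Both curves look the same at first; the logistic one saturates at K.
import math

K, r, x0 = 100.0, 0.5, 1.0  # ceiling, growth rate, starting value

for t in range(0, 25, 4):
    exp_growth = x0 * math.exp(r * t)
    logistic = K / (1 + ((K - x0) / x0) * math.exp(-r * t))
    print(f"t={t:2d}  exponential={exp_growth:12.1f}  logistic={logistic:6.1f}")
```

The ceiling K here just stands in for whatever the next binding constraint turns out to be: data, energy, hardware, ideas.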

5

Belostoma t1_j25nki0 wrote

>Do you know how easy it is to be added on to someone else's research when you're popular? May I ask what you know about scientific publication and how it works?

Why are you asking these stupid questions of /u/tzaeru, when I already showed you that Sagan had many top-notch, original research publications that were both first-authored and published before he was famous? He wasn't just getting added onto somebody else's work. I don't see what you're trying to accomplish with this bullshit.

1

Belostoma t1_j25n46c wrote

>I am not wrong. You don't know what you're talking about. But I know you think you do. I'm a research scientist and I lived through his life.

No, you're really very wrong here. Anyone who wants to follow this exchange beyond a "he-said she-said" can browse deeply through Sagan's record on Google Scholar, read about his roles in NASA's planetary science missions, and see for themselves.

However, as it happens, I'm also a research scientist, probably with more experience than you. I'm guessing you're still just a grad student or postdoc with an excessive ego. I'm sure I have more claim to have "lived through his life" than you. Sagan inspired me to go to Cornell and major in astronomy, where I worked as an undergrad in his old office (albeit only when meeting with my supervisor, whose office it was). I later did undergraduate research in radio astronomy for Yervant Terzian, the incredibly kind and brilliant man who hired Sagan at Cornell and held the same professorship Sagan did (David C. Duncan Professor of Physical Sciences) at the time I worked for him.

I ended up switching to a different field that better fits the kind of day-to-day work I like to do (quantitative ecology), but I am highly familiar with Sagan's legacy and personally close to it. I know you're wrong, and I'm qualified to know.

1

Belostoma t1_j25kjax wrote

>It can also explain the reason behind it's answers. It can be confidently incorrect sometimes, but ChatGPT is for sure more than just "predicting what word should come next".

But the explanation of the reasoning is just a part of what should come next, based on other people having explained reasoning similarly in similar contexts. It's still basically a pattern-matching autocomplete, just an insanely good one.
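
Here's a toy version of the loop I mean (a hand-written bigram table standing in for a neural network, which is nothing like GPT's actual internals, but the generate-the-next-token structure is the same):

```python
# Toy next-word autocomplete (illustrative only; real models use a huge
# neural network over subword tokens, not a tiny bigram table like this).
import random

# Next-word probabilities given the previous word, the sort of thing
# you could estimate by counting word pairs in training text.
bigrams = {
    "the":    {"reason": 0.6, "answer": 0.4},
    "reason": {"is": 1.0},
    "is":     {"that": 0.7, "simple": 0.3},
    "that":   {"patterns": 1.0},
}

def generate(context, max_words=5):
    words = context.split()
    for _ in range(max_words):
        options = bigrams.get(words[-1])
        if not options:
            break  # no known continuation; stop generating
        # sample the next word in proportion to its estimated probability
        nxt = random.choices(list(options), weights=list(options.values()))[0]
        words.append(nxt)
    return " ".join(words)

print(generate("the"))  # e.g. "the reason is that patterns"
```

Scale that table up to billions of learned weights trained on most of the internet, and you get something that can "explain its reasoning" convincingly, with the explanation itself being nothing more than more predicted text.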

3

Belostoma t1_j256ays wrote

>I imagine a research orientated GPT could keep going deeper and deeper until it hits the current limit of our understanding about a particular subject.

The problem is that you're kind of running up against the limits of what the tech behind ChatGPT can do. It doesn't understand anything it's saying; it's just good at predicting what word should come next when it has lots of training data to go on. When you start talking about technical details that have maybe only been addressed in a few scientific publications, it takes some understanding of the meaning behind the words to assemble those details into a coherent summary; it can't be done based on language patterns alone. Even something like knowing which details are extraneous and which belong in a summary requires a large sample size to see which pieces of language are common to many sources and which are specific to one document. There's not enough training data for the deep dives you seek.

>Where I think a research assistant GPT would really shine is by being able to absorb all of these independent data points and instantly making the connections.

I think this is a great possibility for a research assistant AI eventually, but it will probably require advances in a different type of AI than the language models ChatGPT is using.

6

Belostoma t1_j253x71 wrote

>Decent scientist, but no high level research.

Wrong. You could arguably say that of Tyson, but certainly not Sagan. He had numerous highly cited, lead-authored publications in top journals, for example (not an exhaustive list):

https://www.science.org/doi/abs/10.1126/science.177.4043.52

https://www.science.org/doi/abs/10.1126/science.276.5316.1217

https://www.science.org/doi/abs/10.1126/science.206.4425.1363

https://www.sciencedirect.com/science/article/abs/pii/0022519373902166

https://www.science.org/doi/abs/10.1126/science.173.3995.417

https://www.sciencedirect.com/science/article/pii/0019103584900186?casa_token=-1h0Q6J_StsAAAAA:ppOvkzDw8pZatwQbK5geuP7lFRklAc7Q62fOgs1Hpz6agXTxNSNSFQ22fDyUoZdaRA4WuyuTjg

Even if he weren't remotely famous and hadn't written any popular books, he would easily be among the top 1% of scientists in his field (planetary science) by traditional academic metrics. On top of that publication record, he was the director of the Laboratory for Planetary Studies at Cornell and one of the principal scientists on most of the major NASA planetary science missions of his day. You don't even need to add his incredibly important public-facing work to rank him among the most influential planetary scientists ever.

It sounds like you're trying to impress someone here by acting unimpressed with someone everyone else rightly idolizes. It's not working.

4

Belostoma t1_j24yu83 wrote

New research hyper-relevant to mine is likely to cite at least one of my papers, so I already get an alert. And ChatGPT wouldn't write a better summary of it than the authors did in the abstract. So I don't see the specific case you describe being especially useful.

There are many times when my research takes me into a new sub-field for just one or two questions ancillary to my own work, and I could see a more advanced, research-oriented form of ChatGPT (especially one that can cite and quote its sources) being potentially useful for the early stages of exploring a new idea and an unfamiliar body of work.

12

Belostoma t1_j24vbe8 wrote

Exponential growth in anything is rarely sustained indefinitely. It comes in bursts.

I expect the tech behind ChatGPT to start to hit a ceiling before too long. Its job is basically to coherently summarize its training data relevant to the prompt, and it's already super impressive on prompts for which adequate training data exist. It will cause disruptive changes in parts of society built around the assumption that a human wrote something. 2023 will probably bring cooler art and more believable writing as things like ChatGPT and Dall-E are refined.

However, there isn't really a smooth path for incremental improvement from this to tasks that require understanding and thought, making logical deductions from extremely sparse training data with an understanding of their credibility and connections, and solving novel problems. I'm not saying AI won't crack that nut eventually, but it's a different and harder problem that will require new breakthroughs rather than incremental improvements.

I expect exponential growth in that area whenever AI gets good enough to really help AI researchers make the next breakthroughs and start a positive feedback loop of recursive self-improvement. But it's not clear that ChatGPT is the start of that cycle. Humans might leverage it to gain some efficiency in their work, but that's more of a linear improvement than exponential.

2

Belostoma t1_iv8xftx wrote

Yeah, nematodes are easier. I'm a biologist in a different field, but I have coauthored one publication in that field because I did the statistical analysis for them. I'm not terribly impressed with the anti-aging field overall. There's a lot of "let's just try tons of different chemicals and see which one causes some marginal improvement in a model organism." It strikes me as kind of a scattergun approach searching for a molecular fountain of youth that probably doesn't exist. I think they are on track to finding things that help people age more gracefully for a while and get a few extra years of good life. That's a worthy goal, but I just don't anticipate an earth-shattering "now we will live forever" type advance on the short timeline some futurists are hyping. The state of the field is less "these nanobots will reprogram your cells" and more "blueberries seem to be good for you."

25

Belostoma t1_iueotqs wrote

Add sea beans to the list -- they're delicious!

I'm not sure your idea works overall, though, rather than just trading one problem for another. I'm sure there are places it could be done helpfully, but where would we find the physical space to grow enough saltwater crops to replace a large proportion of freshwater-driven industrial agriculture? Coastal and especially estuary habitat is fragile and critical to begin with, and the last thing we want to do is convert any of it from biodiverse native vegetation to a monoculture of human food, even if it helps slightly with the water problem.

Maybe this would be a good application for futuristic vertical farming tech in high-rises near the sea. I don't know which crops might be amenable to that.

3