justowen4

justowen4 t1_je8hj5f wrote

It’s also not true: even Stephen Wolfram, a legitimate genius in the technical sense of the word, has to rework the definition of “understand” to avoid applying it to ChatGPT. Understanding, like intelligence, has to be defined in terms of thresholds of geometric associations, because that’s what our brain does. And guess what: that’s what LLMs do too. It’s coordinates at the base layer. That doesn’t mean they’re conscious, but it’s definitely intelligence and understanding at the fundamental substrate. Redefining these words so that only humans can participate is just egotistical nonsense.
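
To make the “coordinates at the base layer” point concrete, here’s a minimal sketch (toy 3-d vectors invented for illustration, not real model weights) of word meanings as points in a geometric space, where “association” is just the angle between them:

```python
import math

# Toy embeddings -- real models learn hundreds or thousands of dimensions
# from data; these numbers are made up purely for illustration.
embeddings = {
    "king":  [0.9, 0.80, 0.10],
    "queen": [0.9, 0.75, 0.15],
    "apple": [0.1, 0.20, 0.90],
}

def cosine_similarity(a, b):
    """Angle-based association between two coordinate vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# "Understanding" as geometric association: related words sit close together.
print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # ~0.998
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # ~0.30
```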

1

justowen4 t1_iyktahl wrote

The economy is not the sum of human effort, it’s the volume of active capital. AI doesn’t deplete capital, but it does accelerate capitalism (the rich get richer, and the poor get richer). Work just evolves, and humans only need to keep investing in themselves to keep up in this marathon. In other words: don’t worry, lizard brain, you are safe.

2

justowen4 t1_ixakkip wrote

I’m doubtful we will get innovative outputs from the 2023 LLMs; I think better summarized analysis of existing knowledge will be the next step, assisting humans to innovate faster. We have been preparing for a good AI assistant for a long time, from Clippy to every Fortune 500 company’s frontline customer support and sales system. We are almost at the point where these systems will have the intelligence needed to be nearly as useful as trained human agents, and then it’ll pick up steam fast, as there are trillions of dollars in that general workflow.

1

justowen4 t1_iwxl8ps wrote

They have already scaled out; Twitter is refreshingly open about their tech and has historically been a big player in open-sourcing some of their concepts.

2

justowen4 t1_ivahguf wrote

There is a nearly limitless amount of innovation potential in biochemistry that AIs like AlphaFold are specifically good at. Ecological problems are biochemical problems, and the reason we can’t engineer bacteria and enzymes to rectify our polluted biological systems (from the boreal forest to gut microbiomes) is that traditional computing can’t run the complex simulations needed to find solutions. The next step is big pharma throwing billions into drug simulations via AI, and then we will have built the intelligence needed to determine ecological adjuncts to clean up polluted environments. Humans have tried, with mixed success, to adjust biological systems, but it will take a super-smart simulator to find solutions that don’t backfire.

7

justowen4 t1_iuw9c84 wrote

Perhaps your point could be further articulated by the idea that we are not maximizing economic capacity by using historical data directly; we need an AI that can factor bias into the equation. In other words, institutional racism is bad for predictive power, because the model will assume certain groups are simply unproductive, so we need an AI smart enough to recognize the dynamics of historical opportunity levels and virtuous cycles. I’m pretty sure this would not be hard for a decent AI to grasp. Interestingly, these AIs give tax breaks to the ultra-wealthy, which I am personally opposed to, but even with all the dynamics factored into maximum productivity, the truth might be that rich people are better at productivity… (I’m poor, btw)
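
A minimal sketch of what “factoring bias into the equation” might look like mechanically: reweighting historical records by opportunity so a group that was under-resourced isn’t scored as inherently unproductive. The group names and numbers here are entirely hypothetical, just to show the idea:

```python
# Hypothetical historical records: raw outcomes partly reflect opportunity,
# not just ability. All names and numbers are invented for illustration.
records = [
    {"group": "A", "outcome": 1.0, "opportunity": 1.0},
    {"group": "B", "outcome": 0.4, "opportunity": 0.3},  # under-resourced group
]

def opportunity_adjusted(record):
    """Estimate productivity as outcome normalized by historical opportunity,
    rather than taking raw historical outcomes at face value."""
    return record["outcome"] / record["opportunity"]

for r in records:
    print(r["group"], "raw:", r["outcome"], "adjusted:", round(opportunity_adjusted(r), 2))
# Group B's raw outcome looks low, but adjusted for opportunity it's ~1.33 --
# a model trained only on the raw numbers would systematically undervalue it.
```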

2

justowen4 t1_itt5mpf wrote

It’s simply going to be both scenarios in 2023, quantity and quality: synthetic data variations from existing corpora, with better training distributions (pseudo-sparsity), on optimized hardware. Maybe even some novel chips, like photonic or analog, later next year. It’s like CPUs 20 years ago: optimizations all around!
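
A minimal sketch of the “synthetic variations from existing corpora” idea: take a real sentence and generate perturbed copies. Token dropout and swaps here are a crude stand-in for heavier methods (paraphrase models, back-translation); everything below is illustrative, not any lab’s actual pipeline:

```python
import random

def synthetic_variations(sentence, n=3, drop_prob=0.15, seed=0):
    """Generate crude synthetic variants of a real training sentence by
    randomly dropping tokens and swapping neighbors."""
    rng = random.Random(seed)
    tokens = sentence.split()
    variants = []
    for _ in range(n):
        kept = [t for t in tokens if rng.random() > drop_prob]
        if len(kept) > 2:  # occasionally swap two neighbors
            i = rng.randrange(len(kept) - 1)
            kept[i], kept[i + 1] = kept[i + 1], kept[i]
        variants.append(" ".join(kept))
    return variants

corpus_line = "optimized hardware will make training cheaper and faster"
for v in synthetic_variations(corpus_line):
    print(v)
```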

6

justowen4 t1_itdur6e wrote

Just imagine what 2023 will bring us in AI advancements. It feels a lot like the semiconductor shrinking progress, but further along the exponential curve. I wouldn’t be surprised if GPT-4 is delayed so they can incorporate all the new ways to train. I think the next step will be big + efficient, and we’ll see if we can crack those remaining cognition tests that AI still falls short on.

6

justowen4 t1_it645n5 wrote

In case you missed it, LLMs surprised us by being able to scale beyond expectations. The underestimation was because LLMs came from the NLP world with simple word2vec-style word associations. In 2017, the groundbreaking “Attention Is All You Need” paper showed that the simple transformer architecture alone, given lots of GPU time, can outperform other model types. Why? Because it’s not an NLP word-association network anymore; it’s a layered context calculator that uses words as ingredients. They’re barely worth calling LLMs, unless you redefine language to be integral to human intelligence.
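
The “layered context calculator” is literally the attention step from that paper, stacked many layers deep. A minimal numpy sketch of scaled dot-product attention (random toy matrices standing in for learned projections):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V -- every token's
    output is a context-weighted mix of every token's value vector."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # pairwise token affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the context
    return weights @ V                               # blend values by affinity

# 4 tokens, 8-dim embeddings; in a real transformer Q/K/V come from
# learned projections of the token embeddings, repeated layer after layer.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(x, x, x)  # self-attention over the sequence
print(out.shape)  # (4, 8): each token re-expressed in terms of its context
```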

9

justowen4 t1_isobs10 wrote

Pff, it’s just China in that data. We aren’t going to see exponential robot usage until robots are smart enough to handle edge cases. It’s still the same assembly-line robots we have been using for generations.

3