visarga

visarga t1_izvv9xi wrote

I'd be interested in knowing, too. I want to parse the HTML of a page and identify what actions are possible, such as modifying text in an input or clicking a button. But web pages often run over 30K tokens, so there's no way to fit them in a context window. HTML can be extremely verbose.
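One way to cut a page down to size is to keep only the actionable elements. A minimal sketch with Python's stdlib parser; the tag and attribute whitelist here is my own guess at what matters, and a real pipeline would also keep labels and surrounding text for grounding:

```python
from html.parser import HTMLParser

# Strip a page down to just its actionable elements so the result
# fits in a model's context window.
class ActionExtractor(HTMLParser):
    ACTIONABLE = {"a", "button", "input", "select", "textarea", "form"}

    def __init__(self):
        super().__init__()
        self.actions = []

    def handle_starttag(self, tag, attrs):
        if tag in self.ACTIONABLE:
            kept = {k: v for k, v in attrs
                    if k in ("id", "name", "type", "href", "value")}
            self.actions.append((tag, kept))

def extract_actions(html: str):
    parser = ActionExtractor()
    parser.feed(html)
    return parser.actions

page = '<div><p>hello</p><input type="text" name="q"><button id="go">Go</button></div>'
print(extract_actions(page))
```

The `<p>hello</p>` narrative content is dropped entirely; only the input and the button survive, which is a fraction of the original token count.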

2

visarga t1_izg14yr wrote

> But here lies the rub: you will need to do this for everything that you do going forward, and the facade will need to never fall.

In a few years we'll all be surrounded by very advanced AI, left and right. The trend is to use more and more AI, not less. It will become like penmanship in the age of keyboards: everyone will use AI for writing.

BTW, you can use GPT-3 prompted with personality profiles to answer polls, rate things, act like a focus group. If you know the distribution of your audience you can focus-group the shit out of your messages to obtain the maximum impact.

> conditioning GPT-3 on thousands of socio-demographic backstories from real human participants in multiple large surveys in the United States: the 2012, 2016, and 2020 waves of the American National Election Studies (ANES)[16], and Rothschild et al.'s "Pigeonholing Partisans" data.

> When properly conditioned, [GPT-3] is able to produce outputs biased both toward and against specific groups and perspectives in ways that strongly correspond with human response patterns along fine-grained demographic axes. In other words, these language models do not contain just one bias, but many.

They can simulate a population in silico for virtual polling. Everyone will want to virtual-test their tweets and articles.
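A hypothetical sketch of how such virtual polling could be wired up: the `persona_prompt` template, the profile fields, and the stubbed `ask` callback are all my inventions, standing in for whatever completion API you'd actually call:

```python
from collections import Counter

# Condition a language model on demographic backstories and tally its
# answers like a virtual poll. The model call is stubbed out here so
# the sketch runs standalone.
def persona_prompt(profile: dict, question: str) -> str:
    backstory = (
        f"I am a {profile['age']}-year-old {profile['occupation']} "
        f"from {profile['region']} who votes {profile['party']}."
    )
    return f"{backstory}\nQuestion: {question}\nAnswer (yes/no):"

def virtual_poll(profiles, question, ask):
    """`ask` is your model call: prompt -> 'yes' or 'no'."""
    return Counter(ask(persona_prompt(p, question)) for p in profiles)

profiles = [
    {"age": 34, "occupation": "teacher", "region": "Ohio", "party": "Democrat"},
    {"age": 61, "occupation": "farmer", "region": "Texas", "party": "Republican"},
]
# Stub model so the example is self-contained:
fake_model = lambda prompt: "yes" if "teacher" in prompt else "no"
print(virtual_poll(profiles, "Do you support the proposal?", fake_model))
```

If you know the demographic distribution of your audience, you sample `profiles` from that distribution and the tally approximates your poll.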

6

visarga t1_izg08wq wrote

> What I really would like to see in the future is neural interfacing; merge AI capability with human sensibility. Return the power back to the human race.

I'd first like to run ChatGPT on my desktop, the way I can run Stable Diffusion. This is for reasons of freedom and privacy. It would create a new safe space for creativity, and it's much easier to achieve than neural interfacing. Maybe they can shrink the model, or maybe we'll get better GPUs.
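On shrinking the model: quantization is one of the standard tricks. A toy, stdlib-only illustration of the idea; real int8 inference kernels are far more careful, with per-channel scales and calibration:

```python
# Why quantization shrinks models: store weights as int8 plus one
# float scale instead of float32, roughly a 4x size reduction.
def quantize(weights):
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]  # values in [-127, 127]
    return q, scale

def dequantize(q, scale):
    return [x * scale for x in q]

w = [0.42, -1.27, 0.05, 0.9]
q, s = quantize(w)
approx = dequantize(q, s)
print(q)       # small integers
print(approx)  # close to the original weights
```

The reconstructed weights differ from the originals by at most half a quantization step, which large neural nets tolerate surprisingly well.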

3

visarga t1_izdk9wi wrote

We are already in post-scarcity with regard to many information-based services - there's so much music, literature, scientific papers, online courses, free encyclopedias in every language, open source software, open source models - more than we could ever consume. So many hobby communities and YT channels, with great people showing their work. Millions of software problems solved on StackOverflow; you can find almost any fix there. The internet itself exceeds our bandwidth and is post-scarcity. This is what post-scarcity feels like: everything is available, but you've got to make the first move.

But if we think about industry, even if we had 100% free energy and 100% perfect automation, it would not mean we were post-scarcity yet. We'd still need to secure the raw materials, either locally or from remote sources, or recycle perfectly, or invent smart materials that can be produced locally. The economy is going to look like ecology: everything recycled and efficient.

1

visarga t1_iyanwjo wrote

I managed to find the ends of its knowledge.

  • it has scarce knowledge about decorative plants such as Anthurium King of Spades - this is an expensive plant in the EU, about 200-300 EUR.

  • it has fuzzy/no knowledge about a hotel resort I visited last summer in Greece.

So there are obscure plants and points of interest that are outside its closed-book ability to remember. It doesn't literally remember everything. Other than that, it's amazing.

3

visarga t1_iy91piz wrote

> All the AI predictions mentioned by various posts in this sub will more likely be under the control of elites.

You can download a model, but you can't download a Google or a Facebook. AI needs far fewer resources to run locally: instead of a whole data centre, it needs just a desktop computer in the case of Stable Diffusion, or an expensive multi-GPU box in the case of a model like GPT-3.

The moral - by running on people's hardware, AI could serve us instead of the big corporations. AI will empower everyone with new skills, lowering the entry barrier to various fields. That is a democratising influence.

I think Google and FB right now are scared of manual browsing being replaced by chat dialogue agents. If those agents are controlled by the users, no advertising will be possible anymore. Your own agent will be helpful and polite, will separate the spam from the ham, and will serve you just what you need without all the crap.

2

visarga t1_iy5bkiu wrote

> chances are good, we'll just be the monkey pressing the buttons

What a lack of imagination. What would you do if you had materials with amazing properties? What would you apply AI to next? The work is just starting.

5

visarga t1_iy3ooc0 wrote

As a programmer I've had to learn a new language every 5-7 years or so; paradigm changes come one after another. We'll just add AI to the toolbox and use it to write code. Even when AI code works well, there is a need to trust it and decide on the various trade-offs. Someone has to get up close and personal with the code. By the time it can solve everything by itself we'll be well into AGI, but we'll still be involved, expressing our goals.

2

visarga t1_iy3ljki wrote

> Now it’s them being attacked and suddenly they don’t like it

Hahahaha. You're missing the big picture. Software has been cannibalising itself for 50 years. Every new open source package or library removes a bit of work from everyone else. You'd think we would be out of work by now, but in reality it's one of the hottest jobs. I mean, WordPress alone automated away the work of a whole generation of web devs, but there was so much more work coming up that it wasn't a problem.

Work is not a zero sum game. If I could do 1000 units of work, I would plan something. If I could do 100,000 units of work, I would make a different plan. Not just scaled up linearly, but a different strategy. My prediction is that companies are going to take the AI and keep the people as well, and we'll be very very busy. Nothing expands faster than human desires/aspirations, not even automation.

1

visarga t1_iy2uc9o wrote

Young'uns! I still remember 8-bit processors in the 1980s and loading programs from cassette tape. My father was still using IBM-style punch cards at work when I was a child; I messed up a whole stack playing with them. One card was one line of code. He had to sort them back by hand.

I think the biggest factor of change in the last 20 years was the leap in computing and communication speed. It took us from the PC era into the internet era. This meant an explosion in online media and indirectly allowed the collection of huge datasets that are being used to train AI today.

The things I've seen. I remember Geoffrey Hinton presenting his pre-deep-learning work on Restricted Boltzmann Machines around 2005. It instantly got my attention and I started following the topic; back then ML was a pariah. Twelve years later I was working in AI. I have seen, blow by blow from the front seat, every step AI has made since 2012, when things heated up. I read the Residual Neural Network paper the same day it was published, and witnessed the birth of the transformer. I have seen GANs come and go, and even talked with their original author, Ian Goodfellow, right here on reddit before he got famous. I got to train many neural nets and play with even more. Much of what I learned is already useless: GPT-3 and SD are so open-ended that projects which used to take years now take just weeks.

Funny thing, when Hinton published the RBM paper he was using unsupervised learning. I thought it was very profound. But in 2012 the big breakthroughs were supervised learning (ImageNet). For five years only supervised learning got the attention and admiration. But in the last 5 years unsupervised won the spotlight again. How the wheel turns.

5

visarga t1_ixmbq7v wrote

AI is not that creative yet (maybe in the future), but how many mathematicians are? Apparently it is able to solve hard problems that are not in its training set:

> Meta AI has built a neural theorem prover that has solved 10 International Math Olympiad (IMO) problems — 5x more than any previous AI system.

> trained on a dataset of successful mathematical proofs and then learns to generalize to new, very different kinds of problems

This is from 3 weeks ago: link
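For a sense of what such a system manipulates: neural provers like this one typically search for proofs in a formal language such as Lean. A toy example of mine, nowhere near IMO difficulty, showing what a machine-checkable statement and proof look like:

```lean
-- A trivial Lean 4 theorem: addition on naturals is commutative.
-- The prover's job is to search for the proof term after `:=`;
-- here it is a single lemma application, IMO problems need long chains.
theorem sum_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

The appeal of this setup is that the proof checker gives an exact reward signal: a candidate proof either type-checks or it doesn't, so the model can't bluff.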

1