visarga
visarga t1_izzb3zf wrote
Reply to I think this post will be monumentally important for some of you to read. Put it in your brain, think about it, and get ready for the next few years. If you are part of this Subreddit; You are forward thinking, you're already ahead of the curve, you will have one shot to be at an advantage. NOW. by AdditionalPizza
Actually, your long post is great for prompting chatGPT to write an article. I got one in business style and one like a "self-help" guide.
visarga t1_izvv9xi wrote
Reply to [D] Getting around GPT-3's 4k token limit? by granddaddy
I'd be interested in knowing, too. I want to parse the HTML of a page and identify what actions are possible, such as modifying text in an input or clicking a button. But web pages often go over 30K tokens, so there's no way to fit them; HTML can be extremely verbose.
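One workaround I'm considering: strip the page down to just its interactive elements before prompting, since the model only needs those to pick an action. A minimal sketch with the standard library (the tag/attribute whitelist here is my own guess at what's useful, not a tested recipe):

```python
# Sketch: reduce an HTML page to its interactive elements so it fits
# in a model's context window.
from html.parser import HTMLParser

INTERACTIVE = {"a", "button", "input", "select", "textarea", "form"}

class ActionExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.actions = []

    def handle_starttag(self, tag, attrs):
        if tag in INTERACTIVE:
            attrs = dict(attrs)
            # Keep only the attributes a model would need to pick an action.
            keep = {k: v for k, v in attrs.items()
                    if k in ("id", "name", "type", "href", "value",
                             "placeholder", "aria-label")}
            self.actions.append((tag, keep))

def summarize(html: str) -> str:
    parser = ActionExtractor()
    parser.feed(html)
    return "\n".join(
        f"<{tag} " + " ".join(f"{k}={v!r}" for k, v in attrs.items()) + ">"
        for tag, attrs in parser.actions
    )

page = ('<html><body><p>Hello</p>'
        '<input id="q" type="text">'
        '<button id="go">Search</button></body></html>')
print(summarize(page))
```

All the `<p>` prose gets dropped, so a 30K-token page might shrink to a few hundred tokens of actionable elements.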
visarga t1_izg2xcp wrote
Reply to comment by [deleted] in 1 year of college since using GPT by innovate_rye
All the students hearing about chatGPT:
> Wait, it can do homework?
I don't think Azure has enough GPUs to solve all the homework yet.
visarga t1_izg2oly wrote
Reply to comment by MostRationalFeminist in 1 year of college since using GPT by innovate_rye
What is there to gain from being stuck in 2019? We have chatGPT today.
visarga t1_izg14yr wrote
Reply to comment by GuyWithLag in 1 year of college since using GPT by innovate_rye
> But here lies the rub: you will need to do this for everything that you do going forward, and the facade will need to never fall.
In a few years we'll all be surrounded by very advanced AI, left and right. The trend is to use more and more AI, not less. It will become like penmanship in the age of keyboards: everyone will use AI for writing.
BTW, you can use GPT-3 prompted with personality profiles to answer polls, rate things, act like a focus group. If you know the distribution of your audience you can focus-group the shit out of your messages to obtain the maximum impact.
> “conditioning GPT3 on thousands of socio-demographic backstories from real human participants in multiple large surveys in the United States: the 2012, 2016, and 2020 waves of the American National Election Studies (ANES)[16], and Rothschild et al.’s “Pigeonholing Partisans” data.
> When properly conditioned, [GPT-3] is able to produce outputs biased both toward and against specific groups and perspectives in ways that strongly correspond with human response patterns along fine-grained demographic axes. In other words, these language models do not contain just one bias, but many”.
They can simulate a population in silicon for virtual polling. Everyone will want to virtual-test their tweets and articles.
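The mechanics are simple: prepend a first-person demographic backstory to the question and sample completions. A minimal sketch; the backstory fields and template below are illustrative assumptions of mine, not the exact format used in the paper quoted above:

```python
# Sketch of persona-conditioned prompting ("silicon sampling").
def backstory_prompt(persona: dict, question: str) -> str:
    # First-person backstory conditions the model toward that demographic.
    backstory = (
        f"I am {persona['age']} years old. I am {persona['gender']}. "
        f"Ideologically, I am {persona['ideology']}. "
        f"I live in {persona['state']}."
    )
    return f"{backstory}\nWhen asked '{question}', I answer:"

# A virtual "panel" drawn from your audience's known distribution.
personas = [
    {"age": 34, "gender": "female", "ideology": "liberal", "state": "Oregon"},
    {"age": 61, "gender": "male", "ideology": "conservative", "state": "Texas"},
]

# Each prompt would be sent to the model; the distribution of completions
# approximates the response distribution for that demographic slice.
prompts = [backstory_prompt(p, "Do you approve of this ad?") for p in personas]
print(prompts[0])
```

Scale the persona list to match census or survey marginals and you have a crude virtual focus group for A/B-testing a message.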
visarga t1_izg08wq wrote
Reply to comment by PyreOfDeath97 in 1 year of college since using GPT by innovate_rye
> What I really would like to see in the future is neural interfacing; merge AI capability with human sensibility. Return the power back to the human race.
I'd like first to run chatGPT on my desktop, like I can run Stable Diffusion. This is for reasons of freedom and privacy. It will create a new safe space for creativity, and is much easier to achieve. Maybe they can shrink the model, or maybe we get better GPUs.
visarga t1_izdk9wi wrote
Reply to How will the transition between scarcity-based economics and post-scarcity based economics happen? by asschaos
We are already in post-scarcity with regard to many information-based services: there's so much music, literature, scientific papers, online courses, free encyclopedias in all languages, open source software, open source models, more than we could ever consume. So many hobby communities and YT channels, with great people showing their work. Millions of software problems solved on StackOverflow; you can find almost any fix there. The internet itself exceeds our bandwidth and is post-scarcity. This is what post-scarcity feels like: everything is available, but you have to make the first move.
But if we think about industry, even if we had 100% free energy and 100% perfect automation, it would not mean we are post-scarcity yet. We need to secure the raw materials, either locally or from remote sources, or we have to recycle perfectly, or invent smart materials that can be produced locally. The economy is going to look like ecology: everything recycled and efficient.
visarga t1_iylef6c wrote
Reply to comment by ziplock9000 in Is my career soon to be nonexistent? by apyrexvision
> humans have to be needed.
There's always someone who needs us. It's us. Nobody can outsource self interests. If people can't get jobs, then they need to be self reliant, a kind of job in itself.
visarga t1_iyle6xx wrote
Reply to comment by FDP_666 in Is my career soon to be nonexistent? by apyrexvision
Don't generalise from agriculture to coding. If the tractor misses the row, it's no big deal. If the AI fails the coding task, maybe things start falling apart.
visarga t1_iyanwjo wrote
I managed to find the ends of its knowledge.
- it has scarce knowledge about decorative plants such as Anthurium King of Spades; this is an expensive plant in the EU, about 200-300 EUR.
- it has fuzzy/no knowledge about a hotel resort I visited last summer in Greece.
So there are obscure plants and points of interest that are outside its closed-book ability to remember. It doesn't literally remember everything. Other than that, it's amazing.
visarga OP t1_iy9cm38 wrote
Reply to comment by mrconter1 in [r] The Singular Value Decompositions of Transformer Weight Matrices are Highly Interpretable - LessWrong by visarga
I am not contradicting you, but we should decide on a case-by-case basis; some of the articles are OK. This one is not doomsday-related at all.
visarga t1_iy91piz wrote
Reply to comment by musing2020 in How should the society - right now - adapt to the AI boom? by reviedox
> All the AI predictions mentioned by various posts in this sub will more likely be under the control of elites.
You can download a model, but you can't download a Google or a Facebook. AI needs fewer resources to run locally: instead of a whole data centre, it needs just a desktop computer in the case of Stable Diffusion, or an expensive multi-GPU box in the case of a model like GPT-3.
The moral: by running on people's hardware, AI could be serving us instead of the big corporations. AI will empower everyone with new skills, lowering the entry barrier to various fields. That is a democratising influence.
I think Google and FB right now are scared of the replacement of manual browsing with chat dialogue agents. If those agents are controlled by the users, no advertising will be possible anymore. Your own agent will be helpful and polite, will separate the spam from the ham, and will serve you just what you need without all the crap.
visarga OP t1_iy8xctv wrote
Reply to comment by beezlebub33 in [r] The Singular Value Decompositions of Transformer Weight Matrices are Highly Interpretable - LessWrong by visarga
Oh yes, for people who prefer video there is also
CS25 I Stanford Seminar - Transformer Circuits, Induction Heads, In-Context Learning
visarga t1_iy5bkiu wrote
Reply to comment by grahag in AI invents millions of materials that don’t yet exist. "Transformative tool" is already being used in the hunt for more energy-dense electrodes for lithium-ion batteries. by SoulGuardian55
> chances are good, we'll just be the monkey pressing the buttons
What a lack of imagination. What would you do if you had materials with amazing properties? What would you apply AI next to? The work is just starting.
visarga t1_iy5b3hq wrote
Reply to comment by Alternative_Note_406 in AI invents millions of materials that don’t yet exist. "Transformative tool" is already being used in the hunt for more energy-dense electrodes for lithium-ion batteries. by SoulGuardian55
Time to hop on. There will probably be many startups bringing revolutionary materials to market. The low-hanging fruit hasn't been picked yet.
visarga t1_iy3ooc0 wrote
Reply to comment by LevelWriting in Google Has a Secret Project That Is Using AI to Write and Fix Code by nick7566
As a programmer, I've had to learn a new language every 5-7 years or so. Paradigm changes come one after another. We'll just add AI to the toolbox and use it to write code. Even when AI code works well, someone needs to trust it and decide on the various trade-offs; someone has to get up close and personal with the code. By the time it can solve everything by itself we'll be well into AGI, but we'll still be involved, to express our goals.
visarga t1_iy3n331 wrote
Reply to comment by Juicecalculator in Google Has a Secret Project That Is Using AI to Write and Fix Code by nick7566
No, the search engine AI is fabulous at maximising ad revenue. Works as intended. Your proposed changes would not make Google more money in the short term. What were you thinking?
visarga t1_iy3ljki wrote
Reply to comment by RoboticPro in Google Has a Secret Project That Is Using AI to Write and Fix Code by nick7566
> Now it’s them being attacked and suddenly they don’t like it
Hahahaha. You're missing the big picture. Software has been cannibalising itself for 50 years. Every new open source package or library removes a bit of work from everyone else. You'd think we'd be out of work by now, but in reality it's one of the hottest jobs. I mean, WordPress alone automated/eliminated the work of a whole generation of web devs, but there was so much more work coming up that it wasn't a problem.
Work is not a zero sum game. If I could do 1000 units of work, I would plan something. If I could do 100,000 units of work, I would make a different plan. Not just scaled up linearly, but a different strategy. My prediction is that companies are going to take the AI and keep the people as well, and we'll be very very busy. Nothing expands faster than human desires/aspirations, not even automation.
visarga t1_iy3l7zy wrote
Reply to comment by User1539 in Google Has a Secret Project That Is Using AI to Write and Fix Code by nick7566
> What use will that be when you can describe your needs in a natural language to an AI and it will create the application for you?
The same thing happened with learning English: it used to be the smart choice, but now translation software has removed that barrier.
visarga t1_iy2w2kd wrote
Reply to comment by CypherLH in 2002 vs 2012 vs 2022 | how has technology changed? by Phoenix5869
> By comparison the leap from 2012 to 2022 seems smaller...
True, but this is also the golden period of AI. I think 90% of all AI research was done in the last 10 years.
visarga t1_iy2uc9o wrote
Young'uns! I still remember 8-bit processors in the 1980s and loading programs from cassette tape. My father was still using IBM-style punched cards at work when I was a child; I messed up a whole stack playing with them. One card was a line of code. He had to sort them back by hand.
I think the biggest factor of change in the last 20 years was the leap in computing and communication speed. It took us from the PC era into the internet era. This meant an explosion in online media and indirectly allowed the collection of huge datasets that are being used to train AI today.
The things I've seen. I remember Geoffrey Hinton presenting his pre-deep-learning work on Restricted Boltzmann Machines around 2005. That instantly got my attention and I started following the topic; back then ML was a pariah. 12 years later I was working in AI. I have had a front-row seat for every step AI has made since 2012, when things heated up. I read the Residual Neural Network paper the same day it was published, and witnessed the birth of the Transformer. I have seen GANs come and go, and even talked with their original author, Ian Goodfellow, right here on reddit before he got famous. I got to train many neural nets and play with even more. Much of what I learned is already useless; GPT-3 and SD are so open-ended that projects that used to take years now take just weeks.
Funny thing, when Hinton published the RBM paper he was using unsupervised learning. I thought it was very profound. But in 2012 the big breakthroughs were supervised learning (ImageNet). For five years only supervised learning got the attention and admiration. But in the last 5 years unsupervised won the spotlight again. How the wheel turns.
visarga t1_ixmbq7v wrote
Reply to comment by purple_hamster66 in When they make AGI, how long will they be able to keep it a secret? by razorbeamz
AI is not that creative yet (maybe in the future), but then, how many mathematicians are? Apparently it is able to solve hard problems that are not in the training set:
> Meta AI has built a neural theorem prover that has solved 10 International Math Olympiad (IMO) problems — 5x more than any previous AI system.
> trained on a dataset of successful mathematical proofs and then learns to generalize to new, very different kinds of problems
This is from 3 weeks ago: link
visarga t1_ixigmah wrote
Reply to comment by RomanScallop in what does this sub think of Elon Musk by [deleted]
They are a dot on the edge of the space of knowledge.
visarga t1_j04fo01 wrote
Reply to Can we guesstimate chatGPTs impact to job market by 2025? by Friedrich_Cainer
> how many jobs
Given that 1 million users brought Azure's GPU farms to their knees, I don't think they can scale it up enough to have a significant impact on the job market.