TFenrir t1_jeenq5p wrote

> The thing is, it's like a DJ mixing records: it could generate some amazing new mixes, but if the pattern is not already out there, it's very unlikely to find new patterns.

What does this mean in practice?

Hypothetically, let's say I ask a future (1-2 years out) model to write me a brand new fantasy book series, and tell it what all my favourite books are - and it writes me something that is stellar, 5/5. If someone then comes to me and says "yes, but is this TRULY original?" - what does that even mean?

I think some people are very confident that LLMs cannot find new ideas, but I don't know where they get that confidence from - LLMs have continuously exceeded the thresholds proposed by their critics, and now it feels like we're getting into the esoteric. It's a bit of a... God of the gaps situation to me.

Hypothetically, let's say a language model solves a math problem that has never been solved before - would that change your mind? Do you think that's even possible?

2

TFenrir t1_jebuahq wrote

I think it's a hard question to answer, because many factors can go into layoffs - and after layoffs it's very common for companies to not hire back similar roles but replace tasks with software. That doesn't even get into the culture of layoffs - some companies just don't like doing it, and you'll hear stories about people who go into work all day and play Minesweeper or whatever.

That being said, I think we'll see the first potentially significant disruption when Google and Microsoft release their office AI suite.

I know people whose entire job is to make PowerPoint/Slides. When someone can say "turn this email chain into a proposal doc" -> "turn this proposal doc into a really nice looking set of slides, with animations and a cool dark theme" - that's going to be very disruptive.

66

TFenrir t1_jdt9bng wrote

I remember a completely different movie. In what I remember, the AI tries its best to not hurt anyone, and in the end - really doesn't, even when defending itself. And humanity in the end becomes its own worst enemy, as, in its fear of the AI, it sets itself down a path of global pain and suffering.

Whose lives did the AI end up sacrificing? Didn't it save a lot of people's lives?

49

TFenrir t1_jdim3vv wrote

That is a really good tip.

I'm using langchainjs (I can do Python, but my JS background is 10x my Python). One of the things I want to play with more is getting consistent JSON output from a response - there's a helper tool I tried with a bud a while back when we were pairing... a TypeScript validator or something or other, that seemed to help.

Any tips with that?
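
For context, what I'm picturing is roughly this kind of thing (just a sketch - I'm assuming zod as the validator here, the schema fields are made up for illustration, and `rawModelOutput` stands in for whatever the chain actually returned):

```typescript
import { z } from "zod";

// Sketch: validate that the model's reply is JSON matching the schema we
// asked for, instead of trusting the raw string. Assumes zod is installed.

const ProposalSchema = z.object({
  title: z.string(),
  bulletPoints: z.array(z.string()),
  tone: z.enum(["formal", "casual"]),
});

type Proposal = z.infer<typeof ProposalSchema>;

function parseModelJson(rawModelOutput: string): Proposal | null {
  try {
    const parsed = JSON.parse(rawModelOutput);
    const result = ProposalSchema.safeParse(parsed);
    // On failure you could re-prompt the model with result.error.message
    // appended, asking it to fix its own output.
    return result.success ? result.data : null;
  } catch {
    // Not even valid JSON - also a case for a retry/re-prompt.
    return null;
  }
}
```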

1

TFenrir t1_jdhqnb3 wrote

Are you working with the GPT-4 API yet? I'm still working with 3.5-turbo so it isn't toooo crazy during dev, but I'm about to write a new custom agent that will be my first attempt at a few different improvements to my previous implementations - namely, one of them is trying to use different models for different parts of the chain, conditionally. E.g., I want to experiment with using 3.5 for some mundane internal scratchpad work, but switch to 4 if the agent's confidence of success is low - that sort of thing.
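
Roughly the kind of routing I have in mind (just a sketch - `callModel` and `estimateConfidence` are made-up placeholders, not real langchain/OpenAI APIs):

```typescript
// Sketch of conditional model routing: do the mundane scratchpad pass on the
// cheap model, and only escalate to GPT-4 when the step looks low-confidence.
// `callModel` and `estimateConfidence` are made-up placeholders.

type ModelName = "gpt-3.5-turbo" | "gpt-4";

async function callModel(model: ModelName, prompt: string): Promise<string> {
  // Placeholder: wire this up to whatever client/chain you're actually using.
  return `[${model}] reply to: ${prompt}`;
}

function estimateConfidence(draft: string): number {
  // Placeholder heuristic (0..1). In practice this could be the agent
  // self-rating its answer, or a validator/schema check on the output.
  return draft.includes("I'm not sure") ? 0.2 : 0.8;
}

async function runStep(prompt: string): Promise<string> {
  const draft = await callModel("gpt-3.5-turbo", prompt);

  // Low confidence? Redo the same step on the stronger (pricier) model.
  if (estimateConfidence(draft) < 0.5) {
    return callModel("gpt-4", prompt);
  }
  return draft;
}

runStep("Summarize the scratchpad so far").then(console.log);
```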

I'm hoping I can have some success, but at the very least the pain will be educational.

1

TFenrir t1_jchmcei wrote

>So I’ve recently joined this subreddit, around the time chat gpt was released and first came into the public eye. Since then I’ve been lurking and trying to stay up to date but honestly get lost in the sauce.

That's fair - there's actually just so much that is happening, and has been happening for years, that keeping up with it all is overwhelming.

> I don’t really understand the scope of this AI and techno stuff going on. I’m not saying these advancements are not a big deal because it is. However, I can’t help but scoff in disbelief when I see people talk about things like, immortality achieved, true equality within society, capitalism replaced, labour reduced, climate change reversed and the worlds problems are fixed. I see a lot of utopian “possibilities” get thrown around.

No one can predict this. Anyone who says they are confident is just being idealistic and optimistic. No one knows. There are people in the world who want this to be true, who want us to move to a more utopian society. Plenty of those people are in this sub, because they believe - to varying degrees of craziness - that the upheaval and change from something like a true general intelligence can release us from all worldly burdens.

> Is change of this scale really coming? It seems kinda sci-fi to me. More fantasy than reality.

Honestly, no one knows. One common thought process is:

  1. AI will get smarter than humans
  2. AI will be able to self improve
  3. AI will then be benevolent
  4. AI will solve all health/scarcity problems
  5. AI will keep us alive forever
  6. AI will help us connect our brains to machines
  7. From this point, basically anything

This is a common dream for people in the sub, and it's influenced more and more by popular media (Sword Art Online, Upload (Amazon), etc.). What's complicated about this is that while it's all basically fantasy or idealism after point 2, the first two points feel increasingly likely. And that opens up all kinds of doors. So I think you'll increasingly see these and other more fantastical dreams.

I just like to focus on the tech.

> I can’t really wrap my head around all the information and terms. Like those weekly AI news posts with all the things that happen in a week make no sense to me. I have no clue whats going on really. I’m inclined to believe we are really on the precipice of huge change since so many people talk as if we are. Although I don’t get the same enthusiasm outside of this subreddit. Its not really talked about in the news or governmentally.

It's starting to happen outside of this group, but yeah there are years and years of terminology and concepts that can be overwhelming for someone who is new to it all. But feel free to ask questions, and many people are willing to be helpful and answer.

> These are just my personal thoughts and to add some discussion aspect to this posts I’ll end of with a question. When do you think these advancements in AI/technology really start to seep into the inner workings of our society and make noticeable change for the layman?

I think it will start now, and will grow bigger soon. I think Google Docs/Microsoft Office are going to be the big ones, but we are now getting these tools inside of apps like Slack as well.

This means that people will start using them every day for work, which will usher in the overarching public discourse.

7

TFenrir t1_jadrb3u wrote

I think if we can get a really good, probably sparsely activated, multimodal model that can do continual learning and shows transfer - à la Pathways - many white-collar jobs are done.

Any system that has continual learning would, I think, also have short/medium/whatever-term memory, and a context window that can handle enough at once to rival what we can handle at any given time.

But the thing is, I think that unlike biological systems, there are many different inefficient ways to get us there as well. A very dense model that is big enough, with a better fine-tuning process, might be all we need. Or maybe the bottleneck currently really is context - in-context learning is quite powerful, so what if we suddenly have an efficiency breakthrough with a Transformer 2.0 that can allow for context windows of 1 million tokens?

Also maybe we don't need multimodal per se, maybe a system that is trained on pixels would cover all bases.

7

TFenrir t1_jacuy0q wrote

I think this is really hard to predict, because there are many different paths forward. What if LLMs get good at writing minified code directly? What if they make their own programming language? What happens with new architectures that maybe have something like... RETRO or similar memory stores built in? Heck, even current vector stores allow for some really impressive things. There are tons of architectures that could potentially come into play that make the current maximum context window of 32k tokens more than enough, or maybe 100k is needed. There was a paper I read a while back that was experimenting with context windows that large.
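
As a rough illustration of what I mean by a memory/vector store, a toy in-memory version looks something like this (`embed` here is a stand-in for a real embedding model):

```typescript
// Toy in-memory vector store: embed text chunks, then pull back the most
// similar chunks for a query by cosine similarity. `embed` is a placeholder -
// the fake version just hashes characters so the sketch runs on its own.

type Entry = { text: string; vector: number[] };

async function embed(text: string): Promise<number[]> {
  const v = new Array<number>(8).fill(0);
  for (let i = 0; i < text.length; i++) v[i % 8] += text.charCodeAt(i);
  return v;
}

function cosine(a: number[], b: number[]): number {
  const dot = a.reduce((sum, x, i) => sum + x * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b) || 1);
}

class VectorStore {
  private entries: Entry[] = [];

  async add(text: string): Promise<void> {
    this.entries.push({ text, vector: await embed(text) });
  }

  // Return the k stored chunks most similar to the query.
  async search(query: string, k = 3): Promise<string[]> {
    const q = await embed(query);
    return this.entries
      .map((e) => ({ text: e.text, score: cosine(q, e.vector) }))
      .sort((a, b) => b.score - a.score)
      .slice(0, k)
      .map((e) => e.text);
  }
}

// Usage: stash chunks (notes, code, docs), then pull the relevant ones back
// into the model's context window at query time.
(async () => {
  const store = new VectorStore();
  await store.add("The auth module lives in src/auth.ts");
  await store.add("Deploys run through the CI pipeline");
  console.log(await store.search("where is authentication handled?", 1));
})();
```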

Also, you should look into Google's Pitchfork, which is the code name for a project Google is working on that is essentially an LLM tied to a codebase, one that can iteratively improve it through natural-language requests.

My gut is, by this summer we will start to see very interesting small apps built with unique architectures where LLMs iteratively improve a codebase. I don't know where it will go from there.

1

TFenrir t1_ja55hrf wrote

? Idiocy? You think anyone at DeepMind is an idiot?

Look, we can talk about the value of solving something like Go/Baduk, even just in getting people to understand the power of RL (as it was predicted it would be another decade before an AI would be able to beat a master), or we can talk about the fact that DeepMind has dozens of concurrent projects running at any given time, and they have models that have been trained on everything from Atari games to StarCraft, to models that are embodied in robots, or models trained on stabilizing plasma... But what I think is more important is that you try to remember that none of these organizations is composed of idiots.

They are made up of international teams, filled to the brim with people who have excelled in their careers and usually in everything they have attempted in their lives. I don't think you need to show respect or anything dramatic like that, but if you are approaching your critique of their actions with the assumption that they are, like... dumb - don't. We can put aside that even your critique here shows you don't have a good understanding of what they are working on, because before all that, I think you are doing yourself a disservice if you don't appreciate the depth of consideration that goes into basically every one of their decisions. Even if there are missteps and mistakes, none of them come from idiocy.

1

TFenrir t1_j9v3joj wrote

A lot of it has to do with computational intensity and latency. Text-to-audio and vice versa takes a bit of time - and there are different challenges for local and cloud-based solutions.

Let's say you want a chatbot to reply to you in real time with audio, using a cloud-based solution.

First you speak to it in audio, and that is sent to a cloud server - this part is relatively fast, and is what already happens with things like Google Home/Alexa. Then it needs to convert your speech to text and run that text through an LLM. Then the LLM creates a response, and that needs to be converted back to audio.

Let's say, for a solution like what we see with ElevenLabs, it takes 2 seconds for every second of audio you want to generate. That means if the reply is going to be 10 seconds long, it takes 20 seconds to generate. That would be too slow.

You might have an opportunity to stream that audio by converting only some of the text to audio before the full reply is ready, but these solutions work better when they are given more text to generate all at once... Generating a word at a time would be like talking. With. A. Period. In. Between. Every. Word.
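
The kind of chunked streaming I mean looks roughly like this (just a sketch - `generateReplyStream` and `textToSpeech` are hypothetical placeholders, not real APIs):

```typescript
// Sketch: instead of waiting for the whole reply, buffer streamed LLM tokens
// until there's a full sentence, then hand each sentence to TTS as a chunk.
// `generateReplyStream` and `textToSpeech` are made-up placeholders.

async function* generateReplyStream(prompt: string): AsyncGenerator<string> {
  // Placeholder: in reality, yield tokens from the LLM as they arrive.
  const fake = "Here is the first sentence. And here is the second.";
  for (const word of fake.split(" ")) {
    yield word + " ";
  }
}

async function textToSpeech(sentence: string): Promise<void> {
  // Placeholder: send the sentence to a TTS service and play the audio.
  console.log(`TTS> ${sentence}`);
}

async function speakReply(prompt: string): Promise<void> {
  let buffer = "";
  for await (const token of generateReplyStream(prompt)) {
    buffer += token;
    // Flush on sentence-ending punctuation, so the TTS model gets a decent
    // chunk instead of one word at a time.
    if (/[.!?]$/.test(buffer.trim())) {
      await textToSpeech(buffer.trim());
      buffer = "";
    }
  }
  if (buffer.trim()) await textToSpeech(buffer.trim());
}

speakReply("Tell me about streaming TTS.");
```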

3

TFenrir t1_j9pff6n wrote

Ironically, I think the best way to deal with terms in the book you don't understand is to just ask ChatGPT to explain them to you - either by giving it an excerpt, asking it to define a word, or asking it about the book, e.g. "In The Singularity Is Near, what does Kurzweil mean by algorithmic improvements outpacing hardware?"

Btw, this is ChatGPT's answer:

> In "The Singularity is Near," Ray Kurzweil argues that we are approaching a point in the future where machine intelligence will surpass human intelligence, an event he refers to as the "Singularity." Kurzweil suggests that this rapid advance of technology will be driven by two major factors: the exponential growth of computing power and the development of more advanced algorithms.

> Regarding the concept of "algorithmic improvements outpacing hardware," Kurzweil is referring to the fact that the speed and power of computers are not the only factors that determine their capabilities. The software, or the algorithms that run on the hardware, also play a crucial role in what computers can accomplish.

> Kurzweil argues that as we continue to develop better algorithms, we will be able to do more with the same amount of computing power, or even less. This means that even if we reach a point where we cannot continue to improve the hardware, the development of more advanced algorithms will still enable us to create machines that are more intelligent and capable.

> Therefore, Kurzweil's idea of "algorithmic improvements outpacing hardware" suggests that the development of better algorithms will be a key factor in driving the progress of machine intelligence, even if we reach limits in terms of hardware capabilities.

2