Comments

X-msky t1_j0re4y0 wrote

This has been possible for about two weeks now: just feed your text to ChatGPT in chunks, or use an API.

The technology is here, it's just a matter of money
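
A minimal sketch of the chunking idea in Python. The ~1500-word limit and the prompt wording are assumptions from this thread, not anything ChatGPT documents:

```python
# Split a long text into pieces that each fit under an assumed
# ~1500-word context window; each piece would then be sent as its
# own prompt, e.g. "Summarize the following passage: <chunk>".

def chunk_text(text, max_words=1500):
    """Split text into chunks of at most max_words words each."""
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

chunks = chunk_text(("word " * 4000).strip())
print(len(chunks))  # 3 chunks for a 4000-word text
```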

31

ChronoPsyche t1_j0se7nf wrote

If you fed it in chunks and then asked it to summarize, it would only summarize roughly the last two pages. It has a context window of about 1500 words; anything more than that and it won't "remember" it.

Although chances are you would just get network errors long before you could finish feeding it.

13

TheSecretAgenda t1_j0sgrye wrote

Could it take a novel and turn it into a comic book?

0

ChronoPsyche t1_j0sij1y wrote

No. It can't take anything close to the size of a novel. It can only hold 1500 words in its "memory" at a time, as I just said lol.

4

TheSecretAgenda t1_j0sj30g wrote

I mean if you just fed it in a page at a time.

1

ChronoPsyche t1_j0sj8c3 wrote

It can't produce images, so no. Unless you just wanted to translate a page of novel text into a page of comic book-like text.

EDIT: You could of course use Stable Diffusion to produce the images and ChatGPT to produce the text, but it would still be a very involved process.

3

nebson10 t1_j0rssz2 wrote

What API? Did they release the chatGPT api?

2

X-msky t1_j0sfaol wrote

It was reverse-engineered; check GitHub for ChatGPT projects.

3

nebson10 t1_j0t37ys wrote

Is it safe? I don't want to get banned.

0

Marcus_111 t1_j0r3w18 wrote

Maximum 3 months.

12

4e_65_6f t1_j0qcxxv wrote

You can paste the text there and it will answer based on the prompt. And no it's not AGI (yet).

9

coumineol t1_j0qir5v wrote

It has a token limit of 4000, which is significantly shorter than most books. It will most probably be significantly improved this year, though.
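
Rough arithmetic on that gap, using the common rule of thumb of ~0.75 English words per token; the novel length is just a typical figure, not from this thread:

```python
TOKEN_LIMIT = 4000      # reported context window, in tokens
WORDS_PER_TOKEN = 0.75  # rough rule of thumb for English text
NOVEL_WORDS = 80_000    # a typical novel length (assumption)

window_words = TOKEN_LIMIT * WORDS_PER_TOKEN
print(window_words)                       # 3000.0 words per window
print(round(NOVEL_WORDS / window_words))  # ~27 windows to cover one novel
```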

11

4e_65_6f t1_j0qjafm wrote

I didn't know there was a limit. It worked for my code, which is 120+ lines, so I figured it would also work for books.

2

oddonebetween t1_j0rdxze wrote

It's really inconsistent with code. I've given it a few snippets and it never gives a fully correct answer. It's terrible at connecting things. I haven't had much luck with React Redux, and that's all about connecting the state.

3

4e_65_6f t1_j0rf0hq wrote

I found that it works better if you keep it short: tell it to write just a function or a small part of the code rather than the whole thing. Also, explain in obnoxious detail what is supposed to be happening, and it often gets it right.

It's also really good at improving already-written code; I used it to make my code shorter and more efficient.

3

BinaryMan151 t1_j0tem1w wrote

I tested it and asked it to write a loop counting down from 10 to 1 in C++, and it did it perfectly. It created a for loop and the code was there. It even instructed me how to run the program.

1

RichardKingg t1_j0rjk30 wrote

I was using text-davinci-003 and switched to ChatGPT; I feel it gives better results (I'm programming in JavaScript).

I even asked it to create intermediate exercises for module patterns and it definitely nailed it. I suppose it has a lot to do with the prompts and how long the code you're feeding it is.

2

oddonebetween t1_j0tpc39 wrote

I'm also programming in JavaScript. Yeah, it's good for really short functions, but that's about it in my experience. I'll try to improve my prompts, though.

1

ChronoPsyche t1_j0sekj8 wrote

The token limit works out to about 1500 words. It's not entirely clear what happens when it reaches that limit. In GPT-3, it just stops being responsive after 1500 words. ChatGPT may progressively dump its memory to avoid the session being interrupted, although I've also had it hit network errors or get really slow at responding after a long conversation, so I'm not entirely sure. The point is that it stops working as intended after around 1500 words, or at the very least forgets things said 1500 words prior.

1

Qumeric t1_j0r1n3w wrote

Not so far. Reading PDFs probably doesn't work great right now, but it works well enough for many cases and will definitely be improved.

I think the main problem right now is that an LLM's memory is short, so to actually learn a full textbook, it has to be fine-tuned on it. That is inconvenient and expensive, but I am pretty sure it is possible to make it much better.

I would say we will see something like this in 3 years or less.

5

Ezekiel_W t1_j0qplpf wrote

I would say the next few years.

4

No_Ninja3309_NoNoYes t1_j0rdr95 wrote

It's impossible to say, but to my knowledge, the system currently gets trained on data once and that's it. Also, it won't be instant, because there are 175 billion parameters to re-evaluate.

2

sumane12 t1_j0rg70c wrote

Give it as much info as you can within the token limit and ask it to summarise the info into bullet points. Keep doing this until you have bullet points for the entire document that don't exceed 3k tokens, then give it this summary and ask it for the info you need.

Taken directly from GPT: "you can also provide a summary of the information".
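
A rough sketch of that loop in Python. The `summarize()` here is a stand-in for an actual model call (its 4:1 compression is purely illustrative), and the word limits are assumptions from this thread:

```python
def summarize(text):
    # Placeholder for a real prompt like
    # "Summarise the following into bullet points: <text>".
    # Here it just keeps the first quarter of the words.
    words = text.split()
    return " ".join(words[: max(1, len(words) // 4)])

def recursive_summary(text, limit_words=3000, chunk_words=1500):
    """Repeatedly chunk and summarize until the text fits one prompt."""
    while len(text.split()) > limit_words:
        words = text.split()
        chunks = [" ".join(words[i:i + chunk_words])
                  for i in range(0, len(words), chunk_words)]
        # Stitch the per-chunk summaries together and loop again
        text = " ".join(summarize(c) for c in chunks)
    return text
```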

AGI? Depends on your definition. If AGI to you means human-level cognitive abilities in every domain, emotions, consciousness, will and objectives of its own, and the ability to plan and reason, then no, it's not AGI. If your definition is broader (an agent that can learn something and then generalise that ability to one or many different domains), then yes, I think ChatGPT is AGI. However, I'll caveat this by saying that's not what most people mean when they say AGI.

2

President-Jo t1_j0rmfiy wrote

I told ChatGPT about a hypothetical world that is identical to Earth. Then I asked how a "hughman" could come to rule all of that world. I got a lot of good info ;)

2

Working_Ideal3808 t1_j0rpaq0 wrote

It can already do this if you know how to code & fine tune the algorithms

2

Nervous-Newt848 t1_j0t3f7u wrote

I mean, yes... essentially, but it's much more complex than that.

It's not "fine tuning the algorithms" but adjusting the weights and biases using an optimization algorithm. Fine-tuning the weights and biases.

1

ghostfuckbuddy t1_j0tfnrr wrote

You could make one! That sounds like ordinary finetuning but with a clean user interface.

2

azriel777 t1_j0rzzu7 wrote

It can already do it. I copy a lot of stuff from webpages, feed it the text, then ask it questions about it. Pretty crazy, honestly.

1

Nervous-Newt848 t1_j0t38zn wrote

This is not true. You can test this by logging out and back in and asking the same questions without pasting the text you gave in the previous session.

Each session has a certain memory context window that it retains from previous input.

When you restart the session by logging out and back in, that memory is lost.

1

azriel777 t1_j0todmd wrote

I should have been clearer. I meant you can train it for that session. Yeah, it forgets by the next session, but it's still pretty cool. I uploaded content from an RPG game book it didn't know, and it learned the rules and mechanics instantly. I had it generate a character and make a story in the setting. It was really cool. But yeah, it sucks that it forgot about it right afterwards, and if I want to do it again, I have to re-upload it for another session.

2

Nervous-Newt848 t1_j0t25cz wrote

This can already be done, but it doesn't work the way you want it to.

Data is first gathered, whether it be images or text or even both (multimodal). This data is then transformed into numbers, more specifically numeric matrices. In PyTorch these are called TENSORS.

Tensors of data are fed into the neural network in batches, i.e. small parts of the dataset at a time. After each batch, the neural network's weights and biases are adjusted using an optimization algorithm. One pass over all the data is called an EPOCH.

The neural network is then tested for accuracy using a loss function.

The neural network can be trained for multiple epochs on the same dataset to increase accuracy (minimize the loss function).

The closer the loss function is to zero, the more accurate the NN model.

Depending on the number of parameters, hidden layers, and epochs, and the amount of data, this can require a lot of computational power and, of course, electricity.

The amount of electricity and cooling that the GPU, TPU, or CPU racks (whatever the company is using) require is why this costs millions of dollars.

In order for a neural network to learn, it has to be retrained with more data server-side, not client-side.
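
The loop described above can be shown in miniature with a single weight in plain Python; real frameworks like PyTorch do the same thing with tensors and billions of parameters:

```python
# Toy training loop: data is fed in batches, a loss gradient is
# computed, and a single weight is nudged by an optimization step
# (basic gradient descent) after each batch. One full pass over
# the data is an EPOCH.

def train(data, epochs=50, batch_size=2, lr=0.05):
    w = 0.0  # the single "parameter"; we want predictions y = w * x
    for _ in range(epochs):                        # one epoch = full pass
        for i in range(0, len(data), batch_size):  # one batch at a time
            batch = data[i:i + batch_size]
            # gradient of mean-squared-error loss wrt w: 2*x*(w*x - y)
            grad = sum(2 * x * (w * x - y) for x, y in batch) / len(batch)
            w -= lr * grad                         # optimization step
    return w

# Data generated by y = 3x; training should recover w close to 3.
data = [(1, 3), (2, 6), (3, 9), (4, 12)]
print(round(train(data), 2))  # 3.0
```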

1

CypherLH t1_j0tbp2a wrote

Well, there is "fine-tuning" as well, which doesn't require re-training the entire model. GPT-3 already has this, but it's a pain in the butt to use; it would be nice if we had a slick GUI where you could just copy and paste text or upload .txt files and have it auto-run the fine-tuning.
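
Such a GUI would mostly be a wrapper around data preparation. A hypothetical sketch of that step: the chunking heuristic here is made up, but the prompt/completion JSONL layout is the format GPT-3 fine-tuning expects:

```python
import json

def text_to_finetune_jsonl(text, chunk_words=200):
    """Turn plain text into prompt/completion JSONL for fine-tuning.

    Naive heuristic: each consecutive pair of chunks becomes one
    training example (first chunk = prompt, second = completion).
    """
    words = text.split()
    lines = []
    for i in range(0, len(words), chunk_words * 2):
        prompt = " ".join(words[i:i + chunk_words])
        completion = " ".join(words[i + chunk_words:i + chunk_words * 2])
        if prompt and completion:
            # completions conventionally start with a leading space
            lines.append(json.dumps(
                {"prompt": prompt, "completion": " " + completion}))
    return "\n".join(lines)
```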

4