
jackilion t1_j2mhdf6 wrote

You can't teach language to AlphaGo.

AlphaX is an architecture made to quickly traverse a huge space of possibilities. That's why it's good at games like chess and Go, where the AI has to think ahead about what the game state could be N moves down the line, with each move exponentially increasing the number of game states. Same for AlphaFold and protein folding.
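(Illustrative only: a tiny Python sketch of why that space blows up. The branching factor of ~250 for Go is a rough figure, and real AlphaGo/AlphaZero prunes this tree with a learned policy/value network plus Monte Carlo tree search rather than enumerating it.)

```python
# Toy sketch of exponential game-tree growth: counts the positions a
# naive search would have to consider N plies ahead. The branching
# factor is an approximation; real engines prune this with MCTS.

def count_states(branching_factor: int, depth: int) -> int:
    """Number of leaf positions after `depth` moves."""
    return branching_factor ** depth

for plies in range(1, 6):
    print(plies, "moves ahead ->", count_states(250, plies), "positions")
```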

GPT is a transformer, which takes an input vector, possibly but not necessarily representing language, and produces an output vector. Through self-attention it is able to weigh certain parts of the vector on its own, similar to how humans weigh certain words in a sentence differently.
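(Here's a minimal numpy sketch of that self-attention weighting. It skips the learned query/key/value projections a real transformer applies first, so treat it as a toy illustration of the mechanism, not GPT's actual implementation.)

```python
# Minimal scaled dot-product self-attention over a sequence of token
# vectors: each output is a weighted mix of all the input vectors.
import numpy as np

def self_attention(x: np.ndarray) -> np.ndarray:
    """x: (seq_len, d_model) input vectors -> same-shape output."""
    d = x.shape[-1]
    # Queries/keys/values reuse x here for brevity; a real transformer
    # would apply learned projection matrices W_q, W_k, W_v first.
    scores = x @ x.T / np.sqrt(d)                    # token-to-token similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over each row
    return weights @ x                               # weighted sum of values

tokens = np.random.randn(4, 8)        # 4 "words", 8-dim embeddings
print(self_attention(tokens).shape)   # (4, 8)
```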

StableDiffusion is a denoising diffusion model: it takes a 2D tensor as input (possibly representing an image) and produces a 2D tensor as output. It's trained to reverse a noising process that has been applied to the dataset.
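(A rough sketch of the forward noising step such a model learns to reverse; the schedule below is made up for illustration and isn't Stable Diffusion's actual one.)

```python
# Forward (noising) step of a diffusion process: blend the clean image
# with Gaussian noise at timestep t. The model is then trained to
# predict that noise so it can run the process in reverse.
import numpy as np

def add_noise(image: np.ndarray, t: float):
    """t in [0, 1]: 0 = clean image, 1 = pure noise."""
    noise = np.random.randn(*image.shape)
    noisy = np.sqrt(1.0 - t) * image + np.sqrt(t) * noise
    return noisy, noise   # training target is `noise`, given `noisy`

clean = np.random.rand(64, 64)            # stand-in for a 2D image tensor
noisy, target = add_noise(clean, t=0.5)
print(noisy.shape)                        # (64, 64)
```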

You see, each of these architectures has a very specific form of input and output, and its structure enables it to perform a certain task very well. You can't "teach" ChatGPT to produce an image, because it doesn't have a way to process image data.

3

ItsTimeToFinishThis OP t1_j2mowck wrote

>AlphaX is an architecture that is made to quickly traverse a huge space of possibilities.

So, brute force.

1