stevenbrown375 t1_j47crss wrote

It’s not like there will be just one.

AI training is essentially an iterative process of building candidate models, testing them against a set of objective functions, and deleting the worst performers, something like keeping the top 1% and throwing away the other 99% until the behavior lines up with what you want (alignment). Enormous numbers of intermediate versions get created and destroyed along the way, and it doesn't stop until the desired behavior is achieved.
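A toy sketch of that keep-the-best, discard-the-rest loop. All the names here are made up for illustration; real LLM training actually adjusts a single model's weights by gradient descent, with RLHF-style fine-tuning layered on top, rather than literally spawning and deleting whole bots, but the "score candidates, keep only the top performers" intuition is similar:

```python
import random

def make_candidate():
    """Stand-in for a candidate model: a random 10-number parameter vector."""
    return [random.uniform(-1.0, 1.0) for _ in range(10)]

def mutate(candidate):
    """Produce a slightly perturbed copy of a surviving candidate."""
    return [x + random.gauss(0, 0.05) for x in candidate]

def objective_score(candidate):
    """Stand-in objective function: higher is better (here, closeness to zero)."""
    return -sum(x * x for x in candidate)

def train(rounds=200, population=1_000, keep_fraction=0.01):
    candidates = [make_candidate() for _ in range(population)]
    for _ in range(rounds):
        # Score everything, keep the top ~1%, delete the other ~99%.
        candidates.sort(key=objective_score, reverse=True)
        survivors = candidates[: max(1, int(population * keep_fraction))]
        # Rebuild the population from mutated copies of the survivors.
        candidates = [mutate(random.choice(survivors)) for _ in range(population)]
    return max(candidates, key=objective_score)

best = train()
print(objective_score(best))  # climbs toward 0 as the loop homes in on the desired behavior
```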

To get something like ChatGPT, whose base model has around 175 billion parameters, the training run churned through hundreds of billions of tokens and an enormous number of gradient-update steps. That's why these models cost so much to train.
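For a rough sense of scale, a common back-of-envelope estimate puts training compute at about 6 × parameters × training tokens. The token count below is the ballpark reported for GPT-3; actual cost depends on hardware and efficiency, so treat this as an order-of-magnitude sketch:

```python
# Back-of-envelope training compute using the common ~6 * N * D approximation,
# where N is the parameter count and D is the number of training tokens.
params = 175e9   # GPT-3-scale parameter count
tokens = 300e9   # roughly the reported GPT-3 training token count
flops = 6 * params * tokens
print(f"~{flops:.1e} FLOPs")  # on the order of 3e23 floating-point operations
```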

So yeah, the developers ask the model questions, and if it answers wrong, poof.

1