
WeekendCautious3377 t1_j861p3s wrote

And those matrices (millions of rows and columns) change at every iteration, so it would probably be better visualized as a video of a brain scan.

6

VoidAndOcean t1_j862hjf wrote

Yeah, but you understand that changing one variable has an effect on the whole matrix. It's fine, just a big calculation.
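A tiny sketch of that ripple effect (hypothetical, toy sizes, using NumPy): nudge one weight in the first layer and every output of the next layer shifts, not just one entry.

```python
import numpy as np

# Minimal sketch (hypothetical, tiny sizes): two dense layers,
# each just y = W @ x + b, i.e. lots of a*x + b in parallel.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)
x = rng.normal(size=3)

def forward(W1, b1, W2, b2, x):
    h = np.tanh(W1 @ x + b1)      # first layer with a nonlinearity
    return W2 @ h + b2            # second layer

y_before = forward(W1, b1, W2, b2, x)
W1[2, 1] += 0.5                   # change a single weight
y_after = forward(W1, b1, W2, b2, x)

print(y_after - y_before)         # every output moves, not just one entry
```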

4

WeekendCautious3377 t1_j864j33 wrote

Yes and no. Google’s latest LLM handles 540 billion parameters. The linear algebra is literally as simple as y = a*x + b, but you do billions of those operations every time, on input you don’t 100% understand. For instance, it is easy to record a person’s voice and represent that file as a series of numbers. You feed hundreds of thousands of voice recordings to these models, and they evolve giant matrices with billions of entries. The model (a giant matrix) goes through a bunch of iterations per input to optimize itself and picks up the nuances of a human voice embedded in that digital form.
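To make the "it's just y = a*x + b, billions of times" point concrete, here's a minimal sketch (hypothetical shapes, not any real model): a voice clip is already just a vector of numbers, and one layer of a model is nothing fancier than a matrix multiply plus a bias.

```python
import numpy as np

# Minimal sketch (hypothetical shapes): one layer of a model applied to raw audio.
rng = np.random.default_rng(0)

audio = rng.normal(size=16_000)            # 1 second of 16 kHz audio, as plain floats
W = rng.normal(size=(512, 16_000)) * 0.01  # a single weight matrix (~8M parameters)
b = np.zeros(512)

features = W @ audio + b                   # each output is a*x + b summed over the input
print(features.shape)                      # (512,)

# A real model stacks many such layers and tunes W and b over many
# training iterations so the outputs pick up patterns in speech.
```

A 540-billion-parameter model is this same idea repeated at an enormous scale, which is why no single person can follow the individual multiplications.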

You can then tell the program to group different inputs together by patterns such as accents. Now you have multiple models, each optimized to speak in a different accent.
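A rough sketch of that grouping idea (hypothetical data and labels, and a tiny least-squares fit standing in for a real model; production systems would cluster or condition one large model instead):

```python
import numpy as np
from collections import defaultdict

# Hypothetical data: each clip is a 64-dim feature vector with an accent tag and a target value.
rng = np.random.default_rng(0)
clips = [(rng.normal(size=64), rng.choice(["US", "UK", "IN"]), rng.normal())
         for _ in range(300)]

# Group clips by accent.
grouped = defaultdict(list)
for features, accent, target in clips:
    grouped[accent].append((features, target))

# Fit a separate tiny linear model per accent group.
models = {}
for accent, samples in grouped.items():
    X = np.stack([f for f, _ in samples])
    y = np.array([t for _, t in samples])
    weights, *_ = np.linalg.lstsq(X, y, rcond=None)
    models[accent] = weights

print({accent: w.shape for accent, w in models.items()})
```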

If you had billions of people each looking at one parameter at a time, it would be feasible to follow each “simple” algebra step. But you would literally need billions of people looking at it, and there are better ways to draw overall inferences.

You can think of it just like trying to analyze any other big system.

Traffic in LA? You can definitely look at each person’s car and eventually figure out how each driver made their decisions. But that will not solve the overall traffic problem of a city with millions of people driving.

Only the AI problem is orders of magnitude more complicated.

11