
Verence17 t1_iycens8 wrote

So, imagine playing a game: you are told a number, you add some X to it, and you announce the result. The other person then tells you whether your result matches the one they expected, or whether it's too big or too small, and from that feedback alone you have to guess the correct X.

"1. What do we want as a result?"

"Well, maybe X = 0? 1+0=1, my answer is 1."

"No, for 1 we need something bigger. Let's try again, what do we want to get for 2?"

"Then maybe X = 2? 2+2=4, my answer is 4."

"No, we need less than that. Another try: what do we want for 3?"

"So, X is bigger than 0 but smaller than 2... Maybe X = 1? 3+1=4, my answer is 4."

"Yes, that's what we needed, you guessed the correct X!"

In this scenario, "take a number and add X to it" is your algorithm, and X is a parameter of that algorithm. You don't know the parameter beforehand; you guess it iteratively using only the expected answers.
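One way to make that guessing systematic is to sketch it in code. This is my own illustration, not anything from the comment: the hidden parameter is an integer, and each wrong answer's feedback ("bigger" / "less") lets us narrow the range, just like the dialogue above.

```python
def guess_x(true_x, numbers, lo=-100, hi=100):
    """Find the hidden integer X, using only feedback
    ("too small" / "too big" / "correct") on each answer."""
    for n in numbers:
        x = (lo + hi) // 2        # current guess for the parameter
        answer = n + x            # run the "algorithm" with our guess
        expected = n + true_x     # what the asker actually wanted
        if answer == expected:
            return x              # guessed the correct X
        elif answer < expected:
            lo = x + 1            # "we need something bigger"
        else:
            hi = x - 1            # "we need less than that"
    return None                   # ran out of numbers to be asked about

print(guess_x(1, range(1, 20)))   # finds X = 1 in a few rounds
```

Halving the range each round is just an efficient way to do what the player in the dialogue does by intuition: keep the guesses consistent with all the feedback so far.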

It turns out we can construct an algorithm with quite a lot of parameters (possibly millions) in such a way that there exist values for those parameters which, in theory, give good results for the task at hand. Not perfect, but good. We don't know exactly what those values are; we only know that they can exist. The task can even be as complex as showing the algorithm an image of a bird and expecting the answer "bird", and it can still work with some set of parameters unknown to us.

Learning methods let the program, much like in the example above, start from a completely random guess and then tweak all these parameters in a more or less sensible way based only on what the expected answer is. The math works out so that it will likely find better and better combinations until it hits on something that actually works to an extent. This process is what's called machine learning, and the set of parameter values it finds is called a model for that specific algorithm.
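The tweaking loop can be sketched for the same toy "add X" algorithm. This is a hypothetical minimal example of my own (the numbers and names are not from the comment): start with a random parameter, measure how far each answer is from the expected one, and nudge the parameter to shrink that error.

```python
import random

def learn(examples, steps=500, lr=0.1):
    """Learn the parameter of 'n + x' from (input, expected) pairs."""
    x = random.uniform(-10, 10)        # completely random first guess
    for _ in range(steps):
        n, expected = random.choice(examples)
        answer = n + x                 # run the algorithm with current x
        error = answer - expected      # how far off we are
        x -= lr * error                # tweak x to reduce the error
    return x

# The hidden parameter here is 3: every expected answer is input + 3.
examples = [(n, n + 3.0) for n in range(10)]
print(learn(examples))                 # converges to roughly 3.0
```

Each step moves x a small fraction of the error, so repeated steps home in on the value that makes the errors vanish. Real machine learning does essentially this, but with millions of parameters adjusted at once.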
