sqweeeeeeeeeeeeeeeps t1_j20alj8 wrote

Reply to comment by Horneur in Making an AI play LoL by Horneur

There is no game theory function. I’m confused about how you landed on this line of thinking. Try out league-play reinforcement learning strategies on a simple game like tic-tac-toe. It should play enough games to learn the best move in every scenario.
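A minimal sketch of what that could look like: a tabular agent that learns tic-tac-toe purely from self-play. This uses a Monte-Carlo-style terminal-reward update rather than full Q-learning, and every name and hyperparameter below is an illustrative assumption, not something from the thread.

```python
import random
from collections import defaultdict

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    """Return 'X' or 'O' if someone has won, else None."""
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

Q = defaultdict(float)    # Q[(state, action)] -> learned value
ALPHA, EPS = 0.5, 0.1     # learning rate, exploration rate (assumed values)

def choose(board):
    moves = [i for i, c in enumerate(board) if c == ' ']
    if random.random() < EPS:                      # epsilon-greedy exploration
        return random.choice(moves)
    state = ''.join(board)
    return max(moves, key=lambda m: Q[(state, m)])

def train(episodes=50_000):
    for _ in range(episodes):
        board, player, history = [' '] * 9, 'X', []
        while True:
            move = choose(board)
            history.append((''.join(board), move, player))
            board[move] = player
            w = winner(board)
            if w is not None or ' ' not in board:
                # +1 to the winner's moves, -1 to the loser's, 0 on a draw
                for state, action, p in history:
                    r = 0.0 if w is None else (1.0 if p == w else -1.0)
                    Q[(state, action)] += ALPHA * (r - Q[(state, action)])
                break
            player = 'O' if player == 'X' else 'X'

train()
# After enough self-play games, the greedy opening move should be non-losing.
print(max(range(9), key=lambda m: Q[(' ' * 9, m)]))
```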

1

sqweeeeeeeeeeeeeeeps t1_j209r5c wrote

Reply to comment by Horneur in Making an AI play LoL by Horneur

What do you think game theory means? These are all game-theory-based, and you can use DL on any of them. Tic-tac-toe is simple enough to be solved by an explicit algorithm, but you can practice building simulations and using DRL on it.
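For the “explicit algorithm” route, plain minimax over the full game tree is the classic choice. A hedged sketch (board representation and helper names are my own assumptions):

```python
def winner(board):
    """Return 'X' or 'O' if someone has won, else None."""
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for a, b, c in lines:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (best_score, best_move) for `player`; 'X' maximizes."""
    w = winner(board)
    if w == 'X':
        return 1, None
    if w == 'O':
        return -1, None
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0, None  # draw
    best_score, best_move = (-2, None) if player == 'X' else (2, None)
    for m in moves:
        board[m] = player
        score, _ = minimax(board, 'O' if player == 'X' else 'X')
        board[m] = ' '
        if (player == 'X' and score > best_score) or \
           (player == 'O' and score < best_score):
            best_score, best_move = score, m
    return best_score, best_move

# Empty board, X to move: perfect play from both sides is a draw (score 0).
# Fine for tic-tac-toe's tiny tree; add memoization for anything bigger.
print(minimax([' '] * 9, 'X'))
```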

3

sqweeeeeeeeeeeeeeeps t1_j203p1p wrote

Reply to comment by Horneur in Making an AI play LoL by Horneur

Remember, chess AIs were really the first big thing, and Dota/League is significantly harder than chess. So try making AIs for the following, in rough order of difficulty (a sketch of a shared environment interface follows the list):

- Tic-tac-toe
- A simple dice game (Farkle seems like a good one)
- A card game with slightly more intricacy
- Checkers
- Chess
- A simple game involving player movement
- League
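One way to make that progression practical is to give every game the same environment interface so your agent code carries over. This mirrors the Gymnasium-style API, but the class below is my own illustrative sketch, not a library you must use:

```python
from abc import ABC, abstractmethod

class GameEnv(ABC):
    @abstractmethod
    def reset(self):
        """Return the initial observation."""

    @abstractmethod
    def step(self, action):
        """Apply an action; return (observation, reward, done)."""

    @abstractmethod
    def legal_actions(self):
        """Return the actions available in the current state."""

# The same loop then works for tic-tac-toe, Farkle, checkers, chess, ...
def play_episode(env: GameEnv, policy):
    obs, done, total = env.reset(), False, 0.0
    while not done:
        obs, reward, done = env.step(policy(obs, env.legal_actions()))
        total += reward
    return total
```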

3

sqweeeeeeeeeeeeeeeps t1_izt8ldx wrote

Pytorch / Keras / Tensorflow for deep learning

And any basic ML library you want, scikit-learn etc.

Deep learning is all about GPU usage and running long experiments in production. I’m confused about what you even want.
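To make the GPU point concrete, here is a hedged PyTorch illustration (the layer sizes are arbitrary assumptions): the habit that separates DL tooling from scikit-learn-style ML is explicitly placing models and batches on an accelerator.

```python
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(784, 10).to(device)          # move parameters to the GPU
batch = torch.randn(32, 784, device=device)    # allocate the batch on the GPU
logits = model(batch)                          # forward pass runs on `device`
print(logits.shape, logits.device)
```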

Is the question basically asking what skills someone specializing in DL would have versus someone specializing in non-DL ML?

−1

sqweeeeeeeeeeeeeeeps t1_izphlmd wrote

You are proving your Swin model is overparameterized for CIFAR. Try making an EVEN simpler model than those; you probably won’t be able to with off-the-shelf distillation. Doing this just for ImageNet literally doesn’t change anything; it’s just a different, more complex dataset.

What’s your end goal? To come up with a distillation technique that makes NNs more efficient and smaller?
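For reference, “off-the-shelf distillation” usually means the standard Hinton-style objective: the student matches the teacher’s softened logits plus the true labels. A minimal sketch; the temperature and weighting values are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    # KL between temperature-softened distributions; the T*T factor
    # rescales gradients back to the same magnitude as the hard loss.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)  # usual supervised term
    return alpha * soft + (1 - alpha) * hard

# Usage: teacher frozen, student is the smaller model being trained, e.g.
# loss = distillation_loss(student(x), teacher(x).detach(), y)
```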

1

sqweeeeeeeeeeeeeeeps t1_iwnu8yt wrote

You are misinterpreting what “normalizing” is. It rescales your data to zero mean and unit variance (the scale of a standard normal distribution). That means you have positive and negative numbers centered around 0, which is what most deep learning models want. The interval [0, 1] is not good because you want features to be able to push in both directions, since certain features negatively impact certain results.
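A quick numpy sketch of the distinction being drawn, z-score standardization versus min-max scaling into [0, 1] (the data here is made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=3.0, size=(1000, 4))   # skewed, all-positive data

# z-score: zero mean, unit variance, values on both sides of 0
standardized = (x - x.mean(axis=0)) / x.std(axis=0)
# min-max: everything squeezed into [0, 1], nothing negative
minmax = (x - x.min(axis=0)) / (x.max(axis=0) - x.min(axis=0))

print(standardized.mean(axis=0).round(3), standardized.std(axis=0).round(3))
print(minmax.min(axis=0), minmax.max(axis=0))
```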

1

sqweeeeeeeeeeeeeeeps t1_iu03am3 wrote

Lmao, this is too funny. I am sure you can easily beat SOTA models on speed, but does it have higher performance/accuracy? We use these overparameterized deep models to perform better, not to be fast. How do you know you can perform “as well as a human”? What tests are you running? What is the backbone of this algo? I think you have just made a small neural net and are saying “look how fast this is” while it performs far worse than actually big models. I am taking all of this with a grain of salt because you are in high school and have no real sense of what SOTA models actually do.

“70+ algorithms in the past year”: is that supposed to be impressive? Are you suggesting that the number of algorithms you produce is any indicator of how they perform? How do you even tune 70 models in a year?

I have a challenge for you. Since you are in HS, read as much research as you can (probably on efficient networks, or whatever you seem to like) and write a review paper on some small niche subject. Then start coming up with novel ideas for it, test them, tune them, push benchmarks, and make as many legitimate comparisons to real-world models as you can. Then publish it.

1