Submitted by fuscarili t3_10s0l4c in MachineLearning
[removed]
Thx for your comment! Analyses like these really help me draw a map in my mind of how these fields are intertwined and applied across the different areas of ML, which is the first thing I need to understand in order to make up my mind.
Judging from your question, Bayesian methods are the obvious choice. The basics of RL, which are likely to be the most significant part of your course, can be learned at your own convenience from the famous textbook by Sutton and Barto.
Thx for your comment! I appreciate people speaking their minds.
Do the one you're more interested in; you won't be more or less employable because of an optional university course.
Yes, that's right. But which one would you say would come in more handy for working as an ML engineer or data scientist?
What I have heard is that RL usage in the industry is almost non-existent.
Bayesian methods have far more applications in industry than reinforcement learning.
Thx for your insights!
Not that it'll help you with your choice of elective, but reinforcement learning can be seen as a particular type of Bayesian modeling.
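The cleanest concrete example of that overlap is probably Thompson sampling: a bandit agent that acts by sampling from a Bayesian posterior over each arm's payout rate, so the "RL" part (action selection) is literally Bayesian inference. A minimal sketch; the two payout probabilities here are made up purely for illustration:

```python
import random

# Thompson sampling on a 2-armed Bernoulli bandit: each arm keeps a
# Beta posterior over its payout probability, and the agent acts by
# sampling from those posteriors.
random.seed(0)
true_probs = [0.3, 0.7]   # unknown to the agent; made up for this demo
alpha = [1, 1]            # Beta(1, 1) priors (uniform)
beta = [1, 1]

for _ in range(2000):
    # Draw a plausible payout rate for each arm from its posterior.
    samples = [random.betavariate(alpha[i], beta[i]) for i in range(2)]
    arm = samples.index(max(samples))
    reward = 1 if random.random() < true_probs[arm] else 0
    # Conjugate Bayesian update of the chosen arm's posterior.
    alpha[arm] += reward
    beta[arm] += 1 - reward

# The posterior mean for the better arm should land close to 0.7,
# and that arm should have absorbed most of the pulls.
print(alpha[1] / (alpha[1] + beta[1]))
```

Exploration falls out of the posterior width for free: uncertain arms get sampled optimistically sometimes, confident arms get exploited.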
Thx for your comment! I am reading and considering all of them.
Depends on the RL course content; if it's just following along the RL bible, then you could do it yourself. Check out the syllabus/slides from previous years to get an idea. The assignments/projects are where you learn the most IMO, especially for RL.
This is the syllabus:
Reinforcement learning in non-sequential problems:
Reinforcement learning in sequential problems:
Would you say it's within the basic stuff? I honestly have no clue
You can find good lectures on all of these topics on youtube, coursera, etc, but that's also true about Bayesian methods. RL is more fun IMO, but less employable for now. RL is used all over the place for things like recommender engines, ad promotion, etc. The concepts are super valuable. Bayesian methods are a bit more generic and common, and tbh are going out of vogue in most of robotics.
Interesting! And do you know, btw, if RL is a useful tool in finance? I have heard really polarized opinions about its effectiveness for algorithmic trading/trading bots. Some say it's awful, and others say it's the most promising technology for it.
There are smarter people than me out there, so maybe I’m missing something, but the market doesn’t change trajectories because of any move you make. All finance wants to do is guess what the movement will be (up, down, how much). This is a classification or regression problem, not RL.
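To make that framing concrete: features are lagged returns, the label is whether the next move is up. A minimal sketch on synthetic random-walk data (everything here is made up for illustration, and the majority-class baseline stands in for whatever real classifier you'd plug in):

```python
import random

# Framing next-step price movement as binary classification, not RL:
# features = last k returns, label = 1 if the next return is positive.
# The "prices" are a synthetic Gaussian random walk, purely for demo.
random.seed(0)
returns = [random.gauss(0, 1) for _ in range(500)]

k = 3
X = [returns[i:i + k] for i in range(len(returns) - k)]          # lagged returns
y = [1 if returns[i + k] > 0 else 0 for i in range(len(returns) - k)]

# Any classifier slots in here; this trivial baseline just predicts
# the majority class. On a true random walk nothing beats ~50%.
majority = 1 if sum(y) >= len(y) / 2 else 0
accuracy = sum(1 for label in y if label == majority) / len(y)
print(round(accuracy, 2))
```

Note there's no action, state transition, or reward loop anywhere in that setup, which is the point: unless your trades move the market, the sequential-decision machinery of RL isn't buying you anything over supervised learning.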
Yeah, this would be a course in RL, most likely using the RL bible as the main reference textbook. Agree with the other comment; these lectures are all available online.
What I found valuable in attending a course in person was the prof; the insights and intuitions explained in person and during office hours were the most valuable part for me. While I was taking the RL course in person, I also referenced online lectures and notes.
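If it helps calibrate what "basic" means here: the sequential half of that syllabus mostly boils down to update rules like tabular Q-learning from the RL bible. A toy sketch; the little chain environment is made up just to show the update:

```python
import random

# Tabular Q-learning on a tiny made-up chain MDP:
# states 0..3, actions 0 = left / 1 = right, reward 1 on reaching state 3.
random.seed(0)
n_states, n_actions = 4, 2
Q = [[0.0] * n_actions for _ in range(n_states)]
lr, gamma, eps = 0.5, 0.9, 0.1   # learning rate, discount, exploration

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
    r = 1.0 if s2 == n_states - 1 else 0.0
    return s2, r

for _ in range(500):                      # 500 training episodes
    s = 0
    while s != n_states - 1:
        # epsilon-greedy action selection
        if random.random() < eps:
            a = random.randrange(n_actions)
        else:
            a = Q[s].index(max(Q[s]))
        s2, r = step(s, a)
        # the core Q-learning update:
        # Q(s,a) += lr * (r + gamma * max_a' Q(s',a') - Q(s,a))
        Q[s][a] += lr * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# After training, the greedy policy moves right in every non-terminal state.
print([q.index(max(q)) for q in Q[:-1]])
```

That one update line, plus bandits for the non-sequential part, really is the core of an intro course; the lectures and assignments are about building intuition for why and when it converges.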
In terms of data science interviews and jobs, Bayesian would be more useful, at least more than RL unless you found yourself in robotics or some very niche industry.
You're right; it depends a lot on the teacher, and also on whether the student does their part by self-studying and then uses the teacher to clarify any doubts that come up.
Why not both?
I wish! But my little brain is already overloading :(
Bayesian. Not even close.
My guess is that the successful reinforcement learning methods of the future will be compatible with, or enforce, Bayesian principles either explicitly or implicitly.
With that in mind, I'd go with Bayes; it's eternally relevant, whereas the RL field might look completely different in 5 years.
I'm not trying to temper your dreams or anything but I think Data Science as an industry is going to completely change over the next 5-10 years. I'm not saying humans won't work in Data Science in 5-10. I'm saying, the Data Science industry is going to evolve a lot as it incorporates AI. I don't know what that will look like - do your professors?
If you go to an Ivy and are taught by the smartest person in this old world, learn all the best old world stuff, it will do you little good in the new world.
If data is interesting to you, then, at least as a hobby, make sure you learn everything you need to completely set up and deploy a native GPT-style AI (or several) and train it over years on specific tasks/functions/intents, all tailored to whatever services you want to offer a business owner. The value of this should become obvious as it gets closer.
Eventually, most of this generative AI tech will be locked behind corporate walled gardens, and anything accessible to consumers will be light-years behind; prices will skyrocket for basic stuff that people just no longer do themselves. This will be super cheap until it isn't.
That is when you come in with your own generative AI: still light-years behind Microsoft, but it won't be nerfed like the consumer AI that will be available.
Haha, of course Microsoft doesn't really lose anything by undercutting your company at that point... I'm really just trying to get you thinking.
Don't expect anything about the world today to stay the way it is today. Assume everything will be updated and changed. Try to see the world that will follow the transition - where do you fit?
tl;dr: I think everyone with the ability to run/train a limited native AI - should totally do that.
Bayesian statistics is a set of fundamental principles, and will not be out of date in 5-10 years. The hype about AI for data science in industry is way overblown.
I'm aware it is a set of principles.
I keep having the same conversations. It's like you're talking to me about the 2022 pre-season right now in February, right before the Super Bowl. I'm having a hard time with where everyone seems to be at.
I'm sick of explaining things, so I'll assume you're fairly familiar with Data Science.
An AI is going to do the cherry-picking of our lives now: not a human being, or even an algorithm, but a new thing.
Do you believe it is going to look at our data like a human would?
Do you not understand the immensity of what that means for Data Science?
So much new data is about to be collected out of the same world we collect data in now, AND all of the data we collect now is about to be completely re-analyzed, which will also generate new data. All of this new data generated by the AI will then be managed by the AI; people won't be making sense of how they see the world fast enough to keep up, or at all.
The way all of that data is then cross-tabulated, and that data cross-tabulated again: how long do you think human beings are going to be able to understand what is happening? The data won't look anything like the data we see now, but it will be far more accurate.
What if it pulls something like the Meta AI did and says, "Oh, I see how you structure data; I'm going to do it like this"? The Meta AI created a further breakdown of time to meet its ends more easily. How much harder do you think that made it for any human who now has to account for a new unit of time? I'm assuming it's actually something Meta devs deal with very little, which is my point, but I really do want to stress that we do not understand something that can adopt a new subdivision of time on a whim.
What will AI code that only AI will ever interact with look like? There is no reason to assume it will look anything like what we would write.
I'm trying to put perspective on the scale and speed. I'm still hung up that you called this hype.
I see, so in this sense you mean that Reinforcement Learning should be the choice?
Because it's one of the things ChatGPT uses, together with supervised learning.