maybe_yeah t1_iwdu5wi wrote

> The book is laid out as a series of fictionalized sprints that take you from pre-project requirements and proposal development all the way to deployment. You'll discover battle-tested techniques for ensuring you have the appropriate data infrastructure, coordinating ML experiments, and measuring model performance. With this book as your guide, you'll know how to bring a project to a successful conclusion, and how to use your lessons learned for future projects.

1 INTRODUCTION: DELIVERING MACHINE LEARNING PROJECTS IS HARD, LET’S DO IT BETTER

2 PRE-PROJECT: FROM OPPORTUNITY TO REQUIREMENTS

3 PRE-PROJECT: FROM REQUIREMENTS TO A PROPOSAL

4 SPRINT ZERO: GETTING STARTED

5 SPRINT 1: DIVING INTO THE PROBLEM

6 SPRINT 1: EDA, ETHICS, BASELINE EVALUATION

7 SPRINT 2: MAKING USEFUL MODELS WITH ML

8 SPRINT 2: TESTING AND SELECTION

9 SPRINT 3: SYSTEM BUILDING AND PRODUCTION

10 POST PROJECT (SPRINT Ω)

Who is the target audience for this book? The description doesn't mention patterns, and the online chapter view doesn't seem to have code samples.

9

sgt102 OP t1_iwdwyr5 wrote

The target audience is people who are being asked to lead an ML project for the first time - or who aspire to do so. The book doesn't try to teach the implementation details of modelling - mostly because there are many texts that do that very well already, far better than I could. So there are no code examples.

3

globalminima t1_iwe6j08 wrote

There is no mention of monitoring, maintenance, or retraining - does chapter 9 go into these? This is a big blind spot if it's not there (and it's where most of the problems happen for inexperienced ML engineers).

13

sgt102 OP t1_iwhfc9e wrote

Chapter 9 addresses (to some extent) logging and monitoring, and governance - which has a lot to do with how the model should be managed in life...

I've worked on projects where the model was ungoverned, went wrong, and no one noticed for a long time... and that caused damage. I was also called in to sort out a project where the team retrained the model every week... and every week they overfitted it on the new data. I think the basis of maintaining models is knowing what they should do, being able to show that they are doing it, and then having a clear way of deciding what to do if they aren't (i.e. someone in charge). What's your POV, though?
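
To make that concrete, here's a minimal sketch of the kind of weekly check I mean. It's my illustration rather than anything from the book (which deliberately has no code), and the baseline figure, tolerance, and names are all assumptions:

```python
# A rough sketch of a weekly governance check: compare live accuracy on a
# labelled sample against the figure signed off at deployment, and escalate
# to a named owner instead of retraining automatically when it drifts.
# All thresholds and names here are illustrative assumptions.
import logging
import statistics

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("model-governance")

BASELINE_ACCURACY = 0.91  # accuracy agreed at sign-off (assumed figure)
TOLERATED_DROP = 0.05     # drop that triggers a human decision (assumed)


def weekly_review(labelled_sample):
    """Return True if the model is within tolerance, False to escalate.

    labelled_sample is a list of (prediction, ground_truth) pairs collected
    from live traffic and labelled after the fact.
    """
    live_accuracy = statistics.mean(
        pred == truth for pred, truth in labelled_sample
    )
    log.info("live accuracy %.3f vs baseline %.3f",
             live_accuracy, BASELINE_ACCURACY)

    if live_accuracy < BASELINE_ACCURACY - TOLERATED_DROP:
        log.warning("outside tolerance - escalate, don't blindly retrain")
        return False
    return True


if __name__ == "__main__":
    # A week's worth of (prediction, ground_truth) pairs: 6 of 8 correct,
    # which falls below 0.86 and so routes to a human rather than retraining.
    sample = [(1, 1), (0, 0), (1, 0), (1, 1), (0, 0), (1, 1), (0, 1), (1, 1)]
    if not weekly_review(sample):
        print("Hold retraining; route to the model owner for a decision.")
```

The point isn't the arithmetic, it's the decision structure: a defined expectation, a measurement against it, and a named person who decides what happens when the two diverge.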

2