
Mark8472 t1_izyamjv wrote

How long was the development time, and how much human effort was required (e.g. number of FTE days)?

How well do both scale?

How easy are they to maintain / what do they cost in the long run?

9

fedegarzar OP t1_izycx10 wrote

  1. We did not run those experiments, but in our opinion it's easier to maintain a Python pipeline than the AWS UI or CLI.

  2. In terms of scalability, I think StatsForecast wins by far: it takes far less time to compute and supports integration with Spark and Ray (rough sketch after this list).

  3. The point of the whole experiment is to show that the AutoML solution is far more expensive in the long run.
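Roughly, the Spark path can look like the following. This is only a sketch: the exact entry point depends on the StatsForecast version, and spark_df / the parquet path are placeholders for a long-format panel with unique_id, ds and y columns.

from pyspark.sql import SparkSession
from statsforecast import StatsForecast
from statsforecast.models import AutoARIMA

spark = SparkSession.builder.getOrCreate()
# Placeholder input: long-format panel data with unique_id, ds, y columns
spark_df = spark.read.parquet("sales.parquet")

sf = StatsForecast(models=[AutoARIMA(season_length=7)], freq="D")
# Handing a Spark DataFrame to forecast() distributes the per-series fitting across the cluster
forecasts = sf.forecast(df=spark_df, h=28)

The same forecast() call works on a plain pandas DataFrame when you run on a single machine.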

27

Mark8472 t1_izygeei wrote

I get that. But since it doesn't show the full picture, the conclusion is misleading.

−17

marr75 t1_izywb2h wrote

If they were using a custom Python pipeline for the statistical models, yeah, I could see this argument. But, like many of the Nixtla tools, it boils down to this:

!conda install -c conda-forge statsforecast
from statsforecast import StatsForecast
from statsforecast.models import AutoARIMA

sf = StatsForecast(models=[AutoARIMA(season_length=7)], freq="D")
sf.fit(train_df)               # train_df: long-format DataFrame with unique_id, ds, y
predictions = sf.predict(h=28)

This is a pretty common "marketing" post format from Nixtla. I think they make good tools and good points, so I'm not at all mad about it. They're providing a ready-to-use tool (StatsForecast) and making a great point about its performance and cost vs the AWS alternative. Asking for the total cost of developing and maintaining StatsForecast means you'd also have to account for the total cost and complexity of developing and maintaining Amazon Forecast...

12