As you may recall, we added real-time testing of AI predictors the last week of June.  That was a helpful first step in improving the quality of our predictors.  This week we’ve added our first true ensemble learning algorithm.  As you might recall from this post, we’ve been doing a poor man’s version of ensemble learning with the 3-5 Bike Classifiers we’ve been running to check for a match.  That has been less than ideal; it’s better to have the machine evaluate a series of predictors and arrive at the “best” answer on its own.
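If you’re curious what that shift looks like in practice, here’s a minimal sketch of the idea using scikit-learn (we’re not saying that’s the stack we run; the features, labels, and individual models below are placeholders for illustration):

```python
# Illustrative sketch: replace several independently-checked classifiers with a
# single voting ensemble. Assumes scikit-learn; the data is synthetic.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Stand-in for ride features and bike labels.
X, y = make_classification(n_samples=500, n_features=8, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Instead of eyeballing 3-5 separate classifiers for agreement,
# let the ensemble combine their votes into a single prediction.
ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("tree", DecisionTreeClassifier(max_depth=5)),
        ("forest", RandomForestClassifier(n_estimators=100)),
    ],
    voting="soft",  # average predicted probabilities across the models
)
ensemble.fit(X_train, y_train)
print(f"Ensemble accuracy: {ensemble.score(X_test, y_test):.2f}")
```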

Early results have been good, so we increased the quality threshold for this new model: it must pass at > 80% accuracy.  This should help further reduce false positives.  You can see your individual algorithm test results (you must have synced a ride to trigger the new algorithm).
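Conceptually, the quality threshold is just a gate on held-out accuracy. A hedged sketch, continuing the example above (the function name and deployment logic are hypothetical, not our actual implementation):

```python
# Only promote a newly trained model if it clears the 80% accuracy bar
# on held-out data.
from sklearn.metrics import accuracy_score

ACCURACY_THRESHOLD = 0.80

def passes_quality_gate(model, X_test, y_test, threshold=ACCURACY_THRESHOLD):
    """Return True if the model's held-out accuracy exceeds the threshold."""
    predictions = model.predict(X_test)
    return accuracy_score(y_test, predictions) > threshold

if passes_quality_gate(ensemble, X_test, y_test):
    print("Model passes: eligible to run on new rides")
else:
    print("Model fails: keep the current predictor")
```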

We’ll monitor this over the next few weeks as we embark on Segment-oriented predictions.  My sense is we’ll deploy this or other ensemble approaches anytime a single regression or classification model can’t reach the accuracy we want on its own.
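That decision rule is easy to picture as a fallback: try the single model first, and only reach for an ensemble when it misses the target. A sketch, reusing the data and quality gate from the examples above (again illustrative, not our deployment code):

```python
# Hypothetical fallback pattern: single model first, ensemble only if needed.
from sklearn.ensemble import StackingClassifier

single_model = LogisticRegression(max_iter=1000)
single_model.fit(X_train, y_train)

if passes_quality_gate(single_model, X_test, y_test):
    chosen = single_model
else:
    # Stacking trains a final estimator on the base models' predictions.
    chosen = StackingClassifier(
        estimators=[
            ("tree", DecisionTreeClassifier(max_depth=5)),
            ("forest", RandomForestClassifier(n_estimators=100)),
        ],
        final_estimator=LogisticRegression(max_iter=1000),
    ).fit(X_train, y_train)
```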

We still have several other tricks up our sleeve to further increase quality.  Each week we take a few more steps forward!

BTW, a few of you have asked a great question: why do we have a Bike Classifier?  The answer is that our long-term vision is to offer a service that predicts when key bike parts might fail (e.g. chain, tires, etc.), which will depend on quality training data.  Additionally, it’s a relatively simple classification problem we use to test our end-to-end AI process.
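For the curious, here’s a toy sketch of what that classification problem looks like: guess which bike a ride was done on from simple ride features.  The feature names and data are invented for illustration; the real classifier and its inputs may differ.

```python
# Toy "Bike Classifier": predict which bike was used from ride features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Columns: average speed (km/h), average cadence (rpm), elevation gain (m)
rides = np.array([
    [34.0, 92, 350],   # road bike
    [18.5, 70, 600],   # mountain bike
    [31.0, 88, 200],   # road bike
    [16.0, 65, 800],   # mountain bike
])
bikes = ["road", "mtb", "road", "mtb"]

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(rides, bikes)
print(clf.predict([[33.0, 90, 300]]))  # likely "road"
```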

Ensemble Learning revisited