Test AI with known data
- Abhay Kulkarni

- Apr 17, 2020
- 1 min read
So the data science is done, the programming is complete, and the prediction API is ready. The pipeline might also give you an estimate of expected accuracy and other success measures, produced by techniques built into your data science workflow such as k-fold cross-validation. But does that give you the confidence that this prediction rule will work for you? The pipeline is new, and as a business user you might not know enough about data science to feel confident. So rely on what you do know for testing your AI.

I would suggest identifying two things: a business user who understands the data being predicted upon, and a dataset that this business user knows well. While selecting the dataset, ensure that it covers all known scenarios, is preferably not a dataset that was used for training the model, and is good data, i.e. verified to be accurate (search my blog page for "good data").

Do save the dataset and its results. As the pipeline or model changes, testing on the same dataset will help you compare different models. Test your AI with known data and you take one more step towards practical AI.

#abhayPracticalAI #ArtificialIntelligence #AI
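The idea of evaluating against a fixed, verified dataset and keeping the results for comparison can be sketched as follows. This is a minimal illustration, not the author's actual pipeline: the helper name, the toy prediction rule, and the JSON archive format are all assumptions made for the example.

```python
import json
from pathlib import Path

def evaluate_on_known_data(predict, known_examples, results_path):
    """Score a prediction function on a known, verified dataset and
    archive the result so successive models can be compared on the
    exact same data.

    predict: callable mapping an input record to a label.
    known_examples: list of (record, expected_label) pairs that a
        business user has verified to be accurate.
    results_path: JSON file where each run's metrics are appended.
    """
    correct = sum(1 for record, expected in known_examples
                  if predict(record) == expected)
    run = {"accuracy": correct / len(known_examples),
           "n_examples": len(known_examples)}
    # Append this run to the archive; earlier runs stay in place so
    # different pipeline/model versions can be compared later.
    path = Path(results_path)
    history = json.loads(path.read_text()) if path.exists() else []
    history.append(run)
    path.write_text(json.dumps(history, indent=2))
    return run

# Toy usage: a trivial rule stands in for the real prediction API,
# and two hand-verified examples stand in for the known dataset.
examples = [({"amount": 10}, "low"), ({"amount": 900}, "high")]
rule = lambda record: "high" if record["amount"] > 100 else "low"
result = evaluate_on_known_data(rule, examples, "known_data_runs.json")
print(result)
```

Because every run is appended to the same archive file, a drop in accuracy after a model change shows up immediately against earlier runs on the identical dataset.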
