How to assess an AI

open question, no answer

17 Nov 2022 | in ai

Had a good catch-up with Renat Zubairov, second-time founder & CEO at RevOS, about AI in Sales.

The key question that came to light: "how do you assess an AI?"

The typical approach to validation is to build a model whose results match what humans feel is right.

For example, Renat's AI is built on the inputs of Customer Success Managers about accounts' health & potential, and it gets validated by the CSMs' agreement with its results.

It's systemising people's gut feel.
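As a minimal sketch of what "validated by CSM agreement" could mean numerically (the labels and scoring here are made up for illustration, not RevOS's actual system): compare the model's account-health calls against the experts' calls, reporting both raw agreement and Cohen's kappa, which corrects for the agreement two raters would reach by pure chance.

```python
from collections import Counter

def agreement_stats(model_labels, human_labels):
    """Compare model outputs against expert labels.

    Returns (raw agreement, Cohen's kappa). Kappa discounts the
    agreement the two raters would reach by chance alone.
    """
    assert len(model_labels) == len(human_labels)
    n = len(model_labels)

    # Observed agreement: fraction of accounts where model and CSM match.
    p_o = sum(m == h for m, h in zip(model_labels, human_labels)) / n

    # Expected chance agreement, from each rater's label distribution.
    model_freq = Counter(model_labels)
    human_freq = Counter(human_labels)
    p_e = sum(
        (model_freq[k] / n) * (human_freq[k] / n)
        for k in set(model_labels) | set(human_labels)
    )

    kappa = (p_o - p_e) / (1 - p_e) if p_e < 1 else 1.0
    return p_o, kappa

# Hypothetical account-health labels: "green" / "amber" / "red"
model = ["green", "red", "amber", "green", "red", "green"]
csm   = ["green", "red", "green", "green", "amber", "green"]

raw, kappa = agreement_stats(model, csm)
print(f"raw agreement: {raw:.2f}, Cohen's kappa: {kappa:.2f}")
```

Note the built-in circularity: the same people whose gut feel trained the model are the yardstick it's measured against, which is exactly what makes the disagreement cases below so interesting.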

This provides benefits, of course, as it helps scale the "expertise" of the contributors.

But what is "expertise"?

So if an AI produces a result that goes against the "expert" gut feel, is it wrong - or did it surface something in the data that the "expert" doesn't or can't see?

The latter would unlock genuinely new value, but it faces resistance because it isn't in line with what the user thinks is the right thing to do.

How do you validate an AI output where the result "doesn't feel right" (but could be)?

I don't have an answer - just fascinated by the questions AI poses :)
