Systems Smart Enough To Know When They’re Not Smart Enough
Josh Clark opens with “Our answer machines have an over-confidence problem.” Perhaps you’ve seen examples of search results (in any form) presenting terrifyingly wrong (or at least “controversial”) “answers”. Hashtag fake news.
Search, in whatever form we offer it to our users, tends that way. This is our top result, dear person! Interact with it! Our algorithm predicts you won’t regret it! Certainly, there is incentive to present results in that way.
Josh asks some hard questions:
- When should we sacrifice speed for accuracy?
- How might we convey uncertainty or ambiguity?
- How might we identify hostile information zones?
- How might we provide the answer’s context?
- How might we adapt to speech and other low-resolution interfaces?
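To make the “convey uncertainty or ambiguity” question concrete, here is a minimal sketch of how a result presenter might withhold the single confident “answer card” when the ranker isn’t actually confident. Everything here is hypothetical illustration: the function name, the score scale, and the thresholds are assumptions, not anyone’s real system.

```python
from typing import List, Tuple

def present(results: List[Tuple[str, float]],
            confident: float = 0.85,
            ambiguous_gap: float = 0.10) -> str:
    """Pick a presentation mode for ranked (answer, score) pairs.

    Scores are assumed to be in [0, 1], highest first. Both
    thresholds are illustrative, not tuned values.
    """
    if not results:
        return "no_answer"
    top_score = results[0][1]
    if top_score < confident:
        # Low confidence: show a hedged list of candidates,
        # not a single authoritative answer.
        return "hedged_list"
    if len(results) > 1 and top_score - results[1][1] < ambiguous_gap:
        # Two strong candidates too close to call: surface both.
        return "show_alternatives"
    return "single_answer"

print(present([("Paris", 0.95), ("Lyon", 0.20)]))  # single_answer
print(present([("A", 0.90), ("B", 0.88)]))         # show_alternatives
print(present([("C", 0.40)]))                      # hedged_list
```

The point isn’t the thresholds; it’s that the UI has more than one mode, and the system chooses to look less certain when it is less certain.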
To which I might add: can we find a business incentive to make these things happen? Can we do such a good job with all this that it attracts users, gains their trust, and makes them good customers? I fear that fast, overconfident, context-free answers are better business, short term.