Abstract:
It is generally accepted that trust in an artificial intelligence system is determined by the confidence of consumers and regulatory organizations that the system can perform its assigned tasks with the required quality. The scientific literature, however, discusses only increasing trust, not guaranteeing trust in the results of artificial intelligence; under this interpretation it is natural to conclude that the results of artificial intelligence are not, in fact, trusted. In this article, a mathematical model is constructed within which it is proved that, for the class of artificial intelligence systems built on machine learning, no guarantee of trust is possible. A classifier is said to be trusted if it correctly classifies new data with probability $1$. The result is obtained for the classical data space $R^L$ and a family of uniform distributions; the model can be made more complex while keeping the space metric and the distributions continuous. In this setting, trust depends neither on the capabilities of the classifier nor on its generalization property.
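As a minimal formal sketch of the trust condition stated above (the symbols $f$, $Y$, $x$, $y$, and $\mathcal{D}$ are illustrative and may differ from the notation used in the body of the paper): a classifier $f\colon R^L \to Y$ is trusted with respect to a data distribution $\mathcal{D}$ on $R^L \times Y$ if
\[
  \Pr_{(x,y)\sim \mathcal{D}}\bigl[f(x) = y\bigr] = 1,
\]
i.e., new data drawn from $\mathcal{D}$ are classified correctly with probability $1$.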