A black box is a device, program module, or data set whose internal structure and content are not fully known, but whose input and output specifications are known. The object whose behavior is being simulated is just such a "black box". It does not matter what the object or the model contains internally, or how either one functions; what matters is that our model behaves the same way the object does in similar situations.
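The idea can be illustrated with a minimal sketch. Both functions below are hypothetical stand-ins: the "black box" whose internals we pretend not to know, and a model judged solely by whether its outputs match the box's outputs on the same inputs.

```python
def black_box(x):
    # Internals are assumed hidden from the modeler; only the
    # input-output specification is available.
    return 2 * x + 1

def model(x):
    # Our model, built purely from observed input-output pairs.
    return 2 * x + 1

# The model is adequate if it behaves like the box in similar situations,
# regardless of how either works inside.
inputs = [0, 1, 2, 5, 10]
adequate = all(model(x) == black_box(x) for x in inputs)
print(adequate)
```

Here adequacy is checked only on a sample of inputs; in practice the test set should cover the situations in which the model will actually be used.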