Image analytics solutions are essential for next-generation manufacturing because they can compute, in real time, important KPIs related to production, maintenance, and quality that are otherwise not obtainable from machine data. A major challenge that reduces the performance of an image analytics model arises when, during inference, it encounters an image whose statistical characteristics differ significantly from those of the images it was trained on. This is called data drift. In commercial applications this is addressed by assembling a very large dataset such as ImageNet, which contains many images per class, and training a model on it so that little to no drift is encountered during inference. However, image analytics solutions in factories are usually custom-developed for a specific situation, such as a single line or product, and do not scale when the situation changes due to drift, which can be caused by changes in lighting, camera placement, different camera specifications on a new line, dust or oil on the camera lens, occlusion by human workers, and a multitude of other factors. In factory settings it is not feasible to create an ImageNet-scale dataset, given the sheer variety of distinct and moving parts on a factory shop floor, some of which are specific to a given factory. Instead, monitoring and compensating for data drift can detect and resolve the degradation in image analytics model performance. In this paper, the proposed solution detects drift in images during inference using a convolutional variational autoencoder and compensates for this drift with minimal system integration, allowing the solution to scale easily across a wide range of changing conditions.
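To illustrate the general idea behind drift detection with a convolutional variational autoencoder, the sketch below trains nothing and fixes no architecture from the paper; it simply shows one common pattern, in which a CVAE fitted to in-distribution production images assigns a reconstruction-error score to each incoming image, and images scoring above a threshold calibrated on held-out training data are flagged as drifted. All class names, layer sizes, and the 3-sigma threshold are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (assumed, not the paper's method): CVAE reconstruction error
# as a drift score for incoming images.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvVAE(nn.Module):
    """Small convolutional VAE for 1x64x64 grayscale images (illustrative)."""
    def __init__(self, latent_dim=32):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),   # -> 32x32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # -> 64x16x16
        )
        self.fc_mu = nn.Linear(64 * 16 * 16, latent_dim)
        self.fc_logvar = nn.Linear(64 * 16 * 16, latent_dim)
        self.fc_dec = nn.Linear(latent_dim, 64 * 16 * 16)
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        h = self.enc(x).flatten(1)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        std = torch.exp(0.5 * logvar)
        z = mu + std * torch.randn_like(std)          # reparameterisation trick
        recon = self.dec(self.fc_dec(z).view(-1, 64, 16, 16))
        return recon, mu, logvar

def reconstruction_error(model, x):
    """Per-image mean squared reconstruction error, used as the drift score."""
    with torch.no_grad():
        recon, _, _ = model(x)
        return F.mse_loss(recon, x, reduction="none").flatten(1).mean(dim=1)

# Training on in-distribution images is omitted for brevity; after training,
# the drift threshold is calibrated on held-out in-distribution data.
model = ConvVAE()
held_out = torch.rand(128, 1, 64, 64)                 # placeholder in-distribution images
scores = reconstruction_error(model, held_out)
threshold = scores.mean() + 3 * scores.std()          # assumed 3-sigma rule

# At inference, images whose score exceeds the threshold are flagged as drifted.
incoming = torch.rand(8, 1, 64, 64)                   # placeholder inference images
drifted = reconstruction_error(model, incoming) > threshold
print(drifted)
```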