We quantify the sensitivity of diffractive optical networks' inference accuracy to input object variations in the form of translation, rotation, and scaling, and present a new training methodology that enables diffractive networks to maintain their classification performance despite such object variations within the input field of view. Our analyses of the all-optical classification of handwritten digits reveal that this new training scheme provides blind-inference accuracy gains of >50%, >30%, and >30% for randomly shifted, rotated, and scaled input objects, respectively, demonstrating its efficacy. These results are important for the use of diffractive optical networks in machine vision applications involving dynamic objects and environments.
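A training scheme of this kind presumably exposes the network to randomly shifted, rotated, and scaled versions of each training object. The sketch below is an illustrative data-augmentation helper under that assumption; the function name `random_transform` and its parameter ranges (`max_shift`, `max_deg`, `scale_range`) are hypothetical, not taken from the paper.

```python
import numpy as np
from scipy.ndimage import shift, rotate, zoom

def random_transform(img, rng, max_shift=4.0, max_deg=10.0, scale_range=(0.9, 1.1)):
    """Apply a random translation, rotation, and isotropic scaling to a
    square 2-D image, keeping the output the same shape as the input."""
    h, w = img.shape
    # Random sub-pixel translation (bilinear interpolation, zero fill).
    dy, dx = rng.uniform(-max_shift, max_shift, size=2)
    out = shift(img, (dy, dx), order=1, mode="constant")
    # Random in-plane rotation; reshape=False keeps the array size fixed.
    angle = rng.uniform(-max_deg, max_deg)
    out = rotate(out, angle, reshape=False, order=1, mode="constant")
    # Random scaling, then center-crop or zero-pad back to (h, w).
    s = rng.uniform(*scale_range)
    out = zoom(out, s, order=1)
    oh, ow = out.shape
    if oh >= h:  # scaled up: crop the center
        top, left = (oh - h) // 2, (ow - w) // 2
        out = out[top:top + h, left:left + w]
    else:  # scaled down: pad symmetrically with zeros
        py, px = h - oh, w - ow
        out = np.pad(out, ((py // 2, py - py // 2), (px // 2, px - px // 2)))
    return out

rng = np.random.default_rng(0)
img = np.zeros((28, 28))
img[10:18, 10:18] = 1.0  # toy "object" on an MNIST-sized canvas
aug = random_transform(img, rng)
print(aug.shape)  # (28, 28)
```

Training on such perturbed inputs, rather than perfectly centered and sized objects, is the standard way to make a learned classifier tolerant of the geometric variations listed above.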