Sorry, Algorithm, You Got It Wrong

As part of a formal verification and deep learning project, students Yael Leibovitch Weiss and Omer Cohen succeeded in introducing minimal noise into images, leading object detection algorithms to produce major errors.

Sometimes, the best way to fix something is to try to break it. That is the principle behind formal verification: challenging software systems and rigorously reasoning about their logic to make sure they are free of bugs. Recently, students Yael Leibovitch Weiss and Omer Cohen achieved some fascinating results in this area as part of their formal verification and deep learning project, supervised by Prof. Hillel Kugler and PhD candidate Avraham Raviv. “The project involved applying verification tools to object detection algorithms, like the ones integrated into autonomous cars’ cameras, which must detect every object around them: roads, road signs, cars, people, and so on,” says Avraham Raviv. “As part of the project, the students combined deep learning models, which are used among other things to detect and classify objects in images, with formal verification tools, in order to see how object detection algorithms handle images with noise.”
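To make this concrete, the sketch below shows, in rough outline, what “seeing how a detector handles images with noise” can look like in code. It is only an illustration built on a publicly available PyTorch detector (torchvision’s Faster R-CNN) and random noise sampling; the article does not name the specific model or tools the students used, and their project relied on formal verification tools, which reason about every perturbation inside a bound rather than a random sample.

```python
import torch
import torchvision

# Illustrative stand-in detector; the article does not say which model the students used.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detections_under_noise(image, epsilon=2 / 255, trials=20, score_thresh=0.5):
    """Empirically probe whether small bounded noise changes the detections.

    `image` is a float tensor of shape (3, H, W) with values in [0, 1].
    This random-sampling probe only illustrates the question being asked;
    a formal verification tool covers *all* perturbations in the epsilon-ball,
    not just a handful of random samples.
    """
    with torch.no_grad():
        clean = model([image])[0]
        clean_labels = clean["labels"][clean["scores"] > score_thresh]
        for _ in range(trials):
            # Uniform noise in [-epsilon, epsilon], clipped back to a valid image.
            noise = (torch.rand_like(image) * 2 - 1) * epsilon
            noisy = (image + noise).clamp(0, 1)
            pred = model([noisy])[0]
            labels = pred["labels"][pred["scores"] > score_thresh]
            if not torch.equal(labels, clean_labels):
                return noisy, pred  # a perturbation that changed the detector's output
    return None, clean
```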

The project was carried out in the computer engineering track, where Yael and Omer study as part of the High School Academy program, majoring in cyber and data analysis. The project continues work done by Roey Asa, a former student in the same program, and was first developed as part of the Faculty’s hackathon. “The project’s objective was to generate attacks on the object’s location, which are innovative attacks considered relatively hard to detect,” says Yael. “As part of the project, we employed formal verification tools to generate minimal noise for images, noise that can hardly be noticed but that makes the deep learning model classify the objects in the image the wrong way. We divided the attacks into four key types. The first, and most interesting, consists of attacks on the object’s location; the second, attacks on the object’s classification; the third involves the original object being detected correctly alongside additional, wrongly classified objects; and the fourth involves the original object being classified wrongly alongside other wrongly detected objects. We wanted to prove the network could be attacked with each of these types.”
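The quote describes searching for a minimal, barely noticeable perturbation that flips the model’s output. The snippet below illustrates that idea with a standard gradient-based (FGSM-style) perturbation against an off-the-shelf classifier; it is an explanatory stand-in only, since the students’ attacks were generated with formal verification tools and also targeted object location, not just classification.

```python
import torch
import torch.nn.functional as F
import torchvision

# Off-the-shelf classifier used purely for illustration of a classification-type attack.
model = torchvision.models.resnet18(weights="DEFAULT")
model.eval()

def minimal_noise_attack(image, epsilon=1 / 255):
    """Return an epsilon-bounded perturbation that tries to change the predicted label.

    `image` is a float tensor of shape (1, 3, H, W), already normalized the way
    the model expects. The gradient's sign gives the direction in which a tiny
    change most increases the loss for the current prediction.
    """
    image = image.detach().clone().requires_grad_(True)
    logits = model(image)
    original_label = logits.argmax(dim=1)
    # Push the image away from its current label by ascending the loss.
    loss = F.cross_entropy(logits, original_label)
    loss.backward()
    perturbed = (image + epsilon * image.grad.sign()).detach()
    new_label = model(perturbed).argmax(dim=1)
    return perturbed, bool(new_label != original_label)
```

A location attack would target the model’s box-regression outputs instead of its class scores, shifting where an object is reported rather than what it is; the gradient heuristic above offers no guarantee of finding the smallest such noise, which is where formal verification tools differ.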

The project produced some very interesting results. “In practice, we managed to incorporate a formal verification tool that generated noise attacks on a deep learning algorithm that detects multiple objects in an image (rather than just one), while factoring in each object’s location within the image,” says Omer. “The results we achieved are unique, as we have shown that we can add minimal noise to images that causes major algorithm errors, such as localizing objects in the wrong places, which can be very critical in autonomous cars, for example. And once we have found those errors, we can review the algorithm and correct it. Bottom line, these results can make autonomous cars safer.”
