Prof. Gonen Singer and Dr. Shahar Somin Have Been Awarded an AI Grant


The prestigious grant, awarded by the Ministry of Innovation, Science and Technology, recognizes their research, which proposes a novel AI-based method for automatic, task-oriented pruning of neural networks.

Deep neural networks are composed of millions or even billions of computational parameters, organized in various layers and structures. This complexity enables deep learning models to achieve impressive performance, but it also comes with high computational costs, long training times, and slow inference. One of the main approaches to addressing this challenge is neural network pruning, a process that reduces the number of parameters or active components in the network while maintaining its performance level.
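In its most common form, pruning simply zeroes out the lowest-magnitude weights at a preset ratio. The following is a minimal sketch using PyTorch's built-in pruning utilities; the toy architecture and the 30% ratio are arbitrary choices for illustration, not part of the researchers' method:

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

# A small example network; the layer sizes are arbitrary.
model = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)

# Magnitude-based (L1) unstructured pruning: zero out the 30% of weights
# with the smallest absolute value in each linear layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)

# Report the fraction of weights that are now zero.
linears = [m for m in model.modules() if isinstance(m, nn.Linear)]
total = sum(m.weight.nelement() for m in linears)
zeros = sum(int((m.weight == 0).sum()) for m in linears)
print(f"Global sparsity: {zeros / total:.1%}")
```

Note that the pruning ratio here is chosen by hand and applied uniformly to every layer, regardless of the task the network serves.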

"Deep learning models demonstrate exceptional performance, but they are often massive, heavy, and expensive in terms of computational and memory resources. Optimal pruning of neural networks without compromising their performance is one of the central challenges in today's field of deep learning," says Prof. Gonen Singer.

Prof. Singer and Dr. Shahar Somin are currently developing a new method for automatic, adaptive, and task-aware neural network pruning. Their joint work has earned them a research grant from the Ministry of Innovation, Science and Technology, as part of the prestigious Breakthrough AI track; this year, fewer than 20 research groups from across the country received it.

According to Prof. Singer, existing pruning methods have two main shortcomings. The first is their general nature: most methods prune the network uniformly, without adapting to a specific task or a particular dataset. The second is that the pruning depth, meaning the percentage of components removed from the network, is set manually in advance. As a result, it is impossible to know beforehand whether pruning will hurt performance, or what maximum level of pruning can be achieved while maintaining model quality.

Prof. Singer and Dr. Somin's research addresses these two challenges and focuses on developing an automatic and adaptive approach to neural network pruning. This approach aims to select, as part of the learning process itself, the optimal pruning level for a task and dataset, while maintaining model performance. "First, we propose performing task-aware pruning, so that different tasks will lead to different pruned network structures. Second, the pruning level is determined automatically and maximized, subject to the constraint that the network's performance will not be compromised," explains Prof. Gonen Singer.
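The announcement does not describe the optimization procedure itself, but the constraint can be read as a search problem: push the pruning level as high as possible while performance stays within a tolerance of the unpruned baseline. A deliberately naive sketch, in which evaluate and prune_copy are hypothetical helpers standing in for whatever the researchers actually use:

```python
def maximize_pruning(model, evaluate, prune_copy, tolerance=0.01, step=0.05):
    """Find the largest pruning ratio that keeps performance near baseline.

    `evaluate(model)` returns a task metric where higher is better (e.g.
    validation accuracy), and `prune_copy(model, ratio)` returns a pruned
    copy of the model. Both are hypothetical placeholders.
    """
    baseline = evaluate(model)
    best_ratio, ratio = 0.0, step
    while ratio < 1.0:
        candidate = prune_copy(model, ratio)
        # Constraint: performance must stay within `tolerance` of baseline.
        if evaluate(candidate) < baseline - tolerance:
            break
        best_ratio = ratio
        ratio += step
    return best_ratio
```

A linear sweep like this is the crudest possible strategy; the point is only that the pruning depth becomes an output of the procedure rather than a manual input.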

As part of the research, Prof. Singer and Dr. Somin will develop a methodology for evaluating the contribution of each component in the network to the performance of a specific task. Additionally, the functional relationships between network components will be represented using graphical models, with the goal of identifying a minimal subset of components with complementary contributions that together preserve the predictive capabilities of the full model. This method is expected to enable the development of smaller, faster, and more efficient models in terms of computational and memory resources, as well as to expand the use of deep learning in sensitive domains such as medicine, security, and finance, even in environments where computational resources are limited.
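How the contribution of each component will be measured is not detailed in the announcement. One generic way to make the idea concrete is ablation scoring, where a component's importance for a given task is the performance lost when it is disabled; the graphical-model step that captures complementary contributions between components is beyond this toy sketch. All names below are hypothetical placeholders:

```python
def component_importance(model, components, evaluate, ablate):
    """Score each component by the task-performance drop when it is removed.

    `components` is a list of identifiers (e.g. channels or attention
    heads), `evaluate(model)` returns a task metric, and `ablate(model, c)`
    returns a copy of the model with component `c` disabled.
    """
    baseline = evaluate(model)
    # A higher score means the task relies more heavily on that component.
    return {c: baseline - evaluate(ablate(model, c)) for c in components}
```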
 
