We present NoGiNet, a deep learning solution for the DREAM 2022 promoter expression challenge. Our network is based on EfficientNetV2, with residual-summation blocks replaced by residual concatenation. Training uses the One Cycle Policy with AdamW, yielding the so-called superconvergence of the model: reduced training time together with improved performance. Convergence and validation performance were further improved by reformulating the initial regression task as a soft-classification problem. Additionally, during training we use an extra binary channel to explicitly mark sequences whose expression measurements are integer-valued and thus likely imprecise. Information from the second strand of each promoter is supplied in a similar way: the dataset is augmented with reverse-complement sequences, and the orientation is explicitly marked in a separate binary channel. Our approach includes no attention-based mechanisms, yet it achieved high internal validation metrics and competitive performance on the public leaderboard. This agrees with recently published studies showing that properly designed convolutional neural networks can outperform attention-based architectures in image analysis.
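The One Cycle Policy mentioned above can be sketched as a simple learning-rate schedule: a warmup phase up to a peak rate followed by an annealing phase down to a much smaller final rate. The function below is an illustrative stand-alone sketch (the parameter names and the linear-warmup/cosine-anneal shape follow common One Cycle implementations, not necessarily the exact settings used for NoGiNet):

```python
import math

def one_cycle_lr(step: int, total_steps: int, max_lr: float,
                 pct_start: float = 0.3, div_factor: float = 25.0,
                 final_div_factor: float = 1e4) -> float:
    """One Cycle learning rate: linear warmup from max_lr/div_factor up to
    max_lr over the first pct_start of training, then cosine annealing down
    to a very small final rate. All hyperparameters here are illustrative."""
    initial_lr = max_lr / div_factor
    final_lr = initial_lr / final_div_factor
    warmup_steps = int(pct_start * total_steps)
    if step < warmup_steps:
        # Warmup phase: linear ramp from initial_lr to max_lr.
        t = step / max(1, warmup_steps)
        return initial_lr + (max_lr - initial_lr) * t
    # Annealing phase: cosine decay from max_lr to final_lr.
    t = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return final_lr + (max_lr - final_lr) * 0.5 * (1.0 + math.cos(math.pi * t))
```

In practice one would call such a schedule once per optimizer step (frameworks such as PyTorch ship a ready-made `OneCycleLR` scheduler); the point of the sketch is only the shape of the cycle that drives superconvergence.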
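The soft-classification reformulation can be illustrated as follows: instead of regressing a single expression value, the continuous target is spread over a set of class bins, with weight split between the two nearest bin centers. The binning scheme below is an assumed, minimal version for illustration, not the paper's exact discretization:

```python
def soft_label(value: float, bin_centers: list[float]) -> list[float]:
    """Turn a continuous target into a soft class distribution by linearly
    interpolating between the two nearest bin centers (weights sum to 1).
    bin_centers must be sorted in increasing order."""
    n = len(bin_centers)
    # Clamp values outside the bin range to the edge bins.
    if value <= bin_centers[0]:
        return [1.0] + [0.0] * (n - 1)
    if value >= bin_centers[-1]:
        return [0.0] * (n - 1) + [1.0]
    label = [0.0] * n
    for i in range(n - 1):
        lo, hi = bin_centers[i], bin_centers[i + 1]
        if lo <= value <= hi:
            w = (value - lo) / (hi - lo)
            label[i] = 1.0 - w      # weight on the lower bin
            label[i + 1] = w        # weight on the upper bin
            break
    return label
```

A model trained against such targets with a cross-entropy-style loss predicts a distribution over bins; a point estimate of expression can then be recovered as the expectation over bin centers.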
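The strand-augmentation scheme described above can be sketched in a few lines: each promoter contributes both its forward sequence and its reverse complement, and the orientation is carried in an extra constant binary channel alongside the one-hot base encoding. Function and channel names here are illustrative:

```python
# Translation table for DNA complementation (N maps to itself).
COMPLEMENT = str.maketrans("ACGTN", "TGCAN")
BASE_INDEX = {"A": 0, "C": 1, "G": 2, "T": 3}

def reverse_complement(seq: str) -> str:
    """Return the reverse complement of a DNA sequence."""
    return seq.translate(COMPLEMENT)[::-1]

def encode(seq: str, is_reverse: bool) -> list[list[int]]:
    """One-hot encode a sequence (4 channels) plus a fifth binary channel
    that is constant along the sequence and marks the strand orientation."""
    rows = []
    for base in seq:
        one_hot = [0, 0, 0, 0]
        if base in BASE_INDEX:        # ambiguous bases (e.g. N) stay all-zero
            one_hot[BASE_INDEX[base]] = 1
        rows.append(one_hot + [int(is_reverse)])
    return rows

def augment(seq: str):
    """Yield both strands of a promoter, each tagged with its orientation."""
    yield encode(seq, is_reverse=False)
    yield encode(reverse_complement(seq), is_reverse=True)
```

The same pattern (a constant binary channel appended to the one-hot encoding) is what the abstract describes for flagging integer-valued, likely imprecise expression measurements during training.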