Abstract: This paper describes our approach to the AuTexTification (Automated Text Identification) shared task held as part of the IberLEF 2023 conference. Machine-generated text is a growing problem: as large volumes of generated texts spread across the Internet, people are often misled by such content. In this article, we present a model for machine-generated text detection based on BERT-like encoder models. To achieve better results, we fine-tuned the large pre-trained language encoders XLM-RoBERTa, mDeBERTa, and MiniLM-V2. To further improve detector quality, we performed extensive preprocessing and expansion of the training data while preserving its structural properties. The method described in this paper helped our team reach a score of about 66% on the English binary dataset in the final competition results.