Abstract: Very-high-energy gamma-ray photons interact with the atmosphere to produce cascades of secondary particles – extensive air showers (EASs) – which in turn generate very short flashes of Cherenkov radiation. These flashes are detected on the ground with Imaging Air Cherenkov Telescopes (IACTs). In the TAIGA experiment, in addition to images directly detected and recorded by the experimental facilities, images obtained through simulation are used extensively. Earlier, we applied a machine learning technique called Generative Adversarial Networks (GANs) to quickly generate images of gamma events for the TAIGA experiment. The initial analysis of the generated images demonstrated the applicability of the method, but revealed some features requiring additional refinement of the network. In particular, it was important to teach the network that in our case the images have a specific shape and orientation. In this paper we discuss the possibility of improving the generated images by preprocessing the training dataset. We also present an example of a GAN built and trained with these requirements in mind. Testing the results with third-party software showed that more than 95% of the generated images were correct, while generation is quite fast: after training, the network creates about 400 event images per second.