This paper presents a solution for creating synthetic datasets to train convolutional neural networks (CNNs) for plant-weed classification. We use the Unity game engine to generate images of simulated, procedurally generated fields of sunflowers and weeds. The visual imagery is produced by Unity's photorealistic real-time rendering engine. Moreover, we include the regular red, green, and blue (RGB) channels plus near-infrared (NIR) channel data. Since Unity does not simulate NIR illumination, this is done by including aligned textures from the RGB and NIR channels separately. Our main contribution is the simulation of the sunflower plant, including both RGB and NIR data, based on a real image dataset of low quality and quantity. This yields improved datasets that can reliably train CNNs for plant-weed segmentation classification. We achieve high intersection over union (IoU) performance when we build a training dataset from a small, hand-picked subset of synthetic images containing a high proportion of plant and weed pixels, together with the available real images. Our best result is an IoU of 76.4%, obtained when training the CNN with only sunflower synthetic images. This is close to the results of our previous research, where the available real dataset (for sugar beets) had ideal quality and quantity. We therefore conclude that synthetic imagery including both RGB and NIR data can greatly improve plant-weed segmentation classification IoU performance when the available real images are limited in quality and quantity.
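For clarity, the per-class intersection over union (IoU) metric reported above can be sketched as follows. This is a minimal illustrative computation, not the paper's actual evaluation pipeline; the class labels and toy masks are assumptions made here for the example.

```python
# Minimal sketch of per-class intersection over union (IoU) for
# semantic segmentation. The label encoding (0 = soil, 1 = crop,
# 2 = weed) and the toy masks are illustrative assumptions.

def iou(pred, truth, cls):
    """IoU for one class between two flat sequences of pixel labels."""
    inter = sum(1 for p, t in zip(pred, truth) if p == cls and t == cls)
    union = sum(1 for p, t in zip(pred, truth) if p == cls or t == cls)
    return inter / union if union else 0.0

# Toy ground-truth and predicted masks (flattened images)
truth = [0, 1, 1, 2, 2, 0, 1, 2]
pred  = [0, 1, 1, 2, 0, 0, 1, 1]

crop_iou = iou(pred, truth, 1)  # intersection 3, union 4 -> 0.75
weed_iou = iou(pred, truth, 2)  # intersection 1, union 3 -> 1/3
```

In practice this is computed over all pixels of a test set and often averaged across classes (mean IoU); the 76.4% figure in the abstract is such a segmentation IoU score.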
2022, Simulation and Modeling Methodologies, Technologies and Applications, pages 42-63
Augmentation of Sunflower-Weed Segmentation Classification with Unity Generated Imagery Including Near Infrared Sensor Data (04b Conference paper in volume)
Carbone Carlos, Potena Ciro, Nardi Daniele
ISBN: 978-3-030-84810-1; 978-3-030-84811-8
Research group: Artificial Intelligence and Robotics