Articles
BioAuxNet: Recognition of ground-dwelling nocturnal fauna using a mask region-based convolutional neural network
Published: 1 September 2023
Abstract
The study of ground-dwelling nocturnal fauna is a challenging research task with implications in several domains, such as controlling pesticide use, predicting crop yields, and identifying plant diseases. This paper presents an automatic method for recognizing this fauna in field images acquired over several years.
This method, named BioAuxNet, is based on a deep learning algorithm that combines recent advances in neural networks and computer vision.
Using more than 100,000 raw images taken in the field over four years, we created the first realistic dataset of 8 common ground-dwelling nocturnal fauna species: ground beetles, mice, harvestmen, slugs, shrews and earthworms. For our model, we classically used transfer learning (fine-tuning a pre-trained network) and data augmentation (generating additional variants of certain images for better generalization). As a result, our model recognizes the fauna with an accuracy of 84.31%, which is very encouraging for its use in real-world situations. The perspectives of this study are manifold, including the possibility of monitoring the numbers of certain beneficial species in the field.
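The abstract does not detail how the data augmentation was performed, so as an illustration only (not the authors' actual pipeline), a minimal sketch of the kind of augmentation mentioned, multiplying certain images to improve generalization, could look like this, assuming images are held as NumPy arrays of shape (height, width, channels):

```python
import numpy as np

def augment(image: np.ndarray) -> list:
    """Return simple augmented variants of an (H, W, C) image array:
    the original, horizontal/vertical flips, and 90/180/270-degree rotations."""
    variants = [image]
    variants.append(np.fliplr(image))  # horizontal flip (mirror left-right)
    variants.append(np.flipud(image))  # vertical flip (mirror top-bottom)
    for k in (1, 2, 3):                # rotations by k * 90 degrees
        variants.append(np.rot90(image, k))
    return variants

# Tiny dummy "image" to show the multiplication factor per input image.
img = np.arange(2 * 3 * 1).reshape(2, 3, 1)
aug = augment(img)
print(len(aug))  # 6 variants per input image
```

Such geometric transforms are a common default for field imagery because the orientation of an animal in a top-down trap image carries no class information; the real study may well have used different or additional transforms.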