YOLO-based miner detection in low-light underground mines

Over the years, injuries and fatalities from underground mine emergencies have been among the persistent hazards of underground mining operations. Numerous studies have been conducted by institutions such as the National Institute for Occupational Safety and Health (NIOSH) and its partner agencies to address this challenge. However, the goal of zero fatalities remains elusive. These injuries and fatalities cannot be fully eliminated because, among other things, humans are prone to error, and errors can both trigger emergencies and lead to unsafe responses to them.

Among the various technologies developed to facilitate emergency response are robots, which have been employed on such missions in recent years. Notable examples include Numbat, developed by Australia’s CSIRO for search and rescue, and the Remotec Wolverine robot used by the US Mine Safety and Health Administration (MSHA) for rescue and recovery missions in US mines. All of these deployments were from the surface into the underground workings [1]. This mode of deployment can face issues such as restricted entry and delays in reaching the affected area. To avoid these challenges, robots can be stationed underground and deployed by the miners themselves during emergencies, which could improve the chances of recovery and reduce evacuation time.

Another challenge in using robots for underground rescue missions is vision. Most robots are equipped with RGB or thermal sensors to support miner detection, but these sensors alone cannot intelligently detect miners. Researchers in Missouri S&T’s Mining Sustainability Modeling Research Group leveraged machine learning, specifically object detection, to address this challenge. They created a unique thermal image dataset of underground mine operations during practical lessons at the Missouri University of Science and Technology Experimental Mine and used it to train two established algorithms, YOLOv5 and YOLOv8, via transfer learning. Figure 1 shows samples of the images collected for this experiment.

Figure 1. Thermal images captured in three color modes [2].
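
As a rough illustration of the transfer-learning step, the sketch below fine-tunes a COCO-pretrained YOLOv8 nano model on a custom thermal dataset using the Ultralytics Python package. The dataset configuration name (thermal_miners.yaml) and the training hyperparameters are illustrative assumptions, not the exact settings used in the study [2].

# Minimal transfer-learning sketch using the Ultralytics YOLOv8 API.
# Assumes "pip install ultralytics" and a hypothetical dataset config
# "thermal_miners.yaml" pointing to thermal images labeled with a "person" class.
from ultralytics import YOLO

# Start from COCO-pretrained weights so learned features transfer to the
# thermal domain instead of training from scratch.
model = YOLO("yolov8n.pt")

# Fine-tune on the thermal dataset; epochs, image size, and batch size
# are illustrative values, not the study's settings.
model.train(
    data="thermal_miners.yaml",
    epochs=100,
    imgsz=640,
    batch=16,
)
# The best fine-tuned weights are written to runs/detect/train/weights/best.pt
# by default and can be loaded later for evaluation and deployment.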

The transfer-learned models were evaluated against each other using metrics such as mAP_50, precision, recall, F1 score, and inference speed. The results showed that YOLOv8 has superior detection performance compared to YOLOv5, whereas YOLOv5 is faster at inference. As a compromise between accuracy and speed, however, YOLOv8 is the better choice. In addition, the researchers examined the efficacy of the trained models in detecting miners during a specific emergency, a small fire scenario. The results showed that both algorithms could detect miners in the fire scene; however, they misdetected fires as “persons” in two cases. Figure 2 shows the inference results for the small fire emergency. In future work, we will explore how to incorporate the YOLOv8 algorithm into robotic platforms, such as Boston Dynamics’ Spot Mini robot.

Figure 2. Detection results on fire images for the nano variant of both models [2].
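
To show how the evaluation and the fire-scene inference could be reproduced, the sketch below validates a fine-tuned model and then runs it on a fire-scene image. The weight path, dataset config, and image file name are hypothetical placeholders; F1 is computed here from the reported precision and recall.

# Evaluation and inference sketch using the Ultralytics YOLOv8 API.
# File names are hypothetical placeholders, not the study's artifacts.
from ultralytics import YOLO

model = YOLO("runs/detect/train/weights/best.pt")  # fine-tuned weights

# Validate on the split defined in the dataset config; Ultralytics
# reports mAP_50, mean precision, and mean recall for the box task.
metrics = model.val(data="thermal_miners.yaml")
precision = metrics.box.mp
recall = metrics.box.mr
f1 = 2 * precision * recall / (precision + recall + 1e-9)
print(f"mAP_50={metrics.box.map50:.3f} P={precision:.3f} R={recall:.3f} F1={f1:.3f}")

# Run inference on a fire-scene thermal image and list the detections,
# which makes misdetections (e.g., a fire labeled "person") easy to spot.
results = model.predict("fire_scene_thermal.jpg", conf=0.25)
for box in results[0].boxes:
    print(results[0].names[int(box.cls)], round(float(box.conf), 3))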

References

1. Murphy RR, Kravitz J, Stover SL, Shoureshi R (2009) Mobile robots in mine rescue and recovery. IEEE Robot Autom Mag 16:91–103. https://doi.org/10.1109/MRA.2009.932521

2. Addy C, Nadendla VSS, Awuah-Offei K (2025) YOLO-based miner detection using thermal images in underground mines. Min Metall Explor. https://doi.org/10.1007/s42461-025-01249-6
