
Autonomous Real-time Forest Fire Detection and Firefighting using Unmanned Aerial Vehicles

Dilfanian, Erfan ORCID: https://orcid.org/0009-0004-0855-1875 (2025) Autonomous Real-time Forest Fire Detection and Firefighting using Unmanned Aerial Vehicles. Masters thesis, Concordia University.

Dilfanian_MASc_F2025.pdf - Accepted Version (application/pdf, 9MB)
Available under License Spectrum Terms of Access.

Abstract

This thesis proposes a framework for autonomous real-time fire detection and firefighting using Unmanned Aerial Vehicles (UAVs). In recent years, wildfires have inflicted increasingly severe damage on ecosystems and human settlements. UAVs have proven highly effective for forest fire detection, as they carry multiple onboard sensors and can rapidly cover large, inaccessible areas. Forest firefighting with UAVs remains an emerging field of study; although UAVs offer significant advantages for this task owing to their mobility, flexibility, and ability to operate in hazardous environments, developing reliable algorithms for the application presents several challenges. Because the payload a UAV can carry is limited, water or fire retardant must be dispersed precisely, and accurate water dropping is further complicated by the errors introduced by the UAV's onboard GPS sensor. Moreover, a robust UAV-based suppression method for the line-of-fire scenario, the most common form of forest fire occurrence, has not been presented in the existing literature. Hence, this thesis proposes a framework in which a UAV first runs a forest fire detection model to find the direction of the fire, then initiates a flight pattern optimized for a fire spot localization algorithm, and, upon acquiring the GPS coordinates of the fire spots, suppresses the line of fire, all within a single flight. Visual feedback is integrated with a designed control system that adjusts the UAV to the desired position prior to releasing the fire retardant. The developed algorithms are verified through outdoor flight tests using a DJI M300 RTK UAV equipped with an H20T camera system and a custom water tanker system mounted on the UAV.
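The vision-feedback correction described in the abstract — adjusting the UAV's position from the fire spot's offset in the camera image before releasing the retardant — can be sketched as below. This is a minimal illustrative sketch only, assuming a downward-facing pinhole camera at a known altitude; the function and parameter names are hypothetical and do not reproduce the thesis's actual camera model or control law.

```python
import math

def pixel_to_ground_offset(px, py, img_w, img_h, hfov_deg, vfov_deg, altitude_m):
    """Convert a detected fire spot's pixel offset from the image centre
    into a ground-plane (forward, right) correction in metres.

    Assumes a downward-facing pinhole camera at altitude_m; all names
    here are illustrative, not the thesis's actual implementation.
    """
    # Normalized offsets in [-0.5, 0.5] from the image centre.
    nx = (px - img_w / 2.0) / img_w
    ny = (py - img_h / 2.0) / img_h
    # Ground footprint spanned by the full field of view at this altitude.
    ground_w = 2.0 * altitude_m * math.tan(math.radians(hfov_deg) / 2.0)
    ground_h = 2.0 * altitude_m * math.tan(math.radians(vfov_deg) / 2.0)
    # Image y grows downward, so a spot above centre means "move forward".
    forward = -ny * ground_h
    right = nx * ground_w
    return (forward, right)

# Example: spot 100 px right of centre in a 1920x1080 frame at 20 m altitude.
fwd, right = pixel_to_ground_offset(1060, 540, 1920, 1080, 82.9, 52.0, 20.0)
```

In a closed loop, such a correction would be fed repeatedly to the flight controller until the offset falls below a release threshold.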

Divisions: Concordia University > Gina Cody School of Engineering and Computer Science > Mechanical, Industrial and Aerospace Engineering
Item Type: Thesis (Masters)
Authors: Dilfanian, Erfan
Institution: Concordia University
Degree Name: M.A. Sc.
Program: Mechanical Engineering
Date: 16 July 2025
Thesis Supervisor(s): Zhang, Youmin
Keywords: Unmanned Aerial Vehicles (UAVs), Forest fire detection, Forest firefighting, Vision-based control, Localization
ID Code: 995817
Deposited By: Erfan Dilfanian
Deposited On: 04 Nov 2025 17:13
Last Modified: 04 Nov 2025 17:13
Additional Information: Video demonstration of the outdoor flight tests is available at: https://www.youtube.com/watch?v=ZP2KoxtwAsg&t=116s

All items in Spectrum are protected by copyright, with all rights reserved. The use of items is governed by Spectrum's terms of access.
