
Robust and Fast Schemes for Generation of Matched Features in MIS Images

Pourshahabi, Muhammad Reza (2024) Robust and Fast Schemes for Generation of Matched Features in MIS Images. PhD thesis, Concordia University.

Text (application/pdf): Pourshahabi_PhD_F2024.pdf - Accepted Version (3MB)
Available under License Spectrum Terms of Access.

Abstract

Robotic-assisted minimally invasive surgery (MIS) offers numerous benefits, including smaller incisions, faster recovery, enhanced precision, and remote operation. Image processing operations such as 3D visualization, augmented reality, and image registration, which are often feature-based, are used in MIS. Feature detection, extraction, and matching (FDEM) and feature matching refinement (FMR) constitute the cornerstone of these operations. MIS images are affected by deformation, occlusions, and specular reflection, which hinder the FDEM and FMR processes and severely reduce the number of matched features.
FDEM is a process in which, given a pair of images, certain distinctive features are detected in each image, then suitably represented as feature vectors, and finally, the corresponding feature vectors are compared and matched, leading to a set of matched features known as the putative set for the pair. FMR, on the other hand, is a process in which falsely matched pairs of features are removed, as far as possible, from a putative set. The existing FDEM and FMR schemes are computationally expensive, or they lead to a set of matched features that is not well dispersed over the region of interest and contains an insufficient number of true matches.
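For concreteness, the following sketch illustrates how a putative set is typically formed by a conventional FDEM pipeline: features are detected and described in each image, and their descriptors are compared with a ratio test. It uses OpenCV's ORB detector and brute-force matcher with placeholder image file names, and it is only a generic illustration of the FDEM process, not the scheme proposed in the thesis.

import cv2

# Load the pair of MIS images (file names are placeholders).
img1 = cv2.imread("frame_a.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_b.png", cv2.IMREAD_GRAYSCALE)

# Detection and extraction: keypoints and binary descriptors for each image.
orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Matching: Hamming distance suits ORB's binary descriptors; k=2 for the ratio test.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
knn_pairs = matcher.knnMatch(des1, des2, k=2)

# Lowe's ratio test yields the putative set of matched features for the pair.
putative = []
for pair in knn_pairs:
    if len(pair) == 2 and pair[0].distance < 0.8 * pair[1].distance:
        putative.append(pair[0])
print(f"putative set size: {len(putative)}")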
The overall objective of this thesis is to propose robust and fast schemes for the generation of matched features in MIS images. In the first part of the thesis, a very fast and accurate FMR scheme is proposed. The main idea used in developing this scheme is to determine the size of local neighborhoods so that the smoothness of the deformation field can be effectively exploited to check whether the feature topology is preserved between corresponding regions of the pair of images, thereby identifying the true matches in its putative set. In the second part, a fast and accurate FDEM scheme that combines the strong attributes of three well-known FDEM schemes, SIFT, SURF, and ORB, is proposed. The focus is on producing putative sets of matched features that have good spatial quality in addition to good matching quality. Extensive experiments are conducted to demonstrate the effectiveness of the proposed FMR and FDEM schemes.
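As a rough illustration of the kind of neighborhood-consistency reasoning described above, and not the FMR algorithm proposed in the thesis, the sketch below keeps a putative match only if most of its k nearest neighbors in the first image are also among its nearest neighbors in the second image, which is the behavior a smooth deformation field would preserve. The neighborhood size k and the overlap threshold are illustrative assumptions.

import numpy as np
from scipy.spatial import cKDTree

def refine_matches(pts1, pts2, k=8, min_overlap=0.5):
    # pts1, pts2: (N, 2) arrays of coordinates of the putative matches
    # in the first and second image, respectively.
    tree1, tree2 = cKDTree(pts1), cKDTree(pts2)
    # Query k + 1 neighbors because each point is returned as its own nearest neighbor.
    _, nn1 = tree1.query(pts1, k=k + 1)
    _, nn2 = tree2.query(pts2, k=k + 1)
    keep = np.empty(len(pts1), dtype=bool)
    for i in range(len(pts1)):
        # A true match under a smooth deformation retains most of its neighbors.
        common = np.intersect1d(nn1[i, 1:], nn2[i, 1:]).size
        keep[i] = common / k >= min_overlap
    return keep

In a full pipeline, pts1 and pts2 would be the coordinates of the putative set produced by the FDEM step, and only the matches flagged by the returned mask would be passed on to registration, reconstruction, or augmented-reality overlay.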

Divisions: Concordia University > Gina Cody School of Engineering and Computer Science > Electrical and Computer Engineering
Item Type: Thesis (PhD)
Authors: Pourshahabi, Muhammad Reza
Institution: Concordia University
Degree Name: Ph.D.
Program: Electrical and Computer Engineering
Date: 19 July 2024
Thesis Supervisor(s): Ahmad, M. Omair and Swamy, M.N.S.
Keywords: Feature Detection, Extraction, and Matching; Feature Matching Refinement; Minimally Invasive Surgery
ID Code: 994576
Deposited By: Muhammad Reza Pourshahabi
Deposited On: 24 Oct 2024 16:57
Last Modified: 24 Oct 2024 16:57
