[1] A. M. Reitsma and J. D. Moreno, "Ethics of innovative surgery: US surgeons' definitions, knowledge, and attitudes," Journal of the American College of Surgeons, vol. 200, no. 1, pp. 103–110, 2005.
[2] S. Timmermans and E. S. Kolker, "Evidence-based medicine and the reconfiguration of medical knowledge," Journal of Health and Social Behavior, pp. 177–193, 2004.
[3] D. L. Sackett, W. M. Rosenberg, J. M. Gray, R. B. Haynes, and W. S. Richardson, "Evidence based medicine: what it is and what it isn't," pp. 71–72, 1996.
[4] J. E. van Timmeren, D. Cester, S. Tanadini-Lang, H. Alkadhi, and B. Baessler, "Radiomics in medical imaging—"how-to" guide and critical reflection," Insights into Imaging, vol. 11, no. 1, pp. 1–16, 2020.
[5] S. H. Song, H. Park, G. Lee, H. Y. Lee, I. Sohn, H. S. Kim, S. H. Lee, J. Y. Jeong, J. Kim, K. S. Lee et al., "Imaging phenotyping using radiomics to predict micropapillary pattern within lung adenocarcinoma," Journal of Thoracic Oncology, vol. 12, no. 4, pp. 624–632, 2017.
[6] G. D. Tourassi, "Journey toward computer-aided diagnosis: role of image texture analysis," Radiology, vol. 213, no. 2, pp. 317–320, 1999.
[7] Z. Liu, S. Wang, D. Dong, J. Wei, C. Fang, X. Zhou, K. Sun, L. Li, B. Li, M. Wang, and J. Tian, "The applications of radiomics in precision diagnosis and treatment of oncology: opportunities and challenges," Theranostics, vol. 9, no. 5, p. 1303, 2019.
[8] I. Bankman, Handbook of Medical Image Processing and Analysis. Elsevier, 2008.
[9] S. M. Anwar, M. Majid, A. Qayyum, M. Awais, M. Alnowami, and M. K. Khan, "Medical image analysis using convolutional neural networks: a review," Journal of Medical Systems, vol. 42, no. 11, pp. 1–13, 2018.
[10] E. Nasr-Esfahani, S. Samavi, N. Karimi, S. R. Soroushmehr, K. Ward, M. H. Jafari, B. Felfeliyan, B. Nallamothu, and K. Najarian, "Vessel extraction in X-ray angiograms using deep learning," in 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE, 2016, pp. 643–646.
[11] E. Bye and E. McKinney, "Fit analysis using live and 3D scan models," International Journal of Clothing Science and Technology, 2010.
[12] G. J. Iddan and G. Yahav, "Three-dimensional imaging in the studio and elsewhere," in Three-Dimensional Image Capture and Applications IV, vol. 4298. International Society for Optics and Photonics, 2001, pp. 48–55.
[13] N. Sharma and L. M. Aggarwal, "Automated medical image segmentation techniques," Journal of Medical Physics / Association of Medical Physicists of India, vol. 35, no. 1, p. 3, 2010.
[14] E. L. Usery and T. Hahmann, "What is in a contour map? A region-based logical formalization of contour semantics," 2015.
[15] I. Mpiperis, S. Malasiotis, and M. G. Strintzis, "3D face recognition by point signatures and iso-contours," Proc. of SPPRA, 2007.
[16] S. P. Morse, "Concepts of use in contour map processing," Communications of the ACM, vol. 12, no. 3, pp. 147–152, 1969.
[17] "Marching squares," https://en.wikipedia.org/wiki/Marching_squares.
[18] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770–778.
[19] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, "Rethinking the inception architecture for computer vision," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 2818–2826.
[20] O. Ronneberger, P. Fischer, and T. Brox, "U-Net: Convolutional networks for biomedical image segmentation," in International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2015, pp. 234–241.
[21] J. Long, E. Shelhamer, and T. Darrell, "Fully convolutional networks for semantic segmentation," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 3431–3440.
[22] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," Advances in Neural Information Processing Systems, vol. 25, 2012.
[23] "Medium: CNNs," https://medium.com/techiepedia/binary-image-classifier-cnn-using-tensorflow-a3f5d6746697.
[24] "Jeremy Jordan: Intro to CNNs," https://www.jeremyjordan.me/convolutional-neural-networks/.
[25] F. Milletari, N. Navab, and S.-A. Ahmadi, "V-Net: Fully convolutional neural networks for volumetric medical image segmentation," in 2016 Fourth International Conference on 3D Vision (3DV). IEEE, 2016, pp. 565–571.
[26] "Understanding and designing the female pelvic anatomy, a measuring device and an intravaginal device using 3-dimensional modeling techniques and artificial intelligence," https://www.mitacs.ca/en/projects/understanding-and-designing-female-pelvic-anatomy-measuring-device-and-intravaginal-device, 2020.
[27] Cherokee Women's Health Specialists, "Pelvic organ prolapse at just 32 years old," https://cherokeewomenshealth.com/2021/02/pelvic-organ-prolapse-at-just-32-years-old/.
[28] K. A. Jones, J. P. Shepherd, S. S. Oliphant, L. Wang, C. H. Bunker, and J. L. Lowder, "Trends in inpatient prolapse procedures in the United States, 1979–2006," American Journal of Obstetrics and Gynecology, vol. 202, no. 5, p. 501.e1, 2010.
[29] Washington Post, "Vaginal mesh has caused health problems," https://www.washingtonpost.com/national/health-science/vaginal-mesh-has-caused-health-problems-in-many-women-even-as-some-surgeons-vouch-for-its-safety-and-efficacy/2019/01/18/1c4a2332-ff0f-11e8-ad40-cdfd0e0dd65a_story.html.
[30] Mayo Clinic, "Pessary types," https://www.mayoclinic.org/diseases-conditions/urinary-incontinence/multimedia/pessary-use/img-20006056.
[31] H. Yu, H. Wang, Y. Shi, K. Xu, X. Yu, and Y. Cao, "The segmentation of bones in pelvic CT images based on extraction of key frames," BMC Medical Imaging, vol. 18, no. 1, p. 18, 2018.
[32] P. T. Truc, S. Lee, and T.-S. Kim, "A density distance augmented Chan-Vese active contour for CT bone segmentation," in 2008 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE, 2008, pp. 482–485.
[33] D. Kainmueller, H. Lamecker, S. Zachow, and H.-C. Hege, "Coupling deformable models for multi-object segmentation," in International Symposium on Biomedical Simulation. Springer, 2008, pp. 69–78.
[34] H. Wang, J. W. Suh, S. R. Das, J. B. Pluta, C. Craige, and P. A. Yushkevich, "Multi-atlas segmentation with joint label fusion," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 3, pp. 611–623, 2012.
[35] H. Wang, M. Moradi, Y. Gur, P. Prasanna, and T. Syeda-Mahmood, "A multi-atlas approach to region of interest detection for medical image classification," in International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2017, pp. 168–176.
[36] F. Yokota, T. Okada, M. Takao, N. Sugano, Y. Tada, N. Tomiyama, and Y. Sato, "Automated CT segmentation of diseased hip using hierarchical and conditional statistical shape models," in International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2013, pp. 190–197.
[37] C. Chu, J. Bai, X. Wu, and G. Zheng, "MASCG: Multi-Atlas Segmentation Constrained Graph method for accurate segmentation of hip CT images," Medical Image Analysis, vol. 26, no. 1, pp. 173–184, 2015.
[38] G. Zeng, X. Yang, J. Li, L. Yu, P.-A. Heng, and G. Zheng, "3D U-Net with multi-level deep supervision: fully automatic segmentation of proximal femur in 3D MR images," in International Workshop on Machine Learning in Medical Imaging. Springer, 2017, pp. 274–282.
[39] F. Chen, J. Liu, Z. Zhao, M. Zhu, and H. Liao, "Three-dimensional feature-enhanced network for automatic femur segmentation," IEEE Journal of Biomedical and Health Informatics, vol. 23, no. 1, pp. 243–252, 2017.
[40] Y. Chang, Y. Yuan, C. Guo, Y. Wang, Y. Cheng, and S. Tamura, "Accurate pelvis and femur segmentation in hip CT with a novel patch-based refinement," IEEE Journal of Biomedical and Health Informatics, vol. 23, no. 3, pp. 1192–1204, 2018.
[41] P. Liu, H. Han, Y. Du, H. Zhu, Y. Li, F. Gu, H. Xiao, J. Li, C. Zhao, L. Xiao et al., "Deep learning to segment pelvic bones: large-scale CT datasets and baseline models," International Journal of Computer Assisted Radiology and Surgery, vol. 16, no. 5, pp. 749–756, 2021.
[42] M. Tan and Q. V. Le, "EfficientNet: Rethinking model scaling for convolutional neural networks," arXiv preprint arXiv:1905.11946, 2019.
[43] R. Szeliski, Computer Vision: Algorithms and Applications. Springer Science & Business Media, 2010.
[44] J. Schindelin, I. Arganda-Carreras, E. Frise, V. Kaynig, M. Longair, T. Pietzsch, S. Preibisch, C. Rueden, S. Saalfeld, B. Schmid et al., "Fiji: an open-source platform for biological-image analysis," Nature Methods, vol. 9, no. 7, pp. 676–682, 2012.
[45] K. Clark, B. Vendt, K. Smith, J. Freymann, J. Kirby, P. Koppel, S. Moore, S. Phillips, D. Maffitt, M. Pringle et al., "The Cancer Imaging Archive (TCIA): maintaining and operating a public information repository," Journal of Digital Imaging, vol. 26, no. 6, pp. 1045–1057, 2013.
[46] M. J. Ackerman, "The visible human project," Proceedings of the IEEE, vol. 86, no. 3, pp. 504–511, 1998.
[47] W. R. Crum, O. Camara, and D. L. Hill, "Generalized overlap measures for evaluation and validation in medical image analysis," IEEE Transactions on Medical Imaging, vol. 25, no. 11, pp. 1451–1461, 2006.
[48] H.-H. Chang, A. H. Zhuang, D. J. Valentino, and W.-C. Chu, "Performance measure characterization for evaluating neuroimage segmentation algorithms," NeuroImage, vol. 47, no. 1, pp. 122–135, 2009.
[49] P. Yakubovskiy, "Segmentation models," https://segmentation-models.readthedocs.io/en/latest/, 2019.
[50] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, "ImageNet: A large-scale hierarchical image database," in 2009 IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2009, pp. 248–255.
[51] P. Jois, "Qualitative assessment of breast asymmetry using 3-dimensional modeling, with computer vision and deep learning," https://www.mitacs.ca/en/projects/qualitative-assessment-breast-asymmetry-using-3-dimensional-modeling-computer-vision-and, 2021.
[52] J. Semple, K. A. Metcalfe, H. T. Lynch, C. Kim-Sing, L. Senter, T. Pal, P. Ainsworth, J. Lubinski, N. Tung, C. Eng et al., "International rates of breast reconstruction after prophylactic mastectomy in BRCA1 and BRCA2 mutation carriers," Annals of Surgical Oncology, vol. 20, no. 12, pp. 3817–3822, 2013.
[53] K. E. Türk and M. Yılmaz, "The effect on quality of life and body image of mastectomy among breast cancer survivors," European Journal of Breast Health, vol. 14, no. 4, p. 205, 2018.
[54] J. Bostwick, Plastic and Reconstructive Breast Surgery. Quality Medical Pub., 1990, vol. 2.
[55] K. Sneeuw, N. Aaronson, J. Yarnold, M. Broderick, J. Regan, G. Ross, and A. Goddard, "Cosmetic and functional outcomes of breast conserving treatment for early stage breast cancer. 1. Comparison of patients' ratings, observers' ratings and objective assessments," Radiotherapy and Oncology, vol. 25, no. 3, pp. 153–159, 1992.
[56] T. L. H. Brown, C. Ringrose, R. Hyland, A. Cole, and T. Brotherston, "A method of assessing female breast morphometry and its clinical application," vol. 52, no. 5. Elsevier, 1999, pp. 355–359.
[57] WHO, "Breast Cancer Early Diagnosis and Screening," https://www.who.int/news-room/fact-sheets/detail/breast-cancer, 2019.
[58] Y.-J. Liu, Aesthetics of the Female Breast: Correlation of Pluralistic Evaluations with Volume and Surface Area. Yale Digital Library, 2009.
[59] WikiHow, "Breast volume test," https://www.wikihow.com/Weigh-Your-Breasts#/Image:Weigh-Your-Breasts/.
[60] J. C. Lowery, E. G. Wilkins, W. M. Kuzon, and J. A. Davis, "Evaluations of aesthetic results in breast reconstruction: an analysis of reliability," Annals of Plastic Surgery, vol. 36, no. 6, pp. 601–606, 1996.
[61] R. D. Pezner, J. A. Lipsett, N. L. Vora, and K. R. Desai, "Limited usefulness of observer-based cosmesis scales employed to evaluate patients treated conservatively for breast cancer," International Journal of Radiation Oncology, Biology, Physics, vol. 11, no. 6, pp. 1117–1119, 1985.
[62] J. Lee, M. Kawale, F. A. Merchant, J. Weston, M. C. Fingeret, D. Ladewig, G. P. Reece, M. A. Crosby, E. K. Beahm, and M. K. Markey, "Validation of stereophotogrammetry of the human torso," Breast Cancer: Basic and Clinical Research, vol. 5, p. BCBCR.S6352, 2011.
[63] A. Losken, H. Seify, D. D. Denson, A. A. Paredes Jr., and G. W. Carlson, "Validating three-dimensional imaging of the breast," Annals of Plastic Surgery, vol. 54, no. 5, pp. 471–476, 2005.
[64] D. Sheffer, R. Herron, W. Morek, F. Proietti-Orlandi, C. Loughry, R. Hamor, R. Liebelt, and R. Varga, "Stereophotogrammetric method for breast cancer detection," in Biostereometrics '82, vol. 361. SPIE, 1983, pp. 120–124.
[65] W. Krois, A. K. Romar, T. Wild, P. Dubsky, R. Exner, P. Panhofer, R. Jakesz, M. Gnant, and F. Fitzal, "Objective breast symmetry analysis with the breast analyzing tool (BAT): improved tool for clinical trials," Breast Cancer Research and Treatment, vol. 164, no. 2, pp. 421–427, 2017.
[66] R. Hartmann, M. Weiherer, D. Schiltz, M. Baringer, V. Noisser, V. Hösl, A. Eigenberger, S. Seitz, C. Palm, L. Prantl et al., "New aspects in digital breast assessment: further refinement of a method for automated digital anthropometry," Archives of Gynecology and Obstetrics, vol. 303, no. 3, pp. 721–728, 2021.
[67] M. Eder, F. v. Waldenfels, A. Swobodnik, M. Klöppel, A.-K. Pape, T. Schuster, S. Raith, E. Kitzler, N. A. Papadopulos, H.-G. Machens et al., "Objective breast symmetry evaluation using 3-D surface imaging," The Breast, vol. 21, no. 2, pp. 152–158, 2012.
[68] Konica Minolta, "Vivid 3D digitizer," https://www.upc.edu/sct/ca/documents_equipament/d_288_id-715.pdf, 2002.
[69] Canfield Scientific, "VECTRA XT 3D Imaging Systems," https://www.canfieldsci.com/imaging-systems/vectra-xt-3d-imaging-system/, 2022.
[70] Y. Yang, D. Mu, B. Xu, W. Li, X. Zhang, Y. Lin, and H. Li, "An intraoperative measurement method of breast symmetry using three-dimensional scanning technique in reduction mammaplasty," Aesthetic Plastic Surgery, vol. 45, no. 5, pp. 2135–2145, 2021.
[71] S. Amini and M. Kersten-Oertel, "Augmented reality mastectomy surgical planning prototype using the HoloLens template for healthcare technology letters," Healthcare Technology Letters, vol. 6, no. 6, p. 261, 2019.
[72] C. Maple, "Geometric design and space planning using the marching squares and marching cube algorithms," in 2003 International Conference on Geometric Modeling and Graphics. Proceedings. IEEE, 2003, pp. 90–95.
[73] R. Courant, H. Robbins, and I. Stewart, What Is Mathematics?: An Elementary Approach to Ideas and Methods. Oxford University Press, USA, 1996.
[74] "Scandy Pro: 3D scanning," https://www.scandy.co/.
[75] "Likert scale," https://en.wikipedia.org/wiki/Likert_scale.
[76] J. Cai, S. Gu, and L. Zhang, "Learning a deep single image contrast enhancer from multi-exposure images," IEEE Transactions on Image Processing, vol. 27, no. 4, pp. 2049–2062, 2018.
[77] J. Fu, J. Liu, H. Tian, Y. Li, Y. Bao, Z. Fang, and H. Lu, "Dual attention network for scene segmentation," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 3146–3154.
[78] Z. Huang, X. Wang, L. Huang, C. Huang, Y. Wei, and W. Liu, "CCNet: Criss-cross attention for semantic segmentation," in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 603–612.
[79] S. Zhao, B. Wu, W. Chu, Y. Hu, and D. Cai, "Correlation maximized structural similarity loss for semantic segmentation," arXiv preprint arXiv:1910.08711, 2019.
[80] M. M. Kawale, G. P. Reece, M. A. Crosby, E. K. Beahm, M. C. Fingeret, M. K. Markey, and F. A. Merchant, "Automated identification of fiducial points on 3D torso images," Biomedical Engineering and Computational Biology, vol. 5, p. BECB.S11800, 2013.