American Journal of Mechanical Engineering
ISSN (Print): 2328-4102  ISSN (Online): 2328-4110
American Journal of Mechanical Engineering. 2020, 8(2), 76-87
DOI: 10.12691/ajme-8-2-5
Open Access Article

Monocular Visual SLAM for Underwater Navigation in Turbid and Dynamic Environments

Chinthaka Amarasinghe1, Asanga Ratnaweera2 and Sanjeeva Maitripala2

1Department of Science & Technology, Uva Wellassa University, Badulla, Sri Lanka

2Department of Mechanical Engineering, University of Peradeniya, Peradeniya, Sri Lanka

Pub. Date: August 28, 2020

Cite this paper:
Chinthaka Amarasinghe, Asanga Ratnaweera and Sanjeeva Maitripala. Monocular Visual SLAM for Underwater Navigation in Turbid and Dynamic Environments. American Journal of Mechanical Engineering. 2020; 8(2):76-87. doi: 10.12691/ajme-8-2-5


Localization, navigation, and mapping using vision-based algorithms are active topics in underwater robotics. Although many such algorithms have been developed in recent years, especially in the ground and aerial robotics communities, applying them directly to underwater navigation remains challenging because of the visual degradation induced by the medium. In this paper, we propose UW-SLAM (Underwater SLAM), a new monocular visual SLAM algorithm designed for underwater environments that addresses turbidity and scene dynamics. The proposed method was evaluated on several underwater datasets and compared against state-of-the-art monocular SLAM methods.
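The turbidity handling that the abstract refers to is typically built on single-image restoration. The sketch below illustrates the dark channel prior of He et al. [33], one of the dehazing techniques cited in the reference list; it is a minimal pure-Python illustration on tiny images, not the paper's UW-SLAM implementation, and the function names are my own.

```python
def dark_channel(img, patch=1):
    """Per-pixel minimum over RGB channels within a (2*patch+1)^2 window.
    img is a list of rows of (r, g, b) tuples with values in [0, 1]."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = []
            for dy in range(-patch, patch + 1):
                for dx in range(-patch, patch + 1):
                    yy = min(max(y + dy, 0), h - 1)  # clamp at borders
                    xx = min(max(x + dx, 0), w - 1)
                    vals.append(min(img[yy][xx]))    # min over channels
            out[y][x] = min(vals)                    # min over the patch
    return out

def estimate_transmission(img, A, omega=0.95, patch=1):
    """t(x) = 1 - omega * dark_channel(I / A); omega < 1 keeps a little haze."""
    norm = [[tuple(c / a for c, a in zip(px, A)) for px in row] for row in img]
    dc = dark_channel(norm, patch)
    return [[1.0 - omega * v for v in row] for row in dc]

def recover(img, A, t, t0=0.1):
    """Invert the haze model I = J*t + A*(1-t); t0 avoids division blow-up."""
    return [[tuple((c - a) / max(tv, t0) + a for c, a in zip(px, A))
             for px, tv in zip(row, trow)]
            for row, trow in zip(img, t)]
```

For a uniform scene J = (0, 0.2, 0.4) hazed with A = (1, 1, 1) and t = 0.5, the observed image is (0.5, 0.6, 0.7) everywhere; with omega = 1.0 the estimated transmission is exactly 0.5 and `recover` returns the original radiance.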

Keywords: monocular visual navigation, underwater vision, visual SLAM

This work is licensed under a Creative Commons Attribution 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.


[1]  L. Paull, S. Saeedi, M. Seto, and H. Li, “AUV navigation and localization: A review,” IEEE Journal of Oceanic Engineering, vol. 39, no. 1. pp. 131-149, 2014.
[2]  L. Chen, S. Wang, K. Mcdonald-maier, and H. Hu, “Towards autonomous localization and mapping of AUVs: a survey,” vol. 1, no. 2, pp. 97-120, 2013.
[3]  F. Bonin-Font, A. Ortiz, and G. Oliver, “Visual Navigation for Mobile Robots: A Survey,” J. Intell. Robot. Syst., vol. 53, no. 3, pp. 263-296, 2008.
[4]  D. Loebis, R. Sutton, and J. Chudley, “Review of multisensor data fusion techniques and their application to autonomous underwater vehicle navigation,” J. Mar. Eng. Technol., vol. 1, no. 1, pp. 3-14, 2002.
[5]  J. C. Kinsey, R. Eustice, and L. L. Whitcomb, “A Survey of Underwater Vehicle Navigation: Recent Advances and New Challenges,” 7th Conf. Manoeuvring Control Mar. Cr., pp. 1-12, 2006.
[6]  J. C. Kinsey, R. M. Eustice, and L. L. Whitcomb, “A Survey of Underwater Vehicle Navigation: Recent Advances and New Challenges,” {IFAC} Conf. Manoeuvering Control Mar. Cr., 2006.
[7]  J. Yuh, “Design and Control of Autonomous Underwater Robots: A Survey,” Auton. Robots, vol. 8, no. 1, pp. 7-24, 2000.
[8]  H. Lu, Y. Li, Y. Zhang, M. Chen, S. Serikawa, and H. Kim, “Underwater Optical Image Processing: A Comprehensive Review,” 2017.
[9]  S. Corchs and R. Schettini, “Underwater image processing: State of the art of restoration and image enhancement methods,” EURASIP J. Adv. Signal Process., vol. 2010, 2010.
[10]  J. Banerjee, R. Ray, S. R. K. Vadali, S. N. Shome, and S. Nandy, “Real-time underwater image enhancement: An improved approach for imaging with AUV-150,” Sadhana - Acad. Proc. Eng. Sci., vol. 41, no. 2, pp. 225-238, 2016.
[11]  R. Giubilato, M. Pertile, and S. Debei, “A comparison of monocular and stereo visual FastSLAM implementations,” 3rd IEEE Int. Work. Metrol. Aerospace, Metroaerosp. 2016 - Proc., pp. 227-232, 2016.
[12]  J. Cunha, E. Pedrosa, C. Cruz, A. J. Neves, and N. Lau, “Using a depth camera for indoor robot localization and navigation,” DETI/IEETA-University of Aveiro, 2011.
[13]  D. Maier, A. Hornung, and M. Bennewitz, “Real-time navigation in 3D environments based on depth camera data,” in 2012 12th IEEE-RAS International Conference on Humanoid Robots (Humanoids 2012), 2012, pp. 692-697.
[14]  S. T. Digumarti, R. Siegwart, A. Thomas, and P. Beardsley, “Underwater 3D Capture using a Low-Cost Commercial Depth Camera,” vol. 1.
[15]  A. Dancu, M. Fourgeaud, Z. Franjcic, R. Avetisyan, and Q. Ab, “Underwater reconstruction using depth sensors,” pp. 1-4, 2014.
[16]  M. Ferrera, J. Moras, P. Trouvé-Peloux, and V. Creuze, “Real-Time Monocular Visual Odometry for Turbid and Dynamic Underwater Environments,” Sensors, vol. 19, no. 3, 2019.
[17]  J. Bouguet, “Pyramidal Implementation of the Lucas Kanade Feature Tracker Description of the algorithm,” vol. 1, no. 2, pp. 1-9.
[18]  C. Harris and M. Stephens, “A Combined Corner and Edge Detector,” in Proc. Alvey Vision Conference, pp. 23.1-23.6, 1988.
[19]  R. Garcia and N. Gracias, “Detection of interest points in turbid underwater images,” Ocean. 2011 IEEE - Spain, 2011.
[20]  F. Codevilla, J. D. O. Gaya, N. D. Filho, and S. S. C. C. Botelho, “Achieving Turbidity Robustness on Underwater Images Local Feature Detection,” pp. 154.1-154.13, 2015.
[21]  G. Younes, D. Asmar, E. Shammas, and J. Zelek, “Keyframe-based monocular SLAM: design, survey, and future directions,” Rob. Auton. Syst., vol. 98, pp. 67-88, 2017.
[22]  H. Strasdat, J. M. M. Montiel, and A. J. Davison, “Visual SLAM: Why Filter?,” Image Vis. Comput., vol. 30, no. 2, pp. 65-77, 2012.
[23]  R. M. Eustice, O. Pizarro, and H. Singh, “Visually augmented navigation for autonomous underwater vehicles,” IEEE J. Ocean. Eng., vol. 33, no. 2, pp. 103-122, 2008.
[24]  F. S. Hover et al., “Advanced perception, navigation and planning for autonomous in-water ship hull inspection,” Int. J. Rob. Res., vol. 31, no. 12, pp. 1445-1464, 2012.
[25]  A. Kim and R. Eustice, “Pose-graph Visual SLAM with Geometric Model Selection for Autonomous Underwater Ship Hull Inspection.”
[26]  I. Mahon, S. B. Williams, O. Pizarro, and M. Johnson-Roberson, “Efficient View-Based SLAM Using Visual Loop Closures,” IEEE Trans. Robot., vol. 24, no. 5, pp. 1002-1014, 2008.
[27]  A. Jalón-Monzón, C. G. R. De León, M. Alvarez-Múgica, S. Méndez-Ramírez, M. Á. Hevia-Suárez, and S. Escaf-Barmadah, “Utilidad de la biopsia fría ureteral durante la cistectomía radical como predictor de riesgo de recidiva: revisión de nuestra serie” [Usefulness of frozen-section ureteral biopsy during radical cystectomy as a predictor of recurrence risk: a review of our series], Arch. Esp. Urol., vol. 71, no. 5, pp. 486-494, 2018.
[28]  M. Pfingsthorn, R. Rathnam, T. Luczynski, and A. Birk, “Full 3D navigation correction using low frequency visual tracking with a stereo camera,” in OCEANS 2016 - Shanghai, 2016, pp. 1-6.
[29]  A. Kim and R. M. Eustice, “Real-Time Visual SLAM for Autonomous Underwater Hull Inspection Using Visual Saliency,” IEEE Trans. Robot., vol. 29, no. 3, pp. 719-733, 2013.
[30]  P. Drap, D. Merad, B. Hijazi, and L. Gaoua, “Underwater Photogrammetry and Object Modeling: A Case Study of Xlendi Wreck in Malta,” 2015.
[31]  F. Bellavia, M. Fanfani, and C. Colombo, “Selective Visual Odometry for Accurate AUV Localization,” pp. 1-12, 2014.
[32]  P. L. Negre Carrasco, F. Bonin-Font, and G. Oliver-Codina, “Stereo Graph-SLAM for Autonomous Underwater Vehicles,” in Proc. 13th Int. Conf. Intelligent Autonomous Systems, 2014.
[33]  K. He, J. Sun, and X. Tang, “Single Image Haze Removal Using Dark Channel Prior,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 33, no. 12, pp. 2341-2353, 2011.
[34]  R. Mur-Artal, J. M. M. Montiel, and J. D. Tardós, “ORB-SLAM: A Versatile and Accurate Monocular SLAM System,” IEEE Trans. Robot., vol. 31, no. 5, pp. 1147-1163, 2015.
[35]  G. Klein and D. Murray, “Parallel Tracking and Mapping for Small AR Workspaces,” in Proc. 6th IEEE/ACM Int. Symp. Mixed and Augmented Reality (ISMAR), 2007.
[36]  M. A. Fischler and R. C. Bolles, “Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography,” Commun. ACM, vol. 24, no. 6, pp. 381-395, 1981.
[37]  D. Nister, “An efficient solution to the five-point relative pose problem,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 26, no. 6, pp. 756-770, 2004.
[38]  X.-S. Gao, X.-R. Hou, J. Tang, and H.-F. Cheng, “Complete Solution Classification for the Perspective-Three-Point Problem,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 25, no. 8, pp. 930-943, 2003.
[39]  P. H. S. Torr, “MLESAC: A New Robust Estimator with Application to Estimating Image Geometry,” vol. 156, pp. 138-156, 2000.
[40]  M. I. A. Lourakis and A. A. Argyros, “SBA: A Software Package for Generic Sparse Bundle Adjustment,” ACM Trans. Math. Softw., vol. 36, no. 1, 2009.
[41]  I. Amidror, “Scattered data interpolation methods for electronic imaging systems: a survey,” J. Electron. Imaging, vol. 11, no. 2, p. 157, 2002.
[42]  I. D. Reid, B. Williams, M. Cummins, J. D. Tardós, J. Neira, and P. Newman, “A comparison of loop closing techniques in monocular SLAM,” Rob. Auton. Syst., vol. 57, no. 12, pp. 1188-1197, 2009.
[43]  “Review on Loop Closure Detection of Visual Slam,” no. 6, pp. 81-86, 2018.
[44]  K. Granström and T. B. Schön, “Learning to close the loop from 3D point clouds,” IEEE/RSJ 2010 Int. Conf. Intell. Robot. Syst. IROS 2010 - Conf. Proc., pp. 2089-2095, 2010.
[45]  D. Gálvez-López and J. D. Tardós, “Bags of Binary Words for Fast Place Recognition in Image Sequences,” IEEE Trans. Robot., vol. 28, no. 5, pp. 1188-1197, 2012.
[46]  P. L. Negre, F. Bonin-Font, and G. Oliver, “Cluster-based loop closing detection for underwater slam in feature-poor regions,” Proc. - IEEE Int. Conf. Robot. Autom., vol. 2016-June, pp. 2589-2595, 2016.
[47]  M. Ester, H.-P. Kriegel, J. Sander, and X. Xu, “A Density-Based Algorithm for Discovering Clusters in Large Spatial Databases with Noise,” in Proc. 2nd Int. Conf. Knowledge Discovery and Data Mining (KDD), pp. 226-231, 1996.
[48]  R. Kümmerle, G. Grisetti, H. Strasdat, K. Konolige, and W. Burgard, “G2o: A general framework for graph optimization,” Proc. - IEEE Int. Conf. Robot. Autom., pp. 3607-3613, 2011.
[49]  C. Estrada, J. Neira, and J. D. Tardos, “Hierarchical SLAM: real-time accurate mapping of large environments,” IEEE Trans. Robot., vol. 21, no. 4, pp. 588-596, 2005.
[50]  B. Williams, M. Cummins, J. Neira, P. Newman, I. Reid, and J. Tardós, “An image-to-map loop closing method for monocular SLAM,” 2008 IEEE/RSJ Int. Conf. Intell. Robot. Syst. IROS, pp. 2053-2059, 2008.
[51]  A. C. Duarte, G. B. Zaffari, S. Rosa, L. M. Longaray, P. Drews-jr, and S. S. C. Botelho, “Towards Comparison of Underwater SLAM Methods: An Open Dataset Collection,” no. October, 2016.
[52]  J. Engel, V. Koltun, and D. Cremers, “Direct Sparse Odometry,” 2016.
[53]  J. Engel, T. Schöps, and D. Cremers, “LSD-SLAM: Large-Scale Direct Monocular SLAM,” in Proc. Eur. Conf. Computer Vision (ECCV), pp. 834-849, 2014.