Journal of Computer Sciences and Applications
ISSN (Print): 2328-7268; ISSN (Online): 2328-725X. Editors-in-Chief: Minhua Ma, Patricia Goncalves
Open Access
Journal of Computer Sciences and Applications. 2018, 6(1), 17-22
DOI: 10.12691/jcsa-6-1-2

Computational Vision for Automatic Tracking and Objective Estimation of Mobile Robot Trajectory

Sangho Park 1

1 Department of Computer, Electronics and Graphics Technology, Central Connecticut State University, USA

Pub. Date: June 04, 2018
(This article belongs to the Special Issue Information Technology and Computational Intelligence)

Cite this paper:
Sangho Park. Computational Vision for Automatic Tracking and Objective Estimation of Mobile Robot Trajectory. Journal of Computer Sciences and Applications. 2018; 6(1):17-22. doi: 10.12691/jcsa-6-1-2


Automatic tracking and evaluation of moving-object trajectories is critical in many applications, such as performance estimation of mobile robot navigation. The mobile robot is an effective platform for stimulating student motivation at K-12 institutions, as well as a good tool for rigorous engineering practice in colleges, universities, and graduate schools. Developing new mobile robot platforms and algorithms requires objective, quantitative estimation of navigation performance. Conventional methods for evaluating mobile robot navigation typically rely on manually timing the completion of a given task with a chronometer or counting the success rate on the task. This paper proposes an alternative: a multi-camera vision system that automatically tracks the movement of a mobile robot and evaluates it in terms of physics-based profiles: the position, velocity, and acceleration of the robot along its trajectory with respect to a user-defined world coordinate system. The proposed vision system runs two synchronized cameras to simultaneously capture and track the movement of the robot at 30 frames per second. The system runs a homography-based projection algorithm that converts the view-dependent appearance of the robot in the camera images into a view-independent orthographic projection mapped onto the registered world coordinate system. This enables a human evaluator to view and assess the robot's navigation from a virtual top-down view, annotated with the physics-based profiles, regardless of the cameras' actual viewing positions. The proposed system can also be applied in other domains, including highway traffic monitoring and intelligent video surveillance.
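The two operations the abstract describes — mapping tracked image points onto a registered world plane via a homography, and deriving velocity and acceleration profiles from the resulting positions at 30 fps — can be sketched briefly. The following is a minimal illustration, not the paper's implementation: it assumes a NumPy-only Direct Linear Transform (DLT) estimator for the homography from four or more user-registered point correspondences, and central finite differences for the motion profiles.

```python
import numpy as np

FPS = 30.0  # capture rate stated in the paper


def homography_from_points(img_pts, world_pts):
    """Estimate the 3x3 homography H mapping image-plane points to
    world-plane points via the DLT (needs >= 4 non-collinear pairs)."""
    A = []
    for (x, y), (X, Y) in zip(img_pts, world_pts):
        # Each correspondence contributes two linear constraints on h.
        A.append([-x, -y, -1, 0, 0, 0, X * x, X * y, X])
        A.append([0, 0, 0, -x, -y, -1, Y * x, Y * y, Y])
    # The solution is the right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the projective scale


def to_world(H, pts):
    """Map Nx2 image points to Nx2 world-plane points (top-down view)."""
    pts = np.asarray(pts, dtype=float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]  # dehomogenize


def motion_profiles(world_track, fps=FPS):
    """Velocity and acceleration profiles of an Nx2 world-plane track,
    via central finite differences at the given frame rate."""
    dt = 1.0 / fps
    vel = np.gradient(world_track, dt, axis=0)
    acc = np.gradient(vel, dt, axis=0)
    return vel, acc
```

In practice the correspondences would come from registering known ground-plane landmarks in each camera view; once each camera's homography is known, tracks from both synchronized cameras project into the same world frame, which is what allows the virtual top-down view.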

Keywords: computational vision, object tracking, trajectory estimation, robot navigation, multiple view geometry

This work is licensed under a Creative Commons Attribution 4.0 International License.


Figures: 14 (not reproduced here)


[1]  National Science Board. “Preparing the Next Generation of STEM Innovators: Identifying and Developing our Nation’s Human Capital,” 2010.
[2]  Aggarwal, J.K. and Cai, Q. “Human motion analysis: a review”. Computer Vision and Image Understanding, 73(3): 295-304. 1999.
[3]  Gavrila, D. “The visual analysis of human movement: a survey”. Computer Vision and Image Understanding, 73(1): 82-98. 1999.
[4]  Moeslund, T.B. and Granum, E. “A survey of computer vision-based human motion capture”. Computer Vision and Image Understanding, 81(3): 231-268. 2001.
[5]  Beinhofer, M. & Burgard, W. “Efficient Estimation of Expected Distributions for Mobile Robot Navigation”, Proc. of the Austrian Robotics Conference. 2014.
[6]  Suliman, C., Cruceru, C. & Moldoveanu, F. “Mobile Robot Position Estimation Using the Kalman Filter”, Scientific Bulletin of the Petru Maior University of Tirgu Mures, Vol. 6 (XXIII). 2009.
[7]  Hartley, R. & Zisserman, A. Multiple View Geometry in Computer Vision, 2nd ed. Cambridge University Press. 2003.
[8]  Forsyth, D. and Ponce, J. Computer Vision: A Modern Approach, Prentice Hall, 2003.
[9]  Ma, Y., Soatto, S., Kosecka, J., & Sastry, S. An Invitation to 3-D Vision: From Images to Geometric Models, Springer Ltd. 2001.
[10]  Criminisi, A., Reid, I. & Zisserman, A. “A plane measuring device”, Image and Vision Computing, 17(8):625-634, 1999.
[11]  Allen, J.G., Xu, R.Y.D., & Jin, J.S. “Object Tracking Using CamShift Algorithm and Multiple Quantized Feature Spaces”, in Conferences in Research and Practice in Information Technology, Vol. 36. Piccardi, M., Hintz, T., He, X., Huang, M.L., Feng, D.D. & Jin, J., Editors. 2004.
[12]  Bradski, G.R. “Computer Vision Face Tracking for Use in a Perceptual User Interface”, Intel, 1998.
[13]  Fukunaga, K. “Introduction to Statistical Pattern Recognition,” Academic Press, Boston, 1990.
[14]  Park, S. and Trivedi, M. “Understanding Human Interactions with Track and Body Synergies (TBS) Captured from Multiple Views,” Computer Vision and Image Understanding: Special Issue on Intelligent Visual Surveillance, 111(1), pp. 2-20, Elsevier Inc. 2008.