Article citations

Fukunaga, K. “Introduction to Statistical Pattern Recognition,” Academic Press, Boston, 1990.

has been cited by the following article:


Computational Vision for Automatic Tracking and Objective Estimation of Mobile Robot Trajectory

Department of Computer, Electronics and Graphics Technology, Central Connecticut State University, USA


Journal of Computer Sciences and Applications. 2018, Vol. 6 No. 1, 17-22
DOI: 10.12691/jcsa-6-1-2
Copyright © 2018 Science and Education Publishing

Cite this paper:
Sangho Park. Computational Vision for Automatic Tracking and Objective Estimation of Mobile Robot Trajectory. Journal of Computer Sciences and Applications. 2018; 6(1):17-22. doi: 10.12691/jcsa-6-1-2.

Correspondence to: Sangho Park, Department of Computer, Electronics and Graphics Technology, Central Connecticut State University, USA. Email: spark@ccsu.edu

Abstract

Automatic tracking and evaluation of moving-object trajectories is critical in many applications, such as performance estimation of mobile robot navigation. The mobile robot is an effective platform for stimulating student motivation at K-12 institutions, as well as a good tool for rigorous engineering practice in colleges, universities, and graduate schools. Developing new mobile robot platforms and algorithms requires objective, quantitative estimation of navigation performance. Conventional methods for evaluating mobile robot navigation typically rely on manually timing the completion of a given task with a chronometer or counting the success rate on the task. This paper proposes an alternative: a multi-camera vision system that automatically tracks the movement of a mobile robot and evaluates it in terms of physics-based profiles of position, velocity, and acceleration along the trajectory, with respect to a user-defined world coordinate system. The proposed vision system runs two synchronized cameras that simultaneously capture and track the movement of the robot at 30 frames per second. The system applies a homography-based projection algorithm that converts the view-dependent appearance of the robot in the camera images to a view-independent orthographic projection mapped onto the registered world coordinate system. This enables a human evaluator to view and assess the robot's navigation from a virtual top-down view annotated with the physics-based profiles, regardless of the cameras' actual viewing positions. The proposed system can also be used in other domains, including highway traffic monitoring and intelligent video surveillance.
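The two core steps described in the abstract can be sketched in a few lines of NumPy: estimating a planar homography from image-to-world point correspondences (e.g., markers on the floor registered in the world coordinate system), projecting tracked image points to the top-down world view, and computing velocity and acceleration by finite differences at the camera frame rate. This is a minimal illustration under assumed interfaces, not the paper's actual implementation; the function names and the four-point DLT setup are the author's own choices here.

```python
import numpy as np

def homography_from_points(src, dst):
    """Estimate the 3x3 homography H mapping src points to dst points
    (at least 4 non-collinear correspondences) via the direct linear
    transform (DLT): build the 2n x 9 system and take the null vector."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the right singular vector with the smallest
    # singular value, reshaped to 3x3 and normalized.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def to_world(H, pt):
    """Project an image point to world coordinates through H
    (homogeneous multiply, then divide by the scale component)."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return np.array([x / w, y / w])

def motion_profiles(positions, fps=30.0):
    """Finite-difference velocity and acceleration profiles from
    per-frame world positions captured at the given frame rate."""
    dt = 1.0 / fps
    vel = np.diff(positions, axis=0) / dt   # one fewer sample than positions
    acc = np.diff(vel, axis=0) / dt         # two fewer samples than positions
    return vel, acc
```

In practice, the correspondences would come from calibration markers visible in each camera, and the tracked robot centroid in every frame would be pushed through `to_world` before the profiles are computed, so the velocity and acceleration are expressed in world units rather than pixels.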

Keywords