Secondary abstract: |
The development of algorithms for autonomous mobile robots is one of the main subjects of the field of computer vision. One approach introduces a concept called simultaneous localization and mapping (SLAM), by which a mobile robot can build a map of an environment and, at the same time, use this map to compute its own location. The robot can gather information about the environment using many different sensors. In our work, we focus on systems that use a single camera for this purpose. One of the tasks that has to be tackled by a SLAM system is tracking objects that are present in the environment. Because we expect the robot to be constantly moving, we can presume that, from the robot's point of view, the appearance of the tracked objects will change. To address this problem, we implemented a tracker that is capable of adapting to the changing appearance of the tracked object. We achieved this by casting the tracking problem as a classification problem and by using methods from the field of machine learning. We integrated the resulting tracker into an already existing SLAM application called MonoSLAM. The results of our tests showed that this combination performs better in some situations than the basic MonoSLAM, although in other situations the very ability to adapt causes the system to fail. In the concluding section we point out some possible improvements and give suggestions for further research. |