A few hours ago I uploaded two videos of me running an image sequence through two state-of-the-art visual monocular SLAM algorithms. Here are the links and a description:
I recorded a sequence at 20 fps with a lot of rotation per translation and a loop, and compared the performance of LSD-SLAM and ORB-SLAM on it.
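(As an aside, "a lot of rotation per translation" is a qualitative statement. Below is a minimal sketch, not from the original videos, of how one could put a rough number on it, assuming the camera poses of the sequence are available as a list of 4x4 camera-to-world matrices, e.g. from a ground-truth rig or a SLAM estimate.)

import numpy as np

def rotation_per_translation(poses):
    """Average rotation (degrees) per metre of translation between consecutive poses.
    `poses` is assumed to be a list of 4x4 camera-to-world matrices."""
    total_deg, total_m = 0.0, 0.0
    for T_prev, T_curr in zip(poses[:-1], poses[1:]):
        T_rel = np.linalg.inv(T_prev) @ T_curr          # relative motion between frames
        R, t = T_rel[:3, :3], T_rel[:3, 3]
        cos_angle = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
        total_deg += np.degrees(np.arccos(cos_angle))   # rotation angle of R
        total_m += np.linalg.norm(t)                    # distance translated
    return total_deg / max(total_m, 1e-9)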
On this sequence, LSD-SLAM lost tracking, did not manage to recover, and therefore also failed to close the loop. The two algorithms are difficult to compare directly because their outputs differ. The point cloud from LSD-SLAM is semi-dense, and although it is occasionally noisy, it is more useful if you want to do a 3D reconstruction. ORB-SLAM, on the other hand, gives excellent odometry and is robust to considerable rotation per translation; if tracking is lost, it relocalises with little delay as soon as the incoming frames have enough overlap with previously seen frames. I will try to recalibrate the camera, run LSD-SLAM on this dataset again, and see if it turns out better.
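For the recalibration I mention above, a minimal sketch using OpenCV's checkerboard calibration could look like the following (the image folder, board dimensions and square size are placeholders, not my actual setup):

import glob
import cv2
import numpy as np

BOARD_SIZE = (9, 6)      # inner corners per checkerboard row/column (assumed)
SQUARE_SIZE = 0.025      # square edge length in metres (assumed)

# 3D corner coordinates of the board in its own plane (z = 0)
objp = np.zeros((BOARD_SIZE[0] * BOARD_SIZE[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD_SIZE[0], 0:BOARD_SIZE[1]].T.reshape(-1, 2) * SQUARE_SIZE

obj_points, img_points = [], []
for path in glob.glob("calib/*.png"):            # placeholder image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, BOARD_SIZE)
    if not found:
        continue
    # refine the detected corners to sub-pixel accuracy
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    obj_points.append(objp)
    img_points.append(corners)

rms, K, dist, _, _ = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)
print("Intrinsics K:\n", K)
print("Distortion coefficients:", dist.ravel())

The resulting intrinsics and distortion parameters would then be copied into the calibration/settings files that the two SLAM systems read.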
LSD-SLAM: Large-Scale Direct Monocular SLAM (J. Engel, T. Schöps, D. Cremers), ECCV 2014.
Open-Source Code Available: http://vision.in.tum.de/lsdslam
ORB-SLAM: Tracking and Mapping Recognizable Features (R. Mur-Artal, J. D. Tardós), RSS 2014.
Open-Source Code Available: https://github.com/raulmur/ORB_SLAM2
Sensor: Matrix-Vision mvBluefox-MLC200w