We propose a learning-based approach to measure, model, and predict the accuracy of three-dimensional reconstruction and camera pose estimation from data captured by low-cost visual sensors, such as RGB-D cameras mounted on robotic platforms or vehicles. We first build a ground truth of 3D points (mapping) and camera poses (localization) using a set of accurate smart markers devised and built specifically for this work. Using these ground-truth data, actual errors and accuracy values are computed while our mobile robotic platform is in motion. Finally, we model the error and predict the reconstruction and camera-positioning errors as a function of the camera's distance, the platform's velocity, and its vibration. We use a multi-layer perceptron neural network for this last step. The outputs are the root mean squared errors for the 3D reconstruction and the relative pose errors for the camera poses. Experimental results show that this approach achieves a prediction accuracy of ±1% for the 3D reconstruction and ±2.5% for the camera poses, a substantial improvement over state-of-the-art methods.
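To make the prediction step concrete, the sketch below shows one possible realization of the multi-layer perceptron described above: it maps the three input quantities (camera distance, platform velocity, vibration) to the two predicted error measures (reconstruction RMSE and relative pose error). The network size, training settings, and the `load_error_dataset` helper are illustrative assumptions only; they are not the configuration reported in the paper.

```python
# Illustrative sketch only: hyperparameters, synthetic data, and the
# data-loading helper are assumptions, not the authors' reported setup.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split


def load_error_dataset():
    """Hypothetical helper: per-run features and errors measured against
    the smart-marker ground truth.

    X columns: [camera_distance_m, platform_velocity_mps, vibration_amplitude]
    y columns: [reconstruction_rmse_m, relative_pose_error]
    """
    rng = np.random.default_rng(0)
    X = rng.uniform([0.5, 0.0, 0.0], [5.0, 1.5, 1.0], size=(500, 3))
    # Placeholder targets standing in for the measured errors.
    y = np.column_stack([
        0.01 * X[:, 0] + 0.02 * X[:, 2] + rng.normal(0, 0.002, 500),
        0.05 * X[:, 1] + 0.03 * X[:, 2] + rng.normal(0, 0.005, 500),
    ])
    return X, y


X, y = load_error_dataset()
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Multi-output MLP: jointly predicts reconstruction RMSE and relative pose error
# from camera distance, platform velocity, and vibration.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0),
)
model.fit(X_train, y_train)

pred_rmse, pred_rpe = model.predict(X_test[:1])[0]
print(f"predicted reconstruction RMSE: {pred_rmse:.4f} m, "
      f"predicted relative pose error: {pred_rpe:.4f}")
```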