The inhomogeneous refractive indices of biological tissues blur and distort single-molecule emission patterns, generating image artifacts and degrading the achievable resolution of single-molecule localization microscopy (SMLM). To compensate for tissue-induced aberrations, conventional sensorless adaptive optics methods rely on iterative mirror changes guided by image-quality metrics. In tissues, however, these metrics produce inconsistent, and sometimes opposite, responses, fundamentally limiting the efficacy of such approaches for aberration correction. Bypassing this iterative trial-then-evaluate process, we developed deep-learning-driven adaptive optics for SMLM, allowing direct inference of wavefront distortion and near-real-time compensation. Our trained deep neural network monitors the individual emission patterns from single-molecule experiments, infers their shared wavefront distortion, passes the estimates through a dynamic filter, and drives a deformable mirror to compensate for sample-induced aberrations. We demonstrate that our method simultaneously estimates and compensates 28 types of wavefront deformation, restores single-molecule emission patterns close to those obtained in the absence of specimen-induced aberrations, and improves the resolution and fidelity of 3D SMLM through >130-µm-thick tissue specimens with as few as 3-20 mirror changes.
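The estimate-filter-compensate loop described above can be sketched numerically. In this minimal sketch, all details are hypothetical stand-ins rather than the authors' implementation: the network's inference is modeled as a noisy read-out of the residual wavefront (the sample aberration minus the current mirror correction), the dynamic filter is reduced to a fixed-gain damping step, and the aberration is represented by coefficients over 28 wavefront modes, matching the count in the abstract.

```python
import numpy as np

def correction_loop(true_aberration, n_changes=10, gain=0.6,
                    noise_sd=0.02, seed=0):
    """Closed-loop aberration correction sketch (hypothetical stand-ins).

    - "network inference" = noisy estimate of the residual wavefront,
    - "dynamic filter"    = fixed-gain damping of that estimate,
    - each loop pass corresponds to one deformable-mirror change.
    """
    rng = np.random.default_rng(seed)
    mirror = np.zeros_like(true_aberration)      # current mirror correction
    residual_norms = []
    for _ in range(n_changes):
        residual = true_aberration - mirror      # aberration the emitters "see"
        estimate = residual + rng.normal(0.0, noise_sd, residual.shape)
        mirror = mirror + gain * estimate        # damped mirror update
        residual_norms.append(float(np.linalg.norm(true_aberration - mirror)))
    return mirror, residual_norms

# Random aberration over 28 modes; the residual shrinks geometrically,
# reaching a noise-limited floor within ~10 mirror changes.
rng = np.random.default_rng(1)
aberration = rng.normal(0.0, 0.3, 28)
mirror, norms = correction_loop(aberration)
```

With a per-step gain g, the noise-free residual contracts by a factor (1 - g) per mirror change, which is consistent with the small number of mirror changes (3-20) reported in the abstract; the actual filter and convergence behavior in the paper may differ.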