Protein pre-training has emerged as a transformative approach for solving diverse biological tasks. While many contemporary methods focus on sequence-based language models, recent findings show that protein sequences alone cannot capture the rich information encoded in protein structures. Recognizing the crucial role of protein structure in determining function and interactions, we introduce Sable, a versatile pre-training model designed for comprehensive understanding of protein structures. Sable incorporates a novel structural encoding mechanism that enhances inter-atomic information exchange and spatial awareness, combined with robust pre-training strategies and lightweight decoders optimized for specific downstream tasks. This design enables Sable to consistently outperform existing methods across regression, classification, and generation tasks, demonstrating its superior capability in protein structure representation. The code and models will be released in a public GitHub repository.