In recent years, Aspect Sentiment Quadruple Prediction (ASQP) has emerged as a prominent task in aspect-based sentiment analysis. We observe complex dependencies among aspect sentiment elements, which traditional sequence-to-sequence (Seq2Seq) modeling of element extraction fails to capture rigorously. Moreover, although Pre-trained Language Models (PLMs) are widely applied in Aspect-Based Sentiment Analysis (ABSA), a significant gap remains in representing the dependencies between sentiment elements: these dependencies require structure-informed label classification, which diverges from the Masked Language Model (MLM) objective used in PLM pre-training. This gap limits the full potential of PLMs. To bridge it and better exploit the structural information among sentiment elements, we propose an Opinion-Tree-aware Prompt Tuning (OTPT) method, which transforms the traditional PLM Seq2Seq task into a sequence-to-tree task. Specifically, we model sentiment elements as a tree structure, construct a dynamic virtual template and label words, integrate the tree-structure information as soft prompts, and perform stepwise sentiment label classification. Extensive experiments show that our method outperforms the baselines on two widely used datasets and remains robust when handling missing sentiment elements and complex multi-sentiment quadruples.
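To make the sequence-to-tree formulation concrete, the sketch below shows one plausible way an ASQP quadruple (aspect, category, opinion, sentiment) could be organized as a small opinion tree and linearized into a bracketed generation target. This is an illustrative sketch only, not the paper's implementation; the node ordering, the bracket notation, and the use of "null" for implicit elements are all assumptions.

```python
# Illustrative sketch (assumed design, not OTPT's actual implementation):
# arrange each ASQP quadruple as a small tree and linearize it into a
# bracketed target string suitable for sequence-to-tree generation.

def quad_to_tree(quad):
    """Map a quadruple to tree nodes: category -> aspect -> opinion -> sentiment."""
    aspect, category, opinion, sentiment = quad
    return {"category": category,
            "aspect": aspect or "null",    # ASQP allows implicit (missing) aspects
            "opinion": opinion or "null",  # ... and implicit opinions
            "sentiment": sentiment}

def linearize(trees):
    """Linearize a list of opinion trees into one bracketed target sequence."""
    parts = ["( {category} ( {aspect} ( {opinion} ( {sentiment} ) ) ) )".format(**t)
             for t in trees]
    return " ".join(parts)

# One explicit quadruple and one with an implicit aspect:
quads = [("battery life", "laptop#battery", "long", "positive"),
         (None, "service#general", "rude", "negative")]
print(linearize([quad_to_tree(q) for q in quads]))
```

Representing the target as a tree rather than a flat tag sequence is what lets the decoder condition each sentiment element on the elements above it in the hierarchy, which is the dependency structure the abstract argues Seq2Seq modeling misses.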