Data privacy has become a central concern in deep learning, and improving model performance through collaborative training without sharing data is a promising line of research. This paper introduces a novel privacy-preserving learning approach that combines Pseudo-siamese networks with mutual knowledge distillation. The two branches are trained collaboratively, without exchanging private data, using a contrastive loss together with knowledge distillation, allowing each branch to learn from the other's knowledge and improve overall performance. During training, the back-propagated gradients are found to be dominated by the principal components of the input data. To address this issue, an alternating iterative method is proposed for the parameter updates of stochastic gradient descent, in which the principal-component projections and the orthogonal-complement projections of the data points are used alternately for training. The approach extends naturally to multiple distributed models, with optimization performed on two models at a time in a circular manner. Experiments on convolutional and Pseudo-siamese networks confirm that the update gradients are dominated by the principal components of the data, and that the proposed approach improves model performance without compromising privacy or sharing private data or model parameters.
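To make the alternating update rule concrete, the following minimal sketch (plain PyTorch, not the authors' implementation) alternates SGD steps between a batch's projection onto its top-k principal components and the projection onto their orthogonal complement; the toy classifier, the `split_batch` helper, and the choice of `top_k` are illustrative assumptions rather than details taken from the paper.

```python
# Sketch of alternating SGD updates on principal-component and
# orthogonal-complement projections of each mini-batch (illustrative only).
import torch
import torch.nn as nn

def split_batch(x: torch.Tensor, top_k: int):
    """Return (principal, orthogonal) projections of a flattened batch x."""
    x_flat = x.flatten(1)                       # (batch, features)
    mean = x_flat.mean(dim=0, keepdim=True)
    centered = x_flat - mean
    # Rows of vh are the principal directions of this batch.
    _, _, vh = torch.linalg.svd(centered, full_matrices=False)
    v_k = vh[:top_k].T                          # (features, top_k)
    principal = centered @ v_k @ v_k.T + mean   # projection onto top-k PCs
    orthogonal = x_flat - principal + mean      # orthogonal-complement part
    return principal, orthogonal

# Hypothetical toy model and optimizer for flattened 28x28 inputs.
model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

def train(loader, epochs=1, top_k=16):
    step = 0
    for _ in range(epochs):
        for x, y in loader:
            principal, orthogonal = split_batch(x, top_k)
            # Alternate which component of the data drives this update.
            inputs = principal if step % 2 == 0 else orthogonal
            opt.zero_grad()
            loss_fn(model(inputs), y).backward()
            opt.step()
            step += 1
```

In this sketch, even-numbered steps update the model on the principal-component part of the batch and odd-numbered steps on the orthogonal remainder, which is one straightforward reading of "used alternately for training" in the abstract.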