Optical neural networks (ONNs) are emerging as a high-performance machine learning (ML) paradigm in terms of power efficiency, parallelism, and computational speed. There is broad interest in applying ONNs to medical sensing, security screening, drug detection, and autonomous driving. However, it is challenging to make ONNs reconfigurable at specific frequencies, e.g., Terahertz (THz), so deploying multi-task learning (MTL) algorithms on ONNs requires re-building and duplicating the physical diffractive systems, which significantly degrades energy and cost efficiency in practical application scenarios. This work presents a diffractive ONN architecture for MTL built from non-reconfigurable components, named RubikONNs. The architecture exploits the physical properties of optical systems to encode multiple feed-forward functions by physically rotating the hardware, much like rotating a Rubik's Cube. We introduce two domain-specific training algorithms, RotAgg and RotSeq, to optimize the MTL performance of RubikONNs. Our results demonstrate more than 4x improvements in energy and cost efficiency with marginal accuracy degradation compared to state-of-the-art MTL-ONN approaches. Moreover, we perform a comprehensive RubikONNs design space and explainability analysis, which offers concrete design methodologies for practical use.