We present a prompt learning framework designed to enhance performance on computer vision tasks in which the training image dataset exhibits a highly imbalanced categorical distribution. By formulating prompt learning as a variational problem, our model can generate multiple prompts to describe a single semantic (i.e., class). The motivation for generating multiple prompts stems from the heuristic that a voting ensemble yields a more robust aggregated learner, which potentially benefits the tail classes, where training samples are scarce. Unlike previous prompt learning techniques, which are often restricted to a fixed set of prompts during both training and testing, we propose to learn a prompt distribution from which an arbitrary number of prompts can be sampled whenever required; we name our method “Prompt Distribution Learning (PDL)”. We discuss and contrast various ways to formulate the variational model and thoroughly compare their performance against state-of-the-art solutions for long-tailed visual recognition. Our empirical study suggests that the proposed prompt-learning framework is beneficial for transferring a pre-trained vision-language model to long-tailed downstream visual recognition tasks, while remaining flexible enough to accommodate different designs of prompt-generating functions. Our code is publicly available at https://github.com/Walter-pixel/Prompt-Distribution-of-CLIP-Long-Tailed-Data.
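To make the core idea concrete, the following is a minimal NumPy sketch (not the authors' implementation) of sampling prompt context vectors from a learned diagonal Gaussian via the reparameterization trick and ensembling their class scores. All names, dimensions, and the toy `encode` stand-in for a frozen text encoder are hypothetical; the real method operates on CLIP embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 4 context tokens, 8-dim embeddings, 3 classes.
n_ctx, dim, n_classes = 4, 8, 3

# Learned variational parameters of the prompt distribution
# (assumed to be a diagonal Gaussian over context-token embeddings).
mu = rng.normal(size=(n_ctx, dim))         # mean of context embeddings
log_sigma = rng.normal(size=(n_ctx, dim))  # log standard deviation

def sample_prompts(n_samples):
    """Draw prompt context vectors via the reparameterization trick."""
    eps = rng.normal(size=(n_samples, n_ctx, dim))
    return mu + np.exp(log_sigma) * eps    # shape: (n_samples, n_ctx, dim)

# Toy stand-in for a frozen text encoder: fuse a sampled prompt
# with a fixed class-token embedding into one class representation.
class_tokens = rng.normal(size=(n_classes, dim))
def encode(prompt, cls):
    return (prompt.mean(axis=0) + class_tokens[cls]) / 2.0

def classify(image_feat, n_prompt_samples=5):
    """Voting ensemble: average cosine similarity over sampled prompts."""
    prompts = sample_prompts(n_prompt_samples)
    scores = np.zeros(n_classes)
    for c in range(n_classes):
        for p in prompts:
            t = encode(p, c)
            scores[c] += image_feat @ t / (
                np.linalg.norm(image_feat) * np.linalg.norm(t))
    return scores / n_prompt_samples

image_feat = rng.normal(size=dim)
predicted = int(np.argmax(classify(image_feat)))
```

Because prompts are drawn from a distribution rather than a fixed set, any number can be sampled at test time, and the averaged scores act as the voting ensemble described above.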