The multi-agent computing task scheduling problem can be cast as a task assignment optimization problem under the twin objectives of minimizing system cost and maximizing sample utilization. To address it, this paper proposes a computational task migration scheme based on multi-agent generative adversarial imitation learning (MAGAIL). On the battlefield, the scheme aggregates the idle computing power of tanks, fighting vehicles, drones, and infantry equipment into an Ad Hoc cloud. Exploiting the distributed training characteristics of reinforcement learning, it makes full use of these equipment resources to compute the task migration plan. Behavioral cloning is first used to construct the training data set and build an initial policy model, which simplifies the reward function; the network structure, parameters, and training hyperparameters are then set. Driven by the training set, generative adversarial inverse reinforcement learning is used to train the network. Simulation results show that the MAGAIL-MCT algorithm imitates the expert trajectory behavior and trains the neural network successfully. Experimental results further show that the scheme reduces system overhead, migration delay, migration energy consumption, and the impact of mobility, while improving sample utilization.
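The paper gives no code; as a minimal illustration of the adversarial-imitation signal the scheme relies on, the sketch below trains a logistic discriminator to separate expert (state, action) pairs from those of an untrained policy and derives the GAIL-style surrogate reward -log(1 - D(s, a)). The feature construction, dimensions, and toy "expert" migration behavior are invented for illustration and are not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8  # illustrative state/action dimension, not from the paper

def feats(s, a):
    # Include the elementwise product so a linear discriminator can pick up
    # the state-action correlation that characterizes expert behavior.
    return np.hstack([s, a, s * a])

def expert_pairs(n):
    # Toy "expert" migration policy: the chosen action tracks the state
    # (a stand-in for always migrating to the low-cost node).
    s = rng.normal(size=(n, DIM))
    a = s + 0.1 * rng.normal(size=(n, DIM))
    return feats(s, a)

def policy_pairs(n):
    # Untrained generator policy: action independent of state.
    s = rng.normal(size=(n, DIM))
    a = rng.normal(size=(n, DIM))
    return feats(s, a)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Logistic discriminator D(s, a), trained with binary cross-entropy to
# output 1 on expert pairs and 0 on policy pairs.
w, b, lr = np.zeros(3 * DIM), 0.0, 0.1
for _ in range(300):
    x = np.vstack([expert_pairs(64), policy_pairs(64)])
    y = np.concatenate([np.ones(64), np.zeros(64)])
    p = sigmoid(x @ w + b)
    g = p - y                      # gradient of BCE w.r.t. the logits
    w -= lr * (x.T @ g) / len(y)
    b -= lr * g.mean()

def gail_reward(x):
    # Surrogate reward handed to the policy: -log(1 - D(s, a)), large when
    # the discriminator mistakes a pair for expert behavior.
    return -np.log(1.0 - sigmoid(x @ w + b) + 1e-8)
```

In the full method the policy network would then be updated to maximize this surrogate reward, and the discriminator and policy alternate; behavioral cloning, as described above, supplies the initial policy before adversarial training begins.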