To address the high energy consumption and slow convergence of split learning, we propose a novel framework, Batched Split Learning (BSL). We group edge nodes into batches, deploy a different device-side model in each batch, and deploy the corresponding server-side models on the edge server. Edge nodes use over-the-air computing to transmit smashed data to the server for server-side model training, and the server feeds the gradients of the smashed data back to the nodes for device-side model updates. For each group, after a certain number of training rounds a unique device-side model is generated; these models can be shared across groups and are finally aggregated into a global model. To jointly optimize the training latency and transmission energy cost of BSL, we formulate a mixed-integer nonlinear programming problem. We solve this problem with the branch-and-bound method and also design a heuristic algorithm for it. We conduct extensive experiments to evaluate the performance of the proposed methods. Results show that the latency of our BSL framework is about 78% lower than that of Vanilla Split Learning, and its energy consumption is about 20% lower than that of the mainstream Parallel Split Learning scheme. Furthermore, our Branch-and-Bound-based algorithm reduces energy consumption by 55% over the energy-efficient clustering method using random update (EECRU) algorithm and by 47% over the greedy energy-efficient clustering scheme (GEECS) algorithm. Moreover, our Multi-objective Heuristic algorithm reduces energy consumption by 37.81% and 26.18% compared with the EECRU and GEECS algorithms, respectively.
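To make the training procedure concrete, the sketch below shows one split-learning round between a single edge node and the server: a device-side forward pass to the cut layer, transmission of the smashed data, server-side training, and gradient feedback for the device-side update. This is a minimal sketch, not the paper's exact protocol; the layer sizes, optimizer settings, and use of PyTorch are illustrative assumptions, and the noiseless point-to-point exchange stands in for BSL's over-the-air aggregation across a batch of nodes.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Split the model at a cut layer: the device-side half runs on the
# edge node, the server-side half on the edge server (sizes assumed).
device_model = nn.Sequential(nn.Linear(32, 16), nn.ReLU())
server_model = nn.Linear(16, 10)
opt_dev = torch.optim.SGD(device_model.parameters(), lr=0.1)
opt_srv = torch.optim.SGD(server_model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(8, 32)          # local data on the edge node
y = torch.randint(0, 10, (8,))  # local labels

# 1) Edge node: forward pass up to the cut layer produces the smashed data.
smashed = device_model(x)

# 2) "Uplink": the smashed data is sent to the server (in BSL this step
#    uses over-the-air computing); detach() marks the boundary between
#    the two halves of the computation graph.
smashed_tx = smashed.detach().requires_grad_(True)

# 3) Server: forward through the server-side model, compute the loss,
#    backpropagate, and update the server-side parameters.
loss = loss_fn(server_model(smashed_tx), y)
opt_srv.zero_grad()
loss.backward()
opt_srv.step()

# 4) "Downlink": the gradient of the smashed data is fed back to the node,
#    which completes backpropagation and updates the device-side model.
opt_dev.zero_grad()
smashed.backward(smashed_tx.grad)
opt_dev.step()
```

In BSL, step 2 would superimpose the smashed data of all nodes in a batch on the wireless channel rather than transmitting each node's activations separately, and step 4's per-group device-side models would periodically be shared across groups and aggregated into the global model.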