The rapid advancement of embedded AI, driven by the integration of deep neural networks (DNNs) into embedded systems for real-time image and video processing, has been notably accelerated by AI-specific frameworks such as AMD Xilinx Vitis AI targeting MPSoC-FPGA platforms. Vitis AI relies on a configurable Deep Learning Processing Unit (DPU) whose architecture and operating frequency can be scaled to trade resource utilization against throughput. Our study employs a systematic methodology to assess the impact of different DPU configurations and clock frequencies on resource utilization and energy consumption. The findings reveal that increasing the DPU frequency improves both performance and the efficiency with which the allocated resources are used, whereas lower frequencies significantly reduce resource utilization with only a marginal decrease in performance. These trade-offs are shaped not only by the clock frequency but also by the choice of DPU architecture parameters. Such findings are critical for developing energy-efficient AI-driven systems for Advanced Driver Assistance Systems (ADAS) based on real-time video processing. By leveraging Xilinx Vitis AI deployed on the Kria KV260 MPSoC platform, we examine how energy efficiency can be optimized through multi-task learning in real-time ADAS applications.
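For context, the sketch below shows how a compiled model is typically dispatched to the DPU from Python through the Vitis AI Runtime (VART); the model file name, input shape, and int8 preprocessing are illustrative assumptions rather than details taken from this study.

```python
# Minimal sketch: running a compiled .xmodel on the DPU via the Vitis AI
# Runtime (VART) Python API. Model path, frame shape, and dtype below are
# hypothetical placeholders; the actual quantized model dictates them.
import numpy as np
import xir
import vart

def run_on_dpu(xmodel_path: str, frame: np.ndarray) -> np.ndarray:
    # Load the compiled graph and select the subgraph mapped to the DPU.
    graph = xir.Graph.deserialize(xmodel_path)
    subgraphs = graph.get_root_subgraph().toposort_child_subgraph()
    dpu_subgraph = next(
        s for s in subgraphs
        if s.has_attr("device") and s.get_attr("device").upper() == "DPU"
    )

    # Create a runner bound to that subgraph and query its tensor shapes.
    runner = vart.Runner.create_runner(dpu_subgraph, "run")
    input_tensor = runner.get_input_tensors()[0]
    output_tensor = runner.get_output_tensors()[0]

    # Allocate host buffers matching the DPU tensor shapes (int8 assumed here).
    input_data = [np.asarray(frame, dtype=np.int8).reshape(tuple(input_tensor.dims))]
    output_data = [np.empty(tuple(output_tensor.dims), dtype=np.int8)]

    # Submit the job asynchronously and block until it completes.
    job_id = runner.execute_async(input_data, output_data)
    runner.wait(job_id)
    return output_data[0]

# Hypothetical usage: a single pre-quantized camera frame.
# result = run_on_dpu("adas_model_kv260.xmodel", frame)
```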