Edge computing has become a fundamental technology for Internet of Things (IoT) applications. To provide reliable services for latency-sensitive applications, edge servers must respond to end devices in the shortest time possible. Edge distributed denial-of-service (DDoS) attacks, which render edge servers unusable by legitimate IoT applications by flooding them with heavy requests from distributed attacking sources, are a threat that leads to severe latency. To protect edge servers from DDoS attacks, a hybrid computing paradigm known as the end-edge-cloud ecosystem offers a possible solution. This architecture allows cloud assistance: when under a DDoS attack, edge servers can upload their pending tasks to a cloud center to reduce their workload, in effect borrowing resources from the cloud. Nevertheless, before the ecosystem can be used to mitigate edge DDoS attacks, a core problem must be addressed: edge servers must decide when, and to what extent, they should upload tasks to the cloud center. In this study, we focus on the design of optimal cloud assistance policies. First, we propose an edge workload evolution model that describes how the workload of the edge servers changes over time under a given cloud assistance policy. On this basis, we quantify the effectiveness of a policy by the resulting overall latency and formulate an optimal control problem that seeks optimal policies minimizing this latency. We then provide solutions by deriving the optimality system and discuss some properties of the optimal solutions that accelerate problem solving. Next, we introduce a numerical iterative algorithm that seeks solutions satisfying the optimality system. Finally, we provide several illustrative numerical examples. The results show that the obtained optimal policies can effectively mitigate edge DDoS attacks.
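The idea of a workload evolution model steered by a cloud assistance policy can be illustrated with a minimal sketch. This is not the paper's model: the dynamics `dW/dt = arrival - service - u(t)*W`, the parameter values, and the latency proxy (the time integral of the pending workload) are all simplifying assumptions made here purely to show how an offloading policy `u(t)` shapes workload and latency over time.

```python
# Hypothetical sketch, NOT the paper's exact model: the edge workload W(t)
# grows with incoming (attack + legitimate) traffic, shrinks with local
# service, and a cloud assistance policy u(t) in [0, 1] offloads a
# fraction of the pending workload to the cloud per unit time.

def simulate(u, T=10.0, dt=0.01, arrival=5.0, service=3.0, w0=0.0):
    """Euler-integrate dW/dt = arrival - service - u(t) * W, with W >= 0.

    Returns the final workload and a simple latency proxy: the time
    integral of the pending workload over [0, T].
    """
    w, t, total_latency = w0, 0.0, 0.0
    while t < T:
        dw = arrival - service - u(t) * w   # net workload change rate
        w = max(w + dw * dt, 0.0)           # workload cannot go negative
        total_latency += w * dt             # accumulate latency proxy
        t += dt
    return w, total_latency

# Compare no cloud assistance against a constant offloading policy.
w_none, lat_none = simulate(lambda t: 0.0)   # workload grows unboundedly
w_help, lat_help = simulate(lambda t: 0.5)   # workload settles near 4
```

Under these assumed parameters, the attack-inflated arrival rate exceeds the edge service rate, so without assistance the workload (and hence latency) grows without bound, while even a constant offloading policy stabilizes it; the paper's contribution is choosing `u(t)` optimally rather than by a fixed constant.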