Federated Learning (FL) enables decentralized machine learning while preserving data privacy. Despite extensive research on FL frameworks and simulations, packet skewness caused by poor network conditions remains understudied. This paper proposes a greedy approach to parameter selection in FL scenarios that prioritizes global model improvements and mitigates performance declines. The proposed greedy parameter aggregation approach is evaluated using a modified User Datagram Protocol (UDP) in the NS-3 network simulator under diverse network conditions, employing the widely used CIFAR-10 dataset. The global model with greedy aggregation outperformed the baseline in severely degraded networks, achieving 67% accuracy on CIFAR-10 compared to 27% without the approach. The greedy model's accuracy decreased by 14% relative to ideal conditions, while the non-greedy model's declined by 54%. This comparison underscores the greedy approach's resilience and effectiveness in maintaining global model performance despite challenging network environments.
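To illustrate the idea behind greedy parameter selection as described above, the following minimal Python sketch accepts a client update into the aggregate only if the resulting global model scores at least as well on a validation set. The averaging rule, the acceptance criterion, and the names greedy_aggregate and evaluate are illustrative assumptions, not the paper's exact method.

    def greedy_aggregate(global_params, client_updates, evaluate):
        # Hypothetical greedy selection: keep a client update only if
        # including it does not hurt the global model's validation score.
        best_params = global_params
        best_score = evaluate(best_params)
        accepted = []
        for update in client_updates:
            accepted.append(update)
            # Candidate global parameters: plain average of accepted updates
            # (one possible aggregation rule, assumed here).
            candidate = {
                name: sum(u[name] for u in accepted) / len(accepted)
                for name in global_params
            }
            score = evaluate(candidate)
            if score >= best_score:
                # The update improves (or preserves) the global model: keep it.
                best_params, best_score = candidate, score
            else:
                # The update degrades the global model: discard it.
                accepted.pop()
        return best_params

Here evaluate is assumed to be a server-side callable that scores candidate parameters on held-out data, and parameters are represented as dictionaries of per-layer arrays.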