Federated Learning (FL) is a recent Machine Learning method for training on private data stored locally on distributed machines, without gathering the data in one place for centralized learning. Because FL depends on a central server for repeated aggregation of locally trained models, this server is prone to becoming a performance bottleneck. One remedy is to combine FL with Edge Computing: introduce a layer of edge servers, each serving as a regional aggregator, to offload the central server. Scalability is thus improved, albeit at the cost of learning accuracy. We show that this cost can be alleviated with a proper edge server assignment, i.e., a choice of which edge servers aggregate the training models of which local machines. Specifically, we propose an assignment solution that is especially useful in the case of non-IID training data, which is well known to hinder today's FL performance. Our findings are substantiated by an evaluation study using real-world datasets.
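To make the two-level aggregation concrete, the following is a minimal sketch (not the paper's algorithm) of hierarchical FedAvg: clients are assigned to edge servers, each edge server computes a weighted average of its clients' models, and the cloud averages the edge models. All names (weighted_average, hierarchical_aggregate, assignment) are illustrative assumptions; the assignment vector is exactly the design knob the abstract refers to.

```python
import numpy as np

def weighted_average(models, weights):
    """FedAvg-style weighted average of model parameter vectors."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return sum(w * m for w, m in zip(weights, models))

def hierarchical_aggregate(client_models, client_sizes, assignment, num_edges):
    """Aggregate client models per edge server, then globally at the cloud.

    assignment[i] is the index of the edge server that client i reports to.
    """
    edge_models, edge_sizes = [], []
    for e in range(num_edges):
        members = [i for i, a in enumerate(assignment) if a == e]
        if not members:
            continue  # skip edge servers with no assigned clients
        edge_models.append(weighted_average(
            [client_models[i] for i in members],
            [client_sizes[i] for i in members]))
        edge_sizes.append(sum(client_sizes[i] for i in members))
    # Cloud-level aggregation over the regional (edge) models
    return weighted_average(edge_models, edge_sizes)

# Toy example: 4 clients, 2 edge servers, scalar "models" for illustration
models = [np.array([1.0]), np.array([3.0]), np.array([5.0]), np.array([7.0])]
sizes = [10, 30, 20, 40]
print(hierarchical_aggregate(models, sizes, assignment=[0, 0, 1, 1], num_edges=2))
```

With data-size weights, this hierarchical average coincides with flat FedAvg over all clients; under non-IID data and partial or asynchronous edge participation, however, which clients share an edge server affects what each regional model learns, which is why the assignment choice matters.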