Knowledge distillation has attracted much attention in knowledge and information systems. The latest distillation methods use the Kullback-Leibler divergence to distill knowledge from complex models into simple models, yielding surprising results. However, due to the difference in capacity between GNNs and MLPs, distilling GNNs through MLPs with fixed dimensions and numbers of layers inevitably causes information loss. Moreover, the gap between the features produced by GNNs and those produced by MLPs grows as the number of layers increases. To this end, we propose an adaptive hierarchical distillation framework that distills GNNs through MLPs with variable dimensions and numbers of layers, preserving the integrity of the distilled information. Specifically, we use Neural Architecture Search (NAS) to adaptively find an MLP with appropriate dimensions and depth for each layer of the GNN. The graph structure information is then distilled from the GNN layer by layer, so that the structure of the student neural network (NN) better matches the teacher model and their features are better aligned, allowing the graph structure information to be learned more effectively. Finally, the fully distilled student model is used in downstream learning tasks. Experimental results on various datasets show impressive improvements on node classification tasks compared with previous state-of-the-art methods.
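To make the layer-wise distillation idea concrete, the following is a minimal, illustrative PyTorch sketch and not the framework described above: the teacher is a stand-in two-layer GCN-style model, the student MLP widths (`hidden_dims`) are assumed to be supplied by an external NAS procedure (the search itself is not implemented here), and the names `TeacherGNN`, `StudentMLP`, and `layerwise_kl` are hypothetical. Intermediate features are projected to the teacher's dimensions and compared with a temperature-softened KL divergence, layer by layer.

```python
# Illustrative sketch only: layer-wise GNN-to-MLP distillation with a KL loss.
# The student's hidden_dims are assumed to come from a NAS step (not shown).
import torch
import torch.nn as nn
import torch.nn.functional as F


class TeacherGNN(nn.Module):
    """Toy 2-layer graph-convolution teacher: H' = ReLU(A_hat @ H @ W)."""
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim)
        self.w2 = nn.Linear(hid_dim, out_dim)

    def forward(self, x, a_hat):
        h1 = F.relu(a_hat @ self.w1(x))   # layer-1 features
        h2 = a_hat @ self.w2(h1)          # layer-2 features / logits
        return [h1, h2]


class StudentMLP(nn.Module):
    """MLP whose per-layer widths (hidden_dims) would be chosen by NAS."""
    def __init__(self, in_dim, hidden_dims, out_dim):
        super().__init__()
        dims = [in_dim] + list(hidden_dims)
        self.hidden = nn.ModuleList(
            [nn.Linear(dims[i], dims[i + 1]) for i in range(len(hidden_dims))])
        self.out = nn.Linear(dims[-1], out_dim)

    def forward(self, x):
        feats, h = [], x
        for layer in self.hidden:
            h = F.relu(layer(h))
            feats.append(h)
        feats.append(self.out(h))
        return feats


def layerwise_kl(student_feats, teacher_feats, projections, tau=2.0):
    """KL divergence between softened teacher/student features, layer by layer.
    Each student feature is first projected to the teacher's dimension."""
    loss = 0.0
    for s, t, proj in zip(student_feats, teacher_feats, projections):
        s_log_p = F.log_softmax(proj(s) / tau, dim=-1)
        t_p = F.softmax(t / tau, dim=-1)
        loss = loss + F.kl_div(s_log_p, t_p, reduction="batchmean") * tau * tau
    return loss


if __name__ == "__main__":
    n, in_dim, out_dim = 8, 16, 4
    x = torch.randn(n, in_dim)
    a_hat = torch.eye(n)                  # placeholder normalized adjacency

    teacher = TeacherGNN(in_dim, 32, out_dim)
    hidden_dims = [24]                    # assumed NAS-selected student widths
    student = StudentMLP(in_dim, hidden_dims, out_dim)
    # One projection per distilled layer to match teacher feature dimensions.
    projections = nn.ModuleList([nn.Linear(24, 32), nn.Identity()])

    with torch.no_grad():
        t_feats = teacher(x, a_hat)
    s_feats = student(x)
    print(float(layerwise_kl(s_feats, t_feats, projections)))
```

In practice the KL term above would be combined with the ordinary supervised loss on labeled nodes, and the per-layer projection heads are only one way to reconcile mismatched feature dimensions.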