This section presents the proposed framework for assessing educational systems and providing ideological instruction to college students. The flow begins with the 5G network setup. In the first step, we collect the dataset to initialize the proposed flow. The collected dataset is pre-processed using normalization. After the input education details have been pre-processed, the education data are grouped using the Hierarchical K-means technique, and features are then extracted using a variational autoencoder (VAE). Once reviewed for viability, the data are stored in a database for future research. The viable data are then transmitted to the student's identity using the Boosted TCP protocol (BTCP), which also converts the data efficiently. The performance of this protocol is improved using Enhanced Fruit Fly Optimization (EFFO). Metrics such as throughput, goodput, transmission rate, and execution time are analyzed, and the results are compared with existing methods. The proposed structure flow is represented in Fig. 1.

**5G Network Setup**:

While 5G network deployment and verification testing are now underway, the introduction of commercial 5G network services will necessitate monitoring solutions at 5G network I&M to ensure IPE performance. As a result, an efficient 5G network must be set up for reliable data transmission.

**Dataset Collection**:

This article selects 55 quantifiable formative evaluation forms for counselors' use, with each form containing two main criteria, largely covering management attitude and capability: handling individual learners fairly; cultivating a positive relationship with the students; speaking respectfully and modeling good behavior; and being truthful and self-disciplined. Whether students act unfairly and selfishly at work is also assessed.

**Data pre-processing using Normalization**:

The raw information is unstructured and may contain duplicate packets or incomplete records. It is thoroughly cleaned and processed to remove duplicate and repeated instances, as well as invalid entries. Because the educational system's databases are so large, sample-reduction techniques must be used, and feature-extraction approaches are required to remove unnecessary characteristics given the vast number of features in this dataset. During the pre-processing stage, the data are normalized. The first stage of the normalization procedure computes the z-score E, defined by equation (1).

$$E=\frac{{S}_{m}-\alpha }{\tau }$$ (1)

Where α is the mean of the data and τ is its standard deviation.

Further, equation (1) can be written as,

$$E=\frac{{S}_{m}-{\overline{S}}_{m}}{SD}$$ (2)

The sample standard deviation is denoted by SD.

A randomly assigned sample is modeled as,

$${E}_{h}={\sigma }_{0}+{\sigma }_{1}{Sm}_{h}+ {\mu }_{{0}_{h}}$$ (3)

Where \({\mu }_{{0}_{h}}\) is the error term.

Following that, the errors must be independent of each other, as detailed below.

$${t}_{v}=\sqrt{U}\frac{t}{\sqrt{{t}^{2}+u-1}}$$ (4)

Where the random variable is denoted by \({t}_{v}\).

The standard deviation is then used to normalize the changes in the variable.

The following formula is used to calculate the moment scaling deviation.

$${M}_{SD}=\frac{{\delta }^{ms}}{{\tau }^{ms}}$$ (5)

Where the moment scaling order is denoted as "ms".

$${\delta }^{ms}=Ex{\left({R}_{v}-\gamma \right)}^{ms}$$ (6)

In the above equation, \({R}_{v}\) and Ex represent the random variable and the expectation operator, respectively, and γ is the mean.

$${\tau }^{ms}={\left(\sqrt{Ex{\left({R}_{v}-\gamma \right)}^{2}}\right)}^{ms}$$ (7)

$$COV=\frac{\tau }{\overline{{R}_{v}}}$$ (8)

The coefficient of variation is represented by COV, where τ is the standard deviation.

Feature scaling is completed by mapping all variables into the range between 0 and 1; this is known as unity-based normalization. The normalized equation is therefore written as follows:

$${R}_{v}^{\text{'}}=\frac{t-{t}_{min}}{{t}_{max}-{t}_{min}}$$ (9)

After the input has been normalized, the range and consistency of the data remain constant. The goal of this phase is to reduce or eliminate data delay. The normalized data can then be used as input to the subsequent processes.
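As a minimal sketch, the z-score normalization of Eqs. (1)-(2) and the unity-based scaling of Eq. (9) can be implemented as follows; the function names and sample scores are illustrative, not from the original:

```python
import statistics

def z_score_normalize(samples):
    # Eqs. (1)-(2): E = (S_m - mean) / SD, with SD the sample standard deviation
    mean = statistics.fmean(samples)
    sd = statistics.stdev(samples)
    return [(s - mean) / sd for s in samples]

def min_max_normalize(samples):
    # Eq. (9): unity-based normalization into the range [0, 1]
    t_min, t_max = min(samples), max(samples)
    return [(t - t_min) / (t_max - t_min) for t in samples]

scores = [62.0, 75.0, 88.0, 70.0, 95.0]   # hypothetical evaluation scores
print(min_max_normalize(scores))           # every value falls in [0, 1]
```

After unity-based scaling the minimum maps to 0 and the maximum to 1, while the z-score variant centers the data at zero with unit spread.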

**Clustering using the Hierarchical K-means (HK) technique**:

Because college students' interactional activity varies, it is important to build tight groups of students with similar interactional characteristics; Hierarchical K-means clustering provides the best fit for the given dataset and outperforms other grouping approaches. Clustering divides m samples into j groups, with each input attribute belonging to exactly one cluster. Once the teacher has completed grading the students, the grouping can be built and completed from the gathered data.

For K-means clustering, go through the steps below.

First, set the initial conditions by defining the number of clusters and selecting the cluster centers at random. The distance between attributes is measured using the Euclidean distance.

$$z\left(c,d\right)=\sqrt{{\left({c}_{1}-{d}_{1}\right)}^{2}+{\left({c}_{2}-{d}_{2}\right)}^{2}+\dots +{\left({c}_{m}-{d}_{m}\right)}^{2}}$$ (10)

$$z\left(c,d\right)=\sqrt{\sum _{h=1}^{m}{\left({c}_{h}-{d}_{h}\right)}^{2}}$$ (11)

Where c and d are two Euclidean points and z is the distance between them. Second, allocate each data point to the closest cluster center to create a new partition. Third, update the centers of clusters that have gained or lost data points. Finally, repeat steps 2 and 3 until a distance convergence threshold is reached.
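The four K-means steps above can be sketched in plain Python using the Euclidean distance of Eq. (11); the toy data points and parameter defaults are illustrative assumptions:

```python
import math
import random

def euclidean(c, d):
    # Eq. (11): z(c, d) = sqrt(sum_h (c_h - d_h)^2)
    return math.sqrt(sum((ch - dh) ** 2 for ch, dh in zip(c, d)))

def k_means(points, j, max_iter=100, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, j)          # step 1: random initial centers
    clusters = [[] for _ in range(j)]
    for _ in range(max_iter):
        clusters = [[] for _ in range(j)]
        for p in points:                     # step 2: assign to nearest center
            idx = min(range(j), key=lambda i: euclidean(p, centers[i]))
            clusters[idx].append(p)
        new_centers = [                      # step 3: recompute each center
            tuple(sum(dim) / len(c) for dim in zip(*c)) if c else centers[i]
            for i, c in enumerate(clusters)
        ]
        # step 4: stop once the centers stop moving (convergence threshold)
        if max(euclidean(a, b) for a, b in zip(centers, new_centers)) < 1e-9:
            break
        centers = new_centers
    return centers, clusters

# Two well-separated toy groups of students' interaction features
points = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centers, clusters = k_means(points, j=2)
```

On this toy data the two recovered clusters coincide with the two visible groups.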

**Feature extraction using Variational autoencoder (VAE)**:

The aim is to determine which data-separation technique applied to 5G network connection data can improve clustering. Feature extraction is a method for deriving new features from an original dataset; it is effective because we want to reduce the resources needed for processing without sacrificing crucial qualities. Removing redundant aspects also helps reduce unnecessary characteristics in a study. Feature extraction transforms basic features into more significant ones. To reduce the high dimensionality of the feature vector, feature extraction constructs new features from the original input feature set; the transformation is carried out using algebraic transformations and optimization criteria. In addition, when dealing with high-dimensional challenges, feature extraction preserves critical information. By preserving the original relative distances between features and covering the potential structure of the original data, these dimensionality-reduction strategies avoid losing a considerable amount of information during the feature transformation process.
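A VAE learns an encoder that maps each input to the mean and log-variance of a low-dimensional latent Gaussian, then samples a compact feature vector via the reparameterization trick. The sketch below illustrates only that sampling step; the fixed toy mappings stand in for trained neural networks, and all names and values are illustrative assumptions:

```python
import math
import random

rng = random.Random(42)

def encode(x):
    # Toy stand-in for a learned encoder network: maps an input vector to
    # the mean and log-variance of a 2-dimensional latent Gaussian.
    mu = [sum(x) / len(x), max(x) - min(x)]
    log_var = [-1.0, -1.0]
    return mu, log_var

def reparameterize(mu, log_var):
    # Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, 1),
    # so the sampled latent feature stays differentiable w.r.t. mu, sigma.
    return [m + math.exp(0.5 * lv) * rng.gauss(0.0, 1.0)
            for m, lv in zip(mu, log_var)]

x = [0.2, 0.8, 0.5, 0.9]          # a normalized input feature vector
mu, log_var = encode(x)
z = reparameterize(mu, log_var)   # compact extracted feature
print(len(z))  # prints 2
```

The 4-dimensional input is thus compressed to a 2-dimensional latent feature, which is the dimensionality-reduction role the VAE plays in this flow.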

**Boosted TCP Protocol (BTCP)**:

In the education model, the framework directs the social and intellectual workers in the educational techniques employed. It is an important part of IPE's ideas, theories, and technical orientations, as well as its radical social and intellectual conceptions. Experts in IPE theory created a structure to improve cognition, action, and expression. Built on this understanding, IPE frameworks make up the education governance framework. H1 and H2 represent the administrative structure and the experimental units, respectively. The following equation gives the similarity between the databases of college students in the IPE:

$$Similarity=Sy=\frac{2k({H}_{1}+{H}_{2})}{k{H}_{1}+k{H}_{2}}$$ (12)

The number k signifies the module number, and Sy denotes the similarity between the two databases of university student records. The distribution of data is given by,

$$DD=Sy*\left(F+G*k\right)*[k{H}_{1}+k{H}_{2}]$$ (13)

In this scenario, F represents the role of human interest and G represents the sense of confusing tasks in the education sector, treating IPE as a way of feeding nations that need liberty rather than as the base of IPE. As shown in equation (14), the expert IPE structure is an academic structure that describes the illogical growth of the knowledge economy, which has lost its fundamental place in lifelong learning.

$$Sy={\beta }_{f}\left({k}_{1}{S}_{1}+{S}_{2}{k}_{t-1}+{S}_{1}{k}_{t-1}+{S}_{1}\right)$$ (14)

To further clarify its application effect, the approach is applied to the teaching of political and ideological subjects in a freshman class at universities. The questionnaires used in this study examine students' attitudes toward political and ideological courses, their satisfaction with classroom teaching, their acceptance of political and ideological courses, and their learning circumstances. Higher acceptance of political and ideological courses can then be used to evaluate the MATLAB simulation of IPE.
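Equations (12) and (13) can be computed directly; a small sketch, assuming illustrative values for k, H1, H2, F, and G (none of these values come from the original):

```python
def similarity(h1, h2, k):
    # Eq. (12): Sy = 2k(H1 + H2) / (k*H1 + k*H2)
    return 2 * k * (h1 + h2) / (k * h1 + k * h2)

def data_distribution(h1, h2, k, f, g):
    # Eq. (13): DD = Sy * (F + G*k) * [k*H1 + k*H2]
    sy = similarity(h1, h2, k)
    return sy * (f + g * k) * (k * h1 + k * h2)

# Illustrative values: k = 4 modules, H1 = 3 administrative units,
# H2 = 5 experimental units, F = 1.0 (human interest), G = 0.5
dd = data_distribution(3, 5, 4, 1.0, 0.5)
```

Note that, as written, Eq. (12) always evaluates to 2 for positive H1, H2, and k, since the k factors cancel; the code follows the equation literally.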

**Enhanced Fruit Fly Optimization (EFFO)**:

The basic FFO generates food sources around its swarm site within a radius of one. This radius is fixed and cannot be modified during the iteration process, which is the main disadvantage of the basic FFO: more iterations are required to obtain an ideal solution. In early iterations the fruit fly swarm site is frequently far from optimal, so the search radius may be too limited and a significant increase in iterations is required to identify a favorable spot. In the last generations the swarm is close to an optimal or near-ideal solution, and fine-tuning the solution space requires a very narrow scope, for which the fixed search radius is excessively broad. Adapting the search radius over the iterations can therefore improve the FFO's performance and reduce these flaws, as expressed in equation (15).

$$\rho ={\rho }_{max}\cdot exp\left(log\left(\frac{{\rho }_{min}}{{\rho }_{max}}\right)\cdot \frac{I}{{I}_{max}}\right)$$ (15)

Where \(\rho\) = each iteration's search radius

\({\rho }_{max}\) = maximum radius

\({\rho }_{min}\) = minimum radius

I = iteration number and

Imax = maximum iteration number.
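A sketch of the radius update of Eq. (15), with illustrative bounds for the minimum and maximum radius:

```python
import math

def search_radius(i, i_max, rho_min=0.01, rho_max=1.0):
    # Eq. (15): the radius decays exponentially from rho_max at iteration 0
    # toward rho_min at iteration i_max.
    return rho_max * math.exp(math.log(rho_min / rho_max) * i / i_max)

# The radius shrinks smoothly over the run: broad exploration early,
# narrow fine-tuning late.
radii = [search_radius(i, 100) for i in (0, 50, 100)]
```

This is what lets EFFO avoid the fixed-radius weakness of the basic FFO described above.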

To intensify the search, we do not modify all of the swarm location's decision variables when creating a new solution. Instead, we pick one decision variable at random. Let a ∈ {1, 2, …, m} be a randomly chosen index drawn from a uniform distribution. A new solution Yi = (yi,1, yi,2, ..., yi,m) is then formed as follows.

$${Y}_{i,j}=\left\{\begin{array}{cc}{\sigma }_{j}\pm \rho \cdot ran\left( \right)& if\,j=a\\ {\sigma }_{j}& otherwise\end{array}\right. \quad j=\text{1,2},\dots ,m$$ (16)

A suitable initial swarm location could speed up convergence to good solutions. To choose a decent swarm location, we generate a population of PS solutions at random, and the best one is chosen as the first fruit fly swarm location.

**Algorithm**

Enhanced Fruit Fly Optimization (EFFO)

// Initialization

Set parameters b, \({\rho }_{max}\), \({\rho }_{min}\), and \({I}_{max}\)

for i = 1, 2, …, b // produce food sources \({Y}_{1},{Y}_{2},\dots ,{Y}_{b}\)

$${Y}_{i,j}={L}_{j}+\left({U}_{j}-{L}_{j}\right)\times ran\left( \right), j=\text{1,2},\dots ,m$$

end for

\(\nabla \leftarrow arg\left({min}_{i=\text{1,2},\dots ,b} f\left({Y}_{i}\right)\right)\) // representation of swarm location

I = 0

repeat

\(\rho ={\rho }_{max}\cdot exp\left(log\left(\frac{{\rho }_{min}}{{\rho }_{max}}\right)\cdot \frac{I}{{I}_{max}}\right)\) // osphresis foraging phase

for i = 1, 2, …, b

a = random integer in [1, m]

// produce food source \({Y}_{i}=\left({y}_{i,1},{y}_{i,2},\dots ,{y}_{i,m}\right)\)

\({Y}_{i,j}=\left\{\begin{array}{cc}{\sigma }_{j}\pm \rho \cdot ran\left( \right)& if\,j=a\\ {\sigma }_{j}& otherwise\end{array}\right. j=\text{1,2},\dots ,m\)

if \({Y}_{i,j}>{U}_{j}\) then \({Y}_{i,j}={U}_{j}\)

if \({Y}_{i,j}<{L}_{j}\) then \({Y}_{i,j}={L}_{j}\)

end for // vision foraging phase

$${Y}_{best}=arg\left({min}_{i=\text{1,2},\dots ,b} f\left({Y}_{i}\right)\right)$$

if \(f\left({Y}_{best}\right)<f\left(\nabla \right)\) then \(\nabla ={Y}_{best}\)

if \(f\left(\nabla \right)<f\left({Y}^{*}\right)\) then \({Y}^{*}=\nabla\)

I = I + 1

until I = \({I}_{max}\)

The above pseudocode depicts the entire computing approach of the presented EFFO algorithm.
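The loop above can be sketched in Python as follows. This is not the authors' implementation: the sphere objective, bounds, population size, and iteration budget are illustrative assumptions, and ran() is taken as a uniform random draw:

```python
import math
import random

def effo(f, lower, upper, b=20, i_max=200, rho_min=1e-3, rho_max=None, seed=1):
    """Sketch of the EFFO loop: random initial population, shrinking search
    radius (Eq. 15), single-variable perturbation of the swarm location
    (Eq. 16), bound clamping, and swarm/global-best updates."""
    rng = random.Random(seed)
    m = len(lower)
    if rho_max is None:
        rho_max = max(u - l for l, u in zip(lower, upper))
    # Initial population; its best member becomes the first swarm location.
    pop = [[l + (u - l) * rng.random() for l, u in zip(lower, upper)]
           for _ in range(b)]
    swarm = min(pop, key=f)
    best, best_val = list(swarm), f(swarm)
    for i in range(i_max):
        # Eq. (15): exponentially shrinking search radius
        rho = rho_max * math.exp(math.log(rho_min / rho_max) * i / i_max)
        trials = []
        for _ in range(b):                   # osphresis (smell) phase
            a = rng.randrange(m)             # perturb one random variable
            y = list(swarm)
            y[a] += rho * rng.uniform(-1.0, 1.0)
            y[a] = min(max(y[a], lower[a]), upper[a])  # clamp to [L, U]
            trials.append(y)
        y_best = min(trials, key=f)          # vision phase
        if f(y_best) < f(swarm):
            swarm = y_best                   # move the swarm location
        if f(swarm) < best_val:
            best, best_val = list(swarm), f(swarm)
    return best, best_val

# Minimize a 2-D sphere function as a stand-in objective.
sphere = lambda y: sum(v * v for v in y)
sol, val = effo(sphere, lower=[-5.0, -5.0], upper=[5.0, 5.0])
```

With the shrinking radius, early iterations explore broadly while late iterations fine-tune near the swarm location, which is exactly the weakness of the fixed-radius FFO that EFFO addresses.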