The experimental setup is made with the following entities:
1. Data Center
2. Data Center Broker
3. Hosts
4. Processing Elements
5. Virtual Machines
6. Cloudlets
Number of Processing Elements (CPUs) per host = 1
Data Center Specifications:
Architecture = "x86"
Operating System = "Linux"
Virtual Machine Monitor (VMM) = "Xen"
Cloudlet Specifications: 40 cloudlets with random burst times in the range 1-100.
Host Specifications: 4 Hosts with one processing element in each host, having
RAM = 4096 MB
Storage = 1000000
Bandwidth = 10000
Virtual Machine Specifications: 4 Virtual Machines are created with the following details:
Size = 10000 MB
RAM = 512 MB
MIPS = 1000
Bandwidth = 1000
Simulator: CloudSim
Dataset: Google Dataset (https://zenodo.org/record/3696775#.YxtXFHZBy3A) on
“Problem instances for scheduling jobs with time windows on unrelated parallel machines”
The experiment is conducted in two phases.
- Phase-I: Processes are run under eight different settings using static and dynamic Round-Robin (RR), Shortest Job First (SJF) and First Come First Serve (FCFS), with wide variations of the time slice.
- Phase-II: The algorithms producing the best results in Phase-I are short-listed for comparison with the proposed algorithm.
The eight settings chosen for Phase-I are:
Case 1: Static Round-Robin with Time-Slice = 5 secs
Case 2: Static Round-Robin with Time-Slice = 10 secs
Case 3: Static Round-Robin with Time-Slice = 20 secs
Case 4: Static Round-Robin with Time-Slice = (Average BT + Highest BT)/2
Case 5: Static First Come First Serve (Non-Pre-Emptive)
Case 6: Static SJF + Round-Robin with Time-Slice = (Average BT + Highest BT)/2
Case 7: Dynamic Round-Robin with Time-Slice = (Average BT + Highest BT)/2
Case 8: Dynamic SJF + Round-Robin with Time-Slice = (Average BT + Highest BT)/2
The random burst times chosen for 40 tasks/processes/cloudlets are listed in Table 2 below for submission to 4 virtual machines in groups of 10 each.
Table 2: Burst Time of Input Processes

| Process No | VM-1  | VM-2  | VM-3  | VM-4  |
|------------|-------|-------|-------|-------|
| P1         | 21    | 9     | 43    | 38    |
| P2         | 72    | 57    | 99    | 34    |
| P3         | 15    | 81    | 23    | 65    |
| P4         | 19    | 7     | 13    | 29    |
| P5         | 75    | 50    | 44    | 40    |
| P6         | 30    | 90    | 10    | 61    |
| P7         | 88    | 22    | 44    | 39    |
| P8         | 58    | 73    | 37    | 83    |
| P9         | 17    | 25    | 65    | 31    |
| P10        | 11    | 48    | 55    | 70    |
| Average BT | 40.60 | 46.20 | 43.30 | 49.00 |
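As a quick cross-check of the Case-4/Case-7 formula, the time slice TS = (Average BT + Highest BT)/2 can be evaluated per VM directly from the Table 2 data. A minimal Python sketch (not the CloudSim implementation; variable names are illustrative):

```python
# Burst times per VM, copied from Table 2.
vm_bursts = {
    "VM-1": [21, 72, 15, 19, 75, 30, 88, 58, 17, 11],
    "VM-2": [9, 57, 81, 7, 50, 90, 22, 73, 25, 48],
    "VM-3": [43, 99, 23, 13, 44, 10, 44, 37, 65, 55],
    "VM-4": [38, 34, 65, 29, 40, 61, 39, 83, 31, 70],
}

# TS = (average burst time + highest burst time) / 2, per VM.
time_slice = {vm: (sum(b) / len(b) + max(b)) / 2 for vm, b in vm_bursts.items()}
# e.g. VM-1: (40.6 + 88) / 2 = 64.3
```

For VM-1 this yields 64.3, which truncates to the time slice of 64 used in the Case-8 walk-through of Section 6 b.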
6. Results and Experimental Analysis of Phase-I
When the 40 tasks of Table 2 were subjected to the eight different cases under 4 VMs, the Average Waiting Time (AWT), Average Completion Time (ACT) and number of Context Switches (CS) obtained are tabulated in Table 3 below.
Table 3: Results of Phase-I

| Case   | VM-1 ACT | VM-1 AWT | VM-1 CS | VM-2 ACT | VM-2 AWT | VM-2 CS |
|--------|----------|----------|---------|----------|----------|---------|
| Case-1 | 254      | 213      | 85      | 297      | 251      | 181     |
| Case-2 | 255      | 214      | 45      | 292      | 246      | 50      |
| Case-3 | 240      | 199      | 24      | 304      | 257      | 29      |
| Case-4 | 269      | 229      | 14      | 285      | 239      | 14      |
| Case-5 | 228      | 188      | 10      | 246      | 199      | 10      |
| Case-6 | 267      | 226      | 13      | 284      | 237      | 13      |
| Case-7 | 167      | 126      | 14      | 194      | 148      | 14      |
| Case-8 | 167      | 126      | 14      | 194      | 148      | 14      |

| Case   | VM-3 ACT | VM-3 AWT | VM-3 CS | VM-4 ACT | VM-4 AWT | VM-4 CS |
|--------|----------|----------|---------|----------|----------|---------|
| Case-1 | 292      | 249      | 270     | 385      | 385      | 371     |
| Case-2 | 294      | 251      | 48      | 378      | 378      | 53      |
| Case-3 | 295      | 252      | 27      | 355      | 355      | 29      |
| Case-4 | 254      | 211      | 13      | 256      | 256      | 12      |
| Case-5 | 238      | 195      | 10      | 249      | 249      | 10      |
| Case-6 | 245      | 202      | 11      | 257      | 257      | 12      |
| Case-7 | 171      | 128      | 11      | 227      | 227      | 13      |
| Case-8 | 171      | 128      | 11      | 227      | 227      | 13      |
6 a. Analysis of the Results (Case-3 under VM-3)
Figure 2 below shows the Gantt Chart of 10 processes running under VM-3 for Case-3.
From the Gantt Chart, the completion times of the 10 processes are noted in Table 4 below.
Table 4: Completion Time of 10 processes running under VM-3 (Case-3)

| Process | Completion Time |
|---------|-----------------|
| P1      | 326             |
| P2      | 433             |
| P3      | 226             |
| P4      | 73              |
| P5      | 350             |
| P6      | 103             |
| P7      | 354             |
| P8      | 283             |
| P9      | 414             |
| P10     | 389             |
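These completion times can be reproduced with a plain round-robin simulation over the VM-3 burst times of Table 2, using a quantum of 20 and assuming all tasks arrive at time 0. A sketch, not the CloudSim implementation:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate static RR: returns per-task completion times and
    the number of iterations (quanta) each task ran in."""
    remaining = list(bursts)
    completion = [0] * len(bursts)
    iterations = [0] * len(bursts)
    queue = deque(range(len(bursts)))   # all tasks ready at t = 0
    t = 0
    while queue:
        i = queue.popleft()
        run = min(remaining[i], quantum)
        t += run
        remaining[i] -= run
        iterations[i] += 1
        if remaining[i] == 0:
            completion[i] = t           # task finishes here
        else:
            queue.append(i)             # re-join the back of the queue
    return completion, iterations

vm3 = [43, 99, 23, 13, 44, 10, 44, 37, 65, 55]   # P1..P10 under VM-3 (Table 2)
completion, iterations = round_robin(vm3, 20)
# completion == [326, 433, 226, 73, 350, 103, 354, 283, 414, 389]  (Table 4)
# sum(iterations) == 27 context switches
```

The simulated completion times match Table 4 exactly, and the per-task iteration counts (3, 5, 2, 1, 3, 1, 3, 2, 4, 3) sum to the 27 context switches computed below.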
Average Completion Time:
(326 + 433 + 226 + 73 + 350 + 103 + 354 + 283 + 414 + 389)/10 = 2951/10 = 295.1 ≈ 295
Average Waiting Time (AWT): Average Waiting Time is defined as
AWT = Sum of ((Time of Completion - Burst Time) of each job) / Number of Jobs
So, for the above example,
AWT = (283 + 334 + 203 + 60 + 306 + 93 + 310 + 246 + 349 + 334)/10 = 2518/10 = 251.8 ≈ 252
Context Switches: Number of Context Switches is defined as
Context Switches (CS) = Sum of (Number of Iterations of Each Process)
For the sample case considered,
CS = 3 + 5 + 2 + 1 + 3 + 1 + 3 + 2 + 4 + 3 = 27
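The three metrics above can be cross-checked numerically from Table 2 (burst times), Table 4 (completion times) and the per-task iteration counts read off the Gantt chart. A small sketch:

```python
bursts     = [43, 99, 23, 13, 44, 10, 44, 37, 65, 55]            # VM-3, Table 2
completion = [326, 433, 226, 73, 350, 103, 354, 283, 414, 389]   # Table 4
iterations = [3, 5, 2, 1, 3, 1, 3, 2, 4, 3]                      # from the Gantt chart

act = sum(completion) / len(completion)                              # avg completion time
awt = sum(c - b for c, b in zip(completion, bursts)) / len(bursts)   # avg waiting time
cs  = sum(iterations)                                                # total context switches
# act == 295.1, awt == 251.8, cs == 27
```

Rounded, these give ACT = 295, AWT = 252 and CS = 27, matching the Case-3/VM-3 row of Table 3.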
6 b. Analysis of the Results (Case-8 under VM-1)
Table 5 below lists the tasks and related averages and time-slice values in each iteration.
Table 5: Sorted Burst Times under VM-1

| Burst Times                            | Dynamic Averages | Time Slices |
|----------------------------------------|------------------|-------------|
| 11, 15, 17, 19, 21, 30, 58, 72, 75, 88 | 41, 14, 5        | 64, 19, 5   |
Iteration 1:
Av = (11 + 15 + 17 + 19 + 21 + 30 + 58 + 72 + 75 + 88)/10 = 40.6 ≈ 41
TS = (Average BT + Highest BT)/2 = (41 + 88)/2 ≈ 64
As the time slice is 64, the first seven tasks complete fully and the remaining three tasks are only partly executed; their remaining burst times are 8, 11 and 24.
Iteration 2:
Av = (8 + 11 + 24)/3 = 14.33 ≈ 14
TS = (14 + 24)/2 = 19
Now only the last task is left incomplete, with a remaining burst time of 5.
Iteration 3:
Av = 5
TS = (5 + 5)/2 = 5
Average Completion Time:
(11 + 26 + 43 + 62 + 83 + 113 + 171 + 371 + 382 + 406)/10 = 1668/10 = 166.8 ≈ 167
Average Waiting Time:
(0 + 11 + 26 + 43 + 62 + 83 + 113 + 299 + 307 + 318)/10 = 1262/10 = 126.2 ≈ 126
Total number of Context Switches:
(1 + 1 + 1 + 1 + 1 + 1 + 1 + 2 + 2 + 3) = 14
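The whole Case-8 walk-through can be reproduced with a short simulation of dynamic SJF + RR: sort the tasks once (SJF), then in each iteration recompute TS = (average + highest remaining BT)/2 and give every unfinished task one quantum. A sketch under those assumptions (not the CloudSim implementation), truncating the time slice to an integer as in the worked example:

```python
def dynamic_sjf_rr(bursts):
    """Case-8: SJF ordering, then RR with a per-iteration dynamic quantum
    TS = (average + highest remaining burst time) / 2."""
    rem = sorted(bursts)             # SJF ordering of the ready queue
    n = len(rem)
    completion = [0] * n
    rounds = [0] * n                 # iterations each task appears in
    slices = []
    t = 0
    pending = list(range(n))
    while pending:
        left = [rem[i] for i in pending]
        ts = int((sum(left) / len(left) + max(left)) / 2)   # dynamic quantum
        slices.append(ts)
        still = []
        for i in pending:
            run = min(rem[i], ts)
            t += run
            rem[i] -= run
            rounds[i] += 1
            if rem[i] == 0:
                completion[i] = t
            else:
                still.append(i)
        pending = still
    return slices, completion, rounds

vm1 = [21, 72, 15, 19, 75, 30, 88, 58, 17, 11]   # VM-1 burst times, Table 2
slices, completion, rounds = dynamic_sjf_rr(vm1)
# slices == [64, 19, 5]
# completion == [11, 26, 43, 62, 83, 113, 171, 371, 382, 406]
# sum(rounds) == 14 context switches
```

The simulation reproduces the time slices (64, 19, 5), the completion times used above, ACT = 166.8 ≈ 167, AWT = 126.2 ≈ 126 and CS = 14, matching the Case-8/VM-1 row of Table 3.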
The total number of context switches of the 10 tasks in each VM, under the eight cases of different time slices with/without ordering of tasks, is given in the graph of Figure 3. From the graph, it is evident that context switching is minimum for Case-5 (FCFS).
However, the completion-time and waiting-time graphs in Figure 4 and Figure 5 for the same tasks and VMs show that the best results are obtained for Case-7 and Case-8. These cases are therefore short-listed for Phase-II, where they are compared with the proposed algorithm.
7. Results of Phase-II (Output of the Proposed Algorithm)
As short-listed in Phase-I, Round-Robin combined with SJF (Case-6 and Case-8) is considered in Phase-II, with the time slice set to the different values suggested by earlier researchers. These algorithms are then compared with the proposed algorithm to show that the proposed work performs better.
The 40 processes earlier run under 4 VMs (Table 2) are combined to run under one single VM using the proposed algorithm, and the results so obtained are compared with the other algorithms of reference [4]. The results for the proposed algorithm NODARR, along with the other algorithms referenced in [4], are tabulated in Table 6 below. The related graphs are shown in Figures 6 and 7.
Finally, the experiment is extended to run for 1000 processes to confirm the better performance of the proposed algorithm. The results are depicted in Figure 8. Here again, the proposed algorithm outperformed the other two algorithms in all three metrics: AWT, ACT and CS.
Table 6: Results of the proposed algorithm

| Time Slice            | Algorithm  | CS | ACT | AWT |
|-----------------------|------------|----|-----|-----|
| TS = (Av + median)/2  | MARR [4]   | 73 | 851 | 806 |
| TS = (Av + highest)/2 | AMRR [1]   | 48 | 678 | 633 |
| TS = Second Largest   | NODARR [9] | 40 | 627 | 582 |
The results show that the proposed algorithm performs better than the other two algorithms suggested in reference [4].