## Overview

- [Parallel Computing Challenges](#parallel-computing-challenges)
  - [Overview](#overview)
  - [Task 1 - Parallise `for` Loop](#task-1---parallise-for-loop)
  - [Task 2 - Run task 1 on HPC cluster](#task-2---run-task-1-on-hpc-cluster)
  - [Task 3 - Reduction Clause](#task-3---reduction-clause)
  - [Task 4 - Private Clause](#task-4---private-clause)
  - [Task 5 - Calculate Pi using "Monte Carlo Algorithm"](#task-5---calculate-pi-using-monte-carlo-algorithm)
  - [Bonus - Laplace equation to calculate the temperature of a square plane](#bonus---laplace-equation-to-calculate-the-temperature-of-a-square-plane)

## Task 1 - Parallise `for` Loop

Goal: To create an array `[0,1,2,…,19]`

1. Compile `array.c` and execute it. Check the run time of the serial code
2. Add `#pragma<>`
3. Compile the code again
4. Run the parallel code and check the improved run time

## Task 2 - Run task 1 on HPC cluster

1. Check the available partitions with `show_cluster`
2. Modify `RunHello.sh`
3. `sbatch RunHello.sh`
4. `cat slurm<>.out` and check the run time

> You can also use [strudel web](https://beta.desktop.cvl.org.au/login) to run the script without sbatch.

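A minimal `RunHello.sh` for step 2 might look like the following. The partition name, CPU count, and time limit here are assumptions; pick real values from the `show_cluster` output:

```shell
#!/bin/bash
#SBATCH --job-name=array
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4        # OpenMP thread count (assumed value)
#SBATCH --time=00:05:00
#SBATCH --partition=batch        # replace with a partition from `show_cluster`

# Match the OpenMP thread count to the CPUs Slurm allocated
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

gcc -fopenmp array.c -o array
./array
```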
## Task 3 - Reduction Clause

Goal: To find the sum of the array elements

1. Compile `reduction.c` and execute it. Check the run time
2. Add `#pragma<>`
3. Compile `reduction.c` again
4. Run the parallel code and check the improved run time. Make sure you get the same result as the serial code

> Run `module load gcc` to use a newer version of gcc if you get an error mentioning something like `-std=c99`.

## Task 4 - Private Clause

The goal of this task is to square each value in the array and find the sum of the squares.

1. Compile `private.c` and execute it. Check the run time. `#include` the default library `<math.h>` and link it (e.g. with `-lm`)
2. Add `#pragma<>`
3. Compile `private.c` again
4. Run the parallel code and check the improved run time

## Task 5 - Calculate Pi using "Monte Carlo Algorithm"

Goal: To estimate the value of pi from simulation

- No instructions for this task. Use what you have learnt in the previous tasks to run the code in parallel!
- You should get a result close to pi (3.1415…)

Short explanation of the Monte Carlo algorithm:
