I’m trying to run some FDTD simulations on a high-performance computing cluster, but I’m running into difficulties. I want to run a distributed computing job, but I can’t get Lumerical to use more than one processor, and I was wondering if anybody could tell me what I’m doing wrong.
Our cluster uses Slurm as a job manager, so I’ve created a bash script (attached) for submitting my job. In the script, I do the normal setup where I set a number of nodes and a number of processes per node (as well as RAM allocations) and then I point Lumerical to the MPICH2 Nemesis MPI and run the job.
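For context, a minimal Slurm submission script along the lines I described might look like the sketch below. Note this is not my actual attached script; the paths, node counts, memory request, and the `mysimulation.fsp` filename are all placeholders, and the engine binary name assumes a typical Lumerical install with the MPICH2 Nemesis variant of the FDTD engine:

```shell
#!/bin/bash
#SBATCH --job-name=fdtd-run
#SBATCH --nodes=2               # placeholder node count
#SBATCH --ntasks-per-node=8     # placeholder processes per node
#SBATCH --mem=16G               # placeholder RAM allocation

# Assumed install locations -- adjust to your cluster.
FDTD_ENGINE=/opt/lumerical/fdtd/bin/fdtd-engine-mpich2nem
MPIEXEC=/opt/mpich2/bin/mpiexec

# Launch one MPI rank per allocated Slurm task. If the engine is started
# directly (without mpiexec, or with -n 1), it runs as a single process.
$MPIEXEC -n $SLURM_NTASKS $FDTD_ENGINE mysimulation.fsp
```

The key detail in a setup like this is that the MPI launch line has to actually pass the allocated task count (e.g. via `$SLURM_NTASKS`) to `mpiexec`; Slurm reserving the nodes does not by itself make the engine run in parallel.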
Whenever the job runs, the cluster allocates the correct number of nodes, but the Lumerical log files (one is attached) report that only 1 CPU is used and that the process grid is 1x1x1. The simulation runs fine and produces an output file, but it defeats the purpose of using the HPC cluster if I’m only using 1 CPU.
Is there something I have to set in the resource configuration before saving the .fsp file? Or another command I need to add to the Slurm batch script? Or is this a question for our HPC support staff?