Topology optimization of a 4-channel wavelength de-multiplexer

Topology Optimization of a 4-channel wavelength demultiplexer (2D, TE-polarization)

Overview

In this example, we use the topology optimization feature of the inverse design toolbox lumopt to design a 4-channel wavelength demultiplexer. We target a 10nm transmission band around each of the center wavelengths 1270nm, 1290nm, 1310nm and 1330nm, with a 10nm gap between adjacent channels for isolation.
In this example, we restrict the design area to 6um x 6um.

Run and Results

Download the file CWDM_splitter_1310_4ch_2D_TE_topology.zip (2.6 KB) and unzip the files into one common directory.

To speed up the optimization, it is recommended to configure the FDTD resources so that the job manager can run several jobs in parallel. Since each job is a relatively small 2D simulation, it makes sense to configure several resources that use one thread each. For example, if the computer that runs the optimization has 8 physical cores, add 8 resources with one core each. In this example, there is no point in having more than 8 resources, so if the computer has 16 cores, it makes sense to add 8 resources with two cores each.
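The core-splitting arithmetic described above can be sketched as a small helper. This is illustrative only: the resource configuration itself is done in the FDTD resource manager, and the limit of 8 parallel jobs is an assumption taken from this example, not a lumopt constant.

```python
# Illustrative sketch of the resource arithmetic described above.
# The function name and the max_parallel_jobs=8 default are assumptions
# for this example, not part of the lumopt or FDTD API.
def resource_split(physical_cores, max_parallel_jobs=8):
    """Return (num_resources, threads_per_resource) for the resource manager."""
    num_resources = min(physical_cores, max_parallel_jobs)
    threads = max(1, physical_cores // num_resources)
    return num_resources, threads

print(resource_split(8))   # 8 resources with one core each
print(resource_split(16))  # 8 resources with two cores each
```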

Running the python script then starts the optimization. This example is complex, and the full optimization runs for about 1000 iterations. Depending on the available compute resources, this can take from several hours up to a few days.

Important Model Settings

The initial condition

In this example, we initially fill the design area with a fictitious average material parameter, which often results in solutions with good performance. However, depending on the FOM and other settings, it is possible that a different starting condition will yield better results.

Maximum number of iterations, especially during the binarization phase

The 4-channel wavelength demultiplexer is a complex device and the optimization requires significantly more iterations than simpler devices (such as the Y-splitter). In this example, we increased the number of iterations for the initial grayscale phase to 500 and we increased the maximum number of iterations for each binarization step to 50. Further increasing those values will lead to longer optimization times but can also further improve the results.

Here is a list of settings that you can modify to obtain different results:

Structure
The structure itself can be changed by modifying the footprint and also the smoothing filter radius.


Warning: Along the y-axis, the structure should not be less than 5um tall, since otherwise the optimization region is no longer connected to the output waveguides.
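Based on the parameter names that appear later in this thread (size_x, size_y, filter_R), a minimal sketch of these settings could look like the following. The units are inferred from how the quoted scripts use the variables (footprint in nm, filter radius in m) and should be checked against your copy of the script:

```python
# Design-region footprint and smoothing filter radius as used in this
# example (values inferred from the working_dir name in this thread:
# x6000_y6000_f0200). Adjust them to explore different designs.
size_x = 6000       # design region width in nm (6 um)
size_y = 6000       # design region height in nm; keep >= 5000 (5 um) so the
                    # region stays connected to the output waveguides
filter_R = 200e-9   # smoothing filter radius in meters (200 nm)

assert size_y >= 5000, "structure too short to reach the output waveguides"
```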

Initial conditions
Since we use a local, gradient-based optimizer, the result often strongly depends on the initial guess. The given example has four different initial conditions built in to try:


By uncommenting any of the lines, one can try different initial conditions. In addition, it is possible to provide custom initial conditions such as random noise or specific structures.
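As a standalone illustration of such custom initial conditions (lumopt is not needed for this sketch), the initial parameters are simply an array of material weights in [0, 1], one value per pixel of the design grid. The array shape below is a placeholder; it must match the optimization grid your geometry object expects:

```python
import numpy as np

# Illustrative sketch of custom initial parameter maps for the design
# region. The grid size (nx, ny) is a placeholder; the values are material
# weights between 0 (background) and 1 (waveguide material).
nx, ny = 120, 120

# Option 1: fictitious average material everywhere (the default used here)
params_uniform = 0.5 * np.ones((nx, ny))

# Option 2: random noise around the average material, clipped to [0, 1]
rng = np.random.default_rng(seed=0)
params_noise = np.clip(0.5 + 0.1 * rng.standard_normal((nx, ny)), 0.0, 1.0)
```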

Convergence parameters of the optimizer
Lumopt has a number of parameters that control the progress of the optimizer. Changing them often allows trading off result quality against computation time. Here are some of the most important ones:

  • max_iter: Determines after how many iterations the greyscale phase ends. The optimizer can terminate before this number due to other criteria (see below) but it will never perform more iterations. Increasing this number will often require more computational time but can lead to better performance. The default value is 400 in this example.

  • ftol: The minimum required change in the figure of merit (FOM) between two subsequent iterations for the greyscale optimization to continue. The default value is 1e-5 in this example.

  • continuation_max_iter: The maximum number of iterations for each binarization step. If not specified, the default is 20 iterations, but for complicated structures, explicitly setting a larger number can lead to better optimization results (at the cost of more simulation time).

  • beta_factor: After each binarization step, the new beta value is computed as
    $$\beta_{new} = \beta_{old} \cdot \beta_{factor}.$$ The default value is beta_factor=1.2 but you can set other values. Increasing beta_factor can make the optimization faster; decreasing it can give better optimization results, but at the cost of longer compute times.
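To make the interplay of these parameters concrete, here is a plain-Python sketch (no lumopt required) of the control flow they govern: a greyscale phase bounded by max_iter and ftol, followed by binarization steps of at most continuation_max_iter iterations each, with beta growing by beta_factor after every step. This is a simplified model of the schedule, not the actual lumopt implementation; the fom_step function is a stand-in for a real FDTD simulation.

```python
# Simplified control-flow model of the optimizer settings discussed above
# (max_iter, ftol, continuation_max_iter, beta_factor). Illustrative only.

def optimize(fom_step, max_iter=500, ftol=1e-5,
             continuation_max_iter=50, beta_factor=1.2, beta_max=100.0):
    beta = 1.0
    fom_prev = fom_step(beta)
    iterations = 0

    # Greyscale phase: run until max_iter, or stop early once the FOM
    # change between two iterations falls below ftol.
    for _ in range(max_iter):
        fom = fom_step(beta)
        iterations += 1
        if abs(fom - fom_prev) < ftol:
            break
        fom_prev = fom

    # Binarization phase: repeatedly increase beta and re-optimize for at
    # most continuation_max_iter iterations per step.
    while beta < beta_max:
        beta *= beta_factor
        for _ in range(continuation_max_iter):
            fom_step(beta)
            iterations += 1
    return beta, iterations

# A dummy FOM that improves quickly and then stalls, triggering ftol.
history = []
def dummy_fom(beta):
    history.append(beta)
    return 1.0 - 0.5 ** len(history)

final_beta, n_iter = optimize(dummy_fom)
```

With these defaults the greyscale phase stops early via ftol, while the binarization phase dominates the iteration count, which mirrors why the binarization settings matter so much for total runtime.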

Hi,

Thanks for the informative example and discussion.

I get this error after running the example, without any changes, at iteration 293:

Traceback (most recent call last):
  File "C:\Users\bayoum53\Desktop\LumOpt\Topology optimization of a 4-channel wavelength de-multiplexer\CWDM_splitter_1310_4ch_2D_TE_topology\CWDM_splitter_1310_4ch_2D_TE_topology.py", line 69, in <module>
    runSim(params, eps_bg, eps_wg, x_pos, y_pos, size_x*1e-9, size_y*1e-9, filter_R, working_dir=working_dir, beta=1)
  File "C:\Users\bayoum53\Desktop\LumOpt\Topology optimization of a 4-channel wavelength de-multiplexer\CWDM_splitter_1310_4ch_2D_TE_topology\CWDM_splitter_1310_4ch_2D_TE_topology.py", line 48, in runSim
    opt.run(working_dir = working_dir)
  File "C:\Program Files\Lumerical\v202\api\python\lumopt\optimization.py", line 340, in run
    self.optimizer.run()
  File "C:\Program Files\Lumerical\v202\api\python\lumopt\optimizers\generic_optimizers.py", line 56, in run
    method = self.method)
  File "C:\Program Files\Lumerical\v202\python-3.6.8-embed-amd64\Lib\site-packages\scipy\optimize\_minimize.py", line 600, in minimize
    callback=callback, **options)
  File "C:\Program Files\Lumerical\v202\python-3.6.8-embed-amd64\Lib\site-packages\scipy\optimize\lbfgsb.py", line 335, in _minimize_lbfgsb
    f, g = func_and_grad(x)
  File "C:\Program Files\Lumerical\v202\python-3.6.8-embed-amd64\Lib\site-packages\scipy\optimize\lbfgsb.py", line 285, in func_and_grad
    f = fun(x, *args)
  File "C:\Program Files\Lumerical\v202\python-3.6.8-embed-amd64\Lib\site-packages\scipy\optimize\optimize.py", line 326, in function_wrapper
    return function(*(wrapper_args + args))
  File "C:\Program Files\Lumerical\v202\api\python\lumopt\optimizers\minimizer.py", line 24, in callable_fom_local
    fom = callable_fom(params_over_scaling_factor)
  File "C:\Program Files\Lumerical\v202\api\python\lumopt\optimization.py", line 117, in callable_fom
    fom_list = list(map(process_forward_solve, self.optimizations))
  File "C:\Program Files\Lumerical\v202\api\python\lumopt\optimization.py", line 115, in process_forward_solve
    return optimization.process_forward_sim(iter)
  File "C:\Program Files\Lumerical\v202\api\python\lumopt\optimization.py", line 583, in process_forward_sim
    Optimization.check_simulation_was_successful(self.sim)
  File "C:\Program Files\Lumerical\v202\api\python\lumopt\optimization.py", line 860, in check_simulation_was_successful
    raise UserWarning('FDTD simulation did not complete successfully: status {0}'.format(simulation_status))
UserWarning: FDTD simulation did not complete successfully: status 0.0

Any idea what might have caused it?

Also, is it currently available to continue my optimization starting from where the optimization stopped last time?

Thanks,

Ahmed.

Hi Ahmed,
the error message suggests that one of the FDTD simulations of that iteration (either forward or adjoint, possibly both) did not finish successfully. There are several possible reasons why that can happen, e.g. the solver ran out of disk space, ran out of memory, or could not obtain a license.

The easiest way to find out why the solver failed is to check the log file which should be in the same folder as the simulation file (e.g. opts_0/forward_0_p0.log).

Unfortunately, it is not possible to exactly continue the optimization where it failed. The main reason is that the L-BFGS optimizer builds up some internal state based on the optimization history which is not saved.

However, you can easily use the last valid result to start a new optimization with the same parameters. This is almost as good as an exact restart. A rough sketch of how to start from a previous optimization looks like this:

prev_filename='<path to old optimization>/parameters_292.npz' #< Parameters of the iteration to load
prev_geom = TopologyOptimization2D.from_file(prev_filename)
params = prev_geom.last_params
beta = prev_geom.beta

runSim(params, ...)

If the optimization was already in the binarization phase (i.e. beta>1) you may want to reduce the number of iterations for the first phase of the restart.
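The restart relies on the parameters_*.npz files that the optimization writes after each iteration. As a standalone illustration of the underlying mechanism (the key names 'params' and 'beta' below are assumptions for this sketch, not the documented lumopt file layout — in practice TopologyOptimization2D.from_file handles the loading for you), saving and restoring such a checkpoint with NumPy looks like this:

```python
import os
import tempfile
import numpy as np

# Illustrative sketch of checkpointing design parameters to an .npz file
# and restoring them later. The key names 'params' and 'beta' are
# assumptions for illustration, not the documented lumopt format.
params = 0.5 * np.ones((120, 120))  # current design weights
beta = 1.2 ** 5                     # current binarization strength

path = os.path.join(tempfile.mkdtemp(), 'parameters_292.npz')
np.savez(path, params=params, beta=beta)

# ... later, e.g. in a fresh Python session ...
data = np.load(path)
restored_params = data['params']
restored_beta = float(data['beta'])
```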


Hi Jens,

Thanks for your reply.

I tried the following changes as per your suggestion, I also attach the .py file below:

prev_filename='CWDM_splitter_1310_4ch_2D_TE_x6000_y6000_f0200_0\parameters_291.npz'
prev_geom = TopologyOptimization2D.from_file(prev_filename)
params = prev_geom.last_params
beta = prev_geom.beta

working_dir = 'CWDM_splitter_1310_4ch_2D_TE_x{:04d}_y{:04d}_f{:04d}'.format(size_x,size_y,int(filter_R*1e9))
runSim(params, eps_bg, eps_wg, x_pos, y_pos, size_x*1e-9, size_y*1e-9, filter_R, working_dir=working_dir, beta=beta)

The optimization ran for 10 iterations and then I received the following error:

While I think this error could be bypassed by changing how the multiplication is performed in the LumOpt script, I worry that I might not have imported the last .npz file correctly.

Regards,

Ahmed.

CWDM_splitter_1310_4ch_2D_TE_topology starting from where optimization stopped.py (4.4 KB)

Hello,

I have a question regarding manufacturability and 3D optimization.

Looking at the GDSII file generated in this example using the GDSII extraction method with an index threshold of 2.05, it seems there are feature sizes below 100 nm (~26 nm in the attached image; GDSII attached) and angles of less than 90 degrees. Could this be avoided in further simulations?

contours_ind_thresh_2.05.gds (58.1 KB)

For the 3D optimization, do you think using 2.5D simulations would be an accurate representation of the actual device? Since the 2D simulations can take up to a few days, I believe it would take a tremendous amount of time to optimize a structure with this footprint in 3D.

Thanks,

Ahmed.

Hi Jens,

Thanks for your useful example.
I have a same problem like Ahmed but in 3D at iteration of 179. I used following code to start a new optimization with the same parameters.

prev_filename='D:\Goudarzi\API\Y branch\3D_2_monitors_70_30\superopt_2/parameters_175.npz'
prev_geom = TopologyOptimization3DLayered.from_file(prev_filename)
params = prev_geom.last_params
beta = prev_geom.beta

runSim(params, eps_min, eps_max, x_pos, y_pos, z_pos, filter_R*1e-9, beta)

then I received the following error:

CONFIGURATION FILE {'root': 'C:\Program Files\Lumerical\v202\api\python', 'lumapi': 'C:\Program Files\Lumerical\v202\api\python'}
Traceback (most recent call last):
  File "D:\Goudarzi\API\Y branch\3D_2_monitors_70_30\splitter_opt_3D_TE_topology.py", line 83, in <module>
    prev_geom = TopologyOptimization3DLayered.from_file(prev_filename)
TypeError: from_file() missing 1 required positional argument: 'filter_R'

Do you have any recommendations for this problem?
Best regards,
Kiyanoush

Hi, I ran the example and got many errors about invalid syntax.

My version is 2020 R2.