What I usually do in these scenarios is wrap the important cells in functions (you don't have to merge any of them) and add a single "master cell" that iterates over a list of parameters and calls those functions. E.g. this is what a "master cell" looks like in one of my notebooks:
```python
import itertools

# parameters
P_peak_all = [100, 200]
idle_ratio_all = [0., 0.3, 0.6]

# iterate through these parameters and call the notebook's logic
for P_peak, idle_ratio in itertools.product(P_peak_all, idle_ratio_all):
    print(P_peak, idle_ratio, P_peak * idle_ratio)
    print('========================')
    m_synth, m_synth_ns = build_synth_measurement(P_peak, idle_ratio)
    compare_measurements(m_synth, m_synth_ns, "Peak pauser", "No scheduler",
                         file_note="-%d-%d" % (P_peak, int(idle_ratio * 100)))
```
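Here `build_synth_measurement` and `compare_measurements` are just ordinary functions defined in earlier cells. Their bodies below are hypothetical stand-ins, purely to show the shape of the pattern; your own logic will obviously differ:

```python
import matplotlib.pyplot as plt

def build_synth_measurement(P_peak, idle_ratio):
    # hypothetical stand-in: build two synthetic measurement series
    # from the given peak power and idle ratio
    m_synth = [P_peak * (1 - idle_ratio)] * 10
    m_synth_ns = [P_peak] * 10
    return m_synth, m_synth_ns

def compare_measurements(m_a, m_b, label_a, label_b, file_note=""):
    # hypothetical stand-in: plot both series and save one figure
    # per parameter combination, distinguished by file_note
    plt.figure()
    plt.plot(m_a, label=label_a)
    plt.plot(m_b, label=label_b)
    plt.legend()
    plt.savefig("comparison%s.png" % file_note)
    plt.close()
```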
You can still thread data through the notebook (i.e. call each function at the bottom of its defining cell with your live data) so you can test individual cells interactively. For example, a cell might contain:
```python
def square(x):
    y = x**2
    return y

square(x)  # where x is your data running from the prior cells
```
This lets you experiment live and still call the generic functionality from the master cell.
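To make that interplay concrete, here is a minimal self-contained sketch (the data value and parameter sweep are made up for illustration):

```python
# cell 1: "data" produced by earlier cells
x = 4

# cell 2: generic logic, plus a live call on the notebook's current data
def square(x):
    y = x**2
    return y

square(x)  # interactive check with the live value of x

# master cell: the same function reused over a parameter sweep
for x_test in [1, 2, 3]:
    print(x_test, square(x_test))
```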
I know it's some additional work to refactor your notebook into functions, but I've found it actually improves the notebook's readability, which helps when you come back to it after a longer period, and it makes the notebook easier to convert into a "proper" script or module if necessary.
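If you do convert it, the refactored structure maps almost directly onto a script: the function definitions move over unchanged and the master cell becomes the entry point. A minimal sketch, assuming the same functions as above live in a hypothetical `measurements` module:

```python
# analysis.py -- hypothetical script converted from the notebook
import itertools

from measurements import build_synth_measurement, compare_measurements

def main():
    # the former "master cell" becomes the script's entry point
    P_peak_all = [100, 200]
    idle_ratio_all = [0., 0.3, 0.6]
    for P_peak, idle_ratio in itertools.product(P_peak_all, idle_ratio_all):
        m_synth, m_synth_ns = build_synth_measurement(P_peak, idle_ratio)
        compare_measurements(m_synth, m_synth_ns, "Peak pauser", "No scheduler",
                             file_note="-%d-%d" % (P_peak, int(idle_ratio * 100)))

if __name__ == "__main__":
    main()  # guard so importing the module doesn't trigger the sweep
```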