# Running the Demonstrator

## Objectives

- Run the demonstrator notebook step by step
- Use example scripts for command-line training
- Set up tmux for background training runs
- Tune hyperparameters for different systems
## Demonstrator Notebook

The main entry point is `demonstrator-v1.orchestrator.ipynb`, designed to run on NAIC Orchestrator VMs.
### Starting the Notebook

```bash
cd ~/pseudo-hamiltonian-neural-networks
source venv/bin/activate
jupyter lab --no-browser --ip=127.0.0.1 --port=8888
```
Then open http://localhost:8888/lab/tree/demonstrator-v1.orchestrator.ipynb via your SSH tunnel (see Episode 03).
### Notebook Walkthrough

The demonstrator covers:

- **Environment verification** – checks phlearn, PyTorch, and GPU availability
- **System definition** – sets up the mass-spring damper with configurable parameters
- **Data generation** – creates training trajectories using phlearn simulators
- **PHNN training** – trains the pseudo-Hamiltonian model with the midpoint integrator
- **Baseline training** – trains a standard neural network for comparison
- **Short-horizon evaluation** – both models fit well on training-length predictions
- **Long-horizon evaluation** – the PHNN stays physically consistent while the baseline diverges
- **Energy analysis** – visualizes the learned Hamiltonian and dissipation
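The data-generation and long-horizon steps can be sketched in plain Python, without phlearn (this is an illustrative toy, not the phlearn API; the mass, stiffness, and damping values are made up): simulate the damped mass-spring system with the explicit midpoint method and watch the energy decay.

```python
def msd_rhs(q, p, m=1.0, k=1.0, c=0.1):
    """Damped mass-spring dynamics in Hamiltonian form:
    dq/dt = p/m, dp/dt = -k*q - (c/m)*p."""
    return p / m, -k * q - (c / m) * p

def midpoint_step(q, p, dt):
    """Explicit midpoint method: evaluate the vector field at a half step."""
    dq, dp = msd_rhs(q, p)
    qm, pm = q + 0.5 * dt * dq, p + 0.5 * dt * dp
    dq, dp = msd_rhs(qm, pm)
    return q + dt * dq, p + dt * dp

def energy(q, p, m=1.0, k=1.0):
    """Hamiltonian H = p^2/(2m) + k*q^2/2 of the undamped system."""
    return 0.5 * p * p / m + 0.5 * k * q * q

# Generate one trajectory from initial state (q, p) = (1, 0)
q, p = 1.0, 0.0
E_start = energy(q, p)
for _ in range(2000):          # 2000 steps of dt = 0.01, i.e. t = 20
    q, p = midpoint_step(q, p, dt=0.01)
E_end = energy(q, p)

print(E_start, E_end)  # with damping, energy decays but stays positive
```

With the damping term set to zero, the same loop conserves energy up to the integrator's truncation error, which is the structural property the PHNN exploits.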
### Execution Time

On a GPU VM (L40S), the full notebook runs in approximately 5–10 minutes. On CPU, expect 15–30 minutes. You can reduce training epochs for faster exploration.
## Example Scripts

For command-line training without Jupyter:

```bash
cd ~/pseudo-hamiltonian-neural-networks
source venv/bin/activate

# Train a PHNN model
python example_scripts/train_model.py

# Evaluate a trained model
python example_scripts/model_evaluation.py
```
### Available Example Notebooks

| Notebook | System | Time (GPU) |
|---|---|---|
| | Mass-spring damper | ~2 min |
| | Various ODE systems | ~5 min |
| | Various PDE systems | ~15 min |
| | KdV equation | ~10 min |
| | Cahn-Hilliard equation | ~10 min |
| | BBM equation | ~10 min |
| | KdV-Burgers equation | ~10 min |
## Background Training with tmux

For longer training runs, use tmux to keep the process alive after disconnecting:

```bash
# Start a named tmux session
tmux new -s phnn

# Inside tmux:
cd ~/pseudo-hamiltonian-neural-networks
source venv/bin/activate
python example_scripts/train_model.py 2>&1 | tee training.log

# Detach: Ctrl+B, then D
```
To monitor and reattach:

```bash
# Monitor the log file
tail -f ~/pseudo-hamiltonian-neural-networks/training.log

# Reattach to the tmux session
tmux attach -t phnn
```
## Hyperparameter Tuning

Key hyperparameters to adjust:

| Parameter | Default | Effect |
|---|---|---|
| Epochs | 30 | More epochs improve fit but risk overfitting |
| Learning rate | 0.001 | Lower values converge more slowly but more stably |
| Trajectories | 300 | More training trajectories improve generalization |
| Integrator | `midpoint` | Balances speed and accuracy for most use cases |
| States | 2 | System-dependent (2 for mass-spring, more for coupled systems) |
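The learning-rate trade-off in the table can be seen on a toy quadratic loss (an illustration only, not the demonstrator's training loop):

```python
def gradient_descent(lr, steps=100, w0=1.0):
    """Minimize f(w) = w^2 with fixed-step gradient descent."""
    w = w0
    for _ in range(steps):
        w -= lr * 2.0 * w  # gradient of w^2 is 2w
    return w

small = abs(gradient_descent(lr=0.001))   # stable but still far from 0 after 100 steps
large = abs(gradient_descent(lr=0.6))     # larger step, converges much faster
diverged = abs(gradient_descent(lr=1.1))  # step too large: iterates blow up
```

The same qualitative behavior applies to PHNN training: too small a learning rate wastes epochs, too large a one destabilizes the loss.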
## When to Use GPU
For ODE systems (2-4 states, <1000 trajectories), CPU training is fast enough. GPU acceleration provides the most benefit for PDE systems with large spatial grids (64+ points) or when training many epochs.
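That rule of thumb can be encoded as a small helper (a sketch; the function name is made up and the thresholds are taken from the guidance above):

```python
def recommended_device(n_trajectories, spatial_points=1):
    """Heuristic device choice: ODE systems with <1000 trajectories
    train quickly on CPU; PDE systems with large spatial grids
    (64+ points) or very large datasets benefit from GPU."""
    if spatial_points >= 64 or n_trajectories >= 1000:
        return "cuda"
    return "cpu"

print(recommended_device(n_trajectories=300))                      # mass-spring damper
print(recommended_device(n_trajectories=300, spatial_points=100))  # PDE grid
```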
## Keypoints

- The demonstrator notebook is the primary entry point for interactive exploration
- Use tmux for background training to survive SSH disconnections
- Example scripts provide command-line alternatives to Jupyter
- ODE training is fast (minutes); PDE training benefits from GPU acceleration
- Key hyperparameters: epochs, learning rate, number of trajectories, integrator choice
- The `midpoint` integrator balances speed and accuracy for most use cases