Optimizers
- class pylo.optim.AdafacLO_naive(params, momentum_decays=[0.15216392, 0.14245212, 0.06812963], rms_decays=[0.01079706], adafactor_decays=[0.18621896, -0.10864615, -0.06185547], lr=1.0, exp_mult=0.001, step_mult=0.01, input_size=39, hidden_size=32, hidden_layers=1, initial_momentum_decays=(0.9, 0.99, 0.999), initial_rms_decays=(0.999,), initial_adafactor_decays=(0.9, 0.99, 0.999), max_grad_norm=None, concat_weights=True, make_separate_weights=False, split_weights=False, clip_grad=False, weight_decay=0.0, mup_lrs=None, hf_key: str | None = 'btherien/mulo')[source]
- __init__(params, momentum_decays=[0.15216392, 0.14245212, 0.06812963], rms_decays=[0.01079706], adafactor_decays=[0.18621896, -0.10864615, -0.06185547], lr=1.0, exp_mult=0.001, step_mult=0.01, input_size=39, hidden_size=32, hidden_layers=1, initial_momentum_decays=(0.9, 0.99, 0.999), initial_rms_decays=(0.999,), initial_adafactor_decays=(0.9, 0.99, 0.999), max_grad_norm=None, concat_weights=True, make_separate_weights=False, split_weights=False, clip_grad=False, weight_decay=0.0, mup_lrs=None, hf_key: str | None = 'btherien/mulo')[source]
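Example (a minimal training-loop sketch, not taken from the library's documentation: it assumes the optimizer follows the standard torch.optim.Optimizer step()/zero_grad() interface and that the default hf_key downloads pretrained meta-learned weights from the Hugging Face Hub; the toy model and data below are placeholders)
>>> import torch
>>> import torch.nn.functional as F
>>> from pylo.optim import AdafacLO_naive
>>> model = torch.nn.Linear(10, 2)
>>> # default hf_key='btherien/mulo' loads pretrained learned-optimizer weights
>>> optimizer = AdafacLO_naive(model.parameters())
>>> x, y = torch.randn(8, 10), torch.randint(0, 2, (8,))
>>> loss = F.cross_entropy(model(x), y)
>>> loss.backward()
>>> optimizer.step()
>>> optimizer.zero_grad()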
- pylo.optim.MuLO_naive(params, impl=<class 'pylo.optim.AdafacLO_naive.AdafacLO_naive'>, **kwargs)[source]
μP (Maximal Update Parameterization) wrapper for the PyTorch native implementation of the Adafac learned optimizer.
This function applies the μP parameterization to the Adafac learned optimizer, scaling learning rates for matrix-like parameters according to their width multipliers. Parameters are organized into groups based on their infinite-width shape properties.
Note
This implementation requires that all parameters have been processed with mup.set_base_shapes() to establish their infinite-width behavior.
Example
>>> model = MyModel()
>>> mup.set_base_shapes(model, base_model)
>>> optimizer = MuLO_naive(model.parameters())
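A more complete setup sketch (assumes the mup package's set_base_shapes API; the MLP widths and layer sizes are arbitrary placeholders, and a fully μP-correct model would normally also use mup.MuReadout for its output layer)
>>> import mup
>>> import torch.nn as nn
>>> from pylo.optim import MuLO_naive
>>> def make_model(width):
...     return nn.Sequential(nn.Linear(32, width), nn.ReLU(), nn.Linear(width, 10))
>>> base_model = make_model(64)    # reference widths that define the infinite-width shapes
>>> model = make_model(1024)       # the widths actually trained
>>> mup.set_base_shapes(model, base_model)
>>> optimizer = MuLO_naive(model.parameters())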
- class pylo.optim.VeLO_naive(params, momentum_decays=[0.0, 0.0, 0.0], rms_decays=[0.0], adafactor_decays=[0.0, 0.0, 0.0], lr=0.001, exp_mult=0.001, step_mult=0.001, input_size=30, hidden_size=4, hidden_layers=1, initial_momentum_decays=(0.9, 0.99, 0.999), lstm_input_size=30, lstm_hidden_size=512, param_inits=256, num_steps=10000, initial_rms_decays=(0.999,), initial_adafactor_decays=(0.9, 0.99, 0.999), concat_weights=True, make_separate_weights=False, split_weights=False, weight_decay=0.0, clip_grad=False, mup_lrs=None, hf_key_rnn='Pauljanson002/VeLO_RNN', hf_key_mlp='Pauljanson002/VeLO_MLP')[source]
- __init__(params, momentum_decays=[0.0, 0.0, 0.0], rms_decays=[0.0], adafactor_decays=[0.0, 0.0, 0.0], lr=0.001, exp_mult=0.001, step_mult=0.001, input_size=30, hidden_size=4, hidden_layers=1, initial_momentum_decays=(0.9, 0.99, 0.999), lstm_input_size=30, lstm_hidden_size=512, param_inits=256, num_steps=10000, initial_rms_decays=(0.999,), initial_adafactor_decays=(0.9, 0.99, 0.999), concat_weights=True, make_separate_weights=False, split_weights=False, weight_decay=0.0, clip_grad=False, mup_lrs=None, hf_key_rnn='Pauljanson002/VeLO_RNN', hf_key_mlp='Pauljanson002/VeLO_MLP')[source]
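Example (a minimal sketch, not from the library's documentation: the toy model and data are placeholders, and it assumes this VeLO variant conditions its update on the current loss, so step() is called with the loss value; if your installed version exposes the plain step() signature, drop the argument)
>>> import torch
>>> import torch.nn.functional as F
>>> from pylo.optim import VeLO_naive
>>> model = torch.nn.Linear(10, 2)
>>> # num_steps should roughly match the planned training length
>>> optimizer = VeLO_naive(model.parameters(), num_steps=1000)
>>> x, y = torch.randn(8, 10), torch.randint(0, 2, (8,))
>>> loss = F.cross_entropy(model(x), y)
>>> loss.backward()
>>> optimizer.step(loss)  # assumption: loss is passed so the LSTM controller can condition on it
>>> optimizer.zero_grad()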
- state_dict()[source]
Return the state of the optimizer as a dict.
It contains two entries:
- state: a Dict holding current optimization state. Its content differs between optimizer classes, but some common characteristics hold. For example, state is saved per parameter, and the parameter itself is NOT saved. state is a Dictionary mapping parameter ids to a Dict with state corresponding to each parameter.
- param_groups: a List containing all parameter groups where each parameter group is a Dict. Each parameter group contains metadata specific to the optimizer, such as learning rate and weight decay, as well as a List of parameter IDs of the parameters in the group. If a param group was initialized with named_parameters() the names content will also be saved in the state dict.
NOTE: The parameter IDs may look like indices but they are just IDs associating state with param_group. When loading from a state_dict, the optimizer will zip the param_group params (int IDs) and the optimizer param_groups (actual nn.Parameters) in order to match state WITHOUT additional verification.
A returned state dict might look something like:
{
    'state': {
        0: {'momentum_buffer': tensor(...), ...},
        1: {'momentum_buffer': tensor(...), ...},
        2: {'momentum_buffer': tensor(...), ...},
        3: {'momentum_buffer': tensor(...), ...}
    },
    'param_groups': [
        {
            'lr': 0.01,
            'weight_decay': 0,
            ...
            'params': [0],
            'param_names': ['param0']  (optional)
        },
        {
            'lr': 0.001,
            'weight_decay': 0.5,
            ...
            'params': [1, 2, 3],
            'param_names': ['param1', 'layer.weight', 'layer.bias']  (optional)
        }
    ]
}
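Checkpointing sketch (assumes an optimizer instance constructed as above and the standard torch.save/torch.load round trip; the file path is a placeholder)
>>> torch.save(optimizer.state_dict(), "velo_optim.pt")
>>> # ... rebuild the model and optimizer with the same param groups, then restore:
>>> optimizer.load_state_dict(torch.load("velo_optim.pt"))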
- load_state_dict(state_dict)[source]
Load the optimizer state.
- Parameters:
state_dict (dict) – optimizer state. Should be an object returned from a call to
state_dict().
Warning
Make sure this method is called after initializing torch.optim.lr_scheduler.LRScheduler, as calling it beforehand will overwrite the loaded learning rates.
Note
The names of the parameters (if they exist under the “param_names” key of each param group in state_dict()) will not affect the loading process. To use the parameters’ names for custom cases (such as when the parameters in the loaded state dict differ from those initialized in the optimizer), a custom register_load_state_dict_pre_hook should be implemented to adapt the loaded dict accordingly. If param_names exist in the loaded state dict param_groups, they will be saved and override the current names, if present, in the optimizer state. If they do not exist in the loaded state dict, the optimizer param_names will remain unchanged.
Example
>>> # xdoctest: +SKIP
>>> model = torch.nn.Linear(10, 10)
>>> optim = torch.optim.SGD(model.parameters(), lr=3e-4)
>>> scheduler1 = torch.optim.lr_scheduler.LinearLR(
...     optim,
...     start_factor=0.1,
...     end_factor=1,
...     total_iters=20,
... )
>>> scheduler2 = torch.optim.lr_scheduler.CosineAnnealingLR(
...     optim,
...     T_max=80,
...     eta_min=3e-5,
... )
>>> lr = torch.optim.lr_scheduler.SequentialLR(
...     optim,
...     schedulers=[scheduler1, scheduler2],
...     milestones=[20],
... )
>>> lr.load_state_dict(torch.load("./save_seq.pt"))
>>> # now load the optimizer checkpoint after loading the LRScheduler
>>> optim.load_state_dict(torch.load("./save_optim.pt"))