devinterp.optim package¶
Submodules¶
devinterp.optim.sgld module¶
- class devinterp.optim.sgld.SGLD(params, lr=0.01, noise_level=1.0, weight_decay=0.0, localization=0.0, nbeta: Callable | float = 1.0, bounding_box_size=None, save_noise=False, save_mala_vars=False, optimize_over=None, noise_norm=False, grad_norm=False, weight_norm=False, distance=False)¶
Bases:
Optimizer
Implements Stochastic Gradient Langevin Dynamics (SGLD) optimizer.
This optimizer blends Stochastic Gradient Descent (SGD) with Langevin Dynamics, introducing Gaussian noise to the gradient updates. This makes it sample weights from the posterior distribution, instead of optimizing weights.
This implementation follows Lau et al. (2023), a modification of Welling and Teh (2011) that omits the learning rate schedule and introduces a localization term that pulls the weights towards their initial values.
The equation for the update is as follows:
$$\Delta w_t = \frac{\epsilon}{2}\left(\frac{\beta n}{m} \sum_{i=1}^m \nabla \log p\left(y_{l_i} \mid x_{l_i}, w_t\right) + \gamma\left(w_0 - w_t\right) - \lambda w_t\right) + N(0, \epsilon\sigma^2)$$
where $w_t$ is the weight at time $t$, $\epsilon$ is the learning rate, $\beta n$ is the inverse temperature (we're in the tempered Bayes paradigm), $n$ is the number of training samples, $m$ is the batch size, $\gamma$ is the localization strength, $\lambda$ is the weight decay strength, and $\sigma$ is the noise scale.
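To make the update rule concrete, here is a minimal sketch of a single SGLD step in plain PyTorch. It implements the equation above directly; `sgld_step` is a hypothetical helper, not part of the library, and the actual implementation additionally handles parameter groups, the optional bounding box, and the tracking flags.

    import torch

    def sgld_step(params, init_params, lr=0.01, noise_level=1.0,
                  weight_decay=0.0, localization=0.0, nbeta=1.0):
        # Assumes the usual PyTorch convention: p.grad holds the gradient of
        # the mean batch loss (negative log-likelihood), so -p.grad estimates
        # (1/m) * sum_i grad log p(y_i | x_i, w_t).
        with torch.no_grad():
            for p, w0 in zip(params, init_params):
                if p.grad is None:
                    continue
                drift = -nbeta * p.grad                  # (beta n / m) * summed log-likelihood grads
                drift = drift + localization * (w0 - p)  # gamma * (w_0 - w_t)
                drift = drift - weight_decay * p         # -lambda * w_t
                noise = noise_level * (lr ** 0.5) * torch.randn_like(p)  # N(0, eps * sigma^2)
                p.add_(0.5 * lr * drift + noise)         # apply Delta w_t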
Example
>>> optimizer = SGLD(model.parameters(), lr=0.1, nbeta=utils.optimal_nbeta(dataloader))
>>> optimizer.zero_grad()
>>> loss_fn(model(input), target).backward()
>>> optimizer.step()
Note
- `localization` is unique to this class and serves to guide the weights towards their original values. This is useful for estimating quantities over the local posterior.
- `noise_level` is not intended to be changed, except when testing! Doing so will raise a warning.
- Although this class is a subclass of `torch.optim.Optimizer`, this is a bit of a misnomer in this case. It's not used for optimizing in LLC estimation, but rather for sampling from the posterior distribution around a point.
- Hyperparameter optimization is more of an art than a science. Check out the calibration notebook for how to go about it in a simple case.
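In practice, sampling with SGLD looks like an ordinary training loop, except that the recorded loss trace (not the final weights) is the quantity of interest. A minimal sketch, assuming `model`, `loss_fn`, and `dataloader` are already defined and with purely illustrative hyperparameters:

    from devinterp.optim.sgld import SGLD

    optimizer = SGLD(model.parameters(), lr=1e-4, localization=100.0, nbeta=100.0)

    loss_trace = []
    for xs, ys in dataloader:
        optimizer.zero_grad()
        loss = loss_fn(model(xs), ys)
        loss.backward()
        optimizer.step()
        loss_trace.append(loss.item())  # the losses are the samples of interest, not an optimum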
- Parameters:
  - params (Iterable) – Iterable of parameters to optimize or dicts defining parameter groups. Either `model.parameters()` or something more fancy, just like other `torch.optim.Optimizer` classes.
  - lr (float, optional) – Learning rate $\epsilon$. Default is 0.01
  - noise_level (float, optional) – Amount of Gaussian noise $\sigma$ introduced into gradient updates. Don't change this unless you know very well what you're doing! Default is 1
  - weight_decay (float, optional) – L2 regularization term $\lambda$, applied as weight decay. Default is 0
  - localization (float, optional) – Strength of the force $\gamma$ pulling weights back to their initial values. Default is 0
  - nbeta (float, optional) – Inverse reparameterized temperature (otherwise known as n*beta or ~beta). Default is 1.0; recommended to set to `utils.optimal_nbeta(dataloader)`, i.e. $n / \log n$ for $n$ training samples.
  - bounding_box_size (float, optional) – The size of the bounding box enclosing our trajectory in parameter space. Default is None, in which case no bounding box is used.
  - save_noise (bool, optional) – Whether to store the per-parameter noise during optimization. Default is False
  - save_mala_vars (bool, optional) – Whether to store variables for calculating Metropolis-Adjusted Langevin Algorithm (MALA) metrics. Default is False
  - optimize_over (torch.Tensor, optional) – A boolean tensor of the same shape as the parameters. Think of it as a boolean mask that restricts the set of parameters that can be updated, used to implement weight restrictions. Default is None (no restrictions).
  - noise_norm (bool, optional) – Whether to track the norm of the noise. Default is False
  - grad_norm (bool, optional) – Whether to track the norm of the gradient. Default is False
  - weight_norm (bool, optional) – Whether to track the norm of the weights. Default is False
  - distance (bool, optional) – Whether to track the distance between the current weights and the initial weights. Default is False
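A constructor example wiring several of these options together (a sketch; all values are illustrative, not recommendations):

    optimizer = SGLD(
        model.parameters(),
        lr=1e-4,                               # epsilon
        weight_decay=0.0,                      # lambda
        localization=100.0,                    # gamma
        nbeta=utils.optimal_nbeta(dataloader),
        bounding_box_size=1.0,                 # confine the trajectory to a box
        weight_norm=True,                      # track the weight norm per step
        distance=True,                         # track distance from the initial weights
    )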
- Raises:
  - Warning – if `noise_level` is set to anything other than 1
  - Warning – if `nbeta` is set to 1
- OptimizerPostHook¶
  alias of `Callable[[Self, Tuple[Any, ...], Dict[str, Any]], None]`
- OptimizerPreHook¶
  alias of `Callable[[Self, Tuple[Any, ...], Dict[str, Any]], Optional[Tuple[Tuple[Any, ...], Dict[str, Any]]]]`
- add_param_group(param_group: Dict[str, Any]) → None¶
  Add a param group to the `Optimizer`'s param_groups. This can be useful when fine-tuning a pre-trained network, as frozen layers can be made trainable and added to the `Optimizer` as training progresses.
  - Parameters:
    - param_group (dict) – Specifies what Tensors should be optimized along with group specific optimization options.
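For instance, to begin sampling over a previously frozen submodule partway through (a sketch; `model.head` is a hypothetical attribute):

    optimizer.add_param_group({"params": model.head.parameters(), "lr": 1e-5})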
- load_state_dict(state_dict: Dict[str, Any]) → None¶
  Loads the optimizer state.
  - Parameters:
    - state_dict (dict) – optimizer state. Should be an object returned from a call to `state_dict()`.
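The standard PyTorch checkpointing round trip applies here as well, e.g. to pause and resume a sampling chain (`PATH` is a placeholder):

    import torch

    torch.save(optimizer.state_dict(), PATH)      # checkpoint the sampler state
    optimizer.load_state_dict(torch.load(PATH))   # restore it later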
- register_load_state_dict_post_hook(hook: Callable[[Optimizer], None], prepend: bool = False) → RemovableHandle¶
  Register a load_state_dict post-hook which will be called after `load_state_dict()` is called. It should have the following signature:
  hook(optimizer) -> None
  The `optimizer` argument is the optimizer instance being used. The hook will be called with argument `self` after calling `load_state_dict` on `self`. The registered hook can be used to perform post-processing after `load_state_dict` has loaded the `state_dict`.
  - Parameters:
    - hook (Callable) – The user defined hook to be registered.
    - prepend (bool) – If True, the provided post `hook` will be fired before all the already registered post-hooks on `load_state_dict`. Otherwise, the provided `hook` will be fired after all the already registered post-hooks. (default: False)
  - Returns:
    a handle that can be used to remove the added hook by calling `handle.remove()`
  - Return type:
    torch.utils.hooks.RemovableHandle
- register_load_state_dict_pre_hook(hook: Callable[[Optimizer, Dict[str, Any]], Dict[str, Any] | None], prepend: bool = False) → RemovableHandle¶
  Register a load_state_dict pre-hook which will be called before `load_state_dict()` is called. It should have the following signature:
  hook(optimizer, state_dict) -> state_dict or None
  The `optimizer` argument is the optimizer instance being used and the `state_dict` argument is a shallow copy of the `state_dict` the user passed in to `load_state_dict`. The hook may modify the state_dict in place or optionally return a new one. If a state_dict is returned, it will be loaded into the optimizer instead.
  The hook will be called with arguments `self` and `state_dict` before calling `load_state_dict` on `self`. The registered hook can be used to perform pre-processing before the `load_state_dict` call is made.
  - Parameters:
    - hook (Callable) – The user defined hook to be registered.
    - prepend (bool) – If True, the provided pre `hook` will be fired before all the already registered pre-hooks on `load_state_dict`. Otherwise, the provided `hook` will be fired after all the already registered pre-hooks. (default: False)
  - Returns:
    a handle that can be used to remove the added hook by calling `handle.remove()`
  - Return type:
    torch.utils.hooks.RemovableHandle
- register_state_dict_post_hook(hook: Callable[[Optimizer, Dict[str, Any]], Dict[str, Any] | None], prepend: bool = False) → RemovableHandle¶
  Register a state dict post-hook which will be called after `state_dict()` is called. It should have the following signature:
  hook(optimizer, state_dict) -> state_dict or None
  The hook will be called with arguments `self` and `state_dict` after generating a `state_dict` on `self`. The hook may modify the state_dict in place or optionally return a new one. The registered hook can be used to perform post-processing on the `state_dict` before it is returned.
  - Parameters:
    - hook (Callable) – The user defined hook to be registered.
    - prepend (bool) – If True, the provided post `hook` will be fired before all the already registered post-hooks on `state_dict`. Otherwise, the provided `hook` will be fired after all the already registered post-hooks. (default: False)
  - Returns:
    a handle that can be used to remove the added hook by calling `handle.remove()`
  - Return type:
    torch.utils.hooks.RemovableHandle
- register_state_dict_pre_hook(hook: Callable[[Optimizer], None], prepend: bool = False) → RemovableHandle¶
  Register a state dict pre-hook which will be called before `state_dict()` is called. It should have the following signature:
  hook(optimizer) -> None
  The `optimizer` argument is the optimizer instance being used. The hook will be called with argument `self` before calling `state_dict` on `self`. The registered hook can be used to perform pre-processing before the `state_dict` call is made.
  - Parameters:
    - hook (Callable) – The user defined hook to be registered.
    - prepend (bool) – If True, the provided pre `hook` will be fired before all the already registered pre-hooks on `state_dict`. Otherwise, the provided `hook` will be fired after all the already registered pre-hooks. (default: False)
  - Returns:
    a handle that can be used to remove the added hook by calling `handle.remove()`
  - Return type:
    torch.utils.hooks.RemovableHandle
- register_step_post_hook(hook: Callable[[Self, Tuple[Any, ...], Dict[str, Any]], None]) → RemovableHandle¶
  Register an optimizer step post hook which will be called after optimizer step. It should have the following signature:
  hook(optimizer, args, kwargs) -> None
  The `optimizer` argument is the optimizer instance being used.
  - Parameters:
    - hook (Callable) – The user defined hook to be registered.
  - Returns:
    a handle that can be used to remove the added hook by calling `handle.remove()`
  - Return type:
    torch.utils.hooks.RemovableHandle
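For example, a post-hook can collect a diagnostic after every sampling step (a sketch; the counter is purely illustrative):

    steps = []

    def count_steps(optimizer, args, kwargs):
        # Runs after every optimizer.step() call.
        steps.append(len(steps) + 1)

    handle = optimizer.register_step_post_hook(count_steps)
    # ... run sampling steps ...
    handle.remove()  # detach the hook when done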
- register_step_pre_hook(hook: Callable[[Self, Tuple[Any, ...], Dict[str, Any]], Tuple[Tuple[Any, ...], Dict[str, Any]] | None]) → RemovableHandle¶
  Register an optimizer step pre hook which will be called before optimizer step. It should have the following signature:
  hook(optimizer, args, kwargs) -> None or modified args and kwargs
  The `optimizer` argument is the optimizer instance being used. If args and kwargs are modified by the pre-hook, then the transformed values are returned as a tuple containing the new_args and new_kwargs.
  - Parameters:
    - hook (Callable) – The user defined hook to be registered.
  - Returns:
    a handle that can be used to remove the added hook by calling `handle.remove()`
  - Return type:
    torch.utils.hooks.RemovableHandle
- state_dict() → Dict[str, Any]¶
  Returns the state of the optimizer as a `dict`.
  It contains two entries:
  - `state`: a Dict holding current optimization state. Its content differs between optimizer classes, but some common characteristics hold. For example, state is saved per parameter, and the parameter itself is NOT saved. `state` is a Dictionary mapping parameter ids to a Dict with state corresponding to each parameter.
  - `param_groups`: a List containing all parameter groups where each parameter group is a Dict. Each parameter group contains metadata specific to the optimizer, such as learning rate and weight decay, as well as a List of parameter IDs of the parameters in the group.
  NOTE: The parameter IDs may look like indices but they are just IDs associating state with param_group. When loading from a state_dict, the optimizer will zip the param_group `params` (int IDs) and the optimizer `param_groups` (actual `nn.Parameter`s) in order to match state WITHOUT additional verification.
  A returned state dict might look something like:

      {
          'state': {
              0: {'momentum_buffer': tensor(...), ...},
              1: {'momentum_buffer': tensor(...), ...},
              2: {'momentum_buffer': tensor(...), ...},
              3: {'momentum_buffer': tensor(...), ...}
          },
          'param_groups': [
              {
                  'lr': 0.01,
                  'weight_decay': 0,
                  ...
                  'params': [0]
              },
              {
                  'lr': 0.001,
                  'weight_decay': 0.5,
                  ...
                  'params': [1, 2, 3]
              }
          ]
      }
- step(closure=None)¶
Perform a single SGLD optimization step.
- zero_grad(set_to_none: bool = True) → None¶
  Resets the gradients of all optimized `torch.Tensor`s.
  - Parameters:
    - set_to_none (bool) – instead of setting to zero, set the grads to None. This will in general have lower memory footprint, and can modestly improve performance. However, it changes certain behaviors. For example:
      1. When the user tries to access a gradient and perform manual ops on it, a None attribute or a Tensor full of 0s will behave differently.
      2. If the user requests `zero_grad(set_to_none=True)` followed by a backward pass, `.grad`s are guaranteed to be None for params that did not receive a gradient.
      3. `torch.optim` optimizers have a different behavior if the gradient is 0 or None (in one case it does the step with a gradient of 0 and in the other it skips the step altogether).
devinterp.optim.sgnht module¶
- class devinterp.optim.sgnht.SGNHT(params, lr=0.01, diffusion_factor=0.01, bounding_box_size=None, save_noise=False, save_mala_vars=False, nbeta=1.0)¶
Bases:
Optimizer
Implements the Stochastic Gradient Nosé-Hoover Thermostat (SGNHT) optimizer. This optimizer blends SGD with an adaptive thermostat variable that controls the magnitude of the injected noise, maintaining the kinetic energy of the system.
It follows Ding et al.’s (2014) implementation.
The equations for the update are as follows:
$$\Delta w_t = \epsilon\left(\frac{\beta n}{m} \sum_{i=1}^m \nabla \log p\left(y_{l_i} \mid x_{l_i}, w_t\right) - \xi_t w_t\right) + \sqrt{2A}\, N(0, \epsilon)$$
$$\Delta\xi_t = \epsilon\left(\frac{1}{n} \|w_t\|^2 - 1\right)$$
where $w_t$ is the weight at time $t$, $\epsilon$ is the learning rate, $\beta n$ is the inverse temperature (we're in the tempered Bayes paradigm), $n$ is the number of samples, $m$ is the batch size, $\xi_t$ is the thermostat variable at time $t$, $A$ is the diffusion factor, and $N(0, \epsilon)$ represents Gaussian noise with mean 0 and variance $\epsilon$.
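Example (a minimal usage sketch mirroring the SGLD example above; `model`, `loss_fn`, `input`, and `target` are assumed to be defined):
>>> optimizer = SGNHT(model.parameters(), lr=0.01, diffusion_factor=0.01, nbeta=utils.optimal_nbeta(dataloader))
>>> optimizer.zero_grad()
>>> loss_fn(model(input), target).backward()
>>> optimizer.step()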
Note
- `diffusion_factor` is unique to this class, and functions as a way to allow for random parameter changes while keeping them from blowing up, by guiding parameters back to a slowly-changing thermostat value using a friction term.
- This class does not have an explicit localization term like `SGLD()` does. If you want to constrain your sampling, use `bounding_box_size`.
- Although this class is a subclass of `torch.optim.Optimizer`, this is a bit of a misnomer in this case. It's not used for optimizing in LLC estimation, but rather for sampling from the posterior distribution around a point.
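To make the thermostat dynamics concrete, here is a minimal plain-PyTorch sketch of one SGNHT update implementing the two equations above. `sgnht_step` is a hypothetical helper, not the library's implementation; the same gradient-sign convention as the SGLD sketch above is assumed, and the $1/n$ average is taken over parameter entries here.

    import torch

    def sgnht_step(params, xi, lr=0.01, diffusion_factor=0.01, nbeta=1.0):
        # `xi` is the scalar thermostat variable; -p.grad estimates the
        # mean log-likelihood gradient, as in the SGLD sketch above.
        with torch.no_grad():
            sq_norm, n = 0.0, 0
            for p in params:
                if p.grad is None:
                    continue
                drift = -nbeta * p.grad - xi * p   # (beta n / m) sum grad log p - xi_t * w_t
                noise = (2 * diffusion_factor * lr) ** 0.5 * torch.randn_like(p)  # sqrt(2A) N(0, eps)
                p.add_(lr * drift + noise)         # Delta w_t
                sq_norm += p.pow(2).sum().item()
                n += p.numel()
            xi += lr * (sq_norm / n - 1.0)         # Delta xi_t = eps * ((1/n) ||w_t||^2 - 1)
        return xi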
- Parameters:
  - params (Iterable) – Iterable of parameters to optimize or dicts defining parameter groups. Either `model.parameters()` or something more fancy, just like other `torch.optim.Optimizer` classes.
  - lr (float, optional) – Learning rate $\epsilon$. Default is 0.01
  - diffusion_factor (float, optional) – The diffusion factor $A$ of the thermostat. Default is 0.01
  - bounding_box_size (float, optional) – The size of the bounding box enclosing our trajectory. Default is None
  - save_noise (bool, optional) – Whether to store the per-parameter noise during optimization. Default is False
  - save_mala_vars (bool, optional) – Whether to store variables for calculating Metropolis-Adjusted Langevin Algorithm (MALA) metrics. Default is False
  - nbeta (float, optional) – Effective inverse temperature. Default is 1.0; recommended to set to `utils.optimal_nbeta(dataloader)`, i.e. $n / \log n$ for $n$ training samples.
- Raises:
  - Warning – if `nbeta` is set to 1
  - Warning – if the `NoiseNorm` callback is used
  - Warning – if the `MALA` callback is used
- OptimizerPostHook¶
  alias of `Callable[[Self, Tuple[Any, ...], Dict[str, Any]], None]`
- OptimizerPreHook¶
  alias of `Callable[[Self, Tuple[Any, ...], Dict[str, Any]], Optional[Tuple[Tuple[Any, ...], Dict[str, Any]]]]`
- add_param_group(param_group: Dict[str, Any]) → None¶
  Add a param group to the `Optimizer`'s param_groups. This can be useful when fine-tuning a pre-trained network, as frozen layers can be made trainable and added to the `Optimizer` as training progresses.
  - Parameters:
    - param_group (dict) – Specifies what Tensors should be optimized along with group specific optimization options.
- load_state_dict(state_dict: Dict[str, Any]) → None¶
  Loads the optimizer state.
  - Parameters:
    - state_dict (dict) – optimizer state. Should be an object returned from a call to `state_dict()`.
- register_load_state_dict_post_hook(hook: Callable[[Optimizer], None], prepend: bool = False) → RemovableHandle¶
  Register a load_state_dict post-hook which will be called after `load_state_dict()` is called. It should have the following signature:
  hook(optimizer) -> None
  The `optimizer` argument is the optimizer instance being used. The hook will be called with argument `self` after calling `load_state_dict` on `self`. The registered hook can be used to perform post-processing after `load_state_dict` has loaded the `state_dict`.
  - Parameters:
    - hook (Callable) – The user defined hook to be registered.
    - prepend (bool) – If True, the provided post `hook` will be fired before all the already registered post-hooks on `load_state_dict`. Otherwise, the provided `hook` will be fired after all the already registered post-hooks. (default: False)
  - Returns:
    a handle that can be used to remove the added hook by calling `handle.remove()`
  - Return type:
    torch.utils.hooks.RemovableHandle
- register_load_state_dict_pre_hook(hook: Callable[[Optimizer, Dict[str, Any]], Dict[str, Any] | None], prepend: bool = False) → RemovableHandle¶
  Register a load_state_dict pre-hook which will be called before `load_state_dict()` is called. It should have the following signature:
  hook(optimizer, state_dict) -> state_dict or None
  The `optimizer` argument is the optimizer instance being used and the `state_dict` argument is a shallow copy of the `state_dict` the user passed in to `load_state_dict`. The hook may modify the state_dict in place or optionally return a new one. If a state_dict is returned, it will be loaded into the optimizer instead.
  The hook will be called with arguments `self` and `state_dict` before calling `load_state_dict` on `self`. The registered hook can be used to perform pre-processing before the `load_state_dict` call is made.
  - Parameters:
    - hook (Callable) – The user defined hook to be registered.
    - prepend (bool) – If True, the provided pre `hook` will be fired before all the already registered pre-hooks on `load_state_dict`. Otherwise, the provided `hook` will be fired after all the already registered pre-hooks. (default: False)
  - Returns:
    a handle that can be used to remove the added hook by calling `handle.remove()`
  - Return type:
    torch.utils.hooks.RemovableHandle
- register_state_dict_post_hook(hook: Callable[[Optimizer, Dict[str, Any]], Dict[str, Any] | None], prepend: bool = False) → RemovableHandle¶
  Register a state dict post-hook which will be called after `state_dict()` is called. It should have the following signature:
  hook(optimizer, state_dict) -> state_dict or None
  The hook will be called with arguments `self` and `state_dict` after generating a `state_dict` on `self`. The hook may modify the state_dict in place or optionally return a new one. The registered hook can be used to perform post-processing on the `state_dict` before it is returned.
  - Parameters:
    - hook (Callable) – The user defined hook to be registered.
    - prepend (bool) – If True, the provided post `hook` will be fired before all the already registered post-hooks on `state_dict`. Otherwise, the provided `hook` will be fired after all the already registered post-hooks. (default: False)
  - Returns:
    a handle that can be used to remove the added hook by calling `handle.remove()`
  - Return type:
    torch.utils.hooks.RemovableHandle
- register_state_dict_pre_hook(hook: Callable[[Optimizer], None], prepend: bool = False) → RemovableHandle¶
  Register a state dict pre-hook which will be called before `state_dict()` is called. It should have the following signature:
  hook(optimizer) -> None
  The `optimizer` argument is the optimizer instance being used. The hook will be called with argument `self` before calling `state_dict` on `self`. The registered hook can be used to perform pre-processing before the `state_dict` call is made.
  - Parameters:
    - hook (Callable) – The user defined hook to be registered.
    - prepend (bool) – If True, the provided pre `hook` will be fired before all the already registered pre-hooks on `state_dict`. Otherwise, the provided `hook` will be fired after all the already registered pre-hooks. (default: False)
  - Returns:
    a handle that can be used to remove the added hook by calling `handle.remove()`
  - Return type:
    torch.utils.hooks.RemovableHandle
- register_step_post_hook(hook: Callable[[Self, Tuple[Any, ...], Dict[str, Any]], None]) → RemovableHandle¶
  Register an optimizer step post hook which will be called after optimizer step. It should have the following signature:
  hook(optimizer, args, kwargs) -> None
  The `optimizer` argument is the optimizer instance being used.
  - Parameters:
    - hook (Callable) – The user defined hook to be registered.
  - Returns:
    a handle that can be used to remove the added hook by calling `handle.remove()`
  - Return type:
    torch.utils.hooks.RemovableHandle
- register_step_pre_hook(hook: Callable[[Self, Tuple[Any, ...], Dict[str, Any]], Tuple[Tuple[Any, ...], Dict[str, Any]] | None]) → RemovableHandle¶
  Register an optimizer step pre hook which will be called before optimizer step. It should have the following signature:
  hook(optimizer, args, kwargs) -> None or modified args and kwargs
  The `optimizer` argument is the optimizer instance being used. If args and kwargs are modified by the pre-hook, then the transformed values are returned as a tuple containing the new_args and new_kwargs.
  - Parameters:
    - hook (Callable) – The user defined hook to be registered.
  - Returns:
    a handle that can be used to remove the added hook by calling `handle.remove()`
  - Return type:
    torch.utils.hooks.RemovableHandle
- state_dict() → Dict[str, Any]¶
  Returns the state of the optimizer as a `dict`.
  It contains two entries:
  - `state`: a Dict holding current optimization state. Its content differs between optimizer classes, but some common characteristics hold. For example, state is saved per parameter, and the parameter itself is NOT saved. `state` is a Dictionary mapping parameter ids to a Dict with state corresponding to each parameter.
  - `param_groups`: a List containing all parameter groups where each parameter group is a Dict. Each parameter group contains metadata specific to the optimizer, such as learning rate and weight decay, as well as a List of parameter IDs of the parameters in the group.
  NOTE: The parameter IDs may look like indices but they are just IDs associating state with param_group. When loading from a state_dict, the optimizer will zip the param_group `params` (int IDs) and the optimizer `param_groups` (actual `nn.Parameter`s) in order to match state WITHOUT additional verification.
  A returned state dict might look something like:

      {
          'state': {
              0: {'momentum_buffer': tensor(...), ...},
              1: {'momentum_buffer': tensor(...), ...},
              2: {'momentum_buffer': tensor(...), ...},
              3: {'momentum_buffer': tensor(...), ...}
          },
          'param_groups': [
              {
                  'lr': 0.01,
                  'weight_decay': 0,
                  ...
                  'params': [0]
              },
              {
                  'lr': 0.001,
                  'weight_decay': 0.5,
                  ...
                  'params': [1, 2, 3]
              }
          ]
      }
- zero_grad(set_to_none: bool = True) → None¶
  Resets the gradients of all optimized `torch.Tensor`s.
  - Parameters:
    - set_to_none (bool) – instead of setting to zero, set the grads to None. This will in general have lower memory footprint, and can modestly improve performance. However, it changes certain behaviors. For example:
      1. When the user tries to access a gradient and perform manual ops on it, a None attribute or a Tensor full of 0s will behave differently.
      2. If the user requests `zero_grad(set_to_none=True)` followed by a backward pass, `.grad`s are guaranteed to be None for params that did not receive a gradient.
      3. `torch.optim` optimizers have a different behavior if the gradient is 0 or None (in one case it does the step with a gradient of 0 and in the other it skips the step altogether).