Trainer

SentenceTransformerTrainer

class sentence_transformers.trainer.SentenceTransformerTrainer(model: Optional[SentenceTransformer] = None, args: sentence_transformers.training_args.SentenceTransformerTrainingArguments = None, train_dataset: Optional[Union[Dataset, DatasetDict, Dict[str, Dataset]]] = None, eval_dataset: Optional[Union[Dataset, DatasetDict, Dict[str, Dataset]]] = None, loss: Optional[Union[torch.nn.modules.module.Module, Dict[str, torch.nn.modules.module.Module], Callable[[SentenceTransformer], torch.nn.modules.module.Module], Dict[str, Callable[[SentenceTransformer], torch.nn.modules.module.Module]]]] = None, evaluator: Optional[Union[sentence_transformers.evaluation.SentenceEvaluator.SentenceEvaluator, List[sentence_transformers.evaluation.SentenceEvaluator.SentenceEvaluator]]] = None, data_collator: Optional[DataCollator] = None, tokenizer: Optional[Union[transformers.tokenization_utils_base.PreTrainedTokenizerBase, Callable]] = None, model_init: Optional[Callable[[], SentenceTransformer]] = None, compute_metrics: Optional[Callable[[transformers.trainer_utils.EvalPrediction], Dict]] = None, callbacks: Optional[List[transformers.trainer_callback.TrainerCallback]] = None, optimizers: Tuple[torch.optim.optimizer.Optimizer, torch.optim.lr_scheduler.LambdaLR] = (None, None), preprocess_logits_for_metrics: Optional[Callable[[torch.Tensor, torch.Tensor], torch.Tensor]] = None)[source]

SentenceTransformerTrainer is a simple but feature-complete training and eval loop for PyTorch based on the 🤗 Transformers Trainer.

This trainer integrates support for various transformers.TrainerCallback subclasses, such as:

  • WandbCallback to automatically log training metrics to W&B if wandb is installed

  • TensorBoardCallback to log training metrics to TensorBoard if tensorboard is accessible.

  • CodeCarbonCallback to track the carbon emissions of your model during training if codecarbon is installed.

    • Note: These carbon emissions will be included in your automatically generated model card.

See the Transformers Callbacks documentation for more information on the integrated callbacks and how to write your own callbacks.
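For example, a minimal end-to-end run might look like the following sketch; the model name, dataset, column layout, and hyperparameters are illustrative assumptions rather than requirements:

    # Minimal training sketch (illustrative model, dataset, and hyperparameters).
    from datasets import load_dataset
    from sentence_transformers import SentenceTransformer
    from sentence_transformers.losses import MultipleNegativesRankingLoss
    from sentence_transformers.trainer import SentenceTransformerTrainer
    from sentence_transformers.training_args import SentenceTransformerTrainingArguments

    model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

    # Any dataset whose columns match the chosen loss works; this split is an example.
    train_dataset = load_dataset("sentence-transformers/all-nli", "pair", split="train[:10000]")

    loss = MultipleNegativesRankingLoss(model)

    args = SentenceTransformerTrainingArguments(
        output_dir="output/my-model",  # hypothetical output path
        num_train_epochs=1,
        per_device_train_batch_size=32,
    )

    trainer = SentenceTransformerTrainer(
        model=model,
        args=args,
        train_dataset=train_dataset,
        loss=loss,
    )
    trainer.train()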


Important attributes:

  • model – Always points to the core model. If using a transformers model, it will be a [PreTrainedModel] subclass.

  • model_wrapped – Always points to the most external model in case one or more other modules wrap the original model. This is the model that should be used for the forward pass. For example, under DeepSpeed, the inner model is wrapped in DeepSpeed and then again in torch.nn.DistributedDataParallel. If the inner model hasn’t been wrapped, then self.model_wrapped is the same as self.model.

  • is_model_parallel – Whether or not a model has been switched to a model parallel mode (different from data parallelism, this means some of the model layers are split on different GPUs).

  • place_model_on_device – Whether or not to automatically place the model on the device. It will be set to False if model parallelism or DeepSpeed is used, or if the default TrainingArguments.place_model_on_device is overridden to return False.

  • is_in_train – Whether or not the model is currently running training (e.g. when evaluate is called while in train).

add_callback(callback)

Add a callback to the current list of [~transformers.TrainerCallback].

Parameters

callback (type or [~transformers.TrainerCallback]) – A [~transformers.TrainerCallback] class or an instance of a [~transformers.TrainerCallback]. In the first case, will instantiate a member of that class.
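As a sketch (assuming a trainer instance already exists), a small custom callback can be registered like this; the callback itself is purely illustrative:

    # Illustrative callback that prints every logged metrics dict.
    from transformers import TrainerCallback

    class PrintLogsCallback(TrainerCallback):
        def on_log(self, args, state, control, logs=None, **kwargs):
            # `logs` holds the metrics that were just reported (loss, lr, eval scores, ...).
            if logs is not None:
                print(f"step {state.global_step}: {logs}")

    # Pass either an instance or the class itself; a class is instantiated for you.
    trainer.add_callback(PrintLogsCallback())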

compute_loss(model: SentenceTransformer, inputs: Dict[str, Union[torch.Tensor, Any]], return_outputs: bool = False) → Union[torch.Tensor, Tuple[torch.Tensor, Dict[str, Any]]][source]

Computes the loss for the SentenceTransformer model.

It uses self.loss to compute the loss, which can be a single loss function or a dictionary of loss functions for different datasets. If the loss is a dictionary, the dataset name is expected to be passed in the inputs under the key “dataset_name”. This is done automatically in the add_dataset_name_column method. Note that even if return_outputs = True, the outputs will be empty, as the SentenceTransformers losses do not return outputs.

Parameters
  • model (SentenceTransformer) – The SentenceTransformer model.

  • inputs (Dict[str, Union[torch.Tensor, Any]]) – The input data for the model.

  • return_outputs (bool, optional) – Whether to return the outputs along with the loss. Defaults to False.

Returns

The computed loss. If return_outputs is True, returns a tuple of loss and outputs. Otherwise, returns only the loss.

Return type

Union[torch.Tensor, Tuple[torch.Tensor, Dict[str, Any]]]
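For instance, a multi-dataset setup in which compute_loss resolves the loss by dataset name might be sketched as follows; nli_dataset, stsb_dataset, model, and args are placeholders for objects created elsewhere:

    # Sketch: one loss per dataset; the dict keys double as the "dataset_name" values.
    from sentence_transformers.losses import CoSENTLoss, MultipleNegativesRankingLoss
    from sentence_transformers.trainer import SentenceTransformerTrainer

    train_dataset = {
        "nli": nli_dataset,    # e.g. (anchor, positive) pairs
        "stsb": stsb_dataset,  # e.g. (sentence1, sentence2, score) rows
    }
    loss = {
        "nli": MultipleNegativesRankingLoss(model),
        "stsb": CoSENTLoss(model),
    }

    trainer = SentenceTransformerTrainer(
        model=model,
        args=args,
        train_dataset=train_dataset,
        loss=loss,  # compute_loss picks the matching loss for each batch
    )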

create_model_card(language: Optional[str] = None, license: Optional[str] = None, tags: Optional[Union[str, List[str]]] = None, model_name: Optional[str] = None, finetuned_from: Optional[str] = None, tasks: Optional[Union[str, List[str]]] = None, dataset_tags: Optional[Union[str, List[str]]] = None, dataset: Optional[Union[str, List[str]]] = None, dataset_args: Optional[Union[str, List[str]]] = None, **kwargs) → None[source]

Creates a draft of a model card using the information available to the Trainer.

Parameters
  • language (str, optional) – The language of the model (if applicable)

  • license (str, optional) – The license of the model. Will default to the license of the pretrained model used, if the original model given to the Trainer comes from a repo on the Hub.

  • tags (str or List[str], optional) – Some tags to be included in the metadata of the model card.

  • model_name (str, optional) – The name of the model.

  • finetuned_from (str, optional) – The name of the model used to fine-tune this one (if applicable). Will default to the name of the repo of the original model given to the Trainer (if it comes from the Hub).

  • tasks (str or List[str], optional) – One or several task identifiers, to be included in the metadata of the model card.

  • dataset_tags (str or List[str], optional) – One or several dataset tags, to be included in the metadata of the model card.

  • dataset (str or List[str], optional) – One or several dataset identifiers, to be included in the metadata of the model card.

  • dataset_args (str or List[str], optional) – One or several dataset arguments, to be included in the metadata of the model card.
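A hedged example of drafting a card after training; every value below is illustrative:

    # Sketch: draft a model card with some example metadata.
    trainer.create_model_card(
        language="en",
        license="apache-2.0",
        tags=["sentence-transformers", "sentence-similarity"],
        model_name="my-finetuned-model",
    )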

create_optimizer()

Setup the optimizer.

We provide a reasonable default that works well. If you want to use something else, you can pass a tuple in the Trainer’s init through optimizers, or subclass and override this method in a subclass.
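For example, the default can be bypassed entirely by passing your own (optimizer, scheduler) pair at init; model, args, train_dataset, and loss are placeholders and the hyperparameters are illustrative:

    # Sketch: supplying a custom optimizer and scheduler instead of the defaults.
    import torch
    from sentence_transformers.trainer import SentenceTransformerTrainer

    optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5, weight_decay=0.01)
    scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lambda step: 1.0)

    trainer = SentenceTransformerTrainer(
        model=model,
        args=args,
        train_dataset=train_dataset,
        loss=loss,
        optimizers=(optimizer, scheduler),  # bypasses create_optimizer / create_scheduler
    )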

create_optimizer_and_scheduler(num_training_steps: int)

Setup the optimizer and the learning rate scheduler.

We provide a reasonable default that works well. If you want to use something else, you can pass a tuple in the Trainer’s init through optimizers, or subclass and override this method (or create_optimizer and/or create_scheduler) in a subclass.

create_scheduler(num_training_steps: int, optimizer: Optional[torch.optim.optimizer.Optimizer] = None)

Setup the scheduler. The optimizer of the trainer must have been set up either before this method is called or passed as an argument.

Parameters

num_training_steps (int) – The number of training steps to do.

evaluate(eval_dataset: Optional[Union[datasets.arrow_dataset.Dataset, Dict[str, datasets.arrow_dataset.Dataset]]] = None, ignore_keys: Optional[List[str]] = None, metric_key_prefix: str = 'eval') → Dict[str, float][source]

Run evaluation and return the metrics.

The calling script will be responsible for providing a method to compute metrics, as they are task-dependent (pass it to the init compute_metrics argument).

You can also subclass and override this method to inject custom behavior.

Parameters
  • eval_dataset (Dataset, optional) – Pass a dataset if you wish to override self.eval_dataset. If it is a [~datasets.Dataset], columns not accepted by the model.forward() method are automatically removed. It must implement the __len__ method.

  • ignore_keys (List[str], optional) – A list of keys in the output of your model (if it is a dictionary) that should be ignored when gathering predictions.

  • metric_key_prefix (str, optional, defaults to “eval”) – An optional prefix to be used as the metrics key prefix. For example, the metric “bleu” will be named “eval_bleu” if the prefix is “eval” (the default).

Returns

A dictionary containing the evaluation loss and the potential metrics computed from the predictions. The dictionary also contains the epoch number which comes from the training state.
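A small sketch of calling evaluate directly; stsb_dev is a placeholder for any evaluation dataset with columns your loss accepts:

    # Sketch: evaluate on a held-out split with a custom metric prefix.
    metrics = trainer.evaluate(eval_dataset=stsb_dev, metric_key_prefix="dev")
    print(metrics)  # e.g. keys such as "dev_loss", "dev_runtime", "epoch"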

get_eval_dataloader(eval_dataset: Optional[datasets.arrow_dataset.Dataset] = None) → torch.utils.data.dataloader.DataLoader[source]

Returns the evaluation [~torch.utils.data.DataLoader].

Subclass and override this method if you want to inject some custom behavior.

Parameters

eval_dataset (torch.utils.data.Dataset, optional) – If provided, will override self.eval_dataset. If it is a [~datasets.Dataset], columns not accepted by the model.forward() method are automatically removed. It must implement __len__.

get_test_dataloader(test_dataset: datasets.arrow_dataset.Dataset) → torch.utils.data.dataloader.DataLoader[source]

Returns the test [~torch.utils.data.DataLoader].

Subclass and override this method if you want to inject some custom behavior.

Parameters

test_dataset (torch.utils.data.Dataset, optional) – The test dataset to use. If it is a [~datasets.Dataset], columns not accepted by the model.forward() method are automatically removed. It must implement __len__.

get_train_dataloader() → torch.utils.data.dataloader.DataLoader[source]

Returns the training [~torch.utils.data.DataLoader].

Will use no sampler if train_dataset does not implement __len__, a random sampler (adapted to distributed training if necessary) otherwise.

Subclass and override this method if you want to inject some custom behavior.
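A minimal sketch of such a subclass, which only logs the number of batches before handing the dataloader back:

    # Sketch: inject custom behavior by overriding get_train_dataloader.
    from sentence_transformers.trainer import SentenceTransformerTrainer

    class LoggingTrainer(SentenceTransformerTrainer):
        def get_train_dataloader(self):
            dataloader = super().get_train_dataloader()
            print(f"Training dataloader has {len(dataloader)} batches")
            return dataloader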

hyperparameter_search(hp_space=None, compute_objective=None, n_trials=100, direction='minimize', backend=None, hp_name=None, **kwargs)

Launch a hyperparameter search using optuna, Ray Tune, or SigOpt. The optimized quantity is determined by compute_objective, which defaults to a function returning the evaluation loss when no metric is provided, and the sum of all metrics otherwise.

Note: To use this method, you need to have provided a model_init when initializing your [Trainer]: we need to reinitialize the model at each new run. This is incompatible with the optimizers argument, so you need to subclass [Trainer] and override the method [~Trainer.create_optimizer_and_scheduler] for custom optimizer/scheduler.

Parameters
  • hp_space (Callable[[“optuna.Trial”], Dict[str, float]], optional) – A function that defines the hyperparameter search space. Will default to [~trainer_utils.default_hp_space_optuna] or [~trainer_utils.default_hp_space_ray] or [~trainer_utils.default_hp_space_sigopt] depending on your backend.

  • compute_objective (Callable[[Dict[str, float]], float], optional) – A function computing the objective to minimize or maximize from the metrics returned by the evaluate method. Will default to [~trainer_utils.default_compute_objective].

  • n_trials (int, optional, defaults to 100) – The number of trial runs to test.

  • direction (str or List[str], optional, defaults to “minimize”) – For single-objective optimization, direction is a str and can be “minimize” or “maximize”; pick “minimize” when optimizing the validation loss and “maximize” when optimizing one or several metrics. For multi-objective optimization, direction is a List[str] containing “minimize” and/or “maximize” values, chosen by the same rule.

  • backend (str or [~training_utils.HPSearchBackend], optional) – The backend to use for hyperparameter search. Will default to optuna or Ray Tune or SigOpt, depending on which one is installed. If all are installed, will default to optuna.

  • hp_name (Callable[[“optuna.Trial”], str], optional) – A function that defines the trial/run name. Will default to None.

  • kwargs (Dict[str, Any], optional) –

    Additional keyword arguments passed along to optuna.create_study or ray.tune.run. See the documentation of those two functions for more information.

Returns

All the information about the best run or best runs for multi-objective optimization. Experiment summary can be found in run_summary attribute for Ray backend.

Return type

[trainer_utils.BestRun or List[trainer_utils.BestRun]]
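As a sketch of an optuna-backed search (args, train_dataset, and eval_dataset are placeholders; the search ranges, trial count, and model name are illustrative assumptions):

    # Sketch: model_init re-creates the model for every trial, as required here.
    from sentence_transformers import SentenceTransformer
    from sentence_transformers.losses import MultipleNegativesRankingLoss
    from sentence_transformers.trainer import SentenceTransformerTrainer

    def model_init():
        return SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

    def hp_space(trial):
        return {
            "learning_rate": trial.suggest_float("learning_rate", 1e-6, 1e-4, log=True),
            "per_device_train_batch_size": trial.suggest_categorical(
                "per_device_train_batch_size", [16, 32, 64]
            ),
        }

    trainer = SentenceTransformerTrainer(
        args=args,
        train_dataset=train_dataset,
        eval_dataset=eval_dataset,
        loss=MultipleNegativesRankingLoss,  # a callable taking the model, re-applied per trial
        model_init=model_init,
    )
    best_run = trainer.hyperparameter_search(
        hp_space=hp_space, backend="optuna", n_trials=10, direction="minimize"
    )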

is_local_process_zero() → bool

Whether or not this process is the local (e.g., on one machine if training in a distributed fashion on several machines) main process.

is_world_process_zero() → bool

Whether or not this process is the global main process (when training in a distributed fashion on several machines, this is only going to be True for one process).

log(logs: Dict[str, float]) → None

Log logs on the various objects watching training.

Subclass and override this method to inject custom behavior.

Parameters

logs (Dict[str, float]) – The values to log.
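For example, a subclass might add a constant tag to every logged metrics dict (a purely illustrative behavior):

    # Sketch: overriding log to enrich the metrics before they are dispatched.
    from sentence_transformers.trainer import SentenceTransformerTrainer

    class TaggedTrainer(SentenceTransformerTrainer):
        def log(self, logs):
            logs["run_tag"] = "experiment-1"  # illustrative extra field
            super().log(logs)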

pop_callback(callback)

Remove a callback from the current list of [~transformers.TrainerCallback] and return it.

If the callback is not found, returns None (and no error is raised).

Parameters

callback (type or [~transformers.TrainerCallback]) – A [~transformers.TrainerCallback] class or an instance of a [~transformers.TrainerCallback]. In the first case, will pop the first member of that class found in the list of callbacks.

Returns

The callback removed, if found.

Return type

[~transformers.TrainerCallback]

push_to_hub(commit_message: Optional[str] = 'End of training', blocking: bool = True, **kwargs) → str

Upload self.model and self.tokenizer to the 🤗 model hub on the repo self.args.hub_model_id.

Parameters
  • commit_message (str, optional, defaults to “End of training”) – Message to commit while pushing.

  • blocking (bool, optional, defaults to True) – Whether the function should return only when the git push has finished.

  • kwargs (Dict[str, Any], optional) – Additional keyword arguments passed along to [~Trainer.create_model_card].

Returns

The URL of the repository where the model was pushed if blocking=False, or a Future object tracking the progress of the commit if blocking=True.
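A hedged example, assuming args.hub_model_id was set (e.g. to something like "your-username/my-model") and that you are already authenticated with the Hub:

    # Sketch: push the trained model and tokenizer to the configured Hub repo.
    trainer.push_to_hub(commit_message="Add model trained on example data")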

remove_callback(callback)

Remove a callback from the current list of [~transformers.TrainerCallback].

Parameters

callback (type or [~transformers.TrainerCallback]) – A [~transformers.TrainerCallback] class or an instance of a [~transformers.TrainerCallback]. In the first case, will remove the first member of that class found in the list of callbacks.

save_model(output_dir: Optional[str] = None, _internal_call: bool = False)

Will save the model, so you can reload it using from_pretrained().

Will only save from the main process.
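A small sketch of saving and reloading; the path is hypothetical, and a saved SentenceTransformer directory can typically be reloaded by passing the path back to the SentenceTransformer constructor:

    # Sketch: save the final model, then reload it from disk.
    trainer.save_model("output/my-model/final")  # hypothetical path

    from sentence_transformers import SentenceTransformer
    reloaded = SentenceTransformer("output/my-model/final")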

train(resume_from_checkpoint: Optional[Union[bool, str]] = None, trial: Union[optuna.Trial, Dict[str, Any]] = None, ignore_keys_for_eval: Optional[List[str]] = None, **kwargs)

Main training entry point.

Parameters
  • resume_from_checkpoint (str or bool, optional) – If a str, local path to a saved checkpoint as saved by a previous instance of [Trainer]. If a bool and equals True, load the last checkpoint in args.output_dir as saved by a previous instance of [Trainer]. If present, training will resume from the model/optimizer/scheduler states loaded here.

  • trial (optuna.Trial or Dict[str, Any], optional) – The trial run or the hyperparameter dictionary for hyperparameter search.

  • ignore_keys_for_eval (List[str], optional) – A list of keys in the output of your model (if it is a dictionary) that should be ignored when gathering predictions for evaluation during the training.

  • kwargs (Dict[str, Any], optional) – Additional keyword arguments used to hide deprecated arguments.
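For example, resuming an interrupted run might look like the following sketch; the checkpoint path is illustrative:

    # Sketch: resume from the most recent checkpoint in args.output_dir.
    trainer.train(resume_from_checkpoint=True)

    # Or resume from a specific checkpoint directory (path is an example):
    # trainer.train(resume_from_checkpoint="output/my-model/checkpoint-500")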