Training Arguments

SentenceTransformerTrainingArguments

class sentence_transformers.training_args.SentenceTransformerTrainingArguments

SentenceTransformerTrainingArguments extends TrainingArguments with additional arguments specific to Sentence Transformers. See TrainingArguments for the complete list of available arguments.

Parameters
  • output_dir (str) – The output directory where the model checkpoints will be written.

  • prompts (Union[Dict[str, Dict[str, str]], Dict[str, str], str], optional) –

    The prompts to use for each column in the training, evaluation and test datasets. Four formats are accepted (see the construction sketch after this parameter list):

    1. str: A single prompt to use for all columns in the datasets, regardless of whether the training/evaluation/test datasets are datasets.Dataset or a datasets.DatasetDict.

    2. Dict[str, str]: A dictionary mapping column names to prompts, regardless of whether the training/evaluation/test datasets are datasets.Dataset or a datasets.DatasetDict.

    3. Dict[str, str]: A dictionary mapping dataset names to prompts. This should only be used if your training/evaluation/test datasets are a datasets.DatasetDict or a dictionary of datasets.Dataset.

    4. Dict[str, Dict[str, str]]: A dictionary mapping dataset names to dictionaries mapping column names to prompts. This should only be used if your training/evaluation/test datasets are a datasets.DatasetDict or a dictionary of datasets.Dataset.

  • batch_sampler (Union[BatchSamplers, str], optional) – The batch sampler to use. See BatchSamplers for valid options. Defaults to BatchSamplers.BATCH_SAMPLER.

  • multi_dataset_batch_sampler (Union[MultiDatasetBatchSamplers, str], optional) – The multi-dataset batch sampler to use. See MultiDatasetBatchSamplers for valid options. Defaults to MultiDatasetBatchSamplers.PROPORTIONAL.
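
For example, a minimal construction sketch combining a single prompt string with a non-default batch sampler (the output directory and prompt text are purely illustrative):

```py
>>> from sentence_transformers.training_args import (
...     SentenceTransformerTrainingArguments,
...     BatchSamplers,
... )
>>> args = SentenceTransformerTrainingArguments(
...     output_dir="working_dir",
...     prompts="query: ",  # format 1: one prompt applied to every column of every dataset
...     batch_sampler=BatchSamplers.NO_DUPLICATES,  # avoid duplicate samples within a batch
... )
```

BatchSamplers.NO_DUPLICATES is typically useful for losses that rely on in-batch negatives; to use per-dataset or per-column prompts, pass one of the dictionary formats described above instead.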

property ddp_timeout_delta

The actual timeout for torch.distributed.init_process_group since it expects a timedelta variable.

property device

The device used by this process.

property eval_batch_size

The actual batch size for evaluation (may differ from per_gpu_eval_batch_size in distributed training).

get_process_log_level()

Returns the log level to be used depending on whether this process is the main process of node 0, main process of node non-0, or a non-main process.

For the main process the log level defaults to the logging level set (logging.WARNING if you didn’t do anything) unless overridden by log_level argument.

For the replica processes the log level defaults to logging.WARNING unless overridden by log_level_replica argument.

The choice between the main and replica process settings is made according to the return value of should_log.
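
An illustrative sketch, assuming the code runs as the main process of node 0 (so log_level applies rather than log_level_replica):

```py
>>> import logging
>>> from transformers import TrainingArguments
>>> args = TrainingArguments("working_dir", log_level="info")
>>> args.get_process_log_level() == logging.INFO
True
```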

get_warmup_steps(num_training_steps: int)

Get number of steps used for a linear warmup.
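
A short sketch of how warmup_ratio feeds into this method (assuming warmup_steps is left at its default of 0, since an explicit warmup_steps takes precedence):

```py
>>> from transformers import TrainingArguments
>>> args = TrainingArguments("working_dir", warmup_ratio=0.1)
>>> args.get_warmup_steps(num_training_steps=1000)  # ceil(1000 * 0.1)
100
```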

property local_process_index

The index of the local process used.

main_process_first(local=True, desc='work')

A context manager for a torch distributed environment where one needs to do something on the main process while blocking the replicas, releasing the replicas once it is finished.

One such use is for the datasets library’s map feature which, to be efficient, should be run once on the main process; upon completion it saves a cached version of the results which then automatically gets loaded by the replicas. A minimal usage sketch follows the parameter list below.

Parameters
  • local (bool, optional, defaults to True) – If True, “first” means the process of rank 0 of each node; if False, it means the process of rank 0 of node rank 0. In a multi-node environment with a shared filesystem you most likely will want to use local=False so that only the main process of the first node will do the processing. If, however, the filesystem is not shared, then the main process of each node will need to do the processing, which is the default behavior.

  • desc (str, optional, defaults to “work”) – a work description to be used in debug logs
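
A minimal usage sketch, assuming args is a (SentenceTransformer)TrainingArguments instance and that dataset and preprocess_fn are placeholder names for an existing datasets.Dataset and a preprocessing function:

```py
>>> with args.main_process_first(desc="dataset map pre-processing"):
...     # runs once on the main process; replicas block, then load the cached result
...     dataset = dataset.map(preprocess_fn, batched=True)
```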

property n_gpu

The number of GPUs used by this process.

Note

This will only be greater than one when you have multiple GPUs available but are not using distributed training. For distributed training, it will always be 1.

property parallel_mode

The current mode used for parallelism if multiple GPUs/TPU cores are available (see the check after this list). One of:

  • ParallelMode.NOT_PARALLEL: no parallelism (CPU or one GPU).

  • ParallelMode.NOT_DISTRIBUTED: several GPUs in one single process (uses torch.nn.DataParallel).

  • ParallelMode.DISTRIBUTED: several GPUs, each having its own process (uses torch.nn.DistributedDataParallel).

  • ParallelMode.TPU: several TPU cores.
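
As an illustrative check, assuming a run with at most one GPU and no distributed launcher:

```py
>>> from transformers import TrainingArguments
>>> from transformers.training_args import ParallelMode
>>> args = TrainingArguments("working_dir")
>>> args.parallel_mode == ParallelMode.NOT_PARALLEL
True
```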

property place_model_on_device

Can be subclassed and overridden for some specific integrations.

property process_index

The index of the current process used.

set_dataloader(train_batch_size: int = 8, eval_batch_size: int = 8, drop_last: bool = False, num_workers: int = 0, pin_memory: bool = True, persistent_workers: bool = False, prefetch_factor: Optional[int] = None, auto_find_batch_size: bool = False, ignore_data_skip: bool = False, sampler_seed: Optional[int] = None)

A method that regroups all arguments linked to the dataloaders creation.

Parameters
  • drop_last (bool, optional, defaults to False) – Whether to drop the last incomplete batch (if the length of the dataset is not divisible by the batch size) or not.

  • num_workers (int, optional, defaults to 0) – Number of subprocesses to use for data loading (PyTorch only). 0 means that the data will be loaded in the main process.

  • pin_memory (bool, optional, defaults to True) – Whether you want to pin memory in data loaders or not. Will default to True.

  • persistent_workers (bool, optional, defaults to False) – If True, the data loader will not shut down the worker processes after a dataset has been consumed once. This allows the workers' Dataset instances to stay alive and can potentially speed up training, but will increase RAM usage. Will default to False.

  • prefetch_factor (int, optional) – Number of batches loaded in advance by each worker. 2 means there will be a total of 2 * num_workers batches prefetched across all workers.

  • auto_find_batch_size (bool, optional, defaults to False) – Whether to find a batch size that will fit into memory automatically through exponential decay, avoiding CUDA Out-of-Memory errors. Requires accelerate to be installed (pip install accelerate)

  • ignore_data_skip (bool, optional, defaults to False) – When resuming training, whether or not to skip the epochs and batches to get the data loading at the same stage as in the previous training. If set to True, the training will begin faster (as that skipping step can take a long time) but will not yield the same results as the interrupted training would have.

  • sampler_seed (int, optional) – Random seed to be used with data samplers. If not set, random generators for data sampling will use the same seed as self.seed. This can be used to ensure reproducibility of data sampling, independent of the model seed.

Example:

```py
>>> from transformers import TrainingArguments
>>> args = TrainingArguments("working_dir")
>>> args = args.set_dataloader(train_batch_size=16, eval_batch_size=64)
>>> args.per_device_train_batch_size
16
```
set_evaluate(strategy: Union[str, transformers.trainer_utils.IntervalStrategy] = 'no', steps: int = 500, batch_size: int = 8, accumulation_steps: Optional[int] = None, delay: Optional[float] = None, loss_only: bool = False, jit_mode: bool = False)

A method that regroups all arguments linked to evaluation.

Parameters
  • strategy (str or [~trainer_utils.IntervalStrategy], optional, defaults to “no”) –

    The evaluation strategy to adopt during training. Possible values are:

    • ”no”: No evaluation is done during training.

    • ”steps”: Evaluation is done (and logged) every steps.

    • ”epoch”: Evaluation is done at the end of each epoch.

    Setting a strategy different from “no” will set self.do_eval to True.

  • steps (int, optional, defaults to 500) – Number of update steps between two evaluations if strategy=”steps”.

  • batch_size (int, optional, defaults to 8) – The batch size per device (GPU/TPU core/CPU…) used for evaluation.

  • accumulation_steps (int, optional) – Number of prediction steps to accumulate the output tensors for, before moving the results to the CPU. If left unset, the whole predictions are accumulated on GPU/TPU before being moved to the CPU (faster but requires more memory).

  • delay (float, optional) – Number of epochs or steps to wait for before the first evaluation can be performed, depending on the eval_strategy.

  • loss_only (bool, optional, defaults to False) – Ignores all outputs except the loss.

  • jit_mode (bool, optional) – Whether or not to use PyTorch jit trace for inference.

Example:

```py
>>> from transformers import TrainingArguments
>>> args = TrainingArguments("working_dir")
>>> args = args.set_evaluate(strategy="steps", steps=100)
>>> args.eval_steps
100
```
set_logging(strategy: Union[str, transformers.trainer_utils.IntervalStrategy] = 'steps', steps: int = 500, report_to: Union[str, List[str]] = 'none', level: str = 'passive', first_step: bool = False, nan_inf_filter: bool = False, on_each_node: bool = False, replica_level: str = 'passive')

A method that regroups all arguments linked to logging.

Parameters
  • strategy (str or [~trainer_utils.IntervalStrategy], optional, defaults to “steps”) –

    The logging strategy to adopt during training. Possible values are:

    • ”no”: No logging is done during training.

    • ”epoch”: Logging is done at the end of each epoch.

    • ”steps”: Logging is done every logging_steps.

  • steps (int, optional, defaults to 500) – Number of update steps between two logs if strategy=”steps”.

  • level (str, optional, defaults to “passive”) – Logger log level to use on the main process. Possible choices are the log levels as strings: “debug”, “info”, “warning”, “error” and “critical”, plus a “passive” level which doesn’t set anything and lets the application set the level.

  • report_to (str or List[str], optional, defaults to “none”) – The list of integrations to report the results and logs to. Supported platforms are “azure_ml”, “clearml”, “codecarbon”, “comet_ml”, “dagshub”, “dvclive”, “flyte”, “mlflow”, “neptune”, “tensorboard”, and “wandb”. Use “all” to report to all integrations installed, “none” for no integrations.

  • first_step (bool, optional, defaults to False) – Whether to log and evaluate the first global_step or not.

  • nan_inf_filter (bool, optional, defaults to False) –

    Whether to filter nan and inf losses for logging. If set to True the loss of every step that is nan or inf is filtered and the average loss of the current logging window is taken instead.

    <Tip>

    nan_inf_filter only influences the logging of loss values; it does not change the behavior of how the gradient is computed or applied to the model.

    </Tip>

  • on_each_node (bool, optional, defaults to False) – In multinode distributed training, whether to log using log_level once per node, or only on the main node.

  • replica_level (str, optional, defaults to “passive”) – Logger log level to use on replicas. Same choices as log_level

Example:

```py
>>> from transformers import TrainingArguments
>>> args = TrainingArguments("working_dir")
>>> args = args.set_logging(strategy="steps", steps=100)
>>> args.logging_steps
100
```
set_lr_scheduler(name: Union[str, transformers.trainer_utils.SchedulerType] = 'linear', num_epochs: float = 3.0, max_steps: int = -1, warmup_ratio: float = 0, warmup_steps: int = 0)

A method that regroups all arguments linked to the learning rate scheduler and its hyperparameters.

Parameters
  • name (str or [SchedulerType], optional, defaults to “linear”) – The scheduler type to use. See the documentation of [SchedulerType] for all possible values.

  • num_epochs (float, optional, defaults to 3.0) – Total number of training epochs to perform (if not an integer, the decimal part is treated as the fraction of the final epoch to perform before stopping training).

  • max_steps (int, optional, defaults to -1) – If set to a positive number, the total number of training steps to perform. Overrides num_train_epochs. For a finite dataset, training is reiterated through the dataset (if all data is exhausted) until max_steps is reached.

  • warmup_ratio (float, optional, defaults to 0.0) – Ratio of total training steps used for a linear warmup from 0 to learning_rate.

  • warmup_steps (int, optional, defaults to 0) – Number of steps used for a linear warmup from 0 to learning_rate. Overrides any effect of warmup_ratio.

Example:

```py
>>> from transformers import TrainingArguments
>>> args = TrainingArguments("working_dir")
>>> args = args.set_lr_scheduler(name="cosine", warmup_ratio=0.05)
>>> args.warmup_ratio
0.05
```
set_optimizer(name: Union[str, transformers.training_args.OptimizerNames] = 'adamw_torch', learning_rate: float = 5e-05, weight_decay: float = 0, beta1: float = 0.9, beta2: float = 0.999, epsilon: float = 1e-08, args: Optional[str] = None)

A method that regroups all arguments linked to the optimizer and its hyperparameters.

Parameters
  • name (str or [training_args.OptimizerNames], optional, defaults to “adamw_torch”) – The optimizer to use: “adamw_hf”, “adamw_torch”, “adamw_torch_fused”, “adamw_apex_fused”, “adamw_anyprecision” or “adafactor”.

  • learning_rate (float, optional, defaults to 5e-5) – The initial learning rate.

  • weight_decay (float, optional, defaults to 0) – The weight decay to apply (if not zero) to all layers except all bias and LayerNorm weights.

  • beta1 (float, optional, defaults to 0.9) – The beta1 hyperparameter for the adam optimizer or its variants.

  • beta2 (float, optional, defaults to 0.999) – The beta2 hyperparameter for the adam optimizer or its variants.

  • epsilon (float, optional, defaults to 1e-8) – The epsilon hyperparameter for the adam optimizer or its variants.

  • args (str, optional) – Optional arguments that are supplied to AnyPrecisionAdamW (only useful when optim=”adamw_anyprecision”).

Example:

```py
>>> from transformers import TrainingArguments
>>> args = TrainingArguments("working_dir")
>>> args = args.set_optimizer(name="adamw_torch", beta1=0.8)
>>> args.optim
'adamw_torch'
```
set_push_to_hub(model_id: str, strategy: Union[str, transformers.trainer_utils.HubStrategy] = 'every_save', token: Optional[str] = None, private_repo: bool = False, always_push: bool = False)

A method that regroups all arguments linked to synchronizing checkpoints with the Hub.

<Tip>

Calling this method will set self.push_to_hub to True, which means the output_dir will become a git repository synced with the Hub repo (determined by model_id), and the content will be pushed each time a save is triggered (depending on your self.save_strategy). Calling [~Trainer.save_model] will also trigger a push.

</Tip>

Parameters
  • model_id (str) – The name of the repository to keep in sync with the local output_dir. It can be a simple model ID in which case the model will be pushed in your namespace. Otherwise it should be the whole repository name, for instance “user_name/model”, which allows you to push to an organization you are a member of with “organization_name/model”.

  • strategy (str or [~trainer_utils.HubStrategy], optional, defaults to “every_save”) –

    Defines the scope of what is pushed to the Hub and when. Possible values are:

    • ”end”: push the model, its configuration, the tokenizer (if passed along to the [Trainer]) and a draft of a model card when the [~Trainer.save_model] method is called.

    • ”every_save”: push the model, its configuration, the tokenizer (if passed along to the [Trainer]) and a draft of a model card each time there is a model save. The pushes are asynchronous to not block training, and in case the saves are very frequent, a new push is only attempted if the previous one is finished. A last push is made with the final model at the end of training.

    • ”checkpoint”: like ”every_save” but the latest checkpoint is also pushed in a subfolder named last-checkpoint, allowing you to resume training easily with trainer.train(resume_from_checkpoint=”last-checkpoint”).

    • ”all_checkpoints”: like ”checkpoint” but all checkpoints are pushed like they appear in the output folder (so you will get one checkpoint folder per folder in your final repository).

  • token (str, optional) – The token to use to push the model to the Hub. Will default to the token in the cache folder obtained with huggingface-cli login.

  • private_repo (bool, optional, defaults to False) – If True, the Hub repo will be set to private.

  • always_push (bool, optional, defaults to False) – Unless this is True, the Trainer will skip pushing a checkpoint when the previous push is not finished.

Example:

```py
>>> from transformers import TrainingArguments
>>> args = TrainingArguments("working_dir")
>>> args = args.set_push_to_hub("me/awesome-model")
>>> args.hub_model_id
'me/awesome-model'
```
set_save(strategy: Union[str, transformers.trainer_utils.IntervalStrategy] = 'steps', steps: int = 500, total_limit: Optional[int] = None, on_each_node: bool = False)

A method that regroups all arguments linked to checkpoint saving.

Parameters
  • strategy (str or [~trainer_utils.IntervalStrategy], optional, defaults to “steps”) –

    The checkpoint save strategy to adopt during training. Possible values are:

    • ”no”: No save is done during training.

    • ”epoch”: Save is done at the end of each epoch.

    • ”steps”: Save is done every save_steps.

  • steps (int, optional, defaults to 500) – Number of update steps between two checkpoint saves if strategy=”steps”.

  • total_limit (int, optional) – If a value is passed, will limit the total number of checkpoints. Older checkpoints in output_dir are deleted.

  • on_each_node (bool, optional, defaults to False) –

    When doing multi-node distributed training, whether to save models and checkpoints on each node, or only on the main one.

    This should not be activated when the different nodes use the same storage as the files will be saved with the same names for each node.

Example:

```py
>>> from transformers import TrainingArguments
>>> args = TrainingArguments("working_dir")
>>> args = args.set_save(strategy="steps", steps=100)
>>> args.save_steps
100
```
set_testing(batch_size: int = 8, loss_only: bool = False, jit_mode: bool = False)

A method that regroups all basic arguments linked to testing on a held-out dataset.

<Tip>

Calling this method will automatically set self.do_predict to True.

</Tip>

Parameters
  • batch_size (int, optional, defaults to 8) – The batch size per device (GPU/TPU core/CPU…) used for testing.

  • loss_only (bool, optional, defaults to False) – Ignores all outputs except the loss.

  • jit_mode (bool, optional) – Whether or not to use PyTorch jit trace for inference.

Example:

```py
>>> from transformers import TrainingArguments
>>> args = TrainingArguments("working_dir")
>>> args = args.set_testing(batch_size=32)
>>> args.per_device_eval_batch_size
32
```
set_training(learning_rate: float = 5e-05, batch_size: int = 8, weight_decay: float = 0, num_epochs: float = 3, max_steps: int = -1, gradient_accumulation_steps: int = 1, seed: int = 42, gradient_checkpointing: bool = False)

A method that regroups all basic arguments linked to the training.

<Tip>

Calling this method will automatically set self.do_train to True.

</Tip>

Parameters
  • learning_rate (float, optional, defaults to 5e-5) – The initial learning rate for the optimizer.

  • batch_size (int, optional, defaults to 8) – The batch size per device (GPU/TPU core/CPU…) used for training.

  • weight_decay (float, optional, defaults to 0) – The weight decay to apply (if not zero) to all layers except all bias and LayerNorm weights in the optimizer.

  • num_epochs (float, optional, defaults to 3.0) – Total number of training epochs to perform (if not an integer, the decimal part is treated as the fraction of the final epoch to perform before stopping training).

  • max_steps (int, optional, defaults to -1) – If set to a positive number, the total number of training steps to perform. Overrides num_train_epochs. For a finite dataset, training is reiterated through the dataset (if all data is exhausted) until max_steps is reached.

  • gradient_accumulation_steps (int, optional, defaults to 1) –

    Number of update steps to accumulate the gradients for, before performing a backward/update pass.

    <Tip warning={true}>

    When using gradient accumulation, one step is counted as one step with backward pass. Therefore, logging, evaluation, save will be conducted every gradient_accumulation_steps * xxx_step training examples.

    </Tip>

  • seed (int, optional, defaults to 42) – Random seed that will be set at the beginning of training. To ensure reproducibility across runs, use the [~Trainer.model_init] function to instantiate the model if it has some randomly initialized parameters.

  • gradient_checkpointing (bool, optional, defaults to False) – If True, use gradient checkpointing to save memory at the expense of slower backward pass.

Example:

```py
>>> from transformers import TrainingArguments
>>> args = TrainingArguments("working_dir")
>>> args = args.set_training(learning_rate=1e-4, batch_size=32)
>>> args.learning_rate
0.0001
```
property should_log

Whether or not the current process should produce logs.

property should_save

Whether or not the current process should write to disk, e.g., to save models and checkpoints.

to_dict()

Serializes this instance while replacing Enum members by their values (for JSON serialization support). Token values are obfuscated by removing them.

to_json_string()

Serializes this instance to a JSON string.

to_sanitized_dict() → Dict[str, Any]

Sanitized serialization to use with TensorBoard’s hparams
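
A small illustrative sketch of these serialization helpers (the exact keys depend on your transformers and sentence-transformers versions):

```py
>>> from sentence_transformers.training_args import SentenceTransformerTrainingArguments
>>> args = SentenceTransformerTrainingArguments("working_dir")
>>> config = args.to_dict()             # Enum members replaced by their plain values; token values removed
>>> json_str = args.to_json_string()    # the same content as a JSON string
>>> hparams = args.to_sanitized_dict()  # flattened to simple types for TensorBoard hparams
```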

property train_batch_size

The actual batch size for training (may differ from per_gpu_train_batch_size in distributed training).

property world_size

The number of processes used in parallel.