Custom usage
Use custom models
By default, Transformers.js uses hosted pretrained models and precompiled WASM binaries, which should work out of the box. You can customize this as follows:
Settings
```js
import { env } from '@xenova/transformers';

// Specify a custom location for models (defaults to '/models/').
env.localModelPath = '/path/to/models/';

// Disable the loading of remote models from the Hugging Face Hub:
env.allowRemoteModels = false;

// Set location of .wasm files. Defaults to use a CDN.
env.backends.onnx.wasm.wasmPaths = '/path/to/files/';
```
For a full list of available settings, check out the API Reference.
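For example, with remote models disabled, pipelines resolve model names against the local model path instead of the Hub. Below is a minimal sketch; the paths, the fill-mask task, and the model name are illustrative, and it assumes you have already placed a converted copy of bert-base-uncased under the local model path (see the next section):

```js
import { env, pipeline } from '@xenova/transformers';

// Only look for models under the local path configured above.
env.allowRemoteModels = false;
env.localModelPath = '/path/to/models/';

// Resolves to '/path/to/models/bert-base-uncased/' instead of the Hub.
const unmasker = await pipeline('fill-mask', 'bert-base-uncased');
const output = await unmasker('The capital of France is [MASK].');
```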
Convert your models to ONNX
We recommend using our conversion script to convert your PyTorch, TensorFlow, or JAX models to ONNX in a single command. Behind the scenes, it uses 🤗 Optimum to perform conversion and quantization of your model.
```bash
python -m scripts.convert --quantize --model_id <model_name_or_path>
```
For example, convert and quantize bert-base-uncased using:
```bash
python -m scripts.convert --quantize --model_id bert-base-uncased
```
This will save the following files to ./models/:
```
bert-base-uncased/
├── config.json
├── tokenizer.json
├── tokenizer_config.json
└── onnx/
    ├── model.onnx
    └── model_quantized.onnx
```
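Once this folder is available at the location Transformers.js expects (env.localModelPath, which defaults to '/models/'), the converted model can be loaded by name. A minimal sketch, assuming the layout above; the quantized option selects between the two ONNX files, and the quantized variant is used by default:

```js
import { pipeline } from '@xenova/transformers';

// Loads onnx/model_quantized.onnx by default; pass { quantized: false }
// to use the full-precision onnx/model.onnx instead.
const unmasker = await pipeline('fill-mask', 'bert-base-uncased', {
  quantized: true,
});
```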