Custom usage

Use custom models

By default, Transformers.js uses hosted pretrained models and precompiled WASM binaries, which should work out-of-the-box. You can customize this as follows:

Settings


import { env } from '@xenova/transformers';

// Specify a custom location for models (defaults to '/models/').
env.localModelPath = '/path/to/models/';

// Disable the loading of remote models from the BOINC AI Hub:
env.allowRemoteModels = false;

// Set location of .wasm files. Defaults to use a CDN.
env.backends.onnx.wasm.wasmPaths = '/path/to/files/';

For a full list of available settings, check out the API Reference.
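
These settings are read when a model is loaded, so apply them before creating a pipeline. A minimal sketch of putting them together, assuming a model has already been placed in a folder under the local model path (the folder name 'my-model' and both paths are placeholders):

import { env, pipeline } from '@xenova/transformers';

// Apply settings before any pipeline is created (paths are examples).
env.localModelPath = '/path/to/models/';
env.allowRemoteModels = false;
env.backends.onnx.wasm.wasmPaths = '/path/to/files/';

// 'my-model' is a placeholder: it must match a folder under
// /path/to/models/ containing config.json, tokenizer files, and ONNX weights.
const classifier = await pipeline('text-classification', 'my-model');
const output = await classifier('I love transformers!');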

Convert your models to ONNX

We recommend using our conversion script to convert your PyTorch, TensorFlow, or JAX models to ONNX in a single command. Behind the scenes, it uses 🌍 Optimum to perform conversion and quantization of your model.

python -m scripts.convert --quantize --model_id <model_name_or_path>

For example, convert and quantize bert-base-uncased using:

python -m scripts.convert --quantize --model_id bert-base-uncased

This will save the following files to ./models/:

bert-base-uncased/
├── config.json
├── tokenizer.json
├── tokenizer_config.json
└── onnx/
    ├── model.onnx
    └── model_quantized.onnx
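
Once converted, the model can be loaded by its folder name after pointing Transformers.js at the output directory. A minimal sketch, assuming the ./models/ directory above is reachable from your app (fill-mask matches bert-base-uncased's pretraining objective):

import { env, pipeline } from '@xenova/transformers';

// Resolve models from the conversion output directory (the relative path is
// an assumption about where ./models/ sits in your app).
env.localModelPath = './models/';
env.allowRemoteModels = false;

// Load the converted model by its folder name and run masked-language-modelling.
const unmasker = await pipeline('fill-mask', 'bert-base-uncased');
const output = await unmasker('The goal of life is [MASK].');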
