Server-side Inference in Node.js
Although Transformers.js was originally designed to be used in the browser, it’s also able to run inference on the server. In this tutorial, we will design a simple Node.js API that uses Transformers.js for sentiment analysis.
We’ll also show you how to use the library in both CommonJS and ECMAScript modules, so you can choose the module system that works best for your project:
- ECMAScript modules (ESM) - The official standard format to package JavaScript code for reuse. It’s the default module system in modern browsers, with modules imported using `import` and exported using `export`. Fortunately, starting with version 13.2.0, Node.js has stable support of ES modules.
- CommonJS - The default module system in Node.js. In this system, modules are imported using `require()` and exported using `module.exports`.
Although you can always use the Python library for server-side inference, using Transformers.js means that you can write all of your code in JavaScript (instead of having to set up and communicate with a separate Python process).
Useful links:
- Source code (ESM or CommonJS)

Prerequisites:
- Node.js version 18+
- npm version 9+
Let’s start by creating a new Node.js project and installing Transformers.js via NPM:
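The exact commands are not included in this extract; assuming the standard npm workflow and the `@xenova/transformers` package (the same package referenced by the cache path later in this tutorial), they would look like:

```bash
npm init -y
npm i @xenova/transformers
```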
Next, create a new file called `app.js`, which will be the entry point for our application. Depending on whether you’re using ECMAScript modules or CommonJS, you will need to do some things differently (see below).

We’ll also create a helper class called `MyClassificationPipeline` to control the loading of the pipeline. It uses the singleton pattern to lazily create a single instance of the pipeline when `getInstance` is first called, and uses this pipeline for all subsequent calls.

If you’re using ECMAScript modules, you need to add `"type": "module"` to your `package.json` to indicate this:
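For reference, the relevant part of `package.json` would look something like this (the other fields generated by `npm init` are omitted):

```json
{
  "type": "module"
}
```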
Next, you will need to add the following imports to the top of `app.js`:
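The original snippet is not shown in this extract; a minimal version, assuming the server below uses Node’s built-in `http`, `url`, and `querystring` modules, would be:

```js
// Node.js built-in modules used by the HTTP server
import http from 'http';
import querystring from 'querystring';
import url from 'url';
```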
Following that, let’s import Transformers.js and define the `MyClassificationPipeline` class.
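The class itself is missing from this extract. A sketch of what it looks like, assuming the `Xenova/distilbert-base-uncased-finetuned-sst-2-english` sentiment model (the exact model ID is an assumption; any text-classification model works):

```js
// `env` is used in the customization section at the end of this tutorial
import { pipeline, env } from '@xenova/transformers';

class MyClassificationPipeline {
  static task = 'text-classification';
  static model = 'Xenova/distilbert-base-uncased-finetuned-sst-2-english';
  static instance = null;

  static async getInstance(progress_callback = null) {
    if (this.instance === null) {
      // Lazily create the pipeline the first time getInstance() is called,
      // then reuse the same instance for all subsequent calls.
      this.instance = pipeline(this.task, this.model, { progress_callback });
    }
    return this.instance;
  }
}
```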
If you’re using CommonJS instead, start by adding the following imports to the top of `app.js`:
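Again assuming the built-in `http`, `url`, and `querystring` modules:

```js
// Node.js built-in modules used by the HTTP server
const http = require('http');
const querystring = require('querystring');
const url = require('url');
```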
Following that, let’s import Transformers.js and define the `MyClassificationPipeline` class. Since Transformers.js is an ESM module, we will need to dynamically import the library using the `import()` function:
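A sketch, with the same assumed model as before:

```js
class MyClassificationPipeline {
  static task = 'text-classification';
  static model = 'Xenova/distilbert-base-uncased-finetuned-sst-2-english';
  static instance = null;

  static async getInstance(progress_callback = null) {
    if (this.instance === null) {
      // Transformers.js is an ES module, so from CommonJS it must be imported dynamically
      let { pipeline } = await import('@xenova/transformers');
      this.instance = pipeline(this.task, this.model, { progress_callback });
    }
    return this.instance;
  }
}
```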
Next, let’s create a basic server with the built-in `http` module. We will listen for requests made to the server (using the `/classify` endpoint), extract the `text` query parameter, and run it through the pipeline.
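The server code itself is missing from this extract; a minimal sketch, assuming it listens on `127.0.0.1:3000` (the hostname and port are assumptions), could look like this:

```js
// Define the HTTP server
const server = http.createServer();
const hostname = '127.0.0.1'; // assumption: host and port are not specified in the extract
const port = 3000;

// Listen for requests made to the server
server.on('request', async (req, res) => {
  // Parse the request URL and extract the `text` query parameter
  const parsedUrl = url.parse(req.url);
  const { text } = querystring.parse(parsedUrl.query ?? '');

  res.setHeader('Content-Type', 'application/json');

  let response;
  if (parsedUrl.pathname === '/classify' && text) {
    // Run the text through the (lazily loaded) classification pipeline
    const classifier = await MyClassificationPipeline.getInstance();
    response = await classifier(text);
    res.statusCode = 200;
  } else {
    response = { 'error': 'Bad request' };
    res.statusCode = 400;
  }

  // Send the JSON response back to the client
  res.end(JSON.stringify(response));
});

server.listen(port, hostname, () => {
  console.log(`Server running at http://${hostname}:${port}/`);
});
```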
Since we use lazy loading, the first request made to the server will also be responsible for loading the pipeline. If you would like to begin loading the pipeline as soon as the server starts running, you can add the following line of code after defining `MyClassificationPipeline`:
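That line is simply a call to the singleton getter:

```js
// Begin loading the pipeline as soon as the server starts ("hot loading"),
// rather than waiting for the first request.
MyClassificationPipeline.getInstance();
```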
To start the server, run the following command:
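Assuming the entry point is `app.js` as above:

```bash
node app.js
```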
The server should be live at the hostname and port you configured (http://127.0.0.1:3000/ in the sketch above), which you can visit in your web browser.
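You should see an error message whose exact shape depends on how the handler formats its response; with the sketch above it would be:

```json
{"error": "Bad request"}
```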
This is because we aren’t targeting the `/classify` endpoint with a valid `text` query parameter. Let’s try again, this time with a valid request. For example, with the sketch above, you could visit something like http://127.0.0.1:3000/classify?text=I%20love%20Transformers.js and you should see:
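The score below is purely illustrative; the exact value depends on the model and input text:

```json
[{ "label": "POSITIVE", "score": 0.9996 }]
```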
Great! We’ve successfully created a basic HTTP server that uses Transformers.js to classify text.
By default, the first time you run the application, it will download the model files and cache them on your file system (in `./node_modules/@xenova/transformers/.cache/`). All subsequent requests will then use this model. You can change the location of the cache by setting `env.cacheDir`. For example, to cache the model in the `.cache` directory in the current working directory, you can add:
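Assuming `env` has been imported from `@xenova/transformers` (as in the sketches above):

```js
// Cache downloaded model files in ./.cache, relative to the current working directory
env.cacheDir = './.cache';
```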
If you want to use local model files, you can set `env.localModelPath` as follows:
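For example (the path is just a placeholder):

```js
// Load models from a local directory instead of downloading them
env.localModelPath = '/path/to/local/models/';
```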
You can also disable loading of remote models by setting `env.allowRemoteModels` to `false`:
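For example:

```js
// Never fetch models from the Hugging Face Hub; only use files under env.localModelPath
env.allowRemoteModels = false;
```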