Smart Compose (v3 only)

For many users, writing clear, concise, grammatically correct, well-structured email messages takes a lot of time. Writing is not everyone’s superpower and doesn't need to be. With Nylas's Smart Compose endpoint, a user can generate a well-written email in a few seconds based on user prompts and email context.

Currently, Smart Compose can generate new email messages and suggest replies to existing messages, based on a user's prompt.

Note: Smart Compose is only available to Nylas users on v3-beta and later. It is not available for v2.7.


To use Smart Compose in v3-beta, you only need a v3 Nylas application, a working authentication configuration (a provider authentication app and a connector/integration), and a grant.

Make sure your project includes a field for the user to enter their instructions for the AI.
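With the grant and the user's prompt in hand, the call itself is a single authenticated POST. The sketch below builds the request pieces under the assumption that the endpoint follows the v3 shape `POST /v3/grants/{grant_id}/messages/smart-compose` with a JSON body containing a `prompt` field — verify the path and body against the current API reference before relying on it.

```python
# Hedged sketch of assembling a Smart Compose request.
# Assumptions (check against the Nylas v3 API reference):
#   - endpoint: POST /v3/grants/{grant_id}/messages/smart-compose
#   - body: {"prompt": "<user instructions>"}
import json

API_BASE = "https://api.us.nylas.com"  # region-specific; adjust for your data center


def build_smart_compose_request(grant_id: str, api_key: str, prompt: str,
                                stream: bool = False):
    """Return (url, headers, body) for a Smart Compose call."""
    url = f"{API_BASE}/v3/grants/{grant_id}/messages/smart-compose"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
        # Accept header chooses between SSE streaming and a single JSON blob.
        "Accept": "text/event-stream" if stream else "application/json",
    }
    body = json.dumps({"prompt": prompt})
    return url, headers, body
```

You would then send these with your HTTP client of choice; the `stream` flag only switches the `Accept` header, which is how the two response options described later are selected.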

Future plans

When v3 reaches GA, you will also need to connect your own LLM account to use Smart Compose; for example, you might connect your organization's OpenAI account. This allows you to choose your own LLM and parameters, so you can customize the AI output.

Response options

The Nylas Smart Compose endpoints support two methods of getting AI responses: you can either receive them as a REST response in a single JSON blob, or use SSE (Server-Sent Events) to stream the response tokens as they are received.

To enable SSE, add the Accept: text/event-stream header to your request. You'll need to make sure your project can accept streaming events, and render them for the user.
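Consuming the stream means reading the response body line by line and collecting the `data:` fields, which is how Server-Sent Events carry their payload. A minimal sketch (the per-event payload format is an assumption — check the actual response before parsing it further):

```python
# Minimal SSE line parser: yields the payload of each `data:` line.
# The content of each data field is whatever the server sends; this
# sketch does not assume any particular JSON structure inside it.

def iter_sse_data(lines):
    """Yield the payload of each `data:` line in an SSE stream."""
    for raw in lines:
        line = raw.strip()
        if line.startswith("data:"):
            yield line[len("data:"):].strip()

# With a real streaming HTTP client you would iterate over the response
# body instead of a list, e.g. requests' response.iter_lines().
sample = [
    "data: Hello",
    "",            # a blank line terminates an SSE event
    "data: world",
]
```

Appending each yielded token to the UI as it arrives is what makes the response feel incremental to the user.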

To use the REST method and receive a single JSON response, either add the Accept: application/json header or omit the Accept header entirely.

Prompt latency varies depending on the length and complexity of the prompt. You might want to add a "working" indicator to your UI so the user knows to wait for the response.


Prompts sent to the current Nylas LLM can be up to 1,000 tokens long. Longer prompts return an error.
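If you want to warn the user before sending an over-long prompt, a coarse client-side pre-check is enough. The 4-characters-per-token ratio below is a common heuristic, not the tokenizer Nylas actually uses, so treat it as an early warning rather than an exact limit:

```python
# Rough client-side guard against over-long prompts. The chars/4 ratio
# is an assumed heuristic, NOT the real tokenizer, so this is only a
# coarse pre-check; the API remains the source of truth.

MAX_TOKENS = 1000


def estimate_tokens(prompt: str) -> int:
    """Crude token estimate: roughly 4 characters per token."""
    return max(1, len(prompt) // 4)


def prompt_within_limit(prompt: str) -> bool:
    """Return True if the prompt is likely within the 1,000-token limit."""
    return estimate_tokens(prompt) <= MAX_TOKENS
```

Even with this check in place, handle the API's error response for long prompts, since the estimate can undercount.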