
How to prevent duplicate and conflicting agent replies

An agent that replies twice to the same message looks broken. An agent that replies to a message another agent already handled looks worse. Both happen more often than you’d expect, especially under load — webhooks can be delivered more than once, concurrent workers can race each other, and shared inboxes create ambiguity about who should respond.

This recipe covers the patterns that prevent it.

Three common sources:

  1. Webhook redelivery. Nylas guarantees at-least-once delivery. If your endpoint doesn’t return 200 fast enough, or there’s a transient network issue, you’ll get the same message.created notification again. If the agent processes both, it sends two replies.

  2. Concurrent workers. If your webhook handler runs on multiple instances (Lambda, ECS tasks, worker processes), two instances can pick up the same notification simultaneously and both start generating a reply.

  3. Shared inboxes. Two different agents — or an agent and a human — watching the same mailbox can both decide to respond to the same message. This is harder to solve at the application layer because the conflict isn’t a duplicate event, it’s a coordination problem.

Track which message IDs you’ve already processed. Before doing anything, check whether you’ve seen this one.

```javascript
app.post("/webhooks/nylas", async (req, res) => {
  // Acknowledge immediately so Nylas doesn't redeliver while we work.
  res.status(200).end();

  const event = req.body;
  if (event.type !== "message.created") return;

  const messageId = event.data.object.id;

  // Atomic check-and-set: returns false if the key already exists.
  const inserted = await db.processedMessages.setIfAbsent(messageId, {
    receivedAt: Date.now(),
  });
  if (!inserted) return; // Duplicate delivery -- already handled.

  // Safe to proceed -- this is the first time we're handling this message.
  await handleMessage(event.data.object);
});
```

The setIfAbsent operation must be atomic. In Postgres, that’s an INSERT ... ON CONFLICT DO NOTHING with a check on the returned row count. In Redis, it’s SET messageId 1 NX EX 86400. The TTL should be long enough that a redelivered webhook hours later still gets caught — 24 hours is a safe default.
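To make the semantics concrete, here's a minimal in-memory sketch of `setIfAbsent` with a TTL, where `true` means the key was newly inserted (the Redis `SET ... NX` convention). This only illustrates the logic — a real deployment must keep this state in shared storage (Postgres or Redis) so every worker sees the same table; a per-process `Map` like this one does not survive restarts or span instances.

```javascript
// In-memory illustration of atomic check-and-set with expiry.
const processed = new Map(); // messageId -> { receivedAt, expiresAt }

function setIfAbsent(messageId, value, ttlMs = 24 * 60 * 60 * 1000) {
  const now = Date.now();
  const existing = processed.get(messageId);
  // Treat expired entries as absent, mirroring Redis EX or a TTL sweep in SQL.
  if (existing && existing.expiresAt > now) {
    return false; // Already processed -- caller should bail.
  }
  processed.set(messageId, { ...value, expiresAt: now + ttlMs });
  return true; // First time seeing this message.
}
```

In Node this is atomic only because the event loop is single-threaded; the Postgres and Redis variants above are what make it atomic across processes.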

Even with webhook dedup, two concurrent workers can race past the check-and-set within the same millisecond window. A lock keyed on the email thread prevents both from generating a reply.

```javascript
async function handleMessage(msg) {
  // Acquire a lock on this thread. If another worker holds it, skip.
  const lock = await db.acquireLock(`thread:${msg.thread_id}`, {
    ttlMs: 30_000, // Release after 30 seconds if the worker crashes.
  });
  if (!lock.acquired) {
    // Another worker is already handling this thread.
    return;
  }

  try {
    // Double-check: has a reply already been sent since this message arrived?
    const thread = await nylas.threads.find({
      identifier: AGENT_GRANT_ID,
      threadId: msg.thread_id,
    });
    const latestMessage = thread.data.latestDraftOrMessage;
    if (latestMessage && latestMessage.from?.[0]?.email === AGENT_EMAIL) {
      // The agent already replied (from a prior worker or retry). Skip.
      return;
    }

    await generateAndSendReply(msg);
  } finally {
    await lock.release();
  }
}
```

The double-check inside the lock is important. Between the webhook arriving and the lock being acquired, another worker might have already finished. Checking the thread’s latest message catches this.
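The `db.acquireLock` helper above is assumed, not a library call. Here's an in-memory sketch of the semantics it needs — try-lock with a TTL so a crashed holder can't wedge the thread forever. In production the lock must live in shared storage visible to all workers, e.g. Redis `SET key token NX PX <ttl>` or a Postgres advisory lock; this per-process version only demonstrates the contract.

```javascript
// In-memory try-lock with expiry. Not safe across processes.
const locks = new Map(); // key -> expiry timestamp (ms)

function acquireLock(key, { ttlMs }) {
  const now = Date.now();
  const expiry = locks.get(key);
  if (expiry && expiry > now) {
    return { acquired: false }; // Someone else holds a live lock.
  }
  locks.set(key, now + ttlMs); // TTL frees the lock if the holder crashes.
  return {
    acquired: true,
    release: () => {
      locks.delete(key);
    },
  };
}
```

Note the choice to skip rather than wait when the lock is held: the other worker is handling the same message, so there is nothing left for this worker to do.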

The cleanest way to prevent conflicting replies is to eliminate the shared inbox. Agent Accounts make this trivial — each agent gets its own dedicated email address, its own inbox, and its own webhook stream. There’s no coordination problem because there’s no overlap.

If you’re running multiple agents, give each one its own Agent Account. Each agent’s webhook handler then only processes messages for its own grant_id, so no two agents are ever looking at the same message.

```javascript
// Each agent process only handles its own grant.
if (msg.grant_id !== MY_GRANT_ID) return;
```

When you do need shared access — a human reviewing what the agent sent, an ops team monitoring the inbox — use IMAP access for read-only oversight rather than having multiple automated writers on the same mailbox.

Even with dedup and locking, a bug in your agent logic can produce a reply storm — the agent responds, the response triggers another webhook (outbound fires message.created too), and the cycle repeats.

Guard against this with a per-thread send rate limit:

```javascript
async function sendReply(threadId, messageId, recipientEmail, body) {
  // Check how many messages the agent has sent on this thread recently.
  const recentSends = await db.recentAgentSends(threadId, { withinMinutes: 5 });
  if (recentSends >= 3) {
    // Something is wrong -- escalate instead of sending.
    await escalateToHuman(threadId, "reply rate limit hit");
    return;
  }

  await nylas.messages.send({
    identifier: AGENT_GRANT_ID,
    requestBody: {
      replyToMessageId: messageId,
      to: [{ email: recipientEmail }],
      body,
    },
  });

  await db.recordAgentSend(threadId);
}
```
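The `db.recentAgentSends` and `db.recordAgentSend` helpers above are placeholders. One way to implement them is a sliding-window counter, sketched here in memory; in production the log belongs in the same shared store as the dedup table so the limit holds across workers and restarts.

```javascript
// Sliding-window count of agent sends per thread. In-memory sketch only.
const sendLog = new Map(); // threadId -> array of send timestamps (ms)

function recordAgentSend(threadId, now = Date.now()) {
  const sends = sendLog.get(threadId) ?? [];
  sends.push(now);
  sendLog.set(threadId, sends);
}

function recentAgentSends(threadId, { withinMinutes }, now = Date.now()) {
  const cutoff = now - withinMinutes * 60 * 1000;
  return (sendLog.get(threadId) ?? []).filter((t) => t >= cutoff).length;
}
```

The `now` parameters exist only to make the window testable; callers can omit them.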

And always filter out the agent’s own messages at the top of your webhook handler:

```javascript
// First check in every handler -- skip messages from the agent itself.
const sender = msg.from?.[0]?.email;
if (sender === AGENT_EMAIL) return;
```

For Agent Accounts, rules can pre-sort inbound messages before the webhook fires, reducing the chance of conflicting logic. Route messages from known domains to specific folders, block spam at the SMTP layer, and auto-archive notifications that don’t need a reply.

```shell
# Create a rule that routes all messages from a known domain to a specific folder.
curl --request POST \
  --url "https://api.us.nylas.com/v3/rules" \
  --header "Authorization: Bearer <NYLAS_API_KEY>" \
  --header "Content-Type: application/json" \
  --data '{
    "match": [{ "field": "from.domain", "operator": "equals", "value": "noreply.example.com" }],
    "actions": [{ "action": "assign_to_folder", "value": "notifications" }],
    "description": "Route automated notifications to a separate folder"
  }'
```

Your webhook handler can then check which folder a message landed in and skip folders the agent shouldn’t reply to.
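That folder check can be a one-line guard at the top of the handler. This sketch assumes the webhook payload exposes the message's folder IDs as a `folders` array (as Nylas v3 message objects do); the folder names in the skip set are illustrative.

```javascript
// Hypothetical guard: skip messages routed to folders the agent ignores.
const NO_REPLY_FOLDERS = new Set(["notifications", "spam", "archive"]);

function shouldSkipByFolder(msg) {
  return (msg.folders ?? []).some((f) => NO_REPLY_FOLDERS.has(f));
}
```

Called early — alongside the self-sender check above — it keeps the agent from ever generating a reply for pre-sorted mail.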

  • Dedup and locking are both necessary. Dedup catches redelivered webhooks (same event, delivered twice). Locking catches concurrent workers (same event, processed simultaneously). You need both.
  • Set TTLs on your dedup records. A message ID you processed yesterday doesn’t need to stay in the dedup table forever. 24-48 hours is enough. After that, a webhook for the same message ID is almost certainly a bug, not a redelivery.
  • Log, don’t swallow. When you skip a message because it’s a duplicate or another worker holds the lock, log that it happened. Silent skips make debugging harder.
  • Test the race condition. Synthetic load testing with concurrent webhook deliveries is the only reliable way to verify your dedup and locking work. A single-threaded test won’t surface the problem.