Check cache before generating

Call GET /conversations/{id}/keypoints before POST. Cached results are free — POST uses an AI credit only when no cached result exists.

Handle rate limits with backoff

When you receive a 429, read the Retry-After header (seconds until reset) and wait before retrying. Avoid tight retry loops.
// Retry on 429, waiting the number of seconds the server asks for.
async function fetchWithRetry(url, headers, maxRetries = 3) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const res = await fetch(url, { headers });
    if (res.status !== 429) return res;
    // Retry-After is in seconds; fall back to 60 if the header is missing
    const retryAfter = parseInt(res.headers.get("Retry-After") || "60", 10);
    await new Promise((r) => setTimeout(r, retryAfter * 1000));
  }
  throw new Error("Rate limited: retries exhausted");
}

Use webhooks instead of polling

Instead of polling GET /conversations for new data, register a webhook for conversation.keypoints.ready and react to events in real time. This is faster and doesn’t count against your rate limit.

Store your API key securely

Use environment variables — never hardcode keys in source code or commit them to version control. If a key is compromised, revoke it immediately in Settings → API Keys and create a new one.
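One way to read the key from the environment and fail fast when it is missing. The variable name `MYAPP_API_KEY` is illustrative, not part of the API; use whatever name your deployment defines.

```javascript
// Build auth headers from an environment variable; throw early if unset
// so a misconfigured deployment fails at startup, not on the first request.
function buildHeaders(env = process.env) {
  const apiKey = env.MYAPP_API_KEY; // illustrative variable name
  if (!apiKey) {
    throw new Error("MYAPP_API_KEY is not set");
  }
  return { Authorization: `Bearer ${apiKey}` };
}
```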

Paginate large result sets

The list conversations endpoint returns up to 100 results per page. Use the limit and offset query parameters to iterate through all results.
// Page through all conversations, 100 at a time
const limit = 100;
let offset = 0;
const allConversations = [];

while (true) {
  const res = await fetch(
    `${BASE}/api/v1/conversations?limit=${limit}&offset=${offset}`,
    { headers }
  );
  const { data, pagination } = await res.json();
  allConversations.push(...data);
  // Stop when everything is collected, or when a page comes back empty
  // (guards against an infinite loop if the total is stale)
  if (data.length === 0 || allConversations.length >= pagination.total) break;
  offset += limit;
}