Background AI Jobs with Oban

The page request finishes in 30ms. The 4-second LLM call happens in an Oban worker, with retries, deduplication, and a cron schedule. The user gets a notification when it’s done.

This is how petal.build runs its daily Elixir news bot: a cron-scheduled Oban worker hits Perplexity once a day, parses the JSON response, and inserts deduped news items. Total work in your controller: zero. Total downtime if Perplexity flakes: zero (Oban retries with exponential backoff).
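A worker along those lines could look like this. This is a minimal sketch, not petal.build's actual code: the module names, the Perplexity client, and the `upsert_items/1` context function are all assumptions.

```elixir
defmodule MyApp.Workers.DailyNews do
  # Hypothetical daily news worker. Runs on a dedicated :ai queue;
  # `unique` skips the job if an identical one was enqueued in the
  # last 24 hours, so a cron hiccup can't double-fetch.
  use Oban.Worker,
    queue: :ai,
    max_attempts: 5,
    unique: [period: 86_400, fields: [:worker]]

  @impl Oban.Worker
  def perform(%Oban.Job{}) do
    with {:ok, raw} <- PerplexityClient.fetch_news(),
         {:ok, items} <- Jason.decode(raw) do
      # Assumed to dedupe on a unique index (e.g. on_conflict: :nothing).
      MyApp.News.upsert_items(items)
    end
    # Returning {:error, _} or raising makes Oban retry the job
    # with exponential backoff, up to max_attempts.
  end
end
```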

Why Oban for AI work?

  • Retries with exponential backoff handle provider flakes for free; most LLM 5xx errors are transient.
  • unique: [period: 60, fields: [:args]] prevents a job with the same args (e.g. the same prompt) from being enqueued twice within a 60-second window.
  • Dedicated :ai queue with a low concurrency cap keeps your provider rate-limit budget separate from regular jobs.
  • Cron plugin replaces Quantum entirely for daily LLM tasks — see the cost-tracking recipe to enforce a daily spend cap inside perform/1.
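Wiring those pieces together happens in config. A minimal sketch, with placeholder app and worker names: the :ai queue caps concurrency at 2 so AI jobs can't exhaust your provider rate limit, and the Cron plugin enqueues the worker once a day.

```elixir
# config/config.exs
config :my_app, Oban,
  repo: MyApp.Repo,
  # Low concurrency on :ai keeps the provider rate-limit budget
  # separate from regular background work on :default.
  queues: [default: 10, ai: 2],
  plugins: [
    # Enqueue the daily worker at 08:00 UTC (standard cron syntax).
    {Oban.Plugins.Cron,
     crontab: [
       {"0 8 * * *", MyApp.Workers.DailyNews}
     ]}
  ]
```

Because the cron entry just enqueues a normal Oban job, the worker still gets retries, uniqueness, and queue limits; there is no separate scheduler process to supervise, which is why it can replace Quantum here.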