AI Streaming Chat

A real-time streaming chat interface that shows AI responses as they’re generated, token by token. This creates a more engaging user experience than waiting for the entire response to arrive.

Key Features

  • Real-time token streaming using ReqLLM's streaming API
  • Message validation and sanitization
  • Reusable message components for clean templates (see the sketch after this list)
  • Multi-turn conversations with full context history
  • Comprehensive error handling with retry logic
  • Auto-scroll to latest message with visual indicators
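
The reusable message component might look something like the sketch below; the module, attribute, and class names are illustrative, not taken from the recipe's source. One component renders both finished messages and the reply still being streamed:

```elixir
defmodule MyAppWeb.ChatComponents do
  use Phoenix.Component

  # One bubble per chat message; `streaming` toggles a typing cursor so the
  # in-progress assistant reply renders through the same component.
  attr :role, :string, required: true, values: ~w(user assistant)
  attr :content, :string, required: true
  attr :streaming, :boolean, default: false

  def message(assigns) do
    ~H"""
    <div class={["message", @role]}>
      <p><%= @content %></p>
      <span :if={@streaming} class="cursor">▍</span>
    </div>
    """
  end
end
```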

How It Works

  1. User submits a message, which is validated for safety (see the first sketch after this list)
  2. Message is added to the conversation history
  3. A background Task is started to stream the AI response (second sketch)
  4. Each token chunk is sent to the LiveView via handle_info messages (third sketch)
  5. The streaming message updates in real-time using reusable components
  6. Once complete, the assistant message is added to the conversation history
  7. Full conversation context is maintained for multi-turn interactions
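
Step 1 could be as simple as the helper below; the helper name, length cap, and error messages are assumptions for illustration:

```elixir
# Hypothetical validation helper: trims the input, rejects blanks,
# and caps length before anything reaches the LLM.
defp validate_message(raw) when is_binary(raw) do
  trimmed = String.trim(raw)

  cond do
    trimmed == "" -> {:error, "Message cannot be empty"}
    String.length(trimmed) > 4_000 -> {:error, "Message is too long"}
    true -> {:ok, trimmed}
  end
end
```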
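Steps 2 and 3 might be wired up roughly as follows. `ReqLLM.stream_text/2` and `ReqLLM.StreamResponse.tokens/1` follow ReqLLM's documentation but should be checked against the version you install; the event name, model string, assign names, and `to_context/1` helper are assumptions:

```elixir
def handle_event("send_message", %{"message" => raw}, socket) do
  case validate_message(raw) do
    {:ok, content} ->
      messages = socket.assigns.messages ++ [%{role: :user, content: content}]
      lv = self()

      # Fire-and-forget task so the LiveView stays responsive while tokens
      # arrive; each chunk is relayed back to the LiveView process.
      Task.start(fn ->
        case ReqLLM.stream_text("anthropic:claude-3-5-haiku", to_context(messages)) do
          {:ok, response} ->
            response
            |> ReqLLM.StreamResponse.tokens()
            |> Enum.each(&send(lv, {:chunk, &1}))

            send(lv, :stream_done)

          {:error, reason} ->
            send(lv, {:stream_error, reason})
        end
      end)

      {:noreply, assign(socket, messages: messages, streaming_message: "")}

    {:error, msg} ->
      {:noreply, put_flash(socket, :error, msg)}
  end
end

# Hypothetical: translate stored history into whatever context shape your
# ReqLLM version expects (it ships ReqLLM.Context helpers for multi-turn).
defp to_context(messages), do: Enum.map(messages, &{&1.role, &1.content})
```

In production the task would normally run under a Task.Supervisor, or via LiveView's start_async/3, so it is tied to the socket's lifecycle and can be cancelled or retried on error.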
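Steps 4 through 7 then reduce to a few handle_info/2 clauses; the assign names match the sketches above:

```elixir
# Steps 4-5: each token extends the in-progress assistant message, which
# re-renders through the same reusable message component.
def handle_info({:chunk, token}, socket) do
  {:noreply, update(socket, :streaming_message, &(&1 <> token))}
end

# Steps 6-7: on completion, fold the finished reply into the history so the
# next request carries the full multi-turn context.
def handle_info(:stream_done, socket) do
  assistant = %{role: :assistant, content: socket.assigns.streaming_message}

  {:noreply,
   socket
   |> update(:messages, &(&1 ++ [assistant]))
   |> assign(:streaming_message, nil)}
end

# Errors surface as a flash; a retry button could resend the last user turn.
def handle_info({:stream_error, reason}, socket) do
  {:noreply,
   socket
   |> assign(:streaming_message, nil)
   |> put_flash(:error, "Streaming failed: #{inspect(reason)}")}
end
```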