# Aether Chat — LLM Reference Document

> This file describes Aether Chat in detail for use by language models, AI agents,
> search crawlers, and developer tooling. It covers architecture, features, data
> formats, APIs, and integration points.

## Overview

**Aether Chat** is a fully client-side, privacy-first AI chat assistant built on the [Nostr](https://nostr.com/) protocol stack. It runs entirely in the browser — no server-side backend, no user accounts, no telemetry. All conversations are stored locally in the browser's IndexedDB.

- **Live URL:**
- **Repository:** nostr://npub14rg4vrt2v374q95ezeeydu3hkdhmzglcj950mggacap4x0lv0gyq04wun7/relay.ngit.dev/aether-chat-1
- **Built with:** [Shakespeare](https://shakespeare.diy) — AI-powered web development platform
- **License:** MIT

---

## Technology Stack

| Layer | Technology |
|---|---|
| UI framework | React 18 with hooks and concurrent rendering |
| Language | TypeScript 5 (strict mode) |
| Styling | TailwindCSS 3 + shadcn/ui component library |
| Build tool | Vite 6 with esbuild |
| Nostr integration | Nostrify (@nostrify/nostrify, @nostrify/react) |
| Data fetching | TanStack Query v5 |
| Routing | React Router v6 |
| Local storage | IndexedDB via `idb` library |
| Markdown rendering | react-markdown + remark-gfm + rehype-highlight |
| Syntax highlighting | highlight.js (github-dark theme) |
| PDF export | jsPDF v4 |
| QR codes | qrcode (canvas-based, pure JS) |
| AI media generation | @fal-ai/client |
| Font | Inter Variable (@fontsource-variable/inter) |
| PWA | Custom service worker (sw.js) + Web App Manifest |

---

## Core Features

### 1. AI Chat (OpenAI-compatible APIs)

Aether Chat connects to any OpenAI-compatible API endpoint. The user configures one or more API endpoints in Settings. Multiple configurations can be saved and switched between.
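A minimal TypeScript sketch of what a saved endpoint configuration and a chat request body might look like. The field names (`baseUrl`, `auth`, `model`, etc.) are illustrative assumptions, not Aether's actual settings schema; only the `stream` and `stream_options` values come from the request format this document describes.

```typescript
// Hypothetical shape of one saved endpoint configuration.
// Field names are assumptions for illustration only.
interface EndpointConfig {
  name: string;                 // e.g. "Groq"
  baseUrl: string;              // e.g. "https://api.groq.com/openai/v1"
  auth: "api-key" | "nip98";    // the two auth methods Aether supports
  apiKey?: string;              // only used when auth === "api-key"
  model: string;                // model ID sent in the request body
}

interface ChatMessage {
  role: "system" | "user" | "assistant" | "tool";
  content: string;
}

// Build the body for a streaming /chat/completions request,
// using the stream settings described in this document.
function buildChatRequest(cfg: EndpointConfig, messages: ChatMessage[]) {
  return {
    model: cfg.model,
    messages,
    stream: true,
    stream_options: { include_usage: true },
  };
}

const body = buildChatRequest(
  {
    name: "Example",
    baseUrl: "https://api.openai.com/v1",
    auth: "api-key",
    apiKey: "sk-placeholder", // hypothetical key, illustration only
    model: "gpt-4o-mini",
  },
  [{ role: "user", content: "Hello" }],
);
// body.stream is true and body.stream_options.include_usage is true
```

Switching configurations then amounts to selecting a different `EndpointConfig` before building the request.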
**Supported authentication methods:**

- `api-key` — Standard `Authorization: Bearer <key>` header
- `nip98` — NIP-98 HTTP Auth using the user's Nostr signing key (no API key needed)

**Supported providers (tested):**

- OpenAI (api.openai.com)
- Groq (api.groq.com/openai/v1)
- OpenRouter (openrouter.ai/api/v1)
- DeepSeek (api.deepseek.com/v1)
- Mistral (api.mistral.ai/v1)
- Together AI (api.together.xyz/v1)
- Any other OpenAI-compatible endpoint

**Request format:** Standard OpenAI `/chat/completions` with `stream: true` and `stream_options: { include_usage: true }`. Server-Sent Events (SSE) streaming is parsed chunk-by-chunk. Tool calls are accumulated from streaming deltas.

**Agentic loop:** When the model requests tool calls, Aether executes them and feeds the results back as `role: "tool"` messages, then calls the model again. This loop runs for up to 8 rounds per user message.

### 2. Tools / Function Calling

The following tools are exposed to the model and executed client-side:

| Tool name | Trigger | Implementation |
|---|---|---|
| `web_search` | User asks about current events/facts | Bing RSS feed via CORS proxy; Wikipedia fallback |
| `nostr_search` | User asks to search Nostr | NIP-50 full-text search on relay group |
| `generate_image` | User asks to create/draw an image | fal.ai REST API (`@fal-ai/client`) |
| `generate_video` | User asks to create a video | fal.ai REST API |
| `deep_research` | User asks for deep research | Multi-step: plan → search loop → synthesise |

Each tool execution emits one or more **step messages** — persistent `assistant` messages stored in IndexedDB with a `stepType` field — so the user sees every step of the process in the chat timeline.
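The "tool calls accumulated from streaming deltas" step can be sketched as a pure function. This is an illustrative reconstruction, not Aether's actual code: the types are trimmed to just the fields needed for tool calls, and real SSE chunks carry content deltas and usage data as well.

```typescript
// One tool_call delta as it appears inside an OpenAI streaming chunk
// (choices[0].delta.tool_calls[]). Fields arrive spread across chunks.
interface ToolCallDelta {
  index: number;                                   // which call this fragment belongs to
  id?: string;                                     // sent once, on the first fragment
  function?: { name?: string; arguments?: string } // name and JSON args arrive piecewise
}

interface ToolCall {
  id: string;
  name: string;
  arguments: string; // JSON string, built up chunk-by-chunk
}

// Merge streaming fragments into complete tool calls, keyed by index.
function accumulateToolCalls(deltas: ToolCallDelta[]): ToolCall[] {
  const calls: ToolCall[] = [];
  for (const d of deltas) {
    const call = (calls[d.index] ??= { id: "", name: "", arguments: "" });
    if (d.id) call.id = d.id;
    if (d.function?.name) call.name += d.function.name;
    if (d.function?.arguments) call.arguments += d.function.arguments;
  }
  return calls;
}

// Example: three SSE fragments that together form one web_search call.
const calls = accumulateToolCalls([
  { index: 0, id: "call_1", function: { name: "web_search" } },
  { index: 0, function: { arguments: '{"query":' } },
  { index: 0, function: { arguments: '"nostr"}' } },
]);
// calls[0] → { id: "call_1", name: "web_search", arguments: '{"query":"nostr"}' }
```

Once accumulated, each call's `arguments` string is parsed as JSON, the tool runs client-side, and its result is appended as a `role: "tool"` message for the next round of the agentic loop.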
### 3. Web Search

**Primary:** Bing RSS feed (`https://www.bing.com/search?q=...&format=rss`)

- No API key required
- No bot detection (RSS is served to automated readers)
- Proxied through `https://proxy.shakespeare.diy/?url=`
- Returns title, URL, description snippet

**Fallback:** Wikipedia search API

- `https://en.wikipedia.org/w/api.php?action=query&list=search&...&origin=*`
- CORS-native, no proxy needed
- Used when Bing RSS fails or returns empty

### 4. Nostr Search (NIP-50)

Nostr full-text search uses the NIP-50 `search` filter field on search-capable relays.

**Search relays:**

- `wss://search.nos.today`
- `wss://nostr.wine`
- `wss://relay.noswhere.com`
- `wss://gleasonator.dev/relay`

Query: `[{ kinds: [1], search: "<query>", limit: 5 }]`

**NIP-19 identifier lookup:** When the search query begins with `npub1`, `note1`, `nevent1`, `nprofile1`, or `naddr1`, Aether decodes the identifier using `nip19.decode()` from nostr-tools and performs a direct relay lookup instead:

| Identifier | Filter used |
|---|---|
| `npub1` / `nprofile1` | `{ kinds: [0, 1], authors: [pubkey] }` |
| `note1` | `{ ids: [eventId] }` |
| `nevent1` | `{ ids: [eventId] }` |
| `naddr1` | `{ kinds: [kind], authors: [pubkey], '#d': [identifier] }` |

When a NIP-19 lookup is used, the AI is automatically instructed to summarise the retrieved content for the user.

### 5. AI Media Generation (fal.ai)

Image and video generation uses the fal.ai platform via `@fal-ai/client`.

**Authentication:** fal.ai API key (stored in settings, never sent to any server except fal.ai). NIP-98 auth is listed as an option but fal.ai uses their own key system.
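The NIP-19 lookup table can be sketched as a dispatch over the decoded identifier. This is an illustrative sketch only: the real app first calls `nip19.decode()` from nostr-tools on the bech32 string, while here a pre-decoded value (shaped like nostr-tools' decode output) is passed in directly so the filter mapping itself is visible.

```typescript
// Decoded NIP-19 results, mirroring the shapes nostr-tools returns:
// npub/note decode to a bare hex string; the others decode to objects.
type Decoded =
  | { type: "npub"; data: string }                                   // hex pubkey
  | { type: "nprofile"; data: { pubkey: string } }
  | { type: "note"; data: string }                                   // hex event id
  | { type: "nevent"; data: { id: string } }
  | { type: "naddr"; data: { kind: number; pubkey: string; identifier: string } };

// Map a decoded identifier to the relay filter used for the direct lookup.
function filterForIdentifier(d: Decoded): Record<string, unknown> {
  switch (d.type) {
    case "npub":     return { kinds: [0, 1], authors: [d.data] };
    case "nprofile": return { kinds: [0, 1], authors: [d.data.pubkey] };
    case "note":     return { ids: [d.data] };
    case "nevent":   return { ids: [d.data.id] };
    case "naddr":    return { kinds: [d.data.kind], authors: [d.data.pubkey], "#d": [d.data.identifier] };
  }
}

// Hypothetical event id, for illustration only.
const noteFilter = filterForIdentifier({ type: "note", data: "abc123" });
// noteFilter → { ids: ["abc123"] }
```

The resulting filter is then sent to the relay group in place of a NIP-50 `search` query, and the AI is prompted to summarise whatever events come back.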
**Default image models:**

- `fal-ai/flux/schnell` (fast)
- `fal-ai/flux/dev` (quality)
- `fal-ai/flux-pro`, `fal-ai/flux-pro/v1.1`
- `fal-ai/stable-diffusion-v3-medium`
- `fal-ai/aura-flow`, `fal-ai/hyper-sdxl`
- Custom model ID input available for any fal.ai model

**Default video models:**

- `fal-ai/kling-video/v1/standard/text-to-video`
- `fal-ai/kling-video/v1.6/standard/text-to-video`
- `fal-ai/minimax-video/image-to-video`
- `fal-ai/ltx-video`, `fal-ai/cogvideox-5b`
- Custom model ID input available

Generated media is attached to step messages and the final answer message. Images are rendered inline in the chat. Videos get a `