In this guide you will build a personalized audio health briefing — a spoken summary that turns raw Sahha data into something that sounds like a short podcast clip. The flow has three steps: query recent health data from your database, turn it into a narrative with an LLM, and convert that narrative to speech with ElevenLabs.
By the end you will have each step working independently, plus a bonus section that wires them together into a single API endpoint.
Prerequisites
Before you begin, make sure you have Sahha webhook data streaming into your database — follow the guide for your platform:
Stream health events to Supabase with Edge Functions
Stream health events to Convex with HTTP Actions
Stream health events to Firebase with Cloud Functions
Set up your LLM
You need an LLM to turn structured health data into a natural spoken narrative. Any provider with a text generation API works — OpenAI, Anthropic, Google, etc.
This guide uses Gemini 2.0 Flash as the concrete example. It is fast, inexpensive, and the free tier is generous enough for development.
Set up ElevenLabs
ElevenLabs converts the LLM-generated narrative into natural-sounding speech.
- Create an account at elevenlabs.io and grab your API key from the Profile settings page
- Pick a voice from the Voice Library and copy its Voice ID
This guide uses the voice ID S9GPGBaMND8XWwwzxQXp and the eleven_multilingual_v2 model, which supports 29 languages.
Set up environment variables
You need three environment variables. Where you set them depends on your platform:
Set secrets with the Supabase CLI:
npx supabase secrets set LLM_API_KEY=your_gemini_api_key
npx supabase secrets set ELEVENLABS_API_KEY=your_elevenlabs_api_key
npx supabase secrets set ELEVENLABS_VOICE_ID=S9GPGBaMND8XWwwzxQXp
In the Convex dashboard, go to Settings → Environment Variables and add:
| Variable | Value |
|---|---|
| LLM_API_KEY | Your Gemini API key |
| ELEVENLABS_API_KEY | Your ElevenLabs API key |
| ELEVENLABS_VOICE_ID | Your voice ID (e.g. S9GPGBaMND8XWwwzxQXp) |
Set secrets with the Firebase CLI:
firebase functions:secrets:set LLM_API_KEY
firebase functions:secrets:set ELEVENLABS_API_KEY
firebase functions:secrets:set ELEVENLABS_VOICE_ID
Query recent health data
Fetch the latest scores and archetypes for a given user. There are two important things to know:
- Scores include a type (e.g. activity, sleep, readiness), a numeric value, and factor breakdowns — extract these fields rather than passing raw JSON to the LLM
- Archetypes need a separate, dedicated query because they are less frequent than score events and get buried in a generic time-ordered query. Deduplicate by name so you only keep the latest value per archetype
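The dedup rule is the same on every platform, so here it is as a standalone sketch (the event type below is illustrative, not Sahha's official shape; events are assumed newest-first, matching the descending queries used throughout this guide):

```typescript
// Illustrative event shape — only the field the dedup logic needs.
type ArchetypeEvent = { payload?: { name?: string } };

// Keep only the first occurrence of each archetype name. Because queries
// order events newest-first, "first occurrence" means "latest value".
function dedupeByName<T extends ArchetypeEvent>(events: T[]): T[] {
  const seen = new Set<string>();
  return events.filter((e) => {
    const name = e.payload?.name;
    if (name && !seen.has(name)) {
      seen.add(name); // first time we see this name — keep it
      return true;
    }
    return false; // duplicate or unnamed event — drop it
  });
}
```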
import { createClient } from "@supabase/supabase-js";
const supabase = createClient(
Deno.env.get("SUPABASE_URL")!,
Deno.env.get("SUPABASE_SERVICE_ROLE_KEY")!
);
// Fetch recent score events
async function getRecentScores(externalId: string) {
const yesterday = new Date(
Date.now() - 24 * 60 * 60 * 1000
).toISOString();
const { data: events, error } = await supabase
.from("sahha_events")
.select("*")
.eq("external_id", externalId)
.eq("event_type", "ScoreCreatedIntegrationEvent")
.gte("created_at", yesterday)
.order("created_at", { ascending: false })
.limit(50);
if (error) throw error;
return events;
}
// Fetch archetypes — deduplicated by name
async function getRecentArchetypes(externalId: string) {
const { data: events, error } = await supabase
.from("sahha_events")
.select("*")
.eq("external_id", externalId)
.eq("event_type", "ArchetypeCreatedIntegrationEvent")
.order("created_at", { ascending: false })
.limit(30);
if (error) throw error;
// Keep only the latest event per archetype name
const seen = new Set<string>();
return events.filter((e) => {
const name = e.payload?.name;
if (name && !seen.has(name)) {
seen.add(name);
return true;
}
return false;
});
}
In Convex, define these as internal queries in convex/queries.ts:
import { internalQuery } from "./_generated/server";
import { v } from "convex/values";
// Fetch recent events for a user (scores, biomarkers, etc.)
export const getByExternalId = internalQuery({
args: {
externalId: v.string(),
limit: v.optional(v.number()),
},
handler: async (ctx, args) => {
const limit = args.limit ?? 50;
const events = await ctx.db
.query("healthEvents")
.withIndex("by_user_type", (q) =>
q.eq("externalId", args.externalId)
)
.order("desc")
.take(limit);
return events.map((e) => ({
eventType: e.eventType,
payload: e.payload,
receivedAt: e.receivedAt,
}));
},
});
// Fetch archetypes — deduplicated by name
export const getArchetypesByExternalId = internalQuery({
args: { externalId: v.string() },
handler: async (ctx, args) => {
const events = await ctx.db
.query("healthEvents")
.withIndex("by_user_type", (q) =>
q
.eq("externalId", args.externalId)
.eq("eventType", "ArchetypeCreatedIntegrationEvent")
)
.order("desc")
.take(30);
// Keep only the latest per archetype name
const seen = new Set<string>();
const unique = [];
for (const e of events) {
const name = (e.payload as Record<string, unknown>)
?.name as string;
if (name && !seen.has(name)) {
seen.add(name);
unique.push({
eventType: e.eventType,
payload: e.payload,
});
}
}
return unique;
},
});
In Firebase, query Firestore:
import { getFirestore } from "firebase-admin/firestore";
const db = getFirestore();
// Fetch recent score events
async function getRecentScores(externalId: string) {
const yesterday = new Date(
Date.now() - 24 * 60 * 60 * 1000
).toISOString();
const snapshot = await db
.collection("sahha_events")
.doc(externalId)
.collection("ScoreCreatedIntegrationEvent")
.where("createdAtUtc", ">=", yesterday)
.orderBy("createdAtUtc", "desc")
.limit(50)
.get();
return snapshot.docs.map((doc) => doc.data());
}
// Fetch archetypes — deduplicated by name
async function getRecentArchetypes(externalId: string) {
const snapshot = await db
.collection("sahha_events")
.doc(externalId)
.collection("ArchetypeCreatedIntegrationEvent")
.orderBy("createdAtUtc", "desc")
.limit(30)
.get();
const events = snapshot.docs.map((doc) => doc.data());
// Keep only the latest event per archetype name
const seen = new Set<string>();
return events.filter((e) => {
const name = e.payload?.name;
if (name && !seen.has(name)) {
seen.add(name);
return true;
}
return false;
});
}
Build the prompt
Turn the raw Sahha data into compact, readable context for the LLM. This step is the same regardless of which database you use.
First, format the queried data into a summary string:
function buildDataSummary(
scoreEvents: any[],
archetypeEvents: any[]
): string {
// Build score summary — extract type, value, and factor breakdowns
const scoreSummary = scoreEvents
.filter(
(e) =>
e.eventType === "ScoreCreatedIntegrationEvent"
)
.map((e) => {
const p = e.payload;
const scoreType = p?.type ?? "unknown";
const score = p?.score;
const factors = Array.isArray(p?.factors)
? p.factors
.map(
(f: any) =>
`${f.name}: ${f.score} (${f.state})`
)
.join(", ")
: "";
return `[${scoreType}] score=${score} | ${factors}`;
})
.join("\n");
// Build archetype summary (already deduplicated by the query)
const archetypeSummary = archetypeEvents
.map((e) => {
const p = e.payload;
return `[${p?.name}] ${p?.value} (${p?.periodicity})`;
})
.join("\n");
return [
scoreSummary && `Scores:\n${scoreSummary}`,
archetypeSummary && `Archetypes:\n${archetypeSummary}`,
]
.filter(Boolean)
.join("\n\n");
}
Then build the prompt. This is a 5-7 sentence structure designed to produce 120–150 words that sound natural when spoken aloud. Each part has a specific job:
- Opener — a consistent opening line so users know the briefing has started. “Let’s take a look at your health data today.”
- One strength — the most interesting positive Sahha signal with a real number, plus a plain-language explanation of any technical terms. “Your readiness is sitting at 87%, which means your body is primed to perform.”
- One gap — the weakest signal and why it matters in everyday terms. “But your circadian alignment — how consistent your body clock is — is lower than usual, which can affect your energy and focus.”
- Context — a trend or pattern if available (improving, declining, or stable).
- One specific action — a low-friction action (under 15 minutes) tied directly to the gap. “Try setting a consistent bedtime this week — even 15 minutes closer to your usual time can make a noticeable difference.”
- Sign-off — a brief encouraging close.
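Since the opener is meant to be word-for-word identical every day, you could also enforce it in code after generation rather than trusting the model alone. A small optional sketch (ensureOpener is an illustrative helper, not part of the guide's minimal flow):

```typescript
// The canonical first sentence from the prompt structure.
const OPENER = "Let's take a look at your health data today.";

// Prepend the opener if the model failed to start with it verbatim.
function ensureOpener(narrative: string): string {
  const trimmed = narrative.trim();
  return trimmed.startsWith(OPENER) ? trimmed : `${OPENER} ${trimmed}`;
}
```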
Here is the function that assembles the instructions, structure, data hints, and health data into a single prompt:
function buildBriefingPrompt(dataSummary: string): string {
const instructions = [
"Write exactly ONE daily health briefing.",
"It will be read aloud as TTS audio so it must sound natural and conversational when spoken.",
"STRICT word count: 120-150 words total.",
"5-7 sentences.",
"Use second person ('you').",
"Use ONLY the facts, labels, and numbers in the Health data. Do NOT invent or estimate anything.",
"Use at most TWO numbers total.",
"Percent conversion:",
"- Some metrics in Health data will include markers like '(0–1)' or '(ratio)' or '(normalized)' or '(percent-like)'. If a value has any of these markers AND is between 0 and 1, convert it to a percentage for speech (0.87 -> 87%).",
"- Scores may also include these markers; treat them the same way.",
"- If the Health data already includes a '%' symbol, keep it as-is and do not re-convert.",
"- Do NOT convert decimals for unit-based metrics (hours, minutes, steps, bpm, ms, °C/°F, kg, etc.).",
"Plain-language requirement:",
"- If you mention any Factor/Biomarker term that could be unclear, immediately add a short lay explanation (4–10 words) after it, using an em dash.",
"- Example format: 'circadian alignment—how consistent your body clock is'.",
"- Keep explanations simple and non-medical.",
"Avoid medical/clinical language. No diagnosis, no prescriptions, no fear.",
"Output ONLY the briefing text — no numbering, no alternatives, no preamble.",
].join(" ");
const structure = [
"Structure (5-7 sentences):",
"1) Opener: always start with exactly 'Let's take a look at your health data today.' Then move to the next sentence.",
"2) One strength: pick the most interesting positive Sahha signal (prefer a Score or marked percent-like metric) and optionally cite ONE related Factor/Biomarker with a lay explanation. Include ONE number if available (apply percent conversion rules).",
"3) One gap: pick the weakest Sahha signal (prefer a Score or marked percent-like metric) and cite ONE driver Factor/Biomarker with a lay explanation. Explain why it matters in everyday terms (energy, mood, focus, recovery).",
"4) Context: mention a trend or pattern if available (improving, declining, or stable over recent days). If no trend data exists, skip this sentence.",
"5) One action: give ONE specific, low-friction action (<15 minutes) tied directly to the gap, with a concrete payoff.",
"6) Sign-off: brief encouraging close, 1 sentence.",
"Selection rules:",
"- Prefer: Scores + Factors; use Biomarkers when they clarify the driver.",
"- Do NOT reference or use archetype labels (e.g. Champion, Achiever). Never greet the user with an archetype name.",
"- If Trends/Comparisons are present, you may include ONE trend word (improving/declining/stable) with no extra numbers.",
"- Create contrast: choose a strength and a gap from different domains when possible.",
"- If key data is missing, say 'Based on what we have today…' and continue without guessing.",
].join("\n");
const dataHints = [
"DATA HINTS:",
"- Values marked with '(0–1)', '(ratio)', '(normalized)', or '(percent-like)' should be spoken as percentages if between 0 and 1.",
"- When you see a complex term, add a lay definition using an em dash right after it.",
].join("\n");
return [
instructions,
structure,
dataHints,
`Health data:\n${dataSummary}`,
"Briefing:",
].join("\n\n");
}
Generate the narrative
Send the prompt to an LLM. This example uses Gemini 2.0 Flash — fast and inexpensive. Any LLM with a text generation API will work (OpenAI, Anthropic, etc.) — just adjust the request shape.
async function generateNarrative(
prompt: string,
apiKey: string
): Promise<string> {
const response = await fetch(
"https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:generateContent",
{
method: "POST",
headers: {
"Content-Type": "application/json",
"x-goog-api-key": apiKey,
},
body: JSON.stringify({
contents: [{ parts: [{ text: prompt }] }],
generationConfig: { temperature: 0.7 },
}),
}
);
if (!response.ok) {
const errText = await response.text();
throw new Error(
`LLM request failed (${response.status}): ${errText}`
);
}
const data = await response.json();
return (
data?.candidates?.[0]?.content?.parts?.[0]?.text ??
"Unable to generate briefing."
);
}
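Free-tier LLM endpoints return 429s under load, so you may want to wrap the call in a retry with exponential backoff. This is an optional sketch, not part of the guide's minimal flow:

```typescript
// Retry an async operation with exponential backoff (500ms, 1s, 2s, ...).
async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 500
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt < maxAttempts - 1) {
        // Wait before the next attempt, doubling the delay each time.
        await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
      }
    }
  }
  throw lastError;
}

// Usage with the function above:
// const narrative = await withRetry(() => generateNarrative(prompt, apiKey));
```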
Here is an example of what a generated briefing looks like:
Let’s take a look at your health data today. Your readiness is looking strong at 87%, which means your body is primed to perform. However, your circadian alignment — how consistent your body clock is — has dipped a bit, and that can affect your energy and focus throughout the day. Things have been trending stable over the past few days, so this is a good time to lock in a routine. Try setting a consistent bedtime tonight, even just 15 minutes closer to your usual time — it can make a real difference in how you feel tomorrow. You’re on a solid track, keep it going.
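The prompt asks for a strict 120-150 words, but LLMs do not always comply. A hedged sketch of a guard you could run before the TTS step (function names here are illustrative):

```typescript
// Count whitespace-separated words in the narrative.
function wordCount(text: string): number {
  return text.trim().split(/\s+/).filter(Boolean).length;
}

// True when the narrative fits the 120-150 word target from the prompt.
function isWithinTarget(text: string, min = 120, max = 150): boolean {
  const n = wordCount(text);
  return n >= min && n <= max;
}
```

If the check fails you could regenerate once with a stricter reminder appended to the prompt, or simply accept the result; since ElevenLabs bills by characters, the guard mainly protects your TTS quota from runaway output.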
Convert to speech with ElevenLabs
Send the narrative to ElevenLabs to get back an MP3 audio file:
async function textToSpeech(
text: string,
voiceId: string,
apiKey: string
): Promise<ArrayBuffer> {
const response = await fetch(
`https://api.elevenlabs.io/v1/text-to-speech/${voiceId}`,
{
method: "POST",
headers: {
"Content-Type": "application/json",
"xi-api-key": apiKey,
},
body: JSON.stringify({
text,
model_id: "eleven_multilingual_v2",
voice_settings: {
stability: 0.5,
similarity_boost: 0.75,
speed: 1.1,
},
}),
}
);
if (!response.ok) {
throw new Error(
`ElevenLabs request failed: ${response.status}`
);
}
return response.arrayBuffer();
}
To find a voiceId, go to the ElevenLabs Voice Library and copy the ID from your chosen voice. The eleven_multilingual_v2 model supports 29 languages — use eleven_monolingual_v1 if you only need English and want lower latency.
Tip — hard-code a profile ID for demos. If you are building a demo or prototype, you can skip user authentication and hard-code the externalId directly in your frontend. Find available profile IDs by checking your Sahha dashboard or querying your database for distinct externalId values. For example in your app:
const PROFILE_ID = "your-sahha-profile-id"
Replace this with the externalId you used when creating the Sahha profile. This lets you test the full briefing flow without wiring up authentication first.
Bonus: Wire it up as an API endpoint
Combine all the steps above into a single POST /daily-briefing endpoint that accepts an externalId and returns MP3 audio.
Create a new Edge Function at supabase/functions/daily-briefing/index.ts:
import { createClient } from "@supabase/supabase-js";
const supabase = createClient(
Deno.env.get("SUPABASE_URL")!,
Deno.env.get("SUPABASE_SERVICE_ROLE_KEY")!
);
const LLM_API_KEY = Deno.env.get("LLM_API_KEY")!;
const ELEVENLABS_API_KEY = Deno.env.get("ELEVENLABS_API_KEY")!;
const ELEVENLABS_VOICE_ID =
Deno.env.get("ELEVENLABS_VOICE_ID") ?? "S9GPGBaMND8XWwwzxQXp";
Deno.serve(async (req) => {
// CORS preflight
if (req.method === "OPTIONS") {
return new Response(null, {
status: 204,
headers: {
"Access-Control-Allow-Origin": "*",
"Access-Control-Allow-Methods": "POST, OPTIONS",
"Access-Control-Allow-Headers": "Content-Type",
},
});
}
if (req.method !== "POST") {
return new Response("Method not allowed", { status: 405 });
}
const corsHeaders = {
"Access-Control-Allow-Origin": "*",
"Access-Control-Allow-Methods": "POST, OPTIONS",
"Access-Control-Allow-Headers": "Content-Type",
};
try {
const { externalId } = await req.json();
if (!externalId) {
return new Response(
JSON.stringify({ error: "externalId is required" }),
{
status: 400,
headers: { "Content-Type": "application/json", ...corsHeaders },
}
);
}
// 1. Query scores
const yesterday = new Date(
Date.now() - 24 * 60 * 60 * 1000
).toISOString();
const { data: scoreEvents } = await supabase
.from("sahha_events")
.select("*")
.eq("external_id", externalId)
.eq("event_type", "ScoreCreatedIntegrationEvent")
.gte("created_at", yesterday)
.order("created_at", { ascending: false })
.limit(50);
// 2. Query archetypes (deduplicated)
const { data: allArchetypes } = await supabase
.from("sahha_events")
.select("*")
.eq("external_id", externalId)
.eq("event_type", "ArchetypeCreatedIntegrationEvent")
.order("created_at", { ascending: false })
.limit(30);
const seen = new Set<string>();
const archetypeEvents = (allArchetypes ?? []).filter((e: any) => {
const name = e.payload?.name;
if (name && !seen.has(name)) {
seen.add(name);
return true;
}
return false;
});
// 3. Build prompt and generate narrative
const dataSummary = buildDataSummary(scoreEvents ?? [], archetypeEvents);
const prompt = buildBriefingPrompt(dataSummary);
const narrative = await generateNarrative(prompt, LLM_API_KEY);
// 4. Convert to speech
const audio = await textToSpeech(
narrative,
ELEVENLABS_VOICE_ID,
ELEVENLABS_API_KEY
);
return new Response(audio, {
headers: {
"Content-Type": "audio/mpeg",
"X-Narrative-Text": encodeURIComponent(narrative),
"Access-Control-Expose-Headers": "X-Narrative-Text",
...corsHeaders,
},
});
} catch (err) {
return new Response(
JSON.stringify({ error: "Internal server error" }),
{
status: 500,
headers: { "Content-Type": "application/json", ...corsHeaders },
}
);
}
});
// --- Helper functions (buildDataSummary, buildBriefingPrompt,
// generateNarrative, textToSpeech) go here — same as the
// functions shown in the earlier sections of this guide ---
Deploy with:
npx supabase functions deploy daily-briefing --no-verify-jwt
Download the starter project — it includes the Convex backend, a React frontend with an audio player, and all project configuration already set up.
Download starter project
After downloading:
- Unzip and open the project
- Install dependencies:
cd daily-health-briefing
npm install
- Start the Convex dev server:
npx convex dev
- Set your environment variables:
npx convex env set LLM_API_KEY your_gemini_api_key
npx convex env set ELEVENLABS_API_KEY your_elevenlabs_api_key
npx convex env set ELEVENLABS_VOICE_ID S9GPGBaMND8XWwwzxQXp
npx convex env set SAHHA_WEBHOOK_SECRET your_sahha_webhook_secret
- Set your Sahha profile ID in src/App.tsx:
const PROFILE_ID = "your-sahha-profile-id"
- Start the frontend:
npm run dev
and open http://localhost:5173
Verify that webhook events are arriving in the healthEvents table in the Convex dashboard. Create convex/dailyBriefing.ts:
import { httpAction } from "./_generated/server";
import { internal } from "./_generated/api";
const corsHeaders = {
"Access-Control-Allow-Origin": "*",
"Access-Control-Allow-Methods": "POST, OPTIONS",
"Access-Control-Allow-Headers": "Content-Type",
};
export const generateBriefing = httpAction(
async (ctx, request) => {
// CORS preflight
if (request.method === "OPTIONS") {
return new Response(null, {
status: 204,
headers: corsHeaders,
});
}
try {
const { externalId } = await request.json();
if (!externalId) {
return new Response(
JSON.stringify({
error: "externalId is required",
}),
{
status: 400,
headers: {
"Content-Type": "application/json",
...corsHeaders,
},
}
);
}
// 1. Query scores and archetypes in parallel
const [events, archetypeEvents] =
await Promise.all([
ctx.runQuery(
internal.queries.getByExternalId,
{ externalId, limit: 50 }
),
ctx.runQuery(
internal.queries.getArchetypesByExternalId,
{ externalId }
),
]);
if (!events?.length && !archetypeEvents.length) {
return new Response(
JSON.stringify({
error: "No health data found for this user",
}),
{
status: 404,
headers: {
"Content-Type": "application/json",
...corsHeaders,
},
}
);
}
// 2. Build prompt and generate narrative
const dataSummary = buildDataSummary(
events ?? [],
archetypeEvents
);
const prompt = buildBriefingPrompt(dataSummary);
const llmApiKey = process.env.LLM_API_KEY!;
const narrative = await generateNarrative(
prompt,
llmApiKey
);
// 3. Convert to speech
const elevenLabsApiKey =
process.env.ELEVENLABS_API_KEY!;
const voiceId =
process.env.ELEVENLABS_VOICE_ID ??
"S9GPGBaMND8XWwwzxQXp";
const audio = await textToSpeech(
narrative,
voiceId,
elevenLabsApiKey
);
return new Response(audio, {
status: 200,
headers: {
"Content-Type": "audio/mpeg",
"X-Narrative-Text": encodeURIComponent(
narrative
),
"Access-Control-Expose-Headers":
"X-Narrative-Text",
...corsHeaders,
},
});
} catch (err: unknown) {
const message =
err instanceof Error ? err.message : String(err);
console.error("Daily briefing error:", message);
return new Response(
JSON.stringify({ error: "Internal server error" }),
{
status: 500,
headers: {
"Content-Type": "application/json",
...corsHeaders,
},
}
);
}
}
);
// --- Helper functions (buildDataSummary, buildBriefingPrompt,
// generateNarrative, textToSpeech) go here — same as the
// functions shown in the earlier sections of this guide ---
Register the route in your existing convex/http.ts (the http router instance and its default export were created in the webhook setup guide):
import { generateBriefing } from "./dailyBriefing";
// Daily health briefing
http.route({
path: "/daily-briefing",
method: "POST",
handler: generateBriefing,
});
// CORS preflight
http.route({
path: "/daily-briefing",
method: "OPTIONS",
handler: generateBriefing,
});
Warning: do not return HTTP status 502 from a Convex httpAction. Convex infrastructure intercepts 502 responses and replaces them with a generic error that strips your CORS headers — your frontend will see a network error instead of your error message. Use 500 for server errors instead.
Create functions/src/dailyBriefing.ts:
import { onRequest } from "firebase-functions/v2/https";
import { getFirestore } from "firebase-admin/firestore";
import { defineSecret } from "firebase-functions/params";
const LLM_API_KEY = defineSecret("LLM_API_KEY");
const ELEVENLABS_API_KEY = defineSecret("ELEVENLABS_API_KEY");
const ELEVENLABS_VOICE_ID = defineSecret("ELEVENLABS_VOICE_ID");
const db = getFirestore();
export const dailyBriefing = onRequest(
{
secrets: [
LLM_API_KEY,
ELEVENLABS_API_KEY,
ELEVENLABS_VOICE_ID,
],
},
async (req, res) => {
// CORS
res.set("Access-Control-Allow-Origin", "*");
res.set("Access-Control-Allow-Methods", "POST, OPTIONS");
res.set("Access-Control-Allow-Headers", "Content-Type");
if (req.method === "OPTIONS") {
res.status(204).send("");
return;
}
if (req.method !== "POST") {
res.status(405).send("Method not allowed");
return;
}
try {
const { externalId } = req.body;
if (!externalId) {
res.status(400).json({ error: "externalId is required" });
return;
}
// 1. Query scores
const yesterday = new Date(
Date.now() - 24 * 60 * 60 * 1000
).toISOString();
const scoreSnapshot = await db
.collection("sahha_events")
.doc(externalId)
.collection("ScoreCreatedIntegrationEvent")
.where("createdAtUtc", ">=", yesterday)
.orderBy("createdAtUtc", "desc")
.limit(50)
.get();
const scoreEvents = scoreSnapshot.docs.map((doc) =>
doc.data()
);
// 2. Query archetypes (deduplicated)
const archSnapshot = await db
.collection("sahha_events")
.doc(externalId)
.collection("ArchetypeCreatedIntegrationEvent")
.orderBy("createdAtUtc", "desc")
.limit(30)
.get();
const allArchetypes = archSnapshot.docs.map((doc) =>
doc.data()
);
const seen = new Set<string>();
const archetypeEvents = allArchetypes.filter((e) => {
const name = e.payload?.name;
if (name && !seen.has(name)) {
seen.add(name);
return true;
}
return false;
});
// 3. Build prompt and generate narrative
const dataSummary = buildDataSummary(
scoreEvents,
archetypeEvents
);
const prompt = buildBriefingPrompt(dataSummary);
const narrative = await generateNarrative(
prompt,
LLM_API_KEY.value()
);
// 4. Convert to speech
const audio = await textToSpeech(
narrative,
ELEVENLABS_VOICE_ID.value(),
ELEVENLABS_API_KEY.value()
);
res.set("Content-Type", "audio/mpeg");
res.set(
"X-Narrative-Text",
encodeURIComponent(narrative)
);
res.set(
"Access-Control-Expose-Headers",
"X-Narrative-Text"
);
res.send(Buffer.from(audio));
} catch (err) {
console.error("Daily briefing error:", err);
res.status(500).json({ error: "Internal server error" });
}
}
);
// --- Helper functions (buildDataSummary, buildBriefingPrompt,
// generateNarrative, textToSpeech) go here — same as the
// functions shown in the earlier sections of this guide ---
Deploy with:
firebase deploy --only functions
Test your endpoint with curl:
curl -X POST https://YOUR_ENDPOINT_URL/daily-briefing \
-H "Content-Type: application/json" \
-d '{"externalId": "your-user-id"}' \
--output briefing.mp3
The narrative text is also available in the X-Narrative-Text response header (URL-encoded).
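On the frontend, read the MP3 body as a blob and decode the header separately. A sketch of a client call (fetchBriefing and decodeNarrativeHeader are illustrative names, assuming the endpoint built in this guide):

```typescript
// Decode the URL-encoded narrative header ("" when the header is absent).
function decodeNarrativeHeader(value: string | null): string {
  return value ? decodeURIComponent(value) : "";
}

// Call the briefing endpoint; returns a playable audio URL plus the text.
async function fetchBriefing(endpoint: string, externalId: string) {
  const res = await fetch(endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ externalId }),
  });
  if (!res.ok) throw new Error(`Briefing failed: ${res.status}`);
  const narrative = decodeNarrativeHeader(
    res.headers.get("X-Narrative-Text")
  );
  // Object URL can be assigned directly to an <audio> element's src.
  const audioUrl = URL.createObjectURL(await res.blob());
  return { narrative, audioUrl };
}
```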
Troubleshooting
No health data returned from the database
- Confirm that your Sahha webhook is active and delivering events — check the Sahha dashboard under Data Delivery → Webhooks
- Verify the externalId you are querying matches the one set when creating the Sahha profile
- Check that events exist within the last 24 hours — older events will be filtered out by the time window
- Try querying without the date filter first to confirm data exists at all
LLM request fails or returns an error
- Double-check your LLM_API_KEY is set correctly in your environment variables
- If using Gemini, verify your key at Google AI Studio — the free tier has daily quota limits that can be exhausted
- Check your LLM provider’s status page for any outages
- If you see a 429 error, you have hit the rate limit or daily quota — wait and retry, or upgrade your plan
ElevenLabs returns 401 or 403
- Verify your ELEVENLABS_API_KEY is correct and active
- Check your ElevenLabs plan’s character quota — the free tier has a limited number of characters per month
- Ensure the voiceId exists in your account or is a public voice from the Voice Library
ElevenLabs rate limit (429)
- The free tier allows a limited number of requests per minute
- Add a retry with exponential backoff, or cache generated audio for repeat requests with the same data
- Consider upgrading your ElevenLabs plan for higher rate limits
Convex endpoint returns a generic error with no CORS headers
- If your Convex httpAction returns HTTP status 502, Convex infrastructure intercepts the response and replaces it with a generic "error code: 502" text/plain response — with no CORS headers. Your frontend will see a network error instead of your error message
- Always use status 500 (not 502) for server errors in Convex httpActions
- Make sure your CORS headers are included on all error responses, not just success responses
Audio file is empty or corrupted
- Ensure you are reading the response as an ArrayBuffer, not as JSON
- Check that the Content-Type header in your response is set to audio/mpeg
- Verify the narrative text is not empty before sending it to ElevenLabs — an empty string will produce an error
Next steps
You now have a working daily briefing that turns raw health data into a personalized audio summary. Here are some ideas for extending it:
- Scheduled delivery — Use a cron job or scheduled function to generate briefings each morning and push a notification to the user
- Caching — Store generated audio for each user/day to avoid re-generating the same briefing on repeat requests
- Voice personalization — Let users pick their preferred ElevenLabs voice or clone their own
- Multi-language support — The eleven_multilingual_v2 model already supports 29 languages; pass the user’s locale preference to the LLM prompt to generate briefings in their language
- Trend analysis — Expand the query window to 7 days and ask the LLM to highlight week-over-week changes
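For the caching idea above, a natural unit is one briefing per user per day. A sketch of a cache key (the key format is illustrative):

```typescript
// Build a per-user, per-UTC-day cache key for generated briefings.
function briefingCacheKey(
  externalId: string,
  now: Date = new Date()
): string {
  const day = now.toISOString().slice(0, 10); // "YYYY-MM-DD"
  return `briefing:${externalId}:${day}`;
}
```

Store the MP3 bytes and narrative text under this key (Supabase Storage, Convex file storage, or Cloud Storage all work); on a cache hit, skip both the LLM and TTS calls entirely.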