15 April 2026

Kokoro Streaming Latency Investigation on Firefox

Making Local TTS Actually Stream: Fixing Kokoro FastAPI for Real-Time Audio

If you’ve been following along with my local AI setup, you’ll know I run most of my services in Proxmox VE LXC or Podman containers.  One of those is Kokoro, a self-hosted text-to-speech instance based on the Kokoro-82M ONNX model.  The audio it generates is surprisingly good, and the model is small enough that inference is fast even on a CPU.  There are fifty-nine voices covering a handful of languages.  Importantly, it exposes an OpenAI-compatible API that plugs straight into Open WebUI.

On its own, a text-to-speech engine in a repo would not have warranted a blog post.  But, no surprise, what started as a simple Firefox bug fix turned into a streaming pipeline investigation: the usual benchmarks, agent assistance with code analysis, a duplicate container sandbox, and finally a fix that meaningfully reduces time-to-first-audio for conversational use cases.


The Firefox Bug

It worked fine in Chrome, but produced this error in Firefox when clicking Generate Speech:

MediaSource.addSourceBuffer: Type not supported in MediaSource

The culprit was a single line in AudioService.js:

this.sourceBuffer = this.mediaSource.addSourceBuffer('audio/mpeg');

It turns out Firefox does not support audio/mpeg in the MediaSource Extensions (MSE) API.  The fix was to test for support and fall back when MSE is not available:

if (!window.MediaSource || !MediaSource.isTypeSupported('audio/mpeg')) {
    await this.setupBufferedStream(stream, response, onProgress, estimatedChunks);
    return;
}

The setupBufferedStream fallback collects all incoming audio chunks into a Blob and sets it as a plain audio.src.  No MSE required, and it works everywhere.  Rather than rebuilding the image, the patched file was saved locally and injected using podman cp.

Benchmarking: Does Format or Voice Matter?

With the Firefox issue sorted, I ran a latency benchmark.  Then another and another, across three formats (MP3, PCM and WAV) and three voices.  The phrase was short and topical, since I'm seeking some consultancy:

“Hi Mediclinic, your EHR project sounds interesting and has the potential for a lot of impact.”

Three runs per combination, stream: false, measured with Python’s time.perf_counter().
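A minimal sketch of the measurement loop, where `synthesise` stands in for the actual POST to the Kokoro /v1/audio/speech endpoint with a given voice and response format (the function name and structure here are illustrative, not the benchmark script itself):

```python
import time
from statistics import mean

def average_latency_ms(synthesise, runs=3):
    """Time synthesise() over several runs; return the mean latency in ms.

    synthesise is a stand-in for the real request: a POST to the Kokoro
    /v1/audio/speech endpoint with one (format, voice) combination.
    """
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        synthesise()
        samples.append((time.perf_counter() - t0) * 1000)
    return mean(samples)
```

Each cell in the tables below is the mean of three such runs.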

By format (averaged across all voices)

Format Avg latency File size
WAV 1382 ms ~256 KB
PCM 1417 ms ~256 KB
MP3 1457 ms ~86 KB

By voice (averaged across all formats)

The language choices were: English, Japanese, Mandarin, Spanish, French, Hindi, Italian, and Brazilian Portuguese.

Voice Description Avg latency
af_heart American English female 1379 ms
bm_fable British English male 1439 ms
ef_dora Spanish female 1438 ms

The takeaway: format and voice choice barely matter for latency. The ONNX inference dominates; everything else (MP3 encoding, voice model differences) contributes at most ~80 ms. MP3 encoding time was minimal, making it the right choice for web playback given its file size advantage. The Spanish voice (ef_dora) performs on par with the English voices, which is a good sign for multilingual deployments.

Can we go faster?

While reading the documentation (yes, a bad habit I developed when I was young), I spotted that the API has a stream: true parameter.  For conversational applications this seemed useful, and a simple switch to enable...  You can probably guess that I was being naive: I thought I'd just enable the flag and the server would stream the audio during generation, reducing perceived latency.  It turns out that streaming works at the sentence level, so I split the test phrase to start with a nice short initial sentence:

“Hi Mediclinic.  Your EHR project sounds interesting and has the potential for a lot of impact.”

Then Claude wrote some Python to track exactly when each 1 KB chunk arrived at the client:

import json, time, urllib.request

TEXT = ("Hi Mediclinic.  Your EHR project sounds interesting "
        "and has the potential for a lot of impact.")
body = json.dumps({"input": TEXT, "voice": "af_heart",
                   "response_format": "mp3", "stream": True}).encode()
req = urllib.request.Request("http://localhost:8880/v1/audio/speech", data=body,
                             headers={"Content-Type": "application/json"})

t_start = time.perf_counter()
chunks = []
with urllib.request.urlopen(req) as resp:
    while True:
        chunk = resp.read(1024)
        if not chunk: break
        t = round((time.perf_counter() - t_start) * 1000)
        chunks.append((t, len(chunk)))

print(f"First chunk: {chunks[0][0]}ms")
print(f"Last chunk:  {chunks[-1][0]}ms")

Results for stream: true, af_heart, MP3:

First chunk: 1462ms
Last chunk:  1464ms
Chunks: 89

All chunks arrived within 2 ms of each other, after a full 1.4 second wait. stream: false was identical.  So was PCM, which has zero encoder overhead.  Eh?  This didn't seem right.  What was going on?  Was something buffering the entire audio before a single byte was sent?

The Rabbit Hole

I made a copy of the base container called kokoro-stream, on port 8881 as an isolated sandbox for Claude to play with.  The server code uses async generators and yield statements all the way from the HTTP handler down to the ONNX inference layer, which is good practice.  The StreamingResponse even sets X-Accel-Buffering: no, so it should work.

Three hypotheses:


Hypothesis Evidence for
H1 ONNX inference batches both sentences as one call PCM (no encoder) also shows simultaneous delivery
H2 Uvicorn buffers the response body below a threshold No asyncio yield points between sentence yields
H3 PyAV MP3 encoder buffers early frames Secondary — can’t explain PCM behaviour

What the code actually does

Inside tts_service.py, smart_split() splits the input text into chunks before inference, which is good.  However, it batches sentences together when their combined token count is under 250 tokens.  Guess what?  The two-sentence test is only 105 tokens, so both sentences were delivered as a single string to KokoroV1.generate().
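To see how a 105-token input slips under the threshold, here is a hedged sketch of the batching behaviour (not the actual smart_split implementation; token counting is approximated with a word count):

```python
def batch_sentences(sentences, count_tokens, max_tokens=250):
    """Greedily merge consecutive sentences while the combined token
    count stays under max_tokens -- the behaviour that folds a short
    two-sentence input into a single inference call."""
    batches, current, tokens = [], [], 0
    for s in sentences:
        t = count_tokens(s)
        if current and tokens + t > max_tokens:
            batches.append(" ".join(current))
            current, tokens = [], 0
        current.append(s)
        tokens += t
    if current:
        batches.append(" ".join(current))
    return batches

sents = ["Hi Mediclinic.", "Your EHR project sounds interesting."]
word_count = lambda s: len(s.split())      # crude stand-in for a tokenizer
print(batch_sentences(sents, word_count))  # under the threshold: one merged batch
```

With the default threshold both sentences land in a single batch; only an input longer than the threshold would ever be split.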

Inside kokoro_v1.py, the pipeline was called with split_pattern=r'\n+', meaning it would only split on newlines, not on full stops.  And since there were no newlines, both sentences went through as a single inference call, producing a single audio file.  No amount of downstream async would fix that.

Even if the sentences had been processed separately, the for result in pipeline(...) loop is synchronous and never returns control to the asyncio event loop between sentences, so the HTTP layer has no opportunity to flush.

The Fix

Two changes:

inference/kokoro_v1.py 

Change the split pattern to break on full stops as well:

# before
split_pattern=r'\n+'
# after
split_pattern=r'(?<=[.!?])\s+'
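A quick sanity check of the two patterns with plain `re` (nothing Kokoro-specific):

```python
import re

text = ("Hi Mediclinic.  Your EHR project sounds interesting "
        "and has the potential for a lot of impact.")

print(re.split(r'\n+', text))            # no newlines -> no split, one element
print(re.split(r'(?<=[.!?])\s+', text))  # splits after ., ! or ? plus whitespace
```

The lookbehind keeps the punctuation attached to its sentence, so each piece is still valid TTS input.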

inference/kokoro_v1.py and services/tts_service.py 

Add yield points:

yield AudioChunk(...)
await asyncio.sleep(0)  # return control to event loop → HTTP layer can flush

Before and after: Time To First Audio (TTFA)

Metric Before After
First chunk (TTFA) ~1400 ms ~575 ms
Last chunk ~1400 ms ~1400 ms
Gap ~2 ms ~1100 ms

The first audio now arrives after ~575 ms, while the second sentence is still being generated.  The total generation time is unchanged, unsurprisingly, but the perceived latency is lower, which is just what is needed for conversational applications, like calling several service centres to ask about the availability and cost of servicing a car.  I was surprised that online systems here in South Africa don't show available slots; instead they are lead generation, and a human sends an email or calls you a few hours later.

Conclusion

A few things worth noting:

The architecture.  Kokoro uses async generators throughout, so the issue wasn't bad design; it was two small configuration defaults that affect short inputs.  The token batching threshold (250 tokens) and the newline-only split pattern made sense in isolation, but together they eliminate sentence-level streaming for my test input.

PCM as a diagnostic tool.  Benchmarking PCM format (raw samples, no encoding) alongside MP3 was valuable: it identified and eliminated the audio encoder as a suspect early.  When PCM and MP3 show similar timings, the bottleneck is upstream of the encoder.

asyncio.sleep(0) is surprisingly powerful. A zero-duration sleep doesn’t actually sleep; it yields control to the event loop.  That’s enough for uvicorn to flush pending response bytes to the socket.  It’s a one-liner with a real impact on perceived latency.
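A toy demonstration of the effect, independent of Kokoro: with the await asyncio.sleep(0) in place, a second task gets to run between iterations of the first; without it, the first loop would finish every iteration before the other task ever ran.

```python
import asyncio

async def synthesiser(log):
    for i in range(3):
        log.append(f"sentence {i} ready")
        await asyncio.sleep(0)   # zero-duration: just hand control back to the loop

async def flusher(log):
    for _ in range(3):
        log.append("flushed to socket")
        await asyncio.sleep(0)

async def main():
    log = []
    await asyncio.gather(synthesiser(log), flusher(log))
    return log

log = asyncio.run(main())
print(log)   # the two tasks interleave instead of running back-to-back
```

In the real server the "flusher" role is played by uvicorn writing the already-yielded chunk to the socket.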


Podman on Ubuntu 24.04. 

Kokoro image: ghcr.io/remsky/kokoro-fastapi-cpu:latest

Voices used: af_heart, bm_fable, ef_dora.

LiteLLM + Agent Teams: A Practical Guide


An aide memoire for using the local AI infrastructure day-to-day.


The big picture

You have three layers:

Your task (plain English)
        ↓
  Agent team (Python, OpenAI Agents SDK)
        ↓
  LiteLLM proxy  ←→  Ollama (local GPU)
                 ←→  OpenRouter (cloud free)
                 ←→  Anthropic (claude-haiku)

LiteLLM is a translation layer. It gives everything a single OpenAI-compatible URL (http://10.140.20.63:4000/v1) regardless of whether the model is running locally on your GPU or fetched from a cloud provider. Your code never changes — only the model name string changes.

The agent team is a set of specialised AI workers. You give the orchestrator a task in plain English; it decides which specialist to hand it to; the specialist does the work and hands results back.


Part 1 — Using LiteLLM directly

From the command line (curl)

# Ask any model a question
curl http://10.140.20.63:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer no-key-needed" \
  -d '{
    "model": "qwen3.5:4b",
    "messages": [{"role": "user", "content": "What is a BGP route reflector?"}]
  }'

# List all available models
curl http://10.140.20.63:4000/v1/models | python3 -m json.tool | grep '"id"'

From Python (OpenAI SDK)

from openai import OpenAI

client = OpenAI(
    base_url="http://10.140.20.63:4000/v1",
    api_key="no-key-needed",
)

response = client.chat.completions.create(
    model="qwen3.5:4b",   # or "claude-haiku-4-5", "nemotron-120b", etc.
    messages=[{"role": "user", "content": "Summarise this log: ..."}],
)
print(response.choices[0].message.content)

Choosing a model

Use case Model string Where it runs
Quick questions, triage qwen3.5:4b Local GPU (3.4 GB)
Writing code qwen2.5-coder:7b Local GPU (4.7 GB)
General analysis qwen3.5 Local GPU (6.6 GB)
Images / screenshots qwen3-vl Local GPU (6.1 GB)
Heavy reasoning nemotron-120b Cloud free (OpenRouter)
Reliable tool calling claude-haiku-4-5 Cloud (Anthropic/OpenRouter)
Best available free free Cloud free (auto-routed)

Group aliases — if the specific model is busy or unavailable, LiteLLM falls back automatically:

Alias Primary Fallback
fast qwen3.5:4b qwen2.5-coder:1.5b
coder qwen2.5-coder:7b qwen2.5-coder:1.5b
local qwen3.5 llama3.1
reasoning nemotron-120b gpt-oss-120b
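The fallback happens inside the proxy, but conceptually it amounts to something like this (an illustrative sketch, not LiteLLM's actual code; GROUPS mirrors the table above):

```python
GROUPS = {
    "fast":      ["qwen3.5:4b", "qwen2.5-coder:1.5b"],
    "coder":     ["qwen2.5-coder:7b", "qwen2.5-coder:1.5b"],
    "local":     ["qwen3.5", "llama3.1"],
    "reasoning": ["nemotron-120b", "gpt-oss-120b"],
}

def resolve(alias, is_available):
    """Return the first model in the alias group that is up;
    a bare model name with no group resolves to itself."""
    for model in GROUPS.get(alias, [alias]):
        if is_available(model):
            return model
    raise RuntimeError(f"no model available for {alias!r}")

print(resolve("fast", lambda m: True))               # primary up -> primary
print(resolve("fast", lambda m: m != "qwen3.5:4b"))  # primary down -> fallback
```

The point for callers: you send the alias, and the proxy picks whichever member is healthy.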

Health check

curl http://10.140.20.63:4000/health
incus exec litellm -- journalctl -u litellm -f   # live logs

Part 2 — Running the agent team

The one-liner

cd /home/user/claude/agents
.venv/bin/python team.py "your task here"

Example tasks

# Coding
.venv/bin/python team.py "write a Python script that tails a log file and alerts on ERROR lines"

# Research
.venv/bin/python team.py "what are the main CVEs in OpenSSH versions 8.x to 9.x?"

# Analysis
.venv/bin/python team.py "analyse this nmap output and prioritise the findings: [paste output]"

# Mixed — the orchestrator chains specialists automatically
.venv/bin/python team.py "research the log4shell vulnerability then write a Python checker for it"

What happens under the hood

You: "research log4shell then write a checker"
        ↓
Orchestrator (claude-haiku) reads task
        ↓
Handoff → Researcher (nemotron-120b, cloud)
  "Log4Shell is CVE-2021-44228, affects Log4j 2.0–2.14.1..."
        ↓
Back to Orchestrator → Handoff → Coder (qwen2.5-coder:7b, local GPU)
  "def check_log4shell(host, port): ..."
        ↓
Orchestrator summarises and returns to you

The orchestrator uses haiku because it reliably produces valid tool-call JSON for handoffs. Local Ollama models are fast but unreliable at structured function-calling.

Watching it work

Add LITELLM_LOG=DEBUG to see every model call:

LITELLM_LOG=DEBUG .venv/bin/python team.py "hello"

Or watch the LiteLLM proxy logs live in another terminal:

incus exec litellm -- journalctl -u litellm -f

Part 3 — Writing your own agents

Minimal single agent

import asyncio, os
os.environ["OPENAI_BASE_URL"] = "http://10.140.20.63:4000/v1"
os.environ["OPENAI_API_KEY"]  = "no-key-needed"

from agents import Agent, Runner

agent = Agent(
    name="Helper",
    model="qwen3.5:4b",
    instructions="You are a helpful assistant. Be concise.",
)

async def main():
    result = await Runner.run(agent, "What is ARP spoofing?")
    print(result.final_output)

asyncio.run(main())

Adding tools (things agents can do)

from agents import Agent, Runner, function_tool
import httpx

@function_tool
async def get_url(url: str) -> str:
    """Fetch the contents of a URL."""
    async with httpx.AsyncClient(timeout=10) as c:
        r = await c.get(url)
        return r.text[:2000]   # truncate to avoid context overflow

agent = Agent(
    name="WebReader",
    model="qwen3.5:4b",
    instructions="You can fetch URLs to answer questions.",
    tools=[get_url],
)

Rule: tools are Python functions decorated with @function_tool. The agent decides when to call them. The docstring becomes the tool description — make it clear.

Handing off between agents

from agents import Agent, Runner, handoff

specialist = Agent(
    name="Specialist",
    model="qwen3.5",
    instructions="You handle detailed analysis. Return results clearly.",
)

orchestrator = Agent(
    name="Orchestrator",
    model="claude-haiku-4-5",
    instructions="Route analysis tasks to Specialist. Summarise results.",
    handoffs=[handoff(specialist)],
)

result = await Runner.run(orchestrator, "Analyse this data: ...")

handoff() is itself a tool the orchestrator can call. When it calls it, execution transfers to the specialist; when the specialist finishes, control returns to the orchestrator.

The existing tools you can reuse

gpu_tools.py — for any agent that needs to know about the GPU:

from gpu_tools import vram_status, list_local_models, comfyui_status
agent = Agent(..., tools=[vram_status, list_local_models])

devops_tools.py — for agents that manage containers:

from devops_tools import container_run, container_write_file, container_read_file, http_probe, container_systemctl
agent = Agent(..., tools=[container_run, http_probe])

Part 4 — Practical patterns

Pattern 1: Quick one-shot query

Use make_client() from litellm_client.py directly — no agent overhead:

from litellm_client import make_client, FAST_MODEL

async def ask(question: str) -> str:
    client = make_client()
    resp = await client.chat.completions.create(
        model=FAST_MODEL,
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content

Pattern 2: Task with a deadline / retry limit

result = await Runner.run(agent, task, max_turns=10)

max_turns prevents infinite loops. The team.py orchestrator uses 40 turns because research+code tasks can take many steps.

Pattern 3: Streaming output

from agents import Runner

result = Runner.run_streamed(agent, task)
async for event in result.stream_events():
    if event.type == "raw_response_event" and hasattr(event.data, "delta"):
        print(event.data.delta, end="", flush=True)

Pattern 4: DevOps / automation agent

See setup_tts_stt.py as a reference. The pattern is:
1. Write a detailed task string explaining exactly what the agent should do and verify
2. Give it the right tools (container_run, http_probe, etc.)
3. Set instructions to "act immediately, don't ask permission"
4. Set max_turns=40 for multi-step work

agent = Agent(
    name="DevOps",
    model="claude-haiku-4-5",   # must use haiku — local models can't do tool-calling
    tools=[container_run, container_write_file, http_probe, container_systemctl],
    instructions="Act immediately. Never ask for permission. Verify each step.",
)
result = await Runner.run(agent, TASK, max_turns=40)

Part 5 — Gotchas and tips

Local models can't do structured tool-calling

qwen3.5, qwen2.5-coder:7b, etc. produce good prose but often garble the JSON format needed for handoff() and @function_tool calls. Always use claude-haiku-4-5 as your orchestrator — it's reliable and cheap (Anthropic free tier via OpenRouter).

Only one large model fits in VRAM at a time

The RTX 4070 has 8 GB. If you ask the orchestrator to hand off to a 6.6 GB local model while another 4.7 GB model is loaded, Ollama unloads the first one. There is a ~5–15 second cold-load delay. This is normal.

Free cloud models are rate-limited

nemotron-120b and other OpenRouter free models may queue or time out under load. If an agent stalls for >2 minutes with no output, it's usually rate-limiting. Switch to gpt-oss-120b or qwen3-80b as alternatives.

The free model alias changes

openrouter/openrouter/free routes to whatever OpenRouter considers the best free model at that moment. Good for exploration; use a specific model name for reproducible pipelines.

Ollama keep-alive

Models stay in VRAM for 15 minutes after last use (KEEP_ALIVE=15m). If you want to free VRAM immediately:

curl -X POST http://10.140.20.1:11434/api/generate -d '{"model":"qwen3.5","keep_alive":0}'

Part 6 — Agent Team in Open WebUI

The agent team is exposed as a model in Open WebUI via the Pipelines server — a small FastAPI app that sits between Open WebUI and the agent code.

Open WebUI chat
      ↓  (selects "Agent Team" model)
Pipelines server  (host: 10.140.20.1:9099)
      ↓
Agent orchestrator (claude-haiku)
      ↓  handoffs
Specialist agents (local GPU / cloud free)

Architecture files

File Purpose
agents/pipelines/agent_team.py The pipeline class — wraps the agent team
agents/run_pipelines.sh Manual start script
/etc/systemd/system/owui-pipelines.service Systemd service (starts on boot)

Managing the pipelines server

sudo systemctl status owui-pipelines
sudo systemctl restart owui-pipelines
sudo journalctl -u owui-pipelines -f

Connecting to Open WebUI (one-time setup)

  1. Open http://localhost:3001
  2. Top-right avatar → Admin Panel
  3. Settings → Connections → Pipelines
  4. Add a connection with URL http://10.140.20.1:9099 and API Key 0p3n-w3bu!
  5. Click Save — "Agent Team" now appears in the model picker

Using it

Select Agent Team in the model picker and chat normally. Each message is routed by the orchestrator to the right specialist. The full conversation history is passed so the team has context across turns.

The pipelines server API key (0p3n-w3bu!) is the default from the open-webui-pipelines package. Change it in /etc/systemd/system/owui-pipelines.service and update the Open WebUI connection setting to match.

Adding more pipelines

Drop a new .py file with a Pipeline class into agents/pipelines/, then:

sudo systemctl restart owui-pipelines

The new pipeline appears as a model in Open WebUI immediately.
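For reference, the scaffold such a file follows — a minimal sketch modelled on the standard pipelines example (check agents/pipelines/agent_team.py for the exact shape this setup actually uses; the name and echo behaviour here are hypothetical):

```python
from typing import Generator, Iterator, List, Union

class Pipeline:
    def __init__(self):
        self.name = "Echo Demo"   # shows up in the Open WebUI model picker

    async def on_startup(self):
        pass                      # called when the pipelines server loads the file

    async def on_shutdown(self):
        pass

    def pipe(self, user_message: str, model_id: str,
             messages: List[dict], body: dict) -> Union[str, Generator, Iterator]:
        # A real pipeline would hand user_message (plus history) to the agent team.
        return f"echo: {user_message}"
```

Returning a generator instead of a string is how a pipeline streams tokens back to the chat.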


Quick reference card

# Run agent team
cd /home/user/claude/agents && .venv/bin/python team.py "task"

# Query a model directly
curl http://10.140.20.63:4000/v1/chat/completions \
  -H "Content-Type: application/json" -H "Authorization: Bearer no-key-needed" \
  -d '{"model":"qwen3.5:4b","messages":[{"role":"user","content":"hello"}]}'

# List models
curl -s http://10.140.20.63:4000/v1/models | python3 -m json.tool | grep '"id"'

# Watch LiteLLM traffic
incus exec litellm -- journalctl -u litellm -f

# Check VRAM
curl -s http://10.140.20.1:11434/api/ps | python3 -m json.tool

# Add a model to Ollama
ollama pull <model-name>
# Then add it to /etc/litellm/config.yaml and push + restart

File map

/home/user/claude/agents/
├── team.py            ← entry point — run this
├── litellm_client.py  ← model constants and URLs
├── gpu_tools.py       ← tools: vram_status, list_local_models, comfyui_status
├── devops_tools.py    ← tools: container_run, container_write_file, http_probe, ...
├── setup_tts_stt.py   ← reference: single-purpose DevOps agent
└── .venv/             ← virtualenv (openai-agents, openai)

/etc/litellm/
├── config.yaml        ← model list (edit on host, push to container)
└── secrets.env        ← OPENROUTER_API_KEY

CPU vs. GPU: Is Hardware Acceleration Always Faster for Real-Time TTS?


Following up on my last post about fixing progressive streaming in Kokoro FastAPI, I decided to take things a step further. If the goal is minimizing latency for a conversational AI assistant, shouldn't throwing a dedicated GPU at the problem make it even faster?

I spent the afternoon duplicating my streaming container and configuring it to run on a local NVIDIA GeForce RTX 4070 (8GB). The results were... surprising. It turns out that for real-time, sentence-by-sentence streaming, "faster" hardware doesn't always translate to a better user experience.


The Setup: Moving to Incus and CUDA

While my previous tests were in Podman, I've recently moved to Incus for better resource management. I duplicated the kokoro-stream container to a new sandbox named kokoro-stream-gpu and passed through the GPU:

incus config device add kokoro-stream-gpu mygpu gpu uid=1000 gid=1000
incus config set kokoro-stream-gpu nvidia.runtime true
incus config set kokoro-stream-gpu nvidia.driver.capabilities compute,utility,video

Inside the container, I switched the backend from the ONNX CPU runtime to the PyTorch GPU version. I also had to port over the same split_pattern and asyncio.sleep(0) fixes from the last session to ensure I was comparing apples to apples (sentence-level streaming vs. sentence-level streaming).


The Benchmark: Short vs. Long Form

I ran two tests using the British English male voice (bm_fable): one with a short two-sentence phrase (~90 chars) and one with the full text of my last blog post (~8,700 chars).

Metric CPU (ONNX) GPU (RTX 4070) Speedup
TTFA (Short Text) ~557 ms ~508 ms 1.1x
Total Time (Long Text) ~289 s ~15 s 19.2x
Throughput (Long Text) ~30 char/s ~580 char/s 19.2x
System RAM Usage 1.21 GiB 1.92 GiB -
Video RAM (VRAM) 0 MB ~850 MB -
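The throughput rows follow directly from the totals; a quick sanity check of the arithmetic, using the rounded character count and timings from the table (the table's own 19.2x was presumably computed from unrounded timings):

```python
chars = 8700            # approximate length of the long-form input
cpu_total_s = 289
gpu_total_s = 15

speedup = cpu_total_s / gpu_total_s   # roughly 19x
cpu_rate = chars / cpu_total_s        # ~30 char/s
gpu_rate = chars / gpu_total_s        # ~580 char/s

print(f"speedup {speedup:.1f}x, CPU {cpu_rate:.0f} char/s, GPU {gpu_rate:.0f} char/s")
```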

Reflections: When is the GPU worth it?

The results tell two very different stories depending on what you're doing.

1. Conversational AI (Short Sentences)

If you're building a real-time voice assistant that speaks one or two sentences at a time, the CPU is the clear winner. The Time to First Audio (TTFA) is virtually identical because the overhead of initializing the GPU pipeline eats up any compute gains. For this use case, the GPU is just an expensive way to use more RAM.

2. Long-Form Content (Articles, Blog Posts)

This is where the RTX 4070 absolutely screams. When I threw the full 8,700-character blog post at it, the GPU version finished the entire synthesis in 15 seconds. The CPU version was still grinding away at nearly the 5-minute mark.

At 580 characters per second, the GPU isn't just "faster"—it changes the nature of the service. You can listen to an entire article almost as soon as you click "Generate."

The Verdict

  • Stick with CPU for: Open WebUI, chatbots, home assistants, and low-RAM servers.
  • Switch to GPU for: Audiobook generation, long-form reading, or high-concurrency environments.

The kokoro-stream-gpu container is now my go-to for "reading" long documentation, while the CPU version remains my daily driver for conversational chat.


The Evidence: Benchmarking Code

To keep things evidence-based, here is the Python script used to capture these metrics. It probes the streaming API and measures exactly when the first and last chunks arrive.

1. Throughput & Latency Probe (benchmark_long.py)

import time
import requests

# Ports
GPU_URL = "http://localhost:8881/v1/audio/speech"
CPU_URL = "http://localhost:8882/v1/audio/speech"

# Load long text
with open("blog_post.md", "r") as f:
    LONG_TEXT = f.read()

def run_benchmark(name, url):
    print(f"\n--- Benchmarking {name} ---")
    start_time = time.time()
    first_chunk_time = None

    payload = {
        "input": LONG_TEXT,
        "voice": "bm_fable",
        "response_format": "mp3",
        "stream": True
    }

    with requests.post(url, json=payload, stream=True) as r:
        r.raise_for_status()
        for chunk in r.iter_content(chunk_size=1024):
            if chunk and first_chunk_time is None:
                first_chunk_time = time.time() - start_time

        total_time = time.time() - start_time

    return {
        "ttfa_ms": round(first_chunk_time * 1000, 2),
        "total_s": round(total_time, 2),
        "char_s": round(len(LONG_TEXT) / total_time, 2)
    }

print(run_benchmark("CPU", CPU_URL))
print(run_benchmark("GPU", GPU_URL))

2. Evidence Audio Generation (generate_evidence.py)

import requests
import hashlib

# Same long-form input as the latency benchmark
with open("blog_post.md", "r") as f:
    LONG_TEXT = f.read()

def generate_and_hash(url, filename):
    r = requests.post(url, json={"input": LONG_TEXT, "voice": "bm_fable"})
    with open(filename, "wb") as f:
        f.write(r.content)
    return hashlib.md5(r.content).hexdigest()

# Results:
# CPU Hash: a22fe5e4d70a2888d755e0f8df7dae8f
# GPU Hash: e5ccba5c22ef3edf594aabaa2c08bb5f

Running Incus on Ubuntu 24.04. Hardware: NVIDIA GeForce RTX 4070 8GB. Frameworks: ONNX Runtime (CPU) vs. PyTorch 2.6+CUDA 12.4 (GPU).

Making Local TTS Actually Stream: Fixing Kokoro FastAPI for Real-Time Audio


If you've been following along with my local AI setup, you'll know I run most of my services in Podman containers on a home server — Ollama, Open WebUI, Whisper, and a handful of other tools. One of those is Kokoro FastAPI, a self-hosted text-to-speech server based on the Kokoro-82M ONNX model. It produces surprisingly good speech, supports multiple voices and languages, and exposes an OpenAI-compatible API that plugs straight into Open WebUI.

This post covers a productive session where what started as a simple Firefox bug turned into a full streaming pipeline investigation — with benchmarks, a duplicate container sandbox, and a fix that meaningfully reduced time-to-first-audio for conversational use cases.


The Firefox Bug

First thing first: the web UI at kokoro-web.lan worked fine in Chrome but threw this in Firefox when you clicked Generate Speech:

MediaSource.addSourceBuffer: Type not supported in MediaSource

The culprit was a single line in AudioService.js:

this.sourceBuffer = this.mediaSource.addSourceBuffer('audio/mpeg');

Firefox simply does not support audio/mpeg in the MediaSource Extensions (MSE) API. Chrome does. The fix was to check for support first, and fall back to a simpler approach when MSE isn't available:

if (!window.MediaSource || !MediaSource.isTypeSupported('audio/mpeg')) {
    await this.setupBufferedStream(stream, response, onProgress, estimatedChunks);
    return;
}

The setupBufferedStream fallback collects all incoming audio chunks into a Blob and sets it as a plain audio.src — no MSE required, works everywhere. The patched file is saved locally and injected via podman cp rather than rebuilding the image.


Benchmarking: Does Format or Voice Matter?

With the Firefox issue sorted, I ran a proper latency benchmark across the three supported output formats and three voices, using a consistent test phrase:

"I love mediclinic, but I think there is a lot of scope for the EHR development to go awry."

Three runs per combination, stream: false, measured with Python's time.perf_counter().

By format (averaged across all voices)

Format Avg latency File size
WAV 1382 ms ~256 KB
PCM 1417 ms ~256 KB
MP3 1457 ms ~86 KB

By voice (averaged across all formats)

Voice Description Avg latency
af_heart American English female 1379 ms
bm_fable British English male 1439 ms
ef_dora Spanish female 1438 ms

The takeaway: format and voice choice barely matter for latency. The ONNX inference dominates — everything else (MP3 encoding, voice model differences) contributes at most ~80 ms. MP3 is still the right default for web playback given its file size advantage. The Spanish voice (ef_dora) performs on par with the English voices, which is a good sign for multilingual deployments.


The Streaming Mystery

The Kokoro API has a stream: true parameter. For a conversational application, this should mean the server sends the first sentence's audio while it's still generating the second — reducing perceived latency significantly. I modified the test phrase to have two clear sentences:

"I love mediclinic. But I think there is a lot of scope for the EHR development to go awry."

Then I wrote a Python probe to track exactly when each 1 KB chunk arrived at the client:

import json, time, urllib.request

TEXT = ("I love mediclinic. But I think there is a lot of scope "
        "for the EHR development to go awry.")
body = json.dumps({"input": TEXT, "voice": "af_heart",
                   "response_format": "mp3", "stream": True}).encode()
req = urllib.request.Request("http://localhost:8880/v1/audio/speech", data=body,
                             headers={"Content-Type": "application/json"})

t_start = time.perf_counter()
chunks = []
with urllib.request.urlopen(req) as resp:
    while True:
        chunk = resp.read(1024)
        if not chunk: break
        t = round((time.perf_counter() - t_start) * 1000)
        chunks.append((t, len(chunk)))

print(f"First chunk: {chunks[0][0]}ms")
print(f"Last chunk:  {chunks[-1][0]}ms")

Results for stream: true, af_heart, MP3:

First chunk: 1462ms
Last chunk:  1464ms
Chunks: 89

All 89 chunks arrived within 2 ms of each other, after a full 1.4 second wait. stream: false was identical. Even PCM format — which has zero encoder overhead — showed the same pattern. Something was buffering the entire audio before sending a single byte.


The Investigation

I spun up a duplicate container, kokoro-stream, on port 8881 as an isolated sandbox, and set about tracing the pipeline. The server code is actually well-architected: async generators and yield statements all the way from the HTTP handler down to the ONNX inference layer. The StreamingResponse even sets X-Accel-Buffering: no. On paper, it should stream.

I identified three hypotheses:

Hypothesis Evidence for
H1 ONNX inference batches both sentences as one call PCM (no encoder) also shows simultaneous delivery
H2 Uvicorn buffers the response body below a threshold No asyncio yield points between sentence yields
H3 PyAV MP3 encoder buffers early frames Secondary — can't explain PCM behaviour

What the code actually does

Inside tts_service.py, smart_split() splits the input text into chunks before inference — good. But it batches sentences together when their combined token count is under 250 tokens. The two-sentence test input is only 105 tokens, so both sentences were delivered as a single string to KokoroV1.generate().

Inside kokoro_v1.py, the pipeline was called with split_pattern=r'\n+' — meaning it would only split on newlines. Since there were no newlines, both sentences went through a single ONNX inference call and produced a single audio yield. No amount of async wiring downstream could fix that.

Even if the sentences had been processed separately, the for result in pipeline(...) loop is synchronous — it never returns control to the asyncio event loop between sentences, so the HTTP layer has no opportunity to flush.

The fix

Two minimal changes to kokoro-stream only:

inference/kokoro_v1.py — change the pipeline split pattern to break on sentence-ending punctuation:

# before
split_pattern=r'\n+'
# after
split_pattern=r'(?<=[.!?])\s+'

inference/kokoro_v1.py and services/tts_service.py — add asyncio yield points between sentence yields:

yield AudioChunk(...)
await asyncio.sleep(0)  # return control to event loop → HTTP layer can flush

Before and after

Metric Before After
First chunk (TTFA) ~1400 ms ~575 ms
Last chunk ~1400 ms ~1400 ms
Gap ~2 ms ~1100 ms

First sentence audio now arrives at the client at ~575 ms while the second sentence is still being synthesised. Total generation time is unchanged — we're not making the model faster, we're just not making the user wait for everything before delivering anything.


Setup

Both containers are now accessible via .lan hostnames using Caddy as a reverse proxy:

URL Container Port Notes
https://kokoro-web.lan kokoro-tts 8880 Production
https://kokoro-stream.lan kokoro-stream 8881 Streaming-optimised

Open WebUI is configured to use the production container at port 8880. The streaming container is available for direct use and API calls where lower TTFA matters.


Reflections

A few things worth noting from this session:

The architecture was already correct. The Kokoro FastAPI codebase uses async generators properly throughout — the issue wasn't bad design, it was two small configuration defaults that compounded badly for short inputs. The token batching threshold (250 tokens) and the newline-only split pattern made sense in isolation but combined to eliminate sentence-level streaming entirely for typical conversational inputs.

PCM as a diagnostic tool. Benchmarking PCM format (raw samples, no encoding) alongside MP3 was valuable precisely because it let us eliminate the audio encoder as a suspect early. When PCM and MP3 showed identical behaviour, we knew the bottleneck was upstream of the encoder.

asyncio.sleep(0) is surprisingly powerful. A zero-duration sleep doesn't actually sleep — it just yields control back to the event loop. That's enough to let uvicorn flush pending response bytes to the socket. It's a one-line fix with a meaningful impact on perceived latency.

The full benchmark data, pipeline analysis, and change logs are all documented if you want to replicate this setup.


Running Podman on Ubuntu 24.04. Kokoro FastAPI image: ghcr.io/remsky/kokoro-fastapi-cpu:latest. Voices used: af_heart, bm_fable, ef_dora.