15 April 2026

Kokoro Streaming Latency Investigation on Firefox

Making Local TTS Actually Stream: Fixing Kokoro FastAPI for Real-Time Audio

If you’ve been following along with my local AI setup, you’ll know I run most of my services in Proxmox VE LXC or Podman containers.  One of those is Kokoro, a self-hosted text-to-speech instance based on the Kokoro-82M ONNX model.  The generated audio is surprisingly good, and the model is small enough that inference is fast even on a CPU.  There are fifty-nine voices covering a handful of languages, and, importantly, it exposes an OpenAI-compatible API that plugs straight into Open WebUI.

On its own that would not have warranted a blog post, because it's just a text-to-speech engine in a repo, but, no surprise, what started as a simple Firefox bug fix turned into a streaming pipeline investigation: the usual benchmarks, agent-assisted code analysis, a duplicate container sandbox, and a fix that meaningfully reduces time-to-first-audio for conversational use cases.


The Firefox Bug

It worked fine in Chrome, but threw this error in Firefox when clicking Generate Speech:

MediaSource.addSourceBuffer: Type not supported in MediaSource

The culprit was a single line in AudioService.js:

this.sourceBuffer = this.mediaSource.addSourceBuffer('audio/mpeg');

It turns out Firefox does not support audio/mpeg in the MediaSource Extensions (MSE) API.  The fix was to test for support and fall back when MSE is not available:

if (!window.MediaSource || !MediaSource.isTypeSupported('audio/mpeg')) {
    await this.setupBufferedStream(stream, response, onProgress, estimatedChunks);
    return;
}

The setupBufferedStream fallback collects all incoming audio chunks into a Blob and sets it as a plain audio.src.  No MSE required, and it works everywhere.  Rather than rebuilding the image, the patched file was saved locally and injected using podman cp.

Benchmarking: Does Format or Voice Matter?

With the Firefox issue sorted, I ran a latency benchmark, then another and another, across three formats (MP3, PCM and WAV) and three voices.  The phrase was short and topical, since I'm seeking some consultancy:

“Hi Mediclinic, your EHR project sounds interesting and has the potential for a lot of impact.”

Three runs per combination, stream: false, measured with Python’s time.perf_counter().
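The measurement loop can be sketched like this (a reconstruction, not the exact script; the endpoint URL, port and helper names are my assumptions):

```python
import json
import time
import urllib.request

URL = "http://localhost:8880/v1/audio/speech"  # assumed endpoint/port
PHRASE = ("Hi Mediclinic, your EHR project sounds interesting "
          "and has the potential for a lot of impact.")

def time_once(fmt: str, voice: str) -> float:
    """Latency in ms for one non-streaming synthesis call."""
    payload = {"input": PHRASE, "voice": voice,
               "response_format": fmt, "stream": False}
    req = urllib.request.Request(URL, data=json.dumps(payload).encode(),
                                 headers={"Content-Type": "application/json"})
    t0 = time.perf_counter()
    with urllib.request.urlopen(req) as resp:
        resp.read()  # wait for the complete audio body
    return (time.perf_counter() - t0) * 1000

def avg_latency(fmt: str, voice: str, runs: int = 3, timer=None) -> float:
    """Average over several runs; `timer` is injectable for testing."""
    timer = timer or time_once
    return sum(timer(fmt, voice) for _ in range(runs)) / runs
```

Three calls to time_once per format/voice pair, averaged, give the numbers below.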

By format (averaged across all voices)

Format Avg latency File size
WAV 1382 ms ~256 KB
PCM 1417 ms ~256 KB
MP3 1457 ms ~86 KB

By voice (averaged across all formats)

The available voice languages were: English, Japanese, Mandarin, Spanish, French, Hindi, Italian, and Brazilian Portuguese.

Voice Description Avg latency
af_heart American English female 1379 ms
bm_fable British English male 1439 ms
ef_dora Spanish female 1438 ms

The takeaway: format and voice choice barely matter for latency. The ONNX inference dominates — everything else (MP3 encoding, voice model differences) contributes at most ~80 ms. MP3 encoding time was minimal, and its file size advantage makes it the right choice for web playback. The Spanish voice (ef_dora) performs on par with the English voices, which is a good sign for multilingual deployments.

Can we go faster?

While reading the documentation (yes, a bad habit I developed when I was young) I spotted that their API has a stream: true parameter.  For a conversational application this would be useful, and surely a simple switch to enable...  You can probably guess that I was being naive: I thought I could just enable the flag and the server would stream the audio during generation, reducing perceived latency.  It turns out that streaming works at the sentence level, so I split the test phrase to start with a nice short initial sentence:

“Hi Mediclinic.  Your EHR project sounds interesting and has the potential for a lot of impact.”

Then Claude wrote some Python to track exactly when each 1 KB chunk arrived at the client (the request setup shown here is reconstructed from the API calls elsewhere in this post):

import json, time, urllib.request

TEXT = "Hi Mediclinic.  Your EHR project sounds interesting and has the potential for a lot of impact."
payload = {"input": TEXT, "voice": "af_heart", "response_format": "mp3", "stream": True}
req = urllib.request.Request(
    "http://localhost:8880/v1/audio/speech",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)

t_start = time.perf_counter()
chunks = []
with urllib.request.urlopen(req) as resp:
    while True:
        chunk = resp.read(1024)
        if not chunk: break
        t = round((time.perf_counter() - t_start) * 1000)
        chunks.append((t, len(chunk)))

print(f"First chunk: {chunks[0][0]}ms")
print(f"Last chunk:  {chunks[-1][0]}ms")

Results for stream: true, af_heart, MP3:

First chunk: 1462ms
Last chunk:  1464ms
Chunks: 89

All chunks arrived within 2 ms of each other, after a full 1.4 second wait.  stream: false was identical, and even PCM, which has zero encoder overhead, showed the same pattern.  Eh?  This didn't seem right.  What was going on?  Was something buffering the entire audio before a single byte was sent?

The Rabbit Hole

I made a copy of the base container called kokoro-stream, on port 8881, as an isolated sandbox for Claude to play with.  The server code uses async generators and yield statements all the way from the HTTP handler down to the ONNX inference layer, which is good practice.  The StreamingResponse even sets X-Accel-Buffering: no, so on paper it should stream.

Three hypotheses:


Hypothesis Evidence for
H1 ONNX inference batches both sentences as one call PCM (no encoder) also shows simultaneous delivery
H2 Uvicorn buffers the response body below a threshold No asyncio yield points between sentence yields
H3 PyAV MP3 encoder buffers early frames Secondary — can’t explain PCM behaviour

What the code actually does

Inside tts_service.py, smart_split() splits the input text into chunks before inference, which is good.  However, it batches sentences together when their combined token count is under 250 tokens.  Guess what?  The two-sentence test is only 105 tokens, so both sentences were delivered as a single string to KokoroV1.generate().
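The batching behaviour amounts to something like the following (a simplified illustration, not the actual smart_split() implementation; the count_tokens helper is a stand-in for the real tokeniser):

```python
MAX_TOKENS = 250  # combined-token threshold below which sentences are batched

def count_tokens(sentence: str) -> int:
    # Stand-in tokeniser: the real code counts model tokens, not words
    return len(sentence.split())

def batch_sentences(sentences: list[str]) -> list[str]:
    """Greedily merge consecutive sentences while under the token budget."""
    batches, current, current_tokens = [], [], 0
    for s in sentences:
        t = count_tokens(s)
        if current and current_tokens + t > MAX_TOKENS:
            batches.append(" ".join(current))
            current, current_tokens = [], 0
        current.append(s)
        current_tokens += t
    if current:
        batches.append(" ".join(current))
    return batches

# Two short sentences fall well under the threshold, so they emerge as ONE batch
print(batch_sentences(["Hi Mediclinic.", "Your EHR project sounds interesting."]))
```

Under the threshold, downstream code only ever sees one string, so sentence-level streaming never gets a chance.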

Inside kokoro_v1.py, the pipeline was called with split_pattern=r'\n+', meaning it would only split on newlines, not on full stops.  And since there were no newlines, both sentences went through as a single inference call producing a single audio yield.  No amount of downstream async would fix that.

Even if the sentences had been processed separately, the for result in pipeline(...) loop is synchronous and never returns control to the asyncio event loop between sentences, so the HTTP layer has no opportunity to flush.

The Fix

Two changes:

inference/kokoro_v1.py 

Change the split pattern to break on full stops and other sentence-ending punctuation:

# before
split_pattern=r'\n+'
# after
split_pattern=r'(?<=[.!?])\s+'
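A quick check with Python's re module shows what the new pattern does (a standalone illustration, not code from the repo):

```python
import re

text = "Hi Mediclinic.  Your EHR project sounds interesting and has the potential for a lot of impact."

# Old pattern: no newlines in the input, so nothing is split
print(re.split(r'\n+', text))      # the whole text comes back as one element

# New pattern: a lookbehind for sentence-ending punctuation, then whitespace
print(re.split(r'(?<=[.!?])\s+', text))  # splits into two sentences
```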

inference/kokoro_v1.py and services/tts_service.py 

Add yield points:

yield AudioChunk(...)
await asyncio.sleep(0)  # return control to event loop → HTTP layer can flush
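A standalone toy (not repo code) shows why the zero-duration sleep matters: with it, another task, standing in here for the HTTP layer flushing bytes, gets to run between yields instead of waiting for the whole loop to finish:

```python
import asyncio

events = []

async def producer():
    for i in range(3):
        events.append(f"yield {i}")  # analogous to yielding an AudioChunk
        await asyncio.sleep(0)       # hand control back to the event loop

async def flusher():
    # Stands in for the HTTP layer getting a chance to flush pending bytes
    for _ in range(3):
        events.append("flush")
        await asyncio.sleep(0)

async def main():
    await asyncio.gather(producer(), flusher())

asyncio.run(main())
print(events)  # yields and flushes interleave instead of running back-to-back
```

Remove the sleep in producer() and all three yields happen before the flusher ever runs, which is exactly the buffering behaviour observed above.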

Before and after: Time To First Audio (TTFA)

Metric Before After
First chunk (TTFA) ~1400 ms ~575 ms
Last chunk ~1400 ms ~1400 ms
Gap ~2 ms ~1100 ms

The first audio now arrives after ~575 ms, while the second sentence is still being generated.  The total generation time is unchanged, unsurprisingly, but the perceived latency is lower, and that is just what is needed for conversational applications, like calling several service centres to ask about the availability and costs of servicing a car.  I was surprised that online systems here in South Africa don't show available slots; instead they are lead generation, and a human sends an email or calls you a few hours later.

Conclusion

A few things worth noting:

The architecture.  Kokoro uses async generators throughout, so the issue wasn't bad design; it was two small configuration defaults that affect short inputs.  The token batching threshold (250 tokens) and the newline-only split pattern made sense in isolation, but together they eliminated sentence-level streaming for my test input.

PCM as a diagnostic tool.  Benchmarking PCM format (raw samples, no encoding) alongside MP3 was valuable because it identified and eliminated the audio encoder as a suspect early.  When PCM and MP3 show similar timings, the bottleneck is upstream of the encoder.

asyncio.sleep(0) is surprisingly powerful. A zero-duration sleep doesn't actually sleep; it yields control to the event loop.  That's enough for uvicorn to flush pending response bytes to the socket.  It's a one-liner with a meaningful impact on perceived latency.


Podman on Ubuntu 24.04. 

Kokoro image: ghcr.io/remsky/kokoro-fastapi-cpu:latest

Voices used: af_heart, bm_fable, ef_dora.

LiteLLM + Agent Teams: A Practical Guide


An aide-mémoire for using the local AI infrastructure day-to-day.


The big picture

You have three layers:

Your task (plain English)
        ↓
  Agent team (Python, OpenAI Agents SDK)
        ↓
  LiteLLM proxy  ←→  Ollama (local GPU)
                 ←→  OpenRouter (cloud free)
                 ←→  Anthropic (claude-haiku)

LiteLLM is a translation layer. It gives everything a single OpenAI-compatible URL (http://10.140.20.63:4000/v1) regardless of whether the model is running locally on your GPU or fetched from a cloud provider. Your code never changes — only the model name string changes.

The agent team is a set of specialised AI workers. You give the orchestrator a task in plain English; it decides which specialist to hand it to; the specialist does the work and hands results back.


Part 1 — Using LiteLLM directly

From the command line (curl)

# Ask any model a question
curl http://10.140.20.63:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer no-key-needed" \
  -d '{
    "model": "qwen3.5:4b",
    "messages": [{"role": "user", "content": "What is a BGP route reflector?"}]
  }'

# List all available models
curl http://10.140.20.63:4000/v1/models | python3 -m json.tool | grep '"id"'

From Python (OpenAI SDK)

from openai import OpenAI

client = OpenAI(
    base_url="http://10.140.20.63:4000/v1",
    api_key="no-key-needed",
)

response = client.chat.completions.create(
    model="qwen3.5:4b",   # or "claude-haiku-4-5", "nemotron-120b", etc.
    messages=[{"role": "user", "content": "Summarise this log: ..."}],
)
print(response.choices[0].message.content)

Choosing a model

Use case Model string Where it runs
Quick questions, triage qwen3.5:4b Local GPU (3.4 GB)
Writing code qwen2.5-coder:7b Local GPU (4.7 GB)
General analysis qwen3.5 Local GPU (6.6 GB)
Images / screenshots qwen3-vl Local GPU (6.1 GB)
Heavy reasoning nemotron-120b Cloud free (OpenRouter)
Reliable tool calling claude-haiku-4-5 Cloud (Anthropic/OpenRouter)
Best available free free Cloud free (auto-routed)

Group aliases — if the specific model is busy or unavailable, LiteLLM falls back automatically:

Alias Primary Fallback
fast qwen3.5:4b qwen2.5-coder:1.5b
coder qwen2.5-coder:7b qwen2.5-coder:1.5b
local qwen3.5 llama3.1
reasoning nemotron-120b gpt-oss-120b

Health check

curl http://10.140.20.63:4000/health
incus exec litellm -- journalctl -u litellm -f   # live logs

Part 2 — Running the agent team

The one-liner

cd /home/user/claude/agents
.venv/bin/python team.py "your task here"

Example tasks

# Coding
.venv/bin/python team.py "write a Python script that tails a log file and alerts on ERROR lines"

# Research
.venv/bin/python team.py "what are the main CVEs in OpenSSH versions 8.x to 9.x?"

# Analysis
.venv/bin/python team.py "analyse this nmap output and prioritise the findings: [paste output]"

# Mixed — the orchestrator chains specialists automatically
.venv/bin/python team.py "research the log4shell vulnerability then write a Python checker for it"

What happens under the hood

You: "research log4shell then write a checker"
        ↓
Orchestrator (claude-haiku) reads task
        ↓
Handoff → Researcher (nemotron-120b, cloud)
  "Log4Shell is CVE-2021-44228, affects Log4j 2.0–2.14.1..."
        ↓
Back to Orchestrator → Handoff → Coder (qwen2.5-coder:7b, local GPU)
  "def check_log4shell(host, port): ..."
        ↓
Orchestrator summarises and returns to you

The orchestrator uses haiku because it reliably produces valid tool-call JSON for handoffs. Local Ollama models are fast but unreliable at structured function-calling.

Watching it work

Add LITELLM_LOG=DEBUG to see every model call:

LITELLM_LOG=DEBUG .venv/bin/python team.py "hello"

Or watch the LiteLLM proxy logs live in another terminal:

incus exec litellm -- journalctl -u litellm -f

Part 3 — Writing your own agents

Minimal single agent

import asyncio, os
os.environ["OPENAI_BASE_URL"] = "http://10.140.20.63:4000/v1"
os.environ["OPENAI_API_KEY"]  = "no-key-needed"

from agents import Agent, Runner

agent = Agent(
    name="Helper",
    model="qwen3.5:4b",
    instructions="You are a helpful assistant. Be concise.",
)

async def main():
    result = await Runner.run(agent, "What is ARP spoofing?")
    print(result.final_output)

asyncio.run(main())

Adding tools (things agents can do)

from agents import Agent, Runner, function_tool
import httpx

@function_tool
async def get_url(url: str) -> str:
    """Fetch the contents of a URL."""
    async with httpx.AsyncClient(timeout=10) as c:
        r = await c.get(url)
        return r.text[:2000]   # truncate to avoid context overflow

agent = Agent(
    name="WebReader",
    model="qwen3.5:4b",
    instructions="You can fetch URLs to answer questions.",
    tools=[get_url],
)

Rule: tools are Python functions decorated with @function_tool. The agent decides when to call them. The docstring becomes the tool description — make it clear.

Handing off between agents

from agents import Agent, Runner, handoff

specialist = Agent(
    name="Specialist",
    model="qwen3.5",
    instructions="You handle detailed analysis. Return results clearly.",
)

orchestrator = Agent(
    name="Orchestrator",
    model="claude-haiku-4-5",
    instructions="Route analysis tasks to Specialist. Summarise results.",
    handoffs=[handoff(specialist)],
)

result = await Runner.run(orchestrator, "Analyse this data: ...")

handoff() is itself a tool the orchestrator can call. When it calls it, execution transfers to the specialist; when the specialist finishes, control returns to the orchestrator.

The existing tools you can reuse

gpu_tools.py — for any agent that needs to know about the GPU:

from gpu_tools import vram_status, list_local_models, comfyui_status
agent = Agent(..., tools=[vram_status, list_local_models])

devops_tools.py — for agents that manage containers:

from devops_tools import container_run, container_write_file, container_read_file, http_probe, container_systemctl
agent = Agent(..., tools=[container_run, http_probe])

Part 4 — Practical patterns

Pattern 1: Quick one-shot query

Use make_client() from litellm_client.py directly — no agent overhead:

from litellm_client import make_client, FAST_MODEL

async def ask(question: str) -> str:
    client = make_client()
    resp = await client.chat.completions.create(
        model=FAST_MODEL,
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content

Pattern 2: Task with a deadline / retry limit

result = await Runner.run(agent, task, max_turns=10)

max_turns prevents infinite loops. The team.py orchestrator uses 40 turns because research+code tasks can take many steps.

Pattern 3: Streaming output

from agents import Runner

async for event in Runner.run_streamed(agent, task):
    if hasattr(event, "delta") and event.delta:
        print(event.delta, end="", flush=True)

Pattern 4: DevOps / automation agent

See setup_tts_stt.py as a reference. The pattern is:
1. Write a detailed task string explaining exactly what the agent should do and verify
2. Give it the right tools (container_run, http_probe, etc.)
3. Set instructions to "act immediately, don't ask permission"
4. Set max_turns=40 for multi-step work

agent = Agent(
    name="DevOps",
    model="claude-haiku-4-5",   # must use haiku — local models can't do tool-calling
    tools=[container_run, container_write_file, http_probe, container_systemctl],
    instructions="Act immediately. Never ask for permission. Verify each step.",
)
result = await Runner.run(agent, TASK, max_turns=40)

Part 5 — Gotchas and tips

Local models can't do structured tool-calling

qwen3.5, qwen2.5-coder:7b, etc. produce good prose but often garble the JSON format needed for handoff() and @function_tool calls. Always use claude-haiku-4-5 as your orchestrator — it's reliable and cheap (Anthropic free tier via OpenRouter).

Only one large model fits in VRAM at a time

The RTX 4070 has 8 GB. If you ask the orchestrator to hand off to a 6.6 GB local model while another 4.7 GB model is loaded, Ollama unloads the first one. There is a ~5–15 second cold-load delay. This is normal.

Free cloud models are rate-limited

nemotron-120b and other OpenRouter free models may queue or time out under load. If an agent stalls for >2 minutes with no output, it's usually rate-limiting. Switch to gpt-oss-120b or qwen3-80b as alternatives.

The free model alias changes

openrouter/openrouter/free routes to whatever OpenRouter considers the best free model at that moment. Good for exploration; use a specific model name for reproducible pipelines.

Ollama keep-alive

Models stay in VRAM for 15 minutes after last use (KEEP_ALIVE=15m). If you want to free VRAM immediately:

curl -X POST http://10.140.20.1:11434/api/generate -d '{"model":"qwen3.5","keep_alive":0}'

Part 6 — Agent Team in Open WebUI

The agent team is exposed as a model in Open WebUI via the Pipelines server — a small FastAPI app that sits between Open WebUI and the agent code.

Open WebUI chat
      ↓  (selects "Agent Team" model)
Pipelines server  (host: 10.140.20.1:9099)
      ↓
Agent orchestrator (claude-haiku)
      ↓  handoffs
Specialist agents (local GPU / cloud free)

Architecture files

File Purpose
agents/pipelines/agent_team.py The pipeline class — wraps the agent team
agents/run_pipelines.sh Manual start script
/etc/systemd/system/owui-pipelines.service Systemd service (starts on boot)

Managing the pipelines server

sudo systemctl status owui-pipelines
sudo systemctl restart owui-pipelines
sudo journalctl -u owui-pipelines -f

Connecting to Open WebUI (one-time setup)

  1. Open http://localhost:3001
  2. Top-right avatar → Admin Panel
  3. Settings → Connections → Pipelines
  4. Add: URL http://10.140.20.1:9099, API Key 0p3n-w3bu!
  5. Click Save — "Agent Team" now appears in the model picker

Using it

Select Agent Team in the model picker and chat normally. Each message is routed by the orchestrator to the right specialist. The full conversation history is passed so the team has context across turns.

The pipelines server API key (0p3n-w3bu!) is the default from the open-webui-pipelines package. Change it in /etc/systemd/system/owui-pipelines.service and update the Open WebUI connection setting to match.

Adding more pipelines

Drop a new .py file with a Pipeline class into agents/pipelines/, then:

sudo systemctl restart owui-pipelines

The new pipeline appears as a model in Open WebUI immediately.
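A minimal sketch of what such a file might look like. The class name and pipe() signature follow the open-webui-pipelines convention; this echo example is illustrative, not one of the files listed above:

```python
from typing import Iterator, List, Union

class Pipeline:
    def __init__(self):
        # The name shown in the Open WebUI model picker
        self.name = "Echo Pipeline"

    async def on_startup(self):
        # Called once when the pipelines server loads this file
        pass

    async def on_shutdown(self):
        pass

    def pipe(self, user_message: str, model_id: str,
             messages: List[dict], body: dict) -> Union[str, Iterator[str]]:
        # Return a string (or a generator for streaming) as the reply
        return f"echo: {user_message}"
```

The agent_team.py pipeline follows the same shape, with pipe() forwarding the conversation to the orchestrator.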


Quick reference card

# Run agent team
cd /home/user/claude/agents && .venv/bin/python team.py "task"

# Query a model directly
curl http://10.140.20.63:4000/v1/chat/completions \
  -H "Content-Type: application/json" -H "Authorization: Bearer no-key-needed" \
  -d '{"model":"qwen3.5:4b","messages":[{"role":"user","content":"hello"}]}'

# List models
curl -s http://10.140.20.63:4000/v1/models | python3 -m json.tool | grep '"id"'

# Watch LiteLLM traffic
incus exec litellm -- journalctl -u litellm -f

# Check VRAM
curl -s http://10.140.20.1:11434/api/ps | python3 -m json.tool

# Add a model to Ollama
ollama pull <model-name>
# Then add it to /etc/litellm/config.yaml and push + restart

File map

/home/user/claude/agents/
├── team.py            ← entry point — run this
├── litellm_client.py  ← model constants and URLs
├── gpu_tools.py       ← tools: vram_status, list_local_models, comfyui_status
├── devops_tools.py    ← tools: container_run, container_write_file, http_probe, ...
├── setup_tts_stt.py   ← reference: single-purpose DevOps agent
└── .venv/             ← virtualenv (openai-agents, openai)

/etc/litellm/
├── config.yaml        ← model list (edit on host, push to container)
└── secrets.env        ← OPENROUTER_API_KEY

CPU vs. GPU: Is Hardware Acceleration Always Faster for Real-Time TTS?


Following up on my last post about fixing progressive streaming in Kokoro FastAPI, I decided to take things a step further. If the goal is minimizing latency for a conversational AI assistant, shouldn't throwing a dedicated GPU at the problem make it even faster?

I spent the afternoon duplicating my streaming container and configuring it to run on a local NVIDIA GeForce RTX 4070 (8GB). The results were... surprising. It turns out that for real-time, sentence-by-sentence streaming, "faster" hardware doesn't always translate to a better user experience.


The Setup: Moving to Incus and CUDA

While my previous tests were in Podman, I've recently moved to Incus for better resource management. I duplicated the kokoro-stream container to a new sandbox named kokoro-stream-gpu and passed through the GPU:

incus config device add kokoro-stream-gpu mygpu gpu uid=1000 gid=1000
incus config set kokoro-stream-gpu nvidia.runtime true
incus config set kokoro-stream-gpu nvidia.driver.capabilities compute,utility,video

Inside the container, I switched the backend from the ONNX CPU runtime to the PyTorch GPU version. I also had to port over the same split_pattern and asyncio.sleep(0) fixes from the last session to ensure I was comparing apples to apples (sentence-level streaming vs. sentence-level streaming).


The Benchmark: Short vs. Long Form

I ran two tests using the British English male voice (bm_fable): one with a short two-sentence phrase (~90 chars) and one with the full text of my last blog post (~8,700 chars).

Metric CPU (ONNX) GPU (RTX 4070) Speedup
TTFA (Short Text) ~557 ms ~508 ms 1.1x
Total Time (Long Text) ~289 s ~15 s 19.2x
Throughput (Long Text) ~30 char/s ~580 char/s 19.2x
System RAM Usage 1.21 GiB 1.92 GiB -
Video RAM (VRAM) 0 MB ~850 MB -

Reflections: When is the GPU worth it?

The results tell two very different stories depending on what you're doing.

1. Conversational AI (Short Sentences)

If you're building a real-time voice assistant that speaks one or two sentences at a time, the CPU is the clear winner. The Time to First Audio (TTFA) is virtually identical because the overhead of initializing the GPU pipeline eats up any compute gains. For this use case, the GPU is just an expensive way to use more RAM.

2. Long-Form Content (Articles, Blog Posts)

This is where the RTX 4070 absolutely screams. When I threw the full 8,700-character blog post at it, the GPU version finished the entire synthesis in 15 seconds. The CPU version was still grinding away at nearly the 5-minute mark.

At 580 characters per second, the GPU isn't just "faster"—it changes the nature of the service. You can listen to an entire article almost as soon as you click "Generate."

The Verdict

  • Stick with CPU for: Open WebUI, chatbots, home assistants, and low-RAM servers.
  • Switch to GPU for: Audiobook generation, long-form reading, or high-concurrency environments.

The kokoro-stream-gpu container is now my go-to for "reading" long documentation, while the CPU version remains my daily driver for conversational chat.


The Evidence: Benchmarking Code

To keep things evidence-based, here is the Python script used to capture these metrics. It probes the streaming API and measures exactly when the first and last chunks arrive.

1. Throughput & Latency Probe (benchmark_long.py)

import time
import requests

# Ports
GPU_URL = "http://localhost:8881/v1/audio/speech"
CPU_URL = "http://localhost:8882/v1/audio/speech"

# Load long text
with open("blog_post.md", "r") as f:
    LONG_TEXT = f.read()

def run_benchmark(name, url):
    print(f"\n--- Benchmarking {name} ---")
    start_time = time.time()
    first_chunk_time = None

    payload = {
        "input": LONG_TEXT,
        "voice": "bm_fable",
        "response_format": "mp3",
        "stream": True
    }

    with requests.post(url, json=payload, stream=True) as r:
        r.raise_for_status()
        for chunk in r.iter_content(chunk_size=1024):
            if chunk and first_chunk_time is None:
                first_chunk_time = time.time() - start_time

        total_time = time.time() - start_time

    return {
        "ttfa_ms": round(first_chunk_time * 1000, 2),
        "total_s": round(total_time, 2),
        "char_s": round(len(LONG_TEXT) / total_time, 2)
    }

if __name__ == "__main__":
    for name, url in [("GPU", GPU_URL), ("CPU", CPU_URL)]:
        print(run_benchmark(name, url))

2. Evidence Audio Generation (generate_evidence.py)

import requests
import hashlib

# Same long text as in benchmark_long.py
with open("blog_post.md", "r") as f:
    LONG_TEXT = f.read()

def generate_and_hash(url, filename):
    r = requests.post(url, json={"input": LONG_TEXT, "voice": "bm_fable"})
    with open(filename, "wb") as f:
        f.write(r.content)
    return hashlib.md5(r.content).hexdigest()

# Results:
# CPU Hash: a22fe5e4d70a2888d755e0f8df7dae8f
# GPU Hash: e5ccba5c22ef3edf594aabaa2c08bb5f

Running Incus on Ubuntu 24.04. Hardware: NVIDIA GeForce RTX 4070 8GB. Frameworks: ONNX Runtime (CPU) vs. PyTorch 2.6+CUDA 12.4 (GPU).

Making Local TTS Actually Stream: Fixing Kokoro FastAPI for Real-Time Audio

Making Local TTS Actually Stream: Fixing Kokoro FastAPI for Real-Time Audio

If you've been following along with my local AI setup, you'll know I run most of my services in Podman containers on a home server — Ollama, Open WebUI, Whisper, and a handful of other tools. One of those is Kokoro FastAPI, a self-hosted text-to-speech server based on the Kokoro-82M ONNX model. It produces surprisingly good speech, supports multiple voices and languages, and exposes an OpenAI-compatible API that plugs straight into Open WebUI.

This post covers a productive session where what started as a simple Firefox bug turned into a full streaming pipeline investigation — with benchmarks, a duplicate container sandbox, and a fix that meaningfully reduced time-to-first-audio for conversational use cases.


The Firefox Bug

First thing first: the web UI at kokoro-web.lan worked fine in Chrome but threw this in Firefox when you clicked Generate Speech:

MediaSource.addSourceBuffer: Type not supported in MediaSource

The culprit was a single line in AudioService.js:

this.sourceBuffer = this.mediaSource.addSourceBuffer('audio/mpeg');

Firefox simply does not support audio/mpeg in the MediaSource Extensions (MSE) API. Chrome does. The fix was to check for support first, and fall back to a simpler approach when MSE isn't available:

if (!window.MediaSource || !MediaSource.isTypeSupported('audio/mpeg')) {
    await this.setupBufferedStream(stream, response, onProgress, estimatedChunks);
    return;
}

The setupBufferedStream fallback collects all incoming audio chunks into a Blob and sets it as a plain audio.src — no MSE required, works everywhere. The patched file is saved locally and injected via podman cp rather than rebuilding the image.


Benchmarking: Does Format or Voice Matter?

With the Firefox issue sorted, I ran a proper latency benchmark across the three supported output formats and three voices, using a consistent test phrase:

"I love mediclinic, but I think there is a lot of scope for the EHR development to go awry."

Three runs per combination, stream: false, measured with Python's time.perf_counter().

By format (averaged across all voices)

Format Avg latency File size
WAV 1382 ms ~256 KB
PCM 1417 ms ~256 KB
MP3 1457 ms ~86 KB

By voice (averaged across all formats)

Voice Description Avg latency
af_heart American English female 1379 ms
bm_fable British English male 1439 ms
ef_dora Dutch female 1438 ms

The takeaway: format and voice choice barely matter for latency. The ONNX inference dominates — everything else (MP3 encoding, voice model differences) contributes at most ~80 ms. MP3 is still the right default for web playback given its file size advantage. The Dutch voice (ef_dora) performs on par with the English voices, which is a good sign for multilingual deployments.


The Streaming Mystery

The Kokoro API has a stream: true parameter. For a conversational application, this should mean the server sends the first sentence's audio while it's still generating the second — reducing perceived latency significantly. I modified the test phrase to have two clear sentences:

"I love mediclinic. But I think there is a lot of scope for the EHR development to go awry."

Then I wrote a Python probe to track exactly when each 1 KB chunk arrived at the client:

t_start = time.perf_counter()
chunks = []
with urllib.request.urlopen(req) as resp:
    while True:
        chunk = resp.read(1024)
        if not chunk: break
        t = round((time.perf_counter() - t_start) * 1000)
        chunks.append((t, len(chunk)))

print(f"First chunk: {chunks[0][0]}ms")
print(f"Last chunk:  {chunks[-1][0]}ms")

Results for stream: true, af_heart, MP3:

First chunk: 1462ms
Last chunk:  1464ms
Chunks: 89

All 89 chunks arrived within 2 ms of each other, after a full 1.4 second wait. stream: false was identical. Even PCM format — which has zero encoder overhead — showed the same pattern. Something was buffering the entire audio before sending a single byte.


The Investigation

I spun up a duplicate container, kokoro-stream, on port 8881 as an isolated sandbox, and set about tracing the pipeline. The server code is actually well-architected: async generators and yield statements all the way from the HTTP handler down to the ONNX inference layer. The StreamingResponse even sets X-Accel-Buffering: no. On paper, it should stream.

I identified three hypotheses:

Hypothesis Evidence for
H1 ONNX inference batches both sentences as one call PCM (no encoder) also shows simultaneous delivery
H2 Uvicorn buffers the response body below a threshold No asyncio yield points between sentence yields
H3 PyAV MP3 encoder buffers early frames Secondary — can't explain PCM behaviour

What the code actually does

Inside tts_service.py, smart_split() splits the input text into chunks before inference — good. But it batches sentences together when their combined token count is under 250 tokens. The two-sentence test input is only 105 tokens, so both sentences were delivered as a single string to KokoroV1.generate().

Inside kokoro_v1.py, the pipeline was called with split_pattern=r'\n+' — meaning it would only split on newlines. Since there were no newlines, both sentences went through a single ONNX inference call and produced a single audio yield. No amount of async wiring downstream could fix that.

Even if the sentences had been processed separately, the for result in pipeline(...) loop is synchronous — it never returns control to the asyncio event loop between sentences, so the HTTP layer has no opportunity to flush.

The fix

Two minimal changes to kokoro-stream only:

inference/kokoro_v1.py — change the pipeline split pattern to break on sentence-ending punctuation:

# before
split_pattern=r'\n+'
# after
split_pattern=r'(?<=[.!?])\s+'

inference/kokoro_v1.py and services/tts_service.py — add asyncio yield points between sentence yields:

yield AudioChunk(...)
await asyncio.sleep(0)  # return control to event loop → HTTP layer can flush

Before and after

Metric Before After
First chunk (TTFA) ~1400 ms ~575 ms
Last chunk ~1400 ms ~1400 ms
Gap ~2 ms ~1100 ms

First sentence audio now arrives at the client at ~575 ms while the second sentence is still being synthesised. Total generation time is unchanged — we're not making the model faster, we're just not making the user wait for everything before delivering anything.


Setup

Both containers are now accessible via .lan hostnames using Caddy as a reverse proxy:

URL Container Port Notes
https://kokoro-web.lan kokoro-tts 8880 Production
https://kokoro-stream.lan kokoro-stream 8881 Streaming-optimised

Open WebUI is configured to use the production container at port 8880. The streaming container is available for direct use and API calls where lower TTFA matters.


Reflections

A few things worth noting from this session:

The architecture was already correct. The Kokoro FastAPI codebase uses async generators properly throughout — the issue wasn't bad design, it was two small configuration defaults that compounded badly for short inputs. The token batching threshold (250 tokens) and the newline-only split pattern made sense in isolation but combined to eliminate sentence-level streaming entirely for typical conversational inputs.

PCM as a diagnostic tool. Benchmarking PCM format (raw samples, no encoding) alongside MP3 was valuable precisely because it let us eliminate the audio encoder as a suspect early. When PCM and MP3 showed identical behaviour, we knew the bottleneck was upstream of the encoder.

asyncio.sleep(0) is surprisingly powerful. A zero-duration sleep doesn't actually sleep — it just yields control back to the event loop. That's enough to let uvicorn flush pending response bytes to the socket. It's a one-line fix with a meaningful impact on perceived latency.
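The effect is easy to demonstrate: with await asyncio.sleep(0) between iterations, two coroutines interleave neatly; without it, the first would run to completion before the second ever got scheduled. A minimal sketch:

```python
import asyncio

order = []

async def worker(name):
    for i in range(3):
        order.append(f"{name} {i}")
        await asyncio.sleep(0)  # no actual delay: just requeue and yield

async def main():
    # Two tasks started together; sleep(0) lets them alternate.
    await asyncio.gather(worker("A"), worker("B"))

asyncio.run(main())
print(order)  # ['A 0', 'B 0', 'A 1', 'B 1', 'A 2', 'B 2']
```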

The full benchmark data, pipeline analysis, and change logs are all documented if you want to replicate this setup.


Running Podman on Ubuntu 24.04. Kokoro FastAPI image: ghcr.io/remsky/kokoro-fastapi-cpu:latest. Voices used: af_heart, bm_fable, ef_dora.

Speed vs Quality: Comparing LFM2, Qwen3, and DeepSeek-R1 on a Local APU


The Question

In a previous post I found that qwen3:30b-a3b runs at 12 TPS on my AMD APU (Ryzen 5800U, 64GB DDR4), beating every other model on the system by exploiting Mixture-of-Experts to keep active parameters low. Then lfm2:24b-a2b arrived and hit 17.8 TPS — 47% faster — using a hybrid SSM+MoE architecture.

But tokens per second is only half the story. A model that answers faster but worse is not a better model. This post compares three architecturally distinct models on the same hardware across reasoning, synthesis, and analytical tasks.


The Contenders

Model Architecture TPS (Matt-Mini) What makes it distinctive
lfm2:24b-a2b Hybrid SSM+MoE 17.8 No KV cache (SSM), ~2B active params
qwen3:30b-a3b MoE Transformer 12.0 3B active params, built-in thinking mode
deepseek-r1:14b Dense Transformer 3.1 Reinforcement-learning trained reasoner

All running on Matt-Mini: AMD Ryzen 5800U, 64GB DDR4 (~50 GB/s bandwidth), AMD Vega 8 iGPU via Ollama's Vulkan backend.


Test 1: Formal Logic (Einstein's Puzzle)

The classic five-houses logic puzzle — 15 clues, five attributes per house, one correct solution. This tests structured multi-step deduction: can the model hold state, backtrack when it hits a contradiction, and reach a definite answer?

LFM2:24b-a2b — 17.8 TPS

LFM2 engaged immediately with a clear systematic approach, correctly anchoring on the fixed clues (Norwegian in house 1, house 3 drinks milk, house 2 is blue), then working through colour placement. Crucially, when it hit a contradiction — incorrectly placing green at house 3 which conflicted with the coffee/milk clue — it identified the error and self-corrected:

"Wait — green house is house 3, drinks milk — but clue 5 says green house drinks coffee. Contradiction! So our earlier assumption must be wrong. Let's recheck green/white placement."

It then correctly revised to green=4, white=5, and continued building the solution. At 2000 tokens it was still mid-deduction (the puzzle typically requires 2500–3500 tokens to complete), but the reasoning quality was coherent throughout with no hallucinated leaps. The self-correction under contradiction is the key capability here — many smaller models simply assert a wrong answer.

Qwen3:30b-a3b — 12.0 TPS

Qwen3's thinking mode is enabled by default. Given a 2000-token budget, the model consumed all 2000 tokens in internal reasoning and produced no visible output. This is not a failure of reasoning — it is a consequence of extended chain-of-thought: Qwen3 thinks before it writes, and for a complex puzzle, that thinking budget was exhausted before the answer appeared.

The practical implication: At 12 TPS, giving Qwen3 enough tokens to complete a hard logic problem (say 4000 tokens total — 2000 thinking, 2000 answer) means waiting ~5–6 minutes with nothing visible on screen until the model finishes thinking. For interactive use, this requires either disabling thinking mode (/no_think suffix or think: false in the API) or accepting the latency.

DeepSeek-R1:14b — 3.1 TPS

Same issue, worse TPS. At 3.1 TPS, 4000 tokens takes 21 minutes. DeepSeek-R1 is a dedicated reasoning model — the chain-of-thought is the point — but on bandwidth-constrained hardware the combination of dense architecture (no MoE savings), slow TPS, and long thinking chains makes it painful for interactive reasoning tasks. It is the right tool for problems where correctness matters more than speed and you are willing to wait.


The Thinking-Model Problem on Slow Hardware

This test uncovered a fundamental tension that does not appear in benchmark numbers:

Thinking models have a token overhead that multiplies with your hardware's latency.

On a GPU running at 50 TPS, Qwen3 spending 1000 tokens on internal reasoning costs 20 seconds. On an APU at 12 TPS, the same thinking costs 83 seconds — and you see nothing during that time. DeepSeek-R1 at 3.1 TPS spending 2000 tokens thinking costs 10.7 minutes of silence.
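The arithmetic is worth making explicit, since it is the whole argument (figures from this post):

```python
def silence_seconds(thinking_tokens: int, tps: float) -> float:
    """Time with nothing on screen while a thinking model reasons."""
    return thinking_tokens / tps

print(silence_seconds(1000, 50.0))      # GPU at 50 TPS: 20.0 s
print(silence_seconds(1000, 12.0))      # APU at 12 TPS: ~83 s
print(silence_seconds(2000, 3.1) / 60)  # DeepSeek-R1 at 3.1 TPS: ~10.7 min
```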

This creates a counterintuitive outcome: LFM2, which does not use extended chain-of-thought by default, feels smarter in interactive use on slow hardware — not because it reasons better, but because it shows its work in real-time rather than batching it invisibly. The perceived quality advantage is partly architectural and partly latency psychology.

For batch or automated tasks (where you submit a prompt and retrieve the answer later), thinking models regain their advantage. For interactive research assistance where you are iterating on prompts, fast transparent reasoning often beats slow invisible reasoning.


Test 2: Research Synthesis

Prompt: Advise on running the best LLM for general-purpose research on an AMD APU (Ryzen 5800U, 64GB DDR4, ~50 GB/s). Compare (a) large MoE with few active params, (b) hybrid SSM+MoE, (c) small dense transformer. Cover speed, quality, context handling, and give a recommendation.

Note: Qwen3 and DeepSeek-R1 were run with thinking disabled (think: false) so their responses are direct rather than chain-of-thought.

LFM2:24b-a2b — 17.8 TPS

LFM2 gave a well-structured response covering all three categories with accurate technical descriptions. On MoE it correctly identified the active-parameter advantage and low-bandwidth benefit. On SSM+MoE it correctly described the KV cache elimination benefit. On dense transformers it correctly explained the bandwidth bottleneck.

One notable weakness: it mentioned "AMD XDNA or ROCm kernels for optimal speed" — technically inaccurate for an Ollama/Vulkan setup. It conflated the architecture's theoretical requirements with platform-specific details it was uncertain about. This is a common failure mode in synthesis tasks: confident-sounding specifics that are slightly wrong.

Qwen3:30b-a3b — 12.0 TPS (thinking disabled)

With thinking disabled, Qwen3 produced a response that read as an externalised internal monologue — thinking-as-prose rather than a structured answer. It walked through the problem step-by-step in first person ("Let me break this down... Wait, the user said..."), ultimately recommending a small dense model like Mistral 7B.

This recommendation is wrong for this specific hardware, and interestingly, Qwen3 is itself running on that hardware. It gave generic advice calibrated to typical CPU inference bottlenecks rather than the actual measured results. It also estimated llama.cpp at "15–25 TPS on a 5800U for a 7B Q4_K_M model": a reasonable figure for CPU-only inference, but one that ignores the iGPU Vulkan backend behind the measured 12 TPS for a 30B MoE model.

The knowledge cutoff problem: Qwen3 has no training data on LFM2 or the specific benchmark results for this hardware. It gave the best answer it could from general principles, and those principles pointed it toward the wrong conclusion. The model recommended a 7B dense model when the empirical data shows a 24B hybrid SSM+MoE outperforms it by a significant margin on this exact configuration.

DeepSeek-R1:14b — 3.1 TPS (thinking disabled)

DeepSeek-R1's response was clean and well-structured, but contained a clear hallucination: it described SSM as "Switched Sparse Memory" — a fabricated expansion of the acronym. SSM stands for State Space Model, a fundamentally different concept.

Its recommendation — large MoE transformer — was directionally correct, citing that "~50 GB/s is high bandwidth." This is precisely backwards: 50 GB/s is the bottleneck on this hardware, not an advantage. The model interpreted the bandwidth figure in absolute terms rather than relative to the model's memory requirements.

Despite these errors, the structural reasoning was sound: MoE's active-parameter reduction is the right lever to pull on bandwidth-constrained hardware. It got to the right answer via partially wrong reasoning.


Test 3: Critical Analysis

Prompt: Analyse the claim: "Q4 always reduces model quality compared to Q8, so for serious research you should always use Q8." Give a rigorous assessment.

This is a well-defined analytical task with a clear correct answer. All three models had relevant training data. This is where quality differences should be most visible.

LFM2:24b-a2b — 17.8 TPS

LFM2 identified three flaws in the claim:

  1. Universal claim without qualification — Q4 vs Q8 impact depends on architecture, task, calibration, and hardware. "Always" is unjustified.
  2. Ignores hardware acceleration — AVX_VNNI provides INT8 dot product acceleration; on AVX_VNNI CPUs, Q8_0 carries no compute overhead, changing the tradeoff entirely.
  3. Task-specific sensitivity — Coarse reasoning is less sensitive to precision loss than fine-grained factual recall.

The hardware-specific point about AVX_VNNI is precisely the kind of nuance that matters for the actual decision. This was accurate and practically useful.

Qwen3:30b-a3b — 12.0 TPS (thinking disabled)

Qwen3 gave the most thorough response of the three, leading with the logical structure: the word "always" is a universal quantifier that is falsified by a single counterexample. It cited concrete benchmark evidence — Q4 on Llama 3 and Mistral models incurs less than 1% absolute accuracy loss on MMLU vs Q8 — and identified multiple factors the claim ignores: quantisation-aware training, task sensitivity, model architecture, and implicit regularisation effects.

It also identified a counterintuitive scenario: some models fine-tuned with quantisation-aware training show no significant degradation at Q4, or in edge cases slight improvements due to regularisation. This is a more complete analysis than LFM2's.

The response was cut off mid-sentence at the 1000-token limit, suggesting there was more to come. The quality of what was produced was high.

DeepSeek-R1:14b — 3.1 TPS (thinking disabled)

DeepSeek-R1 produced the most concise response. It correctly identified the binary framing as the flaw (Q4 offers 16 values, Q8 offers 256 — but this alone doesn't determine quality impact), and noted that the practical effect depends on the model and task. The response was shorter and less detailed than the others, but accurate within its scope.

At 3.1 TPS, the time cost of a thorough analysis at DeepSeek-R1's depth is high. For tasks where analysis quality scales with thoroughness, the slow TPS compounds against you.


What the Tests Reveal

On factual accuracy

All three models made errors on the synthesis task — LFM2 got a platform detail wrong, Qwen3 gave the wrong recommendation, DeepSeek-R1 hallucinated an acronym expansion. The analysis task showed higher accuracy across all three, because the claim being analysed is well within their training distribution. Models are more reliable on tasks that resemble their training data. Novel hardware configurations and cutting-edge model architectures fall outside that zone.

On reasoning quality

For the logic puzzle, LFM2's transparent step-by-step reasoning with explicit self-correction was more useful interactively than Qwen3's silent exhausted thinking budget. On the analysis task, Qwen3 produced the most thorough and structured response when thinking was disabled — the base model quality is high when it actually produces output.

On the thinking-mode tradeoff

Thinking mode is a quality multiplier. But on hardware where generation is slow, it is also a latency multiplier — and one that applies silently before you see any output. The practical rule for APU-class hardware:

  • Interactive use: Disable thinking (think: false or /no_think). You get the answer faster, and for most tasks the quality loss is modest.
  • Batch / overnight analysis: Enable thinking and set a high token budget. You submit the job, come back later, get a more thorough answer.

LFM2 sidesteps this entirely: its architecture doesn't separate thinking from output, so you see the reasoning in real-time as it generates.


Summary: When to Use Each Model

Scenario Best choice Why
Interactive research, iterative queries lfm2:24b-a2b Fastest, transparent reasoning, good synthesis
Hard reasoning, batch mode deepseek-r1:14b Dedicated reasoner; worth the wait
Thorough structured analysis, batch mode qwen3:30b-a3b (thinking enabled) Best analysis quality when given token budget
Code generation qwen3-coder:30b-a3b-q4_K_M Specialised fine-tune, 12 TPS
Long document analysis lfm2:24b-a2b SSM avoids KV cache growth at long context
Uncensored / sensitive research qwen3.5-abliterated:35b-a3b-q4_K No guardrails, 4.65 TPS
Quick simple queries qwen3:8b 5.3 TPS, low overhead

The key finding

Speed and quality are not independent on bandwidth-constrained hardware. A model that is four times slower doesn't just cost you time — it changes the nature of the interaction. Thinking modes that are near-free on a GPU become minutes-long commitments on an APU, turning iterative exploration into batch processing. LFM2's hybrid SSM+MoE architecture produces the best combination of speed and quality for interactive use on this hardware, not because it is the most capable model in isolation, but because it delivers its capability at a speed that keeps research workflows fluid.


Appendix: Hardware and Setup

Matt-Mini:
- CPU: AMD Ryzen 7 5800U (Zen 3, no AVX_VNNI)
- iGPU: AMD Radeon Vega 8 (shared DDR4, ~50 GB/s)
- RAM: 64GB DDR4-3200
- Inference: Ollama 0.20.6 + Vulkan backend

API note: Thinking can be disabled per-request via "think": false in the Ollama generate payload, or by appending /no_think to the prompt for Qwen3 models. DeepSeek-R1 respects think: false at the API level.
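For reference, a hedged sketch of what such a request looks like against Ollama's /api/generate endpoint. The model name and prompt here are placeholders; the "think" field is the per-request switch described above:

```python
import json
import urllib.request

payload = {
    "model": "qwen3:30b-a3b",             # placeholder model name
    "prompt": "Explain MoE routing briefly.",
    "think": False,                        # disable the hidden reasoning phase
    "stream": False,
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",  # Ollama's default local endpoint
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# with urllib.request.urlopen(req) as resp:   # requires a running Ollama
#     print(json.load(resp)["response"])
print(payload["think"])  # False
```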

Tested April 2026.

Running LLMs Locally: AMD APU vs Discrete GPU — Why Architecture Matters More Than Hardware


The Hardware

I benchmarked two very different local AI setups:

Matt-Mini — a Windows Mini PC that most people would dismiss for AI:
- CPU: AMD Ryzen 7 5800U (8 cores, Zen 3)
- iGPU: AMD Radeon Vega 8 (integrated, shared memory)
- RAM: 64GB DDR4-3200 (~50 GB/s bandwidth)

Ubuntu Laptop — a more conventional AI workstation:
- GPU: NVIDIA RTX 4070 8GB VRAM (~300 GB/s GDDR6X bandwidth)
- RAM: DDR5 system RAM (~80–100 GB/s), separate from GPU VRAM

The critical insight about the APU: the iGPU uses shared system memory as VRAM. With 64GB of RAM, the GPU can access tens of gigabytes for model weights — something impossible on a discrete GPU with fixed VRAM. The trade-off is bandwidth: DDR4 gives ~50 GB/s vs the RTX 4070's ~300 GB/s.


The Benchmark Setup

I used Ollama as the inference server (Vulkan backend for AMD iGPU — no ROCm required) and ran three prompts per model:

  • Short: "What is 2 + 2? Answer in one word." — tests base throughput
  • Reasoning: A multi-step maths problem — tests sustained generation
  • Coding: Fibonacci with memoization in Python — tests structured output

Metric: tokens per second (TPS) for generation.
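Ollama reports the raw numbers for this metric with every non-streamed response: eval_count (generated tokens) and eval_duration (nanoseconds). The calculation the benchmark script performs is simply:

```python
def generation_tps(eval_count: int, eval_duration_ns: int) -> float:
    """TPS from Ollama's eval_count / eval_duration response fields."""
    return eval_count / (eval_duration_ns / 1e9)

# 360 tokens generated in 30 seconds of eval time:
print(generation_tps(360, 30_000_000_000))  # 12.0
```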


Results: Matt-Mini (AMD Ryzen 7 5800U + Vega 8 iGPU, 64GB shared RAM)

Model Architecture Comparison (all Q4_K_M)

Model Avg TPS Total Params Active Params Type
qwen3:30b-a3b 12.0 30B 3B MoE
qwen3-coder:30b-a3b 12.1 30B 3B MoE (coding)
qwen3:8b 5.3 8B 8B Dense
qwen3.5-abliterated:35b-a3b 4.65 35B ~3.5B MoE (uncensored)
qwen3.5-opus-distill 3.83 35B ~3.5B MoE (distilled, Q8_0)
mixtral:8x7b 3.5 46.7B 12.9B MoE
deepseek-r1:14b 3.1 14B 14B Dense

Q4_K_M vs Q8_0 on Bandwidth-Constrained iGPU

The Vega 8 iGPU is bottlenecked by DDR4 memory bandwidth (~50 GB/s). Q8_0 uses 2× the memory bandwidth of Q4_K_M with no compute benefit on hardware lacking AVX_VNNI. The speed penalty is significant:

Model Q4_K_M TPS Q8_0 TPS Q4 faster by
qwen3-coder:30b-a3b 12.1 7.73 +57%
qwen3.5-abliterated:35b-a3b 4.65 3.83 +21%

Use Q4_K_M on the APU. Q8_0 only makes sense if quality is paramount and you can accept the speed penalty.
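A rough bandwidth-roofline model explains the gap. Assume every active weight is read once per token and memory bandwidth is the only limit (this ignores activations, KV cache, and compute, and the bits-per-weight figures are approximations for these quant formats):

```python
def tps_ceiling(active_params_b: float, bits_per_weight: float,
                bandwidth_gbs: float) -> float:
    """Upper bound on TPS when weight reads saturate memory bandwidth."""
    bytes_per_token = active_params_b * 1e9 * bits_per_weight / 8
    return bandwidth_gbs * 1e9 / bytes_per_token

# ~3B active params on ~50 GB/s DDR4; Q4_K_M ~4.8 bpw, Q8_0 ~8.5 bpw
q4 = tps_ceiling(3.0, 4.8, 50.0)
q8 = tps_ceiling(3.0, 8.5, 50.0)
print(round(q4, 1), round(q8, 1), round(q4 / q8, 2))  # 27.8 15.7 1.77
```

The measured numbers sit well below these ceilings, as expected, but the predicted Q4-over-Q8 ratio points the same way as the observed +57% and +21%.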


Results: Ubuntu Laptop (NVIDIA RTX 4070 8GB, DDR5)

General and Reasoning Models

Model Avg TPS Params Notes
qwen2.5-coder:1.5b 163 1.5B Tiny, saturates GPU
qwen2.5-coder:7b 52 7B Fast in VRAM
qwen3.5:4b 51 4B
deepseek-r1:7b 39 7B Strong reasoning, consistent TPS
qwen3-vl:8b 35 8B Vision model
llama3.1:latest 36 8B
qwen3.5:latest 24 ~14B Starts hitting VRAM limit
qwen3.5:27b 3.0 27B Exceeds 8GB VRAM, spills to RAM

Vision Models (for ComfyUI and multimodal workflows)

Model Avg TPS VRAM Notes
qwen3-vl:4b-instruct-q8_0 45 ~5.5GB Best balance — fast, high quality, leaves headroom
qwen3-vl:8b-instruct-q4_K_M 35 ~5.5GB Larger model, slightly slower, better comprehension
minicpm-v:8b-2.6-q4_K_M 38 ~5GB Fast but terse — short responses on text tasks
qwen2.5vl:3b-q8_0 15 ~3.5GB Slow despite small size — VRAM load overhead

The dramatic drop from qwen3.5:latest (~24 TPS) to qwen3.5:27b (3 TPS) marks the VRAM cliff. Once the model no longer fits in 8GB, it spills to system RAM — but even though this machine has fast DDR5, the bottleneck becomes the PCIe bus (~32 GB/s) between the GPU and system memory, not the RAM speed itself. Performance collapses to APU-level speeds despite the faster RAM.


The Key Finding: Active Parameters Are What Matter

The headline result is qwen3:30b-a3b hitting 12 TPS — faster than the 8B dense model, despite having 30 billion total parameters.

This seems counterintuitive until you understand Mixture of Experts (MoE) architecture. In a MoE model, the network is split into many "expert" sub-networks. For any given token, only a small subset of experts are activated. qwen3:30b-a3b has 30B total parameters but only 3B active per token — the same compute cost per token as a 3B dense model, but with the knowledge capacity of a 30B model.

The rule that emerges from these results:

MoE speed advantage only materialises when active parameter count is kept low.

Look at mixtral:8x7b: it's MoE, but with 12.9B active parameters per token. Despite the MoE structure it runs at the same speed as the dense 14B model — because the active compute is similar.

qwen3:30b-a3b wins because it keeps active params at just 3B while maximising total capacity.
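A quick back-of-envelope check against the table above: if memory bandwidth dominates, TPS multiplied by active parameters should come out roughly constant across models. The active-param estimates are the ones quoted in this post:

```python
# (active params in billions, measured TPS) from the Matt-Mini table
models = {
    "qwen3:30b-a3b":   (3.0, 12.0),
    "qwen3:8b":        (8.0, 5.3),
    "mixtral:8x7b":    (12.9, 3.5),
    "deepseek-r1:14b": (14.0, 3.1),
}

for name, (active_b, tps) in models.items():
    # Product is a proxy for weight bytes moved per second (up to quant factor)
    print(f"{name:17s} active x TPS ~ {active_b * tps:.0f}")
```

The products cluster in the mid-30s to mid-40s despite a nearly 4x spread in raw TPS, which is exactly what a bandwidth-bound regime predicts.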


The Two Hardware Stories

Discrete GPU: Fast but VRAM-limited

The RTX 4070 hits 35–163 TPS for models that fit in 8GB VRAM. It's fast — bandwidth is not the bottleneck. But the moment a model exceeds 8GB, performance falls off a cliff: qwen3.5:27b drops to 3 TPS, identical to the APU. The discrete GPU is a sprinter with a hard wall.

Shared-Memory APU: Slow but capacious

The Vega 8 iGPU runs at 3–12 TPS — slower across the board for models that fit in discrete VRAM. But it can run a 34GB Q8_0 model that would never fit on the RTX 4070. The APU is a distance runner with no wall.

Where they meet

When a model exceeds the discrete GPU's VRAM, both machines run at the same ~3 TPS. At that point, the APU's 64GB capacity advantage becomes the deciding factor — it can run larger models at equal speed, with Q8_0 quality instead of being forced into aggressive quantization.

The MoE Sweet Spot for APUs

Low active-parameter MoE is the ideal architecture for shared-memory systems: fewer active params = less bandwidth per token = more TPS on bandwidth-constrained DDR4. qwen3:30b-a3b at 12 TPS demonstrates this perfectly — 30B total parameters, but only 3B active, running faster than the dense 8B model.


Practical Recommendations

For AMD APU systems with 32GB+ unified memory (Ryzen 5800U, no AVX_VNNI):
1. Use qwen3:30b-a3b or qwen3-coder:30b-a3b as your default — ~12 TPS, best speed/quality
2. Use Q4_K_M, not Q8_0 — Q8_0 is 20–57% slower on bandwidth-limited DDR4; AVX_VNNI (which would offset the bandwidth cost) is not present on Zen 3
3. Prefer MoE models with low active param counts (under 4B active) — this is the single biggest performance lever
4. Ollama with Vulkan is the easiest path — no ROCm build required, works out of the box
5. Disable system sleep during large model downloads; they resume after a wake, but you waste time

For discrete GPU systems (e.g. RTX 4070 8GB, Intel Ultra 7 165H with AVX_VNNI):
1. Match model size to VRAM — keep total model size under ~7.5GB to stay fully in VRAM
2. Q4_K_M for 7–8B models at this VRAM level — fits comfortably with headroom
3. Q8_0 is viable for vision models under 6GB (e.g. qwen3-vl:4b-instruct-q8_0) — AVX_VNNI on the host CPU means Q8_0 CPU fallback is no slower
4. For ComfyUI inpainting: qwen3-vl:4b-instruct-q8_0 at 45 TPS uses ~5.5GB, leaving room for the diffusion model
5. Avoid models that spill to RAM — PCIe bandwidth (~32 GB/s) becomes the bottleneck, not DDR5
6. For larger models, the APU is a natural complement — it runs 30B+ at equal speed to any spilling model


Tools Used

  • Ollama — inference server, Vulkan backend
  • llmfit — hardware-fit recommender (useful for finding candidate models, but note: speed estimates for Vega 8 iGPU are inaccurate — it assumes 180 GB/s ROCm bandwidth vs the real ~50 GB/s)
  • benchmark_ollama.py — custom benchmark script measuring TPS across models and prompt types

Tested April 2026 on Ollama — AMD Ryzen 7 5800U (Vega 8 iGPU, 64GB DDR4) and NVIDIA RTX 4070 8GB (DDR5 system RAM).