The Problem
I noticed that the framework is advertised as “blazing fast”, but that goal is technically unachievable while the framework remains tied to the WSGI specification.
WSGI is inherently synchronous and one-request-per-worker, which leads to the following limitations:
- It cannot use event-loop concurrency
- Every request blocks the worker
- It scales only through processes/threads
- It has no native support for long-lived connections (WebSockets, SSE, etc.)
- It cannot match the throughput or latency of ASGI-based frameworks
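The blocking behaviour above can be sketched with a minimal WSGI callable (illustrative only; `time.sleep` stands in for any I/O such as a database query):

```python
import time

def wsgi_app(environ, start_response):
    # Simulated blocking I/O: the worker thread/process can do nothing
    # else until this returns. Under WSGI there is no way to yield
    # control back to a scheduler while waiting.
    time.sleep(0.01)
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"done"]
```

With N workers, at most N of these calls can be in flight at once, regardless of how fast the framework's routing or serialization code is.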
Even the fastest WSGI frameworks eventually hit the same bottleneck because the limitation comes from the WSGI contract itself, not the framework implementation.
Modern ASGI frameworks (Litestar, Starlette, FastAPI, Quart, etc.) routinely achieve 10x–20x more throughput when paired with servers like Uvicorn + uvloop, while WSGI applications are capped by synchronous execution.
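For contrast, here is a minimal ASGI-style sketch (illustrative only, not any particular framework's API) showing why an event loop changes the scaling story: 100 simulated requests that each wait 10 ms complete in roughly the time of one, because the waits overlap on a single thread.

```python
import asyncio
import time

async def asgi_app(scope, receive, send):
    # Non-blocking wait: the event loop is free to run other request
    # coroutines while this one is suspended.
    await asyncio.sleep(0.01)
    await send({"type": "http.response.start", "status": 200,
                "headers": [(b"content-type", b"text/plain")]})
    await send({"type": "http.response.body", "body": b"done"})

async def demo():
    # Drive 100 concurrent "requests" with stub receive/send callables.
    # A synchronous one-request-per-worker model would need ~1 s of
    # worker time for the same load; here the waits overlap.
    async def one():
        async def send(message):
            pass
        async def receive():
            return {"type": "http.request"}
        await asgi_app({"type": "http"}, receive, send)

    start = time.perf_counter()
    await asyncio.gather(*(one() for _ in range(100)))
    return time.perf_counter() - start
```

Running `asyncio.run(demo())` returns an elapsed time close to 0.01 s rather than 1 s, which is the mechanism behind the throughput gap described above.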
Because of these architectural constraints, calling a WSGI-based framework “blazing fast” can be misleading: the specification itself prevents it from reaching performance levels comparable to event-driven ASGI frameworks.