Limit
Serverless ratelimiting
We have a dedicated package for ratelimiting in serverless functions. It's built with Cloudflare Workers and Durable Objects to orchestrate low-latency ratelimiting at the edge without sacrificing consistency.
Check out the documentation for the @unkey/ratelimit package.
Request
How many requests may pass in the given duration.
How long the window should be. Either a string literal like 60s or 20m, or plain milliseconds.
A unique identifier for the request. This can be a user id, an IP address or a session id.
The route or resource being ratelimited, for example trpc.user.update.
Expensive requests may use up more resources. You can specify a cost for the request and we'll deduct that many tokens from the current window. If there are not enough tokens left, the request is denied.
Example:
- You have a limit of 10 requests per second and have already used 4 of them in the current window.
- Now a new request comes in with a higher cost of 4.
- The request passes and the current usage is now at 8.
- The same request happens again, but this time it is rejected, because it would exceed the limit in the current window: 8 + 4 > 10.
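The cost accounting in the walkthrough above can be sketched as a minimal fixed-window counter. This is an illustration of the deduction logic only, not the actual Unkey implementation (it omits the window duration and reset):

```typescript
// Minimal fixed-window sketch of cost-based token deduction.
// Illustration only; window expiry/reset is not modeled here.
class FixedWindowLimiter {
  private used = 0;
  constructor(private readonly limit: number) {}

  // Returns true if the request passes; deducts `cost` tokens on success.
  request(cost: number = 1): boolean {
    if (this.used + cost > this.limit) {
      return false; // not enough tokens left in this window
    }
    this.used += cost;
    return true;
  }

  get remaining(): number {
    return this.limit - this.used;
  }
}

// The example above: limit of 10, 4 already used in the current window.
const limiter = new FixedWindowLimiter(10);
limiter.request(4);              // the 4 requests already used
console.log(limiter.request(4)); // true  — 4 + 4 = 8 <= 10
console.log(limiter.remaining);  // 2
console.log(limiter.request(4)); // false — 8 + 4 > 10
```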
Do not wait for a response from the origin. Faster but less accurate. We observe 97%+ accuracy when using async mode, with significantly lower latency.
Record arbitrary data about this request. This does not affect the limit itself but can help you debug later.
Specify which resources this request would access and we'll create a paper trail for you.
See app.unkey.com/audit for details.
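Taken together, the request fields above might be assembled like this. The field names follow the descriptions on this page; treat the exact shape as an illustrative assumption rather than the package's authoritative types, and consult the @unkey/ratelimit documentation for the real API:

```typescript
// Illustrative shape of the per-request options described above.
// Assumed field names (cost, async, meta, resources) — verify against
// the @unkey/ratelimit docs before relying on them.
interface LimitOptions {
  cost?: number;      // tokens to deduct from the window (default 1)
  async?: boolean;    // don't wait for the origin; faster, ~97%+ accurate
  meta?: Record<string, string | number | boolean>; // arbitrary debug data
  resources?: { type: string; id: string; name?: string }[]; // audit trail
}

// Example: an expensive update, recorded for the audit log.
// Identifier and resource values here are made up for illustration.
const opts: LimitOptions = {
  cost: 4,
  async: true,
  meta: { route: "trpc.user.update" },
  resources: [{ type: "project", id: "p_example" }],
};
```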
Response
Whether the request may pass (true) or has exceeded the limit (false).
Maximum number of requests allowed within a window.
How many requests the user has left within the current window.
Unix timestamp in milliseconds when the limit resets.
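Since the reset field is a millisecond timestamp, a blocked caller can derive how long to back off. The response shape below mirrors the fields documented above; the helper itself is an illustrative sketch, not part of the package:

```typescript
// Response fields as documented above (names assumed to match the package).
interface RatelimitResponse {
  success: boolean; // whether the request may pass
  limit: number;    // max requests allowed within a window
  remaining: number; // requests left within the current window
  reset: number;    // unix timestamp (ms) when the limit resets
}

// Seconds a blocked caller should wait before retrying (0 if allowed).
function retryAfterSeconds(res: RatelimitResponse, now: number = Date.now()): number {
  if (res.success) return 0;
  return Math.max(0, Math.ceil((res.reset - now) / 1000));
}

// A denied response whose window resets 2.5 seconds from "now":
const denied: RatelimitResponse = {
  success: false,
  limit: 10,
  remaining: 0,
  reset: 1_700_000_002_500,
};
console.log(retryAfterSeconds(denied, 1_700_000_000_000)); // 3
```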