hikari.impl.rate_limits#

Basic lazy ratelimit systems for asyncio.

See hikari.impl.buckets for HTTP-specific rate-limiting logic.

Module Contents#

class hikari.impl.rate_limits.BaseRateLimiter[source]#

Bases: abc.ABC

Base for any asyncio-based rate limiter being used.

abstract async acquire()[source]#

Acquire permission to perform a task that needs to have rate limit management enforced.

Calling this function will cause it to block until you are no longer being rate limited.

abstract close()[source]#

Close the rate limiter, cancelling any internal tasks that are executing.
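For illustration, a minimal concrete subclass might look like the sketch below. NullRateLimiter is a hypothetical name and never actually limits anything; it only satisfies the abstract interface described above.

# Minimal sketch of a BaseRateLimiter subclass (hypothetical "NullRateLimiter");
# it implements the abstract interface but performs no real rate limiting.
import asyncio

from hikari.impl import rate_limits


class NullRateLimiter(rate_limits.BaseRateLimiter):
    async def acquire(self) -> None:
        # A real implementation would block here while the caller is rate limited.
        return None

    def close(self) -> None:
        # A real implementation would cancel any internal tasks here.
        return None


async def main() -> None:
    limiter = NullRateLimiter()
    await limiter.acquire()  # returns immediately for this no-op limiter
    limiter.close()


asyncio.run(main())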

class hikari.impl.rate_limits.BurstRateLimiter(name)[source]#

Bases: BaseRateLimiter, abc.ABC

Base implementation for a burst-based rate limiter.

This provides an internal queue and throttling placeholder, as well as complete logic for safely aborting any pending tasks when being shut down.

property is_empty: bool[source]#

Return True if no futures are queued and waiting on this rate limiter.

name: str[source]#

The name of the rate limiter.

queue: List[asyncio.Future[Any]][source]#

The queue of any futures under a rate limit.

throttle_task: asyncio.Task[Any] | None[source]#

The throttling task, or None if it is not running.

abstract async acquire()[source]#

Acquire time on this rate limiter.

Calling this function will cause it to block until you are no longer being rate limited.

close()[source]#

Close the rate limiter, and shut down any pending tasks.

Once this is invoked, you should not reuse this object.
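A sketch of inspecting and shutting down a BurstRateLimiter subclass, using the WindowedBurstRateLimiter documented later in this module. The name and window values are arbitrary, and the example assumes that an aborted pending acquire surfaces as asyncio.CancelledError to its waiter.

import asyncio

from hikari.impl import rate_limits


async def main() -> None:
    limiter = rate_limits.WindowedBurstRateLimiter("demo", 30.0, 1)

    await limiter.acquire()                             # consumes the only slot in the window
    pending = asyncio.ensure_future(limiter.acquire())  # this one gets queued
    await asyncio.sleep(0)                              # let the queued acquire register itself

    print(limiter.name, "is_empty:", limiter.is_empty, "queued:", len(limiter.queue))
    print("throttling:", limiter.throttle_task is not None)

    limiter.close()  # aborts the queued future; do not reuse the limiter afterwards
    try:
        await pending
    except asyncio.CancelledError:
        print("pending acquire was aborted by close()")


asyncio.run(main())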

class hikari.impl.rate_limits.ExponentialBackOff(base=2.0, maximum=64.0, jitter_multiplier=1.0, initial_increment=0)[source]#

Implementation of an asyncio-compatible exponential back-off algorithm with random jitter.

$t_{backoff} = b^{i} + m \cdot \mathrm{rand}()$

Such that $t_{backoff}$ is the backoff time, $b$ is the base, $i$ is the increment that increases by 1 for each invocation, and $m$ is the jitter multiplier. $\mathrm{rand}()$ returns a value in the range $[0, 1]$.

Parameters:
base : float

The base to use. Defaults to 2.0.

maximum : float

The max value the backoff can be in a single iteration.

All backoff values will be capped to this maximum, plus some random jitter.

jitter_multiplier : float

The multiplier for the random jitter. Defaults to 1.0. Set to 0 to disable jitter.

initial_increment : int

The initial increment to start at. Defaults to 0.

Raises:
ValueError

If an int that is too big to be represented as a float, or a non-finite value, is passed for a field annotated as float.
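As a standalone sketch of the formula above (a reimplementation for illustration, not the class's own code), with each iteration capped at maximum as described:

import random


def backoff_time(increment: int, base: float = 2.0, maximum: float = 64.0,
                 jitter_multiplier: float = 1.0) -> float:
    # t_backoff = b**i + m * rand(), with a single iteration capped at `maximum`.
    value = min(base**increment, maximum)
    return value + jitter_multiplier * random.random()


# With the defaults, increments 0..7 give roughly 1, 2, 4, 8, 16, 32, 64, 64 (plus jitter).
print([round(backoff_time(i), 2) for i in range(8)])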

base: Final[float][source]#

The base to use. Defaults to 2.0.

increment: int[source]#

The current increment.

jitter_multiplier: Final[float][source]#

The multiplier for the random jitter.

This defaults to 1.0. Set to 0.0 to disable jitter.

maximum: float[source]#

The maximum value the backoff can reach in a single iteration. All backoff values will be capped to this maximum, plus some random jitter.

reset()[source]#

Reset the exponential back-off.

class hikari.impl.rate_limits.ManualRateLimiter[source]#

Bases: BurstRateLimiter

Rate limit handler for the global HTTP rate limit.

This is a non-preemptive rate limiting algorithm that will always return completed futures until ManualRateLimiter.throttle is invoked. Once this is invoked, any subsequent calls to ManualRateLimiter.acquire will return incomplete futures that are enqueued on an internal queue. A task will be spun up to wait for the period of time given to ManualRateLimiter.throttle. Once that has passed, the limiter will re-consume the incomplete futures on the queue, completing them.

Triggering a throttle when it is already set will cancel the current throttle task that is sleeping and replace it.

This is used to enforce the global HTTP rate limit that will occur “randomly” during HTTP API interaction.

Expect random occurrences.
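A minimal sketch of driving this limiter by hand inside a running event loop; the 2.0-second retry value is an arbitrary example of what an API's Retry-After might dictate.

import asyncio

from hikari.impl import rate_limits


async def main() -> None:
    limiter = rate_limits.ManualRateLimiter()

    await limiter.acquire()  # completes immediately: no throttle is active yet

    limiter.throttle(2.0)    # e.g. the API reported a global rate limit for 2 seconds
    print("throttling:", limiter.throttle_task is not None)

    # This acquire now returns an incomplete future that sits on the queue until
    # unlock_later() finishes sleeping and releases it (roughly 2 seconds later).
    await limiter.acquire()

    limiter.close()


asyncio.run(main())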

reset_at: float | None[source]#

The time.monotonic timestamp at which the rate limit is lifted.

async acquire()[source]#

Acquire time on this rate limiter.

Calling this function will cause it to block until you are no longer being rate limited.

get_time_until_reset(now)[source]#

Determine how long until the current rate limit is reset.

Parameters:
now : float

The time.monotonic timestamp.

Returns:
float

The time left to sleep before the rate limit is reset. If no rate limit is in effect, then this will return 0.0 instead.

throttle(retry_after)[source]#

Perform the throttling rate limiter logic.

Iterates repeatedly while the queue is not empty, adhering to any rate limits that occur in the meantime.

Note

This will invoke ManualRateLimiter.unlock_later as a scheduled task in the future (it will not await its completion).

When the ManualRateLimiter.unlock_later coroutine function completes, it should be expected to set the throttle_task to None. This means you can check if throttling is occurring by checking if throttle_task is not None.

If this is invoked while another throttle is in progress, that one is cancelled and a new one is started. This enables new rate limits to override existing ones.

Parameters:
retry_after : float

How long to sleep for before unlocking and releasing any futures in the queue.

async unlock_later(retry_after)[source]#

Sleep for a while, then remove the lock.

Warning

You should not need to invoke this directly. Call ManualRateLimiter.throttle instead.

When the ManualRateLimiter.unlock_later coroutine function completes, it should be expected to set the throttle_task to None. This means you can check if throttling is occurring by checking if throttle_task is not None.

Parameters:
retry_after : float

How long to sleep for before unlocking and releasing any futures in the queue.
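Following the note above, a hypothetical helper for checking whether a throttle is currently in progress might look like:

from hikari.impl import rate_limits


def is_throttling(limiter: rate_limits.ManualRateLimiter) -> bool:
    # unlock_later() sets throttle_task back to None once it completes, so a
    # non-None throttle_task means the global throttle is still in effect.
    return limiter.throttle_task is not None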

class hikari.impl.rate_limits.WindowedBurstRateLimiter(name, period, limit)[source]#

Bases: BurstRateLimiter

Windowed burst rate limiter.

Rate limiter for rate limits that last fixed periods of time with a fixed number of times it can be used in that time frame.

To use this, you should call WindowedBurstRateLimiter.acquire and await the result immediately before performing your rate-limited task.

If the rate limit has been hit, acquiring time will return an incomplete future that is placed on the internal queue. If a throttle task is not already running, one is then spun up; it is expected to provide some implementation of backing off and sleeping for a given period of time until the limit has passed, and then to consume futures from the queue while adhering to those rate limits.

If the throttle task is already running, the acquired future will always be incomplete and enqueued regardless of whether the rate limit is actively reached or not.

Acquiring a future from this limiter when no throttling task is running and when the rate limit is not reached will always result in the task invoking a drip and a completed future being returned.

Dripping is left to the implementation of this class, but will be expected to provide some mechanism for updating the internal statistics to represent that a unit has been placed into the bucket.
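A usage sketch allowing at most 5 acquisitions per 10-second window; the name and limits are arbitrary examples.

import asyncio

from hikari.impl import rate_limits


async def main() -> None:
    limiter = rate_limits.WindowedBurstRateLimiter("example", 10.0, 5)
    try:
        for i in range(7):
            # The first 5 calls drip and complete immediately; the last 2 are
            # queued until the window resets and the throttle task releases them.
            await limiter.acquire()
            print("performed rate-limited task", i)
    finally:
        limiter.close()


asyncio.run(main())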

limit: int[source]#

The maximum number of calls to WindowedBurstRateLimiter.acquire allowed in this time window.

period: float[source]#

How long the window lasts, in seconds, from its start.

remaining: int[source]#

The number of calls to WindowedBurstRateLimiter.acquire remaining in this window before you will be rate limited.

reset_at: float[source]#

The time.monotonic timestamp at which the limit window ends.

async acquire()[source]#

Acquire time on this rate limiter.

Calling this function will cause it to block until you are no longer being rate limited.

drip()[source]#

Decrement the remaining counter.

get_time_until_reset(now)[source]#

Determine how long until the current rate limit is reset.

Warning

Invoking this method will update the internal state if we were previously rate limited, but at the given time are no longer under that limit. This makes it imperative that you only pass the current timestamp to this function, and not past or future timestamps. The effects of doing the latter are undefined behaviour.

Parameters:
now : float

The current time.monotonic timestamp.

Returns:
float

The time left to sleep before the rate limit is reset. If no rate limit is in effect, then this will return 0.0 instead.

is_rate_limited(now)[source]#

Determine if we are under a rate limit at the given time.

Warning

Invoking this method will update the internal state if we were previously rate limited, but at the given time are no longer under that limit. This makes it imperative that you only pass the current timestamp to this function, and not past or future timestamps. The effects of doing the latter are undefined behaviour.

Parameters:
now : float

The current time.monotonic timestamp.

Returns:
bool

True if we are being rate limited, or False if we are not.
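A sketch of inspecting the window state with the current timestamp, per the warning above; the limiter values are arbitrary.

import asyncio
import time

from hikari.impl import rate_limits


async def main() -> None:
    limiter = rate_limits.WindowedBurstRateLimiter("inspect", 5.0, 2)
    await limiter.acquire()
    await limiter.acquire()  # the 2-call window is now exhausted

    now = time.monotonic()   # always pass the *current* timestamp
    if limiter.is_rate_limited(now):
        print(f"rate limited; window resets in {limiter.get_time_until_reset(now):.2f}s")

    limiter.close()


asyncio.run(main())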

async throttle()[source]#

Perform the throttling rate limiter logic.

Iterates repeatedly while the queue is not empty, adhering to any rate limits that occur in the meantime.

Note

You should usually not need to invoke this directly, but if you do, ensure you call it using asyncio.create_task and store the task immediately in throttle_task.

When this coroutine function completes, it will set the throttle_task to None. This means you can check if throttling is occurring by checking if throttle_task is not None.
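A sketch of the pattern described in the note, for the rare case where the throttle is started by hand rather than from acquire; start_throttler is a hypothetical helper name and must be called from within a running event loop.

import asyncio

from hikari.impl import rate_limits


def start_throttler(limiter: rate_limits.WindowedBurstRateLimiter) -> None:
    if limiter.throttle_task is None:
        # Store the task immediately so other callers can see that throttling
        # is in progress (throttle_task is not None) and enqueue accordingly.
        limiter.throttle_task = asyncio.create_task(limiter.throttle())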