Overview
Query batching allows you to send multiple GraphQL operations in a single HTTP request. The router supports processing batched GraphQL requests where multiple operations are sent in a single JSON array. Each operation in the batch is processed concurrently, with configurable limits on the number of operations and concurrency.

Hereafter, when we refer to a batch or batch request, we mean the entire batch request. An operation or batch operation refers to a single record in the batch array that is processed independently.

We do not recommend batching for security reasons and suggest alternative approaches. Please see the section below for more information.
Configuration
Query batching can be configured in your router configuration file (config.yaml):
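A minimal sketch of what such a configuration might look like, assuming a top-level batching block; the key placement and values shown are illustrative, and the option names match the descriptions below:

```yaml
# Illustrative values; the placement of the batching block within
# config.yaml may differ depending on your router version.
batching:
  enabled: true
  max_concurrency: 10
  max_entries_per_batch: 100
  omit_extensions: false
```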
- enabled: As we do not recommend using batching, it is disabled by default.
- max_concurrency: Specifies how many operations will be processed concurrently. If you want to disable concurrency and process everything serially, set max_concurrency to 1. The minimum allowed value is 1.
- max_entries_per_batch: Defines the maximum number of operations that can be present in a batch. While we do not support unlimited entries, you can set an extremely high number. The minimum allowed value is 1.
- omit_extensions: Certain errorCodes specific to batch operation failures can be omitted.
Usage
To use query batching, send a POST request to your GraphQL endpoint with a JSON array of operations:
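A rough sketch of such a request using curl; the endpoint URL and the queries are placeholders, and each element of the array is a standalone GraphQL operation:

```bash
# Placeholder endpoint and operations; each array element is an
# independent GraphQL operation processed as part of the batch.
curl -X POST http://localhost:3002/graphql \
  -H 'Content-Type: application/json' \
  -d '[
        { "query": "query { employees { id } }" },
        { "query": "query { products { upc } }", "variables": {} }
      ]'
```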
Tracing
When batching is enabled, several attributes have been added to make it easier to trace batch requests. For batched requests, the root span (or the start cosmo-router span) will always have the following attributes:
- wg.operation.batching.is_batched: Identifies whether the request is batched.
- wg.operation.batching.operations_count: Indicates how many operations are expected in the batch request.
- wg.operation.batching.operation_index: Indicates which index of the batch operation array this operation belongs to.
Rate Limiting
For batching, the same subgraph rate limiting rules apply if enabled. This means that if a rate limit of 15 per 5 seconds is defined, a batch of 20 operations would have 5 of them fail with a rate-limiting error. For more information, please refer to the rate limiting page.
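As a rough sketch, assuming the router returns one response object per operation in the same order as the request, a partially rate-limited batch response might look something like this (the exact error message and extensions will differ):

```json
[
  { "data": { "employees": [ { "id": 1 } ] } },
  { "errors": [ { "message": "rate limit exceeded" } ] }
]
```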
Feature Flags
Feature flags work the same as for normal requests. The flag will be identified by the cookie or feature flag header. However, note that the feature flag will apply to all operations in the batch. For more information, please refer to the feature flag page.
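For illustration, a batch request pinned to a specific feature flag might look like the following; the X-Feature-Flag header name is an assumption here, so check the feature flag page for the exact header and cookie names:

```bash
# The X-Feature-Flag header name is assumed for illustration;
# the selected flag applies to every operation in the batch.
curl -X POST http://localhost:3002/graphql \
  -H 'Content-Type: application/json' \
  -H 'X-Feature-Flag: my-flag' \
  -d '[
        { "query": "query { employees { id } }" },
        { "query": "query { products { upc } }" }
      ]'
```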
Why We Do Not Recommend Batching
Batching as a feature has been built to support clients who are already using query batching in their infrastructure and to make it easy for those users to start using the Cosmo router. If you haven’t used batching yet and are planning to, here are a few reasons to reconsider:
- Load Balancing: A single batch is processed by a single router instance. This means that if you have 50 routers running, a batch of 200 operations will still be processed on one instance. Depending on your max_concurrency setting, this could result in slower responses compared to sending 200 separate load-balanced requests from the client. We recommend using HTTP/2 multiplexing instead, as described in the next section.
- Security: If you have set an extremely high max_entries_per_batch value, it is much easier for an attacker to DDoS the router plus the services and infrastructure behind it by sending multiple batch requests containing many operations. This requires much less effort than sending every single batch operation as an individual request.
- Caching: It’s easier to cache individual operations than it is to cache batches. A single error will also make the entire batch uncacheable.