mirror of https://github.com/zhigang1992/apollo.git (synced 2026-05-13 17:30:47 +08:00)
Move the Engine proxy feature config article to the Platform docs
@@ -37,6 +37,8 @@ The `apollo-tracing` and `apollo-cache-control` extensions are open specificatio

1. **Scala** with [Sangria](https://github.com/sangria-graphql/sangria) supports tracing with the [sangria-slowlog](https://github.com/sangria-graphql/sangria-slowlog#apollo-tracing-extension) project.
1. **Elixir** with [Absinthe](https://github.com/absinthe-graphql/absinthe) supports tracing with the [apollo-tracing-elixir](https://github.com/sikanhe/apollo-tracing-elixir) package.

> **Note:** Using a different server? Let us know at support@apollographql.com. The development of Apollo tracing implementations is community driven and we would love to start a conversation with you!

You can test that you’ve correctly enabled Apollo Tracing by running any query against your API using GraphiQL.

The `tracing` field should now be returned as part of the response's `extensions`, like below. Don’t worry, this data won’t make it back to your clients once you've set up the Engine proxy, because the proxy will filter it out.
@@ -303,6 +305,473 @@ The Engine proxy will invoke the Lambda function as if it was called from Amazon

If you've got a proxy running and successfully configured to talk to your cloud functions, then sending a request to it will invoke your function and return the response back to you. If everything is working, you should be able to visit the Metrics tab in the Engine UI and see data from the requests you're sending in the interface!

## Feature configuration

The following proxy features require specific setup steps to work. Full query caching is the only proxy feature that hasn't been built into Apollo Server 2.

1. [Automatically **persisting** your queries](#automatic-persisted-queries)
1. [**Caching** full query responses](#caching)
1. [Integrating with your **CDN**](#cdn)
1. [Using the Engine proxy with **query batching**](#query-batching)

<h3 id="automatic-persisted-queries">Automatic Persisted Queries (APQ)</h3>

Automatically persisting your queries is a performance technique in which you send a query hash to your server instead of the entire GraphQL query string. Your server keeps track of the mapping between these hashes and their full query strings and does the lookup on its end, saving you the bandwidth of sending the full query string over the wire.
An added benefit of using APQs with GraphQL is that it's an easy mechanism to transform your GraphQL POST requests into GET requests, allowing you to easily leverage any CDN infrastructure you may already have in place.

> **Note:** Apollo Server 2 reduces the setup necessary to use automatic persisted queries, and these instructions are only necessary when using the Apollo Engine Proxy. To find out more, visit the [Apollo Server](/docs/apollo-server/whats-new.html#Automatic-Persisted-Queries) docs.

The query registry that maps query hashes to query strings is stored in a user-configurable cache and read by the Engine proxy. This can either be an in-memory store (configured by default to be 50MB) within each Engine proxy instance, or an external, configurable [memcached](https://memcached.org/) store.

To use automatic persisted queries with the Engine proxy:

* Use Engine proxy `v1.0.1` or newer.
* If your GraphQL server is hosted on a different origin domain from where it will be accessed, set up the appropriate [CORS headers](https://en.wikipedia.org/wiki/Cross-origin_resource_sharing) using the `overrideGraphqlResponseHeaders` object on the proxy's `frontend` configuration:

```javascript
frontends: [{
  overrideGraphqlResponseHeaders: {
    'Access-Control-Allow-Origin': '*',
  },
}],
```

* Configure your client to use APQs. If you're using Apollo Client, you can easily use [`apollo-link-persisted-queries`](https://github.com/apollographql/apollo-link-persisted-queries#automatic-persisted-queries) to set this up.
<!-- * Verify APQ is working properly using the [verification procedure] (// TODO(dman): get link to new article). -->
<!-- * Read [how it works] (// TODO(dman): get link to new article) for additional details. -->

If everything is set up correctly, you should see your client sending hashes instead of query strings over the network, but receiving data as if it had sent a normal query.

<h3 id="caching">Caching</h3>

To bring caching to GraphQL we've developed [Apollo Cache Control](https://github.com/apollographql/apollo-cache-control), an open standard that allows servers to specify exactly which parts of a response can be cached and how long they can be cached for.

We've built a mechanism into the Engine proxy that allows it to read these "cache hints" that servers send along with their responses. It uses these hints to determine whether the response can be cached, whether it should be cached for everyone or for a specific user, and how long it can be cached for.
The Engine proxy computes a cache privacy level and expiration date by combining the data from all of the fields returned by the server for a particular request. It errs on the safe side, so shorter `maxAge` results override longer and `PRIVATE` scope overrides `PUBLIC`. A missing `maxAge` on a field will default to `0`, meaning that all fields in the result must have a `maxAge > 0` for the response to be cached at all.
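As a mental model, that combining rule can be sketched in a few lines (a simplified illustration, not the proxy's actual implementation):

```js
// Simplified model of the rule above: the shortest maxAge wins (a
// missing maxAge counts as 0), and PRIVATE scope overrides PUBLIC.
function computeCachePolicy(hints) {
  if (hints.length === 0) return { maxAge: 0, scope: 'PUBLIC' };
  let maxAge = Infinity;
  let scope = 'PUBLIC';
  for (const hint of hints) {
    maxAge = Math.min(maxAge, hint.maxAge || 0);
    if (hint.scope === 'PRIVATE') scope = 'PRIVATE';
  }
  return { maxAge, scope };
}
```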

The Engine proxy reads Apollo Cache Control extensions, caching whole query responses based on the computed cacheability of each new query. The Engine UI will visualize how each query was impacted by the cache policy set on it.

There are just a few steps to enable response caching in the Engine proxy, and one of them is optional!

1. [Extend your server's responses with `cacheControl` extensions.](#add-cache-extensions)
1. [Annotate your schema and/or resolvers with cache control hints.](#annotate-your-responses)
1. [_Optional:_ Configure cache options in your Engine Proxy configuration.](#configure-cache-options)

<h4 style="position: relative;">
<span id="add-cache-extensions" style="position: absolute; top: -100px;" ></span>
1. Add `cacheControl` extensions to your server
</h4>

If you're using Apollo Server for your Node GraphQL server, the only server code change required is to add `cacheControl: true` to the options passed to your Apollo Server configuration.
```js line=5,12
// Apollo Server 2:
const server = new ApolloServer({
  typeDefs,
  resolvers,
  cacheControl: true,
});

// Apollo Server 1.2 and onwards:
app.use('/graphql', bodyParser.json(), graphqlExpress({
  schema,
  context: {},
  cacheControl: true
}));
```

We're working with the community to add support for Apollo Cache Control to non-Node GraphQL server libraries. Contact us at support@apollographql.com if you're interested in joining the community to work on support for `express-graphql` or non-Node GraphQL servers.

<h4 style="position: relative;">
<span id="annotate-your-responses" style="position: absolute; top: -100px;" ></span>
2. Add cache hints to your responses
</h4>

Next we'll add some cache hints to our GraphQL responses. There are two ways to do this: either dynamically in your resolvers or statically on your schema types and fields. Each `cacheControl` hint has two parameters:

- The `maxAge` parameter defines the number of seconds that the Engine Proxy should serve the cached response.
- The `scope` parameter declares whether a unique response should be cached for every user (`PRIVATE`) or a single response should be cached for all users (`PUBLIC`, the default).

**Interpreting `maxAge` for a query (how long the query can be cached for):**

To determine the expiration time of a particular query, the Engine proxy looks at all of the `maxAge` hints returned by the server, which have been set on a per-field basis, and picks the shortest.

For example, the following trace indicates a 4-minute `maxAge` (`maxAge = 240`) for one field and a 1-minute `maxAge` (`maxAge = 60`) for another. This means that the Engine proxy will use 1 minute as the overall expiration time for the whole result. You can use the Trace view in the Engine UI to understand your cache hit rates and the overall `maxAge` for your queries:

![Cache hints](../img/apollo-engine/cache-hints.png)

> **Note:** If your query returns a type with a field referencing a list of object types, such as `[Post]` referencing `Author` in the `author` field, Engine will consider the `maxAge` of the `Author` type as well.

**Setting cache scope for a query (public vs. private):**

Apollo Engine supports caching of personalized responses using the `scope: PRIVATE` cache hint. Private caching requires that Engine identify unique users, using the methods defined in the `sessionAuth` configuration section.

Engine supports extracting a user's identity from an HTTP header (specified in `header`) or an HTTP cookie (specified in `cookie`).

For security, Engine can be configured to verify the extracted identity before serving a cached response. This allows your service to verify that the session is still valid and avoid replay attacks. This verification is performed via an HTTP request to the URL specified in `tokenAuthUrl`.

The token auth URL will receive an HTTP POST containing: `{"token": "AUTHENTICATION-TOKEN"}`. It should return an HTTP `200` response if the token is still considered valid. It may optionally return a JSON body:

* `{"ttl": 300}` to indicate that the session token check can be cached for 300 seconds.
* `{"id": "alice"}` to indicate an internal user ID that should be used for identification. By returning a persistent identifier such as a database key, Engine's cache can follow a user across sessions and devices.
* `{"ttl": 600, "id": "bob"}` to combine both.
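A minimal sketch of the service behind `tokenAuthUrl` (the session table and names here are hypothetical; a real implementation would consult your session store):

```js
// Hypothetical session table; a real service would look tokens up in
// its session store.
const sessions = {
  'AUTHENTICATION-TOKEN': { id: 'alice', ttl: 300 },
};

// Models the endpoint behind `tokenAuthUrl`: it receives a POST body of
// {"token": "..."} and answers 200 (optionally with {ttl, id}) or an
// error status for invalid tokens.
function handleAuthCheck(body) {
  const session = sessions[body.token];
  if (!session) return { status: 403 };
  return { status: 200, body: { ttl: session.ttl, id: session.id } };
}
```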

Authentication checks with `ttl > 0` will be cached in a `store` named in `sessionAuth`, or in the default 50MB in-memory store.

**Setting static cache hints in your schema:**

Cache hints can be added to your schema using directives on your types and fields. When executing your query, these hints will be added to the response and interpreted by Engine to compute a cache policy for the response.

Engine sets the cache TTL as the lowest `maxAge` in the query path.
```graphql
type Post @cacheControl(maxAge: 240) {
  id: Int!
  title: String
  author: Author
  votes: Int @cacheControl(maxAge: 500)
  readByCurrentUser: Boolean! @cacheControl(scope: PRIVATE)
}

type Author @cacheControl(maxAge: 60) {
  id: Int
  firstName: String
  lastName: String
  posts: [Post]
}
```

You should receive cache control data in the `extensions` field of your response:
```js
"cacheControl": {
  "version": 1,
  "hints": [
    {
      "path": ["post"],
      "maxAge": 240
    },
    {
      "path": ["post", "votes"],
      "maxAge": 500
    },
    {
      "path": ["post", "readByCurrentUser"],
      "scope": "PRIVATE"
    }
  ]
}
```

For the above schema, there are a few ways to generate different TTLs depending on your query. Take the following examples:

*Example 1*
```graphql
query getPostsForAuthor {
  Author {
    posts {
      title
    }
  }
}
```

`getPostsForAuthor` will have a `maxAge` of 60 seconds, even though the `Post` object has a `maxAge` of 240 seconds.

*Example 2*
```graphql
query getTitleForPost {
  Post {
    title
  }
}
```

`getTitleForPost` will have a `maxAge` of 240 seconds (inherited from `Post`), even though the `title` field has no `maxAge` specified.

*Example 3*
```graphql
query getVotesForPost {
  Post {
    votes
  }
}
```

`getVotesForPost` will have a `maxAge` of 240 seconds, even though the `votes` field has a higher `maxAge`.

**Setting dynamic cache hints in your resolvers:**

If you'd like to add cache hints dynamically, you can use a programmatic API from within your resolvers.

```js
const resolvers = {
  Query: {
    post: (_, { id }, context, { cacheControl }) => {
      cacheControl.setCacheHint({ maxAge: 60 });
      return find(posts, { id });
    }
  }
}
```

**Setting a default `maxAge` for your whole schema:**
The power of cache hints comes from being able to set them precisely to different values on different types and fields based on your understanding of your implementation's semantics. But when getting started, you might just want to apply the same `maxAge` to most of your resolvers. You can specify a default max age when you set up `cacheControl` in your server. This max age will be applied to all resolvers which don't explicitly set `maxAge` via schema hints (including schema hints on the type that they return) or the programmatic API. You can override this for a particular resolver or type by setting `@cacheControl(maxAge: 0)`.
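The precedence just described can be modeled roughly as follows (an illustrative sketch, not Apollo Server's actual resolution code):

```js
// Rough precedence model: an explicit field hint wins (including an
// explicit 0), then a hint on the field's return type, then the
// server-wide defaultMaxAge.
function effectiveMaxAge(fieldHint, typeHint, defaultMaxAge) {
  if (fieldHint !== undefined) return fieldHint;
  if (typeHint !== undefined) return typeHint;
  return defaultMaxAge;
}
```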

Just like when you set `@cacheControl(maxAge: 5)` explicitly on a field or a type, data is considered to be public by default and the cache will be shared among all users of your site, so when using this option, be sure that you're really OK with creating a shared cache for all of your GraphQL queries. You can still override a specific type or resolver to use the private cache by setting `@cacheControl(scope: PRIVATE)`.

For example, for Express:
```javascript
app.use('/graphql', bodyParser.json(), graphqlExpress({
  schema,
  context: {},
  tracing: true,
  cacheControl: {
    defaultMaxAge: 5,
  },
}));
```

Setting `defaultMaxAge` requires `apollo-server-*` 1.3.4 or newer.
<h4 style="position: relative;">
<span id="configure-cache-options" style="position: absolute; top: -100px;" ></span>
3. _Optional:_ Configure cache options
</h4>

As long as you're using a version of the Engine proxy greater than `1.0`, you won't have to configure anything to use public response caching. The proxy comes with a default 50MB in-memory cache. To enable private response caching, or to configure the details of how caching works, there are a few relevant fields in the Engine configuration (i.e., the argument to `new ApolloEngine`).

Here is an example of changing the Engine config for caching `scope: PUBLIC` responses to use memcached instead of an in-memory cache. Since no `privateFullQueryStore` is provided, `scope: PRIVATE` responses will not be cached.
```js
const engine = new ApolloEngine({
  stores: [{
    memcache: {
      url: ['localhost:4567'],
    },
  }],
  // ...
});
```
Below is an example of an Engine config for caching `scope: PUBLIC` and `scope: PRIVATE` responses, using the default (empty-string-named 50MB in-memory) cache for public responses and authorization tokens, and memcached for private responses. By using a separate private response cache, we guarantee that a response affecting multiple users is never evicted to make room for a response affecting only a single user.

```js
const engine = new ApolloEngine({
  stores: [{
    name: 'privateResponseMemcache',
    memcache: {
      url: ['localhost:4567'],
    },
  }],
  sessionAuth: {
    header: 'Authorization',
    tokenAuthUrl: 'https://auth.mycompany.com/engine-auth-check',
  },
  queryCache: {
    privateFullQueryStore: 'privateResponseMemcache',
    // By not mentioning publicFullQueryStore, we keep it enabled with
    // the default empty-string-named in-memory store.
  },
  // ...
});
```
**stores**

Stores is an array of places for Engine to store data such as query responses, authentication checks, or persisted queries.

Every store must have a unique `name`. The empty string is a valid name; there is a default in-memory 50MB cache with the empty string as its name, which is used for any caching feature where you don't specify a store name. You can specify the name `"disabled"` for any caching feature to turn off that feature.

Engine supports two types of stores:

* `inMemory` stores provide a bounded LRU cache embedded within the Engine Proxy. Since there are no external servers to configure, in-memory stores are the easiest to get started with, and since there's no network overhead, they are the fastest option. However, if you're running multiple copies of Engine Proxy, their in-memory stores won't be shared: a cache hit on one server may be a cache miss on another. In-memory caches are wiped whenever Engine Proxy restarts.

  The only configuration required for in-memory stores is `cacheSize`, an upper limit specified in bytes. It defaults to 50MB.

* `memcache` stores use external [Memcached](https://memcached.org/) server(s) for persistence. This provides a shared location so that multiple copies of Engine Proxy achieve the same cache hit rate, and the cache is not wiped across Engine Proxy restarts.

  Memcache store configuration requires an array of addresses called `url` for the memcached servers. (This name is misleading: the values are `host:port` without any URL scheme like `http://`.) All addresses must contain both host and port, even if using the default memcached port. The AWS ElastiCache discovery protocol is not currently supported. A `keyPrefix` may also be specified to allow multiple environments (e.g. dev/staging/production) to share a memcached server.

We suggest developers start with an in-memory store, then upgrade to Memcached if the added deployment complexity is worth it for production. Memcached gives you much more control over memory usage and enables sharing the cache across multiple Engine proxy instances.

**sessionAuth**

This is useful when you want to do per-session response caching with Engine. To be able to cache results for a particular user, Engine needs to know how to identify a logged-in user. In this example, we've configured it to look for an `Authorization` header, so private data will be stored with a key that's specific to the value of that header.

You can specify that the session ID is defined by either a header or a cookie. Optionally, you can specify a REST endpoint which the Engine Proxy can use to determine whether a given token is valid.

**queryCache**

This maps the types of result caching Engine performs to the stores you've defined in the `stores` field. In this case, we're sending public and private cached data to separate stores, so that responses affecting multiple users will never be evicted to make room for responses affecting a single user.

If you leave `queryCache.publicFullQueryStore` blank, it will use the default 50MB in-memory cache. Set it to `"disabled"` to turn off the cache.

If you configure `sessionAuth` but leave `queryCache.privateFullQueryStore` blank, it will use the default 50MB in-memory cache. Set it to `"disabled"` to turn off the cache.

#### Visualizing caching

One of the best parts about using caching via the Engine proxy is that you can easily see how it's working once you set it up. The Metrics views in the Engine UI show you exactly which responses are cached and which are not, so you can understand how caching is helping you make your server more performant. Here's what the Engine metrics charts look like when you have everything set up correctly:

![Cache metrics](../img/apollo-engine/cache-metrics.png)

#### How HTTP headers affect caching

The main way that your GraphQL server specifies cache behavior is through the `cacheControl` GraphQL extension, which is rendered in the body of a GraphQL response. However, Engine also understands and sets several caching-related HTTP headers.

**HTTP headers interpreted by Engine**

Engine will never decide to cache responses in its response cache unless you tell it to with the `cacheControl` GraphQL extension. However, Engine does observe some HTTP headers and can use them to restrict caching further than what the extension says. These headers include:

* `Cache-Control` **response** header: If the `Cache-Control` response header contains `no-store`, `no-cache`, or `private`, Engine will not cache the response. If the `Cache-Control` response header contains `max-age` or `s-maxage` directives, then Engine will not cache any data for longer than the specified amount of time. (That is, data will be cached for the minimum of the header-provided `max-age` and the extension-provided `maxAge`.) `s-maxage` takes precedence over `max-age`.
* `Cache-Control` **request** header: If the `Cache-Control` request header contains `no-cache`, Engine will not look in the cache for responses. If the `Cache-Control` request header contains `no-store`, Engine will not cache the response.
* `Expires` response header: If the `Expires` response header is present, then Engine will not cache any data past the given date. The `Cache-Control` directives `s-maxage` and `max-age` take precedence over `Expires`.
* `Vary` response header: If the `Vary` response header is present, then Engine will not return this response to any request whose headers named in the `Vary` header don't match the request that created this response. (For example, if a request had an `Accept-Language: de` header and the response had a `Vary: Accept-Language` header, then that response won't be returned from the cache for any request that does not also have an `Accept-Language: de` header.) Additionally, Engine uses a heuristic to store requests that have different values for headers it suspects may show up in the response `Vary` header under different cache keys; currently, the heuristic assumes that any header that has ever shown up in a `Vary` header in a GraphQL response may be relevant.
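The response-header rules can be summarized in a small sketch (a simplified model; Engine's real logic also handles `Expires`, `Vary`, and request headers):

```js
// Simplified model: restrictive Cache-Control directives block caching
// entirely; otherwise the TTL is the minimum of the extension-provided
// maxAge and the header-provided lifetime, with s-maxage preferred
// over max-age.
function effectiveTtl(extensionMaxAge, cacheControlHeader = '') {
  const directives = cacheControlHeader
    .toLowerCase()
    .split(',')
    .map((d) => d.trim());
  if (['no-store', 'no-cache', 'private'].some((d) => directives.includes(d))) {
    return 0;
  }
  const sMaxAge = directives.find((d) => d.startsWith('s-maxage='));
  const maxAge = directives.find((d) => d.startsWith('max-age='));
  const headerDirective = sMaxAge || maxAge; // s-maxage takes precedence
  if (!headerDirective) return extensionMaxAge;
  return Math.min(extensionMaxAge, parseInt(headerDirective.split('=')[1], 10));
}
```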

**HTTP headers set by Engine**

When returning a GraphQL response which is eligible for the full-query cache (i.e., all of the data has a non-zero `maxAge` set in the `cacheControl` GraphQL extension), Engine sets the `Cache-Control` header with a `max-age` directive equal to the minimum `maxAge` of all data in the response. If any of the data in the response has a `scope: PRIVATE` hint, the `Cache-Control` header will include the `private` directive; otherwise it will include the `public` directive. This header completely replaces any `Cache-Control` and `Expires` headers provided by your GraphQL server.
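For illustration, the header could be rendered like this (a hypothetical helper; the exact directive order Engine emits may differ):

```js
// Builds a Cache-Control header from a computed whole-response policy.
// Returns null when the response isn't eligible for the full-query cache.
function cacheControlHeaderFor(policy) {
  if (!policy.maxAge) return null;
  const visibility = policy.scope === 'PRIVATE' ? 'private' : 'public';
  return `max-age=${policy.maxAge}, ${visibility}`;
}
```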

<h3 id="cdn">CDN integration</h3>

Many high-traffic web services use content delivery networks (CDNs) such as [Cloudflare](https://www.cloudflare.com/), [Akamai](https://www.akamai.com/) or [Fastly](https://www.fastly.com/) to cache their content as close to their clients as possible.

> Apollo Server 2 supports CDN integration out of the box and doesn't require the Engine Proxy. To learn how, read through the [guide on CDN integration](/docs/apollo-server/whats-new.html#CDN-integration). For other server implementations, the Engine Proxy makes it straightforward to use CDNs with GraphQL queries whose responses can be cached while still passing more dynamic queries through to your GraphQL server.

To use the Engine proxy behind a CDN, you need to be able to tell the CDN which GraphQL responses it's allowed to cache, and you need to make sure that your GraphQL requests arrive in a format that CDNs can cache. Engine Proxy supports this by combining its [caching](#caching) and [automatic persisted queries](#automatic-persisted-queries) features. This section explains the basic steps for setting up these features to work with CDNs; for more details on how to configure them, see their respective sections.

#### 1. Set up caching using Apollo Cache Control

You'll need to follow the guide in the [caching](#caching) section to set up your server to extend its responses with cache hint extensions.

Once you have your server sending responses with cache hints in `response.extensions`, your Engine proxy will start serving the HTTP `Cache-Control` header on _fully cacheable_ responses (any response containing only data with non-zero `maxAge` annotations). The header will refer to the minimum `maxAge` value across the whole response, and it will be `public` unless some of the data is tagged `scope: PRIVATE`. You should be able to observe this header in your browser's dev tools. The Engine proxy will also cache the responses in its own default public in-memory cache.

#### 2. Set up automatic persisted queries

At this point, GraphQL requests are still POST requests. Most CDNs will only cache GET requests, and GET requests generally work best if the URL is of a bounded size. To work with this, enable the Apollo Engine Proxy's Automatic Persisted Queries (APQ) support. This allows clients to send short hashes instead of full queries, and you can configure it to use GET requests for those queries.

To do this, follow the steps in the [guide above](#automatic-persisted-queries). After completing them, you should be able to observe queries being sent as `GET` requests with the appropriate `Cache-Control` response headers using your browser's developer tools.

#### 3. Set up your CDN

Exactly how this works depends on which CDN you chose. Configure your CDN to send requests to your Engine proxy-powered GraphQL app. For some CDNs, you may need to specially configure your CDN to honor origin `Cache-Control` headers. For example, here is [Akamai's documentation on that setting](https://learn.akamai.com/en-us/webhelp/ion/oca/GUID-57C31126-F745-4FFB-AA92-6A5AAC36A8DA.html). If all is well, your cacheable queries should now be cached by your CDN! Note that requests served directly by your CDN will not show up in your Engine dashboard.

<h3 id="query-batching">Query batching</h3>

Query batching allows your client to batch multiple queries into one request. This means that if you render several view components within a short time interval, for example a navbar, sidebar, and content, and each of those does its own GraphQL query, the queries can be sent together in a single roundtrip.

A batch of queries can be sent by simply sending a JSON-encoded array of queries in the request:
```js
[
  { "query": "{
      feed(limit: 2, type: NEW) {
        postedBy {
          login
        }
        repository {
          name
          owner {
            login
          }
        }
      }
    }" },
  { "query": "query CurrentUserForLayout {
      currentUser {
        __typename
        avatar_url
        login
      }
    }" }
]
```

Batched requests to servers that don’t support batching fail without explicit code to handle batching; the Engine proxy, however, has batched request handling built in.
If a batch of queries is sent, the batch is fractured by the Engine proxy and the individual queries are sent to your origin in parallel. Engine waits for all the responses to complete and sends a single response back to the client. The response will be an array of GraphQL results:
```js
[{
  "data": {
    "feed": [
      {
        "postedBy": {
          "login": "AleksandraKaminska"
        },
        "repository": {
          "name": "GitHubApp",
          "owner": {
            "login": "AleksandraKaminska"
          }
        }
      },
      {
        "postedBy": {
          "login": "ashokhein"
        },
        "repository": {
          "name": "memeryde",
          "owner": {
            "login": "ashokhein"
          }
        }
      }
    ]
  }
},
{
  "data": {
    "currentUser": {
      "__typename": "User",
      "avatar_url": "https://avatars2.githubusercontent.com/u/11861843?v=4",
      "login": "johannakate"
    }
  }
}]
```
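The fracture-and-reassemble behavior can be sketched as follows (a simplified model; `fetchFromOrigin` is a stand-in for the proxy's origin request logic):

```js
// Fracture a batch, forward each query to the origin in parallel, and
// return the results as a single array in the original order.
async function handleBatch(queries, fetchFromOrigin) {
  return Promise.all(queries.map((q) => fetchFromOrigin(q)));
}
```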

If your origin supports batching and you'd like to pass entire batches through instead of having the Engine proxy break them up, set `supportsBatch: true` within the `origins` section of the configuration:

```js
const engine = new ApolloEngine({
  apiKey: "ENGINE_API_KEY",
  origins: [{
    supportsBatch: true,
  }],
});
```

#### Batching in Apollo Client with Engine

Apollo Client has built-in support for batching queries in your client application. To learn how to use query batching with Apollo Client, visit the in-depth guide on our package [`apollo-link-batch-http`](/docs/link/links/batch-http.html).

If you have questions, we're always available at support@apollographql.com.

## Proxy configuration

View our [full proxy configuration doc](/docs/references/proxy-config.html) for information on every available configuration option for the Engine proxy.