Router Extensibility Features in Kubernetes
Learn how to deploy a self-hosted router (GraphOS Router or Apollo Router Core) in Kubernetes with extensibility features
The router supports two extensibility options to customize its behavior:
Rhai scripts, which add custom functionality that runs inside the router process.
External coprocessing, which runs custom logic on requests in a separate process throughout the router's request-handling lifecycle.
This guide shows how to deploy a router with these features in Kubernetes.
Deploy with Rhai scripts
The router supports Rhai scripting to add custom functionality.
Enabling Rhai scripts in your deployed router requires mounting an extra volume for your Rhai scripts and getting your scripts onto that volume. You can do this by following the steps in a separate example for creating a custom in-house router chart. The example creates a new (in-house) chart that depends on the released router chart, and the new chart has templates that add the configuration necessary to enable Rhai scripts for a deployed router.
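As a rough sketch of the values involved, the following assumes an in-house chart that names the released router chart dependency router, that the chart exposes extraVolumes and extraVolumeMounts values, and that your scripts ship in a ConfigMap named rhai-scripts created by one of your own templates. All of these names and paths are illustrative, not fixed by the chart.
# Hypothetical values.yaml for an in-house chart wrapping the released router chart.
router:
  configuration:
    rhai:
      scripts: /dist/rhai        # directory where the scripts volume is mounted
      main: main.rhai            # entry-point script inside that directory
  extraVolumeMounts:
    - name: rhai-volume
      mountPath: /dist/rhai
      readOnly: true
  extraVolumes:
    - name: rhai-volume
      configMap:
        name: rhai-scripts       # ConfigMap created by one of your own templates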
Deploy with a coprocessor
You have two options when deploying a coprocessor: as a sidecar container in the router pod, or as a separate Deployment.
Consider the following when deciding which option to use:
The sidecar container option is the simplest and most common way to deploy a coprocessor. It runs the coprocessor in the same pod as the router, which can simplify networking and configuration.
The separate Deployment option runs the coprocessor in a different pod, which can be useful if you want to scale the coprocessor independently of the router.
Deploy as a sidecar container
The router supports external coprocessing to run custom logic on requests throughout the router's request-handling lifecycle.
A deployed coprocessor can have its own application image and container in the router pod.
To configure a coprocessor and its container for your deployed router through a YAML configuration file:
Create a YAML file, coprocessor_values.yaml, to contain additional values that override default values.
Edit coprocessor_values.yaml to configure a coprocessor for the router. For reference, follow the typical and minimal configuration examples, and apply them to router.configuration.coprocessor.
Example of typical configuration for a coprocessor
coprocessor:
  url: http://127.0.0.1:8081 # Required. Replace with the URL of your coprocessor's HTTP endpoint.
  timeout: 2s # The timeout for all coprocessor requests. Defaults to 1 second (1s)
  router: # This coprocessor hooks into the `RouterService`
    request: # By including this key, the `RouterService` sends a coprocessor request whenever it first receives a client request.
      headers: true # These boolean properties indicate which request data to include in the coprocessor request. All are optional and false by default.
      body: false
      context: false
      sdl: false
      path: false
      method: false
    response: # By including this key, the `RouterService` sends a coprocessor request whenever it's about to send response data to a client (including incremental data via @defer).
      headers: true
      body: false
      context: false
      sdl: false
      status_code: false
  supergraph: # This coprocessor hooks into the `SupergraphService`
    request: # By including this key, the `SupergraphService` sends a coprocessor request whenever it first receives a client request.
      headers: true # These boolean properties indicate which request data to include in the coprocessor request. All are optional and false by default.
      body: false
      context: false
      sdl: false
      method: false
    response: # By including this key, the `SupergraphService` sends a coprocessor request whenever it's about to send response data to a client (including incremental data via @defer).
      headers: true
      body: false
      context: false
      sdl: false
      status_code: false
  subgraph:
    all:
      request: # By including this key, the `SubgraphService` sends a coprocessor request whenever it is about to make a request to a subgraph.
        headers: true # These boolean properties indicate which request data to include in the coprocessor request. All are optional and false by default.
        body: false
        context: false
        uri: false
        method: false
        service_name: false
        subgraph_request_id: false
      response: # By including this key, the `SubgraphService` sends a coprocessor request whenever it receives a subgraph response.
        headers: true
        body: false
        context: false
        service_name: false
        status_code: false
        subgraph_request_id: false
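For comparison, the minimal configuration referenced in the previous step only needs the coprocessor's endpoint; every hook section is optional. A minimal sketch:
coprocessor:
  url: http://127.0.0.1:8081 # Required. The URL of your coprocessor's HTTP endpoint.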
Edit coprocessor_values.yaml to add a container for the coprocessor.
extraContainers:
  - name: <coprocessor-deployed-name> # name of deployed container
    image: <coprocessor-app-image> # name of application image
    ports:
      - containerPort: <coprocessor-container-port> # must match port of router.configuration.coprocessor.url
    env: [] # array of environment variables
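As an end-to-end illustration, a filled-in coprocessor_values.yaml for a hypothetical coprocessor image listening on port 8081 might look like the following. The container name and image are placeholders, not a published image.
router:
  configuration:
    coprocessor:
      url: http://127.0.0.1:8081 # the sidecar shares the router pod's network, so localhost works
extraContainers:
  - name: my-coprocessor                        # hypothetical container name
    image: ghcr.io/example/my-coprocessor:1.0   # hypothetical application image
    ports:
      - containerPort: 8081                     # matches the port in the coprocessor url above
    env: []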
Deploy the router with the additional YAML configuration file. For example, starting with the helm install command from the basic deployment step, append --values coprocessor_values.yaml:
helm install --namespace <router-namespace> --set managedFederation.apiKey="<graph-api-key>" --set managedFederation.graphRef="<graph-ref>" oci://ghcr.io/apollographql/helm-charts/router --version <router-version> --values router/values.yaml --values coprocessor_values.yaml
Deploy using a separate Deployment
Deploying as a separate Deployment can take shape in two ways:
Using an entirely separate Helm chart.
Using the router's Helm chart as a dependency and adding a new Deployment template. This option is more complex but allows you to customize the router's Helm chart and add your own templates while keeping the coprocessor's deployment alongside the router's.
Separate Helm chart
In the case of using a separate Helm chart, a coprocessor chart would be deployed independently of the router. This chart would contain the configuration for the coprocessor's deployment. An example folder structure might look like:
charts/
├── coprocessor/
│   ├── Chart.yaml
│   ├── values.yaml
│   ├── templates/
│   │   ├── deployment.yaml
│   │   ├── service.yaml
│   │   └── ...
│   └── ...
├── router/
│   ├── values.yaml
│   └── ...
The router chart would be the router's Helm chart, which you can deploy as described in the Kubernetes deployment guide.
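Because the coprocessor now runs in its own pods rather than alongside the router, point router.configuration.coprocessor.url at the Service created by the coprocessor chart instead of 127.0.0.1. A sketch for the router's values.yaml, assuming a hypothetical Service named coprocessor exposing port 8081:
router:
  configuration:
    coprocessor:
      url: http://coprocessor.<coprocessor-namespace>.svc.cluster.local:8081 # hypothetical Service name and port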
Using the router's Helm chart as a dependency
In the case of using the router's Helm chart as a dependency, you create your own chart that declares the released router chart as a dependency and add a new template to its templates folder. This template would contain the configuration for the coprocessor's deployment.
The Chart.yaml file for this chart would include:
dependencies:
  - name: router
    version: 2.3.0
    repository: oci://ghcr.io/apollographql/helm-charts
An example folder structure might look like:
charts/
├── router/
│   ├── Chart.yaml
│   ├── values.yaml
│   ├── templates/
│   │   ├── deployment.yaml
│   │   ├── service.yaml
│   │   └── ...
│   └── ...
In the above example, the router chart is your own chart, which wraps the released router Helm chart (declared as a dependency above) that you would otherwise deploy as described in the Kubernetes deployment guide. The templates folder would contain the configuration for the coprocessor's deployment. Within the values.yaml you can then nest the necessary router configuration under the router key, such as:
router:
  configuration:
    coprocessor:
      url: http://<coprocessor-service-name>:<coprocessor-container-port>
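To complete the picture, the new template in the templates folder might define the coprocessor's Deployment and Service along the following lines. The resource names, labels, image, and port are hypothetical placeholders that only illustrate how the Service name and port feed into the coprocessor url above.
# templates/coprocessor.yaml (hypothetical sketch)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-coprocessor
spec:
  replicas: 1
  selector:
    matchLabels:
      app: coprocessor
  template:
    metadata:
      labels:
        app: coprocessor
    spec:
      containers:
        - name: coprocessor
          image: ghcr.io/example/my-coprocessor:1.0 # hypothetical image
          ports:
            - containerPort: 8081
---
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}-coprocessor # use this name (and port 8081) in the coprocessor url above
spec:
  selector:
    app: coprocessor
  ports:
    - port: 8081
      targetPort: 8081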