
Deploying with AWS Lambda

How to deploy Apollo Server with AWS Lambda


AWS Lambda is a serverless computing platform with a pay-for-use billing model that enables you to run code without worrying about provisioning or managing servers.

In this guide, we'll walk through how to deploy Apollo Server's AWS Lambda integration to AWS Lambda using the Serverless framework.

Prerequisites

Make sure you've completed the following before proceeding with this guide:

- Set up an AWS account.
- Install and configure the AWS CLI (see the warning below about which credentials to use).
- Install Node.js and npm.

⚠️ AWS best practices warn against using your AWS account root user keys for any task where it's not required (e.g., don't use these keys to configure the AWS CLI). Instead, create an IAM user account with the least privilege required to deploy your application, and configure the AWS CLI to use that account.

Setting up your project

For this example, we'll start from scratch to show how all the pieces fit together.

Begin by installing the necessary packages for using Apollo Server and its integration for AWS Lambda:

npm install @apollo/server graphql @as-integrations/aws-lambda
npm install -D typescript

Next, we'll create a file with a basic Apollo Server setup. Note the file's name and location; we'll need those in a later step:

src/server.ts
import { ApolloServer } from '@apollo/server';

// The GraphQL schema
const typeDefs = `#graphql
  type Query {
    hello: String
  }
`;

// A map of functions which return data for the schema.
const resolvers = {
  Query: {
    hello: () => 'world',
  },
};

// Set up Apollo Server
const server = new ApolloServer({
  typeDefs,
  resolvers,
});

Now we can import the startServerAndCreateLambdaHandler function and handlers object from @as-integrations/aws-lambda, passing in our ApolloServer instance:

src/server.ts
import { ApolloServer } from '@apollo/server';
import { startServerAndCreateLambdaHandler, handlers } from '@as-integrations/aws-lambda';

const typeDefs = `#graphql
  type Query {
    hello: String
  }
`;

const resolvers = {
  Query: {
    hello: () => 'world',
  },
};

const server = new ApolloServer({
  typeDefs,
  resolvers,
});

// This final export is important!
export const graphqlHandler = startServerAndCreateLambdaHandler(
  server,
  // We will be using the Proxy V2 handler
  handlers.createAPIGatewayProxyEventV2RequestHandler(),
);

The final line in the code snippet above creates an export named graphqlHandler with a Lambda function handler. We'll get back to this function in a moment!

Deploying using the Serverless framework

Serverless is a framework that helps make deploying serverless applications to platforms like AWS Lambda easier.

Installing the CLI

We'll use the Serverless CLI to deploy our application. You can either install the Serverless package into your project directly or install the Serverless CLI globally:

npm install -g serverless

The Serverless CLI can access the credentials of the AWS CLI, which you configured earlier. So now we just need to tell Serverless which service we want to deploy.

AWS best practices recommend rotating your access keys for use cases that require long-term credentials (e.g., hosting an application).
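
If you skipped configuring credentials during the prerequisites, running aws configure walks you through it. The values shown below are placeholders for your IAM user's access key pair and your preferred region, not values this guide prescribes:

aws configure
# AWS Access Key ID [None]: <your IAM user's access key ID>
# AWS Secret Access Key [None]: <your IAM user's secret access key>
# Default region name [None]: us-east-1
# Default output format [None]: json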

Configuring services

You can configure Serverless using a serverless.yml file, letting it know which services to deploy and where the handlers are.

If you are using TypeScript, install the serverless-plugin-typescript package so that Serverless can build your TypeScript files:

npm install -D serverless-plugin-typescript
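
serverless-plugin-typescript compiles your project with sensible defaults, so a tsconfig.json is optional. If you want one, a minimal sketch like the following is a reasonable starting point; the exact compiler options depend on your project and aren't prescribed by this guide:

tsconfig.json
{
  "compilerOptions": {
    "target": "es2020",
    "module": "commonjs",
    "esModuleInterop": true,
    "strict": true
  },
  "include": ["src"]
}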

You can use the example serverless.yml configuration below; make sure the handler path points to the file where you export your handler:

serverless.yml
service: apollo-lambda
provider:
  name: aws
  runtime: nodejs16.x
  httpApi:
    cors: true
functions:
  graphql:
    # Make sure your file path is correct!
    # (e.g., if your file is in the root folder use server.graphqlHandler )
    # The format is: <FILENAME>.<HANDLER>
    handler: src/server.graphqlHandler
    events:
      - httpApi:
          path: /
          method: POST
      - httpApi:
          path: /
          method: GET
# Omit the following lines if you aren't using TS!
plugins:
  - serverless-plugin-typescript

Running locally

Before deploying, we can use the Serverless CLI to invoke our handler locally to ensure everything is working. We'll do this by mocking an HTTP request with a GraphQL operation.

You can store a mock HTTP request locally by creating a query.json file like the one below (irrelevant requestContext properties are omitted for brevity):

query.json
{
  "version": "2",
  "headers": {
    "content-type": "application/json"
  },
  "isBase64Encoded": false,
  "rawQueryString": "",
  "requestContext": {
    "http": {
      "method": "POST"
    }
  },
  "rawPath": "/",
  "routeKey": "/",
  "body": "{\"operationName\": null, \"variables\": null, \"query\": \"{ hello }\"}"
}

Now we can use serverless to invoke our handler using the query above:

serverless invoke local -f graphql -p query.json

Your response should look something like this:

{
  "statusCode": 200,
  "headers": {
    "cache-control": "no-store",
    "content-type": "application/json; charset=utf-8",
    "content-length": "27"
  },
  "body": "{\"data\":{\"hello\":\"world\"}}\n"
}

With everything working locally, we can move on to deployment!

Deploying

As we mentioned earlier, Serverless already has access to your AWS CLI credentials, so to deploy, all you need to do is run the following command:

serverless deploy

If successful, serverless should output something like this:

> serverless deploy
> Deploying apollo-lambda to stage dev (us-east-1)
> ✔ Service deployed to stack apollo-lambda-dev (187s)
>
> endpoints:
>   POST - https://ujt89xxyn3.execute-api.us-east-1.amazonaws.com/dev/
>   GET - https://ujt89xxyn3.execute-api.us-east-1.amazonaws.com/dev/
> functions:
>   graphql: apollo-lambda-dev-graphql
>
> Monitor all your API routes with Serverless Console: run "serverless --console"

You can now navigate to your endpoints and query your newly hosted server using Apollo Sandbox.
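
For example, you can send the same { hello } query to the POST endpoint from the deploy output with curl (substitute your own endpoint URL):

curl -X POST 'https://<your-endpoint>.execute-api.us-east-1.amazonaws.com/dev/' \
  -H 'content-type: application/json' \
  --data '{"query": "{ hello }"}'

A healthy deployment responds with {"data":{"hello":"world"}}, just like the local invocation earlier.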

What does serverless do?

First, it builds the functions, zips up the artifacts, and uploads them to a new S3 bucket. Then, it creates a Lambda function with those artifacts and outputs the HTTP endpoint URLs to the console if everything is successful.
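
If you later need to see the deployed endpoints and function names again without redeploying, the serverless info command prints the same summary for the current stage:

serverless info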

Managing the resulting services

The resulting S3 buckets and Lambda functions are accessible from the AWS Console. The AWS Console also lets you view the IAM user you created earlier.

To find the S3 bucket that Serverless created, search in Amazon's listed services for S3, then look for the name of your bucket (e.g., apollo-lambda-dev-serverlessdeploymentbucket-1s10e00wvoe5f is the name of our bucket).

To find the Lambda function that Serverless created, search in Amazon's listed services for Lambda. Double-check the region at the top right of the screen if your list of Lambda functions is empty or missing your new function. The default region for Serverless deployments is us-east-1 (N. Virginia).
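
If you prefer the AWS CLI to the console, the following commands list the same resources (the region shown is an assumption; use the one you deployed to):

aws s3 ls
aws lambda list-functions --region us-east-1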

If you ever want to remove the S3 bucket or Lambda functions that Serverless created, you can run the following command:

serverless remove

Middleware

To read or mutate the incoming event and the outgoing result, you can pass type-safe middleware to the startServerAndCreateLambdaHandler call. The API is as follows:

import { middleware, startServerAndCreateLambdaHandler, handlers } from '@as-integrations/aws-lambda';
import { server } from './server';

const requestHandler = handlers.createAPIGatewayProxyEventV2RequestHandler();

// Middleware is an async function whose type is based on your request handler. Middleware
// can read and mutate the incoming event. Additionally, returning an async function from your
// middleware allows you to read and mutate the result before it's sent.
const middlewareFn: middleware.MiddlewareFn<typeof requestHandler> = async (event) => {
  // read or update the event here
  // optionally return a callback to access the result
  return async (result) => {
    // read or update the result here
  };
};

startServerAndCreateLambdaHandler(server, requestHandler, {
  middleware: [middlewareFn],
});

One use case for middleware is cookie modification. The APIGatewayProxyStructuredResultV2 type has a cookies property that you can push to, which lets you set multiple set-cookie headers in the response.

import {
  startServerAndCreateLambdaHandler,
  middleware,
  handlers,
} from '@as-integrations/aws-lambda';
import { server } from './server';
import { refreshCookie } from './cookies';

const requestHandler = handlers.createAPIGatewayProxyEventV2RequestHandler();

// Utilizing typeof
const cookieMiddleware: middleware.MiddlewareFn<typeof requestHandler> = async (
  event,
) => {
  // Access existing cookies and produce a refreshed one
  const cookie = refreshCookie(event.cookies);
  return async (result) => {
    // Ensure proper initialization of the cookies property on the result
    result.cookies = result.cookies ?? [];
    // Result is mutable so it can be updated here
    result.cookies.push(cookie);
  };
};

export default startServerAndCreateLambdaHandler(server, requestHandler, {
  middleware: [cookieMiddleware],
});

More use-cases and API information can be found in the library's README.
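
As another sketch of the same middleware API, here's a hedged example of a logging middleware that records the incoming HTTP method and path and the outgoing status code; the event and result shapes are the API Gateway payload V2 types used throughout this guide:

import { middleware, startServerAndCreateLambdaHandler, handlers } from '@as-integrations/aws-lambda';
import { server } from './server';

const requestHandler = handlers.createAPIGatewayProxyEventV2RequestHandler();

// Example only: log basic request and response details for each invocation.
const loggingMiddleware: middleware.MiddlewareFn<typeof requestHandler> = async (event) => {
  console.log('Incoming request:', event.requestContext.http.method, event.rawPath);
  return async (result) => {
    // result is an APIGatewayProxyStructuredResultV2, so statusCode can be undefined
    console.log('Responding with status:', result.statusCode);
  };
};

export default startServerAndCreateLambdaHandler(server, requestHandler, {
  middleware: [loggingMiddleware],
});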

Event extensions

In many cases, API Gateway events pass through an authorizer that attaches custom state, which you then use for authorization during GraphQL resolution. All of the handlers packaged with the library accept a generic type parameter that lets you explicitly extend the base event type. If you pass an event type that includes authorizer information, that type is used when creating contextValue and in middleware. Below is an example; more information can be found in the library's README.

import { startServerAndCreateLambdaHandler, middleware, handlers } from '@as-integrations/aws-lambda';
import type { APIGatewayProxyEventV2WithLambdaAuthorizer } from 'aws-lambda';
import { server } from './server';

export default startServerAndCreateLambdaHandler(
  server,
  handlers.createAPIGatewayProxyEventV2RequestHandler<
    APIGatewayProxyEventV2WithLambdaAuthorizer<{
      myAuthorizerContext: string;
    }>
  >(),
);

Custom request handling

In order to support all event types from AWS Lambda (including custom ones), a request handler creation utility is exposed as handlers.createRequestHandler(eventParser, resultGenerator). This function returns a fully typed request handler that can be passed as the second argument to the startServerAndCreateLambdaHandler call. Below is an example; the exact API is documented in the library's README.

import { startServerAndCreateLambdaHandler, handlers } from '@as-integrations/aws-lambda';
import type { APIGatewayProxyEventV2 } from 'aws-lambda';
import { HeaderMap } from '@apollo/server';
import { server } from './server';

type CustomInvokeEvent = {
  httpMethod: string;
  queryParams: string;
  headers: Record<string, string>;
  body: string;
};

type CustomInvokeResult =
  | {
      success: true;
      body: string;
    }
  | {
      success: false;
      error: string;
    };

const requestHandler = handlers.createRequestHandler<CustomInvokeEvent, CustomInvokeResult>(
  {
    parseHttpMethod(event) {
      return event.httpMethod;
    },
    parseHeaders(event) {
      const headerMap = new HeaderMap();
      for (const [key, value] of Object.entries(event.headers)) {
        headerMap.set(key, value);
      }
      return headerMap;
    },
    parseQueryParams(event) {
      return event.queryParams;
    },
    parseBody(event) {
      return event.body;
    },
  },
  {
    success({ body }) {
      return {
        success: true,
        body: body.string,
      };
    },
    error(e) {
      if (e instanceof Error) {
        return {
          success: false,
          error: e.toString(),
        };
      }
      console.error('Unknown error type encountered!', e);
      throw e;
    },
  },
);

export default startServerAndCreateLambdaHandler(server, requestHandler);

Using event information

You can use the context function to get information about the current operation from the original Lambda data structures.

Your context function can access this information from its argument containing event and context objects:

const server = new ApolloServer<MyContext>({
  typeDefs,
  resolvers,
});

// This final export is important!
export const graphqlHandler = startServerAndCreateLambdaHandler(
  server,
  handlers.createAPIGatewayProxyEventV2RequestHandler(),
  {
    context: async ({ event, context }) => {
      return {
        lambdaEvent: event,
        lambdaContext: context,
      };
    },
  },
);

The event object contains the API Gateway event (HTTP headers, HTTP method, body, path, etc.). The context object (not to be confused with the context function) contains the current Lambda Context (function name, function version, awsRequestId, time remaining, etc.).
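
For example, resolvers can then read these values from contextValue. This is only a sketch; the MyContext name and its fields simply mirror the shape returned by the context function above:

import type { APIGatewayProxyEventV2, Context } from 'aws-lambda';

// The contextValue shape produced by the context function above.
interface MyContext {
  lambdaEvent: APIGatewayProxyEventV2;
  lambdaContext: Context;
}

const resolvers = {
  Query: {
    // Resolvers receive the context value as their third argument.
    hello: (_parent: unknown, _args: unknown, contextValue: MyContext) =>
      `world (served by Lambda request ${contextValue.lambdaContext.awsRequestId})`,
  },
};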

If you've changed your setup to use @vendia/serverless-express (described below), your context function instead receives req and res options, which are express.Request and express.Response objects:

const { ApolloServer } = require('@apollo/server');
const { expressMiddleware } = require('@apollo/server/express4');
const serverlessExpress = require('@vendia/serverless-express');
const express = require('express');
const { json } = require('body-parser');
const cors = require('cors');

const server = new ApolloServer({
  typeDefs: 'type Query { x: ID }',
  resolvers: { Query: { x: () => 'hi!' } },
});

server.startInBackgroundHandlingStartupErrorsByLoggingAndFailingAllRequests();

const app = express();

app.use(
  cors(),
  json(),
  expressMiddleware(server, {
    // The Express request and response objects are passed into
    // your context initialization function
    context: async ({ req, res }) => {
      // Here is where you'll have access to the
      // API Gateway event and Lambda Context
      const { event, context } = serverlessExpress.getCurrentInvoke();
      return {
        expressRequest: req,
        expressResponse: res,
        lambdaEvent: event,
        lambdaContext: context,
      };
    },
  }),
);

exports.handler = serverlessExpress({ app });

Customizing HTTP routing behavior

If you want to customize your HTTP routing behavior, you can couple Apollo Server's Express integration (i.e., expressMiddleware) with the @vendia/serverless-express package. The @vendia/serverless-express library translates between Lambda events and Express requests. Despite their similar names, the Serverless CLI and the @vendia/serverless-express package are unrelated.

You can update your Apollo Server setup to the following to have a fully functioning Lambda server that works with a variety of AWS features:

const { ApolloServer } = require('@apollo/server');
const { expressMiddleware } = require('@apollo/server/express4');
const serverlessExpress = require('@vendia/serverless-express');
const express = require('express');
const { json } = require('body-parser');
const cors = require('cors');

const server = new ApolloServer({
  typeDefs: 'type Query { x: ID }',
  resolvers: { Query: { x: () => 'hi!' } },
});

server.startInBackgroundHandlingStartupErrorsByLoggingAndFailingAllRequests();

const app = express();
app.use(cors(), json(), expressMiddleware(server));

exports.graphqlHandler = serverlessExpress({ app });

The setup enables you to customize your HTTP behavior as needed.
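
For example, a hedged sketch of a customized setup might mount GraphQL on a dedicated path and add an unrelated health-check route on the same Express app; the /graphql and /health paths are just examples, not requirements:

const { ApolloServer } = require('@apollo/server');
const { expressMiddleware } = require('@apollo/server/express4');
const serverlessExpress = require('@vendia/serverless-express');
const express = require('express');
const { json } = require('body-parser');
const cors = require('cors');

const server = new ApolloServer({
  typeDefs: 'type Query { x: ID }',
  resolvers: { Query: { x: () => 'hi!' } },
});
server.startInBackgroundHandlingStartupErrorsByLoggingAndFailingAllRequests();

const app = express();

// A plain Express route alongside GraphQL; '/health' is only an example path.
app.get('/health', (req, res) => {
  res.status(200).send('ok');
});

// Serve GraphQL from a dedicated path instead of every path.
app.use('/graphql', cors(), json(), expressMiddleware(server));

exports.graphqlHandler = serverlessExpress({ app });

If you route multiple paths like this, remember that API Gateway only forwards the paths you declare in serverless.yml, so you'd also add matching httpApi events (or a catch-all route) there.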
