Keep AWS Lambdas Warm with .NET Core

After my last blog post about the extended cold start times with AWS Lambdas inside a private VPC, we needed to find a way to minimize the effect of this in our serverless application. 

Are you sure?

The first consideration before adding a warmer is to ensure it's actually needed.  If the Lambda doesn't require low latency, the cold start is fast, or the client can handle retries with little effort, you may not need a warmer at all.  On the other hand, if your Lambda is processing logic in response to an external client that expects fast, reliable responses, then you probably do.  If you're not sure, AWS provides really good documentation about concurrency and throttling behavior, which is also a good primer for this post.

Okay, how?

The best practice recommended by both AWS and the industry is to use CloudWatch to ping and warm the Lambda.  Unofficially, AWS has indicated that Lambdas inside a private VPC live for around 15 minutes.  After extensive research, this appears to be true for the first instance of the Lambda.  However, additional instances (beyond the first) only live for around 5 minutes, so we will schedule warming every 5 minutes.

There are many blog posts and tutorials online describing how to do this, but very few cover .NET Core.  There are also packaged solutions for this problem, either through frameworks like Serverless or through npm packages.

Set Up CloudWatch

The CloudWatch piece is language agnostic.  We need an Event Rule that calls our Lambda function every 5 minutes:

[Image: CloudWatch Event Rule configured with a 5-minute schedule and a constant JSON input]

Notice we have configured the input to send a JSON constant.  We do this so we know when it's the warmer calling the Lambda.  More on this in a moment.
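For reference, the JSON constant we configure as the rule's input looks like this (the Body value is the number of concurrent instances we want kept warm):

```json
{ "Resource": "WarmingLambda", "Body": "5" }
```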

You can also use CloudFormation to create the rule:

"Resources": {
    "LambdaWarmer": {
        "Type": "AWS::Events::Rule",
        "Properties": {
            "Description": {
                "Fn::Sub": [
                    "A CloudWatch Event for warming ${lambdaARN}",
                    {
                        "lambdaARN": {
                            "Ref": "lambdaARN"
                        }
                    }
                ]
            },
            "ScheduleExpression": "rate(5 minutes)",
            "Targets": [{
                "Arn": {
                    "Ref": "lambdaARN"
                },
                "Id": "Warmer",
                "Input": "{ \"Resource\": \"WarmingLambda\", \"Body\": \"5\" }"
            }]
        }
    }
}
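If you prefer the command line, the same rule can be sketched with the AWS CLI.  The rule name, function name, and ARNs below are placeholders for illustration:

```shell
# Create the schedule rule (fires every 5 minutes)
aws events put-rule \
    --name LambdaWarmer \
    --schedule-expression "rate(5 minutes)"

# Point the rule at the Lambda with the constant JSON input
aws events put-targets \
    --rule LambdaWarmer \
    --targets '[{"Id": "Warmer", "Arn": "arn:aws:lambda:us-east-1:123456789012:function:my-function", "Input": "{ \"Resource\": \"WarmingLambda\", \"Body\": \"5\" }"}]'

# Allow CloudWatch Events to invoke the function
aws lambda add-permission \
    --function-name my-function \
    --statement-id LambdaWarmerPermission \
    --action lambda:InvokeFunction \
    --principal events.amazonaws.com \
    --source-arn arn:aws:events:us-east-1:123456789012:rule/LambdaWarmer
```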

Warming in .NET Core

Our primary use case for keeping Lambdas warm was with APIs for external users via API Gateway so that’s what we will use as our example. Direct Lambda execution can use the same approach with slight tweaks.

Background

When you use the AWS Serverless Application templates for .NET, they assume you are also using API Gateway to route HTTP requests to the Lambda.  So the project includes a LambdaEntryPoint class which inherits from APIGatewayProxyFunction.

public class LambdaEntryPoint : Amazon.Lambda.AspNetCoreServer.APIGatewayProxyFunction
{
    /// <summary>
    /// The builder has configuration, logging and Amazon API Gateway already configured. The startup class
    /// needs to be configured in this method using the UseStartup<>() method.
    /// </summary>
    /// <param name="builder"></param>
    protected override void Init(IWebHostBuilder builder)
    {
        builder
            .UseStartup<Startup>()
            .UseApiGateway();
    }
}

AWS developed the APIGatewayProxyFunction to abstract the creation of the .NET host and parse/pass the HTTP request to the host correctly.  The source for this class can be found on GitHub and is very helpful in understanding how .NET code is executed in Lambda for API Gateway.

By Default

Without further changes, the Lambda will attempt to parse the CloudWatch Event Rule trigger request but fails when converting the request into an APIGatewayProxyRequest.  You will see errors (blue line below) in your monitoring for the Lambda.

[Image: Lambda monitoring chart showing invocation errors]

The error details are logged in CloudWatch for the Lambda:

One or more errors occurred. (Object reference not set to an instance of an object.): AggregateException
at System.Threading.Tasks.Task`1.GetResultCore(Boolean waitCompletionNotification)
at lambda_method(Closure , Stream , Stream , LambdaContextInternal )
at Amazon.Lambda.AspNetCoreServer.APIGatewayProxyFunction.MarshallRequest(InvokeFeatures features, APIGatewayProxyRequest apiGatewayRequest, ILambdaContext lambdaContext)
at Amazon.Lambda.AspNetCoreServer.APIGatewayProxyFunction.FunctionHandlerAsync(APIGatewayProxyRequest request, ILambdaContext lambdaContext)
at Acme.Lambda.AspNetCoreServer.APIGatewayProxyFunction.FunctionHandlerAsync(APIGatewayProxyRequest request, ILambdaContext lambdaContext) in C:\agent\_work\13\s\Acme.Services.AWS\Lambda\AspNetCoreServer\APIGatewayProxyFunction.cs:line 38

The call fails because the Lambda is expecting a call from API Gateway with all the API request details.  We are only sending it the static JSON payload above ({ "Resource": "WarmingLambda", "Body": "5" }), so there is a null reference exception on one of the API request properties.

Technically this will keep one instance of your Lambda warm, but it adds all those errors as well, which is less than ideal.

Handle the Warming Call

We can improve the solution by handling the warming call in an override of the FunctionHandlerAsync function in the APIGatewayProxyFunction class.

public abstract class APIGatewayProxyFunction : Amazon.Lambda.AspNetCoreServer.APIGatewayProxyFunction
{
    public override async Task<APIGatewayProxyResponse> FunctionHandlerAsync(APIGatewayProxyRequest request, ILambdaContext lambdaContext)
    {
        Console.WriteLine("In overridden FunctionHandlerAsync...");

        if (request.Resource == "WarmingLambda")
        {
            // Default to a single instance if the body is missing or unparseable.
            if (!int.TryParse(request.Body, out var concurrencyCount))
            {
                concurrencyCount = 1;
            }

            if (concurrencyCount > 1)
            {
                Console.WriteLine($"Warming instance {concurrencyCount}.");
                var client = new AmazonLambdaClient();
                await client.InvokeAsync(new Amazon.Lambda.Model.InvokeRequest
                {
                    FunctionName = lambdaContext.FunctionName,
                    InvocationType = InvocationType.RequestResponse,
                    Payload = JsonConvert.SerializeObject(new APIGatewayProxyRequest
                    {
                        Resource = request.Resource,
                        Body = (concurrencyCount - 1).ToString()
                    })
                });
            }

            return new APIGatewayProxyResponse { };
        }

        return await base.FunctionHandlerAsync(request, lambdaContext);
    }
}

In this override, we check the Resource property of the request for the value "WarmingLambda", which is part of the static JSON we send from the CloudWatch rule.  If it's a warming call, we read the desired concurrency count from the body and call the Lambda again using the AWS Lambda SDK, passing the concurrency count minus 1.  This creates a chain of synchronous Lambda calls, which ensures the correct number of Lambda instances are instantiated.  Without the chaining, Lambda instances may handle multiple warming calls and we wouldn't achieve the full number of concurrent executions.  Finally, we return an empty APIGatewayProxyResponse to short-circuit any further code and respond as quickly as possible.
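To put the override in play, the project's LambdaEntryPoint then inherits from our custom base class instead of the AWS one.  A minimal sketch, assuming the custom class is in a namespace your project already imports:

```
// Inherits from our custom APIGatewayProxyFunction, which in turn
// inherits from Amazon.Lambda.AspNetCoreServer.APIGatewayProxyFunction.
public class LambdaEntryPoint : APIGatewayProxyFunction
{
    protected override void Init(IWebHostBuilder builder)
    {
        builder
            .UseStartup<Startup>()
            .UseApiGateway();
    }
}
```

Normal API Gateway requests flow through to the base class unchanged; only warming calls are intercepted.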

Note also that the appropriate security role/permissions must be granted to allow the Lambda to invoke itself.  The AmazonLambdaClient used above comes from the AWSSDK.Lambda NuGet package.
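A minimal sketch of the required IAM permission, assuming the statement is attached to the Lambda's execution role (the ARN below is a placeholder for your function's ARN):

```json
{
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "lambda:InvokeFunction",
        "Resource": "arn:aws:lambda:us-east-1:123456789012:function:my-function"
    }]
}
```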

The result

This setup seems to warm the configured number of concurrent instances effectively.  Here you can see the number of ConcurrentExecutions and Duration from the Lambda monitoring page. 

Note the ConcurrentExecutions metric is only provided when you set a Reserved Concurrency limit.

[Image: ConcurrentExecutions metric from the Lambda monitoring page]

[Image: Duration metric from the Lambda monitoring page]

Notice the inconsistent behavior before the 21:00 hour mark.  This is when we had the warming schedule set to 6 minutes.  After switching to the 5-minute schedule, everything became more consistent, and you can see the maximum duration dropped to less than 1 second, which indicates we were avoiding cold starts.

Further Considerations

The amount of time it takes to perform the warming is linear in the number of concurrent instances you configure.  In our testing, the chained, synchronous calls to the Lambda can take upwards of 1.5 seconds for 50 instances (roughly 30 ms per call on average).  So the first Lambda call executes for the full 1.5 seconds, and every subsequent call has a shorter duration, down to the last call, which is typically less than one millisecond.  The net effect on the system will need to be tested further once we have a production workload to find the best configuration of warm instances and reserved concurrency counts.

We haven’t yet added any error handling in the warming call.  Again we have deferred this to see what type of error handling is needed once we have a production workload.

Feedback!

If this post was helpful to you or you have further questions please leave a comment below to continue the conversation. 

Happy Serverless Computing!
