AWS SQS with Dead-Letter Queue (DLQ): Local Setup Using Localstack

2025-08-26

Developing and testing distributed systems can be a complex and challenging task. These systems are composed of multiple services that communicate with each other asynchronously, often through message queues. Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. However, relying solely on the cloud for development and testing can be slow and expensive. This is where Localstack comes in.

Localstack is a high-fidelity local AWS cloud stack that allows you to develop and test your cloud and serverless applications offline. By using Localstack, you can significantly speed up your development and testing cycles, reduce costs, and improve the overall quality of your applications. In this comprehensive guide, we will walk you through the process of setting up AWS SQS with a Dead-Letter Queue (DLQ) locally using Localstack. We will cover everything from setting up your local environment to creating a realistic case study that demonstrates how to handle message processing failures.

Local Development Setup

Before we dive into the details of setting up SQS with a DLQ, let's first set up our local development environment. You will need Docker and Docker Compose installed on your machine. If you don't have them, follow the official Docker documentation to install them.

Once you have Docker and Docker Compose installed, create a new directory for your project and create a file named docker-compose.localstack.yml with the following content:


version: '3.8'

services:
  localstack:
    image: localstack/localstack:latest
    ports:
      - "4566:4566"
      - "4510-4559:4510-4559"
    environment:
      - SERVICES=sqs,iam,lambda
      - DEFAULT_REGION=us-east-1
    volumes:
      - "${LOCALSTACK_VOLUME_DIR:-./volume}:/var/lib/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"
    

This Docker Compose file pulls the latest Localstack image and explicitly enables the sqs, iam, and lambda services, which we'll need for the advanced case study later. The volume mount stores Localstack's runtime data on the host, and mounting the Docker socket allows Localstack to spin up containers for Lambda execution.

You can start Localstack by running:


docker-compose -f docker-compose.localstack.yml up -d
    
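Once the container is up, you can check that Localstack is running and that the required services are available by querying its health endpoint (recent Localstack versions expose it at the path below):


# Check that Localstack is up and the sqs, iam, and lambda services are available
curl http://localhost:4566/_localstack/health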

Creating the SQS Queues

While you can automate resource creation with an initialization script, here we will create the resources manually using the AWS CLI. First, let's configure a local profile so the AWS CLI can interact with Localstack easily.


aws configure set aws_access_key_id test --profile local && \
aws configure set aws_secret_access_key test --profile local && \
aws configure set region us-east-1 --profile local && \
aws configure set output json --profile local
    
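As an aside, Localstack also ships a thin wrapper around the AWS CLI called awslocal (installed via the awscli-local pip package) that points every command at the local endpoint for you, so you can skip the --endpoint-url flag if you prefer:


# Optional: use Localstack's awslocal wrapper instead of passing --endpoint-url
pip install awscli-local
awslocal sqs list-queues --profile local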

Now, let's create our FIFO queue and its corresponding Dead-Letter Queue (DLQ). We are using FIFO queues for ordering and exactly-once processing guarantees; note that the DLQ of a FIFO queue must itself be a FIFO queue.


# Create FIFO SQS queue with ContentBasedDeduplication enabled
aws --endpoint-url=http://localhost:4566 --profile local sqs create-queue \
  --queue-name simple-queue.fifo \
  --attributes FifoQueue=true,ContentBasedDeduplication=true,VisibilityTimeout=60

# Create DLQ (Dead Letter Queue) for failed messages
aws --endpoint-url=http://localhost:4566 --profile local sqs create-queue \
  --queue-name simple-queue-dlq.fifo \
  --attributes FifoQueue=true,ContentBasedDeduplication=true
    
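You can confirm that both queues exist, and grab their URLs, with a quick listing:


# List the queues created in Localstack
aws --endpoint-url=http://localhost:4566 --profile local sqs list-queues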

Dead-Letter Queues (DLQs)

A Dead-Letter Queue (DLQ) is a queue that other (source) queues can target for messages that can't be processed successfully. DLQs are useful for debugging your application or messaging system because they let you isolate unconsumed messages to determine why their processing failed. When you configure a DLQ, you need to set a redrive policy on the source queue. The redrive policy specifies the ARN of the DLQ and the maxReceiveCount, which is the number of times a message can be received by consumers before it's sent to the DLQ.

Visibility Timeout

The visibility timeout is the period of time during which Amazon SQS prevents other consumers from receiving and processing a message. The default visibility timeout for a message is 30 seconds. The minimum is 0 seconds. The maximum is 12 hours. When a consumer receives a message, the visibility timeout for that message begins. If the consumer fails to process and delete the message before the visibility timeout expires, the message becomes visible to other consumers and can be received again. In our script, we set the visibility timeout to 60 seconds.
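
If 60 seconds turns out to be too short for your consumer, you can adjust the queue-level visibility timeout at any time with set-queue-attributes; for example, to raise it to 120 seconds:


# Raise the default visibility timeout for the queue to 120 seconds
aws --endpoint-url=http://localhost:4566 --profile local sqs set-queue-attributes \
  --queue-url http://localhost:4566/000000000000/simple-queue.fifo \
  --attributes VisibilityTimeout=120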

Advanced Case Study: Go Lambda Producer/Consumer

Let's build a more advanced, serverless example using Go Lambda functions to produce and consume messages. This mimics a more realistic microservices architecture.

Step 1: The SQS Sender Lambda (Go)

First, we'll create a Lambda function that sends a message to our simple-queue.fifo.
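
The sender's source (the main.go and localstack_sqs.go referenced in the build command below) isn't reproduced here, but a minimal single-file sketch of what such a handler might look like with aws-sdk-go-v2 follows. The payload shape and the "orders" message group ID are illustrative assumptions; the important details are that a FIFO queue requires a MessageGroupId, and that recent SDK versions pick up the AWS_ENDPOINT_URL environment variable we set on the function, so no custom endpoint resolver is needed.


package main

import (
	"context"
	"os"

	"github.com/aws/aws-lambda-go/lambda"
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/sqs"
)

func handler(ctx context.Context) (string, error) {
	// Load the default config; AWS_ENDPOINT_URL (set on the function) points
	// the SDK at Localstack instead of the real AWS endpoints.
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		return "", err
	}
	client := sqs.NewFromConfig(cfg)

	queueURL := os.Getenv("SQS_SIMPLE_QUEUE_URL")

	// FIFO queues require a MessageGroupId; with ContentBasedDeduplication
	// enabled, the deduplication ID is derived from the body automatically.
	// The body below is just an example payload.
	out, err := client.SendMessage(ctx, &sqs.SendMessageInput{
		QueueUrl:       aws.String(queueURL),
		MessageBody:    aws.String(`{"order_id":"1234","status":"created"}`),
		MessageGroupId: aws.String("orders"),
	})
	if err != nil {
		return "", err
	}
	return aws.ToString(out.MessageId), nil
}

func main() {
	lambda.Start(handler)
}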

Compile the Go application for a Linux environment and create a deployment package:


# Build the Go binary
GOOS=linux GOARCH=amd64 CGO_ENABLED=0 go build -o lambda/handlers/simple_sqs_sender/bootstrap lambda/handlers/simple_sqs_sender/main.go lambda/handlers/simple_sqs_sender/localstack_sqs.go

# Create deployment package
chmod +x lambda/handlers/simple_sqs_sender/bootstrap
cd lambda/handlers/simple_sqs_sender && zip simple_sqs_sender.zip bootstrap && cd ../../..
    

Next, create an IAM role that the Lambda function can assume to get permissions to send messages to SQS.


# Create IAM Role for the sender Lambda
aws --endpoint-url=http://localhost:4566 --profile local iam create-role \
  --role-name lambda-sqs-role \
  --assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"lambda.amazonaws.com"},"Action":"sts:AssumeRole"}]}'

# Attach a policy to the role
aws --endpoint-url=http://localhost:4566 --profile local iam put-role-policy \
  --role-name lambda-sqs-role \
  --policy-name SQSAccessPolicy \
  --policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Action":["sqs:SendMessage"],"Resource":"*"}]}'
    

Finally, create the Lambda function itself.


# Create the sender Lambda function
aws --endpoint-url=http://localhost:4566 --profile local lambda create-function \
  --function-name simple-sqs-sender \
  --runtime provided.al2 \
  --role arn:aws:iam::000000000000:role/lambda-sqs-role \
  --handler bootstrap \
  --zip-file fileb://lambda/handlers/simple_sqs_sender/simple_sqs_sender.zip \
  --timeout 30 \
  --architectures x86_64 \
  --environment Variables='{SQS_SIMPLE_QUEUE_URL=http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/simple-queue.fifo,AWS_ENDPOINT_URL=http://host.docker.internal:4566}'
    

Note the environment variables. We pass the queue URL and the AWS endpoint URL so the Lambda knows where to send messages within the Localstack environment. Because the function runs in its own container, it reaches Localstack through host.docker.internal rather than localhost.
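
Before wiring up the consumer, you can smoke-test the sender by invoking it and checking, non-destructively, that a message landed on the queue:


# Invoke the sender once
aws --endpoint-url=http://localhost:4566 --profile local lambda invoke \
  --function-name simple-sqs-sender /tmp/sender-output.json

# Check the approximate number of messages waiting on the queue
aws --endpoint-url=http://localhost:4566 --profile local sqs get-queue-attributes \
  --queue-url http://localhost:4566/000000000000/simple-queue.fifo \
  --attribute-names ApproximateNumberOfMessages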

Step 2: The SQS Receiver Lambda (Go)

Now, let's create the consumer. This Lambda will be triggered by messages arriving in simple-queue.fifo.
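
As with the sender, the receiver's source isn't shown in full; a minimal sketch of an SQS-triggered handler might look like the following. The payload struct is a hypothetical example; the important part is the error handling, since returning an error from the handler leaves the message on the queue to be retried and, after maxReceiveCount failed receives, redriven to the DLQ.


package main

import (
	"context"
	"encoding/json"
	"fmt"
	"log"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
)

// order is a hypothetical payload shape used for illustration only.
type order struct {
	OrderID string `json:"order_id"`
	Status  string `json:"status"`
}

func handler(ctx context.Context, event events.SQSEvent) error {
	for _, record := range event.Records {
		log.Printf("received message %s (receive count %s)",
			record.MessageId, record.Attributes["ApproximateReceiveCount"])

		var o order
		if err := json.Unmarshal([]byte(record.Body), &o); err != nil {
			// Returning an error leaves the message on the queue; after it has
			// been received maxReceiveCount times, SQS moves it to the DLQ.
			return fmt.Errorf("unmarshal message %s: %w", record.MessageId, err)
		}
		log.Printf("processed order %s with status %s", o.OrderID, o.Status)
	}
	// Returning nil lets Lambda delete the batch from the queue.
	return nil
}

func main() {
	lambda.Start(handler)
}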

First, build and package the receiver application:


# Build the Go binary
GOOS=linux GOARCH=amd64 CGO_ENABLED=0 go build -o lambda/handlers/simple_sqs_receiver/bootstrap lambda/handlers/simple_sqs_receiver/main.go

# Create deployment package
chmod +x lambda/handlers/simple_sqs_receiver/bootstrap
cd lambda/handlers/simple_sqs_receiver && zip simple_sqs_receiver.zip bootstrap && cd ../../..
    

Create the necessary IAM role and policy for the receiver to read and delete messages from the queue.


# Create IAM Role for the receiver Lambda
aws --endpoint-url=http://localhost:4566 --profile local iam create-role \
  --role-name lambda-sqs-receiver-role \
  --assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"lambda.amazonaws.com"},"Action":"sts:AssumeRole"}]}'

# Attach a policy to the role
aws --endpoint-url=http://localhost:4566 --profile local iam put-role-policy \
  --role-name lambda-sqs-receiver-role \
  --policy-name SQSReceiverPolicy \
  --policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Action":["sqs:ReceiveMessage","sqs:DeleteMessage","sqs:GetQueueAttributes"],"Resource":"*"}]}'
    

Create the receiver Lambda function:


# Create the receiver Lambda function
aws --endpoint-url=http://localhost:4566 --profile local lambda create-function \
  --function-name simple-sqs-receiver \
  --runtime provided.al2 \
  --role arn:aws:iam::000000000000:role/lambda-sqs-receiver-role \
  --handler bootstrap \
  --zip-file fileb://lambda/handlers/simple_sqs_receiver/simple_sqs_receiver.zip \
  --timeout 30 \
  --architectures x86_64 \
  --environment Variables='{AWS_ENDPOINT_URL=http://host.docker.internal:4566}'
    

Step 3: Configure Redrive and Event Source Mapping

Now we connect the main queue to its DLQ. Create a file named redrive-attributes.json with the following content. This tells simple-queue.fifo to send messages to simple-queue-dlq.fifo after they have been received twice without being deleted.


{
    "RedrivePolicy": "{\"deadLetterTargetArn\":\"arn:aws:sqs:us-east-1:000000000000:simple-queue-dlq.fifo\",\"maxReceiveCount\":\"2\"}"
}
    

Apply this policy to the main queue:


aws --endpoint-url=http://localhost:4566 --profile local sqs set-queue-attributes \
  --queue-url http://localhost:4566/000000000000/simple-queue.fifo \
  --attributes file://redrive-attributes.json
    

Finally, create an event source mapping. This crucial step subscribes the simple-sqs-receiver Lambda to the simple-queue.fifo queue. AWS Lambda will now poll the queue and invoke your function when messages are available.


aws --endpoint-url=http://localhost:4566 --profile local lambda create-event-source-mapping \
  --function-name simple-sqs-receiver \
  --event-source-arn arn:aws:sqs:us-east-1:000000000000:simple-queue.fifo \
  --batch-size 1
    
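You can confirm the mapping was created, and see its UUID and state, with:


aws --endpoint-url=http://localhost:4566 --profile local lambda list-event-source-mappings \
  --function-name simple-sqs-receiver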

With this setup, you can invoke the simple-sqs-sender Lambda to send messages, and the simple-sqs-receiver will be triggered automatically. If the receiver fails to process a message on two receives, SQS moves the message to the DLQ for later inspection.
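
To exercise the pipeline end to end, invoke the sender and then look at the DLQ; any message the receiver rejected twice (for example, because its body failed to parse) will show up there:


# Send a message through the sender Lambda
aws --endpoint-url=http://localhost:4566 --profile local lambda invoke \
  --function-name simple-sqs-sender /tmp/sender-output.json

# After the receiver has failed the message twice, it appears in the DLQ
aws --endpoint-url=http://localhost:4566 --profile local sqs receive-message \
  --queue-url http://localhost:4566/000000000000/simple-queue-dlq.fifo \
  --max-number-of-messages 10 \
  --attribute-names All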

Benefits of Using a DLQ

A DLQ gives your messaging system a safety net. A malformed or otherwise unprocessable ("poison") message is moved aside after a bounded number of attempts instead of being retried indefinitely, so it cannot hold up the rest of the queue; this matters especially for FIFO queues, where a stuck message blocks its entire message group. Failed messages are preserved for inspection, which makes it much easier to work out why processing failed, fix the consumer, and replay them later, and monitoring the DLQ's depth gives you a simple, reliable signal that something in the pipeline is going wrong.

Conclusion

In this guide, we set up AWS SQS with a Dead-Letter Queue locally using Localstack: we ran Localstack with Docker Compose, created a FIFO queue and its DLQ, connected them with a redrive policy, and built Go Lambda producer and consumer functions wired together through an event source mapping. Running the whole stack locally keeps the feedback loop fast, avoids cloud costs, and lets you test failure handling realistically before deploying to AWS.

For more information, refer to the official documentation for Amazon SQS, AWS Lambda, and Localstack.