One of the most annoying parts of setting up Elasticsearch on AWS is how access to the service is configured. From what I've gathered, you have one of two options to protect it:

  1. Make sure it's running internally only (i.e. not internet facing and only available to other services on the same VPC)
  2. Leave it completely open to the internet and rely (if you can make it work) on the Cognito authentication service

Typically you would, as we did, go for the path of least resistance and greater security, which is to make it accessible only from inside the VPC. But that means you can no longer access Kibana to interact with ES.

The way we worked around this at Drover was to set up a container running an Nginx proxy that provides basic-auth-protected access to Kibana (over SSL). Here's how we did it:

1. The container

The Docker container definition (Dockerfile)

FROM nginx:1.15

RUN apt-get update -qq && apt-get install -y --allow-unauthenticated openssh-server nano curl

EXPOSE 80
EXPOSE 22

# SSH access
RUN mkdir /root/.ssh
RUN touch /root/.ssh/authorized_keys
RUN chmod 700 /root/.ssh
RUN chmod 644 /root/.ssh/authorized_keys
RUN mkdir /var/run/sshd

# SSH public keys
RUN echo "ssh-rsa <public_key> void@mypc" >> /root/.ssh/authorized_keys

RUN mkdir /auth
RUN chmod 701 /auth
COPY /kibana.htpasswd /auth
RUN chmod 644 /auth/kibana.htpasswd
RUN chown -R www-data /auth

RUN rm /etc/nginx/conf.d/default.conf
COPY /nginx.conf /etc/nginx/conf.d/kibanas.conf
# The start script (named start.sh here; match it to whatever you call yours)
COPY /start.sh /
RUN chmod +x /start.sh

CMD ["/start.sh"]



  • Nginx listens on port 80, but we also expose port 22 so we can ssh into the container for troubleshooting
  • The appropriate authorized keys are set into the container for passwordless login
  • The password file is copied into the appropriate location so it can be later read by nginx
  • The nginx config file is also copied to the correct location
  • Docker will run the start script when booting

The start script (start.sh)

#!/bin/sh
# Bring up the SSH daemon for troubleshooting access
service ssh start
# Run nginx in the foreground so the container keeps running
nginx -g 'daemon off;'

The nginx config file (nginx.conf)

server {
  listen 80;

  auth_basic           "Restricted";
  auth_basic_user_file /auth/kibana.htpasswd;

  # redirect /
  location = / {
    rewrite ^ /_plugin/kibana/ redirect;
  }

  location / {
    proxy_http_version 1.1;
    proxy_set_header   Authorization ""; # Don't pass auth to kibana
    proxy_set_header   Upgrade $http_upgrade;
    proxy_set_header   Connection 'upgrade';
    proxy_set_header   Host $host;
    proxy_cache_bypass $http_upgrade;

    proxy_pass https://<your_url_to_es>/;
  }
}

  • Make sure you have a DNS entry (in Route 53 or elsewhere) that points to this server
  • <your_url_to_es> should be the Elasticsearch URL you can find in the AWS ES console for the particular domain you want to access
  • This single proxy could potentially serve more than one Kibana/ES instance; just add more server definitions
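Once the proxy is deployed, a quick way to confirm the basic auth gate is working is a pair of curl calls (the hostname and credentials below are placeholders for your own values):

```shell
# Without credentials nginx should reject the request with a 401
curl -s -o /dev/null -w '%{http_code}\n' https://kibana.myapp.com/

# With valid credentials the request should pass through to Kibana
curl -s -o /dev/null -w '%{http_code}\n' -u kibana:s3cret https://kibana.myapp.com/_plugin/kibana/
```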

The password file (kibana.htpasswd)


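The file holds user:hash lines that nginx's auth_basic module checks against. If you don't have apache2-utils' htpasswd tool handy, openssl can generate a compatible entry (the username and password below are examples):

```shell
# Create an htpasswd entry using the apr1 (Apache MD5) scheme, which nginx understands
printf 'kibana:%s\n' "$(openssl passwd -apr1 's3cret')" > kibana.htpasswd

# The result looks like: kibana:$apr1$<salt>$<hash>
cat kibana.htpasswd
```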

2. Terraforming makes it easier

If you read one of my previous posts, you know that Terraform is an excellent tool for setting up infrastructure. So that's what we use to set up the container in AWS ECS.

I will skip the cluster setup and instead focus on the task definitions and services. Setting up resources with Terraform is something you can easily learn about in their documentation.

Task definition

data "aws_ecs_task_definition" "kibana" {
  task_definition = "${aws_ecs_task_definition.kibana.family}"
}

resource "aws_ecs_task_definition" "kibana" {
  family                   = "kibana"
  task_role_arn            = "arn:aws:iam::<your_amazon_account_id>:role/ecsTaskExecutionRole"
  execution_role_arn       = "arn:aws:iam::<your_amazon_account_id>:role/ecsTaskExecutionRole"
  network_mode             = "awsvpc"
  placement_constraints    = []
  requires_compatibilities = ["FARGATE"]
  cpu                      = 256
  memory                   = 2048
  container_definitions    = <<DEFINITION
[
  {
    "name": "KibanaContainer",
    "image": "<your_amazon_account_id>.dkr.ecr.eu-west-2.amazonaws.com/kibana-proxy:latest",
    "cpu": 0,
    "portMappings": [
      {
        "containerPort": 80,
        "hostPort": 80,
        "protocol": "tcp"
      }
    ],
    "essential": true,
    "mountPoints": [],
    "volumesFrom": [],
    "logConfiguration": {
      "logDriver": "awslogs",
      "options": {
        "awslogs-group": "/ecs/KibanaProxy",
        "awslogs-region": "eu-west-2",
        "awslogs-stream-prefix": "ecs"
      }
    }
  }
]
DEFINITION
}

  • This definition makes a few assumptions:
  • There is already an initial, empty task definition named kibana in your account
  • <your_amazon_account_id> should be filled in with your account id
  • You have built, tagged and pushed the previously defined Docker image into Amazon's container registry (ECR), and the name and tag of the image are kibana-proxy:latest
  • This deploys to the eu-west-2 region (London); change it to one closer to home if you need
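The build-tag-push step from the assumptions above looks roughly like this (AWS CLI v2 syntax; the repository name, region and account id placeholder match the ones used throughout, and the commands require Docker plus an AWS CLI profile with push access to ECR):

```shell
# Authenticate Docker against your account's ECR registry
aws ecr get-login-password --region eu-west-2 \
  | docker login --username AWS --password-stdin <your_amazon_account_id>.dkr.ecr.eu-west-2.amazonaws.com

# Build the proxy image from the Dockerfile above, then tag and push it
docker build -t kibana-proxy .
docker tag kibana-proxy:latest <your_amazon_account_id>.dkr.ecr.eu-west-2.amazonaws.com/kibana-proxy:latest
docker push <your_amazon_account_id>.dkr.ecr.eu-west-2.amazonaws.com/kibana-proxy:latest
```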

The service

resource "aws_security_group" "kibana_sg" {
  name        = "kibana_sg"
  description = "Kibana proxy security group"
  vpc_id      = "${var.vpc_id}"

  ingress {
    from_port        = 80
    to_port          = 80
    protocol         = "tcp"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }

  ingress {
    from_port        = 22
    to_port          = 22
    protocol         = "tcp"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }

  # allow all outbound traffic
  egress {
    from_port        = "0"
    to_port          = "0"
    protocol         = "-1"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }

  tags {
    Name = "kibana_sg"
  }
}
resource "aws_ecs_service" "kibana" {
  name            = "kibana"
  cluster         = "${module.support_cluster.cluster_id}"
  task_definition = "${aws_ecs_task_definition.kibana.family}:${max("${aws_ecs_task_definition.kibana.revision}", "${data.aws_ecs_task_definition.kibana.revision}")}"
  desired_count   = "${var.replicas}"
  launch_type     = "FARGATE"

  load_balancer {
    target_group_arn = "${aws_alb_target_group.kibana-target-group.arn}"
    container_port   = 80
    container_name   = "KibanaContainer"
  }

  network_configuration {
    subnets          = "${var.subnet_ids}"
    assign_public_ip = true
    security_groups  = ["${aws_security_group.kibana_sg.id}"]
  }
}

  • vpc_id is a variable being passed into this module; you can hard-code it with the VPC you have set up for your cluster
  • cluster_id is the name of the cluster that you create to house this service (for instance kibana, or support)
  • The service will try to use the latest task definition available (hence the earlier need to already have a task definition with that name; otherwise it will fail the first time it runs)
  • replicas is the number of containers you want running (used by the load balancer)
  • subnet_ids is an array of subnets that you create for your cluster (typically one public for the load balancers and one private for the containers)
  • target_group_arn comes from the load balancer definition (up next)

Load balancing

resource "aws_alb" "kibana-load-balancer" {
  name            = "kibana-lb"
  security_groups = ["<public_facing_security_group_id>"]
  subnets         = "${var.subnet_ids}"

  tags {
    Name = "kibana-load-balancer"
  }
}
resource "aws_alb_target_group" "kibana-target-group" {
  name        = "kibana-tg"
  port        = "80"
  protocol    = "HTTP"
  vpc_id      = "${var.vpc_id}"
  target_type = "ip"

  health_check {
    healthy_threshold   = "5"
    unhealthy_threshold = "2"
    interval            = "60"
    matcher             = "200,302,301,401"
    path                = "/_plugin/kibana/"
    port                = "traffic-port"
    protocol            = "HTTP"
    timeout             = "15"
  }

  tags {
    Name = "kibana-target-group"
  }
}
resource "aws_alb_listener" "kibana-lb-listener" {
  load_balancer_arn = "${aws_alb.kibana-load-balancer.arn}"
  port              = "443"
  protocol          = "HTTPS"
  ssl_policy        = "ELBSecurityPolicy-2016-08"
  certificate_arn   = "arn:aws:acm:eu-west-2:<your_amazon_account_id>:certificate/<certificate_key_for_lb_domain>"

  default_action {
    target_group_arn = "${aws_alb_target_group.kibana-target-group.arn}"
    type             = "forward"
  }
}

resource "aws_alb_listener" "kibana-lb-redirect-listener" {
  load_balancer_arn = "${aws_alb.kibana-load-balancer.arn}"
  port              = "80"
  protocol          = "HTTP"

  default_action {
    type = "redirect"
    redirect {
      port        = "443"
      protocol    = "HTTPS"
      status_code = "HTTP_301"
    }
  }
}

  • public_facing_security_group_id is the security group id for the load balancer, which should be open to the outside (internet)
  • vpc_id is the public VPC
  • certificate_key_for_lb_domain - the whole certificate_arn should be the ARN of the certificate you generate for the domain. Say you have the domain in Route 53: you should generate a certificate for the load balancer that covers that domain so SSL works properly. Make sure it's consistent with the nginx.conf definition as well

Bonus - auto DNS

resource "aws_route53_record" "dns-kibana" {
  zone_id = "<myapp.com_zone_id>"
  name    = "<your_kibana_domain>"
  type    = "CNAME"
  ttl     = "300"
  records = ["${aws_alb.kibana-load-balancer.dns_name}"]
}

  • This resource creates a DNS entry for the load balancer in the provided zone id. The load balancer defined previously feeds its DNS name into the record being set up here, so it all ties together.

The conclusion

When you have it all set up, you can just fire up terraform apply and it should take care of creating everything for you, assuming you have already pushed the Docker image into your account's container registry. Once everything is running, your Kibana domain should direct you to the Kibana interface, asking for the Basic Auth username/password you set up before allowing you inside.

There is some glue work to do to make this all work, but I know you can do it. Just reference the Terraform documentation and it should be super easy.
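In practice "fire up terraform apply" expands to the usual workflow, run from the directory holding the definitions above:

```shell
terraform init    # download the AWS provider and initialise state
terraform plan    # review the resources that will be created before committing
terraform apply   # create the service, security group, load balancer and DNS record
```

Running plan first is worth the extra step here: it will catch a missing variable (vpc_id, subnet_ids, replicas) or a stale task definition reference before anything touches your account.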

Happy Kibaning.