Deploying a self-hosted runner

Prerequisites

Blink's runner must be deployed in an environment with a container engine or orchestrator. Supported platforms include Docker and Kubernetes.

Network requirements

  • Blink's runner needs to communicate with the Blink SaaS server over port 443 (HTTPS).
  • The runner's network configuration must allow access to Blink's service at https://app.blinkops.com/. All communication is performed over HTTPS (a quick connectivity check is sketched after this list).
  • User resource access - Depending on the specific use case, Blink's runner can be used with and integrated into a wide variety of cloud services. Consider which internal and external services will be required and allow the necessary network access to those services.
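
A simple way to confirm outbound connectivity from the runner environment (assuming curl is available there) is to request Blink's service endpoint and check that a response comes back:

  # Prints the HTTP status code returned by Blink's service; any response
  # indicates that outbound HTTPS (port 443) access is allowed.
  curl -sS -o /dev/null -w "%{http_code}\n" https://app.blinkops.com/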

Hardware recommendations

It is recommended to have at least:

  • 0.5 GB of RAM for the runner environment.
  • 20 GB of disk space for the runner environment, including runner and plugin images.
  • Two vCPUs for the runner environment.

Deployment modes

Docker

Supported platforms:

  • Linux
  • macOS

A runner on Docker uses the host's Docker socket to run the integrations, which allows the runner to start new sessions on demand.
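
For illustration only, mounting the host's Docker socket typically looks like the sketch below. The actual install command, image name, and registration key are provided in the Blink platform UI when you create a Runner group; the image reference here is a placeholder.

# Illustrative sketch - use the exact command from the Blink UI.
# Bind-mounting /var/run/docker.sock is what lets the runner start
# integration containers on the host's Docker engine.
docker run -d --name blink-runner \
  -v /var/run/docker.sock:/var/run/docker.sock \
  <runner-image-and-options-from-blink-ui>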

Kubernetes

Supported Kubernetes deployment stack:

  • Kubernetes engine - Version 1.19 or higher.
  • Helm - Version 3 or higher.
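
You can quickly confirm that your cluster and Helm client meet these requirements:

# Check the Kubernetes server version (1.19 or higher) and Helm version (3 or higher).
kubectl version
helm version
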
info

When deploying the Runner in Kubernetes, you need to set an appKey as an input value. This key connects the runner to Blink. If you want to manage this key in your secret manager and you use kubernetes-external-secrets in your cluster, you can create an ExternalSecret resource for it before installing the Runner Helm chart. The secret name must be set to blink-runner-secret.
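
If you prefer to create the secret directly rather than through an ExternalSecret, a minimal sketch is shown below. The data key name appKey is an assumption based on the chart input described above; check the Runner chart's values for the exact key it expects.

# Hedged sketch: pre-create the secret the Runner chart reads.
# Replace <namespace> and <app-key-from-blink-ui>; the "appKey" data key
# is an assumption, not a confirmed chart value.
kubectl create secret generic blink-runner-secret \
  --namespace <namespace> \
  --from-literal=appKey=<app-key-from-blink-ui>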

info

Default Kubernetes Connection is a feature of the Blink runner on the Kubernetes platform. Users who install Blink's runner on Kubernetes get a default connection to their namespace, which helps them use Blink's Kubernetes integration. This connection uses the namespace's service account, giving Blink the ability to control the namespace.

Known Limitations

If Calico is installed on your cluster, note that there is a known bug in the Calico version used in the Tigera operator manifest for versions 1.7 and 1.8.
For more details, see the following articles:

EC2

Blink supports deploying the runner via a CloudFormation template, which deploys an EC2 instance with a Docker-based runner installed on it.

Deploying a Runner

  1. On the left-hand side of the Blink platform, click Runners > New Runner Group. A dialogue box will open.

  2. Fill in the parameters:

    Name: Name of Runner group.

    Default: Select the checkbox to make this your default Runner group from now on.

  3. Click Create. A dialogue box will open. Select Helm, Docker, or CloudFormation, then copy the command for installing the Runner in your environment to your clipboard. It contains the registration key of the Runner and you will not be able to view it again.

    Kubernetes / Docker

    Use the command provided in the Blink platform UI to install the Blink Runner.

    CloudFormation
    Use this option if you don't have Helm or a host with a Docker engine. Before following the next steps, make sure you have an AWS EC2 Key Pair:

    1. Go to the CloudFormation stack and log in to your AWS account. You are directed to the Quick create stack form.

    2. Enter the values in the form as follows:

      Parameter - Description
      Stack name - Give your stack a name.
      BlinkURL - The Blink URL the runner should connect to. Do not change the default value. This parameter is mandatory.
      DiskSize - The disk size of the EC2 instance running the runner. Default is 40. This parameter is mandatory.
      InstanceEc2KeyPair - The EC2 Key Pair used for logging in to the EC2 instance running the Runner. This parameter is mandatory.
      InstanceSshAccessCIdrBlock - A CIDR block describing the IP addresses from which the EC2 instance running the Runner should be accessible. This parameter is mandatory.
      InstanceType - The type of the EC2 instance running the runner. Select a type from the drop-down menu. This parameter is mandatory.
      LatestAmiID - The path of the AWS SSM parameter that stores the AMI ID of the latest Amazon Linux version. Do not change the default value.
      RunnerApiKey - Copy the value from the text area in the Blink platform (step 2) and paste it here. This parameter is mandatory.
      RunnerVersion - Do not change the default value. This parameter is mandatory.
      SubnetId - ID of a subnet with internet access in the given VPC. If both this parameter and VpcId are left empty, a VPC with a public subnet and an internet gateway will be created. Otherwise this parameter should be specified.
      VpcId - ID of the VPC in which to create the EC2 instance running the runner. If both this parameter and SubnetId are left empty, a VPC with a public subnet and an internet gateway will be created. Otherwise this parameter should be specified.
      OnPremVaultUrl - The URL of the Vault instance the Runner should connect to. Should be specified together with the OnPremVaultRootToken parameter.
      OnPremVaultRootToken - The Root Token of the Vault instance the Runner should connect to. Should be specified together with the OnPremVaultUrl parameter.
    3. Click Create stack. AWS creates the resources specified in the form.

    4. In the Blink platform, click Close. On the Runners page you can see that the runner is connected and how many instances it has. In your AWS account, you can see all the resources that were created, along with their outputs.

    5. Click Close. The new Runner group appears on the Runners page. A green dot appears next to the Runner group name after a user has installed a Runner that is connected to an active Runner group.

      Connecting to the Runner host created with CloudFormation

      Once the stack is fully created from the above template, navigate to the Outputs tab. There you can find the Ec2InstanceUser and Ec2InstancePublicDnsName outputs, among others. Using the values of these two outputs and the key file corresponding to the EC2 Key Pair you selected when creating the stack (the file should have been downloaded when the Key Pair was created), you can log in to the Runner host by running the following command, provided that the IP of the host you are connecting from is within the range defined by the CIDR block you specified when creating the stack:

        ssh -i <path_to_key_file> <value of Ec2InstanceUser output>@<value of Ec2InstancePublicDnsName output>

Advanced - Configuring runner resource limitations

To configure resource limits for the pods the Runner deploys in a Kubernetes deployment, use the following flags when deploying the Runner:

--set config.container.limit_cpu="400m" // The CPU limit for plugin containers
--set config.container.limit_memory="650Mi" // The memory limit for plugin containers
--set config.container.limit_storage="1Gi" // The storage limit for plugin containers
--set config.container.extra_limit_cpu="400m" // The CPU limit for core and http containers
--set config.container.extra_limit_memory="650Mi" // The memory limit for core and http containers
--set config.container.extra_limit_storage="4Gi" // The storage limit for core and http containers
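
For example, these flags can be appended to the Helm install command copied from the Blink UI; the chart reference below is a placeholder, not the actual value:

# Placeholder sketch - substitute the real install command from the Blink UI
# and append whichever --set flags you need.
helm install blink-runner <chart-reference-from-blink-ui> \
  --set config.container.limit_cpu="400m" \
  --set config.container.limit_memory="650Mi" \
  --set config.container.limit_storage="1Gi"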

Advanced - Balancing the load within a Runner group

A Runner group consists of multiple identical runner instances. You can balance the load on a runner group by spreading it across any number of instances.

Checking how many instances you currently have in a runner group

  1. Click the three dots on the runner and select Edit. A dialogue box opens.
  2. Under Instances, you can see a list of the instances used in the runner group and their status.

Adding more runner instances to your group

Runner is deployed with Kubernetes

  1. Scale up the Kubernetes deployment of the runner using the following command:
    kubectl scale deployment blink-runner --replicas={number of replicas}
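
    For example, to scale the runner deployment to three instances and verify the result (add -n <namespace> if the runner is not installed in your current namespace):

    kubectl scale deployment blink-runner --replicas=3
    kubectl get deployment blink-runner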

Runner is deployed on a standalone Docker host

  1. To create more runner instances in a runner group, use the same command that was provided when the runner group was created. Run the command in the shell once for each additional instance you want to create.