DigitalOcean fully manages Regional Load Balancers and Global Load Balancers, which are highly available load balancing services. Load balancers distribute traffic to groups of Droplets within a region or across regions, so the health of a backend service does not depend on the health of a single server or a single region.
First, click Networking in the main navigation, and then click Load Balancers to go to the load balancer index page. Click on an individual load balancer’s name to go to its detail page, which has three tabs:
Nodes, where you can view the nodes currently attached to the load balancer and modify the backend node pool.
Graphs, where you can view graphs of traffic patterns and infrastructure health.
Settings, where you can set or customize the forwarding rules, sticky sessions, health checks, SSL forwarding, and PROXY protocol.
Point Hostname at Load Balancer
To start sending traffic from your hostname to your load balancer, you need to create an A record on your DNS provider that points your hostname at the load balancer’s IP address.
If your DNS provider is DigitalOcean, reference Create and Delete DNS Records to see how to do this. If you do not use DigitalOcean as a DNS provider, reference your current provider’s documentation to see how this is done.
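If you manage DNS on DigitalOcean, you can also create the record with doctl. A minimal sketch, assuming your domain is example.com, the record is www, and 203.0.113.10 is your load balancer's IP address (all placeholders):

```
# Create an A record pointing www.example.com at the load balancer.
# The load balancer's IP address is shown on its detail page.
doctl compute domain records create example.com \
  --record-type A \
  --record-name www \
  --record-data 203.0.113.10 \
  --record-ttl 1800
```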
Droplet Connectivity
Load balancers automatically connect to Droplets that reside in the same VPC network as the load balancer.
To validate that private networking has been enabled on a Droplet from the control panel, click Droplets in the main nav, then click the Droplet you want to check from the list of Droplets.
From the Droplet’s page, click Networking in the left menu. If the private network interface is enabled, the Private Network section populates with the Droplet’s private IPv4 address and VPC network name. If the private network interface is not enabled, a button to turn it on is displayed instead.
Manage the Backend Nodes
Add Droplets to a Load Balancer Using the CLI
The following command requires the Droplet’s ID number. Use the doctl compute droplet list command to retrieve a list of Droplets and their IDs.
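A sketch of the workflow, with illustrative IDs:

```
# Find the Droplets' numeric IDs.
doctl compute droplet list --format ID,Name,Region

# Find the load balancer's ID.
doctl compute load-balancer list --format ID,Name,IP

# Attach the Droplets (IDs here are placeholders).
doctl compute load-balancer add-droplets <load-balancer-id> \
  --droplet-ids 3164444,3164445
```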
Add Droplets to a Load Balancer Using the Control Panel
In the Nodes tab, you can view and modify the load balancer’s backend node pool.
This page displays information about the status of each node and its other health metrics. Clicking on a node name takes you to the node’s detail page.
If you are managing backend Droplets by name, you can add additional Droplets by clicking the Add Droplets button on this page. If you are managing by tag, you instead have an Edit Tag button.
When you add Droplets to a load balancer, they start in a DOWN state and remain there until they pass the load balancer’s health check. Once a backend has passed the health check the required number of times, it is marked healthy and the load balancer begins forwarding requests to it.
View Graphs
Click the Graphs tab to get a visual representation of traffic patterns and infrastructure health. The metrics in this section change depending on whether the load balancer is for a Droplet or for Kubernetes nodes.
The Frontend section displays graphs related to requests to the load balancer itself:
HTTP Requests Per Second
Connections
HTTP Responses
Traffic Received/Sent
The Droplets section displays graphs related to the backend Droplet pool:
HTTP Total Session Duration
HTTP Average Response Time
Queue Size
HTTP Responses
Downtime
Health Checks
Number of Connections
The Kubernetes section displays graphs related to the backend Kubernetes nodes:
HTTP Total Session Duration
HTTP Average Response Time
HTTP Responses
Number of Connections
Modify Advanced Settings
Click the Settings tab to modify the way that the load balancer functions.
Scaling Configuration
The load balancer’s scaling configuration allows you to adjust the load balancer’s number of nodes. The number of nodes determines:
How many simultaneous connections it can maintain.
How many requests per second it can handle.
How many SSL connections it can decrypt per second.
The load balancer’s overall monthly cost.
The load balancer must have at least one node. You can add or remove nodes at any time to meet your traffic needs.
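Scaling can also be done from the CLI with the update command's --size-unit flag. Note that update replaces the load balancer's configuration, so re-specify your existing settings; the name, region, and forwarding rule below are placeholders:

```
# Scale the load balancer to 3 nodes.
doctl compute load-balancer update <load-balancer-id> \
  --name my-lb \
  --region nyc3 \
  --size-unit 3 \
  --forwarding-rules entry_protocol:http,entry_port:80,target_protocol:http,target_port:80
```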
Note
The quantity and size of the load balancers you can have on your account depends on your account’s resource limits. We use dynamic resource limits to protect our platform against bad actors. Currently, you can’t check your resource limit for load balancers, but you can contact support if you reach the limit and need to increase it. We are working on features that allow you to review this limit in the control panel.
Forwarding Rules
Forwarding rules define how traffic is routed from the load balancer to its backend nodes. The left side of each rule defines the listening port and protocol on the load balancer itself, and the right side defines where and how the requests are routed to the backends.
DigitalOcean Kubernetes automatically manages its load balancers’ forwarding rules, based on the ports you expose for a given service on your worker nodes. You can also manually update your protocol or SSL options.
Add or Remove Forwarding Rules Using the CLI
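As a sketch: the update command replaces the load balancer's full rule set, so include every rule you want to keep. The name, region, and certificate ID below are placeholders; certificate_id refers to a certificate already added to your account:

```
# Terminate HTTPS on port 443 and forward decrypted traffic to port 80.
doctl compute load-balancer update <load-balancer-id> \
  --name my-lb \
  --region nyc3 \
  --forwarding-rules entry_protocol:https,entry_port:443,target_protocol:http,target_port:80,certificate_id:<certificate-id>
```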
Add or Remove Forwarding Rules Using the Control Panel
To add a forwarding rule from the control panel, click Networking in the main navigation, then click Load Balancers. Click the load balancer you want to modify, then click Settings to go to its settings page. In the Forwarding rules section, click the Edit button. A menu appears with any existing rules.
To create a new rule, click the New rule drop-down menu and then select the protocol of the traffic the load balancer receives. This opens additional rule configuration options. Select the port the load balancer receives traffic on, and then select the protocol and port the Droplet receives traffic on. Once you have configured the rule, click Save. The rule is applied to the load balancer.
To remove a forwarding rule, click the Delete button beside the forwarding rule you want to remove.
Health Checks
In the Target section, you choose the Protocol (HTTP, HTTPS, or TCP), Port (80 by default), and Path (/ by default) that nodes should respond on.
In the Additional Settings section, you choose:
The Check Interval, which is how many seconds the load balancer waits between health checks.
The Response Timeout, which is how many seconds the load balancer waits for a response before the health check fails.
The Unhealthy Threshold, which is how many consecutive times a node must fail a health check before the load balancer stops forwarding traffic to it.
The Healthy Threshold, which is how many consecutive times a node must pass a health check before the load balancer forwards traffic to it.
The success criterion for HTTP and HTTPS health checks is a response status code in the range 200–399. The success criterion for TCP health checks is completing a TCP handshake.
Note
HTTP and HTTPS health checks may fail with Droplets running Apache on Rocky Linux because the default Apache page returns a 403 Forbidden HTTP response code. To fix this, either change the health check from HTTP/HTTPS to TCP or configure Apache to return a 200 OK response code by creating an HTML page in Apache’s root directory.
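These settings map to the update command's --health-check flag. A sketch with placeholder name, region, and health check path:

```
# HTTP health check on port 80 at /healthz: check every 10 seconds,
# fail a check after 5 seconds without a response, mark a node
# unhealthy after 3 consecutive failures and healthy after 5 passes.
doctl compute load-balancer update <load-balancer-id> \
  --name my-lb \
  --region nyc3 \
  --forwarding-rules entry_protocol:http,entry_port:80,target_protocol:http,target_port:80 \
  --health-check protocol:http,port:80,path:/healthz,check_interval_seconds:10,response_timeout_seconds:5,unhealthy_threshold:3,healthy_threshold:5
```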
Sticky Sessions
Sticky sessions send subsequent requests from the same client to the same Droplet by setting a cookie with a configurable name and TTL (Time-To-Live) duration. The TTL parameter defines the duration the cookie remains valid in the client’s browser. This option is useful for application sessions that rely on connecting to the same Droplet for each request.
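Sticky sessions can also be enabled from the CLI with the --sticky-sessions flag. The name, region, cookie name, and TTL below are placeholders:

```
# Enable cookie-based sticky sessions with a 300-second TTL.
doctl compute load-balancer update <load-balancer-id> \
  --name my-lb \
  --region nyc3 \
  --forwarding-rules entry_protocol:http,entry_port:80,target_protocol:http,target_port:80 \
  --sticky-sessions type:cookies,cookie_name:DO-LB,cookie_ttl_seconds:300
```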
PROXY Protocol
Enabling the PROXY protocol allows the load balancer to forward client connection information (such as client IP addresses) to your Droplets. The software running on the Droplets must be properly configured to accept the connection information from the load balancer.
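For example, if your Droplets run nginx, the ngx_http_realip_module can accept the PROXY protocol header and restore the original client IP. A sketch, with an illustrative trusted source range:

```
server {
    # Expect PROXY protocol on incoming connections.
    listen 80 proxy_protocol;

    # Trust PROXY protocol headers only from the load balancer's
    # network (the CIDR here is a placeholder).
    set_real_ip_from 10.0.0.0/8;
    real_ip_header proxy_protocol;

    location / {
        return 200 "ok\n";
    }
}
```

With this in place, $remote_addr holds the original client address instead of the load balancer's.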
Backend Keepalive
By default, DigitalOcean Load Balancers ignore the Connection: keep-alive header of HTTP responses from Droplets to load balancers and close the connection upon completion. When you enable backend keepalive, the load balancer honors the Connection: keep-alive header and keeps the connection open for reuse. This allows the load balancer to use fewer active TCP connections to send and to receive HTTP requests between the load balancer and your target Droplets.
Enabling this option generally improves performance (requests per second and latency) and is more resource efficient. For many use cases, such as serving web sites and APIs, this can improve the performance the client experiences. However, it is not guaranteed to improve performance in all situations, and can increase latency in certain scenarios.
The option applies to all forwarding rules where the target protocol is HTTP or HTTPS. It does not apply to forwarding rules that use TCP, or that pass HTTPS or HTTP/2 traffic through to the backends without termination.
There are no hard limits to the number of connections between the load balancer and each server. However, if the target servers are undersized, they may not be able to handle incoming traffic and may lose packets. See Best Practices for Performance on DigitalOcean Load Balancers.
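Backend keepalive can be toggled from the CLI; the flag name below is taken from doctl's load balancer reference, and the name, region, and rule are placeholders:

```
# Enable keep-alive connections between the load balancer and Droplets.
doctl compute load-balancer update <load-balancer-id> \
  --name my-lb \
  --region nyc3 \
  --forwarding-rules entry_protocol:http,entry_port:80,target_protocol:http,target_port:80 \
  --enable-backend-keepalive
```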
HTTP Idle Timeout
By default, load balancer connections time out after being idle for 60 seconds. You can increase or decrease this amount of time to fit your application’s needs, from a minimum of 30 seconds to a maximum of 600 seconds (10 minutes).
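The timeout can be changed through the API's http_idle_timeout_seconds field. Because the update endpoint replaces the load balancer's configuration, send the full desired config; the values below are placeholders:

```
# Set the idle timeout to 120 seconds.
curl -X PUT \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $DIGITALOCEAN_TOKEN" \
  -d '{"name":"my-lb","region":"nyc3","forwarding_rules":[{"entry_protocol":"http","entry_port":80,"target_protocol":"http","target_port":80}],"http_idle_timeout_seconds":120}' \
  "https://api.digitalocean.com/v2/load_balancers/<load-balancer-id>"
```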
Add or Remove Firewall Rules From a Load Balancer
Currently, you can only add firewall rules to a load balancer using the CLI or API.
To add or remove firewall rules from an existing load balancer using the CLI, use the --allow-list and --deny-list flags with the update command to define a list of IP addresses and CIDRs that the load balancer accepts or blocks incoming connections from.
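A sketch, with placeholder IDs and example addresses; each entry pairs a type (ip or cidr) with the address:

```
# Accept connections only from 192.0.2.0/24 and block 203.0.113.25.
doctl compute load-balancer update <load-balancer-id> \
  --name my-lb \
  --region nyc3 \
  --forwarding-rules entry_protocol:http,entry_port:80,target_protocol:http,target_port:80 \
  --allow-list cidr:192.0.2.0/24 \
  --deny-list ip:203.0.113.25
```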
To add or remove firewall rules from an existing load balancer using the API, use the update endpoint with the firewall field to define a list of IP addresses and CIDRs the load balancer accepts or blocks connections from.
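The same change expressed against the API, with placeholder values; as with other updates, PUT replaces the load balancer's configuration:

```
# Define the load balancer's firewall rules via the update endpoint.
curl -X PUT \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $DIGITALOCEAN_TOKEN" \
  -d '{"name":"my-lb","region":"nyc3","forwarding_rules":[{"entry_protocol":"http","entry_port":80,"target_protocol":"http","target_port":80}],"firewall":{"allow":["cidr:192.0.2.0/24"],"deny":["ip:203.0.113.25"]}}' \
  "https://api.digitalocean.com/v2/load_balancers/<load-balancer-id>"
```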