
Load Balancer

Dynamically distribute your traffic to increase the scalability of your application

Load Balancer makes it easier to ensure the scalability, high availability, and resilience of your applications. This is achieved by dynamically balancing the traffic load across multiple instances, in multiple regions. Deliver a great experience to your application's users by automatically managing variable traffic and handling peak loads, while keeping costs under control. By combining Load Balancer with Gateway and Floating IP, you can set up a solution that acts as a single entry point to your application, secures the exposure of your private resources, and supports fail-over scenarios.

Built for high availability

Load Balancer is built upon a distributed architecture and is backed by an SLA providing 99.99% availability. Leveraging its health check capability, Load Balancer distributes the load to available instances.

Designed for automated deployment

Choose the load balancer size that fits your needs. Configure and automate it with the OpenStack API, UI, or CLI, or with the OVHcloud API. Load Balancer can also be deployed with Terraform to automate and balance traffic loads at scale.
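For reference, the sizes map to Octavia flavors (the Usage section below uses --flavor small). Assuming the flavor API is enabled for your project, you can list what your region offers with the OpenStack CLI:

openstack loadbalancer flavor list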

Included security

To ensure data security and confidentiality, Load Balancer comes with free SSL/TLS encryption and benefits from our Anti-DDoS Infrastructure, which provides real-time protection from network attacks.

Discover our Load Balancer range

The following table provides informational values to help you choose the plan that best meets your needs.


 
| Load Balancer size | Bandwidth (all listeners) | Concurrent active sessions (HTTP/TCP/HTTPS*) | Sessions created per second (HTTP/TCP/HTTPS*) | Requests per second (HTTP/HTTPS) | SSL/TLS sessions created per second (TERMINATED_HTTPS*) | Requests per second (TERMINATED_HTTPS*) | Packets per second (UDP) |
| Size S | 200 Mbps (up/down) | 10k | 8k | 10k | 250 | 5k | 10k |
| Size M | 500 Mbps (up/down) | 20k | 10k | 20k | 500 | 10k | 20k |
| Size L | 2 Gbps (up/down) | 40k | 10k | 40k | 1,000 | 20k | 40k |


*The HTTPS listener is passthrough, meaning that SSL/TLS termination is handled by the load balancer members. By contrast, the TERMINATED_HTTPS listener handles SSL/TLS termination on the load balancer itself.
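As an illustration of the difference, a TERMINATED_HTTPS listener references a certificate stored in the OpenStack key manager (Barbican), assuming that service is available on your project; the secret name, certificate file, and the load balancer name test (created in the Usage section below) are examples, not fixed values:

# store a PKCS12 certificate bundle in the key manager (example names)
openstack secret store --name my_tls_cert -t 'application/octet-stream' -e base64 --payload "$(base64 < server.p12)"
# create a listener that terminates SSL/TLS on the load balancer itself
openstack loadbalancer listener create --name tls_listener --protocol TERMINATED_HTTPS --protocol-port 443 --default-tls-container-ref <secret_href> test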

Use cases

Manage high volumes of traffic and seasonal activity

With the Load Balancer, you can manage traffic growth seamlessly by adding new instances to your configuration in just a few clicks. Should your traffic be variable, whether it is increasing or decreasing, the Load Balancer will adapt how it distributes traffic.
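For example, scaling out can be as simple as registering one more backend instance with an existing pool; the names below reuse the example from the Usage section further down and are illustrative only:

openstack loadbalancer member create --subnet-id my_subnet --address 10.0.0.3 --protocol-port 80 pool1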

Blue-Green deployment and testing scenarios

Support for the OpenStack API across Load Balancer, Gateway, and Floating IP enables customers to spawn and test staging environments before deploying them to production. Production and staging environments can then be swapped, which facilitates a continuous deployment model.
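One possible way to perform such a swap, sketched with the OpenStack CLI, is to re-associate the public Floating IP from the VIP port of the current production load balancer to the VIP port of the staging one; the identifiers below are placeholders:

# find the staging load balancer's VIP port
openstack loadbalancer show <green_loadbalancer> -c vip_port_id -f value
# point the public Floating IP at the new environment
openstack floating ip set --port <green_vip_port_id> <floating_ip>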

Load Balancer scenarios

The load balancer can be used in three main types of architecture. Floating IP and Gateway may or may not be part of the architecture, depending on the given scenario.


Public to Private

Incoming traffic originates from the internet and reaches a Floating IP associated with the load balancer. The instances behind the load balancer are located on a private network and have no public IP, which ensures that they remain completely private and isolated from the internet.

 


Public to Public

Incoming traffic originates from the internet and reaches a Floating IP associated with the Load Balancer. The instances to which the Load Balancer routes traffic are reachable via a public IP. The Load Balancer therefore uses the Floating IP as an egress to reach these instances.


Private to Private

Incoming traffic originates from a private network and is routed to instances reachable from this private network. In this case, neither a Floating IP nor a Gateway is needed.

Usage

Our Load Balancer can be used with the OpenStack API or CLI, and will later be available through the Control Panel.

Below are the basic commands to get started.

Create a Load Balancer

openstack loadbalancer create --name test --flavor small --vip-network-id my_private_network
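Provisioning takes a short while; before adding listeners, you can check that the load balancer has reached the ACTIVE state (assuming the test name used above):

openstack loadbalancer show test -c provisioning_status -f value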

Configure an entry point (listener) and a target (pool):

openstack loadbalancer listener create --name listener1 --protocol HTTP --protocol-port 80 test
openstack loadbalancer pool create --name pool1 --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP
openstack loadbalancer healthmonitor create --delay 5 --max-retries 4 --timeout 10 --type HTTP --url-path /healthcheck pool1
openstack loadbalancer member create --subnet-id my_subnet --address 10.0.0.1 --protocol-port 80 pool1
openstack loadbalancer member create --subnet-id my_subnet --address 10.0.0.2 --protocol-port 80 pool1

Configure the network (keep in mind that you need to be inside a vRack for this to work properly; check our guide on how to deploy a vRack):

# configure the network Gateway
openstack subnet set --gateway 10.0.0.254 my_subnet
# add a vrouter
openstack router create myrouter
openstack router set --external-gateway Ext-Net myrouter
openstack router add subnet myrouter my_subnet
# add the floating IP
openstack floating ip create Ext-Net
# The following IDs should be visible in the output of the previous commands
openstack floating ip set --port <loadbalancer_vip_port_id> <floating_ip_id>

Documentation and guides

Understand Networking concepts in Public Cloud
Getting started with Load Balancer on Public Cloud
Ready to get started?

Create an account and launch your services in minutes.

Main features

Regionalized

Create and expose your Load Balancer service closer to your customers, and take a geographical approach when building your infrastructure.

Simplified management

Choose the tool that suits you for administration of your Load Balancer: OpenStack Horizon UI or API.

Integrated with the Public Cloud ecosystem

Deploy and manage your Load Balancer directly from your Public Cloud environment, thanks to Octavia API support and all compatible tools (Terraform, Ansible, Salt, etc.).

SSL/TLS encryption

Load Balancer supports SSL/TLS encryption to ensure data confidentiality. You can quickly create Let's Encrypt DV SSL certificates, included at no additional charge with any of our Load Balancer service plans, or upload your own certificate if you work with a specific Certificate Authority.

Connection to private networks

To keep your application nodes isolated on the private network, the Load Balancer can be used as a pathway between public addressing and your private networks, with the OVHcloud vRack.

Private workloads

If you want to use the Load Balancer privately, keeping it reachable only from your private network with your backend instances inside, that is also possible.

Multiple health check protocols

Define the conditions for excluding an instance or node according to your criteria. You can choose from standard TCP verification, HTTP code checks, or many other options listed in the official OpenStack Load Balancer documentation.
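As an illustration, assuming the pool1 from the Usage section, a plain TCP health check (instead of the HTTP one shown there) could look like this:

openstack loadbalancer healthmonitor create --delay 5 --max-retries 3 --timeout 5 --type TCP pool1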

Support for any Public Cloud instance

The Load Balancer can manage several node types, like the standard instances operated by OpenStack and containers provided by Kubernetes. Through the private network, you can use Hosted Private Cloud virtual machines and Bare Metal servers as a backend.

Public Cloud prices

Load Balancer billing

Load Balancer is billed upon usage, on an hourly basis. The service is available in three plans, depending on your traffic profile: Small, Medium, and Large.

FAQ

What is Layer-7 HTTP(S) load balancing?

This describes the way application-layer traffic (i.e. web traffic) is transported from a source to backend servers through a load-balancing component that can apply advanced traffic routing policies. These policies include the use of HTTP cookies, proxy protocol support, different methods of load distribution between the backends, and HTTPS use and offloading.
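A couple of these Layer-7 capabilities can be sketched with the OpenStack CLI, reusing the listener1 and pool1 names from the Usage section (api_pool is an assumed, pre-existing second pool):

# cookie-based session persistence on an existing pool
openstack loadbalancer pool set --session-persistence type=HTTP_COOKIE pool1
# route requests whose path starts with /api to a dedicated pool
openstack loadbalancer l7policy create --name api_policy --action REDIRECT_TO_POOL --redirect-pool api_pool listener1
openstack loadbalancer l7rule create --compare-type STARTS_WITH --type PATH --value /api api_policy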

Why is my Load Balancer spawned per-region?

The availability of Public Cloud solutions depends on OpenStack regions. Each region has its own OpenStack platform, which provides it with its own computing, storage, network resources, etc. You can find out more about regional availability here.

What protocols can I use with my Load Balancer?

At the launch of the product, the supported protocols are: TCP, HTTP, HTTPS, TERMINATED_HTTPS, UDP, SCTP, and HTTP/2.

How does Load Balancer verify which hosts are healthy?

Load Balancer uses health monitors to check whether backend services are alive. You can configure a number of protocols for that purpose, including (but not limited to) HTTP, TLS, TCP, UDP, SCTP, and PING.

I have my own SSL certificate, can I use it?

Yes, of course. You can either use the OVHcloud Customer Control Panel to upload your own SSL certificate to be used with Load Balancer, or perform this operation using the OVHcloud API if you require this action to be automated.

I don't know how to generate an SSL certificate, how can I use HTTPS LBaaS?

That's not an issue! Through the OVHcloud Customer Control Panel, you can create and generate your own Let's Encrypt SSL DV certificate and use it with your Load Balancer, making your deployment easy. The Let's Encrypt SSL DV certificate is included in the price of the Load Balancer at no additional charge.

What is a load balancer in the cloud?

A cloud Load Balancer is a load balancing system that is fully managed in the cloud, can be quickly instantiated and configured via API, and has very high availability. A typical feature of a cloud Load Balancer is pay-per-use billing. This means that you only pay for what you use.

What is the difference between Load Balancer for Kubernetes and Load Balancer?

Load Balancer for Kubernetes works for our Managed Kubernetes offer only. It delivers an interface that is directly compatible with Kubernetes. This means you can easily control your Load Balancer for Kubernetes, with native tools.

Load Balancer is built upon OpenStack Octavia and can be deployed within your Public Cloud project, leveraging the OpenStack API and enabling automation through tools like Terraform, Ansible, or Salt. Load Balancer is planned to support Kubernetes, and we will keep you updated about its availability.