V2 Pool Examples
Marathon™ Apps and DC/OS™ Services
DC/OS services typically run as applications on the Marathon framework. To create a pool configuration file for a Marathon application, you need to know the Apache® Mesos® task name and port name.
For example, in the following snippet of a Marathon app definition, the task name is my-app and the port name is web.
{
"id": "/my-app",
...
"portDefinitions": [
{
"name": "web",
"protocol": "tcp",
"port": 0
}
]
}
Simple Marathon Application
The following is a simple example of a pool configuration for load-balancing a Marathon application like the one above:
{
"apiVersion": "V2",
"name": "app-lb",
"count": 1,
"haproxy": {
"frontends": [{
"bindPort": 80,
"protocol": "HTTP",
"linkBackend": {
"defaultBackend": "app-backend"
}
}],
"backends": [{
"name": "app-backend",
"protocol": "HTTP",
"services": [{
"marathon": {
"serviceID": "/my-app"
},
"endpoint": {
"portName": "web"
}
}]
}]
}
}
Path Based Routing
This pool configures a load balancer that sends traffic to the httpd backend unless the path begins with /nginx, in which case it sends traffic to the NGINX™ backend. The request path is rewritten before being sent to NGINX.
{
"apiVersion": "V2",
"name": "path-routing",
"count": 1,
"haproxy": {
"frontends": [{
"bindPort": 80,
"protocol": "HTTP",
"linkBackend": {
"defaultBackend": "httpd",
"map": [{
"pathBeg": "/nginx",
"backend": "nginx"
}]
}
}],
"backends": [{
"name": "httpd",
"protocol": "HTTP",
"services": [{
"marathon": {
"serviceID": "/host-httpd"
},
"endpoint": {
"portName": "web"
}
}]
},{
"name": "nginx",
"protocol": "HTTP",
"rewriteHttp": {
"path": {
"fromPath": "/nginx",
"toPath": "/"
}
},
"services": [{
"mesos": {
"frameworkName": "marathon",
"taskName": "bridge-nginx"
},
"endpoint": {
"portName": "web"
}
}]
}]
}
}
Here are some examples of how the path would be rewritten for different fromPath and toPath values:

- fromPath: "/nginx", toPath: "", request: /nginx -> /
- fromPath: "/nginx", toPath: "/", request: /nginx -> /
- fromPath: "/nginx", toPath: "/", request: /nginx/ -> /
- fromPath: "/nginx", toPath: "/", request: /nginx/index.html -> /index.html
- fromPath: "/nginx", toPath: "/", request: /nginx/subpath/index.html -> /subpath/index.html
- fromPath: "/nginx/", toPath: "", request: /nginx -> /nginx (the path is not rewritten in this case because the request did not match /nginx/)
- fromPath: "/nginx/", toPath: "", request: /nginx/ -> /
- fromPath: "/nginx", toPath: "/subpath", request: /nginx -> /subpath
- fromPath: "/nginx", toPath: "/subpath", request: /nginx/ -> /subpath/
- fromPath: "/nginx", toPath: "/subpath", request: /nginx/index.html -> /subpath/index.html
- fromPath: "/nginx", toPath: "/subpath/", request: /nginx/index.html -> /subpath//index.html (note that for cases other than toPath: "" or toPath: "/", it is suggested that fromPath and toPath either both end in / or neither do, because the rewritten path could otherwise end up with a double slash)
- fromPath: "/nginx/", toPath: "/subpath/", request: /nginx/index.html -> /subpath/index.html
We used pool.haproxy.frontend.linkBackend.pathBeg in this example to match on the beginning of a path. Other useful fields are:

- pathBeg: Match on the path beginning
- pathEnd: Match on the path ending
- pathReg: Match the path against a regular expression
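For example, a linkBackend map could combine these matchers in a single frontend. This is only a sketch, not part of the pool above; the images and api backend names are placeholders:
"linkBackend": {
  "defaultBackend": "httpd",
  "map": [
    {
      "pathEnd": ".jpg",
      "backend": "images"
    },
    {
      "pathReg": "^/api/.*",
      "backend": "api"
    }
  ]
}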
Internal (East / West) Load Balancing
Sometimes it is necessary or desirable to use Edge-LB to load balance traffic inside a DC/OS cluster. This can also be done with Minuteman VIPs, but if you need layer 7 functionality, Edge-LB can be configured for internal-only traffic.
The necessary changes are:

- Change pool.haproxy.stats.bindPort and pool.haproxy.frontend.bindPort to ports that are available on at least one private agent.
- Change pool.role to something other than slave_public (the default). Usually "*" works unless you have created a separate role for this purpose.
{
"apiVersion": "V2",
"name": "internal-lb",
"role": "*",
"count": 1,
"haproxy": {
"stats": {
"bindPort": 15001
},
"frontends": [{
"bindPort": 15000,
"protocol": "HTTP",
"linkBackend": {
"defaultBackend": "app-backend"
}
}],
"backends": [{
"name": "app-backend",
"protocol": "HTTP",
"services": [{
"marathon": {
"serviceID": "/my-app"
},
"endpoint": {
"portName": "web"
}
}]
}]
}
}
Internal Static DNS, VIPs, and Addresses
Internal addresses, such as those generated by Mesos-DNS, Spartan, or Minuteman VIPs, can be exposed outside of the cluster with Edge-LB by using pool.haproxy.backend.service.endpoint.type: "ADDRESS".
Note that this is not always a good idea: exposing secured internal services to the outside world through an insecure endpoint can be dangerous. Keep this in mind when using this feature.
{
"apiVersion": "V2",
"name": "dns-lb",
"count": 1,
"haproxy": {
"frontends": [{
"bindPort": 80,
"protocol": "HTTP",
"linkBackend": {
"defaultBackend": "app-backend"
}
}],
"backends": [{
"name": "app-backend",
"protocol": "HTTP",
"services": [{
"endpoint": {
"type": "ADDRESS",
"address": "myapp.marathon.l4lb.thisdcos.directory",
"port": 555
}
}]
}]
}
}
Mesos Frameworks and DC/OS Services
For Mesos frameworks and DC/OS services that run tasks not managed by Marathon, such as Kafka® brokers, use the pool.haproxy.backend.service.mesos object to filter and select the appropriate Mesos tasks.
{
"apiVersion": "V2",
"name": "services-lb",
"count": 1,
"haproxy": {
"frontends": [{
"bindPort": 1025,
"protocol": "TCP",
"linkBackend": {
"defaultBackend": "kafka-backend"
}
}],
"backends": [{
"name": "kafka-backend",
"protocol": "TCP",
"services": [{
"mesos": {
"frameworkName": "beta-confluent-kafka",
"taskNamePattern": "^broker-.*$"
},
"endpoint": {
"port": 1025
}
}]
}]
}
}
Other useful fields for selecting frameworks and tasks in pool.haproxy.backend.service.mesos:

- frameworkName: Exact match
- frameworkNamePattern: Regular expression
- frameworkID: Exact match
- frameworkIDPattern: Regular expression
- taskName: Exact match
- taskNamePattern: Regular expression
- taskID: Exact match
- taskIDPattern: Regular expression
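As a sketch of how these fields can be combined (the framework name pattern, task ID pattern, and port below are hypothetical and not taken from a real service), a backend service can also select tasks by regular expression instead of exact name:
"services": [{
  "mesos": {
    "frameworkNamePattern": "^cassandra.*$",
    "taskIDPattern": "^node-[0-9]+-server.*$"
  },
  "endpoint": {
    "port": 9042
  }
}]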
Hostname / SNI Routing with VHOSTS
To direct traffic based on the hostname to multiple backends for a single port (such as 80 or 443), you can use the pool.haproxy.frontend.linkBackend
setting.
Before you begin
- You must have at least one secure socket layer (SSL) certificate for the Edge-LB service account. Depending on the security requirements of the cluster, you might have additional SSL certificates that you want to use for access to the linked backend.
- You should create and store a DC/OS secret for each unique SSL certificate you are using. However, one secret is enough if the SSL certificate includes a wildcard that matches several separate websites under the same parent domain. For example, you only need to create and store one secret if you have a certificate to trust any website in the *.ajuba.net domain.
- Each secret should contain sections similar to the following:

-----BEGIN CERTIFICATE-----
...certificate body here...
-----END CERTIFICATE-----
-----BEGIN RSA PRIVATE KEY-----
...private key body here...
-----END RSA PRIVATE KEY-----
For more information about creating and storing secrets, see Secrets.
Sample configuration
After you have created or identified the SSL certificate and stored it securely in DC/OS Secrets, you can route traffic to multiple backends using the pool.haproxy.frontend.linkBackend
setting as illustrated in the following example:
{
"apiVersion": "V2",
"name": "vhost-routing",
"count": 1,
"secrets": [
{
"secret": "mysslsecret1",
"file": "mysecretfile1"
},
{
"secret": "mysslsecret2",
"file": "mysecretfile2"
}
],
"haproxy": {
"frontends": [
{
"bindPort": 80,
"protocol": "HTTP",
"linkBackend": {
"map": [
{
"hostEq": "nginx.example.com",
"backend": "nginx"
},
{
"hostReg": ".*.httpd.example.com",
"backend": "httpd"
}
]
}
},
{
"bindPort": 443,
"protocol": "HTTPS",
"certificates": [
"$SECRETS/mysecretfile1",
"$SECRETS/mysecretfile2"
],
"linkBackend": {
"map": [
{
"hostEq": "nginx.example.com",
"backend": "nginx"
},
{
"hostReg": ".*.httpd.example.com",
"backend": "httpd"
}
]
}
}
],
"backends": [
{
"name": "httpd",
"protocol": "HTTP",
"services": [
{
"marathon": {
"serviceID": "/host-httpd"
},
"endpoint": {
"portName": "web"
}
}
]
},
{
"name": "nginx",
"protocol": "HTTP",
"services": [
{
"mesos": {
"frameworkName": "marathon",
"taskName": "bridge-nginx"
},
"endpoint": {
"portName": "web"
}
}
]
}
]
}
}
Weighted Backend Servers
To add relative weights to backend servers, use the pool.haproxy.backend.service.endpoint.miscStr field. In the example below, the /app-v1 service receives 20 out of every 30 requests, and /app-v2 receives the remaining 10 out of every 30 requests. The default weight is 1, and the maximum weight is 256.
This approach can be used to implement canary or A/B testing use cases.
{
"apiVersion": "V2",
"name": "app-lb",
"count": 1,
"haproxy": {
"frontends": [{
"bindPort": 80,
"protocol": "HTTP",
"linkBackend": {
"defaultBackend": "default"
}
}],
"backends": [{
"name": "default",
"protocol": "HTTP",
"services": [{
"marathon": {
"serviceID": "/app-v1"
},
"endpoint": {
"portName": "web",
"miscStr": "weight 20"
}
},{
"marathon": {
"serviceID": "/app-v2"
},
"endpoint": {
"portName": "web",
"miscStr": "weight 10"
}
}]
}]
}
}
SSL/TLS certificates
There are three different ways to get and use a certificate:
Automatically generated self-signed certificate
{
"apiVersion": "V2",
"name": "auto-certificates",
"count": 1,
"autoCertificate": true,
"haproxy": {
"frontends": [
{
"bindPort": 443,
"protocol": "HTTPS",
"certificates": [
"$AUTOCERT"
],
"linkBackend": {
"defaultBackend": "host-httpd"
}
}
],
"backends": [{
"name": "host-httpd",
"protocol": "HTTP",
"services": [{
"marathon": {
"serviceID": "/host-httpd"
},
"endpoint": {
"portName": "web"
}
}]
}]
}
}
DC/OS Secrets (Enterprise Only)
{
"apiVersion": "V2",
"name": "secret-certificates",
"count": 1,
"autoCertificate": false,
"secrets": [
{
"secret": "mysecret",
"file": "mysecretfile"
}
],
"haproxy": {
"frontends": [
{
"bindPort": 443,
"protocol": "HTTPS",
"certificates": [
"$SECRETS/mysecretfile"
],
"linkBackend": {
"defaultBackend": "host-httpd"
}
}
],
"backends": [{
"name": "host-httpd",
"protocol": "HTTP",
"services": [{
"marathon": {
"serviceID": "/host-httpd"
},
"endpoint": {
"portName": "web"
}
}]
}]
}
}
Environment variables (Insecure)
{
"apiVersion": "V2",
"name": "env-certificates",
"count": 1,
"autoCertificate": false,
"environmentVariables": {
"ELB_FILE_HAPROXY_CERT": "-----BEGIN CERTIFICATE-----\nfoo\n-----END CERTIFICATE-----\n-----BEGIN RSA PRIVATE KEY-----\nbar\n-----END RSA PRIVATE KEY-----\n"
},
"haproxy": {
"frontends": [
{
"bindPort": 443,
"protocol": "HTTPS",
"certificates": [
"$ENVFILE/ELB_FILE_HAPROXY_CERT"
],
"linkBackend": {
"defaultBackend": "host-httpd"
}
}
],
"backends": [{
"name": "host-httpd",
"protocol": "HTTP",
"services": [{
"marathon": {
"serviceID": "/host-httpd"
},
"endpoint": {
"portName": "web"
}
}]
}]
}
}
Virtual Networks
This example creates a pool that is launched on the DC/OS overlay virtual network named "dcos". In general, you can launch a pool on any CNI network by setting pool.virtualNetworks[].name to the CNI network name.
{
"apiVersion": "V2",
"name": "vnet-lb",
"count": 1,
"virtualNetworks": [
{
"name": "dcos",
"labels": {
"key0": "value0",
"key1": "value1"
}
}
],
"haproxy": {
"frontends": [{
"bindPort": 80,
"protocol": "HTTP",
"linkBackend": {
"defaultBackend": "vnet-be"
}
}],
"backends": [{
"name": "vnet-be",
"protocol": "HTTP",
"services": [{
"marathon": {
"serviceID": "/my-vnet-app"
},
"endpoint": {
"portName": "my-vnet-port"
}
}]
}]
}
}
Auto Pool Marathon Application
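The following Marathon app definition shows an application that Edge-LB exposes automatically through its edgelb.* labels. Based on the label names below, edgelb.expose marks the app for exposure, edgelb.template selects the pool template, the edgelb.first.frontend.* labels set the certificate and routing rules for the generated frontend, and edgelb.first.backend.rewriteHttp.path configures path rewriting for the backend: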
{
"id": "/auto-pool-bridge",
"labels": {
"edgelb.expose": "true",
"edgelb.template": "default",
"edgelb.first.frontend.certificates": "$AUTOCERT",
"edgelb.first.frontend.rules": "hostEq:www.test.com|pathBeg:/bridge",
"edgelb.first.backend.rewriteHttp.path": "/bridge:/id"
},
"instances": 1,
"container": {
"type": "DOCKER",
"docker": {
"image": "mesosphere/id-server:2.1.0"
},
"portMappings": [
{
"containerPort": 80,
"hostPort": 0,
"protocol": "tcp",
"name": "id"
}
]
},
"cpus": 0.1,
"requirePorts": false,
"networks": [
{
"mode": "container/bridge"
}
],
"mem": 32,
"cmd": "/start 80 bridge"
}