DC/OS Apache Kafka Security
The DC/OS Apache Kafka service supports Kafka’s native transport encryption, authentication, and authorization mechanisms. The service provides automation and orchestration to simplify the use of these important features. For more information, see the Apache Kafka security documentation.
Provisioning a service account
This section describes how to configure DC/OS access for Apache Kafka. Depending on your security mode, Apache Kafka may require service authentication for access to DC/OS.
A service like Apache Kafka typically performs certain privileged actions on the cluster, which might require authenticating with the cluster. A service account associated with the service is used to authenticate with the DC/OS cluster. It is recommended to provision a separate service account for each service that performs privileged operations. Service accounts authenticate using a public-private key pair. The public key is used to create the service account in the cluster, while the corresponding private key is stored in the secret store. The service account and the service account secret are passed to the service as install-time options.
Security mode | Service account
---|---
Disabled | Not available
Permissive | Optional
Strict | Required
If you install a service in permissive mode and do not specify a service account, Metronome and Marathon will act as if requests made by this service are made by an account with the superuser permission.
Prerequisites:
- DC/OS CLI installed and logged in as a superuser.
- Enterprise DC/OS CLI 0.4.14 or later installed.
Create a Key Pair
In this step, a 2048-bit RSA public-private key pair is created using the Enterprise DC/OS CLI.
Create a public-private key pair and save each value into a separate file within the current directory.
dcos security org service-accounts keypair <private-key>.pem <public-key>.pem
Create a Service Account
From a terminal prompt, create a new service account (for example, kafka) containing the public key (<your-public-key>.pem).
dcos security org service-accounts create -p <your-public-key>.pem -d <description> kafka
You can verify your new service account using the following command.
dcos security org service-accounts show kafka
Create a Secret
Create a secret (kafka/<secret-name>) with your service account and private key specified (<private-key>.pem).
dcos security secrets create-sa-secret <private-key>.pem <service-account-id> kafka/<secret-name>
You can list the secrets with this command:
dcos security secrets list /
Create and Assign Permissions
Use the following DC/OS CLI commands to rapidly provision the Apache Kafka service account with the required permissions.
- Create the permissions, using the <service-role> that corresponds to your service name:
Service name | <service-role>
---|---
/kafka | kafka-role
/kafka-prod | kafka-prod-role
/team01/kafka | team01__kafka-role
/team01/prod/kafka | team01__prod__kafka-role
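The mapping above can be derived mechanically: drop the leading /, replace each remaining / with __, and append -role. A minimal shell sketch (the service name is illustrative):

```shell
# Derive <service-role> from a service name:
# drop the leading /, turn remaining /'s into __, append -role.
service_name="/team01/kafka"
service_role=$(echo "$service_name" | sed 's|^/||; s|/|__|g; s|$|-role|')
echo "$service_role"   # -> team01__kafka-role
```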
Permissive
Run these commands with the service account name you created for the service in the Create a Service Account step above. The examples below use kafka as the service account name.
dcos security org users grant kafka dcos:mesos:master:framework:role:<service-role> create --description "Allow registering as a framework of role <service-role> with Mesos master"
dcos security org users grant kafka dcos:mesos:master:reservation:role:<service-role> create --description "Allow creating Mesos resource reservations of role <service-role>"
dcos security org users grant kafka dcos:mesos:master:volume:role:<service-role> create --description "Allow creating Mesos persistent volumes of role <service-role>"
dcos security org users grant kafka dcos:mesos:master:reservation:principal:kafka delete --description "Allow unreserving Mesos resource reservations with principal kafka"
dcos security org users grant kafka dcos:mesos:master:volume:principal:kafka delete --description "Allow deleting Mesos persistent volumes with principal kafka"
Strict
Run these commands with the service account name you created for the service in the Create a Service Account step above. The examples below use kafka as the service account name.
dcos security org users grant kafka dcos:mesos:master:task:user:nobody create --description "Allow running a task as linux user nobody"
dcos security org users grant kafka dcos:mesos:master:framework:role:<service-role> create --description "Allow registering as a framework of role <service-role> with Mesos master"
dcos security org users grant kafka dcos:mesos:master:reservation:role:<service-role> create --description "Allow creating Mesos resource reservations of role <service-role>"
dcos security org users grant kafka dcos:mesos:master:volume:role:<service-role> create --description "Allow creating Mesos persistent volumes of role <service-role>"
dcos security org users grant kafka dcos:mesos:master:reservation:principal:kafka delete --description "Allow unreserving Mesos resource reservations with principal kafka"
dcos security org users grant kafka dcos:mesos:master:volume:principal:kafka delete --description "Allow deleting Mesos persistent volumes with principal kafka"
Transport Encryption
Using Custom TLS Settings with the Kafka Package
To use custom TLS certificates for the Kafka service, add the following options to the package configuration:
"service": {
    "name": "kafka",
    "transport_encryption": {
        "enabled": true,
        "tls_cert": "kafka/customtlscert",
        "key_store": "kafka/keystore",
        "key_store_password_file": "kafka/keystorepass",
        "trust_store": "kafka/truststore",
        "trust_store_password_file": "kafka/truststorepass",
        "allow_plaintext": false
    }
}
Note: transport_encryption.enabled: true means that custom transport encryption is enabled. In future releases, the custom TLS and default TLS features will be separated.
Example with self-signed certificate
Generate a CA certificate and CA private key, called ca-cert and ca-key respectively:
openssl req -new -newkey rsa:4096 -days 365 -x509 -subj "/C=US/ST=CA/L=SF/O=Mesosphere/OU=Mesosphere/CN=kafka" -keyout ca-key -out ca-cert -nodes
Generate a keystore, called broker.keystore
keytool -genkey -keyalg RSA -keystore broker.keystore -validity 365 -storepass changeit -keypass changeit -dname "CN=kafka" -storetype JKS
Generate Certificate Signing Request (CSR) called cert-file
keytool -keystore broker.keystore -certreq -file cert-file -storepass changeit -keypass changeit
Sign the generated certificate:
openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out cert-signed -days 365 -CAcreateserial -passin pass:changeit
Generate a truststore, called broker.truststore, with ca-cert:
keytool -keystore broker.truststore -alias CARoot -importcert -file ca-cert -storepass changeit -keypass changeit -noprompt
Import the signed certificate (cert-signed) into the truststore:
keytool -keystore broker.truststore -alias CertSigned -importcert -file cert-signed -storepass changeit -keypass changeit -noprompt
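The CA, CSR, and signing steps can be exercised end-to-end with openssl alone; the sketch below substitutes openssl for the keytool steps so it runs without a JDK (file names mirror those above, the CN values are illustrative):

```shell
set -e
workdir=$(mktemp -d) && cd "$workdir"
# CA private key and self-signed CA certificate (as in the first step above)
openssl req -new -newkey rsa:2048 -days 365 -x509 -subj "/CN=kafka-ca" \
    -keyout ca-key -out ca-cert -nodes 2>/dev/null
# Broker private key and CSR (stand-in for the keytool keystore/CSR steps)
openssl req -new -newkey rsa:2048 -subj "/CN=kafka" \
    -keyout broker-key -out cert-file -nodes 2>/dev/null
# Sign the CSR with the CA
openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file \
    -out cert-signed -days 365 -CAcreateserial 2>/dev/null
# Confirm the signed certificate chains to the CA
openssl verify -CAfile ca-cert cert-signed
```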
Attach to the DC/OS cluster:
dcos cluster setup {CLUSTER_URL}
Create a service account and its secret to use the TLS feature. Refer to the Service Accounts section for more details.
Now we are ready to install a Kafka cluster with custom transport encryption enabled.
Create a file named dcos-kafka-options-customtls.json with the following configuration:
cat <<EOF >dcos-kafka-options-customtls.json
{
"service": {
"name": "kafka",
"service_account": "kafka",
"service_account_secret": "kafka-secret",
"security": {
"transport_encryption": {
"enabled": true,
"allow_plaintext": false,
"tls_cert": "kafka/customtlscert",
"key_store": "kafka/keystore",
"key_store_password_file": "kafka/keystorepass",
"trust_store": "kafka/truststore",
"trust_store_password_file": "kafka/truststorepass"
}
}
}
}
EOF
Tip: If you store your secret in a path that matches the service name (e.g., service name and secret path are both kafka), then only the service named kafka can access it.
Install the beta kafka service
dcos package install beta-kafka --options=dcos-kafka-options-customtls.json --yes
Verification
The custom transport encryption settings can be verified in the server.properties file of the brokers.
Authentication
DC/OS Apache Kafka supports two authentication mechanisms, SSL and Kerberos. The two are supported independently and may not be combined. If both SSL and Kerberos authentication are enabled, the service will use Kerberos authentication.
Kerberos Authentication
Kerberos authentication relies on a central authority to verify that Kafka clients (be it broker, consumer, or producer) are who they say they are. DC/OS Apache Kafka integrates with your existing Kerberos infrastructure to verify the identity of clients.
Prerequisites
- The hostname and port of a KDC reachable from your DC/OS cluster
- Sufficient access to the KDC to create Kerberos principals
- Sufficient access to the KDC to retrieve a keytab for the generated principals
- The DC/OS Enterprise CLI
- DC/OS Superuser permissions
Configure Kerberos Authentication
Create principals
The DC/OS Apache Kafka service requires a Kerberos principal for each broker to be deployed. Each principal must be of the form
<service primary>/kafka-<broker index>-broker.<service subdomain>.autoip.dcos.thisdcos.directory@<service realm>
with:
- service primary = service.security.kerberos.primary
- broker index = 0 up to brokers.count - 1
- service subdomain = service.name with all /'s removed
- service realm = service.security.kerberos.realm
For example, if installing with these options:
{
"service": {
"name": "a/good/example",
"security": {
"kerberos": {
"primary": "example",
"realm": "EXAMPLE"
}
}
},
"brokers": {
"count": 3
}
}
then the principals to create would be:
example/kafka-0-broker.agoodexample.autoip.dcos.thisdcos.directory@EXAMPLE
example/kafka-1-broker.agoodexample.autoip.dcos.thisdcos.directory@EXAMPLE
example/kafka-2-broker.agoodexample.autoip.dcos.thisdcos.directory@EXAMPLE
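The full principal list for any configuration can be generated with a short shell sketch like the following (values mirror the example above; adjust to your own settings):

```shell
primary="example"             # service.security.kerberos.primary
service_name="a/good/example" # service.name
realm="EXAMPLE"               # service.security.kerberos.realm
count=3                       # brokers.count
# service subdomain = service.name with all /'s removed
subdomain=$(echo "$service_name" | sed 's|/||g')
for i in $(seq 0 $((count - 1))); do
    echo "${primary}/kafka-${i}-broker.${subdomain}.autoip.dcos.thisdcos.directory@${realm}"
done
```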
Active Directory
Microsoft Active Directory can be used as a Kerberos KDC. Doing so requires creating a mapping between Active Directory users and Kerberos principals.
The utility ktpass can be used to both create a keytab from Active Directory and generate the mapping at the same time.
The mapping can, however, be created manually. For a Kerberos principal like <primary>/<host>@<REALM>, the Active Directory user should have its servicePrincipalName and userPrincipalName attributes set as follows:
servicePrincipalName = <primary>/<host>
userPrincipalName = <primary>/<host>@<REALM>
For example, with the Kerberos principal example/kafka-0-broker.agoodexample.autoip.dcos.thisdcos.directory@EXAMPLE, the correct mapping would be:
servicePrincipalName = example/kafka-0-broker.agoodexample.autoip.dcos.thisdcos.directory
userPrincipalName = example/kafka-0-broker.agoodexample.autoip.dcos.thisdcos.directory@EXAMPLE
If either mapping is incorrect or not present, the service will fail to authenticate that principal. The symptom in the Kerberos debug logs will be an error of the form
KRBError:
sTime is Wed Feb 07 03:22:47 UTC 2018 1517973767000
suSec is 697984
error code is 6
error Message is Client not found in Kerberos database
sname is krbtgt/AD.MESOSPHERE.COM@AD.MESOSPHERE.COM
msgType is 30
when the userPrincipalName is set incorrectly, and an error of the form
KRBError:
sTime is Wed Feb 07 03:44:57 UTC 2018 1517975097000
suSec is 128465
error code is 7
error Message is Server not found in Kerberos database
sname is kafka/kafka-1-broker.confluent-kafka.autoip.dcos.thisdcos.directory@AD.MESOSPHERE.COM
msgType is 30
when the servicePrincipalName is set incorrectly.
Place Service Keytab in DC/OS Secret Store
The DC/OS Apache Kafka service uses a keytab containing all node principals (service keytab). After creating the principals above, generate the service keytab making sure to include all the node principals. This will be stored as a secret in the DC/OS Secret Store.
The service keytab should be stored at service/path/name/service.keytab, where service/path/name matches the path and name of the service. (On DC/OS 1.10, binary secrets must be base64-encoded and the file name prefixed, i.e. service/path/name/__dcos_base64__service.keytab.) For example, if installing with the options
{
"service": {
"name": "a/good/example"
}
}
then the service keytab should be stored at a/good/example/service.keytab
.
Documentation for adding a file to the secret store can be found in the DC/OS Secrets documentation.
Install the Service
Install the DC/OS Apache Kafka service with the following options in addition to your own:
{
"service": {
"security": {
"kerberos": {
"enabled": true,
"enabled_for_zookeeper": <true|false default false>,
"kdc": {
"hostname": "<kdc host>",
"port": <kdc port>
},
"primary": "<service primary default kafka>",
"realm": "<realm>",
"keytab_secret": "<path to keytab secret>",
"debug": <true|false default false>
}
}
}
}
If enabled_for_zookeeper is set to true, the service must also be configured with the address of a Kerberized ZooKeeper ensemble:
{
    "kafka": {
        "kafka_zookeeper_uri": <list of zookeeper hosts>
    }
}
The DC/OS Apache ZooKeeper service (kafka-zookeeper package) is intended for this purpose and supports Kerberos.
SSL Authentication
SSL authentication requires that all clients, be they brokers, producers, or consumers, present a valid certificate from which their identity can be derived. DC/OS Apache Kafka uses the CN of the SSL certificate as the principal for a given client. For example, from the certificate CN=bob@example.com,OU=,O=Example,L=London,ST=London,C=GB, the principal bob@example.com will be extracted.
Prerequisites
- Completion of the section Transport Encryption above
Install the Service
Install the DC/OS Apache Kafka service with the following options in addition to your own:
{
"service": {
"service_account": "<service-account>",
"service_account_secret": "<secret path>",
"security": {
"transport_encryption": {
"enabled": true
},
"ssl_authentication": {
"enabled": true
}
}
}
}
Authenticating a Client
To authenticate a client against DC/OS Apache Kafka, you must configure it to use a certificate signed by the DC/OS CA. After generating a certificate signing request, you can submit it to the DC/OS CA for signing by calling the API endpoint <dcos-cluster>/ca/api/v2/sign. Using curl, the request would look like:
curl -X POST \
-H "Authorization: token=$(dcos config show core.dcos_acs_token)" \
<dcos-cluster>/ca/api/v2/sign \
-d '{"certificate_request": "<json-encoded-value-of-request.csr>"}'
The <json-encoded-value-of-request.csr> field represents the content of the csr file as a single line, where newlines are replaced with \n.
curl -X POST \
-H "Authorization: token=$(dcos config show core.dcos_acs_token)" \
<dcos-cluster>/ca/api/v2/sign \
-d '{"certificate_request": "-----BEGIN CERTIFICATE REQUEST-----\nMIIC<snipped for brevity>o39lBi1w=\n-----END CERTIFICATE REQUEST-----\n"}'
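One way to produce the JSON-encoded body is to let a JSON library do the newline escaping. A sketch using python3 (the request.csr contents here are a stand-in; in practice the file comes from your CSR generation step):

```shell
# Write a stand-in CSR file for illustration (normally produced by openssl/keytool)
printf -- '-----BEGIN CERTIFICATE REQUEST-----\nMIIC...\n-----END CERTIFICATE REQUEST-----\n' > request.csr
# JSON-encode it into the request body; json.dumps turns real newlines into \n
python3 -c 'import json, sys; print(json.dumps({"certificate_request": open(sys.argv[1]).read()}))' request.csr > body.json
cat body.json
```

The resulting body.json can then be passed to curl with -d @body.json.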
The response will contain a signed public certificate. More information on the DC/OS CA API can be found in the DC/OS documentation.
Authorization
The DC/OS Apache Kafka service supports Kafka’s ACL-based authorization system. To use Kafka’s ACLs, either SSL or Kerberos authentication must be enabled as detailed above.
Enable Authorization
Prerequisites
- Completion of either the SSL Authentication or Kerberos Authentication section above
Install the Service
Install the DC/OS Apache Kafka service with the following options in addition to your own (remember, either SSL authentication or Kerberos must be enabled):
{
"service": {
"security": {
"authorization": {
"enabled": true,
"super_users": "<list of super users>",
"allow_everyone_if_no_acl_found": <true|false default false>
}
}
}
}
service.security.authorization.super_users should be set to a semicolon-delimited list of principals to treat as super users (all permissions). The format of the list is User:<user1>;User:<user2>;.... With Kerberos authentication, the “user” value is the Kerberos primary; with SSL authentication, the “user” value is the CN of the certificate. The Kafka brokers themselves are automatically designated as super users.
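The super_users string can be assembled from a list of principals like so (alice and bob are illustrative user values):

```shell
principals="alice bob"
# printf repeats its format string for each argument; sed trims the trailing ;
super_users=$(printf 'User:%s;' $principals | sed 's/;$//')
echo "$super_users"   # -> User:alice;User:bob
```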
Securely Exposing DC/OS Apache Kafka Outside the Cluster
Both transport encryption and Kerberos are tightly coupled to the DNS hosts of the Kafka brokers. Therefore, exposing a secure Apache Kafka service outside of the cluster requires additional setup.
Broker to Client Connection
To expose a secure Apache Kafka service outside of the cluster, any client connecting to it must be able to access all brokers of the service via the IP address assigned to the broker. This IP address will be one of: an IP address on a virtual network or the IP address of the agent the broker is running on.
Forwarding DNS and Custom Domain
Every DC/OS cluster has a unique cryptographic ID which can be used to forward DNS queries to that cluster. To securely expose the service outside the cluster, external clients must have an upstream resolver configured to forward DNS queries to the DC/OS cluster of the service as described here.
With only forwarding configured, DNS entries within the DC/OS cluster will be resolvable at <task-domain>.autoip.dcos.<cryptographic-id>.dcos.directory. However, if you configure a DNS alias, you can use a custom domain, for example <task-domain>.cluster-1.acmeco.net. In either case, the DC/OS Apache Kafka service will need to be installed with an additional security option:
{
"service": {
"security": {
"custom_domain": "<custom-domain>"
}
}
}
where <custom-domain> is one of autoip.dcos.<cryptographic-id>.dcos.directory or your organization-specific domain (e.g., cluster-1.acmeco.net).
As a concrete example, using the custom domain cluster-1.acmeco.net, the broker 0 task would have a host of kafka-0-broker.<service-name>.cluster-1.acmeco.net.
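The broker hosts under a custom domain follow the same pattern as the default autoip addresses; a sketch of the construction (service name and domain are the examples from the text):

```shell
service_name="a/good/example"
custom_domain="cluster-1.example.net"
# As with the autoip hosts, the subdomain is the service name with /'s removed
subdomain=$(echo "$service_name" | sed 's|/||g')
echo "kafka-0-broker.${subdomain}.${custom_domain}"   # -> kafka-0-broker.agoodexample.cluster-1.example.net
```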
Kerberos Principal Changes
Transport encryption alone does not require any additional changes. Endpoint discovery will work as normal, and clients will be able to connect securely with the custom domain as long as they are configured as described here.
Kerberos, however, does require slightly different configuration. As noted in the section Create Principals, the principals of the service depend on the hostname of the service. When creating the Kerberos principals, be sure to use the correct domain.
For example, if you install with the following settings:
{
"service": {
"name": "a/good/example",
"security": {
"custom_domain": "cluster-1.example.net",
"kerberos": {
"primary": "example",
"realm": "EXAMPLE"
}
}
},
"brokers": {
"count": 3
}
}
The principals to create are as follows:
example/kafka-0-broker.agoodexample.cluster-1.example.net@EXAMPLE
example/kafka-1-broker.agoodexample.cluster-1.example.net@EXAMPLE
example/kafka-2-broker.agoodexample.cluster-1.example.net@EXAMPLE