The default DC/OS Apache HDFS installation provides reasonable defaults for trying out the service, but may not be sufficient for production use. You may require a different configuration depending on the context of the deployment.
Installing with Custom Configuration
The following are some examples of how to customize the installation of your Apache HDFS instance.
In each case, you would create a new Apache HDFS instance using the custom configuration as follows:
```shell
dcos package install hdfs --options=sample-hdfs.json
```
We recommend that you store your custom configuration in source control.
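For instance, a minimal `sample-hdfs.json` could contain nothing more than a custom service name. The file name and the `hdfs-sample` value below are illustrative; any of the options described in the following sections can be combined in the same file:

```json
{
  "service": {
    "name": "hdfs-sample"
  }
}
```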
Installing multiple instances
By default, the Apache HDFS service is installed with a service name of `hdfs`. You may specify a different name using a custom service configuration as follows:
```json
{
  "service": {
    "name": "hdfs-other"
  }
}
```
When the above JSON configuration is passed to the `package install hdfs` command via the `--options` argument, the new service will use the name specified in that JSON configuration:
```shell
dcos package install hdfs --options=hdfs-other.json
```
Multiple instances of Apache HDFS may be installed into your DC/OS cluster by customizing the name of each instance. For example, you might have one instance of Apache HDFS named `hdfs-staging` and another named `hdfs-prod`, each with its own custom configuration.
After specifying a custom name for your instance, it can be reached using `dcos hdfs` CLI commands or directly over HTTP as described below.
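As a sketch, installing the two instances mentioned above could look like the following; the options file names are illustrative, and each file would set a distinct `service.name`:

```shell
# Each instance gets its own options file with a unique service.name.
dcos package install hdfs --options=hdfs-staging.json
dcos package install hdfs --options=hdfs-prod.json
```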
Installing into folders
In DC/OS 1.10 and later, services may be installed into folders by specifying a slash-delimited service name. For example:
```json
{
  "service": {
    "name": "/foldered/path/to/hdfs"
  }
}
```
The above example will install the service under a path of `foldered` => `path` => `to` => `hdfs`. It can then be reached using `dcos hdfs` CLI commands or directly over HTTP as described below.
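For example, assuming the above JSON is saved to a file named `foldered-hdfs.json` (an illustrative name), the installation command would be:

```shell
dcos package install hdfs --options=foldered-hdfs.json
```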
Addressing named instances
After you’ve installed the service under a custom name or under a folder, it may be accessed from all `dcos hdfs` CLI commands using the `--name` argument. The `--name` value defaults to the name of the package, `hdfs`.
For example, if you had an instance named `hdfs-dev`, the following command would invoke a `pod list` command against it:
```shell
dcos hdfs --name=hdfs-dev pod list
```
The same query made directly over HTTP would be as follows:

```shell
curl -H "Authorization:token=$auth_token" <dcos_url>/service/hdfs-dev/v1/pod
```
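The `$auth_token` and `<dcos_url>` values must be supplied by you. As a sketch, assuming you are logged in with the DC/OS CLI, both values can be read from the CLI's configuration:

```shell
# Pull the auth token and cluster URL from the local DC/OS CLI configuration.
auth_token=$(dcos config show core.dcos_acs_token)
dcos_url=$(dcos config show core.dcos_url)
curl -H "Authorization:token=$auth_token" "$dcos_url/service/hdfs-dev/v1/pod"
```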
Likewise, if you had an instance in a folder like `/foldered/path/to/hdfs`, the following command would invoke a `pod list` command against it:
```shell
dcos hdfs --name=/foldered/path/to/hdfs pod list
```
Similarly, it could be queried directly over HTTP as follows:

```shell
curl -H "Authorization:token=$auth_token" <dcos_url>/service/foldered/path/to/hdfs/v1/pod
```
You may add a `-v` (verbose) argument to any `dcos hdfs` command to see the underlying HTTP queries that are being made. This can be a useful tool to see where the CLI is getting its information. In practice, `dcos hdfs` commands are a thin wrapper around an HTTP interface provided by the DC/OS Apache HDFS Service itself.
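For example, using the illustrative `hdfs-dev` instance from above (flag placement here follows the `--name` examples; your CLI version may also accept `-v` elsewhere on the command line):

```shell
dcos hdfs --name=hdfs-dev -v pod list
```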
Integration with DC/OS access controls
In Enterprise DC/OS, DC/OS access controls can be used to restrict access to your service. To give a non-superuser complete access to a service, grant them the following list of permissions:
```
dcos:adminrouter:service:marathon full
dcos:service:marathon:marathon:<service-name> full
dcos:adminrouter:service:<service-name> full
dcos:adminrouter:ops:mesos full
dcos:adminrouter:ops:slave full
```
Here, `<service-name>` is your full service name, including the folder if the service is installed in one.
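As a sketch, these grants could be applied with the Enterprise `dcos security` CLI. The user `alice` and the service name `hdfs-prod` below are illustrative:

```shell
# Grant a non-superuser full access to an instance named hdfs-prod.
dcos security org users grant alice dcos:adminrouter:service:marathon full
dcos security org users grant alice dcos:service:marathon:marathon:hdfs-prod full
dcos security org users grant alice dcos:adminrouter:service:hdfs-prod full
dcos security org users grant alice dcos:adminrouter:ops:mesos full
dcos security org users grant alice dcos:adminrouter:ops:slave full
```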
Service Settings
Placement Constraints
Placement constraints allow you to customize where a service is deployed in the DC/OS cluster. Placement constraints use the Marathon operators syntax. For example, `[["hostname", "UNIQUE"]]` ensures that at most one pod instance is deployed per agent.
A common task is to specify a list of whitelisted systems to deploy to. To achieve this, use the following syntax for the placement constraint:
[["hostname", "LIKE", "10.0.0.159|10.0.1.202|10.0.3.3"]]
Updating Placement Constraints
Clusters change, and so will your placement constraints. However, already running service pods will not be affected by changes in placement constraints. This is because altering a placement constraint might invalidate the current placement of a running pod, and the pod will not be relocated automatically as doing so is a destructive action. We recommend using the following procedure to update the placement constraints of a pod:
- Update the placement constraint definition in the service.
- For each affected pod, one at a time, perform a `pod replace`, as shown in the sketch after this list. This will (destructively) move the pod to be in accordance with the new placement constraints.
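A sketch of this procedure, assuming a default-named `hdfs` instance and an affected pod called `data-0` (actual pod names can be obtained from the `pod list` output):

```shell
# List the pods, then replace each affected pod one at a time.
dcos hdfs --name=hdfs pod list
dcos hdfs --name=hdfs pod replace data-0
```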
Zones
Requires: DC/OS 1.11 Enterprise or later.
Placement constraints can be applied to DC/OS zones by referring to the `@zone` key. For example, one could spread pods across a minimum of three different zones by including this constraint:

```
[["@zone", "GROUP_BY", "3"]]
```
For the `@zone` constraint to be applied correctly, DC/OS must have Fault Domain Awareness enabled and configured.
Virtual networks
DC/OS Apache HDFS supports deployment on virtual networks on DC/OS (including the `dcos` overlay network), allowing each container (task) to have its own IP address and not use port resources on the agent machines. This can be specified by passing the following configuration during installation:
```json
{
  "service": {
    "virtual_network_enabled": true
  }
}
```
User
By default, all pods’ containers will be started as system user “nobody”. If your system is configured to use another system user (for instance, you may have externally mounted persistent volumes with root’s permissions), you can define that user by setting a custom value for the service’s `user` property, for example:
```json
{
  "service": {
    "properties": {
      "user": "root"
    }
  }
}
```
Regions
The service parameter `region` can be used to deploy the service in an alternate region. By default, the service is deployed in the “local” region, which is the region the DC/OS masters are running in. To install a service in a specific region, include the following in its options:
```json
{
  "service": {
    "region": "<region>"
  }
}
```
Node Configuration
The node configuration objects correspond to the configuration for nodes in the HDFS cluster. Node configuration must be specified during installation and may be modified during configuration updates. All of the properties except `disk` and `disk_type` may be modified during the configuration update process.
A Note on Memory Configuration
As part of the configuration for each node type, the amount of memory in MB allocated to the node can be specified. This value must be larger than the specified maximum heap size for the given node type. Make sure to allocate enough space for additional memory used by the JVM and other overhead. A good rule of thumb is to allocate twice as much memory as the size of the heap (set using either `hdfs.hadoop_heapsize` or `<node type>.hadoop_<node type>node_opts`).
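For example, applying this rule of thumb with a 2048 MB heap, you would allocate roughly 4096 MB of memory to the node. The sketch below assumes a node section named `name_node` exposing a `mem` option in MB; verify the exact keys against your package version's configuration schema:

```json
{
  "hdfs": {
    "hadoop_heapsize": 2048
  },
  "name_node": {
    "mem": 4096
  }
}
```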
A Note on Disk Types
As already noted, the disk size and type specifications cannot be modified after initial installation. The following disk volume types are available:

- `ROOT`: Data is stored on the same volume as the agent work directory, and the node tasks use the configured amount of disk space.
- `MOUNT`: Data will be stored on a dedicated, operator-formatted volume attached to the agent. Dedicated MOUNT volumes have performance advantages, and a disk error on these MOUNT volumes will be correctly reported to HDFS.
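A sketch of selecting a dedicated volume at install time, assuming a node section named `data_node` with `disk` (in MB) and `disk_type` options as described under Node Configuration above; verify the exact keys against your package version's schema:

```json
{
  "data_node": {
    "disk": 10240,
    "disk_type": "MOUNT"
  }
}
```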
HDFS File System Configuration
The HDFS file system network configuration, permissions, and compression are configured via the `hdfs` JSON object. Once these properties are set at installation time, they cannot be reconfigured.
Operating System Configuration
In order for HDFS to function correctly, you must perform several important configuration modifications to the OS hosting the deployment. HDFS requires OS-level configuration settings typical of a production storage server.
File | Setting | Value | Reason |
---|---|---|---|
/etc/sysctl.conf | vm.swappiness | 0 | If the OS swaps out the HDFS processes, they can fail to respond to RPC requests, resulting in the process being marked `down` by the cluster. This can be particularly troublesome for name nodes and journal nodes. |
/etc/security/limits.conf | nofile | unlimited | If this value is too low, a job that operates on the HDFS cluster may fail due to too many open file handles. |
/etc/security/limits.conf, /etc/security/limits.d/90-nproc.conf | nproc | 32768 | An HDFS node spawns many threads, which count toward the kernel's nproc limit. If nproc is not set appropriately, the node will be killed. |
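A sketch of applying these settings on each agent OS (as root); the mechanisms are standard Linux sysctl and PAM limits, though exact file locations may vary by distribution:

```shell
# Disable swapping of HDFS processes (per the table above).
echo "vm.swappiness=0" >> /etc/sysctl.conf
sysctl -p

# Raise the open-file and process limits for all users.
cat >> /etc/security/limits.conf <<'EOF'
* - nofile unlimited
* - nproc 32768
EOF
```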