Control groups (cgroups) are a Linux kernel feature available through the LinuxContainerExecutor program that you can configure to limit and monitor the CPU resources available to YARN container processes on a node.
Configure YARN to use cgroups through properties in the yarn-site.xml file,
and then restart each NodeManager and ResourceManager service for the
changes to take effect.
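For example, on a MapR cluster you might restart the services with maprcli (a sketch; substitute your own node names):
maprcli node services -name nodemanager -action restart -nodes <node-names>
maprcli node services -name resourcemanager -action restart -nodes <node-names>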
Cgroup mounting is controlled by the feature.mount-cgroup.enabled option in the
container-executor.cfg file. By default,
feature.mount-cgroup.enabled=0. To enable this feature, set
feature.mount-cgroup.enabled=1. A mounted cpu,cpuacct cgroup hierarchy, by default at
/sys/fs/cgroup/cpu,cpuacct, is required to make YARN cgroups work; you can use automount.
The following sections describe how to configure mount paths for cgroups when automount is
enabled and disabled, how to use an alternative method for configuring a mount path, and the
yarn-site.xml properties related to YARN cgroup configuration.
Before you enable
automount, set feature.mount-cgroup.enabled=1 in the
container-executor.cfg file.
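For example, the relevant line in container-executor.cfg looks like this (a minimal excerpt; other options in the file are not shown):
feature.mount-cgroup.enabled=1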
Then set yarn.nodemanager.linux-container-executor.cgroups.mount to
true, and define the mount path in the yarn-site.xml file,
as shown in the following example:
<property>
<name>yarn.nodemanager.linux-container-executor.cgroups.mount</name>
<value>true</value>
</property>
<property>
<name>yarn.nodemanager.linux-container-executor.cgroups.mount-path</name>
<value>/mycgroup</value>
</property>
Create the mount directory and set its ownership, as shown in the following example:
mkdir -p /mycgroup/cpu
chown -R mapr:mapr /mycgroup/cpu
Based on these settings, NodeManager mounts cgroups under /mycgroup/cpu
automatically. If you stop using cgroups, manually unmount
/mycgroup/cpu or set the
yarn.nodemanager.linux-container-executor.cgroups.mount property to
false before NodeManager restarts. However, if the NodeManager node
restarts, you do not have to perform the manual unmount; the
/mycgroup/cpu path is unmounted automatically during restart.
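For example, to perform the manual unmount (run as a user with sufficient privileges):
umount /mycgroup/cpu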
To disable automount, set
yarn.nodemanager.linux-container-executor.cgroups.mount to false in
yarn-site.xml. When automount is disabled, cgroups use the default mount path,
/sys/fs/cgroup/.
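For example:
<property>
<name>yarn.nodemanager.linux-container-executor.cgroups.mount</name>
<value>false</value>
</property>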
Each time a node running NodeManager restarts, you must manually create the
hadoop-yarn directory under /sys/fs/cgroup/cpu,cpuacct,
as shown in the following example:
mkdir -p /sys/fs/cgroup/cpu,cpuacct/hadoop-yarn
chown -R mapr:mapr /sys/fs/cgroup/cpu,cpuacct/hadoop-yarn
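As a quick sanity check, you can confirm that the directory exists with the expected ownership before starting NodeManager:
ls -ld /sys/fs/cgroup/cpu,cpuacct/hadoop-yarn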
For pre-MEP 7.0.x and RHEL 8 environments, you can unmount the default mount path and configure a new mount path.
First, set yarn.nodemanager.linux-container-executor.cgroups.mount to
false in yarn-site.xml. Then unmount the default path, remount the
cpu controller at the new path, and create the hadoop-yarn directory with the
correct ownership and permissions:
umount /sys/fs/cgroup/cpu,cpuacct
mkdir -p /opt/mapr/cgroup    # the mount point must exist before mounting
mount -t cgroup -o 'cpu' none '/opt/mapr/cgroup'
mkdir /opt/mapr/cgroup/hadoop-yarn
chown -R mapr:mapr /opt/mapr/cgroup/hadoop-yarn
chmod 750 /opt/mapr/cgroup/hadoop-yarn
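To confirm the remount, you can check the mount table:
mount -t cgroup | grep /opt/mapr/cgroup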
The following table describes the yarn-site.xml properties related to YARN cgroup configuration:
| Property | Configuration Description |
|---|---|
| yarn.nodemanager.container-executor.class | Set this property to org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor. |
| yarn.nodemanager.linux-container-executor.resources-handler.class | Set this property to org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler. |
| yarn.nodemanager.linux-container-executor.cgroups.hierarchy | Set this property to the location of the cgroups hierarchy. If yarn.nodemanager.linux-container-executor.cgroups.mount is false, the cgroups hierarchy must already exist at this location. By default, this is set to /hadoop-yarn. |
| yarn.nodemanager.linux-container-executor.cgroups.mount | If the cgroups hierarchy is already configured, verify that this value is set to the default, false. Otherwise, set this value to true. |
| yarn.nodemanager.linux-container-executor.cgroups.mount-path | Set this property to the path where the LinuxContainerExecutor should attempt to mount cgroups if the hierarchy is not found. The container-executor binary tries to mount to <mountPath>/cpu, which must exist before the NodeManager service is started on this node. |
| yarn.nodemanager.linux-container-executor.group | Verify that this setting matches the yarn.nodemanager.linux-container-executor.group setting in container-executor.cfg (/opt/mapr/hadoop/hadoop-2.x.x/etc/hadoop/container-executor.cfg). |
| yarn.nodemanager.resource.percentage-physical-cpu-limit | Limits the CPU resources available to container processes. Set this property to the maximum percentage of the node's CPU that all YARN containers, combined, can use. For example, if you set this value to 75, YARN containers on this node cannot use more than 75% of the CPU available on the node. |
| yarn.nodemanager.linux-container-executor.cgroups.strict-resource-usage | Specifies whether the CPU allocation has a hard or soft limit. If containers should not be able to use more CPU than what was originally allocated, set this value to true. Set this value to false to allow containers to use spare CPU cycles when available. |
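Putting these properties together, a complete cgroups section of yarn-site.xml might look like the following sketch. It reuses the automount settings from the earlier example; the group (mapr), CPU limit (75), and strict-resource-usage values are illustrative and should be adjusted for your environment.
<property>
<name>yarn.nodemanager.container-executor.class</name>
<value>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</value>
</property>
<property>
<name>yarn.nodemanager.linux-container-executor.resources-handler.class</name>
<value>org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler</value>
</property>
<property>
<name>yarn.nodemanager.linux-container-executor.cgroups.hierarchy</name>
<value>/hadoop-yarn</value>
</property>
<property>
<name>yarn.nodemanager.linux-container-executor.cgroups.mount</name>
<value>true</value>
</property>
<property>
<name>yarn.nodemanager.linux-container-executor.cgroups.mount-path</name>
<value>/mycgroup</value>
</property>
<property>
<name>yarn.nodemanager.linux-container-executor.group</name>
<value>mapr</value>
</property>
<property>
<name>yarn.nodemanager.resource.percentage-physical-cpu-limit</name>
<value>75</value>
</property>
<property>
<name>yarn.nodemanager.linux-container-executor.cgroups.strict-resource-usage</name>
<value>true</value>
</property>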