Monitors the activity of the Container Location Database (CLDB). This utility prints information about the CLDB service that is running on the node from which you run the utility.
Monitoring CLDB activity can be useful when troubleshooting cluster issues.
The cldbguts utility prints information about active container reports,
full container reports, registration requests, HPE Ezmeral Data Fabric filesystem heartbeats, NFS
server heartbeats, and containers. You can run cldbguts from any
CLDB node; however, running this command from the CLDB master node provides the most
relevant information.
When you run cldbguts, it continues to print output until you
kill the process. To prevent cldbguts from printing indefinitely,
specify the -n parameter, which denotes the number of times
cldbguts should print the output. The syntax is:

/opt/mapr/bin/cldbguts [acr | rpc | heartbeat | containers | alarms | table | all] [-n iterations-count]
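For example, to get a bounded run instead of continuous output, you can combine one of the keywords above with -n. These invocations are only illustrative; the keywords come from the syntax above, and the iteration counts are arbitrary:

/opt/mapr/bin/cldbguts all -n 2
/opt/mapr/bin/cldbguts heartbeat -n 5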
When you run cldbguts without any parameters, only the fcr,
clrpc, regn, mfs hb, nfs hb, assigns, roles, progress, and con-chain fields are displayed.

acr: Represents active container reports (ACR).
This column includes the following information:
nr: Number of ACRs completed in the previous second. The first entry displays the total number of ACRs completed since the start of the CLDB service on the node.
pt: Processing time (in milliseconds) for the ACRs completed in the previous second. The first entry displays the total time (in milliseconds) spent processing the ACRs since the start of the CLDB service on the node.
to: Number of ACRs that took longer than expected in the previous second. The first entry displays the total number of ACRs that took longer than expected since the start of the CLDB service on the node.
d: Number of duplicate ACRs received in the previous second. The first entry displays the total number of duplicate ACRs since the start of the CLDB service on the node.
dp: Number of duplicate ACRs that required additional work in the previous second. The first entry displays the total number of duplicate ACRs that required additional work since the start of the CLDB service on the node.

fcr: Represents full container reports (FCR).
This column includes the following information:
nr: Number of FCRs completed in the previous second.
The first entry displays the total number of FCRs completed since the
start of the CLDB service on the node.
pt: Processing time (in milliseconds) for the FCRs
completed in the previous second. The first entry displays the total
time (in milliseconds) spent processing the FCRs since the start of
the CLDB service on the node.
to: Number of FCRs that took longer than expected in
the previous second. The first entry displays the total number of FCRs
that took longer than expected since the start of the CLDB service on
the node.
regn: Represents registration requests.
This column includes the following information:
nr: Number of registration requests completed in the
previous second. The first entry displays the total number of
registration requests completed since the start of the CLDB service on
the node.
pt: Processing time (in milliseconds) for the
registration requests completed in the previous second. The first
entry displays the total time (in milliseconds) spent processing the
registration requests since the start of the CLDB service on the
node.
to: Number of registration requests that took longer
than expected in the previous second. The first entry displays the
total number of registration requests that took longer than expected
since the start of the CLDB service on the node.
d: Number of duplicate registration requests received in the previous second. The first entry displays the total number of duplicate registration requests since the start of the CLDB service on the node.
dp: Number of duplicate registration requests that required additional work in the previous second. The first entry displays the total number of duplicate registration requests that required additional work since the start of the CLDB service on the node.

mfs hb: Information about HPE Ezmeral Data Fabric filesystem heartbeats.
This column includes the following information:
nr: Number of HPE Ezmeral Data Fabric filesystem heartbeats completed in the previous second. The first entry displays the total number of HPE Ezmeral Data Fabric filesystem heartbeats completed since the start of the CLDB service on the node.
pt: Processing time (in microseconds) for the HPE Ezmeral Data Fabric filesystem heartbeats completed in the previous second. The first entry displays the total time (in microseconds) spent processing HPE Ezmeral Data Fabric filesystem heartbeats since the start of the CLDB service on the node.
to: Number of HPE Ezmeral Data Fabric filesystem heartbeats that took longer than expected in the previous second. The first entry displays the total number of HPE Ezmeral Data Fabric filesystem heartbeats that took longer than expected since the start of the CLDB service on the node.
bmc: Number of times the Become Master Command (bmc) has been sent to this MFS.
otc: Number of times other commands (apart from bmc) have been sent to this MFS.

nfs hb: Information about NFS server heartbeats.
This column includes the following information:
nr: Number of NFS server heartbeats completed in the previous second. The first entry displays the total number of NFS server heartbeats completed since the start of the CLDB service on the node.
pt: Processing time (in microseconds) for the NFS server heartbeats completed in the previous second. The first entry displays the total time (in microseconds) spent processing NFS server heartbeats since the start of the CLDB service on the node.
assigns: This column includes the following information:
nr: Number of container assign requests in the previous
second. The first entry displays the total number of container assign
requests since the start of the CLDB service on the node.
nc: Number of containers created as part of the above
container assign requests in the previous second. The first entry
displays the total number of containers created since the start of the
CLDB service on the node.
nrt: Number of container assign requests for tablets in
the previous second. The first entry displays the total number of
container assign requests for tablets since the start of the CLDB service
on the node.
nct: Number of containers created as part of the above
container assign requests for tablets in the previous second. The first
entry displays the total number of containers created for tablets since the
start of the CLDB service on the node.
pt: Time taken by container assignment RPC in milliseconds
tpt: Time taken by container assignment tablet RPC in milliseconds
cas: Number of storage pools scanned for container assignment requests

roles: Represents the roles of the various replica containers.
This column includes the following information:
bm: Number of replica containers that are in the process
of becoming master
ms: Number of replica containers that the CLDB thinks
have valid masters
wr: Number of replica containers that are waiting for
CLDB to assign a role to them
rs: Number of replica containers that are re-syncing
vr: Number of non-master replica containers that have
finished resynchronization
uu: Number of replica containers that are unused. For
example, the number of replica containers that are on nodes or storage
pools which have been offline or unavailable for more than an hour.
progress: This column includes the following information:
m%: Percentage of containers that have valid masters
uc: Number of unique containers
v%: Percentage of replica containers that are valid
(that is, have completed resynchronization)
tr: Total number of replica containers
con-chain: This column includes the following information:
ms: Number of unique containers that have a master
1r: Number of unique containers that have 2 valid copies
of the data
2r: Number of unique containers that have 3 valid copies
of the data
location: This column includes the following information:
lu: Number of container location lookups
up: Number of container location updates
dl: Number of container location deletes
sc: Number of container location scans

size: This column includes the following information:
lu: Number of container size lookups
up: Number of container size updates
dl: Number of container size deletes
sc: Number of container size scans

sptable: This column includes the following information:
lu: Number of lookups on the SP-Container-Vol table
up: Number of updates on the SP-Container-Vol table
dl: Number of deletes on the SP-Container-Vol table
sc: Number of scans on the SP-Container-Vol table

nodes: This column includes the following information:
nn: Number of nodes in the cluster
of: Number of offline nodes
nsp: Number of storage pools
of: Number of offline storage pools

For example, the following command prints all the columns three times and then exits:
/opt/mapr/bin/cldbguts all -n 3
2019-09-15 22:08:39,981
mfs hb nfs hb assigns roles progress con-chain location size sptable fcr clrpc regn nodes
nr pt to bmc otc nr pt nr nc nrt nct pt tpt cas bm ms wr rs vr uu m% uc v% tr ms 1r 2r lu up dl sc lu up dl sc lu up dl sc nr pt to nr pt to nr pt to nn of sp of
428807 112504140 0 57 0 0 0 120 5 0 0 294 0 2 0 61 0 0 0 0 98.39% 62 100.00% 61 61 0 0 0 113 0 32 0 416 0 16 0 1 0 0 476 5073 0 5650 3971 0 3 178 0 1 0 1 0
1 245 0 57 0 0 0 0 0 0 0 0 0 0 0 61 0 0 0 0 98.39% 62 100.00% 61 61 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 1 0
1 288 0 57 0 0 0 0 0 0 0 0 0 0 0 61 0 0 0 0 98.39% 62 100.00% 61 61 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 1 0
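In the sample output, the first data row shows the cumulative totals since the CLDB service started (for example, 428807 filesystem heartbeats processed under the mfs hb nr column), and each later row reports the activity for the previous second. If you want to keep a bounded capture of these counters for later troubleshooting, a minimal sketch is shown below; the output path, timestamp format, and iteration count are arbitrary choices, not options of the utility itself.

# Capture 60 iterations of all cldbguts counters to a timestamped file for offline review.
/opt/mapr/bin/cldbguts all -n 60 > /tmp/cldbguts-$(date +%Y%m%d-%H%M%S).out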