Question
· Jun 21, 2022

MSM/MUMPS freeze

Hello, today we had another system freeze, at around 09:55. Logs follow for analysis, starting with the MSM/MUMPS connection samples; there were about 1,050 active connections at the time of the freeze. Does anyone see anything unusual?


#Active connections
^CONNECTIONS("66281,35364")="1110|21/06/2022|09:49:24"
^CONNECTIONS("66281,35374")="1103|21/06/2022|09:49:34"
^CONNECTIONS("66281,35384")="1098|21/06/2022|09:49:44"
^CONNECTIONS("66281,35394")="1100|21/06/2022|09:49:54"
^CONNECTIONS("66281,35404")="1094|21/06/2022|09:50:04"
^CONNECTIONS("66281,35414")="1091|21/06/2022|09:50:14"
^CONNECTIONS("66281,35424")="1087|21/06/2022|09:50:24"
^CONNECTIONS("66281,35434")="1085|21/06/2022|09:50:34"
^CONNECTIONS("66281,35444")="1081|21/06/2022|09:50:44"
^CONNECTIONS("66281,35454")="1078|21/06/2022|09:50:54"


SISMED1:/var/tmp> vmstat -Iwt 2
kthr memory page faults cpu time
----------- --------------------- ------------------------------------ ------------------ ----------------------- --------
r b p avm fre fi fo pi po fr sr in sy cs us sy id wa pc ec hr mi se
15 0 0 1090196 71560 146 39 0 0 0 0 9014 394550 66962 19 27 54 0 1.01 50.7 09:55:07
12 0 0 1090486 70581 322 83 0 0 0 0 11423 517553 83951 23 38 39 0 1.30 64.9 09:55:09
2 1 0 1090486 70391 89 34 0 0 0 0 10097 462802 77751 21 33 46 0 1.16 58.0 09:55:11
4 0 0 1090545 70070 116 36 0 0 0 0 10465 481553 74345 21 33 45 0 1.17 58.7 09:55:13
4 0 0 1090967 69456 89 2 0 0 0 0 10390 470679 76499 21 33 46 0 1.17 58.3 09:55:15
1 0 0 1090528 69863 18 715 0 0 0 0 2481 78521 13065 47 8 44 1 1.13 56.5 09:55:17
4 0 0 1090528 69859 0 0 0 0 0 0 99 15008 460 53 3 44 0 1.11 55.7 09:55:19
2 0 0 1090528 69854 0 0 0 0 0 0 103 20976 564 56 4 40 0 1.20 59.9 09:55:21
2 0 0 1090528 69850 0 0 0 0 0 0 73 4585 415 52 1 46 0 1.08 53.8 09:55:23
2 0 0 1090542 69833 0 0 0 0 0 0 52 2112 369 50 1 49 0 1.03 51.4 09:55:25
2 0 0 1090542 69830 0 1 0 0 0 0 36 1712 331 50 1 49 0 1.02 51.1 09:55:27
2 0 0 1090556 69813 0 0 0 0 0 0 33 3050 305 50 1 49 0 1.03 51.3 09:55:29
2 0 0 1090945 69423 0 0 0 0 0 0 41 19132 349 52 3 45 0 1.10 55.0 09:55:31
2 0 0 1090945 69421 0 0 0 0 0 0 60 2215 446 50 1 49 0 1.03 51.3 09:55:33
2 0 0 1090945 69418 0 0 0 0 0 0 31 2630 315 50 1 49 0 1.02 51.0 09:55:35
2 0 0 1090937 69424 0 0 0 0 0 0 43 7827 317 51 1 48 0 1.03 51.7 09:55:37

SISMED1:/var/tmp> iostat -RDTl 3 1000000000000000000000000000000000000000000000000000000000 > iostat.txt
Disks: xfers read write queue time
--------------- -------------------------------- ------------------------------------ ------------------------------------ -------------------------------------- ---------
%tm bps tps bread bwrtn rps avg min max time fail wps avg min max time fail avg min max avg avg serv
act serv serv serv outs serv serv serv outs time time time wqsz sqsz qfull
hdisk_BKP_9 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:09:55
hdisk_PRD_1 5.0 645.8K 148.7 600.7K 45.1K 144.7 0.3 0.1 2.1 0 0 4.0 0.6 0.4 0.9 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:09:55
hdisk_BKP_10 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:09:55
hdisk_BKP_4 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:09:55
hdisk_BKP_2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:09:55
hdisk_caa 0.0 1.7K 1.0 1.0K 682.7 0.3 0.2 0.2 0.2 0 0 0.7 0.2 0.2 0.2 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:09:55
hdisk_BKP_6 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:09:55
hdisk_PRD_2 7.0 532.5K 130.0 532.5K 0.0 130.0 0.4 0.1 4.9 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:09:55
hdisk_home 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:09:55
hdisk_BKP_1 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:09:55
hdisk13 0.0 25.9K 6.3 0.0 25.9K 0.0 0.0 0.0 0.0 0 0 6.3 0.2 0.2 0.3 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:09:55
hdisk_PRD_3 5.0 537.9K 126.3 516.1K 21.8K 125.7 0.3 0.1 2.1 0 0 0.7 1.1 0.5 1.7 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:09:55
hdisk_BKP_8 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:09:55
hdisk_BKP_3 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:09:55
hdisk_BKP_5 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:09:55
hdisk_BKP_7 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:09:55
hdisk_PRD_4 5.3 663.6K 136.7 553.0K 110.6K 135.0 0.4 0.1 6.9 0 0 1.7 0.8 0.5 1.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:09:55
hdisk_BKP_9 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:02
hdisk_PRD_1 0.3 91.5K 20.0 36.9K 54.6K 9.0 0.4 0.2 0.5 0 0 11.0 0.8 0.4 1.3 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:02
hdisk_BKP_10 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:02
hdisk_BKP_4 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:02
hdisk_BKP_2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:02
hdisk_caa 0.0 2.7K 1.3 2.0K 682.7 0.7 0.2 0.2 0.2 0 0 0.7 0.3 0.3 0.3 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:02
hdisk_BKP_6 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:02
hdisk_PRD_2 0.7 107.9K 23.7 34.1K 73.7K 8.3 0.4 0.2 1.2 0 0 15.3 0.9 0.5 1.3 0 0 0.0 0.0 0.1 0.0 0.0 0.0 09:55:02
hdisk_home 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:02
hdisk_BKP_1 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:02
hdisk13 0.0 15.0K 3.7 0.0 15.0K 0.0 0.0 0.0 0.0 0 0 3.7 0.2 0.2 0.3 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:02
hdisk_PRD_3 0.7 136.5K 31.3 41.0K 95.6K 9.7 0.4 0.3 0.5 0 0 21.7 1.0 0.4 1.6 0 0 0.2 0.0 1.1 0.0 0.0 4.3 09:55:02
hdisk_BKP_8 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:02
hdisk_BKP_3 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:02
hdisk_BKP_5 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:02
hdisk_BKP_7 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:02
hdisk_PRD_4 0.3 109.2K 20.3 34.1K 75.1K 8.3 0.4 0.2 0.5 0 0 12.0 0.9 0.5 1.4 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:02
hdisk_BKP_9 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:05
hdisk_PRD_1 0.7 81.9K 18.0 41.0K 41.0K 10.0 0.4 0.2 0.4 0 0 8.0 0.9 0.4 1.4 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:05
hdisk_BKP_10 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:05
hdisk_BKP_4 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:05
hdisk_BKP_2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:05
hdisk_caa 0.0 1.7K 1.0 1.0K 682.7 0.3 0.1 0.1 0.1 0 0 0.7 0.2 0.2 0.3 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:05
hdisk_BKP_6 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:05
hdisk_PRD_2 0.3 73.7K 17.7 28.7K 45.1K 7.0 0.4 0.3 0.5 0 0 10.7 1.1 0.5 1.4 0 0 0.0 0.0 0.1 0.0 0.0 0.0 09:55:05
hdisk_home 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:05
hdisk_BKP_1 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:05
hdisk13 0.0 19.1K 4.7 0.0 19.1K 0.0 0.0 0.0 0.0 0 0 4.7 0.5 0.2 2.7 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:05
hdisk_PRD_3 0.7 116.1K 18.0 49.2K 66.9K 12.0 0.4 0.2 0.4 0 0 6.0 1.0 0.6 1.4 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:05
hdisk_BKP_8 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:05
hdisk_BKP_3 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:05
hdisk_BKP_5 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:05
hdisk_BKP_7 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:05
hdisk_PRD_4 0.3 83.3K 17.7 34.1K 49.2K 8.3 0.7 0.3 6.0 0 0 9.3 1.0 0.5 1.3 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:05
hdisk_BKP_9 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:08
hdisk_PRD_1 1.3 314.0K 75.7 281.3K 32.8K 68.0 0.3 0.1 1.4 0 0 7.7 0.8 0.4 1.7 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:08
hdisk_BKP_10 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:08
hdisk_BKP_4 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:08
hdisk_BKP_2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:08
hdisk_caa 0.0 2.7K 1.3 2.0K 682.7 0.7 2.5 0.2 4.8 0 0 0.7 0.2 0.2 0.2 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:08
hdisk_BKP_6 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:08
hdisk_PRD_2 3.0 292.2K 71.3 263.5K 28.7K 64.3 0.4 0.2 5.8 0 0 7.0 1.0 0.6 1.3 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:08
hdisk_home 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:08
hdisk_BKP_1 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:08
hdisk13 0.0 12.3K 3.0 0.0 12.3K 0.0 0.0 0.0 0.0 0 0 3.0 0.2 0.2 0.3 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:08
hdisk_PRD_3 3.3 327.7K 73.0 269.0K 58.7K 65.7 0.4 0.2 5.6 0 0 7.3 0.9 0.5 1.6 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:08
hdisk_BKP_8 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:08
hdisk_BKP_3 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:08
hdisk_BKP_5 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:08
hdisk_BKP_7 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:08
hdisk_PRD_4 3.3 344.1K 77.0 286.7K 57.3K 70.0 0.4 0.1 3.9 0 0 7.0 0.9 0.6 1.3 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:08
hdisk_BKP_9 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:11
hdisk_PRD_1 1.3 170.7K 39.7 127.0K 43.7K 30.7 0.3 0.2 0.5 0 0 9.0 2.7 0.4 6.3 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:11
hdisk_BKP_10 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:11
hdisk_BKP_4 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:11
hdisk_BKP_2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:11
hdisk_caa 0.0 1.4K 0.7 1.0K 341.3 0.3 0.2 0.2 0.2 0 0 0.3 0.3 0.3 0.3 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:11
hdisk_BKP_6 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:11
hdisk_PRD_2 1.0 165.2K 40.3 124.2K 41.0K 30.3 0.4 0.1 0.6 0 0 10.0 3.4 0.5 6.3 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:11
hdisk_home 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:11
hdisk_BKP_1 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:11
hdisk13 1.3 75.1K 18.3 0.0 75.1K 0.0 0.0 0.0 0.0 0 0 18.3 0.2 0.2 0.3 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:11
hdisk_PRD_3 1.0 128.3K 31.3 75.1K 53.2K 18.3 0.4 0.3 0.4 0 0 13.0 1.9 0.5 6.3 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:11
hdisk_BKP_8 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:11
hdisk_BKP_3 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:11
hdisk_BKP_5 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:11
hdisk_BKP_7 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:11
hdisk_PRD_4 1.7 237.6K 54.0 109.2K 128.3K 26.3 0.4 0.2 0.6 0 0 27.7 2.5 0.4 6.3 0 0 0.1 0.0 4.7 0.0 0.0 1.0 09:55:11
hdisk_BKP_9 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:14
hdisk_PRD_1 1.3 107.9K 26.0 99.7K 8.2K 24.0 0.3 0.1 0.5 0 0 2.0 0.5 0.4 0.6 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:14
hdisk_BKP_10 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:14
hdisk_BKP_4 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:14
hdisk_BKP_2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:14
hdisk_caa 0.0 3.1K 1.7 2.0K 1.0K 0.7 0.2 0.2 0.2 0 0 1.0 0.2 0.2 0.2 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:14
hdisk_BKP_6 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:14
hdisk_PRD_2 0.7 120.1K 29.3 120.1K 0.0 29.3 0.4 0.3 5.7 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:14
hdisk_home 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:14
hdisk_BKP_1 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:14
hdisk13 0.7 17.7K 4.3 0.0 17.7K 0.0 0.0 0.0 0.0 0 0 4.3 0.9 0.2 8.6 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:14
hdisk_PRD_3 2.7 154.3K 26.3 101.0K 53.2K 24.3 0.5 0.2 9.0 0 0 2.0 0.6 0.4 0.8 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:14
hdisk_BKP_8 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:14
hdisk_BKP_3 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:14
hdisk_BKP_5 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:14
hdisk_BKP_7 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:14
hdisk_PRD_4 1.0 139.3K 28.0 92.8K 46.4K 22.0 0.4 0.2 2.2 0 0 6.0 0.6 0.5 0.8 0 0 0.0 0.0 0.1 0.0 0.0 0.0 09:55:14
hdisk_BKP_9 0.3 1.4K 0.3 0.0 1.4K 0.0 0.0 0.0 0.0 0 0 0.3 0.2 0.2 0.2 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:17
hdisk_PRD_1 5.0 573.4K 131.3 47.8K 525.7K 11.7 0.3 0.2 0.4 0 0 119.7 0.8 0.4 11.4 0 0 0.0 0.0 0.8 0.0 0.0 3.0 09:55:17
hdisk_BKP_10 0.3 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:17
hdisk_BKP_4 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:17
hdisk_BKP_2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:17
hdisk_caa 0.0 1.4K 0.7 1.0K 341.3 0.3 0.1 0.1 0.1 0 0 0.3 0.2 0.2 0.2 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:17
hdisk_BKP_6 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:17
hdisk_PRD_2 2.3 379.6K 88.3 41.0K 338.6K 10.0 0.3 0.3 0.4 0 0 78.3 0.9 0.4 11.3 0 0 0.0 0.0 0.1 0.0 0.0 0.0 09:55:17
hdisk_home 0.3 9.6K 2.3 0.0 9.6K 0.0 0.0 0.0 0.0 0 0 2.3 0.9 0.4 1.3 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:17
hdisk_BKP_1 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:17
hdisk13 0.3 524.3K 63.3 0.0 524.3K 0.0 0.0 0.0 0.0 0 0 63.3 0.4 0.2 1.4 0 0 0.1 0.0 0.9 0.0 0.0 16.3 09:55:17
hdisk_PRD_3 3.3 494.3K 107.7 36.9K 457.4K 9.0 0.4 0.3 0.4 0 0 98.7 0.9 0.4 10.8 0 0 0.0 0.0 0.8 0.0 0.0 2.7 09:55:17
hdisk_BKP_8 0.3 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:17
hdisk_BKP_3 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:17
hdisk_BKP_5 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:17
hdisk_BKP_7 0.3 1.4K 0.3 0.0 1.4K 0.0 0.0 0.0 0.0 0 0 0.3 0.6 0.6 0.6 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:17
hdisk_PRD_4 2.0 484.7K 105.7 39.6K 445.1K 9.7 0.4 0.3 1.0 0 0 96.0 1.0 0.4 11.3 0 0 0.2 0.0 2.1 0.0 0.0 17.0 09:55:17
hdisk_BKP_9 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:20
hdisk_PRD_1 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:20
hdisk_BKP_10 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:20
hdisk_BKP_4 0.3 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:20
hdisk_BKP_2 0.3 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:20
hdisk_caa 0.0 3.1K 1.7 2.0K 1.0K 0.7 0.2 0.2 0.2 0 0 1.0 0.3 0.3 0.3 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:20
hdisk_BKP_6 0.3 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:20
hdisk_PRD_2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:20
hdisk_home 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:20
hdisk_BKP_1 0.3 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:20
hdisk13 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:20
hdisk_PRD_3 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:20
hdisk_BKP_8 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:20
hdisk_BKP_3 0.3 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:20
hdisk_BKP_5 0.3 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:20
hdisk_BKP_7 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:20
hdisk_PRD_4 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:20
hdisk_BKP_9 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:23
hdisk_PRD_1 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:23
hdisk_BKP_10 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:23
hdisk_BKP_4 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:23
hdisk_BKP_2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:23
hdisk_caa 0.0 1.4K 0.7 1.0K 341.3 0.3 0.3 0.3 0.3 0 0 0.3 0.3 0.3 0.3 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:23
hdisk_BKP_6 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:23
hdisk_PRD_2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:23
hdisk_home 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:23
hdisk_BKP_1 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:23
hdisk13 0.3 21.8K 5.3 0.0 21.8K 0.0 0.0 0.0 0.0 0 0 5.3 0.2 0.2 0.3 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:23
hdisk_PRD_3 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:23
hdisk_BKP_8 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:23
hdisk_BKP_3 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:23
hdisk_BKP_5 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:23
hdisk_BKP_7 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:23
hdisk_PRD_4 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:23
hdisk_BKP_9 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:26
hdisk_PRD_1 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:26
hdisk_BKP_10 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:26
hdisk_BKP_4 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:26
hdisk_BKP_2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:26
hdisk_caa 0.0 2.7K 1.3 2.0K 682.7 0.7 0.2 0.2 0.2 0 0 0.7 0.3 0.3 0.3 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:26
hdisk_BKP_6 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:26
hdisk_PRD_2 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:26
hdisk_home 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:26
hdisk_BKP_1 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:26
hdisk13 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:26
hdisk_PRD_3 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:26
hdisk_BKP_8 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:26
hdisk_BKP_3 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:26
hdisk_BKP_5 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:26
hdisk_BKP_7 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:26
hdisk_PRD_4 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:26
hdisk_BKP_9 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:29
hdisk_PRD_1 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:29
hdisk_BKP_10 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:29
hdisk_BKP_4 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0 0 0.0 0.0 0.0 0.0 0.0 0.0 09:55:29
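
A side note on the iostat invocation above: the very large count argument is just a way to keep it sampling until it is killed; on AIX, omitting the count entirely has the same effect. A sketch of an equivalent long-running capture (the nohup wrapper and output path are illustrative):

# sample all disks every 3 seconds, with timestamps, until killed;
# nohup lets the collection survive the login session
nohup iostat -RDTl 3 > /var/tmp/iostat.txt 2>&1 &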

SISMED1:/var/tmp> tprof -ujeskzl -A -I -N -r default -x sleep 60
Configuration information
=========================
System: AIX 7.1 Node: SISMED1 Machine: 00FB42D74C00
Tprof command was:
tprof -ujeskzl -A -I -N -r default -x sleep 60
Trace command was:
/usr/bin/trace -ad -M -L 737711308 -T 500000 -j 00A,001,002,003,38F,005,006,134,210,139,5A2,5A5,465,234,5D8, -o default.trc
Total Samples = 11996
Traced Time = 60.01s (out of a total execution time of 60.01s)
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<

Process FREQ Total Kernel User Shared Other Java
======= ==== ===== ====== ==== ====== ===== ====
mumsm 1 5997 10 5987 0 0 0
wait 1 5655 5655 0 0 0 0
/usr/bin/ps 13 90 60 3 27 0 0
/usr/es/sbin/cluster/clstat 5 46 35 0 11 0 0
/usr/sbin/zabbix_agentd: 15 30 22 1 7 0 0
/usr/sbin/snmpd 1 28 25 0 3 0 0
/usr/bin/login 4 15 9 0 6 0 0
/usr/bin/sh 13 14 13 1 0 0 0
/opt/rsct/bin/rmcd 2 11 7 0 4 0 0
/usr/sbin/sshd 4 11 5 1 5 0 0
gil 4 10 10 0 0 0 0
/usr/sbin/telnetd 5 8 8 0 0 0 0
lock_rcv 1 7 7 0 0 0 0
/bin/ksh 5 6 6 0 0 0 0
/usr/sbin/syslogd 1 5 2 1 2 0 0
/usr/bin/netstat 3 5 5 0 0 0 0
/usr/es/sbin/cluster/clinfo 1 5 3 1 1 0 0
swapper 1 4 4 0 0 0 0
/usr/bin/w 1 3 3 0 0 0 0
/usr/bin/wc 3 3 3 0 0 0 0
/var/opt/tivoli/ep/_jvm/jre/bin/java
2 3 2 0 1 0 0
/usr/sbin/rsct/bin/hagsd 1 3 0 0 3 0 0
/bin/perl 1 3 2 0 1 0 0
/usr/bin/awk 3 3 3 0 0 0 0
/usr/java7_64/bin/java 2 3 2 0 1 0 0
-ksh 3 3 2 0 1 0 0
mutio 2 3 3 0 0 0 0
/usr/bin/sort 1 2 1 0 1 0 0
/usr/bin/sleep 2 2 2 0 0 0 0
./msm 2 2 1 0 1 0 0
DPKP 1 2 2 0 0 0 0
/usr/sbin/syncd 1 1 1 0 0 0 0
/usr/bin/iostat 1 1 0 0 1 0 0
/usr/bin/hostname 1 1 1 0 0 0 0
/usr/bin/grep 1 1 1 0 0 0 0
/usr/sbin/cron 1 1 1 0 0 0 0
/usr/bin/vmstat 1 1 1 0 0 0 0
/usr/bin/setmaps 1 1 1 0 0 0 0
/usr/es/sbin/cluster/clstrmgr 1 1 0 0 1 0 0
/usr/sbin/aso 1 1 1 0 0 0 0
muctrl 1 1 1 0 0 0 0
/usr/bin/date 1 1 0 0 1 0 0
/usr/sbin/bindprocessor 1 1 0 0 1 0 0
rpc.lockd 1 1 1 0 0 0 0
/opt/rsct/bin/IBM.MgmtDomainRMd
1 1 0 0 1 0 0
======= ==== ===== ====== ==== ====== ===== ====
Total 118 11996 5921 5995 80 0 0

Process PID TID Total Kernel User Shared Other Java
======= === === ===== ====== ==== ====== ===== ====
mumsm 13566366 101908519 5997 10 5987 0 0 0
wait 131076 131077 5655 5655 0 0 0 0
/usr/sbin/snmpd 5570750 9043989 28 25 0 3 0 0

5701654 10616911 10 6 0 4 0 0

27853170 116523025 10 8 0 2 0 0

16908568 21037289 9 6 0 3 0 0

16908564 21037285 9 6 0 3 0 0

27918726 116326431 9 7 0 2 0 0

27918720 116326425 9 8 0 1 0 0

6488312 11468895 8 4 1 3 0 0
/usr/bin/ps 21496144 116260879 8 4 1 3 0 0
/usr/bin/ps 19988892 116391957 7 4 0 3 0 0
/usr/bin/ps 16908558 21037279 7 5 0 2 0 0
/usr/bin/ps 16187840 115933193 7 4 0 3 0 0
lock_rcv 2883690 4849827 7 7 0 0 0 0
/usr/bin/ps 21103086 121634993 7 4 1 2 0 0
/usr/bin/ps 21496172 116260907 7 5 0 2 0 0
/usr/bin/ps 21496154 116260889 7 4 0 3 0 0
/usr/bin/ps 15204564 106758313 7 5 0 2 0 0
/usr/bin/ps 27722152 114950187 7 6 0 1 0 0
/usr/bin/ps 15204562 106758311 7 3 1 3 0 0
/usr/bin/ps 21103060 121634967 7 6 0 1 0 0
/usr/bin/ps 22938090 110559313 6 6 0 0 0 0

6684920 11272291 6 5 0 1 0 0
/usr/sbin/sshd 21496188 116260923 6 2 1 3 0 0
/usr/bin/ps 21496170 116260905 6 4 0 2 0 0

16187636 36438113 5 3 1 1 0 0

4128894 7471333 5 2 1 2 0 0
swapper 0 3 4 4 0 0 0 0
/usr/bin/login 26870168 104071203 4 2 0 2 0 0
/usr/bin/login 26804608 107741261 4 4 0 0 0 0
gil 1179684 2162755 4 4 0 0 0 0
/usr/bin/login 19136912 110231567 4 3 0 1 0 0
/bin/perl 21103062 121634969 3 2 0 1 0 0
/usr/bin/w 21103044 121634951 3 3 0 0 0 0
gil 1179684 2097217 3 3 0 0 0 0
/usr/bin/login 13697434 107216935 3 0 0 3 0 0

6357204 11534433 3 2 0 1 0 0

16187848 115933201 3 3 0 0 0 0

7143560 23920657 3 0 0 3 0 0
/usr/bin/netstat 16908792 21037257 2 2 0 0 0 0
/usr/bin/netstat 26804604 107741257 2 2 0 0 0 0
/bin/ksh 6226110 10747977 2 2 0 0 0 0

7602186 24707089 2 1 0 1 0 0

6815952 11599971 2 2 0 0 0 0

27525488 107937851 2 2 0 0 0 0
/usr/sbin/sshd 18940338 106299459 2 2 0 0 0 0

6291456 13303961 2 2 0 0 0 0
mutio 15991256 121307293 2 2 0 0 0 0
/usr/bin/sort 27722094 114950385 2 1 0 1 0 0
/usr/sbin/sshd 18940336 106299457 2 0 0 2 0 0
gil 1179684 2228293 2 2 0 0 0 0
DPKP 4653246 9633873 2 2 0 0 0 0
/usr/bin/sh 21103068 121634975 2 1 1 0 0 0

21496178 116260913 1 1 0 0 0 0
/usr/bin/grep 21103054 121634961 1 1 0 0 0 0
/usr/bin/setmaps 21496142 116260877 1 1 0 0 0 0
/usr/bin/sh 27722128 114950163 1 1 0 0 0 0
/usr/bin/sh 27722098 114950389 1 1 0 0 0 0
/usr/bin/sh 27722108 114950399 1 1 0 0 0 0
/usr/bin/sh 27722090 114950381 1 1 0 0 0 0
/usr/bin/sh 16908548 21037269 1 1 0 0 0 0
/usr/bin/sh 27722138 114950173 1 1 0 0 0 0
/usr/bin/sh 16908546 21037267 1 1 0 0 0 0
/usr/bin/sh 21103062 121634969 1 1 0 0 0 0
/usr/bin/sh 27722122 114950157 1 1 0 0 0 0
/usr/bin/sh 15204548 106758297 1 1 0 0 0 0
/usr/bin/sh 27722120 114950155 1 1 0 0 0 0
/usr/bin/sh 25559450 103612559 1 1 0 0 0 0
/usr/bin/sleep 17694988 110821625 1 1 0 0 0 0
/usr/bin/sleep 17695000 110821381 1 1 0 0 0 0
/usr/bin/date 27722132 114950167 1 0 0 1 0 0
/usr/bin/awk 18809328 115212293 1 1 0 0 0 0
/usr/bin/awk 26804598 107741251 1 1 0 0 0 0
/usr/bin/awk 17694980 110821617 1 1 0 0 0 0

5701654 12845229 1 1 0 0 0 0

7012416 14221511 1 0 0 1 0 0

3866654 12451967 1 0 0 1 0 0

6291456 8781893 1 0 0 1 0 0
/usr/sbin/aso 4849814 8192253 1 1 0 0 0 0

21496164 116260899 1 0 0 1 0 0
/usr/sbin/cron 15204554 106758303 1 1 0 0 0 0
/usr/bin/vmstat 24314156 34472159 1 1 0 0 0 0
/bin/ksh 21496172 116260907 1 1 0 0 0 0
/usr/sbin/sshd 18809088 115212309 1 1 0 0 0 0
/usr/bin/iostat 24183068 114294831 1 0 0 1 0 0
/usr/bin/netstat 21103056 121634963 1 1 0 0 0 0
-ksh 26804608 107741261 1 0 0 1 0 0

15204574 106758323 1 1 0 0 0 0

15204552 106758301 1 1 0 0 0 0

27722160 106102885 1 1 0 0 0 0

19988886 116391951 1 0 0 1 0 0
/bin/ksh 18809324 115212289 1 1 0 0 0 0
/bin/ksh 27787630 116981781 1 1 0 0 0 0

15204558 106758307 1 1 0 0 0 0

16187844 115933197 1 0 0 1 0 0

21496184 116260919 1 1 0 0 0 0

27853172 116523027 1 1 0 0 0 0

13959448 113246405 1 1 0 0 0 0

27722158 106102883 1 1 0 0 0 0

27853162 116523017 1 1 0 0 0 0

27722154 114950189 1 1 0 0 0 0
/usr/bin/wc 27722098 114950389 1 1 0 0 0 0

27722110 114950145 1 1 0 0 0 0
/usr/bin/wc 27853164 116523019 1 1 0 0 0 0

7602186 20054121 1 1 0 0 0 0

21954826 109117579 1 1 0 0 0 0
/bin/ksh 17694996 110821377 1 1 0 0 0 0
/usr/bin/wc 27918704 116326409 1 1 0 0 0 0
./msm 13697434 107216935 1 1 0 0 0 0
gil 1179684 2031679 1 1 0 0 0 0
./msm 26804608 107741261 1 0 0 1 0 0
muctrl 7995424 28311785 1 1 0 0 0 0
-ksh 13697434 107216935 1 1 0 0 0 0
mutio 25821478 113311857 1 1 0 0 0 0
/usr/sbin/syncd 1638506 5177503 1 1 0 0 0 0
rpc.lockd 4194446 8454193 1 1 0 0 0 0
-ksh 18809088 115212309 1 1 0 0 0 0
======= === === ===== ====== ==== ====== ===== ====
Total 11996 5921 5995 80 0 0

Total Ticks For All Processes (USER) = 5995

User Process Ticks % Address Bytes
============= ===== ====== ======= =====
mumsm 5987 49.91 100b3674 1234d
/usr/bin/ps 3 0.03 1000001b0 14768
/usr/sbin/zabbix_agentd: 1 0.01 100016f8c 1
/usr/sbin/sshd 1 0.01 10000150 d0080
/usr/sbin/syslogd 1 0.01 1000001b0 abb0
/usr/bin/sh 1 0.01 10000100 3eaec
/usr/es/sbin/cluster/clinfo 1 0.01 10000178 4f402

Profile: mumsm

Total Ticks For All Processes (mumsm) = 5987

Subroutine Ticks % Source Address Bytes
========== ===== ====== ====== ======= =====
<0x100B36A0> 1061 8.84 100b36a0 1
<0x100C59AC> 868 7.24 100c59ac 1
<0x100B3678> 734 6.12 100b3678 1
<0x100B36D4> 685 5.71 100b36d4 1
<0x100B3688> 409 3.41 100b3688 1
<0x100B36D0> 397 3.31 100b36d0 1
<0x100C5990> 316 2.63 100c5990 1
<0x100C59C0> 238 1.98 100c59c0 1
<0x100B368C> 201 1.68 100b368c 1
<0x100C5950> 196 1.63 100c5950 1
<0x100B3674> 143 1.19 100b3674 1
<0x100B36C0> 140 1.17 100b36c0 1
<0x100B36D8> 126 1.05 100b36d8 1
<0x100B367C> 122 1.02 100b367c 1
<0x100C5978> 120 1.00 100c5978 1
<0x100C5970> 119 0.99 100c5970 1
<0x100B36C8> 112 0.93 100b36c8 1

Profile: /usr/bin/ps

Total Ticks For All Processes (/usr/bin/ps) = 3

Subroutine Ticks % Source Address Bytes
========== ===== ====== ====== ======= =====
.print_args 1 0.01 /usr/bin/ps bb0 158
.malloc 1 0.01 glink64.s 13b58 18
.main 1 0.01 /usr/bin/ps 7830 46ac

Profile: /usr/sbin/zabbix_agentd:

Total Ticks For All Processes (/usr/sbin/zabbix_agentd:) = 1

Subroutine Ticks % Source Address Bytes
========== ===== ====== ====== ======= =====
<0x100016F8C> 1 0.01 100016f8c 1

Profile: /usr/sbin/sshd

Total Ticks For All Processes (/usr/sbin/sshd) = 1

Subroutine Ticks % Source Address Bytes
========== ===== ====== ====== ======= =====
.chacha_encrypt_bytes 1 0.01 chacha_private.h 1f864 113c

Profile: /usr/sbin/syslogd

Total Ticks For All Processes (/usr/sbin/syslogd) = 1

Subroutine Ticks % Source Address Bytes
========== ===== ====== ====== ======= =====
.iscntrl 1 0.01 glink64.s a9d0 18

Profile: /usr/bin/sh

Total Ticks For All Processes (/usr/bin/sh) = 1

Subroutine Ticks % Source Address Bytes
========== ===== ====== ====== ======= =====
.xec_switch 1 0.01 /usr/bin/sh 2b3c0 1f14

Profile: /usr/es/sbin/cluster/clinfo

Total Ticks For All Processes (/usr/es/sbin/cluster/clinfo) = 1

Subroutine Ticks % Source Address Bytes
========== ===== ====== ====== ======= =====
.atoi 1 0.01 glink.s c4d8 28

Total Ticks For All Processes (KERNEL) = 5908

Subroutine Ticks % Source Address Bytes
========== ===== ====== ====== ======= =====
h_cede_end_point 5620 46.85 hcalls.s aacf8 8
.unlock_enable_mem 38 0.32 low.s 930c 1f4
ovlya_addr_pcs_glue_xon2 29 0.24 vmvcs.s 755ac 8c
.enable 15 0.13 misc.s 1cb590 70
.simple_unlock_mem 13 0.11 low.s 9918 1e8
.v_lookup_mpss 11 0.09 ../../../../../src/bos/kernel/vmm/v_lookup.c 17f700 420
.hkeyset_restore 10 0.08 low.s 14c24 24
.simple_lock 9 0.08 low.s 9500 400
sc_msr_2_point 8 0.07 low.s 3840 24
svc_rfid_ret32 7 0.06 low.s 3bec 2a0
.v_lookup 7 0.06 ../../../../../src/bos/kernel/vmm/v_lookup.c 181ae0 3a0
.fill_procentry64 7 0.06 ../../../../../src/bos/kernel/proc/getproc.c 465480 960
hypervisor_call_asm_end_point 7 0.06 hcalls.s ab2b8 18
.umem_move 5 0.04 low.s b600 200
.hkeyset_add 5 0.04 low.s 14c00 24
svc_rfid_ret64 4 0.03 low.s 3b9c 14
.disable_lock 4 0.03 low.s 9004 2fc
.v_lookup_swpft 4 0.03 ../../../../../src/bos/kernel/vmm/v_lookup.c 17fdc0 280
.v_insscb 3 0.03 ../../../../../src/bos/kernel/vmm/v_scblist.c 2d9b40 440
.trchook64 3 0.03 trchka64.s b1fb0 450
.nlcLookup 3 0.03 ../../../../../src/bos/kernel/lfs/nlc.c 790760 400
.kernel_add_gate_cstack 2 0.02 low.s 14f20 60
hk_update_ukeys_accr_point 2 0.02 64/skeys.s a8840 30
.v_deque_nfr 2 0.02 ../../../../../src/bos/kernel/vmm/v_freelist.c 191260 420
.vm_handle 2 0.02 ../../../../../src/bos/kernel/vmm/vmmisc64.c 3b840 e80
.kern_geth 2 0.02 ../../../../../src/bos/kernel/vmm/vmadsputil.c 11c480 4c0
.fetch_and_add 2 0.02 low.s 9b00 80
.crexport 2 0.02 ../../../../../../src/bos/kernel/s/auth/cred.c 4b3c0 280
.v_inspft 2 0.02 ../../../../../src/bos/kernel/vmm/v_lists.c 25ae60 280
.v_update_frame_stats 2 0.02 ../../../../../src/bos/kernel/vmm/v_lists.c 25b0e0 920
.mtrchook2 2 0.02 low.s 10204 38
.v_scan_compute_weights 2 0.02 ../../../../../src/bos/kernel/vmm/vmscan.c 2e4f80 3c0
.j2_lookup 2 0.02 ../../../../../src/bos/kernel/j2/j2_lookup.c 3c4f80 580
.v_wpagein 2 0.02 ../../../../../src/bos/kernel/vmm/v_getsubs1.c 183c00 1320
.getthrds 2 0.02 ../../../../../src/bos/kernel/proc/getproc.c 46c9e0 b60
.copyin 1 0.01 ../../../../../src/bos/kernel/vmm/userio.c a7a40 220
.VM_GETKEY 1 0.01 ../../../../../src/bos/kernel/vmm/vmmisc64.c 3c800 40
._hkeyset_restore_userkeys 1 0.01 64/skeys.s a8870 10
.drw_lock_read 1 0.01 low.s 15d20 120
._ptrgl 1 0.01 low.s 14d00 24
._thread_unlock_common 1 0.01 ../../../../../src/bos/kernel/proc/sleep3.c be1c0 4c0
.clock 1 0.01 ../../../../../src/bos/kernel/proc/clock.c c9220 580
.xmattach 1 0.01 ../../../../../src/bos/kernel/vmm/xmem.c 11b0c0 320
.hkeyset_replace 1 0.01 low.s 14c48 24
.vm_add_xmemcnt 1 0.01 ../../../../../src/bos/kernel/vmm/v_segsubs.c 1226a0 200
.vm_att 1 0.01 low.s b484 7c
.lock_done_mem 1 0.01 low.s be48 178
.v_ff_bitmap_upd 1 0.01 ../../../../../src/bos/kernel/vmm/v_freelist.c 18fd00 300
.v_deque_nfr_ff_bitmap 1 0.01 ../../../../../src/bos/kernel/vmm/v_freelist.c 1901e0 160
.lock_read 1 0.01 low.s bd20 120
.get_64bit_rlimit_u 1 0.01 ../../../../../src/bos/kernel/proc/resource_pn.c 1b6640 e0
.vm_det 1 0.01 low.s b504 7c
sc_msr_1_point 1 0.01 low.s 3a08 c
.cdev_rdwr 1 0.01 ../../../../../src/bos/kernel/specfs/cdev_subr.c 8a9180 4a0
.v_psize_freeok 1 0.01 ../../../../../src/bos/kernel/vmm/vmpsize.c 5cc60 1a0
.v_get_validated_bsidx 1 0.01 ../../../../../src/bos/kernel/vmm/v_scbsubs.c 6b4a0 1a0
.v_sort_wsidlist 1 0.01 ../../../../../src/bos/kernel/vmm/v_relsubs.c 2bbd60 560
.v_relframe 1 0.01 ../../../../../src/bos/kernel/vmm/v_relsubs.c 2c10c0 e00
.v_reclaim 1 0.01 ../../../../../src/bos/kernel/vmm/v_getsubs.c 2d1160 1220
.v_validate_sidx 1 0.01 ../../../../../src/bos/kernel/vmm/v_scbsubs.c 6b640 12c0
.v_descoreboard 1 0.01 ../../../../../src/bos/kernel/vmm/v_mpsubs.c 6f2e0 240
.v_scan_end 1 0.01 ../../../../../src/bos/kernel/vmm/vmscan.c 2e6c00 100
.pile_object_wanted 1 0.01 ../../../../../src/bos/kernel/j2/j2_inode.c 3108a0 40
.iActivate 1 0.01 ../../../../../src/bos/kernel/j2/j2_inode.c 315280 500
.bmAssign 1 0.01 ../../../../../src/bos/kernel/j2/j2_bufmgr.c 32d520 c40
.check_free 1 0.01 ../../../../../src/bos/kernel/vmm/vmxmdbg.c 373680 3a0
.crxref 1 0.01 ../../../../../../src/bos/kernel/s/auth/cred.c 4bce0 e0
.j2_seek 1 0.01 ../../../../../src/bos/kernel/j2/j2_seek.c 3d2180 80
.validfault 1 0.01 ../../../../../src/bos/kernel/vmm/v_getsubs64.c 403240 1a0
.waitproc_find_run_queue 1 0.01 ../../../../../src/bos/kernel/proc/dispatch.c 7fb60 960
.getptr 1 0.01 ../../../../../src/bos/kernel/proc/getproc.c 46a020 1a0
.segattach 1 0.01 ../../../../../src/bos/kernel/proc/getproc.c 46af80 3a0
.waitproc 1 0.01 ../../../../../src/bos/kernel/proc/dispatch.c 86f60 9c0
.rtfree_nolock 1 0.01 ../../../../../src/bos/kernel/net/route.c 501100 1
.rtalloc1_nolock_gr 1 0.01 ../../../../../src/bos/kernel/net/route.c 501500 780
.m_copydata 1 0.01 ../../../../../src/bos/kernel/uipc/mbuf.c 52e800 1
.uipc_usrreq 1 0.01 ../../../../../src/bos/kernel/uipc/usrreq.c 54e480 1
.soclose2 1 0.01 ../../../../../src/bos/kernel/uipc/socket.c 57a700 e00
.erecvit 1 0.01 ../../../../../src/bos/kernel/uipc/syscalls.c 592c00 1580
.sbdrop 1 0.01 ../../../../../src/bos/kernel/uipc/socket2.c 5a8580 1
.closex 1 0.01 ../../../../../src/bos/kernel/lfs/close.c 605020 3e0
.lookuppn 1 0.01 ../../../../../src/bos/kernel/lfs/lookuppn.c 606140 10c0
.xmalloc 1 0.01 ../../../../../src/bos/kernel/alloc/xmalloc.c 62eec0 9c0
.vnop_rele 1 0.01 ../../../../../src/bos/kernel/lfs/vnops.c 69dce0 180
.vnop_ioctl 1 0.01 ../../../../../src/bos/kernel/lfs/vnops.c 6a0600 200
.vnop_getattr_flags 1 0.01 ../../../../../src/bos/kernel/lfs/vnops.c 6a0d60 1a0
._kvmgetinfo 1 0.01 ../../../../../src/bos/kernel/vmm/vmgetinfo.c 728c20 840
.vminfo64to32 1 0.01 ../../../../../src/bos/kernel/vmm/vmgetinfo.c 72d820 340
.copyinstr 1 0.01 ../../../../../src/bos/kernel/vmm/userio.c a7600 220
.uinfox_ref 1 0.01 ../../../../../src/bos/kernel/proc/usrinfo.c 7fd4e0 a0
.ld_resolve1 1 0.01 ../../../../../src/bos/kernel/ldr/ld_symbols.c 86a520 1920
.ld_hash 1 0.01 ../../../../../src/bos/kernel/ldr/ld_symbols.c 86f3e0 c0

Millicode Subroutine Ticks % Source Address Bytes
========== ===== ====== ====== ======= =====
.memset_overlay 6 0.05 low.s e008 5f8
.memset64_overlay 2 0.02 low.s e808 5f8
.strlen_overlay 1 0.01 low.s e600 200
.strcpy_overlay 1 0.01 low.s fc00 200
.strcmp_overlay 1 0.01 low.s dc00 200

Total Ticks For All Processes (KEX) = 13

Kernel Ext Ticks % Address Bytes
========== ===== ====== ======= =====
/usr/lib/drivers/netinet 5 0.04 5cd0200 ef8a0
UnknownBinary 3 0.03 f1000000c03e7094 1
/usr/lib/drivers/eth_demux 3 0.03 5ca0200 3e30
/usr/lib/drivers/if_en 1 0.01 5fa0200 7d58
/usr/lib/drivers/vioentdd 1 0.01 5c50300 24a48

Profile: /usr/lib/drivers/netinet

Total Ticks For All Processes (/usr/lib/drivers/netinet) = 5

Subroutine Ticks % Source Address Bytes
========== ===== ====== ====== ======= =====
.tcp_fasttimo 2 0.02 /usr/lib/drivers/netinet df4e0 420
.tcp_slowtimo 2 0.02 /usr/lib/drivers/netinet dee40 668
.tcp_input0 1 0.01 /usr/lib/drivers/netinet bd700 a7dc

Profile: UnknownBinary

Total Ticks For All Processes (UnknownBinary) = 3

Subroutine Ticks % Source Address Bytes
========== ===== ====== ====== ======= =====
<0xF1000000C03E7094> 3 0.03 f1000000c03e7094 1

Profile: /usr/lib/drivers/eth_demux

Total Ticks For All Processes (/usr/lib/drivers/eth_demux) = 3

Subroutine Ticks % Source Address Bytes
========== ===== ====== ====== ======= =====
.eth_std_receive 2 0.02 /usr/lib/drivers/eth_demux 2800 4bc
.eth_receive 1 0.01 /usr/lib/drivers/eth_demux 860 3ac

Profile: /usr/lib/drivers/if_en

Total Ticks For All Processes (/usr/lib/drivers/if_en) = 1

Subroutine Ticks % Source Address Bytes
========== ===== ====== ====== ======= =====
.en_output 1 0.01 /usr/lib/drivers/if_en 2da0 1898

Profile: /usr/lib/drivers/vioentdd

Total Ticks For All Processes (/usr/lib/drivers/vioentdd) = 1

Subroutine Ticks % Source Address Bytes
========== ===== ====== ====== ======= =====
.vioent_receive 1 0.01 ../../../../../src/bos/kernext/vioent/vioent_receive.c 18b00 2020

Total Ticks For All Processes (SH-LIBs) = 80

Shared Object Ticks % Address Bytes
============= ===== ====== ======= =====
/usr/lib/libc.a[shr_64.o] 28 0.23 900000000000a00 359438
/usr/lib/libc.a[shr.o] 25 0.21 d0100280 361448
/usr/bin/ps 6 0.05 900000000579a60 218dd
/usr/lib/libpthreads.a[shr_xpg5.o] 5 0.04 d052f180 32ced
/usr/lib/libcrypto.a[libcrypto.so.1.0.0] 3 0.03 d414c350 15ec1e
/usr/lib/libpthreads.a[shr.o] 3 0.03 d0c39180 2d10d
/usr/lib/libi18n.a[shr.o] 2 0.02 d057e280 b180
/usr/lib/libsnmppriv_ne.a[shr.o] 2 0.02 d0c7c280 7ddac
/usr/lib/libct_mss.a[shr.o] 2 0.02 d469a280 8bc50
/usr/lib/libperfstat.a[shr_64.o] 1 0.01 900000000911b00 67273
/usr/sbin/zabbix_agentd: 1 0.01 900000000579ac0 1
/usr/java7_64/jre/lib/ppc64/compressedrefs/libj9jit26.so 1 0.01 900000000cea280 8ca0f4
Millicode routines 1 0.01 0 3200

Profile: /usr/lib/libc.a[shr_64.o]

Total Ticks For All Processes (/usr/lib/libc.a[shr_64.o]) = 28

Subroutine Ticks % Source Address Bytes
========== ===== ====== ====== ======= =====
.strchr 3 0.03 strchr.s 1c000 180
.compare_rec_char 3 0.03 ../../../../../../../src/bos/usr/lib/security/LOCAL/common_compare.c 1993e0 220
._doprnt 3 0.03 ../../../../../../../src/bos/usr/ccs/lib/libc/doprnt.c 5a80 81a0
.compare_rec_int 3 0.03 ../../../../../../../src/bos/usr/lib/security/LOCAL/common_compare.c 199240 1a0
.binary_search 2 0.02 ../../../../../../../src/bos/usr/ccs/lib/libs/ntree.c 1067c0 2a0
.iscntrl 2 0.02 ../../../../../../../src/bos/usr/ccs/lib/libc/iscntrl.c 279500 100
.free_y 2 0.02 ../../../../../../../src/bos/usr/ccs/lib/libc/malloc_y.c 40cc0 9a0
.colon_search 1 0.01 ../../../../../../../src/bos/usr/lib/security/LOCAL/file_colon.c 1ad300 1120
.ntree_search 1 0.01 ../../../../../../../src/bos/usr/ccs/lib/libs/ntree.c 105b20 340
._select 1 0.01 glink64.s 156d28 1
.malloc_y 1 0.01 ../../../../../../../src/bos/usr/ccs/lib/libc/malloc_y.c 42780 840
.read_colon_rec 1 0.01 ../../../../../../../src/bos/usr/lib/security/LOCAL/common_file.c 190840 720
.__ftell 1 0.01 ../../../../../../../src/bos/usr/ccs/lib/libc/ftell.c 3db00 3a0
.strncpy 1 0.01 strncpy.s 3c080 128
.malloc_common@AF104_87 1 0.01 ../../../../../../../src/bos/usr/ccs/lib/libc/malloc_common.c 2aa20 1e0
.malloc 1 0.01 ../../../../../../../src/bos/usr/ccs/lib/libc/malloc_common.c 2a720 120
.__ntree_locate 1 0.01 ../../../../../../../src/bos/usr/ccs/lib/libs/ntree.c 105e60 620

Profile: /usr/lib/libc.a[shr.o]

Total Ticks For All Processes (/usr/lib/libc.a[shr.o]) = 25

Subroutine Ticks % Source Address Bytes
========== ===== ====== ====== ======= =====
.strpbrk 3 0.03 ../../../../../../../src/bos/usr/ccs/lib/libc/strpbrk.c 23500 100
.strncpy 2 0.02 strncpy.s 23c00 140
.fgets 2 0.02 ../../../../../../../src/bos/usr/ccs/lib/libc/fgets.c 1bd80 380
.strsep 1 0.01 ../../../../../../../src/bos/usr/ccs/lib/libirs/strsep.c 14f580 100
.sv_next 1 0.01 ../../../../../../../src/bos/usr/ccs/lib/libirs/lcl_sv.c 1326e0 2c0
.ntree_walk 1 0.01 ../../../../../../../src/bos/usr/ccs/lib/libs/ntree.c 1134c0 200
.string 1 0.01 ../../../../../../../src/bos/usr/ccs/lib/libc/doscan.c f7f60 560
.time_base_to_time 1 0.01 ../../../../../../../src/bos/usr/ccs/lib/libc/POWER/time_base_to_time.c ef680 400
.read_wall_time 1 0.01 read_real_time.s ef580 100
.__nl_langinfo_std 1 0.01 ../../../../../../../src/bos/usr/ccs/lib/libc/__nl_langinfo_std.c 223940 c0
._q_cvtl 1 0.01 ../../../../../../../src/bos/usr/ccs/lib/libc/_q_cvtl.c a0000 120
.colon_search 1 0.01 ../../../../../../../src/bos/usr/lib/security/LOCAL/file_colon.c 1b6460 1080
.sigemptyset 1 0.01 ../../../../../../../src/bos/usr/ccs/lib/libc/sigops.c 7cd00 80
.mbtowc 1 0.01 ../../../../../../../src/bos/usr/ccs/lib/libc/mbtowc.c 39300 180
._doprnt 1 0.01 ../../../../../../../src/bos/usr/ccs/lib/libc/doprnt.c 2e880 7fc0
.splay 1 0.01 ../../../../../../../src/bos/usr/ccs/lib/libc/malloc_y.c 27a80 520
.__ftell 1 0.01 ../../../../../../../src/bos/usr/ccs/lib/libc/ftell.c 25880 3a0
.compare_rec_char 1 0.01 ../../../../../../../src/bos/usr/lib/security/LOCAL/common_compare.c 1a3060 220
.atoi 1 0.01 ../../../../../../../src/bos/usr/ccs/lib/libc/atoi.c 23600 600
.__fd_select 1 0.01 ../../../../../../../src/bos/usr/ccs/lib/libc/iosl.c 161ac0 220
.fcntl 1 0.01 ../../../../../../../src/bos/usr/ccs/lib/libc/fcntl.c 1f3c0 c0

Profile: /usr/bin/ps

Total Ticks For All Processes (/usr/bin/ps) = 6

Subroutine Ticks % Source Address Bytes
========== ===== ====== ====== ======= =====
<0x90000000057ABE0> 2 0.02 90000000057abe0 1
<0x90000000059B33C> 1 0.01 90000000059b33c 1
<0x90000000057ABE8> 1 0.01 90000000057abe8 1
<0x900000000579A60> 1 0.01 900000000579a60 1
<0x90000000059B304> 1 0.01 90000000059b304 1

Profile: /usr/lib/libpthreads.a[shr_xpg5.o]

Total Ticks For All Processes (/usr/lib/libpthreads.a[shr_xpg5.o]) = 5

Subroutine Ticks % Source Address Bytes
========== ===== ====== ====== ======= =====
.global_unlock_ppc_mp 2 0.02 pth_locks_ppc_mp.s 2e824 dc
.global_lock_ppc_mp_eh 2 0.02 pth_locks_ppc_mp_eh.s 2e724 dc
.thread_tsleep 1 0.01 glink.s 17a50 28

Profile: /usr/lib/libcrypto.a[libcrypto.so.1.0.0]

Total Ticks For All Processes (/usr/lib/libcrypto.a[libcrypto.so.1.0.0]) = 3

Subroutine Ticks % Source Address Bytes
========== ===== ====== ====== ======= =====
.bn_mul_mont_fpu64 3 0.03 ppc64-mont.s 1e2b0 f00

Profile: /usr/lib/libpthreads.a[shr.o]

Total Ticks For All Processes (/usr/lib/libpthreads.a[shr.o]) = 3

Subroutine Ticks % Source Address Bytes
========== ===== ====== ====== ======= =====
.pthread_cleanup_pop 2 0.02 ../../../../../../../../src/bos/usr/ccs/lib/libpthreads/pth_pthread.c eb00 80
.pthread_cleanup_push 1 0.01 ../../../../../../../../src/bos/usr/ccs/lib/libpthreads/pth_pthread.c eb80 80

Profile: /usr/lib/libi18n.a[shr.o]

Total Ticks For All Processes (/usr/lib/libi18n.a[shr.o]) = 2

Subroutine Ticks % Source Address Bytes
========== ===== ====== ====== ======= =====
.__wctomb_iso1 1 0.01 ../../../../../../../src/bos/usr/ccs/lib/libi18n/__wctomb_iso1.c 3100 80
.__mbtowc_iso1 1 0.01 ../../../../../../../src/bos/usr/ccs/lib/libi18n/__mbtowc_iso1.c 2780 180

Profile: /usr/lib/libsnmppriv_ne.a[shr.o]

Total Ticks For All Processes (/usr/lib/libsnmppriv_ne.a[shr.o]) = 2

Subroutine Ticks % Source Address Bytes
========== ===== ====== ====== ======= =====
.asn_decode_length 1 0.01 ../../../../../../src/tcpip/usr/lib/libsnmppriv_ne/snmp_asn1.c 39780 120
.fstatx 1 0.01 glink.s 46d0 1

Profile: /usr/lib/libct_mss.a[shr.o]

Total Ticks For All Processes (/usr/lib/libct_mss.a[shr.o]) = 2

Subroutine Ticks % Source Address Bytes
========== ===== ====== ====== ======= =====
.sha512 2 0.02 /afs/apd.pok.ibm.com/u/myungbae/sandboxes/base_mut/src/rsct/crypto/clicv4/ansic/be32/clic.c 4190 36dc

Profile: /usr/lib/libperfstat.a[shr_64.o]

Total Ticks For All Processes (/usr/lib/libperfstat.a[shr_64.o]) = 1

Subroutine Ticks % Source Address Bytes
========== ===== ====== ====== ======= =====
.strncpy 1 0.01 strncpy.s 2300 140

Profile: /usr/sbin/zabbix_agentd:

Total Ticks For All Processes (/usr/sbin/zabbix_agentd:) = 1

Subroutine Ticks % Source Address Bytes
========== ===== ====== ====== ======= =====
<0x900000000579AC0> 1 0.01 900000000579ac0 1

Profile: /usr/java7_64/jre/lib/ppc64/compressedrefs/libj9jit26.so

Total Ticks For All Processes (/usr/java7_64/jre/lib/ppc64/compressedrefs/libj9jit26.so) = 1

Subroutine Ticks % Source Address Bytes
========== ===== ====== ====== ======= =====
.samplerThreadProc(void*) 1 0.01 HookedByTheJit.cpp 175e00 f40

Profile: Millicode routines

Total Ticks For All Processes (Millicode routines) = 1

Subroutine Ticks % Source Address Bytes
========== ===== ====== ====== ======= =====
.mull 1 0.01 low.s 3180 80

<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<

Discussion

MSM/MUMPS is running on IBM AIX 7.1.

Below are entries from the AIX console log and the errpt output, collected with the following commands:

alog -o -t console

errpt -a > errptSISMED.txt
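
If it helps, the errpt report can also be restricted to the freeze window with -s/-e (dates in mmddhhmmyy form); the exact window below is illustrative:

# detailed error report covering only 09:30-10:00 on 21 Jun 2022
errpt -a -s 0621093022 -e 0621100022 > errptSISMED_0955.txt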

         0 Tue Jun 21 03:38:01 -03 2022 qdaemon: (WARNING): 0781-088 Queue prtff:@129_156_11_135 went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S 129_156_11_135 -P prtff -N /usr/lib/lpd/aixshort \
            /var/spool/qdaemon/t2uXqea
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Dec 21 09:47:14 2020.  Current Time: Jun 21 03:38:01 2022
Use local problem reporting procedures.

         0 Tue Jun 21 03:38:01 -03 2022 qdaemon: (WARNING): 0781-088 Queue prt29:@172_16_127_30 went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S 172_16_127_30 -P prt29 -N /usr/lib/lpd/aixshort \
            /var/spool/qdaemon/tWST7aa
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Jul  1 07:34:49 2019.  Current Time: Jun 21 03:38:01 2022
Use local problem reporting procedures.

         0 Tue Jun 21 03:38:01 -03 2022 qdaemon: (WARNING): 0781-088 Queue prt43:@172_16_136_90 went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S 172_16_136_90 -P prt43 -N /usr/lib/lpd/aixshort \
            /var/spool/qdaemon/teaX7ea
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Apr 26 07:18:10 2021.  Current Time: Jun 21 03:38:01 2022
Use local problem reporting procedures.

         0 Tue Jun 21 03:38:02 -03 2022 qdaemon: (WARNING): 0781-088 Queue prt64:@172_16_150_90 went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S 172_16_150_90 -P prt64 -N /usr/lib/lpd/aixshort \
            /var/spool/qdaemon/tVwg7aa
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Jan 18 06:27:49 2022.  Current Time: Jun 21 03:38:02 2022
Use local problem reporting procedures.

         0 Tue Jun 21 03:38:03 -03 2022 qdaemon: (WARNING): 0781-088 Queue prtg1:@172_20_104_10 went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S 172_20_104_10 -P prtg1 -N /usr/lib/lpd/aixshort \
            /var/spool/qdaemon/tBVS7aa
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Dec 15 12:55:22 2021.  Current Time: Jun 21 03:38:03 2022
Use local problem reporting procedures.

         0 Tue Jun 21 03:38:08 -03 2022 qdaemon: (WARNING): 0781-088 Queue prt75:@iprint went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S iprint.ghc.com.br -P prt75 -N /usr/lib/lpd/bsdshort \
            /var/spool/qdaemon/tZouaaa
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Oct 16 10:07:29 2021.  Current Time: Jun 21 03:38:08 2022
Use local problem reporting procedures.

         0 Tue Jun 21 03:38:17 -03 2022 qdaemon: (WARNING): 0781-088 Queue prt75:@iprint went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S iprint.ghc.com.br -P prt75 -N /usr/lib/lpd/bsdshort \
            /var/spool/qdaemon/tZouaaa
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Oct 16 10:07:29 2021.  Current Time: Jun 21 03:38:17 2022
Use local problem reporting procedures.

qdaemon: errno = 4: A system call received an interrupt.
         0 Tue Jun 21 03:38:43 -03 2022 qdaemon: (WARNING): 0781-088 Queue prtf2:@iprint went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S iprint.ghc.com.br -P prtf2 -N /usr/lib/lpd/bsdshort \
            /var/spool/qdaemon/ttRnaaa
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Feb 11 09:21:58 2022.  Current Time: Jun 21 03:38:43 2022
Use local problem reporting procedures.

         0 Tue Jun 21 03:38:43 -03 2022 qdaemon: (WARNING): 0781-088 Queue prthc:@172_16_108_15 went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S 172_16_108_15 -P prthc -N /usr/lib/lpd/aixshort \
            /var/spool/qdaemon/tfv4Maa
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Nov  3 07:25:52 2020.  Current Time: Jun 21 03:38:43 2022
Use local problem reporting procedures.

         0 Tue Jun 21 03:38:47 -03 2022 qdaemon: (WARNING): 0781-088 Queue prt03:@iprint went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S iprint.ghc.com.br -P prt03 -N /usr/lib/lpd/bsdshort \
            /var/spool/qdaemon/trfAMea
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Jun  3 11:06:38 2022.  Current Time: Jun 21 03:38:47 2022
Use local problem reporting procedures.

         0 Tue Jun 21 03:38:51 -03 2022 qdaemon: (WARNING): 0781-088 Queue prtff:@129_156_11_135 went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S 129_156_11_135 -P prtff -N /usr/lib/lpd/aixshort \
            /var/spool/qdaemon/t2uXqea
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Dec 21 09:47:14 2020.  Current Time: Jun 21 03:38:51 2022
Use local problem reporting procedures.

         0 Tue Jun 21 03:38:53 -03 2022 qdaemon: (WARNING): 0781-088 Queue prt29:@172_16_127_30 went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S 172_16_127_30 -P prt29 -N /usr/lib/lpd/aixshort \
            /var/spool/qdaemon/tWST7aa
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Jul  1 07:34:49 2019.  Current Time: Jun 21 03:38:52 2022
Use local problem reporting procedures.

         0 Tue Jun 21 03:38:54 -03 2022 qdaemon: (WARNING): 0781-088 Queue prtn3:@172_16_107_22 went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S 172_16_107_22 -P prtn3 -N /usr/lib/lpd/aixshort \
            /var/spool/qdaemon/tLkiqaa
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Jul 19 05:16:40 2021.  Current Time: Jun 21 03:38:54 2022
Use local problem reporting procedures.

         0 Tue Jun 21 03:38:54 -03 2022 qdaemon: (WARNING): 0781-088 Queue prt31:@129_156_10_165 went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S 129_156_10_165 -P prt31 -N /usr/lib/lpd/aixshort \
            /var/spool/qdaemon/t5-Waaa
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Nov  9 08:11:21 2021.  Current Time: Jun 21 03:38:54 2022
Use local problem reporting procedures.

         0 Tue Jun 21 03:38:55 -03 2022 qdaemon: (WARNING): 0781-088 Queue prt43:@172_16_136_90 went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S 172_16_136_90 -P prt43 -N /usr/lib/lpd/aixshort \
            /var/spool/qdaemon/teaX7ea
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Apr 26 07:18:10 2021.  Current Time: Jun 21 03:38:55 2022
Use local problem reporting procedures.

         0 Tue Jun 21 03:38:55 -03 2022 qdaemon: (WARNING): 0781-088 Queue prt64:@172_16_150_90 went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S 172_16_150_90 -P prt64 -N /usr/lib/lpd/aixshort \
            /var/spool/qdaemon/tVwg7aa
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Jan 18 06:27:49 2022.  Current Time: Jun 21 03:38:55 2022
Use local problem reporting procedures.

         0 Tue Jun 21 03:38:57 -03 2022 qdaemon: (WARNING): 0781-088 Queue prtg1:@172_20_104_10 went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S 172_20_104_10 -P prtg1 -N /usr/lib/lpd/aixshort \
            /var/spool/qdaemon/tBVS7aa
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Dec 15 12:55:22 2021.  Current Time: Jun 21 03:38:57 2022
Use local problem reporting procedures.

         0 Tue Jun 21 06:30:02 -03 2022 qdaemon: (WARNING): 0781-088 Queue prt75:@iprint went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S iprint.ghc.com.br -P prt75 -N /usr/lib/lpd/bsdshort \
            /var/spool/qdaemon/tZouaaa
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Oct 16 10:07:29 2021.  Current Time: Jun 21 06:30:02 2022
Use local problem reporting procedures.

         0 Tue Jun 21 06:30:45 -03 2022 qdaemon: (WARNING): 0781-088 Queue prt31:@129_156_10_165 went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S 129_156_10_165 -P prt31 -N /usr/lib/lpd/aixshort \
            /var/spool/qdaemon/t5-Waaa
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Nov  9 08:11:21 2021.  Current Time: Jun 21 06:30:45 2022
Use local problem reporting procedures.

         0 Tue Jun 21 06:30:47 -03 2022 qdaemon: (WARNING): 0781-088 Queue prtff:@129_156_11_135 went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S 129_156_11_135 -P prtff -N /usr/lib/lpd/aixshort \
            /var/spool/qdaemon/t2uXqea
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Dec 21 09:47:14 2020.  Current Time: Jun 21 06:30:47 2022
Use local problem reporting procedures.

         0 Tue Jun 21 06:30:48 -03 2022 qdaemon: (WARNING): 0781-088 Queue prtn3:@172_16_107_22 went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S 172_16_107_22 -P prtn3 -N /usr/lib/lpd/aixshort \
            /var/spool/qdaemon/tLkiqaa
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Jul 19 05:16:40 2021.  Current Time: Jun 21 06:30:48 2022
Use local problem reporting procedures.

         0 Tue Jun 21 06:30:49 -03 2022 qdaemon: (WARNING): 0781-088 Queue prt43:@172_16_136_90 went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S 172_16_136_90 -P prt43 -N /usr/lib/lpd/aixshort \
            /var/spool/qdaemon/teaX7ea
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Apr 26 07:18:10 2021.  Current Time: Jun 21 06:30:49 2022
Use local problem reporting procedures.

         0 Tue Jun 21 06:30:49 -03 2022 qdaemon: (WARNING): 0781-088 Queue prt29:@172_16_127_30 went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S 172_16_127_30 -P prt29 -N /usr/lib/lpd/aixshort \
            /var/spool/qdaemon/tWST7aa
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Jul  1 07:34:49 2019.  Current Time: Jun 21 06:30:49 2022
Use local problem reporting procedures.

         0 Tue Jun 21 06:30:50 -03 2022 qdaemon: (WARNING): 0781-088 Queue prt64:@172_16_150_90 went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S 172_16_150_90 -P prt64 -N /usr/lib/lpd/aixshort \
            /var/spool/qdaemon/tVwg7aa
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Jan 18 06:27:49 2022.  Current Time: Jun 21 06:30:50 2022
Use local problem reporting procedures.

         0 Tue Jun 21 06:30:51 -03 2022 qdaemon: (WARNING): 0781-088 Queue prtg1:@172_20_104_10 went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S 172_20_104_10 -P prtg1 -N /usr/lib/lpd/aixshort \
            /var/spool/qdaemon/tBVS7aa
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Dec 15 12:55:22 2021.  Current Time: Jun 21 06:30:51 2022
Use local problem reporting procedures.

         0 Tue Jun 21 06:31:31 -03 2022 qdaemon: (WARNING): 0781-088 Queue prtf2:@iprint went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S iprint.ghc.com.br -P prtf2 -N /usr/lib/lpd/bsdshort \
            /var/spool/qdaemon/ttRnaaa
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Feb 11 09:21:58 2022.  Current Time: Jun 21 06:31:31 2022
Use local problem reporting procedures.

         0 Tue Jun 21 06:31:31 -03 2022 qdaemon: (WARNING): 0781-088 Queue prthc:@172_16_108_15 went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S 172_16_108_15 -P prthc -N /usr/lib/lpd/aixshort \
            /var/spool/qdaemon/tfv4Maa
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Nov  3 07:25:52 2020.  Current Time: Jun 21 06:31:31 2022
Use local problem reporting procedures.

         0 Tue Jun 21 06:31:36 -03 2022 qdaemon: (WARNING): 0781-088 Queue prt03:@iprint went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S iprint.ghc.com.br -P prt03 -N /usr/lib/lpd/bsdshort \
            /var/spool/qdaemon/trfAMea
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Jun  3 11:06:38 2022.  Current Time: Jun 21 06:31:36 2022
Use local problem reporting procedures.

         0 Tue Jun 21 07:24:44 -03 2022 qdaemon: (WARNING): 0781-088 Queue prt98:@172_16_136_35 went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S 172_16_136_35 -P prt98 -N /usr/lib/lpd/aixshort \
            /var/spool/qdaemon/tFb4Mea
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Jun 21 07:23:14 2022.  Current Time: Jun 21 07:24:44 2022
Use local problem reporting procedures.

         0 Tue Jun 21 07:30:04 -03 2022 qdaemon: (WARNING): 0781-088 Queue prt75:@iprint went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S iprint.ghc.com.br -P prt75 -N /usr/lib/lpd/bsdshort \
            /var/spool/qdaemon/tZouaaa
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Oct 16 10:07:29 2021.  Current Time: Jun 21 07:30:04 2022
Use local problem reporting procedures.

         0 Tue Jun 21 07:30:48 -03 2022 qdaemon: (WARNING): 0781-088 Queue prtff:@129_156_11_135 went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S 129_156_11_135 -P prtff -N /usr/lib/lpd/aixshort \
            /var/spool/qdaemon/t2uXqea
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Dec 21 09:47:14 2020.  Current Time: Jun 21 07:30:48 2022
Use local problem reporting procedures.

         0 Tue Jun 21 07:30:48 -03 2022 qdaemon: (WARNING): 0781-088 Queue prt31:@129_156_10_165 went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S 129_156_10_165 -P prt31 -N /usr/lib/lpd/aixshort \
            /var/spool/qdaemon/t5-Waaa
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Nov  9 08:11:21 2021.  Current Time: Jun 21 07:30:48 2022
Use local problem reporting procedures.

         0 Tue Jun 21 07:30:50 -03 2022 qdaemon: (WARNING): 0781-088 Queue prtn3:@172_16_107_22 went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S 172_16_107_22 -P prtn3 -N /usr/lib/lpd/aixshort \
            /var/spool/qdaemon/tLkiqaa
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Jul 19 05:16:40 2021.  Current Time: Jun 21 07:30:50 2022
Use local problem reporting procedures.

         0 Tue Jun 21 07:30:50 -03 2022 qdaemon: (WARNING): 0781-088 Queue prt43:@172_16_136_90 went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S 172_16_136_90 -P prt43 -N /usr/lib/lpd/aixshort \
            /var/spool/qdaemon/teaX7ea
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Apr 26 07:18:10 2021.  Current Time: Jun 21 07:30:50 2022
Use local problem reporting procedures.

         0 Tue Jun 21 07:30:50 -03 2022 qdaemon: (WARNING): 0781-088 Queue prt29:@172_16_127_30 went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S 172_16_127_30 -P prt29 -N /usr/lib/lpd/aixshort \
            /var/spool/qdaemon/tWST7aa
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Jul  1 07:34:49 2019.  Current Time: Jun 21 07:30:50 2022
Use local problem reporting procedures.

         0 Tue Jun 21 07:30:51 -03 2022 qdaemon: (WARNING): 0781-088 Queue prt64:@172_16_150_90 went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S 172_16_150_90 -P prt64 -N /usr/lib/lpd/aixshort \
            /var/spool/qdaemon/tVwg7aa
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Jan 18 06:27:49 2022.  Current Time: Jun 21 07:30:51 2022
Use local problem reporting procedures.

         0 Tue Jun 21 07:30:53 -03 2022 qdaemon: (WARNING): 0781-088 Queue prtg1:@172_20_104_10 went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S 172_20_104_10 -P prtg1 -N /usr/lib/lpd/aixshort \
            /var/spool/qdaemon/tBVS7aa
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Dec 15 12:55:22 2021.  Current Time: Jun 21 07:30:53 2022
Use local problem reporting procedures.

         0 Tue Jun 21 07:31:31 -03 2022 qdaemon: (WARNING): 0781-088 Queue prtf2:@iprint went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S iprint.ghc.com.br -P prtf2 -N /usr/lib/lpd/bsdshort \
            /var/spool/qdaemon/ttRnaaa
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Feb 11 09:21:58 2022.  Current Time: Jun 21 07:31:31 2022
Use local problem reporting procedures.

         0 Tue Jun 21 07:31:31 -03 2022 qdaemon: (WARNING): 0781-088 Queue prthc:@172_16_108_15 went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S 172_16_108_15 -P prthc -N /usr/lib/lpd/aixshort \
            /var/spool/qdaemon/tfv4Maa
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Nov  3 07:25:52 2020.  Current Time: Jun 21 07:31:31 2022
Use local problem reporting procedures.

         0 Tue Jun 21 07:31:38 -03 2022 qdaemon: (WARNING): 0781-088 Queue prt03:@iprint went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S iprint.ghc.com.br -P prt03 -N /usr/lib/lpd/bsdshort \
            /var/spool/qdaemon/trfAMea
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Jun  3 11:06:38 2022.  Current Time: Jun 21 07:31:38 2022
Use local problem reporting procedures.

         0 Tue Jun 21 08:30:04 -03 2022 qdaemon: (WARNING): 0781-088 Queue prt75:@iprint went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S iprint.ghc.com.br -P prt75 -N /usr/lib/lpd/bsdshort \
            /var/spool/qdaemon/tZouaaa
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Oct 16 10:07:29 2021.  Current Time: Jun 21 08:30:04 2022
Use local problem reporting procedures.

         0 Tue Jun 21 08:30:47 -03 2022 qdaemon: (WARNING): 0781-088 Queue prt31:@129_156_10_165 went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S 129_156_10_165 -P prt31 -N /usr/lib/lpd/aixshort \
            /var/spool/qdaemon/t5-Waaa
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Nov  9 08:11:21 2021.  Current Time: Jun 21 08:30:47 2022
Use local problem reporting procedures.

         0 Tue Jun 21 08:30:48 -03 2022 qdaemon: (WARNING): 0781-088 Queue prt29:@172_16_127_30 went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S 172_16_127_30 -P prt29 -N /usr/lib/lpd/aixshort \
            /var/spool/qdaemon/tWST7aa
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Jul  1 07:34:49 2019.  Current Time: Jun 21 08:30:48 2022
Use local problem reporting procedures.

         0 Tue Jun 21 08:30:49 -03 2022 qdaemon: (WARNING): 0781-088 Queue prtff:@129_156_11_135 went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S 129_156_11_135 -P prtff -N /usr/lib/lpd/aixshort \
            /var/spool/qdaemon/t2uXqea
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Dec 21 09:47:14 2020.  Current Time: Jun 21 08:30:49 2022
Use local problem reporting procedures.

         0 Tue Jun 21 08:30:50 -03 2022 qdaemon: (WARNING): 0781-088 Queue prt43:@172_16_136_90 went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S 172_16_136_90 -P prt43 -N /usr/lib/lpd/aixshort \
            /var/spool/qdaemon/teaX7ea
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Apr 26 07:18:10 2021.  Current Time: Jun 21 08:30:50 2022
Use local problem reporting procedures.

         0 Tue Jun 21 08:30:51 -03 2022 qdaemon: (WARNING): 0781-088 Queue prtn3:@172_16_107_22 went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S 172_16_107_22 -P prtn3 -N /usr/lib/lpd/aixshort \
            /var/spool/qdaemon/tLkiqaa
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Jul 19 05:16:40 2021.  Current Time: Jun 21 08:30:51 2022
Use local problem reporting procedures.

         0 Tue Jun 21 08:30:51 -03 2022 qdaemon: (WARNING): 0781-088 Queue prt64:@172_16_150_90 went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S 172_16_150_90 -P prt64 -N /usr/lib/lpd/aixshort \
            /var/spool/qdaemon/tVwg7aa
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Jan 18 06:27:49 2022.  Current Time: Jun 21 08:30:51 2022
Use local problem reporting procedures.

         0 Tue Jun 21 08:30:55 -03 2022 qdaemon: (WARNING): 0781-088 Queue prtg1:@172_20_104_10 went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S 172_20_104_10 -P prtg1 -N /usr/lib/lpd/aixshort \
            /var/spool/qdaemon/tBVS7aa
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Dec 15 12:55:22 2021.  Current Time: Jun 21 08:30:55 2022
Use local problem reporting procedures.

         0 Tue Jun 21 08:31:31 -03 2022 qdaemon: (WARNING): 0781-088 Queue prtf2:@iprint went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S iprint.ghc.com.br -P prtf2 -N /usr/lib/lpd/bsdshort \
            /var/spool/qdaemon/ttRnaaa
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Feb 11 09:21:58 2022.  Current Time: Jun 21 08:31:31 2022
Use local problem reporting procedures.

         0 Tue Jun 21 08:31:31 -03 2022 qdaemon: (WARNING): 0781-088 Queue prthc:@172_16_108_15 went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S 172_16_108_15 -P prthc -N /usr/lib/lpd/aixshort \
            /var/spool/qdaemon/tfv4Maa
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Nov  3 07:25:52 2020.  Current Time: Jun 21 08:31:31 2022
Use local problem reporting procedures.

         0 Tue Jun 21 08:31:38 -03 2022 qdaemon: (WARNING): 0781-088 Queue prt03:@iprint went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S iprint.ghc.com.br -P prt03 -N /usr/lib/lpd/bsdshort \
            /var/spool/qdaemon/trfAMea
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Jun  3 11:06:38 2022.  Current Time: Jun 21 08:31:38 2022
Use local problem reporting procedures.

         0 Tue Jun 21 09:30:04 -03 2022 qdaemon: (WARNING): 0781-088 Queue prt75:@iprint went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S iprint.ghc.com.br -P prt75 -N /usr/lib/lpd/bsdshort \
            /var/spool/qdaemon/tZouaaa
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Oct 16 10:07:29 2021.  Current Time: Jun 21 09:30:04 2022
Use local problem reporting procedures.

         0 Tue Jun 21 09:30:47 -03 2022 qdaemon: (WARNING): 0781-088 Queue prt31:@129_156_10_165 went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S 129_156_10_165 -P prt31 -N /usr/lib/lpd/aixshort \
            /var/spool/qdaemon/t5-Waaa
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Nov  9 08:11:21 2021.  Current Time: Jun 21 09:30:47 2022
Use local problem reporting procedures.

         0 Tue Jun 21 09:30:49 -03 2022 qdaemon: (WARNING): 0781-088 Queue prt29:@172_16_127_30 went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S 172_16_127_30 -P prt29 -N /usr/lib/lpd/aixshort \
            /var/spool/qdaemon/tWST7aa
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Jul  1 07:34:49 2019.  Current Time: Jun 21 09:30:49 2022
Use local problem reporting procedures.

         0 Tue Jun 21 09:30:50 -03 2022 qdaemon: (WARNING): 0781-088 Queue prtff:@129_156_11_135 went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S 129_156_11_135 -P prtff -N /usr/lib/lpd/aixshort \
            /var/spool/qdaemon/t2uXqea
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Dec 21 09:47:14 2020.  Current Time: Jun 21 09:30:50 2022
Use local problem reporting procedures.

         0 Tue Jun 21 09:30:50 -03 2022 qdaemon: (WARNING): 0781-088 Queue prtn3:@172_16_107_22 went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S 172_16_107_22 -P prtn3 -N /usr/lib/lpd/aixshort \
            /var/spool/qdaemon/tLkiqaa
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Jul 19 05:16:40 2021.  Current Time: Jun 21 09:30:50 2022
Use local problem reporting procedures.

         0 Tue Jun 21 09:30:51 -03 2022 qdaemon: (WARNING): 0781-088 Queue prt43:@172_16_136_90 went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S 172_16_136_90 -P prt43 -N /usr/lib/lpd/aixshort \
            /var/spool/qdaemon/teaX7ea
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Apr 26 07:18:10 2021.  Current Time: Jun 21 09:30:51 2022
Use local problem reporting procedures.

         0 Tue Jun 21 09:30:51 -03 2022 qdaemon: (WARNING): 0781-088 Queue prt64:@172_16_150_90 went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S 172_16_150_90 -P prt64 -N /usr/lib/lpd/aixshort \
            /var/spool/qdaemon/tVwg7aa
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Jan 18 06:27:49 2022.  Current Time: Jun 21 09:30:51 2022
Use local problem reporting procedures.

         0 Tue Jun 21 09:30:53 -03 2022 qdaemon: (WARNING): 0781-088 Queue prtg1:@172_20_104_10 went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S 172_20_104_10 -P prtg1 -N /usr/lib/lpd/aixshort \
            /var/spool/qdaemon/tBVS7aa
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Dec 15 12:55:22 2021.  Current Time: Jun 21 09:30:53 2022
Use local problem reporting procedures.

         0 Tue Jun 21 09:31:32 -03 2022 qdaemon: (WARNING): 0781-088 Queue prtf2:@iprint went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S iprint.ghc.com.br -P prtf2 -N /usr/lib/lpd/bsdshort \
            /var/spool/qdaemon/ttRnaaa
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Feb 11 09:21:58 2022.  Current Time: Jun 21 09:31:32 2022
Use local problem reporting procedures.

         0 Tue Jun 21 09:31:32 -03 2022 qdaemon: (WARNING): 0781-088 Queue prthc:@172_16_108_15 went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S 172_16_108_15 -P prthc -N /usr/lib/lpd/aixshort \
            /var/spool/qdaemon/tfv4Maa
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Nov  3 07:25:52 2020.  Current Time: Jun 21 09:31:32 2022
Use local problem reporting procedures.

         0 Tue Jun 21 09:31:39 -03 2022 qdaemon: (WARNING): 0781-088 Queue prt03:@iprint went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S iprint.ghc.com.br -P prt03 -N /usr/lib/lpd/bsdshort \
            /var/spool/qdaemon/trfAMea
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Jun  3 11:06:38 2022.  Current Time: Jun 21 09:31:39 2022
Use local problem reporting procedures.

         0 Tue Jun 21 10:30:03 -03 2022 qdaemon: (WARNING): 0781-088 Queue prt75:@iprint went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S iprint.ghc.com.br -P prt75 -N /usr/lib/lpd/bsdshort \
            /var/spool/qdaemon/tZouaaa
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Oct 16 10:07:29 2021.  Current Time: Jun 21 10:30:03 2022
Use local problem reporting procedures.

         0 Tue Jun 21 10:30:48 -03 2022 qdaemon: (WARNING): 0781-088 Queue prtff:@129_156_11_135 went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S 129_156_11_135 -P prtff -N /usr/lib/lpd/aixshort \
            /var/spool/qdaemon/t2uXqea
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Dec 21 09:47:14 2020.  Current Time: Jun 21 10:30:48 2022
Use local problem reporting procedures.

         0 Tue Jun 21 10:30:49 -03 2022 qdaemon: (WARNING): 0781-088 Queue prt31:@129_156_10_165 went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S 129_156_10_165 -P prt31 -N /usr/lib/lpd/aixshort \
            /var/spool/qdaemon/t5-Waaa
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Nov  9 08:11:21 2021.  Current Time: Jun 21 10:30:49 2022
Use local problem reporting procedures.

         0 Tue Jun 21 10:30:49 -03 2022 qdaemon: (WARNING): 0781-088 Queue prtn3:@172_16_107_22 went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S 172_16_107_22 -P prtn3 -N /usr/lib/lpd/aixshort \
            /var/spool/qdaemon/tLkiqaa
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Jul 19 05:16:40 2021.  Current Time: Jun 21 10:30:49 2022
Use local problem reporting procedures.

         0 Tue Jun 21 10:30:50 -03 2022 qdaemon: (WARNING): 0781-088 Queue prt43:@172_16_136_90 went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S 172_16_136_90 -P prt43 -N /usr/lib/lpd/aixshort \
            /var/spool/qdaemon/teaX7ea
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Apr 26 07:18:10 2021.  Current Time: Jun 21 10:30:50 2022
Use local problem reporting procedures.

         0 Tue Jun 21 10:30:50 -03 2022 qdaemon: (WARNING): 0781-088 Queue prt29:@172_16_127_30 went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S 172_16_127_30 -P prt29 -N /usr/lib/lpd/aixshort \
            /var/spool/qdaemon/tWST7aa
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Jul  1 07:34:49 2019.  Current Time: Jun 21 10:30:50 2022
Use local problem reporting procedures.

         0 Tue Jun 21 10:30:50 -03 2022 qdaemon: (WARNING): 0781-088 Queue prt64:@172_16_150_90 went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S 172_16_150_90 -P prt64 -N /usr/lib/lpd/aixshort \
            /var/spool/qdaemon/tVwg7aa
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Jan 18 06:27:49 2022.  Current Time: Jun 21 10:30:50 2022
Use local problem reporting procedures.

         0 Tue Jun 21 10:30:52 -03 2022 qdaemon: (WARNING): 0781-088 Queue prtg1:@172_20_104_10 went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S 172_20_104_10 -P prtg1 -N /usr/lib/lpd/aixshort \
            /var/spool/qdaemon/tBVS7aa
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Dec 15 12:55:22 2021.  Current Time: Jun 21 10:30:52 2022
Use local problem reporting procedures.

         0 Tue Jun 21 10:31:31 -03 2022 qdaemon: (WARNING): 0781-088 Queue prtf2:@iprint went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S iprint.ghc.com.br -P prtf2 -N /usr/lib/lpd/bsdshort \
            /var/spool/qdaemon/ttRnaaa
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Feb 11 09:21:58 2022.  Current Time: Jun 21 10:31:31 2022
Use local problem reporting procedures.

         0 Tue Jun 21 10:31:31 -03 2022 qdaemon: (WARNING): 0781-088 Queue prthc:@172_16_108_15 went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S 172_16_108_15 -P prthc -N /usr/lib/lpd/aixshort \
            /var/spool/qdaemon/tfv4Maa
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Nov  3 07:25:52 2020.  Current Time: Jun 21 10:31:31 2022
Use local problem reporting procedures.

         0 Tue Jun 21 10:31:35 -03 2022 qdaemon: (WARNING): 0781-088 Queue prt03:@iprint went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S iprint.ghc.com.br -P prt03 -N /usr/lib/lpd/bsdshort \
            /var/spool/qdaemon/trfAMea
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Jun  3 11:06:38 2022.  Current Time: Jun 21 10:31:35 2022
Use local problem reporting procedures.

         0 Tue Jun 21 12:30:04 -03 2022 qdaemon: (WARNING): 0781-088 Queue prt75:@iprint went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S iprint.ghc.com.br -P prt75 -N /usr/lib/lpd/bsdshort \
            /var/spool/qdaemon/tZouaaa
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Oct 16 10:07:29 2021.  Current Time: Jun 21 12:30:04 2022
Use local problem reporting procedures.

         0 Tue Jun 21 12:30:46 -03 2022 qdaemon: (WARNING): 0781-088 Queue prt31:@129_156_10_165 went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S 129_156_10_165 -P prt31 -N /usr/lib/lpd/aixshort \
            /var/spool/qdaemon/t5-Waaa
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Nov  9 08:11:21 2021.  Current Time: Jun 21 12:30:46 2022
Use local problem reporting procedures.

         0 Tue Jun 21 12:30:48 -03 2022 qdaemon: (WARNING): 0781-088 Queue prtn3:@172_16_107_22 went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S 172_16_107_22 -P prtn3 -N /usr/lib/lpd/aixshort \
            /var/spool/qdaemon/tLkiqaa
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Jul 19 05:16:40 2021.  Current Time: Jun 21 12:30:48 2022
Use local problem reporting procedures.

         0 Tue Jun 21 12:30:49 -03 2022 qdaemon: (WARNING): 0781-088 Queue prtff:@129_156_11_135 went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S 129_156_11_135 -P prtff -N /usr/lib/lpd/aixshort \
            /var/spool/qdaemon/t2uXqea
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Dec 21 09:47:14 2020.  Current Time: Jun 21 12:30:49 2022
Use local problem reporting procedures.

         0 Tue Jun 21 12:30:50 -03 2022 qdaemon: (WARNING): 0781-088 Queue prt29:@172_16_127_30 went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S 172_16_127_30 -P prt29 -N /usr/lib/lpd/aixshort \
            /var/spool/qdaemon/tWST7aa
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Jul  1 07:34:49 2019.  Current Time: Jun 21 12:30:50 2022
Use local problem reporting procedures.

         0 Tue Jun 21 12:30:50 -03 2022 qdaemon: (WARNING): 0781-088 Queue prt43:@172_16_136_90 went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S 172_16_136_90 -P prt43 -N /usr/lib/lpd/aixshort \
            /var/spool/qdaemon/teaX7ea
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Apr 26 07:18:10 2021.  Current Time: Jun 21 12:30:50 2022
Use local problem reporting procedures.

         0 Tue Jun 21 12:30:51 -03 2022 qdaemon: (WARNING): 0781-088 Queue prt64:@172_16_150_90 went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S 172_16_150_90 -P prt64 -N /usr/lib/lpd/aixshort \
            /var/spool/qdaemon/tVwg7aa
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Jan 18 06:27:49 2022.  Current Time: Jun 21 12:30:51 2022
Use local problem reporting procedures.

         0 Tue Jun 21 12:30:53 -03 2022 qdaemon: (WARNING): 0781-088 Queue prtg1:@172_20_104_10 went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S 172_20_104_10 -P prtg1 -N /usr/lib/lpd/aixshort \
            /var/spool/qdaemon/tBVS7aa
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Dec 15 12:55:22 2021.  Current Time: Jun 21 12:30:53 2022
Use local problem reporting procedures.

         0 Tue Jun 21 12:31:31 -03 2022 qdaemon: (WARNING): 0781-088 Queue prtf2:@iprint went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S iprint.ghc.com.br -P prtf2 -N /usr/lib/lpd/bsdshort \
            /var/spool/qdaemon/ttRnaaa
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Feb 11 09:21:58 2022.  Current Time: Jun 21 12:31:31 2022
Use local problem reporting procedures.

         0 Tue Jun 21 12:31:31 -03 2022 qdaemon: (WARNING): 0781-088 Queue prthc:@172_16_108_15 went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S 172_16_108_15 -P prthc -N /usr/lib/lpd/aixshort \
            /var/spool/qdaemon/tfv4Maa
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Nov  3 07:25:52 2020.  Current Time: Jun 21 12:31:31 2022
Use local problem reporting procedures.

         0 Tue Jun 21 12:31:35 -03 2022 qdaemon: (WARNING): 0781-088 Queue prt03:@iprint went down, job is still queued:
  Backend: /usr/lib/lpd/rembak -S iprint.ghc.com.br -P prt03 -N /usr/lib/lpd/bsdshort \
            /var/spool/qdaemon/trfAMea
  Backend Exit Value: EXITFATAL (0100)
  Job Submit Time: Jun  3 11:06:38 2022.  Current Time: Jun 21 12:31:35 2022
Use local problem reporting procedures.
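
Note on the qdaemon warnings above: the same set of remote queues (prt75, prt31, prtff, prtn3, prt43, prt29, prt64, prtg1, prtf2, prthc, prt03) goes down every hour with rembak exiting EXITFATAL (0100), and some of the stuck jobs were submitted as far back as Jul 2019. This is probably not the cause of the freeze, but qdaemon retries these jobs forever and keeps churning /var/spool. A possible cleanup, assuming the jobs really are dead (queue names copied from the log; verify before cancelling anything):

qchk -A                          # list all queues and their queued jobs
# as root, cancel every job on the dead queues:
for q in prt75 prt31 prtff prtn3 prt43 prt29 prt64 prtg1 prtf2 prthc prt03; do
    qcan -X -P $q
done
qadm -U prt75                    # bring a queue back up once its printer answers again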

---------------------------------------------------------------------------
LABEL:          CL_VAR_FULL
IDENTIFIER:     E5899EEB

Date/Time:       Tue Jun 21 10:29:46 -03 2022
Sequence Number: 21318
Machine Id:      00FB42D74C00
Node Id:         SISMED1
Class:           S
Type:            PERM
WPAR:            Global
Resource Name:   CAA (for RSCT)  

Description
/var filesystem is running low on space

Probable Causes
Unknown

Failure Causes
Unknown

    Recommended Actions
    RSCT could malfunction if /var gets full
    Increase the filesystem size or delete unwanted files

Detail Data
Percent full
          75
Percent threshold
          75
---------------------------------------------------------------------------
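
Worth noting: /var crossed its 75% monitoring threshold three times on the morning of the freeze (09:59, 10:15 and 10:29), and RSCT itself warns it can malfunction if /var fills. Between the qdaemon spool files above and the livedumps under /var/adm/ras/livedump further down, /var keeps growing. A quick check, and an online grow if the volume group has free partitions:

df -g /var                                        # current usage
du -sm /var/spool/qdaemon /var/adm/ras/livedump | sort -n
chfs -a size=+2G /var                             # grows the filesystem online
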
LABEL:          TTY_TTYHOG
IDENTIFIER:     0873CF9F

Date/Time:       Tue Jun 21 10:29:46 -03 2022
Sequence Number: 21317
Machine Id:      00FB42D74C00
Node Id:         SISMED1
Class:           S
Type:            TEMP
WPAR:            Global
Resource Name:   pts/58          

Description
TTYHOG OVER-RUN

Failure Causes
EXCESSIVE LOAD ON PROCESSOR

    Recommended Actions
    REDUCE SYSTEM LOAD.
    REDUCE SERIAL PORT BAUD RATE

Duplicates
Number of duplicates
           1
Time of first duplicate
Tue Jun 21 10:29:36 -03 2022
Time of last duplicate
Tue Jun 21 10:29:46 -03 2022
---------------------------------------------------------------------------
LABEL:          TTY_TTYHOG
IDENTIFIER:     0873CF9F

Date/Time:       Tue Jun 21 10:29:36 -03 2022
Sequence Number: 21316
Machine Id:      00FB42D74C00
Node Id:         SISMED1
Class:           S
Type:            TEMP
WPAR:            Global
Resource Name:   pts/58          

Description
TTYHOG OVER-RUN

Failure Causes
EXCESSIVE LOAD ON PROCESSOR

    Recommended Actions
    REDUCE SYSTEM LOAD.
    REDUCE SERIAL PORT BAUD RATE

---------------------------------------------------------------------------
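
The two TTYHOG over-runs are on pts/58, a pseudo-terminal, so the canned advice about reducing the serial port baud rate does not apply. On a pty this usually means the process reading the terminal stopped draining its input buffer, which would fit a session that stopped responding in the half hour after the freeze. To pull every occurrence and see who still holds the device (the pty may already be gone):

errpt -a -j 0873CF9F | pg        # all TTYHOG entries in detail
fuser /dev/pts/58                # PIDs with the pty open, if it still exists
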
LABEL:          CL_VAR_FULL
IDENTIFIER:     E5899EEB

Date/Time:       Tue Jun 21 10:15:05 -03 2022
Sequence Number: 21315
Machine Id:      00FB42D74C00
Node Id:         SISMED1
Class:           S
Type:            PERM
WPAR:            Global
Resource Name:   CAA (for RSCT)  

Description
/var filesystem is running low on space

Probable Causes
Unknown

Failure Causes
Unknown

    Recommended Actions
    RSCT could malfunction if /var gets full
    Increase the filesystem size or delete unwanted files

Detail Data
Percent full
          75
Percent threshold
          75
---------------------------------------------------------------------------
LABEL:          CL_VAR_FULL
IDENTIFIER:     E5899EEB

Date/Time:       Tue Jun 21 09:59:46 -03 2022
Sequence Number: 21314
Machine Id:      00FB42D74C00
Node Id:         SISMED1
Class:           S
Type:            PERM
WPAR:            Global
Resource Name:   CAA (for RSCT)  

Description
/var filesystem is running low on space

Probable Causes
Unknown

Failure Causes
Unknown

    Recommended Actions
    RSCT could malfunction if /var gets full
    Increase the filesystem size or delete unwanted files

Detail Data
Percent full
          75
Percent threshold
          75
---------------------------------------------------------------------------
LABEL:          STORAGERM_STARTED_S
IDENTIFIER:     EDFF8E9B

Date/Time:       Mon Jun 20 16:53:21 -03 2022
Sequence Number: 21313
Machine Id:      00FB42D74C00
Node Id:         SISMED1
Class:           O
Type:            INFO
WPAR:            Global
Resource Name:   StorageRM       

Description
IBM.StorageRM daemon has started.

Probable Causes
The RSCT Storage Resource Manager daemon (IBM.StorageRMd) has been started.

User Causes
The RSCT Storage Resource Manager daemon (IBM.StorageRMd) has been started.

    Recommended Actions
    None

Detail Data
DETECTING MODULE
RSCT,IBM.StorageRMd.C,1.52,326                
ERROR ID
                                          
REFERENCE CODE
                                          
---------------------------------------------------------------------------
LABEL:          CONFIGRM_ONLINE_ST
IDENTIFIER:     3B16518D

Date/Time:       Mon Jun 20 16:53:21 -03 2022
Sequence Number: 21312
Machine Id:      00FB42D74C00
Node Id:         SISMED1
Class:           S
Type:            INFO
WPAR:            Global
Resource Name:   ConfigRM        

Description
The node is online in the domain indicated in the detail data.

Probable Causes
A user ran the 'startrpdomain' or 'startrpnode' commands.
The node rebooted while the node was online.
The configuration manager recycled the node through an offline/online
transition to resynchronize the domain configuration or to recover
from some other failure.

User Causes
A user ran the 'startrpdomain' or 'startrpnode' commands.
The node rebooted while the node was online.
The configuration manager recycled the node through an offline/online
transition to resynchronize the domain configuration or to recover
from some other failure.

    Recommended Actions
    None

Detail Data
DETECTING MODULE
RSCT,PeerDomain.C,1.99.34.6,26599             
ERROR ID
                                          
REFERENCE CODE
                                          
Peer Domain Name
CL_SISMED
---------------------------------------------------------------------------
LABEL:          CONFIGRM_HASQUORUM_
IDENTIFIER:     4BDDFBCC

Date/Time:       Mon Jun 20 16:53:21 -03 2022
Sequence Number: 21311
Machine Id:      00FB42D74C00
Node Id:         SISMED1
Class:           S
Type:            INFO
WPAR:            Global
Resource Name:   ConfigRM        

Description
The operational quorum state of the active peer domain has changed to HAS_QUORUM.
In this state, cluster resources may be recovered and controlled as needed by
management applications.

Probable Causes
One or more nodes have come online in the peer domain.

User Causes
One or more nodes have come online in the peer domain.

    Recommended Actions
    None

Detail Data
DETECTING MODULE
RSCT,PeerDomain.C,1.99.34.6,20532             
ERROR ID
                                          
REFERENCE CODE
                                          
---------------------------------------------------------------------------
LABEL:          RMCD_INFO_3_ST
IDENTIFIER:     C369AE20

Date/Time:       Mon Jun 20 16:53:21 -03 2022
Sequence Number: 21310
Machine Id:      00FB42D74C00
Node Id:         SISMED1
Class:           O
Type:            INFO
WPAR:            Global
Resource Name:   RMCdaemon       

Description
The daemon is started.

Probable Causes
The Resource Monitoring and Control daemon has been started.

User Causes
The startsrc -s ctrmc command has been executed or
the rmcctrl -s command has been executed.

    Recommended Actions
    Confirm that the daemon should be started.

Detail Data
DETECTING MODULE
RSCT,rmcd_rmcproc.c,1.165.1.3,1018            
ERROR ID
6UsOO11l.BgW/9oX/33.2g0...................
REFERENCE CODE
                                          
rmcd_flags
C4C9 101C
---------------------------------------------------------------------------
LABEL:          GS_START_ST
IDENTIFIER:     AFA89905

Date/Time:       Mon Jun 20 16:53:14 -03 2022
Sequence Number: 21309
Machine Id:      00FB42D74C00
Node Id:         SISMED1
Class:           O
Type:            INFO
WPAR:            Global
Resource Name:   cthags          

Description
Group Services daemon started

Probable Causes
Daemon started during system startup
Daemon re-started automatically by SRC
Daemon started during installation
Daemon started manually by user

User Causes
Daemon started manually by user

    Recommended Actions
    Check that Group Services daemon is running

Detail Data
DETECTING MODULE
RSCT,pgsd.C,1.62.1.41,750                     
ERROR ID
63Y7ej0e.BgW/V8n/33.2g0...................
REFERENCE CODE
                                          
DIAGNOSTIC EXPLANATION
HAGS daemon started by SRC. Log file is /var/ct/3ZDzY69geHvP12DXtDlzeM/log/cthags/trace.
---------------------------------------------------------------------------
LABEL:          CONFIGRM_STARTED_ST
IDENTIFIER:     DE84C4DB

Date/Time:       Mon Jun 20 16:52:44 -03 2022
Sequence Number: 21308
Machine Id:      00FB42D74C00
Node Id:         SISMED1
Class:           O
Type:            INFO
WPAR:            Global
Resource Name:   ConfigRM        

Description
IBM.ConfigRM daemon has started.

Probable Causes
The RSCT Configuration Manager daemon (IBM.ConfigRMd) has been started.

User Causes
The RSCT Configuration Manager daemon (IBM.ConfigRMd) has been started.

    Recommended Actions
    None

Detail Data
DETECTING MODULE
RSCT,IBM.ConfigRMd.C,1.76.1.1,422             
ERROR ID
                                          
REFERENCE CODE
                                          
---------------------------------------------------------------------------
LABEL:          RMCD_INFO_0_ST
IDENTIFIER:     A6DF45AA

Date/Time:       Mon Jun 20 16:52:42 -03 2022
Sequence Number: 21307
Machine Id:      00FB42D74C00
Node Id:         SISMED1
Class:           O
Type:            INFO
WPAR:            Global
Resource Name:   RMCdaemon       

Description
The daemon is started.

Probable Causes
The Resource Monitoring and Control daemon has been started.

User Causes
The startsrc -s ctrmc command has been executed or
the rmcctrl -s command has been executed.

    Recommended Actions
    Confirm that the daemon should be started.

Detail Data
DETECTING MODULE
RSCT,rmcd.c,1.134.2.1,255                     
ERROR ID
6eKora08.BgW/IA/133.2g0...................
REFERENCE CODE
                                          
---------------------------------------------------------------------------
LABEL:          REBOOT_ID
IDENTIFIER:     2BFA76F6

Date/Time:       Mon Jun 20 16:52:07 -03 2022
Sequence Number: 21304
Machine Id:      00FB42D74C00
Node Id:         SISMED1
Class:           S
Type:            TEMP
WPAR:            Global
Resource Name:   SYSPROC         

Description
SYSTEM SHUTDOWN BY USER

Probable Causes
SYSTEM SHUTDOWN

Detail Data
USER ID
           0
0=SOFT IPL 1=HALT 2=TIME REBOOT
           0
TIME TO REBOOT (FOR TIMED REBOOT ONLY)
           0
---------------------------------------------------------------------------
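
Everything from here down predates the freeze: the REBOOT_ID entry records a user-initiated soft IPL at 16:52 on Jun 20, and the daemon-start entries higher up show RSCT/CAA coming back by 16:53. The quorum and LVM entries that follow were all logged at 16:50:40, i.e. during that shutdown, not during the 09:55 event. To look at the day of the freeze only (errpt -s takes mmddHHMMyy):

who -b                           # last system boot time
errpt -s 0621000022 | pg         # only entries logged on Jun 21
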
LABEL:          ERRLOG_ON
IDENTIFIER:     9DBCFDEE

Date/Time:       Mon Jun 20 16:52:25 -03 2022
Sequence Number: 21303
Machine Id:      00FB42D74C00
Node Id:         SISMED1
Class:           O
Type:            TEMP
WPAR:            Global
Resource Name:   errdemon        

Description
ERROR LOGGING TURNED ON

Probable Causes
ERRDEMON STARTED AUTOMATICALLY

User Causes
/USR/LIB/ERRDEMON COMMAND

    Recommended Actions
    NONE

---------------------------------------------------------------------------
LABEL:          ERRLOG_OFF
IDENTIFIER:     192AC071

Date/Time:       Mon Jun 20 16:50:41 -03 2022
Sequence Number: 21302
Machine Id:      00FB42D74C00
Node Id:         SISMED1
Class:           O
Type:            TEMP
WPAR:            Global
Resource Name:   errdemon        

Description
ERROR LOGGING TURNED OFF

Probable Causes
ERRSTOP COMMAND

User Causes
ERRSTOP COMMAND

    Recommended Actions
    RUN ERRDEAD COMMAND
    TURN ERROR LOGGING ON

---------------------------------------------------------------------------
LABEL:          LDMP_COMPLETE
IDENTIFIER:     AEA055D0

Date/Time:       Mon Jun 20 16:50:40 -03 2022
Sequence Number: 21301
Machine Id:      00FB42D74C00
Node Id:         SISMED1
Class:           S
Type:            INFO
WPAR:            Global
Resource Name:   livedump        

Description
Live dump complete

Detail Data
File name and message text
/var/adm/ras/livedump/lvm.medvgbkp.202206201950.00.DZ

---------------------------------------------------------------------------
LABEL:          LDMP_COMPLETE
IDENTIFIER:     AEA055D0

Date/Time:       Mon Jun 20 16:50:40 -03 2022
Sequence Number: 21300
Machine Id:      00FB42D74C00
Node Id:         SISMED1
Class:           S
Type:            INFO
WPAR:            Global
Resource Name:   livedump        

Description
Live dump complete

Detail Data
File name and message text
/var/adm/ras/livedump/lvm.medvg.202206201950.00.DZ

---------------------------------------------------------------------------
LABEL:          LVM_GS_CHILDGONE
IDENTIFIER:     4B219AEA

Date/Time:       Mon Jun 20 16:50:40 -03 2022
Sequence Number: 21299
Machine Id:      00FB42D74C00
Node Id:         SISMED1
Class:           U
Type:            PERM
WPAR:            Global
Resource Name:   LIBLVM          
Resource Class:  NONE
Resource Type:   NONE
Location:        

Description
Concurrent LVM daemon forced Volume Group offline

Probable Causes
Unrecoverable event detected by Concurrent LVM daemon

Failure Causes
Lost communication with remote nodes
Lost quorum

    Recommended Actions
    Ensure Cluster daemons are running
    CHECK ERROR LOG FOR ADDITIONAL RELATED ENTRIES
    Attempt to bring the Concurrent Volume Group back online
    IF PROBLEM PERSISTS, CONTACT APPROPRIATE SERVICE REPRESENTATIVE

Detail Data
Volume Group ID
00F9 8DEE 0000 4C00 0000 014B 506D D812
MAJOR/MINOR DEVICE NUMBER
0028 0000
SENSE DATA
0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000
---------------------------------------------------------------------------
LABEL:          LVM_SA_QUORCLOSE
IDENTIFIER:     CAD234BE

Date/Time:       Mon Jun 20 16:50:40 -03 2022
Sequence Number: 21298
Machine Id:      00FB42D74C00
Node Id:         SISMED1
Class:           H
Type:            UNKN
WPAR:            Global
Resource Name:   LVDD            
Resource Class:  NONE
Resource Type:   NONE
Location:        

Description
QUORUM LOST, VOLUME GROUP CLOSING

Probable Causes
PHYSICAL VOLUME UNAVAILABLE

Detail Data
MAJOR/MINOR DEVICE NUMBER
8000 0028 0000 0000
QUORUM COUNT
           2
ACTIVE COUNT
       65535
SENSE DATA
0000 0000 0000 0A9E 00F9 8DEE 0000 4C00 0000 014B 506D D812 0000 0000 0000 0000
---------------------------------------------------------------------------
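
These LVM_SA_QUORCLOSE / LVM_GS_CHILDGONE pairs are all timestamped 16:50:40, the moment of the Jun 20 shutdown: when the cluster daemons went down, gsclvmd lost its parent and the concurrent volume groups were forced offline. That is expected during a shutdown, but it is worth confirming the volume groups came back varied on in concurrent mode; the livedump file names suggest medvg, medvgbkp and ghchome:

lsvg -o                          # volume groups currently online
lsvg medvg | grep -i concurrent  # repeat for medvgbkp and ghchome
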
LABEL:          LVM_SA_QUORCLOSE
IDENTIFIER:     CAD234BE

Date/Time:       Mon Jun 20 16:50:40 -03 2022
Sequence Number: 21297
Machine Id:      00FB42D74C00
Node Id:         SISMED1
Class:           H
Type:            UNKN
WPAR:            Global
Resource Name:   LVDD            
Resource Class:  NONE
Resource Type:   NONE
Location:        

Description
QUORUM LOST, VOLUME GROUP CLOSING

Probable Causes
PHYSICAL VOLUME UNAVAILABLE

Detail Data
MAJOR/MINOR DEVICE NUMBER
8000 0028 0000 0000
QUORUM COUNT
           2
ACTIVE COUNT
       65535
SENSE DATA
0000 0000 0000 0A9E 00F9 8DEE 0000 4C00 0000 014B 506D D812 0000 0000 0000 0000
---------------------------------------------------------------------------
LABEL:          LVM_GS_CHILDGONE
IDENTIFIER:     4B219AEA

Date/Time:       Mon Jun 20 16:50:40 -03 2022
Sequence Number: 21296
Machine Id:      00FB42D74C00
Node Id:         SISMED1
Class:           U
Type:            PERM
WPAR:            Global
Resource Name:   LIBLVM          
Resource Class:  NONE
Resource Type:   NONE
Location:        

Description
Concurrent LVM daemon forced Volume Group offline

Probable Causes
Unrecoverable event detected by Concurrent LVM daemon

Failure Causes
Lost communication with remote nodes
Lost quorum

    Recommended Actions
    Ensure Cluster daemons are running
    CHECK ERROR LOG FOR ADDITIONAL RELATED ENTRIES
    Attempt to bring the Concurrent Volume Group back online
    IF PROBLEM PERSISTS, CONTACT APPROPRIATE SERVICE REPRESENTATIVE

Detail Data
Volume Group ID
00F9 8DEE 0000 4C00 0000 014B 506D D812
MAJOR/MINOR DEVICE NUMBER
0028 0000
SENSE DATA
0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000
---------------------------------------------------------------------------
LABEL:          LVM_GS_NOENV
IDENTIFIER:     9BD08D55

Date/Time:       Mon Jun 20 16:50:40 -03 2022
Sequence Number: 21295
Machine Id:      00FB42D74C00
Node Id:         SISMED1
Class:           U
Type:            INFO
WPAR:            Global
Resource Name:   LIBLVM          
Resource Class:  NONE
Resource Type:   NONE
Location:        

Description
Unable to start gsclvmd

Probable Causes
Unable to establish communication with Cluster daemons

Failure Causes
Unable to establish communication with Cluster daemons

    Recommended Actions
    Ensure Cluster daemons are running
    IF PROBLEM PERSISTS, CONTACT APPROPRIATE SERVICE REPRESENTATIVE

Detail Data
SENSE DATA
0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000
---------------------------------------------------------------------------
LABEL:          LVM_SA_QUORCLOSE
IDENTIFIER:     CAD234BE

Date/Time:       Mon Jun 20 16:50:40 -03 2022
Sequence Number: 21294
Machine Id:      00FB42D74C00
Node Id:         SISMED1
Class:           H
Type:            UNKN
WPAR:            Global
Resource Name:   LVDD            
Resource Class:  NONE
Resource Type:   NONE
Location:        

Description
QUORUM LOST, VOLUME GROUP CLOSING

Probable Causes
PHYSICAL VOLUME UNAVAILABLE

Detail Data
MAJOR/MINOR DEVICE NUMBER
8000 0028 0000 0000
QUORUM COUNT
           2
ACTIVE COUNT
       65535
SENSE DATA
0000 0000 0000 0A9E 00F9 8DEE 0000 4C00 0000 014B 506D D812 0000 0000 0000 0000
---------------------------------------------------------------------------
LABEL:          LVM_GS_CHILDGONE
IDENTIFIER:     4B219AEA

Date/Time:       Mon Jun 20 16:50:40 -03 2022
Sequence Number: 21293
Machine Id:      00FB42D74C00
Node Id:         SISMED1
Class:           U
Type:            PERM
WPAR:            Global
Resource Name:   LIBLVM          
Resource Class:  NONE
Resource Type:   NONE
Location:        

Description
Concurrent LVM daemon forced Volume Group offline

Probable Causes
Unrecoverable event detected by Concurrent LVM daemon

Failure Causes
Lost communication with remote nodes
Lost quorum

    Recommended Actions
    Ensure Cluster daemons are running
    CHECK ERROR LOG FOR ADDITIONAL RELATED ENTRIES
    Attempt to bring the Concurrent Volume Group back online
    IF PROBLEM PERSISTS, CONTACT APPROPRIATE SERVICE REPRESENTATIVE

Detail Data
Volume Group ID
00F9 8DEE 0000 4C00 0000 014B 506D D812
MAJOR/MINOR DEVICE NUMBER
0028 0000
SENSE DATA
0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000
---------------------------------------------------------------------------
LABEL:          LVM_GS_NOENV
IDENTIFIER:     9BD08D55

Date/Time:       Mon Jun 20 16:50:40 -03 2022
Sequence Number: 21292
Machine Id:      00FB42D74C00
Node Id:         SISMED1
Class:           U
Type:            INFO
WPAR:            Global
Resource Name:   LIBLVM          
Resource Class:  NONE
Resource Type:   NONE
Location:        

Description
Unable to start gsclvmd

Probable Causes
Unable to establish communication with Cluster daemons

Failure Causes
Unable to establish communication with Cluster daemons

    Recommended Actions
    Ensure Cluster daemons are running
    IF PROBLEM PERSISTS, CONTACT APPROPRIATE SERVICE REPRESENTATIVE

Detail Data
SENSE DATA
0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000
---------------------------------------------------------------------------
LABEL:          LVM_GS_CHILDGONE
IDENTIFIER:     4B219AEA

Date/Time:       Mon Jun 20 16:50:40 -03 2022
Sequence Number: 21291
Machine Id:      00FB42D74C00
Node Id:         SISMED1
Class:           U
Type:            PERM
WPAR:            Global
Resource Name:   LIBLVM          
Resource Class:  NONE
Resource Type:   NONE
Location:        

Description
Concurrent LVM daemon forced Volume Group offline

Probable Causes
Unrecoverable event detected by Concurrent LVM daemon

Failure Causes
Lost communication with remote nodes
Lost quorum

    Recommended Actions
    Ensure Cluster daemons are running
    CHECK ERROR LOG FOR ADDITIONAL RELATED ENTRIES
    Attempt to bring the Concurrent Volume Group back online
    IF PROBLEM PERSISTS, CONTACT APPROPRIATE SERVICE REPRESENTATIVE

Detail Data
Volume Group ID
00F9 8DEE 0000 4C00 0000 014A F455 2593
MAJOR/MINOR DEVICE NUMBER
0025 0000
SENSE DATA
0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000
---------------------------------------------------------------------------
LABEL:          LVM_SA_QUORCLOSE
IDENTIFIER:     CAD234BE

Date/Time:       Mon Jun 20 16:50:40 -03 2022
Sequence Number: 21290
Machine Id:      00FB42D74C00
Node Id:         SISMED1
Class:           H
Type:            UNKN
WPAR:            Global
Resource Name:   LVDD            
Resource Class:  NONE
Resource Type:   NONE
Location:        

Description
QUORUM LOST, VOLUME GROUP CLOSING

Probable Causes
PHYSICAL VOLUME UNAVAILABLE

Detail Data
MAJOR/MINOR DEVICE NUMBER
8000 0025 0000 0000
QUORUM COUNT
           6
ACTIVE COUNT
       65535
SENSE DATA
0000 0000 0000 0A9E 00F9 8DEE 0000 4C00 0000 014A F455 2593 00FB 42D7 15E1 EEBA
---------------------------------------------------------------------------
LABEL:          LVM_SA_QUORCLOSE
IDENTIFIER:     CAD234BE

Date/Time:       Mon Jun 20 16:50:40 -03 2022
Sequence Number: 21289
Machine Id:      00FB42D74C00
Node Id:         SISMED1
Class:           H
Type:            UNKN
WPAR:            Global
Resource Name:   LVDD            
Resource Class:  NONE
Resource Type:   NONE
Location:        

Description
QUORUM LOST, VOLUME GROUP CLOSING

Probable Causes
PHYSICAL VOLUME UNAVAILABLE

Detail Data
MAJOR/MINOR DEVICE NUMBER
8000 0025 0000 0000
QUORUM COUNT
           6
ACTIVE COUNT
       65535
SENSE DATA
0000 0000 0000 0A9E 00F9 8DEE 0000 4C00 0000 014A F455 2593 00FB 42D7 15E1 EEBA
---------------------------------------------------------------------------
LABEL:          LVM_GS_CHILDGONE
IDENTIFIER:     4B219AEA

Date/Time:       Mon Jun 20 16:50:40 -03 2022
Sequence Number: 21288
Machine Id:      00FB42D74C00
Node Id:         SISMED1
Class:           U
Type:            PERM
WPAR:            Global
Resource Name:   LIBLVM          
Resource Class:  NONE
Resource Type:   NONE
Location:        

Description
Concurrent LVM daemon forced Volume Group offline

Probable Causes
Unrecoverable event detected by Concurrent LVM daemon

Failure Causes
Lost communication with remote nodes
Lost quorum

    Recommended Actions
    Ensure Cluster daemons are running
    CHECK ERROR LOG FOR ADDITIONAL RELATED ENTRIES
    Attempt to bring the Concurrent Volume Group back online
    IF PROBLEM PERSISTS, CONTACT APPROPRIATE SERVICE REPRESENTATIVE

Detail Data
Volume Group ID
00F9 8DEE 0000 4C00 0000 014A F455 2593
MAJOR/MINOR DEVICE NUMBER
0025 0000
SENSE DATA
0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000
---------------------------------------------------------------------------
LABEL:          LVM_GS_NOENV
IDENTIFIER:     9BD08D55

Date/Time:       Mon Jun 20 16:50:40 -03 2022
Sequence Number: 21287
Machine Id:      00FB42D74C00
Node Id:         SISMED1
Class:           U
Type:            INFO
WPAR:            Global
Resource Name:   LIBLVM          
Resource Class:  NONE
Resource Type:   NONE
Location:        

Description
Unable to start gsclvmd

Probable Causes
Unable to establish communication with Cluster daemons

Failure Causes
Unable to establish communication with Cluster daemons

    Recommended Actions
    Ensure Cluster daemons are running
    IF PROBLEM PERSISTS, CONTACT APPROPRIATE SERVICE REPRESENTATIVE

Detail Data
SENSE DATA
0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000
---------------------------------------------------------------------------
LABEL:          LVM_SA_QUORCLOSE
IDENTIFIER:     CAD234BE

Date/Time:       Mon Jun 20 16:50:40 -03 2022
Sequence Number: 21286
Machine Id:      00FB42D74C00
Node Id:         SISMED1
Class:           H
Type:            UNKN
WPAR:            Global
Resource Name:   LVDD            
Resource Class:  NONE
Resource Type:   NONE
Location:        

Description
QUORUM LOST, VOLUME GROUP CLOSING

Probable Causes
PHYSICAL VOLUME UNAVAILABLE

Detail Data
MAJOR/MINOR DEVICE NUMBER
8000 0025 0000 0000
QUORUM COUNT
           6
ACTIVE COUNT
       65535
SENSE DATA
0000 0000 0000 0A9E 00F9 8DEE 0000 4C00 0000 014A F455 2593 00FB 42D7 15E1 EEBA
---------------------------------------------------------------------------
LABEL:          LVM_GS_CHILDGONE
IDENTIFIER:     4B219AEA

Date/Time:       Mon Jun 20 16:50:40 -03 2022
Sequence Number: 21285
Machine Id:      00FB42D74C00
Node Id:         SISMED1
Class:           U
Type:            PERM
WPAR:            Global
Resource Name:   LIBLVM          
Resource Class:  NONE
Resource Type:   NONE
Location:        

Description
Concurrent LVM daemon forced Volume Group offline

Probable Causes
Unrecoverable event detected by Concurrent LVM daemon

Failure Causes
Lost communication with remote nodes
Lost quorum

    Recommended Actions
    Ensure Cluster daemons are running
    CHECK ERROR LOG FOR ADDITIONAL RELATED ENTRIES
    Attempt to bring the Concurrent Volume Group back online
    IF PROBLEM PERSISTS, CONTACT APPROPRIATE SERVICE REPRESENTATIVE

Detail Data
Volume Group ID
00F9 8DEE 0000 4C00 0000 014A F455 2593
MAJOR/MINOR DEVICE NUMBER
0025 0000
SENSE DATA
0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000
---------------------------------------------------------------------------
LABEL:          LVM_GS_NOENV
IDENTIFIER:     9BD08D55

Date/Time:       Mon Jun 20 16:50:40 -03 2022
Sequence Number: 21284
Machine Id:      00FB42D74C00
Node Id:         SISMED1
Class:           U
Type:            INFO
WPAR:            Global
Resource Name:   LIBLVM          
Resource Class:  NONE
Resource Type:   NONE
Location:        

Description
Unable to start gsclvmd

Probable Causes
Unable to establish communication with Cluster daemons

Failure Causes
Unable to establish communication with Cluster daemons

    Recommended Actions
    Ensure Cluster daemons are running
    IF PROBLEM PERSISTS, CONTACT APPROPRIATE SERVICE REPRESENTATIVE

Detail Data
SENSE DATA
0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000
---------------------------------------------------------------------------
LABEL:          LDMP_COMPLETE
IDENTIFIER:     AEA055D0

Date/Time:       Mon Jun 20 16:50:40 -03 2022
Sequence Number: 21283
Machine Id:      00FB42D74C00
Node Id:         SISMED1
Class:           S
Type:            INFO
WPAR:            Global
Resource Name:   livedump        

Description
Live dump complete

Detail Data
File name and message text
/var/adm/ras/livedump/lvm.ghchome.202206201950.00.DZ

---------------------------------------------------------------------------
LABEL:          LVM_SA_QUORCLOSE
IDENTIFIER:     CAD234BE

Date/Time:       Mon Jun 20 16:50:40 -03 2022
Sequence Number: 21282
Machine Id:      00FB42D74C00
Node Id:         SISMED1
Class:           H
Type:            UNKN
WPAR:            Global
Resource Name:   LVDD            
Resource Class:  NONE
Resource Type:   NONE
Location:        

Description
QUORUM LOST, VOLUME GROUP CLOSING

Probable Causes
PHYSICAL VOLUME UNAVAILABLE

Detail Data
MAJOR/MINOR DEVICE NUMBER
8000 0025 0000 0000
QUORUM COUNT
           6
ACTIVE COUNT
       65535
SENSE DATA
0000 0000 0000 0A9E 00F9 8DEE 0000 4C00 0000 014A F455 2593 00FB 42D7 15E1 EEBA
---------------------------------------------------------------------------
LABEL:          LVM_SA_QUORCLOSE
IDENTIFIER:     CAD234BE

Date/Time:       Mon Jun 20 16:50:40 -03 2022
Sequence Number: 21281
Machine Id:      00FB42D74C00
Node Id:         SISMED1
Class:           H
Type:            UNKN
WPAR:            Global
Resource Name:   LVDD            
Resource Class:  NONE
Resource Type:   NONE
Location:        

Description
QUORUM LOST, VOLUME GROUP CLOSING

Probable Causes
PHYSICAL VOLUME UNAVAILABLE

Detail Data
MAJOR/MINOR DEVICE NUMBER
8000 0028 0000 0000
QUORUM COUNT
           2
ACTIVE COUNT
       65535
SENSE DATA
0000 0000 0000 0A9E 00F9 8DEE 0000 4C00 0000 014B 506D D812 0000 0000 0000 0000
---------------------------------------------------------------------------
LABEL:          LVM_GS_CONNECTIVITY
IDENTIFIER:     DB14100E

Date/Time:       Mon Jun 20 16:50:40 -03 2022
Sequence Number: 21280
Machine Id:      00FB42D74C00
Node Id:         SISMED1
Class:           U
Type:            PERM
WPAR:            Global
Resource Name:   LIBLVM          
Resource Class:  NONE
Resource Type:   NONE
Location:        

Description
Group Services detected a failure

Probable Causes
Unable to establish communication with Cluster daemons

Failure Causes
Concurrent Volume Group forced offline

    Recommended Actions
    CHECK ERROR LOG FOR ADDITIONAL RELATED ENTRIES
    IF PROBLEM PERSISTS, CONTACT APPROPRIATE SERVICE REPRESENTATIVE

Detail Data
Volume Group ID
00F9 8DEE 0000 4C00 0000 014B 506D D812
MAJOR/MINOR DEVICE NUMBER
0028 0000
SENSE DATA
0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000
---------------------------------------------------------------------------
LABEL:          LVM_GS_CONNECTIVITY
IDENTIFIER:     DB14100E

Date/Time:       Mon Jun 20 16:50:40 -03 2022
Sequence Number: 21279
Machine Id:      00FB42D74C00
Node Id:         SISMED1
Class:           U
Type:            PERM
WPAR:            Global
Resource Name:   LIBLVM          
Resource Class:  NONE
Resource Type:   NONE
Location:        

Description
Group Services detected a failure

Probable Causes
Unable to establish communication with Cluster daemons

Failure Causes
Concurrent Volume Group forced offline

    Recommended Actions
    CHECK ERROR LOG FOR ADDITIONAL RELATED ENTRIES
    IF PROBLEM PERSISTS, CONTACT APPROPRIATE SERVICE REPRESENTATIVE

Detail Data
Volume Group ID
00F9 8DEE 0000 4C00 0000 014A F455 2593
MAJOR/MINOR DEVICE NUMBER
0025 0000
SENSE DATA
0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000
---------------------------------------------------------------------------
LABEL:          STORAGERM_STOPPED_S
IDENTIFIER:     A8576C0D

Date/Time:       Mon Jun 20 16:50:40 -03 2022
Sequence Number: 21278
Machine Id:      00FB42D74C00
Node Id:         SISMED1
Class:           O
Type:            INFO
WPAR:            Global
Resource Name:   StorageRM       

Description
IBM.StorageRM daemon has been stopped.

Probable Causes
The RSCT Storage Resource Manager daemon(IBM.StorageRMd) has been stopped.

User Causes
The stopsrc -s IBM.StorageRM command has been executed.

    Recommended Actions
    Confirm that the daemon should be stopped. Normally, this daemon should
    not be stopped explicitly by the user.

Detail Data
DETECTING MODULE
RSCT,StorageRMDaemon.C,1.66,333               
ERROR ID
                                          
REFERENCE CODE
                                          
---------------------------------------------------------------------------
LABEL:          LVM_GS_CONNECTIVITY
IDENTIFIER:     DB14100E

Date/Time:       Mon Jun 20 16:50:40 -03 2022
Sequence Number: 21277
Machine Id:      00FB42D74C00
Node Id:         SISMED1
Class:           U
Type:            PERM
WPAR:            Global
Resource Name:   LIBLVM          
Resource Class:  NONE
Resource Type:   NONE
Location:        

Description
Group Services detected a failure

Probable Causes
Unable to establish communication with Cluster daemons

Failure Causes
Concurrent Volume Group forced offline

    Recommended Actions
    CHECK ERROR LOG FOR ADDITIONAL RELATED ENTRIES
    IF PROBLEM PERSISTS, CONTACT APPROPRIATE SERVICE REPRESENTATIVE

Detail Data
Volume Group ID
00F9 8DEE 0000 4C00 0000 014A F430 EF07
MAJOR/MINOR DEVICE NUMBER
0024 0000
SENSE DATA
0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000
---------------------------------------------------------------------------
LABEL:          LVM_SA_QUORCLOSE
IDENTIFIER:     CAD234BE

Date/Time:       Mon Jun 20 16:50:40 -03 2022
Sequence Number: 21276
Machine Id:      00FB42D74C00
Node Id:         SISMED1
Class:           H
Type:            UNKN
WPAR:            Global
Resource Name:   LVDD            
Resource Class:  NONE
Resource Type:   NONE
Location:        

Description
QUORUM LOST, VOLUME GROUP CLOSING

Probable Causes
PHYSICAL VOLUME UNAVAILABLE

Detail Data
MAJOR/MINOR DEVICE NUMBER
8000 0024 0000 0000
QUORUM COUNT
           3
ACTIVE COUNT
       65535
SENSE DATA
0000 0000 0000 0A9E 00F9 8DEE 0000 4C00 0000 014A F430 EF07 0000 0000 0000 0000
---------------------------------------------------------------------------
LABEL:          LVM_SA_QUORCLOSE
IDENTIFIER:     CAD234BE

Date/Time:       Mon Jun 20 16:50:40 -03 2022
Sequence Number: 21275
Machine Id:      00FB42D74C00
Node Id:         SISMED1
Class:           H
Type:            UNKN
WPAR:            Global
Resource Name:   LVDD            
Resource Class:  NONE
Resource Type:   NONE
Location:        

Description
QUORUM LOST, VOLUME GROUP CLOSING

Probable Causes
PHYSICAL VOLUME UNAVAILABLE

Detail Data
MAJOR/MINOR DEVICE NUMBER
8000 0024 0000 0000
QUORUM COUNT
           3
ACTIVE COUNT
       65535
SENSE DATA
0000 0000 0000 0A9E 00F9 8DEE 0000 4C00 0000 014A F430 EF07 0000 0000 0000 0000
---------------------------------------------------------------------------
LABEL:          LVM_GS_CONNECTIVITY
IDENTIFIER:     DB14100E

Date/Time:       Mon Jun 20 16:50:40 -03 2022
Sequence Number: 21274
Machine Id:      00FB42D74C00
Node Id:         SISMED1
Class:           U
Type:            PERM
WPAR:            Global
Resource Name:   LIBLVM          
Resource Class:  NONE
Resource Type:   NONE
Location:        

Description
Group Services detected a failure

Probable Causes
Unable to establish communication with Cluster daemons

Failure Causes
Concurrent Volume Group forced offline

    Recommended Actions
    CHECK ERROR LOG FOR ADDITIONAL RELATED ENTRIES
    IF PROBLEM PERSISTS, CONTACT APPROPRIATE SERVICE REPRESENTATIVE

Detail Data
Volume Group ID
00F9 8DEE 0000 4C00 0000 014A F430 EF07
MAJOR/MINOR DEVICE NUMBER
0024 0000
SENSE DATA
0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000
---------------------------------------------------------------------------
LABEL:          CONFIGRM_STOPPED_ST
IDENTIFIER:     447D3237

Date/Time:       Mon Jun 20 16:50:40 -03 2022
Sequence Number: 21273
Machine Id:      00FB42D74C00
Node Id:         SISMED1
Class:           O
Type:            INFO
WPAR:            Global
Resource Name:   ConfigRM        

Description
IBM.ConfigRM daemon has been stopped.

Probable Causes
The RSCT Configuration Manager daemon(IBM.ConfigRMd) has been stopped.

User Causes
The stopsrc -s IBM.ConfigRM command has been executed.

    Recommended Actions
    Confirm that the daemon should be stopped. Normally, this daemon should
    not be stopped explicitly by the user.

Detail Data
DETECTING MODULE
RSCT,ConfigRMDaemon.C,1.30,221                
ERROR ID
                                          
REFERENCE CODE
                                          
---------------------------------------------------------------------------
LABEL:          OPMSG
IDENTIFIER:     AA8AB241

Date/Time:       Mon Jun 20 16:50:31 -03 2022
Sequence Number: 21272
Machine Id:      00FB42D74C00
Node Id:         SISMED1
Class:           O
Type:            TEMP
WPAR:            Global
Resource Name:   OPERATOR        

Description
OPERATOR NOTIFICATION

User Causes
ERRLOGGER COMMAND

    Recommended Actions
    REVIEW DETAILED DATA

Detail Data
MESSAGE FROM ERRLOGGER COMMAND
clexit.rc : clstrmgrES terminated during AIX shutdown.
---------------------------------------------------------------------------
LABEL:          CORE_DUMP
IDENTIFIER:     A924A5FC

Date/Time:       Mon Jun 20 16:26:55 -03 2022
Sequence Number: 21271
Machine Id:      00FB42D74C00
Node Id:         SISMED1
Class:           S
Type:            PERM
WPAR:            Global
Resource Name:   SYSPROC         

Description
SOFTWARE PROGRAM ABNORMALLY TERMINATED

Probable Causes
SOFTWARE PROGRAM

User Causes
USER GENERATED SIGNAL

    Recommended Actions
    CORRECT THEN RETRY

Failure Causes
SOFTWARE PROGRAM

    Recommended Actions
    RERUN THE APPLICATION PROGRAM
    IF PROBLEM PERSISTS THEN DO THE FOLLOWING
    CONTACT APPROPRIATE SERVICE REPRESENTATIVE

Detail Data
SIGNAL NUMBER
           4
USER'S PROCESS ID:
              64028842
FILE SYSTEM SERIAL NUMBER
          18
INODE NUMBER
                     2
CORE FILE NAME
/msmmed/core
PROGRAM NAME
mumsm
STACK EXECUTION DISABLED
           0
COME FROM ADDRESS REGISTER
??
PROCESSOR ID
  hw_fru_id: 0
  hw_cpu_id: 1

ADDITIONAL INFORMATION
??
??
Unable to generate symptom string.
---------------------------------------------------------------------------
LABEL:          DMPCHK_TOOSMALL
IDENTIFIER:     E87EF1BE

Date/Time:       Mon Jun 20 15:00:00 -03 2022
Sequence Number: 21270
Machine Id:      00FB42D74C00
Node Id:         SISMED1
Class:           O
Type:            PEND
WPAR:            Global
Resource Name:   dumpcheck       

Description
The largest dump device is too small.

Probable Causes
Neither dump device is large enough to accommodate a system dump at this time.

    Recommended Actions
    Increase the size of one or both dump devices.

Detail Data
Largest dump device
lg_dumplv                                                                                                                       
Largest dump device size in kb
     1048576
Current estimated dump size in kb
     1092382
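
Side note on the DMPCHK_TOOSMALL entry just above: the estimated dump (1092382 KB) is about 43 MB larger than the largest dump device (1048576 KB), so a system dump would currently be truncated. A minimal sketch of the check and the fix, assuming lg_dumplv (the device named in the entry) still has free partitions in its volume group:

sysdumpdev -l          # list the primary and secondary dump devices
sysdumpdev -e          # re-estimate the required dump size
extendlv lg_dumplv 1   # grow lg_dumplv by one logical partition (the VG's PP size)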

In the errpt output, Roger, it looks like something made your disks unavailable; the quorum-loss entry below is the clearest sign (a quick way to filter for these entries is sketched after the excerpt).

LABEL:          LVM_SA_QUORCLOSE
IDENTIFIER:     CAD234BE

Date/Time:       Mon Jun 20 16:50:40 -03 2022
Sequence Number: 21297
Machine Id:      00FB42D74C00
Node Id:         SISMED1
Class:           H
Type:            UNKN
WPAR:            Global
Resource Name:   LVDD            
Resource Class:  NONE
Resource Type:   NONE
Location:        

Description
QUORUM LOST, VOLUME GROUP CLOSING

Probable Causes
PHYSICAL VOLUME UNAVAILABLE

Detail Data
MAJOR/MINOR DEVICE NUMBER
8000 0028 0000 0000
QUORUM COUNT
           2
ACTIVE COUNT
       65535
SENSE DATA
0000 0000 0000 0A9E 00F9 8DEE 0000 4C00 0000 014B 506D D812 0000 0000 0000 0000
---------------------------------------------------------------------------
LABEL:          LVM_GS_CHILDGONE
IDENTIFIER:     4B219AEA

Date/Time:       Mon Jun 20 16:50:40 -03 2022
Sequence Number: 21296
Machine Id:      00FB42D74C00
Node Id:         SISMED1
Class:           U
Type:            PERM
WPAR:            Global
Resource Name:   LIBLVM          
Resource Class:  NONE
Resource Type:   NONE
Location:        

Description
Concurrent LVM daemon forced Volume Group offline

Probable Causes
Unrecoverable event detected by Concurrent LVM daemon

Failure Causes
Lost communication with remote nodes
Lost quorum

    Recommended Actions
    Ensure Cluster daemons are running
    CHECK ERROR LOG FOR ADDITIONAL RELATED ENTRIES
    Attempt to bring the Concurrent Volume Group back online
    IF PROBLEM PERSISTS, CONTACT APPROPRIATE SERVICE REPRESENTATIVE

Detail Data
Volume Group ID
00F9 8DEE 0000 4C00 0000 014B 506D D812
MAJOR/MINOR DEVICE NUMBER
0028 0000
SENSE DATA
0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000
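
If you want to pull just these disk and quorum events for the window around the incident, errpt can filter by label and by time. A sketch using the labels that appear in this thread (the time window is only an example; errpt dates are mmddhhmmyy):

errpt -a -J LVM_SA_QUORCLOSE,LVM_GS_CONNECTIVITY,LVM_GS_CHILDGONE
errpt -a -s 0620160022 -e 0620175922   # bracket the Jun 20 16:50 event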

Make sure your hardware is healthy.
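
A minimal checklist for that on AIX, using standard LVM/MPIO commands (datavg is a placeholder; substitute your shared volume group):

lspath                 # every MPIO path should report Enabled
lspv                   # physical volumes and their VG membership
lsvg -o                # volume groups currently varied on
lsvg -p datavg         # PV state inside the VG; "missing" points at the failed disk
errpt -d H -T PERM     # permanent hardware-class errors only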