:Info: total_time_meters: ttm: 2019-06-01 total_time_meters, ttm

Syntax as a command: ttm {-control_arg}


Function: prints the percentage of CPU time and the average CPU time
spent doing various tasks.


Control arguments:
-report_reset, -rr
   generates a full report and then performs the reset operation.
-reset, -rs
   resets the metering interval for the invoking process so that the
   interval begins at the time of the last call with -reset specified.
   If -reset has never been given in a process, the interval begins at
   system initialization time.
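
For example, to meter a particular interval, one might reset the
meters at the start of the interval and print a report at its end;
the report then covers only that interval:

   ttm -reset
   ...  (the workload of interest runs here)
   ttm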


Access required: This command requires access to phcs_ or
metering_gate_.


Notes: If the total_time_meters command is given with no control
argument, it prints a full report.

The following are brief descriptions of each of the variables printed
out by total_time_meters. Average CPU times are given in microseconds.
In the description below, system CPU time is the total amount of CPU
time generated by all configured CPUs. Idle time is CPU time consumed
by an idle process; an idle process is given a CPU only if no nonidle
process can be given that CPU. System nonidle time is the difference
between system CPU time and the aggregate idle time. In this
computation, MP idle time, work class idle time, and loading idle time
are considered overhead time and are included in system nonidle time.
That is, system idle time is defined to include only the idle time
caused by light load; it does not include the idle time caused by
system bottlenecks, which is counted as overhead.


The three columns in the display contain, respectively, the percent of
system CPU time, the percent of system nonidle time, and the average
time per instance for the overhead tasks. The percents of nonidle time
are included to assist the user in comparing values measured under
light load with those measured under heavy load. It cannot be
emphasized too often that measurements made under light load should
not be used to make tuning or configuration decisions.
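
The relationship between the first two columns can be checked with a
short calculation. The Python sketch below is not part of Multics; it
simply reworks figures taken from the example at the end of this
description. System nonidle time is what remains after the two
light-load idle categories, NMP idle and zero idle, are subtracted:

   # Figures from the example report at the end of this description.
   nmp_idle = 36.36     # percent of system CPU time
   zero_idle = 28.72    # percent of system CPU time

   # Only the light-load idle categories are excluded; MP, work
   # class, and loading idle count as overhead and stay in nonidle
   # time.
   nonidle = 100.0 - (nmp_idle + zero_idle)    # 34.92

   def percent_of_nonidle(pct_of_system):
       # Convert a percent of system CPU time to a percent of system
       # nonidle time (the %NI column).
       return 100.0 * pct_of_system / nonidle

   print(percent_of_nonidle(1.49))    # page faults: about 4.27

The report shows 4.28 for page faults, presumably because the command
computes from unrounded clock readings rather than from the rounded
percentages used here.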

Several of the overhead task names are indented to indicate that they
are part of the preceding, non-indented task. The percents for these
indented tasks are also included in the percent for the preceding
task. That is, in the example at the end of this description, page
faults used 1.49% of system CPU time; 0.14% was used by PC Loop Locks,
and the remaining 1.35% was used by other page fault overhead.


Page Faults
   is the percentage of CPU time spent handling page faults and the
   average time spent per page fault.
PC Loop Locks
   is the percentage of CPU time spent looping on the page table lock,
   and the average time spent per loop lock. This number is nonzero
   only on a multiprocessor system. This number is also included in
   page fault time.
PC Queue
   is the percentage of CPU time spent processing the core queue, and
   the average time spent per pass over the core queue. The core queue
   is used to prevent loop locks in page control on the interrupt
   side. If an interrupt for a page I/O is received when the page
   table is locked, an entry is made in the core queue. When the page
   table is next unlocked, the core queue is processed.
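
This deferral pattern can be pictured schematically. The Python
sketch below is not the actual page control code; the names and
structure are assumptions made for illustration only. The fault side
spins on the lock (a loop lock), while the interrupt side queues its
work instead, and whoever unlocks the page table drains the queue:

   import collections
   import threading

   page_table_lock = threading.Lock()
   core_queue = collections.deque()    # deferred page I/O completions

   def fault_side(do_work):
       # Fault side: spin ("loop lock") until the page table is free.
       while not page_table_lock.acquire(blocking=False):
           pass                        # counted as PC Loop Lock time
       do_work()
       unlock_page_table()

   def interrupt_side(io_completion):
       # Interrupt side: never spin; defer the work if the lock is
       # held, so interrupt handlers do not loop lock.
       if page_table_lock.acquire(blocking=False):
           io_completion()
           unlock_page_table()
       else:
           core_queue.append(io_completion)

   def unlock_page_table():
       # Process queued completions before releasing the lock; this
       # is the PC Queue time. A real implementation must also close
       # the race between a late enqueue and the release.
       while core_queue:
           core_queue.popleft()()
       page_table_lock.release()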


Seg Faults
   is the percentage of CPU time spent handling segment faults, and
   the average time spent per segment fault. These values do not
   include the time spent handling page faults that occurred during
   segment fault handling.
Bound Faults
   is the percentage of CPU time spent handling bound faults and the
   average time spent per bound fault. These values do not include
   time spent handling page faults that occurred during bound fault
   processing.
Interrupts
   is the percentage of CPU time spent handling interrupts, and the
   average time spent per interrupt.


Other Fault
   is the percentage of CPU time spent handling certain other faults.
   The fault processing time included is fault handling time that is
   not charged to the user process as virtual CPU time and that does
   not appear elsewhere in the total_time_meters output; i.e., it is
   not page fault, segment fault, or bound fault processing. The vast
   majority of the time included as Other Fault processing is related
   to the processing of connect faults and timer_runout faults.
Getwork
   is the percentage of CPU time spent in the getwork function of
   traffic control, and the average time spent per pass through
   getwork. The getwork routine is used to select a process to run on
   a CPU and to switch address spaces to that process. This number is
   also included in other fault time.
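
The selection getwork performs might be pictured as in the Python
sketch below. It is a schematic model only; the ready list structure
and names are assumptions, not the traffic control implementation. It
shows the rule, noted earlier, that an idle process is given a CPU
only if no nonidle process can be given that CPU:

   class Process:
       def __init__(self, name, eligible=False, waiting=False):
           self.name = name
           self.eligible = eligible
           self.waiting = waiting

   def switch_address_space(process):
       pass    # stand-in for switching to the process's address space

   def getwork(ready_list, idle_process):
       # Scan the ready list in scheduling order for an eligible,
       # nonwaiting, nonidle process.
       for process in ready_list:
           if process.eligible and not process.waiting:
               switch_address_space(process)
               return process
       # None found: only now does the idle process get the CPU.
       switch_address_space(idle_process)
       return idle_process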


TC Loop Locks
   is the percentage of CPU time spent looping on a traffic control
   lock, and the average time spent per loop lock. The locks included
   in this category are the global traffic control lock and the
   individual Active Process Table Entry (APTE) locks. This time is
   nonzero only on a multiprocessor system. This number is also
   included in other fault time.
Post Purging
   is the percentage of CPU time spent post purging processes that
   have lost eligibility, and the average time spent per post purge.
   Post purging a process involves moving all of its per-process
   pages that are in main memory into the "most recently used"
   position in the core map and computing the working set of the
   process. This time is nonzero only if the "post_purge" tuning
   parameter is set to "on." This number is also included in other
   fault time.

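The core map operation described above can be modeled simply. The
Python sketch below is an assumed illustration, not page control
code: each of the process's in-memory pages is moved to the most
recently used end of an ordered core map, and the count of such pages
gives the working set figure:

   from collections import OrderedDict

   # Assumed model: the core map as frames in least recently used
   # order first, each mapped to the process owning its page.
   core_map = OrderedDict()

   def post_purge(process):
       # Move the process's pages to the MRU end and count them.
       working_set = 0
       for frame, owner in list(core_map.items()):
           if owner is process:
               core_map.move_to_end(frame)    # MRU position
               working_set += 1
       process.working_set = working_set      # its computed working set
       return working_set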

MP Idle
   is the multiprogramming idle. This is the percentage of CPU time
   that is spent idling when processes are contending for eligibility,
   but not all contending processes are eligible. This occurs because
   some site-defined or system limit on eligibility has been reached,
   e.g., the maximum number of eligible processes (tuning parameter
   "max_eligible"), the maximum number of ring 0 stacks (tuning
   parameter "max_max_eligible"), the per-work-class maximum number of
   eligible processes, the working set limit, etc. MP idle is CPU time
   wasted in idling because the eligibility limits are set too low for
   the configuration, or because there is not enough memory in the
   configuration to hold the working sets of a larger number of
   eligible processes.


Work Class Idle
   is the percentage of CPU time spent idling because the only
   processes that could have been run belonged to work classes that
   had used their maximum percentage of CPU time. Setting upper limits
   on work classes causes the system to go idle rather than run
   processes in those work classes that have reached their maximum
   percentage. This meter indicates the percent of CPU time wasted in
   idling because of the setting of these limits.


Loading Idle
   is the percentage of CPU time that is spent idling when processes
   are contending for eligibility, not all contending processes can be
   made eligible, and some eligible processes are being loaded. Being
   loaded means wiring the two per-process pages that must be in main
   memory in order for a process to run--the first page of the
   descriptor segment (DSEG) and the first page of the process data
   segment (PDS).


NMP Idle
   is the nonmultiprogramming idle--the percentage of system CPU time
   that is spent idling when all processes contending for eligibility
   are eligible. Time is charged to NMP idle under two different
   circumstances: 1) there are fewer processes contending for
   eligibility than there are processors in the configuration; 2)
   there are fewer non-waiting processes than there are processors in
   the configuration, that is, most of the eligible processes are
   waiting for system events such as page faults, and no additional
   processes are contending for eligibility. Both of these
   circumstances are caused by light load; therefore NMP idle time,
   along with zero idle time, is subtracted from system CPU time to
   get system nonidle time.


Zero Idle
   is the percentage of system CPU time that is spent idling when no
   processes are ready and contending for eligibility.
Other Overhead
   is the percentage of system CPU time that is overhead but cannot be
   attributed to any of the above categories of overhead. This is
   almost entirely instrumentation artifact, due to a small but
   indeterminable amount of time between the occurrence of a fault or
   interrupt and the reading of the system clock which begins the
   charging of time to some overhead function. Due to hardware
   features such as cache memory and associative memory, this time is
   not constant per fault, even though the same instruction sequence
   is executed each time. Other Overhead represents the effect of this
   nondeterminism.


Virtual CPU Time
   is the percent of CPU time delivered to user processes as virtual
   CPU time. Virtual CPU time is time spent running user ring code
   (commands, application programs, etc.) or inner ring code in direct
   response to user ring requests via gate calls. System virtual CPU
   time is total system CPU time less all system overhead and idle
   time. It is the sum of the virtual CPU time charged to all
   processes. One objective of tuning is to maximize virtual CPU time.
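
This definition can be checked against the example report below. The
Python lines that follow are not part of Multics; they simply total
the displayed top-level overhead and idle figures and subtract them
from 100%. PC Queue is totaled separately here on the assumption that
it is charged apart from page fault time (unlike PC Loop Locks, its
description above does not say it is included in that figure), which
makes the example balance exactly:

   overhead_and_idle = [
       1.49,    # Page faults (already includes PC Loop Locks)
       0.17,    # PC Queue (assumed charged separately; see above)
       0.84,    # Seg Faults
       0.05,    # Bound Faults
       2.66,    # Interrupts
       3.17,    # Other Fault (includes Getwork, TC Loop Locks,
                #   and Post Purging)
       0.20,    # MP Idle
       0.09,    # Work Class Idle
       0.02,    # Loading Idle
       36.36,   # NMP Idle
       28.72,   # Zero Idle
       0.10,    # Other Overhead
   ]
   print(round(100.0 - sum(overhead_and_idle), 2))    # 26.13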


Examples: The following is an example of the information printed when
the total_time_meters command is invoked with no control argument.

Total metering time 91:33:53

                      %     %NI      AVE
Page faults          1.49   4.28   2301.466
  PC Loop Locks      0.14   0.41   3439.733
  PC Queue           0.17   0.49    306.381
Seg Faults           0.84   2.40   9628.827
Bound Faults         0.05   0.14  15850.365
Interrupts           2.66   7.61   1959.442
Other Fault          3.17   9.07
  Getwork            1.49   4.27    638.160
  TC Loop Locks      0.08   0.24    309.842
  Post Purging       0.09   0.25    790.584
MP Idle              0.20   0.58
Work Class Idle      0.09   0.26
Loading Idle         0.02   0.05
NMP Idle            36.36
Zero Idle           28.72
Other Overhead       0.10   0.29
Virtual CPU Time    26.13  74.84


:Internal: history_comment.gi: 2019-06-01 history_comment

/****^  HISTORY COMMENTS:
  1) change(2019-06-01,Swenson), approve(2019-06-01,MCR10060),
     audit(2019-06-01,GDixon), install(2019-06-01,MR12.6g-0024):
     Add referenced, but missing, example.
                                                   END HISTORY COMMENTS */