Well, 900GB - and paging space is 1TB? And 10% occupied, on average.
I'll just say: OK, but I am not really happy about it - "on average".
First off, I consider paging space "SLOOOOOOW" memory.
Memory hierarchy:
processor: registers; caches L1, L2, L3, L4 (all of this is on the processor); the LS (load-store) registers deal, ideally, with processor cache lines; then comes real memory and SRADs (aka resource affinity domains); and then disk storage (paging space for working memory, file-system space for 'permanent', aka persistent, data).
The idea that paging space is an alternative for real memory is no longer 'acceptable', imho. And, from your comment, it sounds as if application owners see paging space as a viable alternative.
Going back in time: when 32M (mega) bytes of system memory was considered large, paging space was a bit more accepted as an alternative, because it was nearly impossible to go above boundaries such as 32M or 64M. 16-bit was still considered 'large' rather than 'tiny', as it is now.
So, when 2^16 was large and 2^32 was just entering the picture, systems with paging space at 2x real memory were normal.
Here is a minimal situation:
root@x065:[/root]lparstat -i | grep Memory
Online Memory : 1024 MB
Maximum Memory : 4096 MB
Minimum Memory : 1024 MB
Desired Memory : 1024 MB
root@x065:[/root]lsps -s
Total Paging Space Percent Used
512MB 18%
Normally I would consider 18% paging space usage on the high side, but this is a stable system.
root@x065:[/root]svmon -G -O unit=MB
Unit: MB
-------------------------------------------------------------------------------
size inuse free pin virtual available
memory 1024.00 813.21 210.79 339.06 508.24 259.21
pg space 512.00 91.9
work pers clnt other
pin 264.56 0 0 74.5
in use 464.59 0 348.62
So, yes, the system has had a need to page out 92MB, and that remains allocated, but at the moment roughly 20% of system memory is free.
root@x065:[/root]vmstat -s
60748468 total address trans. faults
923790 page ins
1346183 page outs
70317 paging space page ins
181723 paging space page outs
0 total reclaims
29693425 zero filled pages faults
30344 executable filled pages faults
1222006 pages examined by clock
4 revolutions of the clock hand
329788 pages freed by the clock
200629 backtracks
13971 free frame waits
0 extend XPT waits
521134 pending I/O waits
1189333 start I/Os
947549 iodones
31003617 cpu context switches
4368283 device interrupts
5913498 software interrupts
16396623 decrementer interrupts
1904 mpc-sent interrupts
1904 mpc-receive interrupts
351637 phantom interrupts
0 traps
79721468 syscalls
root@x065:[/root]uptime
17:20:29 up 1 day 6:27, 1 user, load average: 0.99, 0.99, 1.00
From the above, I would say the statistics indicate 1024MB is on the small side (as I know real work was last done 24 hours ago).
* 4 revolutions of the clock hand
* 923790 page ins
* 1346183 page outs
* 70317 paging space page ins
* 181723 paging space page outs
This system has scanned all of its memory, looking for memory to steal, 4 times, and the paging space activity is roughly 10% of all paging I/O activity (the page ins/outs include the paging space page ins/outs).
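For reference, the arithmetic behind that "roughly 10%": 70317 + 181723 = 252040 paging space page ins/outs, versus 923790 + 1346183 = 2269973 total page ins/outs, i.e., about 252040 / 2269973 ≈ 11%.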
* Above is an example of some of the items a performance consultant should discuss with you.
* Since your application 'requires' paging space, your job as administrator is to arrange the paging space as separate areas on separate disks - from memory, 64GB max size each
* paging space gets allocated round-robin, so all paging space areas should be the same size (see the mkps sketch after this list)
* while there is a historical preference for all paging space to be in rootvg, I would not worry about that. A separate VG for the additional paging space can be considered, to make rootvg backups (i.e., mksysb) more manageable.
** Note - having 1TB (or whatever) of paging space does not make the backup larger - paging space is not a file system, so its contents are not in the mksysb
** paging space is no longer used for dumps (AIX 3.x used hd7; AIX 4 and AIX 5 used hd6, i.e., paging space; AIX 7 uses lg_dumplv)
* a more in-depth study of WHEN paging space is demanded
* a monitor should be started to track `lsps -s` statistics, e.g., using a cron job that runs every minute. It will be boring, but it will help with the WHEN (not directly who/what, although you could add extra logic to query the system when usage goes above 15%, 20%, or whatever). Having the percentage every minute will help target a trigger/threshold value. A minimal sketch follows below.
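A minimal sketch of such a monitor, assuming ksh, a hypothetical script location (/usr/local/bin/psmon.sh), and a hypothetical log file (/var/adm/psmon.log) - adjust the paths and the threshold to taste:

#!/usr/bin/ksh
# psmon.sh - log paging space usage once a minute; run from cron, e.g.:
#   * * * * * /usr/local/bin/psmon.sh
LOG=/var/adm/psmon.log        # hypothetical log location
THRESHOLD=15                  # example trigger value, in percent

# lsps -s prints a header line and one data line, e.g. "512MB  18%"
PCT=$(lsps -s | awk 'NR==2 { gsub("%","",$2); print $2 }')
print "$(date '+%Y-%m-%d %H:%M:%S') ${PCT}%" >> $LOG

# optionally capture more detail whenever usage crosses the threshold
if [ "$PCT" -gt "$THRESHOLD" ]; then
    svmon -G -O unit=MB >> $LOG 2>&1
fi

Once you can see WHEN the percentage climbs, you can correlate that with application activity to get closer to the who/what.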
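And, going back to the earlier points about equal-sized paging space areas on separate disks (optionally in their own VG): a rough sketch only - hdisk2, hdisk3, and the VG name pagevg are assumptions, and the size (in logical partitions) depends on your VG's PP size, so verify the names and sizes for your system first:

# optional: a separate volume group just for paging
mkvg -y pagevg hdisk2 hdisk3
# two paging spaces of the same size (here 64 logical partitions each),
# activated now (-n) and at every system restart (-a)
mkps -a -n -s 64 pagevg hdisk2
mkps -a -n -s 64 pagevg hdisk3
# verify: all areas the same size, all active
lsps -a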
Hope this helps,
Michael
p.s. (This would never have worked, for me, via LinkedIn or Twitter.)