Let's take the easy question first:
why j2_maxPageReadAhead is not masking the file fragmentation
A couple of reasons come to mind:
- 1. The default value is too small to have a noticeable effect
- 2. More likely in this case: the file is already cached, so it is not being paged in.
An observation: jfs2 organizes data, whenever possible, in 16k "chunks" (my word). While the physical disk block may still be 4k, jfs2 groups data into these 16k chunks whenever it can. This was one of the performance issues over 10 years ago when using 32-bit kernels, which were built for 4k-page file systems (or jfs(1)), rather than a 64-bit kernel (the only size available since AIX 6.1). 64-bit kernels are optimized for 16k file system chunks.
The next "easy" question: file fragmentation
After your copies complete, look at:
/usr/bin/fileplace /some/file/name
While I recommend reading the man page, in particular compare the -l and -p flags. Also, when dealing with a new file, run the
sync command first to ensure that the file information in the LVM is up to date (for older files, more than a minute old, this is not necessary).
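As a hedged sketch of that workflow: the file name below is a made-up stand-in for a freshly copied file, and since fileplace is AIX-specific the call is guarded so the sketch degrades gracefully elsewhere.

```shell
# Hypothetical test file standing in for a freshly copied file.
FILE=/tmp/copy_test.dat
dd if=/dev/zero of="$FILE" bs=1M count=8 2>/dev/null

sync    # flush so the LVM's view of the new file is current

if command -v fileplace >/dev/null 2>&1; then
    /usr/bin/fileplace -l "$FILE"   # logical view: fragments within the file system
    /usr/bin/fileplace -p "$FILE"   # physical view: placement on the underlying volumes
else
    echo "fileplace not available on this host (AIX-only)"
fi
```

Comparing the -l and -p output on the same file is what shows whether logical contiguity actually maps to contiguous physical placement.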
Getting to the more difficult stuff:
File writes are dependent on many things. I would be interested in the amount of cache space available (was a high rate of page stealing occurring?); what type of disk subsystem; disk queue depth; if virtualized storage, whether you are writing to a spindle in the storage or to a LUN; etc.
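To put some numbers behind those questions, a few quick checks; this is only a sketch, hdisk0 is a placeholder disk name, and each command is guarded because the exact tools vary by platform.

```shell
# Memory/paging pressure: page-steal counters appear in vmstat's summary.
{ vmstat -s 2>/dev/null || true; } > /tmp/paging.log
head -6 /tmp/paging.log

# Per-disk service times and utilization while the copy runs (iostat may not be installed).
command -v iostat >/dev/null 2>&1 && iostat 1 2 | tail -5 || true

# AIX only: the configured queue depth of a disk (hdisk0 is a placeholder name).
command -v lsattr >/dev/null 2>&1 && lsattr -El hdisk0 -a queue_depth 2>/dev/null || true
```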
Also, the 180 MByte write feels slow for a modern system (on my personal systems that would be very fast, but on an MPIO SAN with 8/16 Gbit ports it could be considered slow).
So, what are we writing to?
Note: to test "readahead", first unmount and remount the filesystem (so cached pages do not mask the effect), then use dd if=/some/big/file of=/dev/null bs={try different sizes, e.g., 4k, 8k, 16k, 64k} and compare the results. To test write speed (single copy) I use almost the same command, but with if=/dev/zero and of=/some/big/newfile
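A sketch of both tests; the paths and sizes are examples, and on a real run you would unmount/remount the filesystem first so the file is not already cached.

```shell
# Create a 64 MB test file to read back (stands in for /some/big/file).
BIG=/tmp/bigfile.dat
dd if=/dev/zero of="$BIG" bs=1M count=64 2>/dev/null

# Read test: same file, different block sizes; dd's summary line reports the rate.
for BS in 4k 8k 16k 64k; do
    echo "read bs=$BS"
    dd if="$BIG" of=/dev/null bs="$BS" 2>&1 | tail -1
done

# Write test (single copy): same idea, but reading zeros and writing a new file.
dd if=/dev/zero of=/tmp/newfile.dat bs=64k count=1024 2>&1 | tail -1
```

If readahead is working, the larger block sizes should show little advantage on the second and later runs, since the kernel is already fetching ahead of the reader.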