Author Topic: JFS2 and fragmentation  (Read 52629 times)


VincentL

  • Jr. Member
  • **
  • Posts: 5
  • Karma: +0/-0
JFS2 and fragmentation
« on: December 01, 2014, 11:48:53 AM »
Hi,

I've run performance tests with Netbackup on a jfs2 filesystem with a 4KB block size.

1/ Back up one big file: 180MB/s.
2/ Copy this file 4 times in parallel (nohup cp -rp test test1 & nohup cp -rp test test2 & nohup cp -rp test test3 & nohup cp -rp test test4 &)
3/ Back up one of the new files (test1 for example): 50MB/s.

I imagine this difference is due to file fragmentation. But why is j2_maxPageReadAhead not masking the file fragmentation?

Is it possible to optimize jfs2 for this?
Is there a command to see a file's fragmentation? Maybe defragfs?

For information: I used 8 "cp" commands in parallel to speed up a database copy.

Thanks.

Michael

  • Administrator
  • Hero Member
  • *****
  • Posts: 1339
  • Karma: +0/-0
Re: JFS2 and fragmentation
« Reply #1 on: December 04, 2014, 08:31:41 AM »
Let's take the easy question first:
Quote
why is j2_maxPageReadAhead not masking the file fragmentation
A couple of reasons come to mind:
  • 1. The default value is too small to notice (see the ioo sketch just after this list for how to check it).
  • 2. More likely in this case: the file is already cached, so it is not being paged in from disk at all.
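To check or change the tunable, something along these lines should work with ioo (the 512 below is only an illustration, not a recommendation for your system):

ioo -a | grep j2_                  # list the current jfs2 tunables, including j2_maxPageReadAhead
ioo -o j2_maxPageReadAhead=512     # raise the maximum read-ahead for the running system
ioo -p -o j2_maxPageReadAhead=512  # same, but also persistent across reboots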
An observation: whenever possible, jfs2 organizes data in 16k "chunks" (my word), even though the physical disk block may still be 4k. This was one of the performance issues over 10 years ago with 32-bit kernels, which were built around 4k page file systems (or jfs(1)), rather than the 64-bit kernel (the only kernel available since AIX 6.1). 64-bit kernels are optimized for 16k file system "chunks".
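If you want to confirm what block size a given filesystem was created with, lsfs -q should report it (the mount point is just a placeholder):

lsfs -q /backup    # the verbose output includes the jfs2 block size the filesystem was built with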

The next "easy" question: file fragmentation.
After your copies complete, look at:
/usr/bin/fileplace /some/file/name

While I recommend reading the man page, in particular compare the -l and -p flags. ALSO, when dealing with a new file, run the sync command first to ensure that the file information is up to date in the LVM (for older files, more than a minute old, this is not necessary).
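As a sketch, the sequence I would run on a freshly copied file (the path is just a placeholder):

sync                            # make sure the new file's block map is committed
fileplace -l /backup/test1      # logical placement within the filesystem
fileplace -pv /backup/test1     # physical placement on the volume, with a fragment/sequentiality summary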

Getting to the more difficult stuff:
File writes depend on many things. I would be interested in the amount of cache space available (was a high rate of page stealing occurring?); the type of disk subsystem; the disk queue depth; and, if the storage is virtualized, whether you are writing to a spindle in the storage or to a LUN; etc.
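For reference, a few commands that would answer some of those questions (hdisk0 is only an example device name):

vmstat 5 3                          # watch the fr/sr columns for page-stealing activity
vmstat -v                           # shows how much memory is occupied by file pages
lsattr -El hdisk0 -a queue_depth    # configured queue depth for the disk
iostat -D hdisk0 5 3                # per-disk service times and queue statistics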
Also, the 180 MByte/s result feels slow on a modern system (on my personal systems that would be very fast, but on an MPIO SAN with 8/16 Gbit ports it could be considered slow).

So, what are we writing to?

Note: to test "readahead", first unmount and remount the filesystem (so the file is no longer cached), then use dd if=/some/big/file of=/dev/null bs={use different sizes, e.g., 4k, 8k, 16k, 64k} and compare the results. To test write speed (single copy) I use almost the same command, but with if=/dev/zero and of=/some/big/newfile.
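To make that concrete, a rough sketch (paths and mount point are placeholders; the umount/mount simply clears the cached pages so read-ahead is really exercised):

umount /backup; mount /backup                      # drop the cached pages for the filesystem
dd if=/backup/test of=/dev/null bs=4k
dd if=/backup/test of=/dev/null bs=64k             # compare throughput across block sizes
dd if=/dev/zero of=/backup/newfile bs=64k count=32768   # single-stream write test, about 2GB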