07 April 2017

Is there a performance impact when using Solaris ZFS lz4 compression?

Starting with Solaris 11.3, ZFS supports lz4 compression. Let's verify the performance impact of enabling lz4 compression with two concrete sample files:
first, a zip file containing Solaris 11 SRU updates, and second, a simple text logfile.

We disable the ZFS cache so that we measure the impact of I/O and compression rather than cached reads:
# zfs set primarycache=metadata v0123_db/source
# zfs set primarycache=metadata compressed/fs
# zfs set primarycache=metadata uncompressed/fs
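
A quick cross-check that the setting is active on the test filesystems (same property syntax as above):

```shell
# Confirm only metadata is cached on the datasets used in the tests:
zfs get -o name,value primarycache v0123_db/source compressed/fs uncompressed/fs
```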


Test 1 - zipped file

# time cp p25604852_1100_Solaris86-64_1of4.zip /uncompressed

real    1m27.571s
user    0m0.002s
sys     0m4.361s

-bash-4.4$ zfs get compression,compressratio,used uncompressed/fs
NAME             PROPERTY       VALUE  SOURCE
uncompressed/fs  compression    off    inherited from uncompressed
uncompressed/fs  compressratio  1.00x  -
uncompressed/fs  used           1.35G  -


# time cp p25604852_1100_Solaris86-64_1of4.zip /compressed

real    1m27.427s
user    0m0.002s
sys     0m4.408s

-bash-4.4$ zfs get compression,compressratio,used compressed/fs
NAME           PROPERTY       VALUE  SOURCE
compressed/fs  compression    lz4    inherited from compressed
compressed/fs  compressratio  1.00x  -
compressed/fs  used           1.34G  -

The copy takes the same time, so there is no performance loss, and because the file is
already zipped there are almost no space savings.



Test 2 - Log file with Text

# time cp framework.log /uncompressed/

real    0m24.608s
user    0m0.001s
sys     0m1.241s

-bash-4.4$ zfs get compression,compressratio,used uncompressed/fs
NAME             PROPERTY       VALUE  SOURCE
uncompressed/fs  compression    off    inherited from uncompressed
uncompressed/fs  compressratio  1.00x  -
uncompressed/fs  used           390M   -


# time cp framework.log /compressed/

real    0m24.495s
user    0m0.001s
sys     0m1.260s

-bash-4.4$ zfs get compression,compressratio,used compressed/fs
NAME           PROPERTY       VALUE  SOURCE
compressed/fs  compression    lz4    inherited from compressed
compressed/fs  compressratio  6.37x  -
compressed/fs  used           61.4M  -

Good compression (6.37x): we save about 330 MB of disk space here, with no impact on
duration. The SPARC S7 core is fast enough.
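
The reported compressratio can be cross-checked from the 'used' values above:

```shell
# 390 MB uncompressed vs. 61.4 MB compressed; close to the 6.37x
# ZFS reports (the 'used' values shown by zfs get are rounded):
awk 'BEGIN { printf "%.2fx\n", 390 / 61.4 }'
```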


And now Read Performance

# time cp /compressed/framework.log /tmp; time cp /uncompressed/framework.log /tmp

real    0m17.415s
user    0m0.001s
sys     0m1.354s

real    0m24.479s
user    0m0.001s
sys     0m1.389s

The compressed filesystem delivers better results: decompressing in the CPU is faster than doing the extra I/O, because roughly 6x the data has to be read from the uncompressed ZFS filesystem.
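
The read speedup can be quantified from the 'real' times above:

```shell
# Uncompressed read took 24.479s, compressed read 17.415s:
awk 'BEGIN { printf "%.2fx faster\n", 24.479 / 17.415 }'
```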


Summary
With the samples above we see no negative performance impact from enabling lz4 compression. If you store compressible text files, you save a lot of disk space while gaining read performance. We now enable lz4 on our ZPOOLs by default.
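
Enabling lz4 at the top of a pool makes all newly created child datasets inherit it. A minimal sketch, assuming a pool named mypool (the pool name is an assumption for this example):

```shell
# Set lz4 on the pool's top-level dataset; child datasets inherit it.
# Note: existing data stays uncompressed until it is rewritten.
zfs set compression=lz4 mypool
zfs get -r compression mypool
```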

31 October 2016

My Favorite Oracle Solaris Sessions at #doag2016 Conference in Nuremberg

The German Oracle User Group (DOAG) Conference is the largest Oracle conference in Europe.
It takes place each year in mid-November.

Here are the links to my favorite Solaris sessions on Wednesday, 16 November:

11:00 Room Stockholm Oracle Solaris 11 Zonen - Spezialitäten
          Marcel Hofstetter

12:00 Room Budapest Less Known Features of Solaris
          Jörg Möllenkamp

13:00 Room Stockholm Oracle Solaris - The Next Generation
          Joost Pronk & Franz Haberhauer

16:00 Room Stockholm End to End Diagnostics with Oracle Solaris Observability
          Eve Kleinknecht

See you in Nuremberg

08 September 2016

Performance Comparison SPARC T4 and SPARC S7

SPARC S7
JomaSoft replaces the SPARC T4-1 Server with the new SPARC S7-2 Server (2 sockets with 8 cores each at 4.26 GHz).

Read more about the SPARC S7-2 Server

Comparison
We created a 3 GB / 1 core LDom on our SPARC T4-1 Server running Solaris 11.3 SRU11, with all data stored on a SAN disk. Inside the LDom we installed our VDCF application and loaded our datacenter configuration into the VDCF sqlite database. Next we executed datacenter analyses such as patch comparisons, migration possibility calculations, and server configuration consistency checks. These analyses are traditional single-threaded workloads.

After the tests on the SPARC T4-1 we migrated our LDom to the new SPARC S7-2 Server. This allowed us to compare the systems using the same operating system, setup, and data, to make sure we compare "apples to apples".

The results for our workload showed 2x faster performance on the SPARC S7-2 Server. We are very happy with these results. This workload did not use the Software in Silicon features; it performed better simply because of the new CPU architecture (higher frequency, plus more and better CPU cache and memory).

In my view, the SPARC S7-2 Server is the ideal platform for customers replacing their old SPARC hardware, with an excellent price/performance ratio.

10 June 2016

How to change Solaris Zones configurations online

I assume you are aware that Solaris Zones have been one of the most valuable features of Solaris for years. In this post I focus on the "Live Zone Reconfiguration" feature, available since
Solaris 11.2 for Solaris Zones and since Solaris 11.3 for Kernel Zones. CPU pools, filesystems, network and disk configurations can be changed while Solaris Zones are running.

1. Limit CPU usage of a Solaris Zone using dedicated-cpu

By default Solaris Zones share the CPUs with the global and all other local Zones.
Our sample Zone currently uses 16 virtual CPUs.

# zlogin v0131 psrinfo | wc -l

16


We can now assign 4 dedicated virtual CPUs to be used by this Zone only.

# zonecfg -z v0131 -r "add dedicated-cpu; set ncpus=4; end"

zone 'v0131': Checking: Adding dedicated-cpu

zone 'v0131': Applying the changes

# zlogin v0131 psrinfo | wc -l

4
"zonecfg -r" changes the configuration of the running Zone only.
Make sure to run the command again without -r to make the configuration persistent across the next Zone reboot.

# zonecfg -z v0131 "add dedicated-cpu; set ncpus=4; end"


2. Create and mount an additional ZFS filesystem

# zfs create v0131_data/myapp

# zonecfg -z v0131 -r "add fs; set type=zfs; set dir=/myapp; set special=v0131_data/myapp; end"

zone 'v0131': Checking: Mounting fs dir=/myapp

zone 'v0131': Applying the changes


# zlogin v0131 mount | grep myapp

/myapp on /myapp read/write/setuid/devices/rstchown/nonbmand/exec/xattr/atime/zone=v0131/nozonemod/sharezone=4/dev=d50045 on Fri Jun 10 11:56:19 2016


And to make it persistent:

# zonecfg -z v0131 "add fs; set type=zfs; set dir=/myapp; set special=v0131_data/myapp; end"

Adding network interfaces and disk devices is similar to the samples above.
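
As a sketch, adding a virtual network interface to the running Zone follows the same live-then-persistent pattern (the linkname and lower-link values are assumptions for this example):

```shell
# Add an anet to the running zone (live change only) ...
zonecfg -z v0131 -r "add anet; set linkname=net1; set lower-link=net0; end"
# ... and repeat without -r to persist it across reboots:
zonecfg -z v0131 "add anet; set linkname=net1; set lower-link=net0; end"
```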