Saying Goodbye to an Old (GPFS) Friend

by sark

I'm currently sitting at the kitchen table of a self-catering holiday let in Sheringham on the east coast of England.  Instead of holiday plans, I am thinking about future work projects.  I know, I know - I'm on holiday and thinking about work!  You see, I'm one of those oddballs who enjoys their job, and soon my employer will be migrating to a new data storage system.  Storage is me.  It is my passion.  And I think about it a lot.  Allow me to explain where my interest came from - a story of joy, but also sadness.

Several years ago my employer had a storage system built.  It consisted of two servers, two external RAID controllers, a bunch of storage expansion units with Serial Attached SCSI (SAS) disks, and some units with Serial AT Attachment (SATA) disks.  The servers ran CentOS with IBM's General Parallel File System (GPFS), now known as Spectrum Scale (and, more recently, Storage Scale).  It held all of the company's data - pretty important stuff!  When this was installed, my Linux experience was limited to tinkering around with VMs, so this thing scared me.

The more I learned, the less I feared it.  I vividly remember attending a two-day training course in Yorkshire, organized by the vendor, where their instructors taught me and others how to use the system.  From that moment on I was hooked!  I loved this server system, but at the same time I was still petrified of it.  I spent hours learning all of its bits and pieces.

Over time, my confidence and skills improved no end.  I had hours of fun writing little Bash scripts for the file system.  We expanded the system, creating a third server to handle intensive I/O and adding another server to the cluster running IBM's Tivoli Storage Manager (TSM), now known as Spectrum Protect, to back up and archive data to tape... pwoar!  Now you're talking!  There's something nerdy about watching a tape library robot pick up a tape, load it into a drive, and read or write data after you run a few commands in TSM's command line interface.

Fast-forward a few years, and, lo and behold, I went to work for the vendor of the storage system.  I worked on some big systems out there in the wild for some household names.  I also worked with brilliant people based in my native U.K., as well as Germany and the USA.  It was a great and memorable time.

However, I've ended up back working for my original employer.  Their storage system manager left his position and they needed someone schooled in the ways of the command line.  Returning to the original storage system was great.  I love archiving to tape; shifting data here, there, and everywhere; swapping out broken disks; managing the GPFS system...  I know what you're thinking: this guy needs a life!  But I just love storage!  This system sparked my passion for learning Linux and storage systems, and sent me further down the rabbit hole, until I found the hacker community and 2600.  Thanks to this storage system, I got to work with some brilliant people who gave me knowledge and skills that fed that passion.

Sadly, the GPFS system that I fell in love with is being retired and replaced.  The head of department wants a new single-node storage system.  It will be built by an external company and will run Windows (I ain't a Windows guy) with the "Resilient" File System (ReFS).  Why Windows over Linux?  From a performance perspective, it makes sense.  The users of the system need more performance than you can imagine, and the company building it decided, along with the budget holder at work, that SMB Direct was what was needed.  GPFS can give huge performance on Windows clients by adding them to the cluster.  The only problem is that GPFS is expensive, with year-on-year license costs as well as support costs.

I was given the task of designing another, less powerful storage system which will mirror the data.  This system will behave as our Disaster Recovery (DR) system and will perform backups to tape and cloud.  I put Debian on it, installed ZFS, and built out the file system in a few minutes.  I installed Samba, bound Linux to the Windows domain for file authentication using Winbind, built my shares, and installed Bacula to back up the system to LTO and AWS.  Designing and building the DR server was brilliant fun!  I love GPFS, but I've certainly now fallen in love with ZFS.
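The guts of a build like that come down to surprisingly few commands.  Here's a minimal sketch of the ZFS and domain-join steps; the pool name, disk paths, realm, and share name are all placeholders I've made up for illustration, not the real ones, and a real build would need a proper smb.conf and Bacula configuration on top:

```shell
# Create a mirrored ZFS pool and a dataset for the shared data
# (drpool, the disk paths, and the dataset name are hypothetical).
zpool create -o ashift=12 drpool mirror /dev/sda /dev/sdb
zfs create -o compression=lz4 drpool/shares

# Join the Windows domain so Samba can authenticate domain users
# via Winbind (assumes smb.conf already has security = ads and the
# realm configured; the admin account is a placeholder).
net ads join -U Administrator

# A minimal Samba share stanza for smb.conf would look like:
#   [shares]
#       path = /drpool/shares
#       read only = no

# Pick up the new share and Winbind configuration.
systemctl restart smbd winbindd
```

From there, Bacula's storage daemon can be pointed at the LTO changer and an S3-compatible cloud destination for the tape and cloud copies.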

I'm going to miss the GPFS storage system.  Without it, I wouldn't have half the knowledge I have now and certainly not the passion.  To me, the system has soul.  I've had to pour blood, sweat, and tears into the thing to keep it going.  Thanks to GPFS, I've been able to channel that passion into my DR system, giving it soul and adding an element of beauty to its build.

It's going to be a hard day when I shut the GPFS and TSM systems down for the last time.  They have become close friends over the years.  Friends that have made me smile, angry, happy, and have fueled my passion for tech.  Something very special.  I shall raise a glass to them and I look forward to pouring my energy into the DR system, turning my focus to ZFS and Bacula, but, of course, fondly remembering my old friends.  I'm certainly grateful for what they gave me.  Here's to absent friends.

Big thanks to Zelig for proofreading!
