After finally filling up my trusted old 2.5″ 4TB USB photo hard drive, the time has come to look for bigger storage. I briefly contemplated just getting a bigger 2.5″ drive (there now seem to be 5TB or even 8TB drives on the market), but since I’ve recently started playing with 4K video, that wouldn’t be a very future-proof solution. Multi-bay storage seemed like a better option.

There are 2 paths for those wishing to graduate to a multi-bay storage:

  • NAS or Network Attached Storage
    These are more popular and usually come with a ton of features (including VPN and media servers like Plex), and they can also be accessed simultaneously from multiple devices. They do however come with one major downside – they are only as fast as your network, which these days means 1Gbit/s (or 10Gbit/s in theory, but prices for that kind of network equipment are still substantial).
  • DAS or Directly Attached Storage
    These are usually much faster, especially those with Thunderbolt 3 connectivity. However, a DAS is usually just dumb storage with no additional features. But since I fancied the idea of being able to edit photos and videos directly from the storage, I went with this option in the end.

Unfortunately, there are not many vendors in the consumer DAS market. The major players seem to be QNAP, Drobo, G-Technology (WD), LaCie and OWC.

While QNAP has some interesting NAS/DAS hybrids, their Thunderbolt connectivity works over Ethernet, which sounded like a potential source of reliability issues. Drobo’s reviews point out unreliable internal cabling, and the other companies didn’t offer a 6-bay solution in the price range I was looking at. So after much deliberation, I decided to go with the OWC ThunderBay 6. This was despite some previous negative experience I had with OWC, where one of their SSDs died on me just days after its warranty ended.

Let me just say here that I like the idea behind OWC products. No nonsense, reasonable price, reliably built. Unfortunately however, ThunderBay is a deeply conflicted product: it manages to deliver on all of these ideas, yet somehow falls completely flat due to some unfortunate product management decisions I will get into below.

First though, let’s have a look at how the storage holds together.


The RAID functionality in multi-bay storage is usually provided onboard; not so with ThunderBay. ThunderBay presents its drives to the connected host as individual drives, and the host then needs to run a software RAID driver on top of them to provide the RAID functionality. That role is fulfilled by SoftRAID, and while some may worry that this setup would put unnecessary load on the host’s CPU, the CPU usage during my testing was mostly negligible.

SoftRAID can be purchased from OWC either as a standalone package or in a bundle with the ThunderBay. Now SoftRAID is a bit of a marketing disaster, so without going into too many details, let me just say that as far as I can tell, SoftRAID can be purchased as a bundle in 2 versions – XT Lite and XT. There seem to be about 73 other versions of SoftRAID you can buy, but let’s just stick with these two.

The XT part seems to denote that the license is tied to the OWC storage you have purchased SoftRAID with. The Lite part means you only get access to RAID 0 and 1. The non-Lite version also supports RAID 4, 5 and 10, but it will cost you about $100 more.


  • SoftRAID in and of itself performs decently and does pretty much what it says on the tin.


  • Do read the tin, because there are some surprising omissions.
  • APFS is not supported. Given how old and unreliable HFS+ is at this stage and the fact that APFS shipped more than 2 years ago, this is a bit surprising. When I contacted OWC about this, their response was that APFS is not a good fit for ThunderBay. They say that on spinning drives it performs much slower than HFS+ (OWC claims a 50% performance penalty, which I find hard to believe). But even so, ThunderBay has an M.2 SSD slot and you won’t be able to use APFS for that either.
  • RAID6 (distributed 2 drive parity) is not supported.
  • While DAS storages generally cannot be accessed by multiple clients simultaneously, you may want to connect your storage to a second computer once in a while (e.g. for backups). The SoftRAID license is for 1 seat only, and in order to release the license allocation held by Computer A and connect your DAS to Computer B, you will need to completely log out your user on Computer A. That is certainly better than not being able to do this at all, but we live in 2019 and macOS generally needs a reboot or a user log-out only a handful of times a year, so this is cumbersome to say the least.

Now when I spoke to OWC, I was told that all issues listed above should be addressed in SoftRAID 6, once it is released (the current version of SoftRAID at the time of this writing is 5.7.5). That is certainly great news, however there are 2 caveats:

  • SoftRAID 6 has been rumoured to be released “soon” for a couple of years now. The current unofficial estimate for the release date is the end of 2019.
  • Even if you buy your ThunderBay today, when OWC has already admitted all of the above caveats and promised a new version to address them, you will still be charged for the upgrade from SoftRAID 5 to 6 once it is eventually released. OWC has not decided on the upgrade price yet, but judging by their license pricing history it will probably be north of $100. Talk about osborning your product.


With all of this out of the way, how does SoftRAID actually perform? I’m glad you’ve asked. I have wondered that myself and I’ve decided to thoroughly test it. The HDDs I’m using in these tests are shucked 10TB WD Elements drives (model WDC WD100EMAZ-00WJTA0).

Let’s start by answering two preliminary questions:

What if I just want a very, very expensive 1 drive enclosure?

ThunderBay would be a terrible choice here. With just a standalone drive in use, compared to the WD Elements USB3 enclosure the HDD originally came in, ThunderBay wins on random reads, but that’s about it. It’s interesting to see there’s such a difference, though.

Are 6 disks 6x faster than 1 disk?

As you’d probably guess, the answer is ‘no’, as there’ll always be some overhead introduced by the RAID array. But just how bad is it? Well, for random access it’s pretty bad.

I’ve started by taking the single drive ThunderBay read throughput and just multiplying it by N (the number of drives) to get the ‘hypothetical’ maximum value.

I am then comparing this to the real performance of an N-drive RAID0 array. I chose RAID0 because it has the least overhead.

Random reads are quite bleak. The difference between theoretical maximum and real life performance is about 50%.

Luckily, sequential reads look much better.
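The methodology above boils down to a simple ratio. Here is a sketch of it in Python; the throughput figures used in the example calls are illustrative placeholders, not my measured numbers:

```python
# Scaling comparison sketch: the hypothetical N-drive maximum is just the
# single-drive throughput multiplied by N; efficiency is real / hypothetical.
# The numbers in the example calls below are illustrative, not measurements.

def scaling_efficiency(single_drive_mbps: float, n_drives: int,
                       measured_raid0_mbps: float) -> float:
    """Fraction of the hypothetical N-drive maximum actually achieved."""
    hypothetical = single_drive_mbps * n_drives
    return measured_raid0_mbps / hypothetical

# Sequential reads scale well...
print(scaling_efficiency(250.0, 6, 1350.0))  # 0.9
# ...while random reads lose roughly half of the hypothetical maximum
print(scaling_efficiency(1.5, 6, 4.5))       # 0.5
```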

Now let’s have a look at the performance of the individual RAID levels and how the performance changes with the amount of disks we include in the array.

RAID0 (stripe)

As one might expect, the more drives are in a stripe array, the faster it generally performs.

Strangely though, random writes have so far been faster than random reads (see also the enclosure comparison section above). To be perfectly honest, I’m not entirely sure why that is. My guess is that it has something to do with serialisation of the write operations at the drive firmware level.

RAID1 (mirror)

Nothing too exciting here; mirror performance seems to be quite consistent across the board, with random reads getting marginally faster with the number of drives in the array. I guess this is because the more drives compete to retrieve the same piece of information, the better the chance of one of them getting it a bit faster.

RAID10 (nested mirror & stripe)

Here we see the combined performance characteristics of the previous two RAID levels. There is only data for 4 and 6 drives, because RAID10 requires an even number of drives and at least 4 of them.

RAID5 (distributed 1 drive parity)

The read performance improves with the number of drives as ideally N-1 drives can be working in parallel to fetch the different pieces of data. Writes however suffer significantly. This is because for every write operation, reads of the to-be-overwritten data and the old parity need to occur first, followed by writes of the new data plus the new parity.
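The read-modify-write cycle behind that penalty can be illustrated with XOR parity. This is a toy sketch with made-up byte values, not SoftRAID's actual implementation:

```python
# RAID5 small-write sketch: updating one data block costs two reads
# (old data, old parity) and two writes (new data, new parity).
# Stripe parity is the XOR of all data blocks in the stripe.

old_data, new_data = 0b10110001, 0b01101100
other_blocks = [0b11001010, 0b00011110]  # untouched blocks in the same stripe

old_parity = old_data ^ other_blocks[0] ^ other_blocks[1]

# Instead of re-reading the whole stripe, RAID5 updates parity incrementally:
new_parity = old_parity ^ old_data ^ new_data

# The incremental shortcut matches a full recomputation from scratch:
assert new_parity == new_data ^ other_blocks[0] ^ other_blocks[1]
```

Either way, a single logical write turns into four physical I/Os, which is why the write curves sag.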

RAID levels comparison

Let’s now compare the individual RAID levels with each other. For each RAID level in the comparison I am using a 6-drive array.

And since I had access to the SoftRAID 6 beta, I have also added some preliminary benchmarks I have done on RAID 6 using SoftRAID 6 beta 31. These should not be taken too literally though as things may still change before the final version of SoftRAID 6 is released.

Since RAID0 is not really practical due to its lack of redundancy and RAID1 is comically slow with way too much storage wasted on redundancy, the question of which RAID level to choose boils down to how you feel about sacrificing 50% vs 17% of overall capacity and whether your use case needs faster reads or writes.

RAID5 has a 1 drive redundancy and faster reads, but writes suffer significantly.

RAID10 also has a 1 drive (plus up to 2 more if you’re really lucky ;-)) redundancy and faster writes.

For me, the choice is clear: RAID5 is the winner as for photo & video editing I’ll need sequential reads much more than I’ll need anything else and that 20TB of extra storage capacity will also come in handy.
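The capacity math behind that choice can be sketched with the standard RAID capacity rules, assuming the 6 × 10TB drives used in this build:

```python
# Usable capacity sketch for a 6-drive array of 10TB disks,
# using the standard RAID capacity formulas.

def usable_tb(level: str, n_drives: int, drive_tb: float) -> float:
    if level == "RAID10":
        return (n_drives // 2) * drive_tb   # half the drives hold mirror copies
    if level == "RAID5":
        return (n_drives - 1) * drive_tb    # one drive's worth of parity
    if level == "RAID6":
        return (n_drives - 2) * drive_tb    # two drives' worth of parity
    raise ValueError(f"unknown RAID level: {level}")

print(usable_tb("RAID10", 6, 10))  # 30 -> 50% of raw capacity sacrificed
print(usable_tb("RAID5", 6, 10))   # 50 -> ~17% sacrificed, 20TB more than RAID10
```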

If it weren’t limited to the beta, I might have considered RAID6 for the extra redundancy. But since it is confined to the beta SoftRAID 6 driver (and that driver was still a bit unstable during my testing), it was not really an option.

In the last RAID test, let’s have a look at how RAID5 performance suffers if one of the 6 drives fails and the array has to operate in a degraded state.

As you can see, the reads drop significantly in this case. Presumably this is because reads of blocks on the missing drive now need to access all 5 remaining drives in order to reconstruct the desired value using parity.
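That reconstruction follows from the same XOR parity rule: a block on the failed drive is the XOR of everything else in its stripe. A toy sketch with made-up byte values, matching the 6-drive layout (5 data blocks + 1 parity block per stripe):

```python
from functools import reduce
from operator import xor

# Degraded-read sketch: with one drive lost, a missing data block is
# recovered as the XOR of all surviving blocks in the stripe (data + parity).
stripe = [0b10110001, 0b11001010, 0b00011110, 0b01010101, 0b00110011]
parity = reduce(xor, stripe)

lost_index = 2  # pretend the drive holding this block failed
survivors = [b for i, b in enumerate(stripe) if i != lost_index] + [parity]
recovered = reduce(xor, survivors)

assert recovered == stripe[lost_index]  # rebuilt, at the cost of 5 reads
```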


And last but not least, ThunderBay also has an M.2 SSD slot, so how does that perform?

I’ve put in an XPG SX8200 Pro 2TB SSD drive with theoretical sequential read/write speeds of up to 3500/3000 MB/s. Well, that was a waste of money – ThunderBay isn’t anywhere close to being able to saturate that drive. With random access, we do see a significant boost over the spinning drives at least:

No spinning is always better than spinning when it comes to random access.

But sequential access is underwhelming to say the least. Even the spinning drives beat it in some cases!

What’s going on? The culprit here is the Thunderbolt layout of this particular DAS. ThunderBay 6 utilises 4x PCI Express 3.0 lanes (which is also the maximum PCIe width Thunderbolt 3 can tunnel). Each lane has a theoretical bandwidth of 984.6 MB/s, but I’m told the maximum achievable throughput in the real world is probably closer to 750 MB/s.

Now, 3 of the 4 PCIe lanes are allocated to the 6 SATA drive bays (2 drives per lane). And as you’ve probably guessed, the last lane is allocated to the M.2 slot, which caps its throughput at ~750 MB/s.
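The lane budget works out as a quick back-of-the-envelope calculation, using the ~750 MB/s realistic per-lane figure quoted above:

```python
# PCIe 3.0 lane budget of the ThunderBay 6, per the figures above.
LANE_THEORETICAL_MBPS = 984.6   # PCIe 3.0 x1, after 128b/130b encoding
LANE_REALISTIC_MBPS = 750.0     # rough real-world ceiling per lane

lanes_total = 4
lanes_sata = 3                  # 6 drive bays, 2 bays sharing each lane
lanes_m2 = lanes_total - lanes_sata

# Each pair of SATA bays shares one lane, so two busy drives split it...
per_bay_ceiling = LANE_REALISTIC_MBPS / 2        # 375 MB/s per busy bay
# ...and the M.2 slot gets the single remaining lane.
m2_ceiling = lanes_m2 * LANE_REALISTIC_MBPS      # 750 MB/s, far below the SSD's 3500

print(per_bay_ceiling, m2_ceiling)
```

The 375 MB/s per-bay ceiling is comfortably above what a single spinning drive can do, which is why only the M.2 slot feels the squeeze.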


Reliability is something that only shows over time. So far, however, I have witnessed a few worrying instances of the SoftRAID driver reporting errors for no apparent reason. These were usually accompanied by a message like this:

When checked, the volume would usually still be mounted and all disks would still be present in the array. I’m not sure if the volume gets remounted automatically or what is going on here, but it’s definitely not a great start.

Needless to say, always keep a backup of your important data. Although, backing up 50TB of data somewhere is easier said than done.


After all is said and done, would I recommend getting OWC ThunderBay in any configuration at this point?

No, not really, and it’s mostly because of the SoftRAID 6 situation. If you’re interested in this storage, wait until SoftRAID 6 is released, or at least until OWC guarantees a free upgrade to 6 with every new SoftRAID purchase. Why they have not done that yet is beyond me.

The poor SSD performance is another thing to keep in mind. I don’t think there are many M.2 SSDs out there that would struggle to saturate the available 750MB/s of bandwidth. However, since that M.2 slot is more of a bonus in what is otherwise primarily a multi-bay HDD storage, I would not dwell on this too much.

Other than that, ThunderBay is a well built and decently performing DAS. Only time will tell if it’s also a reliable one.

Resources used

SoftRAID 5.7.5 & 6.0 beta 31

fio benchmark tool and the benchmarking script used for these tests

Benchmarks were performed on a 15″ MacBook Pro (2018, 2.6 GHz)