Due to recent storage requirements I have had to upgrade my file server. I wanted a contiguous partition, and since these files would be highly critical, I also wanted some sort of recovery capability. With that in mind I chose RAID 5. I was also putting together a few other servers at the time, so I ended up “standardizing” the parts, which led to some added expense and overkill.

The plan is based on Build a Fault-Tolerant Terabyte Storage Server for Under $1.98. I knew that there would be a performance hit due to RAID 5, so I chose SATA HDs and a PCI Express RAID controller to help minimize the effect. Between these changes and my “standardizing” of parts, the design was more expensive and over-engineered; for example, it uses a 3.0 GHz processor, which is really not needed. The changes also meant that I could not use the HighPoint controller specified in the article, and I was limited to 4 HDs in the array instead of 8. However, by using larger HDs I was able to get 1.5 TB (originally I was aiming for 3 TB). My array also only gives up one HD's worth of capacity to parity instead of two, so there was a small victory there.
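To make the capacity math concrete: RAID 5 stripes parity across all member drives, but the overhead always equals one drive, so usable space is (number of drives - 1) times the drive size. A quick check in the shell:

    # RAID 5 usable capacity: (N - 1) * drive size; one drive's worth goes to parity
    echo "$(( (4 - 1) * 500 )) GB"    # 1500 GB, i.e. the 1.5 TB array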

I started with the RAID controller and built the server around it. I have used software RAID in the past, but knowing how slow it was and how large a partition I was planning to create, I wanted hardware RAID. I checked Linux RAID compatibility on Linux Mafia's Serial ATA (SATA) chipsets — Linux support status page. The Areca 1210 was a little more expensive, but it uses real hardware RAID.
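Once the OS is running, it is easy to confirm that the kernel picked up the card as real hardware RAID rather than fakeraid. A quick sketch, assuming the mainline arcmsr driver that recent 2.6 kernels use for Areca cards:

    lspci | grep -i areca       # confirm the card shows up on the PCI bus
    dmesg | grep -i arcmsr      # confirm the Areca driver claimed the card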

Now that I had the RAID controller, I needed a motherboard for it to sit in. I was already ordering an MSI 945GM3-F, so I just added another to the order; same with the 3.0 GHz Pentium processor and 1 GB of RAM. The motherboard has onboard video, so I did not need a video adapter. Since this is a server and I will usually access it remotely, video is only needed for emergencies and the initial install.

Once I had the guts, I moved on to hard drives. I picked up 5 Maxtor DiamondMax 21 500 GB HDs: 4 for the RAID array and 1 for the OS. I also wanted a DVD burner in case I needed to make backups, so I dropped in an LG DVD burner.

The next step was a case to hold everything. I chose the Antec P182 because it has 11 bays. I could have gotten a cheaper, smaller case, but I am delusional about being able to build an 8 HD array out of 1 TB HDs (a 7 TB partition) once this setup is filled. The last piece was the power to drive this monstrosity. Being partial to Antec and wanting to break the 500 W barrier, I chose the NeoHE 550.

The only thing left was to put it all together. The larger case gave me ample room to work, so the parts were easier to install than in most other systems I have put together. I hooked the system up to my KVM and fired it up. The Areca 1210 comes up before the BIOS screen and is fairly simple to use. It even has a “quick start” option where all I had to do was select the HDs and the RAID level. It took over 2 hours to build the 1.5 TB array, which is not bad, and it even initialized and then activated the array on its own. One of the HDs was damaged, and while I was waiting on the replacement I installed Slackware 12.0, so once the array finished building I rebooted, ran fdisk -l, and there was my partition. No drivers needed. A quick mkfs and a change to fstab, and I am now the proud owner of a 1.5 TB partition.
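For anyone following along, that last step looks roughly like this. The device name, filesystem, and mount point below are assumptions for illustration; yours will differ:

    fdisk -l                      # the array shows up as an ordinary SCSI disk, e.g. /dev/sdb
    fdisk /dev/sdb                # create one partition spanning the array
    mkfs -t ext3 /dev/sdb1        # build the filesystem (ext3 assumed here)
    mkdir -p /mnt/array           # mount point is just an example
    echo '/dev/sdb1 /mnt/array ext3 defaults 1 2' >> /etc/fstab
    mount /mnt/array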

Parts:
Areca 1210 RAID Controller
3.0 GHz Pentium Processor
MSI 945GM3-F Motherboard
LG DVD Burner
Crucial 1 GB RAM
Maxtor DiamondMax 21 500 GB HD
Antec P182 Case
Antec NeoHE 550 (550 W) Power Supply

References:
Build a Fault-Tolerant Terabyte Storage Server for Under $1.98
Serial ATA (SATA) chipsets – Linux support status
