
iSCSI

Lately a Howto has appeared detailing how to set up iSCSI on Linux. I first set up and used iSCSI almost two years ago, back when it was a little-known buzzword. Microsoft were still on v1 of their free initiator, UNH had theirs, and linux-iscsi seemed to be the most promising and workable initiator around. Now iSCSI is beginning to compete with FC and Netapp kit, Dell are hedging their bets on virtual machines on iSCSI backends, and Open-iSCSI rules the roost.
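
For anyone following that Howto, the Open-iSCSI side boils down to a discovery plus a login. Here's a minimal sketch of that, driven from Python; the portal address and target IQN are placeholders of my own invention, not anything from my old setup.

    # Minimal sketch: discover and log in to an iSCSI target using
    # open-iscsi's iscsiadm, driven from Python.
    # The portal address and target IQN below are hypothetical placeholders.
    import subprocess

    PORTAL = "192.168.1.10"                           # hypothetical target host
    TARGET = "iqn.2007-01.com.example:storage.node1"  # hypothetical target IQN

    def run(cmd):
        """Echo a command, then run it, failing loudly on error."""
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Ask the portal which targets it exports (SendTargets discovery).
    run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL])

    # Log in to one of the discovered targets; once the session is up,
    # the LUN appears as an ordinary local block device (e.g. /dev/sdX).
    run(["iscsiadm", "-m", "node", "-T", TARGET, "-p", PORTAL, "--login"])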

My experiences were during a time when I was testing a clustered, redundant storage system for Absolute Studios: basically RAID5 over Gigabit Ethernet, with RAID5 disk arrays in each node. Theoretically, it would tolerate one failed disk per node at any one time, as well as one entire failed node. Performance would be seriously degraded in that state, but it would survive, which is all that mattered. Ideally, concurrent read and write performance would be good too. All this, built with commodity hardware and left to do the job. Machines would be able to connect directly to specific targets on the servers, disks could be added on the fly, systems could stay online during maintenance, and Samba wouldn't be a bottleneck on individual servers. A lofty dream was to convert all the rendernodes to Linux too (an ongoing project at the time) and use the tens of gigabytes of unused space on each one as another storage node. In retrospect, with mdadm and lvm2 it would have been nigh on impossible.
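
As a rough illustration of the layout (not the actual commands I ran back then), the assembly on the head node might look something like the sketch below once each storage node's LUN is attached over iSCSI: mdadm stripes the devices into RAID5, and LVM carves the result up so volumes can grow as disks are added. Device names, the volume group name and sizes are all hypothetical.

    # Sketch of the "RAID5 over Gigabit Ethernet" idea: each iSCSI-backed
    # block device stands in for one storage node. Losing any single device
    # degrades the array but keeps the data online, which mirrors the goal
    # of surviving one failed disk per node plus one whole failed node.
    # All device and volume names below are placeholders.
    import subprocess

    NODE_DEVICES = ["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Stripe the node devices into a RAID5 md array.
    run(["mdadm", "--create", "/dev/md0",
         "--level=5",
         f"--raid-devices={len(NODE_DEVICES)}",
         *NODE_DEVICES])

    # Put LVM on top so logical volumes can be resized as capacity is added.
    run(["pvcreate", "/dev/md0"])
    run(["vgcreate", "storage", "/dev/md0"])
    run(["lvcreate", "-L", "100G", "-n", "projects", "storage"])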

Anyway, the lessons learnt with iSCSI were simple. Firstly, it hammers the network card while the application hammers the disk, and iSCSI sucks on anything that doesn't support jumbo frames. Secondly, when you're hammering your network card and ATA bus, chances are that your PCI bus is going to suffer too. This is where I tripped up. All the nodes I had were using the same make and model of motherboard. That particular model had some bug that caused the entire system to lock up hard under load in this scenario, although it was fine with high disk and memory I/O during rendering. Downgrading the BIOS didn't help, and I couldn't narrow it down to one particular component to write to the manufacturer about.
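
For what it's worth, enabling jumbo frames is a one-liner per interface. The sketch below assumes a hypothetical dedicated storage NIC called eth1; the switch and the NIC on the target side also have to support a 9000-byte MTU, or this does nothing useful.

    # Sketch: raise the MTU on the interface carrying iSCSI traffic so each
    # SCSI PDU needs fewer Ethernet frames. "eth1" is a placeholder for a
    # hypothetical dedicated storage NIC.
    import subprocess

    IFACE = "eth1"

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Set the MTU with iproute2, then show the interface to confirm it took.
    run(["ip", "link", "set", "dev", IFACE, "mtu", "9000"])
    run(["ip", "link", "show", "dev", IFACE])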

As a system, the whole thing seemed to work nicely. Disk performance was actually pretty good, when it worked. Failover worked too, until the second node died and the array went offline. It was a wild and wacky solution that could maybe have prevented us from going down the Dell route (and thus avoided the relentless 4k stack problem with PERC4, mdadm and lvm2).

iSCSI is great. It solves a lot of problems, and does it cheaper too. But that's because it uses commodity kit, and we all know commodity kit isn't as bug-proof as we like to think it is. Google have gotten away with it by having an embarrassingly redundant system. Google have also been kind enough to release fantastic amounts of research into these sorts of systems, and Hadoop has taken up the reins. Hopefully soon we'll see a usable, redundant, cloud-like storage system similar to S3. One that we can set up ourselves and use locally, or rent out should we choose to do so.