This section covers some of the hardware concerns involved in running software RAID.
It is indeed possible to run RAID over IDE disks, and excellent performance can be achieved too. In fact, today's prices on IDE drives and controllers make IDE worth considering when setting up new RAID systems.
It is very important that you use only one IDE disk per IDE bus. Not only would two disks ruin the performance; the failure of one disk often guarantees the failure of the bus, and therefore the failure of all disks on that bus. In a fault-tolerant RAID setup (RAID levels 1, 4, 5), the failure of one disk can be handled, but the failure of two disks (the two disks on the bus that fails because of the one failed disk) will render the array unusable. Also, when the master drive on a bus fails, the slave or the IDE controller may get awfully confused. One bus, one drive; that's the rule.
There are cheap PCI IDE controllers out there. You often get two or four busses for around $80. Considering the much lower price of IDE disks versus SCSI disks, I'd say an IDE disk array could be a really nice solution if one can live with the relatively low number of disks (probably around eight) one can attach to a typical system (unless, of course, you have a lot of PCI slots for those IDE controllers).
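As an illustration of the one-disk-per-bus rule, an /etc/raidtab for a four-disk RAID-5 array could place every disk on its own IDE bus. The device names below are only an example: hda and hdc are the masters on the onboard primary and secondary busses, hde and hdg the masters on the two busses of an add-on PCI controller.

```
raiddev /dev/md0
        raid-level            5
        nr-raid-disks         4
        nr-spare-disks        0
        persistent-superblock 1
        chunk-size            32
        device                /dev/hda2
        raid-disk             0
        device                /dev/hdc2
        raid-disk             1
        device                /dev/hde2
        raid-disk             2
        device                /dev/hdg2
        raid-disk             3
```

With this layout, a disk failure takes down at most the one bus the failed disk sits on, and the array survives on the remaining three.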
IDE does have major cabling problems, though, when it comes to large arrays. Even if you had enough PCI slots, it's unlikely that you could fit much more than eight disks in a system and still run it without data corruption caused by over-long IDE cables.
This has been a hot topic on the linux-kernel list for some time. Although hot swapping of drives is supported to some extent, it is still not something one can do easily.
Don't! IDE doesn't handle hot swapping at all. Sure, it may work for you if your IDE driver is compiled as a module (only possible in the 2.2 series of the kernel) and you re-load it after you've replaced the drive. But you may just as well end up with a fried IDE controller, and you'll be looking at a lot more downtime than it would have taken to replace the drive on a downed system.
The main problem, besides the electrical issues that can destroy your hardware, is that the IDE bus must be re-scanned after disks are swapped. The current IDE driver can't do that. If the new disk is 100% identical to the old one (with respect to geometry etc.), it may work even without re-scanning the bus, but really, you're walking the bleeding edge here.
Normal SCSI hardware is not hot-swappable either. It may, however, work: if your SCSI driver supports re-scanning the bus and removing and appending devices, you may be able to hot-swap devices. Still, on a normal SCSI bus you probably shouldn't unplug devices while your system is powered up. But then again, it may just work (and you may end up with fried hardware).
The SCSI layer should survive if a disk dies, but not all SCSI drivers handle this yet. If your SCSI driver dies when a disk goes down, your system will go down with it, and hot-plug isn't really interesting then.
With SCA, it should be possible to hot-plug devices. However, I don't have the hardware to try this out, and I haven't heard from anyone who's tried, so I can't really give any recipe on how to do this.
If you want to play with this, you should know about SCSI and RAID internals anyway. So I'm not going to write something here that I can't verify works; instead, I can give a few clues:
Not all SCSI drivers support appending and removing devices. In the 2.2 series of the kernel, at least the Adaptec 2940 and Symbios NCR53c8xx drivers seem to support this; others may or may not. I'd appreciate it if anyone has additional facts here...
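One more clue: on kernels of this era the SCSI layer exposes a control file, /proc/scsi/scsi, that accepts "remove-single-device" and "add-single-device" commands. A minimal sketch follows; the host/channel/ID/LUN numbers are made up for illustration, and whether this actually works safely depends entirely on your controller, driver, and hardware. The script just prints the two commands; in real use you would write each of them into /proc/scsi/scsi as root.

```shell
#!/bin/sh
# Hypothetical coordinates of the drive to swap: host 1, channel 0, ID 4, LUN 0.
HOST=1 CHANNEL=0 ID=4 LUN=0

# In real use, each command is written into the control file as root, e.g.:
#   echo "scsi remove-single-device 1 0 4 0" > /proc/scsi/scsi

# Tell the kernel to forget the failed disk before pulling it:
echo "scsi remove-single-device $HOST $CHANNEL $ID $LUN"

# ...physically replace the drive, then make the kernel probe for the new one:
echo "scsi add-single-device $HOST $CHANNEL $ID $LUN"
```

You can check what the SCSI layer currently knows about with "cat /proc/scsi/scsi" before and after. Again: this is exactly the kind of thing I can't verify on my hardware, so treat it as a starting point, not a recipe.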