Proc & heat sink showed up today. So, I ran out to CompUSA to snag thermal paste and an IDE cable (I don’t have a spare long enough to reach my primary drive). Got the heat sink on (barely), plugged everything in, and flipped the power switch. All the fans kick on, and then kick off again about half a second later. WTF? So, I start pulling stuff. Everything, really. IDE connectors, PCI cards, even the RAM. Same result. Try flipping the FSB jumper. Nothing. Reset the BIOS. Nothing. Don’t even get a POST. Grrr. The PS is BRAND NEW, so that’s probably not the problem. It’s got to be either the motherboard or the processor, right? Unfortunately, I don’t have another AMD Socket A motherboard to test the processor on–at least, not one that can take the processor I need to test. So, what to do? I can either take the whole rig down to the local PC shop, pay $50, and get all my components tested (which might not be a bad idea), or I can go to NewEgg and drop $100 for a brand new motherboard and processor. If I do the first, I still have to splash out for replacement parts if something is busted. If I do the second, I can test both used parts, but I’m left with a spare I need to get rid of (sell on eBay, probably). I guess I’ll have to mull it over–in the interim, I’ve got a support request in with the motherboard manufacturer, hoping for some insight. I’m kinda pissed, tho. I was ready to install the OS, since my drives shipped today and should arrive on Wednesday.
Well, I'm pretty much out of testing options, so I went ahead and bought a brand new motherboard and processor from NewEgg. Should ship today, I think, which would deliver it on Monday. I'll see if I can test the used processor on the new motherboard, and if that works it'll narrow it down to the used motherboard and I can sell the processor on eBay. My 8 hard drives showed up last night - 2.0 TB of raw storage. I'll probably start cramming them into the case tonight.
That is indeed what happened - I've got the new processor and motherboard. I didn't get to installing them last night, as I was working on the MythTV frontend box, but I'm planning on moving back to this machine tonight. I need to complete this one - including getting all the software installed and configured for my services - before I can even think about beginning work on my MythTV backend.
Got the brand new motherboard and processor in... and it works! w00t! I'm installing Ubuntu 5.10 (32-bit) right now, and once this is done I'll yank the temporary CD-ROM and hook the 8 HDDs to the RAID controller. Get the array set up (which takes a LONG time), and then I'll be able to tweak the partitioning to the above settings. When that's done, it'll be time to start installing Apache, Postfix, MySQL, PHP, etc. BTW, the GigE card worked straight away as well, even tho this one wasn't integrated. I have to say, I'm pretty happy with my Ubuntu experiences to date. I really like this distro.
Serious progress today! I managed to get the RAID controller driver compiled and installed.

Code:
> apt-get install build-essential
> apt-get install linux-headers-`uname -r`
> cd /usr/src/linux-headers-`uname -r`
> cp /boot/config-`uname -r` .config
> make oldconfig

Download the RAID driver source, and follow the directions in the readme.

> vi /boot/grub/menu.lst

Add "hde=noprobe hdf=noprobe hdg=noprobe hdh=noprobe hdi=noprobe hdj=noprobe hdk=noprobe hdl=noprobe" to the kernel line.

Crammed in all the drives (and it was crammed, despite the big rackmount case), hooked up power and IDE cables, booted, and jumped into the RAID BIOS on bootup. It recognized 8 250GB drives (hopefully the photo of the monitor turned out), I selected the master channels, and built a 4-disk RAID5 array. 15 minutes in, and it's at 6% (you do the math). Once this array is built, I'll build a second array of the slave channels. Then, I'll RAID0 them in the OS, for one giant drive. Looking good!
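For the "you do the math" part, a quick back-of-envelope in shell (the 15-minutes-at-6% figures are from the post above; the rest is just arithmetic):

```shell
# If 6% of the array build took 15 minutes, the full build is ~15/0.06 minutes.
minutes=$(( 15 * 100 / 6 ))   # 250 minutes
echo "~${minutes} minutes ($(( minutes / 60 ))h $(( minutes % 60 ))m)"
# prints: ~250 minutes (4h 10m)
```

So call it a hair over four hours per array.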
OK, doing more reading on LVM and software RAID, and I'm not positive I can even do this. I think I have to create the partitions before I RAID them together. I used LVM on hda, so I can easily shift those around. The new plan:

hda1 = /boot, ~200MB
hda2 = /root, ~100MB
hda3 = /var, ~10GB
hda4 = /, ~29.7GB
mda1 = /home, ~1500GB

I'll use quotas to manage /home.
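For the quota piece, a sketch of what I think the mount setup will look like. The device name (/dev/md0) and filesystem are assumptions at this point; the real names will depend on how the RAID/LVM layering shakes out:

```
# /etc/fstab -- add usrquota to the /home mount options
# (device name and filesystem are assumed, not final)
/dev/md0    /home    xfs    defaults,usrquota    0  2
```

After remounting, it's edquota per user to set the actual limits.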
Well, had I managed to stay up a little longer, I might have been able to get the drives partitioned. It took somewhere between 3.5 and 4 hours to create each of the two RAID5 arrays. Each is 750GB. I booted into Linux this morning, and got no obvious errors, so I have to assume everything is good. I didn't have time to log in and check dmesg before I left for work, and unfortunately the computer is on the GREEN (fully firewalled) portion of my LAN right now (ORANGE doesn't have DHCP), so I can't SSH in and finish the job from here. The computer sounds like a jet engine, BTW. The case fans (3) and power supply are quiet (the PS is whisper quiet, actually), but the CPU fan sounds like it's at about 13000 RPM. I have to see if I can undervolt the thing - maybe stick a resistor in line to drop the voltage across the fan and slow it down a smidge. A 10% reduction in speed (estimated, achieved by sticking a finger on the hub) made a 90% reduction in noise, so... It's not a big deal, since the entire computer will be stuck away in the laundry/storage room in the basement, but still. (The MythTV frontend box is practically silent with the case on, and it's extremely well ventilated as well, with three case fans.)
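If I go the series-resistor route, sizing it is just Ohm's law. Assuming (purely hypothetically - I haven't measured the fan) a 150 mA fan and a 3 V drop:

```shell
# Ohm's law sizing for a series fan resistor.
# The 150 mA fan current is a guess; measure before buying parts.
drop_mv=3000; fan_ma=150
echo "R = $(( drop_mv / fan_ma )) ohms"        # R = V/I -> 20 ohms
echo "P = $(( drop_mv * fan_ma / 1000 )) mW"   # P = V*I -> 450 mW
```

So with those numbers it'd be a ~20 ohm resistor dissipating almost half a watt - a half-watt part would be cutting it close.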
*reads like one and a half pages of this thread* um... um... uh.... ...I have a Dell. It lets me look at naked girls.
Whoa. See if you'd'a mentioned this in your first post, I'd'a paid more attention to this whole thing.
I just did some quick math, and I could store 2,142 full-length compressed porn movies (assuming they could fit on one CD - 700MB XViD).
In 20 years your kids will laugh at that statement and think "2142 full length compressed pron flicks is NOT enough for anyone."
2142 might be enough, but without 3D tactile sensory perception (TSP), and the enormous acceleration card that it takes to run it, porn just won't be good enough to wank to.
OK, I think I lied here. Reading the docs, I think what I need to do is create partitions on /dev/sda and /dev/sdb (the two RAID5 arrays), create a RAID0 array in software with raidtab, and THEN - before I create the filesystem - use LVM's "pvcreate" on /dev/md0 to create a LVM volume, which I *then* partition into LVM partitions. And *THEN* I create the filesystems. Whew. All I can do is give it a shot. If it works, I'm back to the original partition structure, even if the underlying mechanics are a little more complicated. I wonder if I could add NFS shares on other machines to a LVM volume? If I could, I could expand my storage by building another computer, exporting the final physical partition over NFS, and simply add it in LVM to my existing array. Boom - instant doubled storage, without having to migrate anything I already have. That would be slick.
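If the raidtab route works, I'd expect it to look roughly like this (a sketch only; the partition names sda1/sdb1 and the chunk size are my assumptions, not anything I've tested yet):

```
# /etc/raidtab -- RAID0 stripe across the two hardware RAID5 arrays
# (sda1/sdb1 and chunk-size are assumed)
raiddev /dev/md0
    raid-level              0
    nr-raid-disks           2
    persistent-superblock   1
    chunk-size              64
    device                  /dev/sda1
    raid-disk               0
    device                  /dev/sdb1
    raid-disk               1
```

After that it'd be mkraid /dev/md0, then pvcreate /dev/md0, vgcreate, lvcreate, and finally the mkfs.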
Sonofabitch. Apparently, the kernel option "noprobe" (ie, "hde=noprobe" or "ide2=noprobe") doesn't work. Which means that my RAID controller doesn't work, as the OS is recognizing each drive individually (ie, hde, hdf, etc) rather than as two RAID5 arrays (sda and sdb). So... I've got to scrap Ubuntu (which I really like, and was hoping to standardize on) and replace it with some other distro. Preferably one that has apt-get, LVM, RAID, and XFS by default. Any suggestions?
I would have tried eBay for a Sun Enterprise 250 server... and a Sun StorEdge A1000. But that's just a small LTA. ~worm~
Downloaded the netinstall cd for Debian 3.1_R2, and I've got that cranking right now. Debian is supposed to be an excellent choice for servers, plus since Ubuntu is debian-based, my desktop machines will administer in a very similar fashion (which is why I wanted to standardize in the first place). So we'll see how this goes. I've also got the SuSE 10.1 install CDs, so if Debian won't work (I expect it to) I've got an immediately available backup plan. This sumbitch is going up this weekend, dammit.
It goes not well. The install was good - in fact, the installer is (unsurprisingly) almost identical to Ubuntu's. And "noprobe" worked. However, the controller compilation had warnings, and insmod failed. I spent a little time searching and monkeying, but never got it to work. So... I'm going to try SuSE 10.1. There is a binary driver download from Highpoint expressly for SuSE 10.0, so that *ought* to work. I hope. CentOS might be my next try - there is at least one documented case of that OS working (the web page that inspired this project).
I’m thru messing around. I’m downloading CentOS now–I KNOW somebody has done exactly what I’m doing with CentOS, so this bastard ought to work. The SuSE experiment sucked. YaST is awesome on the desktop, but it blows chunks over SSH. I tried Debian again, and this time I was able to build the driver, but I couldn’t get it to load correctly, and I got a kernel panic on boot (too many kernel options). It’s a shame, because I liked Debian–apt-get just worked. And it feels a lot like Ubuntu.
YEAH, BITCHES! CentOS 4.3 was the winner. I haven’t used a Red Hat based distro since RH8, and I like this one. Yum is a really nice piece of software–seems it might even be better than apt-get. Now that I’ve successfully done this on CentOS, I think I might have learned enough to get it working on other systems–at least the ones where “noprobe” actually worked, or that didn’t kernel panic from too many of them.

Code:
/dev/sda :: SCSI device sda: 1465191168 512-byte hdwr sectors (750178 MB)
/dev/sdb :: SCSI device sdb: 1465191168 512-byte hdwr sectors (750178 MB)

I’ll stripe those in RAID0, and BAM–1.5TB. w00t! But first, I’ve got to go take some pictures of my wife’s schooling show (horses), and then go watch the Crew pound the crap out of United. Laters.
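The sizes check out, too. A quick sanity check on the arithmetic (the 4-disk/250GB numbers are from the array build earlier; the rest is just RAID math):

```shell
disks=4; size_gb=250
per_array=$(( (disks - 1) * size_gb ))   # RAID5 gives n-1 disks of capacity -> 750
total=$(( 2 * per_array ))               # RAID0 across both arrays -> 1500
echo "${per_array} GB per array, ${total} GB total"
# prints: 750 GB per array, 1500 GB total
```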
It turns out CentOS doesn’t support XFS in the stock kernel–which was to be the file system for the home directory (killer on large file deletes–key for MythTV). So, I have to install an unsupported kernel from the “centosplus” repository. Thankfully, that’s literally as simple as “yum update kernel”. The good news is, I was able to create the partitioning I wanted on the RAID array with LVM. Worked like a charm. So, now I’ll reboot with the new kernel and a) make sure XFS is working, and b) make sure the RAID array comes up at boot time. I’m a little doubtful on “b”, so I might be looking at the modprobe.conf docs a little more.
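For reference, pointing yum at centosplus is just flipping a flag in the repo config. A sketch, assuming the stock CentOS 4 repo file layout (file and section names may differ on your install; leave the rest of the section as shipped):

```
# /etc/yum.repos.d/CentOS-Base.repo -- in the existing [centosplus] section,
# change enabled=0 to:
enabled=1
```

After that, "yum update kernel" pulls the centosplus kernel with XFS support.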
Code:
[root@localhost /]# df -H
Filesystem                  Size  Used Avail Use% Mounted on
/dev/mapper/system-main      16G  1.3G   14G   8% /
/dev/hda1                   192M   16M  166M   9% /boot
none                        248M     0  248M   0% /dev/shm
/dev/mapper/system-tmp       11G   58M   10G   1% /tmp
/dev/mapper/system-var       11G  309M  9.8G   4% /var
/dev/mapper/raid_disk-home  1.5T  541k  1.5T   1% /home

That’s right: /home is 1.5 terabytes. I’ve still got to migrate /usr/local and /etc (and swap) to the RAID array, but everything comes up roses on reboot. Now I just need to find where I stashed the case screws, tighten everything up, and shove this bad boy back in the storage room. Then I can start software management. I’ve got to move my Samba config (easy), my web config (also easy), my mail setup (hard), and get LDAP set up so I can have single-signon for all my services.
Ran some benchmarks of the 1.5TB RAID drive (/home) versus the 40GB system drive (/tmp). This was using a 2GB file (4x RAM, to ensure no RAM caching effect).

Code:
michel-delving-RAID (/home) I/O Benchmarks:
             -------Sequential Output-------- ---Sequential Input-- --Random--
             -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
michel-d 2000 24918 91.4 51746 39.9 18399 11.6 21769 76.7 52409 22.9 350.9  3.3

michel-delving-system (/tmp) I/O Benchmarks:
             -------Sequential Output-------- ---Sequential Input-- --Random--
             -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
michel-d 2000 15634 51.9 18691 12.3  7399  3.7  6900 22.8 19196  5.1  83.3  0.5

The RAID array uses a lot more CPU, but it’s also a TON faster. Block writing was almost 52 MB/sec, and I can’t imagine pushing anywhere near that much data.
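To put the K/sec figures in friendlier units (plain arithmetic on the block-write numbers above, using 1 MB = 1000 KB):

```shell
raid_block_write=51746   # K/sec, from the RAID run above
sys_block_write=18691    # K/sec, from the system-disk run
echo "RAID:   ~$(( raid_block_write / 1000 )) MB/sec"
echo "System: ~$(( sys_block_write / 1000 )) MB/sec"
# prints: RAID:   ~51 MB/sec
#         System: ~18 MB/sec
```

Call it roughly a 3x advantage on sequential block writes.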