Carbonite


Home Server (& Network) Setups

[MENTION=10745]Tinuva[/MENTION], looks like you're building up a good setup. I look forward to reading the guide you'll make. What areas would you like tips on, though? Because the links I provided to napp-it's site have very good guides on getting things up and running.

I may very well sell my 2U APC UPS at some point in the future so that I can upgrade to a newer model, perhaps we should speak then.
 
Wait, I didn't see napp-it mentioned in this thread before. That said, I hadn't looked at it, mostly because I didn't realise it was free for home use. Looking at the initial guide there, it has all the tips I was looking for; I will just add the other things I like and have set up thus far, like zsh etc.
 

Ah, it must have been in one of the other threads I commented on about a similar topic ;) Glad you found them though.
 
Well, I can't get NUT (Network UPS Tools) to work on OmniOS :( I need it for my UPS. It compiles fine on OpenIndiana, and there is also a third-party repository with a NUT package, so I don't really have to go the compile-it-myself route.
 
OK, I give up. My USB-to-serial adapter has a **** driver in the illumos/Solaris kernels. If I want this UPS to work on the fileserver, I have to use Linux.

So the choice is between:
- great ZFS on OpenIndiana/OmniOS, but no Network UPS Tools
or
- Linux with a working UPS, but only OK-ish ZFS on Linux

=/


Or maybe I need to invest in a UPS with a network port; that would make life simpler.
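For anyone who does get NUT going, the setup boils down to two config files: one telling the driver where the UPS lives, and one telling the monitor what to do on low battery. A minimal sketch follows; the `[homeups]` name, the driver choice, the port and the credentials are all hypothetical, and the right driver depends entirely on the UPS model:

```
# /etc/nut/ups.conf -- point the driver at the UPS
[homeups]
    driver = blazer_ser     # hypothetical; apcsmart, usbhid-ups etc. for other models
    port = /dev/ttyUSB0     # Linux-style; the device node differs on illumos
    desc = "Fileserver UPS"

# /etc/nut/upsmon.conf -- shut the box down when the battery runs low
# (monuser/secret are placeholders and must match an entry in upsd.users)
MONITOR homeups@localhost 1 monuser secret master
SHUTDOWNCMD "/sbin/shutdown -y -g 0 -i 5"
```

After that, `upsdrvctl start` brings the driver up, and `upsc homeups@localhost ups.status` is a quick check that the driver is actually talking to the UPS.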
 
Invest in a better ups hehe

Sent using direct mind to machine interface
 
Well my setup isn't the most impressive but maybe it will provide someone with a few ideas:

Asus DSL-N10 Router
2 x D-link Green Gigabit Switches
D-link DAP-1155 Wireless Bridge
Ubiquiti airMax Grid
D-link DES-1024D 24 port Switch

The DSL-N10 is situated in the dining room, in the centre of the house.
It handles Internet access and all the port forwarding for hosting game servers.
It also provides WiFi to my iPad, phone and my parents' laptops.
The WiFi signal is strong throughout the house.

My Ubiquiti airMax Grid (Durban Wireless Community connection) connects to the DSL-N10, along with my HP Officejet 6500.

A 20m Ethernet cable runs from the DSL-N10 to one of the D-Link Green Gigabit switches, which is located in the "Computer Room".
The fileserver, my PC and my bro's PC connect to this Gigabit switch.

My Fileserver specs are as follows:
Coolermaster Stacker 820 (modded to accept 9 DVD Writers and 12 HDDs)
Asus P5G41t-M LX Mobo
Intel Core2Quad Q9300
2 x 4GB Corsair ValueSelect DDR3 RAM
2 x Lian Li iB-01 5-port SATA RAID cards (for HDDs)
Manhattan Ultra ATA IDE card (for DVD Writers)
Sunix SATA4000 SATA card (for DVD Writers)
9 x LG Dual Layer DVD Writers (CD/DVD Duplication)
1x 1TB Western Digital Green (OS and downloads)
5 x 2TB Seagate Green Drives
Currently running Windows 7 Ultimate and FlexRAID

Got a Raspberry Pi running OpenELEC as a media player in the lounge. It streams easily from the fileserver using Samba.
My parents' 32" Samsung LED TV streams using Universal Media Server (the only program I could get to work).
Both connect to the Gigabit switch in the Computer Room.

The D-link DES-1024D 24 port Switch sits in the lounge, adjacent to the Computer Room, for whenever we host a LAN.

The other Gigabit switch and the D-link DAP-1155 Wireless Bridge are used to keep my BTC/LTC Miners happy.
They are situated in a server rack on wheels with an 8-port KVM Switch.
I battled to find a well-ventilated room for the miners and kept moving them around, hence the wireless bridge to make my life easier.

The fileserver stays on 24/7.
It runs Kaspersky Update Utility to distribute updates to all 8 computers,
UnDelete to prevent accidental deletion from networked computers,
NetLimiter with remote control to monitor and control each computer's internet usage (especially while gaming online),
Universal Media Server to stream to my parents' TV,
and a few other downloading/uploading programs.

I am pretty happy with the whole setup, does everything I need it to do :)

Feel free to pm me if there are any questions :)
 
X-Case RM 424 - 24 Hotswap Bay 6GB Mini SAS (SATA/SAS Backplane) - railkit was included
Corsair HX850 PSU
Asus P8Z68 Deluxe/Gen3 motherboard
1x Intel Core i5-2500K
2x Corsair CMZ8GX3M1A1600C10 (16GB of memory to start with; will upgrade to 32GB later)
1x 128GB OCZ Vertex 4 SSD for the OS, in a PCI-slot holder, hot-swappable from the back of the case

Alright, so I've got some parts now and the initial stuff is installed, as per above.

The OS is also installed; I went with OpenIndiana instead of OmniOS. While OmniOS is great, OpenIndiana looks a bit more polished and complete, and since both are based on the same kernel, I decided OpenIndiana is better for now. That said, I am running the August 2013 release, which is brand new.

Oh, I chose an illumos-kernel-based OS over Linux with ZFS on Linux after reading that the licence is not the only problem: the Linux kernel developers are against the virtual memory behaviour that ZFS makes heavy use of, so Linux + ZFS is just a disaster waiting to happen in my opinion, and I refuse to do that to my data. That does, however, mean I need a UPS that works with the illumos kernel.

I have tested running KVM VMs thus far, using the Ruby qemu-toolkit, which makes life super simple, and I must say, the KVMs are super fast, exactly the same as on Linux on this same machine, except you get the ZFS benefits on top.
It takes 16 seconds to dump an 8GB zvol using zfs send, and 12 seconds to restore it using zfs receive. This is on the SSD, of course.
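For context, this is roughly what the dump/restore pattern looks like and the throughput those timings imply (the zvol name `tank/vm1` and the backup path are hypothetical):

```shell
# The send/receive pattern (zvol name and path are hypothetical):
#   zfs snapshot tank/vm1@backup
#   zfs send tank/vm1@backup > /backup/vm1.zfs       # ~16 s for 8 GB
#   zfs receive tank/vm1-restore < /backup/vm1.zfs   # ~12 s
# Back-of-the-envelope throughput those timings imply:
send_mb_s=$(( 8 * 1024 / 16 ))
recv_mb_s=$(( 8 * 1024 / 12 ))
echo "send: ${send_mb_s} MB/s, receive: ${recv_mb_s} MB/s"
```

So both directions run at several hundred MB/s, which is why a single SSD keeps up comfortably.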

I am in two minds though. I was initially going to go for 4x 6-drive raidz2 vdevs, then someone convinced me I may as well go for 3x 8-drive raidz2 vdevs, which makes sense, but ZFS best practice according to this blog says different: Aaron Toponce: ZFS Administration, Part VIII - Zpool Best Practices and Caveats
For the number of disks in the storage pool, use the “power of two plus parity” recommendation. This is for storage space efficiency and hitting the “sweet spot” in performance. So, for a RAIDZ-1 VDEV, use three (2+1), five (4+1), or nine (8+1) disks. For a RAIDZ-2 VDEV, use four (2+2), six (4+2), ten (8+2), or eighteen (16+2) disks. For a RAIDZ-3 VDEV, use five (2+3), seven (4+3), eleven (8+3), or nineteen (16+3) disks. For pools larger than this, consider striping across mirrored VDEVs.
So according to this... the 4x 6-drive raidz2 layout is the way to go :(
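The capacity side of the trade-off is easy to put in numbers: each raidz2 vdev loses two drives to parity, so with 24 drives the two layouts work out as follows (a quick sanity check of space only, not the "power of two plus parity" performance argument):

```shell
# 24 drives split two ways, raidz2 loses 2 drives per vdev to parity
data_4x6=$(( 4 * (6 - 2) ))   # 4 vdevs of 6 drives each
data_3x8=$(( 3 * (8 - 2) ))   # 3 vdevs of 8 drives each
echo "4x6 raidz2: ${data_4x6} data drives; 3x8 raidz2: ${data_3x8} data drives"
```

So 3x 8 buys two extra drives' worth of space, while 4x 6 hits the 4+2 "sweet spot" and, with one more vdev, gives more IOPS, since ZFS stripes writes across vdevs.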



Then for those interested in my network at home, these are the basics of it:
1x Billion 7300RA ADSL Router used as modem only, in bridge mode
1x Mikrotik 751G-2hnd, Router + basic 1Gb/s switch + WiFi for Laptops/Phones
1x Linksys 1Gb/s 24-port rack-mountable switch
1x Gaming PC - i7-2600k
1x HTPC - core2duo 2.3ghz
1x Fileserver (as above in the rack, 4u)
Then a bunch of Laptops and Cellphones using the WiFi

1x 700VA UPS for the ADSL modem and Mikrotik (lasts about 5 hours @ 1% load)
1x 700VA UPS for the HTPC (lasts about 25 minutes @ 23% load)
1x 700VA UPS for the gaming PC (lasts 0 minutes when gaming, 15 minutes when not gaming; NUT on Windows doesn't work, so I turn it off manually)

No UPS for the rack yet, but it's on the list. I was going to use my gaming PC's UPS as a temporary solution, but since it doesn't work on illumos for me yet, I threw away that idea.
 
Invest in a better ups hehe

Sent using direct mind to machine interface
How about determination, perseverance and just being a plain hardass? I finally tracked down a repository with updated drivers for OpenIndiana. It is the hipster repository, from the OpenIndiana Hipster project, which carries some bleeding-edge packages, unlike regular OpenIndiana, which tries to stay on the same package versions as Oracle Solaris.

So now, finally, I have a working UPS on OpenIndiana. The driver problem wasn't with the UPS itself, but rather with the USB-to-serial adapter that I am using with the UPS.

That said, I have now finally got everything I need working on OpenIndiana; all that is left is cleaning up and building my RAID pool once the drives and RAID controller arrive at the end of the month.
 
Finally, the board is on its way!
[image: ymatujuq.jpg]


I should have it by next week, and then the build will commence. I have been a bit easy-going, but moving into a house takes its toll on your time.
 
Woah, that's a sick board!

Looking at what is available now, I do think I should have chosen a different motherboard, but I'm not going to be upset over it; I can always upgrade later, and first I want the server running.

Sadly, the drives I want are out of stock at my favourite supplier :(
 
OK, so the last few items arrived to get my raidz2 pool up and running. The one thing I didn't consider is now my bottleneck: I only have a single 1Gbps network port on this motherboard, and I easily max it out just transferring data to the server.

Code:
root@fs1 ~ # zpool iostat -v
                              capacity     operations    bandwidth
pool                       alloc   free   read  write   read  write
-------------------------  -----  -----  -----  -----  -----  -----
rpool                      73.4G  45.6G     29     29   311K   532K
  c4t0d0s0                 73.4G  45.6G     29     29   311K   532K
-------------------------  -----  -----  -----  -----  -----  -----
tank                       1.01T  20.7T     14  3.10K  58.2K   107M
  raidz2                   1.01T  20.7T     14  3.10K  58.2K   107M
    c1t50014EE2B3B22A68d0      -      -      1    509  5.47K  29.1M
    c1t50014EE2B3B22961d0      -      -      2    515  10.6K  29.1M
    c1t50014EE25E5C8FDFd0      -      -      3    512  13.3K  29.1M
    c1t50014EE209071DA5d0      -      -      1    508  5.46K  29.1M
    c1t50014EE209073B61d0      -      -      2    516  10.5K  29.1M
    c1t50014EE20907672Bd0      -      -      3    516  12.9K  29.1M
cache                          -      -      -      -      -      -
  c8t0d0                    231G  7.13G    390    275   935K  32.8M
-------------------------  -----  -----  -----  -----  -----  -----

Traffic graph on the port:

[image: traffic-to-fs1.png]
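A single gigabit port tops out at 125 MB/s before protocol overhead, which lines up with the ~107 MB/s write figure in the iostat output. One way around it on illumos, assuming a second NIC and an LACP-capable switch (the `e1000g0`/`e1000g1` link names are hypothetical), would be 802.3ad link aggregation:

```shell
# Theoretical ceiling of one gigabit link, in MB/s:
line_rate=$(( 1000 / 8 ))
echo "1 Gbps = ${line_rate} MB/s before protocol overhead"

# Hypothetical illumos link-aggregation commands (needs LACP on the switch;
# link names e1000g0/e1000g1 are placeholders):
#   dladm create-aggr -L active -l e1000g0 -l e1000g1 aggr0
#   ipadm create-if aggr0
#   ipadm create-addr -T dhcp aggr0/v4
```

The caveat is that a single TCP stream still hashes onto one link, so aggregation mainly helps when several clients hit the server at once.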
 
So my cards arrived at long last, and my cable should be here by tomorrow.

Happiness, I tell you.
 
