I consider myself fairly down-to-earth, but sometimes I like to have a bit of fun.
Today I decided to spec out an extreme pfSense build on pcpartpicker.com. The chances of me actually building this beast of a machine are slim to none (especially for my own home network), and the advantages of a system like this wouldn't be nearly as apparent on a small home network as they would be for a mid-size to large business (no matter how nerdy the homeowner is). But I can dream, can't I?
The main focus here centered on three things: reliability through redundancy, massive speeds, and the fact that pfSense now supports ZFS. As you can probably tell, I am a ZFS freak, so I paid a lot of attention to storage. The dual Intel 400GB PCIe SSDs are part of the 780 Series. They will be striped together using ZFS and will hold the Squid cache (at least as much of it as is not contained in the machine's massive RAM).
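Striping two drives is a one-liner in ZFS. Here's a minimal sketch of what setting that up might look like from a shell on pfSense (which is built on FreeBSD); the device names nvd0/nvd1 are assumptions, since the actual names depend on how the hardware enumerates:

```shell
# Create a two-disk stripe (no redundancy) for the Squid cache.
# nvd0 and nvd1 are hypothetical NVMe device names.
zpool create squidcache /dev/nvd0 /dev/nvd1

# Carve out a compressed dataset to point Squid's cache_dir at.
zfs create -o compression=lz4 squidcache/cache
```

The stripe doubles throughput and capacity at the cost of redundancy, which is fine here: a cache, like the OS below, is disposable.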
The reasoning here is that ZFS will store the "important" Squid cache, so there is no need to take up any more PCIe slots. So where does the operating system reside? This is where the other two Samsung 850 EVOs come in. These will hold the operating system in a RAID array using the onboard RAID controller. I'm using RAID rather than a ZFS zpool for one simple reason: the operating system is disposable. The pfSense settings can be easily backed up, and with AutoConfigBackup now being free, you hardly lose anything if you experience silent data corruption. What would be the point of giving up even more RAM to run a zpool just for the operating system? I certainly don't see one.
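That "the OS is disposable" argument rests on pfSense keeping its entire configuration in a single XML file. A manual backup is therefore just a copy; a rough sketch (the hostname is hypothetical, and normally you'd use the web GUI or AutoConfigBackup instead):

```shell
# pfSense stores its whole configuration in /cf/conf/config.xml.
# Pull a dated copy off the box; the hostname is an example only.
scp root@pfsense.example.lan:/cf/conf/config.xml \
    ./pfsense-config-$(date +%Y%m%d).xml
```

Restoring that one file onto a fresh install gets you back to where you were, which is why mirroring the OS with the "dumb" onboard RAID is good enough.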
The Network Cards
Of course, the essence of pfSense is networking, so this is where the most thought went. There is no beating Intel NICs when it comes to FreeBSD support, so the machine is connected up with two Intel PRO/1000 PF Dual Port Server Adapters, a fiber-optic Gigabit Ethernet card said to be compatible with pfSense. I also have two quad-port Intel Gigabit adapters (for a total of eight gigabit NICs); this way you can have several redundant links in case you choose to connect this to some gigabit Ethernet networks. Yes, I could have used a 10GbE adapter, but those only have two ports, and I figured with a setup like this you would want to aggregate links with LAGG interfaces using LACP or something. There are also two Intel 10GbE NICs on the motherboard, which could be used if you wanted another redundant path or instead a little more bandwidth.
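For the curious, bundling some of those gigabit ports into an LACP aggregate is straightforward on the underlying FreeBSD. In pfSense you would do this from the web GUI, but the equivalent commands look roughly like this; the interface names igb0/igb1 and the address are assumptions:

```shell
# Create a LACP link aggregate from two of the Intel gigabit ports.
# igb0/igb1 are typical FreeBSD names for Intel NICs, but the real
# names depend on the hardware and driver.
ifconfig lagg0 create
ifconfig lagg0 laggproto lacp laggport igb0 laggport igb1
ifconfig lagg0 inet 192.168.1.1/24 up
```

With eight gigabit ports you could run several aggregates like this, so a failed cable or switch port just drops bandwidth instead of dropping the link.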