Chia to spice up your life

u2ros
Nov 3, 2021

Healthy Chia seeds! :)

I’m quite a busy person. A family, a job and a house project. But I guess that wasn’t enough, so in May, year of our Lord 2021, I decided to hop on the Chia hype train. Before I present my journey over the past 6 months, let me give you a brief overview of the Chia crypto platform (you can skip it if you want).

I actually first read about Chia (ticker XCH) on Ars Technica. In a nutshell, Chia brands itself as a green, eco-friendly base layer protocol. Green as opposed to BTC and ETH, which use the battle-tested proof of work (PoW) consensus algorithm to ensure the security of the network. The problem with PoW is of course the tremendous amount of energy it uses, as most miners are either dedicated ASIC devices or GPUs running at full throttle. However, security, and consequently trust, is one of the fundamentals of any blockchain platform, so you can’t cross it off your checklist if you want to be serious in this space.

Chia solves security by utilizing what they call proof of space and proof of time, an approach that secures the blockchain with orders of magnitude less energy. Hmmm, hold it buster! We’ve seen cryptos that use disk space for confirming blocks, like FileCoin for example. True, projects have used disk space before, but it is how the disk space is utilized that matters, and here Chia is fundamentally different.

Mining Chia, or farming, as they call it, has two stages:

  • a one time CPU intensive “plot” file generation process
  • continuous farming on the generated plot files (block confirmation process)

The greenness of Chia comes from the fact that you create plot files (pregenerated proofs) only once. So something that happens on the ASIC or GPU continuously is a one-off event in Chia. Once your plot files are generated, to participate in the block confirmation process and earn XCH, you need as little as a Raspberry Pi, your disks and the Chia client software running. The only power consumed is about 5–7W per disk.

Plot twist though! The plotting and the farming phases are, essentially and unfortunately, on completely opposing shores in terms of the hardware they require. Being a Chia farmer is not as trivial as throwing money at GPUs, and for me it was quite a trip, and an educational one too.

Project constraints

I set myself two: an acceptable noise level for my setup and an acceptable final energy consumption (after plotting is done and all that remains is farming). My initial idea was to acquire 1 PB of space for Chia farming. I bought 14TB disks, which meant around 75 of them.
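For a rough sense of scale, here is the back-of-the-envelope math behind those numbers (a minimal sketch; the 5–7W per spinning disk is the figure quoted earlier, the rest is plain arithmetic):

```python
# Rough sizing of a 1 PB farm built from 14 TB disks, plus its farming-phase
# power draw (using the 5-7 W per spinning disk figure mentioned earlier).

TARGET_TB = 1000          # 1 PB expressed in TB
DISK_TB = 14              # capacity of a single drive
WATTS_PER_DISK = (5, 7)   # rough per-disk draw while farming

disks_needed = -(-TARGET_TB // DISK_TB)   # ceiling division -> 72 disks
low_w, high_w = (disks_needed * w for w in WATTS_PER_DISK)

print(f"disks needed: {disks_needed}")                     # ~72, rounded up to 75
print(f"farming power (drives only): {low_w}-{high_w} W")  # a few hundred watts
```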

A word on plotting:

Quite some progress has been made in the development of the plotting software since the start of the project. From the combination of plotman and the official plotter, which burned through SSD disks, we moved to the MadMax plotter or Bladebit, which can both better utilize memory and cores. I used two hardware configs: a Ryzen 9 5950X with 128GB of RAM and a Threadripper PRO 3995WX with 512GB of RAM. It turns out that the 5950X is an incredibly fast beast in terms of price/performance. If you put 4x 5950X rigs against a single 3995WX, running 4 parallel plots on each, utilizing all 16 cores per 5950X and 4x 16 cores on the 3995WX, the TR PRO only wins by being more power efficient: it burns around 600W, while 4x 5950X burn around 1100W at full throttle. Cost-wise, building 4x 5950X rigs or a single 3995WX system comes to about the same. Having a single machine for plotting instead of four is easier from a management standpoint though.
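To get a feel for how much plotting a 1 PB target actually implies, here is a rough estimate (a sketch only: the ~101.4 GiB / ~108.8 GB size of a standard k32 plot is from the Chia documentation, while the plots-per-day rate is an assumed round number, not a measured value from these rigs):

```python
# Rough estimate of the total plotting effort behind ~1 PB of farm space.
# Assumptions: k32 plots of ~108.8 GB each, and an assumed rate of
# 40 plots/day per rig -- adjust to whatever your hardware actually does.

PLOT_GB = 108.8             # one k32 plot in decimal gigabytes
TARGET_GB = 1_000_000       # 1 PB
PLOTS_PER_DAY_PER_RIG = 40  # assumption; depends on CPU, RAM and tmp storage

plots_needed = round(TARGET_GB / PLOT_GB)            # ~9200 plots
days_one_rig = plots_needed / PLOTS_PER_DAY_PER_RIG
days_four_rigs = days_one_rig / 4

print(f"plots needed: ~{plots_needed}")
print(f"one plotting rig: ~{days_one_rig:.0f} days")
print(f"four rigs in parallel: ~{days_four_rigs:.0f} days")
```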

5950X vs 3995WX, size comparison (yep, it is BIG)

Word of caution! The components used in the plotting process are under great stress, so monitor your temps and act accordingly.

Left: Plotting with RAM disk without active cooling (DDR4 temp limit is 81°C). Right: Same component with dedicated fan installed to blow over the RAM sticks

Onward to farming!

Once the plots are created, they are moved to whatever HDD-based long-term storage you have designed. I actually went through 4 phases, each better than the previous.

Phase 1, Commercial NAS enclosure at home

On the Chia blog, I saw they gave some recommendations about what kind of storage devices to use for farming. One of the ideas was to use a NAS enclosure, like the Synology DS1821+, so I bought one, popped some disks in, set up a RAID and started filling the disks with plot files. In April, when I started, my two 5950X plotters were happily humming away in my apartment and I could easily cool everything with just an open window. This was a pretty good solution for a 2–4 enclosure setup with 2 plotters. Total farm size was around 0.5 PB.

Pros:

  • simple setup and management of enclosures via web interface
  • robust, reliable, silent

Cons:

  • super expensive per unit (for 1PB, I would need about 11 enclosures, costing around 13000 EUR). Cost per disk bay was around 150 EUR, which is a third to a half of the price of one HDD (see the quick calc after this list).
  • cannot provide true JBOD, only RAID, which meant I would lose 1 disk per enclosure. This would push the cost per usable disk bay to 171 EUR. RAID0 was out of the question, because a single disk failure would cause the loss of the plot files on all 8 disks.
  • 11 enclosures is a no-go in the flat where I live. Too much noise, too much dust and too many options to trip something over or trip over something ;)
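For transparency, the per-bay figures above are just straightforward division (a minimal sketch using the enclosure count, total price and the 8 bays of a DS1821+ quoted above; the couple of euros of difference from my rounded numbers is just that, rounding):

```python
# Cost-per-bay math for the NAS enclosure option (numbers quoted above).

ENCLOSURES = 11
TOTAL_EUR = 13_000
BAYS_PER_ENCLOSURE = 8       # a DS1821+ has 8 bays

all_bays = ENCLOSURES * BAYS_PER_ENCLOSURE             # 88 bays
usable_bays = ENCLOSURES * (BAYS_PER_ENCLOSURE - 1)    # 77 bays after losing 1 per enclosure to RAID

print(f"cost per bay:        {TOTAL_EUR / all_bays:.0f} EUR")     # ~148 EUR
print(f"cost per usable bay: {TOTAL_EUR / usable_bays:.0f} EUR")  # ~169 EUR
```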

While these enclosures are super nice and I would recommend one for storing your personal data, they are neither cost-efficient nor disk-space-efficient for Chia. If I wanted to make some profit out of this (and justify the time spent), I also needed to scale a bit. Also, warm days were getting more common, which meant more heat and more fan noise. Not something you want to listen to at home.

Lovely Synology DS1821+, quiet, compact and user friendly. But costly!

Phase 2, using old servers, rented office

Scaling up meant not only buying more gear but also needing more physical space, so I decided to move my stuff to a dedicated location, a small office with AC. Friends from Discord suggested that one way to scale is to buy some old servers. After reading up on the topic, I decided to go for some Supermicro 826 and 836 chassis. I was vaguely familiar with them because we use them at work. Why not the 4U 846, with its 24 disks per chassis, you ask? Sure, it would bring the cost per bay down further, but that motherf* is heavy and my office is on the second floor. So yeah, practical reasons; I’m no hulk.

A handful of eBay-bought servers arrived; I set up a 19" rack on wheels, installed the servers and added a 10GbE switch. I would be transferring my plots from the plotters to the servers, so 1 Gbit LAN was out of the question (about 2 minutes of copying per plot vs 15 minutes). The setup was working nicely.
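The copying-time difference is easy to sanity-check (a rough sketch assuming a ~108.8 GB k32 plot and raw link rates; real-world 10 GbE copies land closer to 2 minutes because protocol overhead and the destination disks eat into the theoretical figure):

```python
# Rough transfer time for one k32 plot over 1 Gbit vs 10 Gbit Ethernet.
# Ignores protocol overhead and assumes the disks can keep up.

PLOT_BYTES = 108.8e9   # one k32 plot, ~108.8 GB

for link_gbit in (1, 10):
    bytes_per_s = link_gbit * 1e9 / 8
    minutes = PLOT_BYTES / bytes_per_s / 60
    print(f"{link_gbit:>2} Gbit/s link: ~{minutes:.1f} min per plot")
```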

Pros:

  • cheap to buy on eBay (around 700 EUR for a 16-bay server, 400 for a 12-bay server); cost per bay is around 42 EUR, 5x cheaper than buying NAS enclosures.
  • despite being 10 years old, this is reliable hardware; I had zero hardware problems.

However, there were some issues:

Cons:

  • Power hungry. Most of these servers use dual Xeon CPU setups, with each server easily surpassing 300 or 400W in operation, and this was a problem. 5 servers with support for fewer than 70 disks would burn around 1800W.
  • Still heavy to move around. Even though I opted out of using the giant 4U chassis, moving the 2U and 3U chassis is still cumbersome and can hurt your back.
  • Loud! They say Supermicro servers are the quietest compared to Dell or HP servers, but you still cannot keep one in the same room and listen to Mozart. Since I’m not physically present on my farm, except for maintenance, I could live with a couple of servers running, but there are other tenants in the building and the noise would be too much.
  • BIOS POST times that make your beard grow. Missed that escape key to bring up GRUB or the boot menu? Too bad, hit Ctrl+Alt+Del and wait 3 minutes. Need to restart the server? Too bad, go for some shopping or lunch while you wait for the reboot to finish.

Phase 3, One old server + expander chassis

There I was sitting in the rented office next to a truckload of old servers and no nuclear power plant at hand to connect them to directly. Luckily there was a rather elegant solution.

I didn’t know at the time that the backplanes of these servers (the devices into which you plug the HDDs) come in 3 types: TQ, A and EL. The TQ type uses one SATA cable for each disk bay, so these are usually found in 12 or 16 bay chassis. Way too many cables! The A type uses SAS cables, 1 SAS cable per 4 disks, so if you have a 16-bay chassis, you’d connect 4 SAS cables to the A type backplane. Finally, there is the EL or expander backplane.

The EL type backplane actually requires only a single SAS cable, because the expander chip(s) on these backplanes act as a kind of network switch for disks. The disks share the roughly 2.2 GB/s of a 4-lane SAS2 connection, which can be doubled to about 4.4 GB/s. What’s even better, it allows daisy chaining from one expander backplane to another. Depending on what kind of HBA card you use, you can literally connect thousands of disks in this manner. I personally use an LSI 9206–16e HBA card, which can control up to 1024 disks.
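To see why sharing one SAS2 link among dozens of disks is a non-issue for farming, here is a quick calculation (a sketch; the 6 Gbit/s per lane and 4 lanes per cable are the SAS2 spec, while the per-challenge read volume is a rough assumption about how little a plot lookup actually reads):

```python
# Why a shared SAS2 wide port is plenty for Chia farming.
# SAS2 runs at 6 Gbit/s per lane; an SFF-8088 cable carries 4 lanes.
# 8b/10b encoding leaves roughly 80% of that as payload bandwidth.

LANES = 4
LANE_GBIT = 6
ENCODING_EFFICIENCY = 0.8

wide_port_GBps = LANES * LANE_GBIT * ENCODING_EFFICIENCY / 8   # ~2.4 GB/s per cable

DISKS = 72   # an entire ~1 PB farm hanging off a single HBA port
per_disk_MBps = wide_port_GBps * 1000 / DISKS

print(f"wide port: ~{wide_port_GBps:.1f} GB/s")
print(f"per disk if all read at once: ~{per_disk_MBps:.0f} MB/s")

# Farming lookups are a handful of small random reads per challenge
# (assume well under 1 MB per eligible plot every ~10 seconds), so the
# shared link sits idle almost all of the time.
```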

Realizing all my servers had the EL type backplanes, I quickly reworked my entire setup, and this had several really nice consequences:

Pros:

  • only one chassis now held an actual server inside (mobo, CPUs, RAM, …); the others were just empty boxes, daisy-chained together and connected to the LSI HBA card in the one server that remained.
  • A one-off benefit: I could resell the motherboards, CPUs and RAM on eBay and get some of the investment back
  • Power consumption dropped significantly, to about 8W per disk (including the consumption of the fans, the backplanes and the “master” server, which was swapped for a Z490-based i5-10500)

Cons:

  • no reduction in required physical space
  • still had to deal with heavy chassis boxes
  • the fans needed to keep the temps in check were still too loud (I tried lower-RPM fans in custom 3D-printed housings, but they were not able to keep the disks under the recommended 39°C)

Single “populated” chassis on top, multiple expander boxes under it, connected via 8644–8088 (black) cable

Phase 4, Custom cases

Noise, space, temperatures and physical mass were the remaining annoyances. After some consideration, I decided I would design and construct my own cases. Thinking about the problem, I returned to the 4U 846 chassis and its advantage of being capable of carrying 24 disks. I decided its big backplane was gonna be the basis for my cases.

I found a guy on eBay who was selling a couple of 846 SAS3 backplanes. SAS3 bandwidth is absolute overkill for what Chia farming needs (just some queries every couple of seconds, no writes), but after some talk, he told me he had a truckload of 846 SAS2 EL2 backplanes that he wasn’t advertising yet. We quickly made a deal, and my dad and I got to work. He’s a MacGyver type of guy, so I would do the 3D design and pick components, and he’d get the material and do the assembly.

I decided to go for a vertical design with 2 fans blowing into the case and 2 fans sucking air out. At the same time, I’d get some help from natural convection and the AC blowing cool air over the top. Finding the right fan was quite a challenge. It needed to generate enough static pressure to create airflow and, at the same time, remain under 35 decibels (my requirement). I chose the Silverstone SST-FM84, which had the added benefit of adjustable RPM via a control knob. Perfect! The backplane also has a 120mm fan blowing over it from underneath. I went for a Corsair 550W power supply, which had enough current on the 5V rail that SATA disks use (note that SAS disks run on 12V and burn significantly more power).
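Picking the PSU came down to budgeting the 5V rail; here is the napkin math involved (a sketch with an assumed ~0.7A of 5V per active 3.5" SATA drive, a typical datasheet figure rather than a measurement of these exact drives; check it against your drives and the amperage printed on the PSU label):

```python
# Napkin math for the 5 V rail when hanging 24 SATA drives off one PSU.
# Assumption: ~0.7 A at 5 V per active 3.5" SATA drive (check the datasheet);
# spin-up surges hit the 12 V rail and are staggered by the backplane anyway.

DRIVES = 24
AMPS_5V_PER_DRIVE = 0.7   # assumed typical active draw on the 5 V rail

total_amps = DRIVES * AMPS_5V_PER_DRIVE
total_watts = total_amps * 5

print(f"5 V rail load: ~{total_amps:.0f} A (~{total_watts:.0f} W)")
print("Compare against the 5 V amperage printed on the PSU's spec label.")
```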

Fully populated custom case with Supermicro 846 SAS2 EL2 backplane as basis

After some testing, measuring temperatures and upgrading the backplane firmware (I did not know updating backplane firmware is actually a thing), I settled on the final configuration. Leaving two middle bays in the second-to-last column empty gave me greatly improved thermals, and I could leave the fans spinning at 2/3 speed while still keeping all the disks under 39°C. Yahoo! ;)

Thermogram (not sure the absolute temps are correct due to unknown emissivity)

Pros:

  • Light and portable aluminum cases
  • Least physical space required
  • Flexible cooling/noise settings
  • Most power efficient
  • Lowest cost per disk bay (around 10 EUR)

Cons:

  • No remote control of any kind on the cases; I have to rely on my own monitoring solution, which includes a couple of ambient temperature sensors, smartmontools and Discord webhooks to send me meaningful alerts (see the sketch after this list)
  • No remote power on/off capabilities
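As an illustration of that monitoring, below is a minimal sketch of the kind of check involved: it shells out to smartmontools for drive temperatures and posts an alert to a Discord webhook when a disk runs hot or stops reporting. The device glob, the 39°C threshold and the webhook URL are placeholders to adapt, and error handling is kept to the bare minimum.

```python
#!/usr/bin/env python3
"""Minimal drive-temperature check: smartctl -> Discord webhook alert.

A sketch only: adjust the device glob, threshold and webhook URL (the one
below is a placeholder) to your own setup.
"""
import glob
import json
import subprocess
import urllib.request
from typing import Optional

WEBHOOK_URL = "https://discord.com/api/webhooks/<id>/<token>"  # placeholder
TEMP_LIMIT_C = 39  # recommended max disk temperature mentioned above


def drive_temp(device: str) -> Optional[int]:
    """Return the temperature reported by SMART, or None if unavailable."""
    out = subprocess.run(
        ["smartctl", "-A", "--json", device],
        capture_output=True, text=True,
    )
    try:
        data = json.loads(out.stdout)
        return data["temperature"]["current"]
    except (json.JSONDecodeError, KeyError):
        return None


def alert(message: str) -> None:
    """Post a plain-text alert to the Discord webhook."""
    body = json.dumps({"content": message}).encode()
    req = urllib.request.Request(
        WEBHOOK_URL, data=body, headers={"Content-Type": "application/json"}
    )
    urllib.request.urlopen(req)


if __name__ == "__main__":
    # adjust this glob to your drive naming; partitions (sda1, ...) are skipped
    for dev in sorted(d for d in glob.glob("/dev/sd*") if not d[-1].isdigit()):
        temp = drive_temp(dev)
        if temp is None:
            alert(f"{dev}: no SMART temperature reading (disk offline?)")
        elif temp > TEMP_LIMIT_C:
            alert(f"{dev}: {temp}°C, above the {TEMP_LIMIT_C}°C limit")
```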

Definitely some room to improve still, but I am satisfied. In case of a failure, temps rising or rigs going offline, I have a 5-minute drive to the rented office to check what’s up, and that’s good enough for me.

Testing the thermal behavior with or without plugged bays on the near completed farm

So there you have it. After half a year of dealing with Chia for almost an hour daily, I know how to flash backplane firmware, what SAS is, how it works and its topology options, 3D design, 3D printing, server management and more. Whether my farming will make me a profit, ask me in about a year ;)

Shout-out to ruant, Panzerknacker, G-man, Gnomuz and many others who gave me hardware advice or their perspective ;)

Update: Upon popular request, I’m adding some photos and the recipe of the custom cases.

Components used:

  • Supermicro 846 SAS2 EL2 backplane (EL1 just as good)
  • Corsair RM550X (backplane staggers disk startup so peak current is okay)
  • Chinese 3 pin fan hub
  • 4x Silverstone SST-FM84 80mm fan with rpm knob
  • 1x 120mm fan for cooling the underside.
  • some plexiglass, some side metal panels, and aluminum profiles that make up the frame
  • foam to fill up small gaps for better airflow

NOTE: For anyone who may use these or some other backplanes: I experienced an issue where only a certain number of disks would be recognized by the backplane. This was fixed by upgrading the firmware of the backplanes, for which you need an LSI adapter, a SAS cable, the update software and the correct firmware files (I got those from Supermicro support). Since the software was intended to run on Ubuntu 12.04-era systems, it took some effort to get the deprecated dependencies to work on 20.04. After flashing the firmware, all disks appeared without issues.
