Zero Emission Heating is a research project that uses left-over processing power from old computer equipment to transform green energy (generated using wind and solar power) into heat.
This yields the following benefits:
- The 'high quality' energy serves a dual purpose, instead of being directly converted to heat.
- Reuse of equipment ranks higher on the Waste Hierarchy ladder than recycling or disposal.
- The computing power can serve the common good, by crunching numbers for science projects using BOINC.
- The use of computers allows advanced smart management; for example, heating only when prices are low, or scaling heating (computation) down to a lower level.
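The smart-management idea can be sketched as a small shell script: compare the current electricity price against a threshold and tell the local BOINC client to run or pause via `boinccmd --set_run_mode`. The price value and the 20ct threshold below are placeholder assumptions; in practice the price would come from your energy supplier's API.

```shell
#!/bin/sh
# Decide whether to heat (compute) based on the electricity price.
# Prints "always" when the price is at or below the threshold, "never" otherwise.
decide_run_mode() {
    price_ct=$1       # current price in eurocents per kWh
    threshold_ct=$2   # heat only when the price is at or below this
    if [ "$price_ct" -le "$threshold_ct" ]; then
        echo "always"
    else
        echo "never"
    fi
}

# Example: PRICE is a placeholder; fetch it from your supplier's API.
PRICE=18
MODE=$(decide_run_mode "$PRICE" 20)
# Apply the decision to the local BOINC client:
# boinccmd --set_run_mode "$MODE"
```

The same function could just as well lower the run mode to `auto` for a partial scale-down instead of a hard stop.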
I have an old PC with an additional GPU; however, running it at full capacity does not yield enough heat to fully heat my home, so I probably need to run multiple machines and find good solutions for the noise they produce. This only works with zero emission energy: I have solar panels mounted on the roof and bought a stake in wind power, so power is available from different sources.
Environmental costs are a factor to consider when switching to fully electric power. Yet what does it cost? Tariffs for the Netherlands are listed below:
- 1kWh electricity costs 22ct (5ct w/o VAT)
- 1m3 natural gas costs 68ct (20ct w/o VAT)
1m3 of natural gas produces roughly 37 MJ of heat, and 1kWh of electricity equals 3.6 MJ, thus you need roughly 10kWh of electricity to produce the same amount of heat as 1m3 of natural gas.
Thus the overall comparison yields:
- 10kWh electricity costs 220ct (50ct w/o VAT)
- 1m3 natural gas costs 68ct (20ct w/o VAT)
Heating with electricity in the Netherlands is roughly 3.2 (2.5 w/o VAT) times more expensive than natural gas. However, since natural gas is 'deprecated', its price is expected to rise steadily, whereas prices for electricity are going down. I consider these extra costs 'environment protection', which I believe should be factored into more decisions.
Why not use …
- Air source heat pump: makes quite some noise and annoys the neighbours.
- Geothermal heat pump: a large investment, and not scalable without good planning and coordination with the neighbourhood.
- District heating: transporting excess heat requires large infrastructure; spending the same amount on improving the electricity grid allows for more reuse possibilities.
By using Gridcoin (GRC) you can earn back a small part of the costs required to run the equipment.
It comes with the following challenges:
- Running (old) computers requires time and effort to keep them going.
- Spare parts need to be on hand to cope with failures.
- Drivers for old GPU hardware are sometimes hard to get to work.
Join the cause
Find 'Team Zero Emission Heating' in your favourite BOINC project and join the team to promote our effort.
- Start-up Nerdalize is using computing power to heat tap-water.
Test Setup 1
- 3 x NVIDIA Quadro 4000
- Intel Quad Core
- 4GB DDR3 RAM
- 750W Corsair power supply
- PCI-Express 4 port riser board
- Intel motherboard
# Install Ubuntu 16.04 Server LTS amd64: <Out-Of-Scope>

# Update system to latest version:
$ sudo apt-get update
$ sudo apt-get dist-upgrade
$ sudo reboot

# Unload nouveau driver (causes trouble during NVIDIA install):
$ sudo rmmod nouveau

# Install NVIDIA legacy 390.87 drivers:
$ sudo apt-get install build-essential
$ chmod 755 NVIDIA-Linux-x86_64-390.87.run
$ sudo ./NVIDIA-Linux-x86_64-390.87.run
$ sudo update-initramfs -u
$ sudo reboot

# Install helper software for GPU testing:
$ sudo apt-get install clinfo opencl-headers git
$ git clone https://github.com/ihaque/memtestCL.git
$ cd memtestCL/
$ make -f Makefiles/Makefile.linux64

# GPU testing and status utilities:
$ ./memtestCL
$ clinfo
$ nvidia-smi

# Install BOINC software:
$ sudo apt-get install boinc-client boinctui
$ sudo systemctl start boinc-client

# Configure BOINC software:
$ boinccmd --project_attach http://einstein.phys.uwm.edu/ <account_id>

# Monitor BOINC software:
$ boinctui
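For keeping an eye on a multi-GPU heater, `nvidia-smi` can report per-GPU temperatures as plain CSV. A small sketch (the 5-second interval is just a suggestion):

```shell
#!/bin/sh
# Print the hottest reading from a list of per-GPU temperatures (one number per line).
max_temp() {
    sort -n | tail -n 1
}

# Poll the GPUs every 5 seconds and show the hottest one.
# (Requires the NVIDIA driver installed as above.)
# while true; do
#     nvidia-smi --query-gpu=temperature.gpu --format=csv,noheader | max_temp
#     sleep 5
# done
```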
Test Setup 2
- 2 x Radeon HD7870
- 550W Power Supply
- Dual-PCIe motherboard
# Install standard Ubuntu 14.04.05 LTS amd64 Server edition: <Out-Of-Scope>

# Download fglrx binaries from:
# https://www.amd.com/en/support/graphics/amd-radeon-hd/ati-radeon-hd-5000-series/ati-radeon-hd-5770

# Bring the system up to date:
$ sudo apt-get dist-upgrade

# Install and load a compatible kernel:
$ sudo apt-get install linux-headers-3.19.0-80-generic linux-image-3.19.0-80-generic linux-image-extra-3.19.0-80-generic linux-tools-3.19.0-80-generic
$ sudo apt-get remove linux-headers-4.4* linux-image-4.4* linux-image-extra-4.4* linux-tools-4.4*
$ sudo reboot

# Install fglrx (headless) drivers:
$ sudo dpkg -i fglrx-core_15.201-0ubuntu1_amd64_UB_14.01.deb
$ sudo apt-get install -f

# Fix missing OpenCL symlink:
$ cd /usr/lib
$ sudo ln -s libOpenCL.so.1 libOpenCL.so

# Fix segfault on e.g. the 2nd invocation of clinfo:
$ ar p fglrx_15.201-0ubuntu1_amd64_UB_14.01.deb data.tar.gz | sudo tar -C / -xzf - ./etc/ati/amdpcsdb.default

# Install helper software for GPU testing:
$ sudo apt-get install clinfo opencl-headers git
$ git clone https://github.com/ihaque/memtestCL.git
$ cd memtestCL/
$ make -f Makefiles/Makefile.linux64

# GPU testing and status utilities:
$ ./memtestCL
$ clinfo
$ aticonfig --adapter=all --odgt

# Install BOINC software:
$ sudo apt-get install boinc-client boinctui
$ sudo systemctl start boinc-client

# Configure BOINC software:
$ boinccmd --project_attach http://einstein.phys.uwm.edu/ <account_id>

# Monitor BOINC software:
$ boinctui
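After installing the drivers it is worth verifying that both cards actually show up as OpenCL devices before attaching BOINC projects. `clinfo` lists them; counting its "Device Name" lines is a quick sanity check (the exact label is an assumption about clinfo's output format, so check the raw output first):

```shell
#!/bin/sh
# Count OpenCL devices in clinfo output fed on stdin.
count_devices() {
    grep -c "Device Name"
}

# Expect 2 for the dual-HD7870 setup:
# clinfo | count_devices
```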
- GPUs with less than 2GB of RAM are no longer supported by many BOINC projects.
- Software support for older video cards is lacking. The AMD open source drivers do not support OpenCL, which is required for the computations. The NVIDIA drivers tend to be easier to install.
- Motherboards, RAM and GPUs are fairly cheap to source; however, high-power PSUs are not cheaply available.
- Riser cards (used for crypto-mining) provide a cheap way to add extra GPUs to a system.
- Case mounting is a large problem. When using non-standard hardware layouts, I am basically stuck making custom (wooden) casings.
- Use an SSD instead of an old hard disk. Sticking with the old disk saves about 25 EUR, however the SSD gains you quite some speed during install and boot, plus better reliability.
Repairing PSUs is very hard. Since components are very tightly spaced, there is no room to make repairs without destroying or removing other components. Parts that are easily repairable on SMPS (Switch Mode Power Supply) units are capacitors, fuses and some bridges. However, when one of these has failed, it has normally cascaded into failure of other components as well. If you cannot repair a PSU by replacing the visibly broken parts, I would consider it a lost cause.
Repairing GPUs is an art. I like the view of Louis Rossmann on the matter. He quite rightfully(?) points out that the fault is often not a broken solder connection on the outside, but rather a loose connection on the inside: the inner bond connections come loose due to the stress of the video card constantly heating and cooling. I have put this into practice by sourcing broken GPUs and attempting to repair them. The GPUs I source still produce a working image (with artifacts). With 5 cards from multiple vendors repaired, I have a 100 percent success rate.
The method I use:
- Strip the video card: remove the cooler and the cooling paste.
- Isolate the working area and protect other sensitive parts from burning by putting Kapton tape on the PCB around the GPU.
- Using a hot-air station, slowly heat the GPU to around 160 degrees Celsius; it takes roughly 10 minutes to reach this level. Measure the temperature using multiple temperature sensors, and make sure the hot-air station is set to around 160-180 degrees Celsius. Do not be tempted to use higher temperatures: you will most likely damage or move parts if the solder starts to melt.
- Let the GPU cool down by slowly removing the heat (e.g. 'cool' to 100 degrees Celsius by setting the hot-air gun to 100 degrees Celsius).
- Allow cooling to room temperature without the use of forced cooling.
- Re-apply cooling paste; use some good stuff instead of the factory crap. Also keep in mind that less is more: the cooling paste is only supposed to fill the microscopically small gaps of air between the chip and the copper plating; any other configuration will cause worse heat transfer and thus hotter electronics.
- Clean up your stuff and try the card.
- Make the thermal profile of the card more stable, e.g. let it run idle for a while before turning off the PC. Running a GPU at full throttle and then turning the system off causes a lot of stress on the electronics, since it has no time (no power = no cooling fans) to return to its 'happy' operating temperature, which is around 40-50 degrees Celsius.
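The last step can be automated: a shutdown script that stops BOINC first and waits for the card to drift back towards its 'happy' temperature before powering off. The 50-degree target and the `nvidia-smi` temperature query follow the NVIDIA setup above; treat this as a sketch, not a finished tool.

```shell
#!/bin/sh
# Return success (0) once the temperature has dropped to the target or below.
cool_enough() {
    temp_c=$1
    target_c=$2
    [ "$temp_c" -le "$target_c" ]
}

# Stop the heat source, then let the idle fans bring the hottest card down to ~50 C.
# sudo systemctl stop boinc-client
# until cool_enough "$(nvidia-smi --query-gpu=temperature.gpu --format=csv,noheader | sort -n | tail -n 1)" 50; do
#     sleep 10
# done
# sudo poweroff
```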