Intel NUC as a backend for development
A comprehensive story on how I started using Intel NUC as a backend for local or remote development, while my code editor (on any machine) is used as a thin client.
Rationale
There were two problems I thought I might solve by using a dedicated machine for development.
- I worked on a project with a large amount of heavy I/O tests that I ran very often, which put intensive stress on SSD drives (with noticeable degradation over time). I use an iMac and a MacBook Pro, and in my opinion the SSD is a weak point in both (hard to extract/replace in the former and impossible to replace in the latter), so I wanted a dedicated machine (where replacing the SSD is easy) that would save me from buying a new Mac once its SSD is dead or so slow that it's impossible to work comfortably.
- We use Docker (like everyone else, I think) for development and deployment, which makes I/O-bound things slow on macOS (for reasons). There are some workarounds; I tried a lot of them, but none worked well in the end. I even had to run the application itself locally on macOS (with all the infrastructure services still in Docker containers) to avoid adding the project directory (which is huge) to the container's context, since syncing it all the time made the fans spin so hard the computer was about to take off.
I was postponing the solution for more than a year, until recently, when the iMac started to crash repeatedly and ultimately didn't turn on. My first thought was: "This is it, too late…" 🤡 The problem was that, as far as I know (I can be wrong), we don't have authorized Apple service providers in Ukraine, so I was completely paranoid about taking my computer anywhere with all the work data on a decrypted SSD (decrypted because of its already degraded performance). I even started watching iMac disassembly tutorials, which looked hard but feasible. With all that frustration I decided to fiddle with the only component accessible without disassembly, which is the RAM, and luckily for me it turned out to be exactly a broken RAM module! Nevertheless, I decided it was a clear sign I should finally implement my long-postponed plan.
For those asking "Why still use those 🤬 Macs?": I had thoughts of building a custom PC (I used to do it before, I like the process and the hardware options currently available) and using it with Linux. Windows might be an option too, since Microsoft puts a lot of effort towards programmers, but as far as I know WSL and Docker for Windows have I/O performance issues similar to macOS. However, every time I try Linux on the desktop (I tried it again a couple of years ago) I get frustrated, since it has all the same issues I was facing many years ago, which take time to fix, and I am way too lazy for that already (I used it for about 10 years on the desktop, across different distributions, and it used to be fun to solve all those missing-driver and other configuration-related issues, but it isn't anymore). Also, I am very used to Macs (the computers and the ecosystem) and spoiled by Retina displays, especially that 5K one in the iMac; as far as I know, there aren't many similar displays available for PC for a reasonable amount of money (there is one from LG that is exactly the panel from the iMac, but I saw a lot of bad reviews on it and I frankly don't like how it looks).
Hardware
Besides technical specifications, I wanted this machine to be compact (have as small a footprint as possible), have a decent appearance (I can't do proper network cable management at the place I currently live, so I knew it would sit in the same room as my working table and I did not want to hide it) and be energy efficient.
The main competitors were: Mac Mini, a custom-built SFF PC (based on Mini-ITX) and Intel NUC (I know there are similar solutions by Gigabyte, Asus and MSI, but I didn't like how they looked). I know many people use remote servers (some companies provide programmers with VMs that run the apps), but I wasn't considering that due to security and legal issues; I am also sure it wouldn't give a smooth enough experience even over a very fast internet connection.
Mac Mini was my first thought, but after some research I figured out it had no replaceable SSD (the 2020 model) and there were problems installing Ubuntu on it (I knew I wasn't going to run macOS there), and I was already tired of workarounds. It is also pricey if you want a good configuration. Withdrawn.
A custom-built SFF PC was something I was very thrilled about: there are so many hardware options available, you can build anything you like, with great performance. The thing I didn't like was the availability of nice and small Mini-ITX cases; the only nice one available in Ukraine at that time was the NZXT H1 (it's tall, but still has a small footprint), however it is not compatible with LGA 1200 motherboards (I was looking at the Intel Core i9 10850K CPU back then), and then I also figured out it had not very efficient cooling. I liked the Ghost N1 case, but it would be very expensive to get it here.
I struggled over whether I should proceed with the Intel NUC or build a custom PC (I was already thinking that maybe the NZXT H210 wasn't that big), but after some research I decided to give the Intel NUC a shot. The reason was mostly the size, decent performance (I understand the Intel NUC has a fairly throttled mobile-class CPU, but it gives me enough; I will tell more about it in the conclusion section) and energy efficiency (see the table below).
| | i7-10710U | i5-7267U | i5-9600K | i9-10850K |
|---|---|---|---|---|
| Max TDP | 15W | 25W | 95W | 125W |
| Power consumption per day (kWh) | 0.1 | 0.1 | 0.4 | 0.5 |
| Running cost per day | $0.004 | $0.007 | $0.023 | $0.030 |
| Power consumption per year (kWh) | 21.9 | 40.9 | 138.7 | 182.5 |
| Running cost per year | $1.31 | $2.45 | $8.32 | $10.95 |
CPU Single Thread Rating and Estimated Energy Usage Cost. This data is taken from cpubenchmark.net, comparing the CPUs in my Intel NUC, MacBook Pro, iMac, and the custom-built PC that never happened, respectively.
My final choice was the Intel NUC 10 NUC10i7FNK (the slim kit; going with the tall one would probably have been a good idea for the extra storage space for backups, but I decided I wanted the smaller one), which I equipped with HyperX Impact DDR4-2666 SODIMM 32GB (2 x 16GB) and a Samsung 970 PRO NVMe M.2 SSD 512GB.
Assembling it all was fast and pure joy. I was also happily shocked by its size: I never paid attention to its dimensions when reading the specs and for some reason was sure it was similar in size to a Mac Mini, but it turned out to be about 1.5x a Raspberry Pi (in a case).
The total price of the package: $1016. Not as cheap as I expected/wanted, but I have had no regrets so far.
Software
From the very beginning, I knew I was going to install Ubuntu Server 20.04 on the Intel NUC (all the servers I use nowadays run Ubuntu, so I thought it would be good to have a similar environment).
I don't have an external display and newer iMacs cannot be used as one, so to see what was going on during setup (and to play with the BIOS) I just connected the NUC's HDMI output to an old capture device I had and streamed it using OBS (as if there were other options 🤪).
The installation process ran without any issue; the NUC was using a shared internet connection from the iMac. All the peripherals were recognized without a flaw (at least the ones I've used so far). The Intel NUC has official Ubuntu support, so that's no surprise (unlike some of my experiences using Linux on the desktop, as mentioned before).
The following section is about configuring Ubuntu Server. I will try to avoid going deep into details and skip obvious steps, so it mostly lists unusual things or things that are not done often (like installing a list of essential packages), so it can be used as a memo. You can skip to the Usage or Conclusion section if you are not interested in it.
Initial Setup
The first thing I always do with a fresh Linux install is set an appropriate locale. In Ubuntu you do so by first generating the necessary ones by running:
$ sudo dpkg-reconfigure locales
And then setting it in the /etc/default/locale file like this:
LANG=en_US.UTF-8
After that I do a system upgrade:
$ sudo apt update
$ sudo apt upgrade
The next step should be some security configuration, adding SSH keys, etc., which I omit here; you can find a lot of tutorials on how to do it (like this one).
Then I install some essential packages:
$ sudo apt install -y linux-tools-common linux-tools-$(uname -r)
$ sudo apt install -y make build-essential libssl-dev zlib1g-dev \
libbz2-dev libreadline-dev libsqlite3-dev wget curl llvm \
libncurses5-dev libncursesw5-dev xmlsec1 xz-utils \
tk-dev libffi-dev liblzma-dev python-openssl
I also install the lm-sensors package, so I can monitor temperatures using the sensors command:
$ sudo apt install lm-sensors
$ sudo sensors-detect
And some necessary tools for development:
$ sudo apt install git git-extras fzf zsh tmux
For zsh configuration, I use Oh My Zsh.
I use fzf for zsh history search (available via the built-in ohmyzsh plugin).
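For reference, the relevant bit of ~/.zshrc with Oh My Zsh might look roughly like this (the plugin list is just an example, and depending on how fzf was installed the plugin may need to be pointed at its location):

# ~/.zshrc — example Oh My Zsh plugin list enabling the fzf history search
plugins=(git fzf)
source $ZSH/oh-my-zsh.sh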
Then I install Docker and docker-compose:
$ sudo apt install docker.io docker-compose
$ sudo systemctl enable docker
$ sudo systemctl start docker
$ sudo gpasswd -a dmrz docker
The last command adds the user dmrz (change to the one you need) to the docker group, so Docker can be used without sudo.
For Python development, I also prefer to install pyenv and pyenv-virtualenv.
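A rough sketch of that installation (the pyenv-installer one-liner sets up pyenv together with the pyenv-virtualenv plugin; the init lines go into ~/.zshrc):

$ curl https://pyenv.run | bash

# add to ~/.zshrc so pyenv and its virtualenv plugin are initialized in every shell
export PYENV_ROOT="$HOME/.pyenv"
export PATH="$PYENV_ROOT/bin:$PATH"
eval "$(pyenv init -)"
eval "$(pyenv virtualenv-init -)"

After that, a Python version and a project virtualenv can be created with, for example, pyenv install 3.9.2 and pyenv virtualenv 3.9.2 myproject (the version and name here are just illustrative).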
Disk
I used the LVM option for partitioning during the initial install and eventually figured out that my partition did not use the entire disk space (its volume was less than half, to be precise). It seems LVM reserves some space for a reason. You can use the vgdisplay command to figure out how much has been allocated:
$ sudo vgdisplay
--- Volume group ---
VG Name ubuntu-vg
...
VG Size 475.43 GiB
Alloc PE / Size 51200 / 200.00 GiB
Free PE / Size 70511 / 275.43 GiB
...
If you want to use the entire disk space, run the following (this operation may be risky, so make sure you back up all the data first):
$ sudo lvextend -l +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv
$ sudo resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv
If you run into any issue, check out this answer for more details.
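A quick sanity check (not from the original steps, just a generic one) that the root filesystem now spans the whole volume:

$ df -h /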
Network
Static IP
I connect my NUC directly to the iMac using an ethernet cable, hence I want it to have a permanent IP. If you connect the NUC to your network via a router or other network device with DHCP, please check the next section about setting it up with DHCP.
On macOS I set up a manually configured IP 192.168.2.1 for the ethernet interface with netmask 255.255.255.0. I also set it up to share the Internet Connection over Ethernet (for the case when the NUC is not connected via Wi-Fi to the main router); you can read about it here.
On Ubuntu's side, we need to add an /etc/netplan/99_config.yaml configuration file with the following content:
network:
  version: 2
  renderer: networkd
  ethernets:
    eno1:
      dhcp4: no
      dhcp6: no
      addresses:
        - 192.168.2.2/24
      routes:
        - to: default
          via: 192.168.2.1
      nameservers:
        addresses:
          - 8.8.8.8
Feel free to use other DNS server addresses (I use 8.8.8.8 provided by Google in my example).
Once the file is saved, we can apply the configuration:
$ sudo netplan apply
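To make sure everything is wired up, a couple of illustrative checks from the NUC's side:

$ ip a show eno1        # should list 192.168.2.2/24
$ ping -c 3 192.168.2.1 # the iMac side of the link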
DHCP
This is an alternative network configuration if you plan to connect your NUC to a router or other hardware that has a DHCP server configured. If you need to configure a static IP, read the corresponding section above.
Ubuntu Server does not have DHCP enabled by default, but it is easy to enable (using netplan), so the NUC can easily be connected to any network. We first create a configuration file /etc/netplan/99_config.yaml with the following content:
network:
  version: 2
  renderer: networkd
  ethernets:
    eno1:
      dhcp4: true
Make sure you use the proper interface name (eno1 in my case); run ip a to find it out.
Once the file is saved, we can apply the configuration:
$ sudo netplan apply
Wi-Fi
I mostly use NUC from iMac that shares an internet connection over ethernet, but sometimes I also connect to it using my laptop, so it’s good to have Wi-Fi working. There are many ways to do it, but the simplest one is using Network Manager, which has a great command-line interface. We first need to install it:
$ sudo apt install network-manager
We can now list all the available network devices and focus on the one with the wifi type:
$ sudo nmcli d
DEVICE TYPE STATE CONNECTION
wlp0s20f3 wifi unavailable --
If its state is unavailable we first need to turn the radio module on:
$ sudo nmcli r wifi on
And now we can see a list of available networks:
$ sudo nmcli d wifi list
IN-USE BSSID SSID MODE CHAN RATE SIGNAL BARS SECURITY
00:00:00:00:00:00 MYWIFI Infra 12 540 Mbit/s 80 ▂▄▆_ WPA2
00:00:00:00:00:00 MYWIFI_5G Infra 72 540 Mbit/s 75 ▂▄▆_ WPA2
Now we can connect to the network we want using its SSID:
$ sudo nmcli d wifi connect MYWIFI_5G password myverysecurepassword
Wake-on-LAN
The Intel NUC's ethernet interface supports WOL; I used ethtool to enable it:
$ sudo apt install ethtool
$ sudo ethtool -s eno1 wol g
In order to run it automatically on boot, we create a systemd service by adding the following to /etc/systemd/system/wol.service:
[Unit]
Description=Configure Wake On LAN
[Service]
Type=oneshot
ExecStart=/sbin/ethtool -s eno1 wol g
[Install]
WantedBy=basic.target
and running:
$ sudo systemctl enable wol.service
$ sudo systemctl start wol.service
Your network interface name may differ from eno1; you can find how to determine it in the DHCP section.
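To double-check that the setting actually sticks, ethtool can report it; the output should show something along these lines:

$ sudo ethtool eno1 | grep Wake-on
Supports Wake-on: pumbg
Wake-on: g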
I will talk about using Wake-on-LAN in the Usage / Daily Routine section.
Avahi
If you want the NUC to be reachable on the network as hostname.local, install avahi-daemon as well:
$ sudo apt install -y avahi-daemon
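With that, the machine answers by name from macOS (which resolves .local addresses out of the box); for example, assuming the NUC's hostname is nuc:

$ ping -c 3 nuc.local
$ ssh dmrz@nuc.local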
CPU Frequency Scaling
This was probably the trickiest part of the entire setup (and the most time-consuming). A long time ago, when I was actively using Linux on the desktop, Intel processors had something called SpeedStep for CPU frequency scaling, and the only thing you needed to do was install a package called cpufreq or cpufreqd, run it as a daemon, and it worked like a charm. It turns out Intel (maybe not only they) now has something called P-state (since Sandy Bridge), which does not even require any additional software: scaling is managed by the CPU itself (if I understood it correctly). That sounds cool, but it did not work well for me. It has only 2 governors, performance and powersave; most of the time it was set to performance (even when the NUC was idling), and it also enabled turbo boost when it was not necessary, so the fan noise was always excessive. You can check that you have this driver enabled by running the following command:
$ cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver
intel_pstate
I tried different approaches, installed different utilities, managed governor-related settings manually; none of it helped much. Then I installed auto-cpufreq, which worked fine but not perfectly: scaling lagged, it did not scale down enough on idle, it consumed some CPU itself (it's written in Python), and turbo boost required manual adjustments. I am not trying to carp, I think auto-cpufreq is great, but I suppose it works better on laptops (which have discharging states, unlike the NUC). I decided to continue my research and saw that many people were unhappy with the Intel P-state driver as well, and the solution was simply to turn it off. I did, and I am now happy with the CPU frequency scaling achieved by the acpi-cpufreq fallback driver. To turn off P-state, you need to disable its module in the GRUB config (so it is not loaded on boot anymore) by making sure the following line is in /etc/default/grub:
GRUB_CMDLINE_LINUX_DEFAULT="intel_pstate=disable"
If you have changed that line before, make sure you only add intel_pstate=disable there.
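One step not spelled out above: after editing /etc/default/grub, regenerate the GRUB configuration and reboot for the change to take effect:

$ sudo update-grub
$ sudo reboot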
After a reboot you can see that the acpi-cpufreq driver is now in use:
$ cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver
acpi-cpufreq
By default, it should use the ondemand scaling governor, which is usually a good choice for a system that needs both good performance and energy efficiency when idling. Run the following to check which governor is used:
$ cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
ondemand
If for some reason you have another governor set, you can google how to change it (there are many ways and utilities), but the simplest way in my opinion is installing cpufrequtils:
$ sudo apt install cpufrequtils
And setting the governor in its configuration file /etc/default/cpufrequtils:
ENABLE="true"
GOVERNOR="ondemand"
To list the available scaling governors, use the cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors command.
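To watch the scaling in action (frequencies dropping on idle and rising under load), a plain sysfs read works; this is just a generic check, nothing specific to the NUC:

$ watch -n1 "cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_cur_freq"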
So that's it: after that my NUC was up and running and ready for work. The rest of the software installation and configuration was very specific to my needs and not worth sharing.
Usage
Overview
When I was pondering this whole idea of using an external machine as a backend for development, I had multiple usage scenarios in mind:
- Run all the infrastructure services and other artifacts on the NUC, while the main application keeps running locally (as before). I liked this scenario, since it would require almost no configuration and a 1 Gbps connection should be sufficient for everything to run fine (probably not as smooth as locally, but fine enough).
- Run everything on the NUC and share the project directory using SFTP or NFS, so I could edit the code on any machine using any editor. Or maybe the more hardcore variant: edit remotely in vim (I was a long-time vim user, so it wouldn't be a problem, though I am more used to vscode now). This one would also require taking care of making the running application available to client machines (some reverse proxy or manual port forwarding).
- Use the Visual Studio Code Remote - SSH plugin. I don't know why this wasn't the first scenario I thought about (maybe I did not do enough research). I knew there was a similar plugin for Docker, but didn't know about this one for developing on a remote machine. There is a similar feature in PyCharm, but I haven't had a chance to test it yet.
Even though I was sure I'd go with the first scenario, I decided to give the third one a shot and run an experiment (before I bought the NUC). I prepared a similar environment on a Raspberry Pi and tried working with it using the Visual Studio Code Remote plugin. I was very skeptical about it, but it worked, and worked well (even with the very slow Raspberry Pi 3 I/O and a 100 Mbps connection). I changed my mind and decided to proceed with this scenario first; if something went wrong (it did not), I could always fall back to the first one.
So, since the end of October 2020, I have been using Visual Studio Code Remote - SSH for developing on the NUC from my Mac computers. It works flawlessly. The experience is no different from what I was used to locally. Things like searching files (or in files) even work faster (thanks to the NVMe SSD 😀). It provides access to the remote project directory via the built-in terminal and forwards all the opened ports (which can also be adjusted manually) to my client machine (so I can open my web apps as if they were running locally). In all this time I saw Visual Studio Code hang only once (and I am sure it had nothing to do with the NUC, but with vscode itself). Since my working project has a huge number of files, I had to increase the fs.inotify.max_user_watches limit by adding the following to /etc/sysctl.conf:
fs.inotify.max_user_watches=524288
You can read more about this issue here.
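The new limit is picked up on the next boot, or it can be applied immediately with:

$ sudo sysctl -p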
Once the project setup is ready, you can even run the same project on multiple computers and they will all use the same environment: the same codebase and the same running application, which feels local without a single bit of it being present on the client system.
Developing on the NUC over Wi-Fi is very comfortable too (maybe not quite as smooth sometimes, if you pay attention). I could probably manage to get a better signal (when tested, the speed was 100-300 Mbps), but this is all enough for me; it still works much faster than local development on my old MacBook Pro! (Read more about performance in the conclusion section.) Speaking of connection speed, when I was thinking about building a custom PC for this purpose, I wanted it to have a Thunderbolt port, because I was sure I would need those 40 Gbps to get that local-development feeling (especially if I'd gone with the first usage scenario); when I started considering the Intel NUC, I was happy it had this port as well. I never had a chance to try it so far: it turned out you need a special Thunderbolt data transfer cable (not a regular USB Type-C one; I was silly enough to think the one from the MacBook's charger would work) and their price is high, especially if you want something longer than 1 m. The 1 Gbps ethernet connection works fine for me and I don't know if I will ever need more, but at some point I would like to finally try that Thunderbolt thing.
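Since Remote - SSH reads the standard SSH client configuration, a host entry like the one below (the alias and options here are just an example) makes connecting from vscode and from a plain terminal equally convenient:

# ~/.ssh/config on the client machine — example entry
Host nuc
    HostName 192.168.2.2
    User dmrz
    ForwardAgent yes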
Daily Routine
This is what my usual development process looks like with the NUC:
- I use the wakeonlan command to turn it on (see the sketch after this list). The NUC sits on a bookshelf a couple of meters away from my working table, so this is really useful. You can also schedule times when the NUC turns on/off in its BIOS.
- Then I run Visual Studio Code locally (which usually opens the remote project on the NUC already, if I didn't switch it away) and work the same way I did before the NUC.
- All my code is located on the NUC, and I use git on the NUC as well (mostly from the command line, but sometimes from the vscode UI); periodically I back up sensitive data to a USB hard drive. On my working computer I only need vscode to edit and a web browser to view the result.
- When I need to run some long-running process in the terminal, I always run it inside tmux, so I don't have to start it again if I accidentally close the editor or my working machine goes to sleep. I have a bash script that starts a tmux session with all the necessary things running in their separate panes.
- I often need to use an OpenVPN connection on the NUC; unfortunately, I was unable to configure it using Network Manager (maybe due to 2FA), so I have to invoke it separately (I have a script for that as well). I want to investigate whether it's possible to share an OpenVPN tunnel from the working computer if I'm already connected there.
- When I am done, I either suspend or power off the NUC by running a remote command over SSH.
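For reference, a minimal sketch of those start/stop commands (the MAC address and the nuc.local hostname are placeholders for your own values):

# wake the NUC up over the wired interface
$ wakeonlan 1c:69:7a:aa:bb:cc

# suspend (or power off) the NUC over SSH when done;
# -t allocates a tty so sudo can ask for a password if needed
$ ssh -t nuc.local sudo systemctl suspend
$ ssh -t nuc.local sudo poweroff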
I am totally fine with this workflow; the only thing I would probably like to improve is turning the NUC on/off, so I don't have to open iTerm or Terminal to invoke wakeonlan or the remote power-off command. Maybe I can use Alfred's Powerpack for that or some other hooks (when I run vscode, for instance); I would be happy to hear any recommendations.
Conclusion
I have been using the Intel NUC as a backend for local development for about 5 months already, and to summarize it all, I can say I am happy with the setup and I think I achieved what I was aiming for.
I am no longer scared of a failing SSD in any of my computers. I don't keep any sensitive work-related data on the Macs, so I can take them to repairmen if necessary; it also seems they will last longer without degrading quickly, since they are no longer under frequent I/O stress. The SSD in the NUC is just a consumable that can easily be replaced in case of a failure or if I am not happy with its degraded performance anymore. Of course, I would like it all to be as easy as installing a new SSD and having it work exactly like before right away (I am sure there are techniques for making snapshots of the entire disk as a backup, but that would likely degrade performance if it runs constantly), and I hope this post will help me install everything quickly next time I need to. At some point I even thought about having an extra NUC for redundancy, how crazy is that? 👽
As a final note, I would like to list the pros and cons I have found so far.
Pros
- Performance. This is what surprised me the most. I expected it to perform somewhere between my old MacBook Pro (with an Intel Core i5 7267U) and the iMac (with an Intel Core i5 9600K), and I would have been totally fine with that, since performance was the last of the requirements for this setup. The Intel NUC (with a Core i7 10710U, a throttled mobile CPU) showed great results: my main project's test suite (which is mostly I/O bound, but has CPU-bound tasks as well) was running 2-3 times faster, usually 6 minutes versus 15 minutes on the iMac; JavaScript (gulp, webpack) compilation tasks and similar were also running 1.5-2 times faster. The Intel NUC's CPU has 6 cores, so there is good potential for concurrency; it worked very well for tests in a Python project using pytest with 6 parallel workers (it also has Hyper-Threading support, but when running on 12 threads performance degrades due to throttling, as it gets hot fast). And I don't think this is because the Intel NUC's CPU is faster, it's not; it's just that all those things are unfortunately very slow on macOS (especially Docker).
- Maintainability. RAM and SSD can easily be replaced in case of a failure.
- Size. It is really small (about 1.5x a Raspberry Pi, as I said, or an Apple TV). I think I could take it along in my bag if I needed to work somewhere else, but since it works fine even over a 100 Mbps connection, I can just use it remotely. You can put it anywhere; it needs very little space.
- Appearance. I don’t like the glossy finish of its top plate, but the overall look is great and modern. There is no need to hide it.
- Energy Efficiency. It consumes about the same as 1-2 LED bulbs (under my average workload).
- Robustness. I know 5 months is not enough to speak of it, but I never had a reason to not trust Intel (despite their current market situation).
- BIOS. It looks fantastic, is easy to use and gets frequent updates. You can change a lot of things there, like the power button or disk usage indicator LED color (I know it might sound small and silly, but I never saw anything like that in other computers before).
- Interfaces. It has many ports that can still come in handy even when it is used mainly as a headless development backend: a card reader, Thunderbolt, USB Type-C.
Cons
- Maintainability (yeah, I know, both a pro and a con). The CPU cannot be replaced in case of a failure, since it is a SoC (system on a chip); you would need to replace the entire board.
- PSU size. The external power supply is not that small (I think it's the price of the NUC's own compactness); it could be smaller, I think.
- Fan Noise. I can't say it is that noisy, but it is noticeable: you can hear it even when idling (I am spoiled by computers that are silent when you don't do anything heavy). Luckily it sits a bit away from my working place, so I only hear it under load. There are sophisticated fan control settings in the BIOS, which I haven't played with much yet, but I think there's a chance to set up a sane temperature-to-speed ratio so it will be quieter (I will update this if I succeed).
Thank you for reading!