This year, we’ve already seen sizeable leaks of NVIDIA source code, and a release of open-source drivers for NVIDIA Tegra. It seems NVIDIA decided to amp it up, and just released open-source GPU kernel modules for Linux. The GitHub repository, named
open-gpu-kernel-modules, has people rejoicing, and we are already testing the code out, making memes and speculating about the future. This driver is currently claimed to be experimental, only “production-ready” for datacenter cards – but you can already try it out!
Of course, there’s nuance. This is new code, unrelated to the well-known proprietary driver. It will only work on cards starting from the RTX 2000 and Quadro RTX series (aka Turing and onward). The good news is that performance is comparable to the closed-source driver, even at this point! A peculiarity of this project: a good portion of the features that AMD and Intel drivers implement in the Linux kernel are, instead, provided by a binary blob that runs from inside the GPU. This blob runs on the GSP, a RISC-V core that’s only available on Turing GPUs and newer – hence the series limitation. Now, every GPU loads a piece of firmware, but this one’s hefty!
That said, this driver already provides more coherent integration into the Linux kernel, with massive benefits that will only increase going forward. Not everything’s open yet – NVIDIA’s userspace libraries and its OpenGL, Vulkan, OpenCL and CUDA drivers remain closed, for now. Same goes for the old proprietary driver, which, I’d guess, will be left to rot – fitting, since “leaving to rot” is what that driver has previously done to generations of old but perfectly usable cards.
Upstreaming this driver will be a gigantic effort for sure, but that is definitely the goal, and the benefits will be sizeable too. Even as-is, this driver has way more potential. Not unlike a British policeman, the Linux kernel checks the license of every kernel module it loads, and limits the APIs available to a module that isn’t GPL-licensed – which the previous NVIDIA driver wasn’t, as its open parts were essentially a thin layer between the kernel and the binary driver, and thus not GPL-licenseable. Because this driver is dual MIT/GPL licensed, NVIDIA now has a larger set of kernel interfaces at its disposal, and can integrate the driver better into the Linux ecosystem instead of maintaining a parallel set of proprietary tools.
Debugging abilities, security, and overall integration potential should all improve. In addition, a slew of new possibilities opens up. For a start, it definitely opens the door to porting the driver to other OSes like FreeBSD and OpenBSD, and could even help free computing. NVIDIA GPU support on ARM will become easier in the future, and we could see more cool efforts taking advantage of what GPUs can do for us when paired with an ARM SBC, from exciting videogames to powerful machine learning. The Red Hat release says there’s more to come in terms of integrating NVIDIA products into the Linux ecosystem properly, no stone unturned.
You will generally see everyone hail this move, for good reasons. The tradition is that we celebrate such radical moves from big companies, even imperfect ones – and rightfully so, given the benefits I just listed and the future potential. As we see more such moves from big players, we will have a lot to rejoice about, and a myriad of problems will be left in the past. However, when it comes to the kind of openness we actually value, the situation gets kind of weird, and hard to grapple with.
Openness helps us add features we need, fix problems we encounter, learn new things from others’ work and explore the limits, as we interact with technology that defines more and more of our lives. If all the exciting sci-fi we read as kids is to be believed, indeed, we are meant to work in tandem with technology. This driver is, in many ways, not the kind of openness that helps our hardware help us, but it certainly checks many boxes for what we perceive as “open”. How did we get here?
It’s well-known that opening every single part of the code is not something large companies do – you gotta hide the DRM bits and the patent violations somewhere. Here, a lot of the code that used to reside in the proprietary driver now runs on a different CPU instead, and is as opaque as before. No driver relies as heavily on binary blob code as this one, and yet, only semi-ironically, it’s not that far from the point where it could technically get RYF-certified. It’s just that the objectionable binary blobs are now “firmware” instead of “software”.
The RYF (Respects Your Freedom) certification from the Free Software Foundation, while well-intentioned, has lately drawn heat for being counterproductive to its own goals and needlessly making hardware more complex, and even the Libreboot project leader says that its principles leave something to be desired. We have been implicitly taking RYF certification as the openness guideline to strive towards, but the Novena laptop chose not to adhere to it and is certainly better off for it. We have a lot to learn from RYF, and it’s quite clear that we need better guidance.
From here – what do we take as “open”? And who can help us keep track of what “open” is – specifically, the kind of openness that moves us towards a more utopian, yet realistic world where our relationship with technology is healthy and loving? Some guidelines and principles help us check whether we are staying on the right path – but the world has changed enough that old ideas don’t always apply, just like with the cloud-hosted software loophole that has proven tricky to resolve.
But still, a lot more code just got opened up, and that’s a win on some fronts. At the same time, we won’t get where we want to be if other companies decide to follow this example, and as hackers, we won’t achieve many of the groundbreaking things we typically reach with properly open-source tools in our hands. And if we don’t exercise caution, we might mistake this for the kind of openness that we all come here to learn from. So it’s a mixed bag.
As mentioned, this driver is for the RTX 2000 series and beyond. Older cards are still limited to either the proprietary driver or Nouveau – which has a history of being hamstrung by NVIDIA. Case in point: in recent years, NVIDIA has reimplemented vital features like clock control in a way that’s only accessible through a signed firmware shim with a closed API that’s tricky to reverse-engineer, and has been uncooperative ever since – which has hurt the Nouveau project, with no remedy in sight. Unlike AMD, which helped overhaul code for the cards released before its open driver dropped, NVIDIA seems content to let this problem stay.
Nouveau will live on, however. In part, it will still be usable for older cards that aren’t going anywhere, and in part, it seems that it could help replace the aforementioned userspace libraries that remain closed-source. The official NVIDIA release page says it’s not impossible that the Nouveau efforts and the NVIDIA open driver efforts could be merged into one – a victory for all, even if a tad bittersweet.
Due to shortages, you might not get a GPU to run this driver on anyway. That said, we will recover from the shortages and the mining-induced craze, and prices will drop to the point where this driver benefits plenty of our systems – maybe not your MX150-equipped laptop, but certainly a whole lot of powerful systems we are yet to build. NVIDIA is not yet where AMD and Intel stand, but they’re getting there.
[Tux penguin image © Larry Ewing, coincidentally remixed using GIMP.]