Ridiculous unusable download URLs for open source projects

I told myself (and everyone I know) that I wouldn’t write another blog post until I moved the blog off Google Blogger, but I can’t stay silent on this issue.

UPower, the open source power management software used on Linux (and I believe the *BSD family), has recently changed their download URLs. As the lead of Adélie Linux, I personally maintain a significant chunk of “core” desktop experience packages. We consider UPower to be one of those, because it is important to conserve energy whenever possible.

Today I was notified by Repology that UPower was out of date in Adélie. No big deal, I’ll just bump it:

>>> upower: Fetching https://upower.freedesktop.org/releases/upower-0.99.8.tar.xz 
curl: (22) The requested URL returned error: 404 Not Found

“Hmm”, I wondered to myself, “maybe this is a git snapshot package someone uploaded”. It turns out it wasn’t; Debian, Arch, and Fedora are all shipping 0.99.8 now. What gives?

I looked at Debian’s packaging first, since they typically have a good handle on stability. I didn’t even understand the change, though, so I looked up Exherbo’s packaging and was horrified.

Instead of a simple URL, they are now using a GitLab upload URL that embeds an SHA-1 hash. That means none of our bump scripts work any more. Instead of simply typing a single abump command, for every release of UPower I will now have to:

  1. Open their GitLab instance in a Web browser, which isn’t even installed on any of the staging computers, to minimise security hazards;
  2. Wait for all the JavaScript and miscellaneous crap to load;
  3. Context-click the link for the UPower tarball;
  4. Copy the link;
  5. Connect to our staging system remotely from a computer with a Web browser installed;
  6. Open vim on the APKBUILD file for UPower;
  7. Paste the link into the source= line, replacing what is already there;
  8. And then run abuild checksum manually to update the sha512sum in the file.
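
For contrast, here is a rough sketch of the kind of fetch our tooling can automate when upstream publishes predictable release URLs. The variable names below are illustrative only, not the actual abump internals:

# Illustrative sketch: with a predictable /releases/ layout, only the
# version number ever changes, so a bump is a single templated fetch.
pkgname=upower
pkgver=0.99.8
curl -fLO "https://upower.freedesktop.org/releases/${pkgname}-${pkgver}.tar.xz"
# A GitLab upload URL embeds an unpredictable hash, so there is nothing to
# template and the download location cannot be derived from the version.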

WHY!? fd.o people, out of respect for us packagers who want to give your software to the people who need it, please use your /releases/ directory again!

Configuring Apache 2.4 to serve GitLab over TLS / HTTPS

As part of my work helping to set up the infrastructure for Galapagos Linux, I volunteered to install and configure GitLab. My colleagues had attempted to use the Omnibus package on Debian, but it failed in spectacular ways, including configuration references to directories that did not exist after package installation.

The most important piece of advice I can give is that you absolutely must use Bundler v1.10.6 or older[1] to ensure that you do not receive Gemfile.lock errors. You will also need to make a small modification to the Gemfile and Gemfile.lock files to ensure that libv8 is present if you wish to precompile the assets.
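
For reference, a minimal sketch of that setup from the GitLab checkout follows. The libv8 line is an assumption to verify against your own Gemfile and Gemfile.lock, not an exact recipe:

# Pin Bundler so a newer version does not try to rewrite Gemfile.lock.
gem install bundler -v 1.10.6
# Assumption: declare libv8 explicitly if asset precompilation cannot find it,
# e.g. add a line for it to the Gemfile and update Gemfile.lock to match.
bundle _1.10.6_ install --deployment --without development test
bundle exec rake assets:precompile RAILS_ENV=production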

Now, for the Apache configuration. Note that I assume you have enabled https in GitLab’s config/gitlab.yml and set port: 443. You will need to set a forwarding request header[2] to ensure that GitLab does not throw CSRF authentication errors. Also, if you want to use Unicorn’s recommended Unix sockets, you will need to configure ProxyPass and ProxyPassReverse to use unix:/path/to/socket|http://HOSTNAME (thanks, Xayto!). The full VirtualHost for GitLab goes something like this:

<VirtualHost *:443>
    ServerName git.glpgs.io
    ServerAlias code.glpgs.io
    ProxyPass / unix:/home/git/gitlab/tmp/sockets/gitlab.socket|http://git.glpgs.io/
    ProxyPassReverse / unix:/home/git/gitlab/tmp/sockets/gitlab.socket|http://git.glpgs.io/
    SSLEngine on
    SSLCertificateFile /path-to-certificate.crt
    SSLCertificateKeyFile /path-to-key.key
    SSLCertificateChainFile /path-to-ca-chain.crt
    Header always set Strict-Transport-Security "max-age=15768000"
    RequestHeader set X_FORWARDED_PROTO 'https'
</VirtualHost>

<VirtualHost *:80>
    ServerName git.glpgs.io
    Redirect permanent / https://git.glpgs.io/
</VirtualHost>


Additionally, I recommend that you follow the Mozilla wiki’s great TLS advice, or use their super handy, easy config generator, as a global configuration that applies to all of your VirtualHosts. On Debian, you can pop that into /etc/apache2/mods-available/ssl.conf, replacing the parameters they already specify.
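
If you are starting from a stock Debian install, remember that the directives in the VirtualHost above need their Apache modules enabled first; roughly:

# Enable the modules the configuration above relies on, then reload Apache.
a2enmod ssl headers proxy proxy_http
apache2ctl configtest
systemctl reload apache2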

Happy hacking!

Let’s Encrypt and why I still pay for TLS certificates

I am asked with alarming regularity why I am not using Let’s Encrypt for my personal Web sites, and for Adélie’s site, and for my mother’s art gallery site, and so on. “Why do you pay money for something you could have for free? And then you aren’t giving money to those evil CAs!”

TLS certificates are still very much “you get what you pay for”. Let’s Encrypt is free, and on paper it seems to be a great solution with roots in freedom and socialism. However, it has a number of large issues in practice that prevent me from being able to adopt it.

The first, and most evident, is the failure of the community to provide a single ACME client that is well supported and configurable. As of this writing, there are 49 different client implementations listed on the official site. The problems with them are as numerous as the offerings; my main complaint is that most of them must run as the root user so they can automatically write to sensitive certificate files that are owned by the Web server user and are chmod 400.

The second large issue I’ve seen is that most of these ‘automatic updates’ break. This can be due to administrator error – and since there is no single canonical client, there cannot be a single repository of knowledge. It can also be due to APIs or endpoints changing. I have seen an official Mozilla blog and Void Linux’s repository broken in the last week alone, both by botched ACME cron jobs. This solution is sold as “set and forget”, but it ends up requiring more effort than simply generating a key and CSR once a year and pasting the CSR into the CA’s site.
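
For comparison, that yearly ritual is one well-understood command; the hostname and key size below are placeholders:

# Generate a fresh private key and CSR; the CSR is what gets pasted into the
# CA's order form, while the private key stays on the server.
openssl req -new -newkey rsa:2048 -nodes \
  -keyout example.org.key \
  -out example.org.csr \
  -subj "/CN=example.org"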

Other issues with Let’s Encrypt include the lack of a “site seal”, which is important on e-commerce sites to foster user trust, and the absence of OV (let alone EV) validation, which also undermines trust for visitors who know what to look for.

All in all, I think that going forward Let’s Encrypt may be suitable for power users and people who run TLS servers out of their homes. It may even be suitable for some personal sites and blogs. But I don’t think it is a long-term solution for people who need trust, or for those who have a complicated infrastructure (such as a distro like Adélie).

Going IPv6 native without IPv4

Now that I have finally moved into my new apartment (which deserves a long blog post of its own), I have new routing equipment and a new network infrastructure. The native IPv6 on Cox Communications seems to be a bit better than the native IPv6 offered by Comcast Business; namely, Cox seems to be peered more widely, so ping times are much lower. Of course, this could be specific to the market I’m in – eastern Oklahoma – so YMMV.

However, because DHCP is a terrible protocol, it constantly flakes out, leaving me with IPv6-only access to the Internet – that is, no IPv4 access whatsoever. Surprisingly, it’s nearly usable. Still, I am highly disappointed by a few surprises I’ve found that do not work over IPv6:

  • EVERY SINGLE CODE HOSTING SERVICE ON THE INTERNET. This really, really, really, really upsets me. Luckily, I don’t have to care any more, because I run my own now.
  • DuckDuckGo. I am incredulous that a modern search engine is not accessible over IPv6.
  • eBay and PayPal. This isn’t really surprising, I suppose, since eBay were running Windows NT 4 as recently as 2006… they have always been a decade behind current technology.
  • Any news Web site I tried: Bloomberg, BBC, New York Times, Washington Post.
  • The entire StackExchange family of properties, five YEARS after being asked for even a trial of IPv6 access. This is entirely unacceptable. I expect news organisations and e-commerce conglomerates to be woefully behind the times, but a company designed from the ground up for computer scientists by computer scientists? I can’t believe this is real.
  • Weather.gov. The US government actually has an IPv6 transition project with real-time completion progress published online, itself available over IPv6. However, while NOAA’s flashy Web 3.0 marketing pages are available over IPv6, the important research, life-saving data, and forecast information produced by the National Weather Service are entirely IPv4-only. I understand that their internal infrastructure is not entirely ready for IPv6, but they should at least be able to serve the main radar and warning information over IPv6. Americans need not feel singled out, though; the UK’s Met Office is also unavailable over IPv6.
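
If you want to check a site yourself, here is a quick sketch, assuming dig and curl are installed (the hostnames are only examples):

# No output from dig means the site publishes no AAAA (IPv6) record at all.
dig +short AAAA duckduckgo.com
# Force curl over IPv6; this fails outright against a v4-only site.
curl -6 -sI https://www.wikipedia.org/ | head -n 1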

At least Wikipedia and the Google properties are usable, so I have music, videos, and a reference library.