While our ETag patch works fine when it comes to serving data straight
from store paths, it unfortunately broke something that might be a bit
more common, namely using regexes to extract path components in
location directives.
Recently, @devhell reported a bug with an nginx location directive
like this:
location ~ "^/\~([a-z0-9_]+)(/.*)?$" {
alias /home/$1/public_html$2;
}
While this might look harmless at first glance, it does cause issues
with our ETag patch. The alias directive gets broken up by nginx
like this:
*2 http script copy: "/home/"
*2 http script capture: "foo"
*2 http script copy: "/public_html/"
*2 http script capture: "bar.txt"
In our patch, however, we use realpath(3) to canonicalise the path we
get from ngx_http_core_loc_conf_s.root, which contains the *configured*
value from the root or alias directive. So in the example above,
realpath(3)
boils down to the following syscalls:
lstat("/home", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
lstat("/home/$1", 0x7ffd08da6f60) = -1 ENOENT (No such file or directory)
During my review[1] of the initial patch, I didn't actually notice that
what we're doing here is returning NGX_ERROR if the realpath(3) call
fails, which in turn causes an HTTP 500 error.
Since our patch is what made the canonicalisation (and thus the
additional syscalls) necessary in the first place, we really shouldn't
introduce a new error condition. So let's - at least for now - silently
ignore a failing realpath(3) call instead of returning NGX_ERROR.
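In code, the change boils down to something like the following sketch
(variable names like "clcf" and "real" are illustrative here, this is
not the actual patch hunk):

  u_char  *real;

  /* clcf->root still holds the *configured* value, e.g. "/home/$1" */
  real = ngx_realpath(clcf->root.data, NULL);

  if (real == NULL) {
      /* previously: return NGX_ERROR, which nginx turns into HTTP 500;
       * now: simply skip the store-path ETag handling */
      return NGX_OK;
  }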
However, since we're using the unaltered root from the config, we have
another issue. Consider this root:
/nix/store/...-abcde/$1
Calling realpath(3) on this path will fail (unless there really is a
file called "$1", of course), so even this fix is not enough, because
it means the ETag is not set to the store path hash.
While this is very ugly and we should fix it soon, it's not as serious
as getting HTTP 500 errors when serving static files.
I added a small NixOS VM test, which uses the example above as a
regression test.
It seems that my memory is failing these days: apparently I already
*knew* about this issue, because while digging for existing issues in
nixpkgs I found this similar pull request, which I even reviewed:
https://github.com/NixOS/nixpkgs/pull/66532
However, since the comments weren't addressed and the author hasn't
responded to the pull request, I decided to keep this very commit and do
a follow-up pull request.
[1]: https://github.com/NixOS/nixpkgs/pull/48337
Signed-off-by: aszlig <aszlig@nix.build>
Reported-by: @devhell
Acked-by: @7c6f434c
Acked-by: @yorickvP
Merges: https://github.com/NixOS/nixpkgs/pull/80671
Fixes: https://github.com/NixOS/nixpkgs/pull/66532
According to https://endoflife.software/programming-languages/server-side-scripting/ruby,
Ruby 2.4 will go end-of-life in March, which is when the new release of
nixpkgs will be cut. We won't be able to support it with security
updates.
Remove all references to ruby_2_4 and add ruby_2_7 instead where
missing.
Mark packages that depend on ruby 2.4 as broken:
* chefdk
* sonic-pi
The primary motivation of this change was to allow third-party modules
to be used with OpenResty, but it also results in a significant
reduction of code duplication.
According to https://repology.org/repository/nix_unstable/problems, we have a
lot of packages whose homepage is an http link that redirects to https.
This commit updates all of these packages to use the https link as
their homepage.
The following script was used to make these updates:
```
curl https://repology.org/api/v1/repository/nix_unstable/problems \
| jq '.[] | .problem' -r \
| rg 'Homepage link "(.+)" is a permanent redirect to "(.+)" and should be updated' --replace 's@$1@$2@' \
| sort | uniq > script.sed
find -name '*.nix' | xargs -P4 -- sed -f script.sed -i
```
This fixes the patch for nginx to clear the Last-Modified header if a
static file is served from the Nix store.
So far we only used the ETag from the store path, but if the
Last-Modified header is always set to "Thu, 01 Jan 1970 00:00:01 GMT",
Firefox and Chrome/Chromium seem to ignore the ETag and simply use the
cached content instead of revalidating.
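The gist of the fix, as a rough sketch (the "served_from_store" flag is
purely illustrative and not the actual patch code):

  /* if the ETag was derived from a Nix store path, the mtime carries no
   * useful information (it is always the epoch), so drop Last-Modified
   * entirely and force revalidation via If-None-Match */
  if (served_from_store) {
      ngx_http_clear_last_modified(r);
  }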
Alongside the fix, this also adds a dedicated NixOS VM test, which uses
WebDriver and Firefox to check whether the content is actually served
from the browser's cache and to have a more real-world test case.
This is what I suspected a while ago[1]:
> Heads-up everyone: After testing this in a few production instances,
> it seems that some browsers still get cache hits for new store paths
> (and changed contents) for some reason. I highly suspect that it might
> be due to the last-modified header (as mentioned in [2]).
>
> Going to test this with last-modified disabled for a little while and
> if this is the case I think we should improve that patch by disabling
> last-modified if serving from a store path.
Much earlier[2] when I reviewed the patch, I wrote this:
> Other than that, it looks good to me.
>
> However, I'm not sure what we should do with Last-Modified header.
> From RFC 2616, section 13.3.4:
>
> - If both an entity tag and a Last-Modified value have been
> provided by the origin server, SHOULD use both validators in
> cache-conditional requests. This allows both HTTP/1.0 and
> HTTP/1.1 caches to respond appropriately.
>
> I'm a bit nervous about the SHOULD here, as user agents in the wild
> could possibly just use Last-Modified and use the cached content
> instead.
Unfortunately, I didn't pursue this any further back then because
@pbogdan noted[3] the following:
> Hmm, could they (assuming they are conforming):
>
> * If an entity tag has been provided by the origin server, MUST
> use that entity tag in any cache-conditional request (using If-
> Match or If-None-Match).
While running with this patch in some deployments, I found that both
Firefox and Chrome/Chromium do NOT revalidate against the ETag if the
Last-Modified header is still the same.
So I wrote a small NixOS VM test with Geckodriver to get a test case
that is closer to the real world, and I was indeed able to reproduce
the problem.
Whether this is actually a bug in Chrome or Firefox is an entirely
different issue, and even IF it is the fault of the browsers and gets
fixed at some point, we'd still need to handle this for older browser
versions.
Apart from clearing the header, I also recreated the patch by using a
plain "git diff" with a small description on top. This should make it
easier for future authors to work on that patch.
[1]: https://github.com/NixOS/nixpkgs/pull/48337#issuecomment-495072764
[2]: https://github.com/NixOS/nixpkgs/pull/48337#issuecomment-451644084
[3]: https://github.com/NixOS/nixpkgs/pull/48337#issuecomment-451646135
Signed-off-by: aszlig <aszlig@nix.build>
This also lets us remove a hack for supporting LibreSSL 2.7, since we're
now using 2.9 by default, anyway.
Finally, use Ninja to run the CMake build instead of Make -- speeds
things up for me by a few seconds.
Signed-off-by: Austin Seipp <aseipp@pobox.com>
There were very many conflicts, basically all due to
name -> pname+version. Fortunately, almost everything was auto-resolved
by kdiff3, and for now I just fixed up a couple of evaluation problems,
as verified by the tarball job. There might be some fallout from these
conflicts, but I believe it should be minimal.
Hydra nixpkgs: ?compare=1538299
PHP 7.1 is currently on life support, i.e. it only receives
security-related patches. This will only continue until 2019-12-01.
This date falls in the middle of the 19.09 lifecycle, so it would be
nice to not have PHP 7.1 in the 19.09 stable release. Dropping it now
would also mean less maintenance effort for updates.
The end-of-life dates can be seen at the following links:
- https://endoflife.date/php
- https://php.net/supported-versions.php
- https://en.wikipedia.org/wiki/PHP#Release_history
So far, the Nix store directory was hardcoded, so if someone uses a
different Nix store directory the patch won't work. Of course, this is
pretty uncommon, but by substituting not only the store directory but
also its length, we additionally avoid a few calls to ngx_strlen(),
which should save us a few cycles.
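As a rough illustration (NIX_STORE_DIR and NIX_STORE_LEN are
placeholders for the substituted values, not the actual patch code),
baking in the length means the prefix check needs no runtime
ngx_strlen():

  /* NIX_STORE_DIR / NIX_STORE_LEN stand for the store directory and
   * its length as substituted into the patch at build time */
  if (ngx_strncmp(real, NIX_STORE_DIR, NIX_STORE_LEN) == 0) {
      /* the resolved path points into the Nix store */
  }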
Signed-off-by: aszlig <aszlig@nix.build>
The original patch introduced a new "real" variable which gets populated
(and allocated) via ngx_realpath(). It's properly freed in error
conditions but it won't be freed if ngx_http_set_etag returns
successfully.
Adding another ngx_free() just before returning fixes that memory leak.
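Sketched out (not the exact hunk), the success path now ends like
this:

  /* "real" was allocated by ngx_realpath() earlier in the function */
  ngx_free(real);   /* new: also release the buffer on the success path */
  return NGX_OK;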
I also fixed a small indentation issue along the way.
Signed-off-by: aszlig <aszlig@nix.build>
In Apache HTTP Server 2.4 releases 2.4.17 to 2.4.38, with MPM event, worker or
prefork, code executing in less-privileged child processes or
threads (including scripts executed by an in-process scripting interpreter)
could execute arbitrary code with the privileges of the parent process (usually
root) by manipulating the scoreboard.
This adds the nginx module `ngx_http_proxy_connect_module`, which makes
it possible to tunnel HTTPS through an nginx proxy[1].
As this module contains patches for several nginx versions, some minor
adjustments were needed:
* Allowed each entry in `nginxModules` to provide patches.
* Added an optional `supports` attribute to ensure that each module can
determine if it supports the currently built nginx version (e.g. stable
1.14 ATM or mainline 1.15 ATM).
[1] https://github.com/chobits/ngx_http_proxy_connect_module
Currently, it seems there is no easy way to override the package to
add modules. For example, if we want to add the `ipscrub` module, we
can do:
pkgs.nginxStable.override {
modules = [ pkgs.nginxModules.ipscrub ];
};
But then we lose `rtmp`, `dav` and `moreheaders`, which are defined
in `all-packages.nix`. With this modification, we can now do:
pkgs.nginxStable.override {
modules = pkgs.nginxStable.passthru.modules ++ [ pkgs.nginxModules.ipscrub ];
};
The tests were enabled in #53488 and succeeded on Darwin; on Linux they
still failed because of an empty hostname inside the sandbox (we have no
UTS-namespace hostname and, I think, no /etc/hosts). Nix on Darwin lacks
sufficiently powerful sandboxing, which is why there were no problems
there.
Patching the tests to fallback to "127.0.0.1" if hostname of the
localhost cannot be retrieved matches the behaviour of lighttpd itself
and allows the tests to pass.
Not sure whether having no hostname in the test environment is too
weird a case for upstream to care about.