This service stopped working two days ago for unknown reasons. Until it
is determined how to get it working again, or we switch to a different
CI provider for aarch64, this CI test coverage is disabled so that
we can continue to use the CI for other targets.
releases consistent""
This reverts commit 54c8861bc4b6aa08a2252943c93317d91ef0bfa6.
This caused CI failure.
consistent"
This reverts commit 28054d96f0ed5280660811612732cb000f9c09e8.
This caused CI failures.
I observed this error:
```
curl: option --fail-with-body: is unknown
```
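For context, --fail-with-body was added in curl 7.76.0, so older curl
builds reject the option outright. Below is a minimal sketch of a
version-tolerant fallback; the probe and the variable names are
illustrative, not necessarily what the CI script does:
```
# Probe once for --fail-with-body support (curl >= 7.76.0): with an
# unknown option, curl errors out instead of printing --version.
if curl --fail-with-body --version >/dev/null 2>&1; then
  fail_flag="--fail-with-body"
else
  fail_flag="--fail"
fi
# TARBALL_URL is a hypothetical placeholder for the file to download.
curl "$fail_flag" -L -o artifact.tar.xz "$TARBALL_URL"
```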
It expired after one year.
The main reason to update the CI tarballs is
f79824f946995a050c261ee96a08e31ccf00a112, which fixes an issue that
caused the CI to fail on all targets.
The original impetus for making a change here was a typo in --add-header
causing the script to fail. However, upon inspection, I was alarmed to
find that we were making a --recursive upload to the *root directory* of
ziglang.org. This could result in garbage files being uploaded to the
website, or important files being overwritten. While addressing this
concern, I decided to take on file compression as well.
Removed compression prior to uploading to S3. I am vetoing
pre-compressing objects for the following reasons:
* It prevents clients that do not support gzip encoding from working.
* It breaks the premise that objects on S3 are stored 1-to-1 with what
  is on disk.
* It prevents Cloudflare from using a more efficient encoding, such as
  brotli, which they have recently started using.
Systems such as Cloudflare and Fastly already do compression on the fly,
and we should interoperate with them instead of fighting them (see the
upload sketch at the end of this message).
CloudFront has an arbitrary limit of 9.5 MiB for auto-compression. I
looked and did not see a way to increase this limit. The data.js file is
currently 16 MiB. To fix this problem, we need to do one of the
following things:
* Reduce the size of data.js to less than 9.5 MiB.
* Figure out how to adjust the CloudFront settings to increase the
  maximum size for auto-compressed objects.
* Migrate to Fastly. Fastly appears to not have this limitation. Note
  that we already plan to migrate to Fastly for the website.
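For reference, here is a sketch of the upload shape this argues for,
assuming the aws CLI; the local directory and the builds/ prefix are
illustrative placeholders rather than the script's actual paths:
```
# Illustrative sketch: upload artifacts uncompressed and scoped to an
# explicit prefix, so a --recursive copy cannot clobber unrelated files
# at the bucket root, and the CDN stays free to choose gzip or brotli
# on the fly.
aws s3 cp ./release-artifacts/ s3://ziglang.org/builds/ \
  --recursive --acl public-read
```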
Workaround for #12685
see #12684 for motivation
* CMakeLists: pass `-Dstrip` for release Zig builds.
* Pass -target and -mcpu to zig1 (see the sketch below). This works
  around LLVM on FreeBSD incorrectly detecting "freestanding" instead
  of "freebsd" as the native OS.
* ci.ziglang.org is now responsible for creating aarch64-macos tarballs
  rather than Azure.
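To illustrate the second point (the real invocation lives in CMakeLists
and differs in detail), the idea is that an explicit triple and CPU
leave nothing for LLVM's native-OS detection to get wrong:
```
# Illustrative only: spell out the target triple and CPU so the
# bootstrap compiler never consults LLVM's native-OS guess, which
# FreeBSD's LLVM reported as "freestanding".
./zig1 build-exe src/main.zig -target x86_64-freebsd -mcpu=baseline
```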
For both macOS and FreeBSD.
This updates to a stage3 freebsd tarball.
stage1 is available behind the -fstage1 flag.
closes #89
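The flag applies per invocation, for example:
```
# Opt back into the legacy stage1 backend for a single compilation:
zig build-exe hello.zig -fstage1
```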
Previously we were delegating that job to the website CI, but it caused
the website repo to bloat, so now we only commit releases.json.
The previous tarballs were stage3, which is not quite ready for prime
time yet.
243afdcdf57d74a184784551aebe58062e5afc03 removed `-Dskip-compile-errors`
and added `-Dskip-stage`.
NetBSD CI is disabled because it is not yet supported in
zig-bootstrap. Once NetBSD has proper zig-bootstrap support, it can be
re-enabled.
Windows is not solved here yet; I will be pushing a separate commit for
that.
This reverts commit 3063f0a5ed373947badd0af056db310283c76e37.
* Remove the unused download page HTML. It is now handled in the
  www.ziglang.org website repo.
* Add netbsd to the downloads index.json file that we send to the
  www.ziglang.org website repo.
* Shallow clone the website repo to avoid downloading old copies of
  data.js unnecessarily (see the sketch below).
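A minimal sketch of the shallow clone, assuming the repo lives at
github.com/ziglang/www.ziglang.org:
```
# Fetch only the most recent commit; the full history contains many
# old copies of data.js that we do not need.
git clone --depth 1 https://github.com/ziglang/www.ziglang.org
```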