As with Solaris (dba1bf935390ddb0184a4dc72245454de6c06fd2), we have no way to
actually audit contributions for these OSs. IBM also makes it even harder than
Oracle to actually obtain these OSs.
closes #23695
closes #23694
closes #3655
closes #23693
|
|
feat: init x86_16 arch via CBE
|
|
There is no straightforward way for the Zig team to access the Solaris system
headers; to do this, one has to create an Oracle account, accept their EULA to
download the installer ISO, and finally install it on a machine or VM. We do not
have to jump through hoops like this for any other OS that we support, and no
one on the team has expressed willingness to do it.
As a result, we cannot audit any Solaris contributions to std.c or other
similarly sensitive parts of the standard library. The best we would be able to
do is assume that Solaris and illumos are 100% compatible with no way to verify
that assumption. But at that point, the `solaris` and `illumos` OS tags would be
functionally identical anyway.
For Solaris especially, any contributions that involve APIs introduced after the
OS was made closed-source would also be inherently more risky than equivalent
contributions for other proprietary OSs due to the case of Google LLC v. Oracle
America, Inc., wherein Oracle clearly demonstrated its willingness to pursue
legal action against entities that merely copy API declarations.
Finally, Oracle laid off most of the Solaris team in 2017; the OS has been in
maintenance mode since, presumably to be retired completely sometime in the 2030s.
For these reasons, this commit removes all Oracle Solaris support.
Anyone who still wishes to use Zig on Solaris can try their luck by simply using
`illumos` instead of `solaris` in target triples; chances are it'll work. But there
will be no effort from the Zig team to support this use case; we recommend that
people move to illumos instead.
|
|
|
|
https://github.com/ziglang/zig/issues/25699
|
|
`std.Target`: add tags and info for alpha, hppa, microblaze, sh + some bonus commits
|
|
|
|
|
|
Renames `arePointersLogical` to `shouldBlockPointerOps` for clarity and adds a
capability check to allow pointer ops on `.storage_buffer` when the
`variable_pointers` capability is enabled.
Fixes #25638
|
|
This allows us to rule out support for certain address spaces based on the OS.
This commit is just a refactor, however, and doesn't actually make use of that
opportunity yet.
|
|
|
|
* ELF v1 on powerpc64 is only barely kept on life support in a couple of Linux
distros. I don't anticipate that this will last much longer.
* Most of the Linux world has moved to powerpc64le which requires ELF v2.
* Some Linux distros have even started supporting powerpc64 with ELF v2.
* The BSD world has long since moved to ELF v2.
* We have no actual linking support for ELF v1.
* ELF v1's DWARF register mappings are confused, which is becoming a problem in
our DWARF code in std.debug.
It's clear that ELF v1 is on its way out, and we never fully supported it
anyway. So let's not waste any time or energy on it going forward.
closes #5927
|
|
|
|
|
|
Without allowing this, the references to `compiler_rt.dll` emitted by
the COFF linker will prevent the executable from running.
|
|
Closes #25343
|
|
Some changes to prepare for FreeBSD CI
|
|
There are some blocking bugs in the self-hosted ELF linker.
|
|
This iteration already has significantly better incremental support.
Closes #24110
|
|
The big endian RISC-V effort is mostly driven by MIPS (the company) which is
pivoting to RISC-V, and presumably needs a big endian variant to fill the niche
that big endian MIPS (the ISA) did.
GCC already supports these targets, but LLVM support will only appear in LLVM 22;
this commit just adds the necessary target knowledge and checks on our end.
|
|
It doesn't really make sense for `target_util.canBuildLibCompilerRt`
(and its ubsan-rt friend) to take in `use_llvm`, because the caller
doesn't control that: they're just going to queue a sub-compilation for
the runtime. The only exception to that is the ZCU strategy, where we
effectively embed `_ = @import("compiler_rt")` into the Zig compilation:
there, the question does matter. Rather than modeling this with multiple
awkward calls, just have `canBuildLibCompilerRt` return more than a plain
boolean: the result now also differentiates between the self-hosted backend
being capable of building the library and only LLVM being capable. Logic in
`Compilation` uses that difference to decide whether to use the ZCU strategy,
and also to disable the library when LLVM would be required but the compiler
does not support it.
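A minimal sketch of the shape of the new result, with hypothetical names (the
actual declaration in `target_util` may differ):

```zig
// Hypothetical sketch only; not the compiler's actual declaration.
const CanBuildLibCompilerRt = enum {
    no, // the runtime cannot be built for this target at all
    only_llvm, // only the LLVM backend is capable of building it
    yes, // the self-hosted backend can build it as well
};
```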
Also, remove a redundant check later on, when actually queuing jobs.
We've already checked that we can build `compiler_rt`, and
`compiler_rt_strat` is set accordingly. I'm guessing this was there to
work around a bug I saw in the old strategy assignment, where support
was ignored in some cases.
Resolves: #24623
|
|
`std.Target`: require libc for Android API levels prior to 29
|
|
|
|
Same as 97ecb6c551eb628e5a37d18d5a9720d3714a04ef for NetBSD.
|
|
Closes #24553
|
|
|
|
|
|
|
|
|
|
|
|
Closes #24236.
|
|
https://github.com/ziglang/zig/issues/24341
|
|
|
|
This struct is larger than 256 bytes, and code that copies it consistently
shows up in profiles of the compiler.
|
|
`stage2_spirv64` -> `stage2_spirv`
|
|
|
|
Unfortunately, the self-hosted SPIR-V backend is quite tightly coupled
with the self-hosted SPIR-V linker through its `Object` concept (which
is much like `llvm.Object`). Reworking this would be too much work for
this branch. So, for now, I have introduced a special case (similar to
the LLVM backend's special case) to the codegen logic when using this
backend. We will want to delete this special case at some point, but it
need not block this work.
|
|
My original goal here was just to get the self-hosted Wasm backend
compiling again after the pipeline change, but it turned out that from
there it was pretty simple to entirely eliminate the shared state
between `codegen.wasm` and `link.Wasm`. As such, this commit not only
fixes the backend, but makes it the second backend (after CBE) to
support the new 1:N:1 threading model.
|
|
The idea here is that instead of the linker calling into codegen,
instead codegen should run before we touch the linker, and after MIR is
produced, it is sent to the linker. Aside from simplifying the call
graph (by preventing N linkers from each calling into M codegen
backends!), this has the huge benefit that it is possible to
parallelize codegen separately from linking. The threading model can
look like this:
* 1 semantic analysis thread, which generates AIR
* N codegen threads, which process AIR into MIR
* 1 linker thread, which emits MIR to the binary
The codegen threads are also responsible for `Air.Legalize` and
`Air.Liveness`; it's more efficient to do this work here instead of
blocking the main thread for this trivially parallel task.
I have repurposed the `Zcu.Feature.separate_thread` backend feature to
indicate support for this 1:N:1 threading pattern. This commit makes the
C backend support this feature, since it was relatively easy to divorce
from `link.C`: it just required eliminating some shared buffers. Other
backends don't currently support this feature. In fact, they don't even
compile -- the next few commits will fix them back up.
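Purely as an illustration of the 1:N:1 shape (this is not the compiler's code;
the work list, worker function, and item types below are made up for the
example):

```zig
const std = @import("std");

// Stand-ins: `inputs` plays the role of AIR, `outputs` the role of MIR.
const n_items = 8;
var inputs: [n_items]u32 = undefined;
var outputs: [n_items]u32 = undefined;
var next_input = std.atomic.Value(usize).init(0);

// Each codegen worker claims the next unprocessed item and lowers it.
fn codegenWorker() void {
    while (true) {
        const i = next_input.fetchAdd(1, .monotonic);
        if (i >= n_items) return;
        outputs[i] = inputs[i] * 2; // stand-in for AIR -> legalize -> liveness -> MIR
    }
}

pub fn main() !void {
    // "Semantic analysis": produce all work items up front for simplicity.
    for (&inputs, 0..) |*in, i| in.* = @intCast(i);

    // N codegen threads process items in parallel.
    var workers: [4]std.Thread = undefined;
    for (&workers) |*t| t.* = try std.Thread.spawn(.{}, codegenWorker, .{});
    for (workers) |t| t.join();

    // "Linker": a single thread consumes the results in order.
    for (outputs, 0..) |mir, i| std.debug.print("item {d} -> {d}\n", .{ i, mir });
}
```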
|
|
The main goal of this commit is to make it easier to decouple codegen
from the linkers by being able to do LLVM codegen without going through
the `link.File`; however, this ended up being a nice refactor anyway.
Previously, every linker stored an optional `llvm.Object`, which was
populated when using LLVM for the ZCU *and* linking an output binary;
and `Zcu` also stored an optional `llvm.Object`, which was used only
when we needed LLVM for the ZCU (e.g. for `-femit-llvm-bc`) but were not
emitting a binary.
This situation was incredibly silly. It meant there were N+1 places the
LLVM object might be instead of just 1, and it meant that every linker
had to start a bunch of methods by checking for an LLVM object, and just
dispatching to the corresponding method on *it* instead if it was not
`null`.
Instead, we now always store the LLVM object on the `Zcu` -- which makes
sense, because it corresponds to the object emitted by, well, the Zig
Compilation Unit! The linkers now mostly don't make reference to LLVM.
`Compilation` makes sure to emit the LLVM object if necessary before
calling `flush`, so it is ready for the linker. Also, all of the
`link.File` methods which act on the ZCU -- like `updateNav` -- now
check for the LLVM object in `link.zig` instead of in every single
individual linker implementation. Notably, the change to LLVM emit
improves this rather ludicrous call chain in the `-fllvm -flld` case:
* Compilation.flush
* link.File.flush
* link.Elf.flush
* link.Elf.linkWithLLD
* link.Elf.flushModule
* link.emitLlvmObject
* Compilation.emitLlvmObject
* llvm.Object.emit
Replacing it with this one:
* Compilation.flush
* llvm.Object.emit
...although we do currently still end up in `link.Elf.linkWithLLD` to do
the actual linking. The logic for invoking LLD should probably also be
unified at least somewhat; I haven't done that in this commit.
|
|
Closes #22257
|
|
Before:
* std.Target.arm.featureSetHas(target.cpu.features, .has_v7)
* std.Target.x86.featureSetHasAny(target.cpu.features, .{ .sse, .avx, .cmov })
* std.Target.wasm.featureSetHasAll(target.cpu.features, .{ .atomics, .bulk_memory })
After:
* target.cpu.has(.arm, .has_v7)
* target.cpu.hasAny(.x86, &.{ .sse, .avx, .cmov })
* target.cpu.hasAll(.wasm, &.{ .atomics, .bulk_memory })
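A typical call site with the new form might look like this (a sketch; the
function and the particular feature choices are just for illustration):

```zig
const std = @import("std");

// Sketch of a call site using the new API.
fn pickSimdPath(target: std.Target) []const u8 {
    if (target.cpu.has(.x86, .avx)) return "avx";
    if (target.cpu.hasAny(.x86, &.{ .sse, .sse2 })) return "sse";
    return "scalar";
}
```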
|
|
compiler: Rework PIE option logic.
|
|
This appeared in Valgrind 3.25.0.
|
|
The previous `supports_fpic()` check was too broad.
|
|
To my knowledge, the only platforms that actually *require* PIE are Fuchsia and
Android, and the latter *only* when building a dynamically-linked executable.
OpenBSD and macOS both strongly encourage using PIE by default, but it isn't
technically required. So for those two platforms, we enable it by default but
don't enforce it.
Also, importantly, if we're building an object file or a static library, and the
user hasn't explicitly told us whether to build PIE or non-PIE code (and the
target doesn't require PIE), we should *not* default to PIE. Doing so produces
code that cannot be linked into non-PIE output. In other words, building an
object file or a static library as PIE is an optimization only to be done when
the user knows that it'll end up in a PIE executable in the end.
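A rough sketch of the decision described above (the function and its parameter
names are illustrative, not the compiler's actual option logic):

```zig
// Illustrative only; not the compiler's actual PIE resolution code.
const Output = enum { exe, dyn_lib, static_lib, object };

fn resolvePie(explicit: ?bool, requires_pie: bool, prefers_pie: bool, output: Output) bool {
    // e.g. Fuchsia always, Android only for dynamically-linked executables.
    if (requires_pie) return true;
    // Otherwise the user's explicit choice wins.
    if (explicit) |value| return value;
    return switch (output) {
        // Objects and static libraries default to non-PIE so that they can
        // still be linked into non-PIE output.
        .object, .static_lib => false,
        // e.g. OpenBSD and macOS strongly encourage PIE, so default it on there.
        .exe, .dyn_lib => prefers_pie,
    };
}
```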
Closes #21837.
|
|
This adds 4 `Legalize.Feature`s:
* `expand_intcast_safe`
* `expand_add_safe`
* `expand_sub_safe`
* `expand_mul_safe`
These do pretty much what they say on the tin. This logic was previously
in Sema, used when `Zcu.Feature.safety_checked_instructions` was not
supported by the backend. That `Zcu.Feature` has been removed in favour
of this legalization.
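As an illustration, `expand_add_safe` lowers a safety-checked addition into
roughly the following Zig-source-level equivalent (the real transform operates
on AIR, not on Zig source):

```zig
// What a safety-checked `a + b` expands to, expressed at the source level.
fn addSafe(a: i32, b: i32) i32 {
    const result = @addWithOverflow(a, b);
    if (result[1] != 0) @panic("integer overflow");
    return result[0];
}
```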
|
|
Backends can instead ask legalization on a per-instruction basis.
|
|
|
|
|