|
My original goal here was just to get the self-hosted Wasm backend
compiling again after the pipeline change, but it turned out that from
there it was pretty simple to entirely eliminate the shared state
between `codegen.wasm` and `link.Wasm`. As such, this commit not only
fixes the backend, but makes it the second backend (after CBE) to
support the new 1:N:1 threading model.
|
|
As of this commit, every backend other than self-hosted Wasm and
self-hosted SPIR-V compiles and (at least somewhat) functions again.
Those two backends are currently disabled with panics.
Note that `Zcu.Feature.separate_thread` is *not* enabled for the fixed
backends. Avoiding linker references from codegen is a non-trivial task,
and can be done after this branch.
|
|
The idea here is that instead of the linker calling into codegen, codegen should run
before we touch the linker, and once MIR is produced, it is sent to the linker. Aside
from simplifying the call graph (by preventing N linkers from each calling into M
codegen backends!), this has the huge benefit that it is possible to parallelize
codegen separately from linking. The threading model can look like this:
* 1 semantic analysis thread, which generates AIR
* N codegen threads, which process AIR into MIR
* 1 linker thread, which emits MIR to the binary
The codegen threads are also responsible for `Air.Legalize` and
`Air.Liveness`; it's more efficient to do this work here instead of
blocking the main thread for this trivially parallel task.
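As a rough mental model only (none of this is the compiler's actual code, and it
assumes a recent Zig standard library), the pipeline amounts to two work queues
between the three stages: the sema thread pushes AIR, the N codegen threads pop AIR
and push MIR, and the linker thread pops MIR. A minimal sketch of such a queue:
```zig
const std = @import("std");

/// Minimal sketch only; the real compiler's queues and task types differ.
fn WorkQueue(comptime T: type) type {
    return struct {
        const Self = @This();

        mutex: std.Thread.Mutex = .{},
        cond: std.Thread.Condition = .{},
        items: std.ArrayListUnmanaged(T) = .empty,
        closed: bool = false,

        /// Called by the producing stage (sema for the AIR queue, codegen
        /// workers for the MIR queue).
        fn push(q: *Self, gpa: std.mem.Allocator, item: T) !void {
            q.mutex.lock();
            defer q.mutex.unlock();
            try q.items.append(gpa, item);
            q.cond.signal();
        }

        /// Called by the consuming stage; returns null once the queue has
        /// been closed and drained, telling the consumer to shut down.
        fn pop(q: *Self) ?T {
            q.mutex.lock();
            defer q.mutex.unlock();
            while (q.items.items.len == 0) {
                if (q.closed) return null;
                q.cond.wait(&q.mutex);
            }
            return q.items.orderedRemove(0);
        }

        /// Called once the producing side is finished (for the MIR queue,
        /// after every codegen worker has exited).
        fn close(q: *Self) void {
            q.mutex.lock();
            defer q.mutex.unlock();
            q.closed = true;
            q.cond.broadcast();
        }
    };
}
```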
I have repurposed the `Zcu.Feature.separate_thread` backend feature to
indicate support for this 1:N:1 threading pattern. This commit makes the
C backend support this feature, since it was relatively easy to divorce
from `link.C`: it just required eliminating some shared buffers. Other
backends don't currently support this feature. In fact, they don't even
compile -- the next few commits will fix them back up.
|
|
|
|
Similar to the previous commit, this commit untangles LLD integration
from the self-hosted linkers. Despite the big network of functions which
were involved, it turns out that what was going on here is quite simple. The
LLD linking logic is actually very self-contained; it requires a few
flags from the `link.File.OpenOptions`, but that's really about it. We
don't need any of the mutable state on `Elf`/`Coff`/`Wasm`, for
instance. There was some legacy code trying to handle support for using
self-hosted codegen with LLD, but that's not a supported use case, so
I've just stripped it out.
For now, I've just pasted the logic for the 3 targets we currently support linking
with LLD into this new linker implementation,
`link.Lld`; however, it's almost certainly possible to combine some of
the logic and simplify this file a bit. But to be honest, it's not
actually that bad right now.
This commit ends up eliminating the distinction between `flush` and
`flushZcu` (formerly `flushModule`) in linkers, where the latter
previously meant something along the lines of "flush, but if you're
going to be linking with LLD, just flush the ZCU object file, don't
actually link". The distinction here doesn't seem like it was properly
defined, and most linkers seem to treat them as essentially identical
anyway. Regardless, all calls to `flushZcu` are gone now, so it's
deleted -- one `flush` to rule them all!
The end result of this commit and the preceding one is that LLVM and LLD
fit into the pipeline much more sanely:
* If we're using LLVM for the ZCU, that state is on `zcu.llvm_object`
* If we're using LLD to link, then the `link.File` is a `link.Lld`
* Calls to "ZCU link functions" (e.g. `updateNav`) lower to calls to the
LLVM object if it's available, or otherwise to the `link.File` if it's
available (neither is available under `-fno-emit-bin`)
* After everything is done, linking is finalized by calling `flush` on
the `link.File`; for `link.Lld` this invokes LLD, for other linkers it
flushes self-hosted linker state
There's one messy thing remaining, and that's how self-hosted function
codegen in a ZCU works; right now, we process AIR with a call sequence
something like this:
* `link.doTask`
* `Zcu.PerThread.linkerUpdateFunc`
* `link.File.updateFunc`
* `link.Elf.updateFunc`
* `link.Elf.ZigObject.updateFunc`
* `codegen.generateFunction`
* `arch.x86_64.CodeGen.generate`
So, we start in the linker, take a scenic detour through `Zcu`, go back
to the linker, into its implementation, and then... right back out, into
code which is generic over the linker implementation, and then dispatch
on the *backend* instead! Of course, within `arch.x86_64.CodeGen`, there
are some more places which switch on the `link` implementation being
used. This is all pretty silly... so it shall be my next target.
|
|
The main goal of this commit is to make it easier to decouple codegen
from the linkers by being able to do LLVM codegen without going through
the `link.File`; however, this ended up being a nice refactor anyway.
Previously, every linker stored an optional `llvm.Object`, which was
populated when using LLVM for the ZCU *and* linking an output binary;
and `Zcu` also stored an optional `llvm.Object`, which was used only
when we needed LLVM for the ZCU (e.g. for `-femit-llvm-bc`) but were not
emitting a binary.
This situation was incredibly silly. It meant there were N+1 places the
LLVM object might be instead of just 1, and it meant that every linker
had to start a bunch of methods by checking for an LLVM object and, if it was not
`null`, dispatching to the corresponding method on *it* instead.
Instead, we now always store the LLVM object on the `Zcu` -- which makes
sense, because it corresponds to the object emitted by, well, the Zig
Compilation Unit! The linkers now mostly don't make reference to LLVM.
`Compilation` makes sure to emit the LLVM object if necessary before
calling `flush`, so it is ready for the linker. Also, all of the
`link.File` methods which act on the ZCU -- like `updateNav` -- now
check for the LLVM object in `link.zig` instead of in every single
individual linker implementation. Notably, the change to LLVM emit
improves this rather ludicrous call chain in the `-fllvm -flld` case:
* Compilation.flush
* link.File.flush
* link.Elf.flush
* link.Elf.linkWithLLD
* link.Elf.flushModule
* link.emitLlvmObject
* Compilation.emitLlvmObject
* llvm.Object.emit
Replacing it with this one:
* Compilation.flush
* llvm.Object.emit
...although we do currently still end up in `link.Elf.linkWithLLD` to do
the actual linking. The logic for invoking LLD should probably also be
unified at least somewhat; I haven't done that in this commit.
|
|
* The `codegen_nav`, `codegen_func`, `codegen_type` tasks are renamed to
`link_nav`, `link_func`, and `link_type`, to more accurately reflect
their purpose of sending data to the *linker*. Currently, `link_func`
remains responsible for codegen; this will change in an upcoming
commit.
* Don't go on a pointless detour through `PerThread` when linking ZCU
functions/`Nav`s; the `linkerUpdateNav` etc. logic now lives in
`link.zig`. Currently, `linkerUpdateFunc` is an exception, because it
has broader responsibilities including codegen, but this will be
solved in an upcoming commit.
|
|
Also closes #23902
|
|
Closes #22257
|
|
Before:
* std.Target.arm.featureSetHas(target.cpu.features, .has_v7)
* std.Target.x86.featureSetHasAny(target.cpu.features, .{ .sse, .avx, .cmov })
* std.Target.wasm.featureSetHasAll(target.cpu.features, .{ .atomics, .bulk_memory })
After:
* target.cpu.has(.arm, .has_v7)
* target.cpu.hasAny(.x86, &.{ .sse, .avx, .cmov })
* target.cpu.hasAll(.wasm, &.{ .atomics, .bulk_memory })
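For illustration, a few hedged call-site sketches using the new methods (the arch
guards and exact feature choices below are illustrative, not taken from any real
compiler code):
```zig
const std = @import("std");

// Sketches only: gate the family-specific queries behind an arch check.
fn hasV7(target: std.Target) bool {
    return switch (target.cpu.arch) {
        .arm, .armeb, .thumb, .thumbeb => target.cpu.has(.arm, .has_v7),
        else => false,
    };
}

fn hasSimdMath(target: std.Target) bool {
    return switch (target.cpu.arch) {
        .x86, .x86_64 => target.cpu.hasAny(.x86, &.{ .sse, .avx, .cmov }),
        else => false,
    };
}

fn hasThreadsAndBulkMemory(target: std.Target) bool {
    return switch (target.cpu.arch) {
        .wasm32, .wasm64 => target.cpu.hasAll(.wasm, &.{ .atomics, .bulk_memory }),
        else => false,
    };
}
```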
|
|
compiler: Rework PIE option logic.
|
|
Closes #24030.
|
|
Otherwise we rely on Clang's default, which is known to not always match ours.
|
|
Linking it by default means that we produce binaries that, effectively, only run
on systems which have the Windows SDK installed because ucrtbased.dll is not
redistributable, and the Windows SDK is what actually installs ucrtbased.dll
into %SYSTEM32%. The resulting binaries also can't run under Wine because Wine
does not provide ucrtbased.dll.
It is also inconsistent with our behavior for *-windows-gnu where we always link
ucrtbase.dll. See #23983, #24019, and #24053 for more details.
So just use ucrtbase.dll regardless of mode. With this change, we can also drop
the implicit definition of the _DEBUG macro in zig cc, which has in some cases
been problematic for users.
Users who want to opt into the old behavior can do so, both for *-windows-msvc
and *-windows-gnu, by explicitly passing -lucrtbased and -D_DEBUG. We might
consider adding a more ergonomic flag like -fdebug-crt to the zig build-* family
of commands in the future.
Closes #24052.
|
|
This reverts commit 4641e9556d1343e95e4676e2f1e3024173db9962.
See discussion on #24052.
|
|
|
|
By using the debug UCRT, we get access to functions like _CrtDbgReport.
Closes #23983.
|
|
In a compiler built with debug extensions, pass `--debug-incremental` to
spawn the "incremental debug server". This is a TCP server exposing a
REPL which allows querying a bunch of compiler state, some of which is
stored only when that flag is passed. Eventually, this will probably
move into `std.zig.Server`/`std.zig.Client`, but this is easier to work
with right now. The easiest way to interact with the server is `telnet`.
|
|
This was an unintentional regression in 23c8175 which meant that
backwards-incompatible ZIR changes would cause compiler crashes if old caches were
present.
|
|
Closes #23813.
|
|
Support dynamically-linked NetBSD libc when cross-compiling
|
|
This commit makes some big changes to how we track state for Zig source
files. In particular, it changes:
* How `File` tracks its path on-disk
* How AstGen discovers files
* How file-level errors are tracked
* How `builtin.zig` files and modules are created
The original motivation here was to address incremental compilation bugs
with the handling of files, such as #22696. To fix this, a few changes
are necessary.
Just like declarations may become unreferenced on an incremental update,
meaning we suppress analysis errors associated with them, it is also
possible for all imports of a file to be removed on an incremental
update, in which case file-level errors for that file should be
suppressed. As such, after AstGen, the compiler must traverse files
(starting from analysis roots) and discover the set of "live files" for
this update.
Additionally, the compiler's previous handling of retryable file errors
was not very good; the source location the error was reported against was based only
on the first discovered import of that file. This source
location also disappeared on future incremental updates. So, as a part
of the file traversal above, we also need to figure out the source
locations of imports which errors should be reported against.
Another observation I made is that the "file exists in multiple modules"
error was not implemented in a particularly good way (I get to say that
because I wrote it!). It was subject to races, where the order in which different
imports of a file were discovered affected both how errors were printed and which
module the file was arbitrarily assigned to, with the latter in turn affecting which
other files were considered for import.
The thing I realised here is that while the AstGen worker pool is
running, we cannot know for sure which module(s) a file is in; we could
always discover an import later which changes the answer.
So, here's how the AstGen workers have changed. We initially ensure that
`zcu.import_table` contains the root files for all modules in this Zcu,
even if we don't know any imports for them yet. Then, the AstGen
workers do not need to be aware of modules. Instead, they simply ignore
module imports, and only spin off more workers when they see a by-path
import.
During AstGen, we can't use module-root-relative paths, since we don't know which
module(s) each file is in; but we don't want to unnecessarily use absolute paths
either, because those are non-portable and can make
`error.NameTooLong` more likely. As such, I have introduced a new
abstraction, `Compilation.Path`. This type is a way of representing a
filesystem path which has a *canonical form*. The path is represented
relative to one of a few special directories: the lib directory, the
global cache directory, or the local cache directory. As a fallback, we
use absolute (or cwd-relative on WASI) paths. This is kind of similar to
`std.Build.Cache.Path` with a pre-defined list of possible
`std.Build.Cache.Directory`, but has stricter canonicalization rules
based on path resolution to make sure deduplicating files works
properly. A `Compilation.Path` can be trivially converted to a
`std.Build.Cache.Path` from a `Compilation`, but is smaller, has a
canonical form, and has a digest which will be consistent across
different compiler processes with the same lib and cache directories
(important when we serialize incremental compilation state in the
future). `Zcu.File` and `Zcu.EmbedFile` both contain a
`Compilation.Path`, which is used to access the file on-disk;
module-relative sub paths are used quite rarely (`EmbedFile` doesn't
even have one now for simplicity).
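As a concrete illustration of the idea (and only an illustration; the real
`Compilation.Path` has a different layout and stricter, resolution-based
canonicalization), such a path boils down to a root tag plus a sub-path, hashed
without any machine-specific prefix:
```zig
const std = @import("std");

// Illustrative sketch, not the actual `Compilation.Path` definition.
const CanonicalPath = struct {
    root: Root,
    sub_path: []const u8,

    const Root = enum(u8) { lib_dir, global_cache, local_cache, absolute };

    /// Stable across compiler processes that share the same lib and cache
    /// directories, because nothing machine-specific is hashed for the
    /// first three roots.
    fn digest(p: CanonicalPath) u64 {
        var h = std.hash.Wyhash.init(0);
        h.update(std.mem.asBytes(&p.root));
        h.update(p.sub_path);
        return h.final();
    }
};
```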
After the AstGen workers all complete, we know that any file which might
be imported is definitely in `import_table` and up-to-date. So, we
perform a single-threaded graph traversal, similar to the role `resolveReferences`
plays for `AnalUnit`s, but for files instead. We
figure out which files are alive, and which module each file is in. If a
file turns out to be in multiple modules, we set a field on `Zcu` to
indicate this error. If a file is in a different module to a prior
update, we set a flag instructing `updateZirRefs` to invalidate all
dependencies on the file. This traversal also discovers "import errors";
these are errors associated with a specific `@import`. With Zig's
current design, there is only one possible error here: "import outside
of module root". This must be identified during this traversal instead
of during AstGen, because it depends on which module the file is in. I
tried also representing "module not found" errors in this same way, but
it turns out to be much more useful to report those in Sema, because of
use cases like optional dependencies where a module import is behind a
comptime-known build option.
For simplicity, `failed_files` now just maps to `?[]u8`, since the
source location is always the whole file. In fact, this allows removing
`LazySrcLoc.Offset.entire_file` completely, slightly simplifying some
error reporting logic. File-level errors are now directly built in the
`std.zig.ErrorBundle.Wip`. If the payload is not `null`, it is the
message for a retryable error (i.e. an error loading the source file),
and will be reported with a "file imported here" note pointing to the
import site discovered during the single-threaded file traversal.
The last piece of fallout here is how `Builtin` works. Rather than
constructing "builtin" modules when creating `Package.Module`s, they are
now constructed on-the-fly by `Zcu`. The map `Zcu.builtin_modules` maps
from digests to `*Package.Module`s. These digests are abstract hashes of
the `Builtin` value; i.e. all of the options which are placed into
"builtin.zig". During the file traversal, we populate `builtin_modules`
as needed, so that when we see these imports in Sema, we just grab the
relevant entry from this map. This eliminates a bunch of awkward state
tracking during construction of the module graph. It's also now clearer
exactly what options the builtin module has, since previously it arbitrarily
inherited some options from whichever module was created first with that "builtin"
module!
The user-visible effects of this commit are:
* retryable file errors are now consistently reported against the whole
file, with a note pointing to a live import of that file
* some theoretical bugs where imports are wrongly considered distinct
(when the import path moves out of the cwd and then back in) are fixed
* some consistency issues with how file-level errors are reported are
fixed; these errors will now always be printed in the same order
regardless of how the AstGen pass assigns file indices
* incremental updates do not print retryable file errors differently
between updates or depending on file structure/contents
* incremental updates support files changing modules
* incremental updates support files becoming unreferenced
Resolves: #22696
|
|
Only works for NetBSD 10.1+. Note that we still default to targeting NetBSD 9.
Contributes to #2877.
|
|
|
|
Inline calls which happened in the erroring `AnalUnit` still show as
error notes, because they tend to provide very important context (e.g. to
see how comptime values propagate through them). However, "earlier"
inline calls are still useful to see to understand how something is
being referenced, so we should include them in the reference trace.
|
|
When `-freference-trace` is not passed, we want to show exactly one
reference trace. Previously, we set the reference trace root in `Sema`
iff there were no other failed analyses. However, this results in an
arbitrary error being the one with the reference trace after error
sorting. It is also incompatible with incremental compilation, where
some errors might be unreferenced. Instead, set the field on all
analysis errors, and decide in `Compilation.getAllErrorsAlloc` which
reference trace[s] to actually show.
|
|
The driver doesn't support it, and FreeBSD 13+ on PPC64 uses ELFv2 anyway.
|
|
Only works for FreeBSD 14+. Note that we still default to targeting FreeBSD 13.
Contributes to #2876.
|
|
|
|
|
|
If clang encountered bad imports, the depfile will not have been generated, and it
doesn't make sense to warn the user in this case. In fact, `FileNotFound` is never
worth warning about here; it just means that the file we were deleting to save space
isn't there in the first place! If the missing file actually affected the compilation
(e.g. another process raced to delete it for some reason), we would already error in
the normal code path which reads these files. So we can always safely omit the
warning in the `FileNotFound` case, warning only when the file might still exist.
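The intended behavior is roughly the following sketch (illustrative only, not the
compiler's actual cleanup code):
```zig
const std = @import("std");

// Sketch of the intended behavior, not the compiler's actual code.
/// Delete a stale depfile, but only warn when the file might still exist
/// afterwards; `error.FileNotFound` just means there was nothing to delete.
fn cleanupDepFile(dir: std.fs.Dir, basename: []const u8) void {
    dir.deleteFile(basename) catch |err| switch (err) {
        error.FileNotFound => {}, // already gone; stay silent
        else => std.log.warn("failed to delete '{s}': {s}", .{ basename, @errorName(err) }),
    };
}
```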
To see what this fixes, create the following file...
```c
#include <nonexist>
```
...and run `zig build-obj` on it. Before this commit, you will get a
redundant warning; after this commit, that warning is gone.
|
|
It remains 1 everywhere else.
Also remove some code that allowed setting the libc++ ABI version on the
Compilation since there are no current plans to actually expose this in the CLI.
|
|
|
|
|
|
Aside from adding comments to document the logic in `Cache.Manifest.hit`
better, this commit fixes two serious bugs.
The first, spotted by Andrew, is that when upgrading from a shared to an
exclusive lock on the manifest file, we do not seek it back to the
start. This is a simple fix.
The second is more subtle, and has to do with the computation of file
digests. Broadly speaking, the goal of the main loop in `hit` is to
iterate the files listed in the manifest file, and check if they've
changed, based on stat and a file hash. While doing this, the
`bin_digest` field of `std.Build.Cache.File`, which is initially
`undefined`, is populated for all files, either straight from the
manifest (if the stat matches) or recomputed from the file on-disk. This
file digest is then used to update `man.hash.hasher`, which is building
the final hash used as, for instance, the output directory name when the
compiler emits into the cache directory. When `hit` returns a cache
miss, it is expected that `man.hash.hasher` includes the digests of all
"initial files"; that is, those which have been already added with e.g.
`addFilePath`, but not those which will later be added with
`addFilePost` (even though the manifest file has told us about some such
files). Previously, `hit` was using the `unhit` function to do this in a
few cases. However, this is incorrect, because `unhit` assumes that all files already
have their `bin_digest` field populated; that function is only valid to call *after*
`hit` returns. Instead, we need to actually compute the hashes which haven't yet been
populated. Even if this logic had been working, there was still a bug here, because
we called `unhit`
when upgrading from a shared to an exclusive lock, writing the
(potentially `undefined`) file digests, but the loop itself writes the
file digests *again*! All in all, the hashing logic here was actually
incredibly broken.
I've taken the opportunity to restructure this section of the code into
what I think is a more readable format. A new function,
`hitWithCurrentLock`, uses the open manifest file to try and find a
cache hit. It returns a tagged union which, in the miss case, tells the
caller (`hit`) how many files already have their hash populated. This
avoids redundant work recomputing the same hash multiple times in
situations where the lock needs upgrading. This also eliminates the
outer loop from `hit`, which was a little confusing because it iterated
no more than twice!
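The shape of that return value is roughly as follows (the names and fields are
assumptions, not the actual `std.Build.Cache` internals):
```zig
// Sketch; tag and field names are assumptions.
const HitResult = union(enum) {
    hit,
    /// Cache miss: the first `files_hashed` files already have their
    /// `bin_digest` populated, so `hit` only needs to hash the remaining
    /// initial files before returning.
    miss: struct { files_hashed: usize },
};
```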
The bugs fixed here could manifest in several different ways depending
on how contended file locks were satisfied. Most notably, on a cache
miss, the Zig compiler might have written the compilation output to the
incorrect directory (because it incorrectly constructed a hash using
`undefined` or repeated file digests), resulting in all future hits on
this manifest causing `error.FileNotFound`. This is #23110. I have been
able to reproduce #23110 on `master`, and have not been able to after
this commit, so I am relatively sure this commit resolves that issue.
Resolves: #23110
|
|
* Accept -fsanitize-c=trap|full in addition to the existing form.
* Accept -f(no-)sanitize-trap=undefined in zig cc.
* Change type of std.Build.Module.sanitize_c to std.zig.SanitizeC.
* Add some missing Compilation.Config fields to the cache.
Closes #23216.
|
|
incremental: fixes
|
|
|
|
Compile log output is now separated based on the `AnalUnit` which
performed the `@compileLog` call, so that we can omit the output for
unreferenced ("dead") units. The units are also sorted when collecting
the `ErrorBundle`, so that compile logs are always printed in a
consistent order, like compile errors are. This is important not only
for incremental compilation, but also for parallel analysis.
Resolves: #23609
|
|
Introduce libzigc for libc function implementations in Zig
|
|
This lays the groundwork for #2879. This library will be built and linked when a
static libc is going to be linked into the compilation. Currently, that means
musl, wasi-libc, and MinGW-w64. As a demonstration, this commit removes the musl
C code for a few string functions and implements them in libzigc. This means
that those libzigc functions are now load-bearing for musl and wasi-libc.
Note that if a function has an implementation in compiler-rt already, libzigc
should not implement it. Instead, as we recently did for memcpy/memmove, we
should delete the libc copy and rely on the compiler-rt implementation.
I repurposed the existing "universal libc" code to do this. That code hadn't
seen development beyond basic string functions in years, and was only usable-ish
on freestanding. I think that if we want to seriously pursue the idea of Zig
providing a freestanding libc, we should do so only after defining clear goals
(and non-goals) for it. See also #22240 for a similar case.
|
|
Darwin targets.
|
|
Co-authored-by: Alex Rønne Petersen <alex@alexrp.com>
|
|
Step.Compile: use LtoMode enum for lto option
|
|
Closes #21411.
|
|
|
|
See:
* https://discourse.llvm.org/t/rfc-freezing-c-03-headers-in-libc/77319
* https://discourse.llvm.org/t/rfc-project-hand-in-hand-llvm-libc-libc-code-sharing/77701
We're dropping C++03 support in Zig due to the first change; it would be
insane to ship 1018 duplicate header files just for this outdated use case.
As a result of the second change, I had to bring in a subset of the headers from
llvm-libc since libc++ now depends on these. Hopefully we can continue to get
away with not copying the entirety of llvm-libc.
|
|
|
|
LLD expects the library file name (minus extension) to be exactly libmingw32. By
calling it mingw32 previously, we prevented it from being detected as being in
LLD's list of libraries that are excluded from the MinGW-specific auto-export
mechanism.
https://github.com/llvm/llvm-project/blob/b9d27ac252265839354fffeacaa8f39377ed7424/lld/COFF/MinGW.cpp#L30-L56
As a result, a DLL built for *-windows-gnu with Zig would export a bunch of
internal MinGW symbols. This sometimes worked out fine, but it could break at
link or run time when linking an EXE with a DLL, where both are targeting
*-windows-gnu and thus linking separate copies of mingw32.lib. In #23204, this
manifested as the linker getting confused about _gnu_exception_handler() because
it was incorrectly exported by the DLL while also being defined in the
mingw32.lib that was being linked into the EXE.
Closes #23204.
|
|
On updates with failed files, we should refrain from doing any semantic
analysis, or even touching codegen/link. That way, incremental
compilation state is untouched for when the user fixes the AstGen
errors.
Resolves: #23205
|