|
After this change, the frontend and backend cooperate to keep track of
which Decls are actually emitted into the machine code. When any backend
sees a `decl_ref` Value, it must set the corresponding Decl's `alive`
field to true.
This prevents unused comptime data from spilling into the output object
files. For example, with an `inline for` loop, any intermediate comptime
value calculations previously ended up in the object file. Now they are
garbage collected immediately after the owner Decl has its machine code
generated.
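As a rough illustration (the names below are made up for the example), this is
the kind of code where the comptime machinery produces intermediate Decls that
no longer leak into the object file:
```zig
const std = @import("std");

// The comptime work done while unrolling this `inline for` (field-name strings,
// intermediate values) used to be emitted as unused Decls alongside the
// function's machine code; it is now garbage collected after codegen.
const fields = .{ "x", "y", "z" };

fn printAll(v: anytype) void {
    inline for (fields) |name| {
        std.debug.print("{s} = {}\n", .{ name, @field(v, name) });
    }
}

pub fn main() void {
    printAll(.{ .x = 1, .y = 2, .z = 3 });
}
```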
In the frontend, when it is time to send a Decl to the linker, if it has
not been marked "alive" then it is deleted instead.
Additional improvements:
* Resolve type ABI layouts after successful semantic analysis of a
Decl. This is needed so that the backend has access to struct fields.
* Sema: fix incorrect logic in resolveMaybeUndefVal. It should return
"not comptime known" instead of a compile error for global variables.
* `Value.pointerDeref` now returns `null` in the case that the pointer
deref cannot happen at compile-time. This is true for global
variables, for example. Another example is if a comptime known
pointer has a hard coded address value.
* Binary arithmetic sets the requireRuntimeBlock source location to the
lhs_src or rhs_src as appropriate instead of on the operator node.
* Fix LLVM codegen for slice_elem_val, which had the wrong logic when
the operand was not a pointer.
As noted in the comment in the implementation of deleteUnusedDecl, a
future improvement will be to rework the frontend/linker interface to
remove the frontend's responsibility of calling allocateDeclIndexes.
I discovered some issues with the plan9 linker backend that are related
to this, and worked around them for now.
|
|
Before calling populateTestFunctions() we want to check
totalErrorCount(), but that reads from tables which may still be getting
populated by the thread pool's C compilation tasks. So we wait until
all those tasks are finished before proceeding.
|
|
Frontend improvements:
* When compiling in `zig test` mode, put a task on the work queue to
analyze the main package root file. Normally, start code does
`_ = @import("root");` to make Zig analyze the user's code; however, in
the case of `zig test`, the root source file is the test runner.
Without this change, no tests are picked up (see the example at the end
of this entry).
* In the main pipeline, once semantic analysis is finished, if there
are no compile errors, populate the `test_functions` Decl with the
set of test functions picked up from semantic analysis.
* Value: add `array` and `slice` Tags.
LLVM backend improvements:
* Fix incremental updates of globals. Previously the
value of a global would not get replaced with a new value.
* Fix the LLVM type of arrays. The ABI size was incorrectly being
passed as the element count.
* Remove the FuncGen parameter from genTypedValue. This function is for
generating global constants and there is no function available when
it is being called.
- The `ref_val` case is now commented out. I'd like to eliminate
`ref_val` as one of the possible Value Tags. Instead it should
always be done via `decl_ref`.
* Implement constant value generation for slices, arrays, and structs.
* Constant value generation for functions supports the `decl_ref` tag.
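As a concrete example of what gets picked up: a user file like the following
(purely illustrative) has its `test` declarations collected into the
`test_functions` Decl once the main package root file is queued for analysis.
```zig
const std = @import("std");

// Running `zig test` on this file should discover and run the test below,
// even though the root source file handed to the compiler is the test runner.
test "addition" {
    try std.testing.expect(1 + 1 == 2);
}
```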
|
|
* There is now a main_pkg in addition to root_pkg. They are usually the
same. When using `zig test`, main_pkg is the user's source file and
root_pkg has the test runner.
* scanDecl no longer looks for test decls outside the package being
tested. Honoring `--test-filter` is still TODO.
* test runner main function has a void return value rather than
`anyerror!void`
* Sema is improved to generate better AIR for `for` loops over slices
(see the sketch at the end of this entry).
* Sema: fix incorrect capacity calculation in zirBoolBr
* Sema: add compile errors for trying to use slice fields as an lvalue.
* Sema: fix type coercion for error unions
* Sema: fix analyzeVarRef generating garbage AIR
* C codegen: fix renderValue for error unions with 0 bit payload
* C codegen: implement function pointer calls
* CLI: fix usage text
Adds 4 new AIR instructions:
* slice_len, slice_ptr: to get the ptr and len fields of a slice.
* slice_elem_val, ptr_slice_elem_val: to get an element value from
a slice, or from a pointer to a slice.
AstGen gains a new functionality:
* One of the unused flags of struct decls is now used to indicate
structs that are known to have non-zero size based on the AST alone.
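The `for`-loop-over-a-slice case referenced above boils down to code like this
sketch (names are illustrative); the loop is where the new slice_ptr,
slice_len, and slice_elem_val instructions come into play:
```zig
const std = @import("std");

// Iterating a slice now lowers to slice_ptr/slice_len to split the slice and
// slice_elem_val to load each element.
fn sum(values: []const u32) u32 {
    var total: u32 = 0;
    for (values) |v| {
        total += v;
    }
    return total;
}

pub fn main() void {
    const data = [_]u32{ 1, 2, 3, 4 };
    std.debug.print("sum = {}\n", .{sum(&data)});
}
```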
|
|
|
|
add -femit-llvm-bc CLI option and implement it, and improve -fcompiler-rt support
|
|
Fixes #9340.
|
|
The include_compiler_rt stored in the bin file options means that we need
compiler-rt symbols *somehow*. However, in the context of using the stage1 backend
we need to tell stage1 to include compiler-rt only if stage1 is the place that
needs to provide those symbols. Otherwise the stage2 infrastructure will take care
of it in the linker, by putting compiler_rt.o into a static archive, or linking
compiler_rt.a against an executable. In other words, we only want to set this
flag for stage1 if we are using `build-obj`.
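A minimal sketch of that condition, using illustrative parameter names rather
than the actual Compilation fields:
```zig
const std = @import("std");

// Whether stage1 itself must provide the compiler-rt symbols. For Exe/Lib
// outputs, the stage2 linker handles compiler_rt.o / compiler_rt.a, so only
// `build-obj` asks stage1 to pull compiler-rt in.
fn stage1NeedsCompilerRt(include_compiler_rt: bool, output_mode: std.builtin.OutputMode) bool {
    return include_compiler_rt and output_mode == .Obj;
}

pub fn main() void {
    std.debug.print("{}\n", .{stage1NeedsCompilerRt(true, .Obj)});
}
```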
|
|
When using `build-exe` or `build-lib -dynamic`, `-fcompiler-rt` means building
compiler-rt into a static library and then linking it into the executable.
When using `build-lib`, `-fcompiler-rt` means building compiler-rt into an
object file and then adding it into the static archive.
Before this commit, when using `build-obj`, zig would build compiler-rt
into an object file, and then on ELF, use `lld -r` to merge it into the
main object file. Other linker backends of LLD do not support `-r` to
merge objects, so this failed with error messages for those targets.
Now, `-fcompiler-rt` when used with `build-obj` acts as if the user puts
`_ = @import("compiler_rt");` inside their root source file. The symbols
of compiler-rt go into the same compilation unit as the root source file.
This is hooked up for stage1 only for now. Once stage2 is capable of
building compiler-rt, it should be hooked up there as well.
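In other words, under `build-obj` the flag behaves roughly as if the following
were present in the root source file (here wrapped in a comptime block to make
it valid syntax; `compiler_rt` is the compiler-provided package, not something
a user project would normally have mapped):
```zig
comptime {
    // Forces the compiler-rt symbols into this compilation unit, instead of
    // producing a separate compiler-rt object to merge in afterwards.
    _ = @import("compiler_rt");
}
```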
|
|
* Added doc comments for `std.Target.ObjectFormat` enum
* `std.Target.oFileExt` is removed because it is incorrect for Plan 9
targets. Instead, use `std.Target.ObjectFormat.fileExt` and pass a
CPU architecture (see the sketch at the end of this entry).
* Added `Compilation.Directory.joinZ` for when a null byte is desired.
* Improvements to `Compilation.create` logic for computing `use_llvm`
and reporting errors in contradictory flags. `-femit-llvm-ir` and
`-femit-llvm-bc` will now imply `-fLLVM`.
* Fix compilation when passing `.bc` files on the command line.
* Improvements to the stage2 LLVM backend:
- cleaned up error messages and error reporting. Properly bubble up
some errors rather than dumping to stderr; others turn into panics.
- properly call ZigLLVMCreateTargetMachine and
ZigLLVMTargetMachineEmitToFile and implement calculation of the
respective parameters (cpu features, code model, abi name, lto,
tsan, etc).
- LLVM module verification only runs in debug builds of the compiler
- use LLVMDumpModule rather than printToString: if we incorrectly pass a
null pointer to LLVM, it may crash while dumping the module, and having
the module partially printed is helpful in that case.
- support -femit-asm, -fno-emit-bin, -femit-llvm-ir, -femit-llvm-bc
- Support LLVM backend when used with Mach-O and WASM linkers.
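Regarding the `fileExt` change mentioned above, usage looks roughly like this;
the exact signature and return type are assumptions based on that description:
```zig
const std = @import("std");

pub fn main() void {
    // fileExt needs the CPU architecture because, for example, Plan 9 object
    // file extensions differ per architecture.
    const ext = std.Target.ObjectFormat.coff.fileExt(.x86_64);
    std.debug.print("extension: {s}\n", .{ext});
}
```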
|
|
|
|
Portable Executable is an executable format, not an object format.
Everywhere in the entire zig codebase, we treated coff and pe as if they
were the same. Remove confusion by not including pe in the
std.Target.ObjectFormat enum.
|
|
It incorrectly did not process the death of its operand. Additionally:
* delete dead code accidentally introduced in fe14e339458a578657f3890f00d654a15c84422c
* improve AIR printing code to include liveness data for operands.
Now an exclamation point ("!") indicates the tombstone of an AIR
instruction.
|
|
Previously we had codegen_decl for both constant values as well as
function bodies. A recent commit updated the linker backends to add
updateFunc as a separate function from updateDecl, and now this commit
does the same with work queue tasks.
The frontend now distinguishes between function pointers and function
bodies.
|
|
* some instructions are not implemented yet
* fix off-by-1 in Air.getMainBody
* Compilation: use `@import("builtin")` rather than `std.builtin`
for the values that differ between build configurations (see the
sketch at the end of this entry).
* Sema: avoid calling `addType` in between
air_instructions.ensureUnusedCapacity and corresponding
appendAssumeCapacity because it can possibly add an instruction.
* Value: functions print their names
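Regarding the `@import("builtin")` versus `std.builtin` distinction above, a
minimal sketch:
```zig
const std = @import("std");
const builtin = @import("builtin");

pub fn main() void {
    // Per-build-configuration values live in @import("builtin")...
    std.debug.print("optimize mode: {s}, single-threaded: {}\n", .{
        @tagName(builtin.mode), builtin.single_threaded,
    });
    // ...while std.builtin holds the shared type declarations, e.g. the enum
    // type behind builtin.mode.
}
```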
|
|
Now you can pass `.unneeded` for a `LazySrcLoc` and if there ended up
being a compile error that needed it, you'll get
`error.NeededSourceLocation`.
Callsites can now exploit this error to do the expensive computation
to produce a source location object and then repeat the operation.
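The call-site pattern looks roughly like the sketch below; `analyzeOp` and
`expensiveSrcLoc` are hypothetical stand-ins, with `.unneeded` modeled as a
null source location, just to show the try-then-retry shape.
```zig
const std = @import("std");

const SrcLoc = ?u32; // ".unneeded" is modeled as null here

fn analyzeOp(src: SrcLoc, operand: u32) error{NeededSourceLocation}!u32 {
    if (operand == 0) {
        // This path wants to emit a compile error, which needs a source location.
        if (src == null) return error.NeededSourceLocation;
        return 0;
    }
    return operand * 2;
}

fn expensiveSrcLoc() u32 {
    return 42; // pretend this re-walks the AST to compute a precise location
}

pub fn main() void {
    // Try the cheap path first; only build a real source location on demand.
    const result = analyzeOp(null, 0) catch |err| switch (err) {
        error.NeededSourceLocation => analyzeOp(expensiveSrcLoc(), 0) catch unreachable,
    };
    std.debug.print("result: {}\n", .{result});
}
```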
|
|
Now the branch is compiling again, provided that one uses
`-Dskip-non-native`, but many code paths are disabled. The code paths
can now be re-enabled one at a time and updated to conform to the new
AIR memory layout.
|
|
Air and Liveness are now handed to the link infrastructure, instead of being
stored with Module.Fn. This
moves towards a strategy to make more efficient use of memory by not
storing Air or Liveness data in the Fn struct, but computing it on
demand, immediately sending it to the backend, and then immediately
freeing it.
Backends which want to defer codegen until flush() such as SPIR-V
must move the Air/Liveness data upon `updateFunc` being called and keep
track of that data in the backend implementation itself.
|
|
also do the inline assembly instruction
|
|
- Changed the ZIR encoding of `import` metadata from storing instruction
indexes to storing token indexes.
|
|
Invoke `linkAsArchive` directly in MachO backend when LLVM is available
and we are asked to create a static lib.
|
|
Signed-off-by: Takeshi Yoneda <takeshi@tetrate.io>
|
|
Partially addresses #9203. It fixes the first case, but not the second
one mentioned in the issue.
|
|
By requiring the source file to be null-terminated, we avoid extra
branching while simplifying the logic at the same time.
Running ast-check on a large zig source file (udivmodti4_test.zig),
master branch compared to this commit:
* 4% faster wall clock
* 7% fewer cache misses
* 1% fewer branches
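A rough sketch of why the sentinel helps (an illustrative scanning function,
not the actual tokenizer): with a `[:0]const u8` source, the loop can stop on
the 0 byte alone instead of also comparing an index against the length.
```zig
const std = @import("std");

// The 0 sentinel doubles as the end-of-input marker, so the hot loop has a
// single condition instead of a bounds check plus a content check.
fn countNonSpace(source: [:0]const u8) usize {
    var i: usize = 0;
    var count: usize = 0;
    while (source[i] != 0) : (i += 1) {
        if (source[i] != ' ') count += 1;
    }
    return count;
}

pub fn main() void {
    std.debug.print("{}\n", .{countNonSpace("const x = 1;")});
}
```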
|
|
The motivation for this commit is that there exist source files which
produce ast-check errors but crash stage1 or otherwise trigger stage1
bugs. Prior to this commit, Zig would run AstGen, collect the
compile errors, run stage1, report stage1 compile errors and exit if
any, and then report AstGen compile errors.
The main change in this commit is to report AstGen errors prior to
invoking stage1, and in fact if any AstGen errors occur, do not invoke
stage1 at all.
This caused most of the compile error tests to fail due to things such
as unused local variables and mismatched stage1/stage2 error messages.
It was taking a long time to update the test cases one-by-one, so I
took this opportunity to unify the stage1 and stage2 testing harness,
specifically with regards to compile errors. In this way we can start
keeping track of which tests pass for 1, 2, or both.
`zig build test-compile-errors` no longer works; it is now integrated
into `zig build test-stage2`.
This is one step closer to executing compile error tests in parallel; in
fact the ThreadPool object is already in scope.
There are some cases where the stage1 compile errors were actually
better; those are left failing in this commit, to be addressed in a
follow-up commit.
Other changes in this commit:
* build.zig: improve support for -Dstage1 used with the test step.
* AstGen: minor cosmetic changes to error messages.
* stage2: add -fstage1 and -fno-stage1 flags. This now allows one to
download a binary of the zig compiler and use the llvm backend of
self-hosted. This was also needed for hooking up the test harness.
However, I realized that stage1 calls exit() and also has memory
leaks, so I had to complicate the test harness by not using this flag
after all and instead invoking the compiler as a child process.
- These CLI flags will disappear once we start shipping the
self-hosted compiler as the main compiler. Until then, they can be
used to try out the work-in-progress stage2.
* stage2: select the LLVM backend by default for release modes, as long
as the target architecture is supported by LLVM.
* test harness: support setting the optimize mode
|
|
|
|
* Add command line help for "-mexec-model"
* Define WasmExecModel enum in std.builtin.
* Drop the support for the old crt1.o in favor of crt1-command.o
Signed-off-by: Takeshi Yoneda <takeshi@tetrate.io>
|
|
zig ld can create dylibs now; remove system linker hack and any mention of ld64.lld from the codebase
|
|
This logic was a workaround to prevent cache deadlocks which happened
from always using exclusive file locks. Now that the Cache system
supports sharing cached artifacts, this workaround is no longer needed.
Closes #7596
|
|
|
|
This reverts commit 15a030ef3d2d68992835568f2fb9d5deab18f39f.
ast-check started crashing after this commit. Needs more QA before
merging.
|
|
|
|
This feature is necessary for cross-compiling code that dynamically
links system libraries, at least with the current feature set of lld.
|
|
|
|
Calling processOutdatedAndDeletedDecls() should not happen when using
the stage1 backend. The problem is solved by checking a simple flag.
|
|
Normally we rely on importing std, whose start code in turn imports the
root file, but when using the stage1 backend this won't happen,
so in order to run AstGen on the root we put it into the
import_table here.
|
|
There is now a distinction between `@import` with a .zig extension and
without. Without a .zig extension, the argument is assumed to be a
package name, and error.PackageNotFound is returned if it is not mapped
into the package table.
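Illustratively (assuming a sibling file `helper.zig` and a package `mypkg`
mapped in by the build script; both names are placeholders):
```zig
// A .zig extension means a file path, relative to the importing file.
const helper = @import("helper.zig");

// No extension means a package name; if "mypkg" is not mapped into the
// package table, this now fails with error.PackageNotFound.
const mypkg = @import("mypkg");
```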
|
|
|
|
This change reduces the amount of divergence in the compiler's main
pipeline logic enough to run AstGen for all files in the compilation,
regardless of whether the stage1 or stage2 backend is being used.
Practically, this means that all Zig code is subject to new compile
errors, such as unused local variables.
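For example, code like the following now hits an AstGen-level error on every
backend unless the value is explicitly discarded:
```zig
pub fn main() void {
    const x: u32 = 123;
    // Without the discard below, AstGen reports "error: unused local constant"
    // before stage1 or stage2 codegen ever runs.
    _ = x;
}
```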
Additionally:
* remove leftover unsound asserts from recent hash map changes
* fix sub-Compilation errors not indenting correctly
|
|
|
Previously, Zig did not properly communicate the target CPU features for
RISC-V to the Clang assembler, because Clang has different ways of
passing CPU features for C code and for assembly code. This commit makes
Zig pass a RISC-V -march flag in order to communicate CPU features to
Clang when compiling assembly files.
|
|
* stage1 backend allows configuring the uwtables function attr
via a flag rather than its own logic.
* stage2 defaults to enabling the uwtable attr when
linking libunwind, or always on Windows
* stage2 makes link_eh_frame_hdr true automatically if uwtable
attr is set to be on for zig functions
* CLI: add -funwind-tables and -fno-unwind-tables to allow the user to
override the defaults.
* hook it up to `zig cc`
closes #9046
|