| Age | Commit message | Author |
|
This includes function aliases, but not function declarations.
Also, re-introduce a target check for function alignment which was
inadvertently removed in the prior commit.
|
|
Resolves: #22261
|
|
The error messages here aren't amazing yet, but this is an improvement
on the status quo, because the current behavior allows false-negative
compile errors, which effectively amount to miscompilations.
Resolves: #15874
|
|
|
|
|
|
compiler: allow semantic analysis of files with AstGen errors
|
|
It currently prints as:
:3:18: error: untagged union 'Zcu.LazySrcLoc{ .base_node_inst = InternPool.TrackedInst.Index(104), .offset = Zcu.LazySrcLoc.Offset{ .node_offset = Zcu.LazySrcLoc.Offset.TracedOffset{ .x = -2, .trace = (value tracing disabled) } } }' cannot be converted to integer
|
|
This commit enhances AstGen to introduce a form of error resilience
which allows valid ZIR to be emitted even when AstGen errors occur.
When a non-fatal AstGen error (e.g. `appendErrorNode`) occurs, ZIR
generation is not affected; the error is added to `astgen.errors` and
ultimately to the errors stored in `extra`, but that doesn't stop us
getting valid ZIR. Fatal AstGen errors (e.g. `failNode`) are a bit
trickier. These errors return `error.AnalysisFail`, which is propagated
up the stack. In theory, any parent expression can catch this error and
handle it, continuing ZIR generation whilst throwing away whatever was
lost. For now, we only do this in one place: when creating declarations.
If a call to `fnDecl`, `comptimeDecl`, `globalVarDecl`, etc., returns
`error.AnalysisFail`, the `declaration` instruction is still created,
but its body simply contains the new `extended(astgen_error())`
instruction, which instructs Sema to terminate semantic analysis with a
transitive error. This means that a fatal AstGen error causes the
innermost declaration containing the error to fail, but the rest of the
file remains intact.
If a source file contains parse errors, or an `error.AnalysisFail`
happens when lowering the top-level struct (e.g. there is an error in
one of its fields, or a name has multiple declarations), then lowering
for the entire file fails. Alongside the existing `Zir.hasCompileErrors`
query, this commit introduces `Zir.loweringFailed`, which returns `true`
only in this case.
The end result here is that files with AstGen failures will almost
always still emit valid ZIR, and hence can undergo semantic analysis on
the parts of the file which are (from AstGen's perspective) valid. This
is a noteworthy improvement to UX, but the main motivation here is
actually incremental compilation. Previously, AstGen failures caused
lots of semantic analysis work to be thrown out, because all `AnalUnit`s
in the file required re-analysis so as to trigger necessary transitive
failures and remove stored compile errors which would no longer make
sense (because a fresh compilation of this code would not emit those
errors, as the units those errors applied to would fail sooner due to
referencing a failed file). Now, this case only applies when a file has
severe top-level errors, which is far less common than something like
having an unused variable.
Lastly, this commit changes a few errors in `AstGen` to become fatal
when they were previously non-fatal and vice versa. If there is still a
reasonable way to continue AstGen and lower to ZIR after an error, it is
non-fatal; otherwise, it is fatal. For instance, `comptime const`, while
redundant syntax, has a clear meaning we can lower; on the other hand,
using an undeclared identifier has no sane lowering, so must trigger a
fatal error.
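A hypothetical file sketching the effect (names are mine, not from the
test suite):

    fn bad() void {
        _ = undeclared_name; // fatal AstGen error: use of undeclared identifier;
                             // this declaration's body becomes `extended(astgen_error())`
    }
    pub fn good(x: u32) u32 {
        comptime const limit = 100; // non-fatal: redundant `comptime`, still lowered
        return @min(x, limit); // `good` still gets valid ZIR and can be analyzed
    }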
|
|
|
|
This leaves the design space for this alignment value open, e.g. for packing.
|
|
...and fix a minor x86_64 backend bug exposed by this fix.
Resolves: #21974
|
|
It's an FQN, not an actual file name.
|
|
|
|
This commit reworks how anonymous struct literals and tuples work.
Previously, an untyped anonymous struct literal
(e.g. `const x = .{ .a = 123 }`) was given an "anonymous struct type",
which is a special kind of struct which coerces using structural
equivalence. This mechanism was a holdover from before we used
RLS / result types as the primary mechanism of type inference. This
commit changes the language so that the type assigned here is a "normal"
struct type. It uses a form of equivalence based on the AST node and the
type's structure, much like a reified (`@Type`) type.
Additionally, tuples have been simplified. The distinction between
"simple" and "complex" tuple types is eliminated. All tuples, even those
explicitly declared using `struct { ... }` syntax, use structural
equivalence, and do not undergo staged type resolution. Tuples are very
restricted: they cannot have non-`auto` layouts, cannot have aligned
fields, and cannot have default values with the exception of `comptime`
fields. Tuples currently do not have optimized layout, but this can be
changed in the future.
This change simplifies the language, and fixes some problematic
coercions through pointers which led to unintuitive behavior.
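A rough illustration of the new behavior (example mine, not from the
changeset):

    // Untyped anonymous struct literal: now a "normal" struct type, keyed on
    // the AST node and the type's structure (much like a reified type).
    const x = .{ .a = @as(u32, 123) };

    // Tuples always use structural equivalence, even with explicit syntax:
    const Pair = struct { u32, bool };
    const p: Pair = .{ 42, true };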
Resolves: #16865
|
|
spirv: vulkan setup
|
|
Also, start using labeled switch statements when dispatching
maybe-runtime instructions like condbr to comptime-only variants like
condbr_inline.
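Roughly, the dispatch pattern looks like this (simplified sketch with
made-up names, not the actual Sema code):

    const Tag = enum { condbr, condbr_inline, other };

    fn dispatch(tag: Tag, is_comptime: bool) void {
        sw: switch (tag) {
            .condbr => {
                // re-dispatch to the comptime-only variant without re-looping
                if (is_comptime) continue :sw .condbr_inline;
                // ... runtime condbr handling ...
            },
            .condbr_inline => {
                // ... comptime-only handling ...
            },
            .other => {},
        }
    }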
This can't be merged until we get a zig1.wasm update due to #21385.
Resolves: #21405
|
|
The print order of error sets depends on the order that the compiler
adds names to its internal state. These names can be anything, and
do not necessarily need to be from the same error set or be errors
at all. When the last remaining reference to builtin.cpu.arch was
removed in start.zig in 9b42bc1ce5, this order changed. Likely there
is something that has the name 'C' that is referenced somewhere
recursively from builtin.cpu.arch.
All of this causes these few tests to fail, so the expected order is
simply updated for now. Perhaps there is a better way to handle this.
|
|
Under some architecture/operating system combinations it is forbidden
to return a pointer from a merge, as these pointers must point to a
location known at compile time. This adds a check for those cases when
returning a pointer from a block merge.
|
|
Some improvements to the compiler's handling of function alignment
|
|
Closes #21159
|
|
Add `is_dll_import` to @extern, to support `__declspec(dllimport)` with the MSVC ABI
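A usage sketch (hypothetical symbol; the new option name is taken from
the title, the rest of `@extern` is unchanged):

    fn importedCounter() *u32 {
        // corresponds to a `__declspec(dllimport)` global when targeting the MSVC ABI
        return @extern(*u32, .{ .name = "some_dll_counter", .is_dll_import = true });
    }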
|
|
- tests/standalone/extern wasn't running its test step
- add compile error tests for thread local / dll import @extern in a comptime scope
|
|
|
|
|
|
This also includes some compiler and std changes to correct error
messages which weren't properly updated before.
|
|
Resolves: #21702
|
|
coercing `undefined`
This just reorders logic in `coerceExtra` to check for a return inside a
`noreturn` function before coercing `undefined` to anything.
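Hypothetical reproduction of the case affected by the reordering:

    fn f() noreturn {
        return undefined; // now hits the returning-in-`noreturn` check before
                          // any attempt to coerce `undefined` to the result type
    }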
|
|
|
|
Resolves: #20433
|
|
Resolves: #20365
|
|
|
|
closes #11650
|
|
|
|
see #17969
|
|
closes #21417
|
|
Resolves: #10703
Resolves: #17798
|
|
Resolves: #21392
|
|
|
|
compiler: implement labeled switch/continue
|
|
|
|
Resolves: #9938
|
|
|
|
|
|
The compiler actually doesn't need any functional changes for this: Sema
does reification based on the tag indices of `std.builtin.Type` already!
So, no zig1.wasm update is necessary.
This change is necessary to disallow name clashes between fields and
decls on a type, which is a prerequisite of #9938.
|
|
|
|
|
|
|
|
closes #72
|
|
The type `Zcu.Decl` in the compiler is problematic: over time it has
gained many responsibilities. Every source declaration, container type,
generic instantiation, and `@extern` has a `Decl`. The functions of
these `Decl`s are in some cases entirely disjoint.
After careful analysis, I determined that the two main responsibilities
of `Decl` are as follows:
* A `Decl` acts as the "subject" of semantic analysis at comptime. A
single unit of analysis is either a runtime function body, or a
`Decl`. It registers incremental dependencies, tracks analysis errors,
etc.
* A `Decl` acts as a "global variable": a pointer to it is consistent,
and it may be lowered to a specific symbol by the codegen backend.
This commit eliminates `Decl` and introduces new types to model these
responsibilities: `Cau` (Comptime Analysis Unit) and `Nav` (Named
Addressable Value).
Every source declaration, and every container type requiring resolution
(so *not* including `opaque`), has a `Cau`. For a source declaration,
this `Cau` performs the resolution of its value. (When #131 is
implemented, it is an open question whether type and value resolution will share
a `Cau` or have two distinct `Cau`s.) For a type, this `Cau` is the
context in which type resolution occurs.
Every non-`comptime` source declaration, every generic instantiation,
and every distinct `extern` has a `Nav`. These are sent to codegen/link:
the backends by definition do not care about `Cau`s.
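An illustrative mapping from source constructs to the new units
(example mine):

    // decl `S`: Cau + Nav; the struct type itself also has its own Cau,
    // which is the context in which its type resolution occurs.
    const S = struct { x: u32 };

    // non-`comptime` declaration: a Cau resolves its value, and a Nav is the
    // named value handed to codegen/link.
    pub var counter: u32 = 0;

    // `comptime` declaration: Cau only; the backends never see it.
    comptime {
        _ = S;
    }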
This commit has some minor technically-breaking changes surrounding
`usingnamespace`. I don't think they'll impact anyone, since the changes
are fixes around semantics which were previously inconsistent (the
behavior changed depending on hashmap iteration order!).
Aside from that, this changeset has no significant user-facing changes.
Instead, it is an internal refactor which makes it easier to correctly
model the responsibilities of different objects, particularly regarding
incremental compilation. The performance impact should be negligible,
but I will take measurements before merging this work into `master`.
Co-authored-by: Jacob Young <jacobly0@users.noreply.github.com>
Co-authored-by: Jakub Konka <kubkon@jakubkonka.com>
|
|
I pointed a fuzzer at the tokenizer and it crashed immediately. Upon
inspection, I was dissatisfied with the implementation. This commit
removes several mechanisms:
* Removes the "invalid byte" compile error note.
* Dramatically simplifies tokenizer recovery by making recovery always
occur at newlines, and never otherwise.
* Removes UTF-8 validation.
* Moves some character validation logic to `std.zig.parseCharLiteral`.
Removing UTF-8 validation is a regression of #663; however, the existing
implementation was already buggy. When adding this functionality back,
it must be fuzz-tested while checking the property that it matches an
independent Unicode validation implementation on the same file. While
we're at it, fuzzing should check the other properties of that proposal,
such as no ASCII control characters existing inside the source code.
Other changes included in this commit:
* Deprecate `std.unicode.utf8Decode` and its WTF-8 counterpart. This
function has an awkward API that is too easy to misuse.
* Make `utf8Decode2` and friends use arrays as parameters, eliminating a
runtime assertion in favor of using the type system.
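The array-parameter change in that last point, roughly (hypothetical
helper, not the std implementation):

    // Taking `[2]u8` instead of `[]const u8` moves the length requirement
    // from a runtime assertion into the type system.
    fn decode2(bytes: [2]u8) u21 {
        return (@as(u21, bytes[0] & 0b0001_1111) << 6) | (bytes[1] & 0b0011_1111);
    }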
After this commit, the input found by fuzzing, which was
"\x07\xd5\x80\xc3=o\xda|a\xfc{\x9a\xec\x91\xdf\x0f\\\x1a^\xbe;\x8c\xbf\xee\xea"
no longer causes a crash. However, I did not feel the need to add this
test case because the simplified logic eradicates most crashes of this
nature.
|