`LazySrcLoc` now stores a reference to the "base AST node" to which it
is relative. The previous tagged union is `LazySrcLoc.Offset`. To make
working with this structure convenient, `Sema.Block` contains a
convenience `src` method which takes an `Offset` and returns a
`LazySrcLoc`.
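For illustration, here is a minimal, self-contained sketch of the shape
described above; the type and field names are assumptions for this sketch,
not the compiler's actual definitions:

```zig
const TrackedInstIndex = enum(u32) { _ }; // stand-in for a tracked ZIR instruction index

const LazySrcLoc = struct {
    /// The "base AST node": a tracked `declaration`/`struct_decl`/`union_decl`/
    /// `enum_decl`/`opaque_decl` instruction that the offset is relative to.
    base_node_inst: TrackedInstIndex,
    /// The previous tagged union, now stored relative to `base_node_inst`.
    offset: Offset,

    const Offset = union(enum) {
        /// Only used to assert that an error path is unreachable.
        unneeded,
        /// AST node offset from the base node.
        node_offset: i32,
        // ...more tags...
    };
};

const Block = struct {
    src_base_inst: TrackedInstIndex,

    /// Convenience: combine the block's base node with a relative offset.
    fn src(block: *const Block, offset: LazySrcLoc.Offset) LazySrcLoc {
        return .{ .base_node_inst = block.src_base_inst, .offset = offset };
    }
};
```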
The "base node" of a source location is no longer given by a `Decl`, but
rather a `TrackedInst` representing either a `declaration`,
`struct_decl`, `union_decl`, `enum_decl`, or `opaque_decl`. This is a
more appropriate model, and removes an unnecessary responsibility from
`Decl` in preparation for the upcoming refactor which will split it into
`Nav` and `Cau`.
As a part of these `Decl` reworks, the `src_node` field is eliminated.
This change aids incremental compilation, and simplifies `Decl`. In some
cases -- particularly in backends -- the source location of a
declaration is desired. This was previously `Decl.srcLoc` and worked for
any `Decl`. Now, it is `Decl.navSrcLoc` in reference to the upcoming
refactor, since the set of `Decl`s this works for precisely corresponds
to what will in future become a `Nav` -- that is, source-level
declarations and generic function instantiations, but *not* type owner
Decls.
This commit introduces more tags to `LazySrcLoc.Offset` so as to
eliminate the concept of `error.NeededSourceLocation`. Now, `.unneeded`
should only be used to assert that an error path is unreachable. In the
future, uses of `.unneeded` can probably be replaced with `undefined`.
The `src_decl` field of `Sema.Block` no longer has a role in type
resolution. Its main remaining purpose is to handle namespacing of type
names. It will be eliminated entirely in a future commit to remove
another undue responsibility from `Decl`.
It is worth noting that in future, the `Zcu.SrcLoc` type should probably
be eliminated entirely in favour of storing `Zcu.LazySrcLoc` values.
This is because `Zcu.SrcLoc` is not valid across incremental updates,
and we want to be able to reuse error messages from previous updates
even if the source file in question changed. The error reporting logic
should instead simply resolve the location from the `LazySrcLoc` on the
fly.
|
Deprecated aliases that are now compile errors:
- `std.fs.MAX_PATH_BYTES` (renamed to `std.fs.max_path_bytes`)
- `std.mem.tokenize` (split into `tokenizeAny`, `tokenizeSequence`, `tokenizeScalar`; see the migration sketch after this list)
- `std.mem.split` (split into `splitSequence`, `splitAny`, `splitScalar`)
- `std.mem.splitBackwards` (split into `splitBackwardsSequence`, `splitBackwardsAny`, `splitBackwardsScalar`)
- `std.unicode`
+ `utf16leToUtf8Alloc`, `utf16leToUtf8AllocZ`, `utf16leToUtf8`, `fmtUtf16le` (all renamed to have capitalized `Le`)
+ `utf8ToUtf16LeWithNull` (renamed to `utf8ToUtf16LeAllocZ`)
- `std.zig.CrossTarget` (moved to `std.Target.Query`)
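As a concrete example, a minimal migration sketch for one of the `std.mem`
renames above (the other renamed split/tokenize functions follow the same
pattern):

```zig
const std = @import("std");

test "tokenizeScalar replaces the old std.mem.tokenize" {
    // Before (now a compile error): var it = std.mem.tokenize(u8, "a,b,,c", ",");
    var it = std.mem.tokenizeScalar(u8, "a,b,,c", ',');
    try std.testing.expectEqualStrings("a", it.next().?);
    try std.testing.expectEqualStrings("b", it.next().?);
    try std.testing.expectEqualStrings("c", it.next().?);
    try std.testing.expect(it.next() == null);
}
```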
Deprecated `lib/std/std.zig` decls were deleted rather than turned into a `@compileError`, because the `refAllDecls` in the test block would trigger the `@compileError`. The deleted top-level `std` namespaces are:
- `std.rand` (renamed to `std.Random`)
- `std.TailQueue` (renamed to `std.DoublyLinkedList`)
- `std.ChildProcess` (renamed/moved to `std.process.Child`)
This is not exhaustive. Deprecated aliases that I didn't touch:
+ `std.io.*`
+ `std.Build.*`
+ `std.builtin.Mode`
+ `std.zig.c_translation.CIntLiteralRadix`
+ anything in `src/`
|
* test non-ObjC literal deduping logic
|
This one is even harder to document than the last large overhaul.
TL;DR:
- split Emit.zig apart into an Emit.zig and a Lower.zig
- created separate files for the encoding; adding a new instruction is now
  as simple as adding it to a couple of switch statements and providing the encoding
- relocs are handled in a saner manner, and we now have a clear defining
  boundary between lea_symbol and load_symbol
- a lot of different abstractions for things like the stack, memory, registers, and others
- we're using x86_64's FrameIndex now, which simplifies a lot of the tougher parts of the design
- a lot more that I don't have the energy to document. at this point, just read the commit itself :p
|
* rename .xros to .visionos as agreed in the tracking issue
* add support for VisionOS platform in the MachO linker
|
This should not be a public API, and the x86 backend does not support
the value 16.
|
This function accepts a WaitGroup parameter and manages the reference
counting therein. It is also infallible.
The existing `spawn` function is still handy when the job wants to
further schedule more tasks.
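A usage sketch, assuming the new function is `std.Thread.Pool.spawnWg` (the
name is not given in this excerpt, so treat it as an assumption):

```zig
const std = @import("std");

fn square(n: usize, out: *usize) void {
    out.* = n * n;
}

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();

    var pool: std.Thread.Pool = undefined;
    try pool.init(.{ .allocator = gpa.allocator() });
    defer pool.deinit();

    var results: [8]usize = undefined;
    var wg: std.Thread.WaitGroup = .{};
    for (&results, 0..) |*slot, i| {
        // No error to handle; the WaitGroup counter is started/finished by the pool.
        pool.spawnWg(&wg, square, .{ i, slot });
    }
    pool.waitAndWork(&wg);
    std.debug.print("{any}\n", .{results});
}
```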
|
whereas on NetBSD, only 2 PT_LOAD segments are usually produced by other compilers
(tested with host gcc and clang).
$ ldd -v main_4segs
.../main_4segs: wrong number of segments (4 != 2)
.../main_4segs: invalid ELF class 2; expected 1
|
* Fix the ELF binaries for freestanding targets created with the self-hosted linker.
The ELF specification (generic ABI) states that "loadable process segments must have congruent
values for p_vaddr and p_offset, modulo the page size". Linux refuses to load binaries that
don't meet this requirement (execve() fails with EINVAL).
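A small sketch of the congruence rule being referenced (illustrative only,
not code from the linker):

```zig
// For every PT_LOAD program header, p_vaddr and p_offset must be congruent
// modulo the page size, otherwise execve() rejects the image with EINVAL.
fn loadSegmentIsCongruent(p_vaddr: u64, p_offset: u64, page_size: u64) bool {
    return p_vaddr % page_size == p_offset % page_size;
}

test "congruent vs non-congruent PT_LOAD placement" {
    const std = @import("std");
    try std.testing.expect(loadSegmentIsCongruent(0x201000, 0x1000, 0x1000));
    try std.testing.expect(!loadSegmentIsCongruent(0x201800, 0x1000, 0x1000));
}
```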
|
This patch renames `ComptimeStringMap` to `StaticStringMap`, makes it
accept only a single type parameter, and makes it return a known struct
type instead of an anonymous struct. The initial motivation for these
changes was to reduce the 'very long type names' issue described here:
https://github.com/ziglang/zig/pull/19682.
This breaks the previous API. Users will now need to write:
`const map = std.StaticStringMap(T).initComptime(kvs_list);`
(see the usage sketch after this list).
* move `kvs_list` param from type param to an `initComptime()` param
* new public methods
  * `keys()`, `values()` helpers
  * `init(allocator)`, `deinit(allocator)` for runtime data
  * `getLongestPrefix(str)`, `getLongestPrefixIndex(str)` - I'm not sure
    these belong but have left them in for now in case they are deemed useful
* performance notes:
  * I posted some benchmarking results here:
    https://github.com/travisstaloch/comptime-string-map-revised/issues/1
  * I noticed a speedup from reducing the size of the struct from 48 to 32
    bytes, using u32s instead of usize for all length fields
  * I noticed a speedup from storing KVs as a struct of arrays
  * the latest benchmark shows these wall_time improvements for
    debug/safe/small/fast builds: -6.6% / -10.2% / -19.1% / -8.9%; full
    output in the link above
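A minimal usage sketch of the new call pattern described above:

```zig
const std = @import("std");

const Color = enum { red, green, blue };

// kvs_list now goes to initComptime() instead of a type parameter.
const color_map = std.StaticStringMap(Color).initComptime(.{
    .{ "red", .red },
    .{ "green", .green },
    .{ "blue", .blue },
});

test "StaticStringMap lookup" {
    try std.testing.expectEqual(Color.green, color_map.get("green").?);
    try std.testing.expect(color_map.get("purple") == null);
    try std.testing.expectEqual(@as(usize, 3), color_map.keys().len);
}
```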
|
link/elf: implement string merging
|
Linux isn't the only OS that `mmap`s segments.
|
We've got a big one here! This commit reworks how we represent pointers
in the InternPool, and rewrites the logic for loading and storing from
them at comptime.
Firstly, the pointer representation. Previously, pointers were
represented in a highly structured manner: pointers to fields, array
elements, etc, were explicitly represented. This works well for simple
cases, but is quite difficult to handle in the cases of unusual
reinterpretations, pointer casts, offsets, etc. Therefore, pointers are
now represented in a more "flat" manner. For types without well-defined
layouts -- such as comptime-only types, automatic-layout aggregates, and
so on -- we still use this "hierarchical" structure. However, for types
with well-defined layouts, we use a byte offset associated with the
pointer. This allows the comptime pointer access logic to deal with
reinterpreted pointers far more gracefully, because the "base address"
of a pointer -- for instance a `field` -- is a single value which
pointer accesses cannot exceed since the parent has undefined layout.
This strategy is also more useful to most backends -- see the updated
logic in `codegen.zig` and `codegen/llvm.zig`. For backends which do
prefer a chain of field and elements accesses for lowering pointer
values, such as SPIR-V, there is a helpful function in `Value` which
creates a strategy to derive a pointer value using ideally only field
and element accesses. This is actually more correct than the previous
logic, since it correctly handles pointer casts which, after the dust
has settled, end up referring exactly to an aggregate field or array
element.
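A heavily simplified sketch of the flat representation described above; the
names and shapes here are illustrative assumptions, not the actual InternPool
encoding:

```zig
// Every pointer value is a base address plus a byte offset. The byte offset is
// only used when the types involved have well-defined layouts; when a parent
// type has no defined layout, the base address itself stays hierarchical
// (e.g. "field N of some other pointer"), and accesses cannot reach outside it.
const Ptr = struct {
    base: BaseAddr,
    byte_offset: u64,
};

const BaseAddr = union(enum) {
    /// A named declaration / global.
    decl: u32,
    /// Field of an aggregate without a well-defined layout, so it cannot be
    /// folded into `byte_offset`.
    field: struct { parent: *const Ptr, index: u32 },
};
```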
In terms of the pointer access code, it has been rewritten from the
ground up. The old logic had become rather a mess of special cases being
added whenever bugs were hit, and was still riddled with bugs. The new
logic was written to handle the "difficult" cases correctly, the most
notable of which is restructuring of a comptime-only array (for
instance, converting a `[3][2]comptime_int` to a `[2][3]comptime_int`).
Currently, the logic for loading and storing works somewhat differently,
but a future change will likely improve the loading logic to bring it
more in line with the store strategy. As far as I can tell, the rewrite
has fixed all bugs exposed by #19414.
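For illustration, a user-level sketch of the restructuring case mentioned
above, using a fixed-layout element type so the cast is expressible in source
(this is not a test taken from the commit):

```zig
const std = @import("std");

test "comptime load through a differently-shaped array pointer" {
    comptime {
        const flat: [3][2]u16 = .{ .{ 1, 2 }, .{ 3, 4 }, .{ 5, 6 } };
        // Same bytes viewed as [2][3]u16; the comptime load has to restructure
        // the stored aggregate rather than find a matching element directly.
        const reshaped: *const [2][3]u16 = @ptrCast(&flat);
        std.debug.assert(reshaped[0][2] == 3);
        std.debug.assert(reshaped[1][0] == 4);
    }
}
```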
As a part of this, the comptime bitcast logic has also been rewritten.
Previously, bitcasts simply worked by serializing the entire value into
an in-memory buffer, then deserializing it. This strategy has two key
weaknesses: pointers, and undefined values. Representations of these
values at comptime cannot be easily serialized/deserialized whilst
preserving data, which means many bitcasts would become runtime-known if
pointers were involved, or would turn `undefined` values into `0xAA`.
The new logic works by "flattening" the data structure to be cast into a
sequence of bit-packed atomic values, and then "unflattening" it; using
serialization when necessary, but with special handling for `undefined`
values and for pointers which align in virtual memory. The resulting
code is definitely slower -- more on this later -- but it is correct.
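A sketch of the kind of case the pointer special-handling targets: a comptime
`@bitCast` between two layouts whose pointer fields occupy the same bits, which
a serialize-everything approach cannot handle (illustrative, not from the
commit's test suite):

```zig
const std = @import("std");

test "comptime bitcast carrying a pointer across matching bit positions" {
    comptime {
        const value: u32 = 123;
        const A = extern struct { p: *const u32 };
        const B = extern struct { q: *const u32 };
        const a = A{ .p = &value };
        // The pointer cannot be serialized to bytes at comptime, but since the
        // field occupies the same bits in both types it can be carried across.
        const b: B = @bitCast(a);
        std.debug.assert(b.q.* == 123);
    }
}
```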
The pointer access and bitcast logic required some helper functions and
types which are not generally useful elsewhere, so I opted to split them
into separate files `Sema/comptime_ptr_access.zig` and
`Sema/bitcast.zig`, with simple re-exports in `Sema.zig` for their small
public APIs.
Whilst working on this branch, I caught various unrelated bugs with
transitive Sema errors, and with the handling of `undefined` values.
These bugs have been fixed, and corresponding behavior test added.
In terms of performance, I do anticipate that this commit will regress
performance somewhat, because the new pointer access and bitcast logic
is necessarily more complex. I have not yet taken performance
measurements, but will do shortly, and post the results in this PR. If
the performance regression is severe, I will do work to optimize the
new logic before merge.
Resolves: #19452
Resolves: #19460
|
This allows `std.Uri.resolve_inplace` to properly preserve the fact
that `new` is already escaped but `base` may not be. I originally tried
just moving `raw_uri` around, but it made URI resolution unmanageably
complicated, so I instead added per-component information to `Uri`, which
allows extra allocations to be avoided when constructing URIs with
components from different sources and, in some cases, allows deferring
the work all the way to when the URI is printed, where an allocator may
not even be needed.
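A sketch of what "per-component information" could look like: each component
records whether it is still raw or already percent-encoded, so escaping can be
deferred until the URI is printed (the names here are assumptions, not
necessarily `std.Uri`'s actual API):

```zig
// Each URI component carries its encoding state, so resolution can mix
// already-escaped components from `new` with raw components from `base`
// without eagerly allocating or re-encoding.
const Component = union(enum) {
    /// Not escaped yet; must be percent-encoded when the URI is written out.
    raw: []const u8,
    /// Already percent-encoded; can be written out verbatim.
    percent_encoded: []const u8,
};
```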
Closes #19587
|