| field | value | date |
|---|---|---|
| author | mlugg <mlugg@mlugg.co.uk> | 2024-04-08 16:14:39 +0100 |
| committer | mlugg <mlugg@mlugg.co.uk> | 2024-04-17 13:41:25 +0100 |
| commit | d0e74ffe52d0ae0d876d4e3f7ef5d32b5f5460a5 (patch) | |
| tree | 001cb2b59a48e913e6036675b71f4736c55647c7 /src/type.zig | |
| parent | 77abd3a96aa8c8c1277cdbb33d88149d4674d389 (diff) | |
| download | zig-d0e74ffe52d0ae0d876d4e3f7ef5d32b5f5460a5.tar.gz zig-d0e74ffe52d0ae0d876d4e3f7ef5d32b5f5460a5.zip | |
compiler: rework comptime pointer representation and access
We've got a big one here! This commit reworks how we represent pointers
in the InternPool, and rewrites the logic for loading and storing from
them at comptime.
Firstly, the pointer representation. Previously, pointers were
represented in a highly structured manner: pointers to fields, array
elements, etc., were explicitly represented. This works well for simple
cases, but is quite difficult to handle in cases of unusual
reinterpretations, pointer casts, offsets, and so on. Therefore, pointers are
now represented in a more "flat" manner. For types without well-defined
layouts -- such as comptime-only types, automatic-layout aggregates, and
so on -- we still use this "hierarchical" structure. However, for types
with well-defined layouts, we use a byte offset associated with the
pointer. This allows the comptime pointer access logic to deal with
reinterpreted pointers far more gracefully: the "base address" of a
pointer -- for instance, a `field` -- is a single value beyond which
pointer accesses cannot reach, since the parent has no well-defined layout.
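For illustration, here is a minimal sketch of the two strategies in ordinary Zig. This is not the actual `InternPool` encoding (which uses interned indices, not heap pointers); the type and field names are hypothetical:

```zig
// Illustrative model only; names are made up for this sketch.
const PtrRepr = union(enum) {
    /// Hierarchical form, kept for pointees without a well-defined layout
    /// (comptime-only types, automatic-layout aggregates): "field N of the
    /// value at `base`". Accesses through such a pointer can never escape
    /// the base, because the parent's layout is not defined.
    field: struct { base: *const PtrRepr, index: u32 },
    /// Likewise for array elements of ill-defined-layout arrays.
    elem: struct { base: *const PtrRepr, index: u64 },
    /// Flat form, used for pointees with a well-defined layout: a base
    /// allocation plus a plain byte offset, which survives pointer casts
    /// and other reinterpretations.
    offset: struct { base_alloc: u32, byte_offset: u64 },
};
```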
This strategy is also more useful to most backends -- see the updated
logic in `codegen.zig` and `codegen/llvm.zig`. For backends which do
prefer a chain of field and element accesses for lowering pointer
values, such as SPIR-V, there is a helper function in `Value` which
computes a strategy for deriving a pointer value, ideally using only
field and element accesses. This is actually more correct than the previous
logic, since it correctly handles pointer casts which, after the dust
has settled, end up referring exactly to an aggregate field or array
element.
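As a sketch of the kind of case this handles -- my example, not a test from the commit, assuming the reinterpretation is written as a `@ptrCast` on an extern (well-defined-layout) struct:

```zig
const S = extern struct { a: u32, b: u32 };

test "pointer cast that resolves to a field access" {
    comptime {
        var s: S = .{ .a = 1, .b = 2 };
        // Reinterpret the struct as a two-element array; legal because `S`
        // has a well-defined (extern) layout.
        const arr: *[2]u32 = @ptrCast(&s);
        // After the dust settles, `&arr[1]` refers to exactly the storage of
        // `s.b`, so a backend can derive it as a plain field access rather
        // than a byte-offset reinterpretation.
        arr[1] = 42;
        if (s.b != 42) @compileError("expected the store to hit s.b");
    }
}
```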
In terms of the pointer access code, it has been rewritten from the
ground up. The old logic had become rather a mess of special cases being
added whenever bugs were hit, and was still riddled with bugs. The new
logic was written to handle the "difficult" cases correctly, the most
notable of which is the restructuring of a comptime-only array (for
instance, converting a `[3][2]comptime_int` to a `[2][3]comptime_int`).
Currently, the loading and storing logic work somewhat differently,
but a future change will likely improve the loading logic to bring it
more in line with the store strategy. As far as I can tell, the rewrite
has fixed all bugs exposed by #19414.
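The restructuring case looks something like this behavior-test-style sketch (again my own example, assuming the reinterpretation is exercised via `@ptrCast`):

```zig
test "restructure a comptime-only array" {
    comptime {
        const src: [3][2]comptime_int = .{ .{ 1, 2 }, .{ 3, 4 }, .{ 5, 6 } };
        // comptime_int has no memory layout, so this cannot be done by
        // serializing bytes: the access logic must structurally regroup the
        // flattened element sequence 1,2,3,4,5,6 into two rows of three.
        const dst: *const [2][3]comptime_int = @ptrCast(&src);
        if (dst[0][2] != 3 or dst[1][0] != 4) @compileError("bad restructure");
    }
}
```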
As a part of this, the comptime bitcast logic has also been rewritten.
Previously, bitcasts simply worked by serializing the entire value into
an in-memory buffer, then deserializing it. This strategy has two key
weaknesses: pointers and undefined values. Representations of these
values at comptime cannot be easily serialized/deserialized whilst
preserving data, which means many bitcasts would become runtime-known if
pointers were involved, or would turn `undefined` values into `0xAA`.
The new logic works by "flattening" the data structure to be cast into a
sequence of bit-packed atomic values, and then "unflattening" it, using
serialization when necessary, but with special handling for `undefined`
values and for pointers which align in virtual memory. The resulting
code is definitely slower -- more on this later -- but it is correct.
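For example, under the old serialization approach a case like the following (an illustrative sketch, not a test from the commit) would have materialized `0xAA` bytes for the undefined field; with flattening, defined and undefined bits are tracked per atom:

```zig
const Pair = packed struct { a: u8, b: u8 };

test "bitcast preserves partially-undefined values" {
    comptime {
        const p: Pair = .{ .a = 0x01, .b = undefined };
        // Flattened, `p` is two 8-bit atoms: one defined, one undefined.
        // Round-tripping through the u16 backing integer keeps `a` intact
        // and keeps `b` undefined instead of turning it into 0xAA.
        const bits: u16 = @bitCast(p);
        const q: Pair = @bitCast(bits);
        if (q.a != 0x01) @compileError("defined bits must survive");
    }
}
```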
The pointer access and bitcast logic required some helper functions and
types which are not generally useful elsewhere, so I opted to split them
into separate files `Sema/comptime_ptr_access.zig` and
`Sema/bitcast.zig`, with simple re-exports in `Sema.zig` for their small
public APIs.
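The re-exports presumably have a shape like this (only the file paths are from this commit; the binding names are my guess):

```zig
// In Sema.zig -- hypothetical binding names for the two new files' APIs:
pub const comptime_ptr_access = @import("Sema/comptime_ptr_access.zig");
pub const bitcast = @import("Sema/bitcast.zig");
```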
Whilst working on this branch, I caught various unrelated bugs with
transitive Sema errors, and with the handling of `undefined` values.
These bugs have been fixed, and corresponding behavior tests added.
In terms of performance, I do anticipate that this commit will regress
performance somewhat, because the new pointer access and bitcast logic
is necessarily more complex. I have not yet taken performance
measurements, but will do shortly, and post the results in this PR. If
the performance regression is severe, I will work to optimize the
new logic before merge.
Resolves: #19452
Resolves: #19460
Diffstat (limited to 'src/type.zig')
| mode | file | lines changed |
|---|---|---|
| -rw-r--r-- | src/type.zig | 102 |

1 file changed, 93 insertions, 9 deletions
```diff
diff --git a/src/type.zig b/src/type.zig
index 264125c6d0..fcacfaf9e6 100644
--- a/src/type.zig
+++ b/src/type.zig
@@ -172,6 +172,7 @@ pub const Type = struct {
     }
 
     /// Prints a name suitable for `@typeName`.
+    /// TODO: take an `opt_sema` to pass to `fmtValue` when printing sentinels.
     pub fn print(ty: Type, writer: anytype, mod: *Module) @TypeOf(writer).Error!void {
         const ip = &mod.intern_pool;
         switch (ip.indexToKey(ty.toIntern())) {
@@ -187,8 +188,8 @@ pub const Type = struct {
 
             if (info.sentinel != .none) switch (info.flags.size) {
                 .One, .C => unreachable,
-                .Many => try writer.print("[*:{}]", .{Value.fromInterned(info.sentinel).fmtValue(mod)}),
-                .Slice => try writer.print("[:{}]", .{Value.fromInterned(info.sentinel).fmtValue(mod)}),
+                .Many => try writer.print("[*:{}]", .{Value.fromInterned(info.sentinel).fmtValue(mod, null)}),
+                .Slice => try writer.print("[:{}]", .{Value.fromInterned(info.sentinel).fmtValue(mod, null)}),
             } else switch (info.flags.size) {
                 .One => try writer.writeAll("*"),
                 .Many => try writer.writeAll("[*]"),
@@ -234,7 +235,7 @@ pub const Type = struct {
                 } else {
                     try writer.print("[{d}:{}]", .{
                         array_type.len,
-                        Value.fromInterned(array_type.sentinel).fmtValue(mod),
+                        Value.fromInterned(array_type.sentinel).fmtValue(mod, null),
                     });
                     try print(Type.fromInterned(array_type.child), writer, mod);
                 }
@@ -352,7 +353,7 @@ pub const Type = struct {
                     try print(Type.fromInterned(field_ty), writer, mod);
 
                     if (val != .none) {
-                        try writer.print(" = {}", .{Value.fromInterned(val).fmtValue(mod)});
+                        try writer.print(" = {}", .{Value.fromInterned(val).fmtValue(mod, null)});
                     }
                 }
                 try writer.writeAll("}");
@@ -1965,6 +1966,12 @@ pub const Type = struct {
         return Type.fromInterned(union_fields[index]);
     }
 
+    pub fn unionFieldTypeByIndex(ty: Type, index: usize, mod: *Module) Type {
+        const ip = &mod.intern_pool;
+        const union_obj = mod.typeToUnion(ty).?;
+        return Type.fromInterned(union_obj.field_types.get(ip)[index]);
+    }
+
     pub fn unionTagFieldIndex(ty: Type, enum_tag: Value, mod: *Module) ?u32 {
         const union_obj = mod.typeToUnion(ty).?;
         return mod.unionTagFieldIndex(union_obj, enum_tag);
@@ -3049,22 +3056,34 @@ pub const Type = struct {
         };
     }
 
-    pub fn structFieldAlign(ty: Type, index: usize, mod: *Module) Alignment {
-        const ip = &mod.intern_pool;
+    pub fn structFieldAlign(ty: Type, index: usize, zcu: *Zcu) Alignment {
+        return ty.structFieldAlignAdvanced(index, zcu, null) catch unreachable;
+    }
+
+    pub fn structFieldAlignAdvanced(ty: Type, index: usize, zcu: *Zcu, opt_sema: ?*Sema) !Alignment {
+        const ip = &zcu.intern_pool;
         switch (ip.indexToKey(ty.toIntern())) {
             .struct_type => {
                 const struct_type = ip.loadStructType(ty.toIntern());
                 assert(struct_type.layout != .@"packed");
                 const explicit_align = struct_type.fieldAlign(ip, index);
                 const field_ty = Type.fromInterned(struct_type.field_types.get(ip)[index]);
-                return mod.structFieldAlignment(explicit_align, field_ty, struct_type.layout);
+                if (opt_sema) |sema| {
+                    return sema.structFieldAlignment(explicit_align, field_ty, struct_type.layout);
+                } else {
+                    return zcu.structFieldAlignment(explicit_align, field_ty, struct_type.layout);
+                }
             },
             .anon_struct_type => |anon_struct| {
-                return Type.fromInterned(anon_struct.types.get(ip)[index]).abiAlignment(mod);
+                return (try Type.fromInterned(anon_struct.types.get(ip)[index]).abiAlignmentAdvanced(zcu, if (opt_sema) |sema| .{ .sema = sema } else .eager)).scalar;
             },
             .union_type => {
                 const union_obj = ip.loadUnionType(ty.toIntern());
-                return mod.unionFieldNormalAlignment(union_obj, @intCast(index));
+                if (opt_sema) |sema| {
+                    return sema.unionFieldAlignment(union_obj, @intCast(index));
+                } else {
+                    return zcu.unionFieldNormalAlignment(union_obj, @intCast(index));
+                }
             },
             else => unreachable,
         }
@@ -3301,6 +3320,71 @@ pub const Type = struct {
         };
     }
 
+    pub fn arrayBase(ty: Type, zcu: *const Zcu) struct { Type, u64 } {
+        var cur_ty: Type = ty;
+        var cur_len: u64 = 1;
+        while (cur_ty.zigTypeTag(zcu) == .Array) {
+            cur_len *= cur_ty.arrayLenIncludingSentinel(zcu);
+            cur_ty = cur_ty.childType(zcu);
+        }
+        return .{ cur_ty, cur_len };
+    }
+
+    pub fn packedStructFieldPtrInfo(struct_ty: Type, parent_ptr_ty: Type, field_idx: u32, zcu: *Zcu) union(enum) {
+        /// The result is a bit-pointer with the same value and a new packed offset.
+        bit_ptr: InternPool.Key.PtrType.PackedOffset,
+        /// The result is a standard pointer.
+        byte_ptr: struct {
+            /// The byte offset of the field pointer from the parent pointer value.
+            offset: u64,
+            /// The alignment of the field pointer type.
+            alignment: InternPool.Alignment,
+        },
+    } {
+        comptime assert(Type.packed_struct_layout_version == 2);
+
+        const parent_ptr_info = parent_ptr_ty.ptrInfo(zcu);
+        const field_ty = struct_ty.structFieldType(field_idx, zcu);
+
+        var bit_offset: u16 = 0;
+        var running_bits: u16 = 0;
+        for (0..struct_ty.structFieldCount(zcu)) |i| {
+            const f_ty = struct_ty.structFieldType(i, zcu);
+            if (i == field_idx) {
+                bit_offset = running_bits;
+            }
+            running_bits += @intCast(f_ty.bitSize(zcu));
+        }
+
+        const res_host_size: u16, const res_bit_offset: u16 = if (parent_ptr_info.packed_offset.host_size != 0)
+            .{ parent_ptr_info.packed_offset.host_size, parent_ptr_info.packed_offset.bit_offset + bit_offset }
+        else
+            .{ (running_bits + 7) / 8, bit_offset };
+
+        // If the field happens to be byte-aligned, simplify the pointer type.
+        // We can only do this if the pointee's bit size matches its ABI byte size,
+        // so that loads and stores do not interfere with surrounding packed bits.
+        //
+        // TODO: we do not attempt this with big-endian targets yet because of nested
+        // structs and floats. I need to double-check the desired behavior for big endian
+        // targets before adding the necessary complications to this code. This will not
+        // cause miscompilations; it only means the field pointer uses bit masking when it
+        // might not be strictly necessary.
+        if (res_bit_offset % 8 == 0 and field_ty.bitSize(zcu) == field_ty.abiSize(zcu) * 8 and zcu.getTarget().cpu.arch.endian() == .little) {
+            const byte_offset = res_bit_offset / 8;
+            const new_align = Alignment.fromLog2Units(@ctz(byte_offset | parent_ptr_ty.ptrAlignment(zcu).toByteUnits().?));
+            return .{ .byte_ptr = .{
+                .offset = byte_offset,
+                .alignment = new_align,
+            } };
+        }
+
+        return .{ .bit_ptr = .{
+            .host_size = res_host_size,
+            .bit_offset = res_bit_offset,
+        } };
+    }
+
     pub const @"u1": Type = .{ .ip_index = .u1_type };
     pub const @"u8": Type = .{ .ip_index = .u8_type };
     pub const @"u16": Type = .{ .ip_index = .u16_type };
```
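The new `arrayBase` helper collapses nested array types into a base element type and a total element count. A standalone analog using `@typeInfo` may make the behavior easier to see; this is my reconstruction for illustration (recursive rather than the helper's iterative loop, and operating on ordinary types rather than the compiler's `Type`/`Zcu`):

```zig
const std = @import("std");

/// Standalone analog of `Type.arrayBase`: strips nested arrays down to the
/// base element type, multiplying lengths along the way. Sentinels count
/// toward the length, mirroring `arrayLenIncludingSentinel`.
fn arrayBase(comptime T: type) struct { type, u64 } {
    return switch (@typeInfo(T)) {
        .Array => |info| blk: {
            const base, const len = arrayBase(info.child);
            break :blk .{ base, len * (info.len + @intFromBool(info.sentinel != null)) };
        },
        else => .{ T, 1 },
    };
}

test "arrayBase analog" {
    // [3][2]u32 flattens to 6 elements of base type u32.
    const base, const len = comptime arrayBase([3][2]u32);
    try std.testing.expect(base == u32);
    try std.testing.expectEqual(@as(u64, 6), len);
}
```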
