Now they both use `Value.Tag.aggregate`.
Additionally, the LLVM backend now implements lowering of tuple values.
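A minimal behavior-style sketch of the kind of tuple value this lowering covers (the test name and values are illustrative, not taken from the actual test suite):
```zig
const std = @import("std");

test "tuple value with a runtime element" {
    var x: u32 = 1;
    _ = &x; // keep `x` runtime-known
    const tuple = .{ x, @as(u8, 2) }; // a tuple value the backend must lower
    try std.testing.expect(tuple[0] == 1);
    try std.testing.expect(tuple[1] == 2);
}
```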
|
Detect if we are storing an array operand to a bitcasted vector pointer.
If so, we instead reach through the bitcasted pointer to the vector pointer,
bitcast the array operand to a vector, and then lower this as a store of
a vector value to a vector pointer. This generally results in better code,
as well as working around an LLVM bug.
See #11154
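One way such a store can arise from user code is assigning an array value into vector memory; this is a hedged sketch, and the exact reproduction in #11154 may differ:
```zig
const std = @import("std");

test "store an array value into vector memory" {
    const arr = [4]u8{ 1, 2, 3, 4 };
    var v: @Vector(4, u8) = arr; // array operand stored to vector-typed memory
    _ = &v;
    try std.testing.expect(v[2] == 3);
}
```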
|
This implements type equality for error sets, done through element-wise
comparison of their members. Inferred error sets are always distinct
types; other error sets are always sorted. See #11022.
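A small assertion-style sketch of the resulting semantics (not the compiler's implementation):
```zig
const std = @import("std");

const A = error{ Foo, Bar };
const B = error{ Bar, Foo };

comptime {
    // Same members, so the element-wise comparison (after sorting) finds
    // the two explicitly declared error sets equal.
    std.debug.assert(A == B);
}
```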
|
Co-authored-by: Veikka Tuominen <git@vexu.eu>
|
* mul_add AIR instruction: use `pl_op` instead of `ty_pl`. The type is
always the same as the operand; no need to waste bytes redundantly
storing the type.
* AstGen: use coerced_ty for all the operands except for one which we
use to communicate the type.
* Sema: use the correct source location for requireRuntimeBlock in
handling of `@mulAdd`.
* native backends: handle liveness even for the functions that are
TODO.
* C backend: implement `@mulAdd`. It lowers to libc calls.
* LLVM backend: make `@mulAdd` handle all float types.
- improved fptrunc and fpext to handle f80 with compiler-rt calls.
* Value.mulAdd: handle all float types and use the `@mulAdd` builtin.
* behavior tests: revert the changes to testing `@mulAdd`. Those changes
broke the test coverage, leaving `@mulAdd` exercised only at compile
time (a runtime sketch follows after this commit message).
Improved f80 support:
* std.math.fma handles f80
* move fma functions from freestanding libc to compiler-rt
- add __fmax and fmal
- make __fmax and fmaq only exported when they don't alias fmal.
- make their linkage weak just like the rest of compiler-rt symbols.
* removed `longDoubleIsF128` and replaced it with `longDoubleIs`, which
takes a type as a parameter. The implementation is now more accurate
and handles more targets. Similarly, in stage2 the function
CTypes.sizeInBits is now more accurate for long double on more targets.
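A minimal runtime sketch of the coverage described above, using f80 (values are illustrative and exactly representable):
```zig
const std = @import("std");

fn fma80(a: f80, b: f80, c: f80) f80 {
    return @mulAdd(f80, a, b, c);
}

test "@mulAdd on f80 at runtime" {
    var a: f80 = 2.0;
    _ = &a; // keep the operand runtime-known so the runtime path is exercised
    try std.testing.expect(fma80(a, 3.0, 1.0) == 7.0);
}
```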
|
Similar to how Type.eql was reworked in the previous commit, this commit
reworks Type.hash to check all the different kinds of tags that a Type
can be represented with. It also completes the implementation for all
types except error sets, which need to have Type.eql enhanced as well.
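Illustrative sketch of the idea only (this is not the compiler's code; the union and field names are made up): when one logical type can be encoded by more than one tag, hashing must go through a canonical form so that equal types hash equally.
```zig
const std = @import("std");

const Ty = union(enum) {
    u8_type,
    int: Int,

    const Int = struct { signed: bool, bits: u16 };

    fn canonical(self: Ty) Int {
        return switch (self) {
            .u8_type => .{ .signed = false, .bits = 8 },
            .int => |info| info,
        };
    }

    fn hash(self: Ty, hasher: *std.hash.Wyhash) void {
        // Hash the canonical form, not the raw tag, so both encodings agree.
        const c = self.canonical();
        std.hash.autoHash(hasher, c.signed);
        std.hash.autoHash(hasher, c.bits);
    }
};

test "both representations hash identically" {
    var a = std.hash.Wyhash.init(0);
    var b = std.hash.Wyhash.init(0);
    const shorthand: Ty = .u8_type;
    const general: Ty = .{ .int = .{ .signed = false, .bits = 8 } };
    shorthand.hash(&a);
    general.hash(&b);
    try std.testing.expect(a.final() == b.final());
}
```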
|
Still TODO: extern unions.
|
This implements #10113 for the self-hosted compiler only. It removes the
ability to override alignment of packed struct fields, and removes the
ability to put pointers and arrays inside packed structs.
After this commit, nearly all the behavior tests that involve packed
structs pass for the stage2 LLVM backend.
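For illustration, a packed struct that is valid under the new rules (the field names are made up, not from the test suite):
```zig
const std = @import("std");

// Bit-packed fields, no per-field alignment overrides, and no pointer or
// array field types.
const Flags = packed struct {
    enabled: bool,
    kind: u3,
    count: u4,
};

test "packed struct is backed by an integer" {
    try std.testing.expect(@bitSizeOf(Flags) == 8);
    try std.testing.expect(@sizeOf(Flags) == 1);
}
```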
I didn't implement the compile errors or compile error tests yet. I'm
waiting until we have stage2 building itself and then I want to rework
the compile error test harness with inspiration from Vexu's arocc test
harness. At that point it should be a much nicer dev experience to work
on compile errors.
|
This change implements the above built-ins for Sema and the LLVM
backend. Other backends have had placeholders added for lowering.
|
stage2 sema: Fix sign handling of exotic integers in `@bitCast`
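A small example of the kind of case covered (written with the two-argument `@bitCast` spelling of the time; current Zig infers the result type instead):
```zig
const std = @import("std");

test "@bitCast between exotic ints preserves the bit pattern" {
    var x: i7 = -1; // all seven bits set
    _ = &x;
    const y = @bitCast(u7, x);
    try std.testing.expect(y == 127);
}
```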
|
Big-int functions were updated to respect the provided abi_size, rather
than inferring a potentially incorrect abi_size implicitly.
In combination with the convention that any required padding bits are
added on the MSB end, this means that exotic integers can potentially
have a well-defined memory layout.
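For example, under this convention a u12 gets a 2-byte ABI size with its 4 padding bits at the most-significant end:
```zig
const std = @import("std");

comptime {
    std.debug.assert(@bitSizeOf(u12) == 12);
    std.debug.assert(@sizeOf(u12) == 2); // ABI size includes the MSB-end padding
}
```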
|
Get rid of `std.math.F80Repr`. Instead of trying to match the memory
layout of f80, we treat it as a value, same as the other floating point
types. The functions `make_f80` and `break_f80` are introduced to
compose an f80 value out of its parts, and the inverse operation.
stage2 LLVM backend: fix pointer to zero length array tripping an LLVM
assertion. It now checks for when the element type is a zero-bit type
and lowers such pointers the same way that pointers to other zero-bit
types are lowered.
Both stage1 and stage2 LLVM backends are adjusted so that f80 is lowered
as x86_fp80 on x86_64 and i386 architectures, and identically to a u80
on others. LLVM constants are lowered in a less hacky way now that
#10860 is fixed, by building the expression `(exp << 64) | fraction`
out of LLVM constants.
Sema now handles c_longdouble correctly, in both stage1 and stage2, by
recursing on whatever the target's float bit width is.
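For reference, a hedged sketch of the parts round-trip described above, assuming `break_f80`/`make_f80` live in `std.math` and expose a `{ fraction, exp }` pair (their exact location and shape may differ):
```zig
const std = @import("std");
const math = std.math;

test "round-trip 1.0 through its f80 parts" {
    // 1.0 in x87 extended precision: biased exponent 0x3FFF, explicit
    // integer bit set, zero fraction bits.
    const parts = math.break_f80(1.0);
    try std.testing.expect(parts.exp == 0x3FFF);
    try std.testing.expect(parts.fraction == 0x8000000000000000);
    try std.testing.expect(math.make_f80(parts) == 1.0);
}
```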
|
* F80Repr extern struct needs no explicit padding; let's match the
target padding.
* stage2: fix lowering of f80 constants.
* stage1: decide ABI size and alignment of f80 based on alignment of
u64. x86 has alignof u64 equal to 4 but arm has it as 8.
* stage2: fix Value.floatReadFromMemory to use F80Repr
|
- Merge `floatop.zig` and `floatop_stage1.zig` since most tests now pass
on stage2.
- Add more behavior tests for a bunch of functions.
|
* pass more x64 behavior tests
* return with a TODO error when lowering a decl with no runtime bits
* insert some debug logs for tracing recursive descent down the
type-value tree when lowering types
* print `Decl`'s name when print debugging `decl_ref` value
|
* pass air_tag instead of zir_tag
* also pass eval function so that the branch only happens once and the
body of zirUnaryMath is simplified
* Value.sqrt: update to handle f80 and f128 in the normalized way that
includes handling c_longdouble.
Semi-related change: fix incorrect sqrt builtin name for f80 in stage1.
|
Support for f128, comptime_float, and c_longdouble requires improvements
to compiler_rt and will be implemented in a later PR. Some of the code in
this commit could be made more generic, for instance `llvm.airSqrt`
could probably be `llvm.airUnaryMath`, but let's cross that
bridge when we get to it.
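A minimal runtime check for the float types that are handled so far (values chosen to be exactly representable):
```zig
const std = @import("std");

test "@sqrt on f32 and f64" {
    var x: f64 = 2.25;
    _ = &x; // keep the operand runtime-known
    try std.testing.expect(@sqrt(x) == 1.5);
    try std.testing.expect(@sqrt(@as(f32, 9.0)) == 3.0);
}
```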
|
stage2: store externs lib name as part of decl
|
Once the relevant compiler_rt functions are implemented, these panics
can be removed.
|
Currently Zig lowers `@intToFloat` for f80 incorrectly on non-x86
targets:
```
broken LLVM module found:
UIToFP result must be FP or FP vector
%62 = uitofp i64 %61 to i128
SIToFP result must be FP or FP vector
%66 = sitofp i64 %65 to i128
```
This happens because on such targets, we use i128 instead of x86_fp80 in
order to avoid "LLVM ERROR: Cannot select". `@intToFloat` must be
lowered differently to account for this difference as well.
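The kind of user code that reaches this path looks like the following (using the `@intToFloat` spelling of the time; current Zig calls it `@floatFromInt`):
```zig
// On a non-x86 target, the f80 result is represented as an i128 in LLVM,
// so a plain uitofp/sitofp is not a valid lowering.
fn toF80(x: u64) f80 {
    return @intToFloat(f80, x);
}
```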
|
AstGen: Fixed bug where f80 types in source were triggering illegal
behavior.
Value: handle f80 in floating point arithmetic functions.
Value: implement floatRem and floatMod
This commit introduces dependencies on compiler-rt functions that are not
yet implemented; implementing them is a prerequisite to merging this branch.
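These presumably back the float cases of the `@rem` and `@mod` builtins; as a reminder of those semantics (remainder keeps the dividend's sign, modulus the divisor's):
```zig
const std = @import("std");

comptime {
    std.debug.assert(@rem(@as(f64, -7.5), 2.0) == -1.5); // sign of the dividend
    std.debug.assert(@mod(@as(f64, -7.5), 2.0) == 0.5); // sign of the divisor
}
```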
|
`ExternFn` will contain an optional lib name if the function was declared
with the `extern` keyword, like so:
```zig
extern "c" fn write(usize, usize, usize) usize;
```
`lib_name` will live as long as the `ExternFn` decl does.
|
std: make ArrayHashMap eql function accept an additional param
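A hedged sketch of a custom context after this change, assuming the extra parameter is the index of the stored key being compared against (which is how the current std spells it):
```zig
// Intended to be passed as the Context parameter of
// std.ArrayHashMap(K, V, Context, store_hash).
const Ctx = struct {
    pub fn hash(_: Ctx, key: u32) u32 {
        return key *% 2654435761; // simple multiplicative hash
    }
    pub fn eql(_: Ctx, a: u32, b: u32, b_index: usize) bool {
        _ = b_index; // the newly added parameter
        return a == b;
    }
};
```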
|
saturating left shift on type comptime_int executed unreachable code
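Assuming the fix makes the operator behave as a plain shift on `comptime_int` (there is no maximum value to saturate to), this should now pass:
```zig
const std = @import("std");

comptime {
    std.debug.assert((1 <<| 3) == 8);
}
```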
|
Takes advantage of the pattern already established with
array_init_anon. Also upgrades array_init (non-anon) to the pattern.
Implements comptime struct value equality and pointer value hashing.
|
AstGen:
* rename the known_has_bits flag to known_non_opv to make it better
reflect what it actually means.
* add a known_comptime_only flag.
* make the flags take advantage of the identifiers of primitives and the
fact that Zig has no shadowing.
* correct the known_non_opv flag for function bodies.
Sema:
* Rename `hasCodeGenBits` to `hasRuntimeBits` to better reflect what it
does.
- This function got a bit more complicated in this commit because of
the duality of function bodies: on one hand they have runtime bits,
but on the other hand they are required to be comptime-known.
* WipAnonDecl now takes a LazySrcDecl parameter and performs the type
resolutions that it needs during finish().
* Implement comptime `@ptrToInt` (sketch after this list).
Codegen:
* Improved handling of lowering decl_ref; make it work for
comptime-known ptr-to-int values.
- This same change had to be made many different times; perhaps we
should look into merging the implementations of `genTypedValue`
across x86, arm, aarch64, and riscv.
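A minimal sketch of the comptime `@ptrToInt` case (builtin spellings of the time; today these are `@intFromPtr` and `@ptrFromInt`):
```zig
const std = @import("std");

comptime {
    const p = @intToPtr(*u32, 0x1000); // comptime-known address
    std.debug.assert(@ptrToInt(p) == 0x1000);
}
```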
|
This commit updates stage2 to enforce the property that the syntax
`fn()void` is a function *body* not a *pointer*. To get a pointer, the
syntax `*const fn()void` is required.
ZIR puts function alignment into the func instruction rather than the
decl because that way it becomes part of function types. The LLVM backend
respects function alignments.
Struct and Union have methods `fieldSrcLoc` to help look up source
locations of their fields. These trigger full loading, tokenization, and
parsing of source files, so should only be called once it is confirmed
that an error message needs to be printed.
There are some nice new error hints for explaining why a type is
required to be comptime, particularly for structs that contain function
body types.
`Type.requiresComptime` is now moved into Sema because it can fail and
might need to trigger field type resolution. Comptime pointer loading
takes into account types that do not have a well-defined memory layout
and does not try to compute a byte offset for them.
`fn()void` syntax no longer secretly makes a pointer. You get a function
body type, which requires comptime. However a pointer to a function body
can be runtime known (obviously).
Compile errors that report "expected pointer, found ..." are factored
out into convenience functions `checkPtrOperand` and `checkPtrType` and
have a note about function pointers.
Implemented `Value.hash` for functions, enum literals, and undefined values.
stage1 is not updated to this (yet?), so some workarounds and disabled
tests are needed to keep everything working. Should we update stage1 to
these new type semantics? Yes, probably, because I don't want to add too
much conditional compilation logic in the std lib for the different
backends.
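A small sketch of the new semantics:
```zig
const std = @import("std");

fn hello() u8 {
    return 42;
}

// `fn () u8` is a function *body* type now: comptime-only.
const body: fn () u8 = hello;

// A pointer to a function body can be runtime-known.
var ptr: *const fn () u8 = &hello;

test "call through a function pointer" {
    try std.testing.expect(ptr() == 42);
    try std.testing.expect(body() == 42);
}
```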
|