path: root/src/Liveness.zig
2022-08-12  stage2: generate a switch for `@errSetCast` safety  (Veikka Tuominen)
2022-08-05  stage2: add runtime safety for invalid enum values  (Veikka Tuominen)
2022-07-23  stage2: implement `@setFloatMode`  (Veikka Tuominen)
2022-06-30  stage2: lower float negation explicitly  (Andrew Kelley)
Rather than lowering float negation as `0.0 - x`:

* Add AIR instruction for float negation.
* Add compiler-rt functions for f128, f80 negation.

Closes #11853.
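As a side note (not from the commit), one observable way in which `-x` and `0.0 - x` differ for IEEE 754 floats is the sign of zero:

```zig
const std = @import("std");

test "negation is not the same as subtracting from zero" {
    const x: f32 = 0.0;
    // True negation flips the sign bit: -(+0.0) is -0.0 ...
    try std.testing.expect(std.math.signbit(-x));
    // ... but 0.0 - 0.0 is +0.0 under the default rounding mode.
    try std.testing.expect(!std.math.signbit(0.0 - x));
}
```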
2022-06-05  stage2: implement the new "try" ZIR/AIR instruction  (Andrew Kelley)
Implements semantic analysis for the new try/try_inline ZIR instruction. Adds the new try/try_ptr AIR instructions and implements them for the LLVM backend. Fixes not calling rvalue() for tryExpr in AstGen. This is part of an effort to implement #11772.
2022-05-31  LLVM: elide some loads when lowering  (Andrew Kelley)
Generally, the load instruction may need to make a copy of an isByRef=true value, such as in the case of the following code:

```zig
pub fn swap(comptime T: type, a: *T, b: *T) void {
    const tmp = a.*;
    a.* = b.*;
    b.* = tmp;
}
```

However, it only needs to do so if there are any instructions which can possibly write to memory. When calling functions with isByRef=true parameters, the AIR code that is generated looks like loads followed directly by call. This allows for a peephole optimization when lowering loads: if the load instruction operates on an isByRef=true type and dies before any side effects occur, then we can safely lower the load as a no-op that returns its operand.

This is one out of three changes I intend to make to address #11498. However I will put these changes in separate branches and merge them separately so that we can have three independent points on the perf charts.
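A toy model of that peephole condition (illustrative only; these are not the compiler's real types or function names):

```zig
const std = @import("std");

// Toy instruction set standing in for AIR.
const Inst = enum { load, call, store, add, ret };

/// Returns true if a load at `load_index`, whose result dies at
/// `death_index`, may be lowered as a no-op that aliases its operand:
/// nothing between the load and the death point may write to memory.
fn canElideLoad(body: []const Inst, load_index: usize, death_index: usize) bool {
    var i = load_index + 1;
    while (i < death_index) : (i += 1) {
        switch (body[i]) {
            .store, .call => return false, // may clobber the loaded memory
            else => {},
        }
    }
    return true;
}

test "a load feeding directly into a call needs no copy" {
    const direct = [_]Inst{ .load, .call, .ret };
    try std.testing.expect(canElideLoad(&direct, 0, 1));
    const clobbered = [_]Inst{ .load, .store, .call, .ret };
    try std.testing.expect(!canElideLoad(&clobbered, 0, 2));
}
```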
2022-05-17  stage2: fix pointer arithmetic result type  (Andrew Kelley)
The result of pointer arithmetic now gets a new pointer type with its alignment adjusted accordingly.
2022-05-16  stage2: implement error return traces  (Veikka Tuominen)
2022-04-27  add new builtin function `@tan`  (Andrew Kelley)
The reason for having `@tan` is that we already have `@sin` and `@cos` because some targets have machine code instructions for them, but in the case that the implementation needs to go into compiler-rt, sin, cos, and tan all share a common dependency which includes a table of data. To avoid duplicating this table of data, we promote tan to become a builtin alongside sin and cos.

ZIR: The tag enum is at capacity so this commit moves `field_call_bind_named` to be `extended`. I measured this as one of the least used tags in the zig codebase.

Fix libc math suffix for `f32` being wrong in both stage1 and stage2.

stage1: add missing libc prefix for float functions.
2022-04-14  stage2: progress towards stage3  (Andrew Kelley)
* The `@bitCast` workaround is removed in favor of `@ptrCast` properly doing element casting for slice element types. This required an enhancement both to stage1 and stage2.
* stage1 incorrectly accepts `.{}` instead of `{}`. stage2 code that abused this is fixed.
* Make some parameters comptime to support functions in switch expressions (as opposed to making them function pointers).
* Avoid relying on local temporaries being mutable.
* Workarounds for when stage1 and stage2 disagree on function pointer types.
* Workaround recursive formatting bug with a `@panic("TODO")`.
* Remove unreachable `else` prongs for some inferred error sets.

All in effort towards #89.
2022-04-12  Liveness: modify encoding to support over 32 operands  (Andrew Kelley)
Prior to this, Liveness encoded `asm`, `call`, and `aggregate_init` with a single 32-bit integer, allowing up to 35 operands (3 are provided by the regular tomb_bits). However, the Zig language allows function calls with more than 35 arguments, inline assembly with more than 35 inputs, and anonymous tuples with more than 35 elements.

The new encoding stores an index to the extra array instead of the bits directly, and then as many extra elements as needed to encode all the operands. The MSB is used as a flag to tell which element is the last one, allowing for 31 bits per element.

Prior to this, print_air did not bother correctly printing tombstones for these instructions; now it does.

In addition to updating the BigTomb iteration logic in the machine code backends, this commit extracts the common logic into the Liveness namespace.
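A schematic decoder, purely for illustration (the real code lives in the Liveness namespace and also walks the end-of-sequence flag while iterating; the function name here is made up):

```zig
const std = @import("std");

/// Illustrative lookup for the encoding described above: each extra u32
/// carries 31 operand-death bits, and the most significant bit marks the
/// last element of the sequence. Here we only bounds-check instead of
/// honoring the end flag, to keep the sketch short.
fn bigTombOperandDies(extra: []const u32, operand_index: usize) bool {
    const elem_index = operand_index / 31;
    if (elem_index >= extra.len) return false;
    return std.math.shr(u32, extra[elem_index], operand_index % 31) & 1 != 0;
}

test "31 usable bits per element, MSB reserved as the end flag" {
    // Element 0: operand 0 dies, flag clear. Element 1: operand 31 dies, flag set.
    const extra = [_]u32{ 0b1, (1 << 31) | 0b1 };
    try std.testing.expect(bigTombOperandDies(&extra, 0));
    try std.testing.expect(!bigTombOperandDies(&extra, 1));
    try std.testing.expect(bigTombOperandDies(&extra, 31));
}
```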
2022-04-07  Liveness: utilize Air.refToIndex  (Andrew Kelley)
2022-03-29  stage2: implement `@intToError` with safety  (Andrew Kelley)
This commit introduces a new AIR instruction `cmp_lt_errors_len`. It's specific to this use case for two reasons:

* The total number of errors is not stable during semantic analysis; it can only be reliably checked when flush() is called. So the backend that is lowering the instruction must emit a relocation of some kind and then populate it during flush().
* The fewer AIR instructions in memory, the better for compiler performance, so we squish complex meanings into AIR tags without hesitation.

The instruction is implemented only in the LLVM backend so far. It does this by creating a simple function which is gutted and re-populated with each flush().

AstGen now uses ResultLoc.coerced_ty for `@intToError` and Sema does the coercion.
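For context, a minimal use of the builtin being guarded (written with the `@intToError`/`@errorToInt` names in use at the time of this commit; they have since been renamed):

```zig
const std = @import("std");

test "@intToError is checked against the total error count" {
    const E = error{Oops};
    const code = @errorToInt(E.Oops);
    // With runtime safety enabled, the new cmp_lt_errors_len instruction
    // checks that `code` is below the total number of errors, a value
    // that is only final at flush() time (hence the relocation).
    const err = @intToError(code);
    try std.testing.expect(err == E.Oops);
}
```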
2022-03-26  Sema: change zirOverflowArithmetic to use new version of AIR insts  (joachimschmidt557)
Also applies the change to Liveness
2022-03-25  sema: use `pl_op` for `@select`  (John Schmidt)
2022-03-25  stage2: implement `@select`  (John Schmidt)
2022-03-21  stage2: add AIR instruction `cmp_vector`  (William Sengir)
The existing `cmp_*` instructions get their result type from `lhs`, but vector comparison will always return a vector of bools with only the length derived from its operands. This necessitates the creation of a new AIR instruction.
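For illustration (not from the commit), the result-type rule that motivates the dedicated instruction:

```zig
const std = @import("std");

test "vector comparison yields a vector of bools" {
    const a: @Vector(4, u32) = [4]u32{ 1, 2, 3, 4 };
    const b: @Vector(4, u32) = [4]u32{ 4, 3, 2, 1 };
    // Unlike the scalar cmp_* instructions, the result type is not the
    // type of `lhs`; only the length carries over.
    const lt = a < b;
    try std.testing.expect(@TypeOf(lt) == @Vector(4, bool));
    try std.testing.expect(lt[0] and !lt[3]);
}
```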
2022-03-19  stage2: add dbg_block_{begin,end} instruction  (Veikka Tuominen)
2022-03-17  stage2: implement `@reduce`  (Andrew Kelley)
Notably, Value.eql and Value.hash are improved to treat NaN as equal to itself, so that Type/Value can be hash map keys. Likewise float hashing normalizes the float value before computing the hash.
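A short usage example of the builtin (not taken from the commit):

```zig
const std = @import("std");

test "@reduce folds a vector with the chosen operator" {
    const v: @Vector(4, i32) = [4]i32{ 1, 2, 3, 4 };
    try std.testing.expectEqual(@as(i32, 10), @reduce(.Add, v));
    try std.testing.expectEqual(@as(i32, 4), @reduce(.Max, v));
}
```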
2022-03-16  stage2 llvm: keep track of inlined functions  (Veikka Tuominen)
2022-03-16  Sema: emit dbg_func around inline calls  (Veikka Tuominen)
2022-03-13  stage2: add debug info for locals in the LLVM backend  (Andrew Kelley)
Adds 2 new AIR instructions:

* dbg_var_ptr
* dbg_var_val

Sema no longer emits dbg_stmt AIR instructions when strip=true.

LLVM backend: fixed lowerPtrToVoid when calling ptrAlignment on the element type is problematic.

LLVM backend: fixed alloca instructions improperly getting debug location annotated, causing chaotic debug info behavior.

zig_llvm.cpp: fixed incorrect bindings for a function that should use unsigned integers for line and column.

A bunch of C test cases regressed because the new dbg_var AIR instructions caused their operands to be alive, exposing latent bugs. Mostly it's just a problem that the C backend lowers mutable and const slices to the same C type, so we need to represent that in the C backend instead of printing two duplicate typedefs.
2022-03-11  stage2: implement `@shuffle` at runtime  (Veikka Tuominen)
2022-03-11  stage2: passing threadlocal tests for x86_64-linux  (Andrew Kelley)
* use the real start code for LLVM backend with x86_64-linux
  - there is still a check for zig_backend after initializing the TLS area to skip some stuff.
* introduce new AIR instructions and implement them for the LLVM backend. They are the same as `call` except with a modifier.
  - call_always_tail
  - call_never_tail
  - call_never_inline
* LLVM backend calls hasRuntimeBitsIgnoringComptime in more places to avoid unnecessarily depending on comptimeOnly being resolved for some types.
* LLVM backend: remove duplicate code for setting linkage and value name. The canonical place for this is in `updateDeclExports`.
* LLVM backend: do some assembly template massaging to make `%%` rendered as `%`. More hacks will be needed to make inline assembly catch up with stage1.
2022-03-06  stage2: rework `@mulAdd`  (Andrew Kelley)
* mul_add AIR instruction: use `pl_op` instead of `ty_pl`. The type is always the same as the operand; no need to waste bytes redundantly storing the type.
* AstGen: use coerced_ty for all the operands except for one which we use to communicate the type.
* Sema: use the correct source location for requireRuntimeBlock in handling of `@mulAdd`.
* native backends: handle liveness even for the functions that are TODO.
* C backend: implement `@mulAdd`. It lowers to libc calls.
* LLVM backend: make `@mulAdd` handle all float types.
  - improved fptrunc and fpext to handle f80 with compiler-rt calls.
* Value.mulAdd: handle all float types and use the `@mulAdd` builtin.
* behavior tests: revert the changes to testing `@mulAdd`. These changes broke the test coverage, making it only tested at compile-time.

Improved f80 support:

* std.math.fma handles f80
* move fma functions from freestanding libc to compiler-rt
  - add __fmax and fmal
  - make __fmax and fmaq only exported when they don't alias fmal.
  - make their linkage weak just like the rest of compiler-rt symbols.
* removed `longDoubleIsF128` and replaced it with `longDoubleIs` which takes a type as a parameter. The implementation is now more accurate and handles more targets. Similarly, in stage2 the function CTypes.sizeInBits is more accurate for long double for more targets.
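For reference, the semantics of the builtin being reworked, shown as a minimal test (not from the commit):

```zig
const std = @import("std");

test "@mulAdd computes a * b + c with a single rounding" {
    const a: f64 = 2.0;
    const b: f64 = 3.0;
    const c: f64 = 1.0;
    // The result type equals the operand type, which is why pl_op can
    // drop the redundant type field.
    try std.testing.expectEqual(@as(f64, 7.0), @mulAdd(f64, a, b, c));
}
```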
2022-03-06  stage2: implement `@mulAdd` for scalar floats  (John Schmidt)
2022-03-03  wasm: Implement `@wasmMemoryGrow` builtin  (Luuk de Gram)
Similarly to the other wasm builtin, this implements the grow variation where the memory index is a comptime-known value. The operand as well as the result are runtime values. Semantic analysis also verifies that the target we are building for is wasm, and emits a compile error otherwise. This means that backends other than the wasm and LLVM backends do not have to handle this AIR instruction.
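A rough sketch of how the two wasm builtins fit together. The exact integer types involved have varied between Zig versions, so treat the signature details as assumptions, and note that this only compiles when targeting wasm, as described above:

```zig
// Grows linear memory 0 by `extra_pages` and reports success. The memory
// index must be comptime-known; the delta and results are runtime values.
fn ensureExtraPages(extra_pages: u32) bool {
    _ = @wasmMemorySize(0); // current size in pages, if needed
    // @wasmMemoryGrow returns the previous size, or -1 on failure.
    return @wasmMemoryGrow(0, extra_pages) != -1;
}
```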
2022-03-03  wasm: Implement `@wasmMemorySize()` builtin  (Luuk de Gram)
This implements the `wasmMemorySize` builtin, in Sema and the Wasm backend. The stage2 implementation differs from stage1 in that `index` must be a comptime value. The stage1 variant is incorrect, as the index is part of the instruction encoding and therefore cannot be a runtime value.
2022-02-28  stage2: fix frame_address AIR instruction  (Andrew Kelley)
Various places were assuming different union tags. Now it is consistently a no-op instruction, just like the similar instruction ret_addr.
2022-02-28  stage2: implement `@frameAddress`  (Veikka Tuominen)
2022-02-26  stage2: implement `@unionInit`  (Andrew Kelley)
The ZIR instruction `union_init_ptr` is renamed to `union_init`. I made it always use by-value semantics for now, not taking the time to invest in result location semantics, in case we decide to change the rules for unions. This way is much simpler.

There is a new AIR instruction: union_init. This is for a comptime-known tag and a runtime-known field value. vector_init is renamed to aggregate_init, which solves a TODO comment.
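A brief example of the builtin's by-value form (the union type here is made up for illustration):

```zig
const std = @import("std");

const Value = union(enum) {
    int: i64,
    boolean: bool,
};

fn makeInt(runtime_val: i64) Value {
    // The field name is comptime-known; the initializer may be
    // runtime-known, matching the new union_init AIR instruction.
    return @unionInit(Value, "int", runtime_val);
}

test "@unionInit with a comptime-known tag and a runtime value" {
    try std.testing.expectEqual(@as(i64, 42), makeInt(42).int);
}
```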
2022-02-24  stage2: implement fieldParentPtr  (Veikka Tuominen)
2022-02-22  liveness: add helper for extracting liveness of switch branch  (Jakub Konka)
2022-02-19  stage2: implement errunion_payload_ptr_set  (Veikka Tuominen)
2022-02-19  Merge pull request #10924 from ziglang/air-independence-day  (Andrew Kelley)
AIR independence day
2022-02-18  stage2: make AIR not reference ZIR for inline assembly  (Andrew Kelley)
Instead it stores all the information it needs into AIR.

Closes #10784.
2022-02-18  stage2: Implement `@bitReverse` and `@byteSwap`  (Cody Tapscott)
This change implements the above built-ins for Sema and the LLVM backend. Other backends have had placeholders added for lowering.
2022-02-09  stage2: implement all builtin floatops for f{16,32,64}  (John Schmidt)
- Merge `floatop.zig` and `floatop_stage1.zig` since most tests now pass on stage2.
- Add more behavior tests for a bunch of functions.
2022-02-07  stage2: implement @sqrt for f{16,32,64}  (John Schmidt)
Support for f128, comptime_float, and c_longdouble requires improvements to compiler_rt and will be implemented in a later PR. Some of the code in this commit could be made more generic, for instance `llvm.airSqrt` could probably be `llvm.airUnaryMath`, but let's cross that bridge when we get to it.
2022-01-30  stage2: implement shl_exact and shr_exact  (Andrew Kelley)
These produce an undefined value when any 1 bits are shifted out. New AIR instruction: shr_exact.
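For illustration (not from the commit), the language-level contract these instructions implement:

```zig
const std = @import("std");

test "@shlExact/@shrExact require that no set bits are lost" {
    const x: u8 = 0b0001_0110;
    try std.testing.expectEqual(@as(u8, 0b0101_1000), @shlExact(x, 2));
    // @shlExact(x, 4) would shift a 1 bit out and, per the note above,
    // produce an undefined value.
    try std.testing.expectEqual(x, @shrExact(@shlExact(x, 2), 2));
}
```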
2022-01-20  stage2: fix compilation on 32 bit targets  (Andrew Kelley)
2022-01-20  stage2: implement tuples  (Andrew Kelley)
* AIR instruction vector_init gains the ability to init arrays and tuples in addition to vectors. This will probably also gain the ability to initialize structs and be renamed to `aggregate_init`.
* AstGen prefers to use an `anon_array_init` ZIR instruction for local variables when the init expr is an array literal and there is no type.
2022-01-18  stage2: implement `@prefetch`  (Andrew Kelley)
This reverts commit f423b5949b8722d4b290f57c3d06d015e39217b0, re-instating commit d48e4245b68bf25c7f41804a5012ac157a5ee546.
2022-01-18  Revert "stage2: implement `@prefetch`"  (Andrew Kelley)
This reverts commit d48e4245b68bf25c7f41804a5012ac157a5ee546. I have no idea why this is failing Drone CI, but in a branch, reverting this commit solved the problem.
2022-01-15  stage2: implement `@prefetch`  (Andrew Kelley)
2022-01-13  stage2: fix build on 32-bit ISAs  (Andrew Kelley)
Fixes regression introduced in 93b854eb745ab3294054ae71150fe60f134f4d10.
2022-01-12  stage2: implement `@ctz` and `@clz` including SIMD  (Andrew Kelley)
AIR:
* `array_elem_val` is now allowed to be used with a vector as the array type.
* New instructions: splat, vector_init

AstGen:
* The splat ZIR instruction uses coerced_ty for the ResultLoc, avoiding an unnecessary `as` instruction, since the coercion will be performed in Sema.
* Builtins that accept vectors now ignore the type parameter. Comment from this commit reproduced here: The accepted proposal #6835 tells us to remove the type parameter from these builtins. To stay source-compatible with stage1, we still observe the parameter here, but we do not encode it into the ZIR. To implement this proposal in stage2, only AstGen code will need to be changed.

Sema:
* `clz` and `ctz` ZIR instructions are now handled by the same function, which accepts an AIR tag and a comptime eval function pointer to differentiate.
* `@typeInfo` for vectors is implemented.
* `@splat` is implemented. It takes advantage of `Value.Tag.repeated` 😎
* `elemValue` is implemented for vectors, when the index is a scalar. Handling a vector index is still TODO.
* Element-wise coercion is implemented for vectors. It could probably be optimized a bit, but it is at least complete & correct.
* `Type.intInfo` supports vectors, returning int info for the element.
* `Value.ctz` initial implementation. Needs work.
* `Value.eql` is implemented for arrays and vectors.

LLVM backend:
* Implement vector support when lowering `array_elem_val`.
* Implement vector support when lowering `ctz` and `clz`.
* Implement `splat` and `vector_init`.
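A small example of the vector forms, written with today's single-argument builtin signatures (as the note above explains, the redundant type parameter was still accepted at the time of this commit for stage1 compatibility):

```zig
const std = @import("std");

test "@clz, @ctz and @splat on vectors" {
    const v: @Vector(4, u8) = @splat(0b0000_1000);
    const tz = @ctz(v);
    const lz = @clz(v);
    // The result is a vector of element-wise bit counts.
    try std.testing.expectEqual(@as(u4, 3), tz[0]);
    try std.testing.expectEqual(@as(u4, 4), lz[0]);
}
```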
2022-01-08  stage2: @errorName sema+llvm  (Robin Voetter)
2021-12-27  stage2: LLVM backend: implement `@tagName` for enums  (Andrew Kelley)
Introduced a new AIR instruction: `tag_name`. Reasons to do this instead of lowering it in Sema to a switch, function call, array lookup, or if-else tower:

* Sema is a bottleneck; do less work in Sema whenever possible.
* If any optimization passes run, and the operand becomes comptime-known, then it could change to have a comptime result value instead of lowering to a function or array or something which would then have to be garbage-collected.
* Backends may want to choose to use a function and a switch branch, or they may want to use a different strategy.

Codegen for `@tagName` is implemented for the LLVM backend but not any others yet.

Introduced some new `Type` tags:

* `const_slice_u8_sentinel_0`
* `manyptr_const_u8_sentinel_0`

The motivation for this was to make typeof() on the tag_name AIR instruction non-allocating. A bunch more enum tests are passing now.
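A short usage example (the enum here is made up for illustration):

```zig
const std = @import("std");

const Color = enum { red, green, blue };

fn nameOf(c: Color) [:0]const u8 {
    // The new sentinel-terminated Type tags exist so this result type
    // can be produced without allocating.
    return @tagName(c);
}

test "@tagName on a runtime-known enum value" {
    try std.testing.expectEqualStrings("green", nameOf(.green));
}
```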
2021-12-21  stage2: @shlWithOverflow  (Robin Voetter)