| author | Andrew Kelley <andrew@ziglang.org> | 2021-01-02 12:32:30 -0700 |
|---|---|---|
| committer | Andrew Kelley <andrew@ziglang.org> | 2021-01-02 19:11:19 -0700 |
| commit | 9362f382ab7023592cc1d71044217b847b122406 | |
| tree | 3587f4c88b949673a94e995367414d80a5ef68af | /src/codegen.zig |
| parent | fea8659b82ea1a785f933c58ba9d65ceb05a4094 | |
stage2: implement function call inlining in the frontend
* remove the -Ddump-zir option; that's handled through --verbose-ir.
* rework Fn to have an is_inline flag without requiring any more memory
  on the heap per function (a sketch of one possible encoding follows
  this list).
* implement a rough first version of dumping typed ZIR (TZIR), which is
  a lot more helpful for debugging than what we had before. We don't
  have a way to parse it yet, though.
* keep track of whether the inline-ness of a function changes, because
  if it does we have to go update callsites.
* add a compile error for inline and export used together (an example
  follows this list).
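The commit message doesn't spell out how the flag avoids extra heap memory, but the `mod_fn.data.body` access in the diff below hints at one way to store it for free: fold it into the tag of the per-function state union. A minimal sketch of that idea, with illustrative names (Fn, Body, and the tag set here are assumptions, not the compiler's actual layout):

```zig
const std = @import("std");

// Hypothetical: `Body` stands in for a function's analyzed instructions.
const Body = struct { instructions: []const u32 };

const Fn = struct {
    // Two success tags instead of a separate bool field, so is_inline
    // costs zero additional heap bytes per function.
    data: union(enum) {
        queued,
        in_progress,
        body: Body, // analyzed, callable the normal way
        inline_body: Body, // analyzed, must be inlined at each callsite
        failure,
    },

    fn isInline(self: Fn) bool {
        // Comparing a tagged union against an enum literal compares the tag.
        return self.data == .inline_body;
    }
};

pub fn main() void {
    const f = Fn{ .data = .{ .inline_body = .{ .instructions = &.{} } } };
    std.debug.print("is_inline = {}\n", .{f.isInline()});
}
```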
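For the inline/export error, here is the kind of source the new check should reject. This assumes the callconv(.Inline) spelling of an inline function; the exact syntax accepted and the error text are not shown in this commit, so treat it as illustrative:

```zig
// Illustrative source that the new compile error should reject: an exported
// symbol needs a standalone machine-code body in the object file, but calls
// to an inline function are evaluated in the frontend and never produce one.
export fn add(a: i32, b: i32) callconv(.Inline) i32 {
    return a + b;
}
```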
Inline function calls and comptime function calls are implemented the
same way: a block instruction is set up to capture the result, and then
a scope is set up that has an is_comptime flag plus some extra state if
the scope is being inlined.
When analyzing `ret` instructions, Zig looks for inlining state in the
scope; if it is found, `ret` is treated as a `break` instruction instead,
with the target block being the one set up at the inline callsite.
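Below is a minimal runnable sketch of that rewrite; Block, Scope, and analyzeRet are illustrative stand-ins, not the compiler's actual types:

```zig
const std = @import("std");

// The block installed at the inline callsite to capture the call's result.
const Block = struct { result: ?i64 = null };

const Scope = struct {
    is_comptime: bool = false,
    // Non-null while the scope is being analyzed as an inline call.
    inlining: ?*Block = null,
};

fn analyzeRet(scope: *const Scope, operand: i64) void {
    if (scope.inlining) |block| {
        // Inside an inlined body, `ret` behaves like `break`: deliver the
        // operand to the callsite block instead of emitting a return.
        block.result = operand;
    } else {
        // Normal path: a real return instruction would be emitted here.
    }
}

pub fn main() void {
    var block: Block = .{};
    const scope = Scope{ .is_comptime = true, .inlining = &block };
    analyzeRet(&scope, 42);
    std.debug.print("captured result: {any}\n", .{block.result});
}
```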
Follow-up items:
* Complete the debug TZIR dumping code.
* Don't redundantly generate ZIR for each inline/comptime function
call. Instead we should add a new state enum tag to Fn.
* Implement comptime and inlining branch quotas.
* Add more test cases.
Diffstat (limited to 'src/codegen.zig')
| mode | file | lines changed |
|---|---|---|
| -rw-r--r-- | src/codegen.zig | 10 |

1 file changed, 5 insertions, 5 deletions
```diff
diff --git a/src/codegen.zig b/src/codegen.zig
index 6530b687e5..588c3dec4c 100644
--- a/src/codegen.zig
+++ b/src/codegen.zig
@@ -532,7 +532,7 @@ fn Function(comptime arch: std.Target.Cpu.Arch) type {
                 self.code.items.len += 4;
                 try self.dbgSetPrologueEnd();
-                try self.genBody(self.mod_fn.analysis.success);
+                try self.genBody(self.mod_fn.data.body);
                 const stack_end = self.max_end_stack;
                 if (stack_end > math.maxInt(i32))
@@ -576,7 +576,7 @@ fn Function(comptime arch: std.Target.Cpu.Arch) type {
                     });
                 } else {
                     try self.dbgSetPrologueEnd();
-                    try self.genBody(self.mod_fn.analysis.success);
+                    try self.genBody(self.mod_fn.data.body);
                     try self.dbgSetEpilogueBegin();
                 }
             },
@@ -593,7 +593,7 @@ fn Function(comptime arch: std.Target.Cpu.Arch) type {
                 try self.dbgSetPrologueEnd();
-                try self.genBody(self.mod_fn.analysis.success);
+                try self.genBody(self.mod_fn.data.body);
                 // Backpatch stack offset
                 const stack_end = self.max_end_stack;
@@ -638,13 +638,13 @@ fn Function(comptime arch: std.Target.Cpu.Arch) type {
                     writeInt(u32, try self.code.addManyAsArray(4), Instruction.pop(.al, .{ .fp, .pc }).toU32());
                 } else {
                     try self.dbgSetPrologueEnd();
-                    try self.genBody(self.mod_fn.analysis.success);
+                    try self.genBody(self.mod_fn.data.body);
                     try self.dbgSetEpilogueBegin();
                 }
             },
             else => {
                 try self.dbgSetPrologueEnd();
-                try self.genBody(self.mod_fn.analysis.success);
+                try self.genBody(self.mod_fn.data.body);
                 try self.dbgSetEpilogueBegin();
             },
         }
```
