| author | mlugg <mlugg@mlugg.co.uk> | 2025-09-25 03:45:47 +0100 |
|---|---|---|
| committer | Jacob Young <jacobly0@users.noreply.github.com> | 2025-09-27 18:30:52 -0400 |
| commit | 0c476191a4ad3514b2d10ba083ba2f86fee9b223 (patch) | |
| tree | 6caf5db939c65417d0e48f67afb74a1d290a84bf /lib/std/debug/Pdb.zig | |
| parent | 1b0bde0d8dedfef324ba16fba5aa3a24b5fb9bbd (diff) | |
x86_64: generate better constant memcpy code
`rep movsb` usually isn't a great choice here. This commit makes the logic
which tentatively existed in `genInlineMemcpy` apply in more cases, and
in particular applies it to the "new" backend logic. Put simply, all
copies of 128 bytes or fewer will now attempt this path first: provided
there is an SSE register and/or a general-purpose register available, we
lower the operation using a sequence of 32-, 16-, 8-, 4-, 2-, and 1-byte
copy operations.
The feedback I got on this diff was "push it to master and if it
miscompiles I'll revert it", so don't blame me when it explodes.
Diffstat (limited to 'lib/std/debug/Pdb.zig')
0 files changed, 0 insertions, 0 deletions
