What is `sparcel`, you might ask? Good question!
If you take a peek in the SPARC v8 manual, §2.2, it is quite explicit that SPARC
v8 is a big-endian architecture. No little-endian or mixed-endian support to be
found here.
On the other hand, the SPARC v9 manual, in §3.2.1.2, states that it has support
for mixed-endian operation, with big-endian mode being the default.
OK, so `sparcel` must just be referring to SPARC v9 running in little-endian
mode, surely?
Nope:
* https://github.com/llvm/llvm-project/blob/40b4fd7a3e81d32b29364a1b15337bcf817659c0/llvm/lib/Target/Sparc/SparcTargetMachine.cpp#L226
* https://github.com/llvm/llvm-project/blob/40b4fd7a3e81d32b29364a1b15337bcf817659c0/llvm/lib/Target/Sparc/SparcTargetMachine.cpp#L104
So, `sparcel` in LLVM is referring to some sort of fantastical little-endian
SPARC v8 architecture. I've scoured the internet and I can find absolutely no
evidence that such a thing exists or has ever existed. In fact, I can find no
evidence that a little-endian implementation of SPARC v9 ever existed, either.
Or any SPARC version, actually!
The support was added here: https://reviews.llvm.org/D8741
Notably, there is no mention whatsoever of what CPU this might be referring to,
and no justification given for the "but some are little" comment added in the
patch.
My best guess is that this might have been some private exercise in creating a
little-endian version of SPARC that never saw the light of day. Given that SPARC
v8 explicitly doesn't support little-endian operation (let alone little-endian
instruction encoding!), and no CPU is known to be implemented as such, I think
it's very reasonable for us to just remove this support.
|
|
This is a misfeature that we inherited from LLVM:
* https://reviews.llvm.org/D61259
* https://reviews.llvm.org/D61939
(`aarch64_32` and `arm64_32` are equivalent.)
I truly have no idea why this triple passed review in LLVM. It is, to date, the
*only* tag in the architecture component that is not, in fact, an architecture.
In reality, it is just an ILP32 ABI for AArch64 (*not* AArch32).
The triples that use `aarch64_32` look like `aarch64_32-apple-watchos`. Yes,
that triple is exactly what you think; it has no ABI component. They really,
seriously did this.
Since only Apple could come up with silliness like this, it should come as no
surprise that no one else uses `aarch64_32`. Later on, a GNU ILP32 ABI for
AArch64 was developed, and support was added to LLVM:
* https://reviews.llvm.org/D94143
* https://reviews.llvm.org/D104931
Here, sanity seems to have prevailed, and a triple using this ABI looks like
`aarch64-linux-gnu_ilp32` as you would expect.
As can be seen from the diffs in this commit, there was plenty of confusion
throughout the Zig codebase about what exactly `aarch64_32` was. So let's just
remove it. In its place, we'll use `aarch64-watchos-ilp32`,
`aarch64-linux-gnuilp32`, and so on. We'll then translate these appropriately
when talking to LLVM. Hence, this commit adds the `ilp32` ABI tag (we already
have `gnuilp32`).
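A rough sketch of what that translation could look like (the function name and the exact switch here are illustrative only, not the actual code; the ABI component gets a similar mapping, e.g. `gnuilp32` to `gnu_ilp32`):

```zig
const std = @import("std");

// Illustrative only: recover LLVM's `aarch64_32` spelling from our
// `aarch64-watchos-ilp32` triple when handing the target to LLVM.
fn llvmArchName(target: std.Target) []const u8 {
    return switch (target.cpu.arch) {
        .aarch64 => if (target.abi == .ilp32 and target.os.tag == .watchos)
            "aarch64_32"
        else
            "aarch64",
        else => @tagName(target.cpu.arch),
    };
}
```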
|
|
|
|
|
|
|
|
With this rewrite we can call functions inside of inline assembly, enabling us
to use the default start.zig logic.
All that's left is to implement LR/SC loops for atomically manipulating 1- and
2-byte values, after which we can use the segfault handler logic.
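For reference, the missing piece is the classic word-aligned LR/SC sequence,
since RISC-V has no sub-word `lr`/`sc`. A minimal sketch of the technique (the
helper name is made up, and this is not the backend's actual codegen):

```zig
// Atomically swap one byte by operating on the aligned word containing it.
fn atomicSwapByte(ptr: *u8, new_byte: u8) u8 {
    const addr = @intFromPtr(ptr);
    const word_ptr: *u32 = @ptrFromInt(addr & ~@as(usize, 3));
    const shift: u5 = @intCast((addr & 3) * 8);
    const keep_mask = ~(@as(u32, 0xff) << shift);
    const new_bits = @as(u32, new_byte) << shift;
    var old_word: u32 = undefined;
    var tmp: u32 = undefined;
    // Load-reserve the containing word, splice in the new byte, then
    // store-conditional; retry if another hart broke the reservation.
    asm volatile (
        \\1:
        \\lr.w %[old], (%[mem])
        \\and %[tmp], %[old], %[keep]
        \\or %[tmp], %[tmp], %[new]
        \\sc.w %[tmp], %[tmp], (%[mem])
        \\bnez %[tmp], 1b
        : [old] "=&r" (old_word),
          [tmp] "=&r" (tmp),
        : [mem] "r" (word_ptr),
          [keep] "r" (keep_mask),
          [new] "r" (new_bits),
        : "memory"
    );
    return @truncate(old_word >> shift);
}
```

The 2-byte variant is the same loop with a `0xffff` mask and 2-byte alignment
handling.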
|
|
|
|
|
|
|
|
|
|
|
|
I was doing duplicate work: `elemOffset` multiplied by the ABI size, and then the `ptr_add` `genBinOp` multiplied by it again.
This led to writes happening in the wrong place.
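In other words (hypothetical names, just to illustrate the shape of the bug):

```zig
// The element size was effectively applied twice:
fn buggyElemAddr(base: usize, index: usize, abi_size: usize) usize {
    const offset = index * abi_size; // elemOffset already scaled the index...
    return base + offset * abi_size; // ...and ptr_add's genBinOp scaled again
}
```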
|
|
|
|
|
|
we can run `std.debug.print` now, with both run-time strings and integers!
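For example, something along these lines now compiles and runs under the backend (a trivial illustration, not taken from the test suite):

```zig
const std = @import("std");

pub fn main() void {
    const name: []const u8 = "riscv64"; // printed through the run-time {s} path
    const answer: u32 = 42;             // formatted at run time with {d}
    std.debug.print("hello from {s}, the answer is {d}\n", .{ name, answer });
}
```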
|
|
The CSRs `avl` and `vtype` are considered caller-saved, so they could have been clobbered inside a called function.
The easiest way to handle this is to set the cached `vtype` and `avl` to null, so that the next time something
needs to set them, an instruction is emitted instead of relying on a potentially stale setting.
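Roughly (the types and names here are invented for illustration; the real cache lives in the backend's function state):

```zig
// Sketch: after emitting a call, drop the cached vector configuration so the
// next vector op emits a fresh vsetvli instead of trusting clobbered state.
const VectorState = struct {
    vtype: ?u64 = null,
    avl: ?u64 = null,
};

fn invalidateAfterCall(state: *VectorState) void {
    state.vtype = null;
    state.avl = null;
}
```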
|
|
Closes #12012
|
|
|
|
|
|
These are quite old GPUs, and it is unlikely that Zig will ever be able to
target them.
See: https://en.wikipedia.org/wiki/Radeon_HD_2000_series
|
|
`ip.get` specifically disallows access through `extern_func` keys.
|
|
stage2-wasm: overflow ops improvement
|
|
|
|
|
|
Tracked by #20680
|
|
Additionally, fixed a bug in `shr` on signed big ints.
|
|
Added behavior tests to verify the implementation.
|
|
Surprising that this was only found now.
|
|
more RISC-V backend progress
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Now, when the user asks for debug undefined constants, we generate them once and dedup within the function decl. In the RISC-V backend this takes 2 instructions instead of 7.
TODO: we still need to dedupe across function decl boundaries.
|
|
Same validation rules as the backing integer would have.
|
|
This test would have failed in the past, but the underlying issue was fixed sometime in the last year. Closes #15944
|
|
|
|
|
|
|
|
more RISC-V backend progress
|
|
|
|
|
|
|
|
|
|
|
|
Reorganize how the `binOp` and `genBinOp` functions work.
I've spent quite a while here reading carefully through the spec, and many more
tests are now enabled because the old design had several critical issues that
this reorganization fixes.
There are some regressions that will take a long time to figure out individually,
so I will ignore them for now and hope they get fixed along the way. Once we're
closer to 100% passing, I'll start diving into them one by one.
|
|
|