Liam
898f14b772
backend/x64: use mmap for all code allocations on Linux
2022-04-19 18:45:46 +01:00
Merry
78b4ba10c9
Migrate to mcl
2022-04-19 18:05:04 +01:00
Merry
ed9955891f
mcl: Fix bug in non-template mcl::bit::ones
2022-04-19 18:05:04 +01:00
Merry
95422b2091
mcl: bit_field: Fix incorrect argument order in replicate_element
2022-04-19 16:53:45 +01:00
Merry
de4154aa18
externals: Remove mp and replace uses with mcl
2022-04-19 16:28:28 +01:00
Merry
f642637971
externals: Add mcl v0.1.3
...
Merge commit '7eb1d05f63c7ba8df6a203138932ea428ab4aa49' as 'externals/mcl'
2022-04-19 16:27:57 +01:00
Merry
7eb1d05f63
Squashed 'externals/mcl/' content from commit a86a53843
...
git-subtree-dir: externals/mcl
git-subtree-split: a86a53843f82e4d6ca2f2e1437824495acad2712
2022-04-19 16:27:52 +01:00
Wunkolo
27bbf4501b
backend/x64: Use upper EVEX registers as scratch space
...
AVX512 adds an additional **16** SIMD registers, for a total of 32 SIMD
registers, accessible by using EVEX-encoded instructions. Rather than
going through the `ScratchXmm` function, which adds register pressure
and spilling, AVX512-enabled contexts can directly use the
`xmm{16-31}` registers as intermediate scratch registers.
2022-04-06 17:41:55 +01:00
Wunkolo
f0b9cb9ccf
tests/A64: Add {S,U}SHL instruction unit tests
2022-04-06 17:41:55 +01:00
merry
644172477e
Implement enable_cycle_counting
2022-04-03 16:10:32 +01:00
merry
aac1f6ab1b
Implement halt_reason
...
* Provide reason for halting and atomically update this.
* Allow user to specify a halt reason and return this information on halt.
* Check if halt was requested prior to starting execution.
2022-04-03 15:37:20 +01:00
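The halting scheme described in the bullets above can be sketched in plain C++ (names such as `HaltReason`, `HaltExecution`, and `Run` are invented for illustration; the real implementation lives in the JIT's dispatch and atomic helpers):

```cpp
#include <atomic>
#include <cstdint>

// Hypothetical halt-reason bits (names invented for illustration).
enum HaltReason : std::uint32_t {
    UserRequested     = 1u << 0,
    CacheInvalidation = 1u << 1,
};

struct Jit {
    std::atomic<std::uint32_t> halt_reason{0};

    // Atomically OR in a reason; safe to call from another thread.
    void HaltExecution(std::uint32_t reason) {
        halt_reason.fetch_or(reason, std::memory_order_acq_rel);
    }

    // Returns the accumulated reasons, clearing them; 0 means "ran to halt-free completion".
    std::uint32_t Run() {
        // Check if halt was requested prior to starting execution.
        std::uint32_t reason = halt_reason.exchange(0, std::memory_order_acq_rel);
        if (reason != 0)
            return reason;
        // ... dispatch into generated code here ...
        return halt_reason.exchange(0, std::memory_order_acq_rel);
    }
};
```

The key point is that reasons are accumulated with an atomic OR rather than a plain store, so concurrent halt requests cannot overwrite each other.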
merry
116297ccd5
common: Add atomic
...
Implement atomic OR operation on u32
2022-04-03 15:30:39 +01:00
merry
f6be6bc14b
emit_x64_memory: Appease MSVC
...
Associated with changes in 8bcd46b7e9
2022-04-02 20:41:34 +01:00
merry
8bcd46b7e9
emit_x64_memory: Ensure 128-bit loads/stores are atomic
2022-04-02 19:33:48 +01:00
merry
e27733464b
emit_x64_memory: Always order exclusive accesses
2022-04-02 19:33:15 +01:00
merry
cd91a36613
emit_x64_memory: Fix bug in 16-bit ordered EmitReadMemoryMov
2022-04-02 19:32:46 +01:00
merry
9cadab8fa9
backend/emit_x64_memory: Enforce memory ordering
2022-03-29 20:57:34 +01:00
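The ordering the memory emitters must enforce can be stated in C++ terms (a sketch with hypothetical names, not the emitter's actual code): on x64, plain loads already have acquire semantics and plain stores have release semantics, so only sequentially consistent stores need an extra barrier (an `XCHG` or `MOV`+`MFENCE`).

```cpp
#include <atomic>
#include <cstdint>

// Stand-in for one guest memory location (name hypothetical).
std::atomic<std::uint32_t> guest_word{0};

// Guest acquire-load (LDAR-like): on x64 this compiles to a plain MOV.
std::uint32_t LoadAcquire() {
    return guest_word.load(std::memory_order_acquire);
}

// Guest ordered store (STLR-like): seq_cst forces the store->load
// barrier that a plain x64 MOV store does not provide.
void StoreOrdered(std::uint32_t value) {
    guest_word.store(value, std::memory_order_seq_cst);
}
```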
merry
675efecf47
emit_x64_memory: Combine A32 and A64 memory code
2022-03-29 20:51:50 +01:00
merry
af2d50288f
A64/sys_ic: Return to dispatch on possible invalidation
2022-03-27 15:27:34 +01:00
merry
cf0709c7f1
emit_x64_memory: Share Emit{Read,Write}MemoryMove
2022-03-26 16:51:55 +00:00
merry
64adc91ca2
emit_x64_memory: Move EmitFastmemVAddr to common file
2022-03-26 16:49:14 +00:00
merry
18f02e2088
emit_x64_memory: Move EmitVAddrLookup to common file
2022-03-26 16:46:06 +00:00
merry
3d657c450a
emit_x64_memory: Share EmitDetectMisalignedVAddr
2022-03-26 16:09:56 +00:00
merry
fb586604b4
emit_x64_memory: Share constants
2022-03-26 16:05:03 +00:00
merry
5cf2d59913
A32: Add AccType information and propagate to IR-level
2022-03-26 15:38:10 +00:00
merry
614ecb7020
A64: Propagate AccType information to IR-level
2022-03-26 15:38:10 +00:00
merry
879f211686
ir/value: Add AccType to Value
2022-03-26 15:38:10 +00:00
Alexandre Bouvier
9d369436d8
cmake: Fix unicorn and llvm
2022-03-22 20:27:01 +00:00
merry
7b69c87ffc
fuzz_arm: Add offset thumb instruction test
...
Test thumb instructions when (PC % 4) == 2
2022-03-20 21:05:55 +00:00
merry
c78b82dd2c
vfp: VLDM is UNPREDICTABLE when n is R15 in thumb mode
2022-03-20 20:52:11 +00:00
Sergi Granell
0ec4a23710
thumb32: Implement LDA and STL
...
Note that those are ARMv8 additions to the Thumb instruction set.
2022-03-20 20:16:27 +00:00
merry
e1a266b929
A32: Implement SHA256SU1
2022-03-20 13:59:18 +00:00
merry
ab4c6cfefb
A32: Implement SHA256SU0
2022-03-20 13:59:18 +00:00
merry
c022a778d6
A32: Implement SHA256H, SHA256H2
2022-03-20 13:59:18 +00:00
merry
bb713194a0
backend/x64: Implement SHA256 polyfills
2022-03-20 13:59:18 +00:00
merry
98cff8dd0d
IR: Implement SHA256MessageSchedule{0,1}
2022-03-20 13:59:18 +00:00
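The message-schedule IR operations correspond to the standard SHA-256 σ0/σ1 functions from FIPS 180-4; a minimal C++ sketch of that math (standard SHA-256 definitions, not dynarmic's IR code; `ExtendW` is a hypothetical helper name):

```cpp
#include <cstdint>

// 32-bit rotate right (n must be nonzero to avoid UB in the shift).
static std::uint32_t rotr(std::uint32_t x, unsigned n) {
    return (x >> n) | (x << (32 - n));
}

// FIPS 180-4 small sigma functions used by the SHA-256 message schedule.
std::uint32_t sigma0(std::uint32_t x) { return rotr(x, 7) ^ rotr(x, 18) ^ (x >> 3); }
std::uint32_t sigma1(std::uint32_t x) { return rotr(x, 17) ^ rotr(x, 19) ^ (x >> 10); }

// One schedule-extension step: W[t] = sigma1(W[t-2]) + W[t-7] + sigma0(W[t-15]) + W[t-16].
std::uint32_t ExtendW(std::uint32_t w16, std::uint32_t w15,
                      std::uint32_t w7, std::uint32_t w2) {
    return sigma1(w2) + w7 + sigma0(w15) + w16;
}
```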
merry
f0a4bf1f6a
IR: Implement SHA256Hash
2022-03-20 13:59:18 +00:00
merry
81536e4630
github: Use GCC 10 on Ubuntu
2022-03-20 13:59:18 +00:00
merry
a4daad6336
block_of_code: Add HostFeature SHA
2022-03-20 00:13:03 +00:00
merry
36f1541347
externals: Update catch to 2.13.8
2022-03-19 17:51:45 +00:00
Andrea Pappacoda
e4b669fd5b
build: remove extra include path for system vixl
...
As far as I know, the only pkg-config file provided by Vixl is the one generated by Meson when applying my yet-to-be-merged patch.
That extra include path was needed because I mistakenly thought that adding `vixl` as an include subdirectory was not necessary, but I fixed that in my latest revision - more details here: https://github.com/Linaro/vixl/pull/7#discussion_r778167004
The fix has already landed in Debian and Ubuntu, which as far as I know are the only Linux distros that ship my patch, so manually adding that include directory shouldn't be necessary anymore.
2022-03-13 20:24:26 +00:00
Merry
bcfe377aaa
x64/reg_alloc: More zero extension paranoia
2022-03-06 12:24:50 +00:00
Merry
316b95bb3f
{a32,a64}_emit_x64_memory: Zero extension paranoia
2022-03-06 12:10:40 +00:00
Merry
0fd32c5fa4
a64_emit_x64_memory: Fix bug in 128 bit exclusive write fallback
2022-02-28 19:53:43 +00:00
merry
5ea2b49ef0
backend/x64: Inline exclusive memory access operations (#664)
...
* a64_emit_x64_memory: Add Unsafe_IgnoreGlobalMonitor optimization
* a32_emit_x64_memory: Add Unsafe_IgnoreGlobalMonitor optimization
* a32_emit_x64_memory: Remove dead code
* {a32,a64}_emit_x64_memory: Also verify vaddr in Exclusive{Read,Write}MemoryInlineUnsafe
* a64_emit_x64_memory: Full fallback for ExclusiveWriteMemoryInlineUnsafe
* a64_emit_x64_memory: Inline full locking
* a64_emit_x64_memory: Allow inlined locking to be optionally removed
* spin_lock: Use xbyak instead of inline asm
* a64_emit_x64_memory: Recompile on exclusive fastmem failure
* Avoid variable shadowing
* a32_emit_x64_memory: Implement recompilation
* Fix recompilation
* spin_lock: Clang format fix
* fix fallback function calls
2022-02-28 08:13:10 +00:00
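The `spin_lock` mentioned in the bullets above is JIT-emitted with xbyak in the real backend (a `lock`-prefixed exchange in a pause loop); a plain C++ stand-in with the same semantics, for illustration only:

```cpp
#include <atomic>

// Functional equivalent of the backend's emitted spin lock: acquire by
// atomically transitioning storage 0 -> 1, release by storing 0.
struct SpinLock {
    std::atomic<int> storage{0};

    void Lock() {
        int expected = 0;
        // Spin until the compare-exchange succeeds (0 -> 1).
        while (!storage.compare_exchange_weak(expected, 1, std::memory_order_acquire)) {
            expected = 0;
        }
    }

    void Unlock() {
        storage.store(0, std::memory_order_release);
    }
};
```

Emitting the lock with xbyak rather than inline asm keeps the code generator portable across compilers (MSVC has no GCC-style inline asm on x64), which is presumably the motivation for that bullet.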
merry
0a11e79b55
backend/x64: Ensure all HostCalls are appropriately zero-extended
2022-02-27 20:04:44 +00:00
merry
6c4fa780e0
{a32,a64}_emit_x64_memory: Ensure return values of fastmem callbacks are zero-extended
2022-02-27 19:58:23 +00:00
merry
dc3e70c552
fuzz_arm: Sometimes we have to step more to sync up with unicorn
...
This happens if unicorn happens to jump back on an IT instruction.
2022-02-27 19:51:09 +00:00
merry
593de127d2
a64_emit_x64: Clear fastmem patch information on ClearCache
2022-02-27 19:50:05 +00:00
Merry
c90173151e
backend/x64: Split off memory emitters
2022-02-26 21:25:09 +00:00