Module core::arch::aarch64

🔬 This is a nightly-only experimental API. (stdsimd #27731)
This is supported on AArch64 only.

Platform-specific intrinsics for the aarch64 platform.

See the module documentation for more details.

Structs

float32x2_t (Experimental, AArch64)

ARM-specific 64-bit wide vector of two packed f32.

float32x4_t (Experimental, AArch64)

ARM-specific 128-bit wide vector of four packed f32.

float64x1_t (Experimental, AArch64)

ARM-specific 64-bit wide vector of one packed f64.

float64x2_t (Experimental, AArch64)

ARM-specific 128-bit wide vector of two packed f64.

int16x2_t (Experimental, AArch64)

ARM-specific 32-bit wide vector of two packed i16.

int16x4_t (Experimental, AArch64)

ARM-specific 64-bit wide vector of four packed i16.

int16x8_t (Experimental, AArch64)

ARM-specific 128-bit wide vector of eight packed i16.

int32x2_t (Experimental, AArch64)

ARM-specific 64-bit wide vector of two packed i32.

int32x4_t (Experimental, AArch64)

ARM-specific 128-bit wide vector of four packed i32.

int64x1_t (Experimental, AArch64)

ARM-specific 64-bit wide vector of one packed i64.

int64x2_t (Experimental, AArch64)

ARM-specific 128-bit wide vector of two packed i64.

int8x4_t (Experimental, AArch64)

ARM-specific 32-bit wide vector of four packed i8.

int8x8_t (Experimental, AArch64)

ARM-specific 64-bit wide vector of eight packed i8.

int8x16_t (Experimental, AArch64)

ARM-specific 128-bit wide vector of sixteen packed i8.

int8x16x2_t (Experimental, AArch64)

ARM-specific type containing two int8x16_t vectors.

int8x16x3_t (Experimental, AArch64)

ARM-specific type containing three int8x16_t vectors.

int8x16x4_t (Experimental, AArch64)

ARM-specific type containing four int8x16_t vectors.

int8x8x2_t (Experimental, AArch64)

ARM-specific type containing two int8x8_t vectors.

int8x8x3_t (Experimental, AArch64)

ARM-specific type containing three int8x8_t vectors.

int8x8x4_t (Experimental, AArch64)

ARM-specific type containing four int8x8_t vectors.

poly16x4_t (Experimental, AArch64)

ARM-specific 64-bit wide vector of four packed u16.

poly16x8_t (Experimental, AArch64)

ARM-specific 128-bit wide vector of eight packed u16.

poly64x1_t (Experimental, AArch64)

ARM-specific 64-bit wide vector of one packed p64.

poly64x2_t (Experimental, AArch64)

ARM-specific 128-bit wide vector of two packed p64.

poly8x8_t (Experimental, AArch64)

ARM-specific 64-bit wide polynomial vector of eight packed u8.

poly8x16_t (Experimental, AArch64)

ARM-specific 128-bit wide polynomial vector of sixteen packed u8.

poly8x16x2_t (Experimental, AArch64)

ARM-specific type containing two poly8x16_t vectors.

poly8x16x3_t (Experimental, AArch64)

ARM-specific type containing three poly8x16_t vectors.

poly8x16x4_t (Experimental, AArch64)

ARM-specific type containing four poly8x16_t vectors.

poly8x8x2_t (Experimental, AArch64)

ARM-specific type containing two poly8x8_t vectors.

poly8x8x3_t (Experimental, AArch64)

ARM-specific type containing three poly8x8_t vectors.

poly8x8x4_t (Experimental, AArch64)

ARM-specific type containing four poly8x8_t vectors.

uint16x2_t (Experimental, AArch64)

ARM-specific 32-bit wide vector of two packed u16.

uint16x4_t (Experimental, AArch64)

ARM-specific 64-bit wide vector of four packed u16.

uint16x8_t (Experimental, AArch64)

ARM-specific 128-bit wide vector of eight packed u16.

uint32x2_t (Experimental, AArch64)

ARM-specific 64-bit wide vector of two packed u32.

uint32x4_t (Experimental, AArch64)

ARM-specific 128-bit wide vector of four packed u32.

uint64x1_t (Experimental, AArch64)

ARM-specific 64-bit wide vector of one packed u64.

uint64x2_t (Experimental, AArch64)

ARM-specific 128-bit wide vector of two packed u64.

uint8x4_t (Experimental, AArch64)

ARM-specific 32-bit wide vector of four packed u8.

uint8x8_t (Experimental, AArch64)

ARM-specific 64-bit wide vector of eight packed u8.

uint8x16_t (Experimental, AArch64)

ARM-specific 128-bit wide vector of sixteen packed u8.

uint8x16x2_t (Experimental, AArch64)

ARM-specific type containing two uint8x16_t vectors.

uint8x16x3_t (Experimental, AArch64)

ARM-specific type containing three uint8x16_t vectors.

uint8x16x4_t (Experimental, AArch64)

ARM-specific type containing four uint8x16_t vectors.

uint8x8x2_t (Experimental, AArch64)

ARM-specific type containing two uint8x8_t vectors.

uint8x8x3_t (Experimental, AArch64)

ARM-specific type containing three uint8x8_t vectors.

uint8x8x4_t (Experimental, AArch64)

ARM-specific type containing four uint8x8_t vectors.
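
The bit widths in the list above follow directly from lane count times lane size. A portable sketch (the array type aliases below are illustrative stand-ins, not the real core::arch::aarch64 definitions):

```rust
// Each NEON vector type is laid out like a fixed-size array of its
// lanes, so its width in bits is lane count x lane width.
type Float32x4 = [f32; 4]; // stands in for float32x4_t
type Int8x16 = [i8; 16];   // stands in for int8x16_t
type Uint64x2 = [u64; 2];  // stands in for uint64x2_t

// Width of a type in bits.
fn bits<T>() -> usize {
    std::mem::size_of::<T>() * 8
}

fn main() {
    assert_eq!(bits::<Float32x4>(), 128); // four packed f32
    assert_eq!(bits::<Int8x16>(), 128);   // sixteen packed i8
    assert_eq!(bits::<Uint64x2>(), 128);  // two packed u64
    println!("vector widths check out");
}
```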

Functions

__DMB (Experimental, AArch64 and mclass)

Data Memory Barrier

__DSB (Experimental, AArch64 and mclass)

Data Synchronization Barrier

__ISB (Experimental, AArch64 and mclass)

Instruction Synchronization Barrier

__NOP (Experimental, AArch64 and mclass)

No Operation

__SEV (Experimental, AArch64 and mclass)

Send Event

__WFE (Experimental, AArch64 and mclass)

Wait For Event

__WFI (Experimental, AArch64 and mclass)

Wait For Interrupt

__breakpoint (Experimental, AArch64)

Inserts a breakpoint instruction.

__crc32b (Experimental, AArch64 and crc)

CRC32 single round checksum for bytes (8 bits).

__crc32h (Experimental, AArch64 and crc)

CRC32 single round checksum for half words (16 bits).

__crc32w (Experimental, AArch64 and crc)

CRC32 single round checksum for words (32 bits).

__crc32d (Experimental, AArch64 and crc)

CRC32 single round checksum for double words (64 bits).

__crc32cb (Experimental, AArch64 and crc)

CRC32-C single round checksum for bytes (8 bits).

__crc32ch (Experimental, AArch64 and crc)

CRC32-C single round checksum for half words (16 bits).

__crc32cw (Experimental, AArch64 and crc)

CRC32-C single round checksum for words (32 bits).

__crc32cd (Experimental, AArch64 and crc)

CRC32-C single round checksum for double words (64 bits).
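
Each __crc32* call folds one unit of data into a running CRC-32 accumulator using the reflected polynomial 0xEDB88320 (the __crc32c* variants use the Castagnoli polynomial 0x82F63B78 instead). A portable bit-by-bit sketch of the byte-sized round, checked against the standard CRC-32 test vector:

```rust
// One byte-sized round: XOR the byte into the low bits of the
// accumulator, then shift out eight bits through the reflected
// polynomial. This emulates the semantics only; the intrinsic does
// the same work in a single instruction.
fn crc32_round_byte(mut crc: u32, byte: u8) -> u32 {
    crc ^= byte as u32;
    for _ in 0..8 {
        crc = if crc & 1 != 0 {
            (crc >> 1) ^ 0xEDB8_8320
        } else {
            crc >> 1
        };
    }
    crc
}

// A full CRC-32 is the byte round folded over the message with
// initial value !0 and a final inversion.
fn crc32(data: &[u8]) -> u32 {
    !data.iter().fold(!0u32, |crc, &b| crc32_round_byte(crc, b))
}

fn main() {
    // Standard CRC-32 check value for the ASCII string "123456789".
    assert_eq!(crc32(b"123456789"), 0xCBF4_3926);
    println!("crc32 check value OK");
}
```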

__disable_fault_irq (Experimental, AArch64 and mclass)

Disable FIQ

__disable_irq (Experimental, AArch64 and mclass)

Disable IRQ Interrupts

__enable_fault_irq (Experimental, AArch64 and mclass)

Enable FIQ

__enable_irq (Experimental, AArch64 and mclass)

Enable IRQ Interrupts

__get_APSR (Experimental, AArch64 and mclass)

Get APSR Register

__get_BASEPRI (Experimental, AArch64 and mclass)

Get Base Priority

__get_CONTROL (Experimental, AArch64 and mclass)

Get Control Register

__get_FAULTMASK (Experimental, AArch64 and mclass)

Get Fault Mask

__get_IPSR (Experimental, AArch64 and mclass)

Get IPSR Register

__get_MSP (Experimental, AArch64 and mclass)

Get Main Stack Pointer

__get_PRIMASK (Experimental, AArch64 and mclass)

Get Priority Mask

__get_PSP (Experimental, AArch64 and mclass)

Get Process Stack Pointer

__get_xPSR (Experimental, AArch64 and mclass)

Get xPSR Register

__set_BASEPRI (Experimental, AArch64 and mclass)

Set Base Priority

__set_BASEPRI_MAX (Experimental, AArch64 and mclass)

Set Base Priority with condition

__set_CONTROL (Experimental, AArch64 and mclass)

Set Control Register

__set_FAULTMASK (Experimental, AArch64 and mclass)

Set Fault Mask

__set_MSP (Experimental, AArch64 and mclass)

Set Main Stack Pointer

__set_PRIMASK (Experimental, AArch64 and mclass)

Set Priority Mask

__set_PSP (Experimental, AArch64 and mclass)

Set Process Stack Pointer

_cls_u32 (Experimental, AArch64)

Counts the leading most significant bits set.

_cls_u64 (Experimental, AArch64)

Counts the leading most significant bits set.

_clz_u64 (Experimental, AArch64)

Count Leading Zeros.

_rbit_u64 (Experimental, AArch64)

Reverse the bit order.

_rev_u16 (Experimental, AArch64)

Reverse the order of the bytes.

_rev_u32 (Experimental, AArch64)

Reverse the order of the bytes.

_rev_u64 (Experimental, AArch64)

Reverse the order of the bytes.
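
Most of these bit operations have portable stdlib counterparts that make the semantics easy to check: swap_bytes for the _rev_* family, leading_zeros for _clz_u64, and reverse_bits for _rbit_u64 (only _cls_*, which counts leading bits matching the sign bit, has no direct stdlib equivalent):

```rust
fn main() {
    // _rev_u32: byte order reversal.
    assert_eq!(0x1234_5678u32.swap_bytes(), 0x7856_3412);
    // _rev_u16: the two bytes of a half word swap places.
    assert_eq!(0x0012u16.swap_bytes(), 0x1200);
    // _clz_u64: the top zeroed byte contributes eight leading zeros.
    assert_eq!(0x00FF_0000_0000_0000u64.leading_zeros(), 8);
    // _rbit_u64: bit 0 moves to bit 63.
    assert_eq!(0x1u64.reverse_bits(), 0x8000_0000_0000_0000);
    println!("bit-op equivalences OK");
}
```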

brk (Experimental, AArch64)

Generates the trap instruction BRK 1

qadd (Experimental, AArch64)

Signed saturating addition

qadd8 (Experimental, AArch64)

Saturating four 8-bit integer additions

qadd16 (Experimental, AArch64)

Saturating two 16-bit integer additions

qasx (Experimental, AArch64)

Saturating add and subtract with exchange (QASX): exchanges the halfwords of the second operand, then performs a saturating add on the high halfwords and a saturating subtract on the low halfwords.

qsax (Experimental, AArch64)

Saturating subtract and add with exchange (QSAX): exchanges the halfwords of the second operand, then performs a saturating subtract on the high halfwords and a saturating add on the low halfwords.

qsub (Experimental, AArch64)

Signed saturating subtraction

qsub8 (Experimental, AArch64)

Saturating four 8-bit integer subtractions

qsub16 (Experimental, AArch64)

Saturating two 16-bit integer subtractions
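
The q-prefixed operations clamp at the type bounds instead of wrapping, and the 8-/16-bit variants apply that rule to each packed lane independently. A portable sketch using Rust's saturating arithmetic (the two-lane array stand-in for the packed operand is illustrative):

```rust
// qadd16-style semantics: lane-wise saturating add over two i16 lanes.
fn qadd16_sketch(a: [i16; 2], b: [i16; 2]) -> [i16; 2] {
    [a[0].saturating_add(b[0]), a[1].saturating_add(b[1])]
}

fn main() {
    // qadd/qsub semantics on a full-width scalar: clamp, don't wrap.
    assert_eq!(i32::MAX.saturating_add(1), i32::MAX);
    assert_eq!(i32::MIN.saturating_sub(1), i32::MIN);
    // Lane-wise: the first lane saturates, the second adds normally.
    assert_eq!(qadd16_sketch([i16::MAX, 1], [1, 1]), [i16::MAX, 2]);
    println!("saturating semantics OK");
}
```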

sadd8 (Experimental, AArch64)

Parallel four 8-bit signed additions (SADD8), setting the APSR GE flags.

sadd16 (Experimental, AArch64)

Parallel two 16-bit signed additions (SADD16), setting the APSR GE flags.

sasx (Experimental, AArch64)

Signed add and subtract with exchange (SASX), setting the APSR GE flags.

sel (Experimental, AArch64)

Select bytes from each operand according to the APSR GE flags

shadd8 (Experimental, AArch64)

Signed halving parallel byte-wise addition.

shadd16 (Experimental, AArch64)

Signed halving parallel halfword-wise addition.

shsub8 (Experimental, AArch64)

Signed halving parallel byte-wise subtraction.

shsub16 (Experimental, AArch64)

Signed halving parallel halfword-wise subtraction.

smlad (Experimental, AArch64)

Dual 16-bit Signed Multiply with Addition of products and 32-bit accumulation.

smlsd (Experimental, AArch64)

Dual 16-bit Signed Multiply with Subtraction of products, 32-bit accumulation, and overflow detection.

smuad (Experimental, AArch64)

Signed Dual Multiply Add.

smuadx (Experimental, AArch64)

Signed Dual Multiply Add Reversed.

smusd (Experimental, AArch64)

Signed Dual Multiply Subtract.

smusdx (Experimental, AArch64)

Signed Dual Multiply Subtract Reversed.

usad8 (Experimental, AArch64)

Sum of 8-bit absolute differences.

usad8a (Experimental, AArch64)

Sum of 8-bit absolute differences and constant.
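
usad8 treats each 32-bit operand as four packed bytes and sums the absolute differences of corresponding bytes; usad8a does the same and then adds an accumulator. A portable sketch of the usad8 semantics:

```rust
// Sum of absolute differences over the four bytes of two u32 values.
fn usad8_sketch(x: u32, y: u32) -> u32 {
    x.to_le_bytes()
        .iter()
        .zip(y.to_le_bytes().iter())
        .map(|(&a, &b)| (a as i32 - b as i32).unsigned_abs())
        .sum()
}

fn main() {
    // Bytes 0x01,0x02,0x03,0x04 vs 0x04,0x03,0x02,0x01:
    // |1-4| + |2-3| + |3-2| + |4-1| = 3 + 1 + 1 + 3 = 8.
    assert_eq!(usad8_sketch(0x0403_0201, 0x0102_0304), 8);
    // Identical operands give a zero sum.
    assert_eq!(usad8_sketch(5, 5), 0);
    println!("usad8 sketch OK");
}
```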

vadd_f32 (Experimental, AArch64 and neon)

Vector add.

vadd_f64 (Experimental, AArch64 and neon)

Vector add.

vadd_s8 (Experimental, AArch64 and neon)

Vector add.

vadd_s16 (Experimental, AArch64 and neon)

Vector add.

vadd_s32 (Experimental, AArch64 and neon)

Vector add.

vadd_u8 (Experimental, AArch64 and neon)

Vector add.

vadd_u16 (Experimental, AArch64 and neon)

Vector add.

vadd_u32 (Experimental, AArch64 and neon)

Vector add.

vaddd_s64 (Experimental, AArch64 and neon)

Vector add.

vaddd_u64 (Experimental, AArch64 and neon)

Vector add.

vaddl_s8 (Experimental, AArch64 and neon)

Vector long add.

vaddl_s16 (Experimental, AArch64 and neon)

Vector long add.

vaddl_s32 (Experimental, AArch64 and neon)

Vector long add.

vaddl_u8 (Experimental, AArch64 and neon)

Vector long add.

vaddl_u16 (Experimental, AArch64 and neon)

Vector long add.

vaddl_u32 (Experimental, AArch64 and neon)

Vector long add.

vaddq_f32 (Experimental, AArch64 and neon)

Vector add.

vaddq_f64 (Experimental, AArch64 and neon)

Vector add.

vaddq_s8 (Experimental, AArch64 and neon)

Vector add.

vaddq_s16 (Experimental, AArch64 and neon)

Vector add.

vaddq_s32 (Experimental, AArch64 and neon)

Vector add.

vaddq_s64 (Experimental, AArch64 and neon)

Vector add.

vaddq_u8 (Experimental, AArch64 and neon)

Vector add.

vaddq_u16 (Experimental, AArch64 and neon)

Vector add.

vaddq_u32 (Experimental, AArch64 and neon)

Vector add.

vaddq_u64 (Experimental, AArch64 and neon)

Vector add.
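
All the vadd_*/vaddq_* intrinsics add corresponding lanes; the vaddl_* ("long") forms widen each lane before adding, so the sum cannot overflow. A portable sketch of both shapes, with arrays standing in for the vector types:

```rust
// vaddq_f32-style semantics: lane-wise add over four f32 lanes.
fn vaddq_f32_sketch(a: [f32; 4], b: [f32; 4]) -> [f32; 4] {
    let mut r = [0.0; 4];
    for i in 0..4 {
        r[i] = a[i] + b[i];
    }
    r
}

// vaddl_s8-style semantics: widen each i8 lane to i16, then add.
fn vaddl_s8_sketch(a: [i8; 8], b: [i8; 8]) -> [i16; 8] {
    let mut r = [0i16; 8];
    for i in 0..8 {
        r[i] = a[i] as i16 + b[i] as i16;
    }
    r
}

fn main() {
    assert_eq!(
        vaddq_f32_sketch([1.0, 2.0, 3.0, 4.0], [4.0, 3.0, 2.0, 1.0]),
        [5.0; 4]
    );
    // 127 + 127 would wrap in i8, but the long add widens first.
    assert_eq!(vaddl_s8_sketch([127; 8], [127; 8]), [254; 8]);
    println!("vector add sketches OK");
}
```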

vaesdq_u8 (Experimental, AArch64 and crypto)

AES single round decryption.

vaeseq_u8 (Experimental, AArch64 and crypto)

AES single round encryption.

vaesimcq_u8 (Experimental, AArch64 and crypto)

AES inverse mix columns.

vaesmcq_u8 (Experimental, AArch64 and crypto)

AES mix columns.

vcombine_f32 (Experimental, AArch64 and neon)

Vector combine

vcombine_f64 (Experimental, AArch64 and neon)

Vector combine

vcombine_p8 (Experimental, AArch64 and neon)

Vector combine

vcombine_p16 (Experimental, AArch64 and neon)

Vector combine

vcombine_p64 (Experimental, AArch64 and neon)

Vector combine

vcombine_s8 (Experimental, AArch64 and neon)

Vector combine

vcombine_s16 (Experimental, AArch64 and neon)

Vector combine

vcombine_s32 (Experimental, AArch64 and neon)

Vector combine

vcombine_s64 (Experimental, AArch64 and neon)

Vector combine

vcombine_u8 (Experimental, AArch64 and neon)

Vector combine

vcombine_u16 (Experimental, AArch64 and neon)

Vector combine

vcombine_u32 (Experimental, AArch64 and neon)

Vector combine

vcombine_u64 (Experimental, AArch64 and neon)

Vector combine
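
vcombine_* concatenates two 64-bit vectors into one 128-bit vector, low half first. A portable sketch:

```rust
// vcombine_f32-style semantics: [low lanes, high lanes].
fn vcombine_f32_sketch(low: [f32; 2], high: [f32; 2]) -> [f32; 4] {
    [low[0], low[1], high[0], high[1]]
}

fn main() {
    assert_eq!(
        vcombine_f32_sketch([1.0, 2.0], [3.0, 4.0]),
        [1.0, 2.0, 3.0, 4.0]
    );
    println!("vcombine sketch OK");
}
```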

vmaxv_f32 (Experimental, AArch64 and neon)

Horizontal vector max.

vmaxv_s8 (Experimental, AArch64 and neon)

Horizontal vector max.

vmaxv_s16 (Experimental, AArch64 and neon)

Horizontal vector max.

vmaxv_s32 (Experimental, AArch64 and neon)

Horizontal vector max.

vmaxv_u8 (Experimental, AArch64 and neon)

Horizontal vector max.

vmaxv_u16 (Experimental, AArch64 and neon)

Horizontal vector max.

vmaxv_u32 (Experimental, AArch64 and neon)

Horizontal vector max.

vmaxvq_f32 (Experimental, AArch64 and neon)

Horizontal vector max.

vmaxvq_f64 (Experimental, AArch64 and neon)

Horizontal vector max.

vmaxvq_s8 (Experimental, AArch64 and neon)

Horizontal vector max.

vmaxvq_s16 (Experimental, AArch64 and neon)

Horizontal vector max.

vmaxvq_s32 (Experimental, AArch64 and neon)

Horizontal vector max.

vmaxvq_u8 (Experimental, AArch64 and neon)

Horizontal vector max.

vmaxvq_u16 (Experimental, AArch64 and neon)

Horizontal vector max.

vmaxvq_u32 (Experimental, AArch64 and neon)

Horizontal vector max.

vminv_f32 (Experimental, AArch64 and neon)

Horizontal vector min.

vminv_s8 (Experimental, AArch64 and neon)

Horizontal vector min.

vminv_s16 (Experimental, AArch64 and neon)

Horizontal vector min.

vminv_s32 (Experimental, AArch64 and neon)

Horizontal vector min.

vminv_u8 (Experimental, AArch64 and neon)

Horizontal vector min.

vminv_u16 (Experimental, AArch64 and neon)

Horizontal vector min.

vminv_u32 (Experimental, AArch64 and neon)

Horizontal vector min.

vminvq_f32 (Experimental, AArch64 and neon)

Horizontal vector min.

vminvq_f64 (Experimental, AArch64 and neon)

Horizontal vector min.

vminvq_s8 (Experimental, AArch64 and neon)

Horizontal vector min.

vminvq_s16 (Experimental, AArch64 and neon)

Horizontal vector min.

vminvq_s32 (Experimental, AArch64 and neon)

Horizontal vector min.

vminvq_u8 (Experimental, AArch64 and neon)

Horizontal vector min.

vminvq_u16 (Experimental, AArch64 and neon)

Horizontal vector min.

vminvq_u32 (Experimental, AArch64 and neon)

Horizontal vector min.
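
The vmaxv/vminv ("across vector") intrinsics reduce all lanes of one vector to a single scalar. A portable sketch:

```rust
// vmaxv_u8 / vminv_u8-style semantics: reduce eight u8 lanes to one.
fn vmaxv_u8_sketch(v: [u8; 8]) -> u8 {
    v.iter().copied().max().unwrap()
}

fn vminv_u8_sketch(v: [u8; 8]) -> u8 {
    v.iter().copied().min().unwrap()
}

fn main() {
    let v = [3, 1, 4, 1, 5, 9, 2, 6];
    assert_eq!(vmaxv_u8_sketch(v), 9);
    assert_eq!(vminv_u8_sketch(v), 1);
    println!("horizontal reduction sketches OK");
}
```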

vmovl_s8 (Experimental, AArch64 and neon)

Vector long move.

vmovl_s16 (Experimental, AArch64 and neon)

Vector long move.

vmovl_s32 (Experimental, AArch64 and neon)

Vector long move.

vmovl_u8 (Experimental, AArch64 and neon)

Vector long move.

vmovl_u16 (Experimental, AArch64 and neon)

Vector long move.

vmovl_u32 (Experimental, AArch64 and neon)

Vector long move.

vmovn_s16 (Experimental, AArch64 and neon)

Vector narrow integer.

vmovn_s32 (Experimental, AArch64 and neon)

Vector narrow integer.

vmovn_s64 (Experimental, AArch64 and neon)

Vector narrow integer.

vmovn_u16 (Experimental, AArch64 and neon)

Vector narrow integer.

vmovn_u32 (Experimental, AArch64 and neon)

Vector narrow integer.

vmovn_u64 (Experimental, AArch64 and neon)

Vector narrow integer.
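
vmovl_* widens each lane (sign- or zero-extending, e.g. i8 to i16), while vmovn_* narrows each lane by keeping only its low half. A portable sketch:

```rust
// vmovl_s8-style semantics: sign-extend each i8 lane to i16.
fn vmovl_s8_sketch(v: [i8; 8]) -> [i16; 8] {
    v.map(|x| x as i16)
}

// vmovn_s16-style semantics: keep the low byte of each i16 lane.
fn vmovn_s16_sketch(v: [i16; 8]) -> [i8; 8] {
    v.map(|x| x as i8) // truncating cast, no saturation
}

fn main() {
    // Sign extension preserves -1 across the widening.
    assert_eq!(vmovl_s8_sketch([-1; 8]), [-1i16; 8]);
    // 384 = 0x0180; its low byte 0x80 reads back as -128.
    assert_eq!(vmovn_s16_sketch([384, 0, 0, 0, 0, 0, 0, 0])[0], -128);
    println!("widen/narrow sketches OK");
}
```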

vpmax_f32 (Experimental, AArch64 and neon)

Folding maximum of adjacent pairs

vpmax_s8 (Experimental, AArch64 and neon)

Folding maximum of adjacent pairs

vpmax_s16 (Experimental, AArch64 and neon)

Folding maximum of adjacent pairs

vpmax_s32 (Experimental, AArch64 and neon)

Folding maximum of adjacent pairs

vpmax_u8 (Experimental, AArch64 and neon)

Folding maximum of adjacent pairs

vpmax_u16 (Experimental, AArch64 and neon)

Folding maximum of adjacent pairs

vpmax_u32 (Experimental, AArch64 and neon)

Folding maximum of adjacent pairs

vpmaxq_f32 (Experimental, AArch64 and neon)

Folding maximum of adjacent pairs

vpmaxq_f64 (Experimental, AArch64 and neon)

Folding maximum of adjacent pairs

vpmaxq_s8 (Experimental, AArch64 and neon)

Folding maximum of adjacent pairs

vpmaxq_s16 (Experimental, AArch64 and neon)

Folding maximum of adjacent pairs

vpmaxq_s32 (Experimental, AArch64 and neon)

Folding maximum of adjacent pairs

vpmaxq_u8 (Experimental, AArch64 and neon)

Folding maximum of adjacent pairs

vpmaxq_u16 (Experimental, AArch64 and neon)

Folding maximum of adjacent pairs

vpmaxq_u32 (Experimental, AArch64 and neon)

Folding maximum of adjacent pairs

vpmin_f32 (Experimental, AArch64 and neon)

Folding minimum of adjacent pairs

vpmin_s8 (Experimental, AArch64 and neon)

Folding minimum of adjacent pairs

vpmin_s16 (Experimental, AArch64 and neon)

Folding minimum of adjacent pairs

vpmin_s32 (Experimental, AArch64 and neon)

Folding minimum of adjacent pairs

vpmin_u8 (Experimental, AArch64 and neon)

Folding minimum of adjacent pairs

vpmin_u16 (Experimental, AArch64 and neon)

Folding minimum of adjacent pairs

vpmin_u32 (Experimental, AArch64 and neon)

Folding minimum of adjacent pairs

vpminq_f32 (Experimental, AArch64 and neon)

Folding minimum of adjacent pairs

vpminq_f64 (Experimental, AArch64 and neon)

Folding minimum of adjacent pairs

vpminq_s8 (Experimental, AArch64 and neon)

Folding minimum of adjacent pairs

vpminq_s16 (Experimental, AArch64 and neon)

Folding minimum of adjacent pairs

vpminq_s32 (Experimental, AArch64 and neon)

Folding minimum of adjacent pairs

vpminq_u8 (Experimental, AArch64 and neon)

Folding minimum of adjacent pairs

vpminq_u16 (Experimental, AArch64 and neon)

Folding minimum of adjacent pairs

vpminq_u32 (Experimental, AArch64 and neon)

Folding minimum of adjacent pairs
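
The vpmax/vpmin ("pairwise") intrinsics reduce adjacent lane pairs: the folded pairs of the first operand fill the low half of the result and the folded pairs of the second fill the high half. A portable sketch on four-lane operands:

```rust
// vpmax_s16-style semantics: max of adjacent pairs from a, then b.
fn vpmax_s16_sketch(a: [i16; 4], b: [i16; 4]) -> [i16; 4] {
    [a[0].max(a[1]), a[2].max(a[3]), b[0].max(b[1]), b[2].max(b[3])]
}

// vpmin_s16-style semantics: min of adjacent pairs from a, then b.
fn vpmin_s16_sketch(a: [i16; 4], b: [i16; 4]) -> [i16; 4] {
    [a[0].min(a[1]), a[2].min(a[3]), b[0].min(b[1]), b[2].min(b[3])]
}

fn main() {
    assert_eq!(vpmax_s16_sketch([1, 4, 3, 2], [5, 0, 7, 6]), [4, 3, 5, 7]);
    assert_eq!(vpmin_s16_sketch([1, 4, 3, 2], [5, 0, 7, 6]), [1, 2, 0, 6]);
    println!("pairwise fold sketches OK");
}
```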

vqtbl1_p8 (Experimental, AArch64 and neon)

Table look-up

vqtbl1_s8 (Experimental, AArch64 and neon)

Table look-up

vqtbl1_u8 (Experimental, AArch64 and neon)

Table look-up

vqtbl1q_p8 (Experimental, AArch64 and neon)

Table look-up

vqtbl1q_s8 (Experimental, AArch64 and neon)

Table look-up

vqtbl1q_u8 (Experimental, AArch64 and neon)

Table look-up

vqtbl2_p8 (Experimental, AArch64 and neon)

Table look-up

vqtbl2_s8 (Experimental, AArch64 and neon)

Table look-up

vqtbl2_u8 (Experimental, AArch64 and neon)

Table look-up

vqtbl2q_p8 (Experimental, AArch64 and neon)

Table look-up

vqtbl2q_s8 (Experimental, AArch64 and neon)

Table look-up

vqtbl2q_u8 (Experimental, AArch64 and neon)

Table look-up

vqtbl3_p8 (Experimental, AArch64 and neon)

Table look-up

vqtbl3_s8 (Experimental, AArch64 and neon)

Table look-up

vqtbl3_u8 (Experimental, AArch64 and neon)

Table look-up

vqtbl3q_p8 (Experimental, AArch64 and neon)

Table look-up

vqtbl3q_s8 (Experimental, AArch64 and neon)

Table look-up

vqtbl3q_u8 (Experimental, AArch64 and neon)

Table look-up

vqtbl4_p8 (Experimental, AArch64 and neon)

Table look-up

vqtbl4_s8 (Experimental, AArch64 and neon)

Table look-up

vqtbl4_u8 (Experimental, AArch64 and neon)

Table look-up

vqtbl4q_p8 (Experimental, AArch64 and neon)

Table look-up

vqtbl4q_s8 (Experimental, AArch64 and neon)

Table look-up

vqtbl4q_u8 (Experimental, AArch64 and neon)

Table look-up

vqtbx1_p8 (Experimental, AArch64 and neon)

Extended table look-up

vqtbx1_s8 (Experimental, AArch64 and neon)

Extended table look-up

vqtbx1_u8 (Experimental, AArch64 and neon)

Extended table look-up

vqtbx1q_p8 (Experimental, AArch64 and neon)

Extended table look-up

vqtbx1q_s8 (Experimental, AArch64 and neon)

Extended table look-up

vqtbx1q_u8 (Experimental, AArch64 and neon)

Extended table look-up

vqtbx2_p8 (Experimental, AArch64 and neon)

Extended table look-up

vqtbx2_s8 (Experimental, AArch64 and neon)

Extended table look-up

vqtbx2_u8 (Experimental, AArch64 and neon)

Extended table look-up

vqtbx2q_p8 (Experimental, AArch64 and neon)

Extended table look-up

vqtbx2q_s8 (Experimental, AArch64 and neon)

Extended table look-up

vqtbx2q_u8 (Experimental, AArch64 and neon)

Extended table look-up

vqtbx3_p8 (Experimental, AArch64 and neon)

Extended table look-up

vqtbx3_s8 (Experimental, AArch64 and neon)

Extended table look-up

vqtbx3_u8 (Experimental, AArch64 and neon)

Extended table look-up

vqtbx3q_p8 (Experimental, AArch64 and neon)

Extended table look-up

vqtbx3q_s8 (Experimental, AArch64 and neon)

Extended table look-up

vqtbx3q_u8 (Experimental, AArch64 and neon)

Extended table look-up

vqtbx4_p8 (Experimental, AArch64 and neon)

Extended table look-up

vqtbx4_s8 (Experimental, AArch64 and neon)

Extended table look-up

vqtbx4_u8 (Experimental, AArch64 and neon)

Extended table look-up

vqtbx4q_p8 (Experimental, AArch64 and neon)

Extended table look-up

vqtbx4q_s8 (Experimental, AArch64 and neon)

Extended table look-up

vqtbx4q_u8 (Experimental, AArch64 and neon)

Extended table look-up
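
In both look-up families each index byte selects a byte from the table. The difference is what happens when an index is out of range: the tbl/qtbl family yields 0 for that lane, while the tbx/qtbx ("extended") family leaves the corresponding lane of the destination unchanged. A portable sketch of a single-table look-up:

```rust
// vqtbl1_u8-style semantics: out-of-range indices produce 0.
fn vqtbl1_u8_sketch(table: [u8; 16], idx: [u8; 8]) -> [u8; 8] {
    idx.map(|i| if (i as usize) < 16 { table[i as usize] } else { 0 })
}

// vqtbx1_u8-style semantics: out-of-range indices keep the
// destination lane.
fn vqtbx1_u8_sketch(dest: [u8; 8], table: [u8; 16], idx: [u8; 8]) -> [u8; 8] {
    let mut r = dest;
    for lane in 0..8 {
        let i = idx[lane] as usize;
        if i < 16 {
            r[lane] = table[i];
        }
    }
    r
}

fn main() {
    let table = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16];
    // Index 99 is out of range: tbl zeroes the lane...
    assert_eq!(
        vqtbl1_u8_sketch(table, [0, 15, 99, 1, 1, 1, 1, 1]),
        [1, 16, 0, 2, 2, 2, 2, 2]
    );
    // ...while tbx preserves the destination value (7) there.
    assert_eq!(
        vqtbx1_u8_sketch([7; 8], [1; 16], [0, 99, 0, 0, 0, 0, 0, 0]),
        [1, 7, 1, 1, 1, 1, 1, 1]
    );
    println!("table look-up sketches OK");
}
```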

vrsqrte_f32 (Experimental, AArch64 and neon)

Reciprocal square-root estimate.

vsha1cq_u32 (Experimental, AArch64 and crypto)

SHA1 hash update accelerator, choose.

vsha1h_u32 (Experimental, AArch64 and crypto)

SHA1 fixed rotate.

vsha1mq_u32 (Experimental, AArch64 and crypto)

SHA1 hash update accelerator, majority.

vsha1pq_u32 (Experimental, AArch64 and crypto)

SHA1 hash update accelerator, parity.

vsha1su0q_u32 (Experimental, AArch64 and crypto)

SHA1 schedule update accelerator, first part.

vsha1su1q_u32 (Experimental, AArch64 and crypto)

SHA1 schedule update accelerator, second part.

vsha256h2q_u32 (Experimental, AArch64 and crypto)

SHA256 hash update accelerator, upper part.

vsha256hq_u32 (Experimental, AArch64 and crypto)

SHA256 hash update accelerator.

vsha256su0q_u32 (Experimental, AArch64 and crypto)

SHA256 schedule update accelerator, first part.

vsha256su1q_u32 (Experimental, AArch64 and crypto)

SHA256 schedule update accelerator, second part.

vtbl1_p8 (Experimental, AArch64 and neon)

Table look-up

vtbl1_s8 (Experimental, AArch64 and neon)

Table look-up

vtbl1_u8 (Experimental, AArch64 and neon)

Table look-up

vtbl2_p8 (Experimental, AArch64 and neon)

Table look-up

vtbl2_s8 (Experimental, AArch64 and neon)

Table look-up

vtbl2_u8 (Experimental, AArch64 and neon)

Table look-up

vtbl3_p8 (Experimental, AArch64 and neon)

Table look-up

vtbl3_s8 (Experimental, AArch64 and neon)

Table look-up

vtbl3_u8 (Experimental, AArch64 and neon)

Table look-up

vtbl4_p8 (Experimental, AArch64 and neon)

Table look-up

vtbl4_s8 (Experimental, AArch64 and neon)

Table look-up

vtbl4_u8 (Experimental, AArch64 and neon)

Table look-up

vtbx1_p8 (Experimental, AArch64 and neon)

Extended table look-up

vtbx1_s8 (Experimental, AArch64 and neon)

Extended table look-up

vtbx1_u8 (Experimental, AArch64 and neon)

Extended table look-up

vtbx2_p8 (Experimental, AArch64 and neon)

Extended table look-up

vtbx2_s8 (Experimental, AArch64 and neon)

Extended table look-up

vtbx2_u8 (Experimental, AArch64 and neon)

Extended table look-up

vtbx3_p8 (Experimental, AArch64 and neon)

Extended table look-up

vtbx3_s8 (Experimental, AArch64 and neon)

Extended table look-up

vtbx3_u8 (Experimental, AArch64 and neon)

Extended table look-up

vtbx4_p8 (Experimental, AArch64 and neon)

Extended table look-up

vtbx4_s8 (Experimental, AArch64 and neon)

Extended table look-up

vtbx4_u8 (Experimental, AArch64 and neon)

Extended table look-up