Module core::arch::x86 (since 1.27.0)

This is supported on x86 only.

Platform-specific intrinsics for the x86 platform.

See the module documentation for more details.

Structs

CpuidResultx86

Result of the cpuid instruction.

__m128ix86

128-bit wide integer vector type, x86-specific

__m128x86

128-bit wide set of four f32 types, x86-specific

__m128dx86

128-bit wide set of two f64 types, x86-specific

__m256ix86

256-bit wide integer vector type, x86-specific

__m256x86

256-bit wide set of eight f32 types, x86-specific

__m256dx86

256-bit wide set of four f64 types, x86-specific

__m64Experimentalx86

64-bit wide integer vector type, x86-specific

__m512iExperimentalx86

512-bit wide integer vector type, x86-specific

__m512Experimentalx86

512-bit wide set of sixteen f32 types, x86-specific

__m512dExperimentalx86

512-bit wide set of eight f64 types, x86-specific

Constants

_CMP_EQ_OQx86

Equal (ordered, non-signaling)

_CMP_EQ_OSx86

Equal (ordered, signaling)

_CMP_EQ_UQx86

Equal (unordered, non-signaling)

_CMP_EQ_USx86

Equal (unordered, signaling)

_CMP_FALSE_OQx86

False (ordered, non-signaling)

_CMP_FALSE_OSx86

False (ordered, signaling)

_CMP_GE_OQx86

Greater-than-or-equal (ordered, non-signaling)

_CMP_GE_OSx86

Greater-than-or-equal (ordered, signaling)

_CMP_GT_OQx86

Greater-than (ordered, non-signaling)

_CMP_GT_OSx86

Greater-than (ordered, signaling)

_CMP_LE_OQx86

Less-than-or-equal (ordered, non-signaling)

_CMP_LE_OSx86

Less-than-or-equal (ordered, signaling)

_CMP_LT_OQx86

Less-than (ordered, non-signaling)

_CMP_LT_OSx86

Less-than (ordered, signaling)

_CMP_NEQ_OQx86

Not-equal (ordered, non-signaling)

_CMP_NEQ_OSx86

Not-equal (ordered, signaling)

_CMP_NEQ_UQx86

Not-equal (unordered, non-signaling)

_CMP_NEQ_USx86

Not-equal (unordered, signaling)

_CMP_NGE_UQx86

Not-greater-than-or-equal (unordered, non-signaling)

_CMP_NGE_USx86

Not-greater-than-or-equal (unordered, signaling)

_CMP_NGT_UQx86

Not-greater-than (unordered, non-signaling)

_CMP_NGT_USx86

Not-greater-than (unordered, signaling)

_CMP_NLE_UQx86

Not-less-than-or-equal (unordered, non-signaling)

_CMP_NLE_USx86

Not-less-than-or-equal (unordered, signaling)

_CMP_NLT_UQx86

Not-less-than (unordered, non-signaling)

_CMP_NLT_USx86

Not-less-than (unordered, signaling)

_CMP_ORD_Qx86

Ordered (non-signaling)

_CMP_ORD_Sx86

Ordered (signaling)

_CMP_TRUE_UQx86

True (unordered, non-signaling)

_CMP_TRUE_USx86

True (unordered, signaling)

_CMP_UNORD_Qx86

Unordered (non-signaling)

_CMP_UNORD_Sx86

Unordered (signaling)

_MM_EXCEPT_DENORMx86

See _mm_setcsr

_MM_EXCEPT_DIV_ZEROx86

See _mm_setcsr

_MM_EXCEPT_INEXACTx86

See _mm_setcsr

_MM_EXCEPT_INVALIDx86

See _mm_setcsr

_MM_EXCEPT_MASKx86

See _MM_GET_EXCEPTION_STATE

_MM_EXCEPT_OVERFLOWx86

See _mm_setcsr

_MM_EXCEPT_UNDERFLOWx86

See _mm_setcsr

_MM_FLUSH_ZERO_MASKx86

See _MM_GET_FLUSH_ZERO_MODE

_MM_FLUSH_ZERO_OFFx86

See _mm_setcsr

_MM_FLUSH_ZERO_ONx86

See _mm_setcsr

_MM_FROUND_CEILx86

round up and do not suppress exceptions

_MM_FROUND_CUR_DIRECTIONx86

use MXCSR.RC; see _MM_SET_ROUNDING_MODE

_MM_FROUND_FLOORx86

round down and do not suppress exceptions

_MM_FROUND_NEARBYINTx86

use MXCSR.RC and suppress exceptions; see _MM_SET_ROUNDING_MODE

_MM_FROUND_NINTx86

round to nearest and do not suppress exceptions

_MM_FROUND_NO_EXCx86

suppress exceptions

_MM_FROUND_RAISE_EXCx86

do not suppress exceptions

_MM_FROUND_RINTx86

use MXCSR.RC and do not suppress exceptions; see _MM_SET_ROUNDING_MODE

_MM_FROUND_TO_NEAREST_INTx86

round to nearest

_MM_FROUND_TO_NEG_INFx86

round down

_MM_FROUND_TO_POS_INFx86

round up

_MM_FROUND_TO_ZEROx86

truncate

_MM_FROUND_TRUNCx86

truncate and do not suppress exceptions

_MM_HINT_NTAx86

See _mm_prefetch.

_MM_HINT_T0x86

See _mm_prefetch.

_MM_HINT_T1x86

See _mm_prefetch.

_MM_HINT_T2x86

See _mm_prefetch.

_MM_MASK_DENORMx86

See _mm_setcsr

_MM_MASK_DIV_ZEROx86

See _mm_setcsr

_MM_MASK_INEXACTx86

See _mm_setcsr

_MM_MASK_INVALIDx86

See _mm_setcsr

_MM_MASK_MASKx86

See _MM_GET_EXCEPTION_MASK

_MM_MASK_OVERFLOWx86

See _mm_setcsr

_MM_MASK_UNDERFLOWx86

See _mm_setcsr

_MM_ROUND_DOWNx86

See _mm_setcsr

_MM_ROUND_MASKx86

See _MM_GET_ROUNDING_MODE

_MM_ROUND_NEARESTx86

See _mm_setcsr

_MM_ROUND_TOWARD_ZEROx86

See _mm_setcsr

_MM_ROUND_UPx86

See _mm_setcsr

_SIDD_BIT_MASKx86

Mask only: return the bit mask

_SIDD_CMP_EQUAL_ANYx86

For each character in a, find if it is in b (Default)

_SIDD_CMP_EQUAL_EACHx86

The strings defined by a and b are equal

_SIDD_CMP_EQUAL_ORDEREDx86

Search for the defined substring in the target

_SIDD_CMP_RANGESx86

For each character in a, determine if b[0] <= c <= b[1] or b[1] <= c <= b[2]...

_SIDD_LEAST_SIGNIFICANTx86

Index only: return the least significant bit (Default)

_SIDD_MASKED_NEGATIVE_POLARITYx86

Negate results only before the end of the string

_SIDD_MASKED_POSITIVE_POLARITYx86

Do not negate results before the end of the string

_SIDD_MOST_SIGNIFICANTx86

Index only: return the most significant bit

_SIDD_NEGATIVE_POLARITYx86

Negate results

_SIDD_POSITIVE_POLARITYx86

Do not negate results (Default)

_SIDD_SBYTE_OPSx86

String contains signed 8-bit characters

_SIDD_SWORD_OPSx86

String contains signed 16-bit characters

_SIDD_UBYTE_OPSx86

String contains unsigned 8-bit characters (Default)

_SIDD_UNIT_MASKx86

Mask only: return the byte mask

_SIDD_UWORD_OPSx86

String contains unsigned 16-bit characters

_XCR_XFEATURE_ENABLED_MASKx86

XFEATURE_ENABLED_MASK for XCR

Functions

_MM_GET_EXCEPTION_MASKx86 and sse

See _mm_setcsr

_MM_GET_EXCEPTION_STATEx86 and sse

See _mm_setcsr

_MM_GET_FLUSH_ZERO_MODEx86 and sse

See _mm_setcsr

_MM_GET_ROUNDING_MODEx86 and sse

See _mm_setcsr

_MM_SET_EXCEPTION_MASKx86 and sse

See _mm_setcsr

_MM_SET_EXCEPTION_STATEx86 and sse

See _mm_setcsr

_MM_SET_FLUSH_ZERO_MODEx86 and sse

See _mm_setcsr

_MM_SET_ROUNDING_MODEx86 and sse

See _mm_setcsr
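
The _MM_GET_*/_MM_SET_* helpers above are convenience wrappers around _mm_getcsr/_mm_setcsr for the individual MXCSR fields. A minimal sketch of saving, changing, and restoring the flush-to-zero and rounding modes, assuming an SSE-capable x86 target with runtime feature detection:

    #[cfg(target_arch = "x86")]
    use std::arch::x86::*;
    #[cfg(target_arch = "x86_64")]
    use std::arch::x86_64::*;

    fn main() {
        if is_x86_feature_detected!("sse") {
            unsafe {
                // Read the current modes out of MXCSR.
                let old_ftz = _MM_GET_FLUSH_ZERO_MODE();
                let old_rc = _MM_GET_ROUNDING_MODE();

                // Flush denormal results to zero and round toward zero.
                _MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON);
                _MM_SET_ROUNDING_MODE(_MM_ROUND_TOWARD_ZERO);

                // ... SSE floating-point work here ...

                // Restore the previous control state.
                _MM_SET_FLUSH_ZERO_MODE(old_ftz);
                _MM_SET_ROUNDING_MODE(old_rc);
            }
        }
    }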

_MM_TRANSPOSE4_PSx86 and sse

Transpose the 4x4 matrix formed by 4 rows of __m128 in place.
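
A sketch of the in-place transpose on an x86 target with SSE; transpose4 is an illustrative helper, and the caller must verify SSE support first (e.g. with is_x86_feature_detected!):

    #[cfg(target_arch = "x86")]
    use std::arch::x86::*;
    #[cfg(target_arch = "x86_64")]
    use std::arch::x86_64::*;

    #[target_feature(enable = "sse")]
    unsafe fn transpose4() -> [f32; 16] {
        // Four rows of a 4x4 matrix.
        let mut r0 = _mm_setr_ps(1.0, 2.0, 3.0, 4.0);
        let mut r1 = _mm_setr_ps(5.0, 6.0, 7.0, 8.0);
        let mut r2 = _mm_setr_ps(9.0, 10.0, 11.0, 12.0);
        let mut r3 = _mm_setr_ps(13.0, 14.0, 15.0, 16.0);

        // Transpose the matrix in place; rows become columns.
        _MM_TRANSPOSE4_PS(&mut r0, &mut r1, &mut r2, &mut r3);

        let mut out = [0.0f32; 16];
        _mm_storeu_ps(out.as_mut_ptr(), r0);
        _mm_storeu_ps(out.as_mut_ptr().add(4), r1);
        _mm_storeu_ps(out.as_mut_ptr().add(8), r2);
        _mm_storeu_ps(out.as_mut_ptr().add(12), r3);
        out // the first row of the result is [1.0, 5.0, 9.0, 13.0]
    }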

__cpuidx86

See __cpuid_count.

__cpuid_countx86

Returns the result of the cpuid instruction for a given leaf (EAX) and sub_leaf (ECX).

__get_cpuid_maxx86

Returns the highest-supported leaf (EAX) and sub-leaf (ECX) cpuid values.
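
A sketch of querying the highest supported basic leaf with __get_cpuid_max and reading the vendor string via __cpuid and CpuidResult, assuming an x86 target (the EBX, EDX, ECX ordering of the vendor string is the standard CPUID layout):

    #[cfg(target_arch = "x86")]
    use std::arch::x86::*;
    #[cfg(target_arch = "x86_64")]
    use std::arch::x86_64::*;

    fn main() {
        unsafe {
            let (max_leaf, _) = __get_cpuid_max(0);
            let CpuidResult { ebx, ecx, edx, .. } = __cpuid(0);
            // The vendor string is spread over EBX, EDX, ECX, in that order.
            let mut vendor = [0u8; 12];
            vendor[0..4].copy_from_slice(&ebx.to_le_bytes());
            vendor[4..8].copy_from_slice(&edx.to_le_bytes());
            vendor[8..12].copy_from_slice(&ecx.to_le_bytes());
            println!("max basic leaf: {}", max_leaf);
            println!("vendor: {}", String::from_utf8_lossy(&vendor));
        }
    }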

__rdtscpx86

Reads the current value of the processor’s time-stamp counter and the IA32_TSC_AUX MSR.

_addcarry_u32x86

Adds unsigned 32-bit integers a and b with the unsigned 8-bit carry-in c_in (carry flag), stores the unsigned 32-bit result in out, and returns the carry-out (carry or overflow flag).
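
A sketch of chaining _addcarry_u32 to add two 64-bit values held as 32-bit halves; add64 is an illustrative helper, not part of this module:

    #[cfg(target_arch = "x86")]
    use std::arch::x86::*;
    #[cfg(target_arch = "x86_64")]
    use std::arch::x86_64::*;

    // Add two 64-bit values represented as (low, high) 32-bit halves.
    fn add64(a: (u32, u32), b: (u32, u32)) -> (u32, u32, u8) {
        unsafe {
            let mut lo = 0u32;
            let mut hi = 0u32;
            // The carry out of the low half feeds the high half.
            let carry = _addcarry_u32(0, a.0, b.0, &mut lo);
            let carry = _addcarry_u32(carry, a.1, b.1, &mut hi);
            (lo, hi, carry) // final carry signals overflow of the 64-bit sum
        }
    }

    fn main() {
        assert_eq!(add64((u32::MAX, 0), (1, 0)), (0, 1, 0));
    }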

_addcarryx_u32x86 and adx

Adds unsigned 32-bit integers a and b with the unsigned 8-bit carry-in c_in (carry or overflow flag), stores the unsigned 32-bit result in out, and returns the carry-out (carry or overflow flag).

_andn_u32x86 and bmi1

Bitwise logical AND of inverted a with b.

_bextr2_u32x86 and bmi1

Extracts bits of a specified by control into the least significant bits of the result.

_bextr_u32x86 and bmi1

Extracts bits in range [start, start + length) from a into the least significant bits of the result.

_blcfill_u32x86 and tbm

Clears all bits below the least significant zero bit of x.

_blcfill_u64x86 and tbm

Clears all bits below the least significant zero bit of x.

_blci_u32x86 and tbm

Sets all bits of x to 1 except for the least significant zero bit.

_blci_u64x86 and tbm

Sets all bits of x to 1 except for the least significant zero bit.

_blcic_u32x86 and tbm

Sets the least significant zero bit of x and clears all other bits.

_blcic_u64x86 and tbm

Sets the least significant zero bit of x and clears all other bits.

_blcmsk_u32x86 and tbm

Sets the least significant zero bit of x and clears all bits above that bit.

_blcmsk_u64x86 and tbm

Sets the least significant zero bit of x and clears all bits above that bit.

_blcs_u32x86 and tbm

Sets the least significant zero bit of x.

_blcs_u64x86 and tbm

Sets the least significant zero bit of x.

_blsfill_u32x86 and tbm

Sets all bits of x below the least significant one.

_blsfill_u64x86 and tbm

Sets all bits of x below the least significant one.

_blsi_u32x86 and bmi1

Extract lowest set isolated bit.

_blsic_u32x86 and tbm

Clears least significant bit and sets all other bits.

_blsic_u64x86 and tbm

Clears least significant bit and sets all other bits.

_blsmsk_u32x86 and bmi1

Get mask up to lowest set bit.

_blsr_u32x86 and bmi1

Resets the lowest set bit of x.
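
A sketch that uses _blsi_u32 and _blsr_u32 together to walk the set bits of a word, assuming BMI1 is available at runtime; set_bit_positions is an illustrative helper:

    #[cfg(target_arch = "x86")]
    use std::arch::x86::*;
    #[cfg(target_arch = "x86_64")]
    use std::arch::x86_64::*;

    #[target_feature(enable = "bmi1")]
    unsafe fn set_bit_positions(mut x: u32) -> Vec<u32> {
        let mut positions = Vec::new();
        while x != 0 {
            // _blsi_u32 isolates the lowest set bit, e.g. 0b1100 -> 0b0100.
            let lowest = _blsi_u32(x);
            positions.push(lowest.trailing_zeros());
            // _blsr_u32 clears that lowest set bit so the loop advances.
            x = _blsr_u32(x);
        }
        positions
    }

    fn main() {
        if is_x86_feature_detected!("bmi1") {
            let bits = unsafe { set_bit_positions(0b1010_0110) };
            assert_eq!(bits, [1, 2, 5, 7]);
        }
    }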

_bswapx86

Return an integer with the reversed byte order of x

_bzhi_u32x86 and bmi2

Zeroes the bits of a at positions greater than or equal to index.

_fxrstorx86 and fxsr

Restores the XMM, MMX, MXCSR, and x87 FPU registers from the 512-byte-long 16-byte-aligned memory region mem_addr.

_fxsavex86 and fxsr

Saves the x87 FPU, MMX technology, XMM, and MXCSR registers to the 512-byte-long 16-byte-aligned memory region mem_addr.

_lzcnt_u32x86 and lzcnt

Counts the number of leading zero bits, starting from the most significant bit.

_mm256_add_pdx86 and avx

Add packed double-precision (64-bit) floating-point elements in a and b.

_mm256_add_psx86 and avx

Add packed single-precision (32-bit) floating-point elements in a and b.
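
A sketch of an 8-lane float addition built from the unaligned load/store intrinsics listed further below; add8 is an illustrative helper, and the caller checks for AVX at runtime:

    #[cfg(target_arch = "x86")]
    use std::arch::x86::*;
    #[cfg(target_arch = "x86_64")]
    use std::arch::x86_64::*;

    #[target_feature(enable = "avx")]
    unsafe fn add8(a: &[f32; 8], b: &[f32; 8]) -> [f32; 8] {
        // Unaligned loads are fine here; _mm256_load_ps would require
        // 32-byte alignment.
        let va = _mm256_loadu_ps(a.as_ptr());
        let vb = _mm256_loadu_ps(b.as_ptr());
        let sum = _mm256_add_ps(va, vb);
        let mut out = [0.0f32; 8];
        _mm256_storeu_ps(out.as_mut_ptr(), sum);
        out
    }

    fn main() {
        if is_x86_feature_detected!("avx") {
            let a = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0];
            let b = [8.0, 7.0, 6.0, 5.0, 4.0, 3.0, 2.0, 1.0];
            assert_eq!(unsafe { add8(&a, &b) }, [9.0f32; 8]);
        }
    }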

_mm256_and_pdx86 and avx

Compute the bitwise AND of packed double-precision (64-bit) floating-point elements in a and b.

_mm256_and_psx86 and avx

Compute the bitwise AND of packed single-precision (32-bit) floating-point elements in a and b.

_mm256_or_pdx86 and avx

Compute the bitwise OR of packed double-precision (64-bit) floating-point elements in a and b.

_mm256_or_psx86 and avx

Compute the bitwise OR of packed single-precision (32-bit) floating-point elements in a and b.

_mm256_shuffle_pdx86 and avx

Shuffle double-precision (64-bit) floating-point elements within 128-bit lanes using the control in imm8.

_mm256_shuffle_psx86 and avx

Shuffle single-precision (32-bit) floating-point elements in a within 128-bit lanes using the control in imm8.

_mm256_andnot_pdx86 and avx

Compute the bitwise NOT of packed double-precision (64-bit) floating-point elements in a and then AND with b.

_mm256_andnot_psx86 and avx

Compute the bitwise NOT of packed single-precision (32-bit) floating-point elements in a and then AND with b.

_mm256_max_pdx86 and avx

Compare packed double-precision (64-bit) floating-point elements in a and b, and return packed maximum values

_mm256_max_psx86 and avx

Compare packed single-precision (32-bit) floating-point elements in a and b, and return packed maximum values

_mm256_min_pdx86 and avx

Compare packed double-precision (64-bit) floating-point elements in a and b, and return packed minimum values

_mm256_min_psx86 and avx

Compare packed single-precision (32-bit) floating-point elements in a and b, and return packed minimum values

_mm256_mul_pdx86 and avx

Multiply packed double-precision (64-bit) floating-point elements in a and b.

_mm256_mul_psx86 and avx

Multiply packed single-precision (32-bit) floating-point elements in a and b.

_mm256_addsub_pdx86 and avx

Alternatively add and subtract packed double-precision (64-bit) floating-point elements in a to/from packed elements in b.

_mm256_addsub_psx86 and avx

Alternatively add and subtract packed single-precision (32-bit) floating-point elements in a to/from packed elements in b.

_mm256_sub_pdx86 and avx

Subtract packed double-precision (64-bit) floating-point elements in b from packed elements in a.

_mm256_sub_psx86 and avx

Subtract packed single-precision (32-bit) floating-point elements in b from packed elements in a.

_mm256_div_psx86 and avx

Compute the division of each of the 8 packed 32-bit floating-point elements in a by the corresponding packed elements in b.

_mm256_div_pdx86 and avx

Compute the division of each of the 4 packed 64-bit floating-point elements in a by the corresponding packed elements in b.

_mm256_round_pdx86 and avx

Round packed double-precision (64-bit) floating point elements in a according to the flag b. The value of b may be as follows:

_mm256_ceil_pdx86 and avx

Round packed double-precision (64-bit) floating point elements in a toward positive infinity.

_mm256_floor_pdx86 and avx

Round packed double-precision (64-bit) floating point elements in a toward negative infinity.

_mm256_round_psx86 and avx

Round packed single-precision (32-bit) floating point elements in a according to the flag b. The value of b may be as follows:

_mm256_ceil_psx86 and avx

Round packed single-precision (32-bit) floating point elements in a toward positive infinity.

_mm256_floor_psx86 and avx

Round packed single-precision (32-bit) floating point elements in a toward negative infinity.

_mm256_sqrt_psx86 and avx

Return the square root of packed single-precision (32-bit) floating point elements in a.

_mm256_sqrt_pdx86 and avx

Return the square root of packed double-precision (64-bit) floating point elements in a.

_mm256_blend_pdx86 and avx

Blend packed double-precision (64-bit) floating-point elements from a and b using control mask imm8.

_mm256_blend_psx86 and avx

Blend packed single-precision (32-bit) floating-point elements from a and b using control mask imm8.

_mm256_blendv_pdx86 and avx

Blend packed double-precision (64-bit) floating-point elements from a and b using c as a mask.

_mm256_blendv_psx86 and avx

Blend packed single-precision (32-bit) floating-point elements from a and b using c as a mask.
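
A sketch of _mm256_blendv_ps, where only the sign bit of each mask lane decides the selection; blend_example is an illustrative helper assuming AVX support has already been verified by the caller:

    #[cfg(target_arch = "x86")]
    use std::arch::x86::*;
    #[cfg(target_arch = "x86_64")]
    use std::arch::x86_64::*;

    #[target_feature(enable = "avx")]
    unsafe fn blend_example() -> [f32; 8] {
        let a = _mm256_set1_ps(1.0);
        let b = _mm256_set1_ps(2.0);
        // Only the sign bit of each mask element matters: lanes whose mask
        // element is negative take the value from b, the rest come from a.
        let mask = _mm256_setr_ps(0.0, -1.0, 0.0, -1.0, 0.0, -1.0, 0.0, -1.0);
        let r = _mm256_blendv_ps(a, b, mask);
        let mut out = [0.0f32; 8];
        _mm256_storeu_ps(out.as_mut_ptr(), r);
        out // [1.0, 2.0, 1.0, 2.0, 1.0, 2.0, 1.0, 2.0]
    }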

_mm256_dp_psx86 and avx

Conditionally multiply the packed single-precision (32-bit) floating-point elements in a and b using the high 4 bits in imm8, sum the four products, and conditionally return the sum using the low 4 bits of imm8.

_mm256_hadd_pdx86 and avx

Horizontal addition of adjacent pairs in the two packed vectors of 4 64-bit floating points a and b. In the result, sums of elements from a are returned in even locations, while sums of elements from b are returned in odd locations.

_mm256_hadd_psx86 and avx

Horizontal addition of adjacent pairs in the two packed vectors of 8 32-bit floating points a and b. In the result, sums of elements from a are returned in locations of indices 0, 1, 4, 5; while sums of elements from b are locations 2, 3, 6, 7.

_mm256_hsub_pdx86 and avx

Horizontal subtraction of adjacent pairs in the two packed vectors of 4 64-bit floating points a and b. In the result, sums of elements from a are returned in even locations, while sums of elements from b are returned in odd locations.

_mm256_hsub_psx86 and avx

Horizontal subtraction of adjacent pairs in the two packed vectors of 8 32-bit floating points a and b. In the result, sums of elements from a are returned in locations of indices 0, 1, 4, 5; while sums of elements from b are locations 2, 3, 6, 7.

_mm256_xor_pdx86 and avx

Compute the bitwise XOR of packed double-precision (64-bit) floating-point elements in a and b.

_mm256_xor_psx86 and avx

Compute the bitwise XOR of packed single-precision (32-bit) floating-point elements in a and b.

_mm256_cmp_pdx86 and avx

Compare packed double-precision (64-bit) floating-point elements in a and b based on the comparison operand specified by imm8.

_mm256_cmp_psx86 and avx

Compare packed single-precision (32-bit) floating-point elements in a and b based on the comparison operand specified by imm8.
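
A sketch of a packed less-than comparison combined with _mm256_movemask_ps (listed further below), using the _CMP_LT_OQ predicate from the constants section. It is written against the signature documented in this release, where the predicate is the third argument; newer toolchains pass it as a const generic instead (e.g. _mm256_cmp_ps::<_CMP_LT_OQ>(va, vb)). less_than_mask is an illustrative helper:

    #[cfg(target_arch = "x86")]
    use std::arch::x86::*;
    #[cfg(target_arch = "x86_64")]
    use std::arch::x86_64::*;

    #[target_feature(enable = "avx")]
    unsafe fn less_than_mask(a: &[f32; 8], b: &[f32; 8]) -> i32 {
        let va = _mm256_loadu_ps(a.as_ptr());
        let vb = _mm256_loadu_ps(b.as_ptr());
        // All-ones lanes where a < b (ordered, non-signaling), zeros elsewhere.
        let m = _mm256_cmp_ps(va, vb, _CMP_LT_OQ);
        // Collapse the per-lane sign bits into an 8-bit mask.
        _mm256_movemask_ps(m)
    }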

_mm256_cvtpd_psx86 and avx

Convert packed double-precision (64-bit) floating-point elements in a to packed single-precision (32-bit) floating-point elements.

_mm256_cvtps_pdx86 and avx

Convert packed single-precision (32-bit) floating-point elements in a to packed double-precision (64-bit) floating-point elements.

_mm256_zeroallx86 and avx

Zero the contents of all XMM or YMM registers.

_mm256_zeroupperx86 and avx

Zero the upper 128 bits of all YMM registers; the lower 128-bits of the registers are unmodified.

_mm256_permutevar_psx86 and avx

Shuffle single-precision (32-bit) floating-point elements in a within 128-bit lanes using the control in b.

_mm256_permute_psx86 and avx

Shuffle single-precision (32-bit) floating-point elements in a within 128-bit lanes using the control in imm8.

_mm256_permutevar_pdx86 and avx

Shuffle double-precision (64-bit) floating-point elements in a within 256-bit lanes using the control in b.

_mm256_permute_pdx86 and avx

Shuffle double-precision (64-bit) floating-point elements in a within 128-bit lanes using the control in imm8.

_mm256_broadcast_ssx86 and avx

Broadcast a single-precision (32-bit) floating-point element from memory to all elements of the returned vector.

_mm256_broadcast_sdx86 and avx

Broadcast a double-precision (64-bit) floating-point element from memory to all elements of the returned vector.

_mm256_broadcast_psx86 and avx

Broadcast 128 bits from memory (composed of 4 packed single-precision (32-bit) floating-point elements) to all elements of the returned vector.

_mm256_broadcast_pdx86 and avx

Broadcast 128 bits from memory (composed of 2 packed double-precision (64-bit) floating-point elements) to all elements of the returned vector.

_mm256_load_pdx86 and avx

Load 256-bits (composed of 4 packed double-precision (64-bit) floating-point elements) from memory into result. mem_addr must be aligned on a 32-byte boundary or a general-protection exception may be generated.

_mm256_store_pdx86 and avx

Store 256-bits (composed of 4 packed double-precision (64-bit) floating-point elements) from a into memory. mem_addr must be aligned on a 32-byte boundary or a general-protection exception may be generated.

_mm256_load_psx86 and avx

Load 256-bits (composed of 8 packed single-precision (32-bit) floating-point elements) from memory into result. mem_addr must be aligned on a 32-byte boundary or a general-protection exception may be generated.

_mm256_store_psx86 and avx

Store 256-bits (composed of 8 packed single-precision (32-bit) floating-point elements) from a into memory. mem_addr must be aligned on a 32-byte boundary or a general-protection exception may be generated.

_mm256_loadu_pdx86 and avx

Load 256-bits (composed of 4 packed double-precision (64-bit) floating-point elements) from memory into result. mem_addr does not need to be aligned on any particular boundary.

_mm256_storeu_pdx86 and avx

Store 256-bits (composed of 4 packed double-precision (64-bit) floating-point elements) from a into memory. mem_addr does not need to be aligned on any particular boundary.

_mm256_loadu_psx86 and avx

Load 256-bits (composed of 8 packed single-precision (32-bit) floating-point elements) from memory into result. mem_addr does not need to be aligned on any particular boundary.

_mm256_storeu_psx86 and avx

Store 256-bits (composed of 8 packed single-precision (32-bit) floating-point elements) from a into memory. mem_addr does not need to be aligned on any particular boundary.
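
A sketch that processes a slice in 8-lane chunks with the unaligned load/store pair (plus _mm256_set1_ps and _mm256_mul_ps from this listing), falling back to scalar code for the tail; scale is an illustrative helper assuming the caller has verified AVX support:

    #[cfg(target_arch = "x86")]
    use std::arch::x86::*;
    #[cfg(target_arch = "x86_64")]
    use std::arch::x86_64::*;

    // Scale a slice in place, eight lanes at a time; the unaligned
    // load/store pair means the slice needs no particular alignment.
    #[target_feature(enable = "avx")]
    unsafe fn scale(data: &mut [f32], factor: f32) {
        let k = _mm256_set1_ps(factor);
        let mut chunks = data.chunks_exact_mut(8);
        for chunk in &mut chunks {
            let v = _mm256_loadu_ps(chunk.as_ptr());
            _mm256_storeu_ps(chunk.as_mut_ptr(), _mm256_mul_ps(v, k));
        }
        // Scalar tail for the last len % 8 elements.
        for x in chunks.into_remainder() {
            *x *= factor;
        }
    }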

_mm256_maskload_pdx86 and avx

Load packed double-precision (64-bit) floating-point elements from memory into result using mask (elements are zeroed out when the high bit of the corresponding element is not set).

_mm256_maskstore_pdx86 and avx

Store packed double-precision (64-bit) floating-point elements from a into memory using mask.

_mm256_maskload_psx86 and avx

Load packed single-precision (32-bit) floating-point elements from memory into result using mask (elements are zeroed out when the high bit of the corresponding element is not set).

_mm256_maskstore_psx86 and avx

Store packed single-precision (32-bit) floating-point elements from a into memory using mask.

_mm256_movehdup_psx86 and avx

Duplicate odd-indexed single-precision (32-bit) floating-point elements from a, and return the results.

_mm256_moveldup_psx86 and avx

Duplicate even-indexed single-precision (32-bit) floating-point elements from a, and return the results.

_mm256_movedup_pdx86 and avx

Duplicate even-indexed double-precision (64-bit) floating-point elements from a, and return the results.

_mm256_stream_pdx86 and avx

Moves double-precision values from a 256-bit vector of [4 x double] to a 32-byte aligned memory location. To minimize caching, the data is flagged as non-temporal (unlikely to be used again soon).

_mm256_stream_psx86 and avx

Moves single-precision floating point values from a 256-bit vector of [8 x float] to a 32-byte aligned memory location. To minimize caching, the data is flagged as non-temporal (unlikely to be used again soon).

_mm256_rcp_psx86 and avx

Compute the approximate reciprocal of packed single-precision (32-bit) floating-point elements in a, and return the results. The maximum relative error for this approximation is less than 1.5*2^-12.

_mm256_rsqrt_psx86 and avx

Compute the approximate reciprocal square root of packed single-precision (32-bit) floating-point elements in a, and return the results. The maximum relative error for this approximation is less than 1.5*2^-12.

_mm256_unpackhi_pdx86 and avx

Unpack and interleave double-precision (64-bit) floating-point elements from the high half of each 128-bit lane in a and b.

_mm256_unpackhi_psx86 and avx

Unpack and interleave single-precision (32-bit) floating-point elements from the high half of each 128-bit lane in a and b.

_mm256_unpacklo_pdx86 and avx

Unpack and interleave double-precision (64-bit) floating-point elements from the low half of each 128-bit lane in a and b.

_mm256_unpacklo_psx86 and avx

Unpack and interleave single-precision (32-bit) floating-point elements from the low half of each 128-bit lane in a and b.

_mm256_testz_pdx86 and avx

Compute the bitwise AND of 256 bits (representing double-precision (64-bit) floating-point elements) in a and b, producing an intermediate 256-bit value, and set ZF to 1 if the sign bit of each 64-bit element in the intermediate value is zero, otherwise set ZF to 0. Compute the bitwise NOT of a and then AND with b, producing an intermediate value, and set CF to 1 if the sign bit of each 64-bit element in the intermediate value is zero, otherwise set CF to 0. Return the ZF value.

_mm256_testc_pdx86 and avx

Compute the bitwise AND of 256 bits (representing double-precision (64-bit) floating-point elements) in a and b, producing an intermediate 256-bit value, and set ZF to 1 if the sign bit of each 64-bit element in the intermediate value is zero, otherwise set ZF to 0. Compute the bitwise NOT of a and then AND with b, producing an intermediate value, and set CF to 1 if the sign bit of each 64-bit element in the intermediate value is zero, otherwise set CF to 0. Return the CF value.

_mm256_testnzc_pdx86 and avx

Compute the bitwise AND of 256 bits (representing double-precision (64-bit) floating-point elements) in a and b, producing an intermediate 256-bit value, and set ZF to 1 if the sign bit of each 64-bit element in the intermediate value is zero, otherwise set ZF to 0. Compute the bitwise NOT of a and then AND with b, producing an intermediate value, and set CF to 1 if the sign bit of each 64-bit element in the intermediate value is zero, otherwise set CF to 0. Return 1 if both the ZF and CF values are zero, otherwise return 0.

_mm256_testz_psx86 and avx

Compute the bitwise AND of 256 bits (representing single-precision (32-bit) floating-point elements) in a and b, producing an intermediate 256-bit value, and set ZF to 1 if the sign bit of each 32-bit element in the intermediate value is zero, otherwise set ZF to 0. Compute the bitwise NOT of a and then AND with b, producing an intermediate value, and set CF to 1 if the sign bit of each 32-bit element in the intermediate value is zero, otherwise set CF to 0. Return the ZF value.

_mm256_testc_psx86 and avx

Compute the bitwise AND of 256 bits (representing single-precision (32-bit) floating-point elements) in a and b, producing an intermediate 256-bit value, and set ZF to 1 if the sign bit of each 32-bit element in the intermediate value is zero, otherwise set ZF to 0. Compute the bitwise NOT of a and then AND with b, producing an intermediate value, and set CF to 1 if the sign bit of each 32-bit element in the intermediate value is zero, otherwise set CF to 0. Return the CF value.

_mm256_testnzc_psx86 and avx

Compute the bitwise AND of 256 bits (representing single-precision (32-bit) floating-point elements) in a and b, producing an intermediate 256-bit value, and set ZF to 1 if the sign bit of each 32-bit element in the intermediate value is zero, otherwise set ZF to 0. Compute the bitwise NOT of a and then AND with b, producing an intermediate value, and set CF to 1 if the sign bit of each 32-bit element in the intermediate value is zero, otherwise set CF to 0. Return 1 if both the ZF and CF values are zero, otherwise return 0.

_mm256_movemask_pdx86 and avx

Set each bit of the returned mask based on the most significant bit of the corresponding packed double-precision (64-bit) floating-point element in a.

_mm256_movemask_psx86 and avx

Set each bit of the returned mask based on the most significant bit of the corresponding packed single-precision (32-bit) floating-point element in a.

_mm256_setzero_pdx86 and avx

Return vector of type __m256d with all elements set to zero.

_mm256_setzero_psx86 and avx

Return vector of type __m256 with all elements set to zero.

_mm256_set_pdx86 and avx

Set packed double-precision (64-bit) floating-point elements in returned vector with the supplied values.

_mm256_set_psx86 and avx

Set packed single-precision (32-bit) floating-point elements in returned vector with the supplied values.

_mm256_setr_pdx86 and avx

Set packed double-precision (64-bit) floating-point elements in returned vector with the supplied values in reverse order.

_mm256_setr_psx86 and avx

Set packed single-precision (32-bit) floating-point elements in returned vector with the supplied values in reverse order.

_mm256_castpd_psx86 and avx

Cast vector of type __m256d to type __m256.

_mm256_castps_pdx86 and avx

Cast vector of type __m256 to type __m256d.

_mm256_undefined_psx86 and avx

Return vector of type __m256 with undefined elements.

_mm256_undefined_pdx86 and avx

Return vector of type __m256d with undefined elements.

_mm256_broadcastsd_pdx86 and avx2

Broadcast the low double-precision (64-bit) floating-point element from a to all elements of the 256-bit returned value.

_mm256_broadcastss_psx86 and avx2

Broadcast the low single-precision (32-bit) floating-point element from a to all elements of the 256-bit returned value.

_mm256_fmadd_pdx86 and fma

Multiply packed double-precision (64-bit) floating-point elements in a and b, and add the intermediate result to packed elements in c.

_mm256_fmadd_psx86 and fma

Multiply packed single-precision (32-bit) floating-point elements in a and b, and add the intermediate result to packed elements in c.
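
A sketch of a fused multiply-add over three 8-lane vectors; fma8 is an illustrative helper, enabling avx together with fma as the two features are normally paired, and the caller verifies support at runtime:

    #[cfg(target_arch = "x86")]
    use std::arch::x86::*;
    #[cfg(target_arch = "x86_64")]
    use std::arch::x86_64::*;

    // Computes a * b + c per lane in one step, with a single rounding.
    #[target_feature(enable = "avx,fma")]
    unsafe fn fma8(a: &[f32; 8], b: &[f32; 8], c: &[f32; 8]) -> [f32; 8] {
        let r = _mm256_fmadd_ps(
            _mm256_loadu_ps(a.as_ptr()),
            _mm256_loadu_ps(b.as_ptr()),
            _mm256_loadu_ps(c.as_ptr()),
        );
        let mut out = [0.0f32; 8];
        _mm256_storeu_ps(out.as_mut_ptr(), r);
        out
    }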

_mm256_fmaddsub_pdx86 and fma

Multiply packed double-precision (64-bit) floating-point elements in a and b, and alternatively add and subtract packed elements in c to/from the intermediate result.

_mm256_fmaddsub_psx86 and fma

Multiply packed single-precision (32-bit) floating-point elements in a and b, and alternatively add and subtract packed elements in c to/from the intermediate result.

_mm256_fmsub_pdx86 and fma

Multiply packed double-precision (64-bit) floating-point elements in a and b, and subtract packed elements in c from the intermediate result.

_mm256_fmsub_psx86 and fma

Multiply packed single-precision (32-bit) floating-point elements in a and b, and subtract packed elements in c from the intermediate result.

_mm256_fmsubadd_pdx86 and fma

Multiply packed double-precision (64-bit) floating-point elements in a and b, and alternatively subtract and add packed elements in c from/to the intermediate result.

_mm256_fmsubadd_psx86 and fma

Multiply packed single-precision (32-bit) floating-point elements in a and b, and alternatively subtract and add packed elements in c from/to the intermediate result.

_mm256_fnmadd_pdx86 and fma

Multiply packed double-precision (64-bit) floating-point elements in a and b, and add the negated intermediate result to packed elements in c.

_mm256_fnmadd_psx86 and fma

Multiply packed single-precision (32-bit) floating-point elements in a and b, and add the negated intermediate result to packed elements in c.

_mm256_fnmsub_pdx86 and fma

Multiply packed double-precision (64-bit) floating-point elements in a and b, and subtract packed elements in c from the negated intermediate result.

_mm256_fnmsub_psx86 and fma

Multiply packed single-precision (32-bit) floating-point elements in a and b, and subtract packed elements in c from the negated intermediate result.

_mm256_abs_epi8x86 and avx2

Computes the absolute values of packed 8-bit integers in a.

_mm256_abs_epi16x86 and avx2

Computes the absolute values of packed 16-bit integers in a.

_mm256_abs_epi32x86 and avx2

Computes the absolute values of packed 32-bit integers in a.

_mm256_add_epi8x86 and avx2

Add packed 8-bit integers in a and b.

_mm256_add_epi16x86 and avx2

Add packed 16-bit integers in a and b.

_mm256_add_epi32x86 and avx2

Add packed 32-bit integers in a and b.

_mm256_add_epi64x86 and avx2

Add packed 64-bit integers in a and b.
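
The same pattern applies at any of the element widths above; a sketch using the 32-bit variant, where add_i32x8 is an illustrative helper and AVX2 support is assumed to have been verified by the caller:

    #[cfg(target_arch = "x86")]
    use std::arch::x86::*;
    #[cfg(target_arch = "x86_64")]
    use std::arch::x86_64::*;

    #[target_feature(enable = "avx2")]
    unsafe fn add_i32x8(a: &[i32; 8], b: &[i32; 8]) -> [i32; 8] {
        // __m256i is an opaque bag of 256 bits; the element width is chosen
        // by the intrinsic (epi32 here), not by the type.
        let va = _mm256_loadu_si256(a.as_ptr() as *const __m256i);
        let vb = _mm256_loadu_si256(b.as_ptr() as *const __m256i);
        let sum = _mm256_add_epi32(va, vb);
        let mut out = [0i32; 8];
        _mm256_storeu_si256(out.as_mut_ptr() as *mut __m256i, sum);
        out
    }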

_mm256_adds_epi8x86 and avx2

Add packed 8-bit integers in a and b using saturation.

_mm256_adds_epi16x86 and avx2

Add packed 16-bit integers in a and b using saturation.

_mm256_adds_epu8x86 and avx2

Add packed unsigned 8-bit integers in a and b using saturation.

_mm256_adds_epu16x86 and avx2

Add packed unsigned 16-bit integers in a and b using saturation.

_mm256_alignr_epi8x86 and avx2

Concatenate pairs of 16-byte blocks in a and b into a 32-byte temporary result, shift the result right by n bytes, and return the low 16 bytes.

_mm256_and_si256x86 and avx2

Compute the bitwise AND of 256 bits (representing integer data) in a and b.

_mm256_andnot_si256x86 and avx2

Compute the bitwise NOT of 256 bits (representing integer data) in a and then AND with b.

_mm256_avg_epu8x86 and avx2

Average packed unsigned 8-bit integers in a and b.

_mm256_avg_epu16x86 and avx2

Average packed unsigned 16-bit integers in a and b.

_mm256_blend_epi16x86 and avx2

Blend packed 16-bit integers from a and b using control mask imm8.

_mm256_blend_epi32x86 and avx2

Blend packed 32-bit integers from a and b using control mask imm8.

_mm256_blendv_epi8x86 and avx2

Blend packed 8-bit integers from a and b using mask.

_mm256_broadcastb_epi8x86 and avx2

Broadcast the low packed 8-bit integer from a to all elements of the 256-bit returned value.

_mm256_broadcastd_epi32x86 and avx2

Broadcast the low packed 32-bit integer from a to all elements of the 256-bit returned value.

_mm256_broadcastq_epi64x86 and avx2

Broadcast the low packed 64-bit integer from a to all elements of the 256-bit returned value.

_mm256_broadcastsi128_si256x86 and avx2

Broadcast 128 bits of integer data from a to all 128-bit lanes in the 256-bit returned value.

_mm256_broadcastw_epi16x86 and avx2

Broadcast the low packed 16-bit integer from a to all elements of the 256-bit returned value

_mm256_bslli_epi128x86 and avx2

Shift 128-bit lanes in a left by imm8 bytes while shifting in zeros.

_mm256_bsrli_epi128x86 and avx2

Shift 128-bit lanes in a right by imm8 bytes while shifting in zeros.

_mm256_castpd128_pd256x86 and avx

Casts vector of type __m128d to type __m256d; the upper 128 bits of the result are undefined.

_mm256_castpd256_pd128x86 and avx

Casts vector of type __m256d to type __m128d.

_mm256_castpd_si256x86 and avx

Casts vector of type __m256d to type __m256i.

_mm256_castps128_ps256x86 and avx

Casts vector of type __m128 to type __m256; the upper 128 bits of the result are undefined.

_mm256_castps256_ps128x86 and avx

Casts vector of type __m256 to type __m128.

_mm256_castps_si256x86 and avx

Casts vector of type __m256 to type __m256i.

_mm256_castsi256_psx86 and avx

Casts vector of type __m256i to type __m256.

_mm256_castsi256_pdx86 and avx

Casts vector of type __m256i to type __m256d.

_mm256_castsi128_si256x86 and avx

Casts vector of type __m128i to type __m256i; the upper 128 bits of the result are undefined.

_mm256_castsi256_si128x86 and avx

Casts vector of type __m256i to type __m128i.

_mm256_cmpeq_epi8x86 and avx2

Compare packed 8-bit integers in a and b for equality.

_mm256_cmpeq_epi16x86 and avx2

Compare packed 16-bit integers in a and b for equality.

_mm256_cmpeq_epi32x86 and avx2

Compare packed 32-bit integers in a and b for equality.

_mm256_cmpeq_epi64x86 and avx2

Compare packed 64-bit integers in a and b for equality.

_mm256_cmpgt_epi8x86 and avx2

Compare packed 8-bit integers in a and b for greater-than.

_mm256_cmpgt_epi16x86 and avx2

Compare packed 16-bit integers in a and b for greater-than.

_mm256_cmpgt_epi32x86 and avx2

Compare packed 32-bit integers in a and b for greater-than.

_mm256_cmpgt_epi64x86 and avx2

Compare packed 64-bit integers in a and b for greater-than.

_mm256_cvtepi32_pdx86 and avx

Convert packed 32-bit integers in a to packed double-precision (64-bit) floating-point elements.

_mm256_cvtepi32_psx86 and avx

Convert packed 32-bit integers in a to packed single-precision (32-bit) floating-point elements.

_mm256_cvtepi16_epi32x86 and avx2

Sign-extend 16-bit integers to 32-bit integers.

_mm256_cvtepi16_epi64x86 and avx2

Sign-extend 16-bit integers to 64-bit integers.

_mm256_cvtepi32_epi64x86 and avx2

Sign-extend 32-bit integers to 64-bit integers.

_mm256_cvtepi8_epi16x86 and avx2

Sign-extend 8-bit integers to 16-bit integers.

_mm256_cvtepi8_epi32x86 and avx2

Sign-extend 8-bit integers to 32-bit integers.

_mm256_cvtepi8_epi64x86 and avx2

Sign-extend 8-bit integers to 64-bit integers.

_mm256_cvtepu16_epi32x86 and avx2

Zero extend packed unsigned 16-bit integers in a to packed 32-bit integers, and store the results in dst.

_mm256_cvtepu16_epi64x86 and avx2

Zero-extend the lower four unsigned 16-bit integers in a to 64-bit integers. The upper four elements of a are unused.

_mm256_cvtepu32_epi64x86 and avx2

Zero-extend unsigned 32-bit integers in a to 64-bit integers.

_mm256_cvtepu8_epi16x86 and avx2

Zero-extend unsigned 8-bit integers in a to 16-bit integers.

_mm256_cvtepu8_epi32x86 and avx2

Zero-extend the lower eight unsigned 8-bit integers in a to 32-bit integers. The upper eight elements of a are unused.

_mm256_cvtepu8_epi64x86 and avx2

Zero-extend the lower four unsigned 8-bit integers in a to 64-bit integers. The upper twelve elements of a are unused.

_mm256_cvtpd_epi32x86 and avx

Convert packed double-precision (64-bit) floating-point elements in a to packed 32-bit integers.

_mm256_cvtps_epi32x86 and avx

Convert packed single-precision (32-bit) floating-point elements in a to packed 32-bit integers.

_mm256_cvtsd_f64x86 and avx2

Returns the first element of the input vector of [4 x double].

_mm256_cvtsi256_si32x86 and avx2

Returns the first element of the input vector of [8 x i32].

_mm256_cvtss_f32x86 and avx

Returns the first element of the input vector of [8 x float].

_mm256_cvttpd_epi32x86 and avx

Convert packed double-precision (64-bit) floating-point elements in a to packed 32-bit integers with truncation.

_mm256_cvttps_epi32x86 and avx

Convert packed single-precision (32-bit) floating-point elements in a to packed 32-bit integers with truncation.

_mm256_extract_epi8x86 and avx2

Extract an 8-bit integer from a, selected with imm8. Returns a 32-bit integer containing the zero-extended integer data.

_mm256_extract_epi16x86 and avx2

Extract a 16-bit integer from a, selected with imm8. Returns a 32-bit integer containing the zero-extended integer data.

_mm256_extract_epi32x86 and avx2

Extract a 32-bit integer from a, selected with imm8.

_mm256_extractf128_psx86 and avx

Extract 128 bits (composed of 4 packed single-precision (32-bit) floating-point elements) from a, selected with imm8.

_mm256_extractf128_pdx86 and avx

Extract 128 bits (composed of 2 packed double-precision (64-bit) floating-point elements) from a, selected with imm8.

_mm256_extractf128_si256x86 and avx

Extract 128 bits (composed of integer data) from a, selected with imm8.

_mm256_extracti128_si256x86 and avx2

Extract 128 bits (of integer data) from a, selected with imm8.

_mm256_hadd_epi16x86 and avx2

Horizontally add adjacent pairs of 16-bit integers in a and b.

_mm256_hadd_epi32x86 and avx2

Horizontally add adjacent pairs of 32-bit integers in a and b.

_mm256_hadds_epi16x86 and avx2

Horizontally add adjacent pairs of 16-bit integers in a and b using saturation.

_mm256_hsub_epi16x86 and avx2

Horizontally subtract adjacent pairs of 16-bit integers in a and b.

_mm256_hsub_epi32x86 and avx2

Horizontally subtract adjacent pairs of 32-bit integers in a and b.

_mm256_hsubs_epi16x86 and avx2

Horizontally subtract adjacent pairs of 16-bit integers in a and b using saturation.

_mm256_i32gather_psx86 and avx2

Return values from slice at offsets determined by offsets * scale, where scale is between 1 and 8.

_mm256_i32gather_pdx86 and avx2

Return values from slice at offsets determined by offsets * scale, where scale is between 1 and 8.

_mm256_i64gather_psx86 and avx2

Return values from slice at offsets determined by offsets * scale, where scale is between 1 and 8.

_mm256_i64gather_pdx86 and avx2

Return values from slice at offsets determined by offsets * scale, where scale is between 1 and 8.

_mm256_i32gather_epi32x86 and avx2

Return values from slice at offsets determined by offsets * scale, where scale is between 1 and 8.

_mm256_i32gather_epi64x86 and avx2

Return values from slice at offsets determined by offsets * scale, where scale is between 1 and 8.

_mm256_i64gather_epi32x86 and avx2

Return values from slice at offsets determined by offsets * scale, where scale is between 1 and 8.

_mm256_i64gather_epi64x86 and avx2

Return values from slice at offsets determined by offsets * scale, where scale is between 1 and 8.

_mm256_insert_epi8x86 and avx

Copy a to result, and insert the 8-bit integer i into result at the location specified by index.

_mm256_insert_epi16x86 and avx

Copy a to result, and insert the 16-bit integer i into result at the location specified by index.

_mm256_insert_epi32x86 and avx

Copy a to result, and insert the 32-bit integer i into result at the location specified by index.

_mm256_insertf128_psx86 and avx

Copy a to result, then insert 128 bits (composed of 4 packed single-precision (32-bit) floating-point elements) from b into result at the location specified by imm8.

_mm256_insertf128_pdx86 and avx

Copy a to result, then insert 128 bits (composed of 2 packed double-precision (64-bit) floating-point elements) from b into result at the location specified by imm8.

_mm256_insertf128_si256x86 and avx

Copy a to result, then insert 128 bits from b into result at the location specified by imm8.

_mm256_inserti128_si256x86 and avx2

Copy a to dst, then insert 128 bits (of integer data) from b at the location specified by imm8.

_mm256_lddqu_si256x86 and avx

Load 256-bits of integer data from unaligned memory into result. This intrinsic may perform better than _mm256_loadu_si256 when the data crosses a cache line boundary.

_mm256_load_si256x86 and avx

Load 256-bits of integer data from memory into result. mem_addr must be aligned on a 32-byte boundary or a general-protection exception may be generated.

_mm256_loadu2_m128x86 and avx,sse

Load two 128-bit values (composed of 4 packed single-precision (32-bit) floating-point elements) from memory, and combine them into a 256-bit value. hiaddr and loaddr do not need to be aligned on any particular boundary.

_mm256_loadu2_m128dx86 and avx,sse2

Load two 128-bit values (composed of 2 packed double-precision (64-bit) floating-point elements) from memory, and combine them into a 256-bit value. hiaddr and loaddr do not need to be aligned on any particular boundary.

_mm256_loadu2_m128ix86 and avx,sse2

Load two 128-bit values (composed of integer data) from memory, and combine them into a 256-bit value. hiaddr and loaddr do not need to be aligned on any particular boundary.

_mm256_loadu_si256x86 and avx

Load 256-bits of integer data from memory into result. mem_addr does not need to be aligned on any particular boundary.

_mm256_madd_epi16x86 and avx2

Multiply packed signed 16-bit integers in a and b, producing intermediate signed 32-bit integers. Horizontally add adjacent pairs of intermediate 32-bit integers.

_mm256_maddubs_epi16x86 and avx2

Vertically multiply each unsigned 8-bit integer from a with the corresponding signed 8-bit integer from b, producing intermediate signed 16-bit integers. Horizontally add adjacent pairs of intermediate signed 16-bit integers

_mm256_mask_i32gather_psx86 and avx2

Return values from slice at offsets determined by offsets * scale, where scale is between 1 and 8. If mask is set, load the value from src in that position instead.

_mm256_mask_i32gather_pdx86 and avx2

Return values from slice at offsets determined by offsets * scale, where scale is between 1 and 8. If mask is set, load the value from src in that position instead.

_mm256_mask_i64gather_psx86 and avx2

Return values from slice at offsets determined by offsets * scale, where scale is between 1 and 8. If mask is set, load the value from src in that position instead.

_mm256_mask_i64gather_pdx86 and avx2

Return values from slice at offsets determined by offsets * scale, where scale is between 1 and 8. If mask is set, load the value from src in that position instead.

_mm256_mask_i32gather_epi32x86 and avx2

Return values from slice at offsets determined by offsets * scale, where scale is between 1 and 8. If mask is set, load the value from src in that position instead.

_mm256_mask_i32gather_epi64x86 and avx2

Return values from slice at offsets determined by offsets * scale, where scale is between 1 and 8. If mask is set, load the value from src in that position instead.

_mm256_mask_i64gather_epi32x86 and avx2

Return values from slice at offsets determined by offsets * scale, where scale is between 1 and 8. If mask is set, load the value from src in that position instead.

_mm256_mask_i64gather_epi64x86 and avx2

Return values from slice at offsets determined by offsets * scale, where scale is between 1 and 8. If mask is set, load the value from src in that position instead.

_mm256_maskload_epi32x86 and avx2

Load packed 32-bit integers from memory pointed by mem_addr using mask (elements are zeroed out when the highest bit is not set in the corresponding element).

_mm256_maskload_epi64x86 and avx2

Load packed 64-bit integers from memory pointed by mem_addr using mask (elements are zeroed out when the highest bit is not set in the corresponding element).

_mm256_maskstore_epi32x86 and avx2

Store packed 32-bit integers from a into memory pointed by mem_addr using mask (elements are not stored when the highest bit is not set in the corresponding element).

_mm256_maskstore_epi64x86 and avx2

Store packed 64-bit integers from a into memory pointed by mem_addr using mask (elements are not stored when the highest bit is not set in the corresponding element).

_mm256_max_epi8x86 and avx2

Compare packed 8-bit integers in a and b, and return the packed maximum values.

_mm256_max_epi16x86 and avx2

Compare packed 16-bit integers in a and b, and return the packed maximum values.

_mm256_max_epi32x86 and avx2

Compare packed 32-bit integers in a and b, and return the packed maximum values.

_mm256_max_epu8x86 and avx2

Compare packed unsigned 8-bit integers in a and b, and return the packed maximum values.

_mm256_max_epu16x86 and avx2

Compare packed unsigned 16-bit integers in a and b, and return the packed maximum values.

_mm256_max_epu32x86 and avx2

Compare packed unsigned 32-bit integers in a and b, and return the packed maximum values.

_mm256_min_epi8x86 and avx2

Compare packed 8-bit integers in a and b, and return the packed minimum values.

_mm256_min_epi16x86 and avx2

Compare packed 16-bit integers in a and b, and return the packed minimum values.

_mm256_min_epi32x86 and avx2

Compare packed 32-bit integers in a and b, and return the packed minimum values.

_mm256_min_epu8x86 and avx2

Compare packed unsigned 8-bit integers in a and b, and return the packed minimum values.

_mm256_min_epu16x86 and avx2

Compare packed unsigned 16-bit integers in a and b, and return the packed minimum values.

_mm256_min_epu32x86 and avx2

Compare packed unsigned 32-bit integers in a and b, and return the packed minimum values.

_mm256_movemask_epi8x86 and avx2

Create mask from the most significant bit of each 8-bit element in a, return the result.
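
A sketch of a memchr-style search over one 32-byte block, combining _mm256_set1_epi8 and _mm256_cmpeq_epi8 from this listing with the byte mask produced here; find_byte is an illustrative helper assuming AVX2 support:

    #[cfg(target_arch = "x86")]
    use std::arch::x86::*;
    #[cfg(target_arch = "x86_64")]
    use std::arch::x86_64::*;

    // Compare all 32 bytes at once, collapse the per-byte results into a
    // 32-bit mask, and take the index of its lowest set bit.
    #[target_feature(enable = "avx2")]
    unsafe fn find_byte(block: &[u8; 32], needle: u8) -> Option<usize> {
        let v = _mm256_loadu_si256(block.as_ptr() as *const __m256i);
        let eq = _mm256_cmpeq_epi8(v, _mm256_set1_epi8(needle as i8));
        let mask = _mm256_movemask_epi8(eq) as u32;
        if mask == 0 {
            None
        } else {
            Some(mask.trailing_zeros() as usize)
        }
    }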

_mm256_mpsadbw_epu8x86 and avx2

Compute the sum of absolute differences (SADs) of quadruplets of unsigned 8-bit integers in a compared to those in b, and store the 16-bit results in dst. Eight SADs are performed for each 128-bit lane using one quadruplet from b and eight quadruplets from a. One quadruplet is selected from b starting at the offset specified in imm8. Eight quadruplets are formed from sequential 8-bit integers selected from a starting at the offset specified in imm8.

_mm256_mul_epi32x86 and avx2

Multiply the low 32-bit integers from each packed 64-bit element in a and b

_mm256_mul_epu32x86 and avx2

Multiply the low unsigned 32-bit integers from each packed 64-bit element in a and b

_mm256_mulhi_epi16x86 and avx2

Multiply the packed 16-bit integers in a and b, producing intermediate 32-bit integers and returning the high 16 bits of the intermediate integers.

_mm256_mulhi_epu16x86 and avx2

Multiply the packed unsigned 16-bit integers in a and b, producing intermediate 32-bit integers and returning the high 16 bits of the intermediate integers.

_mm256_mulhrs_epi16x86 and avx2

Multiply packed 16-bit integers in a and b, producing intermediate signed 32-bit integers. Truncate each intermediate integer to the 18 most significant bits, round by adding 1, and return bits [16:1].

_mm256_mullo_epi16x86 and avx2

Multiply the packed 16-bit integers in a and b, producing intermediate 32-bit integers, and return the low 16 bits of the intermediate integers

_mm256_mullo_epi32x86 and avx2

Multiply the packed 32-bit integers in a and b, producing intermediate 64-bit integers, and return the low 32 bits of the intermediate integers

_mm256_or_si256x86 and avx2

Compute the bitwise OR of 256 bits (representing integer data) in a and b

_mm256_packs_epi16x86 and avx2

Convert packed 16-bit integers from a and b to packed 8-bit integers using signed saturation

_mm256_packs_epi32x86 and avx2

Convert packed 32-bit integers from a and b to packed 16-bit integers using signed saturation

_mm256_packus_epi16x86 and avx2

Convert packed 16-bit integers from a and b to packed 8-bit integers using unsigned saturation

_mm256_packus_epi32x86 and avx2

Convert packed 32-bit integers from a and b to packed 16-bit integers using unsigned saturation

_mm256_permute2f128_psx86 and avx

Shuffle 256-bits (composed of 8 packed single-precision (32-bit) floating-point elements) selected by imm8 from a and b.

_mm256_permute2f128_pdx86 and avx

Shuffle 256-bits (composed of 4 packed double-precision (64-bit) floating-point elements) selected by imm8 from a and b.

_mm256_permute2f128_si256x86 and avx

Shuffle 256 bits (composed of integer data) selected by imm8 from a and b.

_mm256_permute2x128_si256x86 and avx2

Shuffle 128-bits of integer data selected by imm8 from a and b.

_mm256_permute4x64_pdx86 and avx2

Shuffle 64-bit floating-point elements in a across lanes using the control in imm8.

_mm256_permute4x64_epi64x86 and avx2

Permutes 64-bit integers from a using control mask imm8.

_mm256_permutevar8x32_psx86 and avx2

Shuffle eight 32-bit floating-point elements in a across lanes using the corresponding 32-bit integer index in idx.

_mm256_permutevar8x32_epi32x86 and avx2

Permutes packed 32-bit integers from a according to the content of b.

_mm256_sad_epu8x86 and avx2

Compute the absolute differences of packed unsigned 8-bit integers in a and b, then horizontally sum each consecutive 8 differences to produce four unsigned 16-bit integers, and pack these unsigned 16-bit integers in the low 16 bits of the 64-bit return value

_mm256_set1_pdx86 and avx

Broadcast double-precision (64-bit) floating-point value a to all elements of returned vector.

_mm256_set1_psx86 and avx

Broadcast single-precision (32-bit) floating-point value a to all elements of returned vector.

_mm256_set1_epi8x86 and avx

Broadcast 8-bit integer a to all elements of returned vector. This intrinsic may generate the vpbroadcastb instruction.

_mm256_set1_epi16x86 and avx

Broadcast 16-bit integer a to all elements of returned vector. This intrinsic may generate the vpbroadcastw instruction.

_mm256_set1_epi32x86 and avx

Broadcast 32-bit integer a to all elements of returned vector. This intrinsic may generate the vpbroadcastd instruction.

_mm256_set1_epi64xx86 and avx

Broadcast 64-bit integer a to all elements of returned vector. This intrinsic may generate the vpbroadcastq instruction.

_mm256_set_epi8x86 and avx

Set packed 8-bit integers in returned vector with the supplied values.

_mm256_set_epi16x86 and avx

Set packed 16-bit integers in returned vector with the supplied values.

_mm256_set_epi32x86 and avx

Set packed 32-bit integers in returned vector with the supplied values.

_mm256_set_epi64xx86 and avx

Set packed 64-bit integers in returned vector with the supplied values.

_mm256_set_m128x86 and avx

Set packed __m256 returned vector with the supplied values.

_mm256_set_m128dx86 and avx

Set packed __m256d returned vector with the supplied values.

_mm256_set_m128ix86 and avx

Set packed __m256i returned vector with the supplied values.

_mm256_setr_epi8x86 and avx

Set packed 8-bit integers in returned vector with the supplied values in reverse order.

_mm256_setr_epi16x86 and avx

Set packed 16-bit integers in returned vector with the supplied values in reverse order.

_mm256_setr_epi32x86 and avx

Set packed 32-bit integers in returned vector with the supplied values in reverse order.

_mm256_setr_epi64xx86 and avx

Set packed 64-bit integers in returned vector with the supplied values in reverse order.

_mm256_setr_m128x86 and avx

Set packed __m256 returned vector with the supplied values.

_mm256_setr_m128dx86 and avx

Set packed __m256d returned vector with the supplied values.

_mm256_setr_m128ix86 and avx

Set packed __m256i returned vector with the supplied values.

_mm256_setzero_si256x86 and avx

Return vector of type __m256i with all elements set to zero.

_mm256_shuffle_epi8x86 and avx2

Shuffle bytes from a according to the content of b.

_mm256_shuffle_epi32x86 and avx2

Shuffle 32-bit integers in 128-bit lanes of a using the control in imm8.

_mm256_shufflehi_epi16x86 and avx2

Shuffle 16-bit integers in the high 64 bits of 128-bit lanes of a using the control in imm8. The low 64 bits of 128-bit lanes of a are copied to the output.

_mm256_shufflelo_epi16x86 and avx2

Shuffle 16-bit integers in the low 64 bits of 128-bit lanes of a using the control in imm8. The high 64 bits of 128-bit lanes of a are copied to the output.

_mm256_sign_epi8x86 and avx2

Negate packed 8-bit integers in a when the corresponding signed 8-bit integer in b is negative, and return the results. Results are zeroed out when the corresponding element in b is zero.

_mm256_sign_epi16x86 and avx2

Negate packed 16-bit integers in a when the corresponding signed 16-bit integer in b is negative, and return the results. Results are zeroed out when the corresponding element in b is zero.

_mm256_sign_epi32x86 and avx2

Negate packed 32-bit integers in a when the corresponding signed 32-bit integer in b is negative, and return the results. Results are zeroed out when the corresponding element in b is zero.

_mm256_sll_epi16x86 and avx2

Shift packed 16-bit integers in a left by count while shifting in zeros, and return the result

_mm256_sll_epi32x86 and avx2

Shift packed 32-bit integers in a left by count while shifting in zeros, and return the result

_mm256_sll_epi64x86 and avx2

Shift packed 64-bit integers in a left by count while shifting in zeros, and return the result

_mm256_slli_epi16x86 and avx2

Shift packed 16-bit integers in a left by imm8 while shifting in zeros, and return the results.

_mm256_slli_epi32x86 and avx2

Shift packed 32-bit integers in a left by imm8 while shifting in zeros, and return the results.

_mm256_slli_epi64x86 and avx2

Shift packed 64-bit integers in a left by imm8 while shifting in zeros, and return the results.

_mm256_slli_si256x86 and avx2

Shift 128-bit lanes in a left by imm8 bytes while shifting in zeros.

_mm256_sllv_epi32x86 and avx2

Shift packed 32-bit integers in a left by the amount specified by the corresponding element in count while shifting in zeros, and return the result.

_mm256_sllv_epi64x86 and avx2

Shift packed 64-bit integers in a left by the amount specified by the corresponding element in count while shifting in zeros, and return the result.

_mm256_sra_epi16x86 and avx2

Shift packed 16-bit integers in a right by count while shifting in sign bits.

_mm256_sra_epi32x86 and avx2

Shift packed 32-bit integers in a right by count while shifting in sign bits.

_mm256_srai_epi16x86 and avx2

Shift packed 16-bit integers in a right by imm8 while shifting in sign bits.

_mm256_srai_epi32x86 and avx2

Shift packed 32-bit integers in a right by imm8 while shifting in sign bits.

_mm256_srav_epi32x86 and avx2

Shift packed 32-bit integers in a right by the amount specified by the corresponding element in count while shifting in sign bits.

_mm256_srl_epi16x86 and avx2

Shift packed 16-bit integers in a right by count while shifting in zeros.

_mm256_srl_epi32x86 and avx2

Shift packed 32-bit integers in a right by count while shifting in zeros.

_mm256_srl_epi64x86 and avx2

Shift packed 64-bit integers in a right by count while shifting in zeros.

_mm256_srli_epi16x86 and avx2

Shift packed 16-bit integers in a right by imm8 while shifting in zeros

_mm256_srli_epi32x86 and avx2

Shift packed 32-bit integers in a right by imm8 while shifting in zeros

_mm256_srli_epi64x86 and avx2

Shift packed 64-bit integers in a right by imm8 while shifting in zeros

_mm256_srli_si256x86 and avx2

Shift 128-bit lanes in a right by imm8 bytes while shifting in zeros.

_mm256_srlv_epi32x86 and avx2

Shift packed 32-bit integers in a right by the amount specified by the corresponding element in count while shifting in zeros.

_mm256_srlv_epi64x86 and avx2

Shift packed 64-bit integers in a right by the amount specified by the corresponding element in count while shifting in zeros.

_mm256_store_si256x86 and avx

Store 256-bits of integer data from a into memory. mem_addr must be aligned on a 32-byte boundary or a general-protection exception may be generated.

_mm256_storeu2_m128x86 and avx,sse

Store the high and low 128-bit halves (each composed of 4 packed single-precision (32-bit) floating-point elements) from a into two different 128-bit memory locations. hiaddr and loaddr do not need to be aligned on any particular boundary.

_mm256_storeu2_m128dx86 and avx,sse2

Store the high and low 128-bit halves (each composed of 2 packed double-precision (64-bit) floating-point elements) from a into two different 128-bit memory locations. hiaddr and loaddr do not need to be aligned on any particular boundary.

_mm256_storeu2_m128ix86 and avx,sse2

Store the high and low 128-bit halves (each composed of integer data) from a into two different 128-bit memory locations. hiaddr and loaddr do not need to be aligned on any particular boundary.

_mm256_storeu_si256x86 and avx

Store 256-bits of integer data from a into memory. mem_addr does not need to be aligned on any particular boundary.

_mm256_stream_si256x86 and avx

Moves integer data from a 256-bit integer vector to a 32-byte aligned memory location. To minimize caching, the data is flagged as non-temporal (unlikely to be used again soon)

_mm256_sub_epi8x86 and avx2

Subtract packed 8-bit integers in b from packed 8-bit integers in a

_mm256_sub_epi16x86 and avx2

Subtract packed 16-bit integers in b from packed 16-bit integers in a

_mm256_sub_epi32x86 and avx2

Subtract packed 32-bit integers in b from packed 32-bit integers in a

_mm256_sub_epi64x86 and avx2

Subtract packed 64-bit integers in b from packed 64-bit integers in a

_mm256_subs_epi8x86 and avx2

Subtract packed 8-bit integers in b from packed 8-bit integers in a using saturation.

_mm256_subs_epi16x86 and avx2

Subtract packed 16-bit integers in b from packed 16-bit integers in a using saturation.

_mm256_subs_epu8x86 and avx2

Subtract packed unsigned 8-bit integers in b from packed unsigned 8-bit integers in a using saturation.

_mm256_subs_epu16x86 and avx2

Subtract packed unsigned 16-bit integers in b from packed unsigned 16-bit integers in a using saturation.

_mm256_testc_si256x86 and avx

Compute the bitwise AND of 256 bits (representing integer data) in a and b, and set ZF to 1 if the result is zero, otherwise set ZF to 0. Compute the bitwise NOT of a and then AND with b, and set CF to 1 if the result is zero, otherwise set CF to 0. Return the CF value.

_mm256_testnzc_si256x86 and avx

Compute the bitwise AND of 256 bits (representing integer data) in a and b, and set ZF to 1 if the result is zero, otherwise set ZF to 0. Compute the bitwise NOT of a and then AND with b, and set CF to 1 if the result is zero, otherwise set CF to 0. Return 1 if both the ZF and CF values are zero, otherwise return 0.

_mm256_testz_si256x86 and avx

Compute the bitwise AND of 256 bits (representing integer data) in a and b, and set ZF to 1 if the result is zero, otherwise set ZF to 0. Compute the bitwise NOT of a and then AND with b, and set CF to 1 if the result is zero, otherwise set CF to 0. Return the ZF value.
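
A minimal sketch of the ZF behaviour described for the three test intrinsics above (same assumptions as the earlier examples: std, x86_64, run-time detection; demo and the bit patterns are illustrative):

```rust
#[cfg(target_arch = "x86_64")]
#[target_feature(enable = "avx")]
unsafe fn demo() {
    use std::arch::x86_64::*;
    let a = _mm256_set1_epi32(0b0101);
    let b = _mm256_set1_epi32(0b1010);
    // a & b is zero in every bit, so ZF is set and _mm256_testz_si256 returns 1.
    assert_eq!(_mm256_testz_si256(a, b), 1);
    // a & a is non-zero, so ZF is clear and the result is 0.
    assert_eq!(_mm256_testz_si256(a, a), 0);
}

#[cfg(target_arch = "x86_64")]
fn main() {
    if is_x86_feature_detected!("avx") {
        unsafe { demo() }
    }
}

#[cfg(not(target_arch = "x86_64"))]
fn main() {}
```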

_mm256_undefined_si256x86 and avx

Return vector of type __m256i with undefined elements.

_mm256_unpackhi_epi8x86 and avx2

Unpack and interleave 8-bit integers from the high half of each 128-bit lane in a and b.

_mm256_unpackhi_epi16x86 and avx2

Unpack and interleave 16-bit integers from the high half of each 128-bit lane of a and b.

_mm256_unpackhi_epi32x86 and avx2

Unpack and interleave 32-bit integers from the high half of each 128-bit lane of a and b.

_mm256_unpackhi_epi64x86 and avx2

Unpack and interleave 64-bit integers from the high half of each 128-bit lane of a and b.

_mm256_unpacklo_epi8x86 and avx2

Unpack and interleave 8-bit integers from the low half of each 128-bit lane of a and b.

_mm256_unpacklo_epi16x86 and avx2

Unpack and interleave 16-bit integers from the low half of each 128-bit lane of a and b.

_mm256_unpacklo_epi32x86 and avx2

Unpack and interleave 32-bit integers from the low half of each 128-bit lane of a and b.

_mm256_unpacklo_epi64x86 and avx2

Unpack and interleave 64-bit integers from the low half of each 128-bit lane of a and b.

_mm256_xor_si256x86 and avx2

Compute the bitwise XOR of 256 bits (representing integer data) in a and b

_mm256_zextpd128_pd256x86 and avx,sse2

Constructs a 256-bit floating-point vector of [4 x double] from a 128-bit floating-point vector of [2 x double]. The lower 128 bits contain the value of the source vector. The upper 128 bits are set to zero.

_mm256_zextps128_ps256x86 and avx,sse

Constructs a 256-bit floating-point vector of [8 x float] from a 128-bit floating-point vector of [4 x float]. The lower 128 bits contain the value of the source vector. The upper 128 bits are set to zero.

_mm256_zextsi128_si256x86 and avx,sse2

Constructs a 256-bit integer vector from a 128-bit integer vector. The lower 128 bits contain the value of the source vector. The upper 128 bits are set to zero.

_mm_abs_epi8x86 and ssse3

Compute the absolute value of packed 8-bit signed integers in a and return the unsigned results.

_mm_abs_epi16x86 and ssse3

Compute the absolute value of each of the packed 16-bit signed integers in a and return the 16-bit unsigned integer

_mm_abs_epi32x86 and ssse3

Compute the absolute value of each of the packed 32-bit signed integers in a and return the 32-bit unsigned integer

_mm_add_epi8x86 and sse2

Add packed 8-bit integers in a and b.

_mm_add_epi16x86 and sse2

Add packed 16-bit integers in a and b.

_mm_add_epi32x86 and sse2

Add packed 32-bit integers in a and b.

_mm_add_epi64x86 and sse2

Add packed 64-bit integers in a and b.
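
A minimal sketch of packed addition with _mm_add_epi64 (same assumptions as the earlier examples; note that _mm_set_epi64x takes its arguments highest element first):

```rust
#[cfg(target_arch = "x86_64")]
#[target_feature(enable = "sse2")]
unsafe fn demo() {
    use std::arch::x86_64::*;
    // Arguments are given high element first, so lane 0 holds 2 and 20.
    let a = _mm_set_epi64x(1, 2);
    let b = _mm_set_epi64x(10, 20);
    let sum = _mm_add_epi64(a, b);
    let lanes: [i64; 2] = std::mem::transmute(sum);
    assert_eq!(lanes, [22, 11]); // lane 0 first in memory order
}

#[cfg(target_arch = "x86_64")]
fn main() {
    if is_x86_feature_detected!("sse2") {
        unsafe { demo() }
    }
}

#[cfg(not(target_arch = "x86_64"))]
fn main() {}
```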

_mm_add_pdx86 and sse2

Add packed double-precision (64-bit) floating-point elements in a and b.

_mm_add_psx86 and sse

Adds __m128 vectors.

_mm_add_sdx86 and sse2

Return a new vector with the low element of a replaced by the sum of the low elements of a and b.

_mm_add_ssx86 and sse

Adds the first component of a and b, the other components are copied from a.

_mm_adds_epi8x86 and sse2

Add packed 8-bit integers in a and b using saturation.

_mm_adds_epi16x86 and sse2

Add packed 16-bit integers in a and b using saturation.

_mm_adds_epu8x86 and sse2

Add packed unsigned 8-bit integers in a and b using saturation.

_mm_adds_epu16x86 and sse2

Add packed unsigned 16-bit integers in a and b using saturation.

_mm_addsub_pdx86 and sse3

Alternatively add and subtract packed double-precision (64-bit) floating-point elements in a to/from packed elements in b.

_mm_addsub_psx86 and sse3

Alternatively add and subtract packed single-precision (32-bit) floating-point elements in a to/from packed elements in b.
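
A minimal sketch of the alternating pattern (same assumptions as the earlier examples): even-indexed lanes are subtracted, odd-indexed lanes are added.

```rust
#[cfg(target_arch = "x86_64")]
#[target_feature(enable = "sse3")]
unsafe fn demo() {
    use std::arch::x86_64::*;
    let a = _mm_setr_ps(10.0, 10.0, 10.0, 10.0);
    let b = _mm_setr_ps(1.0, 2.0, 3.0, 4.0);
    // Lanes 0 and 2: a - b; lanes 1 and 3: a + b.
    let r = _mm_addsub_ps(a, b);
    let mut out = [0.0f32; 4];
    _mm_storeu_ps(out.as_mut_ptr(), r);
    assert_eq!(out, [9.0, 12.0, 7.0, 14.0]);
}

#[cfg(target_arch = "x86_64")]
fn main() {
    if is_x86_feature_detected!("sse3") {
        unsafe { demo() }
    }
}

#[cfg(not(target_arch = "x86_64"))]
fn main() {}
```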

_mm_aesdec_si128x86 and aes

Perform one round of an AES decryption flow on data (state) in a.

_mm_aesdeclast_si128x86 and aes

Perform the last round of an AES decryption flow on data (state) in a.

_mm_aesenc_si128x86 and aes

Perform one round of an AES encryption flow on data (state) in a.

_mm_aesenclast_si128x86 and aes

Perform the last round of an AES encryption flow on data (state) in a.

_mm_aesimc_si128x86 and aes

Perform the InvMixColumns transformation on a.

_mm_aeskeygenassist_si128x86 and aes

Assist in expanding the AES cipher key.

_mm_alignr_epi8x86 and ssse3

Concatenate 16-byte blocks in a and b into a 32-byte temporary result, shift the result right by n bytes, and return the low 16 bytes.

_mm_and_pdx86 and sse2

Compute the bitwise AND of packed double-precision (64-bit) floating-point elements in a and b.

_mm_and_psx86 and sse

Bitwise AND of packed single-precision (32-bit) floating-point elements.

_mm_and_si128x86 and sse2

Compute the bitwise AND of 128 bits (representing integer data) in a and b.

_mm_andnot_pdx86 and sse2

Compute the bitwise NOT of a and then AND with b.

_mm_andnot_psx86 and sse

Bitwise AND-NOT of packed single-precision (32-bit) floating-point elements.

_mm_andnot_si128x86 and sse2

Compute the bitwise NOT of 128 bits (representing integer data) in a and then AND with b.

_mm_avg_epu8x86 and sse2

Average packed unsigned 8-bit integers in a and b.

_mm_avg_epu16x86 and sse2

Average packed unsigned 16-bit integers in a and b.

_mm_blend_epi16x86 and sse4.1

Blend packed 16-bit integers from a and b using the mask imm8.

_mm_blend_epi32x86 and avx2

Blend packed 32-bit integers from a and b using control mask imm8.

_mm_blend_pdx86 and sse4.1

Blend packed double-precision (64-bit) floating-point elements from a and b using control mask imm2

_mm_blend_psx86 and sse4.1

Blend packed single-precision (32-bit) floating-point elements from a and b using mask imm4

_mm_blendv_epi8x86 and sse4.1

Blend packed 8-bit integers from a and b using mask

_mm_blendv_pdx86 and sse4.1

Blend packed double-precision (64-bit) floating-point elements from a and b using mask

_mm_blendv_psx86 and sse4.1

Blend packed single-precision (32-bit) floating-point elements from a and b using mask

_mm_broadcast_ssx86 and avx

Broadcast a single-precision (32-bit) floating-point element from memory to all elements of the returned vector.

_mm_broadcastb_epi8x86 and avx2

Broadcast the low packed 8-bit integer from a to all elements of the 128-bit returned value.

_mm_broadcastd_epi32x86 and avx2

Broadcast the low packed 32-bit integer from a to all elements of the 128-bit returned value.

_mm_broadcastq_epi64x86 and avx2

Broadcast the low packed 64-bit integer from a to all elements of the 128-bit returned value.

_mm_broadcastsd_pdx86 and avx2

Broadcast the low double-precision (64-bit) floating-point element from a to all elements of the 128-bit returned value.

_mm_broadcastss_psx86 and avx2

Broadcast the low single-precision (32-bit) floating-point element from a to all elements of the 128-bit returned value.

_mm_broadcastw_epi16x86 and avx2

Broadcast the low packed 16-bit integer from a to all elements of the 128-bit returned value

_mm_bslli_si128x86 and sse2

Shift a left by imm8 bytes while shifting in zeros.

_mm_bsrli_si128x86 and sse2

Shift a right by imm8 bytes while shifting in zeros.

_mm_castpd_psx86 and sse2

Casts a 128-bit floating-point vector of [2 x double] into a 128-bit floating-point vector of [4 x float].

_mm_castpd_si128x86 and sse2

Casts a 128-bit floating-point vector of [2 x double] into a 128-bit integer vector.

_mm_castps_pdx86 and sse2

Casts a 128-bit floating-point vector of [4 x float] into a 128-bit floating-point vector of [2 x double].

_mm_castps_si128x86 and sse2

Casts a 128-bit floating-point vector of [4 x float] into a 128-bit integer vector.

_mm_castsi128_pdx86 and sse2

Casts a 128-bit integer vector into a 128-bit floating-point vector of [2 x double].

_mm_castsi128_psx86 and sse2

Casts a 128-bit integer vector into a 128-bit floating-point vector of [4 x float].

_mm_ceil_pdx86 and sse4.1

Round the packed double-precision (64-bit) floating-point elements in a up to an integer value, and store the results as packed double-precision floating-point elements.

_mm_ceil_psx86 and sse4.1

Round the packed single-precision (32-bit) floating-point elements in a up to an integer value, and store the results as packed single-precision floating-point elements.

_mm_ceil_sdx86 and sse4.1

Round the lower double-precision (64-bit) floating-point element in b up to an integer value, store the result as a double-precision floating-point element in the lower element of the intrinsic result, and copy the upper element from a to the upper element of the intrinsic result.

_mm_ceil_ssx86 and sse4.1

Round the lower single-precision (32-bit) floating-point element in b up to an integer value, store the result as a single-precision floating-point element in the lower element of the intrinsic result, and copy the upper 3 packed elements from a to the upper elements of the intrinsic result.

_mm_clflushx86 and sse2

Invalidate and flush the cache line that contains p from all levels of the cache hierarchy.

_mm_clmulepi64_si128x86 and pclmulqdq

Perform a carry-less multiplication of two 64-bit polynomials over the finite field GF(2^k).

_mm_cmp_pdx86 and avx,sse2

Compare packed double-precision (64-bit) floating-point elements in a and b based on the comparison operand specified by imm8.

_mm_cmp_psx86 and avx,sse

Compare packed single-precision (32-bit) floating-point elements in a and b based on the comparison operand specified by imm8.

_mm_cmp_sdx86 and avx,sse2

Compare the lower double-precision (64-bit) floating-point element in a and b based on the comparison operand specified by imm8, store the result in the lower element of returned vector, and copy the upper element from a to the upper element of returned vector.

_mm_cmp_ssx86 and avx,sse

Compare the lower single-precision (32-bit) floating-point element in a and b based on the comparison operand specified by imm8, store the result in the lower element of returned vector, and copy the upper 3 packed elements from a to the upper elements of returned vector.

_mm_cmpeq_epi8x86 and sse2

Compare packed 8-bit integers in a and b for equality.

_mm_cmpeq_epi16x86 and sse2

Compare packed 16-bit integers in a and b for equality.

_mm_cmpeq_epi32x86 and sse2

Compare packed 32-bit integers in a and b for equality.

_mm_cmpeq_epi64x86 and sse4.1

Compare packed 64-bit integers in a and b for equality

_mm_cmpeq_pdx86 and sse2

Compare corresponding elements in a and b for equality.

_mm_cmpeq_psx86 and sse

Compare each of the four floats in a to the corresponding element in b. The result in the output vector will be 0xffffffff if the input elements were equal, or 0 otherwise.
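
A minimal sketch of the all-ones/all-zeros mask this comparison produces, collected into an integer with _mm_movemask_ps (same assumptions as the earlier examples):

```rust
#[cfg(target_arch = "x86_64")]
#[target_feature(enable = "sse")]
unsafe fn demo() {
    use std::arch::x86_64::*;
    let a = _mm_setr_ps(1.0, 2.0, 3.0, 4.0);
    let b = _mm_setr_ps(1.0, 9.0, 3.0, 9.0);
    let eq = _mm_cmpeq_ps(a, b);
    // _mm_movemask_ps gathers the sign bit of each lane: lanes 0 and 2 matched.
    assert_eq!(_mm_movemask_ps(eq), 0b0101);
}

#[cfg(target_arch = "x86_64")]
fn main() {
    if is_x86_feature_detected!("sse") {
        unsafe { demo() }
    }
}

#[cfg(not(target_arch = "x86_64"))]
fn main() {}
```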

_mm_cmpeq_sdx86 and sse2

Return a new vector with the low element of a replaced by the equality comparison of the lower elements of a and b.

_mm_cmpeq_ssx86 and sse

Compare the lowest f32 of both inputs for equality. The lowest 32 bits of the result will be 0xffffffff if the two inputs are equal, or 0 otherwise. The upper 96 bits of the result are the upper 96 bits of a.

_mm_cmpestrax86 and sse4.2

Compare packed strings in a and b with lengths la and lb using the control in imm8, and return 1 if b did not contain a null character and the resulting mask was zero, and 0 otherwise.

_mm_cmpestrcx86 and sse4.2

Compare packed strings in a and b with lengths la and lb using the control in imm8, and return 1 if the resulting mask was non-zero, and 0 otherwise.

_mm_cmpestrix86 and sse4.2

Compare packed strings a and b with lengths la and lb using the control in imm8 and return the generated index. Similar to _mm_cmpistri with the exception that _mm_cmpistri implicitly determines the length of a and b.

_mm_cmpestrmx86 and sse4.2

Compare packed strings in a and b with lengths la and lb using the control in imm8, and return the generated mask.

_mm_cmpestrox86 and sse4.2

Compare packed strings in a and b with lengths la and lb using the control in imm8, and return bit 0 of the resulting bit mask.

_mm_cmpestrsx86 and sse4.2

Compare packed strings in a and b with lengths la and lb using the control in imm8, and return 1 if any character in a was null, and 0 otherwise.

_mm_cmpestrzx86 and sse4.2

Compare packed strings in a and b with lengths la and lb using the control in imm8, and return 1 if any character in b was null, and 0 otherwise.

_mm_cmpge_pdx86 and sse2

Compare corresponding elements in a and b for greater-than-or-equal.

_mm_cmpge_psx86 and sse

Compare each of the four floats in a to the corresponding element in b. The result in the output vector will be 0xffffffff if the input element in a is greater than or equal to the corresponding element in b, or 0 otherwise.

_mm_cmpge_sdx86 and sse2

Return a new vector with the low element of a replaced by the greater-than-or-equal comparison of the lower elements of a and b.

_mm_cmpge_ssx86 and sse

Compare the lowest f32 of both inputs for greater than or equal. The lowest 32 bits of the result will be 0xffffffff if a.extract(0) is greater than or equal to b.extract(0), or 0 otherwise. The upper 96 bits of the result are the upper 96 bits of a.

_mm_cmpgt_epi8x86 and sse2

Compare packed 8-bit integers in a and b for greater-than.

_mm_cmpgt_epi16x86 and sse2

Compare packed 16-bit integers in a and b for greater-than.

_mm_cmpgt_epi32x86 and sse2

Compare packed 32-bit integers in a and b for greater-than.

_mm_cmpgt_epi64x86 and sse4.2

Compare packed 64-bit integers in a and b for greater-than, return the results.

_mm_cmpgt_pdx86 and sse2

Compare corresponding elements in a and b for greater-than.

_mm_cmpgt_psx86 and sse

Compare each of the four floats in a to the corresponding element in b. The result in the output vector will be 0xffffffff if the input element in a is greater than the corresponding element in b, or 0 otherwise.

_mm_cmpgt_sdx86 and sse2

Return a new vector with the low element of a replaced by the greater-than comparison of the lower elements of a and b.

_mm_cmpgt_ssx86 and sse

Compare the lowest f32 of both inputs for greater than. The lowest 32 bits of the result will be 0xffffffff if a.extract(0) is greater than b.extract(0), or 0 otherwise. The upper 96 bits of the result are the upper 96 bits of a.

_mm_cmpistrax86 and sse4.2

Compare packed strings with implicit lengths in a and b using the control in imm8, and return 1 if b did not contain a null character and the resulting mask was zero, and 0 otherwise.

_mm_cmpistrcx86 and sse4.2

Compare packed strings with implicit lengths in a and b using the control in imm8, and return 1 if the resulting mask was non-zero, and 0 otherwise.

_mm_cmpistrix86 and sse4.2

Compare packed strings with implicit lengths in a and b using the control in imm8 and return the generated index. Similar to _mm_cmpestri with the exception that _mm_cmpestri requires the lengths of a and b to be explicitly specified.

_mm_cmpistrmx86 and sse4.2

Compare packed strings with implicit lengths in a and b using the control in imm8, and return the generated mask.

_mm_cmpistrox86 and sse4.2

Compare packed strings with implicit lengths in a and b using the control in imm8, and return bit 0 of the resulting bit mask.

_mm_cmpistrsx86 and sse4.2

Compare packed strings with implicit lengths in a and b using the control in imm8, and return 1 if any character in a was null, and 0 otherwise.

_mm_cmpistrzx86 and sse4.2

Compare packed strings with implicit lengths in a and b using the control in imm8, and return 1 if any character in b was null, and 0 otherwise.

_mm_cmple_pdx86 and sse2

Compare corresponding elements in a and b for less-than-or-equal

_mm_cmple_psx86 and sse

Compare each of the four floats in a to the corresponding element in b. The result in the output vector will be 0xffffffff if the input element in a is less than or equal to the corresponding element in b, or 0 otherwise.

_mm_cmple_sdx86 and sse2

Return a new vector with the low element of a replaced by the less-than-or-equal comparison of the lower elements of a and b.

_mm_cmple_ssx86 and sse

Compare the lowest f32 of both inputs for less than or equal. The lowest 32 bits of the result will be 0xffffffff if a.extract(0) is less than or equal to b.extract(0), or 0 otherwise. The upper 96 bits of the result are the upper 96 bits of a.

_mm_cmplt_epi8x86 and sse2

Compare packed 8-bit integers in a and b for less-than.

_mm_cmplt_epi16x86 and sse2

Compare packed 16-bit integers in a and b for less-than.

_mm_cmplt_epi32x86 and sse2

Compare packed 32-bit integers in a and b for less-than.

_mm_cmplt_pdx86 and sse2

Compare corresponding elements in a and b for less-than.

_mm_cmplt_psx86 and sse

Compare each of the four floats in a to the corresponding element in b. The result in the output vector will be 0xffffffff if the input element in a is less than the corresponding element in b, or 0 otherwise.

_mm_cmplt_sdx86 and sse2

Return a new vector with the low element of a replaced by the less-than comparison of the lower elements of a and b.

_mm_cmplt_ssx86 and sse

Compare the lowest f32 of both inputs for less than. The lowest 32 bits of the result will be 0xffffffff if a.extract(0) is less than b.extract(0), or 0 otherwise. The upper 96 bits of the result are the upper 96 bits of a.

_mm_cmpneq_pdx86 and sse2

Compare corresponding elements in a and b for not-equal.

_mm_cmpneq_psx86 and sse

Compare each of the four floats in a to the corresponding element in b. The result in the output vector will be 0xffffffff if the input elements are not equal, or 0 otherwise.

_mm_cmpneq_sdx86 and sse2

Return a new vector with the low element of a replaced by the not-equal comparison of the lower elements of a and b.

_mm_cmpneq_ssx86 and sse

Compare the lowest f32 of both inputs for inequality. The lowest 32 bits of the result will be 0xffffffff if a.extract(0) is not equal to b.extract(0), or 0 otherwise. The upper 96 bits of the result are the upper 96 bits of a.

_mm_cmpnge_pdx86 and sse2

Compare corresponding elements in a and b for not-greater-than-or-equal.

_mm_cmpnge_psx86 and sse

Compare each of the four floats in a to the corresponding element in b. The result in the output vector will be 0xffffffff if the input element in a is not greater than or equal to the corresponding element in b, or 0 otherwise.

_mm_cmpnge_sdx86 and sse2

Return a new vector with the low element of a replaced by the not-greater-than-or-equal comparison of the lower elements of a and b.

_mm_cmpnge_ssx86 and sse

Compare the lowest f32 of both inputs for not-greater-than-or-equal. The lowest 32 bits of the result will be 0xffffffff if a.extract(0) is not greater than or equal to b.extract(0), or 0 otherwise. The upper 96 bits of the result are the upper 96 bits of a.

_mm_cmpngt_pdx86 and sse2

Compare corresponding elements in a and b for not-greater-than.

_mm_cmpngt_psx86 and sse

Compare each of the four floats in a to the corresponding element in b. The result in the output vector will be 0xffffffff if the input element in a is not greater than the corresponding element in b, or 0 otherwise.

_mm_cmpngt_sdx86 and sse2

Return a new vector with the low element of a replaced by the not-greater-than comparison of the lower elements of a and b.

_mm_cmpngt_ssx86 and sse

Compare the lowest f32 of both inputs for not-greater-than. The lowest 32 bits of the result will be 0xffffffff if a.extract(0) is not greater than b.extract(0), or 0 otherwise. The upper 96 bits of the result are the upper 96 bits of a.

_mm_cmpnle_pdx86 and sse2

Compare corresponding elements in a and b for not-less-than-or-equal.

_mm_cmpnle_psx86 and sse

Compare each of the four floats in a to the corresponding element in b. The result in the output vector will be 0xffffffff if the input element in a is not less than or equal to the corresponding element in b, or 0 otherwise.

_mm_cmpnle_sdx86 and sse2

Return a new vector with the low element of a replaced by the not-less-than-or-equal comparison of the lower elements of a and b.

_mm_cmpnle_ssx86 and sse

Compare the lowest f32 of both inputs for not-less-than-or-equal. The lowest 32 bits of the result will be 0xffffffff if a.extract(0) is not less than or equal to b.extract(0), or 0 otherwise. The upper 96 bits of the result are the upper 96 bits of a.

_mm_cmpnlt_pdx86 and sse2

Compare corresponding elements in a and b for not-less-than.

_mm_cmpnlt_psx86 and sse

Compare each of the four floats in a to the corresponding element in b. The result in the output vector will be 0xffffffff if the input element in a is not less than the corresponding element in b, or 0 otherwise.

_mm_cmpnlt_sdx86 and sse2

Return a new vector with the low element of a replaced by the not-less-than comparison of the lower elements of a and b.

_mm_cmpnlt_ssx86 and sse

Compare the lowest f32 of both inputs for not-less-than. The lowest 32 bits of the result will be 0xffffffff if a.extract(0) is not less than b.extract(0), or 0 otherwise. The upper 96 bits of the result are the upper 96 bits of a.

_mm_cmpord_pdx86 and sse2

Compare corresponding elements in a and b to see if neither is NaN.

_mm_cmpord_psx86 and sse

Compare each of the four floats in a to the corresponding element in b. Returns four floats that have one of two possible bit patterns. The element in the output vector will be 0xffffffff if the input elements in a and b are ordered (i.e., neither of them is a NaN), or 0 otherwise.

_mm_cmpord_sdx86 and sse2

Return a new vector with the low element of a replaced by the result of comparing both of the lower elements of a and b to NaN. If neither are equal to NaN then 0xFFFFFFFFFFFFFFFF is used and 0 otherwise.

_mm_cmpord_ssx86 and sse

Check if the lowest f32 of both inputs are ordered. The lowest 32 bits of the result will be 0xffffffff if neither of a.extract(0) or b.extract(0) is a NaN, or 0 otherwise. The upper 96 bits of the result are the upper 96 bits of a.

_mm_cmpunord_pdx86 and sse2

Compare corresponding elements in a and b to see if either is NaN.

_mm_cmpunord_psx86 and sse

Compare each of the four floats in a to the corresponding element in b. Returns four floats that have one of two possible bit patterns. The element in the output vector will be 0xffffffff if the input elements in a and b are unordered (i.e., at least one of them is a NaN), or 0 otherwise.
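
A minimal sketch of using this comparison to locate NaN lanes (same assumptions as the earlier examples):

```rust
#[cfg(target_arch = "x86_64")]
#[target_feature(enable = "sse")]
unsafe fn demo() {
    use std::arch::x86_64::*;
    let a = _mm_setr_ps(1.0, f32::NAN, 3.0, 4.0);
    let b = _mm_setr_ps(1.0, 2.0, f32::NAN, 4.0);
    // A lane is "unordered" when at least one of its two inputs is NaN.
    let unord = _mm_cmpunord_ps(a, b);
    assert_eq!(_mm_movemask_ps(unord), 0b0110); // lanes 1 and 2
}

#[cfg(target_arch = "x86_64")]
fn main() {
    if is_x86_feature_detected!("sse") {
        unsafe { demo() }
    }
}

#[cfg(not(target_arch = "x86_64"))]
fn main() {}
```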

_mm_cmpunord_sdx86 and sse2

Return a new vector with the low element of a replaced by the result of comparing both of the lower elements of a and b to NaN. If either is equal to NaN then 0xFFFFFFFFFFFFFFFF is used and 0 otherwise.

_mm_cmpunord_ssx86 and sse

Check if the lowest f32 of both inputs are unordered. The lowest 32 bits of the result will be 0xffffffff if any of a.extract(0) or b.extract(0) is a NaN, or 0 otherwise. The upper 96 bits of the result are the upper 96 bits of a.

_mm_comieq_sdx86 and sse2

Compare the lower element of a and b for equality.

_mm_comieq_ssx86 and sse

Compare two 32-bit floats from the low-order bits of a and b. Returns 1 if they are equal, or 0 otherwise.

_mm_comige_sdx86 and sse2

Compare the lower element of a and b for greater-than-or-equal.

_mm_comige_ssx86 and sse

Compare two 32-bit floats from the low-order bits of a and b. Returns 1 if the value from a is greater than or equal to the one from b, or 0 otherwise.

_mm_comigt_sdx86 and sse2

Compare the lower element of a and b for greater-than.

_mm_comigt_ssx86 and sse

Compare two 32-bit floats from the low-order bits of a and b. Returns 1 if the value from a is greater than the one from b, or 0 otherwise.

_mm_comile_sdx86 and sse2

Compare the lower element of a and b for less-than-or-equal.

_mm_comile_ssx86 and sse

Compare two 32-bit floats from the low-order bits of a and b. Returns 1 if the value from a is less than or equal to the one from b, or 0 otherwise.

_mm_comilt_sdx86 and sse2

Compare the lower element of a and b for less-than.

_mm_comilt_ssx86 and sse

Compare two 32-bit floats from the low-order bits of a and b. Returns 1 if the value from a is less than the one from b, or 0 otherwise.

_mm_comineq_sdx86 and sse2

Compare the lower element of a and b for not-equal.

_mm_comineq_ssx86 and sse

Compare two 32-bit floats from the low-order bits of a and b. Returns 1 if they are not equal, or 0 otherwise.

_mm_crc32_u8x86 and sse4.2

Starting with the initial value in crc, return the accumulated CRC32 value for unsigned 8-bit integer v.

_mm_crc32_u16x86 and sse4.2

Starting with the initial value in crc, return the accumulated CRC32 value for unsigned 16-bit integer v.

_mm_crc32_u32x86 and sse4.2

Starting with the initial value in crc, return the accumulated CRC32 value for unsigned 32-bit integer v.
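
A minimal sketch of accumulating a value across a byte slice with the narrowest variant (same assumptions as the earlier examples; the wider _mm_crc32_u16/_mm_crc32_u32 forms consume 2 or 4 bytes per step in the same way):

```rust
#[cfg(target_arch = "x86_64")]
#[target_feature(enable = "sse4.2")]
unsafe fn demo() {
    use std::arch::x86_64::*;
    // Fold each byte of the input into the running CRC value.
    let mut crc: u32 = 0;
    for &byte in b"hello" {
        crc = _mm_crc32_u8(crc, byte);
    }
    println!("accumulated value: {:#010x}", crc);
}

#[cfg(target_arch = "x86_64")]
fn main() {
    if is_x86_feature_detected!("sse4.2") {
        unsafe { demo() }
    }
}

#[cfg(not(target_arch = "x86_64"))]
fn main() {}
```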

_mm_cvt_si2ssx86 and sse

Alias for _mm_cvtsi32_ss.

_mm_cvt_ss2six86 and sse

Alias for _mm_cvtss_si32.

_mm_cvtepi32_pdx86 and sse2

Convert the lower two packed 32-bit integers in a to packed double-precision (64-bit) floating-point elements.

_mm_cvtepi32_psx86 and sse2

Convert packed 32-bit integers in a to packed single-precision (32-bit) floating-point elements.

_mm_cvtepi16_epi32x86 and sse4.1

Sign extend packed 16-bit integers in a to packed 32-bit integers

_mm_cvtepi16_epi64x86 and sse4.1

Sign extend packed 16-bit integers in a to packed 64-bit integers

_mm_cvtepi32_epi64x86 and sse4.1

Sign extend packed 32-bit integers in a to packed 64-bit integers

_mm_cvtepi8_epi16x86 and sse4.1

Sign extend packed 8-bit integers in a to packed 16-bit integers

_mm_cvtepi8_epi32x86 and sse4.1

Sign extend packed 8-bit integers in a to packed 32-bit integers

_mm_cvtepi8_epi64x86 and sse4.1

Sign extend packed 8-bit integers in the low 8 bytes of a to packed 64-bit integers

_mm_cvtepu16_epi32x86 and sse4.1

Zero extend packed unsigned 16-bit integers in a to packed 32-bit integers

_mm_cvtepu16_epi64x86 and sse4.1

Zero extend packed unsigned 16-bit integers in a to packed 64-bit integers

_mm_cvtepu32_epi64x86 and sse4.1

Zero extend packed unsigned 32-bit integers in a to packed 64-bit integers

_mm_cvtepu8_epi16x86 and sse4.1

Zero extend packed unsigned 8-bit integers in a to packed 16-bit integers

_mm_cvtepu8_epi32x86 and sse4.1

Zero extend packed unsigned 8-bit integers in a to packed 32-bit integers

_mm_cvtepu8_epi64x86 and sse4.1

Zero extend packed unsigned 8-bit integers in a to packed 64-bit integers

_mm_cvtpd_epi32x86 and sse2

Convert packed double-precision (64-bit) floating-point elements in a to packed 32-bit integers.

_mm_cvtpd_psx86 and sse2

Convert packed double-precision (64-bit) floating-point elements in a to packed single-precision (32-bit) floating-point elements.

_mm_cvtps_epi32x86 and sse2

Convert packed single-precision (32-bit) floating-point elements in a to packed 32-bit integers.

_mm_cvtps_pdx86 and sse2

Convert packed single-precision (32-bit) floating-point elements in a to packed double-precision (64-bit) floating-point elements.

_mm_cvtsd_f64x86 and sse2

Return the lower double-precision (64-bit) floating-point element of a.

_mm_cvtsd_si32x86 and sse2

Convert the lower double-precision (64-bit) floating-point element in a to a 32-bit integer.

_mm_cvtsd_ssx86 and sse2

Convert the lower double-precision (64-bit) floating-point element in b to a single-precision (32-bit) floating-point element, store the result in the lower element of the return value, and copy the upper element from a to the upper element of the return value.

_mm_cvtsi32_ssx86 and sse

Convert a 32 bit integer to a 32 bit float. The result vector is the input vector a with the lowest 32 bit float replaced by the converted integer.

_mm_cvtsi32_sdx86 and sse2

Return a with its lower element replaced by b after converting it to an f64.

_mm_cvtsi128_si32x86 and sse2

Return the lowest element of a.

_mm_cvtsi32_si128x86 and sse2

Return a vector whose lowest element is a and all higher elements are 0.

_mm_cvtss_f32x86 and sse

Extract the lowest 32 bit float from the input vector.

_mm_cvtss_sdx86 and sse2

Convert the lower single-precision (32-bit) floating-point element in b to a double-precision (64-bit) floating-point element, store the result in the lower element of the return value, and copy the upper element from a to the upper element of the return value.

_mm_cvtss_si32x86 and sse

Convert the lowest 32 bit float in the input vector to a 32 bit integer.

_mm_cvtt_ss2six86 and sse

Alias for _mm_cvttss_si32.

_mm_cvttpd_epi32x86 and sse2

Convert packed double-precision (64-bit) floating-point elements in a to packed 32-bit integers with truncation.

_mm_cvttps_epi32x86 and sse2

Convert packed single-precision (32-bit) floating-point elements in a to packed 32-bit integers with truncation.
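
A minimal sketch showing that truncation rounds toward zero, unlike _mm_cvtps_epi32, which follows MXCSR.RC (same assumptions as the earlier examples):

```rust
#[cfg(target_arch = "x86_64")]
#[target_feature(enable = "sse2")]
unsafe fn demo() {
    use std::arch::x86_64::*;
    let a = _mm_setr_ps(1.7, -1.7, 2.5, -2.5);
    // Truncation discards the fractional part, rounding toward zero.
    let t = _mm_cvttps_epi32(a);
    let lanes: [i32; 4] = std::mem::transmute(t);
    assert_eq!(lanes, [1, -1, 2, -2]);
}

#[cfg(target_arch = "x86_64")]
fn main() {
    if is_x86_feature_detected!("sse2") {
        unsafe { demo() }
    }
}

#[cfg(not(target_arch = "x86_64"))]
fn main() {}
```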

_mm_cvttsd_si32x86 and sse2

Convert the lower double-precision (64-bit) floating-point element in a to a 32-bit integer with truncation.

_mm_cvttss_si32x86 and sse

Convert the lowest 32 bit float in the input vector to a 32 bit integer with truncation.

_mm_div_pdx86 and sse2

Divide packed double-precision (64-bit) floating-point elements in a by packed elements in b.

_mm_div_psx86 and sse

Divides __m128 vectors.

_mm_div_sdx86 and sse2

Return a new vector with the low element of a replaced by the result of dividing the lower element of a by the lower element of b.

_mm_div_ssx86 and sse

Divides the first component of a by b, the other components are copied from a.

_mm_dp_pdx86 and sse4.1

Returns the dot product of two __m128d vectors.

_mm_dp_psx86 and sse4.1

Returns the dot product of two __m128 vectors.

_mm_extract_epi8x86 and sse4.1

Extract an 8-bit integer from a, selected with imm8. Returns a 32-bit integer containing the zero-extended integer data.

_mm_extract_epi16x86 and sse2

Return the imm8 element of a.

_mm_extract_epi32x86 and sse4.1

Extract an 32-bit integer from a selected with imm8

_mm_extract_psx86 and sse4.1

Extract a single-precision (32-bit) floating-point element from a, selected with imm8

_mm_extract_si64x86 and sse4a

Extracts the bit range specified by y from the lower 64 bits of x.

_mm_floor_pdx86 and sse4.1

Round the packed double-precision (64-bit) floating-point elements in a down to an integer value, and store the results as packed double-precision floating-point elements.

_mm_floor_psx86 and sse4.1

Round the packed single-precision (32-bit) floating-point elements in a down to an integer value, and store the results as packed single-precision floating-point elements.

_mm_floor_sdx86 and sse4.1

Round the lower double-precision (64-bit) floating-point element in b down to an integer value, store the result as a double-precision floating-point element in the lower element of the intrinsic result, and copy the upper element from a to the upper element of the intrinsic result.

_mm_floor_ssx86 and sse4.1

Round the lower single-precision (32-bit) floating-point element in b down to an integer value, store the result as a single-precision floating-point element in the lower element of the intrinsic result, and copy the upper 3 packed elements from a to the upper elements of the intrinsic result.

_mm_fmadd_pdx86 and fma

Multiply packed double-precision (64-bit) floating-point elements in a and b, and add the intermediate result to packed elements in c.

_mm_fmadd_psx86 and fma

Multiply packed single-precision (32-bit) floating-point elements in a and b, and add the intermediate result to packed elements in c.

_mm_fmadd_sdx86 and fma

Multiply the lower double-precision (64-bit) floating-point elements in a and b, and add the intermediate result to the lower element in c. Store the result in the lower element of the returned value, and copy the upper element from a to the upper elements of the result.

_mm_fmadd_ssx86 and fma

Multiply the lower single-precision (32-bit) floating-point elements in a and b, and add the intermediate result to the lower element in c. Store the result in the lower element of the returned value, and copy the 3 upper elements from a to the upper elements of the result.

_mm_fmaddsub_pdx86 and fma

Multiply packed double-precision (64-bit) floating-point elements in a and b, and alternatively add and subtract packed elements in c to/from the intermediate result.

_mm_fmaddsub_psx86 and fma

Multiply packed single-precision (32-bit) floating-point elements in a and b, and alternatively add and subtract packed elements in c to/from the intermediate result.

_mm_fmsub_pdx86 and fma

Multiply packed double-precision (64-bit) floating-point elements in a and b, and subtract packed elements in c from the intermediate result.

_mm_fmsub_psx86 and fma

Multiply packed single-precision (32-bit) floating-point elements in a and b, and subtract packed elements in c from the intermediate result.

_mm_fmsub_sdx86 and fma

Multiply the lower double-precision (64-bit) floating-point elements in a and b, and subtract the lower element in c from the intermediate result. Store the result in the lower element of the returned value, and copy the upper element from a to the upper elements of the result.

_mm_fmsub_ssx86 and fma

Multiply the lower single-precision (32-bit) floating-point elements in a and b, and subtract the lower element in c from the intermediate result. Store the result in the lower element of the returned value, and copy the 3 upper elements from a to the upper elements of the result.

_mm_fmsubadd_pdx86 and fma

Multiply packed double-precision (64-bit) floating-point elements in a and b, and alternatively subtract and add packed elements in c from/to the intermediate result.

_mm_fmsubadd_psx86 and fma

Multiply packed single-precision (32-bit) floating-point elements in a and b, and alternatively subtract and add packed elements in c from/to the intermediate result.

_mm_fnmadd_pdx86 and fma

Multiply packed double-precision (64-bit) floating-point elements in a and b, and add the negated intermediate result to packed elements in c.

_mm_fnmadd_psx86 and fma

Multiply packed single-precision (32-bit) floating-point elements in a and b, and add the negated intermediate result to packed elements in c.

_mm_fnmadd_sdx86 and fma

Multiply the lower double-precision (64-bit) floating-point elements in a and b, and add the negated intermediate result to the lower element in c. Store the result in the lower element of the returned value, and copy the upper element from a to the upper elements of the result.

_mm_fnmadd_ssx86 and fma

Multiply the lower single-precision (32-bit) floating-point elements in a and b, and add the negated intermediate result to the lower element in c. Store the result in the lower element of the returned value, and copy the 3 upper elements from a to the upper elements of the result.

_mm_fnmsub_pdx86 and fma

Multiply packed double-precision (64-bit) floating-point elements in a and b, and subtract packed elements in c from the negated intermediate result.

_mm_fnmsub_psx86 and fma

Multiply packed single-precision (32-bit) floating-point elements in a and b, and subtract packed elements in c from the negated intermediate result.

_mm_fnmsub_sdx86 and fma

Multiply the lower double-precision (64-bit) floating-point elements in a and b, and subtract the lower element in c from the negated intermediate result. Store the result in the lower element of the returned value, and copy the upper element from a to the upper elements of the result.

_mm_fnmsub_ssx86 and fma

Multiply the lower single-precision (32-bit) floating-point elements in a and b, and subtract the lower element in c from the negated intermediate result. Store the result in the lower element of the returned value, and copy the 3 upper elements from a to the upper elements of the result.

_mm_getcsrx86 and sse

Get the unsigned 32-bit value of the MXCSR control and status register.

_mm_hadd_epi16x86 and ssse3

Horizontally add the adjacent pairs of values contained in 2 packed 128-bit vectors of [8 x i16].

_mm_hadd_epi32x86 and ssse3

Horizontally add the adjacent pairs of values contained in 2 packed 128-bit vectors of [4 x i32].

_mm_hadd_pdx86 and sse3

Horizontally add adjacent pairs of double-precision (64-bit) floating-point elements in a and b, and pack the results.

_mm_hadd_psx86 and sse3

Horizontally add adjacent pairs of single-precision (32-bit) floating-point elements in a and b, and pack the results.

_mm_hadds_epi16x86 and ssse3

Horizontally add the adjacent pairs of values contained in 2 packed 128-bit vectors of [8 x i16]. Positive sums greater than 7FFFh are saturated to 7FFFh. Negative sums less than 8000h are saturated to 8000h.

_mm_hsub_epi16x86 and ssse3

Horizontally subtract the adjacent pairs of values contained in 2 packed 128-bit vectors of [8 x i16].

_mm_hsub_epi32x86 and ssse3

Horizontally subtract the adjacent pairs of values contained in 2 packed 128-bit vectors of [4 x i32].

_mm_hsub_pdx86 and sse3

Horizontally subtract adjacent pairs of double-precision (64-bit) floating-point elements in a and b, and pack the results.

_mm_hsub_psx86 and sse3

Horizontally subtract adjacent pairs of single-precision (32-bit) floating-point elements in a and b, and pack the results.

_mm_hsubs_epi16x86 and ssse3

Horizontally subtract the adjacent pairs of values contained in 2 packed 128-bit vectors of [8 x i16]. Positive differences greater than 7FFFh are saturated to 7FFFh. Negative differences less than 8000h are saturated to 8000h.

_mm_i32gather_psx86 and avx2

Return values from slice at offsets determined by offsets * scale, where scale is between 1 and 8.

_mm_i32gather_pdx86 and avx2

Return values from slice at offsets determined by offsets * scale, where scale is between 1 and 8.

_mm_i64gather_psx86 and avx2

Return values from slice at offsets determined by offsets * scale, where scale is between 1 and 8.

_mm_i64gather_pdx86 and avx2

Return values from slice at offsets determined by offsets * scale, where scale is between 1 and 8.

_mm_i32gather_epi32x86 and avx2

Return values from slice at offsets determined by offsets * scale, where scale is between 1 and 8.

_mm_i32gather_epi64x86 and avx2

Return values from slice at offsets determined by offsets * scale, where scale is between 1 and 8.

_mm_i64gather_epi32x86 and avx2

Return values from slice at offsets determined by offsets * scale, where scale is between 1 and 8.

_mm_i64gather_epi64x86 and avx2

Return values from slice at offsets determined by offsets * scale, where scale is between 1 and 8.

_mm_insert_epi8x86 and sse4.1

Return a copy of a with the 8-bit integer from i inserted at a location specified by imm8.

_mm_insert_epi16x86 and sse2

Return a new vector where the imm8 element of a is replaced with i.

_mm_insert_epi32x86 and sse4.1

Return a copy of a with the 32-bit integer from i inserted at a location specified by imm8.

_mm_insert_psx86 and sse4.1

Select a single value in b to store at some position in a, then zero elements according to imm8.

_mm_insert_si64x86 and sse4a

Inserts the [length:0] bits of y into x at index.

_mm_lddqu_si128x86 and sse3

Load 128-bits of integer data from unaligned memory. This intrinsic may perform better than _mm_loadu_si128 when the data crosses a cache line boundary.

_mm_lfencex86 and sse2

Perform a serializing operation on all load-from-memory instructions that were issued prior to this instruction.

_mm_load1_psx86 and sse

Construct a __m128 by duplicating the value read from p into all elements.

_mm_load1_pdx86 and sse2

Load a double-precision (64-bit) floating-point element from memory into both elements of returned vector.

_mm_load_pdx86 and sse2

Load 128-bits (composed of 2 packed double-precision (64-bit) floating-point elements) from memory into the returned vector. mem_addr must be aligned on a 16-byte boundary or a general-protection exception may be generated.

_mm_load_pd1x86 and sse2

Load a double-precision (64-bit) floating-point element from memory into both elements of returned vector.

_mm_load_psx86 and sse

Load four f32 values from aligned memory into a __m128. If the pointer is not aligned to a 128-bit boundary (16 bytes) a general protection fault will be triggered (fatal program crash).

_mm_load_ps1x86 and sse

Alias for _mm_load1_ps

_mm_load_sdx86 and sse2

Loads a 64-bit double-precision value to the low element of a 128-bit integer vector and clears the upper element.

_mm_load_si128x86 and sse2

Load 128-bits of integer data from memory into a new vector.

_mm_load_ssx86 and sse

Construct a __m128 with the lowest element read from p and the other elements set to zero.

_mm_loaddup_pdx86 and sse3

Load a double-precision (64-bit) floating-point element from memory into both elements of return vector.

_mm_loadh_pdx86 and sse2

Loads a double-precision value into the high-order bits of a 128-bit vector of [2 x double]. The low-order bits are copied from the low-order bits of the first operand.

_mm_loadl_epi64x86 and sse2

Load 64-bit integer from memory into first element of returned vector.

_mm_loadl_pdx86 and sse2

Loads a double-precision value into the low-order bits of a 128-bit vector of [2 x double]. The high-order bits are copied from the high-order bits of the first operand.

_mm_loadr_pdx86 and sse2

Load 2 double-precision (64-bit) floating-point elements from memory into the returned vector in reverse order. mem_addr must be aligned on a 16-byte boundary or a general-protection exception may be generated.

_mm_loadr_psx86 and sse

Load four f32 values from aligned memory into a __m128 in reverse order.

_mm_loadu_pdx86 and sse2

Load 128-bits (composed of 2 packed double-precision (64-bit) floating-point elements) from memory into the returned vector. mem_addr does not need to be aligned on any particular boundary.

_mm_loadu_psx86 and sse

Load four f32 values from memory into a __m128. There are no restrictions on memory alignment. For aligned memory _mm_load_ps may be faster.

_mm_loadu_si128x86 and sse2

Load 128-bits of integer data from memory into a new vector.

_mm_madd_epi16x86 and sse2

Multiply and then horizontally add signed 16 bit integers in a and b.

_mm_maddubs_epi16x86 and ssse3

Multiply corresponding pairs of packed 8-bit unsigned integer values contained in the first source operand and packed 8-bit signed integer values contained in the second source operand, add pairs of contiguous products with signed saturation, and write the 16-bit sums to the corresponding bits in the destination.
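
A minimal sketch of the pairwise unsigned-by-signed multiply-add (same assumptions as the earlier examples; the broadcast constants are illustrative):

```rust
#[cfg(target_arch = "x86_64")]
#[target_feature(enable = "ssse3")]
unsafe fn demo() {
    use std::arch::x86_64::*;
    let a = _mm_set1_epi8(2); // first operand: treated as unsigned bytes
    let b = _mm_set1_epi8(3); // second operand: treated as signed bytes
    // Each 16-bit lane receives 2*3 + 2*3 = 12 (with signed saturation).
    let r = _mm_maddubs_epi16(a, b);
    let lanes: [i16; 8] = std::mem::transmute(r);
    assert_eq!(lanes, [12i16; 8]);
}

#[cfg(target_arch = "x86_64")]
fn main() {
    if is_x86_feature_detected!("ssse3") {
        unsafe { demo() }
    }
}

#[cfg(not(target_arch = "x86_64"))]
fn main() {}
```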

_mm_mask_i32gather_psx86 and avx2

Return values from slice at offsets determined by offsets * scale, where scale is between 1 and 8. If mask is set, load the value from src in that position instead.

_mm_mask_i32gather_pdx86 and avx2

Return values from slice at offsets determined by offsets * scale, where scale is between 1 and 8. If mask is set, load the value from src in that position instead.

_mm_mask_i64gather_psx86 and avx2

Return values from slice at offsets determined by offsets * scale, where scale is between 1 and 8. If mask is set, load the value from src in that position instead.

_mm_mask_i64gather_pdx86 and avx2

Return values from slice at offsets determined by offsets * scale, where scale is between 1 and 8. If mask is set, load the value from src in that position instead.

_mm_mask_i32gather_epi32x86 and avx2

Return values from slice at offsets determined by offsets * scale, where scale is between 1 and 8. If mask is set, load the value from src in that position instead.

_mm_mask_i32gather_epi64x86 and avx2

Return values from slice at offsets determined by offsets * scale, where scale is between 1 and 8. If mask is set, load the value from src in that position instead.

_mm_mask_i64gather_epi32x86 and avx2

Return values from slice at offsets determined by offsets * scale, where scale is between 1 and 8. If mask is set, load the value from src in that position instead.

_mm_mask_i64gather_epi64x86 and avx2

Return values from slice at offsets determined by offsets * scale, where scale is between 1 and 8. If mask is set, load the value from src in that position instead.

_mm_maskload_epi32x86 and avx2

Load packed 32-bit integers from memory pointed to by mem_addr using mask (elements are zeroed out when the highest bit is not set in the corresponding element).

_mm_maskload_epi64x86 and avx2

Load packed 64-bit integers from memory pointed to by mem_addr using mask (elements are zeroed out when the highest bit is not set in the corresponding element).

_mm_maskload_pdx86 and avx

Load packed double-precision (64-bit) floating-point elements from memory into result using mask (elements are zeroed out when the high bit of the corresponding element is not set).

_mm_maskload_psx86 and avx

Load packed single-precision (32-bit) floating-point elements from memory into result using mask (elements are zeroed out when the high bit of the corresponding element is not set).

_mm_maskmoveu_si128x86 and sse2

Conditionally store 8-bit integer elements from a into memory using mask.

_mm_maskstore_epi32x86 and avx2

Store packed 32-bit integers from a into memory pointed to by mem_addr using mask (elements are not stored when the highest bit is not set in the corresponding element).

_mm_maskstore_epi64x86 and avx2

Store packed 64-bit integers from a into memory pointed to by mem_addr using mask (elements are not stored when the highest bit is not set in the corresponding element).

_mm_maskstore_pdx86 and avx

Store packed double-precision (64-bit) floating-point elements from a into memory using mask.

_mm_maskstore_psx86 and avx

Store packed single-precision (32-bit) floating-point elements from a into memory using mask.

_mm_max_epi8x86 and sse4.1

Compare packed 8-bit integers in a and b and return packed maximum values in dst.

_mm_max_epi16x86 and sse2

Compare packed 16-bit integers in a and b, and return the packed maximum values.

_mm_max_epi32x86 and sse4.1

Compare packed 32-bit integers in a and b, and return packed maximum values.

_mm_max_epu8x86 and sse2

Compare packed unsigned 8-bit integers in a and b, and return the packed maximum values.

_mm_max_epu16x86 and sse4.1

Compare packed unsigned 16-bit integers in a and b, and return packed maximum.

_mm_max_epu32x86 and sse4.1

Compare packed unsigned 32-bit integers in a and b, and return packed maximum values.

_mm_max_pdx86 and sse2

Return a new vector with the maximum values from corresponding elements in a and b.

_mm_max_psx86 and sse

Compare packed single-precision (32-bit) floating-point elements in a and b, and return the corresponding maximum values.

_mm_max_sdx86 and sse2

Return a new vector with the low element of a replaced by the maximum of the lower elements of a and b.

_mm_max_ssx86 and sse

Compare the first single-precision (32-bit) floating-point element of a and b, and return the maximum value in the first element of the return value, the other elements are copied from a.

_mm_mfencex86 and sse2

Perform a serializing operation on all load-from-memory and store-to-memory instructions that were issued prior to this instruction.

_mm_min_epi8x86 and sse4.1

Compare packed 8-bit integers in a and b and return packed minimum values in dst.

_mm_min_epi16x86 and sse2

Compare packed 16-bit integers in a and b, and return the packed minimum values.

_mm_min_epi32x86 and sse4.1

Compare packed 32-bit integers in a and b, and return packed minimum values.

_mm_min_epu8x86 and sse2

Compare packed unsigned 8-bit integers in a and b, and return the packed minimum values.

_mm_min_epu16x86 and sse4.1

Compare packed unsigned 16-bit integers in a and b, and return packed minimum.

_mm_min_epu32x86 and sse4.1

Compare packed unsigned 32-bit integers in a and b, and return packed minimum values.

_mm_min_pdx86 and sse2

Return a new vector with the minimum values from corresponding elements in a and b.

_mm_min_psx86 and sse

Compare packed single-precision (32-bit) floating-point elements in a and b, and return the corresponding minimum values.

_mm_min_sdx86 and sse2

Return a new vector with the low element of a replaced by the minimum of the lower elements of a and b.

_mm_min_ssx86 and sse

Compare the first single-precision (32-bit) floating-point element of a and b, and return the minimum value in the first element of the return value, the other elements are copied from a.

_mm_minpos_epu16x86 and sse4.1

Finds the minimum unsigned 16-bit element in the 128-bit __m128i vector, returning a vector containing its value in its first position, and its index in its second position; all other elements are set to zero.

_mm_move_epi64x86 and sse2

Return a vector where the low element is extracted from a and its upper element is zero.

_mm_move_sdx86 and sse2

Constructs a 128-bit floating-point vector of [2 x double]. The lower 64 bits are set to the lower 64 bits of the second parameter. The upper 64 bits are set to the upper 64 bits of the first parameter.

_mm_move_ssx86 and sse

Return a __m128 with the first component from b and the remaining components from a.

_mm_movedup_pdx86 and sse3

Duplicate the low double-precision (64-bit) floating-point element from a.

_mm_movehdup_psx86 and sse3

Duplicate odd-indexed single-precision (32-bit) floating-point elements from a.

_mm_movehl_psx86 and sse

Combine the higher halves of a and b. The higher half of b occupies the lower half of the result.

_mm_moveldup_psx86 and sse3

Duplicate even-indexed single-precision (32-bit) floating-point elements from a.

_mm_movelh_psx86 and sse

Combine the lower halves of a and b. The lower half of b occupies the higher half of the result.

_mm_movemask_epi8x86 and sse2

Return a mask of the most significant bit of each element in a.

_mm_movemask_pdx86 and sse2

Return a mask of the most significant bit of each element in a.

_mm_movemask_psx86 and sse

Return a mask of the most significant bit of each element in a.

_mm_mpsadbw_epu8x86 and sse4.1

Subtracts 8-bit unsigned integer values and computes the absolute values of the differences, writing them to the corresponding bits in the destination. Sums of the absolute differences are then returned according to the bit fields in the immediate operand.

_mm_mul_epi32x86 and sse4.1

Multiply the low 32-bit integers from each packed 64-bit element in a and b, and return the signed 64-bit result.

_mm_mul_epu32x86 and sse2

Multiply the low unsigned 32-bit integers from each packed 64-bit element in a and b.

_mm_mul_pdx86 and sse2

Multiply packed double-precision (64-bit) floating-point elements in a and b.

_mm_mul_psx86 and sse

Multiplies __m128 vectors.

_mm_mul_sdx86 and sse2

Return a new vector with the low element of a replaced by multiplying the low elements of a and b.

_mm_mul_ssx86 and sse

Multiplies the first component of a and b, the other components are copied from a.

_mm_mulhi_epi16x86 and sse2

Multiply the packed 16-bit integers in a and b.

_mm_mulhi_epu16x86 and sse2

Multiply the packed unsigned 16-bit integers in a and b.

_mm_mulhrs_epi16x86 and ssse3

Multiply packed 16-bit signed integer values, truncate the 32-bit product to the 18 most significant bits by right-shifting, round the truncated value by adding 1, and write bits [16:1] to the destination.

_mm_mullo_epi16x86 and sse2

Multiply the packed 16-bit integers in a and b.

_mm_mullo_epi32x86 and sse4.1

Multiply the packed 32-bit integers in a and b, producing intermediate 64-bit integers, and return the low 32 bits of each product, whatever they might be, reinterpreted as a signed integer. While pmulld(__m128i::splat(2), __m128i::splat(2)) returns the obvious __m128i::splat(4), due to wrapping arithmetic pmulld(__m128i::splat(i32::MAX), __m128i::splat(2)) would return a negative number.
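
A minimal sketch of the wrapping behaviour described above (same assumptions as the earlier examples):

```rust
#[cfg(target_arch = "x86_64")]
#[target_feature(enable = "sse4.1")]
unsafe fn demo() {
    use std::arch::x86_64::*;
    let max = _mm_set1_epi32(i32::MAX);
    let two = _mm_set1_epi32(2);
    // Only the low 32 bits of each 64-bit product are kept,
    // so i32::MAX * 2 wraps around to -2.
    let r = _mm_mullo_epi32(max, two);
    let lanes: [i32; 4] = std::mem::transmute(r);
    assert_eq!(lanes, [-2; 4]);
}

#[cfg(target_arch = "x86_64")]
fn main() {
    if is_x86_feature_detected!("sse4.1") {
        unsafe { demo() }
    }
}

#[cfg(not(target_arch = "x86_64"))]
fn main() {}
```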

_mm_or_pdx86 and sse2

Compute the bitwise OR of a and b.

_mm_or_psx86 and sse

Bitwise OR of packed single-precision (32-bit) floating-point elements.

_mm_or_si128x86 and sse2

Compute the bitwise OR of 128 bits (representing integer data) in a and b.

_mm_packs_epi16x86 and sse2

Convert packed 16-bit integers from a and b to packed 8-bit integers using signed saturation.

_mm_packs_epi32x86 and sse2

Convert packed 32-bit integers from a and b to packed 16-bit integers using signed saturation.

_mm_packus_epi16x86 and sse2

Convert packed 16-bit integers from a and b to packed 8-bit integers using unsigned saturation.

_mm_packus_epi32x86 and sse4.1

Convert packed 32-bit integers from a and b to packed 16-bit integers using unsigned saturation

_mm_pausex86 and sse2

Provide a hint to the processor that the code sequence is a spin-wait loop.

_mm_permute_pdx86 and avx,sse2

Shuffle double-precision (64-bit) floating-point elements in a using the control in imm8.

_mm_permute_psx86 and avx,sse

Shuffle single-precision (32-bit) floating-point elements in a using the control in imm8.

_mm_permutevar_pdx86 and avx

Shuffle double-precision (64-bit) floating-point elements in a using the control in b.

_mm_permutevar_psx86 and avx

Shuffle single-precision (32-bit) floating-point elements in a using the control in b.

_mm_prefetchx86 and sse

Fetch the cache line that contains address p using the given strategy.

_mm_rcp_psx86 and sse

Return the approximate reciprocal of packed single-precision (32-bit) floating-point elements in a.

_mm_rcp_ssx86 and sse

Return the approximate reciprocal of the first single-precision (32-bit) floating-point element in a, the other elements are unchanged.

_mm_round_pdx86 and sse4.1

Round the packed double-precision (64-bit) floating-point elements in a using the rounding parameter, and store the results as packed double-precision floating-point elements. Rounding is done according to the rounding parameter, which can be one of:

_mm_round_psx86 and sse4.1

Round the packed single-precision (32-bit) floating-point elements in a using the rounding parameter, and store the results as packed single-precision floating-point elements. Rounding is done according to the rounding parameter, which can be one of:

_mm_round_sdx86 and sse4.1

Round the lower double-precision (64-bit) floating-point element in b using the rounding parameter, store the result as a double-precision floating-point element in the lower element of the intrinsic result, and copy the upper element from a to the upper element of the intrinsic result. Rounding is done according to the rounding parameter, which can be one of:

_mm_round_ssx86 and sse4.1

Round the lower single-precision (32-bit) floating-point element in b using the rounding parameter, store the result as a single-precision floating-point element in the lower element of the intrinsic result, and copy the upper 3 packed elements from a to the upper elements of the intrinsic result. Rounding is done according to the rounding parameter, which can be one of:

_mm_rsqrt_psx86 and sse

Return the approximate reciprocal square root of packed single-precision (32-bit) floating-point elements in a.

_mm_rsqrt_ssx86 and sse

Return the approximate reciprocal square root of the first single-precision (32-bit) floating-point element in a, the other elements are unchanged.

_mm_sad_epu8x86 and sse2

Sum the absolute differences of packed unsigned 8-bit integers.
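
A minimal sketch of the sum-of-absolute-differences result layout: each 64-bit half of the result holds the sum over its eight bytes (same assumptions as the earlier examples).

```rust
#[cfg(target_arch = "x86_64")]
#[target_feature(enable = "sse2")]
unsafe fn demo() {
    use std::arch::x86_64::*;
    let a = _mm_set1_epi8(5);
    let b = _mm_set1_epi8(3);
    // |5 - 3| = 2 per byte; each 64-bit half sums 8 bytes, giving 16.
    let r = _mm_sad_epu8(a, b);
    let halves: [u64; 2] = std::mem::transmute(r);
    assert_eq!(halves, [16, 16]);
}

#[cfg(target_arch = "x86_64")]
fn main() {
    if is_x86_feature_detected!("sse2") {
        unsafe { demo() }
    }
}

#[cfg(not(target_arch = "x86_64"))]
fn main() {}
```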

_mm_set1_psx86 and sse

Construct a __m128 with all elements set to a.

_mm_set1_pdx86 and sse2

Broadcast double-precision (64-bit) floating-point value a to all elements of the return value.

_mm_set1_epi8x86 and sse2

Broadcast 8-bit integer a to all elements.

_mm_set1_epi16x86 and sse2

Broadcast 16-bit integer a to all elements.

_mm_set1_epi32x86 and sse2

Broadcast 32-bit integer a to all elements.

_mm_set1_epi64xx86 and sse2

Broadcast 64-bit integer a to all elements.

_mm_set_epi8x86 and sse2

Set packed 8-bit integers with the supplied values.

_mm_set_epi16x86 and sse2

Set packed 16-bit integers with the supplied values.

_mm_set_epi32x86 and sse2

Set packed 32-bit integers with the supplied values.

_mm_set_epi64xx86 and sse2

Set packed 64-bit integers with the supplied values, from highest to lowest.

_mm_set_pdx86 and sse2

Set packed double-precision (64-bit) floating-point elements in the return value with the supplied values.

_mm_set_pd1x86 and sse2

Broadcast double-precision (64-bit) floating-point value a to all elements of the return value.

_mm_set_psx86 and sse

Construct a __m128 from four floating point values highest to lowest.

_mm_set_ps1x86 and sse

Alias for _mm_set1_ps

_mm_set_sdx86 and sse2

Copy double-precision (64-bit) floating-point element a to the lower element of the packed 64-bit return value.

_mm_set_ssx86 and sse

Construct a __m128 with the lowest element set to a and the rest set to zero.

_mm_setcsrx86 and sse

Set the MXCSR register with the 32-bit unsigned integer value.

_mm_setr_epi8x86 and sse2

Set packed 8-bit integers with the supplied values in reverse order.

_mm_setr_epi16x86 and sse2

Set packed 16-bit integers with the supplied values in reverse order.

_mm_setr_epi32x86 and sse2

Set packed 32-bit integers with the supplied values in reverse order.

_mm_setr_pdx86 and sse2

Set packed double-precision (64-bit) floating-point elements in the return value with the supplied values in reverse order.

_mm_setr_psx86 and sse

Construct a __m128 from four floating point values lowest to highest.
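
For example, a minimal sketch assuming an x86_64 target, contrasting _mm_set_ps (arguments from highest to lowest element) with _mm_setr_ps (lowest to highest):

```rust
// Sketch: _mm_set_ps takes arguments from the highest element down to the
// lowest, _mm_setr_ps from the lowest up to the highest. Assumes an x86_64
// target, where SSE is part of the baseline.
#[cfg(target_arch = "x86_64")]
fn main() {
    use std::arch::x86_64::*;
    unsafe {
        let hi_to_lo = _mm_set_ps(3.0, 2.0, 1.0, 0.0);
        let lo_to_hi = _mm_setr_ps(0.0, 1.0, 2.0, 3.0);
        let mut a = [0.0f32; 4];
        let mut b = [0.0f32; 4];
        _mm_storeu_ps(a.as_mut_ptr(), hi_to_lo);
        _mm_storeu_ps(b.as_mut_ptr(), lo_to_hi);
        // Both describe the same vector: element 0 is 0.0, element 3 is 3.0.
        assert_eq!(a, b);
        assert_eq!(a, [0.0, 1.0, 2.0, 3.0]);
    }
}

#[cfg(not(target_arch = "x86_64"))]
fn main() {}
```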

_mm_setzero_pdx86 and sse2

Returns packed double-precision (64-bit) floating-point elements with all zeros.

_mm_setzero_psx86 and sse

Construct a __m128 with all elements initialized to zero.

_mm_setzero_si128x86 and sse2

Returns a vector with all elements set to zero.

_mm_sfencex86 and sse

Perform a serializing operation on all store-to-memory instructions that were issued prior to this instruction.

_mm_sha1msg1_epu32x86 and sha

Perform an intermediate calculation for the next four SHA1 message values (unsigned 32-bit integers) using previous message values from a and b, and return the result.

_mm_sha1msg2_epu32x86 and sha

Perform the final calculation for the next four SHA1 message values (unsigned 32-bit integers) using the intermediate result in a and the previous message values in b, and returns the result.

_mm_sha1nexte_epu32x86 and sha

Calculate SHA1 state variable E after four rounds of operation from the current SHA1 state variable a, add that value to the scheduled values (unsigned 32-bit integers) in b, and returns the result.

_mm_sha1rnds4_epu32x86 and sha

Perform four rounds of SHA1 operation using an initial SHA1 state (A,B,C,D) from a and some pre-computed sum of the next 4 round message values (unsigned 32-bit integers), and state variable E from b, and return the updated SHA1 state (A,B,C,D). func contains the logic functions and round constants.

_mm_sha256msg1_epu32x86 and sha

Perform an intermediate calculation for the next four SHA256 message values (unsigned 32-bit integers) using previous message values from a and b, and return the result.

_mm_sha256msg2_epu32x86 and sha

Perform the final calculation for the next four SHA256 message values (unsigned 32-bit integers) using previous message values from a and b, and return the result.

_mm_sha256rnds2_epu32x86 and sha

Perform 2 rounds of SHA256 operation using an initial SHA256 state (C,D,G,H) from a, an initial SHA256 state (A,B,E,F) from b, and a pre-computed sum of the next 2 round message values (unsigned 32-bit integers) and the corresponding round constants from k, and store the updated SHA256 state (A,B,E,F) in dst.

_mm_shuffle_epi8x86 and ssse3

Shuffle bytes from a according to the content of b.
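
For example, a minimal sketch assuming an x86_64 target with runtime detection of SSSE3; each control byte in b selects a source byte index, and a control byte with its high bit set zeroes the output byte instead:

```rust
// Sketch: reverse the bytes of a vector with _mm_shuffle_epi8 (SSSE3).
// Assumes an x86_64 target; the feature is detected at runtime.
#[cfg(target_arch = "x86_64")]
fn main() {
    if is_x86_feature_detected!("ssse3") {
        unsafe { demo() }
    }
}

#[cfg(target_arch = "x86_64")]
#[target_feature(enable = "ssse3")]
unsafe fn demo() {
    use std::arch::x86_64::*;
    let a = _mm_setr_epi8(0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15);
    // Each control byte names the source index; a high-bit-set byte would
    // zero the corresponding output byte instead.
    let idx = _mm_setr_epi8(15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0);
    let r = _mm_shuffle_epi8(a, idx);
    let mut out = [0u8; 16];
    _mm_storeu_si128(out.as_mut_ptr() as *mut __m128i, r);
    assert_eq!(out[0], 15);
    assert_eq!(out[15], 0);
}

#[cfg(not(target_arch = "x86_64"))]
fn main() {}
```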

_mm_shuffle_epi32x86 and sse2

Shuffle 32-bit integers in a using the control in imm8.

_mm_shuffle_pdx86 and sse2

Constructs a 128-bit floating-point vector of [2 x double] from two 128-bit vector parameters of [2 x double], using the immediate-value parameter as a specifier.

_mm_shuffle_psx86 and sse

Shuffle packed single-precision (32-bit) floating-point elements in a and b using mask.

_mm_shufflehi_epi16x86 and sse2

Shuffle 16-bit integers in the high 64 bits of a using the control in imm8.

_mm_shufflelo_epi16x86 and sse2

Shuffle 16-bit integers in the low 64 bits of a using the control in imm8.

_mm_sign_epi8x86 and ssse3

Negate packed 8-bit integers in a when the corresponding signed 8-bit integer in b is negative, and return the result. Elements in result are zeroed out when the corresponding element in b is zero.

_mm_sign_epi16x86 and ssse3

Negate packed 16-bit integers in a when the corresponding signed 16-bit integer in b is negative, and return the results. Elements in result are zeroed out when the corresponding element in b is zero.

_mm_sign_epi32x86 and ssse3

Negate packed 32-bit integers in a when the corresponding signed 32-bit integer in b is negative, and return the results. Elements in the result are zeroed out when the corresponding element in b is zero.

_mm_sll_epi16x86 and sse2

Shift packed 16-bit integers in a left by count while shifting in zeros.

_mm_sll_epi32x86 and sse2

Shift packed 32-bit integers in a left by count while shifting in zeros.

_mm_sll_epi64x86 and sse2

Shift packed 64-bit integers in a left by count while shifting in zeros.

_mm_slli_epi16x86 and sse2

Shift packed 16-bit integers in a left by imm8 while shifting in zeros.

_mm_slli_epi32x86 and sse2

Shift packed 32-bit integers in a left by imm8 while shifting in zeros.

_mm_slli_epi64x86 and sse2

Shift packed 64-bit integers in a left by imm8 while shifting in zeros.

_mm_slli_si128x86 and sse2

Shift a left by imm8 bytes while shifting in zeros.

_mm_sllv_epi32x86 and avx2

Shift packed 32-bit integers in a left by the amount specified by the corresponding element in count while shifting in zeros, and return the result.
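
For example, a minimal sketch assuming an x86_64 target with runtime detection of AVX2; each lane is shifted by its own count:

```rust
// Sketch: per-lane variable shifts with _mm_sllv_epi32 (AVX2). Assumes an
// x86_64 target; the feature is detected at runtime.
#[cfg(target_arch = "x86_64")]
fn main() {
    if is_x86_feature_detected!("avx2") {
        unsafe { demo() }
    }
}

#[cfg(target_arch = "x86_64")]
#[target_feature(enable = "avx2")]
unsafe fn demo() {
    use std::arch::x86_64::*;
    let a = _mm_set1_epi32(1);
    let count = _mm_setr_epi32(0, 1, 2, 3); // each lane gets its own shift amount
    let r = _mm_sllv_epi32(a, count);
    let mut out = [0i32; 4];
    _mm_storeu_si128(out.as_mut_ptr() as *mut __m128i, r);
    assert_eq!(out, [1, 2, 4, 8]);
}

#[cfg(not(target_arch = "x86_64"))]
fn main() {}
```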

_mm_sllv_epi64x86 and avx2

Shift packed 64-bit integers in a left by the amount specified by the corresponding element in count while shifting in zeros, and return the result.

_mm_sqrt_pdx86 and sse2

Return a new vector with the square root of each of the values in a.

_mm_sqrt_psx86 and sse

Return the square root of packed single-precision (32-bit) floating-point elements in a.

_mm_sqrt_sdx86 and sse2

Return a new vector with the low element of a replaced by the square root of the lower element of b.

_mm_sqrt_ssx86 and sse

Return the square root of the first single-precision (32-bit) floating-point element in a; the other elements are unchanged.

_mm_sra_epi16x86 and sse2

Shift packed 16-bit integers in a right by count while shifting in sign bits.

_mm_sra_epi32x86 and sse2

Shift packed 32-bit integers in a right by count while shifting in sign bits.

_mm_srai_epi16x86 and sse2

Shift packed 16-bit integers in a right by imm8 while shifting in sign bits.

_mm_srai_epi32x86 and sse2

Shift packed 32-bit integers in a right by imm8 while shifting in sign bits.

_mm_srav_epi32x86 and avx2

Shift packed 32-bit integers in a right by the amount specified by the corresponding element in count while shifting in sign bits.

_mm_srl_epi16x86 and sse2

Shift packed 16-bit integers in a right by count while shifting in zeros.

_mm_srl_epi32x86 and sse2

Shift packed 32-bit integers in a right by count while shifting in zeros.

_mm_srl_epi64x86 and sse2

Shift packed 64-bit integers in a right by count while shifting in zeros.

_mm_srli_epi16x86 and sse2

Shift packed 16-bit integers in a right by imm8 while shifting in zeros.

_mm_srli_epi32x86 and sse2

Shift packed 32-bit integers in a right by imm8 while shifting in zeros.

_mm_srli_epi64x86 and sse2

Shift packed 64-bit integers in a right by imm8 while shifting in zeros.

_mm_srli_si128x86 and sse2

Shift a right by imm8 bytes while shifting in zeros.

_mm_srlv_epi32x86 and avx2

Shift packed 32-bit integers in a right by the amount specified by the corresponding element in count while shifting in zeros, and return the result.

_mm_srlv_epi64x86 and avx2

Shift packed 64-bit integers in a right by the amount specified by the corresponding element in count while shifting in zeros, and return the result.

_mm_store1_psx86 and sse

Store the lowest 32-bit float of a repeated four times into aligned memory.

_mm_store1_pdx86 and sse2

Store the lower double-precision (64-bit) floating-point element from a into 2 contiguous elements in memory. mem_addr must be aligned on a 16-byte boundary or a general-protection exception may be generated.

_mm_store_pdx86 and sse2

Store 128-bits (composed of 2 packed double-precision (64-bit) floating-point elements) from a into memory. mem_addr must be aligned on a 16-byte boundary or a general-protection exception may be generated.

_mm_store_pd1x86 and sse2

Store the lower double-precision (64-bit) floating-point element from a into 2 contiguous elements in memory. mem_addr must be aligned on a 16-byte boundary or a general-protection exception may be generated.

_mm_store_psx86 and sse

Store four 32-bit floats into aligned memory.

_mm_store_ps1x86 and sse

Alias for _mm_store1_ps

_mm_store_sdx86 and sse2

Stores the lower 64 bits of a 128-bit vector of [2 x double] to a memory location.

_mm_store_si128x86 and sse2

Store 128-bits of integer data from a into memory.

_mm_store_ssx86 and sse

Store the lowest 32-bit float of a into memory.

_mm_storeh_pdx86 and sse2

Stores the upper 64 bits of a 128-bit vector of [2 x double] to a memory location.

_mm_storel_epi64x86 and sse2

Store the lower 64-bit integer a to a memory location.

_mm_storel_pdx86 and sse2

Stores the lower 64 bits of a 128-bit vector of [2 x double] to a memory location.

_mm_storer_pdx86 and sse2

Store 2 double-precision (64-bit) floating-point elements from a into memory in reverse order. mem_addr must be aligned on a 16-byte boundary or a general-protection exception may be generated.

_mm_storer_psx86 and sse

Store four 32-bit floats into aligned memory in reverse order.

_mm_storeu_pdx86 and sse2

Store 128-bits (composed of 2 packed double-precision (64-bit) floating-point elements) from a into memory. mem_addr does not need to be aligned on any particular boundary.

_mm_storeu_psx86 and sse

Store four 32-bit floats into memory. There are no restrictions on memory alignment. For aligned memory _mm_store_ps may be faster.
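
For example, a minimal sketch assuming an x86_64 target, storing to an offset that is generally not 16-byte aligned (which _mm_store_ps would not allow):

```rust
// Sketch: unaligned store with _mm_storeu_ps into an arbitrary offset of a
// buffer. Assumes an x86_64 target, where SSE is part of the baseline.
#[cfg(target_arch = "x86_64")]
fn main() {
    use std::arch::x86_64::*;
    unsafe {
        let v = _mm_set1_ps(2.5);
        let mut buf = [0.0f32; 8];
        // Offset 1 is generally not 16-byte aligned; _mm_storeu_ps allows it,
        // _mm_store_ps would not.
        _mm_storeu_ps(buf.as_mut_ptr().add(1), v);
        assert_eq!(buf, [0.0, 2.5, 2.5, 2.5, 2.5, 0.0, 0.0, 0.0]);
    }
}

#[cfg(not(target_arch = "x86_64"))]
fn main() {}
```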

_mm_storeu_si128x86 and sse2

Store 128-bits of integer data from a into memory.

_mm_stream_pdx86 and sse2

Stores a 128-bit floating point vector of [2 x double] to a 128-bit aligned memory location. To minimize caching, the data is flagged as non-temporal (unlikely to be used again soon).

_mm_stream_psx86 and sse

Stores a into the memory at mem_addr using a non-temporal memory hint.

_mm_stream_sdx86 and sse4a

Non-temporal store of a.0 into p.

_mm_stream_si32x86 and sse2

Stores a 32-bit integer value in the specified memory location. To minimize caching, the data is flagged as non-temporal (unlikely to be used again soon).

_mm_stream_si128x86 and sse2

Stores a 128-bit integer vector to a 128-bit aligned memory location. To minimize caching, the data is flagged as non-temporal (unlikely to be used again soon).

_mm_stream_ssx86 and sse4a

Non-temporal store of a.0 into p.

_mm_sub_epi8x86 and sse2

Subtract packed 8-bit integers in b from packed 8-bit integers in a.

_mm_sub_epi16x86 and sse2

Subtract packed 16-bit integers in b from packed 16-bit integers in a.

_mm_sub_epi32x86 and sse2

Subtract packed 32-bit integers in b from packed 32-bit integers in a.

_mm_sub_epi64x86 and sse2

Subtract packed 64-bit integers in b from packed 64-bit integers in a.

_mm_sub_pdx86 and sse2

Subtract packed double-precision (64-bit) floating-point elements in b from a.

_mm_sub_psx86 and sse

Subtracts __m128 vectors.

_mm_sub_sdx86 and sse2

Return a new vector with the low element of a replaced by the result of subtracting the low element of b from the low element of a.

_mm_sub_ssx86 and sse

Subtracts the first component of b from the first component of a; the other components are copied from a.

_mm_subs_epi8x86 and sse2

Subtract packed 8-bit integers in b from packed 8-bit integers in a using saturation.

_mm_subs_epi16x86 and sse2

Subtract packed 16-bit integers in b from packed 16-bit integers in a using saturation.

_mm_subs_epu8x86 and sse2

Subtract packed unsigned 8-bit integers in b from packed unsigned 8-bit integers in a using saturation.
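
For example, a minimal sketch assuming an x86_64 target; unsigned saturation clamps the result at zero instead of wrapping:

```rust
// Sketch: saturating unsigned subtraction with _mm_subs_epu8. Assumes an
// x86_64 target, where SSE2 is part of the baseline.
#[cfg(target_arch = "x86_64")]
fn main() {
    use std::arch::x86_64::*;
    unsafe {
        let a = _mm_set1_epi8(10);
        let b = _mm_set1_epi8(25);
        // 10 - 25 would underflow; unsigned saturation clamps every lane to 0.
        let r = _mm_subs_epu8(a, b);
        let mut out = [0u8; 16];
        _mm_storeu_si128(out.as_mut_ptr() as *mut __m128i, r);
        assert_eq!(out, [0u8; 16]);
    }
}

#[cfg(not(target_arch = "x86_64"))]
fn main() {}
```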

_mm_subs_epu16x86 and sse2

Subtract packed unsigned 16-bit integers in b from packed unsigned 16-bit integers in a using saturation.

_mm_test_all_onesx86 and sse4.1

Tests whether the specified bits in a 128-bit integer vector are all ones.

_mm_test_all_zerosx86 and sse4.1

Tests whether the specified bits in a 128-bit integer vector are all zeros.

_mm_test_mix_ones_zerosx86 and sse4.1

Tests whether the specified bits in a 128-bit integer vector are neither all zeros nor all ones.

_mm_testc_pdx86 and avx

Compute the bitwise AND of 128 bits (representing double-precision (64-bit) floating-point elements) in a and b, producing an intermediate 128-bit value, and set ZF to 1 if the sign bit of each 64-bit element in the intermediate value is zero, otherwise set ZF to 0. Compute the bitwise NOT of a and then AND with b, producing an intermediate value, and set CF to 1 if the sign bit of each 64-bit element in the intermediate value is zero, otherwise set CF to 0. Return the CF value.

_mm_testc_psx86 and avx

Compute the bitwise AND of 128 bits (representing single-precision (32-bit) floating-point elements) in a and b, producing an intermediate 128-bit value, and set ZF to 1 if the sign bit of each 32-bit element in the intermediate value is zero, otherwise set ZF to 0. Compute the bitwise NOT of a and then AND with b, producing an intermediate value, and set CF to 1 if the sign bit of each 32-bit element in the intermediate value is zero, otherwise set CF to 0. Return the CF value.

_mm_testc_si128x86 and sse4.1

Tests whether the specified bits in a 128-bit integer vector are all ones.

_mm_testnzc_pdx86 and avx

Compute the bitwise AND of 128 bits (representing double-precision (64-bit) floating-point elements) in a and b, producing an intermediate 128-bit value, and set ZF to 1 if the sign bit of each 64-bit element in the intermediate value is zero, otherwise set ZF to 0. Compute the bitwise NOT of a and then AND with b, producing an intermediate value, and set CF to 1 if the sign bit of each 64-bit element in the intermediate value is zero, otherwise set CF to 0. Return 1 if both the ZF and CF values are zero, otherwise return 0.

_mm_testnzc_psx86 and avx

Compute the bitwise AND of 128 bits (representing single-precision (32-bit) floating-point elements) in a and b, producing an intermediate 128-bit value, and set ZF to 1 if the sign bit of each 32-bit element in the intermediate value is zero, otherwise set ZF to 0. Compute the bitwise NOT of a and then AND with b, producing an intermediate value, and set CF to 1 if the sign bit of each 32-bit element in the intermediate value is zero, otherwise set CF to 0. Return 1 if both the ZF and CF values are zero, otherwise return 0.

_mm_testnzc_si128x86 and sse4.1

Tests whether the specified bits in a 128-bit integer vector are neither all zeros nor all ones.

_mm_testz_pdx86 and avx

Compute the bitwise AND of 128 bits (representing double-precision (64-bit) floating-point elements) in a and b, producing an intermediate 128-bit value, and set ZF to 1 if the sign bit of each 64-bit element in the intermediate value is zero, otherwise set ZF to 0. Compute the bitwise NOT of a and then AND with b, producing an intermediate value, and set CF to 1 if the sign bit of each 64-bit element in the intermediate value is zero, otherwise set CF to 0. Return the ZF value.

_mm_testz_psx86 and avx

Compute the bitwise AND of 128 bits (representing single-precision (32-bit) floating-point elements) in a and b, producing an intermediate 128-bit value, and set ZF to 1 if the sign bit of each 32-bit element in the intermediate value is zero, otherwise set ZF to 0. Compute the bitwise NOT of a and then AND with b, producing an intermediate value, and set CF to 1 if the sign bit of each 32-bit element in the intermediate value is zero, otherwise set CF to 0. Return the ZF value.

_mm_testz_si128x86 and sse4.1

Tests whether the specified bits in a 128-bit integer vector are all zeros.
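
For example, a minimal sketch assuming an x86_64 target with runtime detection of SSE4.1; the intrinsic returns 1 exactly when the bitwise AND of its operands is all zeros:

```rust
// Sketch: _mm_testz_si128 returns 1 exactly when a AND mask is all zeros
// (SSE4.1). Assumes an x86_64 target; the feature is detected at runtime.
#[cfg(target_arch = "x86_64")]
fn main() {
    if is_x86_feature_detected!("sse4.1") {
        unsafe { demo() }
    }
}

#[cfg(target_arch = "x86_64")]
#[target_feature(enable = "sse4.1")]
unsafe fn demo() {
    use std::arch::x86_64::*;
    let a = _mm_set1_epi32(0b0101);
    let disjoint = _mm_set1_epi32(0b1010);
    let overlapping = _mm_set1_epi32(0b0100);
    assert_eq!(_mm_testz_si128(a, disjoint), 1); // a & disjoint == 0
    assert_eq!(_mm_testz_si128(a, overlapping), 0); // shares a set bit with a
}

#[cfg(not(target_arch = "x86_64"))]
fn main() {}
```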

_mm_tzcnt_32x86 and bmi1

Counts the number of trailing least significant zero bits.

_mm_ucomieq_sdx86 and sse2

Compare the lower element of a and b for equality.

_mm_ucomieq_ssx86 and sse

Compare two 32-bit floats from the low-order bits of a and b. Returns 1 if they are equal, or 0 otherwise. This instruction will not signal an exception if either argument is a quiet NaN.
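
For example, a minimal sketch assuming an x86_64 target; only the lowest lane participates in the comparison:

```rust
// Sketch: _mm_ucomieq_ss compares only the lowest lane and returns 1 or 0.
// Assumes an x86_64 target, where SSE is part of the baseline.
#[cfg(target_arch = "x86_64")]
fn main() {
    use std::arch::x86_64::*;
    unsafe {
        let a = _mm_set_ss(1.0);
        let b = _mm_set_ss(1.0);
        let c = _mm_set_ss(2.0);
        assert_eq!(_mm_ucomieq_ss(a, b), 1); // lowest lanes are equal
        assert_eq!(_mm_ucomieq_ss(a, c), 0); // lowest lanes differ
    }
}

#[cfg(not(target_arch = "x86_64"))]
fn main() {}
```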

_mm_ucomige_sdx86 and sse2

Compare the lower element of a and b for greater-than-or-equal.

_mm_ucomige_ssx86 and sse

Compare two 32-bit floats from the low-order bits of a and b. Returns 1 if the value from a is greater than or equal to the one from b, or 0 otherwise. This instruction will not signal an exception if either argument is a quiet NaN.

_mm_ucomigt_sdx86 and sse2

Compare the lower element of a and b for greater-than.

_mm_ucomigt_ssx86 and sse

Compare two 32-bit floats from the low-order bits of a and b. Returns 1 if the value from a is greater than the one from b, or 0 otherwise. This instruction will not signal an exception if either argument is a quiet NaN.

_mm_ucomile_sdx86 and sse2

Compare the lower element of a and b for less-than-or-equal.

_mm_ucomile_ssx86 and sse

Compare two 32-bit floats from the low-order bits of a and b. Returns 1 if the value from a is less than or equal to the one from b, or 0 otherwise. This instruction will not signal an exception if either argument is a quiet NaN.

_mm_ucomilt_sdx86 and sse2

Compare the lower element of a and b for less-than.

_mm_ucomilt_ssx86 and sse

Compare two 32-bit floats from the low-order bits of a and b. Returns 1 if the value from a is less than the one from b, or 0 otherwise. This instruction will not signal an exception if either argument is a quiet NaN.

_mm_ucomineq_sdx86 and sse2

Compare the lower element of a and b for not-equal.

_mm_ucomineq_ssx86 and sse

Compare two 32-bit floats from the low-order bits of a and b. Returns 1 if they are not equal, or 0 otherwise. This instruction will not signal an exception if either argument is a quiet NaN.

_mm_undefined_pdx86 and sse2

Return vector of type __m128d with undefined elements.

_mm_undefined_psx86 and sse

Return vector of type __m128 with undefined elements.

_mm_undefined_si128x86 and sse2

Return vector of type __m128i with undefined elements.

_mm_unpackhi_epi8x86 and sse2

Unpack and interleave 8-bit integers from the high half of a and b.

_mm_unpackhi_epi16x86 and sse2

Unpack and interleave 16-bit integers from the high half of a and b.

_mm_unpackhi_epi32x86 and sse2

Unpack and interleave 32-bit integers from the high half of a and b.

_mm_unpackhi_epi64x86 and sse2

Unpack and interleave 64-bit integers from the high half of a and b.

_mm_unpackhi_pdx86 and sse2

The resulting __m128d is composed of the high-order values of the two interleaved __m128d inputs, i.e. [a.1, b.1].

_mm_unpackhi_psx86 and sse

Unpack and interleave single-precision (32-bit) floating-point elements from the higher half of a and b.

_mm_unpacklo_epi8x86 and sse2

Unpack and interleave 8-bit integers from the low half of a and b.
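
For example, a minimal sketch assuming an x86_64 target; the low eight bytes of a and b are interleaved as [a.0, b.0, a.1, b.1, ...]:

```rust
// Sketch: interleave the low halves of two byte vectors with
// _mm_unpacklo_epi8. Assumes an x86_64 target, where SSE2 is baseline.
#[cfg(target_arch = "x86_64")]
fn main() {
    use std::arch::x86_64::*;
    unsafe {
        let a = _mm_setr_epi8(0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15);
        let b = _mm_set1_epi8(-1);
        let r = _mm_unpacklo_epi8(a, b); // [a.0, b.0, a.1, b.1, ..., a.7, b.7]
        let mut out = [0i8; 16];
        _mm_storeu_si128(out.as_mut_ptr() as *mut __m128i, r);
        assert_eq!(out[0], 0);
        assert_eq!(out[1], -1);
        assert_eq!(out[2], 1);
        assert_eq!(out[3], -1);
    }
}

#[cfg(not(target_arch = "x86_64"))]
fn main() {}
```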

_mm_unpacklo_epi16x86 and sse2

Unpack and interleave 16-bit integers from the low half of a and b.

_mm_unpacklo_epi32x86 and sse2

Unpack and interleave 32-bit integers from the low half of a and b.

_mm_unpacklo_epi64x86 and sse2

Unpack and interleave 64-bit integers from the low half of a and b.

_mm_unpacklo_pdx86 and sse2

The resulting __m128d is composed of the low-order values of the two interleaved __m128d inputs, i.e. [a.0, b.0].

_mm_unpacklo_psx86 and sse

Unpack and interleave single-precision (32-bit) floating-point elements from the lower half of a and b.

_mm_xor_pdx86 and sse2

Compute the bitwise XOR of packed double-precision (64-bit) floating-point elements in a and b.

_mm_xor_psx86 and sse

Bitwise exclusive OR of packed single-precision (32-bit) floating-point elements.

_mm_xor_si128x86 and sse2

Compute the bitwise XOR of 128 bits (representing integer data) in a and b.
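
For example, a minimal sketch assuming an x86_64 target:

```rust
// Sketch: bitwise XOR of two integer vectors with _mm_xor_si128. Assumes an
// x86_64 target, where SSE2 is part of the baseline.
#[cfg(target_arch = "x86_64")]
fn main() {
    use std::arch::x86_64::*;
    unsafe {
        let a = _mm_set1_epi32(0b1100);
        let b = _mm_set1_epi32(0b1010);
        let r = _mm_xor_si128(a, b);
        let mut out = [0i32; 4];
        _mm_storeu_si128(out.as_mut_ptr() as *mut __m128i, r);
        assert_eq!(out, [0b0110; 4]);
    }
}

#[cfg(not(target_arch = "x86_64"))]
fn main() {}
```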

_mulx_u32x86 and bmi2

Unsigned multiply without affecting flags.

_pdep_u32x86 and bmi2

Scatter contiguous low order bits of a to the result at the positions specified by the mask.

_pext_u32x86 and bmi2

Gathers the bits of x specified by the mask into the contiguous low order bit positions of the result.
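
For example, a minimal sketch assuming an x86_64 target with runtime detection of BMI2, using _pdep_u32 and _pext_u32 as inverses of each other over the same mask:

```rust
// Sketch: _pdep_u32 scatters low-order bits into the positions selected by a
// mask, and _pext_u32 gathers them back (BMI2). Assumes an x86_64 target;
// the feature is detected at runtime.
#[cfg(target_arch = "x86_64")]
fn main() {
    if is_x86_feature_detected!("bmi2") {
        unsafe { demo() }
    }
}

#[cfg(target_arch = "x86_64")]
#[target_feature(enable = "bmi2")]
unsafe fn demo() {
    use std::arch::x86_64::*;
    let mask = 0b1010_1010u32;
    // Deposit the two low bits of 0b11 into the two lowest set positions of `mask`.
    assert_eq!(_pdep_u32(0b11, mask), 0b0000_1010);
    // Extract the masked bits back into the low-order positions.
    assert_eq!(_pext_u32(0b0000_1010, mask), 0b11);
}

#[cfg(not(target_arch = "x86_64"))]
fn main() {}
```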

_popcnt32x86 and popcnt

Counts the bits that are set.

_rdrand16_stepx86 and rdrand

Read a hardware generated 16-bit random value and store the result in val. Return 1 if a random value was generated, and 0 otherwise.

_rdrand32_stepx86 and rdrand

Read a hardware generated 32-bit random value and store the result in val. Return 1 if a random value was generated, and 0 otherwise.
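
For example, a minimal sketch assuming an x86_64 target with runtime detection of the rdrand feature; the return value must be checked before trusting val:

```rust
// Sketch: draw one hardware random number with _rdrand32_step. Assumes an
// x86_64 target; the rdrand feature is detected at runtime.
#[cfg(target_arch = "x86_64")]
fn main() {
    if is_x86_feature_detected!("rdrand") {
        unsafe { demo() }
    }
}

#[cfg(target_arch = "x86_64")]
#[target_feature(enable = "rdrand")]
unsafe fn demo() {
    use std::arch::x86_64::*;
    let mut val: u32 = 0;
    // A return value of 1 means `val` was filled; 0 means no value was available.
    if _rdrand32_step(&mut val) == 1 {
        println!("hardware random value: {}", val);
    } else {
        println!("rdrand produced no value; retry or fall back");
    }
}

#[cfg(not(target_arch = "x86_64"))]
fn main() {}
```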

_rdseed16_stepx86 and rdseed

Read a 16-bit NIST SP800-90B and SP800-90C compliant random value and store in val. Return 1 if a random value was generated, and 0 otherwise.

_rdseed32_stepx86 and rdseed

Read a 32-bit NIST SP800-90B and SP800-90C compliant random value and store in val. Return 1 if a random value was generated, and 0 otherwise.

_rdtscx86

Reads the current value of the processor’s time-stamp counter.

_subborrow_u32x86

Subtract unsigned 32-bit integer b and the unsigned 8-bit borrow-in c_in (carry or overflow flag) from unsigned 32-bit integer a, store the unsigned 32-bit result in out, and return the borrow-out (carry or overflow flag).

_t1mskc_u32x86 and tbm

Clears all bits below the least significant zero of x and sets all other bits.

_t1mskc_u64x86 and tbm

Clears all bits below the least significant zero of x and sets all other bits.

_tzcnt_u32x86 and bmi1

Counts the number of trailing least significant zero bits.
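
For example, a minimal sketch assuming an x86_64 target with runtime detection of BMI1; a zero input yields the operand width, 32:

```rust
// Sketch: count trailing zero bits with _tzcnt_u32 (BMI1). Assumes an x86_64
// target; the feature is detected at runtime.
#[cfg(target_arch = "x86_64")]
fn main() {
    if is_x86_feature_detected!("bmi1") {
        unsafe { demo() }
    }
}

#[cfg(target_arch = "x86_64")]
#[target_feature(enable = "bmi1")]
unsafe fn demo() {
    use std::arch::x86_64::*;
    assert_eq!(_tzcnt_u32(0b1010_0000), 5);
    assert_eq!(_tzcnt_u32(0), 32); // defined for zero: returns the operand width
}

#[cfg(not(target_arch = "x86_64"))]
fn main() {}
```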

_tzmsk_u32x86 and tbm

Sets all bits below the least significant one of x and clears all other bits.

_tzmsk_u64x86 and tbm

Sets all bits below the least significant one of x and clears all other bits.

_xgetbvx86 and xsave

Reads the contents of the extended control register XCR specified in xcr_no.

_xrstorx86 and xsave

Perform a full or partial restore of the enabled processor states using the state information stored in memory at mem_addr.

_xrstorsx86 and xsave,xsaves

Perform a full or partial restore of the enabled processor states using the state information stored in memory at mem_addr.

_xsavex86 and xsave

Perform a full or partial save of the enabled processor states to memory at mem_addr.

_xsavecx86 and xsave,xsavec

Perform a full or partial save of the enabled processor states to memory at mem_addr.

_xsaveoptx86 and xsave,xsaveopt

Perform a full or partial save of the enabled processor states to memory at mem_addr.

_xsavesx86 and xsave,xsaves

Perform a full or partial save of the enabled processor states to memory at mem_addr

_xsetbvx86 and xsave

Copy 64-bits from val to the extended control register (XCR) specified by a.

_MM_SHUFFLEExperimentalx86

A utility function for creating masks to use with Intel shuffle and permute intrinsics.

_m_emptyExperimentalx86 and mmx

Empty the MMX state, which marks the x87 FPU registers as available for use by x87 instructions. This instruction must be used at the end of all MMX technology procedures.

_m_maskmovqExperimentalx86 and sse,mmx

Conditionally copies the values from each 8-bit element in the first 64-bit integer vector operand to the specified memory location, as specified by the most significant bit in the corresponding element in the second 64-bit integer vector operand.

_m_paddbExperimentalx86 and mmx

Add packed 8-bit integers in a and b.

_m_padddExperimentalx86 and mmx

Add packed 32-bit integers in a and b.

_m_paddsbExperimentalx86 and mmx

Add packed 8-bit integers in a and b using saturation.

_m_paddswExperimentalx86 and mmx

Add packed 16-bit integers in a and b using saturation.

_m_paddusbExperimentalx86 and mmx

Add packed unsigned 8-bit integers in a and b using saturation.

_m_padduswExperimentalx86 and mmx

Add packed unsigned 16-bit integers in a and b using saturation.

_m_paddwExperimentalx86 and mmx

Add packed 16-bit integers in a and b.

_m_pavgbExperimentalx86 and sse,mmx

Computes the rounded averages of the packed unsigned 8-bit integer values and writes the averages to the corresponding bits in the destination.

_m_pavgwExperimentalx86 and sse,mmx

Computes the rounded averages of the packed unsigned 16-bit integer values and writes the averages to the corresponding bits in the destination.

_m_pextrwExperimentalx86 and sse,mmx

Extracts 16-bit element from a 64-bit vector of [4 x i16] and returns it, as specified by the immediate integer operand.

_m_pinsrwExperimentalx86 and sse,mmx

Copies data from the 64-bit vector of [4 x i16] to the destination, and inserts the lower 16-bits of an integer operand at the 16-bit offset specified by the immediate operand n.

_m_pmaxswExperimentalx86 and sse,mmx

Compares the packed 16-bit signed integers of a and b writing the greatest value into the result.

_m_pmaxubExperimentalx86 and sse,mmx

Compares the packed 8-bit unsigned integers of a and b writing the greatest value into the result.

_m_pminswExperimentalx86 and sse,mmx

Compares the packed 16-bit signed integers of a and b writing the smallest value into the result.

_m_pminubExperimentalx86 and sse,mmx

Compares the packed 8-bit unsigned integers of a and b writing the smallest value into the result.

_m_pmovmskbExperimentalx86 and sse,mmx

Takes the most significant bit from each 8-bit element in a 64-bit integer vector to create a 16-bit mask value. Zero-extends the value to 32-bit integer and writes it to the destination.

_m_pmulhuwExperimentalx86 and sse,mmx

Multiplies packed 16-bit unsigned integer values and writes the high-order 16 bits of each 32-bit product to the corresponding bits in the destination.

_m_psadbwExperimentalx86 and sse,mmx

Subtracts the corresponding 8-bit unsigned integer values of the two 64-bit vector operands and computes the absolute value of each difference. The sum of the eight absolute differences is then written to bits [15:0] of the destination; the remaining bits [63:16] are cleared.

_m_pshufwExperimentalx86 and sse,mmx

Shuffles the 4 16-bit integers from a 64-bit integer vector to the destination, as specified by the immediate value operand.

_m_psubbExperimentalx86 and mmx

Subtract packed 8-bit integers in b from packed 8-bit integers in a.

_m_psubdExperimentalx86 and mmx

Subtract packed 32-bit integers in b from packed 32-bit integers in a.

_m_psubsbExperimentalx86 and mmx

Subtract packed 8-bit integers in b from packed 8-bit integers in a using saturation.

_m_psubswExperimentalx86 and mmx

Subtract packed 16-bit integers in b from packed 16-bit integers in a using saturation.

_m_psubusbExperimentalx86 and mmx

Subtract packed unsigned 8-bit integers in b from packed unsigned 8-bit integers in a using saturation.

_m_psubuswExperimentalx86 and mmx

Subtract packed unsigned 16-bit integers in b from packed unsigned 16-bit integers in a using saturation.

_m_psubwExperimentalx86 and mmx

Subtract packed 16-bit integers in b from packed 16-bit integers in a.

_mm256_madd52hi_epu64Experimentalx86 and avx512ifma,avx512vl

Multiply packed unsigned 52-bit integers in each 64-bit element of b and c to form a 104-bit intermediate result. Add the high 52-bit unsigned integer from the intermediate result with the corresponding unsigned 64-bit integer in a, and store the results in dst.

_mm256_madd52lo_epu64Experimentalx86 and avx512ifma,avx512vl

Multiply packed unsigned 52-bit integers in each 64-bit element of b and c to form a 104-bit intermediate result. Add the low 52-bit unsigned integer from the intermediate result with the corresponding unsigned 64-bit integer in a, and store the results in dst.

_mm512_abs_epi32Experimentalx86 and avx512f

Computes the absolute values of packed 32-bit integers in a.

_mm512_madd52hi_epu64Experimentalx86 and avx512ifma

Multiply packed unsigned 52-bit integers in each 64-bit element of b and c to form a 104-bit intermediate result. Add the high 52-bit unsigned integer from the intermediate result with the corresponding unsigned 64-bit integer in a, and store the results in dst.

_mm512_madd52lo_epu64Experimentalx86 and avx512ifma

Multiply packed unsigned 52-bit integers in each 64-bit element of b and c to form a 104-bit intermediate result. Add the low 52-bit unsigned integer from the intermediate result with the corresponding unsigned 64-bit integer in a, and store the results in dst.

_mm512_mask_abs_epi32Experimentalx86 and avx512f

Compute the absolute value of packed 32-bit integers in a, and store the unsigned results in dst using writemask k (elements are copied from src when the corresponding mask bit is not set).

_mm512_maskz_abs_epi32Experimentalx86 and avx512f

Compute the absolute value of packed 32-bit integers in a, and store the unsigned results in dst using zeromask k (elements are zeroed out when the corresponding mask bit is not set).

_mm512_set1_epi64Experimentalx86 and avx512f

Broadcast 64-bit integer a to all elements of dst.

_mm512_setr_epi32Experimentalx86 and avx512f

Set packed 32-bit integers in dst with the supplied values in reverse order.

_mm512_setzero_si512Experimentalx86 and avx512f

Return vector of type __m512i with all elements set to zero.

_mm_abs_pi8Experimentalx86 and ssse3,mmx

Compute the absolute value of packed 8-bit integers in a and return the unsigned results.

_mm_abs_pi16Experimentalx86 and ssse3,mmx

Compute the absolute value of packed 16-bit integers in a, and return the unsigned results.

_mm_abs_pi32Experimentalx86 and ssse3,mmx

Compute the absolute value of packed 32-bit integers in a, and return the unsigned results.

_mm_add_pi8Experimentalx86 and mmx

Add packed 8-bit integers in a and b.

_mm_add_pi16Experimentalx86 and mmx

Add packed 16-bit integers in a and b.

_mm_add_pi32Experimentalx86 and mmx

Add packed 32-bit integers in a and b.

_mm_add_si64Experimentalx86 and sse2,mmx

Adds two signed or unsigned 64-bit integer values, returning the lower 64 bits of the sum.

_mm_adds_pi8Experimentalx86 and mmx

Add packed 8-bit integers in a and b using saturation.

_mm_adds_pi16Experimentalx86 and mmx

Add packed 16-bit integers in a and b using saturation.

_mm_adds_pu8Experimentalx86 and mmx

Add packed unsigned 8-bit integers in a and b using saturation.

_mm_adds_pu16Experimentalx86 and mmx

Add packed unsigned 16-bit integers in a and b using saturation.

_mm_alignr_pi8Experimentalx86 and ssse3,mmx

Concatenates the two 64-bit integer vector operands, and right-shifts the result by the number of bytes specified in the immediate operand.

_mm_avg_pu8Experimentalx86 and sse,mmx

Computes the rounded averages of the packed unsigned 8-bit integer values and writes the averages to the corresponding bits in the destination.

_mm_avg_pu16Experimentalx86 and sse,mmx

Computes the rounded averages of the packed unsigned 16-bit integer values and writes the averages to the corresponding bits in the destination.

_mm_cmpgt_pi8Experimentalx86 and mmx

Compares whether each element of a is greater than the corresponding element of b returning 0 for false and -1 for true.

_mm_cmpgt_pi16Experimentalx86 and mmx

Compares whether each element of a is greater than the corresponding element of b returning 0 for false and -1 for true.

_mm_cmpgt_pi32Experimentalx86 and mmx

Compares whether each element of a is greater than the corresponding element of b returning 0 for false and -1 for true.

_mm_cvt_pi2psExperimentalx86 and sse,mmx

Converts two elements of a 64-bit vector of [2 x i32] into two floating point values and writes them to the lower 64-bits of the destination. The remaining higher order elements of the destination are copied from the corresponding elements in the first operand.

_mm_cvt_ps2piExperimentalx86 and sse,mmx

Convert the two lower packed single-precision (32-bit) floating-point elements in a to packed 32-bit integers.

_mm_cvtpd_pi32Experimentalx86 and sse2,mmx

Converts the two double-precision floating-point elements of a 128-bit vector of [2 x double] into two signed 32-bit integer values, returned in a 64-bit vector of [2 x i32].

_mm_cvtpi8_psExperimentalx86 and sse,mmx

Converts the lower 4 8-bit values of a into a 128-bit vector of 4 f32s.

_mm_cvtpi16_psExperimentalx86 and sse,mmx

Converts a 64-bit vector of i16s into a 128-bit vector of 4 f32s.

_mm_cvtpi32_psExperimentalx86 and sse,mmx

Converts two elements of a 64-bit vector of [2 x i32] into two floating point values and writes them to the lower 64-bits of the destination. The remaining higher order elements of the destination are copied from the corresponding elements in the first operand.

_mm_cvtpi32_pdExperimentalx86 and sse2,mmx

Converts the two signed 32-bit integer elements of a 64-bit vector of [2 x i32] into two double-precision floating-point values, returned in a 128-bit vector of [2 x double].

_mm_cvtpi32x2_psExperimentalx86 and sse,mmx

Converts the two 32-bit signed integer values from each 64-bit vector operand of [2 x i32] into a 128-bit vector of [4 x float].

_mm_cvtps_pi8Experimentalx86 and sse,mmx

Convert packed single-precision (32-bit) floating-point elements in a to packed 8-bit integers, and return them in the lower 4 elements of the result.

_mm_cvtps_pi16Experimentalx86 and sse,mmx

Convert packed single-precision (32-bit) floating-point elements in a to packed 16-bit integers.

_mm_cvtps_pi32Experimentalx86 and sse,mmx

Convert the two lower packed single-precision (32-bit) floating-point elements in a to packed 32-bit integers.

_mm_cvtpu8_psExperimentalx86 and sse,mmx

Converts the lower 4 unsigned 8-bit values of a into a 128-bit vector of 4 f32s.

_mm_cvtpu16_psExperimentalx86 and sse,mmx

Converts a 64-bit vector of u16s into a 128-bit vector of 4 f32s.

_mm_cvtsi32_si64Experimentalx86 and mmx

Copy 32-bit integer a to the lower element of the return value, and zero the upper element of the return value.

_mm_cvtsi64_si32Experimentalx86 and mmx

Return the lower 32-bit integer in a.

_mm_cvtt_ps2piExperimentalx86 and sse,mmx

Convert the two lower packed single-precision (32-bit) floating-point elements in a to packed 32-bit integers with truncation.

_mm_cvttpd_pi32Experimentalx86 and sse2,mmx

Converts the two double-precision floating-point elements of a 128-bit vector of [2 x double] into two signed 32-bit integer values, returned in a 64-bit vector of [2 x i32]. If the result of either conversion is inexact, the result is truncated (rounded towards zero) regardless of the current MXCSR setting.

_mm_cvttps_pi32Experimentalx86 and sse,mmx

Convert the two lower packed single-precision (32-bit) floating-point elements in a to packed 32-bit integers with truncation.

_mm_emptyExperimentalx86 and mmx

Empty the MMX state, which marks the x87 FPU registers as available for use by x87 instructions. This instruction must be used at the end of all MMX technology procedures.

_mm_extract_pi16Experimentalx86 and sse,mmx

Extracts 16-bit element from a 64-bit vector of [4 x i16] and returns it, as specified by the immediate integer operand.

_mm_hadd_pi16Experimentalx86 and ssse3,mmx

Horizontally add the adjacent pairs of values contained in 2 packed 64-bit vectors of [4 x i16].

_mm_hadd_pi32Experimentalx86 and ssse3,mmx

Horizontally add the adjacent pairs of values contained in 2 packed 64-bit vectors of [2 x i32].

_mm_hadds_pi16Experimentalx86 and ssse3,mmx

Horizontally add the adjacent pairs of values contained in 2 packed 64-bit vectors of [4 x i16]. Positive sums greater than 7FFFh are saturated to 7FFFh. Negative sums less than 8000h are saturated to 8000h.

_mm_hsub_pi16Experimentalx86 and ssse3,mmx

Horizontally subtracts the adjacent pairs of values contained in 2 packed 64-bit vectors of [4 x i16].

_mm_hsub_pi32Experimentalx86 and ssse3,mmx

Horizontally subtracts the adjacent pairs of values contained in 2 packed 64-bit vectors of [2 x i32].

_mm_hsubs_pi16Experimentalx86 and ssse3,mmx

Horizontally subtracts the adjacent pairs of values contained in 2 packed 64-bit vectors of [4 x i16]. Positive differences greater than 7FFFh are saturated to 7FFFh. Negative differences less than 8000h are saturated to 8000h.

_mm_insert_pi16Experimentalx86 and sse,mmx

Copies data from the 64-bit vector of [4 x i16] to the destination, and inserts the lower 16-bits of an integer operand at the 16-bit offset specified by the immediate operand n.

_mm_loadh_piExperimentalx86 and sse

Set the upper two single-precision floating-point values with 64 bits of data loaded from the address p; the lower two values are passed through from a.

_mm_loadl_piExperimentalx86 and sse

Load two floats from p into the lower half of a __m128. The upper half is copied from the upper half of a.

_mm_madd52hi_epu64Experimentalx86 and avx512ifma,avx512vl

Multiply packed unsigned 52-bit integers in each 64-bit element of b and c to form a 104-bit intermediate result. Add the high 52-bit unsigned integer from the intermediate result with the corresponding unsigned 64-bit integer in a, and store the results in dst.

_mm_madd52lo_epu64Experimentalx86 and avx512ifma,avx512vl

Multiply packed unsigned 52-bit integers in each 64-bit element of b and c to form a 104-bit intermediate result. Add the low 52-bit unsigned integer from the intermediate result with the corresponding unsigned 64-bit integer in a, and store the results in dst.

_mm_maddubs_pi16Experimentalx86 and ssse3,mmx

Multiplies corresponding pairs of packed 8-bit unsigned integer values contained in the first source operand and packed 8-bit signed integer values contained in the second source operand, adds pairs of contiguous products with signed saturation, and writes the 16-bit sums to the corresponding bits in the destination.

_mm_maskmove_si64Experimentalx86 and sse,mmx

Conditionally copies the values from each 8-bit element in the first 64-bit integer vector operand to the specified memory location, as specified by the most significant bit in the corresponding element in the second 64-bit integer vector operand.

_mm_max_pi16Experimentalx86 and sse,mmx

Compares the packed 16-bit signed integers of a and b writing the greatest value into the result.

_mm_max_pu8Experimentalx86 and sse,mmx

Compares the packed 8-bit unsigned integers of a and b writing the greatest value into the result.

_mm_min_pi16Experimentalx86 and sse,mmx

Compares the packed 16-bit signed integers of a and b writing the smallest value into the result.

_mm_min_pu8Experimentalx86 and sse,mmx

Compares the packed 8-bit unsigned integers of a and b writing the smallest value into the result.

_mm_movemask_pi8Experimentalx86 and sse,mmx

Takes the most significant bit from each 8-bit element in a 64-bit integer vector to create a 16-bit mask value. Zero-extends the value to 32-bit integer and writes it to the destination.

_mm_movepi64_pi64Experimentalx86 and sse2,mmx

Returns the lower 64 bits of a 128-bit integer vector as a 64-bit integer.

_mm_movpi64_epi64Experimentalx86 and sse2,mmx

Moves the 64-bit operand to a 128-bit integer vector, zeroing the upper bits.

_mm_mul_su32Experimentalx86 and sse2,mmx

Multiplies 32-bit unsigned integer values contained in the lower bits of the two 64-bit integer vectors and returns the 64-bit unsigned product.

_mm_mulhi_pu16Experimentalx86 and sse,mmx

Multiplies packed 16-bit unsigned integer values and writes the high-order 16 bits of each 32-bit product to the corresponding bits in the destination.

_mm_mulhrs_pi16Experimentalx86 and ssse3,mmx

Multiplies packed 16-bit signed integer values, truncates the 32-bit products to the 18 most significant bits by right-shifting, rounds the truncated value by adding 1, and writes bits [16:1] to the destination.

_mm_mullo_pi16Experimentalx86 and sse,mmx

Multiplies packed 16-bit integer values and writes the low-order 16 bits of each 32-bit product to the corresponding bits in the destination.

_mm_packs_pi16Experimentalx86 and mmx

Convert packed 16-bit integers from a and b to packed 8-bit integers using signed saturation.

_mm_packs_pi32Experimentalx86 and mmx

Convert packed 32-bit integers from a and b to packed 16-bit integers using signed saturation.

_mm_sad_pu8Experimentalx86 and sse,mmx

Subtracts the corresponding 8-bit unsigned integer values of the two 64-bit vector operands and computes the absolute value of each difference. The sum of the eight absolute differences is then written to bits [15:0] of the destination; the remaining bits [63:16] are cleared.

_mm_set1_epi64Experimentalx86 and sse2,mmx

Initializes both values in a 128-bit vector of [2 x i64] with the specified 64-bit value.

_mm_set1_pi8Experimentalx86 and mmx

Broadcast 8-bit integer a to all elements of dst.

_mm_set1_pi16Experimentalx86 and mmx

Broadcast 16-bit integer a to all elements of dst.

_mm_set1_pi32Experimentalx86 and mmx

Broadcast 32-bit integer a to all elements of dst.

_mm_set_epi64Experimentalx86 and sse2,mmx

Initializes both 64-bit values in a 128-bit vector of [2 x i64] with the specified 64-bit integer values.

_mm_set_pi8Experimentalx86 and mmx

Set packed 8-bit integers in dst with the supplied values.

_mm_set_pi16Experimentalx86 and mmx

Set packed 16-bit integers in dst with the supplied values.

_mm_set_pi32Experimentalx86 and mmx

Set packed 32-bit integers in dst with the supplied values.

_mm_setr_epi64Experimentalx86 and sse2,mmx

Constructs a 128-bit integer vector, initialized in reverse order with the specified 64-bit integral values.

_mm_setr_pi8Experimentalx86 and mmx

Set packed 8-bit integers in dst with the supplied values in reverse order.

_mm_setr_pi16Experimentalx86 and mmx

Set packed 16-bit integers in dst with the supplied values in reverse order.

_mm_setr_pi32Experimentalx86 and mmx

Set packed 32-bit integers in dst with the supplied values in reverse order.

_mm_setzero_si64Experimentalx86 and mmx

Constructs a 64-bit integer vector initialized to zero.

_mm_shuffle_pi8Experimentalx86 and ssse3,mmx

Shuffle packed 8-bit integers in a according to the shuffle control mask in the corresponding 8-bit element of b, and return the results.

_mm_shuffle_pi16Experimentalx86 and sse,mmx

Shuffles the 4 16-bit integers from a 64-bit integer vector to the destination, as specified by the immediate value operand.

_mm_sign_pi8Experimentalx86 and ssse3,mmx

Negate packed 8-bit integers in a when the corresponding signed 8-bit integer in b is negative, and return the results. Elements in the result are zeroed out when the corresponding element in b is zero.

_mm_sign_pi16Experimentalx86 and ssse3,mmx

Negate packed 16-bit integers in a when the corresponding signed 16-bit integer in b is negative, and return the results. Elements in the result are zeroed out when the corresponding element in b is zero.

_mm_sign_pi32Experimentalx86 and ssse3,mmx

Negate packed 32-bit integers in a when the corresponding signed 32-bit integer in b is negative, and return the results. Elements in the result are zeroed out when the corresponding element in b is zero.

_mm_storeh_piExperimentalx86 and sse

Store the upper half of a (64 bits) into memory.

_mm_storel_piExperimentalx86 and sse

Store the lower half of a (64 bits) into memory.

_mm_stream_piExperimentalx86 and sse,mmx

Store 64-bits of integer data from a into memory using a non-temporal memory hint.

_mm_sub_pi8Experimentalx86 and mmx

Subtract packed 8-bit integers in b from packed 8-bit integers in a.

_mm_sub_pi16Experimentalx86 and mmx

Subtract packed 16-bit integers in b from packed 16-bit integers in a.

_mm_sub_pi32Experimentalx86 and mmx

Subtract packed 32-bit integers in b from packed 32-bit integers in a.

_mm_sub_si64Experimentalx86 and sse2,mmx

Subtracts signed or unsigned 64-bit integer values and writes the difference to the corresponding bits in the destination.

_mm_subs_pi8Experimentalx86 and mmx

Subtract packed 8-bit integers in b from packed 8-bit integers in a using saturation.

_mm_subs_pi16Experimentalx86 and mmx

Subtract packed 16-bit integers in b from packed 16-bit integers in a using saturation.

_mm_subs_pu8Experimentalx86 and mmx

Subtract packed unsigned 8-bit integers in b from packed unsigned 8-bit integers in a using saturation.

_mm_subs_pu16Experimentalx86 and mmx

Subtract packed unsigned 16-bit integers in b from packed unsigned 16-bit integers in a using saturation.

_mm_unpackhi_pi8Experimentalx86 and mmx

Unpacks the upper four elements from two i8x8 vectors and interleaves them into the result: [a.4, b.4, a.5, b.5, a.6, b.6, a.7, b.7].

_mm_unpackhi_pi16Experimentalx86 and mmx

Unpacks the upper two elements from two i16x4 vectors and interleaves them into the result: [a.2, b.2, a.3, b.3].

_mm_unpackhi_pi32Experimentalx86 and mmx

Unpacks the upper element from two i32x2 vectors and interleaves them into the result: [a.1, b.1].

_mm_unpacklo_pi8Experimentalx86 and mmx

Unpacks the lower four elements from two i8x8 vectors and interleaves them into the result: [a.0, b.0, a.1, b.1, a.2, b.2, a.3, b.3].

_mm_unpacklo_pi16Experimentalx86 and mmx

Unpacks the lower two elements from two i16x4 vectors and interleaves them into the result: [a.0, b.0, a.1, b.1].

_mm_unpacklo_pi32Experimentalx86 and mmx

Unpacks the lower element from two i32x2 vectors and interleaves them into the result: [a.0, b.0].

has_cpuidExperimentalx86

Does the host support the cpuid instruction?

ud2Experimentalx86

Generates the trap instruction UD2.

Type Definitions

__mmask16Experimentalx86

The __mmask16 type used in AVX-512 intrinsics, a 16-bit integer