Bug 305728 - Add support for AVX2, BMI1, BMI2 and FMA instructions
Summary: Add support for AVX2, BMI1, BMI2 and FMA instructions
Status: RESOLVED FIXED
Alias: None
Product: valgrind
Classification: Developer tools
Component: vex (other bugs)
Version First Reported In: 3.9.0.SVN
Platform: unspecified Linux
Priority/Severity: NOR normal
Target Milestone: ---
Assignee: Julian Seward
 
Reported: 2012-08-24 16:13 UTC by Jakub Jelinek
Modified: 2014-04-13 15:26 UTC
CC List: 3 users



Attachments
valgrind-avx2-1.patch (76.26 KB, patch), 2012-08-24 16:16 UTC, Jakub Jelinek
valgrind-avx2-2.patch (56.96 KB, patch), 2012-08-27 14:55 UTC, Jakub Jelinek
valgrind-avx2-3.patch (79.95 KB, patch), 2012-08-29 11:35 UTC, Jakub Jelinek
valgrind-avx2-4.patch (626 bytes, patch), 2012-08-30 12:44 UTC, Jakub Jelinek
valgrind-bmi-1.patch (29.41 KB, patch), 2012-08-30 12:46 UTC, Jakub Jelinek
valgrind-bmi-2.patch (22.87 KB, patch), 2012-09-03 14:38 UTC, Jakub Jelinek
valgrind-bmi-3.patch (25.90 KB, patch), 2012-09-04 09:00 UTC, Jakub Jelinek
valgrind-fma-1.patch (127.53 KB, patch), 2012-09-05 12:24 UTC, Jakub Jelinek
valgrind-memcheck-avx2-bmi-fma.patch (7.13 KB, patch), 2012-09-11 08:34 UTC, Jakub Jelinek
valgrind-vmaskmov-load.patch (21.77 KB, patch), 2012-09-12 18:05 UTC, Jakub Jelinek
valgrind-avx2-5.patch (42.26 KB, patch), 2012-09-12 18:07 UTC, Jakub Jelinek
valgrind-avx2-bmi-fma-tests.tar.bz2 (350.87 KB, application/octet-stream), 2012-09-12 19:25 UTC, Jakub Jelinek
valgrind-bmi-4.patch (2.43 KB, patch), 2012-09-13 08:20 UTC, Jakub Jelinek
valgrind-bmi-5.patch (726 bytes, patch), 2012-09-13 16:26 UTC, Jakub Jelinek
avx2-prereq.patch (1.23 KB, patch), 2012-09-19 20:29 UTC, Mark Wielaard

Description Jakub Jelinek 2012-08-24 16:13:29 UTC
AVX2 ISA is not supported by valgrind yet.
The AVX2 manual is available from http://software.intel.com/en-us/avx/ and, if you don't have access to hardware, an emulator is available at http://software.intel.com/en-us/articles/intel-software-development-emulator
It would be nice if valgrind could be used instead of that emulator for free software development now, and on real hardware next year.

AVX2 is supported by GCC 4.7 and later; its testsuite can be used as an additional testsuite for the new insn support.

Reproducible: Always
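
For anyone wanting a quick smoke test, here is a minimal AVX2 program (a hedged example of my own, not part of any attached patch); build it with gcc -O2 -mavx2 and run it under valgrind once the decoder support is in:

#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    __m256i a = _mm256_set1_epi32(1);
    __m256i b = _mm256_set1_epi32(2);
    __m256i c = _mm256_add_epi32(a, b);     /* VPADDD ymm, ymm, ymm */
    int out[8];
    _mm256_storeu_si256((__m256i *)out, c);
    printf("%d\n", out[0]);                 /* should print 3 */
    return 0;
}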
Comment 1 Jakub Jelinek 2012-08-24 16:16:58 UTC
Created attachment 73440 [details]
valgrind-avx2-1.patch

The following patch adds support for the following insns:

VPADDB ymm3/m256, ymm2, ymm1 = VEX.NDS.256.66.0F.WIG FC /r
VPADDD ymm3/m256, ymm2, ymm1 = VEX.NDS.256.66.0F.WIG FE /r
VPADDQ ymm3/m256, ymm2, ymm1 = VEX.NDS.256.66.0F.WIG D4 /r
VPADDW ymm3/m256, ymm2, ymm1 = VEX.NDS.256.66.0F.WIG FD /r
VPCMPEQB ymm3/m256, ymm2, ymm1 = VEX.NDS.256.66.0F.WIG 74 /r
VPCMPEQD ymm3/m256, ymm2, ymm1 = VEX.NDS.256.66.0F.WIG 76 /r
VPCMPEQW ymm3/m256, ymm2, ymm1 = VEX.NDS.256.66.0F.WIG 75 /r
VPCMPGTB ymm3/m256, ymm2, ymm1 = VEX.NDS.256.66.0F.WIG 64 /r
VPCMPGTD ymm3/m256, ymm2, ymm1 = VEX.NDS.256.66.0F.WIG 66 /r
VPCMPGTW ymm3/m256, ymm2, ymm1 = VEX.NDS.256.66.0F.WIG 65 /r
VPOR ymm3/m256, ymm2, ymm1 = VEX.NDS.256.66.0F.WIG EB /r
VPSIGNB ymm3/m256, ymm2, ymm1 = VEX.NDS.256.66.0F38.WIG 08 /r
VPSIGND ymm3/m256, ymm2, ymm1 = VEX.NDS.256.66.0F38.WIG 0A /r
VPSIGNW ymm3/m256, ymm2, ymm1 = VEX.NDS.256.66.0F38.WIG 09 /r
VPSLLD imm8, ymm2, ymm1 = VEX.NDD.256.66.0F.WIG 72 /6 ib
VPSLLDQ imm8, ymm2, ymm1 = VEX.NDD.256.66.0F.WIG 73 /7 ib
VPSLLD xmm3/m128, ymm2, ymm1 = VEX.NDS.256.66.0F.WIG F2 /r
VPSLLQ  imm8, ymm2, ymm1 = VEX.NDD.256.66.0F.WIG 73 /6 ib
VPSLLQ xmm3/m128, ymm2, ymm1 = VEX.NDS.256.66.0F.WIG F3 /r
VPSLLW imm8, ymm2, ymm1 = VEX.NDD.256.66.0F.WIG 71 /6 ib
VPSLLW xmm3/m128, ymm2, ymm1 = VEX.NDS.256.66.0F.WIG F1 /r
VPSRAD imm8, ymm2, ymm1 = VEX.NDD.256.66.0F.WIG 72 /4 ib
VPSRAD xmm3/m128, ymm2, ymm1 = VEX.NDS.256.66.0F.WIG E2 /r
VPSRAW imm8, ymm2, ymm1 = VEX.NDD.256.66.0F.WIG 71 /4 ib
VPSRAW xmm3/m128, ymm2, ymm1 = VEX.NDS.256.66.0F.WIG E1 /r
VPSRLD imm8, ymm2, ymm1 = VEX.NDD.256.66.0F.WIG 72 /2 ib
VPSRLDQ imm8, ymm2, ymm1 = VEX.NDD.256.66.0F.WIG 73 /3 ib
VPSRLD xmm3/m128, ymm2, ymm1 = VEX.NDS.256.66.0F.WIG D2 /r
VPSRLQ  imm8, ymm2, ymm1 = VEX.NDD.256.66.0F.WIG 73 /2 ib
VPSRLQ xmm3/m128, ymm2, ymm1 = VEX.NDS.256.66.0F.WIG D3 /r
VPSRLW imm8, ymm2, ymm1 = VEX.NDD.256.66.0F.WIG 71 /2 ib
VPSRLW xmm3/m128, ymm2, ymm1 = VEX.NDS.256.66.0F.WIG D1 /r
VPSUBB ymm3/m256, ymm2, ymm1 = VEX.NDS.256.66.0F.WIG F8 /r
VPSUBD ymm3/m256, ymm2, ymm1 = VEX.NDS.256.66.0F.WIG FA /r
VPSUBQ ymm3/m256, ymm2, ymm1 = VEX.NDS.256.66.0F.WIG FB /r
VPSUBW ymm3/m256, ymm2, ymm1 = VEX.NDS.256.66.0F.WIG F9 /r
VPXOR ymm3/m256, ymm2, ymm1 = VEX.NDS.256.66.0F.WIG EF /r

For the testsuite I've taken a slightly different approach: I've tried to add almost all AVX2 insns to the testcase right away (there is a FIXME for VPMASK*/V*GATHER*), and just commented out the tests for insns that valgrind doesn't support yet.

BTW, VPCMP{EQ,GT}Q are partially supported by the patch (supported on the guest side, but not yet on the host side).

As time permits, I'd like to add follow-up patches with support for further insns.
Comment 2 Jakub Jelinek 2012-08-27 14:55:04 UTC
Created attachment 73506 [details]
valgrind-avx2-2.patch

An incremental patch on top of the previous one; it adds support for 44 AVX2 insns:
VPABSB ymm2/m256, ymm1 = VEX.256.66.0F38.WIG 1C /r
VPABSD ymm2/m256, ymm1 = VEX.256.66.0F38.WIG 1E /r
VPABSW ymm2/m256, ymm1 = VEX.256.66.0F38.WIG 1D /r
VPADDSB ymm3/m256, ymm2, ymm1 = VEX.NDS.256.66.0F.WIG EC /r
VPADDSW ymm3/m256, ymm2, ymm1 = VEX.NDS.256.66.0F.WIG ED /r
VPADDUSB ymm3/m256, ymm2, ymm1 = VEX.NDS.256.66.0F.WIG DC /r
VPADDUSW ymm3/m256, ymm2, ymm1 = VEX.NDS.256.66.0F.WIG DD /r
VPCMPEQQ ymm3/m256, ymm2, ymm1 = VEX.NDS.256.66.0F38.WIG 29 /r
VPCMPGTQ ymm3/m256, ymm2, ymm1 = VEX.NDS.256.66.0F38.WIG 37 /r
VPMAXSB ymm3/m256, ymm2, ymm1 = VEX.NDS.256.66.0F38.WIG 3C /r
VPMAXSD ymm3/m256, ymm2, ymm1 = VEX.NDS.256.66.0F38.WIG 3D /r
VPMAXSW ymm3/m256, ymm2, ymm1 = VEX.NDS.256.66.0F.WIG EE /r
VPMAXUB ymm3/m256, ymm2, ymm1 = VEX.NDS.256.66.0F.WIG DE /r
VPMAXUD ymm3/m256, ymm2, ymm1 = VEX.NDS.256.66.0F38.WIG 3F /r
VPMAXUW ymm3/m256, ymm2, ymm1 = VEX.NDS.256.66.0F38.WIG 3E /r
VPMINSB ymm3/m256, ymm2, ymm1 = VEX.NDS.256.66.0F38.WIG 38 /r
VPMINSD ymm3/m256, ymm2, ymm1 = VEX.NDS.256.66.0F38.WIG 39 /r
VPMINSW ymm3/m256, ymm2, ymm1 = VEX.NDS.256.66.0F.WIG EA /r
VPMINUB ymm3/m256, ymm2, ymm1 = VEX.NDS.256.66.0F.WIG DA /r
VPMINUD ymm3/m256, ymm2, ymm1 = VEX.NDS.256.66.0F38.WIG 3B /r
VPMINUW ymm3/m256, ymm2, ymm1 = VEX.NDS.256.66.0F38.WIG 3A /r
VPMOVSXBD xmm2/m64, ymm1 = VEX.256.66.0F38.WIG 21 /r
VPMOVSXBQ xmm2/m32, ymm1 = VEX.256.66.0F38.WIG 22 /r
VPMOVSXBW xmm2/m128, ymm1 = VEX.256.66.0F38.WIG 20 /r
VPMOVSXDQ xmm2/m128, ymm1 = VEX.256.66.0F38.WIG 25 /r
VPMOVSXWD xmm2/m128, ymm1 = VEX.256.66.0F38.WIG 23 /r
VPMOVSXWQ xmm2/m64, ymm1 = VEX.256.66.0F38.WIG 24 /r
VPMOVZXBD xmm2/m64, ymm1 = VEX.256.66.0F38.WIG 31 /r
VPMOVZXBQ xmm2/m32, ymm1 = VEX.256.66.0F38.WIG 32 /r
VPMOVZXBW xmm2/m128, ymm1 = VEX.256.66.0F38.WIG 30 /r
VPMOVZXDQ xmm2/m128, ymm1 = VEX.256.66.0F38.WIG 35 /r
VPMOVZXWD xmm2/m128, ymm1 = VEX.256.66.0F38.WIG 33 /r
VPMOVZXWQ xmm2/m64, ymm1 = VEX.256.66.0F38.WIG 34 /r
VPMULDQ ymm3/m256, ymm2, ymm1 = VEX.NDS.256.66.0F38.WIG 28 /r
VPMULHRSW ymm3/m256, ymm2, ymm1 = VEX.NDS.256.66.0F38.WIG 0B /r
VPMULHUW ymm3/m256, ymm2, ymm1 = VEX.NDS.256.66.0F.WIG E4 /r
VPMULHW ymm3/m256, ymm2, ymm1 = VEX.NDS.256.66.0F.WIG E5 /r
VPMULLD ymm3/m256, ymm2, ymm1 = VEX.NDS.256.66.0F38.WIG 40 /r
VPMULLW ymm3/m256, ymm2, ymm1 = VEX.NDS.256.66.0F.WIG D5 /r
VPMULUDQ ymm3/m256, ymm2, ymm1 = VEX.NDS.256.66.0F.WIG F4 /r
VPSUBSB ymm3/m256, ymm2, ymm1 = VEX.NDS.256.66.0F.WIG E8 /r
VPSUBSW ymm3/m256, ymm2, ymm1 = VEX.NDS.256.66.0F.WIG E9 /r
VPSUBUSB ymm3/m256, ymm2, ymm1 = VEX.NDS.256.66.0F.WIG D8 /r
VPSUBUSW ymm3/m256, ymm2, ymm1 = VEX.NDS.256.66.0F.WIG D9 /r
Comment 3 Jakub Jelinek 2012-08-29 11:35:08 UTC
Created attachment 73544 [details]
valgrind-avx2-3.patch

And this patch adds the rest of the AVX2 support, except for the unimplemented VPMASKMOV*/V*GATHER*, which is waiting for an AVX implementation of VMASKMOV*.  I guess we need operations for conditional load and conditional store, because e.g. memcheck needs to handle them correctly: it must not report errors for masked-off loads/stores, but on the other hand it should still check the non-masked-off ones.
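
For illustration, a scalar sketch (my own example, not VEX IR or patch code) of the masked load/store semantics memcheck has to respect; 4 x 32-bit lanes assumed:

#include <stdint.h>

/* Masked load: disabled lanes are never read and their destination lane is zeroed. */
static void maskmov_load32x4(uint32_t dst[4], const uint32_t mask[4], const uint32_t *mem)
{
    for (int i = 0; i < 4; i++)
        dst[i] = (mask[i] & 0x80000000u) ? mem[i] : 0;
}

/* Masked store: disabled lanes must leave memory (and its definedness) untouched. */
static void maskmov_store32x4(uint32_t *mem, const uint32_t mask[4], const uint32_t src[4])
{
    for (int i = 0; i < 4; i++)
        if (mask[i] & 0x80000000u)
            mem[i] = src[i];
}

An IR-level conditional load/store would let memcheck perform exactly these accesses instead of flagging the masked-off lanes.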
Comment 4 Jakub Jelinek 2012-08-30 12:44:33 UTC
Created attachment 73559 [details]
valgrind-avx2-4.patch

In the last patch I missed a hunk for Makefile*.am; it's attached now.
Comment 5 Jakub Jelinek 2012-08-30 12:46:53 UTC
Created attachment 73560 [details]
valgrind-bmi-1.patch

This patch adds some of the easier BMI1 and BMI2 instructions:

ANDN r/m32, r32b, r32a = VEX.NDS.LZ.0F38.W0 F2 /r
ANDN r/m64, r64b, r64a = VEX.NDS.LZ.0F38.W1 F2 /r
MULX r/m32, r32b, r32a = VEX.NDD.LZ.F2.0F38.W0 F6 /r
MULX r/m64, r64b, r64a = VEX.NDD.LZ.F2.0F38.W1 F6 /r
RORX imm8, r/m32, r32a = VEX.LZ.F2.0F3A.W0 F0 /r /i
RORX imm8, r/m64, r64a = VEX.LZ.F2.0F3A.W1 F0 /r /i
SARX r32b, r/m32, r32a = VEX.NDS.LZ.F3.0F38.W0 F7 /r
SARX r64b, r/m64, r64a = VEX.NDS.LZ.F3.0F38.W1 F7 /r
SHLX r32b, r/m32, r32a = VEX.NDS.LZ.66.0F38.W0 F7 /r
SHLX r64b, r/m64, r64a = VEX.NDS.LZ.66.0F38.W1 F7 /r
SHRX r32b, r/m32, r32a = VEX.NDS.LZ.F2.0F38.W0 F7 /r
SHRX r64b, r/m64, r64a = VEX.NDS.LZ.F2.0F38.W1 F7 /r

Note that for ANDN the result differs in the PF flag from the Intel PIN emulator (but the documentation says that the AF and PF flags are undefined after the insn).
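
For reference, scalar sketches of a few of these (my own illustrations, not patch code; 32-bit variants, operand mapping as in the Intel manual):

#include <stdint.h>

static uint32_t andn32(uint32_t src1, uint32_t src2)
{
    return ~src1 & src2;               /* sets SF/ZF, clears CF/OF; AF/PF undefined */
}

static uint32_t rorx32(uint32_t x, unsigned imm8)
{
    unsigned r = imm8 & 31;            /* rotate right by immediate, no flags written */
    return (x >> r) | (x << ((32 - r) & 31));
}

static void mulx32(uint32_t edx, uint32_t src, uint32_t *hi, uint32_t *lo)
{
    uint64_t p = (uint64_t)edx * src;  /* unsigned widening multiply, no flags written */
    *hi = (uint32_t)(p >> 32);
    *lo = (uint32_t)p;
}

static uint32_t shrx32(uint32_t x, uint32_t cnt) { return x >> (cnt & 31); }  /* no flags */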
Comment 6 Jakub Jelinek 2012-09-03 14:38:38 UTC
Created attachment 73625 [details]
valgrind-bmi-2.patch

This patch adds support for the following BMI1 instructions:
BLSI r/m32, r32 = VEX.NDD.LZ.0F38.W0 F3 /3
BLSI r/m64, r64 = VEX.NDD.LZ.0F38.W1 F3 /3
BLSMSK r/m32, r32 = VEX.NDD.LZ.0F38.W0 F3 /2
BLSMSK r/m64, r64 = VEX.NDD.LZ.0F38.W1 F3 /2
BLSR r/m32, r32 = VEX.NDD.LZ.0F38.W0 F3 /1
BLSR r/m64, r64 = VEX.NDD.LZ.0F38.W1 F3 /1
BEXTR r32b, r/m32, r32a = VEX.NDS.LZ.0F38.W0 F7 /r
BEXTR r64b, r/m64, r64a = VEX.NDS.LZ.0F38.W1 F7 /r
and changes ANDN flags handling to match the hardware.
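
For reference, the scalar semantics of these four (my own sketch, 32-bit variants; the BEXTR control operand packs the start position in bits 7:0 and the length in bits 15:8):

#include <stdint.h>

static uint32_t blsi32(uint32_t x)   { return x & (0u - x); }  /* isolate lowest set bit */
static uint32_t blsmsk32(uint32_t x) { return x ^ (x - 1); }   /* mask up to and including lowest set bit */
static uint32_t blsr32(uint32_t x)   { return x & (x - 1); }   /* clear lowest set bit */

static uint32_t bextr32(uint32_t src, uint32_t ctl)
{
    unsigned start = ctl & 0xff, len = (ctl >> 8) & 0xff;
    if (start >= 32)
        return 0;
    uint64_t field = (uint64_t)src >> start;                   /* widen to avoid shift UB */
    uint64_t mask  = (len >= 64) ? ~0ULL : ((1ULL << len) - 1);
    return (uint32_t)(field & mask);
}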
Comment 7 Jakub Jelinek 2012-09-04 09:00:00 UTC
Created attachment 73643 [details]
valgrind-bmi-3.patch

Rest of BMI1/BMI2 support.  This adds:
BZHI r32b, r/m32, r32a = VEX.NDS.LZ.0F38.W0 F5 /r
BZHI r64b, r/m64, r64a = VEX.NDS.LZ.0F38.W1 F5 /r
PDEP r/m32, r32b, r32a = VEX.NDS.LZ.F2.0F38.W0 F5 /r
PDEP r/m64, r64b, r64a = VEX.NDS.LZ.F2.0F38.W1 F5 /r
PEXT r/m32, r32b, r32a = VEX.NDS.LZ.F3.0F38.W0 F5 /r
PEXT r/m64, r64b, r64a = VEX.NDS.LZ.F3.0F38.W1 F5 /r
TZCNT r/m16, r16 = F3 0F BC /r
TZCNT r/m32, r32 = F3 0F BC /r
TZCNT r/m64, r64 = REX.W + F3 0F BC /r
insns; TZCNT is decoded only if BMI1 is claimed in the hwcaps flags.  I'm using separate capability bits for BMI1 and AVX2, because BMI1 is implemented in some AMD CPUs that don't support AVX2 (bdver2, btver2), while AFAIK only Haswell/Broadwell and later Intel CPUs support AVX2/BMI1/BMI2/LZCNT.
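
PDEP and PEXT are the least obvious of these; a scalar reference of their bit scatter/gather semantics (my own sketch, 64-bit variants, not the emulation in the patch):

#include <stdint.h>

/* PDEP: deposit the low bits of src at the positions of the set bits of mask. */
static uint64_t pdep64(uint64_t src, uint64_t mask)
{
    uint64_t res = 0;
    unsigned k = 0;
    for (unsigned i = 0; i < 64; i++)
        if (mask & (1ULL << i))
            res |= ((src >> k++) & 1ULL) << i;
    return res;
}

/* PEXT: extract the bits of src selected by mask and pack them into the low bits. */
static uint64_t pext64(uint64_t src, uint64_t mask)
{
    uint64_t res = 0;
    unsigned k = 0;
    for (unsigned i = 0; i < 64; i++)
        if (mask & (1ULL << i))
            res |= ((src >> i) & 1ULL) << k++;
    return res;
}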
Comment 8 Jakub Jelinek 2012-09-05 12:24:20 UTC
Created attachment 73668 [details]
valgrind-fma-1.patch

FMA support:
VFMADDSUB132PS xmm3/m128, xmm2, xmm1 = VEX.DDS.128.66.0F38.W0 96 /r
VFMADDSUB132PS ymm3/m256, ymm2, ymm1 = VEX.DDS.256.66.0F38.W0 96 /r
VFMADDSUB132PD xmm3/m128, xmm2, xmm1 = VEX.DDS.128.66.0F38.W1 96 /r
VFMADDSUB132PD ymm3/m256, ymm2, ymm1 = VEX.DDS.256.66.0F38.W1 96 /r
VFMSUBADD132PS xmm3/m128, xmm2, xmm1 = VEX.DDS.128.66.0F38.W0 97 /r
VFMSUBADD132PS ymm3/m256, ymm2, ymm1 = VEX.DDS.256.66.0F38.W0 97 /r
VFMSUBADD132PD xmm3/m128, xmm2, xmm1 = VEX.DDS.128.66.0F38.W1 97 /r
VFMSUBADD132PD ymm3/m256, ymm2, ymm1 = VEX.DDS.256.66.0F38.W1 97 /r
VFMADD132PS xmm3/m128, xmm2, xmm1 = VEX.DDS.128.66.0F38.W0 98 /r
VFMADD132PS ymm3/m256, ymm2, ymm1 = VEX.DDS.256.66.0F38.W0 98 /r
VFMADD132PD xmm3/m128, xmm2, xmm1 = VEX.DDS.128.66.0F38.W1 98 /r
VFMADD132PD ymm3/m256, ymm2, ymm1 = VEX.DDS.256.66.0F38.W1 98 /r
VFMADD132SS xmm3/m32, xmm2, xmm1 = VEX.DDS.LIG.66.0F38.W0 99 /r
VFMADD132SD xmm3/m64, xmm2, xmm1 = VEX.DDS.LIG.66.0F38.W1 99 /r
VFMSUB132PS xmm3/m128, xmm2, xmm1 = VEX.DDS.128.66.0F38.W0 9A /r
VFMSUB132PS ymm3/m256, ymm2, ymm1 = VEX.DDS.256.66.0F38.W0 9A /r
VFMSUB132PD xmm3/m128, xmm2, xmm1 = VEX.DDS.128.66.0F38.W1 9A /r
VFMSUB132PD ymm3/m256, ymm2, ymm1 = VEX.DDS.256.66.0F38.W1 9A /r
VFMSUB132SS xmm3/m32, xmm2, xmm1 = VEX.DDS.LIG.66.0F38.W0 9B /r
VFMSUB132SD xmm3/m64, xmm2, xmm1 = VEX.DDS.LIG.66.0F38.W1 9B /r
VFNMADD132PS xmm3/m128, xmm2, xmm1 = VEX.DDS.128.66.0F38.W0 9C /r
VFNMADD132PS ymm3/m256, ymm2, ymm1 = VEX.DDS.256.66.0F38.W0 9C /r
VFNMADD132PD xmm3/m128, xmm2, xmm1 = VEX.DDS.128.66.0F38.W1 9C /r
VFNMADD132PD ymm3/m256, ymm2, ymm1 = VEX.DDS.256.66.0F38.W1 9C /r
VFNMADD132SS xmm3/m32, xmm2, xmm1 = VEX.DDS.LIG.66.0F38.W0 9D /r
VFNMADD132SD xmm3/m64, xmm2, xmm1 = VEX.DDS.LIG.66.0F38.W1 9D /r
VFNMSUB132PS xmm3/m128, xmm2, xmm1 = VEX.DDS.128.66.0F38.W0 9E /r
VFNMSUB132PS ymm3/m256, ymm2, ymm1 = VEX.DDS.256.66.0F38.W0 9E /r
VFNMSUB132PD xmm3/m128, xmm2, xmm1 = VEX.DDS.128.66.0F38.W1 9E /r
VFNMSUB132PD ymm3/m256, ymm2, ymm1 = VEX.DDS.256.66.0F38.W1 9E /r
VFNMSUB132SS xmm3/m32, xmm2, xmm1 = VEX.DDS.LIG.66.0F38.W0 9F /r
VFNMSUB132SD xmm3/m64, xmm2, xmm1 = VEX.DDS.LIG.66.0F38.W1 9F /r
VFMADDSUB213PS xmm3/m128, xmm2, xmm1 = VEX.DDS.128.66.0F38.W0 A6 /r
VFMADDSUB213PS ymm3/m256, ymm2, ymm1 = VEX.DDS.256.66.0F38.W0 A6 /r
VFMADDSUB213PD xmm3/m128, xmm2, xmm1 = VEX.DDS.128.66.0F38.W1 A6 /r
VFMADDSUB213PD ymm3/m256, ymm2, ymm1 = VEX.DDS.256.66.0F38.W1 A6 /r
VFMSUBADD213PS xmm3/m128, xmm2, xmm1 = VEX.DDS.128.66.0F38.W0 A7 /r
VFMSUBADD213PS ymm3/m256, ymm2, ymm1 = VEX.DDS.256.66.0F38.W0 A7 /r
VFMSUBADD213PD xmm3/m128, xmm2, xmm1 = VEX.DDS.128.66.0F38.W1 A7 /r
VFMSUBADD213PD ymm3/m256, ymm2, ymm1 = VEX.DDS.256.66.0F38.W1 A7 /r
VFMADD213PS xmm3/m128, xmm2, xmm1 = VEX.DDS.128.66.0F38.W0 A8 /r
VFMADD213PS ymm3/m256, ymm2, ymm1 = VEX.DDS.256.66.0F38.W0 A8 /r
VFMADD213PD xmm3/m128, xmm2, xmm1 = VEX.DDS.128.66.0F38.W1 A8 /r
VFMADD213PD ymm3/m256, ymm2, ymm1 = VEX.DDS.256.66.0F38.W1 A8 /r
VFMADD213SS xmm3/m32, xmm2, xmm1 = VEX.DDS.LIG.66.0F38.W0 A9 /r
VFMADD213SD xmm3/m64, xmm2, xmm1 = VEX.DDS.LIG.66.0F38.W1 A9 /r
VFMSUB213PS xmm3/m128, xmm2, xmm1 = VEX.DDS.128.66.0F38.W0 AA /r
VFMSUB213PS ymm3/m256, ymm2, ymm1 = VEX.DDS.256.66.0F38.W0 AA /r
VFMSUB213PD xmm3/m128, xmm2, xmm1 = VEX.DDS.128.66.0F38.W1 AA /r
VFMSUB213PD ymm3/m256, ymm2, ymm1 = VEX.DDS.256.66.0F38.W1 AA /r
VFMSUB213SS xmm3/m32, xmm2, xmm1 = VEX.DDS.LIG.66.0F38.W0 AB /r
VFMSUB213SD xmm3/m64, xmm2, xmm1 = VEX.DDS.LIG.66.0F38.W1 AB /r
VFNMADD213PS xmm3/m128, xmm2, xmm1 = VEX.DDS.128.66.0F38.W0 AC /r
VFNMADD213PS ymm3/m256, ymm2, ymm1 = VEX.DDS.256.66.0F38.W0 AC /r
VFNMADD213PD xmm3/m128, xmm2, xmm1 = VEX.DDS.128.66.0F38.W1 AC /r
VFNMADD213PD ymm3/m256, ymm2, ymm1 = VEX.DDS.256.66.0F38.W1 AC /r
VFNMADD213SS xmm3/m32, xmm2, xmm1 = VEX.DDS.LIG.66.0F38.W0 AD /r
VFNMADD213SD xmm3/m64, xmm2, xmm1 = VEX.DDS.LIG.66.0F38.W1 AD /r
VFNMSUB213PS xmm3/m128, xmm2, xmm1 = VEX.DDS.128.66.0F38.W0 AE /r
VFNMSUB213PS ymm3/m256, ymm2, ymm1 = VEX.DDS.256.66.0F38.W0 AE /r
VFNMSUB213PD xmm3/m128, xmm2, xmm1 = VEX.DDS.128.66.0F38.W1 AE /r
VFNMSUB213PD ymm3/m256, ymm2, ymm1 = VEX.DDS.256.66.0F38.W1 AE /r
VFNMSUB213SS xmm3/m32, xmm2, xmm1 = VEX.DDS.LIG.66.0F38.W0 AF /r
VFNMSUB213SD xmm3/m64, xmm2, xmm1 = VEX.DDS.LIG.66.0F38.W1 AF /r
VFMADDSUB231PS xmm3/m128, xmm2, xmm1 = VEX.DDS.128.66.0F38.W0 B6 /r
VFMADDSUB231PS ymm3/m256, ymm2, ymm1 = VEX.DDS.256.66.0F38.W0 B6 /r
VFMADDSUB231PD xmm3/m128, xmm2, xmm1 = VEX.DDS.128.66.0F38.W1 B6 /r
VFMADDSUB231PD ymm3/m256, ymm2, ymm1 = VEX.DDS.256.66.0F38.W1 B6 /r
VFMSUBADD231PS xmm3/m128, xmm2, xmm1 = VEX.DDS.128.66.0F38.W0 B7 /r
VFMSUBADD231PS ymm3/m256, ymm2, ymm1 = VEX.DDS.256.66.0F38.W0 B7 /r
VFMSUBADD231PD xmm3/m128, xmm2, xmm1 = VEX.DDS.128.66.0F38.W1 B7 /r
VFMSUBADD231PD ymm3/m256, ymm2, ymm1 = VEX.DDS.256.66.0F38.W1 B7 /r
VFMADD231PS xmm3/m128, xmm2, xmm1 = VEX.DDS.128.66.0F38.W0 B8 /r
VFMADD231PS ymm3/m256, ymm2, ymm1 = VEX.DDS.256.66.0F38.W0 B8 /r
VFMADD231PD xmm3/m128, xmm2, xmm1 = VEX.DDS.128.66.0F38.W1 B8 /r
VFMADD231PD ymm3/m256, ymm2, ymm1 = VEX.DDS.256.66.0F38.W1 B8 /r
VFMADD231SS xmm3/m32, xmm2, xmm1 = VEX.DDS.LIG.66.0F38.W0 B9 /r
VFMADD231SD xmm3/m64, xmm2, xmm1 = VEX.DDS.LIG.66.0F38.W1 B9 /r
VFMSUB231PS xmm3/m128, xmm2, xmm1 = VEX.DDS.128.66.0F38.W0 BA /r
VFMSUB231PS ymm3/m256, ymm2, ymm1 = VEX.DDS.256.66.0F38.W0 BA /r
VFMSUB231PD xmm3/m128, xmm2, xmm1 = VEX.DDS.128.66.0F38.W1 BA /r
VFMSUB231PD ymm3/m256, ymm2, ymm1 = VEX.DDS.256.66.0F38.W1 BA /r
VFMSUB231SS xmm3/m32, xmm2, xmm1 = VEX.DDS.LIG.66.0F38.W0 BB /r
VFMSUB231SD xmm3/m64, xmm2, xmm1 = VEX.DDS.LIG.66.0F38.W1 BB /r
VFNMADD231PS xmm3/m128, xmm2, xmm1 = VEX.DDS.128.66.0F38.W0 BC /r
VFNMADD231PS ymm3/m256, ymm2, ymm1 = VEX.DDS.256.66.0F38.W0 BC /r
VFNMADD231PD xmm3/m128, xmm2, xmm1 = VEX.DDS.128.66.0F38.W1 BC /r
VFNMADD231PD ymm3/m256, ymm2, ymm1 = VEX.DDS.256.66.0F38.W1 BC /r
VFNMADD231SS xmm3/m32, xmm2, xmm1 = VEX.DDS.LIG.66.0F38.W0 BD /r
VFNMADD231SD xmm3/m64, xmm2, xmm1 = VEX.DDS.LIG.66.0F38.W1 BD /r
VFNMSUB231PS xmm3/m128, xmm2, xmm1 = VEX.DDS.128.66.0F38.W0 BE /r
VFNMSUB231PS ymm3/m256, ymm2, ymm1 = VEX.DDS.256.66.0F38.W0 BE /r
VFNMSUB231PD xmm3/m128, xmm2, xmm1 = VEX.DDS.128.66.0F38.W1 BE /r
VFNMSUB231PD ymm3/m256, ymm2, ymm1 = VEX.DDS.256.66.0F38.W1 BE /r
VFNMSUB231SS xmm3/m32, xmm2, xmm1 = VEX.DDS.LIG.66.0F38.W0 BF /r
VFNMSUB231SD xmm3/m64, xmm2, xmm1 = VEX.DDS.LIG.66.0F38.W1 BF /r

Tested on HW as well as under a patched valgrind --tool=none.  In both cases the testcase should just print:
Testing successful
and nothing else.
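
The only tricky part of the FMA naming is the operand ordering encoded in the 132/213/231 suffix; a scalar sketch of my own (using fma() from math.h so the single rounding is preserved), where dst is the destination register, src2 the vvvv operand and src3 the r/m operand:

#include <math.h>

static double fmadd132(double dst, double src2, double src3) { return fma(dst,  src3, src2); } /* dst*src3 + src2 */
static double fmadd213(double dst, double src2, double src3) { return fma(src2, dst,  src3); } /* src2*dst + src3 */
static double fmadd231(double dst, double src2, double src3) { return fma(src2, src3, dst);  } /* src2*src3 + dst */

The FMSUB/FNMADD/FNMSUB forms negate the product and/or the addend accordingly, and the ADDSUB/SUBADD packed forms alternate subtract and add across even/odd lanes.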
Comment 9 Jakub Jelinek 2012-09-11 08:34:33 UTC
Created attachment 73809 [details]
valgrind-memcheck-avx2-bmi-fma.patch

--tool=memcheck support for all the previous patches together (before this only --tool=none worked).
Comment 10 Jakub Jelinek 2012-09-12 18:05:17 UTC
Created attachment 73865 [details]
valgrind-vmaskmov-load.patch

I've realized that VMASKMOV* loads are actually implementable quite easily even without extending the IR; only the VMASKMOV* stores are a real problem (Julian, could you please look into that eventually?).
So, this patch implements some missing AVX instructions:

VMASKMOVPS m128, xmm2, xmm1 = VEX.NDS.128.66.0F38.WIG 2C /r
VMASKMOVPS m256, ymm2, ymm1 = VEX.NDS.256.66.0F38.WIG 2C /r
VMASKMOVPD m128, xmm2, xmm1 = VEX.NDS.128.66.0F38.WIG 2D /r
VMASKMOVPD m256, ymm2, ymm1 = VEX.NDS.256.66.0F38.WIG 2D /r
Comment 11 Jakub Jelinek 2012-09-12 18:07:01 UTC
Created attachment 73866 [details]
valgrind-avx2-5.patch

And this patch similarly implements VPMASKMOV* loads and gather loads, plus, as I've just noticed, the previously missing VPBROADCAST* support (a scalar sketch of the gather semantics follows the list below).

VPBROADCASTB xmm2/m8, xmm1 = VEX.128.66.0F38.W0 78 /r
VPBROADCASTB xmm2/m8, ymm1 = VEX.256.66.0F38.W0 78 /r
VPBROADCASTW xmm2/m16, xmm1 = VEX.128.66.0F38.W0 79 /r
VPBROADCASTW xmm2/m16, ymm1 = VEX.256.66.0F38.W0 79 /r
VPBROADCASTD xmm2/m32, xmm1 = VEX.128.66.0F38.W0 58 /r
VPBROADCASTD xmm2/m32, ymm1 = VEX.256.66.0F38.W0 58 /r
VPBROADCASTQ xmm2/m64, xmm1 = VEX.128.66.0F38.W0 59 /r
VPBROADCASTQ xmm2/m64, ymm1 = VEX.256.66.0F38.W0 59 /r
VPMASKMOVD m128, xmm2, xmm1 = VEX.NDS.128.66.0F38.W0 8C /r
VPMASKMOVD m256, ymm2, ymm1 = VEX.NDS.256.66.0F38.W0 8C /r
VPMASKMOVQ m128, xmm2, xmm1 = VEX.NDS.128.66.0F38.W1 8C /r
VPMASKMOVQ m256, ymm2, ymm1 = VEX.NDS.256.66.0F38.W1 8C /r
VPGATHERDD xmm2, vm32x, xmm1 = VEX.DDS.128.66.0F38.W0 90 /r
VPGATHERDD ymm2, vm32y, ymm1 = VEX.DDS.256.66.0F38.W0 90 /r
VPGATHERDQ xmm2, vm32x, xmm1 = VEX.DDS.128.66.0F38.W1 90 /r
VPGATHERDQ ymm2, vm32x, ymm1 = VEX.DDS.256.66.0F38.W1 90 /r
VPGATHERQD xmm2, vm64x, xmm1 = VEX.DDS.128.66.0F38.W0 91 /r
VPGATHERQD xmm2, vm64y, xmm1 = VEX.DDS.256.66.0F38.W0 91 /r
VPGATHERQQ xmm2, vm64x, xmm1 = VEX.DDS.128.66.0F38.W1 91 /r
VPGATHERQQ ymm2, vm64y, ymm1 = VEX.DDS.256.66.0F38.W1 91 /r
VGATHERDPS xmm2, vm32x, xmm1 = VEX.DDS.128.66.0F38.W0 92 /r
VGATHERDPS ymm2, vm32y, ymm1 = VEX.DDS.256.66.0F38.W0 92 /r
VGATHERDPD xmm2, vm32x, xmm1 = VEX.DDS.128.66.0F38.W1 92 /r
VGATHERDPD ymm2, vm32x, ymm1 = VEX.DDS.256.66.0F38.W1 92 /r
VGATHERQPS xmm2, vm64x, xmm1 = VEX.DDS.128.66.0F38.W0 93 /r
VGATHERQPS xmm2, vm64y, xmm1 = VEX.DDS.256.66.0F38.W0 93 /r
VGATHERQPD xmm2, vm64x, xmm1 = VEX.DDS.128.66.0F38.W1 93 /r
VGATHERQPD ymm2, vm64y, ymm1 = VEX.DDS.256.66.0F38.W1 93 /r
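
The gathers are the ones with interesting side effects: each lane is loaded only if the MSB of its mask lane is set, and the mask register is cleared as the gather completes, which is exactly what memcheck has to model. A scalar sketch of VPGATHERDD ymm (my own illustration, not patch code):

#include <stdint.h>

static void vpgatherdd256(uint32_t dst[8], const int32_t idx[8],
                          uint32_t mask[8], const uint8_t *base, int scale)
{
    for (int i = 0; i < 8; i++) {
        if (mask[i] & 0x80000000u)                  /* only the lane's MSB matters */
            dst[i] = *(const uint32_t *)(base + (int64_t)idx[i] * scale);
        /* lanes with a clear MSB leave dst[i] unchanged and are never read */
        mask[i] = 0;                                /* the mask register ends up all zero */
    }
}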
Comment 12 Mark Wielaard 2012-09-12 18:21:29 UTC
In valgrind-fma-1.patch this hunk:

--- valgrind/none/tests/amd64/Makefile.am.jj	2012-08-30 12:54:03.000000000 +0200
+++ valgrind/none/tests/amd64/Makefile.am	2012-09-05 13:35:08.195742680 +0200
@@ -117,6 +117,8 @@ endif
 if BUILD_BMI_TESTS
  check_PROGRAMS += bmi
 endif
+if BUILD_FMA_TESTS
+ check_PROGRAMS += fma
 if BUILD_MOVBE_TESTS
  check_PROGRAMS += movbe
 endif

is missing an endif.
Comment 13 Mark Wielaard 2012-09-12 18:58:40 UTC
In valgrind-avx2-1.patch it looks like avx2-1.vgtest avx2-1.stdout.exp avx2-1.stderr.exp are missing.

In valgrind-bmi-1.patch it looks like bmi.stderr.exp bmi.stdout.exp bmi.vgtest are missing.
Comment 14 Jakub Jelinek 2012-09-12 19:25:16 UTC
Created attachment 73868 [details]
valgrind-avx2-bmi-fma-tests.tar.bz2

I wasn't adding them because the tests were still in flux.
Here are the current expected files corresponding to the patches posted up to today, though so far I have only tested them manually (which is why the missing endif in the Makefile.am above slipped through).
I'm not 100% sure what exactly vgtest does: does it run the test only under valgrind, or also on the real CPU?  The thing is, valgrind should support all those insns even when running on a merely AVX-capable CPU, so if the test is only run under valgrind, they should pass as is.  If they are also run on hw, then ../../../tests/x86_amd64_features will need to be updated to also check for the avx2, bmi and fma bits (note that for those it needs to pass rcx = 0 to CPUID, not just the right rax).
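
For reference, a hardware check for those bits would look roughly like this (my own sketch using GCC's <cpuid.h> macros, not the x86_amd64_features code): AVX2/BMI1/BMI2 live in leaf 7 subleaf 0 EBX, FMA in leaf 1 ECX.

#include <cpuid.h>
#include <stdio.h>

int main(void)
{
    unsigned a, b, c, d;
    __cpuid_count(7, 0, a, b, c, d);                 /* leaf 7, ECX (subleaf) = 0 */
    printf("bmi1 %u avx2 %u bmi2 %u\n",
           (b >> 3) & 1, (b >> 5) & 1, (b >> 8) & 1);
    __cpuid(1, a, b, c, d);
    printf("fma %u\n", (c >> 12) & 1);
    return 0;
}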
Comment 15 Jakub Jelinek 2012-09-13 08:20:58 UTC
Created attachment 73881 [details]
valgrind-bmi-4.patch

Testing with the gcc testsuite (make check-gcc RUNTESTFLAGS='--target_board=valgrind-sim/-m64 i386.exp')
revealed a bug in the BZHI emulation for start == 0.  Fixed thusly.
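
For reference, BZHI's scalar semantics with the start == 0 edge case made explicit (my own sketch; the actual fix is in the attached patch, which is not reproduced here):

#include <stdint.h>

static uint64_t bzhi64(uint64_t src, uint64_t index)
{
    unsigned n = index & 0xff;              /* only bits 7:0 of the index are used */
    if (n >= 64)
        return src;                         /* keep everything */
    if (n == 0)
        return 0;                           /* a naive "~0 >> (64 - n)" would shift by 64 here */
    return src & (~0ULL >> (64 - n));       /* keep bits [n-1:0], clear the rest */
}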
Comment 16 Mark Wielaard 2012-09-13 15:00:33 UTC
The configure check in valgrind-bmi-1.patch seems broken because there is an early } after the first asm statement and the operands to mulx seem wrong. Something like this seems to work:

--- valgrind-3.8.0.new/configure.in     2012-09-13 14:53:45.826948006 +0200
+++ valgrind-3.8.0.newer/configure.in   2012-09-13 16:56:02.618108815 +0200
@@ -1820,11 +1820,11 @@
 AC_MSG_CHECKING([if x86/amd64 assembler speaks BMI1 and BMI2])
 
 AC_COMPILE_IFELSE([AC_LANG_PROGRAM([[]], [[
-  do { unsigned int h, l;
+  do { unsigned int h; unsigned long long int l1, l2, l3;
    __asm__ __volatile__(
-      "andn %2, %1, %0" : "=r" (h) : "r" (0x1234567), "r" (0x7654321) ); }
+      "andn %2, %1, %0" : "=r" (h) : "r" (0x1234567), "r" (0x7654321) );
    __asm__ __volatile__(
-      "movl %2, %%edx; mulx %3, %1, %0" : "=r" (h), "=r" (l) : "g" (0x1234567), "g" (0x7654321) : "edx" ); }
+      "movl %2, %%edx; mulx %3, %1, %0" : "=&r" (l1), "=&r" (l2) : "g" (0x1234567), "r" (l3) : "edx" ); }
   while (0)
 ]])], [
 ac_have_as_bmi=yes
Comment 17 Jakub Jelinek 2012-09-13 16:26:46 UTC
Created attachment 73891 [details]
valgrind-bmi-5.patch

I think just this should be sufficient for the bmi tests configure check.
Comment 18 Mark Wielaard 2012-09-19 20:29:21 UTC
Created attachment 74037 [details]
avx2-prereq.patch

(In reply to comment #14)
> I'm not 100% sure what exactly vgtest does: does it run the test only
> under valgrind, or also on the real CPU?  The thing is, valgrind should
> support all those insns even when running on a merely AVX-capable CPU, so
> if the test is only run under valgrind, they should pass as is.

The tests run under valgrind and work when there is only an AVX-capable CPU (no AVX2 support).  But since the test programs might or might not be compiled, depending on whether the installed binutils supports the assembler instructions, one also needs to check in the prereq that the test program was actually built.
Comment 19 Julian Seward 2013-03-27 11:44:40 UTC
Rebased and committed, r2702, r13338, r13339, r13340.  Thank you for
the patches, and sorry it took so long to land them.
Comment 20 Luke-Jr 2014-04-13 09:39:38 UTC
I'm confused. This looks like it should be in 3.9.0, but I'm running 3.9.0 and getting:
vex x86->IR: unhandled instruction bytes: 0xC4 0xE2 0x7B 0xF7

What am I missing?
Comment 21 Tom Hughes 2014-04-13 13:55:25 UTC
You're running 32-bit code, but we only support those instructions in 64-bit code at the moment.
Comment 22 Luke-Jr 2014-04-13 14:45:32 UTC
(In reply to comment #21)
> You're running 32-bit code, but we only support those instructions in 64-bit
> code at the moment.

Is there a bug tracking 32-bit support? Which category would x32 fall into - will switching to that work?
Comment 23 Tom Hughes 2014-04-13 15:26:19 UTC
There are various bugs asking for various instructions on 32-bit x86, but there are no current plans to support any of the more recent instruction set extensions in 32-bit code.

There is no support for the x32 ABI at all at the moment, so switching to that won't help.