Diffstat (limited to 'llvm')
208 files changed, 9892 insertions, 4433 deletions
diff --git a/llvm/docs/CFIVerify.rst b/llvm/docs/CFIVerify.rst index 6403347..f766be1 100644 --- a/llvm/docs/CFIVerify.rst +++ b/llvm/docs/CFIVerify.rst @@ -10,7 +10,7 @@ Objective This document provides an overview of an external tool to verify the protection mechanisms implemented by Clang's *Control Flow Integrity* (CFI) schemes -(``-fsanitize=cfi``). This tool, provided a binary or DSO, should infer whether +(``-fsanitize=cfi``). This tool, given a binary or DSO, should infer whether indirect control flow operations are protected by CFI, and should output these results in a human-readable form. @@ -22,12 +22,12 @@ Location ======== This tool will be present as a part of the LLVM toolchain, and will reside in -the "/llvm/tools/llvm-cfi-verify" directory, relative to the LLVM trunk. It will +the ``/llvm/tools/llvm-cfi-verify`` directory, relative to the LLVM trunk. It will be tested in two methods: - Unit tests to validate code sections, present in - "/llvm/unittests/tools/llvm-cfi-verify". -- Integration tests, present in "/llvm/tools/clang/test/LLVMCFIVerify". These + ``/llvm/unittests/tools/llvm-cfi-verify``. +- Integration tests, present in ``/llvm/tools/clang/test/LLVMCFIVerify``. These integration tests are part of clang as part of a continuous integration framework, ensuring updates to the compiler that reduce CFI coverage on indirect control flow instructions are identified. @@ -38,16 +38,16 @@ Background This tool will continuously validate that CFI directives are properly implemented around all indirect control flows by analysing the output machine code. The analysis of machine code is important as it ensures that any bugs -present in linker or compiler do not subvert CFI protections in the final +present in the linker or compiler do not subvert CFI protections in the final shipped binary. Unprotected indirect control flow instructions will be flagged for manual -review. These unexpected control flows may simply have not been accounted for in -the compiler implementation of CFI (e.g. indirect jumps to facilitate switch +review. These unexpected control flows may not have been accounted for in +the compiler implementation of CFI (e.g., indirect jumps to facilitate switch statements may not be fully protected). It may be possible in the future to extend this tool to flag unnecessary CFI -directives (e.g. CFI directives around a static call to a non-polymorphic base +directives (e.g., CFI directives around a static call to a non-polymorphic base type). This type of directive has no security implications, but may present performance impacts. @@ -66,7 +66,7 @@ the disassembly. A control flow graph would be generated from a small buffer of the instructions surrounding the 'target' control flow instruction. If the target instruction is branched-to, the fallthrough of the branch should be the CFI trap (on x86, this is a ``ud2`` instruction). If the target instruction is -the fallthrough (i.e. immediately succeeds) of a conditional jump, the +the fallthrough (i.e., immediately succeeds) of a conditional jump, the conditional jump target should be the CFI trap. If an indirect control flow instruction does not conform to one of these formats, the target will be noted as being CFI-unprotected. @@ -76,7 +76,7 @@ fallthrough of a conditional jump), if the target represents a vcall that takes arguments, these arguments may be pushed to the stack after the branch but before the target instruction. 
In these cases, a secondary 'spill graph' is constructed to ensure the register argument used by the indirect jump/call is -not spilled from the stack at any point in the interim period. If there are no +not spilled from the stack at any point in the interim. If there are no spills that affect the target register, the target is marked as CFI-protected. Other Design Notes diff --git a/llvm/docs/LangRef.rst b/llvm/docs/LangRef.rst index 6d0e828..8b6c25c 100644 --- a/llvm/docs/LangRef.rst +++ b/llvm/docs/LangRef.rst @@ -8588,13 +8588,14 @@ functions, and contains richer semantic information about the type of the allocation. This information is consumed by the ``alloc-token`` pass to instrument such calls with allocation token IDs. -The metadata contains a string with the type of an allocation. +The metadata contains: a string with the type of an allocation, and a boolean +denoting whether the type contains a pointer. .. code-block:: none call ptr @malloc(i64 64), !alloc_token !0 - !0 = !{!"<type-name>"} + !0 = !{!"<type-name>", i1 <contains-pointer>} Module Flags Metadata ===================== diff --git a/llvm/docs/SPIRVUsage.rst b/llvm/docs/SPIRVUsage.rst index b6cd4b4..d2d6646 100644 --- a/llvm/docs/SPIRVUsage.rst +++ b/llvm/docs/SPIRVUsage.rst @@ -233,6 +233,8 @@ Below is a list of supported SPIR-V extensions, sorted alphabetically by their e - Adds support for 4-bit integer type, and allow this type to be used in cooperative matrices. * - ``SPV_KHR_float_controls2`` - Adds execution modes and decorations to control floating-point computations in both kernels and shaders. It can be used on whole modules and individual instructions. + * - ``SPV_INTEL_predicated_io`` + - Adds predicated load and store instructions that conditionally read from or write to memory based on a boolean predicate. SPIR-V representation in LLVM IR ================================ diff --git a/llvm/include/llvm/ADT/Bitset.h b/llvm/include/llvm/ADT/Bitset.h index ecb6b14..b1e539e 100644 --- a/llvm/include/llvm/ADT/Bitset.h +++ b/llvm/include/llvm/ADT/Bitset.h @@ -28,15 +28,15 @@ namespace llvm { /// initialization.
template <unsigned NumBits> class Bitset { - typedef uintptr_t BitWord; + using BitWord = uintptr_t; - enum { BITWORD_SIZE = (unsigned)sizeof(BitWord) * CHAR_BIT }; + static constexpr unsigned BitwordBits = sizeof(BitWord) * CHAR_BIT; - static_assert(BITWORD_SIZE == 64 || BITWORD_SIZE == 32, + static_assert(BitwordBits == 64 || BitwordBits == 32, "Unsupported word size"); static constexpr unsigned NumWords = - (NumBits + BITWORD_SIZE - 1) / BITWORD_SIZE; + (NumBits + BitwordBits - 1) / BitwordBits; protected: using StorageType = std::array<BitWord, NumWords>; @@ -60,23 +60,23 @@ public: } constexpr Bitset &set(unsigned I) { - Bits[I / BITWORD_SIZE] |= BitWord(1) << (I % BITWORD_SIZE); + Bits[I / BitwordBits] |= BitWord(1) << (I % BitwordBits); return *this; } constexpr Bitset &reset(unsigned I) { - Bits[I / BITWORD_SIZE] &= ~(BitWord(1) << (I % BITWORD_SIZE)); + Bits[I / BitwordBits] &= ~(BitWord(1) << (I % BitwordBits)); return *this; } constexpr Bitset &flip(unsigned I) { - Bits[I / BITWORD_SIZE] ^= BitWord(1) << (I % BITWORD_SIZE); + Bits[I / BitwordBits] ^= BitWord(1) << (I % BitwordBits); return *this; } constexpr bool operator[](unsigned I) const { - BitWord Mask = BitWord(1) << (I % BITWORD_SIZE); - return (Bits[I / BITWORD_SIZE] & Mask) != 0; + BitWord Mask = BitWord(1) << (I % BitwordBits); + return (Bits[I / BitwordBits] & Mask) != 0; } constexpr bool test(unsigned I) const { return (*this)[I]; } diff --git a/llvm/include/llvm/Analysis/ScalarEvolution.h b/llvm/include/llvm/Analysis/ScalarEvolution.h index 858c1d5..8876e4e 100644 --- a/llvm/include/llvm/Analysis/ScalarEvolution.h +++ b/llvm/include/llvm/Analysis/ScalarEvolution.h @@ -1002,10 +1002,14 @@ public: /// (at every loop iteration). It is, at the same time, the minimum number /// of times S is divisible by 2. For example, given {4,+,8} it returns 2. /// If S is guaranteed to be 0, it returns the bitwidth of S. - LLVM_ABI uint32_t getMinTrailingZeros(const SCEV *S); + /// If \p CtxI is not nullptr, return a trailing-zero count valid at \p CtxI. + LLVM_ABI uint32_t getMinTrailingZeros(const SCEV *S, + const Instruction *CtxI = nullptr); - /// Returns the max constant multiple of S. - LLVM_ABI APInt getConstantMultiple(const SCEV *S); + /// Returns the max constant multiple of S. If \p CtxI is not nullptr, return + /// a constant multiple valid at \p CtxI. + LLVM_ABI APInt getConstantMultiple(const SCEV *S, + const Instruction *CtxI = nullptr); // Returns the max constant multiple of S. If S is exactly 0, return 1. LLVM_ABI APInt getNonZeroConstantMultiple(const SCEV *S); @@ -1525,8 +1529,10 @@ private: /// Return the Value set from which the SCEV expr is generated. ArrayRef<Value *> getSCEVValues(const SCEV *S); - /// Private helper method for the getConstantMultiple method. - APInt getConstantMultipleImpl(const SCEV *S); + /// Private helper method for the getConstantMultiple method. If \p CtxI is + /// not nullptr, return a constant multiple valid at \p CtxI. + APInt getConstantMultipleImpl(const SCEV *S, + const Instruction *CtxI = nullptr); /// Information about the number of times a particular loop exit may be /// reached before exiting the loop.
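The new optional ``CtxI`` parameter above gives callers a choice between a cached, everywhere-valid result and a sharper, point-specific one. A minimal usage sketch follows (``constantMultipleAt`` is a hypothetical helper, not part of this patch; only the two overloads shown above are assumed):

.. code-block:: c++

   #include "llvm/Analysis/ScalarEvolution.h"
   using namespace llvm;

   // Prefer the cached, context-free multiple; fall back to the uncached,
   // context-sensitive query only when the generic answer is trivial.
   APInt constantMultipleAt(ScalarEvolution &SE, const SCEV *S,
                            const Instruction *CtxI) {
     APInt M = SE.getConstantMultiple(S); // cached; valid at any program point
     if (M.isOne() && CtxI)
       M = SE.getConstantMultiple(S, CtxI); // valid only at CtxI; not cached
     return M;
   }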
diff --git a/llvm/include/llvm/BinaryFormat/Dwarf.def b/llvm/include/llvm/BinaryFormat/Dwarf.def index 2c9a3c0..fbf22cc 100644 --- a/llvm/include/llvm/BinaryFormat/Dwarf.def +++ b/llvm/include/llvm/BinaryFormat/Dwarf.def @@ -424,6 +424,9 @@ HANDLE_DW_AT(0x89, export_symbols, 5, DWARF) HANDLE_DW_AT(0x8a, deleted, 5, DWARF) HANDLE_DW_AT(0x8b, defaulted, 5, DWARF) HANDLE_DW_AT(0x8c, loclists_base, 5, DWARF) +// New in DWARF v6: +HANDLE_DW_AT(0x90, language_name, 6, DWARF) +HANDLE_DW_AT(0x91, language_version, 6, DWARF) // Vendor extensions: HANDLE_DW_AT(0x806, GHS_namespace_alias, 0, GHS) diff --git a/llvm/include/llvm/BinaryFormat/Dwarf.h b/llvm/include/llvm/BinaryFormat/Dwarf.h index 2c50125..815e85d 100644 --- a/llvm/include/llvm/BinaryFormat/Dwarf.h +++ b/llvm/include/llvm/BinaryFormat/Dwarf.h @@ -500,8 +500,15 @@ toDW_LNAME(SourceLanguage language) { return {}; } +/// Returns a version-independent language name. LLVM_ABI llvm::StringRef LanguageDescription(SourceLanguageName name); +/// Returns a language name corresponding to the specified version. +/// If the version is not recognized for the specified language, returns +/// the version-independent name. +LLVM_ABI llvm::StringRef LanguageDescription(SourceLanguageName Name, + uint32_t Version); + inline bool isCPlusPlus(SourceLanguage S) { bool result = false; // Deliberately enumerate all the language options so we get a warning when @@ -997,6 +1004,7 @@ LLVM_ABI StringRef VisibilityString(unsigned Visibility); LLVM_ABI StringRef VirtualityString(unsigned Virtuality); LLVM_ABI StringRef EnumKindString(unsigned EnumKind); LLVM_ABI StringRef LanguageString(unsigned Language); +LLVM_ABI StringRef SourceLanguageNameString(SourceLanguageName Lang); LLVM_ABI StringRef CaseString(unsigned Case); LLVM_ABI StringRef ConventionString(unsigned Convention); LLVM_ABI StringRef InlineCodeString(unsigned Code); @@ -1038,6 +1046,7 @@ LLVM_ABI unsigned getSubOperationEncoding(unsigned OpEncoding, LLVM_ABI unsigned getVirtuality(StringRef VirtualityString); LLVM_ABI unsigned getEnumKind(StringRef EnumKindString); LLVM_ABI unsigned getLanguage(StringRef LanguageString); +LLVM_ABI unsigned getSourceLanguageName(StringRef SourceLanguageNameString); LLVM_ABI unsigned getCallingConvention(StringRef LanguageString); LLVM_ABI unsigned getAttributeEncoding(StringRef EncodingString); LLVM_ABI unsigned getMacinfo(StringRef MacinfoString); diff --git a/llvm/include/llvm/CodeGen/GlobalISel/LegalizerInfo.h b/llvm/include/llvm/CodeGen/GlobalISel/LegalizerInfo.h index fd72a38..9855444 100644 --- a/llvm/include/llvm/CodeGen/GlobalISel/LegalizerInfo.h +++ b/llvm/include/llvm/CodeGen/GlobalISel/LegalizerInfo.h @@ -115,14 +115,17 @@ struct LegalityQuery { struct MemDesc { LLT MemoryTy; uint64_t AlignInBits; - AtomicOrdering Ordering; + AtomicOrdering Ordering; ///< For cmpxchg, this is the success ordering. + AtomicOrdering FailureOrdering; ///< For cmpxchg, otherwise NotAtomic.
MemDesc() = default; - MemDesc(LLT MemoryTy, uint64_t AlignInBits, AtomicOrdering Ordering) - : MemoryTy(MemoryTy), AlignInBits(AlignInBits), Ordering(Ordering) {} + MemDesc(LLT MemoryTy, uint64_t AlignInBits, AtomicOrdering Ordering, + AtomicOrdering FailureOrdering) + : MemoryTy(MemoryTy), AlignInBits(AlignInBits), Ordering(Ordering), + FailureOrdering(FailureOrdering) {} MemDesc(const MachineMemOperand &MMO) : MemDesc(MMO.getMemoryType(), MMO.getAlign().value() * 8, - MMO.getSuccessOrdering()) {} + MMO.getSuccessOrdering(), MMO.getFailureOrdering()) {} }; /// Operations which require memory can use this to place requirements on the diff --git a/llvm/include/llvm/CodeGen/MIR2Vec.h b/llvm/include/llvm/CodeGen/MIR2Vec.h index ea68b45..7b1b5d9 100644 --- a/llvm/include/llvm/CodeGen/MIR2Vec.h +++ b/llvm/include/llvm/CodeGen/MIR2Vec.h @@ -38,6 +38,7 @@ #include "llvm/IR/PassManager.h" #include "llvm/Pass.h" #include "llvm/Support/CommandLine.h" +#include "llvm/Support/Error.h" #include "llvm/Support/ErrorOr.h" #include <map> #include <set> @@ -92,46 +93,31 @@ public: /// Get the string key for a vocabulary entry at the given position std::string getStringKey(unsigned Pos) const; - MIRVocabulary() = delete; - MIRVocabulary(VocabMap &&Entries, const TargetInstrInfo *TII); - MIRVocabulary(ir2vec::VocabStorage &&Storage, const TargetInstrInfo &TII) - : Storage(std::move(Storage)), TII(TII) {} - - bool isValid() const { - return UniqueBaseOpcodeNames.size() > 0 && - Layout.TotalEntries == Storage.size() && Storage.isValid(); - } - - unsigned getDimension() const { - if (!isValid()) - return 0; - return Storage.getDimension(); - } + unsigned getDimension() const { return Storage.getDimension(); } // Accessor methods const Embedding &operator[](unsigned Opcode) const { - assert(isValid() && "MIR2Vec Vocabulary is invalid"); unsigned LocalIndex = getCanonicalOpcodeIndex(Opcode); return Storage[static_cast<unsigned>(Section::Opcodes)][LocalIndex]; } // Iterator access using const_iterator = ir2vec::VocabStorage::const_iterator; - const_iterator begin() const { - assert(isValid() && "MIR2Vec Vocabulary is invalid"); - return Storage.begin(); - } + const_iterator begin() const { return Storage.begin(); } - const_iterator end() const { - assert(isValid() && "MIR2Vec Vocabulary is invalid"); - return Storage.end(); - } + const_iterator end() const { return Storage.end(); } /// Total number of entries in the vocabulary - size_t getCanonicalSize() const { - assert(isValid() && "Invalid vocabulary"); - return Storage.size(); - } + size_t getCanonicalSize() const { return Storage.size(); } + + MIRVocabulary() = delete; + + /// Factory method to create MIRVocabulary from vocabulary map + static Expected<MIRVocabulary> create(VocabMap &&Entries, + const TargetInstrInfo &TII); + +private: + MIRVocabulary(VocabMap &&Entries, const TargetInstrInfo &TII); }; } // namespace mir2vec @@ -145,7 +131,6 @@ class MIR2VecVocabLegacyAnalysis : public ImmutablePass { StringRef getPassName() const override; Error readVocabulary(); - void emitError(Error Err, LLVMContext &Ctx); protected: void getAnalysisUsage(AnalysisUsage &AU) const override { @@ -156,7 +141,7 @@ protected: public: static char ID; MIR2VecVocabLegacyAnalysis() : ImmutablePass(ID) {} - mir2vec::MIRVocabulary getMIR2VecVocabulary(const Module &M); + Expected<mir2vec::MIRVocabulary> getMIR2VecVocabulary(const Module &M); }; /// This pass prints the embeddings in the MIR2Vec vocabulary diff --git a/llvm/include/llvm/Frontend/HLSL/RootSignatureMetadata.h 
b/llvm/include/llvm/Frontend/HLSL/RootSignatureMetadata.h index bfcbf72..7ef6667 100644 --- a/llvm/include/llvm/Frontend/HLSL/RootSignatureMetadata.h +++ b/llvm/include/llvm/Frontend/HLSL/RootSignatureMetadata.h @@ -27,160 +27,15 @@ class Metadata; namespace hlsl { namespace rootsig { - -template <typename T> class RootSignatureValidationError - : public ErrorInfo<RootSignatureValidationError<T>> { -public: - static char ID; - StringRef ParamName; - T Value; - - RootSignatureValidationError(StringRef ParamName, T Value) - : ParamName(ParamName), Value(Value) {} - - void log(raw_ostream &OS) const override { - OS << "Invalid value for " << ParamName << ": " << Value; - } - - std::error_code convertToErrorCode() const override { - return llvm::inconvertibleErrorCode(); - } -}; - -class OffsetAppendAfterOverflow : public ErrorInfo<OffsetAppendAfterOverflow> { -public: - static char ID; - dxil::ResourceClass Type; - uint32_t Register; - uint32_t Space; - - OffsetAppendAfterOverflow(dxil::ResourceClass Type, uint32_t Register, - uint32_t Space) - : Type(Type), Register(Register), Space(Space) {} - - void log(raw_ostream &OS) const override { - OS << "Range " << getResourceClassName(Type) << "(register=" << Register - << ", space=" << Space << ") " - << "cannot be appended after an unbounded range "; - } - - std::error_code convertToErrorCode() const override { - return llvm::inconvertibleErrorCode(); - } -}; - -class ShaderRegisterOverflowError - : public ErrorInfo<ShaderRegisterOverflowError> { -public: - static char ID; - dxil::ResourceClass Type; - uint32_t Register; - uint32_t Space; - - ShaderRegisterOverflowError(dxil::ResourceClass Type, uint32_t Register, - uint32_t Space) - : Type(Type), Register(Register), Space(Space) {} - - void log(raw_ostream &OS) const override { - OS << "Overflow for shader register range: " << getResourceClassName(Type) - << "(register=" << Register << ", space=" << Space << ")."; - } - - std::error_code convertToErrorCode() const override { - return llvm::inconvertibleErrorCode(); - } -}; - -class OffsetOverflowError : public ErrorInfo<OffsetOverflowError> { -public: - static char ID; - dxil::ResourceClass Type; - uint32_t Register; - uint32_t Space; - - OffsetOverflowError(dxil::ResourceClass Type, uint32_t Register, - uint32_t Space) - : Type(Type), Register(Register), Space(Space) {} - - void log(raw_ostream &OS) const override { - OS << "Offset overflow for descriptor range: " << getResourceClassName(Type) - << "(register=" << Register << ", space=" << Space << ")."; - } - - std::error_code convertToErrorCode() const override { - return llvm::inconvertibleErrorCode(); - } -}; - -class TableSamplerMixinError : public ErrorInfo<TableSamplerMixinError> { + : public ErrorInfo<RootSignatureValidationError> { public: static char ID; - dxil::ResourceClass Type; - uint32_t Location; - - TableSamplerMixinError(dxil::ResourceClass Type, uint32_t Location) - : Type(Type), Location(Location) {} - - void log(raw_ostream &OS) const override { - OS << "Samplers cannot be mixed with other " - << "resource types in a descriptor table, " << getResourceClassName(Type) - << "(location=" << Location << ")"; - } - - std::error_code convertToErrorCode() const override { - return llvm::inconvertibleErrorCode(); - } -}; - -class GenericRSMetadataError : public ErrorInfo<GenericRSMetadataError> { -public: - LLVM_ABI static char ID; - StringRef Message; - MDNode *MD; - - GenericRSMetadataError(StringRef Message, MDNode *MD) - : Message(Message), MD(MD) {} - - void log(raw_ostream 
&OS) const override { - OS << Message; - if (MD) { - OS << "\n"; - MD->printTree(OS); - } - } - - std::error_code convertToErrorCode() const override { - return llvm::inconvertibleErrorCode(); - } -}; - -class InvalidRSMetadataFormat : public ErrorInfo<InvalidRSMetadataFormat> { -public: - LLVM_ABI static char ID; - StringRef ElementName; + std::string Msg; - InvalidRSMetadataFormat(StringRef ElementName) : ElementName(ElementName) {} - - void log(raw_ostream &OS) const override { - OS << "Invalid format for " << ElementName; - } + RootSignatureValidationError(const Twine &Msg) : Msg(Msg.str()) {} - std::error_code convertToErrorCode() const override { - return llvm::inconvertibleErrorCode(); - } -}; - -class InvalidRSMetadataValue : public ErrorInfo<InvalidRSMetadataValue> { -public: - LLVM_ABI static char ID; - StringRef ParamName; - - InvalidRSMetadataValue(StringRef ParamName) : ParamName(ParamName) {} - - void log(raw_ostream &OS) const override { - OS << "Invalid value for " << ParamName; - } + void log(raw_ostream &OS) const override { OS << Msg; } std::error_code convertToErrorCode() const override { return llvm::inconvertibleErrorCode(); diff --git a/llvm/include/llvm/Frontend/OpenMP/OMPKinds.def b/llvm/include/llvm/Frontend/OpenMP/OMPKinds.def index 01ca8da..1694a33 100644 --- a/llvm/include/llvm/Frontend/OpenMP/OMPKinds.def +++ b/llvm/include/llvm/Frontend/OpenMP/OMPKinds.def @@ -42,6 +42,7 @@ __OMP_TYPE(Double) OMP_TYPE(SizeTy, M.getDataLayout().getIntPtrType(Ctx)) OMP_TYPE(Int63, Type::getIntNTy(Ctx, 63)) +OMP_TYPE(FuncPtrTy, PointerType::get(Ctx, M.getDataLayout().getProgramAddressSpace())) __OMP_PTR_TYPE(VoidPtr) __OMP_PTR_TYPE(VoidPtrPtr) @@ -471,7 +472,7 @@ __OMP_RTL(__kmpc_target_init, false, Int32, KernelEnvironmentPtr, KernelLaunchEn __OMP_RTL(__kmpc_target_deinit, false, Void,) __OMP_RTL(__kmpc_kernel_prepare_parallel, false, Void, VoidPtr) __OMP_RTL(__kmpc_parallel_51, false, Void, IdentPtr, Int32, Int32, Int32, Int32, - VoidPtr, VoidPtr, VoidPtrPtr, SizeTy) + FuncPtrTy, VoidPtr, VoidPtrPtr, SizeTy) __OMP_RTL(__kmpc_for_static_loop_4, false, Void, IdentPtr, VoidPtr, VoidPtr, Int32, Int32, Int32, Int8) __OMP_RTL(__kmpc_for_static_loop_4u, false, Void, IdentPtr, VoidPtr, VoidPtr, Int32, Int32, Int32, Int8) __OMP_RTL(__kmpc_for_static_loop_8, false, Void, IdentPtr, VoidPtr, VoidPtr, Int64, Int64, Int64, Int8) diff --git a/llvm/include/llvm/IR/DIBuilder.h b/llvm/include/llvm/IR/DIBuilder.h index 25cbc38..f3839c9 100644 --- a/llvm/include/llvm/IR/DIBuilder.h +++ b/llvm/include/llvm/IR/DIBuilder.h @@ -146,9 +146,9 @@ namespace llvm { /// \param SDK The SDK name. On Darwin, this is the last component /// of the sysroot. LLVM_ABI DICompileUnit * - createCompileUnit(unsigned Lang, DIFile *File, StringRef Producer, - bool isOptimized, StringRef Flags, unsigned RV, - StringRef SplitName = StringRef(), + createCompileUnit(DISourceLanguageName Lang, DIFile *File, + StringRef Producer, bool isOptimized, StringRef Flags, + unsigned RV, StringRef SplitName = StringRef(), DICompileUnit::DebugEmissionKind Kind = DICompileUnit::DebugEmissionKind::FullDebug, uint64_t DWOId = 0, bool SplitDebugInlining = true, @@ -729,7 +729,8 @@ namespace llvm { /// \param Subscripts Subscripts. LLVM_ABI DICompositeType *createVectorType(uint64_t Size, uint32_t AlignInBits, DIType *Ty, - DINodeArray Subscripts); + DINodeArray Subscripts, + Metadata *BitStride = nullptr); /// Create debugging information entry for an /// enumeration. 
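Given the ``createCompileUnit`` signature change above, a hedged sketch of what a caller might look like after this patch (``emitCU`` and its arguments are illustrative only, not from this commit; ``DISourceLanguageName`` is defined in the DebugInfoMetadata.h hunk below):

.. code-block:: c++

   #include "llvm/BinaryFormat/Dwarf.h"
   #include "llvm/IR/DIBuilder.h"
   using namespace llvm;

   // The language is now passed as a DISourceLanguageName wrapper rather
   // than a raw unsigned DWARF language code.
   DICompileUnit *emitCU(DIBuilder &DIB, DIFile *File) {
     return DIB.createCompileUnit(
         DISourceLanguageName(dwarf::DW_LANG_C_plus_plus_14), File,
         /*Producer=*/"clang", /*isOptimized=*/false, /*Flags=*/"",
         /*RV=*/0);
   }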
diff --git a/llvm/include/llvm/IR/DebugInfoMetadata.h b/llvm/include/llvm/IR/DebugInfoMetadata.h index 7c6e709..c626efc 100644 --- a/llvm/include/llvm/IR/DebugInfoMetadata.h +++ b/llvm/include/llvm/IR/DebugInfoMetadata.h @@ -66,6 +66,55 @@ namespace dwarf { enum Tag : uint16_t; } +/// Wrapper structure that holds a language name and its version. +/// +/// Some debug-info formats, particularly DWARF, distinguish between +/// language codes that include the version in the name and codes that don't. +/// DISourceLanguageName may hold either of these. +/// +class DISourceLanguageName { + /// Language version. The version scheme is language + /// dependent. + uint32_t Version = 0; + + /// Language name. + /// If \ref HasVersion is \c true, then this name + /// is version independent (i.e., doesn't include the language + /// version in its name). + uint16_t Name; + + /// If \c true, then \ref Version is interpretable and \ref Name + /// is a version independent name. + bool HasVersion; + +public: + bool hasVersionedName() const { return HasVersion; } + + /// Returns a versioned or unversioned language name. + uint16_t getName() const { return Name; } + + /// Transitional API for cases where we do not yet support + /// versioned source language names. Use \ref getName instead. + /// + /// FIXME: remove once all callers of this API account for versioned + /// names. + uint16_t getUnversionedName() const { + assert(!hasVersionedName()); + return Name; + } + + /// Returns the language version. Only valid for versioned language names. + uint32_t getVersion() const { + assert(hasVersionedName()); + return Version; + } + + DISourceLanguageName(uint16_t Lang, uint32_t Version) + : Version(Version), Name(Lang), HasVersion(true) {} + DISourceLanguageName(uint16_t Lang) + : Version(0), Name(Lang), HasVersion(false) {} +}; + class DbgVariableRecord; LLVM_ABI extern cl::opt<bool> EnableFSDiscriminator; @@ -2003,7 +2052,7 @@ public: LLVM_ABI static const char *nameTableKindString(DebugNameTableKind PK); private: - unsigned SourceLanguage; + DISourceLanguageName SourceLanguage; unsigned RuntimeVersion; uint64_t DWOId; unsigned EmissionKind; @@ -2013,16 +2062,17 @@ private: bool DebugInfoForProfiling; bool RangesBaseAddress; - DICompileUnit(LLVMContext &C, StorageType Storage, unsigned SourceLanguage, - bool IsOptimized, unsigned RuntimeVersion, - unsigned EmissionKind, uint64_t DWOId, bool SplitDebugInlining, - bool DebugInfoForProfiling, unsigned NameTableKind, - bool RangesBaseAddress, ArrayRef<Metadata *> Ops); + DICompileUnit(LLVMContext &C, StorageType Storage, + DISourceLanguageName SourceLanguage, bool IsOptimized, + unsigned RuntimeVersion, unsigned EmissionKind, uint64_t DWOId, + bool SplitDebugInlining, bool DebugInfoForProfiling, + unsigned NameTableKind, bool RangesBaseAddress, + ArrayRef<Metadata *> Ops); ~DICompileUnit() = default; static DICompileUnit * - getImpl(LLVMContext &Context, unsigned SourceLanguage, DIFile *File, - StringRef Producer, bool IsOptimized, StringRef Flags, + getImpl(LLVMContext &Context, DISourceLanguageName SourceLanguage, + DIFile *File, StringRef Producer, bool IsOptimized, StringRef Flags, unsigned RuntimeVersion, StringRef SplitDebugFilename, unsigned EmissionKind, DICompositeTypeArray EnumTypes, DIScopeArray RetainedTypes, @@ -2042,8 +2092,8 @@ private: getCanonicalMDString(Context, SDK), Storage, ShouldCreate); } LLVM_ABI static DICompileUnit * - getImpl(LLVMContext &Context, unsigned SourceLanguage, Metadata *File, - MDString *Producer, bool IsOptimized, MDString *Flags, +
getImpl(LLVMContext &Context, DISourceLanguageName SourceLanguage, + Metadata *File, MDString *Producer, bool IsOptimized, MDString *Flags, unsigned RuntimeVersion, MDString *SplitDebugFilename, unsigned EmissionKind, Metadata *EnumTypes, Metadata *RetainedTypes, Metadata *GlobalVariables, Metadata *ImportedEntities, @@ -2068,7 +2118,7 @@ public: DEFINE_MDNODE_GET_DISTINCT_TEMPORARY( DICompileUnit, - (unsigned SourceLanguage, DIFile *File, StringRef Producer, + (DISourceLanguageName SourceLanguage, DIFile *File, StringRef Producer, bool IsOptimized, StringRef Flags, unsigned RuntimeVersion, StringRef SplitDebugFilename, DebugEmissionKind EmissionKind, DICompositeTypeArray EnumTypes, DIScopeArray RetainedTypes, @@ -2084,7 +2134,7 @@ public: SysRoot, SDK)) DEFINE_MDNODE_GET_DISTINCT_TEMPORARY( DICompileUnit, - (unsigned SourceLanguage, Metadata *File, MDString *Producer, + (DISourceLanguageName SourceLanguage, Metadata *File, MDString *Producer, bool IsOptimized, MDString *Flags, unsigned RuntimeVersion, MDString *SplitDebugFilename, unsigned EmissionKind, Metadata *EnumTypes, Metadata *RetainedTypes, Metadata *GlobalVariables, @@ -2099,7 +2149,7 @@ public: TempDICompileUnit clone() const { return cloneImpl(); } - unsigned getSourceLanguage() const { return SourceLanguage; } + DISourceLanguageName getSourceLanguage() const { return SourceLanguage; } bool isOptimized() const { return IsOptimized; } unsigned getRuntimeVersion() const { return RuntimeVersion; } DebugEmissionKind getEmissionKind() const { diff --git a/llvm/include/llvm/IR/DiagnosticInfo.h b/llvm/include/llvm/IR/DiagnosticInfo.h index 5f7225e..a426fb0 100644 --- a/llvm/include/llvm/IR/DiagnosticInfo.h +++ b/llvm/include/llvm/IR/DiagnosticInfo.h @@ -20,6 +20,7 @@ #include "llvm/ADT/StringRef.h" #include "llvm/ADT/Twine.h" #include "llvm/IR/DebugLoc.h" +#include "llvm/Support/BranchProbability.h" #include "llvm/Support/CBindingWrapping.h" #include "llvm/Support/Compiler.h" #include "llvm/Support/ErrorHandling.h" @@ -555,6 +556,7 @@ public: Argument(StringRef Key, bool B) : Key(Key), Val(B ? "true" : "false") {} LLVM_ABI Argument(StringRef Key, DebugLoc dl); LLVM_ABI Argument(StringRef Key, InstructionCost C); + LLVM_ABI Argument(StringRef Key, BranchProbability P); }; /// \p PassName is the name of the pass emitting this diagnostic. 
\p diff --git a/llvm/include/llvm/IR/Intrinsics.td b/llvm/include/llvm/IR/Intrinsics.td index 96da698..8856eda 100644 --- a/llvm/include/llvm/IR/Intrinsics.td +++ b/llvm/include/llvm/IR/Intrinsics.td @@ -1983,16 +1983,16 @@ def int_experimental_vector_match : DefaultAttrsIntrinsic< [ llvm_anyvector_ty, llvm_anyvector_ty, LLVMScalarOrSameVectorWidth<0, llvm_i1_ty> ], // Mask - [ IntrNoMem ]>; + [ IntrNoMem, IntrSpeculatable ]>; // Extract based on mask bits def int_experimental_vector_extract_last_active: DefaultAttrsIntrinsic<[LLVMVectorElementType<0>], [llvm_anyvector_ty, LLVMScalarOrSameVectorWidth<0, llvm_i1_ty>, - LLVMVectorElementType<0>], [IntrNoMem]>; + LLVMVectorElementType<0>], [IntrNoMem, IntrSpeculatable]>; // Operators -let IntrProperties = [IntrNoMem] in { +let IntrProperties = [IntrNoMem, IntrSpeculatable] in { // Integer arithmetic def int_vp_add : DefaultAttrsIntrinsic<[ llvm_anyvector_ty ], [ LLVMMatchType<0>, @@ -2039,26 +2039,6 @@ let IntrProperties = [IntrNoMem] in { LLVMMatchType<0>, LLVMScalarOrSameVectorWidth<0, llvm_i1_ty>, llvm_i32_ty]>; - def int_vp_sdiv : DefaultAttrsIntrinsic<[ llvm_anyvector_ty ], - [ LLVMMatchType<0>, - LLVMMatchType<0>, - LLVMScalarOrSameVectorWidth<0, llvm_i1_ty>, - llvm_i32_ty]>; - def int_vp_udiv : DefaultAttrsIntrinsic<[ llvm_anyvector_ty ], - [ LLVMMatchType<0>, - LLVMMatchType<0>, - LLVMScalarOrSameVectorWidth<0, llvm_i1_ty>, - llvm_i32_ty]>; - def int_vp_srem : DefaultAttrsIntrinsic<[ llvm_anyvector_ty ], - [ LLVMMatchType<0>, - LLVMMatchType<0>, - LLVMScalarOrSameVectorWidth<0, llvm_i1_ty>, - llvm_i32_ty]>; - def int_vp_urem : DefaultAttrsIntrinsic<[ llvm_anyvector_ty ], - [ LLVMMatchType<0>, - LLVMMatchType<0>, - LLVMScalarOrSameVectorWidth<0, llvm_i1_ty>, - llvm_i32_ty]>; def int_vp_abs : DefaultAttrsIntrinsic<[ llvm_anyvector_ty ], [ LLVMMatchType<0>, llvm_i1_ty, @@ -2390,7 +2370,29 @@ let IntrProperties = [IntrNoMem] in { llvm_i32_ty]>; } -let IntrProperties = [IntrNoMem, ImmArg<ArgIndex<1>>] in { +// Integer VP division and remainder: not speculatable. 
+def int_vp_sdiv : DefaultAttrsIntrinsic<[ llvm_anyvector_ty ], + [ LLVMMatchType<0>, + LLVMMatchType<0>, + LLVMScalarOrSameVectorWidth<0, llvm_i1_ty>, + llvm_i32_ty], [IntrNoMem]>; +def int_vp_udiv : DefaultAttrsIntrinsic<[ llvm_anyvector_ty ], + [ LLVMMatchType<0>, + LLVMMatchType<0>, + LLVMScalarOrSameVectorWidth<0, llvm_i1_ty>, + llvm_i32_ty], [IntrNoMem]>; +def int_vp_srem : DefaultAttrsIntrinsic<[ llvm_anyvector_ty ], + [ LLVMMatchType<0>, + LLVMMatchType<0>, + LLVMScalarOrSameVectorWidth<0, llvm_i1_ty>, + llvm_i32_ty], [IntrNoMem]>; +def int_vp_urem : DefaultAttrsIntrinsic<[ llvm_anyvector_ty ], + [ LLVMMatchType<0>, + LLVMMatchType<0>, + LLVMScalarOrSameVectorWidth<0, llvm_i1_ty>, + llvm_i32_ty], [IntrNoMem]>; + +let IntrProperties = [IntrNoMem, IntrSpeculatable, ImmArg<ArgIndex<1>>] in { def int_vp_ctlz : DefaultAttrsIntrinsic<[ llvm_anyvector_ty ], [ LLVMMatchType<0>, llvm_i1_ty, @@ -2422,18 +2424,18 @@ def int_loop_dependence_war_mask: def int_get_active_lane_mask: DefaultAttrsIntrinsic<[llvm_anyvector_ty], [llvm_anyint_ty, LLVMMatchType<1>], - [IntrNoMem]>; + [IntrNoMem, IntrSpeculatable]>; def int_experimental_get_vector_length: DefaultAttrsIntrinsic<[llvm_i32_ty], [llvm_anyint_ty, llvm_i32_ty, llvm_i1_ty], - [IntrNoMem, + [IntrNoMem, IntrSpeculatable, ImmArg<ArgIndex<1>>, ImmArg<ArgIndex<2>>]>; def int_experimental_cttz_elts: DefaultAttrsIntrinsic<[llvm_anyint_ty], [llvm_anyvector_ty, llvm_i1_ty], - [IntrNoMem, ImmArg<ArgIndex<1>>]>; + [IntrNoMem, IntrSpeculatable, ImmArg<ArgIndex<1>>]>; def int_experimental_vp_splice: DefaultAttrsIntrinsic<[llvm_anyvector_ty], @@ -2442,21 +2444,21 @@ def int_experimental_vp_splice: llvm_i32_ty, LLVMScalarOrSameVectorWidth<0, llvm_i1_ty>, llvm_i32_ty, llvm_i32_ty], - [IntrNoMem, ImmArg<ArgIndex<2>>]>; + [IntrNoMem, IntrSpeculatable, ImmArg<ArgIndex<2>>]>; def int_experimental_vp_reverse: DefaultAttrsIntrinsic<[llvm_anyvector_ty], [LLVMMatchType<0>, LLVMScalarOrSameVectorWidth<0, llvm_i1_ty>, llvm_i32_ty], - [IntrNoMem]>; + [IntrNoMem, IntrSpeculatable]>; def int_experimental_vp_splat: DefaultAttrsIntrinsic<[llvm_anyvector_ty], [LLVMVectorElementType<0>, LLVMScalarOrSameVectorWidth<0, llvm_i1_ty>, llvm_i32_ty], - [IntrNoMem]>; + [IntrNoMem, IntrSpeculatable]>; def int_vp_is_fpclass: DefaultAttrsIntrinsic<[ LLVMScalarOrSameVectorWidth<0, llvm_i1_ty>], @@ -2753,16 +2755,22 @@ def int_preserve_static_offset : DefaultAttrsIntrinsic<[llvm_ptr_ty], def int_vector_reverse : DefaultAttrsIntrinsic<[llvm_anyvector_ty], [LLVMMatchType<0>], - [IntrNoMem]>; + [IntrNoMem, + IntrSpeculatable]>; def int_vector_splice : DefaultAttrsIntrinsic<[llvm_anyvector_ty], [LLVMMatchType<0>, LLVMMatchType<0>, llvm_i32_ty], - [IntrNoMem, ImmArg<ArgIndex<2>>]>; + [IntrNoMem, + IntrSpeculatable, + ImmArg<ArgIndex<2>>]>; //===---------- Intrinsics to query properties of scalable vectors --------===// -def int_vscale : DefaultAttrsIntrinsic<[llvm_anyint_ty], [], [IntrNoMem]>; +def int_vscale : DefaultAttrsIntrinsic<[llvm_anyint_ty], + [], + [IntrNoMem, + IntrSpeculatable]>; //===---------- Intrinsics to perform subvector insertion/extraction ------===// def int_vector_insert : DefaultAttrsIntrinsic<[llvm_anyvector_ty], @@ -2776,18 +2784,22 @@ def int_vector_extract : DefaultAttrsIntrinsic<[llvm_anyvector_ty], foreach n = 2...8 in { def int_vector_interleave#n : DefaultAttrsIntrinsic<[llvm_anyvector_ty], !listsplat(LLVMOneNthElementsVectorType<0, n>, n), - [IntrNoMem]>; + [IntrNoMem, + IntrSpeculatable]>; def int_vector_deinterleave#n : 
DefaultAttrsIntrinsic<!listsplat(LLVMOneNthElementsVectorType<0, n>, n), [llvm_anyvector_ty], - [IntrNoMem]>; + [IntrNoMem, + IntrSpeculatable]>; } //===-------------- Intrinsics to perform partial reduction ---------------===// def int_vector_partial_reduce_add : DefaultAttrsIntrinsic<[LLVMMatchType<0>], - [llvm_anyvector_ty, llvm_anyvector_ty], - [IntrNoMem]>; + [llvm_anyvector_ty, + llvm_anyvector_ty], + [IntrNoMem, + IntrSpeculatable]>; //===----------------- Pointer Authentication Intrinsics ------------------===// // diff --git a/llvm/include/llvm/Support/SpecialCaseList.h b/llvm/include/llvm/Support/SpecialCaseList.h index 64cad80..466e2a4 100644 --- a/llvm/include/llvm/Support/SpecialCaseList.h +++ b/llvm/include/llvm/Support/SpecialCaseList.h @@ -12,13 +12,16 @@ #ifndef LLVM_SUPPORT_SPECIALCASELIST_H #define LLVM_SUPPORT_SPECIALCASELIST_H +#include "llvm/ADT/ArrayRef.h" #include "llvm/ADT/StringMap.h" +#include "llvm/Support/Allocator.h" #include "llvm/Support/Compiler.h" #include "llvm/Support/GlobPattern.h" #include "llvm/Support/Regex.h" #include <memory> #include <string> #include <utility> +#include <variant> #include <vector> namespace llvm { @@ -118,11 +121,49 @@ protected: SpecialCaseList(SpecialCaseList const &) = delete; SpecialCaseList &operator=(SpecialCaseList const &) = delete; - /// Represents a set of globs and their line numbers +private: + // Legacy v1 matcher. + class RegexMatcher { + public: + LLVM_ABI Error insert(StringRef Pattern, unsigned LineNumber); + LLVM_ABI void + match(StringRef Query, + llvm::function_ref<void(StringRef Rule, unsigned LineNo)> Cb) const; + + struct Reg { + Reg(StringRef Name, unsigned LineNo, Regex &&Rg) + : Name(Name), LineNo(LineNo), Rg(std::move(Rg)) {} + StringRef Name; + unsigned LineNo; + Regex Rg; + }; + + std::vector<Reg> RegExes; + }; + + class GlobMatcher { + public: + LLVM_ABI Error insert(StringRef Pattern, unsigned LineNumber); + LLVM_ABI void + match(StringRef Query, + llvm::function_ref<void(StringRef Rule, unsigned LineNo)> Cb) const; + + struct Glob { + Glob(StringRef Name, unsigned LineNo, GlobPattern &&Pattern) + : Name(Name), LineNo(LineNo), Pattern(std::move(Pattern)) {} + StringRef Name; + unsigned LineNo; + GlobPattern Pattern; + }; + + std::vector<GlobMatcher::Glob> Globs; + }; + + /// Represents a set of patterns and their line numbers class Matcher { public: - LLVM_ABI Error insert(StringRef Pattern, unsigned LineNumber, - bool UseRegex); + LLVM_ABI Matcher(bool UseGlobs, bool RemoveDotSlash); + LLVM_ABI void match(StringRef Query, llvm::function_ref<void(StringRef Rule, unsigned LineNo)> Cb) const; @@ -133,36 +174,19 @@ protected: return R; } - struct Glob { - Glob(StringRef Name, unsigned LineNo) : Name(Name), LineNo(LineNo) {} - std::string Name; - unsigned LineNo; - GlobPattern Pattern; - // neither copyable nor movable because GlobPattern contains - // Glob::StringRef that points to Glob::Name.
- Glob(Glob &&) = delete; - Glob() = default; - }; - - struct Reg { - Reg(StringRef Name, unsigned LineNo, Regex &&Rg) - : Name(Name), LineNo(LineNo), Rg(std::move(Rg)) {} - std::string Name; - unsigned LineNo; - Regex Rg; - Reg(Reg &&) = delete; - Reg() = default; - }; + LLVM_ABI Error insert(StringRef Pattern, unsigned LineNumber); - std::vector<std::unique_ptr<Matcher::Glob>> Globs; - std::vector<std::unique_ptr<Reg>> RegExes; + std::variant<RegexMatcher, GlobMatcher> M; + bool RemoveDotSlash; }; using SectionEntries = StringMap<StringMap<Matcher>>; +protected: struct Section { - Section(StringRef Str, unsigned FileIdx) - : SectionStr(Str), FileIdx(FileIdx) {}; + Section(StringRef Str, unsigned FileIdx, bool UseGlobs) + : SectionMatcher(UseGlobs, /*RemoveDotSlash=*/false), SectionStr(Str), + FileIdx(FileIdx) {} Section(Section &&) = default; @@ -186,11 +210,15 @@ protected: findMatcher(StringRef Prefix, StringRef Category) const; }; + ArrayRef<const Section> sections() const { return Sections; } + +private: + BumpPtrAllocator StrAlloc; std::vector<Section> Sections; LLVM_ABI Expected<Section *> addSection(StringRef SectionStr, unsigned FileIdx, unsigned LineNo, - bool UseGlobs = true); + bool UseGlobs); /// Parses just-constructed SpecialCaseList entries from a memory buffer. LLVM_ABI bool parse(unsigned FileIdx, const MemoryBuffer *MB, diff --git a/llvm/include/llvm/Support/TrailingObjects.h b/llvm/include/llvm/Support/TrailingObjects.h index dc03285..c479765 100644 --- a/llvm/include/llvm/Support/TrailingObjects.h +++ b/llvm/include/llvm/Support/TrailingObjects.h @@ -182,8 +182,6 @@ protected: static constexpr size_t additionalSizeToAllocImpl(size_t SizeSoFar) { return SizeSoFar; } - - template <bool CheckAlignment> static void verifyTrailingObjectsAlignment() {} }; } // end namespace trailing_objects_internal @@ -203,10 +201,7 @@ class TrailingObjects template <typename... Tys> class Foo {}; - typedef trailing_objects_internal::TrailingObjectsImpl< - trailing_objects_internal::MaxAlignment<TrailingTys...>, BaseTy, - TrailingObjects<BaseTy, TrailingTys...>, BaseTy, TrailingTys...> - ParentType; + using ParentType = typename TrailingObjects::TrailingObjectsImpl; using TrailingObjectsBase = trailing_objects_internal::TrailingObjectsBase; using ParentType::getTrailingObjectsImpl; diff --git a/llvm/include/llvm/Transforms/Utils/SimplifyCFGOptions.h b/llvm/include/llvm/Transforms/Utils/SimplifyCFGOptions.h index ee3cc95..2d0f957 100644 --- a/llvm/include/llvm/Transforms/Utils/SimplifyCFGOptions.h +++ b/llvm/include/llvm/Transforms/Utils/SimplifyCFGOptions.h @@ -24,6 +24,7 @@ struct SimplifyCFGOptions { int BonusInstThreshold = 1; bool ForwardSwitchCondToPhi = false; bool ConvertSwitchRangeToICmp = false; + bool ConvertSwitchToArithmetic = false; bool ConvertSwitchToLookupTable = false; bool NeedCanonicalLoop = true; bool HoistCommonInsts = false; @@ -48,6 +49,10 @@ struct SimplifyCFGOptions { ConvertSwitchRangeToICmp = B; return *this; } + SimplifyCFGOptions &convertSwitchToArithmetic(bool B) { + ConvertSwitchToArithmetic = B; + return *this; + } SimplifyCFGOptions &convertSwitchToLookupTable(bool B) { ConvertSwitchToLookupTable = B; return *this; diff --git a/llvm/lib/Analysis/ConstantFolding.cpp b/llvm/lib/Analysis/ConstantFolding.cpp index b744537..45c889c 100755 --- a/llvm/lib/Analysis/ConstantFolding.cpp +++ b/llvm/lib/Analysis/ConstantFolding.cpp @@ -329,6 +329,7 @@ bool llvm::IsConstantOffsetFromGlobal(Constant *C, GlobalValue *&GV, // Look through ptr->int and ptr->ptr casts. 
if (CE->getOpcode() == Instruction::PtrToInt || + CE->getOpcode() == Instruction::PtrToAddr || CE->getOpcode() == Instruction::BitCast) return IsConstantOffsetFromGlobal(CE->getOperand(0), GV, Offset, DL, DSOEquiv); @@ -1495,22 +1496,22 @@ Constant *llvm::ConstantFoldCastOperand(unsigned Opcode, Constant *C, default: llvm_unreachable("Missing case"); case Instruction::PtrToAddr: - // TODO: Add some of the ptrtoint folds here as well. - break; case Instruction::PtrToInt: if (auto *CE = dyn_cast<ConstantExpr>(C)) { Constant *FoldedValue = nullptr; - // If the input is a inttoptr, eliminate the pair. This requires knowing + // If the input is an inttoptr, eliminate the pair. This requires knowing // the width of a pointer, so it can't be done in ConstantExpr::getCast. if (CE->getOpcode() == Instruction::IntToPtr) { - // zext/trunc the inttoptr to pointer size. - FoldedValue = ConstantFoldIntegerCast(CE->getOperand(0), - DL.getIntPtrType(CE->getType()), + // zext/trunc the inttoptr to pointer/address size. + Type *MidTy = Opcode == Instruction::PtrToInt + ? DL.getAddressType(CE->getType()) + : DL.getIntPtrType(CE->getType()); + FoldedValue = ConstantFoldIntegerCast(CE->getOperand(0), MidTy, /*IsSigned=*/false, DL); } else if (auto *GEP = dyn_cast<GEPOperator>(CE)) { // If we have GEP, we can perform the following folds: - // (ptrtoint (gep null, x)) -> x - // (ptrtoint (gep (gep null, x), y) -> x + y, etc. + // (ptrtoint/ptrtoaddr (gep null, x)) -> x + // (ptrtoint/ptrtoaddr (gep (gep null, x), y) -> x + y, etc. unsigned BitWidth = DL.getIndexTypeSizeInBits(GEP->getType()); APInt BaseOffset(BitWidth, 0); auto *Base = cast<Constant>(GEP->stripAndAccumulateConstantOffsets( @@ -1518,7 +1519,8 @@ Constant *llvm::ConstantFoldCastOperand(unsigned Opcode, Constant *C, if (Base->isNullValue()) { FoldedValue = ConstantInt::get(CE->getContext(), BaseOffset); } else { - // ptrtoint (gep i8, Ptr, (sub 0, V)) -> sub (ptrtoint Ptr), V + // ptrtoint/ptrtoaddr (gep i8, Ptr, (sub 0, V)) + // -> sub (ptrtoint/ptrtoaddr Ptr), V if (GEP->getNumIndices() == 1 && GEP->getSourceElementType()->isIntegerTy(8)) { auto *Ptr = cast<Constant>(GEP->getPointerOperand()); @@ -1528,12 +1530,13 @@ Constant *llvm::ConstantFoldCastOperand(unsigned Opcode, Constant *C, Sub->getOpcode() == Instruction::Sub && Sub->getOperand(0)->isNullValue()) FoldedValue = ConstantExpr::getSub( - ConstantExpr::getPtrToInt(Ptr, IntIdxTy), Sub->getOperand(1)); + ConstantExpr::getCast(Opcode, Ptr, IntIdxTy), + Sub->getOperand(1)); } } } if (FoldedValue) { - // Do a zext or trunc to get to the ptrtoint dest size. + // Do a zext or trunc to get to the ptrtoint/ptrtoaddr dest size. return ConstantFoldIntegerCast(FoldedValue, DestTy, /*IsSigned=*/false, DL); } diff --git a/llvm/lib/Analysis/IR2Vec.cpp b/llvm/lib/Analysis/IR2Vec.cpp index 6885351..1794a60 100644 --- a/llvm/lib/Analysis/IR2Vec.cpp +++ b/llvm/lib/Analysis/IR2Vec.cpp @@ -239,10 +239,21 @@ void FlowAwareEmbedder::computeEmbeddings(const BasicBlock &BB) const { // If the operand is defined elsewhere, we use its embedding if (const auto *DefInst = dyn_cast<Instruction>(Op)) { auto DefIt = InstVecMap.find(DefInst); - assert(DefIt != InstVecMap.end() && - "Instruction should have been processed before its operands"); - ArgEmb += DefIt->second; - continue; + // Fixme (#159171): Ideally we should never miss an instruction + // embedding here. + // But when we have cyclic dependencies (e.g., phi + // nodes), we might miss the embedding. 
In such cases, we fall back to + // using the vocabulary embedding. This can be fixed by iterating to a + // fixed-point, or by using a simple solver for the set of simultaneous + // equations. + // Another case when we might miss an instruction embedding is when + // the operand instruction is in a different basic block that has not + // been processed yet. This can be fixed by processing the basic blocks + // in a topological order. + if (DefIt != InstVecMap.end()) + ArgEmb += DefIt->second; + else + ArgEmb += Vocab[*Op]; } // If the operand is not defined by an instruction, we use the vocabulary else { diff --git a/llvm/lib/Analysis/Loads.cpp b/llvm/lib/Analysis/Loads.cpp index 4c2e1fe..54f55b2 100644 --- a/llvm/lib/Analysis/Loads.cpp +++ b/llvm/lib/Analysis/Loads.cpp @@ -812,7 +812,9 @@ static bool isPointerUseReplacable(const Use &U) { auto *User = Worklist.pop_back_val(); if (!Visited.insert(User).second) continue; - if (isa<ICmpInst, PtrToIntInst>(User)) + // FIXME: The PtrToIntInst case here is not strictly correct, as it + // changes which provenance is exposed. + if (isa<ICmpInst, PtrToIntInst, PtrToAddrInst>(User)) continue; if (isa<PHINode, SelectInst>(User)) Worklist.append(User->user_begin(), User->user_end()); diff --git a/llvm/lib/Analysis/ModuleDebugInfoPrinter.cpp b/llvm/lib/Analysis/ModuleDebugInfoPrinter.cpp index 0fbf082..f31d625 100644 --- a/llvm/lib/Analysis/ModuleDebugInfoPrinter.cpp +++ b/llvm/lib/Analysis/ModuleDebugInfoPrinter.cpp @@ -43,11 +43,13 @@ static void printModuleDebugInfo(raw_ostream &O, const Module *M, // filenames), so just print a few useful things. for (DICompileUnit *CU : Finder.compile_units()) { O << "Compile unit: "; - auto Lang = dwarf::LanguageString(CU->getSourceLanguage()); + auto Lang = + dwarf::LanguageString(CU->getSourceLanguage().getUnversionedName()); if (!Lang.empty()) O << Lang; else - O << "unknown-language(" << CU->getSourceLanguage() << ")"; + O << "unknown-language(" << CU->getSourceLanguage().getUnversionedName() + << ")"; printFile(O, CU->getFilename(), CU->getDirectory()); O << '\n'; } diff --git a/llvm/lib/Analysis/ScalarEvolution.cpp b/llvm/lib/Analysis/ScalarEvolution.cpp index 63e1b14..30bcff7 100644 --- a/llvm/lib/Analysis/ScalarEvolution.cpp +++ b/llvm/lib/Analysis/ScalarEvolution.cpp @@ -6351,19 +6351,20 @@ const SCEV *ScalarEvolution::createNodeForGEP(GEPOperator *GEP) { return getGEPExpr(GEP, IndexExprs); } -APInt ScalarEvolution::getConstantMultipleImpl(const SCEV *S) { +APInt ScalarEvolution::getConstantMultipleImpl(const SCEV *S, + const Instruction *CtxI) { uint64_t BitWidth = getTypeSizeInBits(S->getType()); auto GetShiftedByZeros = [BitWidth](uint32_t TrailingZeros) { return TrailingZeros >= BitWidth ? APInt::getZero(BitWidth) : APInt::getOneBitSet(BitWidth, TrailingZeros); }; - auto GetGCDMultiple = [this](const SCEVNAryExpr *N) { + auto GetGCDMultiple = [this, CtxI](const SCEVNAryExpr *N) { // The result is GCD of all operands results. 
- APInt Res = getConstantMultiple(N->getOperand(0)); + APInt Res = getConstantMultiple(N->getOperand(0), CtxI); for (unsigned I = 1, E = N->getNumOperands(); I < E && Res != 1; ++I) Res = APIntOps::GreatestCommonDivisor( - Res, getConstantMultiple(N->getOperand(I))); + Res, getConstantMultiple(N->getOperand(I), CtxI)); return Res; }; @@ -6371,33 +6372,33 @@ APInt ScalarEvolution::getConstantMultipleImpl(const SCEV *S) { case scConstant: return cast<SCEVConstant>(S)->getAPInt(); case scPtrToInt: - return getConstantMultiple(cast<SCEVPtrToIntExpr>(S)->getOperand()); + return getConstantMultiple(cast<SCEVPtrToIntExpr>(S)->getOperand(), CtxI); case scUDivExpr: case scVScale: return APInt(BitWidth, 1); case scTruncate: { // Only multiples that are a power of 2 will hold after truncation. const SCEVTruncateExpr *T = cast<SCEVTruncateExpr>(S); - uint32_t TZ = getMinTrailingZeros(T->getOperand()); + uint32_t TZ = getMinTrailingZeros(T->getOperand(), CtxI); return GetShiftedByZeros(TZ); } case scZeroExtend: { const SCEVZeroExtendExpr *Z = cast<SCEVZeroExtendExpr>(S); - return getConstantMultiple(Z->getOperand()).zext(BitWidth); + return getConstantMultiple(Z->getOperand(), CtxI).zext(BitWidth); } case scSignExtend: { // Only multiples that are a power of 2 will hold after sext. const SCEVSignExtendExpr *E = cast<SCEVSignExtendExpr>(S); - uint32_t TZ = getMinTrailingZeros(E->getOperand()); + uint32_t TZ = getMinTrailingZeros(E->getOperand(), CtxI); return GetShiftedByZeros(TZ); } case scMulExpr: { const SCEVMulExpr *M = cast<SCEVMulExpr>(S); if (M->hasNoUnsignedWrap()) { // The result is the product of all operand results. - APInt Res = getConstantMultiple(M->getOperand(0)); + APInt Res = getConstantMultiple(M->getOperand(0), CtxI); for (const SCEV *Operand : M->operands().drop_front()) - Res = Res * getConstantMultiple(Operand); + Res = Res * getConstantMultiple(Operand, CtxI); return Res; } @@ -6405,7 +6406,7 @@ APInt ScalarEvolution::getConstantMultipleImpl(const SCEV *S) { // sum of trailing zeros for all its operands. uint32_t TZ = 0; for (const SCEV *Operand : M->operands()) - TZ += getMinTrailingZeros(Operand); + TZ += getMinTrailingZeros(Operand, CtxI); return GetShiftedByZeros(TZ); } case scAddExpr: @@ -6414,9 +6415,9 @@ APInt ScalarEvolution::getConstantMultipleImpl(const SCEV *S) { if (N->hasNoUnsignedWrap()) return GetGCDMultiple(N); // Find the trailing bits, which is the minimum of its operands. 
- uint32_t TZ = getMinTrailingZeros(N->getOperand(0)); + uint32_t TZ = getMinTrailingZeros(N->getOperand(0), CtxI); for (const SCEV *Operand : N->operands().drop_front()) - TZ = std::min(TZ, getMinTrailingZeros(Operand)); + TZ = std::min(TZ, getMinTrailingZeros(Operand, CtxI)); return GetShiftedByZeros(TZ); } case scUMaxExpr: @@ -6429,7 +6430,7 @@ APInt ScalarEvolution::getConstantMultipleImpl(const SCEV *S) { // ask ValueTracking for known bits const SCEVUnknown *U = cast<SCEVUnknown>(S); unsigned Known = - computeKnownBits(U->getValue(), getDataLayout(), &AC, nullptr, &DT) + computeKnownBits(U->getValue(), getDataLayout(), &AC, CtxI, &DT) .countMinTrailingZeros(); return GetShiftedByZeros(Known); } @@ -6439,12 +6440,18 @@ APInt ScalarEvolution::getConstantMultipleImpl(const SCEV *S) { llvm_unreachable("Unknown SCEV kind!"); } -APInt ScalarEvolution::getConstantMultiple(const SCEV *S) { +APInt ScalarEvolution::getConstantMultiple(const SCEV *S, + const Instruction *CtxI) { + // Skip looking up and updating the cache if there is a context instruction, + // as the result will only be valid in the specified context. + if (CtxI) + return getConstantMultipleImpl(S, CtxI); + auto I = ConstantMultipleCache.find(S); if (I != ConstantMultipleCache.end()) return I->second; - APInt Result = getConstantMultipleImpl(S); + APInt Result = getConstantMultipleImpl(S, CtxI); auto InsertPair = ConstantMultipleCache.insert({S, Result}); assert(InsertPair.second && "Should insert a new key"); return InsertPair.first->second; @@ -6455,8 +6462,9 @@ APInt ScalarEvolution::getNonZeroConstantMultiple(const SCEV *S) { return Multiple == 0 ? APInt(Multiple.getBitWidth(), 1) : Multiple; } -uint32_t ScalarEvolution::getMinTrailingZeros(const SCEV *S) { - return std::min(getConstantMultiple(S).countTrailingZeros(), +uint32_t ScalarEvolution::getMinTrailingZeros(const SCEV *S, + const Instruction *CtxI) { + return std::min(getConstantMultiple(S, CtxI).countTrailingZeros(), (unsigned)getTypeSizeInBits(S->getType())); } @@ -10243,8 +10251,7 @@ const SCEV *ScalarEvolution::stripInjectiveFunctions(const SCEV *S) const { static const SCEV * SolveLinEquationWithOverflow(const APInt &A, const SCEV *B, SmallVectorImpl<const SCEVPredicate *> *Predicates, - - ScalarEvolution &SE) { + ScalarEvolution &SE, const Loop *L) { uint32_t BW = A.getBitWidth(); assert(BW == SE.getTypeSizeInBits(B->getType())); assert(A != 0 && "A must be non-zero."); @@ -10260,7 +10267,12 @@ SolveLinEquationWithOverflow(const APInt &A, const SCEV *B, // // B is divisible by D if and only if the multiplicity of prime factor 2 for B // is not less than multiplicity of this prime factor for D. - if (SE.getMinTrailingZeros(B) < Mult2) { + unsigned MinTZ = SE.getMinTrailingZeros(B); + // Try again with the terminator of the loop predecessor for a context-specific + // result, if MinTZ is too small. + if (MinTZ < Mult2 && L->getLoopPredecessor()) + MinTZ = SE.getMinTrailingZeros(B, L->getLoopPredecessor()->getTerminator()); + if (MinTZ < Mult2) { // Check if we can prove there's no remainder using URem. const SCEV *URem = SE.getURemExpr(B, SE.getConstant(APInt::getOneBitSet(BW, Mult2))); @@ -10708,7 +10720,7 @@ ScalarEvolution::ExitLimit ScalarEvolution::howFarToZero(const SCEV *V, return getCouldNotCompute(); const SCEV *E = SolveLinEquationWithOverflow( StepC->getAPInt(), getNegativeSCEV(Start), - AllowPredicates ? &Predicates : nullptr, *this); + AllowPredicates ?
&Predicates : nullptr, *this, L); const SCEV *M = E; if (E != getCouldNotCompute()) { @@ -15737,51 +15749,11 @@ void ScalarEvolution::LoopGuards::collectFromBlock( return RewriteMap.lookup_or(S, S); }; - // Check for the SCEV expression (A /u B) * B while B is a constant, inside - // \p Expr. The check is done recuresively on \p Expr, which is assumed to - // be a composition of Min/Max SCEVs. Return whether the SCEV expression (A - // /u B) * B was found, and return the divisor B in \p DividesBy. For - // example, if Expr = umin (umax ((A /u 8) * 8, 16), 64), return true since - // (A /u 8) * 8 matched the pattern, and return the constant SCEV 8 in \p - // DividesBy. - std::function<bool(const SCEV *, const SCEV *&)> HasDivisibiltyInfo = - [&](const SCEV *Expr, const SCEV *&DividesBy) { - if (auto *Mul = dyn_cast<SCEVMulExpr>(Expr)) { - if (Mul->getNumOperands() != 2) - return false; - auto *MulLHS = Mul->getOperand(0); - auto *MulRHS = Mul->getOperand(1); - if (isa<SCEVConstant>(MulLHS)) - std::swap(MulLHS, MulRHS); - if (auto *Div = dyn_cast<SCEVUDivExpr>(MulLHS)) - if (Div->getOperand(1) == MulRHS) { - DividesBy = MulRHS; - return true; - } - } - if (auto *MinMax = dyn_cast<SCEVMinMaxExpr>(Expr)) - return HasDivisibiltyInfo(MinMax->getOperand(0), DividesBy) || - HasDivisibiltyInfo(MinMax->getOperand(1), DividesBy); - return false; - }; - - // Return true if Expr known to divide by \p DividesBy. - std::function<bool(const SCEV *, const SCEV *&)> IsKnownToDivideBy = - [&](const SCEV *Expr, const SCEV *DividesBy) { - if (SE.getURemExpr(Expr, DividesBy)->isZero()) - return true; - if (auto *MinMax = dyn_cast<SCEVMinMaxExpr>(Expr)) - return IsKnownToDivideBy(MinMax->getOperand(0), DividesBy) && - IsKnownToDivideBy(MinMax->getOperand(1), DividesBy); - return false; - }; - const SCEV *RewrittenLHS = GetMaybeRewritten(LHS); const SCEV *DividesBy = nullptr; - if (HasDivisibiltyInfo(RewrittenLHS, DividesBy)) - // Check that the whole expression is divided by DividesBy - DividesBy = - IsKnownToDivideBy(RewrittenLHS, DividesBy) ? DividesBy : nullptr; + const APInt &Multiple = SE.getConstantMultiple(RewrittenLHS); + if (!Multiple.isOne()) + DividesBy = SE.getConstant(Multiple); // Collect rewrites for LHS and its transitive operands based on the // condition. 
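The divisibility test that SolveLinEquationWithOverflow performs above reduces to trailing-zero arithmetic: A*X = B (mod 2^BW) is solvable exactly when the power of two dividing A also divides B. A self-contained sketch of that check (plain C++20, independent of SCEV; assumes A != 0, mirroring the assert in the function):

.. code-block:: c++

   #include <bit>
   #include <cstdint>

   // A*X = B (mod 2^BW) is solvable iff 2^Mult2 divides B, where Mult2 is
   // the multiplicity of the prime factor 2 in A. E.g. A = 12 (Mult2 = 2)
   // requires B to be divisible by 4.
   bool hasSolution(uint64_t A, uint64_t B) {
     int Mult2 = std::countr_zero(A);     // trailing zeros of A
     return std::countr_zero(B) >= Mult2; // i.e. 2^Mult2 divides B
   }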
diff --git a/llvm/lib/AsmParser/LLParser.cpp b/llvm/lib/AsmParser/LLParser.cpp index 897e679..5589966 100644 --- a/llvm/lib/AsmParser/LLParser.cpp +++ b/llvm/lib/AsmParser/LLParser.cpp @@ -5861,11 +5861,11 @@ bool LLParser::parseDICompileUnit(MDNode *&Result, bool IsDistinct) { #undef VISIT_MD_FIELDS Result = DICompileUnit::getDistinct( - Context, language.Val, file.Val, producer.Val, isOptimized.Val, flags.Val, - runtimeVersion.Val, splitDebugFilename.Val, emissionKind.Val, enums.Val, - retainedTypes.Val, globals.Val, imports.Val, macros.Val, dwoId.Val, - splitDebugInlining.Val, debugInfoForProfiling.Val, nameTableKind.Val, - rangesBaseAddress.Val, sysroot.Val, sdk.Val); + Context, DISourceLanguageName(language.Val), file.Val, producer.Val, + isOptimized.Val, flags.Val, runtimeVersion.Val, splitDebugFilename.Val, + emissionKind.Val, enums.Val, retainedTypes.Val, globals.Val, imports.Val, + macros.Val, dwoId.Val, splitDebugInlining.Val, debugInfoForProfiling.Val, + nameTableKind.Val, rangesBaseAddress.Val, sysroot.Val, sdk.Val); return false; } diff --git a/llvm/lib/BinaryFormat/Dwarf.cpp b/llvm/lib/BinaryFormat/Dwarf.cpp index 8b24044..969047a 100644 --- a/llvm/lib/BinaryFormat/Dwarf.cpp +++ b/llvm/lib/BinaryFormat/Dwarf.cpp @@ -472,6 +472,137 @@ StringRef llvm::dwarf::LanguageDescription(dwarf::SourceLanguageName lname) { return "Unknown"; } +StringRef llvm::dwarf::LanguageDescription(dwarf::SourceLanguageName Name, + uint32_t Version) { + switch (Name) { + // YYYY + case DW_LNAME_Ada: { + if (Version <= 1983) + return "Ada 83"; + if (Version <= 1995) + return "Ada 95"; + if (Version <= 2005) + return "Ada 2005"; + if (Version <= 2012) + return "Ada 2012"; + } break; + + case DW_LNAME_Cobol: { + if (Version <= 1974) + return "COBOL-74"; + if (Version <= 1985) + return "COBOL-85"; + } break; + + case DW_LNAME_Fortran: { + if (Version <= 1977) + return "FORTRAN 77"; + if (Version <= 1990) + return "FORTRAN 90"; + if (Version <= 1995) + return "Fortran 95"; + if (Version <= 2003) + return "Fortran 2003"; + if (Version <= 2008) + return "Fortran 2008"; + if (Version <= 2018) + return "Fortran 2018"; + } break; + + // YYYYMM + case DW_LNAME_C: { + if (Version == 0) + break; + if (Version <= 198912) + return "C89"; + if (Version <= 199901) + return "C99"; + if (Version <= 201112) + return "C11"; + if (Version <= 201710) + return "C17"; + } break; + + case DW_LNAME_C_plus_plus: { + if (Version == 0) + break; + if (Version <= 199711) + return "C++98"; + if (Version <= 200310) + return "C++03"; + if (Version <= 201103) + return "C++11"; + if (Version <= 201402) + return "C++14"; + if (Version <= 201703) + return "C++17"; + if (Version <= 202002) + return "C++20"; + } break; + + case DW_LNAME_ObjC_plus_plus: + case DW_LNAME_ObjC: + case DW_LNAME_Move: + case DW_LNAME_SYCL: + case DW_LNAME_BLISS: + case DW_LNAME_Crystal: + case DW_LNAME_D: + case DW_LNAME_Dylan: + case DW_LNAME_Go: + case DW_LNAME_Haskell: + case DW_LNAME_HLSL: + case DW_LNAME_Java: + case DW_LNAME_Julia: + case DW_LNAME_Kotlin: + case DW_LNAME_Modula2: + case DW_LNAME_Modula3: + case DW_LNAME_OCaml: + case DW_LNAME_OpenCL_C: + case DW_LNAME_Pascal: + case DW_LNAME_PLI: + case DW_LNAME_Python: + case DW_LNAME_RenderScript: + case DW_LNAME_Rust: + case DW_LNAME_Swift: + case DW_LNAME_UPC: + case DW_LNAME_Zig: + case DW_LNAME_Assembly: + case DW_LNAME_C_sharp: + case DW_LNAME_Mojo: + case DW_LNAME_GLSL: + case DW_LNAME_GLSL_ES: + case DW_LNAME_OpenCL_CPP: + case DW_LNAME_CPP_for_OpenCL: + case DW_LNAME_Ruby: + case DW_LNAME_Hylo: + 
case DW_LNAME_Metal: + break; + } + + // Fallback to un-versioned name. + return LanguageDescription(Name); +} + +llvm::StringRef llvm::dwarf::SourceLanguageNameString(SourceLanguageName Lang) { + switch (Lang) { +#define HANDLE_DW_LNAME(ID, NAME, DESC, LOWER_BOUND) \ + case DW_LNAME_##NAME: \ + return "DW_LNAME_" #NAME; +#include "llvm/BinaryFormat/Dwarf.def" + } + + return {}; +} + +unsigned +llvm::dwarf::getSourceLanguageName(StringRef SourceLanguageNameString) { + return StringSwitch<unsigned>(SourceLanguageNameString) +#define HANDLE_DW_LNAME(ID, NAME, DESC, LOWER_BOUND) \ + .Case("DW_LNAME_" #NAME, DW_LNAME_##NAME) +#include "llvm/BinaryFormat/Dwarf.def" + .Default(0); +} + StringRef llvm::dwarf::CaseString(unsigned Case) { switch (Case) { case DW_ID_case_sensitive: diff --git a/llvm/lib/Bitcode/Reader/MetadataLoader.cpp b/llvm/lib/Bitcode/Reader/MetadataLoader.cpp index 22c7fa5..a4d1b83 100644 --- a/llvm/lib/Bitcode/Reader/MetadataLoader.cpp +++ b/llvm/lib/Bitcode/Reader/MetadataLoader.cpp @@ -1866,11 +1866,13 @@ Error MetadataLoader::MetadataLoaderImpl::parseOneMetadata( // Ignore Record[0], which indicates whether this compile unit is // distinct. It's always distinct. IsDistinct = true; + auto *CU = DICompileUnit::getDistinct( - Context, Record[1], getMDOrNull(Record[2]), getMDString(Record[3]), - Record[4], getMDString(Record[5]), Record[6], getMDString(Record[7]), - Record[8], getMDOrNull(Record[9]), getMDOrNull(Record[10]), - getMDOrNull(Record[12]), getMDOrNull(Record[13]), + Context, DISourceLanguageName(Record[1]), getMDOrNull(Record[2]), + getMDString(Record[3]), Record[4], getMDString(Record[5]), Record[6], + getMDString(Record[7]), Record[8], getMDOrNull(Record[9]), + getMDOrNull(Record[10]), getMDOrNull(Record[12]), + getMDOrNull(Record[13]), Record.size() <= 15 ? nullptr : getMDOrNull(Record[15]), Record.size() <= 14 ? 0 : Record[14], Record.size() <= 16 ? 
true : Record[16], diff --git a/llvm/lib/Bitcode/Writer/BitcodeWriter.cpp b/llvm/lib/Bitcode/Writer/BitcodeWriter.cpp index 6d86809..7ed140d 100644 --- a/llvm/lib/Bitcode/Writer/BitcodeWriter.cpp +++ b/llvm/lib/Bitcode/Writer/BitcodeWriter.cpp @@ -2107,7 +2107,8 @@ void ModuleBitcodeWriter::writeDICompileUnit(const DICompileUnit *N, unsigned Abbrev) { assert(N->isDistinct() && "Expected distinct compile units"); Record.push_back(/* IsDistinct */ true); - Record.push_back(N->getSourceLanguage()); + + Record.push_back(N->getSourceLanguage().getUnversionedName()); Record.push_back(VE.getMetadataOrNullID(N->getFile())); Record.push_back(VE.getMetadataOrNullID(N->getRawProducer())); Record.push_back(N->isOptimized()); diff --git a/llvm/lib/CodeGen/AsmPrinter/CodeViewDebug.cpp b/llvm/lib/CodeGen/AsmPrinter/CodeViewDebug.cpp index c5d6e40..12d749c 100644 --- a/llvm/lib/CodeGen/AsmPrinter/CodeViewDebug.cpp +++ b/llvm/lib/CodeGen/AsmPrinter/CodeViewDebug.cpp @@ -633,8 +633,8 @@ void CodeViewDebug::beginModule(Module *M) { Node = *CUs->operands().begin(); } const auto *CU = cast<DICompileUnit>(Node); - - CurrentSourceLanguage = MapDWLangToCVLang(CU->getSourceLanguage()); + CurrentSourceLanguage = + MapDWLangToCVLang(CU->getSourceLanguage().getUnversionedName()); if (!M->getCodeViewFlag() || CU->getEmissionKind() == DICompileUnit::NoDebug) { Asm = nullptr; diff --git a/llvm/lib/CodeGen/AsmPrinter/DwarfDebug.cpp b/llvm/lib/CodeGen/AsmPrinter/DwarfDebug.cpp index 09d5f9c..d751a7f 100644 --- a/llvm/lib/CodeGen/AsmPrinter/DwarfDebug.cpp +++ b/llvm/lib/CodeGen/AsmPrinter/DwarfDebug.cpp @@ -1040,7 +1040,8 @@ void DwarfDebug::finishUnitAttributes(const DICompileUnit *DIUnit, NewCU.addString(Die, dwarf::DW_AT_producer, Producer); NewCU.addUInt(Die, dwarf::DW_AT_language, dwarf::DW_FORM_data2, - DIUnit->getSourceLanguage()); + DIUnit->getSourceLanguage().getUnversionedName()); + NewCU.addString(Die, dwarf::DW_AT_name, FN); StringRef SysRoot = DIUnit->getSysRoot(); if (!SysRoot.empty()) @@ -2930,10 +2931,9 @@ static dwarf::PubIndexEntryDescriptor computeIndexValue(DwarfUnit *CU, case dwarf::DW_TAG_union_type: case dwarf::DW_TAG_enumeration_type: return dwarf::PubIndexEntryDescriptor( - dwarf::GIEK_TYPE, - dwarf::isCPlusPlus((dwarf::SourceLanguage)CU->getLanguage()) - ? dwarf::GIEL_EXTERNAL - : dwarf::GIEL_STATIC); + dwarf::GIEK_TYPE, dwarf::isCPlusPlus(CU->getSourceLanguage()) + ? 
dwarf::GIEL_EXTERNAL + : dwarf::GIEL_STATIC); case dwarf::DW_TAG_typedef: case dwarf::DW_TAG_base_type: case dwarf::DW_TAG_subrange_type: @@ -3926,7 +3926,7 @@ void DwarfDebug::addDwarfTypeUnitType(DwarfCompileUnit &CU, TypeUnitsUnderConstruction.emplace_back(std::move(OwnedUnit), CTy); NewTU.addUInt(UnitDie, dwarf::DW_AT_language, dwarf::DW_FORM_data2, - CU.getLanguage()); + CU.getSourceLanguage()); uint64_t Signature = makeTypeSignature(Identifier); NewTU.setTypeSignature(Signature); diff --git a/llvm/lib/CodeGen/AsmPrinter/DwarfUnit.cpp b/llvm/lib/CodeGen/AsmPrinter/DwarfUnit.cpp index 3cfe7cc..aa078f3 100644 --- a/llvm/lib/CodeGen/AsmPrinter/DwarfUnit.cpp +++ b/llvm/lib/CodeGen/AsmPrinter/DwarfUnit.cpp @@ -100,7 +100,7 @@ DwarfUnit::~DwarfUnit() { } int64_t DwarfUnit::getDefaultLowerBound() const { - switch (getLanguage()) { + switch (getSourceLanguage()) { default: break; @@ -704,12 +704,17 @@ void DwarfUnit::addType(DIE &Entity, const DIType *Ty, addDIEEntry(Entity, Attribute, DIEEntry(*getOrCreateTypeDIE(Ty))); } +llvm::dwarf::SourceLanguage DwarfUnit::getSourceLanguage() const { + return static_cast<llvm::dwarf::SourceLanguage>( + getLanguage().getUnversionedName()); +} + std::string DwarfUnit::getParentContextString(const DIScope *Context) const { if (!Context) return ""; // FIXME: Decide whether to implement this for non-C++ languages. - if (!dwarf::isCPlusPlus((dwarf::SourceLanguage)getLanguage())) + if (!dwarf::isCPlusPlus(getSourceLanguage())) return ""; std::string CS; @@ -940,7 +945,7 @@ void DwarfUnit::constructTypeDIE(DIE &Buffer, const DISubroutineType *CTy) { // Add prototype flag if we're dealing with a C language and the function has // been prototyped. - if (isPrototyped && dwarf::isC((dwarf::SourceLanguage)getLanguage())) + if (isPrototyped && dwarf::isC(getSourceLanguage())) addFlag(Buffer, dwarf::DW_AT_prototyped); // Add a DW_AT_calling_convention if this has an explicit convention. @@ -1448,7 +1453,7 @@ void DwarfUnit::applySubprogramAttributes(const DISubprogram *SP, DIE &SPDie, // Add the prototype if we have a prototype and we have a C like // language. 
- if (SP->isPrototyped() && dwarf::isC((dwarf::SourceLanguage)getLanguage())) + if (SP->isPrototyped() && dwarf::isC(getSourceLanguage())) addFlag(SPDie, dwarf::DW_AT_prototyped); if (SP->isObjCDirect()) @@ -1700,8 +1705,7 @@ DIE *DwarfUnit::getIndexTyDie() { addString(*IndexTyDie, dwarf::DW_AT_name, Name); addUInt(*IndexTyDie, dwarf::DW_AT_byte_size, std::nullopt, sizeof(int64_t)); addUInt(*IndexTyDie, dwarf::DW_AT_encoding, dwarf::DW_FORM_data1, - dwarf::getArrayIndexTypeEncoding( - (dwarf::SourceLanguage)getLanguage())); + dwarf::getArrayIndexTypeEncoding(getSourceLanguage())); DD->addAccelType(*this, CUNode->getNameTableKind(), Name, *IndexTyDie, /*Flags*/ 0); return IndexTyDie; diff --git a/llvm/lib/CodeGen/AsmPrinter/DwarfUnit.h b/llvm/lib/CodeGen/AsmPrinter/DwarfUnit.h index bb00ec3..9288d7e 100644 --- a/llvm/lib/CodeGen/AsmPrinter/DwarfUnit.h +++ b/llvm/lib/CodeGen/AsmPrinter/DwarfUnit.h @@ -17,6 +17,7 @@ #include "llvm/ADT/DenseMap.h" #include "llvm/CodeGen/AsmPrinter.h" #include "llvm/CodeGen/DIE.h" +#include "llvm/IR/DebugInfoMetadata.h" #include "llvm/Target/TargetMachine.h" #include <optional> #include <string> @@ -107,7 +108,7 @@ public: return LabelBegin; } MCSymbol *getEndLabel() const { return EndLabel; } - uint16_t getLanguage() const { return CUNode->getSourceLanguage(); } + llvm::dwarf::SourceLanguage getSourceLanguage() const; const DICompileUnit *getCUNode() const { return CUNode; } DwarfDebug &getDwarfDebug() const { return *DD; } @@ -358,6 +359,10 @@ protected: } private: + DISourceLanguageName getLanguage() const { + return CUNode->getSourceLanguage(); + } + /// A helper to add a wide integer constant to a DIE using a block /// form. void addIntAsBlock(DIE &Die, dwarf::Attribute Attribute, const APInt &Val); diff --git a/llvm/lib/CodeGen/GlobalISel/CombinerHelper.cpp b/llvm/lib/CodeGen/GlobalISel/CombinerHelper.cpp index fa0ccd6..906d62a3 100644 --- a/llvm/lib/CodeGen/GlobalISel/CombinerHelper.cpp +++ b/llvm/lib/CodeGen/GlobalISel/CombinerHelper.cpp @@ -1215,7 +1215,7 @@ bool CombinerHelper::isIndexedLoadStoreLegal(GLoadStore &LdSt) const { LLT MemTy = LdSt.getMMO().getMemoryType(); SmallVector<LegalityQuery::MemDesc, 2> MemDescrs( {{MemTy, MemTy.getSizeInBits().getKnownMinValue(), - AtomicOrdering::NotAtomic}}); + AtomicOrdering::NotAtomic, AtomicOrdering::NotAtomic}}); unsigned IndexedOpc = getIndexedOpc(LdSt.getOpcode()); SmallVector<LLT> OpTys; if (IndexedOpc == TargetOpcode::G_INDEXED_STORE) diff --git a/llvm/lib/CodeGen/GlobalISel/LoadStoreOpt.cpp b/llvm/lib/CodeGen/GlobalISel/LoadStoreOpt.cpp index b2f8435..cdc1f64 100644 --- a/llvm/lib/CodeGen/GlobalISel/LoadStoreOpt.cpp +++ b/llvm/lib/CodeGen/GlobalISel/LoadStoreOpt.cpp @@ -958,7 +958,8 @@ void LoadStoreOpt::initializeStoreMergeTargetInfo(unsigned AddrSpace) { for (unsigned Size = 2; Size <= MaxStoreSizeToForm; Size *= 2) { LLT Ty = LLT::scalar(Size); SmallVector<LegalityQuery::MemDesc, 2> MemDescrs( - {{Ty, Ty.getSizeInBits(), AtomicOrdering::NotAtomic}}); + {{Ty, Ty.getSizeInBits(), AtomicOrdering::NotAtomic, + AtomicOrdering::NotAtomic}}); SmallVector<LLT> StoreTys({Ty, PtrTy}); LegalityQuery Q(TargetOpcode::G_STORE, StoreTys, MemDescrs); LegalizeActionStep ActionStep = LI.getAction(Q); diff --git a/llvm/lib/CodeGen/MIR2Vec.cpp b/llvm/lib/CodeGen/MIR2Vec.cpp index 87565c0..e859765 100644 --- a/llvm/lib/CodeGen/MIR2Vec.cpp +++ b/llvm/lib/CodeGen/MIR2Vec.cpp @@ -49,14 +49,8 @@ cl::opt<float> OpcWeight("mir2vec-opc-weight", cl::Optional, cl::init(1.0), 
//===----------------------------------------------------------------------===// MIRVocabulary::MIRVocabulary(VocabMap &&OpcodeEntries, - const TargetInstrInfo *TII) - : TII(*TII) { - // Fixme: Use static factory methods for creating vocabularies instead of - // public constructors - // Early return for invalid inputs - creates empty/invalid vocabulary - if (!TII || OpcodeEntries.empty()) - return; - + const TargetInstrInfo &TII) + : TII(TII) { buildCanonicalOpcodeMapping(); unsigned CanonicalOpcodeCount = UniqueBaseOpcodeNames.size(); @@ -67,6 +61,15 @@ MIRVocabulary::MIRVocabulary(VocabMap &&OpcodeEntries, Layout.TotalEntries = Storage.size(); } +Expected<MIRVocabulary> MIRVocabulary::create(VocabMap &&Entries, + const TargetInstrInfo &TII) { + if (Entries.empty()) + return createStringError(errc::invalid_argument, + "Empty vocabulary entries provided"); + + return MIRVocabulary(std::move(Entries), TII); +} + std::string MIRVocabulary::extractBaseOpcodeName(StringRef InstrName) { // Extract base instruction name using regex to capture letters and // underscores Examples: "ADD32rr" -> "ADD", "ARITH_FENCE" -> "ARITH_FENCE" @@ -107,13 +110,11 @@ unsigned MIRVocabulary::getCanonicalIndexForBaseName(StringRef BaseName) const { } unsigned MIRVocabulary::getCanonicalOpcodeIndex(unsigned Opcode) const { - assert(isValid() && "MIR2Vec Vocabulary is invalid"); auto BaseOpcode = extractBaseOpcodeName(TII.getName(Opcode)); return getCanonicalIndexForBaseName(BaseOpcode); } std::string MIRVocabulary::getStringKey(unsigned Pos) const { - assert(isValid() && "MIR2Vec Vocabulary is invalid"); assert(Pos < Layout.TotalEntries && "Position out of bounds in vocabulary"); // For now, all entries are opcodes since we only have one section @@ -232,16 +233,11 @@ Error MIR2VecVocabLegacyAnalysis::readVocabulary() { return Error::success(); } -void MIR2VecVocabLegacyAnalysis::emitError(Error Err, LLVMContext &Ctx) { - Ctx.emitError(toString(std::move(Err))); -} - -mir2vec::MIRVocabulary +Expected<mir2vec::MIRVocabulary> MIR2VecVocabLegacyAnalysis::getMIR2VecVocabulary(const Module &M) { if (StrVocabMap.empty()) { if (Error Err = readVocabulary()) { - emitError(std::move(Err), M.getContext()); - return mir2vec::MIRVocabulary(std::move(StrVocabMap), nullptr); + return std::move(Err); } } @@ -255,15 +251,13 @@ MIR2VecVocabLegacyAnalysis::getMIR2VecVocabulary(const Module &M) { if (auto *MF = MMI.getMachineFunction(F)) { const TargetInstrInfo *TII = MF->getSubtarget().getInstrInfo(); - return mir2vec::MIRVocabulary(std::move(StrVocabMap), TII); + return mir2vec::MIRVocabulary::create(std::move(StrVocabMap), *TII); } } - // No machine functions available - return invalid vocabulary - emitError(make_error<StringError>("No machine functions found in module", - inconvertibleErrorCode()), - M.getContext()); - return mir2vec::MIRVocabulary(std::move(StrVocabMap), nullptr); + // No machine functions available - return error + return createStringError(errc::invalid_argument, + "No machine functions found in module"); } //===----------------------------------------------------------------------===// @@ -284,13 +278,15 @@ bool MIR2VecVocabPrinterLegacyPass::runOnMachineFunction(MachineFunction &MF) { bool MIR2VecVocabPrinterLegacyPass::doFinalization(Module &M) { auto &Analysis = getAnalysis<MIR2VecVocabLegacyAnalysis>(); - auto MIR2VecVocab = Analysis.getMIR2VecVocabulary(M); + auto MIR2VecVocabOrErr = Analysis.getMIR2VecVocabulary(M); - if (!MIR2VecVocab.isValid()) { - OS << "MIR2Vec Vocabulary Printer: Invalid 
vocabulary\n"; + if (!MIR2VecVocabOrErr) { + OS << "MIR2Vec Vocabulary Printer: Failed to get vocabulary - " + << toString(MIR2VecVocabOrErr.takeError()) << "\n"; return false; } + auto &MIR2VecVocab = *MIR2VecVocabOrErr; unsigned Pos = 0; for (const auto &Entry : MIR2VecVocab) { OS << "Key: " << MIR2VecVocab.getStringKey(Pos++) << ": "; diff --git a/llvm/lib/CodeGen/MachinePipeliner.cpp b/llvm/lib/CodeGen/MachinePipeliner.cpp index 3a9651c..89ed4da 100644 --- a/llvm/lib/CodeGen/MachinePipeliner.cpp +++ b/llvm/lib/CodeGen/MachinePipeliner.cpp @@ -110,6 +110,7 @@ STATISTIC(NumFailZeroMII, "Pipeliner abort due to zero MII"); STATISTIC(NumFailNoSchedule, "Pipeliner abort due to no schedule found"); STATISTIC(NumFailZeroStage, "Pipeliner abort due to zero stage"); STATISTIC(NumFailLargeMaxStage, "Pipeliner abort due to too many stages"); +STATISTIC(NumFailTooManyStores, "Pipeliner abort due to too many stores"); /// A command line option to turn software pipelining on or off. static cl::opt<bool> EnableSWP("enable-pipeliner", cl::Hidden, cl::init(true), @@ -193,6 +194,13 @@ static cl::opt<bool> MVECodeGen("pipeliner-mve-cg", cl::Hidden, cl::init(false), cl::desc("Use the MVE code generator for software pipelining")); +/// A command line argument to limit the number of store instructions in the +/// target basic block. +static cl::opt<unsigned> SwpMaxNumStores( + "pipeliner-max-num-stores", + cl::desc("Maximum number of stores allwed in the target loop."), cl::Hidden, + cl::init(200)); + namespace llvm { // A command line option to enable the CopyToPhi DAG mutation. @@ -544,6 +552,23 @@ bool MachinePipeliner::canPipelineLoop(MachineLoop &L) { return false; } + unsigned NumStores = 0; + for (MachineInstr &MI : *L.getHeader()) + if (MI.mayStore()) + ++NumStores; + if (NumStores > SwpMaxNumStores) { + LLVM_DEBUG(dbgs() << "Too many stores\n"); + NumFailTooManyStores++; + ORE->emit([&]() { + return MachineOptimizationRemarkAnalysis(DEBUG_TYPE, "canPipelineLoop", + L.getStartLoc(), L.getHeader()) + << "Too many store instructions in the loop: " + << ore::NV("NumStores", NumStores) << " > " + << ore::NV("SwpMaxNumStores", SwpMaxNumStores) << "."; + }); + return false; + } + // Remove any subregisters from inputs to phi nodes. preprocessPhiNodes(*L.getHeader()); return true; diff --git a/llvm/lib/CodeGen/SelectionDAG/FastISel.cpp b/llvm/lib/CodeGen/SelectionDAG/FastISel.cpp index 851d445..507b2d6 100644 --- a/llvm/lib/CodeGen/SelectionDAG/FastISel.cpp +++ b/llvm/lib/CodeGen/SelectionDAG/FastISel.cpp @@ -1843,7 +1843,8 @@ bool FastISel::selectOperator(const User *I, unsigned Opcode) { return selectCast(I, ISD::SINT_TO_FP); case Instruction::IntToPtr: // Deliberate fall-through. 
- case Instruction::PtrToInt: { + case Instruction::PtrToInt: + case Instruction::PtrToAddr: { EVT SrcVT = TLI.getValueType(DL, I->getOperand(0)->getType()); EVT DstVT = TLI.getValueType(DL, I->getType()); if (DstVT.bitsGT(SrcVT)) diff --git a/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp b/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp index 87d5453..3b5f83f 100644 --- a/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp +++ b/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp @@ -3416,7 +3416,7 @@ void DAGTypeLegalizer::SplitVecRes_PARTIAL_REDUCE_MLA(SDNode *N, SDValue &Lo, SDValue Input2 = N->getOperand(2); SDValue AccLo, AccHi; - std::tie(AccLo, AccHi) = DAG.SplitVector(Acc, DL); + GetSplitVector(Acc, AccLo, AccHi); unsigned Opcode = N->getOpcode(); // If the input types don't need splitting, just accumulate into the @@ -3429,8 +3429,8 @@ void DAGTypeLegalizer::SplitVecRes_PARTIAL_REDUCE_MLA(SDNode *N, SDValue &Lo, SDValue Input1Lo, Input1Hi; SDValue Input2Lo, Input2Hi; - std::tie(Input1Lo, Input1Hi) = DAG.SplitVector(Input1, DL); - std::tie(Input2Lo, Input2Hi) = DAG.SplitVector(Input2, DL); + GetSplitVector(Input1, Input1Lo, Input1Hi); + GetSplitVector(Input2, Input2Lo, Input2Hi); EVT ResultVT = AccLo.getValueType(); Lo = DAG.getNode(Opcode, DL, ResultVT, AccLo, Input1Lo, Input2Lo); @@ -4761,8 +4761,8 @@ SDValue DAGTypeLegalizer::SplitVecOp_PARTIAL_REDUCE_MLA(SDNode *N) { SDLoc DL(N); SDValue Input1Lo, Input1Hi, Input2Lo, Input2Hi; - std::tie(Input1Lo, Input1Hi) = DAG.SplitVector(N->getOperand(1), DL); - std::tie(Input2Lo, Input2Hi) = DAG.SplitVector(N->getOperand(2), DL); + GetSplitVector(N->getOperand(1), Input1Lo, Input1Hi); + GetSplitVector(N->getOperand(2), Input2Lo, Input2Hi); unsigned Opcode = N->getOpcode(); EVT ResultVT = Acc.getValueType(); diff --git a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGISel.cpp b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGISel.cpp index c35f29d..175753f 100644 --- a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGISel.cpp +++ b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGISel.cpp @@ -571,7 +571,7 @@ bool SelectionDAGISel::runOnMachineFunction(MachineFunction &mf) { SwiftError->setFunction(mf); const Function &Fn = mf.getFunction(); - bool InstrRef = mf.shouldUseDebugInstrRef(); + bool InstrRef = mf.useDebugInstrRef(); FuncInfo->set(MF->getFunction(), *MF, CurDAG); diff --git a/llvm/lib/Frontend/HLSL/RootSignatureMetadata.cpp b/llvm/lib/Frontend/HLSL/RootSignatureMetadata.cpp index 707f0c3..132a280 100644 --- a/llvm/lib/Frontend/HLSL/RootSignatureMetadata.cpp +++ b/llvm/lib/Frontend/HLSL/RootSignatureMetadata.cpp @@ -24,15 +24,7 @@ namespace llvm { namespace hlsl { namespace rootsig { -char GenericRSMetadataError::ID; -char InvalidRSMetadataFormat::ID; -char InvalidRSMetadataValue::ID; -char TableSamplerMixinError::ID; -char ShaderRegisterOverflowError::ID; -char OffsetOverflowError::ID; -char OffsetAppendAfterOverflow::ID; - -template <typename T> char RootSignatureValidationError<T>::ID; +char RootSignatureValidationError::ID; static std::optional<uint32_t> extractMdIntValue(MDNode *Node, unsigned int OpId) { @@ -57,20 +49,6 @@ static std::optional<StringRef> extractMdStringValue(MDNode *Node, return NodeText->getString(); } -template <typename T, typename = std::enable_if_t< - std::is_enum_v<T> && - std::is_same_v<std::underlying_type_t<T>, uint32_t>>> -static Expected<T> -extractEnumValue(MDNode *Node, unsigned int OpId, StringRef ErrText, - llvm::function_ref<bool(uint32_t)> VerifyFn) { - if (std::optional<uint32_t> Val = 
extractMdIntValue(Node, OpId)) { - if (!VerifyFn(*Val)) - return make_error<RootSignatureValidationError<uint32_t>>(ErrText, *Val); - return static_cast<T>(*Val); - } - return make_error<InvalidRSMetadataValue>("ShaderVisibility"); -} - namespace { // We use the OverloadVisit with std::visit to ensure the compiler catches if a @@ -81,8 +59,52 @@ template <class... Ts> struct OverloadedVisit : Ts... { }; template <class... Ts> OverloadedVisit(Ts...) -> OverloadedVisit<Ts...>; +struct FmtRange { + dxil::ResourceClass Type; + uint32_t Register; + uint32_t Space; + + FmtRange(const mcdxbc::DescriptorRange &Range) + : Type(Range.RangeType), Register(Range.BaseShaderRegister), + Space(Range.RegisterSpace) {} +}; + +raw_ostream &operator<<(llvm::raw_ostream &OS, const FmtRange &Range) { + OS << getResourceClassName(Range.Type) << "(register=" << Range.Register + << ", space=" << Range.Space << ")"; + return OS; +} + +struct FmtMDNode { + const MDNode *Node; + + FmtMDNode(const MDNode *Node) : Node(Node) {} +}; + +raw_ostream &operator<<(llvm::raw_ostream &OS, FmtMDNode Fmt) { + Fmt.Node->printTree(OS); + return OS; +} + +static Error makeRSError(const Twine &Msg) { + return make_error<RootSignatureValidationError>(Msg); +} } // namespace +template <typename T, typename = std::enable_if_t< + std::is_enum_v<T> && + std::is_same_v<std::underlying_type_t<T>, uint32_t>>> +static Expected<T> +extractEnumValue(MDNode *Node, unsigned int OpId, StringRef ErrText, + llvm::function_ref<bool(uint32_t)> VerifyFn) { + if (std::optional<uint32_t> Val = extractMdIntValue(Node, OpId)) { + if (!VerifyFn(*Val)) + return makeRSError(formatv("Invalid value for {0}: {1}", ErrText, Val)); + return static_cast<T>(*Val); + } + return makeRSError(formatv("Invalid value for {0}:", ErrText)); +} + MDNode *MetadataBuilder::BuildRootSignature() { const auto Visitor = OverloadedVisit{ [this](const dxbc::RootFlags &Flags) -> MDNode * { @@ -226,12 +248,12 @@ MDNode *MetadataBuilder::BuildStaticSampler(const StaticSampler &Sampler) { Error MetadataParser::parseRootFlags(mcdxbc::RootSignatureDesc &RSD, MDNode *RootFlagNode) { if (RootFlagNode->getNumOperands() != 2) - return make_error<InvalidRSMetadataFormat>("RootFlag Element"); + return makeRSError("Invalid format for RootFlags Element"); if (std::optional<uint32_t> Val = extractMdIntValue(RootFlagNode, 1)) RSD.Flags = *Val; else - return make_error<InvalidRSMetadataValue>("RootFlag"); + return makeRSError("Invalid value for RootFlag"); return Error::success(); } @@ -239,7 +261,7 @@ Error MetadataParser::parseRootFlags(mcdxbc::RootSignatureDesc &RSD, Error MetadataParser::parseRootConstants(mcdxbc::RootSignatureDesc &RSD, MDNode *RootConstantNode) { if (RootConstantNode->getNumOperands() != 5) - return make_error<InvalidRSMetadataFormat>("RootConstants Element"); + return makeRSError("Invalid format for RootConstants Element"); Expected<dxbc::ShaderVisibility> Visibility = extractEnumValue<dxbc::ShaderVisibility>(RootConstantNode, 1, @@ -252,17 +274,17 @@ Error MetadataParser::parseRootConstants(mcdxbc::RootSignatureDesc &RSD, if (std::optional<uint32_t> Val = extractMdIntValue(RootConstantNode, 2)) Constants.ShaderRegister = *Val; else - return make_error<InvalidRSMetadataValue>("ShaderRegister"); + return makeRSError("Invalid value for ShaderRegister"); if (std::optional<uint32_t> Val = extractMdIntValue(RootConstantNode, 3)) Constants.RegisterSpace = *Val; else - return make_error<InvalidRSMetadataValue>("RegisterSpace"); + return makeRSError("Invalid value for 
RegisterSpace"); if (std::optional<uint32_t> Val = extractMdIntValue(RootConstantNode, 4)) Constants.Num32BitValues = *Val; else - return make_error<InvalidRSMetadataValue>("Num32BitValues"); + return makeRSError("Invalid value for Num32BitValues"); RSD.ParametersContainer.addParameter(dxbc::RootParameterType::Constants32Bit, *Visibility, Constants); @@ -279,7 +301,7 @@ Error MetadataParser::parseRootDescriptors( "parseRootDescriptors should only be called with RootDescriptor " "element kind."); if (RootDescriptorNode->getNumOperands() != 5) - return make_error<InvalidRSMetadataFormat>("Root Descriptor Element"); + return makeRSError("Invalid format for Root Descriptor Element"); dxbc::RootParameterType Type; switch (ElementKind) { @@ -308,23 +330,17 @@ Error MetadataParser::parseRootDescriptors( if (std::optional<uint32_t> Val = extractMdIntValue(RootDescriptorNode, 2)) Descriptor.ShaderRegister = *Val; else - return make_error<InvalidRSMetadataValue>("ShaderRegister"); + return makeRSError("Invalid value for ShaderRegister"); if (std::optional<uint32_t> Val = extractMdIntValue(RootDescriptorNode, 3)) Descriptor.RegisterSpace = *Val; else - return make_error<InvalidRSMetadataValue>("RegisterSpace"); - - if (RSD.Version == 1) { - RSD.ParametersContainer.addParameter(Type, *Visibility, Descriptor); - return Error::success(); - } - assert(RSD.Version > 1); + return makeRSError("Invalid value for RegisterSpace"); if (std::optional<uint32_t> Val = extractMdIntValue(RootDescriptorNode, 4)) Descriptor.Flags = *Val; else - return make_error<InvalidRSMetadataValue>("Root Descriptor Flags"); + return makeRSError("Invalid value for Root Descriptor Flags"); RSD.ParametersContainer.addParameter(Type, *Visibility, Descriptor); return Error::success(); @@ -333,7 +349,7 @@ Error MetadataParser::parseRootDescriptors( Error MetadataParser::parseDescriptorRange(mcdxbc::DescriptorTable &Table, MDNode *RangeDescriptorNode) { if (RangeDescriptorNode->getNumOperands() != 6) - return make_error<InvalidRSMetadataFormat>("Descriptor Range"); + return makeRSError("Invalid format for Descriptor Range"); mcdxbc::DescriptorRange Range; @@ -341,7 +357,7 @@ Error MetadataParser::parseDescriptorRange(mcdxbc::DescriptorTable &Table, extractMdStringValue(RangeDescriptorNode, 0); if (!ElementText.has_value()) - return make_error<InvalidRSMetadataFormat>("Descriptor Range"); + return makeRSError("Invalid format for Descriptor Range"); if (*ElementText == "CBV") Range.RangeType = dxil::ResourceClass::CBuffer; @@ -352,35 +368,34 @@ Error MetadataParser::parseDescriptorRange(mcdxbc::DescriptorTable &Table, else if (*ElementText == "Sampler") Range.RangeType = dxil::ResourceClass::Sampler; else - return make_error<GenericRSMetadataError>("Invalid Descriptor Range type.", - RangeDescriptorNode); + return makeRSError(formatv("Invalid Descriptor Range type.\n{0}", + FmtMDNode{RangeDescriptorNode})); if (std::optional<uint32_t> Val = extractMdIntValue(RangeDescriptorNode, 1)) Range.NumDescriptors = *Val; else - return make_error<GenericRSMetadataError>("Number of Descriptor in Range", - RangeDescriptorNode); + return makeRSError(formatv("Invalid number of Descriptor in Range.\n{0}", + FmtMDNode{RangeDescriptorNode})); if (std::optional<uint32_t> Val = extractMdIntValue(RangeDescriptorNode, 2)) Range.BaseShaderRegister = *Val; else - return make_error<InvalidRSMetadataValue>("BaseShaderRegister"); + return makeRSError("Invalid value for BaseShaderRegister"); if (std::optional<uint32_t> Val = extractMdIntValue(RangeDescriptorNode, 3)) 
Range.RegisterSpace = *Val; else - return make_error<InvalidRSMetadataValue>("RegisterSpace"); + return makeRSError("Invalid value for RegisterSpace"); if (std::optional<uint32_t> Val = extractMdIntValue(RangeDescriptorNode, 4)) Range.OffsetInDescriptorsFromTableStart = *Val; else - return make_error<InvalidRSMetadataValue>( - "OffsetInDescriptorsFromTableStart"); + return makeRSError("Invalid value for OffsetInDescriptorsFromTableStart"); if (std::optional<uint32_t> Val = extractMdIntValue(RangeDescriptorNode, 5)) Range.Flags = *Val; else - return make_error<InvalidRSMetadataValue>("Descriptor Range Flags"); + return makeRSError("Invalid value for Descriptor Range Flags"); Table.Ranges.push_back(Range); return Error::success(); @@ -390,7 +405,7 @@ Error MetadataParser::parseDescriptorTable(mcdxbc::RootSignatureDesc &RSD, MDNode *DescriptorTableNode) { const unsigned int NumOperands = DescriptorTableNode->getNumOperands(); if (NumOperands < 2) - return make_error<InvalidRSMetadataFormat>("Descriptor Table"); + return makeRSError("Invalid format for Descriptor Table"); Expected<dxbc::ShaderVisibility> Visibility = extractEnumValue<dxbc::ShaderVisibility>(DescriptorTableNode, 1, @@ -404,8 +419,8 @@ Error MetadataParser::parseDescriptorTable(mcdxbc::RootSignatureDesc &RSD, for (unsigned int I = 2; I < NumOperands; I++) { MDNode *Element = dyn_cast<MDNode>(DescriptorTableNode->getOperand(I)); if (Element == nullptr) - return make_error<GenericRSMetadataError>( - "Missing Root Element Metadata Node.", DescriptorTableNode); + return makeRSError(formatv("Missing Root Element Metadata Node.\n{0}", + FmtMDNode{DescriptorTableNode})); if (auto Err = parseDescriptorRange(Table, Element)) return Err; @@ -419,7 +434,7 @@ Error MetadataParser::parseDescriptorTable(mcdxbc::RootSignatureDesc &RSD, Error MetadataParser::parseStaticSampler(mcdxbc::RootSignatureDesc &RSD, MDNode *StaticSamplerNode) { if (StaticSamplerNode->getNumOperands() != 15) - return make_error<InvalidRSMetadataFormat>("Static Sampler"); + return makeRSError("Invalid format for Static Sampler"); mcdxbc::StaticSampler Sampler; @@ -453,12 +468,12 @@ Error MetadataParser::parseStaticSampler(mcdxbc::RootSignatureDesc &RSD, if (std::optional<float> Val = extractMdFloatValue(StaticSamplerNode, 5)) Sampler.MipLODBias = *Val; else - return make_error<InvalidRSMetadataValue>("MipLODBias"); + return makeRSError("Invalid value for MipLODBias"); if (std::optional<uint32_t> Val = extractMdIntValue(StaticSamplerNode, 6)) Sampler.MaxAnisotropy = *Val; else - return make_error<InvalidRSMetadataValue>("MaxAnisotropy"); + return makeRSError("Invalid value for MaxAnisotropy"); Expected<dxbc::ComparisonFunc> ComparisonFunc = extractEnumValue<dxbc::ComparisonFunc>( @@ -477,22 +492,22 @@ Error MetadataParser::parseStaticSampler(mcdxbc::RootSignatureDesc &RSD, if (std::optional<float> Val = extractMdFloatValue(StaticSamplerNode, 9)) Sampler.MinLOD = *Val; else - return make_error<InvalidRSMetadataValue>("MinLOD"); + return makeRSError("Invalid value for MinLOD"); if (std::optional<float> Val = extractMdFloatValue(StaticSamplerNode, 10)) Sampler.MaxLOD = *Val; else - return make_error<InvalidRSMetadataValue>("MaxLOD"); + return makeRSError("Invalid value for MaxLOD"); if (std::optional<uint32_t> Val = extractMdIntValue(StaticSamplerNode, 11)) Sampler.ShaderRegister = *Val; else - return make_error<InvalidRSMetadataValue>("ShaderRegister"); + return makeRSError("Invalid value for ShaderRegister"); if (std::optional<uint32_t> Val = 
extractMdIntValue(StaticSamplerNode, 12)) Sampler.RegisterSpace = *Val; else - return make_error<InvalidRSMetadataValue>("RegisterSpace"); + return makeRSError("Invalid value for RegisterSpace"); Expected<dxbc::ShaderVisibility> Visibility = extractEnumValue<dxbc::ShaderVisibility>(StaticSamplerNode, 13, @@ -502,16 +517,10 @@ Error MetadataParser::parseStaticSampler(mcdxbc::RootSignatureDesc &RSD, return Error(std::move(E)); Sampler.ShaderVisibility = *Visibility; - if (RSD.Version < 3) { - RSD.StaticSamplers.push_back(Sampler); - return Error::success(); - } - assert(RSD.Version >= 3); - if (std::optional<uint32_t> Val = extractMdIntValue(StaticSamplerNode, 14)) Sampler.Flags = *Val; else - return make_error<InvalidRSMetadataValue>("Static Sampler Flags"); + return makeRSError("Invalid value for Static Sampler Flags"); RSD.StaticSamplers.push_back(Sampler); return Error::success(); @@ -521,7 +530,7 @@ Error MetadataParser::parseRootSignatureElement(mcdxbc::RootSignatureDesc &RSD, MDNode *Element) { std::optional<StringRef> ElementText = extractMdStringValue(Element, 0); if (!ElementText.has_value()) - return make_error<InvalidRSMetadataFormat>("Root Element"); + return makeRSError("Invalid format for Root Element"); RootSignatureElementKind ElementKind = StringSwitch<RootSignatureElementKind>(*ElementText) @@ -549,8 +558,8 @@ Error MetadataParser::parseRootSignatureElement(mcdxbc::RootSignatureDesc &RSD, case RootSignatureElementKind::StaticSamplers: return parseStaticSampler(RSD, Element); case RootSignatureElementKind::Error: - return make_error<GenericRSMetadataError>("Invalid Root Signature Element", - Element); + return makeRSError( + formatv("Invalid Root Signature Element\n{0}", FmtMDNode{Element})); } llvm_unreachable("Unhandled RootSignatureElementKind enum."); @@ -563,7 +572,10 @@ validateDescriptorTableSamplerMixin(const mcdxbc::DescriptorTable &Table, for (const mcdxbc::DescriptorRange &Range : Table.Ranges) { if (Range.RangeType == dxil::ResourceClass::Sampler && CurrRC != dxil::ResourceClass::Sampler) - return make_error<TableSamplerMixinError>(CurrRC, Location); + return makeRSError( + formatv("Samplers cannot be mixed with other resource types in a " + "descriptor table, {0}(location={1})", + getResourceClassName(CurrRC), Location)); CurrRC = Range.RangeType; } return Error::success(); @@ -583,8 +595,8 @@ validateDescriptorTableRegisterOverflow(const mcdxbc::DescriptorTable &Table, Range.BaseShaderRegister, Range.NumDescriptors); if (!verifyNoOverflowedOffset(RangeBound)) - return make_error<ShaderRegisterOverflowError>( - Range.RangeType, Range.BaseShaderRegister, Range.RegisterSpace); + return makeRSError( + formatv("Overflow for shader register range: {0}", FmtRange{Range})); bool IsAppending = Range.OffsetInDescriptorsFromTableStart == DescriptorTableOffsetAppend; @@ -592,15 +604,16 @@ validateDescriptorTableRegisterOverflow(const mcdxbc::DescriptorTable &Table, Offset = Range.OffsetInDescriptorsFromTableStart; if (IsPrevUnbound && IsAppending) - return make_error<OffsetAppendAfterOverflow>( - Range.RangeType, Range.BaseShaderRegister, Range.RegisterSpace); + return makeRSError( + formatv("Range {0} cannot be appended after an unbounded range", + FmtRange{Range})); const uint64_t OffsetBound = llvm::hlsl::rootsig::computeRangeBound(Offset, Range.NumDescriptors); if (!verifyNoOverflowedOffset(OffsetBound)) - return make_error<OffsetOverflowError>( - Range.RangeType, Range.BaseShaderRegister, Range.RegisterSpace); + return makeRSError(formatv("Offset overflow for 
descriptor range: {0}.", + FmtRange{Range})); Offset = OffsetBound + 1; IsPrevUnbound = @@ -614,17 +627,15 @@ Error MetadataParser::validateRootSignature( const mcdxbc::RootSignatureDesc &RSD) { Error DeferredErrs = Error::success(); if (!hlsl::rootsig::verifyVersion(RSD.Version)) { - DeferredErrs = - joinErrors(std::move(DeferredErrs), - make_error<RootSignatureValidationError<uint32_t>>( - "Version", RSD.Version)); + DeferredErrs = joinErrors( + std::move(DeferredErrs), + makeRSError(formatv("Invalid value for Version: {0}", RSD.Version))); } if (!hlsl::rootsig::verifyRootFlag(RSD.Flags)) { - DeferredErrs = - joinErrors(std::move(DeferredErrs), - make_error<RootSignatureValidationError<uint32_t>>( - "RootFlags", RSD.Flags)); + DeferredErrs = joinErrors( + std::move(DeferredErrs), + makeRSError(formatv("Invalid value for RootFlags: {0}", RSD.Flags))); } for (const mcdxbc::RootParameterInfo &Info : RSD.ParametersContainer) { @@ -639,28 +650,26 @@ Error MetadataParser::validateRootSignature( const mcdxbc::RootDescriptor &Descriptor = RSD.ParametersContainer.getRootDescriptor(Info.Location); if (!hlsl::rootsig::verifyRegisterValue(Descriptor.ShaderRegister)) - DeferredErrs = - joinErrors(std::move(DeferredErrs), - make_error<RootSignatureValidationError<uint32_t>>( - "ShaderRegister", Descriptor.ShaderRegister)); + DeferredErrs = joinErrors( + std::move(DeferredErrs), + makeRSError(formatv("Invalid value for ShaderRegister: {0}", + Descriptor.ShaderRegister))); if (!hlsl::rootsig::verifyRegisterSpace(Descriptor.RegisterSpace)) - DeferredErrs = - joinErrors(std::move(DeferredErrs), - make_error<RootSignatureValidationError<uint32_t>>( - "RegisterSpace", Descriptor.RegisterSpace)); - - if (RSD.Version > 1) { - bool IsValidFlag = - dxbc::isValidRootDesciptorFlags(Descriptor.Flags) && - hlsl::rootsig::verifyRootDescriptorFlag( - RSD.Version, dxbc::RootDescriptorFlags(Descriptor.Flags)); - if (!IsValidFlag) - DeferredErrs = - joinErrors(std::move(DeferredErrs), - make_error<RootSignatureValidationError<uint32_t>>( - "RootDescriptorFlag", Descriptor.Flags)); - } + DeferredErrs = joinErrors( + std::move(DeferredErrs), + makeRSError(formatv("Invalid value for RegisterSpace: {0}", + Descriptor.RegisterSpace))); + + bool IsValidFlag = + dxbc::isValidRootDesciptorFlags(Descriptor.Flags) && + hlsl::rootsig::verifyRootDescriptorFlag( + RSD.Version, dxbc::RootDescriptorFlags(Descriptor.Flags)); + if (!IsValidFlag) + DeferredErrs = joinErrors( + std::move(DeferredErrs), + makeRSError(formatv("Invalid value for RootDescriptorFlag: {0}", + Descriptor.Flags))); break; } case dxbc::RootParameterType::DescriptorTable: { @@ -668,26 +677,26 @@ Error MetadataParser::validateRootSignature( RSD.ParametersContainer.getDescriptorTable(Info.Location); for (const mcdxbc::DescriptorRange &Range : Table) { if (!hlsl::rootsig::verifyRegisterSpace(Range.RegisterSpace)) - DeferredErrs = - joinErrors(std::move(DeferredErrs), - make_error<RootSignatureValidationError<uint32_t>>( - "RegisterSpace", Range.RegisterSpace)); + DeferredErrs = joinErrors( + std::move(DeferredErrs), + makeRSError(formatv("Invalid value for RegisterSpace: {0}", + Range.RegisterSpace))); if (!hlsl::rootsig::verifyNumDescriptors(Range.NumDescriptors)) - DeferredErrs = - joinErrors(std::move(DeferredErrs), - make_error<RootSignatureValidationError<uint32_t>>( - "NumDescriptors", Range.NumDescriptors)); + DeferredErrs = joinErrors( + std::move(DeferredErrs), + makeRSError(formatv("Invalid value for NumDescriptors: {0}", + Range.NumDescriptors))); bool 
IsValidFlag = dxbc::isValidDescriptorRangeFlags(Range.Flags) && hlsl::rootsig::verifyDescriptorRangeFlag( RSD.Version, Range.RangeType, dxbc::DescriptorRangeFlags(Range.Flags)); if (!IsValidFlag) - DeferredErrs = - joinErrors(std::move(DeferredErrs), - make_error<RootSignatureValidationError<uint32_t>>( - "DescriptorFlag", Range.Flags)); + DeferredErrs = joinErrors( + std::move(DeferredErrs), + makeRSError(formatv("Invalid value for DescriptorFlag: {0}", + Range.Flags))); if (Error Err = validateDescriptorTableSamplerMixin(Table, Info.Location)) @@ -705,46 +714,49 @@ Error MetadataParser::validateRootSignature( for (const mcdxbc::StaticSampler &Sampler : RSD.StaticSamplers) { if (!hlsl::rootsig::verifyMipLODBias(Sampler.MipLODBias)) - DeferredErrs = joinErrors(std::move(DeferredErrs), - make_error<RootSignatureValidationError<float>>( - "MipLODBias", Sampler.MipLODBias)); + DeferredErrs = + joinErrors(std::move(DeferredErrs), + makeRSError(formatv("Invalid value for MipLODBias: {0:e}", + Sampler.MipLODBias))); if (!hlsl::rootsig::verifyMaxAnisotropy(Sampler.MaxAnisotropy)) DeferredErrs = joinErrors(std::move(DeferredErrs), - make_error<RootSignatureValidationError<uint32_t>>( - "MaxAnisotropy", Sampler.MaxAnisotropy)); + makeRSError(formatv("Invalid value for MaxAnisotropy: {0}", + Sampler.MaxAnisotropy))); if (!hlsl::rootsig::verifyLOD(Sampler.MinLOD)) - DeferredErrs = joinErrors(std::move(DeferredErrs), - make_error<RootSignatureValidationError<float>>( - "MinLOD", Sampler.MinLOD)); + DeferredErrs = + joinErrors(std::move(DeferredErrs), + makeRSError(formatv("Invalid value for MinLOD: {0}", + Sampler.MinLOD))); if (!hlsl::rootsig::verifyLOD(Sampler.MaxLOD)) - DeferredErrs = joinErrors(std::move(DeferredErrs), - make_error<RootSignatureValidationError<float>>( - "MaxLOD", Sampler.MaxLOD)); - - if (!hlsl::rootsig::verifyRegisterValue(Sampler.ShaderRegister)) DeferredErrs = joinErrors(std::move(DeferredErrs), - make_error<RootSignatureValidationError<uint32_t>>( - "ShaderRegister", Sampler.ShaderRegister)); + makeRSError(formatv("Invalid value for MaxLOD: {0}", + Sampler.MaxLOD))); + + if (!hlsl::rootsig::verifyRegisterValue(Sampler.ShaderRegister)) + DeferredErrs = joinErrors( + std::move(DeferredErrs), + makeRSError(formatv("Invalid value for ShaderRegister: {0}", + Sampler.ShaderRegister))); if (!hlsl::rootsig::verifyRegisterSpace(Sampler.RegisterSpace)) DeferredErrs = joinErrors(std::move(DeferredErrs), - make_error<RootSignatureValidationError<uint32_t>>( - "RegisterSpace", Sampler.RegisterSpace)); + makeRSError(formatv("Invalid value for RegisterSpace: {0}", + Sampler.RegisterSpace))); bool IsValidFlag = dxbc::isValidStaticSamplerFlags(Sampler.Flags) && hlsl::rootsig::verifyStaticSamplerFlags( RSD.Version, dxbc::StaticSamplerFlags(Sampler.Flags)); if (!IsValidFlag) - DeferredErrs = - joinErrors(std::move(DeferredErrs), - make_error<RootSignatureValidationError<uint32_t>>( - "Static Sampler Flag", Sampler.Flags)); + DeferredErrs = joinErrors( + std::move(DeferredErrs), + makeRSError(formatv("Invalid value for Static Sampler Flag: {0}", + Sampler.Flags))); } return DeferredErrs; @@ -758,9 +770,9 @@ MetadataParser::ParseRootSignature(uint32_t Version) { for (const auto &Operand : Root->operands()) { MDNode *Element = dyn_cast<MDNode>(Operand); if (Element == nullptr) - return joinErrors(std::move(DeferredErrs), - make_error<GenericRSMetadataError>( - "Missing Root Element Metadata Node.", nullptr)); + return joinErrors( + std::move(DeferredErrs), + makeRSError(formatv("Missing Root 
Element Metadata Node."))); if (auto Err = parseRootSignatureElement(RSD, Element)) DeferredErrs = joinErrors(std::move(DeferredErrs), std::move(Err)); diff --git a/llvm/lib/Frontend/HLSL/RootSignatureValidations.cpp b/llvm/lib/Frontend/HLSL/RootSignatureValidations.cpp index 30408df..1735751 100644 --- a/llvm/lib/Frontend/HLSL/RootSignatureValidations.cpp +++ b/llvm/lib/Frontend/HLSL/RootSignatureValidations.cpp @@ -41,8 +41,6 @@ bool verifyRootDescriptorFlag(uint32_t Version, if (Version == 1) return Flags == FlagT::DataVolatile; - assert((Version <= 3) && "Provided invalid root signature version"); - // The data-specific flags are mutually exclusive. FlagT DataFlags = FlagT::DataVolatile | FlagT::DataStatic | FlagT::DataStaticWhileSetAtExecute; @@ -118,8 +116,6 @@ bool verifyStaticSamplerFlags(uint32_t Version, if (Version <= 2) return Flags == dxbc::StaticSamplerFlags::None; - assert(Version == 3 && "Provided invalid root signature version"); - dxbc::StaticSamplerFlags Mask = dxbc::StaticSamplerFlags::NonNormalizedCoordinates | dxbc::StaticSamplerFlags::UintBorderColor | diff --git a/llvm/lib/Frontend/OpenMP/OMPIRBuilder.cpp b/llvm/lib/Frontend/OpenMP/OMPIRBuilder.cpp index 5980ee3..286ed03 100644 --- a/llvm/lib/Frontend/OpenMP/OMPIRBuilder.cpp +++ b/llvm/lib/Frontend/OpenMP/OMPIRBuilder.cpp @@ -3623,7 +3623,9 @@ OpenMPIRBuilder::InsertPointOrErrorTy OpenMPIRBuilder::createReductionsGPU( // 1. Build a list of reduction variables. // void *RedList[<n>] = {<ReductionVars>[0], ..., <ReductionVars>[<n>-1]}; auto Size = ReductionInfos.size(); - Type *PtrTy = PointerType::getUnqual(Ctx); + Type *PtrTy = PointerType::get(Ctx, Config.getDefaultTargetAS()); + Type *FuncPtrTy = + Builder.getPtrTy(M.getDataLayout().getProgramAddressSpace()); Type *RedArrayTy = ArrayType::get(PtrTy, Size); CodeGenIP = Builder.saveIP(); Builder.restoreIP(AllocaIP); @@ -3667,9 +3669,9 @@ OpenMPIRBuilder::InsertPointOrErrorTy OpenMPIRBuilder::createReductionsGPU( Builder.getInt64(MaxDataSize * ReductionInfos.size()); if (!IsTeamsReduction) { Value *SarFuncCast = - Builder.CreatePointerBitCastOrAddrSpaceCast(SarFunc, PtrTy); + Builder.CreatePointerBitCastOrAddrSpaceCast(SarFunc, FuncPtrTy); Value *WcFuncCast = - Builder.CreatePointerBitCastOrAddrSpaceCast(WcFunc, PtrTy); + Builder.CreatePointerBitCastOrAddrSpaceCast(WcFunc, FuncPtrTy); Value *Args[] = {SrcLocInfo, ReductionDataSize, RL, SarFuncCast, WcFuncCast}; Function *Pv2Ptr = getOrCreateRuntimeFunctionPtr( @@ -10072,13 +10074,14 @@ void OpenMPIRBuilder::initializeTypes(Module &M) { LLVMContext &Ctx = M.getContext(); StructType *T; unsigned DefaultTargetAS = Config.getDefaultTargetAS(); + unsigned ProgramAS = M.getDataLayout().getProgramAddressSpace(); #define OMP_TYPE(VarName, InitValue) VarName = InitValue; #define OMP_ARRAY_TYPE(VarName, ElemTy, ArraySize) \ VarName##Ty = ArrayType::get(ElemTy, ArraySize); \ VarName##PtrTy = PointerType::get(Ctx, DefaultTargetAS); #define OMP_FUNCTION_TYPE(VarName, IsVarArg, ReturnType, ...) \ VarName = FunctionType::get(ReturnType, {__VA_ARGS__}, IsVarArg); \ - VarName##Ptr = PointerType::get(Ctx, DefaultTargetAS); + VarName##Ptr = PointerType::get(Ctx, ProgramAS); #define OMP_STRUCT_TYPE(VarName, StructName, Packed, ...) 
\ T = StructType::getTypeByName(Ctx, StructName); \ if (!T) \ diff --git a/llvm/lib/IR/AsmWriter.cpp b/llvm/lib/IR/AsmWriter.cpp index 245129f..ae086bcd 100644 --- a/llvm/lib/IR/AsmWriter.cpp +++ b/llvm/lib/IR/AsmWriter.cpp @@ -2369,8 +2369,12 @@ static void writeDICompileUnit(raw_ostream &Out, const DICompileUnit *N, AsmWriterContext &WriterCtx) { Out << "!DICompileUnit("; MDFieldPrinter Printer(Out, WriterCtx); - Printer.printDwarfEnum("language", N->getSourceLanguage(), - dwarf::LanguageString, /* ShouldSkipZero */ false); + + Printer.printDwarfEnum("language", + N->getSourceLanguage().getUnversionedName(), + dwarf::LanguageString, + /* ShouldSkipZero */ false); + Printer.printMetadata("file", N->getRawFile(), /* ShouldSkipNull */ false); Printer.printString("producer", N->getProducer()); Printer.printBool("isOptimized", N->isOptimized()); diff --git a/llvm/lib/IR/DIBuilder.cpp b/llvm/lib/IR/DIBuilder.cpp index 1344df9..07a870f 100644 --- a/llvm/lib/IR/DIBuilder.cpp +++ b/llvm/lib/IR/DIBuilder.cpp @@ -131,17 +131,13 @@ static DIScope *getNonCompileUnitScope(DIScope *N) { } DICompileUnit *DIBuilder::createCompileUnit( - unsigned Lang, DIFile *File, StringRef Producer, bool isOptimized, - StringRef Flags, unsigned RunTimeVer, StringRef SplitName, + DISourceLanguageName Lang, DIFile *File, StringRef Producer, + bool isOptimized, StringRef Flags, unsigned RunTimeVer, StringRef SplitName, DICompileUnit::DebugEmissionKind Kind, uint64_t DWOId, bool SplitDebugInlining, bool DebugInfoForProfiling, DICompileUnit::DebugNameTableKind NameTableKind, bool RangesBaseAddress, StringRef SysRoot, StringRef SDK) { - assert(((Lang <= dwarf::DW_LANG_Metal && Lang >= dwarf::DW_LANG_C89) || - (Lang <= dwarf::DW_LANG_hi_user && Lang >= dwarf::DW_LANG_lo_user)) && - "Invalid Language tag"); - assert(!CUNode && "Can only make one compile unit per DIBuilder instance"); CUNode = DICompileUnit::getDistinct( VMContext, Lang, File, Producer, isOptimized, Flags, RunTimeVer, @@ -719,11 +715,20 @@ DICompositeType *DIBuilder::createArrayType( DICompositeType *DIBuilder::createVectorType(uint64_t Size, uint32_t AlignInBits, DIType *Ty, - DINodeArray Subscripts) { - auto *R = DICompositeType::get(VMContext, dwarf::DW_TAG_array_type, "", - nullptr, 0, nullptr, Ty, Size, AlignInBits, 0, - DINode::FlagVector, Subscripts, 0, - /*EnumKind=*/std::nullopt, nullptr); + DINodeArray Subscripts, + Metadata *BitStride) { + auto *R = DICompositeType::get( + VMContext, dwarf::DW_TAG_array_type, /*Name=*/"", + /*File=*/nullptr, /*Line=*/0, /*Scope=*/nullptr, /*BaseType=*/Ty, + /*SizeInBits=*/Size, /*AlignInBits=*/AlignInBits, /*OffsetInBits=*/0, + /*Flags=*/DINode::FlagVector, /*Elements=*/Subscripts, + /*RuntimeLang=*/0, /*EnumKind=*/std::nullopt, /*VTableHolder=*/nullptr, + /*TemplateParams=*/nullptr, /*Identifier=*/"", + /*Discriminator=*/nullptr, /*DataLocation=*/nullptr, + /*Associated=*/nullptr, /*Allocated=*/nullptr, /*Rank=*/nullptr, + /*Annotations=*/nullptr, /*Specification=*/nullptr, + /*NumExtraInhabitants=*/0, + /*BitStride=*/BitStride); trackIfUnresolved(R); return R; } diff --git a/llvm/lib/IR/DebugInfo.cpp b/llvm/lib/IR/DebugInfo.cpp index f9ded50..9601a8a 100644 --- a/llvm/lib/IR/DebugInfo.cpp +++ b/llvm/lib/IR/DebugInfo.cpp @@ -1078,7 +1078,7 @@ LLVMMetadataRef LLVMDIBuilderCreateCompileUnit( auto File = unwrapDI<DIFile>(FileRef); return wrap(unwrap(Builder)->createCompileUnit( - map_from_llvmDWARFsourcelanguage(Lang), File, + DISourceLanguageName(map_from_llvmDWARFsourcelanguage(Lang)), File, StringRef(Producer, 
ProducerLen), isOptimized, StringRef(Flags, FlagsLen), RuntimeVer, StringRef(SplitName, SplitNameLen), static_cast<DICompileUnit::DebugEmissionKind>(Kind), DWOId, diff --git a/llvm/lib/IR/DebugInfoMetadata.cpp b/llvm/lib/IR/DebugInfoMetadata.cpp index 77d044b..e30df88 100644 --- a/llvm/lib/IR/DebugInfoMetadata.cpp +++ b/llvm/lib/IR/DebugInfoMetadata.cpp @@ -1184,9 +1184,10 @@ DIFile *DIFile::getImpl(LLVMContext &Context, MDString *Filename, DEFINE_GETIMPL_STORE(DIFile, (CS, Source), Ops); } DICompileUnit::DICompileUnit(LLVMContext &C, StorageType Storage, - unsigned SourceLanguage, bool IsOptimized, - unsigned RuntimeVersion, unsigned EmissionKind, - uint64_t DWOId, bool SplitDebugInlining, + DISourceLanguageName SourceLanguage, + bool IsOptimized, unsigned RuntimeVersion, + unsigned EmissionKind, uint64_t DWOId, + bool SplitDebugInlining, bool DebugInfoForProfiling, unsigned NameTableKind, bool RangesBaseAddress, ArrayRef<Metadata *> Ops) : DIScope(C, DICompileUnitKind, Storage, dwarf::DW_TAG_compile_unit, Ops), @@ -1199,7 +1200,7 @@ DICompileUnit::DICompileUnit(LLVMContext &C, StorageType Storage, } DICompileUnit *DICompileUnit::getImpl( - LLVMContext &Context, unsigned SourceLanguage, Metadata *File, + LLVMContext &Context, DISourceLanguageName SourceLanguage, Metadata *File, MDString *Producer, bool IsOptimized, MDString *Flags, unsigned RuntimeVersion, MDString *SplitDebugFilename, unsigned EmissionKind, Metadata *EnumTypes, Metadata *RetainedTypes, diff --git a/llvm/lib/IR/DiagnosticInfo.cpp b/llvm/lib/IR/DiagnosticInfo.cpp index 4f37624..8e6d654 100644 --- a/llvm/lib/IR/DiagnosticInfo.cpp +++ b/llvm/lib/IR/DiagnosticInfo.cpp @@ -273,6 +273,13 @@ DiagnosticInfoOptimizationBase::Argument::Argument(StringRef Key, C.print(OS); } +DiagnosticInfoOptimizationBase::Argument::Argument(StringRef Key, + BranchProbability P) + : Key(std::string(Key)) { + raw_string_ostream OS(Val); + P.print(OS); +} + DiagnosticInfoOptimizationBase::Argument::Argument(StringRef Key, DebugLoc Loc) : Key(std::string(Key)), Loc(Loc) { if (Loc) { diff --git a/llvm/lib/IR/Verifier.cpp b/llvm/lib/IR/Verifier.cpp index 71a8a38..c9ff86b 100644 --- a/llvm/lib/IR/Verifier.cpp +++ b/llvm/lib/IR/Verifier.cpp @@ -5398,8 +5398,10 @@ void Verifier::visitCapturesMetadata(Instruction &I, const MDNode *Captures) { void Verifier::visitAllocTokenMetadata(Instruction &I, MDNode *MD) { Check(isa<CallBase>(I), "!alloc_token should only exist on calls", &I); - Check(MD->getNumOperands() == 1, "!alloc_token must have 1 operand", MD); + Check(MD->getNumOperands() == 2, "!alloc_token must have 2 operands", MD); Check(isa<MDString>(MD->getOperand(0)), "expected string", MD); + Check(mdconst::dyn_extract_or_null<ConstantInt>(MD->getOperand(1)), + "expected integer constant", MD); } /// verifyInstruction - Verify that an instruction is well formed. 
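The Verifier hunk above now requires !alloc_token nodes to carry two operands: an MDString followed by an integer constant. A minimal sketch of building metadata in that shape from C++ (the type name "MyType" and the flag value are illustrative, and MallocCall stands for whichever call instruction is being annotated):

  // Sketch: attach a two-operand !alloc_token node of the form the
  // verifier now checks for (string type name + i1 contains-pointer flag).
  LLVMContext &Ctx = MallocCall->getContext();
  Metadata *Ops[] = {
      MDString::get(Ctx, "MyType"), // operand 0: type name (MDString)
      ConstantAsMetadata::get(      // operand 1: integer constant flag
          ConstantInt::get(Type::getInt1Ty(Ctx), /*V=*/0))};
  MallocCall->setMetadata("alloc_token", MDNode::get(Ctx, Ops));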
diff --git a/llvm/lib/Passes/PassBuilder.cpp b/llvm/lib/Passes/PassBuilder.cpp index 20dcde8..53cf004 100644 --- a/llvm/lib/Passes/PassBuilder.cpp +++ b/llvm/lib/Passes/PassBuilder.cpp @@ -1111,6 +1111,8 @@ Expected<SimplifyCFGOptions> parseSimplifyCFGOptions(StringRef Params) { Result.forwardSwitchCondToPhi(Enable); } else if (ParamName == "switch-range-to-icmp") { Result.convertSwitchRangeToICmp(Enable); + } else if (ParamName == "switch-to-arithmetic") { + Result.convertSwitchToArithmetic(Enable); } else if (ParamName == "switch-to-lookup") { Result.convertSwitchToLookupTable(Enable); } else if (ParamName == "keep-loops") { diff --git a/llvm/lib/Passes/PassBuilderPipelines.cpp b/llvm/lib/Passes/PassBuilderPipelines.cpp index 119caea..fea0d25 100644 --- a/llvm/lib/Passes/PassBuilderPipelines.cpp +++ b/llvm/lib/Passes/PassBuilderPipelines.cpp @@ -781,6 +781,7 @@ PassBuilder::buildFunctionSimplificationPipeline(OptimizationLevel Level, FPM.addPass(SimplifyCFGPass(SimplifyCFGOptions() .convertSwitchRangeToICmp(true) + .convertSwitchToArithmetic(true) .hoistCommonInsts(true) .sinkCommonInsts(true))); FPM.addPass(InstCombinePass()); @@ -1377,6 +1378,7 @@ void PassBuilder::addVectorPasses(OptimizationLevel Level, FPM.addPass(SimplifyCFGPass(SimplifyCFGOptions() .forwardSwitchCondToPhi(true) .convertSwitchRangeToICmp(true) + .convertSwitchToArithmetic(true) .convertSwitchToLookupTable(true) .needCanonicalLoops(false) .hoistCommonInsts(true) @@ -1603,6 +1605,7 @@ PassBuilder::buildModuleOptimizationPipeline(OptimizationLevel Level, OptimizePM.addPass( SimplifyCFGPass(SimplifyCFGOptions() .convertSwitchRangeToICmp(true) + .convertSwitchToArithmetic(true) .speculateUnpredictables(true) .hoistLoadsStoresWithCondFaulting(true))); @@ -2187,6 +2190,7 @@ PassBuilder::buildLTODefaultPipeline(OptimizationLevel Level, // Delete basic blocks, which optimization passes may have killed. 
LateFPM.addPass(SimplifyCFGPass(SimplifyCFGOptions() .convertSwitchRangeToICmp(true) + .convertSwitchToArithmetic(true) .hoistCommonInsts(true) .speculateUnpredictables(true))); MPM.addPass(createModuleToFunctionPassAdaptor(std::move(LateFPM))); diff --git a/llvm/lib/Passes/PassRegistry.def b/llvm/lib/Passes/PassRegistry.def index c5c0d64..1b16525 100644 --- a/llvm/lib/Passes/PassRegistry.def +++ b/llvm/lib/Passes/PassRegistry.def @@ -687,8 +687,9 @@ FUNCTION_PASS_WITH_PARAMS( parseSimplifyCFGOptions, "no-speculate-blocks;speculate-blocks;no-simplify-cond-branch;" "simplify-cond-branch;no-forward-switch-cond;forward-switch-cond;" - "no-switch-range-to-icmp;switch-range-to-icmp;no-switch-to-lookup;" - "switch-to-lookup;no-keep-loops;keep-loops;no-hoist-common-insts;" + "no-switch-range-to-icmp;switch-range-to-icmp;no-switch-to-arithmetic;" + "switch-to-arithmetic;no-switch-to-lookup;switch-to-lookup;" + "no-keep-loops;keep-loops;no-hoist-common-insts;" "hoist-common-insts;no-hoist-loads-stores-with-cond-faulting;" "hoist-loads-stores-with-cond-faulting;no-sink-common-insts;" "sink-common-insts;no-speculate-unpredictables;speculate-unpredictables;" diff --git a/llvm/lib/Support/SpecialCaseList.cpp b/llvm/lib/Support/SpecialCaseList.cpp index 6ad8d7d..80fd485 100644 --- a/llvm/lib/Support/SpecialCaseList.cpp +++ b/llvm/lib/Support/SpecialCaseList.cpp @@ -22,6 +22,7 @@ #include "llvm/Support/VirtualFileSystem.h" #include <algorithm> #include <limits> +#include <memory> #include <stdio.h> #include <string> #include <system_error> @@ -29,55 +30,77 @@ namespace llvm { -Error SpecialCaseList::Matcher::insert(StringRef Pattern, unsigned LineNumber, - bool UseGlobs) { +Error SpecialCaseList::RegexMatcher::insert(StringRef Pattern, + unsigned LineNumber) { if (Pattern.empty()) return createStringError(errc::invalid_argument, - Twine("Supplied ") + - (UseGlobs ? "glob" : "regex") + " was blank"); - - if (!UseGlobs) { - // Replace * with .* - auto Regexp = Pattern.str(); - for (size_t pos = 0; (pos = Regexp.find('*', pos)) != std::string::npos; - pos += strlen(".*")) { - Regexp.replace(pos, strlen("*"), ".*"); - } + "Supplied regex was blank"); - Regexp = (Twine("^(") + StringRef(Regexp) + ")$").str(); + // Replace * with .* + auto Regexp = Pattern.str(); + for (size_t pos = 0; (pos = Regexp.find('*', pos)) != std::string::npos; + pos += strlen(".*")) { + Regexp.replace(pos, strlen("*"), ".*"); + } - // Check that the regexp is valid. - Regex CheckRE(Regexp); - std::string REError; - if (!CheckRE.isValid(REError)) - return createStringError(errc::invalid_argument, REError); + Regexp = (Twine("^(") + StringRef(Regexp) + ")$").str(); - auto Rg = - std::make_unique<Matcher::Reg>(Pattern, LineNumber, std::move(CheckRE)); - RegExes.emplace_back(std::move(Rg)); + // Check that the regexp is valid. 
+ Regex CheckRE(Regexp); + std::string REError; + if (!CheckRE.isValid(REError)) + return createStringError(errc::invalid_argument, REError); - return Error::success(); - } + RegExes.emplace_back(Pattern, LineNumber, std::move(CheckRE)); + return Error::success(); +} - auto Glob = std::make_unique<Matcher::Glob>(Pattern, LineNumber); - // We must be sure to use the string in `Glob` rather than the provided - // reference which could be destroyed before match() is called - if (auto Err = GlobPattern::create(Glob->Name, /*MaxSubPatterns=*/1024) - .moveInto(Glob->Pattern)) +void SpecialCaseList::RegexMatcher::match( + StringRef Query, + llvm::function_ref<void(StringRef Rule, unsigned LineNo)> Cb) const { + for (const auto &R : reverse(RegExes)) + if (R.Rg.match(Query)) + Cb(R.Name, R.LineNo); +} + +Error SpecialCaseList::GlobMatcher::insert(StringRef Pattern, + unsigned LineNumber) { + if (Pattern.empty()) + return createStringError(errc::invalid_argument, "Supplied glob was blank"); + + auto Res = GlobPattern::create(Pattern, /*MaxSubPatterns=*/1024); + if (auto Err = Res.takeError()) return Err; - Globs.push_back(std::move(Glob)); + Globs.emplace_back(Pattern, LineNumber, std::move(Res.get())); return Error::success(); } +void SpecialCaseList::GlobMatcher::match( + StringRef Query, + llvm::function_ref<void(StringRef Rule, unsigned LineNo)> Cb) const { + for (const auto &G : reverse(Globs)) + if (G.Pattern.match(Query)) + Cb(G.Name, G.LineNo); +} + +SpecialCaseList::Matcher::Matcher(bool UseGlobs, bool RemoveDotSlash) + : RemoveDotSlash(RemoveDotSlash) { + if (UseGlobs) + M.emplace<GlobMatcher>(); + else + M.emplace<RegexMatcher>(); +} + void SpecialCaseList::Matcher::match( StringRef Query, llvm::function_ref<void(StringRef Rule, unsigned LineNo)> Cb) const { - for (const auto &Glob : reverse(Globs)) - if (Glob->Pattern.match(Query)) - Cb(Glob->Name, Glob->LineNo); - for (const auto &Regex : reverse(RegExes)) - if (Regex->Rg.match(Query)) - Cb(Regex->Name, Regex->LineNo); + if (RemoveDotSlash) + Query = llvm::sys::path::remove_leading_dotslash(Query); + return std::visit([&](auto &V) { return V.match(Query, Cb); }, M); +} + +Error SpecialCaseList::Matcher::insert(StringRef Pattern, unsigned LineNumber) { + return std::visit([&](auto &V) { return V.insert(Pattern, LineNumber); }, M); } // TODO: Refactor this to return Expected<...> @@ -136,10 +159,11 @@ bool SpecialCaseList::createInternal(const MemoryBuffer *MB, Expected<SpecialCaseList::Section *> SpecialCaseList::addSection(StringRef SectionStr, unsigned FileNo, unsigned LineNo, bool UseGlobs) { - Sections.emplace_back(SectionStr, FileNo); + Sections.emplace_back(SectionStr, FileNo, UseGlobs); auto &Section = Sections.back(); - if (auto Err = Section.SectionMatcher.insert(SectionStr, LineNo, UseGlobs)) { + SectionStr = SectionStr.copy(StrAlloc); + if (auto Err = Section.SectionMatcher.insert(SectionStr, LineNo)) { return createStringError(errc::invalid_argument, "malformed section at line " + Twine(LineNo) + ": '" + SectionStr + @@ -164,12 +188,18 @@ bool SpecialCaseList::parse(unsigned FileIdx, const MemoryBuffer *MB, // https://discourse.llvm.org/t/use-glob-instead-of-regex-for-specialcaselists/71666 bool UseGlobs = Version > 1; + bool RemoveDotSlash = Version > 2; + Section *CurrentSection; - if (auto Err = addSection("*", FileIdx, 1).moveInto(CurrentSection)) { + if (auto Err = addSection("*", FileIdx, 1, true).moveInto(CurrentSection)) { Error = toString(std::move(Err)); return false; } + // This is the current list of prefixes for 
all existing users that match file
+  // paths. We may need to parametrize this in the constructor in the future.
+  constexpr StringRef PathPrefixes[] = {"src", "!src", "mainfile", "source"};
+
   for (line_iterator LineIt(*MB, /*SkipBlanks=*/true, /*CommentMarker=*/'#');
        !LineIt.is_at_eof(); LineIt++) {
     unsigned LineNo = LineIt.line_number();
@@ -204,8 +234,11 @@ bool SpecialCaseList::parse(unsigned FileIdx, const MemoryBuffer *MB,
     }
 
     auto [Pattern, Category] = Postfix.split("=");
-    auto &Entry = CurrentSection->Entries[Prefix][Category];
-    if (auto Err = Entry.insert(Pattern, LineNo, UseGlobs)) {
+    auto [It, _] = CurrentSection->Entries[Prefix].try_emplace(
+        Category, UseGlobs,
+        RemoveDotSlash && llvm::is_contained(PathPrefixes, Prefix));
+    Pattern = Pattern.copy(StrAlloc);
+    if (auto Err = It->second.insert(Pattern, LineNo)) {
       Error = (Twine("malformed ") + (UseGlobs ? "glob" : "regex") +
                " in line " + Twine(LineNo) + ": '" + Pattern +
                "': " + toString(std::move(Err)))
@@ -262,4 +295,17 @@ unsigned SpecialCaseList::Section::getLastMatch(StringRef Prefix,
   return LastLine;
 }
 
+StringRef SpecialCaseList::Section::getLongestMatch(StringRef Prefix,
+                                                    StringRef Query,
+                                                    StringRef Category) const {
+  StringRef LongestRule;
+  if (const Matcher *M = findMatcher(Prefix, Category)) {
+    M->match(Query, [&](StringRef Rule, unsigned) {
+      if (LongestRule.size() < Rule.size())
+        LongestRule = Rule;
+    });
+  }
+  return LongestRule;
+}
+
 } // namespace llvm
diff --git a/llvm/lib/Target/AArch64/AArch64MachineFunctionInfo.h b/llvm/lib/Target/AArch64/AArch64MachineFunctionInfo.h
index 91e64e6..bd0a17d 100644
--- a/llvm/lib/Target/AArch64/AArch64MachineFunctionInfo.h
+++ b/llvm/lib/Target/AArch64/AArch64MachineFunctionInfo.h
@@ -315,6 +315,8 @@ public:
   }
 
   void setStackSizeSVE(uint64_t ZPR, uint64_t PPR) {
+    assert(isAligned(Align(16), ZPR) && isAligned(Align(16), PPR) &&
+           "expected SVE stack sizes to be aligned to 16 bytes");
     StackSizeZPR = ZPR;
     StackSizePPR = PPR;
     HasCalculatedStackSizeSVE = true;
@@ -425,6 +427,8 @@ public:
 
   // Saves the CalleeSavedStackSize for SVE vectors in 'scalable bytes'
   void setSVECalleeSavedStackSize(unsigned ZPR, unsigned PPR) {
+    assert(isAligned(Align(16), ZPR) && isAligned(Align(16), PPR) &&
+           "expected SVE callee-save sizes to be aligned to 16 bytes");
     ZPRCalleeSavedStackSize = ZPR;
     PPRCalleeSavedStackSize = PPR;
     HasSVECalleeSavedStackSize = true;
diff --git a/llvm/lib/Target/AArch64/AArch64PrologueEpilogue.cpp b/llvm/lib/Target/AArch64/AArch64PrologueEpilogue.cpp
index 1568161..f110558 100644
--- a/llvm/lib/Target/AArch64/AArch64PrologueEpilogue.cpp
+++ b/llvm/lib/Target/AArch64/AArch64PrologueEpilogue.cpp
@@ -60,7 +60,6 @@ static bool isPartOfZPRCalleeSaves(MachineBasicBlock::iterator I) {
   case AArch64::PTRUE_C_B:
     return I->getFlag(MachineInstr::FrameSetup) ||
            I->getFlag(MachineInstr::FrameDestroy);
-  case AArch64::SEH_SavePReg:
   case AArch64::SEH_SaveZReg:
     return true;
   }
@@ -75,6 +74,8 @@ static bool isPartOfPPRCalleeSaves(MachineBasicBlock::iterator I) {
   case AArch64::LDR_PXI:
     return I->getFlag(MachineInstr::FrameSetup) ||
            I->getFlag(MachineInstr::FrameDestroy);
+  case AArch64::SEH_SavePReg:
+    return true;
   }
 }
 
@@ -94,6 +95,26 @@ AArch64PrologueEpilogueCommon::AArch64PrologueEpilogueCommon(
 
   HasFP = AFL.hasFP(MF);
   NeedsWinCFI = AFL.needsWinCFI(MF);
+
+  // Windows unwind can't represent the required stack adjustments if we have
+  // both SVE callee-saves and dynamic stack allocations, and the frame pointer
+  // is before the SVE spills. 
The allocation of the frame pointer must be the + // last instruction in the prologue so the unwinder can restore the stack + // pointer correctly. (And there isn't any unwind opcode for `addvl sp, x29, + // -17`.) + // + // Because of this, we do spills in the opposite order on Windows: first SVE, + // then GPRs. The main side-effect of this is that it makes accessing + // parameters passed on the stack more expensive. + // + // We could consider rearranging the spills for simpler cases. + if (Subtarget.isTargetWindows() && AFI->getSVECalleeSavedStackSize()) { + if (AFI->hasStackHazardSlotIndex()) + reportFatalUsageError("SME hazard padding is not supported on Windows"); + SVELayout = SVEStackLayout::CalleeSavesAboveFrameRecord; + } else if (AFI->hasSplitSVEObjects()) { + SVELayout = SVEStackLayout::Split; + } } MachineBasicBlock::iterator @@ -334,6 +355,55 @@ bool AArch64PrologueEpilogueCommon::shouldCombineCSRLocalStackBump( return true; } +SVEFrameSizes AArch64PrologueEpilogueCommon::getSVEStackFrameSizes() const { + StackOffset PPRCalleeSavesSize = + StackOffset::getScalable(AFI->getPPRCalleeSavedStackSize()); + StackOffset ZPRCalleeSavesSize = + StackOffset::getScalable(AFI->getZPRCalleeSavedStackSize()); + StackOffset PPRLocalsSize = AFL.getPPRStackSize(MF) - PPRCalleeSavesSize; + StackOffset ZPRLocalsSize = AFL.getZPRStackSize(MF) - ZPRCalleeSavesSize; + if (SVELayout == SVEStackLayout::Split) + return {{PPRCalleeSavesSize, PPRLocalsSize}, + {ZPRCalleeSavesSize, ZPRLocalsSize}}; + // For simplicity, attribute all locals to ZPRs when split SVE is disabled. + return {{PPRCalleeSavesSize, StackOffset{}}, + {ZPRCalleeSavesSize, PPRLocalsSize + ZPRLocalsSize}}; +} + +struct SVEPartitions { + struct { + MachineBasicBlock::iterator Begin, End; + } PPR, ZPR; +}; + +static SVEPartitions partitionSVECS(MachineBasicBlock &MBB, + MachineBasicBlock::iterator MBBI, + StackOffset PPRCalleeSavesSize, + StackOffset ZPRCalleeSavesSize, + bool IsEpilogue) { + MachineBasicBlock::iterator PPRsI = MBBI; + MachineBasicBlock::iterator End = + IsEpilogue ? MBB.begin() : MBB.getFirstTerminator(); + auto AdjustI = [&](auto MBBI) { return IsEpilogue ? std::prev(MBBI) : MBBI; }; + // Process the SVE CS to find the starts/ends of the ZPR and PPR areas. + if (PPRCalleeSavesSize) { + PPRsI = AdjustI(PPRsI); + assert(isPartOfPPRCalleeSaves(*PPRsI) && "Unexpected instruction"); + while (PPRsI != End && isPartOfPPRCalleeSaves(AdjustI(PPRsI))) + IsEpilogue ? (--PPRsI) : (++PPRsI); + } + MachineBasicBlock::iterator ZPRsI = PPRsI; + if (ZPRCalleeSavesSize) { + ZPRsI = AdjustI(ZPRsI); + assert(isPartOfZPRCalleeSaves(*ZPRsI) && "Unexpected instruction"); + while (ZPRsI != End && isPartOfZPRCalleeSaves(AdjustI(ZPRsI))) + IsEpilogue ? (--ZPRsI) : (++ZPRsI); + } + if (IsEpilogue) + return {{PPRsI, MBBI}, {ZPRsI, PPRsI}}; + return {{MBBI, PPRsI}, {PPRsI, ZPRsI}}; +} + AArch64PrologueEmitter::AArch64PrologueEmitter(MachineFunction &MF, MachineBasicBlock &MBB, const AArch64FrameLowering &AFL) @@ -613,30 +683,12 @@ void AArch64PrologueEmitter::emitPrologue() { bool IsWin64 = Subtarget.isCallingConvWin64(F.getCallingConv(), F.isVarArg()); unsigned FixedObject = AFL.getFixedObjectSize(MF, AFI, IsWin64, IsFunclet); - // Windows unwind can't represent the required stack adjustments if we have - // both SVE callee-saves and dynamic stack allocations, and the frame - // pointer is before the SVE spills. 
The allocation of the frame pointer - // must be the last instruction in the prologue so the unwinder can restore - // the stack pointer correctly. (And there isn't any unwind opcode for - // `addvl sp, x29, -17`.) - // - // Because of this, we do spills in the opposite order on Windows: first SVE, - // then GPRs. The main side-effect of this is that it makes accessing - // parameters passed on the stack more expensive. - // - // We could consider rearranging the spills for simpler cases. - bool FPAfterSVECalleeSaves = - Subtarget.isTargetWindows() && AFI->getSVECalleeSavedStackSize(); - - if (FPAfterSVECalleeSaves && AFI->hasStackHazardSlotIndex()) - reportFatalUsageError("SME hazard padding is not supported on Windows"); - auto PrologueSaveSize = AFI->getCalleeSavedStackSize() + FixedObject; // All of the remaining stack allocations are for locals. determineLocalsStackSize(NumBytes, PrologueSaveSize); MachineBasicBlock::iterator FirstGPRSaveI = PrologueBeginI; - if (FPAfterSVECalleeSaves) { + if (SVELayout == SVEStackLayout::CalleeSavesAboveFrameRecord) { // If we're doing SVE saves first, we need to immediately allocate space // for fixed objects, then space for the SVE callee saves. // @@ -712,110 +764,66 @@ void AArch64PrologueEmitter::emitPrologue() { if (AFL.windowsRequiresStackProbe(MF, NumBytes + RealignmentPadding)) emitWindowsStackProbe(AfterGPRSavesI, DL, NumBytes, RealignmentPadding); - StackOffset PPRCalleeSavesSize = - StackOffset::getScalable(AFI->getPPRCalleeSavedStackSize()); - StackOffset ZPRCalleeSavesSize = - StackOffset::getScalable(AFI->getZPRCalleeSavedStackSize()); - StackOffset SVECalleeSavesSize = PPRCalleeSavesSize + ZPRCalleeSavesSize; - StackOffset PPRLocalsSize = AFL.getPPRStackSize(MF) - PPRCalleeSavesSize; - StackOffset ZPRLocalsSize = AFL.getZPRStackSize(MF) - ZPRCalleeSavesSize; - - std::optional<MachineBasicBlock::iterator> ZPRCalleeSavesBegin, - ZPRCalleeSavesEnd, PPRCalleeSavesBegin, PPRCalleeSavesEnd; - + auto [PPR, ZPR] = getSVEStackFrameSizes(); + StackOffset SVECalleeSavesSize = ZPR.CalleeSavesSize + PPR.CalleeSavesSize; + StackOffset NonSVELocalsSize = StackOffset::getFixed(NumBytes); StackOffset CFAOffset = - StackOffset::getFixed((int64_t)MFI.getStackSize() - NumBytes); + StackOffset::getFixed(MFI.getStackSize()) - NonSVELocalsSize; + MachineBasicBlock::iterator AfterSVESavesI = AfterGPRSavesI; - if (!FPAfterSVECalleeSaves) { - // Process the SVE callee-saves to find the starts/ends of the ZPR and PPR - // areas. - PPRCalleeSavesBegin = AfterGPRSavesI; - if (PPRCalleeSavesSize) { - LLVM_DEBUG(dbgs() << "PPRCalleeSavedStackSize = " - << PPRCalleeSavesSize.getScalable() << "\n"); - - assert(isPartOfPPRCalleeSaves(*PPRCalleeSavesBegin) && - "Unexpected instruction"); - while (isPartOfPPRCalleeSaves(AfterSVESavesI) && - AfterSVESavesI != MBB.getFirstTerminator()) - ++AfterSVESavesI; + // Allocate space for the callee saves and PPR locals (if any). 
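+  // With split SVE, this proceeds in stages: the PPR callee-saves are
+  // allocated first, then the PPR locals together with the ZPR callee-saves,
+  // and finally the ZPR locals together with the rest of the frame (below).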
+ if (SVELayout != SVEStackLayout::CalleeSavesAboveFrameRecord) { + auto [PPRRange, ZPRRange] = + partitionSVECS(MBB, AfterGPRSavesI, PPR.CalleeSavesSize, + ZPR.CalleeSavesSize, /*IsEpilogue=*/false); + AfterSVESavesI = ZPRRange.End; + if (EmitAsyncCFI) + emitCalleeSavedSVELocations(AfterSVESavesI); + + StackOffset AllocateBeforePPRs = SVECalleeSavesSize; + StackOffset AllocateAfterPPRs = PPR.LocalsSize; + if (SVELayout == SVEStackLayout::Split) { + AllocateBeforePPRs = PPR.CalleeSavesSize; + AllocateAfterPPRs = PPR.LocalsSize + ZPR.CalleeSavesSize; } - PPRCalleeSavesEnd = ZPRCalleeSavesBegin = AfterSVESavesI; - if (ZPRCalleeSavesSize) { - LLVM_DEBUG(dbgs() << "ZPRCalleeSavedStackSize = " - << ZPRCalleeSavesSize.getScalable() << "\n"); - assert(isPartOfZPRCalleeSaves(*ZPRCalleeSavesBegin) && - "Unexpected instruction"); - while (isPartOfZPRCalleeSaves(AfterSVESavesI) && - AfterSVESavesI != MBB.getFirstTerminator()) - ++AfterSVESavesI; - } - ZPRCalleeSavesEnd = AfterSVESavesI; - } - - if (EmitAsyncCFI) - emitCalleeSavedSVELocations(AfterSVESavesI); - - if (AFI->hasSplitSVEObjects()) { - assert(!FPAfterSVECalleeSaves && - "Cannot use FPAfterSVECalleeSaves with aarch64-split-sve-objects"); - assert(!AFL.canUseRedZone(MF) && - "Cannot use redzone with aarch64-split-sve-objects"); - // TODO: Handle HasWinCFI/NeedsWinCFI? - assert(!NeedsWinCFI && - "WinCFI with aarch64-split-sve-objects is not supported"); - - // Split ZPR and PPR allocation. - // Allocate PPR callee saves - allocateStackSpace(*PPRCalleeSavesBegin, 0, PPRCalleeSavesSize, + allocateStackSpace(PPRRange.Begin, 0, AllocateBeforePPRs, EmitAsyncCFI && !HasFP, CFAOffset, - MFI.hasVarSizedObjects() || ZPRCalleeSavesSize || - ZPRLocalsSize || PPRLocalsSize); - CFAOffset += PPRCalleeSavesSize; - - // Allocate PPR locals + ZPR callee saves - assert(PPRCalleeSavesEnd == ZPRCalleeSavesBegin && + MFI.hasVarSizedObjects() || AllocateAfterPPRs || + ZPR.LocalsSize || NonSVELocalsSize); + CFAOffset += AllocateBeforePPRs; + assert(PPRRange.End == ZPRRange.Begin && "Expected ZPR callee saves after PPR locals"); - allocateStackSpace(*PPRCalleeSavesEnd, RealignmentPadding, - PPRLocalsSize + ZPRCalleeSavesSize, - EmitAsyncCFI && !HasFP, CFAOffset, - MFI.hasVarSizedObjects() || ZPRLocalsSize); - CFAOffset += PPRLocalsSize + ZPRCalleeSavesSize; - - // Allocate ZPR locals - allocateStackSpace(*ZPRCalleeSavesEnd, RealignmentPadding, - ZPRLocalsSize + StackOffset::getFixed(NumBytes), + allocateStackSpace(PPRRange.End, RealignmentPadding, AllocateAfterPPRs, EmitAsyncCFI && !HasFP, CFAOffset, - MFI.hasVarSizedObjects()); + MFI.hasVarSizedObjects() || ZPR.LocalsSize || + NonSVELocalsSize); + CFAOffset += AllocateAfterPPRs; } else { - // Allocate space for the callee saves (if any). - StackOffset LocalsSize = - PPRLocalsSize + ZPRLocalsSize + StackOffset::getFixed(NumBytes); - if (!FPAfterSVECalleeSaves) - allocateStackSpace(AfterGPRSavesI, 0, SVECalleeSavesSize, - EmitAsyncCFI && !HasFP, CFAOffset, - MFI.hasVarSizedObjects() || LocalsSize); + assert(SVELayout == SVEStackLayout::CalleeSavesAboveFrameRecord); + // Note: With CalleeSavesAboveFrameRecord, the SVE CS have already been + // allocated (and separate PPR locals are not supported, all SVE locals, + // both PPR and ZPR, are within the ZPR locals area). + assert(!PPR.LocalsSize && "Unexpected PPR locals!"); CFAOffset += SVECalleeSavesSize; + } - // Allocate space for the rest of the frame including SVE locals. Align the - // stack as necessary. 
- assert(!(AFL.canUseRedZone(MF) && NeedsRealignment) && - "Cannot use redzone with stack realignment"); - if (!AFL.canUseRedZone(MF)) { - // FIXME: in the case of dynamic re-alignment, NumBytes doesn't have - // the correct value here, as NumBytes also includes padding bytes, - // which shouldn't be counted here. - StackOffset SVELocalsSize = PPRLocalsSize + ZPRLocalsSize; - allocateStackSpace(AfterSVESavesI, RealignmentPadding, - SVELocalsSize + StackOffset::getFixed(NumBytes), - EmitAsyncCFI && !HasFP, CFAOffset, - MFI.hasVarSizedObjects()); - } + // Allocate space for the rest of the frame including ZPR locals. Align the + // stack as necessary. + assert(!(AFL.canUseRedZone(MF) && NeedsRealignment) && + "Cannot use redzone with stack realignment"); + if (!AFL.canUseRedZone(MF)) { + // FIXME: in the case of dynamic re-alignment, NumBytes doesn't have the + // correct value here, as NumBytes also includes padding bytes, which + // shouldn't be counted here. + allocateStackSpace( + AfterSVESavesI, RealignmentPadding, ZPR.LocalsSize + NonSVELocalsSize, + EmitAsyncCFI && !HasFP, CFAOffset, MFI.hasVarSizedObjects()); } // If we need a base pointer, set it up here. It's whatever the value of the - // stack pointer is at this point. Any variable size objects will be allocated - // after this, so we can still use the base pointer to reference locals. + // stack pointer is at this point. Any variable size objects will be + // allocated after this, so we can still use the base pointer to reference + // locals. // // FIXME: Clarify FrameSetup flags here. // Note: Use emitFrameOffset() like above for FP if the FrameSetup flag is @@ -1270,7 +1278,9 @@ void AArch64PrologueEmitter::emitCalleeSavedSVELocations( StackOffset::getScalable(MFI.getObjectOffset(FI)) - StackOffset::getFixed(AFI->getCalleeSavedStackSize(MFI)); - if (AFI->hasSplitSVEObjects() && + // The scalable vectors are below (lower address) the scalable predicates + // with split SVE objects, so we must subtract the size of the predicates. + if (SVELayout == SVEStackLayout::Split && MFI.getStackID(FI) == TargetStackID::ScalableVector) Offset -= PPRStackSize; @@ -1349,13 +1359,10 @@ void AArch64EpilogueEmitter::emitEpilogue() { return; } - bool FPAfterSVECalleeSaves = - Subtarget.isTargetWindows() && AFI->getSVECalleeSavedStackSize(); - bool CombineSPBump = shouldCombineCSRLocalStackBump(NumBytes); // Assume we can't combine the last pop with the sp restore. 
bool CombineAfterCSRBump = false; - if (FPAfterSVECalleeSaves) { + if (SVELayout == SVEStackLayout::CalleeSavesAboveFrameRecord) { AfterCSRPopSize += FixedObject; } else if (!CombineSPBump && PrologueSaveSize != 0) { MachineBasicBlock::iterator Pop = std::prev(MBB.getFirstTerminator()); @@ -1390,7 +1397,8 @@ void AArch64EpilogueEmitter::emitEpilogue() { while (FirstGPRRestoreI != Begin) { --FirstGPRRestoreI; if (!FirstGPRRestoreI->getFlag(MachineInstr::FrameDestroy) || - (!FPAfterSVECalleeSaves && isPartOfSVECalleeSaves(FirstGPRRestoreI))) { + (SVELayout != SVEStackLayout::CalleeSavesAboveFrameRecord && + isPartOfSVECalleeSaves(FirstGPRRestoreI))) { ++FirstGPRRestoreI; break; } else if (CombineSPBump) @@ -1414,13 +1422,9 @@ void AArch64EpilogueEmitter::emitEpilogue() { if (HasFP && AFI->hasSwiftAsyncContext()) emitSwiftAsyncContextFramePointer(EpilogueEndI, DL); - StackOffset ZPRStackSize = AFL.getZPRStackSize(MF); - StackOffset PPRStackSize = AFL.getPPRStackSize(MF); - StackOffset SVEStackSize = ZPRStackSize + PPRStackSize; - // If there is a single SP update, insert it before the ret and we're done. if (CombineSPBump) { - assert(!SVEStackSize && "Cannot combine SP bump with SVE"); + assert(!AFI->hasSVEStackSize() && "Cannot combine SP bump with SVE"); // When we are about to restore the CSRs, the CFA register is SP again. if (EmitCFI && HasFP) @@ -1437,188 +1441,122 @@ void AArch64EpilogueEmitter::emitEpilogue() { NumBytes -= PrologueSaveSize; assert(NumBytes >= 0 && "Negative stack allocation size!?"); - if (!AFI->hasSplitSVEObjects()) { - // Process the SVE callee-saves to determine what space needs to be - // deallocated. - StackOffset DeallocateBefore = {}, DeallocateAfter = SVEStackSize; - MachineBasicBlock::iterator RestoreBegin = FirstGPRRestoreI, - RestoreEnd = FirstGPRRestoreI; - int64_t ZPRCalleeSavedSize = AFI->getZPRCalleeSavedStackSize(); - int64_t PPRCalleeSavedSize = AFI->getPPRCalleeSavedStackSize(); - int64_t SVECalleeSavedSize = ZPRCalleeSavedSize + PPRCalleeSavedSize; - - if (SVECalleeSavedSize) { - if (FPAfterSVECalleeSaves) - RestoreEnd = MBB.getFirstTerminator(); - - RestoreBegin = std::prev(RestoreEnd); - while (RestoreBegin != MBB.begin() && - isPartOfSVECalleeSaves(std::prev(RestoreBegin))) - --RestoreBegin; - - assert(isPartOfSVECalleeSaves(RestoreBegin) && - isPartOfSVECalleeSaves(std::prev(RestoreEnd)) && - "Unexpected instruction"); - - StackOffset CalleeSavedSizeAsOffset = - StackOffset::getScalable(SVECalleeSavedSize); - DeallocateBefore = SVEStackSize - CalleeSavedSizeAsOffset; - DeallocateAfter = CalleeSavedSizeAsOffset; + auto [PPR, ZPR] = getSVEStackFrameSizes(); + auto [PPRRange, ZPRRange] = partitionSVECS( + MBB, + SVELayout == SVEStackLayout::CalleeSavesAboveFrameRecord + ? MBB.getFirstTerminator() + : FirstGPRRestoreI, + PPR.CalleeSavesSize, ZPR.CalleeSavesSize, /*IsEpilogue=*/true); + + StackOffset SVECalleeSavesSize = ZPR.CalleeSavesSize + PPR.CalleeSavesSize; + StackOffset SVEStackSize = + SVECalleeSavesSize + PPR.LocalsSize + ZPR.LocalsSize; + MachineBasicBlock::iterator RestoreBegin = ZPRRange.Begin; + MachineBasicBlock::iterator RestoreEnd = PPRRange.End; + + // Deallocate the SVE area. + if (SVELayout == SVEStackLayout::CalleeSavesAboveFrameRecord) { + StackOffset SVELocalsSize = ZPR.LocalsSize + PPR.LocalsSize; + // If the callee-save area is before FP, restoring the FP implicitly + // deallocates non-callee-save SVE allocations. Otherwise, deallocate them + // explicitly. 
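+    // (An SP-relative adjustment is only possible when no realignment padding
+    // or variable-sized objects sit between the SP and the SVE area;
+    // otherwise the FP restore below covers the deallocation.)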
+ if (!AFI->isStackRealigned() && !MFI.hasVarSizedObjects()) { + emitFrameOffset(MBB, FirstGPRRestoreI, DL, AArch64::SP, AArch64::SP, + SVELocalsSize, TII, MachineInstr::FrameDestroy, false, + NeedsWinCFI, &HasWinCFI); } - // Deallocate the SVE area. - if (FPAfterSVECalleeSaves) { - // If the callee-save area is before FP, restoring the FP implicitly - // deallocates non-callee-save SVE allocations. Otherwise, deallocate - // them explicitly. - if (!AFI->isStackRealigned() && !MFI.hasVarSizedObjects()) { - emitFrameOffset(MBB, FirstGPRRestoreI, DL, AArch64::SP, AArch64::SP, - DeallocateBefore, TII, MachineInstr::FrameDestroy, - false, NeedsWinCFI, &HasWinCFI); - } + // Deallocate callee-save non-SVE registers. + emitFrameOffset(MBB, RestoreBegin, DL, AArch64::SP, AArch64::SP, + StackOffset::getFixed(AFI->getCalleeSavedStackSize()), TII, + MachineInstr::FrameDestroy, false, NeedsWinCFI, &HasWinCFI); - // Deallocate callee-save non-SVE registers. - emitFrameOffset(MBB, RestoreBegin, DL, AArch64::SP, AArch64::SP, - StackOffset::getFixed(AFI->getCalleeSavedStackSize()), - TII, MachineInstr::FrameDestroy, false, NeedsWinCFI, - &HasWinCFI); - - // Deallocate fixed objects. - emitFrameOffset(MBB, RestoreEnd, DL, AArch64::SP, AArch64::SP, - StackOffset::getFixed(FixedObject), TII, - MachineInstr::FrameDestroy, false, NeedsWinCFI, - &HasWinCFI); - - // Deallocate callee-save SVE registers. - emitFrameOffset(MBB, RestoreEnd, DL, AArch64::SP, AArch64::SP, - DeallocateAfter, TII, MachineInstr::FrameDestroy, false, - NeedsWinCFI, &HasWinCFI); - } else if (SVEStackSize) { - int64_t SVECalleeSavedSize = AFI->getSVECalleeSavedStackSize(); - // If we have stack realignment or variable-sized objects we must use the - // FP to restore SVE callee saves (as there is an unknown amount of - // data/padding between the SP and SVE CS area). - Register BaseForSVEDealloc = - (AFI->isStackRealigned() || MFI.hasVarSizedObjects()) ? AArch64::FP - : AArch64::SP; - if (SVECalleeSavedSize && BaseForSVEDealloc == AArch64::FP) { - Register CalleeSaveBase = AArch64::FP; - if (int64_t CalleeSaveBaseOffset = - AFI->getCalleeSaveBaseToFrameRecordOffset()) { - // If we have have an non-zero offset to the non-SVE CS base we need - // to compute the base address by subtracting the offest in a - // temporary register first (to avoid briefly deallocating the SVE - // CS). - CalleeSaveBase = MBB.getParent()->getRegInfo().createVirtualRegister( - &AArch64::GPR64RegClass); - emitFrameOffset(MBB, RestoreBegin, DL, CalleeSaveBase, AArch64::FP, - StackOffset::getFixed(-CalleeSaveBaseOffset), TII, - MachineInstr::FrameDestroy); - } - // The code below will deallocate the stack space space by moving the - // SP to the start of the SVE callee-save area. - emitFrameOffset(MBB, RestoreBegin, DL, AArch64::SP, CalleeSaveBase, - StackOffset::getScalable(-SVECalleeSavedSize), TII, + // Deallocate fixed objects. + emitFrameOffset(MBB, RestoreEnd, DL, AArch64::SP, AArch64::SP, + StackOffset::getFixed(FixedObject), TII, + MachineInstr::FrameDestroy, false, NeedsWinCFI, &HasWinCFI); + + // Deallocate callee-save SVE registers. + emitFrameOffset(MBB, RestoreEnd, DL, AArch64::SP, AArch64::SP, + SVECalleeSavesSize, TII, MachineInstr::FrameDestroy, false, + NeedsWinCFI, &HasWinCFI); + } else if (AFI->hasSVEStackSize()) { + // If we have stack realignment or variable-sized objects we must use the FP + // to restore SVE callee saves (as there is an unknown amount of + // data/padding between the SP and SVE CS area). 
+  Register BaseForSVEDealloc =
+      (AFI->isStackRealigned() || MFI.hasVarSizedObjects()) ? AArch64::FP
+                                                            : AArch64::SP;
+  if (SVECalleeSavesSize && BaseForSVEDealloc == AArch64::FP) {
+    // TODO: Support stack realignment and variable-sized objects.
+    assert(
+        SVELayout != SVEStackLayout::Split &&
+        "unexpected stack realignment or variable sized objects with split "
+        "SVE stack objects");
+
+    Register CalleeSaveBase = AArch64::FP;
+    if (int64_t CalleeSaveBaseOffset =
+            AFI->getCalleeSaveBaseToFrameRecordOffset()) {
+      // If we have a non-zero offset to the non-SVE CS base, we need to
+      // compute the base address by subtracting the offset in a temporary
+      // register first (to avoid briefly deallocating the SVE CS).
+      CalleeSaveBase = MBB.getParent()->getRegInfo().createVirtualRegister(
+          &AArch64::GPR64RegClass);
+      emitFrameOffset(MBB, RestoreBegin, DL, CalleeSaveBase, AArch64::FP,
+                      StackOffset::getFixed(-CalleeSaveBaseOffset), TII,
                      MachineInstr::FrameDestroy);
-    } else if (BaseForSVEDealloc == AArch64::SP) {
-      if (SVECalleeSavedSize) {
-        // Deallocate the non-SVE locals first before we can deallocate (and
-        // restore callee saves) from the SVE area.
-        emitFrameOffset(MBB, RestoreBegin, DL, AArch64::SP, AArch64::SP,
-                        StackOffset::getFixed(NumBytes), TII,
-                        MachineInstr::FrameDestroy, false, NeedsWinCFI,
-                        &HasWinCFI, EmitCFI && !HasFP,
-                        SVEStackSize + StackOffset::getFixed(
-                                           NumBytes + PrologueSaveSize));
-        NumBytes = 0;
-      }
-
-      emitFrameOffset(MBB, RestoreBegin, DL, AArch64::SP, AArch64::SP,
-                      DeallocateBefore, TII, MachineInstr::FrameDestroy,
-                      false, NeedsWinCFI, &HasWinCFI, EmitCFI && !HasFP,
-                      SVEStackSize +
-                          StackOffset::getFixed(NumBytes + PrologueSaveSize));
-
-      emitFrameOffset(MBB, RestoreEnd, DL, AArch64::SP, AArch64::SP,
-                      DeallocateAfter, TII, MachineInstr::FrameDestroy, false,
-                      NeedsWinCFI, &HasWinCFI, EmitCFI && !HasFP,
-                      DeallocateAfter +
-                          StackOffset::getFixed(NumBytes + PrologueSaveSize));
+    }
+    // The code below will deallocate the stack space by moving the SP
+    // to the start of the SVE callee-save area.
+    emitFrameOffset(MBB, RestoreBegin, DL, AArch64::SP, CalleeSaveBase,
+                    -SVECalleeSavesSize, TII, MachineInstr::FrameDestroy);
+  } else if (BaseForSVEDealloc == AArch64::SP) {
+    auto CFAOffset =
+        SVEStackSize + StackOffset::getFixed(NumBytes + PrologueSaveSize);
+
+    if (SVECalleeSavesSize) {
+      // Deallocate the non-SVE locals first before we can deallocate (and
+      // restore callee saves) from the SVE area.
+      auto NonSVELocals = StackOffset::getFixed(NumBytes);
+      emitFrameOffset(MBB, ZPRRange.Begin, DL, AArch64::SP, AArch64::SP,
+                      NonSVELocals, TII, MachineInstr::FrameDestroy, false,
+                      NeedsWinCFI, &HasWinCFI, EmitCFI && !HasFP, CFAOffset);
+      CFAOffset -= NonSVELocals;
+      NumBytes = 0;
     }
-    if (EmitCFI)
-      emitCalleeSavedSVERestores(RestoreEnd);
-  }
-  } else if (AFI->hasSplitSVEObjects() && SVEStackSize) {
-    // TODO: Support stack realigment and variable-sized objects.
-    assert(!AFI->isStackRealigned() && !MFI.hasVarSizedObjects() &&
-           "unexpected stack realignment or variable sized objects with split "
-           "SVE stack objects");
-    // SplitSVEObjects. Determine the sizes and starts/ends of the ZPR and PPR
-    // areas. 
-    auto ZPRCalleeSavedSize =
-        StackOffset::getScalable(AFI->getZPRCalleeSavedStackSize());
-    auto PPRCalleeSavedSize =
-        StackOffset::getScalable(AFI->getPPRCalleeSavedStackSize());
-    StackOffset PPRLocalsSize = PPRStackSize - PPRCalleeSavedSize;
-    StackOffset ZPRLocalsSize = ZPRStackSize - ZPRCalleeSavedSize;
-
-    MachineBasicBlock::iterator PPRRestoreBegin = FirstGPRRestoreI,
-                                PPRRestoreEnd = FirstGPRRestoreI;
-    if (PPRCalleeSavedSize) {
-      PPRRestoreBegin = std::prev(PPRRestoreEnd);
-      while (PPRRestoreBegin != MBB.begin() &&
-             isPartOfPPRCalleeSaves(std::prev(PPRRestoreBegin)))
-        --PPRRestoreBegin;
-    }
-
-    MachineBasicBlock::iterator ZPRRestoreBegin = PPRRestoreBegin,
-                                ZPRRestoreEnd = PPRRestoreBegin;
-    if (ZPRCalleeSavedSize) {
-      ZPRRestoreBegin = std::prev(ZPRRestoreEnd);
-      while (ZPRRestoreBegin != MBB.begin() &&
-             isPartOfZPRCalleeSaves(std::prev(ZPRRestoreBegin)))
-        --ZPRRestoreBegin;
-    }
-
-    auto CFAOffset =
-        SVEStackSize + StackOffset::getFixed(NumBytes + PrologueSaveSize);
-    if (PPRCalleeSavedSize || ZPRCalleeSavedSize) {
-      // Deallocate the non-SVE locals first before we can deallocate (and
-      // restore callee saves) from the SVE area.
-      auto NonSVELocals = StackOffset::getFixed(NumBytes);
-      emitFrameOffset(MBB, ZPRRestoreBegin, DL, AArch64::SP, AArch64::SP,
-                      NonSVELocals, TII, MachineInstr::FrameDestroy, false,
-                      false, nullptr, EmitCFI && !HasFP, CFAOffset);
-      NumBytes = 0;
-      CFAOffset -= NonSVELocals;
-    }
+    if (ZPR.LocalsSize) {
+      emitFrameOffset(MBB, ZPRRange.Begin, DL, AArch64::SP, AArch64::SP,
+                      ZPR.LocalsSize, TII, MachineInstr::FrameDestroy, false,
+                      NeedsWinCFI, &HasWinCFI, EmitCFI && !HasFP, CFAOffset);
+      CFAOffset -= ZPR.LocalsSize;
+    }
 
-    if (ZPRLocalsSize) {
-      emitFrameOffset(MBB, ZPRRestoreBegin, DL, AArch64::SP, AArch64::SP,
-                      ZPRLocalsSize, TII, MachineInstr::FrameDestroy, false,
-                      false, nullptr, EmitCFI && !HasFP, CFAOffset);
-      CFAOffset -= ZPRLocalsSize;
-    }
+    StackOffset SVECalleeSavesToDealloc = SVECalleeSavesSize;
+    if (SVELayout == SVEStackLayout::Split &&
+        (PPR.LocalsSize || ZPR.CalleeSavesSize)) {
+      assert(PPRRange.Begin == ZPRRange.End &&
+             "Expected PPR restores after ZPR");
+      emitFrameOffset(MBB, PPRRange.Begin, DL, AArch64::SP, AArch64::SP,
+                      PPR.LocalsSize + ZPR.CalleeSavesSize, TII,
+                      MachineInstr::FrameDestroy, false, NeedsWinCFI,
+                      &HasWinCFI, EmitCFI && !HasFP, CFAOffset);
+      CFAOffset -= PPR.LocalsSize + ZPR.CalleeSavesSize;
+      SVECalleeSavesToDealloc -= ZPR.CalleeSavesSize;
+    }
 
-    if (PPRLocalsSize || ZPRCalleeSavedSize) {
-      assert(PPRRestoreBegin == ZPRRestoreEnd &&
-             "Expected PPR restores after ZPR");
-      emitFrameOffset(MBB, PPRRestoreBegin, DL, AArch64::SP, AArch64::SP,
-                      PPRLocalsSize + ZPRCalleeSavedSize, TII,
-                      MachineInstr::FrameDestroy, false, false, nullptr,
-                      EmitCFI && !HasFP, CFAOffset);
-      CFAOffset -= PPRLocalsSize + ZPRCalleeSavedSize;
-    }
-    if (PPRCalleeSavedSize) {
-      emitFrameOffset(MBB, PPRRestoreEnd, DL, AArch64::SP, AArch64::SP,
-                      PPRCalleeSavedSize, TII, MachineInstr::FrameDestroy,
-                      false, false, nullptr, EmitCFI && !HasFP, CFAOffset);
+    // With split SVE, this deallocates the PPRs; otherwise, it deallocates
+    // the ZPRs + PPRs:
+    if (SVECalleeSavesToDealloc)
+      emitFrameOffset(MBB, PPRRange.End, DL, AArch64::SP, AArch64::SP,
+                      SVECalleeSavesToDealloc, TII,
+                      MachineInstr::FrameDestroy, false, NeedsWinCFI,
+                      &HasWinCFI, EmitCFI && !HasFP, CFAOffset);
   }
 
-  // We only emit CFI information for ZPRs so emit CFI after the ZPR restores. 
if (EmitCFI) - emitCalleeSavedSVERestores(ZPRRestoreEnd); + emitCalleeSavedSVERestores( + SVELayout == SVEStackLayout::Split ? ZPRRange.End : PPRRange.End); } if (!HasFP) { diff --git a/llvm/lib/Target/AArch64/AArch64PrologueEpilogue.h b/llvm/lib/Target/AArch64/AArch64PrologueEpilogue.h index a1c9b34..bccadda 100644 --- a/llvm/lib/Target/AArch64/AArch64PrologueEpilogue.h +++ b/llvm/lib/Target/AArch64/AArch64PrologueEpilogue.h @@ -27,11 +27,23 @@ class AArch64Subtarget; class AArch64FunctionInfo; class AArch64FrameLowering; +struct SVEFrameSizes { + struct { + StackOffset CalleeSavesSize, LocalsSize; + } PPR, ZPR; +}; + class AArch64PrologueEpilogueCommon { public: AArch64PrologueEpilogueCommon(MachineFunction &MF, MachineBasicBlock &MBB, const AArch64FrameLowering &AFL); + enum class SVEStackLayout { + Default, + Split, + CalleeSavesAboveFrameRecord, + }; + protected: bool requiresGetVGCall() const; @@ -53,6 +65,8 @@ protected: bool shouldCombineCSRLocalStackBump(uint64_t StackBumpBytes) const; + SVEFrameSizes getSVEStackFrameSizes() const; + MachineFunction &MF; MachineBasicBlock &MBB; @@ -68,6 +82,7 @@ protected: bool IsFunclet = false; // Note: Set in derived constructors. bool NeedsWinCFI = false; // Note: Can be changed in emitFramePointerSetup. bool HomPrologEpilog = false; // Note: Set in derived constructors. + SVEStackLayout SVELayout = SVEStackLayout::Default; // Note: "HasWinCFI" is mutable as it can change in any "emit" function. mutable bool HasWinCFI = false; diff --git a/llvm/lib/Target/AArch64/AArch64SystemOperands.td b/llvm/lib/Target/AArch64/AArch64SystemOperands.td index 65b752e..9438917 100644 --- a/llvm/lib/Target/AArch64/AArch64SystemOperands.td +++ b/llvm/lib/Target/AArch64/AArch64SystemOperands.td @@ -816,8 +816,8 @@ def : BTI<"jc", 0b110>; // TLBI (translation lookaside buffer invalidate) instruction options. 
//===----------------------------------------------------------------------===// -class TLBIEntry<string name, bits<3> op1, bits<4> crn, bits<4> crm, - bits<3> op2, bit needsreg> { +class TLBICommon<string name, bits<3> op1, bits<4> crn, bits<4> crm, + bits<3> op2, bit needsreg> { string Name = name; bits<14> Encoding; let Encoding{13-11} = op1; @@ -830,131 +830,150 @@ class TLBIEntry<string name, bits<3> op1, bits<4> crn, bits<4> crm, code RequiresStr = [{ { }] # !interleave(Requires # ExtraRequires, [{, }]) # [{ } }]; } -def TLBITable : GenericTable { - let FilterClass = "TLBIEntry"; - let CppTypeName = "TLBI"; - let Fields = ["Name", "Encoding", "NeedsReg", "RequiresStr"]; - - let PrimaryKey = ["Encoding"]; - let PrimaryKeyName = "lookupTLBIByEncoding"; +class TLBIEntry<string name, bits<3> op1, bits<4> crn, bits<4> crm, + bits<3> op2, bit needsreg> + : TLBICommon<name, op1, crn, crm, op2, needsreg>; + +class TLBIPEntry<string name, bits<3> op1, bits<4> crn, bits<4> crm, + bits<3> op2, bit needsreg> + : TLBICommon<name, op1, crn, crm, op2, needsreg>; + +multiclass TLBITableBase { + def NAME # Table : GenericTable { + let FilterClass = NAME # "Entry"; + let CppTypeName = NAME; + let Fields = ["Name", "Encoding", "NeedsReg", "RequiresStr"]; + let PrimaryKey = ["Encoding"]; + let PrimaryKeyName = "lookup" # NAME # "ByEncoding"; + } + def lookup # NAME # ByName : SearchIndex { + let Table = !cast<GenericTable>(NAME # "Table"); + let Key = ["Name"]; + } } -def lookupTLBIByName : SearchIndex { - let Table = TLBITable; - let Key = ["Name"]; -} +defm TLBI : TLBITableBase; +defm TLBIP : TLBITableBase; -multiclass TLBI<string name, bits<3> op1, bits<4> crn, bits<4> crm, +multiclass TLBI<string name, bit hasTLBIP, bits<3> op1, bits<4> crn, bits<4> crm, bits<3> op2, bit needsreg = 1> { def : TLBIEntry<name, op1, crn, crm, op2, needsreg>; def : TLBIEntry<!strconcat(name, "nXS"), op1, crn, crm, op2, needsreg> { let Encoding{7} = 1; let ExtraRequires = ["AArch64::FeatureXS"]; } + if !eq(hasTLBIP, true) then { + def : TLBIPEntry<name, op1, crn, crm, op2, needsreg>; + def : TLBIPEntry<!strconcat(name, "nXS"), op1, crn, crm, op2, needsreg> { + let Encoding{7} = 1; + let ExtraRequires = ["AArch64::FeatureXS"]; + } + } } -defm : TLBI<"IPAS2E1IS", 0b100, 0b1000, 0b0000, 0b001>; -defm : TLBI<"IPAS2LE1IS", 0b100, 0b1000, 0b0000, 0b101>; -defm : TLBI<"VMALLE1IS", 0b000, 0b1000, 0b0011, 0b000, 0>; -defm : TLBI<"ALLE2IS", 0b100, 0b1000, 0b0011, 0b000, 0>; -defm : TLBI<"ALLE3IS", 0b110, 0b1000, 0b0011, 0b000, 0>; -defm : TLBI<"VAE1IS", 0b000, 0b1000, 0b0011, 0b001>; -defm : TLBI<"VAE2IS", 0b100, 0b1000, 0b0011, 0b001>; -defm : TLBI<"VAE3IS", 0b110, 0b1000, 0b0011, 0b001>; -defm : TLBI<"ASIDE1IS", 0b000, 0b1000, 0b0011, 0b010>; -defm : TLBI<"VAAE1IS", 0b000, 0b1000, 0b0011, 0b011>; -defm : TLBI<"ALLE1IS", 0b100, 0b1000, 0b0011, 0b100, 0>; -defm : TLBI<"VALE1IS", 0b000, 0b1000, 0b0011, 0b101>; -defm : TLBI<"VALE2IS", 0b100, 0b1000, 0b0011, 0b101>; -defm : TLBI<"VALE3IS", 0b110, 0b1000, 0b0011, 0b101>; -defm : TLBI<"VMALLS12E1IS", 0b100, 0b1000, 0b0011, 0b110, 0>; -defm : TLBI<"VAALE1IS", 0b000, 0b1000, 0b0011, 0b111>; -defm : TLBI<"IPAS2E1", 0b100, 0b1000, 0b0100, 0b001>; -defm : TLBI<"IPAS2LE1", 0b100, 0b1000, 0b0100, 0b101>; -defm : TLBI<"VMALLE1", 0b000, 0b1000, 0b0111, 0b000, 0>; -defm : TLBI<"ALLE2", 0b100, 0b1000, 0b0111, 0b000, 0>; -defm : TLBI<"ALLE3", 0b110, 0b1000, 0b0111, 0b000, 0>; -defm : TLBI<"VAE1", 0b000, 0b1000, 0b0111, 0b001>; -defm : TLBI<"VAE2", 0b100, 0b1000, 0b0111, 0b001>; -defm : 
TLBI<"VAE3", 0b110, 0b1000, 0b0111, 0b001>; -defm : TLBI<"ASIDE1", 0b000, 0b1000, 0b0111, 0b010>; -defm : TLBI<"VAAE1", 0b000, 0b1000, 0b0111, 0b011>; -defm : TLBI<"ALLE1", 0b100, 0b1000, 0b0111, 0b100, 0>; -defm : TLBI<"VALE1", 0b000, 0b1000, 0b0111, 0b101>; -defm : TLBI<"VALE2", 0b100, 0b1000, 0b0111, 0b101>; -defm : TLBI<"VALE3", 0b110, 0b1000, 0b0111, 0b101>; -defm : TLBI<"VMALLS12E1", 0b100, 0b1000, 0b0111, 0b110, 0>; -defm : TLBI<"VAALE1", 0b000, 0b1000, 0b0111, 0b111>; +// hasTLBIP op1 CRn CRm op2 needsreg +defm : TLBI<"IPAS2E1IS", 1, 0b100, 0b1000, 0b0000, 0b001>; +defm : TLBI<"IPAS2LE1IS", 1, 0b100, 0b1000, 0b0000, 0b101>; +defm : TLBI<"VMALLE1IS", 0, 0b000, 0b1000, 0b0011, 0b000, 0>; +defm : TLBI<"ALLE2IS", 0, 0b100, 0b1000, 0b0011, 0b000, 0>; +defm : TLBI<"ALLE3IS", 0, 0b110, 0b1000, 0b0011, 0b000, 0>; +defm : TLBI<"VAE1IS", 1, 0b000, 0b1000, 0b0011, 0b001>; +defm : TLBI<"VAE2IS", 1, 0b100, 0b1000, 0b0011, 0b001>; +defm : TLBI<"VAE3IS", 1, 0b110, 0b1000, 0b0011, 0b001>; +defm : TLBI<"ASIDE1IS", 0, 0b000, 0b1000, 0b0011, 0b010>; +defm : TLBI<"VAAE1IS", 1, 0b000, 0b1000, 0b0011, 0b011>; +defm : TLBI<"ALLE1IS", 0, 0b100, 0b1000, 0b0011, 0b100, 0>; +defm : TLBI<"VALE1IS", 1, 0b000, 0b1000, 0b0011, 0b101>; +defm : TLBI<"VALE2IS", 1, 0b100, 0b1000, 0b0011, 0b101>; +defm : TLBI<"VALE3IS", 1, 0b110, 0b1000, 0b0011, 0b101>; +defm : TLBI<"VMALLS12E1IS", 0, 0b100, 0b1000, 0b0011, 0b110, 0>; +defm : TLBI<"VAALE1IS", 1, 0b000, 0b1000, 0b0011, 0b111>; +defm : TLBI<"IPAS2E1", 1, 0b100, 0b1000, 0b0100, 0b001>; +defm : TLBI<"IPAS2LE1", 1, 0b100, 0b1000, 0b0100, 0b101>; +defm : TLBI<"VMALLE1", 0, 0b000, 0b1000, 0b0111, 0b000, 0>; +defm : TLBI<"ALLE2", 0, 0b100, 0b1000, 0b0111, 0b000, 0>; +defm : TLBI<"ALLE3", 0, 0b110, 0b1000, 0b0111, 0b000, 0>; +defm : TLBI<"VAE1", 1, 0b000, 0b1000, 0b0111, 0b001>; +defm : TLBI<"VAE2", 1, 0b100, 0b1000, 0b0111, 0b001>; +defm : TLBI<"VAE3", 1, 0b110, 0b1000, 0b0111, 0b001>; +defm : TLBI<"ASIDE1", 0, 0b000, 0b1000, 0b0111, 0b010>; +defm : TLBI<"VAAE1", 1, 0b000, 0b1000, 0b0111, 0b011>; +defm : TLBI<"ALLE1", 0, 0b100, 0b1000, 0b0111, 0b100, 0>; +defm : TLBI<"VALE1", 1, 0b000, 0b1000, 0b0111, 0b101>; +defm : TLBI<"VALE2", 1, 0b100, 0b1000, 0b0111, 0b101>; +defm : TLBI<"VALE3", 1, 0b110, 0b1000, 0b0111, 0b101>; +defm : TLBI<"VMALLS12E1", 0, 0b100, 0b1000, 0b0111, 0b110, 0>; +defm : TLBI<"VAALE1", 1, 0b000, 0b1000, 0b0111, 0b111>; // Armv8.4-A Translation Lookaside Buffer Instructions (TLBI) let Requires = ["AArch64::FeatureTLB_RMI"] in { // Armv8.4-A Outer Sharable TLB Maintenance instructions: -// op1 CRn CRm op2 -defm : TLBI<"VMALLE1OS", 0b000, 0b1000, 0b0001, 0b000, 0>; -defm : TLBI<"VAE1OS", 0b000, 0b1000, 0b0001, 0b001>; -defm : TLBI<"ASIDE1OS", 0b000, 0b1000, 0b0001, 0b010>; -defm : TLBI<"VAAE1OS", 0b000, 0b1000, 0b0001, 0b011>; -defm : TLBI<"VALE1OS", 0b000, 0b1000, 0b0001, 0b101>; -defm : TLBI<"VAALE1OS", 0b000, 0b1000, 0b0001, 0b111>; -defm : TLBI<"IPAS2E1OS", 0b100, 0b1000, 0b0100, 0b000>; -defm : TLBI<"IPAS2LE1OS", 0b100, 0b1000, 0b0100, 0b100>; -defm : TLBI<"VAE2OS", 0b100, 0b1000, 0b0001, 0b001>; -defm : TLBI<"VALE2OS", 0b100, 0b1000, 0b0001, 0b101>; -defm : TLBI<"VMALLS12E1OS", 0b100, 0b1000, 0b0001, 0b110, 0>; -defm : TLBI<"VAE3OS", 0b110, 0b1000, 0b0001, 0b001>; -defm : TLBI<"VALE3OS", 0b110, 0b1000, 0b0001, 0b101>; -defm : TLBI<"ALLE2OS", 0b100, 0b1000, 0b0001, 0b000, 0>; -defm : TLBI<"ALLE1OS", 0b100, 0b1000, 0b0001, 0b100, 0>; -defm : TLBI<"ALLE3OS", 0b110, 0b1000, 0b0001, 0b000, 0>; +// hasTLBIP op1 CRn CRm op2 needsreg +defm : TLBI<"VMALLE1OS", 
0, 0b000, 0b1000, 0b0001, 0b000, 0>; +defm : TLBI<"VAE1OS", 1, 0b000, 0b1000, 0b0001, 0b001>; +defm : TLBI<"ASIDE1OS", 0, 0b000, 0b1000, 0b0001, 0b010>; +defm : TLBI<"VAAE1OS", 1, 0b000, 0b1000, 0b0001, 0b011>; +defm : TLBI<"VALE1OS", 1, 0b000, 0b1000, 0b0001, 0b101>; +defm : TLBI<"VAALE1OS", 1, 0b000, 0b1000, 0b0001, 0b111>; +defm : TLBI<"IPAS2E1OS", 1, 0b100, 0b1000, 0b0100, 0b000>; +defm : TLBI<"IPAS2LE1OS", 1, 0b100, 0b1000, 0b0100, 0b100>; +defm : TLBI<"VAE2OS", 1, 0b100, 0b1000, 0b0001, 0b001>; +defm : TLBI<"VALE2OS", 1, 0b100, 0b1000, 0b0001, 0b101>; +defm : TLBI<"VMALLS12E1OS", 0, 0b100, 0b1000, 0b0001, 0b110, 0>; +defm : TLBI<"VAE3OS", 1, 0b110, 0b1000, 0b0001, 0b001>; +defm : TLBI<"VALE3OS", 1, 0b110, 0b1000, 0b0001, 0b101>; +defm : TLBI<"ALLE2OS", 0, 0b100, 0b1000, 0b0001, 0b000, 0>; +defm : TLBI<"ALLE1OS", 0, 0b100, 0b1000, 0b0001, 0b100, 0>; +defm : TLBI<"ALLE3OS", 0, 0b110, 0b1000, 0b0001, 0b000, 0>; // Armv8.4-A TLB Range Maintenance instructions: -// op1 CRn CRm op2 -defm : TLBI<"RVAE1", 0b000, 0b1000, 0b0110, 0b001>; -defm : TLBI<"RVAAE1", 0b000, 0b1000, 0b0110, 0b011>; -defm : TLBI<"RVALE1", 0b000, 0b1000, 0b0110, 0b101>; -defm : TLBI<"RVAALE1", 0b000, 0b1000, 0b0110, 0b111>; -defm : TLBI<"RVAE1IS", 0b000, 0b1000, 0b0010, 0b001>; -defm : TLBI<"RVAAE1IS", 0b000, 0b1000, 0b0010, 0b011>; -defm : TLBI<"RVALE1IS", 0b000, 0b1000, 0b0010, 0b101>; -defm : TLBI<"RVAALE1IS", 0b000, 0b1000, 0b0010, 0b111>; -defm : TLBI<"RVAE1OS", 0b000, 0b1000, 0b0101, 0b001>; -defm : TLBI<"RVAAE1OS", 0b000, 0b1000, 0b0101, 0b011>; -defm : TLBI<"RVALE1OS", 0b000, 0b1000, 0b0101, 0b101>; -defm : TLBI<"RVAALE1OS", 0b000, 0b1000, 0b0101, 0b111>; -defm : TLBI<"RIPAS2E1IS", 0b100, 0b1000, 0b0000, 0b010>; -defm : TLBI<"RIPAS2LE1IS", 0b100, 0b1000, 0b0000, 0b110>; -defm : TLBI<"RIPAS2E1", 0b100, 0b1000, 0b0100, 0b010>; -defm : TLBI<"RIPAS2LE1", 0b100, 0b1000, 0b0100, 0b110>; -defm : TLBI<"RIPAS2E1OS", 0b100, 0b1000, 0b0100, 0b011>; -defm : TLBI<"RIPAS2LE1OS", 0b100, 0b1000, 0b0100, 0b111>; -defm : TLBI<"RVAE2", 0b100, 0b1000, 0b0110, 0b001>; -defm : TLBI<"RVALE2", 0b100, 0b1000, 0b0110, 0b101>; -defm : TLBI<"RVAE2IS", 0b100, 0b1000, 0b0010, 0b001>; -defm : TLBI<"RVALE2IS", 0b100, 0b1000, 0b0010, 0b101>; -defm : TLBI<"RVAE2OS", 0b100, 0b1000, 0b0101, 0b001>; -defm : TLBI<"RVALE2OS", 0b100, 0b1000, 0b0101, 0b101>; -defm : TLBI<"RVAE3", 0b110, 0b1000, 0b0110, 0b001>; -defm : TLBI<"RVALE3", 0b110, 0b1000, 0b0110, 0b101>; -defm : TLBI<"RVAE3IS", 0b110, 0b1000, 0b0010, 0b001>; -defm : TLBI<"RVALE3IS", 0b110, 0b1000, 0b0010, 0b101>; -defm : TLBI<"RVAE3OS", 0b110, 0b1000, 0b0101, 0b001>; -defm : TLBI<"RVALE3OS", 0b110, 0b1000, 0b0101, 0b101>; +// hasTLBIP op1 CRn CRm op2 needsreg +defm : TLBI<"RVAE1", 1, 0b000, 0b1000, 0b0110, 0b001>; +defm : TLBI<"RVAAE1", 1, 0b000, 0b1000, 0b0110, 0b011>; +defm : TLBI<"RVALE1", 1, 0b000, 0b1000, 0b0110, 0b101>; +defm : TLBI<"RVAALE1", 1, 0b000, 0b1000, 0b0110, 0b111>; +defm : TLBI<"RVAE1IS", 1, 0b000, 0b1000, 0b0010, 0b001>; +defm : TLBI<"RVAAE1IS", 1, 0b000, 0b1000, 0b0010, 0b011>; +defm : TLBI<"RVALE1IS", 1, 0b000, 0b1000, 0b0010, 0b101>; +defm : TLBI<"RVAALE1IS", 1, 0b000, 0b1000, 0b0010, 0b111>; +defm : TLBI<"RVAE1OS", 1, 0b000, 0b1000, 0b0101, 0b001>; +defm : TLBI<"RVAAE1OS", 1, 0b000, 0b1000, 0b0101, 0b011>; +defm : TLBI<"RVALE1OS", 1, 0b000, 0b1000, 0b0101, 0b101>; +defm : TLBI<"RVAALE1OS", 1, 0b000, 0b1000, 0b0101, 0b111>; +defm : TLBI<"RIPAS2E1IS", 1, 0b100, 0b1000, 0b0000, 0b010>; +defm : TLBI<"RIPAS2LE1IS", 1, 0b100, 0b1000, 0b0000, 0b110>; +defm : TLBI<"RIPAS2E1", 1, 
0b100, 0b1000, 0b0100, 0b010>;
+defm : TLBI<"RIPAS2LE1",    1, 0b100, 0b1000, 0b0100, 0b110>;
+defm : TLBI<"RIPAS2E1OS",   1, 0b100, 0b1000, 0b0100, 0b011>;
+defm : TLBI<"RIPAS2LE1OS",  1, 0b100, 0b1000, 0b0100, 0b111>;
+defm : TLBI<"RVAE2",        1, 0b100, 0b1000, 0b0110, 0b001>;
+defm : TLBI<"RVALE2",       1, 0b100, 0b1000, 0b0110, 0b101>;
+defm : TLBI<"RVAE2IS",      1, 0b100, 0b1000, 0b0010, 0b001>;
+defm : TLBI<"RVALE2IS",     1, 0b100, 0b1000, 0b0010, 0b101>;
+defm : TLBI<"RVAE2OS",      1, 0b100, 0b1000, 0b0101, 0b001>;
+defm : TLBI<"RVALE2OS",     1, 0b100, 0b1000, 0b0101, 0b101>;
+defm : TLBI<"RVAE3",        1, 0b110, 0b1000, 0b0110, 0b001>;
+defm : TLBI<"RVALE3",       1, 0b110, 0b1000, 0b0110, 0b101>;
+defm : TLBI<"RVAE3IS",      1, 0b110, 0b1000, 0b0010, 0b001>;
+defm : TLBI<"RVALE3IS",     1, 0b110, 0b1000, 0b0010, 0b101>;
+defm : TLBI<"RVAE3OS",      1, 0b110, 0b1000, 0b0101, 0b001>;
+defm : TLBI<"RVALE3OS",     1, 0b110, 0b1000, 0b0101, 0b101>;
 } //FeatureTLB_RMI
 
 // Armv9-A Realm Management Extension TLBI Instructions
 let Requires = ["AArch64::FeatureRME"] in {
-defm : TLBI<"RPAOS", 0b110, 0b1000, 0b0100, 0b011>;
-defm : TLBI<"RPALOS", 0b110, 0b1000, 0b0100, 0b111>;
-defm : TLBI<"PAALLOS", 0b110, 0b1000, 0b0001, 0b100, 0>;
-defm : TLBI<"PAALL", 0b110, 0b1000, 0b0111, 0b100, 0>;
+defm : TLBI<"RPAOS",   0, 0b110, 0b1000, 0b0100, 0b011>;
+defm : TLBI<"RPALOS",  0, 0b110, 0b1000, 0b0100, 0b111>;
+defm : TLBI<"PAALLOS", 0, 0b110, 0b1000, 0b0001, 0b100, 0>;
+defm : TLBI<"PAALL",   0, 0b110, 0b1000, 0b0111, 0b100, 0>;
 }
 
 // Armv9.5-A TLBI VMALL for Dirty State
 let Requires = ["AArch64::FeatureTLBIW"] in {
-//                          op1,   CRn,    CRm,   op2, needsreg
-defm : TLBI<"VMALLWS2E1",   0b100, 0b1000, 0b0110, 0b010, 0>;
-defm : TLBI<"VMALLWS2E1IS", 0b100, 0b1000, 0b0010, 0b010, 0>;
-defm : TLBI<"VMALLWS2E1OS", 0b100, 0b1000, 0b0101, 0b010, 0>;
+//              hasTLBIP,  op1,   CRn,    CRm,   op2, needsreg
+defm : TLBI<"VMALLWS2E1",   0, 0b100, 0b1000, 0b0110, 0b010, 0>;
+defm : TLBI<"VMALLWS2E1IS", 0, 0b100, 0b1000, 0b0010, 0b010, 0>;
+defm : TLBI<"VMALLWS2E1OS", 0, 0b100, 0b1000, 0b0101, 0b010, 0>;
 }
 
 //===----------------------------------------------------------------------===//
diff --git a/llvm/lib/Target/AArch64/AsmParser/AArch64AsmParser.cpp b/llvm/lib/Target/AArch64/AsmParser/AArch64AsmParser.cpp
index 3641e22..2c3870c 100644
--- a/llvm/lib/Target/AArch64/AsmParser/AArch64AsmParser.cpp
+++ b/llvm/lib/Target/AArch64/AsmParser/AArch64AsmParser.cpp
@@ -4020,23 +4020,23 @@ bool AArch64AsmParser::parseSyspAlias(StringRef Name, SMLoc NameLoc,
       if (HasnXSQualifier) {
         Op = Op.drop_back(3);
       }
-      const AArch64TLBI::TLBI *TLBIorig = AArch64TLBI::lookupTLBIByName(Op);
-      if (!TLBIorig)
+      const AArch64TLBIP::TLBIP *TLBIPorig = AArch64TLBIP::lookupTLBIPByName(Op);
+      if (!TLBIPorig)
         return TokError("invalid operand for TLBIP instruction");
-      const AArch64TLBI::TLBI TLBI(
-          TLBIorig->Name, TLBIorig->Encoding | (HasnXSQualifier ? (1 << 7) : 0),
-          TLBIorig->NeedsReg,
+      const AArch64TLBIP::TLBIP TLBIP(
+          TLBIPorig->Name, TLBIPorig->Encoding | (HasnXSQualifier ? (1 << 7) : 0),
+          TLBIPorig->NeedsReg,
           HasnXSQualifier
-              ? TLBIorig->FeaturesRequired | FeatureBitset({AArch64::FeatureXS})
-              : TLBIorig->FeaturesRequired);
-      if (!TLBI.haveFeatures(getSTI().getFeatureBits())) {
+              ? TLBIPorig->FeaturesRequired | FeatureBitset({AArch64::FeatureXS})
+              : TLBIPorig->FeaturesRequired);
+      if (!TLBIP.haveFeatures(getSTI().getFeatureBits())) {
         std::string Name =
-            std::string(TLBI.Name) + (HasnXSQualifier ? "nXS" : "");
+            std::string(TLBIP.Name) + (HasnXSQualifier ? 
"nXS" : ""); std::string Str("TLBIP " + Name + " requires: "); - setRequiredFeatureString(TLBI.getRequiredFeatures(), Str); + setRequiredFeatureString(TLBIP.getRequiredFeatures(), Str); return TokError(Str); } - createSysAlias(TLBI.Encoding, Operands, S); + createSysAlias(TLBIP.Encoding, Operands, S); } Lex(); // Eat operand. diff --git a/llvm/lib/Target/AArch64/MCTargetDesc/AArch64InstPrinter.cpp b/llvm/lib/Target/AArch64/MCTargetDesc/AArch64InstPrinter.cpp index 2552ee3..35bd244 100644 --- a/llvm/lib/Target/AArch64/MCTargetDesc/AArch64InstPrinter.cpp +++ b/llvm/lib/Target/AArch64/MCTargetDesc/AArch64InstPrinter.cpp @@ -1066,12 +1066,13 @@ bool AArch64InstPrinter::printSyspAlias(const MCInst *MI, Encoding &= ~(1 << 7); } - const AArch64TLBI::TLBI *TLBI = AArch64TLBI::lookupTLBIByEncoding(Encoding); - if (!TLBI || !TLBI->haveFeatures(STI.getFeatureBits())) + const AArch64TLBIP::TLBIP *TLBIP = + AArch64TLBIP::lookupTLBIPByEncoding(Encoding); + if (!TLBIP || !TLBIP->haveFeatures(STI.getFeatureBits())) return false; Ins = "tlbip\t"; - Name = std::string(TLBI->Name); + Name = std::string(TLBIP->Name); if (CnVal == 9) Name += "nXS"; } else diff --git a/llvm/lib/Target/AArch64/Utils/AArch64BaseInfo.cpp b/llvm/lib/Target/AArch64/Utils/AArch64BaseInfo.cpp index 7767028..d6cb0e8 100644 --- a/llvm/lib/Target/AArch64/Utils/AArch64BaseInfo.cpp +++ b/llvm/lib/Target/AArch64/Utils/AArch64BaseInfo.cpp @@ -186,6 +186,13 @@ namespace llvm { } namespace llvm { +namespace AArch64TLBIP { +#define GET_TLBIPTable_IMPL +#include "AArch64GenSystemOperands.inc" +} // namespace AArch64TLBIP +} // namespace llvm + +namespace llvm { namespace AArch64SVCR { #define GET_SVCRsList_IMPL #include "AArch64GenSystemOperands.inc" diff --git a/llvm/lib/Target/AArch64/Utils/AArch64BaseInfo.h b/llvm/lib/Target/AArch64/Utils/AArch64BaseInfo.h index a4ee963..fea33ef 100644 --- a/llvm/lib/Target/AArch64/Utils/AArch64BaseInfo.h +++ b/llvm/lib/Target/AArch64/Utils/AArch64BaseInfo.h @@ -795,6 +795,14 @@ namespace AArch64TLBI { #include "AArch64GenSystemOperands.inc" } +namespace AArch64TLBIP { +struct TLBIP : SysAliasReg { + using SysAliasReg::SysAliasReg; +}; +#define GET_TLBIPTable_DECL +#include "AArch64GenSystemOperands.inc" +} // namespace AArch64TLBIP + namespace AArch64II { /// Target Operand Flag enum. enum TOF { diff --git a/llvm/lib/Target/AMDGPU/AMDGPU.h b/llvm/lib/Target/AMDGPU/AMDGPU.h index 0f2c335..ce2b4a5 100644 --- a/llvm/lib/Target/AMDGPU/AMDGPU.h +++ b/llvm/lib/Target/AMDGPU/AMDGPU.h @@ -562,6 +562,11 @@ public: void initializeAMDGPURewriteAGPRCopyMFMALegacyPass(PassRegistry &); extern char &AMDGPURewriteAGPRCopyMFMALegacyID; +struct AMDGPUUniformIntrinsicCombinePass + : public PassInfoMixin<AMDGPUUniformIntrinsicCombinePass> { + PreservedAnalyses run(Module &M, ModuleAnalysisManager &AM); +}; + namespace AMDGPU { enum TargetIndex { TI_CONSTDATA_START, diff --git a/llvm/lib/Target/AMDGPU/AMDGPUAttributor.cpp b/llvm/lib/Target/AMDGPU/AMDGPUAttributor.cpp index ef58004..9907c88f 100644 --- a/llvm/lib/Target/AMDGPU/AMDGPUAttributor.cpp +++ b/llvm/lib/Target/AMDGPU/AMDGPUAttributor.cpp @@ -1288,16 +1288,17 @@ static unsigned inlineAsmGetNumRequiredAGPRs(const InlineAsm *IA, return std::min(MaxVirtReg + MaxPhysReg, 256u); } -// TODO: Migrate to range merge of amdgpu-agpr-alloc. 
-struct AAAMDGPUNoAGPR : public StateWrapper<BooleanState, AbstractAttribute> {
-  using Base = StateWrapper<BooleanState, AbstractAttribute>;
-  AAAMDGPUNoAGPR(const IRPosition &IRP, Attributor &A) : Base(IRP) {}
+struct AAAMDGPUMinAGPRAlloc
+    : public StateWrapper<DecIntegerState<>, AbstractAttribute> {
+  using Base = StateWrapper<DecIntegerState<>, AbstractAttribute>;
+  AAAMDGPUMinAGPRAlloc(const IRPosition &IRP, Attributor &A) : Base(IRP) {}
 
-  static AAAMDGPUNoAGPR &createForPosition(const IRPosition &IRP,
-                                           Attributor &A) {
+  static AAAMDGPUMinAGPRAlloc &createForPosition(const IRPosition &IRP,
+                                                 Attributor &A) {
     if (IRP.getPositionKind() == IRPosition::IRP_FUNCTION)
-      return *new (A.Allocator) AAAMDGPUNoAGPR(IRP, A);
-    llvm_unreachable("AAAMDGPUNoAGPR is only valid for function position");
+      return *new (A.Allocator) AAAMDGPUMinAGPRAlloc(IRP, A);
+    llvm_unreachable(
+        "AAAMDGPUMinAGPRAlloc is only valid for function position");
   }
 
   void initialize(Attributor &A) override {
@@ -1310,25 +1311,33 @@ struct AAAMDGPUNoAGPR : public StateWrapper<BooleanState, AbstractAttribute> {
   }
 
   const std::string getAsStr(Attributor *A) const override {
-    return getAssumed() ? "amdgpu-no-agpr" : "amdgpu-maybe-agpr";
+    std::string Str = "amdgpu-agpr-alloc=";
+    raw_string_ostream OS(Str);
+    OS << getAssumed();
+    return OS.str();
   }
 
   void trackStatistics() const override {}
 
   ChangeStatus updateImpl(Attributor &A) override {
-    // TODO: Use AACallEdges, but then we need a way to inspect asm edges.
+    DecIntegerState<> Maximum;
 
-    auto CheckForNoAGPRs = [&](Instruction &I) {
+    // Check for cases which require allocation of AGPRs. AGPRs are required
+    // only when there are direct references to them, i.e. inline assembly
+    // and special intrinsics.
+    auto CheckForMinAGPRAllocs = [&](Instruction &I) {
      const auto &CB = cast<CallBase>(I);
      const Value *CalleeOp = CB.getCalledOperand();
-      const Function *Callee = dyn_cast<Function>(CalleeOp);
-      if (!Callee) {
-        if (const InlineAsm *IA = dyn_cast<InlineAsm>(CalleeOp))
-          return inlineAsmGetNumRequiredAGPRs(IA, CB) == 0;
-        return false;
+
+      if (const InlineAsm *IA = dyn_cast<InlineAsm>(CalleeOp)) {
+        // Technically, the inline asm could be invoking a call to an unknown
+        // external function that requires AGPRs, but ignore that. 
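+        // (For example, asm statements with operands or clobbers constrained
+        // to "a" registers directly require AGPRs.)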
+ unsigned NumRegs = inlineAsmGetNumRequiredAGPRs(IA, CB); + Maximum.takeAssumedMaximum(NumRegs); + return true; } - switch (Callee->getIntrinsicID()) { + switch (CB.getIntrinsicID()) { case Intrinsic::not_intrinsic: break; case Intrinsic::write_register: @@ -1340,7 +1349,10 @@ struct AAAMDGPUNoAGPR : public StateWrapper<BooleanState, AbstractAttribute> { ->getOperand(0)); auto [Kind, RegIdx, NumRegs] = AMDGPU::parseAsmPhysRegName(RegName->getString()); - return Kind != 'a'; + if (Kind == 'a') + Maximum.takeAssumedMaximum(std::min(RegIdx + NumRegs, 256u)); + + return true; } default: // Some intrinsics may use AGPRs, but if we have a choice, we are not @@ -1349,32 +1361,50 @@ struct AAAMDGPUNoAGPR : public StateWrapper<BooleanState, AbstractAttribute> { } // TODO: Handle callsite attributes - const auto *CalleeInfo = A.getAAFor<AAAMDGPUNoAGPR>( - *this, IRPosition::function(*Callee), DepClassTy::REQUIRED); - return CalleeInfo && CalleeInfo->isValidState() && - CalleeInfo->getAssumed(); + auto *CBEdges = A.getAAFor<AACallEdges>( + *this, IRPosition::callsite_function(CB), DepClassTy::REQUIRED); + if (!CBEdges || CBEdges->hasUnknownCallee()) { + Maximum.indicatePessimisticFixpoint(); + return false; + } + + for (const Function *PossibleCallee : CBEdges->getOptimisticEdges()) { + const auto *CalleeInfo = A.getAAFor<AAAMDGPUMinAGPRAlloc>( + *this, IRPosition::function(*PossibleCallee), DepClassTy::REQUIRED); + if (!CalleeInfo || !CalleeInfo->isValidState()) { + Maximum.indicatePessimisticFixpoint(); + return false; + } + + Maximum.takeAssumedMaximum(CalleeInfo->getAssumed()); + } + + return true; }; bool UsedAssumedInformation = false; - if (!A.checkForAllCallLikeInstructions(CheckForNoAGPRs, *this, + if (!A.checkForAllCallLikeInstructions(CheckForMinAGPRAllocs, *this, UsedAssumedInformation)) return indicatePessimisticFixpoint(); - return ChangeStatus::UNCHANGED; + + return clampStateAndIndicateChange(getState(), Maximum); } ChangeStatus manifest(Attributor &A) override { - if (!getAssumed()) - return ChangeStatus::UNCHANGED; LLVMContext &Ctx = getAssociatedFunction()->getContext(); - return A.manifestAttrs(getIRPosition(), - {Attribute::get(Ctx, "amdgpu-agpr-alloc", "0")}); + SmallString<4> Buffer; + raw_svector_ostream OS(Buffer); + OS << getAssumed(); + + return A.manifestAttrs( + getIRPosition(), {Attribute::get(Ctx, "amdgpu-agpr-alloc", OS.str())}); } - StringRef getName() const override { return "AAAMDGPUNoAGPR"; } + StringRef getName() const override { return "AAAMDGPUMinAGPRAlloc"; } const char *getIdAddr() const override { return &ID; } /// This function should return true if the type of the \p AA is - /// AAAMDGPUNoAGPRs + /// AAAMDGPUMinAGPRAllocs static bool classof(const AbstractAttribute *AA) { return (AA->getIdAddr() == &ID); } @@ -1382,7 +1412,7 @@ struct AAAMDGPUNoAGPR : public StateWrapper<BooleanState, AbstractAttribute> { static const char ID; }; -const char AAAMDGPUNoAGPR::ID = 0; +const char AAAMDGPUMinAGPRAlloc::ID = 0; /// An abstract attribute to propagate the function attribute /// "amdgpu-cluster-dims" from kernel entry functions to device functions. 
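For intuition, an illustrative sketch (hypothetical IR, not part of the patch): where the old AAAMDGPUNoAGPR could only manifest the all-or-nothing "amdgpu-agpr-alloc"="0", the new attribute manifests the inferred upper bound. Assuming inline-asm physical-register uses are counted as sketched above, a function such as

  define void @f() {
    ; the only AGPR pressure is an inline-asm clobber of a0-a3
    call void asm sideeffect "", "~{a0},~{a1},~{a2},~{a3}"()
    ret void
  }

would end up annotated with "amdgpu-agpr-alloc"="4".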
@@ -1550,10 +1580,11 @@ static bool runImpl(Module &M, AnalysisGetter &AG, TargetMachine &TM, DenseSet<const char *> Allowed( {&AAAMDAttributes::ID, &AAUniformWorkGroupSize::ID, &AAPotentialValues::ID, &AAAMDFlatWorkGroupSize::ID, - &AAAMDMaxNumWorkgroups::ID, &AAAMDWavesPerEU::ID, &AAAMDGPUNoAGPR::ID, - &AACallEdges::ID, &AAPointerInfo::ID, &AAPotentialConstantValues::ID, - &AAUnderlyingObjects::ID, &AANoAliasAddrSpace::ID, &AAAddressSpace::ID, - &AAIndirectCallInfo::ID, &AAAMDGPUClusterDims::ID}); + &AAAMDMaxNumWorkgroups::ID, &AAAMDWavesPerEU::ID, + &AAAMDGPUMinAGPRAlloc::ID, &AACallEdges::ID, &AAPointerInfo::ID, + &AAPotentialConstantValues::ID, &AAUnderlyingObjects::ID, + &AANoAliasAddrSpace::ID, &AAAddressSpace::ID, &AAIndirectCallInfo::ID, + &AAAMDGPUClusterDims::ID}); AttributorConfig AC(CGUpdater); AC.IsClosedWorldModule = Options.IsClosedWorld; @@ -1595,7 +1626,7 @@ static bool runImpl(Module &M, AnalysisGetter &AG, TargetMachine &TM, A.getOrCreateAAFor<AAAMDGPUClusterDims>(IRPosition::function(*F)); if (ST.hasGFX90AInsts()) - A.getOrCreateAAFor<AAAMDGPUNoAGPR>(IRPosition::function(*F)); + A.getOrCreateAAFor<AAAMDGPUMinAGPRAlloc>(IRPosition::function(*F)); for (auto &I : instructions(F)) { Value *Ptr = nullptr; diff --git a/llvm/lib/Target/AMDGPU/AMDGPUISelDAGToDAG.cpp b/llvm/lib/Target/AMDGPU/AMDGPUISelDAGToDAG.cpp index e4d328a..b8b419d 100644 --- a/llvm/lib/Target/AMDGPU/AMDGPUISelDAGToDAG.cpp +++ b/llvm/lib/Target/AMDGPU/AMDGPUISelDAGToDAG.cpp @@ -1112,8 +1112,7 @@ void AMDGPUDAGToDAGISel::SelectUADDO_USUBO(SDNode *N) { {N->getOperand(0), N->getOperand(1), CurDAG->getTargetConstant(0, {}, MVT::i1) /*clamp bit*/}); } else { - unsigned Opc = N->getOpcode() == ISD::UADDO ? AMDGPU::S_UADDO_PSEUDO - : AMDGPU::S_USUBO_PSEUDO; + unsigned Opc = IsAdd ? AMDGPU::S_UADDO_PSEUDO : AMDGPU::S_USUBO_PSEUDO; CurDAG->SelectNodeTo(N, Opc, N->getVTList(), {N->getOperand(0), N->getOperand(1)}); diff --git a/llvm/lib/Target/AMDGPU/AMDGPUPassRegistry.def b/llvm/lib/Target/AMDGPU/AMDGPUPassRegistry.def index 9449e70..a6074ea 100644 --- a/llvm/lib/Target/AMDGPU/AMDGPUPassRegistry.def +++ b/llvm/lib/Target/AMDGPU/AMDGPUPassRegistry.def @@ -30,6 +30,7 @@ MODULE_PASS("amdgpu-preload-kernel-arguments", AMDGPUPreloadKernelArgumentsPass( MODULE_PASS("amdgpu-printf-runtime-binding", AMDGPUPrintfRuntimeBindingPass()) MODULE_PASS("amdgpu-remove-incompatible-functions", AMDGPURemoveIncompatibleFunctionsPass(*this)) MODULE_PASS("amdgpu-sw-lower-lds", AMDGPUSwLowerLDSPass(*this)) +MODULE_PASS("amdgpu-uniform-intrinsic-combine", AMDGPUUniformIntrinsicCombinePass()) #undef MODULE_PASS #ifndef MODULE_PASS_WITH_PARAMS diff --git a/llvm/lib/Target/AMDGPU/AMDGPURewriteAGPRCopyMFMA.cpp b/llvm/lib/Target/AMDGPU/AMDGPURewriteAGPRCopyMFMA.cpp index fedb694..89c16da 100644 --- a/llvm/lib/Target/AMDGPU/AMDGPURewriteAGPRCopyMFMA.cpp +++ b/llvm/lib/Target/AMDGPU/AMDGPURewriteAGPRCopyMFMA.cpp @@ -482,12 +482,13 @@ void AMDGPURewriteAGPRCopyMFMAImpl::eliminateSpillsOfReassignedVGPRs() const { } sort(StackIntervals, [](const LiveInterval *A, const LiveInterval *B) { + // The ordering has to be strictly weak. 
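+    // (Returning true both when A is heavier and, independently, when A is
+    // larger would permit comp(A, B) && comp(B, A) for a heavier-but-smaller
+    // A, which violates strict weak ordering and is UB for llvm::sort.)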
/// Sort heaviest intervals first to prioritize their unspilling
-    if (A->weight() > B->weight())
-      return true;
+    if (A->weight() != B->weight())
+      return A->weight() > B->weight();
 
-    if (A->getSize() > B->getSize())
-      return true;
+    if (A->getSize() != B->getSize())
+      return A->getSize() > B->getSize();
 
     // Tie breaker by number to avoid need for stable sort
     return A->reg().stackSlotIndex() < B->reg().stackSlotIndex();
diff --git a/llvm/lib/Target/AMDGPU/AMDGPUTargetMachine.cpp b/llvm/lib/Target/AMDGPU/AMDGPUTargetMachine.cpp
index c7a91f4c..4958a20 100644
--- a/llvm/lib/Target/AMDGPU/AMDGPUTargetMachine.cpp
+++ b/llvm/lib/Target/AMDGPU/AMDGPUTargetMachine.cpp
@@ -526,6 +526,11 @@ static cl::opt<bool> HasClosedWorldAssumption(
     cl::desc("Whether has closed-world assumption at link time"),
     cl::init(false), cl::Hidden);
 
+static cl::opt<bool> EnableUniformIntrinsicCombine(
+    "amdgpu-enable-uniform-intrinsic-combine",
+    cl::desc("Enable/Disable the Uniform Intrinsic Combine Pass"),
+    cl::init(true), cl::Hidden);
+
 extern "C" LLVM_ABI LLVM_EXTERNAL_VISIBILITY void LLVMInitializeAMDGPUTarget() {
   // Register the target
   RegisterTargetMachine<R600TargetMachine> X(getTheR600Target());
@@ -879,6 +884,9 @@ void AMDGPUTargetMachine::registerPassBuilderCallbacks(PassBuilder &PB) {
 
         if (EarlyInlineAll && !EnableFunctionCalls)
           PM.addPass(AMDGPUAlwaysInlinePass());
+
+        if (EnableUniformIntrinsicCombine)
+          PM.addPass(AMDGPUUniformIntrinsicCombinePass());
       });
 
   PB.registerPeepholeEPCallback(
diff --git a/llvm/lib/Target/AMDGPU/AMDGPUUniformIntrinsicCombine.cpp b/llvm/lib/Target/AMDGPU/AMDGPUUniformIntrinsicCombine.cpp
new file mode 100644
index 0000000..50c78d8
--- /dev/null
+++ b/llvm/lib/Target/AMDGPU/AMDGPUUniformIntrinsicCombine.cpp
@@ -0,0 +1,159 @@
+//===-- AMDGPUUniformIntrinsicCombine.cpp ---------------------------------===//
+//
+// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
+// See https://llvm.org/LICENSE.txt for license information.
+// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
+//
+//===----------------------------------------------------------------------===//
+//
+/// \file
+/// This pass simplifies certain intrinsic calls when the arguments are uniform.
+/// These transforms may leave an instruction whose operand was previously
+/// recognized as statically uniform no longer recognized as such. That is
+/// safe: program semantics depend only on dynamic uniformity, never on static
+/// uniformity (and must not, for precisely this reason). Every downstream
+/// instruction that cares about dynamic uniformity must be convergent, and
+/// isel will introduce v_readfirstlane for it if its operands cannot be proven
+/// statically uniform.
+///
+/// This pass is implemented as a ModulePass because intrinsic declarations
+/// exist at the module scope, allowing us to skip processing entirely if no
+/// declarations are present and to traverse their user lists directly when
+/// they are. A FunctionPass would instead require scanning every instruction
+/// in every function to find relevant intrinsics, which is far less efficient.
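As a concrete example of the rewrites this file performs (a hedged sketch; %x is a kernel argument and therefore statically uniform):

  ; Before: the wave-level intrinsic is a no-op on an already-uniform value.
  define amdgpu_kernel void @fold(i32 %x, ptr addrspace(1) %out) {
    %v = call i32 @llvm.amdgcn.readfirstlane.i32(i32 %x)
    store i32 %v, ptr addrspace(1) %out
    ret void
  }
  declare i32 @llvm.amdgcn.readfirstlane.i32(i32)

  ; After the pass, all uses of %v are rewritten to %x and the call is erased:
  ;   store i32 %x, ptr addrspace(1) %out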
+//===----------------------------------------------------------------------===//
+
+#include "AMDGPU.h"
+#include "GCNSubtarget.h"
+#include "llvm/Analysis/DomTreeUpdater.h"
+#include "llvm/Analysis/LoopInfo.h"
+#include "llvm/Analysis/ScalarEvolution.h"
+#include "llvm/Analysis/TargetLibraryInfo.h"
+#include "llvm/Analysis/UniformityAnalysis.h"
+#include "llvm/CodeGen/TargetPassConfig.h"
+#include "llvm/IR/IRBuilder.h"
+#include "llvm/IR/InstIterator.h"
+#include "llvm/IR/InstVisitor.h"
+#include "llvm/IR/IntrinsicsAMDGPU.h"
+#include "llvm/IR/PatternMatch.h"
+#include "llvm/InitializePasses.h"
+#include "llvm/Target/TargetMachine.h"
+#include "llvm/Transforms/Utils/BasicBlockUtils.h"
+
+#define DEBUG_TYPE "amdgpu-uniform-intrinsic-combine"
+
+using namespace llvm;
+using namespace llvm::AMDGPU;
+using namespace llvm::PatternMatch;
+
+/// Wrapper for querying uniformity info that first checks locally tracked
+/// instructions.
+static bool
+isDivergentUseWithNew(const Use &U, const UniformityInfo &UI,
+                      const ValueMap<const Value *, bool> &Tracker) {
+  Value *V = U.get();
+  if (auto It = Tracker.find(V); It != Tracker.end())
+    return !It->second; // divergent if marked false
+  return UI.isDivergentUse(U);
+}
+
+/// Optimizes uniform intrinsic calls if their operand can be proven uniform.
+static bool optimizeUniformIntrinsic(IntrinsicInst &II,
+                                     const UniformityInfo &UI,
+                                     ValueMap<const Value *, bool> &Tracker) {
+  llvm::Intrinsic::ID IID = II.getIntrinsicID();
+
+  switch (IID) {
+  case Intrinsic::amdgcn_permlane64:
+  case Intrinsic::amdgcn_readfirstlane:
+  case Intrinsic::amdgcn_readlane: {
+    Value *Src = II.getArgOperand(0);
+    if (isDivergentUseWithNew(II.getOperandUse(0), UI, Tracker))
+      return false;
+    LLVM_DEBUG(dbgs() << "Replacing " << II << " with " << *Src << '\n');
+    II.replaceAllUsesWith(Src);
+    II.eraseFromParent();
+    return true;
+  }
+  case Intrinsic::amdgcn_ballot: {
+    Value *Src = II.getArgOperand(0);
+    if (isDivergentUseWithNew(II.getOperandUse(0), UI, Tracker))
+      return false;
+    LLVM_DEBUG(dbgs() << "Found uniform ballot intrinsic: " << II << '\n');
+
+    bool Changed = false;
+    for (User *U : make_early_inc_range(II.users())) {
+      if (auto *ICmp = dyn_cast<ICmpInst>(U)) {
+        Value *Op0 = ICmp->getOperand(0);
+        Value *Op1 = ICmp->getOperand(1);
+        ICmpInst::Predicate Pred = ICmp->getPredicate();
+        Value *OtherOp = Op0 == &II ? Op1 : Op0;
+
+        if (Pred == ICmpInst::ICMP_EQ && match(OtherOp, m_Zero())) {
+          // Case: (icmp eq %ballot, 0) -> xor %ballot_arg, 1
+          Instruction *NotOp =
+              BinaryOperator::CreateNot(Src, "", ICmp->getIterator());
+          Tracker[NotOp] = true; // NOT preserves uniformity
+          LLVM_DEBUG(dbgs() << "Replacing ICMP_EQ: " << *NotOp << '\n');
+          ICmp->replaceAllUsesWith(NotOp);
+          ICmp->eraseFromParent();
+          Changed = true;
+        } else if (Pred == ICmpInst::ICMP_NE && match(OtherOp, m_Zero())) {
+          // Case: (icmp ne %ballot, 0) -> %ballot_arg
+          LLVM_DEBUG(dbgs() << "Replacing ICMP_NE with ballot argument: "
+                            << *Src << '\n');
+          ICmp->replaceAllUsesWith(Src);
+          ICmp->eraseFromParent();
+          Changed = true;
+        }
+      }
+    }
+    // Erase the intrinsic if it has no remaining uses.
+    if (II.use_empty())
+      II.eraseFromParent();
+    return Changed;
+  }
+  default:
+    llvm_unreachable("Unexpected intrinsic ID in optimizeUniformIntrinsic");
+  }
+  return false;
+}
+
+/// Iterates over intrinsic declarations in the module to optimize their uses.
+static bool runUniformIntrinsicCombine(Module &M, ModuleAnalysisManager &AM) { + bool IsChanged = false; + ValueMap<const Value *, bool> Tracker; + + FunctionAnalysisManager &FAM = + AM.getResult<FunctionAnalysisManagerModuleProxy>(M).getManager(); + for (Function &F : M) { + switch (F.getIntrinsicID()) { + case Intrinsic::amdgcn_permlane64: + case Intrinsic::amdgcn_readfirstlane: + case Intrinsic::amdgcn_readlane: + case Intrinsic::amdgcn_ballot: + break; + default: + continue; + } + + for (User *U : make_early_inc_range(F.users())) { + auto *II = cast<IntrinsicInst>(U); + Function *ParentF = II->getFunction(); + const auto &UI = FAM.getResult<UniformityInfoAnalysis>(*ParentF); + IsChanged |= optimizeUniformIntrinsic(*II, UI, Tracker); + } + } + return IsChanged; +} + +PreservedAnalyses +AMDGPUUniformIntrinsicCombinePass::run(Module &M, ModuleAnalysisManager &AM) { + if (!runUniformIntrinsicCombine(M, AM)) + return PreservedAnalyses::all(); + + PreservedAnalyses PA; + PA.preserve<UniformityInfoAnalysis>(); + return PA; +} diff --git a/llvm/lib/Target/AMDGPU/CMakeLists.txt b/llvm/lib/Target/AMDGPU/CMakeLists.txt index aae56ee..13f727b68 100644 --- a/llvm/lib/Target/AMDGPU/CMakeLists.txt +++ b/llvm/lib/Target/AMDGPU/CMakeLists.txt @@ -64,6 +64,7 @@ add_llvm_target(AMDGPUCodeGen AMDGPUHSAMetadataStreamer.cpp AMDGPUInsertDelayAlu.cpp AMDGPUInstCombineIntrinsic.cpp + AMDGPUUniformIntrinsicCombine.cpp AMDGPUInstrInfo.cpp AMDGPUInstructionSelector.cpp AMDGPUISelDAGToDAG.cpp diff --git a/llvm/lib/Target/AMDGPU/SIISelLowering.cpp b/llvm/lib/Target/AMDGPU/SIISelLowering.cpp index 1a686a9..730be69 100644 --- a/llvm/lib/Target/AMDGPU/SIISelLowering.cpp +++ b/llvm/lib/Target/AMDGPU/SIISelLowering.cpp @@ -6073,9 +6073,6 @@ SITargetLowering::EmitInstrWithCustomInserter(MachineInstr &MI, MachineOperand &Src0 = MI.getOperand(2); MachineOperand &Src1 = MI.getOperand(3); MachineOperand &Src2 = MI.getOperand(4); - unsigned Opc = (MI.getOpcode() == AMDGPU::S_ADD_CO_PSEUDO) - ? AMDGPU::S_ADDC_U32 - : AMDGPU::S_SUBB_U32; if (Src0.isReg() && TRI->isVectorRegister(MRI, Src0.getReg())) { Register RegOp0 = MRI.createVirtualRegister(&AMDGPU::SReg_32_XM0RegClass); BuildMI(*BB, MII, DL, TII->get(AMDGPU::V_READFIRSTLANE_B32), RegOp0) @@ -6124,11 +6121,11 @@ SITargetLowering::EmitInstrWithCustomInserter(MachineInstr &MI, .addImm(0); } - // clang-format off - BuildMI(*BB, MII, DL, TII->get(Opc), Dest.getReg()) - .add(Src0) - .add(Src1); - // clang-format on + unsigned Opc = MI.getOpcode() == AMDGPU::S_ADD_CO_PSEUDO + ? AMDGPU::S_ADDC_U32 + : AMDGPU::S_SUBB_U32; + + BuildMI(*BB, MII, DL, TII->get(Opc), Dest.getReg()).add(Src0).add(Src1); unsigned SelOpc = ST.isWave64() ? 
AMDGPU::S_CSELECT_B64 : AMDGPU::S_CSELECT_B32; @@ -16571,6 +16568,53 @@ SDValue SITargetLowering::performSetCCCombine(SDNode *N, } } + // Eliminate setcc by using carryout from add/sub instruction + + // LHS = ADD i64 RHS, Z LHSlo = UADDO i32 RHSlo, Zlo + // setcc LHS ult RHS -> LHSHi = UADDO_CARRY i32 RHShi, Zhi + // similarly for subtraction + + // LHS = ADD i64 Y, 1 LHSlo = UADDO i32 Ylo, 1 + // setcc LHS eq 0 -> LHSHi = UADDO_CARRY i32 Yhi, 0 + + if (VT == MVT::i64 && ((CC == ISD::SETULT && + sd_match(LHS, m_Add(m_Specific(RHS), m_Value()))) || + (CC == ISD::SETUGT && + sd_match(LHS, m_Sub(m_Specific(RHS), m_Value()))) || + (CC == ISD::SETEQ && CRHS && CRHS->isZero() && + sd_match(LHS, m_Add(m_Value(), m_One()))))) { + bool IsAdd = LHS.getOpcode() == ISD::ADD; + + SDValue Op0 = LHS.getOperand(0); + SDValue Op1 = LHS.getOperand(1); + + SDValue Op0Lo = DAG.getNode(ISD::TRUNCATE, SL, MVT::i32, Op0); + SDValue Op1Lo = DAG.getNode(ISD::TRUNCATE, SL, MVT::i32, Op1); + + SDValue Op0Hi = getHiHalf64(Op0, DAG); + SDValue Op1Hi = getHiHalf64(Op1, DAG); + + SDValue NodeLo = + DAG.getNode(IsAdd ? ISD::UADDO : ISD::USUBO, SL, + DAG.getVTList(MVT::i32, MVT::i1), {Op0Lo, Op1Lo}); + + SDValue CarryInHi = NodeLo.getValue(1); + SDValue NodeHi = DAG.getNode(IsAdd ? ISD::UADDO_CARRY : ISD::USUBO_CARRY, + SL, DAG.getVTList(MVT::i32, MVT::i1), + {Op0Hi, Op1Hi, CarryInHi}); + + SDValue ResultLo = NodeLo.getValue(0); + SDValue ResultHi = NodeHi.getValue(0); + + SDValue JoinedResult = + DAG.getBuildVector(MVT::v2i32, SL, {ResultLo, ResultHi}); + + SDValue Result = DAG.getNode(ISD::BITCAST, SL, VT, JoinedResult); + SDValue Overflow = NodeHi.getValue(1); + DCI.CombineTo(LHS.getNode(), Result); + return Overflow; + } + if (VT != MVT::f32 && VT != MVT::f64 && (!Subtarget->has16BitInsts() || VT != MVT::f16)) return SDValue(); diff --git a/llvm/lib/Target/DirectX/DXILWriter/DXILBitcodeWriter.cpp b/llvm/lib/Target/DirectX/DXILWriter/DXILBitcodeWriter.cpp index bc1a3a7..82c43ff 100644 --- a/llvm/lib/Target/DirectX/DXILWriter/DXILBitcodeWriter.cpp +++ b/llvm/lib/Target/DirectX/DXILWriter/DXILBitcodeWriter.cpp @@ -1507,7 +1507,7 @@ void DXILBitcodeWriter::writeDICompileUnit(const DICompileUnit *N, SmallVectorImpl<uint64_t> &Record, unsigned Abbrev) { Record.push_back(N->isDistinct()); - Record.push_back(N->getSourceLanguage()); + Record.push_back(N->getSourceLanguage().getUnversionedName()); Record.push_back(VE.getMetadataOrNullID(N->getFile())); Record.push_back(VE.getMetadataOrNullID(N->getRawProducer())); Record.push_back(N->isOptimized()); diff --git a/llvm/lib/Target/Hexagon/Hexagon.td b/llvm/lib/Target/Hexagon/Hexagon.td index 6d0529f..fb0928b8 100644 --- a/llvm/lib/Target/Hexagon/Hexagon.td +++ b/llvm/lib/Target/Hexagon/Hexagon.td @@ -110,8 +110,6 @@ def FeatureSmallData: SubtargetFeature<"small-data", "UseSmallData", "true", "Allow GP-relative addressing of global variables">; def FeatureDuplex: SubtargetFeature<"duplex", "EnableDuplex", "true", "Enable generation of duplex instruction">; -def FeatureUnsafeFP: SubtargetFeature<"unsafe-fp", "UseUnsafeMath", "true", - "Use unsafe FP math">; def FeatureReservedR19: SubtargetFeature<"reserved-r19", "ReservedR19", "true", "Reserve register R19">; def FeatureNoreturnStackElim: SubtargetFeature<"noreturn-stack-elim", @@ -167,7 +165,6 @@ def UseHVXQFloat : Predicate<"HST->useHVXQFloatOps()">, def UseHVXFloatingPoint: Predicate<"HST->useHVXFloatingPoint()">; def HasMemNoShuf : Predicate<"HST->hasMemNoShuf()">, AssemblerPredicate<(all_of FeatureMemNoShuf)>; -def 
UseUnsafeMath : Predicate<"HST->useUnsafeMath()">; def NotOptTinyCore : Predicate<"!HST->isTinyCore() ||" "MF->getFunction().hasOptSize()"> { let RecomputePerFunction = 1; diff --git a/llvm/lib/Target/Hexagon/HexagonPatterns.td b/llvm/lib/Target/Hexagon/HexagonPatterns.td index 4b23670..a0acfcf 100644 --- a/llvm/lib/Target/Hexagon/HexagonPatterns.td +++ b/llvm/lib/Target/Hexagon/HexagonPatterns.td @@ -1611,8 +1611,11 @@ def DfMpy: OutPatFrag<(ops node:$Rs, node:$Rt), $Rt, $Rs), $Rs, $Rt)>; -let Predicates = [HasV67,UseUnsafeMath], AddedComplexity = 50 in { - def: Pat<(fmul F64:$Rs, F64:$Rt), (DfMpy $Rs, $Rt)>; +def fmul_afn : PatFrag<(ops node:$a, node:$b), (fmul node:$a, node:$b), [{ + return N->getFlags().hasApproximateFuncs(); +}]>; +let Predicates = [HasV67], AddedComplexity = 50 in { + def : Pat<(fmul_afn F64:$Rs, F64:$Rt), (DfMpy $Rs, $Rt)>; } let Predicates = [HasV67] in { def: OpR_RR_pat<F2_dfmin, pf2<fminimumnum>, f64, F64>; diff --git a/llvm/lib/Target/Hexagon/HexagonSubtarget.h b/llvm/lib/Target/Hexagon/HexagonSubtarget.h index b111471..7430567 100644 --- a/llvm/lib/Target/Hexagon/HexagonSubtarget.h +++ b/llvm/lib/Target/Hexagon/HexagonSubtarget.h @@ -54,7 +54,6 @@ class HexagonSubtarget : public HexagonGenSubtargetInfo { bool UseNewValueJumps = false; bool UseNewValueStores = false; bool UseSmallData = false; - bool UseUnsafeMath = false; bool UseZRegOps = false; bool UseHVXIEEEFPOps = false; bool UseHVXQFloatOps = false; @@ -234,7 +233,6 @@ public: bool useNewValueJumps() const { return UseNewValueJumps; } bool useNewValueStores() const { return UseNewValueStores; } bool useSmallData() const { return UseSmallData; } - bool useUnsafeMath() const { return UseUnsafeMath; } bool useZRegOps() const { return UseZRegOps; } bool useCabac() const { return UseCabac; } diff --git a/llvm/lib/Target/Hexagon/HexagonTargetMachine.cpp b/llvm/lib/Target/Hexagon/HexagonTargetMachine.cpp index 0afa04a..f5d8b69 100644 --- a/llvm/lib/Target/Hexagon/HexagonTargetMachine.cpp +++ b/llvm/lib/Target/Hexagon/HexagonTargetMachine.cpp @@ -250,13 +250,6 @@ HexagonTargetMachine::getSubtargetImpl(const Function &F) const { CPUAttr.isValid() ? CPUAttr.getValueAsString().str() : TargetCPU; std::string FS = FSAttr.isValid() ? FSAttr.getValueAsString().str() : TargetFS; - // Append the preexisting target features last, so that +mattr overrides - // the "unsafe-fp-math" function attribute. - // Creating a separate target feature is not strictly necessary, it only - // exists to make "unsafe-fp-math" force creating a new subtarget. - - if (F.getFnAttribute("unsafe-fp-math").getValueAsBool()) - FS = FS.empty() ? "+unsafe-fp" : "+unsafe-fp," + FS; auto &I = SubtargetMap[CPU + FS]; if (!I) { diff --git a/llvm/lib/Target/NVPTX/NVPTXCtorDtorLowering.cpp b/llvm/lib/Target/NVPTX/NVPTXCtorDtorLowering.cpp index bb8cec0..4e06939 100644 --- a/llvm/lib/Target/NVPTX/NVPTXCtorDtorLowering.cpp +++ b/llvm/lib/Target/NVPTX/NVPTXCtorDtorLowering.cpp @@ -88,7 +88,7 @@ static Function *createInitOrFiniKernelFunction(Module &M, bool IsCtor) { // reinterpret_cast<InitCallback *>(*start)(); // } // -// void call_init_array_callbacks() { +// void call_fini_array_callbacks() { // size_t fini_array_size = __fini_array_end - __fini_array_start; // for (size_t i = fini_array_size; i > 0; --i) // reinterpret_cast<FiniCallback *>(__fini_array_start[i - 1])(); @@ -153,7 +153,7 @@ static void createInitOrFiniCalls(Function &F, bool IsCtor) { "start"); } IRB.CreateCondBr( - IRB.CreateCmp(IsCtor ? 
ICmpInst::ICMP_NE : ICmpInst::ICMP_UGT, BeginVal, + IRB.CreateCmp(IsCtor ? ICmpInst::ICMP_NE : ICmpInst::ICMP_UGE, BeginVal, EndVal), LoopBB, ExitBB); IRB.SetInsertPoint(LoopBB); diff --git a/llvm/lib/Target/RISCV/GISel/RISCVCallLowering.cpp b/llvm/lib/Target/RISCV/GISel/RISCVCallLowering.cpp index ecfb5fe..eb41588 100644 --- a/llvm/lib/Target/RISCV/GISel/RISCVCallLowering.cpp +++ b/llvm/lib/Target/RISCV/GISel/RISCVCallLowering.cpp @@ -334,7 +334,7 @@ static bool isLegalElementTypeForRVV(Type *EltTy, if (EltTy->isIntegerTy(64)) return Subtarget.hasVInstructionsI64(); if (EltTy->isHalfTy()) - return Subtarget.hasVInstructionsF16(); + return Subtarget.hasVInstructionsF16Minimal(); if (EltTy->isBFloatTy()) return Subtarget.hasVInstructionsBF16Minimal(); if (EltTy->isFloatTy()) diff --git a/llvm/lib/Target/RISCV/RISCVInstrInfoZb.td b/llvm/lib/Target/RISCV/RISCVInstrInfoZb.td index 8d9b777..e519b72 100644 --- a/llvm/lib/Target/RISCV/RISCVInstrInfoZb.td +++ b/llvm/lib/Target/RISCV/RISCVInstrInfoZb.td @@ -702,13 +702,13 @@ def : Pat<(binop_allwusers<or> (PACKW GPR:$rs1, (XLenVT (PACKH GPR:$op1rs1, GPR:$op1rs2)))>; def : Pat<(binop_allwusers<or> (or (zexti16 (XLenVT GPR:$rs1)), - (shl GPR:$op1rs1, (XLenVT 24))), - (shl (zexti8 (XLenVT GPR:$op1rs2)), (XLenVT 16))), + (shl GPR:$op1rs2, (XLenVT 24))), + (shl (zexti8 (XLenVT GPR:$op1rs1)), (XLenVT 16))), (PACKW GPR:$rs1, (XLenVT (PACKH GPR:$op1rs1, GPR:$op1rs2)))>; def : Pat<(i64 (or (or (zexti16 (XLenVT GPR:$rs1)), - (shl (zexti8 (XLenVT GPR:$op1rs2)), (XLenVT 16))), - (sext_inreg (shl GPR:$op1rs1, (XLenVT 24)), i32))), + (shl (zexti8 (XLenVT GPR:$op1rs1)), (XLenVT 16))), + (sext_inreg (shl GPR:$op1rs2, (XLenVT 24)), i32))), (PACKW GPR:$rs1, (XLenVT (PACKH GPR:$op1rs1, GPR:$op1rs2)))>; // Match a pattern of 2 halfwords being inserted into bits [63:32], with bits @@ -788,32 +788,32 @@ multiclass ShxAdd_UWPat<int i, Instruction shxadd_uw> { } multiclass Sh1Add_UWPat<Instruction sh1add_uw> { - def : Pat<(i64 (add_like_non_imm12 (and (shl GPR:$rs1, (i64 1)), 0x1FFFFFFFF), - (XLenVT GPR:$rs2))), + def : Pat<(add_like_non_imm12 (and (shl GPR:$rs1, (i64 1)), (i64 0x1FFFFFFFF)), + (XLenVT GPR:$rs2)), (sh1add_uw GPR:$rs1, GPR:$rs2)>; // Use SRLI to clear the LSBs and SHXADD_UW to mask and shift. - def : Pat<(i64 (add_like_non_imm12 (and GPR:$rs1, 0x1FFFFFFFE), - (XLenVT GPR:$rs2))), + def : Pat<(add_like_non_imm12 (and GPR:$rs1, (i64 0x1FFFFFFFE)), + (XLenVT GPR:$rs2)), (sh1add_uw (XLenVT (SRLI GPR:$rs1, 1)), GPR:$rs2)>; } multiclass Sh2Add_UWPat<Instruction sh2add_uw> { - def : Pat<(i64 (add_like_non_imm12 (and (shl GPR:$rs1, (i64 2)), 0x3FFFFFFFF), - (XLenVT GPR:$rs2))), + def : Pat<(add_like_non_imm12 (and (shl GPR:$rs1, (i64 2)), (i64 0x3FFFFFFFF)), + (XLenVT GPR:$rs2)), (sh2add_uw GPR:$rs1, GPR:$rs2)>; // Use SRLI to clear the LSBs and SHXADD_UW to mask and shift. - def : Pat<(i64 (add_like_non_imm12 (and GPR:$rs1, 0x3FFFFFFFC), - (XLenVT GPR:$rs2))), + def : Pat<(add_like_non_imm12 (and GPR:$rs1, (i64 0x3FFFFFFFC)), + (XLenVT GPR:$rs2)), (sh2add_uw (XLenVT (SRLI GPR:$rs1, 2)), GPR:$rs2)>; } multiclass Sh3Add_UWPat<Instruction sh3add_uw> { - def : Pat<(i64 (add_like_non_imm12 (and (shl GPR:$rs1, (i64 3)), 0x7FFFFFFFF), - (XLenVT GPR:$rs2))), + def : Pat<(add_like_non_imm12 (and (shl GPR:$rs1, (i64 3)), (i64 0x7FFFFFFFF)), + (XLenVT GPR:$rs2)), (sh3add_uw GPR:$rs1, GPR:$rs2)>; // Use SRLI to clear the LSBs and SHXADD_UW to mask and shift. 
- def : Pat<(i64 (add_like_non_imm12 (and GPR:$rs1, 0x7FFFFFFF8), - (XLenVT GPR:$rs2))), + def : Pat<(add_like_non_imm12 (and GPR:$rs1, (i64 0x7FFFFFFF8)), + (XLenVT GPR:$rs2)), (sh3add_uw (XLenVT (SRLI GPR:$rs1, 3)), GPR:$rs2)>; } diff --git a/llvm/lib/Target/RISCV/RISCVRegisterInfo.td b/llvm/lib/Target/RISCV/RISCVRegisterInfo.td index 82e768d..6605a5c 100644 --- a/llvm/lib/Target/RISCV/RISCVRegisterInfo.td +++ b/llvm/lib/Target/RISCV/RISCVRegisterInfo.td @@ -238,7 +238,7 @@ class RISCVRegisterClass<list<ValueType> regTypes, int align, dag regList> } class GPRRegisterClass<dag regList> - : RISCVRegisterClass<[XLenVT, XLenFVT, i32, i16], 32, regList> { + : RISCVRegisterClass<[XLenVT, XLenFVT], 32, regList> { let RegInfos = XLenRI; } diff --git a/llvm/lib/Target/SPIRV/MCTargetDesc/SPIRVInstPrinter.cpp b/llvm/lib/Target/SPIRV/MCTargetDesc/SPIRVInstPrinter.cpp index 776208b..35a2ee1 100644 --- a/llvm/lib/Target/SPIRV/MCTargetDesc/SPIRVInstPrinter.cpp +++ b/llvm/lib/Target/SPIRV/MCTargetDesc/SPIRVInstPrinter.cpp @@ -284,6 +284,17 @@ void SPIRVInstPrinter::printInst(const MCInst *MI, uint64_t Address, } break; } + case SPIRV::OpPredicatedLoadINTEL: + case SPIRV::OpPredicatedStoreINTEL: { + const unsigned NumOps = MI->getNumOperands(); + if (NumOps > NumFixedOps) { + OS << ' '; + printSymbolicOperand<OperandCategory::MemoryOperandOperand>( + MI, NumOps - 1, OS); + break; + } + break; + } default: printRemainingVariableOps(MI, NumFixedOps, OS); break; diff --git a/llvm/lib/Target/SPIRV/SPIRVBuiltins.cpp b/llvm/lib/Target/SPIRV/SPIRVBuiltins.cpp index 0e0c454..dbe8e18 100644 --- a/llvm/lib/Target/SPIRV/SPIRVBuiltins.cpp +++ b/llvm/lib/Target/SPIRV/SPIRVBuiltins.cpp @@ -2419,6 +2419,27 @@ static bool generatePipeInst(const SPIRV::IncomingCall *Call, return buildPipeInst(Call, Opcode, Scope, MIRBuilder, GR); } +static bool generatePredicatedLoadStoreInst(const SPIRV::IncomingCall *Call, + MachineIRBuilder &MIRBuilder, + SPIRVGlobalRegistry *GR) { + const SPIRV::DemangledBuiltin *Builtin = Call->Builtin; + unsigned Opcode = + SPIRV::lookupNativeBuiltin(Builtin->Name, Builtin->Set)->Opcode; + + bool IsSet = Opcode != SPIRV::OpPredicatedStoreINTEL; + unsigned ArgSz = Call->Arguments.size(); + SmallVector<uint32_t, 1> ImmArgs; + MachineRegisterInfo *MRI = MIRBuilder.getMRI(); + // Memory operand is optional and is literal. + if (ArgSz > 3) + ImmArgs.push_back( + getConstFromIntrinsic(Call->Arguments[/*Literal index*/ 3], MRI)); + + Register TypeReg = GR->getSPIRVTypeID(Call->ReturnType); + return buildOpFromWrapper(MIRBuilder, Opcode, Call, + IsSet ? 
TypeReg : Register(0), ImmArgs); +} + static bool buildNDRange(const SPIRV::IncomingCall *Call, MachineIRBuilder &MIRBuilder, SPIRVGlobalRegistry *GR) { @@ -3019,6 +3040,8 @@ std::optional<bool> lowerBuiltin(const StringRef DemangledCall, return generate2DBlockIOINTELInst(Call.get(), MIRBuilder, GR); case SPIRV::Pipe: return generatePipeInst(Call.get(), MIRBuilder, GR); + case SPIRV::PredicatedLoadStore: + return generatePredicatedLoadStoreInst(Call.get(), MIRBuilder, GR); } return false; } diff --git a/llvm/lib/Target/SPIRV/SPIRVBuiltins.td b/llvm/lib/Target/SPIRV/SPIRVBuiltins.td index 2a8deb6..3b8764a 100644 --- a/llvm/lib/Target/SPIRV/SPIRVBuiltins.td +++ b/llvm/lib/Target/SPIRV/SPIRVBuiltins.td @@ -70,6 +70,7 @@ def BindlessINTEL : BuiltinGroup; def TernaryBitwiseINTEL : BuiltinGroup; def Block2DLoadStore : BuiltinGroup; def Pipe : BuiltinGroup; +def PredicatedLoadStore : BuiltinGroup; //===----------------------------------------------------------------------===// // Class defining a demangled builtin record. The information in the record @@ -752,6 +753,10 @@ defm : DemangledNativeBuiltin<"__spirv_Subgroup2DBlockLoadTransformINTEL", OpenC defm : DemangledNativeBuiltin<"__spirv_Subgroup2DBlockPrefetchINTEL", OpenCL_std, Block2DLoadStore, 9, 9, OpSubgroup2DBlockPrefetchINTEL>; defm : DemangledNativeBuiltin<"__spirv_Subgroup2DBlockStoreINTEL", OpenCL_std, Block2DLoadStore, 10, 10, OpSubgroup2DBlockStoreINTEL>; +// SPV_INTEL_predicated_io builtin records +defm : DemangledNativeBuiltin<"__spirv_PredicatedLoadINTEL", OpenCL_std, PredicatedLoadStore, 3, 4, OpPredicatedLoadINTEL>; +defm : DemangledNativeBuiltin<"__spirv_PredicatedStoreINTEL", OpenCL_std, PredicatedLoadStore, 3, 4, OpPredicatedStoreINTEL>; + //===----------------------------------------------------------------------===// // Class defining a work/sub group builtin that should be translated into a // SPIR-V instruction using the defined properties. 
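A hedged IR sketch of the calls these records are intended to match (demangled names for readability; real modules carry Itanium-mangled declarations, and the optional trailing literal encodes a memory operand):

  declare i32 @__spirv_PredicatedLoadINTEL(ptr addrspace(1), i1, i32)
  declare void @__spirv_PredicatedStoreINTEL(ptr addrspace(1), i32, i1)

  define i32 @demo(ptr addrspace(1) %p, i1 %pred, i32 %v) {
    ; Reads *%p only if %pred is set, else yields the default value 0;
    ; lowered to OpPredicatedLoadINTEL.
    %r = call i32 @__spirv_PredicatedLoadINTEL(ptr addrspace(1) %p, i1 %pred, i32 0)
    ; Writes %v to *%p only if %pred is set; lowered to OpPredicatedStoreINTEL.
    call void @__spirv_PredicatedStoreINTEL(ptr addrspace(1) %p, i32 %v, i1 %pred)
    ret i32 %r
  }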
diff --git a/llvm/lib/Target/SPIRV/SPIRVCommandLine.cpp b/llvm/lib/Target/SPIRV/SPIRVCommandLine.cpp index 85ea9e1..5f3ed86 100644 --- a/llvm/lib/Target/SPIRV/SPIRVCommandLine.cpp +++ b/llvm/lib/Target/SPIRV/SPIRVCommandLine.cpp @@ -151,7 +151,9 @@ static const std::map<std::string, SPIRV::Extension::Extension, std::less<>> {"SPV_KHR_bfloat16", SPIRV::Extension::Extension::SPV_KHR_bfloat16}, {"SPV_EXT_relaxed_printf_string_address_space", SPIRV::Extension::Extension:: - SPV_EXT_relaxed_printf_string_address_space}}; + SPV_EXT_relaxed_printf_string_address_space}, + {"SPV_INTEL_predicated_io", + SPIRV::Extension::Extension::SPV_INTEL_predicated_io}}; bool SPIRVExtensionsParser::parse(cl::Option &O, StringRef ArgName, StringRef ArgValue, diff --git a/llvm/lib/Target/SPIRV/SPIRVEmitNonSemanticDI.cpp b/llvm/lib/Target/SPIRV/SPIRVEmitNonSemanticDI.cpp index 275463e..318ef06 100644 --- a/llvm/lib/Target/SPIRV/SPIRVEmitNonSemanticDI.cpp +++ b/llvm/lib/Target/SPIRV/SPIRVEmitNonSemanticDI.cpp @@ -112,7 +112,8 @@ bool SPIRVEmitNonSemanticDI::emitGlobalDI(MachineFunction &MF) { FilePaths.emplace_back(); sys::path::append(FilePaths.back(), File->getDirectory(), File->getFilename()); - LLVMSourceLanguages.push_back(CompileUnit->getSourceLanguage()); + LLVMSourceLanguages.push_back( + CompileUnit->getSourceLanguage().getUnversionedName()); } } const NamedMDNode *ModuleFlags = M->getNamedMetadata("llvm.module.flags"); diff --git a/llvm/lib/Target/SPIRV/SPIRVInstrInfo.td b/llvm/lib/Target/SPIRV/SPIRVInstrInfo.td index 1723bfb..a61351e 100644 --- a/llvm/lib/Target/SPIRV/SPIRVInstrInfo.td +++ b/llvm/lib/Target/SPIRV/SPIRVInstrInfo.td @@ -987,3 +987,9 @@ def OpSubgroup2DBlockPrefetchINTEL: Op<6234, (outs), (ins ID:$element_size, ID:$ def OpSubgroup2DBlockStoreINTEL: Op<6235, (outs), (ins ID:$element_size, ID:$block_width, ID:$block_height, ID:$block_count, ID:$src_ptr, ID:$dst_base_ptr, ID:$memory_width, ID:$memory_height, ID:$memory_pitch, ID:$coord), "OpSubgroup2DBlockStoreINTEL $element_size $block_width $block_height $block_count $src_ptr $dst_base_ptr $memory_width $memory_height $memory_pitch $coord">; + +// SPV_INTEL_predicated_io +def OpPredicatedLoadINTEL: Op<6528, (outs ID:$res), (ins TYPE:$resType, ID:$ptr, ID:$predicate, ID:$default_value, variable_ops), + "$res = OpPredicatedLoadINTEL $resType $ptr $predicate $default_value">; +def OpPredicatedStoreINTEL: Op<6529, (outs), (ins ID:$ptr, ID:$object, ID:$predicate, variable_ops), + "OpPredicatedStoreINTEL $ptr $object $predicate">; diff --git a/llvm/lib/Target/SPIRV/SPIRVModuleAnalysis.cpp b/llvm/lib/Target/SPIRV/SPIRVModuleAnalysis.cpp index dc717a6..5144fb1 100644 --- a/llvm/lib/Target/SPIRV/SPIRVModuleAnalysis.cpp +++ b/llvm/lib/Target/SPIRV/SPIRVModuleAnalysis.cpp @@ -2035,6 +2035,17 @@ void addInstrRequirements(const MachineInstr &MI, // TODO: Add UntypedPointersKHR when implemented. 
break; } + case SPIRV::OpPredicatedLoadINTEL: + case SPIRV::OpPredicatedStoreINTEL: { + if (!ST.canUseExtension(SPIRV::Extension::SPV_INTEL_predicated_io)) + report_fatal_error( + "OpPredicated[Load/Store]INTEL instructions require " + "the following SPIR-V extension: SPV_INTEL_predicated_io", + false); + Reqs.addExtension(SPIRV::Extension::SPV_INTEL_predicated_io); + Reqs.addCapability(SPIRV::Capability::PredicatedIOINTEL); + break; + } default: break; diff --git a/llvm/lib/Target/SPIRV/SPIRVSymbolicOperands.td b/llvm/lib/Target/SPIRV/SPIRVSymbolicOperands.td index 6a32dba..2625642 100644 --- a/llvm/lib/Target/SPIRV/SPIRVSymbolicOperands.td +++ b/llvm/lib/Target/SPIRV/SPIRVSymbolicOperands.td @@ -385,6 +385,7 @@ defm SPV_INTEL_int4 : ExtensionOperand<123, [EnvOpenCL]>; defm SPV_KHR_float_controls2 : ExtensionOperand<124, [EnvVulkan, EnvOpenCL]>; defm SPV_INTEL_tensor_float32_conversion : ExtensionOperand<125, [EnvOpenCL]>; defm SPV_KHR_bfloat16 : ExtensionOperand<126, [EnvVulkan, EnvOpenCL]>; +defm SPV_INTEL_predicated_io : ExtensionOperand<127, [EnvOpenCL]>; //===----------------------------------------------------------------------===// // Multiclass used to define Capabilities enum values and at the same time @@ -594,6 +595,7 @@ defm SubgroupMatrixMultiplyAccumulateINTEL : CapabilityOperand<6236, 0, 0, [SPV_ defm Subgroup2DBlockIOINTEL : CapabilityOperand<6228, 0, 0, [SPV_INTEL_2d_block_io], []>; defm Subgroup2DBlockTransformINTEL : CapabilityOperand<6229, 0, 0, [SPV_INTEL_2d_block_io], [Subgroup2DBlockIOINTEL]>; defm Subgroup2DBlockTransposeINTEL : CapabilityOperand<6230, 0, 0, [SPV_INTEL_2d_block_io], [Subgroup2DBlockIOINTEL]>; +defm PredicatedIOINTEL : CapabilityOperand<6257, 0, 0, [SPV_INTEL_predicated_io], []>; defm Int4TypeINTEL : CapabilityOperand<5112, 0, 0, [SPV_INTEL_int4], []>; defm Int4CooperativeMatrixINTEL : CapabilityOperand<5114, 0, 0, [SPV_INTEL_int4], [Int4TypeINTEL, CooperativeMatrixKHR]>; defm TensorFloat32RoundingINTEL : CapabilityOperand<6425, 0, 0, [SPV_INTEL_tensor_float32_conversion], []>; diff --git a/llvm/lib/Target/SystemZ/MCTargetDesc/SystemZInstPrinterCommon.cpp b/llvm/lib/Target/SystemZ/MCTargetDesc/SystemZInstPrinterCommon.cpp index af79070..275165d 100644 --- a/llvm/lib/Target/SystemZ/MCTargetDesc/SystemZInstPrinterCommon.cpp +++ b/llvm/lib/Target/SystemZ/MCTargetDesc/SystemZInstPrinterCommon.cpp @@ -184,8 +184,8 @@ void SystemZInstPrinterCommon::printPCRelTLSOperand(const MCInst *MI, // Output the TLS marker if present. 
if ((unsigned)OpNum + 1 < MI->getNumOperands()) { const MCOperand &MO = MI->getOperand(OpNum + 1); - const MCSymbolRefExpr &refExp = cast<MCSymbolRefExpr>(*MO.getExpr()); - switch (refExp.getSpecifier()) { + const MCSymbolRefExpr &RefExp = cast<MCSymbolRefExpr>(*MO.getExpr()); + switch (RefExp.getSpecifier()) { case SystemZ::S_TLSGD: O << ":tls_gdcall:"; break; @@ -195,7 +195,7 @@ void SystemZInstPrinterCommon::printPCRelTLSOperand(const MCInst *MI, default: llvm_unreachable("Unexpected symbol kind"); } - O << refExp.getSymbol().getName(); + O << RefExp.getSymbol().getName(); } } diff --git a/llvm/lib/Target/SystemZ/SystemZConstantPoolValue.cpp b/llvm/lib/Target/SystemZ/SystemZConstantPoolValue.cpp index fce6393..8c31579 100644 --- a/llvm/lib/Target/SystemZ/SystemZConstantPoolValue.cpp +++ b/llvm/lib/Target/SystemZ/SystemZConstantPoolValue.cpp @@ -13,10 +13,9 @@ using namespace llvm; -SystemZConstantPoolValue:: -SystemZConstantPoolValue(const GlobalValue *gv, - SystemZCP::SystemZCPModifier modifier) - : MachineConstantPoolValue(gv->getType()), GV(gv), Modifier(modifier) {} +SystemZConstantPoolValue::SystemZConstantPoolValue( + const GlobalValue *GV, SystemZCP::SystemZCPModifier Modifier) + : MachineConstantPoolValue(GV->getType()), GV(GV), Modifier(Modifier) {} SystemZConstantPoolValue * SystemZConstantPoolValue::Create(const GlobalValue *GV, diff --git a/llvm/lib/Target/SystemZ/SystemZHazardRecognizer.cpp b/llvm/lib/Target/SystemZ/SystemZHazardRecognizer.cpp index 34d58e0..5313fba 100644 --- a/llvm/lib/Target/SystemZ/SystemZHazardRecognizer.cpp +++ b/llvm/lib/Target/SystemZ/SystemZHazardRecognizer.cpp @@ -352,10 +352,9 @@ int SystemZHazardRecognizer::groupingCost(SUnit *SU) const { // Similarly, a group-ending SU may either fit well (last in group), or // end the group prematurely. if (SC->EndGroup) { - unsigned resultingGroupSize = - (CurrGroupSize + getNumDecoderSlots(SU)); - if (resultingGroupSize < 3) - return (3 - resultingGroupSize); + unsigned ResultingGroupSize = (CurrGroupSize + getNumDecoderSlots(SU)); + if (ResultingGroupSize < 3) + return (3 - ResultingGroupSize); return -1; } diff --git a/llvm/lib/Target/WebAssembly/WebAssemblyAsmPrinter.cpp b/llvm/lib/Target/WebAssembly/WebAssemblyAsmPrinter.cpp index 6bb064a..526420b 100644 --- a/llvm/lib/Target/WebAssembly/WebAssemblyAsmPrinter.cpp +++ b/llvm/lib/Target/WebAssembly/WebAssemblyAsmPrinter.cpp @@ -441,7 +441,9 @@ void WebAssemblyAsmPrinter::EmitProducerInfo(Module &M) { llvm::SmallSet<StringRef, 4> SeenLanguages; for (size_t I = 0, E = Debug->getNumOperands(); I < E; ++I) { const auto *CU = cast<DICompileUnit>(Debug->getOperand(I)); - StringRef Language = dwarf::LanguageString(CU->getSourceLanguage()); + StringRef Language = + dwarf::LanguageString(CU->getSourceLanguage().getUnversionedName()); + Language.consume_front("DW_LANG_"); if (SeenLanguages.insert(Language).second) Languages.emplace_back(Language.str(), ""); diff --git a/llvm/lib/Target/WebAssembly/WebAssemblyInstrSIMD.td b/llvm/lib/Target/WebAssembly/WebAssemblyInstrSIMD.td index 1306026..49af78b 100644 --- a/llvm/lib/Target/WebAssembly/WebAssemblyInstrSIMD.td +++ b/llvm/lib/Target/WebAssembly/WebAssemblyInstrSIMD.td @@ -1445,6 +1445,49 @@ def : Pat<(v16i8 (wasm_narrow_u (v8i16 V128:$left), (v8i16 V128:$right))), def : Pat<(v8i16 (wasm_narrow_u (v4i32 V128:$left), (v4i32 V128:$right))), (NARROW_U_I16x8 $left, $right)>; +// Recognize a saturating truncation and convert into the corresponding +// narrow_TYPE_s or narrow_TYPE_u instruction. 
+multiclass SignedSaturatingTruncate<ValueType input, ValueType output, + Instruction narrow, int minval, + int maxval, int mask> { + def : Pat< + (output (wasm_narrow_u + (and (smin (smax (input V128:$a), (splat_vector (i32 minval))), + (splat_vector (i32 maxval))), (splat_vector (i32 mask))), + (and (smin (smax (input V128:$b), (splat_vector (i32 minval))), + (splat_vector (i32 maxval))), (splat_vector (i32 mask))) + )), + (narrow V128:$a, V128:$b) + >; + + def : Pat< + (output (wasm_narrow_u + (and (smax (smin (input V128:$a), (splat_vector (i32 maxval))), + (splat_vector (i32 minval))), (splat_vector (i32 mask))), + (and (smax (smin (input V128:$b), (splat_vector (i32 maxval))), + (splat_vector (i32 minval))), (splat_vector (i32 mask))) + )), + (narrow V128:$a, V128:$b) + >; +} + +defm : SignedSaturatingTruncate<v8i16, v16i8, NARROW_S_I8x16, -128, 127, 0xFF>; +defm : SignedSaturatingTruncate<v4i32, v8i16, NARROW_S_I16x8, -32768, 32767, 0xFFFF>; + +multiclass UnsignedSaturatingTruncate<ValueType input, ValueType output, + Instruction narrow, int maxval> { + def : Pat< + (output (wasm_narrow_u + (umin (input V128:$a), (splat_vector (i32 maxval))), + (umin (input V128:$b), (splat_vector (i32 maxval))) + )), + (narrow V128:$a, V128:$b) + >; +} + +defm : UnsignedSaturatingTruncate<v8i16, v16i8, NARROW_U_I8x16, 0xFF>; +defm : UnsignedSaturatingTruncate<v4i32, v8i16, NARROW_U_I16x8, 0xFFFF>; + // Bitcasts are nops // Matching bitcast t1 to t1 causes strange errors, so avoid repeating types foreach t1 = AllVecs in diff --git a/llvm/lib/Target/X86/X86ISelLowering.cpp b/llvm/lib/Target/X86/X86ISelLowering.cpp index 931a10b..9580ade 100644 --- a/llvm/lib/Target/X86/X86ISelLowering.cpp +++ b/llvm/lib/Target/X86/X86ISelLowering.cpp @@ -3659,11 +3659,8 @@ bool X86TargetLowering::shouldFoldMaskToVariableShiftPair(SDValue Y) const { if (VT.isVector()) return false; - // 64-bit shifts on 32-bit targets produce really bad bloated code. - if (VT == MVT::i64 && !Subtarget.is64Bit()) - return false; - - return true; + unsigned MaxWidth = Subtarget.is64Bit() ? 
64 : 32; + return VT.getScalarSizeInBits() <= MaxWidth; } TargetLowering::ShiftLegalizationStrategy diff --git a/llvm/lib/Transforms/AggressiveInstCombine/AggressiveInstCombine.cpp b/llvm/lib/Transforms/AggressiveInstCombine/AggressiveInstCombine.cpp index 805bdb4..bbbac45 100644 --- a/llvm/lib/Transforms/AggressiveInstCombine/AggressiveInstCombine.cpp +++ b/llvm/lib/Transforms/AggressiveInstCombine/AggressiveInstCombine.cpp @@ -28,8 +28,12 @@ #include "llvm/IR/Dominators.h" #include "llvm/IR/Function.h" #include "llvm/IR/IRBuilder.h" +#include "llvm/IR/Instruction.h" +#include "llvm/IR/MDBuilder.h" #include "llvm/IR/PatternMatch.h" #include "llvm/IR/ProfDataUtils.h" +#include "llvm/Support/Casting.h" +#include "llvm/Support/CommandLine.h" #include "llvm/Transforms/Utils/BasicBlockUtils.h" #include "llvm/Transforms/Utils/BuildLibCalls.h" #include "llvm/Transforms/Utils/Local.h" @@ -39,6 +43,10 @@ using namespace PatternMatch; #define DEBUG_TYPE "aggressive-instcombine" +namespace llvm { +extern cl::opt<bool> ProfcheckDisableMetadataFixes; +} + STATISTIC(NumAnyOrAllBitsSet, "Number of any/all-bits-set patterns folded"); STATISTIC(NumGuardedRotates, "Number of guarded rotates transformed into funnel shifts"); @@ -599,6 +607,14 @@ static bool tryToRecognizeTableBasedCttz(Instruction &I, const DataLayout &DL) { auto Cmp = B.CreateICmpEQ(X1, ConstantInt::get(XType, 0)); auto Select = B.CreateSelect(Cmp, B.CreateZExt(ZeroTableElem, XType), Cttz); + // The true branch of select handles the cttz(0) case, which is rare. + if (!ProfcheckDisableMetadataFixes) { + if (Instruction *SelectI = dyn_cast<Instruction>(Select)) + SelectI->setMetadata( + LLVMContext::MD_prof, + MDBuilder(SelectI->getContext()).createUnlikelyBranchWeights()); + } + // NOTE: If the table[0] is 0, but the cttz(0) is defined by the Target // it should be handled as: `cttz(x) & (typeSize - 1)`. 
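In IR terms, the select emitted for the table-based cttz idiom now carries unlikely branch weights; a sketch for the i32 case (constants and weights illustrative):

  define i32 @cttz_from_table(i32 %x) {
    %ct = call i32 @llvm.cttz.i32(i32 %x, i1 true)
    %iszero = icmp eq i32 %x, 0
    ; The true arm covers cttz(0), the fallback loaded from table[0], which
    ; the comment above notes is rare -- hence the unlikely !prof weights.
    %r = select i1 %iszero, i32 32, i32 %ct, !prof !0
    ret i32 %r
  }
  declare i32 @llvm.cttz.i32(i32, i1)
  !0 = !{!"branch_weights", i32 1, i32 1048575}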
diff --git a/llvm/lib/Transforms/Coroutines/CoroAnnotationElide.cpp b/llvm/lib/Transforms/Coroutines/CoroAnnotationElide.cpp
index 9115946..f166fef 100644
--- a/llvm/lib/Transforms/Coroutines/CoroAnnotationElide.cpp
+++ b/llvm/lib/Transforms/Coroutines/CoroAnnotationElide.cpp
@@ -24,6 +24,9 @@
 #include "llvm/IR/Instruction.h"
 #include "llvm/IR/Module.h"
 #include "llvm/IR/PassManager.h"
+#include "llvm/Support/BranchProbability.h"
+#include "llvm/Support/CommandLine.h"
+#include "llvm/Support/FileSystem.h"
 #include "llvm/Transforms/Utils/CallGraphUpdater.h"
 #include "llvm/Transforms/Utils/Cloning.h"
 
@@ -33,6 +36,11 @@ using namespace llvm;
 
 #define DEBUG_TYPE "coro-annotation-elide"
 
+static cl::opt<float> CoroElideBranchRatio(
+    "coro-elide-branch-ratio", cl::init(0.55), cl::Hidden,
+    cl::desc("Minimum BranchProbability to consider eliding a coroutine."));
+extern cl::opt<unsigned> MinBlockCounterExecution;
+
 static Instruction *getFirstNonAllocaInTheEntryBlock(Function *F) {
   for (Instruction &I : F->getEntryBlock())
     if (!isa<AllocaInst>(&I))
@@ -145,6 +153,30 @@ PreservedAnalyses CoroAnnotationElidePass::run(LazyCallGraph::SCC &C,
     bool IsCallerPresplitCoroutine = Caller->isPresplitCoroutine();
     bool HasAttr = CB->hasFnAttr(llvm::Attribute::CoroElideSafe);
     if (IsCallerPresplitCoroutine && HasAttr) {
+      BranchProbability MinBranchProbability(
+          static_cast<int>(CoroElideBranchRatio * MinBlockCounterExecution),
+          MinBlockCounterExecution);
+
+      auto &BFI = FAM.getResult<BlockFrequencyAnalysis>(*Caller);
+
+      auto Prob = BranchProbability::getBranchProbability(
+          BFI.getBlockFreq(CB->getParent()).getFrequency(),
+          BFI.getEntryFreq().getFrequency());
+
+      if (Prob < MinBranchProbability) {
+        ORE.emit([&]() {
+          return OptimizationRemarkMissed(
+                     DEBUG_TYPE, "CoroAnnotationElideUnlikely", Caller)
+                 << "'" << ore::NV("callee", Callee->getName())
+                 << "' not elided in '"
+                 << ore::NV("caller", Caller->getName())
+                 << "' because of low probability: "
+                 << ore::NV("probability", Prob) << " (threshold: "
+                 << ore::NV("threshold", MinBranchProbability) << ")";
+        });
+        continue;
+      }
+
       auto *CallerN = CG.lookup(*Caller);
       auto *CallerC = CallerN ? CG.lookupSCC(*CallerN) : nullptr;
       // If CallerC is nullptr, it means LazyCallGraph hasn't visited Caller
@@ -156,7 +188,7 @@ PreservedAnalyses CoroAnnotationElidePass::run(LazyCallGraph::SCC &C,
           return OptimizationRemark(DEBUG_TYPE, "CoroAnnotationElide", Caller)
                  << "'" << ore::NV("callee", Callee->getName())
                  << "' elided in '" << ore::NV("caller", Caller->getName())
-                 << "'";
+                 << "' (probability: " << ore::NV("probability", Prob) << ")";
         });
 
         FAM.invalidate(*Caller, PreservedAnalyses::none());
diff --git a/llvm/lib/Transforms/Coroutines/CoroFrame.cpp b/llvm/lib/Transforms/Coroutines/CoroFrame.cpp
index 0accb22..c89af68 100644
--- a/llvm/lib/Transforms/Coroutines/CoroFrame.cpp
+++ b/llvm/lib/Transforms/Coroutines/CoroFrame.cpp
@@ -689,10 +689,14 @@ static void buildFrameDebugInfo(Function &F, coro::Shape &Shape,
   DISubprogram *DIS = F.getSubprogram();
   // If there is no DISubprogram for F, it implies the function is compiled
   // without debug info. So we also don't generate debug info for the frame.
- if (!DIS || !DIS->getUnit() || - !dwarf::isCPlusPlus( - (dwarf::SourceLanguage)DIS->getUnit()->getSourceLanguage()) || - DIS->getUnit()->getEmissionKind() != DICompileUnit::DebugEmissionKind::FullDebug) + + if (!DIS || !DIS->getUnit()) + return; + + if (!dwarf::isCPlusPlus(static_cast<llvm::dwarf::SourceLanguage>( + DIS->getUnit()->getSourceLanguage().getUnversionedName())) || + DIS->getUnit()->getEmissionKind() != + DICompileUnit::DebugEmissionKind::FullDebug) return; assert(Shape.ABI == coro::ABI::Switch && diff --git a/llvm/lib/Transforms/IPO/PartialInlining.cpp b/llvm/lib/Transforms/IPO/PartialInlining.cpp index 2583249..1a00d17 100644 --- a/llvm/lib/Transforms/IPO/PartialInlining.cpp +++ b/llvm/lib/Transforms/IPO/PartialInlining.cpp @@ -109,7 +109,7 @@ static cl::opt<float> MinRegionSizeRatio( "outline candidate and original function")); // Used to tune the minimum number of execution counts needed in the predecessor // block to the cold edge. ie. confidence interval. -static cl::opt<unsigned> +cl::opt<unsigned> MinBlockCounterExecution("min-block-execution", cl::init(100), cl::Hidden, cl::desc("Minimum block executions to consider " "its BranchProbabilityInfo valid")); diff --git a/llvm/lib/Transforms/InstCombine/InstCombineSimplifyDemanded.cpp b/llvm/lib/Transforms/InstCombine/InstCombineSimplifyDemanded.cpp index aa030294..127a506 100644 --- a/llvm/lib/Transforms/InstCombine/InstCombineSimplifyDemanded.cpp +++ b/llvm/lib/Transforms/InstCombine/InstCombineSimplifyDemanded.cpp @@ -60,6 +60,58 @@ static bool ShrinkDemandedConstant(Instruction *I, unsigned OpNo, return true; } +/// Let N = 2 * M. +/// Given an N-bit integer representing a pack of two M-bit integers, +/// we can select one of the packed integers by right-shifting by either +/// zero or M (which is the most straightforward to check if M is a power +/// of 2), and then isolating the lower M bits. In this case, we can +/// represent the shift as a select on whether the shr amount is nonzero. +static Value *simplifyShiftSelectingPackedElement(Instruction *I, + const APInt &DemandedMask, + InstCombinerImpl &IC, + unsigned Depth) { + assert(I->getOpcode() == Instruction::LShr && + "Only lshr instruction supported"); + + uint64_t ShlAmt; + Value *Upper, *Lower; + if (!match(I->getOperand(0), + m_OneUse(m_c_DisjointOr( + m_OneUse(m_Shl(m_Value(Upper), m_ConstantInt(ShlAmt))), + m_Value(Lower))))) + return nullptr; + + if (!isPowerOf2_64(ShlAmt)) + return nullptr; + + const uint64_t DemandedBitWidth = DemandedMask.getActiveBits(); + if (DemandedBitWidth > ShlAmt) + return nullptr; + + // Check that upper demanded bits are not lost from lshift. + if (Upper->getType()->getScalarSizeInBits() < ShlAmt + DemandedBitWidth) + return nullptr; + + KnownBits KnownLowerBits = IC.computeKnownBits(Lower, I, Depth); + if (!KnownLowerBits.getMaxValue().isIntN(ShlAmt)) + return nullptr; + + Value *ShrAmt = I->getOperand(1); + KnownBits KnownShrBits = IC.computeKnownBits(ShrAmt, I, Depth); + + // Verify that ShrAmt is either exactly ShlAmt (which is a power of 2) or + // zero. + if (~KnownShrBits.Zero != ShlAmt) + return nullptr; + + Value *ShrAmtZ = + IC.Builder.CreateICmpEQ(ShrAmt, Constant::getNullValue(ShrAmt->getType()), + ShrAmt->getName() + ".z"); + Value *Select = IC.Builder.CreateSelect(ShrAmtZ, Lower, Upper); + Select->takeName(I); + return Select; +} + /// Returns the bitwidth of the given scalar or pointer type. For vector types, /// returns the element type's bitwidth. 
static unsigned getBitWidth(Type *Ty, const DataLayout &DL) { @@ -798,9 +850,13 @@ Value *InstCombinerImpl::SimplifyDemandedUseBits(Instruction *I, Known >>= ShiftAmt; if (ShiftAmt) Known.Zero.setHighBits(ShiftAmt); // high bits known zero. - } else { - llvm::computeKnownBits(I, Known, Q, Depth); + break; } + if (Value *V = + simplifyShiftSelectingPackedElement(I, DemandedMask, *this, Depth)) + return V; + + llvm::computeKnownBits(I, Known, Q, Depth); break; } case Instruction::AShr: { diff --git a/llvm/lib/Transforms/Instrumentation/AllocToken.cpp b/llvm/lib/Transforms/Instrumentation/AllocToken.cpp index 782d5a1..40720ae 100644 --- a/llvm/lib/Transforms/Instrumentation/AllocToken.cpp +++ b/llvm/lib/Transforms/Instrumentation/AllocToken.cpp @@ -69,19 +69,30 @@ enum class TokenMode : unsigned { /// Token ID based on allocated type hash. TypeHash = 2, + + /// Token ID based on allocated type hash, where the top half ID-space is + /// reserved for types that contain pointers and the bottom half for types + /// that do not contain pointers. + TypeHashPointerSplit = 3, }; //===--- Command-line options ---------------------------------------------===// -cl::opt<TokenMode> - ClMode("alloc-token-mode", cl::Hidden, cl::desc("Token assignment mode"), - cl::init(TokenMode::TypeHash), - cl::values(clEnumValN(TokenMode::Increment, "increment", - "Incrementally increasing token ID"), - clEnumValN(TokenMode::Random, "random", - "Statically-assigned random token ID"), - clEnumValN(TokenMode::TypeHash, "typehash", - "Token ID based on allocated type hash"))); +cl::opt<TokenMode> ClMode( + "alloc-token-mode", cl::Hidden, cl::desc("Token assignment mode"), + cl::init(TokenMode::TypeHashPointerSplit), + cl::values( + clEnumValN(TokenMode::Increment, "increment", + "Incrementally increasing token ID"), + clEnumValN(TokenMode::Random, "random", + "Statically-assigned random token ID"), + clEnumValN(TokenMode::TypeHash, "typehash", + "Token ID based on allocated type hash"), + clEnumValN( + TokenMode::TypeHashPointerSplit, "typehashpointersplit", + "Token ID based on allocated type hash, where the top half " + "ID-space is reserved for types that contain pointers and the " + "bottom half for types that do not contain pointers. "))); cl::opt<std::string> ClFuncPrefix("alloc-token-prefix", cl::desc("The allocation function prefix"), @@ -127,16 +138,23 @@ STATISTIC(NumAllocationsInstrumented, "Allocations instrumented"); /// Returns the !alloc_token metadata if available. /// -/// Expected format is: !{<type-name>} +/// Expected format is: !{<type-name>, <contains-pointer>} MDNode *getAllocTokenMetadata(const CallBase &CB) { MDNode *Ret = CB.getMetadata(LLVMContext::MD_alloc_token); if (!Ret) return nullptr; - assert(Ret->getNumOperands() == 1 && "bad !alloc_token"); + assert(Ret->getNumOperands() == 2 && "bad !alloc_token"); assert(isa<MDString>(Ret->getOperand(0))); + assert(isa<ConstantAsMetadata>(Ret->getOperand(1))); return Ret; } +bool containsPointer(const MDNode *MD) { + ConstantAsMetadata *C = cast<ConstantAsMetadata>(MD->getOperand(1)); + auto *CI = cast<ConstantInt>(C->getValue()); + return CI->getValue().getBoolValue(); +} + class ModeBase { public: explicit ModeBase(const IntegerType &TokenTy, uint64_t MaxTokens) @@ -188,12 +206,20 @@ public: using ModeBase::ModeBase; uint64_t operator()(const CallBase &CB, OptimizationRemarkEmitter &ORE) { + const auto [N, H] = getHash(CB, ORE); + return N ? 
boundedToken(H) : H; + } + +protected: + std::pair<MDNode *, uint64_t> getHash(const CallBase &CB, + OptimizationRemarkEmitter &ORE) { if (MDNode *N = getAllocTokenMetadata(CB)) { MDString *S = cast<MDString>(N->getOperand(0)); - return boundedToken(getStableSipHash(S->getString())); + return {N, getStableSipHash(S->getString())}; } + // Fallback. remarkNoMetadata(CB, ORE); - return ClFallbackToken; + return {nullptr, ClFallbackToken}; } /// Remark that there was no precise type information. @@ -210,6 +236,29 @@ public: } }; +/// Implementation for TokenMode::TypeHashPointerSplit. +class TypeHashPointerSplitMode : public TypeHashMode { +public: + using TypeHashMode::TypeHashMode; + + uint64_t operator()(const CallBase &CB, OptimizationRemarkEmitter &ORE) { + if (MaxTokens == 1) + return 0; + const uint64_t HalfTokens = MaxTokens / 2; + const auto [N, H] = getHash(CB, ORE); + if (!N) { + // Pick the fallback token (ClFallbackToken), which by default is 0, + // meaning it'll fall into the pointer-less bucket. Override by setting + // -alloc-token-fallback if that is the wrong choice. + return H; + } + uint64_t Hash = H % HalfTokens; // base hash + if (containsPointer(N)) + Hash += HalfTokens; + return Hash; + } +}; + // Apply opt overrides. AllocTokenOptions transformOptionsFromCl(AllocTokenOptions Opts) { if (!Opts.MaxTokens.has_value()) @@ -236,6 +285,9 @@ public: case TokenMode::TypeHash: Mode.emplace<TypeHashMode>(*IntPtrTy, *Options.MaxTokens); break; + case TokenMode::TypeHashPointerSplit: + Mode.emplace<TypeHashPointerSplitMode>(*IntPtrTy, *Options.MaxTokens); + break; } } @@ -275,7 +327,9 @@ private: // Cache for replacement functions. DenseMap<std::pair<LibFunc, uint64_t>, FunctionCallee> TokenAllocFunctions; // Selected mode. - std::variant<IncrementMode, RandomMode, TypeHashMode> Mode; + std::variant<IncrementMode, RandomMode, TypeHashMode, + TypeHashPointerSplitMode> + Mode; }; bool AllocToken::instrumentFunction(Function &F) { diff --git a/llvm/lib/Transforms/Instrumentation/SanitizerCoverage.cpp b/llvm/lib/Transforms/Instrumentation/SanitizerCoverage.cpp index 5b8ea15..b74a070 100644 --- a/llvm/lib/Transforms/Instrumentation/SanitizerCoverage.cpp +++ b/llvm/lib/Transforms/Instrumentation/SanitizerCoverage.cpp @@ -1084,8 +1084,10 @@ void ModuleSanitizerCoverage::InjectCoverageAtBlock(Function &F, BasicBlock &BB, auto ThenTerm = SplitBlockAndInsertIfThen( IRB.CreateIsNull(Load), &*IP, false, MDBuilder(IRB.getContext()).createUnlikelyBranchWeights()); - IRBuilder<> ThenIRB(ThenTerm); + InstrumentationIRBuilder ThenIRB(ThenTerm); auto Store = ThenIRB.CreateStore(ConstantInt::getTrue(Int1Ty), FlagPtr); + if (EntryLoc) + Store->setDebugLoc(EntryLoc); Load->setNoSanitizeMetadata(); Store->setNoSanitizeMetadata(); } @@ -1131,7 +1133,10 @@ void ModuleSanitizerCoverage::InjectCoverageAtBlock(Function &F, BasicBlock &BB, EstimatedStackSize >= Options.StackDepthCallbackMin) { if (InsertBefore) IRB.SetInsertPoint(InsertBefore); - IRB.CreateCall(SanCovStackDepthCallback)->setCannotMerge(); + auto Call = IRB.CreateCall(SanCovStackDepthCallback); + if (EntryLoc) + Call->setDebugLoc(EntryLoc); + Call->setCannotMerge(); } } else { // Check stack depth. If it's the deepest so far, record it. 
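The change here is purely in metadata; a minimal sketch of the guarded coverage-flag update with the function entry's location attached (all DI nodes illustrative):

  @flag = private global i1 false

  define void @covered() !dbg !4 {
    %old = load i1, ptr @flag
    %need = icmp eq i1 %old, false
    br i1 %need, label %set, label %cont
  set:
    ; Previously this store carried no location; it now inherits the entry
    ; !dbg, as do the stack-depth callback and the lowest-stack store.
    store i1 true, ptr @flag, !dbg !7
    br label %cont
  cont:
    ret void, !dbg !7
  }

  !llvm.dbg.cu = !{!0}
  !llvm.module.flags = !{!3}
  !0 = distinct !DICompileUnit(language: DW_LANG_C, file: !1, emissionKind: FullDebug)
  !1 = !DIFile(filename: "demo.c", directory: "/")
  !3 = !{i32 2, !"Debug Info Version", i32 3}
  !4 = distinct !DISubprogram(name: "covered", file: !1, line: 1, type: !5, unit: !0, spFlags: DISPFlagDefinition)
  !5 = !DISubroutineType(types: !6)
  !6 = !{null}
  !7 = !DILocation(line: 1, scope: !4)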
@@ -1144,8 +1149,10 @@ void ModuleSanitizerCoverage::InjectCoverageAtBlock(Function &F, BasicBlock &BB, auto ThenTerm = SplitBlockAndInsertIfThen( IsStackLower, &*IP, false, MDBuilder(IRB.getContext()).createUnlikelyBranchWeights()); - IRBuilder<> ThenIRB(ThenTerm); + InstrumentationIRBuilder ThenIRB(ThenTerm); auto Store = ThenIRB.CreateStore(FrameAddrInt, SanCovLowestStack); + if (EntryLoc) + Store->setDebugLoc(EntryLoc); LowestStack->setNoSanitizeMetadata(); Store->setNoSanitizeMetadata(); } diff --git a/llvm/lib/Transforms/Scalar/DFAJumpThreading.cpp b/llvm/lib/Transforms/Scalar/DFAJumpThreading.cpp index e448230..3f7003d 100644 --- a/llvm/lib/Transforms/Scalar/DFAJumpThreading.cpp +++ b/llvm/lib/Transforms/Scalar/DFAJumpThreading.cpp @@ -61,6 +61,7 @@ #include "llvm/ADT/APInt.h" #include "llvm/ADT/DenseMap.h" #include "llvm/ADT/Statistic.h" +#include "llvm/ADT/StringExtras.h" #include "llvm/Analysis/AssumptionCache.h" #include "llvm/Analysis/CodeMetrics.h" #include "llvm/Analysis/DomTreeUpdater.h" @@ -382,16 +383,9 @@ typedef DenseMap<BasicBlock *, CloneList> DuplicateBlockMap; typedef MapVector<Instruction *, std::vector<Instruction *>> DefMap; inline raw_ostream &operator<<(raw_ostream &OS, const PathType &Path) { - OS << "< "; - for (const BasicBlock *BB : Path) { - std::string BBName; - if (BB->hasName()) - raw_string_ostream(BBName) << BB->getName(); - else - raw_string_ostream(BBName) << BB; - OS << BBName << " "; - } - OS << ">"; + auto BBNames = llvm::map_range( + Path, [](const BasicBlock *BB) { return BB->getNameOrAsOperand(); }); + OS << "< " << llvm::join(BBNames, ", ") << " >"; return OS; } @@ -423,7 +417,7 @@ struct ThreadingPath { } void print(raw_ostream &OS) const { - OS << Path << " [ " << ExitVal << ", " << DBB->getName() << " ]"; + OS << Path << " [ " << ExitVal << ", " << DBB->getNameOrAsOperand() << " ]"; } private: diff --git a/llvm/lib/Transforms/Scalar/GVN.cpp b/llvm/lib/Transforms/Scalar/GVN.cpp index b9b5b58..638952a 100644 --- a/llvm/lib/Transforms/Scalar/GVN.cpp +++ b/llvm/lib/Transforms/Scalar/GVN.cpp @@ -699,6 +699,7 @@ uint32_t GVNPass::ValueTable::lookupOrAdd(Value *V) { case Instruction::FPTrunc: case Instruction::FPExt: case Instruction::PtrToInt: + case Instruction::PtrToAddr: case Instruction::IntToPtr: case Instruction::AddrSpaceCast: case Instruction::BitCast: diff --git a/llvm/lib/Transforms/Scalar/NewGVN.cpp b/llvm/lib/Transforms/Scalar/NewGVN.cpp index d6b7633..3c1a8ba 100644 --- a/llvm/lib/Transforms/Scalar/NewGVN.cpp +++ b/llvm/lib/Transforms/Scalar/NewGVN.cpp @@ -2066,6 +2066,7 @@ NewGVN::performSymbolicEvaluation(Instruction *I, case Instruction::FPTrunc: case Instruction::FPExt: case Instruction::PtrToInt: + case Instruction::PtrToAddr: case Instruction::IntToPtr: case Instruction::Select: case Instruction::ExtractElement: diff --git a/llvm/lib/Transforms/Scalar/SimplifyCFGPass.cpp b/llvm/lib/Transforms/Scalar/SimplifyCFGPass.cpp index 60e5df0..7ffccf7 100644 --- a/llvm/lib/Transforms/Scalar/SimplifyCFGPass.cpp +++ b/llvm/lib/Transforms/Scalar/SimplifyCFGPass.cpp @@ -355,6 +355,8 @@ void SimplifyCFGPass::printPipeline( OS << (Options.ForwardSwitchCondToPhi ? "" : "no-") << "forward-switch-cond;"; OS << (Options.ConvertSwitchRangeToICmp ? "" : "no-") << "switch-range-to-icmp;"; + OS << (Options.ConvertSwitchToArithmetic ? "" : "no-") + << "switch-to-arithmetic;"; OS << (Options.ConvertSwitchToLookupTable ? "" : "no-") << "switch-to-lookup;"; OS << (Options.NeedCanonicalLoop ? 
"" : "no-") << "keep-loops;"; diff --git a/llvm/lib/Transforms/Utils/Debugify.cpp b/llvm/lib/Transforms/Utils/Debugify.cpp index 5a09b73..2923633 100644 --- a/llvm/lib/Transforms/Utils/Debugify.cpp +++ b/llvm/lib/Transforms/Utils/Debugify.cpp @@ -19,6 +19,7 @@ #include "llvm/Config/llvm-config.h" #include "llvm/IR/DIBuilder.h" #include "llvm/IR/DebugInfo.h" +#include "llvm/IR/DebugInfoMetadata.h" #include "llvm/IR/DebugLoc.h" #include "llvm/IR/InstIterator.h" #include "llvm/IR/Instructions.h" @@ -162,8 +163,8 @@ bool llvm::applyDebugifyMetadata( unsigned NextLine = 1; unsigned NextVar = 1; auto File = DIB.createFile(M.getName(), "/"); - auto CU = DIB.createCompileUnit(dwarf::DW_LANG_C, File, "debugify", - /*isOptimized=*/true, "", 0); + auto CU = DIB.createCompileUnit(DISourceLanguageName(dwarf::DW_LANG_C), File, + "debugify", /*isOptimized=*/true, "", 0); // Visit each instruction. for (Function &F : Functions) { diff --git a/llvm/lib/Transforms/Utils/LoopRotationUtils.cpp b/llvm/lib/Transforms/Utils/LoopRotationUtils.cpp index 7cc9ff8..0c8d6fa 100644 --- a/llvm/lib/Transforms/Utils/LoopRotationUtils.cpp +++ b/llvm/lib/Transforms/Utils/LoopRotationUtils.cpp @@ -45,12 +45,6 @@ STATISTIC(NumInstrsHoisted, "Number of instructions hoisted into loop preheader"); STATISTIC(NumInstrsDuplicated, "Number of instructions cloned into loop preheader"); -STATISTIC(NumRotated, "Number of loops rotated"); - -static cl::opt<bool> - MultiRotate("loop-rotate-multi", cl::init(false), cl::Hidden, - cl::desc("Allow loop rotation multiple times in order to reach " - "a better latch exit")); // Probability that a rotated loop has zero trip count / is never entered. static constexpr uint32_t ZeroTripCountWeights[] = {1, 127}; @@ -206,50 +200,6 @@ static bool profitableToRotateLoopExitingLatch(Loop *L) { return false; } -// Check that latch exit is deoptimizing (which means - very unlikely to happen) -// and there is another exit from the loop which is non-deoptimizing. -// If we rotate latch to that exit our loop has a better chance of being fully -// canonical. -// -// It can give false positives in some rare cases. -static bool canRotateDeoptimizingLatchExit(Loop *L) { - BasicBlock *Latch = L->getLoopLatch(); - assert(Latch && "need latch"); - BranchInst *BI = dyn_cast<BranchInst>(Latch->getTerminator()); - // Need normal exiting latch. - if (!BI || !BI->isConditional()) - return false; - - BasicBlock *Exit = BI->getSuccessor(1); - if (L->contains(Exit)) - Exit = BI->getSuccessor(0); - - // Latch exit is non-deoptimizing, no need to rotate. - if (!Exit->getPostdominatingDeoptimizeCall()) - return false; - - SmallVector<BasicBlock *, 4> Exits; - L->getUniqueExitBlocks(Exits); - if (!Exits.empty()) { - // There is at least one non-deoptimizing exit. - // - // Note, that BasicBlock::getPostdominatingDeoptimizeCall is not exact, - // as it can conservatively return false for deoptimizing exits with - // complex enough control flow down to deoptimize call. - // - // That means here we can report success for a case where - // all exits are deoptimizing but one of them has complex enough - // control flow (e.g. with loops). - // - // That should be a very rare case and false positives for this function - // have compile-time effect only. 
- return any_of(Exits, [](const BasicBlock *BB) { - return !BB->getPostdominatingDeoptimizeCall(); - }); - } - return false; -} - static void updateBranchWeights(BranchInst &PreHeaderBI, BranchInst &LoopBI, bool HasConditionalPreHeader, bool SuccsSwapped) { @@ -387,506 +337,489 @@ static void updateBranchWeights(BranchInst &PreHeaderBI, BranchInst &LoopBI, /// rotation. LoopRotate should be repeatable and converge to a canonical /// form. This property is satisfied because simplifying the loop latch can only /// happen once across multiple invocations of the LoopRotate pass. -/// -/// If -loop-rotate-multi is enabled we can do multiple rotations in one go -/// so to reach a suitable (non-deoptimizing) exit. bool LoopRotate::rotateLoop(Loop *L, bool SimplifiedLatch) { // If the loop has only one block then there is not much to rotate. if (L->getBlocks().size() == 1) return false; bool Rotated = false; - do { - BasicBlock *OrigHeader = L->getHeader(); - BasicBlock *OrigLatch = L->getLoopLatch(); - - BranchInst *BI = dyn_cast<BranchInst>(OrigHeader->getTerminator()); - if (!BI || BI->isUnconditional()) - return Rotated; - - // If the loop header is not one of the loop exiting blocks then - // either this loop is already rotated or it is not - // suitable for loop rotation transformations. - if (!L->isLoopExiting(OrigHeader)) + BasicBlock *OrigHeader = L->getHeader(); + BasicBlock *OrigLatch = L->getLoopLatch(); + + BranchInst *BI = dyn_cast<BranchInst>(OrigHeader->getTerminator()); + if (!BI || BI->isUnconditional()) + return Rotated; + + // If the loop header is not one of the loop exiting blocks then + // either this loop is already rotated or it is not + // suitable for loop rotation transformations. + if (!L->isLoopExiting(OrigHeader)) + return Rotated; + + // If the loop latch already contains a branch that leaves the loop then the + // loop is already rotated. + if (!OrigLatch) + return Rotated; + + // Rotate if the loop latch was just simplified. Or if it makes the loop exit + // count computable. Or if we think it will be profitable. + if (L->isLoopExiting(OrigLatch) && !SimplifiedLatch && IsUtilMode == false && + !profitableToRotateLoopExitingLatch(L)) + return Rotated; + + // Check size of original header and reject loop if it is very big or we can't + // duplicate blocks inside it. + { + SmallPtrSet<const Value *, 32> EphValues; + CodeMetrics::collectEphemeralValues(L, AC, EphValues); + + CodeMetrics Metrics; + Metrics.analyzeBasicBlock(OrigHeader, *TTI, EphValues, PrepareForLTO); + if (Metrics.notDuplicatable) { + LLVM_DEBUG( + dbgs() << "LoopRotation: NOT rotating - contains non-duplicatable" + << " instructions: "; + L->dump()); return Rotated; - - // If the loop latch already contains a branch that leaves the loop then the - // loop is already rotated. - if (!OrigLatch) + } + if (Metrics.Convergence != ConvergenceKind::None) { + LLVM_DEBUG(dbgs() << "LoopRotation: NOT rotating - contains convergent " + "instructions: "; + L->dump()); return Rotated; - - // Rotate if either the loop latch does *not* exit the loop, or if the loop - // latch was just simplified. Or if we think it will be profitable. 
- if (L->isLoopExiting(OrigLatch) && !SimplifiedLatch && IsUtilMode == false && - !profitableToRotateLoopExitingLatch(L) && - !canRotateDeoptimizingLatchExit(L)) + } + if (!Metrics.NumInsts.isValid()) { + LLVM_DEBUG(dbgs() << "LoopRotation: NOT rotating - contains instructions" + " with invalid cost: "; + L->dump()); return Rotated; - - // Check size of original header and reject loop if it is very big or we can't - // duplicate blocks inside it. - { - SmallPtrSet<const Value *, 32> EphValues; - CodeMetrics::collectEphemeralValues(L, AC, EphValues); - - CodeMetrics Metrics; - Metrics.analyzeBasicBlock(OrigHeader, *TTI, EphValues, PrepareForLTO); - if (Metrics.notDuplicatable) { - LLVM_DEBUG( - dbgs() << "LoopRotation: NOT rotating - contains non-duplicatable" - << " instructions: "; - L->dump()); - return Rotated; - } - if (Metrics.Convergence != ConvergenceKind::None) { - LLVM_DEBUG(dbgs() << "LoopRotation: NOT rotating - contains convergent " - "instructions: "; - L->dump()); - return Rotated; - } - if (!Metrics.NumInsts.isValid()) { - LLVM_DEBUG(dbgs() << "LoopRotation: NOT rotating - contains instructions" - " with invalid cost: "; - L->dump()); - return Rotated; - } - if (Metrics.NumInsts > MaxHeaderSize) { - LLVM_DEBUG(dbgs() << "LoopRotation: NOT rotating - contains " - << Metrics.NumInsts - << " instructions, which is more than the threshold (" - << MaxHeaderSize << " instructions): "; - L->dump()); - ++NumNotRotatedDueToHeaderSize; - return Rotated; - } - - // When preparing for LTO, avoid rotating loops with calls that could be - // inlined during the LTO stage. - if (PrepareForLTO && Metrics.NumInlineCandidates > 0) - return Rotated; } - - // Now, this loop is suitable for rotation. - BasicBlock *OrigPreheader = L->getLoopPreheader(); - - // If the loop could not be converted to canonical form, it must have an - // indirectbr in it, just give up. - if (!OrigPreheader || !L->hasDedicatedExits()) + if (Metrics.NumInsts > MaxHeaderSize) { + LLVM_DEBUG(dbgs() << "LoopRotation: NOT rotating - contains " + << Metrics.NumInsts + << " instructions, which is more than the threshold (" + << MaxHeaderSize << " instructions): "; + L->dump()); + ++NumNotRotatedDueToHeaderSize; return Rotated; - - // Anything ScalarEvolution may know about this loop or the PHI nodes - // in its header will soon be invalidated. We should also invalidate - // all outer loops because insertion and deletion of blocks that happens - // during the rotation may violate invariants related to backedge taken - // infos in them. - if (SE) { - SE->forgetTopmostLoop(L); - // We may hoist some instructions out of loop. In case if they were cached - // as "loop variant" or "loop computable", these caches must be dropped. - // We also may fold basic blocks, so cached block dispositions also need - // to be dropped. - SE->forgetBlockAndLoopDispositions(); } - LLVM_DEBUG(dbgs() << "LoopRotation: rotating "; L->dump()); - if (MSSAU && VerifyMemorySSA) - MSSAU->getMemorySSA()->verifyMemorySSA(); - - // Find new Loop header. NewHeader is a Header's one and only successor - // that is inside loop. Header's other successor is outside the - // loop. Otherwise loop is not suitable for rotation. 
- BasicBlock *Exit = BI->getSuccessor(0); - BasicBlock *NewHeader = BI->getSuccessor(1); - bool BISuccsSwapped = L->contains(Exit); - if (BISuccsSwapped) - std::swap(Exit, NewHeader); - assert(NewHeader && "Unable to determine new loop header"); - assert(L->contains(NewHeader) && !L->contains(Exit) && - "Unable to determine loop header and exit blocks"); - - // This code assumes that the new header has exactly one predecessor. - // Remove any single-entry PHI nodes in it. - assert(NewHeader->getSinglePredecessor() && - "New header doesn't have one pred!"); - FoldSingleEntryPHINodes(NewHeader); - - // Begin by walking OrigHeader and populating ValueMap with an entry for - // each Instruction. - BasicBlock::iterator I = OrigHeader->begin(), E = OrigHeader->end(); - ValueToValueMapTy ValueMap, ValueMapMSSA; - - // For PHI nodes, the value available in OldPreHeader is just the - // incoming value from OldPreHeader. - for (; PHINode *PN = dyn_cast<PHINode>(I); ++I) - InsertNewValueIntoMap(ValueMap, PN, - PN->getIncomingValueForBlock(OrigPreheader)); - - // For the rest of the instructions, either hoist to the OrigPreheader if - // possible or create a clone in the OldPreHeader if not. - Instruction *LoopEntryBranch = OrigPreheader->getTerminator(); - - // Record all debug records preceding LoopEntryBranch to avoid - // duplication. - using DbgHash = - std::pair<std::pair<hash_code, DILocalVariable *>, DIExpression *>; - auto makeHash = [](const DbgVariableRecord *D) -> DbgHash { - auto VarLocOps = D->location_ops(); - return {{hash_combine_range(VarLocOps), D->getVariable()}, - D->getExpression()}; - }; - - SmallDenseSet<DbgHash, 8> DbgRecords; - // Build DbgVariableRecord hashes for DbgVariableRecords attached to the - // terminator. - for (const DbgVariableRecord &DVR : - filterDbgVars(OrigPreheader->getTerminator()->getDbgRecordRange())) - DbgRecords.insert(makeHash(&DVR)); - - // Remember the local noalias scope declarations in the header. After the - // rotation, they must be duplicated and the scope must be cloned. This - // avoids unwanted interaction across iterations. - SmallVector<NoAliasScopeDeclInst *, 6> NoAliasDeclInstructions; - for (Instruction &I : *OrigHeader) - if (auto *Decl = dyn_cast<NoAliasScopeDeclInst>(&I)) - NoAliasDeclInstructions.push_back(Decl); - - Module *M = OrigHeader->getModule(); - - // Track the next DbgRecord to clone. If we have a sequence where an - // instruction is hoisted instead of being cloned: - // DbgRecord blah - // %foo = add i32 0, 0 - // DbgRecord xyzzy - // %bar = call i32 @foobar() - // where %foo is hoisted, then the DbgRecord "blah" will be seen twice, once - // attached to %foo, then when %foo his hoisted it will "fall down" onto the - // function call: - // DbgRecord blah - // DbgRecord xyzzy - // %bar = call i32 @foobar() - // causing it to appear attached to the call too. - // - // To avoid this, cloneDebugInfoFrom takes an optional "start cloning from - // here" position to account for this behaviour. We point it at any - // DbgRecords on the next instruction, here labelled xyzzy, before we hoist - // %foo. Later, we only only clone DbgRecords from that position (xyzzy) - // onwards, which avoids cloning DbgRecord "blah" multiple times. (Stored as - // a range because it gives us a natural way of testing whether - // there were DbgRecords on the next instruction before we hoisted things). - iterator_range<DbgRecord::self_iterator> NextDbgInsts = - (I != E) ? 
I->getDbgRecordRange() : DbgMarker::getEmptyDbgRecordRange(); - - while (I != E) { - Instruction *Inst = &*I++; - - // If the instruction's operands are invariant and it doesn't read or write - // memory, then it is safe to hoist. Doing this doesn't change the order of - // execution in the preheader, but does prevent the instruction from - // executing in each iteration of the loop. This means it is safe to hoist - // something that might trap, but isn't safe to hoist something that reads - // memory (without proving that the loop doesn't write). - if (L->hasLoopInvariantOperands(Inst) && !Inst->mayReadFromMemory() && - !Inst->mayWriteToMemory() && !Inst->isTerminator() && - !isa<AllocaInst>(Inst) && - // It is not safe to hoist the value of these instructions in - // coroutines, as the addresses of otherwise eligible variables (e.g. - // thread-local variables and errno) may change if the coroutine is - // resumed in a different thread.Therefore, we disable this - // optimization for correctness. However, this may block other correct - // optimizations. - // FIXME: This should be reverted once we have a better model for - // memory access in coroutines. - !Inst->getFunction()->isPresplitCoroutine()) { - - if (!NextDbgInsts.empty()) { - auto DbgValueRange = - LoopEntryBranch->cloneDebugInfoFrom(Inst, NextDbgInsts.begin()); - RemapDbgRecordRange(M, DbgValueRange, ValueMap, - RF_NoModuleLevelChanges | RF_IgnoreMissingLocals); - // Erase anything we've seen before. - for (DbgVariableRecord &DVR : - make_early_inc_range(filterDbgVars(DbgValueRange))) - if (DbgRecords.count(makeHash(&DVR))) - DVR.eraseFromParent(); - } - - NextDbgInsts = I->getDbgRecordRange(); - - Inst->moveBefore(LoopEntryBranch->getIterator()); + // When preparing for LTO, avoid rotating loops with calls that could be + // inlined during the LTO stage. + if (PrepareForLTO && Metrics.NumInlineCandidates > 0) + return Rotated; + } - ++NumInstrsHoisted; - continue; - } + // Now, this loop is suitable for rotation. + BasicBlock *OrigPreheader = L->getLoopPreheader(); + + // If the loop could not be converted to canonical form, it must have an + // indirectbr in it, just give up. + if (!OrigPreheader || !L->hasDedicatedExits()) + return Rotated; + + // Anything ScalarEvolution may know about this loop or the PHI nodes + // in its header will soon be invalidated. We should also invalidate + // all outer loops because insertion and deletion of blocks that happens + // during the rotation may violate invariants related to backedge taken + // infos in them. + if (SE) { + SE->forgetTopmostLoop(L); + // We may hoist some instructions out of the loop. If they were cached + // as "loop variant" or "loop computable", these caches must be dropped. + // We also may fold basic blocks, so cached block dispositions also need + // to be dropped. + SE->forgetBlockAndLoopDispositions(); + } - // Otherwise, create a duplicate of the instruction. - Instruction *C = Inst->clone(); - if (const DebugLoc &DL = C->getDebugLoc()) - mapAtomInstance(DL, ValueMap); + LLVM_DEBUG(dbgs() << "LoopRotation: rotating "; L->dump()); + if (MSSAU && VerifyMemorySSA) + MSSAU->getMemorySSA()->verifyMemorySSA(); - C->insertBefore(LoopEntryBranch->getIterator()); + // Find new Loop header. NewHeader is a Header's one and only successor + // that is inside loop. Header's other successor is outside the + // loop. Otherwise loop is not suitable for rotation.
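For readability, here is the header/exit selection that the comment above describes, pulled out as a standalone sketch (the helper name is hypothetical; the code below does the same thing inline with a swap):

    #include "llvm/Analysis/LoopInfo.h"
    #include "llvm/IR/Instructions.h"
    #include <cassert>
    using namespace llvm;

    // Exactly one successor of the exiting header's conditional branch stays
    // inside the loop; it becomes the new header and the other one the exit.
    static void pickRotationBlocks(const Loop *L, const BranchInst *BI,
                                   BasicBlock *&NewHeader, BasicBlock *&Exit) {
      BasicBlock *S0 = BI->getSuccessor(0), *S1 = BI->getSuccessor(1);
      NewHeader = L->contains(S0) ? S0 : S1;
      Exit = L->contains(S0) ? S1 : S0;
      assert(L->contains(NewHeader) && !L->contains(Exit) &&
             "header needs one in-loop and one out-of-loop successor");
    }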
+ BasicBlock *Exit = BI->getSuccessor(0); + BasicBlock *NewHeader = BI->getSuccessor(1); + bool BISuccsSwapped = L->contains(Exit); + if (BISuccsSwapped) + std::swap(Exit, NewHeader); + assert(NewHeader && "Unable to determine new loop header"); + assert(L->contains(NewHeader) && !L->contains(Exit) && + "Unable to determine loop header and exit blocks"); + + // This code assumes that the new header has exactly one predecessor. + // Remove any single-entry PHI nodes in it. + assert(NewHeader->getSinglePredecessor() && + "New header doesn't have one pred!"); + FoldSingleEntryPHINodes(NewHeader); + + // Begin by walking OrigHeader and populating ValueMap with an entry for + // each Instruction. + BasicBlock::iterator I = OrigHeader->begin(), E = OrigHeader->end(); + ValueToValueMapTy ValueMap, ValueMapMSSA; + + // For PHI nodes, the value available in OldPreHeader is just the + // incoming value from OldPreHeader. + for (; PHINode *PN = dyn_cast<PHINode>(I); ++I) + InsertNewValueIntoMap(ValueMap, PN, + PN->getIncomingValueForBlock(OrigPreheader)); + + // For the rest of the instructions, either hoist to the OrigPreheader if + // possible or create a clone in the OldPreHeader if not. + Instruction *LoopEntryBranch = OrigPreheader->getTerminator(); + + // Record all debug records preceding LoopEntryBranch to avoid + // duplication. + using DbgHash = + std::pair<std::pair<hash_code, DILocalVariable *>, DIExpression *>; + auto makeHash = [](const DbgVariableRecord *D) -> DbgHash { + auto VarLocOps = D->location_ops(); + return {{hash_combine_range(VarLocOps), D->getVariable()}, + D->getExpression()}; + }; - ++NumInstrsDuplicated; + SmallDenseSet<DbgHash, 8> DbgRecords; + // Build DbgVariableRecord hashes for DbgVariableRecords attached to the + // terminator. + for (const DbgVariableRecord &DVR : + filterDbgVars(OrigPreheader->getTerminator()->getDbgRecordRange())) + DbgRecords.insert(makeHash(&DVR)); + + // Remember the local noalias scope declarations in the header. After the + // rotation, they must be duplicated and the scope must be cloned. This + // avoids unwanted interaction across iterations. + SmallVector<NoAliasScopeDeclInst *, 6> NoAliasDeclInstructions; + for (Instruction &I : *OrigHeader) + if (auto *Decl = dyn_cast<NoAliasScopeDeclInst>(&I)) + NoAliasDeclInstructions.push_back(Decl); + + Module *M = OrigHeader->getModule(); + + // Track the next DbgRecord to clone. If we have a sequence where an + // instruction is hoisted instead of being cloned: + // DbgRecord blah + // %foo = add i32 0, 0 + // DbgRecord xyzzy + // %bar = call i32 @foobar() + // where %foo is hoisted, then the DbgRecord "blah" will be seen twice, once + // attached to %foo, then when %foo is hoisted it will "fall down" onto the + // function call: + // DbgRecord blah + // DbgRecord xyzzy + // %bar = call i32 @foobar() + // causing it to appear attached to the call too. + // + // To avoid this, cloneDebugInfoFrom takes an optional "start cloning from + // here" position to account for this behaviour. We point it at any + // DbgRecords on the next instruction, here labelled xyzzy, before we hoist + // %foo. Later, we only clone DbgRecords from that position (xyzzy) + // onwards, which avoids cloning DbgRecord "blah" multiple times. (Stored as + // a range because it gives us a natural way of testing whether + // there were DbgRecords on the next instruction before we hoisted things). + iterator_range<DbgRecord::self_iterator> NextDbgInsts = + (I != E) ?
I->getDbgRecordRange() : DbgMarker::getEmptyDbgRecordRange(); + + while (I != E) { + Instruction *Inst = &*I++; + + // If the instruction's operands are invariant and it doesn't read or write + // memory, then it is safe to hoist. Doing this doesn't change the order of + // execution in the preheader, but does prevent the instruction from + // executing in each iteration of the loop. This means it is safe to hoist + // something that might trap, but isn't safe to hoist something that reads + // memory (without proving that the loop doesn't write). + if (L->hasLoopInvariantOperands(Inst) && !Inst->mayReadFromMemory() && + !Inst->mayWriteToMemory() && !Inst->isTerminator() && + !isa<AllocaInst>(Inst) && + // It is not safe to hoist the value of these instructions in + // coroutines, as the addresses of otherwise eligible variables (e.g. + // thread-local variables and errno) may change if the coroutine is + // resumed in a different thread. Therefore, we disable this + // optimization for correctness. However, this may block other correct + // optimizations. + // FIXME: This should be reverted once we have a better model for + // memory access in coroutines. + !Inst->getFunction()->isPresplitCoroutine()) { if (!NextDbgInsts.empty()) { - auto DbgValueRange = - LoopEntryBranch->cloneDebugInfoFrom(Inst, NextDbgInsts.begin()); - RemapDbgRecordRange(M, DbgValueRange, ValueMap, + auto DbgValueRange = + LoopEntryBranch->cloneDebugInfoFrom(Inst, NextDbgInsts.begin()); + RemapDbgRecordRange(M, DbgValueRange, ValueMap, RF_NoModuleLevelChanges | RF_IgnoreMissingLocals); // Erase anything we've seen before. for (DbgVariableRecord &DVR : - make_early_inc_range(filterDbgVars(DbgValueRange))) + make_early_inc_range(filterDbgVars(DbgValueRange))) if (DbgRecords.count(makeHash(&DVR))) DVR.eraseFromParent(); } - // Eagerly remap the operands of the instruction. - RemapInstruction(C, ValueMap, - RF_NoModuleLevelChanges | RF_IgnoreMissingLocals); - - // With the operands remapped, see if the instruction constant folds or is - // otherwise simplifyable. This commonly occurs because the entry from PHI - // nodes allows icmps and other instructions to fold. - Value *V = simplifyInstruction(C, SQ); - if (V && LI->replacementPreservesLCSSAForm(C, V)) { - // If so, then delete the temporary instruction and stick the folded value - // in the map. - InsertNewValueIntoMap(ValueMap, Inst, V); - if (!C->mayHaveSideEffects()) { - C->eraseFromParent(); - C = nullptr; - } - } else { - InsertNewValueIntoMap(ValueMap, Inst, C); - } - if (C) { - // Otherwise, stick the new instruction into the new block! - C->setName(Inst->getName()); - - if (auto *II = dyn_cast<AssumeInst>(C)) - AC->registerAssumption(II); - // MemorySSA cares whether the cloned instruction was inserted or not, and - // not whether it can be remapped to a simplified value. - if (MSSAU) - InsertNewValueIntoMap(ValueMapMSSA, Inst, C); - } - } + NextDbgInsts = I->getDbgRecordRange(); - if (!NoAliasDeclInstructions.empty()) { - // There are noalias scope declarations: - // (general): - // Original: OrigPre { OrigHeader NewHeader ... Latch } - // after: (OrigPre+OrigHeader') { NewHeader ... Latch OrigHeader } - // - // with D: llvm.experimental.noalias.scope.decl, - // U: !noalias or !alias.scope depending on D - // ... { D U1 U2 } can transform into: - // (0) : ... { D U1 U2 } // no relevant rotation for this part - // (1) : ... D' { U1 U2 D } // D is part of OrigHeader - // (2) : ...
D' U1' { U2 D U1 } // D, U1 are part of OrigHeader - // - // We now want to transform: - // (1) -> : ... D' { D U1 U2 D'' } - // (2) -> : ... D' U1' { D U2 D'' U1'' } - // D: original llvm.experimental.noalias.scope.decl - // D', U1': duplicate with replaced scopes - // D'', U1'': different duplicate with replaced scopes - // This ensures a safe fallback to 'may_alias' introduced by the rotate, - // as U1'' and U1' scopes will not be compatible wrt to the local restrict - - // Clone the llvm.experimental.noalias.decl again for the NewHeader. - BasicBlock::iterator NewHeaderInsertionPoint = - NewHeader->getFirstNonPHIIt(); - for (NoAliasScopeDeclInst *NAD : NoAliasDeclInstructions) { - LLVM_DEBUG(dbgs() << " Cloning llvm.experimental.noalias.scope.decl:" - << *NAD << "\n"); - Instruction *NewNAD = NAD->clone(); - NewNAD->insertBefore(*NewHeader, NewHeaderInsertionPoint); - } + Inst->moveBefore(LoopEntryBranch->getIterator()); - // Scopes must now be duplicated, once for OrigHeader and once for - // OrigPreHeader'. - { - auto &Context = NewHeader->getContext(); - - SmallVector<MDNode *, 8> NoAliasDeclScopes; - for (NoAliasScopeDeclInst *NAD : NoAliasDeclInstructions) - NoAliasDeclScopes.push_back(NAD->getScopeList()); - - LLVM_DEBUG(dbgs() << " Updating OrigHeader scopes\n"); - cloneAndAdaptNoAliasScopes(NoAliasDeclScopes, {OrigHeader}, Context, - "h.rot"); - LLVM_DEBUG(OrigHeader->dump()); - - // Keep the compile time impact low by only adapting the inserted block - // of instructions in the OrigPreHeader. This might result in slightly - // more aliasing between these instructions and those that were already - // present, but it will be much faster when the original PreHeader is - // large. - LLVM_DEBUG(dbgs() << " Updating part of OrigPreheader scopes\n"); - auto *FirstDecl = - cast<Instruction>(ValueMap[*NoAliasDeclInstructions.begin()]); - auto *LastInst = &OrigPreheader->back(); - cloneAndAdaptNoAliasScopes(NoAliasDeclScopes, FirstDecl, LastInst, - Context, "pre.rot"); - LLVM_DEBUG(OrigPreheader->dump()); - - LLVM_DEBUG(dbgs() << " Updated NewHeader:\n"); - LLVM_DEBUG(NewHeader->dump()); - } + ++NumInstrsHoisted; + continue; } - // Along with all the other instructions, we just cloned OrigHeader's - // terminator into OrigPreHeader. Fix up the PHI nodes in each of OrigHeader's - // successors by duplicating their incoming values for OrigHeader. - for (BasicBlock *SuccBB : successors(OrigHeader)) - for (BasicBlock::iterator BI = SuccBB->begin(); - PHINode *PN = dyn_cast<PHINode>(BI); ++BI) - PN->addIncoming(PN->getIncomingValueForBlock(OrigHeader), OrigPreheader); - - // Now that OrigPreHeader has a clone of OrigHeader's terminator, remove - // OrigPreHeader's old terminator (the original branch into the loop), and - // remove the corresponding incoming values from the PHI nodes in OrigHeader. - LoopEntryBranch->eraseFromParent(); - OrigPreheader->flushTerminatorDbgRecords(); - - // Update MemorySSA before the rewrite call below changes the 1:1 - // instruction:cloned_instruction_or_value mapping. - if (MSSAU) { - InsertNewValueIntoMap(ValueMapMSSA, OrigHeader, OrigPreheader); - MSSAU->updateForClonedBlockIntoPred(OrigHeader, OrigPreheader, - ValueMapMSSA); - } + // Otherwise, create a duplicate of the instruction. 
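The hoist-versus-clone decision above hinges on one compound condition; restated here as a standalone predicate (the helper name is hypothetical, the conditions are copied from the code):

    #include "llvm/Analysis/LoopInfo.h"
    #include "llvm/IR/Instructions.h"
    using namespace llvm;

    // An instruction may be moved (rather than cloned) into the preheader only
    // if it is loop-invariant and side-effect free, is not a terminator or an
    // alloca, and we are not in a pre-split coroutine, where the addresses of
    // thread-local variables or errno may change between resumptions.
    static bool canHoistIntoPreheader(const Loop *L, const Instruction *Inst) {
      return L->hasLoopInvariantOperands(Inst) && !Inst->mayReadFromMemory() &&
             !Inst->mayWriteToMemory() && !Inst->isTerminator() &&
             !isa<AllocaInst>(Inst) &&
             !Inst->getFunction()->isPresplitCoroutine();
    }

Everything failing this test is cloned instead; the mayReadFromMemory check matters because a load could only be hoisted after proving the loop never writes the location, and no such proof is attempted here.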
+ Instruction *C = Inst->clone(); + if (const DebugLoc &DL = C->getDebugLoc()) + mapAtomInstance(DL, ValueMap); - SmallVector<PHINode*, 2> InsertedPHIs; - // If there were any uses of instructions in the duplicated block outside the - // loop, update them, inserting PHI nodes as required - RewriteUsesOfClonedInstructions(OrigHeader, OrigPreheader, ValueMap, SE, - &InsertedPHIs); - - // Attach debug records to the new phis if that phi uses a value that - // previously had debug metadata attached. This keeps the debug info - // up-to-date in the loop body. - if (!InsertedPHIs.empty()) - insertDebugValuesForPHIs(OrigHeader, InsertedPHIs); - - // NewHeader is now the header of the loop. - L->moveToHeader(NewHeader); - assert(L->getHeader() == NewHeader && "Latch block is our new header"); - - // Inform DT about changes to the CFG. - if (DT) { - // The OrigPreheader branches to the NewHeader and Exit now. Then, inform - // the DT about the removed edge to the OrigHeader (that got removed). - SmallVector<DominatorTree::UpdateType, 3> Updates = { - {DominatorTree::Insert, OrigPreheader, Exit}, - {DominatorTree::Insert, OrigPreheader, NewHeader}, - {DominatorTree::Delete, OrigPreheader, OrigHeader}}; - - if (MSSAU) { - MSSAU->applyUpdates(Updates, *DT, /*UpdateDT=*/true); - if (VerifyMemorySSA) - MSSAU->getMemorySSA()->verifyMemorySSA(); - } else { - DT->applyUpdates(Updates); - } - } + C->insertBefore(LoopEntryBranch->getIterator()); - // At this point, we've finished our major CFG changes. As part of cloning - // the loop into the preheader we've simplified instructions and the - // duplicated conditional branch may now be branching on a constant. If it is - // branching on a constant and if that constant means that we enter the loop, - // then we fold away the cond branch to an uncond branch. This simplifies the - // loop in cases important for nested loops, and it also means we don't have - // to split as many edges. - BranchInst *PHBI = cast<BranchInst>(OrigPreheader->getTerminator()); - assert(PHBI->isConditional() && "Should be clone of BI condbr!"); - const Value *Cond = PHBI->getCondition(); - const bool HasConditionalPreHeader = - !isa<ConstantInt>(Cond) || - PHBI->getSuccessor(cast<ConstantInt>(Cond)->isZero()) != NewHeader; - - updateBranchWeights(*PHBI, *BI, HasConditionalPreHeader, BISuccsSwapped); + ++NumInstrsDuplicated; - if (HasConditionalPreHeader) { - // The conditional branch can't be folded, handle the general case. - // Split edges as necessary to preserve LoopSimplify form. - - // Right now OrigPreHeader has two successors, NewHeader and ExitBlock, and - // thus is not a preheader anymore. - // Split the edge to form a real preheader. - BasicBlock *NewPH = SplitCriticalEdge( - OrigPreheader, NewHeader, - CriticalEdgeSplittingOptions(DT, LI, MSSAU).setPreserveLCSSA()); - NewPH->setName(NewHeader->getName() + ".lr.ph"); - - // Preserve canonical loop form, which means that 'Exit' should have only - // one predecessor. Note that Exit could be an exit block for multiple - // nested loops, causing both of the edges to now be critical and need to - // be split. - SmallVector<BasicBlock *, 4> ExitPreds(predecessors(Exit)); - bool SplitLatchEdge = false; - for (BasicBlock *ExitPred : ExitPreds) { - // We only need to split loop exit edges. 
- Loop *PredLoop = LI->getLoopFor(ExitPred); - if (!PredLoop || PredLoop->contains(Exit) || - isa<IndirectBrInst>(ExitPred->getTerminator())) - continue; - SplitLatchEdge |= L->getLoopLatch() == ExitPred; - BasicBlock *ExitSplit = SplitCriticalEdge( - ExitPred, Exit, - CriticalEdgeSplittingOptions(DT, LI, MSSAU).setPreserveLCSSA()); - ExitSplit->moveBefore(Exit); + if (!NextDbgInsts.empty()) { + auto Range = C->cloneDebugInfoFrom(Inst, NextDbgInsts.begin()); + RemapDbgRecordRange(M, Range, ValueMap, + RF_NoModuleLevelChanges | RF_IgnoreMissingLocals); + NextDbgInsts = DbgMarker::getEmptyDbgRecordRange(); + // Erase anything we've seen before. + for (DbgVariableRecord &DVR : make_early_inc_range(filterDbgVars(Range))) + if (DbgRecords.count(makeHash(&DVR))) + DVR.eraseFromParent(); + } + + // Eagerly remap the operands of the instruction. + RemapInstruction(C, ValueMap, + RF_NoModuleLevelChanges | RF_IgnoreMissingLocals); + + // With the operands remapped, see if the instruction constant folds or is + // otherwise simplifiable. This commonly occurs because the entry from PHI + // nodes allows icmps and other instructions to fold. + Value *V = simplifyInstruction(C, SQ); + if (V && LI->replacementPreservesLCSSAForm(C, V)) { + // If so, then delete the temporary instruction and stick the folded value + // in the map. + InsertNewValueIntoMap(ValueMap, Inst, V); + if (!C->mayHaveSideEffects()) { + C->eraseFromParent(); + C = nullptr; } - assert(SplitLatchEdge && - "Despite splitting all preds, failed to split latch exit?"); - (void)SplitLatchEdge; } else { - // We can fold the conditional branch in the preheader, this makes things - // simpler. The first step is to remove the extra edge to the Exit block. - Exit->removePredecessor(OrigPreheader, true /*preserve LCSSA*/); - BranchInst *NewBI = BranchInst::Create(NewHeader, PHBI->getIterator()); - NewBI->setDebugLoc(PHBI->getDebugLoc()); - PHBI->eraseFromParent(); + InsertNewValueIntoMap(ValueMap, Inst, C); + } + if (C) { + // Otherwise, stick the new instruction into the new block! + C->setName(Inst->getName()); + + if (auto *II = dyn_cast<AssumeInst>(C)) + AC->registerAssumption(II); + // MemorySSA cares whether the cloned instruction was inserted or not, and + // not whether it can be remapped to a simplified value. + if (MSSAU) + InsertNewValueIntoMap(ValueMapMSSA, Inst, C); + } + } - // With our CFG finalized, update DomTree if it is available. - if (DT) DT->deleteEdge(OrigPreheader, Exit); + if (!NoAliasDeclInstructions.empty()) { + // There are noalias scope declarations: + // (general): + // Original: OrigPre { OrigHeader NewHeader ... Latch } + // after: (OrigPre+OrigHeader') { NewHeader ... Latch OrigHeader } + // + // with D: llvm.experimental.noalias.scope.decl, + // U: !noalias or !alias.scope depending on D + // ... { D U1 U2 } can transform into: + // (0) : ... { D U1 U2 } // no relevant rotation for this part + // (1) : ... D' { U1 U2 D } // D is part of OrigHeader + // (2) : ... D' U1' { U2 D U1 } // D, U1 are part of OrigHeader + // + // We now want to transform: + // (1) -> : ... D' { D U1 U2 D'' } + // (2) -> : ... D' U1' { D U2 D'' U1'' } + // D: original llvm.experimental.noalias.scope.decl + // D', U1': duplicate with replaced scopes + // D'', U1'': different duplicate with replaced scopes + // This ensures a safe fallback to 'may_alias' introduced by the rotate, + // as U1'' and U1' scopes will not be compatible w.r.t. the local restrict + + // Clone the llvm.experimental.noalias.decl again for the NewHeader.
+ BasicBlock::iterator NewHeaderInsertionPoint = + NewHeader->getFirstNonPHIIt(); + for (NoAliasScopeDeclInst *NAD : NoAliasDeclInstructions) { + LLVM_DEBUG(dbgs() << " Cloning llvm.experimental.noalias.scope.decl:" + << *NAD << "\n"); + Instruction *NewNAD = NAD->clone(); + NewNAD->insertBefore(*NewHeader, NewHeaderInsertionPoint); + } - // Update MSSA too, if available. - if (MSSAU) - MSSAU->removeEdge(OrigPreheader, Exit); + // Scopes must now be duplicated, once for OrigHeader and once for + // OrigPreHeader'. + { + auto &Context = NewHeader->getContext(); + + SmallVector<MDNode *, 8> NoAliasDeclScopes; + for (NoAliasScopeDeclInst *NAD : NoAliasDeclInstructions) + NoAliasDeclScopes.push_back(NAD->getScopeList()); + + LLVM_DEBUG(dbgs() << " Updating OrigHeader scopes\n"); + cloneAndAdaptNoAliasScopes(NoAliasDeclScopes, {OrigHeader}, Context, + "h.rot"); + LLVM_DEBUG(OrigHeader->dump()); + + // Keep the compile time impact low by only adapting the inserted block + // of instructions in the OrigPreHeader. This might result in slightly + // more aliasing between these instructions and those that were already + // present, but it will be much faster when the original PreHeader is + // large. + LLVM_DEBUG(dbgs() << " Updating part of OrigPreheader scopes\n"); + auto *FirstDecl = + cast<Instruction>(ValueMap[*NoAliasDeclInstructions.begin()]); + auto *LastInst = &OrigPreheader->back(); + cloneAndAdaptNoAliasScopes(NoAliasDeclScopes, FirstDecl, LastInst, + Context, "pre.rot"); + LLVM_DEBUG(OrigPreheader->dump()); + + LLVM_DEBUG(dbgs() << " Updated NewHeader:\n"); + LLVM_DEBUG(NewHeader->dump()); } + } - assert(L->getLoopPreheader() && "Invalid loop preheader after loop rotation"); - assert(L->getLoopLatch() && "Invalid loop latch after loop rotation"); + // Along with all the other instructions, we just cloned OrigHeader's + // terminator into OrigPreHeader. Fix up the PHI nodes in each of OrigHeader's + // successors by duplicating their incoming values for OrigHeader. + for (BasicBlock *SuccBB : successors(OrigHeader)) + for (BasicBlock::iterator BI = SuccBB->begin(); + PHINode *PN = dyn_cast<PHINode>(BI); ++BI) + PN->addIncoming(PN->getIncomingValueForBlock(OrigHeader), OrigPreheader); + + // Now that OrigPreHeader has a clone of OrigHeader's terminator, remove + // OrigPreHeader's old terminator (the original branch into the loop), and + // remove the corresponding incoming values from the PHI nodes in OrigHeader. + LoopEntryBranch->eraseFromParent(); + OrigPreheader->flushTerminatorDbgRecords(); + + // Update MemorySSA before the rewrite call below changes the 1:1 + // instruction:cloned_instruction_or_value mapping. + if (MSSAU) { + InsertNewValueIntoMap(ValueMapMSSA, OrigHeader, OrigPreheader); + MSSAU->updateForClonedBlockIntoPred(OrigHeader, OrigPreheader, + ValueMapMSSA); + } - if (MSSAU && VerifyMemorySSA) - MSSAU->getMemorySSA()->verifyMemorySSA(); + SmallVector<PHINode *, 2> InsertedPHIs; + // If there were any uses of instructions in the duplicated block outside the + // loop, update them, inserting PHI nodes as required + RewriteUsesOfClonedInstructions(OrigHeader, OrigPreheader, ValueMap, SE, + &InsertedPHIs); + + // Attach debug records to the new phis if that phi uses a value that + // previously had debug metadata attached. This keeps the debug info + // up-to-date in the loop body. + if (!InsertedPHIs.empty()) + insertDebugValuesForPHIs(OrigHeader, InsertedPHIs); + + // NewHeader is now the header of the loop. 
+ L->moveToHeader(NewHeader); + assert(L->getHeader() == NewHeader && "Latch block is our new header"); + + // Inform DT about changes to the CFG. + if (DT) { + // The OrigPreheader branches to the NewHeader and Exit now. Then, inform + // the DT about the removed edge to the OrigHeader (that got removed). + SmallVector<DominatorTree::UpdateType, 3> Updates = { + {DominatorTree::Insert, OrigPreheader, Exit}, + {DominatorTree::Insert, OrigPreheader, NewHeader}, + {DominatorTree::Delete, OrigPreheader, OrigHeader}}; - // Now that the CFG and DomTree are in a consistent state again, try to merge - // the OrigHeader block into OrigLatch. This will succeed if they are - // connected by an unconditional branch. This is just a cleanup so the - // emitted code isn't too gross in this common case. - DomTreeUpdater DTU(DT, DomTreeUpdater::UpdateStrategy::Eager); - BasicBlock *PredBB = OrigHeader->getUniquePredecessor(); - bool DidMerge = MergeBlockIntoPredecessor(OrigHeader, &DTU, LI, MSSAU); - if (DidMerge) - RemoveRedundantDbgInstrs(PredBB); + if (MSSAU) { + MSSAU->applyUpdates(Updates, *DT, /*UpdateDT=*/true); + if (VerifyMemorySSA) + MSSAU->getMemorySSA()->verifyMemorySSA(); + } else { + DT->applyUpdates(Updates); + } + } - if (MSSAU && VerifyMemorySSA) - MSSAU->getMemorySSA()->verifyMemorySSA(); + // At this point, we've finished our major CFG changes. As part of cloning + // the loop into the preheader we've simplified instructions and the + // duplicated conditional branch may now be branching on a constant. If it is + // branching on a constant and if that constant means that we enter the loop, + // then we fold away the cond branch to an uncond branch. This simplifies the + // loop in cases important for nested loops, and it also means we don't have + // to split as many edges. + BranchInst *PHBI = cast<BranchInst>(OrigPreheader->getTerminator()); + assert(PHBI->isConditional() && "Should be clone of BI condbr!"); + const Value *Cond = PHBI->getCondition(); + const bool HasConditionalPreHeader = + !isa<ConstantInt>(Cond) || + PHBI->getSuccessor(cast<ConstantInt>(Cond)->isZero()) != NewHeader; + + updateBranchWeights(*PHBI, *BI, HasConditionalPreHeader, BISuccsSwapped); - LLVM_DEBUG(dbgs() << "LoopRotation: into "; L->dump()); + if (HasConditionalPreHeader) { + // The conditional branch can't be folded, handle the general case. + // Split edges as necessary to preserve LoopSimplify form. + + // Right now OrigPreHeader has two successors, NewHeader and ExitBlock, and + // thus is not a preheader anymore. + // Split the edge to form a real preheader. + BasicBlock *NewPH = SplitCriticalEdge( + OrigPreheader, NewHeader, + CriticalEdgeSplittingOptions(DT, LI, MSSAU).setPreserveLCSSA()); + NewPH->setName(NewHeader->getName() + ".lr.ph"); + + // Preserve canonical loop form, which means that 'Exit' should have only + // one predecessor. Note that Exit could be an exit block for multiple + // nested loops, causing both of the edges to now be critical and need to + // be split. + SmallVector<BasicBlock *, 4> ExitPreds(predecessors(Exit)); + bool SplitLatchEdge = false; + for (BasicBlock *ExitPred : ExitPreds) { + // We only need to split loop exit edges. 
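The loop below visits every predecessor of Exit but splits only genuine loop-exit edges. The filter, stated as a predicate (hypothetical helper; the in-tree loop expresses it as a continue):

    #include "llvm/Analysis/LoopInfo.h"
    #include "llvm/IR/Instructions.h"
    using namespace llvm;

    // An ExitPred -> Exit edge needs splitting only if ExitPred sits inside a
    // loop that does not already contain Exit, and its terminator can be
    // split (an indirectbr cannot).
    static bool needsExitEdgeSplit(LoopInfo &LI, BasicBlock *ExitPred,
                                   BasicBlock *Exit) {
      Loop *PredLoop = LI.getLoopFor(ExitPred);
      return PredLoop && !PredLoop->contains(Exit) &&
             !isa<IndirectBrInst>(ExitPred->getTerminator());
    }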
+ Loop *PredLoop = LI->getLoopFor(ExitPred); + if (!PredLoop || PredLoop->contains(Exit) || + isa<IndirectBrInst>(ExitPred->getTerminator())) + continue; + SplitLatchEdge |= L->getLoopLatch() == ExitPred; + BasicBlock *ExitSplit = SplitCriticalEdge( + ExitPred, Exit, + CriticalEdgeSplittingOptions(DT, LI, MSSAU).setPreserveLCSSA()); + ExitSplit->moveBefore(Exit); + } + assert(SplitLatchEdge && + "Despite splitting all preds, failed to split latch exit?"); + (void)SplitLatchEdge; + } else { + // We can fold the conditional branch in the preheader, this makes things + // simpler. The first step is to remove the extra edge to the Exit block. + Exit->removePredecessor(OrigPreheader, true /*preserve LCSSA*/); + BranchInst *NewBI = BranchInst::Create(NewHeader, PHBI->getIterator()); + NewBI->setDebugLoc(PHBI->getDebugLoc()); + PHBI->eraseFromParent(); + + // With our CFG finalized, update DomTree if it is available. + if (DT) + DT->deleteEdge(OrigPreheader, Exit); + + // Update MSSA too, if available. + if (MSSAU) + MSSAU->removeEdge(OrigPreheader, Exit); + } - ++NumRotated; + assert(L->getLoopPreheader() && "Invalid loop preheader after loop rotation"); + assert(L->getLoopLatch() && "Invalid loop latch after loop rotation"); - Rotated = true; - SimplifiedLatch = false; + if (MSSAU && VerifyMemorySSA) + MSSAU->getMemorySSA()->verifyMemorySSA(); + + // Now that the CFG and DomTree are in a consistent state again, try to merge + // the OrigHeader block into OrigLatch. This will succeed if they are + // connected by an unconditional branch. This is just a cleanup so the + // emitted code isn't too gross in this common case. + DomTreeUpdater DTU(DT, DomTreeUpdater::UpdateStrategy::Eager); + BasicBlock *PredBB = OrigHeader->getUniquePredecessor(); + bool DidMerge = MergeBlockIntoPredecessor(OrigHeader, &DTU, LI, MSSAU); + if (DidMerge) + RemoveRedundantDbgInstrs(PredBB); - // Check that new latch is a deoptimizing exit and then repeat rotation if possible. - // Deoptimizing latch exit is not a generally typical case, so we just loop over. - // TODO: if it becomes a performance bottleneck extend rotation algorithm - // to handle multiple rotations in one go. - } while (MultiRotate && canRotateDeoptimizingLatchExit(L)); + if (MSSAU && VerifyMemorySSA) + MSSAU->getMemorySSA()->verifyMemorySSA(); + LLVM_DEBUG(dbgs() << "LoopRotation: into "; L->dump()); return true; } diff --git a/llvm/lib/Transforms/Utils/SimplifyCFG.cpp b/llvm/lib/Transforms/Utils/SimplifyCFG.cpp index b8cfe3a..155fcc5 100644 --- a/llvm/lib/Transforms/Utils/SimplifyCFG.cpp +++ b/llvm/lib/Transforms/Utils/SimplifyCFG.cpp @@ -6642,6 +6642,9 @@ public: /// Return true if the replacement is a lookup table. bool isLookupTable(); + /// Return true if the replacement is a bit map. + bool isBitMap(); + private: // Depending on the switch, there are different alternatives. enum { @@ -6932,6 +6935,8 @@ Constant *SwitchReplacement::getDefaultValue() { return DefaultValue; } bool SwitchReplacement::isLookupTable() { return Kind == LookupTableKind; } +bool SwitchReplacement::isBitMap() { return Kind == BitMapKind; } + static bool isSwitchDense(uint64_t NumCases, uint64_t CaseRange) { // 40% is the default density for building a jump table in optsize/minsize // mode. See also TargetLoweringBase::isSuitableForJumpTable(), which this @@ -7097,7 +7102,8 @@ static void reuseTableCompare( /// lookup tables. 
static bool simplifySwitchLookup(SwitchInst *SI, IRBuilder<> &Builder, DomTreeUpdater *DTU, const DataLayout &DL, - const TargetTransformInfo &TTI) { + const TargetTransformInfo &TTI, + bool ConvertSwitchToLookupTable) { assert(SI->getNumCases() > 1 && "Degenerate switch?"); BasicBlock *BB = SI->getParent(); @@ -7262,6 +7268,8 @@ static bool simplifySwitchLookup(SwitchInst *SI, IRBuilder<> &Builder, bool AnyLookupTables = any_of( PhiToReplacementMap, [](auto &KV) { return KV.second.isLookupTable(); }); + bool AnyBitMaps = any_of(PhiToReplacementMap, + [](auto &KV) { return KV.second.isBitMap(); }); // A few conditions prevent the generation of lookup tables: // 1. The target does not support lookup tables. @@ -7274,6 +7282,12 @@ static bool simplifySwitchLookup(SwitchInst *SI, IRBuilder<> &Builder, Fn->getFnAttribute("no-jump-tables").getValueAsBool())) return false; + // In the early optimization pipeline, disable formation of lookup tables, + // bit maps and mask checks, as they may inhibit further optimization. + if (!ConvertSwitchToLookupTable && + (AnyLookupTables || AnyBitMaps || NeedMask)) + return false; + Builder.SetInsertPoint(SI); // TableIndex is the switch condition - TableIndexOffset if we don't // use the condition directly @@ -7929,14 +7943,13 @@ bool SimplifyCFGOpt::simplifySwitch(SwitchInst *SI, IRBuilder<> &Builder) { if (Options.ForwardSwitchCondToPhi && forwardSwitchConditionToPHI(SI)) return requestResimplify(); - // The conversion from switch to lookup tables results in difficult-to-analyze - // code and makes pruning branches much harder. This is a problem if the - // switch expression itself can still be restricted as a result of inlining or - // CVP. Therefore, only apply this transformation during late stages of the - // optimisation pipeline. - if (Options.ConvertSwitchToLookupTable && - simplifySwitchLookup(SI, Builder, DTU, DL, TTI)) - return requestResimplify(); + // The conversion of switches to arithmetic or lookup tables is disabled in + // the early optimization pipeline, as it may lose information or make the + // resulting code harder to analyze.
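Restating that gating in isolation (names here are illustrative; the real guard is the early-return added to simplifySwitchLookup above): early in the pipeline the shared switch-replacement analysis may still run, but only replacements that keep the switch analyzable are committed.

    // Sketch of the policy: arithmetic replacements are always allowed, while
    // lookup tables, bit maps and mask checks wait for the late SimplifyCFG
    // run where ConvertSwitchToLookupTable is enabled.
    static bool mayCommitReplacement(bool ConvertSwitchToLookupTable,
                                     bool NeedsTableBitMapOrMask) {
      return ConvertSwitchToLookupTable || !NeedsTableBitMapOrMask;
    }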
+ if (Options.ConvertSwitchToArithmetic || Options.ConvertSwitchToLookupTable) + if (simplifySwitchLookup(SI, Builder, DTU, DL, TTI, + Options.ConvertSwitchToLookupTable)) + return requestResimplify(); if (simplifySwitchOfPowersOfTwo(SI, Builder, DL, TTI)) return requestResimplify(); diff --git a/llvm/lib/Transforms/Vectorize/VPlanRecipes.cpp b/llvm/lib/Transforms/Vectorize/VPlanRecipes.cpp index 3a9770c..600ff8a 100644 --- a/llvm/lib/Transforms/Vectorize/VPlanRecipes.cpp +++ b/llvm/lib/Transforms/Vectorize/VPlanRecipes.cpp @@ -3141,7 +3141,7 @@ static bool isUsedByLoadStoreAddress(const VPUser *V) { while (!WorkList.empty()) { auto *Cur = dyn_cast<VPSingleDefRecipe>(WorkList.pop_back_val()); - if (!Cur || !Seen.insert(Cur).second) + if (!Cur || !Seen.insert(Cur).second || isa<VPBlendRecipe>(Cur)) continue; for (VPUser *U : Cur->users()) { diff --git a/llvm/test/Analysis/ScalarEvolution/trip-count-minmax.ll b/llvm/test/Analysis/ScalarEvolution/trip-count-minmax.ll index 8d091a0..d380104 100644 --- a/llvm/test/Analysis/ScalarEvolution/trip-count-minmax.ll +++ b/llvm/test/Analysis/ScalarEvolution/trip-count-minmax.ll @@ -61,7 +61,7 @@ define void @umin(i32 noundef %a, i32 noundef %b) { ; CHECK-NEXT: Loop %for.body: backedge-taken count is (-1 + ((2 * %a) umin (4 * %b))) ; CHECK-NEXT: Loop %for.body: constant max backedge-taken count is i32 2147483646 ; CHECK-NEXT: Loop %for.body: symbolic max backedge-taken count is (-1 + ((2 * %a) umin (4 * %b))) -; CHECK-NEXT: Loop %for.body: Trip multiple is 1 +; CHECK-NEXT: Loop %for.body: Trip multiple is 2 ; ; void umin(unsigned a, unsigned b) { ; a *= 2; @@ -157,7 +157,7 @@ define void @smin(i32 noundef %a, i32 noundef %b) { ; CHECK-NEXT: Loop %for.body: backedge-taken count is (-1 + ((2 * %a)<nsw> smin (4 * %b)<nsw>)) ; CHECK-NEXT: Loop %for.body: constant max backedge-taken count is i32 2147483646 ; CHECK-NEXT: Loop %for.body: symbolic max backedge-taken count is (-1 + ((2 * %a)<nsw> smin (4 * %b)<nsw>)) -; CHECK-NEXT: Loop %for.body: Trip multiple is 1 +; CHECK-NEXT: Loop %for.body: Trip multiple is 2 ; ; void smin(signed a, signed b) { ; a *= 2; diff --git a/llvm/test/Analysis/ScalarEvolution/trip-multiple-guard-info.ll b/llvm/test/Analysis/ScalarEvolution/trip-multiple-guard-info.ll index b1fe7b1..7ba422d 100644 --- a/llvm/test/Analysis/ScalarEvolution/trip-multiple-guard-info.ll +++ b/llvm/test/Analysis/ScalarEvolution/trip-multiple-guard-info.ll @@ -615,22 +615,14 @@ define void @test_ptrs_aligned_by_4_via_assumption(ptr %start, ptr %end) { ; CHECK-LABEL: 'test_ptrs_aligned_by_4_via_assumption' ; CHECK-NEXT: Classifying expressions for: @test_ptrs_aligned_by_4_via_assumption ; CHECK-NEXT: %iv = phi ptr [ %start, %entry ], [ %iv.next, %loop ] -; CHECK-NEXT: --> {%start,+,4}<%loop> U: full-set S: full-set Exits: <<Unknown>> LoopDispositions: { %loop: Computable } +; CHECK-NEXT: --> {%start,+,4}<%loop> U: full-set S: full-set Exits: ((4 * ((-4 + (-1 * (ptrtoint ptr %start to i64)) + (ptrtoint ptr %end to i64)) /u 4))<nuw> + %start) LoopDispositions: { %loop: Computable } ; CHECK-NEXT: %iv.next = getelementptr i8, ptr %iv, i64 4 -; CHECK-NEXT: --> {(4 + %start),+,4}<%loop> U: full-set S: full-set Exits: <<Unknown>> LoopDispositions: { %loop: Computable } +; CHECK-NEXT: --> {(4 + %start),+,4}<%loop> U: full-set S: full-set Exits: (4 + (4 * ((-4 + (-1 * (ptrtoint ptr %start to i64)) + (ptrtoint ptr %end to i64)) /u 4))<nuw> + %start) LoopDispositions: { %loop: Computable } ; CHECK-NEXT: Determining loop execution counts for: 
@test_ptrs_aligned_by_4_via_assumption -; CHECK-NEXT: Loop %loop: Unpredictable backedge-taken count. -; CHECK-NEXT: Loop %loop: Unpredictable constant max backedge-taken count. -; CHECK-NEXT: Loop %loop: Unpredictable symbolic max backedge-taken count. -; CHECK-NEXT: Loop %loop: Predicated backedge-taken count is ((-4 + (-1 * (ptrtoint ptr %start to i64)) + (ptrtoint ptr %end to i64)) /u 4) -; CHECK-NEXT: Predicates: -; CHECK-NEXT: Equal predicate: (zext i2 ((trunc i64 (ptrtoint ptr %end to i64) to i2) + (-1 * (trunc i64 (ptrtoint ptr %start to i64) to i2))) to i64) == 0 -; CHECK-NEXT: Loop %loop: Predicated constant max backedge-taken count is i64 4611686018427387903 -; CHECK-NEXT: Predicates: -; CHECK-NEXT: Equal predicate: (zext i2 ((trunc i64 (ptrtoint ptr %end to i64) to i2) + (-1 * (trunc i64 (ptrtoint ptr %start to i64) to i2))) to i64) == 0 -; CHECK-NEXT: Loop %loop: Predicated symbolic max backedge-taken count is ((-4 + (-1 * (ptrtoint ptr %start to i64)) + (ptrtoint ptr %end to i64)) /u 4) -; CHECK-NEXT: Predicates: -; CHECK-NEXT: Equal predicate: (zext i2 ((trunc i64 (ptrtoint ptr %end to i64) to i2) + (-1 * (trunc i64 (ptrtoint ptr %start to i64) to i2))) to i64) == 0 +; CHECK-NEXT: Loop %loop: backedge-taken count is ((-4 + (-1 * (ptrtoint ptr %start to i64)) + (ptrtoint ptr %end to i64)) /u 4) +; CHECK-NEXT: Loop %loop: constant max backedge-taken count is i64 4611686018427387903 +; CHECK-NEXT: Loop %loop: symbolic max backedge-taken count is ((-4 + (-1 * (ptrtoint ptr %start to i64)) + (ptrtoint ptr %end to i64)) /u 4) +; CHECK-NEXT: Loop %loop: Trip multiple is 1 ; entry: call void @llvm.assume(i1 true) [ "align"(ptr %start, i64 4) ] @@ -652,22 +644,14 @@ define void @test_ptrs_aligned_by_8_via_assumption(ptr %start, ptr %end) { ; CHECK-LABEL: 'test_ptrs_aligned_by_8_via_assumption' ; CHECK-NEXT: Classifying expressions for: @test_ptrs_aligned_by_8_via_assumption ; CHECK-NEXT: %iv = phi ptr [ %start, %entry ], [ %iv.next, %loop ] -; CHECK-NEXT: --> {%start,+,4}<%loop> U: full-set S: full-set Exits: <<Unknown>> LoopDispositions: { %loop: Computable } +; CHECK-NEXT: --> {%start,+,4}<%loop> U: full-set S: full-set Exits: ((4 * ((-4 + (-1 * (ptrtoint ptr %start to i64)) + (ptrtoint ptr %end to i64)) /u 4))<nuw> + %start) LoopDispositions: { %loop: Computable } ; CHECK-NEXT: %iv.next = getelementptr i8, ptr %iv, i64 4 -; CHECK-NEXT: --> {(4 + %start),+,4}<%loop> U: full-set S: full-set Exits: <<Unknown>> LoopDispositions: { %loop: Computable } +; CHECK-NEXT: --> {(4 + %start),+,4}<%loop> U: full-set S: full-set Exits: (4 + (4 * ((-4 + (-1 * (ptrtoint ptr %start to i64)) + (ptrtoint ptr %end to i64)) /u 4))<nuw> + %start) LoopDispositions: { %loop: Computable } ; CHECK-NEXT: Determining loop execution counts for: @test_ptrs_aligned_by_8_via_assumption -; CHECK-NEXT: Loop %loop: Unpredictable backedge-taken count. -; CHECK-NEXT: Loop %loop: Unpredictable constant max backedge-taken count. -; CHECK-NEXT: Loop %loop: Unpredictable symbolic max backedge-taken count. 
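The rewritten expectations in these two tests reflect ScalarEvolution now consuming the "align" assumption operand bundles: with %start and %end both known at least 4-aligned, their byte distance is an exact multiple of the 4-byte stride, so the /u 4 in the count is an exact division and the old divisibility predicates become unnecessary. A small standalone model of that arithmetic (not code from the patch):

    #include <cassert>
    #include <cstdint>

    // Backedge-taken count of a pointer loop striding 4 bytes from Start to
    // End, both 4-aligned: the division is exact, no remainder to guard.
    static uint64_t btcAligned4(uint64_t Start, uint64_t End) {
      assert(Start % 4 == 0 && End % 4 == 0 && End > Start);
      return (End - Start - 4) / 4;
    }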
-; CHECK-NEXT: Loop %loop: Predicated backedge-taken count is ((-4 + (-1 * (ptrtoint ptr %start to i64)) + (ptrtoint ptr %end to i64)) /u 4) -; CHECK-NEXT: Predicates: -; CHECK-NEXT: Equal predicate: (zext i2 ((trunc i64 (ptrtoint ptr %end to i64) to i2) + (-1 * (trunc i64 (ptrtoint ptr %start to i64) to i2))) to i64) == 0 -; CHECK-NEXT: Loop %loop: Predicated constant max backedge-taken count is i64 4611686018427387903 -; CHECK-NEXT: Predicates: -; CHECK-NEXT: Equal predicate: (zext i2 ((trunc i64 (ptrtoint ptr %end to i64) to i2) + (-1 * (trunc i64 (ptrtoint ptr %start to i64) to i2))) to i64) == 0 -; CHECK-NEXT: Loop %loop: Predicated symbolic max backedge-taken count is ((-4 + (-1 * (ptrtoint ptr %start to i64)) + (ptrtoint ptr %end to i64)) /u 4) -; CHECK-NEXT: Predicates: -; CHECK-NEXT: Equal predicate: (zext i2 ((trunc i64 (ptrtoint ptr %end to i64) to i2) + (-1 * (trunc i64 (ptrtoint ptr %start to i64) to i2))) to i64) == 0 +; CHECK-NEXT: Loop %loop: backedge-taken count is ((-4 + (-1 * (ptrtoint ptr %start to i64)) + (ptrtoint ptr %end to i64)) /u 4) +; CHECK-NEXT: Loop %loop: constant max backedge-taken count is i64 4611686018427387903 +; CHECK-NEXT: Loop %loop: symbolic max backedge-taken count is ((-4 + (-1 * (ptrtoint ptr %start to i64)) + (ptrtoint ptr %end to i64)) /u 4) +; CHECK-NEXT: Loop %loop: Trip multiple is 1 ; entry: call void @llvm.assume(i1 true) [ "align"(ptr %start, i64 8) ] diff --git a/llvm/test/CodeGen/AArch64/GlobalISel/legalizer-info-validation.mir b/llvm/test/CodeGen/AArch64/GlobalISel/legalizer-info-validation.mir index d721b73c..896603d 100644 --- a/llvm/test/CodeGen/AArch64/GlobalISel/legalizer-info-validation.mir +++ b/llvm/test/CodeGen/AArch64/GlobalISel/legalizer-info-validation.mir @@ -70,12 +70,12 @@ # DEBUG-NEXT: .. the first uncovered type index: 1, OK # DEBUG-NEXT: .. the first uncovered imm index: 0, OK # -# DEBUG-NEXT: G_ABDS (opcode 65): 1 type index, 0 imm indices +# DEBUG-NEXT: G_ABDS (opcode [[G_ABDS:[0-9]+]]): 1 type index, 0 imm indices # DEBUG-NEXT: .. type index coverage check SKIPPED: user-defined predicate detected # DEBUG-NEXT: .. imm index coverage check SKIPPED: user-defined predicate detected # -# DEBUG-NEXT: G_ABDU (opcode 66): 1 type index, 0 imm indices -# DEBUG-NEXT: .. opcode {{[0-9]+}} is aliased to {{[0-9]+}} +# DEBUG-NEXT: G_ABDU (opcode [[G_ABDU:[0-9]+]]): 1 type index, 0 imm indices +# DEBUG-NEXT: .. opcode [[G_ABDU]] is aliased to [[G_ABDS]] # DEBUG-NEXT: .. type index coverage check SKIPPED: user-defined predicate detected # DEBUG-NEXT: .. 
imm index coverage check SKIPPED: user-defined predicate detected # diff --git a/llvm/test/CodeGen/AMDGPU/a-v-flat-atomicrmw.ll b/llvm/test/CodeGen/AMDGPU/a-v-flat-atomicrmw.ll index 7cc5051..003aa04 100644 --- a/llvm/test/CodeGen/AMDGPU/a-v-flat-atomicrmw.ll +++ b/llvm/test/CodeGen/AMDGPU/a-v-flat-atomicrmw.ll @@ -8759,9 +8759,8 @@ define void @flat_atomic_usub_sat_i64_ret_a_a(ptr %ptr) #0 { ; GFX90A-NEXT: s_waitcnt vmcnt(0) lgkmcnt(0) ; GFX90A-NEXT: v_sub_co_u32_e32 v0, vcc, v2, v6 ; GFX90A-NEXT: v_subb_co_u32_e32 v1, vcc, v3, v7, vcc -; GFX90A-NEXT: v_cmp_gt_u64_e32 vcc, v[0:1], v[2:3] -; GFX90A-NEXT: v_cndmask_b32_e64 v1, v1, 0, vcc ; GFX90A-NEXT: v_cndmask_b32_e64 v0, v0, 0, vcc +; GFX90A-NEXT: v_cndmask_b32_e64 v1, v1, 0, vcc ; GFX90A-NEXT: flat_atomic_cmpswap_x2 v[0:1], v[4:5], v[0:3] glc ; GFX90A-NEXT: s_waitcnt vmcnt(0) lgkmcnt(0) ; GFX90A-NEXT: v_cmp_eq_u64_e32 vcc, v[0:1], v[2:3] @@ -8780,20 +8779,19 @@ define void @flat_atomic_usub_sat_i64_ret_a_a(ptr %ptr) #0 { ; GFX90A-NEXT: s_cbranch_execz .LBB113_6 ; GFX90A-NEXT: ; %bb.5: ; %atomicrmw.private ; GFX90A-NEXT: v_cmp_ne_u64_e32 vcc, 0, v[4:5] -; GFX90A-NEXT: v_cndmask_b32_e32 v4, -1, v4, vcc -; GFX90A-NEXT: buffer_load_dword v0, v4, s[0:3], 0 offen -; GFX90A-NEXT: buffer_load_dword v1, v4, s[0:3], 0 offen offset:4 +; GFX90A-NEXT: v_cndmask_b32_e32 v0, -1, v4, vcc +; GFX90A-NEXT: buffer_load_dword v1, v0, s[0:3], 0 offen +; GFX90A-NEXT: buffer_load_dword v2, v0, s[0:3], 0 offen offset:4 ; GFX90A-NEXT: s_waitcnt vmcnt(1) -; GFX90A-NEXT: v_sub_co_u32_e32 v2, vcc, v0, v6 +; GFX90A-NEXT: v_sub_co_u32_e32 v3, vcc, v1, v6 ; GFX90A-NEXT: s_waitcnt vmcnt(0) -; GFX90A-NEXT: v_subb_co_u32_e32 v3, vcc, v1, v7, vcc -; GFX90A-NEXT: v_cmp_gt_u64_e32 vcc, v[2:3], v[0:1] -; GFX90A-NEXT: v_accvgpr_write_b32 a0, v0 -; GFX90A-NEXT: v_cndmask_b32_e64 v0, v3, 0, vcc -; GFX90A-NEXT: v_accvgpr_write_b32 a1, v1 -; GFX90A-NEXT: v_cndmask_b32_e64 v2, v2, 0, vcc -; GFX90A-NEXT: buffer_store_dword v0, v4, s[0:3], 0 offen offset:4 -; GFX90A-NEXT: buffer_store_dword v2, v4, s[0:3], 0 offen +; GFX90A-NEXT: v_subb_co_u32_e32 v4, vcc, v2, v7, vcc +; GFX90A-NEXT: v_accvgpr_write_b32 a0, v1 +; GFX90A-NEXT: v_cndmask_b32_e64 v3, v3, 0, vcc +; GFX90A-NEXT: v_accvgpr_write_b32 a1, v2 +; GFX90A-NEXT: v_cndmask_b32_e64 v1, v4, 0, vcc +; GFX90A-NEXT: buffer_store_dword v3, v0, s[0:3], 0 offen +; GFX90A-NEXT: buffer_store_dword v1, v0, s[0:3], 0 offen offset:4 ; GFX90A-NEXT: .LBB113_6: ; %atomicrmw.phi ; GFX90A-NEXT: s_or_b64 exec, exec, s[4:5] ; GFX90A-NEXT: ;;#ASMSTART @@ -8827,10 +8825,9 @@ define void @flat_atomic_usub_sat_i64_ret_a_a(ptr %ptr) #0 { ; GFX950-NEXT: v_sub_co_u32_e32 v0, vcc, v2, v6 ; GFX950-NEXT: s_nop 1 ; GFX950-NEXT: v_subb_co_u32_e32 v1, vcc, v3, v7, vcc -; GFX950-NEXT: v_cmp_gt_u64_e32 vcc, v[0:1], v[2:3] ; GFX950-NEXT: s_nop 1 -; GFX950-NEXT: v_cndmask_b32_e64 v1, v1, 0, vcc ; GFX950-NEXT: v_cndmask_b32_e64 v0, v0, 0, vcc +; GFX950-NEXT: v_cndmask_b32_e64 v1, v1, 0, vcc ; GFX950-NEXT: flat_atomic_cmpswap_x2 v[0:1], v[4:5], v[0:3] sc0 ; GFX950-NEXT: s_waitcnt vmcnt(0) lgkmcnt(0) ; GFX950-NEXT: v_cmp_eq_u64_e32 vcc, v[0:1], v[2:3] @@ -8856,11 +8853,11 @@ define void @flat_atomic_usub_sat_i64_ret_a_a(ptr %ptr) #0 { ; GFX950-NEXT: v_sub_co_u32_e32 v2, vcc, v0, v6 ; GFX950-NEXT: s_nop 1 ; GFX950-NEXT: v_subb_co_u32_e32 v3, vcc, v1, v7, vcc -; GFX950-NEXT: v_cmp_gt_u64_e32 vcc, v[2:3], v[0:1] ; GFX950-NEXT: v_accvgpr_write_b32 a0, v0 -; GFX950-NEXT: v_accvgpr_write_b32 a1, v1 +; GFX950-NEXT: s_nop 0 ; GFX950-NEXT: v_cndmask_b32_e64 v3, v3, 0, vcc ; 
GFX950-NEXT: v_cndmask_b32_e64 v2, v2, 0, vcc +; GFX950-NEXT: v_accvgpr_write_b32 a1, v1 ; GFX950-NEXT: scratch_store_dwordx2 v4, v[2:3], off ; GFX950-NEXT: .LBB113_6: ; %atomicrmw.phi ; GFX950-NEXT: s_or_b64 exec, exec, s[0:1] @@ -8900,9 +8897,8 @@ define void @flat_atomic_usub_sat_i64_ret_av_av(ptr %ptr) #0 { ; GFX90A-NEXT: v_pk_mov_b32 v[6:7], v[4:5], v[4:5] op_sel:[0,1] ; GFX90A-NEXT: v_sub_co_u32_e32 v4, vcc, v6, v2 ; GFX90A-NEXT: v_subb_co_u32_e32 v5, vcc, v7, v3, vcc -; GFX90A-NEXT: v_cmp_gt_u64_e32 vcc, v[4:5], v[6:7] -; GFX90A-NEXT: v_cndmask_b32_e64 v5, v5, 0, vcc ; GFX90A-NEXT: v_cndmask_b32_e64 v4, v4, 0, vcc +; GFX90A-NEXT: v_cndmask_b32_e64 v5, v5, 0, vcc ; GFX90A-NEXT: flat_atomic_cmpswap_x2 v[4:5], v[0:1], v[4:7] glc ; GFX90A-NEXT: s_waitcnt vmcnt(0) lgkmcnt(0) ; GFX90A-NEXT: v_cmp_eq_u64_e32 vcc, v[4:5], v[6:7] @@ -8918,18 +8914,17 @@ define void @flat_atomic_usub_sat_i64_ret_av_av(ptr %ptr) #0 { ; GFX90A-NEXT: s_cbranch_execz .LBB114_6 ; GFX90A-NEXT: ; %bb.5: ; %atomicrmw.private ; GFX90A-NEXT: v_cmp_ne_u64_e32 vcc, 0, v[0:1] -; GFX90A-NEXT: v_cndmask_b32_e32 v6, -1, v0, vcc -; GFX90A-NEXT: buffer_load_dword v4, v6, s[0:3], 0 offen -; GFX90A-NEXT: buffer_load_dword v5, v6, s[0:3], 0 offen offset:4 +; GFX90A-NEXT: v_cndmask_b32_e32 v0, -1, v0, vcc +; GFX90A-NEXT: buffer_load_dword v4, v0, s[0:3], 0 offen +; GFX90A-NEXT: buffer_load_dword v5, v0, s[0:3], 0 offen offset:4 ; GFX90A-NEXT: s_waitcnt vmcnt(1) -; GFX90A-NEXT: v_sub_co_u32_e32 v0, vcc, v4, v2 +; GFX90A-NEXT: v_sub_co_u32_e32 v1, vcc, v4, v2 ; GFX90A-NEXT: s_waitcnt vmcnt(0) -; GFX90A-NEXT: v_subb_co_u32_e32 v1, vcc, v5, v3, vcc -; GFX90A-NEXT: v_cmp_gt_u64_e32 vcc, v[0:1], v[4:5] -; GFX90A-NEXT: v_cndmask_b32_e64 v0, v0, 0, vcc +; GFX90A-NEXT: v_subb_co_u32_e32 v2, vcc, v5, v3, vcc ; GFX90A-NEXT: v_cndmask_b32_e64 v1, v1, 0, vcc -; GFX90A-NEXT: buffer_store_dword v0, v6, s[0:3], 0 offen -; GFX90A-NEXT: buffer_store_dword v1, v6, s[0:3], 0 offen offset:4 +; GFX90A-NEXT: v_cndmask_b32_e64 v2, v2, 0, vcc +; GFX90A-NEXT: buffer_store_dword v1, v0, s[0:3], 0 offen +; GFX90A-NEXT: buffer_store_dword v2, v0, s[0:3], 0 offen offset:4 ; GFX90A-NEXT: .LBB114_6: ; %atomicrmw.phi ; GFX90A-NEXT: s_or_b64 exec, exec, s[4:5] ; GFX90A-NEXT: ;;#ASMSTART @@ -8962,10 +8957,9 @@ define void @flat_atomic_usub_sat_i64_ret_av_av(ptr %ptr) #0 { ; GFX950-NEXT: v_sub_co_u32_e32 v2, vcc, v8, v0 ; GFX950-NEXT: s_nop 1 ; GFX950-NEXT: v_subb_co_u32_e32 v3, vcc, v9, v1, vcc -; GFX950-NEXT: v_cmp_gt_u64_e32 vcc, v[2:3], v[8:9] ; GFX950-NEXT: s_nop 1 -; GFX950-NEXT: v_cndmask_b32_e64 v7, v3, 0, vcc ; GFX950-NEXT: v_cndmask_b32_e64 v6, v2, 0, vcc +; GFX950-NEXT: v_cndmask_b32_e64 v7, v3, 0, vcc ; GFX950-NEXT: flat_atomic_cmpswap_x2 v[2:3], v[4:5], v[6:9] sc0 ; GFX950-NEXT: s_waitcnt vmcnt(0) lgkmcnt(0) ; GFX950-NEXT: v_cmp_eq_u64_e32 vcc, v[2:3], v[8:9] @@ -8988,7 +8982,6 @@ define void @flat_atomic_usub_sat_i64_ret_av_av(ptr %ptr) #0 { ; GFX950-NEXT: v_sub_co_u32_e32 v0, vcc, v2, v0 ; GFX950-NEXT: s_nop 1 ; GFX950-NEXT: v_subb_co_u32_e32 v1, vcc, v3, v1, vcc -; GFX950-NEXT: v_cmp_gt_u64_e32 vcc, v[0:1], v[2:3] ; GFX950-NEXT: s_nop 1 ; GFX950-NEXT: v_cndmask_b32_e64 v1, v1, 0, vcc ; GFX950-NEXT: v_cndmask_b32_e64 v0, v0, 0, vcc @@ -17064,9 +17057,8 @@ define void @flat_atomic_usub_sat_i64_saddr_ret_a_a(ptr inreg %ptr) #0 { ; GFX90A-NEXT: s_waitcnt vmcnt(0) lgkmcnt(0) ; GFX90A-NEXT: v_sub_co_u32_e32 v0, vcc, v2, v4 ; GFX90A-NEXT: v_subb_co_u32_e32 v1, vcc, v3, v5, vcc -; GFX90A-NEXT: v_cmp_gt_u64_e32 vcc, v[0:1], v[2:3] -; GFX90A-NEXT: 
v_cndmask_b32_e64 v1, v1, 0, vcc ; GFX90A-NEXT: v_cndmask_b32_e64 v0, v0, 0, vcc +; GFX90A-NEXT: v_cndmask_b32_e64 v1, v1, 0, vcc ; GFX90A-NEXT: flat_atomic_cmpswap_x2 v[0:1], v[6:7], v[0:3] glc ; GFX90A-NEXT: s_waitcnt vmcnt(0) lgkmcnt(0) ; GFX90A-NEXT: v_cmp_eq_u64_e32 vcc, v[0:1], v[2:3] @@ -17085,20 +17077,19 @@ define void @flat_atomic_usub_sat_i64_saddr_ret_a_a(ptr inreg %ptr) #0 { ; GFX90A-NEXT: ; %bb.5: ; %atomicrmw.private ; GFX90A-NEXT: s_cmp_lg_u64 s[4:5], 0 ; GFX90A-NEXT: s_cselect_b32 s4, s4, -1 -; GFX90A-NEXT: v_mov_b32_e32 v6, s4 -; GFX90A-NEXT: buffer_load_dword v0, v6, s[0:3], 0 offen -; GFX90A-NEXT: buffer_load_dword v1, v6, s[0:3], 0 offen offset:4 +; GFX90A-NEXT: v_mov_b32_e32 v0, s4 +; GFX90A-NEXT: buffer_load_dword v1, v0, s[0:3], 0 offen +; GFX90A-NEXT: buffer_load_dword v2, v0, s[0:3], 0 offen offset:4 ; GFX90A-NEXT: s_waitcnt vmcnt(1) -; GFX90A-NEXT: v_sub_co_u32_e32 v2, vcc, v0, v4 +; GFX90A-NEXT: v_sub_co_u32_e32 v3, vcc, v1, v4 ; GFX90A-NEXT: s_waitcnt vmcnt(0) -; GFX90A-NEXT: v_subb_co_u32_e32 v3, vcc, v1, v5, vcc -; GFX90A-NEXT: v_cmp_gt_u64_e32 vcc, v[2:3], v[0:1] -; GFX90A-NEXT: v_accvgpr_write_b32 a0, v0 -; GFX90A-NEXT: v_cndmask_b32_e64 v0, v3, 0, vcc -; GFX90A-NEXT: v_accvgpr_write_b32 a1, v1 -; GFX90A-NEXT: v_cndmask_b32_e64 v2, v2, 0, vcc -; GFX90A-NEXT: buffer_store_dword v0, v6, s[0:3], 0 offen offset:4 -; GFX90A-NEXT: buffer_store_dword v2, v6, s[0:3], 0 offen +; GFX90A-NEXT: v_subb_co_u32_e32 v4, vcc, v2, v5, vcc +; GFX90A-NEXT: v_accvgpr_write_b32 a0, v1 +; GFX90A-NEXT: v_cndmask_b32_e64 v3, v3, 0, vcc +; GFX90A-NEXT: v_accvgpr_write_b32 a1, v2 +; GFX90A-NEXT: v_cndmask_b32_e64 v1, v4, 0, vcc +; GFX90A-NEXT: buffer_store_dword v3, v0, s[0:3], 0 offen +; GFX90A-NEXT: buffer_store_dword v1, v0, s[0:3], 0 offen offset:4 ; GFX90A-NEXT: .LBB221_6: ; %atomicrmw.phi ; GFX90A-NEXT: ;;#ASMSTART ; GFX90A-NEXT: ; use a[0:1] @@ -17131,10 +17122,9 @@ define void @flat_atomic_usub_sat_i64_saddr_ret_a_a(ptr inreg %ptr) #0 { ; GFX950-NEXT: v_sub_co_u32_e32 v0, vcc, v2, v4 ; GFX950-NEXT: s_nop 1 ; GFX950-NEXT: v_subb_co_u32_e32 v1, vcc, v3, v5, vcc -; GFX950-NEXT: v_cmp_gt_u64_e32 vcc, v[0:1], v[2:3] ; GFX950-NEXT: s_nop 1 -; GFX950-NEXT: v_cndmask_b32_e64 v1, v1, 0, vcc ; GFX950-NEXT: v_cndmask_b32_e64 v0, v0, 0, vcc +; GFX950-NEXT: v_cndmask_b32_e64 v1, v1, 0, vcc ; GFX950-NEXT: flat_atomic_cmpswap_x2 v[0:1], v[6:7], v[0:3] sc0 ; GFX950-NEXT: s_waitcnt vmcnt(0) lgkmcnt(0) ; GFX950-NEXT: v_cmp_eq_u64_e32 vcc, v[0:1], v[2:3] @@ -17158,11 +17148,11 @@ define void @flat_atomic_usub_sat_i64_saddr_ret_a_a(ptr inreg %ptr) #0 { ; GFX950-NEXT: v_sub_co_u32_e32 v2, vcc, v0, v4 ; GFX950-NEXT: s_nop 1 ; GFX950-NEXT: v_subb_co_u32_e32 v3, vcc, v1, v5, vcc -; GFX950-NEXT: v_cmp_gt_u64_e32 vcc, v[2:3], v[0:1] ; GFX950-NEXT: v_accvgpr_write_b32 a0, v0 -; GFX950-NEXT: v_accvgpr_write_b32 a1, v1 +; GFX950-NEXT: s_nop 0 ; GFX950-NEXT: v_cndmask_b32_e64 v3, v3, 0, vcc ; GFX950-NEXT: v_cndmask_b32_e64 v2, v2, 0, vcc +; GFX950-NEXT: v_accvgpr_write_b32 a1, v1 ; GFX950-NEXT: scratch_store_dwordx2 off, v[2:3], s0 ; GFX950-NEXT: .LBB221_6: ; %atomicrmw.phi ; GFX950-NEXT: ;;#ASMSTART @@ -17201,9 +17191,8 @@ define void @flat_atomic_usub_sat_i64_saddr_ret_av_av(ptr inreg %ptr) #0 { ; GFX90A-NEXT: v_pk_mov_b32 v[8:9], v[2:3], v[2:3] op_sel:[0,1] ; GFX90A-NEXT: v_sub_co_u32_e32 v2, vcc, v8, v0 ; GFX90A-NEXT: v_subb_co_u32_e32 v3, vcc, v9, v1, vcc -; GFX90A-NEXT: v_cmp_gt_u64_e32 vcc, v[2:3], v[8:9] -; GFX90A-NEXT: v_cndmask_b32_e64 v7, v3, 0, vcc ; GFX90A-NEXT: v_cndmask_b32_e64 v6, v2, 0, 
vcc +; GFX90A-NEXT: v_cndmask_b32_e64 v7, v3, 0, vcc ; GFX90A-NEXT: flat_atomic_cmpswap_x2 v[2:3], v[4:5], v[6:9] glc ; GFX90A-NEXT: s_waitcnt vmcnt(0) lgkmcnt(0) ; GFX90A-NEXT: v_cmp_eq_u64_e32 vcc, v[2:3], v[8:9] @@ -17226,7 +17215,6 @@ define void @flat_atomic_usub_sat_i64_saddr_ret_av_av(ptr inreg %ptr) #0 { ; GFX90A-NEXT: v_sub_co_u32_e32 v0, vcc, v2, v0 ; GFX90A-NEXT: s_waitcnt vmcnt(0) ; GFX90A-NEXT: v_subb_co_u32_e32 v1, vcc, v3, v1, vcc -; GFX90A-NEXT: v_cmp_gt_u64_e32 vcc, v[0:1], v[2:3] ; GFX90A-NEXT: v_cndmask_b32_e64 v0, v0, 0, vcc ; GFX90A-NEXT: v_cndmask_b32_e64 v1, v1, 0, vcc ; GFX90A-NEXT: buffer_store_dword v0, v4, s[0:3], 0 offen @@ -17262,10 +17250,9 @@ define void @flat_atomic_usub_sat_i64_saddr_ret_av_av(ptr inreg %ptr) #0 { ; GFX950-NEXT: v_sub_co_u32_e32 v2, vcc, v8, v0 ; GFX950-NEXT: s_nop 1 ; GFX950-NEXT: v_subb_co_u32_e32 v3, vcc, v9, v1, vcc -; GFX950-NEXT: v_cmp_gt_u64_e32 vcc, v[2:3], v[8:9] ; GFX950-NEXT: s_nop 1 -; GFX950-NEXT: v_cndmask_b32_e64 v7, v3, 0, vcc ; GFX950-NEXT: v_cndmask_b32_e64 v6, v2, 0, vcc +; GFX950-NEXT: v_cndmask_b32_e64 v7, v3, 0, vcc ; GFX950-NEXT: flat_atomic_cmpswap_x2 v[2:3], v[4:5], v[6:9] sc0 ; GFX950-NEXT: s_waitcnt vmcnt(0) lgkmcnt(0) ; GFX950-NEXT: v_cmp_eq_u64_e32 vcc, v[2:3], v[8:9] @@ -17286,7 +17273,6 @@ define void @flat_atomic_usub_sat_i64_saddr_ret_av_av(ptr inreg %ptr) #0 { ; GFX950-NEXT: v_sub_co_u32_e32 v0, vcc, v2, v0 ; GFX950-NEXT: s_nop 1 ; GFX950-NEXT: v_subb_co_u32_e32 v1, vcc, v3, v1, vcc -; GFX950-NEXT: v_cmp_gt_u64_e32 vcc, v[0:1], v[2:3] ; GFX950-NEXT: s_nop 1 ; GFX950-NEXT: v_cndmask_b32_e64 v1, v1, 0, vcc ; GFX950-NEXT: v_cndmask_b32_e64 v0, v0, 0, vcc diff --git a/llvm/test/CodeGen/AMDGPU/a-v-global-atomicrmw.ll b/llvm/test/CodeGen/AMDGPU/a-v-global-atomicrmw.ll index c98fff9..34a4899 100644 --- a/llvm/test/CodeGen/AMDGPU/a-v-global-atomicrmw.ll +++ b/llvm/test/CodeGen/AMDGPU/a-v-global-atomicrmw.ll @@ -5804,9 +5804,8 @@ define void @global_atomic_usub_sat_i64_ret_a_a(ptr addrspace(1) %ptr) #0 { ; GFX90A-NEXT: s_waitcnt vmcnt(0) ; GFX90A-NEXT: v_sub_co_u32_e32 v2, vcc, v4, v6 ; GFX90A-NEXT: v_subb_co_u32_e32 v3, vcc, v5, v7, vcc -; GFX90A-NEXT: v_cmp_gt_u64_e32 vcc, v[2:3], v[4:5] -; GFX90A-NEXT: v_cndmask_b32_e64 v3, v3, 0, vcc ; GFX90A-NEXT: v_cndmask_b32_e64 v2, v2, 0, vcc +; GFX90A-NEXT: v_cndmask_b32_e64 v3, v3, 0, vcc ; GFX90A-NEXT: global_atomic_cmpswap_x2 v[2:3], v[0:1], v[2:5], off offset:80 glc ; GFX90A-NEXT: s_waitcnt vmcnt(0) ; GFX90A-NEXT: v_cmp_eq_u64_e32 vcc, v[2:3], v[4:5] @@ -5839,10 +5838,9 @@ define void @global_atomic_usub_sat_i64_ret_a_a(ptr addrspace(1) %ptr) #0 { ; GFX950-NEXT: v_sub_co_u32_e32 v2, vcc, v4, v6 ; GFX950-NEXT: s_nop 1 ; GFX950-NEXT: v_subb_co_u32_e32 v3, vcc, v5, v7, vcc -; GFX950-NEXT: v_cmp_gt_u64_e32 vcc, v[2:3], v[4:5] ; GFX950-NEXT: s_nop 1 -; GFX950-NEXT: v_cndmask_b32_e64 v3, v3, 0, vcc ; GFX950-NEXT: v_cndmask_b32_e64 v2, v2, 0, vcc +; GFX950-NEXT: v_cndmask_b32_e64 v3, v3, 0, vcc ; GFX950-NEXT: global_atomic_cmpswap_x2 v[2:3], v[0:1], v[2:5], off offset:80 sc0 ; GFX950-NEXT: s_waitcnt vmcnt(0) ; GFX950-NEXT: v_cmp_eq_u64_e32 vcc, v[2:3], v[4:5] @@ -5880,9 +5878,8 @@ define void @global_atomic_usub_sat_i64_ret_av_av(ptr addrspace(1) %ptr) #0 { ; GFX90A-NEXT: v_pk_mov_b32 v[6:7], v[4:5], v[4:5] op_sel:[0,1] ; GFX90A-NEXT: v_sub_co_u32_e32 v4, vcc, v6, v2 ; GFX90A-NEXT: v_subb_co_u32_e32 v5, vcc, v7, v3, vcc -; GFX90A-NEXT: v_cmp_gt_u64_e32 vcc, v[4:5], v[6:7] -; GFX90A-NEXT: v_cndmask_b32_e64 v5, v5, 0, vcc ; GFX90A-NEXT: v_cndmask_b32_e64 v4, v4, 0, vcc +; 
GFX90A-NEXT: v_cndmask_b32_e64 v5, v5, 0, vcc ; GFX90A-NEXT: global_atomic_cmpswap_x2 v[4:5], v[0:1], v[4:7], off offset:80 glc ; GFX90A-NEXT: s_waitcnt vmcnt(0) ; GFX90A-NEXT: v_cmp_eq_u64_e32 vcc, v[4:5], v[6:7] @@ -5911,10 +5908,9 @@ define void @global_atomic_usub_sat_i64_ret_av_av(ptr addrspace(1) %ptr) #0 { ; GFX950-NEXT: v_sub_co_u32_e32 v4, vcc, v6, v2 ; GFX950-NEXT: s_nop 1 ; GFX950-NEXT: v_subb_co_u32_e32 v5, vcc, v7, v3, vcc -; GFX950-NEXT: v_cmp_gt_u64_e32 vcc, v[4:5], v[6:7] ; GFX950-NEXT: s_nop 1 -; GFX950-NEXT: v_cndmask_b32_e64 v5, v5, 0, vcc ; GFX950-NEXT: v_cndmask_b32_e64 v4, v4, 0, vcc +; GFX950-NEXT: v_cndmask_b32_e64 v5, v5, 0, vcc ; GFX950-NEXT: global_atomic_cmpswap_x2 v[4:5], v[0:1], v[4:7], off offset:80 sc0 ; GFX950-NEXT: s_waitcnt vmcnt(0) ; GFX950-NEXT: v_cmp_eq_u64_e32 vcc, v[4:5], v[6:7] @@ -11573,9 +11569,8 @@ define void @global_atomic_usub_sat_i64_saddr_ret_a_a(ptr addrspace(1) inreg %pt ; GFX90A-NEXT: s_waitcnt vmcnt(0) ; GFX90A-NEXT: v_sub_co_u32_e32 v0, vcc, v2, v4 ; GFX90A-NEXT: v_subb_co_u32_e32 v1, vcc, v3, v5, vcc -; GFX90A-NEXT: v_cmp_gt_u64_e32 vcc, v[0:1], v[2:3] -; GFX90A-NEXT: v_cndmask_b32_e64 v1, v1, 0, vcc ; GFX90A-NEXT: v_cndmask_b32_e64 v0, v0, 0, vcc +; GFX90A-NEXT: v_cndmask_b32_e64 v1, v1, 0, vcc ; GFX90A-NEXT: global_atomic_cmpswap_x2 v[0:1], v6, v[0:3], s[16:17] offset:80 glc ; GFX90A-NEXT: s_waitcnt vmcnt(0) ; GFX90A-NEXT: v_cmp_eq_u64_e32 vcc, v[0:1], v[2:3] @@ -11609,10 +11604,9 @@ define void @global_atomic_usub_sat_i64_saddr_ret_a_a(ptr addrspace(1) inreg %pt ; GFX950-NEXT: v_sub_co_u32_e32 v0, vcc, v2, v4 ; GFX950-NEXT: s_nop 1 ; GFX950-NEXT: v_subb_co_u32_e32 v1, vcc, v3, v5, vcc -; GFX950-NEXT: v_cmp_gt_u64_e32 vcc, v[0:1], v[2:3] ; GFX950-NEXT: s_nop 1 -; GFX950-NEXT: v_cndmask_b32_e64 v1, v1, 0, vcc ; GFX950-NEXT: v_cndmask_b32_e64 v0, v0, 0, vcc +; GFX950-NEXT: v_cndmask_b32_e64 v1, v1, 0, vcc ; GFX950-NEXT: global_atomic_cmpswap_x2 v[0:1], v6, v[0:3], s[0:1] offset:80 sc0 ; GFX950-NEXT: s_waitcnt vmcnt(0) ; GFX950-NEXT: v_cmp_eq_u64_e32 vcc, v[0:1], v[2:3] @@ -11651,9 +11645,8 @@ define void @global_atomic_usub_sat_i64_saddr_ret_av_av(ptr addrspace(1) inreg % ; GFX90A-NEXT: v_pk_mov_b32 v[8:9], v[2:3], v[2:3] op_sel:[0,1] ; GFX90A-NEXT: v_sub_co_u32_e32 v2, vcc, v8, v0 ; GFX90A-NEXT: v_subb_co_u32_e32 v3, vcc, v9, v1, vcc -; GFX90A-NEXT: v_cmp_gt_u64_e32 vcc, v[2:3], v[8:9] -; GFX90A-NEXT: v_cndmask_b32_e64 v7, v3, 0, vcc ; GFX90A-NEXT: v_cndmask_b32_e64 v6, v2, 0, vcc +; GFX90A-NEXT: v_cndmask_b32_e64 v7, v3, 0, vcc ; GFX90A-NEXT: global_atomic_cmpswap_x2 v[2:3], v4, v[6:9], s[16:17] offset:80 glc ; GFX90A-NEXT: s_waitcnt vmcnt(0) ; GFX90A-NEXT: v_cmp_eq_u64_e32 vcc, v[2:3], v[8:9] @@ -11683,10 +11676,9 @@ define void @global_atomic_usub_sat_i64_saddr_ret_av_av(ptr addrspace(1) inreg % ; GFX950-NEXT: v_sub_co_u32_e32 v2, vcc, v8, v0 ; GFX950-NEXT: s_nop 1 ; GFX950-NEXT: v_subb_co_u32_e32 v3, vcc, v9, v1, vcc -; GFX950-NEXT: v_cmp_gt_u64_e32 vcc, v[2:3], v[8:9] ; GFX950-NEXT: s_nop 1 -; GFX950-NEXT: v_cndmask_b32_e64 v7, v3, 0, vcc ; GFX950-NEXT: v_cndmask_b32_e64 v6, v2, 0, vcc +; GFX950-NEXT: v_cndmask_b32_e64 v7, v3, 0, vcc ; GFX950-NEXT: global_atomic_cmpswap_x2 v[2:3], v4, v[6:9], s[0:1] offset:80 sc0 ; GFX950-NEXT: s_waitcnt vmcnt(0) ; GFX950-NEXT: v_cmp_eq_u64_e32 vcc, v[2:3], v[8:9] diff --git a/llvm/test/CodeGen/AMDGPU/addsub64_carry.ll b/llvm/test/CodeGen/AMDGPU/addsub64_carry.ll index d326966..b72eba8 100644 --- a/llvm/test/CodeGen/AMDGPU/addsub64_carry.ll +++ b/llvm/test/CodeGen/AMDGPU/addsub64_carry.ll @@ 
-17,12 +17,9 @@ define %struct.uint96 @v_add64_32(i64 %val64A, i64 %val64B, i32 %val32) { ; CHECK-LABEL: v_add64_32: ; CHECK: ; %bb.0: ; CHECK-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0) -; CHECK-NEXT: v_add_co_u32_e32 v5, vcc, v0, v2 -; CHECK-NEXT: v_addc_co_u32_e32 v6, vcc, v1, v3, vcc -; CHECK-NEXT: v_cmp_lt_u64_e32 vcc, v[5:6], v[0:1] -; CHECK-NEXT: v_mov_b32_e32 v0, v5 +; CHECK-NEXT: v_add_co_u32_e32 v0, vcc, v0, v2 +; CHECK-NEXT: v_addc_co_u32_e32 v1, vcc, v1, v3, vcc ; CHECK-NEXT: v_addc_co_u32_e32 v2, vcc, 0, v4, vcc -; CHECK-NEXT: v_mov_b32_e32 v1, v6 ; CHECK-NEXT: s_setpc_b64 s[30:31] %sum64 = add i64 %val64A, %val64B %obit = icmp ult i64 %sum64, %val64A @@ -38,16 +35,14 @@ define <2 x i64> @v_uadd_v2i64(<2 x i64> %val0, <2 x i64> %val1, ptr %ptrval) { ; CHECK: ; %bb.0: ; CHECK-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0) ; CHECK-NEXT: v_add_co_u32_e32 v6, vcc, v2, v6 +; CHECK-NEXT: v_add_co_u32_e64 v4, s[4:5], v0, v4 ; CHECK-NEXT: v_addc_co_u32_e32 v7, vcc, v3, v7, vcc -; CHECK-NEXT: v_add_co_u32_e32 v4, vcc, v0, v4 -; CHECK-NEXT: v_addc_co_u32_e32 v5, vcc, v1, v5, vcc -; CHECK-NEXT: v_cmp_lt_u64_e32 vcc, v[4:5], v[0:1] -; CHECK-NEXT: flat_store_dwordx4 v[8:9], v[4:7] -; CHECK-NEXT: v_cndmask_b32_e64 v0, 0, -1, vcc -; CHECK-NEXT: v_cmp_lt_u64_e32 vcc, v[6:7], v[2:3] -; CHECK-NEXT: v_mov_b32_e32 v1, v0 +; CHECK-NEXT: v_addc_co_u32_e64 v5, s[4:5], v1, v5, s[4:5] +; CHECK-NEXT: v_cndmask_b32_e64 v0, 0, -1, s[4:5] ; CHECK-NEXT: v_cndmask_b32_e64 v2, 0, -1, vcc +; CHECK-NEXT: v_mov_b32_e32 v1, v0 ; CHECK-NEXT: v_mov_b32_e32 v3, v2 +; CHECK-NEXT: flat_store_dwordx4 v[8:9], v[4:7] ; CHECK-NEXT: s_waitcnt vmcnt(0) lgkmcnt(0) ; CHECK-NEXT: s_setpc_b64 s[30:31] %pair = call {<2 x i64>, <2 x i1>} @llvm.uadd.with.overflow.v2i64(<2 x i64> %val0, <2 x i64> %val1) @@ -63,16 +58,14 @@ define <2 x i64> @v_usub_v2i64(<2 x i64> %val0, <2 x i64> %val1, ptr %ptrval) { ; CHECK: ; %bb.0: ; CHECK-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0) ; CHECK-NEXT: v_sub_co_u32_e32 v6, vcc, v2, v6 +; CHECK-NEXT: v_sub_co_u32_e64 v4, s[4:5], v0, v4 ; CHECK-NEXT: v_subb_co_u32_e32 v7, vcc, v3, v7, vcc -; CHECK-NEXT: v_sub_co_u32_e32 v4, vcc, v0, v4 -; CHECK-NEXT: v_subb_co_u32_e32 v5, vcc, v1, v5, vcc -; CHECK-NEXT: v_cmp_gt_u64_e32 vcc, v[4:5], v[0:1] -; CHECK-NEXT: flat_store_dwordx4 v[8:9], v[4:7] -; CHECK-NEXT: v_cndmask_b32_e64 v0, 0, -1, vcc -; CHECK-NEXT: v_cmp_gt_u64_e32 vcc, v[6:7], v[2:3] -; CHECK-NEXT: v_mov_b32_e32 v1, v0 +; CHECK-NEXT: v_subb_co_u32_e64 v5, s[4:5], v1, v5, s[4:5] +; CHECK-NEXT: v_cndmask_b32_e64 v0, 0, -1, s[4:5] ; CHECK-NEXT: v_cndmask_b32_e64 v2, 0, -1, vcc +; CHECK-NEXT: v_mov_b32_e32 v1, v0 ; CHECK-NEXT: v_mov_b32_e32 v3, v2 +; CHECK-NEXT: flat_store_dwordx4 v[8:9], v[4:7] ; CHECK-NEXT: s_waitcnt vmcnt(0) lgkmcnt(0) ; CHECK-NEXT: s_setpc_b64 s[30:31] %pair = call {<2 x i64>, <2 x i1>} @llvm.usub.with.overflow.v2i64(<2 x i64> %val0, <2 x i64> %val1) @@ -87,10 +80,9 @@ define i64 @v_uadd_i64(i64 %val0, i64 %val1, ptr %ptrval) { ; CHECK-LABEL: v_uadd_i64: ; CHECK: ; %bb.0: ; CHECK-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0) -; CHECK-NEXT: v_add_co_u32_e32 v2, vcc, v0, v2 -; CHECK-NEXT: v_addc_co_u32_e32 v3, vcc, v1, v3, vcc -; CHECK-NEXT: v_cmp_lt_u64_e32 vcc, v[2:3], v[0:1] -; CHECK-NEXT: flat_store_dwordx2 v[4:5], v[2:3] +; CHECK-NEXT: v_add_co_u32_e32 v0, vcc, v0, v2 +; CHECK-NEXT: v_addc_co_u32_e32 v1, vcc, v1, v3, vcc +; CHECK-NEXT: flat_store_dwordx2 v[4:5], v[0:1] ; CHECK-NEXT: v_cndmask_b32_e64 v0, 0, -1, vcc ; CHECK-NEXT: v_mov_b32_e32 v1, v0 ; CHECK-NEXT: s_waitcnt 
vmcnt(0) lgkmcnt(0) @@ -109,7 +101,6 @@ define i64 @v_uadd_p1(i64 %val0, i64 %val1, ptr %ptrval) { ; CHECK-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0) ; CHECK-NEXT: v_add_co_u32_e32 v0, vcc, 1, v0 ; CHECK-NEXT: v_addc_co_u32_e32 v1, vcc, 0, v1, vcc -; CHECK-NEXT: v_cmp_eq_u64_e32 vcc, 0, v[0:1] ; CHECK-NEXT: flat_store_dwordx2 v[4:5], v[0:1] ; CHECK-NEXT: v_cndmask_b32_e64 v0, 0, -1, vcc ; CHECK-NEXT: v_mov_b32_e32 v1, v0 @@ -147,10 +138,9 @@ define i64 @v_usub_p1(i64 %val0, i64 %val1, ptr %ptrval) { ; CHECK-LABEL: v_usub_p1: ; CHECK: ; %bb.0: ; CHECK-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0) -; CHECK-NEXT: v_add_co_u32_e32 v2, vcc, -1, v0 -; CHECK-NEXT: v_addc_co_u32_e32 v3, vcc, -1, v1, vcc -; CHECK-NEXT: v_cmp_gt_u64_e32 vcc, v[2:3], v[0:1] -; CHECK-NEXT: flat_store_dwordx2 v[4:5], v[2:3] +; CHECK-NEXT: v_subrev_co_u32_e32 v0, vcc, 1, v0 +; CHECK-NEXT: v_subbrev_co_u32_e32 v1, vcc, 0, v1, vcc +; CHECK-NEXT: flat_store_dwordx2 v[4:5], v[0:1] ; CHECK-NEXT: v_cndmask_b32_e64 v0, 0, -1, vcc ; CHECK-NEXT: v_mov_b32_e32 v1, v0 ; CHECK-NEXT: s_waitcnt vmcnt(0) lgkmcnt(0) @@ -167,10 +157,9 @@ define i64 @v_usub_n1(i64 %val0, i64 %val1, ptr %ptrval) { ; CHECK-LABEL: v_usub_n1: ; CHECK: ; %bb.0: ; CHECK-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0) -; CHECK-NEXT: v_add_co_u32_e32 v2, vcc, 1, v0 -; CHECK-NEXT: v_addc_co_u32_e32 v3, vcc, 0, v1, vcc -; CHECK-NEXT: v_cmp_gt_u64_e32 vcc, v[2:3], v[0:1] -; CHECK-NEXT: flat_store_dwordx2 v[4:5], v[2:3] +; CHECK-NEXT: v_subrev_co_u32_e32 v0, vcc, -1, v0 +; CHECK-NEXT: v_subbrev_co_u32_e32 v1, vcc, -1, v1, vcc +; CHECK-NEXT: flat_store_dwordx2 v[4:5], v[0:1] ; CHECK-NEXT: v_cndmask_b32_e64 v0, 0, -1, vcc ; CHECK-NEXT: v_mov_b32_e32 v1, v0 ; CHECK-NEXT: s_waitcnt vmcnt(0) lgkmcnt(0) @@ -190,15 +179,13 @@ define i64 @v_usub_n1(i64 %val0, i64 %val1, ptr %ptrval) { define amdgpu_ps %struct.uint96 @s_add64_32(i64 inreg %val64A, i64 inreg %val64B, i32 inreg %val32) { ; CHECK-LABEL: s_add64_32: ; CHECK: ; %bb.0: -; CHECK-NEXT: s_add_u32 s6, s0, s2 -; CHECK-NEXT: v_mov_b32_e32 v0, s0 -; CHECK-NEXT: s_addc_u32 s7, s1, s3 -; CHECK-NEXT: v_mov_b32_e32 v1, s1 -; CHECK-NEXT: v_cmp_lt_u64_e32 vcc, s[6:7], v[0:1] -; CHECK-NEXT: s_mov_b32 s0, s6 -; CHECK-NEXT: s_cmp_lg_u64 vcc, 0 +; CHECK-NEXT: s_add_u32 s0, s0, s2 +; CHECK-NEXT: s_cselect_b64 s[6:7], -1, 0 +; CHECK-NEXT: s_cmp_lg_u64 s[6:7], 0 +; CHECK-NEXT: s_addc_u32 s1, s1, s3 +; CHECK-NEXT: s_cselect_b64 s[2:3], -1, 0 +; CHECK-NEXT: s_cmp_lg_u64 s[2:3], 0 ; CHECK-NEXT: s_addc_u32 s2, s4, 0 -; CHECK-NEXT: s_mov_b32 s1, s7 ; CHECK-NEXT: ; return to shader part epilog %sum64 = add i64 %val64A, %val64B %obit = icmp ult i64 %sum64, %val64A @@ -212,24 +199,24 @@ define amdgpu_ps %struct.uint96 @s_add64_32(i64 inreg %val64A, i64 inreg %val64B define amdgpu_ps <2 x i64> @s_uadd_v2i64(<2 x i64> inreg %val0, <2 x i64> inreg %val1, ptr %ptrval) { ; CHECK-LABEL: s_uadd_v2i64: ; CHECK: ; %bb.0: -; CHECK-NEXT: s_add_u32 s6, s2, s6 -; CHECK-NEXT: v_mov_b32_e32 v9, s3 -; CHECK-NEXT: s_addc_u32 s7, s3, s7 -; CHECK-NEXT: v_mov_b32_e32 v8, s2 -; CHECK-NEXT: s_add_u32 s4, s0, s4 -; CHECK-NEXT: v_mov_b32_e32 v7, s1 -; CHECK-NEXT: v_cmp_lt_u64_e32 vcc, s[6:7], v[8:9] -; CHECK-NEXT: s_addc_u32 s5, s1, s5 -; CHECK-NEXT: v_mov_b32_e32 v6, s0 -; CHECK-NEXT: v_cndmask_b32_e64 v8, 0, -1, vcc -; CHECK-NEXT: v_cmp_lt_u64_e32 vcc, s[4:5], v[6:7] -; CHECK-NEXT: v_readfirstlane_b32 s2, v8 -; CHECK-NEXT: v_cndmask_b32_e64 v6, 0, -1, vcc -; CHECK-NEXT: v_readfirstlane_b32 s0, v6 -; CHECK-NEXT: v_mov_b32_e32 v2, s4 -; CHECK-NEXT: v_mov_b32_e32 
v3, s5 -; CHECK-NEXT: v_mov_b32_e32 v4, s6 -; CHECK-NEXT: v_mov_b32_e32 v5, s7 +; CHECK-NEXT: s_add_u32 s10, s2, s6 +; CHECK-NEXT: s_cselect_b64 s[8:9], -1, 0 +; CHECK-NEXT: s_cmp_lg_u64 s[8:9], 0 +; CHECK-NEXT: s_addc_u32 s8, s3, s7 +; CHECK-NEXT: s_cselect_b64 s[2:3], -1, 0 +; CHECK-NEXT: s_add_u32 s0, s0, s4 +; CHECK-NEXT: s_cselect_b64 s[6:7], -1, 0 +; CHECK-NEXT: s_cmp_lg_u64 s[6:7], 0 +; CHECK-NEXT: s_addc_u32 s1, s1, s5 +; CHECK-NEXT: v_mov_b32_e32 v2, s0 +; CHECK-NEXT: v_mov_b32_e32 v3, s1 +; CHECK-NEXT: s_cselect_b64 s[0:1], -1, 0 +; CHECK-NEXT: v_cndmask_b32_e64 v6, 0, -1, s[2:3] +; CHECK-NEXT: v_cndmask_b32_e64 v7, 0, -1, s[0:1] +; CHECK-NEXT: v_readfirstlane_b32 s0, v7 +; CHECK-NEXT: v_readfirstlane_b32 s2, v6 +; CHECK-NEXT: v_mov_b32_e32 v4, s10 +; CHECK-NEXT: v_mov_b32_e32 v5, s8 ; CHECK-NEXT: s_mov_b32 s1, s0 ; CHECK-NEXT: s_mov_b32 s3, s2 ; CHECK-NEXT: flat_store_dwordx4 v[0:1], v[2:5] @@ -246,24 +233,24 @@ define amdgpu_ps <2 x i64> @s_uadd_v2i64(<2 x i64> inreg %val0, <2 x i64> inreg define amdgpu_ps <2 x i64> @s_usub_v2i64(<2 x i64> inreg %val0, <2 x i64> inreg %val1, ptr %ptrval) { ; CHECK-LABEL: s_usub_v2i64: ; CHECK: ; %bb.0: -; CHECK-NEXT: s_sub_u32 s6, s2, s6 -; CHECK-NEXT: v_mov_b32_e32 v9, s3 -; CHECK-NEXT: s_subb_u32 s7, s3, s7 -; CHECK-NEXT: v_mov_b32_e32 v8, s2 -; CHECK-NEXT: s_sub_u32 s4, s0, s4 -; CHECK-NEXT: v_mov_b32_e32 v7, s1 -; CHECK-NEXT: v_cmp_gt_u64_e32 vcc, s[6:7], v[8:9] -; CHECK-NEXT: s_subb_u32 s5, s1, s5 -; CHECK-NEXT: v_mov_b32_e32 v6, s0 -; CHECK-NEXT: v_cndmask_b32_e64 v8, 0, -1, vcc -; CHECK-NEXT: v_cmp_gt_u64_e32 vcc, s[4:5], v[6:7] -; CHECK-NEXT: v_readfirstlane_b32 s2, v8 -; CHECK-NEXT: v_cndmask_b32_e64 v6, 0, -1, vcc -; CHECK-NEXT: v_readfirstlane_b32 s0, v6 -; CHECK-NEXT: v_mov_b32_e32 v2, s4 -; CHECK-NEXT: v_mov_b32_e32 v3, s5 -; CHECK-NEXT: v_mov_b32_e32 v4, s6 -; CHECK-NEXT: v_mov_b32_e32 v5, s7 +; CHECK-NEXT: s_sub_u32 s10, s2, s6 +; CHECK-NEXT: s_cselect_b64 s[8:9], -1, 0 +; CHECK-NEXT: s_cmp_lg_u64 s[8:9], 0 +; CHECK-NEXT: s_subb_u32 s8, s3, s7 +; CHECK-NEXT: s_cselect_b64 s[2:3], -1, 0 +; CHECK-NEXT: s_sub_u32 s0, s0, s4 +; CHECK-NEXT: s_cselect_b64 s[6:7], -1, 0 +; CHECK-NEXT: s_cmp_lg_u64 s[6:7], 0 +; CHECK-NEXT: s_subb_u32 s1, s1, s5 +; CHECK-NEXT: v_mov_b32_e32 v2, s0 +; CHECK-NEXT: v_mov_b32_e32 v3, s1 +; CHECK-NEXT: s_cselect_b64 s[0:1], -1, 0 +; CHECK-NEXT: v_cndmask_b32_e64 v6, 0, -1, s[2:3] +; CHECK-NEXT: v_cndmask_b32_e64 v7, 0, -1, s[0:1] +; CHECK-NEXT: v_readfirstlane_b32 s0, v7 +; CHECK-NEXT: v_readfirstlane_b32 s2, v6 +; CHECK-NEXT: v_mov_b32_e32 v4, s10 +; CHECK-NEXT: v_mov_b32_e32 v5, s8 ; CHECK-NEXT: s_mov_b32 s1, s0 ; CHECK-NEXT: s_mov_b32 s3, s2 ; CHECK-NEXT: flat_store_dwordx4 v[0:1], v[2:5] @@ -280,15 +267,15 @@ define amdgpu_ps <2 x i64> @s_usub_v2i64(<2 x i64> inreg %val0, <2 x i64> inreg define amdgpu_ps i64 @s_uadd_i64(i64 inreg %val0, i64 inreg %val1, ptr %ptrval) { ; CHECK-LABEL: s_uadd_i64: ; CHECK: ; %bb.0: -; CHECK-NEXT: s_add_u32 s2, s0, s2 -; CHECK-NEXT: v_mov_b32_e32 v3, s1 -; CHECK-NEXT: s_addc_u32 s3, s1, s3 +; CHECK-NEXT: s_add_u32 s0, s0, s2 +; CHECK-NEXT: s_cselect_b64 s[4:5], -1, 0 +; CHECK-NEXT: s_cmp_lg_u64 s[4:5], 0 +; CHECK-NEXT: s_addc_u32 s1, s1, s3 ; CHECK-NEXT: v_mov_b32_e32 v2, s0 -; CHECK-NEXT: v_mov_b32_e32 v5, s3 -; CHECK-NEXT: v_cmp_lt_u64_e32 vcc, s[2:3], v[2:3] -; CHECK-NEXT: v_mov_b32_e32 v4, s2 -; CHECK-NEXT: flat_store_dwordx2 v[0:1], v[4:5] -; CHECK-NEXT: v_cndmask_b32_e64 v0, 0, -1, vcc +; CHECK-NEXT: v_mov_b32_e32 v3, s1 +; CHECK-NEXT: s_cselect_b64 s[0:1], -1, 0 +; 
CHECK-NEXT: flat_store_dwordx2 v[0:1], v[2:3] +; CHECK-NEXT: v_cndmask_b32_e64 v0, 0, -1, s[0:1] ; CHECK-NEXT: v_readfirstlane_b32 s0, v0 ; CHECK-NEXT: s_mov_b32 s1, s0 ; CHECK-NEXT: s_waitcnt vmcnt(0) lgkmcnt(0) @@ -305,10 +292,11 @@ define amdgpu_ps i64 @s_uadd_p1(i64 inreg %val0, i64 inreg %val1, ptr %ptrval) { ; CHECK-LABEL: s_uadd_p1: ; CHECK: ; %bb.0: ; CHECK-NEXT: s_add_u32 s0, s0, 1 +; CHECK-NEXT: s_cselect_b64 s[2:3], -1, 0 +; CHECK-NEXT: s_cmp_lg_u64 s[2:3], 0 ; CHECK-NEXT: s_addc_u32 s1, s1, 0 -; CHECK-NEXT: s_cmp_eq_u64 s[0:1], 0 -; CHECK-NEXT: v_mov_b32_e32 v3, s1 ; CHECK-NEXT: v_mov_b32_e32 v2, s0 +; CHECK-NEXT: v_mov_b32_e32 v3, s1 ; CHECK-NEXT: s_cselect_b64 s[0:1], -1, 0 ; CHECK-NEXT: flat_store_dwordx2 v[0:1], v[2:3] ; CHECK-NEXT: v_cndmask_b32_e64 v0, 0, -1, s[0:1] @@ -350,15 +338,15 @@ define amdgpu_ps i64 @s_uadd_n1(i64 inreg %val0, i64 inreg %val1, ptr %ptrval) { define amdgpu_ps i64 @s_usub_p1(i64 inreg %val0, i64 inreg %val1, ptr %ptrval) { ; CHECK-LABEL: s_usub_p1: ; CHECK: ; %bb.0: -; CHECK-NEXT: s_add_u32 s2, s0, -1 -; CHECK-NEXT: v_mov_b32_e32 v3, s1 -; CHECK-NEXT: s_addc_u32 s3, s1, -1 +; CHECK-NEXT: s_sub_u32 s0, s0, 1 +; CHECK-NEXT: s_cselect_b64 s[2:3], -1, 0 +; CHECK-NEXT: s_cmp_lg_u64 s[2:3], 0 +; CHECK-NEXT: s_subb_u32 s1, s1, 0 ; CHECK-NEXT: v_mov_b32_e32 v2, s0 -; CHECK-NEXT: v_mov_b32_e32 v5, s3 -; CHECK-NEXT: v_cmp_gt_u64_e32 vcc, s[2:3], v[2:3] -; CHECK-NEXT: v_mov_b32_e32 v4, s2 -; CHECK-NEXT: flat_store_dwordx2 v[0:1], v[4:5] -; CHECK-NEXT: v_cndmask_b32_e64 v0, 0, -1, vcc +; CHECK-NEXT: v_mov_b32_e32 v3, s1 +; CHECK-NEXT: s_cselect_b64 s[0:1], -1, 0 +; CHECK-NEXT: flat_store_dwordx2 v[0:1], v[2:3] +; CHECK-NEXT: v_cndmask_b32_e64 v0, 0, -1, s[0:1] ; CHECK-NEXT: v_readfirstlane_b32 s0, v0 ; CHECK-NEXT: s_mov_b32 s1, s0 ; CHECK-NEXT: s_waitcnt vmcnt(0) lgkmcnt(0) @@ -374,15 +362,15 @@ define amdgpu_ps i64 @s_usub_p1(i64 inreg %val0, i64 inreg %val1, ptr %ptrval) { define amdgpu_ps i64 @s_usub_n1(i64 inreg %val0, i64 inreg %val1, ptr %ptrval) { ; CHECK-LABEL: s_usub_n1: ; CHECK: ; %bb.0: -; CHECK-NEXT: s_add_u32 s2, s0, 1 -; CHECK-NEXT: v_mov_b32_e32 v3, s1 -; CHECK-NEXT: s_addc_u32 s3, s1, 0 +; CHECK-NEXT: s_sub_u32 s0, s0, -1 +; CHECK-NEXT: s_cselect_b64 s[2:3], -1, 0 +; CHECK-NEXT: s_cmp_lg_u64 s[2:3], 0 +; CHECK-NEXT: s_subb_u32 s1, s1, -1 ; CHECK-NEXT: v_mov_b32_e32 v2, s0 -; CHECK-NEXT: v_mov_b32_e32 v5, s3 -; CHECK-NEXT: v_cmp_gt_u64_e32 vcc, s[2:3], v[2:3] -; CHECK-NEXT: v_mov_b32_e32 v4, s2 -; CHECK-NEXT: flat_store_dwordx2 v[0:1], v[4:5] -; CHECK-NEXT: v_cndmask_b32_e64 v0, 0, -1, vcc +; CHECK-NEXT: v_mov_b32_e32 v3, s1 +; CHECK-NEXT: s_cselect_b64 s[0:1], -1, 0 +; CHECK-NEXT: flat_store_dwordx2 v[0:1], v[2:3] +; CHECK-NEXT: v_cndmask_b32_e64 v0, 0, -1, s[0:1] ; CHECK-NEXT: v_readfirstlane_b32 s0, v0 ; CHECK-NEXT: s_mov_b32 s1, s0 ; CHECK-NEXT: s_waitcnt vmcnt(0) lgkmcnt(0) diff --git a/llvm/test/CodeGen/AMDGPU/amdgpu-attributor-no-agpr.ll b/llvm/test/CodeGen/AMDGPU/amdgpu-attributor-min-agpr-alloc.ll index 2ad6e68..f730199 100644 --- a/llvm/test/CodeGen/AMDGPU/amdgpu-attributor-no-agpr.ll +++ b/llvm/test/CodeGen/AMDGPU/amdgpu-attributor-min-agpr-alloc.ll @@ -70,7 +70,7 @@ define amdgpu_kernel void @kernel_uses_asm_virtreg_def() { define amdgpu_kernel void @kernel_uses_asm_physreg_def_tuple() { ; CHECK-LABEL: define amdgpu_kernel void @kernel_uses_asm_physreg_def_tuple( -; CHECK-SAME: ) #[[ATTR1]] { +; CHECK-SAME: ) #[[ATTR2:[0-9]+]] { ; CHECK-NEXT: [[DEF:%.*]] = call i64 asm sideeffect " ; CHECK-NEXT: call void @use_most() ; CHECK-NEXT: 
ret void @@ -118,7 +118,7 @@ define amdgpu_kernel void @kernel_uses_asm_physreg() { define amdgpu_kernel void @kernel_uses_asm_physreg_tuple() { ; CHECK-LABEL: define amdgpu_kernel void @kernel_uses_asm_physreg_tuple( -; CHECK-SAME: ) #[[ATTR1]] { +; CHECK-SAME: ) #[[ATTR2]] { ; CHECK-NEXT: call void asm sideeffect " ; CHECK-NEXT: call void @use_most() ; CHECK-NEXT: ret void @@ -154,7 +154,7 @@ define void @func_uses_asm_physreg_agpr() { define void @func_uses_asm_physreg_agpr_tuple() { ; CHECK-LABEL: define void @func_uses_asm_physreg_agpr_tuple( -; CHECK-SAME: ) #[[ATTR1]] { +; CHECK-SAME: ) #[[ATTR2]] { ; CHECK-NEXT: call void asm sideeffect " ; CHECK-NEXT: call void @use_most() ; CHECK-NEXT: ret void @@ -168,7 +168,7 @@ declare void @unknown() define amdgpu_kernel void @kernel_calls_extern() { ; CHECK-LABEL: define amdgpu_kernel void @kernel_calls_extern( -; CHECK-SAME: ) #[[ATTR1]] { +; CHECK-SAME: ) #[[ATTR3:[0-9]+]] { ; CHECK-NEXT: call void @unknown() ; CHECK-NEXT: call void @use_most() ; CHECK-NEXT: ret void @@ -180,8 +180,8 @@ define amdgpu_kernel void @kernel_calls_extern() { define amdgpu_kernel void @kernel_calls_extern_marked_callsite() { ; CHECK-LABEL: define amdgpu_kernel void @kernel_calls_extern_marked_callsite( -; CHECK-SAME: ) #[[ATTR1]] { -; CHECK-NEXT: call void @unknown() #[[ATTR10:[0-9]+]] +; CHECK-SAME: ) #[[ATTR3]] { +; CHECK-NEXT: call void @unknown() #[[ATTR29:[0-9]+]] ; CHECK-NEXT: call void @use_most() ; CHECK-NEXT: ret void ; @@ -192,7 +192,7 @@ define amdgpu_kernel void @kernel_calls_extern_marked_callsite() { define amdgpu_kernel void @kernel_calls_indirect(ptr %indirect) { ; CHECK-LABEL: define amdgpu_kernel void @kernel_calls_indirect( -; CHECK-SAME: ptr [[INDIRECT:%.*]]) #[[ATTR1]] { +; CHECK-SAME: ptr [[INDIRECT:%.*]]) #[[ATTR3]] { ; CHECK-NEXT: call void [[INDIRECT]]() ; CHECK-NEXT: call void @use_most() ; CHECK-NEXT: ret void @@ -204,8 +204,8 @@ define amdgpu_kernel void @kernel_calls_indirect(ptr %indirect) { define amdgpu_kernel void @kernel_calls_indirect_marked_callsite(ptr %indirect) { ; CHECK-LABEL: define amdgpu_kernel void @kernel_calls_indirect_marked_callsite( -; CHECK-SAME: ptr [[INDIRECT:%.*]]) #[[ATTR1]] { -; CHECK-NEXT: call void [[INDIRECT]]() #[[ATTR10]] +; CHECK-SAME: ptr [[INDIRECT:%.*]]) #[[ATTR3]] { +; CHECK-NEXT: call void [[INDIRECT]]() #[[ATTR29]] ; CHECK-NEXT: call void @use_most() ; CHECK-NEXT: ret void ; @@ -316,7 +316,7 @@ define amdgpu_kernel void @kernel_calls_workitem_id_x(ptr addrspace(1) %out) { define amdgpu_kernel void @indirect_calls_none_agpr(i1 %cond) { ; CHECK-LABEL: define amdgpu_kernel void @indirect_calls_none_agpr( -; CHECK-SAME: i1 [[COND:%.*]]) #[[ATTR1]] { +; CHECK-SAME: i1 [[COND:%.*]]) #[[ATTR0]] { ; CHECK-NEXT: [[FPTR:%.*]] = select i1 [[COND]], ptr @empty, ptr @also_empty ; CHECK-NEXT: [[TMP1:%.*]] = icmp eq ptr [[FPTR]], @also_empty ; CHECK-NEXT: br i1 [[TMP1]], label [[TMP2:%.*]], label [[TMP3:%.*]] @@ -342,7 +342,7 @@ define amdgpu_kernel void @indirect_calls_none_agpr(i1 %cond) { define amdgpu_kernel void @kernel_uses_asm_virtreg_def_struct_0() { ; CHECK-LABEL: define amdgpu_kernel void @kernel_uses_asm_virtreg_def_struct_0( -; CHECK-SAME: ) #[[ATTR1]] { +; CHECK-SAME: ) #[[ATTR2]] { ; CHECK-NEXT: [[DEF:%.*]] = call { i32, i32 } asm sideeffect " ; CHECK-NEXT: call void @use_most() ; CHECK-NEXT: ret void @@ -354,7 +354,7 @@ define amdgpu_kernel void @kernel_uses_asm_virtreg_def_struct_0() { define amdgpu_kernel void @kernel_uses_asm_virtreg_use_struct_1() { ; CHECK-LABEL: define amdgpu_kernel void 
@kernel_uses_asm_virtreg_use_struct_1( -; CHECK-SAME: ) #[[ATTR1]] { +; CHECK-SAME: ) #[[ATTR5:[0-9]+]] { ; CHECK-NEXT: [[DEF:%.*]] = call { i32, <2 x i32> } asm sideeffect " ; CHECK-NEXT: call void @use_most() ; CHECK-NEXT: ret void @@ -378,7 +378,7 @@ define amdgpu_kernel void @kernel_uses_asm_virtreg_use_struct_2() { define amdgpu_kernel void @kernel_uses_asm_virtreg_ptr_ty() { ; CHECK-LABEL: define amdgpu_kernel void @kernel_uses_asm_virtreg_ptr_ty( -; CHECK-SAME: ) #[[ATTR1]] { +; CHECK-SAME: ) #[[ATTR2]] { ; CHECK-NEXT: call void asm sideeffect " ; CHECK-NEXT: call void @use_most() ; CHECK-NEXT: ret void @@ -390,7 +390,7 @@ define amdgpu_kernel void @kernel_uses_asm_virtreg_ptr_ty() { define amdgpu_kernel void @kernel_uses_asm_virtreg_def_ptr_ty() { ; CHECK-LABEL: define amdgpu_kernel void @kernel_uses_asm_virtreg_def_ptr_ty( -; CHECK-SAME: ) #[[ATTR1]] { +; CHECK-SAME: ) #[[ATTR2]] { ; CHECK-NEXT: [[DEF:%.*]] = call ptr asm sideeffect " ; CHECK-NEXT: call void @use_most() ; CHECK-NEXT: ret void @@ -402,7 +402,7 @@ define amdgpu_kernel void @kernel_uses_asm_virtreg_def_ptr_ty() { define amdgpu_kernel void @kernel_uses_asm_virtreg_def_vector_ptr_ty() { ; CHECK-LABEL: define amdgpu_kernel void @kernel_uses_asm_virtreg_def_vector_ptr_ty( -; CHECK-SAME: ) #[[ATTR1]] { +; CHECK-SAME: ) #[[ATTR5]] { ; CHECK-NEXT: [[DEF:%.*]] = call <2 x ptr> asm sideeffect " ; CHECK-NEXT: call void @use_most() ; CHECK-NEXT: ret void @@ -414,7 +414,7 @@ define amdgpu_kernel void @kernel_uses_asm_virtreg_def_vector_ptr_ty() { define amdgpu_kernel void @kernel_uses_asm_physreg_def_struct_0() { ; CHECK-LABEL: define amdgpu_kernel void @kernel_uses_asm_physreg_def_struct_0( -; CHECK-SAME: ) #[[ATTR1]] { +; CHECK-SAME: ) #[[ATTR6:[0-9]+]] { ; CHECK-NEXT: [[DEF:%.*]] = call { i32, i32 } asm sideeffect " ; CHECK-NEXT: call void @use_most() ; CHECK-NEXT: ret void @@ -426,7 +426,7 @@ define amdgpu_kernel void @kernel_uses_asm_physreg_def_struct_0() { define amdgpu_kernel void @kernel_uses_asm_clobber() { ; CHECK-LABEL: define amdgpu_kernel void @kernel_uses_asm_clobber( -; CHECK-SAME: ) #[[ATTR1]] { +; CHECK-SAME: ) #[[ATTR7:[0-9]+]] { ; CHECK-NEXT: call void asm sideeffect " ; CHECK-NEXT: call void @use_most() ; CHECK-NEXT: ret void @@ -438,7 +438,7 @@ define amdgpu_kernel void @kernel_uses_asm_clobber() { define amdgpu_kernel void @kernel_uses_asm_clobber_tuple() { ; CHECK-LABEL: define amdgpu_kernel void @kernel_uses_asm_clobber_tuple( -; CHECK-SAME: ) #[[ATTR1]] { +; CHECK-SAME: ) #[[ATTR8:[0-9]+]] { ; CHECK-NEXT: call void asm sideeffect " ; CHECK-NEXT: call void @use_most() ; CHECK-NEXT: ret void @@ -450,7 +450,7 @@ define amdgpu_kernel void @kernel_uses_asm_clobber_tuple() { define amdgpu_kernel void @kernel_uses_asm_clobber_oob() { ; CHECK-LABEL: define amdgpu_kernel void @kernel_uses_asm_clobber_oob( -; CHECK-SAME: ) #[[ATTR1]] { +; CHECK-SAME: ) #[[ATTR9:[0-9]+]] { ; CHECK-NEXT: call void asm sideeffect " ; CHECK-NEXT: call void @use_most() ; CHECK-NEXT: ret void @@ -462,7 +462,7 @@ define amdgpu_kernel void @kernel_uses_asm_clobber_oob() { define amdgpu_kernel void @kernel_uses_asm_clobber_max() { ; CHECK-LABEL: define amdgpu_kernel void @kernel_uses_asm_clobber_max( -; CHECK-SAME: ) #[[ATTR1]] { +; CHECK-SAME: ) #[[ATTR9]] { ; CHECK-NEXT: call void asm sideeffect " ; CHECK-NEXT: call void @use_most() ; CHECK-NEXT: ret void @@ -474,7 +474,7 @@ define amdgpu_kernel void @kernel_uses_asm_clobber_max() { define amdgpu_kernel void @kernel_uses_asm_physreg_oob() { ; CHECK-LABEL: define amdgpu_kernel void 
@kernel_uses_asm_physreg_oob( -; CHECK-SAME: ) #[[ATTR1]] { +; CHECK-SAME: ) #[[ATTR9]] { ; CHECK-NEXT: call void asm sideeffect " ; CHECK-NEXT: call void @use_most() ; CHECK-NEXT: ret void @@ -486,7 +486,7 @@ define amdgpu_kernel void @kernel_uses_asm_physreg_oob() { define amdgpu_kernel void @kernel_uses_asm_virtreg_def_max_ty() { ; CHECK-LABEL: define amdgpu_kernel void @kernel_uses_asm_virtreg_def_max_ty( -; CHECK-SAME: ) #[[ATTR1]] { +; CHECK-SAME: ) #[[ATTR10:[0-9]+]] { ; CHECK-NEXT: [[DEF:%.*]] = call <32 x i32> asm sideeffect " ; CHECK-NEXT: call void @use_most() ; CHECK-NEXT: ret void @@ -498,7 +498,7 @@ define amdgpu_kernel void @kernel_uses_asm_virtreg_def_max_ty() { define amdgpu_kernel void @kernel_uses_asm_virtreg_use_max_ty() { ; CHECK-LABEL: define amdgpu_kernel void @kernel_uses_asm_virtreg_use_max_ty( -; CHECK-SAME: ) #[[ATTR1]] { +; CHECK-SAME: ) #[[ATTR10]] { ; CHECK-NEXT: call void asm sideeffect " ; CHECK-NEXT: call void @use_most() ; CHECK-NEXT: ret void @@ -510,7 +510,7 @@ define amdgpu_kernel void @kernel_uses_asm_virtreg_use_max_ty() { define amdgpu_kernel void @kernel_uses_asm_virtreg_use_def_max_ty() { ; CHECK-LABEL: define amdgpu_kernel void @kernel_uses_asm_virtreg_use_def_max_ty( -; CHECK-SAME: ) #[[ATTR1]] { +; CHECK-SAME: ) #[[ATTR10]] { ; CHECK-NEXT: [[DEF:%.*]] = call <32 x i32> asm sideeffect " ; CHECK-NEXT: call void @use_most() ; CHECK-NEXT: ret void @@ -522,7 +522,7 @@ define amdgpu_kernel void @kernel_uses_asm_virtreg_use_def_max_ty() { define amdgpu_kernel void @vreg_use_exceeds_register_file() { ; CHECK-LABEL: define amdgpu_kernel void @vreg_use_exceeds_register_file( -; CHECK-SAME: ) #[[ATTR1]] { +; CHECK-SAME: ) #[[ATTR9]] { ; CHECK-NEXT: call void asm sideeffect " ; CHECK-NEXT: call void @use_most() ; CHECK-NEXT: ret void @@ -534,7 +534,7 @@ define amdgpu_kernel void @vreg_use_exceeds_register_file() { define amdgpu_kernel void @vreg_def_exceeds_register_file() { ; CHECK-LABEL: define amdgpu_kernel void @vreg_def_exceeds_register_file( -; CHECK-SAME: ) #[[ATTR1]] { +; CHECK-SAME: ) #[[ATTR9]] { ; CHECK-NEXT: [[DEF:%.*]] = call <257 x i32> asm sideeffect " ; CHECK-NEXT: call void @use_most() ; CHECK-NEXT: ret void @@ -546,7 +546,7 @@ define amdgpu_kernel void @vreg_def_exceeds_register_file() { define amdgpu_kernel void @multiple() { ; CHECK-LABEL: define amdgpu_kernel void @multiple( -; CHECK-SAME: ) #[[ATTR1]] { +; CHECK-SAME: ) #[[ATTR10]] { ; CHECK-NEXT: [[DEF:%.*]] = call { <16 x i32>, <8 x i32>, <8 x i32> } asm sideeffect " ; CHECK-NEXT: call void @use_most() ; CHECK-NEXT: ret void @@ -558,7 +558,7 @@ define amdgpu_kernel void @multiple() { define amdgpu_kernel void @earlyclobber_0() { ; CHECK-LABEL: define amdgpu_kernel void @earlyclobber_0( -; CHECK-SAME: ) #[[ATTR1]] { +; CHECK-SAME: ) #[[ATTR11:[0-9]+]] { ; CHECK-NEXT: [[DEF:%.*]] = call <8 x i32> asm sideeffect " ; CHECK-NEXT: call void @use_most() ; CHECK-NEXT: ret void @@ -570,7 +570,7 @@ define amdgpu_kernel void @earlyclobber_0() { define amdgpu_kernel void @earlyclobber_1() { ; CHECK-LABEL: define amdgpu_kernel void @earlyclobber_1( -; CHECK-SAME: ) #[[ATTR1]] { +; CHECK-SAME: ) #[[ATTR12:[0-9]+]] { ; CHECK-NEXT: [[DEF:%.*]] = call { <8 x i32>, <16 x i32> } asm sideeffect " ; CHECK-NEXT: call void @use_most() ; CHECK-NEXT: ret void @@ -582,7 +582,7 @@ define amdgpu_kernel void @earlyclobber_1() { define amdgpu_kernel void @physreg_a32__vreg_a256__vreg_a512() { ; CHECK-LABEL: define amdgpu_kernel void @physreg_a32__vreg_a256__vreg_a512( -; CHECK-SAME: ) #[[ATTR1]] { +; CHECK-SAME: 
) #[[ATTR13:[0-9]+]] { ; CHECK-NEXT: call void asm sideeffect " ; CHECK-NEXT: call void @use_most() ; CHECK-NEXT: ret void @@ -594,7 +594,7 @@ define amdgpu_kernel void @physreg_a32__vreg_a256__vreg_a512() { define amdgpu_kernel void @physreg_def_a32__def_vreg_a256__def_vreg_a512() { ; CHECK-LABEL: define amdgpu_kernel void @physreg_def_a32__def_vreg_a256__def_vreg_a512( -; CHECK-SAME: ) #[[ATTR1]] { +; CHECK-SAME: ) #[[ATTR13]] { ; CHECK-NEXT: [[TMP1:%.*]] = call { i32, <8 x i32>, <16 x i32> } asm sideeffect " ; CHECK-NEXT: call void @use_most() ; CHECK-NEXT: ret void @@ -606,7 +606,7 @@ define amdgpu_kernel void @physreg_def_a32__def_vreg_a256__def_vreg_a512() { define amdgpu_kernel void @physreg_def_a32___def_vreg_a512_use_vreg_a256() { ; CHECK-LABEL: define amdgpu_kernel void @physreg_def_a32___def_vreg_a512_use_vreg_a256( -; CHECK-SAME: ) #[[ATTR1]] { +; CHECK-SAME: ) #[[ATTR14:[0-9]+]] { ; CHECK-NEXT: [[TMP1:%.*]] = call { i32, <16 x i32> } asm sideeffect " ; CHECK-NEXT: call void @use_most() ; CHECK-NEXT: ret void @@ -618,7 +618,7 @@ define amdgpu_kernel void @physreg_def_a32___def_vreg_a512_use_vreg_a256() { define amdgpu_kernel void @mixed_physreg_vreg_tuples_0() { ; CHECK-LABEL: define amdgpu_kernel void @mixed_physreg_vreg_tuples_0( -; CHECK-SAME: ) #[[ATTR1]] { +; CHECK-SAME: ) #[[ATTR11]] { ; CHECK-NEXT: call void asm sideeffect " ; CHECK-NEXT: call void @use_most() ; CHECK-NEXT: ret void @@ -630,7 +630,7 @@ define amdgpu_kernel void @mixed_physreg_vreg_tuples_0() { define amdgpu_kernel void @mixed_physreg_vreg_tuples_1() { ; CHECK-LABEL: define amdgpu_kernel void @mixed_physreg_vreg_tuples_1( -; CHECK-SAME: ) #[[ATTR1]] { +; CHECK-SAME: ) #[[ATTR15:[0-9]+]] { ; CHECK-NEXT: call void asm sideeffect " ; CHECK-NEXT: call void @use_most() ; CHECK-NEXT: ret void @@ -642,7 +642,7 @@ define amdgpu_kernel void @mixed_physreg_vreg_tuples_1() { define amdgpu_kernel void @physreg_raises_limit() { ; CHECK-LABEL: define amdgpu_kernel void @physreg_raises_limit( -; CHECK-SAME: ) #[[ATTR1]] { +; CHECK-SAME: ) #[[ATTR16:[0-9]+]] { ; CHECK-NEXT: call void asm sideeffect " ; CHECK-NEXT: call void @use_most() ; CHECK-NEXT: ret void @@ -652,10 +652,9 @@ define amdgpu_kernel void @physreg_raises_limit() { ret void } -; FIXME: This should require 9. We cannot allocate an a128 at a0. 
define amdgpu_kernel void @physreg_tuple_alignment_raises_limit() { ; CHECK-LABEL: define amdgpu_kernel void @physreg_tuple_alignment_raises_limit( -; CHECK-SAME: ) #[[ATTR1]] { +; CHECK-SAME: ) #[[ATTR11]] { ; CHECK-NEXT: call void asm sideeffect " ; CHECK-NEXT: call void @use_most() ; CHECK-NEXT: ret void @@ -667,7 +666,7 @@ define amdgpu_kernel void @physreg_tuple_alignment_raises_limit() { define amdgpu_kernel void @align3_virtreg() { ; CHECK-LABEL: define amdgpu_kernel void @align3_virtreg( -; CHECK-SAME: ) #[[ATTR1]] { +; CHECK-SAME: ) #[[ATTR6]] { ; CHECK-NEXT: call void asm sideeffect " ; CHECK-NEXT: call void @use_most() ; CHECK-NEXT: ret void @@ -679,7 +678,7 @@ define amdgpu_kernel void @align3_virtreg() { define amdgpu_kernel void @align3_align4_virtreg() { ; CHECK-LABEL: define amdgpu_kernel void @align3_align4_virtreg( -; CHECK-SAME: ) #[[ATTR1]] { +; CHECK-SAME: ) #[[ATTR15]] { ; CHECK-NEXT: call void asm sideeffect " ; CHECK-NEXT: call void @use_most() ; CHECK-NEXT: ret void @@ -691,7 +690,7 @@ define amdgpu_kernel void @align3_align4_virtreg() { define amdgpu_kernel void @align2_align4_virtreg() { ; CHECK-LABEL: define amdgpu_kernel void @align2_align4_virtreg( -; CHECK-SAME: ) #[[ATTR1]] { +; CHECK-SAME: ) #[[ATTR15]] { ; CHECK-NEXT: call void asm sideeffect " ; CHECK-NEXT: call void @use_most() ; CHECK-NEXT: ret void @@ -703,7 +702,7 @@ define amdgpu_kernel void @align2_align4_virtreg() { define amdgpu_kernel void @kernel_uses_write_register_a55() { ; CHECK-LABEL: define amdgpu_kernel void @kernel_uses_write_register_a55( -; CHECK-SAME: ) #[[ATTR3:[0-9]+]] { +; CHECK-SAME: ) #[[ATTR17:[0-9]+]] { ; CHECK-NEXT: call void @llvm.write_register.i32(metadata [[META0:![0-9]+]], i32 0) ; CHECK-NEXT: ret void ; @@ -713,71 +712,313 @@ define amdgpu_kernel void @kernel_uses_write_register_a55() { define amdgpu_kernel void @kernel_uses_write_register_v55() { ; CHECK-LABEL: define amdgpu_kernel void @kernel_uses_write_register_v55( -; CHECK-SAME: ) #[[ATTR4:[0-9]+]] { +; CHECK-SAME: ) #[[ATTR0]] { ; CHECK-NEXT: call void @llvm.write_register.i32(metadata [[META1:![0-9]+]], i32 0) +; CHECK-NEXT: call void @use_most() ; CHECK-NEXT: ret void ; call void @llvm.write_register.i64(metadata !1, i32 0) + call void @use_most() ret void } define amdgpu_kernel void @kernel_uses_write_register_a55_57() { ; CHECK-LABEL: define amdgpu_kernel void @kernel_uses_write_register_a55_57( -; CHECK-SAME: ) #[[ATTR3]] { +; CHECK-SAME: ) #[[ATTR18:[0-9]+]] { ; CHECK-NEXT: call void @llvm.write_register.i96(metadata [[META2:![0-9]+]], i96 0) +; CHECK-NEXT: call void @use_most() ; CHECK-NEXT: ret void ; call void @llvm.write_register.i64(metadata !2, i96 0) + call void @use_most() ret void } define amdgpu_kernel void @kernel_uses_read_register_a55(ptr addrspace(1) %ptr) { ; CHECK-LABEL: define amdgpu_kernel void @kernel_uses_read_register_a55( -; CHECK-SAME: ptr addrspace(1) [[PTR:%.*]]) #[[ATTR3]] { +; CHECK-SAME: ptr addrspace(1) [[PTR:%.*]]) #[[ATTR19:[0-9]+]] { ; CHECK-NEXT: [[REG:%.*]] = call i32 @llvm.read_register.i32(metadata [[META0]]) ; CHECK-NEXT: store i32 [[REG]], ptr addrspace(1) [[PTR]], align 4 +; CHECK-NEXT: call void @use_most() ; CHECK-NEXT: ret void ; %reg = call i32 @llvm.read_register.i64(metadata !0) store i32 %reg, ptr addrspace(1) %ptr + call void @use_most() ret void } define amdgpu_kernel void @kernel_uses_read_volatile_register_a55(ptr addrspace(1) %ptr) { ; CHECK-LABEL: define amdgpu_kernel void @kernel_uses_read_volatile_register_a55( -; CHECK-SAME: ptr addrspace(1) [[PTR:%.*]]) 
#[[ATTR3]] { +; CHECK-SAME: ptr addrspace(1) [[PTR:%.*]]) #[[ATTR19]] { ; CHECK-NEXT: [[REG:%.*]] = call i32 @llvm.read_volatile_register.i32(metadata [[META0]]) ; CHECK-NEXT: store i32 [[REG]], ptr addrspace(1) [[PTR]], align 4 +; CHECK-NEXT: call void @use_most() ; CHECK-NEXT: ret void ; %reg = call i32 @llvm.read_volatile_register.i64(metadata !0) store i32 %reg, ptr addrspace(1) %ptr + call void @use_most() ret void } define amdgpu_kernel void @kernel_uses_read_register_a56_59(ptr addrspace(1) %ptr) { ; CHECK-LABEL: define amdgpu_kernel void @kernel_uses_read_register_a56_59( -; CHECK-SAME: ptr addrspace(1) [[PTR:%.*]]) #[[ATTR3]] { +; CHECK-SAME: ptr addrspace(1) [[PTR:%.*]]) #[[ATTR20:[0-9]+]] { ; CHECK-NEXT: [[REG:%.*]] = call i128 @llvm.read_register.i128(metadata [[META3:![0-9]+]]) ; CHECK-NEXT: store i128 [[REG]], ptr addrspace(1) [[PTR]], align 8 +; CHECK-NEXT: call void @use_most() ; CHECK-NEXT: ret void ; %reg = call i128 @llvm.read_register.i64(metadata !3) store i128 %reg, ptr addrspace(1) %ptr + call void @use_most() ret void } define amdgpu_kernel void @kernel_uses_write_register_out_of_bounds_a256() { ; CHECK-LABEL: define amdgpu_kernel void @kernel_uses_write_register_out_of_bounds_a256( -; CHECK-SAME: ) #[[ATTR3]] { +; CHECK-SAME: ) #[[ATTR9]] { ; CHECK-NEXT: call void @llvm.write_register.i32(metadata [[META4:![0-9]+]], i32 0) +; CHECK-NEXT: call void @use_most() ; CHECK-NEXT: ret void ; call void @llvm.write_register.i64(metadata !4, i32 0) + call void @use_most() + ret void +} + +define amdgpu_kernel void @kernel_multiple_uses() { +; CHECK-LABEL: define amdgpu_kernel void @kernel_multiple_uses( +; CHECK-SAME: ) #[[ATTR5]] { +; CHECK-NEXT: call void asm sideeffect " +; CHECK-NEXT: call void asm sideeffect " +; CHECK-NEXT: call void asm sideeffect " +; CHECK-NEXT: call void @use_most() +; CHECK-NEXT: ret void +; + call void asm sideeffect "; use $0", "a"(i64 poison) + call void asm sideeffect "; use $0", "a"(i32 poison) + call void asm sideeffect "; use $0", "a"(i128 poison) + call void @use_most() + ret void +} + +define amdgpu_kernel void @kernel_multiple_defs() { +; CHECK-LABEL: define amdgpu_kernel void @kernel_multiple_defs( +; CHECK-SAME: ) #[[ATTR5]] { +; CHECK-NEXT: [[TMP1:%.*]] = call i64 asm sideeffect " +; CHECK-NEXT: [[TMP2:%.*]] = call i32 asm sideeffect " +; CHECK-NEXT: [[TMP3:%.*]] = call i128 asm sideeffect " +; CHECK-NEXT: call void @use_most() +; CHECK-NEXT: ret void +; + call i64 asm sideeffect "; def $0", "=a"() + call i32 asm sideeffect "; def $0", "=a"() + call i128 asm sideeffect "; def $0", "=a"() + call void @use_most() + ret void +} + +define amdgpu_kernel void @kernel_multiple_use_defs() { +; CHECK-LABEL: define amdgpu_kernel void @kernel_multiple_use_defs( +; CHECK-SAME: ) #[[ATTR5]] { +; CHECK-NEXT: call void asm sideeffect " +; CHECK-NEXT: [[TMP1:%.*]] = call i128 asm sideeffect " +; CHECK-NEXT: call void @use_most() +; CHECK-NEXT: ret void +; + call void asm sideeffect "; use $0", "a"(i32 poison) + call i128 asm sideeffect "; def $0", "=a"() + call void @use_most() + ret void +} + +define void @callgraph_b() { +; CHECK-LABEL: define void @callgraph_b( +; CHECK-SAME: ) #[[ATTR15]] { +; CHECK-NEXT: [[TMP1:%.*]] = call <4 x i32> asm sideeffect " +; CHECK-NEXT: call void asm sideeffect " +; CHECK-NEXT: call void @use_most() +; CHECK-NEXT: ret void +; + call <4 x i32> asm sideeffect "; def $0", "=a"() + call void asm sideeffect "; use $0", "a"(<8 x i32> poison) + call void @use_most() + ret void +} + +define void @callgraph_c() { +; 
CHECK-LABEL: define void @callgraph_c( +; CHECK-SAME: ) #[[ATTR2]] { +; CHECK-NEXT: [[TMP1:%.*]] = call i32 asm sideeffect " +; CHECK-NEXT: call void asm sideeffect " +; CHECK-NEXT: call void @use_most() +; CHECK-NEXT: ret void +; + call i32 asm sideeffect "; def $0", "=a"() + call void asm sideeffect "; use $0", "a"(<2 x i32> poison) + call void @use_most() + ret void +} + +define void @callgraph_a(i1 %cond) { +; CHECK-LABEL: define void @callgraph_a( +; CHECK-SAME: i1 [[COND:%.*]]) #[[ATTR15]] { +; CHECK-NEXT: br i1 [[COND]], label [[A:%.*]], label [[B:%.*]] +; CHECK: a: +; CHECK-NEXT: call void @callgraph_b() +; CHECK-NEXT: ret void +; CHECK: b: +; CHECK-NEXT: call void @callgraph_c() +; CHECK-NEXT: ret void +; + br i1 %cond, label %a, label %b + +a: + call void @callgraph_b() + ret void + +b: + call void @callgraph_c() + ret void +} + + +define void @kernel_max_callgraph(i1 %cond) { +; CHECK-LABEL: define void @kernel_max_callgraph( +; CHECK-SAME: i1 [[COND:%.*]]) #[[ATTR15]] { +; CHECK-NEXT: call void @callgraph_a(i1 [[COND]]) +; CHECK-NEXT: ret void +; + call void @callgraph_a(i1 %cond) + ret void +} + +define amdgpu_kernel void @kernel_uses_all_virtregs() #1 { +; CHECK-LABEL: define amdgpu_kernel void @kernel_uses_all_virtregs( +; CHECK-SAME: ) #[[ATTR21:[0-9]+]] { +; CHECK-NEXT: call void asm sideeffect " +; CHECK-NEXT: call void @use_most() +; CHECK-NEXT: ret void +; + call void asm sideeffect "; use $0", "a,a,a,a,a,a,a,a"(<32 x i32> poison, <32 x i32> poison, <32 x i32> poison, <32 x i32> poison, <32 x i32> poison, <32 x i32> poison, <32 x i32> poison, <32 x i32> poison) + call void @use_most() + ret void +} + +define amdgpu_kernel void @kernel_uses_all_virtregs_plus_1() #1 { +; CHECK-LABEL: define amdgpu_kernel void @kernel_uses_all_virtregs_plus_1( +; CHECK-SAME: ) #[[ATTR21]] { +; CHECK-NEXT: call void asm sideeffect " +; CHECK-NEXT: call void @use_most() +; CHECK-NEXT: ret void +; + call void asm sideeffect "; use $0", "a,a,a,a,a,a,a,a,a"(<32 x i32> poison, <32 x i32> poison, <32 x i32> poison, <32 x i32> poison, <32 x i32> poison, <32 x i32> poison, <32 x i32> poison, <32 x i32> poison, i32 poison) + call void @use_most() + ret void +} + +define void @recursive() { +; CHECK-LABEL: define void @recursive( +; CHECK-SAME: ) #[[ATTR22:[0-9]+]] { +; CHECK-NEXT: call void asm sideeffect " +; CHECK-NEXT: call void @use_most() +; CHECK-NEXT: call void @recursive() +; CHECK-NEXT: ret void +; + call void asm sideeffect "; use $0", "a"(<7 x i32> poison) + call void @use_most() + call void @recursive() + ret void +} + +define void @indirect_0() { +; CHECK-LABEL: define void @indirect_0( +; CHECK-SAME: ) #[[ATTR22]] { +; CHECK-NEXT: call void asm sideeffect " +; CHECK-NEXT: call void @use_most() +; CHECK-NEXT: ret void +; + call void asm sideeffect "; use $0", "a"(<7 x i32> poison) + call void @use_most() + ret void +} + +define void @indirect_1() { +; CHECK-LABEL: define void @indirect_1( +; CHECK-SAME: ) #[[ATTR23:[0-9]+]] { +; CHECK-NEXT: [[TMP1:%.*]] = call <3 x i32> asm sideeffect " +; CHECK-NEXT: call void @use_most() +; CHECK-NEXT: ret void +; + call <3 x i32> asm sideeffect "; def $0", "=a"() + call void @use_most() + ret void +} + +define amdgpu_kernel void @knowable_indirect_call(i1 %cond) { +; CHECK-LABEL: define amdgpu_kernel void @knowable_indirect_call( +; CHECK-SAME: i1 [[COND:%.*]]) #[[ATTR22]] { +; CHECK-NEXT: [[FPTR:%.*]] = select i1 [[COND]], ptr @indirect_0, ptr @indirect_1 +; CHECK-NEXT: [[TMP1:%.*]] = icmp eq ptr [[FPTR]], @indirect_1 +; CHECK-NEXT: br i1 [[TMP1]], 
label [[TMP2:%.*]], label [[TMP3:%.*]] +; CHECK: 2: +; CHECK-NEXT: call void @indirect_1() +; CHECK-NEXT: br label [[TMP6:%.*]] +; CHECK: 3: +; CHECK-NEXT: br i1 true, label [[TMP4:%.*]], label [[TMP5:%.*]] +; CHECK: 4: +; CHECK-NEXT: call void @indirect_0() +; CHECK-NEXT: br label [[TMP6]] +; CHECK: 5: +; CHECK-NEXT: unreachable +; CHECK: 6: +; CHECK-NEXT: call void @use_most() +; CHECK-NEXT: ret void +; + %fptr = select i1 %cond, ptr @indirect_0, ptr @indirect_1 + call void %fptr() + call void @use_most() + ret void +} + +define amdgpu_kernel void @calls_poison(i1 %cond) { +; CHECK-LABEL: define amdgpu_kernel void @calls_poison( +; CHECK-SAME: i1 [[COND:%.*]]) #[[ATTR3]] { +; CHECK-NEXT: call void poison() +; CHECK-NEXT: call void @use_most() +; CHECK-NEXT: ret void +; + call void poison() + call void @use_most() + ret void +} + +define amdgpu_kernel void @calls_null(i1 %cond) { +; CHECK-LABEL: define amdgpu_kernel void @calls_null( +; CHECK-SAME: i1 [[COND:%.*]]) #[[ATTR3]] { +; CHECK-NEXT: call void null() +; CHECK-NEXT: call void @use_most() +; CHECK-NEXT: ret void +; + call void null() + call void @use_most() + ret void +} + +define amdgpu_kernel void @indirect_unknown(ptr %fptr) { +; CHECK-LABEL: define amdgpu_kernel void @indirect_unknown( +; CHECK-SAME: ptr [[FPTR:%.*]]) #[[ATTR3]] { +; CHECK-NEXT: call void [[FPTR]]() +; CHECK-NEXT: ret void +; + call void %fptr() ret void } attributes #0 = { "amdgpu-agpr-alloc"="0" } +attributes #1 = { "amdgpu-waves-per-eu"="1,1" } !0 = !{!"a55"} !1 = !{!"v55"} @@ -787,16 +1028,35 @@ attributes #0 = { "amdgpu-agpr-alloc"="0" } ;. ; CHECK: attributes #[[ATTR0]] = { "amdgpu-agpr-alloc"="0" "target-cpu"="gfx90a" "uniform-work-group-size"="false" } -; CHECK: attributes #[[ATTR1]] = { "target-cpu"="gfx90a" "uniform-work-group-size"="false" } -; CHECK: attributes #[[ATTR2:[0-9]+]] = { convergent nocallback nofree nosync nounwind willreturn memory(none) "target-cpu"="gfx90a" } -; CHECK: attributes #[[ATTR3]] = { "amdgpu-no-cluster-id-x" "amdgpu-no-cluster-id-y" "amdgpu-no-cluster-id-z" "amdgpu-no-completion-action" "amdgpu-no-default-queue" "amdgpu-no-dispatch-id" "amdgpu-no-dispatch-ptr" "amdgpu-no-flat-scratch-init" "amdgpu-no-heap-ptr" "amdgpu-no-hostcall-ptr" "amdgpu-no-implicitarg-ptr" "amdgpu-no-lds-kernel-id" "amdgpu-no-multigrid-sync-arg" "amdgpu-no-queue-ptr" "amdgpu-no-workgroup-id-x" "amdgpu-no-workgroup-id-y" "amdgpu-no-workgroup-id-z" "amdgpu-no-workitem-id-x" "amdgpu-no-workitem-id-y" "amdgpu-no-workitem-id-z" "target-cpu"="gfx90a" "uniform-work-group-size"="false" } -; CHECK: attributes #[[ATTR4]] = { "amdgpu-agpr-alloc"="0" "amdgpu-no-cluster-id-x" "amdgpu-no-cluster-id-y" "amdgpu-no-cluster-id-z" "amdgpu-no-completion-action" "amdgpu-no-default-queue" "amdgpu-no-dispatch-id" "amdgpu-no-dispatch-ptr" "amdgpu-no-flat-scratch-init" "amdgpu-no-heap-ptr" "amdgpu-no-hostcall-ptr" "amdgpu-no-implicitarg-ptr" "amdgpu-no-lds-kernel-id" "amdgpu-no-multigrid-sync-arg" "amdgpu-no-queue-ptr" "amdgpu-no-workgroup-id-x" "amdgpu-no-workgroup-id-y" "amdgpu-no-workgroup-id-z" "amdgpu-no-workitem-id-x" "amdgpu-no-workitem-id-y" "amdgpu-no-workitem-id-z" "target-cpu"="gfx90a" "uniform-work-group-size"="false" } -; CHECK: attributes #[[ATTR5:[0-9]+]] = { nocallback nofree nosync nounwind speculatable willreturn memory(none) "target-cpu"="gfx90a" } -; CHECK: attributes #[[ATTR6:[0-9]+]] = { nocallback nofree nounwind willreturn memory(argmem: readwrite) "target-cpu"="gfx90a" } -; CHECK: attributes #[[ATTR7:[0-9]+]] = { nocallback nofree nosync nounwind 
willreturn memory(read) "target-cpu"="gfx90a" } -; CHECK: attributes #[[ATTR8:[0-9]+]] = { nounwind "target-cpu"="gfx90a" } -; CHECK: attributes #[[ATTR9:[0-9]+]] = { nocallback nounwind "target-cpu"="gfx90a" } -; CHECK: attributes #[[ATTR10]] = { "amdgpu-agpr-alloc"="0" } +; CHECK: attributes #[[ATTR1]] = { "amdgpu-agpr-alloc"="1" "target-cpu"="gfx90a" "uniform-work-group-size"="false" } +; CHECK: attributes #[[ATTR2]] = { "amdgpu-agpr-alloc"="2" "target-cpu"="gfx90a" "uniform-work-group-size"="false" } +; CHECK: attributes #[[ATTR3]] = { "target-cpu"="gfx90a" "uniform-work-group-size"="false" } +; CHECK: attributes #[[ATTR4:[0-9]+]] = { convergent nocallback nofree nosync nounwind willreturn memory(none) "target-cpu"="gfx90a" } +; CHECK: attributes #[[ATTR5]] = { "amdgpu-agpr-alloc"="4" "target-cpu"="gfx90a" "uniform-work-group-size"="false" } +; CHECK: attributes #[[ATTR6]] = { "amdgpu-agpr-alloc"="6" "target-cpu"="gfx90a" "uniform-work-group-size"="false" } +; CHECK: attributes #[[ATTR7]] = { "amdgpu-agpr-alloc"="5" "target-cpu"="gfx90a" "uniform-work-group-size"="false" } +; CHECK: attributes #[[ATTR8]] = { "amdgpu-agpr-alloc"="14" "target-cpu"="gfx90a" "uniform-work-group-size"="false" } +; CHECK: attributes #[[ATTR9]] = { "amdgpu-agpr-alloc"="256" "target-cpu"="gfx90a" "uniform-work-group-size"="false" } +; CHECK: attributes #[[ATTR10]] = { "amdgpu-agpr-alloc"="32" "target-cpu"="gfx90a" "uniform-work-group-size"="false" } +; CHECK: attributes #[[ATTR11]] = { "amdgpu-agpr-alloc"="9" "target-cpu"="gfx90a" "uniform-work-group-size"="false" } +; CHECK: attributes #[[ATTR12]] = { "amdgpu-agpr-alloc"="64" "target-cpu"="gfx90a" "uniform-work-group-size"="false" } +; CHECK: attributes #[[ATTR13]] = { "amdgpu-agpr-alloc"="49" "target-cpu"="gfx90a" "uniform-work-group-size"="false" } +; CHECK: attributes #[[ATTR14]] = { "amdgpu-agpr-alloc"="33" "target-cpu"="gfx90a" "uniform-work-group-size"="false" } +; CHECK: attributes #[[ATTR15]] = { "amdgpu-agpr-alloc"="8" "target-cpu"="gfx90a" "uniform-work-group-size"="false" } +; CHECK: attributes #[[ATTR16]] = { "amdgpu-agpr-alloc"="13" "target-cpu"="gfx90a" "uniform-work-group-size"="false" } +; CHECK: attributes #[[ATTR17]] = { "amdgpu-agpr-alloc"="56" "amdgpu-no-cluster-id-x" "amdgpu-no-cluster-id-y" "amdgpu-no-cluster-id-z" "amdgpu-no-completion-action" "amdgpu-no-default-queue" "amdgpu-no-dispatch-id" "amdgpu-no-dispatch-ptr" "amdgpu-no-flat-scratch-init" "amdgpu-no-heap-ptr" "amdgpu-no-hostcall-ptr" "amdgpu-no-implicitarg-ptr" "amdgpu-no-lds-kernel-id" "amdgpu-no-multigrid-sync-arg" "amdgpu-no-queue-ptr" "amdgpu-no-workgroup-id-x" "amdgpu-no-workgroup-id-y" "amdgpu-no-workgroup-id-z" "amdgpu-no-workitem-id-x" "amdgpu-no-workitem-id-y" "amdgpu-no-workitem-id-z" "target-cpu"="gfx90a" "uniform-work-group-size"="false" } +; CHECK: attributes #[[ATTR18]] = { "amdgpu-agpr-alloc"="58" "target-cpu"="gfx90a" "uniform-work-group-size"="false" } +; CHECK: attributes #[[ATTR19]] = { "amdgpu-agpr-alloc"="56" "target-cpu"="gfx90a" "uniform-work-group-size"="false" } +; CHECK: attributes #[[ATTR20]] = { "amdgpu-agpr-alloc"="60" "target-cpu"="gfx90a" "uniform-work-group-size"="false" } +; CHECK: attributes #[[ATTR21]] = { "amdgpu-agpr-alloc"="256" "amdgpu-waves-per-eu"="1,1" "target-cpu"="gfx90a" "uniform-work-group-size"="false" } +; CHECK: attributes #[[ATTR22]] = { "amdgpu-agpr-alloc"="7" "target-cpu"="gfx90a" "uniform-work-group-size"="false" } +; CHECK: attributes #[[ATTR23]] = { "amdgpu-agpr-alloc"="3" "target-cpu"="gfx90a" 
"uniform-work-group-size"="false" } +; CHECK: attributes #[[ATTR24:[0-9]+]] = { nocallback nofree nosync nounwind speculatable willreturn memory(none) "target-cpu"="gfx90a" } +; CHECK: attributes #[[ATTR25:[0-9]+]] = { nocallback nofree nounwind willreturn memory(argmem: readwrite) "target-cpu"="gfx90a" } +; CHECK: attributes #[[ATTR26:[0-9]+]] = { nocallback nofree nosync nounwind willreturn memory(read) "target-cpu"="gfx90a" } +; CHECK: attributes #[[ATTR27:[0-9]+]] = { nounwind "target-cpu"="gfx90a" } +; CHECK: attributes #[[ATTR28:[0-9]+]] = { nocallback nounwind "target-cpu"="gfx90a" } +; CHECK: attributes #[[ATTR29]] = { "amdgpu-agpr-alloc"="0" } ;. ; CHECK: [[META0]] = !{!"a55"} ; CHECK: [[META1]] = !{!"v55"} diff --git a/llvm/test/CodeGen/AMDGPU/amdgpu-simplify-uniform-waterfall.ll b/llvm/test/CodeGen/AMDGPU/amdgpu-simplify-uniform-waterfall.ll new file mode 100644 index 0000000..6c4f504 --- /dev/null +++ b/llvm/test/CodeGen/AMDGPU/amdgpu-simplify-uniform-waterfall.ll @@ -0,0 +1,452 @@ +; NOTE: Assertions have been autogenerated by utils/update_test_checks.py UTC_ARGS: --version 6 +; RUN: opt -mtriple=amdgcn-amd-amdhsa -mcpu=gfx1010 -amdgpu-enable-uniform-intrinsic-combine=0 -O3 -S < %s | FileCheck %s -check-prefix=CURRENT-CHECK +; RUN: opt -mtriple=amdgcn-amd-amdhsa -mcpu=gfx1010 -passes=amdgpu-uniform-intrinsic-combine -S < %s | FileCheck %s -check-prefix=PASS-CHECK +; RUN: opt -mtriple=amdgcn-amd-amdhsa -mcpu=gfx1010 -O3 -S < %s | FileCheck %s -check-prefix=O3-CHECK + +define protected amdgpu_kernel void @trivial_waterfall_eq_zero(ptr addrspace(1) %out) { +; CURRENT-CHECK-LABEL: define protected amdgpu_kernel void @trivial_waterfall_eq_zero( +; CURRENT-CHECK-SAME: ptr addrspace(1) writeonly captures(none) [[OUT:%.*]]) local_unnamed_addr #[[ATTR0:[0-9]+]] { +; CURRENT-CHECK-NEXT: [[ENTRY:.*:]] +; CURRENT-CHECK-NEXT: [[TMP0:%.*]] = tail call i32 @llvm.amdgcn.ballot.i32(i1 true) +; CURRENT-CHECK-NEXT: [[IS_DONE_PEEL:%.*]] = icmp eq i32 [[TMP0]], 0 +; CURRENT-CHECK-NEXT: br i1 [[IS_DONE_PEEL]], label %[[EXIT:.*]], label %[[IF_PEEL:.*]] +; CURRENT-CHECK: [[IF_PEEL]]: +; CURRENT-CHECK-NEXT: store i32 5, ptr addrspace(1) [[OUT]], align 4 +; CURRENT-CHECK-NEXT: br label %[[EXIT]] +; CURRENT-CHECK: [[EXIT]]: +; CURRENT-CHECK-NEXT: ret void +; +; PASS-CHECK-LABEL: define protected amdgpu_kernel void @trivial_waterfall_eq_zero( +; PASS-CHECK-SAME: ptr addrspace(1) [[OUT:%.*]]) #[[ATTR0:[0-9]+]] { +; PASS-CHECK-NEXT: [[ENTRY:.*]]: +; PASS-CHECK-NEXT: br label %[[WHILE:.*]] +; PASS-CHECK: [[WHILE]]: +; PASS-CHECK-NEXT: [[DONE:%.*]] = phi i1 [ false, %[[ENTRY]] ], [ true, %[[IF:.*]] ] +; PASS-CHECK-NEXT: [[NOT_DONE:%.*]] = xor i1 [[DONE]], true +; PASS-CHECK-NEXT: [[TMP0:%.*]] = xor i1 [[NOT_DONE]], true +; PASS-CHECK-NEXT: br i1 [[TMP0]], label %[[EXIT:.*]], label %[[IF]] +; PASS-CHECK: [[IF]]: +; PASS-CHECK-NEXT: store i32 5, ptr addrspace(1) [[OUT]], align 4 +; PASS-CHECK-NEXT: br label %[[WHILE]] +; PASS-CHECK: [[EXIT]]: +; PASS-CHECK-NEXT: ret void +; +; O3-CHECK-LABEL: define protected amdgpu_kernel void @trivial_waterfall_eq_zero( +; O3-CHECK-SAME: ptr addrspace(1) writeonly captures(none) initializes((0, 4)) [[OUT:%.*]]) local_unnamed_addr #[[ATTR0:[0-9]+]] { +; O3-CHECK-NEXT: [[ENTRY:.*:]] +; O3-CHECK-NEXT: store i32 5, ptr addrspace(1) [[OUT]], align 4 +; O3-CHECK-NEXT: ret void +; +entry: + br label %while + +while: + %done = phi i1 [ 0, %entry ], [ 1, %if ] + %not_done = xor i1 %done, true + %ballot = tail call i64 @llvm.amdgcn.ballot.i64(i1 %not_done) + %is_done = icmp eq i64 
%ballot, 0 ; in this case is_done = !not_done + br i1 %is_done, label %exit, label %if + +if: + store i32 5, ptr addrspace(1) %out + br label %while + +exit: + ret void +} + +define protected amdgpu_kernel void @trivial_waterfall_eq_zero_swap_op(ptr addrspace(1) %out) { +; CURRENT-CHECK-LABEL: define protected amdgpu_kernel void @trivial_waterfall_eq_zero_swap_op( +; CURRENT-CHECK-SAME: ptr addrspace(1) writeonly captures(none) [[OUT:%.*]]) local_unnamed_addr #[[ATTR0]] { +; CURRENT-CHECK-NEXT: [[ENTRY:.*:]] +; CURRENT-CHECK-NEXT: [[TMP0:%.*]] = tail call i32 @llvm.amdgcn.ballot.i32(i1 true) +; CURRENT-CHECK-NEXT: [[IS_DONE_PEEL:%.*]] = icmp eq i32 [[TMP0]], 0 +; CURRENT-CHECK-NEXT: br i1 [[IS_DONE_PEEL]], label %[[EXIT:.*]], label %[[IF_PEEL:.*]] +; CURRENT-CHECK: [[IF_PEEL]]: +; CURRENT-CHECK-NEXT: store i32 5, ptr addrspace(1) [[OUT]], align 4 +; CURRENT-CHECK-NEXT: br label %[[EXIT]] +; CURRENT-CHECK: [[EXIT]]: +; CURRENT-CHECK-NEXT: ret void +; +; PASS-CHECK-LABEL: define protected amdgpu_kernel void @trivial_waterfall_eq_zero_swap_op( +; PASS-CHECK-SAME: ptr addrspace(1) [[OUT:%.*]]) #[[ATTR0]] { +; PASS-CHECK-NEXT: [[ENTRY:.*]]: +; PASS-CHECK-NEXT: br label %[[WHILE:.*]] +; PASS-CHECK: [[WHILE]]: +; PASS-CHECK-NEXT: [[DONE:%.*]] = phi i1 [ false, %[[ENTRY]] ], [ true, %[[IF:.*]] ] +; PASS-CHECK-NEXT: [[NOT_DONE:%.*]] = xor i1 [[DONE]], true +; PASS-CHECK-NEXT: [[TMP0:%.*]] = xor i1 [[NOT_DONE]], true +; PASS-CHECK-NEXT: br i1 [[TMP0]], label %[[EXIT:.*]], label %[[IF]] +; PASS-CHECK: [[IF]]: +; PASS-CHECK-NEXT: store i32 5, ptr addrspace(1) [[OUT]], align 4 +; PASS-CHECK-NEXT: br label %[[WHILE]] +; PASS-CHECK: [[EXIT]]: +; PASS-CHECK-NEXT: ret void +; +; O3-CHECK-LABEL: define protected amdgpu_kernel void @trivial_waterfall_eq_zero_swap_op( +; O3-CHECK-SAME: ptr addrspace(1) writeonly captures(none) initializes((0, 4)) [[OUT:%.*]]) local_unnamed_addr #[[ATTR0]] { +; O3-CHECK-NEXT: [[ENTRY:.*:]] +; O3-CHECK-NEXT: store i32 5, ptr addrspace(1) [[OUT]], align 4 +; O3-CHECK-NEXT: ret void +; +entry: + br label %while + +while: + %done = phi i1 [ 0, %entry ], [ 1, %if ] + %not_done = xor i1 %done, true + %ballot = tail call i64 @llvm.amdgcn.ballot.i64(i1 %not_done) + %is_done = icmp eq i64 0, %ballot ; in this case is_done = !not_done + br i1 %is_done, label %exit, label %if + +if: + store i32 5, ptr addrspace(1) %out + br label %while + +exit: + ret void +} + +define protected amdgpu_kernel void @trivial_waterfall_ne_zero(ptr addrspace(1) %out) { +; CURRENT-CHECK-LABEL: define protected amdgpu_kernel void @trivial_waterfall_ne_zero( +; CURRENT-CHECK-SAME: ptr addrspace(1) writeonly captures(none) initializes((0, 4)) [[OUT:%.*]]) local_unnamed_addr #[[ATTR1:[0-9]+]] { +; CURRENT-CHECK-NEXT: [[ENTRY:.*:]] +; CURRENT-CHECK-NEXT: store i32 5, ptr addrspace(1) [[OUT]], align 4 +; CURRENT-CHECK-NEXT: br label %[[WHILE:.*]] +; CURRENT-CHECK: [[WHILE]]: +; CURRENT-CHECK-NEXT: [[TMP0:%.*]] = tail call i32 @llvm.amdgcn.ballot.i32(i1 true) +; CURRENT-CHECK-NEXT: [[IS_DONE_NOT:%.*]] = icmp eq i32 [[TMP0]], 0 +; CURRENT-CHECK-NEXT: br i1 [[IS_DONE_NOT]], label %[[WHILE]], label %[[EXIT:.*]], !llvm.loop [[LOOP0:![0-9]+]] +; CURRENT-CHECK: [[EXIT]]: +; CURRENT-CHECK-NEXT: ret void +; +; PASS-CHECK-LABEL: define protected amdgpu_kernel void @trivial_waterfall_ne_zero( +; PASS-CHECK-SAME: ptr addrspace(1) [[OUT:%.*]]) #[[ATTR0]] { +; PASS-CHECK-NEXT: [[ENTRY:.*]]: +; PASS-CHECK-NEXT: br label %[[WHILE:.*]] +; PASS-CHECK: [[WHILE]]: +; PASS-CHECK-NEXT: [[DONE:%.*]] = phi i1 [ false, %[[ENTRY]] ], [ 
true, %[[IF:.*]] ] +; PASS-CHECK-NEXT: br i1 [[DONE]], label %[[EXIT:.*]], label %[[IF]] +; PASS-CHECK: [[IF]]: +; PASS-CHECK-NEXT: store i32 5, ptr addrspace(1) [[OUT]], align 4 +; PASS-CHECK-NEXT: br label %[[WHILE]] +; PASS-CHECK: [[EXIT]]: +; PASS-CHECK-NEXT: ret void +; +; O3-CHECK-LABEL: define protected amdgpu_kernel void @trivial_waterfall_ne_zero( +; O3-CHECK-SAME: ptr addrspace(1) writeonly captures(none) initializes((0, 4)) [[OUT:%.*]]) local_unnamed_addr #[[ATTR0]] { +; O3-CHECK-NEXT: [[ENTRY:.*:]] +; O3-CHECK-NEXT: store i32 5, ptr addrspace(1) [[OUT]], align 4 +; O3-CHECK-NEXT: ret void +; +entry: + br label %while + +while: + %done = phi i1 [ 0, %entry ], [ 1, %if ] + %ballot = tail call i64 @llvm.amdgcn.ballot.i64(i1 %done) + %is_done = icmp ne i64 0, %ballot ; in this case is_done = done + br i1 %is_done, label %exit, label %if + +if: + store i32 5, ptr addrspace(1) %out + br label %while + +exit: + ret void +} + +define protected amdgpu_kernel void @trivial_waterfall_ne_zero_swap(ptr addrspace(1) %out) { +; CURRENT-CHECK-LABEL: define protected amdgpu_kernel void @trivial_waterfall_ne_zero_swap( +; CURRENT-CHECK-SAME: ptr addrspace(1) writeonly captures(none) initializes((0, 4)) [[OUT:%.*]]) local_unnamed_addr #[[ATTR1]] { +; CURRENT-CHECK-NEXT: [[ENTRY:.*:]] +; CURRENT-CHECK-NEXT: store i32 5, ptr addrspace(1) [[OUT]], align 4 +; CURRENT-CHECK-NEXT: br label %[[WHILE:.*]] +; CURRENT-CHECK: [[WHILE]]: +; CURRENT-CHECK-NEXT: [[TMP0:%.*]] = tail call i32 @llvm.amdgcn.ballot.i32(i1 true) +; CURRENT-CHECK-NEXT: [[IS_DONE_NOT:%.*]] = icmp eq i32 [[TMP0]], 0 +; CURRENT-CHECK-NEXT: br i1 [[IS_DONE_NOT]], label %[[WHILE]], label %[[EXIT:.*]], !llvm.loop [[LOOP2:![0-9]+]] +; CURRENT-CHECK: [[EXIT]]: +; CURRENT-CHECK-NEXT: ret void +; +; PASS-CHECK-LABEL: define protected amdgpu_kernel void @trivial_waterfall_ne_zero_swap( +; PASS-CHECK-SAME: ptr addrspace(1) [[OUT:%.*]]) #[[ATTR0]] { +; PASS-CHECK-NEXT: [[ENTRY:.*]]: +; PASS-CHECK-NEXT: br label %[[WHILE:.*]] +; PASS-CHECK: [[WHILE]]: +; PASS-CHECK-NEXT: [[DONE:%.*]] = phi i1 [ false, %[[ENTRY]] ], [ true, %[[IF:.*]] ] +; PASS-CHECK-NEXT: br i1 [[DONE]], label %[[EXIT:.*]], label %[[IF]] +; PASS-CHECK: [[IF]]: +; PASS-CHECK-NEXT: store i32 5, ptr addrspace(1) [[OUT]], align 4 +; PASS-CHECK-NEXT: br label %[[WHILE]] +; PASS-CHECK: [[EXIT]]: +; PASS-CHECK-NEXT: ret void +; +; O3-CHECK-LABEL: define protected amdgpu_kernel void @trivial_waterfall_ne_zero_swap( +; O3-CHECK-SAME: ptr addrspace(1) writeonly captures(none) initializes((0, 4)) [[OUT:%.*]]) local_unnamed_addr #[[ATTR0]] { +; O3-CHECK-NEXT: [[ENTRY:.*:]] +; O3-CHECK-NEXT: store i32 5, ptr addrspace(1) [[OUT]], align 4 +; O3-CHECK-NEXT: ret void +; +entry: + br label %while + +while: + %done = phi i1 [ 0, %entry ], [ 1, %if ] + %ballot = tail call i64 @llvm.amdgcn.ballot.i64(i1 %done) + %is_done = icmp ne i64 %ballot, 0 ; in this case is_done = done + br i1 %is_done, label %exit, label %if + +if: + store i32 5, ptr addrspace(1) %out + br label %while + +exit: + ret void +} + +define protected amdgpu_kernel void @trivial_uniform_waterfall(ptr addrspace(1) %out) { +; CURRENT-CHECK-LABEL: define protected amdgpu_kernel void @trivial_uniform_waterfall( +; CURRENT-CHECK-SAME: ptr addrspace(1) writeonly captures(none) [[OUT:%.*]]) local_unnamed_addr #[[ATTR0]] { +; CURRENT-CHECK-NEXT: [[ENTRY:.*:]] +; CURRENT-CHECK-NEXT: [[TMP0:%.*]] = tail call i32 @llvm.amdgcn.ballot.i32(i1 true) +; CURRENT-CHECK-NEXT: [[IS_DONE_PEEL:%.*]] = icmp eq i32 [[TMP0]], 0 +; CURRENT-CHECK-NEXT: br i1 
[[IS_DONE_PEEL]], label %[[EXIT:.*]], label %[[WORK_PEEL:.*]] +; CURRENT-CHECK: [[WORK_PEEL]]: +; CURRENT-CHECK-NEXT: store i32 5, ptr addrspace(1) [[OUT]], align 4 +; CURRENT-CHECK-NEXT: br label %[[EXIT]] +; CURRENT-CHECK: [[EXIT]]: +; CURRENT-CHECK-NEXT: ret void +; +; PASS-CHECK-LABEL: define protected amdgpu_kernel void @trivial_uniform_waterfall( +; PASS-CHECK-SAME: ptr addrspace(1) [[OUT:%.*]]) #[[ATTR0]] { +; PASS-CHECK-NEXT: [[ENTRY:.*]]: +; PASS-CHECK-NEXT: br label %[[WHILE:.*]] +; PASS-CHECK: [[WHILE]]: +; PASS-CHECK-NEXT: [[DONE:%.*]] = phi i1 [ false, %[[ENTRY]] ], [ [[NEW_DONE:%.*]], %[[TAIL:.*]] ] +; PASS-CHECK-NEXT: [[NOT_DONE:%.*]] = xor i1 [[DONE]], true +; PASS-CHECK-NEXT: [[TMP0:%.*]] = xor i1 [[NOT_DONE]], true +; PASS-CHECK-NEXT: br i1 [[TMP0]], label %[[EXIT:.*]], label %[[IF:.*]] +; PASS-CHECK: [[IF]]: +; PASS-CHECK-NEXT: [[IS_FIRST_ACTIVE_ID:%.*]] = icmp eq i32 0, 0 +; PASS-CHECK-NEXT: br i1 [[IS_FIRST_ACTIVE_ID]], label %[[WORK:.*]], label %[[TAIL]] +; PASS-CHECK: [[WORK]]: +; PASS-CHECK-NEXT: store i32 5, ptr addrspace(1) [[OUT]], align 4 +; PASS-CHECK-NEXT: br label %[[TAIL]] +; PASS-CHECK: [[TAIL]]: +; PASS-CHECK-NEXT: [[NEW_DONE]] = phi i1 [ true, %[[WORK]] ], [ false, %[[IF]] ] +; PASS-CHECK-NEXT: br label %[[WHILE]] +; PASS-CHECK: [[EXIT]]: +; PASS-CHECK-NEXT: ret void +; +; O3-CHECK-LABEL: define protected amdgpu_kernel void @trivial_uniform_waterfall( +; O3-CHECK-SAME: ptr addrspace(1) writeonly captures(none) initializes((0, 4)) [[OUT:%.*]]) local_unnamed_addr #[[ATTR0]] { +; O3-CHECK-NEXT: [[ENTRY:.*:]] +; O3-CHECK-NEXT: store i32 5, ptr addrspace(1) [[OUT]], align 4 +; O3-CHECK-NEXT: ret void +; +entry: + br label %while + +while: + %done = phi i1 [ false, %entry ], [ %new_done, %tail ] + %not_done = xor i1 %done, true + %ballot = tail call i64 @llvm.amdgcn.ballot.i64(i1 %not_done) + %is_done = icmp eq i64 %ballot, 0 + br i1 %is_done, label %exit, label %if + +if: + %first_active_id = tail call noundef i32 @llvm.amdgcn.readfirstlane.i32(i32 0) + %is_first_active_id = icmp eq i32 0, %first_active_id + br i1 %is_first_active_id, label %work, label %tail + +work: + store i32 5, ptr addrspace(1) %out + br label %tail + +tail: + %new_done = phi i1 [ true, %work ], [ false, %if ] + br label %while + +exit: + ret void +} + +define protected amdgpu_kernel void @uniform_waterfall(ptr addrspace(1) %out, i32 %mymask) { +; CURRENT-CHECK-LABEL: define protected amdgpu_kernel void @uniform_waterfall( +; CURRENT-CHECK-SAME: ptr addrspace(1) writeonly captures(none) [[OUT:%.*]], i32 [[MYMASK:%.*]]) local_unnamed_addr #[[ATTR0]] { +; CURRENT-CHECK-NEXT: [[ENTRY:.*:]] +; CURRENT-CHECK-NEXT: [[TMP0:%.*]] = tail call i32 @llvm.amdgcn.ballot.i32(i1 true) +; CURRENT-CHECK-NEXT: [[IS_DONE_PEEL:%.*]] = icmp eq i32 [[TMP0]], 0 +; CURRENT-CHECK-NEXT: br i1 [[IS_DONE_PEEL]], label %[[EXIT:.*]], label %[[WORK_PEEL:.*]] +; CURRENT-CHECK: [[WORK_PEEL]]: +; CURRENT-CHECK-NEXT: store i32 5, ptr addrspace(1) [[OUT]], align 4 +; CURRENT-CHECK-NEXT: br label %[[EXIT]] +; CURRENT-CHECK: [[EXIT]]: +; CURRENT-CHECK-NEXT: ret void +; +; PASS-CHECK-LABEL: define protected amdgpu_kernel void @uniform_waterfall( +; PASS-CHECK-SAME: ptr addrspace(1) [[OUT:%.*]], i32 [[MYMASK:%.*]]) #[[ATTR0]] { +; PASS-CHECK-NEXT: [[ENTRY:.*]]: +; PASS-CHECK-NEXT: br label %[[WHILE:.*]] +; PASS-CHECK: [[WHILE]]: +; PASS-CHECK-NEXT: [[DONE:%.*]] = phi i1 [ false, %[[ENTRY]] ], [ [[NEW_DONE:%.*]], %[[TAIL:.*]] ] +; PASS-CHECK-NEXT: [[NOT_DONE:%.*]] = xor i1 [[DONE]], true +; PASS-CHECK-NEXT: [[TMP0:%.*]] = xor i1 
[[NOT_DONE]], true +; PASS-CHECK-NEXT: br i1 [[TMP0]], label %[[EXIT:.*]], label %[[IF:.*]] +; PASS-CHECK: [[IF]]: +; PASS-CHECK-NEXT: [[IS_FIRST_ACTIVE_ID:%.*]] = icmp eq i32 [[MYMASK]], [[MYMASK]] +; PASS-CHECK-NEXT: br i1 [[IS_FIRST_ACTIVE_ID]], label %[[WORK:.*]], label %[[TAIL]] +; PASS-CHECK: [[WORK]]: +; PASS-CHECK-NEXT: store i32 5, ptr addrspace(1) [[OUT]], align 4 +; PASS-CHECK-NEXT: br label %[[TAIL]] +; PASS-CHECK: [[TAIL]]: +; PASS-CHECK-NEXT: [[NEW_DONE]] = phi i1 [ true, %[[WORK]] ], [ false, %[[IF]] ] +; PASS-CHECK-NEXT: br label %[[WHILE]] +; PASS-CHECK: [[EXIT]]: +; PASS-CHECK-NEXT: ret void +; +; O3-CHECK-LABEL: define protected amdgpu_kernel void @uniform_waterfall( +; O3-CHECK-SAME: ptr addrspace(1) writeonly captures(none) initializes((0, 4)) [[OUT:%.*]], i32 [[MYMASK:%.*]]) local_unnamed_addr #[[ATTR0]] { +; O3-CHECK-NEXT: [[ENTRY:.*:]] +; O3-CHECK-NEXT: store i32 5, ptr addrspace(1) [[OUT]], align 4 +; O3-CHECK-NEXT: ret void +; +entry: + br label %while + +while: + %done = phi i1 [ false, %entry ], [ %new_done, %tail ] + %not_done = xor i1 %done, true + %ballot = tail call i64 @llvm.amdgcn.ballot.i64(i1 %not_done) + %is_done = icmp eq i64 %ballot, 0 + br i1 %is_done, label %exit, label %if + +if: + %first_active_id = tail call noundef i32 @llvm.amdgcn.readfirstlane.i32(i32 %mymask) + %is_first_active_id = icmp eq i32 %mymask, %first_active_id + br i1 %is_first_active_id, label %work, label %tail + +work: + store i32 5, ptr addrspace(1) %out + br label %tail + +tail: + %new_done = phi i1 [ true, %work ], [ false, %if ] + br label %while + +exit: + ret void +} + +define protected amdgpu_kernel void @trivial_waterfall_eq_zero_i32(ptr addrspace(1) %out) { +; CURRENT-CHECK-LABEL: define protected amdgpu_kernel void @trivial_waterfall_eq_zero_i32( +; CURRENT-CHECK-SAME: ptr addrspace(1) writeonly captures(none) [[OUT:%.*]]) local_unnamed_addr #[[ATTR0]] { +; CURRENT-CHECK-NEXT: [[ENTRY:.*:]] +; CURRENT-CHECK-NEXT: [[BALLOT_PEEL:%.*]] = tail call i32 @llvm.amdgcn.ballot.i32(i1 true) +; CURRENT-CHECK-NEXT: [[IS_DONE_PEEL:%.*]] = icmp eq i32 [[BALLOT_PEEL]], 0 +; CURRENT-CHECK-NEXT: br i1 [[IS_DONE_PEEL]], label %[[EXIT:.*]], label %[[IF_PEEL:.*]] +; CURRENT-CHECK: [[IF_PEEL]]: +; CURRENT-CHECK-NEXT: store i32 5, ptr addrspace(1) [[OUT]], align 4 +; CURRENT-CHECK-NEXT: br label %[[EXIT]] +; CURRENT-CHECK: [[EXIT]]: +; CURRENT-CHECK-NEXT: ret void +; +; PASS-CHECK-LABEL: define protected amdgpu_kernel void @trivial_waterfall_eq_zero_i32( +; PASS-CHECK-SAME: ptr addrspace(1) [[OUT:%.*]]) #[[ATTR0]] { +; PASS-CHECK-NEXT: [[ENTRY:.*]]: +; PASS-CHECK-NEXT: br label %[[WHILE:.*]] +; PASS-CHECK: [[WHILE]]: +; PASS-CHECK-NEXT: [[DONE:%.*]] = phi i1 [ false, %[[ENTRY]] ], [ true, %[[IF:.*]] ] +; PASS-CHECK-NEXT: [[NOT_DONE:%.*]] = xor i1 [[DONE]], true +; PASS-CHECK-NEXT: [[TMP0:%.*]] = xor i1 [[NOT_DONE]], true +; PASS-CHECK-NEXT: br i1 [[TMP0]], label %[[EXIT:.*]], label %[[IF]] +; PASS-CHECK: [[IF]]: +; PASS-CHECK-NEXT: store i32 5, ptr addrspace(1) [[OUT]], align 4 +; PASS-CHECK-NEXT: br label %[[WHILE]] +; PASS-CHECK: [[EXIT]]: +; PASS-CHECK-NEXT: ret void +; +; O3-CHECK-LABEL: define protected amdgpu_kernel void @trivial_waterfall_eq_zero_i32( +; O3-CHECK-SAME: ptr addrspace(1) writeonly captures(none) initializes((0, 4)) [[OUT:%.*]]) local_unnamed_addr #[[ATTR0]] { +; O3-CHECK-NEXT: [[ENTRY:.*:]] +; O3-CHECK-NEXT: store i32 5, ptr addrspace(1) [[OUT]], align 4 +; O3-CHECK-NEXT: ret void +; +entry: + br label %while + +while: + %done = phi i1 [ 0, %entry ], [ 1, %if ] + 
%not_done = xor i1 %done, true + %ballot = tail call i32 @llvm.amdgcn.ballot.i32(i1 %not_done) + %is_done = icmp eq i32 %ballot, 0 ; in this case is_done = !not_done + br i1 %is_done, label %exit, label %if + +if: + store i32 5, ptr addrspace(1) %out + br label %while + +exit: + ret void +} + +define protected amdgpu_kernel void @trivial_waterfall_ne_zero_i32(ptr addrspace(1) %out) { +; CURRENT-CHECK-LABEL: define protected amdgpu_kernel void @trivial_waterfall_ne_zero_i32( +; CURRENT-CHECK-SAME: ptr addrspace(1) writeonly captures(none) initializes((0, 4)) [[OUT:%.*]]) local_unnamed_addr #[[ATTR1]] { +; CURRENT-CHECK-NEXT: [[ENTRY:.*:]] +; CURRENT-CHECK-NEXT: store i32 5, ptr addrspace(1) [[OUT]], align 4 +; CURRENT-CHECK-NEXT: br label %[[WHILE:.*]] +; CURRENT-CHECK: [[WHILE]]: +; CURRENT-CHECK-NEXT: [[BALLOT:%.*]] = tail call i32 @llvm.amdgcn.ballot.i32(i1 true) +; CURRENT-CHECK-NEXT: [[IS_DONE_NOT:%.*]] = icmp eq i32 [[BALLOT]], 0 +; CURRENT-CHECK-NEXT: br i1 [[IS_DONE_NOT]], label %[[WHILE]], label %[[EXIT:.*]], !llvm.loop [[LOOP3:![0-9]+]] +; CURRENT-CHECK: [[EXIT]]: +; CURRENT-CHECK-NEXT: ret void +; +; PASS-CHECK-LABEL: define protected amdgpu_kernel void @trivial_waterfall_ne_zero_i32( +; PASS-CHECK-SAME: ptr addrspace(1) [[OUT:%.*]]) #[[ATTR0]] { +; PASS-CHECK-NEXT: [[ENTRY:.*]]: +; PASS-CHECK-NEXT: br label %[[WHILE:.*]] +; PASS-CHECK: [[WHILE]]: +; PASS-CHECK-NEXT: [[DONE:%.*]] = phi i1 [ false, %[[ENTRY]] ], [ true, %[[IF:.*]] ] +; PASS-CHECK-NEXT: br i1 [[DONE]], label %[[EXIT:.*]], label %[[IF]] +; PASS-CHECK: [[IF]]: +; PASS-CHECK-NEXT: store i32 5, ptr addrspace(1) [[OUT]], align 4 +; PASS-CHECK-NEXT: br label %[[WHILE]] +; PASS-CHECK: [[EXIT]]: +; PASS-CHECK-NEXT: ret void +; +; O3-CHECK-LABEL: define protected amdgpu_kernel void @trivial_waterfall_ne_zero_i32( +; O3-CHECK-SAME: ptr addrspace(1) writeonly captures(none) initializes((0, 4)) [[OUT:%.*]]) local_unnamed_addr #[[ATTR0]] { +; O3-CHECK-NEXT: [[ENTRY:.*:]] +; O3-CHECK-NEXT: store i32 5, ptr addrspace(1) [[OUT]], align 4 +; O3-CHECK-NEXT: ret void +; +entry: + br label %while + +while: + %done = phi i1 [ 0, %entry ], [ 1, %if ] + %ballot = tail call i32 @llvm.amdgcn.ballot.i32(i1 %done) + %is_done = icmp ne i32 0, %ballot ; in this case is_done = done + br i1 %is_done, label %exit, label %if + +if: + store i32 5, ptr addrspace(1) %out + br label %while + +exit: + ret void +} + +declare i64 @llvm.amdgcn.ballot.i64(i1) #1 +!6 = !{i64 690} +!7 = distinct !{!7, !8} +!8 = !{!"llvm.loop.mustprogress"} +;. +; CURRENT-CHECK: [[LOOP0]] = distinct !{[[LOOP0]], [[META1:![0-9]+]]} +; CURRENT-CHECK: [[META1]] = !{!"llvm.loop.peeled.count", i32 1} +; CURRENT-CHECK: [[LOOP2]] = distinct !{[[LOOP2]], [[META1]]} +; CURRENT-CHECK: [[LOOP3]] = distinct !{[[LOOP3]], [[META1]]} +;. 
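; A minimal sketch of the rewrite these waterfall tests exercise, inferred from
; the PASS-CHECK lines above rather than taken from the pass itself: for a
; wave-uniform condition (and assuming at least one active lane), comparing a
; ballot of that condition against zero is just a test of the condition, so
;
;   %ballot  = tail call i64 @llvm.amdgcn.ballot.i64(i1 %not_done)
;   %is_done = icmp eq i64 %ballot, 0
;
; folds to
;
;   %is_done = xor i1 %not_done, true
;
; and the icmp ne form folds to the condition directly, which is what lets the
; -O3 pipeline collapse each of these waterfall loops into a single store.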
diff --git a/llvm/test/CodeGen/AMDGPU/amdgpu-uniform-intrinsic-combine.ll b/llvm/test/CodeGen/AMDGPU/amdgpu-uniform-intrinsic-combine.ll new file mode 100644 index 0000000..aa11574 --- /dev/null +++ b/llvm/test/CodeGen/AMDGPU/amdgpu-uniform-intrinsic-combine.ll @@ -0,0 +1,790 @@ +; NOTE: Assertions have been autogenerated by utils/update_test_checks.py UTC_ARGS: --version 6 +; RUN: opt -mtriple=amdgcn-amd-amdhsa -mcpu=gfx1010 -amdgpu-enable-uniform-intrinsic-combine=0 -O3 -S < %s | FileCheck %s -check-prefix=CURRENT-CHECK +; RUN: opt -mtriple=amdgcn-amd-amdhsa -mcpu=gfx1010 -passes=amdgpu-uniform-intrinsic-combine -S < %s | FileCheck %s -check-prefix=PASS-CHECK +; RUN: opt -mtriple=amdgcn-amd-amdhsa -mcpu=gfx1010 -passes=amdgpu-uniform-intrinsic-combine,dce -S < %s | FileCheck %s -check-prefix=DCE-CHECK + +define amdgpu_kernel void @permlane64_constant(ptr addrspace(1) %out) { +; CURRENT-CHECK-LABEL: define amdgpu_kernel void @permlane64_constant( +; CURRENT-CHECK-SAME: ptr addrspace(1) writeonly captures(none) initializes((0, 4)) [[OUT:%.*]]) local_unnamed_addr #[[ATTR0:[0-9]+]] { +; CURRENT-CHECK-NEXT: store i32 77, ptr addrspace(1) [[OUT]], align 4 +; CURRENT-CHECK-NEXT: ret void +; +; PASS-CHECK-LABEL: define amdgpu_kernel void @permlane64_constant( +; PASS-CHECK-SAME: ptr addrspace(1) [[OUT:%.*]]) #[[ATTR0:[0-9]+]] { +; PASS-CHECK-NEXT: store i32 77, ptr addrspace(1) [[OUT]], align 4 +; PASS-CHECK-NEXT: ret void +; +; DCE-CHECK-LABEL: define amdgpu_kernel void @permlane64_constant( +; DCE-CHECK-SAME: ptr addrspace(1) [[OUT:%.*]]) #[[ATTR0:[0-9]+]] { +; DCE-CHECK-NEXT: store i32 77, ptr addrspace(1) [[OUT]], align 4 +; DCE-CHECK-NEXT: ret void +; + %v = call i32 @llvm.amdgcn.permlane64(i32 77) + store i32 %v, ptr addrspace(1) %out + ret void +} + +define amdgpu_kernel void @permlane64_uniform(ptr addrspace(1) %out, i32 %src) { +; CURRENT-CHECK-LABEL: define amdgpu_kernel void @permlane64_uniform( +; CURRENT-CHECK-SAME: ptr addrspace(1) writeonly captures(none) initializes((0, 4)) [[OUT:%.*]], i32 [[SRC:%.*]]) local_unnamed_addr #[[ATTR0]] { +; CURRENT-CHECK-NEXT: store i32 [[SRC]], ptr addrspace(1) [[OUT]], align 4 +; CURRENT-CHECK-NEXT: ret void +; +; PASS-CHECK-LABEL: define amdgpu_kernel void @permlane64_uniform( +; PASS-CHECK-SAME: ptr addrspace(1) [[OUT:%.*]], i32 [[SRC:%.*]]) #[[ATTR0]] { +; PASS-CHECK-NEXT: store i32 [[SRC]], ptr addrspace(1) [[OUT]], align 4 +; PASS-CHECK-NEXT: ret void +; +; DCE-CHECK-LABEL: define amdgpu_kernel void @permlane64_uniform( +; DCE-CHECK-SAME: ptr addrspace(1) [[OUT:%.*]], i32 [[SRC:%.*]]) #[[ATTR0]] { +; DCE-CHECK-NEXT: store i32 [[SRC]], ptr addrspace(1) [[OUT]], align 4 +; DCE-CHECK-NEXT: ret void +; + %v = call i32 @llvm.amdgcn.permlane64(i32 %src) + store i32 %v, ptr addrspace(1) %out + ret void +} + +define amdgpu_kernel void @permlane64_nonuniform(i32 addrspace(1)* %out) { +; CURRENT-CHECK-LABEL: define amdgpu_kernel void @permlane64_nonuniform( +; CURRENT-CHECK-SAME: ptr addrspace(1) writeonly captures(none) [[OUT:%.*]]) local_unnamed_addr #[[ATTR1:[0-9]+]] { +; CURRENT-CHECK-NEXT: [[TID:%.*]] = tail call i32 @llvm.amdgcn.workitem.id.x() +; CURRENT-CHECK-NEXT: [[V:%.*]] = tail call i32 @llvm.amdgcn.permlane64.i32(i32 [[TID]]) +; CURRENT-CHECK-NEXT: [[TMP1:%.*]] = zext nneg i32 [[TID]] to i64 +; CURRENT-CHECK-NEXT: [[OUT_PTR:%.*]] = getelementptr i32, ptr addrspace(1) [[OUT]], i64 [[TMP1]] +; CURRENT-CHECK-NEXT: store i32 [[V]], ptr addrspace(1) [[OUT_PTR]], align 4 +; CURRENT-CHECK-NEXT: ret void +; +; PASS-CHECK-LABEL: define 
amdgpu_kernel void @permlane64_nonuniform( +; PASS-CHECK-SAME: ptr addrspace(1) [[OUT:%.*]]) #[[ATTR0]] { +; PASS-CHECK-NEXT: [[TID:%.*]] = call i32 @llvm.amdgcn.workitem.id.x() +; PASS-CHECK-NEXT: [[V:%.*]] = call i32 @llvm.amdgcn.permlane64.i32(i32 [[TID]]) +; PASS-CHECK-NEXT: [[OUT_PTR:%.*]] = getelementptr i32, ptr addrspace(1) [[OUT]], i32 [[TID]] +; PASS-CHECK-NEXT: store i32 [[V]], ptr addrspace(1) [[OUT_PTR]], align 4 +; PASS-CHECK-NEXT: ret void +; +; DCE-CHECK-LABEL: define amdgpu_kernel void @permlane64_nonuniform( +; DCE-CHECK-SAME: ptr addrspace(1) [[OUT:%.*]]) #[[ATTR0]] { +; DCE-CHECK-NEXT: [[TID:%.*]] = call i32 @llvm.amdgcn.workitem.id.x() +; DCE-CHECK-NEXT: [[V:%.*]] = call i32 @llvm.amdgcn.permlane64.i32(i32 [[TID]]) +; DCE-CHECK-NEXT: [[OUT_PTR:%.*]] = getelementptr i32, ptr addrspace(1) [[OUT]], i32 [[TID]] +; DCE-CHECK-NEXT: store i32 [[V]], ptr addrspace(1) [[OUT_PTR]], align 4 +; DCE-CHECK-NEXT: ret void +; + %tid = call i32 @llvm.amdgcn.workitem.id.x() + %v = call i32 @llvm.amdgcn.permlane64(i32 %tid) + %out_ptr = getelementptr i32, i32 addrspace(1)* %out, i32 %tid + store i32 %v, i32 addrspace(1)* %out_ptr + ret void +} + +define amdgpu_kernel void @permlane64_nonuniform_expression(i32 addrspace(1)* %out) { +; CURRENT-CHECK-LABEL: define amdgpu_kernel void @permlane64_nonuniform_expression( +; CURRENT-CHECK-SAME: ptr addrspace(1) writeonly captures(none) [[OUT:%.*]]) local_unnamed_addr #[[ATTR1]] { +; CURRENT-CHECK-NEXT: [[TID:%.*]] = tail call i32 @llvm.amdgcn.workitem.id.x() +; CURRENT-CHECK-NEXT: [[TID2:%.*]] = add nuw nsw i32 [[TID]], 1 +; CURRENT-CHECK-NEXT: [[V:%.*]] = tail call i32 @llvm.amdgcn.permlane64.i32(i32 [[TID2]]) +; CURRENT-CHECK-NEXT: [[TMP1:%.*]] = zext nneg i32 [[TID]] to i64 +; CURRENT-CHECK-NEXT: [[OUT_PTR:%.*]] = getelementptr i32, ptr addrspace(1) [[OUT]], i64 [[TMP1]] +; CURRENT-CHECK-NEXT: store i32 [[V]], ptr addrspace(1) [[OUT_PTR]], align 4 +; CURRENT-CHECK-NEXT: ret void +; +; PASS-CHECK-LABEL: define amdgpu_kernel void @permlane64_nonuniform_expression( +; PASS-CHECK-SAME: ptr addrspace(1) [[OUT:%.*]]) #[[ATTR0]] { +; PASS-CHECK-NEXT: [[TID:%.*]] = call i32 @llvm.amdgcn.workitem.id.x() +; PASS-CHECK-NEXT: [[TID2:%.*]] = add i32 [[TID]], 1 +; PASS-CHECK-NEXT: [[V:%.*]] = call i32 @llvm.amdgcn.permlane64.i32(i32 [[TID2]]) +; PASS-CHECK-NEXT: [[OUT_PTR:%.*]] = getelementptr i32, ptr addrspace(1) [[OUT]], i32 [[TID]] +; PASS-CHECK-NEXT: store i32 [[V]], ptr addrspace(1) [[OUT_PTR]], align 4 +; PASS-CHECK-NEXT: ret void +; +; DCE-CHECK-LABEL: define amdgpu_kernel void @permlane64_nonuniform_expression( +; DCE-CHECK-SAME: ptr addrspace(1) [[OUT:%.*]]) #[[ATTR0]] { +; DCE-CHECK-NEXT: [[TID:%.*]] = call i32 @llvm.amdgcn.workitem.id.x() +; DCE-CHECK-NEXT: [[TID2:%.*]] = add i32 [[TID]], 1 +; DCE-CHECK-NEXT: [[V:%.*]] = call i32 @llvm.amdgcn.permlane64.i32(i32 [[TID2]]) +; DCE-CHECK-NEXT: [[OUT_PTR:%.*]] = getelementptr i32, ptr addrspace(1) [[OUT]], i32 [[TID]] +; DCE-CHECK-NEXT: store i32 [[V]], ptr addrspace(1) [[OUT_PTR]], align 4 +; DCE-CHECK-NEXT: ret void +; + %tid = call i32 @llvm.amdgcn.workitem.id.x() + %tid2 = add i32 %tid, 1 + %v = call i32 @llvm.amdgcn.permlane64(i32 %tid2) + %out_ptr = getelementptr i32, i32 addrspace(1)* %out, i32 %tid + store i32 %v, i32 addrspace(1)* %out_ptr + ret void +} + +define amdgpu_kernel void @readlane_constant(ptr addrspace(1) %out) { +; CURRENT-CHECK-LABEL: define amdgpu_kernel void @readlane_constant( +; CURRENT-CHECK-SAME: ptr addrspace(1) writeonly captures(none) initializes((0, 4)) [[OUT:%.*]]) 
local_unnamed_addr #[[ATTR0]] { +; CURRENT-CHECK-NEXT: store i32 7, ptr addrspace(1) [[OUT]], align 4 +; CURRENT-CHECK-NEXT: ret void +; +; PASS-CHECK-LABEL: define amdgpu_kernel void @readlane_constant( +; PASS-CHECK-SAME: ptr addrspace(1) [[OUT:%.*]]) #[[ATTR0]] { +; PASS-CHECK-NEXT: store i32 7, ptr addrspace(1) [[OUT]], align 4 +; PASS-CHECK-NEXT: ret void +; +; DCE-CHECK-LABEL: define amdgpu_kernel void @readlane_constant( +; DCE-CHECK-SAME: ptr addrspace(1) [[OUT:%.*]]) #[[ATTR0]] { +; DCE-CHECK-NEXT: store i32 7, ptr addrspace(1) [[OUT]], align 4 +; DCE-CHECK-NEXT: ret void +; + %v = call i32 @llvm.amdgcn.readlane(i32 7, i32 5) + store i32 %v, ptr addrspace(1) %out + ret void +} + +define amdgpu_kernel void @readlane_nonuniform_indices(ptr addrspace(1) %out, i32 %src0, i32 %src1) { +; CURRENT-CHECK-LABEL: define amdgpu_kernel void @readlane_nonuniform_indices( +; CURRENT-CHECK-SAME: ptr addrspace(1) writeonly captures(none) initializes((0, 4)) [[OUT:%.*]], i32 [[SRC0:%.*]], i32 [[SRC1:%.*]]) local_unnamed_addr #[[ATTR0]] { +; CURRENT-CHECK-NEXT: store i32 [[SRC0]], ptr addrspace(1) [[OUT]], align 4 +; CURRENT-CHECK-NEXT: ret void +; +; PASS-CHECK-LABEL: define amdgpu_kernel void @readlane_nonuniform_indices( +; PASS-CHECK-SAME: ptr addrspace(1) [[OUT:%.*]], i32 [[SRC0:%.*]], i32 [[SRC1:%.*]]) #[[ATTR0]] { +; PASS-CHECK-NEXT: store i32 [[SRC0]], ptr addrspace(1) [[OUT]], align 4 +; PASS-CHECK-NEXT: ret void +; +; DCE-CHECK-LABEL: define amdgpu_kernel void @readlane_nonuniform_indices( +; DCE-CHECK-SAME: ptr addrspace(1) [[OUT:%.*]], i32 [[SRC0:%.*]], i32 [[SRC1:%.*]]) #[[ATTR0]] { +; DCE-CHECK-NEXT: store i32 [[SRC0]], ptr addrspace(1) [[OUT]], align 4 +; DCE-CHECK-NEXT: ret void +; + %v = call i32 @llvm.amdgcn.readlane(i32 %src0, i32 %src1) + store i32 %v, ptr addrspace(1) %out + ret void +} + +define amdgpu_kernel void @readlane_nonuniform_workitem(i32 addrspace(1)* %out) { +; CURRENT-CHECK-LABEL: define amdgpu_kernel void @readlane_nonuniform_workitem( +; CURRENT-CHECK-SAME: ptr addrspace(1) writeonly captures(none) [[OUT:%.*]]) local_unnamed_addr #[[ATTR2:[0-9]+]] { +; CURRENT-CHECK-NEXT: [[TIDX:%.*]] = tail call i32 @llvm.amdgcn.workitem.id.x() +; CURRENT-CHECK-NEXT: [[TIDY:%.*]] = tail call i32 @llvm.amdgcn.workitem.id.y() +; CURRENT-CHECK-NEXT: [[V:%.*]] = tail call i32 @llvm.amdgcn.readlane.i32(i32 [[TIDX]], i32 [[TIDY]]) +; CURRENT-CHECK-NEXT: [[TMP1:%.*]] = zext nneg i32 [[TIDX]] to i64 +; CURRENT-CHECK-NEXT: [[OUT_PTR:%.*]] = getelementptr i32, ptr addrspace(1) [[OUT]], i64 [[TMP1]] +; CURRENT-CHECK-NEXT: store i32 [[V]], ptr addrspace(1) [[OUT_PTR]], align 4 +; CURRENT-CHECK-NEXT: ret void +; +; PASS-CHECK-LABEL: define amdgpu_kernel void @readlane_nonuniform_workitem( +; PASS-CHECK-SAME: ptr addrspace(1) [[OUT:%.*]]) #[[ATTR0]] { +; PASS-CHECK-NEXT: [[TIDX:%.*]] = call i32 @llvm.amdgcn.workitem.id.x() +; PASS-CHECK-NEXT: [[TIDY:%.*]] = call i32 @llvm.amdgcn.workitem.id.y() +; PASS-CHECK-NEXT: [[V:%.*]] = call i32 @llvm.amdgcn.readlane.i32(i32 [[TIDX]], i32 [[TIDY]]) +; PASS-CHECK-NEXT: [[OUT_PTR:%.*]] = getelementptr i32, ptr addrspace(1) [[OUT]], i32 [[TIDX]] +; PASS-CHECK-NEXT: store i32 [[V]], ptr addrspace(1) [[OUT_PTR]], align 4 +; PASS-CHECK-NEXT: ret void +; +; DCE-CHECK-LABEL: define amdgpu_kernel void @readlane_nonuniform_workitem( +; DCE-CHECK-SAME: ptr addrspace(1) [[OUT:%.*]]) #[[ATTR0]] { +; DCE-CHECK-NEXT: [[TIDX:%.*]] = call i32 @llvm.amdgcn.workitem.id.x() +; DCE-CHECK-NEXT: [[TIDY:%.*]] = call i32 @llvm.amdgcn.workitem.id.y() +; DCE-CHECK-NEXT: [[V:%.*]] 
= call i32 @llvm.amdgcn.readlane.i32(i32 [[TIDX]], i32 [[TIDY]]) +; DCE-CHECK-NEXT: [[OUT_PTR:%.*]] = getelementptr i32, ptr addrspace(1) [[OUT]], i32 [[TIDX]] +; DCE-CHECK-NEXT: store i32 [[V]], ptr addrspace(1) [[OUT_PTR]], align 4 +; DCE-CHECK-NEXT: ret void +; + %tidx = call i32 @llvm.amdgcn.workitem.id.x() + %tidy = call i32 @llvm.amdgcn.workitem.id.y() + %v = call i32 @llvm.amdgcn.readlane(i32 %tidx, i32 %tidy) + %out_ptr = getelementptr i32, i32 addrspace(1)* %out, i32 %tidx + store i32 %v, i32 addrspace(1)* %out_ptr + ret void +} + +define amdgpu_kernel void @readlane_nonuniform_expression(i32 addrspace(1)* %out) { +; CURRENT-CHECK-LABEL: define amdgpu_kernel void @readlane_nonuniform_expression( +; CURRENT-CHECK-SAME: ptr addrspace(1) writeonly captures(none) [[OUT:%.*]]) local_unnamed_addr #[[ATTR2]] { +; CURRENT-CHECK-NEXT: [[TIDX:%.*]] = tail call i32 @llvm.amdgcn.workitem.id.x() +; CURRENT-CHECK-NEXT: [[TIDY:%.*]] = tail call i32 @llvm.amdgcn.workitem.id.y() +; CURRENT-CHECK-NEXT: [[TIDX2:%.*]] = add nuw nsw i32 [[TIDX]], 1 +; CURRENT-CHECK-NEXT: [[TIDY2:%.*]] = add nuw nsw i32 [[TIDY]], 2 +; CURRENT-CHECK-NEXT: [[V:%.*]] = tail call i32 @llvm.amdgcn.readlane.i32(i32 [[TIDX2]], i32 [[TIDY2]]) +; CURRENT-CHECK-NEXT: [[TMP1:%.*]] = zext nneg i32 [[TIDX]] to i64 +; CURRENT-CHECK-NEXT: [[OUT_PTR:%.*]] = getelementptr i32, ptr addrspace(1) [[OUT]], i64 [[TMP1]] +; CURRENT-CHECK-NEXT: store i32 [[V]], ptr addrspace(1) [[OUT_PTR]], align 4 +; CURRENT-CHECK-NEXT: ret void +; +; PASS-CHECK-LABEL: define amdgpu_kernel void @readlane_nonuniform_expression( +; PASS-CHECK-SAME: ptr addrspace(1) [[OUT:%.*]]) #[[ATTR0]] { +; PASS-CHECK-NEXT: [[TIDX:%.*]] = call i32 @llvm.amdgcn.workitem.id.x() +; PASS-CHECK-NEXT: [[TIDY:%.*]] = call i32 @llvm.amdgcn.workitem.id.y() +; PASS-CHECK-NEXT: [[TIDX2:%.*]] = add i32 [[TIDX]], 1 +; PASS-CHECK-NEXT: [[TIDY2:%.*]] = add i32 [[TIDY]], 2 +; PASS-CHECK-NEXT: [[V:%.*]] = call i32 @llvm.amdgcn.readlane.i32(i32 [[TIDX2]], i32 [[TIDY2]]) +; PASS-CHECK-NEXT: [[OUT_PTR:%.*]] = getelementptr i32, ptr addrspace(1) [[OUT]], i32 [[TIDX]] +; PASS-CHECK-NEXT: store i32 [[V]], ptr addrspace(1) [[OUT_PTR]], align 4 +; PASS-CHECK-NEXT: ret void +; +; DCE-CHECK-LABEL: define amdgpu_kernel void @readlane_nonuniform_expression( +; DCE-CHECK-SAME: ptr addrspace(1) [[OUT:%.*]]) #[[ATTR0]] { +; DCE-CHECK-NEXT: [[TIDX:%.*]] = call i32 @llvm.amdgcn.workitem.id.x() +; DCE-CHECK-NEXT: [[TIDY:%.*]] = call i32 @llvm.amdgcn.workitem.id.y() +; DCE-CHECK-NEXT: [[TIDX2:%.*]] = add i32 [[TIDX]], 1 +; DCE-CHECK-NEXT: [[TIDY2:%.*]] = add i32 [[TIDY]], 2 +; DCE-CHECK-NEXT: [[V:%.*]] = call i32 @llvm.amdgcn.readlane.i32(i32 [[TIDX2]], i32 [[TIDY2]]) +; DCE-CHECK-NEXT: [[OUT_PTR:%.*]] = getelementptr i32, ptr addrspace(1) [[OUT]], i32 [[TIDX]] +; DCE-CHECK-NEXT: store i32 [[V]], ptr addrspace(1) [[OUT_PTR]], align 4 +; DCE-CHECK-NEXT: ret void +; + %tidx = call i32 @llvm.amdgcn.workitem.id.x() + %tidy = call i32 @llvm.amdgcn.workitem.id.y() + %tidx2 = add i32 %tidx, 1 + %tidy2 = add i32 %tidy, 2 + %v = call i32 @llvm.amdgcn.readlane(i32 %tidx2, i32 %tidy2) + %out_ptr = getelementptr i32, i32 addrspace(1)* %out, i32 %tidx + store i32 %v, i32 addrspace(1)* %out_ptr + ret void +} + +define amdgpu_kernel void @readfirstlane_constant(ptr addrspace(1) %out) { +; CURRENT-CHECK-LABEL: define amdgpu_kernel void @readfirstlane_constant( +; CURRENT-CHECK-SAME: ptr addrspace(1) writeonly captures(none) initializes((0, 4)) [[OUT:%.*]]) local_unnamed_addr #[[ATTR0]] { +; CURRENT-CHECK-NEXT: store i32 7, 
ptr addrspace(1) [[OUT]], align 4 +; CURRENT-CHECK-NEXT: ret void +; +; PASS-CHECK-LABEL: define amdgpu_kernel void @readfirstlane_constant( +; PASS-CHECK-SAME: ptr addrspace(1) [[OUT:%.*]]) #[[ATTR0]] { +; PASS-CHECK-NEXT: store i32 7, ptr addrspace(1) [[OUT]], align 4 +; PASS-CHECK-NEXT: ret void +; +; DCE-CHECK-LABEL: define amdgpu_kernel void @readfirstlane_constant( +; DCE-CHECK-SAME: ptr addrspace(1) [[OUT:%.*]]) #[[ATTR0]] { +; DCE-CHECK-NEXT: store i32 7, ptr addrspace(1) [[OUT]], align 4 +; DCE-CHECK-NEXT: ret void +; + %v = call i32 @llvm.amdgcn.readfirstlane(i32 7) + store i32 %v, ptr addrspace(1) %out + ret void +} + +define amdgpu_kernel void @readfirstlane_with_argument(ptr addrspace(1) %out, i32 %src0) { +; CURRENT-CHECK-LABEL: define amdgpu_kernel void @readfirstlane_with_argument( +; CURRENT-CHECK-SAME: ptr addrspace(1) writeonly captures(none) initializes((0, 4)) [[OUT:%.*]], i32 [[SRC0:%.*]]) local_unnamed_addr #[[ATTR0]] { +; CURRENT-CHECK-NEXT: store i32 [[SRC0]], ptr addrspace(1) [[OUT]], align 4 +; CURRENT-CHECK-NEXT: ret void +; +; PASS-CHECK-LABEL: define amdgpu_kernel void @readfirstlane_with_argument( +; PASS-CHECK-SAME: ptr addrspace(1) [[OUT:%.*]], i32 [[SRC0:%.*]]) #[[ATTR0]] { +; PASS-CHECK-NEXT: store i32 [[SRC0]], ptr addrspace(1) [[OUT]], align 4 +; PASS-CHECK-NEXT: ret void +; +; DCE-CHECK-LABEL: define amdgpu_kernel void @readfirstlane_with_argument( +; DCE-CHECK-SAME: ptr addrspace(1) [[OUT:%.*]], i32 [[SRC0:%.*]]) #[[ATTR0]] { +; DCE-CHECK-NEXT: store i32 [[SRC0]], ptr addrspace(1) [[OUT]], align 4 +; DCE-CHECK-NEXT: ret void +; + %v = call i32 @llvm.amdgcn.readfirstlane(i32 %src0) + store i32 %v, ptr addrspace(1) %out + ret void +} + +define amdgpu_kernel void @readfirstlane_with_workitem_id(i32 addrspace(1)* %out) { +; CURRENT-CHECK-LABEL: define amdgpu_kernel void @readfirstlane_with_workitem_id( +; CURRENT-CHECK-SAME: ptr addrspace(1) writeonly captures(none) [[OUT:%.*]]) local_unnamed_addr #[[ATTR1]] { +; CURRENT-CHECK-NEXT: [[TID:%.*]] = tail call i32 @llvm.amdgcn.workitem.id.x() +; CURRENT-CHECK-NEXT: [[V:%.*]] = tail call i32 @llvm.amdgcn.readfirstlane.i32(i32 [[TID]]) +; CURRENT-CHECK-NEXT: [[TMP1:%.*]] = zext nneg i32 [[TID]] to i64 +; CURRENT-CHECK-NEXT: [[OUT_PTR:%.*]] = getelementptr i32, ptr addrspace(1) [[OUT]], i64 [[TMP1]] +; CURRENT-CHECK-NEXT: store i32 [[V]], ptr addrspace(1) [[OUT_PTR]], align 4 +; CURRENT-CHECK-NEXT: ret void +; +; PASS-CHECK-LABEL: define amdgpu_kernel void @readfirstlane_with_workitem_id( +; PASS-CHECK-SAME: ptr addrspace(1) [[OUT:%.*]]) #[[ATTR0]] { +; PASS-CHECK-NEXT: [[TID:%.*]] = call i32 @llvm.amdgcn.workitem.id.x() +; PASS-CHECK-NEXT: [[V:%.*]] = call i32 @llvm.amdgcn.readfirstlane.i32(i32 [[TID]]) +; PASS-CHECK-NEXT: [[OUT_PTR:%.*]] = getelementptr i32, ptr addrspace(1) [[OUT]], i32 [[TID]] +; PASS-CHECK-NEXT: store i32 [[V]], ptr addrspace(1) [[OUT_PTR]], align 4 +; PASS-CHECK-NEXT: ret void +; +; DCE-CHECK-LABEL: define amdgpu_kernel void @readfirstlane_with_workitem_id( +; DCE-CHECK-SAME: ptr addrspace(1) [[OUT:%.*]]) #[[ATTR0]] { +; DCE-CHECK-NEXT: [[TID:%.*]] = call i32 @llvm.amdgcn.workitem.id.x() +; DCE-CHECK-NEXT: [[V:%.*]] = call i32 @llvm.amdgcn.readfirstlane.i32(i32 [[TID]]) +; DCE-CHECK-NEXT: [[OUT_PTR:%.*]] = getelementptr i32, ptr addrspace(1) [[OUT]], i32 [[TID]] +; DCE-CHECK-NEXT: store i32 [[V]], ptr addrspace(1) [[OUT_PTR]], align 4 +; DCE-CHECK-NEXT: ret void +; + %tid = call i32 @llvm.amdgcn.workitem.id.x() + %v = call i32 @llvm.amdgcn.readfirstlane(i32 %tid) + %out_ptr = getelementptr 
i32, i32 addrspace(1)* %out, i32 %tid + store i32 %v, i32 addrspace(1)* %out_ptr + ret void +} + +define amdgpu_kernel void @readfirstlane_expression(i32 addrspace(1)* %out) { +; CURRENT-CHECK-LABEL: define amdgpu_kernel void @readfirstlane_expression( +; CURRENT-CHECK-SAME: ptr addrspace(1) writeonly captures(none) [[OUT:%.*]]) local_unnamed_addr #[[ATTR1]] { +; CURRENT-CHECK-NEXT: [[TID:%.*]] = tail call i32 @llvm.amdgcn.workitem.id.x() +; CURRENT-CHECK-NEXT: [[TID2:%.*]] = add nuw nsw i32 [[TID]], 1 +; CURRENT-CHECK-NEXT: [[V:%.*]] = tail call i32 @llvm.amdgcn.readfirstlane.i32(i32 [[TID2]]) +; CURRENT-CHECK-NEXT: [[TMP1:%.*]] = zext nneg i32 [[TID2]] to i64 +; CURRENT-CHECK-NEXT: [[OUT_PTR:%.*]] = getelementptr i32, ptr addrspace(1) [[OUT]], i64 [[TMP1]] +; CURRENT-CHECK-NEXT: store i32 [[V]], ptr addrspace(1) [[OUT_PTR]], align 4 +; CURRENT-CHECK-NEXT: ret void +; +; PASS-CHECK-LABEL: define amdgpu_kernel void @readfirstlane_expression( +; PASS-CHECK-SAME: ptr addrspace(1) [[OUT:%.*]]) #[[ATTR0]] { +; PASS-CHECK-NEXT: [[TID:%.*]] = call i32 @llvm.amdgcn.workitem.id.x() +; PASS-CHECK-NEXT: [[TID2:%.*]] = add i32 [[TID]], 1 +; PASS-CHECK-NEXT: [[V:%.*]] = call i32 @llvm.amdgcn.readfirstlane.i32(i32 [[TID2]]) +; PASS-CHECK-NEXT: [[OUT_PTR:%.*]] = getelementptr i32, ptr addrspace(1) [[OUT]], i32 [[TID2]] +; PASS-CHECK-NEXT: store i32 [[V]], ptr addrspace(1) [[OUT_PTR]], align 4 +; PASS-CHECK-NEXT: ret void +; +; DCE-CHECK-LABEL: define amdgpu_kernel void @readfirstlane_expression( +; DCE-CHECK-SAME: ptr addrspace(1) [[OUT:%.*]]) #[[ATTR0]] { +; DCE-CHECK-NEXT: [[TID:%.*]] = call i32 @llvm.amdgcn.workitem.id.x() +; DCE-CHECK-NEXT: [[TID2:%.*]] = add i32 [[TID]], 1 +; DCE-CHECK-NEXT: [[V:%.*]] = call i32 @llvm.amdgcn.readfirstlane.i32(i32 [[TID2]]) +; DCE-CHECK-NEXT: [[OUT_PTR:%.*]] = getelementptr i32, ptr addrspace(1) [[OUT]], i32 [[TID2]] +; DCE-CHECK-NEXT: store i32 [[V]], ptr addrspace(1) [[OUT_PTR]], align 4 +; DCE-CHECK-NEXT: ret void +; + %tid = call i32 @llvm.amdgcn.workitem.id.x() + %tid2 = add i32 %tid, 1 + %v = call i32 @llvm.amdgcn.readfirstlane(i32 %tid2) + %out_ptr = getelementptr i32, i32 addrspace(1)* %out, i32 %tid2 + store i32 %v, i32 addrspace(1)* %out_ptr + ret void +} + +define amdgpu_kernel void @readfirstlane_with_readfirstlane(ptr addrspace(1) %out) { +; CURRENT-CHECK-LABEL: define amdgpu_kernel void @readfirstlane_with_readfirstlane( +; CURRENT-CHECK-SAME: ptr addrspace(1) writeonly captures(none) initializes((0, 4)) [[OUT:%.*]]) local_unnamed_addr #[[ATTR0]] { +; CURRENT-CHECK-NEXT: store i32 5, ptr addrspace(1) [[OUT]], align 4 +; CURRENT-CHECK-NEXT: ret void +; +; PASS-CHECK-LABEL: define amdgpu_kernel void @readfirstlane_with_readfirstlane( +; PASS-CHECK-SAME: ptr addrspace(1) [[OUT:%.*]]) #[[ATTR0]] { +; PASS-CHECK-NEXT: store i32 5, ptr addrspace(1) [[OUT]], align 4 +; PASS-CHECK-NEXT: ret void +; +; DCE-CHECK-LABEL: define amdgpu_kernel void @readfirstlane_with_readfirstlane( +; DCE-CHECK-SAME: ptr addrspace(1) [[OUT:%.*]]) #[[ATTR0]] { +; DCE-CHECK-NEXT: store i32 5, ptr addrspace(1) [[OUT]], align 4 +; DCE-CHECK-NEXT: ret void +; + %v1 = call i32 @llvm.amdgcn.readfirstlane(i32 5) + %v2 = call i32 @llvm.amdgcn.readfirstlane(i32 %v1) + store i32 %v2, ptr addrspace(1) %out + ret void +} + +define amdgpu_kernel void @readfirstlane_with_readlane(ptr addrspace(1) %out) { +; CURRENT-CHECK-LABEL: define amdgpu_kernel void @readfirstlane_with_readlane( +; CURRENT-CHECK-SAME: ptr addrspace(1) writeonly captures(none) initializes((0, 4)) [[OUT:%.*]]) 
local_unnamed_addr #[[ATTR2]] { +; CURRENT-CHECK-NEXT: [[TIDX:%.*]] = tail call i32 @llvm.amdgcn.workitem.id.x() +; CURRENT-CHECK-NEXT: [[TIDY:%.*]] = tail call i32 @llvm.amdgcn.workitem.id.y() +; CURRENT-CHECK-NEXT: [[V1:%.*]] = tail call i32 @llvm.amdgcn.readlane.i32(i32 [[TIDX]], i32 [[TIDY]]) +; CURRENT-CHECK-NEXT: store i32 [[V1]], ptr addrspace(1) [[OUT]], align 4 +; CURRENT-CHECK-NEXT: ret void +; +; PASS-CHECK-LABEL: define amdgpu_kernel void @readfirstlane_with_readlane( +; PASS-CHECK-SAME: ptr addrspace(1) [[OUT:%.*]]) #[[ATTR0]] { +; PASS-CHECK-NEXT: [[TIDX:%.*]] = call i32 @llvm.amdgcn.workitem.id.x() +; PASS-CHECK-NEXT: [[TIDY:%.*]] = call i32 @llvm.amdgcn.workitem.id.y() +; PASS-CHECK-NEXT: [[V1:%.*]] = call i32 @llvm.amdgcn.readlane.i32(i32 [[TIDX]], i32 [[TIDY]]) +; PASS-CHECK-NEXT: store i32 [[V1]], ptr addrspace(1) [[OUT]], align 4 +; PASS-CHECK-NEXT: ret void +; +; DCE-CHECK-LABEL: define amdgpu_kernel void @readfirstlane_with_readlane( +; DCE-CHECK-SAME: ptr addrspace(1) [[OUT:%.*]]) #[[ATTR0]] { +; DCE-CHECK-NEXT: [[TIDX:%.*]] = call i32 @llvm.amdgcn.workitem.id.x() +; DCE-CHECK-NEXT: [[TIDY:%.*]] = call i32 @llvm.amdgcn.workitem.id.y() +; DCE-CHECK-NEXT: [[V1:%.*]] = call i32 @llvm.amdgcn.readlane.i32(i32 [[TIDX]], i32 [[TIDY]]) +; DCE-CHECK-NEXT: store i32 [[V1]], ptr addrspace(1) [[OUT]], align 4 +; DCE-CHECK-NEXT: ret void +; + %tidx = call i32 @llvm.amdgcn.workitem.id.x() + %tidy = call i32 @llvm.amdgcn.workitem.id.y() + %v1 = call i32 @llvm.amdgcn.readlane(i32 %tidx, i32 %tidy) + %v2 = call i32 @llvm.amdgcn.readfirstlane(i32 %v1) + store i32 %v2, ptr addrspace(1) %out + ret void +} + +define amdgpu_kernel void @readlane_with_firstlane(ptr addrspace(1) %out) { +; CURRENT-CHECK-LABEL: define amdgpu_kernel void @readlane_with_firstlane( +; CURRENT-CHECK-SAME: ptr addrspace(1) writeonly captures(none) initializes((0, 4)) [[OUT:%.*]]) local_unnamed_addr #[[ATTR1]] { +; CURRENT-CHECK-NEXT: [[TIDX:%.*]] = tail call i32 @llvm.amdgcn.workitem.id.x() +; CURRENT-CHECK-NEXT: [[V1:%.*]] = tail call i32 @llvm.amdgcn.readfirstlane.i32(i32 [[TIDX]]) +; CURRENT-CHECK-NEXT: store i32 [[V1]], ptr addrspace(1) [[OUT]], align 4 +; CURRENT-CHECK-NEXT: ret void +; +; PASS-CHECK-LABEL: define amdgpu_kernel void @readlane_with_firstlane( +; PASS-CHECK-SAME: ptr addrspace(1) [[OUT:%.*]]) #[[ATTR0]] { +; PASS-CHECK-NEXT: [[TIDX:%.*]] = call i32 @llvm.amdgcn.workitem.id.x() +; PASS-CHECK-NEXT: [[V1:%.*]] = call i32 @llvm.amdgcn.readfirstlane.i32(i32 [[TIDX]]) +; PASS-CHECK-NEXT: store i32 [[V1]], ptr addrspace(1) [[OUT]], align 4 +; PASS-CHECK-NEXT: ret void +; +; DCE-CHECK-LABEL: define amdgpu_kernel void @readlane_with_firstlane( +; DCE-CHECK-SAME: ptr addrspace(1) [[OUT:%.*]]) #[[ATTR0]] { +; DCE-CHECK-NEXT: [[TIDX:%.*]] = call i32 @llvm.amdgcn.workitem.id.x() +; DCE-CHECK-NEXT: [[V1:%.*]] = call i32 @llvm.amdgcn.readfirstlane.i32(i32 [[TIDX]]) +; DCE-CHECK-NEXT: store i32 [[V1]], ptr addrspace(1) [[OUT]], align 4 +; DCE-CHECK-NEXT: ret void +; + %tidx = call i32 @llvm.amdgcn.workitem.id.x() + %v1 = call i32 @llvm.amdgcn.readfirstlane(i32 %tidx) + %v2 = call i32 @llvm.amdgcn.readlane(i32 %v1, i32 3) + store i32 %v2, ptr addrspace(1) %out + ret void +} + +define amdgpu_kernel void @readlane_readlane(ptr addrspace(1) %out) { +; CURRENT-CHECK-LABEL: define amdgpu_kernel void @readlane_readlane( +; CURRENT-CHECK-SAME: ptr addrspace(1) writeonly captures(none) initializes((0, 4)) [[OUT:%.*]]) local_unnamed_addr #[[ATTR2]] { +; CURRENT-CHECK-NEXT: [[TIDX:%.*]] = tail call i32 
@llvm.amdgcn.workitem.id.x() +; CURRENT-CHECK-NEXT: [[TIDY:%.*]] = tail call i32 @llvm.amdgcn.workitem.id.y() +; CURRENT-CHECK-NEXT: [[V1:%.*]] = tail call i32 @llvm.amdgcn.readlane.i32(i32 [[TIDX]], i32 [[TIDY]]) +; CURRENT-CHECK-NEXT: store i32 [[V1]], ptr addrspace(1) [[OUT]], align 4 +; CURRENT-CHECK-NEXT: ret void +; +; PASS-CHECK-LABEL: define amdgpu_kernel void @readlane_readlane( +; PASS-CHECK-SAME: ptr addrspace(1) [[OUT:%.*]]) #[[ATTR0]] { +; PASS-CHECK-NEXT: [[TIDX:%.*]] = call i32 @llvm.amdgcn.workitem.id.x() +; PASS-CHECK-NEXT: [[TIDY:%.*]] = call i32 @llvm.amdgcn.workitem.id.y() +; PASS-CHECK-NEXT: [[V1:%.*]] = call i32 @llvm.amdgcn.readlane.i32(i32 [[TIDX]], i32 [[TIDY]]) +; PASS-CHECK-NEXT: store i32 [[V1]], ptr addrspace(1) [[OUT]], align 4 +; PASS-CHECK-NEXT: ret void +; +; DCE-CHECK-LABEL: define amdgpu_kernel void @readlane_readlane( +; DCE-CHECK-SAME: ptr addrspace(1) [[OUT:%.*]]) #[[ATTR0]] { +; DCE-CHECK-NEXT: [[TIDX:%.*]] = call i32 @llvm.amdgcn.workitem.id.x() +; DCE-CHECK-NEXT: [[TIDY:%.*]] = call i32 @llvm.amdgcn.workitem.id.y() +; DCE-CHECK-NEXT: [[V1:%.*]] = call i32 @llvm.amdgcn.readlane.i32(i32 [[TIDX]], i32 [[TIDY]]) +; DCE-CHECK-NEXT: store i32 [[V1]], ptr addrspace(1) [[OUT]], align 4 +; DCE-CHECK-NEXT: ret void +; + %tidx = call i32 @llvm.amdgcn.workitem.id.x() + %tidy = call i32 @llvm.amdgcn.workitem.id.y() + %v1 = call i32 @llvm.amdgcn.readlane(i32 %tidx, i32 %tidy) + %v2 = call i32 @llvm.amdgcn.readlane(i32 %v1, i32 2) + store i32 %v2, ptr addrspace(1) %out + ret void +} + + +define amdgpu_kernel void @permlane64_boundary(ptr addrspace(1) %out_min, ptr addrspace(1) %out_max) { +; CURRENT-CHECK-LABEL: define amdgpu_kernel void @permlane64_boundary( +; CURRENT-CHECK-SAME: ptr addrspace(1) writeonly captures(none) initializes((0, 4)) [[OUT_MIN:%.*]], ptr addrspace(1) writeonly captures(none) initializes((0, 4)) [[OUT_MAX:%.*]]) local_unnamed_addr #[[ATTR0]] { +; CURRENT-CHECK-NEXT: store i32 -2147483648, ptr addrspace(1) [[OUT_MIN]], align 4 +; CURRENT-CHECK-NEXT: store i32 2147483647, ptr addrspace(1) [[OUT_MAX]], align 4 +; CURRENT-CHECK-NEXT: ret void +; +; PASS-CHECK-LABEL: define amdgpu_kernel void @permlane64_boundary( +; PASS-CHECK-SAME: ptr addrspace(1) [[OUT_MIN:%.*]], ptr addrspace(1) [[OUT_MAX:%.*]]) #[[ATTR0]] { +; PASS-CHECK-NEXT: store i32 -2147483648, ptr addrspace(1) [[OUT_MIN]], align 4 +; PASS-CHECK-NEXT: store i32 2147483647, ptr addrspace(1) [[OUT_MAX]], align 4 +; PASS-CHECK-NEXT: ret void +; +; DCE-CHECK-LABEL: define amdgpu_kernel void @permlane64_boundary( +; DCE-CHECK-SAME: ptr addrspace(1) [[OUT_MIN:%.*]], ptr addrspace(1) [[OUT_MAX:%.*]]) #[[ATTR0]] { +; DCE-CHECK-NEXT: store i32 -2147483648, ptr addrspace(1) [[OUT_MIN]], align 4 +; DCE-CHECK-NEXT: store i32 2147483647, ptr addrspace(1) [[OUT_MAX]], align 4 +; DCE-CHECK-NEXT: ret void +; + %min_v = call i32 @llvm.amdgcn.permlane64(i32 -2147483648) + store i32 %min_v, ptr addrspace(1) %out_min + %max_v = call i32 @llvm.amdgcn.permlane64(i32 2147483647) + store i32 %max_v, ptr addrspace(1) %out_max + ret void +} + +define amdgpu_kernel void @readlane_cross_lane(ptr addrspace(1) %out) { +; CURRENT-CHECK-LABEL: define amdgpu_kernel void @readlane_cross_lane( +; CURRENT-CHECK-SAME: ptr addrspace(1) writeonly captures(none) initializes((0, 4)) [[OUT:%.*]]) local_unnamed_addr #[[ATTR1]] { +; CURRENT-CHECK-NEXT: [[TIDX:%.*]] = tail call i32 @llvm.amdgcn.workitem.id.x() +; CURRENT-CHECK-NEXT: [[TIDY:%.*]] = add nuw nsw i32 [[TIDX]], 5 +; CURRENT-CHECK-NEXT: [[V:%.*]] = tail call i32 
@llvm.amdgcn.readlane.i32(i32 [[TIDX]], i32 [[TIDY]]) +; CURRENT-CHECK-NEXT: store i32 [[V]], ptr addrspace(1) [[OUT]], align 4 +; CURRENT-CHECK-NEXT: ret void +; +; PASS-CHECK-LABEL: define amdgpu_kernel void @readlane_cross_lane( +; PASS-CHECK-SAME: ptr addrspace(1) [[OUT:%.*]]) #[[ATTR0]] { +; PASS-CHECK-NEXT: [[TIDX:%.*]] = call i32 @llvm.amdgcn.workitem.id.x() +; PASS-CHECK-NEXT: [[TIDY:%.*]] = add i32 [[TIDX]], 5 +; PASS-CHECK-NEXT: [[V:%.*]] = call i32 @llvm.amdgcn.readlane.i32(i32 [[TIDX]], i32 [[TIDY]]) +; PASS-CHECK-NEXT: store i32 [[V]], ptr addrspace(1) [[OUT]], align 4 +; PASS-CHECK-NEXT: ret void +; +; DCE-CHECK-LABEL: define amdgpu_kernel void @readlane_cross_lane( +; DCE-CHECK-SAME: ptr addrspace(1) [[OUT:%.*]]) #[[ATTR0]] { +; DCE-CHECK-NEXT: [[TIDX:%.*]] = call i32 @llvm.amdgcn.workitem.id.x() +; DCE-CHECK-NEXT: [[TIDY:%.*]] = add i32 [[TIDX]], 5 +; DCE-CHECK-NEXT: [[V:%.*]] = call i32 @llvm.amdgcn.readlane.i32(i32 [[TIDX]], i32 [[TIDY]]) +; DCE-CHECK-NEXT: store i32 [[V]], ptr addrspace(1) [[OUT]], align 4 +; DCE-CHECK-NEXT: ret void +; + %tidx = call i32 @llvm.amdgcn.workitem.id.x() + %tidy = add i32 %tidx, 5 + %v = call i32 @llvm.amdgcn.readlane(i32 %tidx, i32 %tidy) + store i32 %v, ptr addrspace(1) %out + ret void +} + +define amdgpu_kernel void @readfirstlane_random(ptr addrspace(1) %out) { +; CURRENT-CHECK-LABEL: define amdgpu_kernel void @readfirstlane_random( +; CURRENT-CHECK-SAME: ptr addrspace(1) writeonly captures(none) initializes((0, 4)) [[OUT:%.*]]) local_unnamed_addr #[[ATTR0]] { +; CURRENT-CHECK-NEXT: store i32 435, ptr addrspace(1) [[OUT]], align 4 +; CURRENT-CHECK-NEXT: ret void +; +; PASS-CHECK-LABEL: define amdgpu_kernel void @readfirstlane_random( +; PASS-CHECK-SAME: ptr addrspace(1) [[OUT:%.*]]) #[[ATTR0]] { +; PASS-CHECK-NEXT: [[RANDOM:%.*]] = xor i32 123, 456 +; PASS-CHECK-NEXT: store i32 [[RANDOM]], ptr addrspace(1) [[OUT]], align 4 +; PASS-CHECK-NEXT: ret void +; +; DCE-CHECK-LABEL: define amdgpu_kernel void @readfirstlane_random( +; DCE-CHECK-SAME: ptr addrspace(1) [[OUT:%.*]]) #[[ATTR0]] { +; DCE-CHECK-NEXT: [[RANDOM:%.*]] = xor i32 123, 456 +; DCE-CHECK-NEXT: store i32 [[RANDOM]], ptr addrspace(1) [[OUT]], align 4 +; DCE-CHECK-NEXT: ret void +; + %random = xor i32 123, 456 + %v = call i32 @llvm.amdgcn.readfirstlane(i32 %random) + store i32 %v, ptr addrspace(1) %out + ret void +} + +define amdgpu_kernel void @readlane_expression(ptr addrspace(1) %out) { +; CURRENT-CHECK-LABEL: define amdgpu_kernel void @readlane_expression( +; CURRENT-CHECK-SAME: ptr addrspace(1) writeonly captures(none) initializes((0, 4)) [[OUT:%.*]]) local_unnamed_addr #[[ATTR1]] { +; CURRENT-CHECK-NEXT: [[IDX1:%.*]] = tail call i32 @llvm.amdgcn.workitem.id.x() +; CURRENT-CHECK-NEXT: [[IDX2:%.*]] = shl nuw nsw i32 [[IDX1]], 1 +; CURRENT-CHECK-NEXT: [[V:%.*]] = tail call i32 @llvm.amdgcn.readlane.i32(i32 [[IDX1]], i32 [[IDX2]]) +; CURRENT-CHECK-NEXT: store i32 [[V]], ptr addrspace(1) [[OUT]], align 4 +; CURRENT-CHECK-NEXT: ret void +; +; PASS-CHECK-LABEL: define amdgpu_kernel void @readlane_expression( +; PASS-CHECK-SAME: ptr addrspace(1) [[OUT:%.*]]) #[[ATTR0]] { +; PASS-CHECK-NEXT: [[IDX1:%.*]] = call i32 @llvm.amdgcn.workitem.id.x() +; PASS-CHECK-NEXT: [[IDX2:%.*]] = mul i32 [[IDX1]], 2 +; PASS-CHECK-NEXT: [[V:%.*]] = call i32 @llvm.amdgcn.readlane.i32(i32 [[IDX1]], i32 [[IDX2]]) +; PASS-CHECK-NEXT: store i32 [[V]], ptr addrspace(1) [[OUT]], align 4 +; PASS-CHECK-NEXT: ret void +; +; DCE-CHECK-LABEL: define amdgpu_kernel void @readlane_expression( +; DCE-CHECK-SAME: ptr 
addrspace(1) [[OUT:%.*]]) #[[ATTR0]] { +; DCE-CHECK-NEXT: [[IDX1:%.*]] = call i32 @llvm.amdgcn.workitem.id.x() +; DCE-CHECK-NEXT: [[IDX2:%.*]] = mul i32 [[IDX1]], 2 +; DCE-CHECK-NEXT: [[V:%.*]] = call i32 @llvm.amdgcn.readlane.i32(i32 [[IDX1]], i32 [[IDX2]]) +; DCE-CHECK-NEXT: store i32 [[V]], ptr addrspace(1) [[OUT]], align 4 +; DCE-CHECK-NEXT: ret void +; + %idx1 = call i32 @llvm.amdgcn.workitem.id.x() + %idx2 = mul i32 %idx1, 2 + %v = call i32 @llvm.amdgcn.readlane(i32 %idx1, i32 %idx2) + store i32 %v, ptr addrspace(1) %out + ret void +} + +define amdgpu_kernel void @ballot_i32(i32 %v, ptr addrspace(1) %out) { +; CURRENT-CHECK-LABEL: define amdgpu_kernel void @ballot_i32( +; CURRENT-CHECK-SAME: i32 [[V:%.*]], ptr addrspace(1) writeonly captures(none) initializes((0, 1)) [[OUT:%.*]]) local_unnamed_addr #[[ATTR1]] { +; CURRENT-CHECK-NEXT: [[C:%.*]] = trunc i32 [[V]] to i1 +; CURRENT-CHECK-NEXT: [[BALLOT:%.*]] = tail call i32 @llvm.amdgcn.ballot.i32(i1 [[C]]) +; CURRENT-CHECK-NEXT: [[BALLOT_NE_ZERO:%.*]] = icmp ne i32 [[BALLOT]], 0 +; CURRENT-CHECK-NEXT: store i1 [[BALLOT_NE_ZERO]], ptr addrspace(1) [[OUT]], align 1 +; CURRENT-CHECK-NEXT: ret void +; +; PASS-CHECK-LABEL: define amdgpu_kernel void @ballot_i32( +; PASS-CHECK-SAME: i32 [[V:%.*]], ptr addrspace(1) [[OUT:%.*]]) #[[ATTR0]] { +; PASS-CHECK-NEXT: [[C:%.*]] = trunc i32 [[V]] to i1 +; PASS-CHECK-NEXT: store i1 [[C]], ptr addrspace(1) [[OUT]], align 1 +; PASS-CHECK-NEXT: ret void +; +; DCE-CHECK-LABEL: define amdgpu_kernel void @ballot_i32( +; DCE-CHECK-SAME: i32 [[V:%.*]], ptr addrspace(1) [[OUT:%.*]]) #[[ATTR0]] { +; DCE-CHECK-NEXT: [[C:%.*]] = trunc i32 [[V]] to i1 +; DCE-CHECK-NEXT: store i1 [[C]], ptr addrspace(1) [[OUT]], align 1 +; DCE-CHECK-NEXT: ret void +; + %c = trunc i32 %v to i1 + %ballot = call i32 @llvm.amdgcn.ballot.i32(i1 %c) + %ballot_ne_zero = icmp ne i32 %ballot, 0 + store i1 %ballot_ne_zero, ptr addrspace(1) %out + ret void +} + +define amdgpu_kernel void @ballot_i64(i32 %v, ptr addrspace(1) %out) { +; CURRENT-CHECK-LABEL: define amdgpu_kernel void @ballot_i64( +; CURRENT-CHECK-SAME: i32 [[V:%.*]], ptr addrspace(1) writeonly captures(none) initializes((0, 1)) [[OUT:%.*]]) local_unnamed_addr #[[ATTR1]] { +; CURRENT-CHECK-NEXT: [[C:%.*]] = trunc i32 [[V]] to i1 +; CURRENT-CHECK-NEXT: [[TMP1:%.*]] = tail call i32 @llvm.amdgcn.ballot.i32(i1 [[C]]) +; CURRENT-CHECK-NEXT: [[BALLOT_NE_ZERO:%.*]] = icmp ne i32 [[TMP1]], 0 +; CURRENT-CHECK-NEXT: store i1 [[BALLOT_NE_ZERO]], ptr addrspace(1) [[OUT]], align 1 +; CURRENT-CHECK-NEXT: ret void +; +; PASS-CHECK-LABEL: define amdgpu_kernel void @ballot_i64( +; PASS-CHECK-SAME: i32 [[V:%.*]], ptr addrspace(1) [[OUT:%.*]]) #[[ATTR0]] { +; PASS-CHECK-NEXT: [[C:%.*]] = trunc i32 [[V]] to i1 +; PASS-CHECK-NEXT: store i1 [[C]], ptr addrspace(1) [[OUT]], align 1 +; PASS-CHECK-NEXT: ret void +; +; DCE-CHECK-LABEL: define amdgpu_kernel void @ballot_i64( +; DCE-CHECK-SAME: i32 [[V:%.*]], ptr addrspace(1) [[OUT:%.*]]) #[[ATTR0]] { +; DCE-CHECK-NEXT: [[C:%.*]] = trunc i32 [[V]] to i1 +; DCE-CHECK-NEXT: store i1 [[C]], ptr addrspace(1) [[OUT]], align 1 +; DCE-CHECK-NEXT: ret void +; + %c = trunc i32 %v to i1 + %ballot = call i64 @llvm.amdgcn.ballot.i64(i1 %c) + %ballot_ne_zero = icmp ne i64 %ballot, 0 + store i1 %ballot_ne_zero, ptr addrspace(1) %out + ret void +} + +define amdgpu_kernel void @test_readlane_i16(i16 %src0, i32 %src1) { +; CURRENT-CHECK-LABEL: define amdgpu_kernel void @test_readlane_i16( +; CURRENT-CHECK-SAME: i16 [[SRC0:%.*]], i32 [[SRC1:%.*]]) local_unnamed_addr 
#[[ATTR3:[0-9]+]] { +; CURRENT-CHECK-NEXT: tail call void asm sideeffect " +; CURRENT-CHECK-NEXT: ret void +; +; PASS-CHECK-LABEL: define amdgpu_kernel void @test_readlane_i16( +; PASS-CHECK-SAME: i16 [[SRC0:%.*]], i32 [[SRC1:%.*]]) #[[ATTR0]] { +; PASS-CHECK-NEXT: call void asm sideeffect " +; PASS-CHECK-NEXT: ret void +; +; DCE-CHECK-LABEL: define amdgpu_kernel void @test_readlane_i16( +; DCE-CHECK-SAME: i16 [[SRC0:%.*]], i32 [[SRC1:%.*]]) #[[ATTR0]] { +; DCE-CHECK-NEXT: call void asm sideeffect " +; DCE-CHECK-NEXT: ret void +; + %readlane = call i16 @llvm.amdgcn.readlane.i16(i16 %src0, i32 %src1) + call void asm sideeffect "; use $0", "s"(i16 %readlane) + ret void +} + +define amdgpu_kernel void @test_readlane_i64(i64 %src0, i32 %src1) { +; CURRENT-CHECK-LABEL: define amdgpu_kernel void @test_readlane_i64( +; CURRENT-CHECK-SAME: i64 [[SRC0:%.*]], i32 [[SRC1:%.*]]) local_unnamed_addr #[[ATTR3]] { +; CURRENT-CHECK-NEXT: tail call void asm sideeffect " +; CURRENT-CHECK-NEXT: ret void +; +; PASS-CHECK-LABEL: define amdgpu_kernel void @test_readlane_i64( +; PASS-CHECK-SAME: i64 [[SRC0:%.*]], i32 [[SRC1:%.*]]) #[[ATTR0]] { +; PASS-CHECK-NEXT: call void asm sideeffect " +; PASS-CHECK-NEXT: ret void +; +; DCE-CHECK-LABEL: define amdgpu_kernel void @test_readlane_i64( +; DCE-CHECK-SAME: i64 [[SRC0:%.*]], i32 [[SRC1:%.*]]) #[[ATTR0]] { +; DCE-CHECK-NEXT: call void asm sideeffect " +; DCE-CHECK-NEXT: ret void +; + %readlane = call i64 @llvm.amdgcn.readlane.i64(i64 %src0, i32 %src1) + call void asm sideeffect "; use $0", "s"(i64 %readlane) + ret void +} + +define amdgpu_kernel void @test_readlane_bf16(bfloat %src0, i32 %src1) { +; CURRENT-CHECK-LABEL: define amdgpu_kernel void @test_readlane_bf16( +; CURRENT-CHECK-SAME: bfloat [[SRC0:%.*]], i32 [[SRC1:%.*]]) local_unnamed_addr #[[ATTR3]] { +; CURRENT-CHECK-NEXT: tail call void asm sideeffect " +; CURRENT-CHECK-NEXT: ret void +; +; PASS-CHECK-LABEL: define amdgpu_kernel void @test_readlane_bf16( +; PASS-CHECK-SAME: bfloat [[SRC0:%.*]], i32 [[SRC1:%.*]]) #[[ATTR0]] { +; PASS-CHECK-NEXT: call void asm sideeffect " +; PASS-CHECK-NEXT: ret void +; +; DCE-CHECK-LABEL: define amdgpu_kernel void @test_readlane_bf16( +; DCE-CHECK-SAME: bfloat [[SRC0:%.*]], i32 [[SRC1:%.*]]) #[[ATTR0]] { +; DCE-CHECK-NEXT: call void asm sideeffect " +; DCE-CHECK-NEXT: ret void +; + %readlane = call bfloat @llvm.amdgcn.readlane.bf16(bfloat %src0, i32 %src1) + call void asm sideeffect "; use $0", "s"(bfloat %readlane) + ret void +} + +define amdgpu_kernel void @test_readlane_f16(half %src0, i32 %src1) { +; CURRENT-CHECK-LABEL: define amdgpu_kernel void @test_readlane_f16( +; CURRENT-CHECK-SAME: half [[SRC0:%.*]], i32 [[SRC1:%.*]]) local_unnamed_addr #[[ATTR3]] { +; CURRENT-CHECK-NEXT: tail call void asm sideeffect " +; CURRENT-CHECK-NEXT: ret void +; +; PASS-CHECK-LABEL: define amdgpu_kernel void @test_readlane_f16( +; PASS-CHECK-SAME: half [[SRC0:%.*]], i32 [[SRC1:%.*]]) #[[ATTR0]] { +; PASS-CHECK-NEXT: call void asm sideeffect " +; PASS-CHECK-NEXT: ret void +; +; DCE-CHECK-LABEL: define amdgpu_kernel void @test_readlane_f16( +; DCE-CHECK-SAME: half [[SRC0:%.*]], i32 [[SRC1:%.*]]) #[[ATTR0]] { +; DCE-CHECK-NEXT: call void asm sideeffect " +; DCE-CHECK-NEXT: ret void +; + %readlane = call half @llvm.amdgcn.readlane.f16(half %src0, i32 %src1) + call void asm sideeffect "; use $0", "s"(half %readlane) + ret void +} + +define amdgpu_kernel void @test_readlane_f32(float %src0, i32 %src1) { +; CURRENT-CHECK-LABEL: define amdgpu_kernel void @test_readlane_f32( +; CURRENT-CHECK-SAME: 
float [[SRC0:%.*]], i32 [[SRC1:%.*]]) local_unnamed_addr #[[ATTR3]] {
+; CURRENT-CHECK-NEXT: tail call void asm sideeffect "
+; CURRENT-CHECK-NEXT: ret void
+;
+; PASS-CHECK-LABEL: define amdgpu_kernel void @test_readlane_f32(
+; PASS-CHECK-SAME: float [[SRC0:%.*]], i32 [[SRC1:%.*]]) #[[ATTR0]] {
+; PASS-CHECK-NEXT: call void asm sideeffect "
+; PASS-CHECK-NEXT: ret void
+;
+; DCE-CHECK-LABEL: define amdgpu_kernel void @test_readlane_f32(
+; DCE-CHECK-SAME: float [[SRC0:%.*]], i32 [[SRC1:%.*]]) #[[ATTR0]] {
+; DCE-CHECK-NEXT: call void asm sideeffect "
+; DCE-CHECK-NEXT: ret void
+;
+ %readlane = call float @llvm.amdgcn.readlane.f32(float %src0, i32 %src1)
+ call void asm sideeffect "; use $0", "s"(float %readlane)
+ ret void
+}
+
+define amdgpu_kernel void @test_readlane_f64(double %src0, i32 %src1) {
+; CURRENT-CHECK-LABEL: define amdgpu_kernel void @test_readlane_f64(
+; CURRENT-CHECK-SAME: double [[SRC0:%.*]], i32 [[SRC1:%.*]]) local_unnamed_addr #[[ATTR3]] {
+; CURRENT-CHECK-NEXT: tail call void asm sideeffect "
+; CURRENT-CHECK-NEXT: ret void
+;
+; PASS-CHECK-LABEL: define amdgpu_kernel void @test_readlane_f64(
+; PASS-CHECK-SAME: double [[SRC0:%.*]], i32 [[SRC1:%.*]]) #[[ATTR0]] {
+; PASS-CHECK-NEXT: call void asm sideeffect "
+; PASS-CHECK-NEXT: ret void
+;
+; DCE-CHECK-LABEL: define amdgpu_kernel void @test_readlane_f64(
+; DCE-CHECK-SAME: double [[SRC0:%.*]], i32 [[SRC1:%.*]]) #[[ATTR0]] {
+; DCE-CHECK-NEXT: call void asm sideeffect "
+; DCE-CHECK-NEXT: ret void
+;
+ %readlane = call double @llvm.amdgcn.readlane.f64(double %src0, i32 %src1)
+ call void asm sideeffect "; use $0", "s"(double %readlane)
+ ret void
+}
+; All such cases could be optimized, given a generic way to query getDeclarationIfExists().
+define void @test_readlane_v8i16(ptr addrspace(1) %out, <8 x i16> %src, i32 %src1) {
+; CURRENT-CHECK-LABEL: define void @test_readlane_v8i16(
+; CURRENT-CHECK-SAME: ptr addrspace(1) readnone captures(none) [[OUT:%.*]], <8 x i16> [[SRC:%.*]], i32 [[SRC1:%.*]]) local_unnamed_addr #[[ATTR3]] {
+; CURRENT-CHECK-NEXT: [[X:%.*]] = tail call <8 x i16> @llvm.amdgcn.readlane.v8i16(<8 x i16> [[SRC]], i32 [[SRC1]])
+; CURRENT-CHECK-NEXT: tail call void asm sideeffect "
+; CURRENT-CHECK-NEXT: ret void
+;
+; PASS-CHECK-LABEL: define void @test_readlane_v8i16(
+; PASS-CHECK-SAME: ptr addrspace(1) [[OUT:%.*]], <8 x i16> [[SRC:%.*]], i32 [[SRC1:%.*]]) #[[ATTR0]] {
+; PASS-CHECK-NEXT: [[X:%.*]] = call <8 x i16> @llvm.amdgcn.readlane.v8i16(<8 x i16> [[SRC]], i32 [[SRC1]])
+; PASS-CHECK-NEXT: call void asm sideeffect "
+; PASS-CHECK-NEXT: ret void
+;
+; DCE-CHECK-LABEL: define void @test_readlane_v8i16(
+; DCE-CHECK-SAME: ptr addrspace(1) [[OUT:%.*]], <8 x i16> [[SRC:%.*]], i32 [[SRC1:%.*]]) #[[ATTR0]] {
+; DCE-CHECK-NEXT: [[X:%.*]] = call <8 x i16> @llvm.amdgcn.readlane.v8i16(<8 x i16> [[SRC]], i32 [[SRC1]])
+; DCE-CHECK-NEXT: call void asm sideeffect "
+; DCE-CHECK-NEXT: ret void
+;
+ %x = call <8 x i16> @llvm.amdgcn.readlane.v8i16(<8 x i16> %src, i32 %src1)
+ call void asm sideeffect "; use $0", "s"(<8 x i16> %x)
+ ret void
+}
diff --git a/llvm/test/CodeGen/AMDGPU/amdgpu-uniform-temporal-divergence.ll b/llvm/test/CodeGen/AMDGPU/amdgpu-uniform-temporal-divergence.ll
new file mode 100644
index 0000000..2fde3e3
--- /dev/null
+++ b/llvm/test/CodeGen/AMDGPU/amdgpu-uniform-temporal-divergence.ll
@@ -0,0 +1,57 @@
+; NOTE: Assertions have been autogenerated by utils/update_test_checks.py UTC_ARGS: --version 5
+; RUN: opt -mtriple=amdgcn-amd-amdhsa -mcpu=gfx1010 -passes=amdgpu-uniform-intrinsic-combine -S 
< %s | FileCheck %s -check-prefix=PASS-CHECK +; RUN: opt -mtriple=amdgcn-amd-amdhsa -mcpu=gfx1010 -passes=amdgpu-uniform-intrinsic-combine,instcombine,early-cse,simplifycfg -S < %s | FileCheck %s -check-prefix=COMB-CHECK + +; This should not be optimized: the loop exit is divergent, so %uni.inc is temporally divergent at the readfirstlane in %X. +define amdgpu_cs void @temporal_divergence(ptr addrspace(1) %out, i32 %n) { +; PASS-CHECK-LABEL: define amdgpu_cs void @temporal_divergence( +; PASS-CHECK-SAME: ptr addrspace(1) [[OUT:%.*]], i32 [[N:%.*]]) #[[ATTR0:[0-9]+]] { +; PASS-CHECK-NEXT: [[ENTRY:.*]]: +; PASS-CHECK-NEXT: [[TID:%.*]] = call i32 @llvm.amdgcn.workitem.id.x() +; PASS-CHECK-NEXT: br label %[[H:.*]] +; PASS-CHECK: [[H]]: +; PASS-CHECK-NEXT: [[UNI_MERGE_H:%.*]] = phi i32 [ 0, %[[ENTRY]] ], [ [[UNI_INC:%.*]], %[[H]] ] +; PASS-CHECK-NEXT: [[UNI_INC]] = add i32 [[UNI_MERGE_H]], 1 +; PASS-CHECK-NEXT: [[DIV_EXITX:%.*]] = icmp eq i32 [[TID]], 0 +; PASS-CHECK-NEXT: br i1 [[DIV_EXITX]], label %[[X:.*]], label %[[H]] +; PASS-CHECK: [[X]]: +; PASS-CHECK-NEXT: [[UNI_JOIN:%.*]] = call i32 @llvm.amdgcn.readfirstlane.i32(i32 [[UNI_INC]]) +; PASS-CHECK-NEXT: [[JOIN_USER:%.*]] = add i32 [[UNI_JOIN]], 5 +; PASS-CHECK-NEXT: store i32 [[JOIN_USER]], ptr addrspace(1) [[OUT]], align 4 +; PASS-CHECK-NEXT: ret void +; +; COMB-CHECK-LABEL: define amdgpu_cs void @temporal_divergence( +; COMB-CHECK-SAME: ptr addrspace(1) [[OUT:%.*]], i32 [[N:%.*]]) #[[ATTR0:[0-9]+]] { +; COMB-CHECK-NEXT: [[ENTRY:.*]]: +; COMB-CHECK-NEXT: [[TID:%.*]] = call i32 @llvm.amdgcn.workitem.id.x() +; COMB-CHECK-NEXT: br label %[[H:.*]] +; COMB-CHECK: [[H]]: +; COMB-CHECK-NEXT: [[UNI_MERGE_H:%.*]] = phi i32 [ 0, %[[ENTRY]] ], [ [[UNI_INC:%.*]], %[[H]] ] +; COMB-CHECK-NEXT: [[UNI_INC]] = add i32 [[UNI_MERGE_H]], 1 +; COMB-CHECK-NEXT: [[DIV_EXITX:%.*]] = icmp eq i32 [[TID]], 0 +; COMB-CHECK-NEXT: br i1 [[DIV_EXITX]], label %[[X:.*]], label %[[H]] +; COMB-CHECK: [[X]]: +; COMB-CHECK-NEXT: [[UNI_JOIN:%.*]] = call i32 @llvm.amdgcn.readfirstlane.i32(i32 [[UNI_INC]]) +; COMB-CHECK-NEXT: [[JOIN_USER:%.*]] = add i32 [[UNI_JOIN]], 5 +; COMB-CHECK-NEXT: store i32 [[JOIN_USER]], ptr addrspace(1) [[OUT]], align 4 +; COMB-CHECK-NEXT: ret void +; +entry: + %tid = call i32 @llvm.amdgcn.workitem.id.x() + br label %H + +H: + %uni.merge.h = phi i32 [ 0, %entry ], [ %uni.inc, %H ] + %uni.inc = add i32 %uni.merge.h, 1 + %div.exitx = icmp eq i32 %tid, 0 + br i1 %div.exitx, label %X, label %H ; divergent branch + +X: + %uni.join = call i32 @llvm.amdgcn.readfirstlane.i32(i32 %uni.inc) + %join.user = add i32 %uni.join, 5 + store i32 %join.user, ptr addrspace(1) %out + ret void +} + +declare i32 @llvm.amdgcn.workitem.id.x() +declare i32 @llvm.amdgcn.readfirstlane.i32(i32) diff --git a/llvm/test/CodeGen/AMDGPU/carryout-selection.ll b/llvm/test/CodeGen/AMDGPU/carryout-selection.ll index 2ae6fc2..4a6fa4f 100644 --- a/llvm/test/CodeGen/AMDGPU/carryout-selection.ll +++ b/llvm/test/CodeGen/AMDGPU/carryout-selection.ll @@ -691,7 +691,8 @@ define amdgpu_kernel void @uaddo32_vcc_user(ptr addrspace(1) %out, ptr addrspace ; GCN-ISEL-LABEL: name: suaddo64 ; GCN-ISEL-LABEL: body: ; GCN-ISEL-LABEL: bb.0 -; GCN-ISEL: S_ADD_U64_PSEUDO +; GCN-ISEL: S_UADDO_PSEUDO +; GCN-ISEL: S_ADD_CO_PSEUDO define amdgpu_kernel void @suaddo64(ptr addrspace(1) %out, ptr addrspace(1) %carryout, i64 %a, i64 %b) #0 { ; CISI-LABEL: suaddo64: @@ -700,21 +701,23 @@ define amdgpu_kernel void @suaddo64(ptr addrspace(1) %out, ptr addrspace(1) %car ; CISI-NEXT: s_mov_b32 s11, 0xf000 ; CISI-NEXT: s_mov_b32 s10, -1 ; CISI-NEXT: s_waitcnt lgkmcnt(0) -; CISI-NEXT: s_add_u32 s6, s4, s6 -;
CISI-NEXT: v_mov_b32_e32 v0, s4 -; CISI-NEXT: s_addc_u32 s7, s5, s7 -; CISI-NEXT: v_mov_b32_e32 v1, s5 -; CISI-NEXT: v_cmp_lt_u64_e32 vcc, s[6:7], v[0:1] -; CISI-NEXT: v_mov_b32_e32 v2, s6 +; CISI-NEXT: s_add_u32 s4, s4, s6 +; CISI-NEXT: s_cselect_b64 s[12:13], -1, 0 +; CISI-NEXT: s_or_b32 s6, s12, s13 +; CISI-NEXT: s_cmp_lg_u32 s6, 0 +; CISI-NEXT: s_addc_u32 s5, s5, s7 ; CISI-NEXT: s_mov_b32 s8, s0 ; CISI-NEXT: s_mov_b32 s9, s1 +; CISI-NEXT: v_mov_b32_e32 v0, s4 +; CISI-NEXT: v_mov_b32_e32 v1, s5 +; CISI-NEXT: s_cselect_b64 s[4:5], -1, 0 ; CISI-NEXT: s_mov_b32 s0, s2 ; CISI-NEXT: s_mov_b32 s1, s3 ; CISI-NEXT: s_mov_b32 s2, s10 ; CISI-NEXT: s_mov_b32 s3, s11 -; CISI-NEXT: v_mov_b32_e32 v3, s7 -; CISI-NEXT: v_cndmask_b32_e64 v0, 0, 1, vcc -; CISI-NEXT: buffer_store_dwordx2 v[2:3], off, s[8:11], 0 +; CISI-NEXT: buffer_store_dwordx2 v[0:1], off, s[8:11], 0 +; CISI-NEXT: s_waitcnt expcnt(0) +; CISI-NEXT: v_cndmask_b32_e64 v0, 0, 1, s[4:5] ; CISI-NEXT: buffer_store_byte v0, off, s[0:3], 0 ; CISI-NEXT: s_endpgm ; @@ -722,37 +725,37 @@ define amdgpu_kernel void @suaddo64(ptr addrspace(1) %out, ptr addrspace(1) %car ; VI: ; %bb.0: ; VI-NEXT: s_load_dwordx8 s[0:7], s[4:5], 0x24 ; VI-NEXT: s_waitcnt lgkmcnt(0) +; VI-NEXT: v_mov_b32_e32 v2, s2 +; VI-NEXT: s_add_u32 s2, s4, s6 ; VI-NEXT: v_mov_b32_e32 v0, s0 -; VI-NEXT: s_add_u32 s0, s4, s6 -; VI-NEXT: v_mov_b32_e32 v4, s4 ; VI-NEXT: v_mov_b32_e32 v1, s1 -; VI-NEXT: s_addc_u32 s1, s5, s7 -; VI-NEXT: v_mov_b32_e32 v5, s5 -; VI-NEXT: v_mov_b32_e32 v7, s1 -; VI-NEXT: v_cmp_lt_u64_e32 vcc, s[0:1], v[4:5] -; VI-NEXT: v_mov_b32_e32 v6, s0 -; VI-NEXT: v_mov_b32_e32 v2, s2 +; VI-NEXT: s_cselect_b64 s[0:1], -1, 0 +; VI-NEXT: s_cmp_lg_u64 s[0:1], 0 +; VI-NEXT: s_addc_u32 s0, s5, s7 +; VI-NEXT: v_mov_b32_e32 v4, s2 +; VI-NEXT: v_mov_b32_e32 v5, s0 +; VI-NEXT: s_cselect_b64 s[0:1], -1, 0 ; VI-NEXT: v_mov_b32_e32 v3, s3 -; VI-NEXT: flat_store_dwordx2 v[0:1], v[6:7] -; VI-NEXT: v_cndmask_b32_e64 v0, 0, 1, vcc +; VI-NEXT: flat_store_dwordx2 v[0:1], v[4:5] +; VI-NEXT: v_cndmask_b32_e64 v0, 0, 1, s[0:1] ; VI-NEXT: flat_store_byte v[2:3], v0 ; VI-NEXT: s_endpgm ; ; GFX9-LABEL: suaddo64: ; GFX9: ; %bb.0: ; GFX9-NEXT: s_load_dwordx8 s[8:15], s[4:5], 0x24 -; GFX9-NEXT: v_mov_b32_e32 v4, 0 +; GFX9-NEXT: v_mov_b32_e32 v2, 0 ; GFX9-NEXT: s_waitcnt lgkmcnt(0) -; GFX9-NEXT: s_add_u32 s0, s12, s14 -; GFX9-NEXT: v_mov_b32_e32 v0, s12 -; GFX9-NEXT: v_mov_b32_e32 v1, s13 -; GFX9-NEXT: s_addc_u32 s1, s13, s15 -; GFX9-NEXT: v_mov_b32_e32 v3, s1 -; GFX9-NEXT: v_cmp_lt_u64_e32 vcc, s[0:1], v[0:1] -; GFX9-NEXT: v_mov_b32_e32 v2, s0 -; GFX9-NEXT: v_cndmask_b32_e64 v0, 0, 1, vcc -; GFX9-NEXT: global_store_dwordx2 v4, v[2:3], s[8:9] -; GFX9-NEXT: global_store_byte v4, v0, s[10:11] +; GFX9-NEXT: s_add_u32 s2, s12, s14 +; GFX9-NEXT: s_cselect_b64 s[0:1], -1, 0 +; GFX9-NEXT: s_cmp_lg_u64 s[0:1], 0 +; GFX9-NEXT: s_addc_u32 s0, s13, s15 +; GFX9-NEXT: v_mov_b32_e32 v0, s2 +; GFX9-NEXT: v_mov_b32_e32 v1, s0 +; GFX9-NEXT: s_cselect_b64 s[0:1], -1, 0 +; GFX9-NEXT: v_cndmask_b32_e64 v3, 0, 1, s[0:1] +; GFX9-NEXT: global_store_dwordx2 v2, v[0:1], s[8:9] +; GFX9-NEXT: global_store_byte v2, v3, s[10:11] ; GFX9-NEXT: s_endpgm ; ; GFX1010-LABEL: suaddo64: @@ -761,10 +764,12 @@ define amdgpu_kernel void @suaddo64(ptr addrspace(1) %out, ptr addrspace(1) %car ; GFX1010-NEXT: v_mov_b32_e32 v2, 0 ; GFX1010-NEXT: s_waitcnt lgkmcnt(0) ; GFX1010-NEXT: s_add_u32 s0, s12, s14 -; GFX1010-NEXT: s_addc_u32 s1, s13, s15 +; GFX1010-NEXT: s_cselect_b32 s1, -1, 0 ; GFX1010-NEXT: v_mov_b32_e32 v0, s0 +; GFX1010-NEXT: 
s_cmp_lg_u32 s1, 0 +; GFX1010-NEXT: s_addc_u32 s1, s13, s15 +; GFX1010-NEXT: s_cselect_b32 s0, -1, 0 ; GFX1010-NEXT: v_mov_b32_e32 v1, s1 -; GFX1010-NEXT: v_cmp_lt_u64_e64 s0, s[0:1], s[12:13] ; GFX1010-NEXT: v_cndmask_b32_e64 v3, 0, 1, s0 ; GFX1010-NEXT: global_store_dwordx2 v2, v[0:1], s[8:9] ; GFX1010-NEXT: global_store_byte v2, v3, s[10:11] @@ -775,11 +780,13 @@ define amdgpu_kernel void @suaddo64(ptr addrspace(1) %out, ptr addrspace(1) %car ; GFX1030W32-NEXT: s_load_dwordx8 s[0:7], s[4:5], 0x24 ; GFX1030W32-NEXT: v_mov_b32_e32 v2, 0 ; GFX1030W32-NEXT: s_waitcnt lgkmcnt(0) -; GFX1030W32-NEXT: s_add_u32 s6, s4, s6 -; GFX1030W32-NEXT: s_addc_u32 s7, s5, s7 -; GFX1030W32-NEXT: v_mov_b32_e32 v0, s6 -; GFX1030W32-NEXT: v_cmp_lt_u64_e64 s4, s[6:7], s[4:5] -; GFX1030W32-NEXT: v_mov_b32_e32 v1, s7 +; GFX1030W32-NEXT: s_add_u32 s4, s4, s6 +; GFX1030W32-NEXT: s_cselect_b32 s6, -1, 0 +; GFX1030W32-NEXT: v_mov_b32_e32 v0, s4 +; GFX1030W32-NEXT: s_cmp_lg_u32 s6, 0 +; GFX1030W32-NEXT: s_addc_u32 s5, s5, s7 +; GFX1030W32-NEXT: s_cselect_b32 s4, -1, 0 +; GFX1030W32-NEXT: v_mov_b32_e32 v1, s5 ; GFX1030W32-NEXT: v_cndmask_b32_e64 v3, 0, 1, s4 ; GFX1030W32-NEXT: global_store_dwordx2 v2, v[0:1], s[0:1] ; GFX1030W32-NEXT: global_store_byte v2, v3, s[2:3] @@ -790,11 +797,13 @@ define amdgpu_kernel void @suaddo64(ptr addrspace(1) %out, ptr addrspace(1) %car ; GFX1030W64-NEXT: s_load_dwordx8 s[0:7], s[4:5], 0x24 ; GFX1030W64-NEXT: v_mov_b32_e32 v2, 0 ; GFX1030W64-NEXT: s_waitcnt lgkmcnt(0) -; GFX1030W64-NEXT: s_add_u32 s6, s4, s6 -; GFX1030W64-NEXT: s_addc_u32 s7, s5, s7 -; GFX1030W64-NEXT: v_mov_b32_e32 v0, s6 -; GFX1030W64-NEXT: v_cmp_lt_u64_e64 s[4:5], s[6:7], s[4:5] -; GFX1030W64-NEXT: v_mov_b32_e32 v1, s7 +; GFX1030W64-NEXT: s_add_u32 s4, s4, s6 +; GFX1030W64-NEXT: s_cselect_b64 s[8:9], -1, 0 +; GFX1030W64-NEXT: v_mov_b32_e32 v0, s4 +; GFX1030W64-NEXT: s_cmp_lg_u64 s[8:9], 0 +; GFX1030W64-NEXT: s_addc_u32 s5, s5, s7 +; GFX1030W64-NEXT: v_mov_b32_e32 v1, s5 +; GFX1030W64-NEXT: s_cselect_b64 s[4:5], -1, 0 ; GFX1030W64-NEXT: v_cndmask_b32_e64 v3, 0, 1, s[4:5] ; GFX1030W64-NEXT: global_store_dwordx2 v2, v[0:1], s[0:1] ; GFX1030W64-NEXT: global_store_byte v2, v3, s[2:3] @@ -804,12 +813,13 @@ define amdgpu_kernel void @suaddo64(ptr addrspace(1) %out, ptr addrspace(1) %car ; GFX11: ; %bb.0: ; GFX11-NEXT: s_load_b256 s[0:7], s[4:5], 0x24 ; GFX11-NEXT: s_waitcnt lgkmcnt(0) -; GFX11-NEXT: s_add_u32 s6, s4, s6 -; GFX11-NEXT: s_addc_u32 s7, s5, s7 -; GFX11-NEXT: v_mov_b32_e32 v0, s6 -; GFX11-NEXT: v_cmp_lt_u64_e64 s4, s[6:7], s[4:5] -; GFX11-NEXT: v_dual_mov_b32 v2, 0 :: v_dual_mov_b32 v1, s7 -; GFX11-NEXT: s_delay_alu instid0(VALU_DEP_2) +; GFX11-NEXT: s_add_u32 s4, s4, s6 +; GFX11-NEXT: s_cselect_b32 s6, -1, 0 +; GFX11-NEXT: v_mov_b32_e32 v0, s4 +; GFX11-NEXT: s_cmp_lg_u32 s6, 0 +; GFX11-NEXT: s_addc_u32 s5, s5, s7 +; GFX11-NEXT: s_cselect_b32 s4, -1, 0 +; GFX11-NEXT: v_dual_mov_b32 v2, 0 :: v_dual_mov_b32 v1, s5 ; GFX11-NEXT: v_cndmask_b32_e64 v3, 0, 1, s4 ; GFX11-NEXT: s_clause 0x1 ; GFX11-NEXT: global_store_b64 v2, v[0:1], s[0:1] @@ -819,12 +829,14 @@ define amdgpu_kernel void @suaddo64(ptr addrspace(1) %out, ptr addrspace(1) %car ; GFX1250-LABEL: suaddo64: ; GFX1250: ; %bb.0: ; GFX1250-NEXT: s_load_b256 s[8:15], s[4:5], 0x24 -; GFX1250-NEXT: v_mov_b32_e32 v2, 0 ; GFX1250-NEXT: s_wait_kmcnt 0x0 -; GFX1250-NEXT: s_add_nc_u64 s[0:1], s[12:13], s[14:15] -; GFX1250-NEXT: s_delay_alu instid0(SALU_CYCLE_1) | instskip(SKIP_1) | instid1(VALU_DEP_1) -; GFX1250-NEXT: v_mov_b64_e32 v[0:1], s[0:1] -; GFX1250-NEXT: 
v_cmp_lt_u64_e64 s0, s[0:1], s[12:13] +; GFX1250-NEXT: s_add_co_u32 s0, s12, s14 +; GFX1250-NEXT: s_cselect_b32 s1, -1, 0 +; GFX1250-NEXT: v_dual_mov_b32 v2, 0 :: v_dual_mov_b32 v0, s0 +; GFX1250-NEXT: s_cmp_lg_u32 s1, 0 +; GFX1250-NEXT: s_add_co_ci_u32 s1, s13, s15 +; GFX1250-NEXT: s_cselect_b32 s0, -1, 0 +; GFX1250-NEXT: v_mov_b32_e32 v1, s1 ; GFX1250-NEXT: v_cndmask_b32_e64 v3, 0, 1, s0 ; GFX1250-NEXT: s_clause 0x1 ; GFX1250-NEXT: global_store_b64 v2, v[0:1], s[8:9] @@ -841,7 +853,8 @@ define amdgpu_kernel void @suaddo64(ptr addrspace(1) %out, ptr addrspace(1) %car ; GCN-ISEL-LABEL: name: vuaddo64 ; GCN-ISEL-LABEL: body: ; GCN-ISEL-LABEL: bb.0 -; GCN-ISEL: V_ADD_U64_PSEUDO +; GCN-ISEL: V_ADD_CO_U32_e64 +; GCN-ISEL: V_ADDC_U32_e64 define amdgpu_kernel void @vuaddo64(ptr addrspace(1) %out, ptr addrspace(1) %carryout, i64 %a) #0 { ; CISI-LABEL: vuaddo64: @@ -854,9 +867,8 @@ define amdgpu_kernel void @vuaddo64(ptr addrspace(1) %out, ptr addrspace(1) %car ; CISI-NEXT: s_mov_b32 s4, s0 ; CISI-NEXT: v_mov_b32_e32 v1, s9 ; CISI-NEXT: v_add_i32_e32 v0, vcc, s8, v0 -; CISI-NEXT: v_addc_u32_e32 v1, vcc, 0, v1, vcc -; CISI-NEXT: v_cmp_gt_u64_e32 vcc, s[8:9], v[0:1] ; CISI-NEXT: s_mov_b32 s5, s1 +; CISI-NEXT: v_addc_u32_e32 v1, vcc, 0, v1, vcc ; CISI-NEXT: s_mov_b32 s0, s2 ; CISI-NEXT: s_mov_b32 s1, s3 ; CISI-NEXT: s_mov_b32 s2, s6 @@ -876,7 +888,6 @@ define amdgpu_kernel void @vuaddo64(ptr addrspace(1) %out, ptr addrspace(1) %car ; VI-NEXT: v_mov_b32_e32 v6, s5 ; VI-NEXT: v_add_u32_e32 v5, vcc, s4, v0 ; VI-NEXT: v_addc_u32_e32 v6, vcc, 0, v6, vcc -; VI-NEXT: v_cmp_gt_u64_e32 vcc, s[4:5], v[5:6] ; VI-NEXT: v_mov_b32_e32 v2, s1 ; VI-NEXT: v_mov_b32_e32 v3, s2 ; VI-NEXT: v_mov_b32_e32 v4, s3 @@ -894,7 +905,6 @@ define amdgpu_kernel void @vuaddo64(ptr addrspace(1) %out, ptr addrspace(1) %car ; GFX9-NEXT: v_mov_b32_e32 v1, s7 ; GFX9-NEXT: v_add_co_u32_e32 v0, vcc, s6, v0 ; GFX9-NEXT: v_addc_co_u32_e32 v1, vcc, 0, v1, vcc -; GFX9-NEXT: v_cmp_gt_u64_e32 vcc, s[6:7], v[0:1] ; GFX9-NEXT: global_store_dwordx2 v2, v[0:1], s[0:1] ; GFX9-NEXT: v_cndmask_b32_e64 v0, 0, 1, vcc ; GFX9-NEXT: global_store_byte v2, v0, s[2:3] @@ -909,8 +919,7 @@ define amdgpu_kernel void @vuaddo64(ptr addrspace(1) %out, ptr addrspace(1) %car ; GFX1010-NEXT: s_waitcnt lgkmcnt(0) ; GFX1010-NEXT: v_add_co_u32 v0, s4, s6, v0 ; GFX1010-NEXT: v_add_co_ci_u32_e64 v1, s4, s7, 0, s4 -; GFX1010-NEXT: v_cmp_gt_u64_e32 vcc_lo, s[6:7], v[0:1] -; GFX1010-NEXT: v_cndmask_b32_e64 v3, 0, 1, vcc_lo +; GFX1010-NEXT: v_cndmask_b32_e64 v3, 0, 1, s4 ; GFX1010-NEXT: global_store_dwordx2 v2, v[0:1], s[0:1] ; GFX1010-NEXT: global_store_byte v2, v3, s[2:3] ; GFX1010-NEXT: s_endpgm @@ -923,9 +932,8 @@ define amdgpu_kernel void @vuaddo64(ptr addrspace(1) %out, ptr addrspace(1) %car ; GFX1030W32-NEXT: v_mov_b32_e32 v2, 0 ; GFX1030W32-NEXT: s_waitcnt lgkmcnt(0) ; GFX1030W32-NEXT: v_add_co_u32 v0, s4, s6, v0 -; GFX1030W32-NEXT: v_add_co_ci_u32_e64 v1, null, s7, 0, s4 -; GFX1030W32-NEXT: v_cmp_gt_u64_e32 vcc_lo, s[6:7], v[0:1] -; GFX1030W32-NEXT: v_cndmask_b32_e64 v3, 0, 1, vcc_lo +; GFX1030W32-NEXT: v_add_co_ci_u32_e64 v1, s4, s7, 0, s4 +; GFX1030W32-NEXT: v_cndmask_b32_e64 v3, 0, 1, s4 ; GFX1030W32-NEXT: global_store_dwordx2 v2, v[0:1], s[0:1] ; GFX1030W32-NEXT: global_store_byte v2, v3, s[2:3] ; GFX1030W32-NEXT: s_endpgm @@ -938,9 +946,8 @@ define amdgpu_kernel void @vuaddo64(ptr addrspace(1) %out, ptr addrspace(1) %car ; GFX1030W64-NEXT: v_mov_b32_e32 v2, 0 ; GFX1030W64-NEXT: s_waitcnt lgkmcnt(0) ; GFX1030W64-NEXT: v_add_co_u32 v0, s[4:5], s6, v0 -; 
GFX1030W64-NEXT: v_add_co_ci_u32_e64 v1, null, s7, 0, s[4:5] -; GFX1030W64-NEXT: v_cmp_gt_u64_e32 vcc, s[6:7], v[0:1] -; GFX1030W64-NEXT: v_cndmask_b32_e64 v3, 0, 1, vcc +; GFX1030W64-NEXT: v_add_co_ci_u32_e64 v1, s[4:5], s7, 0, s[4:5] +; GFX1030W64-NEXT: v_cndmask_b32_e64 v3, 0, 1, s[4:5] ; GFX1030W64-NEXT: global_store_dwordx2 v2, v[0:1], s[0:1] ; GFX1030W64-NEXT: global_store_byte v2, v3, s[2:3] ; GFX1030W64-NEXT: s_endpgm @@ -955,10 +962,9 @@ define amdgpu_kernel void @vuaddo64(ptr addrspace(1) %out, ptr addrspace(1) %car ; GFX11-NEXT: s_waitcnt lgkmcnt(0) ; GFX11-NEXT: s_delay_alu instid0(VALU_DEP_2) | instskip(NEXT) | instid1(VALU_DEP_1) ; GFX11-NEXT: v_add_co_u32 v0, s4, s6, v0 -; GFX11-NEXT: v_add_co_ci_u32_e64 v1, null, s7, 0, s4 +; GFX11-NEXT: v_add_co_ci_u32_e64 v1, s4, s7, 0, s4 ; GFX11-NEXT: s_delay_alu instid0(VALU_DEP_1) -; GFX11-NEXT: v_cmp_gt_u64_e32 vcc_lo, s[6:7], v[0:1] -; GFX11-NEXT: v_cndmask_b32_e64 v3, 0, 1, vcc_lo +; GFX11-NEXT: v_cndmask_b32_e64 v3, 0, 1, s4 ; GFX11-NEXT: s_clause 0x1 ; GFX11-NEXT: global_store_b64 v2, v[0:1], s[0:1] ; GFX11-NEXT: global_store_b8 v2, v3, s[2:3] @@ -969,16 +975,17 @@ define amdgpu_kernel void @vuaddo64(ptr addrspace(1) %out, ptr addrspace(1) %car ; GFX1250-NEXT: s_clause 0x1 ; GFX1250-NEXT: s_load_b64 s[6:7], s[4:5], 0x34 ; GFX1250-NEXT: s_load_b128 s[0:3], s[4:5], 0x24 -; GFX1250-NEXT: v_mov_b32_e32 v1, 0 ; GFX1250-NEXT: v_and_b32_e32 v0, 0x3ff, v0 +; GFX1250-NEXT: v_mov_b32_e32 v2, 0 ; GFX1250-NEXT: s_wait_kmcnt 0x0 -; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1) -; GFX1250-NEXT: v_add_nc_u64_e32 v[2:3], s[6:7], v[0:1] -; GFX1250-NEXT: v_cmp_gt_u64_e32 vcc_lo, s[6:7], v[2:3] -; GFX1250-NEXT: v_cndmask_b32_e64 v0, 0, 1, vcc_lo +; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_2) | instskip(NEXT) | instid1(VALU_DEP_1) +; GFX1250-NEXT: v_add_co_u32 v0, s4, s6, v0 +; GFX1250-NEXT: v_add_co_ci_u32_e64 v1, s4, s7, 0, s4 +; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1) +; GFX1250-NEXT: v_cndmask_b32_e64 v3, 0, 1, s4 ; GFX1250-NEXT: s_clause 0x1 -; GFX1250-NEXT: global_store_b64 v1, v[2:3], s[0:1] -; GFX1250-NEXT: global_store_b8 v1, v0, s[2:3] +; GFX1250-NEXT: global_store_b64 v2, v[0:1], s[0:1] +; GFX1250-NEXT: global_store_b8 v2, v3, s[2:3] ; GFX1250-NEXT: s_endpgm %tid = call i32 @llvm.amdgcn.workitem.id.x() %tid.ext = sext i32 %tid to i64 @@ -1671,7 +1678,8 @@ define amdgpu_kernel void @usubo32_vcc_user(ptr addrspace(1) %out, ptr addrspace ; GCN-ISEL-LABEL: name: susubo64 ; GCN-ISEL-LABEL: body: ; GCN-ISEL-LABEL: bb.0 -; GCN-ISEL: S_SUB_U64_PSEUDO +; GCN-ISEL: S_USUBO_PSEUDO +; GCN-ISEL: S_SUB_CO_PSEUDO define amdgpu_kernel void @susubo64(ptr addrspace(1) %out, ptr addrspace(1) %carryout, i64 %a, i64 %b) #0 { ; CISI-LABEL: susubo64: @@ -1680,21 +1688,23 @@ define amdgpu_kernel void @susubo64(ptr addrspace(1) %out, ptr addrspace(1) %car ; CISI-NEXT: s_mov_b32 s11, 0xf000 ; CISI-NEXT: s_mov_b32 s10, -1 ; CISI-NEXT: s_waitcnt lgkmcnt(0) -; CISI-NEXT: s_sub_u32 s6, s4, s6 -; CISI-NEXT: v_mov_b32_e32 v0, s4 -; CISI-NEXT: s_subb_u32 s7, s5, s7 -; CISI-NEXT: v_mov_b32_e32 v1, s5 -; CISI-NEXT: v_cmp_gt_u64_e32 vcc, s[6:7], v[0:1] -; CISI-NEXT: v_mov_b32_e32 v2, s6 +; CISI-NEXT: s_sub_u32 s4, s4, s6 +; CISI-NEXT: s_cselect_b64 s[12:13], -1, 0 +; CISI-NEXT: s_or_b32 s6, s12, s13 +; CISI-NEXT: s_cmp_lg_u32 s6, 0 +; CISI-NEXT: s_subb_u32 s5, s5, s7 ; CISI-NEXT: s_mov_b32 s8, s0 ; CISI-NEXT: s_mov_b32 s9, s1 +; CISI-NEXT: v_mov_b32_e32 v0, s4 +; CISI-NEXT: v_mov_b32_e32 v1, s5 +; CISI-NEXT: s_cselect_b64 
s[4:5], -1, 0 ; CISI-NEXT: s_mov_b32 s0, s2 ; CISI-NEXT: s_mov_b32 s1, s3 ; CISI-NEXT: s_mov_b32 s2, s10 ; CISI-NEXT: s_mov_b32 s3, s11 -; CISI-NEXT: v_mov_b32_e32 v3, s7 -; CISI-NEXT: v_cndmask_b32_e64 v0, 0, 1, vcc -; CISI-NEXT: buffer_store_dwordx2 v[2:3], off, s[8:11], 0 +; CISI-NEXT: buffer_store_dwordx2 v[0:1], off, s[8:11], 0 +; CISI-NEXT: s_waitcnt expcnt(0) +; CISI-NEXT: v_cndmask_b32_e64 v0, 0, 1, s[4:5] ; CISI-NEXT: buffer_store_byte v0, off, s[0:3], 0 ; CISI-NEXT: s_endpgm ; @@ -1702,37 +1712,37 @@ define amdgpu_kernel void @susubo64(ptr addrspace(1) %out, ptr addrspace(1) %car ; VI: ; %bb.0: ; VI-NEXT: s_load_dwordx8 s[0:7], s[4:5], 0x24 ; VI-NEXT: s_waitcnt lgkmcnt(0) +; VI-NEXT: v_mov_b32_e32 v2, s2 +; VI-NEXT: s_sub_u32 s2, s4, s6 ; VI-NEXT: v_mov_b32_e32 v0, s0 -; VI-NEXT: s_sub_u32 s0, s4, s6 -; VI-NEXT: v_mov_b32_e32 v4, s4 ; VI-NEXT: v_mov_b32_e32 v1, s1 -; VI-NEXT: s_subb_u32 s1, s5, s7 -; VI-NEXT: v_mov_b32_e32 v5, s5 -; VI-NEXT: v_mov_b32_e32 v7, s1 -; VI-NEXT: v_cmp_gt_u64_e32 vcc, s[0:1], v[4:5] -; VI-NEXT: v_mov_b32_e32 v6, s0 -; VI-NEXT: v_mov_b32_e32 v2, s2 +; VI-NEXT: s_cselect_b64 s[0:1], -1, 0 +; VI-NEXT: s_cmp_lg_u64 s[0:1], 0 +; VI-NEXT: s_subb_u32 s0, s5, s7 +; VI-NEXT: v_mov_b32_e32 v4, s2 +; VI-NEXT: v_mov_b32_e32 v5, s0 +; VI-NEXT: s_cselect_b64 s[0:1], -1, 0 ; VI-NEXT: v_mov_b32_e32 v3, s3 -; VI-NEXT: flat_store_dwordx2 v[0:1], v[6:7] -; VI-NEXT: v_cndmask_b32_e64 v0, 0, 1, vcc +; VI-NEXT: flat_store_dwordx2 v[0:1], v[4:5] +; VI-NEXT: v_cndmask_b32_e64 v0, 0, 1, s[0:1] ; VI-NEXT: flat_store_byte v[2:3], v0 ; VI-NEXT: s_endpgm ; ; GFX9-LABEL: susubo64: ; GFX9: ; %bb.0: ; GFX9-NEXT: s_load_dwordx8 s[8:15], s[4:5], 0x24 -; GFX9-NEXT: v_mov_b32_e32 v4, 0 +; GFX9-NEXT: v_mov_b32_e32 v2, 0 ; GFX9-NEXT: s_waitcnt lgkmcnt(0) -; GFX9-NEXT: s_sub_u32 s0, s12, s14 -; GFX9-NEXT: v_mov_b32_e32 v0, s12 -; GFX9-NEXT: v_mov_b32_e32 v1, s13 -; GFX9-NEXT: s_subb_u32 s1, s13, s15 -; GFX9-NEXT: v_mov_b32_e32 v3, s1 -; GFX9-NEXT: v_cmp_gt_u64_e32 vcc, s[0:1], v[0:1] -; GFX9-NEXT: v_mov_b32_e32 v2, s0 -; GFX9-NEXT: v_cndmask_b32_e64 v0, 0, 1, vcc -; GFX9-NEXT: global_store_dwordx2 v4, v[2:3], s[8:9] -; GFX9-NEXT: global_store_byte v4, v0, s[10:11] +; GFX9-NEXT: s_sub_u32 s2, s12, s14 +; GFX9-NEXT: s_cselect_b64 s[0:1], -1, 0 +; GFX9-NEXT: s_cmp_lg_u64 s[0:1], 0 +; GFX9-NEXT: s_subb_u32 s0, s13, s15 +; GFX9-NEXT: v_mov_b32_e32 v0, s2 +; GFX9-NEXT: v_mov_b32_e32 v1, s0 +; GFX9-NEXT: s_cselect_b64 s[0:1], -1, 0 +; GFX9-NEXT: v_cndmask_b32_e64 v3, 0, 1, s[0:1] +; GFX9-NEXT: global_store_dwordx2 v2, v[0:1], s[8:9] +; GFX9-NEXT: global_store_byte v2, v3, s[10:11] ; GFX9-NEXT: s_endpgm ; ; GFX1010-LABEL: susubo64: @@ -1741,10 +1751,12 @@ define amdgpu_kernel void @susubo64(ptr addrspace(1) %out, ptr addrspace(1) %car ; GFX1010-NEXT: v_mov_b32_e32 v2, 0 ; GFX1010-NEXT: s_waitcnt lgkmcnt(0) ; GFX1010-NEXT: s_sub_u32 s0, s12, s14 -; GFX1010-NEXT: s_subb_u32 s1, s13, s15 +; GFX1010-NEXT: s_cselect_b32 s1, -1, 0 ; GFX1010-NEXT: v_mov_b32_e32 v0, s0 +; GFX1010-NEXT: s_cmp_lg_u32 s1, 0 +; GFX1010-NEXT: s_subb_u32 s1, s13, s15 +; GFX1010-NEXT: s_cselect_b32 s0, -1, 0 ; GFX1010-NEXT: v_mov_b32_e32 v1, s1 -; GFX1010-NEXT: v_cmp_gt_u64_e64 s0, s[0:1], s[12:13] ; GFX1010-NEXT: v_cndmask_b32_e64 v3, 0, 1, s0 ; GFX1010-NEXT: global_store_dwordx2 v2, v[0:1], s[8:9] ; GFX1010-NEXT: global_store_byte v2, v3, s[10:11] @@ -1755,11 +1767,13 @@ define amdgpu_kernel void @susubo64(ptr addrspace(1) %out, ptr addrspace(1) %car ; GFX1030W32-NEXT: s_load_dwordx8 s[0:7], s[4:5], 0x24 ; GFX1030W32-NEXT: 
v_mov_b32_e32 v2, 0 ; GFX1030W32-NEXT: s_waitcnt lgkmcnt(0) -; GFX1030W32-NEXT: s_sub_u32 s6, s4, s6 -; GFX1030W32-NEXT: s_subb_u32 s7, s5, s7 -; GFX1030W32-NEXT: v_mov_b32_e32 v0, s6 -; GFX1030W32-NEXT: v_cmp_gt_u64_e64 s4, s[6:7], s[4:5] -; GFX1030W32-NEXT: v_mov_b32_e32 v1, s7 +; GFX1030W32-NEXT: s_sub_u32 s4, s4, s6 +; GFX1030W32-NEXT: s_cselect_b32 s6, -1, 0 +; GFX1030W32-NEXT: v_mov_b32_e32 v0, s4 +; GFX1030W32-NEXT: s_cmp_lg_u32 s6, 0 +; GFX1030W32-NEXT: s_subb_u32 s5, s5, s7 +; GFX1030W32-NEXT: s_cselect_b32 s4, -1, 0 +; GFX1030W32-NEXT: v_mov_b32_e32 v1, s5 ; GFX1030W32-NEXT: v_cndmask_b32_e64 v3, 0, 1, s4 ; GFX1030W32-NEXT: global_store_dwordx2 v2, v[0:1], s[0:1] ; GFX1030W32-NEXT: global_store_byte v2, v3, s[2:3] @@ -1770,11 +1784,13 @@ define amdgpu_kernel void @susubo64(ptr addrspace(1) %out, ptr addrspace(1) %car ; GFX1030W64-NEXT: s_load_dwordx8 s[0:7], s[4:5], 0x24 ; GFX1030W64-NEXT: v_mov_b32_e32 v2, 0 ; GFX1030W64-NEXT: s_waitcnt lgkmcnt(0) -; GFX1030W64-NEXT: s_sub_u32 s6, s4, s6 -; GFX1030W64-NEXT: s_subb_u32 s7, s5, s7 -; GFX1030W64-NEXT: v_mov_b32_e32 v0, s6 -; GFX1030W64-NEXT: v_cmp_gt_u64_e64 s[4:5], s[6:7], s[4:5] -; GFX1030W64-NEXT: v_mov_b32_e32 v1, s7 +; GFX1030W64-NEXT: s_sub_u32 s4, s4, s6 +; GFX1030W64-NEXT: s_cselect_b64 s[8:9], -1, 0 +; GFX1030W64-NEXT: v_mov_b32_e32 v0, s4 +; GFX1030W64-NEXT: s_cmp_lg_u64 s[8:9], 0 +; GFX1030W64-NEXT: s_subb_u32 s5, s5, s7 +; GFX1030W64-NEXT: v_mov_b32_e32 v1, s5 +; GFX1030W64-NEXT: s_cselect_b64 s[4:5], -1, 0 ; GFX1030W64-NEXT: v_cndmask_b32_e64 v3, 0, 1, s[4:5] ; GFX1030W64-NEXT: global_store_dwordx2 v2, v[0:1], s[0:1] ; GFX1030W64-NEXT: global_store_byte v2, v3, s[2:3] @@ -1784,12 +1800,13 @@ define amdgpu_kernel void @susubo64(ptr addrspace(1) %out, ptr addrspace(1) %car ; GFX11: ; %bb.0: ; GFX11-NEXT: s_load_b256 s[0:7], s[4:5], 0x24 ; GFX11-NEXT: s_waitcnt lgkmcnt(0) -; GFX11-NEXT: s_sub_u32 s6, s4, s6 -; GFX11-NEXT: s_subb_u32 s7, s5, s7 -; GFX11-NEXT: v_mov_b32_e32 v0, s6 -; GFX11-NEXT: v_cmp_gt_u64_e64 s4, s[6:7], s[4:5] -; GFX11-NEXT: v_dual_mov_b32 v2, 0 :: v_dual_mov_b32 v1, s7 -; GFX11-NEXT: s_delay_alu instid0(VALU_DEP_2) +; GFX11-NEXT: s_sub_u32 s4, s4, s6 +; GFX11-NEXT: s_cselect_b32 s6, -1, 0 +; GFX11-NEXT: v_mov_b32_e32 v0, s4 +; GFX11-NEXT: s_cmp_lg_u32 s6, 0 +; GFX11-NEXT: s_subb_u32 s5, s5, s7 +; GFX11-NEXT: s_cselect_b32 s4, -1, 0 +; GFX11-NEXT: v_dual_mov_b32 v2, 0 :: v_dual_mov_b32 v1, s5 ; GFX11-NEXT: v_cndmask_b32_e64 v3, 0, 1, s4 ; GFX11-NEXT: s_clause 0x1 ; GFX11-NEXT: global_store_b64 v2, v[0:1], s[0:1] @@ -1799,12 +1816,14 @@ define amdgpu_kernel void @susubo64(ptr addrspace(1) %out, ptr addrspace(1) %car ; GFX1250-LABEL: susubo64: ; GFX1250: ; %bb.0: ; GFX1250-NEXT: s_load_b256 s[8:15], s[4:5], 0x24 -; GFX1250-NEXT: v_mov_b32_e32 v2, 0 ; GFX1250-NEXT: s_wait_kmcnt 0x0 -; GFX1250-NEXT: s_sub_nc_u64 s[0:1], s[12:13], s[14:15] -; GFX1250-NEXT: s_delay_alu instid0(SALU_CYCLE_1) | instskip(SKIP_1) | instid1(VALU_DEP_1) -; GFX1250-NEXT: v_mov_b64_e32 v[0:1], s[0:1] -; GFX1250-NEXT: v_cmp_gt_u64_e64 s0, s[0:1], s[12:13] +; GFX1250-NEXT: s_sub_co_u32 s0, s12, s14 +; GFX1250-NEXT: s_cselect_b32 s1, -1, 0 +; GFX1250-NEXT: v_dual_mov_b32 v2, 0 :: v_dual_mov_b32 v0, s0 +; GFX1250-NEXT: s_cmp_lg_u32 s1, 0 +; GFX1250-NEXT: s_sub_co_ci_u32 s1, s13, s15 +; GFX1250-NEXT: s_cselect_b32 s0, -1, 0 +; GFX1250-NEXT: v_mov_b32_e32 v1, s1 ; GFX1250-NEXT: v_cndmask_b32_e64 v3, 0, 1, s0 ; GFX1250-NEXT: s_clause 0x1 ; GFX1250-NEXT: global_store_b64 v2, v[0:1], s[8:9] @@ -1821,7 +1840,8 @@ define amdgpu_kernel void 
@susubo64(ptr addrspace(1) %out, ptr addrspace(1) %car ; GCN-ISEL-LABEL: name: vusubo64 ; GCN-ISEL-LABEL: body: ; GCN-ISEL-LABEL: bb.0 -; GCN-ISEL: V_SUB_U64_PSEUDO +; GCN-ISEL: V_SUB_CO_U32_e64 +; GCN-ISEL: V_SUBB_U32_e64 define amdgpu_kernel void @vusubo64(ptr addrspace(1) %out, ptr addrspace(1) %carryout, i64 %a) #0 { ; CISI-LABEL: vusubo64: @@ -1834,9 +1854,8 @@ define amdgpu_kernel void @vusubo64(ptr addrspace(1) %out, ptr addrspace(1) %car ; CISI-NEXT: s_mov_b32 s4, s0 ; CISI-NEXT: v_mov_b32_e32 v1, s9 ; CISI-NEXT: v_sub_i32_e32 v0, vcc, s8, v0 -; CISI-NEXT: v_subbrev_u32_e32 v1, vcc, 0, v1, vcc -; CISI-NEXT: v_cmp_lt_u64_e32 vcc, s[8:9], v[0:1] ; CISI-NEXT: s_mov_b32 s5, s1 +; CISI-NEXT: v_subbrev_u32_e32 v1, vcc, 0, v1, vcc ; CISI-NEXT: s_mov_b32 s0, s2 ; CISI-NEXT: s_mov_b32 s1, s3 ; CISI-NEXT: s_mov_b32 s2, s6 @@ -1856,7 +1875,6 @@ define amdgpu_kernel void @vusubo64(ptr addrspace(1) %out, ptr addrspace(1) %car ; VI-NEXT: v_mov_b32_e32 v6, s5 ; VI-NEXT: v_sub_u32_e32 v5, vcc, s4, v0 ; VI-NEXT: v_subbrev_u32_e32 v6, vcc, 0, v6, vcc -; VI-NEXT: v_cmp_lt_u64_e32 vcc, s[4:5], v[5:6] ; VI-NEXT: v_mov_b32_e32 v2, s1 ; VI-NEXT: v_mov_b32_e32 v3, s2 ; VI-NEXT: v_mov_b32_e32 v4, s3 @@ -1874,7 +1892,6 @@ define amdgpu_kernel void @vusubo64(ptr addrspace(1) %out, ptr addrspace(1) %car ; GFX9-NEXT: v_mov_b32_e32 v1, s7 ; GFX9-NEXT: v_sub_co_u32_e32 v0, vcc, s6, v0 ; GFX9-NEXT: v_subbrev_co_u32_e32 v1, vcc, 0, v1, vcc -; GFX9-NEXT: v_cmp_lt_u64_e32 vcc, s[6:7], v[0:1] ; GFX9-NEXT: global_store_dwordx2 v2, v[0:1], s[0:1] ; GFX9-NEXT: v_cndmask_b32_e64 v0, 0, 1, vcc ; GFX9-NEXT: global_store_byte v2, v0, s[2:3] @@ -1889,8 +1906,7 @@ define amdgpu_kernel void @vusubo64(ptr addrspace(1) %out, ptr addrspace(1) %car ; GFX1010-NEXT: s_waitcnt lgkmcnt(0) ; GFX1010-NEXT: v_sub_co_u32 v0, s4, s6, v0 ; GFX1010-NEXT: v_sub_co_ci_u32_e64 v1, s4, s7, 0, s4 -; GFX1010-NEXT: v_cmp_lt_u64_e32 vcc_lo, s[6:7], v[0:1] -; GFX1010-NEXT: v_cndmask_b32_e64 v3, 0, 1, vcc_lo +; GFX1010-NEXT: v_cndmask_b32_e64 v3, 0, 1, s4 ; GFX1010-NEXT: global_store_dwordx2 v2, v[0:1], s[0:1] ; GFX1010-NEXT: global_store_byte v2, v3, s[2:3] ; GFX1010-NEXT: s_endpgm @@ -1903,9 +1919,8 @@ define amdgpu_kernel void @vusubo64(ptr addrspace(1) %out, ptr addrspace(1) %car ; GFX1030W32-NEXT: v_mov_b32_e32 v2, 0 ; GFX1030W32-NEXT: s_waitcnt lgkmcnt(0) ; GFX1030W32-NEXT: v_sub_co_u32 v0, s4, s6, v0 -; GFX1030W32-NEXT: v_sub_co_ci_u32_e64 v1, null, s7, 0, s4 -; GFX1030W32-NEXT: v_cmp_lt_u64_e32 vcc_lo, s[6:7], v[0:1] -; GFX1030W32-NEXT: v_cndmask_b32_e64 v3, 0, 1, vcc_lo +; GFX1030W32-NEXT: v_sub_co_ci_u32_e64 v1, s4, s7, 0, s4 +; GFX1030W32-NEXT: v_cndmask_b32_e64 v3, 0, 1, s4 ; GFX1030W32-NEXT: global_store_dwordx2 v2, v[0:1], s[0:1] ; GFX1030W32-NEXT: global_store_byte v2, v3, s[2:3] ; GFX1030W32-NEXT: s_endpgm @@ -1918,9 +1933,8 @@ define amdgpu_kernel void @vusubo64(ptr addrspace(1) %out, ptr addrspace(1) %car ; GFX1030W64-NEXT: v_mov_b32_e32 v2, 0 ; GFX1030W64-NEXT: s_waitcnt lgkmcnt(0) ; GFX1030W64-NEXT: v_sub_co_u32 v0, s[4:5], s6, v0 -; GFX1030W64-NEXT: v_sub_co_ci_u32_e64 v1, null, s7, 0, s[4:5] -; GFX1030W64-NEXT: v_cmp_lt_u64_e32 vcc, s[6:7], v[0:1] -; GFX1030W64-NEXT: v_cndmask_b32_e64 v3, 0, 1, vcc +; GFX1030W64-NEXT: v_sub_co_ci_u32_e64 v1, s[4:5], s7, 0, s[4:5] +; GFX1030W64-NEXT: v_cndmask_b32_e64 v3, 0, 1, s[4:5] ; GFX1030W64-NEXT: global_store_dwordx2 v2, v[0:1], s[0:1] ; GFX1030W64-NEXT: global_store_byte v2, v3, s[2:3] ; GFX1030W64-NEXT: s_endpgm @@ -1935,10 +1949,9 @@ define amdgpu_kernel void @vusubo64(ptr addrspace(1) 
%out, ptr addrspace(1) %car ; GFX11-NEXT: s_waitcnt lgkmcnt(0) ; GFX11-NEXT: s_delay_alu instid0(VALU_DEP_2) | instskip(NEXT) | instid1(VALU_DEP_1) ; GFX11-NEXT: v_sub_co_u32 v0, s4, s6, v0 -; GFX11-NEXT: v_sub_co_ci_u32_e64 v1, null, s7, 0, s4 +; GFX11-NEXT: v_sub_co_ci_u32_e64 v1, s4, s7, 0, s4 ; GFX11-NEXT: s_delay_alu instid0(VALU_DEP_1) -; GFX11-NEXT: v_cmp_lt_u64_e32 vcc_lo, s[6:7], v[0:1] -; GFX11-NEXT: v_cndmask_b32_e64 v3, 0, 1, vcc_lo +; GFX11-NEXT: v_cndmask_b32_e64 v3, 0, 1, s4 ; GFX11-NEXT: s_clause 0x1 ; GFX11-NEXT: global_store_b64 v2, v[0:1], s[0:1] ; GFX11-NEXT: global_store_b8 v2, v3, s[2:3] @@ -1949,16 +1962,17 @@ define amdgpu_kernel void @vusubo64(ptr addrspace(1) %out, ptr addrspace(1) %car ; GFX1250-NEXT: s_clause 0x1 ; GFX1250-NEXT: s_load_b64 s[6:7], s[4:5], 0x34 ; GFX1250-NEXT: s_load_b128 s[0:3], s[4:5], 0x24 -; GFX1250-NEXT: v_mov_b32_e32 v1, 0 ; GFX1250-NEXT: v_and_b32_e32 v0, 0x3ff, v0 +; GFX1250-NEXT: v_mov_b32_e32 v2, 0 ; GFX1250-NEXT: s_wait_kmcnt 0x0 -; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1) -; GFX1250-NEXT: v_sub_nc_u64_e32 v[2:3], s[6:7], v[0:1] -; GFX1250-NEXT: v_cmp_lt_u64_e32 vcc_lo, s[6:7], v[2:3] -; GFX1250-NEXT: v_cndmask_b32_e64 v0, 0, 1, vcc_lo +; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_2) | instskip(NEXT) | instid1(VALU_DEP_1) +; GFX1250-NEXT: v_sub_co_u32 v0, s4, s6, v0 +; GFX1250-NEXT: v_sub_co_ci_u32_e64 v1, s4, s7, 0, s4 +; GFX1250-NEXT: s_delay_alu instid0(VALU_DEP_1) +; GFX1250-NEXT: v_cndmask_b32_e64 v3, 0, 1, s4 ; GFX1250-NEXT: s_clause 0x1 -; GFX1250-NEXT: global_store_b64 v1, v[2:3], s[0:1] -; GFX1250-NEXT: global_store_b8 v1, v0, s[2:3] +; GFX1250-NEXT: global_store_b64 v2, v[0:1], s[0:1] +; GFX1250-NEXT: global_store_b8 v2, v3, s[2:3] ; GFX1250-NEXT: s_endpgm %tid = call i32 @llvm.amdgcn.workitem.id.x() %tid.ext = sext i32 %tid to i64 diff --git a/llvm/test/CodeGen/AMDGPU/mad_int24.ll b/llvm/test/CodeGen/AMDGPU/mad_int24.ll index 93fda94..dd88310 100644 --- a/llvm/test/CodeGen/AMDGPU/mad_int24.ll +++ b/llvm/test/CodeGen/AMDGPU/mad_int24.ll @@ -1,17 +1,79 @@ -; RUN: llc < %s -mtriple=amdgcn | FileCheck %s --check-prefix=GCN --check-prefix=FUNC -; RUN: llc < %s -mtriple=amdgcn -mcpu=tonga -mattr=-flat-for-global | FileCheck %s --check-prefix=GCN --check-prefix=FUNC -; RUN: llc < %s -mtriple=r600 -mcpu=redwood | FileCheck %s --check-prefix=EG --check-prefix=FUNC -; RUN: llc < %s -mtriple=r600 -mcpu=cayman | FileCheck %s --check-prefix=CM --check-prefix=FUNC +; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py UTC_ARGS: --version 6 +; RUN: llc < %s -mtriple=amdgcn | FileCheck %s --check-prefixes=GCN +; RUN: llc < %s -mtriple=amdgcn -mcpu=tonga -mattr=-flat-for-global | FileCheck %s --check-prefixes=VI +; RUN: llc < %s -mtriple=r600 -mcpu=redwood | FileCheck %s --check-prefixes=EG,R600,RW +; RUN: llc < %s -mtriple=r600 -mcpu=cayman | FileCheck %s --check-prefixes=EG,R600,CM -; FUNC-LABEL: {{^}}i32_mad24: ; Signed 24-bit multiply is not supported on pre-Cayman GPUs.
-; EG: MULLO_INT -; CM: MULLO_INT -; GCN: s_bfe_i32 -; GCN: s_bfe_i32 -; GCN: s_mul_i32 -; GCN: s_add_i32 define amdgpu_kernel void @i32_mad24(ptr addrspace(1) %out, i32 %a, i32 %b, i32 %c) { +; GCN-LABEL: i32_mad24: +; GCN: ; %bb.0: ; %entry +; GCN-NEXT: s_load_dwordx4 s[0:3], s[4:5], 0xb +; GCN-NEXT: s_load_dwordx2 s[4:5], s[4:5], 0x9 +; GCN-NEXT: s_mov_b32 s7, 0xf000 +; GCN-NEXT: s_waitcnt lgkmcnt(0) +; GCN-NEXT: s_bfe_i32 s0, s0, 0x180000 +; GCN-NEXT: s_bfe_i32 s1, s1, 0x180000 +; GCN-NEXT: s_mul_i32 s0, s0, s1 +; GCN-NEXT: s_add_i32 s0, s0, s2 +; GCN-NEXT: s_mov_b32 s6, -1 +; GCN-NEXT: v_mov_b32_e32 v0, s0 +; GCN-NEXT: buffer_store_dword v0, off, s[4:7], 0 +; GCN-NEXT: s_endpgm +; +; VI-LABEL: i32_mad24: +; VI: ; %bb.0: ; %entry +; VI-NEXT: s_load_dwordx4 s[0:3], s[4:5], 0x2c +; VI-NEXT: s_load_dwordx2 s[4:5], s[4:5], 0x24 +; VI-NEXT: s_mov_b32 s7, 0xf000 +; VI-NEXT: s_mov_b32 s6, -1 +; VI-NEXT: s_waitcnt lgkmcnt(0) +; VI-NEXT: s_bfe_i32 s0, s0, 0x180000 +; VI-NEXT: s_bfe_i32 s1, s1, 0x180000 +; VI-NEXT: s_mul_i32 s0, s0, s1 +; VI-NEXT: s_add_i32 s0, s0, s2 +; VI-NEXT: v_mov_b32_e32 v0, s0 +; VI-NEXT: buffer_store_dword v0, off, s[4:7], 0 +; VI-NEXT: s_endpgm +; +; RW-LABEL: i32_mad24: +; RW: ; %bb.0: ; %entry +; RW-NEXT: ALU 9, @4, KC0[CB0:0-32], KC1[] +; RW-NEXT: MEM_RAT_CACHELESS STORE_RAW T0.X, T1.X, 1 +; RW-NEXT: CF_END +; RW-NEXT: PAD +; RW-NEXT: ALU clause starting at 4: +; RW-NEXT: LSHL T0.W, KC0[2].Z, literal.x, +; RW-NEXT: LSHL * T1.W, KC0[2].W, literal.x, +; RW-NEXT: 8(1.121039e-44), 0(0.000000e+00) +; RW-NEXT: ASHR T1.W, PS, literal.x, +; RW-NEXT: ASHR * T0.W, PV.W, literal.x, +; RW-NEXT: 8(1.121039e-44), 0(0.000000e+00) +; RW-NEXT: MULLO_INT * T0.X, PS, PV.W, +; RW-NEXT: ADD_INT T0.X, PS, KC0[3].X, +; RW-NEXT: LSHR * T1.X, KC0[2].Y, literal.x, +; RW-NEXT: 2(2.802597e-45), 0(0.000000e+00) +; +; CM-LABEL: i32_mad24: +; CM: ; %bb.0: ; %entry +; CM-NEXT: ALU 12, @4, KC0[CB0:0-32], KC1[] +; CM-NEXT: MEM_RAT_CACHELESS STORE_DWORD T0.X, T1.X +; CM-NEXT: CF_END +; CM-NEXT: PAD +; CM-NEXT: ALU clause starting at 4: +; CM-NEXT: LSHL T0.Z, KC0[2].Z, literal.x, +; CM-NEXT: LSHL * T0.W, KC0[2].W, literal.x, +; CM-NEXT: 8(1.121039e-44), 0(0.000000e+00) +; CM-NEXT: ASHR T1.Z, PV.W, literal.x, +; CM-NEXT: ASHR * T0.W, PV.Z, literal.x, +; CM-NEXT: 8(1.121039e-44), 0(0.000000e+00) +; CM-NEXT: MULLO_INT T0.X, T0.W, T1.Z, +; CM-NEXT: MULLO_INT T0.Y (MASKED), T0.W, T1.Z, +; CM-NEXT: MULLO_INT T0.Z (MASKED), T0.W, T1.Z, +; CM-NEXT: MULLO_INT * T0.W (MASKED), T0.W, T1.Z, +; CM-NEXT: ADD_INT * T0.X, PV.X, KC0[3].X, +; CM-NEXT: LSHR * T1.X, KC0[2].Y, literal.x, +; CM-NEXT: 2(2.802597e-45), 0(0.000000e+00) entry: %0 = shl i32 %a, 8 %a_24 = ashr i32 %0, 8 @@ -23,13 +85,25 @@ entry: ret void } -; GCN-LABEL: {{^}}mad24_known_bits_destroyed: -; GCN: s_waitcnt -; GCN-NEXT: v_mad_i32_i24 -; GCN-NEXT: v_mul_i32_i24 -; GCN-NEXT: s_setpc_b64 define i32 @mad24_known_bits_destroyed(i32 %a, i32 %b, i32 %c) { - +; GCN-LABEL: mad24_known_bits_destroyed: +; GCN: ; %bb.0: +; GCN-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0) +; GCN-NEXT: v_mad_i32_i24 v1, v0, v1, v2 +; GCN-NEXT: v_mul_i32_i24_e32 v0, v1, v0 +; GCN-NEXT: s_setpc_b64 s[30:31] +; +; VI-LABEL: mad24_known_bits_destroyed: +; VI: ; %bb.0: +; VI-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0) +; VI-NEXT: v_mad_i32_i24 v1, v0, v1, v2 +; VI-NEXT: v_mul_i32_i24_e32 v0, v1, v0 +; VI-NEXT: s_setpc_b64 s[30:31] +; +; EG-LABEL: mad24_known_bits_destroyed: +; EG: ; %bb.0: +; EG-NEXT: CF_END +; EG-NEXT: PAD %shl.0 = shl i32 %a, 8 %sra.0 = ashr i32 %shl.0, 8 %shl.1 
= shl i32 %b, 8 @@ -48,12 +122,25 @@ define i32 @mad24_known_bits_destroyed(i32 %a, i32 %b, i32 %c) { ret i32 %mul1 } -; GCN-LABEL: {{^}}mad24_intrin_known_bits_destroyed: -; GCN: s_waitcnt -; GCN-NEXT: v_mad_i32_i24 -; GCN-NEXT: v_mul_i32_i24 -; GCN-NEXT: s_setpc_b64 define i32 @mad24_intrin_known_bits_destroyed(i32 %a, i32 %b, i32 %c) { +; GCN-LABEL: mad24_intrin_known_bits_destroyed: +; GCN: ; %bb.0: +; GCN-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0) +; GCN-NEXT: v_mad_i32_i24 v1, v0, v1, v2 +; GCN-NEXT: v_mul_i32_i24_e32 v0, v1, v0 +; GCN-NEXT: s_setpc_b64 s[30:31] +; +; VI-LABEL: mad24_intrin_known_bits_destroyed: +; VI: ; %bb.0: +; VI-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0) +; VI-NEXT: v_mad_i32_i24 v1, v0, v1, v2 +; VI-NEXT: v_mul_i32_i24_e32 v0, v1, v0 +; VI-NEXT: s_setpc_b64 s[30:31] +; +; EG-LABEL: mad24_intrin_known_bits_destroyed: +; EG: ; %bb.0: +; EG-NEXT: CF_END +; EG-NEXT: PAD %shl.0 = shl i32 %a, 8 %sra.0 = ashr i32 %shl.0, 8 %shl.1 = shl i32 %b, 8 @@ -73,17 +160,177 @@ define i32 @mad24_intrin_known_bits_destroyed(i32 %a, i32 %b, i32 %c) { } ; Make sure no unnecessary BFEs are emitted in the loop. -; GCN-LABEL: {{^}}mad24_destroyed_knownbits_2: -; GCN-NOT: v_bfe -; GCN: v_mad_i32_i24 -; GCN-NOT: v_bfe -; GCN: v_mad_i32_i24 -; GCN-NOT: v_bfe -; GCN: v_mad_i32_i24 -; GCN-NOT: v_bfe -; GCN: v_mad_i32_i24 -; GCN-NOT: v_bfe define void @mad24_destroyed_knownbits_2(i32 %arg, i32 %arg1, i32 %arg2, ptr addrspace(1) %arg3) { +; GCN-LABEL: mad24_destroyed_knownbits_2: +; GCN: ; %bb.0: ; %bb +; GCN-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0) +; GCN-NEXT: v_mov_b32_e32 v5, 1 +; GCN-NEXT: s_mov_b64 s[4:5], 0 +; GCN-NEXT: .LBB3_1: ; %bb6 +; GCN-NEXT: ; =>This Inner Loop Header: Depth=1 +; GCN-NEXT: v_mad_i32_i24 v0, v0, v5, v5 +; GCN-NEXT: v_add_i32_e32 v1, vcc, -1, v1 +; GCN-NEXT: v_mad_i32_i24 v5, v0, v5, v0 +; GCN-NEXT: v_cmp_eq_u32_e32 vcc, 0, v1 +; GCN-NEXT: v_mad_i32_i24 v0, v5, v0, v5 +; GCN-NEXT: s_or_b64 s[4:5], vcc, s[4:5] +; GCN-NEXT: v_mad_i32_i24 v0, v0, v5, v0 +; GCN-NEXT: v_mov_b32_e32 v5, v2 +; GCN-NEXT: s_andn2_b64 exec, exec, s[4:5] +; GCN-NEXT: s_cbranch_execnz .LBB3_1 +; GCN-NEXT: ; %bb.2: ; %bb5 +; GCN-NEXT: s_or_b64 exec, exec, s[4:5] +; GCN-NEXT: s_mov_b32 s6, 0 +; GCN-NEXT: s_mov_b32 s7, 0xf000 +; GCN-NEXT: s_mov_b32 s4, s6 +; GCN-NEXT: s_mov_b32 s5, s6 +; GCN-NEXT: buffer_store_dword v0, v[3:4], s[4:7], 0 addr64 +; GCN-NEXT: s_waitcnt vmcnt(0) expcnt(0) +; GCN-NEXT: s_setpc_b64 s[30:31] +; +; VI-LABEL: mad24_destroyed_knownbits_2: +; VI: ; %bb.0: ; %bb +; VI-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0) +; VI-NEXT: v_mov_b32_e32 v5, 1 +; VI-NEXT: s_mov_b64 s[4:5], 0 +; VI-NEXT: .LBB3_1: ; %bb6 +; VI-NEXT: ; =>This Inner Loop Header: Depth=1 +; VI-NEXT: v_mad_i32_i24 v0, v0, v5, v5 +; VI-NEXT: v_mad_i32_i24 v5, v0, v5, v0 +; VI-NEXT: v_add_u32_e32 v1, vcc, -1, v1 +; VI-NEXT: v_mad_i32_i24 v0, v5, v0, v5 +; VI-NEXT: v_cmp_eq_u32_e32 vcc, 0, v1 +; VI-NEXT: v_mad_i32_i24 v0, v0, v5, v0 +; VI-NEXT: s_or_b64 s[4:5], vcc, s[4:5] +; VI-NEXT: v_mov_b32_e32 v5, v2 +; VI-NEXT: s_andn2_b64 exec, exec, s[4:5] +; VI-NEXT: s_cbranch_execnz .LBB3_1 +; VI-NEXT: ; %bb.2: ; %bb5 +; VI-NEXT: s_or_b64 exec, exec, s[4:5] +; VI-NEXT: flat_store_dword v[3:4], v0 +; VI-NEXT: s_waitcnt vmcnt(0) +; VI-NEXT: s_setpc_b64 s[30:31] +; +; RW-LABEL: mad24_destroyed_knownbits_2: +; RW: ; %bb.0: ; %bb +; RW-NEXT: ALU 5, @10, KC0[CB0:0-32], KC1[] +; RW-NEXT: LOOP_START_DX10 @7 +; RW-NEXT: ALU_PUSH_BEFORE 30, @16, KC0[], KC1[] +; RW-NEXT: JUMP @6 POP:1 +; RW-NEXT: LOOP_BREAK @6 +; 
RW-NEXT: POP @6 POP:1 +; RW-NEXT: END_LOOP @2 +; RW-NEXT: ALU 1, @47, KC0[], KC1[] +; RW-NEXT: MEM_RAT_CACHELESS STORE_RAW T0.X, T1.X, 1 +; RW-NEXT: CF_END +; RW-NEXT: ALU clause starting at 10: +; RW-NEXT: MOV T0.X, KC0[2].Y, +; RW-NEXT: MOV T0.Y, KC0[2].Z, +; RW-NEXT: MOV * T0.Z, KC0[2].W, +; RW-NEXT: MOV T0.W, KC0[3].X, +; RW-NEXT: MOV * T1.W, literal.x, +; RW-NEXT: 1(1.401298e-45), 0(0.000000e+00) +; RW-NEXT: ALU clause starting at 16: +; RW-NEXT: LSHL T2.W, T1.W, literal.x, +; RW-NEXT: LSHL * T3.W, T0.X, literal.x, +; RW-NEXT: 8(1.121039e-44), 0(0.000000e+00) +; RW-NEXT: ASHR T3.W, PS, literal.x, +; RW-NEXT: ASHR * T2.W, PV.W, literal.x, +; RW-NEXT: 8(1.121039e-44), 0(0.000000e+00) +; RW-NEXT: MULLO_INT * T0.X, PV.W, PS, +; RW-NEXT: ADD_INT * T1.W, PS, T1.W, +; RW-NEXT: LSHL * T3.W, PV.W, literal.x, +; RW-NEXT: 8(1.121039e-44), 0(0.000000e+00) +; RW-NEXT: ASHR * T3.W, PV.W, literal.x, +; RW-NEXT: 8(1.121039e-44), 0(0.000000e+00) +; RW-NEXT: MULLO_INT * T0.X, PV.W, T2.W, +; RW-NEXT: ADD_INT * T1.W, PS, T1.W, +; RW-NEXT: LSHL * T2.W, PV.W, literal.x, +; RW-NEXT: 8(1.121039e-44), 0(0.000000e+00) +; RW-NEXT: ASHR * T2.W, PV.W, literal.x, +; RW-NEXT: 8(1.121039e-44), 0(0.000000e+00) +; RW-NEXT: MULLO_INT * T0.X, PV.W, T3.W, +; RW-NEXT: ADD_INT * T1.W, PS, T1.W, +; RW-NEXT: LSHL * T3.W, PV.W, literal.x, +; RW-NEXT: 8(1.121039e-44), 0(0.000000e+00) +; RW-NEXT: ASHR * T3.W, PV.W, literal.x, +; RW-NEXT: 8(1.121039e-44), 0(0.000000e+00) +; RW-NEXT: ADD_INT T0.Y, T0.Y, literal.x, +; RW-NEXT: MULLO_INT * T0.X, PV.W, T2.W, +; RW-NEXT: -1(nan), 0(0.000000e+00) +; RW-NEXT: ADD_INT T0.X, PS, T1.W, +; RW-NEXT: SETE_INT T2.W, PV.Y, 0.0, +; RW-NEXT: MOV * T1.W, T0.Z, +; RW-NEXT: PRED_SETNE_INT * ExecMask,PredicateBit (MASKED), PV.W, 0.0, +; RW-NEXT: ALU clause starting at 47: +; RW-NEXT: LSHR * T1.X, T0.W, literal.x, +; RW-NEXT: 2(2.802597e-45), 0(0.000000e+00) +; +; CM-LABEL: mad24_destroyed_knownbits_2: +; CM: ; %bb.0: ; %bb +; CM-NEXT: ALU 5, @10, KC0[CB0:0-32], KC1[] +; CM-NEXT: LOOP_START_DX10 @7 +; CM-NEXT: ALU_PUSH_BEFORE 41, @16, KC0[], KC1[] +; CM-NEXT: JUMP @6 POP:1 +; CM-NEXT: LOOP_BREAK @6 +; CM-NEXT: POP @6 POP:1 +; CM-NEXT: END_LOOP @2 +; CM-NEXT: ALU 1, @58, KC0[], KC1[] +; CM-NEXT: MEM_RAT_CACHELESS STORE_DWORD T1.X, T0.X +; CM-NEXT: CF_END +; CM-NEXT: ALU clause starting at 10: +; CM-NEXT: MOV * T1.X, KC0[2].Y, +; CM-NEXT: MOV T0.X, KC0[2].Z, +; CM-NEXT: MOV T0.Y, KC0[2].W, +; CM-NEXT: MOV T0.Z, KC0[3].X, +; CM-NEXT: MOV * T0.W, literal.x, +; CM-NEXT: 1(1.401298e-45), 0(0.000000e+00) +; CM-NEXT: ALU clause starting at 16: +; CM-NEXT: LSHL T1.Z, T0.W, literal.x, +; CM-NEXT: LSHL * T1.W, T1.X, literal.x, +; CM-NEXT: 8(1.121039e-44), 0(0.000000e+00) +; CM-NEXT: ASHR T2.Z, PV.W, literal.x, +; CM-NEXT: ASHR * T1.W, PV.Z, literal.x, +; CM-NEXT: 8(1.121039e-44), 0(0.000000e+00) +; CM-NEXT: MULLO_INT T1.X, T2.Z, T1.W, +; CM-NEXT: MULLO_INT T1.Y (MASKED), T2.Z, T1.W, +; CM-NEXT: MULLO_INT T1.Z (MASKED), T2.Z, T1.W, +; CM-NEXT: MULLO_INT * T1.W (MASKED), T2.Z, T1.W, +; CM-NEXT: ADD_INT * T0.W, PV.X, T0.W, +; CM-NEXT: LSHL * T2.W, PV.W, literal.x, +; CM-NEXT: 8(1.121039e-44), 0(0.000000e+00) +; CM-NEXT: ASHR * T2.W, PV.W, literal.x, +; CM-NEXT: 8(1.121039e-44), 0(0.000000e+00) +; CM-NEXT: MULLO_INT T1.X, T2.W, T1.W, +; CM-NEXT: MULLO_INT T1.Y (MASKED), T2.W, T1.W, +; CM-NEXT: MULLO_INT T1.Z (MASKED), T2.W, T1.W, +; CM-NEXT: MULLO_INT * T1.W (MASKED), T2.W, T1.W, +; CM-NEXT: ADD_INT * T0.W, PV.X, T0.W, +; CM-NEXT: LSHL * T1.W, PV.W, literal.x, +; CM-NEXT: 8(1.121039e-44), 0(0.000000e+00) +; 
CM-NEXT: ASHR * T1.W, PV.W, literal.x, +; CM-NEXT: 8(1.121039e-44), 0(0.000000e+00) +; CM-NEXT: MULLO_INT T1.X, T1.W, T2.W, +; CM-NEXT: MULLO_INT T1.Y (MASKED), T1.W, T2.W, +; CM-NEXT: MULLO_INT T1.Z (MASKED), T1.W, T2.W, +; CM-NEXT: MULLO_INT * T1.W (MASKED), T1.W, T2.W, +; CM-NEXT: ADD_INT * T0.W, PV.X, T0.W, +; CM-NEXT: LSHL * T2.W, PV.W, literal.x, +; CM-NEXT: 8(1.121039e-44), 0(0.000000e+00) +; CM-NEXT: ADD_INT T0.X, T0.X, literal.x, +; CM-NEXT: ASHR * T2.W, PV.W, literal.y, +; CM-NEXT: -1(nan), 8(1.121039e-44) +; CM-NEXT: MULLO_INT T1.X, T2.W, T1.W, +; CM-NEXT: MULLO_INT T1.Y (MASKED), T2.W, T1.W, +; CM-NEXT: MULLO_INT T1.Z (MASKED), T2.W, T1.W, +; CM-NEXT: MULLO_INT * T1.W (MASKED), T2.W, T1.W, +; CM-NEXT: ADD_INT T1.X, PV.X, T0.W, +; CM-NEXT: SETE_INT T1.Z, T0.X, 0.0, +; CM-NEXT: MOV * T0.W, T0.Y, +; CM-NEXT: PRED_SETNE_INT * ExecMask,PredicateBit (MASKED), PV.Z, 0.0, +; CM-NEXT: ALU clause starting at 58: +; CM-NEXT: LSHR * T0.X, T0.Z, literal.x, +; CM-NEXT: 2(2.802597e-45), 0(0.000000e+00) bb: br label %bb6 @@ -119,3 +366,5 @@ bb6: ; preds = %bb6, %bb } declare i32 @llvm.amdgcn.mul.i24(i32, i32) +;; NOTE: These prefixes are unused and the list is autogenerated. Do not add tests below this line: +; R600: {{.*}} diff --git a/llvm/test/CodeGen/AMDGPU/mad_uint24.ll b/llvm/test/CodeGen/AMDGPU/mad_uint24.ll index a6d458e..46b8df4 100644 --- a/llvm/test/CodeGen/AMDGPU/mad_uint24.ll +++ b/llvm/test/CodeGen/AMDGPU/mad_uint24.ll @@ -1,19 +1,75 @@ -; RUN: llc < %s -mtriple=r600 -mcpu=redwood | FileCheck %s --check-prefix=EG --check-prefix=FUNC -; RUN: llc < %s -mtriple=r600 -mcpu=cayman | FileCheck %s --check-prefix=EG --check-prefix=FUNC -; RUN: llc < %s -mtriple=amdgcn | FileCheck %s --check-prefix=SI --check-prefix=FUNC --check-prefix=GCN -; RUN: llc < %s -mtriple=amdgcn -mcpu=tonga -mattr=-flat-for-global | FileCheck %s --check-prefix=VI --check-prefix=FUNC --check-prefix=GCN --check-prefix=GCN2 -; RUN: llc < %s -mtriple=amdgcn -mcpu=fiji -mattr=-flat-for-global | FileCheck %s --check-prefix=VI --check-prefix=FUNC --check-prefix=GCN --check-prefix=GCN2 +; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py UTC_ARGS: --version 6 +; RUN: llc < %s -mtriple=r600 -mcpu=redwood | FileCheck %s --check-prefixes=EG +; RUN: llc < %s -mtriple=r600 -mcpu=cayman | FileCheck %s --check-prefixes=CM +; RUN: llc < %s -mtriple=amdgcn | FileCheck %s --check-prefixes=GCN +; RUN: llc < %s -mtriple=amdgcn -mcpu=tonga -mattr=-flat-for-global | FileCheck %s --check-prefixes=GFX8,SI +; RUN: llc < %s -mtriple=amdgcn -mcpu=fiji -mattr=-flat-for-global | FileCheck %s --check-prefixes=GFX8,VI declare i32 @llvm.amdgcn.workitem.id.x() nounwind readnone -; FUNC-LABEL: {{^}}u32_mad24: -; EG: MULLO_INT -; SI: s_mul_i32 -; SI: s_add_i32 -; VI: s_mul_{{[iu]}}32 -; VI: s_add_{{[iu]}}32 - define amdgpu_kernel void @u32_mad24(ptr addrspace(1) %out, i32 %a, i32 %b, i32 %c) { +; EG-LABEL: u32_mad24: +; EG: ; %bb.0: ; %entry +; EG-NEXT: ALU 6, @4, KC0[CB0:0-32], KC1[] +; EG-NEXT: MEM_RAT_CACHELESS STORE_RAW T0.X, T1.X, 1 +; EG-NEXT: CF_END +; EG-NEXT: PAD +; EG-NEXT: ALU clause starting at 4: +; EG-NEXT: AND_INT T0.W, KC0[2].W, literal.x, +; EG-NEXT: AND_INT * T1.W, KC0[2].Z, literal.x, +; EG-NEXT: 16777215(2.350989e-38), 0(0.000000e+00) +; EG-NEXT: MULLO_INT * T0.X, PS, PV.W, +; EG-NEXT: ADD_INT T0.X, PS, KC0[3].X, +; EG-NEXT: LSHR * T1.X, KC0[2].Y, literal.x, +; EG-NEXT: 2(2.802597e-45), 0(0.000000e+00) +; +; CM-LABEL: u32_mad24: +; CM: ; %bb.0: ; %entry +; CM-NEXT: ALU 9, @4, KC0[CB0:0-32], KC1[] +; 
CM-NEXT: MEM_RAT_CACHELESS STORE_DWORD T0.X, T1.X +; CM-NEXT: CF_END +; CM-NEXT: PAD +; CM-NEXT: ALU clause starting at 4: +; CM-NEXT: AND_INT T0.Z, KC0[2].W, literal.x, +; CM-NEXT: AND_INT * T0.W, KC0[2].Z, literal.x, +; CM-NEXT: 16777215(2.350989e-38), 0(0.000000e+00) +; CM-NEXT: MULLO_INT T0.X, T0.W, T0.Z, +; CM-NEXT: MULLO_INT T0.Y (MASKED), T0.W, T0.Z, +; CM-NEXT: MULLO_INT T0.Z (MASKED), T0.W, T0.Z, +; CM-NEXT: MULLO_INT * T0.W (MASKED), T0.W, T0.Z, +; CM-NEXT: ADD_INT * T0.X, PV.X, KC0[3].X, +; CM-NEXT: LSHR * T1.X, KC0[2].Y, literal.x, +; CM-NEXT: 2(2.802597e-45), 0(0.000000e+00) +; +; GCN-LABEL: u32_mad24: +; GCN: ; %bb.0: ; %entry +; GCN-NEXT: s_load_dwordx4 s[0:3], s[4:5], 0xb +; GCN-NEXT: s_load_dwordx2 s[4:5], s[4:5], 0x9 +; GCN-NEXT: s_mov_b32 s7, 0xf000 +; GCN-NEXT: s_waitcnt lgkmcnt(0) +; GCN-NEXT: s_and_b32 s0, s0, 0xffffff +; GCN-NEXT: s_and_b32 s1, s1, 0xffffff +; GCN-NEXT: s_mul_i32 s0, s0, s1 +; GCN-NEXT: s_add_i32 s0, s0, s2 +; GCN-NEXT: s_mov_b32 s6, -1 +; GCN-NEXT: v_mov_b32_e32 v0, s0 +; GCN-NEXT: buffer_store_dword v0, off, s[4:7], 0 +; GCN-NEXT: s_endpgm +; +; GFX8-LABEL: u32_mad24: +; GFX8: ; %bb.0: ; %entry +; GFX8-NEXT: s_load_dwordx4 s[0:3], s[4:5], 0x2c +; GFX8-NEXT: s_load_dwordx2 s[4:5], s[4:5], 0x24 +; GFX8-NEXT: s_mov_b32 s7, 0xf000 +; GFX8-NEXT: s_mov_b32 s6, -1 +; GFX8-NEXT: s_waitcnt lgkmcnt(0) +; GFX8-NEXT: s_and_b32 s0, s0, 0xffffff +; GFX8-NEXT: s_and_b32 s1, s1, 0xffffff +; GFX8-NEXT: s_mul_i32 s0, s0, s1 +; GFX8-NEXT: s_add_i32 s0, s0, s2 +; GFX8-NEXT: v_mov_b32_e32 v0, s0 +; GFX8-NEXT: buffer_store_dword v0, off, s[4:7], 0 +; GFX8-NEXT: s_endpgm entry: %0 = shl i32 %a, 8 %a_24 = lshr i32 %0, 8 @@ -25,18 +81,88 @@ entry: ret void } -; FUNC-LABEL: {{^}}i16_mad24: ; The order of A and B does not matter. -; EG: MULLO_INT {{[* ]*}}T{{[0-9]}}.[[MAD_CHAN:[XYZW]]] -; EG: ADD_INT {{[* ]*}}T{{[0-9]}}.[[MAD_CHAN:[XYZW]]] ; The result must be sign-extended -; EG: BFE_INT {{[* ]*}}T{{[0-9]\.[XYZW]}}, PV.[[MAD_CHAN]], 0.0, literal.x -; EG: 16 -; GCN: s_mul_i32 [[MUL:s[0-9]]], {{[s][0-9], [s][0-9]}} -; GCN: s_add_i32 [[MAD:s[0-9]]], [[MUL]], s{{[0-9]}} -; GCN: s_sext_i32_i16 [[EXT:s[0-9]]], [[MAD]] -; GCN: v_mov_b32_e32 v0, [[EXT]] define amdgpu_kernel void @i16_mad24(ptr addrspace(1) %out, i16 %a, i16 %b, i16 %c) { +; EG-LABEL: i16_mad24: +; EG: ; %bb.0: ; %entry +; EG-NEXT: ALU 0, @12, KC0[], KC1[] +; EG-NEXT: TEX 2 @6 +; EG-NEXT: ALU 4, @13, KC0[CB0:0-32], KC1[] +; EG-NEXT: MEM_RAT_CACHELESS STORE_RAW T0.X, T1.X, 1 +; EG-NEXT: CF_END +; EG-NEXT: PAD +; EG-NEXT: Fetch clause starting at 6: +; EG-NEXT: VTX_READ_16 T1.X, T0.X, 40, #3 +; EG-NEXT: VTX_READ_16 T2.X, T0.X, 42, #3 +; EG-NEXT: VTX_READ_16 T0.X, T0.X, 44, #3 +; EG-NEXT: ALU clause starting at 12: +; EG-NEXT: MOV * T0.X, 0.0, +; EG-NEXT: ALU clause starting at 13: +; EG-NEXT: MULLO_INT * T0.Y, T1.X, T2.X, +; EG-NEXT: ADD_INT * T0.W, PS, T0.X, +; EG-NEXT: BFE_INT T0.X, PV.W, 0.0, literal.x, +; EG-NEXT: LSHR * T1.X, KC0[2].Y, literal.y, +; EG-NEXT: 16(2.242078e-44), 2(2.802597e-45) +; +; CM-LABEL: i16_mad24: +; CM: ; %bb.0: ; %entry +; CM-NEXT: ALU 0, @12, KC0[], KC1[] +; CM-NEXT: TEX 2 @6 +; CM-NEXT: ALU 8, @13, KC0[CB0:0-32], KC1[] +; CM-NEXT: MEM_RAT_CACHELESS STORE_DWORD T0.X, T1.X +; CM-NEXT: CF_END +; CM-NEXT: PAD +; CM-NEXT: Fetch clause starting at 6: +; CM-NEXT: VTX_READ_16 T1.X, T0.X, 40, #3 +; CM-NEXT: VTX_READ_16 T2.X, T0.X, 42, #3 +; CM-NEXT: VTX_READ_16 T0.X, T0.X, 44, #3 +; CM-NEXT: ALU clause starting at 12: +; CM-NEXT: MOV * T0.X, 0.0, +; CM-NEXT: ALU clause starting at 13: +; CM-NEXT: 
MULLO_INT T0.X (MASKED), T1.X, T2.X, +; CM-NEXT: MULLO_INT T0.Y, T1.X, T2.X, +; CM-NEXT: MULLO_INT T0.Z (MASKED), T1.X, T2.X, +; CM-NEXT: MULLO_INT * T0.W (MASKED), T1.X, T2.X, +; CM-NEXT: ADD_INT * T0.W, PV.Y, T0.X, +; CM-NEXT: BFE_INT * T0.X, PV.W, 0.0, literal.x, +; CM-NEXT: 16(2.242078e-44), 0(0.000000e+00) +; CM-NEXT: LSHR * T1.X, KC0[2].Y, literal.x, +; CM-NEXT: 2(2.802597e-45), 0(0.000000e+00) +; +; GCN-LABEL: i16_mad24: +; GCN: ; %bb.0: ; %entry +; GCN-NEXT: s_load_dwordx4 s[0:3], s[4:5], 0x9 +; GCN-NEXT: s_load_dword s4, s[4:5], 0xb +; GCN-NEXT: s_mov_b32 s7, 0xf000 +; GCN-NEXT: s_waitcnt lgkmcnt(0) +; GCN-NEXT: s_lshr_b32 s2, s2, 16 +; GCN-NEXT: s_mul_i32 s2, s4, s2 +; GCN-NEXT: s_add_i32 s2, s2, s3 +; GCN-NEXT: s_sext_i32_i16 s2, s2 +; GCN-NEXT: s_mov_b32 s6, -1 +; GCN-NEXT: s_mov_b32 s4, s0 +; GCN-NEXT: s_mov_b32 s5, s1 +; GCN-NEXT: v_mov_b32_e32 v0, s2 +; GCN-NEXT: buffer_store_dword v0, off, s[4:7], 0 +; GCN-NEXT: s_endpgm +; +; GFX8-LABEL: i16_mad24: +; GFX8: ; %bb.0: ; %entry +; GFX8-NEXT: s_load_dwordx4 s[0:3], s[4:5], 0x24 +; GFX8-NEXT: s_load_dword s8, s[4:5], 0x2c +; GFX8-NEXT: s_mov_b32 s7, 0xf000 +; GFX8-NEXT: s_mov_b32 s6, -1 +; GFX8-NEXT: s_waitcnt lgkmcnt(0) +; GFX8-NEXT: s_mov_b32 s4, s0 +; GFX8-NEXT: s_lshr_b32 s0, s2, 16 +; GFX8-NEXT: s_mul_i32 s0, s8, s0 +; GFX8-NEXT: s_add_i32 s0, s0, s3 +; GFX8-NEXT: s_sext_i32_i16 s0, s0 +; GFX8-NEXT: s_mov_b32 s5, s1 +; GFX8-NEXT: v_mov_b32_e32 v0, s0 +; GFX8-NEXT: buffer_store_dword v0, off, s[4:7], 0 +; GFX8-NEXT: s_endpgm entry: %0 = mul i16 %a, %b %1 = add i16 %0, %c @@ -46,17 +172,85 @@ entry: } ; FIXME: Need to handle non-uniform case for function below (load without gep). -; FUNC-LABEL: {{^}}i8_mad24: -; EG: MULLO_INT {{[* ]*}}T{{[0-9]}}.[[MAD_CHAN:[XYZW]]] -; EG: ADD_INT {{[* ]*}}T{{[0-9]}}.[[MAD_CHAN:[XYZW]]] ; The result must be sign-extended -; EG: BFE_INT {{[* ]*}}T{{[0-9]\.[XYZW]}}, PV.[[MAD_CHAN]], 0.0, literal.x -; EG: 8 -; GCN: s_mul_i32 [[MUL:s[0-9]]], {{[s][0-9], [s][0-9]}} -; GCN: s_add_i32 [[MAD:s[0-9]]], [[MUL]], s{{[0-9]}} -; GCN: s_sext_i32_i8 [[EXT:s[0-9]]], [[MAD]] -; GCN: v_mov_b32_e32 v0, [[EXT]] define amdgpu_kernel void @i8_mad24(ptr addrspace(1) %out, i8 %a, i8 %b, i8 %c) { +; EG-LABEL: i8_mad24: +; EG: ; %bb.0: ; %entry +; EG-NEXT: ALU 0, @12, KC0[], KC1[] +; EG-NEXT: TEX 2 @6 +; EG-NEXT: ALU 4, @13, KC0[CB0:0-32], KC1[] +; EG-NEXT: MEM_RAT_CACHELESS STORE_RAW T0.X, T1.X, 1 +; EG-NEXT: CF_END +; EG-NEXT: PAD +; EG-NEXT: Fetch clause starting at 6: +; EG-NEXT: VTX_READ_8 T1.X, T0.X, 40, #3 +; EG-NEXT: VTX_READ_8 T2.X, T0.X, 41, #3 +; EG-NEXT: VTX_READ_8 T0.X, T0.X, 42, #3 +; EG-NEXT: ALU clause starting at 12: +; EG-NEXT: MOV * T0.X, 0.0, +; EG-NEXT: ALU clause starting at 13: +; EG-NEXT: MULLO_INT * T0.Y, T1.X, T2.X, +; EG-NEXT: ADD_INT * T0.W, PS, T0.X, +; EG-NEXT: BFE_INT T0.X, PV.W, 0.0, literal.x, +; EG-NEXT: LSHR * T1.X, KC0[2].Y, literal.y, +; EG-NEXT: 8(1.121039e-44), 2(2.802597e-45) +; +; CM-LABEL: i8_mad24: +; CM: ; %bb.0: ; %entry +; CM-NEXT: ALU 0, @12, KC0[], KC1[] +; CM-NEXT: TEX 2 @6 +; CM-NEXT: ALU 8, @13, KC0[CB0:0-32], KC1[] +; CM-NEXT: MEM_RAT_CACHELESS STORE_DWORD T0.X, T1.X +; CM-NEXT: CF_END +; CM-NEXT: PAD +; CM-NEXT: Fetch clause starting at 6: +; CM-NEXT: VTX_READ_8 T1.X, T0.X, 40, #3 +; CM-NEXT: VTX_READ_8 T2.X, T0.X, 41, #3 +; CM-NEXT: VTX_READ_8 T0.X, T0.X, 42, #3 +; CM-NEXT: ALU clause starting at 12: +; CM-NEXT: MOV * T0.X, 0.0, +; CM-NEXT: ALU clause starting at 13: +; CM-NEXT: MULLO_INT T0.X (MASKED), T1.X, T2.X, +; CM-NEXT: MULLO_INT T0.Y, T1.X, T2.X, +; 
CM-NEXT: MULLO_INT T0.Z (MASKED), T1.X, T2.X, +; CM-NEXT: MULLO_INT * T0.W (MASKED), T1.X, T2.X, +; CM-NEXT: ADD_INT * T0.W, PV.Y, T0.X, +; CM-NEXT: BFE_INT * T0.X, PV.W, 0.0, literal.x, +; CM-NEXT: 8(1.121039e-44), 0(0.000000e+00) +; CM-NEXT: LSHR * T1.X, KC0[2].Y, literal.x, +; CM-NEXT: 2(2.802597e-45), 0(0.000000e+00) +; +; GCN-LABEL: i8_mad24: +; GCN: ; %bb.0: ; %entry +; GCN-NEXT: s_load_dword s2, s[4:5], 0xb +; GCN-NEXT: s_load_dwordx2 s[0:1], s[4:5], 0x9 +; GCN-NEXT: s_mov_b32 s3, 0xf000 +; GCN-NEXT: s_waitcnt lgkmcnt(0) +; GCN-NEXT: s_lshr_b32 s4, s2, 8 +; GCN-NEXT: s_lshr_b32 s5, s2, 16 +; GCN-NEXT: s_mul_i32 s2, s2, s4 +; GCN-NEXT: s_add_i32 s2, s2, s5 +; GCN-NEXT: s_sext_i32_i8 s4, s2 +; GCN-NEXT: s_mov_b32 s2, -1 +; GCN-NEXT: v_mov_b32_e32 v0, s4 +; GCN-NEXT: buffer_store_dword v0, off, s[0:3], 0 +; GCN-NEXT: s_endpgm +; +; GFX8-LABEL: i8_mad24: +; GFX8: ; %bb.0: ; %entry +; GFX8-NEXT: s_load_dword s6, s[4:5], 0x2c +; GFX8-NEXT: s_load_dwordx2 s[0:1], s[4:5], 0x24 +; GFX8-NEXT: s_mov_b32 s3, 0xf000 +; GFX8-NEXT: s_mov_b32 s2, -1 +; GFX8-NEXT: s_waitcnt lgkmcnt(0) +; GFX8-NEXT: s_lshr_b32 s4, s6, 8 +; GFX8-NEXT: s_lshr_b32 s5, s6, 16 +; GFX8-NEXT: s_mul_i32 s4, s6, s4 +; GFX8-NEXT: s_add_i32 s4, s4, s5 +; GFX8-NEXT: s_sext_i32_i8 s4, s4 +; GFX8-NEXT: v_mov_b32_e32 v0, s4 +; GFX8-NEXT: buffer_store_dword v0, off, s[0:3], 0 +; GFX8-NEXT: s_endpgm entry: %0 = mul i8 %a, %b %1 = add i8 %0, %c @@ -72,11 +266,75 @@ entry: ; 24-bit mad pattern wasn't being matched. ; Check that the select instruction is not deleted. -; FUNC-LABEL: {{^}}i24_i32_i32_mad: -; EG: CNDE_INT -; SI: s_cselect -; GCN2: s_cselect define amdgpu_kernel void @i24_i32_i32_mad(ptr addrspace(1) %out, i32 %a, i32 %b, i32 %c, i32 %d) { +; EG-LABEL: i24_i32_i32_mad: +; EG: ; %bb.0: ; %entry +; EG-NEXT: ALU 7, @4, KC0[CB0:0-32], KC1[] +; EG-NEXT: MEM_RAT_CACHELESS STORE_RAW T0.X, T1.X, 1 +; EG-NEXT: CF_END +; EG-NEXT: PAD +; EG-NEXT: ALU clause starting at 4: +; EG-NEXT: ASHR * T0.W, KC0[2].Z, literal.x, +; EG-NEXT: 8(1.121039e-44), 0(0.000000e+00) +; EG-NEXT: CNDE_INT * T0.W, KC0[3].X, literal.x, PV.W, +; EG-NEXT: 34(4.764415e-44), 0(0.000000e+00) +; EG-NEXT: MULLO_INT * T0.X, PV.W, KC0[3].X, +; EG-NEXT: ADD_INT T0.X, PS, KC0[3].Y, +; EG-NEXT: LSHR * T1.X, KC0[2].Y, literal.x, +; EG-NEXT: 2(2.802597e-45), 0(0.000000e+00) +; +; CM-LABEL: i24_i32_i32_mad: +; CM: ; %bb.0: ; %entry +; CM-NEXT: ALU 10, @4, KC0[CB0:0-32], KC1[] +; CM-NEXT: MEM_RAT_CACHELESS STORE_DWORD T0.X, T1.X +; CM-NEXT: CF_END +; CM-NEXT: PAD +; CM-NEXT: ALU clause starting at 4: +; CM-NEXT: ASHR * T0.W, KC0[2].Z, literal.x, +; CM-NEXT: 8(1.121039e-44), 0(0.000000e+00) +; CM-NEXT: CNDE_INT * T0.W, KC0[3].X, literal.x, PV.W, +; CM-NEXT: 34(4.764415e-44), 0(0.000000e+00) +; CM-NEXT: MULLO_INT T0.X, T0.W, KC0[3].X, +; CM-NEXT: MULLO_INT T0.Y (MASKED), T0.W, KC0[3].X, +; CM-NEXT: MULLO_INT T0.Z (MASKED), T0.W, KC0[3].X, +; CM-NEXT: MULLO_INT * T0.W (MASKED), T0.W, KC0[3].X, +; CM-NEXT: ADD_INT * T0.X, PV.X, KC0[3].Y, +; CM-NEXT: LSHR * T1.X, KC0[2].Y, literal.x, +; CM-NEXT: 2(2.802597e-45), 0(0.000000e+00) +; +; GCN-LABEL: i24_i32_i32_mad: +; GCN: ; %bb.0: ; %entry +; GCN-NEXT: s_load_dword s2, s[4:5], 0xb +; GCN-NEXT: s_load_dwordx2 s[6:7], s[4:5], 0xd +; GCN-NEXT: s_load_dwordx2 s[0:1], s[4:5], 0x9 +; GCN-NEXT: s_mov_b32 s3, 0xf000 +; GCN-NEXT: s_waitcnt lgkmcnt(0) +; GCN-NEXT: s_ashr_i32 s2, s2, 8 +; GCN-NEXT: s_cmp_lg_u32 s6, 0 +; GCN-NEXT: s_cselect_b32 s2, s2, 34 +; GCN-NEXT: s_mul_i32 s2, s2, s6 +; GCN-NEXT: s_add_i32 s4, s2, s7 +; GCN-NEXT: s_mov_b32 
s2, -1 +; GCN-NEXT: v_mov_b32_e32 v0, s4 +; GCN-NEXT: buffer_store_dword v0, off, s[0:3], 0 +; GCN-NEXT: s_endpgm +; +; GFX8-LABEL: i24_i32_i32_mad: +; GFX8: ; %bb.0: ; %entry +; GFX8-NEXT: s_load_dword s8, s[4:5], 0x2c +; GFX8-NEXT: s_load_dwordx2 s[6:7], s[4:5], 0x34 +; GFX8-NEXT: s_load_dwordx2 s[0:1], s[4:5], 0x24 +; GFX8-NEXT: s_mov_b32 s3, 0xf000 +; GFX8-NEXT: s_mov_b32 s2, -1 +; GFX8-NEXT: s_waitcnt lgkmcnt(0) +; GFX8-NEXT: s_ashr_i32 s4, s8, 8 +; GFX8-NEXT: s_cmp_lg_u32 s6, 0 +; GFX8-NEXT: s_cselect_b32 s4, s4, 34 +; GFX8-NEXT: s_mul_i32 s4, s4, s6 +; GFX8-NEXT: s_add_i32 s4, s4, s7 +; GFX8-NEXT: v_mov_b32_e32 v0, s4 +; GFX8-NEXT: buffer_store_dword v0, off, s[0:3], 0 +; GFX8-NEXT: s_endpgm entry: %0 = ashr i32 %a, 8 %1 = icmp ne i32 %c, 0 @@ -87,13 +345,139 @@ entry: ret void } -; FUNC-LABEL: {{^}}extra_and: -; SI-NOT: v_and -; SI: s_mul_i32 -; SI: s_mul_i32 -; SI: s_add_i32 -; SI: s_add_i32 define amdgpu_kernel void @extra_and(ptr addrspace(1) %arg, i32 %arg2, i32 %arg3) { +; EG-LABEL: extra_and: +; EG: ; %bb.0: ; %bb +; EG-NEXT: ALU 5, @10, KC0[CB0:0-32], KC1[] +; EG-NEXT: LOOP_START_DX10 @7 +; EG-NEXT: ALU_PUSH_BEFORE 12, @16, KC0[], KC1[] +; EG-NEXT: JUMP @6 POP:1 +; EG-NEXT: LOOP_BREAK @6 +; EG-NEXT: POP @6 POP:1 +; EG-NEXT: END_LOOP @2 +; EG-NEXT: ALU 1, @29, KC0[], KC1[] +; EG-NEXT: MEM_RAT_CACHELESS STORE_RAW T0.X, T1.X, 1 +; EG-NEXT: CF_END +; EG-NEXT: ALU clause starting at 10: +; EG-NEXT: MOV * T1.W, literal.x, +; EG-NEXT: 0(0.000000e+00), 0(0.000000e+00) +; EG-NEXT: MOV * T3.W, PV.W, +; EG-NEXT: MOV T0.Z, KC0[2].Y, +; EG-NEXT: MOV T0.W, KC0[2].Z, +; EG-NEXT: MOV * T2.W, KC0[2].W, +; EG-NEXT: ALU clause starting at 16: +; EG-NEXT: AND_INT T1.W, T1.W, literal.x, +; EG-NEXT: AND_INT * T4.W, T3.W, literal.x, +; EG-NEXT: 16777215(2.350989e-38), 0(0.000000e+00) +; EG-NEXT: AND_INT T3.W, T3.W, literal.x, +; EG-NEXT: MULLO_INT * T0.X, PS, PV.W, +; EG-NEXT: 16777215(2.350989e-38), 0(0.000000e+00) +; EG-NEXT: MULLO_INT * T0.Y, PV.W, T1.W, +; EG-NEXT: ADD_INT T3.W, T2.W, PS, +; EG-NEXT: ADD_INT * T1.W, T0.W, T0.X, +; EG-NEXT: ADD_INT * T0.X, PS, PV.W, +; EG-NEXT: SETNE_INT * T4.W, PV.X, literal.x, +; EG-NEXT: 8(1.121039e-44), 0(0.000000e+00) +; EG-NEXT: PRED_SETE_INT * ExecMask,PredicateBit (MASKED), PV.W, 0.0, +; EG-NEXT: ALU clause starting at 29: +; EG-NEXT: LSHR * T1.X, T0.Z, literal.x, +; EG-NEXT: 2(2.802597e-45), 0(0.000000e+00) +; +; CM-LABEL: extra_and: +; CM: ; %bb.0: ; %bb +; CM-NEXT: ALU 5, @10, KC0[CB0:0-32], KC1[] +; CM-NEXT: LOOP_START_DX10 @7 +; CM-NEXT: ALU_PUSH_BEFORE 17, @16, KC0[], KC1[] +; CM-NEXT: JUMP @6 POP:1 +; CM-NEXT: LOOP_BREAK @6 +; CM-NEXT: POP @6 POP:1 +; CM-NEXT: END_LOOP @2 +; CM-NEXT: ALU 1, @34, KC0[], KC1[] +; CM-NEXT: MEM_RAT_CACHELESS STORE_DWORD T0.X, T1.X +; CM-NEXT: CF_END +; CM-NEXT: ALU clause starting at 10: +; CM-NEXT: MOV * T0.W, literal.x, +; CM-NEXT: 0(0.000000e+00), 0(0.000000e+00) +; CM-NEXT: MOV * T1.Z, PV.W, +; CM-NEXT: MOV T0.Y, KC0[2].Y, +; CM-NEXT: MOV T0.Z, KC0[2].Z, +; CM-NEXT: MOV * T1.W, KC0[2].W, +; CM-NEXT: ALU clause starting at 16: +; CM-NEXT: AND_INT T1.Y, T1.Z, literal.x, +; CM-NEXT: AND_INT T2.Z, T0.W, literal.x, +; CM-NEXT: AND_INT * T0.W, T1.Z, literal.x, +; CM-NEXT: 16777215(2.350989e-38), 0(0.000000e+00) +; CM-NEXT: MULLO_INT T0.X, T0.W, T2.Z, +; CM-NEXT: MULLO_INT T0.Y (MASKED), T0.W, T2.Z, +; CM-NEXT: MULLO_INT T0.Z (MASKED), T0.W, T2.Z, +; CM-NEXT: MULLO_INT * T0.W (MASKED), T0.W, T2.Z, +; CM-NEXT: MULLO_INT T0.X (MASKED), T1.Y, T2.Z, +; CM-NEXT: MULLO_INT T0.Y (MASKED), T1.Y, T2.Z, +; CM-NEXT: MULLO_INT T0.Z 
(MASKED), T1.Y, T2.Z, +; CM-NEXT: MULLO_INT * T0.W, T1.Y, T2.Z, +; CM-NEXT: ADD_INT T1.Z, T1.W, PV.W, +; CM-NEXT: ADD_INT * T0.W, T0.Z, T0.X, +; CM-NEXT: ADD_INT * T0.X, PV.W, PV.Z, +; CM-NEXT: SETNE_INT * T2.W, PV.X, literal.x, +; CM-NEXT: 8(1.121039e-44), 0(0.000000e+00) +; CM-NEXT: PRED_SETE_INT * ExecMask,PredicateBit (MASKED), PV.W, 0.0, +; CM-NEXT: ALU clause starting at 34: +; CM-NEXT: LSHR * T1.X, T0.Y, literal.x, +; CM-NEXT: 2(2.802597e-45), 0(0.000000e+00) +; +; GCN-LABEL: extra_and: +; GCN: ; %bb.0: ; %bb +; GCN-NEXT: s_load_dwordx2 s[0:1], s[4:5], 0xb +; GCN-NEXT: s_mov_b32 s2, 0 +; GCN-NEXT: s_mov_b32 s6, 0 +; GCN-NEXT: .LBB4_1: ; %bb4 +; GCN-NEXT: ; =>This Inner Loop Header: Depth=1 +; GCN-NEXT: s_and_b32 s3, s6, 0xffffff +; GCN-NEXT: s_and_b32 s6, s6, 0xffffff +; GCN-NEXT: s_and_b32 s2, s2, 0xffffff +; GCN-NEXT: s_mul_i32 s3, s3, s2 +; GCN-NEXT: s_mul_i32 s6, s6, s2 +; GCN-NEXT: s_waitcnt lgkmcnt(0) +; GCN-NEXT: s_add_i32 s2, s0, s3 +; GCN-NEXT: s_add_i32 s6, s1, s6 +; GCN-NEXT: s_add_i32 s3, s2, s6 +; GCN-NEXT: s_cmp_lg_u32 s3, 8 +; GCN-NEXT: s_cbranch_scc1 .LBB4_1 +; GCN-NEXT: ; %bb.2: ; %bb18 +; GCN-NEXT: s_load_dwordx2 s[4:5], s[4:5], 0x9 +; GCN-NEXT: s_mov_b32 s7, 0xf000 +; GCN-NEXT: s_mov_b32 s6, -1 +; GCN-NEXT: v_mov_b32_e32 v0, s3 +; GCN-NEXT: s_waitcnt lgkmcnt(0) +; GCN-NEXT: buffer_store_dword v0, off, s[4:7], 0 +; GCN-NEXT: s_endpgm +; +; GFX8-LABEL: extra_and: +; GFX8: ; %bb.0: ; %bb +; GFX8-NEXT: s_load_dwordx2 s[0:1], s[4:5], 0x2c +; GFX8-NEXT: s_mov_b32 s2, 0 +; GFX8-NEXT: s_mov_b32 s6, 0 +; GFX8-NEXT: .LBB4_1: ; %bb4 +; GFX8-NEXT: ; =>This Inner Loop Header: Depth=1 +; GFX8-NEXT: s_and_b32 s3, s6, 0xffffff +; GFX8-NEXT: s_and_b32 s6, s6, 0xffffff +; GFX8-NEXT: s_and_b32 s2, s2, 0xffffff +; GFX8-NEXT: s_mul_i32 s3, s3, s2 +; GFX8-NEXT: s_mul_i32 s6, s6, s2 +; GFX8-NEXT: s_waitcnt lgkmcnt(0) +; GFX8-NEXT: s_add_i32 s2, s0, s3 +; GFX8-NEXT: s_add_i32 s6, s1, s6 +; GFX8-NEXT: s_add_i32 s3, s2, s6 +; GFX8-NEXT: s_cmp_lg_u32 s3, 8 +; GFX8-NEXT: s_cbranch_scc1 .LBB4_1 +; GFX8-NEXT: ; %bb.2: ; %bb18 +; GFX8-NEXT: s_load_dwordx2 s[4:5], s[4:5], 0x24 +; GFX8-NEXT: s_mov_b32 s7, 0xf000 +; GFX8-NEXT: s_mov_b32 s6, -1 +; GFX8-NEXT: v_mov_b32_e32 v0, s3 +; GFX8-NEXT: s_waitcnt lgkmcnt(0) +; GFX8-NEXT: buffer_store_dword v0, off, s[4:7], 0 +; GFX8-NEXT: s_endpgm bb: br label %bb4 @@ -119,13 +503,139 @@ bb18: ; preds = %bb4 ret void } -; FUNC-LABEL: {{^}}dont_remove_shift -; SI: s_lshr -; SI: s_mul_i32 -; SI: s_mul_i32 -; SI: s_add_i32 -; SI: s_add_i32 define amdgpu_kernel void @dont_remove_shift(ptr addrspace(1) %arg, i32 %arg2, i32 %arg3) { +; EG-LABEL: dont_remove_shift: +; EG: ; %bb.0: ; %bb +; EG-NEXT: ALU 5, @10, KC0[CB0:0-32], KC1[] +; EG-NEXT: LOOP_START_DX10 @7 +; EG-NEXT: ALU_PUSH_BEFORE 12, @16, KC0[], KC1[] +; EG-NEXT: JUMP @6 POP:1 +; EG-NEXT: LOOP_BREAK @6 +; EG-NEXT: POP @6 POP:1 +; EG-NEXT: END_LOOP @2 +; EG-NEXT: ALU 1, @29, KC0[], KC1[] +; EG-NEXT: MEM_RAT_CACHELESS STORE_RAW T0.X, T1.X, 1 +; EG-NEXT: CF_END +; EG-NEXT: ALU clause starting at 10: +; EG-NEXT: MOV * T1.W, literal.x, +; EG-NEXT: 0(0.000000e+00), 0(0.000000e+00) +; EG-NEXT: MOV * T3.W, PV.W, +; EG-NEXT: MOV T0.Z, KC0[2].Y, +; EG-NEXT: MOV T0.W, KC0[2].Z, +; EG-NEXT: MOV * T2.W, KC0[2].W, +; EG-NEXT: ALU clause starting at 16: +; EG-NEXT: LSHR T1.W, T1.W, literal.x, +; EG-NEXT: LSHR * T4.W, T3.W, literal.x, +; EG-NEXT: 8(1.121039e-44), 0(0.000000e+00) +; EG-NEXT: LSHR T3.W, T3.W, literal.x, +; EG-NEXT: MULLO_INT * T0.X, PS, PV.W, +; EG-NEXT: 8(1.121039e-44), 0(0.000000e+00) +; EG-NEXT: MULLO_INT 
* T0.Y, PV.W, T1.W, +; EG-NEXT: ADD_INT T3.W, T2.W, PS, +; EG-NEXT: ADD_INT * T1.W, T0.W, T0.X, +; EG-NEXT: ADD_INT * T0.X, PS, PV.W, +; EG-NEXT: SETNE_INT * T4.W, PV.X, literal.x, +; EG-NEXT: 8(1.121039e-44), 0(0.000000e+00) +; EG-NEXT: PRED_SETE_INT * ExecMask,PredicateBit (MASKED), PV.W, 0.0, +; EG-NEXT: ALU clause starting at 29: +; EG-NEXT: LSHR * T1.X, T0.Z, literal.x, +; EG-NEXT: 2(2.802597e-45), 0(0.000000e+00) +; +; CM-LABEL: dont_remove_shift: +; CM: ; %bb.0: ; %bb +; CM-NEXT: ALU 5, @10, KC0[CB0:0-32], KC1[] +; CM-NEXT: LOOP_START_DX10 @7 +; CM-NEXT: ALU_PUSH_BEFORE 17, @16, KC0[], KC1[] +; CM-NEXT: JUMP @6 POP:1 +; CM-NEXT: LOOP_BREAK @6 +; CM-NEXT: POP @6 POP:1 +; CM-NEXT: END_LOOP @2 +; CM-NEXT: ALU 1, @34, KC0[], KC1[] +; CM-NEXT: MEM_RAT_CACHELESS STORE_DWORD T0.X, T1.X +; CM-NEXT: CF_END +; CM-NEXT: ALU clause starting at 10: +; CM-NEXT: MOV * T0.W, literal.x, +; CM-NEXT: 0(0.000000e+00), 0(0.000000e+00) +; CM-NEXT: MOV * T1.Z, PV.W, +; CM-NEXT: MOV T0.Y, KC0[2].Y, +; CM-NEXT: MOV T0.Z, KC0[2].Z, +; CM-NEXT: MOV * T1.W, KC0[2].W, +; CM-NEXT: ALU clause starting at 16: +; CM-NEXT: LSHR T1.Y, T1.Z, literal.x, +; CM-NEXT: LSHR T2.Z, T0.W, literal.x, +; CM-NEXT: LSHR * T0.W, T1.Z, literal.x, +; CM-NEXT: 8(1.121039e-44), 0(0.000000e+00) +; CM-NEXT: MULLO_INT T0.X, T0.W, T2.Z, +; CM-NEXT: MULLO_INT T0.Y (MASKED), T0.W, T2.Z, +; CM-NEXT: MULLO_INT T0.Z (MASKED), T0.W, T2.Z, +; CM-NEXT: MULLO_INT * T0.W (MASKED), T0.W, T2.Z, +; CM-NEXT: MULLO_INT T0.X (MASKED), T1.Y, T2.Z, +; CM-NEXT: MULLO_INT T0.Y (MASKED), T1.Y, T2.Z, +; CM-NEXT: MULLO_INT T0.Z (MASKED), T1.Y, T2.Z, +; CM-NEXT: MULLO_INT * T0.W, T1.Y, T2.Z, +; CM-NEXT: ADD_INT T1.Z, T1.W, PV.W, +; CM-NEXT: ADD_INT * T0.W, T0.Z, T0.X, +; CM-NEXT: ADD_INT * T0.X, PV.W, PV.Z, +; CM-NEXT: SETNE_INT * T2.W, PV.X, literal.x, +; CM-NEXT: 8(1.121039e-44), 0(0.000000e+00) +; CM-NEXT: PRED_SETE_INT * ExecMask,PredicateBit (MASKED), PV.W, 0.0, +; CM-NEXT: ALU clause starting at 34: +; CM-NEXT: LSHR * T1.X, T0.Y, literal.x, +; CM-NEXT: 2(2.802597e-45), 0(0.000000e+00) +; +; GCN-LABEL: dont_remove_shift: +; GCN: ; %bb.0: ; %bb +; GCN-NEXT: s_load_dwordx2 s[0:1], s[4:5], 0xb +; GCN-NEXT: s_mov_b32 s2, 0 +; GCN-NEXT: s_mov_b32 s6, 0 +; GCN-NEXT: .LBB5_1: ; %bb4 +; GCN-NEXT: ; =>This Inner Loop Header: Depth=1 +; GCN-NEXT: s_lshr_b32 s3, s6, 8 +; GCN-NEXT: s_lshr_b32 s6, s6, 8 +; GCN-NEXT: s_lshr_b32 s2, s2, 8 +; GCN-NEXT: s_mul_i32 s3, s3, s2 +; GCN-NEXT: s_mul_i32 s6, s6, s2 +; GCN-NEXT: s_waitcnt lgkmcnt(0) +; GCN-NEXT: s_add_i32 s2, s0, s3 +; GCN-NEXT: s_add_i32 s6, s1, s6 +; GCN-NEXT: s_add_i32 s3, s2, s6 +; GCN-NEXT: s_cmp_lg_u32 s3, 8 +; GCN-NEXT: s_cbranch_scc1 .LBB5_1 +; GCN-NEXT: ; %bb.2: ; %bb18 +; GCN-NEXT: s_load_dwordx2 s[4:5], s[4:5], 0x9 +; GCN-NEXT: s_mov_b32 s7, 0xf000 +; GCN-NEXT: s_mov_b32 s6, -1 +; GCN-NEXT: v_mov_b32_e32 v0, s3 +; GCN-NEXT: s_waitcnt lgkmcnt(0) +; GCN-NEXT: buffer_store_dword v0, off, s[4:7], 0 +; GCN-NEXT: s_endpgm +; +; GFX8-LABEL: dont_remove_shift: +; GFX8: ; %bb.0: ; %bb +; GFX8-NEXT: s_load_dwordx2 s[0:1], s[4:5], 0x2c +; GFX8-NEXT: s_mov_b32 s2, 0 +; GFX8-NEXT: s_mov_b32 s6, 0 +; GFX8-NEXT: .LBB5_1: ; %bb4 +; GFX8-NEXT: ; =>This Inner Loop Header: Depth=1 +; GFX8-NEXT: s_lshr_b32 s3, s6, 8 +; GFX8-NEXT: s_lshr_b32 s6, s6, 8 +; GFX8-NEXT: s_lshr_b32 s2, s2, 8 +; GFX8-NEXT: s_mul_i32 s3, s3, s2 +; GFX8-NEXT: s_mul_i32 s6, s6, s2 +; GFX8-NEXT: s_waitcnt lgkmcnt(0) +; GFX8-NEXT: s_add_i32 s2, s0, s3 +; GFX8-NEXT: s_add_i32 s6, s1, s6 +; GFX8-NEXT: s_add_i32 s3, s2, s6 +; GFX8-NEXT: s_cmp_lg_u32 s3, 8 
+; GFX8-NEXT: s_cbranch_scc1 .LBB5_1 +; GFX8-NEXT: ; %bb.2: ; %bb18 +; GFX8-NEXT: s_load_dwordx2 s[4:5], s[4:5], 0x24 +; GFX8-NEXT: s_mov_b32 s7, 0xf000 +; GFX8-NEXT: s_mov_b32 s6, -1 +; GFX8-NEXT: v_mov_b32_e32 v0, s3 +; GFX8-NEXT: s_waitcnt lgkmcnt(0) +; GFX8-NEXT: buffer_store_dword v0, off, s[4:7], 0 +; GFX8-NEXT: s_endpgm bb: br label %bb4 @@ -151,19 +661,234 @@ bb18: ; preds = %bb4 ret void } -; FUNC-LABEL: {{^}}i8_mad_sat_16: -; EG: MULLO_INT {{[* ]*}}T{{[0-9]}}.[[MAD_CHAN:[XYZW]]] -; EG: ADD_INT {{[* ]*}}T{{[0-9]}}.[[MAD_CHAN:[XYZW]]] -; The result must be sign-extended -; EG: BFE_INT {{[* ]*}}T{{[0-9]\.[XYZW]}}, PV.[[MAD_CHAN]], 0.0, literal.x -; EG: 8 -; SI: v_mad_u32_u24 [[MAD:v[0-9]]], {{[sv][0-9], [sv][0-9]}} -; SI: v_bfe_i32 [[EXT:v[0-9]]], [[MAD]], 0, 16 -; SI: v_med3_i32 v{{[0-9]}}, [[EXT]], -; VI: v_mad_u16 [[MAD:v[0-9]]], {{[sv][0-9], [sv][0-9]}} -; VI: v_max_i16_e32 [[MAX:v[0-9]]], 0xff80, [[MAD]] -; VI: v_min_i16_e32 {{v[0-9]}}, 0x7f, [[MAX]] define amdgpu_kernel void @i8_mad_sat_16(ptr addrspace(1) %out, ptr addrspace(1) %in0, ptr addrspace(1) %in1, ptr addrspace(1) %in2, ptr addrspace(5) %idx) { +; EG-LABEL: i8_mad_sat_16: +; EG: ; %bb.0: ; %entry +; EG-NEXT: ALU 4, @14, KC0[CB0:0-32], KC1[] +; EG-NEXT: TEX 0 @8 +; EG-NEXT: ALU 1, @19, KC0[CB0:0-32], KC1[] +; EG-NEXT: TEX 1 @10 +; EG-NEXT: ALU 24, @21, KC0[CB0:0-32], KC1[] +; EG-NEXT: MEM_RAT MSKOR T0.XW, T1.X +; EG-NEXT: CF_END +; EG-NEXT: PAD +; EG-NEXT: Fetch clause starting at 8: +; EG-NEXT: VTX_READ_8 T1.X, T1.X, 0, #1 +; EG-NEXT: Fetch clause starting at 10: +; EG-NEXT: VTX_READ_8 T3.X, T3.X, 0, #1 +; EG-NEXT: VTX_READ_8 T2.X, T2.X, 0, #1 +; EG-NEXT: ALU clause starting at 14: +; EG-NEXT: LSHR * T0.W, KC0[3].Y, literal.x, +; EG-NEXT: 2(2.802597e-45), 0(0.000000e+00) +; EG-NEXT: MOVA_INT * AR.x (MASKED), PV.W, +; EG-NEXT: MOV * T0.X, T(0 + AR.x).X+, +; EG-NEXT: ADD_INT * T1.X, KC0[2].W, PV.X, +; EG-NEXT: ALU clause starting at 19: +; EG-NEXT: ADD_INT T2.X, KC0[2].Z, T0.X, +; EG-NEXT: ADD_INT * T3.X, KC0[3].X, T0.X, +; EG-NEXT: ALU clause starting at 21: +; EG-NEXT: BFE_INT T0.Z, T1.X, 0.0, literal.x, +; EG-NEXT: BFE_INT * T0.W, T2.X, 0.0, literal.x, BS:VEC_120/SCL_212 +; EG-NEXT: 8(1.121039e-44), 0(0.000000e+00) +; EG-NEXT: BFE_INT T1.W, T3.X, 0.0, literal.x, +; EG-NEXT: MULLO_INT * T0.Y, PV.Z, PV.W, +; EG-NEXT: 8(1.121039e-44), 0(0.000000e+00) +; EG-NEXT: ADD_INT * T0.W, PS, PV.W, +; EG-NEXT: BFE_INT * T0.W, PV.W, 0.0, literal.x, +; EG-NEXT: 16(2.242078e-44), 0(0.000000e+00) +; EG-NEXT: MAX_INT T0.W, PV.W, literal.x, +; EG-NEXT: ADD_INT * T1.W, KC0[2].Y, T0.X, +; EG-NEXT: -128(nan), 0(0.000000e+00) +; EG-NEXT: AND_INT T2.W, PS, literal.x, +; EG-NEXT: MIN_INT * T0.W, PV.W, literal.y, +; EG-NEXT: 3(4.203895e-45), 127(1.779649e-43) +; EG-NEXT: AND_INT T0.W, PS, literal.x, +; EG-NEXT: LSHL * T2.W, PV.W, literal.y, +; EG-NEXT: 255(3.573311e-43), 3(4.203895e-45) +; EG-NEXT: LSHL T0.X, PV.W, PS, +; EG-NEXT: LSHL * T0.W, literal.x, PS, +; EG-NEXT: 255(3.573311e-43), 0(0.000000e+00) +; EG-NEXT: MOV T0.Y, 0.0, +; EG-NEXT: MOV * T0.Z, 0.0, +; EG-NEXT: LSHR * T1.X, T1.W, literal.x, +; EG-NEXT: 2(2.802597e-45), 0(0.000000e+00) +; +; CM-LABEL: i8_mad_sat_16: +; CM: ; %bb.0: ; %entry +; CM-NEXT: ALU 4, @14, KC0[CB0:0-32], KC1[] +; CM-NEXT: TEX 0 @8 +; CM-NEXT: ALU 1, @19, KC0[CB0:0-32], KC1[] +; CM-NEXT: TEX 1 @10 +; CM-NEXT: ALU 26, @21, KC0[CB0:0-32], KC1[] +; CM-NEXT: MEM_RAT MSKOR T1.XW, T0.X +; CM-NEXT: CF_END +; CM-NEXT: PAD +; CM-NEXT: Fetch clause starting at 8: +; CM-NEXT: VTX_READ_8 T1.X, T1.X, 0, #1 +; CM-NEXT: 
Fetch clause starting at 10: +; CM-NEXT: VTX_READ_8 T3.X, T3.X, 0, #1 +; CM-NEXT: VTX_READ_8 T2.X, T2.X, 0, #1 +; CM-NEXT: ALU clause starting at 14: +; CM-NEXT: LSHR * T0.W, KC0[3].Y, literal.x, +; CM-NEXT: 2(2.802597e-45), 0(0.000000e+00) +; CM-NEXT: MOVA_INT * AR.x (MASKED), PV.W, +; CM-NEXT: MOV * T0.X, T(0 + AR.x).X+, +; CM-NEXT: ADD_INT * T1.X, KC0[3].X, PV.X, +; CM-NEXT: ALU clause starting at 19: +; CM-NEXT: ADD_INT * T2.X, KC0[2].W, T0.X, +; CM-NEXT: ADD_INT * T3.X, KC0[2].Z, T0.X, +; CM-NEXT: ALU clause starting at 21: +; CM-NEXT: BFE_INT T0.Y, T1.X, 0.0, literal.x, +; CM-NEXT: BFE_INT T0.Z, T2.X, 0.0, literal.x, BS:VEC_120/SCL_212 +; CM-NEXT: BFE_INT * T0.W, T3.X, 0.0, literal.x, BS:VEC_201 +; CM-NEXT: 8(1.121039e-44), 0(0.000000e+00) +; CM-NEXT: MULLO_INT T0.X (MASKED), T0.Z, T0.W, +; CM-NEXT: MULLO_INT T0.Y (MASKED), T0.Z, T0.W, +; CM-NEXT: MULLO_INT T0.Z, T0.Z, T0.W, +; CM-NEXT: MULLO_INT * T0.W (MASKED), T0.Z, T0.W, +; CM-NEXT: ADD_INT * T0.W, PV.Z, T0.Y, +; CM-NEXT: BFE_INT * T0.W, PV.W, 0.0, literal.x, +; CM-NEXT: 16(2.242078e-44), 0(0.000000e+00) +; CM-NEXT: MAX_INT T0.Z, PV.W, literal.x, +; CM-NEXT: ADD_INT * T0.W, KC0[2].Y, T0.X, +; CM-NEXT: -128(nan), 0(0.000000e+00) +; CM-NEXT: AND_INT T1.Z, PV.W, literal.x, +; CM-NEXT: MIN_INT * T1.W, PV.Z, literal.y, +; CM-NEXT: 3(4.203895e-45), 127(1.779649e-43) +; CM-NEXT: AND_INT T0.Z, PV.W, literal.x, +; CM-NEXT: LSHL * T1.W, PV.Z, literal.y, +; CM-NEXT: 255(3.573311e-43), 3(4.203895e-45) +; CM-NEXT: LSHL T1.X, PV.Z, PV.W, +; CM-NEXT: LSHL * T1.W, literal.x, PV.W, +; CM-NEXT: 255(3.573311e-43), 0(0.000000e+00) +; CM-NEXT: MOV T1.Y, 0.0, +; CM-NEXT: MOV * T1.Z, 0.0, +; CM-NEXT: LSHR * T0.X, T0.W, literal.x, +; CM-NEXT: 2(2.802597e-45), 0(0.000000e+00) +; +; GCN-LABEL: i8_mad_sat_16: +; GCN: ; %bb.0: ; %entry +; GCN-NEXT: s_mov_b32 s20, SCRATCH_RSRC_DWORD0 +; GCN-NEXT: s_mov_b32 s21, SCRATCH_RSRC_DWORD1 +; GCN-NEXT: s_mov_b32 s22, -1 +; GCN-NEXT: s_mov_b32 s23, 0xe8f000 +; GCN-NEXT: s_add_u32 s20, s20, s11 +; GCN-NEXT: s_addc_u32 s21, s21, 0 +; GCN-NEXT: s_load_dword s8, s[4:5], 0x11 +; GCN-NEXT: s_waitcnt lgkmcnt(0) +; GCN-NEXT: s_add_i32 s9, s8, 4 +; GCN-NEXT: s_load_dwordx8 s[0:7], s[4:5], 0x9 +; GCN-NEXT: v_mov_b32_e32 v0, s8 +; GCN-NEXT: v_mov_b32_e32 v1, s9 +; GCN-NEXT: buffer_load_dword v1, v1, s[20:23], 0 offen +; GCN-NEXT: buffer_load_dword v0, v0, s[20:23], 0 offen +; GCN-NEXT: s_mov_b32 s11, 0xf000 +; GCN-NEXT: s_mov_b32 s10, 0 +; GCN-NEXT: s_mov_b64 s[14:15], s[10:11] +; GCN-NEXT: s_mov_b64 s[18:19], s[10:11] +; GCN-NEXT: s_waitcnt lgkmcnt(0) +; GCN-NEXT: s_mov_b64 s[8:9], s[2:3] +; GCN-NEXT: s_mov_b64 s[12:13], s[4:5] +; GCN-NEXT: s_mov_b64 s[16:17], s[6:7] +; GCN-NEXT: s_waitcnt vmcnt(0) +; GCN-NEXT: buffer_load_sbyte v2, v[0:1], s[12:15], 0 addr64 +; GCN-NEXT: buffer_load_sbyte v3, v[0:1], s[8:11], 0 addr64 +; GCN-NEXT: buffer_load_sbyte v4, v[0:1], s[16:19], 0 addr64 +; GCN-NEXT: s_movk_i32 s2, 0xff80 +; GCN-NEXT: s_waitcnt vmcnt(2) +; GCN-NEXT: v_and_b32_e32 v2, 0xffff, v2 +; GCN-NEXT: s_waitcnt vmcnt(1) +; GCN-NEXT: v_and_b32_e32 v3, 0xffff, v3 +; GCN-NEXT: s_waitcnt vmcnt(0) +; GCN-NEXT: v_mad_u32_u24 v2, v2, v3, v4 +; GCN-NEXT: v_bfe_i32 v2, v2, 0, 16 +; GCN-NEXT: v_mov_b32_e32 v3, 0x7f +; GCN-NEXT: v_med3_i32 v2, v2, s2, v3 +; GCN-NEXT: s_mov_b64 s[2:3], s[10:11] +; GCN-NEXT: buffer_store_byte v2, v[0:1], s[0:3], 0 addr64 +; GCN-NEXT: s_endpgm +; +; SI-LABEL: i8_mad_sat_16: +; SI: ; %bb.0: ; %entry +; SI-NEXT: s_mov_b32 s88, SCRATCH_RSRC_DWORD0 +; SI-NEXT: s_load_dword s0, s[4:5], 0x44 +; SI-NEXT: s_mov_b32 
s89, SCRATCH_RSRC_DWORD1 +; SI-NEXT: s_mov_b32 s90, -1 +; SI-NEXT: s_mov_b32 s91, 0xe80000 +; SI-NEXT: s_add_u32 s88, s88, s11 +; SI-NEXT: s_addc_u32 s89, s89, 0 +; SI-NEXT: s_waitcnt lgkmcnt(0) +; SI-NEXT: s_add_i32 s1, s0, 4 +; SI-NEXT: v_mov_b32_e32 v0, s0 +; SI-NEXT: buffer_load_dword v6, v0, s[88:91], 0 offen +; SI-NEXT: v_mov_b32_e32 v0, s1 +; SI-NEXT: buffer_load_dword v7, v0, s[88:91], 0 offen +; SI-NEXT: s_load_dwordx8 s[0:7], s[4:5], 0x24 +; SI-NEXT: s_waitcnt lgkmcnt(0) +; SI-NEXT: v_mov_b32_e32 v1, s3 +; SI-NEXT: v_mov_b32_e32 v3, s5 +; SI-NEXT: v_mov_b32_e32 v5, s7 +; SI-NEXT: s_waitcnt vmcnt(1) +; SI-NEXT: v_add_u32_e32 v0, vcc, s2, v6 +; SI-NEXT: s_waitcnt vmcnt(0) +; SI-NEXT: v_addc_u32_e32 v1, vcc, v1, v7, vcc +; SI-NEXT: v_add_u32_e32 v2, vcc, s4, v6 +; SI-NEXT: v_addc_u32_e32 v3, vcc, v3, v7, vcc +; SI-NEXT: v_add_u32_e32 v4, vcc, s6, v6 +; SI-NEXT: v_addc_u32_e32 v5, vcc, v5, v7, vcc +; SI-NEXT: flat_load_sbyte v0, v[0:1] +; SI-NEXT: flat_load_sbyte v1, v[2:3] +; SI-NEXT: flat_load_sbyte v2, v[4:5] +; SI-NEXT: v_mov_b32_e32 v3, s1 +; SI-NEXT: s_waitcnt vmcnt(0) +; SI-NEXT: v_mad_u16 v0, v1, v0, v2 +; SI-NEXT: v_max_i16_e32 v0, 0xff80, v0 +; SI-NEXT: v_min_i16_e32 v2, 0x7f, v0 +; SI-NEXT: v_add_u32_e32 v0, vcc, s0, v6 +; SI-NEXT: v_addc_u32_e32 v1, vcc, v3, v7, vcc +; SI-NEXT: flat_store_byte v[0:1], v2 +; SI-NEXT: s_endpgm +; +; VI-LABEL: i8_mad_sat_16: +; VI: ; %bb.0: ; %entry +; VI-NEXT: s_mov_b32 s12, SCRATCH_RSRC_DWORD0 +; VI-NEXT: s_load_dword s0, s[4:5], 0x44 +; VI-NEXT: s_mov_b32 s13, SCRATCH_RSRC_DWORD1 +; VI-NEXT: s_mov_b32 s14, -1 +; VI-NEXT: s_mov_b32 s15, 0xe80000 +; VI-NEXT: s_add_u32 s12, s12, s11 +; VI-NEXT: s_addc_u32 s13, s13, 0 +; VI-NEXT: s_waitcnt lgkmcnt(0) +; VI-NEXT: s_add_i32 s1, s0, 4 +; VI-NEXT: v_mov_b32_e32 v0, s0 +; VI-NEXT: buffer_load_dword v6, v0, s[12:15], 0 offen +; VI-NEXT: v_mov_b32_e32 v0, s1 +; VI-NEXT: buffer_load_dword v7, v0, s[12:15], 0 offen +; VI-NEXT: s_load_dwordx8 s[0:7], s[4:5], 0x24 +; VI-NEXT: s_waitcnt lgkmcnt(0) +; VI-NEXT: v_mov_b32_e32 v1, s3 +; VI-NEXT: v_mov_b32_e32 v3, s5 +; VI-NEXT: v_mov_b32_e32 v5, s7 +; VI-NEXT: s_waitcnt vmcnt(1) +; VI-NEXT: v_add_u32_e32 v0, vcc, s2, v6 +; VI-NEXT: s_waitcnt vmcnt(0) +; VI-NEXT: v_addc_u32_e32 v1, vcc, v1, v7, vcc +; VI-NEXT: v_add_u32_e32 v2, vcc, s4, v6 +; VI-NEXT: v_addc_u32_e32 v3, vcc, v3, v7, vcc +; VI-NEXT: v_add_u32_e32 v4, vcc, s6, v6 +; VI-NEXT: v_addc_u32_e32 v5, vcc, v5, v7, vcc +; VI-NEXT: flat_load_sbyte v0, v[0:1] +; VI-NEXT: flat_load_sbyte v1, v[2:3] +; VI-NEXT: flat_load_sbyte v2, v[4:5] +; VI-NEXT: v_mov_b32_e32 v3, s1 +; VI-NEXT: s_waitcnt vmcnt(0) +; VI-NEXT: v_mad_u16 v0, v1, v0, v2 +; VI-NEXT: v_max_i16_e32 v0, 0xff80, v0 +; VI-NEXT: v_min_i16_e32 v2, 0x7f, v0 +; VI-NEXT: v_add_u32_e32 v0, vcc, s0, v6 +; VI-NEXT: v_addc_u32_e32 v1, vcc, v3, v7, vcc +; VI-NEXT: flat_store_byte v[0:1], v2 +; VI-NEXT: s_endpgm entry: %retval.0.i = load i64, ptr addrspace(5) %idx %arrayidx = getelementptr inbounds i8, ptr addrspace(1) %in0, i64 %retval.0.i @@ -187,16 +912,201 @@ entry: ret void } -; FUNC-LABEL: {{^}}i8_mad_32: -; EG: MULLO_INT {{[* ]*}}T{{[0-9]}}.[[MAD_CHAN:[XYZW]]] -; EG: ADD_INT {{[* ]*}}T{{[0-9]}}.[[MAD_CHAN:[XYZW]]] -; The result must be sign-extended -; EG: BFE_INT {{[* ]*}}T{{[0-9]\.[XYZW]}}, PV.[[MAD_CHAN]], 0.0, literal.x -; EG: 8 -; SI: v_mad_u32_u24 [[MAD:v[0-9]]], {{[sv][0-9], [sv][0-9]}} -; VI: v_mad_u16 [[MAD:v[0-9]]], {{[sv][0-9], [sv][0-9]}} -; GCN: v_bfe_i32 [[EXT:v[0-9]]], [[MAD]], 0, 16 define amdgpu_kernel void @i8_mad_32(ptr 
addrspace(1) %out, ptr addrspace(1) %a, ptr addrspace(1) %b, ptr addrspace(1) %c, ptr addrspace(5) %idx) { +; EG-LABEL: i8_mad_32: +; EG: ; %bb.0: ; %entry +; EG-NEXT: ALU 4, @14, KC0[CB0:0-32], KC1[] +; EG-NEXT: TEX 0 @8 +; EG-NEXT: ALU 1, @19, KC0[CB0:0-32], KC1[] +; EG-NEXT: TEX 1 @10 +; EG-NEXT: ALU 9, @21, KC0[CB0:0-32], KC1[] +; EG-NEXT: MEM_RAT_CACHELESS STORE_RAW T0.X, T1.X, 1 +; EG-NEXT: CF_END +; EG-NEXT: PAD +; EG-NEXT: Fetch clause starting at 8: +; EG-NEXT: VTX_READ_8 T1.X, T1.X, 0, #1 +; EG-NEXT: Fetch clause starting at 10: +; EG-NEXT: VTX_READ_8 T0.X, T0.X, 0, #1 +; EG-NEXT: VTX_READ_8 T2.X, T2.X, 0, #1 +; EG-NEXT: ALU clause starting at 14: +; EG-NEXT: LSHR * T0.W, KC0[3].Y, literal.x, +; EG-NEXT: 2(2.802597e-45), 0(0.000000e+00) +; EG-NEXT: MOVA_INT * AR.x (MASKED), PV.W, +; EG-NEXT: MOV * T0.X, T(0 + AR.x).X+, +; EG-NEXT: ADD_INT * T1.X, KC0[2].W, PV.X, +; EG-NEXT: ALU clause starting at 19: +; EG-NEXT: ADD_INT T2.X, KC0[2].Z, T0.X, +; EG-NEXT: ADD_INT * T0.X, KC0[3].X, T0.X, +; EG-NEXT: ALU clause starting at 21: +; EG-NEXT: BFE_INT T0.Z, T1.X, 0.0, literal.x, +; EG-NEXT: BFE_INT * T0.W, T2.X, 0.0, literal.x, BS:VEC_120/SCL_212 +; EG-NEXT: 8(1.121039e-44), 0(0.000000e+00) +; EG-NEXT: BFE_INT T1.W, T0.X, 0.0, literal.x, +; EG-NEXT: MULLO_INT * T0.X, PV.W, PV.Z, +; EG-NEXT: 8(1.121039e-44), 0(0.000000e+00) +; EG-NEXT: ADD_INT * T0.W, PS, PV.W, +; EG-NEXT: BFE_INT T0.X, PV.W, 0.0, literal.x, +; EG-NEXT: LSHR * T1.X, KC0[2].Y, literal.y, +; EG-NEXT: 16(2.242078e-44), 2(2.802597e-45) +; +; CM-LABEL: i8_mad_32: +; CM: ; %bb.0: ; %entry +; CM-NEXT: ALU 4, @14, KC0[CB0:0-32], KC1[] +; CM-NEXT: TEX 0 @8 +; CM-NEXT: ALU 1, @19, KC0[CB0:0-32], KC1[] +; CM-NEXT: TEX 1 @10 +; CM-NEXT: ALU 12, @21, KC0[CB0:0-32], KC1[] +; CM-NEXT: MEM_RAT_CACHELESS STORE_DWORD T0.X, T1.X +; CM-NEXT: CF_END +; CM-NEXT: PAD +; CM-NEXT: Fetch clause starting at 8: +; CM-NEXT: VTX_READ_8 T1.X, T1.X, 0, #1 +; CM-NEXT: Fetch clause starting at 10: +; CM-NEXT: VTX_READ_8 T0.X, T0.X, 0, #1 +; CM-NEXT: VTX_READ_8 T2.X, T2.X, 0, #1 +; CM-NEXT: ALU clause starting at 14: +; CM-NEXT: LSHR * T0.W, KC0[3].Y, literal.x, +; CM-NEXT: 2(2.802597e-45), 0(0.000000e+00) +; CM-NEXT: MOVA_INT * AR.x (MASKED), PV.W, +; CM-NEXT: MOV * T0.X, T(0 + AR.x).X+, +; CM-NEXT: ADD_INT * T1.X, KC0[3].X, PV.X, +; CM-NEXT: ALU clause starting at 19: +; CM-NEXT: ADD_INT * T2.X, KC0[2].W, T0.X, +; CM-NEXT: ADD_INT * T0.X, KC0[2].Z, T0.X, +; CM-NEXT: ALU clause starting at 21: +; CM-NEXT: BFE_INT T0.Y, T1.X, 0.0, literal.x, +; CM-NEXT: BFE_INT T0.Z, T2.X, 0.0, literal.x, BS:VEC_120/SCL_212 +; CM-NEXT: BFE_INT * T0.W, T0.X, 0.0, literal.x, BS:VEC_201 +; CM-NEXT: 8(1.121039e-44), 0(0.000000e+00) +; CM-NEXT: MULLO_INT T0.X, T0.W, T0.Z, +; CM-NEXT: MULLO_INT T0.Y (MASKED), T0.W, T0.Z, +; CM-NEXT: MULLO_INT T0.Z (MASKED), T0.W, T0.Z, +; CM-NEXT: MULLO_INT * T0.W (MASKED), T0.W, T0.Z, +; CM-NEXT: ADD_INT * T0.W, PV.X, T0.Y, +; CM-NEXT: BFE_INT * T0.X, PV.W, 0.0, literal.x, +; CM-NEXT: 16(2.242078e-44), 0(0.000000e+00) +; CM-NEXT: LSHR * T1.X, KC0[2].Y, literal.x, +; CM-NEXT: 2(2.802597e-45), 0(0.000000e+00) +; +; GCN-LABEL: i8_mad_32: +; GCN: ; %bb.0: ; %entry +; GCN-NEXT: s_mov_b32 s24, SCRATCH_RSRC_DWORD0 +; GCN-NEXT: s_mov_b32 s25, SCRATCH_RSRC_DWORD1 +; GCN-NEXT: s_mov_b32 s26, -1 +; GCN-NEXT: s_mov_b32 s27, 0xe8f000 +; GCN-NEXT: s_add_u32 s24, s24, s11 +; GCN-NEXT: s_addc_u32 s25, s25, 0 +; GCN-NEXT: s_load_dword s8, s[4:5], 0x11 +; GCN-NEXT: s_waitcnt lgkmcnt(0) +; GCN-NEXT: s_add_i32 s9, s8, 4 +; GCN-NEXT: s_load_dwordx8 s[0:7], s[4:5], 
0x9 +; GCN-NEXT: v_mov_b32_e32 v0, s8 +; GCN-NEXT: v_mov_b32_e32 v1, s9 +; GCN-NEXT: buffer_load_dword v1, v1, s[24:27], 0 offen +; GCN-NEXT: buffer_load_dword v0, v0, s[24:27], 0 offen +; GCN-NEXT: s_mov_b32 s11, 0xf000 +; GCN-NEXT: s_mov_b32 s14, 0 +; GCN-NEXT: s_mov_b32 s15, s11 +; GCN-NEXT: s_mov_b64 s[18:19], s[14:15] +; GCN-NEXT: s_mov_b64 s[22:23], s[14:15] +; GCN-NEXT: s_waitcnt lgkmcnt(0) +; GCN-NEXT: s_mov_b64 s[12:13], s[2:3] +; GCN-NEXT: s_mov_b64 s[16:17], s[4:5] +; GCN-NEXT: s_mov_b64 s[20:21], s[6:7] +; GCN-NEXT: s_waitcnt vmcnt(0) +; GCN-NEXT: buffer_load_sbyte v2, v[0:1], s[12:15], 0 addr64 +; GCN-NEXT: buffer_load_sbyte v3, v[0:1], s[16:19], 0 addr64 +; GCN-NEXT: buffer_load_sbyte v0, v[0:1], s[20:23], 0 addr64 +; GCN-NEXT: s_mov_b32 s10, -1 +; GCN-NEXT: s_mov_b32 s8, s0 +; GCN-NEXT: s_mov_b32 s9, s1 +; GCN-NEXT: s_waitcnt vmcnt(2) +; GCN-NEXT: v_and_b32_e32 v1, 0xffff, v2 +; GCN-NEXT: s_waitcnt vmcnt(1) +; GCN-NEXT: v_and_b32_e32 v2, 0xffff, v3 +; GCN-NEXT: s_waitcnt vmcnt(0) +; GCN-NEXT: v_mad_u32_u24 v0, v1, v2, v0 +; GCN-NEXT: v_bfe_i32 v0, v0, 0, 16 +; GCN-NEXT: buffer_store_dword v0, off, s[8:11], 0 +; GCN-NEXT: s_endpgm +; +; SI-LABEL: i8_mad_32: +; SI: ; %bb.0: ; %entry +; SI-NEXT: s_mov_b32 s88, SCRATCH_RSRC_DWORD0 +; SI-NEXT: s_load_dword s0, s[4:5], 0x44 +; SI-NEXT: s_mov_b32 s89, SCRATCH_RSRC_DWORD1 +; SI-NEXT: s_mov_b32 s90, -1 +; SI-NEXT: s_mov_b32 s91, 0xe80000 +; SI-NEXT: s_add_u32 s88, s88, s11 +; SI-NEXT: s_addc_u32 s89, s89, 0 +; SI-NEXT: s_waitcnt lgkmcnt(0) +; SI-NEXT: s_add_i32 s1, s0, 4 +; SI-NEXT: v_mov_b32_e32 v0, s0 +; SI-NEXT: buffer_load_dword v4, v0, s[88:91], 0 offen +; SI-NEXT: v_mov_b32_e32 v0, s1 +; SI-NEXT: buffer_load_dword v5, v0, s[88:91], 0 offen +; SI-NEXT: s_load_dwordx8 s[0:7], s[4:5], 0x24 +; SI-NEXT: s_waitcnt lgkmcnt(0) +; SI-NEXT: v_mov_b32_e32 v1, s3 +; SI-NEXT: v_mov_b32_e32 v3, s5 +; SI-NEXT: v_mov_b32_e32 v6, s7 +; SI-NEXT: s_mov_b32 s3, 0xf000 +; SI-NEXT: s_waitcnt vmcnt(1) +; SI-NEXT: v_add_u32_e32 v0, vcc, s2, v4 +; SI-NEXT: s_waitcnt vmcnt(0) +; SI-NEXT: v_addc_u32_e32 v1, vcc, v1, v5, vcc +; SI-NEXT: v_add_u32_e32 v2, vcc, s4, v4 +; SI-NEXT: v_addc_u32_e32 v3, vcc, v3, v5, vcc +; SI-NEXT: v_add_u32_e32 v4, vcc, s6, v4 +; SI-NEXT: v_addc_u32_e32 v5, vcc, v6, v5, vcc +; SI-NEXT: flat_load_sbyte v0, v[0:1] +; SI-NEXT: flat_load_sbyte v1, v[2:3] +; SI-NEXT: flat_load_sbyte v2, v[4:5] +; SI-NEXT: s_mov_b32 s2, -1 +; SI-NEXT: s_waitcnt vmcnt(0) +; SI-NEXT: v_mad_u16 v0, v0, v1, v2 +; SI-NEXT: v_bfe_i32 v0, v0, 0, 16 +; SI-NEXT: buffer_store_dword v0, off, s[0:3], 0 +; SI-NEXT: s_endpgm +; +; VI-LABEL: i8_mad_32: +; VI: ; %bb.0: ; %entry +; VI-NEXT: s_mov_b32 s12, SCRATCH_RSRC_DWORD0 +; VI-NEXT: s_load_dword s0, s[4:5], 0x44 +; VI-NEXT: s_mov_b32 s13, SCRATCH_RSRC_DWORD1 +; VI-NEXT: s_mov_b32 s14, -1 +; VI-NEXT: s_mov_b32 s15, 0xe80000 +; VI-NEXT: s_add_u32 s12, s12, s11 +; VI-NEXT: s_addc_u32 s13, s13, 0 +; VI-NEXT: s_waitcnt lgkmcnt(0) +; VI-NEXT: s_add_i32 s1, s0, 4 +; VI-NEXT: v_mov_b32_e32 v0, s0 +; VI-NEXT: buffer_load_dword v4, v0, s[12:15], 0 offen +; VI-NEXT: v_mov_b32_e32 v0, s1 +; VI-NEXT: buffer_load_dword v5, v0, s[12:15], 0 offen +; VI-NEXT: s_load_dwordx8 s[0:7], s[4:5], 0x24 +; VI-NEXT: s_waitcnt lgkmcnt(0) +; VI-NEXT: v_mov_b32_e32 v1, s3 +; VI-NEXT: v_mov_b32_e32 v3, s5 +; VI-NEXT: v_mov_b32_e32 v6, s7 +; VI-NEXT: s_mov_b32 s3, 0xf000 +; VI-NEXT: s_waitcnt vmcnt(1) +; VI-NEXT: v_add_u32_e32 v0, vcc, s2, v4 +; VI-NEXT: s_waitcnt vmcnt(0) +; VI-NEXT: v_addc_u32_e32 v1, vcc, v1, v5, vcc +; VI-NEXT: 
v_add_u32_e32 v2, vcc, s4, v4 +; VI-NEXT: v_addc_u32_e32 v3, vcc, v3, v5, vcc +; VI-NEXT: v_add_u32_e32 v4, vcc, s6, v4 +; VI-NEXT: v_addc_u32_e32 v5, vcc, v6, v5, vcc +; VI-NEXT: flat_load_sbyte v0, v[0:1] +; VI-NEXT: flat_load_sbyte v1, v[2:3] +; VI-NEXT: flat_load_sbyte v2, v[4:5] +; VI-NEXT: s_mov_b32 s2, -1 +; VI-NEXT: s_waitcnt vmcnt(0) +; VI-NEXT: v_mad_u16 v0, v0, v1, v2 +; VI-NEXT: v_bfe_i32 v0, v0, 0, 16 +; VI-NEXT: buffer_store_dword v0, off, s[0:3], 0 +; VI-NEXT: s_endpgm entry: %retval.0.i = load i64, ptr addrspace(5) %idx %arrayidx = getelementptr inbounds i8, ptr addrspace(1) %a, i64 %retval.0.i @@ -215,16 +1125,207 @@ entry: ret void } -; FUNC-LABEL: {{^}}i8_mad_64: -; EG: MULLO_INT {{[* ]*}}T{{[0-9]}}.[[MAD_CHAN:[XYZW]]] -; EG: ADD_INT {{[* ]*}}T{{[0-9]}}.[[MAD_CHAN:[XYZW]]] -; The result must be sign-extended -; EG: BFE_INT {{[* ]*}}T{{[0-9]\.[XYZW]}}, PV.[[MAD_CHAN]], 0.0, literal.x -; EG: 8 -; SI: v_mad_u32_u24 [[MAD:v[0-9]]], {{[sv][0-9], [sv][0-9]}} -; VI: v_mad_u16 [[MAD:v[0-9]]], {{[sv][0-9], [sv][0-9]}} -; GCN: v_bfe_i32 [[EXT:v[0-9]]], [[MAD]], 0, 16 define amdgpu_kernel void @i8_mad_64(ptr addrspace(1) %out, ptr addrspace(1) %a, ptr addrspace(1) %b, ptr addrspace(1) %c, ptr addrspace(5) %idx) { +; EG-LABEL: i8_mad_64: +; EG: ; %bb.0: ; %entry +; EG-NEXT: ALU 4, @14, KC0[CB0:0-32], KC1[] +; EG-NEXT: TEX 0 @8 +; EG-NEXT: ALU 1, @19, KC0[CB0:0-32], KC1[] +; EG-NEXT: TEX 1 @10 +; EG-NEXT: ALU 11, @21, KC0[CB0:0-32], KC1[] +; EG-NEXT: MEM_RAT_CACHELESS STORE_RAW T0.XY, T1.X, 1 +; EG-NEXT: CF_END +; EG-NEXT: PAD +; EG-NEXT: Fetch clause starting at 8: +; EG-NEXT: VTX_READ_8 T1.X, T1.X, 0, #1 +; EG-NEXT: Fetch clause starting at 10: +; EG-NEXT: VTX_READ_8 T0.X, T0.X, 0, #1 +; EG-NEXT: VTX_READ_8 T2.X, T2.X, 0, #1 +; EG-NEXT: ALU clause starting at 14: +; EG-NEXT: LSHR * T0.W, KC0[3].Y, literal.x, +; EG-NEXT: 2(2.802597e-45), 0(0.000000e+00) +; EG-NEXT: MOVA_INT * AR.x (MASKED), PV.W, +; EG-NEXT: MOV * T0.X, T(0 + AR.x).X+, +; EG-NEXT: ADD_INT * T1.X, KC0[2].W, PV.X, +; EG-NEXT: ALU clause starting at 19: +; EG-NEXT: ADD_INT T2.X, KC0[2].Z, T0.X, +; EG-NEXT: ADD_INT * T0.X, KC0[3].X, T0.X, +; EG-NEXT: ALU clause starting at 21: +; EG-NEXT: BFE_INT T0.Z, T1.X, 0.0, literal.x, +; EG-NEXT: BFE_INT * T0.W, T2.X, 0.0, literal.x, BS:VEC_120/SCL_212 +; EG-NEXT: 8(1.121039e-44), 0(0.000000e+00) +; EG-NEXT: BFE_INT T1.W, T0.X, 0.0, literal.x, +; EG-NEXT: MULLO_INT * T0.X, PV.W, PV.Z, +; EG-NEXT: 8(1.121039e-44), 0(0.000000e+00) +; EG-NEXT: ADD_INT * T0.W, PS, PV.W, +; EG-NEXT: BFE_INT T0.X, PV.W, 0.0, literal.x, +; EG-NEXT: LSHR * T1.X, KC0[2].Y, literal.y, +; EG-NEXT: 16(2.242078e-44), 2(2.802597e-45) +; EG-NEXT: ASHR * T0.Y, PV.X, literal.x, +; EG-NEXT: 31(4.344025e-44), 0(0.000000e+00) +; +; CM-LABEL: i8_mad_64: +; CM: ; %bb.0: ; %entry +; CM-NEXT: ALU 4, @14, KC0[CB0:0-32], KC1[] +; CM-NEXT: TEX 0 @8 +; CM-NEXT: ALU 1, @19, KC0[CB0:0-32], KC1[] +; CM-NEXT: TEX 1 @10 +; CM-NEXT: ALU 13, @21, KC0[CB0:0-32], KC1[] +; CM-NEXT: MEM_RAT_CACHELESS STORE_DWORD T0, T1.X +; CM-NEXT: CF_END +; CM-NEXT: PAD +; CM-NEXT: Fetch clause starting at 8: +; CM-NEXT: VTX_READ_8 T1.X, T1.X, 0, #1 +; CM-NEXT: Fetch clause starting at 10: +; CM-NEXT: VTX_READ_8 T0.X, T0.X, 0, #1 +; CM-NEXT: VTX_READ_8 T2.X, T2.X, 0, #1 +; CM-NEXT: ALU clause starting at 14: +; CM-NEXT: LSHR * T0.W, KC0[3].Y, literal.x, +; CM-NEXT: 2(2.802597e-45), 0(0.000000e+00) +; CM-NEXT: MOVA_INT * AR.x (MASKED), PV.W, +; CM-NEXT: MOV * T0.X, T(0 + AR.x).X+, +; CM-NEXT: ADD_INT * T1.X, KC0[3].X, PV.X, +; CM-NEXT: ALU clause 
starting at 19: +; CM-NEXT: ADD_INT * T2.X, KC0[2].W, T0.X, +; CM-NEXT: ADD_INT * T0.X, KC0[2].Z, T0.X, +; CM-NEXT: ALU clause starting at 21: +; CM-NEXT: BFE_INT T0.Y, T1.X, 0.0, literal.x, +; CM-NEXT: BFE_INT T0.Z, T2.X, 0.0, literal.x, BS:VEC_120/SCL_212 +; CM-NEXT: BFE_INT * T0.W, T0.X, 0.0, literal.x, BS:VEC_201 +; CM-NEXT: 8(1.121039e-44), 0(0.000000e+00) +; CM-NEXT: MULLO_INT T0.X, T0.W, T0.Z, +; CM-NEXT: MULLO_INT T0.Y (MASKED), T0.W, T0.Z, +; CM-NEXT: MULLO_INT T0.Z (MASKED), T0.W, T0.Z, +; CM-NEXT: MULLO_INT * T0.W (MASKED), T0.W, T0.Z, +; CM-NEXT: ADD_INT * T0.W, PV.X, T0.Y, +; CM-NEXT: BFE_INT * T0.X, PV.W, 0.0, literal.x, +; CM-NEXT: 16(2.242078e-44), 0(0.000000e+00) +; CM-NEXT: LSHR T1.X, KC0[2].Y, literal.x, +; CM-NEXT: ASHR * T0.Y, PV.X, literal.y, +; CM-NEXT: 2(2.802597e-45), 31(4.344025e-44) +; +; GCN-LABEL: i8_mad_64: +; GCN: ; %bb.0: ; %entry +; GCN-NEXT: s_mov_b32 s24, SCRATCH_RSRC_DWORD0 +; GCN-NEXT: s_mov_b32 s25, SCRATCH_RSRC_DWORD1 +; GCN-NEXT: s_mov_b32 s26, -1 +; GCN-NEXT: s_mov_b32 s27, 0xe8f000 +; GCN-NEXT: s_add_u32 s24, s24, s11 +; GCN-NEXT: s_addc_u32 s25, s25, 0 +; GCN-NEXT: s_load_dword s8, s[4:5], 0x11 +; GCN-NEXT: s_waitcnt lgkmcnt(0) +; GCN-NEXT: s_add_i32 s9, s8, 4 +; GCN-NEXT: s_load_dwordx8 s[0:7], s[4:5], 0x9 +; GCN-NEXT: v_mov_b32_e32 v0, s8 +; GCN-NEXT: v_mov_b32_e32 v1, s9 +; GCN-NEXT: buffer_load_dword v1, v1, s[24:27], 0 offen +; GCN-NEXT: buffer_load_dword v0, v0, s[24:27], 0 offen +; GCN-NEXT: s_mov_b32 s11, 0xf000 +; GCN-NEXT: s_mov_b32 s14, 0 +; GCN-NEXT: s_mov_b32 s15, s11 +; GCN-NEXT: s_mov_b64 s[18:19], s[14:15] +; GCN-NEXT: s_mov_b64 s[22:23], s[14:15] +; GCN-NEXT: s_waitcnt lgkmcnt(0) +; GCN-NEXT: s_mov_b64 s[12:13], s[2:3] +; GCN-NEXT: s_mov_b64 s[16:17], s[4:5] +; GCN-NEXT: s_mov_b64 s[20:21], s[6:7] +; GCN-NEXT: s_waitcnt vmcnt(0) +; GCN-NEXT: buffer_load_sbyte v2, v[0:1], s[12:15], 0 addr64 +; GCN-NEXT: buffer_load_sbyte v3, v[0:1], s[16:19], 0 addr64 +; GCN-NEXT: buffer_load_sbyte v0, v[0:1], s[20:23], 0 addr64 +; GCN-NEXT: s_mov_b32 s10, -1 +; GCN-NEXT: s_mov_b32 s8, s0 +; GCN-NEXT: s_mov_b32 s9, s1 +; GCN-NEXT: s_waitcnt vmcnt(2) +; GCN-NEXT: v_and_b32_e32 v1, 0xffff, v2 +; GCN-NEXT: s_waitcnt vmcnt(1) +; GCN-NEXT: v_and_b32_e32 v2, 0xffff, v3 +; GCN-NEXT: s_waitcnt vmcnt(0) +; GCN-NEXT: v_mad_u32_u24 v0, v1, v2, v0 +; GCN-NEXT: v_bfe_i32 v0, v0, 0, 16 +; GCN-NEXT: v_ashrrev_i32_e32 v1, 31, v0 +; GCN-NEXT: buffer_store_dwordx2 v[0:1], off, s[8:11], 0 +; GCN-NEXT: s_endpgm +; +; SI-LABEL: i8_mad_64: +; SI: ; %bb.0: ; %entry +; SI-NEXT: s_mov_b32 s88, SCRATCH_RSRC_DWORD0 +; SI-NEXT: s_load_dword s0, s[4:5], 0x44 +; SI-NEXT: s_mov_b32 s89, SCRATCH_RSRC_DWORD1 +; SI-NEXT: s_mov_b32 s90, -1 +; SI-NEXT: s_mov_b32 s91, 0xe80000 +; SI-NEXT: s_add_u32 s88, s88, s11 +; SI-NEXT: s_addc_u32 s89, s89, 0 +; SI-NEXT: s_waitcnt lgkmcnt(0) +; SI-NEXT: s_add_i32 s1, s0, 4 +; SI-NEXT: v_mov_b32_e32 v0, s0 +; SI-NEXT: buffer_load_dword v4, v0, s[88:91], 0 offen +; SI-NEXT: v_mov_b32_e32 v0, s1 +; SI-NEXT: buffer_load_dword v5, v0, s[88:91], 0 offen +; SI-NEXT: s_load_dwordx8 s[0:7], s[4:5], 0x24 +; SI-NEXT: s_waitcnt lgkmcnt(0) +; SI-NEXT: v_mov_b32_e32 v1, s3 +; SI-NEXT: v_mov_b32_e32 v3, s5 +; SI-NEXT: v_mov_b32_e32 v6, s7 +; SI-NEXT: s_mov_b32 s3, 0xf000 +; SI-NEXT: s_waitcnt vmcnt(1) +; SI-NEXT: v_add_u32_e32 v0, vcc, s2, v4 +; SI-NEXT: s_waitcnt vmcnt(0) +; SI-NEXT: v_addc_u32_e32 v1, vcc, v1, v5, vcc +; SI-NEXT: v_add_u32_e32 v2, vcc, s4, v4 +; SI-NEXT: v_addc_u32_e32 v3, vcc, v3, v5, vcc +; SI-NEXT: v_add_u32_e32 v4, vcc, s6, v4 +; SI-NEXT: 
v_addc_u32_e32 v5, vcc, v6, v5, vcc +; SI-NEXT: flat_load_sbyte v0, v[0:1] +; SI-NEXT: flat_load_sbyte v1, v[2:3] +; SI-NEXT: flat_load_sbyte v2, v[4:5] +; SI-NEXT: s_mov_b32 s2, -1 +; SI-NEXT: s_waitcnt vmcnt(0) +; SI-NEXT: v_mad_u16 v0, v0, v1, v2 +; SI-NEXT: v_bfe_i32 v0, v0, 0, 16 +; SI-NEXT: v_ashrrev_i32_e32 v1, 31, v0 +; SI-NEXT: buffer_store_dwordx2 v[0:1], off, s[0:3], 0 +; SI-NEXT: s_endpgm +; +; VI-LABEL: i8_mad_64: +; VI: ; %bb.0: ; %entry +; VI-NEXT: s_mov_b32 s12, SCRATCH_RSRC_DWORD0 +; VI-NEXT: s_load_dword s0, s[4:5], 0x44 +; VI-NEXT: s_mov_b32 s13, SCRATCH_RSRC_DWORD1 +; VI-NEXT: s_mov_b32 s14, -1 +; VI-NEXT: s_mov_b32 s15, 0xe80000 +; VI-NEXT: s_add_u32 s12, s12, s11 +; VI-NEXT: s_addc_u32 s13, s13, 0 +; VI-NEXT: s_waitcnt lgkmcnt(0) +; VI-NEXT: s_add_i32 s1, s0, 4 +; VI-NEXT: v_mov_b32_e32 v0, s0 +; VI-NEXT: buffer_load_dword v4, v0, s[12:15], 0 offen +; VI-NEXT: v_mov_b32_e32 v0, s1 +; VI-NEXT: buffer_load_dword v5, v0, s[12:15], 0 offen +; VI-NEXT: s_load_dwordx8 s[0:7], s[4:5], 0x24 +; VI-NEXT: s_waitcnt lgkmcnt(0) +; VI-NEXT: v_mov_b32_e32 v1, s3 +; VI-NEXT: v_mov_b32_e32 v3, s5 +; VI-NEXT: v_mov_b32_e32 v6, s7 +; VI-NEXT: s_mov_b32 s3, 0xf000 +; VI-NEXT: s_waitcnt vmcnt(1) +; VI-NEXT: v_add_u32_e32 v0, vcc, s2, v4 +; VI-NEXT: s_waitcnt vmcnt(0) +; VI-NEXT: v_addc_u32_e32 v1, vcc, v1, v5, vcc +; VI-NEXT: v_add_u32_e32 v2, vcc, s4, v4 +; VI-NEXT: v_addc_u32_e32 v3, vcc, v3, v5, vcc +; VI-NEXT: v_add_u32_e32 v4, vcc, s6, v4 +; VI-NEXT: v_addc_u32_e32 v5, vcc, v6, v5, vcc +; VI-NEXT: flat_load_sbyte v0, v[0:1] +; VI-NEXT: flat_load_sbyte v1, v[2:3] +; VI-NEXT: flat_load_sbyte v2, v[4:5] +; VI-NEXT: s_mov_b32 s2, -1 +; VI-NEXT: s_waitcnt vmcnt(0) +; VI-NEXT: v_mad_u16 v0, v0, v1, v2 +; VI-NEXT: v_bfe_i32 v0, v0, 0, 16 +; VI-NEXT: v_ashrrev_i32_e32 v1, 31, v0 +; VI-NEXT: buffer_store_dwordx2 v[0:1], off, s[0:3], 0 +; VI-NEXT: s_endpgm entry: %retval.0.i = load i64, ptr addrspace(5) %idx %arrayidx = getelementptr inbounds i8, ptr addrspace(1) %a, i64 %retval.0.i @@ -248,17 +1349,236 @@ entry: ; had a chance to form mul24. The mul combine would then see ; extractelement with no known bits and fail. All of the mul/add ; combos in this loop should form v_mad_u32_u24. 
- -; FUNC-LABEL: {{^}}mad24_known_bits_destroyed: -; GCN: v_mad_u32_u24 -; GCN: v_mad_u32_u24 -; GCN: v_mad_u32_u24 -; GCN: v_mad_u32_u24 -; GCN: v_mad_u32_u24 -; GCN: v_mad_u32_u24 -; GCN: v_mad_u32_u24 -; GCN: v_mad_u32_u24 define void @mad24_known_bits_destroyed(i32 %arg, <4 x i32> %arg1, <4 x i32> %arg2, <4 x i32> %arg3, i32 %arg4, i32 %arg5, i32 %arg6, ptr addrspace(1) %arg7, ptr addrspace(1) %arg8) #0 { +; EG-LABEL: mad24_known_bits_destroyed: +; EG: ; %bb.0: ; %bb +; EG-NEXT: ALU 21, @12, KC0[CB0:0-32], KC1[] +; EG-NEXT: LOOP_START_DX10 @11 +; EG-NEXT: ALU 8, @34, KC0[], KC1[] +; EG-NEXT: MEM_RAT_CACHELESS STORE_RAW T0.X, T2.X, 0 +; EG-NEXT: ALU 14, @43, KC0[], KC1[] +; EG-NEXT: MEM_RAT_CACHELESS STORE_RAW T0.XYZW, T1.X, 0 +; EG-NEXT: ALU_PUSH_BEFORE 3, @58, KC0[], KC1[] +; EG-NEXT: JUMP @10 POP:1 +; EG-NEXT: LOOP_BREAK @10 +; EG-NEXT: POP @10 POP:1 +; EG-NEXT: END_LOOP @2 +; EG-NEXT: CF_END +; EG-NEXT: ALU clause starting at 12: +; EG-NEXT: MOV * T0.W, KC0[5].X, +; EG-NEXT: MOV * T0.Z, KC0[4].W, +; EG-NEXT: MOV * T0.Y, KC0[4].Z, +; EG-NEXT: MOV T0.X, KC0[2].Y, +; EG-NEXT: AND_INT * T1.Y, KC0[4].X, literal.x, +; EG-NEXT: 16777215(2.350989e-38), 0(0.000000e+00) +; EG-NEXT: AND_INT T1.Z, KC0[3].W, literal.x, +; EG-NEXT: AND_INT T1.W, KC0[3].Z, literal.x, +; EG-NEXT: MOV * T2.W, KC0[7].Y, +; EG-NEXT: 16777215(2.350989e-38), 0(0.000000e+00) +; EG-NEXT: LSHR T1.X, PS, literal.x, +; EG-NEXT: AND_INT T2.Y, KC0[6].Y, literal.y, +; EG-NEXT: MOV T2.Z, KC0[6].X, +; EG-NEXT: MOV * T2.W, KC0[5].W, +; EG-NEXT: 2(2.802597e-45), 16777215(2.350989e-38) +; EG-NEXT: MOV * T3.W, KC0[7].X, +; EG-NEXT: LSHR T2.X, PV.W, literal.x, +; EG-NEXT: MOV T3.Y, KC0[5].Z, +; EG-NEXT: MOV T3.Z, KC0[6].Z, +; EG-NEXT: MOV * T3.W, KC0[6].W, +; EG-NEXT: 2(2.802597e-45), 0(0.000000e+00) +; EG-NEXT: MOV * T4.W, KC0[4].Y, +; EG-NEXT: ALU clause starting at 34: +; EG-NEXT: MULLO_INT * T0.X, T0.X, T2.Y, +; EG-NEXT: ADD_INT * T4.W, PS, T3.Z, +; EG-NEXT: AND_INT * T4.W, PV.W, literal.x, +; EG-NEXT: 16777215(2.350989e-38), 0(0.000000e+00) +; EG-NEXT: MULLO_INT * T0.X, PV.W, T2.Y, +; EG-NEXT: MULLO_INT * T0.W, T0.W, T1.Y, +; EG-NEXT: MULLO_INT * T0.Z, T0.Z, T1.Z, +; EG-NEXT: MULLO_INT * T0.Y, T0.Y, T1.W, +; EG-NEXT: ADD_INT * T0.X, T0.X, T3.Z, +; EG-NEXT: ALU clause starting at 43: +; EG-NEXT: ADD_INT * T4.W, T0.Y, T3.Y, +; EG-NEXT: AND_INT T4.W, PV.W, literal.x, +; EG-NEXT: ADD_INT * T5.W, T0.Z, T2.W, +; EG-NEXT: 16777215(2.350989e-38), 0(0.000000e+00) +; EG-NEXT: AND_INT T0.Z, PS, literal.x, +; EG-NEXT: ADD_INT T0.W, T0.W, T2.Z, +; EG-NEXT: MULLO_INT * T0.Y, PV.W, T1.W, +; EG-NEXT: 16777215(2.350989e-38), 0(0.000000e+00) +; EG-NEXT: ADD_INT T0.Y, PS, T3.Y, +; EG-NEXT: AND_INT T0.W, PV.W, literal.x, +; EG-NEXT: MULLO_INT * T0.Z, PV.Z, T1.Z, +; EG-NEXT: 16777215(2.350989e-38), 0(0.000000e+00) +; EG-NEXT: ADD_INT T0.Z, PS, T2.W, +; EG-NEXT: MULLO_INT * T0.W, PV.W, T1.Y, +; EG-NEXT: ADD_INT * T0.W, PS, T2.Z, +; EG-NEXT: ALU clause starting at 58: +; EG-NEXT: ADD_INT * T3.W, T3.W, literal.x, +; EG-NEXT: -1(nan), 0(0.000000e+00) +; EG-NEXT: SETE_INT * T4.W, PV.W, 0.0, +; EG-NEXT: PRED_SETNE_INT * ExecMask,PredicateBit (MASKED), PV.W, 0.0, +; +; CM-LABEL: mad24_known_bits_destroyed: +; CM: ; %bb.0: ; %bb +; CM-NEXT: ALU 22, @12, KC0[CB0:0-32], KC1[] +; CM-NEXT: LOOP_START_DX10 @11 +; CM-NEXT: ALU 23, @35, KC0[], KC1[] +; CM-NEXT: MEM_RAT_CACHELESS STORE_DWORD T0.X, T2.X +; CM-NEXT: ALU 23, @59, KC0[], KC1[] +; CM-NEXT: MEM_RAT_CACHELESS STORE_DWORD T0, T1.X +; CM-NEXT: ALU_PUSH_BEFORE 3, @83, KC0[], KC1[] +; CM-NEXT: JUMP @10 POP:1 
+; CM-NEXT: LOOP_BREAK @10 +; CM-NEXT: POP @10 POP:1 +; CM-NEXT: END_LOOP @2 +; CM-NEXT: CF_END +; CM-NEXT: ALU clause starting at 12: +; CM-NEXT: MOV * T0.W, KC0[5].X, +; CM-NEXT: MOV * T0.Z, KC0[4].W, +; CM-NEXT: MOV * T0.Y, KC0[4].Z, +; CM-NEXT: MOV T0.X, KC0[2].Y, +; CM-NEXT: AND_INT * T1.Y, KC0[4].X, literal.x, +; CM-NEXT: 16777215(2.350989e-38), 0(0.000000e+00) +; CM-NEXT: AND_INT T1.Z, KC0[3].W, literal.x, +; CM-NEXT: AND_INT * T1.W, KC0[3].Z, literal.x, +; CM-NEXT: 16777215(2.350989e-38), 0(0.000000e+00) +; CM-NEXT: AND_INT T2.Y, KC0[6].Y, literal.x, +; CM-NEXT: MOV T2.Z, KC0[6].X, +; CM-NEXT: MOV * T2.W, KC0[7].Y, +; CM-NEXT: 16777215(2.350989e-38), 0(0.000000e+00) +; CM-NEXT: LSHR T1.X, PV.W, literal.x, +; CM-NEXT: MOV T3.Y, KC0[5].W, +; CM-NEXT: MOV T3.Z, KC0[5].Z, +; CM-NEXT: MOV * T2.W, KC0[7].X, +; CM-NEXT: 2(2.802597e-45), 0(0.000000e+00) +; CM-NEXT: LSHR T2.X, PV.W, literal.x, +; CM-NEXT: MOV T4.Y, KC0[6].Z, +; CM-NEXT: MOV T4.Z, KC0[6].W, +; CM-NEXT: MOV * T2.W, KC0[4].Y, +; CM-NEXT: 2(2.802597e-45), 0(0.000000e+00) +; CM-NEXT: ALU clause starting at 35: +; CM-NEXT: MULLO_INT T0.X, T0.X, T2.Y, +; CM-NEXT: MULLO_INT T0.Y (MASKED), T0.X, T2.Y, +; CM-NEXT: MULLO_INT T0.Z (MASKED), T0.X, T2.Y, +; CM-NEXT: MULLO_INT * T0.W (MASKED), T0.X, T2.Y, +; CM-NEXT: ADD_INT * T2.W, PV.X, T4.Y, +; CM-NEXT: AND_INT * T2.W, PV.W, literal.x, +; CM-NEXT: 16777215(2.350989e-38), 0(0.000000e+00) +; CM-NEXT: MULLO_INT T0.X, T2.W, T2.Y, +; CM-NEXT: MULLO_INT T0.Y (MASKED), T2.W, T2.Y, +; CM-NEXT: MULLO_INT T0.Z (MASKED), T2.W, T2.Y, +; CM-NEXT: MULLO_INT * T0.W (MASKED), T2.W, T2.Y, +; CM-NEXT: MULLO_INT T0.X (MASKED), T0.W, T1.Y, +; CM-NEXT: MULLO_INT T0.Y (MASKED), T0.W, T1.Y, +; CM-NEXT: MULLO_INT T0.Z (MASKED), T0.W, T1.Y, +; CM-NEXT: MULLO_INT * T0.W, T0.W, T1.Y, +; CM-NEXT: MULLO_INT T0.X (MASKED), T0.Z, T1.Z, +; CM-NEXT: MULLO_INT T0.Y (MASKED), T0.Z, T1.Z, +; CM-NEXT: MULLO_INT T0.Z, T0.Z, T1.Z, +; CM-NEXT: MULLO_INT * T0.W (MASKED), T0.Z, T1.Z, +; CM-NEXT: MULLO_INT T0.X (MASKED), T0.Y, T1.W, +; CM-NEXT: MULLO_INT T0.Y, T0.Y, T1.W, +; CM-NEXT: MULLO_INT T0.Z (MASKED), T0.Y, T1.W, +; CM-NEXT: MULLO_INT * T0.W (MASKED), T0.Y, T1.W, +; CM-NEXT: ADD_INT * T0.X, T0.X, T4.Y, +; CM-NEXT: ALU clause starting at 59: +; CM-NEXT: ADD_INT * T2.W, T0.Y, T3.Z, +; CM-NEXT: ADD_INT T0.Z, T0.Z, T3.Y, +; CM-NEXT: AND_INT * T2.W, PV.W, literal.x, +; CM-NEXT: 16777215(2.350989e-38), 0(0.000000e+00) +; CM-NEXT: MULLO_INT T0.X (MASKED), T2.W, T1.W, +; CM-NEXT: MULLO_INT T0.Y, T2.W, T1.W, +; CM-NEXT: MULLO_INT T0.Z (MASKED), T2.W, T1.W, +; CM-NEXT: MULLO_INT * T0.W (MASKED), T2.W, T1.W, +; CM-NEXT: ADD_INT T0.Y, PV.Y, T3.Z, +; CM-NEXT: ADD_INT T5.Z, T0.W, T2.Z, BS:VEC_021/SCL_122 +; CM-NEXT: AND_INT * T0.W, T0.Z, literal.x, +; CM-NEXT: 16777215(2.350989e-38), 0(0.000000e+00) +; CM-NEXT: MULLO_INT T0.X (MASKED), T0.W, T1.Z, +; CM-NEXT: MULLO_INT T0.Y (MASKED), T0.W, T1.Z, +; CM-NEXT: MULLO_INT T0.Z, T0.W, T1.Z, +; CM-NEXT: MULLO_INT * T0.W (MASKED), T0.W, T1.Z, +; CM-NEXT: ADD_INT T0.Z, PV.Z, T3.Y, +; CM-NEXT: AND_INT * T0.W, T5.Z, literal.x, +; CM-NEXT: 16777215(2.350989e-38), 0(0.000000e+00) +; CM-NEXT: MULLO_INT T0.X (MASKED), T0.W, T1.Y, +; CM-NEXT: MULLO_INT T0.Y (MASKED), T0.W, T1.Y, +; CM-NEXT: MULLO_INT T0.Z (MASKED), T0.W, T1.Y, +; CM-NEXT: MULLO_INT * T0.W, T0.W, T1.Y, +; CM-NEXT: ADD_INT * T0.W, PV.W, T2.Z, +; CM-NEXT: ALU clause starting at 83: +; CM-NEXT: ADD_INT * T4.Z, T4.Z, literal.x, +; CM-NEXT: -1(nan), 0(0.000000e+00) +; CM-NEXT: SETE_INT * T2.W, PV.Z, 0.0, +; CM-NEXT: PRED_SETNE_INT * 
ExecMask,PredicateBit (MASKED), PV.W, 0.0, +; +; GCN-LABEL: mad24_known_bits_destroyed: +; GCN: ; %bb.0: ; %bb +; GCN-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0) +; GCN-NEXT: v_mov_b32_e32 v5, v0 +; GCN-NEXT: v_and_b32_e32 v0, 0xffffff, v13 +; GCN-NEXT: v_and_b32_e32 v1, 0xffffff, v2 +; GCN-NEXT: v_and_b32_e32 v2, 0xffffff, v3 +; GCN-NEXT: v_and_b32_e32 v3, 0xffffff, v4 +; GCN-NEXT: s_mov_b64 s[8:9], 0 +; GCN-NEXT: s_mov_b32 s6, 0 +; GCN-NEXT: s_mov_b32 s7, 0xf000 +; GCN-NEXT: s_mov_b32 s4, s6 +; GCN-NEXT: s_mov_b32 s5, s6 +; GCN-NEXT: .LBB9_1: ; %bb19 +; GCN-NEXT: ; =>This Inner Loop Header: Depth=1 +; GCN-NEXT: v_mad_u32_u24 v4, v5, v0, v14 +; GCN-NEXT: s_waitcnt expcnt(0) +; GCN-NEXT: v_mad_u32_u24 v6, v6, v1, v10 +; GCN-NEXT: v_mad_u32_u24 v7, v7, v2, v11 +; GCN-NEXT: v_mad_u32_u24 v8, v8, v3, v12 +; GCN-NEXT: v_add_i32_e32 v15, vcc, -1, v15 +; GCN-NEXT: v_mad_u32_u24 v5, v4, v0, v14 +; GCN-NEXT: v_mad_u32_u24 v6, v6, v1, v10 +; GCN-NEXT: v_mad_u32_u24 v7, v7, v2, v11 +; GCN-NEXT: v_mad_u32_u24 v8, v8, v3, v12 +; GCN-NEXT: v_cmp_eq_u32_e32 vcc, 0, v15 +; GCN-NEXT: buffer_store_dword v5, v[16:17], s[4:7], 0 addr64 +; GCN-NEXT: s_or_b64 s[8:9], vcc, s[8:9] +; GCN-NEXT: buffer_store_dwordx4 v[5:8], v[18:19], s[4:7], 0 addr64 +; GCN-NEXT: s_andn2_b64 exec, exec, s[8:9] +; GCN-NEXT: s_cbranch_execnz .LBB9_1 +; GCN-NEXT: ; %bb.2: ; %bb18 +; GCN-NEXT: s_or_b64 exec, exec, s[8:9] +; GCN-NEXT: s_waitcnt vmcnt(0) expcnt(0) +; GCN-NEXT: s_setpc_b64 s[30:31] +; +; GFX8-LABEL: mad24_known_bits_destroyed: +; GFX8: ; %bb.0: ; %bb +; GFX8-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0) +; GFX8-NEXT: v_mov_b32_e32 v5, v0 +; GFX8-NEXT: v_and_b32_e32 v0, 0xffffff, v13 +; GFX8-NEXT: v_and_b32_e32 v1, 0xffffff, v2 +; GFX8-NEXT: v_and_b32_e32 v2, 0xffffff, v3 +; GFX8-NEXT: v_and_b32_e32 v3, 0xffffff, v4 +; GFX8-NEXT: s_mov_b64 s[4:5], 0 +; GFX8-NEXT: .LBB9_1: ; %bb19 +; GFX8-NEXT: ; =>This Inner Loop Header: Depth=1 +; GFX8-NEXT: v_add_u32_e32 v15, vcc, -1, v15 +; GFX8-NEXT: v_mad_u32_u24 v4, v5, v0, v14 +; GFX8-NEXT: v_mad_u32_u24 v6, v6, v1, v10 +; GFX8-NEXT: v_mad_u32_u24 v7, v7, v2, v11 +; GFX8-NEXT: v_mad_u32_u24 v8, v8, v3, v12 +; GFX8-NEXT: v_cmp_eq_u32_e32 vcc, 0, v15 +; GFX8-NEXT: v_mad_u32_u24 v5, v4, v0, v14 +; GFX8-NEXT: v_mad_u32_u24 v6, v6, v1, v10 +; GFX8-NEXT: v_mad_u32_u24 v7, v7, v2, v11 +; GFX8-NEXT: v_mad_u32_u24 v8, v8, v3, v12 +; GFX8-NEXT: s_or_b64 s[4:5], vcc, s[4:5] +; GFX8-NEXT: flat_store_dword v[16:17], v5 +; GFX8-NEXT: flat_store_dwordx4 v[18:19], v[5:8] +; GFX8-NEXT: s_andn2_b64 exec, exec, s[4:5] +; GFX8-NEXT: s_cbranch_execnz .LBB9_1 +; GFX8-NEXT: ; %bb.2: ; %bb18 +; GFX8-NEXT: s_or_b64 exec, exec, s[4:5] +; GFX8-NEXT: s_waitcnt vmcnt(0) +; GFX8-NEXT: s_setpc_b64 s[30:31] bb: %tmp = and i32 %arg4, 16777215 %tmp9 = extractelement <4 x i32> %arg1, i64 1 diff --git a/llvm/test/CodeGen/AMDGPU/sdiv64.ll b/llvm/test/CodeGen/AMDGPU/sdiv64.ll index 697bcc3..5f6d622 100644 --- a/llvm/test/CodeGen/AMDGPU/sdiv64.ll +++ b/llvm/test/CodeGen/AMDGPU/sdiv64.ll @@ -206,8 +206,11 @@ define amdgpu_kernel void @s_test_sdiv(ptr addrspace(1) %out, i64 %x, i64 %y) { ; GCN-IR-NEXT: s_cbranch_vccz .LBB0_5 ; GCN-IR-NEXT: ; %bb.1: ; %udiv-bb1 ; GCN-IR-NEXT: s_add_u32 s18, s16, 1 -; GCN-IR-NEXT: s_addc_u32 s19, s17, 0 -; GCN-IR-NEXT: v_cmp_eq_u64_e64 s[10:11], s[18:19], 0 +; GCN-IR-NEXT: s_cselect_b64 s[10:11], -1, 0 +; GCN-IR-NEXT: s_or_b32 s10, s10, s11 +; GCN-IR-NEXT: s_cmp_lg_u32 s10, 0 +; GCN-IR-NEXT: s_addc_u32 s10, s17, 0 +; GCN-IR-NEXT: s_cselect_b64 s[10:11], -1, 0 ; GCN-IR-NEXT: s_sub_i32 s16, 
63, s16 ; GCN-IR-NEXT: s_andn2_b64 vcc, exec, s[10:11] ; GCN-IR-NEXT: s_lshl_b64 s[10:11], s[12:13], s16 @@ -217,9 +220,9 @@ define amdgpu_kernel void @s_test_sdiv(ptr addrspace(1) %out, i64 %x, i64 %y) { ; GCN-IR-NEXT: s_add_u32 s18, s2, -1 ; GCN-IR-NEXT: s_addc_u32 s19, s3, -1 ; GCN-IR-NEXT: s_not_b64 s[8:9], s[14:15] -; GCN-IR-NEXT: s_add_u32 s12, s8, s20 -; GCN-IR-NEXT: s_addc_u32 s13, s9, 0 -; GCN-IR-NEXT: s_mov_b64 s[14:15], 0 +; GCN-IR-NEXT: s_add_u32 s14, s8, s20 +; GCN-IR-NEXT: s_addc_u32 s15, s9, 0 +; GCN-IR-NEXT: s_mov_b64 s[12:13], 0 ; GCN-IR-NEXT: s_mov_b32 s9, 0 ; GCN-IR-NEXT: .LBB0_3: ; %udiv-do-while ; GCN-IR-NEXT: ; =>This Inner Loop Header: Depth=1 @@ -227,19 +230,22 @@ define amdgpu_kernel void @s_test_sdiv(ptr addrspace(1) %out, i64 %x, i64 %y) { ; GCN-IR-NEXT: s_lshr_b32 s8, s11, 31 ; GCN-IR-NEXT: s_lshl_b64 s[10:11], s[10:11], 1 ; GCN-IR-NEXT: s_or_b64 s[16:17], s[16:17], s[8:9] -; GCN-IR-NEXT: s_or_b64 s[10:11], s[14:15], s[10:11] +; GCN-IR-NEXT: s_or_b64 s[10:11], s[12:13], s[10:11] ; GCN-IR-NEXT: s_sub_u32 s8, s18, s16 ; GCN-IR-NEXT: s_subb_u32 s8, s19, s17 -; GCN-IR-NEXT: s_ashr_i32 s14, s8, 31 -; GCN-IR-NEXT: s_mov_b32 s15, s14 -; GCN-IR-NEXT: s_and_b32 s8, s14, 1 -; GCN-IR-NEXT: s_and_b64 s[14:15], s[14:15], s[2:3] -; GCN-IR-NEXT: s_sub_u32 s16, s16, s14 -; GCN-IR-NEXT: s_subb_u32 s17, s17, s15 -; GCN-IR-NEXT: s_add_u32 s12, s12, 1 -; GCN-IR-NEXT: s_addc_u32 s13, s13, 0 -; GCN-IR-NEXT: v_cmp_eq_u64_e64 s[20:21], s[12:13], 0 -; GCN-IR-NEXT: s_mov_b64 s[14:15], s[8:9] +; GCN-IR-NEXT: s_ashr_i32 s12, s8, 31 +; GCN-IR-NEXT: s_mov_b32 s13, s12 +; GCN-IR-NEXT: s_and_b32 s8, s12, 1 +; GCN-IR-NEXT: s_and_b64 s[20:21], s[12:13], s[2:3] +; GCN-IR-NEXT: s_sub_u32 s16, s16, s20 +; GCN-IR-NEXT: s_subb_u32 s17, s17, s21 +; GCN-IR-NEXT: s_add_u32 s14, s14, 1 +; GCN-IR-NEXT: s_cselect_b64 s[20:21], -1, 0 +; GCN-IR-NEXT: s_or_b32 s20, s20, s21 +; GCN-IR-NEXT: s_cmp_lg_u32 s20, 0 +; GCN-IR-NEXT: s_addc_u32 s15, s15, 0 +; GCN-IR-NEXT: s_cselect_b64 s[20:21], -1, 0 +; GCN-IR-NEXT: s_mov_b64 s[12:13], s[8:9] ; GCN-IR-NEXT: s_and_b64 vcc, exec, s[20:21] ; GCN-IR-NEXT: s_cbranch_vccz .LBB0_3 ; GCN-IR-NEXT: .LBB0_4: ; %Flow7 @@ -389,25 +395,25 @@ define i64 @v_test_sdiv(i64 %x, i64 %y) { ; GCN-IR-LABEL: v_test_sdiv: ; GCN-IR: ; %bb.0: ; %_udiv-special-cases ; GCN-IR-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0) -; GCN-IR-NEXT: v_ashrrev_i32_e32 v12, 31, v1 -; GCN-IR-NEXT: v_xor_b32_e32 v0, v0, v12 -; GCN-IR-NEXT: v_ashrrev_i32_e32 v13, 31, v3 -; GCN-IR-NEXT: v_xor_b32_e32 v1, v1, v12 -; GCN-IR-NEXT: v_sub_i32_e32 v6, vcc, v0, v12 -; GCN-IR-NEXT: v_subb_u32_e32 v7, vcc, v1, v12, vcc -; GCN-IR-NEXT: v_xor_b32_e32 v0, v2, v13 -; GCN-IR-NEXT: v_xor_b32_e32 v1, v3, v13 -; GCN-IR-NEXT: v_sub_i32_e32 v0, vcc, v0, v13 -; GCN-IR-NEXT: v_subb_u32_e32 v1, vcc, v1, v13, vcc +; GCN-IR-NEXT: v_ashrrev_i32_e32 v10, 31, v1 +; GCN-IR-NEXT: v_xor_b32_e32 v0, v0, v10 +; GCN-IR-NEXT: v_ashrrev_i32_e32 v11, 31, v3 +; GCN-IR-NEXT: v_xor_b32_e32 v1, v1, v10 +; GCN-IR-NEXT: v_sub_i32_e32 v6, vcc, v0, v10 +; GCN-IR-NEXT: v_subb_u32_e32 v7, vcc, v1, v10, vcc +; GCN-IR-NEXT: v_xor_b32_e32 v0, v2, v11 +; GCN-IR-NEXT: v_xor_b32_e32 v1, v3, v11 +; GCN-IR-NEXT: v_sub_i32_e32 v0, vcc, v0, v11 +; GCN-IR-NEXT: v_subb_u32_e32 v1, vcc, v1, v11, vcc ; GCN-IR-NEXT: v_ffbh_u32_e32 v2, v0 ; GCN-IR-NEXT: v_add_i32_e64 v2, s[6:7], 32, v2 ; GCN-IR-NEXT: v_ffbh_u32_e32 v3, v1 -; GCN-IR-NEXT: v_min_u32_e32 v10, v2, v3 +; GCN-IR-NEXT: v_min_u32_e32 v8, v2, v3 ; GCN-IR-NEXT: v_ffbh_u32_e32 v2, v6 ; GCN-IR-NEXT: v_add_i32_e64 v2, 
s[6:7], 32, v2 ; GCN-IR-NEXT: v_ffbh_u32_e32 v3, v7 -; GCN-IR-NEXT: v_min_u32_e32 v11, v2, v3 -; GCN-IR-NEXT: v_sub_i32_e64 v2, s[6:7], v10, v11 +; GCN-IR-NEXT: v_min_u32_e32 v9, v2, v3 +; GCN-IR-NEXT: v_sub_i32_e64 v2, s[6:7], v8, v9 ; GCN-IR-NEXT: v_cmp_eq_u64_e32 vcc, 0, v[0:1] ; GCN-IR-NEXT: v_cmp_eq_u64_e64 s[4:5], 0, v[6:7] ; GCN-IR-NEXT: v_subb_u32_e64 v3, s[6:7], 0, 0, s[6:7] @@ -416,70 +422,69 @@ define i64 @v_test_sdiv(i64 %x, i64 %y) { ; GCN-IR-NEXT: s_or_b64 s[4:5], s[4:5], s[6:7] ; GCN-IR-NEXT: v_cmp_ne_u64_e32 vcc, 63, v[2:3] ; GCN-IR-NEXT: s_xor_b64 s[6:7], s[4:5], -1 -; GCN-IR-NEXT: v_mov_b32_e32 v14, v12 -; GCN-IR-NEXT: v_mov_b32_e32 v15, v13 +; GCN-IR-NEXT: v_mov_b32_e32 v12, v10 +; GCN-IR-NEXT: v_mov_b32_e32 v13, v11 ; GCN-IR-NEXT: v_cndmask_b32_e64 v5, v7, 0, s[4:5] ; GCN-IR-NEXT: v_cndmask_b32_e64 v4, v6, 0, s[4:5] ; GCN-IR-NEXT: s_and_b64 s[4:5], s[6:7], vcc ; GCN-IR-NEXT: s_and_saveexec_b64 s[6:7], s[4:5] ; GCN-IR-NEXT: s_cbranch_execz .LBB1_6 ; GCN-IR-NEXT: ; %bb.1: ; %udiv-bb1 -; GCN-IR-NEXT: v_add_i32_e32 v8, vcc, 1, v2 -; GCN-IR-NEXT: v_addc_u32_e32 v9, vcc, 0, v3, vcc +; GCN-IR-NEXT: v_add_i32_e32 v14, vcc, 1, v2 +; GCN-IR-NEXT: v_addc_u32_e32 v3, vcc, 0, v3, vcc ; GCN-IR-NEXT: v_sub_i32_e64 v2, s[4:5], 63, v2 -; GCN-IR-NEXT: v_mov_b32_e32 v4, 0 -; GCN-IR-NEXT: v_cmp_ne_u64_e32 vcc, 0, v[8:9] ; GCN-IR-NEXT: v_lshl_b64 v[2:3], v[6:7], v2 +; GCN-IR-NEXT: v_mov_b32_e32 v4, 0 ; GCN-IR-NEXT: v_mov_b32_e32 v5, 0 -; GCN-IR-NEXT: s_and_saveexec_b64 s[4:5], vcc -; GCN-IR-NEXT: s_xor_b64 s[8:9], exec, s[4:5] +; GCN-IR-NEXT: s_xor_b64 s[4:5], vcc, -1 +; GCN-IR-NEXT: s_and_saveexec_b64 s[8:9], s[4:5] +; GCN-IR-NEXT: s_xor_b64 s[4:5], exec, s[8:9] ; GCN-IR-NEXT: s_cbranch_execz .LBB1_5 ; GCN-IR-NEXT: ; %bb.2: ; %udiv-preheader -; GCN-IR-NEXT: v_add_i32_e32 v16, vcc, -1, v0 -; GCN-IR-NEXT: v_addc_u32_e32 v17, vcc, -1, v1, vcc -; GCN-IR-NEXT: v_not_b32_e32 v4, v10 -; GCN-IR-NEXT: v_lshr_b64 v[8:9], v[6:7], v8 -; GCN-IR-NEXT: v_add_i32_e32 v6, vcc, v4, v11 -; GCN-IR-NEXT: v_mov_b32_e32 v10, 0 -; GCN-IR-NEXT: v_addc_u32_e64 v7, s[4:5], -1, 0, vcc -; GCN-IR-NEXT: s_mov_b64 s[10:11], 0 -; GCN-IR-NEXT: v_mov_b32_e32 v11, 0 +; GCN-IR-NEXT: v_lshr_b64 v[6:7], v[6:7], v14 +; GCN-IR-NEXT: v_add_i32_e32 v14, vcc, -1, v0 +; GCN-IR-NEXT: v_addc_u32_e32 v15, vcc, -1, v1, vcc +; GCN-IR-NEXT: v_not_b32_e32 v4, v8 +; GCN-IR-NEXT: v_add_i32_e32 v16, vcc, v4, v9 +; GCN-IR-NEXT: v_addc_u32_e64 v17, s[8:9], -1, 0, vcc +; GCN-IR-NEXT: v_mov_b32_e32 v8, 0 +; GCN-IR-NEXT: s_mov_b64 s[8:9], 0 +; GCN-IR-NEXT: v_mov_b32_e32 v9, 0 ; GCN-IR-NEXT: v_mov_b32_e32 v5, 0 ; GCN-IR-NEXT: .LBB1_3: ; %udiv-do-while ; GCN-IR-NEXT: ; =>This Inner Loop Header: Depth=1 -; GCN-IR-NEXT: v_lshl_b64 v[8:9], v[8:9], 1 +; GCN-IR-NEXT: v_lshl_b64 v[6:7], v[6:7], 1 ; GCN-IR-NEXT: v_lshrrev_b32_e32 v4, 31, v3 -; GCN-IR-NEXT: v_or_b32_e32 v8, v8, v4 +; GCN-IR-NEXT: v_or_b32_e32 v6, v6, v4 ; GCN-IR-NEXT: v_lshl_b64 v[2:3], v[2:3], 1 -; GCN-IR-NEXT: v_sub_i32_e32 v4, vcc, v16, v8 -; GCN-IR-NEXT: v_subb_u32_e32 v4, vcc, v17, v9, vcc -; GCN-IR-NEXT: v_or_b32_e32 v2, v10, v2 -; GCN-IR-NEXT: v_ashrrev_i32_e32 v10, 31, v4 -; GCN-IR-NEXT: v_add_i32_e32 v6, vcc, 1, v6 -; GCN-IR-NEXT: v_or_b32_e32 v3, v11, v3 -; GCN-IR-NEXT: v_and_b32_e32 v4, 1, v10 -; GCN-IR-NEXT: v_and_b32_e32 v11, v10, v1 -; GCN-IR-NEXT: v_and_b32_e32 v10, v10, v0 -; GCN-IR-NEXT: v_addc_u32_e32 v7, vcc, 0, v7, vcc -; GCN-IR-NEXT: v_cmp_eq_u64_e32 vcc, 0, v[6:7] -; GCN-IR-NEXT: v_sub_i32_e64 v8, s[4:5], v8, v10 -; GCN-IR-NEXT: v_subb_u32_e64 v9, s[4:5], v9, v11, 
s[4:5] -; GCN-IR-NEXT: v_mov_b32_e32 v11, v5 -; GCN-IR-NEXT: s_or_b64 s[10:11], vcc, s[10:11] -; GCN-IR-NEXT: v_mov_b32_e32 v10, v4 -; GCN-IR-NEXT: s_andn2_b64 exec, exec, s[10:11] +; GCN-IR-NEXT: v_sub_i32_e32 v4, vcc, v14, v6 +; GCN-IR-NEXT: v_subb_u32_e32 v4, vcc, v15, v7, vcc +; GCN-IR-NEXT: v_or_b32_e32 v2, v8, v2 +; GCN-IR-NEXT: v_ashrrev_i32_e32 v8, 31, v4 +; GCN-IR-NEXT: v_or_b32_e32 v3, v9, v3 +; GCN-IR-NEXT: v_and_b32_e32 v4, 1, v8 +; GCN-IR-NEXT: v_and_b32_e32 v9, v8, v1 +; GCN-IR-NEXT: v_and_b32_e32 v8, v8, v0 +; GCN-IR-NEXT: v_sub_i32_e32 v6, vcc, v6, v8 +; GCN-IR-NEXT: v_subb_u32_e32 v7, vcc, v7, v9, vcc +; GCN-IR-NEXT: v_add_i32_e32 v16, vcc, 1, v16 +; GCN-IR-NEXT: v_addc_u32_e32 v17, vcc, 0, v17, vcc +; GCN-IR-NEXT: v_mov_b32_e32 v9, v5 +; GCN-IR-NEXT: s_or_b64 s[8:9], vcc, s[8:9] +; GCN-IR-NEXT: v_mov_b32_e32 v8, v4 +; GCN-IR-NEXT: s_andn2_b64 exec, exec, s[8:9] ; GCN-IR-NEXT: s_cbranch_execnz .LBB1_3 ; GCN-IR-NEXT: ; %bb.4: ; %Flow -; GCN-IR-NEXT: s_or_b64 exec, exec, s[10:11] -; GCN-IR-NEXT: .LBB1_5: ; %Flow4 ; GCN-IR-NEXT: s_or_b64 exec, exec, s[8:9] +; GCN-IR-NEXT: .LBB1_5: ; %Flow4 +; GCN-IR-NEXT: s_or_b64 exec, exec, s[4:5] ; GCN-IR-NEXT: v_lshl_b64 v[0:1], v[2:3], 1 ; GCN-IR-NEXT: v_or_b32_e32 v5, v5, v1 ; GCN-IR-NEXT: v_or_b32_e32 v4, v4, v0 ; GCN-IR-NEXT: .LBB1_6: ; %Flow5 ; GCN-IR-NEXT: s_or_b64 exec, exec, s[6:7] -; GCN-IR-NEXT: v_xor_b32_e32 v0, v13, v12 -; GCN-IR-NEXT: v_xor_b32_e32 v1, v15, v14 +; GCN-IR-NEXT: v_xor_b32_e32 v0, v11, v10 +; GCN-IR-NEXT: v_xor_b32_e32 v1, v13, v12 ; GCN-IR-NEXT: v_xor_b32_e32 v3, v4, v0 ; GCN-IR-NEXT: v_xor_b32_e32 v2, v5, v1 ; GCN-IR-NEXT: v_sub_i32_e32 v0, vcc, v3, v0 @@ -1293,34 +1298,37 @@ define amdgpu_kernel void @s_test_sdiv_k_num_i64(ptr addrspace(1) %out, i64 %x) ; GCN-IR-NEXT: s_xor_b64 s[2:3], s[2:3], s[4:5] ; GCN-IR-NEXT: s_sub_u32 s2, s2, s4 ; GCN-IR-NEXT: s_subb_u32 s3, s3, s4 -; GCN-IR-NEXT: s_flbit_i32_b64 s14, s[2:3] -; GCN-IR-NEXT: s_add_u32 s10, s14, 0xffffffc5 +; GCN-IR-NEXT: s_flbit_i32_b64 s16, s[2:3] +; GCN-IR-NEXT: s_add_u32 s10, s16, 0xffffffc5 ; GCN-IR-NEXT: s_addc_u32 s11, 0, -1 ; GCN-IR-NEXT: v_cmp_eq_u64_e64 s[8:9], s[2:3], 0 ; GCN-IR-NEXT: v_cmp_gt_u64_e64 s[12:13], s[10:11], 63 -; GCN-IR-NEXT: v_cmp_eq_u64_e64 s[16:17], s[10:11], 63 +; GCN-IR-NEXT: v_cmp_eq_u64_e64 s[14:15], s[10:11], 63 ; GCN-IR-NEXT: s_or_b64 s[12:13], s[8:9], s[12:13] ; GCN-IR-NEXT: s_and_b64 s[8:9], s[12:13], exec ; GCN-IR-NEXT: s_cselect_b32 s8, 0, 24 -; GCN-IR-NEXT: s_or_b64 s[12:13], s[12:13], s[16:17] +; GCN-IR-NEXT: s_or_b64 s[12:13], s[12:13], s[14:15] ; GCN-IR-NEXT: s_andn2_b64 vcc, exec, s[12:13] ; GCN-IR-NEXT: s_mov_b32 s9, 0 ; GCN-IR-NEXT: s_cbranch_vccz .LBB10_5 ; GCN-IR-NEXT: ; %bb.1: ; %udiv-bb1 ; GCN-IR-NEXT: s_add_u32 s12, s10, 1 -; GCN-IR-NEXT: s_addc_u32 s13, s11, 0 -; GCN-IR-NEXT: v_cmp_eq_u64_e64 s[8:9], s[12:13], 0 +; GCN-IR-NEXT: s_cselect_b64 s[8:9], -1, 0 +; GCN-IR-NEXT: s_or_b32 s8, s8, s9 +; GCN-IR-NEXT: s_cmp_lg_u32 s8, 0 +; GCN-IR-NEXT: s_addc_u32 s8, s11, 0 +; GCN-IR-NEXT: s_cselect_b64 s[8:9], -1, 0 ; GCN-IR-NEXT: s_sub_i32 s10, 63, s10 ; GCN-IR-NEXT: s_andn2_b64 vcc, exec, s[8:9] ; GCN-IR-NEXT: s_lshl_b64 s[8:9], 24, s10 ; GCN-IR-NEXT: s_cbranch_vccz .LBB10_4 ; GCN-IR-NEXT: ; %bb.2: ; %udiv-preheader ; GCN-IR-NEXT: s_lshr_b64 s[12:13], 24, s12 -; GCN-IR-NEXT: s_add_u32 s16, s2, -1 -; GCN-IR-NEXT: s_addc_u32 s17, s3, -1 -; GCN-IR-NEXT: s_sub_u32 s10, 58, s14 -; GCN-IR-NEXT: s_subb_u32 s11, 0, 0 -; GCN-IR-NEXT: s_mov_b64 s[14:15], 0 +; GCN-IR-NEXT: s_add_u32 s14, s2, -1 +; GCN-IR-NEXT: s_addc_u32 
s15, s3, -1 +; GCN-IR-NEXT: s_sub_u32 s16, 58, s16 +; GCN-IR-NEXT: s_subb_u32 s17, 0, 0 +; GCN-IR-NEXT: s_mov_b64 s[10:11], 0 ; GCN-IR-NEXT: s_mov_b32 s7, 0 ; GCN-IR-NEXT: .LBB10_3: ; %udiv-do-while ; GCN-IR-NEXT: ; =>This Inner Loop Header: Depth=1 @@ -1328,19 +1336,22 @@ define amdgpu_kernel void @s_test_sdiv_k_num_i64(ptr addrspace(1) %out, i64 %x) ; GCN-IR-NEXT: s_lshr_b32 s6, s9, 31 ; GCN-IR-NEXT: s_lshl_b64 s[8:9], s[8:9], 1 ; GCN-IR-NEXT: s_or_b64 s[12:13], s[12:13], s[6:7] -; GCN-IR-NEXT: s_or_b64 s[8:9], s[14:15], s[8:9] -; GCN-IR-NEXT: s_sub_u32 s6, s16, s12 -; GCN-IR-NEXT: s_subb_u32 s6, s17, s13 -; GCN-IR-NEXT: s_ashr_i32 s14, s6, 31 -; GCN-IR-NEXT: s_mov_b32 s15, s14 -; GCN-IR-NEXT: s_and_b32 s6, s14, 1 -; GCN-IR-NEXT: s_and_b64 s[14:15], s[14:15], s[2:3] -; GCN-IR-NEXT: s_sub_u32 s12, s12, s14 -; GCN-IR-NEXT: s_subb_u32 s13, s13, s15 -; GCN-IR-NEXT: s_add_u32 s10, s10, 1 -; GCN-IR-NEXT: s_addc_u32 s11, s11, 0 -; GCN-IR-NEXT: v_cmp_eq_u64_e64 s[18:19], s[10:11], 0 -; GCN-IR-NEXT: s_mov_b64 s[14:15], s[6:7] +; GCN-IR-NEXT: s_or_b64 s[8:9], s[10:11], s[8:9] +; GCN-IR-NEXT: s_sub_u32 s6, s14, s12 +; GCN-IR-NEXT: s_subb_u32 s6, s15, s13 +; GCN-IR-NEXT: s_ashr_i32 s10, s6, 31 +; GCN-IR-NEXT: s_mov_b32 s11, s10 +; GCN-IR-NEXT: s_and_b32 s6, s10, 1 +; GCN-IR-NEXT: s_and_b64 s[18:19], s[10:11], s[2:3] +; GCN-IR-NEXT: s_sub_u32 s12, s12, s18 +; GCN-IR-NEXT: s_subb_u32 s13, s13, s19 +; GCN-IR-NEXT: s_add_u32 s16, s16, 1 +; GCN-IR-NEXT: s_cselect_b64 s[18:19], -1, 0 +; GCN-IR-NEXT: s_or_b32 s18, s18, s19 +; GCN-IR-NEXT: s_cmp_lg_u32 s18, 0 +; GCN-IR-NEXT: s_addc_u32 s17, s17, 0 +; GCN-IR-NEXT: s_cselect_b64 s[18:19], -1, 0 +; GCN-IR-NEXT: s_mov_b64 s[10:11], s[6:7] ; GCN-IR-NEXT: s_and_b64 vcc, exec, s[18:19] ; GCN-IR-NEXT: s_cbranch_vccz .LBB10_3 ; GCN-IR-NEXT: .LBB10_4: ; %Flow6 @@ -1472,17 +1483,17 @@ define i64 @v_test_sdiv_k_num_i64(i64 %x) { ; GCN-IR-LABEL: v_test_sdiv_k_num_i64: ; GCN-IR: ; %bb.0: ; %_udiv-special-cases ; GCN-IR-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0) -; GCN-IR-NEXT: v_ashrrev_i32_e32 v12, 31, v1 -; GCN-IR-NEXT: v_xor_b32_e32 v0, v0, v12 -; GCN-IR-NEXT: v_xor_b32_e32 v1, v1, v12 -; GCN-IR-NEXT: v_sub_i32_e32 v0, vcc, v0, v12 -; GCN-IR-NEXT: v_subb_u32_e32 v1, vcc, v1, v12, vcc +; GCN-IR-NEXT: v_ashrrev_i32_e32 v10, 31, v1 +; GCN-IR-NEXT: v_xor_b32_e32 v0, v0, v10 +; GCN-IR-NEXT: v_xor_b32_e32 v1, v1, v10 +; GCN-IR-NEXT: v_sub_i32_e32 v0, vcc, v0, v10 +; GCN-IR-NEXT: v_subb_u32_e32 v1, vcc, v1, v10, vcc ; GCN-IR-NEXT: v_ffbh_u32_e32 v2, v0 ; GCN-IR-NEXT: v_add_i32_e32 v2, vcc, 32, v2 ; GCN-IR-NEXT: v_ffbh_u32_e32 v3, v1 -; GCN-IR-NEXT: v_min_u32_e32 v10, v2, v3 +; GCN-IR-NEXT: v_min_u32_e32 v8, v2, v3 ; GCN-IR-NEXT: s_movk_i32 s6, 0xffc5 -; GCN-IR-NEXT: v_add_i32_e32 v2, vcc, s6, v10 +; GCN-IR-NEXT: v_add_i32_e32 v2, vcc, s6, v8 ; GCN-IR-NEXT: v_addc_u32_e64 v3, s[6:7], 0, -1, vcc ; GCN-IR-NEXT: v_cmp_eq_u64_e64 s[4:5], 0, v[0:1] ; GCN-IR-NEXT: v_cmp_lt_u64_e32 vcc, 63, v[2:3] @@ -1490,69 +1501,68 @@ define i64 @v_test_sdiv_k_num_i64(i64 %x) { ; GCN-IR-NEXT: s_or_b64 s[4:5], s[4:5], vcc ; GCN-IR-NEXT: v_cndmask_b32_e64 v4, 24, 0, s[4:5] ; GCN-IR-NEXT: s_xor_b64 s[4:5], s[4:5], -1 -; GCN-IR-NEXT: v_mov_b32_e32 v13, v12 +; GCN-IR-NEXT: v_mov_b32_e32 v11, v10 ; GCN-IR-NEXT: v_mov_b32_e32 v5, 0 ; GCN-IR-NEXT: s_and_b64 s[4:5], s[4:5], s[6:7] ; GCN-IR-NEXT: s_and_saveexec_b64 s[6:7], s[4:5] ; GCN-IR-NEXT: s_cbranch_execz .LBB11_6 ; GCN-IR-NEXT: ; %bb.1: ; %udiv-bb1 ; GCN-IR-NEXT: v_add_i32_e32 v6, vcc, 1, v2 -; GCN-IR-NEXT: v_addc_u32_e32 v7, vcc, 0, v3, vcc +; 
GCN-IR-NEXT: v_addc_u32_e32 v3, vcc, 0, v3, vcc ; GCN-IR-NEXT: v_sub_i32_e64 v2, s[4:5], 63, v2 -; GCN-IR-NEXT: v_mov_b32_e32 v4, 0 -; GCN-IR-NEXT: v_cmp_ne_u64_e32 vcc, 0, v[6:7] ; GCN-IR-NEXT: v_lshl_b64 v[2:3], 24, v2 +; GCN-IR-NEXT: v_mov_b32_e32 v4, 0 ; GCN-IR-NEXT: v_mov_b32_e32 v5, 0 -; GCN-IR-NEXT: s_and_saveexec_b64 s[4:5], vcc -; GCN-IR-NEXT: s_xor_b64 s[8:9], exec, s[4:5] +; GCN-IR-NEXT: s_xor_b64 s[4:5], vcc, -1 +; GCN-IR-NEXT: s_and_saveexec_b64 s[8:9], s[4:5] +; GCN-IR-NEXT: s_xor_b64 s[4:5], exec, s[8:9] ; GCN-IR-NEXT: s_cbranch_execz .LBB11_5 ; GCN-IR-NEXT: ; %bb.2: ; %udiv-preheader -; GCN-IR-NEXT: v_add_i32_e32 v14, vcc, -1, v0 -; GCN-IR-NEXT: v_addc_u32_e32 v15, vcc, -1, v1, vcc -; GCN-IR-NEXT: v_lshr_b64 v[8:9], 24, v6 -; GCN-IR-NEXT: v_sub_i32_e32 v6, vcc, 58, v10 -; GCN-IR-NEXT: v_mov_b32_e32 v10, 0 -; GCN-IR-NEXT: v_subb_u32_e64 v7, s[4:5], 0, 0, vcc -; GCN-IR-NEXT: s_mov_b64 s[10:11], 0 -; GCN-IR-NEXT: v_mov_b32_e32 v11, 0 +; GCN-IR-NEXT: v_add_i32_e32 v12, vcc, -1, v0 +; GCN-IR-NEXT: v_addc_u32_e32 v13, vcc, -1, v1, vcc +; GCN-IR-NEXT: v_sub_i32_e32 v14, vcc, 58, v8 +; GCN-IR-NEXT: v_lshr_b64 v[6:7], 24, v6 +; GCN-IR-NEXT: v_subb_u32_e64 v15, s[8:9], 0, 0, vcc +; GCN-IR-NEXT: v_mov_b32_e32 v8, 0 +; GCN-IR-NEXT: s_mov_b64 s[8:9], 0 +; GCN-IR-NEXT: v_mov_b32_e32 v9, 0 ; GCN-IR-NEXT: v_mov_b32_e32 v5, 0 ; GCN-IR-NEXT: .LBB11_3: ; %udiv-do-while ; GCN-IR-NEXT: ; =>This Inner Loop Header: Depth=1 -; GCN-IR-NEXT: v_lshl_b64 v[8:9], v[8:9], 1 +; GCN-IR-NEXT: v_lshl_b64 v[6:7], v[6:7], 1 ; GCN-IR-NEXT: v_lshrrev_b32_e32 v4, 31, v3 -; GCN-IR-NEXT: v_or_b32_e32 v8, v8, v4 +; GCN-IR-NEXT: v_or_b32_e32 v6, v6, v4 ; GCN-IR-NEXT: v_lshl_b64 v[2:3], v[2:3], 1 -; GCN-IR-NEXT: v_sub_i32_e32 v4, vcc, v14, v8 -; GCN-IR-NEXT: v_subb_u32_e32 v4, vcc, v15, v9, vcc -; GCN-IR-NEXT: v_or_b32_e32 v2, v10, v2 -; GCN-IR-NEXT: v_ashrrev_i32_e32 v10, 31, v4 -; GCN-IR-NEXT: v_add_i32_e32 v6, vcc, 1, v6 -; GCN-IR-NEXT: v_or_b32_e32 v3, v11, v3 -; GCN-IR-NEXT: v_and_b32_e32 v4, 1, v10 -; GCN-IR-NEXT: v_and_b32_e32 v11, v10, v1 -; GCN-IR-NEXT: v_and_b32_e32 v10, v10, v0 -; GCN-IR-NEXT: v_addc_u32_e32 v7, vcc, 0, v7, vcc -; GCN-IR-NEXT: v_cmp_eq_u64_e32 vcc, 0, v[6:7] -; GCN-IR-NEXT: v_sub_i32_e64 v8, s[4:5], v8, v10 -; GCN-IR-NEXT: v_subb_u32_e64 v9, s[4:5], v9, v11, s[4:5] -; GCN-IR-NEXT: v_mov_b32_e32 v11, v5 -; GCN-IR-NEXT: s_or_b64 s[10:11], vcc, s[10:11] -; GCN-IR-NEXT: v_mov_b32_e32 v10, v4 -; GCN-IR-NEXT: s_andn2_b64 exec, exec, s[10:11] +; GCN-IR-NEXT: v_sub_i32_e32 v4, vcc, v12, v6 +; GCN-IR-NEXT: v_subb_u32_e32 v4, vcc, v13, v7, vcc +; GCN-IR-NEXT: v_or_b32_e32 v2, v8, v2 +; GCN-IR-NEXT: v_ashrrev_i32_e32 v8, 31, v4 +; GCN-IR-NEXT: v_or_b32_e32 v3, v9, v3 +; GCN-IR-NEXT: v_and_b32_e32 v4, 1, v8 +; GCN-IR-NEXT: v_and_b32_e32 v9, v8, v1 +; GCN-IR-NEXT: v_and_b32_e32 v8, v8, v0 +; GCN-IR-NEXT: v_sub_i32_e32 v6, vcc, v6, v8 +; GCN-IR-NEXT: v_subb_u32_e32 v7, vcc, v7, v9, vcc +; GCN-IR-NEXT: v_add_i32_e32 v14, vcc, 1, v14 +; GCN-IR-NEXT: v_addc_u32_e32 v15, vcc, 0, v15, vcc +; GCN-IR-NEXT: v_mov_b32_e32 v9, v5 +; GCN-IR-NEXT: s_or_b64 s[8:9], vcc, s[8:9] +; GCN-IR-NEXT: v_mov_b32_e32 v8, v4 +; GCN-IR-NEXT: s_andn2_b64 exec, exec, s[8:9] ; GCN-IR-NEXT: s_cbranch_execnz .LBB11_3 ; GCN-IR-NEXT: ; %bb.4: ; %Flow -; GCN-IR-NEXT: s_or_b64 exec, exec, s[10:11] -; GCN-IR-NEXT: .LBB11_5: ; %Flow4 ; GCN-IR-NEXT: s_or_b64 exec, exec, s[8:9] +; GCN-IR-NEXT: .LBB11_5: ; %Flow4 +; GCN-IR-NEXT: s_or_b64 exec, exec, s[4:5] ; GCN-IR-NEXT: v_lshl_b64 v[0:1], v[2:3], 1 ; GCN-IR-NEXT: v_or_b32_e32 v5, v5, v1 ; 
GCN-IR-NEXT: v_or_b32_e32 v4, v4, v0 ; GCN-IR-NEXT: .LBB11_6: ; %Flow5 ; GCN-IR-NEXT: s_or_b64 exec, exec, s[6:7] -; GCN-IR-NEXT: v_xor_b32_e32 v0, v4, v12 -; GCN-IR-NEXT: v_xor_b32_e32 v1, v5, v13 -; GCN-IR-NEXT: v_sub_i32_e32 v0, vcc, v0, v12 -; GCN-IR-NEXT: v_subb_u32_e32 v1, vcc, v1, v13, vcc +; GCN-IR-NEXT: v_xor_b32_e32 v0, v4, v10 +; GCN-IR-NEXT: v_xor_b32_e32 v1, v5, v11 +; GCN-IR-NEXT: v_sub_i32_e32 v0, vcc, v0, v10 +; GCN-IR-NEXT: v_subb_u32_e32 v1, vcc, v1, v11, vcc ; GCN-IR-NEXT: s_setpc_b64 s[30:31] %result = sdiv i64 24, %x ret i64 %result @@ -1665,17 +1675,17 @@ define i64 @v_test_sdiv_pow2_k_num_i64(i64 %x) { ; GCN-IR-LABEL: v_test_sdiv_pow2_k_num_i64: ; GCN-IR: ; %bb.0: ; %_udiv-special-cases ; GCN-IR-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0) -; GCN-IR-NEXT: v_ashrrev_i32_e32 v12, 31, v1 -; GCN-IR-NEXT: v_xor_b32_e32 v0, v0, v12 -; GCN-IR-NEXT: v_xor_b32_e32 v1, v1, v12 -; GCN-IR-NEXT: v_sub_i32_e32 v0, vcc, v0, v12 -; GCN-IR-NEXT: v_subb_u32_e32 v1, vcc, v1, v12, vcc +; GCN-IR-NEXT: v_ashrrev_i32_e32 v10, 31, v1 +; GCN-IR-NEXT: v_xor_b32_e32 v0, v0, v10 +; GCN-IR-NEXT: v_xor_b32_e32 v1, v1, v10 +; GCN-IR-NEXT: v_sub_i32_e32 v0, vcc, v0, v10 +; GCN-IR-NEXT: v_subb_u32_e32 v1, vcc, v1, v10, vcc ; GCN-IR-NEXT: v_ffbh_u32_e32 v2, v0 ; GCN-IR-NEXT: v_add_i32_e32 v2, vcc, 32, v2 ; GCN-IR-NEXT: v_ffbh_u32_e32 v3, v1 -; GCN-IR-NEXT: v_min_u32_e32 v10, v2, v3 +; GCN-IR-NEXT: v_min_u32_e32 v8, v2, v3 ; GCN-IR-NEXT: s_movk_i32 s6, 0xffd0 -; GCN-IR-NEXT: v_add_i32_e32 v2, vcc, s6, v10 +; GCN-IR-NEXT: v_add_i32_e32 v2, vcc, s6, v8 ; GCN-IR-NEXT: v_addc_u32_e64 v3, s[6:7], 0, -1, vcc ; GCN-IR-NEXT: v_cmp_eq_u64_e64 s[4:5], 0, v[0:1] ; GCN-IR-NEXT: v_cmp_lt_u64_e32 vcc, 63, v[2:3] @@ -1684,70 +1694,69 @@ define i64 @v_test_sdiv_pow2_k_num_i64(i64 %x) { ; GCN-IR-NEXT: s_or_b64 s[4:5], s[4:5], vcc ; GCN-IR-NEXT: v_cndmask_b32_e64 v4, v4, 0, s[4:5] ; GCN-IR-NEXT: s_xor_b64 s[4:5], s[4:5], -1 -; GCN-IR-NEXT: v_mov_b32_e32 v13, v12 +; GCN-IR-NEXT: v_mov_b32_e32 v11, v10 ; GCN-IR-NEXT: v_mov_b32_e32 v5, 0 ; GCN-IR-NEXT: s_and_b64 s[4:5], s[4:5], s[6:7] ; GCN-IR-NEXT: s_and_saveexec_b64 s[6:7], s[4:5] ; GCN-IR-NEXT: s_cbranch_execz .LBB12_6 ; GCN-IR-NEXT: ; %bb.1: ; %udiv-bb1 ; GCN-IR-NEXT: v_add_i32_e32 v6, vcc, 1, v2 +; GCN-IR-NEXT: v_addc_u32_e32 v3, vcc, 0, v3, vcc ; GCN-IR-NEXT: v_sub_i32_e64 v2, s[4:5], 63, v2 -; GCN-IR-NEXT: v_addc_u32_e32 v7, vcc, 0, v3, vcc -; GCN-IR-NEXT: s_mov_b64 s[4:5], 0x8000 +; GCN-IR-NEXT: s_mov_b64 s[8:9], 0x8000 +; GCN-IR-NEXT: v_lshl_b64 v[2:3], s[8:9], v2 ; GCN-IR-NEXT: v_mov_b32_e32 v4, 0 -; GCN-IR-NEXT: v_cmp_ne_u64_e32 vcc, 0, v[6:7] -; GCN-IR-NEXT: v_lshl_b64 v[2:3], s[4:5], v2 ; GCN-IR-NEXT: v_mov_b32_e32 v5, 0 -; GCN-IR-NEXT: s_and_saveexec_b64 s[8:9], vcc -; GCN-IR-NEXT: s_xor_b64 s[8:9], exec, s[8:9] +; GCN-IR-NEXT: s_xor_b64 s[4:5], vcc, -1 +; GCN-IR-NEXT: s_and_saveexec_b64 s[10:11], s[4:5] +; GCN-IR-NEXT: s_xor_b64 s[4:5], exec, s[10:11] ; GCN-IR-NEXT: s_cbranch_execz .LBB12_5 ; GCN-IR-NEXT: ; %bb.2: ; %udiv-preheader -; GCN-IR-NEXT: v_add_i32_e32 v14, vcc, -1, v0 -; GCN-IR-NEXT: v_addc_u32_e32 v15, vcc, -1, v1, vcc -; GCN-IR-NEXT: v_lshr_b64 v[8:9], s[4:5], v6 -; GCN-IR-NEXT: v_sub_i32_e32 v6, vcc, 47, v10 -; GCN-IR-NEXT: v_mov_b32_e32 v10, 0 -; GCN-IR-NEXT: v_subb_u32_e64 v7, s[4:5], 0, 0, vcc -; GCN-IR-NEXT: s_mov_b64 s[10:11], 0 -; GCN-IR-NEXT: v_mov_b32_e32 v11, 0 +; GCN-IR-NEXT: v_add_i32_e32 v12, vcc, -1, v0 +; GCN-IR-NEXT: v_addc_u32_e32 v13, vcc, -1, v1, vcc +; GCN-IR-NEXT: v_sub_i32_e32 v14, vcc, 47, v8 +; GCN-IR-NEXT: v_lshr_b64 
v[6:7], s[8:9], v6 +; GCN-IR-NEXT: v_subb_u32_e64 v15, s[8:9], 0, 0, vcc +; GCN-IR-NEXT: v_mov_b32_e32 v8, 0 +; GCN-IR-NEXT: s_mov_b64 s[8:9], 0 +; GCN-IR-NEXT: v_mov_b32_e32 v9, 0 ; GCN-IR-NEXT: v_mov_b32_e32 v5, 0 ; GCN-IR-NEXT: .LBB12_3: ; %udiv-do-while ; GCN-IR-NEXT: ; =>This Inner Loop Header: Depth=1 -; GCN-IR-NEXT: v_lshl_b64 v[8:9], v[8:9], 1 +; GCN-IR-NEXT: v_lshl_b64 v[6:7], v[6:7], 1 ; GCN-IR-NEXT: v_lshrrev_b32_e32 v4, 31, v3 -; GCN-IR-NEXT: v_or_b32_e32 v8, v8, v4 +; GCN-IR-NEXT: v_or_b32_e32 v6, v6, v4 ; GCN-IR-NEXT: v_lshl_b64 v[2:3], v[2:3], 1 -; GCN-IR-NEXT: v_sub_i32_e32 v4, vcc, v14, v8 -; GCN-IR-NEXT: v_subb_u32_e32 v4, vcc, v15, v9, vcc -; GCN-IR-NEXT: v_or_b32_e32 v2, v10, v2 -; GCN-IR-NEXT: v_ashrrev_i32_e32 v10, 31, v4 -; GCN-IR-NEXT: v_add_i32_e32 v6, vcc, 1, v6 -; GCN-IR-NEXT: v_or_b32_e32 v3, v11, v3 -; GCN-IR-NEXT: v_and_b32_e32 v4, 1, v10 -; GCN-IR-NEXT: v_and_b32_e32 v11, v10, v1 -; GCN-IR-NEXT: v_and_b32_e32 v10, v10, v0 -; GCN-IR-NEXT: v_addc_u32_e32 v7, vcc, 0, v7, vcc -; GCN-IR-NEXT: v_cmp_eq_u64_e32 vcc, 0, v[6:7] -; GCN-IR-NEXT: v_sub_i32_e64 v8, s[4:5], v8, v10 -; GCN-IR-NEXT: v_subb_u32_e64 v9, s[4:5], v9, v11, s[4:5] -; GCN-IR-NEXT: v_mov_b32_e32 v11, v5 -; GCN-IR-NEXT: s_or_b64 s[10:11], vcc, s[10:11] -; GCN-IR-NEXT: v_mov_b32_e32 v10, v4 -; GCN-IR-NEXT: s_andn2_b64 exec, exec, s[10:11] +; GCN-IR-NEXT: v_sub_i32_e32 v4, vcc, v12, v6 +; GCN-IR-NEXT: v_subb_u32_e32 v4, vcc, v13, v7, vcc +; GCN-IR-NEXT: v_or_b32_e32 v2, v8, v2 +; GCN-IR-NEXT: v_ashrrev_i32_e32 v8, 31, v4 +; GCN-IR-NEXT: v_or_b32_e32 v3, v9, v3 +; GCN-IR-NEXT: v_and_b32_e32 v4, 1, v8 +; GCN-IR-NEXT: v_and_b32_e32 v9, v8, v1 +; GCN-IR-NEXT: v_and_b32_e32 v8, v8, v0 +; GCN-IR-NEXT: v_sub_i32_e32 v6, vcc, v6, v8 +; GCN-IR-NEXT: v_subb_u32_e32 v7, vcc, v7, v9, vcc +; GCN-IR-NEXT: v_add_i32_e32 v14, vcc, 1, v14 +; GCN-IR-NEXT: v_addc_u32_e32 v15, vcc, 0, v15, vcc +; GCN-IR-NEXT: v_mov_b32_e32 v9, v5 +; GCN-IR-NEXT: s_or_b64 s[8:9], vcc, s[8:9] +; GCN-IR-NEXT: v_mov_b32_e32 v8, v4 +; GCN-IR-NEXT: s_andn2_b64 exec, exec, s[8:9] ; GCN-IR-NEXT: s_cbranch_execnz .LBB12_3 ; GCN-IR-NEXT: ; %bb.4: ; %Flow -; GCN-IR-NEXT: s_or_b64 exec, exec, s[10:11] -; GCN-IR-NEXT: .LBB12_5: ; %Flow4 ; GCN-IR-NEXT: s_or_b64 exec, exec, s[8:9] +; GCN-IR-NEXT: .LBB12_5: ; %Flow4 +; GCN-IR-NEXT: s_or_b64 exec, exec, s[4:5] ; GCN-IR-NEXT: v_lshl_b64 v[0:1], v[2:3], 1 ; GCN-IR-NEXT: v_or_b32_e32 v5, v5, v1 ; GCN-IR-NEXT: v_or_b32_e32 v4, v4, v0 ; GCN-IR-NEXT: .LBB12_6: ; %Flow5 ; GCN-IR-NEXT: s_or_b64 exec, exec, s[6:7] -; GCN-IR-NEXT: v_xor_b32_e32 v0, v4, v12 -; GCN-IR-NEXT: v_xor_b32_e32 v1, v5, v13 -; GCN-IR-NEXT: v_sub_i32_e32 v0, vcc, v0, v12 -; GCN-IR-NEXT: v_subb_u32_e32 v1, vcc, v1, v13, vcc +; GCN-IR-NEXT: v_xor_b32_e32 v0, v4, v10 +; GCN-IR-NEXT: v_xor_b32_e32 v1, v5, v11 +; GCN-IR-NEXT: v_sub_i32_e32 v0, vcc, v0, v10 +; GCN-IR-NEXT: v_subb_u32_e32 v1, vcc, v1, v11, vcc ; GCN-IR-NEXT: s_setpc_b64 s[30:31] %result = sdiv i64 32768, %x ret i64 %result @@ -1767,20 +1776,20 @@ define i64 @v_test_sdiv_pow2_k_den_i64(i64 %x) { ; GCN-IR-LABEL: v_test_sdiv_pow2_k_den_i64: ; GCN-IR: ; %bb.0: ; %_udiv-special-cases ; GCN-IR-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0) -; GCN-IR-NEXT: v_ashrrev_i32_e32 v10, 31, v1 -; GCN-IR-NEXT: v_xor_b32_e32 v0, v0, v10 -; GCN-IR-NEXT: v_xor_b32_e32 v1, v1, v10 -; GCN-IR-NEXT: v_sub_i32_e32 v4, vcc, v0, v10 -; GCN-IR-NEXT: v_subb_u32_e32 v5, vcc, v1, v10, vcc +; GCN-IR-NEXT: v_ashrrev_i32_e32 v8, 31, v1 +; GCN-IR-NEXT: v_xor_b32_e32 v0, v0, v8 +; GCN-IR-NEXT: v_xor_b32_e32 v1, v1, v8 
+; GCN-IR-NEXT: v_sub_i32_e32 v4, vcc, v0, v8 +; GCN-IR-NEXT: v_subb_u32_e32 v5, vcc, v1, v8, vcc ; GCN-IR-NEXT: v_ffbh_u32_e32 v0, v4 ; GCN-IR-NEXT: v_add_i32_e64 v0, s[4:5], 32, v0 ; GCN-IR-NEXT: v_ffbh_u32_e32 v1, v5 -; GCN-IR-NEXT: v_min_u32_e32 v8, v0, v1 -; GCN-IR-NEXT: v_sub_i32_e64 v0, s[4:5], 48, v8 +; GCN-IR-NEXT: v_min_u32_e32 v6, v0, v1 +; GCN-IR-NEXT: v_sub_i32_e64 v0, s[4:5], 48, v6 ; GCN-IR-NEXT: v_subb_u32_e64 v1, s[4:5], 0, 0, s[4:5] ; GCN-IR-NEXT: v_cmp_eq_u64_e32 vcc, 0, v[4:5] ; GCN-IR-NEXT: v_cmp_lt_u64_e64 s[4:5], 63, v[0:1] -; GCN-IR-NEXT: v_mov_b32_e32 v11, v10 +; GCN-IR-NEXT: v_mov_b32_e32 v9, v8 ; GCN-IR-NEXT: s_or_b64 s[4:5], vcc, s[4:5] ; GCN-IR-NEXT: v_cmp_ne_u64_e32 vcc, 63, v[0:1] ; GCN-IR-NEXT: s_xor_b64 s[6:7], s[4:5], -1 @@ -1790,61 +1799,60 @@ define i64 @v_test_sdiv_pow2_k_den_i64(i64 %x) { ; GCN-IR-NEXT: s_and_saveexec_b64 s[6:7], s[4:5] ; GCN-IR-NEXT: s_cbranch_execz .LBB13_6 ; GCN-IR-NEXT: ; %bb.1: ; %udiv-bb1 -; GCN-IR-NEXT: v_add_i32_e32 v6, vcc, 1, v0 -; GCN-IR-NEXT: v_addc_u32_e32 v7, vcc, 0, v1, vcc +; GCN-IR-NEXT: v_add_i32_e32 v7, vcc, 1, v0 +; GCN-IR-NEXT: v_addc_u32_e32 v1, vcc, 0, v1, vcc ; GCN-IR-NEXT: v_sub_i32_e64 v0, s[4:5], 63, v0 -; GCN-IR-NEXT: v_mov_b32_e32 v2, 0 -; GCN-IR-NEXT: v_cmp_ne_u64_e32 vcc, 0, v[6:7] ; GCN-IR-NEXT: v_lshl_b64 v[0:1], v[4:5], v0 +; GCN-IR-NEXT: v_mov_b32_e32 v2, 0 ; GCN-IR-NEXT: v_mov_b32_e32 v3, 0 -; GCN-IR-NEXT: s_and_saveexec_b64 s[4:5], vcc -; GCN-IR-NEXT: s_xor_b64 s[8:9], exec, s[4:5] +; GCN-IR-NEXT: s_xor_b64 s[4:5], vcc, -1 +; GCN-IR-NEXT: s_and_saveexec_b64 s[8:9], s[4:5] +; GCN-IR-NEXT: s_xor_b64 s[4:5], exec, s[8:9] ; GCN-IR-NEXT: s_cbranch_execz .LBB13_5 ; GCN-IR-NEXT: ; %bb.2: ; %udiv-preheader -; GCN-IR-NEXT: v_lshr_b64 v[6:7], v[4:5], v6 -; GCN-IR-NEXT: v_add_i32_e32 v4, vcc, 0xffffffcf, v8 -; GCN-IR-NEXT: v_mov_b32_e32 v8, 0 -; GCN-IR-NEXT: v_addc_u32_e64 v5, s[4:5], 0, -1, vcc -; GCN-IR-NEXT: s_mov_b64 s[10:11], 0 -; GCN-IR-NEXT: v_mov_b32_e32 v9, 0 +; GCN-IR-NEXT: v_add_i32_e32 v10, vcc, 0xffffffcf, v6 +; GCN-IR-NEXT: v_lshr_b64 v[4:5], v[4:5], v7 +; GCN-IR-NEXT: v_addc_u32_e64 v11, s[8:9], 0, -1, vcc +; GCN-IR-NEXT: v_mov_b32_e32 v6, 0 +; GCN-IR-NEXT: s_mov_b64 s[8:9], 0 +; GCN-IR-NEXT: v_mov_b32_e32 v7, 0 ; GCN-IR-NEXT: v_mov_b32_e32 v3, 0 -; GCN-IR-NEXT: s_movk_i32 s12, 0x7fff +; GCN-IR-NEXT: s_movk_i32 s10, 0x7fff ; GCN-IR-NEXT: .LBB13_3: ; %udiv-do-while ; GCN-IR-NEXT: ; =>This Inner Loop Header: Depth=1 -; GCN-IR-NEXT: v_lshl_b64 v[6:7], v[6:7], 1 +; GCN-IR-NEXT: v_lshl_b64 v[4:5], v[4:5], 1 ; GCN-IR-NEXT: v_lshrrev_b32_e32 v2, 31, v1 -; GCN-IR-NEXT: v_or_b32_e32 v6, v6, v2 -; GCN-IR-NEXT: v_sub_i32_e32 v2, vcc, s12, v6 +; GCN-IR-NEXT: v_or_b32_e32 v4, v4, v2 ; GCN-IR-NEXT: v_lshl_b64 v[0:1], v[0:1], 1 -; GCN-IR-NEXT: v_subb_u32_e32 v2, vcc, 0, v7, vcc -; GCN-IR-NEXT: v_add_i32_e32 v4, vcc, 1, v4 -; GCN-IR-NEXT: v_or_b32_e32 v0, v8, v0 -; GCN-IR-NEXT: v_ashrrev_i32_e32 v8, 31, v2 -; GCN-IR-NEXT: v_addc_u32_e32 v5, vcc, 0, v5, vcc -; GCN-IR-NEXT: v_and_b32_e32 v2, 1, v8 -; GCN-IR-NEXT: v_and_b32_e32 v8, 0x8000, v8 -; GCN-IR-NEXT: v_cmp_eq_u64_e32 vcc, 0, v[4:5] -; GCN-IR-NEXT: v_or_b32_e32 v1, v9, v1 -; GCN-IR-NEXT: v_sub_i32_e64 v6, s[4:5], v6, v8 -; GCN-IR-NEXT: v_mov_b32_e32 v9, v3 -; GCN-IR-NEXT: v_subbrev_u32_e64 v7, s[4:5], 0, v7, s[4:5] -; GCN-IR-NEXT: s_or_b64 s[10:11], vcc, s[10:11] -; GCN-IR-NEXT: v_mov_b32_e32 v8, v2 -; GCN-IR-NEXT: s_andn2_b64 exec, exec, s[10:11] +; GCN-IR-NEXT: v_sub_i32_e32 v2, vcc, s10, v4 +; GCN-IR-NEXT: v_subb_u32_e32 v2, vcc, 0, v5, vcc +; 
GCN-IR-NEXT: v_or_b32_e32 v0, v6, v0 +; GCN-IR-NEXT: v_ashrrev_i32_e32 v6, 31, v2 +; GCN-IR-NEXT: v_and_b32_e32 v2, 1, v6 +; GCN-IR-NEXT: v_and_b32_e32 v6, 0x8000, v6 +; GCN-IR-NEXT: v_sub_i32_e32 v4, vcc, v4, v6 +; GCN-IR-NEXT: v_subbrev_u32_e32 v5, vcc, 0, v5, vcc +; GCN-IR-NEXT: v_add_i32_e32 v10, vcc, 1, v10 +; GCN-IR-NEXT: v_or_b32_e32 v1, v7, v1 +; GCN-IR-NEXT: v_addc_u32_e32 v11, vcc, 0, v11, vcc +; GCN-IR-NEXT: v_mov_b32_e32 v7, v3 +; GCN-IR-NEXT: s_or_b64 s[8:9], vcc, s[8:9] +; GCN-IR-NEXT: v_mov_b32_e32 v6, v2 +; GCN-IR-NEXT: s_andn2_b64 exec, exec, s[8:9] ; GCN-IR-NEXT: s_cbranch_execnz .LBB13_3 ; GCN-IR-NEXT: ; %bb.4: ; %Flow -; GCN-IR-NEXT: s_or_b64 exec, exec, s[10:11] -; GCN-IR-NEXT: .LBB13_5: ; %Flow4 ; GCN-IR-NEXT: s_or_b64 exec, exec, s[8:9] +; GCN-IR-NEXT: .LBB13_5: ; %Flow4 +; GCN-IR-NEXT: s_or_b64 exec, exec, s[4:5] ; GCN-IR-NEXT: v_lshl_b64 v[0:1], v[0:1], 1 ; GCN-IR-NEXT: v_or_b32_e32 v3, v3, v1 ; GCN-IR-NEXT: v_or_b32_e32 v2, v2, v0 ; GCN-IR-NEXT: .LBB13_6: ; %Flow5 ; GCN-IR-NEXT: s_or_b64 exec, exec, s[6:7] -; GCN-IR-NEXT: v_xor_b32_e32 v0, v2, v10 -; GCN-IR-NEXT: v_xor_b32_e32 v1, v3, v11 -; GCN-IR-NEXT: v_sub_i32_e32 v0, vcc, v0, v10 -; GCN-IR-NEXT: v_subb_u32_e32 v1, vcc, v1, v11, vcc +; GCN-IR-NEXT: v_xor_b32_e32 v0, v2, v8 +; GCN-IR-NEXT: v_xor_b32_e32 v1, v3, v9 +; GCN-IR-NEXT: v_sub_i32_e32 v0, vcc, v0, v8 +; GCN-IR-NEXT: v_subb_u32_e32 v1, vcc, v1, v9, vcc ; GCN-IR-NEXT: s_setpc_b64 s[30:31] %result = sdiv i64 %x, 32768 ret i64 %result diff --git a/llvm/test/CodeGen/AMDGPU/srem64.ll b/llvm/test/CodeGen/AMDGPU/srem64.ll index 465024a..33b0a5d 100644 --- a/llvm/test/CodeGen/AMDGPU/srem64.ll +++ b/llvm/test/CodeGen/AMDGPU/srem64.ll @@ -170,35 +170,38 @@ define amdgpu_kernel void @s_test_srem(ptr addrspace(1) %out, i64 %x, i64 %y) { ; GCN-IR-NEXT: v_cmp_eq_u64_e64 s[8:9], s[6:7], 0 ; GCN-IR-NEXT: v_cmp_eq_u64_e64 s[12:13], s[2:3], 0 ; GCN-IR-NEXT: s_flbit_i32_b64 s10, s[6:7] -; GCN-IR-NEXT: s_flbit_i32_b64 s18, s[2:3] +; GCN-IR-NEXT: s_flbit_i32_b64 s16, s[2:3] ; GCN-IR-NEXT: s_or_b64 s[8:9], s[8:9], s[12:13] -; GCN-IR-NEXT: s_sub_u32 s12, s10, s18 +; GCN-IR-NEXT: s_sub_u32 s12, s10, s16 ; GCN-IR-NEXT: s_subb_u32 s13, 0, 0 ; GCN-IR-NEXT: v_cmp_gt_u64_e64 s[14:15], s[12:13], 63 -; GCN-IR-NEXT: v_cmp_eq_u64_e64 s[16:17], s[12:13], 63 +; GCN-IR-NEXT: v_cmp_eq_u64_e64 s[18:19], s[12:13], 63 ; GCN-IR-NEXT: s_or_b64 s[14:15], s[8:9], s[14:15] ; GCN-IR-NEXT: s_and_b64 s[8:9], s[14:15], exec ; GCN-IR-NEXT: s_cselect_b32 s9, 0, s3 ; GCN-IR-NEXT: s_cselect_b32 s8, 0, s2 -; GCN-IR-NEXT: s_or_b64 s[14:15], s[14:15], s[16:17] +; GCN-IR-NEXT: s_or_b64 s[14:15], s[14:15], s[18:19] ; GCN-IR-NEXT: s_andn2_b64 vcc, exec, s[14:15] ; GCN-IR-NEXT: s_cbranch_vccz .LBB0_5 ; GCN-IR-NEXT: ; %bb.1: ; %udiv-bb1 ; GCN-IR-NEXT: s_add_u32 s14, s12, 1 -; GCN-IR-NEXT: s_addc_u32 s15, s13, 0 -; GCN-IR-NEXT: v_cmp_eq_u64_e64 s[8:9], s[14:15], 0 +; GCN-IR-NEXT: s_cselect_b64 s[8:9], -1, 0 +; GCN-IR-NEXT: s_or_b32 s8, s8, s9 +; GCN-IR-NEXT: s_cmp_lg_u32 s8, 0 +; GCN-IR-NEXT: s_addc_u32 s8, s13, 0 +; GCN-IR-NEXT: s_cselect_b64 s[8:9], -1, 0 ; GCN-IR-NEXT: s_sub_i32 s12, 63, s12 ; GCN-IR-NEXT: s_andn2_b64 vcc, exec, s[8:9] ; GCN-IR-NEXT: s_lshl_b64 s[8:9], s[2:3], s12 ; GCN-IR-NEXT: s_cbranch_vccz .LBB0_4 ; GCN-IR-NEXT: ; %bb.2: ; %udiv-preheader ; GCN-IR-NEXT: s_lshr_b64 s[12:13], s[2:3], s14 -; GCN-IR-NEXT: s_add_u32 s16, s6, -1 -; GCN-IR-NEXT: s_addc_u32 s17, s7, -1 +; GCN-IR-NEXT: s_add_u32 s14, s6, -1 +; GCN-IR-NEXT: s_addc_u32 s15, s7, -1 ; GCN-IR-NEXT: s_not_b64 s[4:5], s[10:11] -; 
GCN-IR-NEXT: s_add_u32 s10, s4, s18 -; GCN-IR-NEXT: s_addc_u32 s11, s5, 0 -; GCN-IR-NEXT: s_mov_b64 s[14:15], 0 +; GCN-IR-NEXT: s_add_u32 s16, s4, s16 +; GCN-IR-NEXT: s_addc_u32 s17, s5, 0 +; GCN-IR-NEXT: s_mov_b64 s[10:11], 0 ; GCN-IR-NEXT: s_mov_b32 s5, 0 ; GCN-IR-NEXT: .LBB0_3: ; %udiv-do-while ; GCN-IR-NEXT: ; =>This Inner Loop Header: Depth=1 @@ -206,19 +209,22 @@ define amdgpu_kernel void @s_test_srem(ptr addrspace(1) %out, i64 %x, i64 %y) { ; GCN-IR-NEXT: s_lshr_b32 s4, s9, 31 ; GCN-IR-NEXT: s_lshl_b64 s[8:9], s[8:9], 1 ; GCN-IR-NEXT: s_or_b64 s[12:13], s[12:13], s[4:5] -; GCN-IR-NEXT: s_or_b64 s[8:9], s[14:15], s[8:9] -; GCN-IR-NEXT: s_sub_u32 s4, s16, s12 -; GCN-IR-NEXT: s_subb_u32 s4, s17, s13 -; GCN-IR-NEXT: s_ashr_i32 s14, s4, 31 -; GCN-IR-NEXT: s_mov_b32 s15, s14 -; GCN-IR-NEXT: s_and_b32 s4, s14, 1 -; GCN-IR-NEXT: s_and_b64 s[14:15], s[14:15], s[6:7] -; GCN-IR-NEXT: s_sub_u32 s12, s12, s14 -; GCN-IR-NEXT: s_subb_u32 s13, s13, s15 -; GCN-IR-NEXT: s_add_u32 s10, s10, 1 -; GCN-IR-NEXT: s_addc_u32 s11, s11, 0 -; GCN-IR-NEXT: v_cmp_eq_u64_e64 s[18:19], s[10:11], 0 -; GCN-IR-NEXT: s_mov_b64 s[14:15], s[4:5] +; GCN-IR-NEXT: s_or_b64 s[8:9], s[10:11], s[8:9] +; GCN-IR-NEXT: s_sub_u32 s4, s14, s12 +; GCN-IR-NEXT: s_subb_u32 s4, s15, s13 +; GCN-IR-NEXT: s_ashr_i32 s10, s4, 31 +; GCN-IR-NEXT: s_mov_b32 s11, s10 +; GCN-IR-NEXT: s_and_b32 s4, s10, 1 +; GCN-IR-NEXT: s_and_b64 s[18:19], s[10:11], s[6:7] +; GCN-IR-NEXT: s_sub_u32 s12, s12, s18 +; GCN-IR-NEXT: s_subb_u32 s13, s13, s19 +; GCN-IR-NEXT: s_add_u32 s16, s16, 1 +; GCN-IR-NEXT: s_cselect_b64 s[18:19], -1, 0 +; GCN-IR-NEXT: s_or_b32 s18, s18, s19 +; GCN-IR-NEXT: s_cmp_lg_u32 s18, 0 +; GCN-IR-NEXT: s_addc_u32 s17, s17, 0 +; GCN-IR-NEXT: s_cselect_b64 s[18:19], -1, 0 +; GCN-IR-NEXT: s_mov_b64 s[10:11], s[4:5] ; GCN-IR-NEXT: s_and_b64 vcc, exec, s[18:19] ; GCN-IR-NEXT: s_cbranch_vccz .LBB0_3 ; GCN-IR-NEXT: .LBB0_4: ; %Flow7 @@ -373,12 +379,12 @@ define i64 @v_test_srem(i64 %x, i64 %y) { ; GCN-IR-LABEL: v_test_srem: ; GCN-IR: ; %bb.0: ; %_udiv-special-cases ; GCN-IR-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0) -; GCN-IR-NEXT: v_ashrrev_i32_e32 v14, 31, v1 -; GCN-IR-NEXT: v_xor_b32_e32 v0, v0, v14 -; GCN-IR-NEXT: v_xor_b32_e32 v1, v1, v14 -; GCN-IR-NEXT: v_sub_i32_e32 v0, vcc, v0, v14 +; GCN-IR-NEXT: v_ashrrev_i32_e32 v12, 31, v1 +; GCN-IR-NEXT: v_xor_b32_e32 v0, v0, v12 +; GCN-IR-NEXT: v_xor_b32_e32 v1, v1, v12 +; GCN-IR-NEXT: v_sub_i32_e32 v0, vcc, v0, v12 ; GCN-IR-NEXT: v_ashrrev_i32_e32 v4, 31, v3 -; GCN-IR-NEXT: v_subb_u32_e32 v1, vcc, v1, v14, vcc +; GCN-IR-NEXT: v_subb_u32_e32 v1, vcc, v1, v12, vcc ; GCN-IR-NEXT: v_xor_b32_e32 v2, v2, v4 ; GCN-IR-NEXT: v_xor_b32_e32 v3, v3, v4 ; GCN-IR-NEXT: v_sub_i32_e32 v2, vcc, v2, v4 @@ -386,12 +392,12 @@ define i64 @v_test_srem(i64 %x, i64 %y) { ; GCN-IR-NEXT: v_ffbh_u32_e32 v4, v2 ; GCN-IR-NEXT: v_add_i32_e64 v4, s[6:7], 32, v4 ; GCN-IR-NEXT: v_ffbh_u32_e32 v5, v3 -; GCN-IR-NEXT: v_min_u32_e32 v12, v4, v5 +; GCN-IR-NEXT: v_min_u32_e32 v10, v4, v5 ; GCN-IR-NEXT: v_ffbh_u32_e32 v4, v0 ; GCN-IR-NEXT: v_add_i32_e64 v4, s[6:7], 32, v4 ; GCN-IR-NEXT: v_ffbh_u32_e32 v5, v1 -; GCN-IR-NEXT: v_min_u32_e32 v13, v4, v5 -; GCN-IR-NEXT: v_sub_i32_e64 v4, s[6:7], v12, v13 +; GCN-IR-NEXT: v_min_u32_e32 v11, v4, v5 +; GCN-IR-NEXT: v_sub_i32_e64 v4, s[6:7], v10, v11 ; GCN-IR-NEXT: v_cmp_eq_u64_e32 vcc, 0, v[2:3] ; GCN-IR-NEXT: v_cmp_eq_u64_e64 s[4:5], 0, v[0:1] ; GCN-IR-NEXT: v_subb_u32_e64 v5, s[6:7], 0, 0, s[6:7] @@ -400,7 +406,7 @@ define i64 @v_test_srem(i64 %x, i64 %y) { ; GCN-IR-NEXT: s_or_b64 s[4:5], 
s[4:5], s[6:7] ; GCN-IR-NEXT: v_cmp_ne_u64_e32 vcc, 63, v[4:5] ; GCN-IR-NEXT: s_xor_b64 s[6:7], s[4:5], -1 -; GCN-IR-NEXT: v_mov_b32_e32 v15, v14 +; GCN-IR-NEXT: v_mov_b32_e32 v13, v12 ; GCN-IR-NEXT: v_cndmask_b32_e64 v7, v1, 0, s[4:5] ; GCN-IR-NEXT: v_cndmask_b32_e64 v6, v0, 0, s[4:5] ; GCN-IR-NEXT: s_and_b64 s[4:5], s[6:7], vcc @@ -408,54 +414,53 @@ define i64 @v_test_srem(i64 %x, i64 %y) { ; GCN-IR-NEXT: s_cbranch_execz .LBB1_6 ; GCN-IR-NEXT: ; %bb.1: ; %udiv-bb1 ; GCN-IR-NEXT: v_add_i32_e32 v8, vcc, 1, v4 -; GCN-IR-NEXT: v_addc_u32_e32 v9, vcc, 0, v5, vcc +; GCN-IR-NEXT: v_addc_u32_e32 v5, vcc, 0, v5, vcc ; GCN-IR-NEXT: v_sub_i32_e64 v4, s[4:5], 63, v4 -; GCN-IR-NEXT: v_mov_b32_e32 v6, 0 -; GCN-IR-NEXT: v_cmp_ne_u64_e32 vcc, 0, v[8:9] ; GCN-IR-NEXT: v_lshl_b64 v[4:5], v[0:1], v4 +; GCN-IR-NEXT: v_mov_b32_e32 v6, 0 ; GCN-IR-NEXT: v_mov_b32_e32 v7, 0 -; GCN-IR-NEXT: s_and_saveexec_b64 s[4:5], vcc -; GCN-IR-NEXT: s_xor_b64 s[8:9], exec, s[4:5] +; GCN-IR-NEXT: s_xor_b64 s[4:5], vcc, -1 +; GCN-IR-NEXT: s_and_saveexec_b64 s[8:9], s[4:5] +; GCN-IR-NEXT: s_xor_b64 s[4:5], exec, s[8:9] ; GCN-IR-NEXT: s_cbranch_execz .LBB1_5 ; GCN-IR-NEXT: ; %bb.2: ; %udiv-preheader -; GCN-IR-NEXT: v_add_i32_e32 v16, vcc, -1, v2 -; GCN-IR-NEXT: v_addc_u32_e32 v17, vcc, -1, v3, vcc -; GCN-IR-NEXT: v_not_b32_e32 v6, v12 -; GCN-IR-NEXT: v_lshr_b64 v[10:11], v[0:1], v8 -; GCN-IR-NEXT: v_add_i32_e32 v8, vcc, v6, v13 -; GCN-IR-NEXT: v_mov_b32_e32 v12, 0 -; GCN-IR-NEXT: v_addc_u32_e64 v9, s[4:5], -1, 0, vcc -; GCN-IR-NEXT: s_mov_b64 s[10:11], 0 -; GCN-IR-NEXT: v_mov_b32_e32 v13, 0 +; GCN-IR-NEXT: v_add_i32_e32 v14, vcc, -1, v2 +; GCN-IR-NEXT: v_addc_u32_e32 v15, vcc, -1, v3, vcc +; GCN-IR-NEXT: v_not_b32_e32 v6, v10 +; GCN-IR-NEXT: v_add_i32_e32 v16, vcc, v6, v11 +; GCN-IR-NEXT: v_lshr_b64 v[8:9], v[0:1], v8 +; GCN-IR-NEXT: v_addc_u32_e64 v17, s[8:9], -1, 0, vcc +; GCN-IR-NEXT: v_mov_b32_e32 v10, 0 +; GCN-IR-NEXT: s_mov_b64 s[8:9], 0 +; GCN-IR-NEXT: v_mov_b32_e32 v11, 0 ; GCN-IR-NEXT: v_mov_b32_e32 v7, 0 ; GCN-IR-NEXT: .LBB1_3: ; %udiv-do-while ; GCN-IR-NEXT: ; =>This Inner Loop Header: Depth=1 -; GCN-IR-NEXT: v_lshl_b64 v[10:11], v[10:11], 1 +; GCN-IR-NEXT: v_lshl_b64 v[8:9], v[8:9], 1 ; GCN-IR-NEXT: v_lshrrev_b32_e32 v6, 31, v5 -; GCN-IR-NEXT: v_or_b32_e32 v10, v10, v6 +; GCN-IR-NEXT: v_or_b32_e32 v8, v8, v6 ; GCN-IR-NEXT: v_lshl_b64 v[4:5], v[4:5], 1 -; GCN-IR-NEXT: v_sub_i32_e32 v6, vcc, v16, v10 -; GCN-IR-NEXT: v_subb_u32_e32 v6, vcc, v17, v11, vcc -; GCN-IR-NEXT: v_or_b32_e32 v4, v12, v4 -; GCN-IR-NEXT: v_ashrrev_i32_e32 v12, 31, v6 -; GCN-IR-NEXT: v_add_i32_e32 v8, vcc, 1, v8 -; GCN-IR-NEXT: v_or_b32_e32 v5, v13, v5 -; GCN-IR-NEXT: v_and_b32_e32 v6, 1, v12 -; GCN-IR-NEXT: v_and_b32_e32 v13, v12, v3 -; GCN-IR-NEXT: v_and_b32_e32 v12, v12, v2 -; GCN-IR-NEXT: v_addc_u32_e32 v9, vcc, 0, v9, vcc -; GCN-IR-NEXT: v_cmp_eq_u64_e32 vcc, 0, v[8:9] -; GCN-IR-NEXT: v_sub_i32_e64 v10, s[4:5], v10, v12 -; GCN-IR-NEXT: v_subb_u32_e64 v11, s[4:5], v11, v13, s[4:5] -; GCN-IR-NEXT: v_mov_b32_e32 v13, v7 -; GCN-IR-NEXT: s_or_b64 s[10:11], vcc, s[10:11] -; GCN-IR-NEXT: v_mov_b32_e32 v12, v6 -; GCN-IR-NEXT: s_andn2_b64 exec, exec, s[10:11] +; GCN-IR-NEXT: v_sub_i32_e32 v6, vcc, v14, v8 +; GCN-IR-NEXT: v_subb_u32_e32 v6, vcc, v15, v9, vcc +; GCN-IR-NEXT: v_or_b32_e32 v4, v10, v4 +; GCN-IR-NEXT: v_ashrrev_i32_e32 v10, 31, v6 +; GCN-IR-NEXT: v_or_b32_e32 v5, v11, v5 +; GCN-IR-NEXT: v_and_b32_e32 v6, 1, v10 +; GCN-IR-NEXT: v_and_b32_e32 v11, v10, v3 +; GCN-IR-NEXT: v_and_b32_e32 v10, v10, v2 +; GCN-IR-NEXT: v_sub_i32_e32 v8, vcc, v8, 
v10 +; GCN-IR-NEXT: v_subb_u32_e32 v9, vcc, v9, v11, vcc +; GCN-IR-NEXT: v_add_i32_e32 v16, vcc, 1, v16 +; GCN-IR-NEXT: v_addc_u32_e32 v17, vcc, 0, v17, vcc +; GCN-IR-NEXT: v_mov_b32_e32 v11, v7 +; GCN-IR-NEXT: s_or_b64 s[8:9], vcc, s[8:9] +; GCN-IR-NEXT: v_mov_b32_e32 v10, v6 +; GCN-IR-NEXT: s_andn2_b64 exec, exec, s[8:9] ; GCN-IR-NEXT: s_cbranch_execnz .LBB1_3 ; GCN-IR-NEXT: ; %bb.4: ; %Flow -; GCN-IR-NEXT: s_or_b64 exec, exec, s[10:11] -; GCN-IR-NEXT: .LBB1_5: ; %Flow4 ; GCN-IR-NEXT: s_or_b64 exec, exec, s[8:9] +; GCN-IR-NEXT: .LBB1_5: ; %Flow4 +; GCN-IR-NEXT: s_or_b64 exec, exec, s[4:5] ; GCN-IR-NEXT: v_lshl_b64 v[4:5], v[4:5], 1 ; GCN-IR-NEXT: v_or_b32_e32 v7, v7, v5 ; GCN-IR-NEXT: v_or_b32_e32 v6, v6, v4 @@ -469,10 +474,10 @@ define i64 @v_test_srem(i64 %x, i64 %y) { ; GCN-IR-NEXT: v_add_i32_e32 v3, vcc, v4, v3 ; GCN-IR-NEXT: v_sub_i32_e32 v0, vcc, v0, v2 ; GCN-IR-NEXT: v_subb_u32_e32 v1, vcc, v1, v3, vcc -; GCN-IR-NEXT: v_xor_b32_e32 v0, v0, v14 -; GCN-IR-NEXT: v_xor_b32_e32 v1, v1, v15 -; GCN-IR-NEXT: v_sub_i32_e32 v0, vcc, v0, v14 -; GCN-IR-NEXT: v_subb_u32_e32 v1, vcc, v1, v15, vcc +; GCN-IR-NEXT: v_xor_b32_e32 v0, v0, v12 +; GCN-IR-NEXT: v_xor_b32_e32 v1, v1, v13 +; GCN-IR-NEXT: v_sub_i32_e32 v0, vcc, v0, v12 +; GCN-IR-NEXT: v_subb_u32_e32 v1, vcc, v1, v13, vcc ; GCN-IR-NEXT: s_setpc_b64 s[30:31] %result = srem i64 %x, %y ret i64 %result @@ -1148,35 +1153,38 @@ define amdgpu_kernel void @s_test_srem33_64(ptr addrspace(1) %out, i64 %x, i64 % ; GCN-IR-NEXT: v_cmp_eq_u64_e64 s[2:3], s[8:9], 0 ; GCN-IR-NEXT: s_flbit_i32_b64 s12, s[8:9] ; GCN-IR-NEXT: s_or_b64 s[10:11], s[2:3], s[10:11] -; GCN-IR-NEXT: s_flbit_i32_b64 s20, s[6:7] -; GCN-IR-NEXT: s_sub_u32 s14, s12, s20 +; GCN-IR-NEXT: s_flbit_i32_b64 s18, s[6:7] +; GCN-IR-NEXT: s_sub_u32 s14, s12, s18 ; GCN-IR-NEXT: s_subb_u32 s15, 0, 0 ; GCN-IR-NEXT: v_cmp_gt_u64_e64 s[16:17], s[14:15], 63 -; GCN-IR-NEXT: v_cmp_eq_u64_e64 s[18:19], s[14:15], 63 +; GCN-IR-NEXT: v_cmp_eq_u64_e64 s[20:21], s[14:15], 63 ; GCN-IR-NEXT: s_or_b64 s[16:17], s[10:11], s[16:17] ; GCN-IR-NEXT: s_and_b64 s[10:11], s[16:17], exec ; GCN-IR-NEXT: s_cselect_b32 s11, 0, s7 ; GCN-IR-NEXT: s_cselect_b32 s10, 0, s6 -; GCN-IR-NEXT: s_or_b64 s[16:17], s[16:17], s[18:19] +; GCN-IR-NEXT: s_or_b64 s[16:17], s[16:17], s[20:21] ; GCN-IR-NEXT: s_mov_b64 s[2:3], 0 ; GCN-IR-NEXT: s_andn2_b64 vcc, exec, s[16:17] ; GCN-IR-NEXT: s_cbranch_vccz .LBB8_5 ; GCN-IR-NEXT: ; %bb.1: ; %udiv-bb1 ; GCN-IR-NEXT: s_add_u32 s16, s14, 1 -; GCN-IR-NEXT: s_addc_u32 s17, s15, 0 -; GCN-IR-NEXT: v_cmp_eq_u64_e64 s[10:11], s[16:17], 0 +; GCN-IR-NEXT: s_cselect_b64 s[10:11], -1, 0 +; GCN-IR-NEXT: s_or_b32 s10, s10, s11 +; GCN-IR-NEXT: s_cmp_lg_u32 s10, 0 +; GCN-IR-NEXT: s_addc_u32 s10, s15, 0 +; GCN-IR-NEXT: s_cselect_b64 s[10:11], -1, 0 ; GCN-IR-NEXT: s_sub_i32 s14, 63, s14 ; GCN-IR-NEXT: s_andn2_b64 vcc, exec, s[10:11] ; GCN-IR-NEXT: s_lshl_b64 s[10:11], s[6:7], s14 ; GCN-IR-NEXT: s_cbranch_vccz .LBB8_4 ; GCN-IR-NEXT: ; %bb.2: ; %udiv-preheader ; GCN-IR-NEXT: s_lshr_b64 s[14:15], s[6:7], s16 -; GCN-IR-NEXT: s_add_u32 s18, s8, -1 -; GCN-IR-NEXT: s_addc_u32 s19, s9, -1 +; GCN-IR-NEXT: s_add_u32 s16, s8, -1 +; GCN-IR-NEXT: s_addc_u32 s17, s9, -1 ; GCN-IR-NEXT: s_not_b64 s[2:3], s[12:13] -; GCN-IR-NEXT: s_add_u32 s12, s2, s20 -; GCN-IR-NEXT: s_addc_u32 s13, s3, 0 -; GCN-IR-NEXT: s_mov_b64 s[16:17], 0 +; GCN-IR-NEXT: s_add_u32 s18, s2, s18 +; GCN-IR-NEXT: s_addc_u32 s19, s3, 0 +; GCN-IR-NEXT: s_mov_b64 s[12:13], 0 ; GCN-IR-NEXT: s_mov_b32 s3, 0 ; GCN-IR-NEXT: .LBB8_3: ; %udiv-do-while ; GCN-IR-NEXT: ; 
=>This Inner Loop Header: Depth=1 @@ -1184,19 +1192,22 @@ define amdgpu_kernel void @s_test_srem33_64(ptr addrspace(1) %out, i64 %x, i64 % ; GCN-IR-NEXT: s_lshr_b32 s2, s11, 31 ; GCN-IR-NEXT: s_lshl_b64 s[10:11], s[10:11], 1 ; GCN-IR-NEXT: s_or_b64 s[14:15], s[14:15], s[2:3] -; GCN-IR-NEXT: s_or_b64 s[10:11], s[16:17], s[10:11] -; GCN-IR-NEXT: s_sub_u32 s2, s18, s14 -; GCN-IR-NEXT: s_subb_u32 s2, s19, s15 -; GCN-IR-NEXT: s_ashr_i32 s16, s2, 31 -; GCN-IR-NEXT: s_mov_b32 s17, s16 -; GCN-IR-NEXT: s_and_b32 s2, s16, 1 -; GCN-IR-NEXT: s_and_b64 s[16:17], s[16:17], s[8:9] -; GCN-IR-NEXT: s_sub_u32 s14, s14, s16 -; GCN-IR-NEXT: s_subb_u32 s15, s15, s17 -; GCN-IR-NEXT: s_add_u32 s12, s12, 1 -; GCN-IR-NEXT: s_addc_u32 s13, s13, 0 -; GCN-IR-NEXT: v_cmp_eq_u64_e64 s[20:21], s[12:13], 0 -; GCN-IR-NEXT: s_mov_b64 s[16:17], s[2:3] +; GCN-IR-NEXT: s_or_b64 s[10:11], s[12:13], s[10:11] +; GCN-IR-NEXT: s_sub_u32 s2, s16, s14 +; GCN-IR-NEXT: s_subb_u32 s2, s17, s15 +; GCN-IR-NEXT: s_ashr_i32 s12, s2, 31 +; GCN-IR-NEXT: s_mov_b32 s13, s12 +; GCN-IR-NEXT: s_and_b32 s2, s12, 1 +; GCN-IR-NEXT: s_and_b64 s[20:21], s[12:13], s[8:9] +; GCN-IR-NEXT: s_sub_u32 s14, s14, s20 +; GCN-IR-NEXT: s_subb_u32 s15, s15, s21 +; GCN-IR-NEXT: s_add_u32 s18, s18, 1 +; GCN-IR-NEXT: s_cselect_b64 s[20:21], -1, 0 +; GCN-IR-NEXT: s_or_b32 s20, s20, s21 +; GCN-IR-NEXT: s_cmp_lg_u32 s20, 0 +; GCN-IR-NEXT: s_addc_u32 s19, s19, 0 +; GCN-IR-NEXT: s_cselect_b64 s[20:21], -1, 0 +; GCN-IR-NEXT: s_mov_b64 s[12:13], s[2:3] ; GCN-IR-NEXT: s_and_b64 vcc, exec, s[20:21] ; GCN-IR-NEXT: s_cbranch_vccz .LBB8_3 ; GCN-IR-NEXT: .LBB8_4: ; %Flow7 @@ -1461,34 +1472,37 @@ define amdgpu_kernel void @s_test_srem_k_num_i64(ptr addrspace(1) %out, i64 %x) ; GCN-IR-NEXT: s_xor_b64 s[2:3], s[2:3], s[8:9] ; GCN-IR-NEXT: s_sub_u32 s4, s2, s8 ; GCN-IR-NEXT: s_subb_u32 s5, s3, s8 -; GCN-IR-NEXT: s_flbit_i32_b64 s12, s[4:5] -; GCN-IR-NEXT: s_add_u32 s2, s12, 0xffffffc5 +; GCN-IR-NEXT: s_flbit_i32_b64 s14, s[4:5] +; GCN-IR-NEXT: s_add_u32 s2, s14, 0xffffffc5 ; GCN-IR-NEXT: s_addc_u32 s3, 0, -1 ; GCN-IR-NEXT: v_cmp_eq_u64_e64 s[8:9], s[4:5], 0 ; GCN-IR-NEXT: v_cmp_gt_u64_e64 s[10:11], s[2:3], 63 -; GCN-IR-NEXT: v_cmp_eq_u64_e64 s[14:15], s[2:3], 63 +; GCN-IR-NEXT: v_cmp_eq_u64_e64 s[12:13], s[2:3], 63 ; GCN-IR-NEXT: s_or_b64 s[10:11], s[8:9], s[10:11] ; GCN-IR-NEXT: s_and_b64 s[8:9], s[10:11], exec ; GCN-IR-NEXT: s_cselect_b32 s8, 0, 24 -; GCN-IR-NEXT: s_or_b64 s[10:11], s[10:11], s[14:15] +; GCN-IR-NEXT: s_or_b64 s[10:11], s[10:11], s[12:13] ; GCN-IR-NEXT: s_andn2_b64 vcc, exec, s[10:11] ; GCN-IR-NEXT: s_mov_b32 s9, 0 ; GCN-IR-NEXT: s_cbranch_vccz .LBB10_5 ; GCN-IR-NEXT: ; %bb.1: ; %udiv-bb1 ; GCN-IR-NEXT: s_add_u32 s8, s2, 1 -; GCN-IR-NEXT: s_addc_u32 s9, s3, 0 -; GCN-IR-NEXT: v_cmp_eq_u64_e64 s[10:11], s[8:9], 0 +; GCN-IR-NEXT: s_cselect_b64 s[10:11], -1, 0 +; GCN-IR-NEXT: s_or_b32 s9, s10, s11 +; GCN-IR-NEXT: s_cmp_lg_u32 s9, 0 +; GCN-IR-NEXT: s_addc_u32 s3, s3, 0 +; GCN-IR-NEXT: s_cselect_b64 s[10:11], -1, 0 ; GCN-IR-NEXT: s_sub_i32 s2, 63, s2 ; GCN-IR-NEXT: s_andn2_b64 vcc, exec, s[10:11] ; GCN-IR-NEXT: s_lshl_b64 s[2:3], 24, s2 ; GCN-IR-NEXT: s_cbranch_vccz .LBB10_4 ; GCN-IR-NEXT: ; %bb.2: ; %udiv-preheader ; GCN-IR-NEXT: s_lshr_b64 s[10:11], 24, s8 -; GCN-IR-NEXT: s_add_u32 s14, s4, -1 -; GCN-IR-NEXT: s_addc_u32 s15, s5, -1 -; GCN-IR-NEXT: s_sub_u32 s8, 58, s12 -; GCN-IR-NEXT: s_subb_u32 s9, 0, 0 -; GCN-IR-NEXT: s_mov_b64 s[12:13], 0 +; GCN-IR-NEXT: s_add_u32 s12, s4, -1 +; GCN-IR-NEXT: s_addc_u32 s13, s5, -1 +; GCN-IR-NEXT: s_sub_u32 s14, 58, s14 +; 
GCN-IR-NEXT: s_subb_u32 s15, 0, 0 +; GCN-IR-NEXT: s_mov_b64 s[8:9], 0 ; GCN-IR-NEXT: s_mov_b32 s7, 0 ; GCN-IR-NEXT: .LBB10_3: ; %udiv-do-while ; GCN-IR-NEXT: ; =>This Inner Loop Header: Depth=1 @@ -1496,19 +1510,22 @@ define amdgpu_kernel void @s_test_srem_k_num_i64(ptr addrspace(1) %out, i64 %x) ; GCN-IR-NEXT: s_lshr_b32 s6, s3, 31 ; GCN-IR-NEXT: s_lshl_b64 s[2:3], s[2:3], 1 ; GCN-IR-NEXT: s_or_b64 s[10:11], s[10:11], s[6:7] -; GCN-IR-NEXT: s_or_b64 s[2:3], s[12:13], s[2:3] -; GCN-IR-NEXT: s_sub_u32 s6, s14, s10 -; GCN-IR-NEXT: s_subb_u32 s6, s15, s11 -; GCN-IR-NEXT: s_ashr_i32 s12, s6, 31 -; GCN-IR-NEXT: s_mov_b32 s13, s12 -; GCN-IR-NEXT: s_and_b32 s6, s12, 1 -; GCN-IR-NEXT: s_and_b64 s[12:13], s[12:13], s[4:5] -; GCN-IR-NEXT: s_sub_u32 s10, s10, s12 -; GCN-IR-NEXT: s_subb_u32 s11, s11, s13 -; GCN-IR-NEXT: s_add_u32 s8, s8, 1 -; GCN-IR-NEXT: s_addc_u32 s9, s9, 0 -; GCN-IR-NEXT: v_cmp_eq_u64_e64 s[16:17], s[8:9], 0 -; GCN-IR-NEXT: s_mov_b64 s[12:13], s[6:7] +; GCN-IR-NEXT: s_or_b64 s[2:3], s[8:9], s[2:3] +; GCN-IR-NEXT: s_sub_u32 s6, s12, s10 +; GCN-IR-NEXT: s_subb_u32 s6, s13, s11 +; GCN-IR-NEXT: s_ashr_i32 s8, s6, 31 +; GCN-IR-NEXT: s_mov_b32 s9, s8 +; GCN-IR-NEXT: s_and_b32 s6, s8, 1 +; GCN-IR-NEXT: s_and_b64 s[16:17], s[8:9], s[4:5] +; GCN-IR-NEXT: s_sub_u32 s10, s10, s16 +; GCN-IR-NEXT: s_subb_u32 s11, s11, s17 +; GCN-IR-NEXT: s_add_u32 s14, s14, 1 +; GCN-IR-NEXT: s_cselect_b64 s[16:17], -1, 0 +; GCN-IR-NEXT: s_or_b32 s16, s16, s17 +; GCN-IR-NEXT: s_cmp_lg_u32 s16, 0 +; GCN-IR-NEXT: s_addc_u32 s15, s15, 0 +; GCN-IR-NEXT: s_cselect_b64 s[16:17], -1, 0 +; GCN-IR-NEXT: s_mov_b64 s[8:9], s[6:7] ; GCN-IR-NEXT: s_and_b64 vcc, exec, s[16:17] ; GCN-IR-NEXT: s_cbranch_vccz .LBB10_3 ; GCN-IR-NEXT: .LBB10_4: ; %Flow6 @@ -1647,9 +1664,9 @@ define i64 @v_test_srem_k_num_i64(i64 %x) { ; GCN-IR-NEXT: v_ffbh_u32_e32 v2, v0 ; GCN-IR-NEXT: v_add_i32_e32 v2, vcc, 32, v2 ; GCN-IR-NEXT: v_ffbh_u32_e32 v3, v1 -; GCN-IR-NEXT: v_min_u32_e32 v10, v2, v3 +; GCN-IR-NEXT: v_min_u32_e32 v8, v2, v3 ; GCN-IR-NEXT: s_movk_i32 s6, 0xffc5 -; GCN-IR-NEXT: v_add_i32_e32 v2, vcc, s6, v10 +; GCN-IR-NEXT: v_add_i32_e32 v2, vcc, s6, v8 ; GCN-IR-NEXT: v_addc_u32_e64 v3, s[6:7], 0, -1, vcc ; GCN-IR-NEXT: v_cmp_eq_u64_e64 s[4:5], 0, v[0:1] ; GCN-IR-NEXT: v_cmp_lt_u64_e32 vcc, 63, v[2:3] @@ -1663,53 +1680,52 @@ define i64 @v_test_srem_k_num_i64(i64 %x) { ; GCN-IR-NEXT: s_cbranch_execz .LBB11_6 ; GCN-IR-NEXT: ; %bb.1: ; %udiv-bb1 ; GCN-IR-NEXT: v_add_i32_e32 v6, vcc, 1, v2 -; GCN-IR-NEXT: v_addc_u32_e32 v7, vcc, 0, v3, vcc +; GCN-IR-NEXT: v_addc_u32_e32 v3, vcc, 0, v3, vcc ; GCN-IR-NEXT: v_sub_i32_e64 v2, s[4:5], 63, v2 -; GCN-IR-NEXT: v_mov_b32_e32 v4, 0 -; GCN-IR-NEXT: v_cmp_ne_u64_e32 vcc, 0, v[6:7] ; GCN-IR-NEXT: v_lshl_b64 v[2:3], 24, v2 +; GCN-IR-NEXT: v_mov_b32_e32 v4, 0 ; GCN-IR-NEXT: v_mov_b32_e32 v5, 0 -; GCN-IR-NEXT: s_and_saveexec_b64 s[4:5], vcc -; GCN-IR-NEXT: s_xor_b64 s[8:9], exec, s[4:5] +; GCN-IR-NEXT: s_xor_b64 s[4:5], vcc, -1 +; GCN-IR-NEXT: s_and_saveexec_b64 s[8:9], s[4:5] +; GCN-IR-NEXT: s_xor_b64 s[4:5], exec, s[8:9] ; GCN-IR-NEXT: s_cbranch_execz .LBB11_5 ; GCN-IR-NEXT: ; %bb.2: ; %udiv-preheader -; GCN-IR-NEXT: v_add_i32_e32 v12, vcc, -1, v0 -; GCN-IR-NEXT: v_addc_u32_e32 v13, vcc, -1, v1, vcc -; GCN-IR-NEXT: v_lshr_b64 v[8:9], 24, v6 -; GCN-IR-NEXT: v_sub_i32_e32 v6, vcc, 58, v10 -; GCN-IR-NEXT: v_mov_b32_e32 v10, 0 -; GCN-IR-NEXT: v_subb_u32_e64 v7, s[4:5], 0, 0, vcc -; GCN-IR-NEXT: s_mov_b64 s[10:11], 0 -; GCN-IR-NEXT: v_mov_b32_e32 v11, 0 +; GCN-IR-NEXT: v_add_i32_e32 v10, vcc, -1, v0 +; 
GCN-IR-NEXT: v_addc_u32_e32 v11, vcc, -1, v1, vcc +; GCN-IR-NEXT: v_sub_i32_e32 v12, vcc, 58, v8 +; GCN-IR-NEXT: v_lshr_b64 v[6:7], 24, v6 +; GCN-IR-NEXT: v_subb_u32_e64 v13, s[8:9], 0, 0, vcc +; GCN-IR-NEXT: v_mov_b32_e32 v8, 0 +; GCN-IR-NEXT: s_mov_b64 s[8:9], 0 +; GCN-IR-NEXT: v_mov_b32_e32 v9, 0 ; GCN-IR-NEXT: v_mov_b32_e32 v5, 0 ; GCN-IR-NEXT: .LBB11_3: ; %udiv-do-while ; GCN-IR-NEXT: ; =>This Inner Loop Header: Depth=1 -; GCN-IR-NEXT: v_lshl_b64 v[8:9], v[8:9], 1 +; GCN-IR-NEXT: v_lshl_b64 v[6:7], v[6:7], 1 ; GCN-IR-NEXT: v_lshrrev_b32_e32 v4, 31, v3 -; GCN-IR-NEXT: v_or_b32_e32 v8, v8, v4 +; GCN-IR-NEXT: v_or_b32_e32 v6, v6, v4 ; GCN-IR-NEXT: v_lshl_b64 v[2:3], v[2:3], 1 -; GCN-IR-NEXT: v_sub_i32_e32 v4, vcc, v12, v8 -; GCN-IR-NEXT: v_subb_u32_e32 v4, vcc, v13, v9, vcc -; GCN-IR-NEXT: v_or_b32_e32 v2, v10, v2 -; GCN-IR-NEXT: v_ashrrev_i32_e32 v10, 31, v4 -; GCN-IR-NEXT: v_add_i32_e32 v6, vcc, 1, v6 -; GCN-IR-NEXT: v_or_b32_e32 v3, v11, v3 -; GCN-IR-NEXT: v_and_b32_e32 v4, 1, v10 -; GCN-IR-NEXT: v_and_b32_e32 v11, v10, v1 -; GCN-IR-NEXT: v_and_b32_e32 v10, v10, v0 -; GCN-IR-NEXT: v_addc_u32_e32 v7, vcc, 0, v7, vcc -; GCN-IR-NEXT: v_cmp_eq_u64_e32 vcc, 0, v[6:7] -; GCN-IR-NEXT: v_sub_i32_e64 v8, s[4:5], v8, v10 -; GCN-IR-NEXT: v_subb_u32_e64 v9, s[4:5], v9, v11, s[4:5] -; GCN-IR-NEXT: v_mov_b32_e32 v11, v5 -; GCN-IR-NEXT: s_or_b64 s[10:11], vcc, s[10:11] -; GCN-IR-NEXT: v_mov_b32_e32 v10, v4 -; GCN-IR-NEXT: s_andn2_b64 exec, exec, s[10:11] +; GCN-IR-NEXT: v_sub_i32_e32 v4, vcc, v10, v6 +; GCN-IR-NEXT: v_subb_u32_e32 v4, vcc, v11, v7, vcc +; GCN-IR-NEXT: v_or_b32_e32 v2, v8, v2 +; GCN-IR-NEXT: v_ashrrev_i32_e32 v8, 31, v4 +; GCN-IR-NEXT: v_or_b32_e32 v3, v9, v3 +; GCN-IR-NEXT: v_and_b32_e32 v4, 1, v8 +; GCN-IR-NEXT: v_and_b32_e32 v9, v8, v1 +; GCN-IR-NEXT: v_and_b32_e32 v8, v8, v0 +; GCN-IR-NEXT: v_sub_i32_e32 v6, vcc, v6, v8 +; GCN-IR-NEXT: v_subb_u32_e32 v7, vcc, v7, v9, vcc +; GCN-IR-NEXT: v_add_i32_e32 v12, vcc, 1, v12 +; GCN-IR-NEXT: v_addc_u32_e32 v13, vcc, 0, v13, vcc +; GCN-IR-NEXT: v_mov_b32_e32 v9, v5 +; GCN-IR-NEXT: s_or_b64 s[8:9], vcc, s[8:9] +; GCN-IR-NEXT: v_mov_b32_e32 v8, v4 +; GCN-IR-NEXT: s_andn2_b64 exec, exec, s[8:9] ; GCN-IR-NEXT: s_cbranch_execnz .LBB11_3 ; GCN-IR-NEXT: ; %bb.4: ; %Flow -; GCN-IR-NEXT: s_or_b64 exec, exec, s[10:11] -; GCN-IR-NEXT: .LBB11_5: ; %Flow4 ; GCN-IR-NEXT: s_or_b64 exec, exec, s[8:9] +; GCN-IR-NEXT: .LBB11_5: ; %Flow4 +; GCN-IR-NEXT: s_or_b64 exec, exec, s[4:5] ; GCN-IR-NEXT: v_lshl_b64 v[2:3], v[2:3], 1 ; GCN-IR-NEXT: v_or_b32_e32 v5, v5, v3 ; GCN-IR-NEXT: v_or_b32_e32 v4, v4, v2 @@ -1838,9 +1854,9 @@ define i64 @v_test_srem_pow2_k_num_i64(i64 %x) { ; GCN-IR-NEXT: v_ffbh_u32_e32 v2, v0 ; GCN-IR-NEXT: v_add_i32_e32 v2, vcc, 32, v2 ; GCN-IR-NEXT: v_ffbh_u32_e32 v3, v1 -; GCN-IR-NEXT: v_min_u32_e32 v10, v2, v3 +; GCN-IR-NEXT: v_min_u32_e32 v8, v2, v3 ; GCN-IR-NEXT: s_movk_i32 s6, 0xffd0 -; GCN-IR-NEXT: v_add_i32_e32 v2, vcc, s6, v10 +; GCN-IR-NEXT: v_add_i32_e32 v2, vcc, s6, v8 ; GCN-IR-NEXT: v_addc_u32_e64 v3, s[6:7], 0, -1, vcc ; GCN-IR-NEXT: v_cmp_eq_u64_e64 s[4:5], 0, v[0:1] ; GCN-IR-NEXT: v_cmp_lt_u64_e32 vcc, 63, v[2:3] @@ -1855,54 +1871,53 @@ define i64 @v_test_srem_pow2_k_num_i64(i64 %x) { ; GCN-IR-NEXT: s_cbranch_execz .LBB12_6 ; GCN-IR-NEXT: ; %bb.1: ; %udiv-bb1 ; GCN-IR-NEXT: v_add_i32_e32 v6, vcc, 1, v2 +; GCN-IR-NEXT: v_addc_u32_e32 v3, vcc, 0, v3, vcc ; GCN-IR-NEXT: v_sub_i32_e64 v2, s[4:5], 63, v2 -; GCN-IR-NEXT: v_addc_u32_e32 v7, vcc, 0, v3, vcc -; GCN-IR-NEXT: s_mov_b64 s[4:5], 0x8000 +; GCN-IR-NEXT: s_mov_b64 s[8:9], 
0x8000 +; GCN-IR-NEXT: v_lshl_b64 v[2:3], s[8:9], v2 ; GCN-IR-NEXT: v_mov_b32_e32 v4, 0 -; GCN-IR-NEXT: v_cmp_ne_u64_e32 vcc, 0, v[6:7] -; GCN-IR-NEXT: v_lshl_b64 v[2:3], s[4:5], v2 ; GCN-IR-NEXT: v_mov_b32_e32 v5, 0 -; GCN-IR-NEXT: s_and_saveexec_b64 s[8:9], vcc -; GCN-IR-NEXT: s_xor_b64 s[8:9], exec, s[8:9] +; GCN-IR-NEXT: s_xor_b64 s[4:5], vcc, -1 +; GCN-IR-NEXT: s_and_saveexec_b64 s[10:11], s[4:5] +; GCN-IR-NEXT: s_xor_b64 s[4:5], exec, s[10:11] ; GCN-IR-NEXT: s_cbranch_execz .LBB12_5 ; GCN-IR-NEXT: ; %bb.2: ; %udiv-preheader -; GCN-IR-NEXT: v_add_i32_e32 v12, vcc, -1, v0 -; GCN-IR-NEXT: v_addc_u32_e32 v13, vcc, -1, v1, vcc -; GCN-IR-NEXT: v_lshr_b64 v[8:9], s[4:5], v6 -; GCN-IR-NEXT: v_sub_i32_e32 v6, vcc, 47, v10 -; GCN-IR-NEXT: v_mov_b32_e32 v10, 0 -; GCN-IR-NEXT: v_subb_u32_e64 v7, s[4:5], 0, 0, vcc -; GCN-IR-NEXT: s_mov_b64 s[10:11], 0 -; GCN-IR-NEXT: v_mov_b32_e32 v11, 0 +; GCN-IR-NEXT: v_add_i32_e32 v10, vcc, -1, v0 +; GCN-IR-NEXT: v_addc_u32_e32 v11, vcc, -1, v1, vcc +; GCN-IR-NEXT: v_sub_i32_e32 v12, vcc, 47, v8 +; GCN-IR-NEXT: v_lshr_b64 v[6:7], s[8:9], v6 +; GCN-IR-NEXT: v_subb_u32_e64 v13, s[8:9], 0, 0, vcc +; GCN-IR-NEXT: v_mov_b32_e32 v8, 0 +; GCN-IR-NEXT: s_mov_b64 s[8:9], 0 +; GCN-IR-NEXT: v_mov_b32_e32 v9, 0 ; GCN-IR-NEXT: v_mov_b32_e32 v5, 0 ; GCN-IR-NEXT: .LBB12_3: ; %udiv-do-while ; GCN-IR-NEXT: ; =>This Inner Loop Header: Depth=1 -; GCN-IR-NEXT: v_lshl_b64 v[8:9], v[8:9], 1 +; GCN-IR-NEXT: v_lshl_b64 v[6:7], v[6:7], 1 ; GCN-IR-NEXT: v_lshrrev_b32_e32 v4, 31, v3 -; GCN-IR-NEXT: v_or_b32_e32 v8, v8, v4 +; GCN-IR-NEXT: v_or_b32_e32 v6, v6, v4 ; GCN-IR-NEXT: v_lshl_b64 v[2:3], v[2:3], 1 -; GCN-IR-NEXT: v_sub_i32_e32 v4, vcc, v12, v8 -; GCN-IR-NEXT: v_subb_u32_e32 v4, vcc, v13, v9, vcc -; GCN-IR-NEXT: v_or_b32_e32 v2, v10, v2 -; GCN-IR-NEXT: v_ashrrev_i32_e32 v10, 31, v4 -; GCN-IR-NEXT: v_add_i32_e32 v6, vcc, 1, v6 -; GCN-IR-NEXT: v_or_b32_e32 v3, v11, v3 -; GCN-IR-NEXT: v_and_b32_e32 v4, 1, v10 -; GCN-IR-NEXT: v_and_b32_e32 v11, v10, v1 -; GCN-IR-NEXT: v_and_b32_e32 v10, v10, v0 -; GCN-IR-NEXT: v_addc_u32_e32 v7, vcc, 0, v7, vcc -; GCN-IR-NEXT: v_cmp_eq_u64_e32 vcc, 0, v[6:7] -; GCN-IR-NEXT: v_sub_i32_e64 v8, s[4:5], v8, v10 -; GCN-IR-NEXT: v_subb_u32_e64 v9, s[4:5], v9, v11, s[4:5] -; GCN-IR-NEXT: v_mov_b32_e32 v11, v5 -; GCN-IR-NEXT: s_or_b64 s[10:11], vcc, s[10:11] -; GCN-IR-NEXT: v_mov_b32_e32 v10, v4 -; GCN-IR-NEXT: s_andn2_b64 exec, exec, s[10:11] +; GCN-IR-NEXT: v_sub_i32_e32 v4, vcc, v10, v6 +; GCN-IR-NEXT: v_subb_u32_e32 v4, vcc, v11, v7, vcc +; GCN-IR-NEXT: v_or_b32_e32 v2, v8, v2 +; GCN-IR-NEXT: v_ashrrev_i32_e32 v8, 31, v4 +; GCN-IR-NEXT: v_or_b32_e32 v3, v9, v3 +; GCN-IR-NEXT: v_and_b32_e32 v4, 1, v8 +; GCN-IR-NEXT: v_and_b32_e32 v9, v8, v1 +; GCN-IR-NEXT: v_and_b32_e32 v8, v8, v0 +; GCN-IR-NEXT: v_sub_i32_e32 v6, vcc, v6, v8 +; GCN-IR-NEXT: v_subb_u32_e32 v7, vcc, v7, v9, vcc +; GCN-IR-NEXT: v_add_i32_e32 v12, vcc, 1, v12 +; GCN-IR-NEXT: v_addc_u32_e32 v13, vcc, 0, v13, vcc +; GCN-IR-NEXT: v_mov_b32_e32 v9, v5 +; GCN-IR-NEXT: s_or_b64 s[8:9], vcc, s[8:9] +; GCN-IR-NEXT: v_mov_b32_e32 v8, v4 +; GCN-IR-NEXT: s_andn2_b64 exec, exec, s[8:9] ; GCN-IR-NEXT: s_cbranch_execnz .LBB12_3 ; GCN-IR-NEXT: ; %bb.4: ; %Flow -; GCN-IR-NEXT: s_or_b64 exec, exec, s[10:11] -; GCN-IR-NEXT: .LBB12_5: ; %Flow4 ; GCN-IR-NEXT: s_or_b64 exec, exec, s[8:9] +; GCN-IR-NEXT: .LBB12_5: ; %Flow4 +; GCN-IR-NEXT: s_or_b64 exec, exec, s[4:5] ; GCN-IR-NEXT: v_lshl_b64 v[2:3], v[2:3], 1 ; GCN-IR-NEXT: v_or_b32_e32 v5, v5, v3 ; GCN-IR-NEXT: v_or_b32_e32 v4, v4, v2 @@ -1937,20 +1952,20 @@ 
define i64 @v_test_srem_pow2_k_den_i64(i64 %x) { ; GCN-IR-LABEL: v_test_srem_pow2_k_den_i64: ; GCN-IR: ; %bb.0: ; %_udiv-special-cases ; GCN-IR-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0) -; GCN-IR-NEXT: v_ashrrev_i32_e32 v12, 31, v1 -; GCN-IR-NEXT: v_xor_b32_e32 v0, v0, v12 -; GCN-IR-NEXT: v_xor_b32_e32 v1, v1, v12 -; GCN-IR-NEXT: v_sub_i32_e32 v0, vcc, v0, v12 -; GCN-IR-NEXT: v_subb_u32_e32 v1, vcc, v1, v12, vcc +; GCN-IR-NEXT: v_ashrrev_i32_e32 v10, 31, v1 +; GCN-IR-NEXT: v_xor_b32_e32 v0, v0, v10 +; GCN-IR-NEXT: v_xor_b32_e32 v1, v1, v10 +; GCN-IR-NEXT: v_sub_i32_e32 v0, vcc, v0, v10 +; GCN-IR-NEXT: v_subb_u32_e32 v1, vcc, v1, v10, vcc ; GCN-IR-NEXT: v_ffbh_u32_e32 v2, v0 ; GCN-IR-NEXT: v_add_i32_e64 v2, s[4:5], 32, v2 ; GCN-IR-NEXT: v_ffbh_u32_e32 v3, v1 -; GCN-IR-NEXT: v_min_u32_e32 v10, v2, v3 -; GCN-IR-NEXT: v_sub_i32_e64 v2, s[4:5], 48, v10 +; GCN-IR-NEXT: v_min_u32_e32 v8, v2, v3 +; GCN-IR-NEXT: v_sub_i32_e64 v2, s[4:5], 48, v8 ; GCN-IR-NEXT: v_subb_u32_e64 v3, s[4:5], 0, 0, s[4:5] ; GCN-IR-NEXT: v_cmp_eq_u64_e32 vcc, 0, v[0:1] ; GCN-IR-NEXT: v_cmp_lt_u64_e64 s[4:5], 63, v[2:3] -; GCN-IR-NEXT: v_mov_b32_e32 v13, v12 +; GCN-IR-NEXT: v_mov_b32_e32 v11, v10 ; GCN-IR-NEXT: s_or_b64 s[4:5], vcc, s[4:5] ; GCN-IR-NEXT: v_cmp_ne_u64_e32 vcc, 63, v[2:3] ; GCN-IR-NEXT: s_xor_b64 s[6:7], s[4:5], -1 @@ -1961,51 +1976,50 @@ define i64 @v_test_srem_pow2_k_den_i64(i64 %x) { ; GCN-IR-NEXT: s_cbranch_execz .LBB13_6 ; GCN-IR-NEXT: ; %bb.1: ; %udiv-bb1 ; GCN-IR-NEXT: v_add_i32_e32 v6, vcc, 1, v2 -; GCN-IR-NEXT: v_addc_u32_e32 v7, vcc, 0, v3, vcc +; GCN-IR-NEXT: v_addc_u32_e32 v3, vcc, 0, v3, vcc ; GCN-IR-NEXT: v_sub_i32_e64 v2, s[4:5], 63, v2 -; GCN-IR-NEXT: v_mov_b32_e32 v4, 0 -; GCN-IR-NEXT: v_cmp_ne_u64_e32 vcc, 0, v[6:7] ; GCN-IR-NEXT: v_lshl_b64 v[2:3], v[0:1], v2 +; GCN-IR-NEXT: v_mov_b32_e32 v4, 0 ; GCN-IR-NEXT: v_mov_b32_e32 v5, 0 -; GCN-IR-NEXT: s_and_saveexec_b64 s[4:5], vcc -; GCN-IR-NEXT: s_xor_b64 s[8:9], exec, s[4:5] +; GCN-IR-NEXT: s_xor_b64 s[4:5], vcc, -1 +; GCN-IR-NEXT: s_and_saveexec_b64 s[8:9], s[4:5] +; GCN-IR-NEXT: s_xor_b64 s[4:5], exec, s[8:9] ; GCN-IR-NEXT: s_cbranch_execz .LBB13_5 ; GCN-IR-NEXT: ; %bb.2: ; %udiv-preheader -; GCN-IR-NEXT: v_lshr_b64 v[8:9], v[0:1], v6 -; GCN-IR-NEXT: v_add_i32_e32 v6, vcc, 0xffffffcf, v10 -; GCN-IR-NEXT: v_mov_b32_e32 v10, 0 -; GCN-IR-NEXT: v_addc_u32_e64 v7, s[4:5], 0, -1, vcc -; GCN-IR-NEXT: s_mov_b64 s[10:11], 0 -; GCN-IR-NEXT: v_mov_b32_e32 v11, 0 +; GCN-IR-NEXT: v_add_i32_e32 v12, vcc, 0xffffffcf, v8 +; GCN-IR-NEXT: v_lshr_b64 v[6:7], v[0:1], v6 +; GCN-IR-NEXT: v_addc_u32_e64 v13, s[8:9], 0, -1, vcc +; GCN-IR-NEXT: v_mov_b32_e32 v8, 0 +; GCN-IR-NEXT: s_mov_b64 s[8:9], 0 +; GCN-IR-NEXT: v_mov_b32_e32 v9, 0 ; GCN-IR-NEXT: v_mov_b32_e32 v5, 0 -; GCN-IR-NEXT: s_movk_i32 s12, 0x7fff +; GCN-IR-NEXT: s_movk_i32 s10, 0x7fff ; GCN-IR-NEXT: .LBB13_3: ; %udiv-do-while ; GCN-IR-NEXT: ; =>This Inner Loop Header: Depth=1 -; GCN-IR-NEXT: v_lshl_b64 v[8:9], v[8:9], 1 +; GCN-IR-NEXT: v_lshl_b64 v[6:7], v[6:7], 1 ; GCN-IR-NEXT: v_lshrrev_b32_e32 v4, 31, v3 -; GCN-IR-NEXT: v_or_b32_e32 v8, v8, v4 -; GCN-IR-NEXT: v_sub_i32_e32 v4, vcc, s12, v8 +; GCN-IR-NEXT: v_or_b32_e32 v6, v6, v4 ; GCN-IR-NEXT: v_lshl_b64 v[2:3], v[2:3], 1 -; GCN-IR-NEXT: v_subb_u32_e32 v4, vcc, 0, v9, vcc -; GCN-IR-NEXT: v_add_i32_e32 v6, vcc, 1, v6 -; GCN-IR-NEXT: v_or_b32_e32 v2, v10, v2 -; GCN-IR-NEXT: v_ashrrev_i32_e32 v10, 31, v4 -; GCN-IR-NEXT: v_addc_u32_e32 v7, vcc, 0, v7, vcc -; GCN-IR-NEXT: v_and_b32_e32 v4, 1, v10 -; GCN-IR-NEXT: v_and_b32_e32 v10, 0x8000, v10 -; 
GCN-IR-NEXT: v_cmp_eq_u64_e32 vcc, 0, v[6:7] -; GCN-IR-NEXT: v_or_b32_e32 v3, v11, v3 -; GCN-IR-NEXT: v_sub_i32_e64 v8, s[4:5], v8, v10 -; GCN-IR-NEXT: v_mov_b32_e32 v11, v5 -; GCN-IR-NEXT: v_subbrev_u32_e64 v9, s[4:5], 0, v9, s[4:5] -; GCN-IR-NEXT: s_or_b64 s[10:11], vcc, s[10:11] -; GCN-IR-NEXT: v_mov_b32_e32 v10, v4 -; GCN-IR-NEXT: s_andn2_b64 exec, exec, s[10:11] +; GCN-IR-NEXT: v_sub_i32_e32 v4, vcc, s10, v6 +; GCN-IR-NEXT: v_subb_u32_e32 v4, vcc, 0, v7, vcc +; GCN-IR-NEXT: v_or_b32_e32 v2, v8, v2 +; GCN-IR-NEXT: v_ashrrev_i32_e32 v8, 31, v4 +; GCN-IR-NEXT: v_and_b32_e32 v4, 1, v8 +; GCN-IR-NEXT: v_and_b32_e32 v8, 0x8000, v8 +; GCN-IR-NEXT: v_sub_i32_e32 v6, vcc, v6, v8 +; GCN-IR-NEXT: v_subbrev_u32_e32 v7, vcc, 0, v7, vcc +; GCN-IR-NEXT: v_add_i32_e32 v12, vcc, 1, v12 +; GCN-IR-NEXT: v_or_b32_e32 v3, v9, v3 +; GCN-IR-NEXT: v_addc_u32_e32 v13, vcc, 0, v13, vcc +; GCN-IR-NEXT: v_mov_b32_e32 v9, v5 +; GCN-IR-NEXT: s_or_b64 s[8:9], vcc, s[8:9] +; GCN-IR-NEXT: v_mov_b32_e32 v8, v4 +; GCN-IR-NEXT: s_andn2_b64 exec, exec, s[8:9] ; GCN-IR-NEXT: s_cbranch_execnz .LBB13_3 ; GCN-IR-NEXT: ; %bb.4: ; %Flow -; GCN-IR-NEXT: s_or_b64 exec, exec, s[10:11] -; GCN-IR-NEXT: .LBB13_5: ; %Flow4 ; GCN-IR-NEXT: s_or_b64 exec, exec, s[8:9] +; GCN-IR-NEXT: .LBB13_5: ; %Flow4 +; GCN-IR-NEXT: s_or_b64 exec, exec, s[4:5] ; GCN-IR-NEXT: v_lshl_b64 v[2:3], v[2:3], 1 ; GCN-IR-NEXT: v_or_b32_e32 v5, v5, v3 ; GCN-IR-NEXT: v_or_b32_e32 v4, v4, v2 @@ -2014,10 +2028,10 @@ define i64 @v_test_srem_pow2_k_den_i64(i64 %x) { ; GCN-IR-NEXT: v_lshl_b64 v[2:3], v[4:5], 15 ; GCN-IR-NEXT: v_sub_i32_e32 v0, vcc, v0, v2 ; GCN-IR-NEXT: v_subb_u32_e32 v1, vcc, v1, v3, vcc -; GCN-IR-NEXT: v_xor_b32_e32 v0, v0, v12 -; GCN-IR-NEXT: v_xor_b32_e32 v1, v1, v13 -; GCN-IR-NEXT: v_sub_i32_e32 v0, vcc, v0, v12 -; GCN-IR-NEXT: v_subb_u32_e32 v1, vcc, v1, v13, vcc +; GCN-IR-NEXT: v_xor_b32_e32 v0, v0, v10 +; GCN-IR-NEXT: v_xor_b32_e32 v1, v1, v11 +; GCN-IR-NEXT: v_sub_i32_e32 v0, vcc, v0, v10 +; GCN-IR-NEXT: v_subb_u32_e32 v1, vcc, v1, v11, vcc ; GCN-IR-NEXT: s_setpc_b64 s[30:31] %result = srem i64 %x, 32768 ret i64 %result diff --git a/llvm/test/CodeGen/AMDGPU/uaddo.ll b/llvm/test/CodeGen/AMDGPU/uaddo.ll index e1574dc..bb5918b2 100644 --- a/llvm/test/CodeGen/AMDGPU/uaddo.ll +++ b/llvm/test/CodeGen/AMDGPU/uaddo.ll @@ -14,15 +14,16 @@ define amdgpu_kernel void @s_uaddo_i64_zext(ptr addrspace(1) %out, i64 %a, i64 % ; SI-NEXT: s_mov_b32 s6, -1 ; SI-NEXT: s_waitcnt lgkmcnt(0) ; SI-NEXT: s_mov_b32 s4, s0 -; SI-NEXT: s_add_u32 s0, s2, s8 -; SI-NEXT: v_mov_b32_e32 v0, s2 +; SI-NEXT: s_add_u32 s2, s2, s8 ; SI-NEXT: s_mov_b32 s5, s1 -; SI-NEXT: s_addc_u32 s1, s3, s9 +; SI-NEXT: s_cselect_b64 s[0:1], -1, 0 +; SI-NEXT: s_or_b32 s0, s0, s1 +; SI-NEXT: s_cmp_lg_u32 s0, 0 +; SI-NEXT: s_addc_u32 s3, s3, s9 +; SI-NEXT: s_cselect_b64 s[0:1], -1, 0 +; SI-NEXT: v_cndmask_b32_e64 v0, 0, 1, s[0:1] ; SI-NEXT: v_mov_b32_e32 v1, s3 -; SI-NEXT: v_cmp_lt_u64_e32 vcc, s[0:1], v[0:1] -; SI-NEXT: v_mov_b32_e32 v1, s1 -; SI-NEXT: v_cndmask_b32_e64 v0, 0, 1, vcc -; SI-NEXT: v_add_i32_e32 v0, vcc, s0, v0 +; SI-NEXT: v_add_i32_e32 v0, vcc, s2, v0 ; SI-NEXT: v_addc_u32_e32 v1, vcc, 0, v1, vcc ; SI-NEXT: buffer_store_dwordx2 v[0:1], off, s[4:7], 0 ; SI-NEXT: s_endpgm @@ -33,15 +34,15 @@ define amdgpu_kernel void @s_uaddo_i64_zext(ptr addrspace(1) %out, i64 %a, i64 % ; VI-NEXT: s_load_dwordx2 s[4:5], s[4:5], 0x34 ; VI-NEXT: s_waitcnt lgkmcnt(0) ; VI-NEXT: v_mov_b32_e32 v0, s0 -; VI-NEXT: s_add_u32 s0, s2, s4 -; VI-NEXT: v_mov_b32_e32 v2, s2 +; VI-NEXT: s_add_u32 s2, s2, s4 ; 
VI-NEXT: v_mov_b32_e32 v1, s1 +; VI-NEXT: s_cselect_b64 s[0:1], -1, 0 +; VI-NEXT: s_cmp_lg_u64 s[0:1], 0 +; VI-NEXT: s_addc_u32 s3, s3, s5 +; VI-NEXT: s_cselect_b64 s[0:1], -1, 0 +; VI-NEXT: v_cndmask_b32_e64 v2, 0, 1, s[0:1] ; VI-NEXT: v_mov_b32_e32 v3, s3 -; VI-NEXT: s_addc_u32 s1, s3, s5 -; VI-NEXT: v_cmp_lt_u64_e32 vcc, s[0:1], v[2:3] -; VI-NEXT: v_mov_b32_e32 v3, s1 -; VI-NEXT: v_cndmask_b32_e64 v2, 0, 1, vcc -; VI-NEXT: v_add_u32_e32 v2, vcc, s0, v2 +; VI-NEXT: v_add_u32_e32 v2, vcc, s2, v2 ; VI-NEXT: v_addc_u32_e32 v3, vcc, 0, v3, vcc ; VI-NEXT: flat_store_dwordx2 v[0:1], v[2:3] ; VI-NEXT: s_endpgm @@ -52,14 +53,14 @@ define amdgpu_kernel void @s_uaddo_i64_zext(ptr addrspace(1) %out, i64 %a, i64 % ; GFX9-NEXT: s_load_dwordx2 s[6:7], s[4:5], 0x34 ; GFX9-NEXT: v_mov_b32_e32 v2, 0 ; GFX9-NEXT: s_waitcnt lgkmcnt(0) -; GFX9-NEXT: v_mov_b32_e32 v0, s2 -; GFX9-NEXT: s_add_u32 s4, s2, s6 -; GFX9-NEXT: v_mov_b32_e32 v1, s3 -; GFX9-NEXT: s_addc_u32 s5, s3, s7 -; GFX9-NEXT: v_cmp_lt_u64_e32 vcc, s[4:5], v[0:1] -; GFX9-NEXT: v_mov_b32_e32 v1, s5 -; GFX9-NEXT: v_cndmask_b32_e64 v0, 0, 1, vcc -; GFX9-NEXT: v_add_co_u32_e32 v0, vcc, s4, v0 +; GFX9-NEXT: s_add_u32 s6, s2, s6 +; GFX9-NEXT: s_cselect_b64 s[4:5], -1, 0 +; GFX9-NEXT: s_cmp_lg_u64 s[4:5], 0 +; GFX9-NEXT: s_addc_u32 s4, s3, s7 +; GFX9-NEXT: s_cselect_b64 s[2:3], -1, 0 +; GFX9-NEXT: v_cndmask_b32_e64 v0, 0, 1, s[2:3] +; GFX9-NEXT: v_mov_b32_e32 v1, s4 +; GFX9-NEXT: v_add_co_u32_e32 v0, vcc, s6, v0 ; GFX9-NEXT: v_addc_co_u32_e32 v1, vcc, 0, v1, vcc ; GFX9-NEXT: global_store_dwordx2 v2, v[0:1], s[0:1] ; GFX9-NEXT: s_endpgm @@ -71,12 +72,14 @@ define amdgpu_kernel void @s_uaddo_i64_zext(ptr addrspace(1) %out, i64 %a, i64 % ; GFX10-NEXT: s_load_dwordx2 s[6:7], s[4:5], 0x34 ; GFX10-NEXT: v_mov_b32_e32 v2, 0 ; GFX10-NEXT: s_waitcnt lgkmcnt(0) -; GFX10-NEXT: s_add_u32 s4, s2, s6 -; GFX10-NEXT: s_addc_u32 s5, s3, s7 -; GFX10-NEXT: v_cmp_lt_u64_e64 s2, s[4:5], s[2:3] -; GFX10-NEXT: v_cndmask_b32_e64 v0, 0, 1, s2 -; GFX10-NEXT: v_add_co_u32 v0, s2, s4, v0 -; GFX10-NEXT: v_add_co_ci_u32_e64 v1, s2, s5, 0, s2 +; GFX10-NEXT: s_add_u32 s2, s2, s6 +; GFX10-NEXT: s_cselect_b32 s4, -1, 0 +; GFX10-NEXT: s_cmp_lg_u32 s4, 0 +; GFX10-NEXT: s_addc_u32 s3, s3, s7 +; GFX10-NEXT: s_cselect_b32 s4, -1, 0 +; GFX10-NEXT: v_cndmask_b32_e64 v0, 0, 1, s4 +; GFX10-NEXT: v_add_co_u32 v0, s2, s2, v0 +; GFX10-NEXT: v_add_co_ci_u32_e64 v1, s2, s3, 0, s2 ; GFX10-NEXT: global_store_dwordx2 v2, v[0:1], s[0:1] ; GFX10-NEXT: s_endpgm ; @@ -87,14 +90,16 @@ define amdgpu_kernel void @s_uaddo_i64_zext(ptr addrspace(1) %out, i64 %a, i64 % ; GFX11-NEXT: s_load_b64 s[4:5], s[4:5], 0x34 ; GFX11-NEXT: v_mov_b32_e32 v2, 0 ; GFX11-NEXT: s_waitcnt lgkmcnt(0) -; GFX11-NEXT: s_add_u32 s4, s2, s4 -; GFX11-NEXT: s_addc_u32 s5, s3, s5 -; GFX11-NEXT: s_delay_alu instid0(SALU_CYCLE_1) | instskip(NEXT) | instid1(VALU_DEP_1) -; GFX11-NEXT: v_cmp_lt_u64_e64 s2, s[4:5], s[2:3] -; GFX11-NEXT: v_cndmask_b32_e64 v0, 0, 1, s2 +; GFX11-NEXT: s_add_u32 s2, s2, s4 +; GFX11-NEXT: s_cselect_b32 s4, -1, 0 +; GFX11-NEXT: s_delay_alu instid0(SALU_CYCLE_1) | instskip(SKIP_2) | instid1(SALU_CYCLE_1) +; GFX11-NEXT: s_cmp_lg_u32 s4, 0 +; GFX11-NEXT: s_addc_u32 s3, s3, s5 +; GFX11-NEXT: s_cselect_b32 s4, -1, 0 +; GFX11-NEXT: v_cndmask_b32_e64 v0, 0, 1, s4 ; GFX11-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1) -; GFX11-NEXT: v_add_co_u32 v0, s2, s4, v0 -; GFX11-NEXT: v_add_co_ci_u32_e64 v1, null, s5, 0, s2 +; GFX11-NEXT: v_add_co_u32 v0, s2, s2, v0 +; GFX11-NEXT: v_add_co_ci_u32_e64 v1, 
null, s3, 0, s2 ; GFX11-NEXT: global_store_b64 v2, v[0:1], s[0:1] ; GFX11-NEXT: s_endpgm %uadd = call { i64, i1 } @llvm.uadd.with.overflow.i64(i64 %a, i64 %b) @@ -436,21 +441,23 @@ define amdgpu_kernel void @s_uaddo_i64(ptr addrspace(1) %out, ptr addrspace(1) % ; SI-NEXT: s_mov_b32 s11, 0xf000 ; SI-NEXT: s_mov_b32 s10, -1 ; SI-NEXT: s_waitcnt lgkmcnt(0) -; SI-NEXT: s_add_u32 s6, s4, s6 -; SI-NEXT: v_mov_b32_e32 v0, s4 -; SI-NEXT: s_addc_u32 s7, s5, s7 -; SI-NEXT: v_mov_b32_e32 v1, s5 -; SI-NEXT: v_cmp_lt_u64_e32 vcc, s[6:7], v[0:1] -; SI-NEXT: v_mov_b32_e32 v2, s6 +; SI-NEXT: s_add_u32 s4, s4, s6 +; SI-NEXT: s_cselect_b64 s[12:13], -1, 0 +; SI-NEXT: s_or_b32 s6, s12, s13 +; SI-NEXT: s_cmp_lg_u32 s6, 0 +; SI-NEXT: s_addc_u32 s5, s5, s7 ; SI-NEXT: s_mov_b32 s8, s0 ; SI-NEXT: s_mov_b32 s9, s1 +; SI-NEXT: v_mov_b32_e32 v0, s4 +; SI-NEXT: v_mov_b32_e32 v1, s5 +; SI-NEXT: s_cselect_b64 s[4:5], -1, 0 ; SI-NEXT: s_mov_b32 s0, s2 ; SI-NEXT: s_mov_b32 s1, s3 ; SI-NEXT: s_mov_b32 s2, s10 ; SI-NEXT: s_mov_b32 s3, s11 -; SI-NEXT: v_mov_b32_e32 v3, s7 -; SI-NEXT: v_cndmask_b32_e64 v0, 0, 1, vcc -; SI-NEXT: buffer_store_dwordx2 v[2:3], off, s[8:11], 0 +; SI-NEXT: buffer_store_dwordx2 v[0:1], off, s[8:11], 0 +; SI-NEXT: s_waitcnt expcnt(0) +; SI-NEXT: v_cndmask_b32_e64 v0, 0, 1, s[4:5] ; SI-NEXT: buffer_store_byte v0, off, s[0:3], 0 ; SI-NEXT: s_endpgm ; @@ -458,37 +465,37 @@ define amdgpu_kernel void @s_uaddo_i64(ptr addrspace(1) %out, ptr addrspace(1) % ; VI: ; %bb.0: ; VI-NEXT: s_load_dwordx8 s[0:7], s[4:5], 0x24 ; VI-NEXT: s_waitcnt lgkmcnt(0) +; VI-NEXT: v_mov_b32_e32 v2, s2 +; VI-NEXT: s_add_u32 s2, s4, s6 ; VI-NEXT: v_mov_b32_e32 v0, s0 -; VI-NEXT: s_add_u32 s0, s4, s6 -; VI-NEXT: v_mov_b32_e32 v4, s4 ; VI-NEXT: v_mov_b32_e32 v1, s1 -; VI-NEXT: s_addc_u32 s1, s5, s7 -; VI-NEXT: v_mov_b32_e32 v5, s5 -; VI-NEXT: v_mov_b32_e32 v7, s1 -; VI-NEXT: v_cmp_lt_u64_e32 vcc, s[0:1], v[4:5] -; VI-NEXT: v_mov_b32_e32 v6, s0 -; VI-NEXT: v_mov_b32_e32 v2, s2 +; VI-NEXT: s_cselect_b64 s[0:1], -1, 0 +; VI-NEXT: s_cmp_lg_u64 s[0:1], 0 +; VI-NEXT: s_addc_u32 s0, s5, s7 +; VI-NEXT: v_mov_b32_e32 v4, s2 +; VI-NEXT: v_mov_b32_e32 v5, s0 +; VI-NEXT: s_cselect_b64 s[0:1], -1, 0 ; VI-NEXT: v_mov_b32_e32 v3, s3 -; VI-NEXT: flat_store_dwordx2 v[0:1], v[6:7] -; VI-NEXT: v_cndmask_b32_e64 v0, 0, 1, vcc +; VI-NEXT: flat_store_dwordx2 v[0:1], v[4:5] +; VI-NEXT: v_cndmask_b32_e64 v0, 0, 1, s[0:1] ; VI-NEXT: flat_store_byte v[2:3], v0 ; VI-NEXT: s_endpgm ; ; GFX9-LABEL: s_uaddo_i64: ; GFX9: ; %bb.0: ; GFX9-NEXT: s_load_dwordx8 s[8:15], s[4:5], 0x24 -; GFX9-NEXT: v_mov_b32_e32 v4, 0 +; GFX9-NEXT: v_mov_b32_e32 v2, 0 ; GFX9-NEXT: s_waitcnt lgkmcnt(0) -; GFX9-NEXT: s_add_u32 s0, s12, s14 -; GFX9-NEXT: v_mov_b32_e32 v0, s12 -; GFX9-NEXT: v_mov_b32_e32 v1, s13 -; GFX9-NEXT: s_addc_u32 s1, s13, s15 -; GFX9-NEXT: v_mov_b32_e32 v3, s1 -; GFX9-NEXT: v_cmp_lt_u64_e32 vcc, s[0:1], v[0:1] -; GFX9-NEXT: v_mov_b32_e32 v2, s0 -; GFX9-NEXT: v_cndmask_b32_e64 v0, 0, 1, vcc -; GFX9-NEXT: global_store_dwordx2 v4, v[2:3], s[8:9] -; GFX9-NEXT: global_store_byte v4, v0, s[10:11] +; GFX9-NEXT: s_add_u32 s2, s12, s14 +; GFX9-NEXT: s_cselect_b64 s[0:1], -1, 0 +; GFX9-NEXT: s_cmp_lg_u64 s[0:1], 0 +; GFX9-NEXT: s_addc_u32 s0, s13, s15 +; GFX9-NEXT: v_mov_b32_e32 v0, s2 +; GFX9-NEXT: v_mov_b32_e32 v1, s0 +; GFX9-NEXT: s_cselect_b64 s[0:1], -1, 0 +; GFX9-NEXT: v_cndmask_b32_e64 v3, 0, 1, s[0:1] +; GFX9-NEXT: global_store_dwordx2 v2, v[0:1], s[8:9] +; GFX9-NEXT: global_store_byte v2, v3, s[10:11] ; GFX9-NEXT: s_endpgm ; ; GFX10-LABEL: s_uaddo_i64: @@ -497,10 
+504,12 @@ define amdgpu_kernel void @s_uaddo_i64(ptr addrspace(1) %out, ptr addrspace(1) % ; GFX10-NEXT: v_mov_b32_e32 v2, 0 ; GFX10-NEXT: s_waitcnt lgkmcnt(0) ; GFX10-NEXT: s_add_u32 s0, s12, s14 -; GFX10-NEXT: s_addc_u32 s1, s13, s15 +; GFX10-NEXT: s_cselect_b32 s1, -1, 0 ; GFX10-NEXT: v_mov_b32_e32 v0, s0 +; GFX10-NEXT: s_cmp_lg_u32 s1, 0 +; GFX10-NEXT: s_addc_u32 s1, s13, s15 +; GFX10-NEXT: s_cselect_b32 s0, -1, 0 ; GFX10-NEXT: v_mov_b32_e32 v1, s1 -; GFX10-NEXT: v_cmp_lt_u64_e64 s0, s[0:1], s[12:13] ; GFX10-NEXT: v_cndmask_b32_e64 v3, 0, 1, s0 ; GFX10-NEXT: global_store_dwordx2 v2, v[0:1], s[8:9] ; GFX10-NEXT: global_store_byte v2, v3, s[10:11] @@ -510,12 +519,13 @@ define amdgpu_kernel void @s_uaddo_i64(ptr addrspace(1) %out, ptr addrspace(1) % ; GFX11: ; %bb.0: ; GFX11-NEXT: s_load_b256 s[0:7], s[4:5], 0x24 ; GFX11-NEXT: s_waitcnt lgkmcnt(0) -; GFX11-NEXT: s_add_u32 s6, s4, s6 -; GFX11-NEXT: s_addc_u32 s7, s5, s7 -; GFX11-NEXT: v_mov_b32_e32 v0, s6 -; GFX11-NEXT: v_cmp_lt_u64_e64 s4, s[6:7], s[4:5] -; GFX11-NEXT: v_dual_mov_b32 v2, 0 :: v_dual_mov_b32 v1, s7 -; GFX11-NEXT: s_delay_alu instid0(VALU_DEP_2) +; GFX11-NEXT: s_add_u32 s4, s4, s6 +; GFX11-NEXT: s_cselect_b32 s6, -1, 0 +; GFX11-NEXT: v_mov_b32_e32 v0, s4 +; GFX11-NEXT: s_cmp_lg_u32 s6, 0 +; GFX11-NEXT: s_addc_u32 s5, s5, s7 +; GFX11-NEXT: s_cselect_b32 s4, -1, 0 +; GFX11-NEXT: v_dual_mov_b32 v2, 0 :: v_dual_mov_b32 v1, s5 ; GFX11-NEXT: v_cndmask_b32_e64 v3, 0, 1, s4 ; GFX11-NEXT: s_clause 0x1 ; GFX11-NEXT: global_store_b64 v2, v[0:1], s[0:1] @@ -551,10 +561,10 @@ define amdgpu_kernel void @v_uaddo_i64(ptr addrspace(1) %out, ptr addrspace(1) % ; SI-NEXT: s_mov_b32 s4, s2 ; SI-NEXT: s_mov_b32 s5, s3 ; SI-NEXT: s_waitcnt vmcnt(0) -; SI-NEXT: v_add_i32_e32 v2, vcc, v0, v2 -; SI-NEXT: v_addc_u32_e32 v3, vcc, v1, v3, vcc -; SI-NEXT: v_cmp_lt_u64_e32 vcc, v[2:3], v[0:1] -; SI-NEXT: buffer_store_dwordx2 v[2:3], off, s[8:11], 0 +; SI-NEXT: v_add_i32_e32 v0, vcc, v0, v2 +; SI-NEXT: v_addc_u32_e32 v1, vcc, v1, v3, vcc +; SI-NEXT: buffer_store_dwordx2 v[0:1], off, s[8:11], 0 +; SI-NEXT: s_waitcnt expcnt(0) ; SI-NEXT: v_cndmask_b32_e64 v0, 0, 1, vcc ; SI-NEXT: buffer_store_byte v0, off, s[4:7], 0 ; SI-NEXT: s_endpgm @@ -574,10 +584,9 @@ define amdgpu_kernel void @v_uaddo_i64(ptr addrspace(1) %out, ptr addrspace(1) % ; VI-NEXT: v_mov_b32_e32 v6, s2 ; VI-NEXT: v_mov_b32_e32 v7, s3 ; VI-NEXT: s_waitcnt vmcnt(0) -; VI-NEXT: v_add_u32_e32 v2, vcc, v0, v2 -; VI-NEXT: v_addc_u32_e32 v3, vcc, v1, v3, vcc -; VI-NEXT: v_cmp_lt_u64_e32 vcc, v[2:3], v[0:1] -; VI-NEXT: flat_store_dwordx2 v[4:5], v[2:3] +; VI-NEXT: v_add_u32_e32 v0, vcc, v0, v2 +; VI-NEXT: v_addc_u32_e32 v1, vcc, v1, v3, vcc +; VI-NEXT: flat_store_dwordx2 v[4:5], v[0:1] ; VI-NEXT: v_cndmask_b32_e64 v0, 0, 1, vcc ; VI-NEXT: flat_store_byte v[6:7], v0 ; VI-NEXT: s_endpgm @@ -590,10 +599,9 @@ define amdgpu_kernel void @v_uaddo_i64(ptr addrspace(1) %out, ptr addrspace(1) % ; GFX9-NEXT: global_load_dwordx2 v[0:1], v4, s[12:13] ; GFX9-NEXT: global_load_dwordx2 v[2:3], v4, s[14:15] ; GFX9-NEXT: s_waitcnt vmcnt(0) -; GFX9-NEXT: v_add_co_u32_e32 v2, vcc, v0, v2 -; GFX9-NEXT: v_addc_co_u32_e32 v3, vcc, v1, v3, vcc -; GFX9-NEXT: v_cmp_lt_u64_e32 vcc, v[2:3], v[0:1] -; GFX9-NEXT: global_store_dwordx2 v4, v[2:3], s[8:9] +; GFX9-NEXT: v_add_co_u32_e32 v0, vcc, v0, v2 +; GFX9-NEXT: v_addc_co_u32_e32 v1, vcc, v1, v3, vcc +; GFX9-NEXT: global_store_dwordx2 v4, v[0:1], s[8:9] ; GFX9-NEXT: v_cndmask_b32_e64 v0, 0, 1, vcc ; GFX9-NEXT: global_store_byte v4, v0, s[10:11] ; GFX9-NEXT: s_endpgm @@ -607,12 
+615,11 @@ define amdgpu_kernel void @v_uaddo_i64(ptr addrspace(1) %out, ptr addrspace(1) % ; GFX10-NEXT: global_load_dwordx2 v[0:1], v4, s[12:13] ; GFX10-NEXT: global_load_dwordx2 v[2:3], v4, s[14:15] ; GFX10-NEXT: s_waitcnt vmcnt(0) -; GFX10-NEXT: v_add_co_u32 v2, vcc_lo, v0, v2 -; GFX10-NEXT: v_add_co_ci_u32_e32 v3, vcc_lo, v1, v3, vcc_lo -; GFX10-NEXT: v_cmp_lt_u64_e32 vcc_lo, v[2:3], v[0:1] -; GFX10-NEXT: v_cndmask_b32_e64 v0, 0, 1, vcc_lo -; GFX10-NEXT: global_store_dwordx2 v4, v[2:3], s[8:9] -; GFX10-NEXT: global_store_byte v4, v0, s[10:11] +; GFX10-NEXT: v_add_co_u32 v0, vcc_lo, v0, v2 +; GFX10-NEXT: v_add_co_ci_u32_e32 v1, vcc_lo, v1, v3, vcc_lo +; GFX10-NEXT: v_cndmask_b32_e64 v2, 0, 1, vcc_lo +; GFX10-NEXT: global_store_dwordx2 v4, v[0:1], s[8:9] +; GFX10-NEXT: global_store_byte v4, v2, s[10:11] ; GFX10-NEXT: s_endpgm ; ; GFX11-LABEL: v_uaddo_i64: @@ -624,14 +631,12 @@ define amdgpu_kernel void @v_uaddo_i64(ptr addrspace(1) %out, ptr addrspace(1) % ; GFX11-NEXT: global_load_b64 v[0:1], v4, s[4:5] ; GFX11-NEXT: global_load_b64 v[2:3], v4, s[6:7] ; GFX11-NEXT: s_waitcnt vmcnt(0) -; GFX11-NEXT: v_add_co_u32 v2, vcc_lo, v0, v2 -; GFX11-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1) -; GFX11-NEXT: v_add_co_ci_u32_e64 v3, null, v1, v3, vcc_lo -; GFX11-NEXT: v_cmp_lt_u64_e32 vcc_lo, v[2:3], v[0:1] -; GFX11-NEXT: v_cndmask_b32_e64 v0, 0, 1, vcc_lo +; GFX11-NEXT: v_add_co_u32 v0, vcc_lo, v0, v2 +; GFX11-NEXT: v_add_co_ci_u32_e32 v1, vcc_lo, v1, v3, vcc_lo +; GFX11-NEXT: v_cndmask_b32_e64 v2, 0, 1, vcc_lo ; GFX11-NEXT: s_clause 0x1 -; GFX11-NEXT: global_store_b64 v4, v[2:3], s[0:1] -; GFX11-NEXT: global_store_b8 v4, v0, s[2:3] +; GFX11-NEXT: global_store_b64 v4, v[0:1], s[0:1] +; GFX11-NEXT: global_store_b8 v4, v2, s[2:3] ; GFX11-NEXT: s_endpgm %tid = call i32 @llvm.amdgcn.workitem.id.x() %tid.ext = sext i32 %tid to i64 diff --git a/llvm/test/CodeGen/AMDGPU/uaddsat.ll b/llvm/test/CodeGen/AMDGPU/uaddsat.ll index 9230174..7f89581 100644 --- a/llvm/test/CodeGen/AMDGPU/uaddsat.ll +++ b/llvm/test/CodeGen/AMDGPU/uaddsat.ll @@ -693,52 +693,47 @@ define i64 @v_uaddsat_i64(i64 %lhs, i64 %rhs) { ; GFX6-LABEL: v_uaddsat_i64: ; GFX6: ; %bb.0: ; GFX6-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0) -; GFX6-NEXT: v_add_i32_e32 v2, vcc, v0, v2 -; GFX6-NEXT: v_addc_u32_e32 v3, vcc, v1, v3, vcc -; GFX6-NEXT: v_cmp_lt_u64_e32 vcc, v[2:3], v[0:1] -; GFX6-NEXT: v_cndmask_b32_e64 v0, v2, -1, vcc -; GFX6-NEXT: v_cndmask_b32_e64 v1, v3, -1, vcc +; GFX6-NEXT: v_add_i32_e32 v0, vcc, v0, v2 +; GFX6-NEXT: v_addc_u32_e32 v1, vcc, v1, v3, vcc +; GFX6-NEXT: v_cndmask_b32_e64 v0, v0, -1, vcc +; GFX6-NEXT: v_cndmask_b32_e64 v1, v1, -1, vcc ; GFX6-NEXT: s_setpc_b64 s[30:31] ; ; GFX8-LABEL: v_uaddsat_i64: ; GFX8: ; %bb.0: ; GFX8-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0) -; GFX8-NEXT: v_add_u32_e32 v2, vcc, v0, v2 -; GFX8-NEXT: v_addc_u32_e32 v3, vcc, v1, v3, vcc -; GFX8-NEXT: v_cmp_lt_u64_e32 vcc, v[2:3], v[0:1] -; GFX8-NEXT: v_cndmask_b32_e64 v0, v2, -1, vcc -; GFX8-NEXT: v_cndmask_b32_e64 v1, v3, -1, vcc +; GFX8-NEXT: v_add_u32_e32 v0, vcc, v0, v2 +; GFX8-NEXT: v_addc_u32_e32 v1, vcc, v1, v3, vcc +; GFX8-NEXT: v_cndmask_b32_e64 v0, v0, -1, vcc +; GFX8-NEXT: v_cndmask_b32_e64 v1, v1, -1, vcc ; GFX8-NEXT: s_setpc_b64 s[30:31] ; ; GFX9-LABEL: v_uaddsat_i64: ; GFX9: ; %bb.0: ; GFX9-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0) -; GFX9-NEXT: v_add_co_u32_e32 v2, vcc, v0, v2 -; GFX9-NEXT: v_addc_co_u32_e32 v3, vcc, v1, v3, vcc -; GFX9-NEXT: v_cmp_lt_u64_e32 vcc, v[2:3], v[0:1] -; GFX9-NEXT: 
v_cndmask_b32_e64 v0, v2, -1, vcc -; GFX9-NEXT: v_cndmask_b32_e64 v1, v3, -1, vcc +; GFX9-NEXT: v_add_co_u32_e32 v0, vcc, v0, v2 +; GFX9-NEXT: v_addc_co_u32_e32 v1, vcc, v1, v3, vcc +; GFX9-NEXT: v_cndmask_b32_e64 v0, v0, -1, vcc +; GFX9-NEXT: v_cndmask_b32_e64 v1, v1, -1, vcc ; GFX9-NEXT: s_setpc_b64 s[30:31] ; ; GFX10-LABEL: v_uaddsat_i64: ; GFX10: ; %bb.0: ; GFX10-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0) -; GFX10-NEXT: v_add_co_u32 v2, vcc_lo, v0, v2 -; GFX10-NEXT: v_add_co_ci_u32_e32 v3, vcc_lo, v1, v3, vcc_lo -; GFX10-NEXT: v_cmp_lt_u64_e32 vcc_lo, v[2:3], v[0:1] -; GFX10-NEXT: v_cndmask_b32_e64 v0, v2, -1, vcc_lo -; GFX10-NEXT: v_cndmask_b32_e64 v1, v3, -1, vcc_lo +; GFX10-NEXT: v_add_co_u32 v0, vcc_lo, v0, v2 +; GFX10-NEXT: v_add_co_ci_u32_e32 v1, vcc_lo, v1, v3, vcc_lo +; GFX10-NEXT: v_cndmask_b32_e64 v0, v0, -1, vcc_lo +; GFX10-NEXT: v_cndmask_b32_e64 v1, v1, -1, vcc_lo ; GFX10-NEXT: s_setpc_b64 s[30:31] ; ; GFX11-LABEL: v_uaddsat_i64: ; GFX11: ; %bb.0: ; GFX11-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0) -; GFX11-NEXT: v_add_co_u32 v2, vcc_lo, v0, v2 -; GFX11-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1) -; GFX11-NEXT: v_add_co_ci_u32_e64 v3, null, v1, v3, vcc_lo -; GFX11-NEXT: v_cmp_lt_u64_e32 vcc_lo, v[2:3], v[0:1] -; GFX11-NEXT: v_cndmask_b32_e64 v0, v2, -1, vcc_lo -; GFX11-NEXT: v_cndmask_b32_e64 v1, v3, -1, vcc_lo +; GFX11-NEXT: v_add_co_u32 v0, vcc_lo, v0, v2 +; GFX11-NEXT: v_add_co_ci_u32_e32 v1, vcc_lo, v1, v3, vcc_lo +; GFX11-NEXT: s_delay_alu instid0(VALU_DEP_2) | instskip(NEXT) | instid1(VALU_DEP_2) +; GFX11-NEXT: v_cndmask_b32_e64 v0, v0, -1, vcc_lo +; GFX11-NEXT: v_cndmask_b32_e64 v1, v1, -1, vcc_lo ; GFX11-NEXT: s_setpc_b64 s[30:31] %result = call i64 @llvm.uadd.sat.i64(i64 %lhs, i64 %rhs) ret i64 %result diff --git a/llvm/test/CodeGen/AMDGPU/udiv64.ll b/llvm/test/CodeGen/AMDGPU/udiv64.ll index 1ed04f8..41199b0 100644 --- a/llvm/test/CodeGen/AMDGPU/udiv64.ll +++ b/llvm/test/CodeGen/AMDGPU/udiv64.ll @@ -146,8 +146,11 @@ define amdgpu_kernel void @s_test_udiv_i64(ptr addrspace(1) %out, i64 %x, i64 %y ; GCN-IR-NEXT: s_cbranch_vccz .LBB0_5 ; GCN-IR-NEXT: ; %bb.1: ; %udiv-bb1 ; GCN-IR-NEXT: s_add_u32 s14, s12, 1 -; GCN-IR-NEXT: s_addc_u32 s15, s13, 0 -; GCN-IR-NEXT: v_cmp_eq_u64_e64 s[8:9], s[14:15], 0 +; GCN-IR-NEXT: s_cselect_b64 s[8:9], -1, 0 +; GCN-IR-NEXT: s_or_b32 s8, s8, s9 +; GCN-IR-NEXT: s_cmp_lg_u32 s8, 0 +; GCN-IR-NEXT: s_addc_u32 s8, s13, 0 +; GCN-IR-NEXT: s_cselect_b64 s[8:9], -1, 0 ; GCN-IR-NEXT: s_sub_i32 s12, 63, s12 ; GCN-IR-NEXT: s_andn2_b64 vcc, exec, s[8:9] ; GCN-IR-NEXT: s_lshl_b64 s[8:9], s[2:3], s12 @@ -157,9 +160,9 @@ define amdgpu_kernel void @s_test_udiv_i64(ptr addrspace(1) %out, i64 %x, i64 %y ; GCN-IR-NEXT: s_add_u32 s14, s6, -1 ; GCN-IR-NEXT: s_addc_u32 s15, s7, -1 ; GCN-IR-NEXT: s_not_b64 s[2:3], s[10:11] -; GCN-IR-NEXT: s_add_u32 s2, s2, s16 -; GCN-IR-NEXT: s_addc_u32 s3, s3, 0 -; GCN-IR-NEXT: s_mov_b64 s[10:11], 0 +; GCN-IR-NEXT: s_add_u32 s10, s2, s16 +; GCN-IR-NEXT: s_addc_u32 s11, s3, 0 +; GCN-IR-NEXT: s_mov_b64 s[2:3], 0 ; GCN-IR-NEXT: s_mov_b32 s5, 0 ; GCN-IR-NEXT: .LBB0_3: ; %udiv-do-while ; GCN-IR-NEXT: ; =>This Inner Loop Header: Depth=1 @@ -167,19 +170,22 @@ define amdgpu_kernel void @s_test_udiv_i64(ptr addrspace(1) %out, i64 %x, i64 %y ; GCN-IR-NEXT: s_lshr_b32 s4, s9, 31 ; GCN-IR-NEXT: s_lshl_b64 s[8:9], s[8:9], 1 ; GCN-IR-NEXT: s_or_b64 s[12:13], s[12:13], s[4:5] -; GCN-IR-NEXT: s_or_b64 s[8:9], s[10:11], s[8:9] -; GCN-IR-NEXT: s_sub_u32 s4, s14, s12 -; GCN-IR-NEXT: s_subb_u32 s4, s15, s13 
-; GCN-IR-NEXT: s_ashr_i32 s10, s4, 31 -; GCN-IR-NEXT: s_mov_b32 s11, s10 -; GCN-IR-NEXT: s_and_b32 s4, s10, 1 -; GCN-IR-NEXT: s_and_b64 s[10:11], s[10:11], s[6:7] -; GCN-IR-NEXT: s_sub_u32 s12, s12, s10 -; GCN-IR-NEXT: s_subb_u32 s13, s13, s11 -; GCN-IR-NEXT: s_add_u32 s2, s2, 1 -; GCN-IR-NEXT: s_addc_u32 s3, s3, 0 -; GCN-IR-NEXT: v_cmp_eq_u64_e64 s[16:17], s[2:3], 0 -; GCN-IR-NEXT: s_mov_b64 s[10:11], s[4:5] +; GCN-IR-NEXT: s_or_b64 s[8:9], s[2:3], s[8:9] +; GCN-IR-NEXT: s_sub_u32 s2, s14, s12 +; GCN-IR-NEXT: s_subb_u32 s2, s15, s13 +; GCN-IR-NEXT: s_ashr_i32 s2, s2, 31 +; GCN-IR-NEXT: s_mov_b32 s3, s2 +; GCN-IR-NEXT: s_and_b32 s4, s2, 1 +; GCN-IR-NEXT: s_and_b64 s[16:17], s[2:3], s[6:7] +; GCN-IR-NEXT: s_sub_u32 s12, s12, s16 +; GCN-IR-NEXT: s_subb_u32 s13, s13, s17 +; GCN-IR-NEXT: s_add_u32 s10, s10, 1 +; GCN-IR-NEXT: s_cselect_b64 s[16:17], -1, 0 +; GCN-IR-NEXT: s_or_b32 s16, s16, s17 +; GCN-IR-NEXT: s_cmp_lg_u32 s16, 0 +; GCN-IR-NEXT: s_addc_u32 s11, s11, 0 +; GCN-IR-NEXT: s_cselect_b64 s[16:17], -1, 0 +; GCN-IR-NEXT: s_mov_b64 s[2:3], s[4:5] ; GCN-IR-NEXT: s_and_b64 vcc, exec, s[16:17] ; GCN-IR-NEXT: s_cbranch_vccz .LBB0_3 ; GCN-IR-NEXT: .LBB0_4: ; %Flow7 @@ -313,19 +319,19 @@ define i64 @v_test_udiv_i64(i64 %x, i64 %y) { ; GCN-IR-NEXT: v_ffbh_u32_e32 v4, v2 ; GCN-IR-NEXT: v_add_i32_e64 v4, s[6:7], 32, v4 ; GCN-IR-NEXT: v_ffbh_u32_e32 v5, v3 -; GCN-IR-NEXT: v_min_u32_e32 v14, v4, v5 +; GCN-IR-NEXT: v_min_u32_e32 v8, v4, v5 ; GCN-IR-NEXT: v_ffbh_u32_e32 v4, v0 ; GCN-IR-NEXT: v_add_i32_e64 v4, s[6:7], 32, v4 ; GCN-IR-NEXT: v_ffbh_u32_e32 v5, v1 -; GCN-IR-NEXT: v_min_u32_e32 v15, v4, v5 -; GCN-IR-NEXT: v_sub_i32_e64 v8, s[6:7], v14, v15 +; GCN-IR-NEXT: v_min_u32_e32 v9, v4, v5 +; GCN-IR-NEXT: v_sub_i32_e64 v6, s[6:7], v8, v9 ; GCN-IR-NEXT: v_cmp_eq_u64_e32 vcc, 0, v[2:3] ; GCN-IR-NEXT: v_cmp_eq_u64_e64 s[4:5], 0, v[0:1] -; GCN-IR-NEXT: v_subb_u32_e64 v9, s[6:7], 0, 0, s[6:7] -; GCN-IR-NEXT: v_cmp_lt_u64_e64 s[6:7], 63, v[8:9] +; GCN-IR-NEXT: v_subb_u32_e64 v7, s[6:7], 0, 0, s[6:7] +; GCN-IR-NEXT: v_cmp_lt_u64_e64 s[6:7], 63, v[6:7] ; GCN-IR-NEXT: s_or_b64 s[4:5], vcc, s[4:5] ; GCN-IR-NEXT: s_or_b64 s[4:5], s[4:5], s[6:7] -; GCN-IR-NEXT: v_cmp_ne_u64_e32 vcc, 63, v[8:9] +; GCN-IR-NEXT: v_cmp_ne_u64_e32 vcc, 63, v[6:7] ; GCN-IR-NEXT: s_xor_b64 s[6:7], s[4:5], -1 ; GCN-IR-NEXT: v_cndmask_b32_e64 v4, v1, 0, s[4:5] ; GCN-IR-NEXT: v_cndmask_b32_e64 v5, v0, 0, s[4:5] @@ -333,55 +339,54 @@ define i64 @v_test_udiv_i64(i64 %x, i64 %y) { ; GCN-IR-NEXT: s_and_saveexec_b64 s[6:7], s[4:5] ; GCN-IR-NEXT: s_cbranch_execz .LBB1_6 ; GCN-IR-NEXT: ; %bb.1: ; %udiv-bb1 -; GCN-IR-NEXT: v_add_i32_e32 v10, vcc, 1, v8 -; GCN-IR-NEXT: v_addc_u32_e32 v11, vcc, 0, v9, vcc -; GCN-IR-NEXT: v_sub_i32_e64 v4, s[4:5], 63, v8 -; GCN-IR-NEXT: v_mov_b32_e32 v6, 0 -; GCN-IR-NEXT: v_cmp_ne_u64_e32 vcc, 0, v[10:11] +; GCN-IR-NEXT: v_add_i32_e32 v10, vcc, 1, v6 +; GCN-IR-NEXT: v_addc_u32_e32 v4, vcc, 0, v7, vcc +; GCN-IR-NEXT: v_sub_i32_e64 v4, s[4:5], 63, v6 ; GCN-IR-NEXT: v_lshl_b64 v[4:5], v[0:1], v4 +; GCN-IR-NEXT: v_mov_b32_e32 v6, 0 ; GCN-IR-NEXT: v_mov_b32_e32 v7, 0 -; GCN-IR-NEXT: s_and_saveexec_b64 s[4:5], vcc -; GCN-IR-NEXT: s_xor_b64 s[8:9], exec, s[4:5] +; GCN-IR-NEXT: s_xor_b64 s[4:5], vcc, -1 +; GCN-IR-NEXT: s_and_saveexec_b64 s[8:9], s[4:5] +; GCN-IR-NEXT: s_xor_b64 s[4:5], exec, s[8:9] ; GCN-IR-NEXT: s_cbranch_execz .LBB1_5 ; GCN-IR-NEXT: ; %bb.2: ; %udiv-preheader -; GCN-IR-NEXT: v_add_i32_e32 v12, vcc, -1, v2 -; GCN-IR-NEXT: v_lshr_b64 v[8:9], v[0:1], v10 -; GCN-IR-NEXT: v_addc_u32_e32 v13, vcc, -1, v3, vcc 
-; GCN-IR-NEXT: v_not_b32_e32 v0, v14 -; GCN-IR-NEXT: v_add_i32_e32 v0, vcc, v0, v15 -; GCN-IR-NEXT: v_mov_b32_e32 v10, 0 -; GCN-IR-NEXT: v_addc_u32_e64 v1, s[4:5], -1, 0, vcc -; GCN-IR-NEXT: s_mov_b64 s[10:11], 0 -; GCN-IR-NEXT: v_mov_b32_e32 v11, 0 +; GCN-IR-NEXT: v_lshr_b64 v[0:1], v[0:1], v10 +; GCN-IR-NEXT: v_add_i32_e32 v10, vcc, -1, v2 +; GCN-IR-NEXT: v_addc_u32_e32 v11, vcc, -1, v3, vcc +; GCN-IR-NEXT: v_not_b32_e32 v6, v8 +; GCN-IR-NEXT: v_add_i32_e32 v12, vcc, v6, v9 +; GCN-IR-NEXT: v_addc_u32_e64 v13, s[8:9], -1, 0, vcc +; GCN-IR-NEXT: v_mov_b32_e32 v8, 0 +; GCN-IR-NEXT: s_mov_b64 s[8:9], 0 +; GCN-IR-NEXT: v_mov_b32_e32 v9, 0 ; GCN-IR-NEXT: v_mov_b32_e32 v7, 0 ; GCN-IR-NEXT: .LBB1_3: ; %udiv-do-while ; GCN-IR-NEXT: ; =>This Inner Loop Header: Depth=1 -; GCN-IR-NEXT: v_lshl_b64 v[8:9], v[8:9], 1 +; GCN-IR-NEXT: v_lshl_b64 v[0:1], v[0:1], 1 ; GCN-IR-NEXT: v_lshrrev_b32_e32 v6, 31, v5 -; GCN-IR-NEXT: v_or_b32_e32 v8, v8, v6 +; GCN-IR-NEXT: v_or_b32_e32 v0, v0, v6 ; GCN-IR-NEXT: v_lshl_b64 v[4:5], v[4:5], 1 -; GCN-IR-NEXT: v_sub_i32_e32 v6, vcc, v12, v8 -; GCN-IR-NEXT: v_subb_u32_e32 v6, vcc, v13, v9, vcc -; GCN-IR-NEXT: v_or_b32_e32 v4, v10, v4 -; GCN-IR-NEXT: v_ashrrev_i32_e32 v10, 31, v6 -; GCN-IR-NEXT: v_add_i32_e32 v0, vcc, 1, v0 -; GCN-IR-NEXT: v_or_b32_e32 v5, v11, v5 -; GCN-IR-NEXT: v_and_b32_e32 v6, 1, v10 -; GCN-IR-NEXT: v_and_b32_e32 v11, v10, v3 -; GCN-IR-NEXT: v_and_b32_e32 v10, v10, v2 -; GCN-IR-NEXT: v_addc_u32_e32 v1, vcc, 0, v1, vcc -; GCN-IR-NEXT: v_cmp_eq_u64_e32 vcc, 0, v[0:1] -; GCN-IR-NEXT: v_sub_i32_e64 v8, s[4:5], v8, v10 -; GCN-IR-NEXT: v_subb_u32_e64 v9, s[4:5], v9, v11, s[4:5] -; GCN-IR-NEXT: v_mov_b32_e32 v11, v7 -; GCN-IR-NEXT: s_or_b64 s[10:11], vcc, s[10:11] -; GCN-IR-NEXT: v_mov_b32_e32 v10, v6 -; GCN-IR-NEXT: s_andn2_b64 exec, exec, s[10:11] +; GCN-IR-NEXT: v_sub_i32_e32 v6, vcc, v10, v0 +; GCN-IR-NEXT: v_subb_u32_e32 v6, vcc, v11, v1, vcc +; GCN-IR-NEXT: v_or_b32_e32 v4, v8, v4 +; GCN-IR-NEXT: v_ashrrev_i32_e32 v8, 31, v6 +; GCN-IR-NEXT: v_or_b32_e32 v5, v9, v5 +; GCN-IR-NEXT: v_and_b32_e32 v6, 1, v8 +; GCN-IR-NEXT: v_and_b32_e32 v9, v8, v3 +; GCN-IR-NEXT: v_and_b32_e32 v8, v8, v2 +; GCN-IR-NEXT: v_sub_i32_e32 v0, vcc, v0, v8 +; GCN-IR-NEXT: v_subb_u32_e32 v1, vcc, v1, v9, vcc +; GCN-IR-NEXT: v_add_i32_e32 v12, vcc, 1, v12 +; GCN-IR-NEXT: v_addc_u32_e32 v13, vcc, 0, v13, vcc +; GCN-IR-NEXT: v_mov_b32_e32 v9, v7 +; GCN-IR-NEXT: s_or_b64 s[8:9], vcc, s[8:9] +; GCN-IR-NEXT: v_mov_b32_e32 v8, v6 +; GCN-IR-NEXT: s_andn2_b64 exec, exec, s[8:9] ; GCN-IR-NEXT: s_cbranch_execnz .LBB1_3 ; GCN-IR-NEXT: ; %bb.4: ; %Flow -; GCN-IR-NEXT: s_or_b64 exec, exec, s[10:11] -; GCN-IR-NEXT: .LBB1_5: ; %Flow4 ; GCN-IR-NEXT: s_or_b64 exec, exec, s[8:9] +; GCN-IR-NEXT: .LBB1_5: ; %Flow4 +; GCN-IR-NEXT: s_or_b64 exec, exec, s[4:5] ; GCN-IR-NEXT: v_lshl_b64 v[0:1], v[4:5], 1 ; GCN-IR-NEXT: v_or_b32_e32 v4, v7, v1 ; GCN-IR-NEXT: v_or_b32_e32 v5, v6, v0 @@ -923,34 +928,37 @@ define amdgpu_kernel void @s_test_udiv_k_num_i64(ptr addrspace(1) %out, i64 %x) ; GCN-IR-NEXT: s_load_dwordx4 s[0:3], s[4:5], 0x9 ; GCN-IR-NEXT: s_mov_b64 s[4:5], 0 ; GCN-IR-NEXT: s_waitcnt lgkmcnt(0) -; GCN-IR-NEXT: s_flbit_i32_b64 s12, s[2:3] -; GCN-IR-NEXT: s_add_u32 s8, s12, 0xffffffc5 +; GCN-IR-NEXT: s_flbit_i32_b64 s14, s[2:3] +; GCN-IR-NEXT: s_add_u32 s8, s14, 0xffffffc5 ; GCN-IR-NEXT: s_addc_u32 s9, 0, -1 ; GCN-IR-NEXT: v_cmp_eq_u64_e64 s[6:7], s[2:3], 0 ; GCN-IR-NEXT: v_cmp_gt_u64_e64 s[10:11], s[8:9], 63 -; GCN-IR-NEXT: v_cmp_eq_u64_e64 s[14:15], s[8:9], 63 +; GCN-IR-NEXT: v_cmp_eq_u64_e64 
s[12:13], s[8:9], 63 ; GCN-IR-NEXT: s_or_b64 s[10:11], s[6:7], s[10:11] ; GCN-IR-NEXT: s_and_b64 s[6:7], s[10:11], exec ; GCN-IR-NEXT: s_cselect_b32 s6, 0, 24 -; GCN-IR-NEXT: s_or_b64 s[10:11], s[10:11], s[14:15] +; GCN-IR-NEXT: s_or_b64 s[10:11], s[10:11], s[12:13] ; GCN-IR-NEXT: s_andn2_b64 vcc, exec, s[10:11] ; GCN-IR-NEXT: s_mov_b32 s7, 0 ; GCN-IR-NEXT: s_cbranch_vccz .LBB8_5 ; GCN-IR-NEXT: ; %bb.1: ; %udiv-bb1 ; GCN-IR-NEXT: s_add_u32 s10, s8, 1 -; GCN-IR-NEXT: s_addc_u32 s11, s9, 0 -; GCN-IR-NEXT: v_cmp_eq_u64_e64 s[6:7], s[10:11], 0 +; GCN-IR-NEXT: s_cselect_b64 s[6:7], -1, 0 +; GCN-IR-NEXT: s_or_b32 s6, s6, s7 +; GCN-IR-NEXT: s_cmp_lg_u32 s6, 0 +; GCN-IR-NEXT: s_addc_u32 s6, s9, 0 +; GCN-IR-NEXT: s_cselect_b64 s[6:7], -1, 0 ; GCN-IR-NEXT: s_sub_i32 s8, 63, s8 ; GCN-IR-NEXT: s_andn2_b64 vcc, exec, s[6:7] ; GCN-IR-NEXT: s_lshl_b64 s[6:7], 24, s8 ; GCN-IR-NEXT: s_cbranch_vccz .LBB8_4 ; GCN-IR-NEXT: ; %bb.2: ; %udiv-preheader ; GCN-IR-NEXT: s_lshr_b64 s[10:11], 24, s10 -; GCN-IR-NEXT: s_add_u32 s14, s2, -1 -; GCN-IR-NEXT: s_addc_u32 s15, s3, -1 -; GCN-IR-NEXT: s_sub_u32 s8, 58, s12 -; GCN-IR-NEXT: s_subb_u32 s9, 0, 0 -; GCN-IR-NEXT: s_mov_b64 s[12:13], 0 +; GCN-IR-NEXT: s_add_u32 s12, s2, -1 +; GCN-IR-NEXT: s_addc_u32 s13, s3, -1 +; GCN-IR-NEXT: s_sub_u32 s14, 58, s14 +; GCN-IR-NEXT: s_subb_u32 s15, 0, 0 +; GCN-IR-NEXT: s_mov_b64 s[8:9], 0 ; GCN-IR-NEXT: s_mov_b32 s5, 0 ; GCN-IR-NEXT: .LBB8_3: ; %udiv-do-while ; GCN-IR-NEXT: ; =>This Inner Loop Header: Depth=1 @@ -958,19 +966,22 @@ define amdgpu_kernel void @s_test_udiv_k_num_i64(ptr addrspace(1) %out, i64 %x) ; GCN-IR-NEXT: s_lshr_b32 s4, s7, 31 ; GCN-IR-NEXT: s_lshl_b64 s[6:7], s[6:7], 1 ; GCN-IR-NEXT: s_or_b64 s[10:11], s[10:11], s[4:5] -; GCN-IR-NEXT: s_or_b64 s[6:7], s[12:13], s[6:7] -; GCN-IR-NEXT: s_sub_u32 s4, s14, s10 -; GCN-IR-NEXT: s_subb_u32 s4, s15, s11 -; GCN-IR-NEXT: s_ashr_i32 s12, s4, 31 -; GCN-IR-NEXT: s_mov_b32 s13, s12 -; GCN-IR-NEXT: s_and_b32 s4, s12, 1 -; GCN-IR-NEXT: s_and_b64 s[12:13], s[12:13], s[2:3] -; GCN-IR-NEXT: s_sub_u32 s10, s10, s12 -; GCN-IR-NEXT: s_subb_u32 s11, s11, s13 -; GCN-IR-NEXT: s_add_u32 s8, s8, 1 -; GCN-IR-NEXT: s_addc_u32 s9, s9, 0 -; GCN-IR-NEXT: v_cmp_eq_u64_e64 s[16:17], s[8:9], 0 -; GCN-IR-NEXT: s_mov_b64 s[12:13], s[4:5] +; GCN-IR-NEXT: s_or_b64 s[6:7], s[8:9], s[6:7] +; GCN-IR-NEXT: s_sub_u32 s4, s12, s10 +; GCN-IR-NEXT: s_subb_u32 s4, s13, s11 +; GCN-IR-NEXT: s_ashr_i32 s8, s4, 31 +; GCN-IR-NEXT: s_mov_b32 s9, s8 +; GCN-IR-NEXT: s_and_b32 s4, s8, 1 +; GCN-IR-NEXT: s_and_b64 s[16:17], s[8:9], s[2:3] +; GCN-IR-NEXT: s_sub_u32 s10, s10, s16 +; GCN-IR-NEXT: s_subb_u32 s11, s11, s17 +; GCN-IR-NEXT: s_add_u32 s14, s14, 1 +; GCN-IR-NEXT: s_cselect_b64 s[16:17], -1, 0 +; GCN-IR-NEXT: s_or_b32 s16, s16, s17 +; GCN-IR-NEXT: s_cmp_lg_u32 s16, 0 +; GCN-IR-NEXT: s_addc_u32 s15, s15, 0 +; GCN-IR-NEXT: s_cselect_b64 s[16:17], -1, 0 +; GCN-IR-NEXT: s_mov_b64 s[8:9], s[4:5] ; GCN-IR-NEXT: s_and_b64 vcc, exec, s[16:17] ; GCN-IR-NEXT: s_cbranch_vccz .LBB8_3 ; GCN-IR-NEXT: .LBB8_4: ; %Flow6 @@ -1094,12 +1105,12 @@ define i64 @v_test_udiv_pow2_k_num_i64(i64 %x) { ; GCN-IR-NEXT: v_ffbh_u32_e32 v2, v0 ; GCN-IR-NEXT: v_add_i32_e32 v2, vcc, 32, v2 ; GCN-IR-NEXT: v_ffbh_u32_e32 v3, v1 -; GCN-IR-NEXT: v_min_u32_e32 v10, v2, v3 -; GCN-IR-NEXT: v_add_i32_e32 v6, vcc, 0xffffffd0, v10 -; GCN-IR-NEXT: v_addc_u32_e64 v7, s[6:7], 0, -1, vcc +; GCN-IR-NEXT: v_min_u32_e32 v8, v2, v3 +; GCN-IR-NEXT: v_add_i32_e32 v4, vcc, 0xffffffd0, v8 +; GCN-IR-NEXT: v_addc_u32_e64 v5, s[6:7], 0, -1, vcc ; GCN-IR-NEXT: 
v_cmp_eq_u64_e64 s[4:5], 0, v[0:1] -; GCN-IR-NEXT: v_cmp_lt_u64_e32 vcc, 63, v[6:7] -; GCN-IR-NEXT: v_cmp_ne_u64_e64 s[6:7], 63, v[6:7] +; GCN-IR-NEXT: v_cmp_lt_u64_e32 vcc, 63, v[4:5] +; GCN-IR-NEXT: v_cmp_ne_u64_e64 s[6:7], 63, v[4:5] ; GCN-IR-NEXT: v_mov_b32_e32 v3, 0x8000 ; GCN-IR-NEXT: s_or_b64 s[4:5], s[4:5], vcc ; GCN-IR-NEXT: v_cndmask_b32_e64 v3, v3, 0, s[4:5] @@ -1109,55 +1120,54 @@ define i64 @v_test_udiv_pow2_k_num_i64(i64 %x) { ; GCN-IR-NEXT: s_and_saveexec_b64 s[6:7], s[4:5] ; GCN-IR-NEXT: s_cbranch_execz .LBB9_6 ; GCN-IR-NEXT: ; %bb.1: ; %udiv-bb1 -; GCN-IR-NEXT: v_add_i32_e32 v8, vcc, 1, v6 -; GCN-IR-NEXT: v_sub_i32_e64 v2, s[4:5], 63, v6 -; GCN-IR-NEXT: v_addc_u32_e32 v9, vcc, 0, v7, vcc -; GCN-IR-NEXT: s_mov_b64 s[4:5], 0x8000 +; GCN-IR-NEXT: v_add_i32_e32 v6, vcc, 1, v4 +; GCN-IR-NEXT: v_addc_u32_e32 v2, vcc, 0, v5, vcc +; GCN-IR-NEXT: v_sub_i32_e64 v2, s[4:5], 63, v4 +; GCN-IR-NEXT: s_mov_b64 s[8:9], 0x8000 +; GCN-IR-NEXT: v_lshl_b64 v[2:3], s[8:9], v2 ; GCN-IR-NEXT: v_mov_b32_e32 v4, 0 -; GCN-IR-NEXT: v_cmp_ne_u64_e32 vcc, 0, v[8:9] -; GCN-IR-NEXT: v_lshl_b64 v[2:3], s[4:5], v2 ; GCN-IR-NEXT: v_mov_b32_e32 v5, 0 -; GCN-IR-NEXT: s_and_saveexec_b64 s[8:9], vcc -; GCN-IR-NEXT: s_xor_b64 s[8:9], exec, s[8:9] +; GCN-IR-NEXT: s_xor_b64 s[4:5], vcc, -1 +; GCN-IR-NEXT: s_and_saveexec_b64 s[10:11], s[4:5] +; GCN-IR-NEXT: s_xor_b64 s[4:5], exec, s[10:11] ; GCN-IR-NEXT: s_cbranch_execz .LBB9_5 ; GCN-IR-NEXT: ; %bb.2: ; %udiv-preheader -; GCN-IR-NEXT: v_add_i32_e32 v12, vcc, -1, v0 -; GCN-IR-NEXT: v_addc_u32_e32 v13, vcc, -1, v1, vcc -; GCN-IR-NEXT: v_lshr_b64 v[8:9], s[4:5], v8 -; GCN-IR-NEXT: v_sub_i32_e32 v6, vcc, 47, v10 -; GCN-IR-NEXT: v_mov_b32_e32 v10, 0 -; GCN-IR-NEXT: v_subb_u32_e64 v7, s[4:5], 0, 0, vcc -; GCN-IR-NEXT: s_mov_b64 s[10:11], 0 -; GCN-IR-NEXT: v_mov_b32_e32 v11, 0 +; GCN-IR-NEXT: v_add_i32_e32 v10, vcc, -1, v0 +; GCN-IR-NEXT: v_addc_u32_e32 v11, vcc, -1, v1, vcc +; GCN-IR-NEXT: v_sub_i32_e32 v12, vcc, 47, v8 +; GCN-IR-NEXT: v_lshr_b64 v[6:7], s[8:9], v6 +; GCN-IR-NEXT: v_subb_u32_e64 v13, s[8:9], 0, 0, vcc +; GCN-IR-NEXT: v_mov_b32_e32 v8, 0 +; GCN-IR-NEXT: s_mov_b64 s[8:9], 0 +; GCN-IR-NEXT: v_mov_b32_e32 v9, 0 ; GCN-IR-NEXT: v_mov_b32_e32 v5, 0 ; GCN-IR-NEXT: .LBB9_3: ; %udiv-do-while ; GCN-IR-NEXT: ; =>This Inner Loop Header: Depth=1 -; GCN-IR-NEXT: v_lshl_b64 v[8:9], v[8:9], 1 +; GCN-IR-NEXT: v_lshl_b64 v[6:7], v[6:7], 1 ; GCN-IR-NEXT: v_lshrrev_b32_e32 v4, 31, v3 -; GCN-IR-NEXT: v_or_b32_e32 v8, v8, v4 +; GCN-IR-NEXT: v_or_b32_e32 v6, v6, v4 ; GCN-IR-NEXT: v_lshl_b64 v[2:3], v[2:3], 1 -; GCN-IR-NEXT: v_sub_i32_e32 v4, vcc, v12, v8 -; GCN-IR-NEXT: v_subb_u32_e32 v4, vcc, v13, v9, vcc -; GCN-IR-NEXT: v_or_b32_e32 v2, v10, v2 -; GCN-IR-NEXT: v_ashrrev_i32_e32 v10, 31, v4 -; GCN-IR-NEXT: v_add_i32_e32 v6, vcc, 1, v6 -; GCN-IR-NEXT: v_or_b32_e32 v3, v11, v3 -; GCN-IR-NEXT: v_and_b32_e32 v4, 1, v10 -; GCN-IR-NEXT: v_and_b32_e32 v11, v10, v1 -; GCN-IR-NEXT: v_and_b32_e32 v10, v10, v0 -; GCN-IR-NEXT: v_addc_u32_e32 v7, vcc, 0, v7, vcc -; GCN-IR-NEXT: v_cmp_eq_u64_e32 vcc, 0, v[6:7] -; GCN-IR-NEXT: v_sub_i32_e64 v8, s[4:5], v8, v10 -; GCN-IR-NEXT: v_subb_u32_e64 v9, s[4:5], v9, v11, s[4:5] -; GCN-IR-NEXT: v_mov_b32_e32 v11, v5 -; GCN-IR-NEXT: s_or_b64 s[10:11], vcc, s[10:11] -; GCN-IR-NEXT: v_mov_b32_e32 v10, v4 -; GCN-IR-NEXT: s_andn2_b64 exec, exec, s[10:11] +; GCN-IR-NEXT: v_sub_i32_e32 v4, vcc, v10, v6 +; GCN-IR-NEXT: v_subb_u32_e32 v4, vcc, v11, v7, vcc +; GCN-IR-NEXT: v_or_b32_e32 v2, v8, v2 +; GCN-IR-NEXT: v_ashrrev_i32_e32 v8, 31, v4 +; GCN-IR-NEXT: 
v_or_b32_e32 v3, v9, v3 +; GCN-IR-NEXT: v_and_b32_e32 v4, 1, v8 +; GCN-IR-NEXT: v_and_b32_e32 v9, v8, v1 +; GCN-IR-NEXT: v_and_b32_e32 v8, v8, v0 +; GCN-IR-NEXT: v_sub_i32_e32 v6, vcc, v6, v8 +; GCN-IR-NEXT: v_subb_u32_e32 v7, vcc, v7, v9, vcc +; GCN-IR-NEXT: v_add_i32_e32 v12, vcc, 1, v12 +; GCN-IR-NEXT: v_addc_u32_e32 v13, vcc, 0, v13, vcc +; GCN-IR-NEXT: v_mov_b32_e32 v9, v5 +; GCN-IR-NEXT: s_or_b64 s[8:9], vcc, s[8:9] +; GCN-IR-NEXT: v_mov_b32_e32 v8, v4 +; GCN-IR-NEXT: s_andn2_b64 exec, exec, s[8:9] ; GCN-IR-NEXT: s_cbranch_execnz .LBB9_3 ; GCN-IR-NEXT: ; %bb.4: ; %Flow -; GCN-IR-NEXT: s_or_b64 exec, exec, s[10:11] -; GCN-IR-NEXT: .LBB9_5: ; %Flow4 ; GCN-IR-NEXT: s_or_b64 exec, exec, s[8:9] +; GCN-IR-NEXT: .LBB9_5: ; %Flow4 +; GCN-IR-NEXT: s_or_b64 exec, exec, s[4:5] ; GCN-IR-NEXT: v_lshl_b64 v[0:1], v[2:3], 1 ; GCN-IR-NEXT: v_or_b32_e32 v2, v5, v1 ; GCN-IR-NEXT: v_or_b32_e32 v3, v4, v0 @@ -1184,13 +1194,13 @@ define i64 @v_test_udiv_pow2_k_den_i64(i64 %x) { ; GCN-IR-NEXT: v_ffbh_u32_e32 v2, v0 ; GCN-IR-NEXT: v_add_i32_e64 v2, s[4:5], 32, v2 ; GCN-IR-NEXT: v_ffbh_u32_e32 v3, v1 -; GCN-IR-NEXT: v_min_u32_e32 v10, v2, v3 -; GCN-IR-NEXT: v_sub_i32_e64 v6, s[4:5], 48, v10 -; GCN-IR-NEXT: v_subb_u32_e64 v7, s[4:5], 0, 0, s[4:5] +; GCN-IR-NEXT: v_min_u32_e32 v6, v2, v3 +; GCN-IR-NEXT: v_sub_i32_e64 v4, s[4:5], 48, v6 +; GCN-IR-NEXT: v_subb_u32_e64 v5, s[4:5], 0, 0, s[4:5] ; GCN-IR-NEXT: v_cmp_eq_u64_e32 vcc, 0, v[0:1] -; GCN-IR-NEXT: v_cmp_lt_u64_e64 s[4:5], 63, v[6:7] +; GCN-IR-NEXT: v_cmp_lt_u64_e64 s[4:5], 63, v[4:5] ; GCN-IR-NEXT: s_or_b64 s[4:5], vcc, s[4:5] -; GCN-IR-NEXT: v_cmp_ne_u64_e32 vcc, 63, v[6:7] +; GCN-IR-NEXT: v_cmp_ne_u64_e32 vcc, 63, v[4:5] ; GCN-IR-NEXT: s_xor_b64 s[6:7], s[4:5], -1 ; GCN-IR-NEXT: v_cndmask_b32_e64 v2, v1, 0, s[4:5] ; GCN-IR-NEXT: v_cndmask_b32_e64 v3, v0, 0, s[4:5] @@ -1198,52 +1208,51 @@ define i64 @v_test_udiv_pow2_k_den_i64(i64 %x) { ; GCN-IR-NEXT: s_and_saveexec_b64 s[6:7], s[4:5] ; GCN-IR-NEXT: s_cbranch_execz .LBB10_6 ; GCN-IR-NEXT: ; %bb.1: ; %udiv-bb1 -; GCN-IR-NEXT: v_add_i32_e32 v8, vcc, 1, v6 -; GCN-IR-NEXT: v_addc_u32_e32 v9, vcc, 0, v7, vcc -; GCN-IR-NEXT: v_sub_i32_e64 v2, s[4:5], 63, v6 -; GCN-IR-NEXT: v_mov_b32_e32 v4, 0 -; GCN-IR-NEXT: v_cmp_ne_u64_e32 vcc, 0, v[8:9] +; GCN-IR-NEXT: v_add_i32_e32 v7, vcc, 1, v4 +; GCN-IR-NEXT: v_addc_u32_e32 v2, vcc, 0, v5, vcc +; GCN-IR-NEXT: v_sub_i32_e64 v2, s[4:5], 63, v4 ; GCN-IR-NEXT: v_lshl_b64 v[2:3], v[0:1], v2 +; GCN-IR-NEXT: v_mov_b32_e32 v4, 0 ; GCN-IR-NEXT: v_mov_b32_e32 v5, 0 -; GCN-IR-NEXT: s_and_saveexec_b64 s[4:5], vcc -; GCN-IR-NEXT: s_xor_b64 s[8:9], exec, s[4:5] +; GCN-IR-NEXT: s_xor_b64 s[4:5], vcc, -1 +; GCN-IR-NEXT: s_and_saveexec_b64 s[8:9], s[4:5] +; GCN-IR-NEXT: s_xor_b64 s[4:5], exec, s[8:9] ; GCN-IR-NEXT: s_cbranch_execz .LBB10_5 ; GCN-IR-NEXT: ; %bb.2: ; %udiv-preheader -; GCN-IR-NEXT: v_lshr_b64 v[6:7], v[0:1], v8 -; GCN-IR-NEXT: v_add_i32_e32 v0, vcc, 0xffffffcf, v10 -; GCN-IR-NEXT: v_mov_b32_e32 v8, 0 -; GCN-IR-NEXT: v_addc_u32_e64 v1, s[4:5], 0, -1, vcc -; GCN-IR-NEXT: s_mov_b64 s[10:11], 0 -; GCN-IR-NEXT: v_mov_b32_e32 v9, 0 +; GCN-IR-NEXT: v_add_i32_e32 v8, vcc, 0xffffffcf, v6 +; GCN-IR-NEXT: v_lshr_b64 v[0:1], v[0:1], v7 +; GCN-IR-NEXT: v_addc_u32_e64 v9, s[8:9], 0, -1, vcc +; GCN-IR-NEXT: v_mov_b32_e32 v6, 0 +; GCN-IR-NEXT: s_mov_b64 s[8:9], 0 +; GCN-IR-NEXT: v_mov_b32_e32 v7, 0 ; GCN-IR-NEXT: v_mov_b32_e32 v5, 0 -; GCN-IR-NEXT: s_movk_i32 s12, 0x7fff +; GCN-IR-NEXT: s_movk_i32 s10, 0x7fff ; GCN-IR-NEXT: .LBB10_3: ; %udiv-do-while ; GCN-IR-NEXT: ; =>This Inner Loop 
Header: Depth=1 -; GCN-IR-NEXT: v_lshl_b64 v[6:7], v[6:7], 1 +; GCN-IR-NEXT: v_lshl_b64 v[0:1], v[0:1], 1 ; GCN-IR-NEXT: v_lshrrev_b32_e32 v4, 31, v3 -; GCN-IR-NEXT: v_or_b32_e32 v6, v6, v4 -; GCN-IR-NEXT: v_sub_i32_e32 v4, vcc, s12, v6 +; GCN-IR-NEXT: v_or_b32_e32 v0, v0, v4 ; GCN-IR-NEXT: v_lshl_b64 v[2:3], v[2:3], 1 -; GCN-IR-NEXT: v_subb_u32_e32 v4, vcc, 0, v7, vcc -; GCN-IR-NEXT: v_add_i32_e32 v0, vcc, 1, v0 -; GCN-IR-NEXT: v_or_b32_e32 v2, v8, v2 -; GCN-IR-NEXT: v_ashrrev_i32_e32 v8, 31, v4 -; GCN-IR-NEXT: v_addc_u32_e32 v1, vcc, 0, v1, vcc -; GCN-IR-NEXT: v_and_b32_e32 v4, 1, v8 -; GCN-IR-NEXT: v_and_b32_e32 v8, 0x8000, v8 -; GCN-IR-NEXT: v_cmp_eq_u64_e32 vcc, 0, v[0:1] -; GCN-IR-NEXT: v_or_b32_e32 v3, v9, v3 -; GCN-IR-NEXT: v_sub_i32_e64 v6, s[4:5], v6, v8 -; GCN-IR-NEXT: v_mov_b32_e32 v9, v5 -; GCN-IR-NEXT: v_subbrev_u32_e64 v7, s[4:5], 0, v7, s[4:5] -; GCN-IR-NEXT: s_or_b64 s[10:11], vcc, s[10:11] -; GCN-IR-NEXT: v_mov_b32_e32 v8, v4 -; GCN-IR-NEXT: s_andn2_b64 exec, exec, s[10:11] +; GCN-IR-NEXT: v_sub_i32_e32 v4, vcc, s10, v0 +; GCN-IR-NEXT: v_subb_u32_e32 v4, vcc, 0, v1, vcc +; GCN-IR-NEXT: v_or_b32_e32 v2, v6, v2 +; GCN-IR-NEXT: v_ashrrev_i32_e32 v6, 31, v4 +; GCN-IR-NEXT: v_and_b32_e32 v4, 1, v6 +; GCN-IR-NEXT: v_and_b32_e32 v6, 0x8000, v6 +; GCN-IR-NEXT: v_sub_i32_e32 v0, vcc, v0, v6 +; GCN-IR-NEXT: v_subbrev_u32_e32 v1, vcc, 0, v1, vcc +; GCN-IR-NEXT: v_add_i32_e32 v8, vcc, 1, v8 +; GCN-IR-NEXT: v_or_b32_e32 v3, v7, v3 +; GCN-IR-NEXT: v_addc_u32_e32 v9, vcc, 0, v9, vcc +; GCN-IR-NEXT: v_mov_b32_e32 v7, v5 +; GCN-IR-NEXT: s_or_b64 s[8:9], vcc, s[8:9] +; GCN-IR-NEXT: v_mov_b32_e32 v6, v4 +; GCN-IR-NEXT: s_andn2_b64 exec, exec, s[8:9] ; GCN-IR-NEXT: s_cbranch_execnz .LBB10_3 ; GCN-IR-NEXT: ; %bb.4: ; %Flow -; GCN-IR-NEXT: s_or_b64 exec, exec, s[10:11] -; GCN-IR-NEXT: .LBB10_5: ; %Flow4 ; GCN-IR-NEXT: s_or_b64 exec, exec, s[8:9] +; GCN-IR-NEXT: .LBB10_5: ; %Flow4 +; GCN-IR-NEXT: s_or_b64 exec, exec, s[4:5] ; GCN-IR-NEXT: v_lshl_b64 v[0:1], v[2:3], 1 ; GCN-IR-NEXT: v_or_b32_e32 v2, v5, v1 ; GCN-IR-NEXT: v_or_b32_e32 v3, v4, v0 @@ -1290,52 +1299,58 @@ define amdgpu_kernel void @s_test_udiv_k_den_i64(ptr addrspace(1) %out, i64 %x) ; GCN-IR: ; %bb.0: ; %_udiv-special-cases ; GCN-IR-NEXT: s_load_dwordx4 s[0:3], s[4:5], 0x9 ; GCN-IR-NEXT: s_waitcnt lgkmcnt(0) -; GCN-IR-NEXT: s_flbit_i32_b64 s12, s[2:3] -; GCN-IR-NEXT: s_sub_u32 s8, 59, s12 +; GCN-IR-NEXT: s_flbit_i32_b64 s10, s[2:3] +; GCN-IR-NEXT: s_sub_u32 s8, 59, s10 ; GCN-IR-NEXT: s_subb_u32 s9, 0, 0 ; GCN-IR-NEXT: v_cmp_eq_u64_e64 s[4:5], s[2:3], 0 ; GCN-IR-NEXT: v_cmp_gt_u64_e64 s[6:7], s[8:9], 63 -; GCN-IR-NEXT: v_cmp_eq_u64_e64 s[10:11], s[8:9], 63 +; GCN-IR-NEXT: v_cmp_eq_u64_e64 s[12:13], s[8:9], 63 ; GCN-IR-NEXT: s_or_b64 s[4:5], s[4:5], s[6:7] ; GCN-IR-NEXT: s_and_b64 s[6:7], s[4:5], exec ; GCN-IR-NEXT: s_cselect_b32 s7, 0, s3 ; GCN-IR-NEXT: s_cselect_b32 s6, 0, s2 -; GCN-IR-NEXT: s_or_b64 s[4:5], s[4:5], s[10:11] +; GCN-IR-NEXT: s_or_b64 s[4:5], s[4:5], s[12:13] ; GCN-IR-NEXT: s_andn2_b64 vcc, exec, s[4:5] ; GCN-IR-NEXT: s_mov_b64 s[4:5], 0 ; GCN-IR-NEXT: s_cbranch_vccz .LBB11_5 ; GCN-IR-NEXT: ; %bb.1: ; %udiv-bb1 -; GCN-IR-NEXT: s_add_u32 s10, s8, 1 -; GCN-IR-NEXT: s_addc_u32 s11, s9, 0 -; GCN-IR-NEXT: v_cmp_eq_u64_e64 s[6:7], s[10:11], 0 +; GCN-IR-NEXT: s_add_u32 s11, s8, 1 +; GCN-IR-NEXT: s_cselect_b64 s[6:7], -1, 0 +; GCN-IR-NEXT: s_or_b32 s6, s6, s7 +; GCN-IR-NEXT: s_cmp_lg_u32 s6, 0 +; GCN-IR-NEXT: s_addc_u32 s6, s9, 0 +; GCN-IR-NEXT: s_cselect_b64 s[6:7], -1, 0 ; GCN-IR-NEXT: s_sub_i32 s8, 63, s8 ; GCN-IR-NEXT: 
s_andn2_b64 vcc, exec, s[6:7] ; GCN-IR-NEXT: s_lshl_b64 s[6:7], s[2:3], s8 ; GCN-IR-NEXT: s_cbranch_vccz .LBB11_4 ; GCN-IR-NEXT: ; %bb.2: ; %udiv-preheader -; GCN-IR-NEXT: s_lshr_b64 s[8:9], s[2:3], s10 -; GCN-IR-NEXT: s_add_u32 s2, s12, 0xffffffc4 -; GCN-IR-NEXT: s_addc_u32 s3, 0, -1 -; GCN-IR-NEXT: s_mov_b64 s[10:11], 0 +; GCN-IR-NEXT: s_lshr_b64 s[2:3], s[2:3], s11 +; GCN-IR-NEXT: s_add_u32 s10, s10, 0xffffffc4 +; GCN-IR-NEXT: s_addc_u32 s11, 0, -1 +; GCN-IR-NEXT: s_mov_b64 s[8:9], 0 ; GCN-IR-NEXT: s_mov_b32 s5, 0 ; GCN-IR-NEXT: .LBB11_3: ; %udiv-do-while ; GCN-IR-NEXT: ; =>This Inner Loop Header: Depth=1 -; GCN-IR-NEXT: s_lshl_b64 s[8:9], s[8:9], 1 +; GCN-IR-NEXT: s_lshl_b64 s[2:3], s[2:3], 1 ; GCN-IR-NEXT: s_lshr_b32 s4, s7, 31 ; GCN-IR-NEXT: s_lshl_b64 s[6:7], s[6:7], 1 -; GCN-IR-NEXT: s_or_b64 s[8:9], s[8:9], s[4:5] -; GCN-IR-NEXT: s_or_b64 s[6:7], s[10:11], s[6:7] -; GCN-IR-NEXT: s_sub_u32 s4, 23, s8 -; GCN-IR-NEXT: s_subb_u32 s4, 0, s9 -; GCN-IR-NEXT: s_ashr_i32 s10, s4, 31 -; GCN-IR-NEXT: s_and_b32 s4, s10, 1 -; GCN-IR-NEXT: s_and_b32 s10, s10, 24 -; GCN-IR-NEXT: s_sub_u32 s8, s8, s10 -; GCN-IR-NEXT: s_subb_u32 s9, s9, 0 -; GCN-IR-NEXT: s_add_u32 s2, s2, 1 -; GCN-IR-NEXT: s_addc_u32 s3, s3, 0 -; GCN-IR-NEXT: v_cmp_eq_u64_e64 s[12:13], s[2:3], 0 -; GCN-IR-NEXT: s_mov_b64 s[10:11], s[4:5] +; GCN-IR-NEXT: s_or_b64 s[2:3], s[2:3], s[4:5] +; GCN-IR-NEXT: s_or_b64 s[6:7], s[8:9], s[6:7] +; GCN-IR-NEXT: s_sub_u32 s4, 23, s2 +; GCN-IR-NEXT: s_subb_u32 s4, 0, s3 +; GCN-IR-NEXT: s_ashr_i32 s8, s4, 31 +; GCN-IR-NEXT: s_and_b32 s4, s8, 1 +; GCN-IR-NEXT: s_and_b32 s8, s8, 24 +; GCN-IR-NEXT: s_sub_u32 s2, s2, s8 +; GCN-IR-NEXT: s_subb_u32 s3, s3, 0 +; GCN-IR-NEXT: s_add_u32 s10, s10, 1 +; GCN-IR-NEXT: s_cselect_b64 s[12:13], -1, 0 +; GCN-IR-NEXT: s_or_b32 s12, s12, s13 +; GCN-IR-NEXT: s_cmp_lg_u32 s12, 0 +; GCN-IR-NEXT: s_addc_u32 s11, s11, 0 +; GCN-IR-NEXT: s_cselect_b64 s[12:13], -1, 0 +; GCN-IR-NEXT: s_mov_b64 s[8:9], s[4:5] ; GCN-IR-NEXT: s_and_b64 vcc, exec, s[12:13] ; GCN-IR-NEXT: s_cbranch_vccz .LBB11_3 ; GCN-IR-NEXT: .LBB11_4: ; %Flow6 @@ -1384,13 +1399,13 @@ define i64 @v_test_udiv_k_den_i64(i64 %x) { ; GCN-IR-NEXT: v_ffbh_u32_e32 v2, v0 ; GCN-IR-NEXT: v_add_i32_e64 v2, s[4:5], 32, v2 ; GCN-IR-NEXT: v_ffbh_u32_e32 v3, v1 -; GCN-IR-NEXT: v_min_u32_e32 v10, v2, v3 -; GCN-IR-NEXT: v_sub_i32_e64 v6, s[4:5], 59, v10 -; GCN-IR-NEXT: v_subb_u32_e64 v7, s[4:5], 0, 0, s[4:5] +; GCN-IR-NEXT: v_min_u32_e32 v6, v2, v3 +; GCN-IR-NEXT: v_sub_i32_e64 v4, s[4:5], 59, v6 +; GCN-IR-NEXT: v_subb_u32_e64 v5, s[4:5], 0, 0, s[4:5] ; GCN-IR-NEXT: v_cmp_eq_u64_e32 vcc, 0, v[0:1] -; GCN-IR-NEXT: v_cmp_lt_u64_e64 s[4:5], 63, v[6:7] +; GCN-IR-NEXT: v_cmp_lt_u64_e64 s[4:5], 63, v[4:5] ; GCN-IR-NEXT: s_or_b64 s[4:5], vcc, s[4:5] -; GCN-IR-NEXT: v_cmp_ne_u64_e32 vcc, 63, v[6:7] +; GCN-IR-NEXT: v_cmp_ne_u64_e32 vcc, 63, v[4:5] ; GCN-IR-NEXT: s_xor_b64 s[6:7], s[4:5], -1 ; GCN-IR-NEXT: v_cndmask_b32_e64 v2, v1, 0, s[4:5] ; GCN-IR-NEXT: v_cndmask_b32_e64 v3, v0, 0, s[4:5] @@ -1398,51 +1413,50 @@ define i64 @v_test_udiv_k_den_i64(i64 %x) { ; GCN-IR-NEXT: s_and_saveexec_b64 s[6:7], s[4:5] ; GCN-IR-NEXT: s_cbranch_execz .LBB12_6 ; GCN-IR-NEXT: ; %bb.1: ; %udiv-bb1 -; GCN-IR-NEXT: v_add_i32_e32 v8, vcc, 1, v6 -; GCN-IR-NEXT: v_addc_u32_e32 v9, vcc, 0, v7, vcc -; GCN-IR-NEXT: v_sub_i32_e64 v2, s[4:5], 63, v6 -; GCN-IR-NEXT: v_mov_b32_e32 v4, 0 -; GCN-IR-NEXT: v_cmp_ne_u64_e32 vcc, 0, v[8:9] +; GCN-IR-NEXT: v_add_i32_e32 v7, vcc, 1, v4 +; GCN-IR-NEXT: v_addc_u32_e32 v2, vcc, 0, v5, vcc +; GCN-IR-NEXT: v_sub_i32_e64 v2, 
s[4:5], 63, v4 ; GCN-IR-NEXT: v_lshl_b64 v[2:3], v[0:1], v2 +; GCN-IR-NEXT: v_mov_b32_e32 v4, 0 ; GCN-IR-NEXT: v_mov_b32_e32 v5, 0 -; GCN-IR-NEXT: s_and_saveexec_b64 s[4:5], vcc -; GCN-IR-NEXT: s_xor_b64 s[8:9], exec, s[4:5] +; GCN-IR-NEXT: s_xor_b64 s[4:5], vcc, -1 +; GCN-IR-NEXT: s_and_saveexec_b64 s[8:9], s[4:5] +; GCN-IR-NEXT: s_xor_b64 s[4:5], exec, s[8:9] ; GCN-IR-NEXT: s_cbranch_execz .LBB12_5 ; GCN-IR-NEXT: ; %bb.2: ; %udiv-preheader -; GCN-IR-NEXT: v_lshr_b64 v[6:7], v[0:1], v8 -; GCN-IR-NEXT: v_add_i32_e32 v0, vcc, 0xffffffc4, v10 -; GCN-IR-NEXT: v_mov_b32_e32 v8, 0 -; GCN-IR-NEXT: v_addc_u32_e64 v1, s[4:5], 0, -1, vcc -; GCN-IR-NEXT: s_mov_b64 s[10:11], 0 -; GCN-IR-NEXT: v_mov_b32_e32 v9, 0 +; GCN-IR-NEXT: v_add_i32_e32 v8, vcc, 0xffffffc4, v6 +; GCN-IR-NEXT: v_lshr_b64 v[0:1], v[0:1], v7 +; GCN-IR-NEXT: v_addc_u32_e64 v9, s[8:9], 0, -1, vcc +; GCN-IR-NEXT: v_mov_b32_e32 v6, 0 +; GCN-IR-NEXT: s_mov_b64 s[8:9], 0 +; GCN-IR-NEXT: v_mov_b32_e32 v7, 0 ; GCN-IR-NEXT: v_mov_b32_e32 v5, 0 ; GCN-IR-NEXT: .LBB12_3: ; %udiv-do-while ; GCN-IR-NEXT: ; =>This Inner Loop Header: Depth=1 -; GCN-IR-NEXT: v_lshl_b64 v[6:7], v[6:7], 1 +; GCN-IR-NEXT: v_lshl_b64 v[0:1], v[0:1], 1 ; GCN-IR-NEXT: v_lshrrev_b32_e32 v4, 31, v3 -; GCN-IR-NEXT: v_or_b32_e32 v6, v6, v4 -; GCN-IR-NEXT: v_sub_i32_e32 v4, vcc, 23, v6 +; GCN-IR-NEXT: v_or_b32_e32 v0, v0, v4 ; GCN-IR-NEXT: v_lshl_b64 v[2:3], v[2:3], 1 -; GCN-IR-NEXT: v_subb_u32_e32 v4, vcc, 0, v7, vcc -; GCN-IR-NEXT: v_add_i32_e32 v0, vcc, 1, v0 -; GCN-IR-NEXT: v_or_b32_e32 v2, v8, v2 -; GCN-IR-NEXT: v_ashrrev_i32_e32 v8, 31, v4 -; GCN-IR-NEXT: v_addc_u32_e32 v1, vcc, 0, v1, vcc -; GCN-IR-NEXT: v_and_b32_e32 v4, 1, v8 -; GCN-IR-NEXT: v_and_b32_e32 v8, 24, v8 -; GCN-IR-NEXT: v_cmp_eq_u64_e32 vcc, 0, v[0:1] -; GCN-IR-NEXT: v_or_b32_e32 v3, v9, v3 -; GCN-IR-NEXT: v_sub_i32_e64 v6, s[4:5], v6, v8 -; GCN-IR-NEXT: v_mov_b32_e32 v9, v5 -; GCN-IR-NEXT: v_subbrev_u32_e64 v7, s[4:5], 0, v7, s[4:5] -; GCN-IR-NEXT: s_or_b64 s[10:11], vcc, s[10:11] -; GCN-IR-NEXT: v_mov_b32_e32 v8, v4 -; GCN-IR-NEXT: s_andn2_b64 exec, exec, s[10:11] +; GCN-IR-NEXT: v_sub_i32_e32 v4, vcc, 23, v0 +; GCN-IR-NEXT: v_subb_u32_e32 v4, vcc, 0, v1, vcc +; GCN-IR-NEXT: v_or_b32_e32 v2, v6, v2 +; GCN-IR-NEXT: v_ashrrev_i32_e32 v6, 31, v4 +; GCN-IR-NEXT: v_and_b32_e32 v4, 1, v6 +; GCN-IR-NEXT: v_and_b32_e32 v6, 24, v6 +; GCN-IR-NEXT: v_sub_i32_e32 v0, vcc, v0, v6 +; GCN-IR-NEXT: v_subbrev_u32_e32 v1, vcc, 0, v1, vcc +; GCN-IR-NEXT: v_add_i32_e32 v8, vcc, 1, v8 +; GCN-IR-NEXT: v_or_b32_e32 v3, v7, v3 +; GCN-IR-NEXT: v_addc_u32_e32 v9, vcc, 0, v9, vcc +; GCN-IR-NEXT: v_mov_b32_e32 v7, v5 +; GCN-IR-NEXT: s_or_b64 s[8:9], vcc, s[8:9] +; GCN-IR-NEXT: v_mov_b32_e32 v6, v4 +; GCN-IR-NEXT: s_andn2_b64 exec, exec, s[8:9] ; GCN-IR-NEXT: s_cbranch_execnz .LBB12_3 ; GCN-IR-NEXT: ; %bb.4: ; %Flow -; GCN-IR-NEXT: s_or_b64 exec, exec, s[10:11] -; GCN-IR-NEXT: .LBB12_5: ; %Flow4 ; GCN-IR-NEXT: s_or_b64 exec, exec, s[8:9] +; GCN-IR-NEXT: .LBB12_5: ; %Flow4 +; GCN-IR-NEXT: s_or_b64 exec, exec, s[4:5] ; GCN-IR-NEXT: v_lshl_b64 v[0:1], v[2:3], 1 ; GCN-IR-NEXT: v_or_b32_e32 v2, v5, v1 ; GCN-IR-NEXT: v_or_b32_e32 v3, v4, v0 diff --git a/llvm/test/CodeGen/AMDGPU/urem64.ll b/llvm/test/CodeGen/AMDGPU/urem64.ll index b846ce7..cdcc914 100644 --- a/llvm/test/CodeGen/AMDGPU/urem64.ll +++ b/llvm/test/CodeGen/AMDGPU/urem64.ll @@ -170,35 +170,38 @@ define amdgpu_kernel void @s_test_urem_i64(ptr addrspace(1) %out, i64 %x, i64 %y ; GCN-IR-NEXT: v_cmp_eq_u64_e64 s[8:9], s[6:7], 0 ; GCN-IR-NEXT: v_cmp_eq_u64_e64 s[12:13], s[2:3], 0 ; 
GCN-IR-NEXT: s_flbit_i32_b64 s10, s[6:7] -; GCN-IR-NEXT: s_flbit_i32_b64 s18, s[2:3] +; GCN-IR-NEXT: s_flbit_i32_b64 s16, s[2:3] ; GCN-IR-NEXT: s_or_b64 s[8:9], s[8:9], s[12:13] -; GCN-IR-NEXT: s_sub_u32 s12, s10, s18 +; GCN-IR-NEXT: s_sub_u32 s12, s10, s16 ; GCN-IR-NEXT: s_subb_u32 s13, 0, 0 ; GCN-IR-NEXT: v_cmp_gt_u64_e64 s[14:15], s[12:13], 63 -; GCN-IR-NEXT: v_cmp_eq_u64_e64 s[16:17], s[12:13], 63 +; GCN-IR-NEXT: v_cmp_eq_u64_e64 s[18:19], s[12:13], 63 ; GCN-IR-NEXT: s_or_b64 s[14:15], s[8:9], s[14:15] ; GCN-IR-NEXT: s_and_b64 s[8:9], s[14:15], exec ; GCN-IR-NEXT: s_cselect_b32 s9, 0, s3 ; GCN-IR-NEXT: s_cselect_b32 s8, 0, s2 -; GCN-IR-NEXT: s_or_b64 s[14:15], s[14:15], s[16:17] +; GCN-IR-NEXT: s_or_b64 s[14:15], s[14:15], s[18:19] ; GCN-IR-NEXT: s_andn2_b64 vcc, exec, s[14:15] ; GCN-IR-NEXT: s_cbranch_vccz .LBB0_5 ; GCN-IR-NEXT: ; %bb.1: ; %udiv-bb1 ; GCN-IR-NEXT: s_add_u32 s14, s12, 1 -; GCN-IR-NEXT: s_addc_u32 s15, s13, 0 -; GCN-IR-NEXT: v_cmp_eq_u64_e64 s[8:9], s[14:15], 0 +; GCN-IR-NEXT: s_cselect_b64 s[8:9], -1, 0 +; GCN-IR-NEXT: s_or_b32 s8, s8, s9 +; GCN-IR-NEXT: s_cmp_lg_u32 s8, 0 +; GCN-IR-NEXT: s_addc_u32 s8, s13, 0 +; GCN-IR-NEXT: s_cselect_b64 s[8:9], -1, 0 ; GCN-IR-NEXT: s_sub_i32 s12, 63, s12 ; GCN-IR-NEXT: s_andn2_b64 vcc, exec, s[8:9] ; GCN-IR-NEXT: s_lshl_b64 s[8:9], s[2:3], s12 ; GCN-IR-NEXT: s_cbranch_vccz .LBB0_4 ; GCN-IR-NEXT: ; %bb.2: ; %udiv-preheader ; GCN-IR-NEXT: s_lshr_b64 s[12:13], s[2:3], s14 -; GCN-IR-NEXT: s_add_u32 s16, s6, -1 -; GCN-IR-NEXT: s_addc_u32 s17, s7, -1 +; GCN-IR-NEXT: s_add_u32 s14, s6, -1 +; GCN-IR-NEXT: s_addc_u32 s15, s7, -1 ; GCN-IR-NEXT: s_not_b64 s[4:5], s[10:11] -; GCN-IR-NEXT: s_add_u32 s10, s4, s18 -; GCN-IR-NEXT: s_addc_u32 s11, s5, 0 -; GCN-IR-NEXT: s_mov_b64 s[14:15], 0 +; GCN-IR-NEXT: s_add_u32 s16, s4, s16 +; GCN-IR-NEXT: s_addc_u32 s17, s5, 0 +; GCN-IR-NEXT: s_mov_b64 s[10:11], 0 ; GCN-IR-NEXT: s_mov_b32 s5, 0 ; GCN-IR-NEXT: .LBB0_3: ; %udiv-do-while ; GCN-IR-NEXT: ; =>This Inner Loop Header: Depth=1 @@ -206,19 +209,22 @@ define amdgpu_kernel void @s_test_urem_i64(ptr addrspace(1) %out, i64 %x, i64 %y ; GCN-IR-NEXT: s_lshr_b32 s4, s9, 31 ; GCN-IR-NEXT: s_lshl_b64 s[8:9], s[8:9], 1 ; GCN-IR-NEXT: s_or_b64 s[12:13], s[12:13], s[4:5] -; GCN-IR-NEXT: s_or_b64 s[8:9], s[14:15], s[8:9] -; GCN-IR-NEXT: s_sub_u32 s4, s16, s12 -; GCN-IR-NEXT: s_subb_u32 s4, s17, s13 -; GCN-IR-NEXT: s_ashr_i32 s14, s4, 31 -; GCN-IR-NEXT: s_mov_b32 s15, s14 -; GCN-IR-NEXT: s_and_b32 s4, s14, 1 -; GCN-IR-NEXT: s_and_b64 s[14:15], s[14:15], s[6:7] -; GCN-IR-NEXT: s_sub_u32 s12, s12, s14 -; GCN-IR-NEXT: s_subb_u32 s13, s13, s15 -; GCN-IR-NEXT: s_add_u32 s10, s10, 1 -; GCN-IR-NEXT: s_addc_u32 s11, s11, 0 -; GCN-IR-NEXT: v_cmp_eq_u64_e64 s[18:19], s[10:11], 0 -; GCN-IR-NEXT: s_mov_b64 s[14:15], s[4:5] +; GCN-IR-NEXT: s_or_b64 s[8:9], s[10:11], s[8:9] +; GCN-IR-NEXT: s_sub_u32 s4, s14, s12 +; GCN-IR-NEXT: s_subb_u32 s4, s15, s13 +; GCN-IR-NEXT: s_ashr_i32 s10, s4, 31 +; GCN-IR-NEXT: s_mov_b32 s11, s10 +; GCN-IR-NEXT: s_and_b32 s4, s10, 1 +; GCN-IR-NEXT: s_and_b64 s[18:19], s[10:11], s[6:7] +; GCN-IR-NEXT: s_sub_u32 s12, s12, s18 +; GCN-IR-NEXT: s_subb_u32 s13, s13, s19 +; GCN-IR-NEXT: s_add_u32 s16, s16, 1 +; GCN-IR-NEXT: s_cselect_b64 s[18:19], -1, 0 +; GCN-IR-NEXT: s_or_b32 s18, s18, s19 +; GCN-IR-NEXT: s_cmp_lg_u32 s18, 0 +; GCN-IR-NEXT: s_addc_u32 s17, s17, 0 +; GCN-IR-NEXT: s_cselect_b64 s[18:19], -1, 0 +; GCN-IR-NEXT: s_mov_b64 s[10:11], s[4:5] ; GCN-IR-NEXT: s_and_b64 vcc, exec, s[18:19] ; GCN-IR-NEXT: s_cbranch_vccz .LBB0_3 ; GCN-IR-NEXT: .LBB0_4: ; 
%Flow7 @@ -362,12 +368,12 @@ define i64 @v_test_urem_i64(i64 %x, i64 %y) { ; GCN-IR-NEXT: v_ffbh_u32_e32 v4, v2 ; GCN-IR-NEXT: v_add_i32_e64 v4, s[6:7], 32, v4 ; GCN-IR-NEXT: v_ffbh_u32_e32 v5, v3 -; GCN-IR-NEXT: v_min_u32_e32 v12, v4, v5 +; GCN-IR-NEXT: v_min_u32_e32 v10, v4, v5 ; GCN-IR-NEXT: v_ffbh_u32_e32 v4, v0 ; GCN-IR-NEXT: v_add_i32_e64 v4, s[6:7], 32, v4 ; GCN-IR-NEXT: v_ffbh_u32_e32 v5, v1 -; GCN-IR-NEXT: v_min_u32_e32 v13, v4, v5 -; GCN-IR-NEXT: v_sub_i32_e64 v4, s[6:7], v12, v13 +; GCN-IR-NEXT: v_min_u32_e32 v11, v4, v5 +; GCN-IR-NEXT: v_sub_i32_e64 v4, s[6:7], v10, v11 ; GCN-IR-NEXT: v_cmp_eq_u64_e32 vcc, 0, v[2:3] ; GCN-IR-NEXT: v_cmp_eq_u64_e64 s[4:5], 0, v[0:1] ; GCN-IR-NEXT: v_subb_u32_e64 v5, s[6:7], 0, 0, s[6:7] @@ -383,54 +389,53 @@ define i64 @v_test_urem_i64(i64 %x, i64 %y) { ; GCN-IR-NEXT: s_cbranch_execz .LBB1_6 ; GCN-IR-NEXT: ; %bb.1: ; %udiv-bb1 ; GCN-IR-NEXT: v_add_i32_e32 v8, vcc, 1, v4 -; GCN-IR-NEXT: v_addc_u32_e32 v9, vcc, 0, v5, vcc +; GCN-IR-NEXT: v_addc_u32_e32 v5, vcc, 0, v5, vcc ; GCN-IR-NEXT: v_sub_i32_e64 v4, s[4:5], 63, v4 -; GCN-IR-NEXT: v_mov_b32_e32 v6, 0 -; GCN-IR-NEXT: v_cmp_ne_u64_e32 vcc, 0, v[8:9] ; GCN-IR-NEXT: v_lshl_b64 v[4:5], v[0:1], v4 +; GCN-IR-NEXT: v_mov_b32_e32 v6, 0 ; GCN-IR-NEXT: v_mov_b32_e32 v7, 0 -; GCN-IR-NEXT: s_and_saveexec_b64 s[4:5], vcc -; GCN-IR-NEXT: s_xor_b64 s[8:9], exec, s[4:5] +; GCN-IR-NEXT: s_xor_b64 s[4:5], vcc, -1 +; GCN-IR-NEXT: s_and_saveexec_b64 s[8:9], s[4:5] +; GCN-IR-NEXT: s_xor_b64 s[4:5], exec, s[8:9] ; GCN-IR-NEXT: s_cbranch_execz .LBB1_5 ; GCN-IR-NEXT: ; %bb.2: ; %udiv-preheader -; GCN-IR-NEXT: v_add_i32_e32 v14, vcc, -1, v2 -; GCN-IR-NEXT: v_addc_u32_e32 v15, vcc, -1, v3, vcc -; GCN-IR-NEXT: v_not_b32_e32 v6, v12 -; GCN-IR-NEXT: v_lshr_b64 v[10:11], v[0:1], v8 -; GCN-IR-NEXT: v_add_i32_e32 v8, vcc, v6, v13 -; GCN-IR-NEXT: v_mov_b32_e32 v12, 0 -; GCN-IR-NEXT: v_addc_u32_e64 v9, s[4:5], -1, 0, vcc -; GCN-IR-NEXT: s_mov_b64 s[10:11], 0 -; GCN-IR-NEXT: v_mov_b32_e32 v13, 0 +; GCN-IR-NEXT: v_add_i32_e32 v12, vcc, -1, v2 +; GCN-IR-NEXT: v_addc_u32_e32 v13, vcc, -1, v3, vcc +; GCN-IR-NEXT: v_not_b32_e32 v6, v10 +; GCN-IR-NEXT: v_add_i32_e32 v14, vcc, v6, v11 +; GCN-IR-NEXT: v_lshr_b64 v[8:9], v[0:1], v8 +; GCN-IR-NEXT: v_addc_u32_e64 v15, s[8:9], -1, 0, vcc +; GCN-IR-NEXT: v_mov_b32_e32 v10, 0 +; GCN-IR-NEXT: s_mov_b64 s[8:9], 0 +; GCN-IR-NEXT: v_mov_b32_e32 v11, 0 ; GCN-IR-NEXT: v_mov_b32_e32 v7, 0 ; GCN-IR-NEXT: .LBB1_3: ; %udiv-do-while ; GCN-IR-NEXT: ; =>This Inner Loop Header: Depth=1 -; GCN-IR-NEXT: v_lshl_b64 v[10:11], v[10:11], 1 +; GCN-IR-NEXT: v_lshl_b64 v[8:9], v[8:9], 1 ; GCN-IR-NEXT: v_lshrrev_b32_e32 v6, 31, v5 -; GCN-IR-NEXT: v_or_b32_e32 v10, v10, v6 +; GCN-IR-NEXT: v_or_b32_e32 v8, v8, v6 ; GCN-IR-NEXT: v_lshl_b64 v[4:5], v[4:5], 1 -; GCN-IR-NEXT: v_sub_i32_e32 v6, vcc, v14, v10 -; GCN-IR-NEXT: v_subb_u32_e32 v6, vcc, v15, v11, vcc -; GCN-IR-NEXT: v_or_b32_e32 v4, v12, v4 -; GCN-IR-NEXT: v_ashrrev_i32_e32 v12, 31, v6 -; GCN-IR-NEXT: v_add_i32_e32 v8, vcc, 1, v8 -; GCN-IR-NEXT: v_or_b32_e32 v5, v13, v5 -; GCN-IR-NEXT: v_and_b32_e32 v6, 1, v12 -; GCN-IR-NEXT: v_and_b32_e32 v13, v12, v3 -; GCN-IR-NEXT: v_and_b32_e32 v12, v12, v2 -; GCN-IR-NEXT: v_addc_u32_e32 v9, vcc, 0, v9, vcc -; GCN-IR-NEXT: v_cmp_eq_u64_e32 vcc, 0, v[8:9] -; GCN-IR-NEXT: v_sub_i32_e64 v10, s[4:5], v10, v12 -; GCN-IR-NEXT: v_subb_u32_e64 v11, s[4:5], v11, v13, s[4:5] -; GCN-IR-NEXT: v_mov_b32_e32 v13, v7 -; GCN-IR-NEXT: s_or_b64 s[10:11], vcc, s[10:11] -; GCN-IR-NEXT: v_mov_b32_e32 v12, v6 -; GCN-IR-NEXT: s_andn2_b64 exec, 
exec, s[10:11] +; GCN-IR-NEXT: v_sub_i32_e32 v6, vcc, v12, v8 +; GCN-IR-NEXT: v_subb_u32_e32 v6, vcc, v13, v9, vcc +; GCN-IR-NEXT: v_or_b32_e32 v4, v10, v4 +; GCN-IR-NEXT: v_ashrrev_i32_e32 v10, 31, v6 +; GCN-IR-NEXT: v_or_b32_e32 v5, v11, v5 +; GCN-IR-NEXT: v_and_b32_e32 v6, 1, v10 +; GCN-IR-NEXT: v_and_b32_e32 v11, v10, v3 +; GCN-IR-NEXT: v_and_b32_e32 v10, v10, v2 +; GCN-IR-NEXT: v_sub_i32_e32 v8, vcc, v8, v10 +; GCN-IR-NEXT: v_subb_u32_e32 v9, vcc, v9, v11, vcc +; GCN-IR-NEXT: v_add_i32_e32 v14, vcc, 1, v14 +; GCN-IR-NEXT: v_addc_u32_e32 v15, vcc, 0, v15, vcc +; GCN-IR-NEXT: v_mov_b32_e32 v11, v7 +; GCN-IR-NEXT: s_or_b64 s[8:9], vcc, s[8:9] +; GCN-IR-NEXT: v_mov_b32_e32 v10, v6 +; GCN-IR-NEXT: s_andn2_b64 exec, exec, s[8:9] ; GCN-IR-NEXT: s_cbranch_execnz .LBB1_3 ; GCN-IR-NEXT: ; %bb.4: ; %Flow -; GCN-IR-NEXT: s_or_b64 exec, exec, s[10:11] -; GCN-IR-NEXT: .LBB1_5: ; %Flow4 ; GCN-IR-NEXT: s_or_b64 exec, exec, s[8:9] +; GCN-IR-NEXT: .LBB1_5: ; %Flow4 +; GCN-IR-NEXT: s_or_b64 exec, exec, s[4:5] ; GCN-IR-NEXT: v_lshl_b64 v[4:5], v[4:5], 1 ; GCN-IR-NEXT: v_or_b32_e32 v7, v7, v5 ; GCN-IR-NEXT: v_or_b32_e32 v6, v6, v4 @@ -948,34 +953,37 @@ define amdgpu_kernel void @s_test_urem_k_num_i64(ptr addrspace(1) %out, i64 %x) ; GCN-IR-NEXT: s_load_dwordx4 s[0:3], s[4:5], 0x9 ; GCN-IR-NEXT: s_mov_b64 s[4:5], 0 ; GCN-IR-NEXT: s_waitcnt lgkmcnt(0) -; GCN-IR-NEXT: s_flbit_i32_b64 s12, s[2:3] -; GCN-IR-NEXT: s_add_u32 s8, s12, 0xffffffc5 +; GCN-IR-NEXT: s_flbit_i32_b64 s14, s[2:3] +; GCN-IR-NEXT: s_add_u32 s8, s14, 0xffffffc5 ; GCN-IR-NEXT: s_addc_u32 s9, 0, -1 ; GCN-IR-NEXT: v_cmp_eq_u64_e64 s[6:7], s[2:3], 0 ; GCN-IR-NEXT: v_cmp_gt_u64_e64 s[10:11], s[8:9], 63 -; GCN-IR-NEXT: v_cmp_eq_u64_e64 s[14:15], s[8:9], 63 +; GCN-IR-NEXT: v_cmp_eq_u64_e64 s[12:13], s[8:9], 63 ; GCN-IR-NEXT: s_or_b64 s[10:11], s[6:7], s[10:11] ; GCN-IR-NEXT: s_and_b64 s[6:7], s[10:11], exec ; GCN-IR-NEXT: s_cselect_b32 s6, 0, 24 -; GCN-IR-NEXT: s_or_b64 s[10:11], s[10:11], s[14:15] +; GCN-IR-NEXT: s_or_b64 s[10:11], s[10:11], s[12:13] ; GCN-IR-NEXT: s_andn2_b64 vcc, exec, s[10:11] ; GCN-IR-NEXT: s_mov_b32 s7, 0 ; GCN-IR-NEXT: s_cbranch_vccz .LBB6_5 ; GCN-IR-NEXT: ; %bb.1: ; %udiv-bb1 ; GCN-IR-NEXT: s_add_u32 s10, s8, 1 -; GCN-IR-NEXT: s_addc_u32 s11, s9, 0 -; GCN-IR-NEXT: v_cmp_eq_u64_e64 s[6:7], s[10:11], 0 +; GCN-IR-NEXT: s_cselect_b64 s[6:7], -1, 0 +; GCN-IR-NEXT: s_or_b32 s6, s6, s7 +; GCN-IR-NEXT: s_cmp_lg_u32 s6, 0 +; GCN-IR-NEXT: s_addc_u32 s6, s9, 0 +; GCN-IR-NEXT: s_cselect_b64 s[6:7], -1, 0 ; GCN-IR-NEXT: s_sub_i32 s8, 63, s8 ; GCN-IR-NEXT: s_andn2_b64 vcc, exec, s[6:7] ; GCN-IR-NEXT: s_lshl_b64 s[6:7], 24, s8 ; GCN-IR-NEXT: s_cbranch_vccz .LBB6_4 ; GCN-IR-NEXT: ; %bb.2: ; %udiv-preheader ; GCN-IR-NEXT: s_lshr_b64 s[10:11], 24, s10 -; GCN-IR-NEXT: s_add_u32 s14, s2, -1 -; GCN-IR-NEXT: s_addc_u32 s15, s3, -1 -; GCN-IR-NEXT: s_sub_u32 s8, 58, s12 -; GCN-IR-NEXT: s_subb_u32 s9, 0, 0 -; GCN-IR-NEXT: s_mov_b64 s[12:13], 0 +; GCN-IR-NEXT: s_add_u32 s12, s2, -1 +; GCN-IR-NEXT: s_addc_u32 s13, s3, -1 +; GCN-IR-NEXT: s_sub_u32 s14, 58, s14 +; GCN-IR-NEXT: s_subb_u32 s15, 0, 0 +; GCN-IR-NEXT: s_mov_b64 s[8:9], 0 ; GCN-IR-NEXT: s_mov_b32 s5, 0 ; GCN-IR-NEXT: .LBB6_3: ; %udiv-do-while ; GCN-IR-NEXT: ; =>This Inner Loop Header: Depth=1 @@ -983,19 +991,22 @@ define amdgpu_kernel void @s_test_urem_k_num_i64(ptr addrspace(1) %out, i64 %x) ; GCN-IR-NEXT: s_lshr_b32 s4, s7, 31 ; GCN-IR-NEXT: s_lshl_b64 s[6:7], s[6:7], 1 ; GCN-IR-NEXT: s_or_b64 s[10:11], s[10:11], s[4:5] -; GCN-IR-NEXT: s_or_b64 s[6:7], s[12:13], s[6:7] -; GCN-IR-NEXT: 
s_sub_u32 s4, s14, s10 -; GCN-IR-NEXT: s_subb_u32 s4, s15, s11 -; GCN-IR-NEXT: s_ashr_i32 s12, s4, 31 -; GCN-IR-NEXT: s_mov_b32 s13, s12 -; GCN-IR-NEXT: s_and_b32 s4, s12, 1 -; GCN-IR-NEXT: s_and_b64 s[12:13], s[12:13], s[2:3] -; GCN-IR-NEXT: s_sub_u32 s10, s10, s12 -; GCN-IR-NEXT: s_subb_u32 s11, s11, s13 -; GCN-IR-NEXT: s_add_u32 s8, s8, 1 -; GCN-IR-NEXT: s_addc_u32 s9, s9, 0 -; GCN-IR-NEXT: v_cmp_eq_u64_e64 s[16:17], s[8:9], 0 -; GCN-IR-NEXT: s_mov_b64 s[12:13], s[4:5] +; GCN-IR-NEXT: s_or_b64 s[6:7], s[8:9], s[6:7] +; GCN-IR-NEXT: s_sub_u32 s4, s12, s10 +; GCN-IR-NEXT: s_subb_u32 s4, s13, s11 +; GCN-IR-NEXT: s_ashr_i32 s8, s4, 31 +; GCN-IR-NEXT: s_mov_b32 s9, s8 +; GCN-IR-NEXT: s_and_b32 s4, s8, 1 +; GCN-IR-NEXT: s_and_b64 s[16:17], s[8:9], s[2:3] +; GCN-IR-NEXT: s_sub_u32 s10, s10, s16 +; GCN-IR-NEXT: s_subb_u32 s11, s11, s17 +; GCN-IR-NEXT: s_add_u32 s14, s14, 1 +; GCN-IR-NEXT: s_cselect_b64 s[16:17], -1, 0 +; GCN-IR-NEXT: s_or_b32 s16, s16, s17 +; GCN-IR-NEXT: s_cmp_lg_u32 s16, 0 +; GCN-IR-NEXT: s_addc_u32 s15, s15, 0 +; GCN-IR-NEXT: s_cselect_b64 s[16:17], -1, 0 +; GCN-IR-NEXT: s_mov_b64 s[8:9], s[4:5] ; GCN-IR-NEXT: s_and_b64 vcc, exec, s[16:17] ; GCN-IR-NEXT: s_cbranch_vccz .LBB6_3 ; GCN-IR-NEXT: .LBB6_4: ; %Flow6 @@ -1064,52 +1075,58 @@ define amdgpu_kernel void @s_test_urem_k_den_i64(ptr addrspace(1) %out, i64 %x) ; GCN-IR: ; %bb.0: ; %_udiv-special-cases ; GCN-IR-NEXT: s_load_dwordx4 s[0:3], s[4:5], 0x9 ; GCN-IR-NEXT: s_waitcnt lgkmcnt(0) -; GCN-IR-NEXT: s_flbit_i32_b64 s12, s[2:3] -; GCN-IR-NEXT: s_sub_u32 s8, 59, s12 +; GCN-IR-NEXT: s_flbit_i32_b64 s10, s[2:3] +; GCN-IR-NEXT: s_sub_u32 s8, 59, s10 ; GCN-IR-NEXT: s_subb_u32 s9, 0, 0 ; GCN-IR-NEXT: v_cmp_eq_u64_e64 s[4:5], s[2:3], 0 ; GCN-IR-NEXT: v_cmp_gt_u64_e64 s[6:7], s[8:9], 63 -; GCN-IR-NEXT: v_cmp_eq_u64_e64 s[10:11], s[8:9], 63 +; GCN-IR-NEXT: v_cmp_eq_u64_e64 s[12:13], s[8:9], 63 ; GCN-IR-NEXT: s_or_b64 s[4:5], s[4:5], s[6:7] ; GCN-IR-NEXT: s_and_b64 s[6:7], s[4:5], exec ; GCN-IR-NEXT: s_cselect_b32 s7, 0, s3 ; GCN-IR-NEXT: s_cselect_b32 s6, 0, s2 -; GCN-IR-NEXT: s_or_b64 s[4:5], s[4:5], s[10:11] +; GCN-IR-NEXT: s_or_b64 s[4:5], s[4:5], s[12:13] ; GCN-IR-NEXT: s_andn2_b64 vcc, exec, s[4:5] ; GCN-IR-NEXT: s_mov_b64 s[4:5], 0 ; GCN-IR-NEXT: s_cbranch_vccz .LBB7_5 ; GCN-IR-NEXT: ; %bb.1: ; %udiv-bb1 -; GCN-IR-NEXT: s_add_u32 s10, s8, 1 -; GCN-IR-NEXT: s_addc_u32 s11, s9, 0 -; GCN-IR-NEXT: v_cmp_eq_u64_e64 s[6:7], s[10:11], 0 +; GCN-IR-NEXT: s_add_u32 s11, s8, 1 +; GCN-IR-NEXT: s_cselect_b64 s[6:7], -1, 0 +; GCN-IR-NEXT: s_or_b32 s6, s6, s7 +; GCN-IR-NEXT: s_cmp_lg_u32 s6, 0 +; GCN-IR-NEXT: s_addc_u32 s6, s9, 0 +; GCN-IR-NEXT: s_cselect_b64 s[6:7], -1, 0 ; GCN-IR-NEXT: s_sub_i32 s8, 63, s8 ; GCN-IR-NEXT: s_andn2_b64 vcc, exec, s[6:7] ; GCN-IR-NEXT: s_lshl_b64 s[6:7], s[2:3], s8 ; GCN-IR-NEXT: s_cbranch_vccz .LBB7_4 ; GCN-IR-NEXT: ; %bb.2: ; %udiv-preheader -; GCN-IR-NEXT: s_lshr_b64 s[10:11], s[2:3], s10 -; GCN-IR-NEXT: s_add_u32 s8, s12, 0xffffffc4 -; GCN-IR-NEXT: s_addc_u32 s9, 0, -1 -; GCN-IR-NEXT: s_mov_b64 s[12:13], 0 +; GCN-IR-NEXT: s_lshr_b64 s[8:9], s[2:3], s11 +; GCN-IR-NEXT: s_add_u32 s12, s10, 0xffffffc4 +; GCN-IR-NEXT: s_addc_u32 s13, 0, -1 +; GCN-IR-NEXT: s_mov_b64 s[10:11], 0 ; GCN-IR-NEXT: s_mov_b32 s5, 0 ; GCN-IR-NEXT: .LBB7_3: ; %udiv-do-while ; GCN-IR-NEXT: ; =>This Inner Loop Header: Depth=1 -; GCN-IR-NEXT: s_lshl_b64 s[10:11], s[10:11], 1 +; GCN-IR-NEXT: s_lshl_b64 s[8:9], s[8:9], 1 ; GCN-IR-NEXT: s_lshr_b32 s4, s7, 31 ; GCN-IR-NEXT: s_lshl_b64 s[6:7], s[6:7], 1 -; GCN-IR-NEXT: s_or_b64 s[10:11], 
s[10:11], s[4:5] -; GCN-IR-NEXT: s_or_b64 s[6:7], s[12:13], s[6:7] -; GCN-IR-NEXT: s_sub_u32 s4, 23, s10 -; GCN-IR-NEXT: s_subb_u32 s4, 0, s11 -; GCN-IR-NEXT: s_ashr_i32 s12, s4, 31 -; GCN-IR-NEXT: s_and_b32 s4, s12, 1 -; GCN-IR-NEXT: s_and_b32 s12, s12, 24 -; GCN-IR-NEXT: s_sub_u32 s10, s10, s12 -; GCN-IR-NEXT: s_subb_u32 s11, s11, 0 -; GCN-IR-NEXT: s_add_u32 s8, s8, 1 -; GCN-IR-NEXT: s_addc_u32 s9, s9, 0 -; GCN-IR-NEXT: v_cmp_eq_u64_e64 s[14:15], s[8:9], 0 -; GCN-IR-NEXT: s_mov_b64 s[12:13], s[4:5] +; GCN-IR-NEXT: s_or_b64 s[8:9], s[8:9], s[4:5] +; GCN-IR-NEXT: s_or_b64 s[6:7], s[10:11], s[6:7] +; GCN-IR-NEXT: s_sub_u32 s4, 23, s8 +; GCN-IR-NEXT: s_subb_u32 s4, 0, s9 +; GCN-IR-NEXT: s_ashr_i32 s10, s4, 31 +; GCN-IR-NEXT: s_and_b32 s4, s10, 1 +; GCN-IR-NEXT: s_and_b32 s10, s10, 24 +; GCN-IR-NEXT: s_sub_u32 s8, s8, s10 +; GCN-IR-NEXT: s_subb_u32 s9, s9, 0 +; GCN-IR-NEXT: s_add_u32 s12, s12, 1 +; GCN-IR-NEXT: s_cselect_b64 s[14:15], -1, 0 +; GCN-IR-NEXT: s_or_b32 s14, s14, s15 +; GCN-IR-NEXT: s_cmp_lg_u32 s14, 0 +; GCN-IR-NEXT: s_addc_u32 s13, s13, 0 +; GCN-IR-NEXT: s_cselect_b64 s[14:15], -1, 0 +; GCN-IR-NEXT: s_mov_b64 s[10:11], s[4:5] ; GCN-IR-NEXT: s_and_b64 vcc, exec, s[14:15] ; GCN-IR-NEXT: s_cbranch_vccz .LBB7_3 ; GCN-IR-NEXT: .LBB7_4: ; %Flow6 @@ -1241,8 +1258,8 @@ define i64 @v_test_urem_pow2_k_num_i64(i64 %x) { ; GCN-IR-NEXT: v_ffbh_u32_e32 v2, v0 ; GCN-IR-NEXT: v_add_i32_e32 v2, vcc, 32, v2 ; GCN-IR-NEXT: v_ffbh_u32_e32 v3, v1 -; GCN-IR-NEXT: v_min_u32_e32 v10, v2, v3 -; GCN-IR-NEXT: v_add_i32_e32 v2, vcc, 0xffffffd0, v10 +; GCN-IR-NEXT: v_min_u32_e32 v8, v2, v3 +; GCN-IR-NEXT: v_add_i32_e32 v2, vcc, 0xffffffd0, v8 ; GCN-IR-NEXT: v_addc_u32_e64 v3, s[6:7], 0, -1, vcc ; GCN-IR-NEXT: v_cmp_eq_u64_e64 s[4:5], 0, v[0:1] ; GCN-IR-NEXT: v_cmp_lt_u64_e32 vcc, 63, v[2:3] @@ -1257,54 +1274,53 @@ define i64 @v_test_urem_pow2_k_num_i64(i64 %x) { ; GCN-IR-NEXT: s_cbranch_execz .LBB8_6 ; GCN-IR-NEXT: ; %bb.1: ; %udiv-bb1 ; GCN-IR-NEXT: v_add_i32_e32 v6, vcc, 1, v2 +; GCN-IR-NEXT: v_addc_u32_e32 v3, vcc, 0, v3, vcc ; GCN-IR-NEXT: v_sub_i32_e64 v2, s[4:5], 63, v2 -; GCN-IR-NEXT: v_addc_u32_e32 v7, vcc, 0, v3, vcc -; GCN-IR-NEXT: s_mov_b64 s[4:5], 0x8000 +; GCN-IR-NEXT: s_mov_b64 s[8:9], 0x8000 +; GCN-IR-NEXT: v_lshl_b64 v[2:3], s[8:9], v2 ; GCN-IR-NEXT: v_mov_b32_e32 v4, 0 -; GCN-IR-NEXT: v_cmp_ne_u64_e32 vcc, 0, v[6:7] -; GCN-IR-NEXT: v_lshl_b64 v[2:3], s[4:5], v2 ; GCN-IR-NEXT: v_mov_b32_e32 v5, 0 -; GCN-IR-NEXT: s_and_saveexec_b64 s[8:9], vcc -; GCN-IR-NEXT: s_xor_b64 s[8:9], exec, s[8:9] +; GCN-IR-NEXT: s_xor_b64 s[4:5], vcc, -1 +; GCN-IR-NEXT: s_and_saveexec_b64 s[10:11], s[4:5] +; GCN-IR-NEXT: s_xor_b64 s[4:5], exec, s[10:11] ; GCN-IR-NEXT: s_cbranch_execz .LBB8_5 ; GCN-IR-NEXT: ; %bb.2: ; %udiv-preheader -; GCN-IR-NEXT: v_add_i32_e32 v12, vcc, -1, v0 -; GCN-IR-NEXT: v_addc_u32_e32 v13, vcc, -1, v1, vcc -; GCN-IR-NEXT: v_lshr_b64 v[8:9], s[4:5], v6 -; GCN-IR-NEXT: v_sub_i32_e32 v6, vcc, 47, v10 -; GCN-IR-NEXT: v_mov_b32_e32 v10, 0 -; GCN-IR-NEXT: v_subb_u32_e64 v7, s[4:5], 0, 0, vcc -; GCN-IR-NEXT: s_mov_b64 s[10:11], 0 -; GCN-IR-NEXT: v_mov_b32_e32 v11, 0 +; GCN-IR-NEXT: v_add_i32_e32 v10, vcc, -1, v0 +; GCN-IR-NEXT: v_addc_u32_e32 v11, vcc, -1, v1, vcc +; GCN-IR-NEXT: v_sub_i32_e32 v12, vcc, 47, v8 +; GCN-IR-NEXT: v_lshr_b64 v[6:7], s[8:9], v6 +; GCN-IR-NEXT: v_subb_u32_e64 v13, s[8:9], 0, 0, vcc +; GCN-IR-NEXT: v_mov_b32_e32 v8, 0 +; GCN-IR-NEXT: s_mov_b64 s[8:9], 0 +; GCN-IR-NEXT: v_mov_b32_e32 v9, 0 ; GCN-IR-NEXT: v_mov_b32_e32 v5, 0 ; GCN-IR-NEXT: .LBB8_3: ; %udiv-do-while ; 
GCN-IR-NEXT: ; =>This Inner Loop Header: Depth=1 -; GCN-IR-NEXT: v_lshl_b64 v[8:9], v[8:9], 1 +; GCN-IR-NEXT: v_lshl_b64 v[6:7], v[6:7], 1 ; GCN-IR-NEXT: v_lshrrev_b32_e32 v4, 31, v3 -; GCN-IR-NEXT: v_or_b32_e32 v8, v8, v4 +; GCN-IR-NEXT: v_or_b32_e32 v6, v6, v4 ; GCN-IR-NEXT: v_lshl_b64 v[2:3], v[2:3], 1 -; GCN-IR-NEXT: v_sub_i32_e32 v4, vcc, v12, v8 -; GCN-IR-NEXT: v_subb_u32_e32 v4, vcc, v13, v9, vcc -; GCN-IR-NEXT: v_or_b32_e32 v2, v10, v2 -; GCN-IR-NEXT: v_ashrrev_i32_e32 v10, 31, v4 -; GCN-IR-NEXT: v_add_i32_e32 v6, vcc, 1, v6 -; GCN-IR-NEXT: v_or_b32_e32 v3, v11, v3 -; GCN-IR-NEXT: v_and_b32_e32 v4, 1, v10 -; GCN-IR-NEXT: v_and_b32_e32 v11, v10, v1 -; GCN-IR-NEXT: v_and_b32_e32 v10, v10, v0 -; GCN-IR-NEXT: v_addc_u32_e32 v7, vcc, 0, v7, vcc -; GCN-IR-NEXT: v_cmp_eq_u64_e32 vcc, 0, v[6:7] -; GCN-IR-NEXT: v_sub_i32_e64 v8, s[4:5], v8, v10 -; GCN-IR-NEXT: v_subb_u32_e64 v9, s[4:5], v9, v11, s[4:5] -; GCN-IR-NEXT: v_mov_b32_e32 v11, v5 -; GCN-IR-NEXT: s_or_b64 s[10:11], vcc, s[10:11] -; GCN-IR-NEXT: v_mov_b32_e32 v10, v4 -; GCN-IR-NEXT: s_andn2_b64 exec, exec, s[10:11] +; GCN-IR-NEXT: v_sub_i32_e32 v4, vcc, v10, v6 +; GCN-IR-NEXT: v_subb_u32_e32 v4, vcc, v11, v7, vcc +; GCN-IR-NEXT: v_or_b32_e32 v2, v8, v2 +; GCN-IR-NEXT: v_ashrrev_i32_e32 v8, 31, v4 +; GCN-IR-NEXT: v_or_b32_e32 v3, v9, v3 +; GCN-IR-NEXT: v_and_b32_e32 v4, 1, v8 +; GCN-IR-NEXT: v_and_b32_e32 v9, v8, v1 +; GCN-IR-NEXT: v_and_b32_e32 v8, v8, v0 +; GCN-IR-NEXT: v_sub_i32_e32 v6, vcc, v6, v8 +; GCN-IR-NEXT: v_subb_u32_e32 v7, vcc, v7, v9, vcc +; GCN-IR-NEXT: v_add_i32_e32 v12, vcc, 1, v12 +; GCN-IR-NEXT: v_addc_u32_e32 v13, vcc, 0, v13, vcc +; GCN-IR-NEXT: v_mov_b32_e32 v9, v5 +; GCN-IR-NEXT: s_or_b64 s[8:9], vcc, s[8:9] +; GCN-IR-NEXT: v_mov_b32_e32 v8, v4 +; GCN-IR-NEXT: s_andn2_b64 exec, exec, s[8:9] ; GCN-IR-NEXT: s_cbranch_execnz .LBB8_3 ; GCN-IR-NEXT: ; %bb.4: ; %Flow -; GCN-IR-NEXT: s_or_b64 exec, exec, s[10:11] -; GCN-IR-NEXT: .LBB8_5: ; %Flow4 ; GCN-IR-NEXT: s_or_b64 exec, exec, s[8:9] +; GCN-IR-NEXT: .LBB8_5: ; %Flow4 +; GCN-IR-NEXT: s_or_b64 exec, exec, s[4:5] ; GCN-IR-NEXT: v_lshl_b64 v[2:3], v[2:3], 1 ; GCN-IR-NEXT: v_or_b32_e32 v5, v5, v3 ; GCN-IR-NEXT: v_or_b32_e32 v4, v4, v2 @@ -1337,8 +1353,8 @@ define i64 @v_test_urem_pow2_k_den_i64(i64 %x) { ; GCN-IR-NEXT: v_ffbh_u32_e32 v2, v0 ; GCN-IR-NEXT: v_add_i32_e64 v2, s[4:5], 32, v2 ; GCN-IR-NEXT: v_ffbh_u32_e32 v3, v1 -; GCN-IR-NEXT: v_min_u32_e32 v10, v2, v3 -; GCN-IR-NEXT: v_sub_i32_e64 v2, s[4:5], 48, v10 +; GCN-IR-NEXT: v_min_u32_e32 v8, v2, v3 +; GCN-IR-NEXT: v_sub_i32_e64 v2, s[4:5], 48, v8 ; GCN-IR-NEXT: v_subb_u32_e64 v3, s[4:5], 0, 0, s[4:5] ; GCN-IR-NEXT: v_cmp_eq_u64_e32 vcc, 0, v[0:1] ; GCN-IR-NEXT: v_cmp_lt_u64_e64 s[4:5], 63, v[2:3] @@ -1352,51 +1368,50 @@ define i64 @v_test_urem_pow2_k_den_i64(i64 %x) { ; GCN-IR-NEXT: s_cbranch_execz .LBB9_6 ; GCN-IR-NEXT: ; %bb.1: ; %udiv-bb1 ; GCN-IR-NEXT: v_add_i32_e32 v6, vcc, 1, v2 -; GCN-IR-NEXT: v_addc_u32_e32 v7, vcc, 0, v3, vcc +; GCN-IR-NEXT: v_addc_u32_e32 v3, vcc, 0, v3, vcc ; GCN-IR-NEXT: v_sub_i32_e64 v2, s[4:5], 63, v2 -; GCN-IR-NEXT: v_mov_b32_e32 v4, 0 -; GCN-IR-NEXT: v_cmp_ne_u64_e32 vcc, 0, v[6:7] ; GCN-IR-NEXT: v_lshl_b64 v[2:3], v[0:1], v2 +; GCN-IR-NEXT: v_mov_b32_e32 v4, 0 ; GCN-IR-NEXT: v_mov_b32_e32 v5, 0 -; GCN-IR-NEXT: s_and_saveexec_b64 s[4:5], vcc -; GCN-IR-NEXT: s_xor_b64 s[8:9], exec, s[4:5] +; GCN-IR-NEXT: s_xor_b64 s[4:5], vcc, -1 +; GCN-IR-NEXT: s_and_saveexec_b64 s[8:9], s[4:5] +; GCN-IR-NEXT: s_xor_b64 s[4:5], exec, s[8:9] ; GCN-IR-NEXT: s_cbranch_execz .LBB9_5 ; GCN-IR-NEXT: ; 
%bb.2: ; %udiv-preheader -; GCN-IR-NEXT: v_lshr_b64 v[8:9], v[0:1], v6 -; GCN-IR-NEXT: v_add_i32_e32 v6, vcc, 0xffffffcf, v10 -; GCN-IR-NEXT: v_mov_b32_e32 v10, 0 -; GCN-IR-NEXT: v_addc_u32_e64 v7, s[4:5], 0, -1, vcc -; GCN-IR-NEXT: s_mov_b64 s[10:11], 0 -; GCN-IR-NEXT: v_mov_b32_e32 v11, 0 +; GCN-IR-NEXT: v_add_i32_e32 v10, vcc, 0xffffffcf, v8 +; GCN-IR-NEXT: v_lshr_b64 v[6:7], v[0:1], v6 +; GCN-IR-NEXT: v_addc_u32_e64 v11, s[8:9], 0, -1, vcc +; GCN-IR-NEXT: v_mov_b32_e32 v8, 0 +; GCN-IR-NEXT: s_mov_b64 s[8:9], 0 +; GCN-IR-NEXT: v_mov_b32_e32 v9, 0 ; GCN-IR-NEXT: v_mov_b32_e32 v5, 0 -; GCN-IR-NEXT: s_movk_i32 s12, 0x7fff +; GCN-IR-NEXT: s_movk_i32 s10, 0x7fff ; GCN-IR-NEXT: .LBB9_3: ; %udiv-do-while ; GCN-IR-NEXT: ; =>This Inner Loop Header: Depth=1 -; GCN-IR-NEXT: v_lshl_b64 v[8:9], v[8:9], 1 +; GCN-IR-NEXT: v_lshl_b64 v[6:7], v[6:7], 1 ; GCN-IR-NEXT: v_lshrrev_b32_e32 v4, 31, v3 -; GCN-IR-NEXT: v_or_b32_e32 v8, v8, v4 -; GCN-IR-NEXT: v_sub_i32_e32 v4, vcc, s12, v8 +; GCN-IR-NEXT: v_or_b32_e32 v6, v6, v4 ; GCN-IR-NEXT: v_lshl_b64 v[2:3], v[2:3], 1 -; GCN-IR-NEXT: v_subb_u32_e32 v4, vcc, 0, v9, vcc -; GCN-IR-NEXT: v_add_i32_e32 v6, vcc, 1, v6 -; GCN-IR-NEXT: v_or_b32_e32 v2, v10, v2 -; GCN-IR-NEXT: v_ashrrev_i32_e32 v10, 31, v4 -; GCN-IR-NEXT: v_addc_u32_e32 v7, vcc, 0, v7, vcc -; GCN-IR-NEXT: v_and_b32_e32 v4, 1, v10 -; GCN-IR-NEXT: v_and_b32_e32 v10, 0x8000, v10 -; GCN-IR-NEXT: v_cmp_eq_u64_e32 vcc, 0, v[6:7] -; GCN-IR-NEXT: v_or_b32_e32 v3, v11, v3 -; GCN-IR-NEXT: v_sub_i32_e64 v8, s[4:5], v8, v10 -; GCN-IR-NEXT: v_mov_b32_e32 v11, v5 -; GCN-IR-NEXT: v_subbrev_u32_e64 v9, s[4:5], 0, v9, s[4:5] -; GCN-IR-NEXT: s_or_b64 s[10:11], vcc, s[10:11] -; GCN-IR-NEXT: v_mov_b32_e32 v10, v4 -; GCN-IR-NEXT: s_andn2_b64 exec, exec, s[10:11] +; GCN-IR-NEXT: v_sub_i32_e32 v4, vcc, s10, v6 +; GCN-IR-NEXT: v_subb_u32_e32 v4, vcc, 0, v7, vcc +; GCN-IR-NEXT: v_or_b32_e32 v2, v8, v2 +; GCN-IR-NEXT: v_ashrrev_i32_e32 v8, 31, v4 +; GCN-IR-NEXT: v_and_b32_e32 v4, 1, v8 +; GCN-IR-NEXT: v_and_b32_e32 v8, 0x8000, v8 +; GCN-IR-NEXT: v_sub_i32_e32 v6, vcc, v6, v8 +; GCN-IR-NEXT: v_subbrev_u32_e32 v7, vcc, 0, v7, vcc +; GCN-IR-NEXT: v_add_i32_e32 v10, vcc, 1, v10 +; GCN-IR-NEXT: v_or_b32_e32 v3, v9, v3 +; GCN-IR-NEXT: v_addc_u32_e32 v11, vcc, 0, v11, vcc +; GCN-IR-NEXT: v_mov_b32_e32 v9, v5 +; GCN-IR-NEXT: s_or_b64 s[8:9], vcc, s[8:9] +; GCN-IR-NEXT: v_mov_b32_e32 v8, v4 +; GCN-IR-NEXT: s_andn2_b64 exec, exec, s[8:9] ; GCN-IR-NEXT: s_cbranch_execnz .LBB9_3 ; GCN-IR-NEXT: ; %bb.4: ; %Flow -; GCN-IR-NEXT: s_or_b64 exec, exec, s[10:11] -; GCN-IR-NEXT: .LBB9_5: ; %Flow4 ; GCN-IR-NEXT: s_or_b64 exec, exec, s[8:9] +; GCN-IR-NEXT: .LBB9_5: ; %Flow4 +; GCN-IR-NEXT: s_or_b64 exec, exec, s[4:5] ; GCN-IR-NEXT: v_lshl_b64 v[2:3], v[2:3], 1 ; GCN-IR-NEXT: v_or_b32_e32 v5, v5, v3 ; GCN-IR-NEXT: v_or_b32_e32 v4, v4, v2 diff --git a/llvm/test/CodeGen/AMDGPU/usubo.ll b/llvm/test/CodeGen/AMDGPU/usubo.ll index 0289dab..d67a7b1 100644 --- a/llvm/test/CodeGen/AMDGPU/usubo.ll +++ b/llvm/test/CodeGen/AMDGPU/usubo.ll @@ -14,15 +14,16 @@ define amdgpu_kernel void @s_usubo_i64_zext(ptr addrspace(1) %out, i64 %a, i64 % ; SI-NEXT: s_mov_b32 s6, -1 ; SI-NEXT: s_waitcnt lgkmcnt(0) ; SI-NEXT: s_mov_b32 s4, s0 -; SI-NEXT: s_sub_u32 s0, s2, s8 -; SI-NEXT: v_mov_b32_e32 v0, s2 +; SI-NEXT: s_sub_u32 s2, s2, s8 ; SI-NEXT: s_mov_b32 s5, s1 -; SI-NEXT: s_subb_u32 s1, s3, s9 +; SI-NEXT: s_cselect_b64 s[0:1], -1, 0 +; SI-NEXT: s_or_b32 s0, s0, s1 +; SI-NEXT: s_cmp_lg_u32 s0, 0 +; SI-NEXT: s_subb_u32 s3, s3, s9 +; SI-NEXT: s_cselect_b64 s[0:1], -1, 0 +; 
SI-NEXT: v_cndmask_b32_e64 v0, 0, 1, s[0:1] ; SI-NEXT: v_mov_b32_e32 v1, s3 -; SI-NEXT: v_cmp_gt_u64_e32 vcc, s[0:1], v[0:1] -; SI-NEXT: v_mov_b32_e32 v1, s1 -; SI-NEXT: v_cndmask_b32_e64 v0, 0, 1, vcc -; SI-NEXT: v_add_i32_e32 v0, vcc, s0, v0 +; SI-NEXT: v_add_i32_e32 v0, vcc, s2, v0 ; SI-NEXT: v_addc_u32_e32 v1, vcc, 0, v1, vcc ; SI-NEXT: buffer_store_dwordx2 v[0:1], off, s[4:7], 0 ; SI-NEXT: s_endpgm @@ -33,15 +34,15 @@ define amdgpu_kernel void @s_usubo_i64_zext(ptr addrspace(1) %out, i64 %a, i64 % ; VI-NEXT: s_load_dwordx2 s[4:5], s[4:5], 0x34 ; VI-NEXT: s_waitcnt lgkmcnt(0) ; VI-NEXT: v_mov_b32_e32 v0, s0 -; VI-NEXT: s_sub_u32 s0, s2, s4 -; VI-NEXT: v_mov_b32_e32 v2, s2 +; VI-NEXT: s_sub_u32 s2, s2, s4 ; VI-NEXT: v_mov_b32_e32 v1, s1 +; VI-NEXT: s_cselect_b64 s[0:1], -1, 0 +; VI-NEXT: s_cmp_lg_u64 s[0:1], 0 +; VI-NEXT: s_subb_u32 s3, s3, s5 +; VI-NEXT: s_cselect_b64 s[0:1], -1, 0 +; VI-NEXT: v_cndmask_b32_e64 v2, 0, 1, s[0:1] ; VI-NEXT: v_mov_b32_e32 v3, s3 -; VI-NEXT: s_subb_u32 s1, s3, s5 -; VI-NEXT: v_cmp_gt_u64_e32 vcc, s[0:1], v[2:3] -; VI-NEXT: v_mov_b32_e32 v3, s1 -; VI-NEXT: v_cndmask_b32_e64 v2, 0, 1, vcc -; VI-NEXT: v_add_u32_e32 v2, vcc, s0, v2 +; VI-NEXT: v_add_u32_e32 v2, vcc, s2, v2 ; VI-NEXT: v_addc_u32_e32 v3, vcc, 0, v3, vcc ; VI-NEXT: flat_store_dwordx2 v[0:1], v[2:3] ; VI-NEXT: s_endpgm @@ -52,14 +53,14 @@ define amdgpu_kernel void @s_usubo_i64_zext(ptr addrspace(1) %out, i64 %a, i64 % ; GFX9-NEXT: s_load_dwordx2 s[6:7], s[4:5], 0x34 ; GFX9-NEXT: v_mov_b32_e32 v2, 0 ; GFX9-NEXT: s_waitcnt lgkmcnt(0) -; GFX9-NEXT: v_mov_b32_e32 v0, s2 -; GFX9-NEXT: s_sub_u32 s4, s2, s6 -; GFX9-NEXT: v_mov_b32_e32 v1, s3 -; GFX9-NEXT: s_subb_u32 s5, s3, s7 -; GFX9-NEXT: v_cmp_gt_u64_e32 vcc, s[4:5], v[0:1] -; GFX9-NEXT: v_mov_b32_e32 v1, s5 -; GFX9-NEXT: v_cndmask_b32_e64 v0, 0, 1, vcc -; GFX9-NEXT: v_add_co_u32_e32 v0, vcc, s4, v0 +; GFX9-NEXT: s_sub_u32 s6, s2, s6 +; GFX9-NEXT: s_cselect_b64 s[4:5], -1, 0 +; GFX9-NEXT: s_cmp_lg_u64 s[4:5], 0 +; GFX9-NEXT: s_subb_u32 s4, s3, s7 +; GFX9-NEXT: s_cselect_b64 s[2:3], -1, 0 +; GFX9-NEXT: v_cndmask_b32_e64 v0, 0, 1, s[2:3] +; GFX9-NEXT: v_mov_b32_e32 v1, s4 +; GFX9-NEXT: v_add_co_u32_e32 v0, vcc, s6, v0 ; GFX9-NEXT: v_addc_co_u32_e32 v1, vcc, 0, v1, vcc ; GFX9-NEXT: global_store_dwordx2 v2, v[0:1], s[0:1] ; GFX9-NEXT: s_endpgm @@ -71,12 +72,14 @@ define amdgpu_kernel void @s_usubo_i64_zext(ptr addrspace(1) %out, i64 %a, i64 % ; GFX10-NEXT: s_load_dwordx2 s[6:7], s[4:5], 0x34 ; GFX10-NEXT: v_mov_b32_e32 v2, 0 ; GFX10-NEXT: s_waitcnt lgkmcnt(0) -; GFX10-NEXT: s_sub_u32 s4, s2, s6 -; GFX10-NEXT: s_subb_u32 s5, s3, s7 -; GFX10-NEXT: v_cmp_gt_u64_e64 s2, s[4:5], s[2:3] -; GFX10-NEXT: v_cndmask_b32_e64 v0, 0, 1, s2 -; GFX10-NEXT: v_add_co_u32 v0, s2, s4, v0 -; GFX10-NEXT: v_add_co_ci_u32_e64 v1, s2, s5, 0, s2 +; GFX10-NEXT: s_sub_u32 s2, s2, s6 +; GFX10-NEXT: s_cselect_b32 s4, -1, 0 +; GFX10-NEXT: s_cmp_lg_u32 s4, 0 +; GFX10-NEXT: s_subb_u32 s3, s3, s7 +; GFX10-NEXT: s_cselect_b32 s4, -1, 0 +; GFX10-NEXT: v_cndmask_b32_e64 v0, 0, 1, s4 +; GFX10-NEXT: v_add_co_u32 v0, s2, s2, v0 +; GFX10-NEXT: v_add_co_ci_u32_e64 v1, s2, s3, 0, s2 ; GFX10-NEXT: global_store_dwordx2 v2, v[0:1], s[0:1] ; GFX10-NEXT: s_endpgm ; @@ -87,14 +90,16 @@ define amdgpu_kernel void @s_usubo_i64_zext(ptr addrspace(1) %out, i64 %a, i64 % ; GFX11-NEXT: s_load_b64 s[4:5], s[4:5], 0x34 ; GFX11-NEXT: v_mov_b32_e32 v2, 0 ; GFX11-NEXT: s_waitcnt lgkmcnt(0) -; GFX11-NEXT: s_sub_u32 s4, s2, s4 -; GFX11-NEXT: s_subb_u32 s5, s3, s5 -; GFX11-NEXT: s_delay_alu instid0(SALU_CYCLE_1) | 
instskip(NEXT) | instid1(VALU_DEP_1) -; GFX11-NEXT: v_cmp_gt_u64_e64 s2, s[4:5], s[2:3] -; GFX11-NEXT: v_cndmask_b32_e64 v0, 0, 1, s2 +; GFX11-NEXT: s_sub_u32 s2, s2, s4 +; GFX11-NEXT: s_cselect_b32 s4, -1, 0 +; GFX11-NEXT: s_delay_alu instid0(SALU_CYCLE_1) | instskip(SKIP_2) | instid1(SALU_CYCLE_1) +; GFX11-NEXT: s_cmp_lg_u32 s4, 0 +; GFX11-NEXT: s_subb_u32 s3, s3, s5 +; GFX11-NEXT: s_cselect_b32 s4, -1, 0 +; GFX11-NEXT: v_cndmask_b32_e64 v0, 0, 1, s4 ; GFX11-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1) -; GFX11-NEXT: v_add_co_u32 v0, s2, s4, v0 -; GFX11-NEXT: v_add_co_ci_u32_e64 v1, null, s5, 0, s2 +; GFX11-NEXT: v_add_co_u32 v0, s2, s2, v0 +; GFX11-NEXT: v_add_co_ci_u32_e64 v1, null, s3, 0, s2 ; GFX11-NEXT: global_store_b64 v2, v[0:1], s[0:1] ; GFX11-NEXT: s_endpgm %usub = call { i64, i1 } @llvm.usub.with.overflow.i64(i64 %a, i64 %b) #0 @@ -435,21 +440,23 @@ define amdgpu_kernel void @s_usubo_i64(ptr addrspace(1) %out, ptr addrspace(1) % ; SI-NEXT: s_mov_b32 s11, 0xf000 ; SI-NEXT: s_mov_b32 s10, -1 ; SI-NEXT: s_waitcnt lgkmcnt(0) -; SI-NEXT: s_sub_u32 s6, s4, s6 -; SI-NEXT: v_mov_b32_e32 v0, s4 -; SI-NEXT: s_subb_u32 s7, s5, s7 -; SI-NEXT: v_mov_b32_e32 v1, s5 -; SI-NEXT: v_cmp_gt_u64_e32 vcc, s[6:7], v[0:1] -; SI-NEXT: v_mov_b32_e32 v2, s6 +; SI-NEXT: s_sub_u32 s4, s4, s6 +; SI-NEXT: s_cselect_b64 s[12:13], -1, 0 +; SI-NEXT: s_or_b32 s6, s12, s13 +; SI-NEXT: s_cmp_lg_u32 s6, 0 +; SI-NEXT: s_subb_u32 s5, s5, s7 ; SI-NEXT: s_mov_b32 s8, s0 ; SI-NEXT: s_mov_b32 s9, s1 +; SI-NEXT: v_mov_b32_e32 v0, s4 +; SI-NEXT: v_mov_b32_e32 v1, s5 +; SI-NEXT: s_cselect_b64 s[4:5], -1, 0 ; SI-NEXT: s_mov_b32 s0, s2 ; SI-NEXT: s_mov_b32 s1, s3 ; SI-NEXT: s_mov_b32 s2, s10 ; SI-NEXT: s_mov_b32 s3, s11 -; SI-NEXT: v_mov_b32_e32 v3, s7 -; SI-NEXT: v_cndmask_b32_e64 v0, 0, 1, vcc -; SI-NEXT: buffer_store_dwordx2 v[2:3], off, s[8:11], 0 +; SI-NEXT: buffer_store_dwordx2 v[0:1], off, s[8:11], 0 +; SI-NEXT: s_waitcnt expcnt(0) +; SI-NEXT: v_cndmask_b32_e64 v0, 0, 1, s[4:5] ; SI-NEXT: buffer_store_byte v0, off, s[0:3], 0 ; SI-NEXT: s_endpgm ; @@ -457,37 +464,37 @@ define amdgpu_kernel void @s_usubo_i64(ptr addrspace(1) %out, ptr addrspace(1) % ; VI: ; %bb.0: ; VI-NEXT: s_load_dwordx8 s[0:7], s[4:5], 0x24 ; VI-NEXT: s_waitcnt lgkmcnt(0) +; VI-NEXT: v_mov_b32_e32 v2, s2 +; VI-NEXT: s_sub_u32 s2, s4, s6 ; VI-NEXT: v_mov_b32_e32 v0, s0 -; VI-NEXT: s_sub_u32 s0, s4, s6 -; VI-NEXT: v_mov_b32_e32 v4, s4 ; VI-NEXT: v_mov_b32_e32 v1, s1 -; VI-NEXT: s_subb_u32 s1, s5, s7 -; VI-NEXT: v_mov_b32_e32 v5, s5 -; VI-NEXT: v_mov_b32_e32 v7, s1 -; VI-NEXT: v_cmp_gt_u64_e32 vcc, s[0:1], v[4:5] -; VI-NEXT: v_mov_b32_e32 v6, s0 -; VI-NEXT: v_mov_b32_e32 v2, s2 +; VI-NEXT: s_cselect_b64 s[0:1], -1, 0 +; VI-NEXT: s_cmp_lg_u64 s[0:1], 0 +; VI-NEXT: s_subb_u32 s0, s5, s7 +; VI-NEXT: v_mov_b32_e32 v4, s2 +; VI-NEXT: v_mov_b32_e32 v5, s0 +; VI-NEXT: s_cselect_b64 s[0:1], -1, 0 ; VI-NEXT: v_mov_b32_e32 v3, s3 -; VI-NEXT: flat_store_dwordx2 v[0:1], v[6:7] -; VI-NEXT: v_cndmask_b32_e64 v0, 0, 1, vcc +; VI-NEXT: flat_store_dwordx2 v[0:1], v[4:5] +; VI-NEXT: v_cndmask_b32_e64 v0, 0, 1, s[0:1] ; VI-NEXT: flat_store_byte v[2:3], v0 ; VI-NEXT: s_endpgm ; ; GFX9-LABEL: s_usubo_i64: ; GFX9: ; %bb.0: ; GFX9-NEXT: s_load_dwordx8 s[8:15], s[4:5], 0x24 -; GFX9-NEXT: v_mov_b32_e32 v4, 0 +; GFX9-NEXT: v_mov_b32_e32 v2, 0 ; GFX9-NEXT: s_waitcnt lgkmcnt(0) -; GFX9-NEXT: s_sub_u32 s0, s12, s14 -; GFX9-NEXT: v_mov_b32_e32 v0, s12 -; GFX9-NEXT: v_mov_b32_e32 v1, s13 -; GFX9-NEXT: s_subb_u32 s1, s13, s15 -; GFX9-NEXT: v_mov_b32_e32 
v3, s1 -; GFX9-NEXT: v_cmp_gt_u64_e32 vcc, s[0:1], v[0:1] -; GFX9-NEXT: v_mov_b32_e32 v2, s0 -; GFX9-NEXT: v_cndmask_b32_e64 v0, 0, 1, vcc -; GFX9-NEXT: global_store_dwordx2 v4, v[2:3], s[8:9] -; GFX9-NEXT: global_store_byte v4, v0, s[10:11] +; GFX9-NEXT: s_sub_u32 s2, s12, s14 +; GFX9-NEXT: s_cselect_b64 s[0:1], -1, 0 +; GFX9-NEXT: s_cmp_lg_u64 s[0:1], 0 +; GFX9-NEXT: s_subb_u32 s0, s13, s15 +; GFX9-NEXT: v_mov_b32_e32 v0, s2 +; GFX9-NEXT: v_mov_b32_e32 v1, s0 +; GFX9-NEXT: s_cselect_b64 s[0:1], -1, 0 +; GFX9-NEXT: v_cndmask_b32_e64 v3, 0, 1, s[0:1] +; GFX9-NEXT: global_store_dwordx2 v2, v[0:1], s[8:9] +; GFX9-NEXT: global_store_byte v2, v3, s[10:11] ; GFX9-NEXT: s_endpgm ; ; GFX10-LABEL: s_usubo_i64: @@ -496,10 +503,12 @@ define amdgpu_kernel void @s_usubo_i64(ptr addrspace(1) %out, ptr addrspace(1) % ; GFX10-NEXT: v_mov_b32_e32 v2, 0 ; GFX10-NEXT: s_waitcnt lgkmcnt(0) ; GFX10-NEXT: s_sub_u32 s0, s12, s14 -; GFX10-NEXT: s_subb_u32 s1, s13, s15 +; GFX10-NEXT: s_cselect_b32 s1, -1, 0 ; GFX10-NEXT: v_mov_b32_e32 v0, s0 +; GFX10-NEXT: s_cmp_lg_u32 s1, 0 +; GFX10-NEXT: s_subb_u32 s1, s13, s15 +; GFX10-NEXT: s_cselect_b32 s0, -1, 0 ; GFX10-NEXT: v_mov_b32_e32 v1, s1 -; GFX10-NEXT: v_cmp_gt_u64_e64 s0, s[0:1], s[12:13] ; GFX10-NEXT: v_cndmask_b32_e64 v3, 0, 1, s0 ; GFX10-NEXT: global_store_dwordx2 v2, v[0:1], s[8:9] ; GFX10-NEXT: global_store_byte v2, v3, s[10:11] @@ -509,12 +518,13 @@ define amdgpu_kernel void @s_usubo_i64(ptr addrspace(1) %out, ptr addrspace(1) % ; GFX11: ; %bb.0: ; GFX11-NEXT: s_load_b256 s[0:7], s[4:5], 0x24 ; GFX11-NEXT: s_waitcnt lgkmcnt(0) -; GFX11-NEXT: s_sub_u32 s6, s4, s6 -; GFX11-NEXT: s_subb_u32 s7, s5, s7 -; GFX11-NEXT: v_mov_b32_e32 v0, s6 -; GFX11-NEXT: v_cmp_gt_u64_e64 s4, s[6:7], s[4:5] -; GFX11-NEXT: v_dual_mov_b32 v2, 0 :: v_dual_mov_b32 v1, s7 -; GFX11-NEXT: s_delay_alu instid0(VALU_DEP_2) +; GFX11-NEXT: s_sub_u32 s4, s4, s6 +; GFX11-NEXT: s_cselect_b32 s6, -1, 0 +; GFX11-NEXT: v_mov_b32_e32 v0, s4 +; GFX11-NEXT: s_cmp_lg_u32 s6, 0 +; GFX11-NEXT: s_subb_u32 s5, s5, s7 +; GFX11-NEXT: s_cselect_b32 s4, -1, 0 +; GFX11-NEXT: v_dual_mov_b32 v2, 0 :: v_dual_mov_b32 v1, s5 ; GFX11-NEXT: v_cndmask_b32_e64 v3, 0, 1, s4 ; GFX11-NEXT: s_clause 0x1 ; GFX11-NEXT: global_store_b64 v2, v[0:1], s[0:1] @@ -550,10 +560,10 @@ define amdgpu_kernel void @v_usubo_i64(ptr addrspace(1) %out, ptr addrspace(1) % ; SI-NEXT: s_mov_b32 s4, s2 ; SI-NEXT: s_mov_b32 s5, s3 ; SI-NEXT: s_waitcnt vmcnt(0) -; SI-NEXT: v_sub_i32_e32 v2, vcc, v0, v2 -; SI-NEXT: v_subb_u32_e32 v3, vcc, v1, v3, vcc -; SI-NEXT: v_cmp_gt_u64_e32 vcc, v[2:3], v[0:1] -; SI-NEXT: buffer_store_dwordx2 v[2:3], off, s[8:11], 0 +; SI-NEXT: v_sub_i32_e32 v0, vcc, v0, v2 +; SI-NEXT: v_subb_u32_e32 v1, vcc, v1, v3, vcc +; SI-NEXT: buffer_store_dwordx2 v[0:1], off, s[8:11], 0 +; SI-NEXT: s_waitcnt expcnt(0) ; SI-NEXT: v_cndmask_b32_e64 v0, 0, 1, vcc ; SI-NEXT: buffer_store_byte v0, off, s[4:7], 0 ; SI-NEXT: s_endpgm @@ -573,10 +583,9 @@ define amdgpu_kernel void @v_usubo_i64(ptr addrspace(1) %out, ptr addrspace(1) % ; VI-NEXT: v_mov_b32_e32 v6, s2 ; VI-NEXT: v_mov_b32_e32 v7, s3 ; VI-NEXT: s_waitcnt vmcnt(0) -; VI-NEXT: v_sub_u32_e32 v2, vcc, v0, v2 -; VI-NEXT: v_subb_u32_e32 v3, vcc, v1, v3, vcc -; VI-NEXT: v_cmp_gt_u64_e32 vcc, v[2:3], v[0:1] -; VI-NEXT: flat_store_dwordx2 v[4:5], v[2:3] +; VI-NEXT: v_sub_u32_e32 v0, vcc, v0, v2 +; VI-NEXT: v_subb_u32_e32 v1, vcc, v1, v3, vcc +; VI-NEXT: flat_store_dwordx2 v[4:5], v[0:1] ; VI-NEXT: v_cndmask_b32_e64 v0, 0, 1, vcc ; VI-NEXT: flat_store_byte v[6:7], v0 ; VI-NEXT: s_endpgm @@ 
-589,10 +598,9 @@ define amdgpu_kernel void @v_usubo_i64(ptr addrspace(1) %out, ptr addrspace(1) % ; GFX9-NEXT: global_load_dwordx2 v[0:1], v4, s[12:13] ; GFX9-NEXT: global_load_dwordx2 v[2:3], v4, s[14:15] ; GFX9-NEXT: s_waitcnt vmcnt(0) -; GFX9-NEXT: v_sub_co_u32_e32 v2, vcc, v0, v2 -; GFX9-NEXT: v_subb_co_u32_e32 v3, vcc, v1, v3, vcc -; GFX9-NEXT: v_cmp_gt_u64_e32 vcc, v[2:3], v[0:1] -; GFX9-NEXT: global_store_dwordx2 v4, v[2:3], s[8:9] +; GFX9-NEXT: v_sub_co_u32_e32 v0, vcc, v0, v2 +; GFX9-NEXT: v_subb_co_u32_e32 v1, vcc, v1, v3, vcc +; GFX9-NEXT: global_store_dwordx2 v4, v[0:1], s[8:9] ; GFX9-NEXT: v_cndmask_b32_e64 v0, 0, 1, vcc ; GFX9-NEXT: global_store_byte v4, v0, s[10:11] ; GFX9-NEXT: s_endpgm @@ -606,12 +614,11 @@ define amdgpu_kernel void @v_usubo_i64(ptr addrspace(1) %out, ptr addrspace(1) % ; GFX10-NEXT: global_load_dwordx2 v[0:1], v4, s[12:13] ; GFX10-NEXT: global_load_dwordx2 v[2:3], v4, s[14:15] ; GFX10-NEXT: s_waitcnt vmcnt(0) -; GFX10-NEXT: v_sub_co_u32 v2, vcc_lo, v0, v2 -; GFX10-NEXT: v_sub_co_ci_u32_e32 v3, vcc_lo, v1, v3, vcc_lo -; GFX10-NEXT: v_cmp_gt_u64_e32 vcc_lo, v[2:3], v[0:1] -; GFX10-NEXT: v_cndmask_b32_e64 v0, 0, 1, vcc_lo -; GFX10-NEXT: global_store_dwordx2 v4, v[2:3], s[8:9] -; GFX10-NEXT: global_store_byte v4, v0, s[10:11] +; GFX10-NEXT: v_sub_co_u32 v0, vcc_lo, v0, v2 +; GFX10-NEXT: v_sub_co_ci_u32_e32 v1, vcc_lo, v1, v3, vcc_lo +; GFX10-NEXT: v_cndmask_b32_e64 v2, 0, 1, vcc_lo +; GFX10-NEXT: global_store_dwordx2 v4, v[0:1], s[8:9] +; GFX10-NEXT: global_store_byte v4, v2, s[10:11] ; GFX10-NEXT: s_endpgm ; ; GFX11-LABEL: v_usubo_i64: @@ -623,14 +630,12 @@ define amdgpu_kernel void @v_usubo_i64(ptr addrspace(1) %out, ptr addrspace(1) % ; GFX11-NEXT: global_load_b64 v[0:1], v4, s[4:5] ; GFX11-NEXT: global_load_b64 v[2:3], v4, s[6:7] ; GFX11-NEXT: s_waitcnt vmcnt(0) -; GFX11-NEXT: v_sub_co_u32 v2, vcc_lo, v0, v2 -; GFX11-NEXT: s_delay_alu instid0(VALU_DEP_1) | instskip(NEXT) | instid1(VALU_DEP_1) -; GFX11-NEXT: v_sub_co_ci_u32_e64 v3, null, v1, v3, vcc_lo -; GFX11-NEXT: v_cmp_gt_u64_e32 vcc_lo, v[2:3], v[0:1] -; GFX11-NEXT: v_cndmask_b32_e64 v0, 0, 1, vcc_lo +; GFX11-NEXT: v_sub_co_u32 v0, vcc_lo, v0, v2 +; GFX11-NEXT: v_sub_co_ci_u32_e32 v1, vcc_lo, v1, v3, vcc_lo +; GFX11-NEXT: v_cndmask_b32_e64 v2, 0, 1, vcc_lo ; GFX11-NEXT: s_clause 0x1 -; GFX11-NEXT: global_store_b64 v4, v[2:3], s[0:1] -; GFX11-NEXT: global_store_b8 v4, v0, s[2:3] +; GFX11-NEXT: global_store_b64 v4, v[0:1], s[0:1] +; GFX11-NEXT: global_store_b8 v4, v2, s[2:3] ; GFX11-NEXT: s_endpgm %tid = call i32 @llvm.amdgcn.workitem.id.x() %tid.ext = sext i32 %tid to i64 diff --git a/llvm/test/CodeGen/AMDGPU/usubsat.ll b/llvm/test/CodeGen/AMDGPU/usubsat.ll index 90491a0..3ddb2f0 100644 --- a/llvm/test/CodeGen/AMDGPU/usubsat.ll +++ b/llvm/test/CodeGen/AMDGPU/usubsat.ll @@ -730,52 +730,38 @@ define i64 @v_usubsat_i64(i64 %lhs, i64 %rhs) { ; GFX6-LABEL: v_usubsat_i64: ; GFX6: ; %bb.0: ; GFX6-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0) -; GFX6-NEXT: v_sub_i32_e32 v2, vcc, v0, v2 -; GFX6-NEXT: v_subb_u32_e32 v3, vcc, v1, v3, vcc -; GFX6-NEXT: v_cmp_gt_u64_e32 vcc, v[2:3], v[0:1] -; GFX6-NEXT: v_cndmask_b32_e64 v0, v2, 0, vcc -; GFX6-NEXT: v_cndmask_b32_e64 v1, v3, 0, vcc +; GFX6-NEXT: v_sub_i32_e32 v0, vcc, v0, v2 +; GFX6-NEXT: v_subb_u32_e32 v1, vcc, v1, v3, vcc +; GFX6-NEXT: v_cndmask_b32_e64 v0, v0, 0, vcc +; GFX6-NEXT: v_cndmask_b32_e64 v1, v1, 0, vcc ; GFX6-NEXT: s_setpc_b64 s[30:31] ; ; GFX8-LABEL: v_usubsat_i64: ; GFX8: ; %bb.0: ; GFX8-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0) -; GFX8-NEXT: 
v_sub_u32_e32 v2, vcc, v0, v2
-; GFX8-NEXT: v_subb_u32_e32 v3, vcc, v1, v3, vcc
-; GFX8-NEXT: v_cmp_gt_u64_e32 vcc, v[2:3], v[0:1]
-; GFX8-NEXT: v_cndmask_b32_e64 v0, v2, 0, vcc
-; GFX8-NEXT: v_cndmask_b32_e64 v1, v3, 0, vcc
+; GFX8-NEXT: v_sub_u32_e32 v0, vcc, v0, v2
+; GFX8-NEXT: v_subb_u32_e32 v1, vcc, v1, v3, vcc
+; GFX8-NEXT: v_cndmask_b32_e64 v0, v0, 0, vcc
+; GFX8-NEXT: v_cndmask_b32_e64 v1, v1, 0, vcc
 ; GFX8-NEXT: s_setpc_b64 s[30:31]
 ;
 ; GFX9-LABEL: v_usubsat_i64:
 ; GFX9: ; %bb.0:
 ; GFX9-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
-; GFX9-NEXT: v_sub_co_u32_e32 v2, vcc, v0, v2
-; GFX9-NEXT: v_subb_co_u32_e32 v3, vcc, v1, v3, vcc
-; GFX9-NEXT: v_cmp_gt_u64_e32 vcc, v[2:3], v[0:1]
-; GFX9-NEXT: v_cndmask_b32_e64 v0, v2, 0, vcc
-; GFX9-NEXT: v_cndmask_b32_e64 v1, v3, 0, vcc
+; GFX9-NEXT: v_sub_co_u32_e32 v0, vcc, v0, v2
+; GFX9-NEXT: v_subb_co_u32_e32 v1, vcc, v1, v3, vcc
+; GFX9-NEXT: v_cndmask_b32_e64 v0, v0, 0, vcc
+; GFX9-NEXT: v_cndmask_b32_e64 v1, v1, 0, vcc
 ; GFX9-NEXT: s_setpc_b64 s[30:31]
 ;
-; GFX10-LABEL: v_usubsat_i64:
-; GFX10: ; %bb.0:
-; GFX10-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
-; GFX10-NEXT: v_sub_co_u32 v2, vcc_lo, v0, v2
-; GFX10-NEXT: v_sub_co_ci_u32_e32 v3, vcc_lo, v1, v3, vcc_lo
-; GFX10-NEXT: v_cmp_gt_u64_e32 vcc_lo, v[2:3], v[0:1]
-; GFX10-NEXT: v_cndmask_b32_e64 v0, v2, 0, vcc_lo
-; GFX10-NEXT: v_cndmask_b32_e64 v1, v3, 0, vcc_lo
-; GFX10-NEXT: s_setpc_b64 s[30:31]
-;
-; GFX11-LABEL: v_usubsat_i64:
-; GFX11: ; %bb.0:
-; GFX11-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
-; GFX11-NEXT: v_sub_co_u32 v2, vcc_lo, v0, v2
-; GFX11-NEXT: v_sub_co_ci_u32_e64 v3, null, v1, v3, vcc_lo
-; GFX11-NEXT: v_cmp_gt_u64_e32 vcc_lo, v[2:3], v[0:1]
-; GFX11-NEXT: v_cndmask_b32_e64 v0, v2, 0, vcc_lo
-; GFX11-NEXT: v_cndmask_b32_e64 v1, v3, 0, vcc_lo
-; GFX11-NEXT: s_setpc_b64 s[30:31]
+; GFX10PLUS-LABEL: v_usubsat_i64:
+; GFX10PLUS: ; %bb.0:
+; GFX10PLUS-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; GFX10PLUS-NEXT: v_sub_co_u32 v0, vcc_lo, v0, v2
+; GFX10PLUS-NEXT: v_sub_co_ci_u32_e32 v1, vcc_lo, v1, v3, vcc_lo
+; GFX10PLUS-NEXT: v_cndmask_b32_e64 v0, v0, 0, vcc_lo
+; GFX10PLUS-NEXT: v_cndmask_b32_e64 v1, v1, 0, vcc_lo
+; GFX10PLUS-NEXT: s_setpc_b64 s[30:31]
 %result = call i64 @llvm.usub.sat.i64(i64 %lhs, i64 %rhs)
 ret i64 %result
}
diff --git a/llvm/test/CodeGen/DirectX/ContainerData/RootSignature-RootDescriptor-Invalid-Flags_V1.ll b/llvm/test/CodeGen/DirectX/ContainerData/RootSignature-RootDescriptor-Invalid-Flags_V1.ll
new file mode 100644
index 0000000..610ce4f
--- /dev/null
+++ b/llvm/test/CodeGen/DirectX/ContainerData/RootSignature-RootDescriptor-Invalid-Flags_V1.ll
@@ -0,0 +1,18 @@
+; RUN: not opt -passes='print<dxil-root-signature>' %s -S -o - 2>&1 | FileCheck %s
+; In Version 1, the only valid root descriptor flag is DataVolatile (2).
+target triple = "dxil-unknown-shadermodel6.0-compute" + + +; CHECK: error: Invalid value for RootDescriptorFlag: 4 +; CHECK-NOT: Root Signature Definitions +define void @main() #0 { +entry: + ret void +} +attributes #0 = { "hlsl.numthreads"="1,1,1" "hlsl.shader"="compute" } + + +!dx.rootsignatures = !{!2} ; list of function/root signature pairs +!2 = !{ ptr @main, !3, i32 1 } ; function, root signature +!3 = !{ !5 } ; list of root signature elements +!5 = !{ !"RootCBV", i32 0, i32 1, i32 2, i32 4 } diff --git a/llvm/test/CodeGen/DirectX/ContainerData/RootSignature-StaticSamplers-Invalid-Flag_V1.ll b/llvm/test/CodeGen/DirectX/ContainerData/RootSignature-StaticSamplers-Invalid-Flag_V1.ll new file mode 100644 index 0000000..76b60b8 --- /dev/null +++ b/llvm/test/CodeGen/DirectX/ContainerData/RootSignature-StaticSamplers-Invalid-Flag_V1.ll @@ -0,0 +1,19 @@ +; RUN: not opt -passes='print<dxil-root-signature>' %s -S -o - 2>&1 | FileCheck %s + + +target triple = "dxil-unknown-shadermodel6.0-compute" + +; CHECK: error: Invalid value for Static Sampler Flag: 1 +; CHECK-NOT: Root Signature Definitions + +define void @main() #0 { +entry: + ret void +} +attributes #0 = { "hlsl.numthreads"="1,1,1" "hlsl.shader"="compute" } + + +!dx.rootsignatures = !{!2} ; list of function/root signature pairs +!2 = !{ ptr @main, !3, i32 1 } ; function, root signature +!3 = !{ !5 } ; list of root signature elements +!5 = !{ !"StaticSampler", i32 4, i32 2, i32 3, i32 5, float 0x3FF6CCCCC0000000, i32 9, i32 3, i32 2, float -1.280000e+02, float 1.280000e+02, i32 42, i32 0, i32 0, i32 1 } diff --git a/llvm/test/CodeGen/Hexagon/fmul-v67.ll b/llvm/test/CodeGen/Hexagon/fmul-v67.ll index 49098cd..fc0b7f7 100644 --- a/llvm/test/CodeGen/Hexagon/fmul-v67.ll +++ b/llvm/test/CodeGen/Hexagon/fmul-v67.ll @@ -29,7 +29,7 @@ b2: ; CHECK: [[R22]] += dfmpylh([[R20]],[[R21]]) ; CHECK: [[R22]] += dfmpylh([[R21]],[[R20]]) ; CHECK: [[R22]] += dfmpyhh([[R20]],[[R21]]) -define double @test_02(double %a0, double %a1) #2 { +define double @test_02(double %a0, double %a1) #1 { b2: %v3 = fmul double %a0, %a1 ret double %v3 @@ -40,13 +40,11 @@ b2: ; CHECK: [[R30]] += dfmpylh(r1:0,r3:2) ; CHECK: [[R30]] += dfmpylh(r3:2,r1:0) ; CHECK: [[R30]] += dfmpyhh(r1:0,r3:2) -define double @test_03(double %a0, double %a1) #3 { +define double @test_03(double %a0, double %a1) #1 { b2: - %v3 = fmul double %a0, %a1 + %v3 = fmul afn double %a0, %a1 ret double %v3 } attributes #0 = { nounwind } attributes #1 = { nounwind "target-cpu"="hexagonv67" } -attributes #2 = { nounwind "target-cpu"="hexagonv67" "unsafe-fp-math"="false" } -attributes #3 = { nounwind "target-cpu"="hexagonv67" "unsafe-fp-math"="true" } diff --git a/llvm/test/CodeGen/MIR2Vec/vocab-error-handling.ll b/llvm/test/CodeGen/MIR2Vec/vocab-error-handling.ll index 1da516a..80b4048 100644 --- a/llvm/test/CodeGen/MIR2Vec/vocab-error-handling.ll +++ b/llvm/test/CodeGen/MIR2Vec/vocab-error-handling.ll @@ -1,15 +1,15 @@ ; REQUIRES: x86_64-linux -; RUN: not llc -o /dev/null -print-mir2vec-vocab %s 2>&1 | FileCheck %s --check-prefix=CHECK-INVALID -; RUN: not llc -o /dev/null -print-mir2vec-vocab -mir2vec-vocab-path=%S/Inputs/mir2vec_zero_vocab.json %s 2>&1 | FileCheck %s --check-prefix=CHECK-ZERO-DIM -; RUN: not llc -o /dev/null -print-mir2vec-vocab -mir2vec-vocab-path=%S/Inputs/mir2vec_invalid_vocab.json %s 2>&1 | FileCheck %s --check-prefix=CHECK-NO-ENTITIES -; RUN: not llc -o /dev/null -print-mir2vec-vocab -mir2vec-vocab-path=%S/Inputs/mir2vec_inconsistent_dims.json %s 2>&1 | FileCheck %s 
--check-prefix=CHECK-INCONSISTENT-DIMS +; RUN: llc -o /dev/null -print-mir2vec-vocab %s 2>&1 | FileCheck %s --check-prefix=CHECK-INVALID +; RUN: llc -o /dev/null -print-mir2vec-vocab -mir2vec-vocab-path=%S/Inputs/mir2vec_zero_vocab.json %s 2>&1 | FileCheck %s --check-prefix=CHECK-ZERO-DIM +; RUN: llc -o /dev/null -print-mir2vec-vocab -mir2vec-vocab-path=%S/Inputs/mir2vec_invalid_vocab.json %s 2>&1 | FileCheck %s --check-prefix=CHECK-NO-ENTITIES +; RUN: llc -o /dev/null -print-mir2vec-vocab -mir2vec-vocab-path=%S/Inputs/mir2vec_inconsistent_dims.json %s 2>&1 | FileCheck %s --check-prefix=CHECK-INCONSISTENT-DIMS define dso_local void @test() { entry: ret void } -; CHECK-INVALID: error: MIR2Vec vocabulary file path not specified; set it using --mir2vec-vocab-path -; CHECK-ZERO-DIM: error: Dimension of 'entities' section of the vocabulary is zero -; CHECK-NO-ENTITIES: error: Missing 'entities' section in vocabulary file -; CHECK-INCONSISTENT-DIMS: error: All vectors in the 'entities' section of the vocabulary are not of the same dimension +; CHECK-INVALID: MIR2Vec Vocabulary Printer: Failed to get vocabulary - MIR2Vec vocabulary file path not specified; set it using --mir2vec-vocab-path +; CHECK-ZERO-DIM: MIR2Vec Vocabulary Printer: Failed to get vocabulary - Dimension of 'entities' section of the vocabulary is zero +; CHECK-NO-ENTITIES: MIR2Vec Vocabulary Printer: Failed to get vocabulary - Missing 'entities' section in vocabulary file +; CHECK-INCONSISTENT-DIMS: MIR2Vec Vocabulary Printer: Failed to get vocabulary - All vectors in the 'entities' section of the vocabulary are not of the same dimension diff --git a/llvm/test/CodeGen/NVPTX/lower-ctor-dtor.ll b/llvm/test/CodeGen/NVPTX/lower-ctor-dtor.ll index 02118fb..b503da4 100644 --- a/llvm/test/CodeGen/NVPTX/lower-ctor-dtor.ll +++ b/llvm/test/CodeGen/NVPTX/lower-ctor-dtor.ll @@ -72,7 +72,7 @@ define internal void @bar() { ; CHECK-NEXT: [[OFFSET:%.*]] = ashr exact i64 [[TMP2]], 3 ; CHECK-NEXT: [[TMP3:%.*]] = getelementptr ptr, ptr addrspace(1) [[BEGIN]], i64 [[OFFSET]] ; CHECK-NEXT: [[START:%.*]] = getelementptr inbounds ptr, ptr addrspace(1) [[TMP3]], i64 -1 -; CHECK-NEXT: [[TMP4:%.*]] = icmp ugt ptr addrspace(1) [[START]], [[BEGIN]] +; CHECK-NEXT: [[TMP4:%.*]] = icmp uge ptr addrspace(1) [[START]], [[BEGIN]] ; CHECK-NEXT: br i1 [[TMP4]], label [[WHILE_ENTRY:%.*]], label [[WHILE_END:%.*]] ; CHECK: while.entry: ; CHECK-NEXT: [[PTR:%.*]] = phi ptr addrspace(1) [ [[START]], [[ENTRY:%.*]] ], [ [[NEXT:%.*]], [[WHILE_ENTRY]] ] diff --git a/llvm/test/CodeGen/NVPTX/tcgen05-alloc.ll b/llvm/test/CodeGen/NVPTX/tcgen05-alloc.ll index 41a0e81..1edb387 100644 --- a/llvm/test/CodeGen/NVPTX/tcgen05-alloc.ll +++ b/llvm/test/CodeGen/NVPTX/tcgen05-alloc.ll @@ -12,63 +12,104 @@ declare void @llvm.nvvm.tcgen05.alloc.cg2(ptr %addr, i32 %ncols) declare void @llvm.nvvm.tcgen05.alloc.shared.cg1(ptr addrspace(3) %addr, i32 %ncols) declare void @llvm.nvvm.tcgen05.alloc.shared.cg2(ptr addrspace(3) %addr, i32 %ncols) -; CHECK-LABEL: test_tcgen05_alloc -define void @test_tcgen05_alloc(ptr %addr, i32 %ncols) { -; CHECK_PTX64-LABEL: test_tcgen05_alloc( +define void @test_tcgen05_alloc_cg1(ptr %addr, i32 %ncols) { +; CHECK_PTX64-LABEL: test_tcgen05_alloc_cg1( ; CHECK_PTX64: { ; CHECK_PTX64-NEXT: .reg .b32 %r<2>; ; CHECK_PTX64-NEXT: .reg .b64 %rd<2>; ; CHECK_PTX64-EMPTY: ; CHECK_PTX64-NEXT: // %bb.0: -; CHECK_PTX64-NEXT: ld.param.b64 %rd1, [test_tcgen05_alloc_param_0]; -; CHECK_PTX64-NEXT: ld.param.b32 %r1, [test_tcgen05_alloc_param_1]; +; CHECK_PTX64-NEXT: ld.param.b64 
%rd1, [test_tcgen05_alloc_cg1_param_0]; +; CHECK_PTX64-NEXT: ld.param.b32 %r1, [test_tcgen05_alloc_cg1_param_1]; ; CHECK_PTX64-NEXT: tcgen05.alloc.cta_group::1.sync.aligned.b32 [%rd1], %r1; -; CHECK_PTX64-NEXT: tcgen05.alloc.cta_group::2.sync.aligned.b32 [%rd1], %r1; ; CHECK_PTX64-NEXT: ret; ; -; CHECK_PTX64_SHARED32-LABEL: test_tcgen05_alloc( +; CHECK_PTX64_SHARED32-LABEL: test_tcgen05_alloc_cg1( ; CHECK_PTX64_SHARED32: { ; CHECK_PTX64_SHARED32-NEXT: .reg .b32 %r<2>; ; CHECK_PTX64_SHARED32-NEXT: .reg .b64 %rd<2>; ; CHECK_PTX64_SHARED32-EMPTY: ; CHECK_PTX64_SHARED32-NEXT: // %bb.0: -; CHECK_PTX64_SHARED32-NEXT: ld.param.b64 %rd1, [test_tcgen05_alloc_param_0]; -; CHECK_PTX64_SHARED32-NEXT: ld.param.b32 %r1, [test_tcgen05_alloc_param_1]; +; CHECK_PTX64_SHARED32-NEXT: ld.param.b64 %rd1, [test_tcgen05_alloc_cg1_param_0]; +; CHECK_PTX64_SHARED32-NEXT: ld.param.b32 %r1, [test_tcgen05_alloc_cg1_param_1]; ; CHECK_PTX64_SHARED32-NEXT: tcgen05.alloc.cta_group::1.sync.aligned.b32 [%rd1], %r1; -; CHECK_PTX64_SHARED32-NEXT: tcgen05.alloc.cta_group::2.sync.aligned.b32 [%rd1], %r1; ; CHECK_PTX64_SHARED32-NEXT: ret; call void @llvm.nvvm.tcgen05.alloc.cg1(ptr %addr, i32 %ncols) - call void @llvm.nvvm.tcgen05.alloc.cg2(ptr %addr, i32 %ncols) + ret void +} +define void @test_tcgen05_alloc_cg2(ptr %addr, i32 %ncols) { +; CHECK_PTX64-LABEL: test_tcgen05_alloc_cg2( +; CHECK_PTX64: { +; CHECK_PTX64-NEXT: .reg .b32 %r<2>; +; CHECK_PTX64-NEXT: .reg .b64 %rd<2>; +; CHECK_PTX64-EMPTY: +; CHECK_PTX64-NEXT: // %bb.0: +; CHECK_PTX64-NEXT: ld.param.b64 %rd1, [test_tcgen05_alloc_cg2_param_0]; +; CHECK_PTX64-NEXT: ld.param.b32 %r1, [test_tcgen05_alloc_cg2_param_1]; +; CHECK_PTX64-NEXT: tcgen05.alloc.cta_group::2.sync.aligned.b32 [%rd1], %r1; +; CHECK_PTX64-NEXT: ret; +; +; CHECK_PTX64_SHARED32-LABEL: test_tcgen05_alloc_cg2( +; CHECK_PTX64_SHARED32: { +; CHECK_PTX64_SHARED32-NEXT: .reg .b32 %r<2>; +; CHECK_PTX64_SHARED32-NEXT: .reg .b64 %rd<2>; +; CHECK_PTX64_SHARED32-EMPTY: +; CHECK_PTX64_SHARED32-NEXT: // %bb.0: +; CHECK_PTX64_SHARED32-NEXT: ld.param.b64 %rd1, [test_tcgen05_alloc_cg2_param_0]; +; CHECK_PTX64_SHARED32-NEXT: ld.param.b32 %r1, [test_tcgen05_alloc_cg2_param_1]; +; CHECK_PTX64_SHARED32-NEXT: tcgen05.alloc.cta_group::2.sync.aligned.b32 [%rd1], %r1; +; CHECK_PTX64_SHARED32-NEXT: ret; + call void @llvm.nvvm.tcgen05.alloc.cg2(ptr %addr, i32 %ncols) ret void } -; CHECK-LABEL: test_tcgen05_alloc_shared -define void @test_tcgen05_alloc_shared(ptr addrspace(3) %addr, i32 %ncols) { -; CHECK_PTX64-LABEL: test_tcgen05_alloc_shared( +define void @test_tcgen05_alloc_shared_cg1(ptr addrspace(3) %addr, i32 %ncols) { +; CHECK_PTX64-LABEL: test_tcgen05_alloc_shared_cg1( ; CHECK_PTX64: { ; CHECK_PTX64-NEXT: .reg .b32 %r<2>; ; CHECK_PTX64-NEXT: .reg .b64 %rd<2>; ; CHECK_PTX64-EMPTY: ; CHECK_PTX64-NEXT: // %bb.0: -; CHECK_PTX64-NEXT: ld.param.b64 %rd1, [test_tcgen05_alloc_shared_param_0]; -; CHECK_PTX64-NEXT: ld.param.b32 %r1, [test_tcgen05_alloc_shared_param_1]; +; CHECK_PTX64-NEXT: ld.param.b64 %rd1, [test_tcgen05_alloc_shared_cg1_param_0]; +; CHECK_PTX64-NEXT: ld.param.b32 %r1, [test_tcgen05_alloc_shared_cg1_param_1]; ; CHECK_PTX64-NEXT: tcgen05.alloc.cta_group::1.sync.aligned.shared::cta.b32 [%rd1], %r1; -; CHECK_PTX64-NEXT: tcgen05.alloc.cta_group::2.sync.aligned.shared::cta.b32 [%rd1], %r1; ; CHECK_PTX64-NEXT: ret; ; -; CHECK_PTX64_SHARED32-LABEL: test_tcgen05_alloc_shared( +; CHECK_PTX64_SHARED32-LABEL: test_tcgen05_alloc_shared_cg1( ; CHECK_PTX64_SHARED32: { ; CHECK_PTX64_SHARED32-NEXT: .reg .b32 %r<3>; ; 
CHECK_PTX64_SHARED32-EMPTY: ; CHECK_PTX64_SHARED32-NEXT: // %bb.0: -; CHECK_PTX64_SHARED32-NEXT: ld.param.b32 %r1, [test_tcgen05_alloc_shared_param_0]; -; CHECK_PTX64_SHARED32-NEXT: ld.param.b32 %r2, [test_tcgen05_alloc_shared_param_1]; +; CHECK_PTX64_SHARED32-NEXT: ld.param.b32 %r1, [test_tcgen05_alloc_shared_cg1_param_0]; +; CHECK_PTX64_SHARED32-NEXT: ld.param.b32 %r2, [test_tcgen05_alloc_shared_cg1_param_1]; ; CHECK_PTX64_SHARED32-NEXT: tcgen05.alloc.cta_group::1.sync.aligned.shared::cta.b32 [%r1], %r2; -; CHECK_PTX64_SHARED32-NEXT: tcgen05.alloc.cta_group::2.sync.aligned.shared::cta.b32 [%r1], %r2; ; CHECK_PTX64_SHARED32-NEXT: ret; call void @llvm.nvvm.tcgen05.alloc.shared.cg1(ptr addrspace(3) %addr, i32 %ncols) + ret void +} +define void @test_tcgen05_alloc_shared_cg2(ptr addrspace(3) %addr, i32 %ncols) { +; CHECK_PTX64-LABEL: test_tcgen05_alloc_shared_cg2( +; CHECK_PTX64: { +; CHECK_PTX64-NEXT: .reg .b32 %r<2>; +; CHECK_PTX64-NEXT: .reg .b64 %rd<2>; +; CHECK_PTX64-EMPTY: +; CHECK_PTX64-NEXT: // %bb.0: +; CHECK_PTX64-NEXT: ld.param.b64 %rd1, [test_tcgen05_alloc_shared_cg2_param_0]; +; CHECK_PTX64-NEXT: ld.param.b32 %r1, [test_tcgen05_alloc_shared_cg2_param_1]; +; CHECK_PTX64-NEXT: tcgen05.alloc.cta_group::2.sync.aligned.shared::cta.b32 [%rd1], %r1; +; CHECK_PTX64-NEXT: ret; +; +; CHECK_PTX64_SHARED32-LABEL: test_tcgen05_alloc_shared_cg2( +; CHECK_PTX64_SHARED32: { +; CHECK_PTX64_SHARED32-NEXT: .reg .b32 %r<3>; +; CHECK_PTX64_SHARED32-EMPTY: +; CHECK_PTX64_SHARED32-NEXT: // %bb.0: +; CHECK_PTX64_SHARED32-NEXT: ld.param.b32 %r1, [test_tcgen05_alloc_shared_cg2_param_0]; +; CHECK_PTX64_SHARED32-NEXT: ld.param.b32 %r2, [test_tcgen05_alloc_shared_cg2_param_1]; +; CHECK_PTX64_SHARED32-NEXT: tcgen05.alloc.cta_group::2.sync.aligned.shared::cta.b32 [%r1], %r2; +; CHECK_PTX64_SHARED32-NEXT: ret; call void @llvm.nvvm.tcgen05.alloc.shared.cg2(ptr addrspace(3) %addr, i32 %ncols) ret void } @@ -76,31 +117,50 @@ define void @test_tcgen05_alloc_shared(ptr addrspace(3) %addr, i32 %ncols) { declare void @llvm.nvvm.tcgen05.dealloc.cg1(ptr addrspace(6) %tmem_addr, i32 %ncols) declare void @llvm.nvvm.tcgen05.dealloc.cg2(ptr addrspace(6) %tmem_addr, i32 %ncols) -; CHECK-LABEL: test_tcgen05_dealloc -define void @test_tcgen05_dealloc(ptr addrspace(6) %tmem_addr, i32 %ncols) { -; CHECK_PTX64-LABEL: test_tcgen05_dealloc( +define void @test_tcgen05_dealloc_cg1(ptr addrspace(6) %tmem_addr, i32 %ncols) { +; CHECK_PTX64-LABEL: test_tcgen05_dealloc_cg1( ; CHECK_PTX64: { ; CHECK_PTX64-NEXT: .reg .b32 %r<3>; ; CHECK_PTX64-EMPTY: ; CHECK_PTX64-NEXT: // %bb.0: -; CHECK_PTX64-NEXT: ld.param.b32 %r1, [test_tcgen05_dealloc_param_0]; -; CHECK_PTX64-NEXT: ld.param.b32 %r2, [test_tcgen05_dealloc_param_1]; +; CHECK_PTX64-NEXT: ld.param.b32 %r1, [test_tcgen05_dealloc_cg1_param_0]; +; CHECK_PTX64-NEXT: ld.param.b32 %r2, [test_tcgen05_dealloc_cg1_param_1]; ; CHECK_PTX64-NEXT: tcgen05.dealloc.cta_group::1.sync.aligned.b32 %r1, %r2; -; CHECK_PTX64-NEXT: tcgen05.dealloc.cta_group::2.sync.aligned.b32 %r1, %r2; ; CHECK_PTX64-NEXT: ret; ; -; CHECK_PTX64_SHARED32-LABEL: test_tcgen05_dealloc( +; CHECK_PTX64_SHARED32-LABEL: test_tcgen05_dealloc_cg1( ; CHECK_PTX64_SHARED32: { ; CHECK_PTX64_SHARED32-NEXT: .reg .b32 %r<3>; ; CHECK_PTX64_SHARED32-EMPTY: ; CHECK_PTX64_SHARED32-NEXT: // %bb.0: -; CHECK_PTX64_SHARED32-NEXT: ld.param.b32 %r1, [test_tcgen05_dealloc_param_0]; -; CHECK_PTX64_SHARED32-NEXT: ld.param.b32 %r2, [test_tcgen05_dealloc_param_1]; +; CHECK_PTX64_SHARED32-NEXT: ld.param.b32 %r1, [test_tcgen05_dealloc_cg1_param_0]; +; 
CHECK_PTX64_SHARED32-NEXT: ld.param.b32 %r2, [test_tcgen05_dealloc_cg1_param_1]; ; CHECK_PTX64_SHARED32-NEXT: tcgen05.dealloc.cta_group::1.sync.aligned.b32 %r1, %r2; -; CHECK_PTX64_SHARED32-NEXT: tcgen05.dealloc.cta_group::2.sync.aligned.b32 %r1, %r2; ; CHECK_PTX64_SHARED32-NEXT: ret; call void @llvm.nvvm.tcgen05.dealloc.cg1(ptr addrspace(6) %tmem_addr, i32 %ncols) + ret void +} +define void @test_tcgen05_dealloc_cg2(ptr addrspace(6) %tmem_addr, i32 %ncols) { +; CHECK_PTX64-LABEL: test_tcgen05_dealloc_cg2( +; CHECK_PTX64: { +; CHECK_PTX64-NEXT: .reg .b32 %r<3>; +; CHECK_PTX64-EMPTY: +; CHECK_PTX64-NEXT: // %bb.0: +; CHECK_PTX64-NEXT: ld.param.b32 %r1, [test_tcgen05_dealloc_cg2_param_0]; +; CHECK_PTX64-NEXT: ld.param.b32 %r2, [test_tcgen05_dealloc_cg2_param_1]; +; CHECK_PTX64-NEXT: tcgen05.dealloc.cta_group::2.sync.aligned.b32 %r1, %r2; +; CHECK_PTX64-NEXT: ret; +; +; CHECK_PTX64_SHARED32-LABEL: test_tcgen05_dealloc_cg2( +; CHECK_PTX64_SHARED32: { +; CHECK_PTX64_SHARED32-NEXT: .reg .b32 %r<3>; +; CHECK_PTX64_SHARED32-EMPTY: +; CHECK_PTX64_SHARED32-NEXT: // %bb.0: +; CHECK_PTX64_SHARED32-NEXT: ld.param.b32 %r1, [test_tcgen05_dealloc_cg2_param_0]; +; CHECK_PTX64_SHARED32-NEXT: ld.param.b32 %r2, [test_tcgen05_dealloc_cg2_param_1]; +; CHECK_PTX64_SHARED32-NEXT: tcgen05.dealloc.cta_group::2.sync.aligned.b32 %r1, %r2; +; CHECK_PTX64_SHARED32-NEXT: ret; call void @llvm.nvvm.tcgen05.dealloc.cg2(ptr addrspace(6) %tmem_addr, i32 %ncols) ret void } @@ -108,27 +168,42 @@ define void @test_tcgen05_dealloc(ptr addrspace(6) %tmem_addr, i32 %ncols) { declare void @llvm.nvvm.tcgen05.relinq.alloc.permit.cg1() declare void @llvm.nvvm.tcgen05.relinq.alloc.permit.cg2() -; CHECK-LABEL: test_tcgen05_relinquish_alloc_permit -define void @test_tcgen05_relinquish_alloc_permit() { -; CHECK_PTX64-LABEL: test_tcgen05_relinquish_alloc_permit( +define void @test_tcgen05_relinquish_alloc_permit_cg1() { +; CHECK_PTX64-LABEL: test_tcgen05_relinquish_alloc_permit_cg1( ; CHECK_PTX64: { ; CHECK_PTX64-EMPTY: ; CHECK_PTX64-EMPTY: ; CHECK_PTX64-NEXT: // %bb.0: ; CHECK_PTX64-NEXT: tcgen05.relinquish_alloc_permit.cta_group::1.sync.aligned; -; CHECK_PTX64-NEXT: tcgen05.relinquish_alloc_permit.cta_group::2.sync.aligned; ; CHECK_PTX64-NEXT: ret; ; -; CHECK_PTX64_SHARED32-LABEL: test_tcgen05_relinquish_alloc_permit( +; CHECK_PTX64_SHARED32-LABEL: test_tcgen05_relinquish_alloc_permit_cg1( ; CHECK_PTX64_SHARED32: { ; CHECK_PTX64_SHARED32-EMPTY: ; CHECK_PTX64_SHARED32-EMPTY: ; CHECK_PTX64_SHARED32-NEXT: // %bb.0: ; CHECK_PTX64_SHARED32-NEXT: tcgen05.relinquish_alloc_permit.cta_group::1.sync.aligned; -; CHECK_PTX64_SHARED32-NEXT: tcgen05.relinquish_alloc_permit.cta_group::2.sync.aligned; ; CHECK_PTX64_SHARED32-NEXT: ret; call void @llvm.nvvm.tcgen05.relinq.alloc.permit.cg1() + ret void +} +define void @test_tcgen05_relinquish_alloc_permit_cg2() { +; CHECK_PTX64-LABEL: test_tcgen05_relinquish_alloc_permit_cg2( +; CHECK_PTX64: { +; CHECK_PTX64-EMPTY: +; CHECK_PTX64-EMPTY: +; CHECK_PTX64-NEXT: // %bb.0: +; CHECK_PTX64-NEXT: tcgen05.relinquish_alloc_permit.cta_group::2.sync.aligned; +; CHECK_PTX64-NEXT: ret; +; +; CHECK_PTX64_SHARED32-LABEL: test_tcgen05_relinquish_alloc_permit_cg2( +; CHECK_PTX64_SHARED32: { +; CHECK_PTX64_SHARED32-EMPTY: +; CHECK_PTX64_SHARED32-EMPTY: +; CHECK_PTX64_SHARED32-NEXT: // %bb.0: +; CHECK_PTX64_SHARED32-NEXT: tcgen05.relinquish_alloc_permit.cta_group::2.sync.aligned; +; CHECK_PTX64_SHARED32-NEXT: ret; call void @llvm.nvvm.tcgen05.relinq.alloc.permit.cg2() ret void } diff --git 
a/llvm/test/CodeGen/NVPTX/tcgen05-commit.ll b/llvm/test/CodeGen/NVPTX/tcgen05-commit.ll index 7981feb..2e80c4c 100644 --- a/llvm/test/CodeGen/NVPTX/tcgen05-commit.ll +++ b/llvm/test/CodeGen/NVPTX/tcgen05-commit.ll @@ -11,57 +11,93 @@ declare void @llvm.nvvm.tcgen05.commit.cg2(ptr %bar_addr) declare void @llvm.nvvm.tcgen05.commit.shared.cg1(ptr addrspace(3) %bar_addr) declare void @llvm.nvvm.tcgen05.commit.shared.cg2(ptr addrspace(3) %bar_addr) -; CHECK-LABEL: test_tcgen05_commit -define void @test_tcgen05_commit(ptr %bar_addr) { -; CHECK_PTX64-LABEL: test_tcgen05_commit( +define void @test_tcgen05_commit_cg1(ptr %bar_addr) { +; CHECK_PTX64-LABEL: test_tcgen05_commit_cg1( ; CHECK_PTX64: { ; CHECK_PTX64-NEXT: .reg .b64 %rd<2>; ; CHECK_PTX64-EMPTY: ; CHECK_PTX64-NEXT: // %bb.0: -; CHECK_PTX64-NEXT: ld.param.b64 %rd1, [test_tcgen05_commit_param_0]; +; CHECK_PTX64-NEXT: ld.param.b64 %rd1, [test_tcgen05_commit_cg1_param_0]; ; CHECK_PTX64-NEXT: tcgen05.commit.cta_group::1.mbarrier::arrive::one.shared::cluster.b64 [%rd1]; -; CHECK_PTX64-NEXT: tcgen05.commit.cta_group::2.mbarrier::arrive::one.shared::cluster.b64 [%rd1]; ; CHECK_PTX64-NEXT: ret; ; -; CHECK_PTX64_SHARED32-LABEL: test_tcgen05_commit( +; CHECK_PTX64_SHARED32-LABEL: test_tcgen05_commit_cg1( ; CHECK_PTX64_SHARED32: { ; CHECK_PTX64_SHARED32-NEXT: .reg .b64 %rd<2>; ; CHECK_PTX64_SHARED32-EMPTY: ; CHECK_PTX64_SHARED32-NEXT: // %bb.0: -; CHECK_PTX64_SHARED32-NEXT: ld.param.b64 %rd1, [test_tcgen05_commit_param_0]; +; CHECK_PTX64_SHARED32-NEXT: ld.param.b64 %rd1, [test_tcgen05_commit_cg1_param_0]; ; CHECK_PTX64_SHARED32-NEXT: tcgen05.commit.cta_group::1.mbarrier::arrive::one.shared::cluster.b64 [%rd1]; -; CHECK_PTX64_SHARED32-NEXT: tcgen05.commit.cta_group::2.mbarrier::arrive::one.shared::cluster.b64 [%rd1]; ; CHECK_PTX64_SHARED32-NEXT: ret; call void @llvm.nvvm.tcgen05.commit.cg1(ptr %bar_addr) + ret void +} + +define void @test_tcgen05_commit_cg2(ptr %bar_addr) { +; CHECK_PTX64-LABEL: test_tcgen05_commit_cg2( +; CHECK_PTX64: { +; CHECK_PTX64-NEXT: .reg .b64 %rd<2>; +; CHECK_PTX64-EMPTY: +; CHECK_PTX64-NEXT: // %bb.0: +; CHECK_PTX64-NEXT: ld.param.b64 %rd1, [test_tcgen05_commit_cg2_param_0]; +; CHECK_PTX64-NEXT: tcgen05.commit.cta_group::2.mbarrier::arrive::one.shared::cluster.b64 [%rd1]; +; CHECK_PTX64-NEXT: ret; +; +; CHECK_PTX64_SHARED32-LABEL: test_tcgen05_commit_cg2( +; CHECK_PTX64_SHARED32: { +; CHECK_PTX64_SHARED32-NEXT: .reg .b64 %rd<2>; +; CHECK_PTX64_SHARED32-EMPTY: +; CHECK_PTX64_SHARED32-NEXT: // %bb.0: +; CHECK_PTX64_SHARED32-NEXT: ld.param.b64 %rd1, [test_tcgen05_commit_cg2_param_0]; +; CHECK_PTX64_SHARED32-NEXT: tcgen05.commit.cta_group::2.mbarrier::arrive::one.shared::cluster.b64 [%rd1]; +; CHECK_PTX64_SHARED32-NEXT: ret; call void @llvm.nvvm.tcgen05.commit.cg2(ptr %bar_addr) ret void } -; CHECK-LABEL: test_tcgen05_commit_shared -define void @test_tcgen05_commit_shared(ptr addrspace(3) %bar_addr) { -; CHECK_PTX64-LABEL: test_tcgen05_commit_shared( +define void @test_tcgen05_commit_shared_cg1(ptr addrspace(3) %bar_addr) { +; CHECK_PTX64-LABEL: test_tcgen05_commit_shared_cg1( ; CHECK_PTX64: { ; CHECK_PTX64-NEXT: .reg .b64 %rd<2>; ; CHECK_PTX64-EMPTY: ; CHECK_PTX64-NEXT: // %bb.0: -; CHECK_PTX64-NEXT: ld.param.b64 %rd1, [test_tcgen05_commit_shared_param_0]; +; CHECK_PTX64-NEXT: ld.param.b64 %rd1, [test_tcgen05_commit_shared_cg1_param_0]; ; CHECK_PTX64-NEXT: tcgen05.commit.cta_group::1.mbarrier::arrive::one.shared::cluster.b64 [%rd1]; -; CHECK_PTX64-NEXT: tcgen05.commit.cta_group::2.mbarrier::arrive::one.shared::cluster.b64 
[%rd1]; ; CHECK_PTX64-NEXT: ret; ; -; CHECK_PTX64_SHARED32-LABEL: test_tcgen05_commit_shared( +; CHECK_PTX64_SHARED32-LABEL: test_tcgen05_commit_shared_cg1( ; CHECK_PTX64_SHARED32: { ; CHECK_PTX64_SHARED32-NEXT: .reg .b32 %r<2>; ; CHECK_PTX64_SHARED32-EMPTY: ; CHECK_PTX64_SHARED32-NEXT: // %bb.0: -; CHECK_PTX64_SHARED32-NEXT: ld.param.b32 %r1, [test_tcgen05_commit_shared_param_0]; +; CHECK_PTX64_SHARED32-NEXT: ld.param.b32 %r1, [test_tcgen05_commit_shared_cg1_param_0]; ; CHECK_PTX64_SHARED32-NEXT: tcgen05.commit.cta_group::1.mbarrier::arrive::one.shared::cluster.b64 [%r1]; -; CHECK_PTX64_SHARED32-NEXT: tcgen05.commit.cta_group::2.mbarrier::arrive::one.shared::cluster.b64 [%r1]; ; CHECK_PTX64_SHARED32-NEXT: ret; call void @llvm.nvvm.tcgen05.commit.shared.cg1(ptr addrspace(3) %bar_addr) + ret void +} + +define void @test_tcgen05_commit_shared_cg2(ptr addrspace(3) %bar_addr) { +; CHECK_PTX64-LABEL: test_tcgen05_commit_shared_cg2( +; CHECK_PTX64: { +; CHECK_PTX64-NEXT: .reg .b64 %rd<2>; +; CHECK_PTX64-EMPTY: +; CHECK_PTX64-NEXT: // %bb.0: +; CHECK_PTX64-NEXT: ld.param.b64 %rd1, [test_tcgen05_commit_shared_cg2_param_0]; +; CHECK_PTX64-NEXT: tcgen05.commit.cta_group::2.mbarrier::arrive::one.shared::cluster.b64 [%rd1]; +; CHECK_PTX64-NEXT: ret; +; +; CHECK_PTX64_SHARED32-LABEL: test_tcgen05_commit_shared_cg2( +; CHECK_PTX64_SHARED32: { +; CHECK_PTX64_SHARED32-NEXT: .reg .b32 %r<2>; +; CHECK_PTX64_SHARED32-EMPTY: +; CHECK_PTX64_SHARED32-NEXT: // %bb.0: +; CHECK_PTX64_SHARED32-NEXT: ld.param.b32 %r1, [test_tcgen05_commit_shared_cg2_param_0]; +; CHECK_PTX64_SHARED32-NEXT: tcgen05.commit.cta_group::2.mbarrier::arrive::one.shared::cluster.b64 [%r1]; +; CHECK_PTX64_SHARED32-NEXT: ret; call void @llvm.nvvm.tcgen05.commit.shared.cg2(ptr addrspace(3) %bar_addr) ret void @@ -72,66 +108,106 @@ declare void @llvm.nvvm.tcgen05.commit.mc.cg2(ptr %bar_addr, i16 %cta_mask) declare void @llvm.nvvm.tcgen05.commit.mc.shared.cg1(ptr addrspace(3) %bar_addr, i16 %cta_mask) declare void @llvm.nvvm.tcgen05.commit.mc.shared.cg2(ptr addrspace(3) %bar_addr, i16 %cta_mask) -; CHECK-LABEL: test_tcgen05_commit_mc -define void @test_tcgen05_commit_mc(ptr %bar_addr, i16 %cta_mask) { -; CHECK_PTX64-LABEL: test_tcgen05_commit_mc( +define void @test_tcgen05_commit_mc_cg1(ptr %bar_addr, i16 %cta_mask) { +; CHECK_PTX64-LABEL: test_tcgen05_commit_mc_cg1( ; CHECK_PTX64: { ; CHECK_PTX64-NEXT: .reg .b16 %rs<2>; ; CHECK_PTX64-NEXT: .reg .b64 %rd<2>; ; CHECK_PTX64-EMPTY: ; CHECK_PTX64-NEXT: // %bb.0: -; CHECK_PTX64-NEXT: ld.param.b64 %rd1, [test_tcgen05_commit_mc_param_0]; -; CHECK_PTX64-NEXT: ld.param.b16 %rs1, [test_tcgen05_commit_mc_param_1]; +; CHECK_PTX64-NEXT: ld.param.b64 %rd1, [test_tcgen05_commit_mc_cg1_param_0]; +; CHECK_PTX64-NEXT: ld.param.b16 %rs1, [test_tcgen05_commit_mc_cg1_param_1]; ; CHECK_PTX64-NEXT: tcgen05.commit.cta_group::1.mbarrier::arrive::one.shared::cluster.multicast::cluster.b64 [%rd1], %rs1; -; CHECK_PTX64-NEXT: tcgen05.commit.cta_group::2.mbarrier::arrive::one.shared::cluster.multicast::cluster.b64 [%rd1], %rs1; ; CHECK_PTX64-NEXT: ret; ; -; CHECK_PTX64_SHARED32-LABEL: test_tcgen05_commit_mc( +; CHECK_PTX64_SHARED32-LABEL: test_tcgen05_commit_mc_cg1( ; CHECK_PTX64_SHARED32: { ; CHECK_PTX64_SHARED32-NEXT: .reg .b16 %rs<2>; ; CHECK_PTX64_SHARED32-NEXT: .reg .b64 %rd<2>; ; CHECK_PTX64_SHARED32-EMPTY: ; CHECK_PTX64_SHARED32-NEXT: // %bb.0: -; CHECK_PTX64_SHARED32-NEXT: ld.param.b64 %rd1, [test_tcgen05_commit_mc_param_0]; -; CHECK_PTX64_SHARED32-NEXT: ld.param.b16 %rs1, [test_tcgen05_commit_mc_param_1]; +; 
CHECK_PTX64_SHARED32-NEXT: ld.param.b64 %rd1, [test_tcgen05_commit_mc_cg1_param_0]; +; CHECK_PTX64_SHARED32-NEXT: ld.param.b16 %rs1, [test_tcgen05_commit_mc_cg1_param_1]; ; CHECK_PTX64_SHARED32-NEXT: tcgen05.commit.cta_group::1.mbarrier::arrive::one.shared::cluster.multicast::cluster.b64 [%rd1], %rs1; -; CHECK_PTX64_SHARED32-NEXT: tcgen05.commit.cta_group::2.mbarrier::arrive::one.shared::cluster.multicast::cluster.b64 [%rd1], %rs1; ; CHECK_PTX64_SHARED32-NEXT: ret; call void @llvm.nvvm.tcgen05.commit.mc.cg1(ptr %bar_addr, i16 %cta_mask) + ret void +} +define void @test_tcgen05_commit_mc_cg2(ptr %bar_addr, i16 %cta_mask) { +; CHECK_PTX64-LABEL: test_tcgen05_commit_mc_cg2( +; CHECK_PTX64: { +; CHECK_PTX64-NEXT: .reg .b16 %rs<2>; +; CHECK_PTX64-NEXT: .reg .b64 %rd<2>; +; CHECK_PTX64-EMPTY: +; CHECK_PTX64-NEXT: // %bb.0: +; CHECK_PTX64-NEXT: ld.param.b64 %rd1, [test_tcgen05_commit_mc_cg2_param_0]; +; CHECK_PTX64-NEXT: ld.param.b16 %rs1, [test_tcgen05_commit_mc_cg2_param_1]; +; CHECK_PTX64-NEXT: tcgen05.commit.cta_group::2.mbarrier::arrive::one.shared::cluster.multicast::cluster.b64 [%rd1], %rs1; +; CHECK_PTX64-NEXT: ret; +; +; CHECK_PTX64_SHARED32-LABEL: test_tcgen05_commit_mc_cg2( +; CHECK_PTX64_SHARED32: { +; CHECK_PTX64_SHARED32-NEXT: .reg .b16 %rs<2>; +; CHECK_PTX64_SHARED32-NEXT: .reg .b64 %rd<2>; +; CHECK_PTX64_SHARED32-EMPTY: +; CHECK_PTX64_SHARED32-NEXT: // %bb.0: +; CHECK_PTX64_SHARED32-NEXT: ld.param.b64 %rd1, [test_tcgen05_commit_mc_cg2_param_0]; +; CHECK_PTX64_SHARED32-NEXT: ld.param.b16 %rs1, [test_tcgen05_commit_mc_cg2_param_1]; +; CHECK_PTX64_SHARED32-NEXT: tcgen05.commit.cta_group::2.mbarrier::arrive::one.shared::cluster.multicast::cluster.b64 [%rd1], %rs1; +; CHECK_PTX64_SHARED32-NEXT: ret; call void @llvm.nvvm.tcgen05.commit.mc.cg2(ptr %bar_addr, i16 %cta_mask) - ret void } -; CHECK-LABEL: test_tcgen05_commit_mc_shared -define void @test_tcgen05_commit_mc_shared(ptr addrspace(3) %bar_addr, i16 %cta_mask) { -; CHECK_PTX64-LABEL: test_tcgen05_commit_mc_shared( +define void @test_tcgen05_commit_mc_shared_cg1(ptr addrspace(3) %bar_addr, i16 %cta_mask) { +; CHECK_PTX64-LABEL: test_tcgen05_commit_mc_shared_cg1( ; CHECK_PTX64: { ; CHECK_PTX64-NEXT: .reg .b16 %rs<2>; ; CHECK_PTX64-NEXT: .reg .b64 %rd<2>; ; CHECK_PTX64-EMPTY: ; CHECK_PTX64-NEXT: // %bb.0: -; CHECK_PTX64-NEXT: ld.param.b64 %rd1, [test_tcgen05_commit_mc_shared_param_0]; -; CHECK_PTX64-NEXT: ld.param.b16 %rs1, [test_tcgen05_commit_mc_shared_param_1]; +; CHECK_PTX64-NEXT: ld.param.b64 %rd1, [test_tcgen05_commit_mc_shared_cg1_param_0]; +; CHECK_PTX64-NEXT: ld.param.b16 %rs1, [test_tcgen05_commit_mc_shared_cg1_param_1]; ; CHECK_PTX64-NEXT: tcgen05.commit.cta_group::1.mbarrier::arrive::one.shared::cluster.multicast::cluster.b64 [%rd1], %rs1; -; CHECK_PTX64-NEXT: tcgen05.commit.cta_group::2.mbarrier::arrive::one.shared::cluster.multicast::cluster.b64 [%rd1], %rs1; ; CHECK_PTX64-NEXT: ret; ; -; CHECK_PTX64_SHARED32-LABEL: test_tcgen05_commit_mc_shared( +; CHECK_PTX64_SHARED32-LABEL: test_tcgen05_commit_mc_shared_cg1( ; CHECK_PTX64_SHARED32: { ; CHECK_PTX64_SHARED32-NEXT: .reg .b16 %rs<2>; ; CHECK_PTX64_SHARED32-NEXT: .reg .b32 %r<2>; ; CHECK_PTX64_SHARED32-EMPTY: ; CHECK_PTX64_SHARED32-NEXT: // %bb.0: -; CHECK_PTX64_SHARED32-NEXT: ld.param.b32 %r1, [test_tcgen05_commit_mc_shared_param_0]; -; CHECK_PTX64_SHARED32-NEXT: ld.param.b16 %rs1, [test_tcgen05_commit_mc_shared_param_1]; +; CHECK_PTX64_SHARED32-NEXT: ld.param.b32 %r1, [test_tcgen05_commit_mc_shared_cg1_param_0]; +; CHECK_PTX64_SHARED32-NEXT: ld.param.b16 %rs1, 
[test_tcgen05_commit_mc_shared_cg1_param_1]; ; CHECK_PTX64_SHARED32-NEXT: tcgen05.commit.cta_group::1.mbarrier::arrive::one.shared::cluster.multicast::cluster.b64 [%r1], %rs1; -; CHECK_PTX64_SHARED32-NEXT: tcgen05.commit.cta_group::2.mbarrier::arrive::one.shared::cluster.multicast::cluster.b64 [%r1], %rs1; ; CHECK_PTX64_SHARED32-NEXT: ret; call void @llvm.nvvm.tcgen05.commit.mc.shared.cg1(ptr addrspace(3) %bar_addr, i16 %cta_mask) + ret void +} +define void @test_tcgen05_commit_mc_shared_cg2(ptr addrspace(3) %bar_addr, i16 %cta_mask) { +; CHECK_PTX64-LABEL: test_tcgen05_commit_mc_shared_cg2( +; CHECK_PTX64: { +; CHECK_PTX64-NEXT: .reg .b16 %rs<2>; +; CHECK_PTX64-NEXT: .reg .b64 %rd<2>; +; CHECK_PTX64-EMPTY: +; CHECK_PTX64-NEXT: // %bb.0: +; CHECK_PTX64-NEXT: ld.param.b64 %rd1, [test_tcgen05_commit_mc_shared_cg2_param_0]; +; CHECK_PTX64-NEXT: ld.param.b16 %rs1, [test_tcgen05_commit_mc_shared_cg2_param_1]; +; CHECK_PTX64-NEXT: tcgen05.commit.cta_group::2.mbarrier::arrive::one.shared::cluster.multicast::cluster.b64 [%rd1], %rs1; +; CHECK_PTX64-NEXT: ret; +; +; CHECK_PTX64_SHARED32-LABEL: test_tcgen05_commit_mc_shared_cg2( +; CHECK_PTX64_SHARED32: { +; CHECK_PTX64_SHARED32-NEXT: .reg .b16 %rs<2>; +; CHECK_PTX64_SHARED32-NEXT: .reg .b32 %r<2>; +; CHECK_PTX64_SHARED32-EMPTY: +; CHECK_PTX64_SHARED32-NEXT: // %bb.0: +; CHECK_PTX64_SHARED32-NEXT: ld.param.b32 %r1, [test_tcgen05_commit_mc_shared_cg2_param_0]; +; CHECK_PTX64_SHARED32-NEXT: ld.param.b16 %rs1, [test_tcgen05_commit_mc_shared_cg2_param_1]; +; CHECK_PTX64_SHARED32-NEXT: tcgen05.commit.cta_group::2.mbarrier::arrive::one.shared::cluster.multicast::cluster.b64 [%r1], %rs1; +; CHECK_PTX64_SHARED32-NEXT: ret; call void @llvm.nvvm.tcgen05.commit.mc.shared.cg2(ptr addrspace(3) %bar_addr, i16 %cta_mask) - ret void } diff --git a/llvm/test/CodeGen/NVPTX/tcgen05-cp.ll b/llvm/test/CodeGen/NVPTX/tcgen05-cp.ll index c540f78..817b1d5 100644 --- a/llvm/test/CodeGen/NVPTX/tcgen05-cp.ll +++ b/llvm/test/CodeGen/NVPTX/tcgen05-cp.ll @@ -4,346 +4,580 @@ ; RUN: %if ptxas-sm_100a && ptxas-isa-8.6 %{ llc < %s -march=nvptx64 -mcpu=sm_100a -mattr=+ptx86 | %ptxas-verify -arch=sm_100a %} ; RUN: %if ptxas-sm_103a && ptxas-isa-8.8 %{ llc < %s -march=nvptx64 -mcpu=sm_103a -mattr=+ptx88 | %ptxas-verify -arch=sm_103a %} -; CHECK-LABEL: test_tcgen05_cp_64x128_v1 -define void @test_tcgen05_cp_64x128_v1(ptr addrspace(6) %addr, i64 %sdesc) { -; CHECK-LABEL: test_tcgen05_cp_64x128_v1( +define void @test_tcgen05_cp_64x128_v1_cg1(ptr addrspace(6) %addr, i64 %sdesc) { +; CHECK-LABEL: test_tcgen05_cp_64x128_v1_cg1( ; CHECK: { ; CHECK-NEXT: .reg .b32 %r<2>; ; CHECK-NEXT: .reg .b64 %rd<2>; ; CHECK-EMPTY: ; CHECK-NEXT: // %bb.0: -; CHECK-NEXT: ld.param.b32 %r1, [test_tcgen05_cp_64x128_v1_param_0]; -; CHECK-NEXT: ld.param.b64 %rd1, [test_tcgen05_cp_64x128_v1_param_1]; +; CHECK-NEXT: ld.param.b32 %r1, [test_tcgen05_cp_64x128_v1_cg1_param_0]; +; CHECK-NEXT: ld.param.b64 %rd1, [test_tcgen05_cp_64x128_v1_cg1_param_1]; ; CHECK-NEXT: tcgen05.cp.cta_group::1.64x128b.warpx2::02_13 [%r1], %rd1; -; CHECK-NEXT: tcgen05.cp.cta_group::2.64x128b.warpx2::02_13 [%r1], %rd1; ; CHECK-NEXT: ret; call void @llvm.nvvm.tcgen05.cp.64x128b_warpx2_02_13.cg1(ptr addrspace(6) %addr, i64 %sdesc) + + ret void +} + +define void @test_tcgen05_cp_64x128_v1_cg2(ptr addrspace(6) %addr, i64 %sdesc) { +; CHECK-LABEL: test_tcgen05_cp_64x128_v1_cg2( +; CHECK: { +; CHECK-NEXT: .reg .b32 %r<2>; +; CHECK-NEXT: .reg .b64 %rd<2>; +; CHECK-EMPTY: +; CHECK-NEXT: // %bb.0: +; CHECK-NEXT: ld.param.b32 %r1, 
[test_tcgen05_cp_64x128_v1_cg2_param_0]; +; CHECK-NEXT: ld.param.b64 %rd1, [test_tcgen05_cp_64x128_v1_cg2_param_1]; +; CHECK-NEXT: tcgen05.cp.cta_group::2.64x128b.warpx2::02_13 [%r1], %rd1; +; CHECK-NEXT: ret; call void @llvm.nvvm.tcgen05.cp.64x128b_warpx2_02_13.cg2(ptr addrspace(6) %addr, i64 %sdesc) ret void } -; CHECK-LABEL: test_tcgen05_cp_64x128_v2 -define void @test_tcgen05_cp_64x128_v2(ptr addrspace(6) %addr, i64 %sdesc) { -; CHECK-LABEL: test_tcgen05_cp_64x128_v2( +define void @test_tcgen05_cp_64x128_v2_cg1(ptr addrspace(6) %addr, i64 %sdesc) { +; CHECK-LABEL: test_tcgen05_cp_64x128_v2_cg1( ; CHECK: { ; CHECK-NEXT: .reg .b32 %r<2>; ; CHECK-NEXT: .reg .b64 %rd<2>; ; CHECK-EMPTY: ; CHECK-NEXT: // %bb.0: -; CHECK-NEXT: ld.param.b32 %r1, [test_tcgen05_cp_64x128_v2_param_0]; -; CHECK-NEXT: ld.param.b64 %rd1, [test_tcgen05_cp_64x128_v2_param_1]; +; CHECK-NEXT: ld.param.b32 %r1, [test_tcgen05_cp_64x128_v2_cg1_param_0]; +; CHECK-NEXT: ld.param.b64 %rd1, [test_tcgen05_cp_64x128_v2_cg1_param_1]; ; CHECK-NEXT: tcgen05.cp.cta_group::1.64x128b.warpx2::01_23 [%r1], %rd1; -; CHECK-NEXT: tcgen05.cp.cta_group::2.64x128b.warpx2::01_23 [%r1], %rd1; ; CHECK-NEXT: ret; call void @llvm.nvvm.tcgen05.cp.64x128b_warpx2_01_23.cg1(ptr addrspace(6) %addr, i64 %sdesc) + + ret void +} + +define void @test_tcgen05_cp_64x128_v2_cg2(ptr addrspace(6) %addr, i64 %sdesc) { +; CHECK-LABEL: test_tcgen05_cp_64x128_v2_cg2( +; CHECK: { +; CHECK-NEXT: .reg .b32 %r<2>; +; CHECK-NEXT: .reg .b64 %rd<2>; +; CHECK-EMPTY: +; CHECK-NEXT: // %bb.0: +; CHECK-NEXT: ld.param.b32 %r1, [test_tcgen05_cp_64x128_v2_cg2_param_0]; +; CHECK-NEXT: ld.param.b64 %rd1, [test_tcgen05_cp_64x128_v2_cg2_param_1]; +; CHECK-NEXT: tcgen05.cp.cta_group::2.64x128b.warpx2::01_23 [%r1], %rd1; +; CHECK-NEXT: ret; call void @llvm.nvvm.tcgen05.cp.64x128b_warpx2_01_23.cg2(ptr addrspace(6) %addr, i64 %sdesc) ret void } -; CHECK-LABEL: test_tcgen05_cp_32x128 -define void @test_tcgen05_cp_32x128(ptr addrspace(6) %addr, i64 %sdesc) { -; CHECK-LABEL: test_tcgen05_cp_32x128( +define void @test_tcgen05_cp_32x128_cg1(ptr addrspace(6) %addr, i64 %sdesc) { +; CHECK-LABEL: test_tcgen05_cp_32x128_cg1( ; CHECK: { ; CHECK-NEXT: .reg .b32 %r<2>; ; CHECK-NEXT: .reg .b64 %rd<2>; ; CHECK-EMPTY: ; CHECK-NEXT: // %bb.0: -; CHECK-NEXT: ld.param.b32 %r1, [test_tcgen05_cp_32x128_param_0]; -; CHECK-NEXT: ld.param.b64 %rd1, [test_tcgen05_cp_32x128_param_1]; +; CHECK-NEXT: ld.param.b32 %r1, [test_tcgen05_cp_32x128_cg1_param_0]; +; CHECK-NEXT: ld.param.b64 %rd1, [test_tcgen05_cp_32x128_cg1_param_1]; ; CHECK-NEXT: tcgen05.cp.cta_group::1.32x128b.warpx4 [%r1], %rd1; -; CHECK-NEXT: tcgen05.cp.cta_group::2.32x128b.warpx4 [%r1], %rd1; ; CHECK-NEXT: ret; call void @llvm.nvvm.tcgen05.cp.32x128b_warpx4.cg1(ptr addrspace(6) %addr, i64 %sdesc) + + ret void +} + +define void @test_tcgen05_cp_32x128_cg2(ptr addrspace(6) %addr, i64 %sdesc) { +; CHECK-LABEL: test_tcgen05_cp_32x128_cg2( +; CHECK: { +; CHECK-NEXT: .reg .b32 %r<2>; +; CHECK-NEXT: .reg .b64 %rd<2>; +; CHECK-EMPTY: +; CHECK-NEXT: // %bb.0: +; CHECK-NEXT: ld.param.b32 %r1, [test_tcgen05_cp_32x128_cg2_param_0]; +; CHECK-NEXT: ld.param.b64 %rd1, [test_tcgen05_cp_32x128_cg2_param_1]; +; CHECK-NEXT: tcgen05.cp.cta_group::2.32x128b.warpx4 [%r1], %rd1; +; CHECK-NEXT: ret; call void @llvm.nvvm.tcgen05.cp.32x128b_warpx4.cg2(ptr addrspace(6) %addr, i64 %sdesc) ret void } -; CHECK-LABEL: test_tcgen05_cp_128x128b -define void @test_tcgen05_cp_128x128b(ptr addrspace(6) %addr, i64 %sdesc) { -; CHECK-LABEL: test_tcgen05_cp_128x128b( +define void 
@test_tcgen05_cp_128x128b_cg1(ptr addrspace(6) %addr, i64 %sdesc) { +; CHECK-LABEL: test_tcgen05_cp_128x128b_cg1( ; CHECK: { ; CHECK-NEXT: .reg .b32 %r<2>; ; CHECK-NEXT: .reg .b64 %rd<2>; ; CHECK-EMPTY: ; CHECK-NEXT: // %bb.0: -; CHECK-NEXT: ld.param.b32 %r1, [test_tcgen05_cp_128x128b_param_0]; -; CHECK-NEXT: ld.param.b64 %rd1, [test_tcgen05_cp_128x128b_param_1]; +; CHECK-NEXT: ld.param.b32 %r1, [test_tcgen05_cp_128x128b_cg1_param_0]; +; CHECK-NEXT: ld.param.b64 %rd1, [test_tcgen05_cp_128x128b_cg1_param_1]; ; CHECK-NEXT: tcgen05.cp.cta_group::1.128x128b [%r1], %rd1; -; CHECK-NEXT: tcgen05.cp.cta_group::2.128x128b [%r1], %rd1; ; CHECK-NEXT: ret; call void @llvm.nvvm.tcgen05.cp.128x128b.cg1(ptr addrspace(6) %addr, i64 %sdesc) + + ret void +} + +define void @test_tcgen05_cp_128x128b_cg2(ptr addrspace(6) %addr, i64 %sdesc) { +; CHECK-LABEL: test_tcgen05_cp_128x128b_cg2( +; CHECK: { +; CHECK-NEXT: .reg .b32 %r<2>; +; CHECK-NEXT: .reg .b64 %rd<2>; +; CHECK-EMPTY: +; CHECK-NEXT: // %bb.0: +; CHECK-NEXT: ld.param.b32 %r1, [test_tcgen05_cp_128x128b_cg2_param_0]; +; CHECK-NEXT: ld.param.b64 %rd1, [test_tcgen05_cp_128x128b_cg2_param_1]; +; CHECK-NEXT: tcgen05.cp.cta_group::2.128x128b [%r1], %rd1; +; CHECK-NEXT: ret; call void @llvm.nvvm.tcgen05.cp.128x128b.cg2(ptr addrspace(6) %addr, i64 %sdesc) ret void } -; CHECK-LABEL: test_tcgen05_cp_128x256b -define void @test_tcgen05_cp_128x256b(ptr addrspace(6) %addr, i64 %sdesc) { -; CHECK-LABEL: test_tcgen05_cp_128x256b( +define void @test_tcgen05_cp_128x256b_cg1(ptr addrspace(6) %addr, i64 %sdesc) { +; CHECK-LABEL: test_tcgen05_cp_128x256b_cg1( ; CHECK: { ; CHECK-NEXT: .reg .b32 %r<2>; ; CHECK-NEXT: .reg .b64 %rd<2>; ; CHECK-EMPTY: ; CHECK-NEXT: // %bb.0: -; CHECK-NEXT: ld.param.b32 %r1, [test_tcgen05_cp_128x256b_param_0]; -; CHECK-NEXT: ld.param.b64 %rd1, [test_tcgen05_cp_128x256b_param_1]; +; CHECK-NEXT: ld.param.b32 %r1, [test_tcgen05_cp_128x256b_cg1_param_0]; +; CHECK-NEXT: ld.param.b64 %rd1, [test_tcgen05_cp_128x256b_cg1_param_1]; ; CHECK-NEXT: tcgen05.cp.cta_group::1.128x256b [%r1], %rd1; -; CHECK-NEXT: tcgen05.cp.cta_group::2.128x256b [%r1], %rd1; ; CHECK-NEXT: ret; call void @llvm.nvvm.tcgen05.cp.128x256b.cg1(ptr addrspace(6) %addr, i64 %sdesc) + + ret void +} + +define void @test_tcgen05_cp_128x256b_cg2(ptr addrspace(6) %addr, i64 %sdesc) { +; CHECK-LABEL: test_tcgen05_cp_128x256b_cg2( +; CHECK: { +; CHECK-NEXT: .reg .b32 %r<2>; +; CHECK-NEXT: .reg .b64 %rd<2>; +; CHECK-EMPTY: +; CHECK-NEXT: // %bb.0: +; CHECK-NEXT: ld.param.b32 %r1, [test_tcgen05_cp_128x256b_cg2_param_0]; +; CHECK-NEXT: ld.param.b64 %rd1, [test_tcgen05_cp_128x256b_cg2_param_1]; +; CHECK-NEXT: tcgen05.cp.cta_group::2.128x256b [%r1], %rd1; +; CHECK-NEXT: ret; call void @llvm.nvvm.tcgen05.cp.128x256b.cg2(ptr addrspace(6) %addr, i64 %sdesc) ret void } -; CHECK-LABEL: test_tcgen05_cp_4x256b -define void @test_tcgen05_cp_4x256b(ptr addrspace(6) %addr, i64 %sdesc) { -; CHECK-LABEL: test_tcgen05_cp_4x256b( +define void @test_tcgen05_cp_4x256b_cg1(ptr addrspace(6) %addr, i64 %sdesc) { +; CHECK-LABEL: test_tcgen05_cp_4x256b_cg1( ; CHECK: { ; CHECK-NEXT: .reg .b32 %r<2>; ; CHECK-NEXT: .reg .b64 %rd<2>; ; CHECK-EMPTY: ; CHECK-NEXT: // %bb.0: -; CHECK-NEXT: ld.param.b32 %r1, [test_tcgen05_cp_4x256b_param_0]; -; CHECK-NEXT: ld.param.b64 %rd1, [test_tcgen05_cp_4x256b_param_1]; +; CHECK-NEXT: ld.param.b32 %r1, [test_tcgen05_cp_4x256b_cg1_param_0]; +; CHECK-NEXT: ld.param.b64 %rd1, [test_tcgen05_cp_4x256b_cg1_param_1]; ; CHECK-NEXT: tcgen05.cp.cta_group::1.4x256b [%r1], %rd1; -; CHECK-NEXT: 
tcgen05.cp.cta_group::2.4x256b [%r1], %rd1; ; CHECK-NEXT: ret; call void @llvm.nvvm.tcgen05.cp.4x256b.cg1(ptr addrspace(6) %addr, i64 %sdesc) + + ret void +} + +define void @test_tcgen05_cp_4x256b_cg2(ptr addrspace(6) %addr, i64 %sdesc) { +; CHECK-LABEL: test_tcgen05_cp_4x256b_cg2( +; CHECK: { +; CHECK-NEXT: .reg .b32 %r<2>; +; CHECK-NEXT: .reg .b64 %rd<2>; +; CHECK-EMPTY: +; CHECK-NEXT: // %bb.0: +; CHECK-NEXT: ld.param.b32 %r1, [test_tcgen05_cp_4x256b_cg2_param_0]; +; CHECK-NEXT: ld.param.b64 %rd1, [test_tcgen05_cp_4x256b_cg2_param_1]; +; CHECK-NEXT: tcgen05.cp.cta_group::2.4x256b [%r1], %rd1; +; CHECK-NEXT: ret; call void @llvm.nvvm.tcgen05.cp.4x256b.cg2(ptr addrspace(6) %addr, i64 %sdesc) ret void } ; With src_fmt as b6x16_p32 -; CHECK-LABEL: test_tcgen05_cp_128x256b_b6x16_p32 -define void @test_tcgen05_cp_128x256b_b6x16_p32(ptr addrspace(6) %addr, i64 %sdesc) { -; CHECK-LABEL: test_tcgen05_cp_128x256b_b6x16_p32( +define void @test_tcgen05_cp_128x256b_b6x16_p32_cg1(ptr addrspace(6) %addr, i64 %sdesc) { +; CHECK-LABEL: test_tcgen05_cp_128x256b_b6x16_p32_cg1( ; CHECK: { ; CHECK-NEXT: .reg .b32 %r<2>; ; CHECK-NEXT: .reg .b64 %rd<2>; ; CHECK-EMPTY: ; CHECK-NEXT: // %bb.0: -; CHECK-NEXT: ld.param.b32 %r1, [test_tcgen05_cp_128x256b_b6x16_p32_param_0]; -; CHECK-NEXT: ld.param.b64 %rd1, [test_tcgen05_cp_128x256b_b6x16_p32_param_1]; +; CHECK-NEXT: ld.param.b32 %r1, [test_tcgen05_cp_128x256b_b6x16_p32_cg1_param_0]; +; CHECK-NEXT: ld.param.b64 %rd1, [test_tcgen05_cp_128x256b_b6x16_p32_cg1_param_1]; ; CHECK-NEXT: tcgen05.cp.cta_group::1.128x256b.b8x16.b6x16_p32 [%r1], %rd1; -; CHECK-NEXT: tcgen05.cp.cta_group::2.128x256b.b8x16.b6x16_p32 [%r1], %rd1; ; CHECK-NEXT: ret; call void @llvm.nvvm.tcgen05.cp.128x256b.b6x16_p32.cg1(ptr addrspace(6) %addr, i64 %sdesc) + + ret void +} + +define void @test_tcgen05_cp_128x256b_b6x16_p32_cg2(ptr addrspace(6) %addr, i64 %sdesc) { +; CHECK-LABEL: test_tcgen05_cp_128x256b_b6x16_p32_cg2( +; CHECK: { +; CHECK-NEXT: .reg .b32 %r<2>; +; CHECK-NEXT: .reg .b64 %rd<2>; +; CHECK-EMPTY: +; CHECK-NEXT: // %bb.0: +; CHECK-NEXT: ld.param.b32 %r1, [test_tcgen05_cp_128x256b_b6x16_p32_cg2_param_0]; +; CHECK-NEXT: ld.param.b64 %rd1, [test_tcgen05_cp_128x256b_b6x16_p32_cg2_param_1]; +; CHECK-NEXT: tcgen05.cp.cta_group::2.128x256b.b8x16.b6x16_p32 [%r1], %rd1; +; CHECK-NEXT: ret; call void @llvm.nvvm.tcgen05.cp.128x256b.b6x16_p32.cg2(ptr addrspace(6) %addr, i64 %sdesc) ret void } -; CHECK-LABEL: test_tcgen05_cp_4x256b_b6x16_p32 -define void @test_tcgen05_cp_4x256b_b6x16_p32(ptr addrspace(6) %addr, i64 %sdesc) { -; CHECK-LABEL: test_tcgen05_cp_4x256b_b6x16_p32( +define void @test_tcgen05_cp_4x256b_b6x16_p32_cg1(ptr addrspace(6) %addr, i64 %sdesc) { +; CHECK-LABEL: test_tcgen05_cp_4x256b_b6x16_p32_cg1( ; CHECK: { ; CHECK-NEXT: .reg .b32 %r<2>; ; CHECK-NEXT: .reg .b64 %rd<2>; ; CHECK-EMPTY: ; CHECK-NEXT: // %bb.0: -; CHECK-NEXT: ld.param.b32 %r1, [test_tcgen05_cp_4x256b_b6x16_p32_param_0]; -; CHECK-NEXT: ld.param.b64 %rd1, [test_tcgen05_cp_4x256b_b6x16_p32_param_1]; +; CHECK-NEXT: ld.param.b32 %r1, [test_tcgen05_cp_4x256b_b6x16_p32_cg1_param_0]; +; CHECK-NEXT: ld.param.b64 %rd1, [test_tcgen05_cp_4x256b_b6x16_p32_cg1_param_1]; ; CHECK-NEXT: tcgen05.cp.cta_group::1.4x256b.b8x16.b6x16_p32 [%r1], %rd1; -; CHECK-NEXT: tcgen05.cp.cta_group::2.4x256b.b8x16.b6x16_p32 [%r1], %rd1; ; CHECK-NEXT: ret; call void @llvm.nvvm.tcgen05.cp.4x256b.b6x16_p32.cg1(ptr addrspace(6) %addr, i64 %sdesc) + + ret void +} + +define void @test_tcgen05_cp_4x256b_b6x16_p32_cg2(ptr addrspace(6) %addr, i64 %sdesc) { +; 
CHECK-LABEL: test_tcgen05_cp_4x256b_b6x16_p32_cg2( +; CHECK: { +; CHECK-NEXT: .reg .b32 %r<2>; +; CHECK-NEXT: .reg .b64 %rd<2>; +; CHECK-EMPTY: +; CHECK-NEXT: // %bb.0: +; CHECK-NEXT: ld.param.b32 %r1, [test_tcgen05_cp_4x256b_b6x16_p32_cg2_param_0]; +; CHECK-NEXT: ld.param.b64 %rd1, [test_tcgen05_cp_4x256b_b6x16_p32_cg2_param_1]; +; CHECK-NEXT: tcgen05.cp.cta_group::2.4x256b.b8x16.b6x16_p32 [%r1], %rd1; +; CHECK-NEXT: ret; call void @llvm.nvvm.tcgen05.cp.4x256b.b6x16_p32.cg2(ptr addrspace(6) %addr, i64 %sdesc) ret void } -; CHECK-LABEL: test_tcgen05_cp_128x128b_b6x16_p32 -define void @test_tcgen05_cp_128x128b_b6x16_p32(ptr addrspace(6) %addr, i64 %sdesc) { -; CHECK-LABEL: test_tcgen05_cp_128x128b_b6x16_p32( +define void @test_tcgen05_cp_128x128b_b6x16_p32_cg1(ptr addrspace(6) %addr, i64 %sdesc) { +; CHECK-LABEL: test_tcgen05_cp_128x128b_b6x16_p32_cg1( ; CHECK: { ; CHECK-NEXT: .reg .b32 %r<2>; ; CHECK-NEXT: .reg .b64 %rd<2>; ; CHECK-EMPTY: ; CHECK-NEXT: // %bb.0: -; CHECK-NEXT: ld.param.b32 %r1, [test_tcgen05_cp_128x128b_b6x16_p32_param_0]; -; CHECK-NEXT: ld.param.b64 %rd1, [test_tcgen05_cp_128x128b_b6x16_p32_param_1]; +; CHECK-NEXT: ld.param.b32 %r1, [test_tcgen05_cp_128x128b_b6x16_p32_cg1_param_0]; +; CHECK-NEXT: ld.param.b64 %rd1, [test_tcgen05_cp_128x128b_b6x16_p32_cg1_param_1]; ; CHECK-NEXT: tcgen05.cp.cta_group::1.128x128b.b8x16.b6x16_p32 [%r1], %rd1; -; CHECK-NEXT: tcgen05.cp.cta_group::2.128x128b.b8x16.b6x16_p32 [%r1], %rd1; ; CHECK-NEXT: ret; call void @llvm.nvvm.tcgen05.cp.128x128b.b6x16_p32.cg1(ptr addrspace(6) %addr, i64 %sdesc) + + ret void +} + +define void @test_tcgen05_cp_128x128b_b6x16_p32_cg2(ptr addrspace(6) %addr, i64 %sdesc) { +; CHECK-LABEL: test_tcgen05_cp_128x128b_b6x16_p32_cg2( +; CHECK: { +; CHECK-NEXT: .reg .b32 %r<2>; +; CHECK-NEXT: .reg .b64 %rd<2>; +; CHECK-EMPTY: +; CHECK-NEXT: // %bb.0: +; CHECK-NEXT: ld.param.b32 %r1, [test_tcgen05_cp_128x128b_b6x16_p32_cg2_param_0]; +; CHECK-NEXT: ld.param.b64 %rd1, [test_tcgen05_cp_128x128b_b6x16_p32_cg2_param_1]; +; CHECK-NEXT: tcgen05.cp.cta_group::2.128x128b.b8x16.b6x16_p32 [%r1], %rd1; +; CHECK-NEXT: ret; call void @llvm.nvvm.tcgen05.cp.128x128b.b6x16_p32.cg2(ptr addrspace(6) %addr, i64 %sdesc) ret void } -; CHECK-LABEL: test_tcgen05_cp_64x128_v1_b6x16_p32 -define void @test_tcgen05_cp_64x128_v1_b6x16_p32(ptr addrspace(6) %addr, i64 %sdesc) { -; CHECK-LABEL: test_tcgen05_cp_64x128_v1_b6x16_p32( +define void @test_tcgen05_cp_64x128_v1_b6x16_p32_cg1(ptr addrspace(6) %addr, i64 %sdesc) { +; CHECK-LABEL: test_tcgen05_cp_64x128_v1_b6x16_p32_cg1( ; CHECK: { ; CHECK-NEXT: .reg .b32 %r<2>; ; CHECK-NEXT: .reg .b64 %rd<2>; ; CHECK-EMPTY: ; CHECK-NEXT: // %bb.0: -; CHECK-NEXT: ld.param.b32 %r1, [test_tcgen05_cp_64x128_v1_b6x16_p32_param_0]; -; CHECK-NEXT: ld.param.b64 %rd1, [test_tcgen05_cp_64x128_v1_b6x16_p32_param_1]; +; CHECK-NEXT: ld.param.b32 %r1, [test_tcgen05_cp_64x128_v1_b6x16_p32_cg1_param_0]; +; CHECK-NEXT: ld.param.b64 %rd1, [test_tcgen05_cp_64x128_v1_b6x16_p32_cg1_param_1]; ; CHECK-NEXT: tcgen05.cp.cta_group::1.64x128b.warpx2::02_13.b8x16.b6x16_p32 [%r1], %rd1; -; CHECK-NEXT: tcgen05.cp.cta_group::2.64x128b.warpx2::02_13.b8x16.b6x16_p32 [%r1], %rd1; ; CHECK-NEXT: ret; call void @llvm.nvvm.tcgen05.cp.64x128b_warpx2_02_13.b6x16_p32.cg1(ptr addrspace(6) %addr, i64 %sdesc) + + ret void +} + +define void @test_tcgen05_cp_64x128_v1_b6x16_p32_cg2(ptr addrspace(6) %addr, i64 %sdesc) { +; CHECK-LABEL: test_tcgen05_cp_64x128_v1_b6x16_p32_cg2( +; CHECK: { +; CHECK-NEXT: .reg .b32 %r<2>; +; CHECK-NEXT: .reg .b64 %rd<2>; +; 
CHECK-EMPTY: +; CHECK-NEXT: // %bb.0: +; CHECK-NEXT: ld.param.b32 %r1, [test_tcgen05_cp_64x128_v1_b6x16_p32_cg2_param_0]; +; CHECK-NEXT: ld.param.b64 %rd1, [test_tcgen05_cp_64x128_v1_b6x16_p32_cg2_param_1]; +; CHECK-NEXT: tcgen05.cp.cta_group::2.64x128b.warpx2::02_13.b8x16.b6x16_p32 [%r1], %rd1; +; CHECK-NEXT: ret; call void @llvm.nvvm.tcgen05.cp.64x128b_warpx2_02_13.b6x16_p32.cg2(ptr addrspace(6) %addr, i64 %sdesc) ret void } -; CHECK-LABEL: test_tcgen05_cp_64x128_v2_b6x16_p32 -define void @test_tcgen05_cp_64x128_v2_b6x16_p32(ptr addrspace(6) %addr, i64 %sdesc) { -; CHECK-LABEL: test_tcgen05_cp_64x128_v2_b6x16_p32( +define void @test_tcgen05_cp_64x128_v2_b6x16_p32_cg1(ptr addrspace(6) %addr, i64 %sdesc) { +; CHECK-LABEL: test_tcgen05_cp_64x128_v2_b6x16_p32_cg1( ; CHECK: { ; CHECK-NEXT: .reg .b32 %r<2>; ; CHECK-NEXT: .reg .b64 %rd<2>; ; CHECK-EMPTY: ; CHECK-NEXT: // %bb.0: -; CHECK-NEXT: ld.param.b32 %r1, [test_tcgen05_cp_64x128_v2_b6x16_p32_param_0]; -; CHECK-NEXT: ld.param.b64 %rd1, [test_tcgen05_cp_64x128_v2_b6x16_p32_param_1]; +; CHECK-NEXT: ld.param.b32 %r1, [test_tcgen05_cp_64x128_v2_b6x16_p32_cg1_param_0]; +; CHECK-NEXT: ld.param.b64 %rd1, [test_tcgen05_cp_64x128_v2_b6x16_p32_cg1_param_1]; ; CHECK-NEXT: tcgen05.cp.cta_group::1.64x128b.warpx2::01_23.b8x16.b6x16_p32 [%r1], %rd1; -; CHECK-NEXT: tcgen05.cp.cta_group::2.64x128b.warpx2::01_23.b8x16.b6x16_p32 [%r1], %rd1; ; CHECK-NEXT: ret; call void @llvm.nvvm.tcgen05.cp.64x128b_warpx2_01_23.b6x16_p32.cg1(ptr addrspace(6) %addr, i64 %sdesc) + + ret void +} + +define void @test_tcgen05_cp_64x128_v2_b6x16_p32_cg2(ptr addrspace(6) %addr, i64 %sdesc) { +; CHECK-LABEL: test_tcgen05_cp_64x128_v2_b6x16_p32_cg2( +; CHECK: { +; CHECK-NEXT: .reg .b32 %r<2>; +; CHECK-NEXT: .reg .b64 %rd<2>; +; CHECK-EMPTY: +; CHECK-NEXT: // %bb.0: +; CHECK-NEXT: ld.param.b32 %r1, [test_tcgen05_cp_64x128_v2_b6x16_p32_cg2_param_0]; +; CHECK-NEXT: ld.param.b64 %rd1, [test_tcgen05_cp_64x128_v2_b6x16_p32_cg2_param_1]; +; CHECK-NEXT: tcgen05.cp.cta_group::2.64x128b.warpx2::01_23.b8x16.b6x16_p32 [%r1], %rd1; +; CHECK-NEXT: ret; call void @llvm.nvvm.tcgen05.cp.64x128b_warpx2_01_23.b6x16_p32.cg2(ptr addrspace(6) %addr, i64 %sdesc) ret void } -; CHECK-LABEL: test_tcgen05_cp_32x128_b6x16_p32 -define void @test_tcgen05_cp_32x128_b6x16_p32(ptr addrspace(6) %addr, i64 %sdesc) { -; CHECK-LABEL: test_tcgen05_cp_32x128_b6x16_p32( +define void @test_tcgen05_cp_32x128_b6x16_p32_cg1(ptr addrspace(6) %addr, i64 %sdesc) { +; CHECK-LABEL: test_tcgen05_cp_32x128_b6x16_p32_cg1( ; CHECK: { ; CHECK-NEXT: .reg .b32 %r<2>; ; CHECK-NEXT: .reg .b64 %rd<2>; ; CHECK-EMPTY: ; CHECK-NEXT: // %bb.0: -; CHECK-NEXT: ld.param.b32 %r1, [test_tcgen05_cp_32x128_b6x16_p32_param_0]; -; CHECK-NEXT: ld.param.b64 %rd1, [test_tcgen05_cp_32x128_b6x16_p32_param_1]; +; CHECK-NEXT: ld.param.b32 %r1, [test_tcgen05_cp_32x128_b6x16_p32_cg1_param_0]; +; CHECK-NEXT: ld.param.b64 %rd1, [test_tcgen05_cp_32x128_b6x16_p32_cg1_param_1]; ; CHECK-NEXT: tcgen05.cp.cta_group::1.32x128b.warpx4.b8x16.b6x16_p32 [%r1], %rd1; -; CHECK-NEXT: tcgen05.cp.cta_group::2.32x128b.warpx4.b8x16.b6x16_p32 [%r1], %rd1; ; CHECK-NEXT: ret; call void @llvm.nvvm.tcgen05.cp.32x128b_warpx4.b6x16_p32.cg1(ptr addrspace(6) %addr, i64 %sdesc) + + ret void +} + +define void @test_tcgen05_cp_32x128_b6x16_p32_cg2(ptr addrspace(6) %addr, i64 %sdesc) { +; CHECK-LABEL: test_tcgen05_cp_32x128_b6x16_p32_cg2( +; CHECK: { +; CHECK-NEXT: .reg .b32 %r<2>; +; CHECK-NEXT: .reg .b64 %rd<2>; +; CHECK-EMPTY: +; CHECK-NEXT: // %bb.0: +; CHECK-NEXT: ld.param.b32 %r1, 
[test_tcgen05_cp_32x128_b6x16_p32_cg2_param_0]; +; CHECK-NEXT: ld.param.b64 %rd1, [test_tcgen05_cp_32x128_b6x16_p32_cg2_param_1]; +; CHECK-NEXT: tcgen05.cp.cta_group::2.32x128b.warpx4.b8x16.b6x16_p32 [%r1], %rd1; +; CHECK-NEXT: ret; call void @llvm.nvvm.tcgen05.cp.32x128b_warpx4.b6x16_p32.cg2(ptr addrspace(6) %addr, i64 %sdesc) ret void } ; With src_fmt as b4x16_p64 -; CHECK-LABEL: test_tcgen05_cp_128x256b_b4x16_p64 -define void @test_tcgen05_cp_128x256b_b4x16_p64(ptr addrspace(6) %addr, i64 %sdesc) { -; CHECK-LABEL: test_tcgen05_cp_128x256b_b4x16_p64( +define void @test_tcgen05_cp_128x256b_b4x16_p64_cg1(ptr addrspace(6) %addr, i64 %sdesc) { +; CHECK-LABEL: test_tcgen05_cp_128x256b_b4x16_p64_cg1( ; CHECK: { ; CHECK-NEXT: .reg .b32 %r<2>; ; CHECK-NEXT: .reg .b64 %rd<2>; ; CHECK-EMPTY: ; CHECK-NEXT: // %bb.0: -; CHECK-NEXT: ld.param.b32 %r1, [test_tcgen05_cp_128x256b_b4x16_p64_param_0]; -; CHECK-NEXT: ld.param.b64 %rd1, [test_tcgen05_cp_128x256b_b4x16_p64_param_1]; +; CHECK-NEXT: ld.param.b32 %r1, [test_tcgen05_cp_128x256b_b4x16_p64_cg1_param_0]; +; CHECK-NEXT: ld.param.b64 %rd1, [test_tcgen05_cp_128x256b_b4x16_p64_cg1_param_1]; ; CHECK-NEXT: tcgen05.cp.cta_group::1.128x256b.b8x16.b4x16_p64 [%r1], %rd1; -; CHECK-NEXT: tcgen05.cp.cta_group::2.128x256b.b8x16.b4x16_p64 [%r1], %rd1; ; CHECK-NEXT: ret; call void @llvm.nvvm.tcgen05.cp.128x256b.b4x16_p64.cg1(ptr addrspace(6) %addr, i64 %sdesc) + + ret void +} + +define void @test_tcgen05_cp_128x256b_b4x16_p64_cg2(ptr addrspace(6) %addr, i64 %sdesc) { +; CHECK-LABEL: test_tcgen05_cp_128x256b_b4x16_p64_cg2( +; CHECK: { +; CHECK-NEXT: .reg .b32 %r<2>; +; CHECK-NEXT: .reg .b64 %rd<2>; +; CHECK-EMPTY: +; CHECK-NEXT: // %bb.0: +; CHECK-NEXT: ld.param.b32 %r1, [test_tcgen05_cp_128x256b_b4x16_p64_cg2_param_0]; +; CHECK-NEXT: ld.param.b64 %rd1, [test_tcgen05_cp_128x256b_b4x16_p64_cg2_param_1]; +; CHECK-NEXT: tcgen05.cp.cta_group::2.128x256b.b8x16.b4x16_p64 [%r1], %rd1; +; CHECK-NEXT: ret; call void @llvm.nvvm.tcgen05.cp.128x256b.b4x16_p64.cg2(ptr addrspace(6) %addr, i64 %sdesc) ret void } -; CHECK-LABEL: test_tcgen05_cp_4x256b_b4x16_p64 -define void @test_tcgen05_cp_4x256b_b4x16_p64(ptr addrspace(6) %addr, i64 %sdesc) { -; CHECK-LABEL: test_tcgen05_cp_4x256b_b4x16_p64( +define void @test_tcgen05_cp_4x256b_b4x16_p64_cg1(ptr addrspace(6) %addr, i64 %sdesc) { +; CHECK-LABEL: test_tcgen05_cp_4x256b_b4x16_p64_cg1( ; CHECK: { ; CHECK-NEXT: .reg .b32 %r<2>; ; CHECK-NEXT: .reg .b64 %rd<2>; ; CHECK-EMPTY: ; CHECK-NEXT: // %bb.0: -; CHECK-NEXT: ld.param.b32 %r1, [test_tcgen05_cp_4x256b_b4x16_p64_param_0]; -; CHECK-NEXT: ld.param.b64 %rd1, [test_tcgen05_cp_4x256b_b4x16_p64_param_1]; +; CHECK-NEXT: ld.param.b32 %r1, [test_tcgen05_cp_4x256b_b4x16_p64_cg1_param_0]; +; CHECK-NEXT: ld.param.b64 %rd1, [test_tcgen05_cp_4x256b_b4x16_p64_cg1_param_1]; ; CHECK-NEXT: tcgen05.cp.cta_group::1.4x256b.b8x16.b4x16_p64 [%r1], %rd1; -; CHECK-NEXT: tcgen05.cp.cta_group::2.4x256b.b8x16.b4x16_p64 [%r1], %rd1; ; CHECK-NEXT: ret; call void @llvm.nvvm.tcgen05.cp.4x256b.b4x16_p64.cg1(ptr addrspace(6) %addr, i64 %sdesc) + + ret void +} + +define void @test_tcgen05_cp_4x256b_b4x16_p64_cg2(ptr addrspace(6) %addr, i64 %sdesc) { +; CHECK-LABEL: test_tcgen05_cp_4x256b_b4x16_p64_cg2( +; CHECK: { +; CHECK-NEXT: .reg .b32 %r<2>; +; CHECK-NEXT: .reg .b64 %rd<2>; +; CHECK-EMPTY: +; CHECK-NEXT: // %bb.0: +; CHECK-NEXT: ld.param.b32 %r1, [test_tcgen05_cp_4x256b_b4x16_p64_cg2_param_0]; +; CHECK-NEXT: ld.param.b64 %rd1, [test_tcgen05_cp_4x256b_b4x16_p64_cg2_param_1]; +; CHECK-NEXT: 
tcgen05.cp.cta_group::2.4x256b.b8x16.b4x16_p64 [%r1], %rd1; +; CHECK-NEXT: ret; call void @llvm.nvvm.tcgen05.cp.4x256b.b4x16_p64.cg2(ptr addrspace(6) %addr, i64 %sdesc) ret void } -; CHECK-LABEL: test_tcgen05_cp_128x128b_b4x16_p64 -define void @test_tcgen05_cp_128x128b_b4x16_p64(ptr addrspace(6) %addr, i64 %sdesc) { -; CHECK-LABEL: test_tcgen05_cp_128x128b_b4x16_p64( +define void @test_tcgen05_cp_128x128b_b4x16_p64_cg1(ptr addrspace(6) %addr, i64 %sdesc) { +; CHECK-LABEL: test_tcgen05_cp_128x128b_b4x16_p64_cg1( ; CHECK: { ; CHECK-NEXT: .reg .b32 %r<2>; ; CHECK-NEXT: .reg .b64 %rd<2>; ; CHECK-EMPTY: ; CHECK-NEXT: // %bb.0: -; CHECK-NEXT: ld.param.b32 %r1, [test_tcgen05_cp_128x128b_b4x16_p64_param_0]; -; CHECK-NEXT: ld.param.b64 %rd1, [test_tcgen05_cp_128x128b_b4x16_p64_param_1]; +; CHECK-NEXT: ld.param.b32 %r1, [test_tcgen05_cp_128x128b_b4x16_p64_cg1_param_0]; +; CHECK-NEXT: ld.param.b64 %rd1, [test_tcgen05_cp_128x128b_b4x16_p64_cg1_param_1]; ; CHECK-NEXT: tcgen05.cp.cta_group::1.128x128b.b8x16.b4x16_p64 [%r1], %rd1; -; CHECK-NEXT: tcgen05.cp.cta_group::2.128x128b.b8x16.b4x16_p64 [%r1], %rd1; ; CHECK-NEXT: ret; call void @llvm.nvvm.tcgen05.cp.128x128b.b4x16_p64.cg1(ptr addrspace(6) %addr, i64 %sdesc) + + ret void +} + +define void @test_tcgen05_cp_128x128b_b4x16_p64_cg2(ptr addrspace(6) %addr, i64 %sdesc) { +; CHECK-LABEL: test_tcgen05_cp_128x128b_b4x16_p64_cg2( +; CHECK: { +; CHECK-NEXT: .reg .b32 %r<2>; +; CHECK-NEXT: .reg .b64 %rd<2>; +; CHECK-EMPTY: +; CHECK-NEXT: // %bb.0: +; CHECK-NEXT: ld.param.b32 %r1, [test_tcgen05_cp_128x128b_b4x16_p64_cg2_param_0]; +; CHECK-NEXT: ld.param.b64 %rd1, [test_tcgen05_cp_128x128b_b4x16_p64_cg2_param_1]; +; CHECK-NEXT: tcgen05.cp.cta_group::2.128x128b.b8x16.b4x16_p64 [%r1], %rd1; +; CHECK-NEXT: ret; call void @llvm.nvvm.tcgen05.cp.128x128b.b4x16_p64.cg2(ptr addrspace(6) %addr, i64 %sdesc) ret void } -; CHECK-LABEL: test_tcgen05_cp_64x128_v1_b4x16_p64 -define void @test_tcgen05_cp_64x128_v1_b4x16_p64(ptr addrspace(6) %addr, i64 %sdesc) { -; CHECK-LABEL: test_tcgen05_cp_64x128_v1_b4x16_p64( +define void @test_tcgen05_cp_64x128_v1_b4x16_p64_cg1(ptr addrspace(6) %addr, i64 %sdesc) { +; CHECK-LABEL: test_tcgen05_cp_64x128_v1_b4x16_p64_cg1( ; CHECK: { ; CHECK-NEXT: .reg .b32 %r<2>; ; CHECK-NEXT: .reg .b64 %rd<2>; ; CHECK-EMPTY: ; CHECK-NEXT: // %bb.0: -; CHECK-NEXT: ld.param.b32 %r1, [test_tcgen05_cp_64x128_v1_b4x16_p64_param_0]; -; CHECK-NEXT: ld.param.b64 %rd1, [test_tcgen05_cp_64x128_v1_b4x16_p64_param_1]; +; CHECK-NEXT: ld.param.b32 %r1, [test_tcgen05_cp_64x128_v1_b4x16_p64_cg1_param_0]; +; CHECK-NEXT: ld.param.b64 %rd1, [test_tcgen05_cp_64x128_v1_b4x16_p64_cg1_param_1]; ; CHECK-NEXT: tcgen05.cp.cta_group::1.64x128b.warpx2::02_13.b8x16.b4x16_p64 [%r1], %rd1; -; CHECK-NEXT: tcgen05.cp.cta_group::2.64x128b.warpx2::02_13.b8x16.b4x16_p64 [%r1], %rd1; ; CHECK-NEXT: ret; call void @llvm.nvvm.tcgen05.cp.64x128b_warpx2_02_13.b4x16_p64.cg1(ptr addrspace(6) %addr, i64 %sdesc) + + ret void +} + +define void @test_tcgen05_cp_64x128_v1_b4x16_p64_cg2(ptr addrspace(6) %addr, i64 %sdesc) { +; CHECK-LABEL: test_tcgen05_cp_64x128_v1_b4x16_p64_cg2( +; CHECK: { +; CHECK-NEXT: .reg .b32 %r<2>; +; CHECK-NEXT: .reg .b64 %rd<2>; +; CHECK-EMPTY: +; CHECK-NEXT: // %bb.0: +; CHECK-NEXT: ld.param.b32 %r1, [test_tcgen05_cp_64x128_v1_b4x16_p64_cg2_param_0]; +; CHECK-NEXT: ld.param.b64 %rd1, [test_tcgen05_cp_64x128_v1_b4x16_p64_cg2_param_1]; +; CHECK-NEXT: tcgen05.cp.cta_group::2.64x128b.warpx2::02_13.b8x16.b4x16_p64 [%r1], %rd1; +; CHECK-NEXT: ret; call void 
@llvm.nvvm.tcgen05.cp.64x128b_warpx2_02_13.b4x16_p64.cg2(ptr addrspace(6) %addr, i64 %sdesc) ret void } -; CHECK-LABEL: test_tcgen05_cp_64x128_v2_b4x16_p64 -define void @test_tcgen05_cp_64x128_v2_b4x16_p64(ptr addrspace(6) %addr, i64 %sdesc) { -; CHECK-LABEL: test_tcgen05_cp_64x128_v2_b4x16_p64( +define void @test_tcgen05_cp_64x128_v2_b4x16_p64_cg1(ptr addrspace(6) %addr, i64 %sdesc) { +; CHECK-LABEL: test_tcgen05_cp_64x128_v2_b4x16_p64_cg1( ; CHECK: { ; CHECK-NEXT: .reg .b32 %r<2>; ; CHECK-NEXT: .reg .b64 %rd<2>; ; CHECK-EMPTY: ; CHECK-NEXT: // %bb.0: -; CHECK-NEXT: ld.param.b32 %r1, [test_tcgen05_cp_64x128_v2_b4x16_p64_param_0]; -; CHECK-NEXT: ld.param.b64 %rd1, [test_tcgen05_cp_64x128_v2_b4x16_p64_param_1]; +; CHECK-NEXT: ld.param.b32 %r1, [test_tcgen05_cp_64x128_v2_b4x16_p64_cg1_param_0]; +; CHECK-NEXT: ld.param.b64 %rd1, [test_tcgen05_cp_64x128_v2_b4x16_p64_cg1_param_1]; ; CHECK-NEXT: tcgen05.cp.cta_group::1.64x128b.warpx2::01_23.b8x16.b4x16_p64 [%r1], %rd1; -; CHECK-NEXT: tcgen05.cp.cta_group::2.64x128b.warpx2::01_23.b8x16.b4x16_p64 [%r1], %rd1; ; CHECK-NEXT: ret; call void @llvm.nvvm.tcgen05.cp.64x128b_warpx2_01_23.b4x16_p64.cg1(ptr addrspace(6) %addr, i64 %sdesc) + + ret void +} + +define void @test_tcgen05_cp_64x128_v2_b4x16_p64_cg2(ptr addrspace(6) %addr, i64 %sdesc) { +; CHECK-LABEL: test_tcgen05_cp_64x128_v2_b4x16_p64_cg2( +; CHECK: { +; CHECK-NEXT: .reg .b32 %r<2>; +; CHECK-NEXT: .reg .b64 %rd<2>; +; CHECK-EMPTY: +; CHECK-NEXT: // %bb.0: +; CHECK-NEXT: ld.param.b32 %r1, [test_tcgen05_cp_64x128_v2_b4x16_p64_cg2_param_0]; +; CHECK-NEXT: ld.param.b64 %rd1, [test_tcgen05_cp_64x128_v2_b4x16_p64_cg2_param_1]; +; CHECK-NEXT: tcgen05.cp.cta_group::2.64x128b.warpx2::01_23.b8x16.b4x16_p64 [%r1], %rd1; +; CHECK-NEXT: ret; call void @llvm.nvvm.tcgen05.cp.64x128b_warpx2_01_23.b4x16_p64.cg2(ptr addrspace(6) %addr, i64 %sdesc) ret void } -; CHECK-LABEL: test_tcgen05_cp_32x128_b4x16_p64 -define void @test_tcgen05_cp_32x128_b4x16_p64(ptr addrspace(6) %addr, i64 %sdesc) { -; CHECK-LABEL: test_tcgen05_cp_32x128_b4x16_p64( +define void @test_tcgen05_cp_32x128_b4x16_p64_cg1(ptr addrspace(6) %addr, i64 %sdesc) { +; CHECK-LABEL: test_tcgen05_cp_32x128_b4x16_p64_cg1( ; CHECK: { ; CHECK-NEXT: .reg .b32 %r<2>; ; CHECK-NEXT: .reg .b64 %rd<2>; ; CHECK-EMPTY: ; CHECK-NEXT: // %bb.0: -; CHECK-NEXT: ld.param.b32 %r1, [test_tcgen05_cp_32x128_b4x16_p64_param_0]; -; CHECK-NEXT: ld.param.b64 %rd1, [test_tcgen05_cp_32x128_b4x16_p64_param_1]; +; CHECK-NEXT: ld.param.b32 %r1, [test_tcgen05_cp_32x128_b4x16_p64_cg1_param_0]; +; CHECK-NEXT: ld.param.b64 %rd1, [test_tcgen05_cp_32x128_b4x16_p64_cg1_param_1]; ; CHECK-NEXT: tcgen05.cp.cta_group::1.32x128b.warpx4.b8x16.b4x16_p64 [%r1], %rd1; -; CHECK-NEXT: tcgen05.cp.cta_group::2.32x128b.warpx4.b8x16.b4x16_p64 [%r1], %rd1; ; CHECK-NEXT: ret; call void @llvm.nvvm.tcgen05.cp.32x128b_warpx4.b4x16_p64.cg1(ptr addrspace(6) %addr, i64 %sdesc) + + ret void +} + +define void @test_tcgen05_cp_32x128_b4x16_p64_cg2(ptr addrspace(6) %addr, i64 %sdesc) { +; CHECK-LABEL: test_tcgen05_cp_32x128_b4x16_p64_cg2( +; CHECK: { +; CHECK-NEXT: .reg .b32 %r<2>; +; CHECK-NEXT: .reg .b64 %rd<2>; +; CHECK-EMPTY: +; CHECK-NEXT: // %bb.0: +; CHECK-NEXT: ld.param.b32 %r1, [test_tcgen05_cp_32x128_b4x16_p64_cg2_param_0]; +; CHECK-NEXT: ld.param.b64 %rd1, [test_tcgen05_cp_32x128_b4x16_p64_cg2_param_1]; +; CHECK-NEXT: tcgen05.cp.cta_group::2.32x128b.warpx4.b8x16.b4x16_p64 [%r1], %rd1; +; CHECK-NEXT: ret; call void @llvm.nvvm.tcgen05.cp.32x128b_warpx4.b4x16_p64.cg2(ptr addrspace(6) %addr, i64 %sdesc) ret 
void diff --git a/llvm/test/CodeGen/NVPTX/tcgen05-shift.ll b/llvm/test/CodeGen/NVPTX/tcgen05-shift.ll index 8ca6a2a0..bf2adac 100644 --- a/llvm/test/CodeGen/NVPTX/tcgen05-shift.ll +++ b/llvm/test/CodeGen/NVPTX/tcgen05-shift.ll @@ -7,18 +7,29 @@ declare void @llvm.nvvm.tcgen05.shift.down.cg1(ptr addrspace(6) %tmem_addr) declare void @llvm.nvvm.tcgen05.shift.down.cg2(ptr addrspace(6) %tmem_addr) -; CHECK-LABEL: test_tcgen05_shift -define void @test_tcgen05_shift(ptr addrspace(6) %tmem_addr) { -; CHECK-LABEL: test_tcgen05_shift( +define void @test_tcgen05_shift_cg1(ptr addrspace(6) %tmem_addr) { +; CHECK-LABEL: test_tcgen05_shift_cg1( ; CHECK: { ; CHECK-NEXT: .reg .b32 %r<2>; ; CHECK-EMPTY: ; CHECK-NEXT: // %bb.0: -; CHECK-NEXT: ld.param.b32 %r1, [test_tcgen05_shift_param_0]; +; CHECK-NEXT: ld.param.b32 %r1, [test_tcgen05_shift_cg1_param_0]; ; CHECK-NEXT: tcgen05.shift.cta_group::1.down [%r1]; -; CHECK-NEXT: tcgen05.shift.cta_group::2.down [%r1]; ; CHECK-NEXT: ret; call void @llvm.nvvm.tcgen05.shift.down.cg1(ptr addrspace(6) %tmem_addr) + + ret void +} + +define void @test_tcgen05_shift_cg2(ptr addrspace(6) %tmem_addr) { +; CHECK-LABEL: test_tcgen05_shift_cg2( +; CHECK: { +; CHECK-NEXT: .reg .b32 %r<2>; +; CHECK-EMPTY: +; CHECK-NEXT: // %bb.0: +; CHECK-NEXT: ld.param.b32 %r1, [test_tcgen05_shift_cg2_param_0]; +; CHECK-NEXT: tcgen05.shift.cta_group::2.down [%r1]; +; CHECK-NEXT: ret; call void @llvm.nvvm.tcgen05.shift.down.cg2(ptr addrspace(6) %tmem_addr) ret void diff --git a/llvm/test/CodeGen/RISCV/GlobalISel/irtranslator/vec-ret.ll b/llvm/test/CodeGen/RISCV/GlobalISel/irtranslator/vec-ret.ll index 4b1359e..73b0d3a 100644 --- a/llvm/test/CodeGen/RISCV/GlobalISel/irtranslator/vec-ret.ll +++ b/llvm/test/CodeGen/RISCV/GlobalISel/irtranslator/vec-ret.ll @@ -1,7 +1,7 @@ ; NOTE: Assertions have been autogenerated by utils/update_mir_test_checks.py -; RUN: llc -mtriple=riscv32 -mattr=+v,+zvfbfmin,+zvfh -global-isel -stop-after=irtranslator \ +; RUN: llc -mtriple=riscv32 -mattr=+v,+zvfbfmin,+zvfhmin -global-isel -stop-after=irtranslator \ ; RUN: -verify-machineinstrs < %s | FileCheck -check-prefixes=RV32 %s -; RUN: llc -mtriple=riscv64 -mattr=+v,+zvfbfmin,+zvfh -global-isel -stop-after=irtranslator \ +; RUN: llc -mtriple=riscv64 -mattr=+v,+zvfbfmin,+zvfhmin -global-isel -stop-after=irtranslator \ ; RUN: -verify-machineinstrs < %s | FileCheck -check-prefixes=RV64 %s ; ========================================================================== diff --git a/llvm/test/CodeGen/RISCV/GlobalISel/legalizer-info-validation.mir b/llvm/test/CodeGen/RISCV/GlobalISel/legalizer-info-validation.mir index 1361d92..2e500d5 100644 --- a/llvm/test/CodeGen/RISCV/GlobalISel/legalizer-info-validation.mir +++ b/llvm/test/CodeGen/RISCV/GlobalISel/legalizer-info-validation.mir @@ -72,12 +72,12 @@ # DEBUG-NEXT: .. type index coverage check SKIPPED: user-defined predicate detected # DEBUG-NEXT: .. imm index coverage check SKIPPED: user-defined predicate detected # -# DEBUG-NEXT: G_ABDS (opcode 65): 1 type index, 0 imm indices +# DEBUG-NEXT: G_ABDS (opcode [[G_ABDS:[0-9]+]]): 1 type index, 0 imm indices # DEBUG-NEXT:.. type index coverage check SKIPPED: user-defined predicate detected # DEBUG-NEXT:.. imm index coverage check SKIPPED: user-defined predicate detected # -# DEBUG-NEXT:G_ABDU (opcode 66): 1 type index, 0 imm indices -# DEBUG-NEXT:.. opcode 66 is aliased to 65 +# DEBUG-NEXT:G_ABDU (opcode [[G_ABDU:[0-9]+]]): 1 type index, 0 imm indices +# DEBUG-NEXT:.. opcode [[G_ABDU]] is aliased to [[G_ABDS]] # DEBUG-NEXT:.. 
type index coverage check SKIPPED: user-defined predicate detected # DEBUG-NEXT:.. imm index coverage check SKIPPED: user-defined predicate detected # diff --git a/llvm/test/CodeGen/RISCV/double-arith.ll b/llvm/test/CodeGen/RISCV/double-arith.ll index 911692e..f960bc1 100644 --- a/llvm/test/CodeGen/RISCV/double-arith.ll +++ b/llvm/test/CodeGen/RISCV/double-arith.ll @@ -305,9 +305,6 @@ define i32 @fneg_d(double %a, double %b) nounwind { } define double @fsgnjn_d(double %a, double %b) nounwind { -; TODO: fsgnjn.s isn't selected on RV64 because DAGCombiner::visitBITCAST will -; convert (bitconvert (fneg x)) to a xor. -; ; CHECKIFD-LABEL: fsgnjn_d: ; CHECKIFD: # %bb.0: ; CHECKIFD-NEXT: fsgnjn.d fa0, fa0, fa1 diff --git a/llvm/test/CodeGen/RISCV/rv64zbkb.ll b/llvm/test/CodeGen/RISCV/rv64zbkb.ll index 4537d18..b2ad8d7 100644 --- a/llvm/test/CodeGen/RISCV/rv64zbkb.ll +++ b/llvm/test/CodeGen/RISCV/rv64zbkb.ll @@ -441,7 +441,7 @@ define void @pack_lo_packh_hi_packh_2(i8 zeroext %0, i8 zeroext %1, i8 zeroext % ; RV64ZBKB-LABEL: pack_lo_packh_hi_packh_2: ; RV64ZBKB: # %bb.0: ; RV64ZBKB-NEXT: packh a0, a0, a1 -; RV64ZBKB-NEXT: packh a1, a3, a2 +; RV64ZBKB-NEXT: packh a1, a2, a3 ; RV64ZBKB-NEXT: packw a0, a0, a1 ; RV64ZBKB-NEXT: sw a0, 0(a4) ; RV64ZBKB-NEXT: ret @@ -477,7 +477,7 @@ define void @pack_lo_packh_hi_packh_3(i8 %0, i8 %1, i8 %2, i8 %3, ptr %p) nounwi ; RV64ZBKB-LABEL: pack_lo_packh_hi_packh_3: ; RV64ZBKB: # %bb.0: ; RV64ZBKB-NEXT: packh a0, a0, a1 -; RV64ZBKB-NEXT: packh a1, a3, a2 +; RV64ZBKB-NEXT: packh a1, a2, a3 ; RV64ZBKB-NEXT: packw a0, a0, a1 ; RV64ZBKB-NEXT: sw a0, 0(a4) ; RV64ZBKB-NEXT: ret @@ -509,7 +509,7 @@ define i32 @pack_lo_packh_hi_packh_4(i8 zeroext %0, i8 zeroext %1, i8 zeroext %2 ; RV64ZBKB-LABEL: pack_lo_packh_hi_packh_4: ; RV64ZBKB: # %bb.0: ; RV64ZBKB-NEXT: packh a0, a0, a1 -; RV64ZBKB-NEXT: packh a1, a3, a2 +; RV64ZBKB-NEXT: packh a1, a2, a3 ; RV64ZBKB-NEXT: packw a0, a0, a1 ; RV64ZBKB-NEXT: ret %a = zext i8 %0 to i32 diff --git a/llvm/test/CodeGen/SPIRV/extensions/SPV_INTEL_predicated_io/predicated_io_generic.ll b/llvm/test/CodeGen/SPIRV/extensions/SPV_INTEL_predicated_io/predicated_io_generic.ll new file mode 100644 index 0000000..a3127e8 --- /dev/null +++ b/llvm/test/CodeGen/SPIRV/extensions/SPV_INTEL_predicated_io/predicated_io_generic.ll @@ -0,0 +1,36 @@ +; RUN: not llc -O0 -mtriple=spirv64-unknown-unknown %s -o %t.spvt 2>&1 | FileCheck %s --check-prefix=CHECK-ERROR +; RUN: llc -O0 -verify-machineinstrs -mtriple=spirv64-unknown-unknown --spirv-ext=+SPV_INTEL_predicated_io %s -o - | FileCheck %s + +; CHECK-ERROR: LLVM ERROR: OpPredicated[Load/Store]INTEL +; CHECK-ERROR-SAME: instructions require the following SPIR-V extension: SPV_INTEL_predicated_io + +; CHECK-DAG: Capability PredicatedIOINTEL +; CHECK-DAG: Extension "SPV_INTEL_predicated_io" + +; CHECK-DAG: %[[Int32Ty:[0-9]+]] = OpTypeInt 32 0 +; CHECK-DAG: %[[IntPtrTy:[0-9]+]] = OpTypePointer CrossWorkgroup %[[Int32Ty]] +; CHECK-DAG: %[[BoolTy:[0-9]+]] = OpTypeBool +; CHECK-DAG: %[[VoidTy:[0-9]+]] = OpTypeVoid +; CHECK: %[[LoadPtr:[0-9]+]] = OpFunctionParameter %[[IntPtrTy]] +; CHECK: %[[StorePtr:[0-9]+]] = OpFunctionParameter %[[IntPtrTy]] +; CHECK: %[[DefaultVal:[0-9]+]] = OpFunctionParameter %[[Int32Ty]] +; CHECK: %[[StoreObj:[0-9]+]] = OpFunctionParameter %[[Int32Ty]] +; CHECK: %[[Predicate:[0-9]+]] = OpFunctionParameter %[[BoolTy]] +; CHECK: PredicatedLoadINTEL %[[Int32Ty]] %[[LoadPtr]] %[[Predicate]] %[[DefaultVal]] +; CHECK: PredicatedLoadINTEL %[[Int32Ty]] %[[LoadPtr]] %[[Predicate]] %[[DefaultVal]] None 
+; CHECK: PredicatedStoreINTEL %[[StorePtr]] %[[StoreObj]] %[[Predicate]] +; CHECK: PredicatedStoreINTEL %[[StorePtr]] %[[StoreObj]] %[[Predicate]] None + +define spir_func void @foo(ptr addrspace(1) %load_pointer, ptr addrspace(1) %store_pointer, i32 %default_value, i32 %store_object, i1 zeroext %predicate) { +entry: + %1 = call spir_func i32 @_Z27__spirv_PredicatedLoadINTELPU3AS1Kibi(ptr addrspace(1) %load_pointer, i1 %predicate, i32 %default_value) + %2 = call spir_func i32 @_Z27__spirv_PredicatedLoadINTELPU3AS1Kibii(ptr addrspace(1) %load_pointer, i1 %predicate, i32 %default_value, i32 0) + call spir_func void @_Z28__spirv_PredicatedStoreINTELPU3AS1Kiib(ptr addrspace(1) %store_pointer, i32 %store_object, i1 %predicate) + call spir_func void @_Z28__spirv_PredicatedStoreINTELPU3AS1Kiibi(ptr addrspace(1) %store_pointer, i32 %store_object, i1 %predicate, i32 0) + ret void +} + +declare spir_func i32 @_Z27__spirv_PredicatedLoadINTELPU3AS1Kibi(ptr addrspace(1), i1, i32) +declare spir_func i32 @_Z27__spirv_PredicatedLoadINTELPU3AS1Kibii(ptr addrspace(1), i1, i32, i32) +declare spir_func void @_Z28__spirv_PredicatedStoreINTELPU3AS1Kiib(ptr addrspace(1), i32, i1) +declare spir_func void @_Z28__spirv_PredicatedStoreINTELPU3AS1Kiibi(ptr addrspace(1), i32, i1, i32) diff --git a/llvm/test/CodeGen/WebAssembly/fpclamptosat_vec.ll b/llvm/test/CodeGen/WebAssembly/fpclamptosat_vec.ll index 52f57dc..a8d37be 100644 --- a/llvm/test/CodeGen/WebAssembly/fpclamptosat_vec.ll +++ b/llvm/test/CodeGen/WebAssembly/fpclamptosat_vec.ll @@ -434,7 +434,6 @@ entry: define <8 x i16> @stest_f16i16(<8 x half> %x) { ; CHECK-LABEL: stest_f16i16: ; CHECK: .functype stest_f16i16 (f32, f32, f32, f32, f32, f32, f32, f32) -> (v128) -; CHECK-NEXT: .local v128, v128, v128 ; CHECK-NEXT: # %bb.0: # %entry ; CHECK-NEXT: local.get 5 ; CHECK-NEXT: call __truncsfhf2 @@ -474,15 +473,6 @@ define <8 x i16> @stest_f16i16(<8 x half> %x) { ; CHECK-NEXT: call __extendhfsf2 ; CHECK-NEXT: i32.trunc_sat_f32_s ; CHECK-NEXT: i32x4.replace_lane 3 -; CHECK-NEXT: v128.const 32767, 32767, 32767, 32767 -; CHECK-NEXT: local.tee 8 -; CHECK-NEXT: i32x4.min_s -; CHECK-NEXT: v128.const -32768, -32768, -32768, -32768 -; CHECK-NEXT: local.tee 9 -; CHECK-NEXT: i32x4.max_s -; CHECK-NEXT: v128.const 65535, 65535, 65535, 65535 -; CHECK-NEXT: local.tee 10 -; CHECK-NEXT: v128.and ; CHECK-NEXT: local.get 4 ; CHECK-NEXT: i32.trunc_sat_f32_s ; CHECK-NEXT: i32x4.splat @@ -495,13 +485,7 @@ define <8 x i16> @stest_f16i16(<8 x half> %x) { ; CHECK-NEXT: local.get 7 ; CHECK-NEXT: i32.trunc_sat_f32_s ; CHECK-NEXT: i32x4.replace_lane 3 -; CHECK-NEXT: local.get 8 -; CHECK-NEXT: i32x4.min_s -; CHECK-NEXT: local.get 9 -; CHECK-NEXT: i32x4.max_s -; CHECK-NEXT: local.get 10 -; CHECK-NEXT: v128.and -; CHECK-NEXT: i16x8.narrow_i32x4_u +; CHECK-NEXT: i16x8.narrow_i32x4_s ; CHECK-NEXT: # fallthrough-return entry: %conv = fptosi <8 x half> %x to <8 x i32> @@ -516,7 +500,6 @@ entry: define <8 x i16> @utest_f16i16(<8 x half> %x) { ; CHECK-LABEL: utest_f16i16: ; CHECK: .functype utest_f16i16 (f32, f32, f32, f32, f32, f32, f32, f32) -> (v128) -; CHECK-NEXT: .local v128 ; CHECK-NEXT: # %bb.0: # %entry ; CHECK-NEXT: local.get 5 ; CHECK-NEXT: call __truncsfhf2 @@ -556,9 +539,6 @@ define <8 x i16> @utest_f16i16(<8 x half> %x) { ; CHECK-NEXT: call __extendhfsf2 ; CHECK-NEXT: i32.trunc_sat_f32_u ; CHECK-NEXT: i32x4.replace_lane 3 -; CHECK-NEXT: v128.const 65535, 65535, 65535, 65535 -; CHECK-NEXT: local.tee 8 -; CHECK-NEXT: i32x4.min_u ; CHECK-NEXT: local.get 4 ; CHECK-NEXT: i32.trunc_sat_f32_u ; 
CHECK-NEXT: i32x4.splat @@ -571,8 +551,6 @@ define <8 x i16> @utest_f16i16(<8 x half> %x) { ; CHECK-NEXT: local.get 7 ; CHECK-NEXT: i32.trunc_sat_f32_u ; CHECK-NEXT: i32x4.replace_lane 3 -; CHECK-NEXT: local.get 8 -; CHECK-NEXT: i32x4.min_u ; CHECK-NEXT: i16x8.narrow_i32x4_u ; CHECK-NEXT: # fallthrough-return entry: @@ -1861,7 +1839,6 @@ entry: define <8 x i16> @stest_f16i16_mm(<8 x half> %x) { ; CHECK-LABEL: stest_f16i16_mm: ; CHECK: .functype stest_f16i16_mm (f32, f32, f32, f32, f32, f32, f32, f32) -> (v128) -; CHECK-NEXT: .local v128, v128, v128 ; CHECK-NEXT: # %bb.0: # %entry ; CHECK-NEXT: local.get 5 ; CHECK-NEXT: call __truncsfhf2 @@ -1901,15 +1878,6 @@ define <8 x i16> @stest_f16i16_mm(<8 x half> %x) { ; CHECK-NEXT: call __extendhfsf2 ; CHECK-NEXT: i32.trunc_sat_f32_s ; CHECK-NEXT: i32x4.replace_lane 3 -; CHECK-NEXT: v128.const 32767, 32767, 32767, 32767 -; CHECK-NEXT: local.tee 8 -; CHECK-NEXT: i32x4.min_s -; CHECK-NEXT: v128.const -32768, -32768, -32768, -32768 -; CHECK-NEXT: local.tee 9 -; CHECK-NEXT: i32x4.max_s -; CHECK-NEXT: v128.const 65535, 65535, 65535, 65535 -; CHECK-NEXT: local.tee 10 -; CHECK-NEXT: v128.and ; CHECK-NEXT: local.get 4 ; CHECK-NEXT: i32.trunc_sat_f32_s ; CHECK-NEXT: i32x4.splat @@ -1922,13 +1890,7 @@ define <8 x i16> @stest_f16i16_mm(<8 x half> %x) { ; CHECK-NEXT: local.get 7 ; CHECK-NEXT: i32.trunc_sat_f32_s ; CHECK-NEXT: i32x4.replace_lane 3 -; CHECK-NEXT: local.get 8 -; CHECK-NEXT: i32x4.min_s -; CHECK-NEXT: local.get 9 -; CHECK-NEXT: i32x4.max_s -; CHECK-NEXT: local.get 10 -; CHECK-NEXT: v128.and -; CHECK-NEXT: i16x8.narrow_i32x4_u +; CHECK-NEXT: i16x8.narrow_i32x4_s ; CHECK-NEXT: # fallthrough-return entry: %conv = fptosi <8 x half> %x to <8 x i32> @@ -1941,7 +1903,6 @@ entry: define <8 x i16> @utest_f16i16_mm(<8 x half> %x) { ; CHECK-LABEL: utest_f16i16_mm: ; CHECK: .functype utest_f16i16_mm (f32, f32, f32, f32, f32, f32, f32, f32) -> (v128) -; CHECK-NEXT: .local v128 ; CHECK-NEXT: # %bb.0: # %entry ; CHECK-NEXT: local.get 5 ; CHECK-NEXT: call __truncsfhf2 @@ -1981,9 +1942,6 @@ define <8 x i16> @utest_f16i16_mm(<8 x half> %x) { ; CHECK-NEXT: call __extendhfsf2 ; CHECK-NEXT: i32.trunc_sat_f32_u ; CHECK-NEXT: i32x4.replace_lane 3 -; CHECK-NEXT: v128.const 65535, 65535, 65535, 65535 -; CHECK-NEXT: local.tee 8 -; CHECK-NEXT: i32x4.min_u ; CHECK-NEXT: local.get 4 ; CHECK-NEXT: i32.trunc_sat_f32_u ; CHECK-NEXT: i32x4.splat @@ -1996,8 +1954,6 @@ define <8 x i16> @utest_f16i16_mm(<8 x half> %x) { ; CHECK-NEXT: local.get 7 ; CHECK-NEXT: i32.trunc_sat_f32_u ; CHECK-NEXT: i32x4.replace_lane 3 -; CHECK-NEXT: local.get 8 -; CHECK-NEXT: i32x4.min_u ; CHECK-NEXT: i16x8.narrow_i32x4_u ; CHECK-NEXT: # fallthrough-return entry: diff --git a/llvm/test/CodeGen/WebAssembly/saturating-truncation.ll b/llvm/test/CodeGen/WebAssembly/saturating-truncation.ll new file mode 100644 index 0000000..f3f3ba9 --- /dev/null +++ b/llvm/test/CodeGen/WebAssembly/saturating-truncation.ll @@ -0,0 +1,87 @@ +; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py UTC_ARGS: --version 5 + +; RUN: llc < %s -verify-machineinstrs -mattr=+simd128 | FileCheck %s + +target triple = "wasm32-unknown-unknown" + +declare <8 x i32> @llvm.smin.v8i32(<8 x i32>, <8 x i32>) #2 +declare <8 x i32> @llvm.smax.v8i32(<8 x i32>, <8 x i32>) #2 + +define <16 x i8> @i16_signed(<8 x i16> %a, <8 x i16> %b) { +; CHECK-LABEL: i16_signed: +; CHECK: .functype i16_signed (v128, v128) -> (v128) +; CHECK-NEXT: # %bb.0: # %bb2 +; CHECK-NEXT: local.get 0 +; CHECK-NEXT: local.get 1 +; CHECK-NEXT: 
i8x16.narrow_i16x8_s +; CHECK-NEXT: # fallthrough-return +bb2: + %0 = shufflevector <8 x i16> %a, <8 x i16> %b, <16 x i32> <i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7, i32 8, i32 9, i32 10, i32 11, i32 12, i32 13, i32 14, i32 15> + %1 = tail call <16 x i16> @llvm.smax.v16i16(<16 x i16> %0, <16 x i16> splat (i16 -128)) + %2 = tail call <16 x i16> @llvm.smin.v16i16(<16 x i16> %1, <16 x i16> splat (i16 127)) + %3 = trunc nsw <16 x i16> %2 to <16 x i8> + ret <16 x i8> %3 +} + +define <8 x i16> @i32_signed(<4 x i32> %a, <4 x i32> %b) { +; CHECK-LABEL: i32_signed: +; CHECK: .functype i32_signed (v128, v128) -> (v128) +; CHECK-NEXT: # %bb.0: # %bb2 +; CHECK-NEXT: local.get 0 +; CHECK-NEXT: local.get 1 +; CHECK-NEXT: i16x8.narrow_i32x4_s +; CHECK-NEXT: # fallthrough-return +bb2: + %0 = shufflevector <4 x i32> %a, <4 x i32> %b, <8 x i32> <i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7> + %1 = tail call <8 x i32> @llvm.smax.v8i32(<8 x i32> %0, <8 x i32> splat (i32 -32768)) + %2 = tail call <8 x i32> @llvm.smin.v8i32(<8 x i32> %1, <8 x i32> splat (i32 32767)) + %3 = trunc nsw <8 x i32> %2 to <8 x i16> + ret <8 x i16> %3 +} + +define <8 x i16> @i32_signed_flipped(<4 x i32> %a, <4 x i32> %b) { +; CHECK-LABEL: i32_signed_flipped: +; CHECK: .functype i32_signed_flipped (v128, v128) -> (v128) +; CHECK-NEXT: # %bb.0: # %bb2 +; CHECK-NEXT: local.get 0 +; CHECK-NEXT: local.get 1 +; CHECK-NEXT: i16x8.narrow_i32x4_s +; CHECK-NEXT: # fallthrough-return +bb2: + %0 = shufflevector <4 x i32> %a, <4 x i32> %b, <8 x i32> <i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7> + %1 = tail call <8 x i32> @llvm.smin.v8i32(<8 x i32> splat (i32 32767), <8 x i32> %0) + %2 = tail call <8 x i32> @llvm.smax.v8i32(<8 x i32> splat (i32 -32768), <8 x i32> %1) + %3 = trunc nsw <8 x i32> %2 to <8 x i16> + ret <8 x i16> %3 +} + +define <16 x i8> @i16_unsigned(<8 x i16> %a, <8 x i16> %b) { +; CHECK-LABEL: i16_unsigned: +; CHECK: .functype i16_unsigned (v128, v128) -> (v128) +; CHECK-NEXT: # %bb.0: # %bb2 +; CHECK-NEXT: local.get 0 +; CHECK-NEXT: local.get 1 +; CHECK-NEXT: i8x16.narrow_i16x8_u +; CHECK-NEXT: # fallthrough-return +bb2: + %0 = shufflevector <8 x i16> %a, <8 x i16> %b, <16 x i32> <i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7, i32 8, i32 9, i32 10, i32 11, i32 12, i32 13, i32 14, i32 15> + %1 = tail call <16 x i16> @llvm.umin.v16i16(<16 x i16> %0, <16 x i16> splat (i16 255)) + %2 = trunc nuw <16 x i16> %1 to <16 x i8> + ret <16 x i8> %2 +} + +define <8 x i16> @i32_unsigned(<4 x i32> %a, <4 x i32> %b) { +; CHECK-LABEL: i32_unsigned: +; CHECK: .functype i32_unsigned (v128, v128) -> (v128) +; CHECK-NEXT: # %bb.0: # %bb2 +; CHECK-NEXT: local.get 0 +; CHECK-NEXT: local.get 1 +; CHECK-NEXT: i16x8.narrow_i32x4_u +; CHECK-NEXT: # fallthrough-return +bb2: + %0 = shufflevector <4 x i32> %a, <4 x i32> %b, <8 x i32> <i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 6, i32 7> + %1 = tail call <8 x i32> @llvm.umin.v8i32(<8 x i32> %0, <8 x i32> splat (i32 65535)) + %2 = trunc nsw <8 x i32> %1 to <8 x i16> + ret <8 x i16> %2 +} diff --git a/llvm/test/CodeGen/X86/and-mask-variable.ll b/llvm/test/CodeGen/X86/and-mask-variable.ll new file mode 100644 index 0000000..d89f0db --- /dev/null +++ b/llvm/test/CodeGen/X86/and-mask-variable.ll @@ -0,0 +1,212 @@ +; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py +; RUN: llc -mtriple=i686-unknown-linux-gnu -mattr=-bmi,-tbm,-bmi2,+fast-bextr < %s | FileCheck %s --check-prefixes=X86-NOBMI +; RUN: llc
-mtriple=i686-unknown-linux-gnu -mattr=+bmi,+tbm,+bmi2,+fast-bextr < %s | FileCheck %s --check-prefixes=X86-BMI2 +; RUN: llc -mtriple=i686-unknown-linux-gnu -mattr=+bmi,-tbm,+bmi2,+fast-bextr < %s | FileCheck %s --check-prefixes=X86-BMI2 +; RUN: llc -mtriple=x86_64-unknown-linux-gnu -mattr=-bmi,-tbm,-bmi2,+fast-bextr < %s | FileCheck %s --check-prefixes=X64-NOBMI +; RUN: llc -mtriple=x86_64-unknown-linux-gnu -mattr=+bmi,+tbm,+bmi2,+fast-bextr < %s | FileCheck %s --check-prefixes=X64-BMI2 +; RUN: llc -mtriple=x86_64-unknown-linux-gnu -mattr=+bmi,-tbm,+bmi2,+fast-bextr < %s | FileCheck %s --check-prefixes=X64-BMI2 + +define i32 @mask_pair(i32 %x, i32 %y) nounwind { +; X86-NOBMI-LABEL: mask_pair: +; X86-NOBMI: # %bb.0: +; X86-NOBMI-NEXT: movzbl {{[0-9]+}}(%esp), %ecx +; X86-NOBMI-NEXT: movl {{[0-9]+}}(%esp), %eax +; X86-NOBMI-NEXT: shrl %cl, %eax +; X86-NOBMI-NEXT: shll %cl, %eax +; X86-NOBMI-NEXT: retl +; +; X86-BMI2-LABEL: mask_pair: +; X86-BMI2: # %bb.0: +; X86-BMI2-NEXT: movzbl {{[0-9]+}}(%esp), %eax +; X86-BMI2-NEXT: shrxl %eax, {{[0-9]+}}(%esp), %ecx +; X86-BMI2-NEXT: shlxl %eax, %ecx, %eax +; X86-BMI2-NEXT: retl +; +; X64-NOBMI-LABEL: mask_pair: +; X64-NOBMI: # %bb.0: +; X64-NOBMI-NEXT: movl %esi, %ecx +; X64-NOBMI-NEXT: movl %edi, %eax +; X64-NOBMI-NEXT: shrl %cl, %eax +; X64-NOBMI-NEXT: # kill: def $cl killed $cl killed $ecx +; X64-NOBMI-NEXT: shll %cl, %eax +; X64-NOBMI-NEXT: retq +; +; X64-BMI2-LABEL: mask_pair: +; X64-BMI2: # %bb.0: +; X64-BMI2-NEXT: shrxl %esi, %edi, %eax +; X64-BMI2-NEXT: shlxl %esi, %eax, %eax +; X64-BMI2-NEXT: retq + %shl = shl nsw i32 -1, %y + %and = and i32 %shl, %x + ret i32 %and +} + +define i64 @mask_pair_64(i64 %x, i64 %y) nounwind { +; X86-NOBMI-LABEL: mask_pair_64: +; X86-NOBMI: # %bb.0: +; X86-NOBMI-NEXT: movzbl {{[0-9]+}}(%esp), %ecx +; X86-NOBMI-NEXT: movl $-1, %edx +; X86-NOBMI-NEXT: movl $-1, %eax +; X86-NOBMI-NEXT: shll %cl, %eax +; X86-NOBMI-NEXT: testb $32, %cl +; X86-NOBMI-NEXT: je .LBB1_2 +; X86-NOBMI-NEXT: # %bb.1: +; X86-NOBMI-NEXT: movl %eax, %edx +; X86-NOBMI-NEXT: xorl %eax, %eax +; X86-NOBMI-NEXT: .LBB1_2: +; X86-NOBMI-NEXT: andl {{[0-9]+}}(%esp), %eax +; X86-NOBMI-NEXT: andl {{[0-9]+}}(%esp), %edx +; X86-NOBMI-NEXT: retl +; +; X86-BMI2-LABEL: mask_pair_64: +; X86-BMI2: # %bb.0: +; X86-BMI2-NEXT: movzbl {{[0-9]+}}(%esp), %ecx +; X86-BMI2-NEXT: movl $-1, %edx +; X86-BMI2-NEXT: shlxl %ecx, %edx, %eax +; X86-BMI2-NEXT: testb $32, %cl +; X86-BMI2-NEXT: je .LBB1_2 +; X86-BMI2-NEXT: # %bb.1: +; X86-BMI2-NEXT: movl %eax, %edx +; X86-BMI2-NEXT: xorl %eax, %eax +; X86-BMI2-NEXT: .LBB1_2: +; X86-BMI2-NEXT: andl {{[0-9]+}}(%esp), %eax +; X86-BMI2-NEXT: andl {{[0-9]+}}(%esp), %edx +; X86-BMI2-NEXT: retl +; +; X64-NOBMI-LABEL: mask_pair_64: +; X64-NOBMI: # %bb.0: +; X64-NOBMI-NEXT: movq %rsi, %rcx +; X64-NOBMI-NEXT: movq %rdi, %rax +; X64-NOBMI-NEXT: shrq %cl, %rax +; X64-NOBMI-NEXT: # kill: def $cl killed $cl killed $rcx +; X64-NOBMI-NEXT: shlq %cl, %rax +; X64-NOBMI-NEXT: retq +; +; X64-BMI2-LABEL: mask_pair_64: +; X64-BMI2: # %bb.0: +; X64-BMI2-NEXT: shrxq %rsi, %rdi, %rax +; X64-BMI2-NEXT: shlxq %rsi, %rax, %rax +; X64-BMI2-NEXT: retq + %shl = shl nsw i64 -1, %y + %and = and i64 %shl, %x + ret i64 %and +} + +define i128 @mask_pair_128(i128 %x, i128 %y) nounwind { +; X86-NOBMI-LABEL: mask_pair_128: +; X86-NOBMI: # %bb.0: +; X86-NOBMI-NEXT: pushl %ebx +; X86-NOBMI-NEXT: pushl %edi +; X86-NOBMI-NEXT: pushl %esi +; X86-NOBMI-NEXT: subl $32, %esp +; X86-NOBMI-NEXT: movl {{[0-9]+}}(%esp), %ecx +; X86-NOBMI-NEXT: movl {{[0-9]+}}(%esp), %eax +; 
X86-NOBMI-NEXT: movl $-1, {{[0-9]+}}(%esp) +; X86-NOBMI-NEXT: movl $-1, {{[0-9]+}}(%esp) +; X86-NOBMI-NEXT: movl $-1, {{[0-9]+}}(%esp) +; X86-NOBMI-NEXT: movl $-1, {{[0-9]+}}(%esp) +; X86-NOBMI-NEXT: movl $0, {{[0-9]+}}(%esp) +; X86-NOBMI-NEXT: movl $0, {{[0-9]+}}(%esp) +; X86-NOBMI-NEXT: movl $0, {{[0-9]+}}(%esp) +; X86-NOBMI-NEXT: movl $0, (%esp) +; X86-NOBMI-NEXT: movl %ecx, %edx +; X86-NOBMI-NEXT: shrb $3, %dl +; X86-NOBMI-NEXT: andb $12, %dl +; X86-NOBMI-NEXT: negb %dl +; X86-NOBMI-NEXT: movsbl %dl, %ebx +; X86-NOBMI-NEXT: movl 24(%esp,%ebx), %edx +; X86-NOBMI-NEXT: movl 28(%esp,%ebx), %esi +; X86-NOBMI-NEXT: shldl %cl, %edx, %esi +; X86-NOBMI-NEXT: movl 16(%esp,%ebx), %edi +; X86-NOBMI-NEXT: movl 20(%esp,%ebx), %ebx +; X86-NOBMI-NEXT: shldl %cl, %ebx, %edx +; X86-NOBMI-NEXT: shldl %cl, %edi, %ebx +; X86-NOBMI-NEXT: # kill: def $cl killed $cl killed $ecx +; X86-NOBMI-NEXT: shll %cl, %edi +; X86-NOBMI-NEXT: andl {{[0-9]+}}(%esp), %edx +; X86-NOBMI-NEXT: andl {{[0-9]+}}(%esp), %esi +; X86-NOBMI-NEXT: andl {{[0-9]+}}(%esp), %edi +; X86-NOBMI-NEXT: andl {{[0-9]+}}(%esp), %ebx +; X86-NOBMI-NEXT: movl %esi, 12(%eax) +; X86-NOBMI-NEXT: movl %edx, 8(%eax) +; X86-NOBMI-NEXT: movl %ebx, 4(%eax) +; X86-NOBMI-NEXT: movl %edi, (%eax) +; X86-NOBMI-NEXT: addl $32, %esp +; X86-NOBMI-NEXT: popl %esi +; X86-NOBMI-NEXT: popl %edi +; X86-NOBMI-NEXT: popl %ebx +; X86-NOBMI-NEXT: retl $4 +; +; X86-BMI2-LABEL: mask_pair_128: +; X86-BMI2: # %bb.0: +; X86-BMI2-NEXT: pushl %ebx +; X86-BMI2-NEXT: pushl %edi +; X86-BMI2-NEXT: pushl %esi +; X86-BMI2-NEXT: subl $32, %esp +; X86-BMI2-NEXT: movl {{[0-9]+}}(%esp), %ecx +; X86-BMI2-NEXT: movl {{[0-9]+}}(%esp), %eax +; X86-BMI2-NEXT: movl $-1, {{[0-9]+}}(%esp) +; X86-BMI2-NEXT: movl $-1, {{[0-9]+}}(%esp) +; X86-BMI2-NEXT: movl $-1, {{[0-9]+}}(%esp) +; X86-BMI2-NEXT: movl $-1, {{[0-9]+}}(%esp) +; X86-BMI2-NEXT: movl $0, {{[0-9]+}}(%esp) +; X86-BMI2-NEXT: movl $0, {{[0-9]+}}(%esp) +; X86-BMI2-NEXT: movl $0, {{[0-9]+}}(%esp) +; X86-BMI2-NEXT: movl $0, (%esp) +; X86-BMI2-NEXT: movl %ecx, %edx +; X86-BMI2-NEXT: shrb $3, %dl +; X86-BMI2-NEXT: andb $12, %dl +; X86-BMI2-NEXT: negb %dl +; X86-BMI2-NEXT: movsbl %dl, %edi +; X86-BMI2-NEXT: movl 24(%esp,%edi), %edx +; X86-BMI2-NEXT: movl 28(%esp,%edi), %esi +; X86-BMI2-NEXT: shldl %cl, %edx, %esi +; X86-BMI2-NEXT: movl 16(%esp,%edi), %ebx +; X86-BMI2-NEXT: movl 20(%esp,%edi), %edi +; X86-BMI2-NEXT: shldl %cl, %edi, %edx +; X86-BMI2-NEXT: shldl %cl, %ebx, %edi +; X86-BMI2-NEXT: shlxl %ecx, %ebx, %ecx +; X86-BMI2-NEXT: andl {{[0-9]+}}(%esp), %edx +; X86-BMI2-NEXT: andl {{[0-9]+}}(%esp), %esi +; X86-BMI2-NEXT: andl {{[0-9]+}}(%esp), %ecx +; X86-BMI2-NEXT: andl {{[0-9]+}}(%esp), %edi +; X86-BMI2-NEXT: movl %esi, 12(%eax) +; X86-BMI2-NEXT: movl %edx, 8(%eax) +; X86-BMI2-NEXT: movl %edi, 4(%eax) +; X86-BMI2-NEXT: movl %ecx, (%eax) +; X86-BMI2-NEXT: addl $32, %esp +; X86-BMI2-NEXT: popl %esi +; X86-BMI2-NEXT: popl %edi +; X86-BMI2-NEXT: popl %ebx +; X86-BMI2-NEXT: retl $4 +; +; X64-NOBMI-LABEL: mask_pair_128: +; X64-NOBMI: # %bb.0: +; X64-NOBMI-NEXT: movq %rdx, %rcx +; X64-NOBMI-NEXT: movq $-1, %rdx +; X64-NOBMI-NEXT: movq $-1, %r8 +; X64-NOBMI-NEXT: shlq %cl, %r8 +; X64-NOBMI-NEXT: xorl %eax, %eax +; X64-NOBMI-NEXT: testb $64, %cl +; X64-NOBMI-NEXT: cmovneq %r8, %rdx +; X64-NOBMI-NEXT: cmoveq %r8, %rax +; X64-NOBMI-NEXT: andq %rdi, %rax +; X64-NOBMI-NEXT: andq %rsi, %rdx +; X64-NOBMI-NEXT: retq +; +; X64-BMI2-LABEL: mask_pair_128: +; X64-BMI2: # %bb.0: +; X64-BMI2-NEXT: movq $-1, %rcx +; X64-BMI2-NEXT: shlxq %rdx, %rcx, %r8 +; 
X64-BMI2-NEXT: xorl %eax, %eax +; X64-BMI2-NEXT: testb $64, %dl +; X64-BMI2-NEXT: cmovneq %r8, %rcx +; X64-BMI2-NEXT: cmoveq %r8, %rax +; X64-BMI2-NEXT: andq %rdi, %rax +; X64-BMI2-NEXT: andq %rsi, %rcx +; X64-BMI2-NEXT: movq %rcx, %rdx +; X64-BMI2-NEXT: retq + %shl = shl nsw i128 -1, %y + %and = and i128 %shl, %x + ret i128 %and +} diff --git a/llvm/test/CodeGen/X86/ptrtoaddr-fast-isel.ll b/llvm/test/CodeGen/X86/ptrtoaddr-fast-isel.ll new file mode 100644 index 0000000..c302d41 --- /dev/null +++ b/llvm/test/CodeGen/X86/ptrtoaddr-fast-isel.ll @@ -0,0 +1,11 @@ +; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py +; RUN: llc -mtriple=x86_64-linux-gnu -fast-isel -fast-isel-abort=1 < %s -o - | FileCheck %s + +define i64 @ptrtoaddr(ptr %p) { +; CHECK-LABEL: ptrtoaddr: +; CHECK: # %bb.0: +; CHECK-NEXT: movq %rdi, %rax +; CHECK-NEXT: retq + %addr = ptrtoaddr ptr %p to i64 + ret i64 %addr +} diff --git a/llvm/test/DebugInfo/X86/instr-ref-opt-bisect2.ll b/llvm/test/DebugInfo/X86/instr-ref-opt-bisect2.ll new file mode 100644 index 0000000..92aedfe --- /dev/null +++ b/llvm/test/DebugInfo/X86/instr-ref-opt-bisect2.ll @@ -0,0 +1,36 @@ +; RUN: llc %s -o - -stop-after=livedebugvalues -opt-bisect-limit=1 | FileCheck %s +; RUN: llc %s -o - -stop-after=livedebugvalues -opt-bisect-limit=10 | FileCheck %s +; RUN: llc %s -o - -stop-after=livedebugvalues -opt-bisect-limit=100 | FileCheck %s + +; RUN: llc %s -o - -stop-after=livedebugvalues -opt-bisect-limit=1 -fast-isel=true | FileCheck %s +; RUN: llc %s -o - -stop-after=livedebugvalues -opt-bisect-limit=10 -fast-isel=true | FileCheck %s +; RUN: llc %s -o - -stop-after=livedebugvalues -opt-bisect-limit=100 -fast-isel=true | FileCheck %s + +; This test has the same purpose as instr-ref-opt-bisect.ll: to check that we do +; not run into an assert when opt-bisect changes the optimisation level. +; It simply tests different IR. + +; CHECK: DBG_VALUE + +target triple = "x86_64-pc-windows-msvc" + +define i1 @foo(i32 %arg) !dbg !3 { +entry: + #dbg_value(i32 %arg, !4, !DIExpression(), !5) + switch i32 %arg, label %bb [ + i32 810, label %bb + ], !dbg !5 +bb: + %a = load volatile i1, ptr null, align 1 + ret i1 false +} + +!llvm.dbg.cu = !{!0} +!llvm.module.flags = !{!2} + +!0 = distinct !DICompileUnit(language: DW_LANG_C_plus_plus_14, file: !1) +!1 = !DIFile(filename: "instr-ref-opt-bisect2.ll", directory: ".") +!2 = !{i32 2, !"Debug Info Version", i32 3} +!3 = distinct !DISubprogram(name: "instr-ref-opt-bisect2", file: !1, unit: !0) +!4 = !DILocalVariable(name: "arg", arg: 2, scope: !3) +!5 = !DILocation(line: 0, scope: !3) diff --git a/llvm/test/Instrumentation/AddressSanitizer/alloca-offset-lifetime.ll b/llvm/test/Instrumentation/AddressSanitizer/alloca-offset-lifetime.ll deleted file mode 100644 index a4846176..0000000 --- a/llvm/test/Instrumentation/AddressSanitizer/alloca-offset-lifetime.ll +++ /dev/null @@ -1,27 +0,0 @@ -; Test that ASAN will not instrument lifetime markers on alloca offsets.
-; -; RUN: opt < %s -passes=asan --asan-use-after-scope -S | FileCheck %s - -target datalayout = "e-m:o-p270:32:32-p271:32:32-p272:64:64-i64:64-f80:128-n8:16:32:64-S128" -target triple = "x86_64-apple-macosx10.15.0" - -%t = type { ptr, ptr, %sub, i64 } -%sub = type { i32 } - -define void @foo() sanitize_address { -entry: - %0 = alloca %t, align 8 - %x = getelementptr inbounds %t, ptr %0, i64 0, i32 2 - call void @llvm.lifetime.start.p0(i64 4, ptr nonnull %x) - call void @bar(ptr nonnull %x) - call void @llvm.lifetime.end.p0(i64 4, ptr nonnull %x) #3 - ret void -} - -declare void @llvm.lifetime.start.p0(i64 immarg, ptr nocapture) -declare void @bar(ptr) -declare void @llvm.lifetime.end.p0(i64 immarg, ptr nocapture) - -; CHECK: store i64 %[[STACK_BASE:.+]], ptr %asan_local_stack_base, align 8 -; CHECK-NOT: store i8 0 -; CHECK: call void @bar(ptr nonnull %x) diff --git a/llvm/test/Instrumentation/AddressSanitizer/calls-only-smallfn.ll b/llvm/test/Instrumentation/AddressSanitizer/calls-only-smallfn.ll index 0859a7e..d7204e6 100644 --- a/llvm/test/Instrumentation/AddressSanitizer/calls-only-smallfn.ll +++ b/llvm/test/Instrumentation/AddressSanitizer/calls-only-smallfn.ll @@ -9,15 +9,15 @@ define void @foo() #0 { entry: %array01 = alloca [1 x i8], align 1 %array02 = alloca [2 x i8], align 1 -; OUTLINE: call void @__asan_set_shadow_f1(i64 %23, i64 4) -; OUTLINE: call void @__asan_set_shadow_01(i64 %24, i64 1) -; OUTLINE: call void @__asan_set_shadow_f2(i64 %25, i64 1) -; OUTLINE: call void @__asan_set_shadow_02(i64 %26, i64 1) -; OUTLINE: call void @__asan_set_shadow_f3(i64 %27, i64 1) -; OUTLINE: call void @__asan_stack_free_0(i64 %7, i64 64) -; OUTLINE: call void @__asan_set_shadow_00(i64 %55, i64 8) -; INLINE: store i64 -935919682371587599, ptr %24, align 1 -; INLINE: store i64 -723401728380766731, ptr %52, align 1 +; OUTLINE: call void @__asan_set_shadow_f1(i64 %{{.+}}, i64 4) +; OUTLINE: call void @__asan_set_shadow_01(i64 %{{.+}}, i64 1) +; OUTLINE: call void @__asan_set_shadow_f2(i64 %{{.+}}, i64 1) +; OUTLINE: call void @__asan_set_shadow_02(i64 %{{.+}}, i64 1) +; OUTLINE: call void @__asan_set_shadow_f3(i64 %{{.+}}, i64 1) +; OUTLINE: call void @__asan_stack_free_0(i64 %{{.+}}, i64 64) +; OUTLINE: call void @__asan_set_shadow_00(i64 %{{.+}}, i64 8) +; INLINE: store i64 -935919682371587599, ptr %{{.+}}, align 1 +; INLINE: store i64 -723401728380766731, ptr %{{.+}}, align 1 %arrayidx = getelementptr inbounds [1 x i8], ptr %array01, i64 0, i64 1 store i8 1, ptr %arrayidx, align 1 %arrayidx1 = getelementptr inbounds [2 x i8], ptr %array02, i64 0, i64 2 diff --git a/llvm/test/Instrumentation/AddressSanitizer/calls-only.ll b/llvm/test/Instrumentation/AddressSanitizer/calls-only.ll index 5f122ad..6f52289 100644 --- a/llvm/test/Instrumentation/AddressSanitizer/calls-only.ll +++ b/llvm/test/Instrumentation/AddressSanitizer/calls-only.ll @@ -14,26 +14,26 @@ entry: %array05 = alloca [5 x i8], align 1 %array06 = alloca [6 x i8], align 1 %array07 = alloca [7 x i8], align 1 -; OUTLINE: call void @__asan_set_shadow_f1(i64 %33, i64 4) -; OUTLINE: call void @__asan_set_shadow_01(i64 %34, i64 1) -; OUTLINE: call void @__asan_set_shadow_f2(i64 %35, i64 1) -; OUTLINE: call void @__asan_set_shadow_02(i64 %36, i64 1) -; OUTLINE: call void @__asan_set_shadow_f2(i64 %37, i64 1) -; OUTLINE: call void @__asan_set_shadow_03(i64 %38, i64 1) -; OUTLINE: call void @__asan_set_shadow_f2(i64 %39, i64 1) -; OUTLINE: call void @__asan_set_shadow_04(i64 %40, i64 1) -; OUTLINE: call void @__asan_set_shadow_f2(i64 %41, i64 
1) -; OUTLINE: call void @__asan_set_shadow_05(i64 %42, i64 1) -; OUTLINE: call void @__asan_set_shadow_f2(i64 %43, i64 3) -; OUTLINE: call void @__asan_set_shadow_06(i64 %44, i64 1) -; OUTLINE: call void @__asan_set_shadow_f2(i64 %45, i64 3) -; OUTLINE: call void @__asan_set_shadow_07(i64 %46, i64 1) -; OUTLINE: call void @__asan_set_shadow_f3(i64 %47, i64 3) -; OUTLINE: call void @__asan_stack_free_2(i64 %7, i64 192) -; OUTLINE: call void @__asan_set_shadow_00(i64 %135, i64 24) -; INLINE: store i64 -1007977276409515535, ptr %34, align 1 -; INLINE: store i64 -940423264817843709, ptr %36, align 1 -; INLINE: store i64 -868083087686045178, ptr %38, align 1 +; OUTLINE: call void @__asan_set_shadow_f1(i64 %{{.+}}, i64 4) +; OUTLINE: call void @__asan_set_shadow_01(i64 %{{.+}}, i64 1) +; OUTLINE: call void @__asan_set_shadow_f2(i64 %{{.+}}, i64 1) +; OUTLINE: call void @__asan_set_shadow_02(i64 %{{.+}}, i64 1) +; OUTLINE: call void @__asan_set_shadow_f2(i64 %{{.+}}, i64 1) +; OUTLINE: call void @__asan_set_shadow_03(i64 %{{.+}}, i64 1) +; OUTLINE: call void @__asan_set_shadow_f2(i64 %{{.+}}, i64 1) +; OUTLINE: call void @__asan_set_shadow_04(i64 %{{.+}}, i64 1) +; OUTLINE: call void @__asan_set_shadow_f2(i64 %{{.+}}, i64 1) +; OUTLINE: call void @__asan_set_shadow_05(i64 %{{.+}}, i64 1) +; OUTLINE: call void @__asan_set_shadow_f2(i64 %{{.+}}, i64 3) +; OUTLINE: call void @__asan_set_shadow_06(i64 %{{.+}}, i64 1) +; OUTLINE: call void @__asan_set_shadow_f2(i64 %{{.+}}, i64 3) +; OUTLINE: call void @__asan_set_shadow_07(i64 %{{.+}}, i64 1) +; OUTLINE: call void @__asan_set_shadow_f3(i64 %{{.+}}, i64 3) +; OUTLINE: call void @__asan_stack_free_2(i64 %{{.+}}, i64 192) +; OUTLINE: call void @__asan_set_shadow_00(i64 %{{.+}}, i64 24) +; INLINE: store i64 -1007977276409515535, ptr %{{.+}}, align 1 +; INLINE: store i64 -940423264817843709, ptr %{{.+}}, align 1 +; INLINE: store i64 -868083087686045178, ptr %{{.+}}, align 1 %arrayidx = getelementptr inbounds [1 x i8], ptr %array01, i64 0, i64 1 store i8 1, ptr %arrayidx, align 1 %arrayidx1 = getelementptr inbounds [2 x i8], ptr %array02, i64 0, i64 2 @@ -48,7 +48,7 @@ entry: store i8 6, ptr %arrayidx5, align 1 %arrayidx6 = getelementptr inbounds [7 x i8], ptr %array07, i64 0, i64 7 store i8 7, ptr %arrayidx6, align 1 -; CHECK-NOT: store i64 -723401728380766731, ptr %126, align 1 +; CHECK-NOT: store i64 -723401728380766731, ptr %{{.+}}, align 1 ret void } attributes #0 = { noinline nounwind optnone sanitize_address ssp uwtable(sync) "frame-pointer"="non-leaf" "min-legal-vector-width"="0" "no-trapping-math"="true" "stack-protector-buffer-size"="8" "target-cpu"="apple-m1" "target-features"="+aes,+crc,+crypto,+dotprod,+fp-armv8,+fp16fml,+fullfp16,+lse,+neon,+ras,+rcpc,+rdm,+sha2,+sha3,+sm4,+v8.1a,+v8.2a,+v8.3a,+v8.4a,+v8a" } diff --git a/llvm/test/Instrumentation/AllocToken/extralibfuncs.ll b/llvm/test/Instrumentation/AllocToken/extralibfuncs.ll index 5f08552..0e382b2 100644 --- a/llvm/test/Instrumentation/AllocToken/extralibfuncs.ll +++ b/llvm/test/Instrumentation/AllocToken/extralibfuncs.ll @@ -38,7 +38,7 @@ entry: ret ptr %ptr1 } -!0 = !{!"int"} +!0 = !{!"int", i1 0} ;. -; CHECK: [[META0]] = !{!"int"} +; CHECK: [[META0]] = !{!"int", i1 false} ;. 
diff --git a/llvm/test/Instrumentation/AllocToken/nonlibcalls.ll b/llvm/test/Instrumentation/AllocToken/nonlibcalls.ll index e023ab6b..19673da 100644 --- a/llvm/test/Instrumentation/AllocToken/nonlibcalls.ll +++ b/llvm/test/Instrumentation/AllocToken/nonlibcalls.ll @@ -79,7 +79,7 @@ entry: ret ptr %ptr1 } -!0 = !{!"int"} +!0 = !{!"int", i1 0} ;. -; CHECK: [[META0]] = !{!"int"} +; CHECK: [[META0]] = !{!"int", i1 false} ;. diff --git a/llvm/test/Instrumentation/AllocToken/remark.ll b/llvm/test/Instrumentation/AllocToken/remark.ll index a2404526..f2eaa62 100644 --- a/llvm/test/Instrumentation/AllocToken/remark.ll +++ b/llvm/test/Instrumentation/AllocToken/remark.ll @@ -32,7 +32,7 @@ entry: ret ptr %ptr1 } -!0 = !{!"int"} +!0 = !{!"int", i1 0} ;. -; CHECK: [[META0]] = !{!"int"} +; CHECK: [[META0]] = !{!"int", i1 false} ;. diff --git a/llvm/test/Instrumentation/AllocToken/typehashpointersplit.ll b/llvm/test/Instrumentation/AllocToken/typehashpointersplit.ll new file mode 100644 index 0000000..1f77648 --- /dev/null +++ b/llvm/test/Instrumentation/AllocToken/typehashpointersplit.ll @@ -0,0 +1,35 @@ +; NOTE: Assertions have been autogenerated by utils/update_test_checks.py UTC_ARGS: --version 5 +; RUN: opt < %s -passes=inferattrs,alloc-token -alloc-token-mode=typehashpointersplit -alloc-token-max=2 -S | FileCheck %s + +target datalayout = "e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-f80:128-n8:16:32:64-S128" + +declare ptr @malloc(i64) + +define void @test_typehashpointersplit() sanitize_alloc_token { +; CHECK-LABEL: define void @test_typehashpointersplit( +; CHECK-SAME: ) #[[ATTR1:[0-9]+]] { +; CHECK-NEXT: [[ENTRY:.*:]] +; CHECK-NEXT: [[TMP0:%.*]] = call ptr @__alloc_token_malloc(i64 4, i64 0), !alloc_token [[META0:![0-9]+]] +; CHECK-NEXT: [[TMP1:%.*]] = call ptr @__alloc_token_malloc(i64 128, i64 0), !alloc_token [[META1:![0-9]+]] +; CHECK-NEXT: [[TMP2:%.*]] = call ptr @__alloc_token_malloc(i64 8, i64 1), !alloc_token [[META2:![0-9]+]] +; CHECK-NEXT: [[TMP3:%.*]] = call ptr @__alloc_token_malloc(i64 64, i64 1), !alloc_token [[META3:![0-9]+]] +; CHECK-NEXT: ret void +; +entry: + call ptr @malloc(i64 4), !alloc_token !0 + call ptr @malloc(i64 128), !alloc_token !1 + call ptr @malloc(i64 8), !alloc_token !2 + call ptr @malloc(i64 64), !alloc_token !3 + ret void +} + +!0 = !{!"int", i1 0} +!1 = !{!"Foo", i1 0} +!2 = !{!"int*", i1 1} +!3 = !{!"Foo", i1 1} +;. +; CHECK: [[META0]] = !{!"int", i1 false} +; CHECK: [[META1]] = !{!"Foo", i1 false} +; CHECK: [[META2]] = !{!"int*", i1 true} +; CHECK: [[META3]] = !{!"Foo", i1 true} +;. 
diff --git a/llvm/test/Instrumentation/SanitizerCoverage/missing_dbg.ll b/llvm/test/Instrumentation/SanitizerCoverage/missing_dbg.ll
index 3568434..07b9a1c 100644
--- a/llvm/test/Instrumentation/SanitizerCoverage/missing_dbg.ll
+++ b/llvm/test/Instrumentation/SanitizerCoverage/missing_dbg.ll
@@ -1,5 +1,7 @@
 ; NOTE: Assertions have been autogenerated by utils/update_test_checks.py UTC_ARGS: --version 5
 ; RUN: opt < %s -passes='module(sancov-module)' -sanitizer-coverage-level=2 -S | FileCheck %s
+; RUN: opt < %s -passes='module(sancov-module)' -sanitizer-coverage-level=1 -sanitizer-coverage-stack-depth -sanitizer-coverage-stack-depth-callback-min=1 -S | FileCheck %s --check-prefix=CHECK-STACK-CALLBACK
+; RUN: opt < %s -passes='module(sancov-module)' -sanitizer-coverage-level=1 -sanitizer-coverage-stack-depth -S | FileCheck %s --check-prefix=CHECK-STACK-DEPTH
 
 target datalayout = "e-p:64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f32:32:32-f64:64:64-v64:64:64-v128:128:128-a0:0:64-s0:64:64-f80:128:128-n8:16:32:64-S128"
 
@@ -55,6 +57,86 @@ entry:
   ret i32 %t
 }
 
+define i32 @with_dbg_stack_callback(ptr %a) !dbg !8 {
+; CHECK-STACK-CALLBACK-LABEL: define i32 @with_dbg_stack_callback(
+; CHECK-STACK-CALLBACK-SAME: ptr [[A:%.*]]) !dbg [[DBG8:![0-9]+]] {
+; CHECK-STACK-CALLBACK-NEXT: entry:
+; CHECK-STACK-CALLBACK-NEXT: [[BUF:%.*]] = alloca [64 x i8], align 1
+; CHECK-STACK-CALLBACK-NEXT: call void @__sanitizer_cov_stack_depth() #[[ATTR1:[0-9]+]], !dbg [[DBG9:![0-9]+]]
+; CHECK-STACK-CALLBACK-NEXT: %t = load i32, ptr [[A]], align 4
+; CHECK-STACK-CALLBACK-NEXT: call void @external_func()
+; CHECK-STACK-CALLBACK-NEXT: ret i32 %t
+;
+entry:
+  %buf = alloca [64 x i8], align 1
+  %t = load i32, ptr %a, align 4
+  call void @external_func()
+  ret i32 %t
+}
+
+define i32 @with_dbg_stack_depth(ptr %a) !dbg !10 {
+; CHECK-STACK-DEPTH-LABEL: define i32 @with_dbg_stack_depth(
+; CHECK-STACK-DEPTH-SAME: ptr [[A:%.*]]) !dbg [[DBG10:![0-9]+]] {
+; CHECK-STACK-DEPTH-NEXT: entry:
+; CHECK-STACK-DEPTH-NEXT: [[BUF:%.*]] = alloca [64 x i8], align 1
+; CHECK-STACK-DEPTH-NEXT: [[TMP1:%.*]] = call ptr @llvm.frameaddress.p0(i32 0)
+; CHECK-STACK-DEPTH-NEXT: [[TMP2:%.*]] = ptrtoint ptr [[TMP1]] to i64
+; CHECK-STACK-DEPTH-NEXT: [[TMP3:%.*]] = load i64, ptr @__sancov_lowest_stack, align 8
+; CHECK-STACK-DEPTH-NEXT: [[TMP4:%.*]] = icmp ult i64 [[TMP2]], [[TMP3]]
+; CHECK-STACK-DEPTH-NEXT: br i1 [[TMP4]], label {{%.*}}, label {{%.*}}
+; CHECK-STACK-DEPTH: store i64 [[TMP2]], ptr @__sancov_lowest_stack, align 8, !dbg [[DBG11:![0-9]+]], {{.*}}!nosanitize
+; CHECK-STACK-DEPTH: %t = load i32, ptr [[A]], align 4
+; CHECK-STACK-DEPTH-NEXT: call void @external_func()
+; CHECK-STACK-DEPTH-NEXT: ret i32 %t
+;
+entry:
+  %buf = alloca [64 x i8], align 1
+  %t = load i32, ptr %a, align 4
+  call void @external_func()
+  ret i32 %t
+}
+
+define i32 @without_dbg_stack_callback(ptr %a) {
+; CHECK-STACK-CALLBACK-LABEL: define i32 @without_dbg_stack_callback(
+; CHECK-STACK-CALLBACK-SAME: ptr [[A:%.*]]) {
+; CHECK-STACK-CALLBACK-NEXT: entry:
+; CHECK-STACK-CALLBACK-NEXT: [[BUF:%.*]] = alloca [64 x i8], align 1
+; CHECK-STACK-CALLBACK-NEXT: call void @__sanitizer_cov_stack_depth() #[[ATTR1]]
+; CHECK-STACK-CALLBACK-NEXT: %t = load i32, ptr [[A]], align 4
+; CHECK-STACK-CALLBACK-NEXT: call void @external_func()
+; CHECK-STACK-CALLBACK-NEXT: ret i32 %t
+;
+entry:
+  %buf = alloca [64 x i8], align 1
+  %t = load i32, ptr %a, align 4
+  call void @external_func()
+  ret i32 %t
+}
+
+define i32 @without_dbg_stack_depth(ptr %a) {
+; CHECK-STACK-DEPTH-LABEL: define i32 @without_dbg_stack_depth(
+; CHECK-STACK-DEPTH-SAME: ptr [[A:%.*]]) {
+; CHECK-STACK-DEPTH-NEXT: entry:
+; CHECK-STACK-DEPTH-NEXT: [[BUF:%.*]] = alloca [64 x i8], align 1
+; CHECK-STACK-DEPTH-NEXT: [[TMP1:%.*]] = call ptr @llvm.frameaddress.p0(i32 0)
+; CHECK-STACK-DEPTH-NEXT: [[TMP2:%.*]] = ptrtoint ptr [[TMP1]] to i64
+; CHECK-STACK-DEPTH-NEXT: [[TMP3:%.*]] = load i64, ptr @__sancov_lowest_stack, align 8
+; CHECK-STACK-DEPTH-NEXT: [[TMP4:%.*]] = icmp ult i64 [[TMP2]], [[TMP3]]
+; CHECK-STACK-DEPTH-NEXT: br i1 [[TMP4]], label {{%.*}}, label {{%.*}}
+; CHECK-STACK-DEPTH: store i64 [[TMP2]], ptr @__sancov_lowest_stack, align 8, {{.*}}!nosanitize
+; CHECK-STACK-DEPTH: %t = load i32, ptr [[A]], align 4
+; CHECK-STACK-DEPTH-NEXT: call void @external_func()
+; CHECK-STACK-DEPTH-NEXT: ret i32 %t
+;
+entry:
+  %buf = alloca [64 x i8], align 1
+  %t = load i32, ptr %a, align 4
+  call void @external_func()
+  ret i32 %t
+}
+
+declare void @external_func()
+
 !llvm.dbg.cu = !{!0}
 !llvm.module.flags = !{!2}
 
@@ -66,6 +148,10 @@ entry:
 !5 = !{}
 !6 = !DILocation(line: 192, scope: !3)
 !7 = !DILocation(line: 0, scope: !3)
+!8 = distinct !DISubprogram(name: "with_dbg_stack_callback", scope: !1, file: !1, line: 200, type: !4, scopeLine: 200, flags: DIFlagPrototyped | DIFlagAllCallsDescribed, spFlags: DISPFlagLocalToUnit | DISPFlagDefinition | DISPFlagOptimized, unit: !0)
+!9 = !DILocation(line: 200, scope: !8)
+!10 = distinct !DISubprogram(name: "with_dbg_stack_depth", scope: !1, file: !1, line: 210, type: !4, scopeLine: 210, flags: DIFlagPrototyped | DIFlagAllCallsDescribed, spFlags: DISPFlagLocalToUnit | DISPFlagDefinition | DISPFlagOptimized, unit: !0)
+!11 = !DILocation(line: 210, scope: !10)
 ;.
 ; CHECK: [[META0:![0-9]+]] = distinct !DICompileUnit(language: DW_LANG_C89, file: [[META1:![0-9]+]], isOptimized: true, runtimeVersion: 0, emissionKind: LineTablesOnly, splitDebugInlining: false, nameTableKind: None)
@@ -76,3 +162,9 @@ entry:
 ; CHECK: [[DBG6]] = !DILocation(line: 192, scope: [[DBG3]])
 ; CHECK: [[DBG7]] = !DILocation(line: 0, scope: [[DBG3]])
 ;.
+; CHECK-STACK-CALLBACK: [[DBG8]] = distinct !DISubprogram(name: "with_dbg_stack_callback", scope: {{.*}}, file: {{.*}}, line: 200
+; CHECK-STACK-CALLBACK: [[DBG9]] = !DILocation(line: 200, scope: [[DBG8]])
+;.
+; CHECK-STACK-DEPTH: [[DBG10]] = distinct !DISubprogram(name: "with_dbg_stack_depth", scope: {{.*}}, file: {{.*}}, line: 210
+; CHECK-STACK-DEPTH: [[DBG11]] = !DILocation(line: 210, scope: [[DBG10]])
+;.
diff --git a/llvm/test/MC/AArch64/armv9a-sysp-diagnostics.s b/llvm/test/MC/AArch64/armv9a-sysp-diagnostics.s
new file mode 100644
index 0000000..f8baf37
--- /dev/null
+++ b/llvm/test/MC/AArch64/armv9a-sysp-diagnostics.s
@@ -0,0 +1,95 @@
+// RUN: not llvm-mc -triple=aarch64 -show-encoding < %s 2>&1 \
+// RUN: | FileCheck %s --check-prefixes=CHECK-ERROR
+
+tlbip ALLE1
+// CHECK-ERROR: error: invalid operand for TLBIP instruction
+tlbip ALLE1IS
+// CHECK-ERROR: error: invalid operand for TLBIP instruction
+tlbip ALLE1ISNXS
+// CHECK-ERROR: error: invalid operand for TLBIP instruction
+tlbip ALLE1NXS
+// CHECK-ERROR: error: invalid operand for TLBIP instruction
+tlbip ALLE1OS
+// CHECK-ERROR: error: invalid operand for TLBIP instruction
+tlbip ALLE1OSNXS
+// CHECK-ERROR: error: invalid operand for TLBIP instruction
+tlbip ALLE2
+// CHECK-ERROR: error: invalid operand for TLBIP instruction
+tlbip ALLE2IS
+// CHECK-ERROR: error: invalid operand for TLBIP instruction
+tlbip ALLE2ISNXS
+// CHECK-ERROR: error: invalid operand for TLBIP instruction
+tlbip ALLE2NXS
+// CHECK-ERROR: error: invalid operand for TLBIP instruction
+tlbip ALLE2OS
+// CHECK-ERROR: error: invalid operand for TLBIP instruction
+tlbip ALLE2OSNXS
+// CHECK-ERROR: error: invalid operand for TLBIP instruction
+tlbip ALLE3
+// CHECK-ERROR: error: invalid operand for TLBIP instruction
+tlbip ALLE3IS
+// CHECK-ERROR: error: invalid operand for TLBIP instruction
+tlbip ALLE3ISNXS
+// CHECK-ERROR: error: invalid operand for TLBIP instruction
+tlbip ALLE3NXS
+// CHECK-ERROR: error: invalid operand for TLBIP instruction
+tlbip ALLE3OS
+// CHECK-ERROR: error: invalid operand for TLBIP instruction
+tlbip ALLE3OSNXS
+// CHECK-ERROR: error: invalid operand for TLBIP instruction
+tlbip ASIDE1
+// CHECK-ERROR: error: invalid operand for TLBIP instruction
+tlbip ASIDE1IS
+// CHECK-ERROR: error: invalid operand for TLBIP instruction
+tlbip ASIDE1ISNXS
+// CHECK-ERROR: error: invalid operand for TLBIP instruction
+tlbip ASIDE1NXS
+// CHECK-ERROR: error: invalid operand for TLBIP instruction
+tlbip ASIDE1OS
+// CHECK-ERROR: error: invalid operand for TLBIP instruction
+tlbip ASIDE1OSNXS
+// CHECK-ERROR: error: invalid operand for TLBIP instruction
+tlbip PAALL
+// CHECK-ERROR: error: invalid operand for TLBIP instruction
+tlbip PAALLOS
+// CHECK-ERROR: error: invalid operand for TLBIP instruction
+tlbip RPALOS
+// CHECK-ERROR: error: invalid operand for TLBIP instruction
+tlbip RPAOS
+// CHECK-ERROR: error: invalid operand for TLBIP instruction
+tlbip VMALLE1
+// CHECK-ERROR: error: invalid operand for TLBIP instruction
+tlbip VMALLE1IS
+// CHECK-ERROR: error: invalid operand for TLBIP instruction
+tlbip VMALLE1ISNXS
+// CHECK-ERROR: error: invalid operand for TLBIP instruction
+tlbip VMALLE1NXS
+// CHECK-ERROR: error: invalid operand for TLBIP instruction
+tlbip VMALLE1OS
+// CHECK-ERROR: error: invalid operand for TLBIP instruction
+tlbip VMALLE1OSNXS
+// CHECK-ERROR: error: invalid operand for TLBIP instruction
+tlbip VMALLS12E1
+// CHECK-ERROR: error: invalid operand for TLBIP instruction
+tlbip VMALLS12E1IS
+// CHECK-ERROR: error: invalid operand for TLBIP instruction
+tlbip VMALLS12E1ISNXS
+// CHECK-ERROR: error: invalid operand for TLBIP instruction
+tlbip VMALLS12E1NXS
+// CHECK-ERROR: error: invalid operand for TLBIP instruction
+tlbip VMALLS12E1OS
+// CHECK-ERROR: error: invalid operand for TLBIP instruction
+tlbip VMALLS12E1OSNXS
+// CHECK-ERROR: error: invalid operand for TLBIP instruction
+tlbip VMALLWS2E1
+// CHECK-ERROR: error: invalid operand for TLBIP instruction
+tlbip VMALLWS2E1IS
+// CHECK-ERROR: error: invalid operand for TLBIP instruction
+tlbip VMALLWS2E1ISNXS
+// CHECK-ERROR: error: invalid operand for TLBIP instruction
+tlbip VMALLWS2E1NXS
+// CHECK-ERROR: error: invalid operand for TLBIP instruction
+tlbip VMALLWS2E1OS
+// CHECK-ERROR: error: invalid operand for TLBIP instruction
+tlbip VMALLWS2E1OSNXS
+// CHECK-ERROR: error: invalid operand for TLBIP instruction
diff --git a/llvm/test/Other/new-pm-print-pipeline.ll b/llvm/test/Other/new-pm-print-pipeline.ll
index 6fa57f1..3536932 100644
--- a/llvm/test/Other/new-pm-print-pipeline.ll
+++ b/llvm/test/Other/new-pm-print-pipeline.ll
@@ -50,7 +50,7 @@
 ; CHECK-17: function(print<stack-lifetime><may>,print<stack-lifetime><must>)
 
 ; RUN: opt -disable-output -disable-verify -print-pipeline-passes -passes='function(simplifycfg<bonus-inst-threshold=5;forward-switch-cond;switch-to-lookup;keep-loops;hoist-common-insts;hoist-loads-stores-with-cond-faulting;sink-common-insts;speculate-blocks;simplify-cond-branch;speculate-unpredictables>,simplifycfg<bonus-inst-threshold=7;no-forward-switch-cond;no-switch-to-lookup;no-keep-loops;no-hoist-common-insts;no-hoist-loads-stores-with-cond-faulting;no-sink-common-insts;no-speculate-blocks;no-simplify-cond-branch;no-speculate-unpredictables>)' < %s | FileCheck %s --match-full-lines --check-prefixes=CHECK-18
-; CHECK-18: function(simplifycfg<bonus-inst-threshold=5;forward-switch-cond;no-switch-range-to-icmp;switch-to-lookup;keep-loops;hoist-common-insts;hoist-loads-stores-with-cond-faulting;sink-common-insts;speculate-blocks;simplify-cond-branch;speculate-unpredictables>,simplifycfg<bonus-inst-threshold=7;no-forward-switch-cond;no-switch-range-to-icmp;no-switch-to-lookup;no-keep-loops;no-hoist-common-insts;no-hoist-loads-stores-with-cond-faulting;no-sink-common-insts;no-speculate-blocks;no-simplify-cond-branch;no-speculate-unpredictables>)
+; CHECK-18: function(simplifycfg<bonus-inst-threshold=5;forward-switch-cond;no-switch-range-to-icmp;no-switch-to-arithmetic;switch-to-lookup;keep-loops;hoist-common-insts;hoist-loads-stores-with-cond-faulting;sink-common-insts;speculate-blocks;simplify-cond-branch;speculate-unpredictables>,simplifycfg<bonus-inst-threshold=7;no-forward-switch-cond;no-switch-range-to-icmp;no-switch-to-arithmetic;no-switch-to-lookup;no-keep-loops;no-hoist-common-insts;no-hoist-loads-stores-with-cond-faulting;no-sink-common-insts;no-speculate-blocks;no-simplify-cond-branch;no-speculate-unpredictables>)
 
 ; RUN: opt -disable-output -disable-verify -print-pipeline-passes -passes='function(loop-vectorize<no-interleave-forced-only;no-vectorize-forced-only>,loop-vectorize<interleave-forced-only;vectorize-forced-only>)' < %s | FileCheck %s --match-full-lines --check-prefixes=CHECK-19
 ; CHECK-19: function(loop-vectorize<no-interleave-forced-only;no-vectorize-forced-only;>,loop-vectorize<interleave-forced-only;vectorize-forced-only;>)
diff --git a/llvm/test/Transforms/AggressiveInstCombine/lower-table-based-cttz-basics.ll b/llvm/test/Transforms/AggressiveInstCombine/lower-table-based-cttz-basics.ll
index bb3001e..a7d3446 100644
--- a/llvm/test/Transforms/AggressiveInstCombine/lower-table-based-cttz-basics.ll
+++ b/llvm/test/Transforms/AggressiveInstCombine/lower-table-based-cttz-basics.ll
@@ -91,12 +91,13 @@
 
 @ctz7.table = internal unnamed_addr constant [32 x i8] c"\00\01\1C\02\1D\0E\18\03\1E\16\14\0F\19\11\04\08\1F\1B\0D\17\15\13\10\07\1A\0C\12\06\0B\05\0A\09", align 1
 
-define i32 @ctz1(i32 %x) {
+define i32 @ctz1(i32 %x) !prof !0 {
 ; CHECK-LABEL: @ctz1(
+; CHECK: !prof [[PROF_0:![0-9]+]] {
 ; CHECK-NEXT: entry:
 ; CHECK-NEXT: [[TMP0:%.*]] = call i32 @llvm.cttz.i32(i32 [[X:%.*]], i1 true)
 ; CHECK-NEXT: [[TMP1:%.*]] = icmp eq i32 [[X]], 0
-; CHECK-NEXT: [[TMP2:%.*]] = select i1 [[TMP1]], i32 0, i32 [[TMP0]]
+; CHECK-NEXT: [[TMP2:%.*]] = select i1 [[TMP1]], i32 0, i32 [[TMP0]], !prof [[PROF_1:![0-9]+]]
 ; CHECK-NEXT: [[TMP3:%.*]] = trunc i32 [[TMP2]] to i8
 ; CHECK-NEXT: [[CONV:%.*]] = zext i8 [[TMP3]] to i32
 ; CHECK-NEXT: ret i32 [[CONV]]
@@ -498,3 +499,7 @@ entry:
   %conv = zext i8 %0 to i32
   ret i32 %conv
 }
+
+!0 = !{!"function_entry_count", i64 1000}
+; CHECK: [[PROF_0]] = !{!"function_entry_count", i64 1000}
+; CHECK: [[PROF_1]] = !{!"branch_weights", i32 1, i32 1048575}
diff --git a/llvm/test/Transforms/AggressiveInstCombine/lower-table-based-cttz-dereferencing-pointer.ll b/llvm/test/Transforms/AggressiveInstCombine/lower-table-based-cttz-dereferencing-pointer.ll
index d2ecb57..0e5c4f0 100644
--- a/llvm/test/Transforms/AggressiveInstCombine/lower-table-based-cttz-dereferencing-pointer.ll
+++ b/llvm/test/Transforms/AggressiveInstCombine/lower-table-based-cttz-dereferencing-pointer.ll
@@ -20,13 +20,14 @@
 
 @table = internal unnamed_addr constant [64 x i32] [i32 0, i32 1, i32 12, i32 2, i32 13, i32 22, i32 17, i32 3, i32 14, i32 33, i32 23, i32 36, i32 18, i32 58, i32 28, i32 4, i32 62, i32 15, i32 34, i32 26, i32 24, i32 48, i32 50, i32 37, i32 19, i32 55, i32 59, i32 52, i32 29, i32 44, i32 39, i32 5, i32 63, i32 11, i32 21, i32 16, i32 32, i32 35, i32 57, i32 27, i32 61, i32 25, i32 47, i32 49, i32 54, i32 51, i32 43, i32 38, i32 10, i32 20, i32 31, i32 56, i32 60, i32 46, i32 53, i32 42, i32 9, i32 30, i32 45, i32 41, i32 8, i32 40, i32 7, i32 6], align 4
 
-define i32 @ctz6(ptr nocapture readonly %b) {
+define i32 @ctz6(ptr nocapture readonly %b) !prof !0 {
 ; CHECK-LABEL: @ctz6(
+; CHECK: !prof [[PROF_0:![0-9]+]] {
 ; CHECK-NEXT: entry:
 ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr [[B:%.*]], align 8
 ; CHECK-NEXT: [[TMP1:%.*]] = call i64 @llvm.cttz.i64(i64 [[TMP0]], i1 true)
 ; CHECK-NEXT: [[TMP2:%.*]] = icmp eq i64 [[TMP0]], 0
-; CHECK-NEXT: [[TMP3:%.*]] = select i1 [[TMP2]], i64 0, i64 [[TMP1]]
+; CHECK-NEXT: [[TMP3:%.*]] = select i1 [[TMP2]], i64 0, i64 [[TMP1]], !prof [[PROF_1:![0-9]+]]
 ; CHECK-NEXT: [[TMP4:%.*]] = trunc i64 [[TMP3]] to i32
 ; CHECK-NEXT: ret i32 [[TMP4]]
 ;
@@ -40,3 +41,7 @@ entry:
   %1 = load i32, ptr %arrayidx, align 4
   ret i32 %1
 }
+
+!0 = !{!"function_entry_count", i64 1000}
+; CHECK: [[PROF_0]] = !{!"function_entry_count", i64 1000}
+; CHECK: [[PROF_1]] = !{!"branch_weights", i32 1, i32 1048575}
diff --git a/llvm/test/Transforms/AggressiveInstCombine/lower-table-based-cttz-non-argument-value.ll b/llvm/test/Transforms/AggressiveInstCombine/lower-table-based-cttz-non-argument-value.ll
index f63badb..a7732f0 100644
--- a/llvm/test/Transforms/AggressiveInstCombine/lower-table-based-cttz-non-argument-value.ll
+++ b/llvm/test/Transforms/AggressiveInstCombine/lower-table-based-cttz-non-argument-value.ll
@@ -20,13 +20,14 @@
 
 @.str = private constant [3 x i8] c"%u\00", align 1
 @test.table = internal constant [32 x i8] c"\00\01\1C\02\1D\0E\18\03\1E\16\14\0F\19\11\04\08\1F\1B\0D\17\15\13\10\07\1A\0C\12\06\0B\05\0A\09", align 1
 
-define i32 @test() {
+define i32 @test() !prof !0 {
 ; CHECK-LABEL: @test(
+; CHECK: !prof [[PROF_0:![0-9]+]] {
 ; CHECK-NEXT: entry:
 ; CHECK-NEXT: [[TMP0:%.*]] = load i32, ptr @x, align 4
 ; CHECK-NEXT: [[TMP1:%.*]] = call i32 @llvm.cttz.i32(i32 [[TMP0]], i1 true)
 ; CHECK-NEXT: [[TMP2:%.*]] = icmp eq i32 [[TMP0]], 0
-; CHECK-NEXT: [[TMP3:%.*]] = select i1 [[TMP2]], i32 0, i32 [[TMP1]]
+; CHECK-NEXT: [[TMP3:%.*]] = select i1 [[TMP2]], i32 0, i32 [[TMP1]], !prof [[PROF_1:![0-9]+]]
 ; CHECK-NEXT: [[TMP4:%.*]] = trunc i32 [[TMP3]] to i8
 ; CHECK-NEXT: [[CONV:%.*]] = zext i8 [[TMP4]] to i32
 ; CHECK-NEXT: ret i32 [[CONV]]
@@ -43,3 +44,7 @@ entry:
   %conv = zext i8 %1 to i32
   ret i32 %conv
 }
+
+!0 = !{!"function_entry_count", i64 1000}
+; CHECK: [[PROF_0]] = !{!"function_entry_count", i64 1000}
+; CHECK: [[PROF_1]] = !{!"branch_weights", i32 1, i32 1048575}
diff --git a/llvm/test/Transforms/AggressiveInstCombine/lower-table-based-cttz-zero-element.ll b/llvm/test/Transforms/AggressiveInstCombine/lower-table-based-cttz-zero-element.ll
index bbdd9b7c..5f9b4ce 100644
--- a/llvm/test/Transforms/AggressiveInstCombine/lower-table-based-cttz-zero-element.ll
+++ b/llvm/test/Transforms/AggressiveInstCombine/lower-table-based-cttz-zero-element.ll
@@ -3,12 +3,13 @@
 
 @ctz1.table = internal constant [32 x i8] c"\00\01\1C\02\1D\0E\18\03\1E\16\14\0F\19\11\04\08\1F\1B\0D\17\15\13\10\07\1A\0C\12\06\0B\05\0A\09", align 1
 
-define i32 @ctz1(i32 %x) {
+define i32 @ctz1(i32 %x) !prof !0 {
 ; CHECK-LABEL: @ctz1(
+; CHECK: !prof [[PROF_0:![0-9]+]] {
 ; CHECK-NEXT: entry:
 ; CHECK-NEXT: [[TMP0:%.*]] = call i32 @llvm.cttz.i32(i32 [[X:%.*]], i1 true)
 ; CHECK-NEXT: [[TMP1:%.*]] = icmp eq i32 [[X]], 0
-; CHECK-NEXT: [[TMP2:%.*]] = select i1 [[TMP1]], i32 0, i32 [[TMP0]]
+; CHECK-NEXT: [[TMP2:%.*]] = select i1 [[TMP1]], i32 0, i32 [[TMP0]], !prof [[PROF_1:![0-9]+]]
 ; CHECK-NEXT: [[TMP3:%.*]] = trunc i32 [[TMP2]] to i8
 ; CHECK-NEXT: [[CONV:%.*]] = zext i8 [[TMP3]] to i32
 ; CHECK-NEXT: ret i32 [[CONV]]
@@ -24,3 +25,7 @@ entry:
   %conv = zext i8 %0 to i32
   ret i32 %conv
 }
+
+!0 = !{!"function_entry_count", i64 1000}
+; CHECK: [[PROF_0]] = !{!"function_entry_count", i64 1000}
+; CHECK: [[PROF_1]] = !{!"branch_weights", i32 1, i32 1048575}
diff --git a/llvm/test/Transforms/Coroutines/coro-transform-must-elide.ll b/llvm/test/Transforms/Coroutines/coro-elide-safe.ll
index 4eec7ed..722693d 100644
--- a/llvm/test/Transforms/Coroutines/coro-transform-must-elide.ll
+++ b/llvm/test/Transforms/Coroutines/coro-elide-safe.ll
@@ -1,4 +1,8 @@
-; Testing elide performed its job for calls to coroutines marked safe.
+; Coroutine calls marked with `coro_elide_safe` should be elided.
+; Inside `caller`, we expect the `callee` coroutine to be elided.
+; Inside `caller_conditional`, `callee` is only called on an unlikely
+; path, hence we expect the `callee` coroutine NOT to be elided.
+;
 ; RUN: opt < %s -S -passes='cgscc(coro-annotation-elide)' | FileCheck %s
 
 %struct.Task = type { ptr }
@@ -57,7 +61,7 @@ define ptr @callee.noalloc(i8 %arg, ptr dereferenceable(32) align(8) %frame) {
 ; Function Attrs: presplitcoroutine
 define ptr @caller() #0 {
 entry:
-  %task = call ptr @callee(i8 0) #1
+  %task = call ptr @callee(i8 0) coro_elide_safe
   ret ptr %task
 ; CHECK: %[[TASK:.+]] = alloca %struct.Task, align 8
 ; CHECK-NEXT: %[[FRAME:.+]] = alloca [32 x i8], align 8
@@ -69,6 +73,25 @@ entry:
 ; CHECK-NEXT: ret ptr %[[TASK]]
 }
 
+; CHECK-LABEL: define ptr @caller_conditional(i1 %cond)
+; Function Attrs: presplitcoroutine
+define ptr @caller_conditional(i1 %cond) #0 {
+entry:
+  br i1 %cond, label %call, label %ret
+
+call:
+  ; CHECK-NOT: alloca
+  ; CHECK-NOT: @llvm.coro.id({{.*}}, ptr @callee, {{.*}})
+  ; CHECK: %task = call ptr @callee(i8 0)
+  ; CHECK-NEXT: br label %ret
+  %task = call ptr @callee(i8 0) coro_elide_safe
+  br label %ret
+
+ret:
+  %retval = phi ptr [ %task, %call ], [ null, %entry ]
+  ret ptr %retval
+}
+
 declare token @llvm.coro.id(i32, ptr, ptr, ptr)
 declare ptr @llvm.coro.begin(token, ptr)
 declare ptr @llvm.coro.frame()
@@ -76,4 +99,3 @@ declare ptr @llvm.coro.subfn.addr(ptr, i8)
 declare i1 @llvm.coro.alloc(token)
 
 attributes #0 = { presplitcoroutine }
-attributes #1 = { coro_elide_safe }
diff --git a/llvm/test/Transforms/DFAJumpThreading/dfa-jump-threading-analysis.ll b/llvm/test/Transforms/DFAJumpThreading/dfa-jump-threading-analysis.ll
index 4173c32..f45798b 100644
--- a/llvm/test/Transforms/DFAJumpThreading/dfa-jump-threading-analysis.ll
+++ b/llvm/test/Transforms/DFAJumpThreading/dfa-jump-threading-analysis.ll
@@ -7,10 +7,10 @@
 ; state, and the block that determines the next state.
 ; < path of BBs that form a cycle > [ state, determinator ]
 define i32 @test1(i32 %num) !prof !0{
-; CHECK: < case2 for.inc for.body > [ 1, for.inc ]
-; CHECK-NEXT: < for.inc for.body > [ 1, for.inc ]
-; CHECK-NEXT: < case1 for.inc for.body > [ 2, for.inc ]
-; CHECK-NEXT: < case2 sel.si.unfold.false for.inc for.body > [ 2, sel.si.unfold.false ]
+; CHECK: < case2, for.inc, for.body > [ 1, for.inc ]
+; CHECK-NEXT: < for.inc, for.body > [ 1, for.inc ]
+; CHECK-NEXT: < case1, for.inc, for.body > [ 2, for.inc ]
+; CHECK-NEXT: < case2, sel.si.unfold.false, for.inc, for.body > [ 2, sel.si.unfold.false ]
 entry:
   br label %for.body
 
@@ -47,12 +47,12 @@ for.end:
 ; complicated CFG. Here the FSM is represented as a nested loop, with
 ; fallthrough cases.
 define i32 @test2(i32 %init) {
-; CHECK: < loop.1.backedge loop.1 loop.2 loop.3 > [ 1, loop.1 ]
-; CHECK-NEXT: < case4 loop.1.backedge state.1.be2.si.unfold.false loop.1 loop.2 loop.3 > [ 2, loop.1.backedge ]
-; CHECK-NEXT: < case2 loop.1.backedge state.1.be2.si.unfold.false loop.1 loop.2 loop.3 > [ 4, loop.1.backedge ]
-; CHECK-NEXT: < case4 loop.2.backedge loop.2 loop.3 > [ 3, loop.2.backedge ]
-; CHECK-NEXT: < case3 loop.2.backedge loop.2 loop.3 > [ 0, loop.2.backedge ]
-; CHECK-NEXT: < case2 loop.3 > [ 3, loop.3 ]
+; CHECK: < loop.1.backedge, loop.1, loop.2, loop.3 > [ 1, loop.1 ]
+; CHECK-NEXT: < case4, loop.1.backedge, state.1.be2.si.unfold.false, loop.1, loop.2, loop.3 > [ 2, loop.1.backedge ]
+; CHECK-NEXT: < case2, loop.1.backedge, state.1.be2.si.unfold.false, loop.1, loop.2, loop.3 > [ 4, loop.1.backedge ]
+; CHECK-NEXT: < case4, loop.2.backedge, loop.2, loop.3 > [ 3, loop.2.backedge ]
+; CHECK-NEXT: < case3, loop.2.backedge, loop.2, loop.3 > [ 0, loop.2.backedge ]
+; CHECK-NEXT: < case2, loop.3 > [ 3, loop.3 ]
 entry:
   %cmp = icmp eq i32 %init, 0
   %sel = select i1 %cmp, i32 0, i32 2
@@ -187,12 +187,12 @@ bb66: ; preds = %bb59
 
 ; Value %init is not predictable but it's okay since it is the value initial to the switch.
 define i32 @initial.value.positive1(i32 %init) !prof !0 {
-; CHECK: < loop.1.backedge loop.1 loop.2 loop.3 > [ 1, loop.1 ]
-; CHECK-NEXT: < case4 loop.1.backedge state.1.be2.si.unfold.false loop.1 loop.2 loop.3 > [ 2, loop.1.backedge ]
-; CHECK-NEXT: < case2 loop.1.backedge state.1.be2.si.unfold.false loop.1 loop.2 loop.3 > [ 4, loop.1.backedge ]
-; CHECK-NEXT: < case4 loop.2.backedge loop.2 loop.3 > [ 3, loop.2.backedge ]
-; CHECK-NEXT: < case3 loop.2.backedge loop.2 loop.3 > [ 0, loop.2.backedge ]
-; CHECK-NEXT: < case2 loop.3 > [ 3, loop.3 ]
+; CHECK: < loop.1.backedge, loop.1, loop.2, loop.3 > [ 1, loop.1 ]
+; CHECK-NEXT: < case4, loop.1.backedge, state.1.be2.si.unfold.false, loop.1, loop.2, loop.3 > [ 2, loop.1.backedge ]
+; CHECK-NEXT: < case2, loop.1.backedge, state.1.be2.si.unfold.false, loop.1, loop.2, loop.3 > [ 4, loop.1.backedge ]
+; CHECK-NEXT: < case4, loop.2.backedge, loop.2, loop.3 > [ 3, loop.2.backedge ]
+; CHECK-NEXT: < case3, loop.2.backedge, loop.2, loop.3 > [ 0, loop.2.backedge ]
+; CHECK-NEXT: < case2, loop.3 > [ 3, loop.3 ]
 entry:
   %cmp = icmp eq i32 %init, 0
   br label %loop.1
diff --git a/llvm/test/Transforms/DFAJumpThreading/max-path-length.ll b/llvm/test/Transforms/DFAJumpThreading/max-path-length.ll
index 92747629..cb7c46e 100644
--- a/llvm/test/Transforms/DFAJumpThreading/max-path-length.ll
+++ b/llvm/test/Transforms/DFAJumpThreading/max-path-length.ll
@@ -9,9 +9,9 @@
 ; too long so that it is not jump-threaded.
 define i32 @max_path_length(i32 %num) {
 ; CHECK-NOT: 3, case1
-; CHECK: < case2 for.inc for.body > [ 1, for.inc ]
-; CHECK-NEXT: < for.inc for.body > [ 1, for.inc ]
-; CHECK-NEXT: < case2 sel.si.unfold.false for.inc for.body > [ 2, sel.si.unfold.false ]
+; CHECK: < case2, for.inc, for.body > [ 1, for.inc ]
+; CHECK-NEXT: < for.inc, for.body > [ 1, for.inc ]
+; CHECK-NEXT: < case2, sel.si.unfold.false, for.inc, for.body > [ 2, sel.si.unfold.false ]
 ; CHECK-NEXT: DFA-JT: Renaming non-local uses of:
 entry:
   br label %for.body
diff --git a/llvm/test/Transforms/GVN/assume-equal.ll b/llvm/test/Transforms/GVN/assume-equal.ll
index 0c922da..bbbc5c5 100644
--- a/llvm/test/Transforms/GVN/assume-equal.ll
+++ b/llvm/test/Transforms/GVN/assume-equal.ll
@@ -221,21 +221,22 @@ define i32 @_Z1ii(i32 %p) {
 ; CHECK-NEXT: [[ENTRY:.*:]]
 ; CHECK-NEXT: [[CMP:%.*]] = icmp eq i32 [[P]], 42
 ; CHECK-NEXT: call void @llvm.assume(i1 [[CMP]])
-; CHECK-NEXT: br i1 true, label %[[BB2:.*]], label %[[BB2]]
-; CHECK: [[BB2]]:
-; CHECK-NEXT: br i1 true, label %[[BB2]], label %[[BB2]]
-; CHECK: [[BB0:.*:]]
+; CHECK-NEXT: br i1 true, label %[[COMMON:.*]], label %[[COMMON]]
+; CHECK: [[COMMON]]:
+; CHECK-NEXT: br i1 true, label %[[COMMON]], label %[[COMMON]]
+; CHECK: [[EXIT:.*:]]
 ; CHECK-NEXT: ret i32 42
 ;
 entry:
   %cmp = icmp eq i32 %p, 42
   call void @llvm.assume(i1 %cmp)
-  br i1 %cmp, label %bb2, label %bb2
-bb2:
+  br i1 %cmp, label %common, label %common
+common:
   call void @llvm.assume(i1 true)
-  br i1 %cmp, label %bb2, label %bb2
+  br i1 %cmp, label %common, label %common
+exit:
   ret i32 %p
 }
 
@@ -357,8 +358,8 @@ define i8 @assume_ptr_eq_different_prov_matters(ptr %p, ptr %p2) {
   ret i8 %v
 }
 
-define i1 @assume_ptr_eq_different_prov_does_not_matter(ptr %p, ptr %p2) {
-; CHECK-LABEL: define i1 @assume_ptr_eq_different_prov_does_not_matter(
+define i1 @assume_ptr_eq_different_prov_does_not_matter_icmp(ptr %p, ptr %p2) {
+; CHECK-LABEL: define i1 @assume_ptr_eq_different_prov_does_not_matter_icmp(
 ; CHECK-SAME: ptr [[P:%.*]], ptr [[P2:%.*]]) {
 ; CHECK-NEXT: [[CMP:%.*]] = icmp eq ptr [[P]], [[P2]]
 ; CHECK-NEXT: call void @llvm.assume(i1 [[CMP]])
@@ -371,6 +372,36 @@ define i1 @assume_ptr_eq_different_prov_does_not_matter(ptr %p, ptr %p2) {
   ret i1 %c
 }
 
+; This is not correct, as it may change the provenance exposed by ptrtoint.
+; We still allow it for now.
+define i64 @assume_ptr_eq_different_prov_does_not_matter_ptrtoint(ptr %p, ptr %p2) {
+; CHECK-LABEL: define i64 @assume_ptr_eq_different_prov_does_not_matter_ptrtoint(
+; CHECK-SAME: ptr [[P:%.*]], ptr [[P2:%.*]]) {
+; CHECK-NEXT: [[CMP:%.*]] = icmp eq ptr [[P]], [[P2]]
+; CHECK-NEXT: call void @llvm.assume(i1 [[CMP]])
+; CHECK-NEXT: [[INT:%.*]] = ptrtoint ptr [[P]] to i64
+; CHECK-NEXT: ret i64 [[INT]]
+;
+  %cmp = icmp eq ptr %p, %p2
+  call void @llvm.assume(i1 %cmp)
+  %int = ptrtoint ptr %p2 to i64
+  ret i64 %int
+}
+
+define i64 @assume_ptr_eq_different_prov_does_not_matter_ptrtoaddr(ptr %p, ptr %p2) {
+; CHECK-LABEL: define i64 @assume_ptr_eq_different_prov_does_not_matter_ptrtoaddr(
+; CHECK-SAME: ptr [[P:%.*]], ptr [[P2:%.*]]) {
+; CHECK-NEXT: [[CMP:%.*]] = icmp eq ptr [[P]], [[P2]]
+; CHECK-NEXT: call void @llvm.assume(i1 [[CMP]])
+; CHECK-NEXT: [[INT:%.*]] = ptrtoaddr ptr [[P]] to i64
+; CHECK-NEXT: ret i64 [[INT]]
+;
+  %cmp = icmp eq ptr %p, %p2
+  call void @llvm.assume(i1 %cmp)
+  %int = ptrtoaddr ptr %p2 to i64
+  ret i64 %int
+}
+
 define i8 @assume_ptr_eq_same_prov(ptr %p, i64 %x) {
 ; CHECK-LABEL: define i8 @assume_ptr_eq_same_prov(
 ; CHECK-SAME: ptr [[P:%.*]], i64 [[X:%.*]]) {
diff --git a/llvm/test/Transforms/GVN/ptrtoaddr.ll b/llvm/test/Transforms/GVN/ptrtoaddr.ll
new file mode 100644
index 0000000..6d02bc6
--- /dev/null
+++ b/llvm/test/Transforms/GVN/ptrtoaddr.ll
@@ -0,0 +1,30 @@
+; NOTE: Assertions have been autogenerated by utils/update_test_checks.py UTC_ARGS: --version 6
+; RUN: opt -S -passes=gvn < %s | FileCheck %s
+
+define i64 @ptrtoaddr_same(ptr %p) {
+; CHECK-LABEL: define i64 @ptrtoaddr_same(
+; CHECK-SAME: ptr [[P:%.*]]) {
+; CHECK-NEXT: [[J:%.*]] = ptrtoaddr ptr [[P]] to i64
+; CHECK-NEXT: ret i64 0
+;
+  %i = ptrtoaddr ptr %p to i64
+  %j = ptrtoaddr ptr %p to i64
+  %sub = sub i64 %i, %j
+  ret i64 %sub
+}
+
+; Note that unlike for ptrtoint, it's not possible for ptrtoaddr to differ
+; in result type for the same input.
+define i64 @ptrtoaddr_different(ptr %p, ptr %p2) {
+; CHECK-LABEL: define i64 @ptrtoaddr_different(
+; CHECK-SAME: ptr [[P:%.*]], ptr [[P2:%.*]]) {
+; CHECK-NEXT: [[I:%.*]] = ptrtoaddr ptr [[P]] to i64
+; CHECK-NEXT: [[J:%.*]] = ptrtoaddr ptr [[P2]] to i64
+; CHECK-NEXT: [[SUB:%.*]] = sub i64 [[I]], [[J]]
+; CHECK-NEXT: ret i64 [[SUB]]
+;
+  %i = ptrtoaddr ptr %p to i64
+  %j = ptrtoaddr ptr %p2 to i64
+  %sub = sub i64 %i, %j
+  ret i64 %sub
+}
diff --git a/llvm/test/Transforms/InstCombine/fold-selective-shift.ll b/llvm/test/Transforms/InstCombine/fold-selective-shift.ll
new file mode 100644
index 0000000..2b22965
--- /dev/null
+++ b/llvm/test/Transforms/InstCombine/fold-selective-shift.ll
@@ -0,0 +1,323 @@
+; NOTE: Assertions have been autogenerated by utils/update_test_checks.py UTC_ARGS: --version 6
+; RUN: opt -passes=instcombine %s -S | FileCheck %s
+
+declare void @clobber.i32(i32)
+
+define i16 @selective_shift_16(i32 %mask, i16 %upper, i16 %lower) {
+; CHECK-LABEL: define i16 @selective_shift_16(
+; CHECK-SAME: i32 [[MASK:%.*]], i16 [[UPPER:%.*]], i16 [[LOWER:%.*]]) {
+; CHECK-NEXT: [[MASK_BIT:%.*]] = and i32 [[MASK]], 16
+; CHECK-NEXT: [[MASK_BIT_Z:%.*]] = icmp eq i32 [[MASK_BIT]], 0
+; CHECK-NEXT: [[SEL_V:%.*]] = select i1 [[MASK_BIT_Z]], i16 [[LOWER]], i16 [[UPPER]]
+; CHECK-NEXT: ret i16 [[SEL_V]]
+;
+  %upper.zext = zext i16 %upper to i32
+  %upper.shl = shl nuw i32 %upper.zext, 16
+  %lower.zext = zext i16 %lower to i32
+  %pack = or disjoint i32 %upper.shl, %lower.zext
+  %mask.bit = and i32 %mask, 16
+  %sel = lshr i32 %pack, %mask.bit
+  %trunc = trunc i32 %sel to i16
+  ret i16 %trunc
+}
+
+define i16 @selective_shift_16.commute(i32 %mask, i16 %upper, i16 %lower) {
+; CHECK-LABEL: define i16 @selective_shift_16.commute(
+; CHECK-SAME: i32 [[MASK:%.*]], i16 [[UPPER:%.*]], i16 [[LOWER:%.*]]) {
+; CHECK-NEXT: [[MASK_BIT:%.*]] = and i32 [[MASK]], 16
+; CHECK-NEXT: [[MASK_BIT_Z:%.*]] = icmp eq i32 [[MASK_BIT]], 0
+; CHECK-NEXT: [[SEL_V:%.*]] = select i1 [[MASK_BIT_Z]], i16 [[LOWER]], i16 [[UPPER]]
+; CHECK-NEXT: ret i16 [[SEL_V]]
+;
+  %upper.zext = zext i16 %upper to i32
+  %upper.shl = shl nuw i32 %upper.zext, 16
+  %lower.zext = zext i16 %lower to i32
+  %pack = or disjoint i32 %lower.zext, %upper.shl
+  %mask.bit = and i32 %mask, 16
+  %sel = lshr i32 %pack, %mask.bit
+  %trunc = trunc i32 %sel to i16
+  ret i16 %trunc
+}
+
+define i16 @selective_shift_16.range(i32 %mask, i32 %upper, i32 range(i32 0, 65536) %lower) {
+; CHECK-LABEL: define i16 @selective_shift_16.range(
+; CHECK-SAME: i32 [[MASK:%.*]], i32 [[UPPER:%.*]], i32 range(i32 0, 65536) [[LOWER:%.*]]) {
+; CHECK-NEXT: [[MASK_BIT:%.*]] = and i32 [[MASK]], 16
+; CHECK-NEXT: [[MASK_BIT_Z:%.*]] = icmp eq i32 [[MASK_BIT]], 0
+; CHECK-NEXT: [[SEL:%.*]] = select i1 [[MASK_BIT_Z]], i32 [[LOWER]], i32 [[UPPER]]
+; CHECK-NEXT: [[TRUNC:%.*]] = trunc i32 [[SEL]] to i16
+; CHECK-NEXT: ret i16 [[TRUNC]]
+;
+  %upper.shl = shl nuw i32 %upper, 16
+  %pack = or disjoint i32 %upper.shl, %lower
+  %mask.bit = and i32 %mask, 16
+  %sel = lshr i32 %pack, %mask.bit
+  %trunc = trunc i32 %sel to i16
+  ret i16 %trunc
+}
+
+define i16 @selective_shift_16.range.commute(i32 %mask, i32 %upper, i32 range(i32 0, 65536) %lower) {
+; CHECK-LABEL: define i16 @selective_shift_16.range.commute(
+; CHECK-SAME: i32 [[MASK:%.*]], i32 [[UPPER:%.*]], i32 range(i32 0, 65536) [[LOWER:%.*]]) {
+; CHECK-NEXT: [[MASK_BIT:%.*]] = and i32 [[MASK]], 16
+; CHECK-NEXT: [[MASK_BIT_Z:%.*]] = icmp eq i32 [[MASK_BIT]], 0
+; CHECK-NEXT: [[SEL:%.*]] = select i1 [[MASK_BIT_Z]], i32 [[LOWER]], i32 [[UPPER]]
+; CHECK-NEXT: [[TRUNC:%.*]] = trunc i32 [[SEL]] to i16
+; CHECK-NEXT: ret i16 [[TRUNC]]
+;
+  %upper.shl = shl nuw i32 %upper, 16
+  %pack = or disjoint i32 %lower, %upper.shl
+  %mask.bit = and i32 %mask, 16
+  %sel = lshr i32 %pack, %mask.bit
+  %trunc = trunc i32 %sel to i16
+  ret i16 %trunc
+}
+
+define i32 @selective_shift_16.masked(i32 %mask, i16 %upper, i16 %lower) {
+; CHECK-LABEL: define i32 @selective_shift_16.masked(
+; CHECK-SAME: i32 [[MASK:%.*]], i16 [[UPPER:%.*]], i16 [[LOWER:%.*]]) {
+; CHECK-NEXT: [[MASK_BIT:%.*]] = and i32 [[MASK]], 16
+; CHECK-NEXT: [[MASK_BIT_Z:%.*]] = icmp eq i32 [[MASK_BIT]], 0
+; CHECK-NEXT: [[SEL_V:%.*]] = select i1 [[MASK_BIT_Z]], i16 [[LOWER]], i16 [[UPPER]]
+; CHECK-NEXT: [[SEL:%.*]] = zext i16 [[SEL_V]] to i32
+; CHECK-NEXT: ret i32 [[SEL]]
+;
+  %upper.zext = zext i16 %upper to i32
+  %upper.shl = shl nuw i32 %upper.zext, 16
+  %lower.zext = zext i16 %lower to i32
+  %pack = or disjoint i32 %lower.zext, %upper.shl
+  %mask.bit = and i32 %mask, 16
+  %sel = lshr i32 %pack, %mask.bit
+  %sel.masked = and i32 %sel, 65535
+  ret i32 %sel.masked
+}
+
+define i32 @selective_shift_16.masked.commute(i32 %mask, i16 %upper, i16 %lower) {
+; CHECK-LABEL: define i32 @selective_shift_16.masked.commute(
+; CHECK-SAME: i32 [[MASK:%.*]], i16 [[UPPER:%.*]], i16 [[LOWER:%.*]]) {
+; CHECK-NEXT: [[MASK_BIT:%.*]] = and i32 [[MASK]], 16
+; CHECK-NEXT: [[MASK_BIT_Z:%.*]] = icmp eq i32 [[MASK_BIT]], 0
+; CHECK-NEXT: [[SEL_V:%.*]] = select i1 [[MASK_BIT_Z]], i16 [[LOWER]], i16 [[UPPER]]
+; CHECK-NEXT: [[SEL:%.*]] = zext i16 [[SEL_V]] to i32
+; CHECK-NEXT: ret i32 [[SEL]]
+;
+  %upper.zext = zext i16 %upper to i32
+  %upper.shl = shl nuw i32 %upper.zext, 16
+  %lower.zext = zext i16 %lower to i32
+  %pack = or disjoint i32 %upper.shl, %lower.zext
+  %mask.bit = and i32 %mask, 16
+  %sel = lshr i32 %pack, %mask.bit
+  %sel.masked = and i32 %sel, 65535
+  ret i32 %sel.masked
+}
+
+define <2 x i16> @selective_shift.v16(<2 x i32> %mask, <2 x i16> %upper, <2 x i16> %lower) {
+; CHECK-LABEL: define <2 x i16> @selective_shift.v16(
+; CHECK-SAME: <2 x i32> [[MASK:%.*]], <2 x i16> [[UPPER:%.*]], <2 x i16> [[LOWER:%.*]]) {
+; CHECK-NEXT: [[MASK_BIT:%.*]] = and <2 x i32> [[MASK]], splat (i32 16)
+; CHECK-NEXT: [[MASK_BIT_Z:%.*]] = icmp eq <2 x i32> [[MASK_BIT]], zeroinitializer
+; CHECK-NEXT: [[SEL_V:%.*]] = select <2 x i1> [[MASK_BIT_Z]], <2 x i16> [[LOWER]], <2 x i16> [[UPPER]]
+; CHECK-NEXT: ret <2 x i16> [[SEL_V]]
+;
+  %upper.zext = zext <2 x i16> %upper to <2 x i32>
+  %upper.shl = shl nuw <2 x i32> %upper.zext, splat(i32 16)
+  %lower.zext = zext <2 x i16> %lower to <2 x i32>
+  %pack = or disjoint <2 x i32> %upper.shl, %lower.zext
+  %mask.bit = and <2 x i32> %mask, splat(i32 16)
+  %sel = lshr <2 x i32> %pack, %mask.bit
+  %trunc = trunc <2 x i32> %sel to <2 x i16>
+  ret <2 x i16> %trunc
+}
+
+define i16 @selective_shift_16.wide(i64 %mask, i16 %upper, i16 %lower) {
+; CHECK-LABEL: define i16 @selective_shift_16.wide(
+; CHECK-SAME: i64 [[MASK:%.*]], i16 [[UPPER:%.*]], i16 [[LOWER:%.*]]) {
+; CHECK-NEXT: [[MASK_BIT:%.*]] = and i64 [[MASK]], 16
+; CHECK-NEXT: [[MASK_BIT_Z:%.*]] = icmp eq i64 [[MASK_BIT]], 0
+; CHECK-NEXT: [[SEL_V:%.*]] = select i1 [[MASK_BIT_Z]], i16 [[LOWER]], i16 [[UPPER]]
+; CHECK-NEXT: ret i16 [[SEL_V]]
+;
+  %upper.zext = zext i16 %upper to i64
+  %upper.shl = shl nuw i64 %upper.zext, 16
+  %lower.zext = zext i16 %lower to i64
+  %pack = or disjoint i64 %upper.shl, %lower.zext
+  %mask.bit = and i64 %mask, 16
+  %sel = lshr i64 %pack, %mask.bit
+  %trunc = trunc i64 %sel to i16
+  ret i16 %trunc
+}
+
+; narrow zext type blocks fold
+define i16 @selective_shift_16.narrow(i24 %mask, i16 %upper, i16 %lower) {
+; CHECK-LABEL: define i16 @selective_shift_16.narrow(
+; CHECK-SAME: i24 [[MASK:%.*]], i16 [[UPPER:%.*]], i16 [[LOWER:%.*]]) {
+; CHECK-NEXT: [[UPPER_ZEXT:%.*]] = zext i16 [[UPPER]] to i24
+; CHECK-NEXT: [[UPPER_SHL:%.*]] = shl i24 [[UPPER_ZEXT]], 16
+; CHECK-NEXT: [[LOWER_ZEXT:%.*]] = zext i16 [[LOWER]] to i24
+; CHECK-NEXT: [[PACK:%.*]] = or disjoint i24 [[UPPER_SHL]], [[LOWER_ZEXT]]
+; CHECK-NEXT: [[MASK_BIT:%.*]] = and i24 [[MASK]], 16
+; CHECK-NEXT: [[SEL:%.*]] = lshr i24 [[PACK]], [[MASK_BIT]]
+; CHECK-NEXT: [[TRUNC:%.*]] = trunc i24 [[SEL]] to i16
+; CHECK-NEXT: ret i16 [[TRUNC]]
+;
+  %upper.zext = zext i16 %upper to i24
+  %upper.shl = shl i24 %upper.zext, 16
+  %lower.zext = zext i16 %lower to i24
+  %pack = or disjoint i24 %upper.shl, %lower.zext
+  %mask.bit = and i24 %mask, 16
+  %sel = lshr i24 %pack, %mask.bit
+  %trunc = trunc i24 %sel to i16
+  ret i16 %trunc
+}
+
+; %lower's upper bits block fold
+define i16 @selective_shift_16_norange(i32 %mask, i32 %upper, i32 %lower) {
+; CHECK-LABEL: define i16 @selective_shift_16_norange(
+; CHECK-SAME: i32 [[MASK:%.*]], i32 [[UPPER:%.*]], i32 [[LOWER:%.*]]) {
+; CHECK-NEXT: [[UPPER_SHL:%.*]] = shl nuw i32 [[UPPER]], 16
+; CHECK-NEXT: [[PACK:%.*]] = or i32 [[UPPER_SHL]], [[LOWER]]
+; CHECK-NEXT: [[MASK_BIT:%.*]] = and i32 [[MASK]], 16
+; CHECK-NEXT: [[SEL:%.*]] = lshr i32 [[PACK]], [[MASK_BIT]]
+; CHECK-NEXT: [[TRUNC:%.*]] = trunc i32 [[SEL]] to i16
+; CHECK-NEXT: ret i16 [[TRUNC]]
+;
+  %upper.shl = shl nuw i32 %upper, 16
+  %pack = or i32 %upper.shl, %lower
+  %mask.bit = and i32 %mask, 16
+  %sel = lshr i32 %pack, %mask.bit
+  %trunc = trunc i32 %sel to i16
+  ret i16 %trunc
+}
+
+define i16 @selective_shift_16.mu.0(i32 %mask, i16 %upper, i16 %lower) {
+; CHECK-LABEL: define i16 @selective_shift_16.mu.0(
+; CHECK-SAME: i32 [[MASK:%.*]], i16 [[UPPER:%.*]], i16 [[LOWER:%.*]]) {
+; CHECK-NEXT: [[UPPER_ZEXT:%.*]] = zext i16 [[UPPER]] to i32
+; CHECK-NEXT: call void @clobber.i32(i32 [[UPPER_ZEXT]])
+; CHECK-NEXT: [[LOWER_ZEXT:%.*]] = zext i16 [[LOWER]] to i32
+; CHECK-NEXT: call void @clobber.i32(i32 [[LOWER_ZEXT]])
+; CHECK-NEXT: [[MASK_BIT:%.*]] = and i32 [[MASK]], 16
+; CHECK-NEXT: [[MASK_BIT_Z:%.*]] = icmp eq i32 [[MASK_BIT]], 0
+; CHECK-NEXT: [[TRUNC:%.*]] = select i1 [[MASK_BIT_Z]], i16 [[LOWER]], i16 [[UPPER]]
+; CHECK-NEXT: ret i16 [[TRUNC]]
+;
+  %upper.zext = zext i16 %upper to i32
+  call void @clobber.i32(i32 %upper.zext)
+  %upper.shl = shl nuw i32 %upper.zext, 16
+  %lower.zext = zext i16 %lower to i32
+  call void @clobber.i32(i32 %lower.zext)
+  %pack = or disjoint i32 %upper.shl, %lower.zext
+  %mask.bit = and i32 %mask, 16
+  %sel = lshr i32 %pack, %mask.bit
+  %trunc = trunc i32 %sel to i16
+  ret i16 %trunc
+}
+
+; multi-use of %pack blocks fold
+define i16 @selective_shift_16.mu.1(i32 %mask, i16 %upper, i16 %lower) {
+; CHECK-LABEL: define i16 @selective_shift_16.mu.1(
+; CHECK-SAME: i32 [[MASK:%.*]], i16 [[UPPER:%.*]], i16 [[LOWER:%.*]]) {
+; CHECK-NEXT: [[UPPER_ZEXT:%.*]] = zext i16 [[UPPER]] to i32
+; CHECK-NEXT: [[UPPER_SHL:%.*]] = shl nuw i32 [[UPPER_ZEXT]], 16
+; CHECK-NEXT: [[LOWER_ZEXT:%.*]] = zext i16 [[LOWER]] to i32
+; CHECK-NEXT: [[PACK:%.*]] = or disjoint i32 [[UPPER_SHL]], [[LOWER_ZEXT]]
+; CHECK-NEXT: call void @clobber.i32(i32 [[PACK]])
+; CHECK-NEXT: [[MASK_BIT:%.*]] = and i32 [[MASK]], 16
+; CHECK-NEXT: [[SEL:%.*]] = lshr i32 [[PACK]], [[MASK_BIT]]
+; CHECK-NEXT: [[TRUNC:%.*]] = trunc i32 [[SEL]] to i16
+; CHECK-NEXT: ret i16 [[TRUNC]]
+;
+  %upper.zext = zext i16 %upper to i32
+  %upper.shl = shl nuw i32 %upper.zext, 16
+  %lower.zext = zext i16 %lower to i32
+  %pack = or disjoint i32 %upper.shl, %lower.zext
+  call void @clobber.i32(i32 %pack)
+  %mask.bit = and i32 %mask, 16
+  %sel = lshr i32 %pack, %mask.bit
+  %trunc = trunc i32 %sel to i16
+  ret i16 %trunc
+}
+
+; non-truncated use of %sel blocks fold
+define i16 @selective_shift_16.mu.2(i32 %mask, i16 %upper, i16 %lower) {
+; CHECK-LABEL: define i16 @selective_shift_16.mu.2(
+; CHECK-SAME: i32 [[MASK:%.*]], i16 [[UPPER:%.*]], i16 [[LOWER:%.*]]) {
+; CHECK-NEXT: [[UPPER_ZEXT:%.*]] = zext i16 [[UPPER]] to i32
+; CHECK-NEXT: [[UPPER_SHL:%.*]] = shl nuw i32 [[UPPER_ZEXT]], 16
+; CHECK-NEXT: [[LOWER_ZEXT:%.*]] = zext i16 [[LOWER]] to i32
+; CHECK-NEXT: [[PACK:%.*]] = or disjoint i32 [[UPPER_SHL]], [[LOWER_ZEXT]]
+; CHECK-NEXT: [[MASK_BIT:%.*]] = and i32 [[MASK]], 16
+; CHECK-NEXT: [[SEL:%.*]] = lshr i32 [[PACK]], [[MASK_BIT]]
+; CHECK-NEXT: call void @clobber.i32(i32 [[SEL]])
+; CHECK-NEXT: [[TRUNC:%.*]] = trunc i32 [[SEL]] to i16
+; CHECK-NEXT: ret i16 [[TRUNC]]
+;
+  %upper.zext = zext i16 %upper to i32
+  %upper.shl = shl nuw i32 %upper.zext, 16
+  %lower.zext = zext i16 %lower to i32
+  %pack = or disjoint i32 %upper.shl, %lower.zext
+  %mask.bit = and i32 %mask, 16
+  %sel = lshr i32 %pack, %mask.bit
+  call void @clobber.i32(i32 %sel)
+  %trunc = trunc i32 %sel to i16
+  ret i16 %trunc
+}
+
+; bitwidth must be a power of 2 to fold
+define i24 @selective_shift_24(i48 %mask, i24 %upper, i24 %lower) {
+; CHECK-LABEL: define i24 @selective_shift_24(
+; CHECK-SAME: i48 [[MASK:%.*]], i24 [[UPPER:%.*]], i24 [[LOWER:%.*]]) {
+; CHECK-NEXT: [[UPPER_ZEXT:%.*]] = zext i24 [[UPPER]] to i48
+; CHECK-NEXT: [[UPPER_SHL:%.*]] = shl nuw i48 [[UPPER_ZEXT]], 24
+; CHECK-NEXT: [[LOWER_ZEXT:%.*]] = zext i24 [[LOWER]] to i48
+; CHECK-NEXT: [[PACK:%.*]] = or disjoint i48 [[UPPER_SHL]], [[LOWER_ZEXT]]
+; CHECK-NEXT: [[MASK_BIT:%.*]] = and i48 [[MASK]], 24
+; CHECK-NEXT: [[SEL:%.*]] = lshr i48 [[PACK]], [[MASK_BIT]]
+; CHECK-NEXT: [[TRUNC:%.*]] = trunc i48 [[SEL]] to i24
+; CHECK-NEXT: ret i24 [[TRUNC]]
+;
+  %upper.zext = zext i24 %upper to i48
+  %upper.shl = shl nuw i48 %upper.zext, 24
+  %lower.zext = zext i24 %lower to i48
+  %pack = or disjoint i48 %upper.shl, %lower.zext
+  %mask.bit = and i48 %mask, 24
+  %sel = lshr i48 %pack, %mask.bit
+  %trunc = trunc i48 %sel to i24
+  ret i24 %trunc
+}
+
+define i32 @selective_shift_32(i64 %mask, i32 %upper, i32 %lower) {
+; CHECK-LABEL: define i32 @selective_shift_32(
+; CHECK-SAME: i64 [[MASK:%.*]], i32 [[UPPER:%.*]], i32 [[LOWER:%.*]]) {
+; CHECK-NEXT: [[MASK_BIT:%.*]] = and i64 [[MASK]], 32
+; CHECK-NEXT: [[MASK_BIT_Z:%.*]] = icmp eq i64 [[MASK_BIT]], 0
+; CHECK-NEXT: [[SEL_V:%.*]] = select i1 [[MASK_BIT_Z]], i32 [[LOWER]], i32 [[UPPER]]
+; CHECK-NEXT: ret i32 [[SEL_V]]
+;
+  %upper.zext = zext i32 %upper to i64
+  %upper.shl = shl nuw i64 %upper.zext, 32
+  %lower.zext = zext i32 %lower to i64
+  %pack = or disjoint i64 %upper.shl, %lower.zext
+  %mask.bit = and i64 %mask, 32
+  %sel = lshr i64 %pack, %mask.bit
+  %trunc = trunc i64 %sel to i32
+  ret i32 %trunc
+}
+
+define i32 @selective_shift_32.commute(i64 %mask, i32 %upper, i32 %lower) {
+; CHECK-LABEL: define i32 @selective_shift_32.commute(
+; CHECK-SAME: i64 [[MASK:%.*]], i32 [[UPPER:%.*]], i32 [[LOWER:%.*]]) {
+; CHECK-NEXT: [[MASK_BIT:%.*]] = and i64 [[MASK]], 32
+; CHECK-NEXT: [[MASK_BIT_Z:%.*]] = icmp eq i64 [[MASK_BIT]], 0
+; CHECK-NEXT: [[SEL_V:%.*]] = select i1 [[MASK_BIT_Z]], i32 [[LOWER]], i32 [[UPPER]]
+; CHECK-NEXT: ret i32 [[SEL_V]]
+;
+  %upper.zext = zext i32 %upper to i64
+  %upper.shl = shl nuw i64 %upper.zext, 32
+  %lower.zext = zext i32 %lower to i64
+  %pack = or disjoint i64 %lower.zext, %upper.shl
+  %mask.bit = and i64 %mask, 32
+  %sel = lshr i64 %pack, %mask.bit
+  %trunc = trunc i64 %sel to i32
+  ret i32 %trunc
+}
diff --git a/llvm/test/Transforms/InstCombine/ptrtoaddr.ll b/llvm/test/Transforms/InstCombine/ptrtoaddr.ll
index 61b1331..5211fbd 100644
--- a/llvm/test/Transforms/InstCombine/ptrtoaddr.ll
+++ b/llvm/test/Transforms/InstCombine/ptrtoaddr.ll
@@ -1,6 +1,14 @@
 ; NOTE: Assertions have been autogenerated by utils/update_test_checks.py UTC_ARGS: --version 6
 ; RUN: opt < %s -passes=instcombine -S | FileCheck %s
-target datalayout = "p1:64:64:64:32"
+
+; The ptrtoaddr folds are also valid for pointers that have external state.
+target datalayout = "pe1:64:64:64:32"
+
+@g = external global i8
+@g2 = external global i8
+
+@g.as1 = external addrspace(1) global i8
+@g2.as1 = external addrspace(1) global i8
 
 define i32 @ptrtoaddr_inttoptr_arg(i32 %a) {
 ; CHECK-LABEL: define i32 @ptrtoaddr_inttoptr_arg(
@@ -24,14 +32,14 @@ define i32 @ptrtoaddr_inttoptr() {
 
 define i32 @ptrtoaddr_inttoptr_diff_size1() {
 ; CHECK-LABEL: define i32 @ptrtoaddr_inttoptr_diff_size1() {
-; CHECK-NEXT: ret i32 ptrtoaddr (ptr addrspace(1) inttoptr (i64 -1 to ptr addrspace(1)) to i32)
+; CHECK-NEXT: ret i32 -1
 ;
   ret i32 ptrtoaddr (ptr addrspace(1) inttoptr (i64 -1 to ptr addrspace(1)) to i32)
 }
 
 define i32 @ptrtoaddr_inttoptr_diff_size2() {
 ; CHECK-LABEL: define i32 @ptrtoaddr_inttoptr_diff_size2() {
-; CHECK-NEXT: ret i32 ptrtoaddr (ptr addrspace(1) inttoptr (i16 -1 to ptr addrspace(1)) to i32)
+; CHECK-NEXT: ret i32 65535
 ;
   ret i32 ptrtoaddr (ptr addrspace(1) inttoptr (i16 -1 to ptr addrspace(1)) to i32)
 }
@@ -52,14 +60,73 @@ define i64 @ptr2addr2_inttoptr_noas2() {
 
 define i64 @ptrtoaddr_inttoptr_noas_diff_size1() {
 ; CHECK-LABEL: define i64 @ptrtoaddr_inttoptr_noas_diff_size1() {
-; CHECK-NEXT: ret i64 ptrtoaddr (ptr inttoptr (i32 -1 to ptr) to i64)
+; CHECK-NEXT: ret i64 4294967295
 ;
   ret i64 ptrtoaddr (ptr inttoptr (i32 -1 to ptr) to i64)
 }
 
 define i64 @ptrtoaddr_inttoptr_noas_diff_size2() {
 ; CHECK-LABEL: define i64 @ptrtoaddr_inttoptr_noas_diff_size2() {
-; CHECK-NEXT: ret i64 ptrtoaddr (ptr inttoptr (i128 -1 to ptr) to i64)
+; CHECK-NEXT: ret i64 -1
 ;
   ret i64 ptrtoaddr (ptr inttoptr (i128 -1 to ptr) to i64)
 }
+
+define i64 @ptrtoaddr_gep_null() {
+; CHECK-LABEL: define i64 @ptrtoaddr_gep_null() {
+; CHECK-NEXT: ret i64 42
+;
+  ret i64 ptrtoaddr (ptr getelementptr (i8, ptr null, i64 42) to i64)
+}
+
+define i32 @ptrtoaddr_gep_null_addrsize() {
+; CHECK-LABEL: define i32 @ptrtoaddr_gep_null_addrsize() {
+; CHECK-NEXT: ret i32 42
+;
+  ret i32 ptrtoaddr (ptr addrspace(1) getelementptr (i8, ptr addrspace(1) null, i32 42) to i32)
+}
+
+define i64 @ptrtoaddr_gep_sub() {
+; CHECK-LABEL: define i64 @ptrtoaddr_gep_sub() {
+; CHECK-NEXT: ret i64 sub (i64 ptrtoaddr (ptr @g to i64), i64 ptrtoaddr (ptr @g2 to i64))
+;
+  ret i64 ptrtoaddr (ptr getelementptr (i8, ptr @g, i64 sub (i64 0, i64 ptrtoaddr (ptr @g2 to i64))) to i64)
+}
+
+define i32 @ptrtoaddr_gep_sub_addrsize() {
+; CHECK-LABEL: define i32 @ptrtoaddr_gep_sub_addrsize() {
+; CHECK-NEXT: ret i32 sub (i32 ptrtoaddr (ptr addrspace(1) @g.as1 to i32), i32 ptrtoaddr (ptr addrspace(1) @g2.as1 to i32))
+;
+  ret i32 ptrtoaddr (ptr addrspace(1) getelementptr (i8, ptr addrspace(1) @g.as1, i32 sub (i32 0, i32 ptrtoaddr (ptr addrspace(1) @g2.as1 to i32))) to i32)
+}
+
+; Don't fold inttoptr of ptrtoaddr away. inttoptr will pick a previously
+; exposed provenance, which is not necessarily that of @g (especially as
+; ptrtoaddr does not expose the provenance.)
+define ptr @inttoptr_of_ptrtoaddr() {
+; CHECK-LABEL: define ptr @inttoptr_of_ptrtoaddr() {
+; CHECK-NEXT: ret ptr inttoptr (i64 ptrtoaddr (ptr @g to i64) to ptr)
+;
+  ret ptr inttoptr (i64 ptrtoaddr (ptr @g to i64) to ptr)
+}
+
+define i64 @ptrtoaddr_sub_consts_unrelated() {
+; CHECK-LABEL: define i64 @ptrtoaddr_sub_consts_unrelated() {
+; CHECK-NEXT: ret i64 sub (i64 ptrtoaddr (ptr @g to i64), i64 ptrtoaddr (ptr @g2 to i64))
+;
+  ret i64 sub (i64 ptrtoaddr (ptr @g to i64), i64 ptrtoaddr (ptr @g2 to i64))
+}
+
+define i64 @ptrtoaddr_sub_consts_offset() {
+; CHECK-LABEL: define i64 @ptrtoaddr_sub_consts_offset() {
+; CHECK-NEXT: ret i64 42
+;
+  ret i64 sub (i64 ptrtoaddr (ptr getelementptr (i8, ptr @g, i64 42) to i64), i64 ptrtoaddr (ptr @g to i64))
+}
+
+define i32 @ptrtoaddr_sub_consts_offset_addrsize() {
+; CHECK-LABEL: define i32 @ptrtoaddr_sub_consts_offset_addrsize() {
+; CHECK-NEXT: ret i32 42
+;
+  ret i32 sub (i32 ptrtoaddr (ptr addrspace(1) getelementptr (i8, ptr addrspace(1) @g.as1, i32 42) to i32), i32 ptrtoaddr (ptr addrspace(1) @g.as1 to i32))
+}
diff --git a/llvm/test/Transforms/InstSimplify/ptr_diff.ll b/llvm/test/Transforms/InstSimplify/ptr_diff.ll
index d18b462..fdd9e8e 100644
--- a/llvm/test/Transforms/InstSimplify/ptr_diff.ll
+++ b/llvm/test/Transforms/InstSimplify/ptr_diff.ll
@@ -1,11 +1,9 @@
-; NOTE: Assertions have been autogenerated by update_test_checks.py
+; NOTE: Assertions have been autogenerated by utils/update_test_checks.py
 ; RUN: opt < %s -passes=instsimplify -S | FileCheck %s
-target datalayout = "e-p:64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f32:32:32-f64:64:64-v64:64:64-v128:128:128-a0:0:64-s0:64:64-f80:128:128-n8:16:32:64-S128"
-target triple = "x86_64-unknown-linux-gnu"
 
-define i64 @ptrdiff1(ptr %ptr) {
-; CHECK-LABEL: @ptrdiff1(
-; CHECK: ret i64 42
+define i64 @ptrdiff(ptr %ptr) {
+; CHECK-LABEL: @ptrdiff(
+; CHECK-NEXT: ret i64 42
 ;
   %last = getelementptr inbounds i8, ptr %ptr, i32 42
   %first.int = ptrtoint ptr %ptr to i64
@@ -14,9 +12,24 @@ define i64 @ptrdiff1(ptr %ptr) {
   ret i64 %diff
 }
 
-define i64 @ptrdiff2(ptr %ptr) {
-; CHECK-LABEL: @ptrdiff2(
-; CHECK: ret i64 42
+define i64 @ptrdiff_no_inbounds(ptr %ptr) {
+; CHECK-LABEL: @ptrdiff_no_inbounds(
+; CHECK-NEXT: [[LAST:%.*]] = getelementptr i8, ptr [[PTR:%.*]], i32 42
+; CHECK-NEXT: [[FIRST_INT:%.*]] = ptrtoint ptr [[PTR]] to i64
+; CHECK-NEXT: [[LAST_INT:%.*]] = ptrtoint ptr [[LAST]] to i64
+; CHECK-NEXT: [[DIFF:%.*]] = sub i64 [[LAST_INT]], [[FIRST_INT]]
+; CHECK-NEXT: ret i64 [[DIFF]]
+;
+  %last = getelementptr i8, ptr %ptr, i32 42
+  %first.int = ptrtoint ptr %ptr to i64
+  %last.int = ptrtoint ptr %last to i64
+  %diff = sub i64 %last.int, %first.int
+  ret i64 %diff
+}
+
+define i64 @ptrdiff_chain(ptr %ptr) {
+; CHECK-LABEL: @ptrdiff_chain(
+; CHECK-NEXT: ret i64 42
 ;
   %first2 = getelementptr inbounds i8, ptr %ptr, i32 1
   %first3 = getelementptr inbounds i8, ptr %first2, i32 2
@@ -31,26 +44,10 @@ define i64 @ptrdiff2(ptr %ptr) {
   ret i64 %diff
 }
 
-define i64 @ptrdiff3(ptr %ptr) {
-; Don't bother with non-inbounds GEPs.
-; CHECK-LABEL: @ptrdiff3(
-; CHECK: [[LAST:%.*]] = getelementptr i8, ptr %ptr, i32 42
-; CHECK-NEXT: [[FIRST_INT:%.*]] = ptrtoint ptr %ptr to i64
-; CHECK-NEXT: [[LAST_INT:%.*]] = ptrtoint ptr [[LAST]] to i64
-; CHECK-NEXT: [[DIFF:%.*]] = sub i64 [[LAST_INT]], [[FIRST_INT]]
-; CHECK-NEXT: ret i64 [[DIFF]]
-;
-  %last = getelementptr i8, ptr %ptr, i32 42
-  %first.int = ptrtoint ptr %ptr to i64
-  %last.int = ptrtoint ptr %last to i64
-  %diff = sub i64 %last.int, %first.int
-  ret i64 %diff
-}
-
-define <4 x i32> @ptrdiff4(<4 x ptr> %arg) nounwind {
 ; Handle simple cases of vectors of pointers.
-; CHECK-LABEL: @ptrdiff4(
-; CHECK: ret <4 x i32> zeroinitializer
+define <4 x i32> @ptrdiff_vectors(<4 x ptr> %arg) nounwind {
+; CHECK-LABEL: @ptrdiff_vectors(
+; CHECK-NEXT: ret <4 x i32> zeroinitializer
 ;
   %p1 = ptrtoint <4 x ptr> %arg to <4 x i32>
   %bc = bitcast <4 x ptr> %arg to <4 x ptr>
@@ -63,9 +60,9 @@ define <4 x i32> @ptrdiff4(<4 x ptr> %arg) nounwind {
 
 @global = internal global %struct.ham zeroinitializer, align 4
 
-define i32 @ptrdiff5() nounwind {
-; CHECK-LABEL: @ptrdiff5(
-; CHECK: bb:
+define i32 @ptrdiff_global() nounwind {
+; CHECK-LABEL: @ptrdiff_global(
+; CHECK-NEXT: bb:
 ; CHECK-NEXT: ret i32 0
 ;
 bb:
diff --git a/llvm/test/Transforms/LICM/vector-intrinsics.ll b/llvm/test/Transforms/LICM/vector-intrinsics.ll
new file mode 100644
index 0000000..351773e
--- /dev/null
+++ b/llvm/test/Transforms/LICM/vector-intrinsics.ll
@@ -0,0 +1,176 @@
+; NOTE: Assertions have been autogenerated by utils/update_test_checks.py UTC_ARGS: --version 6
+; RUN: opt -S -passes='loop-mssa(licm)' -verify-memoryssa %s | FileCheck %s
+
+define i32 @reduce_umax(<2 x i32> %inv, i1 %c) {
+; CHECK-LABEL: define i32 @reduce_umax(
+; CHECK-SAME: <2 x i32> [[INV:%.*]], i1 [[C:%.*]]) {
+; CHECK-NEXT: [[ENTRY:.*]]:
+; CHECK-NEXT: [[REDUCE_UMAX:%.*]] = call i32 @llvm.vector.reduce.umax.v2i32(<2 x i32> [[INV]])
+; CHECK-NEXT: br label %[[LOOP:.*]]
+; CHECK: [[LOOP]]:
+; CHECK-NEXT: [[IV:%.*]] = phi i32 [ 0, %[[ENTRY]] ], [ [[IV_NEXT:%.*]], %[[LOOP]] ]
+; CHECK-NEXT: [[IV_NEXT]] = add i32 [[IV]], 1
+; CHECK-NEXT: [[BACKEDGE_COND:%.*]] = icmp ult i32 [[IV]], [[REDUCE_UMAX]]
+; CHECK-NEXT: [[OR_COND:%.*]] = select i1 [[C]], i1 [[BACKEDGE_COND]], i1 false
+; CHECK-NEXT: br i1 [[OR_COND]], label %[[LOOP]], label %[[EXIT:.*]]
+; CHECK: [[EXIT]]:
+; CHECK-NEXT: [[IV_LCSSA:%.*]] = phi i32 [ [[IV]], %[[LOOP]] ]
+; CHECK-NEXT: ret i32 [[IV_LCSSA]]
+;
+entry:
+  br label %loop
+
+loop:
+  %iv = phi i32 [ 0, %entry ], [ %iv.next, %cond.true ]
+  %iv.next = add i32 %iv, 1
+  br i1 %c, label %cond.true, label %exit
+
+cond.true:
+  %reduce.umax = call i32 @llvm.vector.reduce.umax.v2i32(<2 x i32> %inv)
+  %backedge.cond = icmp ult i32 %iv, %reduce.umax
+  br i1 %backedge.cond, label %loop, label %exit
+
+exit:
+  ret i32 %iv
+}
+
+define i32 @vp_umax(<2 x i32> %inv.l, <2 x i32> %inv.r, i1 %c) {
+; CHECK-LABEL: define i32 @vp_umax(
+; CHECK-SAME: <2 x i32> [[INV_L:%.*]], <2 x i32> [[INV_R:%.*]], i1 [[C:%.*]]) {
+; CHECK-NEXT: [[ENTRY:.*]]:
+; CHECK-NEXT: [[VP_UMAX:%.*]] = call <2 x i32> @llvm.vp.umax.v2i32(<2 x i32> [[INV_L]], <2 x i32> [[INV_R]], <2 x i1> splat (i1 true), i32 2)
+; CHECK-NEXT: [[EXTRACT:%.*]] = extractelement <2 x i32> [[VP_UMAX]], i32 0
+; CHECK-NEXT: br label %[[LOOP:.*]]
+; CHECK: [[LOOP]]:
+; CHECK-NEXT: [[IV:%.*]] = phi i32 [ 0, %[[ENTRY]] ], [ [[IV_NEXT:%.*]], %[[LOOP]] ]
+; CHECK-NEXT: [[IV_NEXT]] = add i32 [[IV]], 1
+; CHECK-NEXT: [[BACKEDGE_COND:%.*]] = icmp ult i32 [[IV]], [[EXTRACT]]
+; CHECK-NEXT: [[OR_COND:%.*]] = select i1 [[C]], i1 [[BACKEDGE_COND]], i1 false
+; CHECK-NEXT: br i1 [[OR_COND]], label %[[LOOP]], label %[[EXIT:.*]]
+; CHECK: [[EXIT]]:
+; CHECK-NEXT: [[IV_LCSSA:%.*]] = phi i32 [ [[IV]], %[[LOOP]] ]
+; CHECK-NEXT: ret i32 [[IV_LCSSA]]
+;
+entry:
+  br label %loop
+
+loop:
+  %iv = phi i32 [ 0, %entry ], [ %iv.next, %cond.true ]
+  %iv.next = add i32 %iv, 1
+  br i1 %c, label %cond.true, label %exit
+
+cond.true:
+  %vp.umax = call <2 x i32> @llvm.vp.umax.v2i32(<2 x i32> %inv.l, <2 x i32> %inv.r, <2 x i1> splat (i1 1), i32 2)
+  %extract = extractelement <2 x i32> %vp.umax, i32 0
+  %backedge.cond = icmp ult i32 %iv, %extract
+  br i1 %backedge.cond, label %loop, label %exit
+
+exit:
+  ret i32 %iv
+}
+
+define i32 @vp_udiv(<2 x i32> %inv.q, <2 x i32> %inv.d, i1 %c) {
+; CHECK-LABEL: define i32 @vp_udiv(
+; CHECK-SAME: <2 x i32> [[INV_Q:%.*]], <2 x i32> [[INV_D:%.*]], i1 [[C:%.*]]) {
+; CHECK-NEXT: [[ENTRY:.*]]:
+; CHECK-NEXT: br label %[[LOOP:.*]]
+; CHECK: [[LOOP]]:
+; CHECK-NEXT: [[IV:%.*]] = phi i32 [ 0, %[[ENTRY]] ], [ [[IV_NEXT:%.*]], %[[COND_TRUE:.*]] ]
+; CHECK-NEXT: [[IV_NEXT]] = add i32 [[IV]], 1
+; CHECK-NEXT: br i1 [[C]], label %[[COND_TRUE]], label %[[EXIT:.*]]
+; CHECK: [[COND_TRUE]]:
+; CHECK-NEXT: [[VP_UDIV:%.*]] = call <2 x i32> @llvm.vp.udiv.v2i32(<2 x i32> [[INV_Q]], <2 x i32> [[INV_D]], <2 x i1> splat (i1 true), i32 2)
+; CHECK-NEXT: [[EXTRACT:%.*]] = extractelement <2 x i32> [[VP_UDIV]], i32 0
+; CHECK-NEXT: [[LOOP_COND:%.*]] = icmp ult i32 [[IV]], [[EXTRACT]]
+; CHECK-NEXT: br i1 [[LOOP_COND]], label %[[LOOP]], label %[[EXIT]]
+; CHECK: [[EXIT]]:
+; CHECK-NEXT: [[IV_LCSSA:%.*]] = phi i32 [ [[IV]], %[[COND_TRUE]] ], [ [[IV]], %[[LOOP]] ]
+; CHECK-NEXT: ret i32 [[IV_LCSSA]]
+;
+entry:
+  br label %loop
+
+loop:
+  %iv = phi i32 [ 0, %entry ], [ %iv.next, %cond.true ]
+  %iv.next = add i32 %iv, 1
+  br i1 %c, label %cond.true, label %exit
+
+cond.true:
+  %vp.udiv = call <2 x i32> @llvm.vp.udiv.v2i32(<2 x i32> %inv.q, <2 x i32> %inv.d, <2 x i1> splat (i1 1), i32 2)
+  %extract = extractelement <2 x i32> %vp.udiv, i32 0
+  %backedge.cond = icmp ult i32 %iv, %extract
+  br i1 %backedge.cond, label %loop, label %exit
+
+exit:
+  ret i32 %iv
+}
+
+define i32 @vp_load(ptr %inv, i1 %c) {
+; CHECK-LABEL: define i32 @vp_load(
+; CHECK-SAME: ptr [[INV:%.*]], i1 [[C:%.*]]) {
+; CHECK-NEXT: [[ENTRY:.*]]:
+; CHECK-NEXT: br label %[[LOOP:.*]]
+; CHECK: [[LOOP]]:
+; CHECK-NEXT: [[IV:%.*]] = phi i32 [ 0, %[[ENTRY]] ], [ [[IV_NEXT:%.*]], %[[COND_TRUE:.*]] ]
+; CHECK-NEXT: [[IV_NEXT]] = add i32 [[IV]], 1
+; CHECK-NEXT: br i1 [[C]], label %[[COND_TRUE]], label %[[EXIT:.*]]
+; CHECK: [[COND_TRUE]]:
+; CHECK-NEXT: [[VP_LOAD:%.*]] = call <2 x i32> @llvm.vp.load.v2i32.p0(ptr [[INV]], <2 x i1> splat (i1 true), i32 2)
+; CHECK-NEXT: [[EXTRACT:%.*]] = extractelement <2 x i32> [[VP_LOAD]], i32 0
+; CHECK-NEXT: [[LOOP_COND:%.*]] = icmp ult i32 [[IV]], [[EXTRACT]]
+; CHECK-NEXT: br i1 [[LOOP_COND]], label %[[LOOP]], label %[[EXIT]]
+; CHECK: [[EXIT]]:
+; CHECK-NEXT: [[IV_LCSSA:%.*]] = phi i32 [ [[IV]], %[[COND_TRUE]] ], [ [[IV]], %[[LOOP]] ]
+; CHECK-NEXT: ret i32 [[IV_LCSSA]]
+;
+entry:
+  br label %loop
+
+loop:
+  %iv = phi i32 [ 0, %entry ], [ %iv.next, %cond.true ]
+  %iv.next = add i32 %iv, 1
+  br i1 %c, label %cond.true, label %exit
+
+cond.true:
+  %vp.load = call <2 x i32> @llvm.vp.load.v2i32(ptr %inv, <2 x i1> splat (i1 1), i32 2)
+  %extract = extractelement <2 x i32> %vp.load, i32 0
+  %backedge.cond = icmp ult i32 %iv, %extract
+  br i1 %backedge.cond, label %loop, label %exit
+
+exit:
+  ret i32 %iv
+}
+
+define i32 @vp_store(<2 x i32> %inv.v, ptr %inv.p, i1 %c) {
+; CHECK-LABEL: define i32 @vp_store(
+; CHECK-SAME: <2 x i32> [[INV_V:%.*]], ptr [[INV_P:%.*]], i1 [[C:%.*]]) {
+; CHECK-NEXT: [[ENTRY:.*]]:
+; CHECK-NEXT: br label %[[LOOP:.*]]
+; CHECK: [[LOOP]]:
+; CHECK-NEXT: [[IV:%.*]] = phi i32 [ 0, %[[ENTRY]] ], [ [[IV_NEXT:%.*]], %[[COND_TRUE:.*]] ]
+; CHECK-NEXT: [[IV_NEXT]] = add i32 [[IV]], 1
+; CHECK-NEXT: br i1 [[C]], label %[[COND_TRUE]], label %[[EXIT:.*]]
+; CHECK: [[COND_TRUE]]:
+; CHECK-NEXT: call void @llvm.vp.store.v2i32.p0(<2 x i32> [[INV_V]], ptr [[INV_P]], <2 x i1> splat (i1 true), i32 2)
+; CHECK-NEXT: [[BACKEDGE_COND:%.*]] = icmp ult i32 [[IV]], 10
+; CHECK-NEXT: br i1 [[BACKEDGE_COND]], label %[[LOOP]], label %[[EXIT]]
+; CHECK: [[EXIT]]:
+; CHECK-NEXT: [[IV_LCSSA:%.*]] = phi i32 [ [[IV]], %[[COND_TRUE]] ], [ [[IV]], %[[LOOP]] ]
+; CHECK-NEXT: ret i32 [[IV_LCSSA]]
+;
+entry:
+  br label %loop
+
+loop:
+  %iv = phi i32 [ 0, %entry ], [ %iv.next, %cond.true ]
+  %iv.next = add i32 %iv, 1
+  br i1 %c, label %cond.true, label %exit
+
+cond.true:
+  call void @llvm.vp.store.v2i32(<2 x i32> %inv.v, ptr %inv.p, <2 x i1> splat (i1 1), i32 2)
+  %backedge.cond = icmp ult i32 %iv, 10
+  br i1 %backedge.cond, label %loop, label %exit
+
+exit:
+  ret i32 %iv
+}
diff --git a/llvm/test/Transforms/LoopRotate/multiple-deopt-exits.ll b/llvm/test/Transforms/LoopRotate/multiple-deopt-exits.ll
deleted file mode 100644
index 72bc543..0000000
--- a/llvm/test/Transforms/LoopRotate/multiple-deopt-exits.ll
+++ /dev/null
@@ -1,164 +0,0 @@
-; NOTE: Assertions have been autogenerated by utils/update_test_checks.py
-; RUN: opt -S < %s -passes='loop(loop-rotate)' -loop-rotate-multi=true | FileCheck %s
-
-; Test loop rotation with multiple exits, some of them - deoptimizing.
-; We should end up with a latch which exit is non-deoptimizing, so we should rotate
-; more than once.
-
-declare i32 @llvm.experimental.deoptimize.i32(...)
-
-define i32 @test_cond_with_one_deopt_exit(ptr nonnull %a, i64 %x) {
-; Rotation done twice.
-; Latch should be at the 2nd condition (for.cond2), exiting to %return.
-;
-; CHECK-LABEL: @test_cond_with_one_deopt_exit(
-; CHECK-NEXT: entry:
-; CHECK-NEXT: [[VAL_A_IDX3:%.*]] = load i32, ptr %a, align 4
-; CHECK-NEXT: [[ZERO_CHECK4:%.*]] = icmp eq i32 [[VAL_A_IDX3]], 0
-; CHECK-NEXT: br i1 [[ZERO_CHECK4]], label %deopt.exit, label %for.cond2.lr.ph
-; CHECK: for.cond2.lr.ph:
-; CHECK-NEXT: [[FOR_CHECK8:%.*]] = icmp ult i64 0, %x
-; CHECK-NEXT: br i1 [[FOR_CHECK8]], label %for.body.lr.ph, label %return
-; CHECK: for.body.lr.ph:
-; CHECK-NEXT: br label %for.body
-; CHECK: for.cond2:
-; CHECK: [[FOR_CHECK:%.*]] = icmp ult i64 {{%.*}}, %x
-; CHECK-NEXT: br i1 [[FOR_CHECK]], label %for.body, label %for.cond2.return_crit_edge
-; CHECK: for.body:
-; CHECK: br label %for.tail
-; CHECK: for.tail:
-; CHECK: [[VAL_A_IDX:%.*]] = load i32, ptr
-; CHECK-NEXT: [[ZERO_CHECK:%.*]] = icmp eq i32 [[VAL_A_IDX]], 0
-; CHECK-NEXT: br i1 [[ZERO_CHECK]], label %for.cond1.deopt.exit_crit_edge, label %for.cond2
-; CHECK: for.cond2.return_crit_edge:
-; CHECK-NEXT: {{%.*}} = phi i32
-; CHECK-NEXT: br label %return
-; CHECK: return:
-; CHECK-NEXT: [[SUM_LCSSA2:%.*]] = phi i32
-; CHECK-NEXT: ret i32 [[SUM_LCSSA2]]
-; CHECK: for.cond1.deopt.exit_crit_edge:
-; CHECK-NEXT: {{%.*}} = phi i32
-; CHECK-NEXT: br label %deopt.exit
-; CHECK: deopt.exit:
-; CHECK: [[DEOPT_VAL:%.*]] = call i32 (...) @llvm.experimental.deoptimize.i32() [ "deopt"(i32 {{%.*}}) ]
-; CHECK-NEXT: ret i32 [[DEOPT_VAL]]
-;
-entry:
-  br label %for.cond1
-
-for.cond1:
-  %idx = phi i64 [ 0, %entry ], [ %idx.next, %for.tail ]
-  %sum = phi i32 [ 0, %entry ], [ %sum.next, %for.tail ]
-  %a.idx = getelementptr inbounds i32, ptr %a, i64 %idx
-  %val.a.idx = load i32, ptr %a.idx, align 4
-  %zero.check = icmp eq i32 %val.a.idx, 0
-  br i1 %zero.check, label %deopt.exit, label %for.cond2
-
-for.cond2:
-  %for.check = icmp ult i64 %idx, %x
-  br i1 %for.check, label %for.body, label %return
-
-for.body:
-  br label %for.tail
-
-for.tail:
-  %sum.next = add i32 %sum, %val.a.idx
-  %idx.next = add nuw nsw i64 %idx, 1
-  br label %for.cond1
-
-return:
-  ret i32 %sum
-
-deopt.exit:
-  %deopt.val = call i32(...) @llvm.experimental.deoptimize.i32() [ "deopt"(i32 %val.a.idx) ]
-  ret i32 %deopt.val
-}
-
-define i32 @test_cond_with_two_deopt_exits(ptr nonnull %a, i64 %x) {
-; Rotation done three times.
-; Latch should be at the 3rd condition (for.cond3), exiting to %return.
-;
-; CHECK-LABEL: @test_cond_with_two_deopt_exits(
-; CHECK-NEXT: entry:
-; CHECK-NEXT: [[A_IDX_DEREF4:%.*]] = load ptr, ptr %a
-; CHECK-NEXT: [[NULL_CHECK5:%.*]] = icmp eq ptr [[A_IDX_DEREF4]], null
-; CHECK-NEXT: br i1 [[NULL_CHECK5]], label %deopt.exit1, label %for.cond2.lr.ph
-; CHECK: for.cond2.lr.ph:
-; CHECK-NEXT: [[VAL_A_IDX9:%.*]] = load i32, ptr [[A_IDX_DEREF4]], align 4
-; CHECK-NEXT: [[ZERO_CHECK10:%.*]] = icmp eq i32 [[VAL_A_IDX9]], 0
-; CHECK-NEXT: br i1 [[ZERO_CHECK10]], label %deopt.exit2, label %for.cond3.lr.ph
-; CHECK: for.cond3.lr.ph:
-; CHECK-NEXT: [[FOR_CHECK14:%.*]] = icmp ult i64 0, %x
-; CHECK-NEXT: br i1 [[FOR_CHECK14]], label %for.body.lr.ph, label %return
-; CHECK: for.body.lr.ph:
-; CHECK-NEXT: br label %for.body
-; CHECK: for.cond2:
-; CHECK: [[VAL_A_IDX:%.*]] = load i32, ptr
-; CHECK-NEXT: [[ZERO_CHECK:%.*]] = icmp eq i32 [[VAL_A_IDX]], 0
-; CHECK-NEXT: br i1 [[ZERO_CHECK]], label %for.cond2.deopt.exit2_crit_edge, label %for.cond3
-; CHECK: for.cond3:
-; CHECK: [[FOR_CHECK:%.*]] = icmp ult i64 {{%.*}}, %x
-; CHECK-NEXT: br i1 [[FOR_CHECK]], label %for.body, label %for.cond3.return_crit_edge
-; CHECK: for.body:
-; CHECK: br label %for.tail
-; CHECK: for.tail:
-; CHECK: [[IDX_NEXT:%.*]] = add nuw nsw i64 {{%.*}}, 1
-; CHECK: [[NULL_CHECK:%.*]] = icmp eq ptr {{%.*}}, null
-; CHECK-NEXT: br i1 [[NULL_CHECK]], label %for.cond1.deopt.exit1_crit_edge, label %for.cond2
-; CHECK: for.cond3.return_crit_edge:
-; CHECK-NEXT: [[SPLIT18:%.*]] = phi i32
-; CHECK-NEXT: br label %return
-; CHECK: return:
-; CHECK-NEXT: [[SUM_LCSSA2:%.*]] = phi i32
-; CHECK-NEXT: ret i32 [[SUM_LCSSA2]]
-; CHECK: for.cond1.deopt.exit1_crit_edge:
-; CHECK-NEXT: br label %deopt.exit1
-; CHECK: deopt.exit1:
-; CHECK-NEXT: [[DEOPT_VAL1:%.*]] = call i32 (...) @llvm.experimental.deoptimize.i32() [ "deopt"(i32 0) ]
-; CHECK-NEXT: ret i32 [[DEOPT_VAL1]]
-; CHECK: for.cond2.deopt.exit2_crit_edge:
-; CHECK-NEXT: [[SPLIT:%.*]] = phi i32
-; CHECK-NEXT: br label %deopt.exit2
-; CHECK: deopt.exit2:
-; CHECK-NEXT: [[VAL_A_IDX_LCSSA:%.*]] = phi i32
-; CHECK-NEXT: [[DEOPT_VAL2:%.*]] = call i32 (...) @llvm.experimental.deoptimize.i32() [ "deopt"(i32 [[VAL_A_IDX_LCSSA]]) ]
-; CHECK-NEXT: ret i32 [[DEOPT_VAL2]]
-;
-entry:
-  br label %for.cond1
-
-for.cond1:
-  %idx = phi i64 [ 0, %entry ], [ %idx.next, %for.tail ]
-  %sum = phi i32 [ 0, %entry ], [ %sum.next, %for.tail ]
-  %a.idx = getelementptr inbounds ptr, ptr %a, i64 %idx
-  %a.idx.deref = load ptr, ptr %a.idx
-  %null.check = icmp eq ptr %a.idx.deref, null
-  br i1 %null.check, label %deopt.exit1, label %for.cond2
-
-for.cond2:
-  %val.a.idx = load i32, ptr %a.idx.deref, align 4
-  %zero.check = icmp eq i32 %val.a.idx, 0
-  br i1 %zero.check, label %deopt.exit2, label %for.cond3
-
-for.cond3:
-  %for.check = icmp ult i64 %idx, %x
-  br i1 %for.check, label %for.body, label %return
-
-for.body:
-  br label %for.tail
-
-for.tail:
-  %sum.next = add i32 %sum, %val.a.idx
-  %idx.next = add nuw nsw i64 %idx, 1
-  br label %for.cond1
-
-return:
-  ret i32 %sum
-
-deopt.exit1:
-  %deopt.val1 = call i32(...) @llvm.experimental.deoptimize.i32() [ "deopt"(i32 0) ]
-  ret i32 %deopt.val1
-deopt.exit2:
-  %deopt.val2 = call i32(...) @llvm.experimental.deoptimize.i32() [ "deopt"(i32 %val.a.idx) ]
-  ret i32 %deopt.val2
-}
diff --git a/llvm/test/Transforms/LoopRotate/multiple-exits.ll b/llvm/test/Transforms/LoopRotate/multiple-exits.ll
deleted file mode 100644
index 748700c..0000000
--- a/llvm/test/Transforms/LoopRotate/multiple-exits.ll
+++ /dev/null
@@ -1,236 +0,0 @@
-; RUN: opt -S -passes=loop-rotate < %s -verify-loop-info -verify-dom-info -verify-memoryssa | FileCheck %s
-
-target datalayout = "e-p:64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f32:32:32-f64:64:64-v64:64:64-v128:128:128-a0:0:64-s0:64:64-f80:128:128-n8:16:32:64-S128"
-target triple = "x86_64-apple-macosx10.8.0"
-
-; PR7447
-define i32 @test1(ptr nocapture %a) nounwind readonly {
-entry:
-  br label %for.cond
-
-for.cond: ; preds = %for.cond1, %entry
-  %sum.0 = phi i32 [ 0, %entry ], [ %sum.1, %for.cond1 ]
-  %i.0 = phi i1 [ true, %entry ], [ false, %for.cond1 ]
-  br i1 %i.0, label %for.cond1, label %return
-
-for.cond1: ; preds = %for.cond, %land.rhs
-  %sum.1 = phi i32 [ %add, %land.rhs ], [ %sum.0, %for.cond ]
-  %i.1 = phi i32 [ %inc, %land.rhs ], [ 0, %for.cond ]
-  %cmp2 = icmp ult i32 %i.1, 100
-  br i1 %cmp2, label %land.rhs, label %for.cond
-
-land.rhs: ; preds = %for.cond1
-  %conv = zext i32 %i.1 to i64
-  %arrayidx = getelementptr inbounds [100 x i32], ptr %a, i64 0, i64 %conv
-  %0 = load i32, ptr %arrayidx, align 4
-  %add = add i32 %0, %sum.1
-  %cmp4 = icmp ugt i32 %add, 1000
-  %inc = add i32 %i.1, 1
-  br i1 %cmp4, label %return, label %for.cond1
-
-return: ; preds = %for.cond, %land.rhs
-  %retval.0 = phi i32 [ 1000, %land.rhs ], [ %sum.0, %for.cond ]
-  ret i32 %retval.0
-
-; CHECK-LABEL: @test1(
-; CHECK: for.cond1.preheader:
-; CHECK: %sum.04 = phi i32 [ 0, %entry ], [ %sum.1.lcssa, %for.cond.loopexit ]
-; CHECK: br label %for.cond1
-
-; CHECK: for.cond1:
-; CHECK: %sum.1 = phi i32 [ %add, %land.rhs ], [ %sum.04, %for.cond1.preheader ]
-; CHECK: %i.1 = phi i32 [ %inc, %land.rhs ], [ 0, %for.cond1.preheader ]
-; CHECK: %cmp2 = icmp ult i32 %i.1, 100
-; CHECK: br i1 %cmp2, label %land.rhs, label %for.cond.loopexit
-}
-
-define void @test2(i32 %x) nounwind {
-entry:
-  br label %for.cond
-
-for.cond: ; preds = %if.end, %entry
-  %i.0 = phi i32 [ 0, %entry ], [ %inc, %if.end ]
-  %cmp = icmp eq i32 %i.0, %x
-  br i1 %cmp, label %return.loopexit, label %for.body
-
-for.body: ; preds = %for.cond
-  %call = tail call i32 @foo(i32 %i.0) nounwind
-  %tobool = icmp eq i32 %call, 0
-  br i1 %tobool,
label %if.end, label %a - -if.end: ; preds = %for.body - %call1 = tail call i32 @foo(i32 42) nounwind - %inc = add i32 %i.0, 1 - br label %for.cond - -a: ; preds = %for.body - %call2 = tail call i32 @bar(i32 1) nounwind - br label %return - -return.loopexit: ; preds = %for.cond - br label %return - -return: ; preds = %return.loopexit, %a - ret void - -; CHECK-LABEL: @test2( -; CHECK: if.end: -; CHECK: %inc = add i32 %i.02, 1 -; CHECK: %cmp = icmp eq i32 %inc, %x -; CHECK: br i1 %cmp, label %for.cond.return.loopexit_crit_edge, label %for.body -} - -declare i32 @foo(i32) - -declare i32 @bar(i32) - -@_ZTIi = external constant ptr - -; Verify dominators. -define void @test3(i32 %x) personality ptr @__gxx_personality_v0 { -entry: - %cmp2 = icmp eq i32 0, %x - br i1 %cmp2, label %try.cont.loopexit, label %for.body.lr.ph - -for.body.lr.ph: ; preds = %entry - br label %for.body - -for.body: ; preds = %for.body.lr.ph, %for.inc - %i.03 = phi i32 [ 0, %for.body.lr.ph ], [ %inc, %for.inc ] - invoke void @_Z3fooi(i32 %i.03) - to label %for.inc unwind label %lpad - -for.inc: ; preds = %for.body - %inc = add i32 %i.03, 1 - %cmp = icmp eq i32 %inc, %x - br i1 %cmp, label %for.cond.try.cont.loopexit_crit_edge, label %for.body - -lpad: ; preds = %for.body - %0 = landingpad { ptr, i32 } - catch ptr @_ZTIi - %1 = extractvalue { ptr, i32 } %0, 0 - %2 = extractvalue { ptr, i32 } %0, 1 - %3 = tail call i32 @llvm.eh.typeid.for(ptr @_ZTIi) nounwind - %matches = icmp eq i32 %2, %3 - br i1 %matches, label %catch, label %eh.resume - -catch: ; preds = %lpad - %4 = tail call ptr @__cxa_begin_catch(ptr %1) nounwind - br i1 true, label %invoke.cont2.loopexit, label %for.body.i.lr.ph - -for.body.i.lr.ph: ; preds = %catch - br label %for.body.i - -for.body.i: ; preds = %for.body.i.lr.ph, %for.inc.i - %i.0.i1 = phi i32 [ 0, %for.body.i.lr.ph ], [ %inc.i, %for.inc.i ] - invoke void @_Z3fooi(i32 %i.0.i1) - to label %for.inc.i unwind label %lpad.i - -for.inc.i: ; preds = %for.body.i - %inc.i = add i32 %i.0.i1, 1 - %cmp.i = icmp eq i32 %inc.i, 0 - br i1 %cmp.i, label %for.cond.i.invoke.cont2.loopexit_crit_edge, label %for.body.i - -lpad.i: ; preds = %for.body.i - %5 = landingpad { ptr, i32 } - catch ptr @_ZTIi - %6 = extractvalue { ptr, i32 } %5, 0 - %7 = extractvalue { ptr, i32 } %5, 1 - %matches.i = icmp eq i32 %7, %3 - br i1 %matches.i, label %catch.i, label %lpad1.body - -catch.i: ; preds = %lpad.i - %8 = tail call ptr @__cxa_begin_catch(ptr %6) nounwind - invoke void @test3(i32 0) - to label %invoke.cont2.i unwind label %lpad1.i - -invoke.cont2.i: ; preds = %catch.i - tail call void @__cxa_end_catch() nounwind - br label %invoke.cont2 - -lpad1.i: ; preds = %catch.i - %9 = landingpad { ptr, i32 } - cleanup - %10 = extractvalue { ptr, i32 } %9, 0 - %11 = extractvalue { ptr, i32 } %9, 1 - tail call void @__cxa_end_catch() nounwind - br label %lpad1.body - -for.cond.i.invoke.cont2.loopexit_crit_edge: ; preds = %for.inc.i - br label %invoke.cont2.loopexit - -invoke.cont2.loopexit: ; preds = %for.cond.i.invoke.cont2.loopexit_crit_edge, %catch - br label %invoke.cont2 - -invoke.cont2: ; preds = %invoke.cont2.loopexit, %invoke.cont2.i - tail call void @__cxa_end_catch() nounwind - br label %try.cont - -for.cond.try.cont.loopexit_crit_edge: ; preds = %for.inc - br label %try.cont.loopexit - -try.cont.loopexit: ; preds = %for.cond.try.cont.loopexit_crit_edge, %entry - br label %try.cont - -try.cont: ; preds = %try.cont.loopexit, %invoke.cont2 - ret void - -lpad1.body: ; preds = %lpad1.i, %lpad.i - %exn.slot.0.i = phi ptr [ %10, 
%lpad1.i ], [ %6, %lpad.i ] - %ehselector.slot.0.i = phi i32 [ %11, %lpad1.i ], [ %7, %lpad.i ] - tail call void @__cxa_end_catch() nounwind - br label %eh.resume - -eh.resume: ; preds = %lpad1.body, %lpad - %exn.slot.0 = phi ptr [ %exn.slot.0.i, %lpad1.body ], [ %1, %lpad ] - %ehselector.slot.0 = phi i32 [ %ehselector.slot.0.i, %lpad1.body ], [ %2, %lpad ] - %lpad.val = insertvalue { ptr, i32 } undef, ptr %exn.slot.0, 0 - %lpad.val5 = insertvalue { ptr, i32 } %lpad.val, i32 %ehselector.slot.0, 1 - resume { ptr, i32 } %lpad.val5 -} - -declare void @_Z3fooi(i32) - -declare i32 @__gxx_personality_v0(...) - -declare i32 @llvm.eh.typeid.for(ptr) nounwind readnone - -declare ptr @__cxa_begin_catch(ptr) - -declare void @__cxa_end_catch() - -define void @test4(i1 %arg) nounwind uwtable { -entry: - br label %"7" - -"3": ; preds = %"7" - br i1 %arg, label %"31", label %"4" - -"4": ; preds = %"3" - %. = select i1 undef, float 0x3F50624DE0000000, float undef - %0 = add i32 %1, 1 - br label %"7" - -"7": ; preds = %"4", %entry - %1 = phi i32 [ %0, %"4" ], [ 0, %entry ] - %2 = icmp slt i32 %1, 100 - br i1 %2, label %"3", label %"8" - -"8": ; preds = %"7" - br i1 %arg, label %"9", label %"31" - -"9": ; preds = %"8" - br label %"33" - -"27": ; preds = %"31" - unreachable - -"31": ; preds = %"8", %"3" - br i1 %arg, label %"27", label %"32" - -"32": ; preds = %"31" - br label %"33" - -"33": ; preds = %"32", %"9" - ret void -} diff --git a/llvm/test/Transforms/LoopVectorize/RISCV/veclib-function-calls.ll b/llvm/test/Transforms/LoopVectorize/RISCV/veclib-function-calls.ll index d73900d..83b494a 100644 --- a/llvm/test/Transforms/LoopVectorize/RISCV/veclib-function-calls.ll +++ b/llvm/test/Transforms/LoopVectorize/RISCV/veclib-function-calls.ll @@ -2288,7 +2288,7 @@ define void @tgamma_f32(ptr noalias %in.ptr, ptr noalias %out.ptr) { } ;. 
; CHECK: attributes #[[ATTR0]] = { "target-features"="+v" } -; CHECK: attributes #[[ATTR1:[0-9]+]] = { nocallback nofree nosync nounwind willreturn memory(none) } +; CHECK: attributes #[[ATTR1:[0-9]+]] = { nocallback nofree nosync nounwind speculatable willreturn memory(none) } ; CHECK: attributes #[[ATTR2]] = { "vector-function-abi-variant"="_ZGVrNxv_acos(Sleef_acosdx_u10rvvm2)" } ; CHECK: attributes #[[ATTR3]] = { "vector-function-abi-variant"="_ZGVrNxv_acosf(Sleef_acosfx_u10rvvm2)" } ; CHECK: attributes #[[ATTR4]] = { "vector-function-abi-variant"="_ZGVrNxv_acosh(Sleef_acoshdx_u10rvvm2)" } diff --git a/llvm/test/Transforms/LoopVectorize/X86/replicating-load-store-costs.ll b/llvm/test/Transforms/LoopVectorize/X86/replicating-load-store-costs.ll index f5329cf..c225ede5 100644 --- a/llvm/test/Transforms/LoopVectorize/X86/replicating-load-store-costs.ll +++ b/llvm/test/Transforms/LoopVectorize/X86/replicating-load-store-costs.ll @@ -580,6 +580,201 @@ exit: ret double %accum } +define void @loaded_address_used_by_load_through_blend(i64 %start, ptr noalias %src, ptr noalias %src.2, ptr noalias %dst) #0 { +; I64-LABEL: define void @loaded_address_used_by_load_through_blend( +; I64-SAME: i64 [[START:%.*]], ptr noalias [[SRC:%.*]], ptr noalias [[SRC_2:%.*]], ptr noalias [[DST:%.*]]) #[[ATTR0]] { +; I64-NEXT: [[ENTRY:.*]]: +; I64-NEXT: br label %[[LOOP_HEADER:.*]] +; I64: [[LOOP_HEADER]]: +; I64-NEXT: [[IV:%.*]] = phi i64 [ 0, %[[ENTRY]] ], [ [[IV_NEXT:%.*]], %[[LOOP_LATCH:.*]] ] +; I64-NEXT: [[IV_2:%.*]] = phi i64 [ [[START]], %[[ENTRY]] ], [ [[IV_2_NEXT:%.*]], %[[LOOP_LATCH]] ] +; I64-NEXT: [[IV_1:%.*]] = add i64 [[IV]], 1 +; I64-NEXT: [[GEP_SRC:%.*]] = getelementptr i8, ptr [[SRC]], i64 [[IV_1]] +; I64-NEXT: [[L_SRC:%.*]] = load float, ptr [[GEP_SRC]], align 4 +; I64-NEXT: [[C:%.*]] = fcmp oeq float [[L_SRC]], 0.000000e+00 +; I64-NEXT: br i1 [[C]], label %[[THEN:.*]], label %[[LOOP_LATCH]] +; I64: [[THEN]]: +; I64-NEXT: [[IV_MUL:%.*]] = mul i64 [[IV_1]], [[START]] +; I64-NEXT: [[GEP_SRC_2:%.*]] = getelementptr i8, ptr [[SRC_2]], i64 [[IV_MUL]] +; I64-NEXT: br label %[[LOOP_LATCH]] +; I64: [[LOOP_LATCH]]: +; I64-NEXT: [[MERGE_GEP:%.*]] = phi ptr [ [[GEP_SRC_2]], %[[THEN]] ], [ [[SRC_2]], %[[LOOP_HEADER]] ] +; I64-NEXT: [[L_2:%.*]] = load float, ptr [[MERGE_GEP]], align 4 +; I64-NEXT: [[GEP_DST:%.*]] = getelementptr i8, ptr [[DST]], i64 [[IV]] +; I64-NEXT: store float [[L_2]], ptr [[GEP_DST]], align 4 +; I64-NEXT: [[IV_NEXT]] = add i64 [[IV]], 1 +; I64-NEXT: [[IV_2_NEXT]] = add i64 [[IV_2]], -1 +; I64-NEXT: [[EC:%.*]] = icmp sgt i64 [[IV_2]], 100 +; I64-NEXT: br i1 [[EC]], label %[[LOOP_HEADER]], label %[[EXIT:.*]] +; I64: [[EXIT]]: +; I64-NEXT: ret void +; +; I32-LABEL: define void @loaded_address_used_by_load_through_blend( +; I32-SAME: i64 [[START:%.*]], ptr noalias [[SRC:%.*]], ptr noalias [[SRC_2:%.*]], ptr noalias [[DST:%.*]]) #[[ATTR0]] { +; I32-NEXT: [[ENTRY:.*:]] +; I32-NEXT: [[TMP0:%.*]] = add i64 [[START]], 1 +; I32-NEXT: [[SMIN:%.*]] = call i64 @llvm.smin.i64(i64 [[START]], i64 100) +; I32-NEXT: [[TMP1:%.*]] = sub i64 [[TMP0]], [[SMIN]] +; I32-NEXT: [[MIN_ITERS_CHECK:%.*]] = icmp ult i64 [[TMP1]], 8 +; I32-NEXT: br i1 [[MIN_ITERS_CHECK]], label %[[SCALAR_PH:.*]], label %[[VECTOR_PH:.*]] +; I32: [[VECTOR_PH]]: +; I32-NEXT: [[N_MOD_VF:%.*]] = urem i64 [[TMP1]], 8 +; I32-NEXT: [[N_VEC:%.*]] = sub i64 [[TMP1]], [[N_MOD_VF]] +; I32-NEXT: [[TMP2:%.*]] = sub i64 [[START]], [[N_VEC]] +; I32-NEXT: [[BROADCAST_SPLATINSERT:%.*]] = insertelement <8 x i64> poison, i64 [[START]], i64 0 +; 
I32-NEXT: [[BROADCAST_SPLAT:%.*]] = shufflevector <8 x i64> [[BROADCAST_SPLATINSERT]], <8 x i64> poison, <8 x i32> zeroinitializer +; I32-NEXT: [[BROADCAST_SPLATINSERT1:%.*]] = insertelement <8 x ptr> poison, ptr [[SRC_2]], i64 0 +; I32-NEXT: [[BROADCAST_SPLAT2:%.*]] = shufflevector <8 x ptr> [[BROADCAST_SPLATINSERT1]], <8 x ptr> poison, <8 x i32> zeroinitializer +; I32-NEXT: br label %[[VECTOR_BODY:.*]] +; I32: [[VECTOR_BODY]]: +; I32-NEXT: [[INDEX:%.*]] = phi i64 [ 0, %[[VECTOR_PH]] ], [ [[INDEX_NEXT:%.*]], %[[VECTOR_BODY]] ] +; I32-NEXT: [[TMP3:%.*]] = add i64 [[INDEX]], 0 +; I32-NEXT: [[TMP4:%.*]] = add i64 [[INDEX]], 1 +; I32-NEXT: [[TMP5:%.*]] = add i64 [[INDEX]], 2 +; I32-NEXT: [[TMP6:%.*]] = add i64 [[INDEX]], 3 +; I32-NEXT: [[TMP7:%.*]] = add i64 [[INDEX]], 4 +; I32-NEXT: [[TMP8:%.*]] = add i64 [[INDEX]], 5 +; I32-NEXT: [[TMP9:%.*]] = add i64 [[INDEX]], 6 +; I32-NEXT: [[TMP10:%.*]] = add i64 [[INDEX]], 7 +; I32-NEXT: [[TMP11:%.*]] = add i64 [[TMP3]], 1 +; I32-NEXT: [[TMP12:%.*]] = add i64 [[TMP4]], 1 +; I32-NEXT: [[TMP13:%.*]] = add i64 [[TMP5]], 1 +; I32-NEXT: [[TMP14:%.*]] = add i64 [[TMP6]], 1 +; I32-NEXT: [[TMP15:%.*]] = add i64 [[TMP7]], 1 +; I32-NEXT: [[TMP16:%.*]] = add i64 [[TMP8]], 1 +; I32-NEXT: [[TMP17:%.*]] = add i64 [[TMP9]], 1 +; I32-NEXT: [[TMP18:%.*]] = add i64 [[TMP10]], 1 +; I32-NEXT: [[TMP19:%.*]] = insertelement <8 x i64> poison, i64 [[TMP11]], i32 0 +; I32-NEXT: [[TMP20:%.*]] = insertelement <8 x i64> [[TMP19]], i64 [[TMP12]], i32 1 +; I32-NEXT: [[TMP21:%.*]] = insertelement <8 x i64> [[TMP20]], i64 [[TMP13]], i32 2 +; I32-NEXT: [[TMP22:%.*]] = insertelement <8 x i64> [[TMP21]], i64 [[TMP14]], i32 3 +; I32-NEXT: [[TMP23:%.*]] = insertelement <8 x i64> [[TMP22]], i64 [[TMP15]], i32 4 +; I32-NEXT: [[TMP24:%.*]] = insertelement <8 x i64> [[TMP23]], i64 [[TMP16]], i32 5 +; I32-NEXT: [[TMP25:%.*]] = insertelement <8 x i64> [[TMP24]], i64 [[TMP17]], i32 6 +; I32-NEXT: [[TMP26:%.*]] = insertelement <8 x i64> [[TMP25]], i64 [[TMP18]], i32 7 +; I32-NEXT: [[TMP27:%.*]] = getelementptr i8, ptr [[SRC]], i64 [[TMP11]] +; I32-NEXT: [[TMP28:%.*]] = getelementptr i8, ptr [[SRC]], i64 [[TMP12]] +; I32-NEXT: [[TMP29:%.*]] = getelementptr i8, ptr [[SRC]], i64 [[TMP13]] +; I32-NEXT: [[TMP30:%.*]] = getelementptr i8, ptr [[SRC]], i64 [[TMP14]] +; I32-NEXT: [[TMP31:%.*]] = getelementptr i8, ptr [[SRC]], i64 [[TMP15]] +; I32-NEXT: [[TMP32:%.*]] = getelementptr i8, ptr [[SRC]], i64 [[TMP16]] +; I32-NEXT: [[TMP33:%.*]] = getelementptr i8, ptr [[SRC]], i64 [[TMP17]] +; I32-NEXT: [[TMP34:%.*]] = getelementptr i8, ptr [[SRC]], i64 [[TMP18]] +; I32-NEXT: [[TMP35:%.*]] = load float, ptr [[TMP27]], align 4 +; I32-NEXT: [[TMP36:%.*]] = load float, ptr [[TMP28]], align 4 +; I32-NEXT: [[TMP37:%.*]] = load float, ptr [[TMP29]], align 4 +; I32-NEXT: [[TMP38:%.*]] = load float, ptr [[TMP30]], align 4 +; I32-NEXT: [[TMP39:%.*]] = load float, ptr [[TMP31]], align 4 +; I32-NEXT: [[TMP40:%.*]] = load float, ptr [[TMP32]], align 4 +; I32-NEXT: [[TMP41:%.*]] = load float, ptr [[TMP33]], align 4 +; I32-NEXT: [[TMP42:%.*]] = load float, ptr [[TMP34]], align 4 +; I32-NEXT: [[TMP43:%.*]] = insertelement <8 x float> poison, float [[TMP35]], i32 0 +; I32-NEXT: [[TMP44:%.*]] = insertelement <8 x float> [[TMP43]], float [[TMP36]], i32 1 +; I32-NEXT: [[TMP45:%.*]] = insertelement <8 x float> [[TMP44]], float [[TMP37]], i32 2 +; I32-NEXT: [[TMP46:%.*]] = insertelement <8 x float> [[TMP45]], float [[TMP38]], i32 3 +; I32-NEXT: [[TMP47:%.*]] = insertelement <8 x float> [[TMP46]], float [[TMP39]], i32 4 +; 
I32-NEXT: [[TMP48:%.*]] = insertelement <8 x float> [[TMP47]], float [[TMP40]], i32 5 +; I32-NEXT: [[TMP49:%.*]] = insertelement <8 x float> [[TMP48]], float [[TMP41]], i32 6 +; I32-NEXT: [[TMP50:%.*]] = insertelement <8 x float> [[TMP49]], float [[TMP42]], i32 7 +; I32-NEXT: [[TMP51:%.*]] = fcmp oeq <8 x float> [[TMP50]], zeroinitializer +; I32-NEXT: [[TMP52:%.*]] = mul <8 x i64> [[TMP26]], [[BROADCAST_SPLAT]] +; I32-NEXT: [[TMP53:%.*]] = extractelement <8 x i64> [[TMP52]], i32 0 +; I32-NEXT: [[TMP54:%.*]] = getelementptr i8, ptr [[SRC_2]], i64 [[TMP53]] +; I32-NEXT: [[TMP55:%.*]] = extractelement <8 x i64> [[TMP52]], i32 1 +; I32-NEXT: [[TMP56:%.*]] = getelementptr i8, ptr [[SRC_2]], i64 [[TMP55]] +; I32-NEXT: [[TMP57:%.*]] = extractelement <8 x i64> [[TMP52]], i32 2 +; I32-NEXT: [[TMP58:%.*]] = getelementptr i8, ptr [[SRC_2]], i64 [[TMP57]] +; I32-NEXT: [[TMP59:%.*]] = extractelement <8 x i64> [[TMP52]], i32 3 +; I32-NEXT: [[TMP60:%.*]] = getelementptr i8, ptr [[SRC_2]], i64 [[TMP59]] +; I32-NEXT: [[TMP61:%.*]] = extractelement <8 x i64> [[TMP52]], i32 4 +; I32-NEXT: [[TMP62:%.*]] = getelementptr i8, ptr [[SRC_2]], i64 [[TMP61]] +; I32-NEXT: [[TMP63:%.*]] = extractelement <8 x i64> [[TMP52]], i32 5 +; I32-NEXT: [[TMP64:%.*]] = getelementptr i8, ptr [[SRC_2]], i64 [[TMP63]] +; I32-NEXT: [[TMP65:%.*]] = extractelement <8 x i64> [[TMP52]], i32 6 +; I32-NEXT: [[TMP66:%.*]] = getelementptr i8, ptr [[SRC_2]], i64 [[TMP65]] +; I32-NEXT: [[TMP67:%.*]] = extractelement <8 x i64> [[TMP52]], i32 7 +; I32-NEXT: [[TMP68:%.*]] = getelementptr i8, ptr [[SRC_2]], i64 [[TMP67]] +; I32-NEXT: [[TMP69:%.*]] = insertelement <8 x ptr> poison, ptr [[TMP54]], i32 0 +; I32-NEXT: [[TMP70:%.*]] = insertelement <8 x ptr> [[TMP69]], ptr [[TMP56]], i32 1 +; I32-NEXT: [[TMP71:%.*]] = insertelement <8 x ptr> [[TMP70]], ptr [[TMP58]], i32 2 +; I32-NEXT: [[TMP72:%.*]] = insertelement <8 x ptr> [[TMP71]], ptr [[TMP60]], i32 3 +; I32-NEXT: [[TMP73:%.*]] = insertelement <8 x ptr> [[TMP72]], ptr [[TMP62]], i32 4 +; I32-NEXT: [[TMP74:%.*]] = insertelement <8 x ptr> [[TMP73]], ptr [[TMP64]], i32 5 +; I32-NEXT: [[TMP75:%.*]] = insertelement <8 x ptr> [[TMP74]], ptr [[TMP66]], i32 6 +; I32-NEXT: [[TMP76:%.*]] = insertelement <8 x ptr> [[TMP75]], ptr [[TMP68]], i32 7 +; I32-NEXT: [[PREDPHI:%.*]] = select <8 x i1> [[TMP51]], <8 x ptr> [[TMP76]], <8 x ptr> [[BROADCAST_SPLAT2]] +; I32-NEXT: [[TMP77:%.*]] = extractelement <8 x ptr> [[PREDPHI]], i32 0 +; I32-NEXT: [[TMP78:%.*]] = load float, ptr [[TMP77]], align 4 +; I32-NEXT: [[TMP79:%.*]] = extractelement <8 x ptr> [[PREDPHI]], i32 1 +; I32-NEXT: [[TMP80:%.*]] = load float, ptr [[TMP79]], align 4 +; I32-NEXT: [[TMP81:%.*]] = extractelement <8 x ptr> [[PREDPHI]], i32 2 +; I32-NEXT: [[TMP82:%.*]] = load float, ptr [[TMP81]], align 4 +; I32-NEXT: [[TMP83:%.*]] = extractelement <8 x ptr> [[PREDPHI]], i32 3 +; I32-NEXT: [[TMP84:%.*]] = load float, ptr [[TMP83]], align 4 +; I32-NEXT: [[TMP85:%.*]] = extractelement <8 x ptr> [[PREDPHI]], i32 4 +; I32-NEXT: [[TMP86:%.*]] = load float, ptr [[TMP85]], align 4 +; I32-NEXT: [[TMP87:%.*]] = extractelement <8 x ptr> [[PREDPHI]], i32 5 +; I32-NEXT: [[TMP88:%.*]] = load float, ptr [[TMP87]], align 4 +; I32-NEXT: [[TMP89:%.*]] = extractelement <8 x ptr> [[PREDPHI]], i32 6 +; I32-NEXT: [[TMP90:%.*]] = load float, ptr [[TMP89]], align 4 +; I32-NEXT: [[TMP91:%.*]] = extractelement <8 x ptr> [[PREDPHI]], i32 7 +; I32-NEXT: [[TMP92:%.*]] = load float, ptr [[TMP91]], align 4 +; I32-NEXT: [[TMP93:%.*]] = getelementptr i8, ptr [[DST]], i64 [[TMP3]] +; 
I32-NEXT: [[TMP94:%.*]] = getelementptr i8, ptr [[DST]], i64 [[TMP4]] +; I32-NEXT: [[TMP95:%.*]] = getelementptr i8, ptr [[DST]], i64 [[TMP5]] +; I32-NEXT: [[TMP96:%.*]] = getelementptr i8, ptr [[DST]], i64 [[TMP6]] +; I32-NEXT: [[TMP97:%.*]] = getelementptr i8, ptr [[DST]], i64 [[TMP7]] +; I32-NEXT: [[TMP98:%.*]] = getelementptr i8, ptr [[DST]], i64 [[TMP8]] +; I32-NEXT: [[TMP99:%.*]] = getelementptr i8, ptr [[DST]], i64 [[TMP9]] +; I32-NEXT: [[TMP100:%.*]] = getelementptr i8, ptr [[DST]], i64 [[TMP10]] +; I32-NEXT: store float [[TMP78]], ptr [[TMP93]], align 4 +; I32-NEXT: store float [[TMP80]], ptr [[TMP94]], align 4 +; I32-NEXT: store float [[TMP82]], ptr [[TMP95]], align 4 +; I32-NEXT: store float [[TMP84]], ptr [[TMP96]], align 4 +; I32-NEXT: store float [[TMP86]], ptr [[TMP97]], align 4 +; I32-NEXT: store float [[TMP88]], ptr [[TMP98]], align 4 +; I32-NEXT: store float [[TMP90]], ptr [[TMP99]], align 4 +; I32-NEXT: store float [[TMP92]], ptr [[TMP100]], align 4 +; I32-NEXT: [[INDEX_NEXT]] = add nuw i64 [[INDEX]], 8 +; I32-NEXT: [[TMP101:%.*]] = icmp eq i64 [[INDEX_NEXT]], [[N_VEC]] +; I32-NEXT: br i1 [[TMP101]], label %[[MIDDLE_BLOCK:.*]], label %[[VECTOR_BODY]], !llvm.loop [[LOOP8:![0-9]+]] +; I32: [[MIDDLE_BLOCK]]: +; I32-NEXT: [[CMP_N:%.*]] = icmp eq i64 [[TMP1]], [[N_VEC]] +; I32-NEXT: br i1 [[CMP_N]], [[EXIT:label %.*]], label %[[SCALAR_PH]] +; I32: [[SCALAR_PH]]: +; +entry: + br label %loop.header + +loop.header: + %iv = phi i64 [ 0, %entry ], [ %iv.next, %loop.latch ] + %iv.2 = phi i64 [ %start, %entry ], [ %iv.2.next, %loop.latch ] + %iv.1 = add i64 %iv, 1 + %gep.src = getelementptr i8, ptr %src, i64 %iv.1 + %l.src = load float, ptr %gep.src, align 4 + %c = fcmp oeq float %l.src, 0.000000e+00 + br i1 %c, label %then, label %loop.latch + +then: + %iv.mul = mul i64 %iv.1, %start + %gep.src.2 = getelementptr i8, ptr %src.2, i64 %iv.mul + br label %loop.latch + +loop.latch: + %merge.gep = phi ptr [ %gep.src.2, %then ], [ %src.2, %loop.header ] + %l.2 = load float, ptr %merge.gep, align 4 + %gep.dst = getelementptr i8, ptr %dst, i64 %iv + store float %l.2, ptr %gep.dst, align 4 + %iv.next = add i64 %iv, 1 + %iv.2.next = add i64 %iv.2, -1 + %ec = icmp sgt i64 %iv.2, 100 + br i1 %ec, label %loop.header, label %exit + +exit: + ret void +} + +attributes #0 = { "target-cpu"="znver3" } attributes #0 = { "target-cpu"="znver2" } !0 = distinct !{!0, !1} diff --git a/llvm/test/Transforms/LoopVectorize/single_early_exit.ll b/llvm/test/Transforms/LoopVectorize/single_early_exit.ll index 3500c5c..4fd8d17 100644 --- a/llvm/test/Transforms/LoopVectorize/single_early_exit.ll +++ b/llvm/test/Transforms/LoopVectorize/single_early_exit.ll @@ -546,19 +546,50 @@ define i64 @loop_guards_needed_to_prove_deref_multiple(i32 %x, i1 %c, ptr derefe ; CHECK-NEXT: call void @llvm.assume(i1 [[PRE_2]]) ; CHECK-NEXT: [[N:%.*]] = add i32 [[SEL]], -1 ; CHECK-NEXT: [[N_EXT:%.*]] = zext i32 [[N]] to i64 +; CHECK-NEXT: [[TMP0:%.*]] = add i32 [[SEL]], -2 +; CHECK-NEXT: [[TMP1:%.*]] = zext i32 [[TMP0]] to i64 +; CHECK-NEXT: [[TMP2:%.*]] = add nuw nsw i64 [[TMP1]], 2 +; CHECK-NEXT: [[MIN_ITERS_CHECK:%.*]] = icmp ult i64 [[TMP2]], 4 +; CHECK-NEXT: br i1 [[MIN_ITERS_CHECK]], label [[SCALAR_PH:%.*]], label [[VECTOR_PH:%.*]] +; CHECK: vector.ph: +; CHECK-NEXT: [[N_MOD_VF:%.*]] = urem i64 [[TMP2]], 4 +; CHECK-NEXT: [[IV_NEXT:%.*]] = sub i64 [[TMP2]], [[N_MOD_VF]] ; CHECK-NEXT: br label [[LOOP_HEADER:%.*]] +; CHECK: vector.body: +; CHECK-NEXT: [[INDEX:%.*]] = phi i64 [ 0, [[VECTOR_PH]] ], [ [[INDEX_NEXT:%.*]], [[LOOP_HEADER]] 
] +; CHECK-NEXT: [[TMP3:%.*]] = getelementptr i8, ptr [[SRC]], i64 [[INDEX]] +; CHECK-NEXT: [[WIDE_LOAD:%.*]] = load <4 x i8>, ptr [[TMP3]], align 1 +; CHECK-NEXT: [[TMP4:%.*]] = icmp eq <4 x i8> [[WIDE_LOAD]], zeroinitializer +; CHECK-NEXT: [[INDEX_NEXT]] = add nuw i64 [[INDEX]], 4 +; CHECK-NEXT: [[TMP5:%.*]] = freeze <4 x i1> [[TMP4]] +; CHECK-NEXT: [[TMP6:%.*]] = call i1 @llvm.vector.reduce.or.v4i1(<4 x i1> [[TMP5]]) +; CHECK-NEXT: [[TMP7:%.*]] = icmp eq i64 [[INDEX_NEXT]], [[IV_NEXT]] +; CHECK-NEXT: [[TMP8:%.*]] = or i1 [[TMP6]], [[TMP7]] +; CHECK-NEXT: br i1 [[TMP8]], label [[MIDDLE_SPLIT:%.*]], label [[LOOP_HEADER]], !llvm.loop [[LOOP11:![0-9]+]] +; CHECK: middle.split: +; CHECK-NEXT: br i1 [[TMP6]], label [[VECTOR_EARLY_EXIT:%.*]], label [[LOOP_LATCH:%.*]] +; CHECK: middle.block: +; CHECK-NEXT: [[CMP_N:%.*]] = icmp eq i64 [[TMP2]], [[IV_NEXT]] +; CHECK-NEXT: br i1 [[CMP_N]], label [[EXIT_LOOPEXIT:%.*]], label [[SCALAR_PH]] +; CHECK: vector.early.exit: +; CHECK-NEXT: [[TMP9:%.*]] = call i64 @llvm.experimental.cttz.elts.i64.v4i1(<4 x i1> [[TMP4]], i1 true) +; CHECK-NEXT: [[TMP10:%.*]] = add i64 [[INDEX]], [[TMP9]] +; CHECK-NEXT: br label [[EXIT_LOOPEXIT]] +; CHECK: scalar.ph: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ [[IV_NEXT]], [[LOOP_LATCH]] ], [ 0, [[PH]] ] +; CHECK-NEXT: br label [[LOOP_HEADER1:%.*]] ; CHECK: loop.header: -; CHECK-NEXT: [[IV:%.*]] = phi i64 [ [[IV_NEXT:%.*]], [[LOOP_LATCH:%.*]] ], [ 0, [[PH]] ] -; CHECK-NEXT: [[GEP_SRC_I:%.*]] = getelementptr i8, ptr [[SRC]], i64 [[IV]] +; CHECK-NEXT: [[IV1:%.*]] = phi i64 [ [[IV_NEXT1:%.*]], [[LOOP_LATCH1:%.*]] ], [ [[IV]], [[SCALAR_PH]] ] +; CHECK-NEXT: [[GEP_SRC_I:%.*]] = getelementptr i8, ptr [[SRC]], i64 [[IV1]] ; CHECK-NEXT: [[L:%.*]] = load i8, ptr [[GEP_SRC_I]], align 1 ; CHECK-NEXT: [[C_1:%.*]] = icmp eq i8 [[L]], 0 -; CHECK-NEXT: br i1 [[C_1]], label [[EXIT_LOOPEXIT:%.*]], label [[LOOP_LATCH]] +; CHECK-NEXT: br i1 [[C_1]], label [[EXIT_LOOPEXIT]], label [[LOOP_LATCH1]] ; CHECK: loop.latch: -; CHECK-NEXT: [[IV_NEXT]] = add i64 [[IV]], 1 -; CHECK-NEXT: [[EC:%.*]] = icmp eq i64 [[IV]], [[N_EXT]] -; CHECK-NEXT: br i1 [[EC]], label [[EXIT_LOOPEXIT]], label [[LOOP_HEADER]] +; CHECK-NEXT: [[IV_NEXT1]] = add i64 [[IV1]], 1 +; CHECK-NEXT: [[EC:%.*]] = icmp eq i64 [[IV1]], [[N_EXT]] +; CHECK-NEXT: br i1 [[EC]], label [[EXIT_LOOPEXIT]], label [[LOOP_HEADER1]], !llvm.loop [[LOOP12:![0-9]+]] ; CHECK: exit.loopexit: -; CHECK-NEXT: [[RES_PH:%.*]] = phi i64 [ [[IV]], [[LOOP_HEADER]] ], [ 0, [[LOOP_LATCH]] ] +; CHECK-NEXT: [[RES_PH:%.*]] = phi i64 [ [[IV1]], [[LOOP_HEADER1]] ], [ 0, [[LOOP_LATCH1]] ], [ 0, [[LOOP_LATCH]] ], [ [[TMP10]], [[VECTOR_EARLY_EXIT]] ] ; CHECK-NEXT: br label [[EXIT]] ; CHECK: exit: ; CHECK-NEXT: [[RES:%.*]] = phi i64 [ -1, [[ENTRY:%.*]] ], [ -2, [[THEN]] ], [ [[RES_PH]], [[EXIT_LOOPEXIT]] ] @@ -609,4 +640,6 @@ exit: ; CHECK: [[LOOP8]] = distinct !{[[LOOP8]], [[META2]], [[META1]]} ; CHECK: [[LOOP9]] = distinct !{[[LOOP9]], [[META1]], [[META2]]} ; CHECK: [[LOOP10]] = distinct !{[[LOOP10]], [[META2]], [[META1]]} +; CHECK: [[LOOP11]] = distinct !{[[LOOP11]], [[META1]], [[META2]]} +; CHECK: [[LOOP12]] = distinct !{[[LOOP12]], [[META2]], [[META1]]} ;. 
diff --git a/llvm/test/Transforms/NewGVN/ptrtoaddr.ll b/llvm/test/Transforms/NewGVN/ptrtoaddr.ll new file mode 100644 index 0000000..e51b42a --- /dev/null +++ b/llvm/test/Transforms/NewGVN/ptrtoaddr.ll @@ -0,0 +1,29 @@ +; NOTE: Assertions have been autogenerated by utils/update_test_checks.py UTC_ARGS: --version 6 +; RUN: opt -S -passes=newgvn < %s | FileCheck %s + +define i64 @ptrtoaddr_same(ptr %p) { +; CHECK-LABEL: define i64 @ptrtoaddr_same( +; CHECK-SAME: ptr [[P:%.*]]) { +; CHECK-NEXT: ret i64 0 +; + %i = ptrtoaddr ptr %p to i64 + %j = ptrtoaddr ptr %p to i64 + %sub = sub i64 %i, %j + ret i64 %sub +} + +; Note that unlike for ptrtoint, it's not possible for ptrtoaddr to differ +; in result type for the same input. +define i64 @ptrtoaddr_different(ptr %p, ptr %p2) { +; CHECK-LABEL: define i64 @ptrtoaddr_different( +; CHECK-SAME: ptr [[P:%.*]], ptr [[P2:%.*]]) { +; CHECK-NEXT: [[I:%.*]] = ptrtoaddr ptr [[P]] to i64 +; CHECK-NEXT: [[J:%.*]] = ptrtoaddr ptr [[P2]] to i64 +; CHECK-NEXT: [[SUB:%.*]] = sub i64 [[I]], [[J]] +; CHECK-NEXT: ret i64 [[SUB]] +; + %i = ptrtoaddr ptr %p to i64 + %j = ptrtoaddr ptr %p2 to i64 + %sub = sub i64 %i, %j + ret i64 %sub +} diff --git a/llvm/test/Transforms/PhaseOrdering/switch-to-arithmetic-inlining.ll b/llvm/test/Transforms/PhaseOrdering/switch-to-arithmetic-inlining.ll index caf7a80..7c9888f 100644 --- a/llvm/test/Transforms/PhaseOrdering/switch-to-arithmetic-inlining.ll +++ b/llvm/test/Transforms/PhaseOrdering/switch-to-arithmetic-inlining.ll @@ -436,10 +436,11 @@ bb104: ; preds = %bb102 br label %bb105 } +; Make sure the call is inlined. define i8 @test2(i8 %x) { ; CHECK-LABEL: define range(i8 0, 53) i8 @test2( ; CHECK-SAME: i8 [[X:%.*]]) local_unnamed_addr #[[ATTR0]] { -; CHECK-NEXT: [[CALL:%.*]] = tail call i8 @test(i8 [[X]]) +; CHECK-NEXT: [[CALL:%.*]] = tail call range(i8 0, 53) i8 @llvm.umin.i8(i8 [[X]], i8 52) ; CHECK-NEXT: ret i8 [[CALL]] ; %call = call i8 @test(i8 %x) diff --git a/llvm/test/Transforms/PreISelIntrinsicLowering/AArch64/expand-exp.ll b/llvm/test/Transforms/PreISelIntrinsicLowering/AArch64/expand-exp.ll index 9acc6d6..09f583f 100644 --- a/llvm/test/Transforms/PreISelIntrinsicLowering/AArch64/expand-exp.ll +++ b/llvm/test/Transforms/PreISelIntrinsicLowering/AArch64/expand-exp.ll @@ -39,5 +39,4 @@ declare <4 x float> @llvm.exp.v4f32(<4 x float>) #0 declare <vscale x 4 x float> @llvm.exp.nxv4f32(<vscale x 4 x float>) #0 ; CHECK: attributes #0 = { nocallback nofree nosync nounwind speculatable willreturn memory(none) } -; CHECK-NEXT: attributes #1 = { nocallback nofree nosync nounwind willreturn memory(none) } attributes #0 = { nocallback nofree nosync nounwind speculatable willreturn memory(none) } diff --git a/llvm/test/Transforms/SimplifyCFG/merge-calls-alloc-token.ll b/llvm/test/Transforms/SimplifyCFG/merge-calls-alloc-token.ll index 9bbe3eb..42d3dcc 100644 --- a/llvm/test/Transforms/SimplifyCFG/merge-calls-alloc-token.ll +++ b/llvm/test/Transforms/SimplifyCFG/merge-calls-alloc-token.ll @@ -97,8 +97,8 @@ if.end: ret ptr %x.0 } -!0 = !{!"int"} -!1 = !{!"char[4]"} +!0 = !{!"int", i1 0} +!1 = !{!"char[4]", i1 0} ;. -; CHECK: [[META0]] = !{!"int"} +; CHECK: [[META0]] = !{!"int", i1 false} ;. 
diff --git a/llvm/test/Transforms/SimplifyCFG/switch-transformations-no-lut.ll b/llvm/test/Transforms/SimplifyCFG/switch-transformations-no-lut.ll index c9063d3..25267dc 100644 --- a/llvm/test/Transforms/SimplifyCFG/switch-transformations-no-lut.ll +++ b/llvm/test/Transforms/SimplifyCFG/switch-transformations-no-lut.ll @@ -1,5 +1,5 @@ ; NOTE: Assertions have been autogenerated by utils/update_test_checks.py UTC_ARGS: --version 5 -; RUN: opt -S -passes='simplifycfg' < %s | FileCheck %s --check-prefix=OPTNOLUT +; RUN: opt -S -passes='simplifycfg<switch-to-arithmetic>' < %s | FileCheck %s --check-prefix=OPTNOLUT ; RUN: %if amdgpu-registered-target %{ opt -mtriple=amdgcn--amdpal -S -passes='simplifycfg<switch-to-lookup>' < %s | FileCheck %s --check-prefix=TTINOLUT %} ; target datalayout = "e-m:e-i64:64-f80:128-n8:16:32:64-S128" @@ -7,23 +7,11 @@ target datalayout = "e-m:e-i64:64-f80:128-n8:16:32:64-S128" define i32 @linear_transform_with_default(i32 %x) { ; OPTNOLUT-LABEL: define i32 @linear_transform_with_default( ; OPTNOLUT-SAME: i32 [[X:%.*]]) { -; OPTNOLUT-NEXT: [[ENTRY:.*]]: -; OPTNOLUT-NEXT: switch i32 [[X]], label %[[END:.*]] [ -; OPTNOLUT-NEXT: i32 0, label %[[CASE0:.*]] -; OPTNOLUT-NEXT: i32 1, label %[[CASE1:.*]] -; OPTNOLUT-NEXT: i32 2, label %[[CASE2:.*]] -; OPTNOLUT-NEXT: i32 3, label %[[CASE3:.*]] -; OPTNOLUT-NEXT: ] -; OPTNOLUT: [[CASE0]]: -; OPTNOLUT-NEXT: br label %[[END]] -; OPTNOLUT: [[CASE1]]: -; OPTNOLUT-NEXT: br label %[[END]] -; OPTNOLUT: [[CASE2]]: -; OPTNOLUT-NEXT: br label %[[END]] -; OPTNOLUT: [[CASE3]]: -; OPTNOLUT-NEXT: br label %[[END]] -; OPTNOLUT: [[END]]: -; OPTNOLUT-NEXT: [[IDX:%.*]] = phi i32 [ 1, %[[CASE0]] ], [ 4, %[[CASE1]] ], [ 7, %[[CASE2]] ], [ 10, %[[CASE3]] ], [ 13, %[[ENTRY]] ] +; OPTNOLUT-NEXT: [[ENTRY:.*:]] +; OPTNOLUT-NEXT: [[TMP0:%.*]] = icmp ult i32 [[X]], 4 +; OPTNOLUT-NEXT: [[SWITCH_IDX_MULT:%.*]] = mul nsw i32 [[X]], 3 +; OPTNOLUT-NEXT: [[SWITCH_OFFSET:%.*]] = add nsw i32 [[SWITCH_IDX_MULT]], 1 +; OPTNOLUT-NEXT: [[IDX:%.*]] = select i1 [[TMP0]], i32 [[SWITCH_OFFSET]], i32 13 ; OPTNOLUT-NEXT: ret i32 [[IDX]] ; ; TTINOLUT-LABEL: define i32 @linear_transform_with_default( @@ -138,26 +126,8 @@ end: define i32 @linear_transform_no_default(i32 %x) { ; OPTNOLUT-LABEL: define i32 @linear_transform_no_default( ; OPTNOLUT-SAME: i32 [[X:%.*]]) { -; OPTNOLUT-NEXT: [[ENTRY:.*]]: -; OPTNOLUT-NEXT: switch i32 [[X]], label %[[DEFAULT:.*]] [ -; OPTNOLUT-NEXT: i32 0, label %[[END:.*]] -; OPTNOLUT-NEXT: i32 1, label %[[CASE1:.*]] -; OPTNOLUT-NEXT: i32 2, label %[[CASE2:.*]] -; OPTNOLUT-NEXT: i32 3, label %[[CASE3:.*]] -; OPTNOLUT-NEXT: i32 4, label %[[CASE4:.*]] -; OPTNOLUT-NEXT: ] -; OPTNOLUT: [[CASE1]]: -; OPTNOLUT-NEXT: br label %[[END]] -; OPTNOLUT: [[CASE2]]: -; OPTNOLUT-NEXT: br label %[[END]] -; OPTNOLUT: [[CASE3]]: -; OPTNOLUT-NEXT: br label %[[END]] -; OPTNOLUT: [[CASE4]]: -; OPTNOLUT-NEXT: br label %[[END]] -; OPTNOLUT: [[DEFAULT]]: -; OPTNOLUT-NEXT: unreachable -; OPTNOLUT: [[END]]: -; OPTNOLUT-NEXT: [[SWITCH_IDX_MULT:%.*]] = phi i32 [ 3, %[[CASE1]] ], [ 6, %[[CASE2]] ], [ 9, %[[CASE3]] ], [ 12, %[[CASE4]] ], [ 0, %[[ENTRY]] ] +; OPTNOLUT-NEXT: [[ENTRY:.*:]] +; OPTNOLUT-NEXT: [[SWITCH_IDX_MULT:%.*]] = mul nsw i32 [[X]], 3 ; OPTNOLUT-NEXT: ret i32 [[SWITCH_IDX_MULT]] ; ; TTINOLUT-LABEL: define i32 @linear_transform_no_default( @@ -350,18 +320,9 @@ end: define i32 @single_value_withdefault(i32 %x) { ; OPTNOLUT-LABEL: define i32 @single_value_withdefault( ; OPTNOLUT-SAME: i32 [[X:%.*]]) { -; OPTNOLUT-NEXT: [[ENTRY:.*]]: -; OPTNOLUT-NEXT: switch i32 
[[X]], label %[[DEFAULT:.*]] [ -; OPTNOLUT-NEXT: i32 0, label %[[END:.*]] -; OPTNOLUT-NEXT: i32 1, label %[[END]] -; OPTNOLUT-NEXT: i32 2, label %[[END]] -; OPTNOLUT-NEXT: i32 3, label %[[END]] -; OPTNOLUT-NEXT: i32 4, label %[[END]] -; OPTNOLUT-NEXT: ] -; OPTNOLUT: [[DEFAULT]]: -; OPTNOLUT-NEXT: br label %[[END]] -; OPTNOLUT: [[END]]: -; OPTNOLUT-NEXT: [[DOT:%.*]] = phi i32 [ 3, %[[DEFAULT]] ], [ 2, %[[ENTRY]] ], [ 2, %[[ENTRY]] ], [ 2, %[[ENTRY]] ], [ 2, %[[ENTRY]] ], [ 2, %[[ENTRY]] ] +; OPTNOLUT-NEXT: [[ENTRY:.*:]] +; OPTNOLUT-NEXT: [[TMP0:%.*]] = icmp ult i32 [[X]], 5 +; OPTNOLUT-NEXT: [[DOT:%.*]] = select i1 [[TMP0]], i32 2, i32 3 ; OPTNOLUT-NEXT: ret i32 [[DOT]] ; ; TTINOLUT-LABEL: define i32 @single_value_withdefault( @@ -401,18 +362,9 @@ end: define i32 @single_value_no_jump_tables(i32 %x) "no-jump-tables"="true" { ; OPTNOLUT-LABEL: define i32 @single_value_no_jump_tables( ; OPTNOLUT-SAME: i32 [[X:%.*]]) #[[ATTR0:[0-9]+]] { -; OPTNOLUT-NEXT: [[ENTRY:.*]]: -; OPTNOLUT-NEXT: switch i32 [[X]], label %[[DEFAULT:.*]] [ -; OPTNOLUT-NEXT: i32 0, label %[[END:.*]] -; OPTNOLUT-NEXT: i32 1, label %[[END]] -; OPTNOLUT-NEXT: i32 2, label %[[END]] -; OPTNOLUT-NEXT: i32 3, label %[[END]] -; OPTNOLUT-NEXT: i32 4, label %[[END]] -; OPTNOLUT-NEXT: ] -; OPTNOLUT: [[DEFAULT]]: -; OPTNOLUT-NEXT: br label %[[END]] -; OPTNOLUT: [[END]]: -; OPTNOLUT-NEXT: [[IDX:%.*]] = phi i32 [ 3, %[[DEFAULT]] ], [ 2, %[[ENTRY]] ], [ 2, %[[ENTRY]] ], [ 2, %[[ENTRY]] ], [ 2, %[[ENTRY]] ], [ 2, %[[ENTRY]] ] +; OPTNOLUT-NEXT: [[ENTRY:.*:]] +; OPTNOLUT-NEXT: [[TMP0:%.*]] = icmp ult i32 [[X]], 5 +; OPTNOLUT-NEXT: [[IDX:%.*]] = select i1 [[TMP0]], i32 2, i32 3 ; OPTNOLUT-NEXT: ret i32 [[IDX]] ; ; TTINOLUT-LABEL: define i32 @single_value_no_jump_tables( @@ -449,6 +401,60 @@ end: ret i32 %idx } +define i1 @single_value_with_mask(i32 %x) { +; OPTNOLUT-LABEL: define i1 @single_value_with_mask( +; OPTNOLUT-SAME: i32 [[X:%.*]]) { +; OPTNOLUT-NEXT: [[ENTRY:.*]]: +; OPTNOLUT-NEXT: switch i32 [[X]], label %[[DEFAULT:.*]] [ +; OPTNOLUT-NEXT: i32 18, label %[[END:.*]] +; OPTNOLUT-NEXT: i32 21, label %[[END]] +; OPTNOLUT-NEXT: i32 48, label %[[END]] +; OPTNOLUT-NEXT: i32 16, label %[[END]] +; OPTNOLUT-NEXT: ] +; OPTNOLUT: [[DEFAULT]]: +; OPTNOLUT-NEXT: [[CMP:%.*]] = icmp eq i32 [[X]], 80 +; OPTNOLUT-NEXT: [[SEL:%.*]] = select i1 [[CMP]], i1 false, i1 true +; OPTNOLUT-NEXT: br label %[[END]] +; OPTNOLUT: [[END]]: +; OPTNOLUT-NEXT: [[RES:%.*]] = phi i1 [ false, %[[ENTRY]] ], [ false, %[[ENTRY]] ], [ false, %[[ENTRY]] ], [ false, %[[ENTRY]] ], [ [[SEL]], %[[DEFAULT]] ] +; OPTNOLUT-NEXT: ret i1 [[RES]] +; +; TTINOLUT-LABEL: define i1 @single_value_with_mask( +; TTINOLUT-SAME: i32 [[X:%.*]]) { +; TTINOLUT-NEXT: [[ENTRY:.*]]: +; TTINOLUT-NEXT: [[SWITCH_TABLEIDX:%.*]] = sub i32 [[X]], 16 +; TTINOLUT-NEXT: [[TMP0:%.*]] = icmp ult i32 [[SWITCH_TABLEIDX]], 33 +; TTINOLUT-NEXT: [[SWITCH_MASKINDEX:%.*]] = zext i32 [[SWITCH_TABLEIDX]] to i64 +; TTINOLUT-NEXT: [[SWITCH_SHIFTED:%.*]] = lshr i64 4294967333, [[SWITCH_MASKINDEX]] +; TTINOLUT-NEXT: [[SWITCH_LOBIT:%.*]] = trunc i64 [[SWITCH_SHIFTED]] to i1 +; TTINOLUT-NEXT: [[OR_COND:%.*]] = select i1 [[TMP0]], i1 [[SWITCH_LOBIT]], i1 false +; TTINOLUT-NEXT: br i1 [[OR_COND]], label %[[END:.*]], label %[[DEFAULT:.*]] +; TTINOLUT: [[DEFAULT]]: +; TTINOLUT-NEXT: [[CMP:%.*]] = icmp eq i32 [[X]], 80 +; TTINOLUT-NEXT: [[SEL:%.*]] = select i1 [[CMP]], i1 false, i1 true +; TTINOLUT-NEXT: br label %[[END]] +; TTINOLUT: [[END]]: +; TTINOLUT-NEXT: [[RES:%.*]] = phi i1 [ [[SEL]], %[[DEFAULT]] ], [ false, %[[ENTRY]] ] 
+; TTINOLUT-NEXT: ret i1 [[RES]] +; +entry: + switch i32 %x, label %default [ + i32 18, label %end + i32 21, label %end + i32 48, label %end + i32 16, label %end + ] + +default: + %cmp = icmp eq i32 %x, 80 + %sel = select i1 %cmp, i1 false, i1 true + br label %end + +end: + %res = phi i1 [ false, %entry ], [ false, %entry ], [ false, %entry ], [ false, %entry ], [ %sel, %default ] + ret i1 %res +} + define i32 @lookup_table(i32 %x) { ; OPTNOLUT-LABEL: define i32 @lookup_table( ; OPTNOLUT-SAME: i32 [[X:%.*]]) { diff --git a/llvm/test/tools/llvm-exegesis/AArch64/no-aliasing-ld-str.s b/llvm/test/tools/llvm-exegesis/AArch64/no-aliasing-ld-str.s index da83c54..8348c97 100644 --- a/llvm/test/tools/llvm-exegesis/AArch64/no-aliasing-ld-str.s +++ b/llvm/test/tools/llvm-exegesis/AArch64/no-aliasing-ld-str.s @@ -1,10 +1,12 @@ REQUIRES: aarch64-registered-target -// Flakey on SVE buildbots, disabled pending invesgitation. +// This will sometimes fail with "Not all operands were initialized by the snippet generator for...". UNSUPPORTED: target={{.*}} RUN: llvm-exegesis -mtriple=aarch64 -mcpu=neoverse-v2 -mode=latency --dump-object-to-disk=%t.obj --opcode-name=FMOVWSr --benchmark-phase=assemble-measured-code 2>&1 RUN: llvm-objdump -d %t.obj > %t.s RUN: FileCheck %s < %t.s +// Start matching after the printed file path, as that may contain something that looks like a mnemonic. +CHECK: Disassembly of section .text: CHECK-NOT: ld{{[1-4]}} CHECK-NOT: st{{[1-4]}} diff --git a/llvm/test/tools/llvm-profgen/Inputs/coff-profile.exe b/llvm/test/tools/llvm-profgen/Inputs/coff-profile.exe Binary files differindex 309476a..a4c36a3 100644 --- a/llvm/test/tools/llvm-profgen/Inputs/coff-profile.exe +++ b/llvm/test/tools/llvm-profgen/Inputs/coff-profile.exe diff --git a/llvm/test/tools/llvm-profgen/Inputs/coff-profile.perfscript b/llvm/test/tools/llvm-profgen/Inputs/coff-profile.perfscript index ec5c8ff..29a8803 100644 --- a/llvm/test/tools/llvm-profgen/Inputs/coff-profile.perfscript +++ b/llvm/test/tools/llvm-profgen/Inputs/coff-profile.perfscript @@ -1,13 +1,13 @@ PERF_RECORD_MMAP2 5752/0: [0x7ff70a1b0000(0x640000) @ 0x1000 00:00 0 0]: r-xp c:\Users\haohaiwe\Desktop\coff-profile.exe - 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 - 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 
0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/-/X/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/-/X/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/-/X/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 - 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/-/X/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 - 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/-/X/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/-/X/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/-/X/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 - 
0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/-/X/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/-/X/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/-/X/A/0 - 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 - 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 
0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 - 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/-/X/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/-/X/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/-/X/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/-/X/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/-/X/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 - 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/-/X/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 - 0x7ff70a1b1415/0x7ff70a1b13b0/P/X/A/0 0x7ff70a1b1415/0x7ff70a1b13b0/M/X/A/0 0x7ff70a1b1415/0x7ff70a1b13b0/P/X/A/0 0x7ff70a1b1415/0x7ff70a1b13b0/P/X/A/0 0x7ff70a1b1415/0x7ff70a1b13b0/M/X/A/0 0x7ff70a1b1415/0x7ff70a1b13b0/P/X/A/0 0x7ff70a1b1415/0x7ff70a1b13b0/P/X/A/0 0x7ff70a1b1415/0x7ff70a1b13b0/M/X/A/0 0x7ff70a1b1415/0x7ff70a1b13b0/P/X/A/0 0x7ff70a1b1415/0x7ff70a1b13b0/M/X/A/0 0x7ff70a1b1415/0x7ff70a1b13b0/M/X/A/0 0x7ff70a1b1415/0x7ff70a1b13b0/P/X/A/0 0x7ff70a1b1415/0x7ff70a1b13b0/M/X/A/0 0x7ff70a1b1415/0x7ff70a1b13b0/M/X/A/0 0x7ff70a1b1415/0x7ff70a1b13b0/P/X/A/0 0x7ff70a1b1415/0x7ff70a1b13b0/P/X/A/0 0x7ff70a1b1415/0x7ff70a1b13b0/M/X/A/0 0x7ff70a1b1415/0x7ff70a1b13b0/P/X/A/0 0x7ff70a1b1415/0x7ff70a1b13b0/M/X/A/0 0x7ff70a1b1415/0x7ff70a1b13b0/M/X/A/0 0x7ff70a1b1415/0x7ff70a1b13b0/P/X/A/0 0x7ff70a1b1415/0x7ff70a1b13b0/M/X/A/0 0x7ff70a1b1415/0x7ff70a1b13b0/M/X/A/0 0x7ff70a1b1415/0x7ff70a1b13b0/P/X/A/0 0x7ff70a1b1415/0x7ff70a1b13b0/P/X/A/0 0x7ff70a1b1415/0x7ff70a1b13b0/M/X/A/0 
0x7ff70a1b1415/0x7ff70a1b13b0/P/X/A/0 0x7ff70a1b1415/0x7ff70a1b13b0/M/X/A/0 0x7ff70a1b1415/0x7ff70a1b13b0/M/X/A/0 0x7ff70a1b1415/0x7ff70a1b13b0/P/X/A/0 0x7ff70a1b1415/0x7ff70a1b13b0/P/X/A/0 0x7ff70a1b1415/0x7ff70a1b13b0/M/X/A/0 - 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/-/X/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/-/X/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 - 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 0x7ff70a1b1482/0x7ff70a1b1430/P/-/A/0 + 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 
0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 + 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/-/X/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/-/X/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/-/X/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 + 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/-/X/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 + 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/-/X/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/-/X/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 
0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/-/X/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 + 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/-/X/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/-/X/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/-/X/A/0 + 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 + 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 
0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 + 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/-/X/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/-/X/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/-/X/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/-/X/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/-/X/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 + 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/-/X/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 + 0x7ff70a1b1400/0x7ff70a1b13a0/P/X/A/0 0x7ff70a1b1400/0x7ff70a1b13a0/M/X/A/0 0x7ff70a1b1400/0x7ff70a1b13a0/P/X/A/0 0x7ff70a1b1400/0x7ff70a1b13a0/P/X/A/0 0x7ff70a1b1400/0x7ff70a1b13a0/M/X/A/0 0x7ff70a1b1400/0x7ff70a1b13a0/P/X/A/0 0x7ff70a1b1400/0x7ff70a1b13a0/P/X/A/0 0x7ff70a1b1400/0x7ff70a1b13a0/M/X/A/0 0x7ff70a1b1400/0x7ff70a1b13a0/P/X/A/0 0x7ff70a1b1400/0x7ff70a1b13a0/M/X/A/0 0x7ff70a1b1400/0x7ff70a1b13a0/M/X/A/0 0x7ff70a1b1400/0x7ff70a1b13a0/P/X/A/0 0x7ff70a1b1400/0x7ff70a1b13a0/M/X/A/0 0x7ff70a1b1400/0x7ff70a1b13a0/M/X/A/0 
0x7ff70a1b1400/0x7ff70a1b13a0/P/X/A/0 0x7ff70a1b1400/0x7ff70a1b13a0/P/X/A/0 0x7ff70a1b1400/0x7ff70a1b13a0/M/X/A/0 0x7ff70a1b1400/0x7ff70a1b13a0/P/X/A/0 0x7ff70a1b1400/0x7ff70a1b13a0/M/X/A/0 0x7ff70a1b1400/0x7ff70a1b13a0/M/X/A/0 0x7ff70a1b1400/0x7ff70a1b13a0/P/X/A/0 0x7ff70a1b1400/0x7ff70a1b13a0/M/X/A/0 0x7ff70a1b1400/0x7ff70a1b13a0/M/X/A/0 0x7ff70a1b1400/0x7ff70a1b13a0/P/X/A/0 0x7ff70a1b1400/0x7ff70a1b13a0/P/X/A/0 0x7ff70a1b1400/0x7ff70a1b13a0/M/X/A/0 0x7ff70a1b1400/0x7ff70a1b13a0/P/X/A/0 0x7ff70a1b1400/0x7ff70a1b13a0/M/X/A/0 0x7ff70a1b1400/0x7ff70a1b13a0/M/X/A/0 0x7ff70a1b1400/0x7ff70a1b13a0/P/X/A/0 0x7ff70a1b1400/0x7ff70a1b13a0/P/X/A/0 0x7ff70a1b1400/0x7ff70a1b13a0/M/X/A/0 + 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/-/X/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/-/X/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 + 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 0x7ff70a1b1461/0x7ff70a1b1410/P/-/A/0 diff --git a/llvm/test/tools/llvm-profgen/coff-profile.test b/llvm/test/tools/llvm-profgen/coff-profile.test index 5578f73..6411642 100644 --- a/llvm/test/tools/llvm-profgen/coff-profile.test +++ b/llvm/test/tools/llvm-profgen/coff-profile.test @@ -1,37 +1,77 @@ +; RUN: llvm-profgen --format=text --use-dwarf-correlation --perfscript=%S/Inputs/coff-profile.perfscript --binary=%S/Inputs/coff-profile.exe --output=%t +; RUN: FileCheck 
%s --input-file %t --check-prefix=DWARF ; RUN: llvm-profgen --format=text --perfscript=%S/Inputs/coff-profile.perfscript --binary=%S/Inputs/coff-profile.exe --output=%t -; RUN: FileCheck %s --input-file %t +; RUN: FileCheck %s --input-file %t --check-prefix=PROBE -CHECK: main:31837:0 -CHECK-NEXT: 0: 0 -CHECK-NEXT: 3.1: 0 -CHECK-NEXT: 3.2: 0 -CHECK-NEXT: 8: 0 -CHECK-NEXT: 65501: 0 -CHECK-NEXT: 1: ??$init@HG@MyNameSpace2@@YAXHPEAG@Z:0 -CHECK-NEXT: 1: 0 -CHECK-NEXT: 1.1: 0 -CHECK-NEXT: 1.2: 0 -CHECK-NEXT: 2: 0 -CHECK-NEXT: 65514: 0 -CHECK-NEXT: 4: ?work1@?$MyClass@GH@MyNameSpace1@@QEAAXQEAGH@Z:3193 -CHECK-NEXT: 0: ?work@?$MyClass@GH@MyNameSpace1@@AEAAXQEAGHH@Z:3193 -CHECK-NEXT: 1.1: 31 -CHECK-NEXT: 1.2: 31 -CHECK-NEXT: 2: 31 -CHECK-NEXT: 3: 31 -CHECK-NEXT: 65530: 0 -CHECK-NEXT: 5: ?work2@?$MyClass@GH@MyNameSpace1@@QEAAXQEAGH@Z:28644 -CHECK-NEXT: 0: ?work@?$MyClass@GH@MyNameSpace1@@AEAAXQEAGHH@Z:28644 -CHECK-NEXT: 1.1: 341 -CHECK-NEXT: 1.2: 341 -CHECK-NEXT: 2: 341 -CHECK-NEXT: 3: 341 -CHECK-NEXT: 65530: 0 -CHECK-NEXT: 7: ?print@MyNameSpace2@@YAXPEAGH@Z:0 -CHECK-NEXT: 1: 0 +DWARF: main:31341:0 +DWARF-NEXT: 0: 0 +DWARF-NEXT: 3: 0 +DWARF-NEXT: 3.1: 0 +DWARF-NEXT: 3.2: 0 +DWARF-NEXT: 8: 0 +DWARF-NEXT: 65501: 0 +DWARF-NEXT: 1: ??$init@HG@MyNameSpace2@@YAXHPEAG@Z:0 +DWARF-NEXT: 1: 0 +DWARF-NEXT: 1.1: 0 +DWARF-NEXT: 1.2: 0 +DWARF-NEXT: 2: 0 +DWARF-NEXT: 65514: 0 +DWARF-NEXT: 4: ?work1@?$MyClass@GH@MyNameSpace1@@QEAAXQEAGH@Z:3038 +DWARF-NEXT: 0: ?work@?$MyClass@GH@MyNameSpace1@@AEAAXQEAGHH@Z:3038 +DWARF-NEXT: 1.1: 31 +DWARF-NEXT: 1.2: 31 +DWARF-NEXT: 2: 31 +DWARF-NEXT: 3: 31 +DWARF-NEXT: 5: ?work2@?$MyClass@GH@MyNameSpace1@@QEAAXQEAGH@Z:28303 +DWARF-NEXT: 0: ?work@?$MyClass@GH@MyNameSpace1@@AEAAXQEAGHH@Z:28303 +DWARF-NEXT: 1.1: 341 +DWARF-NEXT: 1.2: 341 +DWARF-NEXT: 2: 341 +DWARF-NEXT: 3: 341 +DWARF-NEXT: 7: ?print@MyNameSpace2@@YAXPEAGH@Z:0 +DWARF-NEXT: 1: 0 + +PROBE: main:1116:0 +PROBE-NEXT: 1: 0 +PROBE-NEXT: 3: 0 +PROBE-NEXT: 4: 0 +PROBE-NEXT: 5: 0 +PROBE-NEXT: 8: 0 +PROBE-NEXT: 9: 0 +PROBE-NEXT: 2: ??$init@HG@MyNameSpace2@@YAXHPEAG@Z:0 +PROBE-NEXT: 1: 0 +PROBE-NEXT: 2: 0 +PROBE-NEXT: 3: 0 +PROBE-NEXT: 4: 0 +PROBE-NEXT: 5: 0 +PROBE-NEXT: 6: 0 +PROBE-NEXT: !CFGChecksum: 107105011060 +PROBE-NEXT: 6: ?work1@?$MyClass@GH@MyNameSpace1@@QEAAXQEAGH@Z:93 +PROBE-NEXT: 1: 0 +PROBE-NEXT: 2: ?work@?$MyClass@GH@MyNameSpace1@@AEAAXQEAGHH@Z:93 +PROBE-NEXT: 1: 0 +PROBE-NEXT: 2: 31 +PROBE-NEXT: 4: 31 +PROBE-NEXT: 5: 31 +PROBE-NEXT: !CFGChecksum: 107105011060 +PROBE-NEXT: !CFGChecksum: 281479271677951 +PROBE-NEXT: 7: ?work2@?$MyClass@GH@MyNameSpace1@@QEAAXQEAGH@Z:1023 +PROBE-NEXT: 2: ?work@?$MyClass@GH@MyNameSpace1@@AEAAXQEAGHH@Z:1023 +PROBE-NEXT: 2: 341 +PROBE-NEXT: 3: 0 +PROBE-NEXT: 4: 341 +PROBE-NEXT: 5: 341 +PROBE-NEXT: 6: 0 +PROBE-NEXT: !CFGChecksum: 107105011060 +PROBE-NEXT: !CFGChecksum: 281479271677951 +PROBE-NEXT: 10: ?print@MyNameSpace2@@YAXPEAGH@Z:0 +PROBE-NEXT: 1: 0 +PROBE-NEXT: 2: 0 +PROBE-NEXT: !CFGChecksum: 281479271677951 +PROBE-NEXT: !CFGChecksum: 1126005794311845 ; Original code -; clang-cl.exe -O2 -gdwarf -gline-tables-only coff-profile.cpp -fuse-ld=lld -Xclang -fdebug-info-for-profiling -link -debug:dwarf +; clang-cl.exe -O2 -gdwarf -gline-tables-only -fpseudo-probe-for-profiling coff-profile.cpp -fuse-ld=lld -Xclang -fdebug-info-for-profiling -link -debug:dwarf #include <stdio.h> diff --git a/llvm/tools/llvm-gpu-loader/amdhsa.cpp b/llvm/tools/llvm-gpu-loader/amdhsa.cpp index be1b6b7..5715058 100644 --- a/llvm/tools/llvm-gpu-loader/amdhsa.cpp +++ b/llvm/tools/llvm-gpu-loader/amdhsa.cpp @@ -192,7 
+192,7 @@ hsa_status_t launch_kernel(hsa_agent_t dev_agent, hsa_executable_t executable, // Initialize all the arguments (explicit and implicit) to zero, then set the // explicit arguments to the values created above. std::memset(args, 0, args_size); - std::memcpy(args, &kernel_args, sizeof(args_t)); + std::memcpy(args, &kernel_args, std::is_empty_v<args_t> ? 0 : sizeof(args_t)); // Initialize the necessary implicit arguments to the proper values. int dims = 1 + (params.num_blocks_y * params.num_threads_y != 1) + @@ -563,7 +563,7 @@ int load_amdhsa(int argc, const char **argv, const char **envp, void *image, // Save the return value and perform basic clean-up. int ret = *static_cast<int *>(host_ret); - end_args_t fini_args = {ret}; + end_args_t fini_args = {}; if (hsa_status_t err = launch_kernel( dev_agent, executable, kernargs_pool, coarsegrained_pool, queue, server, single_threaded_params, "_end.kd", fini_args, diff --git a/llvm/tools/llvm-gpu-loader/llvm-gpu-loader.h b/llvm/tools/llvm-gpu-loader/llvm-gpu-loader.h index ed34d0b..08861c2 100644 --- a/llvm/tools/llvm-gpu-loader/llvm-gpu-loader.h +++ b/llvm/tools/llvm-gpu-loader/llvm-gpu-loader.h @@ -41,9 +41,7 @@ struct start_args_t { }; /// The arguments to the '_end' kernel. -struct end_args_t { - int argc; -}; +struct end_args_t {}; /// Generic interface to load the \p image and launch execution of the _start /// kernel on the target device. Copies \p argc and \p argv to the device. diff --git a/llvm/tools/llvm-gpu-loader/nvptx.cpp b/llvm/tools/llvm-gpu-loader/nvptx.cpp index 781a045..82b4552 100644 --- a/llvm/tools/llvm-gpu-loader/nvptx.cpp +++ b/llvm/tools/llvm-gpu-loader/nvptx.cpp @@ -177,7 +177,7 @@ CUresult launch_kernel(CUmodule binary, CUstream stream, rpc::Server &server, handle_error(err); // Set up the arguments to the '_start' kernel on the GPU. - uint64_t args_size = sizeof(args_t); + uint64_t args_size = std::is_empty_v<args_t> ? 0 : sizeof(args_t); void *args_config[] = {CU_LAUNCH_PARAM_BUFFER_POINTER, &kernel_args, CU_LAUNCH_PARAM_BUFFER_SIZE, &args_size, CU_LAUNCH_PARAM_END}; @@ -342,7 +342,7 @@ int load_nvptx(int argc, const char **argv, const char **envp, void *image, if (CUresult err = cuStreamSynchronize(stream)) handle_error(err); - end_args_t fini_args = {host_ret}; + end_args_t fini_args = {}; if (CUresult err = launch_kernel(binary, stream, server, single_threaded_params, "_end", fini_args, print_resource_usage)) diff --git a/llvm/tools/llvm-profgen/ProfiledBinary.cpp b/llvm/tools/llvm-profgen/ProfiledBinary.cpp index 6865e36..94728ce 100644 --- a/llvm/tools/llvm-profgen/ProfiledBinary.cpp +++ b/llvm/tools/llvm-profgen/ProfiledBinary.cpp @@ -250,14 +250,12 @@ void ProfiledBinary::load() { DisassembleFunctionSet.insert_range(DisassembleFunctions); - if (auto *ELFObj = dyn_cast<ELFObjectFileBase>(Obj)) { - checkPseudoProbe(ELFObj); - if (UsePseudoProbes) - populateElfSymbolAddressList(ELFObj); + checkPseudoProbe(Obj); + if (UsePseudoProbes) + populateSymbolAddressList(Obj); - if (ShowDisassemblyOnly) - decodePseudoProbe(ELFObj); - } + if (ShowDisassemblyOnly) + decodePseudoProbe(Obj); // Disassemble the text sections. 
disassemble(Obj); @@ -417,7 +415,7 @@ void ProfiledBinary::setPreferredTextSegmentAddresses(const ObjectFile *Obj) { llvm_unreachable("invalid object format"); } -void ProfiledBinary::checkPseudoProbe(const ELFObjectFileBase *Obj) { +void ProfiledBinary::checkPseudoProbe(const ObjectFile *Obj) { if (UseDwarfCorrelation) return; @@ -440,7 +438,7 @@ void ProfiledBinary::checkPseudoProbe(const ELFObjectFileBase *Obj) { UsePseudoProbes = HasProbeDescSection && HasPseudoProbeSection; } -void ProfiledBinary::decodePseudoProbe(const ELFObjectFileBase *Obj) { +void ProfiledBinary::decodePseudoProbe(const ObjectFile *Obj) { if (!UsePseudoProbes) return; @@ -511,7 +509,7 @@ void ProfiledBinary::decodePseudoProbe(const ELFObjectFileBase *Obj) { void ProfiledBinary::decodePseudoProbe() { OwningBinary<Binary> OBinary = unwrapOrError(createBinary(Path), Path); Binary &ExeBinary = *OBinary.getBinary(); - auto *Obj = cast<ELFObjectFileBase>(&ExeBinary); + auto *Obj = cast<ObjectFile>(&ExeBinary); decodePseudoProbe(Obj); } @@ -809,8 +807,7 @@ void ProfiledBinary::checkUseFSDiscriminator( } } -void ProfiledBinary::populateElfSymbolAddressList( - const ELFObjectFileBase *Obj) { +void ProfiledBinary::populateSymbolAddressList(const ObjectFile *Obj) { // Create a mapping from virtual address to symbol GUID and the other way // around. StringRef FileName = Obj->getFileName(); diff --git a/llvm/tools/llvm-profgen/ProfiledBinary.h b/llvm/tools/llvm-profgen/ProfiledBinary.h index e82fbab..5a814b7 100644 --- a/llvm/tools/llvm-profgen/ProfiledBinary.h +++ b/llvm/tools/llvm-profgen/ProfiledBinary.h @@ -228,19 +228,19 @@ class ProfiledBinary { // A list of binary functions that have samples. std::unordered_set<const BinaryFunction *> ProfiledFunctions; - // GUID to Elf symbol start address map + // GUID to symbol start address map DenseMap<uint64_t, uint64_t> SymbolStartAddrs; // These maps are for temporary use of warning diagnosis. DenseSet<int64_t> AddrsWithMultipleSymbols; DenseSet<std::pair<uint64_t, uint64_t>> AddrsWithInvalidInstruction; - // Start address to Elf symbol GUID map + // Start address to symbol GUID map std::unordered_multimap<uint64_t, uint64_t> StartAddrToSymMap; // An ordered map of mapping function's start address to function range - // relevant info. Currently to determine if the offset of ELF is the start of - // a real function, we leverage the function range info from DWARF. + // relevant info. Currently to determine if the offset of ELF/COFF is the + // start of a real function, we leverage the function range info from DWARF. std::map<uint64_t, FuncRange> StartAddrToFuncRangeMap; // Address to context location map. Used to expand the context. @@ -335,9 +335,9 @@ class ProfiledBinary { void setPreferredTextSegmentAddresses(const object::COFFObjectFile *Obj, StringRef FileName); - void checkPseudoProbe(const object::ELFObjectFileBase *Obj); + void checkPseudoProbe(const object::ObjectFile *Obj); - void decodePseudoProbe(const object::ELFObjectFileBase *Obj); + void decodePseudoProbe(const object::ObjectFile *Obj); void checkUseFSDiscriminator( const object::ObjectFile *Obj, @@ -353,8 +353,8 @@ class ProfiledBinary { // Load debug info from DWARF unit. void loadSymbolsFromDWARFUnit(DWARFUnit &CompilationUnit); - // Create elf symbol to its start address mapping. - void populateElfSymbolAddressList(const object::ELFObjectFileBase *O); + // Create symbol to its start address mapping. 
+ void populateSymbolAddressList(const object::ObjectFile *O); // A function may be split into multiple non-contiguous address ranges. We use // this to set whether the start of a function range is the real entry of the diff --git a/llvm/unittests/BinaryFormat/DwarfTest.cpp b/llvm/unittests/BinaryFormat/DwarfTest.cpp index 684e59f..f4519f6 100644 --- a/llvm/unittests/BinaryFormat/DwarfTest.cpp +++ b/llvm/unittests/BinaryFormat/DwarfTest.cpp @@ -219,4 +219,77 @@ TEST(DwarfTest, lname) { EXPECT_EQ(roundtrip(DW_LANG_##NAME), DW_LANG_##NAME); #include "llvm/BinaryFormat/Dwarf.def" } + +TEST(DwarfTest, lname_getSourceLanguageName) { + // Some basics. + EXPECT_EQ(getSourceLanguageName("DW_LNAME_Ada"), DW_LNAME_Ada); + EXPECT_EQ(getSourceLanguageName("DW_LNAME_Metal"), DW_LNAME_Metal); + + // Test invalid input. + EXPECT_EQ(getSourceLanguageName(""), 0U); + EXPECT_EQ(getSourceLanguageName("blah"), 0U); + EXPECT_EQ(getSourceLanguageName("DW_LNAME__something_unlikely"), 0U); + EXPECT_EQ(getSourceLanguageName("DW_LANG_C"), 0U); + + // Test that we cover all DW_LNAME_ names. +#define xstr(X) #X +#define HANDLE_DW_LNAME(ID, NAME, DESC, LOWER_BOUND) \ + EXPECT_EQ(getSourceLanguageName(xstr(DW_LNAME_##NAME)), DW_LNAME_##NAME); +#include "llvm/BinaryFormat/Dwarf.def" +} + +TEST(DwarfTest, lname_SourceLanguageNameString) { + // Some basics. + EXPECT_EQ(SourceLanguageNameString(DW_LNAME_C_plus_plus), + "DW_LNAME_C_plus_plus"); + EXPECT_EQ(SourceLanguageNameString(DW_LNAME_CPP_for_OpenCL), + "DW_LNAME_CPP_for_OpenCL"); + + // Test invalid input. + EXPECT_EQ(SourceLanguageNameString(static_cast<SourceLanguageName>(0)), ""); + + // Test that we cover all DW_LNAME_ names. +#define xstr(X) #X +#define HANDLE_DW_LNAME(ID, NAME, DESC, LOWER_BOUND) \ + EXPECT_EQ(SourceLanguageNameString(DW_LNAME_##NAME), xstr(DW_LNAME_##NAME)); +#include "llvm/BinaryFormat/Dwarf.def" +} + +TEST(DWARFDebugInfo, TestLanguageDescription_Versioned) { + // Tests for the llvm::dwarf::LanguageDescription API that + // takes a name *and* a version. + + // Unknown language. + EXPECT_EQ( + llvm::dwarf::LanguageDescription(static_cast<SourceLanguageName>(0)), + "Unknown"); + + EXPECT_EQ( + llvm::dwarf::LanguageDescription(static_cast<SourceLanguageName>(0), 0), + "Unknown"); + + // Test that specifying an invalid version falls back to a valid language name + // regardless. + EXPECT_EQ(llvm::dwarf::LanguageDescription(DW_LNAME_ObjC, 0), "Objective C"); + EXPECT_EQ(llvm::dwarf::LanguageDescription(DW_LNAME_Julia, 0), "Julia"); + + // Check some versions. + EXPECT_EQ(llvm::dwarf::LanguageDescription(DW_LNAME_C_plus_plus, 199711), + "C++98"); + EXPECT_EQ(llvm::dwarf::LanguageDescription(DW_LNAME_C_plus_plus, 201402), + "C++14"); + + // Versions round up. + EXPECT_EQ(llvm::dwarf::LanguageDescription(DW_LNAME_C_plus_plus, 201400), + "C++14"); + + // Version 0 for C and C++ is an unversioned name. + EXPECT_EQ(llvm::dwarf::LanguageDescription(DW_LNAME_C, 0), "C (K&R and ISO)"); + EXPECT_EQ(llvm::dwarf::LanguageDescription(DW_LNAME_C_plus_plus, 0), + "ISO C++"); + + // Version 0 for other versioned languages may not be the unversioned name.
+ EXPECT_EQ(llvm::dwarf::LanguageDescription(DW_LNAME_Fortran, 0), + "FORTRAN 77"); +} } // end namespace diff --git a/llvm/unittests/CodeGen/GlobalISel/LegalizerInfoTest.cpp b/llvm/unittests/CodeGen/GlobalISel/LegalizerInfoTest.cpp index 988e307..7340f56 100644 --- a/llvm/unittests/CodeGen/GlobalISel/LegalizerInfoTest.cpp +++ b/llvm/unittests/CodeGen/GlobalISel/LegalizerInfoTest.cpp @@ -480,18 +480,21 @@ TEST(LegalizerInfoTest, MMOAlignment) { LegacyInfo.computeTables(); - EXPECT_ACTION(Legal, 0, LLT(), - LegalityQuery(G_LOAD, {s32, p0}, - LegalityQuery::MemDesc{ - s32, 32, AtomicOrdering::NotAtomic})); - EXPECT_ACTION(Unsupported, 0, LLT(), - LegalityQuery(G_LOAD, {s32, p0}, - LegalityQuery::MemDesc{ - s32, 16, AtomicOrdering::NotAtomic })); - EXPECT_ACTION(Unsupported, 0, LLT(), - LegalityQuery(G_LOAD, {s32, p0}, - LegalityQuery::MemDesc{ - s32, 8, AtomicOrdering::NotAtomic})); + EXPECT_ACTION( + Legal, 0, LLT(), + LegalityQuery(G_LOAD, {s32, p0}, + LegalityQuery::MemDesc{s32, 32, AtomicOrdering::NotAtomic, + AtomicOrdering::NotAtomic})); + EXPECT_ACTION( + Unsupported, 0, LLT(), + LegalityQuery(G_LOAD, {s32, p0}, + LegalityQuery::MemDesc{s32, 16, AtomicOrdering::NotAtomic, + AtomicOrdering::NotAtomic})); + EXPECT_ACTION( + Unsupported, 0, LLT(), + LegalityQuery(G_LOAD, {s32, p0}, + LegalityQuery::MemDesc{s32, 8, AtomicOrdering::NotAtomic, + AtomicOrdering::NotAtomic})); } // Test that the maximum supported alignment value isn't truncated @@ -506,14 +509,17 @@ TEST(LegalizerInfoTest, MMOAlignment) { LegacyInfo.computeTables(); - EXPECT_ACTION(Legal, 0, LLT(), - LegalityQuery(G_LOAD, {s32, p0}, - LegalityQuery::MemDesc{s32, - MaxAlignInBits, AtomicOrdering::NotAtomic})); - EXPECT_ACTION(Unsupported, 0, LLT(), - LegalityQuery(G_LOAD, {s32, p0}, - LegalityQuery::MemDesc{ - s32, 8, AtomicOrdering::NotAtomic })); + EXPECT_ACTION( + Legal, 0, LLT(), + LegalityQuery(G_LOAD, {s32, p0}, + LegalityQuery::MemDesc{s32, MaxAlignInBits, + AtomicOrdering::NotAtomic, + AtomicOrdering::NotAtomic})); + EXPECT_ACTION( + Unsupported, 0, LLT(), + LegalityQuery(G_LOAD, {s32, p0}, + LegalityQuery::MemDesc{s32, 8, AtomicOrdering::NotAtomic, + AtomicOrdering::NotAtomic})); } } diff --git a/llvm/unittests/CodeGen/InstrRefLDVTest.cpp b/llvm/unittests/CodeGen/InstrRefLDVTest.cpp index 3a625b2..ce2a38b 100644 --- a/llvm/unittests/CodeGen/InstrRefLDVTest.cpp +++ b/llvm/unittests/CodeGen/InstrRefLDVTest.cpp @@ -100,8 +100,8 @@ public: // scope. DIBuilder DIB(*Mod); OurFile = DIB.createFile("xyzzy.c", "/cave"); - OurCU = - DIB.createCompileUnit(dwarf::DW_LANG_C99, OurFile, "nou", false, "", 0); + OurCU = DIB.createCompileUnit(DISourceLanguageName(dwarf::DW_LANG_C99), + OurFile, "nou", false, "", 0); auto OurSubT = DIB.createSubroutineType(DIB.getOrCreateTypeArray({})); OurFunc = DIB.createFunction(OurCU, "bees", "", OurFile, 1, OurSubT, 1, diff --git a/llvm/unittests/CodeGen/LexicalScopesTest.cpp b/llvm/unittests/CodeGen/LexicalScopesTest.cpp index 34bd37a..0c6b932 100644 --- a/llvm/unittests/CodeGen/LexicalScopesTest.cpp +++ b/llvm/unittests/CodeGen/LexicalScopesTest.cpp @@ -102,8 +102,8 @@ public: // scope. 
DIBuilder DIB(Mod); OurFile = DIB.createFile("xyzzy.c", "/cave"); - OurCU = - DIB.createCompileUnit(dwarf::DW_LANG_C99, OurFile, "nou", false, "", 0); + OurCU = DIB.createCompileUnit(DISourceLanguageName(dwarf::DW_LANG_C99), + OurFile, "nou", false, "", 0); OurSubT = DIB.createSubroutineType(DIB.getOrCreateTypeArray({})); OurFunc = DIB.createFunction(OurCU, "bees", "", OurFile, 1, OurSubT, 1, diff --git a/llvm/unittests/CodeGen/MIR2VecTest.cpp b/llvm/unittests/CodeGen/MIR2VecTest.cpp index d243d82..11222b4 100644 --- a/llvm/unittests/CodeGen/MIR2VecTest.cpp +++ b/llvm/unittests/CodeGen/MIR2VecTest.cpp @@ -17,6 +17,7 @@ #include "llvm/IR/Module.h" #include "llvm/MC/TargetRegistry.h" #include "llvm/Support/TargetSelect.h" +#include "llvm/Support/raw_ostream.h" #include "llvm/Target/TargetMachine.h" #include "llvm/Target/TargetOptions.h" #include "llvm/TargetParser/Triple.h" @@ -52,7 +53,7 @@ protected: std::unique_ptr<LLVMContext> Ctx; std::unique_ptr<Module> M; std::unique_ptr<TargetMachine> TM; - const TargetInstrInfo *TII; + const TargetInstrInfo *TII = nullptr; static void SetUpTestCase() { InitializeAllTargets(); @@ -93,6 +94,8 @@ protected: return; } } + + void TearDown() override { TII = nullptr; } }; // Function to find an opcode by name @@ -118,7 +121,11 @@ TEST_F(MIR2VecVocabTestFixture, CanonicalOpcodeMappingTest) { VocabMap VMap; Embedding Val = Embedding(64, 1.0f); VMap["ADD"] = Val; - MIRVocabulary TestVocab(std::move(VMap), TII); + auto TestVocabOrErr = MIRVocabulary::create(std::move(VMap), *TII); + ASSERT_TRUE(static_cast<bool>(TestVocabOrErr)) + << "Failed to create vocabulary: " + << toString(TestVocabOrErr.takeError()); + auto &TestVocab = *TestVocabOrErr; unsigned Index1 = TestVocab.getCanonicalIndexForBaseName(BaseName1); unsigned Index2 = TestVocab.getCanonicalIndexForBaseName(BaseName2); @@ -173,7 +180,11 @@ TEST_F(MIR2VecVocabTestFixture, DeterministicMapping) { // Use a minimal MIRVocabulary to trigger canonical mapping construction VocabMap VMap; VMap["ADD"] = Embedding(64, 1.0f); - MIRVocabulary TestVocab(std::move(VMap), TII); + auto TestVocabOrErr = MIRVocabulary::create(std::move(VMap), *TII); + ASSERT_TRUE(static_cast<bool>(TestVocabOrErr)) + << "Failed to create vocabulary: " + << toString(TestVocabOrErr.takeError()); + auto &TestVocab = *TestVocabOrErr; unsigned Index1 = TestVocab.getCanonicalIndexForBaseName(BaseName); unsigned Index2 = TestVocab.getCanonicalIndexForBaseName(BaseName); @@ -195,8 +206,10 @@ TEST_F(MIR2VecVocabTestFixture, VocabularyConstruction) { VMap["ADD"] = Embedding(128, 1.0f); // Dimension 128, all values 1.0 VMap["SUB"] = Embedding(128, 2.0f); // Dimension 128, all values 2.0 - MIRVocabulary Vocab(std::move(VMap), TII); - EXPECT_TRUE(Vocab.isValid()); + auto VocabOrErr = MIRVocabulary::create(std::move(VMap), *TII); + ASSERT_TRUE(static_cast<bool>(VocabOrErr)) + << "Failed to create vocabulary: " << toString(VocabOrErr.takeError()); + auto &Vocab = *VocabOrErr; EXPECT_EQ(Vocab.getDimension(), 128u); // Test iterator - iterates over individual embeddings @@ -214,4 +227,20 @@ TEST_F(MIR2VecVocabTestFixture, VocabularyConstruction) { EXPECT_GT(Count, 0u); } -} // namespace
\ No newline at end of file +// Test factory method with empty vocabulary +TEST_F(MIR2VecVocabTestFixture, EmptyVocabularyCreation) { + VocabMap EmptyVMap; + + auto VocabOrErr = MIRVocabulary::create(std::move(EmptyVMap), *TII); + EXPECT_FALSE(static_cast<bool>(VocabOrErr)) + << "Factory method should fail with empty vocabulary"; + + // Consume the error + if (!VocabOrErr) { + auto Err = VocabOrErr.takeError(); + std::string ErrorMsg = toString(std::move(Err)); + EXPECT_FALSE(ErrorMsg.empty()); + } +} + +} // namespace diff --git a/llvm/unittests/CodeGen/MachineBasicBlockTest.cpp b/llvm/unittests/CodeGen/MachineBasicBlockTest.cpp index bcb5a18..ef0d40b 100644 --- a/llvm/unittests/CodeGen/MachineBasicBlockTest.cpp +++ b/llvm/unittests/CodeGen/MachineBasicBlockTest.cpp @@ -40,8 +40,8 @@ TEST(FindDebugLocTest, DifferentIterators) { // scope. DIBuilder DIB(Mod); DIFile *OurFile = DIB.createFile("foo.c", "/bar"); - DICompileUnit *OurCU = - DIB.createCompileUnit(dwarf::DW_LANG_C99, OurFile, "", false, "", 0); + DICompileUnit *OurCU = DIB.createCompileUnit( + DISourceLanguageName(dwarf::DW_LANG_C99), OurFile, "", false, "", 0); auto OurSubT = DIB.createSubroutineType(DIB.getOrCreateTypeArray({})); DISubprogram *OurFunc = DIB.createFunction(OurCU, "bees", "", OurFile, 1, OurSubT, 1, diff --git a/llvm/unittests/Frontend/OpenMPIRBuilderTest.cpp b/llvm/unittests/Frontend/OpenMPIRBuilderTest.cpp index c13570d..e568723 100644 --- a/llvm/unittests/Frontend/OpenMPIRBuilderTest.cpp +++ b/llvm/unittests/Frontend/OpenMPIRBuilderTest.cpp @@ -11,6 +11,7 @@ #include "llvm/Frontend/OpenMP/OMPIRBuilder.h" #include "llvm/IR/BasicBlock.h" #include "llvm/IR/DIBuilder.h" +#include "llvm/IR/DebugInfoMetadata.h" #include "llvm/IR/Function.h" #include "llvm/IR/InstIterator.h" #include "llvm/IR/Instructions.h" @@ -212,8 +213,8 @@ protected: DIBuilder DIB(*M); auto File = DIB.createFile("test.dbg", "/src", std::nullopt, std::optional<StringRef>("/src/test.dbg")); - auto CU = - DIB.createCompileUnit(dwarf::DW_LANG_C, File, "llvm-C", true, "", 0); + auto CU = DIB.createCompileUnit(DISourceLanguageName(dwarf::DW_LANG_C), + File, "llvm-C", true, "", 0); auto Type = DIB.createSubroutineType(DIB.getOrCreateTypeArray({})); auto SP = DIB.createFunction( CU, "foo", "", File, 1, Type, 1, DINode::FlagZero, diff --git a/llvm/unittests/IR/DebugInfoTest.cpp b/llvm/unittests/IR/DebugInfoTest.cpp index 475e0a9..060f45d 100644 --- a/llvm/unittests/IR/DebugInfoTest.cpp +++ b/llvm/unittests/IR/DebugInfoTest.cpp @@ -409,7 +409,8 @@ TEST(DIBuilder, CreateFortranArrayTypeWithAttributes) { DIFile *F = DIB.createFile("main.c", "/"); DICompileUnit *CU = DIB.createCompileUnit( - dwarf::DW_LANG_C, DIB.createFile("main.c", "/"), "llvm-c", true, "", 0); + DISourceLanguageName(dwarf::DW_LANG_C), DIB.createFile("main.c", "/"), + "llvm-c", true, "", 0); DIVariable *DataLocation = DIB.createTempGlobalVariableFwdDecl(CU, "dl", "_dl", F, 1, nullptr, true); @@ -1335,8 +1336,8 @@ TEST(DIBuilder, HashingDISubprogram) { DIBuilder DIB(*M); DIFile *F = DIB.createFile("main.c", "/"); - DICompileUnit *CU = - DIB.createCompileUnit(dwarf::DW_LANG_C, F, "Test", false, "", 0); + DICompileUnit *CU = DIB.createCompileUnit( + DISourceLanguageName(dwarf::DW_LANG_C), F, "Test", false, "", 0); llvm::TempDIType ForwardDeclaredType = llvm::TempDIType(DIB.createReplaceableCompositeType( @@ -1381,8 +1382,8 @@ TEST(DIBuilder, CompositeTypes) { DIBuilder DIB(*M); DIFile *F = DIB.createFile("main.c", "/"); - DICompileUnit *CU = - DIB.createCompileUnit(dwarf::DW_LANG_C, F, "Test", 
false, "", 0); + DICompileUnit *CU = DIB.createCompileUnit( + DISourceLanguageName(dwarf::DW_LANG_C), F, "Test", false, "", 0); DICompositeType *Class = DIB.createClassType(CU, "MyClass", F, 0, 8, 8, 0, {}, nullptr, {}, 0, diff --git a/llvm/unittests/IR/IRBuilderTest.cpp b/llvm/unittests/IR/IRBuilderTest.cpp index 773c32e..37826b2 100644 --- a/llvm/unittests/IR/IRBuilderTest.cpp +++ b/llvm/unittests/IR/IRBuilderTest.cpp @@ -6,11 +6,12 @@ // //===----------------------------------------------------------------------===// -#include "llvm/Analysis/InstSimplifyFolder.h" #include "llvm/IR/IRBuilder.h" +#include "llvm/Analysis/InstSimplifyFolder.h" #include "llvm/IR/BasicBlock.h" #include "llvm/IR/DIBuilder.h" #include "llvm/IR/DataLayout.h" +#include "llvm/IR/DebugInfoMetadata.h" #include "llvm/IR/Function.h" #include "llvm/IR/IntrinsicInst.h" #include "llvm/IR/IntrinsicsAArch64.h" @@ -859,8 +860,8 @@ TEST_F(IRBuilderTest, createFunction) { IRBuilder<> Builder(BB); DIBuilder DIB(*M); auto File = DIB.createFile("error.swift", "/"); - auto CU = - DIB.createCompileUnit(dwarf::DW_LANG_Swift, File, "swiftc", true, "", 0); + auto CU = DIB.createCompileUnit(DISourceLanguageName(dwarf::DW_LANG_Swift), + File, "swiftc", true, "", 0); auto Type = DIB.createSubroutineType(DIB.getOrCreateTypeArray({})); auto NoErr = DIB.createFunction( CU, "noerr", "", File, 1, Type, 1, DINode::FlagZero, @@ -893,9 +894,9 @@ TEST_F(IRBuilderTest, DIBuilder) { IRBuilder<> Builder(BB); DIBuilder DIB(*M); auto File = DIB.createFile("F.CBL", "/"); - auto CU = DIB.createCompileUnit(dwarf::DW_LANG_Cobol74, - DIB.createFile("F.CBL", "/"), - "llvm-cobol74", true, "", 0); + auto CU = DIB.createCompileUnit( + DISourceLanguageName(dwarf::DW_LANG_Cobol74), + DIB.createFile("F.CBL", "/"), "llvm-cobol74", true, "", 0); auto Type = DIB.createSubroutineType(DIB.getOrCreateTypeArray({})); auto SP = DIB.createFunction( CU, "foo", "", File, 1, Type, 1, DINode::FlagZero, @@ -1004,7 +1005,8 @@ TEST_F(IRBuilderTest, createArtificialSubprogram) { IRBuilder<> Builder(BB); DIBuilder DIB(*M); auto File = DIB.createFile("main.c", "/"); - auto CU = DIB.createCompileUnit(dwarf::DW_LANG_C, File, "clang", + auto CU = DIB.createCompileUnit(DISourceLanguageName(dwarf::DW_LANG_C), File, + "clang", /*isOptimized=*/true, /*Flags=*/"", /*Runtime Version=*/0); auto Type = DIB.createSubroutineType(DIB.getOrCreateTypeArray({})); @@ -1083,7 +1085,8 @@ TEST_F(IRBuilderTest, appendDebugInfo) { { DIBuilder DIB(*M); auto *File = DIB.createFile("main.c", "/"); - CU = DIB.createCompileUnit(dwarf::DW_LANG_C, File, "clang", + CU = DIB.createCompileUnit(DISourceLanguageName(dwarf::DW_LANG_C), File, + "clang", /*isOptimized=*/true, /*Flags=*/"", /*Runtime Version=*/0); auto *ByteTy = DIB.createBasicType("byte0", 8, dwarf::DW_ATE_signed); @@ -1158,9 +1161,9 @@ TEST_F(IRBuilderTest, DebugLoc) { DIBuilder DIB(*M); auto File = DIB.createFile("tmp.cpp", "/"); - auto CU = DIB.createCompileUnit(dwarf::DW_LANG_C_plus_plus_11, - DIB.createFile("tmp.cpp", "/"), "", true, "", - 0); + auto CU = + DIB.createCompileUnit(DISourceLanguageName(dwarf::DW_LANG_C_plus_plus_11), + DIB.createFile("tmp.cpp", "/"), "", true, "", 0); auto SPType = DIB.createSubroutineType(DIB.getOrCreateTypeArray({})); auto SP = DIB.createFunction(CU, "foo", "foo", File, 1, SPType, 1, DINode::FlagZero, @@ -1191,9 +1194,8 @@ TEST_F(IRBuilderTest, DIImportedEntity) { IRBuilder<> Builder(BB); DIBuilder DIB(*M); auto F = DIB.createFile("F.CBL", "/"); - auto CU = DIB.createCompileUnit(dwarf::DW_LANG_Cobol74, - F, 
"llvm-cobol74", - true, "", 0); + auto CU = DIB.createCompileUnit(DISourceLanguageName(dwarf::DW_LANG_Cobol74), + F, "llvm-cobol74", true, "", 0); MDTuple *Elements = MDTuple::getDistinct(Ctx, {}); DIB.createImportedDeclaration(CU, nullptr, F, 1); @@ -1218,8 +1220,9 @@ TEST_F(IRBuilderTest, DIBuilderMacro) { DIBuilder DIB(*M); auto File1 = DIB.createFile("main.c", "/"); auto File2 = DIB.createFile("file.h", "/"); - auto CU = DIB.createCompileUnit( - dwarf::DW_LANG_C, DIB.createFile("main.c", "/"), "llvm-c", true, "", 0); + auto CU = DIB.createCompileUnit(DISourceLanguageName(dwarf::DW_LANG_C), + DIB.createFile("main.c", "/"), "llvm-c", true, + "", 0); auto MDef0 = DIB.createMacro(nullptr, 0, dwarf::DW_MACINFO_define, "M0", "V0"); auto TMF1 = DIB.createTempMacroFile(nullptr, 0, File1); diff --git a/llvm/unittests/IR/MetadataTest.cpp b/llvm/unittests/IR/MetadataTest.cpp index 7425703..85c79d1 100644 --- a/llvm/unittests/IR/MetadataTest.cpp +++ b/llvm/unittests/IR/MetadataTest.cpp @@ -101,8 +101,8 @@ protected: } DICompileUnit *getUnit() { return DICompileUnit::getDistinct( - Context, 1, getFile(), "clang", false, "-g", 2, "", - DICompileUnit::FullDebug, getTuple(), getTuple(), getTuple(), + Context, DISourceLanguageName(1), getFile(), "clang", false, "-g", 2, + "", DICompileUnit::FullDebug, getTuple(), getTuple(), getTuple(), getTuple(), getTuple(), 0, true, false, DICompileUnit::DebugNameTableKind::Default, false, "/", ""); } @@ -2896,13 +2896,14 @@ TEST_F(DICompileUnitTest, get) { StringRef SysRoot = "/"; StringRef SDK = "MacOSX.sdk"; auto *N = DICompileUnit::getDistinct( - Context, SourceLanguage, File, Producer, IsOptimized, Flags, - RuntimeVersion, SplitDebugFilename, EmissionKind, EnumTypes, - RetainedTypes, GlobalVariables, ImportedEntities, Macros, DWOId, true, - false, DICompileUnit::DebugNameTableKind::Default, false, SysRoot, SDK); + Context, DISourceLanguageName(SourceLanguage), File, Producer, + IsOptimized, Flags, RuntimeVersion, SplitDebugFilename, EmissionKind, + EnumTypes, RetainedTypes, GlobalVariables, ImportedEntities, Macros, + DWOId, true, false, DICompileUnit::DebugNameTableKind::Default, false, + SysRoot, SDK); EXPECT_EQ(dwarf::DW_TAG_compile_unit, N->getTag()); - EXPECT_EQ(SourceLanguage, N->getSourceLanguage()); + EXPECT_EQ(SourceLanguage, N->getSourceLanguage().getUnversionedName()); EXPECT_EQ(File, N->getFile()); EXPECT_EQ(Producer, N->getProducer()); EXPECT_EQ(IsOptimized, N->isOptimized()); @@ -2921,7 +2922,7 @@ TEST_F(DICompileUnitTest, get) { TempDICompileUnit Temp = N->clone(); EXPECT_EQ(dwarf::DW_TAG_compile_unit, Temp->getTag()); - EXPECT_EQ(SourceLanguage, Temp->getSourceLanguage()); + EXPECT_EQ(SourceLanguage, Temp->getSourceLanguage().getUnversionedName()); EXPECT_EQ(File, Temp->getFile()); EXPECT_EQ(Producer, Temp->getProducer()); EXPECT_EQ(IsOptimized, Temp->isOptimized()); @@ -2959,10 +2960,10 @@ TEST_F(DICompileUnitTest, replaceArrays) { StringRef SysRoot = "/"; StringRef SDK = "MacOSX.sdk"; auto *N = DICompileUnit::getDistinct( - Context, SourceLanguage, File, Producer, IsOptimized, Flags, - RuntimeVersion, SplitDebugFilename, EmissionKind, EnumTypes, - RetainedTypes, nullptr, ImportedEntities, nullptr, DWOId, true, false, - DICompileUnit::DebugNameTableKind::Default, false, SysRoot, SDK); + Context, DISourceLanguageName(SourceLanguage), File, Producer, + IsOptimized, Flags, RuntimeVersion, SplitDebugFilename, EmissionKind, + EnumTypes, RetainedTypes, nullptr, ImportedEntities, nullptr, DWOId, true, + false, 
DICompileUnit::DebugNameTableKind::Default, false, SysRoot, SDK); auto *GlobalVariables = MDTuple::getDistinct(Context, {}); EXPECT_EQ(nullptr, N->getGlobalVariables().get()); diff --git a/llvm/unittests/IR/VerifierTest.cpp b/llvm/unittests/IR/VerifierTest.cpp index 7a136e6..440db12 100644 --- a/llvm/unittests/IR/VerifierTest.cpp +++ b/llvm/unittests/IR/VerifierTest.cpp @@ -9,6 +9,7 @@ #include "llvm/IR/Verifier.h" #include "llvm/IR/Constants.h" #include "llvm/IR/DIBuilder.h" +#include "llvm/IR/DebugInfoMetadata.h" #include "llvm/IR/DerivedTypes.h" #include "llvm/IR/Function.h" #include "llvm/IR/GlobalAlias.h" @@ -232,8 +233,9 @@ TEST(VerifierTest, DetectInvalidDebugInfo) { LLVMContext C; Module M("M", C); DIBuilder DIB(M); - DIB.createCompileUnit(dwarf::DW_LANG_C89, DIB.createFile("broken.c", "/"), - "unittest", false, "", 0); + DIB.createCompileUnit(DISourceLanguageName(dwarf::DW_LANG_C89), + DIB.createFile("broken.c", "/"), "unittest", false, + "", 0); DIB.finalize(); EXPECT_FALSE(verifyModule(M)); @@ -247,7 +249,7 @@ TEST(VerifierTest, DetectInvalidDebugInfo) { LLVMContext C; Module M("M", C); DIBuilder DIB(M); - auto *CU = DIB.createCompileUnit(dwarf::DW_LANG_C89, + auto *CU = DIB.createCompileUnit(DISourceLanguageName(dwarf::DW_LANG_C89), DIB.createFile("broken.c", "/"), "unittest", false, "", 0); new GlobalVariable(M, Type::getInt8Ty(C), false, diff --git a/llvm/unittests/Support/SpecialCaseListTest.cpp b/llvm/unittests/Support/SpecialCaseListTest.cpp index 5be2b9e..750feda 100644 --- a/llvm/unittests/Support/SpecialCaseListTest.cpp +++ b/llvm/unittests/Support/SpecialCaseListTest.cpp @@ -22,33 +22,31 @@ namespace { class SpecialCaseListTest : public ::testing::Test { protected: - std::unique_ptr<SpecialCaseList> makeSpecialCaseList(StringRef List, - std::string &Error, - bool UseGlobs = true) { + std::unique_ptr<SpecialCaseList> + makeSpecialCaseList(StringRef List, std::string &Error, int Version = 0) { auto S = List.str(); - if (!UseGlobs) - S = (Twine("#!special-case-list-v1\n") + S).str(); + if (Version) + S = (Twine("#!special-case-list-v") + Twine(Version) + "\n" + S).str(); std::unique_ptr<MemoryBuffer> MB = MemoryBuffer::getMemBuffer(S); return SpecialCaseList::create(MB.get(), Error); } std::unique_ptr<SpecialCaseList> makeSpecialCaseList(StringRef List, - bool UseGlobs = true) { + int Version = 0) { std::string Error; - auto SCL = makeSpecialCaseList(List, Error, UseGlobs); + auto SCL = makeSpecialCaseList(List, Error, Version); assert(SCL); assert(Error == ""); return SCL; } - std::string makeSpecialCaseListFile(StringRef Contents, - bool UseGlobs = true) { + std::string makeSpecialCaseListFile(StringRef Contents, int Version = 0) { int FD; SmallString<64> Path; sys::fs::createTemporaryFile("SpecialCaseListTest", "temp", FD, Path); raw_fd_ostream OF(FD, true, true); - if (!UseGlobs) - OF << "#!special-case-list-v1\n"; + if (Version) + OF << "#!special-case-list-v" << Version << "\n"; OF << Contents; OF.close(); return std::string(Path.str()); @@ -261,7 +259,7 @@ TEST_F(SpecialCaseListTest, Version1) { "fun:foo.*\n" "fun:abc|def\n" "fun:b.r\n", - /*UseGlobs=*/false); + /*Version=*/1); EXPECT_TRUE(SCL->inSection("sect1", "fun", "fooz")); EXPECT_TRUE(SCL->inSection("sect2", "fun", "fooz")); @@ -309,6 +307,46 @@ TEST_F(SpecialCaseListTest, Version2) { EXPECT_FALSE(SCL->inSection("sect3", "fun", "bar")); } +TEST_F(SpecialCaseListTest, DotSlash) { + std::unique_ptr<SpecialCaseList> SCL2 = makeSpecialCaseList("[dot]\n" + "fun:./foo\n" + "src:./bar\n" + "[not]\n" + "fun:foo\n" + 
"src:bar\n"); + std::unique_ptr<SpecialCaseList> SCL3 = makeSpecialCaseList("[dot]\n" + "fun:./foo\n" + "src:./bar\n" + "[not]\n" + "fun:foo\n" + "src:bar\n", + /*Version=*/3); + + EXPECT_TRUE(SCL2->inSection("dot", "fun", "./foo")); + EXPECT_TRUE(SCL3->inSection("dot", "fun", "./foo")); + + EXPECT_FALSE(SCL2->inSection("dot", "fun", "foo")); + EXPECT_FALSE(SCL3->inSection("dot", "fun", "foo")); + + EXPECT_TRUE(SCL2->inSection("dot", "src", "./bar")); + EXPECT_FALSE(SCL3->inSection("dot", "src", "./bar")); + + EXPECT_FALSE(SCL2->inSection("dot", "src", "bar")); + EXPECT_FALSE(SCL3->inSection("dot", "src", "bar")); + + EXPECT_FALSE(SCL2->inSection("not", "fun", "./foo")); + EXPECT_FALSE(SCL3->inSection("not", "fun", "./foo")); + + EXPECT_TRUE(SCL2->inSection("not", "fun", "foo")); + EXPECT_TRUE(SCL3->inSection("not", "fun", "foo")); + + EXPECT_FALSE(SCL2->inSection("not", "src", "./bar")); + EXPECT_TRUE(SCL3->inSection("not", "src", "./bar")); + + EXPECT_TRUE(SCL2->inSection("not", "src", "bar")); + EXPECT_TRUE(SCL3->inSection("not", "src", "bar")); +} + TEST_F(SpecialCaseListTest, LinesInSection) { std::unique_ptr<SpecialCaseList> SCL = makeSpecialCaseList("fun:foo\n" "fun:bar\n" diff --git a/llvm/unittests/Transforms/Utils/CloningTest.cpp b/llvm/unittests/Transforms/Utils/CloningTest.cpp index fe81986..d990808 100644 --- a/llvm/unittests/Transforms/Utils/CloningTest.cpp +++ b/llvm/unittests/Transforms/Utils/CloningTest.cpp @@ -18,6 +18,7 @@ #include "llvm/IR/Constant.h" #include "llvm/IR/DIBuilder.h" #include "llvm/IR/DebugInfo.h" +#include "llvm/IR/DebugInfoMetadata.h" #include "llvm/IR/Function.h" #include "llvm/IR/IRBuilder.h" #include "llvm/IR/InstIterator.h" @@ -482,10 +483,10 @@ protected: DITypeRefArray ParamTypes = DBuilder.getOrCreateTypeArray({}); DISubroutineType *FuncType = DBuilder.createSubroutineType(ParamTypes); - auto *CU = DBuilder.createCompileUnit(dwarf::DW_LANG_C99, - DBuilder.createFile("filename.c", - "/file/dir"), - "CloneFunc", false, "", 0); + auto *CU = DBuilder.createCompileUnit( + DISourceLanguageName(dwarf::DW_LANG_C99), + DBuilder.createFile("filename.c", "/file/dir"), "CloneFunc", false, "", + 0); auto *Subprogram = DBuilder.createFunction( CU, "f", "f", File, 4, FuncType, 3, DINode::FlagZero, @@ -540,7 +541,7 @@ protected: // Create another, empty, compile unit. DIBuilder DBuilder2(*M); - DBuilder2.createCompileUnit(dwarf::DW_LANG_C99, + DBuilder2.createCompileUnit(DISourceLanguageName(dwarf::DW_LANG_C99), DBuilder.createFile("extra.c", "/file/dir"), "CloneFunc", false, "", 0); DBuilder2.finalize(); @@ -953,8 +954,9 @@ protected: // confirm that compile units get cloned in the correct order. 
DIBuilder EmptyBuilder(*OldM); auto *File = EmptyBuilder.createFile("empty.c", "/file/dir/"); - (void)EmptyBuilder.createCompileUnit(dwarf::DW_LANG_C99, File, - "EmptyUnit", false, "", 0); + (void)EmptyBuilder.createCompileUnit( + DISourceLanguageName(dwarf::DW_LANG_C99), File, "EmptyUnit", false, + "", 0); EmptyBuilder.finalize(); } @@ -973,10 +975,10 @@ protected: auto *File = DBuilder.createFile("filename.c", "/file/dir/"); DITypeRefArray ParamTypes = DBuilder.getOrCreateTypeArray({}); DISubroutineType *DFuncType = DBuilder.createSubroutineType(ParamTypes); - auto *CU = DBuilder.createCompileUnit(dwarf::DW_LANG_C99, - DBuilder.createFile("filename.c", - "/file/dir"), - "CloneModule", false, "", 0); + auto *CU = DBuilder.createCompileUnit( + DISourceLanguageName(dwarf::DW_LANG_C99), + DBuilder.createFile("filename.c", "/file/dir"), "CloneModule", false, + "", 0); // Function DI auto *Subprogram = DBuilder.createFunction( CU, "f", "f", File, 4, DFuncType, 3, DINode::FlagZero, diff --git a/llvm/utils/docker/example/Dockerfile b/llvm/utils/docker/example/Dockerfile index 197716f..39990c0 100644 --- a/llvm/utils/docker/example/Dockerfile +++ b/llvm/utils/docker/example/Dockerfile @@ -10,7 +10,7 @@ # Stage 1. Check out LLVM source code and run the build. # FIXME: Replace 'ubuntu' with your base image -FROM ubuntu AS builder +FROM docker.io/ubuntu AS builder # FIXME: Change maintainer name LABEL maintainer="Maintainer <maintainer@email>" # FIXME: Install llvm/clang build dependencies here. Including compiler to @@ -29,7 +29,7 @@ RUN /tmp/scripts/build_install_llvm.sh --to /tmp/clang-install ${buildscript_arg # Stage 2. Produce a minimal release image with build results. # FIXME: Replace 'ubuntu' with your base image. -FROM ubuntu +FROM docker.io/ubuntu # FIXME: Change maintainer name. LABEL maintainer="Maintainer <maintainer@email>" # FIXME: Install all packages you want to have in your release container. diff --git a/llvm/utils/docker/nvidia-cuda/Dockerfile b/llvm/utils/docker/nvidia-cuda/Dockerfile index 035c582..f4fb3cc 100644 --- a/llvm/utils/docker/nvidia-cuda/Dockerfile +++ b/llvm/utils/docker/nvidia-cuda/Dockerfile @@ -6,7 +6,7 @@ # #===----------------------------------------------------------------------===// # Stage 1. Check out LLVM source code and run the build. -FROM nvidia/cuda:12.6.3-devel-ubuntu24.04 AS builder +FROM docker.io/nvidia/cuda:12.6.3-devel-ubuntu24.04 AS builder LABEL maintainer="LLVM Developers" # Install llvm build dependencies. RUN apt-get update && \ @@ -26,7 +26,7 @@ RUN /tmp/scripts/build_install_llvm.sh --to /tmp/clang-install ${buildscript_arg # Stage 2. Produce a minimal release image with build results. -FROM nvidia/cuda:12.6.3-devel-ubuntu24.04 +FROM docker.io/nvidia/cuda:12.6.3-devel-ubuntu24.04 LABEL maintainer="LLVM Developers" # Copy clang installation into this container. 
COPY --from=builder /tmp/clang-install/ /usr/local/ diff --git a/llvm/utils/gn/secondary/clang-tools-extra/clang-tidy/fuchsia/BUILD.gn b/llvm/utils/gn/secondary/clang-tools-extra/clang-tidy/fuchsia/BUILD.gn index ddb9848..48384ef 100644 --- a/llvm/utils/gn/secondary/clang-tools-extra/clang-tidy/fuchsia/BUILD.gn +++ b/llvm/utils/gn/secondary/clang-tools-extra/clang-tidy/fuchsia/BUILD.gn @@ -18,6 +18,7 @@ static_library("fuchsia") { "MultipleInheritanceCheck.cpp", "OverloadedOperatorCheck.cpp", "StaticallyConstructedObjectsCheck.cpp", + "TemporaryObjectsCheck.cpp", "TrailingReturnCheck.cpp", "VirtualInheritanceCheck.cpp", ] diff --git a/llvm/utils/gn/secondary/clang-tools-extra/clang-tidy/zircon/BUILD.gn b/llvm/utils/gn/secondary/clang-tools-extra/clang-tidy/zircon/BUILD.gn index c349414..8195452 100644 --- a/llvm/utils/gn/secondary/clang-tools-extra/clang-tidy/zircon/BUILD.gn +++ b/llvm/utils/gn/secondary/clang-tools-extra/clang-tidy/zircon/BUILD.gn @@ -10,8 +10,5 @@ static_library("zircon") { "//clang/lib/Lex", "//llvm/lib/Support", ] - sources = [ - "TemporaryObjectsCheck.cpp", - "ZirconTidyModule.cpp", - ] + sources = [ "ZirconTidyModule.cpp" ] } diff --git a/llvm/utils/gn/secondary/llvm/lib/Target/AMDGPU/BUILD.gn b/llvm/utils/gn/secondary/llvm/lib/Target/AMDGPU/BUILD.gn index 2208ae5..c89e335 100644 --- a/llvm/utils/gn/secondary/llvm/lib/Target/AMDGPU/BUILD.gn +++ b/llvm/utils/gn/secondary/llvm/lib/Target/AMDGPU/BUILD.gn @@ -202,6 +202,7 @@ static_library("LLVMAMDGPUCodeGen") { "AMDGPUTargetMachine.cpp", "AMDGPUTargetObjectFile.cpp", "AMDGPUTargetTransformInfo.cpp", + "AMDGPUUniformIntrinsicCombine.cpp", "AMDGPUUnifyDivergentExitNodes.cpp", "AMDGPUWaitSGPRHazards.cpp", "GCNCreateVOPD.cpp", diff --git a/llvm/utils/lit/tests/xunit-output-report-failures-only.py b/llvm/utils/lit/tests/xunit-output-report-failures-only.py index e15fd6a..c331578 100644 --- a/llvm/utils/lit/tests/xunit-output-report-failures-only.py +++ b/llvm/utils/lit/tests/xunit-output-report-failures-only.py @@ -5,7 +5,7 @@ # CHECK: <?xml version="1.0" encoding="UTF-8"?> # CHECK-NEXT: <testsuites time="{{[0-9.]+}}"> # CHECK-NEXT: <testsuite name="test-data" tests="1" failures="1" skipped="0" time="{{[0-9.]+}}"> -# CHECK-NEXT: <testcase classname="test-data.test-data" name="bad&name.ini" time="{{[0-1]\.[0-9]+}}"> +# CHECK-NEXT: <testcase classname="test-data.test-data" name="bad&name.ini" time="{{[0-9.]+}}"> # CHECK-NEXT: <failure><![CDATA[& < > ]]]]><![CDATA[> &"]]></failure> # CHECK-NEXT: </testcase> # CHECK-NEXT: </testsuite> diff --git a/llvm/utils/profcheck-xfail.txt b/llvm/utils/profcheck-xfail.txt index 74ed172..8c41466 100644 --- a/llvm/utils/profcheck-xfail.txt +++ b/llvm/utils/profcheck-xfail.txt @@ -915,7 +915,6 @@ Transforms/InstCombine/select_frexp.ll Transforms/InstCombine/select.ll Transforms/InstCombine/select-min-max.ll Transforms/InstCombine/select-of-symmetric-selects.ll -Transforms/InstCombine/select-safe-bool-transforms.ll Transforms/InstCombine/select-safe-impliedcond-transforms.ll Transforms/InstCombine/select-safe-transforms.ll Transforms/InstCombine/select-select.ll |
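A note on the DIBuilder/DICompileUnit churn above: every touched unittest follows the same mechanical migration, wrapping the raw dwarf::DW_LANG_* code passed to createCompileUnit in DISourceLanguageName. Judging from the DwarfTest and MetadataTest hunks, the wrapper lets a compile unit's language also be expressed as a versioned DW_LNAME_* name, with getUnversionedName() recovering the legacy code. A minimal before/after sketch, assuming an existing DIBuilder DIB and DIFile *File as in the tests:

  // Before: the unversioned DWARF language code was passed directly.
  //   auto *CU = DIB.createCompileUnit(dwarf::DW_LANG_C99, File, "clang",
  //                                    /*isOptimized=*/false, "", 0);
  // After: the code is wrapped in DISourceLanguageName.
  auto *CU = DIB.createCompileUnit(DISourceLanguageName(dwarf::DW_LANG_C99),
                                   File, "clang", /*isOptimized=*/false, "", 0);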
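The llvm-gpu-loader hunks deserve a closer look: once end_args_t became an empty struct, copying sizeof(args_t) bytes into the kernel-argument buffer would still copy one byte, because sizeof of an empty C++ class is 1, never 0. Both the AMDHSA and NVPTX paths therefore clamp the size with std::is_empty_v. A self-contained sketch of the idiom; copy_kernel_args and buffer are illustrative names, not the loader's actual API:

  #include <cstring>
  #include <type_traits>

  struct end_args_t {}; // empty: sizeof(end_args_t) == 1, not 0

  // Copy a kernel-argument struct into a raw launch buffer, treating empty
  // argument structs as zero-sized so the dummy byte is never written.
  template <typename ArgsT>
  void copy_kernel_args(void *buffer, const ArgsT &args) {
    std::size_t size = std::is_empty_v<ArgsT> ? 0 : sizeof(ArgsT);
    std::memcpy(buffer, &args, size); // a zero-length memcpy is a no-op
  }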
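Similarly, the MIR2VecTest updates track an API change from construct-then-isValid() to a fallible MIRVocabulary::create factory returning llvm::Expected, whose error the caller must consume. A sketch of the general pattern with a stand-in Vocab type, hypothetical rather than the actual MIR2Vec declaration:

  #include "llvm/Support/Error.h"

  class Vocab {
    unsigned Dim;
    explicit Vocab(unsigned Dim) : Dim(Dim) {} // private: built via create() only

  public:
    // Fallible factory: bad input yields an Error instead of a zombie object.
    static llvm::Expected<Vocab> create(unsigned Dim) {
      if (Dim == 0)
        return llvm::createStringError(llvm::inconvertibleErrorCode(),
                                       "empty vocabulary");
      return Vocab(Dim);
    }
    unsigned getDimension() const { return Dim; }
  };

  // Callers check the result and consume the error, as the tests now do:
  //   auto VocabOrErr = Vocab::create(0);
  //   if (!VocabOrErr)
  //     llvm::errs() << llvm::toString(VocabOrErr.takeError()) << '\n';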