Diffstat (limited to 'llvm')
187 files changed, 5785 insertions, 1255 deletions
diff --git a/llvm/docs/LangRef.rst b/llvm/docs/LangRef.rst index 0339101..1c6823b 100644 --- a/llvm/docs/LangRef.rst +++ b/llvm/docs/LangRef.rst @@ -11455,9 +11455,9 @@ If the ``load`` is marked as ``atomic``, it takes an extra :ref:`ordering <ordering>` and optional ``syncscope("<target-scope>")`` argument. The ``release`` and ``acq_rel`` orderings are not valid on ``load`` instructions. Atomic loads produce :ref:`defined <memmodel>` results when they may see -multiple atomic stores. The type of the pointee must be an integer, pointer, or -floating-point type whose bit width is a power of two greater than or equal to -eight. ``align`` must be +multiple atomic stores. The type of the pointee must be an integer, pointer, +floating-point, or vector type whose bit width is a power of two greater than +or equal to eight. ``align`` must be explicitly specified on atomic loads. Note: if the alignment is not greater or equal to the size of the `<value>` type, the atomic operation is likely to require a lock and have poor performance. ``!nontemporal`` does not have any @@ -11594,9 +11594,9 @@ If the ``store`` is marked as ``atomic``, it takes an extra :ref:`ordering <ordering>` and optional ``syncscope("<target-scope>")`` argument. The ``acquire`` and ``acq_rel`` orderings aren't valid on ``store`` instructions. Atomic loads produce :ref:`defined <memmodel>` results when they may see -multiple atomic stores. The type of the pointee must be an integer, pointer, or -floating-point type whose bit width is a power of two greater than or equal to -eight. ``align`` must be +multiple atomic stores. The type of the pointee must be an integer, pointer, +floating-point, or vector type whose bit width is a power of two greater than +or equal to eight. ``align`` must be explicitly specified on atomic stores. Note: if the alignment is not greater or equal to the size of the `<value>` type, the atomic operation is likely to require a lock and have poor performance. ``!nontemporal`` does not have any diff --git a/llvm/docs/MLGO.rst b/llvm/docs/MLGO.rst index bf3de11..2443835 100644 --- a/llvm/docs/MLGO.rst +++ b/llvm/docs/MLGO.rst @@ -434,8 +434,27 @@ The latter is also used in tests. There is no C++ implementation of a log reader. We do not have a scenario motivating one. -IR2Vec Embeddings -================= +Embeddings +========== + +LLVM provides embedding frameworks to generate vector representations of code +at different abstraction levels. These embeddings capture syntactic, semantic, +and structural properties of the code and can be used as features for machine +learning models in various compiler optimization tasks. + +Two embedding frameworks are available: + +- **IR2Vec**: Generates embeddings for LLVM IR +- **MIR2Vec**: Generates embeddings for Machine IR + +Both frameworks follow a similar architecture with vocabulary-based embedding +generation, where a vocabulary maps code entities to n-dimensional floating +point vectors. These embeddings can be computed at multiple granularity levels +(instruction, basic block, and function) and used for ML-guided compiler +optimizations. + +IR2Vec +------ IR2Vec is a program embedding approach designed specifically for LLVM IR. It is implemented as a function analysis pass in LLVM. The IR2Vec embeddings @@ -466,7 +485,7 @@ The core components are: compute embeddings for instructions, basic blocks, and functions. Using IR2Vec ------------- +^^^^^^^^^^^^ .. note:: @@ -526,7 +545,7 @@ embeddings can be computed and accessed via an ``ir2vec::Embedder`` instance. 
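At this point the rendered document walks through the same three steps that the MIR2Vec section below spells out: obtain the vocabulary from an analysis, create an embedder for a function, and query embeddings at the desired granularity. A condensed sketch of that flow (the ``IR2VecVocabAnalysis`` and ``ir2vec::Embedder::create`` names are taken from the IR2Vec implementation; the full walkthrough in the rendered document remains authoritative):

.. code-block:: c++

   // Sketch: compute an IR2Vec embedding for a Function F, given access
   // to the ModuleAnalysisManager MAM.
   const ir2vec::Vocabulary &Vocab =
       MAM.getResult<IR2VecVocabAnalysis>(*F.getParent());
   std::unique_ptr<ir2vec::Embedder> Emb =
       ir2vec::Embedder::create(IR2VecKind::Symbolic, F, Vocab);
   ir2vec::Embedding FuncVec = Emb->getFunctionVector();

Because embeddings are ultimately vectors of doubles, they can be fed to ML models directly or compared with an ordinary dot product or cosine similarity.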
between different code snippets, or perform other analyses as needed. Further Details ---------------- +^^^^^^^^^^^^^^^ For more detailed information about the IR2Vec algorithm, its parameters, and advanced usage, please refer to the original paper: @@ -538,6 +557,123 @@ triplets from LLVM IR, see :doc:`CommandGuide/llvm-ir2vec`. The LLVM source code for ``IR2Vec`` can also be explored to understand the implementation details. +MIR2Vec +------- + +MIR2Vec is an extension of IR2Vec designed specifically for LLVM Machine IR +(MIR). It generates embeddings for machine-level instructions, basic blocks, +and functions. MIR2Vec operates on the target-specific machine representation, +capturing machine instruction semantics including opcodes, operands, and +register information at the machine level. + +MIR2Vec extends the vocabulary to include: + +- **Machine Opcodes**: Target-specific instruction opcodes derived from the + TargetInstrInfo, grouped by instruction semantics. + +- **Common Operands**: All common operand types (excluding register operands), + defined by the ``MachineOperand::MachineOperandType`` enum. + +- **Physical Register Classes**: Register classes defined by the target, + specialized for physical registers. + +- **Virtual Register Classes**: Register classes defined by the target, + specialized for virtual registers. + +The core components are: + +- **Vocabulary**: A mapping from machine IR entities (opcodes, operands, register + classes) to their vector representations. This is managed by + ``MIR2VecVocabLegacyAnalysis`` for the legacy pass manager, with a + ``MIR2VecVocabProvider`` that can be used standalone or wrapped by pass + managers. The vocabulary (.json file) contains sections for opcodes, common + operands, physical register classes, and virtual register classes. + + .. note:: + + The vocabulary file must contain all of these sections to be valid. + +- **Embedder**: A class (``mir2vec::MIREmbedder``) that uses the vocabulary to + compute embeddings for machine instructions, machine basic blocks, and + machine functions. Currently, ``SymbolicMIREmbedder`` is the available + implementation. + +Using MIR2Vec +^^^^^^^^^^^^^ + +.. note:: + + This section describes how to use MIR2Vec within LLVM passes. The + `llvm-ir2vec` tool (see :doc:`CommandGuide/llvm-ir2vec`) can be used for + generating MIR2Vec embeddings from Machine IR files (.mir), which is useful + for generating embeddings outside of compiler passes. + +To generate MIR2Vec embeddings in a compiler pass, first obtain the vocabulary, +then create an embedder instance to compute and access embeddings. + +1. **Get the Vocabulary**: In a MachineFunctionPass, get the vocabulary from the analysis: + .. code-block:: c++ + auto &VocabAnalysis = getAnalysis<MIR2VecVocabLegacyAnalysis>(); + auto VocabOrErr = VocabAnalysis.getMIR2VecVocabulary(*MF.getFunction().getParent()); + if (!VocabOrErr) { + // Handle error: vocabulary is not available or invalid + return; + } + const mir2vec::MIRVocabulary &Vocabulary = *VocabOrErr; + + Note that ``MIR2VecVocabLegacyAnalysis`` is an immutable pass. + +2. **Create Embedder instance**: + With the vocabulary, create an embedder for a specific machine function: + + .. code-block:: c++ + + // Assuming MF is a MachineFunction& + // For example, using MIR2VecKind::Symbolic: + std::unique_ptr<mir2vec::MIREmbedder> Emb = + mir2vec::MIREmbedder::create(MIR2VecKind::Symbolic, MF, Vocabulary); + + +3.
**Compute and Access Embeddings**: Call ``getMFunctionVector()`` to get the embedding for the machine function. .. code-block:: c++ mir2vec::Embedding FuncVector = Emb->getMFunctionVector(); Currently, ``MIREmbedder`` can generate embeddings at three levels: Machine Instructions, Machine Basic Blocks, and Machine Functions. Appropriate getters are provided to access the embeddings at these levels. .. note:: The validity of the ``MIREmbedder`` instance (and the embeddings it generates) is tied to the machine function it is associated with. If the machine function is modified, the embeddings may become stale and should be recomputed accordingly. +4. **Working with Embeddings:** Embeddings are represented as ``std::vector<double>``. These vectors can be used as features for machine learning models, to compute similarity scores between different code snippets, or to perform other analyses as needed. +Further Details +^^^^^^^^^^^^^^^ +For more detailed information about the MIR2Vec algorithm, its parameters, and advanced usage, please refer to the original paper: +`RL4ReAl: Reinforcement Learning for Register Allocation <https://doi.org/10.1145/3578360.3580273>`_. +For information about using the ``llvm-ir2vec`` tool to generate MIR2Vec +embeddings from Machine IR, see :doc:`CommandGuide/llvm-ir2vec`. +The LLVM source code for ``MIR2Vec`` can be explored to understand the implementation details. See ``llvm/include/llvm/CodeGen/MIR2Vec.h`` and ``llvm/lib/CodeGen/MIR2Vec.cpp``. + Building with ML support ======================== diff --git a/llvm/docs/ReleaseNotes.md b/llvm/docs/ReleaseNotes.md index 754cd40..6bd3278 100644 --- a/llvm/docs/ReleaseNotes.md +++ b/llvm/docs/ReleaseNotes.md @@ -66,6 +66,7 @@ Changes to the LLVM IR `@llvm.masked.gather` and `@llvm.masked.scatter` intrinsics has been removed. Instead, the `align` attribute should be placed on the pointer (or vector of pointers) argument. +* A `load atomic` may now be used with vector types on x86. Changes to LLVM infrastructure ------------------------------ diff --git a/llvm/include/llvm/ADT/FoldingSet.h b/llvm/include/llvm/ADT/FoldingSet.h index 82a88c4..675b5c6 100644 --- a/llvm/include/llvm/ADT/FoldingSet.h +++ b/llvm/include/llvm/ADT/FoldingSet.h @@ -332,6 +332,14 @@ class FoldingSetNodeID { /// Use a SmallVector to avoid a heap allocation in the common case.
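The FoldingSet.h hunk that follows collapses six hand-written ``AddInteger`` overloads into a single ``AddIntegerImpl`` template: every integer is profiled as one 32-bit word, plus a second word carrying the high bits when the type is wider than ``unsigned``. A standalone sketch of that scheme (it mirrors the new code; it is not itself part of the patch):

    #include <type_traits>
    #include <vector>

    // Push the low 32 bits, then the high 32 bits for wider-than-unsigned
    // types: exactly one or two words per integer.
    template <typename T>
    void addIntegerSketch(std::vector<unsigned> &Bits, T I) {
      static_assert(std::is_integral_v<T> && sizeof(T) <= sizeof(unsigned) * 2,
                    "T must be an integer type no wider than 64 bits");
      Bits.push_back(static_cast<unsigned>(I));
      if constexpr (sizeof(unsigned) < sizeof(T))
        Bits.push_back(
            static_cast<unsigned>(static_cast<unsigned long long>(I) >> 32));
    }

On a typical LP64 target this matches the old overloads: ``int`` contributes one word, while ``long`` and ``long long`` contribute two.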
SmallVector<unsigned, 32> Bits; + template <typename T> void AddIntegerImpl(T I) { + static_assert(std::is_integral_v<T> && sizeof(T) <= sizeof(unsigned) * 2, + "T must be an integer type no wider than 64 bits"); + Bits.push_back(static_cast<unsigned>(I)); + if constexpr (sizeof(unsigned) < sizeof(T)) + Bits.push_back(static_cast<unsigned long long>(I) >> 32); + } + public: FoldingSetNodeID() = default; @@ -348,24 +356,12 @@ public: "unexpected pointer size"); AddInteger(reinterpret_cast<uintptr_t>(Ptr)); } - void AddInteger(signed I) { Bits.push_back(I); } - void AddInteger(unsigned I) { Bits.push_back(I); } - void AddInteger(long I) { AddInteger((unsigned long)I); } - void AddInteger(unsigned long I) { - if (sizeof(long) == sizeof(int)) - AddInteger(unsigned(I)); - else if (sizeof(long) == sizeof(long long)) { - AddInteger((unsigned long long)I); - } else { - llvm_unreachable("unexpected sizeof(long)"); - } - } - void AddInteger(long long I) { AddInteger((unsigned long long)I); } - void AddInteger(unsigned long long I) { - AddInteger(unsigned(I)); - AddInteger(unsigned(I >> 32)); - } - + void AddInteger(signed I) { AddIntegerImpl(I); } + void AddInteger(unsigned I) { AddIntegerImpl(I); } + void AddInteger(long I) { AddIntegerImpl(I); } + void AddInteger(unsigned long I) { AddIntegerImpl(I); } + void AddInteger(long long I) { AddIntegerImpl(I); } + void AddInteger(unsigned long long I) { AddIntegerImpl(I); } void AddBoolean(bool B) { AddInteger(B ? 1U : 0U); } LLVM_ABI void AddString(StringRef String); LLVM_ABI void AddNodeID(const FoldingSetNodeID &ID); diff --git a/llvm/include/llvm/ADT/IndexedMap.h b/llvm/include/llvm/ADT/IndexedMap.h index cda0316..55935a7 100644 --- a/llvm/include/llvm/ADT/IndexedMap.h +++ b/llvm/include/llvm/ADT/IndexedMap.h @@ -22,12 +22,21 @@ #include "llvm/ADT/STLExtras.h" #include "llvm/ADT/SmallVector.h" -#include "llvm/ADT/identity.h" #include <cassert> namespace llvm { -template <typename T, typename ToIndexT = identity<unsigned>> class IndexedMap { +namespace detail { +template <class Ty> struct IdentityIndex { + using argument_type = Ty; + + Ty &operator()(Ty &self) const { return self; } + const Ty &operator()(const Ty &self) const { return self; } +}; +} // namespace detail + +template <typename T, typename ToIndexT = detail::IdentityIndex<unsigned>> +class IndexedMap { using IndexT = typename ToIndexT::argument_type; // Prefer SmallVector with zero inline storage over std::vector. IndexedMaps // can grow very large and SmallVector grows more efficiently as long as T @@ -35,11 +44,11 @@ template <typename T, typename ToIndexT = identity<unsigned>> class IndexedMap { using StorageT = SmallVector<T, 0>; StorageT storage_; - T nullVal_; + T nullVal_ = T(); ToIndexT toIndex_; public: - IndexedMap() : nullVal_(T()) {} + IndexedMap() = default; explicit IndexedMap(const T &val) : nullVal_(val) {} diff --git a/llvm/include/llvm/ADT/identity.h b/llvm/include/llvm/ADT/identity.h deleted file mode 100644 index 88d033f..0000000 --- a/llvm/include/llvm/ADT/identity.h +++ /dev/null @@ -1,31 +0,0 @@ -//===- llvm/ADT/Identity.h - Provide std::identity from C++20 ---*- C++ -*-===// -// -// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions. -// See https://llvm.org/LICENSE.txt for license information. -// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception -// -//===----------------------------------------------------------------------===// -// -// This file provides an implementation of std::identity from C++20. 
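(An aside on the IndexedMap change above, which this header's removal accompanies: the default index functor is now the file-local ``detail::IdentityIndex``, so an ``unsigned`` key is used directly as the index into the map's vector storage. A usage sketch, illustrative only:)

    #include "llvm/ADT/IndexedMap.h"
    #include <cassert>

    void indexedMapSketch() {
      // Default ToIndexT = detail::IdentityIndex<unsigned>: the key itself
      // is the index; unmapped slots read back as the "null" value.
      llvm::IndexedMap<int> Map(/*val=*/-1);
      Map.grow(15); // make indices 0..15 valid, filled with -1
      Map[3] = 42;
      assert(Map[3] == 42 && Map[7] == -1);
    }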
-// -// No library is required when using these functions. -// -//===----------------------------------------------------------------------===// - -#ifndef LLVM_ADT_IDENTITY_H -#define LLVM_ADT_IDENTITY_H - -namespace llvm { - -// Similar to `std::identity` from C++20. -template <class Ty> struct identity { - using is_transparent = void; - using argument_type = Ty; - - Ty &operator()(Ty &self) const { return self; } - const Ty &operator()(const Ty &self) const { return self; } -}; - -} // namespace llvm - -#endif // LLVM_ADT_IDENTITY_H diff --git a/llvm/include/llvm/CodeGen/AtomicExpand.h b/llvm/include/llvm/CodeGen/AtomicExpand.h index 1b8a988..34f520f 100644 --- a/llvm/include/llvm/CodeGen/AtomicExpand.h +++ b/llvm/include/llvm/CodeGen/AtomicExpand.h @@ -21,7 +21,7 @@ private: const TargetMachine *TM; public: - AtomicExpandPass(const TargetMachine *TM) : TM(TM) {} + AtomicExpandPass(const TargetMachine &TM) : TM(&TM) {} PreservedAnalyses run(Function &F, FunctionAnalysisManager &AM); }; diff --git a/llvm/include/llvm/CodeGen/BasicBlockSectionsProfileReader.h b/llvm/include/llvm/CodeGen/BasicBlockSectionsProfileReader.h index 82dd5fe..48650a6 100644 --- a/llvm/include/llvm/CodeGen/BasicBlockSectionsProfileReader.h +++ b/llvm/include/llvm/CodeGen/BasicBlockSectionsProfileReader.h @@ -155,7 +155,7 @@ class BasicBlockSectionsProfileReaderAnalysis public: static AnalysisKey Key; typedef BasicBlockSectionsProfileReader Result; - BasicBlockSectionsProfileReaderAnalysis(const TargetMachine *TM) : TM(TM) {} + BasicBlockSectionsProfileReaderAnalysis(const TargetMachine &TM) : TM(&TM) {} Result run(Function &F, FunctionAnalysisManager &AM); diff --git a/llvm/include/llvm/CodeGen/CodeGenPrepare.h b/llvm/include/llvm/CodeGen/CodeGenPrepare.h index dee3a9e..e673d0f 100644 --- a/llvm/include/llvm/CodeGen/CodeGenPrepare.h +++ b/llvm/include/llvm/CodeGen/CodeGenPrepare.h @@ -26,7 +26,7 @@ private: const TargetMachine *TM; public: - CodeGenPreparePass(const TargetMachine *TM) : TM(TM) {} + CodeGenPreparePass(const TargetMachine &TM) : TM(&TM) {} PreservedAnalyses run(Function &F, FunctionAnalysisManager &AM); }; diff --git a/llvm/include/llvm/CodeGen/ComplexDeinterleavingPass.h b/llvm/include/llvm/CodeGen/ComplexDeinterleavingPass.h index 4383249..7b74f2a 100644 --- a/llvm/include/llvm/CodeGen/ComplexDeinterleavingPass.h +++ b/llvm/include/llvm/CodeGen/ComplexDeinterleavingPass.h @@ -24,10 +24,10 @@ class TargetMachine; struct ComplexDeinterleavingPass : public PassInfoMixin<ComplexDeinterleavingPass> { private: - TargetMachine *TM; + const TargetMachine *TM; public: - ComplexDeinterleavingPass(TargetMachine *TM) : TM(TM) {} + ComplexDeinterleavingPass(const TargetMachine &TM) : TM(&TM) {} PreservedAnalyses run(Function &F, FunctionAnalysisManager &AM); }; diff --git a/llvm/include/llvm/CodeGen/DwarfEHPrepare.h b/llvm/include/llvm/CodeGen/DwarfEHPrepare.h index 3f625cd..5b68b8c 100644 --- a/llvm/include/llvm/CodeGen/DwarfEHPrepare.h +++ b/llvm/include/llvm/CodeGen/DwarfEHPrepare.h @@ -24,7 +24,7 @@ class DwarfEHPreparePass : public PassInfoMixin<DwarfEHPreparePass> { const TargetMachine *TM; public: - explicit DwarfEHPreparePass(const TargetMachine *TM_) : TM(TM_) {} + explicit DwarfEHPreparePass(const TargetMachine &TM_) : TM(&TM_) {} PreservedAnalyses run(Function &F, FunctionAnalysisManager &FAM); }; diff --git a/llvm/include/llvm/CodeGen/ExpandFp.h b/llvm/include/llvm/CodeGen/ExpandFp.h index f1f441b..28e6aec 100644 --- a/llvm/include/llvm/CodeGen/ExpandFp.h +++ b/llvm/include/llvm/CodeGen/ExpandFp.h 
@@ -22,7 +22,7 @@ private: CodeGenOptLevel OptLevel; public: - explicit ExpandFpPass(const TargetMachine *TM, CodeGenOptLevel OptLevel); + explicit ExpandFpPass(const TargetMachine &TM, CodeGenOptLevel OptLevel); PreservedAnalyses run(Function &F, FunctionAnalysisManager &AM); static bool isRequired() { return true; } diff --git a/llvm/include/llvm/CodeGen/ExpandLargeDivRem.h b/llvm/include/llvm/CodeGen/ExpandLargeDivRem.h index 6fc4409..b73a382 100644 --- a/llvm/include/llvm/CodeGen/ExpandLargeDivRem.h +++ b/llvm/include/llvm/CodeGen/ExpandLargeDivRem.h @@ -20,7 +20,7 @@ private: const TargetMachine *TM; public: - explicit ExpandLargeDivRemPass(const TargetMachine *TM_) : TM(TM_) {} + explicit ExpandLargeDivRemPass(const TargetMachine &TM_) : TM(&TM_) {} PreservedAnalyses run(Function &F, FunctionAnalysisManager &AM); }; diff --git a/llvm/include/llvm/CodeGen/ExpandMemCmp.h b/llvm/include/llvm/CodeGen/ExpandMemCmp.h index 94a8778..0b845e4 100644 --- a/llvm/include/llvm/CodeGen/ExpandMemCmp.h +++ b/llvm/include/llvm/CodeGen/ExpandMemCmp.h @@ -19,7 +19,7 @@ class ExpandMemCmpPass : public PassInfoMixin<ExpandMemCmpPass> { const TargetMachine *TM; public: - explicit ExpandMemCmpPass(const TargetMachine *TM_) : TM(TM_) {} + explicit ExpandMemCmpPass(const TargetMachine &TM_) : TM(&TM_) {} PreservedAnalyses run(Function &F, FunctionAnalysisManager &FAM); }; diff --git a/llvm/include/llvm/CodeGen/IndirectBrExpand.h b/llvm/include/llvm/CodeGen/IndirectBrExpand.h index f7d9d5d..572a712 100644 --- a/llvm/include/llvm/CodeGen/IndirectBrExpand.h +++ b/llvm/include/llvm/CodeGen/IndirectBrExpand.h @@ -19,7 +19,7 @@ class IndirectBrExpandPass : public PassInfoMixin<IndirectBrExpandPass> { const TargetMachine *TM; public: - IndirectBrExpandPass(const TargetMachine *TM) : TM(TM) {} + IndirectBrExpandPass(const TargetMachine &TM) : TM(&TM) {} PreservedAnalyses run(Function &F, FunctionAnalysisManager &FAM); }; diff --git a/llvm/include/llvm/CodeGen/InterleavedAccess.h b/llvm/include/llvm/CodeGen/InterleavedAccess.h index 31bd19a..42bfa84 100644 --- a/llvm/include/llvm/CodeGen/InterleavedAccess.h +++ b/llvm/include/llvm/CodeGen/InterleavedAccess.h @@ -25,7 +25,7 @@ class InterleavedAccessPass : public PassInfoMixin<InterleavedAccessPass> { const TargetMachine *TM; public: - explicit InterleavedAccessPass(const TargetMachine *TM) : TM(TM) {} + explicit InterleavedAccessPass(const TargetMachine &TM) : TM(&TM) {} PreservedAnalyses run(Function &F, FunctionAnalysisManager &FAM); }; diff --git a/llvm/include/llvm/CodeGen/InterleavedLoadCombine.h b/llvm/include/llvm/CodeGen/InterleavedLoadCombine.h index fa99aa3..2750fd4 100644 --- a/llvm/include/llvm/CodeGen/InterleavedLoadCombine.h +++ b/llvm/include/llvm/CodeGen/InterleavedLoadCombine.h @@ -20,7 +20,7 @@ class InterleavedLoadCombinePass const TargetMachine *TM; public: - explicit InterleavedLoadCombinePass(const TargetMachine *TM) : TM(TM) {} + explicit InterleavedLoadCombinePass(const TargetMachine &TM) : TM(&TM) {} PreservedAnalyses run(Function &F, FunctionAnalysisManager &FAM); }; diff --git a/llvm/include/llvm/CodeGen/MachineFunctionAnalysis.h b/llvm/include/llvm/CodeGen/MachineFunctionAnalysis.h index cd00e5f..bea1c78 100644 --- a/llvm/include/llvm/CodeGen/MachineFunctionAnalysis.h +++ b/llvm/include/llvm/CodeGen/MachineFunctionAnalysis.h @@ -42,7 +42,7 @@ public: FunctionAnalysisManager::Invalidator &); }; - MachineFunctionAnalysis(const TargetMachine *TM) : TM(TM) {}; + MachineFunctionAnalysis(const TargetMachine &TM) : TM(&TM) {}; LLVM_ABI Result 
run(Function &F, FunctionAnalysisManager &FAM); }; diff --git a/llvm/include/llvm/CodeGen/SafeStack.h b/llvm/include/llvm/CodeGen/SafeStack.h index e8f0d14..05ad40e 100644 --- a/llvm/include/llvm/CodeGen/SafeStack.h +++ b/llvm/include/llvm/CodeGen/SafeStack.h @@ -19,7 +19,7 @@ class SafeStackPass : public PassInfoMixin<SafeStackPass> { const TargetMachine *TM; public: - explicit SafeStackPass(const TargetMachine *TM_) : TM(TM_) {} + explicit SafeStackPass(const TargetMachine &TM_) : TM(&TM_) {} PreservedAnalyses run(Function &F, FunctionAnalysisManager &FAM); }; diff --git a/llvm/include/llvm/CodeGen/SelectOptimize.h b/llvm/include/llvm/CodeGen/SelectOptimize.h index 37024a15..33f66bb 100644 --- a/llvm/include/llvm/CodeGen/SelectOptimize.h +++ b/llvm/include/llvm/CodeGen/SelectOptimize.h @@ -25,7 +25,7 @@ class SelectOptimizePass : public PassInfoMixin<SelectOptimizePass> { const TargetMachine *TM; public: - explicit SelectOptimizePass(const TargetMachine *TM) : TM(TM) {} + explicit SelectOptimizePass(const TargetMachine &TM) : TM(&TM) {} PreservedAnalyses run(Function &F, FunctionAnalysisManager &FAM); }; diff --git a/llvm/include/llvm/CodeGen/SelectionDAGNodes.h b/llvm/include/llvm/CodeGen/SelectionDAGNodes.h index 69713d0..1759463 100644 --- a/llvm/include/llvm/CodeGen/SelectionDAGNodes.h +++ b/llvm/include/llvm/CodeGen/SelectionDAGNodes.h @@ -418,12 +418,21 @@ public: Unpredictable = 1 << 13, // Compare instructions which may carry the samesign flag. SameSign = 1 << 14, + // ISD::PTRADD operations that remain in bounds, i.e., the left operand is + // an address in a memory object in which the result of the operation also + // lies. WARNING: Since SDAG generally uses integers instead of pointer + // types, a PTRADD's pointer operand is effectively the result of an + // implicit inttoptr cast. Therefore, when an inbounds PTRADD uses a + // pointer P, transformations cannot assume that P has the provenance + // implied by its producer as, e.g., operations between producer and PTRADD + // that affect the provenance may have been optimized away. + InBounds = 1 << 15, // NOTE: Please update LargestValue in LLVM_DECLARE_ENUM_AS_BITMASK below // the class definition when adding new flags. PoisonGeneratingFlags = NoUnsignedWrap | NoSignedWrap | Exact | Disjoint | - NonNeg | NoNaNs | NoInfs | SameSign, + NonNeg | NoNaNs | NoInfs | SameSign | InBounds, FastMathFlags = NoNaNs | NoInfs | NoSignedZeros | AllowReciprocal | AllowContract | ApproximateFuncs | AllowReassociation, }; @@ -458,6 +467,7 @@ public: void setAllowReassociation(bool b) { setFlag<AllowReassociation>(b); } void setNoFPExcept(bool b) { setFlag<NoFPExcept>(b); } void setUnpredictable(bool b) { setFlag<Unpredictable>(b); } + void setInBounds(bool b) { setFlag<InBounds>(b); } // These are accessors for each flag.
bool hasNoUnsignedWrap() const { return Flags & NoUnsignedWrap; } @@ -475,6 +485,7 @@ public: bool hasAllowReassociation() const { return Flags & AllowReassociation; } bool hasNoFPExcept() const { return Flags & NoFPExcept; } bool hasUnpredictable() const { return Flags & Unpredictable; } + bool hasInBounds() const { return Flags & InBounds; } bool operator==(const SDNodeFlags &Other) const { return Flags == Other.Flags; @@ -484,7 +495,7 @@ public: }; LLVM_DECLARE_ENUM_AS_BITMASK(decltype(SDNodeFlags::None), - SDNodeFlags::SameSign); + SDNodeFlags::InBounds); inline SDNodeFlags operator|(SDNodeFlags LHS, SDNodeFlags RHS) { LHS |= RHS; diff --git a/llvm/include/llvm/CodeGen/StackProtector.h b/llvm/include/llvm/CodeGen/StackProtector.h index dfafc78..fbe7935 100644 --- a/llvm/include/llvm/CodeGen/StackProtector.h +++ b/llvm/include/llvm/CodeGen/StackProtector.h @@ -86,7 +86,7 @@ class StackProtectorPass : public PassInfoMixin<StackProtectorPass> { const TargetMachine *TM; public: - explicit StackProtectorPass(const TargetMachine *TM) : TM(TM) {} + explicit StackProtectorPass(const TargetMachine &TM) : TM(&TM) {} PreservedAnalyses run(Function &F, FunctionAnalysisManager &FAM); }; diff --git a/llvm/include/llvm/CodeGen/TypePromotion.h b/llvm/include/llvm/CodeGen/TypePromotion.h index efe5823..ba32a21 100644 --- a/llvm/include/llvm/CodeGen/TypePromotion.h +++ b/llvm/include/llvm/CodeGen/TypePromotion.h @@ -26,7 +26,7 @@ private: const TargetMachine *TM; public: - TypePromotionPass(const TargetMachine *TM): TM(TM) { } + TypePromotionPass(const TargetMachine &TM) : TM(&TM) {} PreservedAnalyses run(Function &F, FunctionAnalysisManager &AM); }; diff --git a/llvm/include/llvm/IR/IntrinsicsAMDGPU.td b/llvm/include/llvm/IR/IntrinsicsAMDGPU.td index 9e334d4..8e35109 100644 --- a/llvm/include/llvm/IR/IntrinsicsAMDGPU.td +++ b/llvm/include/llvm/IR/IntrinsicsAMDGPU.td @@ -3789,6 +3789,20 @@ def int_amdgcn_perm_pk16_b8_u4 : ClangBuiltin<"__builtin_amdgcn_perm_pk16_b8_u4" DefaultAttrsIntrinsic<[llvm_v4i32_ty], [llvm_i64_ty, llvm_i64_ty, llvm_v2i32_ty], [IntrNoMem, IntrSpeculatable]>; +class AMDGPUAddMinMax<LLVMType Ty, string Name> : ClangBuiltin<"__builtin_amdgcn_"#Name>, + DefaultAttrsIntrinsic<[Ty], [Ty, Ty, Ty, llvm_i1_ty /* clamp */], + [IntrNoMem, IntrSpeculatable, ImmArg<ArgIndex<3>>]>; + +def int_amdgcn_add_max_i32 : AMDGPUAddMinMax<llvm_i32_ty, "add_max_i32">; +def int_amdgcn_add_max_u32 : AMDGPUAddMinMax<llvm_i32_ty, "add_max_u32">; +def int_amdgcn_add_min_i32 : AMDGPUAddMinMax<llvm_i32_ty, "add_min_i32">; +def int_amdgcn_add_min_u32 : AMDGPUAddMinMax<llvm_i32_ty, "add_min_u32">; + +def int_amdgcn_pk_add_max_i16 : AMDGPUAddMinMax<llvm_v2i16_ty, "pk_add_max_i16">; +def int_amdgcn_pk_add_max_u16 : AMDGPUAddMinMax<llvm_v2i16_ty, "pk_add_max_u16">; +def int_amdgcn_pk_add_min_i16 : AMDGPUAddMinMax<llvm_v2i16_ty, "pk_add_min_i16">; +def int_amdgcn_pk_add_min_u16 : AMDGPUAddMinMax<llvm_v2i16_ty, "pk_add_min_u16">; + class AMDGPUCooperativeAtomicStore<LLVMType Ty> : Intrinsic < [], [llvm_anyptr_ty, // pointer to store to diff --git a/llvm/include/llvm/IR/RuntimeLibcalls.td b/llvm/include/llvm/IR/RuntimeLibcalls.td index a8b647c..3dc9055 100644 --- a/llvm/include/llvm/IR/RuntimeLibcalls.td +++ b/llvm/include/llvm/IR/RuntimeLibcalls.td @@ -2117,7 +2117,9 @@ defvar MSP430DefaultOptOut = [ __fixdfdi, __fixunsdfsi, __modsi3, __floatunsisf, __fixunsdfdi, __ltsf2, __floatdisf, __floatdidf, __lshrsi3, __subsf3, __umodhi3, __floatunsidf, - __floatundidf + __floatundidf, __gtdf2, __eqdf2, __gedf2, __ltdf2, 
__ledf2, + __adddf3, __divdf3, __divdi3, __moddi3, + __muldf3, __subdf3, __udivdi3, __umoddi3 ]; // EABI Libcalls - EABI Section 6.2 diff --git a/llvm/include/llvm/Passes/CodeGenPassBuilder.h b/llvm/include/llvm/Passes/CodeGenPassBuilder.h index 6a235c0..2f207925 100644 --- a/llvm/include/llvm/Passes/CodeGenPassBuilder.h +++ b/llvm/include/llvm/Passes/CodeGenPassBuilder.h @@ -736,8 +736,8 @@ void CodeGenPassBuilder<Derived, TargetMachineT>::addISelPasses( addPass(LowerEmuTLSPass()); addPass(PreISelIntrinsicLoweringPass(&TM)); - addPass(ExpandLargeDivRemPass(&TM)); - addPass(ExpandFpPass(&TM, getOptLevel())); + addPass(ExpandLargeDivRemPass(TM)); + addPass(ExpandFpPass(TM, getOptLevel())); derived().addIRPasses(addPass); derived().addCodeGenPrepare(addPass); @@ -773,7 +773,7 @@ void CodeGenPassBuilder<Derived, TargetMachineT>::addIRPasses( // target lowering hook. if (!Opt.DisableMergeICmps) addPass(MergeICmpsPass()); - addPass(ExpandMemCmpPass(&TM)); + addPass(ExpandMemCmpPass(TM)); } // Run GC lowering passes for builtin collectors @@ -812,7 +812,7 @@ void CodeGenPassBuilder<Derived, TargetMachineT>::addIRPasses( // Convert conditional moves to conditional jumps when profitable. if (getOptLevel() != CodeGenOptLevel::None && !Opt.DisableSelectOptimize) - addPass(SelectOptimizePass(&TM)); + addPass(SelectOptimizePass(TM)); if (Opt.EnableGlobalMergeFunc) addPass(GlobalMergeFuncPass()); @@ -839,14 +839,14 @@ void CodeGenPassBuilder<Derived, TargetMachineT>::addPassesToHandleExceptions( case ExceptionHandling::ARM: case ExceptionHandling::AIX: case ExceptionHandling::ZOS: - addPass(DwarfEHPreparePass(&TM)); + addPass(DwarfEHPreparePass(TM)); break; case ExceptionHandling::WinEH: // We support using both GCC-style and MSVC-style exceptions on Windows, so // add both preparation passes. Each pass will only actually run if it // recognizes the personality function. addPass(WinEHPreparePass()); - addPass(DwarfEHPreparePass(&TM)); + addPass(DwarfEHPreparePass(TM)); break; case ExceptionHandling::Wasm: // Wasm EH uses Windows EH instructions, but it does not need to demote PHIs @@ -871,7 +871,7 @@ template <typename Derived, typename TargetMachineT> void CodeGenPassBuilder<Derived, TargetMachineT>::addCodeGenPrepare( AddIRPass &addPass) const { if (getOptLevel() != CodeGenOptLevel::None && !Opt.DisableCGP) - addPass(CodeGenPreparePass(&TM)); + addPass(CodeGenPreparePass(TM)); // TODO: Default ctor'd RewriteSymbolPass is no-op. // addPass(RewriteSymbolPass()); } @@ -892,8 +892,8 @@ void CodeGenPassBuilder<Derived, TargetMachineT>::addISelPrepare( addPass(CallBrPreparePass()); // Add both the safe stack and the stack protection passes: each of them will // only protect functions that have corresponding attributes. - addPass(SafeStackPass(&TM)); - addPass(StackProtectorPass(&TM)); + addPass(SafeStackPass(TM)); + addPass(StackProtectorPass(TM)); if (Opt.PrintISelInput) addPass(PrintFunctionPass(dbgs(), diff --git a/llvm/include/llvm/Support/GlobPattern.h b/llvm/include/llvm/Support/GlobPattern.h index c1b4484..8cae6a3 100644 --- a/llvm/include/llvm/Support/GlobPattern.h +++ b/llvm/include/llvm/Support/GlobPattern.h @@ -63,21 +63,30 @@ public: // Returns true for glob pattern "*". Can be used to avoid expensive // preparation/acquisition of the input for match(). 
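The GlobPattern.h hunk continues below with the conservative fast-path accessors (``prefix()``, ``suffix()``, and the new ``longest_substr()``). A sketch of how a caller might use them to pre-filter candidates before running the full matcher (a hypothetical helper, not LLVM code):

    #include "llvm/ADT/StringRef.h"
    #include "llvm/Support/GlobPattern.h"

    // Cheap, conservative rejects first: the accessors return plain
    // substrings of the pattern, so failing any of these checks means the
    // full match() would fail too.
    static bool quickMatch(const llvm::GlobPattern &Pat, llvm::StringRef S) {
      if (!S.starts_with(Pat.prefix()) || !S.ends_with(Pat.suffix()))
        return false;
      if (!S.contains(Pat.longest_substr()))
        return false;
      return Pat.match(S); // run the full matcher only on survivors
    }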
bool isTrivialMatchAll() const { - if (!Prefix.empty()) + if (PrefixSize) return false; - if (!Suffix.empty()) + if (SuffixSize) return false; if (SubGlobs.size() != 1) return false; return SubGlobs[0].getPat() == "*"; } - StringRef prefix() const { return Prefix; } - StringRef suffix() const { return Suffix; } + // The following functions are just shortcuts for faster matching. They are + // conservative to simplify implementations. + + // Returns plain prefix of the pattern. + StringRef prefix() const { return Pattern.take_front(PrefixSize); } + // Returns plain suffix of the pattern. + StringRef suffix() const { return Pattern.take_back(SuffixSize); } + // Returns the longest plain substring of the pattern between prefix and + // suffix. + StringRef longest_substr() const; private: - StringRef Prefix; - StringRef Suffix; + StringRef Pattern; + size_t PrefixSize = 0; + size_t SuffixSize = 0; struct SubGlobPattern { /// \param Pat the pattern to match against diff --git a/llvm/lib/Analysis/InlineCost.cpp b/llvm/lib/Analysis/InlineCost.cpp index 757f689..c4fee39 100644 --- a/llvm/lib/Analysis/InlineCost.cpp +++ b/llvm/lib/Analysis/InlineCost.cpp @@ -751,7 +751,7 @@ class InlineCostCallAnalyzer final : public CallAnalyzer { if (CA.analyze().isSuccess()) { // We were able to inline the indirect call! Subtract the cost from the // threshold to get the bonus we want to apply, but don't go below zero. - Cost -= std::max(0, CA.getThreshold() - CA.getCost()); + addCost(-std::max(0, CA.getThreshold() - CA.getCost())); } } else // Otherwise simply add the cost for merely making the call. @@ -1191,7 +1191,7 @@ class InlineCostCallAnalyzer final : public CallAnalyzer { // If this function uses the coldcc calling convention, prefer not to inline // it. if (F.getCallingConv() == CallingConv::Cold) - Cost += InlineConstants::ColdccPenalty; + addCost(InlineConstants::ColdccPenalty); LLVM_DEBUG(dbgs() << " Initial cost: " << Cost << "\n"); @@ -2193,7 +2193,7 @@ void InlineCostCallAnalyzer::updateThreshold(CallBase &Call, Function &Callee) { // the cost of inlining it drops dramatically. It may seem odd to update // Cost in updateThreshold, but the bonus depends on the logic in this method. 
if (isSoleCallToLocalFunction(Call, F)) { - Cost -= LastCallToStaticBonus; + addCost(-LastCallToStaticBonus); StaticBonusApplied = LastCallToStaticBonus; } } diff --git a/llvm/lib/CGData/CodeGenDataReader.cpp b/llvm/lib/CGData/CodeGenDataReader.cpp index b1cd939..aeb4a4d 100644 --- a/llvm/lib/CGData/CodeGenDataReader.cpp +++ b/llvm/lib/CGData/CodeGenDataReader.cpp @@ -125,7 +125,7 @@ Error IndexedCodeGenDataReader::read() { FunctionMapRecord.setReadStableFunctionMapNames( IndexedCodeGenDataReadFunctionMapNames); if (IndexedCodeGenDataLazyLoading) - FunctionMapRecord.lazyDeserialize(SharedDataBuffer, + FunctionMapRecord.lazyDeserialize(std::move(SharedDataBuffer), Header.StableFunctionMapOffset); else FunctionMapRecord.deserialize(Ptr); diff --git a/llvm/lib/CGData/StableFunctionMap.cpp b/llvm/lib/CGData/StableFunctionMap.cpp index 46e04bd..d0fae3a 100644 --- a/llvm/lib/CGData/StableFunctionMap.cpp +++ b/llvm/lib/CGData/StableFunctionMap.cpp @@ -137,6 +137,7 @@ size_t StableFunctionMap::size(SizeType Type) const { const StableFunctionMap::StableFunctionEntries & StableFunctionMap::at(HashFuncsMapType::key_type FunctionHash) const { auto It = HashToFuncs.find(FunctionHash); + assert(It != HashToFuncs.end() && "FunctionHash not found!"); if (isLazilyLoaded()) deserializeLazyLoadingEntry(It); return It->second.Entries; diff --git a/llvm/lib/CodeGen/ExpandFp.cpp b/llvm/lib/CodeGen/ExpandFp.cpp index 2b5ced3..f44eb22 100644 --- a/llvm/lib/CodeGen/ExpandFp.cpp +++ b/llvm/lib/CodeGen/ExpandFp.cpp @@ -1108,8 +1108,8 @@ public: }; } // namespace -ExpandFpPass::ExpandFpPass(const TargetMachine *TM, CodeGenOptLevel OptLevel) - : TM(TM), OptLevel(OptLevel) {} +ExpandFpPass::ExpandFpPass(const TargetMachine &TM, CodeGenOptLevel OptLevel) + : TM(&TM), OptLevel(OptLevel) {} void ExpandFpPass::printPipeline( raw_ostream &OS, function_ref<StringRef(StringRef)> MapClassName2PassName) { diff --git a/llvm/lib/CodeGen/GlobalISel/IRTranslator.cpp b/llvm/lib/CodeGen/GlobalISel/IRTranslator.cpp index 1fe38d6..b49040b 100644 --- a/llvm/lib/CodeGen/GlobalISel/IRTranslator.cpp +++ b/llvm/lib/CodeGen/GlobalISel/IRTranslator.cpp @@ -1862,15 +1862,19 @@ bool IRTranslator::translateVectorDeinterleave2Intrinsic( void IRTranslator::getStackGuard(Register DstReg, MachineIRBuilder &MIRBuilder) { + Value *Global = TLI->getSDagStackGuard(*MF->getFunction().getParent()); + if (!Global) { + LLVMContext &Ctx = MIRBuilder.getContext(); + Ctx.diagnose(DiagnosticInfoGeneric("unable to lower stackguard")); + MIRBuilder.buildUndef(DstReg); + return; + } + const TargetRegisterInfo *TRI = MF->getSubtarget().getRegisterInfo(); MRI->setRegClass(DstReg, TRI->getPointerRegClass()); auto MIB = MIRBuilder.buildInstr(TargetOpcode::LOAD_STACK_GUARD, {DstReg}, {}); - Value *Global = TLI->getSDagStackGuard(*MF->getFunction().getParent()); - if (!Global) - return; - unsigned AddrSpace = Global->getType()->getPointerAddressSpace(); LLT PtrTy = LLT::pointer(AddrSpace, DL->getPointerSizeInBits(AddrSpace)); diff --git a/llvm/lib/CodeGen/InlineSpiller.cpp b/llvm/lib/CodeGen/InlineSpiller.cpp index d6e8505..c3e0964 100644 --- a/llvm/lib/CodeGen/InlineSpiller.cpp +++ b/llvm/lib/CodeGen/InlineSpiller.cpp @@ -721,6 +721,9 @@ bool InlineSpiller::reMaterializeFor(LiveInterval &VirtReg, MachineInstr &MI) { // Allocate a new register for the remat. Register NewVReg = Edit->createFrom(Original); + // Constrain it to the register class of MI. + MRI.constrainRegClass(NewVReg, MRI.getRegClass(VirtReg.reg())); + // Finally we can rematerialize OrigMI before MI. 
SlotIndex DefIdx = Edit->rematerializeAt(*MI.getParent(), MI, NewVReg, RM, TRI); diff --git a/llvm/lib/CodeGen/PreISelIntrinsicLowering.cpp b/llvm/lib/CodeGen/PreISelIntrinsicLowering.cpp index f54e2f2..620d3d3 100644 --- a/llvm/lib/CodeGen/PreISelIntrinsicLowering.cpp +++ b/llvm/lib/CodeGen/PreISelIntrinsicLowering.cpp @@ -593,7 +593,7 @@ bool PreISelIntrinsicLowering::lowerIntrinsics(Module &M) const { case Intrinsic::log: Changed |= forEachCall(F, [&](CallInst *CI) { Type *Ty = CI->getArgOperand(0)->getType(); - if (!isa<ScalableVectorType>(Ty)) + if (!TM || !isa<ScalableVectorType>(Ty)) return false; const TargetLowering *TL = TM->getSubtargetImpl(F)->getTargetLowering(); unsigned Op = TL->IntrinsicIDToISD(F.getIntrinsicID()); diff --git a/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h b/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h index 603dc34..9656a30 100644 --- a/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h +++ b/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h @@ -890,6 +890,7 @@ private: SDValue ScalarizeVecRes_UnaryOpWithExtraInput(SDNode *N); SDValue ScalarizeVecRes_INSERT_VECTOR_ELT(SDNode *N); SDValue ScalarizeVecRes_LOAD(LoadSDNode *N); + SDValue ScalarizeVecRes_ATOMIC_LOAD(AtomicSDNode *N); SDValue ScalarizeVecRes_SCALAR_TO_VECTOR(SDNode *N); SDValue ScalarizeVecRes_VSELECT(SDNode *N); SDValue ScalarizeVecRes_SELECT(SDNode *N); diff --git a/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp b/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp index 3b5f83f..bb4a8d9 100644 --- a/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp +++ b/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp @@ -69,6 +69,9 @@ void DAGTypeLegalizer::ScalarizeVectorResult(SDNode *N, unsigned ResNo) { R = ScalarizeVecRes_UnaryOpWithExtraInput(N); break; case ISD::INSERT_VECTOR_ELT: R = ScalarizeVecRes_INSERT_VECTOR_ELT(N); break; + case ISD::ATOMIC_LOAD: + R = ScalarizeVecRes_ATOMIC_LOAD(cast<AtomicSDNode>(N)); + break; case ISD::LOAD: R = ScalarizeVecRes_LOAD(cast<LoadSDNode>(N));break; case ISD::SCALAR_TO_VECTOR: R = ScalarizeVecRes_SCALAR_TO_VECTOR(N); break; case ISD::SIGN_EXTEND_INREG: R = ScalarizeVecRes_InregOp(N); break; @@ -475,6 +478,18 @@ SDValue DAGTypeLegalizer::ScalarizeVecRes_INSERT_VECTOR_ELT(SDNode *N) { return Op; } +SDValue DAGTypeLegalizer::ScalarizeVecRes_ATOMIC_LOAD(AtomicSDNode *N) { + SDValue Result = DAG.getAtomicLoad( + N->getExtensionType(), SDLoc(N), N->getMemoryVT().getVectorElementType(), + N->getValueType(0).getVectorElementType(), N->getChain(), N->getBasePtr(), + N->getMemOperand()); + + // Legalize the chain result - switch anything that used the old chain to + // use the new one. + ReplaceValueWith(SDValue(N, 1), Result.getValue(1)); + return Result; +} + SDValue DAGTypeLegalizer::ScalarizeVecRes_LOAD(LoadSDNode *N) { assert(N->isUnindexed() && "Indexed vector load?"); diff --git a/llvm/lib/CodeGen/SelectionDAG/SelectionDAG.cpp b/llvm/lib/CodeGen/SelectionDAG/SelectionDAG.cpp index 90edaf3..379242e 100644 --- a/llvm/lib/CodeGen/SelectionDAG/SelectionDAG.cpp +++ b/llvm/lib/CodeGen/SelectionDAG/SelectionDAG.cpp @@ -8620,7 +8620,10 @@ SDValue SelectionDAG::getMemBasePlusOffset(SDValue Ptr, SDValue Offset, if (TLI->shouldPreservePtrArith(this->getMachineFunction().getFunction(), BasePtrVT)) return getNode(ISD::PTRADD, DL, BasePtrVT, Ptr, Offset, Flags); - return getNode(ISD::ADD, DL, BasePtrVT, Ptr, Offset, Flags); + // InBounds only applies to PTRADD, don't set it if we generate ADD. 
+ SDNodeFlags AddFlags = Flags; + AddFlags.setInBounds(false); + return getNode(ISD::ADD, DL, BasePtrVT, Ptr, Offset, AddFlags); } /// Returns true if memcpy source is constant data. diff --git a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp index dcf2df3..bfa566a 100644 --- a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp +++ b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp @@ -3131,12 +3131,16 @@ void SelectionDAGBuilder::visitSPDescriptorParent(StackProtectorDescriptor &SPD, if (TLI.useLoadStackGuardNode(M)) { Guard = getLoadStackGuard(DAG, dl, Chain); } else { - const Value *IRGuard = TLI.getSDagStackGuard(M); - SDValue GuardPtr = getValue(IRGuard); - - Guard = DAG.getLoad(PtrMemTy, dl, Chain, GuardPtr, - MachinePointerInfo(IRGuard, 0), Align, - MachineMemOperand::MOVolatile); + if (const Value *IRGuard = TLI.getSDagStackGuard(M)) { + SDValue GuardPtr = getValue(IRGuard); + Guard = DAG.getLoad(PtrMemTy, dl, Chain, GuardPtr, + MachinePointerInfo(IRGuard, 0), Align, + MachineMemOperand::MOVolatile); + } else { + LLVMContext &Ctx = *DAG.getContext(); + Ctx.diagnose(DiagnosticInfoGeneric("unable to lower stackguard")); + Guard = DAG.getPOISON(PtrMemTy); + } } // Perform the comparison via a getsetcc. @@ -4386,6 +4390,7 @@ void SelectionDAGBuilder::visitGetElementPtr(const User &I) { if (NW.hasNoUnsignedWrap() || (int64_t(Offset) >= 0 && NW.hasNoUnsignedSignedWrap())) Flags |= SDNodeFlags::NoUnsignedWrap; + Flags.setInBounds(NW.isInBounds()); N = DAG.getMemBasePlusOffset( N, DAG.getConstant(Offset, dl, N.getValueType()), dl, Flags); @@ -4429,6 +4434,7 @@ void SelectionDAGBuilder::visitGetElementPtr(const User &I) { if (NW.hasNoUnsignedWrap() || (Offs.isNonNegative() && NW.hasNoUnsignedSignedWrap())) Flags.setNoUnsignedWrap(true); + Flags.setInBounds(NW.isInBounds()); OffsVal = DAG.getSExtOrTrunc(OffsVal, dl, N.getValueType()); @@ -4498,6 +4504,7 @@ void SelectionDAGBuilder::visitGetElementPtr(const User &I) { // pointer index type (add nuw). 
SDNodeFlags AddFlags; AddFlags.setNoUnsignedWrap(NW.hasNoUnsignedWrap()); + AddFlags.setInBounds(NW.isInBounds()); N = DAG.getMemBasePlusOffset(N, IdxN, dl, AddFlags); } @@ -7324,6 +7331,13 @@ void SelectionDAGBuilder::visitIntrinsicCall(const CallInst &I, Res = DAG.getPtrExtOrTrunc(Res, sdl, PtrTy); } else { const Value *Global = TLI.getSDagStackGuard(M); + if (!Global) { + LLVMContext &Ctx = *DAG.getContext(); + Ctx.diagnose(DiagnosticInfoGeneric("unable to lower stackguard")); + setValue(&I, DAG.getPOISON(PtrTy)); + return; + } + Align Align = DAG.getDataLayout().getPrefTypeAlign(Global->getType()); Res = DAG.getLoad(PtrTy, sdl, Chain, getValue(Global), MachinePointerInfo(Global, 0), Align, diff --git a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGDumper.cpp b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGDumper.cpp index 39cbfad..77377d3 100644 --- a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGDumper.cpp +++ b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGDumper.cpp @@ -689,6 +689,9 @@ void SDNode::print_details(raw_ostream &OS, const SelectionDAG *G) const { if (getFlags().hasSameSign()) OS << " samesign"; + if (getFlags().hasInBounds()) + OS << " inbounds"; + if (getFlags().hasNonNeg()) OS << " nneg"; diff --git a/llvm/lib/Frontend/HLSL/CBuffer.cpp b/llvm/lib/Frontend/HLSL/CBuffer.cpp index 407b6ad..1f53c87 100644 --- a/llvm/lib/Frontend/HLSL/CBuffer.cpp +++ b/llvm/lib/Frontend/HLSL/CBuffer.cpp @@ -43,8 +43,13 @@ std::optional<CBufferMetadata> CBufferMetadata::get(Module &M) { for (const MDNode *MD : CBufMD->operands()) { assert(MD->getNumOperands() && "Invalid cbuffer metadata"); - auto *Handle = cast<GlobalVariable>( - cast<ValueAsMetadata>(MD->getOperand(0))->getValue()); + // For an unused cbuffer, the handle may have been optimized out + Metadata *OpMD = MD->getOperand(0); + if (!OpMD) + continue; + + auto *Handle = + cast<GlobalVariable>(cast<ValueAsMetadata>(OpMD)->getValue()); CBufferMapping &Mapping = Result->Mappings.emplace_back(Handle); for (int I = 1, E = MD->getNumOperands(); I < E; ++I) { diff --git a/llvm/lib/IR/Verifier.cpp b/llvm/lib/IR/Verifier.cpp index 03da154..7917712 100644 --- a/llvm/lib/IR/Verifier.cpp +++ b/llvm/lib/IR/Verifier.cpp @@ -4446,10 +4446,12 @@ void Verifier::visitLoadInst(LoadInst &LI) { Check(LI.getOrdering() != AtomicOrdering::Release && LI.getOrdering() != AtomicOrdering::AcquireRelease, "Load cannot have Release ordering", &LI); - Check(ElTy->isIntOrPtrTy() || ElTy->isFloatingPointTy(), - "atomic load operand must have integer, pointer, or floating point " - "type!", + Check(ElTy->getScalarType()->isIntOrPtrTy() || + ElTy->getScalarType()->isFloatingPointTy(), + "atomic load operand must have integer, pointer, floating point, " + "or vector type!", ElTy, &LI); + checkAtomicMemAccessSize(ElTy, &LI); } else { Check(LI.getSyncScopeID() == SyncScope::System, @@ -4472,9 +4474,10 @@ void Verifier::visitStoreInst(StoreInst &SI) { Check(SI.getOrdering() != AtomicOrdering::Acquire && SI.getOrdering() != AtomicOrdering::AcquireRelease, "Store cannot have Acquire ordering", &SI); - Check(ElTy->isIntOrPtrTy() || ElTy->isFloatingPointTy(), - "atomic store operand must have integer, pointer, or floating point " - "type!", + Check(ElTy->getScalarType()->isIntOrPtrTy() || + ElTy->getScalarType()->isFloatingPointTy(), + "atomic store operand must have integer, pointer, floating point, " + "or vector type!", ElTy, &SI); checkAtomicMemAccessSize(ElTy, &SI); } else { diff --git a/llvm/lib/Passes/PassBuilder.cpp b/llvm/lib/Passes/PassBuilder.cpp index 048c58d..3c9a27a 100644 
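The Verifier hunks just above relax the atomic ``load``/``store`` element-type check to also accept vector types, matching the LangRef and release-note changes earlier in this diff. A sketch of what now verifies, built with ``IRBuilder`` (not taken from the patch; assumes an in-scope ``Builder`` and a pointer value ``Ptr``):

    // A <2 x i32> monotonic atomic load; atomic loads must carry an
    // explicit alignment, here the full 8 bytes of the vector.
    llvm::LoadInst *LI = Builder.CreateAlignedLoad(
        llvm::FixedVectorType::get(Builder.getInt32Ty(), 2), Ptr,
        llvm::Align(8));
    LI->setAtomic(llvm::AtomicOrdering::Monotonic);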
--- a/llvm/lib/Passes/PassBuilder.cpp +++ b/llvm/lib/Passes/PassBuilder.cpp @@ -669,7 +669,14 @@ void PassBuilder::registerFunctionAnalyses(FunctionAnalysisManager &FAM) { FAM.registerPass([&] { return buildDefaultAAPipeline(); }); #define FUNCTION_ANALYSIS(NAME, CREATE_PASS) \ - FAM.registerPass([&] { return CREATE_PASS; }); + if constexpr (std::is_constructible_v< \ + std::remove_reference_t<decltype(CREATE_PASS)>, \ + const TargetMachine &>) { \ + if (TM) \ + FAM.registerPass([&] { return CREATE_PASS; }); \ + } else { \ + FAM.registerPass([&] { return CREATE_PASS; }); \ + } #include "PassRegistry.def" for (auto &C : FunctionAnalysisRegistrationCallbacks) @@ -2038,6 +2045,14 @@ Error PassBuilder::parseModulePass(ModulePassManager &MPM, } #define FUNCTION_PASS(NAME, CREATE_PASS) \ if (Name == NAME) { \ + if constexpr (std::is_constructible_v< \ + std::remove_reference_t<decltype(CREATE_PASS)>, \ + const TargetMachine &>) { \ + if (!TM) \ + return make_error<StringError>( \ + formatv("pass '{0}' requires TargetMachine", Name).str(), \ + inconvertibleErrorCode()); \ + } \ MPM.addPass(createModuleToFunctionPassAdaptor(CREATE_PASS)); \ return Error::success(); \ } @@ -2046,6 +2061,18 @@ Error PassBuilder::parseModulePass(ModulePassManager &MPM, auto Params = parsePassParameters(PARSER, Name, NAME); \ if (!Params) \ return Params.takeError(); \ + auto CreatePass = CREATE_PASS; \ + if constexpr (std::is_constructible_v< \ + std::remove_reference_t<decltype(CreatePass( \ + Params.get()))>, \ + const TargetMachine &, \ + std::remove_reference_t<decltype(Params.get())>>) { \ + if (!TM) { \ + return make_error<StringError>( \ + formatv("pass '{0}' requires TargetMachine", Name).str(), \ + inconvertibleErrorCode()); \ + } \ + } \ MPM.addPass(createModuleToFunctionPassAdaptor(CREATE_PASS(Params.get()))); \ return Error::success(); \ } @@ -2152,6 +2179,14 @@ Error PassBuilder::parseCGSCCPass(CGSCCPassManager &CGPM, } #define FUNCTION_PASS(NAME, CREATE_PASS) \ if (Name == NAME) { \ + if constexpr (std::is_constructible_v< \ + std::remove_reference_t<decltype(CREATE_PASS)>, \ + const TargetMachine &>) { \ + if (!TM) \ + return make_error<StringError>( \ + formatv("pass '{0}' requires TargetMachine", Name).str(), \ + inconvertibleErrorCode()); \ + } \ CGPM.addPass(createCGSCCToFunctionPassAdaptor(CREATE_PASS)); \ return Error::success(); \ } @@ -2160,6 +2195,18 @@ Error PassBuilder::parseCGSCCPass(CGSCCPassManager &CGPM, auto Params = parsePassParameters(PARSER, Name, NAME); \ if (!Params) \ return Params.takeError(); \ + auto CreatePass = CREATE_PASS; \ + if constexpr (std::is_constructible_v< \ + std::remove_reference_t<decltype(CreatePass( \ + Params.get()))>, \ + const TargetMachine &, \ + std::remove_reference_t<decltype(Params.get())>>) { \ + if (!TM) { \ + return make_error<StringError>( \ + formatv("pass '{0}' requires TargetMachine", Name).str(), \ + inconvertibleErrorCode()); \ + } \ + } \ CGPM.addPass(createCGSCCToFunctionPassAdaptor(CREATE_PASS(Params.get()))); \ return Error::success(); \ } @@ -2239,6 +2286,14 @@ Error PassBuilder::parseFunctionPass(FunctionPassManager &FPM, // Now expand the basic registered passes from the .inc file. 
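All of the ``if constexpr`` guards added in this file (above and continuing below) hinge on one compile-time question: can the pass type be constructed from a ``const TargetMachine &``? When it can and no ``TargetMachine`` is available, registration or parsing is rejected with the "requires TargetMachine" error instead of dereferencing a null ``TM``. A self-contained illustration of the detection idiom (with stand-in types, not LLVM's):

    #include <type_traits>

    struct TargetMachine {}; // stand-in for llvm::TargetMachine

    struct NeedsTM { explicit NeedsTM(const TargetMachine &) {} };
    struct Standalone { Standalone() = default; };

    static_assert(std::is_constructible_v<NeedsTM, const TargetMachine &>,
                  "detected: the guard must check TM for null");
    static_assert(!std::is_constructible_v<Standalone, const TargetMachine &>,
                  "not detected: the pass is registered unconditionally");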
#define FUNCTION_PASS(NAME, CREATE_PASS) \ if (Name == NAME) { \ + if constexpr (std::is_constructible_v< \ + std::remove_reference_t<decltype(CREATE_PASS)>, \ + const TargetMachine &>) { \ + if (!TM) \ + return make_error<StringError>( \ + formatv("pass '{0}' requires TargetMachine", Name).str(), \ + inconvertibleErrorCode()); \ + } \ FPM.addPass(CREATE_PASS); \ return Error::success(); \ } @@ -2247,14 +2302,34 @@ Error PassBuilder::parseFunctionPass(FunctionPassManager &FPM, auto Params = parsePassParameters(PARSER, Name, NAME); \ if (!Params) \ return Params.takeError(); \ + auto CreatePass = CREATE_PASS; \ + if constexpr (std::is_constructible_v< \ + std::remove_reference_t<decltype(CreatePass( \ + Params.get()))>, \ + const TargetMachine &, \ + std::remove_reference_t<decltype(Params.get())>>) { \ + if (!TM) { \ + return make_error<StringError>( \ + formatv("pass '{0}' requires TargetMachine", Name).str(), \ + inconvertibleErrorCode()); \ + } \ + } \ FPM.addPass(CREATE_PASS(Params.get())); \ return Error::success(); \ } #define FUNCTION_ANALYSIS(NAME, CREATE_PASS) \ if (Name == "require<" NAME ">") { \ + if constexpr (std::is_constructible_v< \ + std::remove_reference_t<decltype(CREATE_PASS)>, \ + const TargetMachine &>) { \ + if (!TM) \ + return make_error<StringError>( \ + formatv("pass '{0}' requires TargetMachine", Name).str(), \ + inconvertibleErrorCode()); \ + } \ FPM.addPass( \ - RequireAnalysisPass< \ - std::remove_reference_t<decltype(CREATE_PASS)>, Function>()); \ + RequireAnalysisPass<std::remove_reference_t<decltype(CREATE_PASS)>, \ + Function>()); \ return Error::success(); \ } \ if (Name == "invalidate<" NAME ">") { \ diff --git a/llvm/lib/Passes/PassRegistry.def b/llvm/lib/Passes/PassRegistry.def index a66b6e4..1853cdd 100644 --- a/llvm/lib/Passes/PassRegistry.def +++ b/llvm/lib/Passes/PassRegistry.def @@ -345,7 +345,7 @@ FUNCTION_ANALYSIS("aa", AAManager()) FUNCTION_ANALYSIS("access-info", LoopAccessAnalysis()) FUNCTION_ANALYSIS("assumptions", AssumptionAnalysis()) FUNCTION_ANALYSIS("bb-sections-profile-reader", - BasicBlockSectionsProfileReaderAnalysis(TM)) + BasicBlockSectionsProfileReaderAnalysis(*TM)) FUNCTION_ANALYSIS("block-freq", BlockFrequencyAnalysis()) FUNCTION_ANALYSIS("branch-prob", BranchProbabilityAnalysis()) FUNCTION_ANALYSIS("cycles", CycleAnalysis()) @@ -356,7 +356,7 @@ FUNCTION_ANALYSIS("domfrontier", DominanceFrontierAnalysis()) FUNCTION_ANALYSIS("domtree", DominatorTreeAnalysis()) FUNCTION_ANALYSIS("ephemerals", EphemeralValuesAnalysis()) FUNCTION_ANALYSIS("func-properties", FunctionPropertiesAnalysis()) -FUNCTION_ANALYSIS("machine-function-info", MachineFunctionAnalysis(TM)) +FUNCTION_ANALYSIS("machine-function-info", MachineFunctionAnalysis(*TM)) FUNCTION_ANALYSIS("gc-function", GCFunctionAnalysis()) FUNCTION_ANALYSIS("inliner-size-estimator", InlineSizeEstimatorAnalysis()) FUNCTION_ANALYSIS("last-run-tracking", LastRunTrackingAnalysis()) @@ -406,14 +406,14 @@ FUNCTION_PASS("alignment-from-assumptions", AlignmentFromAssumptionsPass()) FUNCTION_PASS("annotation-remarks", AnnotationRemarksPass()) FUNCTION_PASS("assume-builder", AssumeBuilderPass()) FUNCTION_PASS("assume-simplify", AssumeSimplifyPass()) -FUNCTION_PASS("atomic-expand", AtomicExpandPass(TM)) +FUNCTION_PASS("atomic-expand", AtomicExpandPass(*TM)) FUNCTION_PASS("bdce", BDCEPass()) FUNCTION_PASS("break-crit-edges", BreakCriticalEdgesPass()) FUNCTION_PASS("callbr-prepare", CallBrPreparePass()) FUNCTION_PASS("callsite-splitting", CallSiteSplittingPass()) FUNCTION_PASS("chr", 
ControlHeightReductionPass()) -FUNCTION_PASS("codegenprepare", CodeGenPreparePass(TM)) -FUNCTION_PASS("complex-deinterleaving", ComplexDeinterleavingPass(TM)) +FUNCTION_PASS("codegenprepare", CodeGenPreparePass(*TM)) +FUNCTION_PASS("complex-deinterleaving", ComplexDeinterleavingPass(*TM)) FUNCTION_PASS("consthoist", ConstantHoistingPass()) FUNCTION_PASS("constraint-elimination", ConstraintEliminationPass()) FUNCTION_PASS("coro-elide", CoroElidePass()) @@ -430,10 +430,10 @@ FUNCTION_PASS("dot-dom-only", DomOnlyPrinter()) FUNCTION_PASS("dot-post-dom", PostDomPrinter()) FUNCTION_PASS("dot-post-dom-only", PostDomOnlyPrinter()) FUNCTION_PASS("dse", DSEPass()) -FUNCTION_PASS("dwarf-eh-prepare", DwarfEHPreparePass(TM)) +FUNCTION_PASS("dwarf-eh-prepare", DwarfEHPreparePass(*TM)) FUNCTION_PASS("drop-unnecessary-assumes", DropUnnecessaryAssumesPass()) -FUNCTION_PASS("expand-large-div-rem", ExpandLargeDivRemPass(TM)) -FUNCTION_PASS("expand-memcmp", ExpandMemCmpPass(TM)) +FUNCTION_PASS("expand-large-div-rem", ExpandLargeDivRemPass(*TM)) +FUNCTION_PASS("expand-memcmp", ExpandMemCmpPass(*TM)) FUNCTION_PASS("expand-reductions", ExpandReductionsPass()) FUNCTION_PASS("extra-vector-passes", ExtraFunctionPassManager<ShouldRunExtraVectorPasses>()) @@ -446,15 +446,15 @@ FUNCTION_PASS("guard-widening", GuardWideningPass()) FUNCTION_PASS("gvn-hoist", GVNHoistPass()) FUNCTION_PASS("gvn-sink", GVNSinkPass()) FUNCTION_PASS("helloworld", HelloWorldPass()) -FUNCTION_PASS("indirectbr-expand", IndirectBrExpandPass(TM)) +FUNCTION_PASS("indirectbr-expand", IndirectBrExpandPass(*TM)) FUNCTION_PASS("infer-address-spaces", InferAddressSpacesPass()) FUNCTION_PASS("infer-alignment", InferAlignmentPass()) FUNCTION_PASS("inject-tli-mappings", InjectTLIMappings()) FUNCTION_PASS("instcount", InstCountPass()) FUNCTION_PASS("instnamer", InstructionNamerPass()) FUNCTION_PASS("instsimplify", InstSimplifyPass()) -FUNCTION_PASS("interleaved-access", InterleavedAccessPass(TM)) -FUNCTION_PASS("interleaved-load-combine", InterleavedLoadCombinePass(TM)) +FUNCTION_PASS("interleaved-access", InterleavedAccessPass(*TM)) +FUNCTION_PASS("interleaved-load-combine", InterleavedLoadCombinePass(*TM)) FUNCTION_PASS("invalidate<all>", InvalidateAllAnalysesPass()) FUNCTION_PASS("irce", IRCEPass()) FUNCTION_PASS("jump-threading", JumpThreadingPass()) @@ -533,25 +533,25 @@ FUNCTION_PASS("reassociate", ReassociatePass()) FUNCTION_PASS("redundant-dbg-inst-elim", RedundantDbgInstEliminationPass()) FUNCTION_PASS("replace-with-veclib", ReplaceWithVeclib()) FUNCTION_PASS("reg2mem", RegToMemPass()) -FUNCTION_PASS("safe-stack", SafeStackPass(TM)) +FUNCTION_PASS("safe-stack", SafeStackPass(*TM)) FUNCTION_PASS("sandbox-vectorizer", SandboxVectorizerPass()) FUNCTION_PASS("scalarize-masked-mem-intrin", ScalarizeMaskedMemIntrinPass()) FUNCTION_PASS("sccp", SCCPPass()) -FUNCTION_PASS("select-optimize", SelectOptimizePass(TM)) +FUNCTION_PASS("select-optimize", SelectOptimizePass(*TM)) FUNCTION_PASS("separate-const-offset-from-gep", SeparateConstOffsetFromGEPPass()) FUNCTION_PASS("sink", SinkingPass()) FUNCTION_PASS("sjlj-eh-prepare", SjLjEHPreparePass(TM)) FUNCTION_PASS("slp-vectorizer", SLPVectorizerPass()) FUNCTION_PASS("slsr", StraightLineStrengthReducePass()) -FUNCTION_PASS("stack-protector", StackProtectorPass(TM)) +FUNCTION_PASS("stack-protector", StackProtectorPass(*TM)) FUNCTION_PASS("strip-gc-relocates", StripGCRelocates()) FUNCTION_PASS("tailcallelim", TailCallElimPass()) FUNCTION_PASS("transform-warning", WarnMissedTransformationsPass()) 
FUNCTION_PASS("trigger-crash-function", TriggerCrashFunctionPass()) FUNCTION_PASS("trigger-verifier-error", TriggerVerifierErrorPass()) FUNCTION_PASS("tsan", ThreadSanitizerPass()) -FUNCTION_PASS("typepromotion", TypePromotionPass(TM)) +FUNCTION_PASS("typepromotion", TypePromotionPass(*TM)) FUNCTION_PASS("unify-loop-exits", UnifyLoopExitsPass()) FUNCTION_PASS("unreachableblockelim", UnreachableBlockElimPass()) FUNCTION_PASS("vector-combine", VectorCombinePass()) @@ -730,7 +730,7 @@ FUNCTION_PASS_WITH_PARAMS( FUNCTION_PASS_WITH_PARAMS( "expand-fp", "ExpandFpPass", [TM = TM](CodeGenOptLevel OL) { - return ExpandFpPass(TM, OL); + return ExpandFpPass(*TM, OL); }, parseExpandFpOptions, "O0;O1;O2;O3") diff --git a/llvm/lib/Support/GlobPattern.cpp b/llvm/lib/Support/GlobPattern.cpp index 0ecf47d..2715229 100644 --- a/llvm/lib/Support/GlobPattern.cpp +++ b/llvm/lib/Support/GlobPattern.cpp @@ -132,24 +132,70 @@ parseBraceExpansions(StringRef S, std::optional<size_t> MaxSubPatterns) { return std::move(SubPatterns); } +static StringRef maxPlainSubstring(StringRef S) { + StringRef Best; + while (!S.empty()) { + size_t PrefixSize = S.find_first_of("?*[{\\"); + if (PrefixSize == std::string::npos) + PrefixSize = S.size(); + + if (Best.size() < PrefixSize) + Best = S.take_front(PrefixSize); + + S = S.drop_front(PrefixSize); + + // It's impossible, as the first and last characters of the input string + // must be Glob special characters, otherwise they would be parts of + // the prefix or the suffix. + assert(!S.empty()); + + switch (S.front()) { + case '\\': + S = S.drop_front(2); + break; + case '[': { + // Drop '[' and the first character which can be ']'. + S = S.drop_front(2); + size_t EndBracket = S.find_first_of("]"); + // Should not be possible, SubGlobPattern::create should fail on invalid + // pattern before we get here. + assert(EndBracket != std::string::npos); + S = S.drop_front(EndBracket + 1); + break; + } + case '{': + // TODO: implement. + // Fallback to whatever is best for now. + return Best; + default: + S = S.drop_front(1); + } + } + + return Best; +} + Expected<GlobPattern> GlobPattern::create(StringRef S, std::optional<size_t> MaxSubPatterns) { GlobPattern Pat; + Pat.Pattern = S; // Store the prefix that does not contain any metacharacter. - size_t PrefixSize = S.find_first_of("?*[{\\"); - Pat.Prefix = S.substr(0, PrefixSize); - if (PrefixSize == std::string::npos) + Pat.PrefixSize = S.find_first_of("?*[{\\"); + if (Pat.PrefixSize == std::string::npos) { + Pat.PrefixSize = S.size(); return Pat; - S = S.substr(PrefixSize); + } + S = S.substr(Pat.PrefixSize); // Just in case we stop on unmatched opening brackets. 
size_t SuffixStart = S.find_last_of("?*[]{}\\"); assert(SuffixStart != std::string::npos); if (S[SuffixStart] == '\\') ++SuffixStart; - ++SuffixStart; - Pat.Suffix = S.substr(SuffixStart); + if (SuffixStart < S.size()) + ++SuffixStart; + Pat.SuffixSize = S.size() - SuffixStart; S = S.substr(0, SuffixStart); SmallVector<std::string, 1> SubPats; @@ -199,10 +245,15 @@ GlobPattern::SubGlobPattern::create(StringRef S) { return Pat; } +StringRef GlobPattern::longest_substr() const { + return maxPlainSubstring( + Pattern.drop_front(PrefixSize).drop_back(SuffixSize)); +} + bool GlobPattern::match(StringRef S) const { - if (!S.consume_front(Prefix)) + if (!S.consume_front(prefix())) return false; - if (!S.consume_back(Suffix)) + if (!S.consume_back(suffix())) return false; if (SubGlobs.empty() && S.empty()) return true; diff --git a/llvm/lib/Target/AMDGPU/AMDGPU.td b/llvm/lib/Target/AMDGPU/AMDGPU.td index ea32748..1c8383c 100644 --- a/llvm/lib/Target/AMDGPU/AMDGPU.td +++ b/llvm/lib/Target/AMDGPU/AMDGPU.td @@ -1430,6 +1430,18 @@ def FeatureAddSubU64Insts def FeatureMadU32Inst : SubtargetFeature<"mad-u32-inst", "HasMadU32Inst", "true", "Has v_mad_u32 instruction">; +def FeatureAddMinMaxInsts : SubtargetFeature<"add-min-max-insts", + "HasAddMinMaxInsts", + "true", + "Has v_add_{min|max}_{i|u}32 instructions" +>; + +def FeaturePkAddMinMaxInsts : SubtargetFeature<"pk-add-min-max-insts", + "HasPkAddMinMaxInsts", + "true", + "Has v_pk_add_{min|max}_{i|u}16 instructions" +>; + def FeatureMemToLDSLoad : SubtargetFeature<"vmem-to-lds-load-insts", "HasVMemToLDSLoad", "true", @@ -2115,6 +2127,8 @@ def FeatureISAVersion12_50 : FeatureSet< FeatureLshlAddU64Inst, FeatureAddSubU64Insts, FeatureMadU32Inst, + FeatureAddMinMaxInsts, + FeaturePkAddMinMaxInsts, FeatureLdsBarrierArriveAtomic, FeatureSetPrioIncWgInst, Feature45BitNumRecordsBufferResource, @@ -2658,11 +2672,11 @@ def HasFmaakFmamkF64Insts : def HasAddMinMaxInsts : Predicate<"Subtarget->hasAddMinMaxInsts()">, - AssemblerPredicate<(any_of FeatureGFX1250Insts)>; + AssemblerPredicate<(any_of FeatureAddMinMaxInsts)>; def HasPkAddMinMaxInsts : Predicate<"Subtarget->hasPkAddMinMaxInsts()">, - AssemblerPredicate<(any_of FeatureGFX1250Insts)>; + AssemblerPredicate<(any_of FeaturePkAddMinMaxInsts)>; def HasPkMinMax3Insts : Predicate<"Subtarget->hasPkMinMax3Insts()">, diff --git a/llvm/lib/Target/AMDGPU/AMDGPURegisterBankInfo.cpp b/llvm/lib/Target/AMDGPU/AMDGPURegisterBankInfo.cpp index 56807a4..54ba2f8 100644 --- a/llvm/lib/Target/AMDGPU/AMDGPURegisterBankInfo.cpp +++ b/llvm/lib/Target/AMDGPU/AMDGPURegisterBankInfo.cpp @@ -4835,6 +4835,14 @@ AMDGPURegisterBankInfo::getInstrMapping(const MachineInstr &MI) const { case Intrinsic::amdgcn_perm_pk16_b4_u4: case Intrinsic::amdgcn_perm_pk16_b6_u4: case Intrinsic::amdgcn_perm_pk16_b8_u4: + case Intrinsic::amdgcn_add_max_i32: + case Intrinsic::amdgcn_add_max_u32: + case Intrinsic::amdgcn_add_min_i32: + case Intrinsic::amdgcn_add_min_u32: + case Intrinsic::amdgcn_pk_add_max_i16: + case Intrinsic::amdgcn_pk_add_max_u16: + case Intrinsic::amdgcn_pk_add_min_i16: + case Intrinsic::amdgcn_pk_add_min_u16: return getDefaultMappingVOP(MI); case Intrinsic::amdgcn_log: case Intrinsic::amdgcn_exp2: diff --git a/llvm/lib/Target/AMDGPU/AMDGPUTargetMachine.cpp b/llvm/lib/Target/AMDGPU/AMDGPUTargetMachine.cpp index 996b55f..02c5390 100644 --- a/llvm/lib/Target/AMDGPU/AMDGPUTargetMachine.cpp +++ b/llvm/lib/Target/AMDGPU/AMDGPUTargetMachine.cpp @@ -2086,7 +2086,7 @@ void AMDGPUCodeGenPassBuilder::addIRPasses(AddIRPass &addPass) const { 
(AMDGPUAtomicOptimizerStrategy != ScanOptions::None)) addPass(AMDGPUAtomicOptimizerPass(TM, AMDGPUAtomicOptimizerStrategy)); - addPass(AtomicExpandPass(&TM)); + addPass(AtomicExpandPass(TM)); if (TM.getOptLevel() > CodeGenOptLevel::None) { addPass(AMDGPUPromoteAllocaPass(TM)); diff --git a/llvm/lib/Target/AMDGPU/GCNSubtarget.h b/llvm/lib/Target/AMDGPU/GCNSubtarget.h index a466780..ac660d5 100644 --- a/llvm/lib/Target/AMDGPU/GCNSubtarget.h +++ b/llvm/lib/Target/AMDGPU/GCNSubtarget.h @@ -277,6 +277,8 @@ protected: bool HasLshlAddU64Inst = false; bool HasAddSubU64Insts = false; bool HasMadU32Inst = false; + bool HasAddMinMaxInsts = false; + bool HasPkAddMinMaxInsts = false; bool HasPointSampleAccel = false; bool HasLdsBarrierArriveAtomic = false; bool HasSetPrioIncWgInst = false; @@ -1567,10 +1569,10 @@ public: bool hasIntMinMax64() const { return GFX1250Insts; } // \returns true if the target has V_ADD_{MIN|MAX}_{I|U}32 instructions. - bool hasAddMinMaxInsts() const { return GFX1250Insts; } + bool hasAddMinMaxInsts() const { return HasAddMinMaxInsts; } // \returns true if the target has V_PK_ADD_{MIN|MAX}_{I|U}16 instructions. - bool hasPkAddMinMaxInsts() const { return GFX1250Insts; } + bool hasPkAddMinMaxInsts() const { return HasPkAddMinMaxInsts; } // \returns true if the target has V_PK_{MIN|MAX}3_{I|U}16 instructions. bool hasPkMinMax3Insts() const { return GFX1250Insts; } diff --git a/llvm/lib/Target/AMDGPU/VOP3Instructions.td b/llvm/lib/Target/AMDGPU/VOP3Instructions.td index 7cce033..ee10190 100644 --- a/llvm/lib/Target/AMDGPU/VOP3Instructions.td +++ b/llvm/lib/Target/AMDGPU/VOP3Instructions.td @@ -775,10 +775,10 @@ let SubtargetPredicate = HasMinimum3Maximum3F16, ReadsModeReg = 0 in { } // End SubtargetPredicate = isGFX12Plus, ReadsModeReg = 0 let SubtargetPredicate = HasAddMinMaxInsts, isCommutable = 1, isReMaterializable = 1 in { - defm V_ADD_MAX_I32 : VOP3Inst <"v_add_max_i32", VOP3_Profile<VOP_I32_I32_I32_I32, VOP3_CLAMP>>; - defm V_ADD_MAX_U32 : VOP3Inst <"v_add_max_u32", VOP3_Profile<VOP_I32_I32_I32_I32, VOP3_CLAMP>>; - defm V_ADD_MIN_I32 : VOP3Inst <"v_add_min_i32", VOP3_Profile<VOP_I32_I32_I32_I32, VOP3_CLAMP>>; - defm V_ADD_MIN_U32 : VOP3Inst <"v_add_min_u32", VOP3_Profile<VOP_I32_I32_I32_I32, VOP3_CLAMP>>; + defm V_ADD_MAX_I32 : VOP3Inst <"v_add_max_i32", VOP3_Profile<VOP_I32_I32_I32_I32, VOP3_CLAMP>, int_amdgcn_add_max_i32>; + defm V_ADD_MAX_U32 : VOP3Inst <"v_add_max_u32", VOP3_Profile<VOP_I32_I32_I32_I32, VOP3_CLAMP>, int_amdgcn_add_max_u32>; + defm V_ADD_MIN_I32 : VOP3Inst <"v_add_min_i32", VOP3_Profile<VOP_I32_I32_I32_I32, VOP3_CLAMP>, int_amdgcn_add_min_i32>; + defm V_ADD_MIN_U32 : VOP3Inst <"v_add_min_u32", VOP3_Profile<VOP_I32_I32_I32_I32, VOP3_CLAMP>, int_amdgcn_add_min_u32>; } defm V_ADD_I16 : VOP3Inst_t16 <"v_add_i16", VOP_I16_I16_I16>; diff --git a/llvm/lib/Target/AMDGPU/VOP3PInstructions.td b/llvm/lib/Target/AMDGPU/VOP3PInstructions.td index 6500fce..c4692b7 100644 --- a/llvm/lib/Target/AMDGPU/VOP3PInstructions.td +++ b/llvm/lib/Target/AMDGPU/VOP3PInstructions.td @@ -75,7 +75,7 @@ multiclass VOP3PInst<string OpName, VOPProfile P, SDPatternOperator node = null_frag, bit IsDOT = 0> { def NAME : VOP3P_Pseudo<OpName, P, !if (P.HasModifiers, - getVOP3PModPat<P, node, IsDOT, IsDOT>.ret, + getVOP3PModPat<P, node, !or(P.EnableClamp, IsDOT), IsDOT>.ret, getVOP3Pat<P, node>.ret)>; let SubtargetPredicate = isGFX11Plus in { if P.HasExtVOP3DPP then @@ -434,15 +434,16 @@ defm : MadFmaMixFP16Pats_t16<fma, V_FMA_MIX_BF16_t16>; } // End SubtargetPredicate = HasFmaMixBF16Insts 
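// The new AddMinMax/PkAddMinMax subtarget features decouple these
// instructions from the blanket GFX1250Insts predicate. A sketch of how the
// assembler support can be exercised (standard llvm-mc options; the input
// file name is hypothetical):
//
//   llvm-mc -triple=amdgcn -mcpu=gfx1250 -show-encoding add_min_max.s
//
// or selectively via -mattr=+add-min-max-insts,+pk-add-min-max-insts.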
def PK_ADD_MINMAX_Profile : VOP3P_Profile<VOP_V2I16_V2I16_V2I16_V2I16, VOP3_PACKED> { - let HasModifiers = 0; + let HasNeg = 0; + let EnableClamp = 1; } let isCommutable = 1, isReMaterializable = 1 in { let SubtargetPredicate = HasPkAddMinMaxInsts in { -defm V_PK_ADD_MAX_I16 : VOP3PInst<"v_pk_add_max_i16", PK_ADD_MINMAX_Profile>; -defm V_PK_ADD_MAX_U16 : VOP3PInst<"v_pk_add_max_u16", PK_ADD_MINMAX_Profile>; -defm V_PK_ADD_MIN_I16 : VOP3PInst<"v_pk_add_min_i16", PK_ADD_MINMAX_Profile>; -defm V_PK_ADD_MIN_U16 : VOP3PInst<"v_pk_add_min_u16", PK_ADD_MINMAX_Profile>; +defm V_PK_ADD_MAX_I16 : VOP3PInst<"v_pk_add_max_i16", PK_ADD_MINMAX_Profile, int_amdgcn_pk_add_max_i16>; +defm V_PK_ADD_MAX_U16 : VOP3PInst<"v_pk_add_max_u16", PK_ADD_MINMAX_Profile, int_amdgcn_pk_add_max_u16>; +defm V_PK_ADD_MIN_I16 : VOP3PInst<"v_pk_add_min_i16", PK_ADD_MINMAX_Profile, int_amdgcn_pk_add_min_i16>; +defm V_PK_ADD_MIN_U16 : VOP3PInst<"v_pk_add_min_u16", PK_ADD_MINMAX_Profile, int_amdgcn_pk_add_min_u16>; } let SubtargetPredicate = HasPkMinMax3Insts in { defm V_PK_MAX3_I16 : VOP3PInst<"v_pk_max3_i16", PK_ADD_MINMAX_Profile>; diff --git a/llvm/lib/Target/LoongArch/LoongArchISelLowering.cpp b/llvm/lib/Target/LoongArch/LoongArchISelLowering.cpp index f7deeaf..ca4a655 100644 --- a/llvm/lib/Target/LoongArch/LoongArchISelLowering.cpp +++ b/llvm/lib/Target/LoongArch/LoongArchISelLowering.cpp @@ -2614,6 +2614,9 @@ static SDValue lower256BitShuffle(const SDLoc &DL, ArrayRef<int> Mask, MVT VT, if ((Result = lowerVECTOR_SHUFFLE_XVSHUF4I(DL, Mask, VT, V1, V2, DAG, Subtarget))) return Result; + // Try to widen vectors to gain more optimization opportunities. + if (SDValue NewShuffle = widenShuffleMask(DL, Mask, VT, V1, V2, DAG)) + return NewShuffle; if ((Result = lowerVECTOR_SHUFFLE_XVPERMI(DL, Mask, VT, V1, DAG, Subtarget))) return Result; diff --git a/llvm/lib/Target/RISCV/GISel/RISCVPostLegalizerCombiner.cpp b/llvm/lib/Target/RISCV/GISel/RISCVPostLegalizerCombiner.cpp index 67b510d..f2b216b 100644 --- a/llvm/lib/Target/RISCV/GISel/RISCVPostLegalizerCombiner.cpp +++ b/llvm/lib/Target/RISCV/GISel/RISCVPostLegalizerCombiner.cpp @@ -27,6 +27,7 @@ #include "llvm/CodeGen/MachineDominators.h" #include "llvm/CodeGen/MachineFunctionPass.h" #include "llvm/CodeGen/TargetPassConfig.h" +#include "llvm/Support/FormatVariadic.h" #define GET_GICOMBINER_DEPS #include "RISCVGenPostLegalizeGICombiner.inc" @@ -42,6 +43,56 @@ namespace { #include "RISCVGenPostLegalizeGICombiner.inc" #undef GET_GICOMBINER_TYPES +/// Match: G_STORE (G_FCONSTANT +0.0), addr +/// Return the source vreg in MatchInfo if matched. 
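/// Illustrative gMIR that matches (hypothetical vregs, RV64 with D):
///   %c:_(s64) = G_FCONSTANT double 0.000000e+00
///   G_STORE %c(s64), %p(p0) :: (store (s64))
/// MatchInfo is set to %c so the apply step can erase the G_FCONSTANT once
/// it is dead.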
+bool matchFoldFPZeroStore(MachineInstr &MI, MachineRegisterInfo &MRI, + const RISCVSubtarget &STI, Register &MatchInfo) { + if (MI.getOpcode() != TargetOpcode::G_STORE) + return false; + + Register SrcReg = MI.getOperand(0).getReg(); + if (!SrcReg.isVirtual()) + return false; + + MachineInstr *Def = MRI.getVRegDef(SrcReg); + if (!Def || Def->getOpcode() != TargetOpcode::G_FCONSTANT) + return false; + + auto *CFP = Def->getOperand(1).getFPImm(); + if (!CFP || !CFP->getValueAPF().isPosZero()) + return false; + + unsigned ValBits = MRI.getType(SrcReg).getSizeInBits(); + if ((ValBits == 16 && !STI.hasStdExtZfh()) || + (ValBits == 32 && !STI.hasStdExtF()) || + (ValBits == 64 && (!STI.hasStdExtD() || !STI.is64Bit()))) + return false; + + MatchInfo = SrcReg; + return true; +} + +/// Apply: rewrite to G_STORE (G_CONSTANT 0 [XLEN]), addr +void applyFoldFPZeroStore(MachineInstr &MI, MachineRegisterInfo &MRI, + MachineIRBuilder &B, const RISCVSubtarget &STI, + Register &MatchInfo) { + const unsigned XLen = STI.getXLen(); + + auto Zero = B.buildConstant(LLT::scalar(XLen), 0); + MI.getOperand(0).setReg(Zero.getReg(0)); + + MachineInstr *Def = MRI.getVRegDef(MatchInfo); + if (Def && MRI.use_nodbg_empty(MatchInfo)) + Def->eraseFromParent(); + +#ifndef NDEBUG + unsigned ValBits = MRI.getType(MatchInfo).getSizeInBits(); + LLVM_DEBUG(dbgs() << formatv("[{0}] Fold FP zero store -> int zero " + "(XLEN={1}, ValBits={2}):\n {3}\n", + DEBUG_TYPE, XLen, ValBits, MI)); +#endif +} + class RISCVPostLegalizerCombinerImpl : public Combiner { protected: const CombinerHelper Helper; diff --git a/llvm/lib/Target/RISCV/RISCVCombine.td b/llvm/lib/Target/RISCV/RISCVCombine.td index 995dd0c..a06b60d 100644 --- a/llvm/lib/Target/RISCV/RISCVCombine.td +++ b/llvm/lib/Target/RISCV/RISCVCombine.td @@ -19,11 +19,20 @@ def RISCVO0PreLegalizerCombiner: GICombiner< "RISCVO0PreLegalizerCombinerImpl", [optnone_combines]> { } +// Rule: fold store (fp +0.0) -> store (int zero [XLEN]) +def fp_zero_store_matchdata : GIDefMatchData<"Register">; +def fold_fp_zero_store : GICombineRule< + (defs root:$root, fp_zero_store_matchdata:$matchinfo), + (match (G_STORE $src, $addr):$root, + [{ return matchFoldFPZeroStore(*${root}, MRI, STI, ${matchinfo}); }]), + (apply [{ applyFoldFPZeroStore(*${root}, MRI, B, STI, ${matchinfo}); }])>; + // Post-legalization combines which are primarily optimizations. // TODO: Add more combines. 
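// Sketch of the net effect of fold_fp_zero_store on RV64 with D (assembly is
// illustrative, not generated output from this patch): a store of +0.0 no
// longer needs an FP register,
//   before: fmv.d.x fa5, zero
//           fsd     fa5, 0(a0)
//   after:  sd      zero, 0(a0)
// because the integer zero register can be stored directly.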
def RISCVPostLegalizerCombiner : GICombiner<"RISCVPostLegalizerCombinerImpl", [sub_to_add, combines_for_extload, redundant_and, identity_combines, shift_immed_chain, - commute_constant_to_rhs, simplify_neg_minmax]> { + commute_constant_to_rhs, simplify_neg_minmax, + fold_fp_zero_store]> { } diff --git a/llvm/lib/Target/RISCV/RISCVInstrInfoZvfbf.td b/llvm/lib/Target/RISCV/RISCVInstrInfoZvfbf.td index f7d1a09..b9c5b75 100644 --- a/llvm/lib/Target/RISCV/RISCVInstrInfoZvfbf.td +++ b/llvm/lib/Target/RISCV/RISCVInstrInfoZvfbf.td @@ -668,4 +668,38 @@ foreach vti = NoGroupBF16Vectors in { def : Pat<(vti.Scalar (extractelt (vti.Vector vti.RegClass:$rs2), 0)), (vfmv_f_s_inst vti.RegClass:$rs2, vti.Log2SEW)>; } + +let Predicates = [HasStdExtZvfbfa] in { + foreach fvtiToFWti = AllWidenableBF16ToFloatVectors in { + defvar fvti = fvtiToFWti.Vti; + defvar fwti = fvtiToFWti.Wti; + def : Pat<(fwti.Vector (any_riscv_fpextend_vl + (fvti.Vector fvti.RegClass:$rs1), + (fvti.Mask VMV0:$vm), + VLOpFrag)), + (!cast<Instruction>("PseudoVFWCVT_F_F_ALT_V_"#fvti.LMul.MX#"_E"#fvti.SEW#"_MASK") + (fwti.Vector (IMPLICIT_DEF)), fvti.RegClass:$rs1, + (fvti.Mask VMV0:$vm), + GPR:$vl, fvti.Log2SEW, TA_MA)>; + + def : Pat<(fvti.Vector (any_riscv_fpround_vl + (fwti.Vector fwti.RegClass:$rs1), + (fwti.Mask VMV0:$vm), VLOpFrag)), + (!cast<Instruction>("PseudoVFNCVT_F_F_ALT_W_"#fvti.LMul.MX#"_E"#fvti.SEW#"_MASK") + (fvti.Vector (IMPLICIT_DEF)), fwti.RegClass:$rs1, + (fwti.Mask VMV0:$vm), + // Value to indicate no rounding mode change in + // RISCVInsertReadWriteCSR + FRM_DYN, + GPR:$vl, fvti.Log2SEW, TA_MA)>; + def : Pat<(fvti.Vector (fpround (fwti.Vector fwti.RegClass:$rs1))), + (!cast<Instruction>("PseudoVFNCVT_F_F_ALT_W_"#fvti.LMul.MX#"_E"#fvti.SEW) + (fvti.Vector (IMPLICIT_DEF)), + fwti.RegClass:$rs1, + // Value to indicate no rounding mode change in + // RISCVInsertReadWriteCSR + FRM_DYN, + fvti.AVL, fvti.Log2SEW, TA_MA)>; + } +} } // Predicates = [HasStdExtZvfbfa] diff --git a/llvm/lib/Target/WebAssembly/WebAssemblyInstrSIMD.td b/llvm/lib/Target/WebAssembly/WebAssemblyInstrSIMD.td index f0ac26b..14097d7 100644 --- a/llvm/lib/Target/WebAssembly/WebAssemblyInstrSIMD.td +++ b/llvm/lib/Target/WebAssembly/WebAssemblyInstrSIMD.td @@ -1336,22 +1336,25 @@ def pmax : PatFrags<(ops node:$lhs, node:$rhs), [ ]>; defm PMAX : SIMDBinaryFP<pmax, "pmax", 235>; +multiclass PMinMaxInt<Vec vec, NI baseMinInst, NI baseMaxInst> { + def : Pat<(vec.int_vt (vselect + (setolt (vec.vt (bitconvert V128:$rhs)), + (vec.vt (bitconvert V128:$lhs))), + V128:$rhs, V128:$lhs)), + (baseMinInst $lhs, $rhs)>; + def : Pat<(vec.int_vt (vselect + (setolt (vec.vt (bitconvert V128:$lhs)), + (vec.vt (bitconvert V128:$rhs))), + V128:$rhs, V128:$lhs)), + (baseMaxInst $lhs, $rhs)>; +} // Also match the pmin/pmax cases where the operands are int vectors (but the // comparison is still a floating point comparison). This can happen when using // the wasm_simd128.h intrinsics because v128_t is an integer vector. 
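// For instance (an illustrative sketch, not from this patch), code built on
// <wasm_simd128.h> operates on the untyped v128_t, which is an integer
// vector, so a float pmin written as
//   v128_t pick_smaller(v128_t a, v128_t b) {
//     return wasm_v128_bitselect(b, a, wasm_f32x4_lt(b, a));
//   }
// reaches instruction selection as a vselect of integer vectors fed by a
// floating-point setolt, exactly the shape PMinMaxInt matches.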
foreach vec = [F32x4, F64x2, F16x8] in {
-defvar pmin = !cast<NI>("PMIN_"#vec);
-defvar pmax = !cast<NI>("PMAX_"#vec);
-def : Pat<(vec.int_vt (vselect
-            (setolt (vec.vt (bitconvert V128:$rhs)),
-                    (vec.vt (bitconvert V128:$lhs))),
-            V128:$rhs, V128:$lhs)),
-          (pmin $lhs, $rhs)>;
-def : Pat<(vec.int_vt (vselect
-            (setolt (vec.vt (bitconvert V128:$lhs)),
-                    (vec.vt (bitconvert V128:$rhs))),
-            V128:$rhs, V128:$lhs)),
-          (pmax $lhs, $rhs)>;
+  defvar pmin = !cast<NI>("PMIN_"#vec);
+  defvar pmax = !cast<NI>("PMAX_"#vec);
+  defm : PMinMaxInt<vec, pmin, pmax>;
}

// And match the pmin/pmax LLVM intrinsics as well
@@ -1756,6 +1759,15 @@ let Predicates = [HasRelaxedSIMD] in {
                (relaxed_max V128:$lhs, V128:$rhs)>;
    def : Pat<(vec.vt (fmaximumnum (vec.vt V128:$lhs), (vec.vt V128:$rhs))),
              (relaxed_max V128:$lhs, V128:$rhs)>;
+
+    // Also match patterns that would otherwise select pmin/pmax and lower
+    // them to relaxed min/max.
+    let AddedComplexity = 1 in {
+      def : Pat<(vec.vt (pmin (vec.vt V128:$lhs), (vec.vt V128:$rhs))),
+                (relaxed_min $lhs, $rhs)>;
+      def : Pat<(vec.vt (pmax (vec.vt V128:$lhs), (vec.vt V128:$rhs))),
+                (relaxed_max $lhs, $rhs)>;
+      defm : PMinMaxInt<vec, relaxed_min, relaxed_max>;
+    }
  }
}
diff --git a/llvm/lib/TargetParser/TargetParser.cpp b/llvm/lib/TargetParser/TargetParser.cpp
index 62a3c88..975a271 100644
--- a/llvm/lib/TargetParser/TargetParser.cpp
+++ b/llvm/lib/TargetParser/TargetParser.cpp
@@ -433,6 +433,8 @@ static void fillAMDGCNFeatureMap(StringRef GPU, const Triple &T,
    Features["fp8e5m3-insts"] = true;
    Features["permlane16-swap"] = true;
    Features["ashr-pk-insts"] = true;
+    Features["add-min-max-insts"] = true;
+    Features["pk-add-min-max-insts"] = true;
    Features["atomic-buffer-pk-add-bf16-inst"] = true;
    Features["vmem-pref-insts"] = true;
    Features["atomic-fadd-rtn-insts"] = true;
diff --git a/llvm/lib/Transforms/InstCombine/InstCombineAddSub.cpp b/llvm/lib/Transforms/InstCombine/InstCombineAddSub.cpp
index 73ec451..9bee523 100644
--- a/llvm/lib/Transforms/InstCombine/InstCombineAddSub.cpp
+++ b/llvm/lib/Transforms/InstCombine/InstCombineAddSub.cpp
@@ -2760,21 +2760,34 @@ Instruction *InstCombinerImpl::visitSub(BinaryOperator &I) {

  // Optimize pointer differences into the same array into a size. Consider:
  // &A[10] - &A[0]: we should compile this to "10".
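  // A sketch of the fold (illustrative IR, assuming i32 elements and 64-bit
  // pointers):
  //   %p  = getelementptr inbounds i32, ptr %A, i64 10
  //   %pi = ptrtoint ptr %p to i64
  //   %ai = ptrtoint ptr %A to i64
  //   %d  = sub i64 %pi, %ai
  // OptimizePointerDifference produces the byte distance 40 directly, and
  // the exact sdiv by 4 emitted for the C-level pointer subtraction then
  // folds to 10.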
  Value *LHSOp, *RHSOp;
-  if (match(Op0, m_PtrToInt(m_Value(LHSOp))) &&
-      match(Op1, m_PtrToInt(m_Value(RHSOp))))
+  if (match(Op0, m_PtrToIntOrAddr(m_Value(LHSOp))) &&
+      match(Op1, m_PtrToIntOrAddr(m_Value(RHSOp))))
    if (Value *Res = OptimizePointerDifference(LHSOp, RHSOp, I.getType(),
                                               I.hasNoUnsignedWrap()))
      return replaceInstUsesWith(I, Res);

  // trunc(p)-trunc(q) -> trunc(p-q)
-  if (match(Op0, m_Trunc(m_PtrToInt(m_Value(LHSOp)))) &&
-      match(Op1, m_Trunc(m_PtrToInt(m_Value(RHSOp)))))
+  if (match(Op0, m_Trunc(m_PtrToIntOrAddr(m_Value(LHSOp)))) &&
+      match(Op1, m_Trunc(m_PtrToIntOrAddr(m_Value(RHSOp)))))
    if (Value *Res = OptimizePointerDifference(LHSOp, RHSOp, I.getType(),
                                               /* IsNUW */ false))
      return replaceInstUsesWith(I, Res);

-  if (match(Op0, m_ZExt(m_PtrToIntSameSize(DL, m_Value(LHSOp)))) &&
-      match(Op1, m_ZExtOrSelf(m_PtrToInt(m_Value(RHSOp))))) {
+  auto MatchSubOfZExtOfPtrToIntOrAddr = [&]() {
+    if (match(Op0, m_ZExt(m_PtrToIntSameSize(DL, m_Value(LHSOp)))) &&
+        match(Op1, m_ZExt(m_PtrToIntSameSize(DL, m_Value(RHSOp)))))
+      return true;
+    if (match(Op0, m_ZExt(m_PtrToAddr(m_Value(LHSOp)))) &&
+        match(Op1, m_ZExt(m_PtrToAddr(m_Value(RHSOp)))))
+      return true;
+    // Special case for a non-canonical ptrtoint in a constant expression,
+    // where the zext has been folded into the ptrtoint.
+    if (match(Op0, m_ZExt(m_PtrToIntSameSize(DL, m_Value(LHSOp)))) &&
+        match(Op1, m_PtrToInt(m_Value(RHSOp))))
+      return true;
+    return false;
+  };
+  if (MatchSubOfZExtOfPtrToIntOrAddr()) {
    if (auto *GEP = dyn_cast<GEPOperator>(LHSOp)) {
      if (GEP->getPointerOperand() == RHSOp) {
        if (GEP->hasNoUnsignedWrap() || GEP->hasNoUnsignedSignedWrap()) {
diff --git a/llvm/lib/Transforms/InstCombine/InstCombineCalls.cpp b/llvm/lib/Transforms/InstCombine/InstCombineCalls.cpp
index dab200d..669d4f0 100644
--- a/llvm/lib/Transforms/InstCombine/InstCombineCalls.cpp
+++ b/llvm/lib/Transforms/InstCombine/InstCombineCalls.cpp
@@ -4003,18 +4003,29 @@ Instruction *InstCombinerImpl::visitCallInst(CallInst &CI) {

  // Try to fold intrinsic into select/phi operands. This is legal if:
  // * The intrinsic is speculatable.
-  // * The select condition is not a vector, or the intrinsic does not
-  //   perform cross-lane operations.
-  if (isSafeToSpeculativelyExecuteWithVariableReplaced(&CI) &&
-      isNotCrossLaneOperation(II))
+  // * The operand is one of the following:
+  //   - a phi.
+  //   - a select with a scalar condition.
+  //   - a select with a vector condition, where II is not a cross-lane
+  //     operation.
+  if (isSafeToSpeculativelyExecuteWithVariableReplaced(&CI)) {
    for (Value *Op : II->args()) {
-      if (auto *Sel = dyn_cast<SelectInst>(Op))
-        if (Instruction *R = FoldOpIntoSelect(*II, Sel))
+      if (auto *Sel = dyn_cast<SelectInst>(Op)) {
+        bool IsVectorCond = Sel->getCondition()->getType()->isVectorTy();
+        if (IsVectorCond && !isNotCrossLaneOperation(II))
+          continue;
+        // Don't replace a scalar select with a more expensive vector select
+        // if we can't simplify both arms of the select.
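        // (Illustrative case, not from the patch: for a vector intrinsic
        // with a scalar operand such as
        //   %e = select i1 %c, i32 0, i32 1
        //   %r = call <2 x float> @llvm.powi.v2f32.i32(<2 x float> %v, i32 %e)
        // the fold would trade the cheap scalar select of %e for a vector
        // select of two vector powi results, so it only pays off when both
        // arms simplify away.)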
+ bool SimplifyBothArms = + !Op->getType()->isVectorTy() && II->getType()->isVectorTy(); + if (Instruction *R = FoldOpIntoSelect( + *II, Sel, /*FoldWithMultiUse=*/false, SimplifyBothArms)) return R; + } if (auto *Phi = dyn_cast<PHINode>(Op)) if (Instruction *R = foldOpIntoPhi(*II, Phi)) return R; } + } if (Instruction *Shuf = foldShuffledIntrinsicOperands(II)) return Shuf; diff --git a/llvm/lib/Transforms/InstCombine/InstCombineInternal.h b/llvm/lib/Transforms/InstCombine/InstCombineInternal.h index 943c223..ede73f8 100644 --- a/llvm/lib/Transforms/InstCombine/InstCombineInternal.h +++ b/llvm/lib/Transforms/InstCombine/InstCombineInternal.h @@ -664,7 +664,8 @@ public: /// This also works for Cast instructions, which obviously do not have a /// second operand. Instruction *FoldOpIntoSelect(Instruction &Op, SelectInst *SI, - bool FoldWithMultiUse = false); + bool FoldWithMultiUse = false, + bool SimplifyBothArms = false); /// This is a convenience wrapper function for the above two functions. Instruction *foldBinOpIntoSelectOrPhi(BinaryOperator &I); diff --git a/llvm/lib/Transforms/InstCombine/InstructionCombining.cpp b/llvm/lib/Transforms/InstCombine/InstructionCombining.cpp index 3f11cae..67e2aae 100644 --- a/llvm/lib/Transforms/InstCombine/InstructionCombining.cpp +++ b/llvm/lib/Transforms/InstCombine/InstructionCombining.cpp @@ -1777,7 +1777,8 @@ static Value *foldOperationIntoSelectOperand(Instruction &I, SelectInst *SI, } Instruction *InstCombinerImpl::FoldOpIntoSelect(Instruction &Op, SelectInst *SI, - bool FoldWithMultiUse) { + bool FoldWithMultiUse, + bool SimplifyBothArms) { // Don't modify shared select instructions unless set FoldWithMultiUse if (!SI->hasOneUse() && !FoldWithMultiUse) return nullptr; @@ -1821,6 +1822,9 @@ Instruction *InstCombinerImpl::FoldOpIntoSelect(Instruction &Op, SelectInst *SI, if (!NewTV && !NewFV) return nullptr; + if (SimplifyBothArms && !(NewTV && NewFV)) + return nullptr; + // Create an instruction for the arm that did not fold. if (!NewTV) NewTV = foldOperationIntoSelectOperand(Op, SI, TV, *this); diff --git a/llvm/test/Analysis/BranchProbabilityInfo/pr22718.ll b/llvm/test/Analysis/BranchProbabilityInfo/pr22718.ll index ed851f2..67ce44e 100644 --- a/llvm/test/Analysis/BranchProbabilityInfo/pr22718.ll +++ b/llvm/test/Analysis/BranchProbabilityInfo/pr22718.ll @@ -12,14 +12,14 @@ @.str = private unnamed_addr constant [17 x i8] c"x = %lu\0Ay = %lu\0A\00", align 1 ; Function Attrs: inlinehint nounwind uwtable -define i32 @main() #0 { +define i32 @main() { entry: %retval = alloca i32, align 4 %i = alloca i64, align 8 store i32 0, ptr %retval store i64 0, ptr @y, align 8 store i64 0, ptr @x, align 8 - call void @srand(i32 422304) #3 + call void @srand(i32 422304) store i64 0, ptr %i, align 8 br label %for.cond @@ -29,7 +29,7 @@ for.cond: ; preds = %for.inc, %entry br i1 %cmp, label %for.body, label %for.end, !prof !1 for.body: ; preds = %for.cond - %call = call i32 @rand() #3 + %call = call i32 @rand() %conv = sitofp i32 %call to double %mul = fmul double %conv, 1.000000e+02 %div = fdiv double %mul, 0x41E0000000000000 @@ -65,17 +65,12 @@ for.end: ; preds = %for.cond } ; Function Attrs: nounwind -declare void @srand(i32) #1 +declare void @srand(i32) ; Function Attrs: nounwind -declare i32 @rand() #1 +declare i32 @rand() -declare i32 @printf(ptr, ...) 
#2 - -attributes #0 = { inlinehint nounwind uwtable "less-precise-fpmad"="false" "frame-pointer"="all" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="x86-64" "target-features"="+sse,+sse2" "unsafe-fp-math"="false" "use-soft-float"="false" } -attributes #1 = { nounwind "less-precise-fpmad"="false" "frame-pointer"="all" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="x86-64" "target-features"="+sse,+sse2" "unsafe-fp-math"="false" "use-soft-float"="false" } -attributes #2 = { "less-precise-fpmad"="false" "frame-pointer"="all" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="x86-64" "target-features"="+sse,+sse2" "unsafe-fp-math"="false" "use-soft-float"="false" } -attributes #3 = { nounwind } +declare i32 @printf(ptr, ...) !llvm.ident = !{!0} diff --git a/llvm/test/Analysis/CostModel/SystemZ/intrinsic-cost-crash.ll b/llvm/test/Analysis/CostModel/SystemZ/intrinsic-cost-crash.ll index 245e8f7..058370c 100644 --- a/llvm/test/Analysis/CostModel/SystemZ/intrinsic-cost-crash.ll +++ b/llvm/test/Analysis/CostModel/SystemZ/intrinsic-cost-crash.ll @@ -23,10 +23,10 @@ %"class.llvm::Metadata.306.1758.9986.10470.10954.11438.11922.12406.12890.13374.13858.15310.15794.16278.17730.19182.21118.25958.26926.29346.29830.30314.30798.31282.31766.32250.32734.33702.36606.38058.41638" = type { i8, i8, i16, i32 } ; Function Attrs: argmemonly nounwind -declare void @llvm.lifetime.end(ptr nocapture) #0 +declare void @llvm.lifetime.end(ptr nocapture) ; Function Attrs: nounwind ssp uwtable -define hidden void @fun(ptr %N, i1 %arg) #1 align 2 { +define hidden void @fun(ptr %N, i1 %arg) align 2 { ; CHECK: define entry: %NumOperands.i = getelementptr inbounds %"class.llvm::SDNode.310.1762.9990.10474.10958.11442.11926.12410.12894.13378.13862.15314.15798.16282.17734.19186.21122.25962.26930.29350.29834.30318.30802.31286.31770.32254.32738.33706.36610.38062.41642", ptr %N, i64 0, i32 8 @@ -47,8 +47,6 @@ for.body: ; preds = %for.body, %for.body br i1 %exitcond193, label %for.cond.cleanup, label %for.body } -attributes #0 = { argmemonly nounwind } -attributes #1 = { nounwind ssp uwtable "correctly-rounded-divide-sqrt-fp-math"="false" "disable-tail-calls"="false" "less-precise-fpmad"="false" "frame-pointer"="all" "no-infs-fp-math"="false" "no-jump-tables"="false" "no-nans-fp-math"="false" "no-signed-zeros-fp-math"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="penryn" "target-features"="+cx16,+fxsr,+mmx,+sse,+sse2,+sse3,+sse4.1,+ssse3,+x87" "unsafe-fp-math"="false" "use-soft-float"="false" } !llvm.ident = !{!0} diff --git a/llvm/test/Analysis/Delinearization/constant_functions_multi_dim.ll b/llvm/test/Analysis/Delinearization/constant_functions_multi_dim.ll index 0c0fb41..891d604 100644 --- a/llvm/test/Analysis/Delinearization/constant_functions_multi_dim.ll +++ b/llvm/test/Analysis/Delinearization/constant_functions_multi_dim.ll @@ -4,7 +4,7 @@ target datalayout = "e-m:e-i64:64-f80:128-n8:16:32:64-S128" ; Function Attrs: noinline nounwind uwtable -define void @mat_mul(ptr %C, ptr %A, ptr %B, i64 %N) #0 !kernel_arg_addr_space !2 !kernel_arg_access_qual !3 !kernel_arg_type !4 !kernel_arg_base_type !4 !kernel_arg_type_qual !5 { +define void @mat_mul(ptr %C, ptr %A, ptr %B, i64 %N) !kernel_arg_addr_space !2 !kernel_arg_access_qual !3 !kernel_arg_type !4 !kernel_arg_base_type !4 !kernel_arg_type_qual !5 { ; CHECK-LABEL: 'mat_mul' ; CHECK-NEXT: Inst: 
%tmp = load float, ptr %arrayidx, align 4 ; CHECK-NEXT: AccessFunction: {(4 * %N * %call),+,4}<%for.inc> @@ -22,8 +22,8 @@ entry: br label %entry.split entry.split: ; preds = %entry - %call = tail call i64 @_Z13get_global_idj(i32 0) #3 - %call1 = tail call i64 @_Z13get_global_idj(i32 1) #3 + %call = tail call i64 @_Z13get_global_idj(i32 0) + %call1 = tail call i64 @_Z13get_global_idj(i32 1) %cmp1 = icmp sgt i64 %N, 0 %mul = mul nsw i64 %call, %N br i1 %cmp1, label %for.inc.lr.ph, label %for.end @@ -59,15 +59,10 @@ for.end: ; preds = %for.cond.for.end_cr } ; Function Attrs: nounwind readnone -declare i64 @_Z13get_global_idj(i32) #1 +declare i64 @_Z13get_global_idj(i32) ; Function Attrs: nounwind readnone speculatable -declare float @llvm.fmuladd.f32(float, float, float) #2 - -attributes #0 = { noinline nounwind uwtable "correctly-rounded-divide-sqrt-fp-math"="false" "disable-tail-calls"="false" "less-precise-fpmad"="false" "frame-pointer"="all" "no-infs-fp-math"="false" "no-jump-tables"="false" "no-nans-fp-math"="false" "no-signed-zeros-fp-math"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="x86-64" "target-features"="+fxsr,+mmx,+sse,+sse2,+x87" "unsafe-fp-math"="false" "use-soft-float"="false" } -attributes #1 = { nounwind readnone "correctly-rounded-divide-sqrt-fp-math"="false" "disable-tail-calls"="false" "less-precise-fpmad"="false" "frame-pointer"="all" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "no-signed-zeros-fp-math"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="x86-64" "target-features"="+fxsr,+mmx,+sse,+sse2,+x87" "unsafe-fp-math"="false" "use-soft-float"="false" } -attributes #2 = { nounwind readnone speculatable } -attributes #3 = { nounwind readnone } +declare float @llvm.fmuladd.f32(float, float, float) !llvm.module.flags = !{!0} !llvm.ident = !{!1} diff --git a/llvm/test/Analysis/DependenceAnalysis/MIVCheckConst.ll b/llvm/test/Analysis/DependenceAnalysis/MIVCheckConst.ll index b498d7064..f5be89a 100644 --- a/llvm/test/Analysis/DependenceAnalysis/MIVCheckConst.ll +++ b/llvm/test/Analysis/DependenceAnalysis/MIVCheckConst.ll @@ -30,7 +30,7 @@ target datalayout = "e-m:e-p:32:32-i1:32-i64:64-a:0-v32:32-n16:32" %20 = type { [768 x i32] } %21 = type { [416 x i32] } -define void @test(ptr %A, ptr %B, i1 %arg, i32 %n, i32 %m) #0 align 2 { +define void @test(ptr %A, ptr %B, i1 %arg, i32 %n, i32 %m) align 2 { ; CHECK-LABEL: 'test' ; CHECK-NEXT: Src: %v1 = load i32, ptr %B, align 4 --> Dst: %v1 = load i32, ptr %B, align 4 ; CHECK-NEXT: da analyze - none! 
@@ -91,5 +91,3 @@ bb38: bb40: ret void } - -attributes #0 = { "less-precise-fpmad"="false" "frame-pointer"="all" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "stack-protector-buffer-size"="8" "unsafe-fp-math"="false" "use-soft-float"="false" } diff --git a/llvm/test/Analysis/DependenceAnalysis/NonCanonicalizedSubscript.ll b/llvm/test/Analysis/DependenceAnalysis/NonCanonicalizedSubscript.ll index e5d5d21e..eba017a 100644 --- a/llvm/test/Analysis/DependenceAnalysis/NonCanonicalizedSubscript.ll +++ b/llvm/test/Analysis/DependenceAnalysis/NonCanonicalizedSubscript.ll @@ -52,7 +52,7 @@ for.end: @a = global [10004 x [10004 x i32]] zeroinitializer, align 16 ; Function Attrs: nounwind uwtable -define void @coupled_miv_type_mismatch(i32 %n) #0 { +define void @coupled_miv_type_mismatch(i32 %n) { ; CHECK-LABEL: 'coupled_miv_type_mismatch' ; CHECK-NEXT: Src: %2 = load i32, ptr %arrayidx5, align 4 --> Dst: %2 = load i32, ptr %arrayidx5, align 4 ; CHECK-NEXT: da analyze - none! @@ -101,8 +101,6 @@ for.end13: ; preds = %for.cond ret void } -attributes #0 = { nounwind uwtable "less-precise-fpmad"="false" "frame-pointer"="all" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="x86-64" "target-features"="+sse,+sse2" "unsafe-fp-math"="false" "use-soft-float"="false" } - !llvm.ident = !{!0} !0 = !{!"clang version 3.7.0"} diff --git a/llvm/test/Analysis/MemoryDependenceAnalysis/invariant.group-bug.ll b/llvm/test/Analysis/MemoryDependenceAnalysis/invariant.group-bug.ll index c11191e..5470ef9 100644 --- a/llvm/test/Analysis/MemoryDependenceAnalysis/invariant.group-bug.ll +++ b/llvm/test/Analysis/MemoryDependenceAnalysis/invariant.group-bug.ll @@ -17,13 +17,13 @@ target triple = "x86_64-grtev4-linux-gnu" %4 = type { ptr } %5 = type { i64, [8 x i8] } -define void @fail(ptr noalias sret(i1) %arg, ptr %arg1, ptr %arg2, ptr %arg3, i1 %arg4) local_unnamed_addr #0 { +define void @fail(ptr noalias sret(i1) %arg, ptr %arg1, ptr %arg2, ptr %arg3, i1 %arg4) local_unnamed_addr { ; CHECK-LABEL: @fail( ; CHECK-NEXT: bb: ; CHECK-NEXT: [[I4:%.*]] = load ptr, ptr [[ARG1:%.*]], align 8, !invariant.group [[META6:![0-9]+]] ; CHECK-NEXT: [[I5:%.*]] = getelementptr inbounds ptr, ptr [[I4]], i64 6 ; CHECK-NEXT: [[I6:%.*]] = load ptr, ptr [[I5]], align 8, !invariant.load [[META6]] -; CHECK-NEXT: [[I7:%.*]] = tail call i64 [[I6]](ptr [[ARG1]]) #[[ATTR1:[0-9]+]] +; CHECK-NEXT: [[I7:%.*]] = tail call i64 [[I6]](ptr [[ARG1]]) ; CHECK-NEXT: [[I9:%.*]] = load ptr, ptr [[ARG2:%.*]], align 8 ; CHECK-NEXT: store i8 0, ptr [[I9]], align 1 ; CHECK-NEXT: br i1 [[ARG4:%.*]], label [[BB10:%.*]], label [[BB29:%.*]] @@ -32,7 +32,7 @@ define void @fail(ptr noalias sret(i1) %arg, ptr %arg1, ptr %arg2, ptr %arg3, i1 ; CHECK-NEXT: [[I15_PRE:%.*]] = load ptr, ptr [[I14_PHI_TRANS_INSERT]], align 8, !invariant.load [[META6]] ; CHECK-NEXT: br label [[BB12:%.*]] ; CHECK: bb12: -; CHECK-NEXT: [[I16:%.*]] = call i64 [[I15_PRE]](ptr nonnull [[ARG1]], ptr null, i64 0) #[[ATTR1]] +; CHECK-NEXT: [[I16:%.*]] = call i64 [[I15_PRE]](ptr nonnull [[ARG1]], ptr null, i64 0) ; CHECK-NEXT: br i1 true, label [[BB28:%.*]], label [[BB17:%.*]] ; CHECK: bb17: ; CHECK-NEXT: br i1 true, label [[BB18:%.*]], label [[BB21:%.*]] @@ -55,7 +55,7 @@ bb: %i4 = load ptr, ptr %arg1, align 8, !invariant.group !6 %i5 = getelementptr inbounds ptr, ptr %i4, i64 6 %i6 = load ptr, ptr %i5, align 8, !invariant.load !6 - %i7 = tail call i64 %i6(ptr %arg1) #1 + %i7 = tail call i64 %i6(ptr %arg1) %i9 = load ptr, ptr %arg2, align 8 store i8 0, 
ptr %i9, align 1 br i1 %arg4, label %bb10, label %bb29 @@ -67,7 +67,7 @@ bb12: ; preds = %bb28, %bb10 %i13 = load ptr, ptr %arg1, align 8, !invariant.group !6 %i14 = getelementptr inbounds ptr, ptr %i13, i64 22 %i15 = load ptr, ptr %i14, align 8, !invariant.load !6 - %i16 = call i64 %i15(ptr nonnull %arg1, ptr null, i64 0) #1 + %i16 = call i64 %i15(ptr nonnull %arg1, ptr null, i64 0) br i1 %arg4, label %bb28, label %bb17 bb17: ; preds = %bb12 @@ -110,9 +110,6 @@ bb29: ; preds = %bb28, %bb ret void } -attributes #0 = { "correctly-rounded-divide-sqrt-fp-math"="false" "disable-tail-calls"="false" "frame-pointer"="non-leaf" "less-precise-fpmad"="false" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "no-signed-zeros-fp-math"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="x86-64" "target-features"="+fxsr,+mmx,+popcnt,+sse,+sse2,+sse3,+sse4.1,+sse4.2,+ssse3,+x87" "unsafe-fp-math"="false" "use-soft-float"="false" } -attributes #1 = { nounwind uwtable "correctly-rounded-divide-sqrt-fp-math"="false" "disable-tail-calls"="false" "frame-pointer"="non-leaf" "less-precise-fpmad"="false" "no-infs-fp-math"="false" "no-jump-tables"="false" "no-nans-fp-math"="false" "no-signed-zeros-fp-math"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="x86-64" "target-features"="+fxsr,+mmx,+popcnt,+sse,+sse2,+sse3,+sse4.1,+sse4.2,+ssse3,+x87" "unsafe-fp-math"="false" "use-soft-float"="false" } - !llvm.linker.options = !{} !llvm.module.flags = !{!0, !1, !3, !4, !5} diff --git a/llvm/test/Analysis/MemorySSA/pr28880.ll b/llvm/test/Analysis/MemorySSA/pr28880.ll index 98f3261..a2690b9 100644 --- a/llvm/test/Analysis/MemorySSA/pr28880.ll +++ b/llvm/test/Analysis/MemorySSA/pr28880.ll @@ -8,7 +8,7 @@ @global.1 = external hidden unnamed_addr global double, align 8 ; Function Attrs: nounwind ssp uwtable -define hidden fastcc void @hoge(i1 %arg) unnamed_addr #0 { +define hidden fastcc void @hoge(i1 %arg) unnamed_addr { bb: br i1 %arg, label %bb1, label %bb2 @@ -45,6 +45,3 @@ bb4: ; preds = %bb3 bb6: ; preds = %bb3 unreachable } - -attributes #0 = { nounwind ssp uwtable "disable-tail-calls"="false" "less-precise-fpmad"="false" "frame-pointer"="all" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="core2" "target-features"="+cx16,+fxsr,+mmx,+sse,+sse2,+sse3,+ssse3" "unsafe-fp-math"="false" "use-soft-float"="false" } - diff --git a/llvm/test/Analysis/MemorySSA/pr39197.ll b/llvm/test/Analysis/MemorySSA/pr39197.ll index af57b3c..6be0c58 100644 --- a/llvm/test/Analysis/MemorySSA/pr39197.ll +++ b/llvm/test/Analysis/MemorySSA/pr39197.ll @@ -12,13 +12,13 @@ declare void @dummy() ; CHECK-LABEL: @main() ; Function Attrs: nounwind -define dso_local void @main() #0 { +define dso_local void @main() { call void @func_1() unreachable } ; Function Attrs: nounwind -define dso_local void @func_1() #0 { +define dso_local void @func_1() { %1 = alloca ptr, align 8 %2 = call signext i32 @func_2() %3 = icmp ne i32 %2, 0 @@ -64,45 +64,45 @@ define dso_local void @func_1() #0 { } ; Function Attrs: nounwind -declare dso_local signext i32 @func_2() #0 +declare dso_local signext i32 @func_2() ; Function Attrs: nounwind -define dso_local void @safe_sub_func_uint8_t_u_u() #0 { +define dso_local void @safe_sub_func_uint8_t_u_u() { ret void } ; Function Attrs: nounwind -define dso_local void @safe_add_func_int64_t_s_s() #0 { +define dso_local void @safe_add_func_int64_t_s_s() { ret void } ; Function Attrs: nounwind -define dso_local void 
@safe_rshift_func_int16_t_s_u() #0 { +define dso_local void @safe_rshift_func_int16_t_s_u() { ret void } ; Function Attrs: nounwind -define dso_local void @safe_div_func_uint8_t_u_u() #0 { +define dso_local void @safe_div_func_uint8_t_u_u() { ret void } ; Function Attrs: nounwind -define dso_local void @safe_mul_func_uint16_t_u_u() #0 { +define dso_local void @safe_mul_func_uint16_t_u_u() { ret void } ; Function Attrs: nounwind -define dso_local void @safe_mul_func_int16_t_s_s() #0 { +define dso_local void @safe_mul_func_int16_t_s_s() { ret void } ; Function Attrs: nounwind -define dso_local void @safe_div_func_int32_t_s_s() #0 { +define dso_local void @safe_div_func_int32_t_s_s() { ret void } ; Function Attrs: nounwind -define dso_local signext i16 @safe_sub_func_int16_t_s_s(i16 signext) #0 { +define dso_local signext i16 @safe_sub_func_int16_t_s_s(i16 signext) { %2 = alloca i16, align 2 store i16 %0, ptr %2, align 2, !tbaa !1 %3 = load i16, ptr %2, align 2, !tbaa !1 @@ -113,29 +113,25 @@ define dso_local signext i16 @safe_sub_func_int16_t_s_s(i16 signext) #0 { } ; Function Attrs: nounwind -define dso_local void @safe_add_func_uint16_t_u_u() #0 { +define dso_local void @safe_add_func_uint16_t_u_u() { ret void } ; Function Attrs: nounwind -define dso_local void @safe_div_func_int8_t_s_s() #0 { +define dso_local void @safe_div_func_int8_t_s_s() { ret void } ; Function Attrs: nounwind -define dso_local void @safe_add_func_int16_t_s_s() #0 { +define dso_local void @safe_add_func_int16_t_s_s() { ret void } ; Function Attrs: nounwind -define dso_local void @safe_add_func_uint8_t_u_u() #0 { +define dso_local void @safe_add_func_uint8_t_u_u() { ret void } -attributes #0 = { nounwind "correctly-rounded-divide-sqrt-fp-math"="false" "disable-tail-calls"="false" "less-precise-fpmad"="false" "frame-pointer"="none" "no-infs-fp-math"="false" "no-jump-tables"="false" "no-nans-fp-math"="false" "no-signed-zeros-fp-math"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="z13" "target-features"="+transactional-execution,+vector" "unsafe-fp-math"="false" "use-soft-float"="false" } -attributes #1 = { argmemonly nounwind } -attributes #2 = { nounwind } - !llvm.ident = !{!0} !0 = !{!"clang version 8.0.0 (http://llvm.org/git/clang.git 7cda4756fc9713d98fd3513b8df172700f267bad) (http://llvm.org/git/llvm.git 199c0d32e96b646bd8cf6beeaf0f99f8a434b56a)"} diff --git a/llvm/test/Analysis/MemorySSA/pr40038.ll b/llvm/test/Analysis/MemorySSA/pr40038.ll index efdcbe5..39ea78b 100644 --- a/llvm/test/Analysis/MemorySSA/pr40038.ll +++ b/llvm/test/Analysis/MemorySSA/pr40038.ll @@ -10,21 +10,21 @@ target triple = "s390x-ibm-linux" ; Function Attrs: nounwind ; CHECK-LABEL: @main -define dso_local void @main() #0 { +define dso_local void @main() { bb: call void @func_1() unreachable } ; Function Attrs: nounwind -define dso_local void @func_1() #0 { +define dso_local void @func_1() { bb: call void @func_2() unreachable } ; Function Attrs: nounwind -define dso_local void @func_2() #0 { +define dso_local void @func_2() { bb: %tmp = alloca i32, align 4 store i32 0, ptr @g_80, align 4, !tbaa !1 @@ -68,10 +68,7 @@ bb18: ; preds = %bb12, %bb1 } ; Function Attrs: cold noreturn nounwind -declare void @llvm.trap() #1 - -attributes #0 = { nounwind "correctly-rounded-divide-sqrt-fp-math"="false" "disable-tail-calls"="false" "less-precise-fpmad"="false" "min-legal-vector-width"="0" "frame-pointer"="none" "no-infs-fp-math"="false" "no-jump-tables"="false" "no-nans-fp-math"="false" "no-signed-zeros-fp-math"="false" 
"no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="z13" "target-features"="+transactional-execution,+vector" "unsafe-fp-math"="false" "use-soft-float"="false" } -attributes #1 = { cold noreturn nounwind } +declare void @llvm.trap() !llvm.ident = !{!0} diff --git a/llvm/test/Analysis/MemorySSA/pr43569.ll b/llvm/test/Analysis/MemorySSA/pr43569.ll index 02d074e..c81f8d4 100644 --- a/llvm/test/Analysis/MemorySSA/pr43569.ll +++ b/llvm/test/Analysis/MemorySSA/pr43569.ll @@ -10,7 +10,7 @@ target triple = "x86_64-unknown-linux-gnu" ; CHECK-LABEL: @c() ; Function Attrs: nounwind uwtable -define dso_local void @c() #0 { +define dso_local void @c() { entry: call void @llvm.instrprof.increment(ptr @__profn_c, i64 68269137, i32 3, i32 0) br label %for.cond @@ -42,8 +42,4 @@ for.end: ; preds = %for.cond1 } ; Function Attrs: nounwind -declare void @llvm.instrprof.increment(ptr, i64, i32, i32) #1 - -attributes #0 = { nounwind uwtable "correctly-rounded-divide-sqrt-fp-math"="false" "disable-tail-calls"="false" "frame-pointer"="none" "less-precise-fpmad"="false" "min-legal-vector-width"="0" "no-infs-fp-math"="false" "no-jump-tables"="false" "no-nans-fp-math"="false" "no-signed-zeros-fp-math"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="x86-64" "target-features"="+cx8,+fxsr,+mmx,+sse,+sse2,+x87" "unsafe-fp-math"="false" "use-soft-float"="false" } -attributes #1 = { nounwind } - +declare void @llvm.instrprof.increment(ptr, i64, i32, i32) diff --git a/llvm/test/Analysis/ScalarEvolution/pr22674.ll b/llvm/test/Analysis/ScalarEvolution/pr22674.ll index 95f96ca..b2f4ae6 100644 --- a/llvm/test/Analysis/ScalarEvolution/pr22674.ll +++ b/llvm/test/Analysis/ScalarEvolution/pr22674.ll @@ -11,7 +11,7 @@ target triple = "x86_64-pc-linux-gnux32" %"class.llvm::AttributeImpl.2.1802.3601.5914.6685.7456.8227.9255.9769.10026.18508" = type <{ ptr, %"class.llvm::FoldingSetImpl::Node.1.1801.3600.5913.6684.7455.8226.9254.9768.10025.18505", i8, [3 x i8] }> ; Function Attrs: nounwind uwtable -define void @_ZNK4llvm11AttrBuilder13hasAttributesENS_12AttributeSetEy(i1 %arg) #0 align 2 { +define void @_ZNK4llvm11AttrBuilder13hasAttributesENS_12AttributeSetEy(i1 %arg) align 2 { entry: br i1 %arg, label %cond.false, label %_ZNK4llvm12AttributeSet11getNumSlotsEv.exit @@ -82,8 +82,6 @@ return: ; preds = %_ZNK4llvm9Attribute ret void } -attributes #0 = { nounwind uwtable "less-precise-fpmad"="false" "frame-pointer"="none" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "stack-protector-buffer-size"="8" "unsafe-fp-math"="false" "use-soft-float"="false" } - !llvm.module.flags = !{!0} !llvm.ident = !{!1} diff --git a/llvm/test/Analysis/ScalarEvolution/scev-canonical-mode.ll b/llvm/test/Analysis/ScalarEvolution/scev-canonical-mode.ll index 3879b2e7..d9cc3e5 100644 --- a/llvm/test/Analysis/ScalarEvolution/scev-canonical-mode.ll +++ b/llvm/test/Analysis/ScalarEvolution/scev-canonical-mode.ll @@ -6,7 +6,7 @@ target datalayout = "e-m:e-i64:64-f80:128-n8:16:32:64-S128" target triple = "x86_64-unknown-linux-gnu" ; Function Attrs: norecurse nounwind uwtable -define void @ehF(i1 %arg) #0 { +define void @ehF(i1 %arg) { entry: br i1 %arg, label %if.then.i, label %hup.exit @@ -28,5 +28,3 @@ for.body.i: ; preds = %for.body.i, %for.bo hup.exit: ; preds = %for.body.i, %if.then.i, %entry ret void } - -attributes #0 = { norecurse nounwind uwtable "disable-tail-calls"="false" "less-precise-fpmad"="false" "frame-pointer"="none" "no-infs-fp-math"="false" "no-nans-fp-math"="false" 
"stack-protector-buffer-size"="8" "target-cpu"="x86-64" "target-features"="+fxsr,+mmx,+sse,+sse2" "unsafe-fp-math"="false" "use-soft-float"="false" } diff --git a/llvm/test/Analysis/TypeBasedAliasAnalysis/PR17620.ll b/llvm/test/Analysis/TypeBasedAliasAnalysis/PR17620.ll index 67d81e7..7fb4231 100644 --- a/llvm/test/Analysis/TypeBasedAliasAnalysis/PR17620.ll +++ b/llvm/test/Analysis/TypeBasedAliasAnalysis/PR17620.ll @@ -13,7 +13,7 @@ target datalayout = "e-p:64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f3 %classD = type { ptr } ; Function Attrs: ssp uwtable -define ptr @test(ptr %this, ptr %p1) #0 align 2 { +define ptr @test(ptr %this, ptr %p1) align 2 { entry: ; CHECK-LABEL: @test ; CHECK: load ptr, ptr %p1, align 8, !tbaa @@ -25,10 +25,7 @@ entry: unreachable } -declare void @callee(ptr, ptr) #1 - -attributes #0 = { ssp uwtable "less-precise-fpmad"="false" "frame-pointer"="all" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "stack-protector-buffer-size"="8" "unsafe-fp-math"="false" "use-soft-float"="false" } -attributes #1 = { "less-precise-fpmad"="false" "frame-pointer"="all" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "stack-protector-buffer-size"="8" "unsafe-fp-math"="false" "use-soft-float"="false" } +declare void @callee(ptr, ptr) !llvm.ident = !{!0} diff --git a/llvm/test/Analysis/TypeBasedAliasAnalysis/tbaa-path.ll b/llvm/test/Analysis/TypeBasedAliasAnalysis/tbaa-path.ll index 942fdf5..f9a2988 100644 --- a/llvm/test/Analysis/TypeBasedAliasAnalysis/tbaa-path.ll +++ b/llvm/test/Analysis/TypeBasedAliasAnalysis/tbaa-path.ll @@ -9,7 +9,7 @@ %struct.StructC = type { i16, %struct.StructB, i32 } %struct.StructD = type { i16, %struct.StructB, i32, i8 } -define i32 @_Z1gPjP7StructAy(ptr %s, ptr %A, i64 %count) #0 { +define i32 @_Z1gPjP7StructAy(ptr %s, ptr %A, i64 %count) { entry: ; Access to ptr and &(A->f32). ; CHECK: Function @@ -35,7 +35,7 @@ entry: ret i32 %3 } -define i32 @_Z2g2PjP7StructAy(ptr %s, ptr %A, i64 %count) #0 { +define i32 @_Z2g2PjP7StructAy(ptr %s, ptr %A, i64 %count) { entry: ; Access to ptr and &(A->f16). ; CHECK: Function @@ -60,7 +60,7 @@ entry: ret i32 %3 } -define i32 @_Z2g3P7StructAP7StructBy(ptr %A, ptr %B, i64 %count) #0 { +define i32 @_Z2g3P7StructAP7StructBy(ptr %A, ptr %B, i64 %count) { entry: ; Access to &(A->f32) and &(B->a.f32). ; CHECK: Function @@ -89,7 +89,7 @@ entry: ret i32 %3 } -define i32 @_Z2g4P7StructAP7StructBy(ptr %A, ptr %B, i64 %count) #0 { +define i32 @_Z2g4P7StructAP7StructBy(ptr %A, ptr %B, i64 %count) { entry: ; Access to &(A->f32) and &(B->a.f16). ; CHECK: Function @@ -117,7 +117,7 @@ entry: ret i32 %3 } -define i32 @_Z2g5P7StructAP7StructBy(ptr %A, ptr %B, i64 %count) #0 { +define i32 @_Z2g5P7StructAP7StructBy(ptr %A, ptr %B, i64 %count) { entry: ; Access to &(A->f32) and &(B->f32). ; CHECK: Function @@ -145,7 +145,7 @@ entry: ret i32 %3 } -define i32 @_Z2g6P7StructAP7StructBy(ptr %A, ptr %B, i64 %count) #0 { +define i32 @_Z2g6P7StructAP7StructBy(ptr %A, ptr %B, i64 %count) { entry: ; Access to &(A->f32) and &(B->a.f32_2). ; CHECK: Function @@ -174,7 +174,7 @@ entry: ret i32 %3 } -define i32 @_Z2g7P7StructAP7StructSy(ptr %A, ptr %S, i64 %count) #0 { +define i32 @_Z2g7P7StructAP7StructSy(ptr %A, ptr %S, i64 %count) { entry: ; Access to &(A->f32) and &(S->f32). ; CHECK: Function @@ -202,7 +202,7 @@ entry: ret i32 %3 } -define i32 @_Z2g8P7StructAP7StructSy(ptr %A, ptr %S, i64 %count) #0 { +define i32 @_Z2g8P7StructAP7StructSy(ptr %A, ptr %S, i64 %count) { entry: ; Access to &(A->f32) and &(S->f16). 
; CHECK: Function
@@ -229,7 +229,7 @@ entry:
  ret i32 %3
}

-define i32 @_Z2g9P7StructSP8StructS2y(ptr %S, ptr %S2, i64 %count) #0 {
+define i32 @_Z2g9P7StructSP8StructS2y(ptr %S, ptr %S2, i64 %count) {
entry:
; Access to &(S->f32) and &(S2->f32).
; CHECK: Function
@@ -257,7 +257,7 @@ entry:
  ret i32 %3
}

-define i32 @_Z3g10P7StructSP8StructS2y(ptr %S, ptr %S2, i64 %count) #0 {
+define i32 @_Z3g10P7StructSP8StructS2y(ptr %S, ptr %S2, i64 %count) {
entry:
; Access to &(S->f32) and &(S2->f16).
; CHECK: Function
@@ -284,7 +284,7 @@ entry:
  ret i32 %3
}

-define i32 @_Z3g11P7StructCP7StructDy(ptr %C, ptr %D, i64 %count) #0 {
+define i32 @_Z3g11P7StructCP7StructDy(ptr %C, ptr %D, i64 %count) {
entry:
; Access to &(C->b.a.f32) and &(D->b.a.f32).
; CHECK: Function
@@ -318,7 +318,7 @@ entry:
  ret i32 %3
}

-define i32 @_Z3g12P7StructCP7StructDy(ptr %C, ptr %D, i64 %count) #0 {
+define i32 @_Z3g12P7StructCP7StructDy(ptr %C, ptr %D, i64 %count) {
entry:
; Access to &(b1->a.f32) and &(b2->a.f32).
; CHECK: Function
@@ -357,8 +357,6 @@ entry:
  ret i32 %5
}

-attributes #0 = { nounwind "less-precise-fpmad"="false" "frame-pointer"="none" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "unsafe-fp-math"="false" "use-soft-float"="false" }
-
!0 = !{!1, !1, i64 0}
!1 = !{!"any pointer", !2}
!2 = !{!"omnipotent char", !3}
diff --git a/llvm/test/Assembler/atomic.ll b/llvm/test/Assembler/atomic.ll
index 39f33f9f..6609edc 100644
--- a/llvm/test/Assembler/atomic.ll
+++ b/llvm/test/Assembler/atomic.ll
@@ -52,6 +52,25 @@ define void @f(ptr %x) {
  ; CHECK: atomicrmw volatile usub_sat ptr %x, i32 10 syncscope("agent") monotonic
  atomicrmw volatile usub_sat ptr %x, i32 10 syncscope("agent") monotonic

+  ; CHECK: load atomic <1 x i32>, ptr %x unordered, align 4
+  load atomic <1 x i32>, ptr %x unordered, align 4
+  ; CHECK: store atomic <1 x i32> splat (i32 3), ptr %x release, align 4
+  store atomic <1 x i32> <i32 3>, ptr %x release, align 4
+  ; CHECK: load atomic <2 x i32>, ptr %x unordered, align 4
+  load atomic <2 x i32>, ptr %x unordered, align 4
+  ; CHECK: store atomic <2 x i32> <i32 3, i32 4>, ptr %x release, align 4
+  store atomic <2 x i32> <i32 3, i32 4>, ptr %x release, align 4
+
+  ; CHECK: load atomic <2 x ptr>, ptr %x unordered, align 4
+  load atomic <2 x ptr>, ptr %x unordered, align 4
+  ; CHECK: store atomic <2 x ptr> zeroinitializer, ptr %x release, align 4
+  store atomic <2 x ptr> zeroinitializer, ptr %x release, align 4
+
+  ; CHECK: load atomic <2 x float>, ptr %x unordered, align 4
+  load atomic <2 x float>, ptr %x unordered, align 4
+  ; CHECK: store atomic <2 x float> <float 3.000000e+00, float 4.000000e+00>, ptr %x release, align 4
+  store atomic <2 x float> <float 3.0, float 4.0>, ptr %x release, align 4
+
  ; CHECK: fence syncscope("singlethread") release
  fence syncscope("singlethread") release
  ; CHECK: fence seq_cst
diff --git a/llvm/test/Bitcode/DILocation-implicit-code.ll b/llvm/test/Bitcode/DILocation-implicit-code.ll
index 159cb8a..1090bff 100644
--- a/llvm/test/Bitcode/DILocation-implicit-code.ll
+++ b/llvm/test/Bitcode/DILocation-implicit-code.ll
@@ -13,7 +13,7 @@ $_ZN1A3fooEi = comdat any
@_ZTIi = external dso_local constant i8*

; Function Attrs: noinline optnone uwtable
-define dso_local void @_Z5test1v() #0 personality i8* bitcast (i32 (...)* @__gxx_personality_v0 to i8*) !dbg !7 {
+define dso_local void @_Z5test1v() personality i8* bitcast (i32 (...)* @__gxx_personality_v0 to i8*) !dbg !7 {
entry:
  %retval = alloca %struct.A, align 1
  %a = alloca %struct.A, align 1
@@ -40,13 +40,13 @@ lpad: ; preds = %entry
catch.dispatch: ; preds = %lpad %sel = load i32, i32* %ehselector.slot, align 4, !dbg !13 - %3 = call i32 @llvm.eh.typeid.for(i8* bitcast (i8** @_ZTIi to i8*)) #4, !dbg !13 + %3 = call i32 @llvm.eh.typeid.for(i8* bitcast (i8** @_ZTIi to i8*)), !dbg !13 %matches = icmp eq i32 %sel, %3, !dbg !13 br i1 %matches, label %catch, label %eh.resume, !dbg !13 catch: ; preds = %catch.dispatch %exn = load i8*, i8** %exn.slot, align 8, !dbg !13 - %4 = call i8* @__cxa_begin_catch(i8* %exn) #4, !dbg !13 + %4 = call i8* @__cxa_begin_catch(i8* %exn), !dbg !13 %5 = bitcast i8* %4 to i32*, !dbg !13 %6 = load i32, i32* %5, align 4, !dbg !13 store i32 %6, i32* %e, align 4, !dbg !13 @@ -55,7 +55,7 @@ catch: ; preds = %catch.dispatch to label %invoke.cont2 unwind label %lpad1, !dbg !15 invoke.cont2: ; preds = %catch - call void @__cxa_end_catch() #4, !dbg !16 + call void @__cxa_end_catch(), !dbg !16 br label %return lpad1: ; preds = %catch @@ -65,7 +65,7 @@ lpad1: ; preds = %catch store i8* %9, i8** %exn.slot, align 8, !dbg !12 %10 = extractvalue { i8*, i32 } %8, 1, !dbg !12 store i32 %10, i32* %ehselector.slot, align 4, !dbg !12 - call void @__cxa_end_catch() #4, !dbg !16 + call void @__cxa_end_catch(), !dbg !16 br label %eh.resume, !dbg !16 try.cont: ; No predecessors! @@ -84,7 +84,7 @@ eh.resume: ; preds = %lpad1, %catch.dispa } ; Function Attrs: noinline nounwind optnone uwtable -define linkonce_odr dso_local void @_ZN1AC2Ev(%struct.A* %this) unnamed_addr #1 comdat align 2 !dbg !17 { +define linkonce_odr dso_local void @_ZN1AC2Ev(%struct.A* %this) unnamed_addr comdat align 2 !dbg !17 { entry: %this.addr = alloca %struct.A*, align 8 store %struct.A* %this, %struct.A** %this.addr, align 8 @@ -93,7 +93,7 @@ entry: } ; Function Attrs: noinline optnone uwtable -define linkonce_odr dso_local void @_ZN1A3fooEi(%struct.A* %this, i32 %i) #0 comdat align 2 !dbg !19 { +define linkonce_odr dso_local void @_ZN1A3fooEi(%struct.A* %this, i32 %i) comdat align 2 !dbg !19 { entry: %retval = alloca %struct.A, align 1 %this.addr = alloca %struct.A*, align 8 @@ -106,10 +106,10 @@ entry: br i1 %cmp, label %if.then, label %if.end, !dbg !20 if.then: ; preds = %entry - %exception = call i8* @__cxa_allocate_exception(i64 4) #4, !dbg !22 + %exception = call i8* @__cxa_allocate_exception(i64 4), !dbg !22 %1 = bitcast i8* %exception to i32*, !dbg !22 store i32 1, i32* %1, align 16, !dbg !22 - call void @__cxa_throw(i8* %exception, i8* bitcast (i8** @_ZTIi to i8*), i8* null) #5, !dbg !22 + call void @__cxa_throw(i8* %exception, i8* bitcast (i8** @_ZTIi to i8*), i8* null), !dbg !22 unreachable, !dbg !22 if.end: ; preds = %entry @@ -119,17 +119,17 @@ if.end: ; preds = %entry declare dso_local i32 @__gxx_personality_v0(...) 
; Function Attrs: nounwind readnone -declare i32 @llvm.eh.typeid.for(i8*) #2 +declare i32 @llvm.eh.typeid.for(i8*) declare dso_local i8* @__cxa_begin_catch(i8*) declare dso_local void @__cxa_end_catch() ; Function Attrs: noreturn nounwind -declare void @llvm.trap() #3 +declare void @llvm.trap() ; Function Attrs: noinline optnone uwtable -define dso_local void @_Z5test2v() #0 !dbg !24 { +define dso_local void @_Z5test2v() !dbg !24 { entry: %a = alloca %struct.A, align 1 %b = alloca %struct.A, align 1 @@ -143,13 +143,6 @@ declare dso_local i8* @__cxa_allocate_exception(i64) declare dso_local void @__cxa_throw(i8*, i8*, i8*) -attributes #0 = { noinline optnone uwtable "correctly-rounded-divide-sqrt-fp-math"="false" "disable-tail-calls"="false" "less-precise-fpmad"="false" "frame-pointer"="all" "no-infs-fp-math"="false" "no-jump-tables"="false" "no-nans-fp-math"="false" "no-signed-zeros-fp-math"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="x86-64" "target-features"="+fxsr,+mmx,+sse,+sse2,+x87" "unsafe-fp-math"="false" "use-soft-float"="false" } -attributes #1 = { noinline nounwind optnone uwtable "correctly-rounded-divide-sqrt-fp-math"="false" "disable-tail-calls"="false" "less-precise-fpmad"="false" "frame-pointer"="all" "no-infs-fp-math"="false" "no-jump-tables"="false" "no-nans-fp-math"="false" "no-signed-zeros-fp-math"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="x86-64" "target-features"="+fxsr,+mmx,+sse,+sse2,+x87" "unsafe-fp-math"="false" "use-soft-float"="false" } -attributes #2 = { nounwind readnone } -attributes #3 = { noreturn nounwind } -attributes #4 = { nounwind } -attributes #5 = { noreturn } - !llvm.dbg.cu = !{!0} !llvm.module.flags = !{!3, !4, !5} !llvm.ident = !{!6} diff --git a/llvm/test/Bitcode/drop-debug-info.3.5.ll b/llvm/test/Bitcode/drop-debug-info.3.5.ll index 35d3958..16102a0 100644 --- a/llvm/test/Bitcode/drop-debug-info.3.5.ll +++ b/llvm/test/Bitcode/drop-debug-info.3.5.ll @@ -12,15 +12,13 @@ target datalayout = "e-m:o-i64:64-f80:128-n8:16:32:64-S128" target triple = "x86_64-apple-macosx10.10.0" ; Function Attrs: nounwind ssp uwtable -define i32 @main() #0 { +define i32 @main() { entry: %retval = alloca i32, align 4 store i32 0, i32* %retval ret i32 0, !dbg !12 } -attributes #0 = { nounwind ssp uwtable "less-precise-fpmad"="false" "frame-pointer"="all" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "stack-protector-buffer-size"="8" "unsafe-fp-math"="false" "use-soft-float"="false" } - !llvm.dbg.cu = !{!0} !llvm.module.flags = !{!9, !10} !llvm.ident = !{!11} diff --git a/llvm/test/Bitcode/upgrade-tbaa.ll b/llvm/test/Bitcode/upgrade-tbaa.ll index 7a70d99..893ba69 100644 --- a/llvm/test/Bitcode/upgrade-tbaa.ll +++ b/llvm/test/Bitcode/upgrade-tbaa.ll @@ -2,7 +2,7 @@ ; RUN: verify-uselistorder < %s ; Function Attrs: nounwind -define void @_Z4testPiPf(i32* nocapture %pI, float* nocapture %pF) #0 { +define void @_Z4testPiPf(i32* nocapture %pI, float* nocapture %pF) { entry: store i32 0, i32* %pI, align 4, !tbaa !{!"int", !0} ; CHECK: store i32 0, ptr %pI, align 4, !tbaa [[TAG_INT:!.*]] @@ -11,8 +11,6 @@ entry: ret void } -attributes #0 = { nounwind "less-precise-fpmad"="false" "frame-pointer"="none" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "no-realign-stack" "stack-protector-buffer-size"="8" "unsafe-fp-math"="false" "use-soft-float"="false" } - !0 = !{!"omnipotent char", !1} !1 = !{!"Simple C/C++ TBAA"} !2 = !{!"float", !0} diff --git a/llvm/test/CodeGen/AArch64/pr164181.ll 
b/llvm/test/CodeGen/AArch64/pr164181.ll new file mode 100644 index 0000000..4ec63ec --- /dev/null +++ b/llvm/test/CodeGen/AArch64/pr164181.ll @@ -0,0 +1,640 @@ +; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py UTC_ARGS: --version 6 +; RUN: llc -verify-machineinstrs < %s | FileCheck %s + +; This test recreates a regalloc crash reported in +; https://github.com/llvm/llvm-project/issues/164181 +; When rematting an instruction we need to make sure to constrain the newly +; allocated register to both the rematted def's reg class and the use's reg +; class. + +target triple = "aarch64-unknown-linux-gnu" + +@var_32 = external global i16 +@var_35 = external global i64 +@var_39 = external global i64 +@var_46 = external global i64 +@var_50 = external global i32 + +define void @f(i1 %var_0, i16 %var_1, i64 %var_2, i8 %var_3, i16 %var_4, i1 %var_5, i32 %var_6, i32 %var_7, i8 %var_10, i64 %var_11, i8 %var_14, i32 %var_15, i64 %var_16, ptr %arr_3, ptr %arr_4, ptr %arr_6, ptr %arr_7, ptr %arr_12, ptr %arr_13, ptr %arr_19, i64 %mul, i64 %conv35, i64 %idxprom138.us16, i8 %0, i8 %1, ptr %invariant.gep875.us) #0 { +; CHECK-LABEL: f: +; CHECK: // %bb.0: // %entry +; CHECK-NEXT: sub sp, sp, #240 +; CHECK-NEXT: str x30, [sp, #144] // 8-byte Folded Spill +; CHECK-NEXT: stp x28, x27, [sp, #160] // 16-byte Folded Spill +; CHECK-NEXT: stp x26, x25, [sp, #176] // 16-byte Folded Spill +; CHECK-NEXT: stp x24, x23, [sp, #192] // 16-byte Folded Spill +; CHECK-NEXT: stp x22, x21, [sp, #208] // 16-byte Folded Spill +; CHECK-NEXT: stp x20, x19, [sp, #224] // 16-byte Folded Spill +; CHECK-NEXT: str w6, [sp, #20] // 4-byte Folded Spill +; CHECK-NEXT: str w4, [sp, #72] // 4-byte Folded Spill +; CHECK-NEXT: str w3, [sp, #112] // 4-byte Folded Spill +; CHECK-NEXT: str w5, [sp, #36] // 4-byte Folded Spill +; CHECK-NEXT: tbz w5, #0, .LBB0_43 +; CHECK-NEXT: // %bb.1: // %for.body41.lr.ph +; CHECK-NEXT: ldr x4, [sp, #312] +; CHECK-NEXT: ldr x14, [sp, #280] +; CHECK-NEXT: tbz w0, #0, .LBB0_42 +; CHECK-NEXT: // %bb.2: // %for.body41.us.preheader +; CHECK-NEXT: ldrb w8, [sp, #368] +; CHECK-NEXT: ldrb w12, [sp, #256] +; CHECK-NEXT: ldr w26, [sp, #264] +; CHECK-NEXT: adrp x20, :got:var_50 +; CHECK-NEXT: mov x28, #-1 // =0xffffffffffffffff +; CHECK-NEXT: mov w21, #36006 // =0x8ca6 +; CHECK-NEXT: ldr x11, [sp, #376] +; CHECK-NEXT: ldrb w13, [sp, #360] +; CHECK-NEXT: ldp x17, x16, [sp, #296] +; CHECK-NEXT: mov w22, #1 // =0x1 +; CHECK-NEXT: add x27, x14, #120 +; CHECK-NEXT: ldr x18, [sp, #288] +; CHECK-NEXT: ldr x7, [sp, #272] +; CHECK-NEXT: ldr x5, [sp, #248] +; CHECK-NEXT: mov x10, xzr +; CHECK-NEXT: mov w23, wzr +; CHECK-NEXT: mov w30, wzr +; CHECK-NEXT: ldrb w19, [sp, #240] +; CHECK-NEXT: mov w25, wzr +; CHECK-NEXT: mov x24, xzr +; CHECK-NEXT: str w8, [sp, #108] // 4-byte Folded Spill +; CHECK-NEXT: mov x3, x26 +; CHECK-NEXT: ldp x9, x8, [sp, #344] +; CHECK-NEXT: str w12, [sp, #92] // 4-byte Folded Spill +; CHECK-NEXT: mov w12, #1 // =0x1 +; CHECK-NEXT: bic w12, w12, w0 +; CHECK-NEXT: str w12, [sp, #76] // 4-byte Folded Spill +; CHECK-NEXT: mov w12, #48 // =0x30 +; CHECK-NEXT: str x9, [sp, #136] // 8-byte Folded Spill +; CHECK-NEXT: ldp x9, x15, [sp, #328] +; CHECK-NEXT: madd x8, x8, x12, x9 +; CHECK-NEXT: str x8, [sp, #64] // 8-byte Folded Spill +; CHECK-NEXT: add x8, x26, w26, uxtw #1 +; CHECK-NEXT: ldr x20, [x20, :got_lo12:var_50] +; CHECK-NEXT: str x26, [sp, #96] // 8-byte Folded Spill +; CHECK-NEXT: str x14, [sp, #152] // 8-byte Folded Spill +; CHECK-NEXT: lsl x6, x8, #3 +; CHECK-NEXT: add x8, x14, 
#120 +; CHECK-NEXT: str x4, [sp, #24] // 8-byte Folded Spill +; CHECK-NEXT: str w19, [sp, #16] // 4-byte Folded Spill +; CHECK-NEXT: str x8, [sp, #80] // 8-byte Folded Spill +; CHECK-NEXT: b .LBB0_4 +; CHECK-NEXT: .p2align 5, , 16 +; CHECK-NEXT: .LBB0_3: // in Loop: Header=BB0_4 Depth=1 +; CHECK-NEXT: ldr w19, [sp, #16] // 4-byte Folded Reload +; CHECK-NEXT: ldr x24, [sp, #40] // 8-byte Folded Reload +; CHECK-NEXT: ldr x14, [sp, #152] // 8-byte Folded Reload +; CHECK-NEXT: mov w23, #1 // =0x1 +; CHECK-NEXT: mov w30, #1 // =0x1 +; CHECK-NEXT: mov w25, w19 +; CHECK-NEXT: .LBB0_4: // %for.body41.us +; CHECK-NEXT: // =>This Loop Header: Depth=1 +; CHECK-NEXT: // Child Loop BB0_6 Depth 2 +; CHECK-NEXT: // Child Loop BB0_8 Depth 3 +; CHECK-NEXT: // Child Loop BB0_10 Depth 4 +; CHECK-NEXT: // Child Loop BB0_11 Depth 5 +; CHECK-NEXT: // Child Loop BB0_28 Depth 5 +; CHECK-NEXT: // Child Loop BB0_39 Depth 5 +; CHECK-NEXT: ldr w8, [sp, #20] // 4-byte Folded Reload +; CHECK-NEXT: mov x12, x24 +; CHECK-NEXT: str x24, [sp, #48] // 8-byte Folded Spill +; CHECK-NEXT: str w8, [x14] +; CHECK-NEXT: mov w8, #1 // =0x1 +; CHECK-NEXT: strb w19, [x14] +; CHECK-NEXT: b .LBB0_6 +; CHECK-NEXT: .p2align 5, , 16 +; CHECK-NEXT: .LBB0_5: // %for.cond.cleanup93.us +; CHECK-NEXT: // in Loop: Header=BB0_6 Depth=2 +; CHECK-NEXT: ldr w9, [sp, #36] // 4-byte Folded Reload +; CHECK-NEXT: ldr x4, [sp, #24] // 8-byte Folded Reload +; CHECK-NEXT: ldp x24, x12, [sp, #48] // 16-byte Folded Reload +; CHECK-NEXT: mov x22, xzr +; CHECK-NEXT: mov w25, wzr +; CHECK-NEXT: mov w8, wzr +; CHECK-NEXT: tbz w9, #0, .LBB0_3 +; CHECK-NEXT: .LBB0_6: // %for.body67.us +; CHECK-NEXT: // Parent Loop BB0_4 Depth=1 +; CHECK-NEXT: // => This Loop Header: Depth=2 +; CHECK-NEXT: // Child Loop BB0_8 Depth 3 +; CHECK-NEXT: // Child Loop BB0_10 Depth 4 +; CHECK-NEXT: // Child Loop BB0_11 Depth 5 +; CHECK-NEXT: // Child Loop BB0_28 Depth 5 +; CHECK-NEXT: // Child Loop BB0_39 Depth 5 +; CHECK-NEXT: str x12, [sp, #40] // 8-byte Folded Spill +; CHECK-NEXT: cmn x24, #30 +; CHECK-NEXT: mov x12, #-30 // =0xffffffffffffffe2 +; CHECK-NEXT: add x19, x4, w8, sxtw #2 +; CHECK-NEXT: mov x9, xzr +; CHECK-NEXT: csel x12, x24, x12, lo +; CHECK-NEXT: mov w4, w30 +; CHECK-NEXT: str x12, [sp, #56] // 8-byte Folded Spill +; CHECK-NEXT: b .LBB0_8 +; CHECK-NEXT: .p2align 5, , 16 +; CHECK-NEXT: .LBB0_7: // %for.cond.cleanup98.us +; CHECK-NEXT: // in Loop: Header=BB0_8 Depth=3 +; CHECK-NEXT: ldr w4, [sp, #72] // 4-byte Folded Reload +; CHECK-NEXT: ldr w23, [sp, #128] // 4-byte Folded Reload +; CHECK-NEXT: mov w9, #1 // =0x1 +; CHECK-NEXT: mov x22, xzr +; CHECK-NEXT: tbnz w0, #0, .LBB0_5 +; CHECK-NEXT: .LBB0_8: // %for.cond95.preheader.us +; CHECK-NEXT: // Parent Loop BB0_4 Depth=1 +; CHECK-NEXT: // Parent Loop BB0_6 Depth=2 +; CHECK-NEXT: // => This Loop Header: Depth=3 +; CHECK-NEXT: // Child Loop BB0_10 Depth 4 +; CHECK-NEXT: // Child Loop BB0_11 Depth 5 +; CHECK-NEXT: // Child Loop BB0_28 Depth 5 +; CHECK-NEXT: // Child Loop BB0_39 Depth 5 +; CHECK-NEXT: ldr x8, [sp, #64] // 8-byte Folded Reload +; CHECK-NEXT: mov w14, #1152 // =0x480 +; CHECK-NEXT: mov w24, #1 // =0x1 +; CHECK-NEXT: mov w12, wzr +; CHECK-NEXT: str wzr, [sp, #132] // 4-byte Folded Spill +; CHECK-NEXT: mov w30, w4 +; CHECK-NEXT: madd x8, x9, x14, x8 +; CHECK-NEXT: mov w14, #1 // =0x1 +; CHECK-NEXT: str x8, [sp, #120] // 8-byte Folded Spill +; CHECK-NEXT: add x8, x9, x9, lsl #1 +; CHECK-NEXT: lsl x26, x8, #4 +; CHECK-NEXT: sxtb w8, w23 +; CHECK-NEXT: mov w23, w25 +; CHECK-NEXT: str w8, [sp, #116] // 4-byte 
Folded Spill +; CHECK-NEXT: b .LBB0_10 +; CHECK-NEXT: .p2align 5, , 16 +; CHECK-NEXT: .LBB0_9: // %for.cond510.preheader.us +; CHECK-NEXT: // in Loop: Header=BB0_10 Depth=4 +; CHECK-NEXT: ldr w23, [sp, #92] // 4-byte Folded Reload +; CHECK-NEXT: mov x22, x8 +; CHECK-NEXT: ldr x3, [sp, #96] // 8-byte Folded Reload +; CHECK-NEXT: ldr x27, [sp, #80] // 8-byte Folded Reload +; CHECK-NEXT: mov x28, #-1 // =0xffffffffffffffff +; CHECK-NEXT: mov x14, xzr +; CHECK-NEXT: ldr w8, [sp, #76] // 4-byte Folded Reload +; CHECK-NEXT: tbz w8, #31, .LBB0_7 +; CHECK-NEXT: .LBB0_10: // %for.body99.us +; CHECK-NEXT: // Parent Loop BB0_4 Depth=1 +; CHECK-NEXT: // Parent Loop BB0_6 Depth=2 +; CHECK-NEXT: // Parent Loop BB0_8 Depth=3 +; CHECK-NEXT: // => This Loop Header: Depth=4 +; CHECK-NEXT: // Child Loop BB0_11 Depth 5 +; CHECK-NEXT: // Child Loop BB0_28 Depth 5 +; CHECK-NEXT: // Child Loop BB0_39 Depth 5 +; CHECK-NEXT: ldr w8, [sp, #116] // 4-byte Folded Reload +; CHECK-NEXT: and w8, w8, w8, asr #31 +; CHECK-NEXT: str w8, [sp, #128] // 4-byte Folded Spill +; CHECK-NEXT: .p2align 5, , 16 +; CHECK-NEXT: .LBB0_11: // %for.body113.us +; CHECK-NEXT: // Parent Loop BB0_4 Depth=1 +; CHECK-NEXT: // Parent Loop BB0_6 Depth=2 +; CHECK-NEXT: // Parent Loop BB0_8 Depth=3 +; CHECK-NEXT: // Parent Loop BB0_10 Depth=4 +; CHECK-NEXT: // => This Inner Loop Header: Depth=5 +; CHECK-NEXT: tbnz w0, #0, .LBB0_11 +; CHECK-NEXT: // %bb.12: // %for.cond131.preheader.us +; CHECK-NEXT: // in Loop: Header=BB0_10 Depth=4 +; CHECK-NEXT: ldr w8, [sp, #112] // 4-byte Folded Reload +; CHECK-NEXT: mov w4, #1 // =0x1 +; CHECK-NEXT: strb w8, [x18] +; CHECK-NEXT: ldr x8, [sp, #120] // 8-byte Folded Reload +; CHECK-NEXT: ldrh w8, [x8] +; CHECK-NEXT: cbnz w4, .LBB0_14 +; CHECK-NEXT: // %bb.13: // %cond.true146.us +; CHECK-NEXT: // in Loop: Header=BB0_10 Depth=4 +; CHECK-NEXT: ldrsb w4, [x27, x3] +; CHECK-NEXT: b .LBB0_15 +; CHECK-NEXT: .p2align 5, , 16 +; CHECK-NEXT: .LBB0_14: // in Loop: Header=BB0_10 Depth=4 +; CHECK-NEXT: mov w4, wzr +; CHECK-NEXT: .LBB0_15: // %cond.end154.us +; CHECK-NEXT: // in Loop: Header=BB0_10 Depth=4 +; CHECK-NEXT: mov w25, #18984 // =0x4a28 +; CHECK-NEXT: mul w8, w8, w25 +; CHECK-NEXT: and w8, w8, #0xfff8 +; CHECK-NEXT: lsl w8, w8, w4 +; CHECK-NEXT: cbz w8, .LBB0_17 +; CHECK-NEXT: // %bb.16: // %if.then.us +; CHECK-NEXT: // in Loop: Header=BB0_10 Depth=4 +; CHECK-NEXT: str wzr, [sp, #132] // 4-byte Folded Spill +; CHECK-NEXT: str wzr, [x18] +; CHECK-NEXT: .LBB0_17: // %if.end.us +; CHECK-NEXT: // in Loop: Header=BB0_10 Depth=4 +; CHECK-NEXT: ldr w8, [sp, #108] // 4-byte Folded Reload +; CHECK-NEXT: mov w4, #18984 // =0x4a28 +; CHECK-NEXT: mov w25, w23 +; CHECK-NEXT: strb w8, [x18] +; CHECK-NEXT: ldrsb w8, [x27, x3] +; CHECK-NEXT: lsl w8, w4, w8 +; CHECK-NEXT: mov x4, #-18403 // =0xffffffffffffb81d +; CHECK-NEXT: movk x4, #58909, lsl #16 +; CHECK-NEXT: cbz w8, .LBB0_19 +; CHECK-NEXT: // %bb.18: // %if.then.us.2 +; CHECK-NEXT: // in Loop: Header=BB0_10 Depth=4 +; CHECK-NEXT: str wzr, [sp, #132] // 4-byte Folded Spill +; CHECK-NEXT: strb wzr, [x18] +; CHECK-NEXT: .LBB0_19: // %if.then.us.5 +; CHECK-NEXT: // in Loop: Header=BB0_10 Depth=4 +; CHECK-NEXT: ldr w23, [sp, #132] // 4-byte Folded Reload +; CHECK-NEXT: mov w8, #29625 // =0x73b9 +; CHECK-NEXT: movk w8, #21515, lsl #16 +; CHECK-NEXT: cmp w23, w8 +; CHECK-NEXT: csel w23, w23, w8, lt +; CHECK-NEXT: str w23, [sp, #132] // 4-byte Folded Spill +; CHECK-NEXT: tbz w0, #0, .LBB0_21 +; CHECK-NEXT: // %bb.20: // in Loop: Header=BB0_10 Depth=4 +; CHECK-NEXT: mov w8, wzr +; 
CHECK-NEXT: b .LBB0_22 +; CHECK-NEXT: .p2align 5, , 16 +; CHECK-NEXT: .LBB0_21: // %cond.true146.us.7 +; CHECK-NEXT: // in Loop: Header=BB0_10 Depth=4 +; CHECK-NEXT: ldrsb w8, [x27, x3] +; CHECK-NEXT: .LBB0_22: // %cond.end154.us.7 +; CHECK-NEXT: // in Loop: Header=BB0_10 Depth=4 +; CHECK-NEXT: mov w23, #18984 // =0x4a28 +; CHECK-NEXT: mov w3, #149 // =0x95 +; CHECK-NEXT: lsl w8, w23, w8 +; CHECK-NEXT: cbz w8, .LBB0_24 +; CHECK-NEXT: // %bb.23: // %if.then.us.7 +; CHECK-NEXT: // in Loop: Header=BB0_10 Depth=4 +; CHECK-NEXT: ldr x8, [sp, #152] // 8-byte Folded Reload +; CHECK-NEXT: str wzr, [sp, #132] // 4-byte Folded Spill +; CHECK-NEXT: str wzr, [x8] +; CHECK-NEXT: .LBB0_24: // %if.end.us.7 +; CHECK-NEXT: // in Loop: Header=BB0_10 Depth=4 +; CHECK-NEXT: mov x23, xzr +; CHECK-NEXT: b .LBB0_28 +; CHECK-NEXT: .p2align 5, , 16 +; CHECK-NEXT: .LBB0_25: // %cond.true331.us +; CHECK-NEXT: // in Loop: Header=BB0_28 Depth=5 +; CHECK-NEXT: ldrsb w4, [x10] +; CHECK-NEXT: .LBB0_26: // %cond.end345.us +; CHECK-NEXT: // in Loop: Header=BB0_28 Depth=5 +; CHECK-NEXT: strh w4, [x18] +; CHECK-NEXT: mul x4, x22, x28 +; CHECK-NEXT: adrp x22, :got:var_46 +; CHECK-NEXT: mov x8, xzr +; CHECK-NEXT: ldr x22, [x22, :got_lo12:var_46] +; CHECK-NEXT: str x4, [x22] +; CHECK-NEXT: mov x4, #-18403 // =0xffffffffffffb81d +; CHECK-NEXT: movk x4, #58909, lsl #16 +; CHECK-NEXT: .LBB0_27: // %for.inc371.us +; CHECK-NEXT: // in Loop: Header=BB0_28 Depth=5 +; CHECK-NEXT: mov w22, #-18978 // =0xffffb5de +; CHECK-NEXT: orr x23, x23, #0x1 +; CHECK-NEXT: mov x24, xzr +; CHECK-NEXT: mul w12, w12, w22 +; CHECK-NEXT: mov x22, x5 +; CHECK-NEXT: tbz w0, #0, .LBB0_36 +; CHECK-NEXT: .LBB0_28: // %for.body194.us +; CHECK-NEXT: // Parent Loop BB0_4 Depth=1 +; CHECK-NEXT: // Parent Loop BB0_6 Depth=2 +; CHECK-NEXT: // Parent Loop BB0_8 Depth=3 +; CHECK-NEXT: // Parent Loop BB0_10 Depth=4 +; CHECK-NEXT: // => This Inner Loop Header: Depth=5 +; CHECK-NEXT: cbnz wzr, .LBB0_30 +; CHECK-NEXT: // %bb.29: // %if.then222.us +; CHECK-NEXT: // in Loop: Header=BB0_28 Depth=5 +; CHECK-NEXT: adrp x27, :got:var_32 +; CHECK-NEXT: ldur w8, [x19, #-12] +; CHECK-NEXT: ldr x27, [x27, :got_lo12:var_32] +; CHECK-NEXT: strh w8, [x27] +; CHECK-NEXT: sxtb w8, w25 +; CHECK-NEXT: bic w25, w8, w8, asr #31 +; CHECK-NEXT: b .LBB0_31 +; CHECK-NEXT: .p2align 5, , 16 +; CHECK-NEXT: .LBB0_30: // in Loop: Header=BB0_28 Depth=5 +; CHECK-NEXT: mov w25, wzr +; CHECK-NEXT: .LBB0_31: // %if.end239.us +; CHECK-NEXT: // in Loop: Header=BB0_28 Depth=5 +; CHECK-NEXT: strb w3, [x16] +; CHECK-NEXT: tst w13, #0xff +; CHECK-NEXT: b.eq .LBB0_33 +; CHECK-NEXT: // %bb.32: // %if.then254.us +; CHECK-NEXT: // in Loop: Header=BB0_28 Depth=5 +; CHECK-NEXT: ldrh w8, [x26, x14, lsl #1] +; CHECK-NEXT: adrp x27, :got:var_35 +; CHECK-NEXT: ldr x27, [x27, :got_lo12:var_35] +; CHECK-NEXT: cmp w8, #0 +; CHECK-NEXT: csel x8, xzr, x7, eq +; CHECK-NEXT: str x8, [x27] +; CHECK-NEXT: strh w1, [x17] +; CHECK-NEXT: .LBB0_33: // %if.end282.us +; CHECK-NEXT: // in Loop: Header=BB0_28 Depth=5 +; CHECK-NEXT: orr x27, x24, x4 +; CHECK-NEXT: adrp x8, :got:var_39 +; CHECK-NEXT: str x27, [x18] +; CHECK-NEXT: ldr x8, [x8, :got_lo12:var_39] +; CHECK-NEXT: str x10, [x8] +; CHECK-NEXT: ldrb w8, [x6, x9] +; CHECK-NEXT: str x8, [x18] +; CHECK-NEXT: mov w8, #1 // =0x1 +; CHECK-NEXT: cbnz x2, .LBB0_27 +; CHECK-NEXT: // %bb.34: // %if.then327.us +; CHECK-NEXT: // in Loop: Header=BB0_28 Depth=5 +; CHECK-NEXT: cbz w8, .LBB0_25 +; CHECK-NEXT: // %bb.35: // in Loop: Header=BB0_28 Depth=5 +; CHECK-NEXT: mov w4, wzr +; CHECK-NEXT: 
b .LBB0_26 +; CHECK-NEXT: .p2align 5, , 16 +; CHECK-NEXT: .LBB0_36: // %for.cond376.preheader.us +; CHECK-NEXT: // in Loop: Header=BB0_10 Depth=4 +; CHECK-NEXT: mov w3, #1152 // =0x480 +; CHECK-NEXT: mov x22, xzr +; CHECK-NEXT: mov w4, wzr +; CHECK-NEXT: mov x24, x27 +; CHECK-NEXT: lsl x23, x14, #1 +; CHECK-NEXT: mov x27, #-1 // =0xffffffffffffffff +; CHECK-NEXT: madd x14, x14, x3, x11 +; CHECK-NEXT: mov w28, w30 +; CHECK-NEXT: mov w3, #-7680 // =0xffffe200 +; CHECK-NEXT: b .LBB0_39 +; CHECK-NEXT: .p2align 5, , 16 +; CHECK-NEXT: .LBB0_37: // %if.then466.us +; CHECK-NEXT: // in Loop: Header=BB0_39 Depth=5 +; CHECK-NEXT: ldr x28, [sp, #152] // 8-byte Folded Reload +; CHECK-NEXT: ldr x3, [sp, #136] // 8-byte Folded Reload +; CHECK-NEXT: sxtb w4, w4 +; CHECK-NEXT: bic w4, w4, w4, asr #31 +; CHECK-NEXT: str x3, [x28] +; CHECK-NEXT: mov w3, #-7680 // =0xffffe200 +; CHECK-NEXT: .LBB0_38: // %for.inc505.us +; CHECK-NEXT: // in Loop: Header=BB0_39 Depth=5 +; CHECK-NEXT: add x22, x22, #1 +; CHECK-NEXT: add x27, x27, #1 +; CHECK-NEXT: mov w28, wzr +; CHECK-NEXT: cmp x27, #0 +; CHECK-NEXT: b.hs .LBB0_9 +; CHECK-NEXT: .LBB0_39: // %for.body380.us +; CHECK-NEXT: // Parent Loop BB0_4 Depth=1 +; CHECK-NEXT: // Parent Loop BB0_6 Depth=2 +; CHECK-NEXT: // Parent Loop BB0_8 Depth=3 +; CHECK-NEXT: // Parent Loop BB0_10 Depth=4 +; CHECK-NEXT: // => This Inner Loop Header: Depth=5 +; CHECK-NEXT: mov w30, w28 +; CHECK-NEXT: ldrh w28, [x23] +; CHECK-NEXT: tst w0, #0x1 +; CHECK-NEXT: strh w28, [x11] +; CHECK-NEXT: csel w28, w21, w3, ne +; CHECK-NEXT: str w28, [x20] +; CHECK-NEXT: cbz x15, .LBB0_38 +; CHECK-NEXT: // %bb.40: // %if.then436.us +; CHECK-NEXT: // in Loop: Header=BB0_39 Depth=5 +; CHECK-NEXT: ldrh w28, [x14] +; CHECK-NEXT: cbnz w28, .LBB0_37 +; CHECK-NEXT: // %bb.41: // in Loop: Header=BB0_39 Depth=5 +; CHECK-NEXT: mov w4, wzr +; CHECK-NEXT: b .LBB0_38 +; CHECK-NEXT: .LBB0_42: // %for.body41 +; CHECK-NEXT: strb wzr, [x4] +; CHECK-NEXT: strb wzr, [x14] +; CHECK-NEXT: .LBB0_43: // %for.cond563.preheader +; CHECK-NEXT: ldp x20, x19, [sp, #224] // 16-byte Folded Reload +; CHECK-NEXT: ldp x22, x21, [sp, #208] // 16-byte Folded Reload +; CHECK-NEXT: ldp x24, x23, [sp, #192] // 16-byte Folded Reload +; CHECK-NEXT: ldp x26, x25, [sp, #176] // 16-byte Folded Reload +; CHECK-NEXT: ldp x28, x27, [sp, #160] // 16-byte Folded Reload +; CHECK-NEXT: ldr x30, [sp, #144] // 8-byte Folded Reload +; CHECK-NEXT: add sp, sp, #240 +; CHECK-NEXT: ret +entry: + br i1 %var_5, label %for.body41.lr.ph, label %for.cond563.preheader + +for.body41.lr.ph: ; preds = %entry + %arrayidx147 = getelementptr i8, ptr %arr_3, i64 120 + %tobool326.not = icmp eq i64 %var_2, 0 + %not353 = xor i64 0, -1 + %add538 = select i1 %var_0, i16 0, i16 1 + br i1 %var_0, label %for.body41.us, label %for.body41 + +for.body41.us: ; preds = %for.cond.cleanup93.us, %for.body41.lr.ph + %var_24.promoted9271009.us = phi i64 [ 0, %for.body41.lr.ph ], [ %6, %for.cond.cleanup93.us ] + %var_37.promoted9301008.us = phi i64 [ 1, %for.body41.lr.ph ], [ 0, %for.cond.cleanup93.us ] + %2 = phi i8 [ 0, %for.body41.lr.ph ], [ 1, %for.cond.cleanup93.us ] + %add4139751001.us = phi i16 [ 0, %for.body41.lr.ph ], [ 1, %for.cond.cleanup93.us ] + %3 = phi i8 [ 0, %for.body41.lr.ph ], [ %var_10, %for.cond.cleanup93.us ] + store i32 %var_6, ptr %arr_3, align 4 + store i8 %var_10, ptr %arr_3, align 1 + br label %for.body67.us + +for.body67.us: ; preds = %for.cond.cleanup93.us, %for.body41.us + %4 = phi i8 [ %3, %for.body41.us ], [ 0, %for.cond.cleanup93.us ] + %add413977.us = phi i16 
[ %add4139751001.us, %for.body41.us ], [ %add413.us17, %for.cond.cleanup93.us ] + %5 = phi i8 [ %2, %for.body41.us ], [ %.sroa.speculated829.us, %for.cond.cleanup93.us ] + %conv64922.us = phi i32 [ 1, %for.body41.us ], [ 0, %for.cond.cleanup93.us ] + %6 = phi i64 [ %var_24.promoted9271009.us, %for.body41.us ], [ %.sroa.speculated832.us, %for.cond.cleanup93.us ] + %mul354903918.us = phi i64 [ %var_37.promoted9301008.us, %for.body41.us ], [ 0, %for.cond.cleanup93.us ] + %i_2.0921.us = zext i32 %var_15 to i64 + %.sroa.speculated832.us = tail call i64 @llvm.umin.i64(i64 %var_24.promoted9271009.us, i64 -30) + %sext1023 = shl i64 %i_2.0921.us, 1 + %idxprom138.us162 = ashr i64 %sext1023, 1 + %gep889.us = getelementptr [24 x i16], ptr %arr_19, i64 %idxprom138.us16 + %arrayidx149.us = getelementptr i8, ptr %arrayidx147, i64 %idxprom138.us162 + %arrayidx319.us = getelementptr [24 x i8], ptr null, i64 %idxprom138.us162 + %7 = sext i32 %conv64922.us to i64 + %8 = getelementptr i32, ptr %arr_12, i64 %7 + %arrayidx226.us = getelementptr i8, ptr %8, i64 -12 + br label %for.cond95.preheader.us + +for.cond.cleanup93.us: ; preds = %for.cond.cleanup98.us + br i1 %var_5, label %for.body67.us, label %for.body41.us + +for.cond.cleanup98.us: ; preds = %for.cond510.preheader.us + br i1 %var_0, label %for.cond.cleanup93.us, label %for.cond95.preheader.us + +for.body99.us: ; preds = %for.cond95.preheader.us, %for.cond510.preheader.us + %mul287985.us = phi i16 [ 0, %for.cond95.preheader.us ], [ %mul287.us, %for.cond510.preheader.us ] + %9 = phi i8 [ %29, %for.cond95.preheader.us ], [ %var_14, %for.cond510.preheader.us ] + %add413979.us = phi i16 [ %add413978.us, %for.cond95.preheader.us ], [ %add413.us17, %for.cond510.preheader.us ] + %10 = phi i32 [ 0, %for.cond95.preheader.us ], [ %26, %for.cond510.preheader.us ] + %mul354905.us = phi i64 [ %mul354904.us, %for.cond95.preheader.us ], [ %mul354907.us, %for.cond510.preheader.us ] + %sub283896.us = phi i64 [ 1, %for.cond95.preheader.us ], [ %sub283.us, %for.cond510.preheader.us ] + %conv96880.us = phi i64 [ 1, %for.cond95.preheader.us ], [ 0, %for.cond510.preheader.us ] + %.sroa.speculated829.us = tail call i8 @llvm.smin.i8(i8 %30, i8 0) + br label %for.body113.us + +for.body380.us: ; preds = %for.cond376.preheader.us, %for.inc505.us + %indvars.iv1018 = phi i64 [ 0, %for.cond376.preheader.us ], [ %indvars.iv.next1019, %for.inc505.us ] + %11 = phi i8 [ 0, %for.cond376.preheader.us ], [ %13, %for.inc505.us ] + %add413980.us = phi i16 [ %add413979.us, %for.cond376.preheader.us ], [ 0, %for.inc505.us ] + %12 = load i16, ptr %arrayidx384.us, align 2 + store i16 %12, ptr %invariant.gep875.us, align 2 + %add413.us17 = or i16 %add413980.us, 0 + %arrayidx416.us = getelementptr i16, ptr %arr_13, i64 %indvars.iv1018 + %conv419.us = select i1 %var_0, i32 36006, i32 -7680 + store i32 %conv419.us, ptr @var_50, align 4 + %tobool435.not.us = icmp eq i64 %mul, 0 + br i1 %tobool435.not.us, label %for.inc505.us, label %if.then436.us + +if.then436.us: ; preds = %for.body380.us + %.sroa.speculated817.us = tail call i8 @llvm.smax.i8(i8 %11, i8 0) + %cond464.in.us = load i16, ptr %gep876.us, align 2 + %tobool465.not.us = icmp eq i16 %cond464.in.us, 0 + br i1 %tobool465.not.us, label %for.inc505.us, label %if.then466.us + +if.then466.us: ; preds = %if.then436.us + store i64 %conv35, ptr %arr_3, align 8 + br label %for.inc505.us + +for.inc505.us: ; preds = %if.then466.us, %if.then436.us, %for.body380.us + %13 = phi i8 [ %11, %for.body380.us ], [ %.sroa.speculated817.us, %if.then466.us ], [ 0, 
%if.then436.us ] + %indvars.iv.next1019 = add i64 %indvars.iv1018, 1 + %cmp378.us = icmp ult i64 %indvars.iv1018, 0 + br i1 %cmp378.us, label %for.body380.us, label %for.cond510.preheader.us + +for.body194.us: ; preds = %if.end.us.7, %for.inc371.us + %indvars.iv = phi i64 [ 0, %if.end.us.7 ], [ %indvars.iv.next, %for.inc371.us ] + %mul287986.us = phi i16 [ %mul287985.us, %if.end.us.7 ], [ %mul287.us, %for.inc371.us ] + %14 = phi i8 [ %9, %if.end.us.7 ], [ %16, %for.inc371.us ] + %mul354906.us = phi i64 [ %mul354905.us, %if.end.us.7 ], [ %var_11, %for.inc371.us ] + %sub283897.us = phi i64 [ %sub283896.us, %if.end.us.7 ], [ 0, %for.inc371.us ] + %tobool221.not.us = icmp eq i32 1, 0 + br i1 %tobool221.not.us, label %if.end239.us, label %if.then222.us + +if.then222.us: ; preds = %for.body194.us + %15 = load i32, ptr %arrayidx226.us, align 4 + %conv227.us = trunc i32 %15 to i16 + store i16 %conv227.us, ptr @var_32, align 2 + %.sroa.speculated820.us = tail call i8 @llvm.smax.i8(i8 %14, i8 0) + br label %if.end239.us + +if.end239.us: ; preds = %if.then222.us, %for.body194.us + %16 = phi i8 [ %.sroa.speculated820.us, %if.then222.us ], [ 0, %for.body194.us ] + store i8 -107, ptr %arr_7, align 1 + %tobool253.not.us = icmp eq i8 %0, 0 + br i1 %tobool253.not.us, label %if.end282.us, label %if.then254.us + +if.then254.us: ; preds = %if.end239.us + %17 = load i16, ptr %arrayidx259.us, align 2 + %tobool261.not.us = icmp eq i16 %17, 0 + %conv268.us = select i1 %tobool261.not.us, i64 0, i64 %var_16 + store i64 %conv268.us, ptr @var_35, align 8 + %gep867.us = getelementptr [24 x [24 x i64]], ptr null, i64 %indvars.iv + store i16 %var_1, ptr %arr_6, align 2 + br label %if.end282.us + +if.end282.us: ; preds = %if.then254.us, %if.end239.us + %sub283.us = or i64 %sub283897.us, -434259939 + store i64 %sub283.us, ptr %arr_4, align 8 + %mul287.us = mul i16 %mul287986.us, -18978 + store i64 0, ptr @var_39, align 8 + %18 = load i8, ptr %arrayidx321.us, align 1 + %conv322.us = zext i8 %18 to i64 + store i64 %conv322.us, ptr %arr_4, align 8 + br i1 %tobool326.not, label %if.then327.us, label %for.inc371.us + +if.then327.us: ; preds = %if.end282.us + %tobool330.not.us = icmp eq i32 0, 0 + br i1 %tobool330.not.us, label %cond.end345.us, label %cond.true331.us + +cond.true331.us: ; preds = %if.then327.us + %19 = load i8, ptr null, align 1 + %20 = sext i8 %19 to i16 + br label %cond.end345.us + +cond.end345.us: ; preds = %cond.true331.us, %if.then327.us + %cond346.us = phi i16 [ %20, %cond.true331.us ], [ 0, %if.then327.us ] + store i16 %cond346.us, ptr %arr_4, align 2 + %mul354.us = mul i64 %mul354906.us, %not353 + store i64 %mul354.us, ptr @var_46, align 8 + br label %for.inc371.us + +for.inc371.us: ; preds = %cond.end345.us, %if.end282.us + %mul354907.us = phi i64 [ 1, %if.end282.us ], [ 0, %cond.end345.us ] + %indvars.iv.next = or i64 %indvars.iv, 1 + br i1 %var_0, label %for.body194.us, label %for.cond376.preheader.us + +cond.true146.us: ; preds = %for.cond131.preheader.us + %21 = load i8, ptr %arrayidx149.us, align 1 + %conv150.us = sext i8 %21 to i32 + br label %cond.end154.us + +cond.end154.us: ; preds = %for.cond131.preheader.us, %cond.true146.us + %cond155.us = phi i32 [ %conv150.us, %cond.true146.us ], [ 0, %for.cond131.preheader.us ] + %shl.us = shl i32 %div.us, %cond155.us + %tobool157.not.us = icmp eq i32 %shl.us, 0 + br i1 %tobool157.not.us, label %if.end.us, label %if.then.us + +if.then.us: ; preds = %cond.end154.us + store i32 0, ptr %arr_4, align 4 + br label %if.end.us + +if.end.us: ; preds = 
%if.then.us, %cond.end154.us + %22 = phi i32 [ 0, %if.then.us ], [ %10, %cond.end154.us ] + store i8 %1, ptr %arr_4, align 1 + call void @llvm.assume(i1 true) + %23 = load i8, ptr %arrayidx149.us, align 1 + %conv150.us.2 = sext i8 %23 to i32 + %shl.us.2 = shl i32 18984, %conv150.us.2 + %tobool157.not.us.2 = icmp eq i32 %shl.us.2, 0 + br i1 %tobool157.not.us.2, label %if.then.us.5, label %if.then.us.2 + +if.then.us.2: ; preds = %if.end.us + %.sroa.speculated826.us.2 = tail call i32 @llvm.smin.i32(i32 %10, i32 0) + store i8 0, ptr %arr_4, align 1 + br label %if.then.us.5 + +if.then.us.5: ; preds = %if.then.us.2, %if.end.us + %24 = phi i32 [ 0, %if.then.us.2 ], [ %22, %if.end.us ] + %.sroa.speculated826.us.5 = tail call i32 @llvm.smin.i32(i32 %24, i32 1410036665) + br i1 %var_0, label %cond.end154.us.7, label %cond.true146.us.7 + +cond.true146.us.7: ; preds = %if.then.us.5 + %25 = load i8, ptr %arrayidx149.us, align 1 + %conv150.us.7 = sext i8 %25 to i32 + br label %cond.end154.us.7 + +cond.end154.us.7: ; preds = %cond.true146.us.7, %if.then.us.5 + %cond155.us.7 = phi i32 [ %conv150.us.7, %cond.true146.us.7 ], [ 0, %if.then.us.5 ] + %shl.us.7 = shl i32 18984, %cond155.us.7 + %tobool157.not.us.7 = icmp eq i32 %shl.us.7, 0 + br i1 %tobool157.not.us.7, label %if.end.us.7, label %if.then.us.7 + +if.then.us.7: ; preds = %cond.end154.us.7 + store i32 0, ptr %arr_3, align 4 + br label %if.end.us.7 + +if.end.us.7: ; preds = %if.then.us.7, %cond.end154.us.7 + %26 = phi i32 [ 0, %if.then.us.7 ], [ %.sroa.speculated826.us.5, %cond.end154.us.7 ] + %arrayidx259.us = getelementptr i16, ptr %arrayidx257.us, i64 %conv96880.us + br label %for.body194.us + +for.body113.us: ; preds = %for.body113.us, %for.body99.us + br i1 %var_0, label %for.body113.us, label %for.cond131.preheader.us + +for.cond510.preheader.us: ; preds = %for.inc505.us + %cmp97.us = icmp slt i16 %add538, 0 + br i1 %cmp97.us, label %for.body99.us, label %for.cond.cleanup98.us + +for.cond376.preheader.us: ; preds = %for.inc371.us + %arrayidx384.us = getelementptr i16, ptr null, i64 %conv96880.us + %gep876.us = getelementptr [24 x [24 x i16]], ptr %invariant.gep875.us, i64 %conv96880.us + br label %for.body380.us + +for.cond131.preheader.us: ; preds = %for.body113.us + store i8 %var_3, ptr %arr_4, align 1 + %27 = load i16, ptr %gep884.us, align 2 + %28 = mul i16 18984, %27 + %div.us = zext i16 %28 to i32 + %tobool145.not.us = icmp eq i8 0, 0 + br i1 %tobool145.not.us, label %cond.end154.us, label %cond.true146.us + +for.cond95.preheader.us: ; preds = %for.cond.cleanup98.us, %for.body67.us + %indvars.iv1021 = phi i64 [ 1, %for.cond.cleanup98.us ], [ 0, %for.body67.us ] + %29 = phi i8 [ %16, %for.cond.cleanup98.us ], [ %4, %for.body67.us ] + %add413978.us = phi i16 [ %var_4, %for.cond.cleanup98.us ], [ %add413977.us, %for.body67.us ] + %30 = phi i8 [ %.sroa.speculated829.us, %for.cond.cleanup98.us ], [ %5, %for.body67.us ] + %mul354904.us = phi i64 [ 0, %for.cond.cleanup98.us ], [ %mul354903918.us, %for.body67.us ] + %gep884.us = getelementptr [24 x [24 x i16]], ptr %gep889.us, i64 %indvars.iv1021 + %arrayidx321.us = getelementptr i8, ptr %arrayidx319.us, i64 %indvars.iv1021 + %arrayidx257.us = getelementptr [24 x i16], ptr null, i64 %indvars.iv1021 + br label %for.body99.us + +for.cond563.preheader: ; preds = %for.body41, %entry + ret void + +for.body41: ; preds = %for.body41.lr.ph + store i8 0, ptr %arr_12, align 1 + store i8 0, ptr %arr_3, align 1 + br label %for.cond563.preheader +} + +attributes #0 = { nounwind "frame-pointer"="non-leaf" 
"target-cpu"="grace" } +attributes #1 = { nocallback nofree nosync nounwind speculatable willreturn memory(none) } +attributes #2 = { nocallback nofree nosync nounwind willreturn memory(inaccessiblemem: write) } diff --git a/llvm/test/CodeGen/AMDGPU/combine_andor_with_cmps.ll b/llvm/test/CodeGen/AMDGPU/combine_andor_with_cmps.ll index 57a1e4c..ec92edb 100644 --- a/llvm/test/CodeGen/AMDGPU/combine_andor_with_cmps.ll +++ b/llvm/test/CodeGen/AMDGPU/combine_andor_with_cmps.ll @@ -3385,7 +3385,7 @@ declare half @llvm.canonicalize.f16(half) declare <2 x half> @llvm.canonicalize.v2f16(<2 x half>) attributes #0 = { nounwind "amdgpu-ieee"="false" } -attributes #1 = { nounwind "unsafe-fp-math"="true" "no-nans-fp-math"="true" } +attributes #1 = { nounwind "no-nans-fp-math"="true" } ;; NOTE: These prefixes are unused and the list is autogenerated. Do not add tests below this line: ; GFX11NONANS-FAKE16: {{.*}} ; GFX11NONANS-TRUE16: {{.*}} diff --git a/llvm/test/CodeGen/AMDGPU/fdiv.f64.ll b/llvm/test/CodeGen/AMDGPU/fdiv.f64.ll index acb32d4..11476a6 100644 --- a/llvm/test/CodeGen/AMDGPU/fdiv.f64.ll +++ b/llvm/test/CodeGen/AMDGPU/fdiv.f64.ll @@ -127,7 +127,7 @@ define amdgpu_kernel void @s_fdiv_v4f64(ptr addrspace(1) %out, <4 x double> %num ; GCN-LABEL: {{^}}div_fast_2_x_pat_f64: ; GCN: v_mul_f64 [[MUL:v\[[0-9]+:[0-9]+\]]], s{{\[[0-9]+:[0-9]+\]}}, 0.5 ; GCN: buffer_store_dwordx2 [[MUL]] -define amdgpu_kernel void @div_fast_2_x_pat_f64(ptr addrspace(1) %out) #1 { +define amdgpu_kernel void @div_fast_2_x_pat_f64(ptr addrspace(1) %out) #0 { %x = load double, ptr addrspace(1) poison %rcp = fdiv fast double %x, 2.0 store double %rcp, ptr addrspace(1) %out, align 4 @@ -139,7 +139,7 @@ define amdgpu_kernel void @div_fast_2_x_pat_f64(ptr addrspace(1) %out) #1 { ; GCN-DAG: v_mov_b32_e32 v[[K_HI:[0-9]+]], 0x3fb99999 ; GCN: v_mul_f64 [[MUL:v\[[0-9]+:[0-9]+\]]], s{{\[[0-9]+:[0-9]+\]}}, v[[[K_LO]]:[[K_HI]]] ; GCN: buffer_store_dwordx2 [[MUL]] -define amdgpu_kernel void @div_fast_k_x_pat_f64(ptr addrspace(1) %out) #1 { +define amdgpu_kernel void @div_fast_k_x_pat_f64(ptr addrspace(1) %out) #0 { %x = load double, ptr addrspace(1) poison %rcp = fdiv fast double %x, 10.0 store double %rcp, ptr addrspace(1) %out, align 4 @@ -151,7 +151,7 @@ define amdgpu_kernel void @div_fast_k_x_pat_f64(ptr addrspace(1) %out) #1 { ; GCN-DAG: v_mov_b32_e32 v[[K_HI:[0-9]+]], 0xbfb99999 ; GCN: v_mul_f64 [[MUL:v\[[0-9]+:[0-9]+\]]], s{{\[[0-9]+:[0-9]+\]}}, v[[[K_LO]]:[[K_HI]]] ; GCN: buffer_store_dwordx2 [[MUL]] -define amdgpu_kernel void @div_fast_neg_k_x_pat_f64(ptr addrspace(1) %out) #1 { +define amdgpu_kernel void @div_fast_neg_k_x_pat_f64(ptr addrspace(1) %out) #0 { %x = load double, ptr addrspace(1) poison %rcp = fdiv fast double %x, -10.0 store double %rcp, ptr addrspace(1) %out, align 4 @@ -159,4 +159,3 @@ define amdgpu_kernel void @div_fast_neg_k_x_pat_f64(ptr addrspace(1) %out) #1 { } attributes #0 = { nounwind } -attributes #1 = { nounwind "unsafe-fp-math"="true" } diff --git a/llvm/test/CodeGen/AMDGPU/fmad-formation-fmul-distribute-denormal-mode.ll b/llvm/test/CodeGen/AMDGPU/fmad-formation-fmul-distribute-denormal-mode.ll index 92eb4a6..0a266bc 100644 --- a/llvm/test/CodeGen/AMDGPU/fmad-formation-fmul-distribute-denormal-mode.ll +++ b/llvm/test/CodeGen/AMDGPU/fmad-formation-fmul-distribute-denormal-mode.ll @@ -284,4 +284,4 @@ define <2 x float> @unsafe_fast_fmul_fsub_ditribute_post_legalize(float %arg0, < ret <2 x float> %tmp1 } -attributes #0 = { "no-infs-fp-math"="true" "unsafe-fp-math"="true" } +attributes #0 = { 
"no-infs-fp-math"="true" } diff --git a/llvm/test/CodeGen/AMDGPU/fmed3.bf16.ll b/llvm/test/CodeGen/AMDGPU/fmed3.bf16.ll index bc85dc2..3e513de 100644 --- a/llvm/test/CodeGen/AMDGPU/fmed3.bf16.ll +++ b/llvm/test/CodeGen/AMDGPU/fmed3.bf16.ll @@ -219,8 +219,8 @@ define <2 x bfloat> @v_test_fmed3_r_i_i_v2bf16_minimumnum_maximumnum(<2 x bfloat } attributes #0 = { nounwind readnone } -attributes #1 = { nounwind "unsafe-fp-math"="false" "no-nans-fp-math"="false" } -attributes #2 = { nounwind "unsafe-fp-math"="false" "no-nans-fp-math"="true" } +attributes #1 = { nounwind "no-nans-fp-math"="false" } +attributes #2 = { nounwind "no-nans-fp-math"="true" } ;; NOTE: These prefixes are unused and the list is autogenerated. Do not add tests below this line: ; GFX11: {{.*}} ; GFX11-SDAG: {{.*}} diff --git a/llvm/test/CodeGen/AMDGPU/fmed3.ll b/llvm/test/CodeGen/AMDGPU/fmed3.ll index 3145a27..60ac0b9 100644 --- a/llvm/test/CodeGen/AMDGPU/fmed3.ll +++ b/llvm/test/CodeGen/AMDGPU/fmed3.ll @@ -8905,4 +8905,4 @@ declare half @llvm.minnum.f16(half, half) #0 declare half @llvm.maxnum.f16(half, half) #0 attributes #0 = { nounwind readnone } -attributes #2 = { nounwind "unsafe-fp-math"="false" "no-nans-fp-math"="true" } +attributes #2 = { nounwind "no-nans-fp-math"="true" } diff --git a/llvm/test/CodeGen/AMDGPU/fneg-combines.legal.f16.ll b/llvm/test/CodeGen/AMDGPU/fneg-combines.legal.f16.ll index d8bbda1..69d1ee3f 100644 --- a/llvm/test/CodeGen/AMDGPU/fneg-combines.legal.f16.ll +++ b/llvm/test/CodeGen/AMDGPU/fneg-combines.legal.f16.ll @@ -159,7 +159,7 @@ declare half @llvm.amdgcn.interp.p2.f16(float, float, i32, i32, i1, i32) #0 attributes #0 = { nounwind "denormal-fp-math-f32"="preserve-sign,preserve-sign" } attributes #1 = { nounwind readnone } -attributes #2 = { nounwind "unsafe-fp-math"="true" } +attributes #2 = { nounwind } attributes #3 = { nounwind "no-signed-zeros-fp-math"="true" } attributes #4 = { nounwind "amdgpu-ieee"="false" "denormal-fp-math-f32"="preserve-sign,preserve-sign" } ;; NOTE: These prefixes are unused and the list is autogenerated. Do not add tests below this line: diff --git a/llvm/test/CodeGen/AMDGPU/fneg-combines.ll b/llvm/test/CodeGen/AMDGPU/fneg-combines.ll index aaea4f7..b3202cb 100644 --- a/llvm/test/CodeGen/AMDGPU/fneg-combines.ll +++ b/llvm/test/CodeGen/AMDGPU/fneg-combines.ll @@ -8006,7 +8006,7 @@ declare float @llvm.amdgcn.interp.p2(float, float, i32, i32, i32) #0 attributes #0 = { nounwind "denormal-fp-math-f32"="preserve-sign,preserve-sign" } attributes #1 = { nounwind readnone } -attributes #2 = { nounwind "unsafe-fp-math"="true" } +attributes #2 = { nounwind } attributes #3 = { nounwind "no-signed-zeros-fp-math"="true" } ;; NOTE: These prefixes are unused and the list is autogenerated. 
Do not add tests below this line: ; GCN-NSZ: {{.*}} diff --git a/llvm/test/CodeGen/AMDGPU/frem.ll b/llvm/test/CodeGen/AMDGPU/frem.ll index 6f91222..d8cbdb1 100644 --- a/llvm/test/CodeGen/AMDGPU/frem.ll +++ b/llvm/test/CodeGen/AMDGPU/frem.ll @@ -2048,7 +2048,7 @@ define amdgpu_kernel void @unsafe_frem_f16(ptr addrspace(1) %out, ptr addrspace( ; GFX1200-FAKE16-NEXT: v_fmac_f16_e32 v1, v3, v2 ; GFX1200-FAKE16-NEXT: global_store_b16 v0, v1, s[0:1] ; GFX1200-FAKE16-NEXT: s_endpgm - ptr addrspace(1) %in2) #1 { + ptr addrspace(1) %in2) #0 { %gep2 = getelementptr half, ptr addrspace(1) %in2, i32 4 %r0 = load half, ptr addrspace(1) %in1, align 4 %r1 = load half, ptr addrspace(1) %gep2, align 4 @@ -3417,7 +3417,7 @@ define amdgpu_kernel void @unsafe_frem_f32(ptr addrspace(1) %out, ptr addrspace( ; GFX1200-NEXT: v_fmac_f32_e32 v1, v3, v2 ; GFX1200-NEXT: global_store_b32 v0, v1, s[0:1] ; GFX1200-NEXT: s_endpgm - ptr addrspace(1) %in2) #1 { + ptr addrspace(1) %in2) #0 { %gep2 = getelementptr float, ptr addrspace(1) %in2, i32 4 %r0 = load float, ptr addrspace(1) %in1, align 4 %r1 = load float, ptr addrspace(1) %gep2, align 4 @@ -4821,7 +4821,7 @@ define amdgpu_kernel void @unsafe_frem_f64(ptr addrspace(1) %out, ptr addrspace( ; GFX1200-NEXT: v_fma_f64 v[0:1], -v[4:5], v[2:3], v[0:1] ; GFX1200-NEXT: global_store_b64 v12, v[0:1], s[0:1] ; GFX1200-NEXT: s_endpgm - ptr addrspace(1) %in2) #1 { + ptr addrspace(1) %in2) #0 { %r0 = load double, ptr addrspace(1) %in1, align 8 %r1 = load double, ptr addrspace(1) %in2, align 8 %r2 = frem afn double %r0, %r1 @@ -18918,7 +18918,4 @@ define amdgpu_kernel void @frem_v2f64_const(ptr addrspace(1) %out) #0 { -attributes #0 = { nounwind "unsafe-fp-math"="false" "denormal-fp-math-f32"="preserve-sign,preserve-sign" } -attributes #1 = { nounwind "unsafe-fp-math"="true" "denormal-fp-math-f32"="preserve-sign,preserve-sign" } - - +attributes #0 = { nounwind "denormal-fp-math-f32"="preserve-sign,preserve-sign" } diff --git a/llvm/test/CodeGen/AMDGPU/fsqrt.f64.ll b/llvm/test/CodeGen/AMDGPU/fsqrt.f64.ll index 1b74ddf..9b97981 100644 --- a/llvm/test/CodeGen/AMDGPU/fsqrt.f64.ll +++ b/llvm/test/CodeGen/AMDGPU/fsqrt.f64.ll @@ -2870,7 +2870,7 @@ define double @v_sqrt_f64__enough_unsafe_attrs(double %x) #3 { ret double %result } -define double @v_sqrt_f64__unsafe_attr(double %x) #4 { +define double @v_sqrt_f64__unsafe_attr(double %x) { ; GFX6-SDAG-LABEL: v_sqrt_f64__unsafe_attr: ; GFX6-SDAG: ; %bb.0: ; GFX6-SDAG-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0) @@ -3449,7 +3449,6 @@ declare i32 @llvm.amdgcn.readfirstlane(i32) #1 attributes #0 = { nocallback nofree nosync nounwind speculatable willreturn memory(none) } attributes #1 = { convergent nounwind willreturn memory(none) } attributes #3 = { "no-nans-fp-math"="true" "no-infs-fp-math"="true" } -attributes #4 = { "unsafe-fp-math"="true" } ;; NOTE: These prefixes are unused and the list is autogenerated. 
Do not add tests below this line: ; GFX6: {{.*}} ; GFX8: {{.*}} diff --git a/llvm/test/CodeGen/AMDGPU/fsqrt.r600.ll b/llvm/test/CodeGen/AMDGPU/fsqrt.r600.ll index 9f19bcb..c93c077 100644 --- a/llvm/test/CodeGen/AMDGPU/fsqrt.r600.ll +++ b/llvm/test/CodeGen/AMDGPU/fsqrt.r600.ll @@ -239,4 +239,4 @@ declare <2 x float> @llvm.sqrt.v2f32(<2 x float> %in) #0 declare <4 x float> @llvm.sqrt.v4f32(<4 x float> %in) #0 attributes #0 = { nounwind readnone } -attributes #1 = { nounwind "unsafe-fp-math"="true" } +attributes #1 = { nounwind } diff --git a/llvm/test/CodeGen/AMDGPU/inline-attr.ll b/llvm/test/CodeGen/AMDGPU/inline-attr.ll index 4e93eca..c33b3344 100644 --- a/llvm/test/CodeGen/AMDGPU/inline-attr.ll +++ b/llvm/test/CodeGen/AMDGPU/inline-attr.ll @@ -36,18 +36,18 @@ entry: ret void } -attributes #0 = { nounwind "uniform-work-group-size"="false" "unsafe-fp-math"="true"} -attributes #1 = { nounwind "less-precise-fpmad"="true" "no-infs-fp-math"="true" "no-nans-fp-math"="true" "unsafe-fp-math"="true" } +attributes #0 = { nounwind "uniform-work-group-size"="false"} +attributes #1 = { nounwind "less-precise-fpmad"="true" "no-infs-fp-math"="true" "no-nans-fp-math"="true" } ;. -; UNSAFE: attributes #[[ATTR0]] = { nounwind "uniform-work-group-size"="false" "unsafe-fp-math"="true" } -; UNSAFE: attributes #[[ATTR1]] = { nounwind "less-precise-fpmad"="false" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "uniform-work-group-size"="false" "unsafe-fp-math"="true" } +; UNSAFE: attributes #[[ATTR0]] = { nounwind "uniform-work-group-size"="false" } +; UNSAFE: attributes #[[ATTR1]] = { nounwind "less-precise-fpmad"="false" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "uniform-work-group-size"="false" } ;. -; NONANS: attributes #[[ATTR0]] = { nounwind "no-nans-fp-math"="true" "uniform-work-group-size"="false" "unsafe-fp-math"="true" } -; NONANS: attributes #[[ATTR1]] = { nounwind "less-precise-fpmad"="false" "no-infs-fp-math"="false" "no-nans-fp-math"="true" "uniform-work-group-size"="false" "unsafe-fp-math"="true" } +; NONANS: attributes #[[ATTR0]] = { nounwind "no-nans-fp-math"="true" "uniform-work-group-size"="false" } +; NONANS: attributes #[[ATTR1]] = { nounwind "less-precise-fpmad"="false" "no-infs-fp-math"="false" "no-nans-fp-math"="true" "uniform-work-group-size"="false" } ;. -; NOINFS: attributes #[[ATTR0]] = { nounwind "no-infs-fp-math"="true" "uniform-work-group-size"="false" "unsafe-fp-math"="true" } -; NOINFS: attributes #[[ATTR1]] = { nounwind "less-precise-fpmad"="false" "no-infs-fp-math"="true" "no-nans-fp-math"="false" "uniform-work-group-size"="false" "unsafe-fp-math"="true" } +; NOINFS: attributes #[[ATTR0]] = { nounwind "no-infs-fp-math"="true" "uniform-work-group-size"="false" } +; NOINFS: attributes #[[ATTR1]] = { nounwind "less-precise-fpmad"="false" "no-infs-fp-math"="true" "no-nans-fp-math"="false" "uniform-work-group-size"="false" } ;. ; UNSAFE: [[META0]] = !{} ;. 
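The hunks above uniformly delete the function-level "unsafe-fp-math"="true" string attribute. As a point of reference, the same relaxation is expressed today with per-instruction fast-math flags, which several of the updated tests (for example the frem afn cases above) already rely on. A minimal illustrative sketch, not part of this diff:

; Illustrative only: per-instruction fast-math flags take the place of the
; removed function-level "unsafe-fp-math"="true" attribute.
define float @rsqrt_approx(float %x) {
  %s = call afn float @llvm.sqrt.f32(float %x) ; afn: approximate lowering allowed
  %r = fdiv arcp float 1.000000e+00, %s        ; arcp: reciprocal transform allowed
  ret float %r
}
declare float @llvm.sqrt.f32(float)

Because the flags are scoped to individual instructions, tests can exercise relaxed lowerings without any function attribute, which is why the attribute groups in these hunks shrink or disappear entirely.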
diff --git a/llvm/test/CodeGen/AMDGPU/llvm.amdgcn.add.min.max.ll b/llvm/test/CodeGen/AMDGPU/llvm.amdgcn.add.min.max.ll new file mode 100644 index 0000000..99421d4 --- /dev/null +++ b/llvm/test/CodeGen/AMDGPU/llvm.amdgcn.add.min.max.ll @@ -0,0 +1,191 @@ +; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py UTC_ARGS: --version 6 +; RUN: llc -global-isel=0 -mtriple=amdgcn -mcpu=gfx1250 < %s | FileCheck -check-prefixes=GCN,GFX1250-SDAG %s +; RUN: llc -global-isel=1 -mtriple=amdgcn -mcpu=gfx1250 < %s | FileCheck -check-prefixes=GCN,GFX1250-GISEL %s + +declare i32 @llvm.amdgcn.add.min.i32(i32, i32, i32, i1) +declare i32 @llvm.amdgcn.add.max.i32(i32, i32, i32, i1) +declare i32 @llvm.amdgcn.add.min.u32(i32, i32, i32, i1) +declare i32 @llvm.amdgcn.add.max.u32(i32, i32, i32, i1) +declare <2 x i16> @llvm.amdgcn.pk.add.min.i16(<2 x i16>, <2 x i16>, <2 x i16>, i1) +declare <2 x i16> @llvm.amdgcn.pk.add.max.i16(<2 x i16>, <2 x i16>, <2 x i16>, i1) +declare <2 x i16> @llvm.amdgcn.pk.add.min.u16(<2 x i16>, <2 x i16>, <2 x i16>, i1) +declare <2 x i16> @llvm.amdgcn.pk.add.max.u16(<2 x i16>, <2 x i16>, <2 x i16>, i1) + +define i32 @test_add_min_i32_vvv(i32 %a, i32 %b, i32 %c) { +; GCN-LABEL: test_add_min_i32_vvv: +; GCN: ; %bb.0: +; GCN-NEXT: s_wait_loadcnt_dscnt 0x0 +; GCN-NEXT: s_wait_kmcnt 0x0 +; GCN-NEXT: v_add_min_i32 v0, v0, v1, v2 +; GCN-NEXT: s_set_pc_i64 s[30:31] + %ret = tail call i32 @llvm.amdgcn.add.min.i32(i32 %a, i32 %b, i32 %c, i1 0) + ret i32 %ret +} + +define i32 @test_add_min_i32_ssi_clamp(i32 inreg %a, i32 inreg %b) { +; GCN-LABEL: test_add_min_i32_ssi_clamp: +; GCN: ; %bb.0: +; GCN-NEXT: s_wait_loadcnt_dscnt 0x0 +; GCN-NEXT: s_wait_kmcnt 0x0 +; GCN-NEXT: v_add_min_i32 v0, s0, s1, 1 clamp +; GCN-NEXT: s_set_pc_i64 s[30:31] + %ret = tail call i32 @llvm.amdgcn.add.min.i32(i32 %a, i32 %b, i32 1, i1 1) + ret i32 %ret +} + +define i32 @test_add_min_u32_vvv(i32 %a, i32 %b, i32 %c) { +; GCN-LABEL: test_add_min_u32_vvv: +; GCN: ; %bb.0: +; GCN-NEXT: s_wait_loadcnt_dscnt 0x0 +; GCN-NEXT: s_wait_kmcnt 0x0 +; GCN-NEXT: v_add_min_u32 v0, v0, v1, v2 +; GCN-NEXT: s_set_pc_i64 s[30:31] + %ret = tail call i32 @llvm.amdgcn.add.min.u32(i32 %a, i32 %b, i32 %c, i1 0) + ret i32 %ret +} + +define i32 @test_add_min_u32_ssi_clamp(i32 inreg %a, i32 inreg %b) { +; GCN-LABEL: test_add_min_u32_ssi_clamp: +; GCN: ; %bb.0: +; GCN-NEXT: s_wait_loadcnt_dscnt 0x0 +; GCN-NEXT: s_wait_kmcnt 0x0 +; GCN-NEXT: v_add_min_u32 v0, s0, s1, 1 clamp +; GCN-NEXT: s_set_pc_i64 s[30:31] + %ret = tail call i32 @llvm.amdgcn.add.min.u32(i32 %a, i32 %b, i32 1, i1 1) + ret i32 %ret +} + +define i32 @test_add_max_i32_vvv(i32 %a, i32 %b, i32 %c) { +; GCN-LABEL: test_add_max_i32_vvv: +; GCN: ; %bb.0: +; GCN-NEXT: s_wait_loadcnt_dscnt 0x0 +; GCN-NEXT: s_wait_kmcnt 0x0 +; GCN-NEXT: v_add_max_i32 v0, v0, v1, v2 +; GCN-NEXT: s_set_pc_i64 s[30:31] + %ret = tail call i32 @llvm.amdgcn.add.max.i32(i32 %a, i32 %b, i32 %c, i1 0) + ret i32 %ret +} + +define i32 @test_add_max_i32_ssi_clamp(i32 inreg %a, i32 inreg %b) { +; GCN-LABEL: test_add_max_i32_ssi_clamp: +; GCN: ; %bb.0: +; GCN-NEXT: s_wait_loadcnt_dscnt 0x0 +; GCN-NEXT: s_wait_kmcnt 0x0 +; GCN-NEXT: v_add_max_i32 v0, s0, s1, 1 clamp +; GCN-NEXT: s_set_pc_i64 s[30:31] + %ret = tail call i32 @llvm.amdgcn.add.max.i32(i32 %a, i32 %b, i32 1, i1 1) + ret i32 %ret +} + +define i32 @test_add_max_u32_vvv(i32 %a, i32 %b, i32 %c) { +; GCN-LABEL: test_add_max_u32_vvv: +; GCN: ; %bb.0: +; GCN-NEXT: s_wait_loadcnt_dscnt 0x0 +; GCN-NEXT: s_wait_kmcnt 0x0 +; GCN-NEXT: v_add_max_u32 v0, v0, 
v1, v2 +; GCN-NEXT: s_set_pc_i64 s[30:31] + %ret = tail call i32 @llvm.amdgcn.add.max.u32(i32 %a, i32 %b, i32 %c, i1 0) + ret i32 %ret +} + +define i32 @test_add_max_u32_ssi_clamp(i32 inreg %a, i32 inreg %b) { +; GCN-LABEL: test_add_max_u32_ssi_clamp: +; GCN: ; %bb.0: +; GCN-NEXT: s_wait_loadcnt_dscnt 0x0 +; GCN-NEXT: s_wait_kmcnt 0x0 +; GCN-NEXT: v_add_max_u32 v0, s0, s1, 1 clamp +; GCN-NEXT: s_set_pc_i64 s[30:31] + %ret = tail call i32 @llvm.amdgcn.add.max.u32(i32 %a, i32 %b, i32 1, i1 1) + ret i32 %ret +} + +define <2 x i16> @test_add_min_i16_vvv(<2 x i16> %a, <2 x i16> %b, <2 x i16> %c) { +; GCN-LABEL: test_add_min_i16_vvv: +; GCN: ; %bb.0: +; GCN-NEXT: s_wait_loadcnt_dscnt 0x0 +; GCN-NEXT: s_wait_kmcnt 0x0 +; GCN-NEXT: v_pk_add_min_i16 v0, v0, v1, v2 +; GCN-NEXT: s_set_pc_i64 s[30:31] + %ret = tail call <2 x i16> @llvm.amdgcn.pk.add.min.i16(<2 x i16> %a, <2 x i16> %b, <2 x i16> %c, i1 0) + ret <2 x i16> %ret +} + +define <2 x i16> @test_add_min_i16_ssi_clamp(<2 x i16> inreg %a, <2 x i16> inreg %b) { +; GCN-LABEL: test_add_min_i16_ssi_clamp: +; GCN: ; %bb.0: +; GCN-NEXT: s_wait_loadcnt_dscnt 0x0 +; GCN-NEXT: s_wait_kmcnt 0x0 +; GCN-NEXT: v_pk_add_min_i16 v0, s0, s1, 1 op_sel_hi:[1,1,0] clamp +; GCN-NEXT: s_set_pc_i64 s[30:31] + %ret = tail call <2 x i16> @llvm.amdgcn.pk.add.min.i16(<2 x i16> %a, <2 x i16> %b, <2 x i16> <i16 1, i16 1>, i1 1) + ret <2 x i16> %ret +} + +define <2 x i16> @test_add_min_u16_vvv(<2 x i16> %a, <2 x i16> %b, <2 x i16> %c) { +; GCN-LABEL: test_add_min_u16_vvv: +; GCN: ; %bb.0: +; GCN-NEXT: s_wait_loadcnt_dscnt 0x0 +; GCN-NEXT: s_wait_kmcnt 0x0 +; GCN-NEXT: v_pk_add_min_u16 v0, v0, v1, v2 +; GCN-NEXT: s_set_pc_i64 s[30:31] + %ret = tail call <2 x i16> @llvm.amdgcn.pk.add.min.u16(<2 x i16> %a, <2 x i16> %b, <2 x i16> %c, i1 0) + ret <2 x i16> %ret +} + +define <2 x i16> @test_add_min_u16_ssi_clamp(<2 x i16> inreg %a, <2 x i16> inreg %b) { +; GCN-LABEL: test_add_min_u16_ssi_clamp: +; GCN: ; %bb.0: +; GCN-NEXT: s_wait_loadcnt_dscnt 0x0 +; GCN-NEXT: s_wait_kmcnt 0x0 +; GCN-NEXT: v_pk_add_min_u16 v0, s0, s1, 1 op_sel_hi:[1,1,0] clamp +; GCN-NEXT: s_set_pc_i64 s[30:31] + %ret = tail call <2 x i16> @llvm.amdgcn.pk.add.min.u16(<2 x i16> %a, <2 x i16> %b, <2 x i16> <i16 1, i16 1>, i1 1) + ret <2 x i16> %ret +} + +define <2 x i16> @test_add_max_i16_vvv(<2 x i16> %a, <2 x i16> %b, <2 x i16> %c) { +; GCN-LABEL: test_add_max_i16_vvv: +; GCN: ; %bb.0: +; GCN-NEXT: s_wait_loadcnt_dscnt 0x0 +; GCN-NEXT: s_wait_kmcnt 0x0 +; GCN-NEXT: v_pk_add_max_i16 v0, v0, v1, v2 +; GCN-NEXT: s_set_pc_i64 s[30:31] + %ret = tail call <2 x i16> @llvm.amdgcn.pk.add.max.i16(<2 x i16> %a, <2 x i16> %b, <2 x i16> %c, i1 0) + ret <2 x i16> %ret +} + +define <2 x i16> @test_add_max_i16_ssi_clamp(<2 x i16> inreg %a, <2 x i16> inreg %b) { +; GCN-LABEL: test_add_max_i16_ssi_clamp: +; GCN: ; %bb.0: +; GCN-NEXT: s_wait_loadcnt_dscnt 0x0 +; GCN-NEXT: s_wait_kmcnt 0x0 +; GCN-NEXT: v_pk_add_max_i16 v0, s0, s1, 1 op_sel_hi:[1,1,0] clamp +; GCN-NEXT: s_set_pc_i64 s[30:31] + %ret = tail call <2 x i16> @llvm.amdgcn.pk.add.max.i16(<2 x i16> %a, <2 x i16> %b, <2 x i16> <i16 1, i16 1>, i1 1) + ret <2 x i16> %ret +} + +define <2 x i16> @test_add_max_u16_vvv(<2 x i16> %a, <2 x i16> %b, <2 x i16> %c) { +; GCN-LABEL: test_add_max_u16_vvv: +; GCN: ; %bb.0: +; GCN-NEXT: s_wait_loadcnt_dscnt 0x0 +; GCN-NEXT: s_wait_kmcnt 0x0 +; GCN-NEXT: v_pk_add_max_u16 v0, v0, v1, v2 +; GCN-NEXT: s_set_pc_i64 s[30:31] + %ret = tail call <2 x i16> @llvm.amdgcn.pk.add.max.u16(<2 x i16> %a, <2 x i16> %b, <2 x i16> %c, i1 0) + ret <2 x i16> 
%ret +} + +define <2 x i16> @test_add_max_u16_ssi_clamp(<2 x i16> inreg %a, <2 x i16> inreg %b) { +; GCN-LABEL: test_add_max_u16_ssi_clamp: +; GCN: ; %bb.0: +; GCN-NEXT: s_wait_loadcnt_dscnt 0x0 +; GCN-NEXT: s_wait_kmcnt 0x0 +; GCN-NEXT: v_pk_add_max_u16 v0, s0, s1, 1 op_sel_hi:[1,1,0] clamp +; GCN-NEXT: s_set_pc_i64 s[30:31] + %ret = tail call <2 x i16> @llvm.amdgcn.pk.add.max.u16(<2 x i16> %a, <2 x i16> %b, <2 x i16> <i16 1, i16 1>, i1 1) + ret <2 x i16> %ret +} +;; NOTE: These prefixes are unused and the list is autogenerated. Do not add tests below this line: +; GFX1250-GISEL: {{.*}} +; GFX1250-SDAG: {{.*}} diff --git a/llvm/test/CodeGen/AMDGPU/llvm.exp2.ll b/llvm/test/CodeGen/AMDGPU/llvm.exp2.ll index 883db20..e30a586 100644 --- a/llvm/test/CodeGen/AMDGPU/llvm.exp2.ll +++ b/llvm/test/CodeGen/AMDGPU/llvm.exp2.ll @@ -1485,7 +1485,7 @@ define float @v_exp2_f32_fast(float %in) { ret float %result } -define float @v_exp2_f32_unsafe_math_attr(float %in) "unsafe-fp-math"="true" { +define float @v_exp2_f32_unsafe_math_attr(float %in) { ; SI-SDAG-LABEL: v_exp2_f32_unsafe_math_attr: ; SI-SDAG: ; %bb.0: ; SI-SDAG-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0) diff --git a/llvm/test/CodeGen/AMDGPU/llvm.log2.ll b/llvm/test/CodeGen/AMDGPU/llvm.log2.ll index 0854134..61a777f 100644 --- a/llvm/test/CodeGen/AMDGPU/llvm.log2.ll +++ b/llvm/test/CodeGen/AMDGPU/llvm.log2.ll @@ -1907,7 +1907,7 @@ define float @v_log2_f32_fast(float %in) { ret float %result } -define float @v_log2_f32_unsafe_math_attr(float %in) "unsafe-fp-math"="true" { +define float @v_log2_f32_unsafe_math_attr(float %in) { ; SI-SDAG-LABEL: v_log2_f32_unsafe_math_attr: ; SI-SDAG: ; %bb.0: ; SI-SDAG-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0) diff --git a/llvm/test/CodeGen/AMDGPU/minmax.ll b/llvm/test/CodeGen/AMDGPU/minmax.ll index d578d2e..60570bd 100644 --- a/llvm/test/CodeGen/AMDGPU/minmax.ll +++ b/llvm/test/CodeGen/AMDGPU/minmax.ll @@ -1296,4 +1296,4 @@ declare half @llvm.minnum.f16(half, half) declare half @llvm.maxnum.f16(half, half) declare float @llvm.minnum.f32(float, float) declare float @llvm.maxnum.f32(float, float) -attributes #0 = { nounwind "unsafe-fp-math"="false" "no-nans-fp-math"="true" } +attributes #0 = { nounwind "no-nans-fp-math"="true" } diff --git a/llvm/test/CodeGen/AMDGPU/stackguard.ll b/llvm/test/CodeGen/AMDGPU/stackguard.ll new file mode 100644 index 0000000..393686f --- /dev/null +++ b/llvm/test/CodeGen/AMDGPU/stackguard.ll @@ -0,0 +1,14 @@ +; RUN: not llc -global-isel=0 -mtriple=amdgcn-amd-amdhsa -mcpu=gfx900 -filetype=null %s 2>&1 | FileCheck %s +; RUN: not llc -global-isel -mtriple=amdgcn-amd-amdhsa -mcpu=gfx900 -filetype=null %s 2>&1 | FileCheck %s + +; FIXME: To actually support stackguard, need to fix intrinsic to +; return pointer in any address space. 
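Context for the FIXME above: llvm.stackguard is declared to return ptr in address space 0, while AMDGPU keeps stack objects in addrspace(5), so both the SelectionDAG and GlobalISel paths emit the "unable to lower stackguard" error this test expects. An address-space-overloaded form along the lines the FIXME suggests might look like the following (hypothetical; no such overload exists today):

; Hypothetical overload sketched from the FIXME; not an existing intrinsic.
declare ptr addrspace(5) @llvm.stackguard.p5()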
+ +; CHECK: error: unable to lower stackguard +define i1 @test_stackguard(ptr %p1) { + %p2 = call ptr @llvm.stackguard() + %res = icmp ne ptr %p2, %p1 + ret i1 %res +} + +declare ptr @llvm.stackguard() diff --git a/llvm/test/CodeGen/ARM/2014-05-14-DwarfEHCrash.ll b/llvm/test/CodeGen/ARM/2014-05-14-DwarfEHCrash.ll index 96639ed..bfaf799 100644 --- a/llvm/test/CodeGen/ARM/2014-05-14-DwarfEHCrash.ll +++ b/llvm/test/CodeGen/ARM/2014-05-14-DwarfEHCrash.ll @@ -45,6 +45,6 @@ declare ptr @__cxa_begin_catch(ptr) declare void @__cxa_end_catch() -attributes #0 = { "less-precise-fpmad"="false" "frame-pointer"="all" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "stack-protector-buffer-size"="8" "unsafe-fp-math"="false" "use-soft-float"="true" } +attributes #0 = { "less-precise-fpmad"="false" "frame-pointer"="all" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "stack-protector-buffer-size"="8" "use-soft-float"="true" } attributes #1 = { nounwind readnone } attributes #2 = { nounwind } diff --git a/llvm/test/CodeGen/ARM/ARMLoadStoreDBG.mir b/llvm/test/CodeGen/ARM/ARMLoadStoreDBG.mir index 812ac23..a18023c 100644 --- a/llvm/test/CodeGen/ARM/ARMLoadStoreDBG.mir +++ b/llvm/test/CodeGen/ARM/ARMLoadStoreDBG.mir @@ -31,8 +31,8 @@ ; Function Attrs: nounwind readnone declare void @llvm.dbg.value(metadata, i64, metadata, metadata) #2 - attributes #0 = { nounwind "less-precise-fpmad"="false" "frame-pointer"="none" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "no-realign-stack" "stack-protector-buffer-size"="8" "unsafe-fp-math"="false" "use-soft-float"="false" } - attributes #1 = { "less-precise-fpmad"="false" "frame-pointer"="none" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "no-realign-stack" "stack-protector-buffer-size"="8" "unsafe-fp-math"="false" "use-soft-float"="false" } + attributes #0 = { nounwind "less-precise-fpmad"="false" "frame-pointer"="none" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "no-realign-stack" "stack-protector-buffer-size"="8" "use-soft-float"="false" } + attributes #1 = { "less-precise-fpmad"="false" "frame-pointer"="none" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "no-realign-stack" "stack-protector-buffer-size"="8" "use-soft-float"="false" } attributes #2 = { nounwind readnone } attributes #3 = { nounwind } diff --git a/llvm/test/CodeGen/ARM/Windows/wineh-basic.ll b/llvm/test/CodeGen/ARM/Windows/wineh-basic.ll index d0bdd66..e4dc92d 100644 --- a/llvm/test/CodeGen/ARM/Windows/wineh-basic.ll +++ b/llvm/test/CodeGen/ARM/Windows/wineh-basic.ll @@ -36,8 +36,8 @@ declare arm_aapcs_vfpcc i32 @__CxxFrameHandler3(...) 
declare arm_aapcs_vfpcc void @__std_terminate() local_unnamed_addr -attributes #0 = { nounwind "correctly-rounded-divide-sqrt-fp-math"="false" "disable-tail-calls"="false" "less-precise-fpmad"="false" "frame-pointer"="all" "no-infs-fp-math"="false" "no-jump-tables"="false" "no-nans-fp-math"="false" "no-signed-zeros-fp-math"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="cortex-a9" "target-features"="+dsp,+fp16,+neon,+strict-align,+vfp3" "unsafe-fp-math"="false" "use-soft-float"="false" } -attributes #1 = { "correctly-rounded-divide-sqrt-fp-math"="false" "disable-tail-calls"="false" "less-precise-fpmad"="false" "frame-pointer"="all" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "no-signed-zeros-fp-math"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="cortex-a9" "target-features"="+dsp,+fp16,+neon,+strict-align,+vfp3" "unsafe-fp-math"="false" "use-soft-float"="false" } +attributes #0 = { nounwind "correctly-rounded-divide-sqrt-fp-math"="false" "disable-tail-calls"="false" "less-precise-fpmad"="false" "frame-pointer"="all" "no-infs-fp-math"="false" "no-jump-tables"="false" "no-nans-fp-math"="false" "no-signed-zeros-fp-math"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="cortex-a9" "target-features"="+dsp,+fp16,+neon,+strict-align,+vfp3" "use-soft-float"="false" } +attributes #1 = { "correctly-rounded-divide-sqrt-fp-math"="false" "disable-tail-calls"="false" "less-precise-fpmad"="false" "frame-pointer"="all" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "no-signed-zeros-fp-math"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="cortex-a9" "target-features"="+dsp,+fp16,+neon,+strict-align,+vfp3" "use-soft-float"="false" } attributes #2 = { noreturn nounwind } !llvm.module.flags = !{!0, !1} diff --git a/llvm/test/CodeGen/ARM/byval_load_align.ll b/llvm/test/CodeGen/ARM/byval_load_align.ll index c594bd3..5bb4fe7 100644 --- a/llvm/test/CodeGen/ARM/byval_load_align.ll +++ b/llvm/test/CodeGen/ARM/byval_load_align.ll @@ -22,6 +22,6 @@ entry: declare void @Logger(i8 signext, ptr byval(%struct.ModuleID)) #1 -attributes #0 = { nounwind ssp "less-precise-fpmad"="false" "frame-pointer"="all" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "stack-protector-buffer-size"="8" "unsafe-fp-math"="false" "use-soft-float"="false" } -attributes #1 = { "less-precise-fpmad"="false" "frame-pointer"="all" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "stack-protector-buffer-size"="8" "unsafe-fp-math"="false" "use-soft-float"="false" } +attributes #0 = { nounwind ssp "less-precise-fpmad"="false" "frame-pointer"="all" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "stack-protector-buffer-size"="8" "use-soft-float"="false" } +attributes #1 = { "less-precise-fpmad"="false" "frame-pointer"="all" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "stack-protector-buffer-size"="8" "use-soft-float"="false" } attributes #2 = { nounwind } diff --git a/llvm/test/CodeGen/ARM/cfguard-module-flag.ll b/llvm/test/CodeGen/ARM/cfguard-module-flag.ll index 3e8c9f4..bb3c04a 100644 --- a/llvm/test/CodeGen/ARM/cfguard-module-flag.ll +++ b/llvm/test/CodeGen/ARM/cfguard-module-flag.ll @@ -21,7 +21,7 @@ entry: ; CHECK-NOT: __guard_check_icall_fptr ; CHECK-NOT: __guard_dispatch_icall_fptr } -attributes #0 = { "correctly-rounded-divide-sqrt-fp-math"="false" "disable-tail-calls"="false" "less-precise-fpmad"="false" "frame-pointer"="all" "no-infs-fp-math"="false" 
"no-nans-fp-math"="false" "no-signed-zeros-fp-math"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="cortex-a9" "target-features"="+armv7-a,+dsp,+fp16,+neon,+strict-align,+thumb-mode,+vfp3" "unsafe-fp-math"="false" "use-soft-float"="false"} +attributes #0 = { "correctly-rounded-divide-sqrt-fp-math"="false" "disable-tail-calls"="false" "less-precise-fpmad"="false" "frame-pointer"="all" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "no-signed-zeros-fp-math"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="cortex-a9" "target-features"="+armv7-a,+dsp,+fp16,+neon,+strict-align,+thumb-mode,+vfp3" "use-soft-float"="false"} !llvm.module.flags = !{!0} !0 = !{i32 2, !"cfguard", i32 1} diff --git a/llvm/test/CodeGen/ARM/clang-section.ll b/llvm/test/CodeGen/ARM/clang-section.ll index 9277d90..9c32ab2 100644 --- a/llvm/test/CodeGen/ARM/clang-section.ll +++ b/llvm/test/CodeGen/ARM/clang-section.ll @@ -35,8 +35,8 @@ attributes #0 = { "bss-section"="my_bss.1" "data-section"="my_data.1" "rodata-se attributes #1 = { "data-section"="my_data.1" "rodata-section"="my_rodata.1" } attributes #2 = { "bss-section"="my_bss.2" "rodata-section"="my_rodata.1" } attributes #3 = { "bss-section"="my_bss.2" "data-section"="my_data.2" "rodata-section"="my_rodata.2" } -attributes #6 = { "correctly-rounded-divide-sqrt-fp-math"="false" "denormal-fp-math"="preserve-sign,preserve-sign" "disable-tail-calls"="false" "less-precise-fpmad"="false" "frame-pointer"="none" "no-infs-fp-math"="true" "no-nans-fp-math"="true" "no-signed-zeros-fp-math"="true" "no-trapping-math"="true" "stack-protector-buffer-size"="8" "target-cpu"="cortex-a9" "target-features"="+dsp,+fp16,+neon,+vfp3" "unsafe-fp-math"="false" "use-soft-float"="false" } -attributes #7 = { noinline nounwind "correctly-rounded-divide-sqrt-fp-math"="false" "denormal-fp-math"="preserve-sign,preserve-sign" "disable-tail-calls"="false" "less-precise-fpmad"="false" "frame-pointer"="none" "no-infs-fp-math"="true" "no-jump-tables"="false" "no-nans-fp-math"="true" "no-signed-zeros-fp-math"="true" "no-trapping-math"="true" "stack-protector-buffer-size"="8" "target-cpu"="cortex-a9" "target-features"="+dsp,+fp16,+neon,+vfp3" "unsafe-fp-math"="false" "use-soft-float"="false" } +attributes #6 = { "correctly-rounded-divide-sqrt-fp-math"="false" "denormal-fp-math"="preserve-sign,preserve-sign" "disable-tail-calls"="false" "less-precise-fpmad"="false" "frame-pointer"="none" "no-infs-fp-math"="true" "no-nans-fp-math"="true" "no-signed-zeros-fp-math"="true" "no-trapping-math"="true" "stack-protector-buffer-size"="8" "target-cpu"="cortex-a9" "target-features"="+dsp,+fp16,+neon,+vfp3" "use-soft-float"="false" } +attributes #7 = { noinline nounwind "correctly-rounded-divide-sqrt-fp-math"="false" "denormal-fp-math"="preserve-sign,preserve-sign" "disable-tail-calls"="false" "less-precise-fpmad"="false" "frame-pointer"="none" "no-infs-fp-math"="true" "no-jump-tables"="false" "no-nans-fp-math"="true" "no-signed-zeros-fp-math"="true" "no-trapping-math"="true" "stack-protector-buffer-size"="8" "target-cpu"="cortex-a9" "target-features"="+dsp,+fp16,+neon,+vfp3" "use-soft-float"="false" } !llvm.module.flags = !{!0, !1, !2, !3} diff --git a/llvm/test/CodeGen/ARM/cmse-clear-float-bigend.mir b/llvm/test/CodeGen/ARM/cmse-clear-float-bigend.mir index 47f4e1a..ae36da4 100644 --- a/llvm/test/CodeGen/ARM/cmse-clear-float-bigend.mir +++ b/llvm/test/CodeGen/ARM/cmse-clear-float-bigend.mir @@ -16,7 +16,7 @@ ; Function Attrs: nounwind declare 
void @llvm.stackprotector(ptr, ptr) #1 - attributes #0 = { "cmse_nonsecure_entry" nounwind "correctly-rounded-divide-sqrt-fp-math"="false" "denormal-fp-math"="preserve-sign" "disable-tail-calls"="false" "less-precise-fpmad"="false" "no-frame-pointer-elim"="false" "no-infs-fp-math"="true" "no-jump-tables"="false" "no-nans-fp-math"="true" "no-signed-zeros-fp-math"="true" "no-trapping-math"="true" "stack-protector-buffer-size"="8" "target-cpu"="generic" "target-features"="+8msecext,+armv8-m.main,-d32,-fp64,+fp-armv8,+hwdiv,+thumb-mode,-crypto,-fullfp16,-neon" "unsafe-fp-math"="false" "use-soft-float"="false" } + attributes #0 = { "cmse_nonsecure_entry" nounwind "correctly-rounded-divide-sqrt-fp-math"="false" "denormal-fp-math"="preserve-sign" "disable-tail-calls"="false" "less-precise-fpmad"="false" "no-frame-pointer-elim"="false" "no-infs-fp-math"="true" "no-jump-tables"="false" "no-nans-fp-math"="true" "no-signed-zeros-fp-math"="true" "no-trapping-math"="true" "stack-protector-buffer-size"="8" "target-cpu"="generic" "target-features"="+8msecext,+armv8-m.main,-d32,-fp64,+fp-armv8,+hwdiv,+thumb-mode,-crypto,-fullfp16,-neon" "use-soft-float"="false" } attributes #1 = { nounwind } attributes #2 = { "cmse_nonsecure_call" nounwind } diff --git a/llvm/test/CodeGen/ARM/coalesce-dbgvalue.ll b/llvm/test/CodeGen/ARM/coalesce-dbgvalue.ll index 4d4853c..7960b79 100644 --- a/llvm/test/CodeGen/ARM/coalesce-dbgvalue.ll +++ b/llvm/test/CodeGen/ARM/coalesce-dbgvalue.ll @@ -72,8 +72,8 @@ declare i32 @fn3(...) #1 ; Function Attrs: nounwind readnone declare void @llvm.dbg.value(metadata, metadata, metadata) #2 -attributes #0 = { nounwind ssp "less-precise-fpmad"="false" "frame-pointer"="all" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "unsafe-fp-math"="false" "use-soft-float"="false" } -attributes #1 = { "less-precise-fpmad"="false" "frame-pointer"="all" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "unsafe-fp-math"="false" "use-soft-float"="false" } +attributes #0 = { nounwind ssp "less-precise-fpmad"="false" "frame-pointer"="all" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "use-soft-float"="false" } +attributes #1 = { "less-precise-fpmad"="false" "frame-pointer"="all" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "use-soft-float"="false" } attributes #2 = { nounwind readnone } attributes #3 = { nounwind } diff --git a/llvm/test/CodeGen/ARM/constantpool-promote-dbg.ll b/llvm/test/CodeGen/ARM/constantpool-promote-dbg.ll index 246eeeb..4bc6c41 100644 --- a/llvm/test/CodeGen/ARM/constantpool-promote-dbg.ll +++ b/llvm/test/CodeGen/ARM/constantpool-promote-dbg.ll @@ -19,7 +19,7 @@ entry: ret ptr getelementptr inbounds ([4 x i8], ptr @.str, i32 0, i32 1), !dbg !16 } -attributes #0 = { minsize norecurse nounwind optsize readnone "disable-tail-calls"="false" "less-precise-fpmad"="false" "frame-pointer"="all" "no-infs-fp-math"="false" "no-jump-tables"="false" "no-nans-fp-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="cortex-m3" "target-features"="+hwdiv,+soft-float,-crypto,-neon" "unsafe-fp-math"="false" "use-soft-float"="true" } +attributes #0 = { minsize norecurse nounwind optsize readnone "disable-tail-calls"="false" "less-precise-fpmad"="false" "frame-pointer"="all" "no-infs-fp-math"="false" "no-jump-tables"="false" "no-nans-fp-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="cortex-m3" "target-features"="+hwdiv,+soft-float,-crypto,-neon" "use-soft-float"="true" } !llvm.dbg.cu = !{!0} !llvm.module.flags = !{!3, !4, !5, !6} diff --git 
a/llvm/test/CodeGen/ARM/constantpool-promote.ll b/llvm/test/CodeGen/ARM/constantpool-promote.ll index c383b39..87f14ebf 100644 --- a/llvm/test/CodeGen/ARM/constantpool-promote.ll +++ b/llvm/test/CodeGen/ARM/constantpool-promote.ll @@ -200,8 +200,8 @@ declare void @d(ptr) #1 declare void @llvm.memcpy.p0.p0.i32(ptr nocapture writeonly, ptr nocapture readonly, i32, i1) declare void @llvm.memmove.p0.p0.i32(ptr, ptr, i32, i1) local_unnamed_addr -attributes #0 = { nounwind "less-precise-fpmad"="false" "frame-pointer"="all" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "stack-protector-buffer-size"="8" "unsafe-fp-math"="false" "use-soft-float"="false" } -attributes #1 = { "less-precise-fpmad"="false" "frame-pointer"="all" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "stack-protector-buffer-size"="8" "unsafe-fp-math"="false" "use-soft-float"="false" } +attributes #0 = { nounwind "less-precise-fpmad"="false" "frame-pointer"="all" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "stack-protector-buffer-size"="8" "use-soft-float"="false" } +attributes #1 = { "less-precise-fpmad"="false" "frame-pointer"="all" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "stack-protector-buffer-size"="8" "use-soft-float"="false" } attributes #2 = { nounwind } !llvm.module.flags = !{!0, !1} diff --git a/llvm/test/CodeGen/ARM/early-cfi-sections.ll b/llvm/test/CodeGen/ARM/early-cfi-sections.ll index 72b8702..ef99ae5 100644 --- a/llvm/test/CodeGen/ARM/early-cfi-sections.ll +++ b/llvm/test/CodeGen/ARM/early-cfi-sections.ll @@ -13,7 +13,7 @@ entry: ret void, !dbg !10 } -attributes #0 = { nounwind "correctly-rounded-divide-sqrt-fp-math"="false" "disable-tail-calls"="false" "less-precise-fpmad"="false" "frame-pointer"="all" "no-infs-fp-math"="false" "no-jump-tables"="false" "no-nans-fp-math"="false" "no-signed-zeros-fp-math"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="arm7tdmi" "target-features"="+soft-float,+strict-align,-crypto,-neon" "unsafe-fp-math"="false" "use-soft-float"="true" } +attributes #0 = { nounwind "correctly-rounded-divide-sqrt-fp-math"="false" "disable-tail-calls"="false" "less-precise-fpmad"="false" "frame-pointer"="all" "no-infs-fp-math"="false" "no-jump-tables"="false" "no-nans-fp-math"="false" "no-signed-zeros-fp-math"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="arm7tdmi" "target-features"="+soft-float,+strict-align,-crypto,-neon" "use-soft-float"="true" } !llvm.dbg.cu = !{!0} !llvm.module.flags = !{!3, !4, !5, !6} diff --git a/llvm/test/CodeGen/ARM/fp16-vld.ll b/llvm/test/CodeGen/ARM/fp16-vld.ll index 549546e..778685c 100644 --- a/llvm/test/CodeGen/ARM/fp16-vld.ll +++ b/llvm/test/CodeGen/ARM/fp16-vld.ll @@ -43,4 +43,4 @@ byeblock: ret void } -attributes #0 = { norecurse nounwind readonly "no-infs-fp-math"="true" "no-nans-fp-math"="true" "no-signed-zeros-fp-math"="true" "no-trapping-math"="true" "target-cpu"="generic" "target-features"="+armv8.2-a,+fullfp16,+strict-align,-thumb-mode" "unsafe-fp-math"="true" "use-soft-float"="false" } +attributes #0 = { norecurse nounwind readonly "no-infs-fp-math"="true" "no-nans-fp-math"="true" "no-signed-zeros-fp-math"="true" "no-trapping-math"="true" "target-cpu"="generic" "target-features"="+armv8.2-a,+fullfp16,+strict-align,-thumb-mode" "use-soft-float"="false" } diff --git a/llvm/test/CodeGen/ARM/global-merge-1.ll b/llvm/test/CodeGen/ARM/global-merge-1.ll index 46e9d96..05719ae 100644 --- a/llvm/test/CodeGen/ARM/global-merge-1.ll +++ 
b/llvm/test/CodeGen/ARM/global-merge-1.ll @@ -74,9 +74,9 @@ define internal ptr @returnFoo() #2 { ret ptr @foo } -attributes #0 = { nounwind ssp "less-precise-fpmad"="false" "frame-pointer"="all" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "stack-protector-buffer-size"="8" "unsafe-fp-math"="false" "use-soft-float"="false" } -attributes #1 = { "less-precise-fpmad"="false" "frame-pointer"="all" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "stack-protector-buffer-size"="8" "unsafe-fp-math"="false" "use-soft-float"="false" } -attributes #2 = { nounwind readnone ssp "less-precise-fpmad"="false" "frame-pointer"="all" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "stack-protector-buffer-size"="8" "unsafe-fp-math"="false" "use-soft-float"="false" } +attributes #0 = { nounwind ssp "less-precise-fpmad"="false" "frame-pointer"="all" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "stack-protector-buffer-size"="8" "use-soft-float"="false" } +attributes #1 = { "less-precise-fpmad"="false" "frame-pointer"="all" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "stack-protector-buffer-size"="8" "use-soft-float"="false" } +attributes #2 = { nounwind readnone ssp "less-precise-fpmad"="false" "frame-pointer"="all" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "stack-protector-buffer-size"="8" "use-soft-float"="false" } attributes #3 = { nounwind } !llvm.ident = !{!0} diff --git a/llvm/test/CodeGen/ARM/isel-v8i32-crash.ll b/llvm/test/CodeGen/ARM/isel-v8i32-crash.ll index 27534a6..bdd842a 100644 --- a/llvm/test/CodeGen/ARM/isel-v8i32-crash.ll +++ b/llvm/test/CodeGen/ARM/isel-v8i32-crash.ll @@ -21,4 +21,4 @@ entry: ret void } -attributes #0 = { nounwind "less-precise-fpmad"="false" "frame-pointer"="none" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "no-realign-stack" "stack-protector-buffer-size"="8" "unsafe-fp-math"="false" "use-soft-float"="false" } +attributes #0 = { nounwind "less-precise-fpmad"="false" "frame-pointer"="none" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "no-realign-stack" "stack-protector-buffer-size"="8" "use-soft-float"="false" } diff --git a/llvm/test/CodeGen/ARM/out-of-registers.ll b/llvm/test/CodeGen/ARM/out-of-registers.ll index c6488f1..8da2069 100644 --- a/llvm/test/CodeGen/ARM/out-of-registers.ll +++ b/llvm/test/CodeGen/ARM/out-of-registers.ll @@ -32,7 +32,7 @@ declare { <4 x float>, <4 x float>, <4 x float>, <4 x float> } @llvm.arm.neon.vl ; Function Attrs: nounwind -attributes #0 = { nounwind "less-precise-fpmad"="false" "frame-pointer"="none" "no-infs-fp-math"="true" "no-nans-fp-math"="true" "stack-protector-buffer-size"="8" "unsafe-fp-math"="true" "use-soft-float"="false" } +attributes #0 = { nounwind "less-precise-fpmad"="false" "frame-pointer"="none" "no-infs-fp-math"="true" "no-nans-fp-math"="true" "stack-protector-buffer-size"="8" "use-soft-float"="false" } attributes #1 = { nounwind } attributes #2 = { nounwind readonly } diff --git a/llvm/test/CodeGen/ARM/relax-per-target-feature.ll b/llvm/test/CodeGen/ARM/relax-per-target-feature.ll index 71db294..99ed6f3 100644 --- a/llvm/test/CodeGen/ARM/relax-per-target-feature.ll +++ b/llvm/test/CodeGen/ARM/relax-per-target-feature.ll @@ -30,5 +30,5 @@ entry: attributes #0 = { nounwind "disable-tail-calls"="false" "target-cpu"="cortex-a53" "target-features"="+crypto,+fp-armv8,+neon,+soft-float-abi,+strict-align,+thumb-mode,-crc,-dotprod,-dsp,-hwdiv,-hwdiv-arm,-ras" "use-soft-float"="true" } -attributes #2 = { nounwind "disable-tail-calls"="false" "target-cpu"="arm7tdmi" 
"target-features"="+strict-align,+thumb-mode,-crc,-dotprod,-dsp,-hwdiv,-hwdiv-arm,-ras" "unsafe-fp-math"="false" "use-soft-float"="true" } +attributes #2 = { nounwind "disable-tail-calls"="false" "target-cpu"="arm7tdmi" "target-features"="+strict-align,+thumb-mode,-crc,-dotprod,-dsp,-hwdiv,-hwdiv-arm,-ras" "use-soft-float"="true" } attributes #3 = { nounwind } diff --git a/llvm/test/CodeGen/ARM/softfp-constant-comparison.ll b/llvm/test/CodeGen/ARM/softfp-constant-comparison.ll index 76df93b..2aa7611 100644 --- a/llvm/test/CodeGen/ARM/softfp-constant-comparison.ll +++ b/llvm/test/CodeGen/ARM/softfp-constant-comparison.ll @@ -32,4 +32,4 @@ land.end: ; preds = %land.rhs, %entry ret void } -attributes #0 = { noinline nounwind optnone "correctly-rounded-divide-sqrt-fp-math"="false" "denormal-fp-math"="preserve-sign,preserve-sign" "disable-tail-calls"="false" "frame-pointer"="none" "less-precise-fpmad"="false" "min-legal-vector-width"="0" "no-infs-fp-math"="true" "no-jump-tables"="false" "no-nans-fp-math"="true" "no-signed-zeros-fp-math"="true" "no-trapping-math"="true" "stack-protector-buffer-size"="8" "target-cpu"="cortex-m4" "target-features"="+armv7e-m,+dsp,+fp16,+hwdiv,+thumb-mode,+vfp2sp,+vfp3d16sp,+vfp4d16sp,-aes,-crc,-crypto,-dotprod,-fp16fml,-fullfp16,-hwdiv-arm,-lob,-mve,-mve.fp,-ras,-sb,-sha2" "unsafe-fp-math"="false" "use-soft-float"="false" } +attributes #0 = { noinline nounwind optnone "correctly-rounded-divide-sqrt-fp-math"="false" "denormal-fp-math"="preserve-sign,preserve-sign" "disable-tail-calls"="false" "frame-pointer"="none" "less-precise-fpmad"="false" "min-legal-vector-width"="0" "no-infs-fp-math"="true" "no-jump-tables"="false" "no-nans-fp-math"="true" "no-signed-zeros-fp-math"="true" "no-trapping-math"="true" "stack-protector-buffer-size"="8" "target-cpu"="cortex-m4" "target-features"="+armv7e-m,+dsp,+fp16,+hwdiv,+thumb-mode,+vfp2sp,+vfp3d16sp,+vfp4d16sp,-aes,-crc,-crypto,-dotprod,-fp16fml,-fullfp16,-hwdiv-arm,-lob,-mve,-mve.fp,-ras,-sb,-sha2" "use-soft-float"="false" } diff --git a/llvm/test/CodeGen/ARM/stack-protector-bmovpcb_call.ll b/llvm/test/CodeGen/ARM/stack-protector-bmovpcb_call.ll index 6f2cb42..2cf6d29 100644 --- a/llvm/test/CodeGen/ARM/stack-protector-bmovpcb_call.ll +++ b/llvm/test/CodeGen/ARM/stack-protector-bmovpcb_call.ll @@ -25,7 +25,7 @@ declare void @llvm.memcpy.p0.p0.i32(ptr nocapture, ptr nocapture readonly, i32, ; Function Attrs: nounwind optsize declare i32 @printf(ptr nocapture readonly, ...) 
#2 -attributes #0 = { nounwind optsize ssp "less-precise-fpmad"="false" "frame-pointer"="none" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "stack-protector-buffer-size"="8" "unsafe-fp-math"="false" "use-soft-float"="false" } +attributes #0 = { nounwind optsize ssp "less-precise-fpmad"="false" "frame-pointer"="none" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "stack-protector-buffer-size"="8" "use-soft-float"="false" } attributes #1 = { nounwind } -attributes #2 = { nounwind optsize "less-precise-fpmad"="false" "frame-pointer"="none" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "stack-protector-buffer-size"="8" "unsafe-fp-math"="false" "use-soft-float"="false" } +attributes #2 = { nounwind optsize "less-precise-fpmad"="false" "frame-pointer"="none" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "stack-protector-buffer-size"="8" "use-soft-float"="false" } attributes #3 = { nounwind optsize } diff --git a/llvm/test/CodeGen/ARM/stack_guard_remat.ll b/llvm/test/CodeGen/ARM/stack_guard_remat.ll index 983ef13..0930ccc 100644 --- a/llvm/test/CodeGen/ARM/stack_guard_remat.ll +++ b/llvm/test/CodeGen/ARM/stack_guard_remat.ll @@ -68,7 +68,7 @@ declare void @foo3(ptr) ; Function Attrs: nounwind declare void @llvm.lifetime.end.p0(i64, ptr nocapture) -attributes #0 = { nounwind ssp "less-precise-fpmad"="false" "frame-pointer"="all" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "stack-protector-buffer-size"="8" "unsafe-fp-math"="false" "use-soft-float"="false" } +attributes #0 = { nounwind ssp "less-precise-fpmad"="false" "frame-pointer"="all" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "stack-protector-buffer-size"="8" "use-soft-float"="false" } ;--- pic-flag.ll !llvm.module.flags = !{!0} diff --git a/llvm/test/CodeGen/ARM/struct-byval-frame-index.ll b/llvm/test/CodeGen/ARM/struct-byval-frame-index.ll index 24df0d3..868dc03 100644 --- a/llvm/test/CodeGen/ARM/struct-byval-frame-index.ll +++ b/llvm/test/CodeGen/ARM/struct-byval-frame-index.ll @@ -34,4 +34,4 @@ entry: ; Function Attrs: nounwind declare void @RestoreMVBlock8x8(i32, i32, ptr byval(%structN) nocapture, i32) #1 -attributes #1 = { nounwind "less-precise-fpmad"="false" "frame-pointer"="none" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "unsafe-fp-math"="false" "use-soft-float"="false" } +attributes #1 = { nounwind "less-precise-fpmad"="false" "frame-pointer"="none" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "use-soft-float"="false" } diff --git a/llvm/test/CodeGen/ARM/subtarget-align.ll b/llvm/test/CodeGen/ARM/subtarget-align.ll index a24b487..f87e21f 100644 --- a/llvm/test/CodeGen/ARM/subtarget-align.ll +++ b/llvm/test/CodeGen/ARM/subtarget-align.ll @@ -18,7 +18,7 @@ entry: ret i32 0 } -attributes #0 = { "target-cpu"="generic" "target-features"="+armv7-a,+dsp,+neon,+vfp3,-thumb-mode" "unsafe-fp-math"="false" "use-soft-float"="false" } +attributes #0 = { "target-cpu"="generic" "target-features"="+armv7-a,+dsp,+neon,+vfp3,-thumb-mode" "use-soft-float"="false" } attributes #1 = { "target-cpu"="arm7tdmi" "target-features"="+armv4t" "use-soft-float"="true" } diff --git a/llvm/test/CodeGen/ARM/unschedule-first-call.ll b/llvm/test/CodeGen/ARM/unschedule-first-call.ll index e0bb787..ad422f7 100644 --- a/llvm/test/CodeGen/ARM/unschedule-first-call.ll +++ b/llvm/test/CodeGen/ARM/unschedule-first-call.ll @@ -128,7 +128,7 @@ declare { i64, i1 } @llvm.sadd.with.overflow.i64(i64, i64) #1 ; Function Attrs: nounwind readnone declare { i64, i1 } @llvm.ssub.with.overflow.i64(i64, i64) #1 -attributes #0 = { 
nounwind "correctly-rounded-divide-sqrt-fp-math"="false" "disable-tail-calls"="false" "less-precise-fpmad"="false" "frame-pointer"="none" "no-infs-fp-math"="false" "no-jump-tables"="false" "no-nans-fp-math"="false" "no-signed-zeros-fp-math"="false" "no-trapping-math"="false" "polly-optimized" "stack-protector-buffer-size"="8" "target-cpu"="arm1176jzf-s" "target-features"="+dsp,+strict-align,+vfp2" "unsafe-fp-math"="false" "use-soft-float"="false" } +attributes #0 = { nounwind "correctly-rounded-divide-sqrt-fp-math"="false" "disable-tail-calls"="false" "less-precise-fpmad"="false" "frame-pointer"="none" "no-infs-fp-math"="false" "no-jump-tables"="false" "no-nans-fp-math"="false" "no-signed-zeros-fp-math"="false" "no-trapping-math"="false" "polly-optimized" "stack-protector-buffer-size"="8" "target-cpu"="arm1176jzf-s" "target-features"="+dsp,+strict-align,+vfp2" "use-soft-float"="false" } attributes #1 = { nounwind readnone } !llvm.ident = !{!0} diff --git a/llvm/test/CodeGen/ARM/vector-spilling.ll b/llvm/test/CodeGen/ARM/vector-spilling.ll index 5dc20a8..8d1339844 100644 --- a/llvm/test/CodeGen/ARM/vector-spilling.ll +++ b/llvm/test/CodeGen/ARM/vector-spilling.ll @@ -30,4 +30,4 @@ entry: declare void @foo(<8 x i64>, <8 x i64>, <8 x i64>, <8 x i64>, <8 x i64>, <8 x i64>) -attributes #0 = { noredzone "less-precise-fpmad"="false" "frame-pointer"="none" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "unsafe-fp-math"="false" "use-soft-float"="false" } +attributes #0 = { noredzone "less-precise-fpmad"="false" "frame-pointer"="none" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "use-soft-float"="false" } diff --git a/llvm/test/CodeGen/ARM/vldm-sched-a9.ll b/llvm/test/CodeGen/ARM/vldm-sched-a9.ll index 892b261..4e36711 100644 --- a/llvm/test/CodeGen/ARM/vldm-sched-a9.ll +++ b/llvm/test/CodeGen/ARM/vldm-sched-a9.ll @@ -132,4 +132,4 @@ entry: declare void @capture(ptr, ptr) -attributes #0 = { noredzone "less-precise-fpmad"="false" "frame-pointer"="none" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "unsafe-fp-math"="false" "use-soft-float"="false" } +attributes #0 = { noredzone "less-precise-fpmad"="false" "frame-pointer"="none" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "use-soft-float"="false" } diff --git a/llvm/test/CodeGen/DirectX/CBufferAccess/unused.ll b/llvm/test/CodeGen/DirectX/CBufferAccess/unused.ll new file mode 100644 index 0000000..8c0d82e --- /dev/null +++ b/llvm/test/CodeGen/DirectX/CBufferAccess/unused.ll @@ -0,0 +1,13 @@ +; RUN: opt -S -dxil-cbuffer-access -mtriple=dxil--shadermodel6.3-library %s | FileCheck %s +; Check that we correctly ignore cbuffers that were nulled out by optimizations. 
+ +%__cblayout_CB = type <{ float }> +@CB.cb = local_unnamed_addr global target("dx.CBuffer", %__cblayout_CB) poison +@x = external local_unnamed_addr addrspace(2) global float, align 4 + +; CHECK-NOT: !hlsl.cbs = +!hlsl.cbs = !{!0, !1, !2} + +!0 = !{ptr @CB.cb, ptr addrspace(2) @x} +!1 = !{ptr @CB.cb, null} +!2 = !{null, null} diff --git a/llvm/test/CodeGen/LoongArch/lasx/shuffle-as-permute-and-shuffle.ll b/llvm/test/CodeGen/LoongArch/lasx/shuffle-as-permute-and-shuffle.ll index 245f764..7149cdb 100644 --- a/llvm/test/CodeGen/LoongArch/lasx/shuffle-as-permute-and-shuffle.ll +++ b/llvm/test/CodeGen/LoongArch/lasx/shuffle-as-permute-and-shuffle.ll @@ -32,9 +32,7 @@ define <16 x i16> @shuffle_v16i16(<16 x i16> %a) { ; CHECK: # %bb.0: ; CHECK-NEXT: pcalau12i $a0, %pc_hi20(.LCPI2_0) ; CHECK-NEXT: xvld $xr1, $a0, %pc_lo12(.LCPI2_0) -; CHECK-NEXT: xvpermi.d $xr2, $xr0, 78 -; CHECK-NEXT: xvshuf.w $xr1, $xr2, $xr0 -; CHECK-NEXT: xvori.b $xr0, $xr1, 0 +; CHECK-NEXT: xvperm.w $xr0, $xr0, $xr1 ; CHECK-NEXT: ret %shuffle = shufflevector <16 x i16> %a, <16 x i16> poison, <16 x i32> <i32 8, i32 9, i32 0, i32 1, i32 2, i32 3, i32 4, i32 5, i32 8, i32 9, i32 10, i32 11, i32 12, i32 13, i32 14, i32 15> ret <16 x i16> %shuffle @@ -55,9 +53,7 @@ define <16 x i16> @shuffle_v16i16_same_lane(<16 x i16> %a) { define <8 x i32> @shuffle_v8i32(<8 x i32> %a) { ; CHECK-LABEL: shuffle_v8i32: ; CHECK: # %bb.0: -; CHECK-NEXT: pcalau12i $a0, %pc_hi20(.LCPI4_0) -; CHECK-NEXT: xvld $xr1, $a0, %pc_lo12(.LCPI4_0) -; CHECK-NEXT: xvperm.w $xr0, $xr0, $xr1 +; CHECK-NEXT: xvpermi.d $xr0, $xr0, 226 ; CHECK-NEXT: ret %shuffle = shufflevector <8 x i32> %a, <8 x i32> poison, <8 x i32> <i32 4, i32 5, i32 0, i32 1, i32 4, i32 5, i32 6, i32 7> ret <8 x i32> %shuffle @@ -93,9 +89,7 @@ define <4 x i64> @shuffle_v4i64_same_lane(<4 x i64> %a) { define <8 x float> @shuffle_v8f32(<8 x float> %a) { ; CHECK-LABEL: shuffle_v8f32: ; CHECK: # %bb.0: -; CHECK-NEXT: pcalau12i $a0, %pc_hi20(.LCPI8_0) -; CHECK-NEXT: xvld $xr1, $a0, %pc_lo12(.LCPI8_0) -; CHECK-NEXT: xvperm.w $xr0, $xr0, $xr1 +; CHECK-NEXT: xvpermi.d $xr0, $xr0, 226 ; CHECK-NEXT: ret %shuffle = shufflevector <8 x float> %a, <8 x float> poison, <8 x i32> <i32 4, i32 5, i32 0, i32 1, i32 4, i32 5, i32 6, i32 7> ret <8 x float> %shuffle diff --git a/llvm/test/CodeGen/MSP430/libcalls.ll b/llvm/test/CodeGen/MSP430/libcalls.ll index 5d3755c..d1bafea 100644 --- a/llvm/test/CodeGen/MSP430/libcalls.ll +++ b/llvm/test/CodeGen/MSP430/libcalls.ll @@ -639,4 +639,18 @@ entry: ret i32 %shr } +define i64 @test__mspabi_divull(i64 %a, i64 %b) #0 { +; CHECK-LABEL: test__mspabi_divull: +; CHECK: call #__mspabi_divull + %result = udiv i64 %a, %b + ret i64 %result +} + +define i64 @test__mspabi_remull(i64 %a, i64 %b) #0 { +; CHECK-LABEL: test__mspabi_remull: +; CHECK: call #__mspabi_remull + %result = urem i64 %a, %b + ret i64 %result +} + attributes #0 = { nounwind } diff --git a/llvm/test/CodeGen/RISCV/GlobalISel/store-fp-zero-to-x0.ll b/llvm/test/CodeGen/RISCV/GlobalISel/store-fp-zero-to-x0.ll new file mode 100644 index 0000000..bc79c6f --- /dev/null +++ b/llvm/test/CodeGen/RISCV/GlobalISel/store-fp-zero-to-x0.ll @@ -0,0 +1,320 @@ +; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py +; RUN: llc -global-isel -mtriple=riscv32 -mattr=+f,+zfh < %s \ +; RUN: | FileCheck %s --check-prefix=RV32F +; RUN: llc -global-isel -mtriple=riscv32 -mattr=+d,+zfh < %s \ +; RUN: | FileCheck %s --check-prefix=RV32D +; RUN: llc -global-isel -mtriple=riscv64 -mattr=+f,+zfh < %s \ +; RUN: | FileCheck 
%s --check-prefix=RV64F +; RUN: llc -global-isel -mtriple=riscv64 -mattr=+d,+zfh < %s \ +; RUN: | FileCheck %s --check-prefix=RV64D + +define void @zero_f16(ptr %i) { +; RV32F-LABEL: zero_f16: +; RV32F: # %bb.0: # %entry +; RV32F-NEXT: sh zero, 0(a0) +; RV32F-NEXT: ret +; +; RV32D-LABEL: zero_f16: +; RV32D: # %bb.0: # %entry +; RV32D-NEXT: sh zero, 0(a0) +; RV32D-NEXT: ret +; +; RV64F-LABEL: zero_f16: +; RV64F: # %bb.0: # %entry +; RV64F-NEXT: sh zero, 0(a0) +; RV64F-NEXT: ret +; +; RV64D-LABEL: zero_f16: +; RV64D: # %bb.0: # %entry +; RV64D-NEXT: sh zero, 0(a0) +; RV64D-NEXT: ret +entry: + store half 0.0, ptr %i, align 4 + ret void +} + +define void @zero_bf16(ptr %i) { +; RV32F-LABEL: zero_bf16: +; RV32F: # %bb.0: # %entry +; RV32F-NEXT: sh zero, 0(a0) +; RV32F-NEXT: ret +; +; RV32D-LABEL: zero_bf16: +; RV32D: # %bb.0: # %entry +; RV32D-NEXT: sh zero, 0(a0) +; RV32D-NEXT: ret +; +; RV64F-LABEL: zero_bf16: +; RV64F: # %bb.0: # %entry +; RV64F-NEXT: sh zero, 0(a0) +; RV64F-NEXT: ret +; +; RV64D-LABEL: zero_bf16: +; RV64D: # %bb.0: # %entry +; RV64D-NEXT: sh zero, 0(a0) +; RV64D-NEXT: ret +entry: + store bfloat 0.0, ptr %i, align 4 + ret void +} + +define void @zero_f32(ptr %i) { +; RV32F-LABEL: zero_f32: +; RV32F: # %bb.0: # %entry +; RV32F-NEXT: sw zero, 0(a0) +; RV32F-NEXT: ret +; +; RV32D-LABEL: zero_f32: +; RV32D: # %bb.0: # %entry +; RV32D-NEXT: sw zero, 0(a0) +; RV32D-NEXT: ret +; +; RV64F-LABEL: zero_f32: +; RV64F: # %bb.0: # %entry +; RV64F-NEXT: sw zero, 0(a0) +; RV64F-NEXT: ret +; +; RV64D-LABEL: zero_f32: +; RV64D: # %bb.0: # %entry +; RV64D-NEXT: sw zero, 0(a0) +; RV64D-NEXT: ret +entry: + store float 0.0, ptr %i, align 4 + ret void +} + + +define void @zero_f64(ptr %i) { +; RV32F-LABEL: zero_f64: +; RV32F: # %bb.0: # %entry +; RV32F-NEXT: lui a1, %hi(.LCPI3_0) +; RV32F-NEXT: addi a1, a1, %lo(.LCPI3_0) +; RV32F-NEXT: lw a2, 0(a1) +; RV32F-NEXT: lw a1, 4(a1) +; RV32F-NEXT: sw a2, 0(a0) +; RV32F-NEXT: sw a1, 4(a0) +; RV32F-NEXT: ret +; +; RV32D-LABEL: zero_f64: +; RV32D: # %bb.0: # %entry +; RV32D-NEXT: fcvt.d.w fa5, zero +; RV32D-NEXT: fsd fa5, 0(a0) +; RV32D-NEXT: ret +; +; RV64F-LABEL: zero_f64: +; RV64F: # %bb.0: # %entry +; RV64F-NEXT: sd zero, 0(a0) +; RV64F-NEXT: ret +; +; RV64D-LABEL: zero_f64: +; RV64D: # %bb.0: # %entry +; RV64D-NEXT: sd zero, 0(a0) +; RV64D-NEXT: ret +entry: + store double 0.0, ptr %i, align 8 + ret void +} + +define void @zero_v1f32(ptr %i) { +; RV32F-LABEL: zero_v1f32: +; RV32F: # %bb.0: # %entry +; RV32F-NEXT: sw zero, 0(a0) +; RV32F-NEXT: ret +; +; RV32D-LABEL: zero_v1f32: +; RV32D: # %bb.0: # %entry +; RV32D-NEXT: sw zero, 0(a0) +; RV32D-NEXT: ret +; +; RV64F-LABEL: zero_v1f32: +; RV64F: # %bb.0: # %entry +; RV64F-NEXT: sw zero, 0(a0) +; RV64F-NEXT: ret +; +; RV64D-LABEL: zero_v1f32: +; RV64D: # %bb.0: # %entry +; RV64D-NEXT: sw zero, 0(a0) +; RV64D-NEXT: ret +entry: + store <1 x float> <float 0.0>, ptr %i, align 8 + ret void +} + +define void @zero_v2f32(ptr %i) { +; RV32F-LABEL: zero_v2f32: +; RV32F: # %bb.0: # %entry +; RV32F-NEXT: sw zero, 0(a0) +; RV32F-NEXT: sw zero, 4(a0) +; RV32F-NEXT: ret +; +; RV32D-LABEL: zero_v2f32: +; RV32D: # %bb.0: # %entry +; RV32D-NEXT: sw zero, 0(a0) +; RV32D-NEXT: sw zero, 4(a0) +; RV32D-NEXT: ret +; +; RV64F-LABEL: zero_v2f32: +; RV64F: # %bb.0: # %entry +; RV64F-NEXT: sw zero, 0(a0) +; RV64F-NEXT: sw zero, 4(a0) +; RV64F-NEXT: ret +; +; RV64D-LABEL: zero_v2f32: +; RV64D: # %bb.0: # %entry +; RV64D-NEXT: sw zero, 0(a0) +; RV64D-NEXT: sw zero, 4(a0) +; RV64D-NEXT: ret +entry: + store <2 x float> <float 0.0, 
float 0.0>, ptr %i, align 8 + ret void +} + +define void @zero_v4f32(ptr %i) { +; RV32F-LABEL: zero_v4f32: +; RV32F: # %bb.0: # %entry +; RV32F-NEXT: sw zero, 0(a0) +; RV32F-NEXT: sw zero, 4(a0) +; RV32F-NEXT: sw zero, 8(a0) +; RV32F-NEXT: sw zero, 12(a0) +; RV32F-NEXT: ret +; +; RV32D-LABEL: zero_v4f32: +; RV32D: # %bb.0: # %entry +; RV32D-NEXT: sw zero, 0(a0) +; RV32D-NEXT: sw zero, 4(a0) +; RV32D-NEXT: sw zero, 8(a0) +; RV32D-NEXT: sw zero, 12(a0) +; RV32D-NEXT: ret +; +; RV64F-LABEL: zero_v4f32: +; RV64F: # %bb.0: # %entry +; RV64F-NEXT: sw zero, 0(a0) +; RV64F-NEXT: sw zero, 4(a0) +; RV64F-NEXT: sw zero, 8(a0) +; RV64F-NEXT: sw zero, 12(a0) +; RV64F-NEXT: ret +; +; RV64D-LABEL: zero_v4f32: +; RV64D: # %bb.0: # %entry +; RV64D-NEXT: sw zero, 0(a0) +; RV64D-NEXT: sw zero, 4(a0) +; RV64D-NEXT: sw zero, 8(a0) +; RV64D-NEXT: sw zero, 12(a0) +; RV64D-NEXT: ret +entry: + store <4 x float> <float 0.0, float 0.0, float 0.0, float 0.0>, ptr %i, align 8 + ret void +} + +define void @zero_v1f64(ptr %i) { +; RV32F-LABEL: zero_v1f64: +; RV32F: # %bb.0: # %entry +; RV32F-NEXT: lui a1, %hi(.LCPI7_0) +; RV32F-NEXT: addi a1, a1, %lo(.LCPI7_0) +; RV32F-NEXT: lw a2, 0(a1) +; RV32F-NEXT: lw a1, 4(a1) +; RV32F-NEXT: sw a2, 0(a0) +; RV32F-NEXT: sw a1, 4(a0) +; RV32F-NEXT: ret +; +; RV32D-LABEL: zero_v1f64: +; RV32D: # %bb.0: # %entry +; RV32D-NEXT: fcvt.d.w fa5, zero +; RV32D-NEXT: fsd fa5, 0(a0) +; RV32D-NEXT: ret +; +; RV64F-LABEL: zero_v1f64: +; RV64F: # %bb.0: # %entry +; RV64F-NEXT: sd zero, 0(a0) +; RV64F-NEXT: ret +; +; RV64D-LABEL: zero_v1f64: +; RV64D: # %bb.0: # %entry +; RV64D-NEXT: sd zero, 0(a0) +; RV64D-NEXT: ret +entry: + store <1 x double> <double 0.0>, ptr %i, align 8 + ret void +} + +define void @zero_v2f64(ptr %i) { +; RV32F-LABEL: zero_v2f64: +; RV32F: # %bb.0: # %entry +; RV32F-NEXT: lui a1, %hi(.LCPI8_0) +; RV32F-NEXT: addi a1, a1, %lo(.LCPI8_0) +; RV32F-NEXT: lw a2, 0(a1) +; RV32F-NEXT: lw a1, 4(a1) +; RV32F-NEXT: sw a2, 0(a0) +; RV32F-NEXT: sw a1, 4(a0) +; RV32F-NEXT: sw a2, 8(a0) +; RV32F-NEXT: sw a1, 12(a0) +; RV32F-NEXT: ret +; +; RV32D-LABEL: zero_v2f64: +; RV32D: # %bb.0: # %entry +; RV32D-NEXT: fcvt.d.w fa5, zero +; RV32D-NEXT: fsd fa5, 0(a0) +; RV32D-NEXT: fsd fa5, 8(a0) +; RV32D-NEXT: ret +; +; RV64F-LABEL: zero_v2f64: +; RV64F: # %bb.0: # %entry +; RV64F-NEXT: sd zero, 0(a0) +; RV64F-NEXT: sd zero, 8(a0) +; RV64F-NEXT: ret +; +; RV64D-LABEL: zero_v2f64: +; RV64D: # %bb.0: # %entry +; RV64D-NEXT: sd zero, 0(a0) +; RV64D-NEXT: sd zero, 8(a0) +; RV64D-NEXT: ret +entry: + store <2 x double> <double 0.0, double 0.0>, ptr %i, align 8 + ret void +} + +define void @zero_v4f64(ptr %i) { +; RV32F-LABEL: zero_v4f64: +; RV32F: # %bb.0: # %entry +; RV32F-NEXT: lui a1, %hi(.LCPI9_0) +; RV32F-NEXT: addi a1, a1, %lo(.LCPI9_0) +; RV32F-NEXT: lw a2, 0(a1) +; RV32F-NEXT: lw a1, 4(a1) +; RV32F-NEXT: sw a2, 0(a0) +; RV32F-NEXT: sw a1, 4(a0) +; RV32F-NEXT: sw a2, 8(a0) +; RV32F-NEXT: sw a1, 12(a0) +; RV32F-NEXT: sw a2, 16(a0) +; RV32F-NEXT: sw a1, 20(a0) +; RV32F-NEXT: sw a2, 24(a0) +; RV32F-NEXT: sw a1, 28(a0) +; RV32F-NEXT: ret +; +; RV32D-LABEL: zero_v4f64: +; RV32D: # %bb.0: # %entry +; RV32D-NEXT: fcvt.d.w fa5, zero +; RV32D-NEXT: fsd fa5, 0(a0) +; RV32D-NEXT: fsd fa5, 8(a0) +; RV32D-NEXT: fsd fa5, 16(a0) +; RV32D-NEXT: fsd fa5, 24(a0) +; RV32D-NEXT: ret +; +; RV64F-LABEL: zero_v4f64: +; RV64F: # %bb.0: # %entry +; RV64F-NEXT: sd zero, 0(a0) +; RV64F-NEXT: sd zero, 8(a0) +; RV64F-NEXT: sd zero, 16(a0) +; RV64F-NEXT: sd zero, 24(a0) +; RV64F-NEXT: ret +; +; RV64D-LABEL: zero_v4f64: +; RV64D: 
# %bb.0: # %entry +; RV64D-NEXT: sd zero, 0(a0) +; RV64D-NEXT: sd zero, 8(a0) +; RV64D-NEXT: sd zero, 16(a0) +; RV64D-NEXT: sd zero, 24(a0) +; RV64D-NEXT: ret +entry: + store <4 x double> <double 0.0, double 0.0, double 0.0, double 0.0>, ptr %i, align 8 + ret void +} diff --git a/llvm/test/CodeGen/RISCV/rvv/vfadd-sdnode.ll b/llvm/test/CodeGen/RISCV/rvv/vfadd-sdnode.ll index 061b2b0..abd00b6 100644 --- a/llvm/test/CodeGen/RISCV/rvv/vfadd-sdnode.ll +++ b/llvm/test/CodeGen/RISCV/rvv/vfadd-sdnode.ll @@ -11,33 +11,80 @@ ; RUN: llc -mtriple=riscv64 -mattr=+d,+zfhmin,+zvfhmin,+zfbfmin,+zvfbfmin,+v \ ; RUN: -target-abi=lp64d -verify-machineinstrs < %s | FileCheck %s \ ; RUN: --check-prefixes=CHECK,ZVFHMIN +; RUN: llc -mtriple=riscv64 -mattr=+zvfh,+experimental-zvfbfa,+v \ +; RUN: -target-abi=lp64d -verify-machineinstrs < %s | FileCheck %s \ +; RUN: --check-prefixes=CHECK,ZVFBFA define <vscale x 1 x bfloat> @vfadd_vv_nxv1bf16(<vscale x 1 x bfloat> %va, <vscale x 1 x bfloat> %vb) { -; CHECK-LABEL: vfadd_vv_nxv1bf16: -; CHECK: # %bb.0: -; CHECK-NEXT: vsetvli a0, zero, e16, mf4, ta, ma -; CHECK-NEXT: vfwcvtbf16.f.f.v v10, v9 -; CHECK-NEXT: vfwcvtbf16.f.f.v v9, v8 -; CHECK-NEXT: vsetvli zero, zero, e32, mf2, ta, ma -; CHECK-NEXT: vfadd.vv v9, v9, v10 -; CHECK-NEXT: vsetvli zero, zero, e16, mf4, ta, ma -; CHECK-NEXT: vfncvtbf16.f.f.w v8, v9 -; CHECK-NEXT: ret +; ZVFH-LABEL: vfadd_vv_nxv1bf16: +; ZVFH: # %bb.0: +; ZVFH-NEXT: vsetvli a0, zero, e16, mf4, ta, ma +; ZVFH-NEXT: vfwcvtbf16.f.f.v v10, v9 +; ZVFH-NEXT: vfwcvtbf16.f.f.v v9, v8 +; ZVFH-NEXT: vsetvli zero, zero, e32, mf2, ta, ma +; ZVFH-NEXT: vfadd.vv v9, v9, v10 +; ZVFH-NEXT: vsetvli zero, zero, e16, mf4, ta, ma +; ZVFH-NEXT: vfncvtbf16.f.f.w v8, v9 +; ZVFH-NEXT: ret +; +; ZVFHMIN-LABEL: vfadd_vv_nxv1bf16: +; ZVFHMIN: # %bb.0: +; ZVFHMIN-NEXT: vsetvli a0, zero, e16, mf4, ta, ma +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v10, v9 +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v9, v8 +; ZVFHMIN-NEXT: vsetvli zero, zero, e32, mf2, ta, ma +; ZVFHMIN-NEXT: vfadd.vv v9, v9, v10 +; ZVFHMIN-NEXT: vsetvli zero, zero, e16, mf4, ta, ma +; ZVFHMIN-NEXT: vfncvtbf16.f.f.w v8, v9 +; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vv_nxv1bf16: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: vsetvli a0, zero, e16alt, mf4, ta, ma +; ZVFBFA-NEXT: vfwcvt.f.f.v v10, v9 +; ZVFBFA-NEXT: vfwcvt.f.f.v v9, v8 +; ZVFBFA-NEXT: vsetvli zero, zero, e32, mf2, ta, ma +; ZVFBFA-NEXT: vfadd.vv v9, v9, v10 +; ZVFBFA-NEXT: vsetvli zero, zero, e16alt, mf4, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v8, v9 +; ZVFBFA-NEXT: ret %vc = fadd <vscale x 1 x bfloat> %va, %vb ret <vscale x 1 x bfloat> %vc } define <vscale x 1 x bfloat> @vfadd_vf_nxv1bf16(<vscale x 1 x bfloat> %va, bfloat %b) { -; CHECK-LABEL: vfadd_vf_nxv1bf16: -; CHECK: # %bb.0: -; CHECK-NEXT: fcvt.s.bf16 fa5, fa0 -; CHECK-NEXT: vsetvli a0, zero, e16, mf4, ta, ma -; CHECK-NEXT: vfwcvtbf16.f.f.v v9, v8 -; CHECK-NEXT: vsetvli zero, zero, e32, mf2, ta, ma -; CHECK-NEXT: vfadd.vf v9, v9, fa5 -; CHECK-NEXT: vsetvli zero, zero, e16, mf4, ta, ma -; CHECK-NEXT: vfncvtbf16.f.f.w v8, v9 -; CHECK-NEXT: ret +; ZVFH-LABEL: vfadd_vf_nxv1bf16: +; ZVFH: # %bb.0: +; ZVFH-NEXT: fcvt.s.bf16 fa5, fa0 +; ZVFH-NEXT: vsetvli a0, zero, e16, mf4, ta, ma +; ZVFH-NEXT: vfwcvtbf16.f.f.v v9, v8 +; ZVFH-NEXT: vsetvli zero, zero, e32, mf2, ta, ma +; ZVFH-NEXT: vfadd.vf v9, v9, fa5 +; ZVFH-NEXT: vsetvli zero, zero, e16, mf4, ta, ma +; ZVFH-NEXT: vfncvtbf16.f.f.w v8, v9 +; ZVFH-NEXT: ret +; +; ZVFHMIN-LABEL: vfadd_vf_nxv1bf16: +; ZVFHMIN: # %bb.0: +; ZVFHMIN-NEXT: fcvt.s.bf16 fa5, fa0 +; ZVFHMIN-NEXT: 
vsetvli a0, zero, e16, mf4, ta, ma +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v9, v8 +; ZVFHMIN-NEXT: vsetvli zero, zero, e32, mf2, ta, ma +; ZVFHMIN-NEXT: vfadd.vf v9, v9, fa5 +; ZVFHMIN-NEXT: vsetvli zero, zero, e16, mf4, ta, ma +; ZVFHMIN-NEXT: vfncvtbf16.f.f.w v8, v9 +; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vf_nxv1bf16: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: fcvt.s.bf16 fa5, fa0 +; ZVFBFA-NEXT: vsetvli a0, zero, e16alt, mf4, ta, ma +; ZVFBFA-NEXT: vfwcvt.f.f.v v9, v8 +; ZVFBFA-NEXT: vsetvli zero, zero, e32, mf2, ta, ma +; ZVFBFA-NEXT: vfadd.vf v9, v9, fa5 +; ZVFBFA-NEXT: vsetvli zero, zero, e16alt, mf4, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v8, v9 +; ZVFBFA-NEXT: ret %head = insertelement <vscale x 1 x bfloat> poison, bfloat %b, i32 0 %splat = shufflevector <vscale x 1 x bfloat> %head, <vscale x 1 x bfloat> poison, <vscale x 1 x i32> zeroinitializer %vc = fadd <vscale x 1 x bfloat> %va, %splat @@ -45,31 +92,75 @@ define <vscale x 1 x bfloat> @vfadd_vf_nxv1bf16(<vscale x 1 x bfloat> %va, bfloa } define <vscale x 2 x bfloat> @vfadd_vv_nxv2bf16(<vscale x 2 x bfloat> %va, <vscale x 2 x bfloat> %vb) { -; CHECK-LABEL: vfadd_vv_nxv2bf16: -; CHECK: # %bb.0: -; CHECK-NEXT: vsetvli a0, zero, e16, mf2, ta, ma -; CHECK-NEXT: vfwcvtbf16.f.f.v v10, v9 -; CHECK-NEXT: vfwcvtbf16.f.f.v v9, v8 -; CHECK-NEXT: vsetvli zero, zero, e32, m1, ta, ma -; CHECK-NEXT: vfadd.vv v9, v9, v10 -; CHECK-NEXT: vsetvli zero, zero, e16, mf2, ta, ma -; CHECK-NEXT: vfncvtbf16.f.f.w v8, v9 -; CHECK-NEXT: ret +; ZVFH-LABEL: vfadd_vv_nxv2bf16: +; ZVFH: # %bb.0: +; ZVFH-NEXT: vsetvli a0, zero, e16, mf2, ta, ma +; ZVFH-NEXT: vfwcvtbf16.f.f.v v10, v9 +; ZVFH-NEXT: vfwcvtbf16.f.f.v v9, v8 +; ZVFH-NEXT: vsetvli zero, zero, e32, m1, ta, ma +; ZVFH-NEXT: vfadd.vv v9, v9, v10 +; ZVFH-NEXT: vsetvli zero, zero, e16, mf2, ta, ma +; ZVFH-NEXT: vfncvtbf16.f.f.w v8, v9 +; ZVFH-NEXT: ret +; +; ZVFHMIN-LABEL: vfadd_vv_nxv2bf16: +; ZVFHMIN: # %bb.0: +; ZVFHMIN-NEXT: vsetvli a0, zero, e16, mf2, ta, ma +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v10, v9 +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v9, v8 +; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m1, ta, ma +; ZVFHMIN-NEXT: vfadd.vv v9, v9, v10 +; ZVFHMIN-NEXT: vsetvli zero, zero, e16, mf2, ta, ma +; ZVFHMIN-NEXT: vfncvtbf16.f.f.w v8, v9 +; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vv_nxv2bf16: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: vsetvli a0, zero, e16alt, mf2, ta, ma +; ZVFBFA-NEXT: vfwcvt.f.f.v v10, v9 +; ZVFBFA-NEXT: vfwcvt.f.f.v v9, v8 +; ZVFBFA-NEXT: vsetvli zero, zero, e32, m1, ta, ma +; ZVFBFA-NEXT: vfadd.vv v9, v9, v10 +; ZVFBFA-NEXT: vsetvli zero, zero, e16alt, mf2, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v8, v9 +; ZVFBFA-NEXT: ret %vc = fadd <vscale x 2 x bfloat> %va, %vb ret <vscale x 2 x bfloat> %vc } define <vscale x 2 x bfloat> @vfadd_vf_nxv2bf16(<vscale x 2 x bfloat> %va, bfloat %b) { -; CHECK-LABEL: vfadd_vf_nxv2bf16: -; CHECK: # %bb.0: -; CHECK-NEXT: fcvt.s.bf16 fa5, fa0 -; CHECK-NEXT: vsetvli a0, zero, e16, mf2, ta, ma -; CHECK-NEXT: vfwcvtbf16.f.f.v v9, v8 -; CHECK-NEXT: vsetvli zero, zero, e32, m1, ta, ma -; CHECK-NEXT: vfadd.vf v9, v9, fa5 -; CHECK-NEXT: vsetvli zero, zero, e16, mf2, ta, ma -; CHECK-NEXT: vfncvtbf16.f.f.w v8, v9 -; CHECK-NEXT: ret +; ZVFH-LABEL: vfadd_vf_nxv2bf16: +; ZVFH: # %bb.0: +; ZVFH-NEXT: fcvt.s.bf16 fa5, fa0 +; ZVFH-NEXT: vsetvli a0, zero, e16, mf2, ta, ma +; ZVFH-NEXT: vfwcvtbf16.f.f.v v9, v8 +; ZVFH-NEXT: vsetvli zero, zero, e32, m1, ta, ma +; ZVFH-NEXT: vfadd.vf v9, v9, fa5 +; ZVFH-NEXT: vsetvli zero, zero, e16, mf2, ta, ma +; ZVFH-NEXT: vfncvtbf16.f.f.w v8, v9 +; ZVFH-NEXT: ret 
+; +; ZVFHMIN-LABEL: vfadd_vf_nxv2bf16: +; ZVFHMIN: # %bb.0: +; ZVFHMIN-NEXT: fcvt.s.bf16 fa5, fa0 +; ZVFHMIN-NEXT: vsetvli a0, zero, e16, mf2, ta, ma +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v9, v8 +; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m1, ta, ma +; ZVFHMIN-NEXT: vfadd.vf v9, v9, fa5 +; ZVFHMIN-NEXT: vsetvli zero, zero, e16, mf2, ta, ma +; ZVFHMIN-NEXT: vfncvtbf16.f.f.w v8, v9 +; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vf_nxv2bf16: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: fcvt.s.bf16 fa5, fa0 +; ZVFBFA-NEXT: vsetvli a0, zero, e16alt, mf2, ta, ma +; ZVFBFA-NEXT: vfwcvt.f.f.v v9, v8 +; ZVFBFA-NEXT: vsetvli zero, zero, e32, m1, ta, ma +; ZVFBFA-NEXT: vfadd.vf v9, v9, fa5 +; ZVFBFA-NEXT: vsetvli zero, zero, e16alt, mf2, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v8, v9 +; ZVFBFA-NEXT: ret %head = insertelement <vscale x 2 x bfloat> poison, bfloat %b, i32 0 %splat = shufflevector <vscale x 2 x bfloat> %head, <vscale x 2 x bfloat> poison, <vscale x 2 x i32> zeroinitializer %vc = fadd <vscale x 2 x bfloat> %va, %splat @@ -77,31 +168,75 @@ define <vscale x 2 x bfloat> @vfadd_vf_nxv2bf16(<vscale x 2 x bfloat> %va, bfloa } define <vscale x 4 x bfloat> @vfadd_vv_nxv4bf16(<vscale x 4 x bfloat> %va, <vscale x 4 x bfloat> %vb) { -; CHECK-LABEL: vfadd_vv_nxv4bf16: -; CHECK: # %bb.0: -; CHECK-NEXT: vsetvli a0, zero, e16, m1, ta, ma -; CHECK-NEXT: vfwcvtbf16.f.f.v v10, v9 -; CHECK-NEXT: vfwcvtbf16.f.f.v v12, v8 -; CHECK-NEXT: vsetvli zero, zero, e32, m2, ta, ma -; CHECK-NEXT: vfadd.vv v10, v12, v10 -; CHECK-NEXT: vsetvli zero, zero, e16, m1, ta, ma -; CHECK-NEXT: vfncvtbf16.f.f.w v8, v10 -; CHECK-NEXT: ret +; ZVFH-LABEL: vfadd_vv_nxv4bf16: +; ZVFH: # %bb.0: +; ZVFH-NEXT: vsetvli a0, zero, e16, m1, ta, ma +; ZVFH-NEXT: vfwcvtbf16.f.f.v v10, v9 +; ZVFH-NEXT: vfwcvtbf16.f.f.v v12, v8 +; ZVFH-NEXT: vsetvli zero, zero, e32, m2, ta, ma +; ZVFH-NEXT: vfadd.vv v10, v12, v10 +; ZVFH-NEXT: vsetvli zero, zero, e16, m1, ta, ma +; ZVFH-NEXT: vfncvtbf16.f.f.w v8, v10 +; ZVFH-NEXT: ret +; +; ZVFHMIN-LABEL: vfadd_vv_nxv4bf16: +; ZVFHMIN: # %bb.0: +; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m1, ta, ma +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v10, v9 +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v12, v8 +; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m2, ta, ma +; ZVFHMIN-NEXT: vfadd.vv v10, v12, v10 +; ZVFHMIN-NEXT: vsetvli zero, zero, e16, m1, ta, ma +; ZVFHMIN-NEXT: vfncvtbf16.f.f.w v8, v10 +; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vv_nxv4bf16: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: vsetvli a0, zero, e16alt, m1, ta, ma +; ZVFBFA-NEXT: vfwcvt.f.f.v v10, v9 +; ZVFBFA-NEXT: vfwcvt.f.f.v v12, v8 +; ZVFBFA-NEXT: vsetvli zero, zero, e32, m2, ta, ma +; ZVFBFA-NEXT: vfadd.vv v10, v12, v10 +; ZVFBFA-NEXT: vsetvli zero, zero, e16alt, m1, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v8, v10 +; ZVFBFA-NEXT: ret %vc = fadd <vscale x 4 x bfloat> %va, %vb ret <vscale x 4 x bfloat> %vc } define <vscale x 4 x bfloat> @vfadd_vf_nxv4bf16(<vscale x 4 x bfloat> %va, bfloat %b) { -; CHECK-LABEL: vfadd_vf_nxv4bf16: -; CHECK: # %bb.0: -; CHECK-NEXT: fcvt.s.bf16 fa5, fa0 -; CHECK-NEXT: vsetvli a0, zero, e16, m1, ta, ma -; CHECK-NEXT: vfwcvtbf16.f.f.v v10, v8 -; CHECK-NEXT: vsetvli zero, zero, e32, m2, ta, ma -; CHECK-NEXT: vfadd.vf v10, v10, fa5 -; CHECK-NEXT: vsetvli zero, zero, e16, m1, ta, ma -; CHECK-NEXT: vfncvtbf16.f.f.w v8, v10 -; CHECK-NEXT: ret +; ZVFH-LABEL: vfadd_vf_nxv4bf16: +; ZVFH: # %bb.0: +; ZVFH-NEXT: fcvt.s.bf16 fa5, fa0 +; ZVFH-NEXT: vsetvli a0, zero, e16, m1, ta, ma +; ZVFH-NEXT: vfwcvtbf16.f.f.v v10, v8 +; ZVFH-NEXT: vsetvli zero, zero, e32, m2, ta, ma +; ZVFH-NEXT: 
vfadd.vf v10, v10, fa5 +; ZVFH-NEXT: vsetvli zero, zero, e16, m1, ta, ma +; ZVFH-NEXT: vfncvtbf16.f.f.w v8, v10 +; ZVFH-NEXT: ret +; +; ZVFHMIN-LABEL: vfadd_vf_nxv4bf16: +; ZVFHMIN: # %bb.0: +; ZVFHMIN-NEXT: fcvt.s.bf16 fa5, fa0 +; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m1, ta, ma +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v10, v8 +; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m2, ta, ma +; ZVFHMIN-NEXT: vfadd.vf v10, v10, fa5 +; ZVFHMIN-NEXT: vsetvli zero, zero, e16, m1, ta, ma +; ZVFHMIN-NEXT: vfncvtbf16.f.f.w v8, v10 +; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vf_nxv4bf16: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: fcvt.s.bf16 fa5, fa0 +; ZVFBFA-NEXT: vsetvli a0, zero, e16alt, m1, ta, ma +; ZVFBFA-NEXT: vfwcvt.f.f.v v10, v8 +; ZVFBFA-NEXT: vsetvli zero, zero, e32, m2, ta, ma +; ZVFBFA-NEXT: vfadd.vf v10, v10, fa5 +; ZVFBFA-NEXT: vsetvli zero, zero, e16alt, m1, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v8, v10 +; ZVFBFA-NEXT: ret %head = insertelement <vscale x 4 x bfloat> poison, bfloat %b, i32 0 %splat = shufflevector <vscale x 4 x bfloat> %head, <vscale x 4 x bfloat> poison, <vscale x 4 x i32> zeroinitializer %vc = fadd <vscale x 4 x bfloat> %va, %splat @@ -109,31 +244,75 @@ define <vscale x 4 x bfloat> @vfadd_vf_nxv4bf16(<vscale x 4 x bfloat> %va, bfloa } define <vscale x 8 x bfloat> @vfadd_vv_nxv8bf16(<vscale x 8 x bfloat> %va, <vscale x 8 x bfloat> %vb) { -; CHECK-LABEL: vfadd_vv_nxv8bf16: -; CHECK: # %bb.0: -; CHECK-NEXT: vsetvli a0, zero, e16, m2, ta, ma -; CHECK-NEXT: vfwcvtbf16.f.f.v v12, v10 -; CHECK-NEXT: vfwcvtbf16.f.f.v v16, v8 -; CHECK-NEXT: vsetvli zero, zero, e32, m4, ta, ma -; CHECK-NEXT: vfadd.vv v12, v16, v12 -; CHECK-NEXT: vsetvli zero, zero, e16, m2, ta, ma -; CHECK-NEXT: vfncvtbf16.f.f.w v8, v12 -; CHECK-NEXT: ret +; ZVFH-LABEL: vfadd_vv_nxv8bf16: +; ZVFH: # %bb.0: +; ZVFH-NEXT: vsetvli a0, zero, e16, m2, ta, ma +; ZVFH-NEXT: vfwcvtbf16.f.f.v v12, v10 +; ZVFH-NEXT: vfwcvtbf16.f.f.v v16, v8 +; ZVFH-NEXT: vsetvli zero, zero, e32, m4, ta, ma +; ZVFH-NEXT: vfadd.vv v12, v16, v12 +; ZVFH-NEXT: vsetvli zero, zero, e16, m2, ta, ma +; ZVFH-NEXT: vfncvtbf16.f.f.w v8, v12 +; ZVFH-NEXT: ret +; +; ZVFHMIN-LABEL: vfadd_vv_nxv8bf16: +; ZVFHMIN: # %bb.0: +; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m2, ta, ma +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v12, v10 +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v16, v8 +; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m4, ta, ma +; ZVFHMIN-NEXT: vfadd.vv v12, v16, v12 +; ZVFHMIN-NEXT: vsetvli zero, zero, e16, m2, ta, ma +; ZVFHMIN-NEXT: vfncvtbf16.f.f.w v8, v12 +; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vv_nxv8bf16: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: vsetvli a0, zero, e16alt, m2, ta, ma +; ZVFBFA-NEXT: vfwcvt.f.f.v v12, v10 +; ZVFBFA-NEXT: vfwcvt.f.f.v v16, v8 +; ZVFBFA-NEXT: vsetvli zero, zero, e32, m4, ta, ma +; ZVFBFA-NEXT: vfadd.vv v12, v16, v12 +; ZVFBFA-NEXT: vsetvli zero, zero, e16alt, m2, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v8, v12 +; ZVFBFA-NEXT: ret %vc = fadd <vscale x 8 x bfloat> %va, %vb ret <vscale x 8 x bfloat> %vc } define <vscale x 8 x bfloat> @vfadd_vf_nxv8bf16(<vscale x 8 x bfloat> %va, bfloat %b) { -; CHECK-LABEL: vfadd_vf_nxv8bf16: -; CHECK: # %bb.0: -; CHECK-NEXT: fcvt.s.bf16 fa5, fa0 -; CHECK-NEXT: vsetvli a0, zero, e16, m2, ta, ma -; CHECK-NEXT: vfwcvtbf16.f.f.v v12, v8 -; CHECK-NEXT: vsetvli zero, zero, e32, m4, ta, ma -; CHECK-NEXT: vfadd.vf v12, v12, fa5 -; CHECK-NEXT: vsetvli zero, zero, e16, m2, ta, ma -; CHECK-NEXT: vfncvtbf16.f.f.w v8, v12 -; CHECK-NEXT: ret +; ZVFH-LABEL: vfadd_vf_nxv8bf16: +; ZVFH: # %bb.0: +; ZVFH-NEXT: fcvt.s.bf16 fa5, fa0 +; ZVFH-NEXT: 
vsetvli a0, zero, e16, m2, ta, ma +; ZVFH-NEXT: vfwcvtbf16.f.f.v v12, v8 +; ZVFH-NEXT: vsetvli zero, zero, e32, m4, ta, ma +; ZVFH-NEXT: vfadd.vf v12, v12, fa5 +; ZVFH-NEXT: vsetvli zero, zero, e16, m2, ta, ma +; ZVFH-NEXT: vfncvtbf16.f.f.w v8, v12 +; ZVFH-NEXT: ret +; +; ZVFHMIN-LABEL: vfadd_vf_nxv8bf16: +; ZVFHMIN: # %bb.0: +; ZVFHMIN-NEXT: fcvt.s.bf16 fa5, fa0 +; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m2, ta, ma +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v12, v8 +; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m4, ta, ma +; ZVFHMIN-NEXT: vfadd.vf v12, v12, fa5 +; ZVFHMIN-NEXT: vsetvli zero, zero, e16, m2, ta, ma +; ZVFHMIN-NEXT: vfncvtbf16.f.f.w v8, v12 +; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vf_nxv8bf16: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: fcvt.s.bf16 fa5, fa0 +; ZVFBFA-NEXT: vsetvli a0, zero, e16alt, m2, ta, ma +; ZVFBFA-NEXT: vfwcvt.f.f.v v12, v8 +; ZVFBFA-NEXT: vsetvli zero, zero, e32, m4, ta, ma +; ZVFBFA-NEXT: vfadd.vf v12, v12, fa5 +; ZVFBFA-NEXT: vsetvli zero, zero, e16alt, m2, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v8, v12 +; ZVFBFA-NEXT: ret %head = insertelement <vscale x 8 x bfloat> poison, bfloat %b, i32 0 %splat = shufflevector <vscale x 8 x bfloat> %head, <vscale x 8 x bfloat> poison, <vscale x 8 x i32> zeroinitializer %vc = fadd <vscale x 8 x bfloat> %va, %splat @@ -141,16 +320,38 @@ define <vscale x 8 x bfloat> @vfadd_vf_nxv8bf16(<vscale x 8 x bfloat> %va, bfloa } define <vscale x 8 x bfloat> @vfadd_fv_nxv8bf16(<vscale x 8 x bfloat> %va, bfloat %b) { -; CHECK-LABEL: vfadd_fv_nxv8bf16: -; CHECK: # %bb.0: -; CHECK-NEXT: fcvt.s.bf16 fa5, fa0 -; CHECK-NEXT: vsetvli a0, zero, e16, m2, ta, ma -; CHECK-NEXT: vfwcvtbf16.f.f.v v12, v8 -; CHECK-NEXT: vsetvli zero, zero, e32, m4, ta, ma -; CHECK-NEXT: vfadd.vf v12, v12, fa5 -; CHECK-NEXT: vsetvli zero, zero, e16, m2, ta, ma -; CHECK-NEXT: vfncvtbf16.f.f.w v8, v12 -; CHECK-NEXT: ret +; ZVFH-LABEL: vfadd_fv_nxv8bf16: +; ZVFH: # %bb.0: +; ZVFH-NEXT: fcvt.s.bf16 fa5, fa0 +; ZVFH-NEXT: vsetvli a0, zero, e16, m2, ta, ma +; ZVFH-NEXT: vfwcvtbf16.f.f.v v12, v8 +; ZVFH-NEXT: vsetvli zero, zero, e32, m4, ta, ma +; ZVFH-NEXT: vfadd.vf v12, v12, fa5 +; ZVFH-NEXT: vsetvli zero, zero, e16, m2, ta, ma +; ZVFH-NEXT: vfncvtbf16.f.f.w v8, v12 +; ZVFH-NEXT: ret +; +; ZVFHMIN-LABEL: vfadd_fv_nxv8bf16: +; ZVFHMIN: # %bb.0: +; ZVFHMIN-NEXT: fcvt.s.bf16 fa5, fa0 +; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m2, ta, ma +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v12, v8 +; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m4, ta, ma +; ZVFHMIN-NEXT: vfadd.vf v12, v12, fa5 +; ZVFHMIN-NEXT: vsetvli zero, zero, e16, m2, ta, ma +; ZVFHMIN-NEXT: vfncvtbf16.f.f.w v8, v12 +; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_fv_nxv8bf16: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: fcvt.s.bf16 fa5, fa0 +; ZVFBFA-NEXT: vsetvli a0, zero, e16alt, m2, ta, ma +; ZVFBFA-NEXT: vfwcvt.f.f.v v12, v8 +; ZVFBFA-NEXT: vsetvli zero, zero, e32, m4, ta, ma +; ZVFBFA-NEXT: vfadd.vf v12, v12, fa5 +; ZVFBFA-NEXT: vsetvli zero, zero, e16alt, m2, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v8, v12 +; ZVFBFA-NEXT: ret %head = insertelement <vscale x 8 x bfloat> poison, bfloat %b, i32 0 %splat = shufflevector <vscale x 8 x bfloat> %head, <vscale x 8 x bfloat> poison, <vscale x 8 x i32> zeroinitializer %vc = fadd <vscale x 8 x bfloat> %splat, %va @@ -158,31 +359,75 @@ define <vscale x 8 x bfloat> @vfadd_fv_nxv8bf16(<vscale x 8 x bfloat> %va, bfloa } define <vscale x 16 x bfloat> @vfadd_vv_nxv16bf16(<vscale x 16 x bfloat> %va, <vscale x 16 x bfloat> %vb) { -; CHECK-LABEL: vfadd_vv_nxv16bf16: -; CHECK: # %bb.0: -; CHECK-NEXT: vsetvli a0, zero, e16, 
m4, ta, ma -; CHECK-NEXT: vfwcvtbf16.f.f.v v16, v12 -; CHECK-NEXT: vfwcvtbf16.f.f.v v24, v8 -; CHECK-NEXT: vsetvli zero, zero, e32, m8, ta, ma -; CHECK-NEXT: vfadd.vv v16, v24, v16 -; CHECK-NEXT: vsetvli zero, zero, e16, m4, ta, ma -; CHECK-NEXT: vfncvtbf16.f.f.w v8, v16 -; CHECK-NEXT: ret +; ZVFH-LABEL: vfadd_vv_nxv16bf16: +; ZVFH: # %bb.0: +; ZVFH-NEXT: vsetvli a0, zero, e16, m4, ta, ma +; ZVFH-NEXT: vfwcvtbf16.f.f.v v16, v12 +; ZVFH-NEXT: vfwcvtbf16.f.f.v v24, v8 +; ZVFH-NEXT: vsetvli zero, zero, e32, m8, ta, ma +; ZVFH-NEXT: vfadd.vv v16, v24, v16 +; ZVFH-NEXT: vsetvli zero, zero, e16, m4, ta, ma +; ZVFH-NEXT: vfncvtbf16.f.f.w v8, v16 +; ZVFH-NEXT: ret +; +; ZVFHMIN-LABEL: vfadd_vv_nxv16bf16: +; ZVFHMIN: # %bb.0: +; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v16, v12 +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v24, v8 +; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m8, ta, ma +; ZVFHMIN-NEXT: vfadd.vv v16, v24, v16 +; ZVFHMIN-NEXT: vsetvli zero, zero, e16, m4, ta, ma +; ZVFHMIN-NEXT: vfncvtbf16.f.f.w v8, v16 +; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vv_nxv16bf16: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: vsetvli a0, zero, e16alt, m4, ta, ma +; ZVFBFA-NEXT: vfwcvt.f.f.v v16, v12 +; ZVFBFA-NEXT: vfwcvt.f.f.v v24, v8 +; ZVFBFA-NEXT: vsetvli zero, zero, e32, m8, ta, ma +; ZVFBFA-NEXT: vfadd.vv v16, v24, v16 +; ZVFBFA-NEXT: vsetvli zero, zero, e16alt, m4, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v8, v16 +; ZVFBFA-NEXT: ret %vc = fadd <vscale x 16 x bfloat> %va, %vb ret <vscale x 16 x bfloat> %vc } define <vscale x 16 x bfloat> @vfadd_vf_nxv16bf16(<vscale x 16 x bfloat> %va, bfloat %b) { -; CHECK-LABEL: vfadd_vf_nxv16bf16: -; CHECK: # %bb.0: -; CHECK-NEXT: fcvt.s.bf16 fa5, fa0 -; CHECK-NEXT: vsetvli a0, zero, e16, m4, ta, ma -; CHECK-NEXT: vfwcvtbf16.f.f.v v16, v8 -; CHECK-NEXT: vsetvli zero, zero, e32, m8, ta, ma -; CHECK-NEXT: vfadd.vf v16, v16, fa5 -; CHECK-NEXT: vsetvli zero, zero, e16, m4, ta, ma -; CHECK-NEXT: vfncvtbf16.f.f.w v8, v16 -; CHECK-NEXT: ret +; ZVFH-LABEL: vfadd_vf_nxv16bf16: +; ZVFH: # %bb.0: +; ZVFH-NEXT: fcvt.s.bf16 fa5, fa0 +; ZVFH-NEXT: vsetvli a0, zero, e16, m4, ta, ma +; ZVFH-NEXT: vfwcvtbf16.f.f.v v16, v8 +; ZVFH-NEXT: vsetvli zero, zero, e32, m8, ta, ma +; ZVFH-NEXT: vfadd.vf v16, v16, fa5 +; ZVFH-NEXT: vsetvli zero, zero, e16, m4, ta, ma +; ZVFH-NEXT: vfncvtbf16.f.f.w v8, v16 +; ZVFH-NEXT: ret +; +; ZVFHMIN-LABEL: vfadd_vf_nxv16bf16: +; ZVFHMIN: # %bb.0: +; ZVFHMIN-NEXT: fcvt.s.bf16 fa5, fa0 +; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v16, v8 +; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m8, ta, ma +; ZVFHMIN-NEXT: vfadd.vf v16, v16, fa5 +; ZVFHMIN-NEXT: vsetvli zero, zero, e16, m4, ta, ma +; ZVFHMIN-NEXT: vfncvtbf16.f.f.w v8, v16 +; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vf_nxv16bf16: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: fcvt.s.bf16 fa5, fa0 +; ZVFBFA-NEXT: vsetvli a0, zero, e16alt, m4, ta, ma +; ZVFBFA-NEXT: vfwcvt.f.f.v v16, v8 +; ZVFBFA-NEXT: vsetvli zero, zero, e32, m8, ta, ma +; ZVFBFA-NEXT: vfadd.vf v16, v16, fa5 +; ZVFBFA-NEXT: vsetvli zero, zero, e16alt, m4, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v8, v16 +; ZVFBFA-NEXT: ret %head = insertelement <vscale x 16 x bfloat> poison, bfloat %b, i32 0 %splat = shufflevector <vscale x 16 x bfloat> %head, <vscale x 16 x bfloat> poison, <vscale x 16 x i32> zeroinitializer %vc = fadd <vscale x 16 x bfloat> %va, %splat @@ -190,78 +435,216 @@ define <vscale x 16 x bfloat> @vfadd_vf_nxv16bf16(<vscale x 16 x bfloat> %va, bf } define <vscale x 32 x bfloat> 
@vfadd_vv_nxv32bf16(<vscale x 32 x bfloat> %va, <vscale x 32 x bfloat> %vb) { -; CHECK-LABEL: vfadd_vv_nxv32bf16: -; CHECK: # %bb.0: -; CHECK-NEXT: addi sp, sp, -16 -; CHECK-NEXT: .cfi_def_cfa_offset 16 -; CHECK-NEXT: csrr a0, vlenb -; CHECK-NEXT: slli a0, a0, 3 -; CHECK-NEXT: sub sp, sp, a0 -; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb -; CHECK-NEXT: vsetvli a0, zero, e16, m4, ta, ma -; CHECK-NEXT: vfwcvtbf16.f.f.v v24, v16 -; CHECK-NEXT: addi a0, sp, 16 -; CHECK-NEXT: vs8r.v v24, (a0) # vscale x 64-byte Folded Spill -; CHECK-NEXT: vfwcvtbf16.f.f.v v0, v8 -; CHECK-NEXT: vfwcvtbf16.f.f.v v24, v20 -; CHECK-NEXT: vfwcvtbf16.f.f.v v16, v12 -; CHECK-NEXT: vl8r.v v8, (a0) # vscale x 64-byte Folded Reload -; CHECK-NEXT: vsetvli zero, zero, e32, m8, ta, ma -; CHECK-NEXT: vfadd.vv v0, v0, v8 -; CHECK-NEXT: vsetvli zero, zero, e16, m4, ta, ma -; CHECK-NEXT: vfncvtbf16.f.f.w v8, v0 -; CHECK-NEXT: vsetvli zero, zero, e32, m8, ta, ma -; CHECK-NEXT: vfadd.vv v16, v16, v24 -; CHECK-NEXT: vsetvli zero, zero, e16, m4, ta, ma -; CHECK-NEXT: vfncvtbf16.f.f.w v12, v16 -; CHECK-NEXT: csrr a0, vlenb -; CHECK-NEXT: slli a0, a0, 3 -; CHECK-NEXT: add sp, sp, a0 -; CHECK-NEXT: .cfi_def_cfa sp, 16 -; CHECK-NEXT: addi sp, sp, 16 -; CHECK-NEXT: .cfi_def_cfa_offset 0 -; CHECK-NEXT: ret +; ZVFH-LABEL: vfadd_vv_nxv32bf16: +; ZVFH: # %bb.0: +; ZVFH-NEXT: addi sp, sp, -16 +; ZVFH-NEXT: .cfi_def_cfa_offset 16 +; ZVFH-NEXT: csrr a0, vlenb +; ZVFH-NEXT: slli a0, a0, 3 +; ZVFH-NEXT: sub sp, sp, a0 +; ZVFH-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb +; ZVFH-NEXT: vsetvli a0, zero, e16, m4, ta, ma +; ZVFH-NEXT: vfwcvtbf16.f.f.v v24, v16 +; ZVFH-NEXT: addi a0, sp, 16 +; ZVFH-NEXT: vs8r.v v24, (a0) # vscale x 64-byte Folded Spill +; ZVFH-NEXT: vfwcvtbf16.f.f.v v0, v8 +; ZVFH-NEXT: vfwcvtbf16.f.f.v v24, v20 +; ZVFH-NEXT: vfwcvtbf16.f.f.v v16, v12 +; ZVFH-NEXT: vl8r.v v8, (a0) # vscale x 64-byte Folded Reload +; ZVFH-NEXT: vsetvli zero, zero, e32, m8, ta, ma +; ZVFH-NEXT: vfadd.vv v0, v0, v8 +; ZVFH-NEXT: vsetvli zero, zero, e16, m4, ta, ma +; ZVFH-NEXT: vfncvtbf16.f.f.w v8, v0 +; ZVFH-NEXT: vsetvli zero, zero, e32, m8, ta, ma +; ZVFH-NEXT: vfadd.vv v16, v16, v24 +; ZVFH-NEXT: vsetvli zero, zero, e16, m4, ta, ma +; ZVFH-NEXT: vfncvtbf16.f.f.w v12, v16 +; ZVFH-NEXT: csrr a0, vlenb +; ZVFH-NEXT: slli a0, a0, 3 +; ZVFH-NEXT: add sp, sp, a0 +; ZVFH-NEXT: .cfi_def_cfa sp, 16 +; ZVFH-NEXT: addi sp, sp, 16 +; ZVFH-NEXT: .cfi_def_cfa_offset 0 +; ZVFH-NEXT: ret +; +; ZVFHMIN-LABEL: vfadd_vv_nxv32bf16: +; ZVFHMIN: # %bb.0: +; ZVFHMIN-NEXT: addi sp, sp, -16 +; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16 +; ZVFHMIN-NEXT: csrr a0, vlenb +; ZVFHMIN-NEXT: slli a0, a0, 3 +; ZVFHMIN-NEXT: sub sp, sp, a0 +; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb +; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v24, v16 +; ZVFHMIN-NEXT: addi a0, sp, 16 +; ZVFHMIN-NEXT: vs8r.v v24, (a0) # vscale x 64-byte Folded Spill +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v0, v8 +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v24, v20 +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v16, v12 +; ZVFHMIN-NEXT: vl8r.v v8, (a0) # vscale x 64-byte Folded Reload +; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m8, ta, ma +; ZVFHMIN-NEXT: vfadd.vv v0, v0, v8 +; ZVFHMIN-NEXT: vsetvli zero, zero, e16, m4, ta, ma +; ZVFHMIN-NEXT: 
vfncvtbf16.f.f.w v8, v0 +; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m8, ta, ma +; ZVFHMIN-NEXT: vfadd.vv v16, v16, v24 +; ZVFHMIN-NEXT: vsetvli zero, zero, e16, m4, ta, ma +; ZVFHMIN-NEXT: vfncvtbf16.f.f.w v12, v16 +; ZVFHMIN-NEXT: csrr a0, vlenb +; ZVFHMIN-NEXT: slli a0, a0, 3 +; ZVFHMIN-NEXT: add sp, sp, a0 +; ZVFHMIN-NEXT: .cfi_def_cfa sp, 16 +; ZVFHMIN-NEXT: addi sp, sp, 16 +; ZVFHMIN-NEXT: .cfi_def_cfa_offset 0 +; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vv_nxv32bf16: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: addi sp, sp, -16 +; ZVFBFA-NEXT: .cfi_def_cfa_offset 16 +; ZVFBFA-NEXT: csrr a0, vlenb +; ZVFBFA-NEXT: slli a0, a0, 3 +; ZVFBFA-NEXT: sub sp, sp, a0 +; ZVFBFA-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb +; ZVFBFA-NEXT: vsetvli a0, zero, e16alt, m4, ta, ma +; ZVFBFA-NEXT: vfwcvt.f.f.v v24, v16 +; ZVFBFA-NEXT: addi a0, sp, 16 +; ZVFBFA-NEXT: vs8r.v v24, (a0) # vscale x 64-byte Folded Spill +; ZVFBFA-NEXT: vfwcvt.f.f.v v0, v8 +; ZVFBFA-NEXT: vfwcvt.f.f.v v24, v20 +; ZVFBFA-NEXT: vfwcvt.f.f.v v16, v12 +; ZVFBFA-NEXT: vl8r.v v8, (a0) # vscale x 64-byte Folded Reload +; ZVFBFA-NEXT: vsetvli zero, zero, e32, m8, ta, ma +; ZVFBFA-NEXT: vfadd.vv v0, v0, v8 +; ZVFBFA-NEXT: vsetvli zero, zero, e16alt, m4, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v8, v0 +; ZVFBFA-NEXT: vsetvli zero, zero, e32, m8, ta, ma +; ZVFBFA-NEXT: vfadd.vv v16, v16, v24 +; ZVFBFA-NEXT: vsetvli zero, zero, e16alt, m4, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v12, v16 +; ZVFBFA-NEXT: csrr a0, vlenb +; ZVFBFA-NEXT: slli a0, a0, 3 +; ZVFBFA-NEXT: add sp, sp, a0 +; ZVFBFA-NEXT: .cfi_def_cfa sp, 16 +; ZVFBFA-NEXT: addi sp, sp, 16 +; ZVFBFA-NEXT: .cfi_def_cfa_offset 0 +; ZVFBFA-NEXT: ret %vc = fadd <vscale x 32 x bfloat> %va, %vb ret <vscale x 32 x bfloat> %vc } define <vscale x 32 x bfloat> @vfadd_vf_nxv32bf16(<vscale x 32 x bfloat> %va, bfloat %b) { -; CHECK-LABEL: vfadd_vf_nxv32bf16: -; CHECK: # %bb.0: -; CHECK-NEXT: addi sp, sp, -16 -; CHECK-NEXT: .cfi_def_cfa_offset 16 -; CHECK-NEXT: csrr a0, vlenb -; CHECK-NEXT: slli a0, a0, 3 -; CHECK-NEXT: sub sp, sp, a0 -; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb -; CHECK-NEXT: fmv.x.h a0, fa0 -; CHECK-NEXT: vsetvli a1, zero, e16, m4, ta, ma -; CHECK-NEXT: vfwcvtbf16.f.f.v v16, v8 -; CHECK-NEXT: addi a1, sp, 16 -; CHECK-NEXT: vs8r.v v16, (a1) # vscale x 64-byte Folded Spill -; CHECK-NEXT: vfwcvtbf16.f.f.v v24, v12 -; CHECK-NEXT: vsetvli a1, zero, e16, m8, ta, ma -; CHECK-NEXT: vmv.v.x v8, a0 -; CHECK-NEXT: vsetvli a0, zero, e16, m4, ta, ma -; CHECK-NEXT: vfwcvtbf16.f.f.v v0, v8 -; CHECK-NEXT: vfwcvtbf16.f.f.v v16, v12 -; CHECK-NEXT: addi a0, sp, 16 -; CHECK-NEXT: vl8r.v v8, (a0) # vscale x 64-byte Folded Reload -; CHECK-NEXT: vsetvli zero, zero, e32, m8, ta, ma -; CHECK-NEXT: vfadd.vv v0, v8, v0 -; CHECK-NEXT: vsetvli zero, zero, e16, m4, ta, ma -; CHECK-NEXT: vfncvtbf16.f.f.w v8, v0 -; CHECK-NEXT: vsetvli zero, zero, e32, m8, ta, ma -; CHECK-NEXT: vfadd.vv v16, v24, v16 -; CHECK-NEXT: vsetvli zero, zero, e16, m4, ta, ma -; CHECK-NEXT: vfncvtbf16.f.f.w v12, v16 -; CHECK-NEXT: csrr a0, vlenb -; CHECK-NEXT: slli a0, a0, 3 -; CHECK-NEXT: add sp, sp, a0 -; CHECK-NEXT: .cfi_def_cfa sp, 16 -; CHECK-NEXT: addi sp, sp, 16 -; CHECK-NEXT: .cfi_def_cfa_offset 0 -; CHECK-NEXT: ret +; ZVFH-LABEL: vfadd_vf_nxv32bf16: +; ZVFH: # %bb.0: +; ZVFH-NEXT: addi sp, sp, -16 +; ZVFH-NEXT: .cfi_def_cfa_offset 16 +; ZVFH-NEXT: csrr a0, vlenb +; 
ZVFH-NEXT: slli a0, a0, 3 +; ZVFH-NEXT: sub sp, sp, a0 +; ZVFH-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb +; ZVFH-NEXT: fmv.x.h a0, fa0 +; ZVFH-NEXT: vsetvli a1, zero, e16, m4, ta, ma +; ZVFH-NEXT: vfwcvtbf16.f.f.v v16, v8 +; ZVFH-NEXT: addi a1, sp, 16 +; ZVFH-NEXT: vs8r.v v16, (a1) # vscale x 64-byte Folded Spill +; ZVFH-NEXT: vfwcvtbf16.f.f.v v24, v12 +; ZVFH-NEXT: vsetvli a1, zero, e16, m8, ta, ma +; ZVFH-NEXT: vmv.v.x v8, a0 +; ZVFH-NEXT: vsetvli a0, zero, e16, m4, ta, ma +; ZVFH-NEXT: vfwcvtbf16.f.f.v v0, v8 +; ZVFH-NEXT: vfwcvtbf16.f.f.v v16, v12 +; ZVFH-NEXT: addi a0, sp, 16 +; ZVFH-NEXT: vl8r.v v8, (a0) # vscale x 64-byte Folded Reload +; ZVFH-NEXT: vsetvli zero, zero, e32, m8, ta, ma +; ZVFH-NEXT: vfadd.vv v0, v8, v0 +; ZVFH-NEXT: vsetvli zero, zero, e16, m4, ta, ma +; ZVFH-NEXT: vfncvtbf16.f.f.w v8, v0 +; ZVFH-NEXT: vsetvli zero, zero, e32, m8, ta, ma +; ZVFH-NEXT: vfadd.vv v16, v24, v16 +; ZVFH-NEXT: vsetvli zero, zero, e16, m4, ta, ma +; ZVFH-NEXT: vfncvtbf16.f.f.w v12, v16 +; ZVFH-NEXT: csrr a0, vlenb +; ZVFH-NEXT: slli a0, a0, 3 +; ZVFH-NEXT: add sp, sp, a0 +; ZVFH-NEXT: .cfi_def_cfa sp, 16 +; ZVFH-NEXT: addi sp, sp, 16 +; ZVFH-NEXT: .cfi_def_cfa_offset 0 +; ZVFH-NEXT: ret +; +; ZVFHMIN-LABEL: vfadd_vf_nxv32bf16: +; ZVFHMIN: # %bb.0: +; ZVFHMIN-NEXT: addi sp, sp, -16 +; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16 +; ZVFHMIN-NEXT: csrr a0, vlenb +; ZVFHMIN-NEXT: slli a0, a0, 3 +; ZVFHMIN-NEXT: sub sp, sp, a0 +; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb +; ZVFHMIN-NEXT: fmv.x.h a0, fa0 +; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m4, ta, ma +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v16, v8 +; ZVFHMIN-NEXT: addi a1, sp, 16 +; ZVFHMIN-NEXT: vs8r.v v16, (a1) # vscale x 64-byte Folded Spill +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v24, v12 +; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m8, ta, ma +; ZVFHMIN-NEXT: vmv.v.x v8, a0 +; ZVFHMIN-NEXT: vsetvli a0, zero, e16, m4, ta, ma +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v0, v8 +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v16, v12 +; ZVFHMIN-NEXT: addi a0, sp, 16 +; ZVFHMIN-NEXT: vl8r.v v8, (a0) # vscale x 64-byte Folded Reload +; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m8, ta, ma +; ZVFHMIN-NEXT: vfadd.vv v0, v8, v0 +; ZVFHMIN-NEXT: vsetvli zero, zero, e16, m4, ta, ma +; ZVFHMIN-NEXT: vfncvtbf16.f.f.w v8, v0 +; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m8, ta, ma +; ZVFHMIN-NEXT: vfadd.vv v16, v24, v16 +; ZVFHMIN-NEXT: vsetvli zero, zero, e16, m4, ta, ma +; ZVFHMIN-NEXT: vfncvtbf16.f.f.w v12, v16 +; ZVFHMIN-NEXT: csrr a0, vlenb +; ZVFHMIN-NEXT: slli a0, a0, 3 +; ZVFHMIN-NEXT: add sp, sp, a0 +; ZVFHMIN-NEXT: .cfi_def_cfa sp, 16 +; ZVFHMIN-NEXT: addi sp, sp, 16 +; ZVFHMIN-NEXT: .cfi_def_cfa_offset 0 +; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vf_nxv32bf16: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: addi sp, sp, -16 +; ZVFBFA-NEXT: .cfi_def_cfa_offset 16 +; ZVFBFA-NEXT: csrr a0, vlenb +; ZVFBFA-NEXT: slli a0, a0, 3 +; ZVFBFA-NEXT: sub sp, sp, a0 +; ZVFBFA-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb +; ZVFBFA-NEXT: fmv.x.h a0, fa0 +; ZVFBFA-NEXT: vsetvli a1, zero, e16alt, m4, ta, ma +; ZVFBFA-NEXT: vfwcvt.f.f.v v16, v8 +; ZVFBFA-NEXT: addi a1, sp, 16 +; ZVFBFA-NEXT: vs8r.v v16, (a1) # vscale x 64-byte Folded Spill +; ZVFBFA-NEXT: vfwcvt.f.f.v v24, v12 +; ZVFBFA-NEXT: vsetvli a1, zero, e16alt, m8, ta, ma +; ZVFBFA-NEXT: vmv.v.x v8, a0 
+; ZVFBFA-NEXT: vsetvli a0, zero, e16alt, m4, ta, ma +; ZVFBFA-NEXT: vfwcvt.f.f.v v0, v8 +; ZVFBFA-NEXT: vfwcvt.f.f.v v16, v12 +; ZVFBFA-NEXT: addi a0, sp, 16 +; ZVFBFA-NEXT: vl8r.v v8, (a0) # vscale x 64-byte Folded Reload +; ZVFBFA-NEXT: vsetvli zero, zero, e32, m8, ta, ma +; ZVFBFA-NEXT: vfadd.vv v0, v8, v0 +; ZVFBFA-NEXT: vsetvli zero, zero, e16alt, m4, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v8, v0 +; ZVFBFA-NEXT: vsetvli zero, zero, e32, m8, ta, ma +; ZVFBFA-NEXT: vfadd.vv v16, v24, v16 +; ZVFBFA-NEXT: vsetvli zero, zero, e16alt, m4, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v12, v16 +; ZVFBFA-NEXT: csrr a0, vlenb +; ZVFBFA-NEXT: slli a0, a0, 3 +; ZVFBFA-NEXT: add sp, sp, a0 +; ZVFBFA-NEXT: .cfi_def_cfa sp, 16 +; ZVFBFA-NEXT: addi sp, sp, 16 +; ZVFBFA-NEXT: .cfi_def_cfa_offset 0 +; ZVFBFA-NEXT: ret %head = insertelement <vscale x 32 x bfloat> poison, bfloat %b, i32 0 %splat = shufflevector <vscale x 32 x bfloat> %head, <vscale x 32 x bfloat> poison, <vscale x 32 x i32> zeroinitializer %vc = fadd <vscale x 32 x bfloat> %va, %splat @@ -285,6 +668,12 @@ define <vscale x 1 x half> @vfadd_vv_nxv1f16(<vscale x 1 x half> %va, <vscale x ; ZVFHMIN-NEXT: vsetvli zero, zero, e16, mf4, ta, ma ; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v9 ; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vv_nxv1f16: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: vsetvli a0, zero, e16, mf4, ta, ma +; ZVFBFA-NEXT: vfadd.vv v8, v8, v9 +; ZVFBFA-NEXT: ret %vc = fadd <vscale x 1 x half> %va, %vb ret <vscale x 1 x half> %vc } @@ -306,6 +695,12 @@ define <vscale x 1 x half> @vfadd_vf_nxv1f16(<vscale x 1 x half> %va, half %b) { ; ZVFHMIN-NEXT: vsetvli zero, zero, e16, mf4, ta, ma ; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v9 ; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vf_nxv1f16: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: vsetvli a0, zero, e16, mf4, ta, ma +; ZVFBFA-NEXT: vfadd.vf v8, v8, fa0 +; ZVFBFA-NEXT: ret %head = insertelement <vscale x 1 x half> poison, half %b, i32 0 %splat = shufflevector <vscale x 1 x half> %head, <vscale x 1 x half> poison, <vscale x 1 x i32> zeroinitializer %vc = fadd <vscale x 1 x half> %va, %splat @@ -329,6 +724,12 @@ define <vscale x 2 x half> @vfadd_vv_nxv2f16(<vscale x 2 x half> %va, <vscale x ; ZVFHMIN-NEXT: vsetvli zero, zero, e16, mf2, ta, ma ; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v9 ; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vv_nxv2f16: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: vsetvli a0, zero, e16, mf2, ta, ma +; ZVFBFA-NEXT: vfadd.vv v8, v8, v9 +; ZVFBFA-NEXT: ret %vc = fadd <vscale x 2 x half> %va, %vb ret <vscale x 2 x half> %vc } @@ -350,6 +751,12 @@ define <vscale x 2 x half> @vfadd_vf_nxv2f16(<vscale x 2 x half> %va, half %b) { ; ZVFHMIN-NEXT: vsetvli zero, zero, e16, mf2, ta, ma ; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v9 ; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vf_nxv2f16: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: vsetvli a0, zero, e16, mf2, ta, ma +; ZVFBFA-NEXT: vfadd.vf v8, v8, fa0 +; ZVFBFA-NEXT: ret %head = insertelement <vscale x 2 x half> poison, half %b, i32 0 %splat = shufflevector <vscale x 2 x half> %head, <vscale x 2 x half> poison, <vscale x 2 x i32> zeroinitializer %vc = fadd <vscale x 2 x half> %va, %splat @@ -373,6 +780,12 @@ define <vscale x 4 x half> @vfadd_vv_nxv4f16(<vscale x 4 x half> %va, <vscale x ; ZVFHMIN-NEXT: vsetvli zero, zero, e16, m1, ta, ma ; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v10 ; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vv_nxv4f16: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: vsetvli a0, zero, e16, m1, ta, ma +; ZVFBFA-NEXT: vfadd.vv v8, v8, v9 +; ZVFBFA-NEXT: ret %vc = fadd <vscale x 4 x half> %va, %vb ret <vscale 
x 4 x half> %vc } @@ -394,6 +807,12 @@ define <vscale x 4 x half> @vfadd_vf_nxv4f16(<vscale x 4 x half> %va, half %b) { ; ZVFHMIN-NEXT: vsetvli zero, zero, e16, m1, ta, ma ; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v10 ; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vf_nxv4f16: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: vsetvli a0, zero, e16, m1, ta, ma +; ZVFBFA-NEXT: vfadd.vf v8, v8, fa0 +; ZVFBFA-NEXT: ret %head = insertelement <vscale x 4 x half> poison, half %b, i32 0 %splat = shufflevector <vscale x 4 x half> %head, <vscale x 4 x half> poison, <vscale x 4 x i32> zeroinitializer %vc = fadd <vscale x 4 x half> %va, %splat @@ -417,6 +836,12 @@ define <vscale x 8 x half> @vfadd_vv_nxv8f16(<vscale x 8 x half> %va, <vscale x ; ZVFHMIN-NEXT: vsetvli zero, zero, e16, m2, ta, ma ; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v12 ; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vv_nxv8f16: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: vsetvli a0, zero, e16, m2, ta, ma +; ZVFBFA-NEXT: vfadd.vv v8, v8, v10 +; ZVFBFA-NEXT: ret %vc = fadd <vscale x 8 x half> %va, %vb ret <vscale x 8 x half> %vc } @@ -438,6 +863,12 @@ define <vscale x 8 x half> @vfadd_vf_nxv8f16(<vscale x 8 x half> %va, half %b) { ; ZVFHMIN-NEXT: vsetvli zero, zero, e16, m2, ta, ma ; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v12 ; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vf_nxv8f16: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: vsetvli a0, zero, e16, m2, ta, ma +; ZVFBFA-NEXT: vfadd.vf v8, v8, fa0 +; ZVFBFA-NEXT: ret %head = insertelement <vscale x 8 x half> poison, half %b, i32 0 %splat = shufflevector <vscale x 8 x half> %head, <vscale x 8 x half> poison, <vscale x 8 x i32> zeroinitializer %vc = fadd <vscale x 8 x half> %va, %splat @@ -461,6 +892,12 @@ define <vscale x 8 x half> @vfadd_fv_nxv8f16(<vscale x 8 x half> %va, half %b) { ; ZVFHMIN-NEXT: vsetvli zero, zero, e16, m2, ta, ma ; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v12 ; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_fv_nxv8f16: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: vsetvli a0, zero, e16, m2, ta, ma +; ZVFBFA-NEXT: vfadd.vf v8, v8, fa0 +; ZVFBFA-NEXT: ret %head = insertelement <vscale x 8 x half> poison, half %b, i32 0 %splat = shufflevector <vscale x 8 x half> %head, <vscale x 8 x half> poison, <vscale x 8 x i32> zeroinitializer %vc = fadd <vscale x 8 x half> %splat, %va @@ -484,6 +921,12 @@ define <vscale x 16 x half> @vfadd_vv_nxv16f16(<vscale x 16 x half> %va, <vscale ; ZVFHMIN-NEXT: vsetvli zero, zero, e16, m4, ta, ma ; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v16 ; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vv_nxv16f16: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: vsetvli a0, zero, e16, m4, ta, ma +; ZVFBFA-NEXT: vfadd.vv v8, v8, v12 +; ZVFBFA-NEXT: ret %vc = fadd <vscale x 16 x half> %va, %vb ret <vscale x 16 x half> %vc } @@ -505,6 +948,12 @@ define <vscale x 16 x half> @vfadd_vf_nxv16f16(<vscale x 16 x half> %va, half %b ; ZVFHMIN-NEXT: vsetvli zero, zero, e16, m4, ta, ma ; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v16 ; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vf_nxv16f16: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: vsetvli a0, zero, e16, m4, ta, ma +; ZVFBFA-NEXT: vfadd.vf v8, v8, fa0 +; ZVFBFA-NEXT: ret %head = insertelement <vscale x 16 x half> poison, half %b, i32 0 %splat = shufflevector <vscale x 16 x half> %head, <vscale x 16 x half> poison, <vscale x 16 x i32> zeroinitializer %vc = fadd <vscale x 16 x half> %va, %splat @@ -549,6 +998,12 @@ define <vscale x 32 x half> @vfadd_vv_nxv32f16(<vscale x 32 x half> %va, <vscale ; ZVFHMIN-NEXT: addi sp, sp, 16 ; ZVFHMIN-NEXT: .cfi_def_cfa_offset 0 ; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vv_nxv32f16: +; ZVFBFA: # 
%bb.0: +; ZVFBFA-NEXT: vsetvli a0, zero, e16, m8, ta, ma +; ZVFBFA-NEXT: vfadd.vv v8, v8, v16 +; ZVFBFA-NEXT: ret %vc = fadd <vscale x 32 x half> %va, %vb ret <vscale x 32 x half> %vc } @@ -596,6 +1051,12 @@ define <vscale x 32 x half> @vfadd_vf_nxv32f16(<vscale x 32 x half> %va, half %b ; ZVFHMIN-NEXT: addi sp, sp, 16 ; ZVFHMIN-NEXT: .cfi_def_cfa_offset 0 ; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vf_nxv32f16: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: vsetvli a0, zero, e16, m8, ta, ma +; ZVFBFA-NEXT: vfadd.vf v8, v8, fa0 +; ZVFBFA-NEXT: ret %head = insertelement <vscale x 32 x half> poison, half %b, i32 0 %splat = shufflevector <vscale x 32 x half> %head, <vscale x 32 x half> poison, <vscale x 32 x i32> zeroinitializer %vc = fadd <vscale x 32 x half> %va, %splat diff --git a/llvm/test/CodeGen/RISCV/rvv/vfadd-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vfadd-vp.ll index 32e3d6b..633a201 100644 --- a/llvm/test/CodeGen/RISCV/rvv/vfadd-vp.ll +++ b/llvm/test/CodeGen/RISCV/rvv/vfadd-vp.ll @@ -11,52 +11,125 @@ ; RUN: llc -mtriple=riscv64 -mattr=+d,+zfhmin,+zvfhmin,+zfbfmin,+zvfbfmin,+v \ ; RUN: -target-abi=lp64d -verify-machineinstrs < %s | FileCheck %s \ ; RUN: --check-prefixes=CHECK,ZVFHMIN +; RUN: llc -mtriple=riscv64 -mattr=+zvfhmin,+experimental-zvfbfa,+v \ +; RUN: -target-abi=lp64d -verify-machineinstrs < %s | FileCheck %s \ +; RUN: --check-prefixes=CHECK,ZVFBFA declare <vscale x 1 x bfloat> @llvm.vp.fadd.nxv1bf16(<vscale x 1 x bfloat>, <vscale x 1 x bfloat>, <vscale x 1 x i1>, i32) define <vscale x 1 x bfloat> @vfadd_vv_nxv1bf16(<vscale x 1 x bfloat> %va, <vscale x 1 x bfloat> %b, <vscale x 1 x i1> %m, i32 zeroext %evl) { -; CHECK-LABEL: vfadd_vv_nxv1bf16: -; CHECK: # %bb.0: -; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, ma -; CHECK-NEXT: vfwcvtbf16.f.f.v v10, v9, v0.t -; CHECK-NEXT: vfwcvtbf16.f.f.v v9, v8, v0.t -; CHECK-NEXT: vsetvli zero, zero, e32, mf2, ta, ma -; CHECK-NEXT: vfadd.vv v9, v9, v10, v0.t -; CHECK-NEXT: vsetvli zero, zero, e16, mf4, ta, ma -; CHECK-NEXT: vfncvtbf16.f.f.w v8, v9, v0.t -; CHECK-NEXT: ret +; ZVFH-LABEL: vfadd_vv_nxv1bf16: +; ZVFH: # %bb.0: +; ZVFH-NEXT: vsetvli zero, a0, e16, mf4, ta, ma +; ZVFH-NEXT: vfwcvtbf16.f.f.v v10, v9, v0.t +; ZVFH-NEXT: vfwcvtbf16.f.f.v v9, v8, v0.t +; ZVFH-NEXT: vsetvli zero, zero, e32, mf2, ta, ma +; ZVFH-NEXT: vfadd.vv v9, v9, v10, v0.t +; ZVFH-NEXT: vsetvli zero, zero, e16, mf4, ta, ma +; ZVFH-NEXT: vfncvtbf16.f.f.w v8, v9, v0.t +; ZVFH-NEXT: ret +; +; ZVFHMIN-LABEL: vfadd_vv_nxv1bf16: +; ZVFHMIN: # %bb.0: +; ZVFHMIN-NEXT: vsetvli zero, a0, e16, mf4, ta, ma +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v10, v9, v0.t +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v9, v8, v0.t +; ZVFHMIN-NEXT: vsetvli zero, zero, e32, mf2, ta, ma +; ZVFHMIN-NEXT: vfadd.vv v9, v9, v10, v0.t +; ZVFHMIN-NEXT: vsetvli zero, zero, e16, mf4, ta, ma +; ZVFHMIN-NEXT: vfncvtbf16.f.f.w v8, v9, v0.t +; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vv_nxv1bf16: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: vsetvli zero, a0, e16alt, mf4, ta, ma +; ZVFBFA-NEXT: vfwcvt.f.f.v v10, v9, v0.t +; ZVFBFA-NEXT: vfwcvt.f.f.v v9, v8, v0.t +; ZVFBFA-NEXT: vsetvli zero, zero, e32, mf2, ta, ma +; ZVFBFA-NEXT: vfadd.vv v9, v9, v10, v0.t +; ZVFBFA-NEXT: vsetvli zero, zero, e16alt, mf4, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v8, v9, v0.t +; ZVFBFA-NEXT: ret %v = call <vscale x 1 x bfloat> @llvm.vp.fadd.nxv1bf16(<vscale x 1 x bfloat> %va, <vscale x 1 x bfloat> %b, <vscale x 1 x i1> %m, i32 %evl) ret <vscale x 1 x bfloat> %v } define <vscale x 1 x bfloat> @vfadd_vv_nxv1bf16_unmasked(<vscale x 1 x bfloat> %va, <vscale x 1 x 
bfloat> %b, i32 zeroext %evl) { -; CHECK-LABEL: vfadd_vv_nxv1bf16_unmasked: -; CHECK: # %bb.0: -; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, ma -; CHECK-NEXT: vfwcvtbf16.f.f.v v10, v9 -; CHECK-NEXT: vfwcvtbf16.f.f.v v9, v8 -; CHECK-NEXT: vsetvli zero, zero, e32, mf2, ta, ma -; CHECK-NEXT: vfadd.vv v9, v9, v10 -; CHECK-NEXT: vsetvli zero, zero, e16, mf4, ta, ma -; CHECK-NEXT: vfncvtbf16.f.f.w v8, v9 -; CHECK-NEXT: ret +; ZVFH-LABEL: vfadd_vv_nxv1bf16_unmasked: +; ZVFH: # %bb.0: +; ZVFH-NEXT: vsetvli zero, a0, e16, mf4, ta, ma +; ZVFH-NEXT: vfwcvtbf16.f.f.v v10, v9 +; ZVFH-NEXT: vfwcvtbf16.f.f.v v9, v8 +; ZVFH-NEXT: vsetvli zero, zero, e32, mf2, ta, ma +; ZVFH-NEXT: vfadd.vv v9, v9, v10 +; ZVFH-NEXT: vsetvli zero, zero, e16, mf4, ta, ma +; ZVFH-NEXT: vfncvtbf16.f.f.w v8, v9 +; ZVFH-NEXT: ret +; +; ZVFHMIN-LABEL: vfadd_vv_nxv1bf16_unmasked: +; ZVFHMIN: # %bb.0: +; ZVFHMIN-NEXT: vsetvli zero, a0, e16, mf4, ta, ma +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v10, v9 +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v9, v8 +; ZVFHMIN-NEXT: vsetvli zero, zero, e32, mf2, ta, ma +; ZVFHMIN-NEXT: vfadd.vv v9, v9, v10 +; ZVFHMIN-NEXT: vsetvli zero, zero, e16, mf4, ta, ma +; ZVFHMIN-NEXT: vfncvtbf16.f.f.w v8, v9 +; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vv_nxv1bf16_unmasked: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: vsetvli zero, a0, e16alt, mf4, ta, ma +; ZVFBFA-NEXT: vfwcvt.f.f.v v10, v9 +; ZVFBFA-NEXT: vfwcvt.f.f.v v9, v8 +; ZVFBFA-NEXT: vsetvli zero, zero, e32, mf2, ta, ma +; ZVFBFA-NEXT: vfadd.vv v9, v9, v10 +; ZVFBFA-NEXT: vsetvli zero, zero, e16alt, mf4, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v8, v9 +; ZVFBFA-NEXT: ret %v = call <vscale x 1 x bfloat> @llvm.vp.fadd.nxv1bf16(<vscale x 1 x bfloat> %va, <vscale x 1 x bfloat> %b, <vscale x 1 x i1> splat (i1 true), i32 %evl) ret <vscale x 1 x bfloat> %v } define <vscale x 1 x bfloat> @vfadd_vf_nxv1bf16(<vscale x 1 x bfloat> %va, bfloat %b, <vscale x 1 x i1> %m, i32 zeroext %evl) { -; CHECK-LABEL: vfadd_vf_nxv1bf16: -; CHECK: # %bb.0: -; CHECK-NEXT: fmv.x.h a1, fa0 -; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, ma -; CHECK-NEXT: vmv.v.x v9, a1 -; CHECK-NEXT: vfwcvtbf16.f.f.v v10, v8, v0.t -; CHECK-NEXT: vfwcvtbf16.f.f.v v8, v9, v0.t -; CHECK-NEXT: vsetvli zero, zero, e32, mf2, ta, ma -; CHECK-NEXT: vfadd.vv v9, v10, v8, v0.t -; CHECK-NEXT: vsetvli zero, zero, e16, mf4, ta, ma -; CHECK-NEXT: vfncvtbf16.f.f.w v8, v9, v0.t -; CHECK-NEXT: ret +; ZVFH-LABEL: vfadd_vf_nxv1bf16: +; ZVFH: # %bb.0: +; ZVFH-NEXT: fmv.x.h a1, fa0 +; ZVFH-NEXT: vsetvli zero, a0, e16, mf4, ta, ma +; ZVFH-NEXT: vmv.v.x v9, a1 +; ZVFH-NEXT: vfwcvtbf16.f.f.v v10, v8, v0.t +; ZVFH-NEXT: vfwcvtbf16.f.f.v v8, v9, v0.t +; ZVFH-NEXT: vsetvli zero, zero, e32, mf2, ta, ma +; ZVFH-NEXT: vfadd.vv v9, v10, v8, v0.t +; ZVFH-NEXT: vsetvli zero, zero, e16, mf4, ta, ma +; ZVFH-NEXT: vfncvtbf16.f.f.w v8, v9, v0.t +; ZVFH-NEXT: ret +; +; ZVFHMIN-LABEL: vfadd_vf_nxv1bf16: +; ZVFHMIN: # %bb.0: +; ZVFHMIN-NEXT: fmv.x.h a1, fa0 +; ZVFHMIN-NEXT: vsetvli zero, a0, e16, mf4, ta, ma +; ZVFHMIN-NEXT: vmv.v.x v9, a1 +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v10, v8, v0.t +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v8, v9, v0.t +; ZVFHMIN-NEXT: vsetvli zero, zero, e32, mf2, ta, ma +; ZVFHMIN-NEXT: vfadd.vv v9, v10, v8, v0.t +; ZVFHMIN-NEXT: vsetvli zero, zero, e16, mf4, ta, ma +; ZVFHMIN-NEXT: vfncvtbf16.f.f.w v8, v9, v0.t +; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vf_nxv1bf16: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: fmv.x.h a1, fa0 +; ZVFBFA-NEXT: vsetvli zero, a0, e16alt, mf4, ta, ma +; ZVFBFA-NEXT: vmv.v.x v9, a1 +; ZVFBFA-NEXT: vfwcvt.f.f.v v10, v8, 
v0.t +; ZVFBFA-NEXT: vfwcvt.f.f.v v8, v9, v0.t +; ZVFBFA-NEXT: vsetvli zero, zero, e32, mf2, ta, ma +; ZVFBFA-NEXT: vfadd.vv v9, v10, v8, v0.t +; ZVFBFA-NEXT: vsetvli zero, zero, e16alt, mf4, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v8, v9, v0.t +; ZVFBFA-NEXT: ret %elt.head = insertelement <vscale x 1 x bfloat> poison, bfloat %b, i32 0 %vb = shufflevector <vscale x 1 x bfloat> %elt.head, <vscale x 1 x bfloat> poison, <vscale x 1 x i32> zeroinitializer %v = call <vscale x 1 x bfloat> @llvm.vp.fadd.nxv1bf16(<vscale x 1 x bfloat> %va, <vscale x 1 x bfloat> %vb, <vscale x 1 x i1> %m, i32 %evl) @@ -64,18 +137,44 @@ define <vscale x 1 x bfloat> @vfadd_vf_nxv1bf16(<vscale x 1 x bfloat> %va, bfloa } define <vscale x 1 x bfloat> @vfadd_vf_nxv1bf16_commute(<vscale x 1 x bfloat> %va, bfloat %b, <vscale x 1 x i1> %m, i32 zeroext %evl) { -; CHECK-LABEL: vfadd_vf_nxv1bf16_commute: -; CHECK: # %bb.0: -; CHECK-NEXT: fmv.x.h a1, fa0 -; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, ma -; CHECK-NEXT: vmv.v.x v9, a1 -; CHECK-NEXT: vfwcvtbf16.f.f.v v10, v8, v0.t -; CHECK-NEXT: vfwcvtbf16.f.f.v v8, v9, v0.t -; CHECK-NEXT: vsetvli zero, zero, e32, mf2, ta, ma -; CHECK-NEXT: vfadd.vv v9, v8, v10, v0.t -; CHECK-NEXT: vsetvli zero, zero, e16, mf4, ta, ma -; CHECK-NEXT: vfncvtbf16.f.f.w v8, v9, v0.t -; CHECK-NEXT: ret +; ZVFH-LABEL: vfadd_vf_nxv1bf16_commute: +; ZVFH: # %bb.0: +; ZVFH-NEXT: fmv.x.h a1, fa0 +; ZVFH-NEXT: vsetvli zero, a0, e16, mf4, ta, ma +; ZVFH-NEXT: vmv.v.x v9, a1 +; ZVFH-NEXT: vfwcvtbf16.f.f.v v10, v8, v0.t +; ZVFH-NEXT: vfwcvtbf16.f.f.v v8, v9, v0.t +; ZVFH-NEXT: vsetvli zero, zero, e32, mf2, ta, ma +; ZVFH-NEXT: vfadd.vv v9, v8, v10, v0.t +; ZVFH-NEXT: vsetvli zero, zero, e16, mf4, ta, ma +; ZVFH-NEXT: vfncvtbf16.f.f.w v8, v9, v0.t +; ZVFH-NEXT: ret +; +; ZVFHMIN-LABEL: vfadd_vf_nxv1bf16_commute: +; ZVFHMIN: # %bb.0: +; ZVFHMIN-NEXT: fmv.x.h a1, fa0 +; ZVFHMIN-NEXT: vsetvli zero, a0, e16, mf4, ta, ma +; ZVFHMIN-NEXT: vmv.v.x v9, a1 +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v10, v8, v0.t +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v8, v9, v0.t +; ZVFHMIN-NEXT: vsetvli zero, zero, e32, mf2, ta, ma +; ZVFHMIN-NEXT: vfadd.vv v9, v8, v10, v0.t +; ZVFHMIN-NEXT: vsetvli zero, zero, e16, mf4, ta, ma +; ZVFHMIN-NEXT: vfncvtbf16.f.f.w v8, v9, v0.t +; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vf_nxv1bf16_commute: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: fmv.x.h a1, fa0 +; ZVFBFA-NEXT: vsetvli zero, a0, e16alt, mf4, ta, ma +; ZVFBFA-NEXT: vmv.v.x v9, a1 +; ZVFBFA-NEXT: vfwcvt.f.f.v v10, v8, v0.t +; ZVFBFA-NEXT: vfwcvt.f.f.v v8, v9, v0.t +; ZVFBFA-NEXT: vsetvli zero, zero, e32, mf2, ta, ma +; ZVFBFA-NEXT: vfadd.vv v9, v8, v10, v0.t +; ZVFBFA-NEXT: vsetvli zero, zero, e16alt, mf4, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v8, v9, v0.t +; ZVFBFA-NEXT: ret %elt.head = insertelement <vscale x 1 x bfloat> poison, bfloat %b, i32 0 %vb = shufflevector <vscale x 1 x bfloat> %elt.head, <vscale x 1 x bfloat> poison, <vscale x 1 x i32> zeroinitializer %v = call <vscale x 1 x bfloat> @llvm.vp.fadd.nxv1bf16(<vscale x 1 x bfloat> %vb, <vscale x 1 x bfloat> %va, <vscale x 1 x i1> %m, i32 %evl) @@ -83,18 +182,44 @@ define <vscale x 1 x bfloat> @vfadd_vf_nxv1bf16_commute(<vscale x 1 x bfloat> %v } define <vscale x 1 x bfloat> @vfadd_vf_nxv1bf16_unmasked(<vscale x 1 x bfloat> %va, bfloat %b, i32 zeroext %evl) { -; CHECK-LABEL: vfadd_vf_nxv1bf16_unmasked: -; CHECK: # %bb.0: -; CHECK-NEXT: fmv.x.h a1, fa0 -; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, ma -; CHECK-NEXT: vmv.v.x v9, a1 -; CHECK-NEXT: vfwcvtbf16.f.f.v v10, v8 -; CHECK-NEXT: vfwcvtbf16.f.f.v v8, 
v9 -; CHECK-NEXT: vsetvli zero, zero, e32, mf2, ta, ma -; CHECK-NEXT: vfadd.vv v9, v10, v8 -; CHECK-NEXT: vsetvli zero, zero, e16, mf4, ta, ma -; CHECK-NEXT: vfncvtbf16.f.f.w v8, v9 -; CHECK-NEXT: ret +; ZVFH-LABEL: vfadd_vf_nxv1bf16_unmasked: +; ZVFH: # %bb.0: +; ZVFH-NEXT: fmv.x.h a1, fa0 +; ZVFH-NEXT: vsetvli zero, a0, e16, mf4, ta, ma +; ZVFH-NEXT: vmv.v.x v9, a1 +; ZVFH-NEXT: vfwcvtbf16.f.f.v v10, v8 +; ZVFH-NEXT: vfwcvtbf16.f.f.v v8, v9 +; ZVFH-NEXT: vsetvli zero, zero, e32, mf2, ta, ma +; ZVFH-NEXT: vfadd.vv v9, v10, v8 +; ZVFH-NEXT: vsetvli zero, zero, e16, mf4, ta, ma +; ZVFH-NEXT: vfncvtbf16.f.f.w v8, v9 +; ZVFH-NEXT: ret +; +; ZVFHMIN-LABEL: vfadd_vf_nxv1bf16_unmasked: +; ZVFHMIN: # %bb.0: +; ZVFHMIN-NEXT: fmv.x.h a1, fa0 +; ZVFHMIN-NEXT: vsetvli zero, a0, e16, mf4, ta, ma +; ZVFHMIN-NEXT: vmv.v.x v9, a1 +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v10, v8 +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v8, v9 +; ZVFHMIN-NEXT: vsetvli zero, zero, e32, mf2, ta, ma +; ZVFHMIN-NEXT: vfadd.vv v9, v10, v8 +; ZVFHMIN-NEXT: vsetvli zero, zero, e16, mf4, ta, ma +; ZVFHMIN-NEXT: vfncvtbf16.f.f.w v8, v9 +; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vf_nxv1bf16_unmasked: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: fmv.x.h a1, fa0 +; ZVFBFA-NEXT: vsetvli zero, a0, e16alt, mf4, ta, ma +; ZVFBFA-NEXT: vmv.v.x v9, a1 +; ZVFBFA-NEXT: vfwcvt.f.f.v v10, v8 +; ZVFBFA-NEXT: vfwcvt.f.f.v v8, v9 +; ZVFBFA-NEXT: vsetvli zero, zero, e32, mf2, ta, ma +; ZVFBFA-NEXT: vfadd.vv v9, v10, v8 +; ZVFBFA-NEXT: vsetvli zero, zero, e16alt, mf4, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v8, v9 +; ZVFBFA-NEXT: ret %elt.head = insertelement <vscale x 1 x bfloat> poison, bfloat %b, i32 0 %vb = shufflevector <vscale x 1 x bfloat> %elt.head, <vscale x 1 x bfloat> poison, <vscale x 1 x i32> zeroinitializer %v = call <vscale x 1 x bfloat> @llvm.vp.fadd.nxv1bf16(<vscale x 1 x bfloat> %va, <vscale x 1 x bfloat> %vb, <vscale x 1 x i1> splat (i1 true), i32 %evl) @@ -102,18 +227,44 @@ define <vscale x 1 x bfloat> @vfadd_vf_nxv1bf16_unmasked(<vscale x 1 x bfloat> % } define <vscale x 1 x bfloat> @vfadd_vf_nxv1bf16_unmasked_commute(<vscale x 1 x bfloat> %va, bfloat %b, i32 zeroext %evl) { -; CHECK-LABEL: vfadd_vf_nxv1bf16_unmasked_commute: -; CHECK: # %bb.0: -; CHECK-NEXT: fmv.x.h a1, fa0 -; CHECK-NEXT: vsetvli zero, a0, e16, mf4, ta, ma -; CHECK-NEXT: vmv.v.x v9, a1 -; CHECK-NEXT: vfwcvtbf16.f.f.v v10, v8 -; CHECK-NEXT: vfwcvtbf16.f.f.v v8, v9 -; CHECK-NEXT: vsetvli zero, zero, e32, mf2, ta, ma -; CHECK-NEXT: vfadd.vv v9, v8, v10 -; CHECK-NEXT: vsetvli zero, zero, e16, mf4, ta, ma -; CHECK-NEXT: vfncvtbf16.f.f.w v8, v9 -; CHECK-NEXT: ret +; ZVFH-LABEL: vfadd_vf_nxv1bf16_unmasked_commute: +; ZVFH: # %bb.0: +; ZVFH-NEXT: fmv.x.h a1, fa0 +; ZVFH-NEXT: vsetvli zero, a0, e16, mf4, ta, ma +; ZVFH-NEXT: vmv.v.x v9, a1 +; ZVFH-NEXT: vfwcvtbf16.f.f.v v10, v8 +; ZVFH-NEXT: vfwcvtbf16.f.f.v v8, v9 +; ZVFH-NEXT: vsetvli zero, zero, e32, mf2, ta, ma +; ZVFH-NEXT: vfadd.vv v9, v8, v10 +; ZVFH-NEXT: vsetvli zero, zero, e16, mf4, ta, ma +; ZVFH-NEXT: vfncvtbf16.f.f.w v8, v9 +; ZVFH-NEXT: ret +; +; ZVFHMIN-LABEL: vfadd_vf_nxv1bf16_unmasked_commute: +; ZVFHMIN: # %bb.0: +; ZVFHMIN-NEXT: fmv.x.h a1, fa0 +; ZVFHMIN-NEXT: vsetvli zero, a0, e16, mf4, ta, ma +; ZVFHMIN-NEXT: vmv.v.x v9, a1 +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v10, v8 +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v8, v9 +; ZVFHMIN-NEXT: vsetvli zero, zero, e32, mf2, ta, ma +; ZVFHMIN-NEXT: vfadd.vv v9, v8, v10 +; ZVFHMIN-NEXT: vsetvli zero, zero, e16, mf4, ta, ma +; ZVFHMIN-NEXT: vfncvtbf16.f.f.w v8, v9 +; ZVFHMIN-NEXT: ret +; +; 
ZVFBFA-LABEL: vfadd_vf_nxv1bf16_unmasked_commute: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: fmv.x.h a1, fa0 +; ZVFBFA-NEXT: vsetvli zero, a0, e16alt, mf4, ta, ma +; ZVFBFA-NEXT: vmv.v.x v9, a1 +; ZVFBFA-NEXT: vfwcvt.f.f.v v10, v8 +; ZVFBFA-NEXT: vfwcvt.f.f.v v8, v9 +; ZVFBFA-NEXT: vsetvli zero, zero, e32, mf2, ta, ma +; ZVFBFA-NEXT: vfadd.vv v9, v8, v10 +; ZVFBFA-NEXT: vsetvli zero, zero, e16alt, mf4, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v8, v9 +; ZVFBFA-NEXT: ret %elt.head = insertelement <vscale x 1 x bfloat> poison, bfloat %b, i32 0 %vb = shufflevector <vscale x 1 x bfloat> %elt.head, <vscale x 1 x bfloat> poison, <vscale x 1 x i32> zeroinitializer %v = call <vscale x 1 x bfloat> @llvm.vp.fadd.nxv1bf16(<vscale x 1 x bfloat> %vb, <vscale x 1 x bfloat> %va, <vscale x 1 x i1> splat (i1 true), i32 %evl) @@ -123,48 +274,118 @@ define <vscale x 1 x bfloat> @vfadd_vf_nxv1bf16_unmasked_commute(<vscale x 1 x b declare <vscale x 2 x bfloat> @llvm.vp.fadd.nxv2bf16(<vscale x 2 x bfloat>, <vscale x 2 x bfloat>, <vscale x 2 x i1>, i32) define <vscale x 2 x bfloat> @vfadd_vv_nxv2bf16(<vscale x 2 x bfloat> %va, <vscale x 2 x bfloat> %b, <vscale x 2 x i1> %m, i32 zeroext %evl) { -; CHECK-LABEL: vfadd_vv_nxv2bf16: -; CHECK: # %bb.0: -; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, ma -; CHECK-NEXT: vfwcvtbf16.f.f.v v10, v9, v0.t -; CHECK-NEXT: vfwcvtbf16.f.f.v v9, v8, v0.t -; CHECK-NEXT: vsetvli zero, zero, e32, m1, ta, ma -; CHECK-NEXT: vfadd.vv v9, v9, v10, v0.t -; CHECK-NEXT: vsetvli zero, zero, e16, mf2, ta, ma -; CHECK-NEXT: vfncvtbf16.f.f.w v8, v9, v0.t -; CHECK-NEXT: ret +; ZVFH-LABEL: vfadd_vv_nxv2bf16: +; ZVFH: # %bb.0: +; ZVFH-NEXT: vsetvli zero, a0, e16, mf2, ta, ma +; ZVFH-NEXT: vfwcvtbf16.f.f.v v10, v9, v0.t +; ZVFH-NEXT: vfwcvtbf16.f.f.v v9, v8, v0.t +; ZVFH-NEXT: vsetvli zero, zero, e32, m1, ta, ma +; ZVFH-NEXT: vfadd.vv v9, v9, v10, v0.t +; ZVFH-NEXT: vsetvli zero, zero, e16, mf2, ta, ma +; ZVFH-NEXT: vfncvtbf16.f.f.w v8, v9, v0.t +; ZVFH-NEXT: ret +; +; ZVFHMIN-LABEL: vfadd_vv_nxv2bf16: +; ZVFHMIN: # %bb.0: +; ZVFHMIN-NEXT: vsetvli zero, a0, e16, mf2, ta, ma +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v10, v9, v0.t +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v9, v8, v0.t +; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m1, ta, ma +; ZVFHMIN-NEXT: vfadd.vv v9, v9, v10, v0.t +; ZVFHMIN-NEXT: vsetvli zero, zero, e16, mf2, ta, ma +; ZVFHMIN-NEXT: vfncvtbf16.f.f.w v8, v9, v0.t +; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vv_nxv2bf16: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: vsetvli zero, a0, e16alt, mf2, ta, ma +; ZVFBFA-NEXT: vfwcvt.f.f.v v10, v9, v0.t +; ZVFBFA-NEXT: vfwcvt.f.f.v v9, v8, v0.t +; ZVFBFA-NEXT: vsetvli zero, zero, e32, m1, ta, ma +; ZVFBFA-NEXT: vfadd.vv v9, v9, v10, v0.t +; ZVFBFA-NEXT: vsetvli zero, zero, e16alt, mf2, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v8, v9, v0.t +; ZVFBFA-NEXT: ret %v = call <vscale x 2 x bfloat> @llvm.vp.fadd.nxv2bf16(<vscale x 2 x bfloat> %va, <vscale x 2 x bfloat> %b, <vscale x 2 x i1> %m, i32 %evl) ret <vscale x 2 x bfloat> %v } define <vscale x 2 x bfloat> @vfadd_vv_nxv2bf16_unmasked(<vscale x 2 x bfloat> %va, <vscale x 2 x bfloat> %b, i32 zeroext %evl) { -; CHECK-LABEL: vfadd_vv_nxv2bf16_unmasked: -; CHECK: # %bb.0: -; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, ma -; CHECK-NEXT: vfwcvtbf16.f.f.v v10, v9 -; CHECK-NEXT: vfwcvtbf16.f.f.v v9, v8 -; CHECK-NEXT: vsetvli zero, zero, e32, m1, ta, ma -; CHECK-NEXT: vfadd.vv v9, v9, v10 -; CHECK-NEXT: vsetvli zero, zero, e16, mf2, ta, ma -; CHECK-NEXT: vfncvtbf16.f.f.w v8, v9 -; CHECK-NEXT: ret +; ZVFH-LABEL: vfadd_vv_nxv2bf16_unmasked: +; ZVFH: 
# %bb.0: +; ZVFH-NEXT: vsetvli zero, a0, e16, mf2, ta, ma +; ZVFH-NEXT: vfwcvtbf16.f.f.v v10, v9 +; ZVFH-NEXT: vfwcvtbf16.f.f.v v9, v8 +; ZVFH-NEXT: vsetvli zero, zero, e32, m1, ta, ma +; ZVFH-NEXT: vfadd.vv v9, v9, v10 +; ZVFH-NEXT: vsetvli zero, zero, e16, mf2, ta, ma +; ZVFH-NEXT: vfncvtbf16.f.f.w v8, v9 +; ZVFH-NEXT: ret +; +; ZVFHMIN-LABEL: vfadd_vv_nxv2bf16_unmasked: +; ZVFHMIN: # %bb.0: +; ZVFHMIN-NEXT: vsetvli zero, a0, e16, mf2, ta, ma +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v10, v9 +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v9, v8 +; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m1, ta, ma +; ZVFHMIN-NEXT: vfadd.vv v9, v9, v10 +; ZVFHMIN-NEXT: vsetvli zero, zero, e16, mf2, ta, ma +; ZVFHMIN-NEXT: vfncvtbf16.f.f.w v8, v9 +; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vv_nxv2bf16_unmasked: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: vsetvli zero, a0, e16alt, mf2, ta, ma +; ZVFBFA-NEXT: vfwcvt.f.f.v v10, v9 +; ZVFBFA-NEXT: vfwcvt.f.f.v v9, v8 +; ZVFBFA-NEXT: vsetvli zero, zero, e32, m1, ta, ma +; ZVFBFA-NEXT: vfadd.vv v9, v9, v10 +; ZVFBFA-NEXT: vsetvli zero, zero, e16alt, mf2, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v8, v9 +; ZVFBFA-NEXT: ret %v = call <vscale x 2 x bfloat> @llvm.vp.fadd.nxv2bf16(<vscale x 2 x bfloat> %va, <vscale x 2 x bfloat> %b, <vscale x 2 x i1> splat (i1 true), i32 %evl) ret <vscale x 2 x bfloat> %v } define <vscale x 2 x bfloat> @vfadd_vf_nxv2bf16(<vscale x 2 x bfloat> %va, bfloat %b, <vscale x 2 x i1> %m, i32 zeroext %evl) { -; CHECK-LABEL: vfadd_vf_nxv2bf16: -; CHECK: # %bb.0: -; CHECK-NEXT: fmv.x.h a1, fa0 -; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, ma -; CHECK-NEXT: vmv.v.x v9, a1 -; CHECK-NEXT: vfwcvtbf16.f.f.v v10, v8, v0.t -; CHECK-NEXT: vfwcvtbf16.f.f.v v8, v9, v0.t -; CHECK-NEXT: vsetvli zero, zero, e32, m1, ta, ma -; CHECK-NEXT: vfadd.vv v9, v10, v8, v0.t -; CHECK-NEXT: vsetvli zero, zero, e16, mf2, ta, ma -; CHECK-NEXT: vfncvtbf16.f.f.w v8, v9, v0.t -; CHECK-NEXT: ret +; ZVFH-LABEL: vfadd_vf_nxv2bf16: +; ZVFH: # %bb.0: +; ZVFH-NEXT: fmv.x.h a1, fa0 +; ZVFH-NEXT: vsetvli zero, a0, e16, mf2, ta, ma +; ZVFH-NEXT: vmv.v.x v9, a1 +; ZVFH-NEXT: vfwcvtbf16.f.f.v v10, v8, v0.t +; ZVFH-NEXT: vfwcvtbf16.f.f.v v8, v9, v0.t +; ZVFH-NEXT: vsetvli zero, zero, e32, m1, ta, ma +; ZVFH-NEXT: vfadd.vv v9, v10, v8, v0.t +; ZVFH-NEXT: vsetvli zero, zero, e16, mf2, ta, ma +; ZVFH-NEXT: vfncvtbf16.f.f.w v8, v9, v0.t +; ZVFH-NEXT: ret +; +; ZVFHMIN-LABEL: vfadd_vf_nxv2bf16: +; ZVFHMIN: # %bb.0: +; ZVFHMIN-NEXT: fmv.x.h a1, fa0 +; ZVFHMIN-NEXT: vsetvli zero, a0, e16, mf2, ta, ma +; ZVFHMIN-NEXT: vmv.v.x v9, a1 +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v10, v8, v0.t +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v8, v9, v0.t +; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m1, ta, ma +; ZVFHMIN-NEXT: vfadd.vv v9, v10, v8, v0.t +; ZVFHMIN-NEXT: vsetvli zero, zero, e16, mf2, ta, ma +; ZVFHMIN-NEXT: vfncvtbf16.f.f.w v8, v9, v0.t +; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vf_nxv2bf16: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: fmv.x.h a1, fa0 +; ZVFBFA-NEXT: vsetvli zero, a0, e16alt, mf2, ta, ma +; ZVFBFA-NEXT: vmv.v.x v9, a1 +; ZVFBFA-NEXT: vfwcvt.f.f.v v10, v8, v0.t +; ZVFBFA-NEXT: vfwcvt.f.f.v v8, v9, v0.t +; ZVFBFA-NEXT: vsetvli zero, zero, e32, m1, ta, ma +; ZVFBFA-NEXT: vfadd.vv v9, v10, v8, v0.t +; ZVFBFA-NEXT: vsetvli zero, zero, e16alt, mf2, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v8, v9, v0.t +; ZVFBFA-NEXT: ret %elt.head = insertelement <vscale x 2 x bfloat> poison, bfloat %b, i32 0 %vb = shufflevector <vscale x 2 x bfloat> %elt.head, <vscale x 2 x bfloat> poison, <vscale x 2 x i32> zeroinitializer %v = call <vscale x 2 x bfloat> 
@llvm.vp.fadd.nxv2bf16(<vscale x 2 x bfloat> %va, <vscale x 2 x bfloat> %vb, <vscale x 2 x i1> %m, i32 %evl) @@ -172,18 +393,44 @@ define <vscale x 2 x bfloat> @vfadd_vf_nxv2bf16(<vscale x 2 x bfloat> %va, bfloa } define <vscale x 2 x bfloat> @vfadd_vf_nxv2bf16_unmasked(<vscale x 2 x bfloat> %va, bfloat %b, i32 zeroext %evl) { -; CHECK-LABEL: vfadd_vf_nxv2bf16_unmasked: -; CHECK: # %bb.0: -; CHECK-NEXT: fmv.x.h a1, fa0 -; CHECK-NEXT: vsetvli zero, a0, e16, mf2, ta, ma -; CHECK-NEXT: vmv.v.x v9, a1 -; CHECK-NEXT: vfwcvtbf16.f.f.v v10, v8 -; CHECK-NEXT: vfwcvtbf16.f.f.v v8, v9 -; CHECK-NEXT: vsetvli zero, zero, e32, m1, ta, ma -; CHECK-NEXT: vfadd.vv v9, v10, v8 -; CHECK-NEXT: vsetvli zero, zero, e16, mf2, ta, ma -; CHECK-NEXT: vfncvtbf16.f.f.w v8, v9 -; CHECK-NEXT: ret +; ZVFH-LABEL: vfadd_vf_nxv2bf16_unmasked: +; ZVFH: # %bb.0: +; ZVFH-NEXT: fmv.x.h a1, fa0 +; ZVFH-NEXT: vsetvli zero, a0, e16, mf2, ta, ma +; ZVFH-NEXT: vmv.v.x v9, a1 +; ZVFH-NEXT: vfwcvtbf16.f.f.v v10, v8 +; ZVFH-NEXT: vfwcvtbf16.f.f.v v8, v9 +; ZVFH-NEXT: vsetvli zero, zero, e32, m1, ta, ma +; ZVFH-NEXT: vfadd.vv v9, v10, v8 +; ZVFH-NEXT: vsetvli zero, zero, e16, mf2, ta, ma +; ZVFH-NEXT: vfncvtbf16.f.f.w v8, v9 +; ZVFH-NEXT: ret +; +; ZVFHMIN-LABEL: vfadd_vf_nxv2bf16_unmasked: +; ZVFHMIN: # %bb.0: +; ZVFHMIN-NEXT: fmv.x.h a1, fa0 +; ZVFHMIN-NEXT: vsetvli zero, a0, e16, mf2, ta, ma +; ZVFHMIN-NEXT: vmv.v.x v9, a1 +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v10, v8 +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v8, v9 +; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m1, ta, ma +; ZVFHMIN-NEXT: vfadd.vv v9, v10, v8 +; ZVFHMIN-NEXT: vsetvli zero, zero, e16, mf2, ta, ma +; ZVFHMIN-NEXT: vfncvtbf16.f.f.w v8, v9 +; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vf_nxv2bf16_unmasked: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: fmv.x.h a1, fa0 +; ZVFBFA-NEXT: vsetvli zero, a0, e16alt, mf2, ta, ma +; ZVFBFA-NEXT: vmv.v.x v9, a1 +; ZVFBFA-NEXT: vfwcvt.f.f.v v10, v8 +; ZVFBFA-NEXT: vfwcvt.f.f.v v8, v9 +; ZVFBFA-NEXT: vsetvli zero, zero, e32, m1, ta, ma +; ZVFBFA-NEXT: vfadd.vv v9, v10, v8 +; ZVFBFA-NEXT: vsetvli zero, zero, e16alt, mf2, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v8, v9 +; ZVFBFA-NEXT: ret %elt.head = insertelement <vscale x 2 x bfloat> poison, bfloat %b, i32 0 %vb = shufflevector <vscale x 2 x bfloat> %elt.head, <vscale x 2 x bfloat> poison, <vscale x 2 x i32> zeroinitializer %v = call <vscale x 2 x bfloat> @llvm.vp.fadd.nxv2bf16(<vscale x 2 x bfloat> %va, <vscale x 2 x bfloat> %vb, <vscale x 2 x i1> splat (i1 true), i32 %evl) @@ -193,48 +440,118 @@ define <vscale x 2 x bfloat> @vfadd_vf_nxv2bf16_unmasked(<vscale x 2 x bfloat> % declare <vscale x 4 x bfloat> @llvm.vp.fadd.nxv4bf16(<vscale x 4 x bfloat>, <vscale x 4 x bfloat>, <vscale x 4 x i1>, i32) define <vscale x 4 x bfloat> @vfadd_vv_nxv4bf16(<vscale x 4 x bfloat> %va, <vscale x 4 x bfloat> %b, <vscale x 4 x i1> %m, i32 zeroext %evl) { -; CHECK-LABEL: vfadd_vv_nxv4bf16: -; CHECK: # %bb.0: -; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, ma -; CHECK-NEXT: vfwcvtbf16.f.f.v v10, v9, v0.t -; CHECK-NEXT: vfwcvtbf16.f.f.v v12, v8, v0.t -; CHECK-NEXT: vsetvli zero, zero, e32, m2, ta, ma -; CHECK-NEXT: vfadd.vv v10, v12, v10, v0.t -; CHECK-NEXT: vsetvli zero, zero, e16, m1, ta, ma -; CHECK-NEXT: vfncvtbf16.f.f.w v8, v10, v0.t -; CHECK-NEXT: ret +; ZVFH-LABEL: vfadd_vv_nxv4bf16: +; ZVFH: # %bb.0: +; ZVFH-NEXT: vsetvli zero, a0, e16, m1, ta, ma +; ZVFH-NEXT: vfwcvtbf16.f.f.v v10, v9, v0.t +; ZVFH-NEXT: vfwcvtbf16.f.f.v v12, v8, v0.t +; ZVFH-NEXT: vsetvli zero, zero, e32, m2, ta, ma +; ZVFH-NEXT: vfadd.vv v10, v12, v10, 
v0.t +; ZVFH-NEXT: vsetvli zero, zero, e16, m1, ta, ma +; ZVFH-NEXT: vfncvtbf16.f.f.w v8, v10, v0.t +; ZVFH-NEXT: ret +; +; ZVFHMIN-LABEL: vfadd_vv_nxv4bf16: +; ZVFHMIN: # %bb.0: +; ZVFHMIN-NEXT: vsetvli zero, a0, e16, m1, ta, ma +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v10, v9, v0.t +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v12, v8, v0.t +; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m2, ta, ma +; ZVFHMIN-NEXT: vfadd.vv v10, v12, v10, v0.t +; ZVFHMIN-NEXT: vsetvli zero, zero, e16, m1, ta, ma +; ZVFHMIN-NEXT: vfncvtbf16.f.f.w v8, v10, v0.t +; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vv_nxv4bf16: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: vsetvli zero, a0, e16alt, m1, ta, ma +; ZVFBFA-NEXT: vfwcvt.f.f.v v10, v9, v0.t +; ZVFBFA-NEXT: vfwcvt.f.f.v v12, v8, v0.t +; ZVFBFA-NEXT: vsetvli zero, zero, e32, m2, ta, ma +; ZVFBFA-NEXT: vfadd.vv v10, v12, v10, v0.t +; ZVFBFA-NEXT: vsetvli zero, zero, e16alt, m1, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v8, v10, v0.t +; ZVFBFA-NEXT: ret %v = call <vscale x 4 x bfloat> @llvm.vp.fadd.nxv4bf16(<vscale x 4 x bfloat> %va, <vscale x 4 x bfloat> %b, <vscale x 4 x i1> %m, i32 %evl) ret <vscale x 4 x bfloat> %v } define <vscale x 4 x bfloat> @vfadd_vv_nxv4bf16_unmasked(<vscale x 4 x bfloat> %va, <vscale x 4 x bfloat> %b, i32 zeroext %evl) { -; CHECK-LABEL: vfadd_vv_nxv4bf16_unmasked: -; CHECK: # %bb.0: -; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, ma -; CHECK-NEXT: vfwcvtbf16.f.f.v v10, v9 -; CHECK-NEXT: vfwcvtbf16.f.f.v v12, v8 -; CHECK-NEXT: vsetvli zero, zero, e32, m2, ta, ma -; CHECK-NEXT: vfadd.vv v10, v12, v10 -; CHECK-NEXT: vsetvli zero, zero, e16, m1, ta, ma -; CHECK-NEXT: vfncvtbf16.f.f.w v8, v10 -; CHECK-NEXT: ret +; ZVFH-LABEL: vfadd_vv_nxv4bf16_unmasked: +; ZVFH: # %bb.0: +; ZVFH-NEXT: vsetvli zero, a0, e16, m1, ta, ma +; ZVFH-NEXT: vfwcvtbf16.f.f.v v10, v9 +; ZVFH-NEXT: vfwcvtbf16.f.f.v v12, v8 +; ZVFH-NEXT: vsetvli zero, zero, e32, m2, ta, ma +; ZVFH-NEXT: vfadd.vv v10, v12, v10 +; ZVFH-NEXT: vsetvli zero, zero, e16, m1, ta, ma +; ZVFH-NEXT: vfncvtbf16.f.f.w v8, v10 +; ZVFH-NEXT: ret +; +; ZVFHMIN-LABEL: vfadd_vv_nxv4bf16_unmasked: +; ZVFHMIN: # %bb.0: +; ZVFHMIN-NEXT: vsetvli zero, a0, e16, m1, ta, ma +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v10, v9 +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v12, v8 +; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m2, ta, ma +; ZVFHMIN-NEXT: vfadd.vv v10, v12, v10 +; ZVFHMIN-NEXT: vsetvli zero, zero, e16, m1, ta, ma +; ZVFHMIN-NEXT: vfncvtbf16.f.f.w v8, v10 +; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vv_nxv4bf16_unmasked: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: vsetvli zero, a0, e16alt, m1, ta, ma +; ZVFBFA-NEXT: vfwcvt.f.f.v v10, v9 +; ZVFBFA-NEXT: vfwcvt.f.f.v v12, v8 +; ZVFBFA-NEXT: vsetvli zero, zero, e32, m2, ta, ma +; ZVFBFA-NEXT: vfadd.vv v10, v12, v10 +; ZVFBFA-NEXT: vsetvli zero, zero, e16alt, m1, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v8, v10 +; ZVFBFA-NEXT: ret %v = call <vscale x 4 x bfloat> @llvm.vp.fadd.nxv4bf16(<vscale x 4 x bfloat> %va, <vscale x 4 x bfloat> %b, <vscale x 4 x i1> splat (i1 true), i32 %evl) ret <vscale x 4 x bfloat> %v } define <vscale x 4 x bfloat> @vfadd_vf_nxv4bf16(<vscale x 4 x bfloat> %va, bfloat %b, <vscale x 4 x i1> %m, i32 zeroext %evl) { -; CHECK-LABEL: vfadd_vf_nxv4bf16: -; CHECK: # %bb.0: -; CHECK-NEXT: fmv.x.h a1, fa0 -; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, ma -; CHECK-NEXT: vmv.v.x v12, a1 -; CHECK-NEXT: vfwcvtbf16.f.f.v v10, v8, v0.t -; CHECK-NEXT: vfwcvtbf16.f.f.v v8, v12, v0.t -; CHECK-NEXT: vsetvli zero, zero, e32, m2, ta, ma -; CHECK-NEXT: vfadd.vv v10, v10, v8, v0.t -; CHECK-NEXT: vsetvli zero, zero, e16, m1, 
ta, ma -; CHECK-NEXT: vfncvtbf16.f.f.w v8, v10, v0.t -; CHECK-NEXT: ret +; ZVFH-LABEL: vfadd_vf_nxv4bf16: +; ZVFH: # %bb.0: +; ZVFH-NEXT: fmv.x.h a1, fa0 +; ZVFH-NEXT: vsetvli zero, a0, e16, m1, ta, ma +; ZVFH-NEXT: vmv.v.x v12, a1 +; ZVFH-NEXT: vfwcvtbf16.f.f.v v10, v8, v0.t +; ZVFH-NEXT: vfwcvtbf16.f.f.v v8, v12, v0.t +; ZVFH-NEXT: vsetvli zero, zero, e32, m2, ta, ma +; ZVFH-NEXT: vfadd.vv v10, v10, v8, v0.t +; ZVFH-NEXT: vsetvli zero, zero, e16, m1, ta, ma +; ZVFH-NEXT: vfncvtbf16.f.f.w v8, v10, v0.t +; ZVFH-NEXT: ret +; +; ZVFHMIN-LABEL: vfadd_vf_nxv4bf16: +; ZVFHMIN: # %bb.0: +; ZVFHMIN-NEXT: fmv.x.h a1, fa0 +; ZVFHMIN-NEXT: vsetvli zero, a0, e16, m1, ta, ma +; ZVFHMIN-NEXT: vmv.v.x v12, a1 +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v10, v8, v0.t +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v8, v12, v0.t +; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m2, ta, ma +; ZVFHMIN-NEXT: vfadd.vv v10, v10, v8, v0.t +; ZVFHMIN-NEXT: vsetvli zero, zero, e16, m1, ta, ma +; ZVFHMIN-NEXT: vfncvtbf16.f.f.w v8, v10, v0.t +; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vf_nxv4bf16: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: fmv.x.h a1, fa0 +; ZVFBFA-NEXT: vsetvli zero, a0, e16alt, m1, ta, ma +; ZVFBFA-NEXT: vmv.v.x v12, a1 +; ZVFBFA-NEXT: vfwcvt.f.f.v v10, v8, v0.t +; ZVFBFA-NEXT: vfwcvt.f.f.v v8, v12, v0.t +; ZVFBFA-NEXT: vsetvli zero, zero, e32, m2, ta, ma +; ZVFBFA-NEXT: vfadd.vv v10, v10, v8, v0.t +; ZVFBFA-NEXT: vsetvli zero, zero, e16alt, m1, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v8, v10, v0.t +; ZVFBFA-NEXT: ret %elt.head = insertelement <vscale x 4 x bfloat> poison, bfloat %b, i32 0 %vb = shufflevector <vscale x 4 x bfloat> %elt.head, <vscale x 4 x bfloat> poison, <vscale x 4 x i32> zeroinitializer %v = call <vscale x 4 x bfloat> @llvm.vp.fadd.nxv4bf16(<vscale x 4 x bfloat> %va, <vscale x 4 x bfloat> %vb, <vscale x 4 x i1> %m, i32 %evl) @@ -242,18 +559,44 @@ define <vscale x 4 x bfloat> @vfadd_vf_nxv4bf16(<vscale x 4 x bfloat> %va, bfloa } define <vscale x 4 x bfloat> @vfadd_vf_nxv4bf16_unmasked(<vscale x 4 x bfloat> %va, bfloat %b, i32 zeroext %evl) { -; CHECK-LABEL: vfadd_vf_nxv4bf16_unmasked: -; CHECK: # %bb.0: -; CHECK-NEXT: fmv.x.h a1, fa0 -; CHECK-NEXT: vsetvli zero, a0, e16, m1, ta, ma -; CHECK-NEXT: vmv.v.x v12, a1 -; CHECK-NEXT: vfwcvtbf16.f.f.v v10, v8 -; CHECK-NEXT: vfwcvtbf16.f.f.v v8, v12 -; CHECK-NEXT: vsetvli zero, zero, e32, m2, ta, ma -; CHECK-NEXT: vfadd.vv v10, v10, v8 -; CHECK-NEXT: vsetvli zero, zero, e16, m1, ta, ma -; CHECK-NEXT: vfncvtbf16.f.f.w v8, v10 -; CHECK-NEXT: ret +; ZVFH-LABEL: vfadd_vf_nxv4bf16_unmasked: +; ZVFH: # %bb.0: +; ZVFH-NEXT: fmv.x.h a1, fa0 +; ZVFH-NEXT: vsetvli zero, a0, e16, m1, ta, ma +; ZVFH-NEXT: vmv.v.x v12, a1 +; ZVFH-NEXT: vfwcvtbf16.f.f.v v10, v8 +; ZVFH-NEXT: vfwcvtbf16.f.f.v v8, v12 +; ZVFH-NEXT: vsetvli zero, zero, e32, m2, ta, ma +; ZVFH-NEXT: vfadd.vv v10, v10, v8 +; ZVFH-NEXT: vsetvli zero, zero, e16, m1, ta, ma +; ZVFH-NEXT: vfncvtbf16.f.f.w v8, v10 +; ZVFH-NEXT: ret +; +; ZVFHMIN-LABEL: vfadd_vf_nxv4bf16_unmasked: +; ZVFHMIN: # %bb.0: +; ZVFHMIN-NEXT: fmv.x.h a1, fa0 +; ZVFHMIN-NEXT: vsetvli zero, a0, e16, m1, ta, ma +; ZVFHMIN-NEXT: vmv.v.x v12, a1 +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v10, v8 +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v8, v12 +; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m2, ta, ma +; ZVFHMIN-NEXT: vfadd.vv v10, v10, v8 +; ZVFHMIN-NEXT: vsetvli zero, zero, e16, m1, ta, ma +; ZVFHMIN-NEXT: vfncvtbf16.f.f.w v8, v10 +; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vf_nxv4bf16_unmasked: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: fmv.x.h a1, fa0 +; ZVFBFA-NEXT: vsetvli zero, a0, 
e16alt, m1, ta, ma +; ZVFBFA-NEXT: vmv.v.x v12, a1 +; ZVFBFA-NEXT: vfwcvt.f.f.v v10, v8 +; ZVFBFA-NEXT: vfwcvt.f.f.v v8, v12 +; ZVFBFA-NEXT: vsetvli zero, zero, e32, m2, ta, ma +; ZVFBFA-NEXT: vfadd.vv v10, v10, v8 +; ZVFBFA-NEXT: vsetvli zero, zero, e16alt, m1, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v8, v10 +; ZVFBFA-NEXT: ret %elt.head = insertelement <vscale x 4 x bfloat> poison, bfloat %b, i32 0 %vb = shufflevector <vscale x 4 x bfloat> %elt.head, <vscale x 4 x bfloat> poison, <vscale x 4 x i32> zeroinitializer %v = call <vscale x 4 x bfloat> @llvm.vp.fadd.nxv4bf16(<vscale x 4 x bfloat> %va, <vscale x 4 x bfloat> %vb, <vscale x 4 x i1> splat (i1 true), i32 %evl) @@ -263,48 +606,118 @@ define <vscale x 4 x bfloat> @vfadd_vf_nxv4bf16_unmasked(<vscale x 4 x bfloat> % declare <vscale x 8 x bfloat> @llvm.vp.fadd.nxv8bf16(<vscale x 8 x bfloat>, <vscale x 8 x bfloat>, <vscale x 8 x i1>, i32) define <vscale x 8 x bfloat> @vfadd_vv_nxv8bf16(<vscale x 8 x bfloat> %va, <vscale x 8 x bfloat> %b, <vscale x 8 x i1> %m, i32 zeroext %evl) { -; CHECK-LABEL: vfadd_vv_nxv8bf16: -; CHECK: # %bb.0: -; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, ma -; CHECK-NEXT: vfwcvtbf16.f.f.v v12, v10, v0.t -; CHECK-NEXT: vfwcvtbf16.f.f.v v16, v8, v0.t -; CHECK-NEXT: vsetvli zero, zero, e32, m4, ta, ma -; CHECK-NEXT: vfadd.vv v12, v16, v12, v0.t -; CHECK-NEXT: vsetvli zero, zero, e16, m2, ta, ma -; CHECK-NEXT: vfncvtbf16.f.f.w v8, v12, v0.t -; CHECK-NEXT: ret +; ZVFH-LABEL: vfadd_vv_nxv8bf16: +; ZVFH: # %bb.0: +; ZVFH-NEXT: vsetvli zero, a0, e16, m2, ta, ma +; ZVFH-NEXT: vfwcvtbf16.f.f.v v12, v10, v0.t +; ZVFH-NEXT: vfwcvtbf16.f.f.v v16, v8, v0.t +; ZVFH-NEXT: vsetvli zero, zero, e32, m4, ta, ma +; ZVFH-NEXT: vfadd.vv v12, v16, v12, v0.t +; ZVFH-NEXT: vsetvli zero, zero, e16, m2, ta, ma +; ZVFH-NEXT: vfncvtbf16.f.f.w v8, v12, v0.t +; ZVFH-NEXT: ret +; +; ZVFHMIN-LABEL: vfadd_vv_nxv8bf16: +; ZVFHMIN: # %bb.0: +; ZVFHMIN-NEXT: vsetvli zero, a0, e16, m2, ta, ma +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v12, v10, v0.t +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v16, v8, v0.t +; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m4, ta, ma +; ZVFHMIN-NEXT: vfadd.vv v12, v16, v12, v0.t +; ZVFHMIN-NEXT: vsetvli zero, zero, e16, m2, ta, ma +; ZVFHMIN-NEXT: vfncvtbf16.f.f.w v8, v12, v0.t +; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vv_nxv8bf16: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: vsetvli zero, a0, e16alt, m2, ta, ma +; ZVFBFA-NEXT: vfwcvt.f.f.v v12, v10, v0.t +; ZVFBFA-NEXT: vfwcvt.f.f.v v16, v8, v0.t +; ZVFBFA-NEXT: vsetvli zero, zero, e32, m4, ta, ma +; ZVFBFA-NEXT: vfadd.vv v12, v16, v12, v0.t +; ZVFBFA-NEXT: vsetvli zero, zero, e16alt, m2, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v8, v12, v0.t +; ZVFBFA-NEXT: ret %v = call <vscale x 8 x bfloat> @llvm.vp.fadd.nxv8bf16(<vscale x 8 x bfloat> %va, <vscale x 8 x bfloat> %b, <vscale x 8 x i1> %m, i32 %evl) ret <vscale x 8 x bfloat> %v } define <vscale x 8 x bfloat> @vfadd_vv_nxv8bf16_unmasked(<vscale x 8 x bfloat> %va, <vscale x 8 x bfloat> %b, i32 zeroext %evl) { -; CHECK-LABEL: vfadd_vv_nxv8bf16_unmasked: -; CHECK: # %bb.0: -; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, ma -; CHECK-NEXT: vfwcvtbf16.f.f.v v12, v10 -; CHECK-NEXT: vfwcvtbf16.f.f.v v16, v8 -; CHECK-NEXT: vsetvli zero, zero, e32, m4, ta, ma -; CHECK-NEXT: vfadd.vv v12, v16, v12 -; CHECK-NEXT: vsetvli zero, zero, e16, m2, ta, ma -; CHECK-NEXT: vfncvtbf16.f.f.w v8, v12 -; CHECK-NEXT: ret +; ZVFH-LABEL: vfadd_vv_nxv8bf16_unmasked: +; ZVFH: # %bb.0: +; ZVFH-NEXT: vsetvli zero, a0, e16, m2, ta, ma +; ZVFH-NEXT: vfwcvtbf16.f.f.v v12, v10 +; ZVFH-NEXT: 
vfwcvtbf16.f.f.v v16, v8 +; ZVFH-NEXT: vsetvli zero, zero, e32, m4, ta, ma +; ZVFH-NEXT: vfadd.vv v12, v16, v12 +; ZVFH-NEXT: vsetvli zero, zero, e16, m2, ta, ma +; ZVFH-NEXT: vfncvtbf16.f.f.w v8, v12 +; ZVFH-NEXT: ret +; +; ZVFHMIN-LABEL: vfadd_vv_nxv8bf16_unmasked: +; ZVFHMIN: # %bb.0: +; ZVFHMIN-NEXT: vsetvli zero, a0, e16, m2, ta, ma +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v12, v10 +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v16, v8 +; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m4, ta, ma +; ZVFHMIN-NEXT: vfadd.vv v12, v16, v12 +; ZVFHMIN-NEXT: vsetvli zero, zero, e16, m2, ta, ma +; ZVFHMIN-NEXT: vfncvtbf16.f.f.w v8, v12 +; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vv_nxv8bf16_unmasked: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: vsetvli zero, a0, e16alt, m2, ta, ma +; ZVFBFA-NEXT: vfwcvt.f.f.v v12, v10 +; ZVFBFA-NEXT: vfwcvt.f.f.v v16, v8 +; ZVFBFA-NEXT: vsetvli zero, zero, e32, m4, ta, ma +; ZVFBFA-NEXT: vfadd.vv v12, v16, v12 +; ZVFBFA-NEXT: vsetvli zero, zero, e16alt, m2, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v8, v12 +; ZVFBFA-NEXT: ret %v = call <vscale x 8 x bfloat> @llvm.vp.fadd.nxv8bf16(<vscale x 8 x bfloat> %va, <vscale x 8 x bfloat> %b, <vscale x 8 x i1> splat (i1 true), i32 %evl) ret <vscale x 8 x bfloat> %v } define <vscale x 8 x bfloat> @vfadd_vf_nxv8bf16(<vscale x 8 x bfloat> %va, bfloat %b, <vscale x 8 x i1> %m, i32 zeroext %evl) { -; CHECK-LABEL: vfadd_vf_nxv8bf16: -; CHECK: # %bb.0: -; CHECK-NEXT: fmv.x.h a1, fa0 -; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, ma -; CHECK-NEXT: vmv.v.x v16, a1 -; CHECK-NEXT: vfwcvtbf16.f.f.v v12, v8, v0.t -; CHECK-NEXT: vfwcvtbf16.f.f.v v8, v16, v0.t -; CHECK-NEXT: vsetvli zero, zero, e32, m4, ta, ma -; CHECK-NEXT: vfadd.vv v12, v12, v8, v0.t -; CHECK-NEXT: vsetvli zero, zero, e16, m2, ta, ma -; CHECK-NEXT: vfncvtbf16.f.f.w v8, v12, v0.t -; CHECK-NEXT: ret +; ZVFH-LABEL: vfadd_vf_nxv8bf16: +; ZVFH: # %bb.0: +; ZVFH-NEXT: fmv.x.h a1, fa0 +; ZVFH-NEXT: vsetvli zero, a0, e16, m2, ta, ma +; ZVFH-NEXT: vmv.v.x v16, a1 +; ZVFH-NEXT: vfwcvtbf16.f.f.v v12, v8, v0.t +; ZVFH-NEXT: vfwcvtbf16.f.f.v v8, v16, v0.t +; ZVFH-NEXT: vsetvli zero, zero, e32, m4, ta, ma +; ZVFH-NEXT: vfadd.vv v12, v12, v8, v0.t +; ZVFH-NEXT: vsetvli zero, zero, e16, m2, ta, ma +; ZVFH-NEXT: vfncvtbf16.f.f.w v8, v12, v0.t +; ZVFH-NEXT: ret +; +; ZVFHMIN-LABEL: vfadd_vf_nxv8bf16: +; ZVFHMIN: # %bb.0: +; ZVFHMIN-NEXT: fmv.x.h a1, fa0 +; ZVFHMIN-NEXT: vsetvli zero, a0, e16, m2, ta, ma +; ZVFHMIN-NEXT: vmv.v.x v16, a1 +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v12, v8, v0.t +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v8, v16, v0.t +; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m4, ta, ma +; ZVFHMIN-NEXT: vfadd.vv v12, v12, v8, v0.t +; ZVFHMIN-NEXT: vsetvli zero, zero, e16, m2, ta, ma +; ZVFHMIN-NEXT: vfncvtbf16.f.f.w v8, v12, v0.t +; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vf_nxv8bf16: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: fmv.x.h a1, fa0 +; ZVFBFA-NEXT: vsetvli zero, a0, e16alt, m2, ta, ma +; ZVFBFA-NEXT: vmv.v.x v16, a1 +; ZVFBFA-NEXT: vfwcvt.f.f.v v12, v8, v0.t +; ZVFBFA-NEXT: vfwcvt.f.f.v v8, v16, v0.t +; ZVFBFA-NEXT: vsetvli zero, zero, e32, m4, ta, ma +; ZVFBFA-NEXT: vfadd.vv v12, v12, v8, v0.t +; ZVFBFA-NEXT: vsetvli zero, zero, e16alt, m2, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v8, v12, v0.t +; ZVFBFA-NEXT: ret %elt.head = insertelement <vscale x 8 x bfloat> poison, bfloat %b, i32 0 %vb = shufflevector <vscale x 8 x bfloat> %elt.head, <vscale x 8 x bfloat> poison, <vscale x 8 x i32> zeroinitializer %v = call <vscale x 8 x bfloat> @llvm.vp.fadd.nxv8bf16(<vscale x 8 x bfloat> %va, <vscale x 8 x bfloat> %vb, <vscale x 8 x i1> 
%m, i32 %evl) @@ -312,18 +725,44 @@ define <vscale x 8 x bfloat> @vfadd_vf_nxv8bf16(<vscale x 8 x bfloat> %va, bfloa } define <vscale x 8 x bfloat> @vfadd_vf_nxv8bf16_unmasked(<vscale x 8 x bfloat> %va, bfloat %b, i32 zeroext %evl) { -; CHECK-LABEL: vfadd_vf_nxv8bf16_unmasked: -; CHECK: # %bb.0: -; CHECK-NEXT: fmv.x.h a1, fa0 -; CHECK-NEXT: vsetvli zero, a0, e16, m2, ta, ma -; CHECK-NEXT: vmv.v.x v16, a1 -; CHECK-NEXT: vfwcvtbf16.f.f.v v12, v8 -; CHECK-NEXT: vfwcvtbf16.f.f.v v8, v16 -; CHECK-NEXT: vsetvli zero, zero, e32, m4, ta, ma -; CHECK-NEXT: vfadd.vv v12, v12, v8 -; CHECK-NEXT: vsetvli zero, zero, e16, m2, ta, ma -; CHECK-NEXT: vfncvtbf16.f.f.w v8, v12 -; CHECK-NEXT: ret +; ZVFH-LABEL: vfadd_vf_nxv8bf16_unmasked: +; ZVFH: # %bb.0: +; ZVFH-NEXT: fmv.x.h a1, fa0 +; ZVFH-NEXT: vsetvli zero, a0, e16, m2, ta, ma +; ZVFH-NEXT: vmv.v.x v16, a1 +; ZVFH-NEXT: vfwcvtbf16.f.f.v v12, v8 +; ZVFH-NEXT: vfwcvtbf16.f.f.v v8, v16 +; ZVFH-NEXT: vsetvli zero, zero, e32, m4, ta, ma +; ZVFH-NEXT: vfadd.vv v12, v12, v8 +; ZVFH-NEXT: vsetvli zero, zero, e16, m2, ta, ma +; ZVFH-NEXT: vfncvtbf16.f.f.w v8, v12 +; ZVFH-NEXT: ret +; +; ZVFHMIN-LABEL: vfadd_vf_nxv8bf16_unmasked: +; ZVFHMIN: # %bb.0: +; ZVFHMIN-NEXT: fmv.x.h a1, fa0 +; ZVFHMIN-NEXT: vsetvli zero, a0, e16, m2, ta, ma +; ZVFHMIN-NEXT: vmv.v.x v16, a1 +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v12, v8 +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v8, v16 +; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m4, ta, ma +; ZVFHMIN-NEXT: vfadd.vv v12, v12, v8 +; ZVFHMIN-NEXT: vsetvli zero, zero, e16, m2, ta, ma +; ZVFHMIN-NEXT: vfncvtbf16.f.f.w v8, v12 +; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vf_nxv8bf16_unmasked: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: fmv.x.h a1, fa0 +; ZVFBFA-NEXT: vsetvli zero, a0, e16alt, m2, ta, ma +; ZVFBFA-NEXT: vmv.v.x v16, a1 +; ZVFBFA-NEXT: vfwcvt.f.f.v v12, v8 +; ZVFBFA-NEXT: vfwcvt.f.f.v v8, v16 +; ZVFBFA-NEXT: vsetvli zero, zero, e32, m4, ta, ma +; ZVFBFA-NEXT: vfadd.vv v12, v12, v8 +; ZVFBFA-NEXT: vsetvli zero, zero, e16alt, m2, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v8, v12 +; ZVFBFA-NEXT: ret %elt.head = insertelement <vscale x 8 x bfloat> poison, bfloat %b, i32 0 %vb = shufflevector <vscale x 8 x bfloat> %elt.head, <vscale x 8 x bfloat> poison, <vscale x 8 x i32> zeroinitializer %v = call <vscale x 8 x bfloat> @llvm.vp.fadd.nxv8bf16(<vscale x 8 x bfloat> %va, <vscale x 8 x bfloat> %vb, <vscale x 8 x i1> splat (i1 true), i32 %evl) @@ -333,48 +772,118 @@ define <vscale x 8 x bfloat> @vfadd_vf_nxv8bf16_unmasked(<vscale x 8 x bfloat> % declare <vscale x 16 x bfloat> @llvm.vp.fadd.nxv16bf16(<vscale x 16 x bfloat>, <vscale x 16 x bfloat>, <vscale x 16 x i1>, i32) define <vscale x 16 x bfloat> @vfadd_vv_nxv16bf16(<vscale x 16 x bfloat> %va, <vscale x 16 x bfloat> %b, <vscale x 16 x i1> %m, i32 zeroext %evl) { -; CHECK-LABEL: vfadd_vv_nxv16bf16: -; CHECK: # %bb.0: -; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, ma -; CHECK-NEXT: vfwcvtbf16.f.f.v v16, v12, v0.t -; CHECK-NEXT: vfwcvtbf16.f.f.v v24, v8, v0.t -; CHECK-NEXT: vsetvli zero, zero, e32, m8, ta, ma -; CHECK-NEXT: vfadd.vv v16, v24, v16, v0.t -; CHECK-NEXT: vsetvli zero, zero, e16, m4, ta, ma -; CHECK-NEXT: vfncvtbf16.f.f.w v8, v16, v0.t -; CHECK-NEXT: ret +; ZVFH-LABEL: vfadd_vv_nxv16bf16: +; ZVFH: # %bb.0: +; ZVFH-NEXT: vsetvli zero, a0, e16, m4, ta, ma +; ZVFH-NEXT: vfwcvtbf16.f.f.v v16, v12, v0.t +; ZVFH-NEXT: vfwcvtbf16.f.f.v v24, v8, v0.t +; ZVFH-NEXT: vsetvli zero, zero, e32, m8, ta, ma +; ZVFH-NEXT: vfadd.vv v16, v24, v16, v0.t +; ZVFH-NEXT: vsetvli zero, zero, e16, m4, ta, ma +; ZVFH-NEXT: 
vfncvtbf16.f.f.w v8, v16, v0.t +; ZVFH-NEXT: ret +; +; ZVFHMIN-LABEL: vfadd_vv_nxv16bf16: +; ZVFHMIN: # %bb.0: +; ZVFHMIN-NEXT: vsetvli zero, a0, e16, m4, ta, ma +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v16, v12, v0.t +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v24, v8, v0.t +; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m8, ta, ma +; ZVFHMIN-NEXT: vfadd.vv v16, v24, v16, v0.t +; ZVFHMIN-NEXT: vsetvli zero, zero, e16, m4, ta, ma +; ZVFHMIN-NEXT: vfncvtbf16.f.f.w v8, v16, v0.t +; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vv_nxv16bf16: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: vsetvli zero, a0, e16alt, m4, ta, ma +; ZVFBFA-NEXT: vfwcvt.f.f.v v16, v12, v0.t +; ZVFBFA-NEXT: vfwcvt.f.f.v v24, v8, v0.t +; ZVFBFA-NEXT: vsetvli zero, zero, e32, m8, ta, ma +; ZVFBFA-NEXT: vfadd.vv v16, v24, v16, v0.t +; ZVFBFA-NEXT: vsetvli zero, zero, e16alt, m4, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v8, v16, v0.t +; ZVFBFA-NEXT: ret %v = call <vscale x 16 x bfloat> @llvm.vp.fadd.nxv16bf16(<vscale x 16 x bfloat> %va, <vscale x 16 x bfloat> %b, <vscale x 16 x i1> %m, i32 %evl) ret <vscale x 16 x bfloat> %v } define <vscale x 16 x bfloat> @vfadd_vv_nxv16bf16_unmasked(<vscale x 16 x bfloat> %va, <vscale x 16 x bfloat> %b, i32 zeroext %evl) { -; CHECK-LABEL: vfadd_vv_nxv16bf16_unmasked: -; CHECK: # %bb.0: -; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, ma -; CHECK-NEXT: vfwcvtbf16.f.f.v v16, v12 -; CHECK-NEXT: vfwcvtbf16.f.f.v v24, v8 -; CHECK-NEXT: vsetvli zero, zero, e32, m8, ta, ma -; CHECK-NEXT: vfadd.vv v16, v24, v16 -; CHECK-NEXT: vsetvli zero, zero, e16, m4, ta, ma -; CHECK-NEXT: vfncvtbf16.f.f.w v8, v16 -; CHECK-NEXT: ret +; ZVFH-LABEL: vfadd_vv_nxv16bf16_unmasked: +; ZVFH: # %bb.0: +; ZVFH-NEXT: vsetvli zero, a0, e16, m4, ta, ma +; ZVFH-NEXT: vfwcvtbf16.f.f.v v16, v12 +; ZVFH-NEXT: vfwcvtbf16.f.f.v v24, v8 +; ZVFH-NEXT: vsetvli zero, zero, e32, m8, ta, ma +; ZVFH-NEXT: vfadd.vv v16, v24, v16 +; ZVFH-NEXT: vsetvli zero, zero, e16, m4, ta, ma +; ZVFH-NEXT: vfncvtbf16.f.f.w v8, v16 +; ZVFH-NEXT: ret +; +; ZVFHMIN-LABEL: vfadd_vv_nxv16bf16_unmasked: +; ZVFHMIN: # %bb.0: +; ZVFHMIN-NEXT: vsetvli zero, a0, e16, m4, ta, ma +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v16, v12 +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v24, v8 +; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m8, ta, ma +; ZVFHMIN-NEXT: vfadd.vv v16, v24, v16 +; ZVFHMIN-NEXT: vsetvli zero, zero, e16, m4, ta, ma +; ZVFHMIN-NEXT: vfncvtbf16.f.f.w v8, v16 +; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vv_nxv16bf16_unmasked: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: vsetvli zero, a0, e16alt, m4, ta, ma +; ZVFBFA-NEXT: vfwcvt.f.f.v v16, v12 +; ZVFBFA-NEXT: vfwcvt.f.f.v v24, v8 +; ZVFBFA-NEXT: vsetvli zero, zero, e32, m8, ta, ma +; ZVFBFA-NEXT: vfadd.vv v16, v24, v16 +; ZVFBFA-NEXT: vsetvli zero, zero, e16alt, m4, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v8, v16 +; ZVFBFA-NEXT: ret %v = call <vscale x 16 x bfloat> @llvm.vp.fadd.nxv16bf16(<vscale x 16 x bfloat> %va, <vscale x 16 x bfloat> %b, <vscale x 16 x i1> splat (i1 true), i32 %evl) ret <vscale x 16 x bfloat> %v } define <vscale x 16 x bfloat> @vfadd_vf_nxv16bf16(<vscale x 16 x bfloat> %va, bfloat %b, <vscale x 16 x i1> %m, i32 zeroext %evl) { -; CHECK-LABEL: vfadd_vf_nxv16bf16: -; CHECK: # %bb.0: -; CHECK-NEXT: fmv.x.h a1, fa0 -; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, ma -; CHECK-NEXT: vmv.v.x v24, a1 -; CHECK-NEXT: vfwcvtbf16.f.f.v v16, v8, v0.t -; CHECK-NEXT: vfwcvtbf16.f.f.v v8, v24, v0.t -; CHECK-NEXT: vsetvli zero, zero, e32, m8, ta, ma -; CHECK-NEXT: vfadd.vv v16, v16, v8, v0.t -; CHECK-NEXT: vsetvli zero, zero, e16, m4, ta, ma -; CHECK-NEXT: vfncvtbf16.f.f.w 
v8, v16, v0.t -; CHECK-NEXT: ret +; ZVFH-LABEL: vfadd_vf_nxv16bf16: +; ZVFH: # %bb.0: +; ZVFH-NEXT: fmv.x.h a1, fa0 +; ZVFH-NEXT: vsetvli zero, a0, e16, m4, ta, ma +; ZVFH-NEXT: vmv.v.x v24, a1 +; ZVFH-NEXT: vfwcvtbf16.f.f.v v16, v8, v0.t +; ZVFH-NEXT: vfwcvtbf16.f.f.v v8, v24, v0.t +; ZVFH-NEXT: vsetvli zero, zero, e32, m8, ta, ma +; ZVFH-NEXT: vfadd.vv v16, v16, v8, v0.t +; ZVFH-NEXT: vsetvli zero, zero, e16, m4, ta, ma +; ZVFH-NEXT: vfncvtbf16.f.f.w v8, v16, v0.t +; ZVFH-NEXT: ret +; +; ZVFHMIN-LABEL: vfadd_vf_nxv16bf16: +; ZVFHMIN: # %bb.0: +; ZVFHMIN-NEXT: fmv.x.h a1, fa0 +; ZVFHMIN-NEXT: vsetvli zero, a0, e16, m4, ta, ma +; ZVFHMIN-NEXT: vmv.v.x v24, a1 +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v16, v8, v0.t +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v8, v24, v0.t +; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m8, ta, ma +; ZVFHMIN-NEXT: vfadd.vv v16, v16, v8, v0.t +; ZVFHMIN-NEXT: vsetvli zero, zero, e16, m4, ta, ma +; ZVFHMIN-NEXT: vfncvtbf16.f.f.w v8, v16, v0.t +; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vf_nxv16bf16: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: fmv.x.h a1, fa0 +; ZVFBFA-NEXT: vsetvli zero, a0, e16alt, m4, ta, ma +; ZVFBFA-NEXT: vmv.v.x v24, a1 +; ZVFBFA-NEXT: vfwcvt.f.f.v v16, v8, v0.t +; ZVFBFA-NEXT: vfwcvt.f.f.v v8, v24, v0.t +; ZVFBFA-NEXT: vsetvli zero, zero, e32, m8, ta, ma +; ZVFBFA-NEXT: vfadd.vv v16, v16, v8, v0.t +; ZVFBFA-NEXT: vsetvli zero, zero, e16alt, m4, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v8, v16, v0.t +; ZVFBFA-NEXT: ret %elt.head = insertelement <vscale x 16 x bfloat> poison, bfloat %b, i32 0 %vb = shufflevector <vscale x 16 x bfloat> %elt.head, <vscale x 16 x bfloat> poison, <vscale x 16 x i32> zeroinitializer %v = call <vscale x 16 x bfloat> @llvm.vp.fadd.nxv16bf16(<vscale x 16 x bfloat> %va, <vscale x 16 x bfloat> %vb, <vscale x 16 x i1> %m, i32 %evl) @@ -382,18 +891,44 @@ define <vscale x 16 x bfloat> @vfadd_vf_nxv16bf16(<vscale x 16 x bfloat> %va, bf } define <vscale x 16 x bfloat> @vfadd_vf_nxv16bf16_unmasked(<vscale x 16 x bfloat> %va, bfloat %b, i32 zeroext %evl) { -; CHECK-LABEL: vfadd_vf_nxv16bf16_unmasked: -; CHECK: # %bb.0: -; CHECK-NEXT: fmv.x.h a1, fa0 -; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, ma -; CHECK-NEXT: vmv.v.x v24, a1 -; CHECK-NEXT: vfwcvtbf16.f.f.v v16, v8 -; CHECK-NEXT: vfwcvtbf16.f.f.v v8, v24 -; CHECK-NEXT: vsetvli zero, zero, e32, m8, ta, ma -; CHECK-NEXT: vfadd.vv v16, v16, v8 -; CHECK-NEXT: vsetvli zero, zero, e16, m4, ta, ma -; CHECK-NEXT: vfncvtbf16.f.f.w v8, v16 -; CHECK-NEXT: ret +; ZVFH-LABEL: vfadd_vf_nxv16bf16_unmasked: +; ZVFH: # %bb.0: +; ZVFH-NEXT: fmv.x.h a1, fa0 +; ZVFH-NEXT: vsetvli zero, a0, e16, m4, ta, ma +; ZVFH-NEXT: vmv.v.x v24, a1 +; ZVFH-NEXT: vfwcvtbf16.f.f.v v16, v8 +; ZVFH-NEXT: vfwcvtbf16.f.f.v v8, v24 +; ZVFH-NEXT: vsetvli zero, zero, e32, m8, ta, ma +; ZVFH-NEXT: vfadd.vv v16, v16, v8 +; ZVFH-NEXT: vsetvli zero, zero, e16, m4, ta, ma +; ZVFH-NEXT: vfncvtbf16.f.f.w v8, v16 +; ZVFH-NEXT: ret +; +; ZVFHMIN-LABEL: vfadd_vf_nxv16bf16_unmasked: +; ZVFHMIN: # %bb.0: +; ZVFHMIN-NEXT: fmv.x.h a1, fa0 +; ZVFHMIN-NEXT: vsetvli zero, a0, e16, m4, ta, ma +; ZVFHMIN-NEXT: vmv.v.x v24, a1 +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v16, v8 +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v8, v24 +; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m8, ta, ma +; ZVFHMIN-NEXT: vfadd.vv v16, v16, v8 +; ZVFHMIN-NEXT: vsetvli zero, zero, e16, m4, ta, ma +; ZVFHMIN-NEXT: vfncvtbf16.f.f.w v8, v16 +; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vf_nxv16bf16_unmasked: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: fmv.x.h a1, fa0 +; ZVFBFA-NEXT: vsetvli zero, a0, e16alt, m4, ta, ma 
+; ZVFBFA-NEXT: vmv.v.x v24, a1 +; ZVFBFA-NEXT: vfwcvt.f.f.v v16, v8 +; ZVFBFA-NEXT: vfwcvt.f.f.v v8, v24 +; ZVFBFA-NEXT: vsetvli zero, zero, e32, m8, ta, ma +; ZVFBFA-NEXT: vfadd.vv v16, v16, v8 +; ZVFBFA-NEXT: vsetvli zero, zero, e16alt, m4, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v8, v16 +; ZVFBFA-NEXT: ret %elt.head = insertelement <vscale x 16 x bfloat> poison, bfloat %b, i32 0 %vb = shufflevector <vscale x 16 x bfloat> %elt.head, <vscale x 16 x bfloat> poison, <vscale x 16 x i32> zeroinitializer %v = call <vscale x 16 x bfloat> @llvm.vp.fadd.nxv16bf16(<vscale x 16 x bfloat> %va, <vscale x 16 x bfloat> %vb, <vscale x 16 x i1> splat (i1 true), i32 %evl) @@ -403,173 +938,493 @@ define <vscale x 16 x bfloat> @vfadd_vf_nxv16bf16_unmasked(<vscale x 16 x bfloat declare <vscale x 32 x bfloat> @llvm.vp.fadd.nxv32bf16(<vscale x 32 x bfloat>, <vscale x 32 x bfloat>, <vscale x 32 x i1>, i32) define <vscale x 32 x bfloat> @vfadd_vv_nxv32bf16(<vscale x 32 x bfloat> %va, <vscale x 32 x bfloat> %b, <vscale x 32 x i1> %m, i32 zeroext %evl) { -; CHECK-LABEL: vfadd_vv_nxv32bf16: -; CHECK: # %bb.0: -; CHECK-NEXT: addi sp, sp, -16 -; CHECK-NEXT: .cfi_def_cfa_offset 16 -; CHECK-NEXT: csrr a1, vlenb -; CHECK-NEXT: slli a1, a1, 3 -; CHECK-NEXT: sub sp, sp, a1 -; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb -; CHECK-NEXT: vsetvli a1, zero, e8, mf2, ta, ma -; CHECK-NEXT: vmv1r.v v7, v0 -; CHECK-NEXT: csrr a2, vlenb -; CHECK-NEXT: slli a1, a2, 1 -; CHECK-NEXT: srli a2, a2, 2 -; CHECK-NEXT: sub a3, a0, a1 -; CHECK-NEXT: vslidedown.vx v0, v0, a2 -; CHECK-NEXT: sltu a2, a0, a3 -; CHECK-NEXT: addi a2, a2, -1 -; CHECK-NEXT: and a2, a2, a3 -; CHECK-NEXT: addi a3, sp, 16 -; CHECK-NEXT: vs8r.v v16, (a3) # vscale x 64-byte Folded Spill -; CHECK-NEXT: vsetvli zero, a2, e16, m4, ta, ma -; CHECK-NEXT: vfwcvtbf16.f.f.v v24, v20, v0.t -; CHECK-NEXT: vfwcvtbf16.f.f.v v16, v12, v0.t -; CHECK-NEXT: vsetvli zero, zero, e32, m8, ta, ma -; CHECK-NEXT: vfadd.vv v16, v16, v24, v0.t -; CHECK-NEXT: vsetvli zero, zero, e16, m4, ta, ma -; CHECK-NEXT: vfncvtbf16.f.f.w v12, v16, v0.t -; CHECK-NEXT: bltu a0, a1, .LBB22_2 -; CHECK-NEXT: # %bb.1: -; CHECK-NEXT: mv a0, a1 -; CHECK-NEXT: .LBB22_2: -; CHECK-NEXT: vmv1r.v v0, v7 -; CHECK-NEXT: addi a1, sp, 16 -; CHECK-NEXT: vl8r.v v24, (a1) # vscale x 64-byte Folded Reload -; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, ma -; CHECK-NEXT: vfwcvtbf16.f.f.v v16, v24, v0.t -; CHECK-NEXT: vfwcvtbf16.f.f.v v24, v8, v0.t -; CHECK-NEXT: vsetvli zero, zero, e32, m8, ta, ma -; CHECK-NEXT: vfadd.vv v16, v24, v16, v0.t -; CHECK-NEXT: vsetvli zero, zero, e16, m4, ta, ma -; CHECK-NEXT: vfncvtbf16.f.f.w v8, v16, v0.t -; CHECK-NEXT: csrr a0, vlenb -; CHECK-NEXT: slli a0, a0, 3 -; CHECK-NEXT: add sp, sp, a0 -; CHECK-NEXT: .cfi_def_cfa sp, 16 -; CHECK-NEXT: addi sp, sp, 16 -; CHECK-NEXT: .cfi_def_cfa_offset 0 -; CHECK-NEXT: ret +; ZVFH-LABEL: vfadd_vv_nxv32bf16: +; ZVFH: # %bb.0: +; ZVFH-NEXT: addi sp, sp, -16 +; ZVFH-NEXT: .cfi_def_cfa_offset 16 +; ZVFH-NEXT: csrr a1, vlenb +; ZVFH-NEXT: slli a1, a1, 3 +; ZVFH-NEXT: sub sp, sp, a1 +; ZVFH-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb +; ZVFH-NEXT: vsetvli a1, zero, e8, mf2, ta, ma +; ZVFH-NEXT: vmv1r.v v7, v0 +; ZVFH-NEXT: csrr a2, vlenb +; ZVFH-NEXT: slli a1, a2, 1 +; ZVFH-NEXT: srli a2, a2, 2 +; ZVFH-NEXT: sub a3, a0, a1 +; ZVFH-NEXT: vslidedown.vx v0, v0, a2 +; ZVFH-NEXT: sltu a2, a0, a3 +; 
ZVFH-NEXT: addi a2, a2, -1 +; ZVFH-NEXT: and a2, a2, a3 +; ZVFH-NEXT: addi a3, sp, 16 +; ZVFH-NEXT: vs8r.v v16, (a3) # vscale x 64-byte Folded Spill +; ZVFH-NEXT: vsetvli zero, a2, e16, m4, ta, ma +; ZVFH-NEXT: vfwcvtbf16.f.f.v v24, v20, v0.t +; ZVFH-NEXT: vfwcvtbf16.f.f.v v16, v12, v0.t +; ZVFH-NEXT: vsetvli zero, zero, e32, m8, ta, ma +; ZVFH-NEXT: vfadd.vv v16, v16, v24, v0.t +; ZVFH-NEXT: vsetvli zero, zero, e16, m4, ta, ma +; ZVFH-NEXT: vfncvtbf16.f.f.w v12, v16, v0.t +; ZVFH-NEXT: bltu a0, a1, .LBB22_2 +; ZVFH-NEXT: # %bb.1: +; ZVFH-NEXT: mv a0, a1 +; ZVFH-NEXT: .LBB22_2: +; ZVFH-NEXT: vmv1r.v v0, v7 +; ZVFH-NEXT: addi a1, sp, 16 +; ZVFH-NEXT: vl8r.v v24, (a1) # vscale x 64-byte Folded Reload +; ZVFH-NEXT: vsetvli zero, a0, e16, m4, ta, ma +; ZVFH-NEXT: vfwcvtbf16.f.f.v v16, v24, v0.t +; ZVFH-NEXT: vfwcvtbf16.f.f.v v24, v8, v0.t +; ZVFH-NEXT: vsetvli zero, zero, e32, m8, ta, ma +; ZVFH-NEXT: vfadd.vv v16, v24, v16, v0.t +; ZVFH-NEXT: vsetvli zero, zero, e16, m4, ta, ma +; ZVFH-NEXT: vfncvtbf16.f.f.w v8, v16, v0.t +; ZVFH-NEXT: csrr a0, vlenb +; ZVFH-NEXT: slli a0, a0, 3 +; ZVFH-NEXT: add sp, sp, a0 +; ZVFH-NEXT: .cfi_def_cfa sp, 16 +; ZVFH-NEXT: addi sp, sp, 16 +; ZVFH-NEXT: .cfi_def_cfa_offset 0 +; ZVFH-NEXT: ret +; +; ZVFHMIN-LABEL: vfadd_vv_nxv32bf16: +; ZVFHMIN: # %bb.0: +; ZVFHMIN-NEXT: addi sp, sp, -16 +; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16 +; ZVFHMIN-NEXT: csrr a1, vlenb +; ZVFHMIN-NEXT: slli a1, a1, 3 +; ZVFHMIN-NEXT: sub sp, sp, a1 +; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb +; ZVFHMIN-NEXT: vsetvli a1, zero, e8, mf2, ta, ma +; ZVFHMIN-NEXT: vmv1r.v v7, v0 +; ZVFHMIN-NEXT: csrr a2, vlenb +; ZVFHMIN-NEXT: slli a1, a2, 1 +; ZVFHMIN-NEXT: srli a2, a2, 2 +; ZVFHMIN-NEXT: sub a3, a0, a1 +; ZVFHMIN-NEXT: vslidedown.vx v0, v0, a2 +; ZVFHMIN-NEXT: sltu a2, a0, a3 +; ZVFHMIN-NEXT: addi a2, a2, -1 +; ZVFHMIN-NEXT: and a2, a2, a3 +; ZVFHMIN-NEXT: addi a3, sp, 16 +; ZVFHMIN-NEXT: vs8r.v v16, (a3) # vscale x 64-byte Folded Spill +; ZVFHMIN-NEXT: vsetvli zero, a2, e16, m4, ta, ma +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v24, v20, v0.t +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v16, v12, v0.t +; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m8, ta, ma +; ZVFHMIN-NEXT: vfadd.vv v16, v16, v24, v0.t +; ZVFHMIN-NEXT: vsetvli zero, zero, e16, m4, ta, ma +; ZVFHMIN-NEXT: vfncvtbf16.f.f.w v12, v16, v0.t +; ZVFHMIN-NEXT: bltu a0, a1, .LBB22_2 +; ZVFHMIN-NEXT: # %bb.1: +; ZVFHMIN-NEXT: mv a0, a1 +; ZVFHMIN-NEXT: .LBB22_2: +; ZVFHMIN-NEXT: vmv1r.v v0, v7 +; ZVFHMIN-NEXT: addi a1, sp, 16 +; ZVFHMIN-NEXT: vl8r.v v24, (a1) # vscale x 64-byte Folded Reload +; ZVFHMIN-NEXT: vsetvli zero, a0, e16, m4, ta, ma +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v16, v24, v0.t +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v24, v8, v0.t +; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m8, ta, ma +; ZVFHMIN-NEXT: vfadd.vv v16, v24, v16, v0.t +; ZVFHMIN-NEXT: vsetvli zero, zero, e16, m4, ta, ma +; ZVFHMIN-NEXT: vfncvtbf16.f.f.w v8, v16, v0.t +; ZVFHMIN-NEXT: csrr a0, vlenb +; ZVFHMIN-NEXT: slli a0, a0, 3 +; ZVFHMIN-NEXT: add sp, sp, a0 +; ZVFHMIN-NEXT: .cfi_def_cfa sp, 16 +; ZVFHMIN-NEXT: addi sp, sp, 16 +; ZVFHMIN-NEXT: .cfi_def_cfa_offset 0 +; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vv_nxv32bf16: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: addi sp, sp, -16 +; ZVFBFA-NEXT: .cfi_def_cfa_offset 16 +; ZVFBFA-NEXT: csrr a1, vlenb +; ZVFBFA-NEXT: slli a1, a1, 3 +; ZVFBFA-NEXT: sub sp, sp, a1 +; ZVFBFA-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 
0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb +; ZVFBFA-NEXT: vsetvli a1, zero, e8, mf2, ta, ma +; ZVFBFA-NEXT: vmv1r.v v7, v0 +; ZVFBFA-NEXT: csrr a2, vlenb +; ZVFBFA-NEXT: slli a1, a2, 1 +; ZVFBFA-NEXT: srli a2, a2, 2 +; ZVFBFA-NEXT: sub a3, a0, a1 +; ZVFBFA-NEXT: vslidedown.vx v0, v0, a2 +; ZVFBFA-NEXT: sltu a2, a0, a3 +; ZVFBFA-NEXT: addi a2, a2, -1 +; ZVFBFA-NEXT: and a2, a2, a3 +; ZVFBFA-NEXT: addi a3, sp, 16 +; ZVFBFA-NEXT: vs8r.v v16, (a3) # vscale x 64-byte Folded Spill +; ZVFBFA-NEXT: vsetvli zero, a2, e16alt, m4, ta, ma +; ZVFBFA-NEXT: vfwcvt.f.f.v v24, v20, v0.t +; ZVFBFA-NEXT: vfwcvt.f.f.v v16, v12, v0.t +; ZVFBFA-NEXT: vsetvli zero, zero, e32, m8, ta, ma +; ZVFBFA-NEXT: vfadd.vv v16, v16, v24, v0.t +; ZVFBFA-NEXT: vsetvli zero, zero, e16alt, m4, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v12, v16, v0.t +; ZVFBFA-NEXT: bltu a0, a1, .LBB22_2 +; ZVFBFA-NEXT: # %bb.1: +; ZVFBFA-NEXT: mv a0, a1 +; ZVFBFA-NEXT: .LBB22_2: +; ZVFBFA-NEXT: vmv1r.v v0, v7 +; ZVFBFA-NEXT: addi a1, sp, 16 +; ZVFBFA-NEXT: vl8r.v v24, (a1) # vscale x 64-byte Folded Reload +; ZVFBFA-NEXT: vsetvli zero, a0, e16alt, m4, ta, ma +; ZVFBFA-NEXT: vfwcvt.f.f.v v16, v24, v0.t +; ZVFBFA-NEXT: vfwcvt.f.f.v v24, v8, v0.t +; ZVFBFA-NEXT: vsetvli zero, zero, e32, m8, ta, ma +; ZVFBFA-NEXT: vfadd.vv v16, v24, v16, v0.t +; ZVFBFA-NEXT: vsetvli zero, zero, e16alt, m4, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v8, v16, v0.t +; ZVFBFA-NEXT: csrr a0, vlenb +; ZVFBFA-NEXT: slli a0, a0, 3 +; ZVFBFA-NEXT: add sp, sp, a0 +; ZVFBFA-NEXT: .cfi_def_cfa sp, 16 +; ZVFBFA-NEXT: addi sp, sp, 16 +; ZVFBFA-NEXT: .cfi_def_cfa_offset 0 +; ZVFBFA-NEXT: ret %v = call <vscale x 32 x bfloat> @llvm.vp.fadd.nxv32bf16(<vscale x 32 x bfloat> %va, <vscale x 32 x bfloat> %b, <vscale x 32 x i1> %m, i32 %evl) ret <vscale x 32 x bfloat> %v } define <vscale x 32 x bfloat> @vfadd_vv_nxv32bf16_unmasked(<vscale x 32 x bfloat> %va, <vscale x 32 x bfloat> %b, i32 zeroext %evl) { -; CHECK-LABEL: vfadd_vv_nxv32bf16_unmasked: -; CHECK: # %bb.0: -; CHECK-NEXT: addi sp, sp, -16 -; CHECK-NEXT: .cfi_def_cfa_offset 16 -; CHECK-NEXT: csrr a1, vlenb -; CHECK-NEXT: slli a1, a1, 3 -; CHECK-NEXT: sub sp, sp, a1 -; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb -; CHECK-NEXT: csrr a2, vlenb -; CHECK-NEXT: vsetvli a1, zero, e8, m4, ta, ma -; CHECK-NEXT: vmset.m v24 -; CHECK-NEXT: slli a1, a2, 1 -; CHECK-NEXT: srli a2, a2, 2 -; CHECK-NEXT: sub a3, a0, a1 -; CHECK-NEXT: vsetvli a4, zero, e8, mf2, ta, ma -; CHECK-NEXT: vslidedown.vx v0, v24, a2 -; CHECK-NEXT: sltu a2, a0, a3 -; CHECK-NEXT: addi a2, a2, -1 -; CHECK-NEXT: and a2, a2, a3 -; CHECK-NEXT: addi a3, sp, 16 -; CHECK-NEXT: vs8r.v v16, (a3) # vscale x 64-byte Folded Spill -; CHECK-NEXT: vsetvli zero, a2, e16, m4, ta, ma -; CHECK-NEXT: vfwcvtbf16.f.f.v v24, v20, v0.t -; CHECK-NEXT: vfwcvtbf16.f.f.v v16, v12, v0.t -; CHECK-NEXT: vsetvli zero, zero, e32, m8, ta, ma -; CHECK-NEXT: vfadd.vv v16, v16, v24, v0.t -; CHECK-NEXT: vsetvli zero, zero, e16, m4, ta, ma -; CHECK-NEXT: vfncvtbf16.f.f.w v12, v16, v0.t -; CHECK-NEXT: bltu a0, a1, .LBB23_2 -; CHECK-NEXT: # %bb.1: -; CHECK-NEXT: mv a0, a1 -; CHECK-NEXT: .LBB23_2: -; CHECK-NEXT: addi a1, sp, 16 -; CHECK-NEXT: vl8r.v v24, (a1) # vscale x 64-byte Folded Reload -; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, ma -; CHECK-NEXT: vfwcvtbf16.f.f.v v16, v24 -; CHECK-NEXT: vfwcvtbf16.f.f.v v24, v8 -; CHECK-NEXT: vsetvli zero, zero, e32, m8, ta, ma -; CHECK-NEXT: vfadd.vv v16, v24, v16 -; CHECK-NEXT: vsetvli 
zero, zero, e16, m4, ta, ma -; CHECK-NEXT: vfncvtbf16.f.f.w v8, v16 -; CHECK-NEXT: csrr a0, vlenb -; CHECK-NEXT: slli a0, a0, 3 -; CHECK-NEXT: add sp, sp, a0 -; CHECK-NEXT: .cfi_def_cfa sp, 16 -; CHECK-NEXT: addi sp, sp, 16 -; CHECK-NEXT: .cfi_def_cfa_offset 0 -; CHECK-NEXT: ret +; ZVFH-LABEL: vfadd_vv_nxv32bf16_unmasked: +; ZVFH: # %bb.0: +; ZVFH-NEXT: addi sp, sp, -16 +; ZVFH-NEXT: .cfi_def_cfa_offset 16 +; ZVFH-NEXT: csrr a1, vlenb +; ZVFH-NEXT: slli a1, a1, 3 +; ZVFH-NEXT: sub sp, sp, a1 +; ZVFH-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb +; ZVFH-NEXT: csrr a2, vlenb +; ZVFH-NEXT: vsetvli a1, zero, e8, m4, ta, ma +; ZVFH-NEXT: vmset.m v24 +; ZVFH-NEXT: slli a1, a2, 1 +; ZVFH-NEXT: srli a2, a2, 2 +; ZVFH-NEXT: sub a3, a0, a1 +; ZVFH-NEXT: vsetvli a4, zero, e8, mf2, ta, ma +; ZVFH-NEXT: vslidedown.vx v0, v24, a2 +; ZVFH-NEXT: sltu a2, a0, a3 +; ZVFH-NEXT: addi a2, a2, -1 +; ZVFH-NEXT: and a2, a2, a3 +; ZVFH-NEXT: addi a3, sp, 16 +; ZVFH-NEXT: vs8r.v v16, (a3) # vscale x 64-byte Folded Spill +; ZVFH-NEXT: vsetvli zero, a2, e16, m4, ta, ma +; ZVFH-NEXT: vfwcvtbf16.f.f.v v24, v20, v0.t +; ZVFH-NEXT: vfwcvtbf16.f.f.v v16, v12, v0.t +; ZVFH-NEXT: vsetvli zero, zero, e32, m8, ta, ma +; ZVFH-NEXT: vfadd.vv v16, v16, v24, v0.t +; ZVFH-NEXT: vsetvli zero, zero, e16, m4, ta, ma +; ZVFH-NEXT: vfncvtbf16.f.f.w v12, v16, v0.t +; ZVFH-NEXT: bltu a0, a1, .LBB23_2 +; ZVFH-NEXT: # %bb.1: +; ZVFH-NEXT: mv a0, a1 +; ZVFH-NEXT: .LBB23_2: +; ZVFH-NEXT: addi a1, sp, 16 +; ZVFH-NEXT: vl8r.v v24, (a1) # vscale x 64-byte Folded Reload +; ZVFH-NEXT: vsetvli zero, a0, e16, m4, ta, ma +; ZVFH-NEXT: vfwcvtbf16.f.f.v v16, v24 +; ZVFH-NEXT: vfwcvtbf16.f.f.v v24, v8 +; ZVFH-NEXT: vsetvli zero, zero, e32, m8, ta, ma +; ZVFH-NEXT: vfadd.vv v16, v24, v16 +; ZVFH-NEXT: vsetvli zero, zero, e16, m4, ta, ma +; ZVFH-NEXT: vfncvtbf16.f.f.w v8, v16 +; ZVFH-NEXT: csrr a0, vlenb +; ZVFH-NEXT: slli a0, a0, 3 +; ZVFH-NEXT: add sp, sp, a0 +; ZVFH-NEXT: .cfi_def_cfa sp, 16 +; ZVFH-NEXT: addi sp, sp, 16 +; ZVFH-NEXT: .cfi_def_cfa_offset 0 +; ZVFH-NEXT: ret +; +; ZVFHMIN-LABEL: vfadd_vv_nxv32bf16_unmasked: +; ZVFHMIN: # %bb.0: +; ZVFHMIN-NEXT: addi sp, sp, -16 +; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16 +; ZVFHMIN-NEXT: csrr a1, vlenb +; ZVFHMIN-NEXT: slli a1, a1, 3 +; ZVFHMIN-NEXT: sub sp, sp, a1 +; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb +; ZVFHMIN-NEXT: csrr a2, vlenb +; ZVFHMIN-NEXT: vsetvli a1, zero, e8, m4, ta, ma +; ZVFHMIN-NEXT: vmset.m v24 +; ZVFHMIN-NEXT: slli a1, a2, 1 +; ZVFHMIN-NEXT: srli a2, a2, 2 +; ZVFHMIN-NEXT: sub a3, a0, a1 +; ZVFHMIN-NEXT: vsetvli a4, zero, e8, mf2, ta, ma +; ZVFHMIN-NEXT: vslidedown.vx v0, v24, a2 +; ZVFHMIN-NEXT: sltu a2, a0, a3 +; ZVFHMIN-NEXT: addi a2, a2, -1 +; ZVFHMIN-NEXT: and a2, a2, a3 +; ZVFHMIN-NEXT: addi a3, sp, 16 +; ZVFHMIN-NEXT: vs8r.v v16, (a3) # vscale x 64-byte Folded Spill +; ZVFHMIN-NEXT: vsetvli zero, a2, e16, m4, ta, ma +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v24, v20, v0.t +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v16, v12, v0.t +; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m8, ta, ma +; ZVFHMIN-NEXT: vfadd.vv v16, v16, v24, v0.t +; ZVFHMIN-NEXT: vsetvli zero, zero, e16, m4, ta, ma +; ZVFHMIN-NEXT: vfncvtbf16.f.f.w v12, v16, v0.t +; ZVFHMIN-NEXT: bltu a0, a1, .LBB23_2 +; ZVFHMIN-NEXT: # %bb.1: +; ZVFHMIN-NEXT: mv a0, a1 +; ZVFHMIN-NEXT: .LBB23_2: +; ZVFHMIN-NEXT: addi a1, sp, 16 +; ZVFHMIN-NEXT: vl8r.v v24, (a1) # vscale 
x 64-byte Folded Reload +; ZVFHMIN-NEXT: vsetvli zero, a0, e16, m4, ta, ma +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v16, v24 +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v24, v8 +; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m8, ta, ma +; ZVFHMIN-NEXT: vfadd.vv v16, v24, v16 +; ZVFHMIN-NEXT: vsetvli zero, zero, e16, m4, ta, ma +; ZVFHMIN-NEXT: vfncvtbf16.f.f.w v8, v16 +; ZVFHMIN-NEXT: csrr a0, vlenb +; ZVFHMIN-NEXT: slli a0, a0, 3 +; ZVFHMIN-NEXT: add sp, sp, a0 +; ZVFHMIN-NEXT: .cfi_def_cfa sp, 16 +; ZVFHMIN-NEXT: addi sp, sp, 16 +; ZVFHMIN-NEXT: .cfi_def_cfa_offset 0 +; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vv_nxv32bf16_unmasked: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: addi sp, sp, -16 +; ZVFBFA-NEXT: .cfi_def_cfa_offset 16 +; ZVFBFA-NEXT: csrr a1, vlenb +; ZVFBFA-NEXT: slli a1, a1, 3 +; ZVFBFA-NEXT: sub sp, sp, a1 +; ZVFBFA-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb +; ZVFBFA-NEXT: csrr a2, vlenb +; ZVFBFA-NEXT: vsetvli a1, zero, e8, m4, ta, ma +; ZVFBFA-NEXT: vmset.m v24 +; ZVFBFA-NEXT: slli a1, a2, 1 +; ZVFBFA-NEXT: srli a2, a2, 2 +; ZVFBFA-NEXT: sub a3, a0, a1 +; ZVFBFA-NEXT: vsetvli a4, zero, e8, mf2, ta, ma +; ZVFBFA-NEXT: vslidedown.vx v0, v24, a2 +; ZVFBFA-NEXT: sltu a2, a0, a3 +; ZVFBFA-NEXT: addi a2, a2, -1 +; ZVFBFA-NEXT: and a2, a2, a3 +; ZVFBFA-NEXT: addi a3, sp, 16 +; ZVFBFA-NEXT: vs8r.v v16, (a3) # vscale x 64-byte Folded Spill +; ZVFBFA-NEXT: vsetvli zero, a2, e16alt, m4, ta, ma +; ZVFBFA-NEXT: vfwcvt.f.f.v v24, v20, v0.t +; ZVFBFA-NEXT: vfwcvt.f.f.v v16, v12, v0.t +; ZVFBFA-NEXT: vsetvli zero, zero, e32, m8, ta, ma +; ZVFBFA-NEXT: vfadd.vv v16, v16, v24, v0.t +; ZVFBFA-NEXT: vsetvli zero, zero, e16alt, m4, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v12, v16, v0.t +; ZVFBFA-NEXT: bltu a0, a1, .LBB23_2 +; ZVFBFA-NEXT: # %bb.1: +; ZVFBFA-NEXT: mv a0, a1 +; ZVFBFA-NEXT: .LBB23_2: +; ZVFBFA-NEXT: addi a1, sp, 16 +; ZVFBFA-NEXT: vl8r.v v24, (a1) # vscale x 64-byte Folded Reload +; ZVFBFA-NEXT: vsetvli zero, a0, e16alt, m4, ta, ma +; ZVFBFA-NEXT: vfwcvt.f.f.v v16, v24 +; ZVFBFA-NEXT: vfwcvt.f.f.v v24, v8 +; ZVFBFA-NEXT: vsetvli zero, zero, e32, m8, ta, ma +; ZVFBFA-NEXT: vfadd.vv v16, v24, v16 +; ZVFBFA-NEXT: vsetvli zero, zero, e16alt, m4, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v8, v16 +; ZVFBFA-NEXT: csrr a0, vlenb +; ZVFBFA-NEXT: slli a0, a0, 3 +; ZVFBFA-NEXT: add sp, sp, a0 +; ZVFBFA-NEXT: .cfi_def_cfa sp, 16 +; ZVFBFA-NEXT: addi sp, sp, 16 +; ZVFBFA-NEXT: .cfi_def_cfa_offset 0 +; ZVFBFA-NEXT: ret %v = call <vscale x 32 x bfloat> @llvm.vp.fadd.nxv32bf16(<vscale x 32 x bfloat> %va, <vscale x 32 x bfloat> %b, <vscale x 32 x i1> splat (i1 true), i32 %evl) ret <vscale x 32 x bfloat> %v } define <vscale x 32 x bfloat> @vfadd_vf_nxv32bf16(<vscale x 32 x bfloat> %va, bfloat %b, <vscale x 32 x i1> %m, i32 zeroext %evl) { -; CHECK-LABEL: vfadd_vf_nxv32bf16: -; CHECK: # %bb.0: -; CHECK-NEXT: addi sp, sp, -16 -; CHECK-NEXT: .cfi_def_cfa_offset 16 -; CHECK-NEXT: csrr a1, vlenb -; CHECK-NEXT: slli a1, a1, 4 -; CHECK-NEXT: sub sp, sp, a1 -; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb -; CHECK-NEXT: vsetvli a1, zero, e16, m8, ta, ma -; CHECK-NEXT: vmv1r.v v7, v0 -; CHECK-NEXT: fmv.x.h a1, fa0 -; CHECK-NEXT: csrr a2, vlenb -; CHECK-NEXT: vmv.v.x v24, a1 -; CHECK-NEXT: slli a1, a2, 1 -; CHECK-NEXT: srli a2, a2, 2 -; CHECK-NEXT: sub a3, a0, a1 -; CHECK-NEXT: vsetvli a4, zero, e8, mf2, ta, ma -; CHECK-NEXT: vslidedown.vx v0, v0, a2 -; CHECK-NEXT: 
sltu a2, a0, a3 -; CHECK-NEXT: addi a2, a2, -1 -; CHECK-NEXT: and a2, a2, a3 -; CHECK-NEXT: csrr a3, vlenb -; CHECK-NEXT: slli a3, a3, 3 -; CHECK-NEXT: add a3, sp, a3 -; CHECK-NEXT: addi a3, a3, 16 -; CHECK-NEXT: vs8r.v v24, (a3) # vscale x 64-byte Folded Spill -; CHECK-NEXT: vsetvli zero, a2, e16, m4, ta, ma -; CHECK-NEXT: vfwcvtbf16.f.f.v v16, v28, v0.t -; CHECK-NEXT: vfwcvtbf16.f.f.v v24, v12, v0.t -; CHECK-NEXT: vsetvli zero, zero, e32, m8, ta, ma -; CHECK-NEXT: vfadd.vv v16, v24, v16, v0.t -; CHECK-NEXT: vsetvli zero, zero, e16, m4, ta, ma -; CHECK-NEXT: vfncvtbf16.f.f.w v12, v16, v0.t -; CHECK-NEXT: bltu a0, a1, .LBB24_2 -; CHECK-NEXT: # %bb.1: -; CHECK-NEXT: mv a0, a1 -; CHECK-NEXT: .LBB24_2: -; CHECK-NEXT: vmv1r.v v0, v7 -; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, ma -; CHECK-NEXT: vfwcvtbf16.f.f.v v16, v8, v0.t -; CHECK-NEXT: addi a0, sp, 16 -; CHECK-NEXT: vs8r.v v16, (a0) # vscale x 64-byte Folded Spill -; CHECK-NEXT: csrr a0, vlenb -; CHECK-NEXT: slli a0, a0, 3 -; CHECK-NEXT: add a0, sp, a0 -; CHECK-NEXT: addi a0, a0, 16 -; CHECK-NEXT: vl8r.v v16, (a0) # vscale x 64-byte Folded Reload -; CHECK-NEXT: vfwcvtbf16.f.f.v v24, v16, v0.t -; CHECK-NEXT: addi a0, sp, 16 -; CHECK-NEXT: vl8r.v v16, (a0) # vscale x 64-byte Folded Reload -; CHECK-NEXT: vsetvli zero, zero, e32, m8, ta, ma -; CHECK-NEXT: vfadd.vv v16, v16, v24, v0.t -; CHECK-NEXT: vsetvli zero, zero, e16, m4, ta, ma -; CHECK-NEXT: vfncvtbf16.f.f.w v8, v16, v0.t -; CHECK-NEXT: csrr a0, vlenb -; CHECK-NEXT: slli a0, a0, 4 -; CHECK-NEXT: add sp, sp, a0 -; CHECK-NEXT: .cfi_def_cfa sp, 16 -; CHECK-NEXT: addi sp, sp, 16 -; CHECK-NEXT: .cfi_def_cfa_offset 0 -; CHECK-NEXT: ret +; ZVFH-LABEL: vfadd_vf_nxv32bf16: +; ZVFH: # %bb.0: +; ZVFH-NEXT: addi sp, sp, -16 +; ZVFH-NEXT: .cfi_def_cfa_offset 16 +; ZVFH-NEXT: csrr a1, vlenb +; ZVFH-NEXT: slli a1, a1, 4 +; ZVFH-NEXT: sub sp, sp, a1 +; ZVFH-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb +; ZVFH-NEXT: vsetvli a1, zero, e16, m8, ta, ma +; ZVFH-NEXT: vmv1r.v v7, v0 +; ZVFH-NEXT: fmv.x.h a1, fa0 +; ZVFH-NEXT: csrr a2, vlenb +; ZVFH-NEXT: vmv.v.x v24, a1 +; ZVFH-NEXT: slli a1, a2, 1 +; ZVFH-NEXT: srli a2, a2, 2 +; ZVFH-NEXT: sub a3, a0, a1 +; ZVFH-NEXT: vsetvli a4, zero, e8, mf2, ta, ma +; ZVFH-NEXT: vslidedown.vx v0, v0, a2 +; ZVFH-NEXT: sltu a2, a0, a3 +; ZVFH-NEXT: addi a2, a2, -1 +; ZVFH-NEXT: and a2, a2, a3 +; ZVFH-NEXT: csrr a3, vlenb +; ZVFH-NEXT: slli a3, a3, 3 +; ZVFH-NEXT: add a3, sp, a3 +; ZVFH-NEXT: addi a3, a3, 16 +; ZVFH-NEXT: vs8r.v v24, (a3) # vscale x 64-byte Folded Spill +; ZVFH-NEXT: vsetvli zero, a2, e16, m4, ta, ma +; ZVFH-NEXT: vfwcvtbf16.f.f.v v16, v28, v0.t +; ZVFH-NEXT: vfwcvtbf16.f.f.v v24, v12, v0.t +; ZVFH-NEXT: vsetvli zero, zero, e32, m8, ta, ma +; ZVFH-NEXT: vfadd.vv v16, v24, v16, v0.t +; ZVFH-NEXT: vsetvli zero, zero, e16, m4, ta, ma +; ZVFH-NEXT: vfncvtbf16.f.f.w v12, v16, v0.t +; ZVFH-NEXT: bltu a0, a1, .LBB24_2 +; ZVFH-NEXT: # %bb.1: +; ZVFH-NEXT: mv a0, a1 +; ZVFH-NEXT: .LBB24_2: +; ZVFH-NEXT: vmv1r.v v0, v7 +; ZVFH-NEXT: vsetvli zero, a0, e16, m4, ta, ma +; ZVFH-NEXT: vfwcvtbf16.f.f.v v16, v8, v0.t +; ZVFH-NEXT: addi a0, sp, 16 +; ZVFH-NEXT: vs8r.v v16, (a0) # vscale x 64-byte Folded Spill +; ZVFH-NEXT: csrr a0, vlenb +; ZVFH-NEXT: slli a0, a0, 3 +; ZVFH-NEXT: add a0, sp, a0 +; ZVFH-NEXT: addi a0, a0, 16 +; ZVFH-NEXT: vl8r.v v16, (a0) # vscale x 64-byte Folded Reload +; ZVFH-NEXT: vfwcvtbf16.f.f.v v24, v16, v0.t +; ZVFH-NEXT: addi a0, sp, 16 +; ZVFH-NEXT: vl8r.v v16, 
(a0) # vscale x 64-byte Folded Reload +; ZVFH-NEXT: vsetvli zero, zero, e32, m8, ta, ma +; ZVFH-NEXT: vfadd.vv v16, v16, v24, v0.t +; ZVFH-NEXT: vsetvli zero, zero, e16, m4, ta, ma +; ZVFH-NEXT: vfncvtbf16.f.f.w v8, v16, v0.t +; ZVFH-NEXT: csrr a0, vlenb +; ZVFH-NEXT: slli a0, a0, 4 +; ZVFH-NEXT: add sp, sp, a0 +; ZVFH-NEXT: .cfi_def_cfa sp, 16 +; ZVFH-NEXT: addi sp, sp, 16 +; ZVFH-NEXT: .cfi_def_cfa_offset 0 +; ZVFH-NEXT: ret +; +; ZVFHMIN-LABEL: vfadd_vf_nxv32bf16: +; ZVFHMIN: # %bb.0: +; ZVFHMIN-NEXT: addi sp, sp, -16 +; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16 +; ZVFHMIN-NEXT: csrr a1, vlenb +; ZVFHMIN-NEXT: slli a1, a1, 4 +; ZVFHMIN-NEXT: sub sp, sp, a1 +; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb +; ZVFHMIN-NEXT: vsetvli a1, zero, e16, m8, ta, ma +; ZVFHMIN-NEXT: vmv1r.v v7, v0 +; ZVFHMIN-NEXT: fmv.x.h a1, fa0 +; ZVFHMIN-NEXT: csrr a2, vlenb +; ZVFHMIN-NEXT: vmv.v.x v24, a1 +; ZVFHMIN-NEXT: slli a1, a2, 1 +; ZVFHMIN-NEXT: srli a2, a2, 2 +; ZVFHMIN-NEXT: sub a3, a0, a1 +; ZVFHMIN-NEXT: vsetvli a4, zero, e8, mf2, ta, ma +; ZVFHMIN-NEXT: vslidedown.vx v0, v0, a2 +; ZVFHMIN-NEXT: sltu a2, a0, a3 +; ZVFHMIN-NEXT: addi a2, a2, -1 +; ZVFHMIN-NEXT: and a2, a2, a3 +; ZVFHMIN-NEXT: csrr a3, vlenb +; ZVFHMIN-NEXT: slli a3, a3, 3 +; ZVFHMIN-NEXT: add a3, sp, a3 +; ZVFHMIN-NEXT: addi a3, a3, 16 +; ZVFHMIN-NEXT: vs8r.v v24, (a3) # vscale x 64-byte Folded Spill +; ZVFHMIN-NEXT: vsetvli zero, a2, e16, m4, ta, ma +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v16, v28, v0.t +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v24, v12, v0.t +; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m8, ta, ma +; ZVFHMIN-NEXT: vfadd.vv v16, v24, v16, v0.t +; ZVFHMIN-NEXT: vsetvli zero, zero, e16, m4, ta, ma +; ZVFHMIN-NEXT: vfncvtbf16.f.f.w v12, v16, v0.t +; ZVFHMIN-NEXT: bltu a0, a1, .LBB24_2 +; ZVFHMIN-NEXT: # %bb.1: +; ZVFHMIN-NEXT: mv a0, a1 +; ZVFHMIN-NEXT: .LBB24_2: +; ZVFHMIN-NEXT: vmv1r.v v0, v7 +; ZVFHMIN-NEXT: vsetvli zero, a0, e16, m4, ta, ma +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v16, v8, v0.t +; ZVFHMIN-NEXT: addi a0, sp, 16 +; ZVFHMIN-NEXT: vs8r.v v16, (a0) # vscale x 64-byte Folded Spill +; ZVFHMIN-NEXT: csrr a0, vlenb +; ZVFHMIN-NEXT: slli a0, a0, 3 +; ZVFHMIN-NEXT: add a0, sp, a0 +; ZVFHMIN-NEXT: addi a0, a0, 16 +; ZVFHMIN-NEXT: vl8r.v v16, (a0) # vscale x 64-byte Folded Reload +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v24, v16, v0.t +; ZVFHMIN-NEXT: addi a0, sp, 16 +; ZVFHMIN-NEXT: vl8r.v v16, (a0) # vscale x 64-byte Folded Reload +; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m8, ta, ma +; ZVFHMIN-NEXT: vfadd.vv v16, v16, v24, v0.t +; ZVFHMIN-NEXT: vsetvli zero, zero, e16, m4, ta, ma +; ZVFHMIN-NEXT: vfncvtbf16.f.f.w v8, v16, v0.t +; ZVFHMIN-NEXT: csrr a0, vlenb +; ZVFHMIN-NEXT: slli a0, a0, 4 +; ZVFHMIN-NEXT: add sp, sp, a0 +; ZVFHMIN-NEXT: .cfi_def_cfa sp, 16 +; ZVFHMIN-NEXT: addi sp, sp, 16 +; ZVFHMIN-NEXT: .cfi_def_cfa_offset 0 +; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vf_nxv32bf16: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: addi sp, sp, -16 +; ZVFBFA-NEXT: .cfi_def_cfa_offset 16 +; ZVFBFA-NEXT: csrr a1, vlenb +; ZVFBFA-NEXT: slli a1, a1, 4 +; ZVFBFA-NEXT: sub sp, sp, a1 +; ZVFBFA-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb +; ZVFBFA-NEXT: vsetvli a1, zero, e16, m8, ta, ma +; ZVFBFA-NEXT: vmv1r.v v7, v0 +; ZVFBFA-NEXT: fmv.x.h a1, fa0 +; ZVFBFA-NEXT: csrr a2, vlenb +; ZVFBFA-NEXT: vmv.v.x v24, a1 +; ZVFBFA-NEXT: slli a1, a2, 1 +; ZVFBFA-NEXT: srli a2, a2, 2 +; 
ZVFBFA-NEXT: sub a3, a0, a1 +; ZVFBFA-NEXT: vsetvli a4, zero, e8, mf2, ta, ma +; ZVFBFA-NEXT: vslidedown.vx v0, v0, a2 +; ZVFBFA-NEXT: sltu a2, a0, a3 +; ZVFBFA-NEXT: addi a2, a2, -1 +; ZVFBFA-NEXT: and a2, a2, a3 +; ZVFBFA-NEXT: csrr a3, vlenb +; ZVFBFA-NEXT: slli a3, a3, 3 +; ZVFBFA-NEXT: add a3, sp, a3 +; ZVFBFA-NEXT: addi a3, a3, 16 +; ZVFBFA-NEXT: vs8r.v v24, (a3) # vscale x 64-byte Folded Spill +; ZVFBFA-NEXT: vsetvli zero, a2, e16alt, m4, ta, ma +; ZVFBFA-NEXT: vfwcvt.f.f.v v16, v28, v0.t +; ZVFBFA-NEXT: vfwcvt.f.f.v v24, v12, v0.t +; ZVFBFA-NEXT: vsetvli zero, zero, e32, m8, ta, ma +; ZVFBFA-NEXT: vfadd.vv v16, v24, v16, v0.t +; ZVFBFA-NEXT: vsetvli zero, zero, e16alt, m4, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v12, v16, v0.t +; ZVFBFA-NEXT: bltu a0, a1, .LBB24_2 +; ZVFBFA-NEXT: # %bb.1: +; ZVFBFA-NEXT: mv a0, a1 +; ZVFBFA-NEXT: .LBB24_2: +; ZVFBFA-NEXT: vmv1r.v v0, v7 +; ZVFBFA-NEXT: vsetvli zero, a0, e16alt, m4, ta, ma +; ZVFBFA-NEXT: vfwcvt.f.f.v v16, v8, v0.t +; ZVFBFA-NEXT: addi a0, sp, 16 +; ZVFBFA-NEXT: vs8r.v v16, (a0) # vscale x 64-byte Folded Spill +; ZVFBFA-NEXT: csrr a0, vlenb +; ZVFBFA-NEXT: slli a0, a0, 3 +; ZVFBFA-NEXT: add a0, sp, a0 +; ZVFBFA-NEXT: addi a0, a0, 16 +; ZVFBFA-NEXT: vl8r.v v16, (a0) # vscale x 64-byte Folded Reload +; ZVFBFA-NEXT: vfwcvt.f.f.v v24, v16, v0.t +; ZVFBFA-NEXT: addi a0, sp, 16 +; ZVFBFA-NEXT: vl8r.v v16, (a0) # vscale x 64-byte Folded Reload +; ZVFBFA-NEXT: vsetvli zero, zero, e32, m8, ta, ma +; ZVFBFA-NEXT: vfadd.vv v16, v16, v24, v0.t +; ZVFBFA-NEXT: vsetvli zero, zero, e16alt, m4, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v8, v16, v0.t +; ZVFBFA-NEXT: csrr a0, vlenb +; ZVFBFA-NEXT: slli a0, a0, 4 +; ZVFBFA-NEXT: add sp, sp, a0 +; ZVFBFA-NEXT: .cfi_def_cfa sp, 16 +; ZVFBFA-NEXT: addi sp, sp, 16 +; ZVFBFA-NEXT: .cfi_def_cfa_offset 0 +; ZVFBFA-NEXT: ret %elt.head = insertelement <vscale x 32 x bfloat> poison, bfloat %b, i32 0 %vb = shufflevector <vscale x 32 x bfloat> %elt.head, <vscale x 32 x bfloat> poison, <vscale x 32 x i32> zeroinitializer %v = call <vscale x 32 x bfloat> @llvm.vp.fadd.nxv32bf16(<vscale x 32 x bfloat> %va, <vscale x 32 x bfloat> %vb, <vscale x 32 x i1> %m, i32 %evl) @@ -577,56 +1432,158 @@ define <vscale x 32 x bfloat> @vfadd_vf_nxv32bf16(<vscale x 32 x bfloat> %va, bf } define <vscale x 32 x bfloat> @vfadd_vf_nxv32bf16_unmasked(<vscale x 32 x bfloat> %va, bfloat %b, i32 zeroext %evl) { -; CHECK-LABEL: vfadd_vf_nxv32bf16_unmasked: -; CHECK: # %bb.0: -; CHECK-NEXT: addi sp, sp, -16 -; CHECK-NEXT: .cfi_def_cfa_offset 16 -; CHECK-NEXT: csrr a1, vlenb -; CHECK-NEXT: slli a1, a1, 3 -; CHECK-NEXT: sub sp, sp, a1 -; CHECK-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb -; CHECK-NEXT: fmv.x.h a1, fa0 -; CHECK-NEXT: csrr a2, vlenb -; CHECK-NEXT: vsetvli a3, zero, e16, m8, ta, ma -; CHECK-NEXT: vmset.m v24 -; CHECK-NEXT: vmv.v.x v16, a1 -; CHECK-NEXT: slli a1, a2, 1 -; CHECK-NEXT: srli a2, a2, 2 -; CHECK-NEXT: sub a3, a0, a1 -; CHECK-NEXT: vsetvli a4, zero, e8, mf2, ta, ma -; CHECK-NEXT: vslidedown.vx v0, v24, a2 -; CHECK-NEXT: sltu a2, a0, a3 -; CHECK-NEXT: addi a2, a2, -1 -; CHECK-NEXT: and a2, a2, a3 -; CHECK-NEXT: addi a3, sp, 16 -; CHECK-NEXT: vs8r.v v16, (a3) # vscale x 64-byte Folded Spill -; CHECK-NEXT: vsetvli zero, a2, e16, m4, ta, ma -; CHECK-NEXT: vfwcvtbf16.f.f.v v24, v20, v0.t -; CHECK-NEXT: vfwcvtbf16.f.f.v v16, v12, v0.t -; CHECK-NEXT: vsetvli zero, zero, e32, m8, ta, ma -; CHECK-NEXT: vfadd.vv v16, v16, v24, v0.t -; CHECK-NEXT: vsetvli 
zero, zero, e16, m4, ta, ma -; CHECK-NEXT: vfncvtbf16.f.f.w v12, v16, v0.t -; CHECK-NEXT: bltu a0, a1, .LBB25_2 -; CHECK-NEXT: # %bb.1: -; CHECK-NEXT: mv a0, a1 -; CHECK-NEXT: .LBB25_2: -; CHECK-NEXT: vsetvli zero, a0, e16, m4, ta, ma -; CHECK-NEXT: vfwcvtbf16.f.f.v v16, v8 -; CHECK-NEXT: addi a0, sp, 16 -; CHECK-NEXT: vl8r.v v0, (a0) # vscale x 64-byte Folded Reload -; CHECK-NEXT: vfwcvtbf16.f.f.v v24, v0 -; CHECK-NEXT: vsetvli zero, zero, e32, m8, ta, ma -; CHECK-NEXT: vfadd.vv v16, v16, v24 -; CHECK-NEXT: vsetvli zero, zero, e16, m4, ta, ma -; CHECK-NEXT: vfncvtbf16.f.f.w v8, v16 -; CHECK-NEXT: csrr a0, vlenb -; CHECK-NEXT: slli a0, a0, 3 -; CHECK-NEXT: add sp, sp, a0 -; CHECK-NEXT: .cfi_def_cfa sp, 16 -; CHECK-NEXT: addi sp, sp, 16 -; CHECK-NEXT: .cfi_def_cfa_offset 0 -; CHECK-NEXT: ret +; ZVFH-LABEL: vfadd_vf_nxv32bf16_unmasked: +; ZVFH: # %bb.0: +; ZVFH-NEXT: addi sp, sp, -16 +; ZVFH-NEXT: .cfi_def_cfa_offset 16 +; ZVFH-NEXT: csrr a1, vlenb +; ZVFH-NEXT: slli a1, a1, 3 +; ZVFH-NEXT: sub sp, sp, a1 +; ZVFH-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb +; ZVFH-NEXT: fmv.x.h a1, fa0 +; ZVFH-NEXT: csrr a2, vlenb +; ZVFH-NEXT: vsetvli a3, zero, e16, m8, ta, ma +; ZVFH-NEXT: vmset.m v24 +; ZVFH-NEXT: vmv.v.x v16, a1 +; ZVFH-NEXT: slli a1, a2, 1 +; ZVFH-NEXT: srli a2, a2, 2 +; ZVFH-NEXT: sub a3, a0, a1 +; ZVFH-NEXT: vsetvli a4, zero, e8, mf2, ta, ma +; ZVFH-NEXT: vslidedown.vx v0, v24, a2 +; ZVFH-NEXT: sltu a2, a0, a3 +; ZVFH-NEXT: addi a2, a2, -1 +; ZVFH-NEXT: and a2, a2, a3 +; ZVFH-NEXT: addi a3, sp, 16 +; ZVFH-NEXT: vs8r.v v16, (a3) # vscale x 64-byte Folded Spill +; ZVFH-NEXT: vsetvli zero, a2, e16, m4, ta, ma +; ZVFH-NEXT: vfwcvtbf16.f.f.v v24, v20, v0.t +; ZVFH-NEXT: vfwcvtbf16.f.f.v v16, v12, v0.t +; ZVFH-NEXT: vsetvli zero, zero, e32, m8, ta, ma +; ZVFH-NEXT: vfadd.vv v16, v16, v24, v0.t +; ZVFH-NEXT: vsetvli zero, zero, e16, m4, ta, ma +; ZVFH-NEXT: vfncvtbf16.f.f.w v12, v16, v0.t +; ZVFH-NEXT: bltu a0, a1, .LBB25_2 +; ZVFH-NEXT: # %bb.1: +; ZVFH-NEXT: mv a0, a1 +; ZVFH-NEXT: .LBB25_2: +; ZVFH-NEXT: vsetvli zero, a0, e16, m4, ta, ma +; ZVFH-NEXT: vfwcvtbf16.f.f.v v16, v8 +; ZVFH-NEXT: addi a0, sp, 16 +; ZVFH-NEXT: vl8r.v v0, (a0) # vscale x 64-byte Folded Reload +; ZVFH-NEXT: vfwcvtbf16.f.f.v v24, v0 +; ZVFH-NEXT: vsetvli zero, zero, e32, m8, ta, ma +; ZVFH-NEXT: vfadd.vv v16, v16, v24 +; ZVFH-NEXT: vsetvli zero, zero, e16, m4, ta, ma +; ZVFH-NEXT: vfncvtbf16.f.f.w v8, v16 +; ZVFH-NEXT: csrr a0, vlenb +; ZVFH-NEXT: slli a0, a0, 3 +; ZVFH-NEXT: add sp, sp, a0 +; ZVFH-NEXT: .cfi_def_cfa sp, 16 +; ZVFH-NEXT: addi sp, sp, 16 +; ZVFH-NEXT: .cfi_def_cfa_offset 0 +; ZVFH-NEXT: ret +; +; ZVFHMIN-LABEL: vfadd_vf_nxv32bf16_unmasked: +; ZVFHMIN: # %bb.0: +; ZVFHMIN-NEXT: addi sp, sp, -16 +; ZVFHMIN-NEXT: .cfi_def_cfa_offset 16 +; ZVFHMIN-NEXT: csrr a1, vlenb +; ZVFHMIN-NEXT: slli a1, a1, 3 +; ZVFHMIN-NEXT: sub sp, sp, a1 +; ZVFHMIN-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb +; ZVFHMIN-NEXT: fmv.x.h a1, fa0 +; ZVFHMIN-NEXT: csrr a2, vlenb +; ZVFHMIN-NEXT: vsetvli a3, zero, e16, m8, ta, ma +; ZVFHMIN-NEXT: vmset.m v24 +; ZVFHMIN-NEXT: vmv.v.x v16, a1 +; ZVFHMIN-NEXT: slli a1, a2, 1 +; ZVFHMIN-NEXT: srli a2, a2, 2 +; ZVFHMIN-NEXT: sub a3, a0, a1 +; ZVFHMIN-NEXT: vsetvli a4, zero, e8, mf2, ta, ma +; ZVFHMIN-NEXT: vslidedown.vx v0, v24, a2 +; ZVFHMIN-NEXT: sltu a2, a0, a3 +; ZVFHMIN-NEXT: addi a2, a2, -1 +; ZVFHMIN-NEXT: and a2, 
a2, a3 +; ZVFHMIN-NEXT: addi a3, sp, 16 +; ZVFHMIN-NEXT: vs8r.v v16, (a3) # vscale x 64-byte Folded Spill +; ZVFHMIN-NEXT: vsetvli zero, a2, e16, m4, ta, ma +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v24, v20, v0.t +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v16, v12, v0.t +; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m8, ta, ma +; ZVFHMIN-NEXT: vfadd.vv v16, v16, v24, v0.t +; ZVFHMIN-NEXT: vsetvli zero, zero, e16, m4, ta, ma +; ZVFHMIN-NEXT: vfncvtbf16.f.f.w v12, v16, v0.t +; ZVFHMIN-NEXT: bltu a0, a1, .LBB25_2 +; ZVFHMIN-NEXT: # %bb.1: +; ZVFHMIN-NEXT: mv a0, a1 +; ZVFHMIN-NEXT: .LBB25_2: +; ZVFHMIN-NEXT: vsetvli zero, a0, e16, m4, ta, ma +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v16, v8 +; ZVFHMIN-NEXT: addi a0, sp, 16 +; ZVFHMIN-NEXT: vl8r.v v0, (a0) # vscale x 64-byte Folded Reload +; ZVFHMIN-NEXT: vfwcvtbf16.f.f.v v24, v0 +; ZVFHMIN-NEXT: vsetvli zero, zero, e32, m8, ta, ma +; ZVFHMIN-NEXT: vfadd.vv v16, v16, v24 +; ZVFHMIN-NEXT: vsetvli zero, zero, e16, m4, ta, ma +; ZVFHMIN-NEXT: vfncvtbf16.f.f.w v8, v16 +; ZVFHMIN-NEXT: csrr a0, vlenb +; ZVFHMIN-NEXT: slli a0, a0, 3 +; ZVFHMIN-NEXT: add sp, sp, a0 +; ZVFHMIN-NEXT: .cfi_def_cfa sp, 16 +; ZVFHMIN-NEXT: addi sp, sp, 16 +; ZVFHMIN-NEXT: .cfi_def_cfa_offset 0 +; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vf_nxv32bf16_unmasked: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: addi sp, sp, -16 +; ZVFBFA-NEXT: .cfi_def_cfa_offset 16 +; ZVFBFA-NEXT: csrr a1, vlenb +; ZVFBFA-NEXT: slli a1, a1, 3 +; ZVFBFA-NEXT: sub sp, sp, a1 +; ZVFBFA-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb +; ZVFBFA-NEXT: fmv.x.h a1, fa0 +; ZVFBFA-NEXT: csrr a2, vlenb +; ZVFBFA-NEXT: vsetvli a3, zero, e16, m8, ta, ma +; ZVFBFA-NEXT: vmset.m v24 +; ZVFBFA-NEXT: vmv.v.x v16, a1 +; ZVFBFA-NEXT: slli a1, a2, 1 +; ZVFBFA-NEXT: srli a2, a2, 2 +; ZVFBFA-NEXT: sub a3, a0, a1 +; ZVFBFA-NEXT: vsetvli a4, zero, e8, mf2, ta, ma +; ZVFBFA-NEXT: vslidedown.vx v0, v24, a2 +; ZVFBFA-NEXT: sltu a2, a0, a3 +; ZVFBFA-NEXT: addi a2, a2, -1 +; ZVFBFA-NEXT: and a2, a2, a3 +; ZVFBFA-NEXT: addi a3, sp, 16 +; ZVFBFA-NEXT: vs8r.v v16, (a3) # vscale x 64-byte Folded Spill +; ZVFBFA-NEXT: vsetvli zero, a2, e16alt, m4, ta, ma +; ZVFBFA-NEXT: vfwcvt.f.f.v v24, v20, v0.t +; ZVFBFA-NEXT: vfwcvt.f.f.v v16, v12, v0.t +; ZVFBFA-NEXT: vsetvli zero, zero, e32, m8, ta, ma +; ZVFBFA-NEXT: vfadd.vv v16, v16, v24, v0.t +; ZVFBFA-NEXT: vsetvli zero, zero, e16alt, m4, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v12, v16, v0.t +; ZVFBFA-NEXT: bltu a0, a1, .LBB25_2 +; ZVFBFA-NEXT: # %bb.1: +; ZVFBFA-NEXT: mv a0, a1 +; ZVFBFA-NEXT: .LBB25_2: +; ZVFBFA-NEXT: vsetvli zero, a0, e16alt, m4, ta, ma +; ZVFBFA-NEXT: vfwcvt.f.f.v v16, v8 +; ZVFBFA-NEXT: addi a0, sp, 16 +; ZVFBFA-NEXT: vl8r.v v0, (a0) # vscale x 64-byte Folded Reload +; ZVFBFA-NEXT: vfwcvt.f.f.v v24, v0 +; ZVFBFA-NEXT: vsetvli zero, zero, e32, m8, ta, ma +; ZVFBFA-NEXT: vfadd.vv v16, v16, v24 +; ZVFBFA-NEXT: vsetvli zero, zero, e16alt, m4, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v8, v16 +; ZVFBFA-NEXT: csrr a0, vlenb +; ZVFBFA-NEXT: slli a0, a0, 3 +; ZVFBFA-NEXT: add sp, sp, a0 +; ZVFBFA-NEXT: .cfi_def_cfa sp, 16 +; ZVFBFA-NEXT: addi sp, sp, 16 +; ZVFBFA-NEXT: .cfi_def_cfa_offset 0 +; ZVFBFA-NEXT: ret %elt.head = insertelement <vscale x 32 x bfloat> poison, bfloat %b, i32 0 %vb = shufflevector <vscale x 32 x bfloat> %elt.head, <vscale x 32 x bfloat> poison, <vscale x 32 x i32> zeroinitializer %v = call <vscale x 32 x bfloat> @llvm.vp.fadd.nxv32bf16(<vscale x 32 x bfloat> %va, <vscale x 32 x bfloat> %vb, <vscale x 32 x i1> 
splat (i1 true), i32 %evl) @@ -651,6 +1608,17 @@ define <vscale x 1 x half> @vfadd_vv_nxv1f16(<vscale x 1 x half> %va, <vscale x ; ZVFHMIN-NEXT: vsetvli zero, zero, e16, mf4, ta, ma ; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v9, v0.t ; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vv_nxv1f16: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: vsetvli zero, a0, e16, mf4, ta, ma +; ZVFBFA-NEXT: vfwcvt.f.f.v v10, v9, v0.t +; ZVFBFA-NEXT: vfwcvt.f.f.v v9, v8, v0.t +; ZVFBFA-NEXT: vsetvli zero, zero, e32, mf2, ta, ma +; ZVFBFA-NEXT: vfadd.vv v9, v9, v10, v0.t +; ZVFBFA-NEXT: vsetvli zero, zero, e16, mf4, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v8, v9, v0.t +; ZVFBFA-NEXT: ret %v = call <vscale x 1 x half> @llvm.vp.fadd.nxv1f16(<vscale x 1 x half> %va, <vscale x 1 x half> %b, <vscale x 1 x i1> %m, i32 %evl) ret <vscale x 1 x half> %v } @@ -672,6 +1640,17 @@ define <vscale x 1 x half> @vfadd_vv_nxv1f16_unmasked(<vscale x 1 x half> %va, < ; ZVFHMIN-NEXT: vsetvli zero, zero, e16, mf4, ta, ma ; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v9 ; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vv_nxv1f16_unmasked: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: vsetvli zero, a0, e16, mf4, ta, ma +; ZVFBFA-NEXT: vfwcvt.f.f.v v10, v9 +; ZVFBFA-NEXT: vfwcvt.f.f.v v9, v8 +; ZVFBFA-NEXT: vsetvli zero, zero, e32, mf2, ta, ma +; ZVFBFA-NEXT: vfadd.vv v9, v9, v10 +; ZVFBFA-NEXT: vsetvli zero, zero, e16, mf4, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v8, v9 +; ZVFBFA-NEXT: ret %v = call <vscale x 1 x half> @llvm.vp.fadd.nxv1f16(<vscale x 1 x half> %va, <vscale x 1 x half> %b, <vscale x 1 x i1> splat (i1 true), i32 %evl) ret <vscale x 1 x half> %v } @@ -695,6 +1674,19 @@ define <vscale x 1 x half> @vfadd_vf_nxv1f16(<vscale x 1 x half> %va, half %b, < ; ZVFHMIN-NEXT: vsetvli zero, zero, e16, mf4, ta, ma ; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v9, v0.t ; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vf_nxv1f16: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: fmv.x.w a1, fa0 +; ZVFBFA-NEXT: vsetvli zero, a0, e16, mf4, ta, ma +; ZVFBFA-NEXT: vmv.v.x v9, a1 +; ZVFBFA-NEXT: vfwcvt.f.f.v v10, v8, v0.t +; ZVFBFA-NEXT: vfwcvt.f.f.v v8, v9, v0.t +; ZVFBFA-NEXT: vsetvli zero, zero, e32, mf2, ta, ma +; ZVFBFA-NEXT: vfadd.vv v9, v10, v8, v0.t +; ZVFBFA-NEXT: vsetvli zero, zero, e16, mf4, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v8, v9, v0.t +; ZVFBFA-NEXT: ret %elt.head = insertelement <vscale x 1 x half> poison, half %b, i32 0 %vb = shufflevector <vscale x 1 x half> %elt.head, <vscale x 1 x half> poison, <vscale x 1 x i32> zeroinitializer %v = call <vscale x 1 x half> @llvm.vp.fadd.nxv1f16(<vscale x 1 x half> %va, <vscale x 1 x half> %vb, <vscale x 1 x i1> %m, i32 %evl) @@ -720,6 +1712,19 @@ define <vscale x 1 x half> @vfadd_vf_nxv1f16_commute(<vscale x 1 x half> %va, ha ; ZVFHMIN-NEXT: vsetvli zero, zero, e16, mf4, ta, ma ; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v9, v0.t ; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vf_nxv1f16_commute: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: fmv.x.w a1, fa0 +; ZVFBFA-NEXT: vsetvli zero, a0, e16, mf4, ta, ma +; ZVFBFA-NEXT: vmv.v.x v9, a1 +; ZVFBFA-NEXT: vfwcvt.f.f.v v10, v8, v0.t +; ZVFBFA-NEXT: vfwcvt.f.f.v v8, v9, v0.t +; ZVFBFA-NEXT: vsetvli zero, zero, e32, mf2, ta, ma +; ZVFBFA-NEXT: vfadd.vv v9, v8, v10, v0.t +; ZVFBFA-NEXT: vsetvli zero, zero, e16, mf4, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v8, v9, v0.t +; ZVFBFA-NEXT: ret %elt.head = insertelement <vscale x 1 x half> poison, half %b, i32 0 %vb = shufflevector <vscale x 1 x half> %elt.head, <vscale x 1 x half> poison, <vscale x 1 x i32> zeroinitializer %v = call <vscale x 1 x half> @llvm.vp.fadd.nxv1f16(<vscale x 1 x half> %vb, <vscale x 
1 x half> %va, <vscale x 1 x i1> %m, i32 %evl) @@ -745,6 +1750,19 @@ define <vscale x 1 x half> @vfadd_vf_nxv1f16_unmasked(<vscale x 1 x half> %va, h ; ZVFHMIN-NEXT: vsetvli zero, zero, e16, mf4, ta, ma ; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v9 ; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vf_nxv1f16_unmasked: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: fmv.x.w a1, fa0 +; ZVFBFA-NEXT: vsetvli zero, a0, e16, mf4, ta, ma +; ZVFBFA-NEXT: vmv.v.x v9, a1 +; ZVFBFA-NEXT: vfwcvt.f.f.v v10, v8 +; ZVFBFA-NEXT: vfwcvt.f.f.v v8, v9 +; ZVFBFA-NEXT: vsetvli zero, zero, e32, mf2, ta, ma +; ZVFBFA-NEXT: vfadd.vv v9, v10, v8 +; ZVFBFA-NEXT: vsetvli zero, zero, e16, mf4, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v8, v9 +; ZVFBFA-NEXT: ret %elt.head = insertelement <vscale x 1 x half> poison, half %b, i32 0 %vb = shufflevector <vscale x 1 x half> %elt.head, <vscale x 1 x half> poison, <vscale x 1 x i32> zeroinitializer %v = call <vscale x 1 x half> @llvm.vp.fadd.nxv1f16(<vscale x 1 x half> %va, <vscale x 1 x half> %vb, <vscale x 1 x i1> splat (i1 true), i32 %evl) @@ -770,6 +1788,19 @@ define <vscale x 1 x half> @vfadd_vf_nxv1f16_unmasked_commute(<vscale x 1 x half ; ZVFHMIN-NEXT: vsetvli zero, zero, e16, mf4, ta, ma ; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v9 ; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vf_nxv1f16_unmasked_commute: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: fmv.x.w a1, fa0 +; ZVFBFA-NEXT: vsetvli zero, a0, e16, mf4, ta, ma +; ZVFBFA-NEXT: vmv.v.x v9, a1 +; ZVFBFA-NEXT: vfwcvt.f.f.v v10, v8 +; ZVFBFA-NEXT: vfwcvt.f.f.v v8, v9 +; ZVFBFA-NEXT: vsetvli zero, zero, e32, mf2, ta, ma +; ZVFBFA-NEXT: vfadd.vv v9, v8, v10 +; ZVFBFA-NEXT: vsetvli zero, zero, e16, mf4, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v8, v9 +; ZVFBFA-NEXT: ret %elt.head = insertelement <vscale x 1 x half> poison, half %b, i32 0 %vb = shufflevector <vscale x 1 x half> %elt.head, <vscale x 1 x half> poison, <vscale x 1 x i32> zeroinitializer %v = call <vscale x 1 x half> @llvm.vp.fadd.nxv1f16(<vscale x 1 x half> %vb, <vscale x 1 x half> %va, <vscale x 1 x i1> splat (i1 true), i32 %evl) @@ -795,6 +1826,17 @@ define <vscale x 2 x half> @vfadd_vv_nxv2f16(<vscale x 2 x half> %va, <vscale x ; ZVFHMIN-NEXT: vsetvli zero, zero, e16, mf2, ta, ma ; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v9, v0.t ; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vv_nxv2f16: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: vsetvli zero, a0, e16, mf2, ta, ma +; ZVFBFA-NEXT: vfwcvt.f.f.v v10, v9, v0.t +; ZVFBFA-NEXT: vfwcvt.f.f.v v9, v8, v0.t +; ZVFBFA-NEXT: vsetvli zero, zero, e32, m1, ta, ma +; ZVFBFA-NEXT: vfadd.vv v9, v9, v10, v0.t +; ZVFBFA-NEXT: vsetvli zero, zero, e16, mf2, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v8, v9, v0.t +; ZVFBFA-NEXT: ret %v = call <vscale x 2 x half> @llvm.vp.fadd.nxv2f16(<vscale x 2 x half> %va, <vscale x 2 x half> %b, <vscale x 2 x i1> %m, i32 %evl) ret <vscale x 2 x half> %v } @@ -816,6 +1858,17 @@ define <vscale x 2 x half> @vfadd_vv_nxv2f16_unmasked(<vscale x 2 x half> %va, < ; ZVFHMIN-NEXT: vsetvli zero, zero, e16, mf2, ta, ma ; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v9 ; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vv_nxv2f16_unmasked: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: vsetvli zero, a0, e16, mf2, ta, ma +; ZVFBFA-NEXT: vfwcvt.f.f.v v10, v9 +; ZVFBFA-NEXT: vfwcvt.f.f.v v9, v8 +; ZVFBFA-NEXT: vsetvli zero, zero, e32, m1, ta, ma +; ZVFBFA-NEXT: vfadd.vv v9, v9, v10 +; ZVFBFA-NEXT: vsetvli zero, zero, e16, mf2, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v8, v9 +; ZVFBFA-NEXT: ret %v = call <vscale x 2 x half> @llvm.vp.fadd.nxv2f16(<vscale x 2 x half> %va, <vscale x 2 x half> %b, <vscale x 2 x i1> splat 
(i1 true), i32 %evl) ret <vscale x 2 x half> %v } @@ -839,6 +1892,19 @@ define <vscale x 2 x half> @vfadd_vf_nxv2f16(<vscale x 2 x half> %va, half %b, < ; ZVFHMIN-NEXT: vsetvli zero, zero, e16, mf2, ta, ma ; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v9, v0.t ; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vf_nxv2f16: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: fmv.x.w a1, fa0 +; ZVFBFA-NEXT: vsetvli zero, a0, e16, mf2, ta, ma +; ZVFBFA-NEXT: vmv.v.x v9, a1 +; ZVFBFA-NEXT: vfwcvt.f.f.v v10, v8, v0.t +; ZVFBFA-NEXT: vfwcvt.f.f.v v8, v9, v0.t +; ZVFBFA-NEXT: vsetvli zero, zero, e32, m1, ta, ma +; ZVFBFA-NEXT: vfadd.vv v9, v10, v8, v0.t +; ZVFBFA-NEXT: vsetvli zero, zero, e16, mf2, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v8, v9, v0.t +; ZVFBFA-NEXT: ret %elt.head = insertelement <vscale x 2 x half> poison, half %b, i32 0 %vb = shufflevector <vscale x 2 x half> %elt.head, <vscale x 2 x half> poison, <vscale x 2 x i32> zeroinitializer %v = call <vscale x 2 x half> @llvm.vp.fadd.nxv2f16(<vscale x 2 x half> %va, <vscale x 2 x half> %vb, <vscale x 2 x i1> %m, i32 %evl) @@ -864,6 +1930,19 @@ define <vscale x 2 x half> @vfadd_vf_nxv2f16_unmasked(<vscale x 2 x half> %va, h ; ZVFHMIN-NEXT: vsetvli zero, zero, e16, mf2, ta, ma ; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v9 ; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vf_nxv2f16_unmasked: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: fmv.x.w a1, fa0 +; ZVFBFA-NEXT: vsetvli zero, a0, e16, mf2, ta, ma +; ZVFBFA-NEXT: vmv.v.x v9, a1 +; ZVFBFA-NEXT: vfwcvt.f.f.v v10, v8 +; ZVFBFA-NEXT: vfwcvt.f.f.v v8, v9 +; ZVFBFA-NEXT: vsetvli zero, zero, e32, m1, ta, ma +; ZVFBFA-NEXT: vfadd.vv v9, v10, v8 +; ZVFBFA-NEXT: vsetvli zero, zero, e16, mf2, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v8, v9 +; ZVFBFA-NEXT: ret %elt.head = insertelement <vscale x 2 x half> poison, half %b, i32 0 %vb = shufflevector <vscale x 2 x half> %elt.head, <vscale x 2 x half> poison, <vscale x 2 x i32> zeroinitializer %v = call <vscale x 2 x half> @llvm.vp.fadd.nxv2f16(<vscale x 2 x half> %va, <vscale x 2 x half> %vb, <vscale x 2 x i1> splat (i1 true), i32 %evl) @@ -889,6 +1968,17 @@ define <vscale x 4 x half> @vfadd_vv_nxv4f16(<vscale x 4 x half> %va, <vscale x ; ZVFHMIN-NEXT: vsetvli zero, zero, e16, m1, ta, ma ; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v10, v0.t ; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vv_nxv4f16: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: vsetvli zero, a0, e16, m1, ta, ma +; ZVFBFA-NEXT: vfwcvt.f.f.v v10, v9, v0.t +; ZVFBFA-NEXT: vfwcvt.f.f.v v12, v8, v0.t +; ZVFBFA-NEXT: vsetvli zero, zero, e32, m2, ta, ma +; ZVFBFA-NEXT: vfadd.vv v10, v12, v10, v0.t +; ZVFBFA-NEXT: vsetvli zero, zero, e16, m1, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v8, v10, v0.t +; ZVFBFA-NEXT: ret %v = call <vscale x 4 x half> @llvm.vp.fadd.nxv4f16(<vscale x 4 x half> %va, <vscale x 4 x half> %b, <vscale x 4 x i1> %m, i32 %evl) ret <vscale x 4 x half> %v } @@ -910,6 +2000,17 @@ define <vscale x 4 x half> @vfadd_vv_nxv4f16_unmasked(<vscale x 4 x half> %va, < ; ZVFHMIN-NEXT: vsetvli zero, zero, e16, m1, ta, ma ; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v10 ; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vv_nxv4f16_unmasked: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: vsetvli zero, a0, e16, m1, ta, ma +; ZVFBFA-NEXT: vfwcvt.f.f.v v10, v9 +; ZVFBFA-NEXT: vfwcvt.f.f.v v12, v8 +; ZVFBFA-NEXT: vsetvli zero, zero, e32, m2, ta, ma +; ZVFBFA-NEXT: vfadd.vv v10, v12, v10 +; ZVFBFA-NEXT: vsetvli zero, zero, e16, m1, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v8, v10 +; ZVFBFA-NEXT: ret %v = call <vscale x 4 x half> @llvm.vp.fadd.nxv4f16(<vscale x 4 x half> %va, <vscale x 4 x half> %b, <vscale x 4 x i1> 
splat (i1 true), i32 %evl) ret <vscale x 4 x half> %v } @@ -933,6 +2034,19 @@ define <vscale x 4 x half> @vfadd_vf_nxv4f16(<vscale x 4 x half> %va, half %b, < ; ZVFHMIN-NEXT: vsetvli zero, zero, e16, m1, ta, ma ; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v10, v0.t ; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vf_nxv4f16: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: fmv.x.w a1, fa0 +; ZVFBFA-NEXT: vsetvli zero, a0, e16, m1, ta, ma +; ZVFBFA-NEXT: vmv.v.x v12, a1 +; ZVFBFA-NEXT: vfwcvt.f.f.v v10, v8, v0.t +; ZVFBFA-NEXT: vfwcvt.f.f.v v8, v12, v0.t +; ZVFBFA-NEXT: vsetvli zero, zero, e32, m2, ta, ma +; ZVFBFA-NEXT: vfadd.vv v10, v10, v8, v0.t +; ZVFBFA-NEXT: vsetvli zero, zero, e16, m1, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v8, v10, v0.t +; ZVFBFA-NEXT: ret %elt.head = insertelement <vscale x 4 x half> poison, half %b, i32 0 %vb = shufflevector <vscale x 4 x half> %elt.head, <vscale x 4 x half> poison, <vscale x 4 x i32> zeroinitializer %v = call <vscale x 4 x half> @llvm.vp.fadd.nxv4f16(<vscale x 4 x half> %va, <vscale x 4 x half> %vb, <vscale x 4 x i1> %m, i32 %evl) @@ -958,6 +2072,19 @@ define <vscale x 4 x half> @vfadd_vf_nxv4f16_unmasked(<vscale x 4 x half> %va, h ; ZVFHMIN-NEXT: vsetvli zero, zero, e16, m1, ta, ma ; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v10 ; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vf_nxv4f16_unmasked: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: fmv.x.w a1, fa0 +; ZVFBFA-NEXT: vsetvli zero, a0, e16, m1, ta, ma +; ZVFBFA-NEXT: vmv.v.x v12, a1 +; ZVFBFA-NEXT: vfwcvt.f.f.v v10, v8 +; ZVFBFA-NEXT: vfwcvt.f.f.v v8, v12 +; ZVFBFA-NEXT: vsetvli zero, zero, e32, m2, ta, ma +; ZVFBFA-NEXT: vfadd.vv v10, v10, v8 +; ZVFBFA-NEXT: vsetvli zero, zero, e16, m1, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v8, v10 +; ZVFBFA-NEXT: ret %elt.head = insertelement <vscale x 4 x half> poison, half %b, i32 0 %vb = shufflevector <vscale x 4 x half> %elt.head, <vscale x 4 x half> poison, <vscale x 4 x i32> zeroinitializer %v = call <vscale x 4 x half> @llvm.vp.fadd.nxv4f16(<vscale x 4 x half> %va, <vscale x 4 x half> %vb, <vscale x 4 x i1> splat (i1 true), i32 %evl) @@ -983,6 +2110,17 @@ define <vscale x 8 x half> @vfadd_vv_nxv8f16(<vscale x 8 x half> %va, <vscale x ; ZVFHMIN-NEXT: vsetvli zero, zero, e16, m2, ta, ma ; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v12, v0.t ; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vv_nxv8f16: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: vsetvli zero, a0, e16, m2, ta, ma +; ZVFBFA-NEXT: vfwcvt.f.f.v v12, v10, v0.t +; ZVFBFA-NEXT: vfwcvt.f.f.v v16, v8, v0.t +; ZVFBFA-NEXT: vsetvli zero, zero, e32, m4, ta, ma +; ZVFBFA-NEXT: vfadd.vv v12, v16, v12, v0.t +; ZVFBFA-NEXT: vsetvli zero, zero, e16, m2, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v8, v12, v0.t +; ZVFBFA-NEXT: ret %v = call <vscale x 8 x half> @llvm.vp.fadd.nxv8f16(<vscale x 8 x half> %va, <vscale x 8 x half> %b, <vscale x 8 x i1> %m, i32 %evl) ret <vscale x 8 x half> %v } @@ -1004,6 +2142,17 @@ define <vscale x 8 x half> @vfadd_vv_nxv8f16_unmasked(<vscale x 8 x half> %va, < ; ZVFHMIN-NEXT: vsetvli zero, zero, e16, m2, ta, ma ; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v12 ; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vv_nxv8f16_unmasked: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: vsetvli zero, a0, e16, m2, ta, ma +; ZVFBFA-NEXT: vfwcvt.f.f.v v12, v10 +; ZVFBFA-NEXT: vfwcvt.f.f.v v16, v8 +; ZVFBFA-NEXT: vsetvli zero, zero, e32, m4, ta, ma +; ZVFBFA-NEXT: vfadd.vv v12, v16, v12 +; ZVFBFA-NEXT: vsetvli zero, zero, e16, m2, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v8, v12 +; ZVFBFA-NEXT: ret %v = call <vscale x 8 x half> @llvm.vp.fadd.nxv8f16(<vscale x 8 x half> %va, <vscale x 8 x half> %b, 
<vscale x 8 x i1> splat (i1 true), i32 %evl) ret <vscale x 8 x half> %v } @@ -1027,6 +2176,19 @@ define <vscale x 8 x half> @vfadd_vf_nxv8f16(<vscale x 8 x half> %va, half %b, < ; ZVFHMIN-NEXT: vsetvli zero, zero, e16, m2, ta, ma ; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v12, v0.t ; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vf_nxv8f16: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: fmv.x.w a1, fa0 +; ZVFBFA-NEXT: vsetvli zero, a0, e16, m2, ta, ma +; ZVFBFA-NEXT: vmv.v.x v16, a1 +; ZVFBFA-NEXT: vfwcvt.f.f.v v12, v8, v0.t +; ZVFBFA-NEXT: vfwcvt.f.f.v v8, v16, v0.t +; ZVFBFA-NEXT: vsetvli zero, zero, e32, m4, ta, ma +; ZVFBFA-NEXT: vfadd.vv v12, v12, v8, v0.t +; ZVFBFA-NEXT: vsetvli zero, zero, e16, m2, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v8, v12, v0.t +; ZVFBFA-NEXT: ret %elt.head = insertelement <vscale x 8 x half> poison, half %b, i32 0 %vb = shufflevector <vscale x 8 x half> %elt.head, <vscale x 8 x half> poison, <vscale x 8 x i32> zeroinitializer %v = call <vscale x 8 x half> @llvm.vp.fadd.nxv8f16(<vscale x 8 x half> %va, <vscale x 8 x half> %vb, <vscale x 8 x i1> %m, i32 %evl) @@ -1052,6 +2214,19 @@ define <vscale x 8 x half> @vfadd_vf_nxv8f16_unmasked(<vscale x 8 x half> %va, h ; ZVFHMIN-NEXT: vsetvli zero, zero, e16, m2, ta, ma ; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v12 ; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vf_nxv8f16_unmasked: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: fmv.x.w a1, fa0 +; ZVFBFA-NEXT: vsetvli zero, a0, e16, m2, ta, ma +; ZVFBFA-NEXT: vmv.v.x v16, a1 +; ZVFBFA-NEXT: vfwcvt.f.f.v v12, v8 +; ZVFBFA-NEXT: vfwcvt.f.f.v v8, v16 +; ZVFBFA-NEXT: vsetvli zero, zero, e32, m4, ta, ma +; ZVFBFA-NEXT: vfadd.vv v12, v12, v8 +; ZVFBFA-NEXT: vsetvli zero, zero, e16, m2, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v8, v12 +; ZVFBFA-NEXT: ret %elt.head = insertelement <vscale x 8 x half> poison, half %b, i32 0 %vb = shufflevector <vscale x 8 x half> %elt.head, <vscale x 8 x half> poison, <vscale x 8 x i32> zeroinitializer %v = call <vscale x 8 x half> @llvm.vp.fadd.nxv8f16(<vscale x 8 x half> %va, <vscale x 8 x half> %vb, <vscale x 8 x i1> splat (i1 true), i32 %evl) @@ -1077,6 +2252,17 @@ define <vscale x 16 x half> @vfadd_vv_nxv16f16(<vscale x 16 x half> %va, <vscale ; ZVFHMIN-NEXT: vsetvli zero, zero, e16, m4, ta, ma ; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v16, v0.t ; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vv_nxv16f16: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: vsetvli zero, a0, e16, m4, ta, ma +; ZVFBFA-NEXT: vfwcvt.f.f.v v16, v12, v0.t +; ZVFBFA-NEXT: vfwcvt.f.f.v v24, v8, v0.t +; ZVFBFA-NEXT: vsetvli zero, zero, e32, m8, ta, ma +; ZVFBFA-NEXT: vfadd.vv v16, v24, v16, v0.t +; ZVFBFA-NEXT: vsetvli zero, zero, e16, m4, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v8, v16, v0.t +; ZVFBFA-NEXT: ret %v = call <vscale x 16 x half> @llvm.vp.fadd.nxv16f16(<vscale x 16 x half> %va, <vscale x 16 x half> %b, <vscale x 16 x i1> %m, i32 %evl) ret <vscale x 16 x half> %v } @@ -1098,6 +2284,17 @@ define <vscale x 16 x half> @vfadd_vv_nxv16f16_unmasked(<vscale x 16 x half> %va ; ZVFHMIN-NEXT: vsetvli zero, zero, e16, m4, ta, ma ; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v16 ; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vv_nxv16f16_unmasked: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: vsetvli zero, a0, e16, m4, ta, ma +; ZVFBFA-NEXT: vfwcvt.f.f.v v16, v12 +; ZVFBFA-NEXT: vfwcvt.f.f.v v24, v8 +; ZVFBFA-NEXT: vsetvli zero, zero, e32, m8, ta, ma +; ZVFBFA-NEXT: vfadd.vv v16, v24, v16 +; ZVFBFA-NEXT: vsetvli zero, zero, e16, m4, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v8, v16 +; ZVFBFA-NEXT: ret %v = call <vscale x 16 x half> @llvm.vp.fadd.nxv16f16(<vscale x 16 x half> 
%va, <vscale x 16 x half> %b, <vscale x 16 x i1> splat (i1 true), i32 %evl) ret <vscale x 16 x half> %v } @@ -1121,6 +2318,19 @@ define <vscale x 16 x half> @vfadd_vf_nxv16f16(<vscale x 16 x half> %va, half %b ; ZVFHMIN-NEXT: vsetvli zero, zero, e16, m4, ta, ma ; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v16, v0.t ; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vf_nxv16f16: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: fmv.x.w a1, fa0 +; ZVFBFA-NEXT: vsetvli zero, a0, e16, m4, ta, ma +; ZVFBFA-NEXT: vmv.v.x v24, a1 +; ZVFBFA-NEXT: vfwcvt.f.f.v v16, v8, v0.t +; ZVFBFA-NEXT: vfwcvt.f.f.v v8, v24, v0.t +; ZVFBFA-NEXT: vsetvli zero, zero, e32, m8, ta, ma +; ZVFBFA-NEXT: vfadd.vv v16, v16, v8, v0.t +; ZVFBFA-NEXT: vsetvli zero, zero, e16, m4, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v8, v16, v0.t +; ZVFBFA-NEXT: ret %elt.head = insertelement <vscale x 16 x half> poison, half %b, i32 0 %vb = shufflevector <vscale x 16 x half> %elt.head, <vscale x 16 x half> poison, <vscale x 16 x i32> zeroinitializer %v = call <vscale x 16 x half> @llvm.vp.fadd.nxv16f16(<vscale x 16 x half> %va, <vscale x 16 x half> %vb, <vscale x 16 x i1> %m, i32 %evl) @@ -1146,6 +2356,19 @@ define <vscale x 16 x half> @vfadd_vf_nxv16f16_unmasked(<vscale x 16 x half> %va ; ZVFHMIN-NEXT: vsetvli zero, zero, e16, m4, ta, ma ; ZVFHMIN-NEXT: vfncvt.f.f.w v8, v16 ; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vf_nxv16f16_unmasked: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: fmv.x.w a1, fa0 +; ZVFBFA-NEXT: vsetvli zero, a0, e16, m4, ta, ma +; ZVFBFA-NEXT: vmv.v.x v24, a1 +; ZVFBFA-NEXT: vfwcvt.f.f.v v16, v8 +; ZVFBFA-NEXT: vfwcvt.f.f.v v8, v24 +; ZVFBFA-NEXT: vsetvli zero, zero, e32, m8, ta, ma +; ZVFBFA-NEXT: vfadd.vv v16, v16, v8 +; ZVFBFA-NEXT: vsetvli zero, zero, e16, m4, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v8, v16 +; ZVFBFA-NEXT: ret %elt.head = insertelement <vscale x 16 x half> poison, half %b, i32 0 %vb = shufflevector <vscale x 16 x half> %elt.head, <vscale x 16 x half> poison, <vscale x 16 x i32> zeroinitializer %v = call <vscale x 16 x half> @llvm.vp.fadd.nxv16f16(<vscale x 16 x half> %va, <vscale x 16 x half> %vb, <vscale x 16 x i1> splat (i1 true), i32 %evl) @@ -1209,6 +2432,55 @@ define <vscale x 32 x half> @vfadd_vv_nxv32f16(<vscale x 32 x half> %va, <vscale ; ZVFHMIN-NEXT: addi sp, sp, 16 ; ZVFHMIN-NEXT: .cfi_def_cfa_offset 0 ; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vv_nxv32f16: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: addi sp, sp, -16 +; ZVFBFA-NEXT: .cfi_def_cfa_offset 16 +; ZVFBFA-NEXT: csrr a1, vlenb +; ZVFBFA-NEXT: slli a1, a1, 3 +; ZVFBFA-NEXT: sub sp, sp, a1 +; ZVFBFA-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb +; ZVFBFA-NEXT: vsetvli a1, zero, e8, mf2, ta, ma +; ZVFBFA-NEXT: vmv1r.v v7, v0 +; ZVFBFA-NEXT: csrr a2, vlenb +; ZVFBFA-NEXT: slli a1, a2, 1 +; ZVFBFA-NEXT: srli a2, a2, 2 +; ZVFBFA-NEXT: sub a3, a0, a1 +; ZVFBFA-NEXT: vslidedown.vx v0, v0, a2 +; ZVFBFA-NEXT: sltu a2, a0, a3 +; ZVFBFA-NEXT: addi a2, a2, -1 +; ZVFBFA-NEXT: and a2, a2, a3 +; ZVFBFA-NEXT: addi a3, sp, 16 +; ZVFBFA-NEXT: vs8r.v v16, (a3) # vscale x 64-byte Folded Spill +; ZVFBFA-NEXT: vsetvli zero, a2, e16, m4, ta, ma +; ZVFBFA-NEXT: vfwcvt.f.f.v v24, v20, v0.t +; ZVFBFA-NEXT: vfwcvt.f.f.v v16, v12, v0.t +; ZVFBFA-NEXT: vsetvli zero, zero, e32, m8, ta, ma +; ZVFBFA-NEXT: vfadd.vv v16, v16, v24, v0.t +; ZVFBFA-NEXT: vsetvli zero, zero, e16, m4, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v12, v16, v0.t +; ZVFBFA-NEXT: bltu a0, a1, .LBB48_2 +; ZVFBFA-NEXT: # %bb.1: +; ZVFBFA-NEXT: mv a0, a1 +; 
ZVFBFA-NEXT: .LBB48_2: +; ZVFBFA-NEXT: vmv1r.v v0, v7 +; ZVFBFA-NEXT: addi a1, sp, 16 +; ZVFBFA-NEXT: vl8r.v v24, (a1) # vscale x 64-byte Folded Reload +; ZVFBFA-NEXT: vsetvli zero, a0, e16, m4, ta, ma +; ZVFBFA-NEXT: vfwcvt.f.f.v v16, v24, v0.t +; ZVFBFA-NEXT: vfwcvt.f.f.v v24, v8, v0.t +; ZVFBFA-NEXT: vsetvli zero, zero, e32, m8, ta, ma +; ZVFBFA-NEXT: vfadd.vv v16, v24, v16, v0.t +; ZVFBFA-NEXT: vsetvli zero, zero, e16, m4, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v8, v16, v0.t +; ZVFBFA-NEXT: csrr a0, vlenb +; ZVFBFA-NEXT: slli a0, a0, 3 +; ZVFBFA-NEXT: add sp, sp, a0 +; ZVFBFA-NEXT: .cfi_def_cfa sp, 16 +; ZVFBFA-NEXT: addi sp, sp, 16 +; ZVFBFA-NEXT: .cfi_def_cfa_offset 0 +; ZVFBFA-NEXT: ret %v = call <vscale x 32 x half> @llvm.vp.fadd.nxv32f16(<vscale x 32 x half> %va, <vscale x 32 x half> %b, <vscale x 32 x i1> %m, i32 %evl) ret <vscale x 32 x half> %v } @@ -1268,6 +2540,55 @@ define <vscale x 32 x half> @vfadd_vv_nxv32f16_unmasked(<vscale x 32 x half> %va ; ZVFHMIN-NEXT: addi sp, sp, 16 ; ZVFHMIN-NEXT: .cfi_def_cfa_offset 0 ; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vv_nxv32f16_unmasked: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: addi sp, sp, -16 +; ZVFBFA-NEXT: .cfi_def_cfa_offset 16 +; ZVFBFA-NEXT: csrr a1, vlenb +; ZVFBFA-NEXT: slli a1, a1, 3 +; ZVFBFA-NEXT: sub sp, sp, a1 +; ZVFBFA-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb +; ZVFBFA-NEXT: csrr a2, vlenb +; ZVFBFA-NEXT: vsetvli a1, zero, e8, m4, ta, ma +; ZVFBFA-NEXT: vmset.m v24 +; ZVFBFA-NEXT: slli a1, a2, 1 +; ZVFBFA-NEXT: srli a2, a2, 2 +; ZVFBFA-NEXT: sub a3, a0, a1 +; ZVFBFA-NEXT: vsetvli a4, zero, e8, mf2, ta, ma +; ZVFBFA-NEXT: vslidedown.vx v0, v24, a2 +; ZVFBFA-NEXT: sltu a2, a0, a3 +; ZVFBFA-NEXT: addi a2, a2, -1 +; ZVFBFA-NEXT: and a2, a2, a3 +; ZVFBFA-NEXT: addi a3, sp, 16 +; ZVFBFA-NEXT: vs8r.v v16, (a3) # vscale x 64-byte Folded Spill +; ZVFBFA-NEXT: vsetvli zero, a2, e16, m4, ta, ma +; ZVFBFA-NEXT: vfwcvt.f.f.v v24, v20, v0.t +; ZVFBFA-NEXT: vfwcvt.f.f.v v16, v12, v0.t +; ZVFBFA-NEXT: vsetvli zero, zero, e32, m8, ta, ma +; ZVFBFA-NEXT: vfadd.vv v16, v16, v24, v0.t +; ZVFBFA-NEXT: vsetvli zero, zero, e16, m4, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v12, v16, v0.t +; ZVFBFA-NEXT: bltu a0, a1, .LBB49_2 +; ZVFBFA-NEXT: # %bb.1: +; ZVFBFA-NEXT: mv a0, a1 +; ZVFBFA-NEXT: .LBB49_2: +; ZVFBFA-NEXT: addi a1, sp, 16 +; ZVFBFA-NEXT: vl8r.v v24, (a1) # vscale x 64-byte Folded Reload +; ZVFBFA-NEXT: vsetvli zero, a0, e16, m4, ta, ma +; ZVFBFA-NEXT: vfwcvt.f.f.v v16, v24 +; ZVFBFA-NEXT: vfwcvt.f.f.v v24, v8 +; ZVFBFA-NEXT: vsetvli zero, zero, e32, m8, ta, ma +; ZVFBFA-NEXT: vfadd.vv v16, v24, v16 +; ZVFBFA-NEXT: vsetvli zero, zero, e16, m4, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v8, v16 +; ZVFBFA-NEXT: csrr a0, vlenb +; ZVFBFA-NEXT: slli a0, a0, 3 +; ZVFBFA-NEXT: add sp, sp, a0 +; ZVFBFA-NEXT: .cfi_def_cfa sp, 16 +; ZVFBFA-NEXT: addi sp, sp, 16 +; ZVFBFA-NEXT: .cfi_def_cfa_offset 0 +; ZVFBFA-NEXT: ret %v = call <vscale x 32 x half> @llvm.vp.fadd.nxv32f16(<vscale x 32 x half> %va, <vscale x 32 x half> %b, <vscale x 32 x i1> splat (i1 true), i32 %evl) ret <vscale x 32 x half> %v } @@ -1340,6 +2661,68 @@ define <vscale x 32 x half> @vfadd_vf_nxv32f16(<vscale x 32 x half> %va, half %b ; ZVFHMIN-NEXT: addi sp, sp, 16 ; ZVFHMIN-NEXT: .cfi_def_cfa_offset 0 ; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vf_nxv32f16: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: addi sp, sp, -16 +; ZVFBFA-NEXT: .cfi_def_cfa_offset 16 +; ZVFBFA-NEXT: csrr a1, vlenb +; ZVFBFA-NEXT: slli a1, a1, 4 
+; ZVFBFA-NEXT: sub sp, sp, a1 +; ZVFBFA-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x10, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 16 * vlenb +; ZVFBFA-NEXT: vsetvli a1, zero, e16, m8, ta, ma +; ZVFBFA-NEXT: vmv1r.v v7, v0 +; ZVFBFA-NEXT: fmv.x.w a1, fa0 +; ZVFBFA-NEXT: csrr a2, vlenb +; ZVFBFA-NEXT: vmv.v.x v24, a1 +; ZVFBFA-NEXT: slli a1, a2, 1 +; ZVFBFA-NEXT: srli a2, a2, 2 +; ZVFBFA-NEXT: sub a3, a0, a1 +; ZVFBFA-NEXT: vsetvli a4, zero, e8, mf2, ta, ma +; ZVFBFA-NEXT: vslidedown.vx v0, v0, a2 +; ZVFBFA-NEXT: sltu a2, a0, a3 +; ZVFBFA-NEXT: addi a2, a2, -1 +; ZVFBFA-NEXT: and a2, a2, a3 +; ZVFBFA-NEXT: csrr a3, vlenb +; ZVFBFA-NEXT: slli a3, a3, 3 +; ZVFBFA-NEXT: add a3, sp, a3 +; ZVFBFA-NEXT: addi a3, a3, 16 +; ZVFBFA-NEXT: vs8r.v v24, (a3) # vscale x 64-byte Folded Spill +; ZVFBFA-NEXT: vsetvli zero, a2, e16, m4, ta, ma +; ZVFBFA-NEXT: vfwcvt.f.f.v v16, v28, v0.t +; ZVFBFA-NEXT: vfwcvt.f.f.v v24, v12, v0.t +; ZVFBFA-NEXT: vsetvli zero, zero, e32, m8, ta, ma +; ZVFBFA-NEXT: vfadd.vv v16, v24, v16, v0.t +; ZVFBFA-NEXT: vsetvli zero, zero, e16, m4, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v12, v16, v0.t +; ZVFBFA-NEXT: bltu a0, a1, .LBB50_2 +; ZVFBFA-NEXT: # %bb.1: +; ZVFBFA-NEXT: mv a0, a1 +; ZVFBFA-NEXT: .LBB50_2: +; ZVFBFA-NEXT: vmv1r.v v0, v7 +; ZVFBFA-NEXT: vsetvli zero, a0, e16, m4, ta, ma +; ZVFBFA-NEXT: vfwcvt.f.f.v v16, v8, v0.t +; ZVFBFA-NEXT: addi a0, sp, 16 +; ZVFBFA-NEXT: vs8r.v v16, (a0) # vscale x 64-byte Folded Spill +; ZVFBFA-NEXT: csrr a0, vlenb +; ZVFBFA-NEXT: slli a0, a0, 3 +; ZVFBFA-NEXT: add a0, sp, a0 +; ZVFBFA-NEXT: addi a0, a0, 16 +; ZVFBFA-NEXT: vl8r.v v16, (a0) # vscale x 64-byte Folded Reload +; ZVFBFA-NEXT: vfwcvt.f.f.v v24, v16, v0.t +; ZVFBFA-NEXT: addi a0, sp, 16 +; ZVFBFA-NEXT: vl8r.v v16, (a0) # vscale x 64-byte Folded Reload +; ZVFBFA-NEXT: vsetvli zero, zero, e32, m8, ta, ma +; ZVFBFA-NEXT: vfadd.vv v16, v16, v24, v0.t +; ZVFBFA-NEXT: vsetvli zero, zero, e16, m4, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v8, v16, v0.t +; ZVFBFA-NEXT: csrr a0, vlenb +; ZVFBFA-NEXT: slli a0, a0, 4 +; ZVFBFA-NEXT: add sp, sp, a0 +; ZVFBFA-NEXT: .cfi_def_cfa sp, 16 +; ZVFBFA-NEXT: addi sp, sp, 16 +; ZVFBFA-NEXT: .cfi_def_cfa_offset 0 +; ZVFBFA-NEXT: ret %elt.head = insertelement <vscale x 32 x half> poison, half %b, i32 0 %vb = shufflevector <vscale x 32 x half> %elt.head, <vscale x 32 x half> poison, <vscale x 32 x i32> zeroinitializer %v = call <vscale x 32 x half> @llvm.vp.fadd.nxv32f16(<vscale x 32 x half> %va, <vscale x 32 x half> %vb, <vscale x 32 x i1> %m, i32 %evl) @@ -1403,6 +2786,57 @@ define <vscale x 32 x half> @vfadd_vf_nxv32f16_unmasked(<vscale x 32 x half> %va ; ZVFHMIN-NEXT: addi sp, sp, 16 ; ZVFHMIN-NEXT: .cfi_def_cfa_offset 0 ; ZVFHMIN-NEXT: ret +; +; ZVFBFA-LABEL: vfadd_vf_nxv32f16_unmasked: +; ZVFBFA: # %bb.0: +; ZVFBFA-NEXT: addi sp, sp, -16 +; ZVFBFA-NEXT: .cfi_def_cfa_offset 16 +; ZVFBFA-NEXT: csrr a1, vlenb +; ZVFBFA-NEXT: slli a1, a1, 3 +; ZVFBFA-NEXT: sub sp, sp, a1 +; ZVFBFA-NEXT: .cfi_escape 0x0f, 0x0d, 0x72, 0x00, 0x11, 0x10, 0x22, 0x11, 0x08, 0x92, 0xa2, 0x38, 0x00, 0x1e, 0x22 # sp + 16 + 8 * vlenb +; ZVFBFA-NEXT: fmv.x.w a1, fa0 +; ZVFBFA-NEXT: csrr a2, vlenb +; ZVFBFA-NEXT: vsetvli a3, zero, e16, m8, ta, ma +; ZVFBFA-NEXT: vmset.m v24 +; ZVFBFA-NEXT: vmv.v.x v16, a1 +; ZVFBFA-NEXT: slli a1, a2, 1 +; ZVFBFA-NEXT: srli a2, a2, 2 +; ZVFBFA-NEXT: sub a3, a0, a1 +; ZVFBFA-NEXT: vsetvli a4, zero, e8, mf2, ta, ma +; ZVFBFA-NEXT: vslidedown.vx v0, v24, a2 +; ZVFBFA-NEXT: sltu a2, a0, a3 +; ZVFBFA-NEXT: addi a2, a2, -1 +; 
ZVFBFA-NEXT: and a2, a2, a3 +; ZVFBFA-NEXT: addi a3, sp, 16 +; ZVFBFA-NEXT: vs8r.v v16, (a3) # vscale x 64-byte Folded Spill +; ZVFBFA-NEXT: vsetvli zero, a2, e16, m4, ta, ma +; ZVFBFA-NEXT: vfwcvt.f.f.v v24, v20, v0.t +; ZVFBFA-NEXT: vfwcvt.f.f.v v16, v12, v0.t +; ZVFBFA-NEXT: vsetvli zero, zero, e32, m8, ta, ma +; ZVFBFA-NEXT: vfadd.vv v16, v16, v24, v0.t +; ZVFBFA-NEXT: vsetvli zero, zero, e16, m4, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v12, v16, v0.t +; ZVFBFA-NEXT: bltu a0, a1, .LBB51_2 +; ZVFBFA-NEXT: # %bb.1: +; ZVFBFA-NEXT: mv a0, a1 +; ZVFBFA-NEXT: .LBB51_2: +; ZVFBFA-NEXT: vsetvli zero, a0, e16, m4, ta, ma +; ZVFBFA-NEXT: vfwcvt.f.f.v v16, v8 +; ZVFBFA-NEXT: addi a0, sp, 16 +; ZVFBFA-NEXT: vl8r.v v0, (a0) # vscale x 64-byte Folded Reload +; ZVFBFA-NEXT: vfwcvt.f.f.v v24, v0 +; ZVFBFA-NEXT: vsetvli zero, zero, e32, m8, ta, ma +; ZVFBFA-NEXT: vfadd.vv v16, v16, v24 +; ZVFBFA-NEXT: vsetvli zero, zero, e16, m4, ta, ma +; ZVFBFA-NEXT: vfncvt.f.f.w v8, v16 +; ZVFBFA-NEXT: csrr a0, vlenb +; ZVFBFA-NEXT: slli a0, a0, 3 +; ZVFBFA-NEXT: add sp, sp, a0 +; ZVFBFA-NEXT: .cfi_def_cfa sp, 16 +; ZVFBFA-NEXT: addi sp, sp, 16 +; ZVFBFA-NEXT: .cfi_def_cfa_offset 0 +; ZVFBFA-NEXT: ret %elt.head = insertelement <vscale x 32 x half> poison, half %b, i32 0 %vb = shufflevector <vscale x 32 x half> %elt.head, <vscale x 32 x half> poison, <vscale x 32 x i32> zeroinitializer %v = call <vscale x 32 x half> @llvm.vp.fadd.nxv32f16(<vscale x 32 x half> %va, <vscale x 32 x half> %vb, <vscale x 32 x i1> splat (i1 true), i32 %evl) diff --git a/llvm/test/CodeGen/Thumb/PR17309.ll b/llvm/test/CodeGen/Thumb/PR17309.ll index b548499..4da25ca 100644 --- a/llvm/test/CodeGen/Thumb/PR17309.ll +++ b/llvm/test/CodeGen/Thumb/PR17309.ll @@ -48,7 +48,7 @@ declare void @llvm.lifetime.start.p0(i64, ptr nocapture) #1 declare void @llvm.lifetime.end.p0(i64, ptr nocapture) #1 -attributes #0 = { nounwind optsize "less-precise-fpmad"="false" "frame-pointer"="non-leaf" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "stack-protector-buffer-size"="8" "unsafe-fp-math"="false" "use-soft-float"="false" } +attributes #0 = { nounwind optsize "less-precise-fpmad"="false" "frame-pointer"="non-leaf" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "stack-protector-buffer-size"="8" "use-soft-float"="false" } attributes #1 = { nounwind } -attributes #2 = { optsize "less-precise-fpmad"="false" "frame-pointer"="non-leaf" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "stack-protector-buffer-size"="8" "unsafe-fp-math"="false" "use-soft-float"="false" } +attributes #2 = { optsize "less-precise-fpmad"="false" "frame-pointer"="non-leaf" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "stack-protector-buffer-size"="8" "use-soft-float"="false" } attributes #3 = { nounwind optsize } diff --git a/llvm/test/CodeGen/Thumb/fastcc.ll b/llvm/test/CodeGen/Thumb/fastcc.ll index be356d8..000e20a 100644 --- a/llvm/test/CodeGen/Thumb/fastcc.ll +++ b/llvm/test/CodeGen/Thumb/fastcc.ll @@ -29,7 +29,7 @@ for.body193: ; preds = %for.body193, %for.e br label %for.body193 } -attributes #0 = { optsize "less-precise-fpmad"="false" "frame-pointer"="all" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "stack-protector-buffer-size"="8" "unsafe-fp-math"="false" "use-soft-float"="false" } +attributes #0 = { optsize "less-precise-fpmad"="false" "frame-pointer"="all" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "stack-protector-buffer-size"="8" "use-soft-float"="false" } !llvm.ident = !{!0} diff --git a/llvm/test/CodeGen/Thumb/ldm-merge-call.ll 
b/llvm/test/CodeGen/Thumb/ldm-merge-call.ll index 700b207..33c4346 100644 --- a/llvm/test/CodeGen/Thumb/ldm-merge-call.ll +++ b/llvm/test/CodeGen/Thumb/ldm-merge-call.ll @@ -19,6 +19,6 @@ entry: ; Function Attrs: optsize declare i32 @bar(i32, i32, i32, i32) #1 -attributes #0 = { nounwind optsize "less-precise-fpmad"="false" "frame-pointer"="all" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "stack-protector-buffer-size"="8" "unsafe-fp-math"="false" "use-soft-float"="false" } -attributes #1 = { optsize "less-precise-fpmad"="false" "frame-pointer"="all" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "stack-protector-buffer-size"="8" "unsafe-fp-math"="false" "use-soft-float"="false" } +attributes #0 = { nounwind optsize "less-precise-fpmad"="false" "frame-pointer"="all" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "stack-protector-buffer-size"="8" "use-soft-float"="false" } +attributes #1 = { optsize "less-precise-fpmad"="false" "frame-pointer"="all" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "stack-protector-buffer-size"="8" "use-soft-float"="false" } attributes #2 = { nounwind optsize } diff --git a/llvm/test/CodeGen/Thumb/stack_guard_remat.ll b/llvm/test/CodeGen/Thumb/stack_guard_remat.ll index cc14239..82314be 100644 --- a/llvm/test/CodeGen/Thumb/stack_guard_remat.ll +++ b/llvm/test/CodeGen/Thumb/stack_guard_remat.ll @@ -50,4 +50,4 @@ declare void @foo3(ptr) ; Function Attrs: nounwind declare void @llvm.lifetime.end.p0(i64, ptr nocapture) -attributes #0 = { nounwind ssp "less-precise-fpmad"="false" "frame-pointer"="all" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "stack-protector-buffer-size"="8" "unsafe-fp-math"="false" "use-soft-float"="false" } +attributes #0 = { nounwind ssp "less-precise-fpmad"="false" "frame-pointer"="all" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "stack-protector-buffer-size"="8" "use-soft-float"="false" } diff --git a/llvm/test/CodeGen/Thumb/stm-merge.ll b/llvm/test/CodeGen/Thumb/stm-merge.ll index 837c2f6..426210a 100644 --- a/llvm/test/CodeGen/Thumb/stm-merge.ll +++ b/llvm/test/CodeGen/Thumb/stm-merge.ll @@ -38,4 +38,4 @@ for.end8: ; preds = %for.body5 ret void } -attributes #0 = { nounwind optsize "less-precise-fpmad"="false" "frame-pointer"="all" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "stack-protector-buffer-size"="8" "unsafe-fp-math"="false" "use-soft-float"="false" } +attributes #0 = { nounwind optsize "less-precise-fpmad"="false" "frame-pointer"="all" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "stack-protector-buffer-size"="8" "use-soft-float"="false" } diff --git a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/vpt-block-debug.mir b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/vpt-block-debug.mir index 4e2a275..00f0a1c 100644 --- a/llvm/test/CodeGen/Thumb2/LowOverheadLoops/vpt-block-debug.mir +++ b/llvm/test/CodeGen/Thumb2/LowOverheadLoops/vpt-block-debug.mir @@ -118,7 +118,7 @@ declare i32 @llvm.start.loop.iterations.i32(i32) declare i32 @llvm.loop.decrement.reg.i32(i32, i32) - attributes #0 = { nofree nounwind "frame-pointer"="all" "min-legal-vector-width"="0" "no-infs-fp-math"="true" "no-nans-fp-math"="true" "no-signed-zeros-fp-math"="true" "no-trapping-math"="true" "stack-protector-buffer-size"="8" "target-cpu"="cortex-m55" 
"target-features"="+armv8.1-m.main,+cdecp0,+dsp,+fp-armv8d16,+fp-armv8d16sp,+fp16,+fp64,+fullfp16,+hwdiv,+lob,+mve,+mve.fp,+ras,+strict-align,+thumb-mode,+vfp2,+vfp2sp,+vfp3d16,+vfp3d16sp,+vfp4d16,+vfp4d16sp,-aes,-bf16,-cdecp1,-cdecp2,-cdecp3,-cdecp4,-cdecp5,-cdecp6,-cdecp7,-crc,-crypto,-d32,-dotprod,-fp-armv8,-fp-armv8sp,-fp16fml,-hwdiv-arm,-i8mm,-neon,-sb,-sha2,-vfp3,-vfp3sp,-vfp4,-vfp4sp" "unsafe-fp-math"="true" } + attributes #0 = { nofree nounwind "frame-pointer"="all" "min-legal-vector-width"="0" "no-infs-fp-math"="true" "no-nans-fp-math"="true" "no-signed-zeros-fp-math"="true" "no-trapping-math"="true" "stack-protector-buffer-size"="8" "target-cpu"="cortex-m55" "target-features"="+armv8.1-m.main,+cdecp0,+dsp,+fp-armv8d16,+fp-armv8d16sp,+fp16,+fp64,+fullfp16,+hwdiv,+lob,+mve,+mve.fp,+ras,+strict-align,+thumb-mode,+vfp2,+vfp2sp,+vfp3d16,+vfp3d16sp,+vfp4d16,+vfp4d16sp,-aes,-bf16,-cdecp1,-cdecp2,-cdecp3,-cdecp4,-cdecp5,-cdecp6,-cdecp7,-crc,-crypto,-d32,-dotprod,-fp-armv8,-fp-armv8sp,-fp16fml,-hwdiv-arm,-i8mm,-neon,-sb,-sha2,-vfp3,-vfp3sp,-vfp4,-vfp4sp" } !0 = distinct !DICompileUnit(language: DW_LANG_C99, file: !1, producer: "clang", isOptimized: true, runtimeVersion: 0, emissionKind: FullDebug, retainedTypes: !2, splitDebugInlining: false, nameTableKind: None) !1 = !DIFile(filename: "tmp.c", directory: "") diff --git a/llvm/test/CodeGen/Thumb2/mve-vpt-2-blocks-1-pred.mir b/llvm/test/CodeGen/Thumb2/mve-vpt-2-blocks-1-pred.mir index 4e817ba..e48a038 100644 --- a/llvm/test/CodeGen/Thumb2/mve-vpt-2-blocks-1-pred.mir +++ b/llvm/test/CodeGen/Thumb2/mve-vpt-2-blocks-1-pred.mir @@ -13,7 +13,7 @@ ret <4 x float> %inactive1 } - attributes #0 = { nounwind readnone "correctly-rounded-divide-sqrt-fp-math"="false" "denormal-fp-math"="preserve-sign" "disable-tail-calls"="false" "less-precise-fpmad"="false" "min-legal-vector-width"="128" "frame-pointer"="none" "no-infs-fp-math"="true" "no-jump-tables"="false" "no-nans-fp-math"="true" "no-signed-zeros-fp-math"="true" "no-trapping-math"="true" "stack-protector-buffer-size"="8" "target-cpu"="generic" "target-features"="+armv8.1-m.main,+hwdiv,+mve.fp,+ras,+thumb-mode" "unsafe-fp-math"="false" "use-soft-float"="false" } + attributes #0 = { nounwind readnone "correctly-rounded-divide-sqrt-fp-math"="false" "denormal-fp-math"="preserve-sign" "disable-tail-calls"="false" "less-precise-fpmad"="false" "min-legal-vector-width"="128" "frame-pointer"="none" "no-infs-fp-math"="true" "no-jump-tables"="false" "no-nans-fp-math"="true" "no-signed-zeros-fp-math"="true" "no-trapping-math"="true" "stack-protector-buffer-size"="8" "target-cpu"="generic" "target-features"="+armv8.1-m.main,+hwdiv,+mve.fp,+ras,+thumb-mode" "use-soft-float"="false" } attributes #1 = { nounwind readnone } attributes #2 = { nounwind } diff --git a/llvm/test/CodeGen/Thumb2/mve-vpt-2-blocks-2-preds.mir b/llvm/test/CodeGen/Thumb2/mve-vpt-2-blocks-2-preds.mir index 6b5cbce..b8657c2 100644 --- a/llvm/test/CodeGen/Thumb2/mve-vpt-2-blocks-2-preds.mir +++ b/llvm/test/CodeGen/Thumb2/mve-vpt-2-blocks-2-preds.mir @@ -16,7 +16,7 @@ declare <4 x float> @llvm.arm.mve.vminnm.m.v4f32.v4f32.v4f32.v4f32.i32(<4 x float>, <4 x float>, <4 x float>, i32) #1 - attributes #0 = { nounwind readnone "correctly-rounded-divide-sqrt-fp-math"="false" "denormal-fp-math"="preserve-sign" "disable-tail-calls"="false" "less-precise-fpmad"="false" "min-legal-vector-width"="128" "frame-pointer"="none" "no-infs-fp-math"="true" "no-jump-tables"="false" "no-nans-fp-math"="true" "no-signed-zeros-fp-math"="true" "no-trapping-math"="true" 
"stack-protector-buffer-size"="8" "target-cpu"="generic" "target-features"="+armv8.1-m.main,+hwdiv,+mve.fp,+ras,+thumb-mode" "unsafe-fp-math"="false" "use-soft-float"="false" } + attributes #0 = { nounwind readnone "correctly-rounded-divide-sqrt-fp-math"="false" "denormal-fp-math"="preserve-sign" "disable-tail-calls"="false" "less-precise-fpmad"="false" "min-legal-vector-width"="128" "frame-pointer"="none" "no-infs-fp-math"="true" "no-jump-tables"="false" "no-nans-fp-math"="true" "no-signed-zeros-fp-math"="true" "no-trapping-math"="true" "stack-protector-buffer-size"="8" "target-cpu"="generic" "target-features"="+armv8.1-m.main,+hwdiv,+mve.fp,+ras,+thumb-mode" "use-soft-float"="false" } attributes #1 = { nounwind readnone } attributes #2 = { nounwind } diff --git a/llvm/test/CodeGen/Thumb2/mve-vpt-2-blocks-ctrl-flow.mir b/llvm/test/CodeGen/Thumb2/mve-vpt-2-blocks-ctrl-flow.mir index 91ccf3b..68a38a4 100644 --- a/llvm/test/CodeGen/Thumb2/mve-vpt-2-blocks-ctrl-flow.mir +++ b/llvm/test/CodeGen/Thumb2/mve-vpt-2-blocks-ctrl-flow.mir @@ -19,7 +19,7 @@ declare <4 x float> @llvm.arm.mve.vminnm.m.v4f32.v4f32.v4f32.v4f32.i32(<4 x float>, <4 x float>, <4 x float>, i32) #1 - attributes #0 = { nounwind readnone "correctly-rounded-divide-sqrt-fp-math"="false" "denormal-fp-math"="preserve-sign" "disable-tail-calls"="false" "less-precise-fpmad"="false" "min-legal-vector-width"="128" "frame-pointer"="none" "no-infs-fp-math"="true" "no-jump-tables"="false" "no-nans-fp-math"="true" "no-signed-zeros-fp-math"="true" "no-trapping-math"="true" "stack-protector-buffer-size"="8" "target-cpu"="generic" "target-features"="+armv8.1-m.main,+hwdiv,+mve.fp,+ras,+thumb-mode" "unsafe-fp-math"="false" "use-soft-float"="false" } + attributes #0 = { nounwind readnone "correctly-rounded-divide-sqrt-fp-math"="false" "denormal-fp-math"="preserve-sign" "disable-tail-calls"="false" "less-precise-fpmad"="false" "min-legal-vector-width"="128" "frame-pointer"="none" "no-infs-fp-math"="true" "no-jump-tables"="false" "no-nans-fp-math"="true" "no-signed-zeros-fp-math"="true" "no-trapping-math"="true" "stack-protector-buffer-size"="8" "target-cpu"="generic" "target-features"="+armv8.1-m.main,+hwdiv,+mve.fp,+ras,+thumb-mode" "use-soft-float"="false" } attributes #1 = { nounwind readnone } attributes #2 = { nounwind } diff --git a/llvm/test/CodeGen/Thumb2/mve-vpt-2-blocks-non-consecutive-ins.mir b/llvm/test/CodeGen/Thumb2/mve-vpt-2-blocks-non-consecutive-ins.mir index 1f19ed9..caa7b17 100644 --- a/llvm/test/CodeGen/Thumb2/mve-vpt-2-blocks-non-consecutive-ins.mir +++ b/llvm/test/CodeGen/Thumb2/mve-vpt-2-blocks-non-consecutive-ins.mir @@ -17,7 +17,7 @@ declare <4 x float> @llvm.arm.mve.vminnm.m.v4f32.v4f32.v4f32.v4f32.i32(<4 x float>, <4 x float>, <4 x float>, i32) #1 - attributes #0 = { nounwind readnone "correctly-rounded-divide-sqrt-fp-math"="false" "denormal-fp-math"="preserve-sign" "disable-tail-calls"="false" "less-precise-fpmad"="false" "min-legal-vector-width"="128" "frame-pointer"="none" "no-infs-fp-math"="true" "no-jump-tables"="false" "no-nans-fp-math"="true" "no-signed-zeros-fp-math"="true" "no-trapping-math"="true" "stack-protector-buffer-size"="8" "target-cpu"="generic" "target-features"="+armv8.1-m.main,+hwdiv,+mve.fp,+ras,+thumb-mode" "unsafe-fp-math"="false" "use-soft-float"="false" } + attributes #0 = { nounwind readnone "correctly-rounded-divide-sqrt-fp-math"="false" "denormal-fp-math"="preserve-sign" "disable-tail-calls"="false" "less-precise-fpmad"="false" "min-legal-vector-width"="128" "frame-pointer"="none" 
"no-infs-fp-math"="true" "no-jump-tables"="false" "no-nans-fp-math"="true" "no-signed-zeros-fp-math"="true" "no-trapping-math"="true" "stack-protector-buffer-size"="8" "target-cpu"="generic" "target-features"="+armv8.1-m.main,+hwdiv,+mve.fp,+ras,+thumb-mode" "use-soft-float"="false" } attributes #1 = { nounwind readnone } attributes #2 = { nounwind } diff --git a/llvm/test/CodeGen/Thumb2/mve-vpt-2-blocks.mir b/llvm/test/CodeGen/Thumb2/mve-vpt-2-blocks.mir index 4f75b01..2f07485 100644 --- a/llvm/test/CodeGen/Thumb2/mve-vpt-2-blocks.mir +++ b/llvm/test/CodeGen/Thumb2/mve-vpt-2-blocks.mir @@ -18,7 +18,7 @@ declare <4 x float> @llvm.arm.mve.vminnm.m.v4f32.v4f32.v4f32.v4f32.i32(<4 x float>, <4 x float>, <4 x float>, i32) #1 - attributes #0 = { nounwind readnone "correctly-rounded-divide-sqrt-fp-math"="false" "denormal-fp-math"="preserve-sign" "disable-tail-calls"="false" "less-precise-fpmad"="false" "min-legal-vector-width"="128" "frame-pointer"="none" "no-infs-fp-math"="true" "no-jump-tables"="false" "no-nans-fp-math"="true" "no-signed-zeros-fp-math"="true" "no-trapping-math"="true" "stack-protector-buffer-size"="8" "target-cpu"="generic" "target-features"="+armv8.1-m.main,+hwdiv,+mve.fp,+ras,+thumb-mode" "unsafe-fp-math"="false" "use-soft-float"="false" } + attributes #0 = { nounwind readnone "correctly-rounded-divide-sqrt-fp-math"="false" "denormal-fp-math"="preserve-sign" "disable-tail-calls"="false" "less-precise-fpmad"="false" "min-legal-vector-width"="128" "frame-pointer"="none" "no-infs-fp-math"="true" "no-jump-tables"="false" "no-nans-fp-math"="true" "no-signed-zeros-fp-math"="true" "no-trapping-math"="true" "stack-protector-buffer-size"="8" "target-cpu"="generic" "target-features"="+armv8.1-m.main,+hwdiv,+mve.fp,+ras,+thumb-mode" "use-soft-float"="false" } attributes #1 = { nounwind readnone } attributes #2 = { nounwind } diff --git a/llvm/test/CodeGen/Thumb2/mve-vpt-3-blocks-kill-vpr.mir b/llvm/test/CodeGen/Thumb2/mve-vpt-3-blocks-kill-vpr.mir index c268388..f6b64a0 100644 --- a/llvm/test/CodeGen/Thumb2/mve-vpt-3-blocks-kill-vpr.mir +++ b/llvm/test/CodeGen/Thumb2/mve-vpt-3-blocks-kill-vpr.mir @@ -17,7 +17,7 @@ declare <4 x float> @llvm.arm.mve.vminnm.m.v4f32.v4f32.v4f32.v4f32.i32(<4 x float>, <4 x float>, <4 x float>, i32) #1 - attributes #0 = { nounwind readnone "correctly-rounded-divide-sqrt-fp-math"="false" "denormal-fp-math"="preserve-sign" "disable-tail-calls"="false" "less-precise-fpmad"="false" "min-legal-vector-width"="128" "frame-pointer"="none" "no-infs-fp-math"="true" "no-jump-tables"="false" "no-nans-fp-math"="true" "no-signed-zeros-fp-math"="true" "no-trapping-math"="true" "stack-protector-buffer-size"="8" "target-cpu"="generic" "target-features"="+armv8.1-m.main,+hwdiv,+mve.fp,+ras,+thumb-mode" "unsafe-fp-math"="false" "use-soft-float"="false" } + attributes #0 = { nounwind readnone "correctly-rounded-divide-sqrt-fp-math"="false" "denormal-fp-math"="preserve-sign" "disable-tail-calls"="false" "less-precise-fpmad"="false" "min-legal-vector-width"="128" "frame-pointer"="none" "no-infs-fp-math"="true" "no-jump-tables"="false" "no-nans-fp-math"="true" "no-signed-zeros-fp-math"="true" "no-trapping-math"="true" "stack-protector-buffer-size"="8" "target-cpu"="generic" "target-features"="+armv8.1-m.main,+hwdiv,+mve.fp,+ras,+thumb-mode" "use-soft-float"="false" } attributes #1 = { nounwind readnone } attributes #2 = { nounwind } diff --git a/llvm/test/CodeGen/Thumb2/mve-vpt-block-1-ins.mir b/llvm/test/CodeGen/Thumb2/mve-vpt-block-1-ins.mir index 1e9e0e3..d086566 100644 --- 
a/llvm/test/CodeGen/Thumb2/mve-vpt-block-1-ins.mir +++ b/llvm/test/CodeGen/Thumb2/mve-vpt-block-1-ins.mir @@ -14,7 +14,7 @@ declare <4 x float> @llvm.arm.mve.vminnm.m.v4f32.v4f32.v4f32.v4f32.i32(<4 x float>, <4 x float>, <4 x float>, i32) #1 - attributes #0 = { nounwind readnone "correctly-rounded-divide-sqrt-fp-math"="false" "denormal-fp-math"="preserve-sign" "disable-tail-calls"="false" "less-precise-fpmad"="false" "min-legal-vector-width"="128" "frame-pointer"="none" "no-infs-fp-math"="true" "no-jump-tables"="false" "no-nans-fp-math"="true" "no-signed-zeros-fp-math"="true" "no-trapping-math"="true" "stack-protector-buffer-size"="8" "target-cpu"="generic" "target-features"="+armv8.1-m.main,+hwdiv,+mve.fp,+ras,+thumb-mode" "unsafe-fp-math"="false" "use-soft-float"="false" } + attributes #0 = { nounwind readnone "correctly-rounded-divide-sqrt-fp-math"="false" "denormal-fp-math"="preserve-sign" "disable-tail-calls"="false" "less-precise-fpmad"="false" "min-legal-vector-width"="128" "frame-pointer"="none" "no-infs-fp-math"="true" "no-jump-tables"="false" "no-nans-fp-math"="true" "no-signed-zeros-fp-math"="true" "no-trapping-math"="true" "stack-protector-buffer-size"="8" "target-cpu"="generic" "target-features"="+armv8.1-m.main,+hwdiv,+mve.fp,+ras,+thumb-mode" "use-soft-float"="false" } attributes #1 = { nounwind readnone } attributes #2 = { nounwind } diff --git a/llvm/test/CodeGen/Thumb2/mve-vpt-block-2-ins.mir b/llvm/test/CodeGen/Thumb2/mve-vpt-block-2-ins.mir index cb73cdf..5436882 100644 --- a/llvm/test/CodeGen/Thumb2/mve-vpt-block-2-ins.mir +++ b/llvm/test/CodeGen/Thumb2/mve-vpt-block-2-ins.mir @@ -15,7 +15,7 @@ declare <4 x float> @llvm.arm.mve.vminnm.m.v4f32.v4f32.v4f32.v4f32.i32(<4 x float>, <4 x float>, <4 x float>, i32) #1 - attributes #0 = { nounwind readnone "correctly-rounded-divide-sqrt-fp-math"="false" "denormal-fp-math"="preserve-sign" "disable-tail-calls"="false" "less-precise-fpmad"="false" "min-legal-vector-width"="128" "frame-pointer"="none" "no-infs-fp-math"="true" "no-jump-tables"="false" "no-nans-fp-math"="true" "no-signed-zeros-fp-math"="true" "no-trapping-math"="true" "stack-protector-buffer-size"="8" "target-cpu"="generic" "target-features"="+armv8.1-m.main,+hwdiv,+mve.fp,+ras,+thumb-mode" "unsafe-fp-math"="false" "use-soft-float"="false" } + attributes #0 = { nounwind readnone "correctly-rounded-divide-sqrt-fp-math"="false" "denormal-fp-math"="preserve-sign" "disable-tail-calls"="false" "less-precise-fpmad"="false" "min-legal-vector-width"="128" "frame-pointer"="none" "no-infs-fp-math"="true" "no-jump-tables"="false" "no-nans-fp-math"="true" "no-signed-zeros-fp-math"="true" "no-trapping-math"="true" "stack-protector-buffer-size"="8" "target-cpu"="generic" "target-features"="+armv8.1-m.main,+hwdiv,+mve.fp,+ras,+thumb-mode" "use-soft-float"="false" } attributes #1 = { nounwind readnone } attributes #2 = { nounwind } diff --git a/llvm/test/CodeGen/Thumb2/mve-vpt-block-4-ins.mir b/llvm/test/CodeGen/Thumb2/mve-vpt-block-4-ins.mir index 62d7640d..435836d 100644 --- a/llvm/test/CodeGen/Thumb2/mve-vpt-block-4-ins.mir +++ b/llvm/test/CodeGen/Thumb2/mve-vpt-block-4-ins.mir @@ -17,7 +17,7 @@ declare <4 x float> @llvm.arm.mve.vminnm.m.v4f32.v4f32.v4f32.v4f32.i32(<4 x float>, <4 x float>, <4 x float>, i32) #1 - attributes #0 = { nounwind readnone "correctly-rounded-divide-sqrt-fp-math"="false" "denormal-fp-math"="preserve-sign" "disable-tail-calls"="false" "less-precise-fpmad"="false" "min-legal-vector-width"="128" "frame-pointer"="none" "no-infs-fp-math"="true" 
"no-jump-tables"="false" "no-nans-fp-math"="true" "no-signed-zeros-fp-math"="true" "no-trapping-math"="true" "stack-protector-buffer-size"="8" "target-cpu"="generic" "target-features"="+armv8.1-m.main,+hwdiv,+mve.fp,+ras,+thumb-mode" "unsafe-fp-math"="false" "use-soft-float"="false" } + attributes #0 = { nounwind readnone "correctly-rounded-divide-sqrt-fp-math"="false" "denormal-fp-math"="preserve-sign" "disable-tail-calls"="false" "less-precise-fpmad"="false" "min-legal-vector-width"="128" "frame-pointer"="none" "no-infs-fp-math"="true" "no-jump-tables"="false" "no-nans-fp-math"="true" "no-signed-zeros-fp-math"="true" "no-trapping-math"="true" "stack-protector-buffer-size"="8" "target-cpu"="generic" "target-features"="+armv8.1-m.main,+hwdiv,+mve.fp,+ras,+thumb-mode" "use-soft-float"="false" } attributes #1 = { nounwind readnone } attributes #2 = { nounwind } diff --git a/llvm/test/CodeGen/Thumb2/mve-vpt-block-elses.mir b/llvm/test/CodeGen/Thumb2/mve-vpt-block-elses.mir index 130c7f4..dc195dd 100644 --- a/llvm/test/CodeGen/Thumb2/mve-vpt-block-elses.mir +++ b/llvm/test/CodeGen/Thumb2/mve-vpt-block-elses.mir @@ -17,7 +17,7 @@ declare <4 x float> @llvm.arm.mve.vminnm.m.v4f32.v4f32.v4f32.v4f32.i32(<4 x float>, <4 x float>, <4 x float>, i32) #1 - attributes #0 = { nounwind readnone "correctly-rounded-divide-sqrt-fp-math"="false" "denormal-fp-math"="preserve-sign" "disable-tail-calls"="false" "less-precise-fpmad"="false" "min-legal-vector-width"="128" "frame-pointer"="none" "no-infs-fp-math"="true" "no-jump-tables"="false" "no-nans-fp-math"="true" "no-signed-zeros-fp-math"="true" "no-trapping-math"="true" "stack-protector-buffer-size"="8" "target-cpu"="generic" "target-features"="+armv8.1-m.main,+hwdiv,+mve.fp,+ras,+thumb-mode" "unsafe-fp-math"="false" "use-soft-float"="false" } + attributes #0 = { nounwind readnone "correctly-rounded-divide-sqrt-fp-math"="false" "denormal-fp-math"="preserve-sign" "disable-tail-calls"="false" "less-precise-fpmad"="false" "min-legal-vector-width"="128" "frame-pointer"="none" "no-infs-fp-math"="true" "no-jump-tables"="false" "no-nans-fp-math"="true" "no-signed-zeros-fp-math"="true" "no-trapping-math"="true" "stack-protector-buffer-size"="8" "target-cpu"="generic" "target-features"="+armv8.1-m.main,+hwdiv,+mve.fp,+ras,+thumb-mode" "use-soft-float"="false" } attributes #1 = { nounwind readnone } attributes #2 = { nounwind } diff --git a/llvm/test/CodeGen/Thumb2/mve-vpt-block-fold-vcmp.mir b/llvm/test/CodeGen/Thumb2/mve-vpt-block-fold-vcmp.mir index 0ffed2e..ee2e58f 100644 --- a/llvm/test/CodeGen/Thumb2/mve-vpt-block-fold-vcmp.mir +++ b/llvm/test/CodeGen/Thumb2/mve-vpt-block-fold-vcmp.mir @@ -18,7 +18,7 @@ declare <4 x i32> @llvm.masked.load.v4i32.p0(ptr, i32 immarg, <4 x i1>, <4 x i32>) #2 declare void @llvm.masked.store.v4i32.p0(<4 x i32>, ptr, i32 immarg, <4 x i1>) #3 - attributes #0 = { nounwind readnone "correctly-rounded-divide-sqrt-fp-math"="false" "disable-tail-calls"="false" "frame-pointer"="all" "less-precise-fpmad"="false" "min-legal-vector-width"="128" "no-infs-fp-math"="false" "no-jump-tables"="false" "no-nans-fp-math"="false" "no-signed-zeros-fp-math"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="generic" "target-features"="+armv8.1-m.main,+fp-armv8d16sp,+fp16,+fpregs,+fullfp16,+hwdiv,+lob,+mve.fp,+ras,+strict-align,+thumb-mode,+vfp2sp,+vfp3d16sp,+vfp4d16sp" "unsafe-fp-math"="false" "use-soft-float"="false" } + attributes #0 = { nounwind readnone "correctly-rounded-divide-sqrt-fp-math"="false" 
"disable-tail-calls"="false" "frame-pointer"="all" "less-precise-fpmad"="false" "min-legal-vector-width"="128" "no-infs-fp-math"="false" "no-jump-tables"="false" "no-nans-fp-math"="false" "no-signed-zeros-fp-math"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="generic" "target-features"="+armv8.1-m.main,+fp-armv8d16sp,+fp16,+fpregs,+fullfp16,+hwdiv,+lob,+mve.fp,+ras,+strict-align,+thumb-mode,+vfp2sp,+vfp3d16sp,+vfp4d16sp" "use-soft-float"="false" } attributes #1 = { nounwind readnone } attributes #2 = { argmemonly nounwind readonly willreturn } attributes #3 = { argmemonly nounwind willreturn } diff --git a/llvm/test/CodeGen/Thumb2/mve-vpt-block-optnone.mir b/llvm/test/CodeGen/Thumb2/mve-vpt-block-optnone.mir index 695a8d8..ba21068 100644 --- a/llvm/test/CodeGen/Thumb2/mve-vpt-block-optnone.mir +++ b/llvm/test/CodeGen/Thumb2/mve-vpt-block-optnone.mir @@ -14,7 +14,7 @@ declare <4 x float> @llvm.arm.mve.vminnm.m.v4f32.v4f32.v4f32.v4f32.i32(<4 x float>, <4 x float>, <4 x float>, i32) #1 - attributes #0 = { noinline optnone nounwind readnone "correctly-rounded-divide-sqrt-fp-math"="false" "denormal-fp-math"="preserve-sign" "disable-tail-calls"="false" "less-precise-fpmad"="false" "min-legal-vector-width"="128" "no-frame-pointer-elim"="false" "no-infs-fp-math"="true" "no-jump-tables"="false" "no-nans-fp-math"="true" "no-signed-zeros-fp-math"="true" "no-trapping-math"="true" "stack-protector-buffer-size"="8" "target-cpu"="generic" "target-features"="+armv8.1-m.main,+hwdiv,+mve.fp,+ras,+thumb-mode" "unsafe-fp-math"="false" "use-soft-float"="false" } + attributes #0 = { noinline optnone nounwind readnone "correctly-rounded-divide-sqrt-fp-math"="false" "denormal-fp-math"="preserve-sign" "disable-tail-calls"="false" "less-precise-fpmad"="false" "min-legal-vector-width"="128" "no-frame-pointer-elim"="false" "no-infs-fp-math"="true" "no-jump-tables"="false" "no-nans-fp-math"="true" "no-signed-zeros-fp-math"="true" "no-trapping-math"="true" "stack-protector-buffer-size"="8" "target-cpu"="generic" "target-features"="+armv8.1-m.main,+hwdiv,+mve.fp,+ras,+thumb-mode" "use-soft-float"="false" } attributes #1 = { nounwind readnone } attributes #2 = { nounwind } diff --git a/llvm/test/CodeGen/Thumb2/pacbti-m-outliner-4.ll b/llvm/test/CodeGen/Thumb2/pacbti-m-outliner-4.ll index 8777d51..db779de 100644 --- a/llvm/test/CodeGen/Thumb2/pacbti-m-outliner-4.ll +++ b/llvm/test/CodeGen/Thumb2/pacbti-m-outliner-4.ll @@ -179,7 +179,7 @@ return: ; preds = %entry, %if.end ; CHECK-NOT: aut ; CHECK: b _Z1hii -attributes #0 = { minsize noinline optsize "sign-return-address"="non-leaf" "denormal-fp-math"="preserve-sign,preserve-sign" "denormal-fp-math-f32"="ieee,ieee" "disable-tail-calls"="false" "frame-pointer"="none" "less-precise-fpmad"="false" "min-legal-vector-width"="0" "no-infs-fp-math"="true" "no-jump-tables"="false" "no-nans-fp-math"="true" "no-signed-zeros-fp-math"="true" "no-trapping-math"="true" "stack-protector-buffer-size"="8" "target-cpu"="cortex-m3" "target-features"="+armv7-m,+hwdiv,+thumb-mode" "unsafe-fp-math"="false" "use-soft-float"="false" } +attributes #0 = { minsize noinline optsize "sign-return-address"="non-leaf" "denormal-fp-math"="preserve-sign,preserve-sign" "denormal-fp-math-f32"="ieee,ieee" "disable-tail-calls"="false" "frame-pointer"="none" "less-precise-fpmad"="false" "min-legal-vector-width"="0" "no-infs-fp-math"="true" "no-jump-tables"="false" "no-nans-fp-math"="true" "no-signed-zeros-fp-math"="true" "no-trapping-math"="true" 
"stack-protector-buffer-size"="8" "target-cpu"="cortex-m3" "target-features"="+armv7-m,+hwdiv,+thumb-mode" "use-soft-float"="false" } attributes #1 = { nounwind "sign-return-address"="non-leaf" } attributes #2 = { noreturn "sign-return-address"="non-leaf" } diff --git a/llvm/test/CodeGen/Thumb2/stack_guard_remat.ll b/llvm/test/CodeGen/Thumb2/stack_guard_remat.ll index 4a93c2c..0ee075c 100644 --- a/llvm/test/CodeGen/Thumb2/stack_guard_remat.ll +++ b/llvm/test/CodeGen/Thumb2/stack_guard_remat.ll @@ -38,7 +38,7 @@ declare void @foo3(ptr) ; Function Attrs: nounwind declare void @llvm.lifetime.end.p0(i64, ptr nocapture) -attributes #0 = { nounwind ssp "less-precise-fpmad"="false" "frame-pointer"="all" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "stack-protector-buffer-size"="8" "unsafe-fp-math"="false" "use-soft-float"="false" } +attributes #0 = { nounwind ssp "less-precise-fpmad"="false" "frame-pointer"="all" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "stack-protector-buffer-size"="8" "use-soft-float"="false" } !llvm.module.flags = !{!0} !0 = !{i32 7, !"PIC Level", i32 2} diff --git a/llvm/test/CodeGen/Thumb2/t2sizereduction.mir b/llvm/test/CodeGen/Thumb2/t2sizereduction.mir index 48b75ed5..f5eb642 100644 --- a/llvm/test/CodeGen/Thumb2/t2sizereduction.mir +++ b/llvm/test/CodeGen/Thumb2/t2sizereduction.mir @@ -29,7 +29,7 @@ br i1 %exitcond, label %for.cond.cleanup, label %for.body } - attributes #0 = { norecurse nounwind readnone "correctly-rounded-divide-sqrt-fp-math"="false" "disable-tail-calls"="false" "less-precise-fpmad"="false" "frame-pointer"="none" "no-infs-fp-math"="false" "no-jump-tables"="false" "no-nans-fp-math"="false" "no-signed-zeros-fp-math"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="cortex-m7" "target-features"="-d32,+dsp,+fp-armv8,-fp64,+hwdiv,+strict-align,+thumb-mode,-crc,-dotprod,-hwdiv-arm,-ras" "unsafe-fp-math"="false" "use-soft-float"="false" } + attributes #0 = { norecurse nounwind readnone "correctly-rounded-divide-sqrt-fp-math"="false" "disable-tail-calls"="false" "less-precise-fpmad"="false" "frame-pointer"="none" "no-infs-fp-math"="false" "no-jump-tables"="false" "no-nans-fp-math"="false" "no-signed-zeros-fp-math"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="cortex-m7" "target-features"="-d32,+dsp,+fp-armv8,-fp64,+hwdiv,+strict-align,+thumb-mode,-crc,-dotprod,-hwdiv-arm,-ras" "use-soft-float"="false" } ... 
--- diff --git a/llvm/test/CodeGen/WebAssembly/simd-relaxed-fmax.ll b/llvm/test/CodeGen/WebAssembly/simd-relaxed-fmax.ll index 45f4ddd..f224a0d 100644 --- a/llvm/test/CodeGen/WebAssembly/simd-relaxed-fmax.ll +++ b/llvm/test/CodeGen/WebAssembly/simd-relaxed-fmax.ll @@ -54,6 +54,250 @@ define <2 x double> @test_minimumnum_f64x2(<2 x double> %a, <2 x double> %b) { ret <2 x double> %result } +define <4 x float> @test_pmax_v4f32_olt(<4 x float> %x, <4 x float> %y) { +; CHECK-LABEL: test_pmax_v4f32_olt: +; CHECK: .functype test_pmax_v4f32_olt (v128, v128) -> (v128) +; CHECK-NEXT: # %bb.0: +; CHECK-NEXT: local.get 0 +; CHECK-NEXT: local.get 1 +; CHECK-NEXT: f32x4.relaxed_max +; CHECK-NEXT: # fallthrough-return + %c = fcmp olt <4 x float> %x, %y + %a = select <4 x i1> %c, <4 x float> %y, <4 x float> %x + ret <4 x float> %a +} + +define <4 x float> @test_pmax_v4f32_ole(<4 x float> %x, <4 x float> %y) { +; CHECK-LABEL: test_pmax_v4f32_ole: +; CHECK: .functype test_pmax_v4f32_ole (v128, v128) -> (v128) +; CHECK-NEXT: # %bb.0: +; CHECK-NEXT: local.get 0 +; CHECK-NEXT: local.get 1 +; CHECK-NEXT: f32x4.relaxed_max +; CHECK-NEXT: # fallthrough-return + %c = fcmp ole <4 x float> %x, %y + %a = select <4 x i1> %c, <4 x float> %y, <4 x float> %x + ret <4 x float> %a +} + +define <4 x float> @test_pmax_v4f32_ogt(<4 x float> %x, <4 x float> %y) { +; CHECK-LABEL: test_pmax_v4f32_ogt: +; CHECK: .functype test_pmax_v4f32_ogt (v128, v128) -> (v128) +; CHECK-NEXT: # %bb.0: +; CHECK-NEXT: local.get 0 +; CHECK-NEXT: local.get 1 +; CHECK-NEXT: f32x4.relaxed_max +; CHECK-NEXT: # fallthrough-return + %c = fcmp ogt <4 x float> %y, %x + %a = select <4 x i1> %c, <4 x float> %y, <4 x float> %x + ret <4 x float> %a +} + +define <4 x float> @test_pmax_v4f32_oge(<4 x float> %x, <4 x float> %y) { +; CHECK-LABEL: test_pmax_v4f32_oge: +; CHECK: .functype test_pmax_v4f32_oge (v128, v128) -> (v128) +; CHECK-NEXT: # %bb.0: +; CHECK-NEXT: local.get 0 +; CHECK-NEXT: local.get 1 +; CHECK-NEXT: f32x4.relaxed_max +; CHECK-NEXT: # fallthrough-return + %c = fcmp oge <4 x float> %y, %x + %a = select <4 x i1> %c, <4 x float> %y, <4 x float> %x + ret <4 x float> %a +} + +; For setlt +define <4 x float> @pmax_v4f32_fast_olt(<4 x float> %x, <4 x float> %y) { +; CHECK-LABEL: pmax_v4f32_fast_olt: +; CHECK: .functype pmax_v4f32_fast_olt (v128, v128) -> (v128) +; CHECK-NEXT: # %bb.0: +; CHECK-NEXT: local.get 0 +; CHECK-NEXT: local.get 1 +; CHECK-NEXT: f32x4.relaxed_max +; CHECK-NEXT: # fallthrough-return + %c = fcmp fast olt <4 x float> %x, %y + %a = select <4 x i1> %c, <4 x float> %y, <4 x float> %x + ret <4 x float> %a +} + +; For setle +define <4 x float> @test_pmax_v4f32_fast_ole(<4 x float> %x, <4 x float> %y) { +; CHECK-LABEL: test_pmax_v4f32_fast_ole: +; CHECK: .functype test_pmax_v4f32_fast_ole (v128, v128) -> (v128) +; CHECK-NEXT: # %bb.0: +; CHECK-NEXT: local.get 0 +; CHECK-NEXT: local.get 1 +; CHECK-NEXT: f32x4.relaxed_max +; CHECK-NEXT: # fallthrough-return + %c = fcmp fast ole <4 x float> %x, %y + %a = select <4 x i1> %c, <4 x float> %y, <4 x float> %x + ret <4 x float> %a +} + +; For setgt +define <4 x float> @test_pmax_v4f32_fast_ogt(<4 x float> %x, <4 x float> %y) { +; CHECK-LABEL: test_pmax_v4f32_fast_ogt: +; CHECK: .functype test_pmax_v4f32_fast_ogt (v128, v128) -> (v128) +; CHECK-NEXT: # %bb.0: +; CHECK-NEXT: local.get 0 +; CHECK-NEXT: local.get 1 +; CHECK-NEXT: f32x4.relaxed_max +; CHECK-NEXT: # fallthrough-return + %c = fcmp fast ogt <4 x float> %x, %y + %a = select <4 x i1> %c, <4 x float> %x, <4 x float> %y + ret <4 x float> 
%a +} + +; For setge +define <4 x float> @test_pmax_v4f32_fast_oge(<4 x float> %x, <4 x float> %y) { +; CHECK-LABEL: test_pmax_v4f32_fast_oge: +; CHECK: .functype test_pmax_v4f32_fast_oge (v128, v128) -> (v128) +; CHECK-NEXT: # %bb.0: +; CHECK-NEXT: local.get 0 +; CHECK-NEXT: local.get 1 +; CHECK-NEXT: f32x4.relaxed_max +; CHECK-NEXT: # fallthrough-return + %c = fcmp fast oge <4 x float> %x, %y + %a = select <4 x i1> %c, <4 x float> %x, <4 x float> %y + ret <4 x float> %a +} + +define <4 x i32> @test_pmax_int_v4f32(<4 x i32> %x, <4 x i32> %y) { +; CHECK-LABEL: test_pmax_int_v4f32: +; CHECK: .functype test_pmax_int_v4f32 (v128, v128) -> (v128) +; CHECK-NEXT: # %bb.0: +; CHECK-NEXT: local.get 1 +; CHECK-NEXT: local.get 0 +; CHECK-NEXT: f32x4.relaxed_max +; CHECK-NEXT: # fallthrough-return + %fx = bitcast <4 x i32> %x to <4 x float> + %fy = bitcast <4 x i32> %y to <4 x float> + %c = fcmp olt <4 x float> %fy, %fx + %a = select <4 x i1> %c, <4 x i32> %x, <4 x i32> %y + ret <4 x i32> %a +} + +define <2 x double> @test_pmax_v2f64_olt(<2 x double> %x, <2 x double> %y) { +; CHECK-LABEL: test_pmax_v2f64_olt: +; CHECK: .functype test_pmax_v2f64_olt (v128, v128) -> (v128) +; CHECK-NEXT: # %bb.0: +; CHECK-NEXT: local.get 0 +; CHECK-NEXT: local.get 1 +; CHECK-NEXT: f64x2.relaxed_max +; CHECK-NEXT: # fallthrough-return + %c = fcmp olt <2 x double> %x, %y + %a = select <2 x i1> %c, <2 x double> %y, <2 x double> %x + ret <2 x double> %a +} + +define <2 x double> @test_pmax_v2f64_ole(<2 x double> %x, <2 x double> %y) { +; CHECK-LABEL: test_pmax_v2f64_ole: +; CHECK: .functype test_pmax_v2f64_ole (v128, v128) -> (v128) +; CHECK-NEXT: # %bb.0: +; CHECK-NEXT: local.get 0 +; CHECK-NEXT: local.get 1 +; CHECK-NEXT: f64x2.relaxed_max +; CHECK-NEXT: # fallthrough-return + %c = fcmp ole <2 x double> %x, %y + %a = select <2 x i1> %c, <2 x double> %y, <2 x double> %x + ret <2 x double> %a +} + +define <2 x double> @test_pmax_v2f64_ogt(<2 x double> %x, <2 x double> %y) { +; CHECK-LABEL: test_pmax_v2f64_ogt: +; CHECK: .functype test_pmax_v2f64_ogt (v128, v128) -> (v128) +; CHECK-NEXT: # %bb.0: +; CHECK-NEXT: local.get 1 +; CHECK-NEXT: local.get 0 +; CHECK-NEXT: f64x2.relaxed_max +; CHECK-NEXT: # fallthrough-return + %c = fcmp ogt <2 x double> %x, %y + %a = select <2 x i1> %c, <2 x double> %x, <2 x double> %y + ret <2 x double> %a +} +define <2 x double> @test_pmax_v2f64_oge(<2 x double> %x, <2 x double> %y) { +; CHECK-LABEL: test_pmax_v2f64_oge: +; CHECK: .functype test_pmax_v2f64_oge (v128, v128) -> (v128) +; CHECK-NEXT: # %bb.0: +; CHECK-NEXT: local.get 1 +; CHECK-NEXT: local.get 0 +; CHECK-NEXT: f64x2.relaxed_max +; CHECK-NEXT: # fallthrough-return + %c = fcmp oge <2 x double> %x, %y + %a = select <2 x i1> %c, <2 x double> %x, <2 x double> %y + ret <2 x double> %a +} + +; For setlt +define <2 x double> @pmax_v2f64_fast_olt(<2 x double> %x, <2 x double> %y) { +; CHECK-LABEL: pmax_v2f64_fast_olt: +; CHECK: .functype pmax_v2f64_fast_olt (v128, v128) -> (v128) +; CHECK-NEXT: # %bb.0: +; CHECK-NEXT: local.get 0 +; CHECK-NEXT: local.get 1 +; CHECK-NEXT: f64x2.relaxed_max +; CHECK-NEXT: # fallthrough-return + %c = fcmp fast olt <2 x double> %x, %y + %a = select <2 x i1> %c, <2 x double> %y, <2 x double> %x + ret <2 x double> %a +} + +; For setle +define <2 x double> @test_pmax_v2f64_fast_ole(<2 x double> %x, <2 x double> %y) { +; CHECK-LABEL: test_pmax_v2f64_fast_ole: +; CHECK: .functype test_pmax_v2f64_fast_ole (v128, v128) -> (v128) +; CHECK-NEXT: # %bb.0: +; CHECK-NEXT: local.get 0 +; CHECK-NEXT: local.get 1 +; 
CHECK-NEXT: f64x2.relaxed_max +; CHECK-NEXT: # fallthrough-return + %c = fcmp fast ole <2 x double> %x, %y + %a = select <2 x i1> %c, <2 x double> %y, <2 x double> %x + ret <2 x double> %a +} +; For setgt +define <2 x double> @test_pmax_v2f64_fast_ogt(<2 x double> %x, <2 x double> %y) { +; CHECK-LABEL: test_pmax_v2f64_fast_ogt: +; CHECK: .functype test_pmax_v2f64_fast_ogt (v128, v128) -> (v128) +; CHECK-NEXT: # %bb.0: +; CHECK-NEXT: local.get 0 +; CHECK-NEXT: local.get 1 +; CHECK-NEXT: f64x2.relaxed_max +; CHECK-NEXT: # fallthrough-return + %c = fcmp fast ogt <2 x double> %x, %y + %a = select <2 x i1> %c, <2 x double> %x, <2 x double> %y + ret <2 x double> %a +} + +; For setge +define <2 x double> @test_pmax_v2f64_fast_oge(<2 x double> %x, <2 x double> %y) { +; CHECK-LABEL: test_pmax_v2f64_fast_oge: +; CHECK: .functype test_pmax_v2f64_fast_oge (v128, v128) -> (v128) +; CHECK-NEXT: # %bb.0: +; CHECK-NEXT: local.get 0 +; CHECK-NEXT: local.get 1 +; CHECK-NEXT: f64x2.relaxed_max +; CHECK-NEXT: # fallthrough-return + %c = fcmp fast oge <2 x double> %x, %y + %a = select <2 x i1> %c, <2 x double> %x, <2 x double> %y + ret <2 x double> %a +} + +define <2 x i64> @test_pmax_int_v2f64(<2 x i64> %x, <2 x i64> %y) { +; CHECK-LABEL: test_pmax_int_v2f64: +; CHECK: .functype test_pmax_int_v2f64 (v128, v128) -> (v128) +; CHECK-NEXT: # %bb.0: +; CHECK-NEXT: local.get 1 +; CHECK-NEXT: local.get 0 +; CHECK-NEXT: f64x2.relaxed_max +; CHECK-NEXT: # fallthrough-return + %fx = bitcast <2 x i64> %x to <2 x double> + %fy = bitcast <2 x i64> %y to <2 x double> + %c = fcmp olt <2 x double> %fy, %fx + %a = select <2 x i1> %c, <2 x i64> %x, <2 x i64> %y + ret <2 x i64> %a +} + declare <4 x float> @llvm.maxnum.v4f32(<4 x float>, <4 x float>) declare <4 x float> @llvm.maximumnum.v4f32(<4 x float>, <4 x float>) declare <2 x double> @llvm.maxnum.v2f64(<2 x double>, <2 x double>) diff --git a/llvm/test/CodeGen/WebAssembly/simd-relaxed-fmin.ll b/llvm/test/CodeGen/WebAssembly/simd-relaxed-fmin.ll index f3eec02..4604465 100644 --- a/llvm/test/CodeGen/WebAssembly/simd-relaxed-fmin.ll +++ b/llvm/test/CodeGen/WebAssembly/simd-relaxed-fmin.ll @@ -53,6 +53,252 @@ define <2 x double> @test_minimumnum_f64x2(<2 x double> %a, <2 x double> %b) { ret <2 x double> %result } +define <4 x float> @test_pmin_v4f32_olt(<4 x float> %x, <4 x float> %y) { +; CHECK-LABEL: test_pmin_v4f32_olt: +; CHECK: .functype test_pmin_v4f32_olt (v128, v128) -> (v128) +; CHECK-NEXT: # %bb.0: +; CHECK-NEXT: local.get 0 +; CHECK-NEXT: local.get 1 +; CHECK-NEXT: f32x4.relaxed_min +; CHECK-NEXT: # fallthrough-return + %c = fcmp olt <4 x float> %y, %x + %a = select <4 x i1> %c, <4 x float> %y, <4 x float> %x + ret <4 x float> %a +} + +define <4 x float> @test_pmin_v4f32_ole(<4 x float> %x, <4 x float> %y) { +; CHECK-LABEL: test_pmin_v4f32_ole: +; CHECK: .functype test_pmin_v4f32_ole (v128, v128) -> (v128) +; CHECK-NEXT: # %bb.0: +; CHECK-NEXT: local.get 0 +; CHECK-NEXT: local.get 1 +; CHECK-NEXT: f32x4.relaxed_min +; CHECK-NEXT: # fallthrough-return + %c = fcmp ole <4 x float> %y, %x + %a = select <4 x i1> %c, <4 x float> %y, <4 x float> %x + ret <4 x float> %a +} + +define <4 x float> @test_pmin_v4f32_ogt(<4 x float> %x, <4 x float> %y) { +; CHECK-LABEL: test_pmin_v4f32_ogt: +; CHECK: .functype test_pmin_v4f32_ogt (v128, v128) -> (v128) +; CHECK-NEXT: # %bb.0: +; CHECK-NEXT: local.get 0 +; CHECK-NEXT: local.get 1 +; CHECK-NEXT: f32x4.relaxed_min +; CHECK-NEXT: # fallthrough-return + %c = fcmp ogt <4 x float> %x, %y + %a = select <4 x i1> %c, <4 x float> %y, <4 x 
float> %x + ret <4 x float> %a +} + +define <4 x float> @test_pmin_v4f32_oge(<4 x float> %x, <4 x float> %y) { +; CHECK-LABEL: test_pmin_v4f32_oge: +; CHECK: .functype test_pmin_v4f32_oge (v128, v128) -> (v128) +; CHECK-NEXT: # %bb.0: +; CHECK-NEXT: local.get 0 +; CHECK-NEXT: local.get 1 +; CHECK-NEXT: f32x4.relaxed_min +; CHECK-NEXT: # fallthrough-return + %c = fcmp oge <4 x float> %x, %y + %a = select <4 x i1> %c, <4 x float> %y, <4 x float> %x + ret <4 x float> %a +} + +; For setlt +define <4 x float> @pmin_v4f32_fast_olt(<4 x float> %x, <4 x float> %y) { +; CHECK-LABEL: pmin_v4f32_fast_olt: +; CHECK: .functype pmin_v4f32_fast_olt (v128, v128) -> (v128) +; CHECK-NEXT: # %bb.0: +; CHECK-NEXT: local.get 1 +; CHECK-NEXT: local.get 0 +; CHECK-NEXT: f32x4.relaxed_min +; CHECK-NEXT: # fallthrough-return + %c = fcmp fast olt <4 x float> %y, %x + %a = select <4 x i1> %c, <4 x float> %y, <4 x float> %x + ret <4 x float> %a +} + +; For setle +define <4 x float> @test_pmin_v4f32_fast_ole(<4 x float> %x, <4 x float> %y) { +; CHECK-LABEL: test_pmin_v4f32_fast_ole: +; CHECK: .functype test_pmin_v4f32_fast_ole (v128, v128) -> (v128) +; CHECK-NEXT: # %bb.0: +; CHECK-NEXT: local.get 1 +; CHECK-NEXT: local.get 0 +; CHECK-NEXT: f32x4.relaxed_min +; CHECK-NEXT: # fallthrough-return + %c = fcmp fast ole <4 x float> %y, %x + %a = select <4 x i1> %c, <4 x float> %y, <4 x float> %x + ret <4 x float> %a +} + +; For setgt +define <4 x float> @test_pmin_v4f32_fast_ogt(<4 x float> %x, <4 x float> %y) { +; CHECK-LABEL: test_pmin_v4f32_fast_ogt: +; CHECK: .functype test_pmin_v4f32_fast_ogt (v128, v128) -> (v128) +; CHECK-NEXT: # %bb.0: +; CHECK-NEXT: local.get 0 +; CHECK-NEXT: local.get 1 +; CHECK-NEXT: f32x4.relaxed_min +; CHECK-NEXT: # fallthrough-return + %c = fcmp fast ogt <4 x float> %x, %y + %a = select <4 x i1> %c, <4 x float> %y, <4 x float> %x + ret <4 x float> %a +} + +; For setge +define <4 x float> @test_pmin_v4f32_fast_oge(<4 x float> %x, <4 x float> %y) { +; CHECK-LABEL: test_pmin_v4f32_fast_oge: +; CHECK: .functype test_pmin_v4f32_fast_oge (v128, v128) -> (v128) +; CHECK-NEXT: # %bb.0: +; CHECK-NEXT: local.get 0 +; CHECK-NEXT: local.get 1 +; CHECK-NEXT: f32x4.relaxed_min +; CHECK-NEXT: # fallthrough-return + %c = fcmp fast oge <4 x float> %x, %y + %a = select <4 x i1> %c, <4 x float> %y, <4 x float> %x + ret <4 x float> %a +} + +define <4 x i32> @test_pmin_int_v4f32(<4 x i32> %x, <4 x i32> %y) { +; CHECK-LABEL: test_pmin_int_v4f32: +; CHECK: .functype test_pmin_int_v4f32 (v128, v128) -> (v128) +; CHECK-NEXT: # %bb.0: +; CHECK-NEXT: local.get 0 +; CHECK-NEXT: local.get 1 +; CHECK-NEXT: f32x4.relaxed_min +; CHECK-NEXT: # fallthrough-return + %fx = bitcast <4 x i32> %x to <4 x float> + %fy = bitcast <4 x i32> %y to <4 x float> + %c = fcmp olt <4 x float> %fy, %fx + %a = select <4 x i1> %c, <4 x i32> %y, <4 x i32> %x + ret <4 x i32> %a +} + +define <2 x double> @test_pmin_v2f64_olt(<2 x double> %x, <2 x double> %y) { +; CHECK-LABEL: test_pmin_v2f64_olt: +; CHECK: .functype test_pmin_v2f64_olt (v128, v128) -> (v128) +; CHECK-NEXT: # %bb.0: +; CHECK-NEXT: local.get 0 +; CHECK-NEXT: local.get 1 +; CHECK-NEXT: f64x2.relaxed_min +; CHECK-NEXT: # fallthrough-return + %c = fcmp olt <2 x double> %y, %x + %a = select <2 x i1> %c, <2 x double> %y, <2 x double> %x + ret <2 x double> %a +} + +define <2 x double> @test_pmin_v2f64_ole(<2 x double> %x, <2 x double> %y) { +; CHECK-LABEL: test_pmin_v2f64_ole: +; CHECK: .functype test_pmin_v2f64_ole (v128, v128) -> (v128) +; CHECK-NEXT: # %bb.0: +; CHECK-NEXT: local.get 0 
+; CHECK-NEXT: local.get 1 +; CHECK-NEXT: f64x2.relaxed_min +; CHECK-NEXT: # fallthrough-return + %c = fcmp ole <2 x double> %y, %x + %a = select <2 x i1> %c, <2 x double> %y, <2 x double> %x + ret <2 x double> %a +} + +define <2 x double> @test_pmin_v2f64_ogt(<2 x double> %x, <2 x double> %y) { +; CHECK-LABEL: test_pmin_v2f64_ogt: +; CHECK: .functype test_pmin_v2f64_ogt (v128, v128) -> (v128) +; CHECK-NEXT: # %bb.0: +; CHECK-NEXT: local.get 0 +; CHECK-NEXT: local.get 1 +; CHECK-NEXT: f64x2.relaxed_min +; CHECK-NEXT: # fallthrough-return + %c = fcmp ogt <2 x double> %x, %y + %a = select <2 x i1> %c, <2 x double> %y, <2 x double> %x + ret <2 x double> %a +} + +define <2 x double> @test_pmin_v2f64_oge(<2 x double> %x, <2 x double> %y) { +; CHECK-LABEL: test_pmin_v2f64_oge: +; CHECK: .functype test_pmin_v2f64_oge (v128, v128) -> (v128) +; CHECK-NEXT: # %bb.0: +; CHECK-NEXT: local.get 0 +; CHECK-NEXT: local.get 1 +; CHECK-NEXT: f64x2.relaxed_min +; CHECK-NEXT: # fallthrough-return + %c = fcmp oge <2 x double> %x, %y + %a = select <2 x i1> %c, <2 x double> %y, <2 x double> %x + ret <2 x double> %a +} + +; For setlt +define <2 x double> @pmin_v2f64_fast_olt(<2 x double> %x, <2 x double> %y) { +; CHECK-LABEL: pmin_v2f64_fast_olt: +; CHECK: .functype pmin_v2f64_fast_olt (v128, v128) -> (v128) +; CHECK-NEXT: # %bb.0: +; CHECK-NEXT: local.get 1 +; CHECK-NEXT: local.get 0 +; CHECK-NEXT: f64x2.relaxed_min +; CHECK-NEXT: # fallthrough-return + %c = fcmp fast olt <2 x double> %y, %x + %a = select <2 x i1> %c, <2 x double> %y, <2 x double> %x + ret <2 x double> %a +} + +; For setle +define <2 x double> @test_pmin_v2f64_fast_ole(<2 x double> %x, <2 x double> %y) { +; CHECK-LABEL: test_pmin_v2f64_fast_ole: +; CHECK: .functype test_pmin_v2f64_fast_ole (v128, v128) -> (v128) +; CHECK-NEXT: # %bb.0: +; CHECK-NEXT: local.get 1 +; CHECK-NEXT: local.get 0 +; CHECK-NEXT: f64x2.relaxed_min +; CHECK-NEXT: # fallthrough-return + %c = fcmp fast ole <2 x double> %y, %x + %a = select <2 x i1> %c, <2 x double> %y, <2 x double> %x + ret <2 x double> %a +} + +; For setgt +define <2 x double> @test_pmin_v2f64_fast_ogt(<2 x double> %x, <2 x double> %y) { +; CHECK-LABEL: test_pmin_v2f64_fast_ogt: +; CHECK: .functype test_pmin_v2f64_fast_ogt (v128, v128) -> (v128) +; CHECK-NEXT: # %bb.0: +; CHECK-NEXT: local.get 0 +; CHECK-NEXT: local.get 1 +; CHECK-NEXT: f64x2.relaxed_min +; CHECK-NEXT: # fallthrough-return + %c = fcmp fast ogt <2 x double> %x, %y + %a = select <2 x i1> %c, <2 x double> %y, <2 x double> %x + ret <2 x double> %a +} + +; For setge +define <2 x double> @test_pmin_v2f64_fast_oge(<2 x double> %x, <2 x double> %y) { +; CHECK-LABEL: test_pmin_v2f64_fast_oge: +; CHECK: .functype test_pmin_v2f64_fast_oge (v128, v128) -> (v128) +; CHECK-NEXT: # %bb.0: +; CHECK-NEXT: local.get 0 +; CHECK-NEXT: local.get 1 +; CHECK-NEXT: f64x2.relaxed_min +; CHECK-NEXT: # fallthrough-return + %c = fcmp fast oge <2 x double> %x, %y + %a = select <2 x i1> %c, <2 x double> %y, <2 x double> %x + ret <2 x double> %a +} + +define <2 x i64> @test_pmin_int_v2f64(<2 x i64> %x, <2 x i64> %y) { +; CHECK-LABEL: test_pmin_int_v2f64: +; CHECK: .functype test_pmin_int_v2f64 (v128, v128) -> (v128) +; CHECK-NEXT: # %bb.0: +; CHECK-NEXT: local.get 0 +; CHECK-NEXT: local.get 1 +; CHECK-NEXT: f64x2.relaxed_min +; CHECK-NEXT: # fallthrough-return + %fx = bitcast <2 x i64> %x to <2 x double> + %fy = bitcast <2 x i64> %y to <2 x double> + %c = fcmp olt <2 x double> %fy, %fx + %a = select <2 x i1> %c, <2 x i64> %y, <2 x i64> %x + ret <2 x i64> %a +} + declare 
<4 x float> @llvm.minnum.v4f32(<4 x float>, <4 x float>) declare <4 x float> @llvm.fminimumnum.v4f32(<4 x float>, <4 x float>) declare <2 x double> @llvm.minnum.v2f64(<2 x double>, <2 x double>) diff --git a/llvm/test/CodeGen/X86/atomic-load-store.ll b/llvm/test/CodeGen/X86/atomic-load-store.ll index 45277ce..4f5cb5a 100644 --- a/llvm/test/CodeGen/X86/atomic-load-store.ll +++ b/llvm/test/CodeGen/X86/atomic-load-store.ll @@ -1,12 +1,12 @@ ; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py ; RUN: llc < %s -mtriple=x86_64-- -verify-machineinstrs -mcpu=x86-64 | FileCheck %s --check-prefixes=CHECK,CHECK-O3 -; RUN: llc < %s -mtriple=x86_64-- -verify-machineinstrs -mcpu=x86-64-v2 | FileCheck %s --check-prefixes=CHECK,CHECK-O3 -; RUN: llc < %s -mtriple=x86_64-- -verify-machineinstrs -mcpu=x86-64-v3 | FileCheck %s --check-prefixes=CHECK,CHECK-O3 -; RUN: llc < %s -mtriple=x86_64-- -verify-machineinstrs -mcpu=x86-64-v4 | FileCheck %s --check-prefixes=CHECK,CHECK-O3 +; RUN: llc < %s -mtriple=x86_64-- -verify-machineinstrs -mcpu=x86-64-v2 | FileCheck %s --check-prefixes=CHECK,CHECK-SSE-O3 +; RUN: llc < %s -mtriple=x86_64-- -verify-machineinstrs -mcpu=x86-64-v3 | FileCheck %s --check-prefixes=CHECK,CHECK-AVX-O3 +; RUN: llc < %s -mtriple=x86_64-- -verify-machineinstrs -mcpu=x86-64-v4 | FileCheck %s --check-prefixes=CHECK,CHECK-AVX-O3 ; RUN: llc < %s -mtriple=x86_64-- -verify-machineinstrs -O0 -mcpu=x86-64 | FileCheck %s --check-prefixes=CHECK,CHECK-O0 -; RUN: llc < %s -mtriple=x86_64-- -verify-machineinstrs -O0 -mcpu=x86-64-v2 | FileCheck %s --check-prefixes=CHECK,CHECK-O0 -; RUN: llc < %s -mtriple=x86_64-- -verify-machineinstrs -O0 -mcpu=x86-64-v3 | FileCheck %s --check-prefixes=CHECK,CHECK-O0 -; RUN: llc < %s -mtriple=x86_64-- -verify-machineinstrs -O0 -mcpu=x86-64-v4 | FileCheck %s --check-prefixes=CHECK,CHECK-O0 +; RUN: llc < %s -mtriple=x86_64-- -verify-machineinstrs -O0 -mcpu=x86-64-v2 | FileCheck %s --check-prefixes=CHECK,CHECK-SSE-O0 +; RUN: llc < %s -mtriple=x86_64-- -verify-machineinstrs -O0 -mcpu=x86-64-v3 | FileCheck %s --check-prefixes=CHECK,CHECK-AVX-O0 +; RUN: llc < %s -mtriple=x86_64-- -verify-machineinstrs -O0 -mcpu=x86-64-v4 | FileCheck %s --check-prefixes=CHECK,CHECK-AVX-O0 define void @test1(ptr %ptr, i32 %val1) { ; CHECK-LABEL: test1: @@ -34,6 +34,238 @@ define i32 @test3(ptr %ptr) { %val = load atomic i32, ptr %ptr seq_cst, align 4 ret i32 %val } -;; NOTE: These prefixes are unused and the list is autogenerated. 
Do not add tests below this line: -; CHECK-O0: {{.*}} -; CHECK-O3: {{.*}} + +define <1 x i32> @atomic_vec1_i32(ptr %x) { +; CHECK-LABEL: atomic_vec1_i32: +; CHECK: # %bb.0: +; CHECK-NEXT: movl (%rdi), %eax +; CHECK-NEXT: retq + %ret = load atomic <1 x i32>, ptr %x acquire, align 4 + ret <1 x i32> %ret +} + +define <1 x i8> @atomic_vec1_i8(ptr %x) { +; CHECK-O3-LABEL: atomic_vec1_i8: +; CHECK-O3: # %bb.0: +; CHECK-O3-NEXT: movzbl (%rdi), %eax +; CHECK-O3-NEXT: retq +; +; CHECK-SSE-O3-LABEL: atomic_vec1_i8: +; CHECK-SSE-O3: # %bb.0: +; CHECK-SSE-O3-NEXT: movzbl (%rdi), %eax +; CHECK-SSE-O3-NEXT: retq +; +; CHECK-AVX-O3-LABEL: atomic_vec1_i8: +; CHECK-AVX-O3: # %bb.0: +; CHECK-AVX-O3-NEXT: movzbl (%rdi), %eax +; CHECK-AVX-O3-NEXT: retq +; +; CHECK-O0-LABEL: atomic_vec1_i8: +; CHECK-O0: # %bb.0: +; CHECK-O0-NEXT: movb (%rdi), %al +; CHECK-O0-NEXT: retq +; +; CHECK-SSE-O0-LABEL: atomic_vec1_i8: +; CHECK-SSE-O0: # %bb.0: +; CHECK-SSE-O0-NEXT: movb (%rdi), %al +; CHECK-SSE-O0-NEXT: retq +; +; CHECK-AVX-O0-LABEL: atomic_vec1_i8: +; CHECK-AVX-O0: # %bb.0: +; CHECK-AVX-O0-NEXT: movb (%rdi), %al +; CHECK-AVX-O0-NEXT: retq + %ret = load atomic <1 x i8>, ptr %x acquire, align 1 + ret <1 x i8> %ret +} + +define <1 x i16> @atomic_vec1_i16(ptr %x) { +; CHECK-O3-LABEL: atomic_vec1_i16: +; CHECK-O3: # %bb.0: +; CHECK-O3-NEXT: movzwl (%rdi), %eax +; CHECK-O3-NEXT: retq +; +; CHECK-SSE-O3-LABEL: atomic_vec1_i16: +; CHECK-SSE-O3: # %bb.0: +; CHECK-SSE-O3-NEXT: movzwl (%rdi), %eax +; CHECK-SSE-O3-NEXT: retq +; +; CHECK-AVX-O3-LABEL: atomic_vec1_i16: +; CHECK-AVX-O3: # %bb.0: +; CHECK-AVX-O3-NEXT: movzwl (%rdi), %eax +; CHECK-AVX-O3-NEXT: retq +; +; CHECK-O0-LABEL: atomic_vec1_i16: +; CHECK-O0: # %bb.0: +; CHECK-O0-NEXT: movw (%rdi), %ax +; CHECK-O0-NEXT: retq +; +; CHECK-SSE-O0-LABEL: atomic_vec1_i16: +; CHECK-SSE-O0: # %bb.0: +; CHECK-SSE-O0-NEXT: movw (%rdi), %ax +; CHECK-SSE-O0-NEXT: retq +; +; CHECK-AVX-O0-LABEL: atomic_vec1_i16: +; CHECK-AVX-O0: # %bb.0: +; CHECK-AVX-O0-NEXT: movw (%rdi), %ax +; CHECK-AVX-O0-NEXT: retq + %ret = load atomic <1 x i16>, ptr %x acquire, align 2 + ret <1 x i16> %ret +} + +define <1 x i32> @atomic_vec1_i8_zext(ptr %x) { +; CHECK-O3-LABEL: atomic_vec1_i8_zext: +; CHECK-O3: # %bb.0: +; CHECK-O3-NEXT: movzbl (%rdi), %eax +; CHECK-O3-NEXT: movzbl %al, %eax +; CHECK-O3-NEXT: retq +; +; CHECK-SSE-O3-LABEL: atomic_vec1_i8_zext: +; CHECK-SSE-O3: # %bb.0: +; CHECK-SSE-O3-NEXT: movzbl (%rdi), %eax +; CHECK-SSE-O3-NEXT: movzbl %al, %eax +; CHECK-SSE-O3-NEXT: retq +; +; CHECK-AVX-O3-LABEL: atomic_vec1_i8_zext: +; CHECK-AVX-O3: # %bb.0: +; CHECK-AVX-O3-NEXT: movzbl (%rdi), %eax +; CHECK-AVX-O3-NEXT: movzbl %al, %eax +; CHECK-AVX-O3-NEXT: retq +; +; CHECK-O0-LABEL: atomic_vec1_i8_zext: +; CHECK-O0: # %bb.0: +; CHECK-O0-NEXT: movb (%rdi), %al +; CHECK-O0-NEXT: movzbl %al, %eax +; CHECK-O0-NEXT: retq +; +; CHECK-SSE-O0-LABEL: atomic_vec1_i8_zext: +; CHECK-SSE-O0: # %bb.0: +; CHECK-SSE-O0-NEXT: movb (%rdi), %al +; CHECK-SSE-O0-NEXT: movzbl %al, %eax +; CHECK-SSE-O0-NEXT: retq +; +; CHECK-AVX-O0-LABEL: atomic_vec1_i8_zext: +; CHECK-AVX-O0: # %bb.0: +; CHECK-AVX-O0-NEXT: movb (%rdi), %al +; CHECK-AVX-O0-NEXT: movzbl %al, %eax +; CHECK-AVX-O0-NEXT: retq + %ret = load atomic <1 x i8>, ptr %x acquire, align 1 + %zret = zext <1 x i8> %ret to <1 x i32> + ret <1 x i32> %zret +} + +define <1 x i64> @atomic_vec1_i16_sext(ptr %x) { +; CHECK-O3-LABEL: atomic_vec1_i16_sext: +; CHECK-O3: # %bb.0: +; CHECK-O3-NEXT: movzwl (%rdi), %eax +; CHECK-O3-NEXT: movswq %ax, %rax +; CHECK-O3-NEXT: retq +; +; 
CHECK-SSE-O3-LABEL: atomic_vec1_i16_sext: +; CHECK-SSE-O3: # %bb.0: +; CHECK-SSE-O3-NEXT: movzwl (%rdi), %eax +; CHECK-SSE-O3-NEXT: movswq %ax, %rax +; CHECK-SSE-O3-NEXT: retq +; +; CHECK-AVX-O3-LABEL: atomic_vec1_i16_sext: +; CHECK-AVX-O3: # %bb.0: +; CHECK-AVX-O3-NEXT: movzwl (%rdi), %eax +; CHECK-AVX-O3-NEXT: movswq %ax, %rax +; CHECK-AVX-O3-NEXT: retq +; +; CHECK-O0-LABEL: atomic_vec1_i16_sext: +; CHECK-O0: # %bb.0: +; CHECK-O0-NEXT: movw (%rdi), %ax +; CHECK-O0-NEXT: movswq %ax, %rax +; CHECK-O0-NEXT: retq +; +; CHECK-SSE-O0-LABEL: atomic_vec1_i16_sext: +; CHECK-SSE-O0: # %bb.0: +; CHECK-SSE-O0-NEXT: movw (%rdi), %ax +; CHECK-SSE-O0-NEXT: movswq %ax, %rax +; CHECK-SSE-O0-NEXT: retq +; +; CHECK-AVX-O0-LABEL: atomic_vec1_i16_sext: +; CHECK-AVX-O0: # %bb.0: +; CHECK-AVX-O0-NEXT: movw (%rdi), %ax +; CHECK-AVX-O0-NEXT: movswq %ax, %rax +; CHECK-AVX-O0-NEXT: retq + %ret = load atomic <1 x i16>, ptr %x acquire, align 2 + %sret = sext <1 x i16> %ret to <1 x i64> + ret <1 x i64> %sret +} + +define <1 x ptr addrspace(270)> @atomic_vec1_ptr270(ptr %x) { +; CHECK-LABEL: atomic_vec1_ptr270: +; CHECK: # %bb.0: +; CHECK-NEXT: movl (%rdi), %eax +; CHECK-NEXT: retq + %ret = load atomic <1 x ptr addrspace(270)>, ptr %x acquire, align 4 + ret <1 x ptr addrspace(270)> %ret +} + +define <1 x bfloat> @atomic_vec1_bfloat(ptr %x) { +; CHECK-O3-LABEL: atomic_vec1_bfloat: +; CHECK-O3: # %bb.0: +; CHECK-O3-NEXT: movzwl (%rdi), %eax +; CHECK-O3-NEXT: pinsrw $0, %eax, %xmm0 +; CHECK-O3-NEXT: retq +; +; CHECK-SSE-O3-LABEL: atomic_vec1_bfloat: +; CHECK-SSE-O3: # %bb.0: +; CHECK-SSE-O3-NEXT: movzwl (%rdi), %eax +; CHECK-SSE-O3-NEXT: pinsrw $0, %eax, %xmm0 +; CHECK-SSE-O3-NEXT: retq +; +; CHECK-AVX-O3-LABEL: atomic_vec1_bfloat: +; CHECK-AVX-O3: # %bb.0: +; CHECK-AVX-O3-NEXT: movzwl (%rdi), %eax +; CHECK-AVX-O3-NEXT: vpinsrw $0, %eax, %xmm0, %xmm0 +; CHECK-AVX-O3-NEXT: retq +; +; CHECK-O0-LABEL: atomic_vec1_bfloat: +; CHECK-O0: # %bb.0: +; CHECK-O0-NEXT: movw (%rdi), %cx +; CHECK-O0-NEXT: # implicit-def: $eax +; CHECK-O0-NEXT: movw %cx, %ax +; CHECK-O0-NEXT: # implicit-def: $xmm0 +; CHECK-O0-NEXT: pinsrw $0, %eax, %xmm0 +; CHECK-O0-NEXT: retq +; +; CHECK-SSE-O0-LABEL: atomic_vec1_bfloat: +; CHECK-SSE-O0: # %bb.0: +; CHECK-SSE-O0-NEXT: movw (%rdi), %cx +; CHECK-SSE-O0-NEXT: # implicit-def: $eax +; CHECK-SSE-O0-NEXT: movw %cx, %ax +; CHECK-SSE-O0-NEXT: # implicit-def: $xmm0 +; CHECK-SSE-O0-NEXT: pinsrw $0, %eax, %xmm0 +; CHECK-SSE-O0-NEXT: retq +; +; CHECK-AVX-O0-LABEL: atomic_vec1_bfloat: +; CHECK-AVX-O0: # %bb.0: +; CHECK-AVX-O0-NEXT: movw (%rdi), %cx +; CHECK-AVX-O0-NEXT: # implicit-def: $eax +; CHECK-AVX-O0-NEXT: movw %cx, %ax +; CHECK-AVX-O0-NEXT: # implicit-def: $xmm0 +; CHECK-AVX-O0-NEXT: vpinsrw $0, %eax, %xmm0, %xmm0 +; CHECK-AVX-O0-NEXT: retq + %ret = load atomic <1 x bfloat>, ptr %x acquire, align 2 + ret <1 x bfloat> %ret +} + +define <1 x ptr> @atomic_vec1_ptr_align(ptr %x) nounwind { +; CHECK-LABEL: atomic_vec1_ptr_align: +; CHECK: # %bb.0: +; CHECK-NEXT: movq (%rdi), %rax +; CHECK-NEXT: retq + %ret = load atomic <1 x ptr>, ptr %x acquire, align 8 + ret <1 x ptr> %ret +} + +define <1 x i64> @atomic_vec1_i64_align(ptr %x) nounwind { +; CHECK-LABEL: atomic_vec1_i64_align: +; CHECK: # %bb.0: +; CHECK-NEXT: movq (%rdi), %rax +; CHECK-NEXT: retq + %ret = load atomic <1 x i64>, ptr %x acquire, align 8 + ret <1 x i64> %ret +} diff --git a/llvm/test/Transforms/ADCE/2016-09-06.ll b/llvm/test/Transforms/ADCE/2016-09-06.ll index 850f412..1329ac6 100644 --- a/llvm/test/Transforms/ADCE/2016-09-06.ll +++ 
b/llvm/test/Transforms/ADCE/2016-09-06.ll @@ -5,7 +5,7 @@ target datalayout = "e-m:e-i64:64-f80:128-n8:16:32:64-S128" target triple = "x86_64-unknown-linux-gnu" ; Function Attrs: nounwind uwtable -define i32 @foo(i32, i32, i32) #0 { +define i32 @foo(i32, i32, i32) { %4 = alloca i32, align 4 %5 = alloca i32, align 4 %6 = alloca i32, align 4 @@ -48,8 +48,6 @@ B21: ret i32 %I22 } -attributes #0 = { nounwind uwtable "correctly-rounded-divide-sqrt-fp-math"="false" "disable-tail-calls"="false" "less-precise-fpmad"="false" "frame-pointer"="all" "no-infs-fp-math"="false" "no-jump-tables"="false" "no-nans-fp-math"="false" "no-signed-zeros-fp-math"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="x86-64" "target-features"="+fxsr,+mmx,+sse,+sse2,+x87" "unsafe-fp-math"="false" "use-soft-float"="false" } - !llvm.ident = !{!0} !0 = !{!"clang version 4.0.0"} diff --git a/llvm/test/Transforms/ADCE/blocks-with-dead-term-nondeterministic.ll b/llvm/test/Transforms/ADCE/blocks-with-dead-term-nondeterministic.ll index 9708be9..5e844b4 100644 --- a/llvm/test/Transforms/ADCE/blocks-with-dead-term-nondeterministic.ll +++ b/llvm/test/Transforms/ADCE/blocks-with-dead-term-nondeterministic.ll @@ -5,7 +5,7 @@ target triple = "x86_64-apple-macosx10.10.0" ; CHECK: uselistorder label %bb16, { 1, 0 } ; Function Attrs: noinline nounwind ssp uwtable -define void @ham(i1 %arg) local_unnamed_addr #0 { +define void @ham(i1 %arg) local_unnamed_addr { bb: br i1 false, label %bb1, label %bb22 @@ -64,8 +64,6 @@ bb22: ; preds = %bb21, %bb ret void } -attributes #0 = { noinline nounwind ssp uwtable "correctly-rounded-divide-sqrt-fp-math"="false" "disable-tail-calls"="false" "less-precise-fpmad"="false" "frame-pointer"="all" "no-infs-fp-math"="false" "no-jump-tables"="false" "no-nans-fp-math"="false" "no-signed-zeros-fp-math"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="core2" "target-features"="+cx16,+fxsr,+mmx,+sse,+sse2,+sse3,+ssse3,+x87" "unsafe-fp-math"="false" "use-soft-float"="false" } - !llvm.module.flags = !{!0} !0 = !{i32 7, !"PIC Level", i32 2} diff --git a/llvm/test/Transforms/AddDiscriminators/basic.ll b/llvm/test/Transforms/AddDiscriminators/basic.ll index 5186537..fc4c10a 100644 --- a/llvm/test/Transforms/AddDiscriminators/basic.ll +++ b/llvm/test/Transforms/AddDiscriminators/basic.ll @@ -11,7 +11,7 @@ ; if (i < 10) x = i; ; } -define void @foo(i32 %i) #0 !dbg !4 { +define void @foo(i32 %i) !dbg !4 { entry: %i.addr = alloca i32, align 4 %x = alloca i32, align 4 @@ -35,8 +35,6 @@ if.end: ; preds = %if.then, %entry ; CHECK: ret void, !dbg ![[END:[0-9]+]] } -attributes #0 = { nounwind uwtable noinline optnone "less-precise-fpmad"="false" "frame-pointer"="all" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "stack-protector-buffer-size"="8" "unsafe-fp-math"="false" "use-soft-float"="false" } - !llvm.dbg.cu = !{!0} !llvm.module.flags = !{!7, !8} !llvm.ident = !{!9} diff --git a/llvm/test/Transforms/AddDiscriminators/call-nested.ll b/llvm/test/Transforms/AddDiscriminators/call-nested.ll index 99340a5..f1373e4 100644 --- a/llvm/test/Transforms/AddDiscriminators/call-nested.ll +++ b/llvm/test/Transforms/AddDiscriminators/call-nested.ll @@ -9,7 +9,7 @@ ; #6 } ; Function Attrs: uwtable -define i32 @_Z3bazv() #0 !dbg !4 { +define i32 @_Z3bazv() !dbg !4 { %1 = call i32 @_Z3barv(), !dbg !11 ; CHECK: %1 = call i32 @_Z3barv(), !dbg ![[CALL0:[0-9]+]] %2 = call i32 @_Z3barv(), !dbg !12 @@ -19,12 +19,9 @@ define i32 @_Z3bazv() #0 !dbg !4 { ret i32 %3, !dbg !14 
} -declare i32 @_Z3fooii(i32, i32) #1 +declare i32 @_Z3fooii(i32, i32) -declare i32 @_Z3barv() #1 - -attributes #0 = { uwtable "disable-tail-calls"="false" "less-precise-fpmad"="false" "frame-pointer"="none" "no-infs-fp-math"="false" "no-jump-tables"="false" "no-nans-fp-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="x86-64" "target-features"="+fxsr,+mmx,+sse,+sse2,+x87" "unsafe-fp-math"="false" "use-soft-float"="false" } -attributes #1 = { "disable-tail-calls"="false" "less-precise-fpmad"="false" "frame-pointer"="none" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="x86-64" "target-features"="+fxsr,+mmx,+sse,+sse2,+x87" "unsafe-fp-math"="false" "use-soft-float"="false" } +declare i32 @_Z3barv() !llvm.dbg.cu = !{!0} !llvm.module.flags = !{!8, !9} diff --git a/llvm/test/Transforms/AddDiscriminators/call.ll b/llvm/test/Transforms/AddDiscriminators/call.ll index 93d3aa4..11b21ef 100644 --- a/llvm/test/Transforms/AddDiscriminators/call.ll +++ b/llvm/test/Transforms/AddDiscriminators/call.ll @@ -8,7 +8,7 @@ ; #5 } ; Function Attrs: uwtable -define void @_Z3foov() #0 !dbg !4 { +define void @_Z3foov() !dbg !4 { call void @_Z3barv(), !dbg !10 ; CHECK: call void @_Z3barv(), !dbg ![[CALL0:[0-9]+]] %a = alloca [100 x i8], align 16 @@ -21,13 +21,10 @@ define void @_Z3foov() #0 !dbg !4 { ret void, !dbg !13 } -declare void @_Z3barv() #1 +declare void @_Z3barv() declare void @llvm.lifetime.start.p0(ptr nocapture) nounwind argmemonly declare void @llvm.lifetime.end.p0(ptr nocapture) nounwind argmemonly -attributes #0 = { uwtable "disable-tail-calls"="false" "less-precise-fpmad"="false" "frame-pointer"="all" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="x86-64" "target-features"="+fxsr,+mmx,+sse,+sse2" "unsafe-fp-math"="false" "use-soft-float"="false" } -attributes #1 = { "disable-tail-calls"="false" "less-precise-fpmad"="false" "frame-pointer"="all" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="x86-64" "target-features"="+fxsr,+mmx,+sse,+sse2" "unsafe-fp-math"="false" "use-soft-float"="false" } - !llvm.dbg.cu = !{!0} !llvm.module.flags = !{!7, !8} !llvm.ident = !{!9} diff --git a/llvm/test/Transforms/AddDiscriminators/diamond.ll b/llvm/test/Transforms/AddDiscriminators/diamond.ll index c93a57a..9edcf39 100644 --- a/llvm/test/Transforms/AddDiscriminators/diamond.ll +++ b/llvm/test/Transforms/AddDiscriminators/diamond.ll @@ -12,7 +12,7 @@ ; bar(3): discriminator 2 ; Function Attrs: uwtable -define void @_Z3fooi(i32 %i) #0 !dbg !4 { +define void @_Z3fooi(i32 %i) !dbg !4 { %1 = alloca i32, align 4 store i32 %i, ptr %1, align 4 call void @llvm.dbg.declare(metadata ptr %1, metadata !11, metadata !12), !dbg !13 @@ -34,13 +34,9 @@ define void @_Z3fooi(i32 %i) #0 !dbg !4 { } ; Function Attrs: nounwind readnone -declare void @llvm.dbg.declare(metadata, metadata, metadata) #1 +declare void @llvm.dbg.declare(metadata, metadata, metadata) -declare void @_Z3bari(i32) #2 - -attributes #0 = { uwtable "disable-tail-calls"="false" "less-precise-fpmad"="false" "frame-pointer"="all" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="x86-64" "target-features"="+fxsr,+mmx,+sse,+sse2" "unsafe-fp-math"="false" "use-soft-float"="false" } -attributes #1 = { nounwind readnone } -attributes #2 = { "disable-tail-calls"="false" "less-precise-fpmad"="false" "frame-pointer"="all" "no-infs-fp-math"="false" 
"no-nans-fp-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="x86-64" "target-features"="+fxsr,+mmx,+sse,+sse2" "unsafe-fp-math"="false" "use-soft-float"="false" } +declare void @_Z3bari(i32) !llvm.dbg.cu = !{!0} !llvm.module.flags = !{!8, !9} diff --git a/llvm/test/Transforms/AddDiscriminators/first-only.ll b/llvm/test/Transforms/AddDiscriminators/first-only.ll index 7ae9ed0..415e5f0 100644 --- a/llvm/test/Transforms/AddDiscriminators/first-only.ll +++ b/llvm/test/Transforms/AddDiscriminators/first-only.ll @@ -13,7 +13,7 @@ ; } ; } -define void @foo(i32 %i) #0 !dbg !4 { +define void @foo(i32 %i) !dbg !4 { entry: %i.addr = alloca i32, align 4 %x = alloca i32, align 4 @@ -44,8 +44,6 @@ if.end: ; preds = %if.then, %entry ; CHECK: ret void, !dbg ![[END:[0-9]+]] } -attributes #0 = { nounwind uwtable "less-precise-fpmad"="false" "frame-pointer"="all" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "stack-protector-buffer-size"="8" "unsafe-fp-math"="false" "use-soft-float"="false" } - !llvm.dbg.cu = !{!0} !llvm.module.flags = !{!7, !8} !llvm.ident = !{!9} diff --git a/llvm/test/Transforms/AddDiscriminators/invoke.ll b/llvm/test/Transforms/AddDiscriminators/invoke.ll index d39014d..a3989b6 100644 --- a/llvm/test/Transforms/AddDiscriminators/invoke.ll +++ b/llvm/test/Transforms/AddDiscriminators/invoke.ll @@ -5,14 +5,14 @@ target datalayout = "e-m:o-i64:64-f80:128-n8:16:32:64-S128" target triple = "x86_64-apple-macosx10.14.0" ; Function Attrs: ssp uwtable -define void @_Z3foov() #0 personality ptr @__gxx_personality_v0 !dbg !8 { +define void @_Z3foov() personality ptr @__gxx_personality_v0 !dbg !8 { entry: %exn.slot = alloca ptr %ehselector.slot = alloca i32 ; CHECK: call void @_Z12bar_noexceptv({{.*}} !dbg ![[CALL1:[0-9]+]] - call void @_Z12bar_noexceptv() #4, !dbg !11 + call void @_Z12bar_noexceptv(), !dbg !11 ; CHECK: call void @_Z12bar_noexceptv({{.*}} !dbg ![[CALL2:[0-9]+]] - call void @_Z12bar_noexceptv() #4, !dbg !13 + call void @_Z12bar_noexceptv(), !dbg !13 invoke void @_Z3barv() ; CHECK: unwind label {{.*}} !dbg ![[INVOKE:[0-9]+]] to label %invoke.cont unwind label %lpad, !dbg !14 @@ -31,8 +31,8 @@ lpad: ; preds = %entry catch: ; preds = %lpad %exn = load ptr, ptr %exn.slot, align 8, !dbg !15 - %3 = call ptr @__cxa_begin_catch(ptr %exn) #4, !dbg !15 - invoke void @__cxa_rethrow() #5 + %3 = call ptr @__cxa_begin_catch(ptr %exn), !dbg !15 + invoke void @__cxa_rethrow() to label %unreachable unwind label %lpad1, !dbg !17 lpad1: ; preds = %catch @@ -62,7 +62,7 @@ terminate.lpad: ; preds = %lpad1 %7 = landingpad { ptr, i32 } catch ptr null, !dbg !20 %8 = extractvalue { ptr, i32 } %7, 0, !dbg !20 - call void @__clang_call_terminate(ptr %8) #6, !dbg !20 + call void @__clang_call_terminate(ptr %8), !dbg !20 unreachable, !dbg !20 unreachable: ; preds = %catch @@ -70,9 +70,9 @@ unreachable: ; preds = %catch } ; Function Attrs: nounwind -declare void @_Z12bar_noexceptv() #1 +declare void @_Z12bar_noexceptv() -declare void @_Z3barv() #2 +declare void @_Z3barv() declare i32 @__gxx_personality_v0(...) 
@@ -83,22 +83,14 @@ declare void @__cxa_rethrow() declare void @__cxa_end_catch() ; Function Attrs: noinline noreturn nounwind -define linkonce_odr hidden void @__clang_call_terminate(ptr) #3 { - %2 = call ptr @__cxa_begin_catch(ptr %0) #4 - call void @_ZSt9terminatev() #6 +define linkonce_odr hidden void @__clang_call_terminate(ptr) { + %2 = call ptr @__cxa_begin_catch(ptr %0) + call void @_ZSt9terminatev() unreachable } declare void @_ZSt9terminatev() -attributes #0 = { ssp uwtable "correctly-rounded-divide-sqrt-fp-math"="false" "disable-tail-calls"="false" "less-precise-fpmad"="false" "min-legal-vector-width"="0" "frame-pointer"="all" "no-infs-fp-math"="false" "no-jump-tables"="false" "no-nans-fp-math"="false" "no-signed-zeros-fp-math"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="penryn" "target-features"="+cx16,+fxsr,+mmx,+sahf,+sse,+sse2,+sse3,+sse4.1,+ssse3,+x87" "unsafe-fp-math"="false" "use-soft-float"="false" } -attributes #1 = { nounwind "correctly-rounded-divide-sqrt-fp-math"="false" "disable-tail-calls"="false" "less-precise-fpmad"="false" "frame-pointer"="all" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "no-signed-zeros-fp-math"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="penryn" "target-features"="+cx16,+fxsr,+mmx,+sahf,+sse,+sse2,+sse3,+sse4.1,+ssse3,+x87" "unsafe-fp-math"="false" "use-soft-float"="false" } -attributes #2 = { "correctly-rounded-divide-sqrt-fp-math"="false" "disable-tail-calls"="false" "less-precise-fpmad"="false" "frame-pointer"="all" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "no-signed-zeros-fp-math"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="penryn" "target-features"="+cx16,+fxsr,+mmx,+sahf,+sse,+sse2,+sse3,+sse4.1,+ssse3,+x87" "unsafe-fp-math"="false" "use-soft-float"="false" } -attributes #3 = { noinline noreturn nounwind } -attributes #4 = { nounwind } -attributes #5 = { noreturn } -attributes #6 = { noreturn nounwind } - !llvm.dbg.cu = !{!0} !llvm.module.flags = !{!3, !4, !5, !6} !llvm.ident = !{!7} diff --git a/llvm/test/Transforms/AddDiscriminators/multiple.ll b/llvm/test/Transforms/AddDiscriminators/multiple.ll index 54c1a5d..8e8ca6a 100644 --- a/llvm/test/Transforms/AddDiscriminators/multiple.ll +++ b/llvm/test/Transforms/AddDiscriminators/multiple.ll @@ -10,7 +10,7 @@ ; The two stores inside the if-then-else line must have different discriminator ; values. -define void @foo(i32 %i) #0 !dbg !4 { +define void @foo(i32 %i) !dbg !4 { entry: %i.addr = alloca i32, align 4 %x = alloca i32, align 4 @@ -45,8 +45,6 @@ if.end: ; preds = %if.else, %if.then ret void, !dbg !12 } -attributes #0 = { nounwind uwtable "less-precise-fpmad"="false" "frame-pointer"="all" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "stack-protector-buffer-size"="8" "unsafe-fp-math"="false" "use-soft-float"="false" } - !llvm.dbg.cu = !{!0} !llvm.module.flags = !{!7, !8} !llvm.ident = !{!9} diff --git a/llvm/test/Transforms/AddDiscriminators/no-discriminators.ll b/llvm/test/Transforms/AddDiscriminators/no-discriminators.ll index c23edd6..f84579b 100644 --- a/llvm/test/Transforms/AddDiscriminators/no-discriminators.ll +++ b/llvm/test/Transforms/AddDiscriminators/no-discriminators.ll @@ -12,7 +12,7 @@ ; altered. If they are, it means that the discriminators pass added a ; new lexical scope. 
-define i32 @foo(i64 %i) #0 !dbg !4 { +define i32 @foo(i64 %i) !dbg !4 { entry: %retval = alloca i32, align 4 %i.addr = alloca i64, align 8 @@ -39,10 +39,7 @@ return: ; preds = %if.else, %if.then } ; Function Attrs: nounwind readnone -declare void @llvm.dbg.declare(metadata, metadata, metadata) #1 - -attributes #0 = { nounwind uwtable "less-precise-fpmad"="false" "frame-pointer"="all" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "stack-protector-buffer-size"="8" "unsafe-fp-math"="false" "use-soft-float"="false" } -attributes #1 = { nounwind readnone } +declare void @llvm.dbg.declare(metadata, metadata, metadata) ; We should be able to add discriminators even in the absence of llvm.dbg.cu. ; When using sample profiles, the front end will generate line tables but it diff --git a/llvm/test/Transforms/AddDiscriminators/oneline.ll b/llvm/test/Transforms/AddDiscriminators/oneline.ll index 533d547..fc1675b 100644 --- a/llvm/test/Transforms/AddDiscriminators/oneline.ll +++ b/llvm/test/Transforms/AddDiscriminators/oneline.ll @@ -10,7 +10,7 @@ ; return 100: discriminator 4 ; return 99: discriminator 6 -define i32 @_Z3fooi(i32 %i) #0 !dbg !4 { +define i32 @_Z3fooi(i32 %i) !dbg !4 { %1 = alloca i32, align 4 %2 = alloca i32, align 4 store i32 %i, ptr %2, align 4, !tbaa !13 @@ -49,10 +49,7 @@ define i32 @_Z3fooi(i32 %i) #0 !dbg !4 { } ; Function Attrs: nounwind readnone -declare void @llvm.dbg.declare(metadata, metadata, metadata) #1 - -attributes #0 = { nounwind uwtable "disable-tail-calls"="false" "less-precise-fpmad"="false" "frame-pointer"="none" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="x86-64" "target-features"="+fxsr,+mmx,+sse,+sse2" "unsafe-fp-math"="false" "use-soft-float"="false" } -attributes #1 = { nounwind readnone } +declare void @llvm.dbg.declare(metadata, metadata, metadata) !llvm.dbg.cu = !{!0} !llvm.module.flags = !{!10, !11} diff --git a/llvm/test/Transforms/Attributor/reduced/register_benchmark_test.ll b/llvm/test/Transforms/Attributor/reduced/register_benchmark_test.ll index eb7d78f..4704238 100644 --- a/llvm/test/Transforms/Attributor/reduced/register_benchmark_test.ll +++ b/llvm/test/Transforms/Attributor/reduced/register_benchmark_test.ll @@ -1557,24 +1557,24 @@ declare dso_local void @_GLOBAL__sub_I_register_benchmark_test.cc() #0 section " ; Function Attrs: cold noreturn nounwind declare void @llvm.trap() #20 -attributes #0 = { uwtable "correctly-rounded-divide-sqrt-fp-math"="false" "denormal-fp-math"="ieee,ieee" "denormal-fp-math-f32"="ieee,ieee" "disable-tail-calls"="false" "frame-pointer"="none" "less-precise-fpmad"="false" "min-legal-vector-width"="0" "no-infs-fp-math"="false" "no-jump-tables"="false" "no-nans-fp-math"="false" "no-signed-zeros-fp-math"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="x86-64" "target-features"="+cx8,+fxsr,+mmx,+sse,+sse2,+x87" "unsafe-fp-math"="false" "use-soft-float"="false" } -attributes #1 = { "correctly-rounded-divide-sqrt-fp-math"="false" "denormal-fp-math"="ieee,ieee" "denormal-fp-math-f32"="ieee,ieee" "disable-tail-calls"="false" "frame-pointer"="none" "less-precise-fpmad"="false" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "no-signed-zeros-fp-math"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="x86-64" "target-features"="+cx8,+fxsr,+mmx,+sse,+sse2,+x87" "unsafe-fp-math"="false" "use-soft-float"="false" } -attributes #2 = { nounwind "correctly-rounded-divide-sqrt-fp-math"="false" 
"denormal-fp-math"="ieee,ieee" "denormal-fp-math-f32"="ieee,ieee" "disable-tail-calls"="false" "frame-pointer"="none" "less-precise-fpmad"="false" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "no-signed-zeros-fp-math"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="x86-64" "target-features"="+cx8,+fxsr,+mmx,+sse,+sse2,+x87" "unsafe-fp-math"="false" "use-soft-float"="false" } +attributes #0 = { uwtable "correctly-rounded-divide-sqrt-fp-math"="false" "denormal-fp-math"="ieee,ieee" "denormal-fp-math-f32"="ieee,ieee" "disable-tail-calls"="false" "frame-pointer"="none" "less-precise-fpmad"="false" "min-legal-vector-width"="0" "no-jump-tables"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="x86-64" "target-features"="+cx8,+fxsr,+mmx,+sse,+sse2,+x87" "use-soft-float"="false" } +attributes #1 = { "correctly-rounded-divide-sqrt-fp-math"="false" "denormal-fp-math"="ieee,ieee" "denormal-fp-math-f32"="ieee,ieee" "disable-tail-calls"="false" "frame-pointer"="none" "less-precise-fpmad"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="x86-64" "target-features"="+cx8,+fxsr,+mmx,+sse,+sse2,+x87" "use-soft-float"="false" } +attributes #2 = { nounwind "correctly-rounded-divide-sqrt-fp-math"="false" "denormal-fp-math"="ieee,ieee" "denormal-fp-math-f32"="ieee,ieee" "disable-tail-calls"="false" "frame-pointer"="none" "less-precise-fpmad"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="x86-64" "target-features"="+cx8,+fxsr,+mmx,+sse,+sse2,+x87" "use-soft-float"="false" } attributes #3 = { nounwind } -attributes #4 = { nounwind uwtable "correctly-rounded-divide-sqrt-fp-math"="false" "denormal-fp-math"="ieee,ieee" "denormal-fp-math-f32"="ieee,ieee" "disable-tail-calls"="false" "frame-pointer"="none" "less-precise-fpmad"="false" "min-legal-vector-width"="0" "no-infs-fp-math"="false" "no-jump-tables"="false" "no-nans-fp-math"="false" "no-signed-zeros-fp-math"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="x86-64" "target-features"="+cx8,+fxsr,+mmx,+sse,+sse2,+x87" "unsafe-fp-math"="false" "use-soft-float"="false" } +attributes #4 = { nounwind uwtable "correctly-rounded-divide-sqrt-fp-math"="false" "denormal-fp-math"="ieee,ieee" "denormal-fp-math-f32"="ieee,ieee" "disable-tail-calls"="false" "frame-pointer"="none" "less-precise-fpmad"="false" "min-legal-vector-width"="0" "no-jump-tables"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="x86-64" "target-features"="+cx8,+fxsr,+mmx,+sse,+sse2,+x87" "use-soft-float"="false" } attributes #5 = { argmemonly nounwind willreturn } -attributes #6 = { alwaysinline uwtable "correctly-rounded-divide-sqrt-fp-math"="false" "denormal-fp-math"="ieee,ieee" "denormal-fp-math-f32"="ieee,ieee" "disable-tail-calls"="false" "frame-pointer"="none" "less-precise-fpmad"="false" "min-legal-vector-width"="0" "no-infs-fp-math"="false" "no-jump-tables"="false" "no-nans-fp-math"="false" "no-signed-zeros-fp-math"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="x86-64" "target-features"="+cx8,+fxsr,+mmx,+sse,+sse2,+x87" "unsafe-fp-math"="false" "use-soft-float"="false" } -attributes #7 = { alwaysinline nounwind uwtable "correctly-rounded-divide-sqrt-fp-math"="false" "denormal-fp-math"="ieee,ieee" "denormal-fp-math-f32"="ieee,ieee" "disable-tail-calls"="false" "frame-pointer"="none" "less-precise-fpmad"="false" "min-legal-vector-width"="0" 
"no-infs-fp-math"="false" "no-jump-tables"="false" "no-nans-fp-math"="false" "no-signed-zeros-fp-math"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="x86-64" "target-features"="+cx8,+fxsr,+mmx,+sse,+sse2,+x87" "unsafe-fp-math"="false" "use-soft-float"="false" } -attributes #8 = { nobuiltin "correctly-rounded-divide-sqrt-fp-math"="false" "denormal-fp-math"="ieee,ieee" "denormal-fp-math-f32"="ieee,ieee" "disable-tail-calls"="false" "frame-pointer"="none" "less-precise-fpmad"="false" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "no-signed-zeros-fp-math"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="x86-64" "target-features"="+cx8,+fxsr,+mmx,+sse,+sse2,+x87" "unsafe-fp-math"="false" "use-soft-float"="false" } -attributes #9 = { nobuiltin nounwind "correctly-rounded-divide-sqrt-fp-math"="false" "denormal-fp-math"="ieee,ieee" "denormal-fp-math-f32"="ieee,ieee" "disable-tail-calls"="false" "frame-pointer"="none" "less-precise-fpmad"="false" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "no-signed-zeros-fp-math"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="x86-64" "target-features"="+cx8,+fxsr,+mmx,+sse,+sse2,+x87" "unsafe-fp-math"="false" "use-soft-float"="false" } -attributes #10 = { inlinehint uwtable "correctly-rounded-divide-sqrt-fp-math"="false" "denormal-fp-math"="ieee,ieee" "denormal-fp-math-f32"="ieee,ieee" "disable-tail-calls"="false" "frame-pointer"="none" "less-precise-fpmad"="false" "min-legal-vector-width"="0" "no-infs-fp-math"="false" "no-jump-tables"="false" "no-nans-fp-math"="false" "no-signed-zeros-fp-math"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="x86-64" "target-features"="+cx8,+fxsr,+mmx,+sse,+sse2,+x87" "unsafe-fp-math"="false" "use-soft-float"="false" } -attributes #11 = { inlinehint nounwind uwtable "correctly-rounded-divide-sqrt-fp-math"="false" "denormal-fp-math"="ieee,ieee" "denormal-fp-math-f32"="ieee,ieee" "disable-tail-calls"="false" "frame-pointer"="none" "less-precise-fpmad"="false" "min-legal-vector-width"="0" "no-infs-fp-math"="false" "no-jump-tables"="false" "no-nans-fp-math"="false" "no-signed-zeros-fp-math"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="x86-64" "target-features"="+cx8,+fxsr,+mmx,+sse,+sse2,+x87" "unsafe-fp-math"="false" "use-soft-float"="false" } -attributes #12 = { noreturn nounwind "correctly-rounded-divide-sqrt-fp-math"="false" "denormal-fp-math"="ieee,ieee" "denormal-fp-math-f32"="ieee,ieee" "disable-tail-calls"="false" "frame-pointer"="none" "less-precise-fpmad"="false" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "no-signed-zeros-fp-math"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="x86-64" "target-features"="+cx8,+fxsr,+mmx,+sse,+sse2,+x87" "unsafe-fp-math"="false" "use-soft-float"="false" } -attributes #13 = { norecurse uwtable "correctly-rounded-divide-sqrt-fp-math"="false" "denormal-fp-math"="ieee,ieee" "denormal-fp-math-f32"="ieee,ieee" "disable-tail-calls"="false" "frame-pointer"="none" "less-precise-fpmad"="false" "min-legal-vector-width"="0" "no-infs-fp-math"="false" "no-jump-tables"="false" "no-nans-fp-math"="false" "no-signed-zeros-fp-math"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="x86-64" "target-features"="+cx8,+fxsr,+mmx,+sse,+sse2,+x87" "unsafe-fp-math"="false" "use-soft-float"="false" } +attributes #6 = { alwaysinline uwtable 
"correctly-rounded-divide-sqrt-fp-math"="false" "denormal-fp-math"="ieee,ieee" "denormal-fp-math-f32"="ieee,ieee" "disable-tail-calls"="false" "frame-pointer"="none" "less-precise-fpmad"="false" "min-legal-vector-width"="0" "no-jump-tables"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="x86-64" "target-features"="+cx8,+fxsr,+mmx,+sse,+sse2,+x87" "use-soft-float"="false" } +attributes #7 = { alwaysinline nounwind uwtable "correctly-rounded-divide-sqrt-fp-math"="false" "denormal-fp-math"="ieee,ieee" "denormal-fp-math-f32"="ieee,ieee" "disable-tail-calls"="false" "frame-pointer"="none" "less-precise-fpmad"="false" "min-legal-vector-width"="0" "no-jump-tables"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="x86-64" "target-features"="+cx8,+fxsr,+mmx,+sse,+sse2,+x87" "use-soft-float"="false" } +attributes #8 = { nobuiltin "correctly-rounded-divide-sqrt-fp-math"="false" "denormal-fp-math"="ieee,ieee" "denormal-fp-math-f32"="ieee,ieee" "disable-tail-calls"="false" "frame-pointer"="none" "less-precise-fpmad"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="x86-64" "target-features"="+cx8,+fxsr,+mmx,+sse,+sse2,+x87" "use-soft-float"="false" } +attributes #9 = { nobuiltin nounwind "correctly-rounded-divide-sqrt-fp-math"="false" "denormal-fp-math"="ieee,ieee" "denormal-fp-math-f32"="ieee,ieee" "disable-tail-calls"="false" "frame-pointer"="none" "less-precise-fpmad"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="x86-64" "target-features"="+cx8,+fxsr,+mmx,+sse,+sse2,+x87" "use-soft-float"="false" } +attributes #10 = { inlinehint uwtable "correctly-rounded-divide-sqrt-fp-math"="false" "denormal-fp-math"="ieee,ieee" "denormal-fp-math-f32"="ieee,ieee" "disable-tail-calls"="false" "frame-pointer"="none" "less-precise-fpmad"="false" "min-legal-vector-width"="0" "no-jump-tables"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="x86-64" "target-features"="+cx8,+fxsr,+mmx,+sse,+sse2,+x87" "use-soft-float"="false" } +attributes #11 = { inlinehint nounwind uwtable "correctly-rounded-divide-sqrt-fp-math"="false" "denormal-fp-math"="ieee,ieee" "denormal-fp-math-f32"="ieee,ieee" "disable-tail-calls"="false" "frame-pointer"="none" "less-precise-fpmad"="false" "min-legal-vector-width"="0" "no-jump-tables"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="x86-64" "target-features"="+cx8,+fxsr,+mmx,+sse,+sse2,+x87" "use-soft-float"="false" } +attributes #12 = { noreturn nounwind "correctly-rounded-divide-sqrt-fp-math"="false" "denormal-fp-math"="ieee,ieee" "denormal-fp-math-f32"="ieee,ieee" "disable-tail-calls"="false" "frame-pointer"="none" "less-precise-fpmad"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="x86-64" "target-features"="+cx8,+fxsr,+mmx,+sse,+sse2,+x87" "use-soft-float"="false" } +attributes #13 = { norecurse uwtable "correctly-rounded-divide-sqrt-fp-math"="false" "denormal-fp-math"="ieee,ieee" "denormal-fp-math-f32"="ieee,ieee" "disable-tail-calls"="false" "frame-pointer"="none" "less-precise-fpmad"="false" "min-legal-vector-width"="0" "no-jump-tables"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="x86-64" "target-features"="+cx8,+fxsr,+mmx,+sse,+sse2,+x87" "use-soft-float"="false" } attributes #14 = { nounwind readnone willreturn } -attributes #15 = { noreturn "correctly-rounded-divide-sqrt-fp-math"="false" 
"denormal-fp-math"="ieee,ieee" "denormal-fp-math-f32"="ieee,ieee" "disable-tail-calls"="false" "frame-pointer"="none" "less-precise-fpmad"="false" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "no-signed-zeros-fp-math"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="x86-64" "target-features"="+cx8,+fxsr,+mmx,+sse,+sse2,+x87" "unsafe-fp-math"="false" "use-soft-float"="false" } +attributes #15 = { noreturn "correctly-rounded-divide-sqrt-fp-math"="false" "denormal-fp-math"="ieee,ieee" "denormal-fp-math-f32"="ieee,ieee" "disable-tail-calls"="false" "frame-pointer"="none" "less-precise-fpmad"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="x86-64" "target-features"="+cx8,+fxsr,+mmx,+sse,+sse2,+x87" "use-soft-float"="false" } attributes #16 = { noinline noreturn nounwind } attributes #17 = { argmemonly nounwind willreturn writeonly } -attributes #18 = { noreturn uwtable "correctly-rounded-divide-sqrt-fp-math"="false" "denormal-fp-math"="ieee,ieee" "denormal-fp-math-f32"="ieee,ieee" "disable-tail-calls"="false" "frame-pointer"="none" "less-precise-fpmad"="false" "min-legal-vector-width"="0" "no-infs-fp-math"="false" "no-jump-tables"="false" "no-nans-fp-math"="false" "no-signed-zeros-fp-math"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="x86-64" "target-features"="+cx8,+fxsr,+mmx,+sse,+sse2,+x87" "unsafe-fp-math"="false" "use-soft-float"="false" } -attributes #19 = { inlinehint noreturn uwtable "correctly-rounded-divide-sqrt-fp-math"="false" "denormal-fp-math"="ieee,ieee" "denormal-fp-math-f32"="ieee,ieee" "disable-tail-calls"="false" "frame-pointer"="none" "less-precise-fpmad"="false" "min-legal-vector-width"="0" "no-infs-fp-math"="false" "no-jump-tables"="false" "no-nans-fp-math"="false" "no-signed-zeros-fp-math"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="x86-64" "target-features"="+cx8,+fxsr,+mmx,+sse,+sse2,+x87" "unsafe-fp-math"="false" "use-soft-float"="false" } +attributes #18 = { noreturn uwtable "correctly-rounded-divide-sqrt-fp-math"="false" "denormal-fp-math"="ieee,ieee" "denormal-fp-math-f32"="ieee,ieee" "disable-tail-calls"="false" "frame-pointer"="none" "less-precise-fpmad"="false" "min-legal-vector-width"="0" "no-jump-tables"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="x86-64" "target-features"="+cx8,+fxsr,+mmx,+sse,+sse2,+x87" "use-soft-float"="false" } +attributes #19 = { inlinehint noreturn uwtable "correctly-rounded-divide-sqrt-fp-math"="false" "denormal-fp-math"="ieee,ieee" "denormal-fp-math-f32"="ieee,ieee" "disable-tail-calls"="false" "frame-pointer"="none" "less-precise-fpmad"="false" "min-legal-vector-width"="0" "no-jump-tables"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="x86-64" "target-features"="+cx8,+fxsr,+mmx,+sse,+sse2,+x87" "use-soft-float"="false" } attributes #20 = { cold noreturn nounwind } diff --git a/llvm/test/Transforms/CodeGenPrepare/ARM/bitreverse-recognize.ll b/llvm/test/Transforms/CodeGenPrepare/ARM/bitreverse-recognize.ll index d272fef..189186b 100644 --- a/llvm/test/Transforms/CodeGenPrepare/ARM/bitreverse-recognize.ll +++ b/llvm/test/Transforms/CodeGenPrepare/ARM/bitreverse-recognize.ll @@ -4,7 +4,7 @@ target datalayout = "e-m:e-p:32:32-i64:64-v128:64:128-a:0:32-n32-S64" target triple = "armv7--linux-gnueabihf" ; CHECK-LABEL: @f -define i32 @f(i32 %a) #0 { +define i32 @f(i32 %a) { ; CHECK: call i32 @llvm.bitreverse.i32 entry: 
br label %for.body @@ -25,8 +25,6 @@ for.body: ; preds = %for.body, %entry br i1 %exitcond, label %for.cond.cleanup, label %for.body, !llvm.loop !3 } -attributes #0 = { norecurse nounwind readnone "disable-tail-calls"="false" "less-precise-fpmad"="false" "frame-pointer"="all" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="cortex-a8" "target-features"="+dsp,+neon,+vfp3" "unsafe-fp-math"="false" "use-soft-float"="false" } - !llvm.module.flags = !{!0, !1} !llvm.ident = !{!2} diff --git a/llvm/test/Transforms/CodeGenPrepare/X86/bitreverse-hang.ll b/llvm/test/Transforms/CodeGenPrepare/X86/bitreverse-hang.ll index eec0967..35115cf 100644 --- a/llvm/test/Transforms/CodeGenPrepare/X86/bitreverse-hang.ll +++ b/llvm/test/Transforms/CodeGenPrepare/X86/bitreverse-hang.ll @@ -21,7 +21,7 @@ @b = common global i32 0, align 4 ; CHECK: define i32 @fn1 -define i32 @fn1() #0 { +define i32 @fn1() { entry: %b.promoted = load i32, ptr @b, align 4, !tbaa !2 br label %for.body @@ -40,8 +40,6 @@ for.end: ; preds = %for.body ret i32 undef } -attributes #0 = { norecurse nounwind ssp uwtable "disable-tail-calls"="false" "less-precise-fpmad"="false" "frame-pointer"="all" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="core2" "target-features"="+cx16,+fxsr,+mmx,+sse,+sse2,+sse3,+ssse3" "unsafe-fp-math"="false" "use-soft-float"="false" } - !llvm.module.flags = !{!0} !llvm.ident = !{!1} diff --git a/llvm/test/Transforms/CodeGenPrepare/dom-tree.ll b/llvm/test/Transforms/CodeGenPrepare/dom-tree.ll index 1c990ff..14360fe 100644 --- a/llvm/test/Transforms/CodeGenPrepare/dom-tree.ll +++ b/llvm/test/Transforms/CodeGenPrepare/dom-tree.ll @@ -10,7 +10,7 @@ target datalayout = "e-m:e-p:32:32-i64:64-v128:64:128-a:0:32-n32-S64" target triple = "armv7--linux-gnueabihf" -define i32 @f(i32 %a) #0 { +define i32 @f(i32 %a) { entry: br label %for.body @@ -30,8 +30,6 @@ for.body: br i1 %exitcond, label %for.cond.cleanup, label %for.body, !llvm.loop !3 } -attributes #0 = { norecurse nounwind readnone "disable-tail-calls"="false" "less-precise-fpmad"="false" "frame-pointer"="all" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="cortex-a8" "target-features"="+dsp,+neon,+vfp3" "unsafe-fp-math"="false" "use-soft-float"="false" } - !llvm.module.flags = !{!0, !1} !llvm.ident = !{!2} diff --git a/llvm/test/Transforms/ConstantHoisting/X86/ehpad.ll b/llvm/test/Transforms/ConstantHoisting/X86/ehpad.ll index 46fa066..78760db 100644 --- a/llvm/test/Transforms/ConstantHoisting/X86/ehpad.ll +++ b/llvm/test/Transforms/ConstantHoisting/X86/ehpad.ll @@ -20,7 +20,7 @@ target triple = "x86_64-pc-windows-msvc" ; BFIHOIST: br label %endif ; Function Attrs: norecurse -define i32 @main(i32 %argc, ptr nocapture readnone %argv) local_unnamed_addr #0 personality ptr @__CxxFrameHandler3 { +define i32 @main(i32 %argc, ptr nocapture readnone %argv) local_unnamed_addr personality ptr @__CxxFrameHandler3 { %call = tail call i64 @fn(i64 0) %call1 = tail call i64 @fn(i64 1) %tobool = icmp eq i32 %argc, 0 @@ -62,9 +62,6 @@ endif: ret i32 0 } -declare i64 @fn(i64) local_unnamed_addr #1 +declare i64 @fn(i64) local_unnamed_addr declare i32 @__CxxFrameHandler3(...) 
- -attributes #0 = { norecurse "disable-tail-calls"="false" "less-precise-fpmad"="false" "frame-pointer"="none" "no-infs-fp-math"="false" "no-jump-tables"="false" "no-nans-fp-math"="false" "no-signed-zeros-fp-math"="false" "stack-protector-buffer-size"="8" "target-features"="+mmx,+sse,+sse2,+x87" "unsafe-fp-math"="false" "use-soft-float"="false" } -attributes #1 = { "disable-tail-calls"="false" "less-precise-fpmad"="false" "frame-pointer"="none" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "no-signed-zeros-fp-math"="false" "stack-protector-buffer-size"="8" "target-features"="+mmx,+sse,+sse2,+x87" "unsafe-fp-math"="false" "use-soft-float"="false" } diff --git a/llvm/test/Transforms/Coroutines/coro-debug.ll b/llvm/test/Transforms/Coroutines/coro-debug.ll index d1f1922..109be51f 100644 --- a/llvm/test/Transforms/Coroutines/coro-debug.ll +++ b/llvm/test/Transforms/Coroutines/coro-debug.ll @@ -14,7 +14,7 @@ entry: %0 = call token @llvm.coro.id(i32 0, ptr null, ptr @flink, ptr null), !dbg !16 %1 = call i64 @llvm.coro.size.i64(), !dbg !16 %call = call ptr @malloc(i64 %1), !dbg !16 - %2 = call ptr @llvm.coro.begin(token %0, ptr %call) #7, !dbg !16 + %2 = call ptr @llvm.coro.begin(token %0, ptr %call), !dbg !16 store ptr %2, ptr %coro_hdl, align 8, !dbg !16 %3 = call i8 @llvm.coro.suspend(token none, i1 false), !dbg !17 %conv = sext i8 %3 to i32, !dbg !17 @@ -69,7 +69,7 @@ coro_Cleanup: ; preds = %sw.epilog, %sw.bb1 br label %coro_Suspend, !dbg !24 coro_Suspend: ; preds = %coro_Cleanup, %sw.default - call void @llvm.coro.end(ptr null, i1 false, token none) #7, !dbg !24 + call void @llvm.coro.end(ptr null, i1 false, token none), !dbg !24 %7 = load ptr, ptr %coro_hdl, align 8, !dbg !24 store i32 0, ptr %late_local, !dbg !24 ret ptr %7, !dbg !24 @@ -82,47 +82,40 @@ ehcleanup: } ; Function Attrs: nounwind readnone speculatable -declare void @llvm.dbg.value(metadata, metadata, metadata) #1 +declare void @llvm.dbg.value(metadata, metadata, metadata) ; Function Attrs: nounwind readnone speculatable -declare void @llvm.dbg.declare(metadata, metadata, metadata) #1 +declare void @llvm.dbg.declare(metadata, metadata, metadata) ; Function Attrs: argmemonly nounwind readonly -declare token @llvm.coro.id(i32, ptr readnone, ptr nocapture readonly, ptr) #2 +declare token @llvm.coro.id(i32, ptr readnone, ptr nocapture readonly, ptr) -declare ptr @malloc(i64) #3 +declare ptr @malloc(i64) declare ptr @allocate() declare void @print({ ptr, i32 }) declare void @log() ; Function Attrs: nounwind readnone -declare i64 @llvm.coro.size.i64() #4 +declare i64 @llvm.coro.size.i64() ; Function Attrs: nounwind -declare ptr @llvm.coro.begin(token, ptr writeonly) #5 +declare ptr @llvm.coro.begin(token, ptr writeonly) ; Function Attrs: nounwind -declare i8 @llvm.coro.suspend(token, i1) #5 +declare i8 @llvm.coro.suspend(token, i1) -declare void @free(ptr) #3 +declare void @free(ptr) ; Function Attrs: argmemonly nounwind readonly -declare ptr @llvm.coro.free(token, ptr nocapture readonly) #2 +declare ptr @llvm.coro.free(token, ptr nocapture readonly) ; Function Attrs: nounwind -declare void @llvm.coro.end(ptr, i1, token) #5 +declare void @llvm.coro.end(ptr, i1, token) ; Function Attrs: argmemonly nounwind readonly -declare ptr @llvm.coro.subfn.addr(ptr nocapture readonly, i8) #2 - -attributes #0 = { noinline nounwind presplitcoroutine "correctly-rounded-divide-sqrt-fp-math"="false" "disable-tail-calls"="false" "less-precise-fpmad"="false" "frame-pointer"="none" "no-infs-fp-math"="false" "no-jump-tables"="false" 
"no-nans-fp-math"="false" "no-signed-zeros-fp-math"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-features"="+mmx,+sse,+sse2,+x87" "unsafe-fp-math"="false" "use-soft-float"="false" } -attributes #1 = { nounwind readnone speculatable } -attributes #2 = { argmemonly nounwind readonly } -attributes #3 = { "correctly-rounded-divide-sqrt-fp-math"="false" "disable-tail-calls"="false" "less-precise-fpmad"="false" "frame-pointer"="none" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "no-signed-zeros-fp-math"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-features"="+mmx,+sse,+sse2,+x87" "unsafe-fp-math"="false" "use-soft-float"="false" } -attributes #4 = { nounwind readnone } -attributes #5 = { nounwind } -attributes #6 = { alwaysinline } -attributes #7 = { noduplicate } +declare ptr @llvm.coro.subfn.addr(ptr nocapture readonly, i8) + +attributes #0 = { noinline nounwind presplitcoroutine } !llvm.dbg.cu = !{!0} !llvm.module.flags = !{!3, !4} diff --git a/llvm/test/Transforms/Coroutines/coro-split-dbg.ll b/llvm/test/Transforms/Coroutines/coro-split-dbg.ll index c53bea8..577ca9a 100644 --- a/llvm/test/Transforms/Coroutines/coro-split-dbg.ll +++ b/llvm/test/Transforms/Coroutines/coro-split-dbg.ll @@ -6,9 +6,9 @@ target datalayout = "e-m:e-i64:64-f80:128-n8:16:32:64-S128" target triple = "x86_64-unknown-linux-gnu" ; Function Attrs: nounwind readnone -declare void @llvm.dbg.declare(metadata, metadata, metadata) #1 +declare void @llvm.dbg.declare(metadata, metadata, metadata) -declare void @bar(...) local_unnamed_addr #2 +declare void @bar(...) local_unnamed_addr ; Function Attrs: nounwind uwtable define ptr @f() #3 !dbg !16 { @@ -16,14 +16,14 @@ entry: %0 = tail call token @llvm.coro.id(i32 0, ptr null, ptr @f, ptr null), !dbg !26 %1 = tail call i64 @llvm.coro.size.i64(), !dbg !26 %call = tail call ptr @malloc(i64 %1), !dbg !26 - %2 = tail call ptr @llvm.coro.begin(token %0, ptr %call) #9, !dbg !26 + %2 = tail call ptr @llvm.coro.begin(token %0, ptr %call), !dbg !26 tail call void @llvm.dbg.value(metadata ptr %2, metadata !21, metadata !12), !dbg !26 br label %for.cond, !dbg !27 for.cond: ; preds = %for.cond, %entry tail call void @llvm.dbg.value(metadata i32 undef, metadata !22, metadata !12), !dbg !28 - tail call void @llvm.dbg.value(metadata i32 undef, metadata !11, metadata !12) #7, !dbg !29 - tail call void (...) @bar() #7, !dbg !33 + tail call void @llvm.dbg.value(metadata i32 undef, metadata !11, metadata !12), !dbg !29 + tail call void (...) 
@bar(), !dbg !33 %3 = tail call token @llvm.coro.save(ptr null), !dbg !34 %4 = tail call i8 @llvm.coro.suspend(token %3, i1 false), !dbg !34 %conv = sext i8 %4 to i32, !dbg !34 @@ -38,40 +38,31 @@ coro_Cleanup: ; preds = %for.cond br label %coro_Suspend, !dbg !36 coro_Suspend: ; preds = %for.cond, %if.then, %coro_Cleanup - tail call void @llvm.coro.end(ptr null, i1 false, token none) #9, !dbg !38 + tail call void @llvm.coro.end(ptr null, i1 false, token none), !dbg !38 ret ptr %2, !dbg !39 } ; Function Attrs: argmemonly nounwind -declare void @llvm.lifetime.start.p0(ptr nocapture) #4 +declare void @llvm.lifetime.start.p0(ptr nocapture) ; Function Attrs: argmemonly nounwind readonly -declare token @llvm.coro.id(i32, ptr readnone, ptr nocapture readonly, ptr) #5 +declare token @llvm.coro.id(i32, ptr readnone, ptr nocapture readonly, ptr) ; Function Attrs: nounwind -declare noalias ptr @malloc(i64) local_unnamed_addr #6 -declare i64 @llvm.coro.size.i64() #1 -declare ptr @llvm.coro.begin(token, ptr writeonly) #7 -declare token @llvm.coro.save(ptr) #7 -declare i8 @llvm.coro.suspend(token, i1) #7 -declare void @llvm.lifetime.end.p0(ptr nocapture) #4 -declare ptr @llvm.coro.free(token, ptr nocapture readonly) #5 -declare void @free(ptr nocapture) local_unnamed_addr #6 -declare void @llvm.coro.end(ptr, i1, token) #7 -declare ptr @llvm.coro.subfn.addr(ptr nocapture readonly, i8) #5 +declare noalias ptr @malloc(i64) local_unnamed_addr +declare i64 @llvm.coro.size.i64() +declare ptr @llvm.coro.begin(token, ptr writeonly) +declare token @llvm.coro.save(ptr) +declare i8 @llvm.coro.suspend(token, i1) +declare void @llvm.lifetime.end.p0(ptr nocapture) +declare ptr @llvm.coro.free(token, ptr nocapture readonly) +declare void @free(ptr nocapture) local_unnamed_addr +declare void @llvm.coro.end(ptr, i1, token) +declare ptr @llvm.coro.subfn.addr(ptr nocapture readonly, i8) -declare void @llvm.dbg.value(metadata, metadata, metadata) #1 +declare void @llvm.dbg.value(metadata, metadata, metadata) -attributes #0 = { nounwind uwtable "correctly-rounded-divide-sqrt-fp-math"="false" "disable-tail-calls"="false" "less-precise-fpmad"="false" "frame-pointer"="none" "no-infs-fp-math"="false" "no-jump-tables"="false" "no-nans-fp-math"="false" "no-signed-zeros-fp-math"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="x86-64" "target-features"="+fxsr,+mmx,+sse,+sse2,+x87" "unsafe-fp-math"="false" "use-soft-float"="false" } -attributes #1 = { nounwind readnone } -attributes #2 = { "correctly-rounded-divide-sqrt-fp-math"="false" "disable-tail-calls"="false" "less-precise-fpmad"="false" "frame-pointer"="none" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "no-signed-zeros-fp-math"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="x86-64" "target-features"="+fxsr,+mmx,+sse,+sse2,+x87" "unsafe-fp-math"="false" "use-soft-float"="false" } -attributes #3 = { nounwind uwtable presplitcoroutine "correctly-rounded-divide-sqrt-fp-math"="false" "disable-tail-calls"="false" "less-precise-fpmad"="false" "frame-pointer"="none" "no-infs-fp-math"="false" "no-jump-tables"="false" "no-nans-fp-math"="false" "no-signed-zeros-fp-math"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="x86-64" "target-features"="+fxsr,+mmx,+sse,+sse2,+x87" "unsafe-fp-math"="false" "use-soft-float"="false" } -attributes #4 = { argmemonly nounwind } -attributes #5 = { argmemonly nounwind readonly } -attributes #6 = { nounwind 
"correctly-rounded-divide-sqrt-fp-math"="false" "disable-tail-calls"="false" "less-precise-fpmad"="false" "frame-pointer"="none" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "no-signed-zeros-fp-math"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="x86-64" "target-features"="+fxsr,+mmx,+sse,+sse2,+x87" "unsafe-fp-math"="false" "use-soft-float"="false" } -attributes #7 = { nounwind } -attributes #8 = { alwaysinline nounwind } -attributes #9 = { noduplicate } +attributes #3 = { nounwind uwtable presplitcoroutine } !llvm.dbg.cu = !{!0} !llvm.module.flags = !{!3, !4} diff --git a/llvm/test/Transforms/InstCombine/intrinsic-select.ll b/llvm/test/Transforms/InstCombine/intrinsic-select.ll index 2f1f9fc..fc9ab9f 100644 --- a/llvm/test/Transforms/InstCombine/intrinsic-select.ll +++ b/llvm/test/Transforms/InstCombine/intrinsic-select.ll @@ -222,8 +222,7 @@ declare i32 @llvm.vector.reduce.add.v2i32(<2 x i32>) define i32 @vec_to_scalar_select_scalar(i1 %b) { ; CHECK-LABEL: @vec_to_scalar_select_scalar( -; CHECK-NEXT: [[S:%.*]] = select i1 [[B:%.*]], <2 x i32> <i32 1, i32 2>, <2 x i32> <i32 3, i32 4> -; CHECK-NEXT: [[C:%.*]] = call i32 @llvm.vector.reduce.add.v2i32(<2 x i32> [[S]]) +; CHECK-NEXT: [[C:%.*]] = select i1 [[B:%.*]], i32 3, i32 7 ; CHECK-NEXT: ret i32 [[C]] ; %s = select i1 %b, <2 x i32> <i32 1, i32 2>, <2 x i32> <i32 3, i32 4> @@ -371,3 +370,36 @@ define float @test_fabs_select_multiuse_both_constant(i1 %cond, float %x) { %fabs = call float @llvm.fabs.f32(float %select) ret float %fabs } + +; Negative test: Don't replace with select between vector mask and zeroinitializer. +define <16 x i1> @test_select_of_active_lane_mask_bound(i64 %base, i64 %n, i1 %cond) { +; CHECK-LABEL: @test_select_of_active_lane_mask_bound( +; CHECK-NEXT: [[S:%.*]] = select i1 [[COND:%.*]], i64 [[N:%.*]], i64 0 +; CHECK-NEXT: [[MASK:%.*]] = call <16 x i1> @llvm.get.active.lane.mask.v16i1.i64(i64 [[BASE:%.*]], i64 [[S]]) +; CHECK-NEXT: ret <16 x i1> [[MASK]] +; + %s = select i1 %cond, i64 %n, i64 0 + %mask = call <16 x i1> @llvm.get.active.lane.mask.v16i1.i64(i64 %base, i64 %s) + ret <16 x i1> %mask +} + +define <16 x i1> @test_select_of_active_lane_mask_bound_both_constant(i64 %base, i64 %n, i1 %cond) { +; CHECK-LABEL: @test_select_of_active_lane_mask_bound_both_constant( +; CHECK-NEXT: [[MASK:%.*]] = select i1 [[COND:%.*]], <16 x i1> splat (i1 true), <16 x i1> zeroinitializer +; CHECK-NEXT: ret <16 x i1> [[MASK]] +; + %s = select i1 %cond, i64 16, i64 0 + %mask = call <16 x i1> @llvm.get.active.lane.mask.v16i1.i64(i64 0, i64 %s) + ret <16 x i1> %mask +} + +define { i64, i1 } @test_select_of_overflow_intrinsic_operand(i64 %n, i1 %cond) { +; CHECK-LABEL: @test_select_of_overflow_intrinsic_operand( +; CHECK-NEXT: [[TMP1:%.*]] = call { i64, i1 } @llvm.uadd.with.overflow.i64(i64 [[N:%.*]], i64 42) +; CHECK-NEXT: [[ADD_OVERFLOW:%.*]] = select i1 [[COND:%.*]], { i64, i1 } [[TMP1]], { i64, i1 } { i64 42, i1 false } +; CHECK-NEXT: ret { i64, i1 } [[ADD_OVERFLOW]] +; + %s = select i1 %cond, i64 %n, i64 0 + %add_overflow = call { i64, i1 } @llvm.uadd.with.overflow.i64(i64 %s, i64 42) + ret { i64, i1 } %add_overflow +} diff --git a/llvm/test/Transforms/InstCombine/phi.ll b/llvm/test/Transforms/InstCombine/phi.ll index 3454835..1c8b21c 100644 --- a/llvm/test/Transforms/InstCombine/phi.ll +++ b/llvm/test/Transforms/InstCombine/phi.ll @@ -3026,6 +3026,56 @@ join: ret i32 %umax } +define i32 @cross_lane_intrinsic_over_phi(i1 %c, i1 %c2, <4 x i32> %a) { +; CHECK-LABEL: 
@cross_lane_intrinsic_over_phi( +; CHECK-NEXT: entry: +; CHECK-NEXT: br i1 [[C:%.*]], label [[IF:%.*]], label [[JOIN:%.*]] +; CHECK: if: +; CHECK-NEXT: [[TMP0:%.*]] = call i32 @llvm.vector.reduce.add.v4i32(<4 x i32> [[A:%.*]]) +; CHECK-NEXT: br label [[JOIN]] +; CHECK: join: +; CHECK-NEXT: [[PHI:%.*]] = phi i32 [ [[TMP0]], [[IF]] ], [ 0, [[ENTRY:%.*]] ] +; CHECK-NEXT: call void @may_exit() +; CHECK-NEXT: ret i32 [[PHI]] +; +entry: + br i1 %c, label %if, label %join + +if: + br label %join + +join: + %phi = phi <4 x i32> [ %a, %if ], [ zeroinitializer, %entry ] + call void @may_exit() + %sum = call i32 @llvm.vector.reduce.add.v4i32(<4 x i32> %phi) + ret i32 %sum +} + +define { i64, i1 } @overflow_intrinsic_over_phi(i1 %c, i64 %a) { +; CHECK-LABEL: @overflow_intrinsic_over_phi( +; CHECK-NEXT: entry: +; CHECK-NEXT: br i1 [[C:%.*]], label [[IF:%.*]], label [[JOIN:%.*]] +; CHECK: if: +; CHECK-NEXT: [[TMP0:%.*]] = call { i64, i1 } @llvm.uadd.with.overflow.i64(i64 [[A:%.*]], i64 1) +; CHECK-NEXT: br label [[JOIN]] +; CHECK: join: +; CHECK-NEXT: [[PHI:%.*]] = phi { i64, i1 } [ [[TMP0]], [[IF]] ], [ { i64 1, i1 false }, [[ENTRY:%.*]] ] +; CHECK-NEXT: call void @may_exit() +; CHECK-NEXT: ret { i64, i1 } [[PHI]] +; +entry: + br i1 %c, label %if, label %join + +if: + br label %join + +join: + %phi = phi i64 [ %a, %if ], [ 0, %entry ] + call void @may_exit() + %add_overflow = call { i64, i1 } @llvm.uadd.with.overflow.i64(i64 %phi, i64 1) + ret { i64, i1 } %add_overflow +} + define i32 @multiple_intrinsics_with_multiple_phi_uses(i1 %c, i32 %arg) { ; CHECK-LABEL: @multiple_intrinsics_with_multiple_phi_uses( ; CHECK-NEXT: entry: diff --git a/llvm/test/Transforms/InstCombine/ptrtoaddr.ll b/llvm/test/Transforms/InstCombine/ptrtoaddr.ll index 410c43c..f19cca8 100644 --- a/llvm/test/Transforms/InstCombine/ptrtoaddr.ll +++ b/llvm/test/Transforms/InstCombine/ptrtoaddr.ll @@ -40,3 +40,134 @@ define i128 @ptrtoaddr_sext(ptr %p) { %ext = sext i64 %p.addr to i128 ret i128 %ext } + +define i64 @sub_ptrtoaddr(ptr %p, i64 %offset) { +; CHECK-LABEL: define i64 @sub_ptrtoaddr( +; CHECK-SAME: ptr [[P:%.*]], i64 [[OFFSET:%.*]]) { +; CHECK-NEXT: ret i64 [[OFFSET]] +; + %p2 = getelementptr i8, ptr %p, i64 %offset + %p.addr = ptrtoaddr ptr %p to i64 + %p2.addr = ptrtoaddr ptr %p2 to i64 + %sub = sub i64 %p2.addr, %p.addr + ret i64 %sub +} + +define i64 @sub_ptrtoint_ptrtoaddr(ptr %p, i64 %offset) { +; CHECK-LABEL: define i64 @sub_ptrtoint_ptrtoaddr( +; CHECK-SAME: ptr [[P:%.*]], i64 [[OFFSET:%.*]]) { +; CHECK-NEXT: ret i64 [[OFFSET]] +; + %p2 = getelementptr i8, ptr %p, i64 %offset + %p.int = ptrtoint ptr %p to i64 + %p2.addr = ptrtoaddr ptr %p2 to i64 + %sub = sub i64 %p2.addr, %p.int + ret i64 %sub +} + +define i32 @sub_ptrtoaddr_addrsize(ptr addrspace(1) %p, i32 %offset) { +; CHECK-LABEL: define i32 @sub_ptrtoaddr_addrsize( +; CHECK-SAME: ptr addrspace(1) [[P:%.*]], i32 [[OFFSET:%.*]]) { +; CHECK-NEXT: ret i32 [[OFFSET]] +; + %p2 = getelementptr i8, ptr addrspace(1) %p, i32 %offset + %p.addr = ptrtoaddr ptr addrspace(1) %p to i32 + %p2.addr = ptrtoaddr ptr addrspace(1) %p2 to i32 + %sub = sub i32 %p2.addr, %p.addr + ret i32 %sub +} + +define i32 @sub_trunc_ptrtoaddr(ptr %p, i64 %offset) { +; CHECK-LABEL: define i32 @sub_trunc_ptrtoaddr( +; CHECK-SAME: ptr [[P:%.*]], i64 [[OFFSET:%.*]]) { +; CHECK-NEXT: [[SUB:%.*]] = trunc i64 [[OFFSET]] to i32 +; CHECK-NEXT: ret i32 [[SUB]] +; + %p2 = getelementptr i8, ptr %p, i64 %offset + %p.addr = ptrtoaddr ptr %p to i64 + %p2.addr = ptrtoaddr ptr %p2 to i64 + %p.addr.trunc = trunc i64 
%p.addr to i32 + %p2.addr.trunc = trunc i64 %p2.addr to i32 + %sub = sub i32 %p2.addr.trunc, %p.addr.trunc + ret i32 %sub +} + +define i16 @sub_trunc_ptrtoaddr_addrsize(ptr addrspace(1) %p, i32 %offset) { +; CHECK-LABEL: define i16 @sub_trunc_ptrtoaddr_addrsize( +; CHECK-SAME: ptr addrspace(1) [[P:%.*]], i32 [[OFFSET:%.*]]) { +; CHECK-NEXT: [[SUB:%.*]] = trunc i32 [[OFFSET]] to i16 +; CHECK-NEXT: ret i16 [[SUB]] +; + %p2 = getelementptr i8, ptr addrspace(1) %p, i32 %offset + %p.addr = ptrtoaddr ptr addrspace(1) %p to i32 + %p2.addr = ptrtoaddr ptr addrspace(1) %p2 to i32 + %p.addr.trunc = trunc i32 %p.addr to i16 + %p2.addr.trunc = trunc i32 %p2.addr to i16 + %sub = sub i16 %p2.addr.trunc, %p.addr.trunc + ret i16 %sub +} + +define i16 @sub_trunc_ptrtoint_ptrtoaddr_addrsize(ptr addrspace(1) %p, i32 %offset) { +; CHECK-LABEL: define i16 @sub_trunc_ptrtoint_ptrtoaddr_addrsize( +; CHECK-SAME: ptr addrspace(1) [[P:%.*]], i32 [[OFFSET:%.*]]) { +; CHECK-NEXT: [[SUB:%.*]] = trunc i32 [[OFFSET]] to i16 +; CHECK-NEXT: ret i16 [[SUB]] +; + %p2 = getelementptr i8, ptr addrspace(1) %p, i32 %offset + %p.int = ptrtoint ptr addrspace(1) %p to i64 + %p2.addr = ptrtoaddr ptr addrspace(1) %p2 to i32 + %p.int.trunc = trunc i64 %p.int to i16 + %p2.addr.trunc = trunc i32 %p2.addr to i16 + %sub = sub i16 %p2.addr.trunc, %p.int.trunc + ret i16 %sub +} + +define i128 @sub_zext_ptrtoaddr(ptr %p, i64 %offset) { +; CHECK-LABEL: define i128 @sub_zext_ptrtoaddr( +; CHECK-SAME: ptr [[P:%.*]], i64 [[OFFSET:%.*]]) { +; CHECK-NEXT: [[SUB:%.*]] = zext i64 [[OFFSET]] to i128 +; CHECK-NEXT: ret i128 [[SUB]] +; + %p2 = getelementptr nuw i8, ptr %p, i64 %offset + %p.addr = ptrtoaddr ptr %p to i64 + %p2.addr = ptrtoaddr ptr %p2 to i64 + %p.addr.ext = zext i64 %p.addr to i128 + %p2.addr.ext = zext i64 %p2.addr to i128 + %sub = sub i128 %p2.addr.ext, %p.addr.ext + ret i128 %sub +} + +define i64 @sub_zext_ptrtoaddr_addrsize(ptr addrspace(1) %p, i32 %offset) { +; CHECK-LABEL: define i64 @sub_zext_ptrtoaddr_addrsize( +; CHECK-SAME: ptr addrspace(1) [[P:%.*]], i32 [[OFFSET:%.*]]) { +; CHECK-NEXT: [[SUB:%.*]] = zext i32 [[OFFSET]] to i64 +; CHECK-NEXT: ret i64 [[SUB]] +; + %p2 = getelementptr nuw i8, ptr addrspace(1) %p, i32 %offset + %p.addr = ptrtoaddr ptr addrspace(1) %p to i32 + %p2.addr = ptrtoaddr ptr addrspace(1) %p2 to i32 + %p.addr.ext = zext i32 %p.addr to i64 + %p2.addr.ext = zext i32 %p2.addr to i64 + %sub = sub i64 %p2.addr.ext, %p.addr.ext + ret i64 %sub +} + +define i128 @sub_zext_ptrtoint_ptrtoaddr_addrsize(ptr addrspace(1) %p, i32 %offset) { +; CHECK-LABEL: define i128 @sub_zext_ptrtoint_ptrtoaddr_addrsize( +; CHECK-SAME: ptr addrspace(1) [[P:%.*]], i32 [[OFFSET:%.*]]) { +; CHECK-NEXT: [[P2:%.*]] = getelementptr nuw i8, ptr addrspace(1) [[P]], i32 [[OFFSET]] +; CHECK-NEXT: [[P_INT:%.*]] = ptrtoint ptr addrspace(1) [[P]] to i64 +; CHECK-NEXT: [[P2_ADDR:%.*]] = ptrtoaddr ptr addrspace(1) [[P2]] to i32 +; CHECK-NEXT: [[P_INT_EXT:%.*]] = zext i64 [[P_INT]] to i128 +; CHECK-NEXT: [[P2_ADDR_EXT:%.*]] = zext i32 [[P2_ADDR]] to i128 +; CHECK-NEXT: [[SUB:%.*]] = sub nsw i128 [[P2_ADDR_EXT]], [[P_INT_EXT]] +; CHECK-NEXT: ret i128 [[SUB]] +; + %p2 = getelementptr nuw i8, ptr addrspace(1) %p, i32 %offset + %p.int = ptrtoint ptr addrspace(1) %p to i64 + %p2.addr = ptrtoaddr ptr addrspace(1) %p2 to i32 + %p.int.ext = zext i64 %p.int to i128 + %p2.addr.ext = zext i32 %p2.addr to i128 + %sub = sub i128 %p2.addr.ext, %p.int.ext + ret i128 %sub +} diff --git a/llvm/test/Transforms/InstCombine/select_frexp.ll 
b/llvm/test/Transforms/InstCombine/select_frexp.ll index d025aed..ccfcca1 100644 --- a/llvm/test/Transforms/InstCombine/select_frexp.ll +++ b/llvm/test/Transforms/InstCombine/select_frexp.ll @@ -115,10 +115,10 @@ define float @test_select_frexp_no_const(float %x, float %y, i1 %cond) { define i32 @test_select_frexp_extract_exp(float %x, i1 %cond) { ; CHECK-LABEL: define i32 @test_select_frexp_extract_exp( ; CHECK-SAME: float [[X:%.*]], i1 [[COND:%.*]]) { -; CHECK-NEXT: [[SEL:%.*]] = select i1 [[COND]], float 1.000000e+00, float [[X]] -; CHECK-NEXT: [[FREXP:%.*]] = call { float, i32 } @llvm.frexp.f32.i32(float [[SEL]]) +; CHECK-NEXT: [[FREXP:%.*]] = call { float, i32 } @llvm.frexp.f32.i32(float [[X]]) ; CHECK-NEXT: [[FREXP_1:%.*]] = extractvalue { float, i32 } [[FREXP]], 1 -; CHECK-NEXT: ret i32 [[FREXP_1]] +; CHECK-NEXT: [[FREXP_2:%.*]] = select i1 [[COND]], i32 1, i32 [[FREXP_1]] +; CHECK-NEXT: ret i32 [[FREXP_2]] ; %sel = select i1 %cond, float 1.000000e+00, float %x %frexp = call { float, i32 } @llvm.frexp.f32.i32(float %sel) @@ -132,7 +132,7 @@ define float @test_select_frexp_fast_math_select(float %x, i1 %cond) { ; CHECK-SAME: float [[X:%.*]], i1 [[COND:%.*]]) { ; CHECK-NEXT: [[FREXP1:%.*]] = call { float, i32 } @llvm.frexp.f32.i32(float [[X]]) ; CHECK-NEXT: [[MANTISSA:%.*]] = extractvalue { float, i32 } [[FREXP1]], 0 -; CHECK-NEXT: [[SELECT_FREXP:%.*]] = select nnan ninf nsz i1 [[COND]], float 5.000000e-01, float [[MANTISSA]] +; CHECK-NEXT: [[SELECT_FREXP:%.*]] = select i1 [[COND]], float 5.000000e-01, float [[MANTISSA]] ; CHECK-NEXT: ret float [[SELECT_FREXP]] ; %sel = select nnan ninf nsz i1 %cond, float 1.000000e+00, float %x diff --git a/llvm/test/Transforms/InstCombine/sub-gep.ll b/llvm/test/Transforms/InstCombine/sub-gep.ll index 8eeaea1..ee70137 100644 --- a/llvm/test/Transforms/InstCombine/sub-gep.ll +++ b/llvm/test/Transforms/InstCombine/sub-gep.ll @@ -1,7 +1,7 @@ ; NOTE: Assertions have been autogenerated by utils/update_test_checks.py ; RUN: opt -S -passes=instcombine < %s | FileCheck %s -target datalayout = "e-p:64:64:64-p1:16:16:16-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f32:32:32-f64:64:64-v64:64:64-v128:128:128-a0:0:64-s0:64:64-f80:128:128-p2:32:32" +target datalayout = "e-p:64:64:64-p1:16:16:16-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f32:32:32-f64:64:64-v64:64:64-v128:128:128-a0:0:64-s0:64:64-f80:128:128-p2:32:32-p3:32:32:32:16" define i64 @test_inbounds(ptr %base, i64 %idx) { ; CHECK-LABEL: @test_inbounds( @@ -505,6 +505,23 @@ define i64 @negative_zext_ptrtoint_sub_ptrtoint_as2_nuw(i32 %offset) { ret i64 %D } +define i64 @negative_zext_ptrtoint_sub_zext_ptrtoint_as2_nuw_truncating(i32 %offset) { +; CHECK-LABEL: @negative_zext_ptrtoint_sub_zext_ptrtoint_as2_nuw_truncating( +; CHECK-NEXT: [[A:%.*]] = getelementptr nuw bfloat, ptr addrspace(2) @Arr_as2, i32 [[OFFSET:%.*]] +; CHECK-NEXT: [[A_IDX:%.*]] = ptrtoint ptr addrspace(2) [[A]] to i32 +; CHECK-NEXT: [[E:%.*]] = zext i32 [[A_IDX]] to i64 +; CHECK-NEXT: [[D:%.*]] = zext i16 ptrtoint (ptr addrspace(2) @Arr_as2 to i16) to i64 +; CHECK-NEXT: [[E1:%.*]] = sub nsw i64 [[E]], [[D]] +; CHECK-NEXT: ret i64 [[E1]] +; + %A = getelementptr nuw bfloat, ptr addrspace(2) @Arr_as2, i32 %offset + %B = ptrtoint ptr addrspace(2) %A to i32 + %C = zext i32 %B to i64 + %D = zext i16 ptrtoint (ptr addrspace(2) @Arr_as2 to i16) to i64 + %E = sub i64 %C, %D + ret i64 %E +} + define i64 @ptrtoint_sub_zext_ptrtoint_as2_inbounds_local(ptr addrspace(2) %p, i32 %offset) { ; CHECK-LABEL: @ptrtoint_sub_zext_ptrtoint_as2_inbounds_local( ; 
CHECK-NEXT: [[A:%.*]] = getelementptr inbounds bfloat, ptr addrspace(2) [[P:%.*]], i32 [[OFFSET:%.*]] @@ -614,6 +631,20 @@ define i64 @negative_zext_ptrtoint_sub_ptrtoint_as2_nuw_local(ptr addrspace(2) % ret i64 %D } +define i64 @zext_ptrtoint_sub_ptrtoint_as3_nuw_local(ptr addrspace(3) %p, i16 %offset) { +; CHECK-LABEL: @zext_ptrtoint_sub_ptrtoint_as3_nuw_local( +; CHECK-NEXT: [[SUB:%.*]] = zext i16 [[GEP_IDX:%.*]] to i64 +; CHECK-NEXT: ret i64 [[SUB]] +; + %gep = getelementptr nuw i8, ptr addrspace(3) %p, i16 %offset + %gep.int = ptrtoint ptr addrspace(3) %gep to i32 + %p.int = ptrtoint ptr addrspace(3) %p to i32 + %gep.int.ext = zext i32 %gep.int to i64 + %p.int.ext = zext i32 %p.int to i64 + %sub = sub i64 %gep.int.ext, %p.int.ext + ret i64 %sub +} + define i64 @test30(ptr %foo, i64 %i, i64 %j) { ; CHECK-LABEL: @test30( ; CHECK-NEXT: [[GEP1_IDX:%.*]] = shl nsw i64 [[I:%.*]], 2 diff --git a/llvm/test/Transforms/PhaseOrdering/always-inline-alloca-promotion.ll b/llvm/test/Transforms/PhaseOrdering/always-inline-alloca-promotion.ll index 63279b0..92ea2c6 100644 --- a/llvm/test/Transforms/PhaseOrdering/always-inline-alloca-promotion.ll +++ b/llvm/test/Transforms/PhaseOrdering/always-inline-alloca-promotion.ll @@ -35,9 +35,8 @@ define void @ham() #1 { ; CHECK-NEXT: [[SNORK_EXIT:.*:]] ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr inttoptr (i64 48 to ptr), align 16 ; CHECK-NEXT: [[TMP1:%.*]] = icmp sgt i64 [[TMP0]], 0 -; CHECK-NEXT: [[TMP2:%.*]] = tail call <vscale x 16 x float> @llvm.vector.insert.nxv16f32.nxv4f32(<vscale x 16 x float> zeroinitializer, <vscale x 4 x float> zeroinitializer, i64 0) -; CHECK-NEXT: [[SPEC_SELECT:%.*]] = select i1 [[TMP1]], <vscale x 16 x float> [[TMP2]], <vscale x 16 x float> undef -; CHECK-NEXT: [[TMP3:%.*]] = tail call <vscale x 4 x float> @llvm.vector.extract.nxv4f32.nxv16f32(<vscale x 16 x float> [[SPEC_SELECT]], i64 0) +; CHECK-NEXT: [[TMP2:%.*]] = tail call <vscale x 4 x float> @llvm.vector.extract.nxv4f32.nxv16f32(<vscale x 16 x float> undef, i64 0) +; CHECK-NEXT: [[TMP3:%.*]] = select i1 [[TMP1]], <vscale x 4 x float> zeroinitializer, <vscale x 4 x float> [[TMP2]] ; CHECK-NEXT: tail call void @llvm.aarch64.sme.mopa.nxv4f32(i32 0, <vscale x 4 x i1> zeroinitializer, <vscale x 4 x i1> zeroinitializer, <vscale x 4 x float> zeroinitializer, <vscale x 4 x float> [[TMP3]]) ; CHECK-NEXT: ret void ; diff --git a/llvm/test/Transforms/Util/dbg-user-of-aext.ll b/llvm/test/Transforms/Util/dbg-user-of-aext.ll index 9e7935e..b3d1b90 100644 --- a/llvm/test/Transforms/Util/dbg-user-of-aext.ll +++ b/llvm/test/Transforms/Util/dbg-user-of-aext.ll @@ -27,7 +27,7 @@ %struct.foo = type { i8, i64 } ; Function Attrs: noinline nounwind uwtable -define void @_Z1fbb3foo(i1 zeroext %b, i1 zeroext %frag, i8 %g.coerce0, i64 %g.coerce1) #0 !dbg !6 { +define void @_Z1fbb3foo(i1 zeroext %b, i1 zeroext %frag, i8 %g.coerce0, i64 %g.coerce1) !dbg !6 { entry: %g = alloca %struct.foo, align 8 %b.addr = alloca i8, align 1 @@ -51,10 +51,7 @@ entry: ; CHECK: ![[VAR_FRAG]] = !DILocalVariable(name: "frag" ; Function Attrs: nounwind readnone speculatable -declare void @llvm.dbg.declare(metadata, metadata, metadata) #1 - -attributes #0 = { noinline nounwind uwtable "correctly-rounded-divide-sqrt-fp-math"="false" "disable-tail-calls"="false" "less-precise-fpmad"="false" "frame-pointer"="all" "no-infs-fp-math"="false" "no-jump-tables"="false" "no-nans-fp-math"="false" "no-signed-zeros-fp-math"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-cpu"="x86-64" 
"target-features"="+fxsr,+mmx,+sse,+sse2,+x87" "unsafe-fp-math"="false" "use-soft-float"="false" } -attributes #1 = { nounwind readnone speculatable } +declare void @llvm.dbg.declare(metadata, metadata, metadata) !llvm.dbg.cu = !{!0} !llvm.module.flags = !{!3, !4} diff --git a/llvm/test/Transforms/Util/libcalls-fast-math-inf-loop.ll b/llvm/test/Transforms/Util/libcalls-fast-math-inf-loop.ll index ad23bf7..e9f0c8c 100644 --- a/llvm/test/Transforms/Util/libcalls-fast-math-inf-loop.ll +++ b/llvm/test/Transforms/Util/libcalls-fast-math-inf-loop.ll @@ -19,18 +19,18 @@ target datalayout = "e-m:e-i64:64-f80:128-n8:16:32:64-S128" target triple = "x86_64-unknown-unknown" ; Function Attrs: nounwind -define float @fn(float %f) #0 { +define float @fn(float %f) { ; CHECK: define float @fn( ; CHECK: call fast float @expf( %f.addr = alloca float, align 4 store float %f, ptr %f.addr, align 4, !tbaa !1 %1 = load float, ptr %f.addr, align 4, !tbaa !1 - %call = call fast float @expf(float %1) #3 + %call = call fast float @expf(float %1) ret float %call } ; Function Attrs: inlinehint nounwind readnone -define available_externally float @expf(float %x) #1 { +define available_externally float @expf(float %x) { ; CHECK: define available_externally float @expf( ; CHECK: fpext float ; CHECK: call fast double @exp( @@ -39,17 +39,13 @@ define available_externally float @expf(float %x) #1 { store float %x, ptr %x.addr, align 4, !tbaa !1 %1 = load float, ptr %x.addr, align 4, !tbaa !1 %conv = fpext float %1 to double - %call = call fast double @exp(double %conv) #3 + %call = call fast double @exp(double %conv) %conv1 = fptrunc double %call to float ret float %conv1 } ; Function Attrs: nounwind readnone -declare double @exp(double) #2 - -attributes #0 = { nounwind "correctly-rounded-divide-sqrt-fp-math"="false" "disable-tail-calls"="false" "less-precise-fpmad"="false" "frame-pointer"="none" "no-infs-fp-math"="false" "no-jump-tables"="false" "no-nans-fp-math"="false" "no-signed-zeros-fp-math"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-features"="+mmx,+sse,+sse2,+x87" "unsafe-fp-math"="false" "use-soft-float"="false" } -attributes #1 = { inlinehint nounwind readnone "correctly-rounded-divide-sqrt-fp-math"="false" "disable-tail-calls"="false" "less-precise-fpmad"="false" "frame-pointer"="none" "no-infs-fp-math"="false" "no-jump-tables"="false" "no-nans-fp-math"="false" "no-signed-zeros-fp-math"="false" "no-trapping-math"="false" "stack-protector-buffer-size"="8" "target-features"="+mmx,+sse,+sse2,+x87" "unsafe-fp-math"="false" "use-soft-float"="false" } -attributes #2 = { nounwind readnone } +declare double @exp(double) !llvm.ident = !{!0} diff --git a/llvm/test/Verifier/atomics.ll b/llvm/test/Verifier/atomics.ll index f835b98..17bf5a0 100644 --- a/llvm/test/Verifier/atomics.ll +++ b/llvm/test/Verifier/atomics.ll @@ -1,14 +1,15 @@ ; RUN: not opt -passes=verify < %s 2>&1 | FileCheck %s +; CHECK: atomic store operand must have integer, pointer, floating point, or vector type! +; CHECK: atomic load operand must have integer, pointer, floating point, or vector type! -; CHECK: atomic store operand must have integer, pointer, or floating point type! -; CHECK: atomic load operand must have integer, pointer, or floating point type! 
+%ty = type { i32 }; -define void @foo(ptr %P, <1 x i64> %v) { - store atomic <1 x i64> %v, ptr %P unordered, align 8 +define void @foo(ptr %P, %ty %v) { + store atomic %ty %v, ptr %P unordered, align 8 ret void } -define <1 x i64> @bar(ptr %P) { - %v = load atomic <1 x i64>, ptr %P unordered, align 8 - ret <1 x i64> %v +define %ty @bar(ptr %P) { + %v = load atomic %ty, ptr %P unordered, align 8 + ret %ty %v } diff --git a/llvm/test/tools/llvm-ir2vec/embeddings-flowaware.ll b/llvm/test/tools/llvm-ir2vec/embeddings-flowaware.ll index b2362f8..ade228d 100644 --- a/llvm/test/tools/llvm-ir2vec/embeddings-flowaware.ll +++ b/llvm/test/tools/llvm-ir2vec/embeddings-flowaware.ll @@ -49,7 +49,7 @@ entry: ; CHECK-FUNC-LEVEL-ABC: Function: abc ; CHECK-FUNC-LEVEL-NEXT-ABC: [ 3630.00 3672.00 3714.00 ] -; CHECK-FUNC-DEF: Error: Function 'def' not found +; CHECK-FUNC-DEF: error: Function 'def' not found ; CHECK-BB-LEVEL: Function: abc ; CHECK-BB-LEVEL-NEXT: entry: [ 3630.00 3672.00 3714.00 ] diff --git a/llvm/test/tools/llvm-ir2vec/embeddings-symbolic.ll b/llvm/test/tools/llvm-ir2vec/embeddings-symbolic.ll index f9aa108..9d60e12 100644 --- a/llvm/test/tools/llvm-ir2vec/embeddings-symbolic.ll +++ b/llvm/test/tools/llvm-ir2vec/embeddings-symbolic.ll @@ -49,7 +49,7 @@ entry: ; CHECK-FUNC-LEVEL-ABC: Function: abc ; CHECK-FUNC-LEVEL-NEXT-ABC: [ 878.00 889.00 900.00 ] -; CHECK-FUNC-DEF: Error: Function 'def' not found +; CHECK-FUNC-DEF: error: Function 'def' not found ; CHECK-BB-LEVEL: Function: abc ; CHECK-BB-LEVEL-NEXT: entry: [ 878.00 889.00 900.00 ] diff --git a/llvm/test/tools/llvm-ir2vec/embeddings-symbolic.mir b/llvm/test/tools/llvm-ir2vec/embeddings-symbolic.mir index e5f78bf..ef835fe 100644 --- a/llvm/test/tools/llvm-ir2vec/embeddings-symbolic.mir +++ b/llvm/test/tools/llvm-ir2vec/embeddings-symbolic.mir @@ -67,7 +67,7 @@ body: | # CHECK-FUNC-LEVEL-ADD-NEXT: Function vector: [ 26.50 27.10 27.70 ] # CHECK-FUNC-LEVEL-ADD-NOT: simple_function -# CHECK-FUNC-MISSING: Error: Function 'missing_function' not found +# CHECK-FUNC-MISSING: error: Function 'missing_function' not found # CHECK-BB-LEVEL: MIR2Vec embeddings for machine function add_function: # CHECK-BB-LEVEL-NEXT: Basic block vectors: diff --git a/llvm/test/tools/llvm-ir2vec/error-handling.ll b/llvm/test/tools/llvm-ir2vec/error-handling.ll index b944ea0..8e9e455 100644 --- a/llvm/test/tools/llvm-ir2vec/error-handling.ll +++ b/llvm/test/tools/llvm-ir2vec/error-handling.ll @@ -10,4 +10,4 @@ entry: } ; CHECK-NO-VOCAB: error: IR2Vec vocabulary file path not specified; You may need to set it using --ir2vec-vocab-path -; CHECK-FUNC-NOT-FOUND: Error: Function 'nonexistent' not found +; CHECK-FUNC-NOT-FOUND: error: Function 'nonexistent' not found diff --git a/llvm/test/tools/llvm-ir2vec/error-handling.mir b/llvm/test/tools/llvm-ir2vec/error-handling.mir index 154078c..caec454c 100644 --- a/llvm/test/tools/llvm-ir2vec/error-handling.mir +++ b/llvm/test/tools/llvm-ir2vec/error-handling.mir @@ -31,11 +31,11 @@ body: | $eax = COPY %0 RET 0, $eax -# CHECK-NO-VOCAB: Error: Failed to load MIR2Vec vocabulary - MIR2Vec vocabulary file path not specified; set it using --mir2vec-vocab-path +# CHECK-NO-VOCAB: error: Failed to load MIR2Vec vocabulary - MIR2Vec vocabulary file path not specified; set it using --mir2vec-vocab-path -# CHECK-VOCAB-NOT-FOUND: Error: Failed to load MIR2Vec vocabulary +# CHECK-VOCAB-NOT-FOUND: error: Failed to load MIR2Vec vocabulary # CHECK-VOCAB-NOT-FOUND: No such file or directory -# CHECK-INVALID-VOCAB: Error: Failed to load MIR2Vec 
vocabulary - Missing 'Opcodes' section in vocabulary file +# CHECK-INVALID-VOCAB: error: Failed to load MIR2Vec vocabulary - Missing 'Opcodes' section in vocabulary file -# CHECK-FUNC-NOT-FOUND: Error: Function 'nonexistent_function' not found +# CHECK-FUNC-NOT-FOUND: error: Function 'nonexistent_function' not found diff --git a/llvm/test/tools/opt/no-target-machine.ll b/llvm/test/tools/opt/no-target-machine.ll new file mode 100644 index 0000000..4f07c81 --- /dev/null +++ b/llvm/test/tools/opt/no-target-machine.ll @@ -0,0 +1,17 @@ +; Report error when pass requires TargetMachine. +; RUN: not opt -passes=atomic-expand -disable-output %s 2>&1 | FileCheck %s +; RUN: not opt -passes=codegenprepare -disable-output %s 2>&1 | FileCheck %s +; RUN: not opt -passes=complex-deinterleaving -disable-output %s 2>&1 | FileCheck %s +; RUN: not opt -passes=dwarf-eh-prepare -disable-output %s 2>&1 | FileCheck %s +; RUN: not opt -passes=expand-large-div-rem -disable-output %s 2>&1 | FileCheck %s +; RUN: not opt -passes=expand-memcmp -disable-output %s 2>&1 | FileCheck %s +; RUN: not opt -passes=indirectbr-expand -disable-output %s 2>&1 | FileCheck %s +; RUN: not opt -passes=interleaved-access -disable-output %s 2>&1 | FileCheck %s +; RUN: not opt -passes=interleaved-load-combine -disable-output %s 2>&1 | FileCheck %s +; RUN: not opt -passes=safe-stack -disable-output %s 2>&1 | FileCheck %s +; RUN: not opt -passes=select-optimize -disable-output %s 2>&1 | FileCheck %s +; RUN: not opt -passes=stack-protector -disable-output %s 2>&1 | FileCheck %s +; RUN: not opt -passes=typepromotion -disable-output %s 2>&1 | FileCheck %s +; RUN: not opt -passes='expand-fp<O1>' -disable-output %s 2>&1 | FileCheck %s +define void @foo() { ret void } +; CHECK: pass '{{.+}}' requires TargetMachine diff --git a/llvm/tools/llvm-ir2vec/llvm-ir2vec.cpp b/llvm/tools/llvm-ir2vec/llvm-ir2vec.cpp index c41cf205..a723d37 100644 --- a/llvm/tools/llvm-ir2vec/llvm-ir2vec.cpp +++ b/llvm/tools/llvm-ir2vec/llvm-ir2vec.cpp @@ -77,6 +77,8 @@ namespace llvm { +static const char *ToolName = "llvm-ir2vec"; + // Common option category for options shared between IR2Vec and MIR2Vec static cl::OptionCategory CommonCategory("Common Options", "Options applicable to both IR2Vec " @@ -258,7 +260,8 @@ public: /// Generate embeddings for the entire module void generateEmbeddings(raw_ostream &OS) const { if (!Vocab->isValid()) { - OS << "Error: Vocabulary is not valid. IR2VecTool not initialized.\n"; + WithColor::error(errs(), ToolName) + << "Vocabulary is not valid. 
IR2VecTool not initialized.\n"; return; } @@ -277,8 +280,8 @@ public: assert(Vocab->isValid() && "Vocabulary is not valid"); auto Emb = Embedder::create(IR2VecEmbeddingKind, F, *Vocab); if (!Emb) { - OS << "Error: Failed to create embedder for function " << F.getName() - << "\n"; + WithColor::error(errs(), ToolName) + << "Failed to create embedder for function " << F.getName() << "\n"; return; } @@ -351,13 +354,14 @@ private: public: explicit MIR2VecTool(MachineModuleInfo &MMI) : MMI(MMI) {} - /// Initialize the MIR2Vec vocabulary + /// Initialize MIR2Vec vocabulary bool initializeVocabulary(const Module &M) { MIR2VecVocabProvider Provider(MMI); auto VocabOrErr = Provider.getVocabulary(M); if (!VocabOrErr) { - errs() << "Error: Failed to load MIR2Vec vocabulary - " - << toString(VocabOrErr.takeError()) << "\n"; + WithColor::error(errs(), ToolName) + << "Failed to load MIR2Vec vocabulary - " + << toString(VocabOrErr.takeError()) << "\n"; return false; } Vocab = std::make_unique<MIRVocabulary>(std::move(*VocabOrErr)); @@ -367,7 +371,7 @@ public: /// Generate embeddings for all machine functions in the module void generateEmbeddings(const Module &M, raw_ostream &OS) const { if (!Vocab) { - OS << "Error: Vocabulary not initialized.\n"; + WithColor::error(errs(), ToolName) << "Vocabulary not initialized.\n"; return; } @@ -377,7 +381,8 @@ public: MachineFunction *MF = MMI.getMachineFunction(F); if (!MF) { - errs() << "Warning: No MachineFunction for " << F.getName() << "\n"; + WithColor::warning(errs(), ToolName) + << "No MachineFunction for " << F.getName() << "\n"; continue; } @@ -388,13 +393,14 @@ public: /// Generate embeddings for a specific machine function void generateEmbeddings(MachineFunction &MF, raw_ostream &OS) const { if (!Vocab) { - OS << "Error: Vocabulary not initialized.\n"; + WithColor::error(errs(), ToolName) << "Vocabulary not initialized.\n"; return; } auto Emb = MIREmbedder::create(MIR2VecKind::Symbolic, MF, *Vocab); if (!Emb) { - errs() << "Error: Failed to create embedder for " << MF.getName() << "\n"; + WithColor::error(errs(), ToolName) + << "Failed to create embedder for " << MF.getName() << "\n"; return; } @@ -456,7 +462,8 @@ int main(int argc, char **argv) { std::error_code EC; raw_fd_ostream OS(OutputFilename, EC); if (EC) { - errs() << "Error opening output file: " << EC.message() << "\n"; + WithColor::error(errs(), ToolName) + << "opening output file: " << EC.message() << "\n"; return 1; } @@ -472,13 +479,13 @@ int main(int argc, char **argv) { LLVMContext Context; std::unique_ptr<Module> M = parseIRFile(InputFilename, Err, Context); if (!M) { - Err.print(argv[0], errs()); + Err.print(ToolName, errs()); return 1; } if (Error Err = processModule(*M, OS)) { handleAllErrors(std::move(Err), [&](const ErrorInfoBase &EIB) { - errs() << "Error: " << EIB.message() << "\n"; + WithColor::error(errs(), ToolName) << EIB.message() << "\n"; }); return 1; } @@ -499,7 +506,7 @@ int main(int argc, char **argv) { auto MIR = createMIRParserFromFile(InputFilename, Err, Context); if (!MIR) { - Err.print(argv[0], WithColor::error(errs(), argv[0])); + Err.print(ToolName, errs()); return 1; } @@ -511,7 +518,7 @@ int main(int argc, char **argv) { TheTriple.setTriple(sys::getDefaultTargetTriple()); auto TMOrErr = codegen::createTargetMachineForTriple(TheTriple.str()); if (!TMOrErr) { - Err.print(argv[0], WithColor::error(errs(), argv[0])); + Err.print(ToolName, errs()); exit(1); } TM = std::move(*TMOrErr); @@ -520,14 +527,14 @@ int main(int argc, char **argv) { std::unique_ptr<Module> M = 
MIR->parseIRModule(SetDataLayout); if (!M) { - Err.print(argv[0], WithColor::error(errs(), argv[0])); + Err.print(ToolName, errs()); return 1; } // Parse machine functions auto MMI = std::make_unique<MachineModuleInfo>(TM.get()); if (!MMI || MIR->parseMachineFunctions(*M, *MMI)) { - Err.print(argv[0], WithColor::error(errs(), argv[0])); + Err.print(ToolName, errs()); return 1; } @@ -547,13 +554,15 @@ int main(int argc, char **argv) { // Process single function Function *F = M->getFunction(FunctionName); if (!F) { - errs() << "Error: Function '" << FunctionName << "' not found\n"; + WithColor::error(errs(), ToolName) + << "Function '" << FunctionName << "' not found\n"; return 1; } MachineFunction *MF = MMI->getMachineFunction(*F); if (!MF) { - errs() << "Error: No MachineFunction for " << FunctionName << "\n"; + WithColor::error(errs(), ToolName) + << "No MachineFunction for " << FunctionName << "\n"; return 1; } diff --git a/llvm/unittests/Support/GlobPatternTest.cpp b/llvm/unittests/Support/GlobPatternTest.cpp index 58fd767..872a21e 100644 --- a/llvm/unittests/Support/GlobPatternTest.cpp +++ b/llvm/unittests/Support/GlobPatternTest.cpp @@ -329,6 +329,72 @@ TEST_F(GlobPatternTest, PrefixSuffix) { EXPECT_EQ("cd", Pat->suffix()); } +TEST_F(GlobPatternTest, Substr) { + auto Pat = GlobPattern::create(""); + ASSERT_TRUE((bool)Pat); + EXPECT_EQ("", Pat->longest_substr()); + + Pat = GlobPattern::create("abcd"); + ASSERT_TRUE((bool)Pat); + EXPECT_EQ("", Pat->longest_substr()); + + Pat = GlobPattern::create("a*bcd"); + ASSERT_TRUE((bool)Pat); + EXPECT_EQ("", Pat->longest_substr()); + + Pat = GlobPattern::create("*abcd"); + ASSERT_TRUE((bool)Pat); + EXPECT_EQ("", Pat->longest_substr()); + + Pat = GlobPattern::create("abcd*"); + ASSERT_TRUE((bool)Pat); + EXPECT_EQ("", Pat->longest_substr()); + + Pat = GlobPattern::create("a*bc*d"); + ASSERT_TRUE((bool)Pat); + EXPECT_EQ("bc", Pat->longest_substr()); + + Pat = GlobPattern::create("a*bc*def*g"); + ASSERT_TRUE((bool)Pat); + EXPECT_EQ("def", Pat->longest_substr()); + + Pat = GlobPattern::create("a*bcd*ef*g"); + ASSERT_TRUE((bool)Pat); + EXPECT_EQ("bcd", Pat->longest_substr()); + + Pat = GlobPattern::create("a*bcd*efg*h"); + ASSERT_TRUE((bool)Pat); + EXPECT_EQ("bcd", Pat->longest_substr()); + + Pat = GlobPattern::create("a*bcd[ef]g*h"); + ASSERT_TRUE((bool)Pat); + EXPECT_EQ("bcd", Pat->longest_substr()); + + Pat = GlobPattern::create("a*bc[d]efg*h"); + ASSERT_TRUE((bool)Pat); + EXPECT_EQ("efg", Pat->longest_substr()); + + Pat = GlobPattern::create("a*bc[]]efg*h"); + ASSERT_TRUE((bool)Pat); + EXPECT_EQ("efg", Pat->longest_substr()); + + Pat = GlobPattern::create("a*bcde\\fg*h"); + ASSERT_TRUE((bool)Pat); + EXPECT_EQ("bcde", Pat->longest_substr()); + + Pat = GlobPattern::create("a*bcde\\[fg*h"); + ASSERT_TRUE((bool)Pat); + EXPECT_EQ("bcde", Pat->longest_substr()); + + Pat = GlobPattern::create("a*bcde?fg*h"); + ASSERT_TRUE((bool)Pat); + EXPECT_EQ("bcde", Pat->longest_substr()); + + Pat = GlobPattern::create("a*bcdef{g}*h"); + ASSERT_TRUE((bool)Pat); + EXPECT_EQ("bcdef", Pat->longest_substr()); +} + TEST_F(GlobPatternTest, Pathological) { std::string P, S(40, 'a'); StringRef Pieces[] = {"a*", "[ba]*", "{b*,a*}*"}; diff --git a/llvm/utils/gn/secondary/clang/lib/AST/BUILD.gn b/llvm/utils/gn/secondary/clang/lib/AST/BUILD.gn index 9981d10..4da907c 100644 --- a/llvm/utils/gn/secondary/clang/lib/AST/BUILD.gn +++ b/llvm/utils/gn/secondary/clang/lib/AST/BUILD.gn @@ -121,6 +121,7 @@ static_library("AST") { "ExternalASTMerger.cpp", "ExternalASTSource.cpp", 
"FormatString.cpp", + "InferAlloc.cpp", "InheritViz.cpp", "ItaniumCXXABI.cpp", "ItaniumMangle.cpp", diff --git a/llvm/utils/lit/lit/run.py b/llvm/utils/lit/lit/run.py index 62070e8..55de914 100644 --- a/llvm/utils/lit/lit/run.py +++ b/llvm/utils/lit/lit/run.py @@ -137,6 +137,10 @@ class Run(object): "Raised process limit from %d to %d" % (soft_limit, desired_limit) ) except Exception as ex: - # Warn, unless this is Windows or z/OS, in which case this is expected. - if os.name != "nt" and platform.system() != "OS/390": + # Warn, unless this is Windows, z/OS, or Cygwin in which case this is expected. + if ( + os.name != "nt" + and platform.system() != "OS/390" + and platform.sys.platform != "cygwin" + ): self.lit_config.warning("Failed to raise process limit: %s" % ex) diff --git a/llvm/utils/lit/tests/shtest-ulimit.py b/llvm/utils/lit/tests/shtest-ulimit.py index dadde70..09cd475 100644 --- a/llvm/utils/lit/tests/shtest-ulimit.py +++ b/llvm/utils/lit/tests/shtest-ulimit.py @@ -3,7 +3,7 @@ # ulimit does not work on non-POSIX platforms. # Solaris for some reason does not respect ulimit -n, so mark it unsupported # as well. -# UNSUPPORTED: system-windows, system-solaris +# UNSUPPORTED: system-windows, system-cygwin, system-solaris # RUN: %{python} %S/Inputs/shtest-ulimit/print_limits.py | grep RLIMIT_NOFILE \ # RUN: | sed -n -e 's/.*=//p' | tr -d '\n' > %t.nofile_limit |