Diffstat (limited to 'llvm')
254 files changed, 13258 insertions, 4962 deletions
diff --git a/llvm/docs/HowToReleaseLLVM.rst b/llvm/docs/HowToReleaseLLVM.rst index f3792e3..1795d3a 100644 --- a/llvm/docs/HowToReleaseLLVM.rst +++ b/llvm/docs/HowToReleaseLLVM.rst @@ -116,13 +116,11 @@ Branch the Git trunk using the following procedure: #. Bump the version in trunk to N.0.0git with the script in ``llvm/utils/release/bump-version.py``, and tag the commit with llvmorg-N-init. - If ``X`` is the version to be released, then ``N`` is ``X + 1``. + If ``X`` is the version to be released, then ``N`` is ``X + 1``. :: -:: - - $ git tag -sa llvmorg-N-init + $ git tag -sa llvmorg-N-init -4. Clear the release notes in trunk with the script in +#. Clear the release notes in trunk with the script in ``llvm/utils/release/clear-release-notes.py``. #. Create the release branch from the last known good revision from before the @@ -145,10 +143,12 @@ Tag release candidates: $ git tag -sa llvmorg-X.Y.Z-rcN The pre-packaged source tarballs will be automatically generated via the -"Release Sources" workflow on GitHub. This workflow will create an artifact -containing all the release tarballs and the artifact attestation. The -Release Manager should download the artifact, verify the tarballs, sign them, -and then upload them to the release page. +`Release Sources +<https://github.com/llvm/llvm-project/actions/workflows/release-sources.yml>`_ +workflow on GitHub. This workflow will create an artifact containing all the +release tarballs and the artifact attestation. The Release Manager should +download the artifact, verify the tarballs, sign them, and then upload them to +the release page. :: @@ -217,8 +217,9 @@ consistently validated and released binaries for their targets/OSs. To contact them, you should post on the `Discourse forums (Project Infrastructure - Release Testers). <https://discourse.llvm.org/c/infrastructure/release-testers/66>`_ -The official testers list is in the file ``RELEASE_TESTERS.TXT``, in the ``LLVM`` -repository. +The official testers list is in the file `RELEASE_TESTERS.TXT +<https://github.com/llvm/llvm-project/blob/main/llvm/RELEASE_TESTERS.TXT>`_, in +the LLVM repository. Community Testing ----------------- @@ -276,7 +277,8 @@ from the Milestone. Debugging can continue, but on trunk. Backport Requests ----------------- -Instructions for requesting a backport to a stable branch can be found :doc:`here <GitHub>`. +Instructions for requesting a backport to a stable branch can be found +:ref:`here <backporting>`. Triaging Bug Reports for Releases --------------------------------- @@ -301,26 +303,19 @@ This section describes how to triage bug reports: using the /cherry-pick or /branch comments if this has not been done already. #. If a bug has been fixed and has a pull request created for backporting it, - then update its status to "Needs Review" and notify a knowledgeable reviewer. - Usually you will want to notify the person who approved the patch in Phabricator, - but you may use your best judgement on who a good reviewer would be. Once - you have identified the reviewer(s), assign the issue to them and mention - them (i.e @username) in a comment and ask them if the patch is safe to backport. - You should also review the bug yourself to ensure that it meets the requirements - for committing to the release branch. + then update its status to "Needs Review" and notify a knowledgeable + reviewer. Usually you will want to notify the person who approved the + patch, but you may use your best judgement on who a good reviewer would be. 
+ Once you have identified the reviewer(s), assign the issue to them and + mention them (i.e., @username) in a comment and ask them if the patch is safe + to backport. You should also review the bug yourself to ensure that it + meets the requirements for committing to the release branch. #. Once a bug has been reviewed, add the release:reviewed label and update the issue's status to "Needs Merge". Check the pull request associated with the issue. If all the tests pass, then the pull request can be merged. If not, then add a comment on the issue asking someone to take a look at the failures. -#. Once the pull request has been merged push it to the official release branch - with the script ``llvm/utils/git/sync-release-repo.sh``. - - Then add a comment to the issue stating that the fix has been merged along with - the git hashes from the release branch. Add the release:merged label to the issue - and close it. - Release Patch Rules ------------------- @@ -364,9 +359,8 @@ Update Documentation Review the documentation in the release branch and ensure that it is up to date. The "Release Notes" must be updated to reflect new features, bug fixes, new known issues, and changes in the list of supported platforms. -The "Getting Started Guide" should be updated to reflect the new release -version number tag available from Subversion and changes in basic system -requirements. +The :doc:`GettingStarted` page should be updated to reflect the new release +version number tag and changes in basic system requirements. .. _tag: @@ -386,7 +380,8 @@ Update the LLVM Website The website must be updated before the release announcement is sent out. Here is what to do: -#. Check out the ``www-releases`` module from GitHub. +#. Check out the `www-releases <https://github.com/llvm/www-releases>`_ repo + from GitHub. #. Create a new sub-directory ``X.Y.Z`` in the releases directory. diff --git a/llvm/docs/PDB/HashTable.rst b/llvm/docs/PDB/HashTable.rst index 581ec59..7420510 100644 --- a/llvm/docs/PDB/HashTable.rst +++ b/llvm/docs/PDB/HashTable.rst @@ -17,8 +17,8 @@ a consumer to read a list of values and reconstruct the hash table on the fly. The serialization format supports hash tables of arbitrarily large size and capacity, as well as value types and hash functions. The only supported key value type is a uint32. The only requirement is that the producer and consumer -agree on the hash function. As such, the hash function can is not discussed -further in this document, it is assumed that for a particular instance of a PDB +agree on the hash function. As such, the hash function is not discussed +further in this document. It is assumed that for a particular instance of a PDB file hash table, the appropriate hash function is being used.
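The rewritten HashTable.rst paragraph above hinges on one invariant: the hash function is agreed on out of band and is never serialized. Below is a minimal C++ sketch of that idea, an editor's illustration rather than the in-tree PDB reader, with the serialized form simplified to plain uint32 key/value records:

  #include <cstdint>
  #include <unordered_map>
  #include <utility>
  #include <vector>

  // Producer and consumer must hard-code the same function; only the
  // (key, value) records travel in the file, never the hash itself.
  struct AgreedHash {
    size_t operator()(uint32_t Key) const { return Key * 2654435761u; }
  };

  std::unordered_map<uint32_t, uint32_t, AgreedHash>
  rebuildTable(const std::vector<std::pair<uint32_t, uint32_t>> &Records) {
    std::unordered_map<uint32_t, uint32_t, AgreedHash> Table;
    for (const auto &[Key, Value] : Records)
      Table.emplace(Key, Value); // hashing happens here, on the fly
    return Table;
  }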
On-Disk Format diff --git a/llvm/examples/CMakeLists.txt b/llvm/examples/CMakeLists.txt index 74613bd..b10a94c 100644 --- a/llvm/examples/CMakeLists.txt +++ b/llvm/examples/CMakeLists.txt @@ -8,6 +8,7 @@ add_subdirectory(ModuleMaker) add_subdirectory(OrcV2Examples) add_subdirectory(SpeculativeJIT) add_subdirectory(Bye) +add_subdirectory(OptSubcommand) if(LLVM_ENABLE_EH AND (NOT WIN32) AND (NOT "${LLVM_NATIVE_ARCH}" STREQUAL "ARM")) add_subdirectory(ExceptionDemo) diff --git a/llvm/examples/Kaleidoscope/Chapter8/toy.cpp b/llvm/examples/Kaleidoscope/Chapter8/toy.cpp index 739b895..1575211 100644 --- a/llvm/examples/Kaleidoscope/Chapter8/toy.cpp +++ b/llvm/examples/Kaleidoscope/Chapter8/toy.cpp @@ -1228,7 +1228,8 @@ int main() { TheModule->setTargetTriple(Triple(TargetTriple)); std::string Error; - auto Target = TargetRegistry::lookupTarget(TargetTriple, Error); + auto Target = + TargetRegistry::lookupTarget(TheModule->getTargetTriple(), Error); // Print an error and exit if we couldn't find the requested target. // This generally occurs if we've forgotten to initialise the diff --git a/llvm/examples/OptSubcommand/CMakeLists.txt b/llvm/examples/OptSubcommand/CMakeLists.txt new file mode 100644 index 0000000..debc948 --- /dev/null +++ b/llvm/examples/OptSubcommand/CMakeLists.txt @@ -0,0 +1,19 @@ +# Set the .td file to be processed for this target. +set(LLVM_TARGET_DEFINITIONS Opts.td) + +tablegen(LLVM Opts.inc -gen-opt-parser-defs) +add_public_tablegen_target(HelloSubTableGen) + +set(LLVM_LINK_COMPONENTS + Support + Option + ) + +add_llvm_example(OptSubcommand + llvm-hello-sub.cpp + ) + +target_include_directories(OptSubcommand + PRIVATE + ${CMAKE_CURRENT_BINARY_DIR} + ) diff --git a/llvm/examples/OptSubcommand/Opts.td b/llvm/examples/OptSubcommand/Opts.td new file mode 100644 index 0000000..7c980ee --- /dev/null +++ b/llvm/examples/OptSubcommand/Opts.td @@ -0,0 +1,18 @@ +include "llvm/Option/OptParser.td" + +def sc_foo : SubCommand<"foo", "HelpText for SubCommand foo.">; + +def sc_bar : SubCommand<"bar", "HelpText for SubCommand bar.", + "OptSubcommand bar <options>">; + +def help : Flag<["--"], "help">, + HelpText<"OptSubcommand <subcommand> <options>">; + +def version : Flag<["-"], "version">, + HelpText<"Toplevel Display the version number">; + +def uppercase : Flag<["-"], "uppercase", [sc_foo, sc_bar]>, + HelpText<"Print in uppercase">; + +def lowercase : Flag<["-"], "lowercase", [sc_foo]>, + HelpText<"Print in lowercase">; diff --git a/llvm/examples/OptSubcommand/llvm-hello-sub.cpp b/llvm/examples/OptSubcommand/llvm-hello-sub.cpp new file mode 100644 index 0000000..8071f56 --- /dev/null +++ b/llvm/examples/OptSubcommand/llvm-hello-sub.cpp @@ -0,0 +1,137 @@ +#include "llvm/ADT/ArrayRef.h" +#include "llvm/ADT/StringRef.h" +#include "llvm/Option/ArgList.h" +#include "llvm/Option/OptTable.h" +#include "llvm/Support/Error.h" +#include "llvm/Support/InitLLVM.h" +#include "llvm/Support/raw_ostream.h" + +using namespace llvm; +using namespace llvm::opt; + +namespace { +enum ID { + OPT_INVALID = 0, +#define OPTION(PREFIXES, NAME, ID, KIND, GROUP, ALIAS, ALIASARGS, FLAGS, \ + VISIBILITY, PARAM, HELPTEXT, HELPTEXTSFORVARIANTS, METAVAR, \ + VALUES, SUBCOMMANDIDS_OFFSET) \ + OPT_##ID, +#include "Opts.inc" +#undef OPTION +}; +#define OPTTABLE_STR_TABLE_CODE +#include "Opts.inc" +#undef OPTTABLE_STR_TABLE_CODE + +#define OPTTABLE_PREFIXES_TABLE_CODE +#include "Opts.inc" +#undef OPTTABLE_PREFIXES_TABLE_CODE + +#define OPTTABLE_SUBCOMMAND_IDS_TABLE_CODE +#include "Opts.inc" +#undef 
OPTTABLE_SUBCOMMAND_IDS_TABLE_CODE + +#define OPTTABLE_SUBCOMMANDS_CODE +#include "Opts.inc" +#undef OPTTABLE_SUBCOMMANDS_CODE + +static constexpr OptTable::Info InfoTable[] = { +#define OPTION(...) LLVM_CONSTRUCT_OPT_INFO(__VA_ARGS__), +#include "Opts.inc" +#undef OPTION +}; + +class HelloSubOptTable : public GenericOptTable { +public: + HelloSubOptTable() + : GenericOptTable(OptionStrTable, OptionPrefixesTable, InfoTable, + /*IgnoreCase=*/false, OptionSubCommands, + OptionSubCommandIDsTable) {} +}; +} // namespace + +int main(int argc, char **argv) { + InitLLVM X(argc, argv); + HelloSubOptTable T; + unsigned MissingArgIndex, MissingArgCount; + + auto HandleMultipleSubcommands = [](ArrayRef<StringRef> SubCommands) { + assert(SubCommands.size() > 1); + llvm::errs() << "error: more than one subcommand passed [\n"; + for (auto SC : SubCommands) + llvm::errs() << " `" << SC << "`\n"; + llvm::errs() << "]\n"; + llvm::errs() << "See --help.\n"; + exit(1); + }; + + auto HandleOtherPositionals = [](ArrayRef<StringRef> Positionals) { + assert(!Positionals.empty()); + llvm::errs() << "error: unknown positional argument(s) [\n"; + for (auto SC : Positionals) + llvm::errs() << " `" << SC << "`\n"; + llvm::errs() << "]\n"; + llvm::errs() << "See --help.\n"; + exit(1); + }; + + InputArgList Args = T.ParseArgs(ArrayRef(argv + 1, argc - 1), MissingArgIndex, + MissingArgCount); + + StringRef SubCommand = Args.getSubCommand( + T.getSubCommands(), HandleMultipleSubcommands, HandleOtherPositionals); + // Handle help. When the help option is found, ignore all other options and exit + after printing help. + + if (Args.hasArg(OPT_help)) { + T.printHelp(llvm::outs(), "llvm-hello-sub [subcommand] [options]", + "LLVM Hello SubCommand Example", false, false, Visibility(), + SubCommand); + return 0; + } + + auto HandleSubCommandArg = [&](ID OptionType) { + if (!Args.hasArg(OptionType)) + return false; + auto O = T.getOption(OptionType); + if (!O.isRegisteredSC(SubCommand)) { + llvm::errs() << "Option [" << O.getName() + << "] is not valid for SubCommand [" << SubCommand << "]\n"; + return false; + } + return true; + }; + + bool HasUnknownOptions = false; + for (const Arg *A : Args.filtered(OPT_UNKNOWN)) { + HasUnknownOptions = true; + llvm::errs() << "Unknown option `" << A->getAsString(Args) << "'\n"; + } + if (HasUnknownOptions) { + llvm::errs() << "See `OptSubcommand --help`.\n"; + return 1; + } + if (SubCommand.empty()) { + if (Args.hasArg(OPT_version)) + llvm::outs() << "LLVM Hello SubCommand Example 1.0\n"; + } else if (SubCommand == "foo") { + if (HandleSubCommandArg(OPT_uppercase)) + llvm::outs() << "FOO\n"; + else if (HandleSubCommandArg(OPT_lowercase)) + llvm::outs() << "foo\n"; + + if (HandleSubCommandArg(OPT_version)) + llvm::outs() << "LLVM Hello SubCommand foo Example 1.0\n"; + + } else if (SubCommand == "bar") { + if (HandleSubCommandArg(OPT_lowercase)) + llvm::outs() << "bar\n"; + else if (HandleSubCommandArg(OPT_uppercase)) + llvm::outs() << "BAR\n"; + + if (HandleSubCommandArg(OPT_version)) + llvm::outs() << "LLVM Hello SubCommand bar Example 1.0\n"; + } + + return 0; +} diff --git a/llvm/include/llvm/ADT/STLExtras.h b/llvm/include/llvm/ADT/STLExtras.h index 4a91b06..5b20d6bd 100644 --- a/llvm/include/llvm/ADT/STLExtras.h +++ b/llvm/include/llvm/ADT/STLExtras.h @@ -1692,6 +1692,28 @@ template <typename R, typename E> auto accumulate(R &&Range, E &&Init) { std::forward<E>(Init)); } +/// Wrapper for std::accumulate with a binary operator.
+template <typename R, typename E, typename BinaryOp> +auto accumulate(R &&Range, E &&Init, BinaryOp &&Op) { + return std::accumulate(adl_begin(Range), adl_end(Range), + std::forward<E>(Init), std::forward<BinaryOp>(Op)); +} + +/// Returns the sum of all values in `Range` with `Init` initial value. +/// The default initial value is 0. +template <typename R, typename E = detail::ValueOfRange<R>> +auto sum_of(R &&Range, E Init = E{0}) { + return accumulate(std::forward<R>(Range), std::move(Init)); +} + +/// Returns the product of all values in `Range` with `Init` initial value. +/// The default initial value is 1. +template <typename R, typename E = detail::ValueOfRange<R>> +auto product_of(R &&Range, E Init = E{1}) { + return accumulate(std::forward<R>(Range), std::move(Init), + std::multiplies<>{}); +} + /// Provide wrappers to std::for_each which take ranges instead of having to /// pass begin/end explicitly. template <typename R, typename UnaryFunction> diff --git a/llvm/include/llvm/BinaryFormat/DXContainer.h b/llvm/include/llvm/BinaryFormat/DXContainer.h index 8944e736..b9a08ce 100644 --- a/llvm/include/llvm/BinaryFormat/DXContainer.h +++ b/llvm/include/llvm/BinaryFormat/DXContainer.h @@ -201,19 +201,9 @@ enum class RootParameterType : uint32_t { LLVM_ABI ArrayRef<EnumEntry<RootParameterType>> getRootParameterTypes(); -#define ROOT_PARAMETER(Val, Enum) \ - case Val: \ - return true; -inline bool isValidParameterType(uint32_t V) { - switch (V) { -#include "DXContainerConstants.def" - } - return false; -} +bool isValidParameterType(uint32_t V); -inline bool isValidRangeType(uint32_t V) { - return V <= llvm::to_underlying(dxil::ResourceClass::LastEntry); -} +bool isValidRangeType(uint32_t V); #define SHADER_VISIBILITY(Val, Enum) Enum = Val, enum class ShaderVisibility : uint32_t { @@ -222,30 +212,14 @@ enum class ShaderVisibility : uint32_t { LLVM_ABI ArrayRef<EnumEntry<ShaderVisibility>> getShaderVisibility(); -#define SHADER_VISIBILITY(Val, Enum) \ - case Val: \ - return true; -inline bool isValidShaderVisibility(uint32_t V) { - switch (V) { -#include "DXContainerConstants.def" - } - return false; -} +bool isValidShaderVisibility(uint32_t V); #define FILTER(Val, Enum) Enum = Val, enum class SamplerFilter : uint32_t { #include "DXContainerConstants.def" }; -#define FILTER(Val, Enum) \ - case Val: \ - return true; -inline bool isValidSamplerFilter(uint32_t V) { - switch (V) { -#include "DXContainerConstants.def" - } - return false; -} +bool isValidSamplerFilter(uint32_t V); LLVM_ABI ArrayRef<EnumEntry<SamplerFilter>> getSamplerFilters(); @@ -256,15 +230,7 @@ enum class TextureAddressMode : uint32_t { LLVM_ABI ArrayRef<EnumEntry<TextureAddressMode>> getTextureAddressModes(); -#define TEXTURE_ADDRESS_MODE(Val, Enum) \ - case Val: \ - return true; -inline bool isValidAddress(uint32_t V) { - switch (V) { -#include "DXContainerConstants.def" - } - return false; -} +bool isValidAddress(uint32_t V); #define COMPARISON_FUNC(Val, Enum) Enum = Val, enum class ComparisonFunc : uint32_t { @@ -273,30 +239,20 @@ enum class ComparisonFunc : uint32_t { LLVM_ABI ArrayRef<EnumEntry<ComparisonFunc>> getComparisonFuncs(); -#define COMPARISON_FUNC(Val, Enum) \ - case Val: \ - return true; -inline bool isValidComparisonFunc(uint32_t V) { - switch (V) { -#include "DXContainerConstants.def" - } - return false; -} +bool isValidComparisonFunc(uint32_t V); #define STATIC_BORDER_COLOR(Val, Enum) Enum = Val, enum class StaticBorderColor : uint32_t { #include "DXContainerConstants.def" }; -#define 
STATIC_BORDER_COLOR(Val, Enum) \ - case Val: \ - return true; -inline bool isValidBorderColor(uint32_t V) { - switch (V) { -#include "DXContainerConstants.def" - } - return false; -} +bool isValidBorderColor(uint32_t V); + +bool isValidRootDesciptorFlags(uint32_t V); + +bool isValidDescriptorRangeFlags(uint32_t V); + +bool isValidStaticSamplerFlags(uint32_t V); LLVM_ABI ArrayRef<EnumEntry<StaticBorderColor>> getStaticBorderColors(); diff --git a/llvm/include/llvm/CAS/OnDiskDataAllocator.h b/llvm/include/llvm/CAS/OnDiskDataAllocator.h new file mode 100644 index 0000000..2809df8 --- /dev/null +++ b/llvm/include/llvm/CAS/OnDiskDataAllocator.h @@ -0,0 +1,95 @@ +//===----------------------------------------------------------------------===// +// +// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions. +// See https://llvm.org/LICENSE.txt for license information. +// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception +// +//===----------------------------------------------------------------------===// +// +/// \file +/// This file declares the interface for OnDiskDataAllocator, a file-backed +/// data pool that can be used to allocate space to store data packed in a +/// single file. It is based on MappedFileRegionArena and includes a header at +/// the beginning to provide metadata. +/// +//===----------------------------------------------------------------------===// + +#ifndef LLVM_CAS_ONDISKDATAALLOCATOR_H +#define LLVM_CAS_ONDISKDATAALLOCATOR_H + +#include "llvm/ADT/ArrayRef.h" +#include "llvm/CAS/FileOffset.h" +#include "llvm/Support/Error.h" + +namespace llvm::cas { + +/// Sink for data. Stores variable length data with 8-byte alignment. Does not +/// track the size of the data, which is assumed to be known from context, or +/// embedded. Uses 0-padding but does not guarantee 0-termination. +class OnDiskDataAllocator { +public: + using ValueProxy = MutableArrayRef<char>; + + /// A pointer to data stored on disk. + class OnDiskPtr { + public: + FileOffset getOffset() const { return Offset; } + explicit operator bool() const { return bool(getOffset()); } + const ValueProxy &operator*() const { + assert(Offset && "Null dereference"); + return Value; + } + const ValueProxy *operator->() const { + assert(Offset && "Null dereference"); + return &Value; + } + + OnDiskPtr() = default; + + private: + friend class OnDiskDataAllocator; + OnDiskPtr(FileOffset Offset, ValueProxy Value) + : Offset(Offset), Value(Value) {} + FileOffset Offset; + ValueProxy Value; + }; + + /// Get the data of \p Size stored at the given \p Offset. Note the allocator + /// doesn't keep track of the allocation size, thus \p Size doesn't need to + /// match the size of the allocation, but it must not exceed it. + Expected<ArrayRef<char>> get(FileOffset Offset, size_t Size) const; + + /// Allocate at least \p Size with 8-byte alignment. + Expected<OnDiskPtr> allocate(size_t Size); + + /// \returns the buffer that was allocated at \p create time, with size + /// \p UserHeaderSize. + MutableArrayRef<uint8_t> getUserHeader(); + + size_t size() const; + size_t capacity() const; + + static Expected<OnDiskDataAllocator> + create(const Twine &Path, const Twine &TableName, uint64_t MaxFileSize, + std::optional<uint64_t> NewFileInitialSize, + uint32_t UserHeaderSize = 0, + function_ref<void(void *)> UserHeaderInit = nullptr); + + OnDiskDataAllocator(OnDiskDataAllocator &&RHS); + OnDiskDataAllocator &operator=(OnDiskDataAllocator &&RHS); + + // No copy. Just call \a create() again.
+ OnDiskDataAllocator(const OnDiskDataAllocator &) = delete; + OnDiskDataAllocator &operator=(const OnDiskDataAllocator &) = delete; + + ~OnDiskDataAllocator(); + +private: + struct ImplType; + explicit OnDiskDataAllocator(std::unique_ptr<ImplType> Impl); + std::unique_ptr<ImplType> Impl; +}; + +} // namespace llvm::cas + +#endif // LLVM_CAS_ONDISKDATAALLOCATOR_H diff --git a/llvm/include/llvm/CAS/OnDiskTrieRawHashMap.h b/llvm/include/llvm/CAS/OnDiskTrieRawHashMap.h index 5e41bf6..fbd68d0 100644 --- a/llvm/include/llvm/CAS/OnDiskTrieRawHashMap.h +++ b/llvm/include/llvm/CAS/OnDiskTrieRawHashMap.h @@ -133,38 +133,38 @@ public: bool IsValue = false; }; - class pointer; - class const_pointer : public PointerImpl<ConstValueProxy> { + class OnDiskPtr; + class ConstOnDiskPtr : public PointerImpl<ConstValueProxy> { public: - const_pointer() = default; + ConstOnDiskPtr() = default; private: - friend class pointer; + friend class OnDiskPtr; friend class OnDiskTrieRawHashMap; - using const_pointer::PointerImpl::PointerImpl; + using ConstOnDiskPtr::PointerImpl::PointerImpl; }; - class pointer : public PointerImpl<ValueProxy> { + class OnDiskPtr : public PointerImpl<ValueProxy> { public: - operator const_pointer() const { - return const_pointer(Value, getOffset(), IsValue); + operator ConstOnDiskPtr() const { + return ConstOnDiskPtr(Value, getOffset(), IsValue); } - pointer() = default; + OnDiskPtr() = default; private: friend class OnDiskTrieRawHashMap; - using pointer::PointerImpl::PointerImpl; + using OnDiskPtr::PointerImpl::PointerImpl; }; /// Find the value from hash. /// /// \returns pointer to the value if exists, otherwise returns a non-value /// pointer that evaluates to `false` when convert to boolean. - const_pointer find(ArrayRef<uint8_t> Hash) const; + ConstOnDiskPtr find(ArrayRef<uint8_t> Hash) const; /// Helper function to recover a pointer into the trie from file offset. - Expected<const_pointer> recoverFromFileOffset(FileOffset Offset) const; + Expected<ConstOnDiskPtr> recoverFromFileOffset(FileOffset Offset) const; using LazyInsertOnConstructCB = function_ref<void(FileOffset TentativeOffset, ValueProxy TentativeValue)>; @@ -186,11 +186,11 @@ public: /// The in-memory \a TrieRawHashMap uses LazyAtomicPointer to synchronize /// simultaneous writes, but that seems dangerous to use in a memory-mapped /// file in case a process crashes in the busy state. 
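// Editor's illustration (not part of the patch): a hedged usage sketch of
// insertLazy. The OnConstruct callback initializes the value bytes in place
// the first time a hash is inserted, which is why no
// default-construct-then-mutate step is ever visible in the mapped file.
// `Trie` and `Hash` are assumed to exist in the caller.
//
//   auto P = Trie.insertLazy(
//       Hash, /*OnConstruct=*/[](FileOffset,
//                                OnDiskTrieRawHashMap::ValueProxy V) {
//         std::memset(V.Data.data(), 0, V.Data.size()); // fill payload once
//       });
//   if (!P)
//     handleError(P.takeError()); // hypothetical error sink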
- Expected<pointer> insertLazy(ArrayRef<uint8_t> Hash, - LazyInsertOnConstructCB OnConstruct = nullptr, - LazyInsertOnLeakCB OnLeak = nullptr); + Expected<OnDiskPtr> insertLazy(ArrayRef<uint8_t> Hash, + LazyInsertOnConstructCB OnConstruct = nullptr, + LazyInsertOnLeakCB OnLeak = nullptr); - Expected<pointer> insert(const ConstValueProxy &Value) { + Expected<OnDiskPtr> insert(const ConstValueProxy &Value) { return insertLazy(Value.Hash, [&](FileOffset, ValueProxy Allocated) { assert(Allocated.Hash == Value.Hash); assert(Allocated.Data.size() == Value.Data.size()); diff --git a/llvm/include/llvm/ExecutionEngine/Orc/EPCGenericDylibManager.h b/llvm/include/llvm/ExecutionEngine/Orc/EPCGenericDylibManager.h index 68bc54b..7c995a7 100644 --- a/llvm/include/llvm/ExecutionEngine/Orc/EPCGenericDylibManager.h +++ b/llvm/include/llvm/ExecutionEngine/Orc/EPCGenericDylibManager.h @@ -34,7 +34,7 @@ public: struct SymbolAddrs { ExecutorAddr Instance; ExecutorAddr Open; - ExecutorAddr Lookup; + ExecutorAddr Resolve; }; /// Create an EPCGenericMemoryAccess instance from a given set of @@ -51,25 +51,25 @@ public: LLVM_ABI Expected<tpctypes::DylibHandle> open(StringRef Path, uint64_t Mode); /// Looks up symbols within the given dylib. - Expected<std::vector<ExecutorSymbolDef>> - lookup(tpctypes::DylibHandle H, const SymbolLookupSet &Lookup) { - std::promise<MSVCPExpected<std::vector<ExecutorSymbolDef>>> RP; + Expected<tpctypes::LookupResult> lookup(tpctypes::DylibHandle H, + const SymbolLookupSet &Lookup) { + std::promise<MSVCPExpected<tpctypes::LookupResult>> RP; auto RF = RP.get_future(); lookupAsync(H, Lookup, [&RP](auto R) { RP.set_value(std::move(R)); }); return RF.get(); } /// Looks up symbols within the given dylib. - Expected<std::vector<ExecutorSymbolDef>> - lookup(tpctypes::DylibHandle H, const RemoteSymbolLookupSet &Lookup) { - std::promise<MSVCPExpected<std::vector<ExecutorSymbolDef>>> RP; + Expected<tpctypes::LookupResult> lookup(tpctypes::DylibHandle H, + const RemoteSymbolLookupSet &Lookup) { + std::promise<MSVCPExpected<tpctypes::LookupResult>> RP; auto RF = RP.get_future(); lookupAsync(H, Lookup, [&RP](auto R) { RP.set_value(std::move(R)); }); return RF.get(); } using SymbolLookupCompleteFn = - unique_function<void(Expected<std::vector<ExecutorSymbolDef>>)>; + unique_function<void(Expected<tpctypes::LookupResult>)>; /// Looks up symbols within the given dylib. LLVM_ABI void lookupAsync(tpctypes::DylibHandle H, diff --git a/llvm/include/llvm/ExecutionEngine/Orc/ExecutorResolutionGenerator.h b/llvm/include/llvm/ExecutionEngine/Orc/ExecutorResolutionGenerator.h new file mode 100644 index 0000000..9b972ed --- /dev/null +++ b/llvm/include/llvm/ExecutionEngine/Orc/ExecutorResolutionGenerator.h @@ -0,0 +1,74 @@ +//===-- ExecutorResolutionGenerator.h - Resolve symbols in executor -*- C++ -*-===// +// +// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions. +// See https://llvm.org/LICENSE.txt for license information. +// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception +// +//===----------------------------------------------------------------------===// +// +// Declares ExecutorResolutionGenerator for symbol resolution, +// dynamic library loading, and lookup in an executor process via +// ExecutorResolver. +// +//===----------------------------------------------------------------------===// + +#ifndef LLVM_EXECUTIONENGINE_ORC_EXECUTORRESOLUTIONGENERATOR_H +#define LLVM_EXECUTIONENGINE_ORC_EXECUTORRESOLUTIONGENERATOR_H + +#include "llvm/ADT/FunctionExtras.h" +#include "llvm/ExecutionEngine/Orc/AbsoluteSymbols.h" +#include "llvm/ExecutionEngine/Orc/Core.h" + +namespace llvm::orc { + +class ExecutorResolutionGenerator : public DefinitionGenerator { +public: + using SymbolPredicate = unique_function<bool(const SymbolStringPtr &)>; + using AbsoluteSymbolsFn = + unique_function<std::unique_ptr<MaterializationUnit>(SymbolMap)>; + + ExecutorResolutionGenerator( + ExecutionSession &ES, tpctypes::ResolverHandle H, + SymbolPredicate Allow = SymbolPredicate(), + AbsoluteSymbolsFn AbsoluteSymbols = absoluteSymbols) + : EPC(ES.getExecutorProcessControl()), H(H), Allow(std::move(Allow)), + AbsoluteSymbols(std::move(AbsoluteSymbols)) {} + + ExecutorResolutionGenerator( + ExecutionSession &ES, SymbolPredicate Allow = SymbolPredicate(), + AbsoluteSymbolsFn AbsoluteSymbols = absoluteSymbols) + : EPC(ES.getExecutorProcessControl()), Allow(std::move(Allow)), + AbsoluteSymbols(std::move(AbsoluteSymbols)) {} + + /// Permanently loads the library at the given path and, on success, returns + /// an ExecutorResolutionGenerator that will search the library for symbol + /// definitions. On failure returns the reason the library + /// failed to load. + static Expected<std::unique_ptr<ExecutorResolutionGenerator>> + Load(ExecutionSession &ES, const char *LibraryPath, + SymbolPredicate Allow = SymbolPredicate(), + AbsoluteSymbolsFn AbsoluteSymbols = absoluteSymbols); + + /// Creates an ExecutorResolutionGenerator that searches for symbols in + /// the target process. + static Expected<std::unique_ptr<ExecutorResolutionGenerator>> + GetForTargetProcess(ExecutionSession &ES, + SymbolPredicate Allow = SymbolPredicate(), + AbsoluteSymbolsFn AbsoluteSymbols = absoluteSymbols) { + return Load(ES, nullptr, std::move(Allow), std::move(AbsoluteSymbols)); + } + + Error tryToGenerate(LookupState &LS, LookupKind K, JITDylib &JD, + JITDylibLookupFlags JDLookupFlags, + const SymbolLookupSet &LookupSet) override; + +private: + ExecutorProcessControl &EPC; + tpctypes::ResolverHandle H; + SymbolPredicate Allow; + AbsoluteSymbolsFn AbsoluteSymbols; +}; + +} // namespace llvm::orc + +#endif // LLVM_EXECUTIONENGINE_ORC_EXECUTORRESOLUTIONGENERATOR_H diff --git a/llvm/include/llvm/ExecutionEngine/Orc/Shared/OrcRTBridge.h b/llvm/include/llvm/ExecutionEngine/Orc/Shared/OrcRTBridge.h index 2bc6c12..99ba456 100644 --- a/llvm/include/llvm/ExecutionEngine/Orc/Shared/OrcRTBridge.h +++ b/llvm/include/llvm/ExecutionEngine/Orc/Shared/OrcRTBridge.h @@ -25,7 +25,7 @@ namespace rt { LLVM_ABI extern const char *SimpleExecutorDylibManagerInstanceName; LLVM_ABI extern const char *SimpleExecutorDylibManagerOpenWrapperName; -LLVM_ABI extern const char *SimpleExecutorDylibManagerLookupWrapperName; +LLVM_ABI extern const char *SimpleExecutorDylibManagerResolveWrapperName; LLVM_ABI extern const char *SimpleExecutorMemoryManagerInstanceName; LLVM_ABI extern const char *SimpleExecutorMemoryManagerReserveWrapperName; @@ -66,10 +66,9 @@ using SPSSimpleExecutorDylibManagerOpenSignature = shared::SPSExpected<shared::SPSExecutorAddr>(shared::SPSExecutorAddr, shared::SPSString, uint64_t); -using SPSSimpleExecutorDylibManagerLookupSignature = - shared::SPSExpected<shared::SPSSequence<shared::SPSExecutorSymbolDef>>( - shared::SPSExecutorAddr, shared::SPSExecutorAddr, -
shared::SPSRemoteSymbolLookupSet); +using SPSSimpleExecutorDylibManagerResolveSignature = shared::SPSExpected< + shared::SPSSequence<shared::SPSOptional<shared::SPSExecutorSymbolDef>>>( + shared::SPSExecutorAddr, shared::SPSRemoteSymbolLookupSet); using SPSSimpleExecutorMemoryManagerReserveSignature = shared::SPSExpected<shared::SPSExecutorAddr>(shared::SPSExecutorAddr, diff --git a/llvm/include/llvm/ExecutionEngine/Orc/Shared/TargetProcessControlTypes.h b/llvm/include/llvm/ExecutionEngine/Orc/Shared/TargetProcessControlTypes.h index adb07ba..28ff322 100644 --- a/llvm/include/llvm/ExecutionEngine/Orc/Shared/TargetProcessControlTypes.h +++ b/llvm/include/llvm/ExecutionEngine/Orc/Shared/TargetProcessControlTypes.h @@ -114,7 +114,11 @@ struct PointerWrite { /// A handle used to represent a loaded dylib in the target process. using DylibHandle = ExecutorAddr; -using LookupResult = std::vector<ExecutorSymbolDef>; +/// A handle used to reference the resolver associated with a loaded +/// dylib in the target process. +using ResolverHandle = ExecutorAddr; + +using LookupResult = std::vector<std::optional<ExecutorSymbolDef>>; } // end namespace tpctypes diff --git a/llvm/include/llvm/ExecutionEngine/Orc/TargetProcess/ExecutorResolver.h b/llvm/include/llvm/ExecutionEngine/Orc/TargetProcess/ExecutorResolver.h new file mode 100644 index 0000000..2c5e98c --- /dev/null +++ b/llvm/include/llvm/ExecutionEngine/Orc/TargetProcess/ExecutorResolver.h @@ -0,0 +1,48 @@ +//===----- ExecutorResolver.h - Symbol resolver -----*- C++ -*-===// +// +// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions. +// See https://llvm.org/LICENSE.txt for license information. +// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception +// +//===----------------------------------------------------------------------===// +// +// Executor Symbol resolver. 
+// +//===----------------------------------------------------------------------===// + +#ifndef LLVM_EXECUTIONENGINE_ORC_TARGETPROCESS_EXECUTORRESOLVER_H +#define LLVM_EXECUTIONENGINE_ORC_TARGETPROCESS_EXECUTORRESOLVER_H + +#include "llvm/ADT/FunctionExtras.h" + +#include "llvm/ExecutionEngine/Orc/Shared/ExecutorSymbolDef.h" +#include "llvm/ExecutionEngine/Orc/Shared/SimpleRemoteEPCUtils.h" +#include "llvm/ExecutionEngine/Orc/Shared/TargetProcessControlTypes.h" + +namespace llvm::orc { + +class ExecutorResolver { +public: + using ResolveResult = Expected<std::vector<std::optional<ExecutorSymbolDef>>>; + using YieldResolveResultFn = unique_function<void(ResolveResult)>; + + virtual ~ExecutorResolver() = default; + + virtual void resolveAsync(const RemoteSymbolLookupSet &L, + YieldResolveResultFn &&OnResolve) = 0; +}; + +class DylibSymbolResolver : public ExecutorResolver { +public: + DylibSymbolResolver(tpctypes::DylibHandle H) : Handle(H) {} + + void + resolveAsync(const RemoteSymbolLookupSet &L, + ExecutorResolver::YieldResolveResultFn &&OnResolve) override; + +private: + tpctypes::DylibHandle Handle; +}; + +} // end namespace llvm::orc +#endif // LLVM_EXECUTIONENGINE_ORC_TARGETPROCESS_EXECUTORRESOLVER_H diff --git a/llvm/include/llvm/ExecutionEngine/Orc/TargetProcess/SimpleExecutorDylibManager.h b/llvm/include/llvm/ExecutionEngine/Orc/TargetProcess/SimpleExecutorDylibManager.h index 36a6f4b..7526a29d 100644 --- a/llvm/include/llvm/ExecutionEngine/Orc/TargetProcess/SimpleExecutorDylibManager.h +++ b/llvm/include/llvm/ExecutionEngine/Orc/TargetProcess/SimpleExecutorDylibManager.h @@ -23,6 +23,7 @@ #include "llvm/ExecutionEngine/Orc/Shared/TargetProcessControlTypes.h" #include "llvm/ExecutionEngine/Orc/Shared/WrapperFunctionUtils.h" #include "llvm/ExecutionEngine/Orc/TargetProcess/ExecutorBootstrapService.h" +#include "llvm/ExecutionEngine/Orc/TargetProcess/ExecutorResolver.h" #include "llvm/Support/Compiler.h" #include "llvm/Support/DynamicLibrary.h" #include "llvm/Support/Error.h" @@ -39,8 +40,6 @@ public: virtual ~SimpleExecutorDylibManager(); Expected<tpctypes::DylibHandle> open(const std::string &Path, uint64_t Mode); - Expected<std::vector<ExecutorSymbolDef>> - lookup(tpctypes::DylibHandle H, const RemoteSymbolLookupSet &L); Error shutdown() override; void addBootstrapSymbols(StringMap<ExecutorAddr> &M) override; @@ -52,10 +51,11 @@ private: openWrapper(const char *ArgData, size_t ArgSize); static llvm::orc::shared::CWrapperFunctionResult - lookupWrapper(const char *ArgData, size_t ArgSize); + resolveWrapper(const char *ArgData, size_t ArgSize); std::mutex M; DylibSet Dylibs; + std::vector<std::unique_ptr<ExecutorResolver>> Resolvers; }; } // end namespace rt_bootstrap diff --git a/llvm/include/llvm/Frontend/HLSL/RootSignatureValidations.h b/llvm/include/llvm/Frontend/HLSL/RootSignatureValidations.h index 4dd1811..7131980 100644 --- a/llvm/include/llvm/Frontend/HLSL/RootSignatureValidations.h +++ b/llvm/include/llvm/Frontend/HLSL/RootSignatureValidations.h @@ -28,12 +28,14 @@ LLVM_ABI bool verifyRootFlag(uint32_t Flags); LLVM_ABI bool verifyVersion(uint32_t Version); LLVM_ABI bool verifyRegisterValue(uint32_t RegisterValue); LLVM_ABI bool verifyRegisterSpace(uint32_t RegisterSpace); -LLVM_ABI bool verifyRootDescriptorFlag(uint32_t Version, uint32_t FlagsVal); +LLVM_ABI bool verifyRootDescriptorFlag(uint32_t Version, + dxbc::RootDescriptorFlags Flags); LLVM_ABI bool verifyRangeType(uint32_t Type); LLVM_ABI bool verifyDescriptorRangeFlag(uint32_t Version, dxil::ResourceClass Type, - 
dxbc::DescriptorRangeFlags FlagsVal); -LLVM_ABI bool verifyStaticSamplerFlags(uint32_t Version, uint32_t FlagsNumber); + dxbc::DescriptorRangeFlags Flags); +LLVM_ABI bool verifyStaticSamplerFlags(uint32_t Version, + dxbc::StaticSamplerFlags Flags); LLVM_ABI bool verifyNumDescriptors(uint32_t NumDescriptors); LLVM_ABI bool verifyMipLODBias(float MipLODBias); LLVM_ABI bool verifyMaxAnisotropy(uint32_t MaxAnisotropy); diff --git a/llvm/include/llvm/Frontend/OpenMP/OMPConstants.h b/llvm/include/llvm/Frontend/OpenMP/OMPConstants.h index 6e1bce1..7bec7e0 100644 --- a/llvm/include/llvm/Frontend/OpenMP/OMPConstants.h +++ b/llvm/include/llvm/Frontend/OpenMP/OMPConstants.h @@ -239,6 +239,9 @@ enum class OpenMPOffloadMappingFlags : uint64_t { // dynamic. // This is an OpenMP extension for the sake of OpenACC support. OMP_MAP_OMPX_HOLD = 0x2000, + // Attach pointer and pointee, after processing all other maps. + // Applicable to map-entering directives. Does not change ref-count. + OMP_MAP_ATTACH = 0x4000, /// Signal that the runtime library should use args as an array of /// descriptor_dim pointers and use args_size as dims. Used when we have /// non-contiguous list items in target update directive diff --git a/llvm/include/llvm/IR/DebugInfoMetadata.h b/llvm/include/llvm/IR/DebugInfoMetadata.h index 6652e30..7c6e709 100644 --- a/llvm/include/llvm/IR/DebugInfoMetadata.h +++ b/llvm/include/llvm/IR/DebugInfoMetadata.h @@ -2600,14 +2600,19 @@ public: StringRef getDirectory() const { return getScope()->getDirectory(); } std::optional<StringRef> getSource() const { return getScope()->getSource(); } - /// Get the scope where this is inlined. - /// - /// Walk through \a getInlinedAt() and return \a getScope() from the deepest - /// location. + /// Walk through \a getInlinedAt() and return the \a DILocation of the + /// outermost call site in the inlining chain. + const DILocation *getInlinedAtLocation() const { + const DILocation *Current = this; + while (const DILocation *Next = Current->getInlinedAt()) + Current = Next; + return Current; + } + + // Return the \a DILocalScope of the outermost call site in the inlining + // chain. DILocalScope *getInlinedAtScope() const { - if (auto *IA = getInlinedAt()) - return IA->getInlinedAtScope(); - return getScope(); + return getInlinedAtLocation()->getScope(); } /// Get the DWARF discriminator. 
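The DebugInfoMetadata.h change above replaces the recursive getInlinedAtScope walk with an iterative getInlinedAtLocation. A short sketch of how a caller might use it (editor's illustration; only the DILocation API shown in the hunk is assumed):

  #include "llvm/IR/DebugInfoMetadata.h"
  using namespace llvm;

  // Reports where an instruction's code was ultimately inlined into: walk
  // the inlined-at chain to the outermost call site, then take its scope.
  static DILocalScope *outermostScope(const DILocation *Loc) {
    const DILocation *Outer = Loc;
    while (const DILocation *Next = Outer->getInlinedAt())
      Outer = Next; // one call site outward per step
    // Equivalent to Loc->getInlinedAtLocation()->getScope(), which is the
    // new implementation of Loc->getInlinedAtScope().
    return Outer->getScope();
  }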
diff --git a/llvm/include/llvm/IR/IRBuilder.h b/llvm/include/llvm/IR/IRBuilder.h index 783f8f6..041a4ce 100644 --- a/llvm/include/llvm/IR/IRBuilder.h +++ b/llvm/include/llvm/IR/IRBuilder.h @@ -1722,16 +1722,19 @@ public: return Insert(BinOp, Name); } - Value *CreateLogicalAnd(Value *Cond1, Value *Cond2, const Twine &Name = "") { + Value *CreateLogicalAnd(Value *Cond1, Value *Cond2, const Twine &Name = "", + Instruction *MDFrom = nullptr) { assert(Cond2->getType()->isIntOrIntVectorTy(1)); return CreateSelect(Cond1, Cond2, - ConstantInt::getNullValue(Cond2->getType()), Name); + ConstantInt::getNullValue(Cond2->getType()), Name, + MDFrom); } - Value *CreateLogicalOr(Value *Cond1, Value *Cond2, const Twine &Name = "") { + Value *CreateLogicalOr(Value *Cond1, Value *Cond2, const Twine &Name = "", + Instruction *MDFrom = nullptr) { assert(Cond2->getType()->isIntOrIntVectorTy(1)); return CreateSelect(Cond1, ConstantInt::getAllOnesValue(Cond2->getType()), - Cond2, Name); + Cond2, Name, MDFrom); } Value *CreateLogicalOp(Instruction::BinaryOps Opc, Value *Cond1, Value *Cond2, diff --git a/llvm/include/llvm/IR/IntrinsicsNVVM.td b/llvm/include/llvm/IR/IntrinsicsNVVM.td index 23d878f..3af1750 100644 --- a/llvm/include/llvm/IR/IntrinsicsNVVM.td +++ b/llvm/include/llvm/IR/IntrinsicsNVVM.td @@ -272,6 +272,10 @@ class WMMA_REGS<string Geom, string Frag, string PtxEltType, bit IsSparse = fals !eq(gft,"m16n8k16:d:f32") : !listsplat(llvm_float_ty, 4), !eq(gft,"m16n8k4:c:f32") : !listsplat(llvm_float_ty, 4), !eq(gft,"m16n8k4:d:f32") : !listsplat(llvm_float_ty, 4), + !eq(gft,"m16n8k32:c:f16") : !listsplat(llvm_v2f16_ty, 2), + !eq(gft,"m16n8k32:c:f32") : !listsplat(llvm_float_ty, 4), + !eq(gft,"m16n8k32:d:f16") : !listsplat(llvm_v2f16_ty, 2), + !eq(gft,"m16n8k32:d:f32") : !listsplat(llvm_float_ty, 4), // wmma fp16 -> fp16/fp32 @ m16n16k16/m8n32k16/m32n8k16 // All other supported geometries use the same fragment format for f32 and @@ -298,6 +302,21 @@ class WMMA_REGS<string Geom, string Frag, string PtxEltType, bit IsSparse = fals !eq(gft,"m8n8k4:c:f64") : !listsplat(llvm_double_ty, 2), !eq(gft,"m8n8k4:d:f64") : !listsplat(llvm_double_ty, 2), + !eq(gft,"m16n8k4:a:f64") : !listsplat(llvm_double_ty, 2), + !eq(gft,"m16n8k4:b:f64") : [llvm_double_ty], + !eq(gft,"m16n8k4:c:f64") : !listsplat(llvm_double_ty, 4), + !eq(gft,"m16n8k4:d:f64") : !listsplat(llvm_double_ty, 4), + + !eq(gft,"m16n8k8:a:f64") : !listsplat(llvm_double_ty, 4), + !eq(gft,"m16n8k8:b:f64") : !listsplat(llvm_double_ty, 2), + !eq(gft,"m16n8k8:c:f64") : !listsplat(llvm_double_ty, 4), + !eq(gft,"m16n8k8:d:f64") : !listsplat(llvm_double_ty, 4), + + !eq(gft,"m16n8k16:a:f64") : !listsplat(llvm_double_ty, 8), + !eq(gft,"m16n8k16:b:f64") : !listsplat(llvm_double_ty, 4), + !eq(gft,"m16n8k16:c:f64") : !listsplat(llvm_double_ty, 4), + !eq(gft,"m16n8k16:d:f64") : !listsplat(llvm_double_ty, 4), + // wmma bf16 -> s32 @ m16n16k16/m8n32k16/m32n8k16 !eq(gft,"m16n16k16:a:bf16") : !listsplat(llvm_i32_ty, 4), !eq(gft,"m16n16k16:b:bf16") : !listsplat(llvm_i32_ty, 4), @@ -378,6 +397,26 @@ class WMMA_REGS<string Geom, string Frag, string PtxEltType, bit IsSparse = fals !eq(gft,"m16n8k64:c:s32") : !listsplat(llvm_i32_ty, 4), !eq(gft,"m16n8k64:d:s32") : !listsplat(llvm_i32_ty, 4), + // mma e4m3/e5m2 -> f16/f32 @ m16n8k16 + !eq(gft,"m16n8k16:a:e4m3") : !listsplat(llvm_i32_ty, 2), + !eq(gft,"m16n8k16:a:e5m2") : !listsplat(llvm_i32_ty, 2), + !eq(gft,"m16n8k16:b:e4m3") : [llvm_i32_ty], + !eq(gft,"m16n8k16:b:e5m2") : [llvm_i32_ty], + // mma e4m3/e5m2/e3m2/e2m3/e2m1 -> f32 @ 
m16n8k32 + !eq(gft,"m16n8k32:a:e4m3") : !listsplat(llvm_i32_ty, 4), + !eq(gft,"m16n8k32:a:e5m2") : !listsplat(llvm_i32_ty, 4), + !eq(gft,"m16n8k32:a:e3m2") : !listsplat(llvm_i32_ty, 4), + !eq(gft,"m16n8k32:a:e2m3") : !listsplat(llvm_i32_ty, 4), + !eq(gft,"m16n8k32:a:e2m1") : !listsplat(llvm_i32_ty, 4), + !eq(gft,"m16n8k32:b:e4m3") : !listsplat(llvm_i32_ty, 2), + !eq(gft,"m16n8k32:b:e5m2") : !listsplat(llvm_i32_ty, 2), + !eq(gft,"m16n8k32:b:e3m2") : !listsplat(llvm_i32_ty, 2), + !eq(gft,"m16n8k32:b:e2m3") : !listsplat(llvm_i32_ty, 2), + !eq(gft,"m16n8k32:b:e2m1") : !listsplat(llvm_i32_ty, 2), + // mma e2m1 -> f32 @m16n8k64 + !eq(gft,"m16n8k64:a:e2m1") : !listsplat(llvm_i32_ty, 4), + !eq(gft,"m16n8k64:b:e2m1") : !listsplat(llvm_i32_ty, 2), + // wmma/mma b1 -> s32 @ m8n8k128(b1) !eq(gft,"m8n8k128:a:b1") : [llvm_i32_ty], !eq(gft,"m8n8k128:b:b1") : [llvm_i32_ty], @@ -468,7 +507,7 @@ class WMMA_NAME<string ALayout, string BLayout, int Satfinite, string Rnd, strin # !if(Satfinite, "_satfinite", ""); } -class MMA_NAME<string ALayout, string BLayout, int Satfinite, string b1op, +class MMA_NAME<string ALayout, string BLayout, int Satfinite, string b1op, string Kind, WMMA_REGS A, WMMA_REGS B, WMMA_REGS C, WMMA_REGS D> { string signature = MMA_SIGNATURE<A, B, C, D>.ret; string record = "int_nvvm_mma" @@ -476,6 +515,7 @@ class MMA_NAME<string ALayout, string BLayout, int Satfinite, string b1op, # "_" # A.geom # "_" # ALayout # "_" # BLayout + # !if(!ne(Kind, ""), !strconcat("_", !subst("::", "_", Kind)), "") # !if(Satfinite, "_satfinite", "") # signature; } @@ -601,7 +641,7 @@ class NVVM_MMA_OPS { ["m16n8k16", "m16n8k8"], ["bf16"], [], ["f32"], []>.ret; list<list<WMMA_REGS>> f64_mma_ops = MMA_OPS< - ["m8n8k4"], + ["m8n8k4", "m16n8k4", "m16n8k8", "m16n8k16"], ["f64"], [], ["f64"], []>.ret; list<list<WMMA_REGS>> fp_mma_ops = MMA_OPS< ["m8n8k4", "m16n8k8", "m16n8k16"], @@ -609,6 +649,18 @@ class NVVM_MMA_OPS { list<list<WMMA_REGS>> int_mma_ops = MMA_OPS< ["m8n8k16", "m16n8k16", "m16n8k32"], ["s8", "u8"], ["s8", "u8"], ["s32"], []>.ret; + // m16n8k32 fp8 variants are intersected with f8f6f4 variants + // and processed there + list<list<WMMA_REGS>> fp8_mma_ops = MMA_OPS< + ["m16n8k16"], + ["e4m3", "e5m2"], ["e4m3", "e5m2"], + ["f16", "f32"], ["f16", "f32"]>.ret; + // it also contains e4m3/e5m2 from fp8 variants + list<list<WMMA_REGS>> f8f6f4_mma_ops = MMA_OPS< + ["m16n8k32"], + ["e4m3", "e5m2", "e3m2", "e2m3", "e2m1"], + ["e4m3", "e5m2", "e3m2", "e2m3", "e2m1"], + ["f16", "f32"], ["f16", "f32"]>.ret; list<list<WMMA_REGS>> subint_mma_ops = MMA_OPS< ["m8n8k32", "m16n8k32", "m16n8k64"], ["s4", "u4"], ["s4", "u4"], ["s32"], []>.ret; @@ -617,7 +669,8 @@ class NVVM_MMA_OPS { ["b1"], [], ["s32"], []>.ret; list<list<WMMA_REGS>> all_mma_ops = !listconcat( tf32_mma_ops, bf16_mma_ops, f64_mma_ops, - fp_mma_ops, int_mma_ops, subint_mma_ops, bit_mma_ops); + fp_mma_ops, fp8_mma_ops, f8f6f4_mma_ops, + int_mma_ops, subint_mma_ops, bit_mma_ops); list<list<WMMA_REGS>> bf16_mma_sp_ops = MMA_OPS< ["m16n8k16", "m16n8k32"], @@ -770,7 +823,8 @@ class NVVM_MMA_B1OPS<list<WMMA_REGS> frags> { // if NVVM_MMA_SUPPORTED<...>.ret then // def : FOO<>; // The record will only be defined for supported ops. // -class NVVM_MMA_SUPPORTED<list<WMMA_REGS> frags, string layout_a, string layout_b, int satf> { +class NVVM_MMA_SUPPORTED<list<WMMA_REGS> frags, string layout_a, string layout_b, + string kind, int satf> { // MMA ops check both layouts. 
string layout = layout_a # ":" # layout_b; string a_type = frags[0].ptx_elt_type; @@ -805,10 +859,31 @@ class NVVM_MMA_SUPPORTED<list<WMMA_REGS> frags, string layout_a, string layout_b !or(!ne(a_type, b_type), !ne(c_type, d_type))): false, - // m16n8k8 requires C and D to be the same type. - !and(!eq(geom, "m16n8k8"), + // m16n8k16/m16n8k32 requires C and D to be the same type + !and(!or(!eq(geom, "m16n8k16"), + !eq(geom, "m16n8k32")), !ne(c_type, d_type)): false, + // Limit kind to valid types and geometries + !and(!ne(kind, ""), + !or(!ne(geom, "m16n8k32"), + !and(!ne(a_type, "e4m3"), + !ne(a_type, "e5m2"), + !ne(a_type, "e3m2"), + !ne(a_type, "e2m3"), + !ne(a_type, "e2m1")))): false, + + // Limit m16n8k16/m16n8k32 with no kind to valid types + !and(!eq(kind, ""), + !or(!eq(geom, "m16n8k16"), + !eq(geom, "m16n8k32")), + !or(!eq(a_type, "e3m2"), + !eq(a_type, "e2m3"), + !eq(a_type, "e2m1"), + !eq(b_type, "e3m2"), + !eq(b_type, "e2m3"), + !eq(b_type, "e2m1"))): false, + // All other are OK. true: true ); @@ -882,9 +957,10 @@ class NVVM_MMA_SP_SUPPORTED<list<WMMA_REGS> frags, string metadata, !eq(a_type, "tf32")), !ne(a_type, b_type)): false, - // m16n8k16 and m16n8k32 requires C and D to be the same type. + // m16n8k16, m16n8k32 and m16n8k64 requires C and D to be the same type. !and(!or(!eq(geom, "m16n8k16"), - !eq(geom, "m16n8k32")), + !eq(geom, "m16n8k32"), + !eq(geom, "m16n8k64")), !ne(c_type, d_type)): false, !and(!eq(kind, ""), @@ -2252,10 +2328,12 @@ foreach layout_a = ["row", "col"] in { foreach satf = [0, 1] in { foreach op = NVVM_MMA_OPS.all_mma_ops in { foreach b1op = NVVM_MMA_B1OPS<op>.ret in { - if NVVM_MMA_SUPPORTED<op, layout_a, layout_b, satf>.ret then { - def MMA_NAME<layout_a, layout_b, satf, b1op, op[0], op[1], op[2], op[3]>.record - : NVVM_MMA<op[0], op[1], op[2], op[3]>; - } + foreach kind = ["", "kind::f8f6f4"] in { + if NVVM_MMA_SUPPORTED<op, layout_a, layout_b, kind, satf>.ret then { + def MMA_NAME<layout_a, layout_b, satf, b1op, kind, op[0], op[1], op[2], op[3]>.record + : NVVM_MMA<op[0], op[1], op[2], op[3]>; + } + } // kind } // b1op } // op } // satf diff --git a/llvm/include/llvm/IR/IntrinsicsSPIRV.td b/llvm/include/llvm/IR/IntrinsicsSPIRV.td index 823c491..66e24fa 100644 --- a/llvm/include/llvm/IR/IntrinsicsSPIRV.td +++ b/llvm/include/llvm/IR/IntrinsicsSPIRV.td @@ -150,6 +150,14 @@ def int_spv_rsqrt : DefaultAttrsIntrinsic<[LLVMMatchType<0>], [llvm_anyfloat_ty] [llvm_i32_ty, llvm_i32_ty, llvm_i32_ty, llvm_i32_ty, llvm_ptr_ty], [IntrNoMem]>; + def int_spv_resource_counterhandlefromimplicitbinding + : DefaultAttrsIntrinsic<[llvm_any_ty], + [llvm_any_ty, llvm_i32_ty, llvm_i32_ty], + [IntrNoMem]>; + def int_spv_resource_counterhandlefrombinding + : DefaultAttrsIntrinsic<[llvm_any_ty], + [llvm_any_ty, llvm_i32_ty, llvm_i32_ty], + [IntrNoMem]>; def int_spv_firstbituhigh : DefaultAttrsIntrinsic<[LLVMScalarOrSameVectorWidth<0, llvm_i32_ty>], [llvm_anyint_ty], [IntrNoMem]>; def int_spv_firstbitshigh : DefaultAttrsIntrinsic<[LLVMScalarOrSameVectorWidth<0, llvm_i32_ty>], [llvm_anyint_ty], [IntrNoMem]>; diff --git a/llvm/include/llvm/MC/TargetRegistry.h b/llvm/include/llvm/MC/TargetRegistry.h index 570d4c0..234c587 100644 --- a/llvm/include/llvm/MC/TargetRegistry.h +++ b/llvm/include/llvm/MC/TargetRegistry.h @@ -737,7 +737,8 @@ struct TargetRegistry { /// \param TripleStr - The triple to use for finding a target. /// \param Error - On failure, an error string describing why no target was /// found. - // TODO: Drop this in favor of the method accepting Triple. 
+ // TODO(boomanaiden154): Remove this function after LLVM 22 branches. + [[deprecated("Use overload accepting Triple instead")]] static const Target *lookupTarget(StringRef TripleStr, std::string &Error) { return lookupTarget(Triple(TripleStr), Error); } diff --git a/llvm/include/llvm/Object/OffloadBundle.h b/llvm/include/llvm/Object/OffloadBundle.h index 18be62b..bbb313c0 100644 --- a/llvm/include/llvm/Object/OffloadBundle.h +++ b/llvm/include/llvm/Object/OffloadBundle.h @@ -32,29 +32,41 @@ namespace llvm { namespace object { +// CompressedOffloadBundle represents the format for the compressed offload +// bundles. +// +// The format is as follows: +// - Magic Number (4 bytes) - A constant "CCOB". +// - Version (2 bytes) +// - Compression Method (2 bytes) - Uses the values from +// llvm::compression::Format. +// - Total file size (4 bytes in V2, 8 bytes in V3). +// - Uncompressed Size (4 bytes in V1/V2, 8 bytes in V3). +// - Truncated MD5 Hash (8 bytes). +// - Compressed Data (variable length). class CompressedOffloadBundle { private: - static inline const size_t MagicSize = 4; - static inline const size_t VersionFieldSize = sizeof(uint16_t); - static inline const size_t MethodFieldSize = sizeof(uint16_t); - static inline const size_t FileSizeFieldSize = sizeof(uint32_t); - static inline const size_t UncompressedSizeFieldSize = sizeof(uint32_t); - static inline const size_t HashFieldSize = sizeof(uint64_t); - static inline const size_t V1HeaderSize = - MagicSize + VersionFieldSize + MethodFieldSize + - UncompressedSizeFieldSize + HashFieldSize; - static inline const size_t V2HeaderSize = - MagicSize + VersionFieldSize + FileSizeFieldSize + MethodFieldSize + - UncompressedSizeFieldSize + HashFieldSize; static inline const llvm::StringRef MagicNumber = "CCOB"; - static inline const uint16_t Version = 2; public: - LLVM_ABI static llvm::Expected<std::unique_ptr<llvm::MemoryBuffer>> + struct CompressedBundleHeader { + unsigned Version; + llvm::compression::Format CompressionFormat; + std::optional<size_t> FileSize; + size_t UncompressedFileSize; + uint64_t Hash; + + static llvm::Expected<CompressedBundleHeader> tryParse(llvm::StringRef); + }; + + static inline const uint16_t DefaultVersion = 3; + + static llvm::Expected<std::unique_ptr<llvm::MemoryBuffer>> compress(llvm::compression::Params P, const llvm::MemoryBuffer &Input, - bool Verbose = false); - LLVM_ABI static llvm::Expected<std::unique_ptr<llvm::MemoryBuffer>> - decompress(llvm::MemoryBufferRef &Input, bool Verbose = false); + uint16_t Version, raw_ostream *VerboseStream = nullptr); + static llvm::Expected<std::unique_ptr<llvm::MemoryBuffer>> + decompress(const llvm::MemoryBuffer &Input, + raw_ostream *VerboseStream = nullptr); }; /// Bundle entry in binary clang-offload-bundler format. 
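The comment block above fully determines the compressed-bundle header layout, so a standalone V3 parser is easy to sketch. A hedged example (editor's illustration, not the in-tree reader, whose entry point is CompressedBundleHeader::tryParse; little-endian fields packed in the order listed are assumed):

  #include <cstdint>
  #include <cstring>

  // V3 layout per the comment: "CCOB" magic, u16 version, u16 method,
  // u64 total file size, u64 uncompressed size, u64 truncated MD5 = 32 bytes.
  struct V3Header {
    uint16_t Version;
    uint16_t Method;
    uint64_t FileSize;
    uint64_t UncompressedSize;
    uint64_t Hash;
  };

  static bool parseV3(const char *Buf, size_t Len, V3Header &H) {
    if (Len < 32 || std::memcmp(Buf, "CCOB", 4) != 0)
      return false;
    std::memcpy(&H.Version, Buf + 4, 2);
    std::memcpy(&H.Method, Buf + 6, 2);
    std::memcpy(&H.FileSize, Buf + 8, 8);
    std::memcpy(&H.UncompressedSize, Buf + 16, 8);
    std::memcpy(&H.Hash, Buf + 24, 8);
    return H.Version == 3;
  }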
@@ -62,12 +74,12 @@ struct OffloadBundleEntry { uint64_t Offset = 0u; uint64_t Size = 0u; uint64_t IDLength = 0u; - StringRef ID; + std::string ID; OffloadBundleEntry(uint64_t O, uint64_t S, uint64_t I, StringRef T) - : Offset(O), Size(S), IDLength(I), ID(T) {} + : Offset(O), Size(S), IDLength(I), ID(T.str()) {} void dumpInfo(raw_ostream &OS) { OS << "Offset = " << Offset << ", Size = " << Size - << ", ID Length = " << IDLength << ", ID = " << ID; + << ", ID Length = " << IDLength << ", ID = " << ID << "\n"; } void dumpURI(raw_ostream &OS, StringRef FilePath) { OS << ID.data() << "\tfile://" << FilePath << "#offset=" << Offset @@ -81,16 +93,21 @@ class OffloadBundleFatBin { uint64_t Size = 0u; StringRef FileName; uint64_t NumberOfEntries; + bool Decompressed; SmallVector<OffloadBundleEntry> Entries; public: + std::unique_ptr<MemoryBuffer> DecompressedBuffer; + SmallVector<OffloadBundleEntry> getEntries() { return Entries; } uint64_t getSize() const { return Size; } StringRef getFileName() const { return FileName; } uint64_t getNumEntries() const { return NumberOfEntries; } + bool isDecompressed() const { return Decompressed; } LLVM_ABI static Expected<std::unique_ptr<OffloadBundleFatBin>> - create(MemoryBufferRef, uint64_t SectionOffset, StringRef FileName); + create(MemoryBufferRef, uint64_t SectionOffset, StringRef FileName, + bool Decompress = false); LLVM_ABI Error extractBundle(const ObjectFile &Source); LLVM_ABI Error dumpEntryToCodeObject(); @@ -106,9 +123,14 @@ public: Entry.dumpURI(outs(), FileName); } - OffloadBundleFatBin(MemoryBufferRef Source, StringRef File) - : FileName(File), NumberOfEntries(0), - Entries(SmallVector<OffloadBundleEntry>()) {} + OffloadBundleFatBin(MemoryBufferRef Source, StringRef File, + bool Decompress = false) + : FileName(File), NumberOfEntries(0), Decompressed(Decompress), + Entries(SmallVector<OffloadBundleEntry>()) { + if (Decompress) + DecompressedBuffer = + MemoryBuffer::getMemBufferCopy(Source.getBuffer(), File); + } }; enum UriTypeT { FILE_URI, MEMORY_URI }; @@ -191,6 +213,10 @@ LLVM_ABI Error extractOffloadBundleFatBinary( LLVM_ABI Error extractCodeObject(const ObjectFile &Source, int64_t Offset, int64_t Size, StringRef OutputFileName); +/// Extract code object memory from the given \p Buffer at \p Offset +/// and of \p Size, and copy into \p OutputFileName. +LLVM_ABI Error extractCodeObject(MemoryBufferRef Buffer, int64_t Offset, + int64_t Size, StringRef OutputFileName); /// Extracts an Offload Bundle Entry given by URI LLVM_ABI Error extractOffloadBundleByURI(StringRef URIstr); diff --git a/llvm/include/llvm/Option/ArgList.h b/llvm/include/llvm/Option/ArgList.h index 3e80574..db36509 100644 --- a/llvm/include/llvm/Option/ArgList.h +++ b/llvm/include/llvm/Option/ArgList.h @@ -20,6 +20,7 @@ #include "llvm/Option/OptSpecifier.h" #include "llvm/Option/Option.h" #include "llvm/Support/Compiler.h" +#include "llvm/Support/Error.h" #include <algorithm> #include <cstddef> #include <initializer_list> @@ -280,6 +281,22 @@ public: /// list. virtual unsigned getNumInputArgStrings() const = 0; + /// getSubCommand - Find the subcommand in the arguments if the usage is valid. + /// + /// \param AllSubCommands - A list of all valid subcommands. + /// \param HandleMultipleSubcommands - A callback for the case where multiple + /// subcommands are present in the arguments. It gets a list of all found + /// subcommands. + /// \param HandleOtherPositionals - A callback for the case where positional + /// arguments that are not subcommands are present.
+ /// \return The name of the subcommand found. If no subcommand is found, + /// this returns an empty StringRef. If multiple subcommands are found, the + /// first one is returned. + StringRef getSubCommand( + ArrayRef<OptTable::SubCommand> AllSubCommands, + std::function<void(ArrayRef<StringRef>)> HandleMultipleSubcommands, + std::function<void(ArrayRef<StringRef>)> HandleOtherPositionals) const; + /// @} /// @name Argument Lookup Utilities /// @{ diff --git a/llvm/include/llvm/Option/OptParser.td b/llvm/include/llvm/Option/OptParser.td index 9fd606b..8f32fb4 100644 --- a/llvm/include/llvm/Option/OptParser.td +++ b/llvm/include/llvm/Option/OptParser.td @@ -98,7 +98,15 @@ class HelpTextVariant<list<OptionVisibility> visibilities, string text> { string Text = text; } -class Option<list<string> prefixes, string name, OptionKind kind> { +// Class definition for positional subcommands. +class SubCommand<string name, string helpText, string usage = ""> { + string Name = name; + string HelpText = helpText; + string Usage = usage; +} + +class Option<list<string> prefixes, string name, OptionKind kind, + list<SubCommand> subcommands = []> { string EnumName = ?; // Uses the def name if undefined. list<string> Prefixes = prefixes; string Name = name; @@ -129,26 +137,34 @@ class Option<list<string> prefixes, string name, OptionKind kind> { code ValueMerger = "mergeForwardValue"; code ValueExtractor = "extractForwardValue"; list<code> NormalizedValues = ?; + list<SubCommand> SubCommands = subcommands; } // Helpers for defining options. -class Flag<list<string> prefixes, string name> - : Option<prefixes, name, KIND_FLAG>; -class Joined<list<string> prefixes, string name> - : Option<prefixes, name, KIND_JOINED>; -class Separate<list<string> prefixes, string name> - : Option<prefixes, name, KIND_SEPARATE>; -class CommaJoined<list<string> prefixes, string name> - : Option<prefixes, name, KIND_COMMAJOINED>; -class MultiArg<list<string> prefixes, string name, int numargs> - : Option<prefixes, name, KIND_MULTIARG> { +class Flag<list<string> prefixes, string name, + list<SubCommand> subcommands = []> + : Option<prefixes, name, KIND_FLAG, subcommands>; +class Joined<list<string> prefixes, string name, + list<SubCommand> subcommands = []> + : Option<prefixes, name, KIND_JOINED, subcommands>; +class Separate<list<string> prefixes, string name, + list<SubCommand> subcommands = []> + : Option<prefixes, name, KIND_SEPARATE, subcommands>; +class CommaJoined<list<string> prefixes, string name, + list<SubCommand> subcommands = []> + : Option<prefixes, name, KIND_COMMAJOINED, subcommands>; +class MultiArg<list<string> prefixes, string name, int numargs, + list<SubCommand> subcommands = []> + : Option<prefixes, name, KIND_MULTIARG, subcommands> { int NumArgs = numargs; } -class JoinedOrSeparate<list<string> prefixes, string name> - : Option<prefixes, name, KIND_JOINED_OR_SEPARATE>; -class JoinedAndSeparate<list<string> prefixes, string name> - : Option<prefixes, name, KIND_JOINED_AND_SEPARATE>; +class JoinedOrSeparate<list<string> prefixes, string name, + list<SubCommand> subcommands = []> + : Option<prefixes, name, KIND_JOINED_OR_SEPARATE, subcommands>; +class JoinedAndSeparate<list<string> prefixes, string name, + list<SubCommand> subcommands = []> + : Option<prefixes, name, KIND_JOINED_AND_SEPARATE, subcommands>; // Mix-ins for adding optional attributes. 
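The ArgList::getSubCommand documentation above, together with the OptParser.td subcommand classes, describes the parsing side of the new feature. A compact dispatch sketch (editor's illustration, assuming an already-constructed OptTable T and a parsed InputArgList Args as in the OptSubcommand example tool):

  #include "llvm/ADT/ArrayRef.h"
  #include "llvm/ADT/StringRef.h"
  #include "llvm/Option/ArgList.h"
  #include "llvm/Option/OptTable.h"
  #include "llvm/Support/raw_ostream.h"
  #include <cstdlib>

  using namespace llvm;
  using namespace llvm::opt;

  static StringRef pickSubCommand(const OptTable &T, const InputArgList &Args) {
    // Both callbacks receive the offending positionals; here they simply
    // report and exit, mirroring the example tool earlier in this patch.
    auto Die = [](ArrayRef<StringRef> Bad) {
      for (StringRef S : Bad)
        errs() << "unexpected positional: " << S << "\n";
      std::exit(1);
    };
    // Returns the subcommand name, or an empty StringRef for the top level.
    return Args.getSubCommand(T.getSubCommands(), Die, Die);
  }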
diff --git a/llvm/include/llvm/Option/OptTable.h b/llvm/include/llvm/Option/OptTable.h index df42ee3..f641ca4 100644 --- a/llvm/include/llvm/Option/OptTable.h +++ b/llvm/include/llvm/Option/OptTable.h @@ -53,6 +53,13 @@ public: /// parts of the driver still use Option instances where convenient. class LLVM_ABI OptTable { public: + /// Represents a subcommand and its options in the option table. + struct SubCommand { + const char *Name; + const char *HelpText; + const char *Usage; + }; + /// Entry for a single option instance in the option data table. struct Info { unsigned PrefixesOffset; @@ -79,6 +86,8 @@ public: unsigned short AliasID; const char *AliasArgs; const char *Values; + // Offset into OptTable's SubCommandIDsTable. + unsigned SubCommandIDsOffset; bool hasNoPrefix() const { return PrefixesOffset == 0; } @@ -94,6 +103,21 @@ public: getNumPrefixes(PrefixesTable)); } + bool hasSubCommands() const { return SubCommandIDsOffset != 0; } + + unsigned getNumSubCommandIDs(ArrayRef<unsigned> SubCommandIDsTable) const { + // We embed the number of subcommand IDs in the value of the first offset. + return SubCommandIDsTable[SubCommandIDsOffset]; + } + + ArrayRef<unsigned> + getSubCommandIDs(ArrayRef<unsigned> SubCommandIDsTable) const { + return hasSubCommands() ? SubCommandIDsTable.slice( + SubCommandIDsOffset + 1, + getNumSubCommandIDs(SubCommandIDsTable)) + : ArrayRef<unsigned>(); + } + void appendPrefixes(const StringTable &StrTable, ArrayRef<StringTable::Offset> PrefixesTable, SmallVectorImpl<StringRef> &Prefixes) const { @@ -119,6 +143,22 @@ public: } }; +public: + bool isValidForSubCommand(const Info *CandidateInfo, + StringRef SubCommand) const { + assert(!SubCommand.empty() && + "This helper is only for valid registered subcommands."); + auto SCIT = + std::find_if(SubCommands.begin(), SubCommands.end(), + [&](const auto &C) { return SubCommand == C.Name; }); + assert(SCIT != SubCommands.end() && + "This helper is only for valid registered subcommands."); + auto SubCommandIDs = CandidateInfo->getSubCommandIDs(SubCommandIDsTable); + unsigned CurrentSubCommandID = SCIT - &SubCommands[0]; + return std::find(SubCommandIDs.begin(), SubCommandIDs.end(), + CurrentSubCommandID) != SubCommandIDs.end(); + } + private: // A unified string table for these options. Individual strings are stored as // null terminated C-strings at offsets within this table. @@ -134,6 +174,13 @@ private: ArrayRef<Info> OptionInfos; bool IgnoreCase; + + /// The subcommand information table. + ArrayRef<SubCommand> SubCommands; + + /// The subcommand IDs table. + ArrayRef<unsigned> SubCommandIDsTable; + bool GroupedShortOptions = false; bool DashDashParsing = false; const char *EnvVar = nullptr; @@ -168,7 +215,9 @@ protected: /// manually call \c buildPrefixChars once they are fully constructed. OptTable(const StringTable &StrTable, ArrayRef<StringTable::Offset> PrefixesTable, - ArrayRef<Info> OptionInfos, bool IgnoreCase = false); + ArrayRef<Info> OptionInfos, bool IgnoreCase = false, + ArrayRef<SubCommand> SubCommands = {}, + ArrayRef<unsigned> SubCommandIDsTable = {}); /// Build (or rebuild) the PrefixChars member. void buildPrefixChars(); @@ -179,6 +228,8 @@ public: /// Return the string table used for option names. const StringTable &getStrTable() const { return *StrTable; } + ArrayRef<SubCommand> getSubCommands() const { return SubCommands; } + /// Return the prefixes table used for option names. ArrayRef<StringTable::Offset> getPrefixesTable() const { return PrefixesTable; @@ -410,7 +461,8 @@ public: /// texts. 
void printHelp(raw_ostream &OS, const char *Usage, const char *Title, bool ShowHidden = false, bool ShowAllAliases = false, - Visibility VisibilityMask = Visibility()) const; + Visibility VisibilityMask = Visibility(), + StringRef SubCommand = {}) const; void printHelp(raw_ostream &OS, const char *Usage, const char *Title, unsigned FlagsToInclude, unsigned FlagsToExclude, @@ -418,7 +470,8 @@ public: private: void internalPrintHelp(raw_ostream &OS, const char *Usage, const char *Title, - bool ShowHidden, bool ShowAllAliases, + StringRef SubCommand, bool ShowHidden, + bool ShowAllAliases, std::function<bool(const Info &)> ExcludeOption, Visibility VisibilityMask) const; }; @@ -428,7 +481,9 @@ class GenericOptTable : public OptTable { protected: LLVM_ABI GenericOptTable(const StringTable &StrTable, ArrayRef<StringTable::Offset> PrefixesTable, - ArrayRef<Info> OptionInfos, bool IgnoreCase = false); + ArrayRef<Info> OptionInfos, bool IgnoreCase = false, + ArrayRef<SubCommand> SubCommands = {}, + ArrayRef<unsigned> SubCommandIDsTable = {}); }; class PrecomputedOptTable : public OptTable { @@ -437,8 +492,11 @@ protected: ArrayRef<StringTable::Offset> PrefixesTable, ArrayRef<Info> OptionInfos, ArrayRef<StringTable::Offset> PrefixesUnionOffsets, - bool IgnoreCase = false) - : OptTable(StrTable, PrefixesTable, OptionInfos, IgnoreCase) { + bool IgnoreCase = false, + ArrayRef<SubCommand> SubCommands = {}, + ArrayRef<unsigned> SubCommandIDsTable = {}) + : OptTable(StrTable, PrefixesTable, OptionInfos, IgnoreCase, SubCommands, + SubCommandIDsTable) { for (auto PrefixOffset : PrefixesUnionOffsets) PrefixesUnion.push_back(StrTable[PrefixOffset]); buildPrefixChars(); @@ -452,33 +510,36 @@ protected: #define LLVM_MAKE_OPT_ID_WITH_ID_PREFIX( \ ID_PREFIX, PREFIXES_OFFSET, PREFIXED_NAME_OFFSET, ID, KIND, GROUP, ALIAS, \ ALIASARGS, FLAGS, VISIBILITY, PARAM, HELPTEXT, HELPTEXTSFORVARIANTS, \ - METAVAR, VALUES) \ + METAVAR, VALUES, SUBCOMMANDIDS_OFFSET) \ ID_PREFIX##ID #define LLVM_MAKE_OPT_ID(PREFIXES_OFFSET, PREFIXED_NAME_OFFSET, ID, KIND, \ GROUP, ALIAS, ALIASARGS, FLAGS, VISIBILITY, PARAM, \ - HELPTEXT, HELPTEXTSFORVARIANTS, METAVAR, VALUES) \ - LLVM_MAKE_OPT_ID_WITH_ID_PREFIX(OPT_, PREFIXES_OFFSET, PREFIXED_NAME_OFFSET, \ - ID, KIND, GROUP, ALIAS, ALIASARGS, FLAGS, \ - VISIBILITY, PARAM, HELPTEXT, \ - HELPTEXTSFORVARIANTS, METAVAR, VALUES) + HELPTEXT, HELPTEXTSFORVARIANTS, METAVAR, VALUES, \ + SUBCOMMANDIDS_OFFSET) \ + LLVM_MAKE_OPT_ID_WITH_ID_PREFIX( \ + OPT_, PREFIXES_OFFSET, PREFIXED_NAME_OFFSET, ID, KIND, GROUP, ALIAS, \ + ALIASARGS, FLAGS, VISIBILITY, PARAM, HELPTEXT, HELPTEXTSFORVARIANTS, \ + METAVAR, VALUES, SUBCOMMANDIDS_OFFSET) #define LLVM_CONSTRUCT_OPT_INFO_WITH_ID_PREFIX( \ ID_PREFIX, PREFIXES_OFFSET, PREFIXED_NAME_OFFSET, ID, KIND, GROUP, ALIAS, \ ALIASARGS, FLAGS, VISIBILITY, PARAM, HELPTEXT, HELPTEXTSFORVARIANTS, \ - METAVAR, VALUES) \ + METAVAR, VALUES, SUBCOMMANDIDS_OFFSET) \ llvm::opt::OptTable::Info { \ PREFIXES_OFFSET, PREFIXED_NAME_OFFSET, HELPTEXT, HELPTEXTSFORVARIANTS, \ METAVAR, ID_PREFIX##ID, llvm::opt::Option::KIND##Class, PARAM, FLAGS, \ - VISIBILITY, ID_PREFIX##GROUP, ID_PREFIX##ALIAS, ALIASARGS, VALUES \ + VISIBILITY, ID_PREFIX##GROUP, ID_PREFIX##ALIAS, ALIASARGS, VALUES, \ + SUBCOMMANDIDS_OFFSET \ } #define LLVM_CONSTRUCT_OPT_INFO( \ PREFIXES_OFFSET, PREFIXED_NAME_OFFSET, ID, KIND, GROUP, ALIAS, ALIASARGS, \ - FLAGS, VISIBILITY, PARAM, HELPTEXT, HELPTEXTSFORVARIANTS, METAVAR, VALUES) \ + FLAGS, VISIBILITY, PARAM, HELPTEXT, HELPTEXTSFORVARIANTS, METAVAR, VALUES, \ + 
SUBCOMMANDIDS_OFFSET) \ LLVM_CONSTRUCT_OPT_INFO_WITH_ID_PREFIX( \ OPT_, PREFIXES_OFFSET, PREFIXED_NAME_OFFSET, ID, KIND, GROUP, ALIAS, \ ALIASARGS, FLAGS, VISIBILITY, PARAM, HELPTEXT, HELPTEXTSFORVARIANTS, \ - METAVAR, VALUES) + METAVAR, VALUES, SUBCOMMANDIDS_OFFSET) #endif // LLVM_OPTION_OPTTABLE_H diff --git a/llvm/include/llvm/Option/Option.h b/llvm/include/llvm/Option/Option.h index 51c330a..192cb3c9 100644 --- a/llvm/include/llvm/Option/Option.h +++ b/llvm/include/llvm/Option/Option.h @@ -216,6 +216,12 @@ public: /// always be false. LLVM_ABI bool matches(OptSpecifier ID) const; + LLVM_ABI bool isRegisteredSC(StringRef SubCommand) const { + assert(Info && "Must have a valid info!"); + assert(Owner && "Must have a valid owner!"); + return Owner->isValidForSubCommand(Info, SubCommand); + } + /// Potentially accept the current argument, returning a new Arg instance, /// or 0 if the option does not accept this argument (or the argument is /// missing values). diff --git a/llvm/include/llvm/Support/GlobPattern.h b/llvm/include/llvm/Support/GlobPattern.h index 62ed4a0..c1b4484 100644 --- a/llvm/include/llvm/Support/GlobPattern.h +++ b/llvm/include/llvm/Support/GlobPattern.h @@ -65,13 +65,19 @@ public: bool isTrivialMatchAll() const { if (!Prefix.empty()) return false; + if (!Suffix.empty()) + return false; if (SubGlobs.size() != 1) return false; return SubGlobs[0].getPat() == "*"; } + StringRef prefix() const { return Prefix; } + StringRef suffix() const { return Suffix; } + private: StringRef Prefix; + StringRef Suffix; struct SubGlobPattern { /// \param Pat the pattern to match against diff --git a/llvm/include/llvm/Transforms/IPO/FunctionAttrs.h b/llvm/include/llvm/Transforms/IPO/FunctionAttrs.h index 754714d..eaca0a8 100644 --- a/llvm/include/llvm/Transforms/IPO/FunctionAttrs.h +++ b/llvm/include/llvm/Transforms/IPO/FunctionAttrs.h @@ -79,6 +79,19 @@ public: LLVM_ABI PreservedAnalyses run(Module &M, ModuleAnalysisManager &AM); }; +/// Additional 'norecurse' attribute deduction during postlink LTO phase. +/// +/// This is a module pass that infers 'norecurse' attribute on functions. +/// It runs during LTO and analyzes the module's call graph to find functions +/// that are guaranteed not to call themselves, either directly or indirectly. +/// The pass uses a module-wide flag which checks if any function's address is +/// taken or any function in the module has external linkage, to safely handle +/// indirect and library function calls from current function. 
+class NoRecurseLTOInferencePass + : public PassInfoMixin<NoRecurseLTOInferencePass> { +public: + LLVM_ABI PreservedAnalyses run(Module &M, ModuleAnalysisManager &MAM); +}; } // end namespace llvm #endif // LLVM_TRANSFORMS_IPO_FUNCTIONATTRS_H diff --git a/llvm/lib/Analysis/IR2Vec.cpp b/llvm/lib/Analysis/IR2Vec.cpp index 295b6d3..6885351 100644 --- a/llvm/lib/Analysis/IR2Vec.cpp +++ b/llvm/lib/Analysis/IR2Vec.cpp @@ -200,6 +200,8 @@ void Embedder::computeEmbeddings() const { if (F.isDeclaration()) return; + FuncVector = Embedding(Dimension, 0.0); + // Consider only the basic blocks that are reachable from entry for (const BasicBlock *BB : depth_first(&F)) { computeEmbeddings(*BB); diff --git a/llvm/lib/BinaryFormat/DXContainer.cpp b/llvm/lib/BinaryFormat/DXContainer.cpp index c06a3e3..22f5180 100644 --- a/llvm/lib/BinaryFormat/DXContainer.cpp +++ b/llvm/lib/BinaryFormat/DXContainer.cpp @@ -18,6 +18,91 @@ using namespace llvm; using namespace llvm::dxbc; +#define ROOT_PARAMETER(Val, Enum) \ + case Val: \ + return true; +bool llvm::dxbc::isValidParameterType(uint32_t V) { + switch (V) { +#include "llvm/BinaryFormat/DXContainerConstants.def" + } + return false; +} + +bool llvm::dxbc::isValidRangeType(uint32_t V) { + return V <= llvm::to_underlying(dxil::ResourceClass::LastEntry); +} + +#define SHADER_VISIBILITY(Val, Enum) \ + case Val: \ + return true; +bool llvm::dxbc::isValidShaderVisibility(uint32_t V) { + switch (V) { +#include "llvm/BinaryFormat/DXContainerConstants.def" + } + return false; +} + +#define FILTER(Val, Enum) \ + case Val: \ + return true; +bool llvm::dxbc::isValidSamplerFilter(uint32_t V) { + switch (V) { +#include "llvm/BinaryFormat/DXContainerConstants.def" + } + return false; +} + +#define TEXTURE_ADDRESS_MODE(Val, Enum) \ + case Val: \ + return true; +bool llvm::dxbc::isValidAddress(uint32_t V) { + switch (V) { +#include "llvm/BinaryFormat/DXContainerConstants.def" + } + return false; +} + +#define COMPARISON_FUNC(Val, Enum) \ + case Val: \ + return true; +bool llvm::dxbc::isValidComparisonFunc(uint32_t V) { + switch (V) { +#include "llvm/BinaryFormat/DXContainerConstants.def" + } + return false; +} + +#define STATIC_BORDER_COLOR(Val, Enum) \ + case Val: \ + return true; +bool llvm::dxbc::isValidBorderColor(uint32_t V) { + switch (V) { +#include "llvm/BinaryFormat/DXContainerConstants.def" + } + return false; +} + +bool llvm::dxbc::isValidRootDesciptorFlags(uint32_t V) { + using FlagT = dxbc::RootDescriptorFlags; + uint32_t LargestValue = + llvm::to_underlying(FlagT::LLVM_BITMASK_LARGEST_ENUMERATOR); + return V < NextPowerOf2(LargestValue); +} + +bool llvm::dxbc::isValidDescriptorRangeFlags(uint32_t V) { + using FlagT = dxbc::DescriptorRangeFlags; + uint32_t LargestValue = + llvm::to_underlying(FlagT::LLVM_BITMASK_LARGEST_ENUMERATOR); + return V < NextPowerOf2(LargestValue); +} + +bool llvm::dxbc::isValidStaticSamplerFlags(uint32_t V) { + using FlagT = dxbc::StaticSamplerFlags; + uint32_t LargestValue = + llvm::to_underlying(FlagT::LLVM_BITMASK_LARGEST_ENUMERATOR); + return V < NextPowerOf2(LargestValue); +} + dxbc::PartType dxbc::parsePartType(StringRef S) { #define CONTAINER_PART(PartName) .Case(#PartName, PartType::PartName) return StringSwitch<dxbc::PartType>(S) diff --git a/llvm/lib/CAS/CMakeLists.txt b/llvm/lib/CAS/CMakeLists.txt index 7ae5f7e..bca39b6 100644 --- a/llvm/lib/CAS/CMakeLists.txt +++ b/llvm/lib/CAS/CMakeLists.txt @@ -7,6 +7,7 @@ add_llvm_component_library(LLVMCAS MappedFileRegionArena.cpp ObjectStore.cpp OnDiskCommon.cpp + OnDiskDataAllocator.cpp 
OnDiskTrieRawHashMap.cpp ADDITIONAL_HEADER_DIRS diff --git a/llvm/lib/CAS/OnDiskDataAllocator.cpp b/llvm/lib/CAS/OnDiskDataAllocator.cpp new file mode 100644 index 0000000..13bbd66 --- /dev/null +++ b/llvm/lib/CAS/OnDiskDataAllocator.cpp @@ -0,0 +1,234 @@ +//===----------------------------------------------------------------------===// +// +// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions. +// See https://llvm.org/LICENSE.txt for license information. +// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception +// +//===----------------------------------------------------------------------===// +/// +/// \file Implements OnDiskDataAllocator. +/// +//===----------------------------------------------------------------------===// + +#include "llvm/CAS/OnDiskDataAllocator.h" +#include "DatabaseFile.h" +#include "llvm/Config/llvm-config.h" + +using namespace llvm; +using namespace llvm::cas; +using namespace llvm::cas::ondisk; + +#if LLVM_ENABLE_ONDISK_CAS + +//===----------------------------------------------------------------------===// +// DataAllocator data structures. +//===----------------------------------------------------------------------===// + +namespace { +/// DataAllocator table layout: +/// - [8-bytes: Generic table header] +/// - 8-bytes: AllocatorOffset (reserved for implementing free lists) +/// - 8-bytes: Size for user data header +/// - <user data buffer> +/// +/// Record layout: +/// - <data> +class DataAllocatorHandle { +public: + static constexpr TableHandle::TableKind Kind = + TableHandle::TableKind::DataAllocator; + + struct Header { + TableHandle::Header GenericHeader; + std::atomic<int64_t> AllocatorOffset; + const uint64_t UserHeaderSize; + }; + + operator TableHandle() const { + if (!H) + return TableHandle(); + return TableHandle(*Region, H->GenericHeader); + } + + Expected<MutableArrayRef<char>> allocate(MappedFileRegionArena &Alloc, + size_t DataSize) { + assert(&Alloc.getRegion() == Region); + auto Ptr = Alloc.allocate(DataSize); + if (LLVM_UNLIKELY(!Ptr)) + return Ptr.takeError(); + return MutableArrayRef(*Ptr, DataSize); + } + + explicit operator bool() const { return H; } + const Header &getHeader() const { return *H; } + MappedFileRegion &getRegion() const { return *Region; } + + MutableArrayRef<uint8_t> getUserHeader() { + return MutableArrayRef(reinterpret_cast<uint8_t *>(H + 1), + H->UserHeaderSize); + } + + static Expected<DataAllocatorHandle> + create(MappedFileRegionArena &Alloc, StringRef Name, uint32_t UserHeaderSize); + + DataAllocatorHandle() = default; + DataAllocatorHandle(MappedFileRegion &Region, Header &H) + : Region(&Region), H(&H) {} + DataAllocatorHandle(MappedFileRegion &Region, intptr_t HeaderOffset) + : DataAllocatorHandle( + Region, *reinterpret_cast<Header *>(Region.data() + HeaderOffset)) { + } + +private: + MappedFileRegion *Region = nullptr; + Header *H = nullptr; +}; + +} // end anonymous namespace + +struct OnDiskDataAllocator::ImplType { + DatabaseFile File; + DataAllocatorHandle Store; +}; + +Expected<DataAllocatorHandle> +DataAllocatorHandle::create(MappedFileRegionArena &Alloc, StringRef Name, + uint32_t UserHeaderSize) { + // Allocate. + auto Offset = + Alloc.allocateOffset(sizeof(Header) + UserHeaderSize + Name.size() + 1); + if (LLVM_UNLIKELY(!Offset)) + return Offset.takeError(); + + // Construct the header and the name. 
+ assert(Name.size() <= UINT16_MAX && "Expected smaller table name"); + auto *H = new (Alloc.getRegion().data() + *Offset) + Header{{TableHandle::TableKind::DataAllocator, + static_cast<uint16_t>(Name.size()), + static_cast<int32_t>(sizeof(Header) + UserHeaderSize)}, + /*AllocatorOffset=*/{0}, + /*UserHeaderSize=*/UserHeaderSize}; + // Memset UserHeader. + char *UserHeader = reinterpret_cast<char *>(H + 1); + memset(UserHeader, 0, UserHeaderSize); + // Write database file name (null-terminated). + char *NameStorage = UserHeader + UserHeaderSize; + llvm::copy(Name, NameStorage); + NameStorage[Name.size()] = 0; + return DataAllocatorHandle(Alloc.getRegion(), *H); +} + +Expected<OnDiskDataAllocator> OnDiskDataAllocator::create( + const Twine &PathTwine, const Twine &TableNameTwine, uint64_t MaxFileSize, + std::optional<uint64_t> NewFileInitialSize, uint32_t UserHeaderSize, + function_ref<void(void *)> UserHeaderInit) { + assert(!UserHeaderSize || UserHeaderInit); + SmallString<128> PathStorage; + StringRef Path = PathTwine.toStringRef(PathStorage); + SmallString<128> TableNameStorage; + StringRef TableName = TableNameTwine.toStringRef(TableNameStorage); + + // Constructor for if the file doesn't exist. + auto NewDBConstructor = [&](DatabaseFile &DB) -> Error { + auto Store = + DataAllocatorHandle::create(DB.getAlloc(), TableName, UserHeaderSize); + if (LLVM_UNLIKELY(!Store)) + return Store.takeError(); + + if (auto E = DB.addTable(*Store)) + return E; + + if (UserHeaderSize) + UserHeaderInit(Store->getUserHeader().data()); + return Error::success(); + }; + + // Get or create the file. + Expected<DatabaseFile> File = + DatabaseFile::create(Path, MaxFileSize, NewDBConstructor); + if (!File) + return File.takeError(); + + // Find the table and validate it. + std::optional<TableHandle> Table = File->findTable(TableName); + if (!Table) + return createTableConfigError(std::errc::argument_out_of_domain, Path, + TableName, "table not found"); + if (Error E = checkTable("table kind", (size_t)DataAllocatorHandle::Kind, + (size_t)Table->getHeader().Kind, Path, TableName)) + return std::move(E); + auto Store = Table->cast<DataAllocatorHandle>(); + assert(Store && "Already checked the kind"); + + // Success. 
+ OnDiskDataAllocator::ImplType Impl{DatabaseFile(std::move(*File)), Store}; + return OnDiskDataAllocator(std::make_unique<ImplType>(std::move(Impl))); +} + +Expected<OnDiskDataAllocator::OnDiskPtr> +OnDiskDataAllocator::allocate(size_t Size) { + auto Data = Impl->Store.allocate(Impl->File.getAlloc(), Size); + if (LLVM_UNLIKELY(!Data)) + return Data.takeError(); + + return OnDiskPtr(FileOffset(Data->data() - Impl->Store.getRegion().data()), + *Data); +} + +Expected<ArrayRef<char>> OnDiskDataAllocator::get(FileOffset Offset, + size_t Size) const { + assert(Offset); + assert(Impl); + if (Offset.get() + Size >= Impl->File.getAlloc().size()) + return createStringError(make_error_code(std::errc::protocol_error), + "requested size too large in allocator"); + return ArrayRef<char>{Impl->File.getRegion().data() + Offset.get(), Size}; +} + +MutableArrayRef<uint8_t> OnDiskDataAllocator::getUserHeader() { + return Impl->Store.getUserHeader(); +} + +size_t OnDiskDataAllocator::size() const { return Impl->File.size(); } +size_t OnDiskDataAllocator::capacity() const { + return Impl->File.getRegion().size(); +} + +OnDiskDataAllocator::OnDiskDataAllocator(std::unique_ptr<ImplType> Impl) + : Impl(std::move(Impl)) {} + +#else // !LLVM_ENABLE_ONDISK_CAS + +struct OnDiskDataAllocator::ImplType {}; + +Expected<OnDiskDataAllocator> OnDiskDataAllocator::create( + const Twine &Path, const Twine &TableName, uint64_t MaxFileSize, + std::optional<uint64_t> NewFileInitialSize, uint32_t UserHeaderSize, + function_ref<void(void *)> UserHeaderInit) { + return createStringError(make_error_code(std::errc::not_supported), + "OnDiskDataAllocator is not supported"); +} + +Expected<OnDiskDataAllocator::OnDiskPtr> +OnDiskDataAllocator::allocate(size_t Size) { + return createStringError(make_error_code(std::errc::not_supported), + "OnDiskDataAllocator is not supported"); +} + +Expected<ArrayRef<char>> OnDiskDataAllocator::get(FileOffset Offset, + size_t Size) const { + return createStringError(make_error_code(std::errc::not_supported), + "OnDiskDataAllocator is not supported"); +} + +MutableArrayRef<uint8_t> OnDiskDataAllocator::getUserHeader() { return {}; } + +size_t OnDiskDataAllocator::size() const { return 0; } +size_t OnDiskDataAllocator::capacity() const { return 0; } + +#endif // LLVM_ENABLE_ONDISK_CAS + +OnDiskDataAllocator::OnDiskDataAllocator(OnDiskDataAllocator &&RHS) = default; +OnDiskDataAllocator & +OnDiskDataAllocator::operator=(OnDiskDataAllocator &&RHS) = default; +OnDiskDataAllocator::~OnDiskDataAllocator() = default; diff --git a/llvm/lib/CAS/OnDiskTrieRawHashMap.cpp b/llvm/lib/CAS/OnDiskTrieRawHashMap.cpp index 9403893..323b21e 100644 --- a/llvm/lib/CAS/OnDiskTrieRawHashMap.cpp +++ b/llvm/lib/CAS/OnDiskTrieRawHashMap.cpp @@ -427,7 +427,7 @@ TrieRawHashMapHandle::createRecord(MappedFileRegionArena &Alloc, return Record; } -Expected<OnDiskTrieRawHashMap::const_pointer> +Expected<OnDiskTrieRawHashMap::ConstOnDiskPtr> OnDiskTrieRawHashMap::recoverFromFileOffset(FileOffset Offset) const { // Check alignment. if (!isAligned(MappedFileRegionArena::getAlign(), Offset.get())) @@ -448,17 +448,17 @@ OnDiskTrieRawHashMap::recoverFromFileOffset(FileOffset Offset) const { // Looks okay... 
TrieRawHashMapHandle::RecordData D = Impl->Trie.getRecord(SubtrieSlotValue::getDataOffset(Offset)); - return const_pointer(D.Proxy, D.getFileOffset()); + return ConstOnDiskPtr(D.Proxy, D.getFileOffset()); } -OnDiskTrieRawHashMap::const_pointer +OnDiskTrieRawHashMap::ConstOnDiskPtr OnDiskTrieRawHashMap::find(ArrayRef<uint8_t> Hash) const { TrieRawHashMapHandle Trie = Impl->Trie; assert(Hash.size() == Trie.getNumHashBytes() && "Invalid hash"); SubtrieHandle S = Trie.getRoot(); if (!S) - return const_pointer(); + return ConstOnDiskPtr(); TrieHashIndexGenerator IndexGen = Trie.getIndexGen(S, Hash); size_t Index = IndexGen.next(); @@ -466,13 +466,13 @@ OnDiskTrieRawHashMap::find(ArrayRef<uint8_t> Hash) const { // Try to set the content. SubtrieSlotValue V = S.load(Index); if (!V) - return const_pointer(); + return ConstOnDiskPtr(); // Check for an exact match. if (V.isData()) { TrieRawHashMapHandle::RecordData D = Trie.getRecord(V); - return D.Proxy.Hash == Hash ? const_pointer(D.Proxy, D.getFileOffset()) - : const_pointer(); + return D.Proxy.Hash == Hash ? ConstOnDiskPtr(D.Proxy, D.getFileOffset()) + : ConstOnDiskPtr(); } Index = IndexGen.next(); @@ -490,7 +490,7 @@ void SubtrieHandle::reinitialize(uint32_t StartBit, uint32_t NumBits) { H->NumBits = NumBits; } -Expected<OnDiskTrieRawHashMap::pointer> +Expected<OnDiskTrieRawHashMap::OnDiskPtr> OnDiskTrieRawHashMap::insertLazy(ArrayRef<uint8_t> Hash, LazyInsertOnConstructCB OnConstruct, LazyInsertOnLeakCB OnLeak) { @@ -523,7 +523,8 @@ OnDiskTrieRawHashMap::insertLazy(ArrayRef<uint8_t> Hash, } if (S->compare_exchange_strong(Index, Existing, NewRecord->Offset)) - return pointer(NewRecord->Proxy, NewRecord->Offset.asDataFileOffset()); + return OnDiskPtr(NewRecord->Proxy, + NewRecord->Offset.asDataFileOffset()); // Race means that Existing is no longer empty; fall through... } @@ -540,8 +541,8 @@ OnDiskTrieRawHashMap::insertLazy(ArrayRef<uint8_t> Hash, if (NewRecord && OnLeak) OnLeak(NewRecord->Offset.asDataFileOffset(), NewRecord->Proxy, ExistingRecord.Offset.asDataFileOffset(), ExistingRecord.Proxy); - return pointer(ExistingRecord.Proxy, - ExistingRecord.Offset.asDataFileOffset()); + return OnDiskPtr(ExistingRecord.Proxy, + ExistingRecord.Offset.asDataFileOffset()); } // Sink the existing content as long as the indexes match. 
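The pointer renames in these hunks (pointer to OnDiskPtr, const_pointer to ConstOnDiskPtr) are mechanical, but call sites read differently under the new names. A minimal sketch, assuming the wrappers keep their boolean "found" conversion and that the public header lives at llvm/CAS/OnDiskTrieRawHashMap.h (both assumptions from this series, not confirmed by the hunks above):

  #include "llvm/CAS/OnDiskTrieRawHashMap.h"
  using namespace llvm;
  using namespace llvm::cas;

  // find() now returns ConstOnDiskPtr; a default-constructed value still
  // signals a miss, as in the hunks above.
  bool contains(const OnDiskTrieRawHashMap &Map, ArrayRef<uint8_t> Hash) {
    OnDiskTrieRawHashMap::ConstOnDiskPtr P = Map.find(Hash);
    return static_cast<bool>(P);
  }

  // FileOffsets recorded from earlier OnDiskPtr results can be mapped back
  // to records; recoverFromFileOffset() validates alignment and bounds.
  Expected<OnDiskTrieRawHashMap::ConstOnDiskPtr>
  reload(const OnDiskTrieRawHashMap &Map, FileOffset Off) {
    return Map.recoverFromFileOffset(Off);
  }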
@@ -1135,7 +1136,7 @@ OnDiskTrieRawHashMap::create(const Twine &PathTwine, const Twine &TrieNameTwine, "OnDiskTrieRawHashMap is not supported"); } -Expected<OnDiskTrieRawHashMap::pointer> +Expected<OnDiskTrieRawHashMap::OnDiskPtr> OnDiskTrieRawHashMap::insertLazy(ArrayRef<uint8_t> Hash, LazyInsertOnConstructCB OnConstruct, LazyInsertOnLeakCB OnLeak) { @@ -1143,15 +1144,15 @@ OnDiskTrieRawHashMap::insertLazy(ArrayRef<uint8_t> Hash, "OnDiskTrieRawHashMap is not supported"); } -Expected<OnDiskTrieRawHashMap::const_pointer> +Expected<OnDiskTrieRawHashMap::ConstOnDiskPtr> OnDiskTrieRawHashMap::recoverFromFileOffset(FileOffset Offset) const { return createStringError(make_error_code(std::errc::not_supported), "OnDiskTrieRawHashMap is not supported"); } -OnDiskTrieRawHashMap::const_pointer +OnDiskTrieRawHashMap::ConstOnDiskPtr OnDiskTrieRawHashMap::find(ArrayRef<uint8_t> Hash) const { - return const_pointer(); + return ConstOnDiskPtr(); } void OnDiskTrieRawHashMap::print( diff --git a/llvm/lib/CodeGen/AsmPrinter/DebugHandlerBase.cpp b/llvm/lib/CodeGen/AsmPrinter/DebugHandlerBase.cpp index d98d180..dc38f5a 100644 --- a/llvm/lib/CodeGen/AsmPrinter/DebugHandlerBase.cpp +++ b/llvm/lib/CodeGen/AsmPrinter/DebugHandlerBase.cpp @@ -240,6 +240,8 @@ bool DebugHandlerBase::isUnsignedDIType(const DIType *Ty) { Encoding == dwarf::DW_ATE_complex_float || Encoding == dwarf::DW_ATE_signed_fixed || Encoding == dwarf::DW_ATE_unsigned_fixed || + (Encoding >= dwarf::DW_ATE_lo_user && + Encoding <= dwarf::DW_ATE_hi_user) || (Ty->getTag() == dwarf::DW_TAG_unspecified_type && Ty->getName() == "decltype(nullptr)")) && "Unsupported encoding"); diff --git a/llvm/lib/ExecutionEngine/Orc/CMakeLists.txt b/llvm/lib/ExecutionEngine/Orc/CMakeLists.txt index f159d59..0ffe3ae 100644 --- a/llvm/lib/ExecutionEngine/Orc/CMakeLists.txt +++ b/llvm/lib/ExecutionEngine/Orc/CMakeLists.txt @@ -24,6 +24,7 @@ add_llvm_component_library(LLVMOrcJIT EPCGenericRTDyldMemoryManager.cpp EPCIndirectionUtils.cpp ExecutionUtils.cpp + ExecutorResolutionGenerator.cpp ObjectFileInterface.cpp GetDylibInterface.cpp IndirectionUtils.cpp diff --git a/llvm/lib/ExecutionEngine/Orc/EPCDebugObjectRegistrar.cpp b/llvm/lib/ExecutionEngine/Orc/EPCDebugObjectRegistrar.cpp index 9f7d517..08bef37 100644 --- a/llvm/lib/ExecutionEngine/Orc/EPCDebugObjectRegistrar.cpp +++ b/llvm/lib/ExecutionEngine/Orc/EPCDebugObjectRegistrar.cpp @@ -42,7 +42,12 @@ Expected<std::unique_ptr<EPCDebugObjectRegistrar>> createJITLoaderGDBRegistrar( assert((*Result)[0].size() == 1 && "Unexpected number of addresses in result"); - ExecutorAddr RegisterAddr = (*Result)[0][0].getAddress(); + if (!(*Result)[0][0].has_value()) + return make_error<StringError>( + "Expected a valid address in the lookup result", + inconvertibleErrorCode()); + + ExecutorAddr RegisterAddr = (*Result)[0][0]->getAddress(); return std::make_unique<EPCDebugObjectRegistrar>(ES, RegisterAddr); } diff --git a/llvm/lib/ExecutionEngine/Orc/EPCDynamicLibrarySearchGenerator.cpp b/llvm/lib/ExecutionEngine/Orc/EPCDynamicLibrarySearchGenerator.cpp index 59d66b2..1e83c07 100644 --- a/llvm/lib/ExecutionEngine/Orc/EPCDynamicLibrarySearchGenerator.cpp +++ b/llvm/lib/ExecutionEngine/Orc/EPCDynamicLibrarySearchGenerator.cpp @@ -79,12 +79,16 @@ Error EPCDynamicLibrarySearchGenerator::tryToGenerate( assert(Result->front().size() == LookupSymbols.size() && "Result has incorrect number of elements"); + auto SymsIt = Result->front().begin(); + SymbolNameSet MissingSymbols; SymbolMap NewSymbols; - auto ResultI = Result->front().begin(); - for 
(auto &KV : LookupSymbols) {
-    if (ResultI->getAddress())
-      NewSymbols[KV.first] = *ResultI;
-    ++ResultI;
+  for (auto &[Name, Flags] : LookupSymbols) {
+    const auto &Sym = *SymsIt++;
+    if (Sym && Sym->getAddress())
+      NewSymbols[Name] = *Sym;
+    else if (LLVM_UNLIKELY(!Sym &&
+                           Flags == SymbolLookupFlags::RequiredSymbol))
+      MissingSymbols.insert(Name);
   }
 
   LLVM_DEBUG({
@@ -96,6 +100,10 @@ Error EPCDynamicLibrarySearchGenerator::tryToGenerate(
   if (NewSymbols.empty())
     return LS.continueLookup(Error::success());
 
+  if (LLVM_UNLIKELY(!MissingSymbols.empty()))
+    return LS.continueLookup(make_error<SymbolsNotFound>(
+        this->EPC.getSymbolStringPool(), std::move(MissingSymbols)));
+
   // Define resolved symbols.
   Error Err = addAbsolutes(JD, std::move(NewSymbols));
 
diff --git a/llvm/lib/ExecutionEngine/Orc/EPCGenericDylibManager.cpp b/llvm/lib/ExecutionEngine/Orc/EPCGenericDylibManager.cpp
index f98b18c..1f19d17 100644
--- a/llvm/lib/ExecutionEngine/Orc/EPCGenericDylibManager.cpp
+++ b/llvm/lib/ExecutionEngine/Orc/EPCGenericDylibManager.cpp
@@ -66,7 +66,7 @@ EPCGenericDylibManager::CreateWithDefaultBootstrapSymbols(
   if (auto Err = EPC.getBootstrapSymbols(
           {{SAs.Instance, rt::SimpleExecutorDylibManagerInstanceName},
            {SAs.Open, rt::SimpleExecutorDylibManagerOpenWrapperName},
-           {SAs.Lookup, rt::SimpleExecutorDylibManagerLookupWrapperName}}))
+           {SAs.Resolve, rt::SimpleExecutorDylibManagerResolveWrapperName}}))
     return std::move(Err);
   return EPCGenericDylibManager(EPC, std::move(SAs));
 }
@@ -84,11 +84,12 @@ Expected<tpctypes::DylibHandle> EPCGenericDylibManager::open(StringRef Path,
 void EPCGenericDylibManager::lookupAsync(tpctypes::DylibHandle H,
                                          const SymbolLookupSet &Lookup,
                                          SymbolLookupCompleteFn Complete) {
-  EPC.callSPSWrapperAsync<rt::SPSSimpleExecutorDylibManagerLookupSignature>(
-      SAs.Lookup,
+  EPC.callSPSWrapperAsync<rt::SPSSimpleExecutorDylibManagerResolveSignature>(
+      SAs.Resolve,
       [Complete = std::move(Complete)](
           Error SerializationErr,
-          Expected<std::vector<ExecutorSymbolDef>> Result) mutable {
+          Expected<std::vector<std::optional<ExecutorSymbolDef>>>
+              Result) mutable {
         if (SerializationErr) {
           cantFail(Result.takeError());
           Complete(std::move(SerializationErr));
@@ -96,17 +97,18 @@ void EPCGenericDylibManager::lookupAsync(tpctypes::DylibHandle H,
         }
         Complete(std::move(Result));
       },
-      SAs.Instance, H, Lookup);
+      H, Lookup);
 }
 
 void EPCGenericDylibManager::lookupAsync(tpctypes::DylibHandle H,
                                          const RemoteSymbolLookupSet &Lookup,
                                          SymbolLookupCompleteFn Complete) {
-  EPC.callSPSWrapperAsync<rt::SPSSimpleExecutorDylibManagerLookupSignature>(
-      SAs.Lookup,
+  EPC.callSPSWrapperAsync<rt::SPSSimpleExecutorDylibManagerResolveSignature>(
+      SAs.Resolve,
       [Complete = std::move(Complete)](
           Error SerializationErr,
-          Expected<std::vector<ExecutorSymbolDef>> Result) mutable {
+          Expected<std::vector<std::optional<ExecutorSymbolDef>>>
+              Result) mutable {
         if (SerializationErr) {
           cantFail(Result.takeError());
           Complete(std::move(SerializationErr));
@@ -114,7 +116,7 @@ void EPCGenericDylibManager::lookupAsync(tpctypes::DylibHandle H,
         }
         Complete(std::move(Result));
       },
-      SAs.Instance, H, Lookup);
+      H, Lookup);
 }
 
 } // end namespace orc
diff --git a/llvm/lib/ExecutionEngine/Orc/ExecutorResolutionGenerator.cpp b/llvm/lib/ExecutionEngine/Orc/ExecutorResolutionGenerator.cpp
new file mode 100644
index 0000000..e5b0bd3
--- /dev/null
+++ b/llvm/lib/ExecutionEngine/Orc/ExecutorResolutionGenerator.cpp
@@ -0,0 +1,96 @@
+//===- ExecutorResolutionGenerator.cpp - Executor-side symbol lookup -----===//
+//
+// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
+// See https://llvm.org/LICENSE.txt for license information.
+// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
+//
+//===----------------------------------------------------------------------===//
+
+#include "llvm/ExecutionEngine/Orc/ExecutorResolutionGenerator.h"
+
+#include "llvm/ExecutionEngine/Orc/DebugUtils.h"
+#include "llvm/ExecutionEngine/Orc/Shared/ExecutorSymbolDef.h"
+#include "llvm/Support/Error.h"
+
+#define DEBUG_TYPE "orc"
+
+namespace llvm {
+namespace orc {
+
+Expected<std::unique_ptr<ExecutorResolutionGenerator>>
+ExecutorResolutionGenerator::Load(ExecutionSession &ES, const char *LibraryPath,
+                                  SymbolPredicate Allow,
+                                  AbsoluteSymbolsFn AbsoluteSymbols) {
+  auto H = ES.getExecutorProcessControl().getDylibMgr().loadDylib(LibraryPath);
+  if (!H)
+    return H.takeError();
+  return std::make_unique<ExecutorResolutionGenerator>(
+      ES, *H, std::move(Allow), std::move(AbsoluteSymbols));
+}
+
+Error ExecutorResolutionGenerator::tryToGenerate(
+    LookupState &LS, LookupKind K, JITDylib &JD,
+    JITDylibLookupFlags JDLookupFlags, const SymbolLookupSet &LookupSet) {
+
+  if (LookupSet.empty())
+    return Error::success();
+
+  LLVM_DEBUG({
+    dbgs() << "ExecutorResolutionGenerator trying to generate " << LookupSet
+           << "\n";
+  });
+
+  SymbolLookupSet LookupSymbols;
+  for (auto &[Name, LookupFlag] : LookupSet) {
+    if (Allow && !Allow(Name))
+      continue;
+    LookupSymbols.add(Name, LookupFlag);
+  }
+
+  DylibManager::LookupRequest LR(H, LookupSymbols);
+  EPC.getDylibMgr().lookupSymbolsAsync(
+      LR, [this, LS = std::move(LS), JD = JITDylibSP(&JD),
+           LookupSymbols](auto Result) mutable {
+        if (!Result) {
+          LLVM_DEBUG({
+            dbgs() << "ExecutorResolutionGenerator lookup failed due to error\n";
+          });
+          return LS.continueLookup(Result.takeError());
+        }
+        assert(Result->size() == 1 &&
+               "Results for more than one library returned");
+        assert(Result->front().size() == LookupSymbols.size() &&
+               "Result has incorrect number of elements");
+
+        auto Syms = Result->front().begin();
+        SymbolNameSet MissingSymbols;
+        SymbolMap NewSyms;
+        for (auto &[Name, Flags] : LookupSymbols) {
+          const auto &Sym = *Syms++;
+          if (Sym && Sym->getAddress())
+            NewSyms[Name] = *Sym;
+          else if (LLVM_UNLIKELY(!Sym &&
+                                 Flags == SymbolLookupFlags::RequiredSymbol))
+            MissingSymbols.insert(Name);
+        }
+
+        LLVM_DEBUG({
+          dbgs() << "ExecutorResolutionGenerator lookup returned " << NewSyms
+                 << "\n";
+        });
+
+        if (NewSyms.empty())
+          return LS.continueLookup(Error::success());
+
+        if (LLVM_UNLIKELY(!MissingSymbols.empty()))
+          return LS.continueLookup(make_error<SymbolsNotFound>(
+              this->EPC.getSymbolStringPool(), std::move(MissingSymbols)));
+
+        LS.continueLookup(JD->define(AbsoluteSymbols(std::move(NewSyms))));
+      });
+
+  return Error::success();
+}
+
+} // end namespace orc
+} // end namespace llvm
diff --git a/llvm/lib/ExecutionEngine/Orc/LookupAndRecordAddrs.cpp b/llvm/lib/ExecutionEngine/Orc/LookupAndRecordAddrs.cpp
index 78169a2..42d630d 100644
--- a/llvm/lib/ExecutionEngine/Orc/LookupAndRecordAddrs.cpp
+++ b/llvm/lib/ExecutionEngine/Orc/LookupAndRecordAddrs.cpp
@@ -72,9 +72,10 @@ Error lookupAndRecordAddrs(
     return make_error<StringError>("Error in lookup result elements",
                                    inconvertibleErrorCode());
 
-  for (unsigned I = 0; I != Pairs.size(); ++I)
-    *Pairs[I].second = Result->front()[I].getAddress();
-
+  for (unsigned I = 0; I != Pairs.size(); ++I) {
+    if (Result->front()[I])
+      *Pairs[I].second =
Result->front()[I]->getAddress(); + } return Error::success(); } diff --git a/llvm/lib/ExecutionEngine/Orc/SelfExecutorProcessControl.cpp b/llvm/lib/ExecutionEngine/Orc/SelfExecutorProcessControl.cpp index 78045f1..f8a2bd3 100644 --- a/llvm/lib/ExecutionEngine/Orc/SelfExecutorProcessControl.cpp +++ b/llvm/lib/ExecutionEngine/Orc/SelfExecutorProcessControl.cpp @@ -91,22 +91,18 @@ void SelfExecutorProcessControl::lookupSymbolsAsync( for (auto &Elem : Request) { sys::DynamicLibrary Dylib(Elem.Handle.toPtr<void *>()); - R.push_back(std::vector<ExecutorSymbolDef>()); + R.push_back(tpctypes::LookupResult()); for (auto &KV : Elem.Symbols) { auto &Sym = KV.first; std::string Tmp((*Sym).data() + !!GlobalManglingPrefix, (*Sym).size() - !!GlobalManglingPrefix); void *Addr = Dylib.getAddressOfSymbol(Tmp.c_str()); - if (!Addr && KV.second == SymbolLookupFlags::RequiredSymbol) { - // FIXME: Collect all failing symbols before erroring out. - SymbolNameVector MissingSymbols; - MissingSymbols.push_back(Sym); - return Complete( - make_error<SymbolsNotFound>(SSP, std::move(MissingSymbols))); - } - // FIXME: determine accurate JITSymbolFlags. - R.back().push_back( - {ExecutorAddr::fromPtr(Addr), JITSymbolFlags::Exported}); + if (!Addr && KV.second == SymbolLookupFlags::RequiredSymbol) + R.back().emplace_back(); + else + // FIXME: determine accurate JITSymbolFlags. + R.back().emplace_back(ExecutorSymbolDef(ExecutorAddr::fromPtr(Addr), + JITSymbolFlags::Exported)); } } diff --git a/llvm/lib/ExecutionEngine/Orc/Shared/OrcRTBridge.cpp b/llvm/lib/ExecutionEngine/Orc/Shared/OrcRTBridge.cpp index 123651f..26e8f53 100644 --- a/llvm/lib/ExecutionEngine/Orc/Shared/OrcRTBridge.cpp +++ b/llvm/lib/ExecutionEngine/Orc/Shared/OrcRTBridge.cpp @@ -16,8 +16,8 @@ const char *SimpleExecutorDylibManagerInstanceName = "__llvm_orc_SimpleExecutorDylibManager_Instance"; const char *SimpleExecutorDylibManagerOpenWrapperName = "__llvm_orc_SimpleExecutorDylibManager_open_wrapper"; -const char *SimpleExecutorDylibManagerLookupWrapperName = - "__llvm_orc_SimpleExecutorDylibManager_lookup_wrapper"; +const char *SimpleExecutorDylibManagerResolveWrapperName = + "__llvm_orc_SimpleExecutorDylibManager_resolve_wrapper"; const char *SimpleExecutorMemoryManagerInstanceName = "__llvm_orc_SimpleExecutorMemoryManager_Instance"; diff --git a/llvm/lib/ExecutionEngine/Orc/TargetProcess/CMakeLists.txt b/llvm/lib/ExecutionEngine/Orc/TargetProcess/CMakeLists.txt index 9f3abac..9275586 100644 --- a/llvm/lib/ExecutionEngine/Orc/TargetProcess/CMakeLists.txt +++ b/llvm/lib/ExecutionEngine/Orc/TargetProcess/CMakeLists.txt @@ -15,6 +15,7 @@ endif() add_llvm_component_library(LLVMOrcTargetProcess ExecutorSharedMemoryMapperService.cpp DefaultHostBootstrapValues.cpp + ExecutorResolver.cpp JITLoaderGDB.cpp JITLoaderPerf.cpp JITLoaderVTune.cpp diff --git a/llvm/lib/ExecutionEngine/Orc/TargetProcess/ExecutorResolver.cpp b/llvm/lib/ExecutionEngine/Orc/TargetProcess/ExecutorResolver.cpp new file mode 100644 index 0000000..6054d86 --- /dev/null +++ b/llvm/lib/ExecutionEngine/Orc/TargetProcess/ExecutorResolver.cpp @@ -0,0 +1,47 @@ + +#include "llvm/ExecutionEngine/Orc/TargetProcess/ExecutorResolver.h" + +#include "llvm/Support/DynamicLibrary.h" +#include "llvm/Support/Error.h" + +namespace llvm::orc { + +void DylibSymbolResolver::resolveAsync( + const RemoteSymbolLookupSet &L, + ExecutorResolver::YieldResolveResultFn &&OnResolve) { + std::vector<std::optional<ExecutorSymbolDef>> Result; + auto DL = sys::DynamicLibrary(Handle.toPtr<void *>()); + + for (const auto &E : L) 
{
+    if (E.Name.empty()) {
+      if (E.Required)
+        return OnResolve(
+            make_error<StringError>("Required address for empty symbol \"\"",
+                                    inconvertibleErrorCode()));
+      else
+        Result.emplace_back();
+    } else {
+
+      const char *DemangledSymName = E.Name.c_str();
+#ifdef __APPLE__
+      if (E.Name.front() != '_')
+        return OnResolve(make_error<StringError>(
+            Twine("MachO symbol \"") + E.Name + "\" missing leading '_'",
+            inconvertibleErrorCode()));
+      ++DemangledSymName;
+#endif
+
+      void *Addr = DL.getAddressOfSymbol(DemangledSymName);
+      if (!Addr && E.Required)
+        Result.emplace_back();
+      else
+        // FIXME: determine accurate JITSymbolFlags.
+        Result.emplace_back(ExecutorSymbolDef(ExecutorAddr::fromPtr(Addr),
+                                              JITSymbolFlags::Exported));
+    }
+  }
+
+  OnResolve(std::move(Result));
+}
+
+} // end namespace llvm::orc
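Returning one std::optional<ExecutorSymbolDef> per queried symbol, rather than failing on the first miss as the removed SimpleExecutorDylibManager::lookup below did, is what lets controller-side code such as EPCDynamicLibrarySearchGenerator and ExecutorResolutionGenerator report every missing required symbol at once. A consumer-side sketch of just that convention (the collectMissing helper and the parallel name list are illustrative, not part of the patch):

  #include "llvm/ADT/ArrayRef.h"
  #include "llvm/ExecutionEngine/Orc/Shared/ExecutorSymbolDef.h"
  #include "llvm/Support/Error.h"
  #include <cassert>
  #include <optional>
  #include <string>
  using namespace llvm;
  using namespace llvm::orc;

  // Fold a ResolveResult-style vector: std::nullopt marks a symbol the
  // executor could not find; report all misses in a single error.
  Error collectMissing(ArrayRef<std::optional<ExecutorSymbolDef>> Results,
                       ArrayRef<std::string> Names) {
    assert(Results.size() == Names.size() && "one result per queried name");
    std::string Missing;
    for (size_t I = 0; I != Results.size(); ++I)
      if (!Results[I])
        Missing += " " + Names[I];
    if (Missing.empty())
      return Error::success();
    return make_error<StringError>("missing symbols:" + Missing,
                                   inconvertibleErrorCode());
  }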
\ No newline at end of file diff --git a/llvm/lib/ExecutionEngine/Orc/TargetProcess/SimpleExecutorDylibManager.cpp b/llvm/lib/ExecutionEngine/Orc/TargetProcess/SimpleExecutorDylibManager.cpp index db6f201..52bb55d 100644 --- a/llvm/lib/ExecutionEngine/Orc/TargetProcess/SimpleExecutorDylibManager.cpp +++ b/llvm/lib/ExecutionEngine/Orc/TargetProcess/SimpleExecutorDylibManager.cpp @@ -10,6 +10,10 @@ #include "llvm/ExecutionEngine/Orc/Shared/OrcRTBridge.h" +#include "llvm/Support/MSVCErrorWorkarounds.h" + +#include <future> + #define DEBUG_TYPE "orc" namespace llvm { @@ -35,46 +39,9 @@ SimpleExecutorDylibManager::open(const std::string &Path, uint64_t Mode) { std::lock_guard<std::mutex> Lock(M); auto H = ExecutorAddr::fromPtr(DL.getOSSpecificHandle()); + Resolvers.push_back(std::make_unique<DylibSymbolResolver>(H)); Dylibs.insert(DL.getOSSpecificHandle()); - return H; -} - -Expected<std::vector<ExecutorSymbolDef>> -SimpleExecutorDylibManager::lookup(tpctypes::DylibHandle H, - const RemoteSymbolLookupSet &L) { - std::vector<ExecutorSymbolDef> Result; - auto DL = sys::DynamicLibrary(H.toPtr<void *>()); - - for (const auto &E : L) { - if (E.Name.empty()) { - if (E.Required) - return make_error<StringError>("Required address for empty symbol \"\"", - inconvertibleErrorCode()); - else - Result.push_back(ExecutorSymbolDef()); - } else { - - const char *DemangledSymName = E.Name.c_str(); -#ifdef __APPLE__ - if (E.Name.front() != '_') - return make_error<StringError>(Twine("MachO symbol \"") + E.Name + - "\" missing leading '_'", - inconvertibleErrorCode()); - ++DemangledSymName; -#endif - - void *Addr = DL.getAddressOfSymbol(DemangledSymName); - if (!Addr && E.Required) - return make_error<StringError>(Twine("Missing definition for ") + - DemangledSymName, - inconvertibleErrorCode()); - - // FIXME: determine accurate JITSymbolFlags. 
- Result.push_back({ExecutorAddr::fromPtr(Addr), JITSymbolFlags::Exported}); - } - } - - return Result; + return ExecutorAddr::fromPtr(Resolvers.back().get()); } Error SimpleExecutorDylibManager::shutdown() { @@ -94,8 +61,8 @@ void SimpleExecutorDylibManager::addBootstrapSymbols( M[rt::SimpleExecutorDylibManagerInstanceName] = ExecutorAddr::fromPtr(this); M[rt::SimpleExecutorDylibManagerOpenWrapperName] = ExecutorAddr::fromPtr(&openWrapper); - M[rt::SimpleExecutorDylibManagerLookupWrapperName] = - ExecutorAddr::fromPtr(&lookupWrapper); + M[rt::SimpleExecutorDylibManagerResolveWrapperName] = + ExecutorAddr::fromPtr(&resolveWrapper); } llvm::orc::shared::CWrapperFunctionResult @@ -109,12 +76,22 @@ SimpleExecutorDylibManager::openWrapper(const char *ArgData, size_t ArgSize) { } llvm::orc::shared::CWrapperFunctionResult -SimpleExecutorDylibManager::lookupWrapper(const char *ArgData, size_t ArgSize) { - return shared:: - WrapperFunction<rt::SPSSimpleExecutorDylibManagerLookupSignature>::handle( - ArgData, ArgSize, - shared::makeMethodWrapperHandler( - &SimpleExecutorDylibManager::lookup)) +SimpleExecutorDylibManager::resolveWrapper(const char *ArgData, + size_t ArgSize) { + using ResolveResult = ExecutorResolver::ResolveResult; + return shared::WrapperFunction< + rt::SPSSimpleExecutorDylibManagerResolveSignature>:: + handle(ArgData, ArgSize, + [](ExecutorAddr Obj, RemoteSymbolLookupSet L) -> ResolveResult { + using TmpResult = + MSVCPExpected<std::vector<std::optional<ExecutorSymbolDef>>>; + std::promise<TmpResult> P; + auto F = P.get_future(); + Obj.toPtr<ExecutorResolver *>()->resolveAsync( + std::move(L), + [&](TmpResult R) { P.set_value(std::move(R)); }); + return F.get(); + }) .release(); } diff --git a/llvm/lib/Frontend/HLSL/RootSignatureMetadata.cpp b/llvm/lib/Frontend/HLSL/RootSignatureMetadata.cpp index 7a0cf40..707f0c3 100644 --- a/llvm/lib/Frontend/HLSL/RootSignatureMetadata.cpp +++ b/llvm/lib/Frontend/HLSL/RootSignatureMetadata.cpp @@ -651,8 +651,11 @@ Error MetadataParser::validateRootSignature( "RegisterSpace", Descriptor.RegisterSpace)); if (RSD.Version > 1) { - if (!hlsl::rootsig::verifyRootDescriptorFlag(RSD.Version, - Descriptor.Flags)) + bool IsValidFlag = + dxbc::isValidRootDesciptorFlags(Descriptor.Flags) && + hlsl::rootsig::verifyRootDescriptorFlag( + RSD.Version, dxbc::RootDescriptorFlags(Descriptor.Flags)); + if (!IsValidFlag) DeferredErrs = joinErrors(std::move(DeferredErrs), make_error<RootSignatureValidationError<uint32_t>>( @@ -676,9 +679,11 @@ Error MetadataParser::validateRootSignature( make_error<RootSignatureValidationError<uint32_t>>( "NumDescriptors", Range.NumDescriptors)); - if (!hlsl::rootsig::verifyDescriptorRangeFlag( - RSD.Version, Range.RangeType, - dxbc::DescriptorRangeFlags(Range.Flags))) + bool IsValidFlag = dxbc::isValidDescriptorRangeFlags(Range.Flags) && + hlsl::rootsig::verifyDescriptorRangeFlag( + RSD.Version, Range.RangeType, + dxbc::DescriptorRangeFlags(Range.Flags)); + if (!IsValidFlag) DeferredErrs = joinErrors(std::move(DeferredErrs), make_error<RootSignatureValidationError<uint32_t>>( @@ -731,8 +736,11 @@ Error MetadataParser::validateRootSignature( joinErrors(std::move(DeferredErrs), make_error<RootSignatureValidationError<uint32_t>>( "RegisterSpace", Sampler.RegisterSpace)); - - if (!hlsl::rootsig::verifyStaticSamplerFlags(RSD.Version, Sampler.Flags)) + bool IsValidFlag = + dxbc::isValidStaticSamplerFlags(Sampler.Flags) && + hlsl::rootsig::verifyStaticSamplerFlags( + RSD.Version, dxbc::StaticSamplerFlags(Sampler.Flags)); + if 
(!IsValidFlag) DeferredErrs = joinErrors(std::move(DeferredErrs), make_error<RootSignatureValidationError<uint32_t>>( diff --git a/llvm/lib/Frontend/HLSL/RootSignatureValidations.cpp b/llvm/lib/Frontend/HLSL/RootSignatureValidations.cpp index 8a2b03d..30408df 100644 --- a/llvm/lib/Frontend/HLSL/RootSignatureValidations.cpp +++ b/llvm/lib/Frontend/HLSL/RootSignatureValidations.cpp @@ -34,7 +34,8 @@ bool verifyRegisterSpace(uint32_t RegisterSpace) { return !(RegisterSpace >= 0xFFFFFFF0); } -bool verifyRootDescriptorFlag(uint32_t Version, uint32_t FlagsVal) { +bool verifyRootDescriptorFlag(uint32_t Version, + dxbc::RootDescriptorFlags FlagsVal) { using FlagT = dxbc::RootDescriptorFlags; FlagT Flags = FlagT(FlagsVal); if (Version == 1) @@ -56,7 +57,6 @@ bool verifyRootDescriptorFlag(uint32_t Version, uint32_t FlagsVal) { bool verifyDescriptorRangeFlag(uint32_t Version, dxil::ResourceClass Type, dxbc::DescriptorRangeFlags Flags) { using FlagT = dxbc::DescriptorRangeFlags; - const bool IsSampler = (Type == dxil::ResourceClass::Sampler); if (Version == 1) { @@ -113,13 +113,8 @@ bool verifyDescriptorRangeFlag(uint32_t Version, dxil::ResourceClass Type, return (Flags & ~Mask) == FlagT::None; } -bool verifyStaticSamplerFlags(uint32_t Version, uint32_t FlagsNumber) { - uint32_t LargestValue = llvm::to_underlying( - dxbc::StaticSamplerFlags::LLVM_BITMASK_LARGEST_ENUMERATOR); - if (FlagsNumber >= NextPowerOf2(LargestValue)) - return false; - - dxbc::StaticSamplerFlags Flags = dxbc::StaticSamplerFlags(FlagsNumber); +bool verifyStaticSamplerFlags(uint32_t Version, + dxbc::StaticSamplerFlags Flags) { if (Version <= 2) return Flags == dxbc::StaticSamplerFlags::None; diff --git a/llvm/lib/IR/Globals.cpp b/llvm/lib/IR/Globals.cpp index 1a7a5c5..c3a472b 100644 --- a/llvm/lib/IR/Globals.cpp +++ b/llvm/lib/IR/Globals.cpp @@ -419,6 +419,7 @@ findBaseObject(const Constant *C, DenseSet<const GlobalAlias *> &Aliases, case Instruction::PtrToAddr: case Instruction::PtrToInt: case Instruction::BitCast: + case Instruction::AddrSpaceCast: case Instruction::GetElementPtr: return findBaseObject(CE->getOperand(0), Aliases, Op); default: diff --git a/llvm/lib/IR/Mangler.cpp b/llvm/lib/IR/Mangler.cpp index ca6a480..55c825d 100644 --- a/llvm/lib/IR/Mangler.cpp +++ b/llvm/lib/IR/Mangler.cpp @@ -307,6 +307,19 @@ std::optional<std::string> llvm::getArm64ECMangledFunctionName(StringRef Name) { if (Name.contains("$$h")) return std::nullopt; + // Handle MD5 mangled names, which use a slightly different rule from + // other C++ manglings. + // + // A non-Arm64EC function: + // + // ??@aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa@ + // + // An Arm64EC function: + // + // ??@aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa@$$h@ + if (Name.starts_with("??@") && Name.ends_with("@")) + return (Name + "$$h@").str(); + // Ask the demangler where we should insert "$$h". auto InsertIdx = getArm64ECInsertionPointInMangledName(Name); if (!InsertIdx) @@ -324,6 +337,10 @@ llvm::getArm64ECDemangledFunctionName(StringRef Name) { if (Name[0] != '?') return std::nullopt; + // MD5 mangled name; see comment in getArm64ECMangledFunctionName. + if (Name.starts_with("??@") && Name.ends_with("@$$h@")) + return Name.drop_back(4).str(); + // Drop the ARM64EC "$$h" tag. 
std::pair<StringRef, StringRef> Pair = Name.split("$$h"); if (Pair.second.empty()) diff --git a/llvm/lib/Object/OffloadBundle.cpp b/llvm/lib/Object/OffloadBundle.cpp index 329dcbf..046cde8 100644 --- a/llvm/lib/Object/OffloadBundle.cpp +++ b/llvm/lib/Object/OffloadBundle.cpp @@ -25,38 +25,71 @@ using namespace llvm; using namespace llvm::object; -static llvm::TimerGroup - OffloadBundlerTimerGroup("Offload Bundler Timer Group", - "Timer group for offload bundler"); +static TimerGroup OffloadBundlerTimerGroup("Offload Bundler Timer Group", + "Timer group for offload bundler"); // Extract an Offload bundle (usually a Offload Bundle) from a fat_bin -// section +// section. Error extractOffloadBundle(MemoryBufferRef Contents, uint64_t SectionOffset, StringRef FileName, SmallVectorImpl<OffloadBundleFatBin> &Bundles) { size_t Offset = 0; size_t NextbundleStart = 0; + StringRef Magic; + std::unique_ptr<MemoryBuffer> Buffer; // There could be multiple offloading bundles stored at this section. - while (NextbundleStart != StringRef::npos) { - std::unique_ptr<MemoryBuffer> Buffer = + while ((NextbundleStart != StringRef::npos) && + (Offset < Contents.getBuffer().size())) { + Buffer = MemoryBuffer::getMemBuffer(Contents.getBuffer().drop_front(Offset), "", /*RequiresNullTerminator=*/false); - // Create the FatBinBindle object. This will also create the Bundle Entry - // list info. - auto FatBundleOrErr = - OffloadBundleFatBin::create(*Buffer, SectionOffset + Offset, FileName); - if (!FatBundleOrErr) - return FatBundleOrErr.takeError(); - - // Add current Bundle to list. - Bundles.emplace_back(std::move(**FatBundleOrErr)); - - // Find the next bundle by searching for the magic string - StringRef Str = Buffer->getBuffer(); - NextbundleStart = Str.find(StringRef("__CLANG_OFFLOAD_BUNDLE__"), 24); + if (identify_magic((*Buffer).getBuffer()) == + file_magic::offload_bundle_compressed) { + Magic = "CCOB"; + // Decompress this bundle first. + NextbundleStart = (*Buffer).getBuffer().find(Magic, Magic.size()); + if (NextbundleStart == StringRef::npos) + NextbundleStart = (*Buffer).getBuffer().size(); + + ErrorOr<std::unique_ptr<MemoryBuffer>> CodeOrErr = + MemoryBuffer::getMemBuffer( + (*Buffer).getBuffer().take_front(NextbundleStart), FileName, + false); + if (std::error_code EC = CodeOrErr.getError()) + return createFileError(FileName, EC); + + Expected<std::unique_ptr<MemoryBuffer>> DecompressedBufferOrErr = + CompressedOffloadBundle::decompress(**CodeOrErr, nullptr); + if (!DecompressedBufferOrErr) + return createStringError("failed to decompress input: " + + toString(DecompressedBufferOrErr.takeError())); + + auto FatBundleOrErr = OffloadBundleFatBin::create( + **DecompressedBufferOrErr, Offset, FileName, true); + if (!FatBundleOrErr) + return FatBundleOrErr.takeError(); + + // Add current Bundle to list. + Bundles.emplace_back(std::move(**FatBundleOrErr)); + + } else if (identify_magic((*Buffer).getBuffer()) == + file_magic::offload_bundle) { + // Create the OffloadBundleFatBin object. This will also create the Bundle + // Entry list info. + auto FatBundleOrErr = OffloadBundleFatBin::create( + *Buffer, SectionOffset + Offset, FileName); + if (!FatBundleOrErr) + return FatBundleOrErr.takeError(); + + // Add current Bundle to list. 
+ Bundles.emplace_back(std::move(**FatBundleOrErr)); + + Magic = "__CLANG_OFFLOAD_BUNDLE__"; + NextbundleStart = (*Buffer).getBuffer().find(Magic, Magic.size()); + } if (NextbundleStart != StringRef::npos) Offset += NextbundleStart; @@ -82,7 +115,7 @@ Error OffloadBundleFatBin::readEntries(StringRef Buffer, NumberOfEntries = NumOfEntries; - // For each Bundle Entry (code object) + // For each Bundle Entry (code object). for (uint64_t I = 0; I < NumOfEntries; I++) { uint64_t EntrySize; uint64_t EntryOffset; @@ -112,19 +145,22 @@ Error OffloadBundleFatBin::readEntries(StringRef Buffer, Expected<std::unique_ptr<OffloadBundleFatBin>> OffloadBundleFatBin::create(MemoryBufferRef Buf, uint64_t SectionOffset, - StringRef FileName) { + StringRef FileName, bool Decompress) { if (Buf.getBufferSize() < 24) return errorCodeToError(object_error::parse_failed); // Check for magic bytes. - if (identify_magic(Buf.getBuffer()) != file_magic::offload_bundle) + if ((identify_magic(Buf.getBuffer()) != file_magic::offload_bundle) && + (identify_magic(Buf.getBuffer()) != + file_magic::offload_bundle_compressed)) return errorCodeToError(object_error::parse_failed); std::unique_ptr<OffloadBundleFatBin> TheBundle( new OffloadBundleFatBin(Buf, FileName)); - // Read the Bundle Entries - Error Err = TheBundle->readEntries(Buf.getBuffer(), SectionOffset); + // Read the Bundle Entries. + Error Err = + TheBundle->readEntries(Buf.getBuffer(), Decompress ? 0 : SectionOffset); if (Err) return Err; @@ -132,7 +168,7 @@ OffloadBundleFatBin::create(MemoryBufferRef Buf, uint64_t SectionOffset, } Error OffloadBundleFatBin::extractBundle(const ObjectFile &Source) { - // This will extract all entries in the Bundle + // This will extract all entries in the Bundle. for (OffloadBundleEntry &Entry : Entries) { if (Entry.Size == 0) @@ -161,40 +197,21 @@ Error object::extractOffloadBundleFatBinary( return Buffer.takeError(); // If it does not start with the reserved suffix, just skip this section. - if ((llvm::identify_magic(*Buffer) == llvm::file_magic::offload_bundle) || + if ((llvm::identify_magic(*Buffer) == file_magic::offload_bundle) || (llvm::identify_magic(*Buffer) == - llvm::file_magic::offload_bundle_compressed)) { + file_magic::offload_bundle_compressed)) { uint64_t SectionOffset = 0; if (Obj.isELF()) { SectionOffset = ELFSectionRef(Sec).getOffset(); - } else if (Obj.isCOFF()) // TODO: add COFF Support + } else if (Obj.isCOFF()) // TODO: add COFF Support. return createStringError(object_error::parse_failed, - "COFF object files not supported.\n"); + "COFF object files not supported"); MemoryBufferRef Contents(*Buffer, Obj.getFileName()); - - if (llvm::identify_magic(*Buffer) == - llvm::file_magic::offload_bundle_compressed) { - // Decompress the input if necessary. 
- Expected<std::unique_ptr<MemoryBuffer>> DecompressedBufferOrErr = - CompressedOffloadBundle::decompress(Contents, false); - - if (!DecompressedBufferOrErr) - return createStringError( - inconvertibleErrorCode(), - "Failed to decompress input: " + - llvm::toString(DecompressedBufferOrErr.takeError())); - - MemoryBuffer &DecompressedInput = **DecompressedBufferOrErr; - if (Error Err = extractOffloadBundle(DecompressedInput, SectionOffset, - Obj.getFileName(), Bundles)) - return Err; - } else { - if (Error Err = extractOffloadBundle(Contents, SectionOffset, - Obj.getFileName(), Bundles)) - return Err; - } + if (Error Err = extractOffloadBundle(Contents, SectionOffset, + Obj.getFileName(), Bundles)) + return Err; } } return Error::success(); @@ -222,8 +239,22 @@ Error object::extractCodeObject(const ObjectFile &Source, int64_t Offset, return Error::success(); } +Error object::extractCodeObject(const MemoryBufferRef Buffer, int64_t Offset, + int64_t Size, StringRef OutputFileName) { + Expected<std::unique_ptr<FileOutputBuffer>> BufferOrErr = + FileOutputBuffer::create(OutputFileName, Size); + if (!BufferOrErr) + return BufferOrErr.takeError(); + + std::unique_ptr<FileOutputBuffer> Buf = std::move(*BufferOrErr); + std::copy(Buffer.getBufferStart() + Offset, + Buffer.getBufferStart() + Offset + Size, Buf->getBufferStart()); + + return Buf->commit(); +} + // given a file name, offset, and size, extract data into a code object file, -// into file <SourceFile>-offset<Offset>-size<Size>.co +// into file "<SourceFile>-offset<Offset>-size<Size>.co". Error object::extractOffloadBundleByURI(StringRef URIstr) { // create a URI object Expected<std::unique_ptr<OffloadBundleURI>> UriOrErr( @@ -236,7 +267,7 @@ Error object::extractOffloadBundleByURI(StringRef URIstr) { OutputFile += "-offset" + itostr(Uri.Offset) + "-size" + itostr(Uri.Size) + ".co"; - // Create an ObjectFile object from uri.file_uri + // Create an ObjectFile object from uri.file_uri. auto ObjOrErr = ObjectFile::createObjectFile(Uri.FileName); if (!ObjOrErr) return ObjOrErr.takeError(); @@ -249,7 +280,7 @@ Error object::extractOffloadBundleByURI(StringRef URIstr) { return Error::success(); } -// Utility function to format numbers with commas +// Utility function to format numbers with commas. 
static std::string formatWithCommas(unsigned long long Value) { std::string Num = std::to_string(Value); int InsertPosition = Num.length() - 3; @@ -260,87 +291,278 @@ static std::string formatWithCommas(unsigned long long Value) { return Num; } -llvm::Expected<std::unique_ptr<llvm::MemoryBuffer>> -CompressedOffloadBundle::decompress(llvm::MemoryBufferRef &Input, - bool Verbose) { - StringRef Blob = Input.getBuffer(); +Expected<std::unique_ptr<MemoryBuffer>> +CompressedOffloadBundle::compress(compression::Params P, + const MemoryBuffer &Input, uint16_t Version, + raw_ostream *VerboseStream) { + if (!compression::zstd::isAvailable() && !compression::zlib::isAvailable()) + return createStringError("compression not supported."); + Timer HashTimer("Hash Calculation Timer", "Hash calculation time", + OffloadBundlerTimerGroup); + if (VerboseStream) + HashTimer.startTimer(); + MD5 Hash; + MD5::MD5Result Result; + Hash.update(Input.getBuffer()); + Hash.final(Result); + uint64_t TruncatedHash = Result.low(); + if (VerboseStream) + HashTimer.stopTimer(); + + SmallVector<uint8_t, 0> CompressedBuffer; + auto BufferUint8 = ArrayRef<uint8_t>( + reinterpret_cast<const uint8_t *>(Input.getBuffer().data()), + Input.getBuffer().size()); + Timer CompressTimer("Compression Timer", "Compression time", + OffloadBundlerTimerGroup); + if (VerboseStream) + CompressTimer.startTimer(); + compression::compress(P, BufferUint8, CompressedBuffer); + if (VerboseStream) + CompressTimer.stopTimer(); + + uint16_t CompressionMethod = static_cast<uint16_t>(P.format); + + // Store sizes in 64-bit variables first. + uint64_t UncompressedSize64 = Input.getBuffer().size(); + uint64_t TotalFileSize64; + + // Calculate total file size based on version. + if (Version == 2) { + // For V2, ensure the sizes don't exceed 32-bit limit. + if (UncompressedSize64 > std::numeric_limits<uint32_t>::max()) + return createStringError("uncompressed size (%llu) exceeds version 2 " + "unsigned 32-bit integer limit", + UncompressedSize64); + TotalFileSize64 = MagicNumber.size() + sizeof(uint32_t) + sizeof(Version) + + sizeof(CompressionMethod) + sizeof(uint32_t) + + sizeof(TruncatedHash) + CompressedBuffer.size(); + if (TotalFileSize64 > std::numeric_limits<uint32_t>::max()) + return createStringError("total file size (%llu) exceeds version 2 " + "unsigned 32-bit integer limit", + TotalFileSize64); + + } else { // Version 3. + TotalFileSize64 = MagicNumber.size() + sizeof(uint64_t) + sizeof(Version) + + sizeof(CompressionMethod) + sizeof(uint64_t) + + sizeof(TruncatedHash) + CompressedBuffer.size(); + } + + SmallVector<char, 0> FinalBuffer; + raw_svector_ostream OS(FinalBuffer); + OS << MagicNumber; + OS.write(reinterpret_cast<const char *>(&Version), sizeof(Version)); + OS.write(reinterpret_cast<const char *>(&CompressionMethod), + sizeof(CompressionMethod)); + + // Write size fields according to version. + if (Version == 2) { + uint32_t TotalFileSize32 = static_cast<uint32_t>(TotalFileSize64); + uint32_t UncompressedSize32 = static_cast<uint32_t>(UncompressedSize64); + OS.write(reinterpret_cast<const char *>(&TotalFileSize32), + sizeof(TotalFileSize32)); + OS.write(reinterpret_cast<const char *>(&UncompressedSize32), + sizeof(UncompressedSize32)); + } else { // Version 3. 
+ OS.write(reinterpret_cast<const char *>(&TotalFileSize64), + sizeof(TotalFileSize64)); + OS.write(reinterpret_cast<const char *>(&UncompressedSize64), + sizeof(UncompressedSize64)); + } + + OS.write(reinterpret_cast<const char *>(&TruncatedHash), + sizeof(TruncatedHash)); + OS.write(reinterpret_cast<const char *>(CompressedBuffer.data()), + CompressedBuffer.size()); + + if (VerboseStream) { + auto MethodUsed = P.format == compression::Format::Zstd ? "zstd" : "zlib"; + double CompressionRate = + static_cast<double>(UncompressedSize64) / CompressedBuffer.size(); + double CompressionTimeSeconds = CompressTimer.getTotalTime().getWallTime(); + double CompressionSpeedMBs = + (UncompressedSize64 / (1024.0 * 1024.0)) / CompressionTimeSeconds; + *VerboseStream << "Compressed bundle format version: " << Version << "\n" + << "Total file size (including headers): " + << formatWithCommas(TotalFileSize64) << " bytes\n" + << "Compression method used: " << MethodUsed << "\n" + << "Compression level: " << P.level << "\n" + << "Binary size before compression: " + << formatWithCommas(UncompressedSize64) << " bytes\n" + << "Binary size after compression: " + << formatWithCommas(CompressedBuffer.size()) << " bytes\n" + << "Compression rate: " << format("%.2lf", CompressionRate) + << "\n" + << "Compression ratio: " + << format("%.2lf%%", 100.0 / CompressionRate) << "\n" + << "Compression speed: " + << format("%.2lf MB/s", CompressionSpeedMBs) << "\n" + << "Truncated MD5 hash: " << format_hex(TruncatedHash, 16) + << "\n"; + } + + return MemoryBuffer::getMemBufferCopy( + StringRef(FinalBuffer.data(), FinalBuffer.size())); +} + +// Use packed structs to avoid padding, such that the structs map the serialized +// format. +LLVM_PACKED_START +union RawCompressedBundleHeader { + struct CommonFields { + uint32_t Magic; + uint16_t Version; + uint16_t Method; + }; + + struct V1Header { + CommonFields Common; + uint32_t UncompressedFileSize; + uint64_t Hash; + }; + + struct V2Header { + CommonFields Common; + uint32_t FileSize; + uint32_t UncompressedFileSize; + uint64_t Hash; + }; + + struct V3Header { + CommonFields Common; + uint64_t FileSize; + uint64_t UncompressedFileSize; + uint64_t Hash; + }; + + CommonFields Common; + V1Header V1; + V2Header V2; + V3Header V3; +}; +LLVM_PACKED_END + +// Helper method to get header size based on version. 
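+// The returned size covers the common magic/version/method fields plus the
+// per-version size and hash fields (V2 stores 32-bit sizes, V3 64-bit).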
+static size_t getHeaderSize(uint16_t Version) { + switch (Version) { + case 1: + return sizeof(RawCompressedBundleHeader::V1Header); + case 2: + return sizeof(RawCompressedBundleHeader::V2Header); + case 3: + return sizeof(RawCompressedBundleHeader::V3Header); + default: + llvm_unreachable("Unsupported version"); + } +} - if (Blob.size() < V1HeaderSize) - return llvm::MemoryBuffer::getMemBufferCopy(Blob); +Expected<CompressedOffloadBundle::CompressedBundleHeader> +CompressedOffloadBundle::CompressedBundleHeader::tryParse(StringRef Blob) { + assert(Blob.size() >= sizeof(RawCompressedBundleHeader::CommonFields)); + assert(identify_magic(Blob) == file_magic::offload_bundle_compressed); + + RawCompressedBundleHeader Header; + std::memcpy(&Header, Blob.data(), std::min(Blob.size(), sizeof(Header))); + + CompressedBundleHeader Normalized; + Normalized.Version = Header.Common.Version; + + size_t RequiredSize = getHeaderSize(Normalized.Version); + + if (Blob.size() < RequiredSize) + return createStringError("compressed bundle header size too small"); + + switch (Normalized.Version) { + case 1: + Normalized.UncompressedFileSize = Header.V1.UncompressedFileSize; + Normalized.Hash = Header.V1.Hash; + break; + case 2: + Normalized.FileSize = Header.V2.FileSize; + Normalized.UncompressedFileSize = Header.V2.UncompressedFileSize; + Normalized.Hash = Header.V2.Hash; + break; + case 3: + Normalized.FileSize = Header.V3.FileSize; + Normalized.UncompressedFileSize = Header.V3.UncompressedFileSize; + Normalized.Hash = Header.V3.Hash; + break; + default: + return createStringError("unknown compressed bundle version"); + } - if (llvm::identify_magic(Blob) != - llvm::file_magic::offload_bundle_compressed) { - if (Verbose) - llvm::errs() << "Uncompressed bundle.\n"; - return llvm::MemoryBuffer::getMemBufferCopy(Blob); + // Determine compression format. + switch (Header.Common.Method) { + case static_cast<uint16_t>(compression::Format::Zlib): + case static_cast<uint16_t>(compression::Format::Zstd): + Normalized.CompressionFormat = + static_cast<compression::Format>(Header.Common.Method); + break; + default: + return createStringError("unknown compressing method"); } - size_t CurrentOffset = MagicSize; + return Normalized; +} - uint16_t ThisVersion; - memcpy(&ThisVersion, Blob.data() + CurrentOffset, sizeof(uint16_t)); - CurrentOffset += VersionFieldSize; +Expected<std::unique_ptr<MemoryBuffer>> +CompressedOffloadBundle::decompress(const MemoryBuffer &Input, + raw_ostream *VerboseStream) { + StringRef Blob = Input.getBuffer(); - uint16_t CompressionMethod; - memcpy(&CompressionMethod, Blob.data() + CurrentOffset, sizeof(uint16_t)); - CurrentOffset += MethodFieldSize; + // Check minimum header size (using V1 as it's the smallest). 
+ if (Blob.size() < sizeof(RawCompressedBundleHeader::CommonFields)) + return MemoryBuffer::getMemBufferCopy(Blob); - uint32_t TotalFileSize; - if (ThisVersion >= 2) { - if (Blob.size() < V2HeaderSize) - return createStringError(inconvertibleErrorCode(), - "Compressed bundle header size too small"); - memcpy(&TotalFileSize, Blob.data() + CurrentOffset, sizeof(uint32_t)); - CurrentOffset += FileSizeFieldSize; + if (identify_magic(Blob) != file_magic::offload_bundle_compressed) { + if (VerboseStream) + *VerboseStream << "Uncompressed bundle\n"; + return MemoryBuffer::getMemBufferCopy(Blob); } - uint32_t UncompressedSize; - memcpy(&UncompressedSize, Blob.data() + CurrentOffset, sizeof(uint32_t)); - CurrentOffset += UncompressedSizeFieldSize; - - uint64_t StoredHash; - memcpy(&StoredHash, Blob.data() + CurrentOffset, sizeof(uint64_t)); - CurrentOffset += HashFieldSize; - - llvm::compression::Format CompressionFormat; - if (CompressionMethod == - static_cast<uint16_t>(llvm::compression::Format::Zlib)) - CompressionFormat = llvm::compression::Format::Zlib; - else if (CompressionMethod == - static_cast<uint16_t>(llvm::compression::Format::Zstd)) - CompressionFormat = llvm::compression::Format::Zstd; - else - return createStringError(inconvertibleErrorCode(), - "Unknown compressing method"); - - llvm::Timer DecompressTimer("Decompression Timer", "Decompression time", - OffloadBundlerTimerGroup); - if (Verbose) + Expected<CompressedBundleHeader> HeaderOrErr = + CompressedBundleHeader::tryParse(Blob); + if (!HeaderOrErr) + return HeaderOrErr.takeError(); + + const CompressedBundleHeader &Normalized = *HeaderOrErr; + unsigned ThisVersion = Normalized.Version; + size_t HeaderSize = getHeaderSize(ThisVersion); + + compression::Format CompressionFormat = Normalized.CompressionFormat; + + size_t TotalFileSize = Normalized.FileSize.value_or(0); + size_t UncompressedSize = Normalized.UncompressedFileSize; + auto StoredHash = Normalized.Hash; + + Timer DecompressTimer("Decompression Timer", "Decompression time", + OffloadBundlerTimerGroup); + if (VerboseStream) DecompressTimer.startTimer(); SmallVector<uint8_t, 0> DecompressedData; - StringRef CompressedData = Blob.substr(CurrentOffset); - if (llvm::Error DecompressionError = llvm::compression::decompress( - CompressionFormat, llvm::arrayRefFromStringRef(CompressedData), + StringRef CompressedData = + Blob.substr(HeaderSize, TotalFileSize - HeaderSize); + + if (Error DecompressionError = compression::decompress( + CompressionFormat, arrayRefFromStringRef(CompressedData), DecompressedData, UncompressedSize)) - return createStringError(inconvertibleErrorCode(), - "Could not decompress embedded file contents: " + - llvm::toString(std::move(DecompressionError))); + return createStringError("could not decompress embedded file contents: " + + toString(std::move(DecompressionError))); - if (Verbose) { + if (VerboseStream) { DecompressTimer.stopTimer(); double DecompressionTimeSeconds = DecompressTimer.getTotalTime().getWallTime(); // Recalculate MD5 hash for integrity check. 
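    // (Done only when verbose output is requested; the result is reported,
    // not enforced.)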
- llvm::Timer HashRecalcTimer("Hash Recalculation Timer", - "Hash recalculation time", - OffloadBundlerTimerGroup); + Timer HashRecalcTimer("Hash Recalculation Timer", "Hash recalculation time", + OffloadBundlerTimerGroup); HashRecalcTimer.startTimer(); - llvm::MD5 Hash; - llvm::MD5::MD5Result Result; - Hash.update(llvm::ArrayRef<uint8_t>(DecompressedData)); + MD5 Hash; + MD5::MD5Result Result; + Hash.update(ArrayRef<uint8_t>(DecompressedData)); Hash.final(Result); uint64_t RecalculatedHash = Result.low(); HashRecalcTimer.stopTimer(); @@ -351,118 +573,28 @@ CompressedOffloadBundle::decompress(llvm::MemoryBufferRef &Input, double DecompressionSpeedMBs = (UncompressedSize / (1024.0 * 1024.0)) / DecompressionTimeSeconds; - llvm::errs() << "Compressed bundle format version: " << ThisVersion << "\n"; + *VerboseStream << "Compressed bundle format version: " << ThisVersion + << "\n"; if (ThisVersion >= 2) - llvm::errs() << "Total file size (from header): " - << formatWithCommas(TotalFileSize) << " bytes\n"; - llvm::errs() << "Decompression method: " - << (CompressionFormat == llvm::compression::Format::Zlib - ? "zlib" - : "zstd") - << "\n" - << "Size before decompression: " - << formatWithCommas(CompressedData.size()) << " bytes\n" - << "Size after decompression: " - << formatWithCommas(UncompressedSize) << " bytes\n" - << "Compression rate: " - << llvm::format("%.2lf", CompressionRate) << "\n" - << "Compression ratio: " - << llvm::format("%.2lf%%", 100.0 / CompressionRate) << "\n" - << "Decompression speed: " - << llvm::format("%.2lf MB/s", DecompressionSpeedMBs) << "\n" - << "Stored hash: " << llvm::format_hex(StoredHash, 16) << "\n" - << "Recalculated hash: " - << llvm::format_hex(RecalculatedHash, 16) << "\n" - << "Hashes match: " << (HashMatch ? "Yes" : "No") << "\n"; + *VerboseStream << "Total file size (from header): " + << formatWithCommas(TotalFileSize) << " bytes\n"; + *VerboseStream + << "Decompression method: " + << (CompressionFormat == compression::Format::Zlib ? "zlib" : "zstd") + << "\n" + << "Size before decompression: " + << formatWithCommas(CompressedData.size()) << " bytes\n" + << "Size after decompression: " << formatWithCommas(UncompressedSize) + << " bytes\n" + << "Compression rate: " << format("%.2lf", CompressionRate) << "\n" + << "Compression ratio: " << format("%.2lf%%", 100.0 / CompressionRate) + << "\n" + << "Decompression speed: " + << format("%.2lf MB/s", DecompressionSpeedMBs) << "\n" + << "Stored hash: " << format_hex(StoredHash, 16) << "\n" + << "Recalculated hash: " << format_hex(RecalculatedHash, 16) << "\n" + << "Hashes match: " << (HashMatch ? 
"Yes" : "No") << "\n"; } - return llvm::MemoryBuffer::getMemBufferCopy( - llvm::toStringRef(DecompressedData)); -} - -llvm::Expected<std::unique_ptr<llvm::MemoryBuffer>> -CompressedOffloadBundle::compress(llvm::compression::Params P, - const llvm::MemoryBuffer &Input, - bool Verbose) { - if (!llvm::compression::zstd::isAvailable() && - !llvm::compression::zlib::isAvailable()) - return createStringError(llvm::inconvertibleErrorCode(), - "Compression not supported"); - - llvm::Timer HashTimer("Hash Calculation Timer", "Hash calculation time", - OffloadBundlerTimerGroup); - if (Verbose) - HashTimer.startTimer(); - llvm::MD5 Hash; - llvm::MD5::MD5Result Result; - Hash.update(Input.getBuffer()); - Hash.final(Result); - uint64_t TruncatedHash = Result.low(); - if (Verbose) - HashTimer.stopTimer(); - - SmallVector<uint8_t, 0> CompressedBuffer; - auto BufferUint8 = llvm::ArrayRef<uint8_t>( - reinterpret_cast<const uint8_t *>(Input.getBuffer().data()), - Input.getBuffer().size()); - - llvm::Timer CompressTimer("Compression Timer", "Compression time", - OffloadBundlerTimerGroup); - if (Verbose) - CompressTimer.startTimer(); - llvm::compression::compress(P, BufferUint8, CompressedBuffer); - if (Verbose) - CompressTimer.stopTimer(); - - uint16_t CompressionMethod = static_cast<uint16_t>(P.format); - uint32_t UncompressedSize = Input.getBuffer().size(); - uint32_t TotalFileSize = MagicNumber.size() + sizeof(TotalFileSize) + - sizeof(Version) + sizeof(CompressionMethod) + - sizeof(UncompressedSize) + sizeof(TruncatedHash) + - CompressedBuffer.size(); - - SmallVector<char, 0> FinalBuffer; - llvm::raw_svector_ostream OS(FinalBuffer); - OS << MagicNumber; - OS.write(reinterpret_cast<const char *>(&Version), sizeof(Version)); - OS.write(reinterpret_cast<const char *>(&CompressionMethod), - sizeof(CompressionMethod)); - OS.write(reinterpret_cast<const char *>(&TotalFileSize), - sizeof(TotalFileSize)); - OS.write(reinterpret_cast<const char *>(&UncompressedSize), - sizeof(UncompressedSize)); - OS.write(reinterpret_cast<const char *>(&TruncatedHash), - sizeof(TruncatedHash)); - OS.write(reinterpret_cast<const char *>(CompressedBuffer.data()), - CompressedBuffer.size()); - - if (Verbose) { - auto MethodUsed = - P.format == llvm::compression::Format::Zstd ? 
"zstd" : "zlib"; - double CompressionRate = - static_cast<double>(UncompressedSize) / CompressedBuffer.size(); - double CompressionTimeSeconds = CompressTimer.getTotalTime().getWallTime(); - double CompressionSpeedMBs = - (UncompressedSize / (1024.0 * 1024.0)) / CompressionTimeSeconds; - - llvm::errs() << "Compressed bundle format version: " << Version << "\n" - << "Total file size (including headers): " - << formatWithCommas(TotalFileSize) << " bytes\n" - << "Compression method used: " << MethodUsed << "\n" - << "Compression level: " << P.level << "\n" - << "Binary size before compression: " - << formatWithCommas(UncompressedSize) << " bytes\n" - << "Binary size after compression: " - << formatWithCommas(CompressedBuffer.size()) << " bytes\n" - << "Compression rate: " - << llvm::format("%.2lf", CompressionRate) << "\n" - << "Compression ratio: " - << llvm::format("%.2lf%%", 100.0 / CompressionRate) << "\n" - << "Compression speed: " - << llvm::format("%.2lf MB/s", CompressionSpeedMBs) << "\n" - << "Truncated MD5 hash: " - << llvm::format_hex(TruncatedHash, 16) << "\n"; - } - return llvm::MemoryBuffer::getMemBufferCopy( - llvm::StringRef(FinalBuffer.data(), FinalBuffer.size())); + return MemoryBuffer::getMemBufferCopy(toStringRef(DecompressedData)); } diff --git a/llvm/lib/Option/ArgList.cpp b/llvm/lib/Option/ArgList.cpp index c4188b3b..2f4e212 100644 --- a/llvm/lib/Option/ArgList.cpp +++ b/llvm/lib/Option/ArgList.cpp @@ -14,12 +14,14 @@ #include "llvm/Config/llvm-config.h" #include "llvm/Option/Arg.h" #include "llvm/Option/OptSpecifier.h" +#include "llvm/Option/OptTable.h" #include "llvm/Option/Option.h" #include "llvm/Support/Compiler.h" #include "llvm/Support/Debug.h" #include "llvm/Support/raw_ostream.h" #include <algorithm> #include <cassert> +#include <cstddef> #include <memory> #include <string> #include <utility> @@ -202,6 +204,42 @@ void ArgList::print(raw_ostream &O) const { LLVM_DUMP_METHOD void ArgList::dump() const { print(dbgs()); } #endif +StringRef ArgList::getSubCommand( + ArrayRef<OptTable::SubCommand> AllSubCommands, + std::function<void(ArrayRef<StringRef>)> HandleMultipleSubcommands, + std::function<void(ArrayRef<StringRef>)> HandleOtherPositionals) const { + + SmallVector<StringRef, 4> SubCommands; + SmallVector<StringRef, 4> OtherPositionals; + for (const Arg *A : *this) { + if (A->getOption().getKind() != Option::InputClass) + continue; + + size_t OldSize = SubCommands.size(); + for (const OptTable::SubCommand &CMD : AllSubCommands) { + if (StringRef(CMD.Name) == A->getValue()) + SubCommands.push_back(A->getValue()); + } + + if (SubCommands.size() == OldSize) + OtherPositionals.push_back(A->getValue()); + } + + // Invoke callbacks if necessary. + if (SubCommands.size() > 1) { + HandleMultipleSubcommands(SubCommands); + return {}; + } + if (!OtherPositionals.empty()) { + HandleOtherPositionals(OtherPositionals); + return {}; + } + + if (SubCommands.size() == 1) + return SubCommands.front(); + return {}; // No valid usage of subcommand found. +} + void InputArgList::releaseMemory() { // An InputArgList always owns its arguments. 
for (Arg *A : *this) diff --git a/llvm/lib/Option/OptTable.cpp b/llvm/lib/Option/OptTable.cpp index 6d10e61..14e3b0d 100644 --- a/llvm/lib/Option/OptTable.cpp +++ b/llvm/lib/Option/OptTable.cpp @@ -79,9 +79,12 @@ OptSpecifier::OptSpecifier(const Option *Opt) : ID(Opt->getID()) {} OptTable::OptTable(const StringTable &StrTable, ArrayRef<StringTable::Offset> PrefixesTable, - ArrayRef<Info> OptionInfos, bool IgnoreCase) + ArrayRef<Info> OptionInfos, bool IgnoreCase, + ArrayRef<SubCommand> SubCommands, + ArrayRef<unsigned> SubCommandIDsTable) : StrTable(&StrTable), PrefixesTable(PrefixesTable), - OptionInfos(OptionInfos), IgnoreCase(IgnoreCase) { + OptionInfos(OptionInfos), IgnoreCase(IgnoreCase), + SubCommands(SubCommands), SubCommandIDsTable(SubCommandIDsTable) { // Explicitly zero initialize the error to work around a bug in array // value-initialization on MinGW with gcc 4.3.5. @@ -715,9 +718,10 @@ static const char *getOptionHelpGroup(const OptTable &Opts, OptSpecifier Id) { void OptTable::printHelp(raw_ostream &OS, const char *Usage, const char *Title, bool ShowHidden, bool ShowAllAliases, - Visibility VisibilityMask) const { + Visibility VisibilityMask, + StringRef SubCommand) const { return internalPrintHelp( - OS, Usage, Title, ShowHidden, ShowAllAliases, + OS, Usage, Title, SubCommand, ShowHidden, ShowAllAliases, [VisibilityMask](const Info &CandidateInfo) -> bool { return (CandidateInfo.Visibility & VisibilityMask) == 0; }, @@ -730,7 +734,7 @@ void OptTable::printHelp(raw_ostream &OS, const char *Usage, const char *Title, bool ShowHidden = !(FlagsToExclude & HelpHidden); FlagsToExclude &= ~HelpHidden; return internalPrintHelp( - OS, Usage, Title, ShowHidden, ShowAllAliases, + OS, Usage, Title, /*SubCommand=*/{}, ShowHidden, ShowAllAliases, [FlagsToInclude, FlagsToExclude](const Info &CandidateInfo) { if (FlagsToInclude && !(CandidateInfo.Flags & FlagsToInclude)) return true; @@ -742,16 +746,62 @@ void OptTable::printHelp(raw_ostream &OS, const char *Usage, const char *Title, } void OptTable::internalPrintHelp( - raw_ostream &OS, const char *Usage, const char *Title, bool ShowHidden, - bool ShowAllAliases, std::function<bool(const Info &)> ExcludeOption, + raw_ostream &OS, const char *Usage, const char *Title, StringRef SubCommand, + bool ShowHidden, bool ShowAllAliases, + std::function<bool(const Info &)> ExcludeOption, Visibility VisibilityMask) const { OS << "OVERVIEW: " << Title << "\n\n"; - OS << "USAGE: " << Usage << "\n\n"; // Render help text into a map of group-name to a list of (option, help) // pairs. std::map<std::string, std::vector<OptionInfo>> GroupedOptionHelp; + auto ActiveSubCommand = + std::find_if(SubCommands.begin(), SubCommands.end(), + [&](const auto &C) { return SubCommand == C.Name; }); + if (!SubCommand.empty()) { + assert(ActiveSubCommand != SubCommands.end() && + "Not a valid registered subcommand."); + OS << ActiveSubCommand->HelpText << "\n\n"; + if (!StringRef(ActiveSubCommand->Usage).empty()) + OS << "USAGE: " << ActiveSubCommand->Usage << "\n\n"; + } else { + OS << "USAGE: " << Usage << "\n\n"; + if (SubCommands.size() > 1) { + OS << "SUBCOMMANDS:\n\n"; + for (const auto &C : SubCommands) + OS << C.Name << " - " << C.HelpText << "\n"; + OS << "\n"; + } + } + + auto DoesOptionBelongToSubcommand = [&](const Info &CandidateInfo) { + // Retrieve the SubCommandIDs registered to the given current CandidateInfo + // Option. 
+ ArrayRef<unsigned> SubCommandIDs = + CandidateInfo.getSubCommandIDs(SubCommandIDsTable); + + // If no registered subcommands, then only global options are to be printed. + // If no valid SubCommand (empty) in commandline then print the current + // global CandidateInfo Option. + if (SubCommandIDs.empty()) + return SubCommand.empty(); + + // Handle CandidateInfo Option which has at least one registered SubCommand. + // If no valid SubCommand (empty) in commandline, this CandidateInfo option + // should not be printed. + if (SubCommand.empty()) + return false; + + // Find the ID of the valid subcommand passed in commandline (its index in + // the SubCommands table which contains all subcommands). + unsigned ActiveSubCommandID = ActiveSubCommand - &SubCommands[0]; + // Print if the ActiveSubCommandID is registered with the CandidateInfo + // Option. + return std::find(SubCommandIDs.begin(), SubCommandIDs.end(), + ActiveSubCommandID) != SubCommandIDs.end(); + }; + for (unsigned Id = 1, e = getNumOptions() + 1; Id != e; ++Id) { // FIXME: Split out option groups. if (getOptionKind(Id) == Option::GroupClass) @@ -764,6 +814,9 @@ void OptTable::internalPrintHelp( if (ExcludeOption(CandidateInfo)) continue; + if (!DoesOptionBelongToSubcommand(CandidateInfo)) + continue; + // If an alias doesn't have a help text, show a help text for the aliased // option instead. const char *HelpText = getOptionHelpText(Id, VisibilityMask); @@ -791,8 +844,11 @@ void OptTable::internalPrintHelp( GenericOptTable::GenericOptTable(const StringTable &StrTable, ArrayRef<StringTable::Offset> PrefixesTable, - ArrayRef<Info> OptionInfos, bool IgnoreCase) - : OptTable(StrTable, PrefixesTable, OptionInfos, IgnoreCase) { + ArrayRef<Info> OptionInfos, bool IgnoreCase, + ArrayRef<SubCommand> SubCommands, + ArrayRef<unsigned> SubCommandIDsTable) + : OptTable(StrTable, PrefixesTable, OptionInfos, IgnoreCase, SubCommands, + SubCommandIDsTable) { std::set<StringRef> TmpPrefixesUnion; for (auto const &Info : OptionInfos.drop_front(FirstSearchableIndex)) diff --git a/llvm/lib/Passes/PassBuilderPipelines.cpp b/llvm/lib/Passes/PassBuilderPipelines.cpp index 7069e8d..119caea 100644 --- a/llvm/lib/Passes/PassBuilderPipelines.cpp +++ b/llvm/lib/Passes/PassBuilderPipelines.cpp @@ -1960,6 +1960,7 @@ PassBuilder::buildLTODefaultPipeline(OptimizationLevel Level, // is fixed. MPM.addPass(WholeProgramDevirtPass(ExportSummary, nullptr)); + MPM.addPass(NoRecurseLTOInferencePass()); // Stop here at -O1. 
if (Level == OptimizationLevel::O1) { // The LowerTypeTestsPass needs to run to lower type metadata and the diff --git a/llvm/lib/Passes/PassRegistry.def b/llvm/lib/Passes/PassRegistry.def index f0e7d36..88550ea 100644 --- a/llvm/lib/Passes/PassRegistry.def +++ b/llvm/lib/Passes/PassRegistry.def @@ -119,6 +119,7 @@ MODULE_PASS("metarenamer", MetaRenamerPass()) MODULE_PASS("module-inline", ModuleInlinerPass()) MODULE_PASS("name-anon-globals", NameAnonGlobalPass()) MODULE_PASS("no-op-module", NoOpModulePass()) +MODULE_PASS("norecurse-lto-inference", NoRecurseLTOInferencePass()) MODULE_PASS("nsan", NumericalStabilitySanitizerPass()) MODULE_PASS("openmp-opt", OpenMPOptPass()) MODULE_PASS("openmp-opt-postlink", diff --git a/llvm/lib/Support/GlobPattern.cpp b/llvm/lib/Support/GlobPattern.cpp index 7004adf..0ecf47d 100644 --- a/llvm/lib/Support/GlobPattern.cpp +++ b/llvm/lib/Support/GlobPattern.cpp @@ -143,6 +143,15 @@ GlobPattern::create(StringRef S, std::optional<size_t> MaxSubPatterns) { return Pat; S = S.substr(PrefixSize); + // Just in case we stop on unmatched opening brackets. + size_t SuffixStart = S.find_last_of("?*[]{}\\"); + assert(SuffixStart != std::string::npos); + if (S[SuffixStart] == '\\') + ++SuffixStart; + ++SuffixStart; + Pat.Suffix = S.substr(SuffixStart); + S = S.substr(0, SuffixStart); + SmallVector<std::string, 1> SubPats; if (auto Err = parseBraceExpansions(S, MaxSubPatterns).moveInto(SubPats)) return std::move(Err); @@ -193,6 +202,8 @@ GlobPattern::SubGlobPattern::create(StringRef S) { bool GlobPattern::match(StringRef S) const { if (!S.consume_front(Prefix)) return false; + if (!S.consume_back(Suffix)) + return false; if (SubGlobs.empty() && S.empty()) return true; for (auto &Glob : SubGlobs) diff --git a/llvm/lib/Target/AArch64/AArch64FrameLowering.cpp b/llvm/lib/Target/AArch64/AArch64FrameLowering.cpp index 4357264d..c76689f 100644 --- a/llvm/lib/Target/AArch64/AArch64FrameLowering.cpp +++ b/llvm/lib/Target/AArch64/AArch64FrameLowering.cpp @@ -345,12 +345,6 @@ static unsigned getStackHazardSize(const MachineFunction &MF) { return MF.getSubtarget<AArch64Subtarget>().getStreamingHazardSize(); } -/// Returns true if PPRs are spilled as ZPRs. -static bool arePPRsSpilledAsZPR(const MachineFunction &MF) { - return MF.getSubtarget().getRegisterInfo()->getSpillSize( - AArch64::PPRRegClass) == 16; -} - StackOffset AArch64FrameLowering::getZPRStackSize(const MachineFunction &MF) const { const AArch64FunctionInfo *AFI = MF.getInfo<AArch64FunctionInfo>(); @@ -1966,8 +1960,7 @@ bool AArch64FrameLowering::spillCalleeSavedRegisters( StrOpc = RPI.isPaired() ? AArch64::ST1B_2Z_IMM : AArch64::STR_ZXI; break; case RegPairInfo::PPR: - StrOpc = - Size == 16 ? AArch64::SPILL_PPR_TO_ZPR_SLOT_PSEUDO : AArch64::STR_PXI; + StrOpc = AArch64::STR_PXI; break; case RegPairInfo::VG: StrOpc = AArch64::STRXui; @@ -2178,8 +2171,7 @@ bool AArch64FrameLowering::restoreCalleeSavedRegisters( LdrOpc = RPI.isPaired() ? AArch64::LD1B_2Z_IMM : AArch64::LDR_ZXI; break; case RegPairInfo::PPR: - LdrOpc = Size == 16 ? AArch64::FILL_PPR_FROM_ZPR_SLOT_PSEUDO - : AArch64::LDR_PXI; + LdrOpc = AArch64::LDR_PXI; break; case RegPairInfo::VG: continue; @@ -2286,9 +2278,7 @@ static std::optional<int> getLdStFrameID(const MachineInstr &MI, // Returns true if the LDST MachineInstr \p MI is a PPR access. 
static bool isPPRAccess(const MachineInstr &MI) { - return MI.getOpcode() != AArch64::SPILL_PPR_TO_ZPR_SLOT_PSEUDO && - MI.getOpcode() != AArch64::FILL_PPR_FROM_ZPR_SLOT_PSEUDO && - AArch64::PPRRegClass.contains(MI.getOperand(0).getReg()); + return AArch64::PPRRegClass.contains(MI.getOperand(0).getReg()); } // Check if a Hazard slot is needed for the current function, and if so create @@ -2390,12 +2380,6 @@ void AArch64FrameLowering::determineStackHazardSlot( return; } - if (arePPRsSpilledAsZPR(MF)) { - LLVM_DEBUG(dbgs() << "SplitSVEObjects is not supported with " - "-aarch64-enable-zpr-predicate-spills"); - return; - } - // If another calling convention is explicitly set FPRs can't be promoted to // ZPR callee-saves. if (!is_contained({CallingConv::C, CallingConv::Fast, @@ -2519,14 +2503,6 @@ void AArch64FrameLowering::determineCalleeSaves(MachineFunction &MF, continue; } - // Always save P4 when PPR spills are ZPR-sized and a predicate above p8 is - // spilled. If all of p0-p3 are used as return values p4 is must be free - // to reload p8-p15. - if (RegInfo->getSpillSize(AArch64::PPRRegClass) == 16 && - AArch64::PPR_p8to15RegClass.contains(Reg)) { - SavedRegs.set(AArch64::P4); - } - // MachO's compact unwind format relies on all registers being stored in // pairs. // FIXME: the usual format is actually better if unwinding isn't needed. @@ -2587,7 +2563,7 @@ void AArch64FrameLowering::determineCalleeSaves(MachineFunction &MF, auto SpillSize = TRI->getSpillSize(*RC); bool IsZPR = AArch64::ZPRRegClass.contains(Reg); bool IsPPR = !IsZPR && AArch64::PPRRegClass.contains(Reg); - if (IsZPR || (IsPPR && arePPRsSpilledAsZPR(MF))) + if (IsZPR) ZPRCSStackSize += SpillSize; else if (IsPPR) PPRCSStackSize += SpillSize; @@ -2902,7 +2878,7 @@ static SVEStackSizes determineSVEStackSizes(MachineFunction &MF, StackTop += MFI.getObjectSize(FI); StackTop = alignTo(StackTop, Alignment); - assert(StackTop < std::numeric_limits<int64_t>::max() && + assert(StackTop < (uint64_t)std::numeric_limits<int64_t>::max() && "SVE StackTop far too large?!"); int64_t Offset = -int64_t(StackTop); @@ -2961,314 +2937,8 @@ static SVEStackSizes determineSVEStackSizes(MachineFunction &MF, return SVEStack; } -/// Attempts to scavenge a register from \p ScavengeableRegs given the used -/// registers in \p UsedRegs. -static Register tryScavengeRegister(LiveRegUnits const &UsedRegs, - BitVector const &ScavengeableRegs, - Register PreferredReg) { - if (PreferredReg != AArch64::NoRegister && UsedRegs.available(PreferredReg)) - return PreferredReg; - for (auto Reg : ScavengeableRegs.set_bits()) { - if (UsedRegs.available(Reg)) - return Reg; - } - return AArch64::NoRegister; -} - -/// Propagates frame-setup/destroy flags from \p SourceMI to all instructions in -/// \p MachineInstrs. -static void propagateFrameFlags(MachineInstr &SourceMI, - ArrayRef<MachineInstr *> MachineInstrs) { - for (MachineInstr *MI : MachineInstrs) { - if (SourceMI.getFlag(MachineInstr::FrameSetup)) - MI->setFlag(MachineInstr::FrameSetup); - if (SourceMI.getFlag(MachineInstr::FrameDestroy)) - MI->setFlag(MachineInstr::FrameDestroy); - } -} - -/// RAII helper class for scavenging or spilling a register. On construction -/// attempts to find a free register of class \p RC (given \p UsedRegs and \p -/// AllocatableRegs), if no register can be found spills \p SpillCandidate to \p -/// MaybeSpillFI to free a register. The free'd register is returned via the \p -/// FreeReg output parameter. On destruction, if there is a spill, its previous -/// value is reloaded. 
The spilling and scavenging is only valid at the -/// insertion point \p MBBI, this class should _not_ be used in places that -/// create or manipulate basic blocks, moving the expected insertion point. -struct ScopedScavengeOrSpill { - ScopedScavengeOrSpill(const ScopedScavengeOrSpill &) = delete; - ScopedScavengeOrSpill(ScopedScavengeOrSpill &&) = delete; - - ScopedScavengeOrSpill(MachineFunction &MF, MachineBasicBlock &MBB, - MachineBasicBlock::iterator MBBI, - Register SpillCandidate, const TargetRegisterClass &RC, - LiveRegUnits const &UsedRegs, - BitVector const &AllocatableRegs, - std::optional<int> *MaybeSpillFI, - Register PreferredReg = AArch64::NoRegister) - : MBB(MBB), MBBI(MBBI), RC(RC), TII(static_cast<const AArch64InstrInfo &>( - *MF.getSubtarget().getInstrInfo())), - TRI(*MF.getSubtarget().getRegisterInfo()) { - FreeReg = tryScavengeRegister(UsedRegs, AllocatableRegs, PreferredReg); - if (FreeReg != AArch64::NoRegister) - return; - assert(MaybeSpillFI && "Expected emergency spill slot FI information " - "(attempted to spill in prologue/epilogue?)"); - if (!MaybeSpillFI->has_value()) { - MachineFrameInfo &MFI = MF.getFrameInfo(); - *MaybeSpillFI = MFI.CreateSpillStackObject(TRI.getSpillSize(RC), - TRI.getSpillAlign(RC)); - } - FreeReg = SpillCandidate; - SpillFI = MaybeSpillFI->value(); - TII.storeRegToStackSlot(MBB, MBBI, FreeReg, false, *SpillFI, &RC, &TRI, - Register()); - } - - bool hasSpilled() const { return SpillFI.has_value(); } - - /// Returns the free register (found from scavenging or spilling a register). - Register freeRegister() const { return FreeReg; } - - Register operator*() const { return freeRegister(); } - - ~ScopedScavengeOrSpill() { - if (hasSpilled()) - TII.loadRegFromStackSlot(MBB, MBBI, FreeReg, *SpillFI, &RC, &TRI, - Register()); - } - -private: - MachineBasicBlock &MBB; - MachineBasicBlock::iterator MBBI; - const TargetRegisterClass &RC; - const AArch64InstrInfo &TII; - const TargetRegisterInfo &TRI; - Register FreeReg = AArch64::NoRegister; - std::optional<int> SpillFI; -}; - -/// Emergency stack slots for expanding SPILL_PPR_TO_ZPR_SLOT_PSEUDO and -/// FILL_PPR_FROM_ZPR_SLOT_PSEUDO. -struct EmergencyStackSlots { - std::optional<int> ZPRSpillFI; - std::optional<int> PPRSpillFI; - std::optional<int> GPRSpillFI; -}; - -/// Registers available for scavenging (ZPR, PPR3b, GPR). -struct ScavengeableRegs { - BitVector ZPRRegs; - BitVector PPR3bRegs; - BitVector GPRRegs; -}; - -static bool isInPrologueOrEpilogue(const MachineInstr &MI) { - return MI.getFlag(MachineInstr::FrameSetup) || - MI.getFlag(MachineInstr::FrameDestroy); -} - -/// Expands: -/// ``` -/// SPILL_PPR_TO_ZPR_SLOT_PSEUDO $p0, %stack.0, 0 -/// ``` -/// To: -/// ``` -/// $z0 = CPY_ZPzI_B $p0, 1, 0 -/// STR_ZXI $z0, $stack.0, 0 -/// ``` -/// While ensuring a ZPR ($z0 in this example) is free for the predicate ( -/// spilling if necessary). -static void expandSpillPPRToZPRSlotPseudo(MachineBasicBlock &MBB, - MachineInstr &MI, - const TargetRegisterInfo &TRI, - LiveRegUnits const &UsedRegs, - ScavengeableRegs const &SR, - EmergencyStackSlots &SpillSlots) { - MachineFunction &MF = *MBB.getParent(); - auto *TII = - static_cast<const AArch64InstrInfo *>(MF.getSubtarget().getInstrInfo()); - - ScopedScavengeOrSpill ZPredReg( - MF, MBB, MI, AArch64::Z0, AArch64::ZPRRegClass, UsedRegs, SR.ZPRRegs, - isInPrologueOrEpilogue(MI) ? 
nullptr : &SpillSlots.ZPRSpillFI); - - SmallVector<MachineInstr *, 2> MachineInstrs; - const DebugLoc &DL = MI.getDebugLoc(); - MachineInstrs.push_back(BuildMI(MBB, MI, DL, TII->get(AArch64::CPY_ZPzI_B)) - .addReg(*ZPredReg, RegState::Define) - .add(MI.getOperand(0)) - .addImm(1) - .addImm(0) - .getInstr()); - MachineInstrs.push_back(BuildMI(MBB, MI, DL, TII->get(AArch64::STR_ZXI)) - .addReg(*ZPredReg) - .add(MI.getOperand(1)) - .addImm(MI.getOperand(2).getImm()) - .setMemRefs(MI.memoperands()) - .getInstr()); - propagateFrameFlags(MI, MachineInstrs); -} - -/// Expands: -/// ``` -/// $p0 = FILL_PPR_FROM_ZPR_SLOT_PSEUDO %stack.0, 0 -/// ``` -/// To: -/// ``` -/// $z0 = LDR_ZXI %stack.0, 0 -/// $p0 = PTRUE_B 31, implicit $vg -/// $p0 = CMPNE_PPzZI_B $p0, $z0, 0, implicit-def $nzcv, implicit-def $nzcv -/// ``` -/// While ensuring a ZPR ($z0 in this example) is free for the predicate ( -/// spilling if necessary). If the status flags are in use at the point of -/// expansion they are preserved (by moving them to/from a GPR). This may cause -/// an additional spill if no GPR is free at the expansion point. -static bool expandFillPPRFromZPRSlotPseudo( - MachineBasicBlock &MBB, MachineInstr &MI, const TargetRegisterInfo &TRI, - LiveRegUnits const &UsedRegs, ScavengeableRegs const &SR, - MachineInstr *&LastPTrue, EmergencyStackSlots &SpillSlots) { - MachineFunction &MF = *MBB.getParent(); - auto *TII = - static_cast<const AArch64InstrInfo *>(MF.getSubtarget().getInstrInfo()); - - ScopedScavengeOrSpill ZPredReg( - MF, MBB, MI, AArch64::Z0, AArch64::ZPRRegClass, UsedRegs, SR.ZPRRegs, - isInPrologueOrEpilogue(MI) ? nullptr : &SpillSlots.ZPRSpillFI); - - ScopedScavengeOrSpill PredReg( - MF, MBB, MI, AArch64::P0, AArch64::PPR_3bRegClass, UsedRegs, SR.PPR3bRegs, - isInPrologueOrEpilogue(MI) ? nullptr : &SpillSlots.PPRSpillFI, - /*PreferredReg=*/ - LastPTrue ? LastPTrue->getOperand(0).getReg() : AArch64::NoRegister); - - // Elide NZCV spills if we know it is not used. - bool IsNZCVUsed = !UsedRegs.available(AArch64::NZCV); - std::optional<ScopedScavengeOrSpill> NZCVSaveReg; - if (IsNZCVUsed) - NZCVSaveReg.emplace( - MF, MBB, MI, AArch64::X0, AArch64::GPR64RegClass, UsedRegs, SR.GPRRegs, - isInPrologueOrEpilogue(MI) ? nullptr : &SpillSlots.GPRSpillFI); - SmallVector<MachineInstr *, 4> MachineInstrs; - const DebugLoc &DL = MI.getDebugLoc(); - MachineInstrs.push_back(BuildMI(MBB, MI, DL, TII->get(AArch64::LDR_ZXI)) - .addReg(*ZPredReg, RegState::Define) - .add(MI.getOperand(1)) - .addImm(MI.getOperand(2).getImm()) - .setMemRefs(MI.memoperands()) - .getInstr()); - if (IsNZCVUsed) - MachineInstrs.push_back( - BuildMI(MBB, MI, DL, TII->get(AArch64::MRS)) - .addReg(NZCVSaveReg->freeRegister(), RegState::Define) - .addImm(AArch64SysReg::NZCV) - .addReg(AArch64::NZCV, RegState::Implicit) - .getInstr()); - - // Reuse previous ptrue if we know it has not been clobbered. 
- if (LastPTrue) { - assert(*PredReg == LastPTrue->getOperand(0).getReg()); - LastPTrue->moveBefore(&MI); - } else { - LastPTrue = BuildMI(MBB, MI, DL, TII->get(AArch64::PTRUE_B)) - .addReg(*PredReg, RegState::Define) - .addImm(31); - } - MachineInstrs.push_back(LastPTrue); - MachineInstrs.push_back( - BuildMI(MBB, MI, DL, TII->get(AArch64::CMPNE_PPzZI_B)) - .addReg(MI.getOperand(0).getReg(), RegState::Define) - .addReg(*PredReg) - .addReg(*ZPredReg) - .addImm(0) - .addReg(AArch64::NZCV, RegState::ImplicitDefine) - .getInstr()); - if (IsNZCVUsed) - MachineInstrs.push_back(BuildMI(MBB, MI, DL, TII->get(AArch64::MSR)) - .addImm(AArch64SysReg::NZCV) - .addReg(NZCVSaveReg->freeRegister()) - .addReg(AArch64::NZCV, RegState::ImplicitDefine) - .getInstr()); - - propagateFrameFlags(MI, MachineInstrs); - return PredReg.hasSpilled(); -} - -/// Expands all FILL_PPR_FROM_ZPR_SLOT_PSEUDO and SPILL_PPR_TO_ZPR_SLOT_PSEUDO -/// operations within the MachineBasicBlock \p MBB. -static bool expandSMEPPRToZPRSpillPseudos(MachineBasicBlock &MBB, - const TargetRegisterInfo &TRI, - ScavengeableRegs const &SR, - EmergencyStackSlots &SpillSlots) { - LiveRegUnits UsedRegs(TRI); - UsedRegs.addLiveOuts(MBB); - bool HasPPRSpills = false; - MachineInstr *LastPTrue = nullptr; - for (MachineInstr &MI : make_early_inc_range(reverse(MBB))) { - UsedRegs.stepBackward(MI); - switch (MI.getOpcode()) { - case AArch64::FILL_PPR_FROM_ZPR_SLOT_PSEUDO: - if (LastPTrue && - MI.definesRegister(LastPTrue->getOperand(0).getReg(), &TRI)) - LastPTrue = nullptr; - HasPPRSpills |= expandFillPPRFromZPRSlotPseudo(MBB, MI, TRI, UsedRegs, SR, - LastPTrue, SpillSlots); - MI.eraseFromParent(); - break; - case AArch64::SPILL_PPR_TO_ZPR_SLOT_PSEUDO: - expandSpillPPRToZPRSlotPseudo(MBB, MI, TRI, UsedRegs, SR, SpillSlots); - MI.eraseFromParent(); - [[fallthrough]]; - default: - LastPTrue = nullptr; - break; - } - } - - return HasPPRSpills; -} - void AArch64FrameLowering::processFunctionBeforeFrameFinalized( MachineFunction &MF, RegScavenger *RS) const { - - AArch64FunctionInfo *AFI = MF.getInfo<AArch64FunctionInfo>(); - const TargetSubtargetInfo &TSI = MF.getSubtarget(); - const TargetRegisterInfo &TRI = *TSI.getRegisterInfo(); - - // If predicates spills are 16-bytes we may need to expand - // SPILL_PPR_TO_ZPR_SLOT_PSEUDO/FILL_PPR_FROM_ZPR_SLOT_PSEUDO. - if (AFI->hasStackFrame() && TRI.getSpillSize(AArch64::PPRRegClass) == 16) { - auto ComputeScavengeableRegisters = [&](unsigned RegClassID) { - BitVector Regs = TRI.getAllocatableSet(MF, TRI.getRegClass(RegClassID)); - assert(Regs.count() > 0 && "Expected scavengeable registers"); - return Regs; - }; - - ScavengeableRegs SR{}; - SR.ZPRRegs = ComputeScavengeableRegisters(AArch64::ZPRRegClassID); - // Only p0-7 are possible as the second operand of cmpne (needed for fills). - SR.PPR3bRegs = ComputeScavengeableRegisters(AArch64::PPR_3bRegClassID); - SR.GPRRegs = ComputeScavengeableRegisters(AArch64::GPR64RegClassID); - - EmergencyStackSlots SpillSlots; - for (MachineBasicBlock &MBB : MF) { - // In the case we had to spill a predicate (in the range p0-p7) to reload - // a predicate (>= p8), additional spill/fill pseudos will be created. - // These need an additional expansion pass. Note: There will only be at - // most two expansion passes, as spilling/filling a predicate in the range - // p0-p7 never requires spilling another predicate. 
- for (int Pass = 0; Pass < 2; Pass++) { - bool HasPPRSpills = - expandSMEPPRToZPRSpillPseudos(MBB, TRI, SR, SpillSlots); - assert((Pass == 0 || !HasPPRSpills) && "Did not expect PPR spills"); - if (!HasPPRSpills) - break; - } - } - } - - MachineFrameInfo &MFI = MF.getFrameInfo(); - assert(getStackGrowthDirection() == TargetFrameLowering::StackGrowsDown && "Upwards growing stack unsupported"); @@ -3279,6 +2949,9 @@ void AArch64FrameLowering::processFunctionBeforeFrameFinalized( if (!MF.hasEHFunclets()) return; + MachineFrameInfo &MFI = MF.getFrameInfo(); + auto *AFI = MF.getInfo<AArch64FunctionInfo>(); + // Win64 C++ EH needs to allocate space for the catch objects in the fixed // object area right next to the UnwindHelp object. WinEHFuncInfo &EHInfo = *MF.getWinEHFuncInfo(); @@ -4280,18 +3953,10 @@ void AArch64FrameLowering::emitRemarks( } unsigned RegTy = StackAccess::AccessType::GPR; - if (MFI.hasScalableStackID(FrameIdx)) { - // SPILL_PPR_TO_ZPR_SLOT_PSEUDO and FILL_PPR_FROM_ZPR_SLOT_PSEUDO - // spill/fill the predicate as a data vector (so are an FPR access). - if (MI.getOpcode() != AArch64::SPILL_PPR_TO_ZPR_SLOT_PSEUDO && - MI.getOpcode() != AArch64::FILL_PPR_FROM_ZPR_SLOT_PSEUDO && - AArch64::PPRRegClass.contains(MI.getOperand(0).getReg())) { - RegTy = StackAccess::PPR; - } else - RegTy = StackAccess::FPR; - } else if (AArch64InstrInfo::isFpOrNEON(MI)) { + if (MFI.hasScalableStackID(FrameIdx)) + RegTy = isPPRAccess(MI) ? StackAccess::PPR : StackAccess::FPR; + else if (AArch64InstrInfo::isFpOrNEON(MI)) RegTy = StackAccess::FPR; - } StackAccesses[ArrIdx].AccessTypes |= RegTy; diff --git a/llvm/lib/Target/AArch64/AArch64InstrInfo.cpp b/llvm/lib/Target/AArch64/AArch64InstrInfo.cpp index 5a90da1..b8761d97 100644 --- a/llvm/lib/Target/AArch64/AArch64InstrInfo.cpp +++ b/llvm/lib/Target/AArch64/AArch64InstrInfo.cpp @@ -2579,8 +2579,6 @@ unsigned AArch64InstrInfo::getLoadStoreImmIdx(unsigned Opc) { case AArch64::STZ2Gi: case AArch64::STZGi: case AArch64::TAGPstack: - case AArch64::SPILL_PPR_TO_ZPR_SLOT_PSEUDO: - case AArch64::FILL_PPR_FROM_ZPR_SLOT_PSEUDO: return 2; case AArch64::LD1B_D_IMM: case AArch64::LD1B_H_IMM: @@ -4387,8 +4385,6 @@ bool AArch64InstrInfo::getMemOpInfo(unsigned Opcode, TypeSize &Scale, MinOffset = -256; MaxOffset = 254; break; - case AArch64::SPILL_PPR_TO_ZPR_SLOT_PSEUDO: - case AArch64::FILL_PPR_FROM_ZPR_SLOT_PSEUDO: case AArch64::LDR_ZXI: case AArch64::STR_ZXI: Scale = TypeSize::getScalable(16); @@ -5098,33 +5094,31 @@ void AArch64InstrInfo::copyPhysReg(MachineBasicBlock &MBB, BuildMI(MBB, I, DL, get(AArch64::MOVZWi), DestReg) .addImm(0) .addImm(AArch64_AM::getShifterImm(AArch64_AM::LSL, 0)); + } else if (Subtarget.hasZeroCycleRegMoveGPR64() && + !Subtarget.hasZeroCycleRegMoveGPR32()) { + // Cyclone recognizes "ORR Xd, XZR, Xm" as a zero-cycle register move. + MCRegister DestRegX = TRI->getMatchingSuperReg(DestReg, AArch64::sub_32, + &AArch64::GPR64spRegClass); + assert(DestRegX.isValid() && "Destination super-reg not valid"); + MCRegister SrcRegX = + SrcReg == AArch64::WZR + ? AArch64::XZR + : TRI->getMatchingSuperReg(SrcReg, AArch64::sub_32, + &AArch64::GPR64spRegClass); + assert(SrcRegX.isValid() && "Source super-reg not valid"); + // This instruction is reading and writing X registers. This may upset + // the register scavenger and machine verifier, so we need to indicate + // that we are reading an undefined value from SrcRegX, but a proper + // value from SrcReg. 
+ BuildMI(MBB, I, DL, get(AArch64::ORRXrr), DestRegX) + .addReg(AArch64::XZR) + .addReg(SrcRegX, RegState::Undef) + .addReg(SrcReg, RegState::Implicit | getKillRegState(KillSrc)); } else { - if (Subtarget.hasZeroCycleRegMoveGPR64() && - !Subtarget.hasZeroCycleRegMoveGPR32()) { - // Cyclone recognizes "ORR Xd, XZR, Xm" as a zero-cycle register move. - MCRegister DestRegX = TRI->getMatchingSuperReg( - DestReg, AArch64::sub_32, &AArch64::GPR64spRegClass); - assert(DestRegX.isValid() && "Destination super-reg not valid"); - MCRegister SrcRegX = - SrcReg == AArch64::WZR - ? AArch64::XZR - : TRI->getMatchingSuperReg(SrcReg, AArch64::sub_32, - &AArch64::GPR64spRegClass); - assert(SrcRegX.isValid() && "Source super-reg not valid"); - // This instruction is reading and writing X registers. This may upset - // the register scavenger and machine verifier, so we need to indicate - // that we are reading an undefined value from SrcRegX, but a proper - // value from SrcReg. - BuildMI(MBB, I, DL, get(AArch64::ORRXrr), DestRegX) - .addReg(AArch64::XZR) - .addReg(SrcRegX, RegState::Undef) - .addReg(SrcReg, RegState::Implicit | getKillRegState(KillSrc)); - } else { - // Otherwise, expand to ORR WZR. - BuildMI(MBB, I, DL, get(AArch64::ORRWrr), DestReg) - .addReg(AArch64::WZR) - .addReg(SrcReg, getKillRegState(KillSrc)); - } + // Otherwise, expand to ORR WZR. + BuildMI(MBB, I, DL, get(AArch64::ORRWrr), DestReg) + .addReg(AArch64::WZR) + .addReg(SrcReg, getKillRegState(KillSrc)); } return; } @@ -5650,11 +5644,6 @@ void AArch64InstrInfo::storeRegToStackSlot(MachineBasicBlock &MBB, "Unexpected register store without SVE store instructions"); Opc = AArch64::STR_ZXI; StackID = TargetStackID::ScalableVector; - } else if (AArch64::PPRRegClass.hasSubClassEq(RC)) { - assert(Subtarget.isSVEorStreamingSVEAvailable() && - "Unexpected predicate store without SVE store instructions"); - Opc = AArch64::SPILL_PPR_TO_ZPR_SLOT_PSEUDO; - StackID = TargetStackID::ScalableVector; } break; case 24: @@ -5835,11 +5824,6 @@ void AArch64InstrInfo::loadRegFromStackSlot( "Unexpected register load without SVE load instructions"); Opc = AArch64::LDR_ZXI; StackID = TargetStackID::ScalableVector; - } else if (AArch64::PPRRegClass.hasSubClassEq(RC)) { - assert(Subtarget.isSVEorStreamingSVEAvailable() && - "Unexpected predicate load without SVE load instructions"); - Opc = AArch64::FILL_PPR_FROM_ZPR_SLOT_PSEUDO; - StackID = TargetStackID::ScalableVector; } break; case 24: diff --git a/llvm/lib/Target/AArch64/AArch64PrologueEpilogue.cpp b/llvm/lib/Target/AArch64/AArch64PrologueEpilogue.cpp index aed137c..1568161 100644 --- a/llvm/lib/Target/AArch64/AArch64PrologueEpilogue.cpp +++ b/llvm/lib/Target/AArch64/AArch64PrologueEpilogue.cpp @@ -57,10 +57,7 @@ static bool isPartOfZPRCalleeSaves(MachineBasicBlock::iterator I) { case AArch64::ST1B_2Z_IMM: case AArch64::STR_ZXI: case AArch64::LDR_ZXI: - case AArch64::CPY_ZPzI_B: - case AArch64::CMPNE_PPzZI_B: case AArch64::PTRUE_C_B: - case AArch64::PTRUE_B: return I->getFlag(MachineInstr::FrameSetup) || I->getFlag(MachineInstr::FrameDestroy); case AArch64::SEH_SavePReg: diff --git a/llvm/lib/Target/AArch64/AArch64RegisterInfo.td b/llvm/lib/Target/AArch64/AArch64RegisterInfo.td index 5d89862..ef974df 100644 --- a/llvm/lib/Target/AArch64/AArch64RegisterInfo.td +++ b/llvm/lib/Target/AArch64/AArch64RegisterInfo.td @@ -980,19 +980,10 @@ class ZPRRegOp <string Suffix, AsmOperandClass C, ElementSizeEnum Size, //****************************************************************************** // SVE predicate 
register classes. - -// Note: This hardware mode is enabled in AArch64Subtarget::getHwModeSet() -// (without the use of the table-gen'd predicates). -def SMEWithZPRPredicateSpills : HwMode<[Predicate<"false">]>; - -def PPRSpillFillRI : RegInfoByHwMode< - [DefaultMode, SMEWithZPRPredicateSpills], - [RegInfo<16,16,16>, RegInfo<16,128,128>]>; - class PPRClass<int firstreg, int lastreg, int step = 1> : RegisterClass<"AArch64", [ nxv16i1, nxv8i1, nxv4i1, nxv2i1, nxv1i1 ], 16, (sequence "P%u", firstreg, lastreg, step)> { - let RegInfos = PPRSpillFillRI; + let Size = 16; } def PPR : PPRClass<0, 15> { diff --git a/llvm/lib/Target/AArch64/AArch64Subtarget.cpp b/llvm/lib/Target/AArch64/AArch64Subtarget.cpp index 98e0a11..12ddf47 100644 --- a/llvm/lib/Target/AArch64/AArch64Subtarget.cpp +++ b/llvm/lib/Target/AArch64/AArch64Subtarget.cpp @@ -86,11 +86,6 @@ static cl::alias AArch64StreamingStackHazardSize( cl::desc("alias for -aarch64-streaming-hazard-size"), cl::aliasopt(AArch64StreamingHazardSize)); -static cl::opt<bool> EnableZPRPredicateSpills( - "aarch64-enable-zpr-predicate-spills", cl::init(false), cl::Hidden, - cl::desc( - "Enables spilling/reloading SVE predicates as data vectors (ZPRs)")); - static cl::opt<unsigned> VScaleForTuningOpt("sve-vscale-for-tuning", cl::Hidden, cl::desc("Force a vscale for tuning factor for SVE")); @@ -426,20 +421,6 @@ AArch64Subtarget::AArch64Subtarget(const Triple &TT, StringRef CPU, EnableSubregLiveness = EnableSubregLivenessTracking.getValue(); } -unsigned AArch64Subtarget::getHwModeSet() const { - AArch64HwModeBits Modes = AArch64HwModeBits::DefaultMode; - - // Use a special hardware mode in streaming[-compatible] functions with - // aarch64-enable-zpr-predicate-spills. This changes the spill size (and - // alignment) for the predicate register class. - if (EnableZPRPredicateSpills.getValue() && - (isStreaming() || isStreamingCompatible())) { - Modes |= AArch64HwModeBits::SMEWithZPRPredicateSpills; - } - - return to_underlying(Modes); -} - const CallLowering *AArch64Subtarget::getCallLowering() const { return CallLoweringInfo.get(); } diff --git a/llvm/lib/Target/AArch64/AArch64Subtarget.h b/llvm/lib/Target/AArch64/AArch64Subtarget.h index 671df35..8974965 100644 --- a/llvm/lib/Target/AArch64/AArch64Subtarget.h +++ b/llvm/lib/Target/AArch64/AArch64Subtarget.h @@ -130,8 +130,6 @@ public: bool IsStreaming = false, bool IsStreamingCompatible = false, bool HasMinSize = false); - virtual unsigned getHwModeSet() const override; - // Getters for SubtargetFeatures defined in tablegen #define GET_SUBTARGETINFO_MACRO(ATTRIBUTE, DEFAULT, GETTER) \ bool GETTER() const { return ATTRIBUTE; } diff --git a/llvm/lib/Target/AArch64/AArch64TargetTransformInfo.cpp b/llvm/lib/Target/AArch64/AArch64TargetTransformInfo.cpp index 50a8754..479e345 100644 --- a/llvm/lib/Target/AArch64/AArch64TargetTransformInfo.cpp +++ b/llvm/lib/Target/AArch64/AArch64TargetTransformInfo.cpp @@ -5666,18 +5666,21 @@ InstructionCost AArch64TTIImpl::getPartialReductionCost( VectorType *AccumVectorType = VectorType::get(AccumType, VF.divideCoefficientBy(Ratio)); // We don't yet support all kinds of legalization. - auto TA = TLI->getTypeAction(AccumVectorType->getContext(), - EVT::getEVT(AccumVectorType)); - switch (TA) { + auto TC = TLI->getTypeConversion(AccumVectorType->getContext(), + EVT::getEVT(AccumVectorType)); + switch (TC.first) { default: return Invalid; case TargetLowering::TypeLegal: case TargetLowering::TypePromoteInteger: case TargetLowering::TypeSplitVector: + // The legalised type (e.g. 
after splitting) must be legal too. + if (TLI->getTypeAction(AccumVectorType->getContext(), TC.second) != + TargetLowering::TypeLegal) + return Invalid; break; } - // Check what kind of type-legalisation happens. std::pair<InstructionCost, MVT> AccumLT = getTypeLegalizationCost(AccumVectorType); std::pair<InstructionCost, MVT> InputLT = diff --git a/llvm/lib/Target/AArch64/SMEInstrFormats.td b/llvm/lib/Target/AArch64/SMEInstrFormats.td index be44b8f..33f35ad 100644 --- a/llvm/lib/Target/AArch64/SMEInstrFormats.td +++ b/llvm/lib/Target/AArch64/SMEInstrFormats.td @@ -58,20 +58,6 @@ def FORM_TRANSPOSED_REG_TUPLE_X4_PSEUDO : let hasSideEffects = 0; } -def SPILL_PPR_TO_ZPR_SLOT_PSEUDO : - Pseudo<(outs), (ins PPRorPNRAny:$Pt, GPR64sp:$Rn, simm9:$imm9), []>, Sched<[]> -{ - let mayStore = 1; - let hasSideEffects = 0; -} - -def FILL_PPR_FROM_ZPR_SLOT_PSEUDO : - Pseudo<(outs PPRorPNRAny:$Pt), (ins GPR64sp:$Rn, simm9:$imm9), []>, Sched<[]> -{ - let mayLoad = 1; - let hasSideEffects = 0; -} - def SDTZALoadStore : SDTypeProfile<0, 3, [SDTCisInt<0>, SDTCisPtrTy<1>, SDTCisInt<2>]>; // SME ZA loads and stores def AArch64SMELdr : SDNode<"AArch64ISD::SME_ZA_LDR", SDTZALoadStore, diff --git a/llvm/lib/Target/AMDGPU/AMDGPU.td b/llvm/lib/Target/AMDGPU/AMDGPU.td index ddb2381..1a697f7 100644 --- a/llvm/lib/Target/AMDGPU/AMDGPU.td +++ b/llvm/lib/Target/AMDGPU/AMDGPU.td @@ -1411,20 +1411,6 @@ def FeatureGloballyAddressableScratch : SubtargetFeature< "FLAT instructions can access scratch memory for any thread in any wave" >; -// FIXME: Remove after all users are migrated to attribute. -def FeatureDynamicVGPR : SubtargetFeature <"dynamic-vgpr", - "DynamicVGPR", - "true", - "Enable dynamic VGPR mode" ->; - -// FIXME: Remove after all users are migrated to attribute. -def FeatureDynamicVGPRBlockSize32 : SubtargetFeature<"dynamic-vgpr-block-size-32", - "DynamicVGPRBlockSize32", - "true", - "Use a block size of 32 for dynamic VGPR allocation (default is 16)" ->; - // Enable the use of SCRATCH_STORE/LOAD_BLOCK instructions for saving and // restoring the callee-saved registers. def FeatureUseBlockVGPROpsForCSR : SubtargetFeature<"block-vgpr-csr", @@ -1462,10 +1448,10 @@ def Feature45BitNumRecordsBufferResource : SubtargetFeature< "45-bit-num-records "The buffer resource (V#) supports 45-bit num_records" >; -def FeatureCluster : SubtargetFeature< "cluster", - "HasCluster", +def FeatureClusters : SubtargetFeature< "clusters", + "HasClusters", "true", - "Has cluster support" + "Has clusters of workgroups support" >; // Dummy feature used to disable assembler instructions. @@ -2134,7 +2120,7 @@ def FeatureISAVersion12_50 : FeatureSet< Feature45BitNumRecordsBufferResource, FeatureSupportsXNACK, FeatureXNACK, - FeatureCluster, + FeatureClusters, ]>; def FeatureISAVersion12_51 : FeatureSet< diff --git a/llvm/lib/Target/AMDGPU/AMDGPURegisterBankInfo.cpp b/llvm/lib/Target/AMDGPU/AMDGPURegisterBankInfo.cpp index 848d9a5..557d87f 100644 --- a/llvm/lib/Target/AMDGPU/AMDGPURegisterBankInfo.cpp +++ b/llvm/lib/Target/AMDGPU/AMDGPURegisterBankInfo.cpp @@ -5043,6 +5043,9 @@ AMDGPURegisterBankInfo::getInstrMapping(const MachineInstr &MI) const { case Intrinsic::amdgcn_mfma_i32_16x16x64_i8: case Intrinsic::amdgcn_mfma_i32_32x32x32_i8: case Intrinsic::amdgcn_mfma_f32_16x16x32_bf16: { + unsigned DstSize = MRI.getType(MI.getOperand(0).getReg()).getSizeInBits(); + unsigned MinNumRegsRequired = DstSize / 32; + // Default for MAI intrinsics. // srcC can also be an immediate which can be folded later. 
// FIXME: Should we eventually add an alternative mapping with AGPR src @@ -5051,29 +5054,32 @@ AMDGPURegisterBankInfo::getInstrMapping(const MachineInstr &MI) const { // vdst, srcA, srcB, srcC const SIMachineFunctionInfo *Info = MF.getInfo<SIMachineFunctionInfo>(); OpdsMapping[0] = - Info->mayNeedAGPRs() + Info->getMinNumAGPRs() >= MinNumRegsRequired ? getAGPROpMapping(MI.getOperand(0).getReg(), MRI, *TRI) : getVGPROpMapping(MI.getOperand(0).getReg(), MRI, *TRI); OpdsMapping[2] = getVGPROpMapping(MI.getOperand(2).getReg(), MRI, *TRI); OpdsMapping[3] = getVGPROpMapping(MI.getOperand(3).getReg(), MRI, *TRI); OpdsMapping[4] = - Info->mayNeedAGPRs() + Info->getMinNumAGPRs() >= MinNumRegsRequired ? getAGPROpMapping(MI.getOperand(4).getReg(), MRI, *TRI) : getVGPROpMapping(MI.getOperand(4).getReg(), MRI, *TRI); break; } case Intrinsic::amdgcn_mfma_scale_f32_16x16x128_f8f6f4: case Intrinsic::amdgcn_mfma_scale_f32_32x32x64_f8f6f4: { + unsigned DstSize = MRI.getType(MI.getOperand(0).getReg()).getSizeInBits(); + unsigned MinNumRegsRequired = DstSize / 32; + const SIMachineFunctionInfo *Info = MF.getInfo<SIMachineFunctionInfo>(); OpdsMapping[0] = - Info->mayNeedAGPRs() + Info->getMinNumAGPRs() >= MinNumRegsRequired ? getAGPROpMapping(MI.getOperand(0).getReg(), MRI, *TRI) : getVGPROpMapping(MI.getOperand(0).getReg(), MRI, *TRI); OpdsMapping[2] = getVGPROpMapping(MI.getOperand(2).getReg(), MRI, *TRI); OpdsMapping[3] = getVGPROpMapping(MI.getOperand(3).getReg(), MRI, *TRI); OpdsMapping[4] = - Info->mayNeedAGPRs() + Info->getMinNumAGPRs() >= MinNumRegsRequired ? getAGPROpMapping(MI.getOperand(4).getReg(), MRI, *TRI) : getVGPROpMapping(MI.getOperand(4).getReg(), MRI, *TRI); diff --git a/llvm/lib/Target/AMDGPU/AsmParser/AMDGPUAsmParser.cpp b/llvm/lib/Target/AMDGPU/AsmParser/AMDGPUAsmParser.cpp index a67a7be..d0c0822 100644 --- a/llvm/lib/Target/AMDGPU/AsmParser/AMDGPUAsmParser.cpp +++ b/llvm/lib/Target/AMDGPU/AsmParser/AMDGPUAsmParser.cpp @@ -1944,6 +1944,7 @@ public: void cvtVOP3Interp(MCInst &Inst, const OperandVector &Operands); void cvtVINTERP(MCInst &Inst, const OperandVector &Operands); + void cvtOpSelHelper(MCInst &Inst, unsigned OpSel); bool parseDimId(unsigned &Encoding); ParseStatus parseDim(OperandVector &Operands); @@ -9239,6 +9240,33 @@ static bool isRegOrImmWithInputMods(const MCInstrDesc &Desc, unsigned OpNum) { MCOI::OperandConstraint::TIED_TO) == -1; } +void AMDGPUAsmParser::cvtOpSelHelper(MCInst &Inst, unsigned OpSel) { + unsigned Opc = Inst.getOpcode(); + constexpr AMDGPU::OpName Ops[] = {AMDGPU::OpName::src0, AMDGPU::OpName::src1, + AMDGPU::OpName::src2}; + constexpr AMDGPU::OpName ModOps[] = {AMDGPU::OpName::src0_modifiers, + AMDGPU::OpName::src1_modifiers, + AMDGPU::OpName::src2_modifiers}; + for (int J = 0; J < 3; ++J) { + int OpIdx = AMDGPU::getNamedOperandIdx(Opc, Ops[J]); + if (OpIdx == -1) + // Some instructions, e.g. v_interp_p2_f16 in GFX9, have src0, src2, but + // no src1. So continue instead of break. + continue; + + int ModIdx = AMDGPU::getNamedOperandIdx(Opc, ModOps[J]); + uint32_t ModVal = Inst.getOperand(ModIdx).getImm(); + + if ((OpSel & (1 << J)) != 0) + ModVal |= SISrcMods::OP_SEL_0; + // op_sel[3] is encoded in src0_modifiers. 
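+    // (The dst operand has no modifiers word of its own, so its half-select
+    // bit rides along with src0.)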
+ if (ModOps[J] == AMDGPU::OpName::src0_modifiers && (OpSel & (1 << 3)) != 0) + ModVal |= SISrcMods::DST_OP_SEL; + + Inst.getOperand(ModIdx).setImm(ModVal); + } +} + void AMDGPUAsmParser::cvtVOP3Interp(MCInst &Inst, const OperandVector &Operands) { OptionalImmIndexMap OptionalIdx; @@ -9275,6 +9303,16 @@ void AMDGPUAsmParser::cvtVOP3Interp(MCInst &Inst, const OperandVector &Operands) if (AMDGPU::hasNamedOperand(Opc, AMDGPU::OpName::omod)) addOptionalImmOperand(Inst, Operands, OptionalIdx, AMDGPUOperand::ImmTyOModSI); + + // Some v_interp instructions use op_sel[3] for dst. + if (AMDGPU::hasNamedOperand(Opc, AMDGPU::OpName::op_sel)) { + addOptionalImmOperand(Inst, Operands, OptionalIdx, + AMDGPUOperand::ImmTyOpSel); + int OpSelIdx = AMDGPU::getNamedOperandIdx(Opc, AMDGPU::OpName::op_sel); + unsigned OpSel = Inst.getOperand(OpSelIdx).getImm(); + + cvtOpSelHelper(Inst, OpSel); + } } void AMDGPUAsmParser::cvtVINTERP(MCInst &Inst, const OperandVector &Operands) @@ -9310,31 +9348,10 @@ void AMDGPUAsmParser::cvtVINTERP(MCInst &Inst, const OperandVector &Operands) if (OpSelIdx == -1) return; - const AMDGPU::OpName Ops[] = {AMDGPU::OpName::src0, AMDGPU::OpName::src1, - AMDGPU::OpName::src2}; - const AMDGPU::OpName ModOps[] = {AMDGPU::OpName::src0_modifiers, - AMDGPU::OpName::src1_modifiers, - AMDGPU::OpName::src2_modifiers}; - unsigned OpSel = Inst.getOperand(OpSelIdx).getImm(); - - for (int J = 0; J < 3; ++J) { - int OpIdx = AMDGPU::getNamedOperandIdx(Opc, Ops[J]); - if (OpIdx == -1) - break; - - int ModIdx = AMDGPU::getNamedOperandIdx(Opc, ModOps[J]); - uint32_t ModVal = Inst.getOperand(ModIdx).getImm(); - - if ((OpSel & (1 << J)) != 0) - ModVal |= SISrcMods::OP_SEL_0; - if (ModOps[J] == AMDGPU::OpName::src0_modifiers && - (OpSel & (1 << 3)) != 0) - ModVal |= SISrcMods::DST_OP_SEL; - - Inst.getOperand(ModIdx).setImm(ModVal); - } + cvtOpSelHelper(Inst, OpSel); } + void AMDGPUAsmParser::cvtScaledMFMA(MCInst &Inst, const OperandVector &Operands) { OptionalImmIndexMap OptionalIdx; diff --git a/llvm/lib/Target/AMDGPU/GCNSubtarget.cpp b/llvm/lib/Target/AMDGPU/GCNSubtarget.cpp index 7b94ea3..f291e37 100644 --- a/llvm/lib/Target/AMDGPU/GCNSubtarget.cpp +++ b/llvm/lib/Target/AMDGPU/GCNSubtarget.cpp @@ -541,7 +541,7 @@ unsigned GCNSubtarget::getMaxNumSGPRs(const Function &F) const { unsigned GCNSubtarget::getBaseMaxNumVGPRs( const Function &F, std::pair<unsigned, unsigned> NumVGPRBounds) const { - const auto &[Min, Max] = NumVGPRBounds; + const auto [Min, Max] = NumVGPRBounds; // Check if maximum number of VGPRs was explicitly requested using // "amdgpu-num-vgpr" attribute. diff --git a/llvm/lib/Target/AMDGPU/GCNSubtarget.h b/llvm/lib/Target/AMDGPU/GCNSubtarget.h index 879bf5a..c2e6078 100644 --- a/llvm/lib/Target/AMDGPU/GCNSubtarget.h +++ b/llvm/lib/Target/AMDGPU/GCNSubtarget.h @@ -288,7 +288,7 @@ protected: bool Has45BitNumRecordsBufferResource = false; - bool HasCluster = false; + bool HasClusters = false; // Dummy feature to use for assembler in tablegen. bool FeatureDisable = false; @@ -1839,7 +1839,7 @@ public: } /// \returns true if the subtarget supports clusters of workgroups. - bool hasClusters() const { return HasCluster; } + bool hasClusters() const { return HasClusters; } /// \returns true if the subtarget requires a wait for xcnt before atomic /// flat/global stores & rmw. 
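The assembler change above and the printer change just below are two halves of the same op_sel handling for the v_interp instructions: op_sel bit J selects the high half of source J and is folded into that source's modifiers word, while op_sel[3], the destination half-select, is carried in src0_modifiers because the destination has no modifiers operand. A minimal standalone sketch of that folding; the flag values and names here are illustrative stand-ins for SISrcMods::OP_SEL_0 and SISrcMods::DST_OP_SEL, not the in-tree definitions:

    #include <array>
    #include <cstdint>

    constexpr uint32_t OpSel0Flag = 1u << 2;   // assumed stand-in for OP_SEL_0
    constexpr uint32_t DstOpSelFlag = 1u << 3; // assumed stand-in for DST_OP_SEL

    // Mods[J] holds srcJ's modifiers word; -1 marks a source the instruction
    // lacks (e.g. v_interp_p2_f16 has src0 and src2 but no src1), which is
    // skipped with `continue` rather than ending the loop early.
    void applyOpSel(unsigned OpSel, std::array<int64_t, 3> &Mods) {
      for (int J = 0; J < 3; ++J) {
        if (Mods[J] < 0)
          continue;
        if (OpSel & (1u << J))
          Mods[J] |= OpSel0Flag;
        if (J == 0 && (OpSel & (1u << 3)))
          Mods[J] |= DstOpSelFlag; // dst half-select rides in src0_modifiers
      }
    }
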
diff --git a/llvm/lib/Target/AMDGPU/MCTargetDesc/AMDGPUInstPrinter.cpp b/llvm/lib/Target/AMDGPU/MCTargetDesc/AMDGPUInstPrinter.cpp index d3b5718..3563caa 100644 --- a/llvm/lib/Target/AMDGPU/MCTargetDesc/AMDGPUInstPrinter.cpp +++ b/llvm/lib/Target/AMDGPU/MCTargetDesc/AMDGPUInstPrinter.cpp @@ -1280,6 +1280,17 @@ void AMDGPUInstPrinter::printPackedModifier(const MCInst *MI, (ModIdx != -1) ? MI->getOperand(ModIdx).getImm() : DefaultValue; } + // Some instructions, e.g. v_interp_p2_f16 in GFX9, have src0, src2, but no + // src1. + if (NumOps == 1 && AMDGPU::hasNamedOperand(Opc, AMDGPU::OpName::src2) && + !AMDGPU::hasNamedOperand(Opc, AMDGPU::OpName::src1)) { + Ops[NumOps++] = DefaultValue; // Set src1_modifiers to default. + int Mod2Idx = + AMDGPU::getNamedOperandIdx(Opc, AMDGPU::OpName::src2_modifiers); + assert(Mod2Idx != -1); + Ops[NumOps++] = MI->getOperand(Mod2Idx).getImm(); + } + const bool HasDst = (AMDGPU::getNamedOperandIdx(Opc, AMDGPU::OpName::vdst) != -1) || (AMDGPU::getNamedOperandIdx(Opc, AMDGPU::OpName::sdst) != -1); diff --git a/llvm/lib/Target/AMDGPU/SIISelLowering.cpp b/llvm/lib/Target/AMDGPU/SIISelLowering.cpp index e233457..1a686a9 100644 --- a/llvm/lib/Target/AMDGPU/SIISelLowering.cpp +++ b/llvm/lib/Target/AMDGPU/SIISelLowering.cpp @@ -17346,74 +17346,24 @@ void SITargetLowering::AdjustInstrPostInstrSelection(MachineInstr &MI, MachineFunction *MF = MI.getParent()->getParent(); MachineRegisterInfo &MRI = MF->getRegInfo(); - SIMachineFunctionInfo *Info = MF->getInfo<SIMachineFunctionInfo>(); if (TII->isVOP3(MI.getOpcode())) { // Make sure constant bus requirements are respected. TII->legalizeOperandsVOP3(MRI, MI); - // Prefer VGPRs over AGPRs in mAI instructions where possible. - // This saves a chain-copy of registers and better balance register - // use between vgpr and agpr as agpr tuples tend to be big. - if (!MI.getDesc().operands().empty()) { - unsigned Opc = MI.getOpcode(); - bool HasAGPRs = Info->mayNeedAGPRs(); - const SIRegisterInfo *TRI = Subtarget->getRegisterInfo(); - int16_t Src2Idx = AMDGPU::getNamedOperandIdx(Opc, AMDGPU::OpName::src2); - for (auto I : - {AMDGPU::getNamedOperandIdx(Opc, AMDGPU::OpName::src0), - AMDGPU::getNamedOperandIdx(Opc, AMDGPU::OpName::src1), Src2Idx}) { - if (I == -1) - break; - if ((I == Src2Idx) && (HasAGPRs)) - break; - MachineOperand &Op = MI.getOperand(I); - if (!Op.isReg() || !Op.getReg().isVirtual()) - continue; - auto *RC = TRI->getRegClassForReg(MRI, Op.getReg()); - if (!TRI->hasAGPRs(RC)) - continue; - auto *Src = MRI.getUniqueVRegDef(Op.getReg()); - if (!Src || !Src->isCopy() || - !TRI->isSGPRReg(MRI, Src->getOperand(1).getReg())) - continue; - auto *NewRC = TRI->getEquivalentVGPRClass(RC); - // All uses of agpr64 and agpr32 can also accept vgpr except for - // v_accvgpr_read, but we do not produce agpr reads during selection, - // so no use checks are needed. - MRI.setRegClass(Op.getReg(), NewRC); - } - - if (TII->isMAI(MI)) { - // The ordinary src0, src1, src2 were legalized above. - // - // We have to also legalize the appended v_mfma_ld_scale_b32 operands, - // as a separate instruction. - int Src0Idx = AMDGPU::getNamedOperandIdx(MI.getOpcode(), - AMDGPU::OpName::scale_src0); - if (Src0Idx != -1) { - int Src1Idx = AMDGPU::getNamedOperandIdx(MI.getOpcode(), - AMDGPU::OpName::scale_src1); - if (TII->usesConstantBus(MRI, MI, Src0Idx) && - TII->usesConstantBus(MRI, MI, Src1Idx)) - TII->legalizeOpWithMove(MI, Src1Idx); - } - } - - if (!HasAGPRs) - return; - - // Resolve the rest of AV operands to AGPRs. 
- if (auto *Src2 = TII->getNamedOperand(MI, AMDGPU::OpName::src2)) { - if (Src2->isReg() && Src2->getReg().isVirtual()) { - auto *RC = TRI->getRegClassForReg(MRI, Src2->getReg()); - if (TRI->isVectorSuperClass(RC)) { - auto *NewRC = TRI->getEquivalentAGPRClass(RC); - MRI.setRegClass(Src2->getReg(), NewRC); - if (Src2->isTied()) - MRI.setRegClass(MI.getOperand(0).getReg(), NewRC); - } - } + if (TII->isMAI(MI)) { + // The ordinary src0, src1, src2 were legalized above. + // + // We have to also legalize the appended v_mfma_ld_scale_b32 operands, + // as a separate instruction. + int Src0Idx = AMDGPU::getNamedOperandIdx(MI.getOpcode(), + AMDGPU::OpName::scale_src0); + if (Src0Idx != -1) { + int Src1Idx = AMDGPU::getNamedOperandIdx(MI.getOpcode(), + AMDGPU::OpName::scale_src1); + if (TII->usesConstantBus(MRI, MI, Src0Idx) && + TII->usesConstantBus(MRI, MI, Src1Idx)) + TII->legalizeOpWithMove(MI, Src1Idx); } } diff --git a/llvm/lib/Target/AMDGPU/SIMachineFunctionInfo.cpp b/llvm/lib/Target/AMDGPU/SIMachineFunctionInfo.cpp index 908d856..b398db4 100644 --- a/llvm/lib/Target/AMDGPU/SIMachineFunctionInfo.cpp +++ b/llvm/lib/Target/AMDGPU/SIMachineFunctionInfo.cpp @@ -33,17 +33,20 @@ using namespace llvm; // optimal RC for Opc and Dest of MFMA. In particular, there are high RP cases // where it is better to produce the VGPR form (e.g. if there are VGPR users // of the MFMA result). -static cl::opt<bool> MFMAVGPRForm( - "amdgpu-mfma-vgpr-form", cl::Hidden, +static cl::opt<bool, true> MFMAVGPRFormOpt( + "amdgpu-mfma-vgpr-form", cl::desc("Whether to force use VGPR for Opc and Dest of MFMA. If " "unspecified, default to compiler heuristics"), - cl::init(false)); + cl::location(SIMachineFunctionInfo::MFMAVGPRForm), cl::init(false), + cl::Hidden); const GCNTargetMachine &getTM(const GCNSubtarget *STI) { const SITargetLowering *TLI = STI->getTargetLowering(); return static_cast<const GCNTargetMachine &>(TLI->getTargetMachine()); } +bool SIMachineFunctionInfo::MFMAVGPRForm = false; + SIMachineFunctionInfo::SIMachineFunctionInfo(const Function &F, const GCNSubtarget *STI) : AMDGPUMachineFunction(F, *STI), Mode(F, *STI), GWSResourcePSV(getTM(STI)), @@ -81,14 +84,13 @@ SIMachineFunctionInfo::SIMachineFunctionInfo(const Function &F, PSInputAddr = AMDGPU::getInitialPSInputAddr(F); } - MayNeedAGPRs = ST.hasMAIInsts(); if (ST.hasGFX90AInsts()) { - // FIXME: MayNeedAGPRs is a misnomer for how this is used. MFMA selection - // should be separated from availability of AGPRs - if (MFMAVGPRForm || - (ST.getMaxNumVGPRs(F) <= ST.getAddressableNumArchVGPRs() && - !mayUseAGPRs(F))) - MayNeedAGPRs = false; // We will select all MAI with VGPR operands. + // FIXME: Extract logic out of getMaxNumVectorRegs; we need to apply the + // allocation granule and clamping. + auto [MinNumAGPRAttr, MaxNumAGPRAttr] = + AMDGPU::getIntegerPairAttribute(F, "amdgpu-agpr-alloc", {~0u, ~0u}, + /*OnlyFirstRequired=*/true); + MinNumAGPRs = MinNumAGPRAttr; } if (AMDGPU::isChainCC(CC)) { diff --git a/llvm/lib/Target/AMDGPU/SIMachineFunctionInfo.h b/llvm/lib/Target/AMDGPU/SIMachineFunctionInfo.h index 4560615..b7dbb59 100644 --- a/llvm/lib/Target/AMDGPU/SIMachineFunctionInfo.h +++ b/llvm/lib/Target/AMDGPU/SIMachineFunctionInfo.h @@ -509,7 +509,9 @@ private: // user arguments. This is an offset from the KernargSegmentPtr. bool ImplicitArgPtr : 1; - bool MayNeedAGPRs : 1; + /// Minimum number of AGPRs required to allocate in the function. Only + /// relevant for gfx90a-gfx950. For gfx908, this should be infinite. 
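+  /// The ~0u initializer below is the "unconstrained" sentinel: it is the
+  /// default getIntegerPairAttribute returns when "amdgpu-agpr-alloc" is
+  /// absent, and it is never overwritten on gfx908, matching the note above.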
+ unsigned MinNumAGPRs = ~0u; // The hard-wired high half of the address of the global information table // for AMDPAL OS type. 0xffffffff represents no hard-wired high half, since @@ -537,6 +539,8 @@ private: void MRI_NoteCloneVirtualRegister(Register NewReg, Register SrcReg) override; public: + static bool MFMAVGPRForm; + struct VGPRSpillToAGPR { SmallVector<MCPhysReg, 32> Lanes; bool FullyAllocated = false; @@ -1196,9 +1200,7 @@ public: unsigned getMaxMemoryClusterDWords() const { return MaxMemoryClusterDWords; } - bool mayNeedAGPRs() const { - return MayNeedAGPRs; - } + unsigned getMinNumAGPRs() const { return MinNumAGPRs; } // \returns true if a function has a use of AGPRs via inline asm or // has a call which may use it. diff --git a/llvm/lib/Target/AMDGPU/SIRegisterInfo.cpp b/llvm/lib/Target/AMDGPU/SIRegisterInfo.cpp index 3c2dd42..3115579 100644 --- a/llvm/lib/Target/AMDGPU/SIRegisterInfo.cpp +++ b/llvm/lib/Target/AMDGPU/SIRegisterInfo.cpp @@ -1118,12 +1118,7 @@ SIRegisterInfo::getPointerRegClass(unsigned Kind) const { const TargetRegisterClass * SIRegisterInfo::getCrossCopyRegClass(const TargetRegisterClass *RC) const { - if (isAGPRClass(RC) && !ST.hasGFX90AInsts()) - return getEquivalentVGPRClass(RC); - if (RC == &AMDGPU::SCC_CLASSRegClass) - return getWaveMaskRegClass(); - - return RC; + return RC == &AMDGPU::SCC_CLASSRegClass ? &AMDGPU::SReg_32RegClass : RC; } static unsigned getNumSubRegsForSpillOp(const MachineInstr &MI, diff --git a/llvm/lib/Target/AMDGPU/Utils/AMDGPUBaseInfo.cpp b/llvm/lib/Target/AMDGPU/Utils/AMDGPUBaseInfo.cpp index 20fa141..f7f4d46 100644 --- a/llvm/lib/Target/AMDGPU/Utils/AMDGPUBaseInfo.cpp +++ b/llvm/lib/Target/AMDGPU/Utils/AMDGPUBaseInfo.cpp @@ -1353,11 +1353,6 @@ unsigned getVGPRAllocGranule(const MCSubtargetInfo *STI, if (DynamicVGPRBlockSize != 0) return DynamicVGPRBlockSize; - // Temporarily check the subtarget feature, until we fully switch to using - // attributes. - if (STI->getFeatureBits().test(FeatureDynamicVGPR)) - return STI->getFeatureBits().test(FeatureDynamicVGPRBlockSize32) ? 32 : 16; - bool IsWave32 = EnableWavefrontSize32 ? *EnableWavefrontSize32 : STI->getFeatureBits().test(FeatureWavefrontSize32); @@ -1412,10 +1407,7 @@ unsigned getAddressableNumVGPRs(const MCSubtargetInfo *STI, if (Features.test(FeatureGFX90AInsts)) return 512; - // Temporarily check the subtarget feature, until we fully switch to using - // attributes. - if (DynamicVGPRBlockSize != 0 || - STI->getFeatureBits().test(FeatureDynamicVGPR)) + if (DynamicVGPRBlockSize != 0) // On GFX12 we can allocate at most 8 blocks of VGPRs. 
return 8 * getVGPRAllocGranule(STI, DynamicVGPRBlockSize); return getAddressableNumArchVGPRs(STI); diff --git a/llvm/lib/Target/AMDGPU/VOP3Instructions.td b/llvm/lib/Target/AMDGPU/VOP3Instructions.td index 4a2b54d..42ec8ba 100644 --- a/llvm/lib/Target/AMDGPU/VOP3Instructions.td +++ b/llvm/lib/Target/AMDGPU/VOP3Instructions.td @@ -97,6 +97,7 @@ class VOP3Interp<string OpName, VOPProfile P, list<dag> pattern = []> : VOP3_Pseudo<OpName, P, pattern> { let AsmMatchConverter = "cvtVOP3Interp"; let mayRaiseFPException = 0; + let VOP3_OPSEL = P.HasOpSel; } def VOP3_INTERP : VOPProfile<[f32, f32, i32, untyped]> { @@ -119,16 +120,17 @@ def VOP3_INTERP_MOV : VOPProfile<[f32, i32, i32, untyped]> { let HasSrc0Mods = 0; } -class getInterp16Asm <bit HasSrc2, bit HasOMod> { +class getInterp16Asm <bit HasSrc2, bit HasOMod, bit OpSel> { string src2 = !if(HasSrc2, ", $src2_modifiers", ""); string omod = !if(HasOMod, "$omod", ""); + string opsel = !if(OpSel, "$op_sel", ""); string ret = - " $vdst, $src0_modifiers, $attr$attrchan"#src2#"$high$clamp"#omod; + " $vdst, $src0_modifiers, $attr$attrchan"#src2#"$high$clamp"#omod#opsel; } class getInterp16Ins <bit HasSrc2, bit HasOMod, - Operand Src0Mod, Operand Src2Mod> { - dag ret = !if(HasSrc2, + Operand Src0Mod, Operand Src2Mod, bit OpSel> { + dag ret1 = !if(HasSrc2, !if(HasOMod, (ins Src0Mod:$src0_modifiers, VRegSrc_32:$src0, InterpAttr:$attr, InterpAttrChan:$attrchan, @@ -143,19 +145,22 @@ class getInterp16Ins <bit HasSrc2, bit HasOMod, InterpAttr:$attr, InterpAttrChan:$attrchan, highmod:$high, Clamp0:$clamp, omod0:$omod) ); + dag ret2 = !if(OpSel, (ins op_sel0:$op_sel), (ins)); + dag ret = !con(ret1, ret2); } -class VOP3_INTERP16 <list<ValueType> ArgVT> : VOPProfile<ArgVT> { +class VOP3_INTERP16 <list<ValueType> ArgVT, bit OpSel = 0> : VOPProfile<ArgVT> { let IsSingle = 1; let HasOMod = !ne(DstVT.Value, f16.Value); let HasHigh = 1; + let HasOpSel = OpSel; let Src0Mod = FPVRegInputMods; let Src2Mod = FPVRegInputMods; let Outs64 = (outs DstRC.RegClass:$vdst); - let Ins64 = getInterp16Ins<HasSrc2, HasOMod, Src0Mod, Src2Mod>.ret; - let Asm64 = getInterp16Asm<HasSrc2, HasOMod>.ret; + let Ins64 = getInterp16Ins<HasSrc2, HasOMod, Src0Mod, Src2Mod, OpSel>.ret; + let Asm64 = getInterp16Asm<HasSrc2, HasOMod, OpSel>.ret; } //===----------------------------------------------------------------------===// @@ -480,7 +485,7 @@ let SubtargetPredicate = isGFX9Plus in { defm V_MAD_U16_gfx9 : VOP3Inst_t16 <"v_mad_u16_gfx9", VOP_I16_I16_I16_I16>; defm V_MAD_I16_gfx9 : VOP3Inst_t16 <"v_mad_i16_gfx9", VOP_I16_I16_I16_I16>; let OtherPredicates = [isNotGFX90APlus] in -def V_INTERP_P2_F16_gfx9 : VOP3Interp <"v_interp_p2_f16_gfx9", VOP3_INTERP16<[f16, f32, i32, f32]>>; +def V_INTERP_P2_F16_opsel : VOP3Interp <"v_interp_p2_f16_opsel", VOP3_INTERP16<[f16, f32, i32, f32], /*OpSel*/ 1>>; } // End SubtargetPredicate = isGFX9Plus // This predicate should only apply to the selection pattern. 
The @@ -2676,6 +2681,14 @@ multiclass VOP3Interp_F16_Real_gfx9<bits<10> op, string OpName, string AsmName> } } +multiclass VOP3Interp_F16_OpSel_Real_gfx9<bits<10> op, string OpName, string AsmName> { + def _gfx9 : VOP3_Real<!cast<VOP3_Pseudo>(OpName), SIEncodingFamily.GFX9>, + VOP3Interp_OpSel_gfx9 <op, !cast<VOP3_Pseudo>(OpName).Pfl> { + VOP3_Pseudo ps = !cast<VOP3_Pseudo>(OpName); + let AsmString = AsmName # ps.AsmOperands; + } +} + multiclass VOP3_Real_gfx9<bits<10> op, string AsmName> { def _gfx9 : VOP3_Real<!cast<VOP_Pseudo>(NAME#"_e64"), SIEncodingFamily.GFX9>, VOP3e_vi <op, !cast<VOP_Pseudo>(NAME#"_e64").Pfl> { @@ -2788,7 +2801,7 @@ defm V_MAD_U16_gfx9 : VOP3OpSel_F16_Real_gfx9 <0x204, "v_mad_u16">; defm V_MAD_I16_gfx9 : VOP3OpSel_F16_Real_gfx9 <0x205, "v_mad_i16">; defm V_FMA_F16_gfx9 : VOP3OpSel_F16_Real_gfx9 <0x206, "v_fma_f16">; defm V_DIV_FIXUP_F16_gfx9 : VOP3OpSel_F16_Real_gfx9 <0x207, "v_div_fixup_f16">; -defm V_INTERP_P2_F16_gfx9 : VOP3Interp_F16_Real_gfx9 <0x277, "V_INTERP_P2_F16_gfx9", "v_interp_p2_f16">; +defm V_INTERP_P2_F16_opsel : VOP3Interp_F16_OpSel_Real_gfx9 <0x277, "V_INTERP_P2_F16_opsel", "v_interp_p2_f16">; defm V_ADD_I32 : VOP3_Real_vi <0x29c>; defm V_SUB_I32 : VOP3_Real_vi <0x29d>; diff --git a/llvm/lib/Target/AMDGPU/VOP3PInstructions.td b/llvm/lib/Target/AMDGPU/VOP3PInstructions.td index 5daf860..3a0cc35 100644 --- a/llvm/lib/Target/AMDGPU/VOP3PInstructions.td +++ b/llvm/lib/Target/AMDGPU/VOP3PInstructions.td @@ -67,7 +67,7 @@ class VOP3P_Mix_Profile<VOPProfile P, VOP3Features Features = VOP3_REGULAR, class VOP3P_Mix_Profile_t16<VOPProfile P, VOP3Features Features = VOP3_REGULAR> : VOP3P_Mix_Profile<P, Features, 0> { let IsTrue16 = 1; - let IsRealTrue16 = 1; + let IsRealTrue16 = 1; let DstRC64 = getVALUDstForVT<P.DstVT, 1 /*IsTrue16*/, 1 /*IsVOP3Encoding*/>.ret; } @@ -950,7 +950,7 @@ class MFMA_F8F6F4_WithSizeTable_Helper<VOP3_Pseudo ps, string F8F8Op> : } // Currently assumes scaled instructions never have abid -class MAIFrag<SDPatternOperator Op, code pred, bit HasAbid = true, bit Scaled = false> : PatFrag < +class MAIFrag<SDPatternOperator Op, bit HasAbid = true, bit Scaled = false> : PatFrag < !if(Scaled, (ops node:$src0, node:$src1, node:$src2, node:$cbsz, node:$blgp, node:$src0_modifiers, node:$scale_src0, node:$src1_modifiers, node:$scale_src1), @@ -959,37 +959,30 @@ class MAIFrag<SDPatternOperator Op, code pred, bit HasAbid = true, bit Scaled = (ops node:$blgp))), !if(Scaled, (Op $src0, $src1, $src2, $cbsz, $blgp, $src0_modifiers, $scale_src0, $src1_modifiers, $scale_src1), !if(HasAbid, (Op $src0, $src1, $src2, $cbsz, $abid, $blgp), - (Op $src0, $src1, $src2, $cbsz, $blgp))), - pred ->; - -defvar MayNeedAGPRs = [{ - return MF->getInfo<SIMachineFunctionInfo>()->mayNeedAGPRs(); -}]; - -defvar MayNeedAGPRs_gisel = [{ - return MF.getInfo<SIMachineFunctionInfo>()->mayNeedAGPRs(); -}]; + (Op $src0, $src1, $src2, $cbsz, $blgp)))>; -defvar MayNotNeedAGPRs = [{ - return !MF->getInfo<SIMachineFunctionInfo>()->mayNeedAGPRs(); -}]; +class CanUseAGPR_MAI<ValueType vt> { + code PredicateCode = [{ + return !Subtarget->hasGFX90AInsts() || + (!SIMachineFunctionInfo::MFMAVGPRForm && + MF->getInfo<SIMachineFunctionInfo>()->getMinNumAGPRs() >= + }] # !srl(vt.Size, 5) # ");"; -defvar MayNotNeedAGPRs_gisel = [{ - return !MF.getInfo<SIMachineFunctionInfo>()->mayNeedAGPRs(); -}]; + code GISelPredicateCode = [{ + return !Subtarget->hasGFX90AInsts() || + (!SIMachineFunctionInfo::MFMAVGPRForm && + MF.getInfo<SIMachineFunctionInfo>()->getMinNumAGPRs() >= + }] # !srl(vt.Size, 5) # 
");"; +} -class AgprMAIFrag<SDPatternOperator Op, bit HasAbid = true, +class AgprMAIFrag<SDPatternOperator Op, ValueType vt, bit HasAbid = true, bit Scaled = false> : - MAIFrag<Op, MayNeedAGPRs, HasAbid, Scaled> { - let GISelPredicateCode = MayNeedAGPRs_gisel; -} + MAIFrag<Op, HasAbid, Scaled>, + CanUseAGPR_MAI<vt>; class VgprMAIFrag<SDPatternOperator Op, bit HasAbid = true, - bit Scaled = false> : - MAIFrag<Op, MayNotNeedAGPRs, HasAbid, Scaled> { - let GISelPredicateCode = MayNotNeedAGPRs_gisel; -} + bit Scaled = false> : + MAIFrag<Op, HasAbid, Scaled>; let isAsCheapAsAMove = 1, isReMaterializable = 1 in { defm V_ACCVGPR_READ_B32 : VOP3Inst<"v_accvgpr_read_b32", VOPProfileAccRead>; @@ -1037,16 +1030,19 @@ multiclass MAIInst<string OpName, string P, SDPatternOperator node = null_frag, bit HasAbid = true, bit Scaled = false> { defvar NoDstOverlap = !cast<VOPProfileMAI>("VOPProfileMAI_" # P).NoDstOverlap; + defvar ProfileAGPR = !cast<VOPProfileMAI>("VOPProfileMAI_" # P); + defvar ProfileVGPR = !cast<VOPProfileMAI>("VOPProfileMAI_" # P # "_VCD"); + let isConvergent = 1, mayRaiseFPException = 0, ReadsModeReg = 1 in { // FP32 denorm mode is respected, rounding mode is not. Exceptions are not supported. let Constraints = !if(NoDstOverlap, "@earlyclobber $vdst", "") in { - def _e64 : MAIInst<OpName, !cast<VOPProfileMAI>("VOPProfileMAI_" # P), - !if(!or(NoDstOverlap, !eq(node, null_frag)), null_frag, AgprMAIFrag<node, HasAbid, Scaled>), Scaled>, + def _e64 : MAIInst<OpName, ProfileAGPR, + !if(!or(NoDstOverlap, !eq(node, null_frag)), null_frag, AgprMAIFrag<node, ProfileAGPR.DstVT, HasAbid, Scaled>), Scaled>, MFMATable<0, "AGPR", NAME # "_e64">; let OtherPredicates = [isGFX90APlus], Mnemonic = OpName in - def _vgprcd_e64 : MAIInst<OpName # "_vgprcd", !cast<VOPProfileMAI>("VOPProfileMAI_" # P # "_VCD"), + def _vgprcd_e64 : MAIInst<OpName # "_vgprcd", ProfileVGPR, !if(!or(NoDstOverlap, !eq(node, null_frag)), null_frag, VgprMAIFrag<node, HasAbid, Scaled>), Scaled>, MFMATable<0, "VGPR", NAME # "_vgprcd_e64", NAME # "_e64">; } @@ -1055,12 +1051,12 @@ multiclass MAIInst<string OpName, string P, SDPatternOperator node = null_frag, let Constraints = !if(NoDstOverlap, "$vdst = $src2", ""), isConvertibleToThreeAddress = NoDstOverlap, Mnemonic = OpName in { - def "_mac_e64" : MAIInst<OpName # "_mac", !cast<VOPProfileMAI>("VOPProfileMAI_" # P), - !if(!eq(node, null_frag), null_frag, AgprMAIFrag<node, HasAbid, Scaled>), Scaled>, + def "_mac_e64" : MAIInst<OpName # "_mac", ProfileAGPR, + !if(!eq(node, null_frag), null_frag, AgprMAIFrag<node, ProfileAGPR.DstVT, HasAbid, Scaled>), Scaled>, MFMATable<1, "AGPR", NAME # "_e64", NAME # "_mac_e64">; let OtherPredicates = [isGFX90APlus] in - def _mac_vgprcd_e64 : MAIInst<OpName # "_mac_vgprcd", !cast<VOPProfileMAI>("VOPProfileMAI_" # P # "_VCD"), + def _mac_vgprcd_e64 : MAIInst<OpName # "_mac_vgprcd", ProfileVGPR, !if(!eq(node, null_frag), null_frag, VgprMAIFrag<node, HasAbid, Scaled>), Scaled>, MFMATable<1, "VGPR", NAME # "_vgprcd_e64", NAME # "_mac_e64">; } @@ -1074,11 +1070,11 @@ multiclass ScaledMAIInst_mc<string OpName, string UnscaledOpName_, SDPatternOper defvar UnscaledOpName = UnscaledOpName_#VariantSuffix; defvar HasAbid = false; - - defvar NoDstOverlap = !cast<VOPProfileMAI>(!cast<MAIInst>(UnscaledOpName#"_e64").Pfl).NoDstOverlap; + defvar Profile = !cast<VOPProfileMAI>(!cast<MAIInst>(UnscaledOpName#"_e64").Pfl); + defvar NoDstOverlap = Profile.NoDstOverlap; def _e64 : ScaledMAIInst<OpName, - !cast<MAIInst>(UnscaledOpName#"_e64"), !if(NoDstOverlap, null_frag, 
AgprMAIFrag<node, HasAbid, true>)>, + !cast<MAIInst>(UnscaledOpName#"_e64"), !if(NoDstOverlap, null_frag, AgprMAIFrag<node, Profile.DstVT, HasAbid, true>)>, MFMATable<0, "AGPR", NAME # "_e64">; def _vgprcd_e64 : ScaledMAIInst<OpName # "_vgprcd", @@ -1090,7 +1086,7 @@ multiclass ScaledMAIInst_mc<string OpName, string UnscaledOpName_, SDPatternOper isConvertibleToThreeAddress = NoDstOverlap, Mnemonic = UnscaledOpName_ in { def _mac_e64 : ScaledMAIInst<OpName # "_mac", - !cast<MAIInst>(UnscaledOpName # "_mac_e64"), AgprMAIFrag<node, HasAbid, true>>, + !cast<MAIInst>(UnscaledOpName # "_mac_e64"), AgprMAIFrag<node, Profile.DstVT, HasAbid, true>>, MFMATable<1, "AGPR", NAME # "_e64">; def _mac_vgprcd_e64 : ScaledMAIInst<OpName # " _mac_vgprcd", diff --git a/llvm/lib/Target/AMDGPU/VOPInstructions.td b/llvm/lib/Target/AMDGPU/VOPInstructions.td index 631f0f3..8325c62 100644 --- a/llvm/lib/Target/AMDGPU/VOPInstructions.td +++ b/llvm/lib/Target/AMDGPU/VOPInstructions.td @@ -419,6 +419,13 @@ class VOP3a_ScaleSel_gfx1250<bits<10> op, VOPProfile p> : VOP3e_gfx11_gfx12<op, let Inst{14-11} = scale_sel; } +class VOP3Interp_OpSel_gfx9<bits<10> op, VOPProfile p> : VOP3Interp_vi<op, p> { + let Inst{11} = src0_modifiers{2}; + // There's no src1 + let Inst{13} = src2_modifiers{2}; + let Inst{14} = !if(p.HasDst, src0_modifiers{3}, 0); +} + class VOP3Interp_gfx10<bits<10> op, VOPProfile p> : VOP3e_gfx10<op, p> { bits<6> attr; bits<2> attrchan; diff --git a/llvm/lib/Target/NVPTX/NVPTXIntrinsics.td b/llvm/lib/Target/NVPTX/NVPTXIntrinsics.td index 28d4bb9..a8b854f 100644 --- a/llvm/lib/Target/NVPTX/NVPTXIntrinsics.td +++ b/llvm/lib/Target/NVPTX/NVPTXIntrinsics.td @@ -4528,6 +4528,10 @@ class WMMA_REGINFO<WMMA_REGS r, string op, string metadata = "", string kind = " !eq(ptx_elt_type, "e2m1"), !ne(kind, "")) : [hasSM120a, hasPTX<87>], + !and(!or(!eq(ptx_elt_type,"e4m3"), + !eq(ptx_elt_type,"e5m2")), + !eq(geom, "m16n8k16")) : [hasSM<89>, hasPTX<87>], + !or(!eq(ptx_elt_type, "e4m3"), !eq(ptx_elt_type, "e5m2")) : [hasSM<89>, hasPTX<84>], @@ -4543,6 +4547,11 @@ class WMMA_REGINFO<WMMA_REGS r, string op, string metadata = "", string kind = " !and(!eq(geom, "m8n8k4"), !eq(ptx_elt_type, "f64")) : [hasSM<80>, hasPTX<70>], + !and(!or(!eq(geom, "m16n8k4"), + !eq(geom, "m16n8k8"), + !eq(geom, "m16n8k16")), + !eq(ptx_elt_type, "f64")) : [hasSM<90>, hasPTX<78>], + // fp16 -> fp16/fp32 @ m8n32k16/m32n8k16 !and(!or(!eq(geom, "m8n32k16"), !eq(geom, "m32n8k16")), @@ -4827,8 +4836,8 @@ defset list<WMMA_INSTR> WMMAs = { // MMA class MMA<WMMA_REGINFO FragA, WMMA_REGINFO FragB, WMMA_REGINFO FragC, WMMA_REGINFO FragD, - string ALayout, string BLayout, int Satfinite, string b1op> - : WMMA_INSTR<MMA_NAME<ALayout, BLayout, Satfinite, b1op, FragA, FragB, FragC, FragD>.record, + string ALayout, string BLayout, int Satfinite, string b1op, string Kind> + : WMMA_INSTR<MMA_NAME<ALayout, BLayout, Satfinite, b1op, Kind, FragA, FragB, FragC, FragD>.record, [FragA.Ins, FragB.Ins, FragC.Ins]>, // Requires does not seem to have effect on Instruction w/o Patterns. // We set it here anyways and propagate to the Pat<> we construct below. @@ -4843,6 +4852,7 @@ class MMA<WMMA_REGINFO FragA, WMMA_REGINFO FragB, # FragA.geom # "." # ALayout # "." # BLayout + # !if(!ne(Kind, ""), "." 
# Kind, "") # !if(Satfinite, ".satfinite", "") # TypeList # b1op # "\n\t\t" @@ -4859,13 +4869,15 @@ defset list<WMMA_INSTR> MMAs = { foreach satf = [0, 1] in { foreach op = NVVM_MMA_OPS.all_mma_ops in { foreach b1op = NVVM_MMA_B1OPS<op>.ret in { - if NVVM_MMA_SUPPORTED<op, layout_a, layout_b, satf>.ret then { - def : MMA<WMMA_REGINFO<op[0], "mma">, - WMMA_REGINFO<op[1], "mma">, - WMMA_REGINFO<op[2], "mma">, - WMMA_REGINFO<op[3], "mma">, - layout_a, layout_b, satf, b1op>; - } + foreach kind = ["", "kind::f8f6f4"] in { + if NVVM_MMA_SUPPORTED<op, layout_a, layout_b, kind, satf>.ret then { + def : MMA<WMMA_REGINFO<op[0], "mma", "", kind>, + WMMA_REGINFO<op[1], "mma", "", kind>, + WMMA_REGINFO<op[2], "mma", "", kind>, + WMMA_REGINFO<op[3], "mma", "", kind>, + layout_a, layout_b, satf, b1op, kind>; + } + } // kind } // b1op } // op } // satf diff --git a/llvm/lib/Target/PowerPC/AsmParser/PPCAsmParser.cpp b/llvm/lib/Target/PowerPC/AsmParser/PPCAsmParser.cpp index 1fc475d..561a9c5 100644 --- a/llvm/lib/Target/PowerPC/AsmParser/PPCAsmParser.cpp +++ b/llvm/lib/Target/PowerPC/AsmParser/PPCAsmParser.cpp @@ -349,32 +349,30 @@ public: bool isImm() const override { return Kind == Immediate || Kind == Expression; } - bool isU1Imm() const { return Kind == Immediate && isUInt<1>(getImm()); } - bool isU2Imm() const { return Kind == Immediate && isUInt<2>(getImm()); } - bool isU3Imm() const { return Kind == Immediate && isUInt<3>(getImm()); } - bool isU4Imm() const { return Kind == Immediate && isUInt<4>(getImm()); } - bool isU5Imm() const { return Kind == Immediate && isUInt<5>(getImm()); } - bool isS5Imm() const { return Kind == Immediate && isInt<5>(getImm()); } - bool isU6Imm() const { return Kind == Immediate && isUInt<6>(getImm()); } - bool isU6ImmX2() const { return Kind == Immediate && - isUInt<6>(getImm()) && - (getImm() & 1) == 0; } - bool isU7Imm() const { return Kind == Immediate && isUInt<7>(getImm()); } - bool isU7ImmX4() const { return Kind == Immediate && - isUInt<7>(getImm()) && - (getImm() & 3) == 0; } - bool isU8Imm() const { return Kind == Immediate && isUInt<8>(getImm()); } - bool isU8ImmX8() const { return Kind == Immediate && - isUInt<8>(getImm()) && - (getImm() & 7) == 0; } - - bool isU10Imm() const { return Kind == Immediate && isUInt<10>(getImm()); } - bool isU12Imm() const { return Kind == Immediate && isUInt<12>(getImm()); } + + template <uint64_t N> bool isUImm() const { + return Kind == Immediate && isUInt<N>(getImm()); + } + template <uint64_t N> bool isSImm() const { + return Kind == Immediate && isInt<N>(getImm()); + } + bool isU6ImmX2() const { return isUImm<6>() && (getImm() & 1) == 0; } + bool isU7ImmX4() const { return isUImm<7>() && (getImm() & 3) == 0; } + bool isU8ImmX8() const { return isUImm<8>() && (getImm() & 7) == 0; } + bool isU16Imm() const { return isExtImm<16>(/*Signed*/ false, 1); } bool isS16Imm() const { return isExtImm<16>(/*Signed*/ true, 1); } bool isS16ImmX4() const { return isExtImm<16>(/*Signed*/ true, 4); } bool isS16ImmX16() const { return isExtImm<16>(/*Signed*/ true, 16); } bool isS17Imm() const { return isExtImm<17>(/*Signed*/ true, 1); } + bool isS34Imm() const { + // Once the PC-Rel ABI is finalized, evaluate whether a 34-bit + // ContextImmediate is needed. + return Kind == Expression || isSImm<34>(); + } + bool isS34ImmX16() const { + return Kind == Expression || (isSImm<34>() && (getImm() & 15) == 0); + } bool isHashImmX8() const { // The Hash Imm form is used for instructions that check or store a hash. 
@@ -384,16 +382,6 @@ public: (getImm() & 7) == 0); } - bool isS34ImmX16() const { - return Kind == Expression || - (Kind == Immediate && isInt<34>(getImm()) && (getImm() & 15) == 0); - } - bool isS34Imm() const { - // Once the PC-Rel ABI is finalized, evaluate whether a 34-bit - // ContextImmediate is needed. - return Kind == Expression || (Kind == Immediate && isInt<34>(getImm())); - } - bool isTLSReg() const { return Kind == TLSRegister; } bool isDirectBr() const { if (Kind == Expression) @@ -1637,7 +1625,7 @@ bool PPCAsmParser::parseInstruction(ParseInstructionInfo &Info, StringRef Name, if (Operands.size() != 5) return false; PPCOperand &EHOp = (PPCOperand &)*Operands[4]; - if (EHOp.isU1Imm() && EHOp.getImm() == 0) + if (EHOp.isUImm<1>() && EHOp.getImm() == 0) Operands.pop_back(); } @@ -1817,7 +1805,7 @@ unsigned PPCAsmParser::validateTargetOperandClass(MCParsedAsmOperand &AsmOp, } PPCOperand &Op = static_cast<PPCOperand &>(AsmOp); - if (Op.isU3Imm() && Op.getImm() == ImmVal) + if (Op.isUImm<3>() && Op.getImm() == ImmVal) return Match_Success; return Match_InvalidOperand; diff --git a/llvm/lib/Target/PowerPC/MCTargetDesc/PPCMCCodeEmitter.cpp b/llvm/lib/Target/PowerPC/MCTargetDesc/PPCMCCodeEmitter.cpp index 48c31c9..81d8e94 100644 --- a/llvm/lib/Target/PowerPC/MCTargetDesc/PPCMCCodeEmitter.cpp +++ b/llvm/lib/Target/PowerPC/MCTargetDesc/PPCMCCodeEmitter.cpp @@ -206,45 +206,24 @@ PPCMCCodeEmitter::getVSRpEvenEncoding(const MCInst &MI, unsigned OpNo, return RegBits; } -unsigned PPCMCCodeEmitter::getImm16Encoding(const MCInst &MI, unsigned OpNo, - SmallVectorImpl<MCFixup> &Fixups, - const MCSubtargetInfo &STI) const { - const MCOperand &MO = MI.getOperand(OpNo); - if (MO.isReg() || MO.isImm()) return getMachineOpValue(MI, MO, Fixups, STI); - - // Add a fixup for the immediate field. - addFixup(Fixups, IsLittleEndian ? 0 : 2, MO.getExpr(), PPC::fixup_ppc_half16); - return 0; -} - -uint64_t PPCMCCodeEmitter::getImm34Encoding(const MCInst &MI, unsigned OpNo, - SmallVectorImpl<MCFixup> &Fixups, - const MCSubtargetInfo &STI, - MCFixupKind Fixup) const { +template <MCFixupKind Fixup> +uint64_t PPCMCCodeEmitter::getImmEncoding(const MCInst &MI, unsigned OpNo, + SmallVectorImpl<MCFixup> &Fixups, + const MCSubtargetInfo &STI) const { const MCOperand &MO = MI.getOperand(OpNo); assert(!MO.isReg() && "Not expecting a register for this operand."); if (MO.isImm()) return getMachineOpValue(MI, MO, Fixups, STI); + uint32_t Offset = 0; + if (Fixup == PPC::fixup_ppc_half16) + Offset = IsLittleEndian ? 0 : 2; + // Add a fixup for the immediate field. 
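  // (A half16 fixup patches only the low 16 bits of the 4-byte instruction;
  // those bytes sit at offset 2 in big-endian order and offset 0 in
  // little-endian, while the 34-bit fixups always start at offset 0. That is
  // the Offset the call below now passes.)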
- addFixup(Fixups, 0, MO.getExpr(), Fixup); + addFixup(Fixups, Offset, MO.getExpr(), Fixup); return 0; } -uint64_t -PPCMCCodeEmitter::getImm34EncodingNoPCRel(const MCInst &MI, unsigned OpNo, - SmallVectorImpl<MCFixup> &Fixups, - const MCSubtargetInfo &STI) const { - return getImm34Encoding(MI, OpNo, Fixups, STI, PPC::fixup_ppc_imm34); -} - -uint64_t -PPCMCCodeEmitter::getImm34EncodingPCRel(const MCInst &MI, unsigned OpNo, - SmallVectorImpl<MCFixup> &Fixups, - const MCSubtargetInfo &STI) const { - return getImm34Encoding(MI, OpNo, Fixups, STI, PPC::fixup_ppc_pcrel34); -} - unsigned PPCMCCodeEmitter::getDispRIEncoding(const MCInst &MI, unsigned OpNo, SmallVectorImpl<MCFixup> &Fixups, const MCSubtargetInfo &STI) const { diff --git a/llvm/lib/Target/PowerPC/MCTargetDesc/PPCMCCodeEmitter.h b/llvm/lib/Target/PowerPC/MCTargetDesc/PPCMCCodeEmitter.h index b574557..3356513 100644 --- a/llvm/lib/Target/PowerPC/MCTargetDesc/PPCMCCodeEmitter.h +++ b/llvm/lib/Target/PowerPC/MCTargetDesc/PPCMCCodeEmitter.h @@ -47,19 +47,10 @@ public: unsigned getAbsCondBrEncoding(const MCInst &MI, unsigned OpNo, SmallVectorImpl<MCFixup> &Fixups, const MCSubtargetInfo &STI) const; - unsigned getImm16Encoding(const MCInst &MI, unsigned OpNo, - SmallVectorImpl<MCFixup> &Fixups, - const MCSubtargetInfo &STI) const; - uint64_t getImm34Encoding(const MCInst &MI, unsigned OpNo, - SmallVectorImpl<MCFixup> &Fixups, - const MCSubtargetInfo &STI, - MCFixupKind Fixup) const; - uint64_t getImm34EncodingNoPCRel(const MCInst &MI, unsigned OpNo, - SmallVectorImpl<MCFixup> &Fixups, - const MCSubtargetInfo &STI) const; - uint64_t getImm34EncodingPCRel(const MCInst &MI, unsigned OpNo, - SmallVectorImpl<MCFixup> &Fixups, - const MCSubtargetInfo &STI) const; + template <MCFixupKind Fixup> + uint64_t getImmEncoding(const MCInst &MI, unsigned OpNo, + SmallVectorImpl<MCFixup> &Fixups, + const MCSubtargetInfo &STI) const; unsigned getDispRIEncoding(const MCInst &MI, unsigned OpNo, SmallVectorImpl<MCFixup> &Fixups, const MCSubtargetInfo &STI) const; diff --git a/llvm/lib/Target/PowerPC/PPCInstr64Bit.td b/llvm/lib/Target/PowerPC/PPCInstr64Bit.td index 60efa4c..fdca5ebc 100644 --- a/llvm/lib/Target/PowerPC/PPCInstr64Bit.td +++ b/llvm/lib/Target/PowerPC/PPCInstr64Bit.td @@ -14,30 +14,6 @@ //===----------------------------------------------------------------------===// // 64-bit operands. // -def s16imm64 : Operand<i64> { - let PrintMethod = "printS16ImmOperand"; - let EncoderMethod = "getImm16Encoding"; - let ParserMatchClass = PPCS16ImmAsmOperand; - let DecoderMethod = "decodeSImmOperand<16>"; - let OperandType = "OPERAND_IMMEDIATE"; -} -def u16imm64 : Operand<i64> { - let PrintMethod = "printU16ImmOperand"; - let EncoderMethod = "getImm16Encoding"; - let ParserMatchClass = PPCU16ImmAsmOperand; - let DecoderMethod = "decodeUImmOperand<16>"; - let OperandType = "OPERAND_IMMEDIATE"; -} -def s17imm64 : Operand<i64> { - // This operand type is used for addis/lis to allow the assembler parser - // to accept immediates in the range -65536..65535 for compatibility with - // the GNU assembler. The operand is treated as 16-bit otherwise. 
- let PrintMethod = "printS16ImmOperand"; - let EncoderMethod = "getImm16Encoding"; - let ParserMatchClass = PPCS17ImmAsmOperand; - let DecoderMethod = "decodeSImmOperand<16>"; - let OperandType = "OPERAND_IMMEDIATE"; -} def tocentry : Operand<iPTR> { let MIOperandInfo = (ops i64imm:$imm); } diff --git a/llvm/lib/Target/PowerPC/PPCInstrAltivec.td b/llvm/lib/Target/PowerPC/PPCInstrAltivec.td index c616db4..23d6d88 100644 --- a/llvm/lib/Target/PowerPC/PPCInstrAltivec.td +++ b/llvm/lib/Target/PowerPC/PPCInstrAltivec.td @@ -30,6 +30,11 @@ // Altivec transformation functions and pattern fragments. // +// fneg is not legal, and desugared as an xor. +def desugared_fneg : PatFrag<(ops node:$x), (v4f32 (bitconvert (xor (bitconvert $x), + (int_ppc_altivec_vslw (bitconvert (v16i8 immAllOnesV)), + (bitconvert (v16i8 immAllOnesV))))))>; + def vpkuhum_shuffle : PatFrag<(ops node:$lhs, node:$rhs), (vector_shuffle node:$lhs, node:$rhs), [{ return PPC::isVPKUHUMShuffleMask(cast<ShuffleVectorSDNode>(N), 0, *CurDAG); @@ -467,11 +472,12 @@ def VMADDFP : VAForm_1<46, (outs vrrc:$RT), (ins vrrc:$RA, vrrc:$RC, vrrc:$RB), [(set v4f32:$RT, (fma v4f32:$RA, v4f32:$RC, v4f32:$RB))]>; -// FIXME: The fma+fneg pattern won't match because fneg is not legal. +// fneg is not legal, hence we have to match on the desugared version. def VNMSUBFP: VAForm_1<47, (outs vrrc:$RT), (ins vrrc:$RA, vrrc:$RC, vrrc:$RB), "vnmsubfp $RT, $RA, $RC, $RB", IIC_VecFP, - [(set v4f32:$RT, (fneg (fma v4f32:$RA, v4f32:$RC, - (fneg v4f32:$RB))))]>; + [(set v4f32:$RT, (desugared_fneg (fma v4f32:$RA, v4f32:$RC, + (desugared_fneg v4f32:$RB))))]>; + let hasSideEffects = 1 in { def VMHADDSHS : VA1a_Int_Ty<32, "vmhaddshs", int_ppc_altivec_vmhaddshs, v8i16>; def VMHRADDSHS : VA1a_Int_Ty<33, "vmhraddshs", int_ppc_altivec_vmhraddshs, @@ -892,6 +898,13 @@ def : Pat<(mul v8i16:$vA, v8i16:$vB), (VMLADDUHM $vA, $vB, (v8i16(V_SET0H)))>; // Add def : Pat<(add (mul v8i16:$vA, v8i16:$vB), v8i16:$vC), (VMLADDUHM $vA, $vB, $vC)>; + +// Fused negated multiply-subtract +def : Pat<(v4f32 (desugared_fneg + (int_ppc_altivec_vmaddfp v4f32:$RA, v4f32:$RC, + (desugared_fneg v4f32:$RB)))), + (VNMSUBFP $RA, $RC, $RB)>; + // Saturating adds/subtracts. 
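// (Backward note on desugared_fneg above: negating an IEEE float just flips
// the sign bit, so with no legal vector fneg the DAG shows
// xor(x, splat(0x80000000)); Altivec materializes that splat as
// vslw(all-ones, all-ones), i.e. -1 shifted left by 31 in each word, which is
// exactly the shape the PatFrag and the VNMSUBFP patterns match. The
// saturating add/subtract patterns continue below.)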
def : Pat<(v16i8 (saddsat v16i8:$vA, v16i8:$vB)), (v16i8 (VADDSBS $vA, $vB))>; def : Pat<(v16i8 (uaddsat v16i8:$vA, v16i8:$vB)), (v16i8 (VADDUBS $vA, $vB))>; diff --git a/llvm/lib/Target/PowerPC/PPCRegisterInfo.td b/llvm/lib/Target/PowerPC/PPCRegisterInfo.td index 6d8c122..65d0484 100644 --- a/llvm/lib/Target/PowerPC/PPCRegisterInfo.td +++ b/llvm/lib/Target/PowerPC/PPCRegisterInfo.td @@ -615,7 +615,8 @@ def spe4rc : RegisterOperand<GPRC> { } def PPCU1ImmAsmOperand : AsmOperandClass { - let Name = "U1Imm"; let PredicateMethod = "isU1Imm"; + let Name = "U1Imm"; + let PredicateMethod = "isUImm<1>"; let RenderMethod = "addImmOperands"; } def u1imm : Operand<i32> { @@ -626,7 +627,8 @@ def u1imm : Operand<i32> { } def PPCU2ImmAsmOperand : AsmOperandClass { - let Name = "U2Imm"; let PredicateMethod = "isU2Imm"; + let Name = "U2Imm"; + let PredicateMethod = "isUImm<2>"; let RenderMethod = "addImmOperands"; } def u2imm : Operand<i32> { @@ -647,7 +649,8 @@ def atimm : Operand<i32> { } def PPCU3ImmAsmOperand : AsmOperandClass { - let Name = "U3Imm"; let PredicateMethod = "isU3Imm"; + let Name = "U3Imm"; + let PredicateMethod = "isUImm<3>"; let RenderMethod = "addImmOperands"; } def u3imm : Operand<i32> { @@ -658,7 +661,8 @@ def u3imm : Operand<i32> { } def PPCU4ImmAsmOperand : AsmOperandClass { - let Name = "U4Imm"; let PredicateMethod = "isU4Imm"; + let Name = "U4Imm"; + let PredicateMethod = "isUImm<4>"; let RenderMethod = "addImmOperands"; } def u4imm : Operand<i32> { @@ -668,7 +672,8 @@ def u4imm : Operand<i32> { let OperandType = "OPERAND_IMMEDIATE"; } def PPCS5ImmAsmOperand : AsmOperandClass { - let Name = "S5Imm"; let PredicateMethod = "isS5Imm"; + let Name = "S5Imm"; + let PredicateMethod = "isSImm<5>"; let RenderMethod = "addImmOperands"; } def s5imm : Operand<i32> { @@ -678,7 +683,8 @@ def s5imm : Operand<i32> { let OperandType = "OPERAND_IMMEDIATE"; } def PPCU5ImmAsmOperand : AsmOperandClass { - let Name = "U5Imm"; let PredicateMethod = "isU5Imm"; + let Name = "U5Imm"; + let PredicateMethod = "isUImm<5>"; let RenderMethod = "addImmOperands"; } def u5imm : Operand<i32> { @@ -688,7 +694,8 @@ def u5imm : Operand<i32> { let OperandType = "OPERAND_IMMEDIATE"; } def PPCU6ImmAsmOperand : AsmOperandClass { - let Name = "U6Imm"; let PredicateMethod = "isU6Imm"; + let Name = "U6Imm"; + let PredicateMethod = "isUImm<6>"; let RenderMethod = "addImmOperands"; } def u6imm : Operand<i32> { @@ -698,7 +705,8 @@ def u6imm : Operand<i32> { let OperandType = "OPERAND_IMMEDIATE"; } def PPCU7ImmAsmOperand : AsmOperandClass { - let Name = "U7Imm"; let PredicateMethod = "isU7Imm"; + let Name = "U7Imm"; + let PredicateMethod = "isUImm<7>"; let RenderMethod = "addImmOperands"; } def u7imm : Operand<i32> { @@ -708,7 +716,8 @@ def u7imm : Operand<i32> { let OperandType = "OPERAND_IMMEDIATE"; } def PPCU8ImmAsmOperand : AsmOperandClass { - let Name = "U8Imm"; let PredicateMethod = "isU8Imm"; + let Name = "U8Imm"; + let PredicateMethod = "isUImm<8>"; let RenderMethod = "addImmOperands"; } def u8imm : Operand<i32> { @@ -718,7 +727,8 @@ def u8imm : Operand<i32> { let OperandType = "OPERAND_IMMEDIATE"; } def PPCU10ImmAsmOperand : AsmOperandClass { - let Name = "U10Imm"; let PredicateMethod = "isU10Imm"; + let Name = "U10Imm"; + let PredicateMethod = "isUImm<10>"; let RenderMethod = "addImmOperands"; } def u10imm : Operand<i32> { @@ -728,7 +738,8 @@ def u10imm : Operand<i32> { let OperandType = "OPERAND_IMMEDIATE"; } def PPCU12ImmAsmOperand : AsmOperandClass { - let Name = "U12Imm"; let PredicateMethod = "isU12Imm"; + let 
Name = "U12Imm"; + let PredicateMethod = "isUImm<12>"; let RenderMethod = "addImmOperands"; } def u12imm : Operand<i32> { @@ -743,7 +754,14 @@ def PPCS16ImmAsmOperand : AsmOperandClass { } def s16imm : Operand<i32> { let PrintMethod = "printS16ImmOperand"; - let EncoderMethod = "getImm16Encoding"; + let EncoderMethod = "getImmEncoding<PPC::fixup_ppc_half16>"; + let ParserMatchClass = PPCS16ImmAsmOperand; + let DecoderMethod = "decodeSImmOperand<16>"; + let OperandType = "OPERAND_IMMEDIATE"; +} +def s16imm64 : Operand<i64> { + let PrintMethod = "printS16ImmOperand"; + let EncoderMethod = "getImmEncoding<PPC::fixup_ppc_half16>"; let ParserMatchClass = PPCS16ImmAsmOperand; let DecoderMethod = "decodeSImmOperand<16>"; let OperandType = "OPERAND_IMMEDIATE"; @@ -754,7 +772,14 @@ def PPCU16ImmAsmOperand : AsmOperandClass { } def u16imm : Operand<i32> { let PrintMethod = "printU16ImmOperand"; - let EncoderMethod = "getImm16Encoding"; + let EncoderMethod = "getImmEncoding<PPC::fixup_ppc_half16>"; + let ParserMatchClass = PPCU16ImmAsmOperand; + let DecoderMethod = "decodeUImmOperand<16>"; + let OperandType = "OPERAND_IMMEDIATE"; +} +def u16imm64 : Operand<i64> { + let PrintMethod = "printU16ImmOperand"; + let EncoderMethod = "getImmEncoding<PPC::fixup_ppc_half16>"; let ParserMatchClass = PPCU16ImmAsmOperand; let DecoderMethod = "decodeUImmOperand<16>"; let OperandType = "OPERAND_IMMEDIATE"; @@ -768,7 +793,17 @@ def s17imm : Operand<i32> { // to accept immediates in the range -65536..65535 for compatibility with // the GNU assembler. The operand is treated as 16-bit otherwise. let PrintMethod = "printS16ImmOperand"; - let EncoderMethod = "getImm16Encoding"; + let EncoderMethod = "getImmEncoding<PPC::fixup_ppc_half16>"; + let ParserMatchClass = PPCS17ImmAsmOperand; + let DecoderMethod = "decodeSImmOperand<16>"; + let OperandType = "OPERAND_IMMEDIATE"; +} +def s17imm64 : Operand<i64> { + // This operand type is used for addis/lis to allow the assembler parser + // to accept immediates in the range -65536..65535 for compatibility with + // the GNU assembler. The operand is treated as 16-bit otherwise. 
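+  // (e.g. both "lis 3, 0xffff" and "lis 3, -1" are accepted; the wider range
+  // only affects parsing, the emitted field is still 16 bits.)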
+ let PrintMethod = "printS16ImmOperand"; + let EncoderMethod = "getImmEncoding<PPC::fixup_ppc_half16>"; let ParserMatchClass = PPCS17ImmAsmOperand; let DecoderMethod = "decodeSImmOperand<16>"; let OperandType = "OPERAND_IMMEDIATE"; @@ -780,14 +815,14 @@ def PPCS34ImmAsmOperand : AsmOperandClass { } def s34imm : Operand<i64> { let PrintMethod = "printS34ImmOperand"; - let EncoderMethod = "getImm34EncodingNoPCRel"; + let EncoderMethod = "getImmEncoding<PPC::fixup_ppc_imm34>"; let ParserMatchClass = PPCS34ImmAsmOperand; let DecoderMethod = "decodeSImmOperand<34>"; let OperandType = "OPERAND_IMMEDIATE"; } def s34imm_pcrel : Operand<i64> { let PrintMethod = "printS34ImmOperand"; - let EncoderMethod = "getImm34EncodingPCRel"; + let EncoderMethod = "getImmEncoding<PPC::fixup_ppc_pcrel34>"; let ParserMatchClass = PPCS34ImmAsmOperand; let DecoderMethod = "decodeSImmOperand<34>"; let OperandType = "OPERAND_IMMEDIATE"; diff --git a/llvm/lib/Target/RISCV/GISel/RISCVCallLowering.cpp b/llvm/lib/Target/RISCV/GISel/RISCVCallLowering.cpp index 34026ed..ecfb5fe 100644 --- a/llvm/lib/Target/RISCV/GISel/RISCVCallLowering.cpp +++ b/llvm/lib/Target/RISCV/GISel/RISCVCallLowering.cpp @@ -439,18 +439,6 @@ bool RISCVCallLowering::canLowerReturn(MachineFunction &MF, CCState CCInfo(CallConv, IsVarArg, MF, ArgLocs, MF.getFunction().getContext()); - const RISCVSubtarget &Subtarget = MF.getSubtarget<RISCVSubtarget>(); - - std::optional<unsigned> FirstMaskArgument = std::nullopt; - // Preassign the first mask argument. - if (Subtarget.hasVInstructions()) { - for (const auto &ArgIdx : enumerate(Outs)) { - MVT ArgVT = MVT::getVT(ArgIdx.value().Ty); - if (ArgVT.isVector() && ArgVT.getVectorElementType() == MVT::i1) - FirstMaskArgument = ArgIdx.index(); - } - } - for (unsigned I = 0, E = Outs.size(); I < E; ++I) { MVT VT = MVT::getVT(Outs[I].Ty); if (CC_RISCV(I, VT, VT, CCValAssign::Full, Outs[I].Flags[0], CCInfo, diff --git a/llvm/lib/Target/RISCV/GISel/RISCVRegisterBankInfo.cpp b/llvm/lib/Target/RISCV/GISel/RISCVRegisterBankInfo.cpp index 597dd12..9f9ae2f 100644 --- a/llvm/lib/Target/RISCV/GISel/RISCVRegisterBankInfo.cpp +++ b/llvm/lib/Target/RISCV/GISel/RISCVRegisterBankInfo.cpp @@ -324,6 +324,10 @@ RISCVRegisterBankInfo::getInstrMapping(const MachineInstr &MI) const { OpdsMapping[0] = GPRValueMapping; + // Atomics always use GPR destinations. Don't refine any further. + if (cast<GLoad>(MI).isAtomic()) + break; + // Use FPR64 for s64 loads on rv32. if (GPRSize == 32 && Size.getFixedValue() == 64) { assert(MF.getSubtarget<RISCVSubtarget>().hasStdExtD()); @@ -358,6 +362,10 @@ RISCVRegisterBankInfo::getInstrMapping(const MachineInstr &MI) const { OpdsMapping[0] = GPRValueMapping; + // Atomics always use GPR sources. Don't refine any further. + if (cast<GStore>(MI).isAtomic()) + break; + // Use FPR64 for s64 stores on rv32. 
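    // (Mirrors the load case above: an atomic G_STORE keeps the GPR mapping
    // assigned just before this, skipping the s64-on-rv32 FPR64 refinement
    // below.)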
if (GPRSize == 32 && Size.getFixedValue() == 64) { assert(MF.getSubtarget<RISCVSubtarget>().hasStdExtD()); diff --git a/llvm/lib/Target/RISCV/RISCVFeatures.td b/llvm/lib/Target/RISCV/RISCVFeatures.td index a02de31..27cf057 100644 --- a/llvm/lib/Target/RISCV/RISCVFeatures.td +++ b/llvm/lib/Target/RISCV/RISCVFeatures.td @@ -1421,7 +1421,7 @@ def HasVendorXMIPSCMov : Predicate<"Subtarget->hasVendorXMIPSCMov()">, AssemblerPredicate<(all_of FeatureVendorXMIPSCMov), "'Xmipscmov' ('mips.ccmov' instruction)">; -def UseCCMovInsn : Predicate<"Subtarget->useCCMovInsn()">; +def UseMIPSCCMovInsn : Predicate<"Subtarget->useMIPSCCMovInsn()">; def FeatureVendorXMIPSLSP : RISCVExtension<1, 0, "MIPS optimization for hardware load-store bonding">; diff --git a/llvm/lib/Target/RISCV/RISCVGISel.td b/llvm/lib/Target/RISCV/RISCVGISel.td index 7f5d0af..6d01250 100644 --- a/llvm/lib/Target/RISCV/RISCVGISel.td +++ b/llvm/lib/Target/RISCV/RISCVGISel.td @@ -190,3 +190,29 @@ let Predicates = [HasStdExtZbkb, NoStdExtZbb, IsRV64] in { def : Pat<(i64 (zext (i16 GPR:$rs))), (PACKW GPR:$rs, (XLenVT X0))>; def : Pat<(i32 (zext (i16 GPR:$rs))), (PACKW GPR:$rs, (XLenVT X0))>; } + +//===----------------------------------------------------------------------===// +// Zalasr patterns not used by SelectionDAG +//===----------------------------------------------------------------------===// + +let Predicates = [HasStdExtZalasr] in { + // the sequentially consistent loads use + // .aq instead of .aqrl to match the psABI/A.7 + def : PatLAQ<acquiring_load<atomic_load_aext_8>, LB_AQ, i16>; + def : PatLAQ<seq_cst_load<atomic_load_aext_8>, LB_AQ, i16>; + + def : PatLAQ<acquiring_load<atomic_load_nonext_16>, LH_AQ, i16>; + def : PatLAQ<seq_cst_load<atomic_load_nonext_16>, LH_AQ, i16>; + + def : PatSRL<releasing_store<atomic_store_8>, SB_RL, i16>; + def : PatSRL<seq_cst_store<atomic_store_8>, SB_RL, i16>; + + def : PatSRL<releasing_store<atomic_store_16>, SH_RL, i16>; + def : PatSRL<seq_cst_store<atomic_store_16>, SH_RL, i16>; +} + +let Predicates = [HasStdExtZalasr, IsRV64] in { + // Load pattern is in RISCVInstrInfoZalasr.td and shared with RV32. + def : PatSRL<releasing_store<atomic_store_32>, SW_RL, i32>; + def : PatSRL<seq_cst_store<atomic_store_32>, SW_RL, i32>; +} diff --git a/llvm/lib/Target/RISCV/RISCVISelLowering.cpp b/llvm/lib/Target/RISCV/RISCVISelLowering.cpp index dcce2d2..a3a4cf2 100644 --- a/llvm/lib/Target/RISCV/RISCVISelLowering.cpp +++ b/llvm/lib/Target/RISCV/RISCVISelLowering.cpp @@ -434,7 +434,7 @@ RISCVTargetLowering::RISCVTargetLowering(const TargetMachine &TM, setOperationAction(ISD::ABS, MVT::i32, Custom); } - if (!Subtarget.useCCMovInsn() && !Subtarget.hasVendorXTHeadCondMov()) + if (!Subtarget.useMIPSCCMovInsn() && !Subtarget.hasVendorXTHeadCondMov()) setOperationAction(ISD::SELECT, XLenVT, Custom); if (Subtarget.hasVendorXqcia() && !Subtarget.is64Bit()) { @@ -16498,43 +16498,60 @@ static SDValue expandMul(SDNode *N, SelectionDAG &DAG, SDValue X = N->getOperand(0); if (Subtarget.hasShlAdd(3)) { - for (uint64_t Divisor : {3, 5, 9}) { - if (MulAmt % Divisor != 0) - continue; - uint64_t MulAmt2 = MulAmt / Divisor; - // 3/5/9 * 2^N -> shl (shXadd X, X), N - if (isPowerOf2_64(MulAmt2)) { - SDLoc DL(N); - SDValue X = N->getOperand(0); - // Put the shift first if we can fold a zext into the - // shift forming a slli.uw. 
- if (X.getOpcode() == ISD::AND && isa<ConstantSDNode>(X.getOperand(1)) && - X.getConstantOperandVal(1) == UINT64_C(0xffffffff)) { - SDValue Shl = DAG.getNode(ISD::SHL, DL, VT, X, - DAG.getConstant(Log2_64(MulAmt2), DL, VT)); - return DAG.getNode(RISCVISD::SHL_ADD, DL, VT, Shl, - DAG.getConstant(Log2_64(Divisor - 1), DL, VT), - Shl); - } - // Otherwise, put rhe shl second so that it can fold with following - // instructions (e.g. sext or add). - SDValue Mul359 = - DAG.getNode(RISCVISD::SHL_ADD, DL, VT, X, - DAG.getConstant(Log2_64(Divisor - 1), DL, VT), X); - return DAG.getNode(ISD::SHL, DL, VT, Mul359, - DAG.getConstant(Log2_64(MulAmt2), DL, VT)); - } - - // 3/5/9 * 3/5/9 -> shXadd (shYadd X, X), (shYadd X, X) - if (MulAmt2 == 3 || MulAmt2 == 5 || MulAmt2 == 9) { - SDLoc DL(N); - SDValue Mul359 = - DAG.getNode(RISCVISD::SHL_ADD, DL, VT, X, - DAG.getConstant(Log2_64(Divisor - 1), DL, VT), X); - return DAG.getNode(RISCVISD::SHL_ADD, DL, VT, Mul359, - DAG.getConstant(Log2_64(MulAmt2 - 1), DL, VT), - Mul359); + int Shift; + if (int ShXAmount = isShifted359(MulAmt, Shift)) { + // 3/5/9 * 2^N -> shl (shXadd X, X), N + SDLoc DL(N); + SDValue X = N->getOperand(0); + // Put the shift first if we can fold a zext into the shift forming + // a slli.uw. + if (X.getOpcode() == ISD::AND && isa<ConstantSDNode>(X.getOperand(1)) && + X.getConstantOperandVal(1) == UINT64_C(0xffffffff)) { + SDValue Shl = + DAG.getNode(ISD::SHL, DL, VT, X, DAG.getConstant(Shift, DL, VT)); + return DAG.getNode(RISCVISD::SHL_ADD, DL, VT, Shl, + DAG.getConstant(ShXAmount, DL, VT), Shl); } + // Otherwise, put the shl second so that it can fold with following + // instructions (e.g. sext or add). + SDValue Mul359 = DAG.getNode(RISCVISD::SHL_ADD, DL, VT, X, + DAG.getConstant(ShXAmount, DL, VT), X); + return DAG.getNode(ISD::SHL, DL, VT, Mul359, + DAG.getConstant(Shift, DL, VT)); + } + + // 3/5/9 * 3/5/9 -> shXadd (shYadd X, X), (shYadd X, X) + int ShX; + int ShY; + switch (MulAmt) { + case 3 * 5: + ShY = 1; + ShX = 2; + break; + case 3 * 9: + ShY = 1; + ShX = 3; + break; + case 5 * 5: + ShX = ShY = 2; + break; + case 5 * 9: + ShY = 2; + ShX = 3; + break; + case 9 * 9: + ShX = ShY = 3; + break; + default: + ShX = ShY = 0; + break; + } + if (ShX) { + SDLoc DL(N); + SDValue Mul359 = DAG.getNode(RISCVISD::SHL_ADD, DL, VT, X, + DAG.getConstant(ShY, DL, VT), X); + return DAG.getNode(RISCVISD::SHL_ADD, DL, VT, Mul359, + DAG.getConstant(ShX, DL, VT), Mul359); } // If this is a power 2 + 2/4/8, we can use a shift followed by a single @@ -16557,18 +16574,14 @@ static SDValue expandMul(SDNode *N, SelectionDAG &DAG, // variants we could implement. e.g. 
// (2^(1,2,3) * 3,5,9 + 1) << C2 // 2^(C1>3) * 3,5,9 +/- 1 - for (uint64_t Divisor : {3, 5, 9}) { - uint64_t C = MulAmt - 1; - if (C <= Divisor) - continue; - unsigned TZ = llvm::countr_zero(C); - if ((C >> TZ) == Divisor && (TZ == 1 || TZ == 2 || TZ == 3)) { + if (int ShXAmount = isShifted359(MulAmt - 1, Shift)) { + assert(Shift != 0 && "MulAmt=4,6,10 handled before"); + if (Shift <= 3) { SDLoc DL(N); - SDValue Mul359 = - DAG.getNode(RISCVISD::SHL_ADD, DL, VT, X, - DAG.getConstant(Log2_64(Divisor - 1), DL, VT), X); + SDValue Mul359 = DAG.getNode(RISCVISD::SHL_ADD, DL, VT, X, + DAG.getConstant(ShXAmount, DL, VT), X); return DAG.getNode(RISCVISD::SHL_ADD, DL, VT, Mul359, - DAG.getConstant(TZ, DL, VT), X); + DAG.getConstant(Shift, DL, VT), X); } } @@ -16576,7 +16589,7 @@ static SDValue expandMul(SDNode *N, SelectionDAG &DAG, if (MulAmt > 2 && isPowerOf2_64((MulAmt - 1) & (MulAmt - 2))) { unsigned ScaleShift = llvm::countr_zero(MulAmt - 1); if (ScaleShift >= 1 && ScaleShift < 4) { - unsigned ShiftAmt = Log2_64(((MulAmt - 1) & (MulAmt - 2))); + unsigned ShiftAmt = llvm::countr_zero((MulAmt - 1) & (MulAmt - 2)); SDLoc DL(N); SDValue Shift1 = DAG.getNode(ISD::SHL, DL, VT, X, DAG.getConstant(ShiftAmt, DL, VT)); @@ -16589,7 +16602,7 @@ static SDValue expandMul(SDNode *N, SelectionDAG &DAG, // 2^N - 3/5/9 --> (sub (shl X, C1), (shXadd X, x)) for (uint64_t Offset : {3, 5, 9}) { if (isPowerOf2_64(MulAmt + Offset)) { - unsigned ShAmt = Log2_64(MulAmt + Offset); + unsigned ShAmt = llvm::countr_zero(MulAmt + Offset); if (ShAmt >= VT.getSizeInBits()) continue; SDLoc DL(N); @@ -16608,21 +16621,16 @@ static SDValue expandMul(SDNode *N, SelectionDAG &DAG, uint64_t MulAmt2 = MulAmt / Divisor; // 3/5/9 * 3/5/9 * 2^N - In particular, this covers multiples // of 25 which happen to be quite common. - for (uint64_t Divisor2 : {3, 5, 9}) { - if (MulAmt2 % Divisor2 != 0) - continue; - uint64_t MulAmt3 = MulAmt2 / Divisor2; - if (isPowerOf2_64(MulAmt3)) { - SDLoc DL(N); - SDValue Mul359A = - DAG.getNode(RISCVISD::SHL_ADD, DL, VT, X, - DAG.getConstant(Log2_64(Divisor - 1), DL, VT), X); - SDValue Mul359B = DAG.getNode( - RISCVISD::SHL_ADD, DL, VT, Mul359A, - DAG.getConstant(Log2_64(Divisor2 - 1), DL, VT), Mul359A); - return DAG.getNode(ISD::SHL, DL, VT, Mul359B, - DAG.getConstant(Log2_64(MulAmt3), DL, VT)); - } + if (int ShBAmount = isShifted359(MulAmt2, Shift)) { + SDLoc DL(N); + SDValue Mul359A = + DAG.getNode(RISCVISD::SHL_ADD, DL, VT, X, + DAG.getConstant(Log2_64(Divisor - 1), DL, VT), X); + SDValue Mul359B = + DAG.getNode(RISCVISD::SHL_ADD, DL, VT, Mul359A, + DAG.getConstant(ShBAmount, DL, VT), Mul359A); + return DAG.getNode(ISD::SHL, DL, VT, Mul359B, + DAG.getConstant(Shift, DL, VT)); } } } @@ -25031,8 +25039,17 @@ bool RISCVTargetLowering::fallBackToDAGISel(const Instruction &Inst) const { if (auto *II = dyn_cast<IntrinsicInst>(&Inst)) { // Mark RVV intrinsic as supported. - if (RISCVVIntrinsicsTable::getRISCVVIntrinsicInfo(II->getIntrinsicID())) + if (RISCVVIntrinsicsTable::getRISCVVIntrinsicInfo(II->getIntrinsicID())) { + // GISel doesn't support tuple types yet. 
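+      // (These are the target("riscv.vector.tuple", ...) types used by the
+      // segment load/store intrinsics; a tuple in either the result or any
+      // argument position forces the DAG fallback.)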
+ if (Inst.getType()->isRISCVVectorTupleTy()) + return true; + + for (unsigned i = 0; i < II->arg_size(); ++i) + if (II->getArgOperand(i)->getType()->isRISCVVectorTupleTy()) + return true; + return false; + } } if (Inst.getType()->isScalableTy()) diff --git a/llvm/lib/Target/RISCV/RISCVInstrInfo.cpp b/llvm/lib/Target/RISCV/RISCVInstrInfo.cpp index 7db4832..96e1078 100644 --- a/llvm/lib/Target/RISCV/RISCVInstrInfo.cpp +++ b/llvm/lib/Target/RISCV/RISCVInstrInfo.cpp @@ -4586,24 +4586,23 @@ void RISCVInstrInfo::mulImm(MachineFunction &MF, MachineBasicBlock &MBB, .addReg(DestReg, RegState::Kill) .addImm(ShiftAmount) .setMIFlag(Flag); - } else if (STI.hasShlAdd(3) && - ((Amount % 3 == 0 && isPowerOf2_64(Amount / 3)) || - (Amount % 5 == 0 && isPowerOf2_64(Amount / 5)) || - (Amount % 9 == 0 && isPowerOf2_64(Amount / 9)))) { + } else if (int ShXAmount, ShiftAmount; + STI.hasShlAdd(3) && + (ShXAmount = isShifted359(Amount, ShiftAmount)) != 0) { // We can use Zba SHXADD+SLLI instructions for multiply in some cases. unsigned Opc; - uint32_t ShiftAmount; - if (Amount % 9 == 0) { - Opc = RISCV::SH3ADD; - ShiftAmount = Log2_64(Amount / 9); - } else if (Amount % 5 == 0) { - Opc = RISCV::SH2ADD; - ShiftAmount = Log2_64(Amount / 5); - } else if (Amount % 3 == 0) { + switch (ShXAmount) { + case 1: Opc = RISCV::SH1ADD; - ShiftAmount = Log2_64(Amount / 3); - } else { - llvm_unreachable("implied by if-clause"); + break; + case 2: + Opc = RISCV::SH2ADD; + break; + case 3: + Opc = RISCV::SH3ADD; + break; + default: + llvm_unreachable("unexpected result of isShifted359"); } if (ShiftAmount) BuildMI(MBB, II, DL, get(RISCV::SLLI), DestReg) diff --git a/llvm/lib/Target/RISCV/RISCVInstrInfo.h b/llvm/lib/Target/RISCV/RISCVInstrInfo.h index 42a0c4c..c5eddb9 100644 --- a/llvm/lib/Target/RISCV/RISCVInstrInfo.h +++ b/llvm/lib/Target/RISCV/RISCVInstrInfo.h @@ -25,6 +25,25 @@ namespace llvm { +// If Value is of the form C1<<C2, where C1 = 3, 5 or 9, +// returns log2(C1 - 1) and assigns Shift = C2. +// Otherwise, returns 0. 
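+// Examples: isShifted359(24, S) returns 1 with S = 3 (24 = 3 << 3);
+// isShifted359(20, S) returns 2 with S = 2 (20 = 5 << 2);
+// isShifted359(7, S) returns 0 (7 is not 3, 5 or 9 times a power of two).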
+template <typename T> int isShifted359(T Value, int &Shift) { + if (Value == 0) + return 0; + Shift = llvm::countr_zero(Value); + switch (Value >> Shift) { + case 3: + return 1; + case 5: + return 2; + case 9: + return 3; + default: + return 0; + } +} + class RISCVSubtarget; static const MachineMemOperand::Flags MONontemporalBit0 = diff --git a/llvm/lib/Target/RISCV/RISCVInstrInfoXMips.td b/llvm/lib/Target/RISCV/RISCVInstrInfoXMips.td index 115ab38e..0b5bee1 100644 --- a/llvm/lib/Target/RISCV/RISCVInstrInfoXMips.td +++ b/llvm/lib/Target/RISCV/RISCVInstrInfoXMips.td @@ -175,7 +175,7 @@ def MIPS_CCMOV : RVInstR4<0b11, 0b011, OPC_CUSTOM_0, (outs GPR:$rd), Sched<[]>; } -let Predicates = [UseCCMovInsn] in { +let Predicates = [UseMIPSCCMovInsn] in { def : Pat<(select (riscv_setne (XLenVT GPR:$rs2)), (XLenVT GPR:$rs1), (XLenVT GPR:$rs3)), (MIPS_CCMOV GPR:$rs1, GPR:$rs2, GPR:$rs3)>; diff --git a/llvm/lib/Target/RISCV/RISCVInstrInfoZalasr.td b/llvm/lib/Target/RISCV/RISCVInstrInfoZalasr.td index 1dd7332..1deecd2 100644 --- a/llvm/lib/Target/RISCV/RISCVInstrInfoZalasr.td +++ b/llvm/lib/Target/RISCV/RISCVInstrInfoZalasr.td @@ -93,12 +93,11 @@ let Predicates = [HasStdExtZalasr] in { def : PatSRL<releasing_store<atomic_store_32>, SW_RL>; def : PatSRL<seq_cst_store<atomic_store_32>, SW_RL>; -} // Predicates = [HasStdExtZalasr] -let Predicates = [HasStdExtZalasr, IsRV32] in { - def : PatLAQ<acquiring_load<atomic_load_nonext_32>, LW_AQ>; - def : PatLAQ<seq_cst_load<atomic_load_nonext_32>, LW_AQ>; -} // Predicates = [HasStdExtZalasr, IsRV32] + // Used by GISel for RV32 and RV64. + def : PatLAQ<acquiring_load<atomic_load_nonext_32>, LW_AQ, i32>; + def : PatLAQ<seq_cst_load<atomic_load_nonext_32>, LW_AQ, i32>; +} // Predicates = [HasStdExtZalasr] let Predicates = [HasStdExtZalasr, IsRV64] in { def : PatLAQ<acquiring_load<atomic_load_asext_32>, LW_AQ, i64>; diff --git a/llvm/lib/Target/RISCV/RISCVInstrInfoZb.td b/llvm/lib/Target/RISCV/RISCVInstrInfoZb.td index ce21d83..8d9b777 100644 --- a/llvm/lib/Target/RISCV/RISCVInstrInfoZb.td +++ b/llvm/lib/Target/RISCV/RISCVInstrInfoZb.td @@ -808,9 +808,9 @@ multiclass Sh2Add_UWPat<Instruction sh2add_uw> { } multiclass Sh3Add_UWPat<Instruction sh3add_uw> { - def : Pat<(i64 (add_like_non_imm12 (and GPR:$rs1, 0xFFFFFFF8), + def : Pat<(i64 (add_like_non_imm12 (and (shl GPR:$rs1, (i64 3)), 0x7FFFFFFFF), (XLenVT GPR:$rs2))), - (sh3add_uw (XLenVT (SRLIW GPR:$rs1, 3)), GPR:$rs2)>; + (sh3add_uw GPR:$rs1, GPR:$rs2)>; // Use SRLI to clear the LSBs and SHXADD_UW to mask and shift. 
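  // (The rewritten pattern above works because (shl x, 3) masked to 35 bits
  // is exactly zext32(x) << 3, which sh3add.uw computes directly, saving the
  // SRLIW of the old form. The pattern below instead handles an operand that
  // was masked before shifting.)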
def : Pat<(i64 (add_like_non_imm12 (and GPR:$rs1, 0x7FFFFFFF8), (XLenVT GPR:$rs2))), diff --git a/llvm/lib/Target/RISCV/RISCVLoadStoreOptimizer.cpp b/llvm/lib/Target/RISCV/RISCVLoadStoreOptimizer.cpp index c81a20b..115a96e 100644 --- a/llvm/lib/Target/RISCV/RISCVLoadStoreOptimizer.cpp +++ b/llvm/lib/Target/RISCV/RISCVLoadStoreOptimizer.cpp @@ -92,7 +92,7 @@ bool RISCVLoadStoreOpt::runOnMachineFunction(MachineFunction &Fn) { if (skipFunction(Fn.getFunction())) return false; const RISCVSubtarget &Subtarget = Fn.getSubtarget<RISCVSubtarget>(); - if (!Subtarget.useLoadStorePairs()) + if (!Subtarget.useMIPSLoadStorePairs()) return false; bool MadeChange = false; diff --git a/llvm/lib/Target/RISCV/RISCVSubtarget.cpp b/llvm/lib/Target/RISCV/RISCVSubtarget.cpp index e35ffaf..715ac4c 100644 --- a/llvm/lib/Target/RISCV/RISCVSubtarget.cpp +++ b/llvm/lib/Target/RISCV/RISCVSubtarget.cpp @@ -65,9 +65,9 @@ static cl::opt<bool> UseMIPSLoadStorePairsOpt( cl::desc("Enable the load/store pair optimization pass"), cl::init(false), cl::Hidden); -static cl::opt<bool> UseCCMovInsn("use-riscv-ccmov", - cl::desc("Use 'mips.ccmov' instruction"), - cl::init(true), cl::Hidden); +static cl::opt<bool> UseMIPSCCMovInsn("use-riscv-mips-ccmov", + cl::desc("Use 'mips.ccmov' instruction"), + cl::init(true), cl::Hidden); void RISCVSubtarget::anchor() {} @@ -246,10 +246,10 @@ void RISCVSubtarget::overridePostRASchedPolicy( } } -bool RISCVSubtarget::useLoadStorePairs() const { +bool RISCVSubtarget::useMIPSLoadStorePairs() const { return UseMIPSLoadStorePairsOpt && HasVendorXMIPSLSP; } -bool RISCVSubtarget::useCCMovInsn() const { - return UseCCMovInsn && HasVendorXMIPSCMov; +bool RISCVSubtarget::useMIPSCCMovInsn() const { + return UseMIPSCCMovInsn && HasVendorXMIPSCMov; } diff --git a/llvm/lib/Target/RISCV/RISCVSubtarget.h b/llvm/lib/Target/RISCV/RISCVSubtarget.h index 7dffa63..6acf799 100644 --- a/llvm/lib/Target/RISCV/RISCVSubtarget.h +++ b/llvm/lib/Target/RISCV/RISCVSubtarget.h @@ -227,8 +227,8 @@ public: unsigned getXLen() const { return is64Bit() ? 
64 : 32; } - bool useLoadStorePairs() const; - bool useCCMovInsn() const; + bool useMIPSLoadStorePairs() const; + bool useMIPSCCMovInsn() const; unsigned getFLen() const { if (HasStdExtD) return 64; diff --git a/llvm/lib/Target/RISCV/RISCVTargetTransformInfo.cpp b/llvm/lib/Target/RISCV/RISCVTargetTransformInfo.cpp index ee25f69..7bc0b5b 100644 --- a/llvm/lib/Target/RISCV/RISCVTargetTransformInfo.cpp +++ b/llvm/lib/Target/RISCV/RISCVTargetTransformInfo.cpp @@ -2747,20 +2747,72 @@ bool RISCVTTIImpl::getTgtMemIntrinsic(IntrinsicInst *Inst, Intrinsic::ID IID = Inst->getIntrinsicID(); LLVMContext &C = Inst->getContext(); bool HasMask = false; + + auto getSegNum = [](const IntrinsicInst *II, unsigned PtrOperandNo, + bool IsWrite) -> int64_t { + if (auto *TarExtTy = + dyn_cast<TargetExtType>(II->getArgOperand(0)->getType())) + return TarExtTy->getIntParameter(0); + + return 1; + }; + switch (IID) { case Intrinsic::riscv_vle_mask: case Intrinsic::riscv_vse_mask: + case Intrinsic::riscv_vlseg2_mask: + case Intrinsic::riscv_vlseg3_mask: + case Intrinsic::riscv_vlseg4_mask: + case Intrinsic::riscv_vlseg5_mask: + case Intrinsic::riscv_vlseg6_mask: + case Intrinsic::riscv_vlseg7_mask: + case Intrinsic::riscv_vlseg8_mask: + case Intrinsic::riscv_vsseg2_mask: + case Intrinsic::riscv_vsseg3_mask: + case Intrinsic::riscv_vsseg4_mask: + case Intrinsic::riscv_vsseg5_mask: + case Intrinsic::riscv_vsseg6_mask: + case Intrinsic::riscv_vsseg7_mask: + case Intrinsic::riscv_vsseg8_mask: HasMask = true; [[fallthrough]]; case Intrinsic::riscv_vle: - case Intrinsic::riscv_vse: { + case Intrinsic::riscv_vse: + case Intrinsic::riscv_vlseg2: + case Intrinsic::riscv_vlseg3: + case Intrinsic::riscv_vlseg4: + case Intrinsic::riscv_vlseg5: + case Intrinsic::riscv_vlseg6: + case Intrinsic::riscv_vlseg7: + case Intrinsic::riscv_vlseg8: + case Intrinsic::riscv_vsseg2: + case Intrinsic::riscv_vsseg3: + case Intrinsic::riscv_vsseg4: + case Intrinsic::riscv_vsseg5: + case Intrinsic::riscv_vsseg6: + case Intrinsic::riscv_vsseg7: + case Intrinsic::riscv_vsseg8: { // Intrinsic interface: // riscv_vle(merge, ptr, vl) // riscv_vle_mask(merge, ptr, mask, vl, policy) // riscv_vse(val, ptr, vl) // riscv_vse_mask(val, ptr, mask, vl, policy) + // riscv_vlseg#(merge, ptr, vl, sew) + // riscv_vlseg#_mask(merge, ptr, mask, vl, policy, sew) + // riscv_vsseg#(val, ptr, vl, sew) + // riscv_vsseg#_mask(val, ptr, mask, vl, sew) bool IsWrite = Inst->getType()->isVoidTy(); Type *Ty = IsWrite ? Inst->getArgOperand(0)->getType() : Inst->getType(); + // The results of segment loads are TargetExtType. + if (auto *TarExtTy = dyn_cast<TargetExtType>(Ty)) { + unsigned SEW = + 1 << cast<ConstantInt>(Inst->getArgOperand(Inst->arg_size() - 1)) + ->getZExtValue(); + Ty = TarExtTy->getTypeParameter(0U); + Ty = ScalableVectorType::get( + IntegerType::get(C, SEW), + cast<ScalableVectorType>(Ty)->getMinNumElements() * 8 / SEW); + } const auto *RVVIInfo = RISCVVIntrinsicsTable::getRISCVVIntrinsicInfo(IID); unsigned VLIndex = RVVIInfo->VLOperand; unsigned PtrOperandNo = VLIndex - 1 - HasMask; @@ -2771,23 +2823,72 @@ bool RISCVTTIImpl::getTgtMemIntrinsic(IntrinsicInst *Inst, if (HasMask) Mask = Inst->getArgOperand(VLIndex - 1); Value *EVL = Inst->getArgOperand(VLIndex); + unsigned SegNum = getSegNum(Inst, PtrOperandNo, IsWrite); + // RVV uses contiguous elements as a segment. 
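+ // E.g. a vlseg3 whose fields are nxv4i32 has SegNum == 3; the three
+ // consecutive i32 fields of each segment fold into one i96 element,
+ // so Ty becomes nxv4i96 and each lane's access covers a full segment.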
+ if (SegNum > 1) { + unsigned ElemSize = Ty->getScalarSizeInBits(); + auto *SegTy = IntegerType::get(C, ElemSize * SegNum); + Ty = VectorType::get(SegTy, cast<VectorType>(Ty)); + } Info.InterestingOperands.emplace_back(Inst, PtrOperandNo, IsWrite, Ty, Alignment, Mask, EVL); return true; } case Intrinsic::riscv_vlse_mask: case Intrinsic::riscv_vsse_mask: + case Intrinsic::riscv_vlsseg2_mask: + case Intrinsic::riscv_vlsseg3_mask: + case Intrinsic::riscv_vlsseg4_mask: + case Intrinsic::riscv_vlsseg5_mask: + case Intrinsic::riscv_vlsseg6_mask: + case Intrinsic::riscv_vlsseg7_mask: + case Intrinsic::riscv_vlsseg8_mask: + case Intrinsic::riscv_vssseg2_mask: + case Intrinsic::riscv_vssseg3_mask: + case Intrinsic::riscv_vssseg4_mask: + case Intrinsic::riscv_vssseg5_mask: + case Intrinsic::riscv_vssseg6_mask: + case Intrinsic::riscv_vssseg7_mask: + case Intrinsic::riscv_vssseg8_mask: HasMask = true; [[fallthrough]]; case Intrinsic::riscv_vlse: - case Intrinsic::riscv_vsse: { + case Intrinsic::riscv_vsse: + case Intrinsic::riscv_vlsseg2: + case Intrinsic::riscv_vlsseg3: + case Intrinsic::riscv_vlsseg4: + case Intrinsic::riscv_vlsseg5: + case Intrinsic::riscv_vlsseg6: + case Intrinsic::riscv_vlsseg7: + case Intrinsic::riscv_vlsseg8: + case Intrinsic::riscv_vssseg2: + case Intrinsic::riscv_vssseg3: + case Intrinsic::riscv_vssseg4: + case Intrinsic::riscv_vssseg5: + case Intrinsic::riscv_vssseg6: + case Intrinsic::riscv_vssseg7: + case Intrinsic::riscv_vssseg8: { // Intrinsic interface: // riscv_vlse(merge, ptr, stride, vl) // riscv_vlse_mask(merge, ptr, stride, mask, vl, policy) // riscv_vsse(val, ptr, stride, vl) // riscv_vsse_mask(val, ptr, stride, mask, vl, policy) + // riscv_vlsseg#(merge, ptr, offset, vl, sew) + // riscv_vlsseg#_mask(merge, ptr, offset, mask, vl, policy, sew) + // riscv_vssseg#(val, ptr, offset, vl, sew) + // riscv_vssseg#_mask(val, ptr, offset, mask, vl, sew) bool IsWrite = Inst->getType()->isVoidTy(); Type *Ty = IsWrite ? Inst->getArgOperand(0)->getType() : Inst->getType(); + // The results of segment loads are TargetExtType. + if (auto *TarExtTy = dyn_cast<TargetExtType>(Ty)) { + unsigned SEW = + 1 << cast<ConstantInt>(Inst->getArgOperand(Inst->arg_size() - 1)) + ->getZExtValue(); + Ty = TarExtTy->getTypeParameter(0U); + Ty = ScalableVectorType::get( + IntegerType::get(C, SEW), + cast<ScalableVectorType>(Ty)->getMinNumElements() * 8 / SEW); + } const auto *RVVIInfo = RISCVVIntrinsicsTable::getRISCVVIntrinsicInfo(IID); unsigned VLIndex = RVVIInfo->VLOperand; unsigned PtrOperandNo = VLIndex - 2 - HasMask; @@ -2809,6 +2910,13 @@ bool RISCVTTIImpl::getTgtMemIntrinsic(IntrinsicInst *Inst, if (HasMask) Mask = Inst->getArgOperand(VLIndex - 1); Value *EVL = Inst->getArgOperand(VLIndex); + unsigned SegNum = getSegNum(Inst, PtrOperandNo, IsWrite); + // RVV uses contiguous elements as a segment. 
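+ // Same folding as the unit-stride case above: e.g. a vlsseg2 of nxv8i16
+ // fields (SegNum == 2) is modeled as nxv8i32, one i32 per segment.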
+ if (SegNum > 1) { + unsigned ElemSize = Ty->getScalarSizeInBits(); + auto *SegTy = IntegerType::get(C, ElemSize * SegNum); + Ty = VectorType::get(SegTy, cast<VectorType>(Ty)); + } Info.InterestingOperands.emplace_back(Inst, PtrOperandNo, IsWrite, Ty, Alignment, Mask, EVL, Stride); return true; @@ -2817,19 +2925,89 @@ bool RISCVTTIImpl::getTgtMemIntrinsic(IntrinsicInst *Inst, case Intrinsic::riscv_vluxei_mask: case Intrinsic::riscv_vsoxei_mask: case Intrinsic::riscv_vsuxei_mask: + case Intrinsic::riscv_vloxseg2_mask: + case Intrinsic::riscv_vloxseg3_mask: + case Intrinsic::riscv_vloxseg4_mask: + case Intrinsic::riscv_vloxseg5_mask: + case Intrinsic::riscv_vloxseg6_mask: + case Intrinsic::riscv_vloxseg7_mask: + case Intrinsic::riscv_vloxseg8_mask: + case Intrinsic::riscv_vluxseg2_mask: + case Intrinsic::riscv_vluxseg3_mask: + case Intrinsic::riscv_vluxseg4_mask: + case Intrinsic::riscv_vluxseg5_mask: + case Intrinsic::riscv_vluxseg6_mask: + case Intrinsic::riscv_vluxseg7_mask: + case Intrinsic::riscv_vluxseg8_mask: + case Intrinsic::riscv_vsoxseg2_mask: + case Intrinsic::riscv_vsoxseg3_mask: + case Intrinsic::riscv_vsoxseg4_mask: + case Intrinsic::riscv_vsoxseg5_mask: + case Intrinsic::riscv_vsoxseg6_mask: + case Intrinsic::riscv_vsoxseg7_mask: + case Intrinsic::riscv_vsoxseg8_mask: + case Intrinsic::riscv_vsuxseg2_mask: + case Intrinsic::riscv_vsuxseg3_mask: + case Intrinsic::riscv_vsuxseg4_mask: + case Intrinsic::riscv_vsuxseg5_mask: + case Intrinsic::riscv_vsuxseg6_mask: + case Intrinsic::riscv_vsuxseg7_mask: + case Intrinsic::riscv_vsuxseg8_mask: HasMask = true; [[fallthrough]]; case Intrinsic::riscv_vloxei: case Intrinsic::riscv_vluxei: case Intrinsic::riscv_vsoxei: - case Intrinsic::riscv_vsuxei: { + case Intrinsic::riscv_vsuxei: + case Intrinsic::riscv_vloxseg2: + case Intrinsic::riscv_vloxseg3: + case Intrinsic::riscv_vloxseg4: + case Intrinsic::riscv_vloxseg5: + case Intrinsic::riscv_vloxseg6: + case Intrinsic::riscv_vloxseg7: + case Intrinsic::riscv_vloxseg8: + case Intrinsic::riscv_vluxseg2: + case Intrinsic::riscv_vluxseg3: + case Intrinsic::riscv_vluxseg4: + case Intrinsic::riscv_vluxseg5: + case Intrinsic::riscv_vluxseg6: + case Intrinsic::riscv_vluxseg7: + case Intrinsic::riscv_vluxseg8: + case Intrinsic::riscv_vsoxseg2: + case Intrinsic::riscv_vsoxseg3: + case Intrinsic::riscv_vsoxseg4: + case Intrinsic::riscv_vsoxseg5: + case Intrinsic::riscv_vsoxseg6: + case Intrinsic::riscv_vsoxseg7: + case Intrinsic::riscv_vsoxseg8: + case Intrinsic::riscv_vsuxseg2: + case Intrinsic::riscv_vsuxseg3: + case Intrinsic::riscv_vsuxseg4: + case Intrinsic::riscv_vsuxseg5: + case Intrinsic::riscv_vsuxseg6: + case Intrinsic::riscv_vsuxseg7: + case Intrinsic::riscv_vsuxseg8: { // Intrinsic interface (only listed ordered version): // riscv_vloxei(merge, ptr, index, vl) // riscv_vloxei_mask(merge, ptr, index, mask, vl, policy) // riscv_vsoxei(val, ptr, index, vl) // riscv_vsoxei_mask(val, ptr, index, mask, vl, policy) + // riscv_vloxseg#(merge, ptr, index, vl, sew) + // riscv_vloxseg#_mask(merge, ptr, index, mask, vl, policy, sew) + // riscv_vsoxseg#(val, ptr, index, vl, sew) + // riscv_vsoxseg#_mask(val, ptr, index, mask, vl, sew) bool IsWrite = Inst->getType()->isVoidTy(); Type *Ty = IsWrite ? Inst->getArgOperand(0)->getType() : Inst->getType(); + // The results of segment loads are TargetExtType. 
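+ // E.g. Ty == target("riscv.vector.tuple", nxv16i8, 4) with a trailing
+ // operand of 5 (log2 of SEW, so SEW == 32) is rebuilt as nxv4i32: each
+ // part holds 16 * 8 == 128 bits and 128 / 32 == 4 elements. The tuple's
+ // NF parameter (4 here) is recovered separately by getSegNum.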
+ if (auto *TarExtTy = dyn_cast<TargetExtType>(Ty)) { + unsigned SEW = + 1 << cast<ConstantInt>(Inst->getArgOperand(Inst->arg_size() - 1)) + ->getZExtValue(); + Ty = TarExtTy->getTypeParameter(0U); + Ty = ScalableVectorType::get( + IntegerType::get(C, SEW), + cast<ScalableVectorType>(Ty)->getMinNumElements() * 8 / SEW); + } const auto *RVVIInfo = RISCVVIntrinsicsTable::getRISCVVIntrinsicInfo(IID); unsigned VLIndex = RVVIInfo->VLOperand; unsigned PtrOperandNo = VLIndex - 2 - HasMask; @@ -2845,6 +3023,13 @@ bool RISCVTTIImpl::getTgtMemIntrinsic(IntrinsicInst *Inst, Mask = ConstantInt::getTrue(MaskType); } Value *EVL = Inst->getArgOperand(VLIndex); + unsigned SegNum = getSegNum(Inst, PtrOperandNo, IsWrite); + // RVV uses contiguous elements as a segment. + if (SegNum > 1) { + unsigned ElemSize = Ty->getScalarSizeInBits(); + auto *SegTy = IntegerType::get(C, ElemSize * SegNum); + Ty = VectorType::get(SegTy, cast<VectorType>(Ty)); + } Value *OffsetOp = Inst->getArgOperand(PtrOperandNo + 1); Info.InterestingOperands.emplace_back(Inst, PtrOperandNo, IsWrite, Ty, Align(1), Mask, EVL, diff --git a/llvm/lib/Target/SPIRV/SPIRVEmitIntrinsics.cpp b/llvm/lib/Target/SPIRV/SPIRVEmitIntrinsics.cpp index 9f2e075..e16c8f0 100644 --- a/llvm/lib/Target/SPIRV/SPIRVEmitIntrinsics.cpp +++ b/llvm/lib/Target/SPIRV/SPIRVEmitIntrinsics.cpp @@ -2811,9 +2811,7 @@ bool SPIRVEmitIntrinsics::runOnFunction(Function &Func) { GetElementPtrInst *NewGEP = simplifyZeroLengthArrayGepInst(Ref); if (NewGEP) { Ref->replaceAllUsesWith(NewGEP); - if (isInstructionTriviallyDead(Ref)) - DeadInsts.insert(Ref); - + DeadInsts.insert(Ref); Ref = NewGEP; } if (Type *GepTy = getGEPType(Ref)) diff --git a/llvm/lib/Target/SPIRV/SPIRVInstructionSelector.cpp b/llvm/lib/Target/SPIRV/SPIRVInstructionSelector.cpp index 0afec42..989950f 100644 --- a/llvm/lib/Target/SPIRV/SPIRVInstructionSelector.cpp +++ b/llvm/lib/Target/SPIRV/SPIRVInstructionSelector.cpp @@ -307,6 +307,10 @@ private: bool selectHandleFromBinding(Register &ResVReg, const SPIRVType *ResType, MachineInstr &I) const; + bool selectCounterHandleFromBinding(Register &ResVReg, + const SPIRVType *ResType, + MachineInstr &I) const; + bool selectReadImageIntrinsic(Register &ResVReg, const SPIRVType *ResType, MachineInstr &I) const; bool selectImageWriteIntrinsic(MachineInstr &I) const; @@ -314,6 +318,8 @@ private: MachineInstr &I) const; bool selectModf(Register ResVReg, const SPIRVType *ResType, MachineInstr &I) const; + bool selectUpdateCounter(Register &ResVReg, const SPIRVType *ResType, + MachineInstr &I) const; bool selectFrexp(Register ResVReg, const SPIRVType *ResType, MachineInstr &I) const; // Utilities @@ -3443,6 +3449,10 @@ bool SPIRVInstructionSelector::selectIntrinsic(Register ResVReg, case Intrinsic::spv_resource_handlefrombinding: { return selectHandleFromBinding(ResVReg, ResType, I); } + case Intrinsic::spv_resource_counterhandlefrombinding: + return selectCounterHandleFromBinding(ResVReg, ResType, I); + case Intrinsic::spv_resource_updatecounter: + return selectUpdateCounter(ResVReg, ResType, I); case Intrinsic::spv_resource_store_typedbuffer: { return selectImageWriteIntrinsic(I); } @@ -3478,6 +3488,130 @@ bool SPIRVInstructionSelector::selectHandleFromBinding(Register &ResVReg, *cast<GIntrinsic>(&I), I); } +bool SPIRVInstructionSelector::selectCounterHandleFromBinding( + Register &ResVReg, const SPIRVType *ResType, MachineInstr &I) const { + auto &Intr = cast<GIntrinsic>(I); + assert(Intr.getIntrinsicID() == + Intrinsic::spv_resource_counterhandlefrombinding); + + // 
Extract information from the intrinsic call. + Register MainHandleReg = Intr.getOperand(2).getReg(); + auto *MainHandleDef = cast<GIntrinsic>(getVRegDef(*MRI, MainHandleReg)); + assert(MainHandleDef->getIntrinsicID() == + Intrinsic::spv_resource_handlefrombinding); + + uint32_t Set = getIConstVal(Intr.getOperand(4).getReg(), MRI); + uint32_t Binding = getIConstVal(Intr.getOperand(3).getReg(), MRI); + uint32_t ArraySize = getIConstVal(MainHandleDef->getOperand(4).getReg(), MRI); + Register IndexReg = MainHandleDef->getOperand(5).getReg(); + const bool IsNonUniform = false; + std::string CounterName = + getStringValueFromReg(MainHandleDef->getOperand(6).getReg(), *MRI) + + ".counter"; + + // Create the counter variable. + MachineIRBuilder MIRBuilder(I); + Register CounterVarReg = buildPointerToResource( + GR.getPointeeType(ResType), GR.getPointerStorageClass(ResType), Set, + Binding, ArraySize, IndexReg, IsNonUniform, CounterName, MIRBuilder); + + return BuildCOPY(ResVReg, CounterVarReg, I); +} + +bool SPIRVInstructionSelector::selectUpdateCounter(Register &ResVReg, + const SPIRVType *ResType, + MachineInstr &I) const { + auto &Intr = cast<GIntrinsic>(I); + assert(Intr.getIntrinsicID() == Intrinsic::spv_resource_updatecounter); + + Register CounterHandleReg = Intr.getOperand(2).getReg(); + Register IncrReg = Intr.getOperand(3).getReg(); + + // The counter handle is a pointer to the counter variable (which is a struct + // containing an i32). We need to get a pointer to that i32 member to do the + // atomic operation. +#ifndef NDEBUG + SPIRVType *CounterVarType = GR.getSPIRVTypeForVReg(CounterHandleReg); + SPIRVType *CounterVarPointeeType = GR.getPointeeType(CounterVarType); + assert(CounterVarPointeeType && + CounterVarPointeeType->getOpcode() == SPIRV::OpTypeStruct && + "Counter variable must be a struct"); + assert(GR.getPointerStorageClass(CounterVarType) == + SPIRV::StorageClass::StorageBuffer && + "Counter variable must be in the storage buffer storage class"); + assert(CounterVarPointeeType->getNumOperands() == 2 && + "Counter variable must have exactly 1 member in the struct"); + const SPIRVType *MemberType = + GR.getSPIRVTypeForVReg(CounterVarPointeeType->getOperand(1).getReg()); + assert(MemberType->getOpcode() == SPIRV::OpTypeInt && + "Counter variable struct must have a single i32 member"); +#endif + + // The struct has a single i32 member. + MachineIRBuilder MIRBuilder(I); + const Type *LLVMIntType = + Type::getInt32Ty(I.getMF()->getFunction().getContext()); + + SPIRVType *IntPtrType = GR.getOrCreateSPIRVPointerType( + LLVMIntType, MIRBuilder, SPIRV::StorageClass::StorageBuffer); + + auto Zero = buildI32Constant(0, I); + if (!Zero.second) + return false; + + Register PtrToCounter = + MRI->createVirtualRegister(GR.getRegClass(IntPtrType)); + if (!BuildMI(*I.getParent(), I, I.getDebugLoc(), + TII.get(SPIRV::OpAccessChain)) + .addDef(PtrToCounter) + .addUse(GR.getSPIRVTypeID(IntPtrType)) + .addUse(CounterHandleReg) + .addUse(Zero.first) + .constrainAllUses(TII, TRI, RBI)) { + return false; + } + + // For UAV/SSBO counters, the scope is Device. The counter variable is not + // used as a flag. So the memory semantics can be None. 
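+ // Roughly, the full sequence for spv_resource_updatecounter is:
+ //   %ptr = OpAccessChain %_ptr_StorageBuffer_int %counter %zero
+ //   %old = OpAtomicIAdd %int %ptr Device None %incr
+ // IncrementCounter (incr == 1) copies %old out directly; DecrementCounter
+ // (incr == -1) additionally emits %old + %incr (the OpIAddS below) to
+ // produce the post-decrement value.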
+ auto Scope = buildI32Constant(SPIRV::Scope::Device, I); + if (!Scope.second) + return false; + auto Semantics = buildI32Constant(SPIRV::MemorySemantics::None, I); + if (!Semantics.second) + return false; + + int64_t IncrVal = getIConstValSext(IncrReg, MRI); + auto Incr = buildI32Constant(static_cast<uint32_t>(IncrVal), I); + if (!Incr.second) + return false; + + Register AtomicRes = MRI->createVirtualRegister(GR.getRegClass(ResType)); + if (!BuildMI(*I.getParent(), I, I.getDebugLoc(), TII.get(SPIRV::OpAtomicIAdd)) + .addDef(AtomicRes) + .addUse(GR.getSPIRVTypeID(ResType)) + .addUse(PtrToCounter) + .addUse(Scope.first) + .addUse(Semantics.first) + .addUse(Incr.first) + .constrainAllUses(TII, TRI, RBI)) { + return false; + } + if (IncrVal >= 0) { + return BuildCOPY(ResVReg, AtomicRes, I); + } + + // In HLSL, IncrementCounter returns the value *before* the increment, while + // DecrementCounter returns the value *after* the decrement. Both are lowered + // to the same atomic intrinsic which returns the value *before* the + // operation. So for decrements (negative IncrVal), we must subtract the + // increment value from the result to get the post-decrement value. + return BuildMI(*I.getParent(), I, I.getDebugLoc(), TII.get(SPIRV::OpIAddS)) + .addDef(ResVReg) + .addUse(GR.getSPIRVTypeID(ResType)) + .addUse(AtomicRes) + .addUse(Incr.first) + .constrainAllUses(TII, TRI, RBI); +} bool SPIRVInstructionSelector::selectReadImageIntrinsic( Register &ResVReg, const SPIRVType *ResType, MachineInstr &I) const { diff --git a/llvm/lib/Target/SPIRV/SPIRVLegalizeImplicitBinding.cpp b/llvm/lib/Target/SPIRV/SPIRVLegalizeImplicitBinding.cpp index 205895e..fc14a03 100644 --- a/llvm/lib/Target/SPIRV/SPIRVLegalizeImplicitBinding.cpp +++ b/llvm/lib/Target/SPIRV/SPIRVLegalizeImplicitBinding.cpp @@ -39,6 +39,10 @@ private: void collectBindingInfo(Module &M); uint32_t getAndReserveFirstUnusedBinding(uint32_t DescSet); void replaceImplicitBindingCalls(Module &M); + void replaceResourceHandleCall(Module &M, CallInst *OldCI, + uint32_t NewBinding); + void replaceCounterHandleCall(Module &M, CallInst *OldCI, + uint32_t NewBinding); void verifyUniqueOrderIdPerResource(SmallVectorImpl<CallInst *> &Calls); // A map from descriptor set to a bit vector of used binding numbers. 
@@ -56,64 +60,93 @@ struct BindingInfoCollector : public InstVisitor<BindingInfoCollector> { : UsedBindings(UsedBindings), ImplicitBindingCalls(ImplicitBindingCalls) { } + void addBinding(uint32_t DescSet, uint32_t Binding) { + if (UsedBindings.size() <= DescSet) { + UsedBindings.resize(DescSet + 1); + UsedBindings[DescSet].resize(64); + } + if (UsedBindings[DescSet].size() <= Binding) { + UsedBindings[DescSet].resize(2 * Binding + 1); + } + UsedBindings[DescSet].set(Binding); + } + void visitCallInst(CallInst &CI) { if (CI.getIntrinsicID() == Intrinsic::spv_resource_handlefrombinding) { const uint32_t DescSet = cast<ConstantInt>(CI.getArgOperand(0))->getZExtValue(); const uint32_t Binding = cast<ConstantInt>(CI.getArgOperand(1))->getZExtValue(); - - if (UsedBindings.size() <= DescSet) { - UsedBindings.resize(DescSet + 1); - UsedBindings[DescSet].resize(64); - } - if (UsedBindings[DescSet].size() <= Binding) { - UsedBindings[DescSet].resize(2 * Binding + 1); - } - UsedBindings[DescSet].set(Binding); + addBinding(DescSet, Binding); } else if (CI.getIntrinsicID() == Intrinsic::spv_resource_handlefromimplicitbinding) { ImplicitBindingCalls.push_back(&CI); + } else if (CI.getIntrinsicID() == + Intrinsic::spv_resource_counterhandlefrombinding) { + const uint32_t DescSet = + cast<ConstantInt>(CI.getArgOperand(2))->getZExtValue(); + const uint32_t Binding = + cast<ConstantInt>(CI.getArgOperand(1))->getZExtValue(); + addBinding(DescSet, Binding); + } else if (CI.getIntrinsicID() == + Intrinsic::spv_resource_counterhandlefromimplicitbinding) { + ImplicitBindingCalls.push_back(&CI); } } }; +static uint32_t getOrderId(const CallInst *CI) { + uint32_t OrderIdArgIdx = 0; + switch (CI->getIntrinsicID()) { + case Intrinsic::spv_resource_handlefromimplicitbinding: + OrderIdArgIdx = 0; + break; + case Intrinsic::spv_resource_counterhandlefromimplicitbinding: + OrderIdArgIdx = 1; + break; + default: + llvm_unreachable("CallInst is not an implicit binding intrinsic"); + } + return cast<ConstantInt>(CI->getArgOperand(OrderIdArgIdx))->getZExtValue(); +} + +static uint32_t getDescSet(const CallInst *CI) { + uint32_t DescSetArgIdx; + switch (CI->getIntrinsicID()) { + case Intrinsic::spv_resource_handlefromimplicitbinding: + case Intrinsic::spv_resource_handlefrombinding: + DescSetArgIdx = 1; + break; + case Intrinsic::spv_resource_counterhandlefromimplicitbinding: + case Intrinsic::spv_resource_counterhandlefrombinding: + DescSetArgIdx = 2; + break; + default: + llvm_unreachable("CallInst is not an implicit binding intrinsic"); + } + return cast<ConstantInt>(CI->getArgOperand(DescSetArgIdx))->getZExtValue(); +} + void SPIRVLegalizeImplicitBinding::collectBindingInfo(Module &M) { BindingInfoCollector InfoCollector(UsedBindings, ImplicitBindingCalls); InfoCollector.visit(M); // Sort the collected calls by their order ID. - std::sort( - ImplicitBindingCalls.begin(), ImplicitBindingCalls.end(), - [](const CallInst *A, const CallInst *B) { - const uint32_t OrderIdArgIdx = 0; - const uint32_t OrderA = - cast<ConstantInt>(A->getArgOperand(OrderIdArgIdx))->getZExtValue(); - const uint32_t OrderB = - cast<ConstantInt>(B->getArgOperand(OrderIdArgIdx))->getZExtValue(); - return OrderA < OrderB; - }); + std::sort(ImplicitBindingCalls.begin(), ImplicitBindingCalls.end(), + [](const CallInst *A, const CallInst *B) { + return getOrderId(A) < getOrderId(B); + }); } void SPIRVLegalizeImplicitBinding::verifyUniqueOrderIdPerResource( SmallVectorImpl<CallInst *> &Calls) { // Check that the order Id is unique per resource. 
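  // ImplicitBindingCalls was sorted by order ID in collectBindingInfo, so
  // duplicate order IDs are adjacent and one linear scan over neighboring
  // pairs is sufficient.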
for (uint32_t i = 1; i < Calls.size(); ++i) { - const uint32_t OrderIdArgIdx = 0; - const uint32_t DescSetArgIdx = 1; - const uint32_t OrderA = - cast<ConstantInt>(Calls[i - 1]->getArgOperand(OrderIdArgIdx)) - ->getZExtValue(); - const uint32_t OrderB = - cast<ConstantInt>(Calls[i]->getArgOperand(OrderIdArgIdx)) - ->getZExtValue(); + const uint32_t OrderA = getOrderId(Calls[i - 1]); + const uint32_t OrderB = getOrderId(Calls[i]); if (OrderA == OrderB) { - const uint32_t DescSetA = - cast<ConstantInt>(Calls[i - 1]->getArgOperand(DescSetArgIdx)) - ->getZExtValue(); - const uint32_t DescSetB = - cast<ConstantInt>(Calls[i]->getArgOperand(DescSetArgIdx)) - ->getZExtValue(); + const uint32_t DescSetA = getDescSet(Calls[i - 1]); + const uint32_t DescSetB = getDescSet(Calls[i]); if (DescSetA != DescSetB) { report_fatal_error("Implicit binding calls with the same order ID must " "have the same descriptor set"); @@ -144,36 +177,26 @@ void SPIRVLegalizeImplicitBinding::replaceImplicitBindingCalls(Module &M) { uint32_t lastBindingNumber = -1; for (CallInst *OldCI : ImplicitBindingCalls) { - IRBuilder<> Builder(OldCI); - const uint32_t OrderId = - cast<ConstantInt>(OldCI->getArgOperand(0))->getZExtValue(); - const uint32_t DescSet = - cast<ConstantInt>(OldCI->getArgOperand(1))->getZExtValue(); - - // Reuse an existing binding for this order ID, if one was already assigned. - // Otherwise, assign a new binding. - const uint32_t NewBinding = (lastOrderId == OrderId) - ? lastBindingNumber - : getAndReserveFirstUnusedBinding(DescSet); - lastOrderId = OrderId; - lastBindingNumber = NewBinding; - - SmallVector<Value *, 8> Args; - Args.push_back(Builder.getInt32(DescSet)); - Args.push_back(Builder.getInt32(NewBinding)); - - // Copy the remaining arguments from the old call. - for (uint32_t i = 2; i < OldCI->arg_size(); ++i) { - Args.push_back(OldCI->getArgOperand(i)); + const uint32_t OrderId = getOrderId(OldCI); + uint32_t BindingNumber; + if (OrderId == lastOrderId) { + BindingNumber = lastBindingNumber; + } else { + const uint32_t DescSet = getDescSet(OldCI); + BindingNumber = getAndReserveFirstUnusedBinding(DescSet); } - Function *NewFunc = Intrinsic::getOrInsertDeclaration( - &M, Intrinsic::spv_resource_handlefrombinding, OldCI->getType()); - CallInst *NewCI = Builder.CreateCall(NewFunc, Args); - NewCI->setCallingConv(OldCI->getCallingConv()); - - OldCI->replaceAllUsesWith(NewCI); - OldCI->eraseFromParent(); + if (OldCI->getIntrinsicID() == + Intrinsic::spv_resource_handlefromimplicitbinding) { + replaceResourceHandleCall(M, OldCI, BindingNumber); + } else { + assert(OldCI->getIntrinsicID() == + Intrinsic::spv_resource_counterhandlefromimplicitbinding && + "Unexpected implicit binding intrinsic"); + replaceCounterHandleCall(M, OldCI, BindingNumber); + } + lastOrderId = OrderId; + lastBindingNumber = BindingNumber; } } @@ -196,4 +219,49 @@ INITIALIZE_PASS(SPIRVLegalizeImplicitBinding, "legalize-spirv-implicit-binding", ModulePass *llvm::createSPIRVLegalizeImplicitBindingPass() { return new SPIRVLegalizeImplicitBinding(); -}
\ No newline at end of file +} + +void SPIRVLegalizeImplicitBinding::replaceResourceHandleCall( + Module &M, CallInst *OldCI, uint32_t NewBinding) { + IRBuilder<> Builder(OldCI); + const uint32_t DescSet = + cast<ConstantInt>(OldCI->getArgOperand(1))->getZExtValue(); + + SmallVector<Value *, 8> Args; + Args.push_back(Builder.getInt32(DescSet)); + Args.push_back(Builder.getInt32(NewBinding)); + + // Copy the remaining arguments from the old call. + for (uint32_t i = 2; i < OldCI->arg_size(); ++i) { + Args.push_back(OldCI->getArgOperand(i)); + } + + Function *NewFunc = Intrinsic::getOrInsertDeclaration( + &M, Intrinsic::spv_resource_handlefrombinding, OldCI->getType()); + CallInst *NewCI = Builder.CreateCall(NewFunc, Args); + NewCI->setCallingConv(OldCI->getCallingConv()); + + OldCI->replaceAllUsesWith(NewCI); + OldCI->eraseFromParent(); +} + +void SPIRVLegalizeImplicitBinding::replaceCounterHandleCall( + Module &M, CallInst *OldCI, uint32_t NewBinding) { + IRBuilder<> Builder(OldCI); + const uint32_t DescSet = + cast<ConstantInt>(OldCI->getArgOperand(2))->getZExtValue(); + + SmallVector<Value *, 8> Args; + Args.push_back(OldCI->getArgOperand(0)); + Args.push_back(Builder.getInt32(NewBinding)); + Args.push_back(Builder.getInt32(DescSet)); + + Type *Tys[] = {OldCI->getType(), OldCI->getArgOperand(0)->getType()}; + Function *NewFunc = Intrinsic::getOrInsertDeclaration( + &M, Intrinsic::spv_resource_counterhandlefrombinding, Tys); + CallInst *NewCI = Builder.CreateCall(NewFunc, Args); + NewCI->setCallingConv(OldCI->getCallingConv()); + + OldCI->replaceAllUsesWith(NewCI); + OldCI->eraseFromParent(); +} diff --git a/llvm/lib/Target/SPIRV/SPIRVUtils.cpp b/llvm/lib/Target/SPIRV/SPIRVUtils.cpp index 327c011..1d47c89 100644 --- a/llvm/lib/Target/SPIRV/SPIRVUtils.cpp +++ b/llvm/lib/Target/SPIRV/SPIRVUtils.cpp @@ -385,6 +385,12 @@ uint64_t getIConstVal(Register ConstReg, const MachineRegisterInfo *MRI) { return MI->getOperand(1).getCImm()->getValue().getZExtValue(); } +int64_t getIConstValSext(Register ConstReg, const MachineRegisterInfo *MRI) { + const MachineInstr *MI = getDefInstrMaybeConstant(ConstReg, MRI); + assert(MI && MI->getOpcode() == TargetOpcode::G_CONSTANT); + return MI->getOperand(1).getCImm()->getSExtValue(); +} + bool isSpvIntrinsic(const MachineInstr &MI, Intrinsic::ID IntrinsicID) { if (const auto *GI = dyn_cast<GIntrinsic>(&MI)) return GI->is(IntrinsicID); diff --git a/llvm/lib/Target/SPIRV/SPIRVUtils.h b/llvm/lib/Target/SPIRV/SPIRVUtils.h index 409a0fd..5777a24 100644 --- a/llvm/lib/Target/SPIRV/SPIRVUtils.h +++ b/llvm/lib/Target/SPIRV/SPIRVUtils.h @@ -289,6 +289,9 @@ MachineInstr *getDefInstrMaybeConstant(Register &ConstReg, // Get constant integer value of the given ConstReg. uint64_t getIConstVal(Register ConstReg, const MachineRegisterInfo *MRI); +// Get constant integer value of the given ConstReg, sign-extended. +int64_t getIConstValSext(Register ConstReg, const MachineRegisterInfo *MRI); + // Check if MI is a SPIR-V specific intrinsic call. bool isSpvIntrinsic(const MachineInstr &MI, Intrinsic::ID IntrinsicID); // Check if it's a SPIR-V specific intrinsic call. 
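The sign-extended accessor matters for selectUpdateCounter above: getIConstVal reads the constant through getZExtValue(), so a 32-bit G_CONSTANT holding -1 (the increment HLSL's DecrementCounter produces) would come back as 0xFFFFFFFF and the IncrVal >= 0 test could never recognize a decrement. A minimal standalone sketch of the distinction, using only APInt (illustrative values, not part of the patch):

    #include "llvm/ADT/APInt.h"
    #include <cassert>

    int main() {
      // The CImm payload of a 32-bit G_CONSTANT holding -1.
      llvm::APInt Imm(32, -1, /*isSigned=*/true);
      assert(Imm.getZExtValue() == 0xFFFFFFFFu); // what getIConstVal yields
      assert(Imm.getSExtValue() == -1);          // what getIConstValSext yields
      return 0;
    }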
diff --git a/llvm/lib/Target/X86/GISel/X86InstructionSelector.cpp b/llvm/lib/Target/X86/GISel/X86InstructionSelector.cpp index 3090ad3..27fba34 100644 --- a/llvm/lib/Target/X86/GISel/X86InstructionSelector.cpp +++ b/llvm/lib/Target/X86/GISel/X86InstructionSelector.cpp @@ -407,6 +407,7 @@ bool X86InstructionSelector::select(MachineInstr &I) { case TargetOpcode::G_TRUNC: return selectTruncOrPtrToInt(I, MRI, MF); case TargetOpcode::G_INTTOPTR: + case TargetOpcode::G_FREEZE: return selectCopy(I, MRI); case TargetOpcode::G_ZEXT: return selectZext(I, MRI, MF); diff --git a/llvm/lib/Target/X86/GISel/X86LegalizerInfo.cpp b/llvm/lib/Target/X86/GISel/X86LegalizerInfo.cpp index e7709ef..11ef721 100644 --- a/llvm/lib/Target/X86/GISel/X86LegalizerInfo.cpp +++ b/llvm/lib/Target/X86/GISel/X86LegalizerInfo.cpp @@ -89,9 +89,29 @@ X86LegalizerInfo::X86LegalizerInfo(const X86Subtarget &STI, // 32/64-bits needs support for s64/s128 to handle cases: // s64 = EXTEND (G_IMPLICIT_DEF s32) -> s64 = G_IMPLICIT_DEF // s128 = EXTEND (G_IMPLICIT_DEF s32/s64) -> s128 = G_IMPLICIT_DEF - getActionDefinitionsBuilder(G_IMPLICIT_DEF) + getActionDefinitionsBuilder( + {G_IMPLICIT_DEF, G_PHI, G_FREEZE, G_CONSTANT_FOLD_BARRIER}) .legalFor({p0, s1, s8, s16, s32, s64}) - .legalFor(Is64Bit, {s128}); + .legalFor(UseX87, {s80}) + .legalFor(Is64Bit, {s128}) + .legalFor(HasSSE2, {v16s8, v8s16, v4s32, v2s64}) + .legalFor(HasAVX, {v32s8, v16s16, v8s32, v4s64}) + .legalFor(HasAVX512, {v64s8, v32s16, v16s32, v8s64}) + .widenScalarOrEltToNextPow2(0, /*Min=*/8) + .clampScalarOrElt(0, s8, sMaxScalar) + .moreElementsToNextPow2(0) + .clampMinNumElements(0, s8, 16) + .clampMinNumElements(0, s16, 8) + .clampMinNumElements(0, s32, 4) + .clampMinNumElements(0, s64, 2) + .clampMaxNumElements(0, s8, HasAVX512 ? 64 : (HasAVX ? 32 : 16)) + .clampMaxNumElements(0, s16, HasAVX512 ? 32 : (HasAVX ? 16 : 8)) + .clampMaxNumElements(0, s32, HasAVX512 ? 16 : (HasAVX ? 8 : 4)) + .clampMaxNumElements(0, s64, HasAVX512 ? 8 : (HasAVX ? 4 : 2)) + .clampMaxNumElements(0, p0, + Is64Bit ? s64MaxVector.getNumElements() + : s32MaxVector.getNumElements()) + .scalarizeIf(scalarOrEltWiderThan(0, 64), 0); getActionDefinitionsBuilder(G_CONSTANT) .legalFor({p0, s8, s16, s32}) @@ -289,26 +309,6 @@ X86LegalizerInfo::X86LegalizerInfo(const X86Subtarget &STI, .clampScalar(1, s16, sMaxScalar) .scalarSameSizeAs(0, 1); - // control flow - getActionDefinitionsBuilder(G_PHI) - .legalFor({s8, s16, s32, p0}) - .legalFor(UseX87, {s80}) - .legalFor(Is64Bit, {s64}) - .legalFor(HasSSE1, {v16s8, v8s16, v4s32, v2s64}) - .legalFor(HasAVX, {v32s8, v16s16, v8s32, v4s64}) - .legalFor(HasAVX512, {v64s8, v32s16, v16s32, v8s64}) - .clampMinNumElements(0, s8, 16) - .clampMinNumElements(0, s16, 8) - .clampMinNumElements(0, s32, 4) - .clampMinNumElements(0, s64, 2) - .clampMaxNumElements(0, s8, HasAVX512 ? 64 : (HasAVX ? 32 : 16)) - .clampMaxNumElements(0, s16, HasAVX512 ? 32 : (HasAVX ? 16 : 8)) - .clampMaxNumElements(0, s32, HasAVX512 ? 16 : (HasAVX ? 8 : 4)) - .clampMaxNumElements(0, s64, HasAVX512 ? 8 : (HasAVX ? 
4 : 2)) - .widenScalarToNextPow2(0, /*Min=*/32) - .clampScalar(0, s8, sMaxScalar) - .scalarize(0); - getActionDefinitionsBuilder(G_BRCOND).legalFor({s1}); // pointer handling @@ -592,11 +592,6 @@ X86LegalizerInfo::X86LegalizerInfo(const X86Subtarget &STI, .minScalar(0, LLT::scalar(32)) .libcall(); - getActionDefinitionsBuilder({G_FREEZE, G_CONSTANT_FOLD_BARRIER}) - .legalFor({s8, s16, s32, s64, p0}) - .widenScalarToNextPow2(0, /*Min=*/8) - .clampScalar(0, s8, sMaxScalar); - getLegacyLegalizerInfo().computeTables(); verify(*STI.getInstrInfo()); } diff --git a/llvm/lib/Target/X86/X86InstrAVX512.td b/llvm/lib/Target/X86/X86InstrAVX512.td index 564810c..83bd6ac 100644 --- a/llvm/lib/Target/X86/X86InstrAVX512.td +++ b/llvm/lib/Target/X86/X86InstrAVX512.td @@ -662,6 +662,7 @@ def VINSERTPSZrri : AVX512AIi8<0x21, MRMSrcReg, (outs VR128X:$dst), "vinsertps\t{$src3, $src2, $src1, $dst|$dst, $src1, $src2, $src3}", [(set VR128X:$dst, (X86insertps VR128X:$src1, VR128X:$src2, timm:$src3))]>, EVEX, VVVV, Sched<[SchedWriteFShuffle.XMM]>; +let mayLoad = 1 in def VINSERTPSZrmi : AVX512AIi8<0x21, MRMSrcMem, (outs VR128X:$dst), (ins VR128X:$src1, f32mem:$src2, u8imm:$src3), "vinsertps\t{$src3, $src2, $src1, $dst|$dst, $src1, $src2, $src3}", @@ -1293,6 +1294,7 @@ multiclass avx512_subvec_broadcast_rm<bits<8> opc, string OpcodeStr, SDPatternOperator OpNode, X86VectorVTInfo _Dst, X86VectorVTInfo _Src> { + let hasSideEffects = 0, mayLoad = 1 in defm rm : AVX512_maskable<opc, MRMSrcMem, _Dst, (outs _Dst.RC:$dst), (ins _Src.MemOp:$src), OpcodeStr, "$src", "$src", (_Dst.VT (OpNode addr:$src))>, @@ -1748,6 +1750,7 @@ let Constraints = "$src1 = $dst", ExeDomain = _.ExeDomain in { (_.VT (X86VPermt2 _.RC:$src1, IdxVT.RC:$src2, _.RC:$src3)), 1>, EVEX, VVVV, AVX5128IBase, Sched<[sched]>; + let hasSideEffects = 0, mayLoad = 1 in defm rm: AVX512_maskable_3src<opc, MRMSrcMem, _, (outs _.RC:$dst), (ins IdxVT.RC:$src2, _.MemOp:$src3), OpcodeStr, "$src3, $src2", "$src2, $src3", @@ -1759,7 +1762,7 @@ let Constraints = "$src1 = $dst", ExeDomain = _.ExeDomain in { multiclass avx512_perm_t_mb<bits<8> opc, string OpcodeStr, X86FoldableSchedWrite sched, X86VectorVTInfo _, X86VectorVTInfo IdxVT> { - let Constraints = "$src1 = $dst", ExeDomain = _.ExeDomain in + let Constraints = "$src1 = $dst", ExeDomain = _.ExeDomain, hasSideEffects = 0, mayLoad = 1 in defm rmb: AVX512_maskable_3src<opc, MRMSrcMem, _, (outs _.RC:$dst), (ins IdxVT.RC:$src2, _.ScalarMemOp:$src3), OpcodeStr, !strconcat("${src3}", _.BroadcastStr,", $src2"), @@ -1987,6 +1990,7 @@ multiclass avx512_cmp_scalar<X86VectorVTInfo _, SDNode OpNode, SDNode OpNodeSAE, _.FRC:$src2, timm:$cc))]>, EVEX, VVVV, VEX_LIG, Sched<[sched]>, SIMD_EXC; + let mayLoad = 1 in def rmi : AVX512Ii8<0xC2, MRMSrcMem, (outs _.KRC:$dst), (ins _.FRC:$src1, _.ScalarMemOp:$src2, u8imm:$cc), @@ -2145,6 +2149,7 @@ multiclass avx512_icmp_cc<bits<8> opc, string Suffix, PatFrag Frag, (_.VT _.RC:$src2), cond)))]>, EVEX, VVVV, Sched<[sched]>; + let mayLoad = 1 in def rmi : AVX512AIi8<opc, MRMSrcMem, (outs _.KRC:$dst), (ins _.RC:$src1, _.MemOp:$src2, u8imm:$cc), !strconcat("vpcmp", Suffix, @@ -2167,6 +2172,7 @@ multiclass avx512_icmp_cc<bits<8> opc, string Suffix, PatFrag Frag, (_.VT _.RC:$src2), cond))))]>, EVEX, VVVV, EVEX_K, Sched<[sched]>; + let mayLoad = 1 in def rmik : AVX512AIi8<opc, MRMSrcMem, (outs _.KRC:$dst), (ins _.KRCWM:$mask, _.RC:$src1, _.MemOp:$src2, u8imm:$cc), @@ -2198,6 +2204,7 @@ multiclass avx512_icmp_cc_rmb<bits<8> opc, string Suffix, PatFrag Frag, PatFrag Frag_su, X86FoldableSchedWrite 
sched, X86VectorVTInfo _, string Name> : avx512_icmp_cc<opc, Suffix, Frag, Frag_su, sched, _, Name> { + let mayLoad = 1 in { def rmbi : AVX512AIi8<opc, MRMSrcMem, (outs _.KRC:$dst), (ins _.RC:$src1, _.ScalarMemOp:$src2, u8imm:$cc), @@ -2221,6 +2228,7 @@ multiclass avx512_icmp_cc_rmb<bits<8> opc, string Suffix, PatFrag Frag, (_.BroadcastLdFrag addr:$src2), cond))))]>, EVEX, VVVV, EVEX_K, EVEX_B, Sched<[sched.Folded, sched.ReadAfterFold]>; + } def : Pat<(_.KVT (Frag:$cc (_.BroadcastLdFrag addr:$src2), (_.VT _.RC:$src1), cond)), @@ -2305,6 +2313,7 @@ let Uses = [MXCSR], mayRaiseFPException = 1 in { (X86cmpm_su (_.VT _.RC:$src1), (_.VT _.RC:$src2), timm:$cc), 1>, Sched<[sched]>; + let mayLoad = 1 in { defm rmi : AVX512_maskable_cmp<0xC2, MRMSrcMem, _, (outs _.KRC:$dst),(ins _.RC:$src1, _.MemOp:$src2, u8imm:$cc), "vcmp"#_.Suffix, @@ -2329,6 +2338,7 @@ let Uses = [MXCSR], mayRaiseFPException = 1 in { timm:$cc)>, EVEX_B, Sched<[sched.Folded, sched.ReadAfterFold]>; } + } // Patterns for selecting with loads in other operand. def : Pat<(X86any_cmpm (_.LdFrag addr:$src2), (_.VT _.RC:$src1), @@ -3771,6 +3781,7 @@ def VMOVDI2PDIZrr : AVX512BI<0x6E, MRMSrcReg, (outs VR128X:$dst), (ins GR32:$src [(set VR128X:$dst, (v4i32 (scalar_to_vector GR32:$src)))]>, EVEX, Sched<[WriteVecMoveFromGpr]>; +let mayLoad = 1 in def VMOVDI2PDIZrm : AVX512BI<0x6E, MRMSrcMem, (outs VR128X:$dst), (ins i32mem:$src), "vmovd\t{$src, $dst|$dst, $src}", [(set VR128X:$dst, @@ -3874,7 +3885,7 @@ def VMOVSS2DIZrr : AVX512BI<0x7E, MRMDestReg, (outs GR32:$dst), // Move Quadword Int to Packed Quadword Int // -let ExeDomain = SSEPackedInt in { +let ExeDomain = SSEPackedInt, mayLoad = 1, hasSideEffects = 0 in { def VMOVQI2PQIZrm : AVX512XSI<0x7E, MRMSrcMem, (outs VR128X:$dst), (ins i64mem:$src), "vmovq\t{$src, $dst|$dst, $src}", @@ -3930,13 +3941,13 @@ multiclass avx512_move_scalar<string asm, SDNode OpNode, PatFrag vzload_frag, (_.VT (OpNode _.RC:$src1, _.RC:$src2)), (_.VT _.RC:$src0))))], _.ExeDomain>, EVEX, VVVV, EVEX_K, Sched<[SchedWriteFShuffle.XMM]>; - let canFoldAsLoad = 1, isReMaterializable = 1 in { + let canFoldAsLoad = 1, isReMaterializable = 1, mayLoad = 1, hasSideEffects = 0 in { def rm : AVX512PI<0x10, MRMSrcMem, (outs _.RC:$dst), (ins _.ScalarMemOp:$src), !strconcat(asm, "\t{$src, $dst|$dst, $src}"), [(set _.RC:$dst, (_.VT (vzload_frag addr:$src)))], _.ExeDomain>, EVEX, Sched<[WriteFLoad]>; // _alt version uses FR32/FR64 register class. 
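  // (rm_alt is isCodeGenOnly: only instruction selection uses it; the
  // assembler and disassembler go through the rm form above.)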
- let isCodeGenOnly = 1 in + let isCodeGenOnly = 1, mayLoad = 1, hasSideEffects = 0 in def rm_alt : AVX512PI<0x10, MRMSrcMem, (outs _.FRC:$dst), (ins _.ScalarMemOp:$src), !strconcat(asm, "\t{$src, $dst|$dst, $src}"), [(set _.FRC:$dst, (_.ScalarLdFrag addr:$src))], @@ -4557,6 +4568,7 @@ let Predicates = [HasAVX512] in { // AVX-512 - Non-temporals //===----------------------------------------------------------------------===// +let mayLoad = 1, hasSideEffects = 0 in { def VMOVNTDQAZrm : AVX512PI<0x2A, MRMSrcMem, (outs VR512:$dst), (ins i512mem:$src), "vmovntdqa\t{$src, $dst|$dst, $src}", [], SSEPackedInt>, Sched<[SchedWriteVecMoveLSNT.ZMM.RM]>, @@ -4575,11 +4587,12 @@ let Predicates = [HasVLX] in { [], SSEPackedInt>, Sched<[SchedWriteVecMoveLSNT.XMM.RM]>, EVEX, T8, PD, EVEX_V128, EVEX_CD8<64, CD8VF>; } +} multiclass avx512_movnt<bits<8> opc, string OpcodeStr, X86VectorVTInfo _, X86SchedWriteMoveLS Sched, PatFrag st_frag = alignednontemporalstore> { - let SchedRW = [Sched.MR], AddedComplexity = 400 in + let mayStore = 1, SchedRW = [Sched.MR], AddedComplexity = 400 in def mr : AVX512PI<opc, MRMDestMem, (outs), (ins _.MemOp:$dst, _.RC:$src), !strconcat(OpcodeStr, "\t{$src, $dst|$dst, $src}"), [(st_frag (_.VT _.RC:$src), addr:$dst)], @@ -4682,6 +4695,7 @@ multiclass avx512_binop_rm<bits<8> opc, string OpcodeStr, SDNode OpNode, IsCommutable, IsCommutable>, AVX512BIBase, EVEX, VVVV, Sched<[sched]>; + let mayLoad = 1, hasSideEffects = 0 in defm rm : AVX512_maskable<opc, MRMSrcMem, _, (outs _.RC:$dst), (ins _.RC:$src1, _.MemOp:$src2), OpcodeStr, "$src2, $src1", "$src1, $src2", @@ -4694,6 +4708,7 @@ multiclass avx512_binop_rmb<bits<8> opc, string OpcodeStr, SDNode OpNode, X86VectorVTInfo _, X86FoldableSchedWrite sched, bit IsCommutable = 0> : avx512_binop_rm<opc, OpcodeStr, OpNode, _, sched, IsCommutable> { + let mayLoad = 1, hasSideEffects = 0 in defm rmb : AVX512_maskable<opc, MRMSrcMem, _, (outs _.RC:$dst), (ins _.RC:$src1, _.ScalarMemOp:$src2), OpcodeStr, "${src2}"#_.BroadcastStr#", $src1", @@ -4811,6 +4826,7 @@ multiclass avx512_binop_rm2<bits<8> opc, string OpcodeStr, (_Src.VT _Src.RC:$src2))), IsCommutable>, AVX512BIBase, EVEX, VVVV, Sched<[sched]>; + let mayLoad = 1, hasSideEffects = 0 in { defm rm : AVX512_maskable<opc, MRMSrcMem, _Dst, (outs _Dst.RC:$dst), (ins _Src.RC:$src1, _Src.MemOp:$src2), OpcodeStr, "$src2, $src1", "$src1, $src2", @@ -4828,6 +4844,7 @@ multiclass avx512_binop_rm2<bits<8> opc, string OpcodeStr, (_Brdct.VT (_Brdct.BroadcastLdFrag addr:$src2)))))>, AVX512BIBase, EVEX, VVVV, EVEX_B, Sched<[sched.Folded, sched.ReadAfterFold]>; + } } defm VPADD : avx512_binop_rm_vl_all<0xFC, 0xFD, 0xFE, 0xD4, "vpadd", add, @@ -4893,6 +4910,7 @@ defm VPMULTISHIFTQB : avx512_binop_all<0x83, "vpmultishiftqb", SchedWriteVecALU, multiclass avx512_packs_rmb<bits<8> opc, string OpcodeStr, SDNode OpNode, X86VectorVTInfo _Src, X86VectorVTInfo _Dst, X86FoldableSchedWrite sched> { + let mayLoad = 1, hasSideEffects = 0 in defm rmb : AVX512_maskable<opc, MRMSrcMem, _Dst, (outs _Dst.RC:$dst), (ins _Src.RC:$src1, _Src.ScalarMemOp:$src2), OpcodeStr, @@ -4916,6 +4934,7 @@ multiclass avx512_packs_rm<bits<8> opc, string OpcodeStr, (_Src.VT _Src.RC:$src2))), IsCommutable, IsCommutable>, EVEX_CD8<_Src.EltSize, CD8VF>, EVEX, VVVV, Sched<[sched]>; + let mayLoad = 1, hasSideEffects = 0 in defm rm : AVX512_maskable<opc, MRMSrcMem, _Dst, (outs _Dst.RC:$dst), (ins _Src.RC:$src1, _Src.MemOp:$src2), OpcodeStr, "$src2, $src1", "$src1, $src2", @@ -5370,6 +5389,7 @@ multiclass avx512_fp_scalar<bits<8> opc, string 
OpcodeStr,X86VectorVTInfo _, (_.VT (VecNode _.RC:$src1, _.RC:$src2)), "_Int">, Sched<[sched]>; + let mayLoad = 1 in defm rm : AVX512_maskable_scalar<opc, MRMSrcMem, _, (outs _.RC:$dst), (ins _.RC:$src1, _.IntScalarMemOp:$src2), OpcodeStr, "$src2, $src1", "$src1, $src2", @@ -5384,6 +5404,7 @@ multiclass avx512_fp_scalar<bits<8> opc, string OpcodeStr,X86VectorVTInfo _, Sched<[sched]> { let isCommutable = IsCommutable; } + let mayLoad = 1 in def rm : I< opc, MRMSrcMem, (outs _.FRC:$dst), (ins _.FRC:$src1, _.ScalarMemOp:$src2), OpcodeStr#"\t{$src2, $src1, $dst|$dst, $src1, $src2}", @@ -5414,6 +5435,7 @@ multiclass avx512_fp_scalar_sae<bits<8> opc, string OpcodeStr,X86VectorVTInfo _, (_.VT (VecNode _.RC:$src1, _.RC:$src2)), "_Int">, Sched<[sched]>, SIMD_EXC; + let mayLoad = 1 in defm rm : AVX512_maskable_scalar<opc, MRMSrcMem, _, (outs _.RC:$dst), (ins _.RC:$src1, _.IntScalarMemOp:$src2), OpcodeStr, "$src2, $src1", "$src1, $src2", @@ -5430,6 +5452,7 @@ multiclass avx512_fp_scalar_sae<bits<8> opc, string OpcodeStr,X86VectorVTInfo _, Sched<[sched]> { let isCommutable = IsCommutable; } + let mayLoad = 1 in def rm : I< opc, MRMSrcMem, (outs _.FRC:$dst), (ins _.FRC:$src1, _.ScalarMemOp:$src2), OpcodeStr#"\t{$src2, $src1, $dst|$dst, $src1, $src2}", @@ -5509,6 +5532,7 @@ multiclass avx512_comutable_binop_s<bits<8> opc, string OpcodeStr, Sched<[sched]> { let isCommutable = 1; } + let mayLoad = 1 in def rm : I< opc, MRMSrcMem, (outs _.FRC:$dst), (ins _.FRC:$src1, _.ScalarMemOp:$src2), OpcodeStr#"\t{$src2, $src1, $dst|$dst, $src1, $src2}", @@ -5737,6 +5761,7 @@ multiclass avx512_fp_scalef_p<bits<8> opc, string OpcodeStr, SDNode OpNode, "$src2, $src1", "$src1, $src2", (_.VT (OpNode _.RC:$src1, _.RC:$src2))>, EVEX, VVVV, Sched<[sched]>; + let mayLoad = 1 in { defm rm: AVX512_maskable<opc, MRMSrcMem, _, (outs _.RC:$dst), (ins _.RC:$src1, _.MemOp:$src2), OpcodeStr#_.Suffix, "$src2, $src1", "$src1, $src2", @@ -5749,6 +5774,7 @@ multiclass avx512_fp_scalef_p<bits<8> opc, string OpcodeStr, SDNode OpNode, (OpNode _.RC:$src1, (_.VT (_.BroadcastLdFrag addr:$src2)))>, EVEX, VVVV, EVEX_B, Sched<[sched.Folded, sched.ReadAfterFold]>; } + } } multiclass avx512_fp_scalef_scalar<bits<8> opc, string OpcodeStr, SDNode OpNode, @@ -5759,6 +5785,7 @@ multiclass avx512_fp_scalef_scalar<bits<8> opc, string OpcodeStr, SDNode OpNode, "$src2, $src1", "$src1, $src2", (_.VT (OpNode _.RC:$src1, _.RC:$src2))>, Sched<[sched]>; + let mayLoad = 1 in defm rm: AVX512_maskable_scalar<opc, MRMSrcMem, _, (outs _.RC:$dst), (ins _.RC:$src1, _.IntScalarMemOp:$src2), OpcodeStr#_.Suffix, "$src2, $src1", "$src1, $src2", @@ -5916,6 +5943,7 @@ multiclass avx512_shift_rmi<bits<8> opc, Format ImmFormR, Format ImmFormM, "$src2, $src1", "$src1, $src2", (_.VT (OpNode _.RC:$src1, (i8 timm:$src2)))>, Sched<[sched]>; + let mayLoad = 1 in defm mi : AVX512_maskable<opc, ImmFormM, _, (outs _.RC:$dst), (ins _.MemOp:$src1, u8imm:$src2), OpcodeStr, "$src2, $src1", "$src1, $src2", @@ -5928,7 +5956,7 @@ multiclass avx512_shift_rmi<bits<8> opc, Format ImmFormR, Format ImmFormM, multiclass avx512_shift_rmbi<bits<8> opc, Format ImmFormM, string OpcodeStr, SDNode OpNode, X86FoldableSchedWrite sched, X86VectorVTInfo _> { - let ExeDomain = _.ExeDomain in + let ExeDomain = _.ExeDomain, mayLoad = 1 in defm mbi : AVX512_maskable<opc, ImmFormM, _, (outs _.RC:$dst), (ins _.ScalarMemOp:$src1, u8imm:$src2), OpcodeStr, "$src2, ${src1}"#_.BroadcastStr, "${src1}"#_.BroadcastStr#", $src2", @@ -5946,6 +5974,7 @@ multiclass avx512_shift_rrm<bits<8> opc, string OpcodeStr, SDNode 
OpNode, "$src2, $src1", "$src1, $src2", (_.VT (OpNode _.RC:$src1, (SrcVT VR128X:$src2)))>, AVX512BIBase, EVEX, VVVV, Sched<[sched]>; + let mayLoad = 1 in defm rm : AVX512_maskable<opc, MRMSrcMem, _, (outs _.RC:$dst), (ins _.RC:$src1, i128mem:$src2), OpcodeStr, "$src2, $src1", "$src1, $src2", @@ -6095,6 +6124,7 @@ multiclass avx512_var_shift<bits<8> opc, string OpcodeStr, SDNode OpNode, "$src2, $src1", "$src1, $src2", (_.VT (OpNode _.RC:$src1, (_.VT _.RC:$src2)))>, AVX5128IBase, EVEX, VVVV, Sched<[sched]>; + let mayLoad = 1 in defm rm : AVX512_maskable<opc, MRMSrcMem, _, (outs _.RC:$dst), (ins _.RC:$src1, _.MemOp:$src2), OpcodeStr, "$src2, $src1", "$src1, $src2", @@ -6107,7 +6137,7 @@ multiclass avx512_var_shift<bits<8> opc, string OpcodeStr, SDNode OpNode, multiclass avx512_var_shift_mb<bits<8> opc, string OpcodeStr, SDNode OpNode, X86FoldableSchedWrite sched, X86VectorVTInfo _> { - let ExeDomain = _.ExeDomain in + let ExeDomain = _.ExeDomain, mayLoad = 1 in defm rmb : AVX512_maskable<opc, MRMSrcMem, _, (outs _.RC:$dst), (ins _.RC:$src1, _.ScalarMemOp:$src2), OpcodeStr, "${src2}"#_.BroadcastStr#", $src1", @@ -6372,6 +6402,7 @@ multiclass avx512_permil_vec<bits<8> OpcVar, string OpcodeStr, SDNode OpNode, (_.VT (OpNode _.RC:$src1, (Ctrl.VT Ctrl.RC:$src2)))>, T8, PD, EVEX, VVVV, Sched<[sched]>; + let mayLoad = 1 in { defm rm: AVX512_maskable<OpcVar, MRMSrcMem, _, (outs _.RC:$dst), (ins _.RC:$src1, Ctrl.MemOp:$src2), OpcodeStr, "$src2, $src1", "$src1, $src2", @@ -6389,6 +6420,7 @@ multiclass avx512_permil_vec<bits<8> OpcVar, string OpcodeStr, SDNode OpNode, (Ctrl.VT (Ctrl.BroadcastLdFrag addr:$src2))))>, T8, PD, EVEX, VVVV, EVEX_B, EVEX_CD8<_.EltSize, CD8VF>, Sched<[sched.Folded, sched.ReadAfterFold]>; + } } multiclass avx512_permil_vec_common<string OpcodeStr, bits<8> OpcVar, @@ -7258,6 +7290,7 @@ let ExeDomain = DstVT.ExeDomain, Uses = _Uses, (OpNode (DstVT.VT DstVT.RC:$src1), SrcRC:$src2))]>, EVEX, VVVV, Sched<[sched, ReadDefault, ReadInt2Fpu]>; + let mayLoad = 1 in def rm_Int : SI<opc, MRMSrcMem, (outs DstVT.RC:$dst), (ins DstVT.RC:$src1, x86memop:$src2), asm#"{"#mem#"}\t{$src2, $src1, $dst|$dst, $src1, $src2}", @@ -7400,6 +7433,7 @@ multiclass avx512_cvt_s_int_round<bits<8> opc, X86VectorVTInfo SrcVT, [(set DstVT.RC:$dst, (OpNodeRnd (SrcVT.VT SrcVT.RC:$src),(i32 timm:$rc)))]>, EVEX, VEX_LIG, EVEX_B, EVEX_RC, Sched<[sched]>; + let mayLoad = 1 in def rm_Int : SI<opc, MRMSrcMem, (outs DstVT.RC:$dst), (ins SrcVT.IntScalarMemOp:$src), !strconcat(asm,"\t{$src, $dst|$dst, $src}"), [(set DstVT.RC:$dst, (OpNode @@ -7451,6 +7485,7 @@ multiclass avx512_cvt_s<bits<8> opc, string asm, X86VectorVTInfo SrcVT, !strconcat(asm,"\t{$src, $dst|$dst, $src}"), [(set DstVT.RC:$dst, (OpNode SrcVT.FRC:$src))]>, EVEX, VEX_LIG, Sched<[sched]>, SIMD_EXC; + let mayLoad = 1 in def rm : AVX512<opc, MRMSrcMem, (outs DstVT.RC:$dst), (ins SrcVT.ScalarMemOp:$src), !strconcat(asm,"\t{$src, $dst|$dst, $src}"), [(set DstVT.RC:$dst, (OpNode (SrcVT.ScalarLdFrag addr:$src)))]>, @@ -7572,6 +7607,7 @@ let Predicates = [prd], ExeDomain = _SrcRC.ExeDomain in { !strconcat(asm,"\t{$src, $dst|$dst, $src}"), [(set _DstRC.RC:$dst, (OpNode _SrcRC.FRC:$src))]>, EVEX, VEX_LIG, Sched<[sched]>, SIMD_EXC; + let mayLoad = 1 in def rm : AVX512<opc, MRMSrcMem, (outs _DstRC.RC:$dst), (ins _SrcRC.ScalarMemOp:$src), !strconcat(asm,"\t{$src, $dst|$dst, $src}"), [(set _DstRC.RC:$dst, (OpNode (_SrcRC.ScalarLdFrag addr:$src)))]>, @@ -7587,6 +7623,7 @@ let Predicates = [prd], ExeDomain = _SrcRC.ExeDomain in { !strconcat(asm,"\t{{sae}, $src, $dst|$dst, 
$src, {sae}}"), [(set _DstRC.RC:$dst, (OpNodeSAE (_SrcRC.VT _SrcRC.RC:$src)))]>, EVEX, VEX_LIG, EVEX_B, Sched<[sched]>; + let mayLoad = 1 in def rm_Int : AVX512<opc, MRMSrcMem, (outs _DstRC.RC:$dst), (ins _SrcRC.IntScalarMemOp:$src), !strconcat(asm,"\t{$src, $dst|$dst, $src}"), @@ -7644,6 +7681,7 @@ multiclass avx512_cvt_fp_scalar<bits<8> opc, string OpcodeStr, X86VectorVTInfo _ (_.VT (OpNode (_.VT _.RC:$src1), (_Src.VT _Src.RC:$src2))), "_Int">, EVEX, VVVV, VEX_LIG, Sched<[sched]>; + let mayLoad = 1 in defm rm : AVX512_maskable_scalar<opc, MRMSrcMem, _, (outs _.RC:$dst), (ins _.RC:$src1, _Src.IntScalarMemOp:$src2), OpcodeStr, "$src2, $src1", "$src1, $src2", @@ -7807,6 +7845,7 @@ let Uses = [MXCSR], mayRaiseFPException = 1 in { _.ImmAllZerosV)>, EVEX, Sched<[sched]>; + let mayLoad = 1 in { defm rm : AVX512_maskable_cvt<opc, MRMSrcMem, _, (outs _.RC:$dst), (ins MemOp:$src), (ins _.RC:$src0, MaskRC:$mask, MemOp:$src), @@ -7840,6 +7879,7 @@ let Uses = [MXCSR], mayRaiseFPException = 1 in { _.ImmAllZerosV)>, EVEX, EVEX_B, Sched<[sched.Folded]>; } + } } // Conversion with SAE - suppress all exceptions multiclass avx512_vcvt_fp_sae<bits<8> opc, string OpcodeStr, X86VectorVTInfo _, @@ -8944,6 +8984,7 @@ multiclass avx512_cvtph2ps<X86VectorVTInfo _dest, X86VectorVTInfo _src, (X86any_cvtph2ps (_src.VT _src.RC:$src)), (X86cvtph2ps (_src.VT _src.RC:$src))>, T8, PD, Sched<[sched]>; + let mayLoad = 1 in defm rm : AVX512_maskable_split<0x13, MRMSrcMem, _dest, (outs _dest.RC:$dst), (ins x86memop:$src), "vcvtph2ps", "$src", "$src", (X86any_cvtph2ps (_src.VT ld_dag)), @@ -9161,6 +9202,7 @@ multiclass avx512_fp14_s<bits<8> opc, string OpcodeStr, SDNode OpNode, "$src2, $src1", "$src1, $src2", (OpNode (_.VT _.RC:$src1), (_.VT _.RC:$src2))>, EVEX, VVVV, VEX_LIG, Sched<[sched]>; + let mayLoad = 1 in defm rm : AVX512_maskable_scalar<opc, MRMSrcMem, _, (outs _.RC:$dst), (ins _.RC:$src1, _.IntScalarMemOp:$src2), OpcodeStr, "$src2, $src1", "$src1, $src2", @@ -9621,6 +9663,7 @@ multiclass avx512_rndscale_scalar<bits<8> opc, string OpcodeStr, (i32 timm:$src3))), "_Int">, EVEX_B, Sched<[sched]>; + let mayLoad = 1 in defm rmi : AVX512_maskable_scalar<opc, MRMSrcMem, _, (outs _.RC:$dst), (ins _.RC:$src1, _.IntScalarMemOp:$src2, i32u8imm:$src3), OpcodeStr, @@ -9999,6 +10042,7 @@ multiclass avx512_pmovx_common<bits<8> opc, string OpcodeStr, X86FoldableSchedWr (DestInfo.VT (OpNode (SrcInfo.VT SrcInfo.RC:$src)))>, EVEX, Sched<[sched]>; + let mayLoad = 1 in defm rm : AVX512_maskable<opc, MRMSrcMem, DestInfo, (outs DestInfo.RC:$dst), (ins x86memop:$src), OpcodeStr ,"$src", "$src", (DestInfo.VT (LdFrag addr:$src))>, @@ -10601,6 +10645,7 @@ multiclass expand_by_vec_width<bits<8> opc, X86VectorVTInfo _, (null_frag)>, AVX5128IBase, Sched<[sched]>; + let mayLoad = 1 in defm rm : AVX512_maskable<opc, MRMSrcMem, _, (outs _.RC:$dst), (ins _.MemOp:$src1), OpcodeStr, "$src1", "$src1", (null_frag)>, @@ -10673,6 +10718,7 @@ multiclass avx512_unary_fp_packed_imm<bits<8> opc, string OpcodeStr, (OpNode (_.VT _.RC:$src1), (i32 timm:$src2)), (MaskOpNode (_.VT _.RC:$src1), (i32 timm:$src2))>, Sched<[sched]>; + let mayLoad = 1 in { defm rmi : AVX512_maskable_split<opc, MRMSrcMem, _, (outs _.RC:$dst), (ins _.MemOp:$src1, i32u8imm:$src2), OpcodeStr#_.Suffix, "$src2, $src1", "$src1, $src2", @@ -10691,6 +10737,7 @@ multiclass avx512_unary_fp_packed_imm<bits<8> opc, string OpcodeStr, (i32 timm:$src2))>, EVEX_B, Sched<[sched.Folded, sched.ReadAfterFold]>; } + } } //handle instruction reg_vec1 = op(reg_vec2,reg_vec3,imm),{sae} @@ -10739,6 +10786,7 @@ 
multiclass avx512_fp_packed_imm<bits<8> opc, string OpcodeStr, SDNode OpNode, (_.VT _.RC:$src2), (i32 timm:$src3))>, Sched<[sched]>; + let mayLoad = 1 in { defm rmi : AVX512_maskable<opc, MRMSrcMem, _, (outs _.RC:$dst), (ins _.RC:$src1, _.MemOp:$src2, i32u8imm:$src3), OpcodeStr, "$src3, $src2, $src1", "$src1, $src2, $src3", @@ -10755,6 +10803,7 @@ multiclass avx512_fp_packed_imm<bits<8> opc, string OpcodeStr, SDNode OpNode, (i32 timm:$src3))>, EVEX_B, Sched<[sched.Folded, sched.ReadAfterFold]>; } + } } //handle instruction reg_vec1 = op(reg_vec2,reg_vec3,imm) @@ -10770,6 +10819,7 @@ multiclass avx512_3Op_rm_imm8<bits<8> opc, string OpcodeStr, SDNode OpNode, (SrcInfo.VT SrcInfo.RC:$src2), (i8 timm:$src3)))>, Sched<[sched]>; + let mayLoad = 1 in defm rmi : AVX512_maskable<opc, MRMSrcMem, DestInfo, (outs DestInfo.RC:$dst), (ins SrcInfo.RC:$src1, SrcInfo.MemOp:$src2, u8imm:$src3), OpcodeStr, "$src3, $src2, $src1", "$src1, $src2, $src3", @@ -10788,7 +10838,7 @@ multiclass avx512_3Op_imm8<bits<8> opc, string OpcodeStr, SDNode OpNode, X86FoldableSchedWrite sched, X86VectorVTInfo _>: avx512_3Op_rm_imm8<opc, OpcodeStr, OpNode, sched, _, _>{ - let ExeDomain = _.ExeDomain, ImmT = Imm8 in + let ExeDomain = _.ExeDomain, ImmT = Imm8, mayLoad = 1 in defm rmbi : AVX512_maskable<opc, MRMSrcMem, _, (outs _.RC:$dst), (ins _.RC:$src1, _.ScalarMemOp:$src2, u8imm:$src3), OpcodeStr, "$src3, ${src2}"#_.BroadcastStr#", $src1", @@ -10811,6 +10861,7 @@ multiclass avx512_fp_scalar_imm<bits<8> opc, string OpcodeStr, SDNode OpNode, (_.VT _.RC:$src2), (i32 timm:$src3))>, Sched<[sched]>; + let mayLoad = 1 in defm rmi : AVX512_maskable_scalar<opc, MRMSrcMem, _, (outs _.RC:$dst), (ins _.RC:$src1, _.IntScalarMemOp:$src2, i32u8imm:$src3), OpcodeStr, "$src3, $src2, $src1", "$src1, $src2, $src3", @@ -10979,6 +11030,7 @@ multiclass avx512_shuff_packed_128_common<bits<8> opc, string OpcodeStr, (CastInfo.VT (X86Shuf128 _.RC:$src1, _.RC:$src2, (i8 timm:$src3)))))>, Sched<[sched]>; + let mayLoad = 1 in { defm rmi : AVX512_maskable<opc, MRMSrcMem, _, (outs _.RC:$dst), (ins _.RC:$src1, _.MemOp:$src2, u8imm:$src3), OpcodeStr, "$src3, $src2, $src1", "$src1, $src2, $src3", @@ -11000,6 +11052,7 @@ multiclass avx512_shuff_packed_128_common<bits<8> opc, string OpcodeStr, (i8 timm:$src3)))))>, EVEX_B, Sched<[sched.Folded, sched.ReadAfterFold]>; } + } } multiclass avx512_shuff_packed_128<string OpcodeStr, X86FoldableSchedWrite sched, @@ -11031,6 +11084,7 @@ multiclass avx512_valign<bits<8> opc, string OpcodeStr, OpcodeStr, "$src3, $src2, $src1", "$src1, $src2, $src3", (_.VT (X86VAlign _.RC:$src1, _.RC:$src2, (i8 timm:$src3)))>, Sched<[sched]>; + let mayLoad = 1 in { defm rmi : AVX512_maskable<opc, MRMSrcMem, _, (outs _.RC:$dst), (ins _.RC:$src1, _.MemOp:$src2, u8imm:$src3), OpcodeStr, "$src3, $src2, $src1", "$src1, $src2, $src3", @@ -11048,6 +11102,7 @@ multiclass avx512_valign<bits<8> opc, string OpcodeStr, (i8 timm:$src3))>, EVEX_B, Sched<[sched.Folded, sched.ReadAfterFold]>; } + } } multiclass avx512_valign_common<string OpcodeStr, X86SchedWriteWidths sched, @@ -11202,6 +11257,7 @@ multiclass avx512_unary_rm<bits<8> opc, string OpcodeStr, SDNode OpNode, (_.VT (OpNode (_.VT _.RC:$src1)))>, EVEX, AVX5128IBase, Sched<[sched]>; + let mayLoad = 1 in defm rm : AVX512_maskable<opc, MRMSrcMem, _, (outs _.RC:$dst), (ins _.MemOp:$src1), OpcodeStr, "$src1", "$src1", @@ -11214,6 +11270,7 @@ multiclass avx512_unary_rm<bits<8> opc, string OpcodeStr, SDNode OpNode, multiclass avx512_unary_rmb<bits<8> opc, string OpcodeStr, SDNode OpNode, 
X86FoldableSchedWrite sched, X86VectorVTInfo _> : avx512_unary_rm<opc, OpcodeStr, OpNode, sched, _> { + let mayLoad = 1 in defm rmb : AVX512_maskable<opc, MRMSrcMem, _, (outs _.RC:$dst), (ins _.ScalarMemOp:$src1), OpcodeStr, "${src1}"#_.BroadcastStr, @@ -11368,6 +11425,7 @@ multiclass avx512_movddup_128<bits<8> opc, string OpcodeStr, (ins _.RC:$src), OpcodeStr, "$src", "$src", (_.VT (X86VBroadcast (_.VT _.RC:$src)))>, EVEX, Sched<[sched]>; + let mayLoad = 1 in defm rm : AVX512_maskable<opc, MRMSrcMem, _, (outs _.RC:$dst), (ins _.ScalarMemOp:$src), OpcodeStr, "$src", "$src", (_.VT (_.BroadcastLdFrag addr:$src))>, @@ -11513,6 +11571,7 @@ defm VPEXTRQZ : avx512_extract_elt_dq<"vpextrq", v2i64x_info, GR64>, REX_W; multiclass avx512_insert_elt_m<bits<8> opc, string OpcodeStr, SDNode OpNode, X86VectorVTInfo _, PatFrag LdFrag, SDPatternOperator immoperator> { + let mayLoad = 1 in def rmi : AVX512Ii8<opc, MRMSrcMem, (outs _.RC:$dst), (ins _.RC:$src1, _.ScalarMemOp:$src2, u8imm:$src3), OpcodeStr#"\t{$src3, $src2, $src1, $dst|$dst, $src1, $src2, $src3}", @@ -11650,6 +11709,7 @@ multiclass avx512_psadbw_packed<bits<8> opc, SDNode OpNode, (OpNode (_src.VT _src.RC:$src1), (_src.VT _src.RC:$src2))))]>, Sched<[sched]>; + let mayLoad = 1 in def rm : AVX512BI<opc, MRMSrcMem, (outs _dst.RC:$dst), (ins _src.RC:$src1, _src.MemOp:$src2), !strconcat(OpcodeStr, "\t{$src2, $src1, $dst|$dst, $src1, $src2}"), @@ -11751,6 +11811,7 @@ multiclass avx512_ternlog<bits<8> opc, string OpcodeStr, SDNode OpNode, (_.VT _.RC:$src3), (i8 timm:$src4)), 1, 1>, AVX512AIi8Base, EVEX, VVVV, Sched<[sched]>; + let mayLoad = 1 in { defm rmi : AVX512_maskable_3src<opc, MRMSrcMem, _, (outs _.RC:$dst), (ins _.RC:$src2, _.MemOp:$src3, u8imm:$src4), OpcodeStr, "$src4, $src3, $src2", "$src2, $src3, $src4", @@ -11770,6 +11831,7 @@ multiclass avx512_ternlog<bits<8> opc, string OpcodeStr, SDNode OpNode, (i8 timm:$src4)), 1, 0>, EVEX_B, AVX512AIi8Base, EVEX, VVVV, EVEX_CD8<_.EltSize, CD8VF>, Sched<[sched.Folded, sched.ReadAfterFold]>; + } }// Constraints = "$src1 = $dst" // Additional patterns for matching passthru operand in other positions. 
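// Note: memory forms built through the AVX512_maskable helpers sometimes
// carry (null_frag) or no top-level pattern, so TableGen cannot always
// infer mayLoad from a selection pattern; explicit mayLoad (paired with
// hasSideEffects = 0) keeps the MI memory traits accurate for the
// scheduler and machine LICM.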
@@ -12016,6 +12078,7 @@ multiclass avx512_fixupimm_packed<bits<8> opc, string OpcodeStr, (_.VT _.RC:$src2), (TblVT.VT _.RC:$src3), (i32 timm:$src4))>, Sched<[sched]>; + let mayLoad = 1 in { defm rmi : AVX512_maskable_3src<opc, MRMSrcMem, _, (outs _.RC:$dst), (ins _.RC:$src2, _.MemOp:$src3, i32u8imm:$src4), OpcodeStr#_.Suffix, "$src4, $src3, $src2", "$src2, $src3, $src4", @@ -12033,6 +12096,7 @@ multiclass avx512_fixupimm_packed<bits<8> opc, string OpcodeStr, (TblVT.VT (TblVT.BroadcastLdFrag addr:$src3)), (i32 timm:$src4))>, EVEX_B, Sched<[sched.Folded, sched.ReadAfterFold]>; + } } // Constraints = "$src1 = $dst" } @@ -12075,6 +12139,7 @@ multiclass avx512_fixupimm_scalar<bits<8> opc, string OpcodeStr, (_src3VT.VT _src3VT.RC:$src3), (i32 timm:$src4))>, EVEX_B, Sched<[sched.Folded, sched.ReadAfterFold]>; + let mayLoad = 1 in defm rmi : AVX512_maskable_3src_scalar<opc, MRMSrcMem, _, (outs _.RC:$dst), (ins _.RC:$src2, _.ScalarMemOp:$src3, i32u8imm:$src4), OpcodeStr#_.Suffix, "$src4, $src3, $src2", "$src2, $src3, $src4", @@ -12417,6 +12482,7 @@ multiclass VNNI_rmb<bits<8> Op, string OpStr, SDNode OpNode, VTI.RC:$src2, VTI.RC:$src3)), IsCommutable, IsCommutable>, EVEX, VVVV, T8, Sched<[sched]>; + let mayLoad = 1 in { defm rm : AVX512_maskable_3src<Op, MRMSrcMem, VTI, (outs VTI.RC:$dst), (ins VTI.RC:$src2, VTI.MemOp:$src3), OpStr, "$src3, $src2", "$src2, $src3", @@ -12435,6 +12501,7 @@ multiclass VNNI_rmb<bits<8> Op, string OpStr, SDNode OpNode, T8, Sched<[sched.Folded, sched.ReadAfterFold, sched.ReadAfterFold]>; } + } } multiclass VNNI_common<bits<8> Op, string OpStr, SDNode OpNode, @@ -12508,6 +12575,7 @@ multiclass VPSHUFBITQMB_rm<X86FoldableSchedWrite sched, X86VectorVTInfo VTI> { (X86Vpshufbitqmb_su (VTI.VT VTI.RC:$src1), (VTI.VT VTI.RC:$src2))>, EVEX, VVVV, T8, PD, Sched<[sched]>; + let mayLoad = 1 in defm rm : AVX512_maskable_cmp<0x8F, MRMSrcMem, VTI, (outs VTI.KRC:$dst), (ins VTI.RC:$src1, VTI.MemOp:$src2), "vpshufbitqmb", @@ -12557,7 +12625,7 @@ multiclass GF2P8AFFINE_avx512_rmb_imm<bits<8> Op, string OpStr, SDNode OpNode, X86FoldableSchedWrite sched, X86VectorVTInfo VTI, X86VectorVTInfo BcstVTI> : avx512_3Op_rm_imm8<Op, OpStr, OpNode, sched, VTI, VTI> { - let ExeDomain = VTI.ExeDomain in + let ExeDomain = VTI.ExeDomain, mayLoad = 1 in defm rmbi : AVX512_maskable<Op, MRMSrcMem, VTI, (outs VTI.RC:$dst), (ins VTI.RC:$src1, BcstVTI.ScalarMemOp:$src2, u8imm:$src3), OpStr, "$src3, ${src2}"#BcstVTI.BroadcastStr#", $src1", @@ -12660,6 +12728,7 @@ multiclass avx512_vp2intersect_modes<X86FoldableSchedWrite sched, X86VectorVTInf _.RC:$src1, (_.VT _.RC:$src2)))]>, EVEX, VVVV, T8, XD, Sched<[sched]>; + let mayLoad = 1 in { def rm : I<0x68, MRMSrcMem, (outs _.KRPC:$dst), (ins _.RC:$src1, _.MemOp:$src2), @@ -12679,6 +12748,7 @@ multiclass avx512_vp2intersect_modes<X86FoldableSchedWrite sched, X86VectorVTInf _.RC:$src1, (_.VT (_.BroadcastLdFrag addr:$src2))))]>, EVEX, VVVV, T8, XD, EVEX_B, EVEX_CD8<_.EltSize, CD8VF>, Sched<[sched.Folded, sched.ReadAfterFold]>; + } } multiclass avx512_vp2intersect<X86SchedWriteWidths sched, AVX512VLVectorVTInfo _> { @@ -12882,6 +12952,7 @@ let Predicates = [HasFP16] in { // Move word ( r/m16) to Packed word def VMOVW2SHrr : AVX512<0x6E, MRMSrcReg, (outs VR128X:$dst), (ins GR32:$src), "vmovw\t{$src, $dst|$dst, $src}", []>, T_MAP5, PD, EVEX, Sched<[WriteVecMoveFromGpr]>; +let mayLoad = 1 in def VMOVWrm : AVX512<0x6E, MRMSrcMem, (outs VR128X:$dst), (ins i16mem:$src), "vmovw\t{$src, $dst|$dst, $src}", [(set VR128X:$dst, @@ -13607,6 +13678,7 @@ multiclass 
avx512_cfmbinop_sh_common<bits<8> opc, string OpcodeStr, SDNode OpNod (v4f32 (OpNode VR128X:$src1, VR128X:$src2)), IsCommutable, IsCommutable, IsCommutable, X86selects, "@earlyclobber $dst">, Sched<[WriteFMAX]>; + let mayLoad = 1 in defm rm : AVX512_maskable<opc, MRMSrcMem, f32x_info, (outs VR128X:$dst), (ins VR128X:$src1, ssmem:$src2), OpcodeStr, "$src2, $src1", "$src1, $src2", diff --git a/llvm/lib/TargetParser/TargetParser.cpp b/llvm/lib/TargetParser/TargetParser.cpp index b906690..62a3c88 100644 --- a/llvm/lib/TargetParser/TargetParser.cpp +++ b/llvm/lib/TargetParser/TargetParser.cpp @@ -444,7 +444,7 @@ static void fillAMDGCNFeatureMap(StringRef GPU, const Triple &T, Features["atomic-fmin-fmax-global-f32"] = true; Features["atomic-fmin-fmax-global-f64"] = true; Features["wavefrontsize32"] = true; - Features["cluster"] = true; + Features["clusters"] = true; break; case GK_GFX1201: case GK_GFX1200: diff --git a/llvm/lib/Transforms/IPO/FunctionAttrs.cpp b/llvm/lib/Transforms/IPO/FunctionAttrs.cpp index 8d9a0e7..50130da 100644 --- a/llvm/lib/Transforms/IPO/FunctionAttrs.cpp +++ b/llvm/lib/Transforms/IPO/FunctionAttrs.cpp @@ -2067,6 +2067,36 @@ static void inferAttrsFromFunctionBodies(const SCCNodeSet &SCCNodes, AI.run(SCCNodes, Changed); } +// Determines if the function 'F' can be marked 'norecurse'. +// It returns true if any call within 'F' could lead to a recursive +// call back to 'F', and false otherwise. +// The 'AnyFunctionsAddressIsTaken' parameter is a module-wide flag +// that is true if any function's address is taken, or if any function +// has external linkage. This is used to determine the safety of +// external/library calls. +static bool mayHaveRecursiveCallee(Function &F, + bool AnyFunctionsAddressIsTaken = true) { + for (const auto &BB : F) { + for (const auto &I : BB.instructionsWithoutDebug()) { + if (const auto *CB = dyn_cast<CallBase>(&I)) { + const Function *Callee = CB->getCalledFunction(); + if (!Callee || Callee == &F) + return true; + + if (Callee->doesNotRecurse()) + continue; + + if (!AnyFunctionsAddressIsTaken || + (Callee->isDeclaration() && + Callee->hasFnAttribute(Attribute::NoCallback))) + continue; + return true; + } + } + } + return false; +} + static void addNoRecurseAttrs(const SCCNodeSet &SCCNodes, SmallPtrSet<Function *, 8> &Changed) { // Try and identify functions that do not recurse. @@ -2078,28 +2108,14 @@ static void addNoRecurseAttrs(const SCCNodeSet &SCCNodes, Function *F = *SCCNodes.begin(); if (!F || !F->hasExactDefinition() || F->doesNotRecurse()) return; - - // If all of the calls in F are identifiable and are to norecurse functions, F - // is norecurse. This check also detects self-recursion as F is not currently - // marked norecurse, so any called from F to F will not be marked norecurse. - for (auto &BB : *F) - for (auto &I : BB.instructionsWithoutDebug()) - if (auto *CB = dyn_cast<CallBase>(&I)) { - Function *Callee = CB->getCalledFunction(); - if (!Callee || Callee == F || - (!Callee->doesNotRecurse() && - !(Callee->isDeclaration() && - Callee->hasFnAttribute(Attribute::NoCallback)))) - // Function calls a potentially recursive function. - return; - } - - // Every call was to a non-recursive function other than this function, and - // we have no indirect recursion as the SCC size is one. This function cannot - // recurse. 
- F->setDoesNotRecurse(); - ++NumNoRecurse; - Changed.insert(F); + if (!mayHaveRecursiveCallee(*F)) { + // Every call was to a non-recursive function other than this function, and + // we have no indirect recursion as the SCC size is one. This function + // cannot recurse. + F->setDoesNotRecurse(); + ++NumNoRecurse; + Changed.insert(F); + } } // Set the noreturn function attribute if possible. @@ -2429,3 +2445,62 @@ ReversePostOrderFunctionAttrsPass::run(Module &M, ModuleAnalysisManager &AM) { PA.preserve<LazyCallGraphAnalysis>(); return PA; } + +PreservedAnalyses NoRecurseLTOInferencePass::run(Module &M, + ModuleAnalysisManager &MAM) { + + // Check if any function in the whole program has its address taken or has + // potentially external linkage. + // We use this information when inferring norecurse attribute: If there is + // no function whose address is taken and all functions have internal + // linkage, there is no path for a callback to any user function. + bool AnyFunctionsAddressIsTaken = false; + for (Function &F : M) { + if (F.isDeclaration() || F.doesNotRecurse()) + continue; + if (!F.hasLocalLinkage() || F.hasAddressTaken()) { + AnyFunctionsAddressIsTaken = true; + break; + } + } + + // Run norecurse inference on all RefSCCs in the LazyCallGraph for this + // module. + bool Changed = false; + LazyCallGraph &CG = MAM.getResult<LazyCallGraphAnalysis>(M); + CG.buildRefSCCs(); + + for (LazyCallGraph::RefSCC &RC : CG.postorder_ref_sccs()) { + // Skip any RefSCC that is part of a call cycle. A RefSCC containing more + // than one SCC indicates a recursive relationship involving indirect calls. + if (RC.size() > 1) + continue; + + // RefSCC contains a single-SCC. SCC size > 1 indicates mutually recursive + // functions. Ex: foo1 -> foo2 -> foo3 -> foo1. + LazyCallGraph::SCC &S = *RC.begin(); + if (S.size() > 1) + continue; + + // Get the single function from this SCC. + Function &F = S.begin()->getFunction(); + if (!F.hasExactDefinition() || F.doesNotRecurse()) + continue; + + // If the analysis confirms that this function has no recursive calls + // (either direct, indirect, or through external linkages), + // we can safely apply the norecurse attribute. + if (!mayHaveRecursiveCallee(F, AnyFunctionsAddressIsTaken)) { + F.setDoesNotRecurse(); + ++NumNoRecurse; + Changed = true; + } + } + + PreservedAnalyses PA; + if (Changed) + PA.preserve<LazyCallGraphAnalysis>(); + else + PA = PreservedAnalyses::all(); + return PA; +} diff --git a/llvm/lib/Transforms/InstCombine/InstCombineSelect.cpp b/llvm/lib/Transforms/InstCombine/InstCombineSelect.cpp index 8f60e50..8c8fc69 100644 --- a/llvm/lib/Transforms/InstCombine/InstCombineSelect.cpp +++ b/llvm/lib/Transforms/InstCombine/InstCombineSelect.cpp @@ -3356,7 +3356,10 @@ Instruction *InstCombinerImpl::foldSelectOfBools(SelectInst &SI) { impliesPoisonOrCond(FalseVal, B, /*Expected=*/false)) { // (A || B) || C --> A || (B | C) return replaceInstUsesWith( - SI, Builder.CreateLogicalOr(A, Builder.CreateOr(B, FalseVal))); + SI, Builder.CreateLogicalOr(A, Builder.CreateOr(B, FalseVal), "", + ProfcheckDisableMetadataFixes + ? 
nullptr + : cast<SelectInst>(CondVal))); } // (A && B) || (C && B) --> (A || C) && B @@ -3398,7 +3401,10 @@ Instruction *InstCombinerImpl::foldSelectOfBools(SelectInst &SI) { impliesPoisonOrCond(TrueVal, B, /*Expected=*/true)) { // (A && B) && C --> A && (B & C) return replaceInstUsesWith( - SI, Builder.CreateLogicalAnd(A, Builder.CreateAnd(B, TrueVal))); + SI, Builder.CreateLogicalAnd(A, Builder.CreateAnd(B, TrueVal), "", + ProfcheckDisableMetadataFixes + ? nullptr + : cast<SelectInst>(CondVal))); } // (A || B) && (C || B) --> (A && C) || B diff --git a/llvm/lib/Transforms/Instrumentation/DataFlowSanitizer.cpp b/llvm/lib/Transforms/Instrumentation/DataFlowSanitizer.cpp index 480ff4a..5ba2167 100644 --- a/llvm/lib/Transforms/Instrumentation/DataFlowSanitizer.cpp +++ b/llvm/lib/Transforms/Instrumentation/DataFlowSanitizer.cpp @@ -261,6 +261,11 @@ static cl::opt<bool> ClIgnorePersonalityRoutine( "list, do not create a wrapper for it."), cl::Hidden, cl::init(false)); +static cl::opt<bool> ClAddGlobalNameSuffix( + "dfsan-add-global-name-suffix", + cl::desc("Whether to add .dfsan suffix to global names"), cl::Hidden, + cl::init(true)); + static StringRef getGlobalTypeString(const GlobalValue &G) { // Types of GlobalVariables are always pointer types. Type *GType = G.getValueType(); @@ -1256,6 +1261,9 @@ DataFlowSanitizer::WrapperKind DataFlowSanitizer::getWrapperKind(Function *F) { } void DataFlowSanitizer::addGlobalNameSuffix(GlobalValue *GV) { + if (!ClAddGlobalNameSuffix) + return; + std::string GVName = std::string(GV->getName()), Suffix = ".dfsan"; GV->setName(GVName + Suffix); @@ -1784,10 +1792,8 @@ bool DataFlowSanitizer::runImpl( } Value *DFSanFunction::getArgTLS(Type *T, unsigned ArgOffset, IRBuilder<> &IRB) { - Value *Base = IRB.CreatePointerCast(DFS.ArgTLS, DFS.IntptrTy); - if (ArgOffset) - Base = IRB.CreateAdd(Base, ConstantInt::get(DFS.IntptrTy, ArgOffset)); - return IRB.CreateIntToPtr(Base, PointerType::get(*DFS.Ctx, 0), "_dfsarg"); + return IRB.CreatePtrAdd(DFS.ArgTLS, ConstantInt::get(DFS.IntptrTy, ArgOffset), + "_dfsarg"); } Value *DFSanFunction::getRetvalTLS(Type *T, IRBuilder<> &IRB) { diff --git a/llvm/lib/Transforms/Scalar/DFAJumpThreading.cpp b/llvm/lib/Transforms/Scalar/DFAJumpThreading.cpp index e9a3e98..7968a5d 100644 --- a/llvm/lib/Transforms/Scalar/DFAJumpThreading.cpp +++ b/llvm/lib/Transforms/Scalar/DFAJumpThreading.cpp @@ -120,6 +120,12 @@ static cl::opt<unsigned> cl::desc("Maximum cost accepted for the transformation"), cl::Hidden, cl::init(50)); +static cl::opt<double> MaxClonedRate( + "dfa-max-cloned-rate", + cl::desc( + "Maximum cloned instructions rate accepted for the transformation"), + cl::Hidden, cl::init(7.5)); + namespace { class SelectInstToUnfold { @@ -828,6 +834,7 @@ private: /// also returns false if it is illegal to clone some required block. bool isLegalAndProfitableToTransform() { CodeMetrics Metrics; + uint64_t NumClonedInst = 0; SwitchInst *Switch = SwitchPaths->getSwitchInst(); // Don't thread switch without multiple successors. @@ -837,7 +844,6 @@ private: // Note that DuplicateBlockMap is not being used as intended here. It is // just being used to ensure (BB, State) pairs are only counted once. 
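// (Worked example for the new dfa-max-cloned-rate guard that follows, using hypothetical numbers: if the blocks visited across all threading paths hold NumOrigInst = 40 instructions in total and cloning them along every path would create NumClonedInst = 320 copies, the cloned rate is 320 / 40 = 8.0, which exceeds the default threshold of 7.5, so isLegalAndProfitableToTransform() refuses to thread the switch.)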
DuplicateBlockMap DuplicateMap; - for (ThreadingPath &TPath : SwitchPaths->getThreadingPaths()) { PathType PathBBs = TPath.getPath(); APInt NextState = TPath.getExitValue(); @@ -848,6 +854,7 @@ private: BasicBlock *VisitedBB = getClonedBB(BB, NextState, DuplicateMap); if (!VisitedBB) { Metrics.analyzeBasicBlock(BB, *TTI, EphValues); + NumClonedInst += BB->sizeWithoutDebug(); DuplicateMap[BB].push_back({BB, NextState}); } @@ -865,6 +872,7 @@ private: if (VisitedBB) continue; Metrics.analyzeBasicBlock(BB, *TTI, EphValues); + NumClonedInst += BB->sizeWithoutDebug(); DuplicateMap[BB].push_back({BB, NextState}); } @@ -901,6 +909,22 @@ private: } } + // Too many cloned instructions slow down later optimizations, especially + // SLPVectorizer. + // TODO: Thread the switch partially before reaching the threshold. + uint64_t NumOrigInst = 0; + for (auto *BB : DuplicateMap.keys()) + NumOrigInst += BB->sizeWithoutDebug(); + if (double(NumClonedInst) / double(NumOrigInst) > MaxClonedRate) { + LLVM_DEBUG(dbgs() << "DFA Jump Threading: Not jump threading, too many " + "instructions will be cloned\n"); + ORE->emit([&]() { + return OptimizationRemarkMissed(DEBUG_TYPE, "NotProfitable", Switch) + << "Too many instructions will be cloned."; + }); + return false; + } + InstructionCost DuplicationCost = 0; unsigned JumpTableSize = 0; @@ -969,14 +993,14 @@ private: SmallPtrSet<BasicBlock *, 16> BlocksToClean; BlocksToClean.insert_range(successors(SwitchBlock)); - for (ThreadingPath &TPath : SwitchPaths->getThreadingPaths()) { + for (const ThreadingPath &TPath : SwitchPaths->getThreadingPaths()) { createExitPath(NewDefs, TPath, DuplicateMap, BlocksToClean, &DTU); NumPaths++; } // After all paths are cloned, now update the last successor of the cloned // path so it skips over the switch statement - for (ThreadingPath &TPath : SwitchPaths->getThreadingPaths()) + for (const ThreadingPath &TPath : SwitchPaths->getThreadingPaths()) updateLastSuccessor(TPath, DuplicateMap, &DTU); // For each instruction that was cloned and used outside, update its uses @@ -993,7 +1017,7 @@ private: /// To remember the correct destination, we have to duplicate blocks /// corresponding to each state. Also update the terminating instruction of /// the predecessors, and phis in the successor blocks. - void createExitPath(DefMap &NewDefs, ThreadingPath &Path, + void createExitPath(DefMap &NewDefs, const ThreadingPath &Path, DuplicateBlockMap &DuplicateMap, SmallPtrSet<BasicBlock *, 16> &BlocksToClean, DomTreeUpdater *DTU) { @@ -1239,7 +1263,7 @@ private: /// /// Note that this is an optional step and would have been done in later /// optimizations, but it makes the CFG significantly easier to work with. - void updateLastSuccessor(ThreadingPath &TPath, + void updateLastSuccessor(const ThreadingPath &TPath, DuplicateBlockMap &DuplicateMap, DomTreeUpdater *DTU) { APInt NextState = TPath.getExitValue(); diff --git a/llvm/lib/Transforms/Utils/SCCPSolver.cpp b/llvm/lib/Transforms/Utils/SCCPSolver.cpp index af216cd..9693ae6 100644 --- a/llvm/lib/Transforms/Utils/SCCPSolver.cpp +++ b/llvm/lib/Transforms/Utils/SCCPSolver.cpp @@ -317,24 +317,29 @@ static Value *simplifyInstruction(SCCPSolver &Solver, // Early exit if we know nothing about X. if (LRange.isFullSet()) return nullptr; - // We are allowed to refine the comparison to either true or false for out - // of range inputs. Here we refine the comparison to true, i.e. we relax - // the range check. 
- auto NewCR = CR->exactUnionWith(LRange.inverse()); - // TODO: Check if we can narrow the range check to an equality test. - // E.g, for X in [0, 4), X - 3 u< 2 -> X == 3 - if (!NewCR) + auto ConvertCRToICmp = + [&](const std::optional<ConstantRange> &NewCR) -> Value * { + ICmpInst::Predicate Pred; + APInt RHS; + // Check if we can represent NewCR as an icmp predicate. + if (NewCR && NewCR->getEquivalentICmp(Pred, RHS)) { + IRBuilder<NoFolder> Builder(&Inst); + Value *NewICmp = + Builder.CreateICmp(Pred, X, ConstantInt::get(X->getType(), RHS)); + InsertedValues.insert(NewICmp); + return NewICmp; + } return nullptr; - ICmpInst::Predicate Pred; - APInt RHS; - // Check if we can represent NewCR as an icmp predicate. - if (NewCR->getEquivalentICmp(Pred, RHS)) { - IRBuilder<NoFolder> Builder(&Inst); - Value *NewICmp = - Builder.CreateICmp(Pred, X, ConstantInt::get(X->getType(), RHS)); - InsertedValues.insert(NewICmp); - return NewICmp; - } + }; + // We are allowed to refine the comparison to either true or false for out + // of range inputs. + // Here we refine the comparison to false, and check if we can narrow the + // range check to a simpler test. + if (auto *V = ConvertCRToICmp(CR->exactIntersectWith(LRange))) + return V; + // Here we refine the comparison to true, i.e. we relax the range check. + if (auto *V = ConvertCRToICmp(CR->exactUnionWith(LRange.inverse()))) + return V; } } diff --git a/llvm/lib/Transforms/Utils/SimplifyCFG.cpp b/llvm/lib/Transforms/Utils/SimplifyCFG.cpp index 148bfa8..b8cfe3a 100644 --- a/llvm/lib/Transforms/Utils/SimplifyCFG.cpp +++ b/llvm/lib/Transforms/Utils/SimplifyCFG.cpp @@ -4895,9 +4895,8 @@ bool SimplifyCFGOpt::simplifyTerminatorOnSelect(Instruction *OldTerm, // We found both of the successors we were looking for. // Create a conditional branch sharing the condition of the select. BranchInst *NewBI = Builder.CreateCondBr(Cond, TrueBB, FalseBB); - if (TrueWeight != FalseWeight) - setBranchWeights(*NewBI, {TrueWeight, FalseWeight}, - /*IsExpected=*/false, /*ElideAllZero=*/true); + setBranchWeights(*NewBI, {TrueWeight, FalseWeight}, + /*IsExpected=*/false, /*ElideAllZero=*/true); } } else if (KeepEdge1 && (KeepEdge2 || TrueBB == FalseBB)) { // Neither of the selected blocks were successors, so this @@ -4982,9 +4981,15 @@ bool SimplifyCFGOpt::simplifyIndirectBrOnSelect(IndirectBrInst *IBI, BasicBlock *TrueBB = TBA->getBasicBlock(); BasicBlock *FalseBB = FBA->getBasicBlock(); + // The select's profile becomes the profile of the conditional branch that + // replaces the indirect branch. + SmallVector<uint32_t> SelectBranchWeights(2); + if (!ProfcheckDisableMetadataFixes) + extractBranchWeights(*SI, SelectBranchWeights); // Perform the actual simplification. 
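As a side note on the SCCPSolver hunk above: trying the exact intersection first is what realizes the old TODO of narrowing a range check into an equality test. Below is a minimal standalone sketch of the underlying ConstantRange arithmetic, assuming only the public llvm/IR/ConstantRange.h API; the driver function and the concrete constants are illustrative, not part of the patch:

    #include "llvm/ADT/APInt.h"
    #include "llvm/IR/ConstantRange.h"
    #include "llvm/IR/InstrTypes.h"
    #include <optional>
    using namespace llvm;

    // For X known to be in [0, 4), the check (X - 3) u< 2 holds iff X lies in
    // [3, 5). Intersecting with the known range of X leaves the single value 3,
    // which getEquivalentICmp() expresses as "X == 3".
    bool narrowRangeCheckExample() {
      ConstantRange LRange(APInt(8, 0), APInt(8, 4)); // known range of X
      // Range of X for which (X - 3) u< 2 is true: [0, 2) shifted up by 3.
      ConstantRange CR =
          ConstantRange(APInt(8, 0), APInt(8, 2)).add(ConstantRange(APInt(8, 3)));
      std::optional<ConstantRange> NewCR = CR.exactIntersectWith(LRange);
      CmpInst::Predicate Pred;
      APInt RHS;
      if (NewCR && NewCR->getEquivalentICmp(Pred, RHS))
        return Pred == CmpInst::ICMP_EQ && RHS == 3; // i.e. "icmp eq i8 %X, 3"
      return false;
    }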
- return simplifyTerminatorOnSelect(IBI, SI->getCondition(), TrueBB, FalseBB, 0, - 0); + return simplifyTerminatorOnSelect(IBI, SI->getCondition(), TrueBB, FalseBB, + SelectBranchWeights[0], + SelectBranchWeights[1]); } /// This is called when we find an icmp instruction @@ -7952,19 +7957,27 @@ bool SimplifyCFGOpt::simplifySwitch(SwitchInst *SI, IRBuilder<> &Builder) { bool SimplifyCFGOpt::simplifyIndirectBr(IndirectBrInst *IBI) { BasicBlock *BB = IBI->getParent(); bool Changed = false; + SmallVector<uint32_t> BranchWeights; + const bool HasBranchWeights = !ProfcheckDisableMetadataFixes && + extractBranchWeights(*IBI, BranchWeights); + + DenseMap<const BasicBlock *, uint64_t> TargetWeight; + if (HasBranchWeights) + for (size_t I = 0, E = IBI->getNumDestinations(); I < E; ++I) + TargetWeight[IBI->getDestination(I)] += BranchWeights[I]; // Eliminate redundant destinations. SmallPtrSet<Value *, 8> Succs; SmallSetVector<BasicBlock *, 8> RemovedSuccs; - for (unsigned i = 0, e = IBI->getNumDestinations(); i != e; ++i) { - BasicBlock *Dest = IBI->getDestination(i); + for (unsigned I = 0, E = IBI->getNumDestinations(); I != E; ++I) { + BasicBlock *Dest = IBI->getDestination(I); if (!Dest->hasAddressTaken() || !Succs.insert(Dest).second) { if (!Dest->hasAddressTaken()) RemovedSuccs.insert(Dest); Dest->removePredecessor(BB); - IBI->removeDestination(i); - --i; - --e; + IBI->removeDestination(I); + --I; + --E; Changed = true; } } @@ -7990,7 +8003,12 @@ bool SimplifyCFGOpt::simplifyIndirectBr(IndirectBrInst *IBI) { eraseTerminatorAndDCECond(IBI); return true; } - + if (HasBranchWeights) { + SmallVector<uint64_t> NewBranchWeights(IBI->getNumDestinations()); + for (size_t I = 0, E = IBI->getNumDestinations(); I < E; ++I) + NewBranchWeights[I] += TargetWeight.find(IBI->getDestination(I))->second; + setFittedBranchWeights(*IBI, NewBranchWeights, /*IsExpected=*/false); + } if (SelectInst *SI = dyn_cast<SelectInst>(IBI->getAddress())) { if (simplifyIndirectBrOnSelect(IBI, SI)) return requestResimplify(); diff --git a/llvm/lib/Transforms/Vectorize/LoopVectorize.cpp b/llvm/lib/Transforms/Vectorize/LoopVectorize.cpp index 56a3d6d..cee08ef 100644 --- a/llvm/lib/Transforms/Vectorize/LoopVectorize.cpp +++ b/llvm/lib/Transforms/Vectorize/LoopVectorize.cpp @@ -3903,7 +3903,8 @@ void LoopVectorizationPlanner::emitInvalidCostRemarks( if (VF.isScalar()) continue; - VPCostContext CostCtx(CM.TTI, *CM.TLI, *Plan, CM, CM.CostKind); + VPCostContext CostCtx(CM.TTI, *CM.TLI, *Plan, CM, CM.CostKind, + *CM.PSE.getSE()); precomputeCosts(*Plan, VF, CostCtx); auto Iter = vp_depth_first_deep(Plan->getVectorLoopRegion()->getEntry()); for (VPBasicBlock *VPBB : VPBlockUtils::blocksOnly<VPBasicBlock>(Iter)) { @@ -4160,7 +4161,8 @@ VectorizationFactor LoopVectorizationPlanner::selectVectorizationFactor() { // Add on other costs that are modelled in VPlan, but not in the legacy // cost model. 
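Stepping back to the simplifyIndirectBr hunk above: the new TargetWeight map conserves profile mass when duplicate indirectbr destinations are folded, so each surviving destination ends up with the sum of the weights of every edge that used to reach it. A small self-contained sketch of that bookkeeping, with hypothetical names (the real code keys a DenseMap on const BasicBlock *):

    #include <cstdint>
    #include <map>
    #include <string>
    #include <utility>
    #include <vector>

    // Sum the weights of all edges per destination, mirroring the TargetWeight
    // accumulation that runs before redundant destinations are removed.
    std::map<std::string, uint64_t> mergeIndirectBrWeights(
        const std::vector<std::pair<std::string, uint32_t>> &Edges) {
      std::map<std::string, uint64_t> TargetWeight;
      for (const auto &[Dest, Weight] : Edges)
        TargetWeight[Dest] += Weight;
      return TargetWeight;
    }

    // An indirectbr on destinations [A, B, A] with weights [3, 5, 2] keeps
    // [A, B] and re-attaches the merged weights {A: 5, B: 5}.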
- VPCostContext CostCtx(CM.TTI, *CM.TLI, *P, CM, CM.CostKind); + VPCostContext CostCtx(CM.TTI, *CM.TLI, *P, CM, CM.CostKind, + *CM.PSE.getSE()); VPRegionBlock *VectorRegion = P->getVectorLoopRegion(); assert(VectorRegion && "Expected to have a vector region!"); for (VPBasicBlock *VPBB : VPBlockUtils::blocksOnly<VPBasicBlock>( @@ -6852,7 +6854,7 @@ LoopVectorizationPlanner::precomputeCosts(VPlan &Plan, ElementCount VF, InstructionCost LoopVectorizationPlanner::cost(VPlan &Plan, ElementCount VF) const { - VPCostContext CostCtx(CM.TTI, *CM.TLI, Plan, CM, CM.CostKind); + VPCostContext CostCtx(CM.TTI, *CM.TLI, Plan, CM, CM.CostKind, *PSE.getSE()); InstructionCost Cost = precomputeCosts(Plan, VF, CostCtx); // Now compute and add the VPlan-based cost. @@ -7085,7 +7087,8 @@ VectorizationFactor LoopVectorizationPlanner::computeBestVF() { // simplifications not accounted for in the legacy cost model. If that's the // case, don't trigger the assertion, as the extra simplifications may cause a // different VF to be picked by the VPlan-based cost model. - VPCostContext CostCtx(CM.TTI, *CM.TLI, BestPlan, CM, CM.CostKind); + VPCostContext CostCtx(CM.TTI, *CM.TLI, BestPlan, CM, CM.CostKind, + *CM.PSE.getSE()); precomputeCosts(BestPlan, BestFactor.Width, CostCtx); // Verify that the VPlan-based and legacy cost models agree, except for VPlans // with early exits and plans with additional VPlan simplifications. The @@ -8201,211 +8204,6 @@ void LoopVectorizationPlanner::buildVPlansWithVPRecipes(ElementCount MinVF, } } -/// Create and return a ResumePhi for \p WideIV, unless it is truncated. If the -/// induction recipe is not canonical, creates a VPDerivedIVRecipe to compute -/// the end value of the induction. -static VPInstruction *addResumePhiRecipeForInduction( - VPWidenInductionRecipe *WideIV, VPBuilder &VectorPHBuilder, - VPBuilder &ScalarPHBuilder, VPTypeAnalysis &TypeInfo, VPValue *VectorTC) { - auto *WideIntOrFp = dyn_cast<VPWidenIntOrFpInductionRecipe>(WideIV); - // Truncated wide inductions resume from the last lane of their vector value - // in the last vector iteration which is handled elsewhere. - if (WideIntOrFp && WideIntOrFp->getTruncInst()) - return nullptr; - - VPValue *Start = WideIV->getStartValue(); - VPValue *Step = WideIV->getStepValue(); - const InductionDescriptor &ID = WideIV->getInductionDescriptor(); - VPValue *EndValue = VectorTC; - if (!WideIntOrFp || !WideIntOrFp->isCanonical()) { - EndValue = VectorPHBuilder.createDerivedIV( - ID.getKind(), dyn_cast_or_null<FPMathOperator>(ID.getInductionBinOp()), - Start, VectorTC, Step); - } - - // EndValue is derived from the vector trip count (which has the same type as - // the widest induction) and thus may be wider than the induction here. - Type *ScalarTypeOfWideIV = TypeInfo.inferScalarType(WideIV); - if (ScalarTypeOfWideIV != TypeInfo.inferScalarType(EndValue)) { - EndValue = VectorPHBuilder.createScalarCast(Instruction::Trunc, EndValue, - ScalarTypeOfWideIV, - WideIV->getDebugLoc()); - } - - auto *ResumePhiRecipe = ScalarPHBuilder.createScalarPhi( - {EndValue, Start}, WideIV->getDebugLoc(), "bc.resume.val"); - return ResumePhiRecipe; -} - -/// Create resume phis in the scalar preheader for first-order recurrences, -/// reductions and inductions, and update the VPIRInstructions wrapping the -/// original phis in the scalar header. End values for inductions are added to -/// \p IVEndValues. 
-static void addScalarResumePhis(VPRecipeBuilder &Builder, VPlan &Plan, - DenseMap<VPValue *, VPValue *> &IVEndValues) { - VPTypeAnalysis TypeInfo(Plan); - auto *ScalarPH = Plan.getScalarPreheader(); - auto *MiddleVPBB = cast<VPBasicBlock>(ScalarPH->getPredecessors()[0]); - VPRegionBlock *VectorRegion = Plan.getVectorLoopRegion(); - VPBuilder VectorPHBuilder( - cast<VPBasicBlock>(VectorRegion->getSinglePredecessor())); - VPBuilder MiddleBuilder(MiddleVPBB, MiddleVPBB->getFirstNonPhi()); - VPBuilder ScalarPHBuilder(ScalarPH); - for (VPRecipeBase &ScalarPhiR : Plan.getScalarHeader()->phis()) { - auto *ScalarPhiIRI = cast<VPIRPhi>(&ScalarPhiR); - - // TODO: Extract final value from induction recipe initially, optimize to - // pre-computed end value together in optimizeInductionExitUsers. - auto *VectorPhiR = - cast<VPHeaderPHIRecipe>(Builder.getRecipe(&ScalarPhiIRI->getIRPhi())); - if (auto *WideIVR = dyn_cast<VPWidenInductionRecipe>(VectorPhiR)) { - if (VPInstruction *ResumePhi = addResumePhiRecipeForInduction( - WideIVR, VectorPHBuilder, ScalarPHBuilder, TypeInfo, - &Plan.getVectorTripCount())) { - assert(isa<VPPhi>(ResumePhi) && "Expected a phi"); - IVEndValues[WideIVR] = ResumePhi->getOperand(0); - ScalarPhiIRI->addOperand(ResumePhi); - continue; - } - // TODO: Also handle truncated inductions here. Computing end-values - // separately should be done as VPlan-to-VPlan optimization, after - // legalizing all resume values to use the last lane from the loop. - assert(cast<VPWidenIntOrFpInductionRecipe>(VectorPhiR)->getTruncInst() && - "should only skip truncated wide inductions"); - continue; - } - - // The backedge value provides the value to resume coming out of a loop, - // which for FORs is a vector whose last element needs to be extracted. The - // start value provides the value if the loop is bypassed. - bool IsFOR = isa<VPFirstOrderRecurrencePHIRecipe>(VectorPhiR); - auto *ResumeFromVectorLoop = VectorPhiR->getBackedgeValue(); - assert(VectorRegion->getSingleSuccessor() == Plan.getMiddleBlock() && - "Cannot handle loops with uncountable early exits"); - if (IsFOR) - ResumeFromVectorLoop = MiddleBuilder.createNaryOp( - VPInstruction::ExtractLastElement, {ResumeFromVectorLoop}, {}, - "vector.recur.extract"); - StringRef Name = IsFOR ? "scalar.recur.init" : "bc.merge.rdx"; - auto *ResumePhiR = ScalarPHBuilder.createScalarPhi( - {ResumeFromVectorLoop, VectorPhiR->getStartValue()}, {}, Name); - ScalarPhiIRI->addOperand(ResumePhiR); - } -} - -/// Handle users in the exit block for first order reductions in the original -/// exit block. The penultimate value of recurrences is fed to their LCSSA phi -/// users in the original exit block using the VPIRInstruction wrapping to the -/// LCSSA phi. 
-static void addExitUsersForFirstOrderRecurrences(VPlan &Plan, VFRange &Range) { - VPRegionBlock *VectorRegion = Plan.getVectorLoopRegion(); - auto *ScalarPHVPBB = Plan.getScalarPreheader(); - auto *MiddleVPBB = Plan.getMiddleBlock(); - VPBuilder ScalarPHBuilder(ScalarPHVPBB); - VPBuilder MiddleBuilder(MiddleVPBB, MiddleVPBB->getFirstNonPhi()); - - auto IsScalableOne = [](ElementCount VF) -> bool { - return VF == ElementCount::getScalable(1); - }; - - for (auto &HeaderPhi : VectorRegion->getEntryBasicBlock()->phis()) { - auto *FOR = dyn_cast<VPFirstOrderRecurrencePHIRecipe>(&HeaderPhi); - if (!FOR) - continue; - - assert(VectorRegion->getSingleSuccessor() == Plan.getMiddleBlock() && - "Cannot handle loops with uncountable early exits"); - - // This is the second phase of vectorizing first-order recurrences, creating - // extract for users outside the loop. An overview of the transformation is - // described below. Suppose we have the following loop with some use after - // the loop of the last a[i-1], - // - // for (int i = 0; i < n; ++i) { - // t = a[i - 1]; - // b[i] = a[i] - t; - // } - // use t; - // - // There is a first-order recurrence on "a". For this loop, the shorthand - // scalar IR looks like: - // - // scalar.ph: - // s.init = a[-1] - // br scalar.body - // - // scalar.body: - // i = phi [0, scalar.ph], [i+1, scalar.body] - // s1 = phi [s.init, scalar.ph], [s2, scalar.body] - // s2 = a[i] - // b[i] = s2 - s1 - // br cond, scalar.body, exit.block - // - // exit.block: - // use = lcssa.phi [s1, scalar.body] - // - // In this example, s1 is a recurrence because it's value depends on the - // previous iteration. In the first phase of vectorization, we created a - // VPFirstOrderRecurrencePHIRecipe v1 for s1. Now we create the extracts - // for users in the scalar preheader and exit block. - // - // vector.ph: - // v_init = vector(..., ..., ..., a[-1]) - // br vector.body - // - // vector.body - // i = phi [0, vector.ph], [i+4, vector.body] - // v1 = phi [v_init, vector.ph], [v2, vector.body] - // v2 = a[i, i+1, i+2, i+3] - // b[i] = v2 - v1 - // // Next, third phase will introduce v1' = splice(v1(3), v2(0, 1, 2)) - // b[i, i+1, i+2, i+3] = v2 - v1 - // br cond, vector.body, middle.block - // - // middle.block: - // vector.recur.extract.for.phi = v2(2) - // vector.recur.extract = v2(3) - // br cond, scalar.ph, exit.block - // - // scalar.ph: - // scalar.recur.init = phi [vector.recur.extract, middle.block], - // [s.init, otherwise] - // br scalar.body - // - // scalar.body: - // i = phi [0, scalar.ph], [i+1, scalar.body] - // s1 = phi [scalar.recur.init, scalar.ph], [s2, scalar.body] - // s2 = a[i] - // b[i] = s2 - s1 - // br cond, scalar.body, exit.block - // - // exit.block: - // lo = lcssa.phi [s1, scalar.body], - // [vector.recur.extract.for.phi, middle.block] - // - // Now update VPIRInstructions modeling LCSSA phis in the exit block. - // Extract the penultimate value of the recurrence and use it as operand for - // the VPIRInstruction modeling the phi. - for (VPUser *U : FOR->users()) { - using namespace llvm::VPlanPatternMatch; - if (!match(U, m_ExtractLastElement(m_Specific(FOR)))) - continue; - // For VF vscale x 1, if vscale = 1, we are unable to extract the - // penultimate value of the recurrence. Instead we rely on the existing - // extract of the last element from the result of - // VPInstruction::FirstOrderRecurrenceSplice. - // TODO: Consider vscale_range info and UF. 
- if (LoopVectorizationPlanner::getDecisionAndClampRange(IsScalableOne, - Range)) - return; - VPValue *PenultimateElement = MiddleBuilder.createNaryOp( - VPInstruction::ExtractPenultimateElement, {FOR->getBackedgeValue()}, - {}, "vector.recur.extract.for.phi"); - cast<VPInstruction>(U)->replaceAllUsesWith(PenultimateElement); - } - } -} - VPlanPtr LoopVectorizationPlanner::tryToBuildVPlanWithVPRecipes( VPlanPtr Plan, VFRange &Range, LoopVersioning *LVer) { @@ -8598,9 +8396,11 @@ VPlanPtr LoopVectorizationPlanner::tryToBuildVPlanWithVPRecipes( R->setOperand(1, WideIV->getStepValue()); } - addExitUsersForFirstOrderRecurrences(*Plan, Range); + // TODO: We can't call runPass on these transforms yet, due to verifier + // failures. + VPlanTransforms::addExitUsersForFirstOrderRecurrences(*Plan, Range); DenseMap<VPValue *, VPValue *> IVEndValues; - addScalarResumePhis(RecipeBuilder, *Plan, IVEndValues); + VPlanTransforms::addScalarResumePhis(*Plan, RecipeBuilder, IVEndValues); // --------------------------------------------------------------------------- // Transform initial VPlan: Apply previously taken decisions, in order, to @@ -8621,7 +8421,8 @@ VPlanPtr LoopVectorizationPlanner::tryToBuildVPlanWithVPRecipes( // TODO: Enable following transform when the EVL-version of extended-reduction // and mulacc-reduction are implemented. if (!CM.foldTailWithEVL()) { - VPCostContext CostCtx(CM.TTI, *CM.TLI, *Plan, CM, CM.CostKind); + VPCostContext CostCtx(CM.TTI, *CM.TLI, *Plan, CM, CM.CostKind, + *CM.PSE.getSE()); VPlanTransforms::runPass(VPlanTransforms::convertToAbstractRecipes, *Plan, CostCtx, Range); } @@ -8711,7 +8512,9 @@ VPlanPtr LoopVectorizationPlanner::tryToBuildVPlan(VFRange &Range) { DenseMap<VPValue *, VPValue *> IVEndValues; // TODO: IVEndValues are not used yet in the native path, to optimize exit // values. - addScalarResumePhis(RecipeBuilder, *Plan, IVEndValues); + // TODO: We can't call runPass on the transform yet, due to verifier + // failures. + VPlanTransforms::addScalarResumePhis(*Plan, RecipeBuilder, IVEndValues); assert(verifyVPlanIsValid(*Plan) && "VPlan is invalid"); return Plan; @@ -10075,7 +9878,7 @@ bool LoopVectorizePass::processLoop(Loop *L) { bool ForceVectorization = Hints.getForce() == LoopVectorizeHints::FK_Enabled; VPCostContext CostCtx(CM.TTI, *CM.TLI, LVP.getPlanFor(VF.Width), CM, - CM.CostKind); + CM.CostKind, *CM.PSE.getSE()); if (!ForceVectorization && !isOutsideLoopWorkProfitable(Checks, VF, L, PSE, CostCtx, LVP.getPlanFor(VF.Width), SEL, diff --git a/llvm/lib/Transforms/Vectorize/SLPVectorizer.cpp b/llvm/lib/Transforms/Vectorize/SLPVectorizer.cpp index fedca65..91c3d42 100644 --- a/llvm/lib/Transforms/Vectorize/SLPVectorizer.cpp +++ b/llvm/lib/Transforms/Vectorize/SLPVectorizer.cpp @@ -10620,7 +10620,8 @@ class InstructionsCompatibilityAnalysis { /// Checks if the opcode is supported as the main opcode for copyable /// elements. 
static bool isSupportedOpcode(const unsigned Opcode) { - return Opcode == Instruction::Add || Opcode == Instruction::LShr; + return Opcode == Instruction::Add || Opcode == Instruction::LShr || + Opcode == Instruction::Shl; } /// Identifies the best candidate value, which represents main opcode @@ -10937,6 +10938,7 @@ public: switch (MainOpcode) { case Instruction::Add: case Instruction::LShr: + case Instruction::Shl: VectorCost = TTI.getArithmeticInstrCost(MainOpcode, VecTy, Kind); break; default: @@ -22006,6 +22008,8 @@ bool BoUpSLP::collectValuesToDemote( return all_of(E.Scalars, [&](Value *V) { if (isa<PoisonValue>(V)) return true; + if (E.isCopyableElement(V)) + return true; auto *I = cast<Instruction>(V); KnownBits AmtKnownBits = computeKnownBits(I->getOperand(1), *DL); return AmtKnownBits.getMaxValue().ult(BitWidth); diff --git a/llvm/lib/Transforms/Vectorize/VPlan.cpp b/llvm/lib/Transforms/Vectorize/VPlan.cpp index 07b191a..2555ebe 100644 --- a/llvm/lib/Transforms/Vectorize/VPlan.cpp +++ b/llvm/lib/Transforms/Vectorize/VPlan.cpp @@ -1772,7 +1772,8 @@ VPCostContext::getOperandInfo(VPValue *V) const { } InstructionCost VPCostContext::getScalarizationOverhead( - Type *ResultTy, ArrayRef<const VPValue *> Operands, ElementCount VF) { + Type *ResultTy, ArrayRef<const VPValue *> Operands, ElementCount VF, + bool AlwaysIncludeReplicatingR) { if (VF.isScalar()) return 0; @@ -1792,7 +1793,11 @@ InstructionCost VPCostContext::getScalarizationOverhead( SmallPtrSet<const VPValue *, 4> UniqueOperands; SmallVector<Type *> Tys; for (auto *Op : Operands) { - if (Op->isLiveIn() || isa<VPReplicateRecipe, VPPredInstPHIRecipe>(Op) || + if (Op->isLiveIn() || + (!AlwaysIncludeReplicatingR && + isa<VPReplicateRecipe, VPPredInstPHIRecipe>(Op)) || + (isa<VPReplicateRecipe>(Op) && + cast<VPReplicateRecipe>(Op)->getOpcode() == Instruction::Load) || !UniqueOperands.insert(Op).second) continue; Tys.push_back(toVectorizedTy(Types.inferScalarType(Op), VF)); diff --git a/llvm/lib/Transforms/Vectorize/VPlanHelpers.h b/llvm/lib/Transforms/Vectorize/VPlanHelpers.h index fc1a09e..1580a3b 100644 --- a/llvm/lib/Transforms/Vectorize/VPlanHelpers.h +++ b/llvm/lib/Transforms/Vectorize/VPlanHelpers.h @@ -349,12 +349,14 @@ struct VPCostContext { LoopVectorizationCostModel &CM; SmallPtrSet<Instruction *, 8> SkipCostComputation; TargetTransformInfo::TargetCostKind CostKind; + ScalarEvolution &SE; VPCostContext(const TargetTransformInfo &TTI, const TargetLibraryInfo &TLI, const VPlan &Plan, LoopVectorizationCostModel &CM, - TargetTransformInfo::TargetCostKind CostKind) + TargetTransformInfo::TargetCostKind CostKind, + ScalarEvolution &SE) : TTI(TTI), TLI(TLI), Types(Plan), LLVMCtx(Plan.getContext()), CM(CM), - CostKind(CostKind) {} + CostKind(CostKind), SE(SE) {} /// Return the cost for \p UI with \p VF using the legacy cost model as /// fallback until computing the cost of all recipes migrates to VPlan. @@ -374,10 +376,12 @@ struct VPCostContext { /// Estimate the overhead of scalarizing a recipe with result type \p ResultTy /// and \p Operands with \p VF. This is a convenience wrapper for the - /// type-based getScalarizationOverhead API. - InstructionCost getScalarizationOverhead(Type *ResultTy, - ArrayRef<const VPValue *> Operands, - ElementCount VF); + /// type-based getScalarizationOverhead API. If \p AlwaysIncludeReplicatingR + /// is true, always compute the cost of scalarizing replicating operands. 
+ InstructionCost + getScalarizationOverhead(Type *ResultTy, ArrayRef<const VPValue *> Operands, + ElementCount VF, + bool AlwaysIncludeReplicatingR = false); }; /// This class can be used to assign names to VPValues. For VPValues without diff --git a/llvm/lib/Transforms/Vectorize/VPlanRecipes.cpp b/llvm/lib/Transforms/Vectorize/VPlanRecipes.cpp index 67b9244..94e2628 100644 --- a/llvm/lib/Transforms/Vectorize/VPlanRecipes.cpp +++ b/llvm/lib/Transforms/Vectorize/VPlanRecipes.cpp @@ -40,6 +40,7 @@ #include <cassert> using namespace llvm; +using namespace llvm::VPlanPatternMatch; using VectorParts = SmallVector<Value *, 2>; @@ -303,7 +304,6 @@ VPPartialReductionRecipe::computeCost(ElementCount VF, VPRecipeBase *OpR = Op->getDefiningRecipe(); // If the partial reduction is predicated, a select will be operand 0 - using namespace llvm::VPlanPatternMatch; if (match(getOperand(1), m_Select(m_VPValue(), m_VPValue(Op), m_VPValue()))) { OpR = Op->getDefiningRecipe(); } @@ -1963,7 +1963,6 @@ InstructionCost VPWidenSelectRecipe::computeCost(ElementCount VF, Type *VectorTy = toVectorTy(Ctx.Types.inferScalarType(this), VF); VPValue *Op0, *Op1; - using namespace llvm::VPlanPatternMatch; if (!ScalarCond && ScalarTy->getScalarSizeInBits() == 1 && (match(this, m_LogicalAnd(m_VPValue(Op0), m_VPValue(Op1))) || match(this, m_LogicalOr(m_VPValue(Op0), m_VPValue(Op1))))) { @@ -2778,7 +2777,7 @@ VPExpressionRecipe::VPExpressionRecipe( // Recipes in the expression, except the last one, must only be used by // (other) recipes inside the expression. If there are other users, external // to the expression, use a clone of the recipe for external users. - for (VPSingleDefRecipe *R : ExpressionRecipes) { + for (VPSingleDefRecipe *R : reverse(ExpressionRecipes)) { if (R != ExpressionRecipes.back() && any_of(R->users(), [&ExpressionRecipesAsSetOfUsers](VPUser *U) { return !ExpressionRecipesAsSetOfUsers.contains(U); @@ -3111,6 +3110,62 @@ bool VPReplicateRecipe::shouldPack() const { }); } +/// Returns true if \p Ptr is a pointer computation for which the legacy cost +/// model computes a SCEV expression when computing the address cost. +static bool shouldUseAddressAccessSCEV(const VPValue *Ptr) { + auto *PtrR = Ptr->getDefiningRecipe(); + if (!PtrR || !((isa<VPReplicateRecipe>(PtrR) && + cast<VPReplicateRecipe>(PtrR)->getOpcode() == + Instruction::GetElementPtr) || + isa<VPWidenGEPRecipe>(PtrR) || + match(Ptr, m_GetElementPtr(m_VPValue(), m_VPValue())))) + return false; + + // We are looking for a GEP where all indices are either loop invariant or + // inductions. + for (VPValue *Opd : drop_begin(PtrR->operands())) { + if (!Opd->isDefinedOutsideLoopRegions() && + !isa<VPScalarIVStepsRecipe, VPWidenIntOrFpInductionRecipe>(Opd)) + return false; + } + + return true; +} + +/// Returns true if \p V is used as part of the address of another load or +/// store. 
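/// (Illustrative note, not part of the patch: for a scalarized chain like
///   %off = load i64, ptr %p
///   %gep = getelementptr i8, ptr %base, i64 %off
///   store i32 0, ptr %gep
/// the replicate recipe producing %off transitively feeds the address of the
/// store, so the walk below returns true for it; when the target does not
/// prefer vectorized addressing, the cost model then skips the scalarization
/// overhead for that load.)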
+static bool isUsedByLoadStoreAddress(const VPUser *V) { + SmallPtrSet<const VPUser *, 4> Seen; + SmallVector<const VPUser *> WorkList = {V}; + + while (!WorkList.empty()) { + auto *Cur = dyn_cast<VPSingleDefRecipe>(WorkList.pop_back_val()); + if (!Cur || !Seen.insert(Cur).second) + continue; + + for (VPUser *U : Cur->users()) { + if (auto *InterleaveR = dyn_cast<VPInterleaveBase>(U)) + if (InterleaveR->getAddr() == Cur) + return true; + if (auto *RepR = dyn_cast<VPReplicateRecipe>(U)) { + if (RepR->getOpcode() == Instruction::Load && + RepR->getOperand(0) == Cur) + return true; + if (RepR->getOpcode() == Instruction::Store && + RepR->getOperand(1) == Cur) + return true; + } + if (auto *MemR = dyn_cast<VPWidenMemoryRecipe>(U)) { + if (MemR->getAddr() == Cur && MemR->isConsecutive()) + return true; + } + } + + append_range(WorkList, cast<VPSingleDefRecipe>(Cur)->users()); + } + return false; +} + InstructionCost VPReplicateRecipe::computeCost(ElementCount VF, VPCostContext &Ctx) const { Instruction *UI = cast<Instruction>(getUnderlyingValue()); @@ -3218,21 +3273,60 @@ InstructionCost VPReplicateRecipe::computeCost(ElementCount VF, } case Instruction::Load: case Instruction::Store: { - if (isSingleScalar()) { - bool IsLoad = UI->getOpcode() == Instruction::Load; - Type *ValTy = Ctx.Types.inferScalarType(IsLoad ? this : getOperand(0)); - Type *ScalarPtrTy = Ctx.Types.inferScalarType(getOperand(IsLoad ? 0 : 1)); - const Align Alignment = getLoadStoreAlignment(UI); - unsigned AS = getLoadStoreAddressSpace(UI); - TTI::OperandValueInfo OpInfo = TTI::getOperandInfo(UI->getOperand(0)); - InstructionCost ScalarMemOpCost = Ctx.TTI.getMemoryOpCost( - UI->getOpcode(), ValTy, Alignment, AS, Ctx.CostKind, OpInfo, UI); - return ScalarMemOpCost + Ctx.TTI.getAddressComputationCost( - ScalarPtrTy, nullptr, nullptr, Ctx.CostKind); - } + if (VF.isScalable() && !isSingleScalar()) + return InstructionCost::getInvalid(); + // TODO: See getMemInstScalarizationCost for how to handle replicating and // predicated cases. - break; + const VPRegionBlock *ParentRegion = getParent()->getParent(); + if (ParentRegion && ParentRegion->isReplicator()) + break; + + bool IsLoad = UI->getOpcode() == Instruction::Load; + const VPValue *PtrOp = getOperand(!IsLoad); + // TODO: Handle cases where we need to pass a SCEV to + // getAddressComputationCost. + if (shouldUseAddressAccessSCEV(PtrOp)) + break; + + Type *ValTy = Ctx.Types.inferScalarType(IsLoad ? this : getOperand(0)); + Type *ScalarPtrTy = Ctx.Types.inferScalarType(PtrOp); + const Align Alignment = getLoadStoreAlignment(UI); + unsigned AS = getLoadStoreAddressSpace(UI); + TTI::OperandValueInfo OpInfo = TTI::getOperandInfo(UI->getOperand(0)); + InstructionCost ScalarMemOpCost = Ctx.TTI.getMemoryOpCost( + UI->getOpcode(), ValTy, Alignment, AS, Ctx.CostKind, OpInfo); + + Type *PtrTy = isSingleScalar() ? ScalarPtrTy : toVectorTy(ScalarPtrTy, VF); + bool PreferVectorizedAddressing = Ctx.TTI.prefersVectorizedAddressing(); + bool UsedByLoadStoreAddress = + !PreferVectorizedAddressing && isUsedByLoadStoreAddress(this); + InstructionCost ScalarCost = + ScalarMemOpCost + Ctx.TTI.getAddressComputationCost( + PtrTy, UsedByLoadStoreAddress ? nullptr : &Ctx.SE, + nullptr, Ctx.CostKind); + if (isSingleScalar()) + return ScalarCost; + + SmallVector<const VPValue *> OpsToScalarize; + Type *ResultTy = Type::getVoidTy(PtrTy->getContext()); + // Set ResultTy and OpsToScalarize, if scalarization is needed. 
Currently we + // don't assign scalarization overhead in general, if the target prefers + // vectorized addressing or the loaded value is used as part of an address + // of another load or store. + if (!UsedByLoadStoreAddress) { + bool EfficientVectorLoadStore = + Ctx.TTI.supportsEfficientVectorElementLoadStore(); + if (!(IsLoad && !PreferVectorizedAddressing) && + !(!IsLoad && EfficientVectorLoadStore)) + append_range(OpsToScalarize, operands()); + + if (!EfficientVectorLoadStore) + ResultTy = Ctx.Types.inferScalarType(this); + } + + return (ScalarCost * VF.getFixedValue()) + + Ctx.getScalarizationOverhead(ResultTy, OpsToScalarize, VF, true); } } diff --git a/llvm/lib/Transforms/Vectorize/VPlanTransforms.cpp b/llvm/lib/Transforms/Vectorize/VPlanTransforms.cpp index ca63bf3..ebf833e 100644 --- a/llvm/lib/Transforms/Vectorize/VPlanTransforms.cpp +++ b/llvm/lib/Transforms/Vectorize/VPlanTransforms.cpp @@ -4198,3 +4198,202 @@ void VPlanTransforms::addBranchWeightToMiddleTerminator( MDB.createBranchWeights({1, VectorStep - 1}, /*IsExpected=*/false); MiddleTerm->addMetadata(LLVMContext::MD_prof, BranchWeights); } + +/// Create and return a ResumePhi for \p WideIV, unless it is truncated. If the +/// induction recipe is not canonical, creates a VPDerivedIVRecipe to compute +/// the end value of the induction. +static VPInstruction *addResumePhiRecipeForInduction( + VPWidenInductionRecipe *WideIV, VPBuilder &VectorPHBuilder, + VPBuilder &ScalarPHBuilder, VPTypeAnalysis &TypeInfo, VPValue *VectorTC) { + auto *WideIntOrFp = dyn_cast<VPWidenIntOrFpInductionRecipe>(WideIV); + // Truncated wide inductions resume from the last lane of their vector value + // in the last vector iteration which is handled elsewhere. + if (WideIntOrFp && WideIntOrFp->getTruncInst()) + return nullptr; + + VPValue *Start = WideIV->getStartValue(); + VPValue *Step = WideIV->getStepValue(); + const InductionDescriptor &ID = WideIV->getInductionDescriptor(); + VPValue *EndValue = VectorTC; + if (!WideIntOrFp || !WideIntOrFp->isCanonical()) { + EndValue = VectorPHBuilder.createDerivedIV( + ID.getKind(), dyn_cast_or_null<FPMathOperator>(ID.getInductionBinOp()), + Start, VectorTC, Step); + } + + // EndValue is derived from the vector trip count (which has the same type as + // the widest induction) and thus may be wider than the induction here. 
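// (Illustrative, not from the patch: for an i8 wide induction in a loop whose
// widest induction, and therefore vector trip count, is i64, EndValue is
// computed as i64 and the cast below truncates it back to i8.)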
+ Type *ScalarTypeOfWideIV = TypeInfo.inferScalarType(WideIV); + if (ScalarTypeOfWideIV != TypeInfo.inferScalarType(EndValue)) { + EndValue = VectorPHBuilder.createScalarCast(Instruction::Trunc, EndValue, + ScalarTypeOfWideIV, + WideIV->getDebugLoc()); + } + + auto *ResumePhiRecipe = ScalarPHBuilder.createScalarPhi( + {EndValue, Start}, WideIV->getDebugLoc(), "bc.resume.val"); + return ResumePhiRecipe; +} + +void VPlanTransforms::addScalarResumePhis( + VPlan &Plan, VPRecipeBuilder &Builder, + DenseMap<VPValue *, VPValue *> &IVEndValues) { + VPTypeAnalysis TypeInfo(Plan); + auto *ScalarPH = Plan.getScalarPreheader(); + auto *MiddleVPBB = cast<VPBasicBlock>(ScalarPH->getPredecessors()[0]); + VPRegionBlock *VectorRegion = Plan.getVectorLoopRegion(); + VPBuilder VectorPHBuilder( + cast<VPBasicBlock>(VectorRegion->getSinglePredecessor())); + VPBuilder MiddleBuilder(MiddleVPBB, MiddleVPBB->getFirstNonPhi()); + VPBuilder ScalarPHBuilder(ScalarPH); + for (VPRecipeBase &ScalarPhiR : Plan.getScalarHeader()->phis()) { + auto *ScalarPhiIRI = cast<VPIRPhi>(&ScalarPhiR); + + // TODO: Extract final value from induction recipe initially, optimize to + // pre-computed end value together in optimizeInductionExitUsers. + auto *VectorPhiR = + cast<VPHeaderPHIRecipe>(Builder.getRecipe(&ScalarPhiIRI->getIRPhi())); + if (auto *WideIVR = dyn_cast<VPWidenInductionRecipe>(VectorPhiR)) { + if (VPInstruction *ResumePhi = addResumePhiRecipeForInduction( + WideIVR, VectorPHBuilder, ScalarPHBuilder, TypeInfo, + &Plan.getVectorTripCount())) { + assert(isa<VPPhi>(ResumePhi) && "Expected a phi"); + IVEndValues[WideIVR] = ResumePhi->getOperand(0); + ScalarPhiIRI->addOperand(ResumePhi); + continue; + } + // TODO: Also handle truncated inductions here. Computing end-values + // separately should be done as VPlan-to-VPlan optimization, after + // legalizing all resume values to use the last lane from the loop. + assert(cast<VPWidenIntOrFpInductionRecipe>(VectorPhiR)->getTruncInst() && + "should only skip truncated wide inductions"); + continue; + } + + // The backedge value provides the value to resume coming out of a loop, + // which for FORs is a vector whose last element needs to be extracted. The + // start value provides the value if the loop is bypassed. + bool IsFOR = isa<VPFirstOrderRecurrencePHIRecipe>(VectorPhiR); + auto *ResumeFromVectorLoop = VectorPhiR->getBackedgeValue(); + assert(VectorRegion->getSingleSuccessor() == Plan.getMiddleBlock() && + "Cannot handle loops with uncountable early exits"); + if (IsFOR) + ResumeFromVectorLoop = MiddleBuilder.createNaryOp( + VPInstruction::ExtractLastElement, {ResumeFromVectorLoop}, {}, + "vector.recur.extract"); + StringRef Name = IsFOR ? 
"scalar.recur.init" : "bc.merge.rdx"; + auto *ResumePhiR = ScalarPHBuilder.createScalarPhi( + {ResumeFromVectorLoop, VectorPhiR->getStartValue()}, {}, Name); + ScalarPhiIRI->addOperand(ResumePhiR); + } +} + +void VPlanTransforms::addExitUsersForFirstOrderRecurrences(VPlan &Plan, + VFRange &Range) { + VPRegionBlock *VectorRegion = Plan.getVectorLoopRegion(); + auto *ScalarPHVPBB = Plan.getScalarPreheader(); + auto *MiddleVPBB = Plan.getMiddleBlock(); + VPBuilder ScalarPHBuilder(ScalarPHVPBB); + VPBuilder MiddleBuilder(MiddleVPBB, MiddleVPBB->getFirstNonPhi()); + + auto IsScalableOne = [](ElementCount VF) -> bool { + return VF == ElementCount::getScalable(1); + }; + + for (auto &HeaderPhi : VectorRegion->getEntryBasicBlock()->phis()) { + auto *FOR = dyn_cast<VPFirstOrderRecurrencePHIRecipe>(&HeaderPhi); + if (!FOR) + continue; + + assert(VectorRegion->getSingleSuccessor() == Plan.getMiddleBlock() && + "Cannot handle loops with uncountable early exits"); + + // This is the second phase of vectorizing first-order recurrences, creating + // extract for users outside the loop. An overview of the transformation is + // described below. Suppose we have the following loop with some use after + // the loop of the last a[i-1], + // + // for (int i = 0; i < n; ++i) { + // t = a[i - 1]; + // b[i] = a[i] - t; + // } + // use t; + // + // There is a first-order recurrence on "a". For this loop, the shorthand + // scalar IR looks like: + // + // scalar.ph: + // s.init = a[-1] + // br scalar.body + // + // scalar.body: + // i = phi [0, scalar.ph], [i+1, scalar.body] + // s1 = phi [s.init, scalar.ph], [s2, scalar.body] + // s2 = a[i] + // b[i] = s2 - s1 + // br cond, scalar.body, exit.block + // + // exit.block: + // use = lcssa.phi [s1, scalar.body] + // + // In this example, s1 is a recurrence because it's value depends on the + // previous iteration. In the first phase of vectorization, we created a + // VPFirstOrderRecurrencePHIRecipe v1 for s1. Now we create the extracts + // for users in the scalar preheader and exit block. + // + // vector.ph: + // v_init = vector(..., ..., ..., a[-1]) + // br vector.body + // + // vector.body + // i = phi [0, vector.ph], [i+4, vector.body] + // v1 = phi [v_init, vector.ph], [v2, vector.body] + // v2 = a[i, i+1, i+2, i+3] + // b[i] = v2 - v1 + // // Next, third phase will introduce v1' = splice(v1(3), v2(0, 1, 2)) + // b[i, i+1, i+2, i+3] = v2 - v1 + // br cond, vector.body, middle.block + // + // middle.block: + // vector.recur.extract.for.phi = v2(2) + // vector.recur.extract = v2(3) + // br cond, scalar.ph, exit.block + // + // scalar.ph: + // scalar.recur.init = phi [vector.recur.extract, middle.block], + // [s.init, otherwise] + // br scalar.body + // + // scalar.body: + // i = phi [0, scalar.ph], [i+1, scalar.body] + // s1 = phi [scalar.recur.init, scalar.ph], [s2, scalar.body] + // s2 = a[i] + // b[i] = s2 - s1 + // br cond, scalar.body, exit.block + // + // exit.block: + // lo = lcssa.phi [s1, scalar.body], + // [vector.recur.extract.for.phi, middle.block] + // + // Now update VPIRInstructions modeling LCSSA phis in the exit block. + // Extract the penultimate value of the recurrence and use it as operand for + // the VPIRInstruction modeling the phi. + for (VPUser *U : FOR->users()) { + using namespace llvm::VPlanPatternMatch; + if (!match(U, m_ExtractLastElement(m_Specific(FOR)))) + continue; + // For VF vscale x 1, if vscale = 1, we are unable to extract the + // penultimate value of the recurrence. 
Instead we rely on the existing + // extract of the last element from the result of + // VPInstruction::FirstOrderRecurrenceSplice. + // TODO: Consider vscale_range info and UF. + if (LoopVectorizationPlanner::getDecisionAndClampRange(IsScalableOne, + Range)) + return; + VPValue *PenultimateElement = MiddleBuilder.createNaryOp( + VPInstruction::ExtractPenultimateElement, {FOR->getBackedgeValue()}, + {}, "vector.recur.extract.for.phi"); + cast<VPInstruction>(U)->replaceAllUsesWith(PenultimateElement); + } + } +} diff --git a/llvm/lib/Transforms/Vectorize/VPlanTransforms.h b/llvm/lib/Transforms/Vectorize/VPlanTransforms.h index 2f00e51..5a8a2bb 100644 --- a/llvm/lib/Transforms/Vectorize/VPlanTransforms.h +++ b/llvm/lib/Transforms/Vectorize/VPlanTransforms.h @@ -363,6 +363,19 @@ struct VPlanTransforms { static void addBranchWeightToMiddleTerminator(VPlan &Plan, ElementCount VF, std::optional<unsigned> VScaleForTuning); + + /// Create resume phis in the scalar preheader for first-order recurrences, + /// reductions and inductions, and update the VPIRInstructions wrapping the + /// original phis in the scalar header. End values for inductions are added to + /// \p IVEndValues. + static void addScalarResumePhis(VPlan &Plan, VPRecipeBuilder &Builder, + DenseMap<VPValue *, VPValue *> &IVEndValues); + + /// Handle users in the exit block for first order reductions in the original + /// exit block. The penultimate value of recurrences is fed to their LCSSA phi + /// users in the original exit block using the VPIRInstruction wrapping to the + /// LCSSA phi. + static void addExitUsersForFirstOrderRecurrences(VPlan &Plan, VFRange &Range); }; } // namespace llvm diff --git a/llvm/test/Bitcode/thinlto-alias-addrspacecast.ll b/llvm/test/Bitcode/thinlto-alias-addrspacecast.ll new file mode 100644 index 0000000..fe4f05e --- /dev/null +++ b/llvm/test/Bitcode/thinlto-alias-addrspacecast.ll @@ -0,0 +1,7 @@ +; RUN: opt -module-summary < %s | llvm-dis | FileCheck %s + +@__oclc_ABI_version = linkonce_odr hidden addrspace(4) constant i32 500, align 4 +@_ZL20__oclc_ABI_version__ = internal alias i32, addrspacecast (ptr addrspace(4) @__oclc_ABI_version to ptr) + +; CHECK: ^1 = gv: (name: "__oclc_ABI_version", summaries: (variable: (module: ^0, flags: {{.*}}))) +; CHECK: ^2 = gv: (name: "_ZL20__oclc_ABI_version__", summaries: (alias: (module: ^0, flags: {{.*}}, aliasee: ^1))) diff --git a/llvm/test/CodeGen/AArch64/arm64ec-exit-thunks.ll b/llvm/test/CodeGen/AArch64/arm64ec-exit-thunks.ll index f829227..dc35224 100644 --- a/llvm/test/CodeGen/AArch64/arm64ec-exit-thunks.ll +++ b/llvm/test/CodeGen/AArch64/arm64ec-exit-thunks.ll @@ -563,6 +563,41 @@ declare <8 x i16> @large_vector(<8 x i16> %0) nounwind; ; CHECK-NEXT: .seh_endfunclet ; CHECK-NEXT: .seh_endproc +declare void @"??@md5mangleaaaaaaaaaaaaaaaaaaaaaaa@"() +; CHECK-LABEL: .def "??$exit_thunk@md5mangleaaaaaaaaaaaaaaaaaaaaaaa@$$h@"; +; CHECK-NEXT: .scl 2; +; CHECK-NEXT: .type 32; +; CHECK-NEXT: .endef +; CHECK-NEXT: .section .wowthk$aa,"xr",discard,"??$exit_thunk@md5mangleaaaaaaaaaaaaaaaaaaaaaaa@$$h@" +; CHECK-NEXT: .globl "??$exit_thunk@md5mangleaaaaaaaaaaaaaaaaaaaaaaa@$$h@" // -- Begin function ??$exit_thunk@md5mangleaaaaaaaaaaaaaaaaaaaaaaa@$$h@ +; CHECK-NEXT: .p2align 2 +; CHECK-NEXT: "??$exit_thunk@md5mangleaaaaaaaaaaaaaaaaaaaaaaa@$$h@": // @"??$exit_thunk@md5mangleaaaaaaaaaaaaaaaaaaaaaaa@$$h@" +; CHECK-NEXT: .weak_anti_dep "??@md5mangleaaaaaaaaaaaaaaaaaaaaaaa@" +; CHECK-NEXT: "??@md5mangleaaaaaaaaaaaaaaaaaaaaaaa@" = "??@md5mangleaaaaaaaaaaaaaaaaaaaaaaa@$$h@" 
+; CHECK-NEXT: .weak_anti_dep "??@md5mangleaaaaaaaaaaaaaaaaaaaaaaa@$$h@" +; CHECK-NEXT: "??@md5mangleaaaaaaaaaaaaaaaaaaaaaaa@$$h@" = "??$exit_thunk@md5mangleaaaaaaaaaaaaaaaaaaaaaaa@$$h@" +; CHECK-NEXT: .seh_proc "??$exit_thunk@md5mangleaaaaaaaaaaaaaaaaaaaaaaa@$$h@" +; CHECK-NEXT: // %bb.0: +; CHECK-NEXT: str x30, [sp, #-16]! // 8-byte Folded Spill +; CHECK-NEXT: .seh_save_reg_x x30, 16 +; CHECK-NEXT: .seh_endprologue +; CHECK-NEXT: adrp x8, __os_arm64x_check_icall +; CHECK-NEXT: adrp x11, "??@md5mangleaaaaaaaaaaaaaaaaaaaaaaa@" +; CHECK-NEXT: add x11, x11, :lo12:"??@md5mangleaaaaaaaaaaaaaaaaaaaaaaa@" +; CHECK-NEXT: ldr x8, [x8, :lo12:__os_arm64x_check_icall] +; CHECK-NEXT: adrp x10, $iexit_thunk$cdecl$v$v +; CHECK-NEXT: add x10, x10, :lo12:$iexit_thunk$cdecl$v$v +; CHECK-NEXT: blr x8 +; CHECK-NEXT: .seh_startepilogue +; CHECK-NEXT: ldr x30, [sp], #16 // 8-byte Folded Reload +; CHECK-NEXT: .seh_save_reg_x x30, 16 +; CHECK-NEXT: .seh_endepilogue +; CHECK-NEXT: br x11 +; CHECK-NEXT: .seh_endfunclet +; CHECK-NEXT: .seh_endproc + + + ; CHECK-LABEL: .section .hybmp$x,"yi" ; CHECK-NEXT: .symidx "#func_caller" ; CHECK-NEXT: .symidx $ientry_thunk$cdecl$v$v @@ -633,6 +668,12 @@ declare <8 x i16> @large_vector(<8 x i16> %0) nounwind; ; CHECK-NEXT: .symidx "#large_vector$exit_thunk" ; CHECK-NEXT: .symidx large_vector ; CHECK-NEXT: .word 0 +; CHECK-NEXT: .symidx "??@md5mangleaaaaaaaaaaaaaaaaaaaaaaa@" +; CHECK-NEXT: .symidx $iexit_thunk$cdecl$v$v +; CHECK-NEXT: .word 4 +; CHECK-NEXT: .symidx "??$exit_thunk@md5mangleaaaaaaaaaaaaaaaaaaaaaaa@$$h@" +; CHECK-NEXT: .symidx "??@md5mangleaaaaaaaaaaaaaaaaaaaaaaa@" +; CHECK-NEXT: .word 0 define void @func_caller() nounwind { call void @no_op() @@ -649,5 +690,6 @@ define void @func_caller() nounwind { call %T2 @simple_struct(%T1 { i16 0 }, %T2 { i32 0, float 0.0 }, %T3 { i64 0, double 0.0 }, %T4 { i64 0, double 0.0, i8 0 }) call <4 x i8> @small_vector(<4 x i8> <i8 0, i8 0, i8 0, i8 0>) call <8 x i16> @large_vector(<8 x i16> <i16 0, i16 0, i16 0, i16 0, i16 0, i16 0, i16 0, i16 0>) + call void @"??@md5mangleaaaaaaaaaaaaaaaaaaaaaaa@"() ret void } diff --git a/llvm/test/CodeGen/AArch64/spill-fill-zpr-predicates.mir b/llvm/test/CodeGen/AArch64/spill-fill-zpr-predicates.mir deleted file mode 100644 index 0298168..0000000 --- a/llvm/test/CodeGen/AArch64/spill-fill-zpr-predicates.mir +++ /dev/null @@ -1,1009 +0,0 @@ -# NOTE: Assertions have been autogenerated by utils/update_mir_test_checks.py UTC_ARGS: --version 5 -# RUN: llc -mtriple=aarch64-linux-gnu -aarch64-enable-zpr-predicate-spills -run-pass=greedy %s -o - | FileCheck %s -# RUN: llc -mtriple=aarch64-linux-gnu -aarch64-enable-zpr-predicate-spills -start-before=greedy -stop-after=aarch64-expand-pseudo -verify-machineinstrs %s -o - | FileCheck %s --check-prefix=EXPAND ---- | - source_filename = "<stdin>" - target datalayout = "e-m:e-i8:8:32-i16:16:32-i64:64-i128:128-n32:64-S128" - target triple = "aarch64--linux-gnu" - - define aarch64_sve_vector_pcs void @zpr_predicate_spill() #0 { entry: unreachable } - - define aarch64_sve_vector_pcs void @zpr_predicate_spill__save_restore_nzcv() #0 { entry: unreachable } - - define aarch64_sve_vector_pcs void @zpr_predicate_spill__save_restore_nzcv__scavenge_csr_gpr() #0 { entry: unreachable } - - define aarch64_sve_vector_pcs void @zpr_predicate_spill__spill_zpr() #0 { entry: unreachable } - - define aarch64_sve_vector_pcs void @zpr_predicate_spill_above_p7() #0 { entry: unreachable } - - define aarch64_sve_vector_pcs void @zpr_predicate_spill_p4_saved() #0 { entry: unreachable 
} - - attributes #0 = {nounwind "target-features"="+sme,+sve" "aarch64_pstate_sm_compatible"} -... ---- -name: zpr_predicate_spill -tracksRegLiveness: true -stack: -liveins: - - { reg: '$p0' } -body: | - bb.0.entry: - liveins: $p0 - - ; CHECK-LABEL: name: zpr_predicate_spill - ; CHECK: stack: - ; CHECK: - { id: 0, name: '', type: spill-slot, offset: 0, size: 16, alignment: 16, - ; CHECK-NEXT: stack-id: scalable-vector, callee-saved-register: - ; CHECK: liveins: $p0 - ; CHECK-NEXT: {{ $}} - ; - ; CHECK-NEXT: SPILL_PPR_TO_ZPR_SLOT_PSEUDO $p0, %stack.0, 0 :: (store (s128) into %stack.0) - ; - ; CHECK-NEXT: $p0 = IMPLICIT_DEF - ; CHECK-NEXT: $p1 = IMPLICIT_DEF - ; CHECK-NEXT: $p2 = IMPLICIT_DEF - ; CHECK-NEXT: $p3 = IMPLICIT_DEF - ; CHECK-NEXT: $p4 = IMPLICIT_DEF - ; CHECK-NEXT: $p5 = IMPLICIT_DEF - ; CHECK-NEXT: $p6 = IMPLICIT_DEF - ; CHECK-NEXT: $p7 = IMPLICIT_DEF - ; CHECK-NEXT: $p8 = IMPLICIT_DEF - ; CHECK-NEXT: $p9 = IMPLICIT_DEF - ; CHECK-NEXT: $p10 = IMPLICIT_DEF - ; CHECK-NEXT: $p11 = IMPLICIT_DEF - ; CHECK-NEXT: $p12 = IMPLICIT_DEF - ; CHECK-NEXT: $p13 = IMPLICIT_DEF - ; CHECK-NEXT: $p14 = IMPLICIT_DEF - ; CHECK-NEXT: $p15 = IMPLICIT_DEF - ; - ; CHECK-NEXT: $p0 = FILL_PPR_FROM_ZPR_SLOT_PSEUDO %stack.0, 0 :: (load (s128) from %stack.0) - ; - ; CHECK-NEXT: RET_ReallyLR implicit $p0 - - ; EXPAND-LABEL: name: zpr_predicate_spill - ; EXPAND: liveins: $p0, $fp, $p15, $p14, $p13, $p12, $p11, $p10, $p9, $p8, $p7, $p6, $p5, $p4 - ; EXPAND-NEXT: {{ $}} - ; - ; EXPAND-NEXT: $sp = frame-setup SUBXri $sp, 1040, 0 - ; EXPAND-NEXT: frame-setup STRXui killed $fp, $sp, 128 :: (store (s64) into %stack.14) - ; EXPAND-NEXT: $sp = frame-setup ADDVL_XXI $sp, -12, implicit $vg - ; EXPAND-NEXT: $z0 = frame-setup CPY_ZPzI_B killed $p15, 1, 0 - ; EXPAND-NEXT: frame-setup STR_ZXI $z0, $sp, 0 :: (store (s128) into %stack.13) - ; EXPAND-NEXT: $z0 = frame-setup CPY_ZPzI_B killed $p14, 1, 0 - ; EXPAND-NEXT: frame-setup STR_ZXI $z0, $sp, 1 :: (store (s128) into %stack.12) - ; EXPAND-NEXT: $z0 = frame-setup CPY_ZPzI_B killed $p13, 1, 0 - ; EXPAND-NEXT: frame-setup STR_ZXI $z0, $sp, 2 :: (store (s128) into %stack.11) - ; EXPAND-NEXT: $z0 = frame-setup CPY_ZPzI_B killed $p12, 1, 0 - ; EXPAND-NEXT: frame-setup STR_ZXI $z0, $sp, 3 :: (store (s128) into %stack.10) - ; EXPAND-NEXT: $z0 = frame-setup CPY_ZPzI_B killed $p11, 1, 0 - ; EXPAND-NEXT: frame-setup STR_ZXI $z0, $sp, 4 :: (store (s128) into %stack.9) - ; EXPAND-NEXT: $z0 = frame-setup CPY_ZPzI_B killed $p10, 1, 0 - ; EXPAND-NEXT: frame-setup STR_ZXI $z0, $sp, 5 :: (store (s128) into %stack.8) - ; EXPAND-NEXT: $z0 = frame-setup CPY_ZPzI_B killed $p9, 1, 0 - ; EXPAND-NEXT: frame-setup STR_ZXI $z0, $sp, 6 :: (store (s128) into %stack.7) - ; EXPAND-NEXT: $z0 = frame-setup CPY_ZPzI_B killed $p8, 1, 0 - ; EXPAND-NEXT: frame-setup STR_ZXI $z0, $sp, 7 :: (store (s128) into %stack.6) - ; EXPAND-NEXT: $z0 = frame-setup CPY_ZPzI_B killed $p7, 1, 0 - ; EXPAND-NEXT: frame-setup STR_ZXI $z0, $sp, 8 :: (store (s128) into %stack.5) - ; EXPAND-NEXT: $z0 = frame-setup CPY_ZPzI_B killed $p6, 1, 0 - ; EXPAND-NEXT: frame-setup STR_ZXI $z0, $sp, 9 :: (store (s128) into %stack.4) - ; EXPAND-NEXT: $z0 = frame-setup CPY_ZPzI_B killed $p5, 1, 0 - ; EXPAND-NEXT: frame-setup STR_ZXI $z0, $sp, 10 :: (store (s128) into %stack.3) - ; EXPAND-NEXT: $z0 = frame-setup CPY_ZPzI_B killed $p4, 1, 0 - ; EXPAND-NEXT: frame-setup STR_ZXI $z0, $sp, 11 :: (store (s128) into %stack.2) - ; EXPAND-NEXT: $sp = frame-setup SUBXri $sp, 1024, 0 - ; EXPAND-NEXT: $sp = frame-setup ADDVL_XXI $sp, -1, implicit $vg - ; 
- ; EXPAND-NEXT: $z0 = CPY_ZPzI_B $p0, 1, 0 - ; EXPAND-NEXT: $x8 = ADDXri $sp, 1024, 0 - ; EXPAND-NEXT: STR_ZXI $z0, $x8, 0 :: (store (s128) into %stack.0) - ; - ; EXPAND-NEXT: $p0 = IMPLICIT_DEF - ; EXPAND-NEXT: $p1 = IMPLICIT_DEF - ; EXPAND-NEXT: $p2 = IMPLICIT_DEF - ; EXPAND-NEXT: $p3 = IMPLICIT_DEF - ; EXPAND-NEXT: $p4 = IMPLICIT_DEF - ; EXPAND-NEXT: $p5 = IMPLICIT_DEF - ; EXPAND-NEXT: $p6 = IMPLICIT_DEF - ; EXPAND-NEXT: $p7 = IMPLICIT_DEF - ; EXPAND-NEXT: $p8 = IMPLICIT_DEF - ; EXPAND-NEXT: $p9 = IMPLICIT_DEF - ; EXPAND-NEXT: $p10 = IMPLICIT_DEF - ; EXPAND-NEXT: $p11 = IMPLICIT_DEF - ; EXPAND-NEXT: $p12 = IMPLICIT_DEF - ; EXPAND-NEXT: $p13 = IMPLICIT_DEF - ; EXPAND-NEXT: $p14 = IMPLICIT_DEF - ; EXPAND-NEXT: $p15 = IMPLICIT_DEF - ; - ; EXPAND-NEXT: $z0 = LDR_ZXI killed $x8, 0 :: (load (s128) from %stack.0) - ; EXPAND-NEXT: $p1 = frame-destroy PTRUE_B 31, implicit $vg - ; EXPAND-NEXT: $p0 = CMPNE_PPzZI_B $p1, $z0, 0, implicit-def $nzcv, implicit-def $nzcv - ; - ; EXPAND-NEXT: $sp = frame-destroy ADDXri $sp, 1024, 0 - ; EXPAND-NEXT: $sp = frame-destroy ADDVL_XXI $sp, 1, implicit $vg - ; EXPAND-NEXT: $z0 = frame-destroy LDR_ZXI $sp, 0 :: (load (s128) from %stack.13) - ; EXPAND-NEXT: $p15 = frame-destroy CMPNE_PPzZI_B $p1, $z0, 0, implicit-def $nzcv, implicit-def $nzcv - ; EXPAND-NEXT: $z0 = frame-destroy LDR_ZXI $sp, 1 :: (load (s128) from %stack.12) - ; EXPAND-NEXT: $p14 = frame-destroy CMPNE_PPzZI_B $p1, $z0, 0, implicit-def $nzcv, implicit-def $nzcv - ; EXPAND-NEXT: $z0 = frame-destroy LDR_ZXI $sp, 2 :: (load (s128) from %stack.11) - ; EXPAND-NEXT: $p13 = frame-destroy CMPNE_PPzZI_B $p1, $z0, 0, implicit-def $nzcv, implicit-def $nzcv - ; EXPAND-NEXT: $z0 = frame-destroy LDR_ZXI $sp, 3 :: (load (s128) from %stack.10) - ; EXPAND-NEXT: $p12 = frame-destroy CMPNE_PPzZI_B $p1, $z0, 0, implicit-def $nzcv, implicit-def $nzcv - ; EXPAND-NEXT: $z0 = frame-destroy LDR_ZXI $sp, 4 :: (load (s128) from %stack.9) - ; EXPAND-NEXT: $p11 = frame-destroy CMPNE_PPzZI_B $p1, $z0, 0, implicit-def $nzcv, implicit-def $nzcv - ; EXPAND-NEXT: $z0 = frame-destroy LDR_ZXI $sp, 5 :: (load (s128) from %stack.8) - ; EXPAND-NEXT: $p10 = frame-destroy CMPNE_PPzZI_B $p1, $z0, 0, implicit-def $nzcv, implicit-def $nzcv - ; EXPAND-NEXT: $z0 = frame-destroy LDR_ZXI $sp, 6 :: (load (s128) from %stack.7) - ; EXPAND-NEXT: $p9 = frame-destroy CMPNE_PPzZI_B $p1, $z0, 0, implicit-def $nzcv, implicit-def $nzcv - ; EXPAND-NEXT: $z0 = frame-destroy LDR_ZXI $sp, 7 :: (load (s128) from %stack.6) - ; EXPAND-NEXT: $p8 = frame-destroy CMPNE_PPzZI_B $p1, $z0, 0, implicit-def $nzcv, implicit-def $nzcv - ; EXPAND-NEXT: $z0 = frame-destroy LDR_ZXI $sp, 8 :: (load (s128) from %stack.5) - ; EXPAND-NEXT: $p7 = frame-destroy CMPNE_PPzZI_B $p1, $z0, 0, implicit-def $nzcv, implicit-def $nzcv - ; EXPAND-NEXT: $z0 = frame-destroy LDR_ZXI $sp, 9 :: (load (s128) from %stack.4) - ; EXPAND-NEXT: $p6 = frame-destroy CMPNE_PPzZI_B $p1, $z0, 0, implicit-def $nzcv, implicit-def $nzcv - ; EXPAND-NEXT: $z0 = frame-destroy LDR_ZXI $sp, 10 :: (load (s128) from %stack.3) - ; EXPAND-NEXT: $p5 = frame-destroy CMPNE_PPzZI_B $p1, $z0, 0, implicit-def $nzcv, implicit-def $nzcv - ; EXPAND-NEXT: $z0 = frame-destroy LDR_ZXI $sp, 11 :: (load (s128) from %stack.2) - ; EXPAND-NEXT: $p4 = frame-destroy CMPNE_PPzZI_B $p1, $z0, 0, implicit-def $nzcv, implicit-def $nzcv - ; EXPAND-NEXT: $sp = frame-destroy ADDVL_XXI $sp, 12, implicit $vg - ; EXPAND-NEXT: $fp = frame-destroy LDRXui $sp, 128 :: (load (s64) from %stack.14) - ; EXPAND-NEXT: $sp = frame-destroy ADDXri $sp, 1040, 
0 - ; EXPAND-NEXT: RET undef $lr, implicit $p0 - %1:ppr = COPY $p0 - - $p0 = IMPLICIT_DEF - $p1 = IMPLICIT_DEF - $p2 = IMPLICIT_DEF - $p3 = IMPLICIT_DEF - $p4 = IMPLICIT_DEF - $p5 = IMPLICIT_DEF - $p6 = IMPLICIT_DEF - $p7 = IMPLICIT_DEF - $p8 = IMPLICIT_DEF - $p9 = IMPLICIT_DEF - $p10 = IMPLICIT_DEF - $p11 = IMPLICIT_DEF - $p12 = IMPLICIT_DEF - $p13 = IMPLICIT_DEF - $p14 = IMPLICIT_DEF - $p15 = IMPLICIT_DEF - - $p0 = COPY %1 - - RET_ReallyLR implicit $p0 -... ---- -name: zpr_predicate_spill__save_restore_nzcv -tracksRegLiveness: true -stack: -liveins: - - { reg: '$p0' } -body: | - bb.0.entry: - liveins: $p0 - - ; CHECK-LABEL: name: zpr_predicate_spill__save_restore_nzcv - ; CHECK: stack: - ; CHECK: - { id: 0, name: '', type: spill-slot, offset: 0, size: 16, alignment: 16, - ; CHECK-NEXT: stack-id: scalable-vector, callee-saved-register: - ; CHECK: liveins: $p0 - ; CHECK-NEXT: {{ $}} - ; - ; CHECK-NEXT: $nzcv = IMPLICIT_DEF - ; - ; CHECK-NEXT: SPILL_PPR_TO_ZPR_SLOT_PSEUDO $p0, %stack.0, 0 :: (store (s128) into %stack.0) - ; - ; CHECK-NEXT: $p0 = IMPLICIT_DEF - ; CHECK-NEXT: $p1 = IMPLICIT_DEF - ; CHECK-NEXT: $p2 = IMPLICIT_DEF - ; CHECK-NEXT: $p3 = IMPLICIT_DEF - ; CHECK-NEXT: $p4 = IMPLICIT_DEF - ; CHECK-NEXT: $p5 = IMPLICIT_DEF - ; CHECK-NEXT: $p6 = IMPLICIT_DEF - ; CHECK-NEXT: $p7 = IMPLICIT_DEF - ; CHECK-NEXT: $p8 = IMPLICIT_DEF - ; CHECK-NEXT: $p9 = IMPLICIT_DEF - ; CHECK-NEXT: $p10 = IMPLICIT_DEF - ; CHECK-NEXT: $p11 = IMPLICIT_DEF - ; CHECK-NEXT: $p12 = IMPLICIT_DEF - ; CHECK-NEXT: $p13 = IMPLICIT_DEF - ; CHECK-NEXT: $p14 = IMPLICIT_DEF - ; CHECK-NEXT: $p15 = IMPLICIT_DEF - ; - ; CHECK-NEXT: $p0 = FILL_PPR_FROM_ZPR_SLOT_PSEUDO %stack.0, 0 :: (load (s128) from %stack.0) - ; - ; CHECK-NEXT: FAKE_USE implicit $nzcv - ; - ; CHECK-NEXT: RET_ReallyLR implicit $p0 - - ; EXPAND-LABEL: name: zpr_predicate_spill__save_restore_nzcv - ; EXPAND: liveins: $p0, $fp, $p15, $p14, $p13, $p12, $p11, $p10, $p9, $p8, $p7, $p6, $p5, $p4 - ; EXPAND-NEXT: {{ $}} - ; - ; EXPAND-NEXT: $sp = frame-setup SUBXri $sp, 1040, 0 - ; EXPAND-NEXT: frame-setup STRXui killed $fp, $sp, 128 :: (store (s64) into %stack.14) - ; EXPAND-NEXT: $sp = frame-setup ADDVL_XXI $sp, -12, implicit $vg - ; EXPAND-NEXT: $z0 = frame-setup CPY_ZPzI_B killed $p15, 1, 0 - ; EXPAND-NEXT: frame-setup STR_ZXI $z0, $sp, 0 :: (store (s128) into %stack.13) - ; EXPAND-NEXT: $z0 = frame-setup CPY_ZPzI_B killed $p14, 1, 0 - ; EXPAND-NEXT: frame-setup STR_ZXI $z0, $sp, 1 :: (store (s128) into %stack.12) - ; EXPAND-NEXT: $z0 = frame-setup CPY_ZPzI_B killed $p13, 1, 0 - ; EXPAND-NEXT: frame-setup STR_ZXI $z0, $sp, 2 :: (store (s128) into %stack.11) - ; EXPAND-NEXT: $z0 = frame-setup CPY_ZPzI_B killed $p12, 1, 0 - ; EXPAND-NEXT: frame-setup STR_ZXI $z0, $sp, 3 :: (store (s128) into %stack.10) - ; EXPAND-NEXT: $z0 = frame-setup CPY_ZPzI_B killed $p11, 1, 0 - ; EXPAND-NEXT: frame-setup STR_ZXI $z0, $sp, 4 :: (store (s128) into %stack.9) - ; EXPAND-NEXT: $z0 = frame-setup CPY_ZPzI_B killed $p10, 1, 0 - ; EXPAND-NEXT: frame-setup STR_ZXI $z0, $sp, 5 :: (store (s128) into %stack.8) - ; EXPAND-NEXT: $z0 = frame-setup CPY_ZPzI_B killed $p9, 1, 0 - ; EXPAND-NEXT: frame-setup STR_ZXI $z0, $sp, 6 :: (store (s128) into %stack.7) - ; EXPAND-NEXT: $z0 = frame-setup CPY_ZPzI_B killed $p8, 1, 0 - ; EXPAND-NEXT: frame-setup STR_ZXI $z0, $sp, 7 :: (store (s128) into %stack.6) - ; EXPAND-NEXT: $z0 = frame-setup CPY_ZPzI_B killed $p7, 1, 0 - ; EXPAND-NEXT: frame-setup STR_ZXI $z0, $sp, 8 :: (store (s128) into %stack.5) - ; EXPAND-NEXT: $z0 = frame-setup CPY_ZPzI_B killed 
$p6, 1, 0 - ; EXPAND-NEXT: frame-setup STR_ZXI $z0, $sp, 9 :: (store (s128) into %stack.4) - ; EXPAND-NEXT: $z0 = frame-setup CPY_ZPzI_B killed $p5, 1, 0 - ; EXPAND-NEXT: frame-setup STR_ZXI $z0, $sp, 10 :: (store (s128) into %stack.3) - ; EXPAND-NEXT: $z0 = frame-setup CPY_ZPzI_B killed $p4, 1, 0 - ; EXPAND-NEXT: frame-setup STR_ZXI $z0, $sp, 11 :: (store (s128) into %stack.2) - ; EXPAND-NEXT: $sp = frame-setup SUBXri $sp, 1024, 0 - ; EXPAND-NEXT: $sp = frame-setup ADDVL_XXI $sp, -1, implicit $vg - ; - ; EXPAND-NEXT: $nzcv = IMPLICIT_DEF - ; - ; EXPAND-NEXT: $z0 = CPY_ZPzI_B $p0, 1, 0 - ; EXPAND-NEXT: $x8 = ADDXri $sp, 1024, 0 - ; EXPAND-NEXT: STR_ZXI $z0, $x8, 0 :: (store (s128) into %stack.0) - ; - ; EXPAND-NEXT: $p0 = IMPLICIT_DEF - ; EXPAND-NEXT: $p1 = IMPLICIT_DEF - ; EXPAND-NEXT: $p2 = IMPLICIT_DEF - ; EXPAND-NEXT: $p3 = IMPLICIT_DEF - ; EXPAND-NEXT: $p4 = IMPLICIT_DEF - ; EXPAND-NEXT: $p5 = IMPLICIT_DEF - ; EXPAND-NEXT: $p6 = IMPLICIT_DEF - ; EXPAND-NEXT: $p7 = IMPLICIT_DEF - ; EXPAND-NEXT: $p8 = IMPLICIT_DEF - ; EXPAND-NEXT: $p9 = IMPLICIT_DEF - ; EXPAND-NEXT: $p10 = IMPLICIT_DEF - ; EXPAND-NEXT: $p11 = IMPLICIT_DEF - ; EXPAND-NEXT: $p12 = IMPLICIT_DEF - ; EXPAND-NEXT: $p13 = IMPLICIT_DEF - ; EXPAND-NEXT: $p14 = IMPLICIT_DEF - ; EXPAND-NEXT: $p15 = IMPLICIT_DEF - ; - ; EXPAND-NEXT: $z0 = LDR_ZXI killed $x8, 0 :: (load (s128) from %stack.0) - ; EXPAND-NEXT: $fp = MRS 55824, implicit-def $nzcv, implicit $nzcv - ; EXPAND-NEXT: $p0 = PTRUE_B 31, implicit $vg - ; EXPAND-NEXT: $p0 = CMPNE_PPzZI_B $p0, $z0, 0, implicit-def $nzcv, implicit-def $nzcv - ; EXPAND-NEXT: MSR 55824, $fp, implicit-def $nzcv - ; - ; EXPAND-NEXT: FAKE_USE implicit $nzcv - ; - ; EXPAND-NEXT: $sp = frame-destroy ADDXri $sp, 1024, 0 - ; EXPAND-NEXT: $sp = frame-destroy ADDVL_XXI $sp, 1, implicit $vg - ; EXPAND-NEXT: $z0 = frame-destroy LDR_ZXI $sp, 0 :: (load (s128) from %stack.13) - ; EXPAND-NEXT: $p1 = frame-destroy PTRUE_B 31, implicit $vg - ; EXPAND-NEXT: $p15 = frame-destroy CMPNE_PPzZI_B $p1, $z0, 0, implicit-def $nzcv, implicit-def $nzcv - ; EXPAND-NEXT: $z0 = frame-destroy LDR_ZXI $sp, 1 :: (load (s128) from %stack.12) - ; EXPAND-NEXT: $p14 = frame-destroy CMPNE_PPzZI_B $p1, $z0, 0, implicit-def $nzcv, implicit-def $nzcv - ; EXPAND-NEXT: $z0 = frame-destroy LDR_ZXI $sp, 2 :: (load (s128) from %stack.11) - ; EXPAND-NEXT: $p13 = frame-destroy CMPNE_PPzZI_B $p1, $z0, 0, implicit-def $nzcv, implicit-def $nzcv - ; EXPAND-NEXT: $z0 = frame-destroy LDR_ZXI $sp, 3 :: (load (s128) from %stack.10) - ; EXPAND-NEXT: $p12 = frame-destroy CMPNE_PPzZI_B $p1, $z0, 0, implicit-def $nzcv, implicit-def $nzcv - ; EXPAND-NEXT: $z0 = frame-destroy LDR_ZXI $sp, 4 :: (load (s128) from %stack.9) - ; EXPAND-NEXT: $p11 = frame-destroy CMPNE_PPzZI_B $p1, $z0, 0, implicit-def $nzcv, implicit-def $nzcv - ; EXPAND-NEXT: $z0 = frame-destroy LDR_ZXI $sp, 5 :: (load (s128) from %stack.8) - ; EXPAND-NEXT: $p10 = frame-destroy CMPNE_PPzZI_B $p1, $z0, 0, implicit-def $nzcv, implicit-def $nzcv - ; EXPAND-NEXT: $z0 = frame-destroy LDR_ZXI $sp, 6 :: (load (s128) from %stack.7) - ; EXPAND-NEXT: $p9 = frame-destroy CMPNE_PPzZI_B $p1, $z0, 0, implicit-def $nzcv, implicit-def $nzcv - ; EXPAND-NEXT: $z0 = frame-destroy LDR_ZXI $sp, 7 :: (load (s128) from %stack.6) - ; EXPAND-NEXT: $p8 = frame-destroy CMPNE_PPzZI_B $p1, $z0, 0, implicit-def $nzcv, implicit-def $nzcv - ; EXPAND-NEXT: $z0 = frame-destroy LDR_ZXI $sp, 8 :: (load (s128) from %stack.5) - ; EXPAND-NEXT: $p7 = frame-destroy CMPNE_PPzZI_B $p1, $z0, 0, implicit-def $nzcv, implicit-def $nzcv - ; 
EXPAND-NEXT: $z0 = frame-destroy LDR_ZXI $sp, 9 :: (load (s128) from %stack.4) - ; EXPAND-NEXT: $p6 = frame-destroy CMPNE_PPzZI_B $p1, $z0, 0, implicit-def $nzcv, implicit-def $nzcv - ; EXPAND-NEXT: $z0 = frame-destroy LDR_ZXI $sp, 10 :: (load (s128) from %stack.3) - ; EXPAND-NEXT: $p5 = frame-destroy CMPNE_PPzZI_B $p1, $z0, 0, implicit-def $nzcv, implicit-def $nzcv - ; EXPAND-NEXT: $z0 = frame-destroy LDR_ZXI $sp, 11 :: (load (s128) from %stack.2) - ; EXPAND-NEXT: $p4 = frame-destroy CMPNE_PPzZI_B $p1, $z0, 0, implicit-def $nzcv, implicit-def $nzcv - ; EXPAND-NEXT: $sp = frame-destroy ADDVL_XXI $sp, 12, implicit $vg - ; EXPAND-NEXT: $fp = frame-destroy LDRXui $sp, 128 :: (load (s64) from %stack.14) - ; EXPAND-NEXT: $sp = frame-destroy ADDXri $sp, 1040, 0 - ; EXPAND-NEXT: RET undef $lr, implicit $p0 - $nzcv = IMPLICIT_DEF - - %1:ppr = COPY $p0 - - $p0 = IMPLICIT_DEF - $p1 = IMPLICIT_DEF - $p2 = IMPLICIT_DEF - $p3 = IMPLICIT_DEF - $p4 = IMPLICIT_DEF - $p5 = IMPLICIT_DEF - $p6 = IMPLICIT_DEF - $p7 = IMPLICIT_DEF - $p8 = IMPLICIT_DEF - $p9 = IMPLICIT_DEF - $p10 = IMPLICIT_DEF - $p11 = IMPLICIT_DEF - $p12 = IMPLICIT_DEF - $p13 = IMPLICIT_DEF - $p14 = IMPLICIT_DEF - $p15 = IMPLICIT_DEF - - $p0 = COPY %1 - - FAKE_USE implicit $nzcv - - RET_ReallyLR implicit $p0 -... ---- -name: zpr_predicate_spill__save_restore_nzcv__scavenge_csr_gpr -tracksRegLiveness: true -stack: -liveins: - - { reg: '$p0' } - - { reg: '$x0' } - - { reg: '$x1' } - - { reg: '$x2' } - - { reg: '$x3' } - - { reg: '$x4' } - - { reg: '$x5' } - - { reg: '$x6' } - - { reg: '$x7' } -body: | - bb.0.entry: - liveins: $p0, $x0, $x1, $x2, $x3, $x4, $x5, $x6, $x7 - - ; CHECK-LABEL: name: zpr_predicate_spill__save_restore_nzcv__scavenge_csr_gpr - ; CHECK: stack: - ; CHECK: - { id: 0, name: '', type: spill-slot, offset: 0, size: 16, alignment: 16, - ; CHECK-NEXT: stack-id: scalable-vector, callee-saved-register: - ; CHECK: liveins: $p0, $x0, $x1, $x2, $x3, $x4, $x5, $x6, $x7 - ; CHECK-NEXT: {{ $}} - ; - ; CHECK-NEXT: $nzcv = IMPLICIT_DEF - ; - ; CHECK-NEXT: $x8 = IMPLICIT_DEF - ; CHECK-NEXT: $x9 = IMPLICIT_DEF - ; CHECK-NEXT: $x10 = IMPLICIT_DEF - ; CHECK-NEXT: $x11 = IMPLICIT_DEF - ; CHECK-NEXT: $x12 = IMPLICIT_DEF - ; CHECK-NEXT: $x13 = IMPLICIT_DEF - ; CHECK-NEXT: $x14 = IMPLICIT_DEF - ; CHECK-NEXT: $x15 = IMPLICIT_DEF - ; CHECK-NEXT: $x16 = IMPLICIT_DEF - ; CHECK-NEXT: $x17 = IMPLICIT_DEF - ; CHECK-NEXT: $x18 = IMPLICIT_DEF - ; - ; CHECK-NEXT: SPILL_PPR_TO_ZPR_SLOT_PSEUDO $p0, %stack.0, 0 :: (store (s128) into %stack.0) - ; - ; CHECK-NEXT: $p0 = IMPLICIT_DEF - ; CHECK-NEXT: $p1 = IMPLICIT_DEF - ; CHECK-NEXT: $p2 = IMPLICIT_DEF - ; CHECK-NEXT: $p3 = IMPLICIT_DEF - ; CHECK-NEXT: $p4 = IMPLICIT_DEF - ; CHECK-NEXT: $p5 = IMPLICIT_DEF - ; CHECK-NEXT: $p6 = IMPLICIT_DEF - ; CHECK-NEXT: $p7 = IMPLICIT_DEF - ; CHECK-NEXT: $p8 = IMPLICIT_DEF - ; CHECK-NEXT: $p9 = IMPLICIT_DEF - ; CHECK-NEXT: $p10 = IMPLICIT_DEF - ; CHECK-NEXT: $p11 = IMPLICIT_DEF - ; CHECK-NEXT: $p12 = IMPLICIT_DEF - ; CHECK-NEXT: $p13 = IMPLICIT_DEF - ; CHECK-NEXT: $p14 = IMPLICIT_DEF - ; CHECK-NEXT: $p15 = IMPLICIT_DEF - ; - ; CHECK-NEXT: $p0 = FILL_PPR_FROM_ZPR_SLOT_PSEUDO %stack.0, 0 :: (load (s128) from %stack.0) - ; - ; CHECK-NEXT: FAKE_USE implicit $nzcv, implicit $x8, implicit $x9, implicit $x10, implicit $x11, implicit $x12, implicit $x13, implicit $x14, implicit $x15, implicit $x16, implicit $x17, implicit $x18 - ; - ; CHECK-NEXT: RET_ReallyLR implicit $p0, implicit $x0, implicit $x1, implicit $x2, implicit $x3, implicit $x4, implicit $x5, implicit $x6, implicit $x7, 
implicit $x8, implicit $x9, implicit $x10, implicit $x11, implicit $x12, implicit $x13, implicit $x14, implicit $x15, implicit $x16, implicit $x17, implicit $x18 - - ; EXPAND-LABEL: name: zpr_predicate_spill__save_restore_nzcv__scavenge_csr_gpr - ; EXPAND: liveins: $p0, $x0, $x1, $x2, $x3, $x4, $x5, $x6, $x7, $fp, $p15, $p14, $p13, $p12, $p11, $p10, $p9, $p8, $p7, $p6, $p5, $p4 - ; EXPAND-NEXT: {{ $}} - ; - ; EXPAND-NEXT: $sp = frame-setup SUBXri $sp, 1040, 0 - ; EXPAND-NEXT: frame-setup STRXui killed $fp, $sp, 128 :: (store (s64) into %stack.14) - ; EXPAND-NEXT: $sp = frame-setup ADDVL_XXI $sp, -12, implicit $vg - ; EXPAND-NEXT: $z0 = frame-setup CPY_ZPzI_B killed $p15, 1, 0 - ; EXPAND-NEXT: frame-setup STR_ZXI $z0, $sp, 0 :: (store (s128) into %stack.13) - ; EXPAND-NEXT: $z0 = frame-setup CPY_ZPzI_B killed $p14, 1, 0 - ; EXPAND-NEXT: frame-setup STR_ZXI $z0, $sp, 1 :: (store (s128) into %stack.12) - ; EXPAND-NEXT: $z0 = frame-setup CPY_ZPzI_B killed $p13, 1, 0 - ; EXPAND-NEXT: frame-setup STR_ZXI $z0, $sp, 2 :: (store (s128) into %stack.11) - ; EXPAND-NEXT: $z0 = frame-setup CPY_ZPzI_B killed $p12, 1, 0 - ; EXPAND-NEXT: frame-setup STR_ZXI $z0, $sp, 3 :: (store (s128) into %stack.10) - ; EXPAND-NEXT: $z0 = frame-setup CPY_ZPzI_B killed $p11, 1, 0 - ; EXPAND-NEXT: frame-setup STR_ZXI $z0, $sp, 4 :: (store (s128) into %stack.9) - ; EXPAND-NEXT: $z0 = frame-setup CPY_ZPzI_B killed $p10, 1, 0 - ; EXPAND-NEXT: frame-setup STR_ZXI $z0, $sp, 5 :: (store (s128) into %stack.8) - ; EXPAND-NEXT: $z0 = frame-setup CPY_ZPzI_B killed $p9, 1, 0 - ; EXPAND-NEXT: frame-setup STR_ZXI $z0, $sp, 6 :: (store (s128) into %stack.7) - ; EXPAND-NEXT: $z0 = frame-setup CPY_ZPzI_B killed $p8, 1, 0 - ; EXPAND-NEXT: frame-setup STR_ZXI $z0, $sp, 7 :: (store (s128) into %stack.6) - ; EXPAND-NEXT: $z0 = frame-setup CPY_ZPzI_B killed $p7, 1, 0 - ; EXPAND-NEXT: frame-setup STR_ZXI $z0, $sp, 8 :: (store (s128) into %stack.5) - ; EXPAND-NEXT: $z0 = frame-setup CPY_ZPzI_B killed $p6, 1, 0 - ; EXPAND-NEXT: frame-setup STR_ZXI $z0, $sp, 9 :: (store (s128) into %stack.4) - ; EXPAND-NEXT: $z0 = frame-setup CPY_ZPzI_B killed $p5, 1, 0 - ; EXPAND-NEXT: frame-setup STR_ZXI $z0, $sp, 10 :: (store (s128) into %stack.3) - ; EXPAND-NEXT: $z0 = frame-setup CPY_ZPzI_B killed $p4, 1, 0 - ; EXPAND-NEXT: frame-setup STR_ZXI $z0, $sp, 11 :: (store (s128) into %stack.2) - ; EXPAND-NEXT: $sp = frame-setup SUBXri $sp, 1024, 0 - ; EXPAND-NEXT: $sp = frame-setup ADDVL_XXI $sp, -1, implicit $vg - ; - ; EXPAND-NEXT: $nzcv = IMPLICIT_DEF - ; - ; EXPAND-NEXT: $x8 = IMPLICIT_DEF - ; EXPAND-NEXT: $x9 = IMPLICIT_DEF - ; EXPAND-NEXT: $x10 = IMPLICIT_DEF - ; EXPAND-NEXT: $x11 = IMPLICIT_DEF - ; EXPAND-NEXT: $x12 = IMPLICIT_DEF - ; EXPAND-NEXT: $x13 = IMPLICIT_DEF - ; EXPAND-NEXT: $x14 = IMPLICIT_DEF - ; EXPAND-NEXT: $x15 = IMPLICIT_DEF - ; EXPAND-NEXT: $x16 = IMPLICIT_DEF - ; EXPAND-NEXT: $x17 = IMPLICIT_DEF - ; EXPAND-NEXT: $x18 = IMPLICIT_DEF - ; - ; EXPAND-NEXT: $z0 = CPY_ZPzI_B $p0, 1, 0 - ; EXPAND-NEXT: $fp = ADDXri $sp, 1024, 0 - ; EXPAND-NEXT: STR_ZXI $z0, $fp, 0 :: (store (s128) into %stack.0) - ; - ; EXPAND-NEXT: $p0 = IMPLICIT_DEF - ; EXPAND-NEXT: $p1 = IMPLICIT_DEF - ; EXPAND-NEXT: $p2 = IMPLICIT_DEF - ; EXPAND-NEXT: $p3 = IMPLICIT_DEF - ; EXPAND-NEXT: $p4 = IMPLICIT_DEF - ; EXPAND-NEXT: $p5 = IMPLICIT_DEF - ; EXPAND-NEXT: $p6 = IMPLICIT_DEF - ; EXPAND-NEXT: $p7 = IMPLICIT_DEF - ; EXPAND-NEXT: $p8 = IMPLICIT_DEF - ; EXPAND-NEXT: $p9 = IMPLICIT_DEF - ; EXPAND-NEXT: $p10 = IMPLICIT_DEF - ; EXPAND-NEXT: $p11 = IMPLICIT_DEF - ; EXPAND-NEXT: $p12 
= IMPLICIT_DEF - ; EXPAND-NEXT: $p13 = IMPLICIT_DEF - ; EXPAND-NEXT: $p14 = IMPLICIT_DEF - ; EXPAND-NEXT: $p15 = IMPLICIT_DEF - ; - ; EXPAND-NEXT: $z0 = LDR_ZXI killed $fp, 0 :: (load (s128) from %stack.0) - ; EXPAND-NEXT: $fp = MRS 55824, implicit-def $nzcv, implicit $nzcv - ; EXPAND-NEXT: $p0 = PTRUE_B 31, implicit $vg - ; EXPAND-NEXT: $p0 = CMPNE_PPzZI_B $p0, $z0, 0, implicit-def $nzcv, implicit-def $nzcv - ; EXPAND-NEXT: MSR 55824, $fp, implicit-def $nzcv - ; - ; EXPAND-NEXT: FAKE_USE implicit $nzcv, implicit $x8, implicit $x9, implicit $x10, implicit $x11, implicit $x12, implicit $x13, implicit $x14, implicit $x15, implicit $x16, implicit $x17, implicit $x18 - ; - ; EXPAND-NEXT: $sp = frame-destroy ADDXri $sp, 1024, 0 - ; EXPAND-NEXT: $sp = frame-destroy ADDVL_XXI $sp, 1, implicit $vg - ; EXPAND-NEXT: $z0 = frame-destroy LDR_ZXI $sp, 0 :: (load (s128) from %stack.13) - ; EXPAND-NEXT: $p1 = frame-destroy PTRUE_B 31, implicit $vg - ; EXPAND-NEXT: $p15 = frame-destroy CMPNE_PPzZI_B $p1, $z0, 0, implicit-def $nzcv, implicit-def $nzcv - ; EXPAND-NEXT: $z0 = frame-destroy LDR_ZXI $sp, 1 :: (load (s128) from %stack.12) - ; EXPAND-NEXT: $p14 = frame-destroy CMPNE_PPzZI_B $p1, $z0, 0, implicit-def $nzcv, implicit-def $nzcv - ; EXPAND-NEXT: $z0 = frame-destroy LDR_ZXI $sp, 2 :: (load (s128) from %stack.11) - ; EXPAND-NEXT: $p13 = frame-destroy CMPNE_PPzZI_B $p1, $z0, 0, implicit-def $nzcv, implicit-def $nzcv - ; EXPAND-NEXT: $z0 = frame-destroy LDR_ZXI $sp, 3 :: (load (s128) from %stack.10) - ; EXPAND-NEXT: $p12 = frame-destroy CMPNE_PPzZI_B $p1, $z0, 0, implicit-def $nzcv, implicit-def $nzcv - ; EXPAND-NEXT: $z0 = frame-destroy LDR_ZXI $sp, 4 :: (load (s128) from %stack.9) - ; EXPAND-NEXT: $p11 = frame-destroy CMPNE_PPzZI_B $p1, $z0, 0, implicit-def $nzcv, implicit-def $nzcv - ; EXPAND-NEXT: $z0 = frame-destroy LDR_ZXI $sp, 5 :: (load (s128) from %stack.8) - ; EXPAND-NEXT: $p10 = frame-destroy CMPNE_PPzZI_B $p1, $z0, 0, implicit-def $nzcv, implicit-def $nzcv - ; EXPAND-NEXT: $z0 = frame-destroy LDR_ZXI $sp, 6 :: (load (s128) from %stack.7) - ; EXPAND-NEXT: $p9 = frame-destroy CMPNE_PPzZI_B $p1, $z0, 0, implicit-def $nzcv, implicit-def $nzcv - ; EXPAND-NEXT: $z0 = frame-destroy LDR_ZXI $sp, 7 :: (load (s128) from %stack.6) - ; EXPAND-NEXT: $p8 = frame-destroy CMPNE_PPzZI_B $p1, $z0, 0, implicit-def $nzcv, implicit-def $nzcv - ; EXPAND-NEXT: $z0 = frame-destroy LDR_ZXI $sp, 8 :: (load (s128) from %stack.5) - ; EXPAND-NEXT: $p7 = frame-destroy CMPNE_PPzZI_B $p1, $z0, 0, implicit-def $nzcv, implicit-def $nzcv - ; EXPAND-NEXT: $z0 = frame-destroy LDR_ZXI $sp, 9 :: (load (s128) from %stack.4) - ; EXPAND-NEXT: $p6 = frame-destroy CMPNE_PPzZI_B $p1, $z0, 0, implicit-def $nzcv, implicit-def $nzcv - ; EXPAND-NEXT: $z0 = frame-destroy LDR_ZXI $sp, 10 :: (load (s128) from %stack.3) - ; EXPAND-NEXT: $p5 = frame-destroy CMPNE_PPzZI_B $p1, $z0, 0, implicit-def $nzcv, implicit-def $nzcv - ; EXPAND-NEXT: $z0 = frame-destroy LDR_ZXI $sp, 11 :: (load (s128) from %stack.2) - ; EXPAND-NEXT: $p4 = frame-destroy CMPNE_PPzZI_B $p1, $z0, 0, implicit-def $nzcv, implicit-def $nzcv - ; EXPAND-NEXT: $sp = frame-destroy ADDVL_XXI $sp, 12, implicit $vg - ; EXPAND-NEXT: $fp = frame-destroy LDRXui $sp, 128 :: (load (s64) from %stack.14) - ; EXPAND-NEXT: $sp = frame-destroy ADDXri $sp, 1040, 0 - ; EXPAND-NEXT: RET undef $lr, implicit $p0, implicit $x0, implicit $x1, implicit $x2, implicit $x3, implicit $x4, implicit $x5, implicit $x6, implicit $x7, implicit $x8, implicit $x9, implicit $x10, implicit $x11, implicit $x12, 
implicit $x13, implicit $x14, implicit $x15, implicit $x16, implicit $x17, implicit $x18 - $nzcv = IMPLICIT_DEF - $x8 = IMPLICIT_DEF - $x9 = IMPLICIT_DEF - $x10 = IMPLICIT_DEF - $x11 = IMPLICIT_DEF - $x12 = IMPLICIT_DEF - $x13 = IMPLICIT_DEF - $x14 = IMPLICIT_DEF - $x15 = IMPLICIT_DEF - $x16 = IMPLICIT_DEF - $x17 = IMPLICIT_DEF - $x18 = IMPLICIT_DEF - - %1:ppr = COPY $p0 - - $p0 = IMPLICIT_DEF - $p1 = IMPLICIT_DEF - $p2 = IMPLICIT_DEF - $p3 = IMPLICIT_DEF - $p4 = IMPLICIT_DEF - $p5 = IMPLICIT_DEF - $p6 = IMPLICIT_DEF - $p7 = IMPLICIT_DEF - $p8 = IMPLICIT_DEF - $p9 = IMPLICIT_DEF - $p10 = IMPLICIT_DEF - $p11 = IMPLICIT_DEF - $p12 = IMPLICIT_DEF - $p13 = IMPLICIT_DEF - $p14 = IMPLICIT_DEF - $p15 = IMPLICIT_DEF - - $p0 = COPY %1 - - FAKE_USE implicit $nzcv, implicit $x8, implicit $x9, implicit $x10, implicit $x11, implicit $x12, implicit $x13, implicit $x14, implicit $x15, implicit $x16, implicit $x17, implicit $x18 - - RET_ReallyLR implicit $p0, implicit $x0, implicit $x1, implicit $x2, implicit $x3, implicit $x4, implicit $x5, implicit $x6, implicit $x7, implicit $x8, implicit $x9, implicit $x10, implicit $x11, implicit $x12, implicit $x13, implicit $x14, implicit $x15, implicit $x16, implicit $x17, implicit $x18 -... ---- -name: zpr_predicate_spill__spill_zpr -tracksRegLiveness: true -stack: -liveins: - - { reg: '$p0' } - - { reg: '$z0' } - - { reg: '$z1' } - - { reg: '$z2' } - - { reg: '$z3' } - - { reg: '$z4' } - - { reg: '$z5' } - - { reg: '$z6' } - - { reg: '$z7' } -body: | - bb.0.entry: - liveins: $p0, $z0, $z1, $z2, $z3, $z4, $z5, $z6, $z7 - - ; CHECK-LABEL: name: zpr_predicate_spill__spill_zpr - ; CHECK: stack: - ; CHECK: - { id: 0, name: '', type: spill-slot, offset: 0, size: 16, alignment: 16, - ; CHECK-NEXT: stack-id: scalable-vector, callee-saved-register: - ; CHECK: liveins: $p0, $z0, $z1, $z2, $z3, $z4, $z5, $z6, $z7 - ; CHECK-NEXT: {{ $}} - ; - ; CHECK-NEXT: $z16 = IMPLICIT_DEF - ; CHECK-NEXT: $z17 = IMPLICIT_DEF - ; CHECK-NEXT: $z18 = IMPLICIT_DEF - ; CHECK-NEXT: $z19 = IMPLICIT_DEF - ; CHECK-NEXT: $z20 = IMPLICIT_DEF - ; CHECK-NEXT: $z21 = IMPLICIT_DEF - ; CHECK-NEXT: $z22 = IMPLICIT_DEF - ; CHECK-NEXT: $z23 = IMPLICIT_DEF - ; CHECK-NEXT: $z24 = IMPLICIT_DEF - ; CHECK-NEXT: $z25 = IMPLICIT_DEF - ; CHECK-NEXT: $z26 = IMPLICIT_DEF - ; CHECK-NEXT: $z27 = IMPLICIT_DEF - ; CHECK-NEXT: $z28 = IMPLICIT_DEF - ; CHECK-NEXT: $z29 = IMPLICIT_DEF - ; CHECK-NEXT: $z30 = IMPLICIT_DEF - ; CHECK-NEXT: $z31 = IMPLICIT_DEF - ; - ; CHECK-NEXT: SPILL_PPR_TO_ZPR_SLOT_PSEUDO $p0, %stack.0, 0 :: (store (s128) into %stack.0) - ; - ; CHECK-NEXT: $p0 = IMPLICIT_DEF - ; CHECK-NEXT: $p1 = IMPLICIT_DEF - ; CHECK-NEXT: $p2 = IMPLICIT_DEF - ; CHECK-NEXT: $p3 = IMPLICIT_DEF - ; CHECK-NEXT: $p4 = IMPLICIT_DEF - ; CHECK-NEXT: $p5 = IMPLICIT_DEF - ; CHECK-NEXT: $p6 = IMPLICIT_DEF - ; CHECK-NEXT: $p7 = IMPLICIT_DEF - ; CHECK-NEXT: $p8 = IMPLICIT_DEF - ; CHECK-NEXT: $p9 = IMPLICIT_DEF - ; CHECK-NEXT: $p10 = IMPLICIT_DEF - ; CHECK-NEXT: $p11 = IMPLICIT_DEF - ; CHECK-NEXT: $p12 = IMPLICIT_DEF - ; CHECK-NEXT: $p13 = IMPLICIT_DEF - ; CHECK-NEXT: $p14 = IMPLICIT_DEF - ; CHECK-NEXT: $p15 = IMPLICIT_DEF - ; - ; CHECK-NEXT: $p0 = FILL_PPR_FROM_ZPR_SLOT_PSEUDO %stack.0, 0 :: (load (s128) from %stack.0) - ; - ; CHECK-NEXT: FAKE_USE implicit $z16, implicit $z17, implicit $z18, implicit $z19, implicit $z20, implicit $z21, implicit $z22, implicit $z23, implicit $z24, implicit $z25, implicit $z26, implicit $z27, implicit $z28, implicit $z29, implicit $z30, implicit $z31 - ; - ; CHECK-NEXT: RET_ReallyLR implicit $p0, 
implicit $z0, implicit $z1, implicit $z2, implicit $z3, implicit $z4, implicit $z5, implicit $z6, implicit $z7 - - ; EXPAND-LABEL: name: zpr_predicate_spill__spill_zpr - ; EXPAND: liveins: $p0, $z0, $z1, $z2, $z3, $z4, $z5, $z6, $z7, $fp, $p15, $p14, $p13, $p12, $p11, $p10, $p9, $p8, $p7, $p6, $p5, $p4, $z23, $z22, $z21, $z20, $z19, $z18, $z17, $z16 - ; EXPAND-NEXT: {{ $}} - ; - ; EXPAND-NEXT: $sp = frame-setup SUBXri $sp, 1040, 0 - ; EXPAND-NEXT: frame-setup STRXui killed $fp, $sp, 128 :: (store (s64) into %stack.22) - ; EXPAND-NEXT: $sp = frame-setup ADDVL_XXI $sp, -20, implicit $vg - ; EXPAND-NEXT: $z24 = frame-setup CPY_ZPzI_B killed $p15, 1, 0 - ; EXPAND-NEXT: frame-setup STR_ZXI $z24, $sp, 0 :: (store (s128) into %stack.21) - ; EXPAND-NEXT: $z24 = frame-setup CPY_ZPzI_B killed $p14, 1, 0 - ; EXPAND-NEXT: frame-setup STR_ZXI $z24, $sp, 1 :: (store (s128) into %stack.20) - ; EXPAND-NEXT: $z24 = frame-setup CPY_ZPzI_B killed $p13, 1, 0 - ; EXPAND-NEXT: frame-setup STR_ZXI $z24, $sp, 2 :: (store (s128) into %stack.19) - ; EXPAND-NEXT: $z24 = frame-setup CPY_ZPzI_B killed $p12, 1, 0 - ; EXPAND-NEXT: frame-setup STR_ZXI $z24, $sp, 3 :: (store (s128) into %stack.18) - ; EXPAND-NEXT: $z24 = frame-setup CPY_ZPzI_B killed $p11, 1, 0 - ; EXPAND-NEXT: frame-setup STR_ZXI $z24, $sp, 4 :: (store (s128) into %stack.17) - ; EXPAND-NEXT: $z24 = frame-setup CPY_ZPzI_B killed $p10, 1, 0 - ; EXPAND-NEXT: frame-setup STR_ZXI $z24, $sp, 5 :: (store (s128) into %stack.16) - ; EXPAND-NEXT: $z24 = frame-setup CPY_ZPzI_B killed $p9, 1, 0 - ; EXPAND-NEXT: frame-setup STR_ZXI $z24, $sp, 6 :: (store (s128) into %stack.15) - ; EXPAND-NEXT: $z24 = frame-setup CPY_ZPzI_B killed $p8, 1, 0 - ; EXPAND-NEXT: frame-setup STR_ZXI $z24, $sp, 7 :: (store (s128) into %stack.14) - ; EXPAND-NEXT: $z24 = frame-setup CPY_ZPzI_B killed $p7, 1, 0 - ; EXPAND-NEXT: frame-setup STR_ZXI $z24, $sp, 8 :: (store (s128) into %stack.13) - ; EXPAND-NEXT: $z24 = frame-setup CPY_ZPzI_B killed $p6, 1, 0 - ; EXPAND-NEXT: frame-setup STR_ZXI $z24, $sp, 9 :: (store (s128) into %stack.12) - ; EXPAND-NEXT: $z24 = frame-setup CPY_ZPzI_B killed $p5, 1, 0 - ; EXPAND-NEXT: frame-setup STR_ZXI $z24, $sp, 10 :: (store (s128) into %stack.11) - ; EXPAND-NEXT: $z24 = frame-setup CPY_ZPzI_B killed $p4, 1, 0 - ; EXPAND-NEXT: frame-setup STR_ZXI $z24, $sp, 11 :: (store (s128) into %stack.10) - ; EXPAND-NEXT: frame-setup STR_ZXI killed $z23, $sp, 12 :: (store (s128) into %stack.9) - ; EXPAND-NEXT: frame-setup STR_ZXI killed $z22, $sp, 13 :: (store (s128) into %stack.8) - ; EXPAND-NEXT: frame-setup STR_ZXI killed $z21, $sp, 14 :: (store (s128) into %stack.7) - ; EXPAND-NEXT: frame-setup STR_ZXI killed $z20, $sp, 15 :: (store (s128) into %stack.6) - ; EXPAND-NEXT: frame-setup STR_ZXI killed $z19, $sp, 16 :: (store (s128) into %stack.5) - ; EXPAND-NEXT: frame-setup STR_ZXI killed $z18, $sp, 17 :: (store (s128) into %stack.4) - ; EXPAND-NEXT: frame-setup STR_ZXI killed $z17, $sp, 18 :: (store (s128) into %stack.3) - ; EXPAND-NEXT: frame-setup STR_ZXI killed $z16, $sp, 19 :: (store (s128) into %stack.2) - ; EXPAND-NEXT: $sp = frame-setup SUBXri $sp, 1024, 0 - ; EXPAND-NEXT: $sp = frame-setup ADDVL_XXI $sp, -2, implicit $vg - ; - ; EXPAND-NEXT: $z16 = IMPLICIT_DEF - ; EXPAND-NEXT: $z17 = IMPLICIT_DEF - ; EXPAND-NEXT: $z18 = IMPLICIT_DEF - ; EXPAND-NEXT: $z19 = IMPLICIT_DEF - ; EXPAND-NEXT: $z20 = IMPLICIT_DEF - ; EXPAND-NEXT: $z21 = IMPLICIT_DEF - ; EXPAND-NEXT: $z22 = IMPLICIT_DEF - ; EXPAND-NEXT: $z23 = IMPLICIT_DEF - ; EXPAND-NEXT: $z24 = IMPLICIT_DEF - ; 
EXPAND-NEXT: $z25 = IMPLICIT_DEF - ; EXPAND-NEXT: $z26 = IMPLICIT_DEF - ; EXPAND-NEXT: $z27 = IMPLICIT_DEF - ; EXPAND-NEXT: $z28 = IMPLICIT_DEF - ; EXPAND-NEXT: $z29 = IMPLICIT_DEF - ; EXPAND-NEXT: $z30 = IMPLICIT_DEF - ; EXPAND-NEXT: $z31 = IMPLICIT_DEF - ; - ; EXPAND-NEXT: $x8 = ADDXri $sp, 1024, 0 - ; EXPAND-NEXT: STR_ZXI $z0, $x8, 0 :: (store (s128) into %stack.24) - ; EXPAND-NEXT: $z0 = CPY_ZPzI_B $p0, 1, 0 - ; EXPAND-NEXT: STR_ZXI $z0, $x8, 1 :: (store (s128) into %stack.0) - ; EXPAND-NEXT: $z0 = LDR_ZXI $x8, 0 :: (load (s128) from %stack.24) - ; - ; EXPAND-NEXT: $p0 = IMPLICIT_DEF - ; EXPAND-NEXT: $p1 = IMPLICIT_DEF - ; EXPAND-NEXT: $p2 = IMPLICIT_DEF - ; EXPAND-NEXT: $p3 = IMPLICIT_DEF - ; EXPAND-NEXT: $p4 = IMPLICIT_DEF - ; EXPAND-NEXT: $p5 = IMPLICIT_DEF - ; EXPAND-NEXT: $p6 = IMPLICIT_DEF - ; EXPAND-NEXT: $p7 = IMPLICIT_DEF - ; EXPAND-NEXT: $p8 = IMPLICIT_DEF - ; EXPAND-NEXT: $p9 = IMPLICIT_DEF - ; EXPAND-NEXT: $p10 = IMPLICIT_DEF - ; EXPAND-NEXT: $p11 = IMPLICIT_DEF - ; EXPAND-NEXT: $p12 = IMPLICIT_DEF - ; EXPAND-NEXT: $p13 = IMPLICIT_DEF - ; EXPAND-NEXT: $p14 = IMPLICIT_DEF - ; EXPAND-NEXT: $p15 = IMPLICIT_DEF - ; - ; EXPAND-NEXT: STR_ZXI $z0, $x8, 0 :: (store (s128) into %stack.24) - ; EXPAND-NEXT: $z0 = LDR_ZXI $x8, 1 :: (load (s128) from %stack.0) - ; EXPAND-NEXT: $p0 = PTRUE_B 31, implicit $vg - ; EXPAND-NEXT: $p0 = CMPNE_PPzZI_B $p0, $z0, 0, implicit-def $nzcv, implicit-def $nzcv - ; EXPAND-NEXT: $z0 = LDR_ZXI killed $x8, 0 :: (load (s128) from %stack.24) - ; - ; EXPAND-NEXT: FAKE_USE implicit $z16, implicit $z17, implicit $z18, implicit $z19, implicit $z20, implicit $z21, implicit $z22, implicit $z23, implicit $z24, implicit $z25, implicit $z26, implicit $z27, implicit $z28, implicit $z29, implicit $z30, implicit $z31 - ; - ; EXPAND-NEXT: $sp = frame-destroy ADDXri $sp, 1024, 0 - ; EXPAND-NEXT: $sp = frame-destroy ADDVL_XXI $sp, 2, implicit $vg - ; EXPAND-NEXT: $z23 = frame-destroy LDR_ZXI $sp, 12 :: (load (s128) from %stack.9) - ; EXPAND-NEXT: $z22 = frame-destroy LDR_ZXI $sp, 13 :: (load (s128) from %stack.8) - ; EXPAND-NEXT: $z21 = frame-destroy LDR_ZXI $sp, 14 :: (load (s128) from %stack.7) - ; EXPAND-NEXT: $z20 = frame-destroy LDR_ZXI $sp, 15 :: (load (s128) from %stack.6) - ; EXPAND-NEXT: $z19 = frame-destroy LDR_ZXI $sp, 16 :: (load (s128) from %stack.5) - ; EXPAND-NEXT: $z18 = frame-destroy LDR_ZXI $sp, 17 :: (load (s128) from %stack.4) - ; EXPAND-NEXT: $z17 = frame-destroy LDR_ZXI $sp, 18 :: (load (s128) from %stack.3) - ; EXPAND-NEXT: $z16 = frame-destroy LDR_ZXI $sp, 19 :: (load (s128) from %stack.2) - ; EXPAND-NEXT: $z24 = frame-destroy LDR_ZXI $sp, 0 :: (load (s128) from %stack.21) - ; EXPAND-NEXT: $p1 = frame-destroy PTRUE_B 31, implicit $vg - ; EXPAND-NEXT: $p15 = frame-destroy CMPNE_PPzZI_B $p1, $z24, 0, implicit-def $nzcv, implicit-def $nzcv - ; EXPAND-NEXT: $z24 = frame-destroy LDR_ZXI $sp, 1 :: (load (s128) from %stack.20) - ; EXPAND-NEXT: $p14 = frame-destroy CMPNE_PPzZI_B $p1, $z24, 0, implicit-def $nzcv, implicit-def $nzcv - ; EXPAND-NEXT: $z24 = frame-destroy LDR_ZXI $sp, 2 :: (load (s128) from %stack.19) - ; EXPAND-NEXT: $p13 = frame-destroy CMPNE_PPzZI_B $p1, $z24, 0, implicit-def $nzcv, implicit-def $nzcv - ; EXPAND-NEXT: $z24 = frame-destroy LDR_ZXI $sp, 3 :: (load (s128) from %stack.18) - ; EXPAND-NEXT: $p12 = frame-destroy CMPNE_PPzZI_B $p1, $z24, 0, implicit-def $nzcv, implicit-def $nzcv - ; EXPAND-NEXT: $z24 = frame-destroy LDR_ZXI $sp, 4 :: (load (s128) from %stack.17) - ; EXPAND-NEXT: $p11 = frame-destroy CMPNE_PPzZI_B $p1, $z24, 0, 
implicit-def $nzcv, implicit-def $nzcv - ; EXPAND-NEXT: $z24 = frame-destroy LDR_ZXI $sp, 5 :: (load (s128) from %stack.16) - ; EXPAND-NEXT: $p10 = frame-destroy CMPNE_PPzZI_B $p1, $z24, 0, implicit-def $nzcv, implicit-def $nzcv - ; EXPAND-NEXT: $z24 = frame-destroy LDR_ZXI $sp, 6 :: (load (s128) from %stack.15) - ; EXPAND-NEXT: $p9 = frame-destroy CMPNE_PPzZI_B $p1, $z24, 0, implicit-def $nzcv, implicit-def $nzcv - ; EXPAND-NEXT: $z24 = frame-destroy LDR_ZXI $sp, 7 :: (load (s128) from %stack.14) - ; EXPAND-NEXT: $p8 = frame-destroy CMPNE_PPzZI_B $p1, $z24, 0, implicit-def $nzcv, implicit-def $nzcv - ; EXPAND-NEXT: $z24 = frame-destroy LDR_ZXI $sp, 8 :: (load (s128) from %stack.13) - ; EXPAND-NEXT: $p7 = frame-destroy CMPNE_PPzZI_B $p1, $z24, 0, implicit-def $nzcv, implicit-def $nzcv - ; EXPAND-NEXT: $z24 = frame-destroy LDR_ZXI $sp, 9 :: (load (s128) from %stack.12) - ; EXPAND-NEXT: $p6 = frame-destroy CMPNE_PPzZI_B $p1, $z24, 0, implicit-def $nzcv, implicit-def $nzcv - ; EXPAND-NEXT: $z24 = frame-destroy LDR_ZXI $sp, 10 :: (load (s128) from %stack.11) - ; EXPAND-NEXT: $p5 = frame-destroy CMPNE_PPzZI_B $p1, $z24, 0, implicit-def $nzcv, implicit-def $nzcv - ; EXPAND-NEXT: $z24 = frame-destroy LDR_ZXI $sp, 11 :: (load (s128) from %stack.10) - ; EXPAND-NEXT: $p4 = frame-destroy CMPNE_PPzZI_B $p1, $z24, 0, implicit-def $nzcv, implicit-def $nzcv - ; EXPAND-NEXT: $sp = frame-destroy ADDVL_XXI $sp, 20, implicit $vg - ; EXPAND-NEXT: $fp = frame-destroy LDRXui $sp, 128 :: (load (s64) from %stack.22) - ; EXPAND-NEXT: $sp = frame-destroy ADDXri $sp, 1040, 0 - ; EXPAND-NEXT: RET undef $lr, implicit $p0, implicit $z0, implicit $z1, implicit $z2, implicit $z3, implicit $z4, implicit $z5, implicit $z6, implicit $z7 - $z16 = IMPLICIT_DEF - $z17 = IMPLICIT_DEF - $z18 = IMPLICIT_DEF - $z19 = IMPLICIT_DEF - $z20 = IMPLICIT_DEF - $z21 = IMPLICIT_DEF - $z22 = IMPLICIT_DEF - $z23 = IMPLICIT_DEF - $z24 = IMPLICIT_DEF - $z25 = IMPLICIT_DEF - $z26 = IMPLICIT_DEF - $z27 = IMPLICIT_DEF - $z28 = IMPLICIT_DEF - $z29 = IMPLICIT_DEF - $z30 = IMPLICIT_DEF - $z31 = IMPLICIT_DEF - - %1:ppr = COPY $p0 - - $p0 = IMPLICIT_DEF - $p1 = IMPLICIT_DEF - $p2 = IMPLICIT_DEF - $p3 = IMPLICIT_DEF - $p4 = IMPLICIT_DEF - $p5 = IMPLICIT_DEF - $p6 = IMPLICIT_DEF - $p7 = IMPLICIT_DEF - $p8 = IMPLICIT_DEF - $p9 = IMPLICIT_DEF - $p10 = IMPLICIT_DEF - $p11 = IMPLICIT_DEF - $p12 = IMPLICIT_DEF - $p13 = IMPLICIT_DEF - $p14 = IMPLICIT_DEF - $p15 = IMPLICIT_DEF - - $p0 = COPY %1 - - FAKE_USE implicit $z16, implicit $z17, implicit $z18, implicit $z19, implicit $z20, implicit $z21, implicit $z22, implicit $z23, implicit $z24, implicit $z25, implicit $z26, implicit $z27, implicit $z28, implicit $z29, implicit $z30, implicit $z31 - - RET_ReallyLR implicit $p0, implicit $z0, implicit $z1, implicit $z2, implicit $z3, implicit $z4, implicit $z5, implicit $z6, implicit $z7 -... 
---- -name: zpr_predicate_spill_above_p7 -tracksRegLiveness: true -stack: -liveins: - - { reg: '$p0' } - - { reg: '$p1' } - - { reg: '$p2' } - - { reg: '$p3' } -body: | - bb.0.entry: - liveins: $p0, $p1, $p2, $p3 - - ; CHECK-LABEL: name: zpr_predicate_spill_above_p7 - ; CHECK: stack: - ; CHECK: - { id: 0, name: '', type: spill-slot, offset: 0, size: 16, alignment: 16, - ; CHECK-NEXT: stack-id: scalable-vector, callee-saved-register: - ; CHECK: liveins: $p0, $p1, $p2, $p3 - ; CHECK-NEXT: {{ $}} - ; - ; CHECK-NEXT: $p15 = IMPLICIT_DEF - ; - ; CHECK-NEXT: SPILL_PPR_TO_ZPR_SLOT_PSEUDO $p15, %stack.0, 0 :: (store (s128) into %stack.0) - ; - ; CHECK-NEXT: $p0 = IMPLICIT_DEF - ; CHECK-NEXT: $p1 = IMPLICIT_DEF - ; CHECK-NEXT: $p2 = IMPLICIT_DEF - ; CHECK-NEXT: $p3 = IMPLICIT_DEF - ; CHECK-NEXT: $p4 = IMPLICIT_DEF - ; CHECK-NEXT: $p5 = IMPLICIT_DEF - ; CHECK-NEXT: $p6 = IMPLICIT_DEF - ; CHECK-NEXT: $p7 = IMPLICIT_DEF - ; CHECK-NEXT: $p8 = IMPLICIT_DEF - ; CHECK-NEXT: $p9 = IMPLICIT_DEF - ; CHECK-NEXT: $p10 = IMPLICIT_DEF - ; CHECK-NEXT: $p11 = IMPLICIT_DEF - ; CHECK-NEXT: $p12 = IMPLICIT_DEF - ; CHECK-NEXT: $p13 = IMPLICIT_DEF - ; CHECK-NEXT: $p14 = IMPLICIT_DEF - ; CHECK-NEXT: $p15 = IMPLICIT_DEF - ; - ; CHECK-NEXT: $p15 = FILL_PPR_FROM_ZPR_SLOT_PSEUDO %stack.0, 0 :: (load (s128) from %stack.0) - ; - ; CHECK-NEXT: FAKE_USE implicit $p4, implicit $p5, implicit $p6, implicit $p7 - ; - ; CHECK-NEXT: RET_ReallyLR implicit $p0, implicit $p1, implicit $p2, implicit $p3 - - ; EXPAND-LABEL: name: zpr_predicate_spill_above_p7 - ; EXPAND: liveins: $p0, $p1, $p2, $p3, $fp, $p15, $p14, $p13, $p12, $p11, $p10, $p9, $p8, $p7, $p6, $p5, $p4 - ; EXPAND-NEXT: {{ $}} - ; - ; EXPAND-NEXT: $sp = frame-setup SUBXri $sp, 1040, 0 - ; EXPAND-NEXT: frame-setup STRXui killed $fp, $sp, 128 :: (store (s64) into %stack.14) - ; EXPAND-NEXT: $sp = frame-setup ADDVL_XXI $sp, -12, implicit $vg - ; EXPAND-NEXT: $z0 = frame-setup CPY_ZPzI_B killed $p15, 1, 0 - ; EXPAND-NEXT: frame-setup STR_ZXI $z0, $sp, 0 :: (store (s128) into %stack.13) - ; EXPAND-NEXT: $z0 = frame-setup CPY_ZPzI_B killed $p14, 1, 0 - ; EXPAND-NEXT: frame-setup STR_ZXI $z0, $sp, 1 :: (store (s128) into %stack.12) - ; EXPAND-NEXT: $z0 = frame-setup CPY_ZPzI_B killed $p13, 1, 0 - ; EXPAND-NEXT: frame-setup STR_ZXI $z0, $sp, 2 :: (store (s128) into %stack.11) - ; EXPAND-NEXT: $z0 = frame-setup CPY_ZPzI_B killed $p12, 1, 0 - ; EXPAND-NEXT: frame-setup STR_ZXI $z0, $sp, 3 :: (store (s128) into %stack.10) - ; EXPAND-NEXT: $z0 = frame-setup CPY_ZPzI_B killed $p11, 1, 0 - ; EXPAND-NEXT: frame-setup STR_ZXI $z0, $sp, 4 :: (store (s128) into %stack.9) - ; EXPAND-NEXT: $z0 = frame-setup CPY_ZPzI_B killed $p10, 1, 0 - ; EXPAND-NEXT: frame-setup STR_ZXI $z0, $sp, 5 :: (store (s128) into %stack.8) - ; EXPAND-NEXT: $z0 = frame-setup CPY_ZPzI_B killed $p9, 1, 0 - ; EXPAND-NEXT: frame-setup STR_ZXI $z0, $sp, 6 :: (store (s128) into %stack.7) - ; EXPAND-NEXT: $z0 = frame-setup CPY_ZPzI_B killed $p8, 1, 0 - ; EXPAND-NEXT: frame-setup STR_ZXI $z0, $sp, 7 :: (store (s128) into %stack.6) - ; EXPAND-NEXT: $z0 = frame-setup CPY_ZPzI_B killed $p7, 1, 0 - ; EXPAND-NEXT: frame-setup STR_ZXI $z0, $sp, 8 :: (store (s128) into %stack.5) - ; EXPAND-NEXT: $z0 = frame-setup CPY_ZPzI_B killed $p6, 1, 0 - ; EXPAND-NEXT: frame-setup STR_ZXI $z0, $sp, 9 :: (store (s128) into %stack.4) - ; EXPAND-NEXT: $z0 = frame-setup CPY_ZPzI_B killed $p5, 1, 0 - ; EXPAND-NEXT: frame-setup STR_ZXI $z0, $sp, 10 :: (store (s128) into %stack.3) - ; EXPAND-NEXT: $z0 = frame-setup CPY_ZPzI_B killed $p4, 1, 0 - ; 
EXPAND-NEXT: frame-setup STR_ZXI $z0, $sp, 11 :: (store (s128) into %stack.2) - ; EXPAND-NEXT: $sp = frame-setup SUBXri $sp, 1024, 0 - ; EXPAND-NEXT: $sp = frame-setup ADDVL_XXI $sp, -2, implicit $vg - ; - ; EXPAND-NEXT: $p15 = IMPLICIT_DEF - ; - ; EXPAND-NEXT: $z0 = CPY_ZPzI_B $p15, 1, 0 - ; EXPAND-NEXT: $x8 = ADDXri $sp, 1024, 0 - ; EXPAND-NEXT: STR_ZXI $z0, $x8, 1 :: (store (s128) into %stack.0) - ; - ; EXPAND-NEXT: $p0 = IMPLICIT_DEF - ; EXPAND-NEXT: $p1 = IMPLICIT_DEF - ; EXPAND-NEXT: $p2 = IMPLICIT_DEF - ; EXPAND-NEXT: $p3 = IMPLICIT_DEF - ; EXPAND-NEXT: $p4 = IMPLICIT_DEF - ; EXPAND-NEXT: $p5 = IMPLICIT_DEF - ; EXPAND-NEXT: $p6 = IMPLICIT_DEF - ; EXPAND-NEXT: $p7 = IMPLICIT_DEF - ; EXPAND-NEXT: $p8 = IMPLICIT_DEF - ; EXPAND-NEXT: $p9 = IMPLICIT_DEF - ; EXPAND-NEXT: $p10 = IMPLICIT_DEF - ; EXPAND-NEXT: $p11 = IMPLICIT_DEF - ; EXPAND-NEXT: $p12 = IMPLICIT_DEF - ; EXPAND-NEXT: $p13 = IMPLICIT_DEF - ; EXPAND-NEXT: $p14 = IMPLICIT_DEF - ; EXPAND-NEXT: $p15 = IMPLICIT_DEF - ; - ; EXPAND-NEXT: $z0 = CPY_ZPzI_B $p0, 1, 0 - ; EXPAND-NEXT: STR_ZXI $z0, $x8, 0 :: (store (s128) into %stack.16) - ; EXPAND-NEXT: $z0 = LDR_ZXI $x8, 1 :: (load (s128) from %stack.0) - ; EXPAND-NEXT: $p0 = PTRUE_B 31, implicit $vg - ; EXPAND-NEXT: $p15 = CMPNE_PPzZI_B $p0, $z0, 0, implicit-def $nzcv, implicit-def $nzcv - ; EXPAND-NEXT: $z0 = LDR_ZXI killed $x8, 0 :: (load (s128) from %stack.16) - ; EXPAND-NEXT: $p0 = PTRUE_B 31, implicit $vg - ; EXPAND-NEXT: $p0 = CMPNE_PPzZI_B $p0, $z0, 0, implicit-def $nzcv, implicit-def $nzcv - ; - ; EXPAND-NEXT: FAKE_USE implicit $p4, implicit $p5, implicit $p6, implicit $p7 - ; - ; EXPAND-NEXT: $sp = frame-destroy ADDXri $sp, 1024, 0 - ; EXPAND-NEXT: $sp = frame-destroy ADDVL_XXI $sp, 2, implicit $vg - ; EXPAND-NEXT: $z0 = frame-destroy LDR_ZXI $sp, 0 :: (load (s128) from %stack.13) - ; EXPAND-NEXT: $p4 = frame-destroy PTRUE_B 31, implicit $vg - ; EXPAND-NEXT: $p15 = frame-destroy CMPNE_PPzZI_B $p4, $z0, 0, implicit-def $nzcv, implicit-def $nzcv - ; EXPAND-NEXT: $z0 = frame-destroy LDR_ZXI $sp, 1 :: (load (s128) from %stack.12) - ; EXPAND-NEXT: $p14 = frame-destroy CMPNE_PPzZI_B $p4, $z0, 0, implicit-def $nzcv, implicit-def $nzcv - ; EXPAND-NEXT: $z0 = frame-destroy LDR_ZXI $sp, 2 :: (load (s128) from %stack.11) - ; EXPAND-NEXT: $p13 = frame-destroy CMPNE_PPzZI_B $p4, $z0, 0, implicit-def $nzcv, implicit-def $nzcv - ; EXPAND-NEXT: $z0 = frame-destroy LDR_ZXI $sp, 3 :: (load (s128) from %stack.10) - ; EXPAND-NEXT: $p12 = frame-destroy CMPNE_PPzZI_B $p4, $z0, 0, implicit-def $nzcv, implicit-def $nzcv - ; EXPAND-NEXT: $z0 = frame-destroy LDR_ZXI $sp, 4 :: (load (s128) from %stack.9) - ; EXPAND-NEXT: $p11 = frame-destroy CMPNE_PPzZI_B $p4, $z0, 0, implicit-def $nzcv, implicit-def $nzcv - ; EXPAND-NEXT: $z0 = frame-destroy LDR_ZXI $sp, 5 :: (load (s128) from %stack.8) - ; EXPAND-NEXT: $p10 = frame-destroy CMPNE_PPzZI_B $p4, $z0, 0, implicit-def $nzcv, implicit-def $nzcv - ; EXPAND-NEXT: $z0 = frame-destroy LDR_ZXI $sp, 6 :: (load (s128) from %stack.7) - ; EXPAND-NEXT: $p9 = frame-destroy CMPNE_PPzZI_B $p4, $z0, 0, implicit-def $nzcv, implicit-def $nzcv - ; EXPAND-NEXT: $z0 = frame-destroy LDR_ZXI $sp, 7 :: (load (s128) from %stack.6) - ; EXPAND-NEXT: $p8 = frame-destroy CMPNE_PPzZI_B $p4, $z0, 0, implicit-def $nzcv, implicit-def $nzcv - ; EXPAND-NEXT: $z0 = frame-destroy LDR_ZXI $sp, 8 :: (load (s128) from %stack.5) - ; EXPAND-NEXT: $p7 = frame-destroy CMPNE_PPzZI_B $p4, $z0, 0, implicit-def $nzcv, implicit-def $nzcv - ; EXPAND-NEXT: $z0 = frame-destroy LDR_ZXI $sp, 9 :: (load (s128) 
from %stack.4) - ; EXPAND-NEXT: $p6 = frame-destroy CMPNE_PPzZI_B $p4, $z0, 0, implicit-def $nzcv, implicit-def $nzcv - ; EXPAND-NEXT: $z0 = frame-destroy LDR_ZXI $sp, 10 :: (load (s128) from %stack.3) - ; EXPAND-NEXT: $p5 = frame-destroy CMPNE_PPzZI_B $p4, $z0, 0, implicit-def $nzcv, implicit-def $nzcv - ; EXPAND-NEXT: $z0 = frame-destroy LDR_ZXI $sp, 11 :: (load (s128) from %stack.2) - ; EXPAND-NEXT: $p4 = frame-destroy CMPNE_PPzZI_B $p4, $z0, 0, implicit-def $nzcv, implicit-def $nzcv - ; EXPAND-NEXT: $sp = frame-destroy ADDVL_XXI $sp, 12, implicit $vg - ; EXPAND-NEXT: $fp = frame-destroy LDRXui $sp, 128 :: (load (s64) from %stack.14) - ; EXPAND-NEXT: $sp = frame-destroy ADDXri $sp, 1040, 0 - ; EXPAND-NEXT: RET undef $lr, implicit $p0, implicit $p1, implicit $p2, implicit $p3 - $p15 = IMPLICIT_DEF - %1:ppr = COPY $p15 - - $p0 = IMPLICIT_DEF - $p1 = IMPLICIT_DEF - $p2 = IMPLICIT_DEF - $p3 = IMPLICIT_DEF - $p4 = IMPLICIT_DEF - $p5 = IMPLICIT_DEF - $p6 = IMPLICIT_DEF - $p7 = IMPLICIT_DEF - $p8 = IMPLICIT_DEF - $p9 = IMPLICIT_DEF - $p10 = IMPLICIT_DEF - $p11 = IMPLICIT_DEF - $p12 = IMPLICIT_DEF - $p13 = IMPLICIT_DEF - $p14 = IMPLICIT_DEF - $p15 = IMPLICIT_DEF - - $p15 = COPY %1 - - FAKE_USE implicit $p4, implicit $p5, implicit $p6, implicit $p7 - - RET_ReallyLR implicit $p0, implicit $p1, implicit $p2, implicit $p3 -... ---- -name: zpr_predicate_spill_p4_saved -tracksRegLiveness: true -stack: -liveins: - - { reg: '$p0' } - - { reg: '$p1' } - - { reg: '$p2' } - - { reg: '$p3' } -body: | - bb.0.entry: - liveins: $p0, $p1, $p2, $p3 - - ; CHECK-LABEL: name: zpr_predicate_spill_p4_saved - ; CHECK: liveins: $p0, $p1, $p2, $p3 - ; CHECK-NEXT: {{ $}} - ; - ; CHECK-NEXT: $p8 = IMPLICIT_DEF - ; - ; CHECK-NEXT: RET_ReallyLR implicit $p0, implicit $p1, implicit $p2, implicit $p3 - - ; EXPAND-LABEL: name: zpr_predicate_spill_p4_saved - ; EXPAND: liveins: $p0, $p1, $p2, $p3, $fp, $p8, $p4 - ; EXPAND-NEXT: {{ $}} - ; EXPAND-NEXT: early-clobber $sp = frame-setup STRXpre killed $fp, $sp, -16 :: (store (s64) into %stack.2) - ; EXPAND-NEXT: $sp = frame-setup ADDVL_XXI $sp, -2, implicit $vg - ; EXPAND-NEXT: $z0 = frame-setup CPY_ZPzI_B killed $p8, 1, 0 - ; EXPAND-NEXT: frame-setup STR_ZXI $z0, $sp, 0 :: (store (s128) into %stack.1) - ; EXPAND-NEXT: $z0 = frame-setup CPY_ZPzI_B killed $p4, 1, 0 - ; EXPAND-NEXT: frame-setup STR_ZXI $z0, $sp, 1 :: (store (s128) into %stack.0) - ; - ; EXPAND-NEXT: $p8 = IMPLICIT_DEF - ; - ; EXPAND-NEXT: $z0 = frame-destroy LDR_ZXI $sp, 0 :: (load (s128) from %stack.1) - ; EXPAND-NEXT: $p4 = frame-destroy PTRUE_B 31, implicit $vg - ; EXPAND-NEXT: $p8 = frame-destroy CMPNE_PPzZI_B $p4, $z0, 0, implicit-def $nzcv, implicit-def $nzcv - ; EXPAND-NEXT: $z0 = frame-destroy LDR_ZXI $sp, 1 :: (load (s128) from %stack.0) - ; EXPAND-NEXT: $p4 = frame-destroy CMPNE_PPzZI_B $p4, $z0, 0, implicit-def $nzcv, implicit-def $nzcv - ; EXPAND-NEXT: $sp = frame-destroy ADDVL_XXI $sp, 2, implicit $vg - ; EXPAND-NEXT: early-clobber $sp, $fp = frame-destroy LDRXpost $sp, 16 :: (load (s64) from %stack.2) - ; EXPAND-NEXT: RET undef $lr, implicit $p0, implicit $p1, implicit $p2, implicit $p3 - - ; If we spill a register above p8, p4 must also be saved, so we can guarantee - ; there will be a register (in the range p0-p7) to use for the cmpne reload. - $p8 = IMPLICIT_DEF - - RET_ReallyLR implicit $p0, implicit $p1, implicit $p2, implicit $p3 -...
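The deleted test above exercises the spill/reload idiom for SVE predicate registers: the predicate is expanded into a byte vector (CPY_ZPzI_B writes 1 to every byte lane where the predicate is true), stored through a full ZPR stack slot, and later rebuilt by comparing the reloaded vector against zero under an all-true governing predicate (PTRUE_B followed by CMPNE_PPzZI_B). A minimal AArch64 SVE assembly sketch of that round trip, for illustration only (the register choices and stack slot here are hypothetical, not taken from the test):

    mov   z0.b, p0/z, #1            // expand p0: each byte is 1 where p0 is true
    str   z0, [sp]                  // spill the predicate through a vector slot
    // ... p0 is clobbered here ...
    ldr   z0, [sp]                  // reload the expanded bytes
    ptrue p1.b                      // all-true governing predicate
    cmpne p0.b, p1/z, z0.b, #0      // rebuild p0: true where the byte is nonzero

The zpr_predicate_spill_p4_saved checks above follow from this: the cmpne reload needs a governing predicate in p0-p7, so spilling a high predicate register (p8 in that test) forces p4 to be saved as well.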
diff --git a/llvm/test/CodeGen/AArch64/ssve-stack-hazard-remarks.ll b/llvm/test/CodeGen/AArch64/ssve-stack-hazard-remarks.ll index 01e3d3a..c0a2943 100644 --- a/llvm/test/CodeGen/AArch64/ssve-stack-hazard-remarks.ll +++ b/llvm/test/CodeGen/AArch64/ssve-stack-hazard-remarks.ll @@ -1,7 +1,5 @@ ; RUN: llc < %s -mtriple=aarch64 -mattr=+sve2 -pass-remarks-analysis=sme -aarch64-stack-hazard-remark-size=64 -o /dev/null < %s 2>&1 | FileCheck %s --check-prefixes=CHECK ; RUN: llc < %s -mtriple=aarch64 -mattr=+sve2 -pass-remarks-analysis=sme -aarch64-stack-hazard-size=1024 -o /dev/null < %s 2>&1 | FileCheck %s --check-prefixes=CHECK-PADDING -; RUN: llc < %s -mtriple=aarch64 -mattr=+sve2 -pass-remarks-analysis=sme -aarch64-enable-zpr-predicate-spills -aarch64-stack-hazard-remark-size=64 -o /dev/null < %s 2>&1 | FileCheck %s --check-prefixes=CHECK-ZPR-PRED-SPILLS -; RUN: llc < %s -mtriple=aarch64 -mattr=+sve2 -pass-remarks-analysis=sme -aarch64-enable-zpr-predicate-spills -aarch64-stack-hazard-size=1024 -o /dev/null < %s 2>&1 | FileCheck %s --check-prefixes=CHECK-ZPR-PRED-SPILLS-WITH-PADDING ; Don't emit remarks for non-streaming functions. define float @csr_x20_stackargs_notsc(float %a, float %b, float %c, float %d, float %e, float %f, float %g, float %h, float %i) { @@ -69,16 +67,11 @@ entry: ; SVE calling conventions ; Padding is placed between predicate and fpr/zpr register spills, so only emit remarks when hazard padding is off. -; Note: The -aarch64-enable-zpr-predicate-spills option is deprecated (and will be removed soon). define i32 @svecc_call(<4 x i16> %P0, ptr %P1, i32 %P2, <vscale x 16 x i8> %P3, i16 %P4) #2 { ; CHECK: remark: <unknown>:0:0: stack hazard in 'svecc_call': PPR stack object at [SP-64-258 * vscale] is too close to FPR stack object at [SP-64-256 * vscale] ; CHECK: remark: <unknown>:0:0: stack hazard in 'svecc_call': FPR stack object at [SP-64-16 * vscale] is too close to GPR stack object at [SP-64] ; CHECK-PADDING-NOT: remark: <unknown>:0:0: stack hazard in 'svecc_call': -; CHECK-ZPR-PRED-SPILLS-NOT: <unknown>:0:0: stack hazard in 'svecc_call': PPR stack object at {{.*}} is too close to FPR stack object -; CHECK-ZPR-PRED-SPILLS: <unknown>:0:0: stack hazard in 'svecc_call': FPR stack object at [SP-64-16 * vscale] is too close to GPR stack object at [SP-64] -; CHECK-ZPR-PRED-SPILLS-WITH-PADDING-NOT: <unknown>:0:0: stack hazard in 'svecc_call': PPR stack object at {{.*}} is too close to FPR stack object -; CHECK-ZPR-PRED-SPILLS-WITH-PADDING-NOT: <unknown>:0:0: stack hazard in 'svecc_call': FPR stack object at {{.*}} is too close to GPR stack object entry: tail call void asm sideeffect "", "~{x0},~{x28},~{x27},~{x3}"() #2 %call = call ptr @memset(ptr noundef nonnull %P1, i32 noundef 45, i32 noundef 37) @@ -89,10 +82,6 @@ define i32 @svecc_alloca_call(<4 x i16> %P0, ptr %P1, i32 %P2, <vscale x 16 x i8 ; CHECK: remark: <unknown>:0:0: stack hazard in 'svecc_alloca_call': PPR stack object at [SP-64-258 * vscale] is too close to FPR stack object at [SP-64-256 * vscale] ; CHECK: remark: <unknown>:0:0: stack hazard in 'svecc_alloca_call': FPR stack object at [SP-64-16 * vscale] is too close to GPR stack object at [SP-64] ; CHECK-PADDING-NOT: remark: <unknown>:0:0: stack hazard in 'svecc_alloca_call': -; CHECK-ZPR-PRED-SPILLS-NOT: <unknown>:0:0: stack hazard in 'svecc_call': PPR stack object at {{.*}} is too close to FPR stack object -; CHECK-ZPR-PRED-SPILLS: <unknown>:0:0: stack hazard in 'svecc_alloca_call': FPR stack object at [SP-64-16 * vscale] is too close to GPR stack object at 
[SP-64] -; CHECK-ZPR-PRED-SPILLS-WITH-PADDING-NOT: <unknown>:0:0: stack hazard in 'svecc_alloca_call': PPR stack object at {{.*}} is too close to FPR stack object -; CHECK-ZPR-PRED-SPILLS-WITH-PADDING-NOT: <unknown>:0:0: stack hazard in 'svecc_alloca_call': FPR stack object at {{.*}} is too close to GPR stack object entry: tail call void asm sideeffect "", "~{x0},~{x28},~{x27},~{x3}"() #2 %0 = alloca [37 x i8], align 16 diff --git a/llvm/test/CodeGen/AMDGPU/agpr-copy-no-free-registers.ll b/llvm/test/CodeGen/AMDGPU/agpr-copy-no-free-registers.ll index 9e24023..ebbeab9 100644 --- a/llvm/test/CodeGen/AMDGPU/agpr-copy-no-free-registers.ll +++ b/llvm/test/CodeGen/AMDGPU/agpr-copy-no-free-registers.ll @@ -146,9 +146,9 @@ define void @no_free_vgprs_at_agpr_to_agpr_copy(float %v0, float %v1) #0 { ; GFX908-NEXT: ;;#ASMSTART ; GFX908-NEXT: ; copy ; GFX908-NEXT: ;;#ASMEND -; GFX908-NEXT: v_accvgpr_read_b32 v32, a2 +; GFX908-NEXT: v_accvgpr_read_b32 v39, a2 ; GFX908-NEXT: s_nop 1 -; GFX908-NEXT: v_accvgpr_write_b32 a3, v32 +; GFX908-NEXT: v_accvgpr_write_b32 a3, v39 ; GFX908-NEXT: ;;#ASMSTART ; GFX908-NEXT: ; use a3 v[0:31] ; GFX908-NEXT: ;;#ASMEND @@ -437,9 +437,9 @@ define void @v32_asm_def_use(float %v0, float %v1) #4 { ; GFX908-NEXT: ; copy ; GFX908-NEXT: ;;#ASMEND ; GFX908-NEXT: s_nop 7 -; GFX908-NEXT: v_accvgpr_read_b32 v33, a2 +; GFX908-NEXT: v_accvgpr_read_b32 v35, a2 ; GFX908-NEXT: s_nop 1 -; GFX908-NEXT: v_accvgpr_write_b32 a3, v33 +; GFX908-NEXT: v_accvgpr_write_b32 a3, v35 ; GFX908-NEXT: ;;#ASMSTART ; GFX908-NEXT: ; use a3 v[0:31] ; GFX908-NEXT: ;;#ASMEND @@ -1045,9 +1045,9 @@ define void @no_free_vgprs_at_sgpr_to_agpr_copy(float %v0, float %v1) #0 { ; GFX908-NEXT: ;;#ASMSTART ; GFX908-NEXT: ; copy ; GFX908-NEXT: ;;#ASMEND -; GFX908-NEXT: v_accvgpr_read_b32 v32, a2 +; GFX908-NEXT: v_accvgpr_read_b32 v39, a2 ; GFX908-NEXT: s_nop 1 -; GFX908-NEXT: v_accvgpr_write_b32 a3, v32 +; GFX908-NEXT: v_accvgpr_write_b32 a3, v39 ; GFX908-NEXT: ;;#ASMSTART ; GFX908-NEXT: ; use a3 v[0:31] ; GFX908-NEXT: ;;#ASMEND diff --git a/llvm/test/CodeGen/AMDGPU/agpr-copy-propagation.mir b/llvm/test/CodeGen/AMDGPU/agpr-copy-propagation.mir index a42cf43..7e82382d 100644 --- a/llvm/test/CodeGen/AMDGPU/agpr-copy-propagation.mir +++ b/llvm/test/CodeGen/AMDGPU/agpr-copy-propagation.mir @@ -40,8 +40,8 @@ body: | ; GFX908: liveins: $agpr0 ; GFX908-NEXT: {{ $}} ; GFX908-NEXT: renamable $vgpr0 = COPY renamable $agpr0, implicit $exec - ; GFX908-NEXT: renamable $agpr1 = COPY renamable $vgpr0, implicit $exec - ; GFX908-NEXT: renamable $agpr2 = COPY renamable $vgpr0, implicit $exec + ; GFX908-NEXT: renamable $agpr1 = COPY $agpr0, implicit $exec + ; GFX908-NEXT: renamable $agpr2 = COPY $agpr0, implicit $exec ; GFX908-NEXT: S_ENDPGM 0, implicit $vgpr0, implicit $agpr1, implicit $agpr2 ; ; GFX90A-LABEL: name: do_not_propagate_agpr_to_agpr diff --git a/llvm/test/CodeGen/AMDGPU/elf-header-flags-sramecc.ll b/llvm/test/CodeGen/AMDGPU/elf-header-flags-sramecc.ll index c4479b3..e3bc516 100644 --- a/llvm/test/CodeGen/AMDGPU/elf-header-flags-sramecc.ll +++ b/llvm/test/CodeGen/AMDGPU/elf-header-flags-sramecc.ll @@ -15,6 +15,9 @@ ; RUN: llc -filetype=obj -mtriple=amdgcn -mcpu=gfx950 < %s | llvm-readobj --file-header - | FileCheck --check-prefix=SRAM-ECC-GFX950 %s ; RUN: llc -filetype=obj -mtriple=amdgcn -mcpu=gfx950 -mattr=+sramecc < %s | llvm-readobj --file-header - | FileCheck --check-prefix=SRAM-ECC-GFX950 %s +; RUN: llc -filetype=obj -mtriple=amdgcn -mcpu=gfx1250 < %s | llvm-readobj --file-header - | FileCheck 
--check-prefix=SRAM-ECC-GFX1250 %s +; RUN: llc -filetype=obj -mtriple=amdgcn -mcpu=gfx1250 -mattr=+sramecc < %s | llvm-readobj --file-header - | FileCheck --check-prefix=SRAM-ECC-GFX1250 %s + ; NO-SRAM-ECC-GFX906: Flags [ ; NO-SRAM-ECC-GFX906-NEXT: EF_AMDGPU_FEATURE_XNACK_V3 (0x100) ; NO-SRAM-ECC-GFX906-NEXT: EF_AMDGPU_MACH_AMDGCN_GFX906 (0x2F) @@ -52,6 +55,11 @@ ; SRAM-ECC-GFX950: EF_AMDGPU_MACH_AMDGCN_GFX950 (0x4F) ; SRAM-ECC-GFX950: ] +; SRAM-ECC-GFX1250: Flags [ +; SRAM-ECC-GFX1250: EF_AMDGPU_FEATURE_SRAMECC_V3 (0x200) +; SRAM-ECC-GFX1250: EF_AMDGPU_MACH_AMDGCN_GFX1250 (0x49) +; SRAM-ECC-GFX1250: ] + define amdgpu_kernel void @elf_header() { ret void } diff --git a/llvm/test/CodeGen/AMDGPU/llvm.amdgcn.mfma.form.ll b/llvm/test/CodeGen/AMDGPU/llvm.amdgcn.mfma.form.ll index 87a7c2e..cc4cc8e 100644 --- a/llvm/test/CodeGen/AMDGPU/llvm.amdgcn.mfma.form.ll +++ b/llvm/test/CodeGen/AMDGPU/llvm.amdgcn.mfma.form.ll @@ -72,5 +72,206 @@ define <4 x float> @request_no_agpr(<8 x half> %arg0, <8 x half> %arg1, <4 x flo ret <4 x float> %result } +; Make sure this selects the VGPR form, if AGPRs available, but not +; enough. +define amdgpu_kernel void @not_enough_agprs(ptr addrspace(1) %arg) #2 { +; HEURRC-LABEL: not_enough_agprs: +; HEURRC: ; %bb.0: ; %bb +; HEURRC-NEXT: s_load_dwordx2 s[34:35], s[4:5], 0x24 +; HEURRC-NEXT: v_mov_b32_e32 v33, 1.0 +; HEURRC-NEXT: v_mov_b32_e32 v34, 2.0 +; HEURRC-NEXT: v_mov_b32_e32 v32, 0 +; HEURRC-NEXT: s_waitcnt lgkmcnt(0) +; HEURRC-NEXT: s_load_dwordx16 s[16:31], s[34:35], 0x0 +; HEURRC-NEXT: s_load_dwordx16 s[0:15], s[34:35], 0x40 +; HEURRC-NEXT: s_waitcnt lgkmcnt(0) +; HEURRC-NEXT: v_mov_b32_e32 v0, s16 +; HEURRC-NEXT: v_mov_b32_e32 v1, s17 +; HEURRC-NEXT: v_mov_b32_e32 v2, s18 +; HEURRC-NEXT: v_mov_b32_e32 v3, s19 +; HEURRC-NEXT: v_mov_b32_e32 v4, s20 +; HEURRC-NEXT: v_mov_b32_e32 v5, s21 +; HEURRC-NEXT: v_mov_b32_e32 v6, s22 +; HEURRC-NEXT: v_mov_b32_e32 v7, s23 +; HEURRC-NEXT: v_mov_b32_e32 v8, s24 +; HEURRC-NEXT: v_mov_b32_e32 v9, s25 +; HEURRC-NEXT: v_mov_b32_e32 v10, s26 +; HEURRC-NEXT: v_mov_b32_e32 v11, s27 +; HEURRC-NEXT: v_mov_b32_e32 v12, s28 +; HEURRC-NEXT: v_mov_b32_e32 v13, s29 +; HEURRC-NEXT: v_mov_b32_e32 v14, s30 +; HEURRC-NEXT: v_mov_b32_e32 v15, s31 +; HEURRC-NEXT: v_mov_b32_e32 v16, s0 +; HEURRC-NEXT: v_mov_b32_e32 v17, s1 +; HEURRC-NEXT: v_mov_b32_e32 v18, s2 +; HEURRC-NEXT: v_mov_b32_e32 v19, s3 +; HEURRC-NEXT: v_mov_b32_e32 v20, s4 +; HEURRC-NEXT: v_mov_b32_e32 v21, s5 +; HEURRC-NEXT: v_mov_b32_e32 v22, s6 +; HEURRC-NEXT: v_mov_b32_e32 v23, s7 +; HEURRC-NEXT: v_mov_b32_e32 v24, s8 +; HEURRC-NEXT: v_mov_b32_e32 v25, s9 +; HEURRC-NEXT: v_mov_b32_e32 v26, s10 +; HEURRC-NEXT: v_mov_b32_e32 v27, s11 +; HEURRC-NEXT: v_mov_b32_e32 v28, s12 +; HEURRC-NEXT: v_mov_b32_e32 v29, s13 +; HEURRC-NEXT: v_mov_b32_e32 v30, s14 +; HEURRC-NEXT: v_mov_b32_e32 v31, s15 +; HEURRC-NEXT: s_nop 1 +; HEURRC-NEXT: v_mfma_f32_32x32x1_2b_f32 v[0:31], v33, v34, v[0:31] cbsz:1 abid:2 blgp:3 +; HEURRC-NEXT: s_nop 15 +; HEURRC-NEXT: s_nop 1 +; HEURRC-NEXT: global_store_dwordx4 v32, v[24:27], s[34:35] offset:96 +; HEURRC-NEXT: global_store_dwordx4 v32, v[28:31], s[34:35] offset:112 +; HEURRC-NEXT: global_store_dwordx4 v32, v[16:19], s[34:35] offset:64 +; HEURRC-NEXT: global_store_dwordx4 v32, v[20:23], s[34:35] offset:80 +; HEURRC-NEXT: global_store_dwordx4 v32, v[8:11], s[34:35] offset:32 +; HEURRC-NEXT: global_store_dwordx4 v32, v[12:15], s[34:35] offset:48 +; HEURRC-NEXT: global_store_dwordx4 v32, v[0:3], s[34:35] +; HEURRC-NEXT: global_store_dwordx4 v32, v[4:7], s[34:35] 
offset:16 +; HEURRC-NEXT: s_endpgm +; +; VGPRRC-LABEL: not_enough_agprs: +; VGPRRC: ; %bb.0: ; %bb +; VGPRRC-NEXT: s_load_dwordx2 s[34:35], s[4:5], 0x24 +; VGPRRC-NEXT: v_mov_b32_e32 v33, 1.0 +; VGPRRC-NEXT: v_mov_b32_e32 v34, 2.0 +; VGPRRC-NEXT: v_mov_b32_e32 v32, 0 +; VGPRRC-NEXT: s_waitcnt lgkmcnt(0) +; VGPRRC-NEXT: s_load_dwordx16 s[16:31], s[34:35], 0x0 +; VGPRRC-NEXT: s_load_dwordx16 s[0:15], s[34:35], 0x40 +; VGPRRC-NEXT: s_waitcnt lgkmcnt(0) +; VGPRRC-NEXT: v_mov_b32_e32 v0, s16 +; VGPRRC-NEXT: v_mov_b32_e32 v1, s17 +; VGPRRC-NEXT: v_mov_b32_e32 v2, s18 +; VGPRRC-NEXT: v_mov_b32_e32 v3, s19 +; VGPRRC-NEXT: v_mov_b32_e32 v4, s20 +; VGPRRC-NEXT: v_mov_b32_e32 v5, s21 +; VGPRRC-NEXT: v_mov_b32_e32 v6, s22 +; VGPRRC-NEXT: v_mov_b32_e32 v7, s23 +; VGPRRC-NEXT: v_mov_b32_e32 v8, s24 +; VGPRRC-NEXT: v_mov_b32_e32 v9, s25 +; VGPRRC-NEXT: v_mov_b32_e32 v10, s26 +; VGPRRC-NEXT: v_mov_b32_e32 v11, s27 +; VGPRRC-NEXT: v_mov_b32_e32 v12, s28 +; VGPRRC-NEXT: v_mov_b32_e32 v13, s29 +; VGPRRC-NEXT: v_mov_b32_e32 v14, s30 +; VGPRRC-NEXT: v_mov_b32_e32 v15, s31 +; VGPRRC-NEXT: v_mov_b32_e32 v16, s0 +; VGPRRC-NEXT: v_mov_b32_e32 v17, s1 +; VGPRRC-NEXT: v_mov_b32_e32 v18, s2 +; VGPRRC-NEXT: v_mov_b32_e32 v19, s3 +; VGPRRC-NEXT: v_mov_b32_e32 v20, s4 +; VGPRRC-NEXT: v_mov_b32_e32 v21, s5 +; VGPRRC-NEXT: v_mov_b32_e32 v22, s6 +; VGPRRC-NEXT: v_mov_b32_e32 v23, s7 +; VGPRRC-NEXT: v_mov_b32_e32 v24, s8 +; VGPRRC-NEXT: v_mov_b32_e32 v25, s9 +; VGPRRC-NEXT: v_mov_b32_e32 v26, s10 +; VGPRRC-NEXT: v_mov_b32_e32 v27, s11 +; VGPRRC-NEXT: v_mov_b32_e32 v28, s12 +; VGPRRC-NEXT: v_mov_b32_e32 v29, s13 +; VGPRRC-NEXT: v_mov_b32_e32 v30, s14 +; VGPRRC-NEXT: v_mov_b32_e32 v31, s15 +; VGPRRC-NEXT: s_nop 1 +; VGPRRC-NEXT: v_mfma_f32_32x32x1_2b_f32 v[0:31], v33, v34, v[0:31] cbsz:1 abid:2 blgp:3 +; VGPRRC-NEXT: s_nop 15 +; VGPRRC-NEXT: s_nop 1 +; VGPRRC-NEXT: global_store_dwordx4 v32, v[24:27], s[34:35] offset:96 +; VGPRRC-NEXT: global_store_dwordx4 v32, v[28:31], s[34:35] offset:112 +; VGPRRC-NEXT: global_store_dwordx4 v32, v[16:19], s[34:35] offset:64 +; VGPRRC-NEXT: global_store_dwordx4 v32, v[20:23], s[34:35] offset:80 +; VGPRRC-NEXT: global_store_dwordx4 v32, v[8:11], s[34:35] offset:32 +; VGPRRC-NEXT: global_store_dwordx4 v32, v[12:15], s[34:35] offset:48 +; VGPRRC-NEXT: global_store_dwordx4 v32, v[0:3], s[34:35] +; VGPRRC-NEXT: global_store_dwordx4 v32, v[4:7], s[34:35] offset:16 +; VGPRRC-NEXT: s_endpgm +bb: + %in.1 = load <32 x float>, ptr addrspace(1) %arg, align 128 + %mai.1 = call <32 x float> @llvm.amdgcn.mfma.f32.32x32x1f32(float 1.000000e+00, float 2.000000e+00, <32 x float> %in.1, i32 1, i32 2, i32 3) + store <32 x float> %mai.1, ptr addrspace(1) %arg, align 128 + ret void +} + +define <16 x float> @mfma_scale_respect_flag(<8 x i32> %arg0, <8 x i32> %arg1, <16 x float> %arg2, i32 %scale0, i32 %scale1) #2 { +; HEURRC-LABEL: mfma_scale_respect_flag: +; HEURRC: ; %bb.0: +; HEURRC-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0) +; HEURRC-NEXT: scratch_load_dword a15, off, s32 +; HEURRC-NEXT: scratch_load_dword v31, off, s32 offset:8 +; HEURRC-NEXT: scratch_load_dword v32, off, s32 offset:4 +; HEURRC-NEXT: v_accvgpr_write_b32 a0, v16 +; HEURRC-NEXT: v_accvgpr_write_b32 a1, v17 +; HEURRC-NEXT: v_accvgpr_write_b32 a2, v18 +; HEURRC-NEXT: v_accvgpr_write_b32 a3, v19 +; HEURRC-NEXT: v_accvgpr_write_b32 a4, v20 +; HEURRC-NEXT: v_accvgpr_write_b32 a5, v21 +; HEURRC-NEXT: v_accvgpr_write_b32 a6, v22 +; HEURRC-NEXT: v_accvgpr_write_b32 a7, v23 +; HEURRC-NEXT: v_accvgpr_write_b32 a8, v24 +; HEURRC-NEXT: 
v_accvgpr_write_b32 a9, v25 +; HEURRC-NEXT: v_accvgpr_write_b32 a10, v26 +; HEURRC-NEXT: v_accvgpr_write_b32 a11, v27 +; HEURRC-NEXT: v_accvgpr_write_b32 a12, v28 +; HEURRC-NEXT: v_accvgpr_write_b32 a13, v29 +; HEURRC-NEXT: v_accvgpr_write_b32 a14, v30 +; HEURRC-NEXT: s_waitcnt vmcnt(0) +; HEURRC-NEXT: s_nop 0 +; HEURRC-NEXT: v_mfma_scale_f32_32x32x64_f8f6f4 a[0:15], v[0:7], v[8:15], a[0:15], v32, v31 op_sel_hi:[0,0,0] +; HEURRC-NEXT: s_nop 15 +; HEURRC-NEXT: s_nop 3 +; HEURRC-NEXT: v_accvgpr_read_b32 v0, a0 +; HEURRC-NEXT: v_accvgpr_read_b32 v1, a1 +; HEURRC-NEXT: v_accvgpr_read_b32 v2, a2 +; HEURRC-NEXT: v_accvgpr_read_b32 v3, a3 +; HEURRC-NEXT: v_accvgpr_read_b32 v4, a4 +; HEURRC-NEXT: v_accvgpr_read_b32 v5, a5 +; HEURRC-NEXT: v_accvgpr_read_b32 v6, a6 +; HEURRC-NEXT: v_accvgpr_read_b32 v7, a7 +; HEURRC-NEXT: v_accvgpr_read_b32 v8, a8 +; HEURRC-NEXT: v_accvgpr_read_b32 v9, a9 +; HEURRC-NEXT: v_accvgpr_read_b32 v10, a10 +; HEURRC-NEXT: v_accvgpr_read_b32 v11, a11 +; HEURRC-NEXT: v_accvgpr_read_b32 v12, a12 +; HEURRC-NEXT: v_accvgpr_read_b32 v13, a13 +; HEURRC-NEXT: v_accvgpr_read_b32 v14, a14 +; HEURRC-NEXT: v_accvgpr_read_b32 v15, a15 +; HEURRC-NEXT: s_setpc_b64 s[30:31] +; +; VGPRRC-LABEL: mfma_scale_respect_flag: +; VGPRRC: ; %bb.0: +; VGPRRC-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0) +; VGPRRC-NEXT: scratch_load_dword v31, off, s32 +; VGPRRC-NEXT: scratch_load_dword v32, off, s32 offset:8 +; VGPRRC-NEXT: scratch_load_dword v33, off, s32 offset:4 +; VGPRRC-NEXT: s_waitcnt vmcnt(0) +; VGPRRC-NEXT: v_mfma_scale_f32_32x32x64_f8f6f4 v[16:31], v[0:7], v[8:15], v[16:31], v33, v32 op_sel_hi:[0,0,0] +; VGPRRC-NEXT: s_nop 15 +; VGPRRC-NEXT: s_nop 3 +; VGPRRC-NEXT: v_mov_b32_e32 v0, v16 +; VGPRRC-NEXT: v_mov_b32_e32 v1, v17 +; VGPRRC-NEXT: v_mov_b32_e32 v2, v18 +; VGPRRC-NEXT: v_mov_b32_e32 v3, v19 +; VGPRRC-NEXT: v_mov_b32_e32 v4, v20 +; VGPRRC-NEXT: v_mov_b32_e32 v5, v21 +; VGPRRC-NEXT: v_mov_b32_e32 v6, v22 +; VGPRRC-NEXT: v_mov_b32_e32 v7, v23 +; VGPRRC-NEXT: v_mov_b32_e32 v8, v24 +; VGPRRC-NEXT: v_mov_b32_e32 v9, v25 +; VGPRRC-NEXT: v_mov_b32_e32 v10, v26 +; VGPRRC-NEXT: v_mov_b32_e32 v11, v27 +; VGPRRC-NEXT: v_mov_b32_e32 v12, v28 +; VGPRRC-NEXT: v_mov_b32_e32 v13, v29 +; VGPRRC-NEXT: v_mov_b32_e32 v14, v30 +; VGPRRC-NEXT: v_mov_b32_e32 v15, v31 +; VGPRRC-NEXT: s_setpc_b64 s[30:31] + %result = call <16 x float> @llvm.amdgcn.mfma.scale.f32.32x32x64.f8f6f4.v8i32.v8i32(<8 x i32> %arg0, <8 x i32> %arg1, <16 x float> %arg2, + i32 0, ; cbsz + i32 0, ; blgp + i32 0, i32 %scale0, i32 0, i32 %scale1) + ret <16 x float> %result +} + attributes #0 = { "amdgpu-agpr-alloc"="32,256" } attributes #1 = { "amdgpu-agpr-alloc"="0,0" } +attributes #2 = { nounwind "amdgpu-agpr-alloc"="20" } diff --git a/llvm/test/CodeGen/AMDGPU/llvm.amdgcn.mfma.gfx90a.ll b/llvm/test/CodeGen/AMDGPU/llvm.amdgcn.mfma.gfx90a.ll index 5ab8706..22bc62a 100644 --- a/llvm/test/CodeGen/AMDGPU/llvm.amdgcn.mfma.gfx90a.ll +++ b/llvm/test/CodeGen/AMDGPU/llvm.amdgcn.mfma.gfx90a.ll @@ -726,12 +726,12 @@ define amdgpu_kernel void @test_mfma_f64_4x4x4f64(ptr addrspace(1) %arg, double ; GFX90A-VGPR-NEXT: s_load_dwordx4 s[0:3], s[4:5], 0x24 ; GFX90A-VGPR-NEXT: s_load_dwordx2 s[6:7], s[4:5], 0x34 ; GFX90A-VGPR-NEXT: s_waitcnt lgkmcnt(0) -; GFX90A-VGPR-NEXT: v_pk_mov_b32 v[0:1], s[2:3], s[2:3] op_sel:[0,1] -; GFX90A-VGPR-NEXT: v_pk_mov_b32 v[2:3], s[6:7], s[6:7] op_sel:[0,1] +; GFX90A-VGPR-NEXT: v_pk_mov_b32 v[2:3], s[2:3], s[2:3] op_sel:[0,1] +; GFX90A-VGPR-NEXT: v_pk_mov_b32 v[4:5], s[6:7], s[6:7] op_sel:[0,1] ; GFX90A-VGPR-NEXT: s_nop 1 -; 
GFX90A-VGPR-NEXT: v_mfma_f64_4x4x4f64 v[4:5], v[0:1], v[2:3], 0 +; GFX90A-VGPR-NEXT: v_mfma_f64_4x4x4f64 v[0:1], v[2:3], v[4:5], 0 ; GFX90A-VGPR-NEXT: s_nop 3 -; GFX90A-VGPR-NEXT: v_mfma_f64_4x4x4f64 v[0:1], v[0:1], v[2:3], v[4:5] cbsz:1 abid:2 blgp:3 +; GFX90A-VGPR-NEXT: v_mfma_f64_4x4x4f64 v[0:1], v[2:3], v[4:5], v[0:1] cbsz:1 abid:2 blgp:3 ; GFX90A-VGPR-NEXT: v_mov_b32_e32 v2, 0 ; GFX90A-VGPR-NEXT: s_nop 7 ; GFX90A-VGPR-NEXT: global_store_dwordx2 v2, v[0:1], s[0:1] @@ -742,12 +742,12 @@ define amdgpu_kernel void @test_mfma_f64_4x4x4f64(ptr addrspace(1) %arg, double ; GFX942-VGPR-NEXT: s_load_dwordx4 s[0:3], s[4:5], 0x24 ; GFX942-VGPR-NEXT: s_load_dwordx2 s[6:7], s[4:5], 0x34 ; GFX942-VGPR-NEXT: s_waitcnt lgkmcnt(0) -; GFX942-VGPR-NEXT: v_mov_b64_e32 v[0:1], s[2:3] -; GFX942-VGPR-NEXT: v_mov_b64_e32 v[2:3], s[6:7] +; GFX942-VGPR-NEXT: v_mov_b64_e32 v[2:3], s[2:3] +; GFX942-VGPR-NEXT: v_mov_b64_e32 v[4:5], s[6:7] ; GFX942-VGPR-NEXT: s_nop 1 -; GFX942-VGPR-NEXT: v_mfma_f64_4x4x4_4b_f64 v[4:5], v[0:1], v[2:3], 0 +; GFX942-VGPR-NEXT: v_mfma_f64_4x4x4_4b_f64 v[0:1], v[2:3], v[4:5], 0 ; GFX942-VGPR-NEXT: s_nop 3 -; GFX942-VGPR-NEXT: v_mfma_f64_4x4x4_4b_f64 v[0:1], v[0:1], v[2:3], v[4:5] cbsz:1 abid:2 neg:[1,1,0] +; GFX942-VGPR-NEXT: v_mfma_f64_4x4x4_4b_f64 v[0:1], v[2:3], v[4:5], v[0:1] cbsz:1 abid:2 neg:[1,1,0] ; GFX942-VGPR-NEXT: v_mov_b32_e32 v2, 0 ; GFX942-VGPR-NEXT: s_nop 7 ; GFX942-VGPR-NEXT: global_store_dwordx2 v2, v[0:1], s[0:1] @@ -765,10 +765,10 @@ define amdgpu_kernel void @test_mfma_f64_16x16x4f64(ptr addrspace(1) %arg, doubl ; GFX90A-NEXT: s_load_dwordx4 s[8:11], s[4:5], 0x24 ; GFX90A-NEXT: s_load_dwordx2 s[12:13], s[4:5], 0x34 ; GFX90A-NEXT: s_waitcnt lgkmcnt(0) -; GFX90A-NEXT: v_mov_b32_e32 v2, s10 +; GFX90A-NEXT: v_mov_b32_e32 v0, s10 ; GFX90A-NEXT: s_load_dwordx8 s[0:7], s[8:9], 0x0 -; GFX90A-NEXT: v_mov_b32_e32 v3, s11 -; GFX90A-NEXT: v_pk_mov_b32 v[0:1], s[12:13], s[12:13] op_sel:[0,1] +; GFX90A-NEXT: v_mov_b32_e32 v1, s11 +; GFX90A-NEXT: v_pk_mov_b32 v[2:3], s[12:13], s[12:13] op_sel:[0,1] ; GFX90A-NEXT: s_waitcnt lgkmcnt(0) ; GFX90A-NEXT: v_accvgpr_write_b32 a0, s0 ; GFX90A-NEXT: v_accvgpr_write_b32 a1, s1 @@ -779,7 +779,7 @@ define amdgpu_kernel void @test_mfma_f64_16x16x4f64(ptr addrspace(1) %arg, doubl ; GFX90A-NEXT: v_accvgpr_write_b32 a6, s6 ; GFX90A-NEXT: v_accvgpr_write_b32 a7, s7 ; GFX90A-NEXT: s_nop 1 -; GFX90A-NEXT: v_mfma_f64_16x16x4f64 a[0:7], v[2:3], v[0:1], a[0:7] cbsz:1 abid:2 blgp:3 +; GFX90A-NEXT: v_mfma_f64_16x16x4f64 a[0:7], v[0:1], v[2:3], a[0:7] cbsz:1 abid:2 blgp:3 ; GFX90A-NEXT: v_mov_b32_e32 v0, 0 ; GFX90A-NEXT: s_nop 15 ; GFX90A-NEXT: s_nop 0 @@ -792,10 +792,10 @@ define amdgpu_kernel void @test_mfma_f64_16x16x4f64(ptr addrspace(1) %arg, doubl ; GFX942-NEXT: s_load_dwordx4 s[8:11], s[4:5], 0x24 ; GFX942-NEXT: s_load_dwordx2 s[12:13], s[4:5], 0x34 ; GFX942-NEXT: s_waitcnt lgkmcnt(0) -; GFX942-NEXT: v_mov_b32_e32 v2, s10 +; GFX942-NEXT: v_mov_b32_e32 v0, s10 ; GFX942-NEXT: s_load_dwordx8 s[0:7], s[8:9], 0x0 -; GFX942-NEXT: v_mov_b32_e32 v3, s11 -; GFX942-NEXT: v_mov_b64_e32 v[0:1], s[12:13] +; GFX942-NEXT: v_mov_b32_e32 v1, s11 +; GFX942-NEXT: v_mov_b64_e32 v[2:3], s[12:13] ; GFX942-NEXT: s_waitcnt lgkmcnt(0) ; GFX942-NEXT: v_accvgpr_write_b32 a0, s0 ; GFX942-NEXT: v_accvgpr_write_b32 a1, s1 @@ -806,7 +806,7 @@ define amdgpu_kernel void @test_mfma_f64_16x16x4f64(ptr addrspace(1) %arg, doubl ; GFX942-NEXT: v_accvgpr_write_b32 a6, s6 ; GFX942-NEXT: v_accvgpr_write_b32 a7, s7 ; GFX942-NEXT: s_nop 1 -; GFX942-NEXT: v_mfma_f64_16x16x4_f64 a[0:7], v[2:3], 
v[0:1], a[0:7] cbsz:1 abid:2 neg:[1,1,0] +; GFX942-NEXT: v_mfma_f64_16x16x4_f64 a[0:7], v[0:1], v[2:3], a[0:7] cbsz:1 abid:2 neg:[1,1,0] ; GFX942-NEXT: v_mov_b32_e32 v0, 0 ; GFX942-NEXT: s_nop 15 ; GFX942-NEXT: s_nop 0 @@ -819,17 +819,17 @@ define amdgpu_kernel void @test_mfma_f64_16x16x4f64(ptr addrspace(1) %arg, doubl ; GFX90A-VGPR-NEXT: s_load_dwordx4 s[8:11], s[4:5], 0x24 ; GFX90A-VGPR-NEXT: s_load_dwordx2 s[12:13], s[4:5], 0x34 ; GFX90A-VGPR-NEXT: s_waitcnt lgkmcnt(0) -; GFX90A-VGPR-NEXT: v_mov_b32_e32 v10, s10 +; GFX90A-VGPR-NEXT: v_mov_b32_e32 v8, s10 ; GFX90A-VGPR-NEXT: s_load_dwordx8 s[0:7], s[8:9], 0x0 -; GFX90A-VGPR-NEXT: v_mov_b32_e32 v11, s11 -; GFX90A-VGPR-NEXT: v_pk_mov_b32 v[8:9], s[12:13], s[12:13] op_sel:[0,1] +; GFX90A-VGPR-NEXT: v_mov_b32_e32 v9, s11 +; GFX90A-VGPR-NEXT: v_pk_mov_b32 v[10:11], s[12:13], s[12:13] op_sel:[0,1] ; GFX90A-VGPR-NEXT: s_waitcnt lgkmcnt(0) ; GFX90A-VGPR-NEXT: v_pk_mov_b32 v[0:1], s[0:1], s[0:1] op_sel:[0,1] ; GFX90A-VGPR-NEXT: v_pk_mov_b32 v[2:3], s[2:3], s[2:3] op_sel:[0,1] ; GFX90A-VGPR-NEXT: v_pk_mov_b32 v[4:5], s[4:5], s[4:5] op_sel:[0,1] ; GFX90A-VGPR-NEXT: v_pk_mov_b32 v[6:7], s[6:7], s[6:7] op_sel:[0,1] ; GFX90A-VGPR-NEXT: s_nop 1 -; GFX90A-VGPR-NEXT: v_mfma_f64_16x16x4f64 v[0:7], v[10:11], v[8:9], v[0:7] cbsz:1 abid:2 blgp:3 +; GFX90A-VGPR-NEXT: v_mfma_f64_16x16x4f64 v[0:7], v[8:9], v[10:11], v[0:7] cbsz:1 abid:2 blgp:3 ; GFX90A-VGPR-NEXT: v_mov_b32_e32 v8, 0 ; GFX90A-VGPR-NEXT: s_nop 15 ; GFX90A-VGPR-NEXT: s_nop 0 @@ -842,17 +842,17 @@ define amdgpu_kernel void @test_mfma_f64_16x16x4f64(ptr addrspace(1) %arg, doubl ; GFX942-VGPR-NEXT: s_load_dwordx4 s[8:11], s[4:5], 0x24 ; GFX942-VGPR-NEXT: s_load_dwordx2 s[12:13], s[4:5], 0x34 ; GFX942-VGPR-NEXT: s_waitcnt lgkmcnt(0) -; GFX942-VGPR-NEXT: v_mov_b32_e32 v10, s10 +; GFX942-VGPR-NEXT: v_mov_b32_e32 v8, s10 ; GFX942-VGPR-NEXT: s_load_dwordx8 s[0:7], s[8:9], 0x0 -; GFX942-VGPR-NEXT: v_mov_b32_e32 v11, s11 -; GFX942-VGPR-NEXT: v_mov_b64_e32 v[8:9], s[12:13] +; GFX942-VGPR-NEXT: v_mov_b32_e32 v9, s11 +; GFX942-VGPR-NEXT: v_mov_b64_e32 v[10:11], s[12:13] ; GFX942-VGPR-NEXT: s_waitcnt lgkmcnt(0) ; GFX942-VGPR-NEXT: v_mov_b64_e32 v[0:1], s[0:1] ; GFX942-VGPR-NEXT: v_mov_b64_e32 v[2:3], s[2:3] ; GFX942-VGPR-NEXT: v_mov_b64_e32 v[4:5], s[4:5] ; GFX942-VGPR-NEXT: v_mov_b64_e32 v[6:7], s[6:7] ; GFX942-VGPR-NEXT: s_nop 1 -; GFX942-VGPR-NEXT: v_mfma_f64_16x16x4_f64 v[0:7], v[10:11], v[8:9], v[0:7] cbsz:1 abid:2 neg:[1,1,0] +; GFX942-VGPR-NEXT: v_mfma_f64_16x16x4_f64 v[0:7], v[8:9], v[10:11], v[0:7] cbsz:1 abid:2 neg:[1,1,0] ; GFX942-VGPR-NEXT: v_mov_b32_e32 v8, 0 ; GFX942-VGPR-NEXT: s_nop 15 ; GFX942-VGPR-NEXT: s_nop 0 @@ -1629,20 +1629,20 @@ define amdgpu_kernel void @test_mfma_f64_16x16x4f64_imm(ptr addrspace(1) %arg, d ; GFX90A-VGPR-NEXT: v_mov_b32_e32 v7, 0x3ff00000 ; GFX90A-VGPR-NEXT: v_mov_b32_e32 v2, v0 ; GFX90A-VGPR-NEXT: s_waitcnt lgkmcnt(0) -; GFX90A-VGPR-NEXT: v_mov_b32_e32 v12, s2 -; GFX90A-VGPR-NEXT: v_mov_b32_e32 v13, s3 +; GFX90A-VGPR-NEXT: v_mov_b32_e32 v10, s2 +; GFX90A-VGPR-NEXT: v_mov_b32_e32 v11, s3 ; GFX90A-VGPR-NEXT: v_mov_b32_e32 v3, v0 ; GFX90A-VGPR-NEXT: v_mov_b32_e32 v4, v0 ; GFX90A-VGPR-NEXT: v_mov_b32_e32 v5, v0 ; GFX90A-VGPR-NEXT: v_mov_b32_e32 v6, v0 ; GFX90A-VGPR-NEXT: v_mov_b32_e32 v1, v0 ; GFX90A-VGPR-NEXT: v_pk_mov_b32 v[8:9], v[6:7], v[6:7] op_sel:[0,1] -; GFX90A-VGPR-NEXT: v_pk_mov_b32 v[10:11], s[6:7], s[6:7] op_sel:[0,1] +; GFX90A-VGPR-NEXT: v_pk_mov_b32 v[12:13], s[6:7], s[6:7] op_sel:[0,1] ; GFX90A-VGPR-NEXT: v_pk_mov_b32 v[6:7], v[4:5], v[4:5] op_sel:[0,1] ; 
GFX90A-VGPR-NEXT: v_pk_mov_b32 v[4:5], v[2:3], v[2:3] op_sel:[0,1] ; GFX90A-VGPR-NEXT: v_pk_mov_b32 v[2:3], v[0:1], v[0:1] op_sel:[0,1] ; GFX90A-VGPR-NEXT: s_nop 1 -; GFX90A-VGPR-NEXT: v_mfma_f64_16x16x4f64 v[2:9], v[12:13], v[10:11], v[2:9] +; GFX90A-VGPR-NEXT: v_mfma_f64_16x16x4f64 v[2:9], v[10:11], v[12:13], v[2:9] ; GFX90A-VGPR-NEXT: s_nop 15 ; GFX90A-VGPR-NEXT: s_nop 1 ; GFX90A-VGPR-NEXT: global_store_dwordx4 v0, v[6:9], s[0:1] offset:16 @@ -1657,20 +1657,20 @@ define amdgpu_kernel void @test_mfma_f64_16x16x4f64_imm(ptr addrspace(1) %arg, d ; GFX942-VGPR-NEXT: v_mov_b32_e32 v7, 0x3ff00000 ; GFX942-VGPR-NEXT: v_mov_b32_e32 v2, v0 ; GFX942-VGPR-NEXT: s_waitcnt lgkmcnt(0) -; GFX942-VGPR-NEXT: v_mov_b32_e32 v12, s2 -; GFX942-VGPR-NEXT: v_mov_b32_e32 v13, s3 +; GFX942-VGPR-NEXT: v_mov_b32_e32 v10, s2 +; GFX942-VGPR-NEXT: v_mov_b32_e32 v11, s3 ; GFX942-VGPR-NEXT: v_mov_b32_e32 v3, v0 ; GFX942-VGPR-NEXT: v_mov_b32_e32 v4, v0 ; GFX942-VGPR-NEXT: v_mov_b32_e32 v5, v0 ; GFX942-VGPR-NEXT: v_mov_b32_e32 v6, v0 ; GFX942-VGPR-NEXT: v_mov_b32_e32 v1, v0 ; GFX942-VGPR-NEXT: v_mov_b64_e32 v[8:9], v[6:7] -; GFX942-VGPR-NEXT: v_mov_b64_e32 v[10:11], s[6:7] +; GFX942-VGPR-NEXT: v_mov_b64_e32 v[12:13], s[6:7] ; GFX942-VGPR-NEXT: v_mov_b64_e32 v[6:7], v[4:5] ; GFX942-VGPR-NEXT: v_mov_b64_e32 v[4:5], v[2:3] ; GFX942-VGPR-NEXT: v_mov_b64_e32 v[2:3], v[0:1] ; GFX942-VGPR-NEXT: s_nop 1 -; GFX942-VGPR-NEXT: v_mfma_f64_16x16x4_f64 v[2:9], v[12:13], v[10:11], v[2:9] +; GFX942-VGPR-NEXT: v_mfma_f64_16x16x4_f64 v[2:9], v[10:11], v[12:13], v[2:9] ; GFX942-VGPR-NEXT: s_nop 15 ; GFX942-VGPR-NEXT: s_nop 1 ; GFX942-VGPR-NEXT: global_store_dwordx4 v0, v[6:9], s[0:1] offset:16 @@ -1743,20 +1743,20 @@ define amdgpu_kernel void @test_mfma_f64_16x16x4f64_splat_lit(ptr addrspace(1) % ; GFX90A-VGPR-NEXT: v_mov_b32_e32 v1, 0x405ec000 ; GFX90A-VGPR-NEXT: v_mov_b32_e32 v2, v0 ; GFX90A-VGPR-NEXT: s_waitcnt lgkmcnt(0) -; GFX90A-VGPR-NEXT: v_mov_b32_e32 v12, s2 -; GFX90A-VGPR-NEXT: v_mov_b32_e32 v13, s3 +; GFX90A-VGPR-NEXT: v_mov_b32_e32 v10, s2 +; GFX90A-VGPR-NEXT: v_mov_b32_e32 v11, s3 ; GFX90A-VGPR-NEXT: v_mov_b32_e32 v3, v1 ; GFX90A-VGPR-NEXT: v_mov_b32_e32 v4, v0 ; GFX90A-VGPR-NEXT: v_mov_b32_e32 v5, v1 ; GFX90A-VGPR-NEXT: v_mov_b32_e32 v6, v0 ; GFX90A-VGPR-NEXT: v_mov_b32_e32 v7, v1 ; GFX90A-VGPR-NEXT: v_pk_mov_b32 v[8:9], v[6:7], v[6:7] op_sel:[0,1] -; GFX90A-VGPR-NEXT: v_pk_mov_b32 v[10:11], s[6:7], s[6:7] op_sel:[0,1] +; GFX90A-VGPR-NEXT: v_pk_mov_b32 v[12:13], s[6:7], s[6:7] op_sel:[0,1] ; GFX90A-VGPR-NEXT: v_pk_mov_b32 v[6:7], v[4:5], v[4:5] op_sel:[0,1] ; GFX90A-VGPR-NEXT: v_pk_mov_b32 v[4:5], v[2:3], v[2:3] op_sel:[0,1] ; GFX90A-VGPR-NEXT: v_pk_mov_b32 v[2:3], v[0:1], v[0:1] op_sel:[0,1] ; GFX90A-VGPR-NEXT: s_nop 1 -; GFX90A-VGPR-NEXT: v_mfma_f64_16x16x4f64 v[2:9], v[12:13], v[10:11], v[2:9] +; GFX90A-VGPR-NEXT: v_mfma_f64_16x16x4f64 v[2:9], v[10:11], v[12:13], v[2:9] ; GFX90A-VGPR-NEXT: s_nop 15 ; GFX90A-VGPR-NEXT: s_nop 1 ; GFX90A-VGPR-NEXT: global_store_dwordx4 v0, v[6:9], s[0:1] offset:16 @@ -1771,20 +1771,20 @@ define amdgpu_kernel void @test_mfma_f64_16x16x4f64_splat_lit(ptr addrspace(1) % ; GFX942-VGPR-NEXT: v_mov_b32_e32 v1, 0x405ec000 ; GFX942-VGPR-NEXT: v_mov_b32_e32 v2, v0 ; GFX942-VGPR-NEXT: s_waitcnt lgkmcnt(0) -; GFX942-VGPR-NEXT: v_mov_b32_e32 v12, s2 -; GFX942-VGPR-NEXT: v_mov_b32_e32 v13, s3 +; GFX942-VGPR-NEXT: v_mov_b32_e32 v10, s2 +; GFX942-VGPR-NEXT: v_mov_b32_e32 v11, s3 ; GFX942-VGPR-NEXT: v_mov_b32_e32 v3, v1 ; GFX942-VGPR-NEXT: v_mov_b32_e32 v4, v0 ; GFX942-VGPR-NEXT: v_mov_b32_e32 v5, v1 ; 
GFX942-VGPR-NEXT: v_mov_b32_e32 v6, v0 ; GFX942-VGPR-NEXT: v_mov_b32_e32 v7, v1 ; GFX942-VGPR-NEXT: v_mov_b64_e32 v[8:9], v[6:7] -; GFX942-VGPR-NEXT: v_mov_b64_e32 v[10:11], s[6:7] +; GFX942-VGPR-NEXT: v_mov_b64_e32 v[12:13], s[6:7] ; GFX942-VGPR-NEXT: v_mov_b64_e32 v[6:7], v[4:5] ; GFX942-VGPR-NEXT: v_mov_b64_e32 v[4:5], v[2:3] ; GFX942-VGPR-NEXT: v_mov_b64_e32 v[2:3], v[0:1] ; GFX942-VGPR-NEXT: s_nop 1 -; GFX942-VGPR-NEXT: v_mfma_f64_16x16x4_f64 v[2:9], v[12:13], v[10:11], v[2:9] +; GFX942-VGPR-NEXT: v_mfma_f64_16x16x4_f64 v[2:9], v[10:11], v[12:13], v[2:9] ; GFX942-VGPR-NEXT: s_nop 15 ; GFX942-VGPR-NEXT: s_nop 1 ; GFX942-VGPR-NEXT: global_store_dwordx4 v0, v[6:9], s[0:1] offset:16 diff --git a/llvm/test/CodeGen/AMDGPU/llvm.amdgcn.mfma.gfx942.ll b/llvm/test/CodeGen/AMDGPU/llvm.amdgcn.mfma.gfx942.ll index dc4c9291..2fb677e 100644 --- a/llvm/test/CodeGen/AMDGPU/llvm.amdgcn.mfma.gfx942.ll +++ b/llvm/test/CodeGen/AMDGPU/llvm.amdgcn.mfma.gfx942.ll @@ -1445,20 +1445,20 @@ define amdgpu_kernel void @test_smfmac_f32_16x16x32_f16(ptr addrspace(1) %arg, < ; GFX942-SDAG: ; %bb.0: ; %bb ; GFX942-SDAG-NEXT: s_load_dwordx8 s[8:15], s[4:5], 0x24 ; GFX942-SDAG-NEXT: s_load_dword s6, s[4:5], 0x44 -; GFX942-SDAG-NEXT: v_mov_b32_e32 v6, 0 +; GFX942-SDAG-NEXT: v_mov_b32_e32 v0, 0 ; GFX942-SDAG-NEXT: s_waitcnt lgkmcnt(0) ; GFX942-SDAG-NEXT: s_load_dwordx4 s[0:3], s[8:9], 0x0 -; GFX942-SDAG-NEXT: v_mov_b64_e32 v[4:5], s[10:11] -; GFX942-SDAG-NEXT: v_mov_b64_e32 v[0:1], s[12:13] -; GFX942-SDAG-NEXT: v_mov_b64_e32 v[2:3], s[14:15] -; GFX942-SDAG-NEXT: v_mov_b32_e32 v7, s6 +; GFX942-SDAG-NEXT: v_mov_b64_e32 v[10:11], s[10:11] +; GFX942-SDAG-NEXT: v_mov_b64_e32 v[2:3], s[12:13] +; GFX942-SDAG-NEXT: v_mov_b64_e32 v[4:5], s[14:15] +; GFX942-SDAG-NEXT: v_mov_b32_e32 v1, s6 ; GFX942-SDAG-NEXT: s_waitcnt lgkmcnt(0) -; GFX942-SDAG-NEXT: v_mov_b64_e32 v[10:11], s[2:3] -; GFX942-SDAG-NEXT: v_mov_b64_e32 v[8:9], s[0:1] +; GFX942-SDAG-NEXT: v_mov_b64_e32 v[8:9], s[2:3] +; GFX942-SDAG-NEXT: v_mov_b64_e32 v[6:7], s[0:1] ; GFX942-SDAG-NEXT: s_nop 1 -; GFX942-SDAG-NEXT: v_smfmac_f32_16x16x32_f16 v[8:11], v[4:5], v[0:3], v7 cbsz:1 abid:2 +; GFX942-SDAG-NEXT: v_smfmac_f32_16x16x32_f16 v[6:9], v[10:11], v[2:5], v1 cbsz:1 abid:2 ; GFX942-SDAG-NEXT: s_nop 6 -; GFX942-SDAG-NEXT: global_store_dwordx4 v6, v[8:11], s[8:9] +; GFX942-SDAG-NEXT: global_store_dwordx4 v0, v[6:9], s[8:9] ; GFX942-SDAG-NEXT: s_endpgm ; ; GFX942-GISEL-LABEL: test_smfmac_f32_16x16x32_f16: @@ -1485,20 +1485,20 @@ define amdgpu_kernel void @test_smfmac_f32_16x16x32_f16(ptr addrspace(1) %arg, < ; GFX950-SDAG: ; %bb.0: ; %bb ; GFX950-SDAG-NEXT: s_load_dwordx8 s[8:15], s[4:5], 0x24 ; GFX950-SDAG-NEXT: s_load_dword s6, s[4:5], 0x44 -; GFX950-SDAG-NEXT: v_mov_b32_e32 v6, 0 +; GFX950-SDAG-NEXT: v_mov_b32_e32 v0, 0 ; GFX950-SDAG-NEXT: s_waitcnt lgkmcnt(0) ; GFX950-SDAG-NEXT: s_load_dwordx4 s[0:3], s[8:9], 0x0 -; GFX950-SDAG-NEXT: v_mov_b64_e32 v[4:5], s[10:11] -; GFX950-SDAG-NEXT: v_mov_b64_e32 v[0:1], s[12:13] -; GFX950-SDAG-NEXT: v_mov_b64_e32 v[2:3], s[14:15] -; GFX950-SDAG-NEXT: v_mov_b32_e32 v7, s6 +; GFX950-SDAG-NEXT: v_mov_b64_e32 v[10:11], s[10:11] +; GFX950-SDAG-NEXT: v_mov_b64_e32 v[2:3], s[12:13] +; GFX950-SDAG-NEXT: v_mov_b64_e32 v[4:5], s[14:15] +; GFX950-SDAG-NEXT: v_mov_b32_e32 v1, s6 ; GFX950-SDAG-NEXT: s_waitcnt lgkmcnt(0) -; GFX950-SDAG-NEXT: v_mov_b64_e32 v[10:11], s[2:3] -; GFX950-SDAG-NEXT: v_mov_b64_e32 v[8:9], s[0:1] +; GFX950-SDAG-NEXT: v_mov_b64_e32 v[8:9], s[2:3] +; GFX950-SDAG-NEXT: v_mov_b64_e32 v[6:7], s[0:1] ; GFX950-SDAG-NEXT: s_nop 1 -; 
GFX950-SDAG-NEXT: v_smfmac_f32_16x16x32_f16 v[8:11], v[4:5], v[0:3], v7 cbsz:1 abid:2 +; GFX950-SDAG-NEXT: v_smfmac_f32_16x16x32_f16 v[6:9], v[10:11], v[2:5], v1 cbsz:1 abid:2 ; GFX950-SDAG-NEXT: s_nop 7 -; GFX950-SDAG-NEXT: global_store_dwordx4 v6, v[8:11], s[8:9] +; GFX950-SDAG-NEXT: global_store_dwordx4 v0, v[6:9], s[8:9] ; GFX950-SDAG-NEXT: s_endpgm ; ; GFX950-GISEL-LABEL: test_smfmac_f32_16x16x32_f16: @@ -1577,11 +1577,11 @@ define amdgpu_kernel void @test_smfmac_f32_32x32x16_f16(ptr addrspace(1) %arg, < ; GFX942-SDAG-NEXT: s_load_dwordx8 s[16:23], s[4:5], 0x24 ; GFX942-SDAG-NEXT: s_load_dword s24, s[4:5], 0x44 ; GFX942-SDAG-NEXT: s_waitcnt lgkmcnt(0) -; GFX942-SDAG-NEXT: v_mov_b64_e32 v[20:21], s[18:19] +; GFX942-SDAG-NEXT: v_mov_b64_e32 v[22:23], s[18:19] ; GFX942-SDAG-NEXT: s_load_dwordx16 s[0:15], s[16:17], 0x0 -; GFX942-SDAG-NEXT: v_mov_b64_e32 v[16:17], s[20:21] -; GFX942-SDAG-NEXT: v_mov_b64_e32 v[18:19], s[22:23] -; GFX942-SDAG-NEXT: v_mov_b32_e32 v22, s24 +; GFX942-SDAG-NEXT: v_mov_b64_e32 v[18:19], s[20:21] +; GFX942-SDAG-NEXT: v_mov_b64_e32 v[20:21], s[22:23] +; GFX942-SDAG-NEXT: v_mov_b32_e32 v16, s24 ; GFX942-SDAG-NEXT: s_waitcnt lgkmcnt(0) ; GFX942-SDAG-NEXT: v_mov_b64_e32 v[0:1], s[0:1] ; GFX942-SDAG-NEXT: v_mov_b64_e32 v[2:3], s[2:3] @@ -1592,7 +1592,7 @@ define amdgpu_kernel void @test_smfmac_f32_32x32x16_f16(ptr addrspace(1) %arg, < ; GFX942-SDAG-NEXT: v_mov_b64_e32 v[12:13], s[12:13] ; GFX942-SDAG-NEXT: v_mov_b64_e32 v[14:15], s[14:15] ; GFX942-SDAG-NEXT: s_nop 1 -; GFX942-SDAG-NEXT: v_smfmac_f32_32x32x16_f16 v[0:15], v[20:21], v[16:19], v22 cbsz:1 abid:2 +; GFX942-SDAG-NEXT: v_smfmac_f32_32x32x16_f16 v[0:15], v[22:23], v[18:21], v16 cbsz:1 abid:2 ; GFX942-SDAG-NEXT: v_mov_b32_e32 v16, 0 ; GFX942-SDAG-NEXT: s_nop 9 ; GFX942-SDAG-NEXT: global_store_dwordx4 v16, v[12:15], s[16:17] offset:48 @@ -1635,11 +1635,11 @@ define amdgpu_kernel void @test_smfmac_f32_32x32x16_f16(ptr addrspace(1) %arg, < ; GFX950-SDAG-NEXT: s_load_dwordx8 s[16:23], s[4:5], 0x24 ; GFX950-SDAG-NEXT: s_load_dword s24, s[4:5], 0x44 ; GFX950-SDAG-NEXT: s_waitcnt lgkmcnt(0) -; GFX950-SDAG-NEXT: v_mov_b64_e32 v[20:21], s[18:19] +; GFX950-SDAG-NEXT: v_mov_b64_e32 v[22:23], s[18:19] ; GFX950-SDAG-NEXT: s_load_dwordx16 s[0:15], s[16:17], 0x0 -; GFX950-SDAG-NEXT: v_mov_b64_e32 v[16:17], s[20:21] -; GFX950-SDAG-NEXT: v_mov_b64_e32 v[18:19], s[22:23] -; GFX950-SDAG-NEXT: v_mov_b32_e32 v22, s24 +; GFX950-SDAG-NEXT: v_mov_b64_e32 v[18:19], s[20:21] +; GFX950-SDAG-NEXT: v_mov_b64_e32 v[20:21], s[22:23] +; GFX950-SDAG-NEXT: v_mov_b32_e32 v16, s24 ; GFX950-SDAG-NEXT: s_waitcnt lgkmcnt(0) ; GFX950-SDAG-NEXT: v_mov_b64_e32 v[0:1], s[0:1] ; GFX950-SDAG-NEXT: v_mov_b64_e32 v[2:3], s[2:3] @@ -1650,7 +1650,7 @@ define amdgpu_kernel void @test_smfmac_f32_32x32x16_f16(ptr addrspace(1) %arg, < ; GFX950-SDAG-NEXT: v_mov_b64_e32 v[12:13], s[12:13] ; GFX950-SDAG-NEXT: v_mov_b64_e32 v[14:15], s[14:15] ; GFX950-SDAG-NEXT: s_nop 1 -; GFX950-SDAG-NEXT: v_smfmac_f32_32x32x16_f16 v[0:15], v[20:21], v[16:19], v22 cbsz:1 abid:2 +; GFX950-SDAG-NEXT: v_smfmac_f32_32x32x16_f16 v[0:15], v[22:23], v[18:21], v16 cbsz:1 abid:2 ; GFX950-SDAG-NEXT: v_mov_b32_e32 v16, 0 ; GFX950-SDAG-NEXT: s_nop 10 ; GFX950-SDAG-NEXT: global_store_dwordx4 v16, v[12:15], s[16:17] offset:48 @@ -1847,20 +1847,20 @@ define amdgpu_kernel void @test_smfmac_f32_16x16x32_bf16(ptr addrspace(1) %arg, ; GFX942-SDAG: ; %bb.0: ; %bb ; GFX942-SDAG-NEXT: s_load_dwordx8 s[8:15], s[4:5], 0x24 ; GFX942-SDAG-NEXT: s_load_dword s6, s[4:5], 0x44 -; GFX942-SDAG-NEXT: v_mov_b32_e32 
v6, 0 +; GFX942-SDAG-NEXT: v_mov_b32_e32 v0, 0 ; GFX942-SDAG-NEXT: s_waitcnt lgkmcnt(0) ; GFX942-SDAG-NEXT: s_load_dwordx4 s[0:3], s[8:9], 0x0 -; GFX942-SDAG-NEXT: v_mov_b64_e32 v[4:5], s[10:11] -; GFX942-SDAG-NEXT: v_mov_b64_e32 v[0:1], s[12:13] -; GFX942-SDAG-NEXT: v_mov_b64_e32 v[2:3], s[14:15] -; GFX942-SDAG-NEXT: v_mov_b32_e32 v7, s6 +; GFX942-SDAG-NEXT: v_mov_b64_e32 v[10:11], s[10:11] +; GFX942-SDAG-NEXT: v_mov_b64_e32 v[2:3], s[12:13] +; GFX942-SDAG-NEXT: v_mov_b64_e32 v[4:5], s[14:15] +; GFX942-SDAG-NEXT: v_mov_b32_e32 v1, s6 ; GFX942-SDAG-NEXT: s_waitcnt lgkmcnt(0) -; GFX942-SDAG-NEXT: v_mov_b64_e32 v[10:11], s[2:3] -; GFX942-SDAG-NEXT: v_mov_b64_e32 v[8:9], s[0:1] +; GFX942-SDAG-NEXT: v_mov_b64_e32 v[8:9], s[2:3] +; GFX942-SDAG-NEXT: v_mov_b64_e32 v[6:7], s[0:1] ; GFX942-SDAG-NEXT: s_nop 1 -; GFX942-SDAG-NEXT: v_smfmac_f32_16x16x32_bf16 v[8:11], v[4:5], v[0:3], v7 cbsz:1 abid:2 +; GFX942-SDAG-NEXT: v_smfmac_f32_16x16x32_bf16 v[6:9], v[10:11], v[2:5], v1 cbsz:1 abid:2 ; GFX942-SDAG-NEXT: s_nop 6 -; GFX942-SDAG-NEXT: global_store_dwordx4 v6, v[8:11], s[8:9] +; GFX942-SDAG-NEXT: global_store_dwordx4 v0, v[6:9], s[8:9] ; GFX942-SDAG-NEXT: s_endpgm ; ; GFX942-GISEL-LABEL: test_smfmac_f32_16x16x32_bf16: @@ -1887,20 +1887,20 @@ define amdgpu_kernel void @test_smfmac_f32_16x16x32_bf16(ptr addrspace(1) %arg, ; GFX950-SDAG: ; %bb.0: ; %bb ; GFX950-SDAG-NEXT: s_load_dwordx8 s[8:15], s[4:5], 0x24 ; GFX950-SDAG-NEXT: s_load_dword s6, s[4:5], 0x44 -; GFX950-SDAG-NEXT: v_mov_b32_e32 v6, 0 +; GFX950-SDAG-NEXT: v_mov_b32_e32 v0, 0 ; GFX950-SDAG-NEXT: s_waitcnt lgkmcnt(0) ; GFX950-SDAG-NEXT: s_load_dwordx4 s[0:3], s[8:9], 0x0 -; GFX950-SDAG-NEXT: v_mov_b64_e32 v[4:5], s[10:11] -; GFX950-SDAG-NEXT: v_mov_b64_e32 v[0:1], s[12:13] -; GFX950-SDAG-NEXT: v_mov_b64_e32 v[2:3], s[14:15] -; GFX950-SDAG-NEXT: v_mov_b32_e32 v7, s6 +; GFX950-SDAG-NEXT: v_mov_b64_e32 v[10:11], s[10:11] +; GFX950-SDAG-NEXT: v_mov_b64_e32 v[2:3], s[12:13] +; GFX950-SDAG-NEXT: v_mov_b64_e32 v[4:5], s[14:15] +; GFX950-SDAG-NEXT: v_mov_b32_e32 v1, s6 ; GFX950-SDAG-NEXT: s_waitcnt lgkmcnt(0) -; GFX950-SDAG-NEXT: v_mov_b64_e32 v[10:11], s[2:3] -; GFX950-SDAG-NEXT: v_mov_b64_e32 v[8:9], s[0:1] +; GFX950-SDAG-NEXT: v_mov_b64_e32 v[8:9], s[2:3] +; GFX950-SDAG-NEXT: v_mov_b64_e32 v[6:7], s[0:1] ; GFX950-SDAG-NEXT: s_nop 1 -; GFX950-SDAG-NEXT: v_smfmac_f32_16x16x32_bf16 v[8:11], v[4:5], v[0:3], v7 cbsz:1 abid:2 +; GFX950-SDAG-NEXT: v_smfmac_f32_16x16x32_bf16 v[6:9], v[10:11], v[2:5], v1 cbsz:1 abid:2 ; GFX950-SDAG-NEXT: s_nop 7 -; GFX950-SDAG-NEXT: global_store_dwordx4 v6, v[8:11], s[8:9] +; GFX950-SDAG-NEXT: global_store_dwordx4 v0, v[6:9], s[8:9] ; GFX950-SDAG-NEXT: s_endpgm ; ; GFX950-GISEL-LABEL: test_smfmac_f32_16x16x32_bf16: @@ -1979,11 +1979,11 @@ define amdgpu_kernel void @test_smfmac_f32_32x32x16_bf16(ptr addrspace(1) %arg, ; GFX942-SDAG-NEXT: s_load_dwordx8 s[16:23], s[4:5], 0x24 ; GFX942-SDAG-NEXT: s_load_dword s24, s[4:5], 0x44 ; GFX942-SDAG-NEXT: s_waitcnt lgkmcnt(0) -; GFX942-SDAG-NEXT: v_mov_b64_e32 v[20:21], s[18:19] +; GFX942-SDAG-NEXT: v_mov_b64_e32 v[22:23], s[18:19] ; GFX942-SDAG-NEXT: s_load_dwordx16 s[0:15], s[16:17], 0x0 -; GFX942-SDAG-NEXT: v_mov_b64_e32 v[16:17], s[20:21] -; GFX942-SDAG-NEXT: v_mov_b64_e32 v[18:19], s[22:23] -; GFX942-SDAG-NEXT: v_mov_b32_e32 v22, s24 +; GFX942-SDAG-NEXT: v_mov_b64_e32 v[18:19], s[20:21] +; GFX942-SDAG-NEXT: v_mov_b64_e32 v[20:21], s[22:23] +; GFX942-SDAG-NEXT: v_mov_b32_e32 v16, s24 ; GFX942-SDAG-NEXT: s_waitcnt lgkmcnt(0) ; GFX942-SDAG-NEXT: v_mov_b64_e32 v[0:1], s[0:1] ; 
GFX942-SDAG-NEXT: v_mov_b64_e32 v[2:3], s[2:3] @@ -1994,7 +1994,7 @@ define amdgpu_kernel void @test_smfmac_f32_32x32x16_bf16(ptr addrspace(1) %arg, ; GFX942-SDAG-NEXT: v_mov_b64_e32 v[12:13], s[12:13] ; GFX942-SDAG-NEXT: v_mov_b64_e32 v[14:15], s[14:15] ; GFX942-SDAG-NEXT: s_nop 1 -; GFX942-SDAG-NEXT: v_smfmac_f32_32x32x16_bf16 v[0:15], v[20:21], v[16:19], v22 cbsz:1 abid:2 +; GFX942-SDAG-NEXT: v_smfmac_f32_32x32x16_bf16 v[0:15], v[22:23], v[18:21], v16 cbsz:1 abid:2 ; GFX942-SDAG-NEXT: v_mov_b32_e32 v16, 0 ; GFX942-SDAG-NEXT: s_nop 9 ; GFX942-SDAG-NEXT: global_store_dwordx4 v16, v[12:15], s[16:17] offset:48 @@ -2037,11 +2037,11 @@ define amdgpu_kernel void @test_smfmac_f32_32x32x16_bf16(ptr addrspace(1) %arg, ; GFX950-SDAG-NEXT: s_load_dwordx8 s[16:23], s[4:5], 0x24 ; GFX950-SDAG-NEXT: s_load_dword s24, s[4:5], 0x44 ; GFX950-SDAG-NEXT: s_waitcnt lgkmcnt(0) -; GFX950-SDAG-NEXT: v_mov_b64_e32 v[20:21], s[18:19] +; GFX950-SDAG-NEXT: v_mov_b64_e32 v[22:23], s[18:19] ; GFX950-SDAG-NEXT: s_load_dwordx16 s[0:15], s[16:17], 0x0 -; GFX950-SDAG-NEXT: v_mov_b64_e32 v[16:17], s[20:21] -; GFX950-SDAG-NEXT: v_mov_b64_e32 v[18:19], s[22:23] -; GFX950-SDAG-NEXT: v_mov_b32_e32 v22, s24 +; GFX950-SDAG-NEXT: v_mov_b64_e32 v[18:19], s[20:21] +; GFX950-SDAG-NEXT: v_mov_b64_e32 v[20:21], s[22:23] +; GFX950-SDAG-NEXT: v_mov_b32_e32 v16, s24 ; GFX950-SDAG-NEXT: s_waitcnt lgkmcnt(0) ; GFX950-SDAG-NEXT: v_mov_b64_e32 v[0:1], s[0:1] ; GFX950-SDAG-NEXT: v_mov_b64_e32 v[2:3], s[2:3] @@ -2052,7 +2052,7 @@ define amdgpu_kernel void @test_smfmac_f32_32x32x16_bf16(ptr addrspace(1) %arg, ; GFX950-SDAG-NEXT: v_mov_b64_e32 v[12:13], s[12:13] ; GFX950-SDAG-NEXT: v_mov_b64_e32 v[14:15], s[14:15] ; GFX950-SDAG-NEXT: s_nop 1 -; GFX950-SDAG-NEXT: v_smfmac_f32_32x32x16_bf16 v[0:15], v[20:21], v[16:19], v22 cbsz:1 abid:2 +; GFX950-SDAG-NEXT: v_smfmac_f32_32x32x16_bf16 v[0:15], v[22:23], v[18:21], v16 cbsz:1 abid:2 ; GFX950-SDAG-NEXT: v_mov_b32_e32 v16, 0 ; GFX950-SDAG-NEXT: s_nop 10 ; GFX950-SDAG-NEXT: global_store_dwordx4 v16, v[12:15], s[16:17] offset:48 diff --git a/llvm/test/CodeGen/AMDGPU/llvm.amdgcn.mfma.gfx950.bf16.ll b/llvm/test/CodeGen/AMDGPU/llvm.amdgcn.mfma.gfx950.bf16.ll index 033a35f..13a96cf 100644 --- a/llvm/test/CodeGen/AMDGPU/llvm.amdgcn.mfma.gfx950.bf16.ll +++ b/llvm/test/CodeGen/AMDGPU/llvm.amdgcn.mfma.gfx950.bf16.ll @@ -15,15 +15,15 @@ define amdgpu_kernel void @test_mfma_f32_32x32x16_bf16(<8 x bfloat> %arg0, <8 x ; GCN: ; %bb.0: ; GCN-NEXT: s_load_dwordx8 s[24:31], s[4:5], 0x24 ; GCN-NEXT: s_load_dwordx16 s[8:23], s[4:5], 0x64 -; GCN-NEXT: v_mov_b64_e32 v[8:9], 48 -; GCN-NEXT: v_mov_b64_e32 v[10:11], 32 -; GCN-NEXT: v_mov_b64_e32 v[12:13], 16 +; GCN-NEXT: v_mov_b64_e32 v[0:1], 48 +; GCN-NEXT: v_mov_b64_e32 v[2:3], 32 +; GCN-NEXT: v_mov_b64_e32 v[4:5], 16 ; GCN-NEXT: s_waitcnt lgkmcnt(0) -; GCN-NEXT: v_mov_b64_e32 v[0:1], s[24:25] -; GCN-NEXT: v_mov_b64_e32 v[2:3], s[26:27] -; GCN-NEXT: v_mov_b64_e32 v[4:5], s[28:29] +; GCN-NEXT: v_mov_b64_e32 v[8:9], s[24:25] +; GCN-NEXT: v_mov_b64_e32 v[10:11], s[26:27] +; GCN-NEXT: v_mov_b64_e32 v[12:13], s[28:29] ; GCN-NEXT: v_accvgpr_write_b32 a0, s8 -; GCN-NEXT: v_mov_b64_e32 v[6:7], s[30:31] +; GCN-NEXT: v_mov_b64_e32 v[14:15], s[30:31] ; GCN-NEXT: v_accvgpr_write_b32 a1, s9 ; GCN-NEXT: v_accvgpr_write_b32 a2, s10 ; GCN-NEXT: v_accvgpr_write_b32 a3, s11 @@ -41,40 +41,39 @@ define amdgpu_kernel void @test_mfma_f32_32x32x16_bf16(<8 x bfloat> %arg0, <8 x ; GCN-NEXT: v_accvgpr_write_b32 a15, s23 ; GCN-NEXT: v_mov_b32_e32 v16, s16 ; GCN-NEXT: v_mov_b32_e32 v17, s17 -; 
GCN-NEXT: v_mfma_f32_32x32x16_bf16 a[16:31], v[0:3], v[4:7], a[0:15] +; GCN-NEXT: v_mfma_f32_32x32x16_bf16 a[16:31], v[8:11], v[12:15], a[0:15] ; GCN-NEXT: v_mov_b32_e32 v18, s18 ; GCN-NEXT: v_mov_b32_e32 v19, s19 -; GCN-NEXT: v_mov_b32_e32 v0, s20 -; GCN-NEXT: v_mov_b32_e32 v1, s21 -; GCN-NEXT: v_mov_b32_e32 v2, s22 -; GCN-NEXT: v_mov_b32_e32 v3, s23 -; GCN-NEXT: v_mov_b64_e32 v[14:15], 0 +; GCN-NEXT: v_mov_b32_e32 v8, s20 +; GCN-NEXT: v_mov_b32_e32 v9, s21 +; GCN-NEXT: v_mov_b32_e32 v10, s22 +; GCN-NEXT: v_mov_b32_e32 v11, s23 +; GCN-NEXT: v_mov_b64_e32 v[6:7], 0 ; GCN-NEXT: s_nop 4 -; GCN-NEXT: global_store_dwordx4 v[8:9], a[28:31], off sc0 sc1 +; GCN-NEXT: global_store_dwordx4 v[0:1], a[28:31], off sc0 sc1 ; GCN-NEXT: s_waitcnt vmcnt(0) -; GCN-NEXT: global_store_dwordx4 v[10:11], a[24:27], off sc0 sc1 +; GCN-NEXT: global_store_dwordx4 v[2:3], a[24:27], off sc0 sc1 ; GCN-NEXT: s_waitcnt vmcnt(0) -; GCN-NEXT: global_store_dwordx4 v[12:13], a[20:23], off sc0 sc1 +; GCN-NEXT: global_store_dwordx4 v[4:5], a[20:23], off sc0 sc1 ; GCN-NEXT: s_waitcnt vmcnt(0) -; GCN-NEXT: global_store_dwordx4 v[14:15], a[16:19], off sc0 sc1 +; GCN-NEXT: global_store_dwordx4 v[6:7], a[16:19], off sc0 sc1 ; GCN-NEXT: s_waitcnt vmcnt(0) -; GCN-NEXT: global_store_dwordx4 v[10:11], v[16:19], off sc0 sc1 +; GCN-NEXT: global_store_dwordx4 v[2:3], v[16:19], off sc0 sc1 ; GCN-NEXT: s_waitcnt vmcnt(0) -; GCN-NEXT: global_store_dwordx4 v[8:9], v[0:3], off sc0 sc1 +; GCN-NEXT: global_store_dwordx4 v[0:1], v[8:11], off sc0 sc1 ; GCN-NEXT: s_waitcnt vmcnt(0) -; GCN-NEXT: s_nop 0 ; GCN-NEXT: v_mov_b32_e32 v0, s8 ; GCN-NEXT: v_mov_b32_e32 v1, s9 ; GCN-NEXT: v_mov_b32_e32 v2, s10 ; GCN-NEXT: v_mov_b32_e32 v3, s11 -; GCN-NEXT: global_store_dwordx4 v[14:15], v[0:3], off sc0 sc1 +; GCN-NEXT: global_store_dwordx4 v[6:7], v[0:3], off sc0 sc1 ; GCN-NEXT: s_waitcnt vmcnt(0) ; GCN-NEXT: s_nop 0 ; GCN-NEXT: v_mov_b32_e32 v0, s12 ; GCN-NEXT: v_mov_b32_e32 v1, s13 ; GCN-NEXT: v_mov_b32_e32 v2, s14 ; GCN-NEXT: v_mov_b32_e32 v3, s15 -; GCN-NEXT: global_store_dwordx4 v[12:13], v[0:3], off sc0 sc1 +; GCN-NEXT: global_store_dwordx4 v[4:5], v[0:3], off sc0 sc1 ; GCN-NEXT: s_waitcnt vmcnt(0) ; GCN-NEXT: s_endpgm %result = call <16 x float> @llvm.amdgcn.mfma.f32.32x32x16.bf16(<8 x bfloat> %arg0, <8 x bfloat> %arg1, <16 x float> %arg2, i32 0, i32 0, i32 0) @@ -88,15 +87,15 @@ define amdgpu_kernel void @test_mfma_f32_32x32x16_bf16__flags(<8 x bfloat> %arg0 ; GCN: ; %bb.0: ; GCN-NEXT: s_load_dwordx8 s[24:31], s[4:5], 0x24 ; GCN-NEXT: s_load_dwordx16 s[8:23], s[4:5], 0x64 -; GCN-NEXT: v_mov_b64_e32 v[8:9], 48 -; GCN-NEXT: v_mov_b64_e32 v[10:11], 32 -; GCN-NEXT: v_mov_b64_e32 v[12:13], 16 +; GCN-NEXT: v_mov_b64_e32 v[0:1], 48 +; GCN-NEXT: v_mov_b64_e32 v[2:3], 32 +; GCN-NEXT: v_mov_b64_e32 v[4:5], 16 ; GCN-NEXT: s_waitcnt lgkmcnt(0) -; GCN-NEXT: v_mov_b64_e32 v[0:1], s[24:25] -; GCN-NEXT: v_mov_b64_e32 v[2:3], s[26:27] -; GCN-NEXT: v_mov_b64_e32 v[4:5], s[28:29] +; GCN-NEXT: v_mov_b64_e32 v[8:9], s[24:25] +; GCN-NEXT: v_mov_b64_e32 v[10:11], s[26:27] +; GCN-NEXT: v_mov_b64_e32 v[12:13], s[28:29] ; GCN-NEXT: v_accvgpr_write_b32 a0, s8 -; GCN-NEXT: v_mov_b64_e32 v[6:7], s[30:31] +; GCN-NEXT: v_mov_b64_e32 v[14:15], s[30:31] ; GCN-NEXT: v_accvgpr_write_b32 a1, s9 ; GCN-NEXT: v_accvgpr_write_b32 a2, s10 ; GCN-NEXT: v_accvgpr_write_b32 a3, s11 @@ -114,40 +113,39 @@ define amdgpu_kernel void @test_mfma_f32_32x32x16_bf16__flags(<8 x bfloat> %arg0 ; GCN-NEXT: v_accvgpr_write_b32 a15, s23 ; GCN-NEXT: v_mov_b32_e32 v16, s16 ; GCN-NEXT: v_mov_b32_e32 v17, s17 
-; GCN-NEXT: v_mfma_f32_32x32x16_bf16 a[16:31], v[0:3], v[4:7], a[0:15] cbsz:2 abid:3 blgp:1 +; GCN-NEXT: v_mfma_f32_32x32x16_bf16 a[16:31], v[8:11], v[12:15], a[0:15] cbsz:2 abid:3 blgp:1 ; GCN-NEXT: v_mov_b32_e32 v18, s18 ; GCN-NEXT: v_mov_b32_e32 v19, s19 -; GCN-NEXT: v_mov_b32_e32 v0, s20 -; GCN-NEXT: v_mov_b32_e32 v1, s21 -; GCN-NEXT: v_mov_b32_e32 v2, s22 -; GCN-NEXT: v_mov_b32_e32 v3, s23 -; GCN-NEXT: v_mov_b64_e32 v[14:15], 0 +; GCN-NEXT: v_mov_b32_e32 v8, s20 +; GCN-NEXT: v_mov_b32_e32 v9, s21 +; GCN-NEXT: v_mov_b32_e32 v10, s22 +; GCN-NEXT: v_mov_b32_e32 v11, s23 +; GCN-NEXT: v_mov_b64_e32 v[6:7], 0 ; GCN-NEXT: s_nop 4 -; GCN-NEXT: global_store_dwordx4 v[8:9], a[28:31], off sc0 sc1 +; GCN-NEXT: global_store_dwordx4 v[0:1], a[28:31], off sc0 sc1 ; GCN-NEXT: s_waitcnt vmcnt(0) -; GCN-NEXT: global_store_dwordx4 v[10:11], a[24:27], off sc0 sc1 +; GCN-NEXT: global_store_dwordx4 v[2:3], a[24:27], off sc0 sc1 ; GCN-NEXT: s_waitcnt vmcnt(0) -; GCN-NEXT: global_store_dwordx4 v[12:13], a[20:23], off sc0 sc1 +; GCN-NEXT: global_store_dwordx4 v[4:5], a[20:23], off sc0 sc1 ; GCN-NEXT: s_waitcnt vmcnt(0) -; GCN-NEXT: global_store_dwordx4 v[14:15], a[16:19], off sc0 sc1 +; GCN-NEXT: global_store_dwordx4 v[6:7], a[16:19], off sc0 sc1 ; GCN-NEXT: s_waitcnt vmcnt(0) -; GCN-NEXT: global_store_dwordx4 v[10:11], v[16:19], off sc0 sc1 +; GCN-NEXT: global_store_dwordx4 v[2:3], v[16:19], off sc0 sc1 ; GCN-NEXT: s_waitcnt vmcnt(0) -; GCN-NEXT: global_store_dwordx4 v[8:9], v[0:3], off sc0 sc1 +; GCN-NEXT: global_store_dwordx4 v[0:1], v[8:11], off sc0 sc1 ; GCN-NEXT: s_waitcnt vmcnt(0) -; GCN-NEXT: s_nop 0 ; GCN-NEXT: v_mov_b32_e32 v0, s8 ; GCN-NEXT: v_mov_b32_e32 v1, s9 ; GCN-NEXT: v_mov_b32_e32 v2, s10 ; GCN-NEXT: v_mov_b32_e32 v3, s11 -; GCN-NEXT: global_store_dwordx4 v[14:15], v[0:3], off sc0 sc1 +; GCN-NEXT: global_store_dwordx4 v[6:7], v[0:3], off sc0 sc1 ; GCN-NEXT: s_waitcnt vmcnt(0) ; GCN-NEXT: s_nop 0 ; GCN-NEXT: v_mov_b32_e32 v0, s12 ; GCN-NEXT: v_mov_b32_e32 v1, s13 ; GCN-NEXT: v_mov_b32_e32 v2, s14 ; GCN-NEXT: v_mov_b32_e32 v3, s15 -; GCN-NEXT: global_store_dwordx4 v[12:13], v[0:3], off sc0 sc1 +; GCN-NEXT: global_store_dwordx4 v[4:5], v[0:3], off sc0 sc1 ; GCN-NEXT: s_waitcnt vmcnt(0) ; GCN-NEXT: s_endpgm %result = call <16 x float> @llvm.amdgcn.mfma.f32.32x32x16.bf16(<8 x bfloat> %arg0, <8 x bfloat> %arg1, <16 x float> %arg2, i32 2, i32 3, i32 1) @@ -250,13 +248,13 @@ define amdgpu_kernel void @test_mfma_f32_32x32x16_bf16__vgprcd(<8 x bfloat> %arg ; GCN-NEXT: s_load_dwordx8 s[24:31], s[4:5], 0x24 ; GCN-NEXT: s_load_dwordx16 s[8:23], s[4:5], 0x64 ; GCN-NEXT: s_load_dwordx2 s[0:1], s[4:5], 0xa4 -; GCN-NEXT: v_mov_b32_e32 v44, 0 +; GCN-NEXT: v_mov_b32_e32 v36, 0 ; GCN-NEXT: s_waitcnt lgkmcnt(0) -; GCN-NEXT: v_mov_b64_e32 v[34:35], s[26:27] -; GCN-NEXT: v_mov_b64_e32 v[32:33], s[24:25] -; GCN-NEXT: v_mov_b64_e32 v[38:39], s[30:31] +; GCN-NEXT: v_mov_b64_e32 v[40:41], s[26:27] +; GCN-NEXT: v_mov_b64_e32 v[38:39], s[24:25] +; GCN-NEXT: v_mov_b64_e32 v[44:45], s[30:31] ; GCN-NEXT: v_mov_b64_e32 v[30:31], s[22:23] -; GCN-NEXT: v_mov_b64_e32 v[36:37], s[28:29] +; GCN-NEXT: v_mov_b64_e32 v[42:43], s[28:29] ; GCN-NEXT: v_mov_b64_e32 v[28:29], s[20:21] ; GCN-NEXT: v_mov_b64_e32 v[26:27], s[18:19] ; GCN-NEXT: v_mov_b64_e32 v[24:25], s[16:17] @@ -264,41 +262,41 @@ define amdgpu_kernel void @test_mfma_f32_32x32x16_bf16__vgprcd(<8 x bfloat> %arg ; GCN-NEXT: v_mov_b64_e32 v[20:21], s[12:13] ; GCN-NEXT: v_mov_b64_e32 v[18:19], s[10:11] ; GCN-NEXT: v_mov_b64_e32 v[16:17], s[8:9] -; GCN-NEXT: v_mov_b32_e32 v40, 
s20 -; GCN-NEXT: v_mov_b32_e32 v41, s21 -; GCN-NEXT: v_mfma_f32_32x32x16_bf16 v[0:15], v[32:35], v[36:39], v[16:31] -; GCN-NEXT: v_mov_b32_e32 v42, s22 -; GCN-NEXT: v_mov_b32_e32 v43, s23 -; GCN-NEXT: global_store_dwordx4 v44, v[40:43], s[0:1] offset:48 sc0 sc1 +; GCN-NEXT: v_mov_b32_e32 v32, s20 +; GCN-NEXT: v_mov_b32_e32 v33, s21 +; GCN-NEXT: v_mfma_f32_32x32x16_bf16 v[0:15], v[38:41], v[42:45], v[16:31] +; GCN-NEXT: v_mov_b32_e32 v34, s22 +; GCN-NEXT: v_mov_b32_e32 v35, s23 +; GCN-NEXT: global_store_dwordx4 v36, v[32:35], s[0:1] offset:48 sc0 sc1 ; GCN-NEXT: s_waitcnt vmcnt(0) ; GCN-NEXT: s_nop 2 ; GCN-NEXT: v_mov_b32_e32 v16, s16 ; GCN-NEXT: v_mov_b32_e32 v17, s17 ; GCN-NEXT: v_mov_b32_e32 v18, s18 ; GCN-NEXT: v_mov_b32_e32 v19, s19 -; GCN-NEXT: global_store_dwordx4 v44, v[16:19], s[0:1] offset:32 sc0 sc1 +; GCN-NEXT: global_store_dwordx4 v36, v[16:19], s[0:1] offset:32 sc0 sc1 ; GCN-NEXT: s_waitcnt vmcnt(0) ; GCN-NEXT: s_nop 0 ; GCN-NEXT: v_mov_b32_e32 v16, s12 ; GCN-NEXT: v_mov_b32_e32 v17, s13 ; GCN-NEXT: v_mov_b32_e32 v18, s14 ; GCN-NEXT: v_mov_b32_e32 v19, s15 -; GCN-NEXT: global_store_dwordx4 v44, v[16:19], s[0:1] offset:16 sc0 sc1 +; GCN-NEXT: global_store_dwordx4 v36, v[16:19], s[0:1] offset:16 sc0 sc1 ; GCN-NEXT: s_waitcnt vmcnt(0) ; GCN-NEXT: s_nop 0 ; GCN-NEXT: v_mov_b32_e32 v16, s8 ; GCN-NEXT: v_mov_b32_e32 v17, s9 ; GCN-NEXT: v_mov_b32_e32 v18, s10 ; GCN-NEXT: v_mov_b32_e32 v19, s11 -; GCN-NEXT: global_store_dwordx4 v44, v[16:19], s[0:1] sc0 sc1 +; GCN-NEXT: global_store_dwordx4 v36, v[16:19], s[0:1] sc0 sc1 ; GCN-NEXT: s_waitcnt vmcnt(0) -; GCN-NEXT: global_store_dwordx4 v44, v[8:11], s[0:1] offset:32 sc0 sc1 +; GCN-NEXT: global_store_dwordx4 v36, v[8:11], s[0:1] offset:32 sc0 sc1 ; GCN-NEXT: s_waitcnt vmcnt(0) -; GCN-NEXT: global_store_dwordx4 v44, v[12:15], s[0:1] offset:48 sc0 sc1 +; GCN-NEXT: global_store_dwordx4 v36, v[12:15], s[0:1] offset:48 sc0 sc1 ; GCN-NEXT: s_waitcnt vmcnt(0) -; GCN-NEXT: global_store_dwordx4 v44, v[0:3], s[0:1] sc0 sc1 +; GCN-NEXT: global_store_dwordx4 v36, v[0:3], s[0:1] sc0 sc1 ; GCN-NEXT: s_waitcnt vmcnt(0) -; GCN-NEXT: global_store_dwordx4 v44, v[4:7], s[0:1] offset:16 sc0 sc1 +; GCN-NEXT: global_store_dwordx4 v36, v[4:7], s[0:1] offset:16 sc0 sc1 ; GCN-NEXT: s_waitcnt vmcnt(0) ; GCN-NEXT: s_endpgm %result = call <16 x float> @llvm.amdgcn.mfma.f32.32x32x16.bf16(<8 x bfloat> %arg0, <8 x bfloat> %arg1, <16 x float> %arg2, i32 0, i32 0, i32 0) @@ -313,13 +311,13 @@ define amdgpu_kernel void @test_mfma_f32_32x32x16_bf16__vgprcd__flags(<8 x bfloa ; GCN-NEXT: s_load_dwordx8 s[24:31], s[4:5], 0x24 ; GCN-NEXT: s_load_dwordx16 s[8:23], s[4:5], 0x64 ; GCN-NEXT: s_load_dwordx2 s[0:1], s[4:5], 0xa4 -; GCN-NEXT: v_mov_b32_e32 v44, 0 +; GCN-NEXT: v_mov_b32_e32 v36, 0 ; GCN-NEXT: s_waitcnt lgkmcnt(0) -; GCN-NEXT: v_mov_b64_e32 v[34:35], s[26:27] -; GCN-NEXT: v_mov_b64_e32 v[32:33], s[24:25] -; GCN-NEXT: v_mov_b64_e32 v[38:39], s[30:31] +; GCN-NEXT: v_mov_b64_e32 v[40:41], s[26:27] +; GCN-NEXT: v_mov_b64_e32 v[38:39], s[24:25] +; GCN-NEXT: v_mov_b64_e32 v[44:45], s[30:31] ; GCN-NEXT: v_mov_b64_e32 v[30:31], s[22:23] -; GCN-NEXT: v_mov_b64_e32 v[36:37], s[28:29] +; GCN-NEXT: v_mov_b64_e32 v[42:43], s[28:29] ; GCN-NEXT: v_mov_b64_e32 v[28:29], s[20:21] ; GCN-NEXT: v_mov_b64_e32 v[26:27], s[18:19] ; GCN-NEXT: v_mov_b64_e32 v[24:25], s[16:17] @@ -327,41 +325,41 @@ define amdgpu_kernel void @test_mfma_f32_32x32x16_bf16__vgprcd__flags(<8 x bfloa ; GCN-NEXT: v_mov_b64_e32 v[20:21], s[12:13] ; GCN-NEXT: v_mov_b64_e32 v[18:19], s[10:11] ; GCN-NEXT: v_mov_b64_e32 
v[16:17], s[8:9] -; GCN-NEXT: v_mov_b32_e32 v40, s20 -; GCN-NEXT: v_mov_b32_e32 v41, s21 -; GCN-NEXT: v_mfma_f32_32x32x16_bf16 v[0:15], v[32:35], v[36:39], v[16:31] cbsz:1 abid:2 blgp:3 -; GCN-NEXT: v_mov_b32_e32 v42, s22 -; GCN-NEXT: v_mov_b32_e32 v43, s23 -; GCN-NEXT: global_store_dwordx4 v44, v[40:43], s[0:1] offset:48 sc0 sc1 +; GCN-NEXT: v_mov_b32_e32 v32, s20 +; GCN-NEXT: v_mov_b32_e32 v33, s21 +; GCN-NEXT: v_mfma_f32_32x32x16_bf16 v[0:15], v[38:41], v[42:45], v[16:31] cbsz:1 abid:2 blgp:3 +; GCN-NEXT: v_mov_b32_e32 v34, s22 +; GCN-NEXT: v_mov_b32_e32 v35, s23 +; GCN-NEXT: global_store_dwordx4 v36, v[32:35], s[0:1] offset:48 sc0 sc1 ; GCN-NEXT: s_waitcnt vmcnt(0) ; GCN-NEXT: s_nop 2 ; GCN-NEXT: v_mov_b32_e32 v16, s16 ; GCN-NEXT: v_mov_b32_e32 v17, s17 ; GCN-NEXT: v_mov_b32_e32 v18, s18 ; GCN-NEXT: v_mov_b32_e32 v19, s19 -; GCN-NEXT: global_store_dwordx4 v44, v[16:19], s[0:1] offset:32 sc0 sc1 +; GCN-NEXT: global_store_dwordx4 v36, v[16:19], s[0:1] offset:32 sc0 sc1 ; GCN-NEXT: s_waitcnt vmcnt(0) ; GCN-NEXT: s_nop 0 ; GCN-NEXT: v_mov_b32_e32 v16, s12 ; GCN-NEXT: v_mov_b32_e32 v17, s13 ; GCN-NEXT: v_mov_b32_e32 v18, s14 ; GCN-NEXT: v_mov_b32_e32 v19, s15 -; GCN-NEXT: global_store_dwordx4 v44, v[16:19], s[0:1] offset:16 sc0 sc1 +; GCN-NEXT: global_store_dwordx4 v36, v[16:19], s[0:1] offset:16 sc0 sc1 ; GCN-NEXT: s_waitcnt vmcnt(0) ; GCN-NEXT: s_nop 0 ; GCN-NEXT: v_mov_b32_e32 v16, s8 ; GCN-NEXT: v_mov_b32_e32 v17, s9 ; GCN-NEXT: v_mov_b32_e32 v18, s10 ; GCN-NEXT: v_mov_b32_e32 v19, s11 -; GCN-NEXT: global_store_dwordx4 v44, v[16:19], s[0:1] sc0 sc1 +; GCN-NEXT: global_store_dwordx4 v36, v[16:19], s[0:1] sc0 sc1 ; GCN-NEXT: s_waitcnt vmcnt(0) -; GCN-NEXT: global_store_dwordx4 v44, v[8:11], s[0:1] offset:32 sc0 sc1 +; GCN-NEXT: global_store_dwordx4 v36, v[8:11], s[0:1] offset:32 sc0 sc1 ; GCN-NEXT: s_waitcnt vmcnt(0) -; GCN-NEXT: global_store_dwordx4 v44, v[12:15], s[0:1] offset:48 sc0 sc1 +; GCN-NEXT: global_store_dwordx4 v36, v[12:15], s[0:1] offset:48 sc0 sc1 ; GCN-NEXT: s_waitcnt vmcnt(0) -; GCN-NEXT: global_store_dwordx4 v44, v[0:3], s[0:1] sc0 sc1 +; GCN-NEXT: global_store_dwordx4 v36, v[0:3], s[0:1] sc0 sc1 ; GCN-NEXT: s_waitcnt vmcnt(0) -; GCN-NEXT: global_store_dwordx4 v44, v[4:7], s[0:1] offset:16 sc0 sc1 +; GCN-NEXT: global_store_dwordx4 v36, v[4:7], s[0:1] offset:16 sc0 sc1 ; GCN-NEXT: s_waitcnt vmcnt(0) ; GCN-NEXT: s_endpgm %result = call <16 x float> @llvm.amdgcn.mfma.f32.32x32x16.bf16(<8 x bfloat> %arg0, <8 x bfloat> %arg1, <16 x float> %arg2, i32 1, i32 2, i32 3) diff --git a/llvm/test/CodeGen/AMDGPU/llvm.amdgcn.mfma.gfx950.ll b/llvm/test/CodeGen/AMDGPU/llvm.amdgcn.mfma.gfx950.ll index 7532062..ab0000f 100644 --- a/llvm/test/CodeGen/AMDGPU/llvm.amdgcn.mfma.gfx950.ll +++ b/llvm/test/CodeGen/AMDGPU/llvm.amdgcn.mfma.gfx950.ll @@ -141,18 +141,18 @@ define amdgpu_kernel void @test_mfma_f32_16x16x32_f16_no_agpr__vgprcd(ptr addrsp ; SDAG-NEXT: s_load_dwordx8 s[8:15], s[4:5], 0x34 ; SDAG-NEXT: s_load_dwordx4 s[0:3], s[4:5], 0x54 ; SDAG-NEXT: s_load_dwordx2 s[6:7], s[4:5], 0x24 -; SDAG-NEXT: v_mov_b32_e32 v12, 0 +; SDAG-NEXT: v_mov_b32_e32 v4, 0 ; SDAG-NEXT: s_waitcnt lgkmcnt(0) -; SDAG-NEXT: v_mov_b64_e32 v[0:1], s[8:9] -; SDAG-NEXT: v_mov_b64_e32 v[2:3], s[10:11] -; SDAG-NEXT: v_mov_b64_e32 v[4:5], s[12:13] -; SDAG-NEXT: v_mov_b64_e32 v[10:11], s[2:3] -; SDAG-NEXT: v_mov_b64_e32 v[6:7], s[14:15] -; SDAG-NEXT: v_mov_b64_e32 v[8:9], s[0:1] +; SDAG-NEXT: v_mov_b64_e32 v[6:7], s[8:9] +; SDAG-NEXT: v_mov_b64_e32 v[8:9], s[10:11] +; SDAG-NEXT: v_mov_b64_e32 v[10:11], s[12:13] +; 
SDAG-NEXT: v_mov_b64_e32 v[0:1], s[0:1] +; SDAG-NEXT: v_mov_b64_e32 v[12:13], s[14:15] +; SDAG-NEXT: v_mov_b64_e32 v[2:3], s[2:3] ; SDAG-NEXT: s_nop 1 -; SDAG-NEXT: v_mfma_f32_16x16x32_f16 v[0:3], v[0:3], v[4:7], v[8:11] +; SDAG-NEXT: v_mfma_f32_16x16x32_f16 v[0:3], v[6:9], v[10:13], v[0:3] ; SDAG-NEXT: s_nop 7 -; SDAG-NEXT: global_store_dwordx4 v12, v[0:3], s[6:7] +; SDAG-NEXT: global_store_dwordx4 v4, v[0:3], s[6:7] ; SDAG-NEXT: s_endpgm ; ; GISEL-LABEL: test_mfma_f32_16x16x32_f16_no_agpr__vgprcd: @@ -179,18 +179,18 @@ define amdgpu_kernel void @test_mfma_f32_16x16x32_f16_no_agpr__vgprcd(ptr addrsp ; HEURRC-NEXT: s_load_dwordx8 s[8:15], s[4:5], 0x34 ; HEURRC-NEXT: s_load_dwordx4 s[0:3], s[4:5], 0x54 ; HEURRC-NEXT: s_load_dwordx2 s[6:7], s[4:5], 0x24 -; HEURRC-NEXT: v_mov_b32_e32 v12, 0 +; HEURRC-NEXT: v_mov_b32_e32 v4, 0 ; HEURRC-NEXT: s_waitcnt lgkmcnt(0) -; HEURRC-NEXT: v_mov_b64_e32 v[0:1], s[8:9] -; HEURRC-NEXT: v_mov_b64_e32 v[2:3], s[10:11] -; HEURRC-NEXT: v_mov_b64_e32 v[4:5], s[12:13] -; HEURRC-NEXT: v_mov_b64_e32 v[10:11], s[2:3] -; HEURRC-NEXT: v_mov_b64_e32 v[6:7], s[14:15] -; HEURRC-NEXT: v_mov_b64_e32 v[8:9], s[0:1] +; HEURRC-NEXT: v_mov_b64_e32 v[6:7], s[8:9] +; HEURRC-NEXT: v_mov_b64_e32 v[8:9], s[10:11] +; HEURRC-NEXT: v_mov_b64_e32 v[10:11], s[12:13] +; HEURRC-NEXT: v_mov_b64_e32 v[0:1], s[0:1] +; HEURRC-NEXT: v_mov_b64_e32 v[12:13], s[14:15] +; HEURRC-NEXT: v_mov_b64_e32 v[2:3], s[2:3] ; HEURRC-NEXT: s_nop 1 -; HEURRC-NEXT: v_mfma_f32_16x16x32_f16 v[0:3], v[0:3], v[4:7], v[8:11] +; HEURRC-NEXT: v_mfma_f32_16x16x32_f16 v[0:3], v[6:9], v[10:13], v[0:3] ; HEURRC-NEXT: s_nop 7 -; HEURRC-NEXT: global_store_dwordx4 v12, v[0:3], s[6:7] +; HEURRC-NEXT: global_store_dwordx4 v4, v[0:3], s[6:7] ; HEURRC-NEXT: s_endpgm ; ; VGPRRC-LABEL: test_mfma_f32_16x16x32_f16_no_agpr__vgprcd: @@ -198,18 +198,18 @@ define amdgpu_kernel void @test_mfma_f32_16x16x32_f16_no_agpr__vgprcd(ptr addrsp ; VGPRRC-NEXT: s_load_dwordx8 s[8:15], s[4:5], 0x34 ; VGPRRC-NEXT: s_load_dwordx4 s[0:3], s[4:5], 0x54 ; VGPRRC-NEXT: s_load_dwordx2 s[6:7], s[4:5], 0x24 -; VGPRRC-NEXT: v_mov_b32_e32 v12, 0 +; VGPRRC-NEXT: v_mov_b32_e32 v4, 0 ; VGPRRC-NEXT: s_waitcnt lgkmcnt(0) -; VGPRRC-NEXT: v_mov_b64_e32 v[0:1], s[8:9] -; VGPRRC-NEXT: v_mov_b64_e32 v[2:3], s[10:11] -; VGPRRC-NEXT: v_mov_b64_e32 v[4:5], s[12:13] -; VGPRRC-NEXT: v_mov_b64_e32 v[10:11], s[2:3] -; VGPRRC-NEXT: v_mov_b64_e32 v[6:7], s[14:15] -; VGPRRC-NEXT: v_mov_b64_e32 v[8:9], s[0:1] +; VGPRRC-NEXT: v_mov_b64_e32 v[6:7], s[8:9] +; VGPRRC-NEXT: v_mov_b64_e32 v[8:9], s[10:11] +; VGPRRC-NEXT: v_mov_b64_e32 v[10:11], s[12:13] +; VGPRRC-NEXT: v_mov_b64_e32 v[0:1], s[0:1] +; VGPRRC-NEXT: v_mov_b64_e32 v[12:13], s[14:15] +; VGPRRC-NEXT: v_mov_b64_e32 v[2:3], s[2:3] ; VGPRRC-NEXT: s_nop 1 -; VGPRRC-NEXT: v_mfma_f32_16x16x32_f16 v[0:3], v[0:3], v[4:7], v[8:11] +; VGPRRC-NEXT: v_mfma_f32_16x16x32_f16 v[0:3], v[6:9], v[10:13], v[0:3] ; VGPRRC-NEXT: s_nop 7 -; VGPRRC-NEXT: global_store_dwordx4 v12, v[0:3], s[6:7] +; VGPRRC-NEXT: global_store_dwordx4 v4, v[0:3], s[6:7] ; VGPRRC-NEXT: s_endpgm ; AGPR-LABEL: test_mfma_f32_16x16x32_f16_no_agpr__vgprcd: ; AGPR: ; %bb.0: @@ -260,18 +260,18 @@ define amdgpu_kernel void @test_mfma_f32_16x16x32_f16_no_agpr__vgprcd__flags(ptr ; SDAG-NEXT: s_load_dwordx8 s[8:15], s[4:5], 0x34 ; SDAG-NEXT: s_load_dwordx4 s[0:3], s[4:5], 0x54 ; SDAG-NEXT: s_load_dwordx2 s[6:7], s[4:5], 0x24 -; SDAG-NEXT: v_mov_b32_e32 v12, 0 +; SDAG-NEXT: v_mov_b32_e32 v4, 0 ; SDAG-NEXT: s_waitcnt lgkmcnt(0) -; SDAG-NEXT: v_mov_b64_e32 v[0:1], s[8:9] -; 
SDAG-NEXT: v_mov_b64_e32 v[2:3], s[10:11] -; SDAG-NEXT: v_mov_b64_e32 v[4:5], s[12:13] -; SDAG-NEXT: v_mov_b64_e32 v[10:11], s[2:3] -; SDAG-NEXT: v_mov_b64_e32 v[6:7], s[14:15] -; SDAG-NEXT: v_mov_b64_e32 v[8:9], s[0:1] +; SDAG-NEXT: v_mov_b64_e32 v[6:7], s[8:9] +; SDAG-NEXT: v_mov_b64_e32 v[8:9], s[10:11] +; SDAG-NEXT: v_mov_b64_e32 v[10:11], s[12:13] +; SDAG-NEXT: v_mov_b64_e32 v[0:1], s[0:1] +; SDAG-NEXT: v_mov_b64_e32 v[12:13], s[14:15] +; SDAG-NEXT: v_mov_b64_e32 v[2:3], s[2:3] ; SDAG-NEXT: s_nop 1 -; SDAG-NEXT: v_mfma_f32_16x16x32_f16 v[0:3], v[0:3], v[4:7], v[8:11] cbsz:3 abid:2 blgp:1 +; SDAG-NEXT: v_mfma_f32_16x16x32_f16 v[0:3], v[6:9], v[10:13], v[0:3] cbsz:3 abid:2 blgp:1 ; SDAG-NEXT: s_nop 7 -; SDAG-NEXT: global_store_dwordx4 v12, v[0:3], s[6:7] +; SDAG-NEXT: global_store_dwordx4 v4, v[0:3], s[6:7] ; SDAG-NEXT: s_endpgm ; ; GISEL-LABEL: test_mfma_f32_16x16x32_f16_no_agpr__vgprcd__flags: @@ -298,18 +298,18 @@ define amdgpu_kernel void @test_mfma_f32_16x16x32_f16_no_agpr__vgprcd__flags(ptr ; HEURRC-NEXT: s_load_dwordx8 s[8:15], s[4:5], 0x34 ; HEURRC-NEXT: s_load_dwordx4 s[0:3], s[4:5], 0x54 ; HEURRC-NEXT: s_load_dwordx2 s[6:7], s[4:5], 0x24 -; HEURRC-NEXT: v_mov_b32_e32 v12, 0 +; HEURRC-NEXT: v_mov_b32_e32 v4, 0 ; HEURRC-NEXT: s_waitcnt lgkmcnt(0) -; HEURRC-NEXT: v_mov_b64_e32 v[0:1], s[8:9] -; HEURRC-NEXT: v_mov_b64_e32 v[2:3], s[10:11] -; HEURRC-NEXT: v_mov_b64_e32 v[4:5], s[12:13] -; HEURRC-NEXT: v_mov_b64_e32 v[10:11], s[2:3] -; HEURRC-NEXT: v_mov_b64_e32 v[6:7], s[14:15] -; HEURRC-NEXT: v_mov_b64_e32 v[8:9], s[0:1] +; HEURRC-NEXT: v_mov_b64_e32 v[6:7], s[8:9] +; HEURRC-NEXT: v_mov_b64_e32 v[8:9], s[10:11] +; HEURRC-NEXT: v_mov_b64_e32 v[10:11], s[12:13] +; HEURRC-NEXT: v_mov_b64_e32 v[0:1], s[0:1] +; HEURRC-NEXT: v_mov_b64_e32 v[12:13], s[14:15] +; HEURRC-NEXT: v_mov_b64_e32 v[2:3], s[2:3] ; HEURRC-NEXT: s_nop 1 -; HEURRC-NEXT: v_mfma_f32_16x16x32_f16 v[0:3], v[0:3], v[4:7], v[8:11] cbsz:3 abid:2 blgp:1 +; HEURRC-NEXT: v_mfma_f32_16x16x32_f16 v[0:3], v[6:9], v[10:13], v[0:3] cbsz:3 abid:2 blgp:1 ; HEURRC-NEXT: s_nop 7 -; HEURRC-NEXT: global_store_dwordx4 v12, v[0:3], s[6:7] +; HEURRC-NEXT: global_store_dwordx4 v4, v[0:3], s[6:7] ; HEURRC-NEXT: s_endpgm ; ; VGPRRC-LABEL: test_mfma_f32_16x16x32_f16_no_agpr__vgprcd__flags: @@ -317,18 +317,18 @@ define amdgpu_kernel void @test_mfma_f32_16x16x32_f16_no_agpr__vgprcd__flags(ptr ; VGPRRC-NEXT: s_load_dwordx8 s[8:15], s[4:5], 0x34 ; VGPRRC-NEXT: s_load_dwordx4 s[0:3], s[4:5], 0x54 ; VGPRRC-NEXT: s_load_dwordx2 s[6:7], s[4:5], 0x24 -; VGPRRC-NEXT: v_mov_b32_e32 v12, 0 +; VGPRRC-NEXT: v_mov_b32_e32 v4, 0 ; VGPRRC-NEXT: s_waitcnt lgkmcnt(0) -; VGPRRC-NEXT: v_mov_b64_e32 v[0:1], s[8:9] -; VGPRRC-NEXT: v_mov_b64_e32 v[2:3], s[10:11] -; VGPRRC-NEXT: v_mov_b64_e32 v[4:5], s[12:13] -; VGPRRC-NEXT: v_mov_b64_e32 v[10:11], s[2:3] -; VGPRRC-NEXT: v_mov_b64_e32 v[6:7], s[14:15] -; VGPRRC-NEXT: v_mov_b64_e32 v[8:9], s[0:1] +; VGPRRC-NEXT: v_mov_b64_e32 v[6:7], s[8:9] +; VGPRRC-NEXT: v_mov_b64_e32 v[8:9], s[10:11] +; VGPRRC-NEXT: v_mov_b64_e32 v[10:11], s[12:13] +; VGPRRC-NEXT: v_mov_b64_e32 v[0:1], s[0:1] +; VGPRRC-NEXT: v_mov_b64_e32 v[12:13], s[14:15] +; VGPRRC-NEXT: v_mov_b64_e32 v[2:3], s[2:3] ; VGPRRC-NEXT: s_nop 1 -; VGPRRC-NEXT: v_mfma_f32_16x16x32_f16 v[0:3], v[0:3], v[4:7], v[8:11] cbsz:3 abid:2 blgp:1 +; VGPRRC-NEXT: v_mfma_f32_16x16x32_f16 v[0:3], v[6:9], v[10:13], v[0:3] cbsz:3 abid:2 blgp:1 ; VGPRRC-NEXT: s_nop 7 -; VGPRRC-NEXT: global_store_dwordx4 v12, v[0:3], s[6:7] +; VGPRRC-NEXT: global_store_dwordx4 v4, v[0:3], s[6:7] ; 
VGPRRC-NEXT: s_endpgm ; AGPR-LABEL: test_mfma_f32_16x16x32_f16_no_agpr__vgprcd__flags: ; AGPR: ; %bb.0: @@ -382,15 +382,15 @@ define amdgpu_kernel void @test_mfma_f32_32x32x16_f16(<8 x half> %arg0, <8 x hal ; SDAG: ; %bb.0: ; SDAG-NEXT: s_load_dwordx8 s[24:31], s[4:5], 0x24 ; SDAG-NEXT: s_load_dwordx16 s[8:23], s[4:5], 0x64 -; SDAG-NEXT: v_mov_b64_e32 v[8:9], 48 -; SDAG-NEXT: v_mov_b64_e32 v[10:11], 32 -; SDAG-NEXT: v_mov_b64_e32 v[12:13], 16 +; SDAG-NEXT: v_mov_b64_e32 v[0:1], 48 +; SDAG-NEXT: v_mov_b64_e32 v[2:3], 32 +; SDAG-NEXT: v_mov_b64_e32 v[4:5], 16 ; SDAG-NEXT: s_waitcnt lgkmcnt(0) -; SDAG-NEXT: v_mov_b64_e32 v[0:1], s[24:25] -; SDAG-NEXT: v_mov_b64_e32 v[2:3], s[26:27] -; SDAG-NEXT: v_mov_b64_e32 v[4:5], s[28:29] +; SDAG-NEXT: v_mov_b64_e32 v[8:9], s[24:25] +; SDAG-NEXT: v_mov_b64_e32 v[10:11], s[26:27] +; SDAG-NEXT: v_mov_b64_e32 v[12:13], s[28:29] ; SDAG-NEXT: v_accvgpr_write_b32 a0, s8 -; SDAG-NEXT: v_mov_b64_e32 v[6:7], s[30:31] +; SDAG-NEXT: v_mov_b64_e32 v[14:15], s[30:31] ; SDAG-NEXT: v_accvgpr_write_b32 a1, s9 ; SDAG-NEXT: v_accvgpr_write_b32 a2, s10 ; SDAG-NEXT: v_accvgpr_write_b32 a3, s11 @@ -408,40 +408,39 @@ define amdgpu_kernel void @test_mfma_f32_32x32x16_f16(<8 x half> %arg0, <8 x hal ; SDAG-NEXT: v_accvgpr_write_b32 a15, s23 ; SDAG-NEXT: v_mov_b32_e32 v16, s16 ; SDAG-NEXT: v_mov_b32_e32 v17, s17 -; SDAG-NEXT: v_mfma_f32_32x32x16_f16 a[16:31], v[0:3], v[4:7], a[0:15] +; SDAG-NEXT: v_mfma_f32_32x32x16_f16 a[16:31], v[8:11], v[12:15], a[0:15] ; SDAG-NEXT: v_mov_b32_e32 v18, s18 ; SDAG-NEXT: v_mov_b32_e32 v19, s19 -; SDAG-NEXT: v_mov_b32_e32 v0, s20 -; SDAG-NEXT: v_mov_b32_e32 v1, s21 -; SDAG-NEXT: v_mov_b32_e32 v2, s22 -; SDAG-NEXT: v_mov_b32_e32 v3, s23 -; SDAG-NEXT: v_mov_b64_e32 v[14:15], 0 +; SDAG-NEXT: v_mov_b32_e32 v8, s20 +; SDAG-NEXT: v_mov_b32_e32 v9, s21 +; SDAG-NEXT: v_mov_b32_e32 v10, s22 +; SDAG-NEXT: v_mov_b32_e32 v11, s23 +; SDAG-NEXT: v_mov_b64_e32 v[6:7], 0 ; SDAG-NEXT: s_nop 4 -; SDAG-NEXT: global_store_dwordx4 v[8:9], a[28:31], off sc0 sc1 +; SDAG-NEXT: global_store_dwordx4 v[0:1], a[28:31], off sc0 sc1 ; SDAG-NEXT: s_waitcnt vmcnt(0) -; SDAG-NEXT: global_store_dwordx4 v[10:11], a[24:27], off sc0 sc1 +; SDAG-NEXT: global_store_dwordx4 v[2:3], a[24:27], off sc0 sc1 ; SDAG-NEXT: s_waitcnt vmcnt(0) -; SDAG-NEXT: global_store_dwordx4 v[12:13], a[20:23], off sc0 sc1 +; SDAG-NEXT: global_store_dwordx4 v[4:5], a[20:23], off sc0 sc1 ; SDAG-NEXT: s_waitcnt vmcnt(0) -; SDAG-NEXT: global_store_dwordx4 v[14:15], a[16:19], off sc0 sc1 +; SDAG-NEXT: global_store_dwordx4 v[6:7], a[16:19], off sc0 sc1 ; SDAG-NEXT: s_waitcnt vmcnt(0) -; SDAG-NEXT: global_store_dwordx4 v[10:11], v[16:19], off sc0 sc1 +; SDAG-NEXT: global_store_dwordx4 v[2:3], v[16:19], off sc0 sc1 ; SDAG-NEXT: s_waitcnt vmcnt(0) -; SDAG-NEXT: global_store_dwordx4 v[8:9], v[0:3], off sc0 sc1 +; SDAG-NEXT: global_store_dwordx4 v[0:1], v[8:11], off sc0 sc1 ; SDAG-NEXT: s_waitcnt vmcnt(0) -; SDAG-NEXT: s_nop 0 ; SDAG-NEXT: v_mov_b32_e32 v0, s8 ; SDAG-NEXT: v_mov_b32_e32 v1, s9 ; SDAG-NEXT: v_mov_b32_e32 v2, s10 ; SDAG-NEXT: v_mov_b32_e32 v3, s11 -; SDAG-NEXT: global_store_dwordx4 v[14:15], v[0:3], off sc0 sc1 +; SDAG-NEXT: global_store_dwordx4 v[6:7], v[0:3], off sc0 sc1 ; SDAG-NEXT: s_waitcnt vmcnt(0) ; SDAG-NEXT: s_nop 0 ; SDAG-NEXT: v_mov_b32_e32 v0, s12 ; SDAG-NEXT: v_mov_b32_e32 v1, s13 ; SDAG-NEXT: v_mov_b32_e32 v2, s14 ; SDAG-NEXT: v_mov_b32_e32 v3, s15 -; SDAG-NEXT: global_store_dwordx4 v[12:13], v[0:3], off sc0 sc1 +; SDAG-NEXT: global_store_dwordx4 v[4:5], v[0:3], off sc0 sc1 ; SDAG-NEXT: 
s_waitcnt vmcnt(0) ; SDAG-NEXT: s_endpgm ; @@ -508,15 +507,15 @@ define amdgpu_kernel void @test_mfma_f32_32x32x16_f16(<8 x half> %arg0, <8 x hal ; HEURRC: ; %bb.0: ; HEURRC-NEXT: s_load_dwordx8 s[24:31], s[4:5], 0x24 ; HEURRC-NEXT: s_load_dwordx16 s[8:23], s[4:5], 0x64 -; HEURRC-NEXT: v_mov_b64_e32 v[8:9], 48 -; HEURRC-NEXT: v_mov_b64_e32 v[10:11], 32 -; HEURRC-NEXT: v_mov_b64_e32 v[12:13], 16 +; HEURRC-NEXT: v_mov_b64_e32 v[0:1], 48 +; HEURRC-NEXT: v_mov_b64_e32 v[2:3], 32 +; HEURRC-NEXT: v_mov_b64_e32 v[4:5], 16 ; HEURRC-NEXT: s_waitcnt lgkmcnt(0) -; HEURRC-NEXT: v_mov_b64_e32 v[0:1], s[24:25] -; HEURRC-NEXT: v_mov_b64_e32 v[2:3], s[26:27] -; HEURRC-NEXT: v_mov_b64_e32 v[4:5], s[28:29] +; HEURRC-NEXT: v_mov_b64_e32 v[8:9], s[24:25] +; HEURRC-NEXT: v_mov_b64_e32 v[10:11], s[26:27] +; HEURRC-NEXT: v_mov_b64_e32 v[12:13], s[28:29] ; HEURRC-NEXT: v_accvgpr_write_b32 a0, s8 -; HEURRC-NEXT: v_mov_b64_e32 v[6:7], s[30:31] +; HEURRC-NEXT: v_mov_b64_e32 v[14:15], s[30:31] ; HEURRC-NEXT: v_accvgpr_write_b32 a1, s9 ; HEURRC-NEXT: v_accvgpr_write_b32 a2, s10 ; HEURRC-NEXT: v_accvgpr_write_b32 a3, s11 @@ -534,40 +533,39 @@ define amdgpu_kernel void @test_mfma_f32_32x32x16_f16(<8 x half> %arg0, <8 x hal ; HEURRC-NEXT: v_accvgpr_write_b32 a15, s23 ; HEURRC-NEXT: v_mov_b32_e32 v16, s16 ; HEURRC-NEXT: v_mov_b32_e32 v17, s17 -; HEURRC-NEXT: v_mfma_f32_32x32x16_f16 a[16:31], v[0:3], v[4:7], a[0:15] +; HEURRC-NEXT: v_mfma_f32_32x32x16_f16 a[16:31], v[8:11], v[12:15], a[0:15] ; HEURRC-NEXT: v_mov_b32_e32 v18, s18 ; HEURRC-NEXT: v_mov_b32_e32 v19, s19 -; HEURRC-NEXT: v_mov_b32_e32 v0, s20 -; HEURRC-NEXT: v_mov_b32_e32 v1, s21 -; HEURRC-NEXT: v_mov_b32_e32 v2, s22 -; HEURRC-NEXT: v_mov_b32_e32 v3, s23 -; HEURRC-NEXT: v_mov_b64_e32 v[14:15], 0 +; HEURRC-NEXT: v_mov_b32_e32 v8, s20 +; HEURRC-NEXT: v_mov_b32_e32 v9, s21 +; HEURRC-NEXT: v_mov_b32_e32 v10, s22 +; HEURRC-NEXT: v_mov_b32_e32 v11, s23 +; HEURRC-NEXT: v_mov_b64_e32 v[6:7], 0 ; HEURRC-NEXT: s_nop 4 -; HEURRC-NEXT: global_store_dwordx4 v[8:9], a[28:31], off sc0 sc1 +; HEURRC-NEXT: global_store_dwordx4 v[0:1], a[28:31], off sc0 sc1 ; HEURRC-NEXT: s_waitcnt vmcnt(0) -; HEURRC-NEXT: global_store_dwordx4 v[10:11], a[24:27], off sc0 sc1 +; HEURRC-NEXT: global_store_dwordx4 v[2:3], a[24:27], off sc0 sc1 ; HEURRC-NEXT: s_waitcnt vmcnt(0) -; HEURRC-NEXT: global_store_dwordx4 v[12:13], a[20:23], off sc0 sc1 +; HEURRC-NEXT: global_store_dwordx4 v[4:5], a[20:23], off sc0 sc1 ; HEURRC-NEXT: s_waitcnt vmcnt(0) -; HEURRC-NEXT: global_store_dwordx4 v[14:15], a[16:19], off sc0 sc1 +; HEURRC-NEXT: global_store_dwordx4 v[6:7], a[16:19], off sc0 sc1 ; HEURRC-NEXT: s_waitcnt vmcnt(0) -; HEURRC-NEXT: global_store_dwordx4 v[10:11], v[16:19], off sc0 sc1 +; HEURRC-NEXT: global_store_dwordx4 v[2:3], v[16:19], off sc0 sc1 ; HEURRC-NEXT: s_waitcnt vmcnt(0) -; HEURRC-NEXT: global_store_dwordx4 v[8:9], v[0:3], off sc0 sc1 +; HEURRC-NEXT: global_store_dwordx4 v[0:1], v[8:11], off sc0 sc1 ; HEURRC-NEXT: s_waitcnt vmcnt(0) -; HEURRC-NEXT: s_nop 0 ; HEURRC-NEXT: v_mov_b32_e32 v0, s8 ; HEURRC-NEXT: v_mov_b32_e32 v1, s9 ; HEURRC-NEXT: v_mov_b32_e32 v2, s10 ; HEURRC-NEXT: v_mov_b32_e32 v3, s11 -; HEURRC-NEXT: global_store_dwordx4 v[14:15], v[0:3], off sc0 sc1 +; HEURRC-NEXT: global_store_dwordx4 v[6:7], v[0:3], off sc0 sc1 ; HEURRC-NEXT: s_waitcnt vmcnt(0) ; HEURRC-NEXT: s_nop 0 ; HEURRC-NEXT: v_mov_b32_e32 v0, s12 ; HEURRC-NEXT: v_mov_b32_e32 v1, s13 ; HEURRC-NEXT: v_mov_b32_e32 v2, s14 ; HEURRC-NEXT: v_mov_b32_e32 v3, s15 -; HEURRC-NEXT: global_store_dwordx4 v[12:13], v[0:3], off sc0 
sc1 +; HEURRC-NEXT: global_store_dwordx4 v[4:5], v[0:3], off sc0 sc1 ; HEURRC-NEXT: s_waitcnt vmcnt(0) ; HEURRC-NEXT: s_endpgm ; @@ -575,15 +573,15 @@ define amdgpu_kernel void @test_mfma_f32_32x32x16_f16(<8 x half> %arg0, <8 x hal ; VGPRRC: ; %bb.0: ; VGPRRC-NEXT: s_load_dwordx8 s[24:31], s[4:5], 0x24 ; VGPRRC-NEXT: s_load_dwordx16 s[8:23], s[4:5], 0x64 -; VGPRRC-NEXT: v_mov_b64_e32 v[40:41], 48 -; VGPRRC-NEXT: v_mov_b64_e32 v[42:43], 32 -; VGPRRC-NEXT: v_mov_b64_e32 v[44:45], 16 +; VGPRRC-NEXT: v_mov_b64_e32 v[32:33], 48 +; VGPRRC-NEXT: v_mov_b64_e32 v[34:35], 32 +; VGPRRC-NEXT: v_mov_b64_e32 v[36:37], 16 ; VGPRRC-NEXT: s_waitcnt lgkmcnt(0) -; VGPRRC-NEXT: v_mov_b64_e32 v[34:35], s[26:27] -; VGPRRC-NEXT: v_mov_b64_e32 v[32:33], s[24:25] -; VGPRRC-NEXT: v_mov_b64_e32 v[38:39], s[30:31] +; VGPRRC-NEXT: v_mov_b64_e32 v[42:43], s[26:27] +; VGPRRC-NEXT: v_mov_b64_e32 v[40:41], s[24:25] +; VGPRRC-NEXT: v_mov_b64_e32 v[46:47], s[30:31] ; VGPRRC-NEXT: v_mov_b64_e32 v[0:1], s[8:9] -; VGPRRC-NEXT: v_mov_b64_e32 v[36:37], s[28:29] +; VGPRRC-NEXT: v_mov_b64_e32 v[44:45], s[28:29] ; VGPRRC-NEXT: v_mov_b64_e32 v[2:3], s[10:11] ; VGPRRC-NEXT: v_mov_b64_e32 v[4:5], s[12:13] ; VGPRRC-NEXT: v_mov_b64_e32 v[6:7], s[14:15] @@ -593,40 +591,40 @@ define amdgpu_kernel void @test_mfma_f32_32x32x16_f16(<8 x half> %arg0, <8 x hal ; VGPRRC-NEXT: v_mov_b64_e32 v[14:15], s[22:23] ; VGPRRC-NEXT: v_mov_b32_e32 v48, s16 ; VGPRRC-NEXT: v_mov_b32_e32 v49, s17 -; VGPRRC-NEXT: v_mfma_f32_32x32x16_f16 v[16:31], v[32:35], v[36:39], v[0:15] +; VGPRRC-NEXT: v_mfma_f32_32x32x16_f16 v[16:31], v[40:43], v[44:47], v[0:15] ; VGPRRC-NEXT: v_mov_b32_e32 v50, s18 ; VGPRRC-NEXT: v_mov_b32_e32 v51, s19 -; VGPRRC-NEXT: v_mov_b64_e32 v[46:47], 0 +; VGPRRC-NEXT: v_mov_b64_e32 v[38:39], 0 ; VGPRRC-NEXT: s_nop 8 -; VGPRRC-NEXT: global_store_dwordx4 v[40:41], v[28:31], off sc0 sc1 +; VGPRRC-NEXT: global_store_dwordx4 v[32:33], v[28:31], off sc0 sc1 ; VGPRRC-NEXT: s_waitcnt vmcnt(0) -; VGPRRC-NEXT: global_store_dwordx4 v[42:43], v[24:27], off sc0 sc1 +; VGPRRC-NEXT: global_store_dwordx4 v[34:35], v[24:27], off sc0 sc1 ; VGPRRC-NEXT: s_waitcnt vmcnt(0) -; VGPRRC-NEXT: global_store_dwordx4 v[44:45], v[20:23], off sc0 sc1 +; VGPRRC-NEXT: global_store_dwordx4 v[36:37], v[20:23], off sc0 sc1 ; VGPRRC-NEXT: s_waitcnt vmcnt(0) -; VGPRRC-NEXT: global_store_dwordx4 v[46:47], v[16:19], off sc0 sc1 +; VGPRRC-NEXT: global_store_dwordx4 v[38:39], v[16:19], off sc0 sc1 ; VGPRRC-NEXT: s_waitcnt vmcnt(0) ; VGPRRC-NEXT: v_mov_b32_e32 v0, s20 ; VGPRRC-NEXT: v_mov_b32_e32 v1, s21 ; VGPRRC-NEXT: v_mov_b32_e32 v2, s22 ; VGPRRC-NEXT: v_mov_b32_e32 v3, s23 -; VGPRRC-NEXT: global_store_dwordx4 v[42:43], v[48:51], off sc0 sc1 +; VGPRRC-NEXT: global_store_dwordx4 v[34:35], v[48:51], off sc0 sc1 ; VGPRRC-NEXT: s_waitcnt vmcnt(0) -; VGPRRC-NEXT: global_store_dwordx4 v[40:41], v[0:3], off sc0 sc1 +; VGPRRC-NEXT: global_store_dwordx4 v[32:33], v[0:3], off sc0 sc1 ; VGPRRC-NEXT: s_waitcnt vmcnt(0) ; VGPRRC-NEXT: s_nop 0 ; VGPRRC-NEXT: v_mov_b32_e32 v0, s8 ; VGPRRC-NEXT: v_mov_b32_e32 v1, s9 ; VGPRRC-NEXT: v_mov_b32_e32 v2, s10 ; VGPRRC-NEXT: v_mov_b32_e32 v3, s11 -; VGPRRC-NEXT: global_store_dwordx4 v[46:47], v[0:3], off sc0 sc1 +; VGPRRC-NEXT: global_store_dwordx4 v[38:39], v[0:3], off sc0 sc1 ; VGPRRC-NEXT: s_waitcnt vmcnt(0) ; VGPRRC-NEXT: s_nop 0 ; VGPRRC-NEXT: v_mov_b32_e32 v0, s12 ; VGPRRC-NEXT: v_mov_b32_e32 v1, s13 ; VGPRRC-NEXT: v_mov_b32_e32 v2, s14 ; VGPRRC-NEXT: v_mov_b32_e32 v3, s15 -; VGPRRC-NEXT: global_store_dwordx4 v[44:45], v[0:3], off sc0 sc1 +; 
VGPRRC-NEXT: global_store_dwordx4 v[36:37], v[0:3], off sc0 sc1 ; VGPRRC-NEXT: s_waitcnt vmcnt(0) ; VGPRRC-NEXT: s_endpgm ; AGPR-LABEL: test_mfma_f32_32x32x16_f16: @@ -765,15 +763,15 @@ define amdgpu_kernel void @test_mfma_f32_32x32x16_f16__flags(<8 x half> %arg0, < ; SDAG: ; %bb.0: ; SDAG-NEXT: s_load_dwordx8 s[24:31], s[4:5], 0x24 ; SDAG-NEXT: s_load_dwordx16 s[8:23], s[4:5], 0x64 -; SDAG-NEXT: v_mov_b64_e32 v[8:9], 48 -; SDAG-NEXT: v_mov_b64_e32 v[10:11], 32 -; SDAG-NEXT: v_mov_b64_e32 v[12:13], 16 +; SDAG-NEXT: v_mov_b64_e32 v[0:1], 48 +; SDAG-NEXT: v_mov_b64_e32 v[2:3], 32 +; SDAG-NEXT: v_mov_b64_e32 v[4:5], 16 ; SDAG-NEXT: s_waitcnt lgkmcnt(0) -; SDAG-NEXT: v_mov_b64_e32 v[0:1], s[24:25] -; SDAG-NEXT: v_mov_b64_e32 v[2:3], s[26:27] -; SDAG-NEXT: v_mov_b64_e32 v[4:5], s[28:29] +; SDAG-NEXT: v_mov_b64_e32 v[8:9], s[24:25] +; SDAG-NEXT: v_mov_b64_e32 v[10:11], s[26:27] +; SDAG-NEXT: v_mov_b64_e32 v[12:13], s[28:29] ; SDAG-NEXT: v_accvgpr_write_b32 a0, s8 -; SDAG-NEXT: v_mov_b64_e32 v[6:7], s[30:31] +; SDAG-NEXT: v_mov_b64_e32 v[14:15], s[30:31] ; SDAG-NEXT: v_accvgpr_write_b32 a1, s9 ; SDAG-NEXT: v_accvgpr_write_b32 a2, s10 ; SDAG-NEXT: v_accvgpr_write_b32 a3, s11 @@ -791,40 +789,39 @@ define amdgpu_kernel void @test_mfma_f32_32x32x16_f16__flags(<8 x half> %arg0, < ; SDAG-NEXT: v_accvgpr_write_b32 a15, s23 ; SDAG-NEXT: v_mov_b32_e32 v16, s16 ; SDAG-NEXT: v_mov_b32_e32 v17, s17 -; SDAG-NEXT: v_mfma_f32_32x32x16_f16 a[16:31], v[0:3], v[4:7], a[0:15] cbsz:2 abid:3 blgp:1 +; SDAG-NEXT: v_mfma_f32_32x32x16_f16 a[16:31], v[8:11], v[12:15], a[0:15] cbsz:2 abid:3 blgp:1 ; SDAG-NEXT: v_mov_b32_e32 v18, s18 ; SDAG-NEXT: v_mov_b32_e32 v19, s19 -; SDAG-NEXT: v_mov_b32_e32 v0, s20 -; SDAG-NEXT: v_mov_b32_e32 v1, s21 -; SDAG-NEXT: v_mov_b32_e32 v2, s22 -; SDAG-NEXT: v_mov_b32_e32 v3, s23 -; SDAG-NEXT: v_mov_b64_e32 v[14:15], 0 +; SDAG-NEXT: v_mov_b32_e32 v8, s20 +; SDAG-NEXT: v_mov_b32_e32 v9, s21 +; SDAG-NEXT: v_mov_b32_e32 v10, s22 +; SDAG-NEXT: v_mov_b32_e32 v11, s23 +; SDAG-NEXT: v_mov_b64_e32 v[6:7], 0 ; SDAG-NEXT: s_nop 4 -; SDAG-NEXT: global_store_dwordx4 v[8:9], a[28:31], off sc0 sc1 +; SDAG-NEXT: global_store_dwordx4 v[0:1], a[28:31], off sc0 sc1 ; SDAG-NEXT: s_waitcnt vmcnt(0) -; SDAG-NEXT: global_store_dwordx4 v[10:11], a[24:27], off sc0 sc1 +; SDAG-NEXT: global_store_dwordx4 v[2:3], a[24:27], off sc0 sc1 ; SDAG-NEXT: s_waitcnt vmcnt(0) -; SDAG-NEXT: global_store_dwordx4 v[12:13], a[20:23], off sc0 sc1 +; SDAG-NEXT: global_store_dwordx4 v[4:5], a[20:23], off sc0 sc1 ; SDAG-NEXT: s_waitcnt vmcnt(0) -; SDAG-NEXT: global_store_dwordx4 v[14:15], a[16:19], off sc0 sc1 +; SDAG-NEXT: global_store_dwordx4 v[6:7], a[16:19], off sc0 sc1 ; SDAG-NEXT: s_waitcnt vmcnt(0) -; SDAG-NEXT: global_store_dwordx4 v[10:11], v[16:19], off sc0 sc1 +; SDAG-NEXT: global_store_dwordx4 v[2:3], v[16:19], off sc0 sc1 ; SDAG-NEXT: s_waitcnt vmcnt(0) -; SDAG-NEXT: global_store_dwordx4 v[8:9], v[0:3], off sc0 sc1 +; SDAG-NEXT: global_store_dwordx4 v[0:1], v[8:11], off sc0 sc1 ; SDAG-NEXT: s_waitcnt vmcnt(0) -; SDAG-NEXT: s_nop 0 ; SDAG-NEXT: v_mov_b32_e32 v0, s8 ; SDAG-NEXT: v_mov_b32_e32 v1, s9 ; SDAG-NEXT: v_mov_b32_e32 v2, s10 ; SDAG-NEXT: v_mov_b32_e32 v3, s11 -; SDAG-NEXT: global_store_dwordx4 v[14:15], v[0:3], off sc0 sc1 +; SDAG-NEXT: global_store_dwordx4 v[6:7], v[0:3], off sc0 sc1 ; SDAG-NEXT: s_waitcnt vmcnt(0) ; SDAG-NEXT: s_nop 0 ; SDAG-NEXT: v_mov_b32_e32 v0, s12 ; SDAG-NEXT: v_mov_b32_e32 v1, s13 ; SDAG-NEXT: v_mov_b32_e32 v2, s14 ; SDAG-NEXT: v_mov_b32_e32 v3, s15 -; SDAG-NEXT: global_store_dwordx4 
v[12:13], v[0:3], off sc0 sc1 +; SDAG-NEXT: global_store_dwordx4 v[4:5], v[0:3], off sc0 sc1 ; SDAG-NEXT: s_waitcnt vmcnt(0) ; SDAG-NEXT: s_endpgm ; @@ -891,15 +888,15 @@ define amdgpu_kernel void @test_mfma_f32_32x32x16_f16__flags(<8 x half> %arg0, < ; HEURRC: ; %bb.0: ; HEURRC-NEXT: s_load_dwordx8 s[24:31], s[4:5], 0x24 ; HEURRC-NEXT: s_load_dwordx16 s[8:23], s[4:5], 0x64 -; HEURRC-NEXT: v_mov_b64_e32 v[8:9], 48 -; HEURRC-NEXT: v_mov_b64_e32 v[10:11], 32 -; HEURRC-NEXT: v_mov_b64_e32 v[12:13], 16 +; HEURRC-NEXT: v_mov_b64_e32 v[0:1], 48 +; HEURRC-NEXT: v_mov_b64_e32 v[2:3], 32 +; HEURRC-NEXT: v_mov_b64_e32 v[4:5], 16 ; HEURRC-NEXT: s_waitcnt lgkmcnt(0) -; HEURRC-NEXT: v_mov_b64_e32 v[0:1], s[24:25] -; HEURRC-NEXT: v_mov_b64_e32 v[2:3], s[26:27] -; HEURRC-NEXT: v_mov_b64_e32 v[4:5], s[28:29] +; HEURRC-NEXT: v_mov_b64_e32 v[8:9], s[24:25] +; HEURRC-NEXT: v_mov_b64_e32 v[10:11], s[26:27] +; HEURRC-NEXT: v_mov_b64_e32 v[12:13], s[28:29] ; HEURRC-NEXT: v_accvgpr_write_b32 a0, s8 -; HEURRC-NEXT: v_mov_b64_e32 v[6:7], s[30:31] +; HEURRC-NEXT: v_mov_b64_e32 v[14:15], s[30:31] ; HEURRC-NEXT: v_accvgpr_write_b32 a1, s9 ; HEURRC-NEXT: v_accvgpr_write_b32 a2, s10 ; HEURRC-NEXT: v_accvgpr_write_b32 a3, s11 @@ -917,40 +914,39 @@ define amdgpu_kernel void @test_mfma_f32_32x32x16_f16__flags(<8 x half> %arg0, < ; HEURRC-NEXT: v_accvgpr_write_b32 a15, s23 ; HEURRC-NEXT: v_mov_b32_e32 v16, s16 ; HEURRC-NEXT: v_mov_b32_e32 v17, s17 -; HEURRC-NEXT: v_mfma_f32_32x32x16_f16 a[16:31], v[0:3], v[4:7], a[0:15] cbsz:2 abid:3 blgp:1 +; HEURRC-NEXT: v_mfma_f32_32x32x16_f16 a[16:31], v[8:11], v[12:15], a[0:15] cbsz:2 abid:3 blgp:1 ; HEURRC-NEXT: v_mov_b32_e32 v18, s18 ; HEURRC-NEXT: v_mov_b32_e32 v19, s19 -; HEURRC-NEXT: v_mov_b32_e32 v0, s20 -; HEURRC-NEXT: v_mov_b32_e32 v1, s21 -; HEURRC-NEXT: v_mov_b32_e32 v2, s22 -; HEURRC-NEXT: v_mov_b32_e32 v3, s23 -; HEURRC-NEXT: v_mov_b64_e32 v[14:15], 0 +; HEURRC-NEXT: v_mov_b32_e32 v8, s20 +; HEURRC-NEXT: v_mov_b32_e32 v9, s21 +; HEURRC-NEXT: v_mov_b32_e32 v10, s22 +; HEURRC-NEXT: v_mov_b32_e32 v11, s23 +; HEURRC-NEXT: v_mov_b64_e32 v[6:7], 0 ; HEURRC-NEXT: s_nop 4 -; HEURRC-NEXT: global_store_dwordx4 v[8:9], a[28:31], off sc0 sc1 +; HEURRC-NEXT: global_store_dwordx4 v[0:1], a[28:31], off sc0 sc1 ; HEURRC-NEXT: s_waitcnt vmcnt(0) -; HEURRC-NEXT: global_store_dwordx4 v[10:11], a[24:27], off sc0 sc1 +; HEURRC-NEXT: global_store_dwordx4 v[2:3], a[24:27], off sc0 sc1 ; HEURRC-NEXT: s_waitcnt vmcnt(0) -; HEURRC-NEXT: global_store_dwordx4 v[12:13], a[20:23], off sc0 sc1 +; HEURRC-NEXT: global_store_dwordx4 v[4:5], a[20:23], off sc0 sc1 ; HEURRC-NEXT: s_waitcnt vmcnt(0) -; HEURRC-NEXT: global_store_dwordx4 v[14:15], a[16:19], off sc0 sc1 +; HEURRC-NEXT: global_store_dwordx4 v[6:7], a[16:19], off sc0 sc1 ; HEURRC-NEXT: s_waitcnt vmcnt(0) -; HEURRC-NEXT: global_store_dwordx4 v[10:11], v[16:19], off sc0 sc1 +; HEURRC-NEXT: global_store_dwordx4 v[2:3], v[16:19], off sc0 sc1 ; HEURRC-NEXT: s_waitcnt vmcnt(0) -; HEURRC-NEXT: global_store_dwordx4 v[8:9], v[0:3], off sc0 sc1 +; HEURRC-NEXT: global_store_dwordx4 v[0:1], v[8:11], off sc0 sc1 ; HEURRC-NEXT: s_waitcnt vmcnt(0) -; HEURRC-NEXT: s_nop 0 ; HEURRC-NEXT: v_mov_b32_e32 v0, s8 ; HEURRC-NEXT: v_mov_b32_e32 v1, s9 ; HEURRC-NEXT: v_mov_b32_e32 v2, s10 ; HEURRC-NEXT: v_mov_b32_e32 v3, s11 -; HEURRC-NEXT: global_store_dwordx4 v[14:15], v[0:3], off sc0 sc1 +; HEURRC-NEXT: global_store_dwordx4 v[6:7], v[0:3], off sc0 sc1 ; HEURRC-NEXT: s_waitcnt vmcnt(0) ; HEURRC-NEXT: s_nop 0 ; HEURRC-NEXT: v_mov_b32_e32 v0, s12 ; HEURRC-NEXT: 
v_mov_b32_e32 v1, s13 ; HEURRC-NEXT: v_mov_b32_e32 v2, s14 ; HEURRC-NEXT: v_mov_b32_e32 v3, s15 -; HEURRC-NEXT: global_store_dwordx4 v[12:13], v[0:3], off sc0 sc1 +; HEURRC-NEXT: global_store_dwordx4 v[4:5], v[0:3], off sc0 sc1 ; HEURRC-NEXT: s_waitcnt vmcnt(0) ; HEURRC-NEXT: s_endpgm ; @@ -958,15 +954,15 @@ define amdgpu_kernel void @test_mfma_f32_32x32x16_f16__flags(<8 x half> %arg0, < ; VGPRRC: ; %bb.0: ; VGPRRC-NEXT: s_load_dwordx8 s[24:31], s[4:5], 0x24 ; VGPRRC-NEXT: s_load_dwordx16 s[8:23], s[4:5], 0x64 -; VGPRRC-NEXT: v_mov_b64_e32 v[40:41], 48 -; VGPRRC-NEXT: v_mov_b64_e32 v[42:43], 32 -; VGPRRC-NEXT: v_mov_b64_e32 v[44:45], 16 +; VGPRRC-NEXT: v_mov_b64_e32 v[32:33], 48 +; VGPRRC-NEXT: v_mov_b64_e32 v[34:35], 32 +; VGPRRC-NEXT: v_mov_b64_e32 v[36:37], 16 ; VGPRRC-NEXT: s_waitcnt lgkmcnt(0) -; VGPRRC-NEXT: v_mov_b64_e32 v[34:35], s[26:27] -; VGPRRC-NEXT: v_mov_b64_e32 v[32:33], s[24:25] -; VGPRRC-NEXT: v_mov_b64_e32 v[38:39], s[30:31] +; VGPRRC-NEXT: v_mov_b64_e32 v[42:43], s[26:27] +; VGPRRC-NEXT: v_mov_b64_e32 v[40:41], s[24:25] +; VGPRRC-NEXT: v_mov_b64_e32 v[46:47], s[30:31] ; VGPRRC-NEXT: v_mov_b64_e32 v[0:1], s[8:9] -; VGPRRC-NEXT: v_mov_b64_e32 v[36:37], s[28:29] +; VGPRRC-NEXT: v_mov_b64_e32 v[44:45], s[28:29] ; VGPRRC-NEXT: v_mov_b64_e32 v[2:3], s[10:11] ; VGPRRC-NEXT: v_mov_b64_e32 v[4:5], s[12:13] ; VGPRRC-NEXT: v_mov_b64_e32 v[6:7], s[14:15] @@ -976,40 +972,40 @@ define amdgpu_kernel void @test_mfma_f32_32x32x16_f16__flags(<8 x half> %arg0, < ; VGPRRC-NEXT: v_mov_b64_e32 v[14:15], s[22:23] ; VGPRRC-NEXT: v_mov_b32_e32 v48, s16 ; VGPRRC-NEXT: v_mov_b32_e32 v49, s17 -; VGPRRC-NEXT: v_mfma_f32_32x32x16_f16 v[16:31], v[32:35], v[36:39], v[0:15] cbsz:2 abid:3 blgp:1 +; VGPRRC-NEXT: v_mfma_f32_32x32x16_f16 v[16:31], v[40:43], v[44:47], v[0:15] cbsz:2 abid:3 blgp:1 ; VGPRRC-NEXT: v_mov_b32_e32 v50, s18 ; VGPRRC-NEXT: v_mov_b32_e32 v51, s19 -; VGPRRC-NEXT: v_mov_b64_e32 v[46:47], 0 +; VGPRRC-NEXT: v_mov_b64_e32 v[38:39], 0 ; VGPRRC-NEXT: s_nop 8 -; VGPRRC-NEXT: global_store_dwordx4 v[40:41], v[28:31], off sc0 sc1 +; VGPRRC-NEXT: global_store_dwordx4 v[32:33], v[28:31], off sc0 sc1 ; VGPRRC-NEXT: s_waitcnt vmcnt(0) -; VGPRRC-NEXT: global_store_dwordx4 v[42:43], v[24:27], off sc0 sc1 +; VGPRRC-NEXT: global_store_dwordx4 v[34:35], v[24:27], off sc0 sc1 ; VGPRRC-NEXT: s_waitcnt vmcnt(0) -; VGPRRC-NEXT: global_store_dwordx4 v[44:45], v[20:23], off sc0 sc1 +; VGPRRC-NEXT: global_store_dwordx4 v[36:37], v[20:23], off sc0 sc1 ; VGPRRC-NEXT: s_waitcnt vmcnt(0) -; VGPRRC-NEXT: global_store_dwordx4 v[46:47], v[16:19], off sc0 sc1 +; VGPRRC-NEXT: global_store_dwordx4 v[38:39], v[16:19], off sc0 sc1 ; VGPRRC-NEXT: s_waitcnt vmcnt(0) ; VGPRRC-NEXT: v_mov_b32_e32 v0, s20 ; VGPRRC-NEXT: v_mov_b32_e32 v1, s21 ; VGPRRC-NEXT: v_mov_b32_e32 v2, s22 ; VGPRRC-NEXT: v_mov_b32_e32 v3, s23 -; VGPRRC-NEXT: global_store_dwordx4 v[42:43], v[48:51], off sc0 sc1 +; VGPRRC-NEXT: global_store_dwordx4 v[34:35], v[48:51], off sc0 sc1 ; VGPRRC-NEXT: s_waitcnt vmcnt(0) -; VGPRRC-NEXT: global_store_dwordx4 v[40:41], v[0:3], off sc0 sc1 +; VGPRRC-NEXT: global_store_dwordx4 v[32:33], v[0:3], off sc0 sc1 ; VGPRRC-NEXT: s_waitcnt vmcnt(0) ; VGPRRC-NEXT: s_nop 0 ; VGPRRC-NEXT: v_mov_b32_e32 v0, s8 ; VGPRRC-NEXT: v_mov_b32_e32 v1, s9 ; VGPRRC-NEXT: v_mov_b32_e32 v2, s10 ; VGPRRC-NEXT: v_mov_b32_e32 v3, s11 -; VGPRRC-NEXT: global_store_dwordx4 v[46:47], v[0:3], off sc0 sc1 +; VGPRRC-NEXT: global_store_dwordx4 v[38:39], v[0:3], off sc0 sc1 ; VGPRRC-NEXT: s_waitcnt vmcnt(0) ; VGPRRC-NEXT: s_nop 0 ; VGPRRC-NEXT: 
v_mov_b32_e32 v0, s12 ; VGPRRC-NEXT: v_mov_b32_e32 v1, s13 ; VGPRRC-NEXT: v_mov_b32_e32 v2, s14 ; VGPRRC-NEXT: v_mov_b32_e32 v3, s15 -; VGPRRC-NEXT: global_store_dwordx4 v[44:45], v[0:3], off sc0 sc1 +; VGPRRC-NEXT: global_store_dwordx4 v[36:37], v[0:3], off sc0 sc1 ; VGPRRC-NEXT: s_waitcnt vmcnt(0) ; VGPRRC-NEXT: s_endpgm ; AGPR-LABEL: test_mfma_f32_32x32x16_f16__flags: @@ -1489,13 +1485,13 @@ define amdgpu_kernel void @test_mfma_f32_32x32x16_f16__vgprcd(<8 x half> %arg0, ; SDAG-NEXT: s_load_dwordx8 s[24:31], s[4:5], 0x24 ; SDAG-NEXT: s_load_dwordx16 s[8:23], s[4:5], 0x64 ; SDAG-NEXT: s_load_dwordx2 s[0:1], s[4:5], 0xa4 -; SDAG-NEXT: v_mov_b32_e32 v44, 0 +; SDAG-NEXT: v_mov_b32_e32 v36, 0 ; SDAG-NEXT: s_waitcnt lgkmcnt(0) -; SDAG-NEXT: v_mov_b64_e32 v[34:35], s[26:27] -; SDAG-NEXT: v_mov_b64_e32 v[32:33], s[24:25] -; SDAG-NEXT: v_mov_b64_e32 v[38:39], s[30:31] +; SDAG-NEXT: v_mov_b64_e32 v[40:41], s[26:27] +; SDAG-NEXT: v_mov_b64_e32 v[38:39], s[24:25] +; SDAG-NEXT: v_mov_b64_e32 v[44:45], s[30:31] ; SDAG-NEXT: v_mov_b64_e32 v[30:31], s[22:23] -; SDAG-NEXT: v_mov_b64_e32 v[36:37], s[28:29] +; SDAG-NEXT: v_mov_b64_e32 v[42:43], s[28:29] ; SDAG-NEXT: v_mov_b64_e32 v[28:29], s[20:21] ; SDAG-NEXT: v_mov_b64_e32 v[26:27], s[18:19] ; SDAG-NEXT: v_mov_b64_e32 v[24:25], s[16:17] @@ -1503,41 +1499,41 @@ define amdgpu_kernel void @test_mfma_f32_32x32x16_f16__vgprcd(<8 x half> %arg0, ; SDAG-NEXT: v_mov_b64_e32 v[20:21], s[12:13] ; SDAG-NEXT: v_mov_b64_e32 v[18:19], s[10:11] ; SDAG-NEXT: v_mov_b64_e32 v[16:17], s[8:9] -; SDAG-NEXT: v_mov_b32_e32 v40, s20 -; SDAG-NEXT: v_mov_b32_e32 v41, s21 -; SDAG-NEXT: v_mfma_f32_32x32x16_f16 v[0:15], v[32:35], v[36:39], v[16:31] -; SDAG-NEXT: v_mov_b32_e32 v42, s22 -; SDAG-NEXT: v_mov_b32_e32 v43, s23 -; SDAG-NEXT: global_store_dwordx4 v44, v[40:43], s[0:1] offset:48 sc0 sc1 +; SDAG-NEXT: v_mov_b32_e32 v32, s20 +; SDAG-NEXT: v_mov_b32_e32 v33, s21 +; SDAG-NEXT: v_mfma_f32_32x32x16_f16 v[0:15], v[38:41], v[42:45], v[16:31] +; SDAG-NEXT: v_mov_b32_e32 v34, s22 +; SDAG-NEXT: v_mov_b32_e32 v35, s23 +; SDAG-NEXT: global_store_dwordx4 v36, v[32:35], s[0:1] offset:48 sc0 sc1 ; SDAG-NEXT: s_waitcnt vmcnt(0) ; SDAG-NEXT: s_nop 2 ; SDAG-NEXT: v_mov_b32_e32 v16, s16 ; SDAG-NEXT: v_mov_b32_e32 v17, s17 ; SDAG-NEXT: v_mov_b32_e32 v18, s18 ; SDAG-NEXT: v_mov_b32_e32 v19, s19 -; SDAG-NEXT: global_store_dwordx4 v44, v[16:19], s[0:1] offset:32 sc0 sc1 +; SDAG-NEXT: global_store_dwordx4 v36, v[16:19], s[0:1] offset:32 sc0 sc1 ; SDAG-NEXT: s_waitcnt vmcnt(0) ; SDAG-NEXT: s_nop 0 ; SDAG-NEXT: v_mov_b32_e32 v16, s12 ; SDAG-NEXT: v_mov_b32_e32 v17, s13 ; SDAG-NEXT: v_mov_b32_e32 v18, s14 ; SDAG-NEXT: v_mov_b32_e32 v19, s15 -; SDAG-NEXT: global_store_dwordx4 v44, v[16:19], s[0:1] offset:16 sc0 sc1 +; SDAG-NEXT: global_store_dwordx4 v36, v[16:19], s[0:1] offset:16 sc0 sc1 ; SDAG-NEXT: s_waitcnt vmcnt(0) ; SDAG-NEXT: s_nop 0 ; SDAG-NEXT: v_mov_b32_e32 v16, s8 ; SDAG-NEXT: v_mov_b32_e32 v17, s9 ; SDAG-NEXT: v_mov_b32_e32 v18, s10 ; SDAG-NEXT: v_mov_b32_e32 v19, s11 -; SDAG-NEXT: global_store_dwordx4 v44, v[16:19], s[0:1] sc0 sc1 +; SDAG-NEXT: global_store_dwordx4 v36, v[16:19], s[0:1] sc0 sc1 ; SDAG-NEXT: s_waitcnt vmcnt(0) -; SDAG-NEXT: global_store_dwordx4 v44, v[8:11], s[0:1] offset:32 sc0 sc1 +; SDAG-NEXT: global_store_dwordx4 v36, v[8:11], s[0:1] offset:32 sc0 sc1 ; SDAG-NEXT: s_waitcnt vmcnt(0) -; SDAG-NEXT: global_store_dwordx4 v44, v[12:15], s[0:1] offset:48 sc0 sc1 +; SDAG-NEXT: global_store_dwordx4 v36, v[12:15], s[0:1] offset:48 sc0 sc1 ; SDAG-NEXT: s_waitcnt vmcnt(0) -; 
SDAG-NEXT: global_store_dwordx4 v44, v[0:3], s[0:1] sc0 sc1 +; SDAG-NEXT: global_store_dwordx4 v36, v[0:3], s[0:1] sc0 sc1 ; SDAG-NEXT: s_waitcnt vmcnt(0) -; SDAG-NEXT: global_store_dwordx4 v44, v[4:7], s[0:1] offset:16 sc0 sc1 +; SDAG-NEXT: global_store_dwordx4 v36, v[4:7], s[0:1] offset:16 sc0 sc1 ; SDAG-NEXT: s_waitcnt vmcnt(0) ; SDAG-NEXT: s_endpgm ; @@ -1592,13 +1588,13 @@ define amdgpu_kernel void @test_mfma_f32_32x32x16_f16__vgprcd(<8 x half> %arg0, ; HEURRC-NEXT: s_load_dwordx8 s[24:31], s[4:5], 0x24 ; HEURRC-NEXT: s_load_dwordx16 s[8:23], s[4:5], 0x64 ; HEURRC-NEXT: s_load_dwordx2 s[0:1], s[4:5], 0xa4 -; HEURRC-NEXT: v_mov_b32_e32 v44, 0 +; HEURRC-NEXT: v_mov_b32_e32 v36, 0 ; HEURRC-NEXT: s_waitcnt lgkmcnt(0) -; HEURRC-NEXT: v_mov_b64_e32 v[34:35], s[26:27] -; HEURRC-NEXT: v_mov_b64_e32 v[32:33], s[24:25] -; HEURRC-NEXT: v_mov_b64_e32 v[38:39], s[30:31] +; HEURRC-NEXT: v_mov_b64_e32 v[40:41], s[26:27] +; HEURRC-NEXT: v_mov_b64_e32 v[38:39], s[24:25] +; HEURRC-NEXT: v_mov_b64_e32 v[44:45], s[30:31] ; HEURRC-NEXT: v_mov_b64_e32 v[30:31], s[22:23] -; HEURRC-NEXT: v_mov_b64_e32 v[36:37], s[28:29] +; HEURRC-NEXT: v_mov_b64_e32 v[42:43], s[28:29] ; HEURRC-NEXT: v_mov_b64_e32 v[28:29], s[20:21] ; HEURRC-NEXT: v_mov_b64_e32 v[26:27], s[18:19] ; HEURRC-NEXT: v_mov_b64_e32 v[24:25], s[16:17] @@ -1606,41 +1602,41 @@ define amdgpu_kernel void @test_mfma_f32_32x32x16_f16__vgprcd(<8 x half> %arg0, ; HEURRC-NEXT: v_mov_b64_e32 v[20:21], s[12:13] ; HEURRC-NEXT: v_mov_b64_e32 v[18:19], s[10:11] ; HEURRC-NEXT: v_mov_b64_e32 v[16:17], s[8:9] -; HEURRC-NEXT: v_mov_b32_e32 v40, s20 -; HEURRC-NEXT: v_mov_b32_e32 v41, s21 -; HEURRC-NEXT: v_mfma_f32_32x32x16_f16 v[0:15], v[32:35], v[36:39], v[16:31] -; HEURRC-NEXT: v_mov_b32_e32 v42, s22 -; HEURRC-NEXT: v_mov_b32_e32 v43, s23 -; HEURRC-NEXT: global_store_dwordx4 v44, v[40:43], s[0:1] offset:48 sc0 sc1 +; HEURRC-NEXT: v_mov_b32_e32 v32, s20 +; HEURRC-NEXT: v_mov_b32_e32 v33, s21 +; HEURRC-NEXT: v_mfma_f32_32x32x16_f16 v[0:15], v[38:41], v[42:45], v[16:31] +; HEURRC-NEXT: v_mov_b32_e32 v34, s22 +; HEURRC-NEXT: v_mov_b32_e32 v35, s23 +; HEURRC-NEXT: global_store_dwordx4 v36, v[32:35], s[0:1] offset:48 sc0 sc1 ; HEURRC-NEXT: s_waitcnt vmcnt(0) ; HEURRC-NEXT: s_nop 2 ; HEURRC-NEXT: v_mov_b32_e32 v16, s16 ; HEURRC-NEXT: v_mov_b32_e32 v17, s17 ; HEURRC-NEXT: v_mov_b32_e32 v18, s18 ; HEURRC-NEXT: v_mov_b32_e32 v19, s19 -; HEURRC-NEXT: global_store_dwordx4 v44, v[16:19], s[0:1] offset:32 sc0 sc1 +; HEURRC-NEXT: global_store_dwordx4 v36, v[16:19], s[0:1] offset:32 sc0 sc1 ; HEURRC-NEXT: s_waitcnt vmcnt(0) ; HEURRC-NEXT: s_nop 0 ; HEURRC-NEXT: v_mov_b32_e32 v16, s12 ; HEURRC-NEXT: v_mov_b32_e32 v17, s13 ; HEURRC-NEXT: v_mov_b32_e32 v18, s14 ; HEURRC-NEXT: v_mov_b32_e32 v19, s15 -; HEURRC-NEXT: global_store_dwordx4 v44, v[16:19], s[0:1] offset:16 sc0 sc1 +; HEURRC-NEXT: global_store_dwordx4 v36, v[16:19], s[0:1] offset:16 sc0 sc1 ; HEURRC-NEXT: s_waitcnt vmcnt(0) ; HEURRC-NEXT: s_nop 0 ; HEURRC-NEXT: v_mov_b32_e32 v16, s8 ; HEURRC-NEXT: v_mov_b32_e32 v17, s9 ; HEURRC-NEXT: v_mov_b32_e32 v18, s10 ; HEURRC-NEXT: v_mov_b32_e32 v19, s11 -; HEURRC-NEXT: global_store_dwordx4 v44, v[16:19], s[0:1] sc0 sc1 +; HEURRC-NEXT: global_store_dwordx4 v36, v[16:19], s[0:1] sc0 sc1 ; HEURRC-NEXT: s_waitcnt vmcnt(0) -; HEURRC-NEXT: global_store_dwordx4 v44, v[8:11], s[0:1] offset:32 sc0 sc1 +; HEURRC-NEXT: global_store_dwordx4 v36, v[8:11], s[0:1] offset:32 sc0 sc1 ; HEURRC-NEXT: s_waitcnt vmcnt(0) -; HEURRC-NEXT: global_store_dwordx4 v44, v[12:15], s[0:1] offset:48 sc0 sc1 +; 
HEURRC-NEXT: global_store_dwordx4 v36, v[12:15], s[0:1] offset:48 sc0 sc1 ; HEURRC-NEXT: s_waitcnt vmcnt(0) -; HEURRC-NEXT: global_store_dwordx4 v44, v[0:3], s[0:1] sc0 sc1 +; HEURRC-NEXT: global_store_dwordx4 v36, v[0:3], s[0:1] sc0 sc1 ; HEURRC-NEXT: s_waitcnt vmcnt(0) -; HEURRC-NEXT: global_store_dwordx4 v44, v[4:7], s[0:1] offset:16 sc0 sc1 +; HEURRC-NEXT: global_store_dwordx4 v36, v[4:7], s[0:1] offset:16 sc0 sc1 ; HEURRC-NEXT: s_waitcnt vmcnt(0) ; HEURRC-NEXT: s_endpgm ; @@ -1649,13 +1645,13 @@ define amdgpu_kernel void @test_mfma_f32_32x32x16_f16__vgprcd(<8 x half> %arg0, ; VGPRRC-NEXT: s_load_dwordx8 s[24:31], s[4:5], 0x24 ; VGPRRC-NEXT: s_load_dwordx16 s[8:23], s[4:5], 0x64 ; VGPRRC-NEXT: s_load_dwordx2 s[0:1], s[4:5], 0xa4 -; VGPRRC-NEXT: v_mov_b32_e32 v44, 0 +; VGPRRC-NEXT: v_mov_b32_e32 v36, 0 ; VGPRRC-NEXT: s_waitcnt lgkmcnt(0) -; VGPRRC-NEXT: v_mov_b64_e32 v[34:35], s[26:27] -; VGPRRC-NEXT: v_mov_b64_e32 v[32:33], s[24:25] -; VGPRRC-NEXT: v_mov_b64_e32 v[38:39], s[30:31] +; VGPRRC-NEXT: v_mov_b64_e32 v[40:41], s[26:27] +; VGPRRC-NEXT: v_mov_b64_e32 v[38:39], s[24:25] +; VGPRRC-NEXT: v_mov_b64_e32 v[44:45], s[30:31] ; VGPRRC-NEXT: v_mov_b64_e32 v[30:31], s[22:23] -; VGPRRC-NEXT: v_mov_b64_e32 v[36:37], s[28:29] +; VGPRRC-NEXT: v_mov_b64_e32 v[42:43], s[28:29] ; VGPRRC-NEXT: v_mov_b64_e32 v[28:29], s[20:21] ; VGPRRC-NEXT: v_mov_b64_e32 v[26:27], s[18:19] ; VGPRRC-NEXT: v_mov_b64_e32 v[24:25], s[16:17] @@ -1663,41 +1659,41 @@ define amdgpu_kernel void @test_mfma_f32_32x32x16_f16__vgprcd(<8 x half> %arg0, ; VGPRRC-NEXT: v_mov_b64_e32 v[20:21], s[12:13] ; VGPRRC-NEXT: v_mov_b64_e32 v[18:19], s[10:11] ; VGPRRC-NEXT: v_mov_b64_e32 v[16:17], s[8:9] -; VGPRRC-NEXT: v_mov_b32_e32 v40, s20 -; VGPRRC-NEXT: v_mov_b32_e32 v41, s21 -; VGPRRC-NEXT: v_mfma_f32_32x32x16_f16 v[0:15], v[32:35], v[36:39], v[16:31] -; VGPRRC-NEXT: v_mov_b32_e32 v42, s22 -; VGPRRC-NEXT: v_mov_b32_e32 v43, s23 -; VGPRRC-NEXT: global_store_dwordx4 v44, v[40:43], s[0:1] offset:48 sc0 sc1 +; VGPRRC-NEXT: v_mov_b32_e32 v32, s20 +; VGPRRC-NEXT: v_mov_b32_e32 v33, s21 +; VGPRRC-NEXT: v_mfma_f32_32x32x16_f16 v[0:15], v[38:41], v[42:45], v[16:31] +; VGPRRC-NEXT: v_mov_b32_e32 v34, s22 +; VGPRRC-NEXT: v_mov_b32_e32 v35, s23 +; VGPRRC-NEXT: global_store_dwordx4 v36, v[32:35], s[0:1] offset:48 sc0 sc1 ; VGPRRC-NEXT: s_waitcnt vmcnt(0) ; VGPRRC-NEXT: s_nop 2 ; VGPRRC-NEXT: v_mov_b32_e32 v16, s16 ; VGPRRC-NEXT: v_mov_b32_e32 v17, s17 ; VGPRRC-NEXT: v_mov_b32_e32 v18, s18 ; VGPRRC-NEXT: v_mov_b32_e32 v19, s19 -; VGPRRC-NEXT: global_store_dwordx4 v44, v[16:19], s[0:1] offset:32 sc0 sc1 +; VGPRRC-NEXT: global_store_dwordx4 v36, v[16:19], s[0:1] offset:32 sc0 sc1 ; VGPRRC-NEXT: s_waitcnt vmcnt(0) ; VGPRRC-NEXT: s_nop 0 ; VGPRRC-NEXT: v_mov_b32_e32 v16, s12 ; VGPRRC-NEXT: v_mov_b32_e32 v17, s13 ; VGPRRC-NEXT: v_mov_b32_e32 v18, s14 ; VGPRRC-NEXT: v_mov_b32_e32 v19, s15 -; VGPRRC-NEXT: global_store_dwordx4 v44, v[16:19], s[0:1] offset:16 sc0 sc1 +; VGPRRC-NEXT: global_store_dwordx4 v36, v[16:19], s[0:1] offset:16 sc0 sc1 ; VGPRRC-NEXT: s_waitcnt vmcnt(0) ; VGPRRC-NEXT: s_nop 0 ; VGPRRC-NEXT: v_mov_b32_e32 v16, s8 ; VGPRRC-NEXT: v_mov_b32_e32 v17, s9 ; VGPRRC-NEXT: v_mov_b32_e32 v18, s10 ; VGPRRC-NEXT: v_mov_b32_e32 v19, s11 -; VGPRRC-NEXT: global_store_dwordx4 v44, v[16:19], s[0:1] sc0 sc1 +; VGPRRC-NEXT: global_store_dwordx4 v36, v[16:19], s[0:1] sc0 sc1 ; VGPRRC-NEXT: s_waitcnt vmcnt(0) -; VGPRRC-NEXT: global_store_dwordx4 v44, v[8:11], s[0:1] offset:32 sc0 sc1 +; VGPRRC-NEXT: global_store_dwordx4 v36, v[8:11], s[0:1] offset:32 sc0 
sc1 ; VGPRRC-NEXT: s_waitcnt vmcnt(0) -; VGPRRC-NEXT: global_store_dwordx4 v44, v[12:15], s[0:1] offset:48 sc0 sc1 +; VGPRRC-NEXT: global_store_dwordx4 v36, v[12:15], s[0:1] offset:48 sc0 sc1 ; VGPRRC-NEXT: s_waitcnt vmcnt(0) -; VGPRRC-NEXT: global_store_dwordx4 v44, v[0:3], s[0:1] sc0 sc1 +; VGPRRC-NEXT: global_store_dwordx4 v36, v[0:3], s[0:1] sc0 sc1 ; VGPRRC-NEXT: s_waitcnt vmcnt(0) -; VGPRRC-NEXT: global_store_dwordx4 v44, v[4:7], s[0:1] offset:16 sc0 sc1 +; VGPRRC-NEXT: global_store_dwordx4 v36, v[4:7], s[0:1] offset:16 sc0 sc1 ; VGPRRC-NEXT: s_waitcnt vmcnt(0) ; VGPRRC-NEXT: s_endpgm ; AGPR-LABEL: test_mfma_f32_32x32x16_f16__vgprcd: @@ -1831,13 +1827,13 @@ define amdgpu_kernel void @test_mfma_f32_32x32x16_f16__vgprcd__flags(<8 x half> ; SDAG-NEXT: s_load_dwordx8 s[24:31], s[4:5], 0x24 ; SDAG-NEXT: s_load_dwordx16 s[8:23], s[4:5], 0x64 ; SDAG-NEXT: s_load_dwordx2 s[0:1], s[4:5], 0xa4 -; SDAG-NEXT: v_mov_b32_e32 v44, 0 +; SDAG-NEXT: v_mov_b32_e32 v36, 0 ; SDAG-NEXT: s_waitcnt lgkmcnt(0) -; SDAG-NEXT: v_mov_b64_e32 v[34:35], s[26:27] -; SDAG-NEXT: v_mov_b64_e32 v[32:33], s[24:25] -; SDAG-NEXT: v_mov_b64_e32 v[38:39], s[30:31] +; SDAG-NEXT: v_mov_b64_e32 v[40:41], s[26:27] +; SDAG-NEXT: v_mov_b64_e32 v[38:39], s[24:25] +; SDAG-NEXT: v_mov_b64_e32 v[44:45], s[30:31] ; SDAG-NEXT: v_mov_b64_e32 v[30:31], s[22:23] -; SDAG-NEXT: v_mov_b64_e32 v[36:37], s[28:29] +; SDAG-NEXT: v_mov_b64_e32 v[42:43], s[28:29] ; SDAG-NEXT: v_mov_b64_e32 v[28:29], s[20:21] ; SDAG-NEXT: v_mov_b64_e32 v[26:27], s[18:19] ; SDAG-NEXT: v_mov_b64_e32 v[24:25], s[16:17] @@ -1845,41 +1841,41 @@ define amdgpu_kernel void @test_mfma_f32_32x32x16_f16__vgprcd__flags(<8 x half> ; SDAG-NEXT: v_mov_b64_e32 v[20:21], s[12:13] ; SDAG-NEXT: v_mov_b64_e32 v[18:19], s[10:11] ; SDAG-NEXT: v_mov_b64_e32 v[16:17], s[8:9] -; SDAG-NEXT: v_mov_b32_e32 v40, s20 -; SDAG-NEXT: v_mov_b32_e32 v41, s21 -; SDAG-NEXT: v_mfma_f32_32x32x16_f16 v[0:15], v[32:35], v[36:39], v[16:31] cbsz:1 abid:2 blgp:3 -; SDAG-NEXT: v_mov_b32_e32 v42, s22 -; SDAG-NEXT: v_mov_b32_e32 v43, s23 -; SDAG-NEXT: global_store_dwordx4 v44, v[40:43], s[0:1] offset:48 sc0 sc1 +; SDAG-NEXT: v_mov_b32_e32 v32, s20 +; SDAG-NEXT: v_mov_b32_e32 v33, s21 +; SDAG-NEXT: v_mfma_f32_32x32x16_f16 v[0:15], v[38:41], v[42:45], v[16:31] cbsz:1 abid:2 blgp:3 +; SDAG-NEXT: v_mov_b32_e32 v34, s22 +; SDAG-NEXT: v_mov_b32_e32 v35, s23 +; SDAG-NEXT: global_store_dwordx4 v36, v[32:35], s[0:1] offset:48 sc0 sc1 ; SDAG-NEXT: s_waitcnt vmcnt(0) ; SDAG-NEXT: s_nop 2 ; SDAG-NEXT: v_mov_b32_e32 v16, s16 ; SDAG-NEXT: v_mov_b32_e32 v17, s17 ; SDAG-NEXT: v_mov_b32_e32 v18, s18 ; SDAG-NEXT: v_mov_b32_e32 v19, s19 -; SDAG-NEXT: global_store_dwordx4 v44, v[16:19], s[0:1] offset:32 sc0 sc1 +; SDAG-NEXT: global_store_dwordx4 v36, v[16:19], s[0:1] offset:32 sc0 sc1 ; SDAG-NEXT: s_waitcnt vmcnt(0) ; SDAG-NEXT: s_nop 0 ; SDAG-NEXT: v_mov_b32_e32 v16, s12 ; SDAG-NEXT: v_mov_b32_e32 v17, s13 ; SDAG-NEXT: v_mov_b32_e32 v18, s14 ; SDAG-NEXT: v_mov_b32_e32 v19, s15 -; SDAG-NEXT: global_store_dwordx4 v44, v[16:19], s[0:1] offset:16 sc0 sc1 +; SDAG-NEXT: global_store_dwordx4 v36, v[16:19], s[0:1] offset:16 sc0 sc1 ; SDAG-NEXT: s_waitcnt vmcnt(0) ; SDAG-NEXT: s_nop 0 ; SDAG-NEXT: v_mov_b32_e32 v16, s8 ; SDAG-NEXT: v_mov_b32_e32 v17, s9 ; SDAG-NEXT: v_mov_b32_e32 v18, s10 ; SDAG-NEXT: v_mov_b32_e32 v19, s11 -; SDAG-NEXT: global_store_dwordx4 v44, v[16:19], s[0:1] sc0 sc1 +; SDAG-NEXT: global_store_dwordx4 v36, v[16:19], s[0:1] sc0 sc1 ; SDAG-NEXT: s_waitcnt vmcnt(0) -; SDAG-NEXT: global_store_dwordx4 v44, v[8:11], s[0:1] 
offset:32 sc0 sc1 +; SDAG-NEXT: global_store_dwordx4 v36, v[8:11], s[0:1] offset:32 sc0 sc1 ; SDAG-NEXT: s_waitcnt vmcnt(0) -; SDAG-NEXT: global_store_dwordx4 v44, v[12:15], s[0:1] offset:48 sc0 sc1 +; SDAG-NEXT: global_store_dwordx4 v36, v[12:15], s[0:1] offset:48 sc0 sc1 ; SDAG-NEXT: s_waitcnt vmcnt(0) -; SDAG-NEXT: global_store_dwordx4 v44, v[0:3], s[0:1] sc0 sc1 +; SDAG-NEXT: global_store_dwordx4 v36, v[0:3], s[0:1] sc0 sc1 ; SDAG-NEXT: s_waitcnt vmcnt(0) -; SDAG-NEXT: global_store_dwordx4 v44, v[4:7], s[0:1] offset:16 sc0 sc1 +; SDAG-NEXT: global_store_dwordx4 v36, v[4:7], s[0:1] offset:16 sc0 sc1 ; SDAG-NEXT: s_waitcnt vmcnt(0) ; SDAG-NEXT: s_endpgm ; @@ -1934,13 +1930,13 @@ define amdgpu_kernel void @test_mfma_f32_32x32x16_f16__vgprcd__flags(<8 x half> ; HEURRC-NEXT: s_load_dwordx8 s[24:31], s[4:5], 0x24 ; HEURRC-NEXT: s_load_dwordx16 s[8:23], s[4:5], 0x64 ; HEURRC-NEXT: s_load_dwordx2 s[0:1], s[4:5], 0xa4 -; HEURRC-NEXT: v_mov_b32_e32 v44, 0 +; HEURRC-NEXT: v_mov_b32_e32 v36, 0 ; HEURRC-NEXT: s_waitcnt lgkmcnt(0) -; HEURRC-NEXT: v_mov_b64_e32 v[34:35], s[26:27] -; HEURRC-NEXT: v_mov_b64_e32 v[32:33], s[24:25] -; HEURRC-NEXT: v_mov_b64_e32 v[38:39], s[30:31] +; HEURRC-NEXT: v_mov_b64_e32 v[40:41], s[26:27] +; HEURRC-NEXT: v_mov_b64_e32 v[38:39], s[24:25] +; HEURRC-NEXT: v_mov_b64_e32 v[44:45], s[30:31] ; HEURRC-NEXT: v_mov_b64_e32 v[30:31], s[22:23] -; HEURRC-NEXT: v_mov_b64_e32 v[36:37], s[28:29] +; HEURRC-NEXT: v_mov_b64_e32 v[42:43], s[28:29] ; HEURRC-NEXT: v_mov_b64_e32 v[28:29], s[20:21] ; HEURRC-NEXT: v_mov_b64_e32 v[26:27], s[18:19] ; HEURRC-NEXT: v_mov_b64_e32 v[24:25], s[16:17] @@ -1948,41 +1944,41 @@ define amdgpu_kernel void @test_mfma_f32_32x32x16_f16__vgprcd__flags(<8 x half> ; HEURRC-NEXT: v_mov_b64_e32 v[20:21], s[12:13] ; HEURRC-NEXT: v_mov_b64_e32 v[18:19], s[10:11] ; HEURRC-NEXT: v_mov_b64_e32 v[16:17], s[8:9] -; HEURRC-NEXT: v_mov_b32_e32 v40, s20 -; HEURRC-NEXT: v_mov_b32_e32 v41, s21 -; HEURRC-NEXT: v_mfma_f32_32x32x16_f16 v[0:15], v[32:35], v[36:39], v[16:31] cbsz:1 abid:2 blgp:3 -; HEURRC-NEXT: v_mov_b32_e32 v42, s22 -; HEURRC-NEXT: v_mov_b32_e32 v43, s23 -; HEURRC-NEXT: global_store_dwordx4 v44, v[40:43], s[0:1] offset:48 sc0 sc1 +; HEURRC-NEXT: v_mov_b32_e32 v32, s20 +; HEURRC-NEXT: v_mov_b32_e32 v33, s21 +; HEURRC-NEXT: v_mfma_f32_32x32x16_f16 v[0:15], v[38:41], v[42:45], v[16:31] cbsz:1 abid:2 blgp:3 +; HEURRC-NEXT: v_mov_b32_e32 v34, s22 +; HEURRC-NEXT: v_mov_b32_e32 v35, s23 +; HEURRC-NEXT: global_store_dwordx4 v36, v[32:35], s[0:1] offset:48 sc0 sc1 ; HEURRC-NEXT: s_waitcnt vmcnt(0) ; HEURRC-NEXT: s_nop 2 ; HEURRC-NEXT: v_mov_b32_e32 v16, s16 ; HEURRC-NEXT: v_mov_b32_e32 v17, s17 ; HEURRC-NEXT: v_mov_b32_e32 v18, s18 ; HEURRC-NEXT: v_mov_b32_e32 v19, s19 -; HEURRC-NEXT: global_store_dwordx4 v44, v[16:19], s[0:1] offset:32 sc0 sc1 +; HEURRC-NEXT: global_store_dwordx4 v36, v[16:19], s[0:1] offset:32 sc0 sc1 ; HEURRC-NEXT: s_waitcnt vmcnt(0) ; HEURRC-NEXT: s_nop 0 ; HEURRC-NEXT: v_mov_b32_e32 v16, s12 ; HEURRC-NEXT: v_mov_b32_e32 v17, s13 ; HEURRC-NEXT: v_mov_b32_e32 v18, s14 ; HEURRC-NEXT: v_mov_b32_e32 v19, s15 -; HEURRC-NEXT: global_store_dwordx4 v44, v[16:19], s[0:1] offset:16 sc0 sc1 +; HEURRC-NEXT: global_store_dwordx4 v36, v[16:19], s[0:1] offset:16 sc0 sc1 ; HEURRC-NEXT: s_waitcnt vmcnt(0) ; HEURRC-NEXT: s_nop 0 ; HEURRC-NEXT: v_mov_b32_e32 v16, s8 ; HEURRC-NEXT: v_mov_b32_e32 v17, s9 ; HEURRC-NEXT: v_mov_b32_e32 v18, s10 ; HEURRC-NEXT: v_mov_b32_e32 v19, s11 -; HEURRC-NEXT: global_store_dwordx4 v44, v[16:19], s[0:1] sc0 sc1 +; HEURRC-NEXT: 
global_store_dwordx4 v36, v[16:19], s[0:1] sc0 sc1 ; HEURRC-NEXT: s_waitcnt vmcnt(0) -; HEURRC-NEXT: global_store_dwordx4 v44, v[8:11], s[0:1] offset:32 sc0 sc1 +; HEURRC-NEXT: global_store_dwordx4 v36, v[8:11], s[0:1] offset:32 sc0 sc1 ; HEURRC-NEXT: s_waitcnt vmcnt(0) -; HEURRC-NEXT: global_store_dwordx4 v44, v[12:15], s[0:1] offset:48 sc0 sc1 +; HEURRC-NEXT: global_store_dwordx4 v36, v[12:15], s[0:1] offset:48 sc0 sc1 ; HEURRC-NEXT: s_waitcnt vmcnt(0) -; HEURRC-NEXT: global_store_dwordx4 v44, v[0:3], s[0:1] sc0 sc1 +; HEURRC-NEXT: global_store_dwordx4 v36, v[0:3], s[0:1] sc0 sc1 ; HEURRC-NEXT: s_waitcnt vmcnt(0) -; HEURRC-NEXT: global_store_dwordx4 v44, v[4:7], s[0:1] offset:16 sc0 sc1 +; HEURRC-NEXT: global_store_dwordx4 v36, v[4:7], s[0:1] offset:16 sc0 sc1 ; HEURRC-NEXT: s_waitcnt vmcnt(0) ; HEURRC-NEXT: s_endpgm ; @@ -1991,13 +1987,13 @@ define amdgpu_kernel void @test_mfma_f32_32x32x16_f16__vgprcd__flags(<8 x half> ; VGPRRC-NEXT: s_load_dwordx8 s[24:31], s[4:5], 0x24 ; VGPRRC-NEXT: s_load_dwordx16 s[8:23], s[4:5], 0x64 ; VGPRRC-NEXT: s_load_dwordx2 s[0:1], s[4:5], 0xa4 -; VGPRRC-NEXT: v_mov_b32_e32 v44, 0 +; VGPRRC-NEXT: v_mov_b32_e32 v36, 0 ; VGPRRC-NEXT: s_waitcnt lgkmcnt(0) -; VGPRRC-NEXT: v_mov_b64_e32 v[34:35], s[26:27] -; VGPRRC-NEXT: v_mov_b64_e32 v[32:33], s[24:25] -; VGPRRC-NEXT: v_mov_b64_e32 v[38:39], s[30:31] +; VGPRRC-NEXT: v_mov_b64_e32 v[40:41], s[26:27] +; VGPRRC-NEXT: v_mov_b64_e32 v[38:39], s[24:25] +; VGPRRC-NEXT: v_mov_b64_e32 v[44:45], s[30:31] ; VGPRRC-NEXT: v_mov_b64_e32 v[30:31], s[22:23] -; VGPRRC-NEXT: v_mov_b64_e32 v[36:37], s[28:29] +; VGPRRC-NEXT: v_mov_b64_e32 v[42:43], s[28:29] ; VGPRRC-NEXT: v_mov_b64_e32 v[28:29], s[20:21] ; VGPRRC-NEXT: v_mov_b64_e32 v[26:27], s[18:19] ; VGPRRC-NEXT: v_mov_b64_e32 v[24:25], s[16:17] @@ -2005,41 +2001,41 @@ define amdgpu_kernel void @test_mfma_f32_32x32x16_f16__vgprcd__flags(<8 x half> ; VGPRRC-NEXT: v_mov_b64_e32 v[20:21], s[12:13] ; VGPRRC-NEXT: v_mov_b64_e32 v[18:19], s[10:11] ; VGPRRC-NEXT: v_mov_b64_e32 v[16:17], s[8:9] -; VGPRRC-NEXT: v_mov_b32_e32 v40, s20 -; VGPRRC-NEXT: v_mov_b32_e32 v41, s21 -; VGPRRC-NEXT: v_mfma_f32_32x32x16_f16 v[0:15], v[32:35], v[36:39], v[16:31] cbsz:1 abid:2 blgp:3 -; VGPRRC-NEXT: v_mov_b32_e32 v42, s22 -; VGPRRC-NEXT: v_mov_b32_e32 v43, s23 -; VGPRRC-NEXT: global_store_dwordx4 v44, v[40:43], s[0:1] offset:48 sc0 sc1 +; VGPRRC-NEXT: v_mov_b32_e32 v32, s20 +; VGPRRC-NEXT: v_mov_b32_e32 v33, s21 +; VGPRRC-NEXT: v_mfma_f32_32x32x16_f16 v[0:15], v[38:41], v[42:45], v[16:31] cbsz:1 abid:2 blgp:3 +; VGPRRC-NEXT: v_mov_b32_e32 v34, s22 +; VGPRRC-NEXT: v_mov_b32_e32 v35, s23 +; VGPRRC-NEXT: global_store_dwordx4 v36, v[32:35], s[0:1] offset:48 sc0 sc1 ; VGPRRC-NEXT: s_waitcnt vmcnt(0) ; VGPRRC-NEXT: s_nop 2 ; VGPRRC-NEXT: v_mov_b32_e32 v16, s16 ; VGPRRC-NEXT: v_mov_b32_e32 v17, s17 ; VGPRRC-NEXT: v_mov_b32_e32 v18, s18 ; VGPRRC-NEXT: v_mov_b32_e32 v19, s19 -; VGPRRC-NEXT: global_store_dwordx4 v44, v[16:19], s[0:1] offset:32 sc0 sc1 +; VGPRRC-NEXT: global_store_dwordx4 v36, v[16:19], s[0:1] offset:32 sc0 sc1 ; VGPRRC-NEXT: s_waitcnt vmcnt(0) ; VGPRRC-NEXT: s_nop 0 ; VGPRRC-NEXT: v_mov_b32_e32 v16, s12 ; VGPRRC-NEXT: v_mov_b32_e32 v17, s13 ; VGPRRC-NEXT: v_mov_b32_e32 v18, s14 ; VGPRRC-NEXT: v_mov_b32_e32 v19, s15 -; VGPRRC-NEXT: global_store_dwordx4 v44, v[16:19], s[0:1] offset:16 sc0 sc1 +; VGPRRC-NEXT: global_store_dwordx4 v36, v[16:19], s[0:1] offset:16 sc0 sc1 ; VGPRRC-NEXT: s_waitcnt vmcnt(0) ; VGPRRC-NEXT: s_nop 0 ; VGPRRC-NEXT: v_mov_b32_e32 v16, s8 ; VGPRRC-NEXT: v_mov_b32_e32 v17, s9 
; VGPRRC-NEXT: v_mov_b32_e32 v18, s10 ; VGPRRC-NEXT: v_mov_b32_e32 v19, s11 -; VGPRRC-NEXT: global_store_dwordx4 v44, v[16:19], s[0:1] sc0 sc1 +; VGPRRC-NEXT: global_store_dwordx4 v36, v[16:19], s[0:1] sc0 sc1 ; VGPRRC-NEXT: s_waitcnt vmcnt(0) -; VGPRRC-NEXT: global_store_dwordx4 v44, v[8:11], s[0:1] offset:32 sc0 sc1 +; VGPRRC-NEXT: global_store_dwordx4 v36, v[8:11], s[0:1] offset:32 sc0 sc1 ; VGPRRC-NEXT: s_waitcnt vmcnt(0) -; VGPRRC-NEXT: global_store_dwordx4 v44, v[12:15], s[0:1] offset:48 sc0 sc1 +; VGPRRC-NEXT: global_store_dwordx4 v36, v[12:15], s[0:1] offset:48 sc0 sc1 ; VGPRRC-NEXT: s_waitcnt vmcnt(0) -; VGPRRC-NEXT: global_store_dwordx4 v44, v[0:3], s[0:1] sc0 sc1 +; VGPRRC-NEXT: global_store_dwordx4 v36, v[0:3], s[0:1] sc0 sc1 ; VGPRRC-NEXT: s_waitcnt vmcnt(0) -; VGPRRC-NEXT: global_store_dwordx4 v44, v[4:7], s[0:1] offset:16 sc0 sc1 +; VGPRRC-NEXT: global_store_dwordx4 v36, v[4:7], s[0:1] offset:16 sc0 sc1 ; VGPRRC-NEXT: s_waitcnt vmcnt(0) ; VGPRRC-NEXT: s_endpgm ; AGPR-LABEL: test_mfma_f32_32x32x16_f16__vgprcd__flags: @@ -5425,18 +5421,18 @@ define amdgpu_kernel void @test_mfma_f32_16x16x32_bf16_no_agpr__vgprcd(ptr addrs ; GCN-NEXT: s_load_dwordx8 s[8:15], s[4:5], 0x34 ; GCN-NEXT: s_load_dwordx4 s[0:3], s[4:5], 0x54 ; GCN-NEXT: s_load_dwordx2 s[6:7], s[4:5], 0x24 -; GCN-NEXT: v_mov_b32_e32 v12, 0 +; GCN-NEXT: v_mov_b32_e32 v4, 0 ; GCN-NEXT: s_waitcnt lgkmcnt(0) -; GCN-NEXT: v_mov_b64_e32 v[0:1], s[8:9] -; GCN-NEXT: v_mov_b64_e32 v[2:3], s[10:11] -; GCN-NEXT: v_mov_b64_e32 v[4:5], s[12:13] -; GCN-NEXT: v_mov_b64_e32 v[10:11], s[2:3] -; GCN-NEXT: v_mov_b64_e32 v[6:7], s[14:15] -; GCN-NEXT: v_mov_b64_e32 v[8:9], s[0:1] +; GCN-NEXT: v_mov_b64_e32 v[6:7], s[8:9] +; GCN-NEXT: v_mov_b64_e32 v[8:9], s[10:11] +; GCN-NEXT: v_mov_b64_e32 v[10:11], s[12:13] +; GCN-NEXT: v_mov_b64_e32 v[0:1], s[0:1] +; GCN-NEXT: v_mov_b64_e32 v[12:13], s[14:15] +; GCN-NEXT: v_mov_b64_e32 v[2:3], s[2:3] ; GCN-NEXT: s_nop 1 -; GCN-NEXT: v_mfma_f32_16x16x32_bf16 v[0:3], v[0:3], v[4:7], v[8:11] +; GCN-NEXT: v_mfma_f32_16x16x32_bf16 v[0:3], v[6:9], v[10:13], v[0:3] ; GCN-NEXT: s_nop 7 -; GCN-NEXT: global_store_dwordx4 v12, v[0:3], s[6:7] +; GCN-NEXT: global_store_dwordx4 v4, v[0:3], s[6:7] ; GCN-NEXT: s_endpgm ; ; HEURRC-LABEL: test_mfma_f32_16x16x32_bf16_no_agpr__vgprcd: @@ -5444,18 +5440,18 @@ define amdgpu_kernel void @test_mfma_f32_16x16x32_bf16_no_agpr__vgprcd(ptr addrs ; HEURRC-NEXT: s_load_dwordx8 s[8:15], s[4:5], 0x34 ; HEURRC-NEXT: s_load_dwordx4 s[0:3], s[4:5], 0x54 ; HEURRC-NEXT: s_load_dwordx2 s[6:7], s[4:5], 0x24 -; HEURRC-NEXT: v_mov_b32_e32 v12, 0 +; HEURRC-NEXT: v_mov_b32_e32 v4, 0 ; HEURRC-NEXT: s_waitcnt lgkmcnt(0) -; HEURRC-NEXT: v_mov_b64_e32 v[0:1], s[8:9] -; HEURRC-NEXT: v_mov_b64_e32 v[2:3], s[10:11] -; HEURRC-NEXT: v_mov_b64_e32 v[4:5], s[12:13] -; HEURRC-NEXT: v_mov_b64_e32 v[10:11], s[2:3] -; HEURRC-NEXT: v_mov_b64_e32 v[6:7], s[14:15] -; HEURRC-NEXT: v_mov_b64_e32 v[8:9], s[0:1] +; HEURRC-NEXT: v_mov_b64_e32 v[6:7], s[8:9] +; HEURRC-NEXT: v_mov_b64_e32 v[8:9], s[10:11] +; HEURRC-NEXT: v_mov_b64_e32 v[10:11], s[12:13] +; HEURRC-NEXT: v_mov_b64_e32 v[0:1], s[0:1] +; HEURRC-NEXT: v_mov_b64_e32 v[12:13], s[14:15] +; HEURRC-NEXT: v_mov_b64_e32 v[2:3], s[2:3] ; HEURRC-NEXT: s_nop 1 -; HEURRC-NEXT: v_mfma_f32_16x16x32_bf16 v[0:3], v[0:3], v[4:7], v[8:11] +; HEURRC-NEXT: v_mfma_f32_16x16x32_bf16 v[0:3], v[6:9], v[10:13], v[0:3] ; HEURRC-NEXT: s_nop 7 -; HEURRC-NEXT: global_store_dwordx4 v12, v[0:3], s[6:7] +; HEURRC-NEXT: global_store_dwordx4 v4, v[0:3], s[6:7] ; HEURRC-NEXT: s_endpgm ; ; 
VGPRRC-LABEL: test_mfma_f32_16x16x32_bf16_no_agpr__vgprcd: @@ -5463,18 +5459,18 @@ define amdgpu_kernel void @test_mfma_f32_16x16x32_bf16_no_agpr__vgprcd(ptr addrs ; VGPRRC-NEXT: s_load_dwordx8 s[8:15], s[4:5], 0x34 ; VGPRRC-NEXT: s_load_dwordx4 s[0:3], s[4:5], 0x54 ; VGPRRC-NEXT: s_load_dwordx2 s[6:7], s[4:5], 0x24 -; VGPRRC-NEXT: v_mov_b32_e32 v12, 0 +; VGPRRC-NEXT: v_mov_b32_e32 v4, 0 ; VGPRRC-NEXT: s_waitcnt lgkmcnt(0) -; VGPRRC-NEXT: v_mov_b64_e32 v[0:1], s[8:9] -; VGPRRC-NEXT: v_mov_b64_e32 v[2:3], s[10:11] -; VGPRRC-NEXT: v_mov_b64_e32 v[4:5], s[12:13] -; VGPRRC-NEXT: v_mov_b64_e32 v[10:11], s[2:3] -; VGPRRC-NEXT: v_mov_b64_e32 v[6:7], s[14:15] -; VGPRRC-NEXT: v_mov_b64_e32 v[8:9], s[0:1] +; VGPRRC-NEXT: v_mov_b64_e32 v[6:7], s[8:9] +; VGPRRC-NEXT: v_mov_b64_e32 v[8:9], s[10:11] +; VGPRRC-NEXT: v_mov_b64_e32 v[10:11], s[12:13] +; VGPRRC-NEXT: v_mov_b64_e32 v[0:1], s[0:1] +; VGPRRC-NEXT: v_mov_b64_e32 v[12:13], s[14:15] +; VGPRRC-NEXT: v_mov_b64_e32 v[2:3], s[2:3] ; VGPRRC-NEXT: s_nop 1 -; VGPRRC-NEXT: v_mfma_f32_16x16x32_bf16 v[0:3], v[0:3], v[4:7], v[8:11] +; VGPRRC-NEXT: v_mfma_f32_16x16x32_bf16 v[0:3], v[6:9], v[10:13], v[0:3] ; VGPRRC-NEXT: s_nop 7 -; VGPRRC-NEXT: global_store_dwordx4 v12, v[0:3], s[6:7] +; VGPRRC-NEXT: global_store_dwordx4 v4, v[0:3], s[6:7] ; VGPRRC-NEXT: s_endpgm ; AGPR-LABEL: test_mfma_f32_16x16x32_bf16_no_agpr__vgprcd: ; AGPR: ; %bb.0: @@ -5525,18 +5521,18 @@ define amdgpu_kernel void @test_mfma_f32_16x16x32_bf16_no_agpr__vgprcd__flags(pt ; GCN-NEXT: s_load_dwordx8 s[8:15], s[4:5], 0x34 ; GCN-NEXT: s_load_dwordx4 s[0:3], s[4:5], 0x54 ; GCN-NEXT: s_load_dwordx2 s[6:7], s[4:5], 0x24 -; GCN-NEXT: v_mov_b32_e32 v12, 0 +; GCN-NEXT: v_mov_b32_e32 v4, 0 ; GCN-NEXT: s_waitcnt lgkmcnt(0) -; GCN-NEXT: v_mov_b64_e32 v[0:1], s[8:9] -; GCN-NEXT: v_mov_b64_e32 v[2:3], s[10:11] -; GCN-NEXT: v_mov_b64_e32 v[4:5], s[12:13] -; GCN-NEXT: v_mov_b64_e32 v[10:11], s[2:3] -; GCN-NEXT: v_mov_b64_e32 v[6:7], s[14:15] -; GCN-NEXT: v_mov_b64_e32 v[8:9], s[0:1] +; GCN-NEXT: v_mov_b64_e32 v[6:7], s[8:9] +; GCN-NEXT: v_mov_b64_e32 v[8:9], s[10:11] +; GCN-NEXT: v_mov_b64_e32 v[10:11], s[12:13] +; GCN-NEXT: v_mov_b64_e32 v[0:1], s[0:1] +; GCN-NEXT: v_mov_b64_e32 v[12:13], s[14:15] +; GCN-NEXT: v_mov_b64_e32 v[2:3], s[2:3] ; GCN-NEXT: s_nop 1 -; GCN-NEXT: v_mfma_f32_16x16x32_bf16 v[0:3], v[0:3], v[4:7], v[8:11] cbsz:3 abid:2 blgp:1 +; GCN-NEXT: v_mfma_f32_16x16x32_bf16 v[0:3], v[6:9], v[10:13], v[0:3] cbsz:3 abid:2 blgp:1 ; GCN-NEXT: s_nop 7 -; GCN-NEXT: global_store_dwordx4 v12, v[0:3], s[6:7] +; GCN-NEXT: global_store_dwordx4 v4, v[0:3], s[6:7] ; GCN-NEXT: s_endpgm ; ; HEURRC-LABEL: test_mfma_f32_16x16x32_bf16_no_agpr__vgprcd__flags: @@ -5544,18 +5540,18 @@ define amdgpu_kernel void @test_mfma_f32_16x16x32_bf16_no_agpr__vgprcd__flags(pt ; HEURRC-NEXT: s_load_dwordx8 s[8:15], s[4:5], 0x34 ; HEURRC-NEXT: s_load_dwordx4 s[0:3], s[4:5], 0x54 ; HEURRC-NEXT: s_load_dwordx2 s[6:7], s[4:5], 0x24 -; HEURRC-NEXT: v_mov_b32_e32 v12, 0 +; HEURRC-NEXT: v_mov_b32_e32 v4, 0 ; HEURRC-NEXT: s_waitcnt lgkmcnt(0) -; HEURRC-NEXT: v_mov_b64_e32 v[0:1], s[8:9] -; HEURRC-NEXT: v_mov_b64_e32 v[2:3], s[10:11] -; HEURRC-NEXT: v_mov_b64_e32 v[4:5], s[12:13] -; HEURRC-NEXT: v_mov_b64_e32 v[10:11], s[2:3] -; HEURRC-NEXT: v_mov_b64_e32 v[6:7], s[14:15] -; HEURRC-NEXT: v_mov_b64_e32 v[8:9], s[0:1] +; HEURRC-NEXT: v_mov_b64_e32 v[6:7], s[8:9] +; HEURRC-NEXT: v_mov_b64_e32 v[8:9], s[10:11] +; HEURRC-NEXT: v_mov_b64_e32 v[10:11], s[12:13] +; HEURRC-NEXT: v_mov_b64_e32 v[0:1], s[0:1] +; HEURRC-NEXT: v_mov_b64_e32 
v[12:13], s[14:15] +; HEURRC-NEXT: v_mov_b64_e32 v[2:3], s[2:3] ; HEURRC-NEXT: s_nop 1 -; HEURRC-NEXT: v_mfma_f32_16x16x32_bf16 v[0:3], v[0:3], v[4:7], v[8:11] cbsz:3 abid:2 blgp:1 +; HEURRC-NEXT: v_mfma_f32_16x16x32_bf16 v[0:3], v[6:9], v[10:13], v[0:3] cbsz:3 abid:2 blgp:1 ; HEURRC-NEXT: s_nop 7 -; HEURRC-NEXT: global_store_dwordx4 v12, v[0:3], s[6:7] +; HEURRC-NEXT: global_store_dwordx4 v4, v[0:3], s[6:7] ; HEURRC-NEXT: s_endpgm ; ; VGPRRC-LABEL: test_mfma_f32_16x16x32_bf16_no_agpr__vgprcd__flags: @@ -5563,18 +5559,18 @@ define amdgpu_kernel void @test_mfma_f32_16x16x32_bf16_no_agpr__vgprcd__flags(pt ; VGPRRC-NEXT: s_load_dwordx8 s[8:15], s[4:5], 0x34 ; VGPRRC-NEXT: s_load_dwordx4 s[0:3], s[4:5], 0x54 ; VGPRRC-NEXT: s_load_dwordx2 s[6:7], s[4:5], 0x24 -; VGPRRC-NEXT: v_mov_b32_e32 v12, 0 +; VGPRRC-NEXT: v_mov_b32_e32 v4, 0 ; VGPRRC-NEXT: s_waitcnt lgkmcnt(0) -; VGPRRC-NEXT: v_mov_b64_e32 v[0:1], s[8:9] -; VGPRRC-NEXT: v_mov_b64_e32 v[2:3], s[10:11] -; VGPRRC-NEXT: v_mov_b64_e32 v[4:5], s[12:13] -; VGPRRC-NEXT: v_mov_b64_e32 v[10:11], s[2:3] -; VGPRRC-NEXT: v_mov_b64_e32 v[6:7], s[14:15] -; VGPRRC-NEXT: v_mov_b64_e32 v[8:9], s[0:1] +; VGPRRC-NEXT: v_mov_b64_e32 v[6:7], s[8:9] +; VGPRRC-NEXT: v_mov_b64_e32 v[8:9], s[10:11] +; VGPRRC-NEXT: v_mov_b64_e32 v[10:11], s[12:13] +; VGPRRC-NEXT: v_mov_b64_e32 v[0:1], s[0:1] +; VGPRRC-NEXT: v_mov_b64_e32 v[12:13], s[14:15] +; VGPRRC-NEXT: v_mov_b64_e32 v[2:3], s[2:3] ; VGPRRC-NEXT: s_nop 1 -; VGPRRC-NEXT: v_mfma_f32_16x16x32_bf16 v[0:3], v[0:3], v[4:7], v[8:11] cbsz:3 abid:2 blgp:1 +; VGPRRC-NEXT: v_mfma_f32_16x16x32_bf16 v[0:3], v[6:9], v[10:13], v[0:3] cbsz:3 abid:2 blgp:1 ; VGPRRC-NEXT: s_nop 7 -; VGPRRC-NEXT: global_store_dwordx4 v12, v[0:3], s[6:7] +; VGPRRC-NEXT: global_store_dwordx4 v4, v[0:3], s[6:7] ; VGPRRC-NEXT: s_endpgm ; AGPR-LABEL: test_mfma_f32_16x16x32_bf16_no_agpr__vgprcd__flags: ; AGPR: ; %bb.0: diff --git a/llvm/test/CodeGen/AMDGPU/llvm.amdgcn.smfmac.gfx950.ll b/llvm/test/CodeGen/AMDGPU/llvm.amdgcn.smfmac.gfx950.ll index 6eb9449..ee11b92 100644 --- a/llvm/test/CodeGen/AMDGPU/llvm.amdgcn.smfmac.gfx950.ll +++ b/llvm/test/CodeGen/AMDGPU/llvm.amdgcn.smfmac.gfx950.ll @@ -17,24 +17,24 @@ define amdgpu_kernel void @test_smfmac_f32_16x16x64_f16__vgpr(ptr addrspace(1) % ; SDAG-NEXT: s_load_dwordx4 s[0:3], s[4:5], 0x34 ; SDAG-NEXT: v_and_b32_e32 v0, 0x3ff, v0 ; SDAG-NEXT: v_lshlrev_b32_e32 v0, 4, v0 -; SDAG-NEXT: v_mov_b32_e32 v16, 0 +; SDAG-NEXT: v_mov_b32_e32 v4, 0 ; SDAG-NEXT: s_waitcnt lgkmcnt(0) -; SDAG-NEXT: global_load_dwordx4 v[8:11], v0, s[6:7] +; SDAG-NEXT: global_load_dwordx4 v[0:3], v0, s[6:7] ; SDAG-NEXT: s_load_dwordx8 s[8:15], s[4:5], 0x44 ; SDAG-NEXT: s_load_dword s16, s[4:5], 0x64 -; SDAG-NEXT: v_mov_b64_e32 v[14:15], s[2:3] -; SDAG-NEXT: v_mov_b64_e32 v[12:13], s[0:1] +; SDAG-NEXT: v_mov_b64_e32 v[16:17], s[2:3] +; SDAG-NEXT: v_mov_b64_e32 v[14:15], s[0:1] ; SDAG-NEXT: s_waitcnt lgkmcnt(0) -; SDAG-NEXT: v_mov_b64_e32 v[0:1], s[8:9] -; SDAG-NEXT: v_mov_b64_e32 v[2:3], s[10:11] -; SDAG-NEXT: v_mov_b64_e32 v[4:5], s[12:13] -; SDAG-NEXT: v_mov_b64_e32 v[6:7], s[14:15] -; SDAG-NEXT: v_mov_b32_e32 v17, s16 +; SDAG-NEXT: v_mov_b64_e32 v[6:7], s[8:9] +; SDAG-NEXT: v_mov_b64_e32 v[8:9], s[10:11] +; SDAG-NEXT: v_mov_b64_e32 v[10:11], s[12:13] +; SDAG-NEXT: v_mov_b64_e32 v[12:13], s[14:15] +; SDAG-NEXT: v_mov_b32_e32 v5, s16 ; SDAG-NEXT: s_waitcnt vmcnt(0) ; SDAG-NEXT: s_nop 0 -; SDAG-NEXT: v_smfmac_f32_16x16x64_f16 v[8:11], v[12:15], v[0:7], v17 cbsz:1 abid:2 +; SDAG-NEXT: v_smfmac_f32_16x16x64_f16 v[0:3], v[14:17], v[6:13], v5 
cbsz:1 abid:2 ; SDAG-NEXT: s_nop 7 -; SDAG-NEXT: global_store_dwordx4 v16, v[8:11], s[6:7] +; SDAG-NEXT: global_store_dwordx4 v4, v[0:3], s[6:7] ; SDAG-NEXT: s_endpgm ; ; GISEL-LABEL: test_smfmac_f32_16x16x64_f16__vgpr: @@ -120,30 +120,25 @@ define <4 x float> @test_smfmac_f32_16x16x64_f16__sgpr(<8 x half> inreg %arg0, < ; SDAG-LABEL: test_smfmac_f32_16x16x64_f16__sgpr: ; SDAG: ; %bb.0: ; SDAG-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0) -; SDAG-NEXT: v_mov_b32_e32 v10, s0 -; SDAG-NEXT: v_mov_b32_e32 v11, s1 -; SDAG-NEXT: v_mov_b32_e32 v12, s2 -; SDAG-NEXT: v_mov_b32_e32 v13, s3 -; SDAG-NEXT: v_mov_b32_e32 v2, s16 -; SDAG-NEXT: v_mov_b32_e32 v3, s17 -; SDAG-NEXT: v_mov_b32_e32 v4, s18 -; SDAG-NEXT: v_mov_b32_e32 v5, s19 -; SDAG-NEXT: v_mov_b32_e32 v6, s20 -; SDAG-NEXT: v_mov_b32_e32 v7, s21 -; SDAG-NEXT: v_mov_b32_e32 v8, s22 -; SDAG-NEXT: v_mov_b32_e32 v9, s23 -; SDAG-NEXT: v_accvgpr_write_b32 a0, s24 -; SDAG-NEXT: v_accvgpr_write_b32 a1, s25 -; SDAG-NEXT: v_accvgpr_write_b32 a2, s26 -; SDAG-NEXT: v_accvgpr_write_b32 a3, s27 -; SDAG-NEXT: v_mov_b32_e32 v0, s28 +; SDAG-NEXT: v_mov_b32_e32 v14, s0 +; SDAG-NEXT: v_mov_b32_e32 v15, s1 +; SDAG-NEXT: v_mov_b32_e32 v16, s2 +; SDAG-NEXT: v_mov_b32_e32 v17, s3 +; SDAG-NEXT: v_mov_b32_e32 v6, s16 +; SDAG-NEXT: v_mov_b32_e32 v7, s17 +; SDAG-NEXT: v_mov_b32_e32 v8, s18 +; SDAG-NEXT: v_mov_b32_e32 v9, s19 +; SDAG-NEXT: v_mov_b32_e32 v10, s20 +; SDAG-NEXT: v_mov_b32_e32 v11, s21 +; SDAG-NEXT: v_mov_b32_e32 v12, s22 +; SDAG-NEXT: v_mov_b32_e32 v13, s23 +; SDAG-NEXT: v_mov_b32_e32 v0, s24 +; SDAG-NEXT: v_mov_b32_e32 v1, s25 +; SDAG-NEXT: v_mov_b32_e32 v2, s26 +; SDAG-NEXT: v_mov_b32_e32 v3, s27 +; SDAG-NEXT: v_mov_b32_e32 v4, s28 ; SDAG-NEXT: s_nop 1 -; SDAG-NEXT: v_smfmac_f32_16x16x64_f16 a[0:3], v[10:13], v[2:9], v0 -; SDAG-NEXT: s_nop 7 -; SDAG-NEXT: v_accvgpr_read_b32 v0, a0 -; SDAG-NEXT: v_accvgpr_read_b32 v1, a1 -; SDAG-NEXT: v_accvgpr_read_b32 v2, a2 -; SDAG-NEXT: v_accvgpr_read_b32 v3, a3 +; SDAG-NEXT: v_smfmac_f32_16x16x64_f16 v[0:3], v[14:17], v[6:13], v4 ; SDAG-NEXT: s_setpc_b64 s[30:31] ; ; GISEL-LABEL: test_smfmac_f32_16x16x64_f16__sgpr: @@ -187,17 +182,17 @@ define amdgpu_kernel void @test_smfmac_f32_32x32x32_f16__vgpr(ptr addrspace(1) % ; SDAG-NEXT: global_load_dwordx4 v[0:3], v16, s[6:7] ; SDAG-NEXT: s_load_dwordx8 s[8:15], s[4:5], 0x44 ; SDAG-NEXT: s_load_dword s16, s[4:5], 0x64 -; SDAG-NEXT: v_mov_b64_e32 v[26:27], s[2:3] -; SDAG-NEXT: v_mov_b64_e32 v[24:25], s[0:1] +; SDAG-NEXT: v_mov_b64_e32 v[28:29], s[2:3] +; SDAG-NEXT: v_mov_b64_e32 v[26:27], s[0:1] ; SDAG-NEXT: s_waitcnt lgkmcnt(0) -; SDAG-NEXT: v_mov_b64_e32 v[22:23], s[14:15] -; SDAG-NEXT: v_mov_b64_e32 v[20:21], s[12:13] -; SDAG-NEXT: v_mov_b64_e32 v[18:19], s[10:11] -; SDAG-NEXT: v_mov_b64_e32 v[16:17], s[8:9] -; SDAG-NEXT: v_mov_b32_e32 v28, s16 +; SDAG-NEXT: v_mov_b64_e32 v[24:25], s[14:15] +; SDAG-NEXT: v_mov_b64_e32 v[22:23], s[12:13] +; SDAG-NEXT: v_mov_b64_e32 v[20:21], s[10:11] +; SDAG-NEXT: v_mov_b64_e32 v[18:19], s[8:9] +; SDAG-NEXT: v_mov_b32_e32 v16, s16 ; SDAG-NEXT: s_waitcnt vmcnt(0) ; SDAG-NEXT: s_nop 0 -; SDAG-NEXT: v_smfmac_f32_32x32x32_f16 v[0:15], v[24:27], v[16:23], v28 cbsz:1 abid:2 +; SDAG-NEXT: v_smfmac_f32_32x32x32_f16 v[0:15], v[26:29], v[18:25], v16 cbsz:1 abid:2 ; SDAG-NEXT: v_mov_b32_e32 v16, 0 ; SDAG-NEXT: s_nop 10 ; SDAG-NEXT: global_store_dwordx4 v16, v[8:11], s[6:7] offset:32 @@ -436,53 +431,37 @@ define <16 x float> @test_smfmac_f32_32x32x32_f16__sgpr(<8 x half> inreg %arg0, ; SDAG-LABEL: test_smfmac_f32_32x32x32_f16__sgpr: ; SDAG: ; %bb.0: ; 
SDAG-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0) -; SDAG-NEXT: v_mov_b32_e32 v36, s0 -; SDAG-NEXT: v_mov_b32_e32 v37, s1 -; SDAG-NEXT: v_mov_b32_e32 v38, s2 -; SDAG-NEXT: v_mov_b32_e32 v39, s3 -; SDAG-NEXT: v_mov_b32_e32 v13, s25 -; SDAG-NEXT: v_mov_b32_e32 v14, s26 -; SDAG-NEXT: v_mov_b32_e32 v15, s27 -; SDAG-NEXT: v_mov_b32_e32 v16, s28 -; SDAG-NEXT: v_mov_b32_e32 v17, s29 -; SDAG-NEXT: v_mov_b32_e32 v28, s16 -; SDAG-NEXT: v_mov_b32_e32 v29, s17 -; SDAG-NEXT: v_mov_b32_e32 v30, s18 -; SDAG-NEXT: v_mov_b32_e32 v31, s19 -; SDAG-NEXT: v_mov_b32_e32 v32, s20 -; SDAG-NEXT: v_mov_b32_e32 v33, s21 -; SDAG-NEXT: v_mov_b32_e32 v34, s22 -; SDAG-NEXT: v_mov_b32_e32 v35, s23 -; SDAG-NEXT: v_mov_b32_e32 v12, s24 -; SDAG-NEXT: v_mov_b32_e32 v18, v0 -; SDAG-NEXT: v_mov_b32_e32 v19, v1 -; SDAG-NEXT: v_mov_b32_e32 v20, v2 -; SDAG-NEXT: v_mov_b32_e32 v21, v3 -; SDAG-NEXT: v_mov_b32_e32 v22, v4 -; SDAG-NEXT: v_mov_b32_e32 v23, v5 -; SDAG-NEXT: v_mov_b32_e32 v24, v6 -; SDAG-NEXT: v_mov_b32_e32 v25, v7 -; SDAG-NEXT: v_mov_b32_e32 v26, v8 -; SDAG-NEXT: v_mov_b32_e32 v27, v9 +; SDAG-NEXT: v_mov_b32_e32 v26, s0 +; SDAG-NEXT: v_mov_b32_e32 v27, s1 +; SDAG-NEXT: v_mov_b32_e32 v28, s2 +; SDAG-NEXT: v_mov_b32_e32 v29, s3 +; SDAG-NEXT: v_mov_b32_e32 v16, v10 +; SDAG-NEXT: v_mov_b32_e32 v15, v9 +; SDAG-NEXT: v_mov_b32_e32 v14, v8 +; SDAG-NEXT: v_mov_b32_e32 v13, v7 +; SDAG-NEXT: v_mov_b32_e32 v12, v6 +; SDAG-NEXT: v_mov_b32_e32 v11, v5 +; SDAG-NEXT: v_mov_b32_e32 v10, v4 +; SDAG-NEXT: v_mov_b32_e32 v9, v3 +; SDAG-NEXT: v_mov_b32_e32 v8, v2 +; SDAG-NEXT: v_mov_b32_e32 v7, v1 +; SDAG-NEXT: v_mov_b32_e32 v6, v0 +; SDAG-NEXT: v_mov_b32_e32 v0, s24 +; SDAG-NEXT: v_mov_b32_e32 v1, s25 +; SDAG-NEXT: v_mov_b32_e32 v2, s26 +; SDAG-NEXT: v_mov_b32_e32 v3, s27 +; SDAG-NEXT: v_mov_b32_e32 v4, s28 +; SDAG-NEXT: v_mov_b32_e32 v5, s29 +; SDAG-NEXT: v_mov_b32_e32 v18, s16 +; SDAG-NEXT: v_mov_b32_e32 v19, s17 +; SDAG-NEXT: v_mov_b32_e32 v20, s18 +; SDAG-NEXT: v_mov_b32_e32 v21, s19 +; SDAG-NEXT: v_mov_b32_e32 v22, s20 +; SDAG-NEXT: v_mov_b32_e32 v23, s21 +; SDAG-NEXT: v_mov_b32_e32 v24, s22 +; SDAG-NEXT: v_mov_b32_e32 v25, s23 ; SDAG-NEXT: s_nop 1 -; SDAG-NEXT: v_smfmac_f32_32x32x32_f16 v[12:27], v[36:39], v[28:35], v10 -; SDAG-NEXT: s_nop 11 -; SDAG-NEXT: v_mov_b32_e32 v0, v12 -; SDAG-NEXT: v_mov_b32_e32 v1, v13 -; SDAG-NEXT: v_mov_b32_e32 v2, v14 -; SDAG-NEXT: v_mov_b32_e32 v3, v15 -; SDAG-NEXT: v_mov_b32_e32 v4, v16 -; SDAG-NEXT: v_mov_b32_e32 v5, v17 -; SDAG-NEXT: v_mov_b32_e32 v6, v18 -; SDAG-NEXT: v_mov_b32_e32 v7, v19 -; SDAG-NEXT: v_mov_b32_e32 v8, v20 -; SDAG-NEXT: v_mov_b32_e32 v9, v21 -; SDAG-NEXT: v_mov_b32_e32 v10, v22 -; SDAG-NEXT: v_mov_b32_e32 v11, v23 -; SDAG-NEXT: v_mov_b32_e32 v12, v24 -; SDAG-NEXT: v_mov_b32_e32 v13, v25 -; SDAG-NEXT: v_mov_b32_e32 v14, v26 -; SDAG-NEXT: v_mov_b32_e32 v15, v27 +; SDAG-NEXT: v_smfmac_f32_32x32x32_f16 v[0:15], v[26:29], v[18:25], v16 ; SDAG-NEXT: s_setpc_b64 s[30:31] ; ; GISEL-LABEL: test_smfmac_f32_32x32x32_f16__sgpr: @@ -541,24 +520,24 @@ define amdgpu_kernel void @test_smfmac_f32_16x16x64_bf16__vgpr(ptr addrspace(1) ; GCN-NEXT: s_load_dwordx4 s[0:3], s[4:5], 0x34 ; GCN-NEXT: v_and_b32_e32 v0, 0x3ff, v0 ; GCN-NEXT: v_lshlrev_b32_e32 v0, 4, v0 -; GCN-NEXT: v_mov_b32_e32 v16, 0 +; GCN-NEXT: v_mov_b32_e32 v4, 0 ; GCN-NEXT: s_waitcnt lgkmcnt(0) -; GCN-NEXT: global_load_dwordx4 v[8:11], v0, s[6:7] +; GCN-NEXT: global_load_dwordx4 v[0:3], v0, s[6:7] ; GCN-NEXT: s_load_dwordx8 s[8:15], s[4:5], 0x44 ; GCN-NEXT: s_load_dword s16, s[4:5], 0x64 -; GCN-NEXT: v_mov_b64_e32 v[14:15], s[2:3] 
-; GCN-NEXT: v_mov_b64_e32 v[12:13], s[0:1] +; GCN-NEXT: v_mov_b64_e32 v[16:17], s[2:3] +; GCN-NEXT: v_mov_b64_e32 v[14:15], s[0:1] ; GCN-NEXT: s_waitcnt lgkmcnt(0) -; GCN-NEXT: v_mov_b64_e32 v[0:1], s[8:9] -; GCN-NEXT: v_mov_b64_e32 v[2:3], s[10:11] -; GCN-NEXT: v_mov_b64_e32 v[4:5], s[12:13] -; GCN-NEXT: v_mov_b64_e32 v[6:7], s[14:15] -; GCN-NEXT: v_mov_b32_e32 v17, s16 +; GCN-NEXT: v_mov_b64_e32 v[6:7], s[8:9] +; GCN-NEXT: v_mov_b64_e32 v[8:9], s[10:11] +; GCN-NEXT: v_mov_b64_e32 v[10:11], s[12:13] +; GCN-NEXT: v_mov_b64_e32 v[12:13], s[14:15] +; GCN-NEXT: v_mov_b32_e32 v5, s16 ; GCN-NEXT: s_waitcnt vmcnt(0) ; GCN-NEXT: s_nop 0 -; GCN-NEXT: v_smfmac_f32_16x16x64_bf16 v[8:11], v[12:15], v[0:7], v17 cbsz:1 abid:2 +; GCN-NEXT: v_smfmac_f32_16x16x64_bf16 v[0:3], v[14:17], v[6:13], v5 cbsz:1 abid:2 ; GCN-NEXT: s_nop 7 -; GCN-NEXT: global_store_dwordx4 v16, v[8:11], s[6:7] +; GCN-NEXT: global_store_dwordx4 v4, v[0:3], s[6:7] ; GCN-NEXT: s_endpgm bb: %id = call i32 @llvm.amdgcn.workitem.id.x() @@ -618,30 +597,25 @@ define <4 x float> @test_smfmac_f32_16x16x64_bf16__sgpr(<8 x bfloat> inreg %arg0 ; GCN-LABEL: test_smfmac_f32_16x16x64_bf16__sgpr: ; GCN: ; %bb.0: ; GCN-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0) -; GCN-NEXT: v_mov_b32_e32 v10, s0 -; GCN-NEXT: v_mov_b32_e32 v11, s1 -; GCN-NEXT: v_mov_b32_e32 v12, s2 -; GCN-NEXT: v_mov_b32_e32 v13, s3 -; GCN-NEXT: v_mov_b32_e32 v2, s16 -; GCN-NEXT: v_mov_b32_e32 v3, s17 -; GCN-NEXT: v_mov_b32_e32 v4, s18 -; GCN-NEXT: v_mov_b32_e32 v5, s19 -; GCN-NEXT: v_mov_b32_e32 v6, s20 -; GCN-NEXT: v_mov_b32_e32 v7, s21 -; GCN-NEXT: v_mov_b32_e32 v8, s22 -; GCN-NEXT: v_mov_b32_e32 v9, s23 -; GCN-NEXT: v_accvgpr_write_b32 a0, s24 -; GCN-NEXT: v_accvgpr_write_b32 a1, s25 -; GCN-NEXT: v_accvgpr_write_b32 a2, s26 -; GCN-NEXT: v_accvgpr_write_b32 a3, s27 -; GCN-NEXT: v_mov_b32_e32 v0, s28 +; GCN-NEXT: v_mov_b32_e32 v14, s0 +; GCN-NEXT: v_mov_b32_e32 v15, s1 +; GCN-NEXT: v_mov_b32_e32 v16, s2 +; GCN-NEXT: v_mov_b32_e32 v17, s3 +; GCN-NEXT: v_mov_b32_e32 v6, s16 +; GCN-NEXT: v_mov_b32_e32 v7, s17 +; GCN-NEXT: v_mov_b32_e32 v8, s18 +; GCN-NEXT: v_mov_b32_e32 v9, s19 +; GCN-NEXT: v_mov_b32_e32 v10, s20 +; GCN-NEXT: v_mov_b32_e32 v11, s21 +; GCN-NEXT: v_mov_b32_e32 v12, s22 +; GCN-NEXT: v_mov_b32_e32 v13, s23 +; GCN-NEXT: v_mov_b32_e32 v0, s24 +; GCN-NEXT: v_mov_b32_e32 v1, s25 +; GCN-NEXT: v_mov_b32_e32 v2, s26 +; GCN-NEXT: v_mov_b32_e32 v3, s27 +; GCN-NEXT: v_mov_b32_e32 v4, s28 ; GCN-NEXT: s_nop 1 -; GCN-NEXT: v_smfmac_f32_16x16x64_bf16 a[0:3], v[10:13], v[2:9], v0 -; GCN-NEXT: s_nop 7 -; GCN-NEXT: v_accvgpr_read_b32 v0, a0 -; GCN-NEXT: v_accvgpr_read_b32 v1, a1 -; GCN-NEXT: v_accvgpr_read_b32 v2, a2 -; GCN-NEXT: v_accvgpr_read_b32 v3, a3 +; GCN-NEXT: v_smfmac_f32_16x16x64_bf16 v[0:3], v[14:17], v[6:13], v4 ; GCN-NEXT: s_setpc_b64 s[30:31] %result = call <4 x float> @llvm.amdgcn.smfmac.f32.16x16x64.bf16(<8 x bfloat> %arg0, <16 x bfloat> %arg1, <4 x float> %arg2, i32 %arg3, i32 immarg 0, i32 immarg 0) ret <4 x float> %result @@ -667,17 +641,17 @@ define amdgpu_kernel void @test_smfmac_f32_32x32x32_bf16__vgpr(ptr addrspace(1) ; GCN-NEXT: global_load_dwordx4 v[0:3], v16, s[6:7] ; GCN-NEXT: s_load_dwordx8 s[8:15], s[4:5], 0x44 ; GCN-NEXT: s_load_dword s16, s[4:5], 0x64 -; GCN-NEXT: v_mov_b64_e32 v[26:27], s[2:3] -; GCN-NEXT: v_mov_b64_e32 v[24:25], s[0:1] +; GCN-NEXT: v_mov_b64_e32 v[28:29], s[2:3] +; GCN-NEXT: v_mov_b64_e32 v[26:27], s[0:1] ; GCN-NEXT: s_waitcnt lgkmcnt(0) -; GCN-NEXT: v_mov_b64_e32 v[22:23], s[14:15] -; GCN-NEXT: v_mov_b64_e32 v[20:21], s[12:13] -; 
GCN-NEXT: v_mov_b64_e32 v[18:19], s[10:11] -; GCN-NEXT: v_mov_b64_e32 v[16:17], s[8:9] -; GCN-NEXT: v_mov_b32_e32 v28, s16 +; GCN-NEXT: v_mov_b64_e32 v[24:25], s[14:15] +; GCN-NEXT: v_mov_b64_e32 v[22:23], s[12:13] +; GCN-NEXT: v_mov_b64_e32 v[20:21], s[10:11] +; GCN-NEXT: v_mov_b64_e32 v[18:19], s[8:9] +; GCN-NEXT: v_mov_b32_e32 v16, s16 ; GCN-NEXT: s_waitcnt vmcnt(0) ; GCN-NEXT: s_nop 0 -; GCN-NEXT: v_smfmac_f32_32x32x32_bf16 v[0:15], v[24:27], v[16:23], v28 cbsz:1 abid:2 +; GCN-NEXT: v_smfmac_f32_32x32x32_bf16 v[0:15], v[26:29], v[18:25], v16 cbsz:1 abid:2 ; GCN-NEXT: v_mov_b32_e32 v16, 0 ; GCN-NEXT: s_nop 10 ; GCN-NEXT: global_store_dwordx4 v16, v[8:11], s[6:7] offset:32 @@ -779,53 +753,37 @@ define <16 x float> @test_smfmac_f32_32x32x32_bf16__sgpr(<8 x bfloat> inreg %arg ; GCN-LABEL: test_smfmac_f32_32x32x32_bf16__sgpr: ; GCN: ; %bb.0: ; GCN-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0) -; GCN-NEXT: v_mov_b32_e32 v36, s0 -; GCN-NEXT: v_mov_b32_e32 v37, s1 -; GCN-NEXT: v_mov_b32_e32 v38, s2 -; GCN-NEXT: v_mov_b32_e32 v39, s3 -; GCN-NEXT: v_mov_b32_e32 v13, s25 -; GCN-NEXT: v_mov_b32_e32 v14, s26 -; GCN-NEXT: v_mov_b32_e32 v15, s27 -; GCN-NEXT: v_mov_b32_e32 v16, s28 -; GCN-NEXT: v_mov_b32_e32 v17, s29 -; GCN-NEXT: v_mov_b32_e32 v28, s16 -; GCN-NEXT: v_mov_b32_e32 v29, s17 -; GCN-NEXT: v_mov_b32_e32 v30, s18 -; GCN-NEXT: v_mov_b32_e32 v31, s19 -; GCN-NEXT: v_mov_b32_e32 v32, s20 -; GCN-NEXT: v_mov_b32_e32 v33, s21 -; GCN-NEXT: v_mov_b32_e32 v34, s22 -; GCN-NEXT: v_mov_b32_e32 v35, s23 -; GCN-NEXT: v_mov_b32_e32 v12, s24 -; GCN-NEXT: v_mov_b32_e32 v18, v0 -; GCN-NEXT: v_mov_b32_e32 v19, v1 -; GCN-NEXT: v_mov_b32_e32 v20, v2 -; GCN-NEXT: v_mov_b32_e32 v21, v3 -; GCN-NEXT: v_mov_b32_e32 v22, v4 -; GCN-NEXT: v_mov_b32_e32 v23, v5 -; GCN-NEXT: v_mov_b32_e32 v24, v6 -; GCN-NEXT: v_mov_b32_e32 v25, v7 -; GCN-NEXT: v_mov_b32_e32 v26, v8 -; GCN-NEXT: v_mov_b32_e32 v27, v9 +; GCN-NEXT: v_mov_b32_e32 v26, s0 +; GCN-NEXT: v_mov_b32_e32 v27, s1 +; GCN-NEXT: v_mov_b32_e32 v28, s2 +; GCN-NEXT: v_mov_b32_e32 v29, s3 +; GCN-NEXT: v_mov_b32_e32 v16, v10 +; GCN-NEXT: v_mov_b32_e32 v15, v9 +; GCN-NEXT: v_mov_b32_e32 v14, v8 +; GCN-NEXT: v_mov_b32_e32 v13, v7 +; GCN-NEXT: v_mov_b32_e32 v12, v6 +; GCN-NEXT: v_mov_b32_e32 v11, v5 +; GCN-NEXT: v_mov_b32_e32 v10, v4 +; GCN-NEXT: v_mov_b32_e32 v9, v3 +; GCN-NEXT: v_mov_b32_e32 v8, v2 +; GCN-NEXT: v_mov_b32_e32 v7, v1 +; GCN-NEXT: v_mov_b32_e32 v6, v0 +; GCN-NEXT: v_mov_b32_e32 v0, s24 +; GCN-NEXT: v_mov_b32_e32 v1, s25 +; GCN-NEXT: v_mov_b32_e32 v2, s26 +; GCN-NEXT: v_mov_b32_e32 v3, s27 +; GCN-NEXT: v_mov_b32_e32 v4, s28 +; GCN-NEXT: v_mov_b32_e32 v5, s29 +; GCN-NEXT: v_mov_b32_e32 v18, s16 +; GCN-NEXT: v_mov_b32_e32 v19, s17 +; GCN-NEXT: v_mov_b32_e32 v20, s18 +; GCN-NEXT: v_mov_b32_e32 v21, s19 +; GCN-NEXT: v_mov_b32_e32 v22, s20 +; GCN-NEXT: v_mov_b32_e32 v23, s21 +; GCN-NEXT: v_mov_b32_e32 v24, s22 +; GCN-NEXT: v_mov_b32_e32 v25, s23 ; GCN-NEXT: s_nop 1 -; GCN-NEXT: v_smfmac_f32_32x32x32_bf16 v[12:27], v[36:39], v[28:35], v10 -; GCN-NEXT: s_nop 11 -; GCN-NEXT: v_mov_b32_e32 v0, v12 -; GCN-NEXT: v_mov_b32_e32 v1, v13 -; GCN-NEXT: v_mov_b32_e32 v2, v14 -; GCN-NEXT: v_mov_b32_e32 v3, v15 -; GCN-NEXT: v_mov_b32_e32 v4, v16 -; GCN-NEXT: v_mov_b32_e32 v5, v17 -; GCN-NEXT: v_mov_b32_e32 v6, v18 -; GCN-NEXT: v_mov_b32_e32 v7, v19 -; GCN-NEXT: v_mov_b32_e32 v8, v20 -; GCN-NEXT: v_mov_b32_e32 v9, v21 -; GCN-NEXT: v_mov_b32_e32 v10, v22 -; GCN-NEXT: v_mov_b32_e32 v11, v23 -; GCN-NEXT: v_mov_b32_e32 v12, v24 -; GCN-NEXT: v_mov_b32_e32 v13, v25 -; GCN-NEXT: 
v_mov_b32_e32 v14, v26 -; GCN-NEXT: v_mov_b32_e32 v15, v27 +; GCN-NEXT: v_smfmac_f32_32x32x32_bf16 v[0:15], v[26:29], v[18:25], v16 ; GCN-NEXT: s_setpc_b64 s[30:31] %result = call <16 x float> @llvm.amdgcn.smfmac.f32.32x32x32.bf16(<8 x bfloat> %arg0, <16 x bfloat> %arg1, <16 x float> %arg2, i32 %arg3, i32 immarg 0, i32 immarg 0) ret <16 x float> %result @@ -953,30 +911,25 @@ define <4 x i32> @test_smfmac_i32_16x16x128_i8__sgpr(<4 x i32> inreg %arg0, <8 x ; SDAG-LABEL: test_smfmac_i32_16x16x128_i8__sgpr: ; SDAG: ; %bb.0: ; SDAG-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0) -; SDAG-NEXT: v_mov_b32_e32 v10, s0 -; SDAG-NEXT: v_mov_b32_e32 v11, s1 -; SDAG-NEXT: v_mov_b32_e32 v12, s2 -; SDAG-NEXT: v_mov_b32_e32 v13, s3 -; SDAG-NEXT: v_mov_b32_e32 v2, s16 -; SDAG-NEXT: v_mov_b32_e32 v3, s17 -; SDAG-NEXT: v_mov_b32_e32 v4, s18 -; SDAG-NEXT: v_mov_b32_e32 v5, s19 -; SDAG-NEXT: v_mov_b32_e32 v6, s20 -; SDAG-NEXT: v_mov_b32_e32 v7, s21 -; SDAG-NEXT: v_mov_b32_e32 v8, s22 -; SDAG-NEXT: v_mov_b32_e32 v9, s23 -; SDAG-NEXT: v_accvgpr_write_b32 a0, s24 -; SDAG-NEXT: v_accvgpr_write_b32 a1, s25 -; SDAG-NEXT: v_accvgpr_write_b32 a2, s26 -; SDAG-NEXT: v_accvgpr_write_b32 a3, s27 -; SDAG-NEXT: v_mov_b32_e32 v0, s28 +; SDAG-NEXT: v_mov_b32_e32 v14, s0 +; SDAG-NEXT: v_mov_b32_e32 v15, s1 +; SDAG-NEXT: v_mov_b32_e32 v16, s2 +; SDAG-NEXT: v_mov_b32_e32 v17, s3 +; SDAG-NEXT: v_mov_b32_e32 v6, s16 +; SDAG-NEXT: v_mov_b32_e32 v7, s17 +; SDAG-NEXT: v_mov_b32_e32 v8, s18 +; SDAG-NEXT: v_mov_b32_e32 v9, s19 +; SDAG-NEXT: v_mov_b32_e32 v10, s20 +; SDAG-NEXT: v_mov_b32_e32 v11, s21 +; SDAG-NEXT: v_mov_b32_e32 v12, s22 +; SDAG-NEXT: v_mov_b32_e32 v13, s23 +; SDAG-NEXT: v_mov_b32_e32 v0, s24 +; SDAG-NEXT: v_mov_b32_e32 v1, s25 +; SDAG-NEXT: v_mov_b32_e32 v2, s26 +; SDAG-NEXT: v_mov_b32_e32 v3, s27 +; SDAG-NEXT: v_mov_b32_e32 v4, s28 ; SDAG-NEXT: s_nop 1 -; SDAG-NEXT: v_smfmac_i32_16x16x128_i8 a[0:3], v[10:13], v[2:9], v0 -; SDAG-NEXT: s_nop 7 -; SDAG-NEXT: v_accvgpr_read_b32 v0, a0 -; SDAG-NEXT: v_accvgpr_read_b32 v1, a1 -; SDAG-NEXT: v_accvgpr_read_b32 v2, a2 -; SDAG-NEXT: v_accvgpr_read_b32 v3, a3 +; SDAG-NEXT: v_smfmac_i32_16x16x128_i8 v[0:3], v[14:17], v[6:13], v4 ; SDAG-NEXT: s_setpc_b64 s[30:31] ; ; GISEL-LABEL: test_smfmac_i32_16x16x128_i8__sgpr: @@ -1275,53 +1228,37 @@ define <16 x i32> @test_smfmac_i32_32x32x64_i8__sgpr(<4 x i32> inreg %arg0, <8 x ; SDAG-LABEL: test_smfmac_i32_32x32x64_i8__sgpr: ; SDAG: ; %bb.0: ; SDAG-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0) -; SDAG-NEXT: v_mov_b32_e32 v36, s0 -; SDAG-NEXT: v_mov_b32_e32 v37, s1 -; SDAG-NEXT: v_mov_b32_e32 v38, s2 -; SDAG-NEXT: v_mov_b32_e32 v39, s3 -; SDAG-NEXT: v_mov_b32_e32 v13, s25 -; SDAG-NEXT: v_mov_b32_e32 v14, s26 -; SDAG-NEXT: v_mov_b32_e32 v15, s27 -; SDAG-NEXT: v_mov_b32_e32 v16, s28 -; SDAG-NEXT: v_mov_b32_e32 v17, s29 -; SDAG-NEXT: v_mov_b32_e32 v28, s16 -; SDAG-NEXT: v_mov_b32_e32 v29, s17 -; SDAG-NEXT: v_mov_b32_e32 v30, s18 -; SDAG-NEXT: v_mov_b32_e32 v31, s19 -; SDAG-NEXT: v_mov_b32_e32 v32, s20 -; SDAG-NEXT: v_mov_b32_e32 v33, s21 -; SDAG-NEXT: v_mov_b32_e32 v34, s22 -; SDAG-NEXT: v_mov_b32_e32 v35, s23 -; SDAG-NEXT: v_mov_b32_e32 v12, s24 -; SDAG-NEXT: v_mov_b32_e32 v18, v0 -; SDAG-NEXT: v_mov_b32_e32 v19, v1 -; SDAG-NEXT: v_mov_b32_e32 v20, v2 -; SDAG-NEXT: v_mov_b32_e32 v21, v3 -; SDAG-NEXT: v_mov_b32_e32 v22, v4 -; SDAG-NEXT: v_mov_b32_e32 v23, v5 -; SDAG-NEXT: v_mov_b32_e32 v24, v6 -; SDAG-NEXT: v_mov_b32_e32 v25, v7 -; SDAG-NEXT: v_mov_b32_e32 v26, v8 -; SDAG-NEXT: v_mov_b32_e32 v27, v9 +; SDAG-NEXT: v_mov_b32_e32 v26, s0 +; 
SDAG-NEXT: v_mov_b32_e32 v27, s1 +; SDAG-NEXT: v_mov_b32_e32 v28, s2 +; SDAG-NEXT: v_mov_b32_e32 v29, s3 +; SDAG-NEXT: v_mov_b32_e32 v16, v10 +; SDAG-NEXT: v_mov_b32_e32 v15, v9 +; SDAG-NEXT: v_mov_b32_e32 v14, v8 +; SDAG-NEXT: v_mov_b32_e32 v13, v7 +; SDAG-NEXT: v_mov_b32_e32 v12, v6 +; SDAG-NEXT: v_mov_b32_e32 v11, v5 +; SDAG-NEXT: v_mov_b32_e32 v10, v4 +; SDAG-NEXT: v_mov_b32_e32 v9, v3 +; SDAG-NEXT: v_mov_b32_e32 v8, v2 +; SDAG-NEXT: v_mov_b32_e32 v7, v1 +; SDAG-NEXT: v_mov_b32_e32 v6, v0 +; SDAG-NEXT: v_mov_b32_e32 v0, s24 +; SDAG-NEXT: v_mov_b32_e32 v1, s25 +; SDAG-NEXT: v_mov_b32_e32 v2, s26 +; SDAG-NEXT: v_mov_b32_e32 v3, s27 +; SDAG-NEXT: v_mov_b32_e32 v4, s28 +; SDAG-NEXT: v_mov_b32_e32 v5, s29 +; SDAG-NEXT: v_mov_b32_e32 v18, s16 +; SDAG-NEXT: v_mov_b32_e32 v19, s17 +; SDAG-NEXT: v_mov_b32_e32 v20, s18 +; SDAG-NEXT: v_mov_b32_e32 v21, s19 +; SDAG-NEXT: v_mov_b32_e32 v22, s20 +; SDAG-NEXT: v_mov_b32_e32 v23, s21 +; SDAG-NEXT: v_mov_b32_e32 v24, s22 +; SDAG-NEXT: v_mov_b32_e32 v25, s23 ; SDAG-NEXT: s_nop 1 -; SDAG-NEXT: v_smfmac_i32_32x32x64_i8 v[12:27], v[36:39], v[28:35], v10 -; SDAG-NEXT: s_nop 11 -; SDAG-NEXT: v_mov_b32_e32 v0, v12 -; SDAG-NEXT: v_mov_b32_e32 v1, v13 -; SDAG-NEXT: v_mov_b32_e32 v2, v14 -; SDAG-NEXT: v_mov_b32_e32 v3, v15 -; SDAG-NEXT: v_mov_b32_e32 v4, v16 -; SDAG-NEXT: v_mov_b32_e32 v5, v17 -; SDAG-NEXT: v_mov_b32_e32 v6, v18 -; SDAG-NEXT: v_mov_b32_e32 v7, v19 -; SDAG-NEXT: v_mov_b32_e32 v8, v20 -; SDAG-NEXT: v_mov_b32_e32 v9, v21 -; SDAG-NEXT: v_mov_b32_e32 v10, v22 -; SDAG-NEXT: v_mov_b32_e32 v11, v23 -; SDAG-NEXT: v_mov_b32_e32 v12, v24 -; SDAG-NEXT: v_mov_b32_e32 v13, v25 -; SDAG-NEXT: v_mov_b32_e32 v14, v26 -; SDAG-NEXT: v_mov_b32_e32 v15, v27 +; SDAG-NEXT: v_smfmac_i32_32x32x64_i8 v[0:15], v[26:29], v[18:25], v16 ; SDAG-NEXT: s_setpc_b64 s[30:31] ; ; GISEL-LABEL: test_smfmac_i32_32x32x64_i8__sgpr: @@ -1489,30 +1426,25 @@ define <4 x float> @test_smfmac_f32_16x16x128_bf8_bf8__sgpr(<4 x i32> inreg %arg ; SDAG-LABEL: test_smfmac_f32_16x16x128_bf8_bf8__sgpr: ; SDAG: ; %bb.0: ; SDAG-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0) -; SDAG-NEXT: v_mov_b32_e32 v10, s0 -; SDAG-NEXT: v_mov_b32_e32 v11, s1 -; SDAG-NEXT: v_mov_b32_e32 v12, s2 -; SDAG-NEXT: v_mov_b32_e32 v13, s3 -; SDAG-NEXT: v_mov_b32_e32 v2, s16 -; SDAG-NEXT: v_mov_b32_e32 v3, s17 -; SDAG-NEXT: v_mov_b32_e32 v4, s18 -; SDAG-NEXT: v_mov_b32_e32 v5, s19 -; SDAG-NEXT: v_mov_b32_e32 v6, s20 -; SDAG-NEXT: v_mov_b32_e32 v7, s21 -; SDAG-NEXT: v_mov_b32_e32 v8, s22 -; SDAG-NEXT: v_mov_b32_e32 v9, s23 -; SDAG-NEXT: v_accvgpr_write_b32 a0, s24 -; SDAG-NEXT: v_accvgpr_write_b32 a1, s25 -; SDAG-NEXT: v_accvgpr_write_b32 a2, s26 -; SDAG-NEXT: v_accvgpr_write_b32 a3, s27 -; SDAG-NEXT: v_mov_b32_e32 v0, s28 +; SDAG-NEXT: v_mov_b32_e32 v14, s0 +; SDAG-NEXT: v_mov_b32_e32 v15, s1 +; SDAG-NEXT: v_mov_b32_e32 v16, s2 +; SDAG-NEXT: v_mov_b32_e32 v17, s3 +; SDAG-NEXT: v_mov_b32_e32 v6, s16 +; SDAG-NEXT: v_mov_b32_e32 v7, s17 +; SDAG-NEXT: v_mov_b32_e32 v8, s18 +; SDAG-NEXT: v_mov_b32_e32 v9, s19 +; SDAG-NEXT: v_mov_b32_e32 v10, s20 +; SDAG-NEXT: v_mov_b32_e32 v11, s21 +; SDAG-NEXT: v_mov_b32_e32 v12, s22 +; SDAG-NEXT: v_mov_b32_e32 v13, s23 +; SDAG-NEXT: v_mov_b32_e32 v0, s24 +; SDAG-NEXT: v_mov_b32_e32 v1, s25 +; SDAG-NEXT: v_mov_b32_e32 v2, s26 +; SDAG-NEXT: v_mov_b32_e32 v3, s27 +; SDAG-NEXT: v_mov_b32_e32 v4, s28 ; SDAG-NEXT: s_nop 1 -; SDAG-NEXT: v_smfmac_f32_16x16x128_bf8_bf8 a[0:3], v[10:13], v[2:9], v0 -; SDAG-NEXT: s_nop 7 -; SDAG-NEXT: v_accvgpr_read_b32 v0, a0 -; SDAG-NEXT: v_accvgpr_read_b32 v1, a1 -; 
SDAG-NEXT: v_accvgpr_read_b32 v2, a2 -; SDAG-NEXT: v_accvgpr_read_b32 v3, a3 +; SDAG-NEXT: v_smfmac_f32_16x16x128_bf8_bf8 v[0:3], v[14:17], v[6:13], v4 ; SDAG-NEXT: s_setpc_b64 s[30:31] ; ; GISEL-LABEL: test_smfmac_f32_16x16x128_bf8_bf8__sgpr: @@ -1658,30 +1590,25 @@ define <4 x float> @test_smfmac_f32_16x16x128_bf8_fp8__sgpr(<4 x i32> inreg %arg ; SDAG-LABEL: test_smfmac_f32_16x16x128_bf8_fp8__sgpr: ; SDAG: ; %bb.0: ; SDAG-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0) -; SDAG-NEXT: v_mov_b32_e32 v10, s0 -; SDAG-NEXT: v_mov_b32_e32 v11, s1 -; SDAG-NEXT: v_mov_b32_e32 v12, s2 -; SDAG-NEXT: v_mov_b32_e32 v13, s3 -; SDAG-NEXT: v_mov_b32_e32 v2, s16 -; SDAG-NEXT: v_mov_b32_e32 v3, s17 -; SDAG-NEXT: v_mov_b32_e32 v4, s18 -; SDAG-NEXT: v_mov_b32_e32 v5, s19 -; SDAG-NEXT: v_mov_b32_e32 v6, s20 -; SDAG-NEXT: v_mov_b32_e32 v7, s21 -; SDAG-NEXT: v_mov_b32_e32 v8, s22 -; SDAG-NEXT: v_mov_b32_e32 v9, s23 -; SDAG-NEXT: v_accvgpr_write_b32 a0, s24 -; SDAG-NEXT: v_accvgpr_write_b32 a1, s25 -; SDAG-NEXT: v_accvgpr_write_b32 a2, s26 -; SDAG-NEXT: v_accvgpr_write_b32 a3, s27 -; SDAG-NEXT: v_mov_b32_e32 v0, s28 +; SDAG-NEXT: v_mov_b32_e32 v14, s0 +; SDAG-NEXT: v_mov_b32_e32 v15, s1 +; SDAG-NEXT: v_mov_b32_e32 v16, s2 +; SDAG-NEXT: v_mov_b32_e32 v17, s3 +; SDAG-NEXT: v_mov_b32_e32 v6, s16 +; SDAG-NEXT: v_mov_b32_e32 v7, s17 +; SDAG-NEXT: v_mov_b32_e32 v8, s18 +; SDAG-NEXT: v_mov_b32_e32 v9, s19 +; SDAG-NEXT: v_mov_b32_e32 v10, s20 +; SDAG-NEXT: v_mov_b32_e32 v11, s21 +; SDAG-NEXT: v_mov_b32_e32 v12, s22 +; SDAG-NEXT: v_mov_b32_e32 v13, s23 +; SDAG-NEXT: v_mov_b32_e32 v0, s24 +; SDAG-NEXT: v_mov_b32_e32 v1, s25 +; SDAG-NEXT: v_mov_b32_e32 v2, s26 +; SDAG-NEXT: v_mov_b32_e32 v3, s27 +; SDAG-NEXT: v_mov_b32_e32 v4, s28 ; SDAG-NEXT: s_nop 1 -; SDAG-NEXT: v_smfmac_f32_16x16x128_bf8_fp8 a[0:3], v[10:13], v[2:9], v0 -; SDAG-NEXT: s_nop 7 -; SDAG-NEXT: v_accvgpr_read_b32 v0, a0 -; SDAG-NEXT: v_accvgpr_read_b32 v1, a1 -; SDAG-NEXT: v_accvgpr_read_b32 v2, a2 -; SDAG-NEXT: v_accvgpr_read_b32 v3, a3 +; SDAG-NEXT: v_smfmac_f32_16x16x128_bf8_fp8 v[0:3], v[14:17], v[6:13], v4 ; SDAG-NEXT: s_setpc_b64 s[30:31] ; ; GISEL-LABEL: test_smfmac_f32_16x16x128_bf8_fp8__sgpr: @@ -1827,30 +1754,25 @@ define <4 x float> @test_smfmac_f32_16x16x128_fp8_bf8__sgpr(<4 x i32> inreg %arg ; SDAG-LABEL: test_smfmac_f32_16x16x128_fp8_bf8__sgpr: ; SDAG: ; %bb.0: ; SDAG-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0) -; SDAG-NEXT: v_mov_b32_e32 v10, s0 -; SDAG-NEXT: v_mov_b32_e32 v11, s1 -; SDAG-NEXT: v_mov_b32_e32 v12, s2 -; SDAG-NEXT: v_mov_b32_e32 v13, s3 -; SDAG-NEXT: v_mov_b32_e32 v2, s16 -; SDAG-NEXT: v_mov_b32_e32 v3, s17 -; SDAG-NEXT: v_mov_b32_e32 v4, s18 -; SDAG-NEXT: v_mov_b32_e32 v5, s19 -; SDAG-NEXT: v_mov_b32_e32 v6, s20 -; SDAG-NEXT: v_mov_b32_e32 v7, s21 -; SDAG-NEXT: v_mov_b32_e32 v8, s22 -; SDAG-NEXT: v_mov_b32_e32 v9, s23 -; SDAG-NEXT: v_accvgpr_write_b32 a0, s24 -; SDAG-NEXT: v_accvgpr_write_b32 a1, s25 -; SDAG-NEXT: v_accvgpr_write_b32 a2, s26 -; SDAG-NEXT: v_accvgpr_write_b32 a3, s27 -; SDAG-NEXT: v_mov_b32_e32 v0, s28 +; SDAG-NEXT: v_mov_b32_e32 v14, s0 +; SDAG-NEXT: v_mov_b32_e32 v15, s1 +; SDAG-NEXT: v_mov_b32_e32 v16, s2 +; SDAG-NEXT: v_mov_b32_e32 v17, s3 +; SDAG-NEXT: v_mov_b32_e32 v6, s16 +; SDAG-NEXT: v_mov_b32_e32 v7, s17 +; SDAG-NEXT: v_mov_b32_e32 v8, s18 +; SDAG-NEXT: v_mov_b32_e32 v9, s19 +; SDAG-NEXT: v_mov_b32_e32 v10, s20 +; SDAG-NEXT: v_mov_b32_e32 v11, s21 +; SDAG-NEXT: v_mov_b32_e32 v12, s22 +; SDAG-NEXT: v_mov_b32_e32 v13, s23 +; SDAG-NEXT: v_mov_b32_e32 v0, s24 +; SDAG-NEXT: v_mov_b32_e32 v1, s25 +; 
SDAG-NEXT: v_mov_b32_e32 v2, s26 +; SDAG-NEXT: v_mov_b32_e32 v3, s27 +; SDAG-NEXT: v_mov_b32_e32 v4, s28 ; SDAG-NEXT: s_nop 1 -; SDAG-NEXT: v_smfmac_f32_16x16x128_fp8_bf8 a[0:3], v[10:13], v[2:9], v0 -; SDAG-NEXT: s_nop 7 -; SDAG-NEXT: v_accvgpr_read_b32 v0, a0 -; SDAG-NEXT: v_accvgpr_read_b32 v1, a1 -; SDAG-NEXT: v_accvgpr_read_b32 v2, a2 -; SDAG-NEXT: v_accvgpr_read_b32 v3, a3 +; SDAG-NEXT: v_smfmac_f32_16x16x128_fp8_bf8 v[0:3], v[14:17], v[6:13], v4 ; SDAG-NEXT: s_setpc_b64 s[30:31] ; ; GISEL-LABEL: test_smfmac_f32_16x16x128_fp8_bf8__sgpr: @@ -1996,30 +1918,25 @@ define <4 x float> @test_smfmac_f32_16x16x128_fp8_fp8__sgpr(<4 x i32> inreg %arg ; SDAG-LABEL: test_smfmac_f32_16x16x128_fp8_fp8__sgpr: ; SDAG: ; %bb.0: ; SDAG-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0) -; SDAG-NEXT: v_mov_b32_e32 v10, s0 -; SDAG-NEXT: v_mov_b32_e32 v11, s1 -; SDAG-NEXT: v_mov_b32_e32 v12, s2 -; SDAG-NEXT: v_mov_b32_e32 v13, s3 -; SDAG-NEXT: v_mov_b32_e32 v2, s16 -; SDAG-NEXT: v_mov_b32_e32 v3, s17 -; SDAG-NEXT: v_mov_b32_e32 v4, s18 -; SDAG-NEXT: v_mov_b32_e32 v5, s19 -; SDAG-NEXT: v_mov_b32_e32 v6, s20 -; SDAG-NEXT: v_mov_b32_e32 v7, s21 -; SDAG-NEXT: v_mov_b32_e32 v8, s22 -; SDAG-NEXT: v_mov_b32_e32 v9, s23 -; SDAG-NEXT: v_accvgpr_write_b32 a0, s24 -; SDAG-NEXT: v_accvgpr_write_b32 a1, s25 -; SDAG-NEXT: v_accvgpr_write_b32 a2, s26 -; SDAG-NEXT: v_accvgpr_write_b32 a3, s27 -; SDAG-NEXT: v_mov_b32_e32 v0, s28 +; SDAG-NEXT: v_mov_b32_e32 v14, s0 +; SDAG-NEXT: v_mov_b32_e32 v15, s1 +; SDAG-NEXT: v_mov_b32_e32 v16, s2 +; SDAG-NEXT: v_mov_b32_e32 v17, s3 +; SDAG-NEXT: v_mov_b32_e32 v6, s16 +; SDAG-NEXT: v_mov_b32_e32 v7, s17 +; SDAG-NEXT: v_mov_b32_e32 v8, s18 +; SDAG-NEXT: v_mov_b32_e32 v9, s19 +; SDAG-NEXT: v_mov_b32_e32 v10, s20 +; SDAG-NEXT: v_mov_b32_e32 v11, s21 +; SDAG-NEXT: v_mov_b32_e32 v12, s22 +; SDAG-NEXT: v_mov_b32_e32 v13, s23 +; SDAG-NEXT: v_mov_b32_e32 v0, s24 +; SDAG-NEXT: v_mov_b32_e32 v1, s25 +; SDAG-NEXT: v_mov_b32_e32 v2, s26 +; SDAG-NEXT: v_mov_b32_e32 v3, s27 +; SDAG-NEXT: v_mov_b32_e32 v4, s28 ; SDAG-NEXT: s_nop 1 -; SDAG-NEXT: v_smfmac_f32_16x16x128_fp8_fp8 a[0:3], v[10:13], v[2:9], v0 -; SDAG-NEXT: s_nop 7 -; SDAG-NEXT: v_accvgpr_read_b32 v0, a0 -; SDAG-NEXT: v_accvgpr_read_b32 v1, a1 -; SDAG-NEXT: v_accvgpr_read_b32 v2, a2 -; SDAG-NEXT: v_accvgpr_read_b32 v3, a3 +; SDAG-NEXT: v_smfmac_f32_16x16x128_fp8_fp8 v[0:3], v[14:17], v[6:13], v4 ; SDAG-NEXT: s_setpc_b64 s[30:31] ; ; GISEL-LABEL: test_smfmac_f32_16x16x128_fp8_fp8__sgpr: @@ -2318,53 +2235,37 @@ define <16 x float> @test_smfmac_f32_32x32x64_bf8_bf8__sgpr(<4 x i32> inreg %arg ; SDAG-LABEL: test_smfmac_f32_32x32x64_bf8_bf8__sgpr: ; SDAG: ; %bb.0: ; SDAG-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0) -; SDAG-NEXT: v_mov_b32_e32 v36, s0 -; SDAG-NEXT: v_mov_b32_e32 v37, s1 -; SDAG-NEXT: v_mov_b32_e32 v38, s2 -; SDAG-NEXT: v_mov_b32_e32 v39, s3 -; SDAG-NEXT: v_mov_b32_e32 v13, s25 -; SDAG-NEXT: v_mov_b32_e32 v14, s26 -; SDAG-NEXT: v_mov_b32_e32 v15, s27 -; SDAG-NEXT: v_mov_b32_e32 v16, s28 -; SDAG-NEXT: v_mov_b32_e32 v17, s29 -; SDAG-NEXT: v_mov_b32_e32 v28, s16 -; SDAG-NEXT: v_mov_b32_e32 v29, s17 -; SDAG-NEXT: v_mov_b32_e32 v30, s18 -; SDAG-NEXT: v_mov_b32_e32 v31, s19 -; SDAG-NEXT: v_mov_b32_e32 v32, s20 -; SDAG-NEXT: v_mov_b32_e32 v33, s21 -; SDAG-NEXT: v_mov_b32_e32 v34, s22 -; SDAG-NEXT: v_mov_b32_e32 v35, s23 -; SDAG-NEXT: v_mov_b32_e32 v12, s24 -; SDAG-NEXT: v_mov_b32_e32 v18, v0 -; SDAG-NEXT: v_mov_b32_e32 v19, v1 -; SDAG-NEXT: v_mov_b32_e32 v20, v2 -; SDAG-NEXT: v_mov_b32_e32 v21, v3 -; SDAG-NEXT: v_mov_b32_e32 v22, v4 -; 
SDAG-NEXT: v_mov_b32_e32 v23, v5 -; SDAG-NEXT: v_mov_b32_e32 v24, v6 -; SDAG-NEXT: v_mov_b32_e32 v25, v7 -; SDAG-NEXT: v_mov_b32_e32 v26, v8 -; SDAG-NEXT: v_mov_b32_e32 v27, v9 +; SDAG-NEXT: v_mov_b32_e32 v26, s0 +; SDAG-NEXT: v_mov_b32_e32 v27, s1 +; SDAG-NEXT: v_mov_b32_e32 v28, s2 +; SDAG-NEXT: v_mov_b32_e32 v29, s3 +; SDAG-NEXT: v_mov_b32_e32 v16, v10 +; SDAG-NEXT: v_mov_b32_e32 v15, v9 +; SDAG-NEXT: v_mov_b32_e32 v14, v8 +; SDAG-NEXT: v_mov_b32_e32 v13, v7 +; SDAG-NEXT: v_mov_b32_e32 v12, v6 +; SDAG-NEXT: v_mov_b32_e32 v11, v5 +; SDAG-NEXT: v_mov_b32_e32 v10, v4 +; SDAG-NEXT: v_mov_b32_e32 v9, v3 +; SDAG-NEXT: v_mov_b32_e32 v8, v2 +; SDAG-NEXT: v_mov_b32_e32 v7, v1 +; SDAG-NEXT: v_mov_b32_e32 v6, v0 +; SDAG-NEXT: v_mov_b32_e32 v0, s24 +; SDAG-NEXT: v_mov_b32_e32 v1, s25 +; SDAG-NEXT: v_mov_b32_e32 v2, s26 +; SDAG-NEXT: v_mov_b32_e32 v3, s27 +; SDAG-NEXT: v_mov_b32_e32 v4, s28 +; SDAG-NEXT: v_mov_b32_e32 v5, s29 +; SDAG-NEXT: v_mov_b32_e32 v18, s16 +; SDAG-NEXT: v_mov_b32_e32 v19, s17 +; SDAG-NEXT: v_mov_b32_e32 v20, s18 +; SDAG-NEXT: v_mov_b32_e32 v21, s19 +; SDAG-NEXT: v_mov_b32_e32 v22, s20 +; SDAG-NEXT: v_mov_b32_e32 v23, s21 +; SDAG-NEXT: v_mov_b32_e32 v24, s22 +; SDAG-NEXT: v_mov_b32_e32 v25, s23 ; SDAG-NEXT: s_nop 1 -; SDAG-NEXT: v_smfmac_f32_32x32x64_bf8_bf8 v[12:27], v[36:39], v[28:35], v10 -; SDAG-NEXT: s_nop 11 -; SDAG-NEXT: v_mov_b32_e32 v0, v12 -; SDAG-NEXT: v_mov_b32_e32 v1, v13 -; SDAG-NEXT: v_mov_b32_e32 v2, v14 -; SDAG-NEXT: v_mov_b32_e32 v3, v15 -; SDAG-NEXT: v_mov_b32_e32 v4, v16 -; SDAG-NEXT: v_mov_b32_e32 v5, v17 -; SDAG-NEXT: v_mov_b32_e32 v6, v18 -; SDAG-NEXT: v_mov_b32_e32 v7, v19 -; SDAG-NEXT: v_mov_b32_e32 v8, v20 -; SDAG-NEXT: v_mov_b32_e32 v9, v21 -; SDAG-NEXT: v_mov_b32_e32 v10, v22 -; SDAG-NEXT: v_mov_b32_e32 v11, v23 -; SDAG-NEXT: v_mov_b32_e32 v12, v24 -; SDAG-NEXT: v_mov_b32_e32 v13, v25 -; SDAG-NEXT: v_mov_b32_e32 v14, v26 -; SDAG-NEXT: v_mov_b32_e32 v15, v27 +; SDAG-NEXT: v_smfmac_f32_32x32x64_bf8_bf8 v[0:15], v[26:29], v[18:25], v16 ; SDAG-NEXT: s_setpc_b64 s[30:31] ; ; GISEL-LABEL: test_smfmac_f32_32x32x64_bf8_bf8__sgpr: @@ -2685,53 +2586,37 @@ define <16 x float> @test_smfmac_f32_32x32x64_bf8_fp8__sgpr(<4 x i32> inreg %arg ; SDAG-LABEL: test_smfmac_f32_32x32x64_bf8_fp8__sgpr: ; SDAG: ; %bb.0: ; SDAG-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0) -; SDAG-NEXT: v_mov_b32_e32 v36, s0 -; SDAG-NEXT: v_mov_b32_e32 v37, s1 -; SDAG-NEXT: v_mov_b32_e32 v38, s2 -; SDAG-NEXT: v_mov_b32_e32 v39, s3 -; SDAG-NEXT: v_mov_b32_e32 v13, s25 -; SDAG-NEXT: v_mov_b32_e32 v14, s26 -; SDAG-NEXT: v_mov_b32_e32 v15, s27 -; SDAG-NEXT: v_mov_b32_e32 v16, s28 -; SDAG-NEXT: v_mov_b32_e32 v17, s29 -; SDAG-NEXT: v_mov_b32_e32 v28, s16 -; SDAG-NEXT: v_mov_b32_e32 v29, s17 -; SDAG-NEXT: v_mov_b32_e32 v30, s18 -; SDAG-NEXT: v_mov_b32_e32 v31, s19 -; SDAG-NEXT: v_mov_b32_e32 v32, s20 -; SDAG-NEXT: v_mov_b32_e32 v33, s21 -; SDAG-NEXT: v_mov_b32_e32 v34, s22 -; SDAG-NEXT: v_mov_b32_e32 v35, s23 -; SDAG-NEXT: v_mov_b32_e32 v12, s24 -; SDAG-NEXT: v_mov_b32_e32 v18, v0 -; SDAG-NEXT: v_mov_b32_e32 v19, v1 -; SDAG-NEXT: v_mov_b32_e32 v20, v2 -; SDAG-NEXT: v_mov_b32_e32 v21, v3 -; SDAG-NEXT: v_mov_b32_e32 v22, v4 -; SDAG-NEXT: v_mov_b32_e32 v23, v5 -; SDAG-NEXT: v_mov_b32_e32 v24, v6 -; SDAG-NEXT: v_mov_b32_e32 v25, v7 -; SDAG-NEXT: v_mov_b32_e32 v26, v8 -; SDAG-NEXT: v_mov_b32_e32 v27, v9 +; SDAG-NEXT: v_mov_b32_e32 v26, s0 +; SDAG-NEXT: v_mov_b32_e32 v27, s1 +; SDAG-NEXT: v_mov_b32_e32 v28, s2 +; SDAG-NEXT: v_mov_b32_e32 v29, s3 +; SDAG-NEXT: v_mov_b32_e32 v16, v10 +; SDAG-NEXT: 
v_mov_b32_e32 v15, v9 +; SDAG-NEXT: v_mov_b32_e32 v14, v8 +; SDAG-NEXT: v_mov_b32_e32 v13, v7 +; SDAG-NEXT: v_mov_b32_e32 v12, v6 +; SDAG-NEXT: v_mov_b32_e32 v11, v5 +; SDAG-NEXT: v_mov_b32_e32 v10, v4 +; SDAG-NEXT: v_mov_b32_e32 v9, v3 +; SDAG-NEXT: v_mov_b32_e32 v8, v2 +; SDAG-NEXT: v_mov_b32_e32 v7, v1 +; SDAG-NEXT: v_mov_b32_e32 v6, v0 +; SDAG-NEXT: v_mov_b32_e32 v0, s24 +; SDAG-NEXT: v_mov_b32_e32 v1, s25 +; SDAG-NEXT: v_mov_b32_e32 v2, s26 +; SDAG-NEXT: v_mov_b32_e32 v3, s27 +; SDAG-NEXT: v_mov_b32_e32 v4, s28 +; SDAG-NEXT: v_mov_b32_e32 v5, s29 +; SDAG-NEXT: v_mov_b32_e32 v18, s16 +; SDAG-NEXT: v_mov_b32_e32 v19, s17 +; SDAG-NEXT: v_mov_b32_e32 v20, s18 +; SDAG-NEXT: v_mov_b32_e32 v21, s19 +; SDAG-NEXT: v_mov_b32_e32 v22, s20 +; SDAG-NEXT: v_mov_b32_e32 v23, s21 +; SDAG-NEXT: v_mov_b32_e32 v24, s22 +; SDAG-NEXT: v_mov_b32_e32 v25, s23 ; SDAG-NEXT: s_nop 1 -; SDAG-NEXT: v_smfmac_f32_32x32x64_bf8_fp8 v[12:27], v[36:39], v[28:35], v10 -; SDAG-NEXT: s_nop 11 -; SDAG-NEXT: v_mov_b32_e32 v0, v12 -; SDAG-NEXT: v_mov_b32_e32 v1, v13 -; SDAG-NEXT: v_mov_b32_e32 v2, v14 -; SDAG-NEXT: v_mov_b32_e32 v3, v15 -; SDAG-NEXT: v_mov_b32_e32 v4, v16 -; SDAG-NEXT: v_mov_b32_e32 v5, v17 -; SDAG-NEXT: v_mov_b32_e32 v6, v18 -; SDAG-NEXT: v_mov_b32_e32 v7, v19 -; SDAG-NEXT: v_mov_b32_e32 v8, v20 -; SDAG-NEXT: v_mov_b32_e32 v9, v21 -; SDAG-NEXT: v_mov_b32_e32 v10, v22 -; SDAG-NEXT: v_mov_b32_e32 v11, v23 -; SDAG-NEXT: v_mov_b32_e32 v12, v24 -; SDAG-NEXT: v_mov_b32_e32 v13, v25 -; SDAG-NEXT: v_mov_b32_e32 v14, v26 -; SDAG-NEXT: v_mov_b32_e32 v15, v27 +; SDAG-NEXT: v_smfmac_f32_32x32x64_bf8_fp8 v[0:15], v[26:29], v[18:25], v16 ; SDAG-NEXT: s_setpc_b64 s[30:31] ; ; GISEL-LABEL: test_smfmac_f32_32x32x64_bf8_fp8__sgpr: @@ -3052,53 +2937,37 @@ define <16 x float> @test_smfmac_f32_32x32x64_fp8_bf8__sgpr(<4 x i32> inreg %arg ; SDAG-LABEL: test_smfmac_f32_32x32x64_fp8_bf8__sgpr: ; SDAG: ; %bb.0: ; SDAG-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0) -; SDAG-NEXT: v_mov_b32_e32 v36, s0 -; SDAG-NEXT: v_mov_b32_e32 v37, s1 -; SDAG-NEXT: v_mov_b32_e32 v38, s2 -; SDAG-NEXT: v_mov_b32_e32 v39, s3 -; SDAG-NEXT: v_mov_b32_e32 v13, s25 -; SDAG-NEXT: v_mov_b32_e32 v14, s26 -; SDAG-NEXT: v_mov_b32_e32 v15, s27 -; SDAG-NEXT: v_mov_b32_e32 v16, s28 -; SDAG-NEXT: v_mov_b32_e32 v17, s29 -; SDAG-NEXT: v_mov_b32_e32 v28, s16 -; SDAG-NEXT: v_mov_b32_e32 v29, s17 -; SDAG-NEXT: v_mov_b32_e32 v30, s18 -; SDAG-NEXT: v_mov_b32_e32 v31, s19 -; SDAG-NEXT: v_mov_b32_e32 v32, s20 -; SDAG-NEXT: v_mov_b32_e32 v33, s21 -; SDAG-NEXT: v_mov_b32_e32 v34, s22 -; SDAG-NEXT: v_mov_b32_e32 v35, s23 -; SDAG-NEXT: v_mov_b32_e32 v12, s24 -; SDAG-NEXT: v_mov_b32_e32 v18, v0 -; SDAG-NEXT: v_mov_b32_e32 v19, v1 -; SDAG-NEXT: v_mov_b32_e32 v20, v2 -; SDAG-NEXT: v_mov_b32_e32 v21, v3 -; SDAG-NEXT: v_mov_b32_e32 v22, v4 -; SDAG-NEXT: v_mov_b32_e32 v23, v5 -; SDAG-NEXT: v_mov_b32_e32 v24, v6 -; SDAG-NEXT: v_mov_b32_e32 v25, v7 -; SDAG-NEXT: v_mov_b32_e32 v26, v8 -; SDAG-NEXT: v_mov_b32_e32 v27, v9 +; SDAG-NEXT: v_mov_b32_e32 v26, s0 +; SDAG-NEXT: v_mov_b32_e32 v27, s1 +; SDAG-NEXT: v_mov_b32_e32 v28, s2 +; SDAG-NEXT: v_mov_b32_e32 v29, s3 +; SDAG-NEXT: v_mov_b32_e32 v16, v10 +; SDAG-NEXT: v_mov_b32_e32 v15, v9 +; SDAG-NEXT: v_mov_b32_e32 v14, v8 +; SDAG-NEXT: v_mov_b32_e32 v13, v7 +; SDAG-NEXT: v_mov_b32_e32 v12, v6 +; SDAG-NEXT: v_mov_b32_e32 v11, v5 +; SDAG-NEXT: v_mov_b32_e32 v10, v4 +; SDAG-NEXT: v_mov_b32_e32 v9, v3 +; SDAG-NEXT: v_mov_b32_e32 v8, v2 +; SDAG-NEXT: v_mov_b32_e32 v7, v1 +; SDAG-NEXT: v_mov_b32_e32 v6, v0 +; SDAG-NEXT: v_mov_b32_e32 v0, s24 +; 
SDAG-NEXT: v_mov_b32_e32 v1, s25 +; SDAG-NEXT: v_mov_b32_e32 v2, s26 +; SDAG-NEXT: v_mov_b32_e32 v3, s27 +; SDAG-NEXT: v_mov_b32_e32 v4, s28 +; SDAG-NEXT: v_mov_b32_e32 v5, s29 +; SDAG-NEXT: v_mov_b32_e32 v18, s16 +; SDAG-NEXT: v_mov_b32_e32 v19, s17 +; SDAG-NEXT: v_mov_b32_e32 v20, s18 +; SDAG-NEXT: v_mov_b32_e32 v21, s19 +; SDAG-NEXT: v_mov_b32_e32 v22, s20 +; SDAG-NEXT: v_mov_b32_e32 v23, s21 +; SDAG-NEXT: v_mov_b32_e32 v24, s22 +; SDAG-NEXT: v_mov_b32_e32 v25, s23 ; SDAG-NEXT: s_nop 1 -; SDAG-NEXT: v_smfmac_f32_32x32x64_fp8_bf8 v[12:27], v[36:39], v[28:35], v10 -; SDAG-NEXT: s_nop 11 -; SDAG-NEXT: v_mov_b32_e32 v0, v12 -; SDAG-NEXT: v_mov_b32_e32 v1, v13 -; SDAG-NEXT: v_mov_b32_e32 v2, v14 -; SDAG-NEXT: v_mov_b32_e32 v3, v15 -; SDAG-NEXT: v_mov_b32_e32 v4, v16 -; SDAG-NEXT: v_mov_b32_e32 v5, v17 -; SDAG-NEXT: v_mov_b32_e32 v6, v18 -; SDAG-NEXT: v_mov_b32_e32 v7, v19 -; SDAG-NEXT: v_mov_b32_e32 v8, v20 -; SDAG-NEXT: v_mov_b32_e32 v9, v21 -; SDAG-NEXT: v_mov_b32_e32 v10, v22 -; SDAG-NEXT: v_mov_b32_e32 v11, v23 -; SDAG-NEXT: v_mov_b32_e32 v12, v24 -; SDAG-NEXT: v_mov_b32_e32 v13, v25 -; SDAG-NEXT: v_mov_b32_e32 v14, v26 -; SDAG-NEXT: v_mov_b32_e32 v15, v27 +; SDAG-NEXT: v_smfmac_f32_32x32x64_fp8_bf8 v[0:15], v[26:29], v[18:25], v16 ; SDAG-NEXT: s_setpc_b64 s[30:31] ; ; GISEL-LABEL: test_smfmac_f32_32x32x64_fp8_bf8__sgpr: @@ -3419,53 +3288,37 @@ define <16 x float> @test_smfmac_f32_32x32x64_fp8_fp8__sgpr(<4 x i32> inreg %arg ; SDAG-LABEL: test_smfmac_f32_32x32x64_fp8_fp8__sgpr: ; SDAG: ; %bb.0: ; SDAG-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0) -; SDAG-NEXT: v_mov_b32_e32 v36, s0 -; SDAG-NEXT: v_mov_b32_e32 v37, s1 -; SDAG-NEXT: v_mov_b32_e32 v38, s2 -; SDAG-NEXT: v_mov_b32_e32 v39, s3 -; SDAG-NEXT: v_mov_b32_e32 v13, s25 -; SDAG-NEXT: v_mov_b32_e32 v14, s26 -; SDAG-NEXT: v_mov_b32_e32 v15, s27 -; SDAG-NEXT: v_mov_b32_e32 v16, s28 -; SDAG-NEXT: v_mov_b32_e32 v17, s29 -; SDAG-NEXT: v_mov_b32_e32 v28, s16 -; SDAG-NEXT: v_mov_b32_e32 v29, s17 -; SDAG-NEXT: v_mov_b32_e32 v30, s18 -; SDAG-NEXT: v_mov_b32_e32 v31, s19 -; SDAG-NEXT: v_mov_b32_e32 v32, s20 -; SDAG-NEXT: v_mov_b32_e32 v33, s21 -; SDAG-NEXT: v_mov_b32_e32 v34, s22 -; SDAG-NEXT: v_mov_b32_e32 v35, s23 -; SDAG-NEXT: v_mov_b32_e32 v12, s24 -; SDAG-NEXT: v_mov_b32_e32 v18, v0 -; SDAG-NEXT: v_mov_b32_e32 v19, v1 -; SDAG-NEXT: v_mov_b32_e32 v20, v2 -; SDAG-NEXT: v_mov_b32_e32 v21, v3 -; SDAG-NEXT: v_mov_b32_e32 v22, v4 -; SDAG-NEXT: v_mov_b32_e32 v23, v5 -; SDAG-NEXT: v_mov_b32_e32 v24, v6 -; SDAG-NEXT: v_mov_b32_e32 v25, v7 -; SDAG-NEXT: v_mov_b32_e32 v26, v8 -; SDAG-NEXT: v_mov_b32_e32 v27, v9 +; SDAG-NEXT: v_mov_b32_e32 v26, s0 +; SDAG-NEXT: v_mov_b32_e32 v27, s1 +; SDAG-NEXT: v_mov_b32_e32 v28, s2 +; SDAG-NEXT: v_mov_b32_e32 v29, s3 +; SDAG-NEXT: v_mov_b32_e32 v16, v10 +; SDAG-NEXT: v_mov_b32_e32 v15, v9 +; SDAG-NEXT: v_mov_b32_e32 v14, v8 +; SDAG-NEXT: v_mov_b32_e32 v13, v7 +; SDAG-NEXT: v_mov_b32_e32 v12, v6 +; SDAG-NEXT: v_mov_b32_e32 v11, v5 +; SDAG-NEXT: v_mov_b32_e32 v10, v4 +; SDAG-NEXT: v_mov_b32_e32 v9, v3 +; SDAG-NEXT: v_mov_b32_e32 v8, v2 +; SDAG-NEXT: v_mov_b32_e32 v7, v1 +; SDAG-NEXT: v_mov_b32_e32 v6, v0 +; SDAG-NEXT: v_mov_b32_e32 v0, s24 +; SDAG-NEXT: v_mov_b32_e32 v1, s25 +; SDAG-NEXT: v_mov_b32_e32 v2, s26 +; SDAG-NEXT: v_mov_b32_e32 v3, s27 +; SDAG-NEXT: v_mov_b32_e32 v4, s28 +; SDAG-NEXT: v_mov_b32_e32 v5, s29 +; SDAG-NEXT: v_mov_b32_e32 v18, s16 +; SDAG-NEXT: v_mov_b32_e32 v19, s17 +; SDAG-NEXT: v_mov_b32_e32 v20, s18 +; SDAG-NEXT: v_mov_b32_e32 v21, s19 +; SDAG-NEXT: v_mov_b32_e32 v22, s20 +; SDAG-NEXT: 
v_mov_b32_e32 v23, s21 +; SDAG-NEXT: v_mov_b32_e32 v24, s22 +; SDAG-NEXT: v_mov_b32_e32 v25, s23 ; SDAG-NEXT: s_nop 1 -; SDAG-NEXT: v_smfmac_f32_32x32x64_fp8_fp8 v[12:27], v[36:39], v[28:35], v10 -; SDAG-NEXT: s_nop 11 -; SDAG-NEXT: v_mov_b32_e32 v0, v12 -; SDAG-NEXT: v_mov_b32_e32 v1, v13 -; SDAG-NEXT: v_mov_b32_e32 v2, v14 -; SDAG-NEXT: v_mov_b32_e32 v3, v15 -; SDAG-NEXT: v_mov_b32_e32 v4, v16 -; SDAG-NEXT: v_mov_b32_e32 v5, v17 -; SDAG-NEXT: v_mov_b32_e32 v6, v18 -; SDAG-NEXT: v_mov_b32_e32 v7, v19 -; SDAG-NEXT: v_mov_b32_e32 v8, v20 -; SDAG-NEXT: v_mov_b32_e32 v9, v21 -; SDAG-NEXT: v_mov_b32_e32 v10, v22 -; SDAG-NEXT: v_mov_b32_e32 v11, v23 -; SDAG-NEXT: v_mov_b32_e32 v12, v24 -; SDAG-NEXT: v_mov_b32_e32 v13, v25 -; SDAG-NEXT: v_mov_b32_e32 v14, v26 -; SDAG-NEXT: v_mov_b32_e32 v15, v27 +; SDAG-NEXT: v_smfmac_f32_32x32x64_fp8_fp8 v[0:15], v[26:29], v[18:25], v16 ; SDAG-NEXT: s_setpc_b64 s[30:31] ; ; GISEL-LABEL: test_smfmac_f32_32x32x64_fp8_fp8__sgpr: diff --git a/llvm/test/CodeGen/AMDGPU/mfma-no-register-aliasing.ll b/llvm/test/CodeGen/AMDGPU/mfma-no-register-aliasing.ll index 51cd564..f46116e 100644 --- a/llvm/test/CodeGen/AMDGPU/mfma-no-register-aliasing.ll +++ b/llvm/test/CodeGen/AMDGPU/mfma-no-register-aliasing.ll @@ -95,66 +95,66 @@ define amdgpu_kernel void @test_mfma_f32_32x32x1f32(ptr addrspace(1) %arg) #0 { ; GREEDY908-NEXT: v_mfma_f32_32x32x1f32 a[32:63], v3, v0, a[0:31] ; GREEDY908-NEXT: s_nop 15 ; GREEDY908-NEXT: s_nop 1 -; GREEDY908-NEXT: v_accvgpr_read_b32 v1, a32 -; GREEDY908-NEXT: v_accvgpr_read_b32 v5, a61 -; GREEDY908-NEXT: v_accvgpr_read_b32 v6, a60 -; GREEDY908-NEXT: v_accvgpr_write_b32 a2, v1 -; GREEDY908-NEXT: v_accvgpr_read_b32 v1, a33 -; GREEDY908-NEXT: v_accvgpr_read_b32 v7, a59 -; GREEDY908-NEXT: v_accvgpr_read_b32 v8, a58 -; GREEDY908-NEXT: v_accvgpr_write_b32 a3, v1 +; GREEDY908-NEXT: v_accvgpr_read_b32 v2, a32 +; GREEDY908-NEXT: v_accvgpr_read_b32 v6, a33 ; GREEDY908-NEXT: v_accvgpr_read_b32 v1, a34 -; GREEDY908-NEXT: v_accvgpr_read_b32 v9, a57 -; GREEDY908-NEXT: v_accvgpr_read_b32 v10, a56 +; GREEDY908-NEXT: v_accvgpr_write_b32 a2, v2 +; GREEDY908-NEXT: v_accvgpr_write_b32 a3, v6 ; GREEDY908-NEXT: v_accvgpr_write_b32 a4, v1 -; GREEDY908-NEXT: v_accvgpr_read_b32 v1, a35 -; GREEDY908-NEXT: v_accvgpr_read_b32 v11, a55 -; GREEDY908-NEXT: v_accvgpr_read_b32 v12, a54 -; GREEDY908-NEXT: v_accvgpr_write_b32 a5, v1 -; GREEDY908-NEXT: v_accvgpr_read_b32 v1, a36 -; GREEDY908-NEXT: v_accvgpr_read_b32 v13, a53 -; GREEDY908-NEXT: v_accvgpr_read_b32 v14, a52 -; GREEDY908-NEXT: v_accvgpr_write_b32 a6, v1 +; GREEDY908-NEXT: v_accvgpr_read_b32 v2, a35 +; GREEDY908-NEXT: v_accvgpr_read_b32 v6, a36 ; GREEDY908-NEXT: v_accvgpr_read_b32 v1, a37 -; GREEDY908-NEXT: v_accvgpr_read_b32 v15, a51 -; GREEDY908-NEXT: v_accvgpr_read_b32 v16, a50 +; GREEDY908-NEXT: v_accvgpr_write_b32 a5, v2 +; GREEDY908-NEXT: v_accvgpr_write_b32 a6, v6 ; GREEDY908-NEXT: v_accvgpr_write_b32 a7, v1 -; GREEDY908-NEXT: v_accvgpr_read_b32 v1, a38 -; GREEDY908-NEXT: v_accvgpr_read_b32 v17, a49 -; GREEDY908-NEXT: v_accvgpr_read_b32 v18, a48 -; GREEDY908-NEXT: v_accvgpr_write_b32 a8, v1 -; GREEDY908-NEXT: v_accvgpr_read_b32 v1, a39 -; GREEDY908-NEXT: v_accvgpr_read_b32 v19, a47 -; GREEDY908-NEXT: v_accvgpr_read_b32 v2, a46 -; GREEDY908-NEXT: v_accvgpr_write_b32 a9, v1 +; GREEDY908-NEXT: v_accvgpr_read_b32 v2, a38 +; GREEDY908-NEXT: v_accvgpr_read_b32 v6, a39 ; GREEDY908-NEXT: v_accvgpr_read_b32 v1, a40 -; GREEDY908-NEXT: v_accvgpr_write_b32 a16, v2 -; GREEDY908-NEXT: v_accvgpr_write_b32 a17, v19 +; 
GREEDY908-NEXT: v_accvgpr_write_b32 a8, v2 +; GREEDY908-NEXT: v_accvgpr_write_b32 a9, v6 ; GREEDY908-NEXT: v_accvgpr_write_b32 a10, v1 -; GREEDY908-NEXT: v_accvgpr_read_b32 v1, a41 -; GREEDY908-NEXT: v_accvgpr_write_b32 a18, v18 -; GREEDY908-NEXT: v_accvgpr_write_b32 a19, v17 -; GREEDY908-NEXT: v_accvgpr_write_b32 a11, v1 -; GREEDY908-NEXT: v_accvgpr_read_b32 v1, a42 -; GREEDY908-NEXT: v_accvgpr_write_b32 a20, v16 -; GREEDY908-NEXT: v_accvgpr_write_b32 a21, v15 -; GREEDY908-NEXT: v_accvgpr_write_b32 a12, v1 +; GREEDY908-NEXT: v_accvgpr_read_b32 v2, a41 +; GREEDY908-NEXT: v_accvgpr_read_b32 v6, a42 ; GREEDY908-NEXT: v_accvgpr_read_b32 v1, a43 -; GREEDY908-NEXT: v_accvgpr_write_b32 a22, v14 -; GREEDY908-NEXT: v_accvgpr_write_b32 a23, v13 +; GREEDY908-NEXT: v_accvgpr_write_b32 a11, v2 +; GREEDY908-NEXT: v_accvgpr_write_b32 a12, v6 ; GREEDY908-NEXT: v_accvgpr_write_b32 a13, v1 -; GREEDY908-NEXT: v_accvgpr_read_b32 v1, a44 -; GREEDY908-NEXT: v_accvgpr_write_b32 a24, v12 -; GREEDY908-NEXT: v_accvgpr_write_b32 a25, v11 -; GREEDY908-NEXT: v_accvgpr_write_b32 a14, v1 -; GREEDY908-NEXT: v_accvgpr_read_b32 v1, a45 -; GREEDY908-NEXT: v_accvgpr_write_b32 a26, v10 -; GREEDY908-NEXT: v_accvgpr_write_b32 a27, v9 -; GREEDY908-NEXT: v_accvgpr_write_b32 a15, v1 -; GREEDY908-NEXT: v_accvgpr_write_b32 a28, v8 -; GREEDY908-NEXT: v_accvgpr_write_b32 a29, v7 +; GREEDY908-NEXT: v_accvgpr_read_b32 v2, a44 +; GREEDY908-NEXT: v_accvgpr_read_b32 v6, a45 +; GREEDY908-NEXT: v_accvgpr_read_b32 v1, a46 +; GREEDY908-NEXT: v_accvgpr_write_b32 a14, v2 +; GREEDY908-NEXT: v_accvgpr_write_b32 a15, v6 +; GREEDY908-NEXT: v_accvgpr_write_b32 a16, v1 +; GREEDY908-NEXT: v_accvgpr_read_b32 v2, a47 +; GREEDY908-NEXT: v_accvgpr_read_b32 v6, a48 +; GREEDY908-NEXT: v_accvgpr_read_b32 v1, a49 +; GREEDY908-NEXT: v_accvgpr_write_b32 a17, v2 +; GREEDY908-NEXT: v_accvgpr_write_b32 a18, v6 +; GREEDY908-NEXT: v_accvgpr_write_b32 a19, v1 +; GREEDY908-NEXT: v_accvgpr_read_b32 v2, a50 +; GREEDY908-NEXT: v_accvgpr_read_b32 v6, a51 +; GREEDY908-NEXT: v_accvgpr_read_b32 v1, a52 +; GREEDY908-NEXT: v_accvgpr_write_b32 a20, v2 +; GREEDY908-NEXT: v_accvgpr_write_b32 a21, v6 +; GREEDY908-NEXT: v_accvgpr_write_b32 a22, v1 +; GREEDY908-NEXT: v_accvgpr_read_b32 v2, a53 +; GREEDY908-NEXT: v_accvgpr_read_b32 v6, a54 +; GREEDY908-NEXT: v_accvgpr_read_b32 v1, a55 +; GREEDY908-NEXT: v_accvgpr_write_b32 a23, v2 +; GREEDY908-NEXT: v_accvgpr_write_b32 a24, v6 +; GREEDY908-NEXT: v_accvgpr_write_b32 a25, v1 +; GREEDY908-NEXT: v_accvgpr_read_b32 v2, a56 +; GREEDY908-NEXT: v_accvgpr_read_b32 v6, a57 +; GREEDY908-NEXT: v_accvgpr_read_b32 v1, a58 +; GREEDY908-NEXT: v_accvgpr_write_b32 a26, v2 +; GREEDY908-NEXT: v_accvgpr_write_b32 a27, v6 +; GREEDY908-NEXT: v_accvgpr_write_b32 a28, v1 +; GREEDY908-NEXT: v_accvgpr_read_b32 v2, a59 +; GREEDY908-NEXT: v_accvgpr_read_b32 v6, a60 +; GREEDY908-NEXT: v_accvgpr_read_b32 v1, a61 +; GREEDY908-NEXT: v_accvgpr_write_b32 a29, v2 ; GREEDY908-NEXT: v_accvgpr_write_b32 a30, v6 -; GREEDY908-NEXT: v_accvgpr_write_b32 a31, v5 +; GREEDY908-NEXT: v_accvgpr_write_b32 a31, v1 ; GREEDY908-NEXT: s_nop 0 ; GREEDY908-NEXT: v_mfma_f32_32x32x1f32 a[0:31], v3, v0, a[0:31] ; GREEDY908-NEXT: s_nop 15 @@ -667,11 +667,11 @@ define amdgpu_kernel void @test_mfma_f32_16x16x1f32(ptr addrspace(1) %arg) #0 { ; GREEDY908-NEXT: v_mfma_f32_16x16x1f32 a[18:33], v0, v1, a[18:33] ; GREEDY908-NEXT: v_mfma_f32_16x16x1f32 a[2:17], v0, v1, a[18:33] ; GREEDY908-NEXT: s_nop 8 +; GREEDY908-NEXT: v_accvgpr_read_b32 v5, a18 ; GREEDY908-NEXT: v_accvgpr_read_b32 v2, a19 -; 
GREEDY908-NEXT: v_accvgpr_read_b32 v3, a18 ; GREEDY908-NEXT: s_nop 0 +; GREEDY908-NEXT: v_accvgpr_write_b32 a0, v5 ; GREEDY908-NEXT: v_accvgpr_write_b32 a1, v2 -; GREEDY908-NEXT: v_accvgpr_write_b32 a0, v3 ; GREEDY908-NEXT: s_nop 0 ; GREEDY908-NEXT: v_mfma_f32_16x16x1f32 a[0:15], v0, v1, a[0:15] ; GREEDY908-NEXT: s_nop 9 diff --git a/llvm/test/CodeGen/AMDGPU/no-fold-accvgpr-mov.ll b/llvm/test/CodeGen/AMDGPU/no-fold-accvgpr-mov.ll index cf244f0..be1788c 100644 --- a/llvm/test/CodeGen/AMDGPU/no-fold-accvgpr-mov.ll +++ b/llvm/test/CodeGen/AMDGPU/no-fold-accvgpr-mov.ll @@ -54,19 +54,20 @@ define amdgpu_kernel void @matmul_kernel(i32 %a0, i32 %a1) { ; GFX908-NEXT: s_branch .LBB0_2 ; GFX908-NEXT: .LBB0_1: ; %bb2 ; GFX908-NEXT: ; in Loop: Header=BB0_2 Depth=1 +; GFX908-NEXT: s_nop 6 +; GFX908-NEXT: v_accvgpr_read_b32 v3, a2 ; GFX908-NEXT: s_or_b32 s4, s3, 1 ; GFX908-NEXT: s_ashr_i32 s5, s3, 31 ; GFX908-NEXT: s_mov_b32 s3, s2 ; GFX908-NEXT: v_mov_b32_e32 v1, s2 -; GFX908-NEXT: s_nop 2 -; GFX908-NEXT: v_accvgpr_read_b32 v0, a2 ; GFX908-NEXT: v_mov_b32_e32 v2, s3 +; GFX908-NEXT: v_accvgpr_write_b32 a0, v3 ; GFX908-NEXT: v_accvgpr_read_b32 v4, a1 ; GFX908-NEXT: v_accvgpr_read_b32 v3, a1 -; GFX908-NEXT: v_accvgpr_write_b32 a0, v0 +; GFX908-NEXT: s_and_b32 s3, s5, s4 ; GFX908-NEXT: v_accvgpr_write_b32 a2, v4 ; GFX908-NEXT: v_accvgpr_write_b32 a3, v3 -; GFX908-NEXT: s_and_b32 s3, s5, s4 +; GFX908-NEXT: s_nop 0 ; GFX908-NEXT: v_mfma_f32_16x16x16f16 a[2:5], v[1:2], v[1:2], a[0:3] ; GFX908-NEXT: s_cbranch_execz .LBB0_4 ; GFX908-NEXT: .LBB0_2: ; %bb diff --git a/llvm/test/CodeGen/AMDGPU/pal-metadata-3.0-callable.ll b/llvm/test/CodeGen/AMDGPU/pal-metadata-3.0-callable.ll index 6b7d704..ede470b 100644 --- a/llvm/test/CodeGen/AMDGPU/pal-metadata-3.0-callable.ll +++ b/llvm/test/CodeGen/AMDGPU/pal-metadata-3.0-callable.ll @@ -1,13 +1,11 @@ ; RUN: llc -mtriple=amdgcn--amdpal -mcpu=gfx1100 < %s | FileCheck --check-prefixes=CHECK,GFX11 %s ; RUN: llc -mtriple=amdgcn--amdpal -mcpu=gfx1200 < %s | FileCheck --check-prefixes=CHECK,GFX12 %s -; RUN: llc -mtriple=amdgcn--amdpal -mcpu=gfx1200 -mattr=+dynamic-vgpr < %s | FileCheck --check-prefixes=CHECK,GFX12,DVGPR %s ; CHECK: .amdgpu_pal_metadata ; CHECK-NEXT: --- ; CHECK-NEXT: amdpal.pipelines: ; CHECK-NEXT: - .api: Vulkan ; CHECK-NEXT: .compute_registers: -; DVGPR-NEXT: .dynamic_vgpr_en: true ; CHECK-NEXT: .tg_size_en: true ; CHECK-NEXT: .tgid_x_en: false ; CHECK-NEXT: .tgid_y_en: false diff --git a/llvm/test/CodeGen/AMDGPU/pal-metadata-3.0.ll b/llvm/test/CodeGen/AMDGPU/pal-metadata-3.0.ll index 5c0c366..5325499 100644 --- a/llvm/test/CodeGen/AMDGPU/pal-metadata-3.0.ll +++ b/llvm/test/CodeGen/AMDGPU/pal-metadata-3.0.ll @@ -1,17 +1,14 @@ -; RUN: llc -mtriple=amdgcn--amdpal -mcpu=gfx1100 <%s | FileCheck %s --check-prefixes=CHECK,GFX11,NODVGPR -; RUN: llc -mtriple=amdgcn--amdpal -mcpu=gfx1200 <%s | FileCheck %s --check-prefixes=CHECK,NODVGPR -; RUN: llc -mtriple=amdgcn--amdpal -mcpu=gfx1200 -mattr=+dynamic-vgpr <%s | FileCheck %s --check-prefixes=CHECK,DVGPR +; RUN: llc -mtriple=amdgcn--amdpal -mcpu=gfx1100 <%s | FileCheck %s --check-prefixes=CHECK,GFX11 +; RUN: llc -mtriple=amdgcn--amdpal -mcpu=gfx1200 <%s | FileCheck %s --check-prefixes=CHECK ; CHECK-LABEL: {{^}}_amdgpu_cs_main: -; NODVGPR: ; TotalNumSgprs: 4 -; DVGPR: ; TotalNumSgprs: 34 +; CHECK: ; TotalNumSgprs: 4 ; CHECK: ; NumVgprs: 2 ; CHECK: .amdgpu_pal_metadata ; CHECK-NEXT: --- ; CHECK-NEXT: amdpal.pipelines: ; CHECK-NEXT: - .api: Vulkan ; CHECK-NEXT: .compute_registers: -; DVGPR-NEXT: .dynamic_vgpr_en: true ; 
CHECK-NEXT: .tg_size_en: true ; CHECK-NEXT: .tgid_x_en: false ; CHECK-NEXT: .tgid_y_en: false @@ -57,7 +54,6 @@ ; CHECK-NEXT: .cs: ; CHECK-NEXT: .checksum_value: 0x9444d7d0 ; CHECK-NEXT: .debug_mode: false -; DVGPR-NEXT: .dynamic_vgpr_saved_count: 0x70 ; CHECK-NEXT: .entry_point: _amdgpu_cs_main ; CHECK-NEXT: .entry_point_symbol: _amdgpu_cs_main ; CHECK-NEXT: .excp_en: 0 @@ -69,8 +65,7 @@ ; CHECK-NEXT: .mem_ordered: true ; CHECK-NEXT: .scratch_en: false ; CHECK-NEXT: .scratch_memory_size: 0 -; NODVGPR-NEXT: .sgpr_count: 0x4 -; DVGPR-NEXT: .sgpr_count: 0x22 +; CHECK-NEXT: .sgpr_count: 0x4 ; CHECK-NEXT: .sgpr_limit: 0x6a ; CHECK-NEXT: .threadgroup_dimensions: ; CHECK-NEXT: - 0x1 diff --git a/llvm/test/CodeGen/AMDGPU/pal-metadata-3.6-dvgpr.ll b/llvm/test/CodeGen/AMDGPU/pal-metadata-3.6-dvgpr.ll new file mode 100644 index 0000000..e598b0c --- /dev/null +++ b/llvm/test/CodeGen/AMDGPU/pal-metadata-3.6-dvgpr.ll @@ -0,0 +1,204 @@ +; RUN: llc -mtriple=amdgcn--amdpal -mcpu=gfx1200 <%s | FileCheck %s --check-prefixes=CHECK + +; CHECK-LABEL: {{^}}_amdgpu_cs_main: +; CHECK: ; TotalNumSgprs: 34 +; CHECK: ; NumVgprs: 2 +; CHECK: .amdgpu_pal_metadata +; CHECK-NEXT: --- +; CHECK-NEXT: amdpal.pipelines: +; CHECK-NEXT: - .api: Vulkan +; CHECK-NEXT: .compute_registers: +; CHECK-NEXT: .dynamic_vgpr_en: true +; CHECK-NEXT: .tg_size_en: true +; CHECK-NEXT: .tgid_x_en: false +; CHECK-NEXT: .tgid_y_en: false +; CHECK-NEXT: .tgid_z_en: false +; CHECK-NEXT: .tidig_comp_cnt: 0x1 +; CHECK-NEXT: .graphics_registers: +; CHECK-NEXT: .ps_extra_lds_size: 0 +; CHECK-NEXT: .spi_ps_input_addr: +; CHECK-NEXT: .ancillary_ena: false +; CHECK-NEXT: .front_face_ena: true +; CHECK-NEXT: .line_stipple_tex_ena: false +; CHECK-NEXT: .linear_center_ena: true +; CHECK-NEXT: .linear_centroid_ena: true +; CHECK-NEXT: .linear_sample_ena: true +; CHECK-NEXT: .persp_center_ena: true +; CHECK-NEXT: .persp_centroid_ena: true +; CHECK-NEXT: .persp_pull_model_ena: false +; CHECK-NEXT: .persp_sample_ena: true +; CHECK-NEXT: .pos_fixed_pt_ena: true +; CHECK-NEXT: .pos_w_float_ena: false +; CHECK-NEXT: .pos_x_float_ena: false +; CHECK-NEXT: .pos_y_float_ena: false +; CHECK-NEXT: .pos_z_float_ena: false +; CHECK-NEXT: .sample_coverage_ena: false +; CHECK-NEXT: .spi_ps_input_ena: +; CHECK-NEXT: .ancillary_ena: false +; CHECK-NEXT: .front_face_ena: false +; CHECK-NEXT: .line_stipple_tex_ena: false +; CHECK-NEXT: .linear_center_ena: false +; CHECK-NEXT: .linear_centroid_ena: false +; CHECK-NEXT: .linear_sample_ena: false +; CHECK-NEXT: .persp_center_ena: false +; CHECK-NEXT: .persp_centroid_ena: false +; CHECK-NEXT: .persp_pull_model_ena: false +; CHECK-NEXT: .persp_sample_ena: true +; CHECK-NEXT: .pos_fixed_pt_ena: false +; CHECK-NEXT: .pos_w_float_ena: false +; CHECK-NEXT: .pos_x_float_ena: false +; CHECK-NEXT: .pos_y_float_ena: false +; CHECK-NEXT: .pos_z_float_ena: false +; CHECK-NEXT: .sample_coverage_ena: false +; CHECK-NEXT: .hardware_stages: +; CHECK-NEXT: .cs: +; CHECK-NEXT: .checksum_value: 0x9444d7d0 +; CHECK-NEXT: .debug_mode: false +; CHECK-NEXT: .dynamic_vgpr_saved_count: 0x70 +; CHECK-NOT: .entry_point: _amdgpu_cs_main +; CHECK-NEXT: .entry_point_symbol: _amdgpu_cs_main +; CHECK-NEXT: .excp_en: 0 +; CHECK-NEXT: .float_mode: 0xc0 +; CHECK-NEXT: .forward_progress: true +; GFX11-NEXT: .ieee_mode: false +; CHECK-NEXT: .image_op: false +; CHECK-NEXT: .lds_size: 0 +; CHECK-NEXT: .mem_ordered: true +; CHECK-NEXT: .scratch_en: false +; CHECK-NEXT: .scratch_memory_size: 0 +; CHECK-NEXT: .sgpr_count: 0x22 +; CHECK-NEXT: .sgpr_limit: 0x6a +; 
CHECK-NEXT: .threadgroup_dimensions: +; CHECK-NEXT: - 0x1 +; CHECK-NEXT: - 0x400 +; CHECK-NEXT: - 0x1 +; CHECK-NEXT: .trap_present: false +; CHECK-NEXT: .user_data_reg_map: +; CHECK-NEXT: - 0x10000000 +; CHECK-NEXT: - 0xffffffff +; CHECK-NEXT: - 0 +; CHECK-NEXT: - 0xffffffff +; CHECK-NEXT: - 0xffffffff +; CHECK-NEXT: - 0xffffffff +; CHECK-NEXT: - 0xffffffff +; CHECK-NEXT: - 0xffffffff +; CHECK-NEXT: - 0xffffffff +; CHECK-NEXT: - 0xffffffff +; CHECK-NEXT: - 0xffffffff +; CHECK-NEXT: - 0xffffffff +; CHECK-NEXT: - 0xffffffff +; CHECK-NEXT: - 0xffffffff +; CHECK-NEXT: - 0xffffffff +; CHECK-NEXT: - 0xffffffff +; CHECK-NEXT: - 0xffffffff +; CHECK-NEXT: - 0xffffffff +; CHECK-NEXT: - 0xffffffff +; CHECK-NEXT: - 0xffffffff +; CHECK-NEXT: - 0xffffffff +; CHECK-NEXT: - 0xffffffff +; CHECK-NEXT: - 0xffffffff +; CHECK-NEXT: - 0xffffffff +; CHECK-NEXT: - 0xffffffff +; CHECK-NEXT: - 0xffffffff +; CHECK-NEXT: - 0xffffffff +; CHECK-NEXT: - 0xffffffff +; CHECK-NEXT: - 0xffffffff +; CHECK-NEXT: - 0xffffffff +; CHECK-NEXT: - 0xffffffff +; CHECK-NEXT: - 0xffffffff +; CHECK-NEXT: .user_sgprs: 0x3 +; CHECK-NEXT: .vgpr_count: 0x2 +; CHECK-NEXT: .vgpr_limit: 0x100 +; CHECK-NEXT: .wavefront_size: 0x40 +; CHECK-NEXT: .wgp_mode: false +; CHECK-NEXT: .gs: +; CHECK-NEXT: .debug_mode: false +; CHECK-NOT: .entry_point: _amdgpu_gs_main +; CHECK-NEXT: .entry_point_symbol: gs_shader +; CHECK-NEXT: .forward_progress: true +; GFX11-NEXT: .ieee_mode: false +; CHECK-NEXT: .lds_size: 0x200 +; CHECK-NEXT: .mem_ordered: true +; CHECK-NEXT: .scratch_en: false +; CHECK-NEXT: .scratch_memory_size: 0 +; CHECK-NEXT: .sgpr_count: 0x1 +; CHECK-NEXT: .vgpr_count: 0x1 +; CHECK-NEXT: .wgp_mode: true +; CHECK-NEXT: .hs: +; CHECK-NEXT: .debug_mode: false +; CHECK-NOT: .entry_point: _amdgpu_hs_main +; CHECK-NEXT: .entry_point_symbol: hs_shader +; CHECK-NEXT: .forward_progress: true +; GFX11-NEXT: .ieee_mode: false +; CHECK-NEXT: .lds_size: 0x1000 +; CHECK-NEXT: .mem_ordered: true +; CHECK-NEXT: .scratch_en: false +; CHECK-NEXT: .scratch_memory_size: 0 +; CHECK-NEXT: .sgpr_count: 0x1 +; CHECK-NEXT: .vgpr_count: 0x1 +; CHECK-NEXT: .wgp_mode: true +; CHECK-NEXT: .ps: +; CHECK-NEXT: .debug_mode: false +; CHECK-NOT: .entry_point: _amdgpu_ps_main +; CHECK-NEXT: .entry_point_symbol: ps_shader +; CHECK-NEXT: .forward_progress: true +; GFX11-NEXT: .ieee_mode: false +; CHECK-NEXT: .lds_size: 0 +; CHECK-NEXT: .mem_ordered: true +; CHECK-NEXT: .scratch_en: false +; CHECK-NEXT: .scratch_memory_size: 0 +; CHECK-NEXT: .sgpr_count: 0x1 +; CHECK-NEXT: .vgpr_count: 0x1 +; CHECK-NEXT: .wgp_mode: true +; CHECK: .registers: {} +; CHECK:amdpal.version: +; CHECK-NEXT: - 0x3 +; CHECK-NEXT: - 0x6 +; CHECK-NEXT:... 
+; CHECK-NEXT: .end_amdgpu_pal_metadata + +define dllexport amdgpu_cs void @_amdgpu_cs_main(i32 inreg %arg1, i32 %arg2) #0 !lgc.shaderstage !1 { +.entry: + %i = call i64 @llvm.amdgcn.s.getpc() + %i1 = and i64 %i, -4294967296 + %i2 = zext i32 %arg1 to i64 + %i3 = or i64 %i1, %i2 + %i4 = inttoptr i64 %i3 to ptr addrspace(4) + %i5 = and i32 %arg2, 1023 + %i6 = lshr i32 %arg2, 10 + %i7 = and i32 %i6, 1023 + %i8 = add nuw nsw i32 %i7, %i5 + %i9 = load <4 x i32>, ptr addrspace(4) %i4, align 16 + %.idx = shl nuw nsw i32 %i8, 2 + call void @llvm.amdgcn.raw.buffer.store.i32(i32 1, <4 x i32> %i9, i32 %.idx, i32 0, i32 0) + ret void +} + +define dllexport amdgpu_ps void @ps_shader() #1 { + ret void +} + +@LDS.GS = external addrspace(3) global [1 x i32], align 4 + +define dllexport amdgpu_gs void @gs_shader() { + %ptr = getelementptr i32, ptr addrspace(3) @LDS.GS, i32 0 + store i32 0, ptr addrspace(3) %ptr, align 4 + ret void +} + +@LDS.HS = external addrspace(3) global [1024 x i32], align 4 + +define dllexport amdgpu_hs void @hs_shader() { + %ptr = getelementptr i32, ptr addrspace(3) @LDS.HS, i32 0 + store i32 0, ptr addrspace(3) %ptr, align 4 + ret void +} + +!amdgpu.pal.metadata.msgpack = !{!0} + +attributes #0 = { nounwind memory(readwrite) "target-features"=",+wavefrontsize64,+cumode" "amdgpu-dynamic-vgpr-block-size"="16" } + +attributes #1 = { nounwind memory(readwrite) "InitialPSInputAddr"="36983" "amdgpu-dynamic-vgpr-block-size"="16" } + +!0 = !{!"\82\B0amdpal.pipelines\91\8A\A4.api\A6Vulkan\B2.compute_registers\85\AB.tg_size_en\C3\AA.tgid_x_en\C2\AA.tgid_y_en\C2\AA.tgid_z_en\C2\AF.tidig_comp_cnt\01\B0.hardware_stages\81\A3.cs\8C\AF.checksum_value\CE\94D\D7\D0\AB.debug_mode\00\AB.float_mode\CC\C0\A9.image_op\C2\AC.mem_ordered\C3\AB.sgpr_limitj\B7.threadgroup_dimensions\93\01\CD\04\00\01\AD.trap_present\00\B2.user_data_reg_map\DC\00 \CE\10\00\00\00\CE\FF\FF\FF\FF\00\CE\FF\FF\FF\FF\CE\FF\FF\FF\FF\CE\FF\FF\FF\FF\CE\FF\FF\FF\FF\CE\FF\FF\FF\FF\CE\FF\FF\FF\FF\CE\FF\FF\FF\FF\CE\FF\FF\FF\FF\CE\FF\FF\FF\FF\CE\FF\FF\FF\FF\CE\FF\FF\FF\FF\CE\FF\FF\FF\FF\CE\FF\FF\FF\FF\CE\FF\FF\FF\FF\CE\FF\FF\FF\FF\CE\FF\FF\FF\FF\CE\FF\FF\FF\FF\CE\FF\FF\FF\FF\CE\FF\FF\FF\FF\CE\FF\FF\FF\FF\CE\FF\FF\FF\FF\CE\FF\FF\FF\FF\CE\FF\FF\FF\FF\CE\FF\FF\FF\FF\CE\FF\FF\FF\FF\CE\FF\FF\FF\FF\CE\FF\FF\FF\FF\CE\FF\FF\FF\FF\CE\FF\FF\FF\FF\AB.user_sgprs\03\AB.vgpr_limit\CD\01\00\AF.wavefront_size@\B7.internal_pipeline_hash\92\CF\E7\10k\A6:\A6%\F7\CF\B2\1F\1A\D4{\DA\E1T\AA.registers\80\A8.shaders\81\A8.compute\82\B0.api_shader_hash\92\CF\E9Zn7}\1E\B9\E7\00\B1.hardware_mapping\91\A3.cs\B0.spill_threshold\CE\FF\FF\FF\FF\A5.type\A2Cs\B0.user_data_limit\01\AF.xgl_cache_info\82\B3.128_bit_cache_hash\92\CF\B4X\B8\11[\A4\88P\CF\A0;\B0\AF\FF\B4\BE\C0\AD.llpc_version\A461.1\AEamdpal.version\92\03\06"} +!1 = !{i32 7} diff --git a/llvm/test/CodeGen/AMDGPU/pal-metadata-3.6.ll b/llvm/test/CodeGen/AMDGPU/pal-metadata-3.6.ll index 830872a..d2f26e8 100644 --- a/llvm/test/CodeGen/AMDGPU/pal-metadata-3.6.ll +++ b/llvm/test/CodeGen/AMDGPU/pal-metadata-3.6.ll @@ -1,17 +1,14 @@ -; RUN: llc -mtriple=amdgcn--amdpal -mcpu=gfx1100 <%s | FileCheck %s --check-prefixes=CHECK,GFX11,NODVGPR -; RUN: llc -mtriple=amdgcn--amdpal -mcpu=gfx1200 <%s | FileCheck %s --check-prefixes=CHECK,NODVGPR -; RUN: llc -mtriple=amdgcn--amdpal -mcpu=gfx1200 -mattr=+dynamic-vgpr <%s | FileCheck %s --check-prefixes=CHECK,DVGPR +; RUN: llc -mtriple=amdgcn--amdpal -mcpu=gfx1100 <%s | FileCheck %s --check-prefixes=CHECK,GFX11 +; RUN: llc -mtriple=amdgcn--amdpal -mcpu=gfx1200 <%s | FileCheck %s 
--check-prefixes=CHECK ; CHECK-LABEL: {{^}}_amdgpu_cs_main: -; NODVGPR: ; TotalNumSgprs: 4 -; DVGPR: ; TotalNumSgprs: 34 +; CHECK: ; TotalNumSgprs: 4 ; CHECK: ; NumVgprs: 2 ; CHECK: .amdgpu_pal_metadata ; CHECK-NEXT: --- ; CHECK-NEXT: amdpal.pipelines: ; CHECK-NEXT: - .api: Vulkan ; CHECK-NEXT: .compute_registers: -; DVGPR-NEXT: .dynamic_vgpr_en: true ; CHECK-NEXT: .tg_size_en: true ; CHECK-NEXT: .tgid_x_en: false ; CHECK-NEXT: .tgid_y_en: false @@ -57,7 +54,6 @@ ; CHECK-NEXT: .cs: ; CHECK-NEXT: .checksum_value: 0x9444d7d0 ; CHECK-NEXT: .debug_mode: false -; DVGPR-NEXT: .dynamic_vgpr_saved_count: 0x70 ; CHECK-NOT: .entry_point: _amdgpu_cs_main ; CHECK-NEXT: .entry_point_symbol: _amdgpu_cs_main ; CHECK-NEXT: .excp_en: 0 @@ -69,8 +65,7 @@ ; CHECK-NEXT: .mem_ordered: true ; CHECK-NEXT: .scratch_en: false ; CHECK-NEXT: .scratch_memory_size: 0 -; NODVGPR-NEXT: .sgpr_count: 0x4 -; DVGPR-NEXT: .sgpr_count: 0x22 +; CHECK-NEXT: .sgpr_count: 0x4 ; CHECK-NEXT: .sgpr_limit: 0x6a ; CHECK-NEXT: .threadgroup_dimensions: ; CHECK-NEXT: - 0x1 diff --git a/llvm/test/CodeGen/AMDGPU/rewrite-vgpr-mfma-to-agpr.ll b/llvm/test/CodeGen/AMDGPU/rewrite-vgpr-mfma-to-agpr.ll index b9e9893..9a23788 100644 --- a/llvm/test/CodeGen/AMDGPU/rewrite-vgpr-mfma-to-agpr.ll +++ b/llvm/test/CodeGen/AMDGPU/rewrite-vgpr-mfma-to-agpr.ll @@ -369,7 +369,7 @@ define amdgpu_kernel void @illegal_mfma_after_rewrite() #1 { ; CHECK: ; %bb.0: ; %entry ; CHECK-NEXT: s_mov_b32 s0, 0 ; CHECK-NEXT: s_mov_b32 s1, s0 -; CHECK-NEXT: v_mov_b64_e32 v[8:9], s[0:1] +; CHECK-NEXT: v_mov_b64_e32 v[28:29], s[0:1] ; CHECK-NEXT: ;;#ASMSTART ; CHECK-NEXT: ; def s[0:3] ; CHECK-NEXT: ;;#ASMEND @@ -378,73 +378,66 @@ define amdgpu_kernel void @illegal_mfma_after_rewrite() #1 { ; CHECK-NEXT: v_mov_b64_e32 v[4:5], s[0:1] ; CHECK-NEXT: s_mov_b32 s0, 0x3c003c00 ; CHECK-NEXT: s_mov_b32 s1, s0 -; CHECK-NEXT: v_mfma_f32_16x16x16_f16 v[0:3], v[8:9], v[8:9], v[4:7] -; CHECK-NEXT: v_mov_b64_e32 v[12:13], s[0:1] +; CHECK-NEXT: v_mov_b64_e32 v[30:31], s[0:1] ; CHECK-NEXT: s_mov_b32 s0, 0x7e007e00 ; CHECK-NEXT: s_mov_b32 s1, s0 -; CHECK-NEXT: v_mov_b64_e32 v[10:11], s[0:1] -; CHECK-NEXT: v_mfma_f32_16x16x16_f16 v[14:17], v[8:9], v[12:13], v[4:7] -; CHECK-NEXT: s_nop 1 -; CHECK-NEXT: v_accvgpr_write_b32 a0, v0 -; CHECK-NEXT: v_mfma_f32_16x16x16_f16 v[18:21], v[8:9], v[10:11], v[4:7] -; CHECK-NEXT: v_accvgpr_write_b32 a1, v1 -; CHECK-NEXT: v_accvgpr_write_b32 a2, v2 -; CHECK-NEXT: v_accvgpr_write_b32 a3, v3 +; CHECK-NEXT: v_accvgpr_write_b32 a0, s0 +; CHECK-NEXT: v_accvgpr_write_b32 a1, s1 +; CHECK-NEXT: v_mfma_f32_16x16x16_f16 v[0:3], v[28:29], v[28:29], v[4:7] +; CHECK-NEXT: v_mfma_f32_16x16x16_f16 v[8:11], v[28:29], v[30:31], v[4:7] +; CHECK-NEXT: v_mfma_f32_16x16x16_f16 v[12:15], v[28:29], a[0:1], v[4:7] +; CHECK-NEXT: s_nop 2 ; CHECK-NEXT: v_mov_b32_e32 v4, 0x7fc00000 ; CHECK-NEXT: v_mov_b32_e32 v5, v4 ; CHECK-NEXT: v_mov_b32_e32 v6, v4 ; CHECK-NEXT: v_mov_b32_e32 v7, v4 -; CHECK-NEXT: v_mfma_f32_16x16x16_f16 v[14:17], v[8:9], v[8:9], v[14:17] +; CHECK-NEXT: v_mfma_f32_16x16x16_f16 v[8:11], v[28:29], v[28:29], v[8:11] ; CHECK-NEXT: s_nop 0 -; CHECK-NEXT: v_mfma_f32_16x16x16_f16 v[22:25], v[8:9], v[8:9], v[4:7] +; CHECK-NEXT: v_mfma_f32_16x16x16_f16 v[16:19], v[28:29], v[28:29], v[4:7] ; CHECK-NEXT: ;;#ASMSTART ; CHECK-NEXT: ; def v[4:7] ; CHECK-NEXT: ;;#ASMEND -; CHECK-NEXT: s_nop 0 -; CHECK-NEXT: v_mfma_f32_16x16x16_f16 v[0:3], v[8:9], v[12:13], v[4:7] -; CHECK-NEXT: v_mfma_f32_16x16x16_f16 v[26:29], v[8:9], v[8:9], v[4:7] -; CHECK-NEXT: v_mfma_f32_16x16x16_f16 v[0:3], 
v[8:9], v[8:9], v[0:3] -; CHECK-NEXT: v_mfma_f32_16x16x16_f16 v[22:25], v[8:9], v[8:9], v[22:25] -; CHECK-NEXT: v_mfma_f32_16x16x16_f16 v[4:7], v[8:9], v[8:9], v[26:29] +; CHECK-NEXT: v_mfma_f32_16x16x16_f16 v[16:19], v[28:29], v[28:29], v[16:19] +; CHECK-NEXT: v_mfma_f32_16x16x16_f16 v[24:27], v[28:29], v[30:31], v[4:7] ; CHECK-NEXT: s_nop 5 -; CHECK-NEXT: v_cvt_f16_f32_e32 v23, v14 -; CHECK-NEXT: v_mfma_f32_16x16x16_f16 v[14:17], v[8:9], v[8:9], v[18:21] -; CHECK-NEXT: v_mfma_f32_16x16x16_f16 v[0:3], v[12:13], v[8:9], v[0:3] -; CHECK-NEXT: s_nop 1 -; CHECK-NEXT: v_accvgpr_read_b32 v19, a3 -; CHECK-NEXT: v_accvgpr_read_b32 v18, a2 -; CHECK-NEXT: v_mov_b64_e32 v[20:21], 0 -; CHECK-NEXT: s_nop 0 -; CHECK-NEXT: v_accvgpr_read_b32 v17, a1 -; CHECK-NEXT: v_accvgpr_read_b32 v16, a0 -; CHECK-NEXT: v_cvt_f16_f32_e32 v15, v22 -; CHECK-NEXT: v_cvt_f16_f32_e32 v14, v14 -; CHECK-NEXT: v_mfma_f32_16x16x16_f16 v[16:19], v[8:9], v[8:9], v[16:19] -; CHECK-NEXT: v_cvt_f16_f32_e32 v12, v0 -; CHECK-NEXT: global_store_short v[20:21], v23, off +; CHECK-NEXT: v_cvt_f16_f32_e32 v17, v8 +; CHECK-NEXT: v_mfma_f32_16x16x16_f16 v[8:11], v[28:29], v[28:29], v[12:15] +; CHECK-NEXT: s_nop 2 +; CHECK-NEXT: v_mov_b64_e32 v[12:13], 0 +; CHECK-NEXT: v_mfma_f32_16x16x16_f16 v[0:3], v[28:29], v[28:29], v[0:3] +; CHECK-NEXT: global_store_short v[12:13], v17, off ; CHECK-NEXT: buffer_wbl2 sc0 sc1 ; CHECK-NEXT: s_waitcnt vmcnt(0) ; CHECK-NEXT: buffer_inv sc0 sc1 -; CHECK-NEXT: v_mfma_f32_16x16x16_f16 v[0:3], v[10:11], v[8:9], v[4:7] -; CHECK-NEXT: global_store_short v[20:21], v15, off +; CHECK-NEXT: v_cvt_f16_f32_e32 v9, v16 +; CHECK-NEXT: v_mfma_f32_16x16x16_f16 v[20:23], v[28:29], v[28:29], v[4:7] +; CHECK-NEXT: global_store_short v[12:13], v9, off +; CHECK-NEXT: v_cvt_f16_f32_e32 v1, v8 +; CHECK-NEXT: v_mfma_f32_16x16x16_f16 v[8:11], v[28:29], v[28:29], v[24:27] ; CHECK-NEXT: buffer_wbl2 sc0 sc1 ; CHECK-NEXT: s_waitcnt vmcnt(0) ; CHECK-NEXT: buffer_inv sc0 sc1 -; CHECK-NEXT: global_store_short v[20:21], v14, off -; CHECK-NEXT: v_cvt_f16_f32_e32 v14, v16 +; CHECK-NEXT: v_cvt_f16_f32_e32 v14, v0 +; CHECK-NEXT: global_store_short v[12:13], v1, off +; CHECK-NEXT: v_mfma_f32_16x16x16_f16 v[4:7], v[28:29], v[28:29], v[20:23] ; CHECK-NEXT: buffer_wbl2 sc0 sc1 ; CHECK-NEXT: s_waitcnt vmcnt(0) ; CHECK-NEXT: buffer_inv sc0 sc1 -; CHECK-NEXT: global_store_short v[20:21], v14, off -; CHECK-NEXT: v_cvt_f16_f32_e32 v0, v0 +; CHECK-NEXT: global_store_short v[12:13], v14, off ; CHECK-NEXT: buffer_wbl2 sc0 sc1 ; CHECK-NEXT: s_waitcnt vmcnt(0) ; CHECK-NEXT: buffer_inv sc0 sc1 -; CHECK-NEXT: global_store_short v[20:21], v12, off +; CHECK-NEXT: v_mfma_f32_16x16x16_f16 v[0:3], v[30:31], v[28:29], v[8:11] +; CHECK-NEXT: s_nop 6 +; CHECK-NEXT: v_cvt_f16_f32_e32 v8, v0 +; CHECK-NEXT: v_mfma_f32_16x16x16_f16 v[0:3], a[0:1], v[28:29], v[4:7] +; CHECK-NEXT: global_store_short v[12:13], v8, off ; CHECK-NEXT: buffer_wbl2 sc0 sc1 ; CHECK-NEXT: s_waitcnt vmcnt(0) ; CHECK-NEXT: buffer_inv sc0 sc1 -; CHECK-NEXT: global_store_short v[20:21], v0, off +; CHECK-NEXT: s_nop 2 +; CHECK-NEXT: v_cvt_f16_f32_e32 v0, v0 +; CHECK-NEXT: global_store_short v[12:13], v0, off ; CHECK-NEXT: s_endpgm entry: %k0 = call <4 x float> asm sideeffect "; def $0", "=s"() diff --git a/llvm/test/CodeGen/AMDGPU/smfmac_alloc_failure_no_agpr_O0.ll b/llvm/test/CodeGen/AMDGPU/smfmac_alloc_failure_no_agpr_O0.ll new file mode 100644 index 0000000..ba0fdc68 --- /dev/null +++ b/llvm/test/CodeGen/AMDGPU/smfmac_alloc_failure_no_agpr_O0.ll @@ -0,0 +1,119 @@ +; NOTE: Assertions have been 
autogenerated by utils/update_llc_test_checks.py UTC_ARGS: --version 6 +; RUN: llc -O0 -mtriple=amdgcn -mcpu=gfx950 -amdgpu-mfma-vgpr-form=0 < %s | FileCheck %s +; RUN: llc -O0 -mtriple=amdgcn -mcpu=gfx950 -amdgpu-mfma-vgpr-form=1 < %s | FileCheck %s + +declare <16 x float> @llvm.amdgcn.smfmac.f32.32x32x32.f16(<8 x half>, <16 x half>, <16 x float>, i32, i32 immarg, i32 immarg) + +define amdgpu_kernel void @test_smfmac_f32_32x32x32_f16__vgpr(ptr addrspace(1) %arg, <8 x half> %a, <16 x half> %b, i32 %idx) #0 { +; CHECK-LABEL: test_smfmac_f32_32x32x32_f16__vgpr: +; CHECK: ; %bb.0: ; %bb +; CHECK-NEXT: s_mov_b64 s[2:3], s[4:5] +; CHECK-NEXT: v_mov_b32_e32 v1, v0 +; CHECK-NEXT: v_mov_b32_e32 v0, 0 +; CHECK-NEXT: s_load_dwordx2 s[0:1], s[2:3], 0x24 +; CHECK-NEXT: s_load_dwordx4 s[12:15], s[2:3], 0x34 +; CHECK-NEXT: s_load_dwordx8 s[4:11], s[2:3], 0x44 +; CHECK-NEXT: s_nop 0 +; CHECK-NEXT: s_load_dword s2, s[2:3], 0x64 +; CHECK-NEXT: s_mov_b32 s3, 0x3ff +; CHECK-NEXT: v_and_b32_e64 v1, v1, s3 +; CHECK-NEXT: s_mov_b32 s3, 6 +; CHECK-NEXT: v_lshlrev_b32_e64 v8, s3, v1 +; CHECK-NEXT: s_waitcnt lgkmcnt(0) +; CHECK-NEXT: global_load_dwordx4 v[4:7], v8, s[0:1] offset:48 +; CHECK-NEXT: s_waitcnt vmcnt(0) +; CHECK-NEXT: v_mov_b32_e32 v1, v7 +; CHECK-NEXT: v_mov_b32_e32 v2, v6 +; CHECK-NEXT: v_mov_b32_e32 v3, v5 +; CHECK-NEXT: ; kill: def $vgpr4 killed $vgpr4 killed $vgpr4_vgpr5_vgpr6_vgpr7 killed $exec +; CHECK-NEXT: global_load_dwordx4 v[10:13], v8, s[0:1] offset:32 +; CHECK-NEXT: s_waitcnt vmcnt(0) +; CHECK-NEXT: v_mov_b32_e32 v5, v13 +; CHECK-NEXT: v_mov_b32_e32 v6, v12 +; CHECK-NEXT: v_mov_b32_e32 v7, v11 +; CHECK-NEXT: v_mov_b32_e32 v24, v10 +; CHECK-NEXT: global_load_dwordx4 v[10:13], v8, s[0:1] offset:16 +; CHECK-NEXT: s_waitcnt vmcnt(0) +; CHECK-NEXT: v_mov_b32_e32 v25, v13 +; CHECK-NEXT: v_mov_b32_e32 v26, v12 +; CHECK-NEXT: v_mov_b32_e32 v27, v11 +; CHECK-NEXT: v_mov_b32_e32 v28, v10 +; CHECK-NEXT: global_load_dwordx4 v[8:11], v8, s[0:1] +; CHECK-NEXT: s_waitcnt vmcnt(0) +; CHECK-NEXT: v_mov_b32_e32 v29, v11 +; CHECK-NEXT: v_mov_b32_e32 v30, v10 +; CHECK-NEXT: v_mov_b32_e32 v31, v9 +; CHECK-NEXT: ; kill: def $vgpr8 killed $vgpr8 killed $vgpr8_vgpr9_vgpr10_vgpr11 killed $exec +; CHECK-NEXT: ; kill: def $vgpr8 killed $vgpr8 def $vgpr8_vgpr9_vgpr10_vgpr11_vgpr12_vgpr13_vgpr14_vgpr15_vgpr16_vgpr17_vgpr18_vgpr19_vgpr20_vgpr21_vgpr22_vgpr23 killed $exec +; CHECK-NEXT: v_mov_b32_e32 v9, v31 +; CHECK-NEXT: v_mov_b32_e32 v10, v30 +; CHECK-NEXT: v_mov_b32_e32 v11, v29 +; CHECK-NEXT: v_mov_b32_e32 v12, v28 +; CHECK-NEXT: v_mov_b32_e32 v13, v27 +; CHECK-NEXT: v_mov_b32_e32 v14, v26 +; CHECK-NEXT: v_mov_b32_e32 v15, v25 +; CHECK-NEXT: v_mov_b32_e32 v16, v24 +; CHECK-NEXT: v_mov_b32_e32 v17, v7 +; CHECK-NEXT: v_mov_b32_e32 v18, v6 +; CHECK-NEXT: v_mov_b32_e32 v19, v5 +; CHECK-NEXT: v_mov_b32_e32 v20, v4 +; CHECK-NEXT: v_mov_b32_e32 v21, v3 +; CHECK-NEXT: v_mov_b32_e32 v22, v2 +; CHECK-NEXT: v_mov_b32_e32 v23, v1 +; CHECK-NEXT: v_mov_b64_e32 v[2:3], s[12:13] +; CHECK-NEXT: v_mov_b64_e32 v[4:5], s[14:15] +; CHECK-NEXT: v_mov_b64_e32 v[30:31], s[10:11] +; CHECK-NEXT: v_mov_b64_e32 v[28:29], s[8:9] +; CHECK-NEXT: v_mov_b64_e32 v[26:27], s[6:7] +; CHECK-NEXT: v_mov_b64_e32 v[24:25], s[4:5] +; CHECK-NEXT: v_mov_b32_e32 v1, s2 +; CHECK-NEXT: s_nop 1 +; CHECK-NEXT: v_smfmac_f32_32x32x32_f16 v[8:23], v[2:5], v[24:31], v1 cbsz:1 abid:2 +; CHECK-NEXT: s_nop 11 +; CHECK-NEXT: v_mov_b32_e32 v1, v23 +; CHECK-NEXT: v_mov_b32_e32 v6, v22 +; CHECK-NEXT: v_mov_b32_e32 v7, v21 +; CHECK-NEXT: v_mov_b32_e32 v2, v20 +; 
CHECK-NEXT: ; kill: def $vgpr2 killed $vgpr2 def $vgpr2_vgpr3_vgpr4_vgpr5 killed $exec +; CHECK-NEXT: v_mov_b32_e32 v3, v7 +; CHECK-NEXT: v_mov_b32_e32 v4, v6 +; CHECK-NEXT: v_mov_b32_e32 v5, v1 +; CHECK-NEXT: global_store_dwordx4 v0, v[2:5], s[0:1] offset:48 +; CHECK-NEXT: v_mov_b32_e32 v1, v19 +; CHECK-NEXT: v_mov_b32_e32 v6, v18 +; CHECK-NEXT: v_mov_b32_e32 v7, v17 +; CHECK-NEXT: v_mov_b32_e32 v2, v16 +; CHECK-NEXT: ; kill: def $vgpr2 killed $vgpr2 def $vgpr2_vgpr3_vgpr4_vgpr5 killed $exec +; CHECK-NEXT: v_mov_b32_e32 v3, v7 +; CHECK-NEXT: v_mov_b32_e32 v4, v6 +; CHECK-NEXT: v_mov_b32_e32 v5, v1 +; CHECK-NEXT: global_store_dwordx4 v0, v[2:5], s[0:1] offset:32 +; CHECK-NEXT: v_mov_b32_e32 v1, v15 +; CHECK-NEXT: v_mov_b32_e32 v6, v14 +; CHECK-NEXT: v_mov_b32_e32 v7, v13 +; CHECK-NEXT: v_mov_b32_e32 v2, v12 +; CHECK-NEXT: ; kill: def $vgpr2 killed $vgpr2 def $vgpr2_vgpr3_vgpr4_vgpr5 killed $exec +; CHECK-NEXT: v_mov_b32_e32 v3, v7 +; CHECK-NEXT: v_mov_b32_e32 v4, v6 +; CHECK-NEXT: v_mov_b32_e32 v5, v1 +; CHECK-NEXT: global_store_dwordx4 v0, v[2:5], s[0:1] offset:16 +; CHECK-NEXT: v_mov_b32_e32 v1, v11 +; CHECK-NEXT: v_mov_b32_e32 v6, v10 +; CHECK-NEXT: v_mov_b32_e32 v7, v9 +; CHECK-NEXT: v_mov_b32_e32 v2, v8 +; CHECK-NEXT: ; kill: def $vgpr2 killed $vgpr2 def $vgpr2_vgpr3_vgpr4_vgpr5 killed $exec +; CHECK-NEXT: v_mov_b32_e32 v3, v7 +; CHECK-NEXT: v_mov_b32_e32 v4, v6 +; CHECK-NEXT: v_mov_b32_e32 v5, v1 +; CHECK-NEXT: global_store_dwordx4 v0, v[2:5], s[0:1] +; CHECK-NEXT: s_endpgm +bb: + %id = call i32 @llvm.amdgcn.workitem.id.x() + %gep = getelementptr <16 x float>, ptr addrspace(1) %arg, i32 %id + %in.1 = load <16 x float>, ptr addrspace(1) %gep + %mai.1 = tail call <16 x float> @llvm.amdgcn.smfmac.f32.32x32x32.f16(<8 x half> %a, <16 x half> %b, <16 x float> %in.1, i32 %idx, i32 1, i32 2) + store <16 x float> %mai.1, ptr addrspace(1) %arg + ret void +} + +attributes #0 = { "amdgpu-flat-work-group-size"="1,256" "amdgpu-agpr-alloc"="0,0" } diff --git a/llvm/test/CodeGen/DirectX/ContainerData/RootSignature-DescriptorTable-Invalid-Flag-LargeNumber.ll b/llvm/test/CodeGen/DirectX/ContainerData/RootSignature-DescriptorTable-Invalid-Flag-LargeNumber.ll new file mode 100644 index 0000000..c27c87f --- /dev/null +++ b/llvm/test/CodeGen/DirectX/ContainerData/RootSignature-DescriptorTable-Invalid-Flag-LargeNumber.ll @@ -0,0 +1,20 @@ +; RUN: not opt -passes='print<dxil-root-signature>' %s -S -o - 2>&1 | FileCheck %s + +target triple = "dxil-unknown-shadermodel6.0-compute" + +; CHECK: error: Invalid value for DescriptorFlag: 66666 +; CHECK-NOT: Root Signature Definitions + +define void @main() #0 { +entry: + ret void +} +attributes #0 = { "hlsl.numthreads"="1,1,1" "hlsl.shader"="compute" } + + +!dx.rootsignatures = !{!2} ; list of function/root signature pairs +!2 = !{ ptr @main, !3, i32 2 } ; function, root signature +!3 = !{ !5 } ; list of root signature elements +!5 = !{ !"DescriptorTable", i32 0, !6, !7 } +!6 = !{ !"SRV", i32 1, i32 1, i32 0, i32 -1, i32 66666 } +!7 = !{ !"UAV", i32 5, i32 1, i32 10, i32 5, i32 2 } diff --git a/llvm/test/CodeGen/DirectX/ContainerData/RootSignature-RootDescriptor-Invalid-Flags-LargeNumber.ll b/llvm/test/CodeGen/DirectX/ContainerData/RootSignature-RootDescriptor-Invalid-Flags-LargeNumber.ll new file mode 100644 index 0000000..898e197 --- /dev/null +++ b/llvm/test/CodeGen/DirectX/ContainerData/RootSignature-RootDescriptor-Invalid-Flags-LargeNumber.ll @@ -0,0 +1,18 @@ +; RUN: not opt -passes='print<dxil-root-signature>' %s -S -o - 2>&1 | FileCheck %s + +target triple = 
"dxil-unknown-shadermodel6.0-compute" + + +; CHECK: error: Invalid value for RootDescriptorFlag: 666 +; CHECK-NOT: Root Signature Definitions +define void @main() #0 { +entry: + ret void +} +attributes #0 = { "hlsl.numthreads"="1,1,1" "hlsl.shader"="compute" } + + +!dx.rootsignatures = !{!2} ; list of function/root signature pairs +!2 = !{ ptr @main, !3, i32 2 } ; function, root signature +!3 = !{ !5 } ; list of root signature elements +!5 = !{ !"RootCBV", i32 0, i32 1, i32 2, i32 666 } diff --git a/llvm/test/CodeGen/NVPTX/wmma-ptx87-sm120a.py b/llvm/test/CodeGen/NVPTX/wmma-ptx87-sm120a.py index ae781df..40055ae 100644 --- a/llvm/test/CodeGen/NVPTX/wmma-ptx87-sm120a.py +++ b/llvm/test/CodeGen/NVPTX/wmma-ptx87-sm120a.py @@ -2,7 +2,7 @@ # RUN: %python %s --ptx=87 --gpu-arch=120 --aa > %t-ptx87-sm_120a.ll # RUN: llc < %t-ptx87-sm_120a.ll -mtriple=nvptx64 -mcpu=sm_120a -mattr=+ptx87 \ # RUN: | FileCheck %t-ptx87-sm_120a.ll -# RUN: %if ptxas-12.7 %{ \ +# RUN: %if ptxas-sm_120a && ptxas-isa-8.7 %{ \ # RUN: llc < %t-ptx87-sm_120a.ll -mtriple=nvptx64 -mcpu=sm_120a -mattr=+ptx87 \ # RUN: | %ptxas-verify -arch=sm_120a \ # RUN: %} diff --git a/llvm/test/CodeGen/NVPTX/wmma.py b/llvm/test/CodeGen/NVPTX/wmma.py index 6d73bce..8427ae4 100644 --- a/llvm/test/CodeGen/NVPTX/wmma.py +++ b/llvm/test/CodeGen/NVPTX/wmma.py @@ -90,6 +90,21 @@ class MMAFrag: "m16n8k32:b:s8": 2, "m16n8k32:c:s32": 4, "m16n8k32:d:s32": 4, + # e4m3/e5m2/e3m2/e2m3/e2m1 -> f16/f32 @ m16n8k16/m16n8k32 + "m16n8k16:a:e4m3": 2, + "m16n8k16:a:e5m2": 2, + "m16n8k32:a:e4m3": 4, + "m16n8k32:a:e5m2": 4, + "m16n8k32:a:e3m2": 4, + "m16n8k32:a:e2m3": 4, + "m16n8k32:a:e2m1": 4, + "m16n8k16:b:e4m3": 1, + "m16n8k16:b:e5m2": 1, + "m16n8k32:b:e4m3": 2, + "m16n8k32:b:e5m2": 2, + "m16n8k32:b:e3m2": 2, + "m16n8k32:b:e2m3": 2, + "m16n8k32:b:e2m1": 2, # mma sp "m16n8k32:a:bf16": 4, "m16n8k32:a:f16": 4, @@ -182,6 +197,18 @@ class MMAFrag: "m8n8k4:b:f64": 1, "m8n8k4:c:f64": 2, "m8n8k4:d:f64": 2, + "m16n8k4:a:f64": 2, + "m16n8k4:b:f64": 1, + "m16n8k4:c:f64": 4, + "m16n8k4:d:f64": 4, + "m16n8k8:a:f64": 4, + "m16n8k8:b:f64": 2, + "m16n8k8:c:f64": 4, + "m16n8k8:d:f64": 4, + "m16n8k16:a:f64": 8, + "m16n8k16:b:f64": 4, + "m16n8k16:c:f64": 4, + "m16n8k16:d:f64": 4, # tf32 -> s32 @ m16n16k8 "m16n16k8:a:tf32": 4, "m16n16k8:b:tf32": 4, @@ -324,7 +351,9 @@ def get_wmma_ops(): def get_mma_ops(): return ( - make_mma_ops(["m8n8k4"], ["f64"], [], ["f64"], []) + make_mma_ops( + ["m8n8k4", "m16n8k4", "m16n8k8", "m16n8k16"], ["f64"], [], ["f64"], [] + ) + make_mma_ops(["m16n8k4", "m16n8k8"], ["tf32"], [], ["f32"], []) + make_mma_ops(["m16n8k16", "m16n8k8"], ["bf16"], [], ["f32"], []) + make_mma_ops( @@ -341,6 +370,20 @@ def get_mma_ops(): ["m8n8k32", "m16n8k32", "m16n8k64"], ["s4", "u4"], ["s4", "u4"], ["s32"], [] ) + make_mma_ops(["m8n8k128", "m16n8k128", "m16n8k256"], ["b1"], [], ["s32"], []) + + make_mma_ops( + ["m16n8k16"], + ["e4m3", "e5m2"], + ["e4m3", "e5m2"], + ["f16", "f32"], + ["f16", "f32"], + ) + + make_mma_ops( + ["m16n8k32"], + ["e4m3", "e5m2", "e3m2", "e2m3", "e2m1"], + ["e4m3", "e5m2", "e3m2", "e2m3", "e2m1"], + ["f16", "f32"], + ["f16", "f32"], + ) ) @@ -492,7 +535,7 @@ def is_wmma_variant_supported(op, layout_a, layout_b, rnd, satf): return True -def is_mma_variant_supported(op, layout_a, layout_b, satf): +def is_mma_variant_supported(op, layout_a, layout_b, kind, satf): if not ( is_type_supported(op.a.mma_type.ptx_type) and is_mma_geom_supported(op.a.geom) ): @@ -516,13 +559,53 @@ def is_mma_variant_supported(op, layout_a, layout_b, satf): ): return False + 
if ( + op.a.geom != "m8n8k4" + and op.a.mma_type.ptx_type == "f64" + and (ptx_version < 78 or gpu_arch < 90) + ): + return False + # C and D type must be the same - if op.a.geom == "m16n8k16" and op.c.mma_type.ptx_type != op.d.mma_type.ptx_type: + if ( + op.a.geom in ["m16n8k16", "m16n8k32"] + and op.c.mma_type.ptx_type != op.d.mma_type.ptx_type + ): + return False + + if ( + op.a.geom in ["m16n8k16", "m16n8k32"] + and any( + x in ["e4m3", "e5m2"] + for x in (op.a.mma_type.ptx_type, op.b.mma_type.ptx_type) + ) + and ptx_version < 87 + ): + return False + + if kind != "" and not (ptx_version >= 87 and gpu_arch >= 120 and aa): + return False + + if kind != "" and ( + op.a.geom != "m16n8k32" + or op.a.mma_type.ptx_type not in ["e4m3", "e5m2", "e3m2", "e2m3", "e2m1"] + ): + return False + + if ( + kind == "" + and op.a.geom in ["m16n8k16", "m16n8k32"] + and any( + x in ["e3m2", "e2m3", "e2m1"] + for x in (op.a.mma_type.ptx_type, op.b.mma_type.ptx_type) + ) + ): return False # Require row/col layout for all MMA except m8n8k4 on FP16 if not (op.a.geom == "m8n8k4" and op.a.mma_type.ptx_type == "f16"): return layout_a == "row" and layout_b == "col" + return True @@ -937,7 +1020,12 @@ define ${ret_ty} @test_${function}( """ test_params = params - test_params["intrinsic"] = Template(intrinsic_template).substitute(params) + test_params["intrinsic"] = ( + Template(intrinsic_template) + .substitute(params) + .replace("::", ".") + .replace("_", ".") + ) test_params["function"] = test_params["intrinsic"].replace(".", "_") test_params["instruction"] = Template(instruction_template).substitute(params) test_params["ret_ty"] = make_wmma_ld_ret_ty(op.d) @@ -1002,16 +1090,20 @@ def gen_wmma_mma_tests(): def gen_mma_tests(): - mma_intrinsic_template = "llvm.nvvm.mma${b1op}.${geom}.${alayout}.${blayout}${satf}.${intrinsic_signature}" - mma_instruction_template = "mma.sync${aligned}.${geom}.${alayout}.${blayout}${satf}.${ptx_signature}${b1op}" + mma_intrinsic_template = "llvm.nvvm.mma${b1op}.${geom}.${alayout}.${blayout}${kind}${satf}.${intrinsic_signature}" + mma_instruction_template = "mma.sync${aligned}.${geom}.${alayout}.${blayout}${kind}${satf}.${ptx_signature}${b1op}" generated_items = [] - for op, alayout, blayout, satf in product( - get_mma_ops(), ["row", "col"], ["row", "col"], [".satfinite", ""] + for op, alayout, blayout, kind, satf in product( + get_mma_ops(), + ["row", "col"], + ["row", "col"], + ["", ".kind::f8f6f4"], + [".satfinite", ""], ): - if not is_mma_variant_supported(op, alayout, blayout, satf): + if not is_mma_variant_supported(op, alayout, blayout, kind, satf): continue for b1op in get_b1_ops(op.a.mma_type.ptx_type): @@ -1024,6 +1116,7 @@ def gen_mma_tests(): "satf": satf, "geom": op.a.geom, "b1op": b1op, + "kind": kind, } intrinsic_template = mma_intrinsic_template @@ -1105,9 +1198,9 @@ def is_mma_sp_variant_supported(op, metadata, kind, satf): ): return False - # C and D type must be the same for m16n8k16/m16n8k32 + # C and D type must be the same for m16n8k16/m16n8k32/m16n8k64 if ( - op.a.geom in ["m16n8k16", "m16n8k32"] + op.a.geom in ["m16n8k16", "m16n8k32", "m16n8k64"] and op.c.mma_type.ptx_type != op.d.mma_type.ptx_type ): return False diff --git a/llvm/test/CodeGen/PowerPC/vec-nmsub.ll b/llvm/test/CodeGen/PowerPC/vec-nmsub.ll new file mode 100644 index 0000000..8f4ac972 --- /dev/null +++ b/llvm/test/CodeGen/PowerPC/vec-nmsub.ll @@ -0,0 +1,36 @@ +; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py UTC_ARGS: --version 5 +; RUN: llc -verify-machineinstrs < 
%s -mcpu=pwr5 -mtriple=ppc32-- -mattr=+altivec | FileCheck %s + +define dso_local <4 x float> @intrinsic(<4 x float> noundef %a, <4 x float> noundef %b, <4 x float> noundef %c) local_unnamed_addr { +; CHECK-LABEL: intrinsic: +; CHECK: # %bb.0: # %entry +; CHECK-NEXT: vnmsubfp 2, 2, 3, 4 +; CHECK-NEXT: blr +entry: + %0 = tail call <4 x float> @llvm.ppc.altivec.vnmsubfp(<4 x float> %a, <4 x float> %b, <4 x float> %c) + ret <4 x float> %0 +} + +define <4 x float> @manual_llvm_fma(<4 x float> %a, <4 x float> %b, <4 x float> %c) unnamed_addr { +; CHECK-LABEL: manual_llvm_fma: +; CHECK: # %bb.0: # %start +; CHECK-NEXT: vnmsubfp 2, 2, 3, 4 +; CHECK-NEXT: blr +start: + %0 = fneg <4 x float> %c + %1 = tail call <4 x float> @llvm.fma.v4f32(<4 x float> %a, <4 x float> %b, <4 x float> %0) + %2 = fneg <4 x float> %1 + ret <4 x float> %2 +} + +define dso_local <4 x float> @manual_vmaddfp(<4 x float> noundef %a, <4 x float> noundef %b, <4 x float> noundef %c) local_unnamed_addr { +; CHECK-LABEL: manual_vmaddfp: +; CHECK: # %bb.0: # %entry +; CHECK-NEXT: vnmsubfp 2, 2, 3, 4 +; CHECK-NEXT: blr +entry: + %fneg.i3 = fneg <4 x float> %c + %0 = tail call <4 x float> @llvm.ppc.altivec.vmaddfp(<4 x float> %a, <4 x float> %b, <4 x float> %fneg.i3) + %fneg.i = fneg <4 x float> %0 + ret <4 x float> %fneg.i +} diff --git a/llvm/test/CodeGen/RISCV/GlobalISel/atomic-load-store-fp.ll b/llvm/test/CodeGen/RISCV/GlobalISel/atomic-load-store-fp.ll new file mode 100644 index 0000000..4ad2d2c --- /dev/null +++ b/llvm/test/CodeGen/RISCV/GlobalISel/atomic-load-store-fp.ll @@ -0,0 +1,950 @@ +; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py +; RUN: llc -mtriple=riscv32 -global-isel -mattr=+d -verify-machineinstrs < %s \ +; RUN: | FileCheck -check-prefix=RV32I %s +; RUN: llc -mtriple=riscv32 -global-isel -mattr=+d,+a,+no-trailing-seq-cst-fence \ +; RUN: -verify-machineinstrs < %s | FileCheck -check-prefixes=RV32IA,RV32IA-WMO %s +; RUN: llc -mtriple=riscv32 -global-isel -mattr=+d,+a,+ztso,+no-trailing-seq-cst-fence \ +; RUN: -verify-machineinstrs < %s | FileCheck -check-prefixes=RV32IA,RV32IA-TSO %s +; RUN: llc -mtriple=riscv64 -global-isel -verify-machineinstrs < %s \ +; RUN: | FileCheck -check-prefix=RV64I %s +; RUN: llc -mtriple=riscv64 -global-isel -mattr=+d,+a,+no-trailing-seq-cst-fence \ +; RUN: -verify-machineinstrs < %s | FileCheck -check-prefixes=RV64IA,RV64IA-WMO %s +; RUN: llc -mtriple=riscv64 -global-isel -mattr=+d,+a,+ztso,+no-trailing-seq-cst-fence \ +; RUN: -verify-machineinstrs < %s | FileCheck -check-prefixes=RV64IA,RV64IA-TSO %s + + +; RUN: llc -mtriple=riscv32 -global-isel -mattr=+d,+a -verify-machineinstrs < %s \ +; RUN: | FileCheck -check-prefixes=RV32IA,RV32IA-WMO-TRAILING-FENCE %s +; RUN: llc -mtriple=riscv32 -global-isel -mattr=+d,+a,+ztso -verify-machineinstrs < %s \ +; RUN: | FileCheck -check-prefixes=RV32IA,RV32IA-TSO-TRAILING-FENCE %s + +; RUN: llc -mtriple=riscv64 -global-isel -mattr=+d,+a -verify-machineinstrs < %s \ +; RUN: | FileCheck -check-prefixes=RV64IA,RV64IA-WMO-TRAILING-FENCE %s +; RUN: llc -mtriple=riscv64 -global-isel -mattr=+d,+a,+ztso -verify-machineinstrs < %s \ +; RUN: | FileCheck -check-prefixes=RV64IA,RV64IA-TSO-TRAILING-FENCE %s + + +define float @atomic_load_f32_unordered(ptr %a) nounwind { +; RV32I-LABEL: atomic_load_f32_unordered: +; RV32I: # %bb.0: +; RV32I-NEXT: addi sp, sp, -16 +; RV32I-NEXT: sw ra, 12(sp) # 4-byte Folded Spill +; RV32I-NEXT: li a1, 0 +; RV32I-NEXT: call __atomic_load_4 +; RV32I-NEXT: fmv.w.x fa0, a0 +; RV32I-NEXT: lw ra, 12(sp) # 
4-byte Folded Reload +; RV32I-NEXT: addi sp, sp, 16 +; RV32I-NEXT: ret +; +; RV32IA-LABEL: atomic_load_f32_unordered: +; RV32IA: # %bb.0: +; RV32IA-NEXT: lw a0, 0(a0) +; RV32IA-NEXT: fmv.w.x fa0, a0 +; RV32IA-NEXT: ret +; +; RV64I-LABEL: atomic_load_f32_unordered: +; RV64I: # %bb.0: +; RV64I-NEXT: addi sp, sp, -16 +; RV64I-NEXT: sd ra, 8(sp) # 8-byte Folded Spill +; RV64I-NEXT: li a1, 0 +; RV64I-NEXT: call __atomic_load_4 +; RV64I-NEXT: ld ra, 8(sp) # 8-byte Folded Reload +; RV64I-NEXT: addi sp, sp, 16 +; RV64I-NEXT: ret +; +; RV64IA-LABEL: atomic_load_f32_unordered: +; RV64IA: # %bb.0: +; RV64IA-NEXT: lw a0, 0(a0) +; RV64IA-NEXT: fmv.w.x fa0, a0 +; RV64IA-NEXT: ret + %1 = load atomic float, ptr %a unordered, align 4 + ret float %1 +} + +define float @atomic_load_f32_monotonic(ptr %a) nounwind { +; RV32I-LABEL: atomic_load_f32_monotonic: +; RV32I: # %bb.0: +; RV32I-NEXT: addi sp, sp, -16 +; RV32I-NEXT: sw ra, 12(sp) # 4-byte Folded Spill +; RV32I-NEXT: li a1, 0 +; RV32I-NEXT: call __atomic_load_4 +; RV32I-NEXT: fmv.w.x fa0, a0 +; RV32I-NEXT: lw ra, 12(sp) # 4-byte Folded Reload +; RV32I-NEXT: addi sp, sp, 16 +; RV32I-NEXT: ret +; +; RV32IA-LABEL: atomic_load_f32_monotonic: +; RV32IA: # %bb.0: +; RV32IA-NEXT: lw a0, 0(a0) +; RV32IA-NEXT: fmv.w.x fa0, a0 +; RV32IA-NEXT: ret +; +; RV64I-LABEL: atomic_load_f32_monotonic: +; RV64I: # %bb.0: +; RV64I-NEXT: addi sp, sp, -16 +; RV64I-NEXT: sd ra, 8(sp) # 8-byte Folded Spill +; RV64I-NEXT: li a1, 0 +; RV64I-NEXT: call __atomic_load_4 +; RV64I-NEXT: ld ra, 8(sp) # 8-byte Folded Reload +; RV64I-NEXT: addi sp, sp, 16 +; RV64I-NEXT: ret +; +; RV64IA-LABEL: atomic_load_f32_monotonic: +; RV64IA: # %bb.0: +; RV64IA-NEXT: lw a0, 0(a0) +; RV64IA-NEXT: fmv.w.x fa0, a0 +; RV64IA-NEXT: ret + %1 = load atomic float, ptr %a monotonic, align 4 + ret float %1 +} + +define float @atomic_load_f32_acquire(ptr %a) nounwind { +; RV32I-LABEL: atomic_load_f32_acquire: +; RV32I: # %bb.0: +; RV32I-NEXT: addi sp, sp, -16 +; RV32I-NEXT: sw ra, 12(sp) # 4-byte Folded Spill +; RV32I-NEXT: li a1, 2 +; RV32I-NEXT: call __atomic_load_4 +; RV32I-NEXT: fmv.w.x fa0, a0 +; RV32I-NEXT: lw ra, 12(sp) # 4-byte Folded Reload +; RV32I-NEXT: addi sp, sp, 16 +; RV32I-NEXT: ret +; +; RV32IA-WMO-LABEL: atomic_load_f32_acquire: +; RV32IA-WMO: # %bb.0: +; RV32IA-WMO-NEXT: lw a0, 0(a0) +; RV32IA-WMO-NEXT: fence r, rw +; RV32IA-WMO-NEXT: fmv.w.x fa0, a0 +; RV32IA-WMO-NEXT: ret +; +; RV32IA-TSO-LABEL: atomic_load_f32_acquire: +; RV32IA-TSO: # %bb.0: +; RV32IA-TSO-NEXT: lw a0, 0(a0) +; RV32IA-TSO-NEXT: fmv.w.x fa0, a0 +; RV32IA-TSO-NEXT: ret +; +; RV64I-LABEL: atomic_load_f32_acquire: +; RV64I: # %bb.0: +; RV64I-NEXT: addi sp, sp, -16 +; RV64I-NEXT: sd ra, 8(sp) # 8-byte Folded Spill +; RV64I-NEXT: li a1, 2 +; RV64I-NEXT: call __atomic_load_4 +; RV64I-NEXT: ld ra, 8(sp) # 8-byte Folded Reload +; RV64I-NEXT: addi sp, sp, 16 +; RV64I-NEXT: ret +; +; RV64IA-WMO-LABEL: atomic_load_f32_acquire: +; RV64IA-WMO: # %bb.0: +; RV64IA-WMO-NEXT: lw a0, 0(a0) +; RV64IA-WMO-NEXT: fence r, rw +; RV64IA-WMO-NEXT: fmv.w.x fa0, a0 +; RV64IA-WMO-NEXT: ret +; +; RV64IA-TSO-LABEL: atomic_load_f32_acquire: +; RV64IA-TSO: # %bb.0: +; RV64IA-TSO-NEXT: lw a0, 0(a0) +; RV64IA-TSO-NEXT: fmv.w.x fa0, a0 +; RV64IA-TSO-NEXT: ret +; +; RV32IA-WMO-TRAILING-FENCE-LABEL: atomic_load_f32_acquire: +; RV32IA-WMO-TRAILING-FENCE: # %bb.0: +; RV32IA-WMO-TRAILING-FENCE-NEXT: lw a0, 0(a0) +; RV32IA-WMO-TRAILING-FENCE-NEXT: fence r, rw +; RV32IA-WMO-TRAILING-FENCE-NEXT: fmv.w.x fa0, a0 +; RV32IA-WMO-TRAILING-FENCE-NEXT: ret +; +; 
RV32IA-TSO-TRAILING-FENCE-LABEL: atomic_load_f32_acquire: +; RV32IA-TSO-TRAILING-FENCE: # %bb.0: +; RV32IA-TSO-TRAILING-FENCE-NEXT: lw a0, 0(a0) +; RV32IA-TSO-TRAILING-FENCE-NEXT: fmv.w.x fa0, a0 +; RV32IA-TSO-TRAILING-FENCE-NEXT: ret +; +; RV64IA-WMO-TRAILING-FENCE-LABEL: atomic_load_f32_acquire: +; RV64IA-WMO-TRAILING-FENCE: # %bb.0: +; RV64IA-WMO-TRAILING-FENCE-NEXT: lw a0, 0(a0) +; RV64IA-WMO-TRAILING-FENCE-NEXT: fence r, rw +; RV64IA-WMO-TRAILING-FENCE-NEXT: fmv.w.x fa0, a0 +; RV64IA-WMO-TRAILING-FENCE-NEXT: ret +; +; RV64IA-TSO-TRAILING-FENCE-LABEL: atomic_load_f32_acquire: +; RV64IA-TSO-TRAILING-FENCE: # %bb.0: +; RV64IA-TSO-TRAILING-FENCE-NEXT: lw a0, 0(a0) +; RV64IA-TSO-TRAILING-FENCE-NEXT: fmv.w.x fa0, a0 +; RV64IA-TSO-TRAILING-FENCE-NEXT: ret + %1 = load atomic float, ptr %a acquire, align 4 + ret float %1 +} + +define float @atomic_load_f32_seq_cst(ptr %a) nounwind { +; RV32I-LABEL: atomic_load_f32_seq_cst: +; RV32I: # %bb.0: +; RV32I-NEXT: addi sp, sp, -16 +; RV32I-NEXT: sw ra, 12(sp) # 4-byte Folded Spill +; RV32I-NEXT: li a1, 5 +; RV32I-NEXT: call __atomic_load_4 +; RV32I-NEXT: fmv.w.x fa0, a0 +; RV32I-NEXT: lw ra, 12(sp) # 4-byte Folded Reload +; RV32I-NEXT: addi sp, sp, 16 +; RV32I-NEXT: ret +; +; RV32IA-WMO-LABEL: atomic_load_f32_seq_cst: +; RV32IA-WMO: # %bb.0: +; RV32IA-WMO-NEXT: fence rw, rw +; RV32IA-WMO-NEXT: lw a0, 0(a0) +; RV32IA-WMO-NEXT: fence r, rw +; RV32IA-WMO-NEXT: fmv.w.x fa0, a0 +; RV32IA-WMO-NEXT: ret +; +; RV32IA-TSO-LABEL: atomic_load_f32_seq_cst: +; RV32IA-TSO: # %bb.0: +; RV32IA-TSO-NEXT: fence rw, rw +; RV32IA-TSO-NEXT: lw a0, 0(a0) +; RV32IA-TSO-NEXT: fmv.w.x fa0, a0 +; RV32IA-TSO-NEXT: ret +; +; RV64I-LABEL: atomic_load_f32_seq_cst: +; RV64I: # %bb.0: +; RV64I-NEXT: addi sp, sp, -16 +; RV64I-NEXT: sd ra, 8(sp) # 8-byte Folded Spill +; RV64I-NEXT: li a1, 5 +; RV64I-NEXT: call __atomic_load_4 +; RV64I-NEXT: ld ra, 8(sp) # 8-byte Folded Reload +; RV64I-NEXT: addi sp, sp, 16 +; RV64I-NEXT: ret +; +; RV64IA-WMO-LABEL: atomic_load_f32_seq_cst: +; RV64IA-WMO: # %bb.0: +; RV64IA-WMO-NEXT: fence rw, rw +; RV64IA-WMO-NEXT: lw a0, 0(a0) +; RV64IA-WMO-NEXT: fence r, rw +; RV64IA-WMO-NEXT: fmv.w.x fa0, a0 +; RV64IA-WMO-NEXT: ret +; +; RV64IA-TSO-LABEL: atomic_load_f32_seq_cst: +; RV64IA-TSO: # %bb.0: +; RV64IA-TSO-NEXT: fence rw, rw +; RV64IA-TSO-NEXT: lw a0, 0(a0) +; RV64IA-TSO-NEXT: fmv.w.x fa0, a0 +; RV64IA-TSO-NEXT: ret +; +; RV32IA-WMO-TRAILING-FENCE-LABEL: atomic_load_f32_seq_cst: +; RV32IA-WMO-TRAILING-FENCE: # %bb.0: +; RV32IA-WMO-TRAILING-FENCE-NEXT: fence rw, rw +; RV32IA-WMO-TRAILING-FENCE-NEXT: lw a0, 0(a0) +; RV32IA-WMO-TRAILING-FENCE-NEXT: fence r, rw +; RV32IA-WMO-TRAILING-FENCE-NEXT: fmv.w.x fa0, a0 +; RV32IA-WMO-TRAILING-FENCE-NEXT: ret +; +; RV32IA-TSO-TRAILING-FENCE-LABEL: atomic_load_f32_seq_cst: +; RV32IA-TSO-TRAILING-FENCE: # %bb.0: +; RV32IA-TSO-TRAILING-FENCE-NEXT: fence rw, rw +; RV32IA-TSO-TRAILING-FENCE-NEXT: lw a0, 0(a0) +; RV32IA-TSO-TRAILING-FENCE-NEXT: fmv.w.x fa0, a0 +; RV32IA-TSO-TRAILING-FENCE-NEXT: ret +; +; RV64IA-WMO-TRAILING-FENCE-LABEL: atomic_load_f32_seq_cst: +; RV64IA-WMO-TRAILING-FENCE: # %bb.0: +; RV64IA-WMO-TRAILING-FENCE-NEXT: fence rw, rw +; RV64IA-WMO-TRAILING-FENCE-NEXT: lw a0, 0(a0) +; RV64IA-WMO-TRAILING-FENCE-NEXT: fence r, rw +; RV64IA-WMO-TRAILING-FENCE-NEXT: fmv.w.x fa0, a0 +; RV64IA-WMO-TRAILING-FENCE-NEXT: ret +; +; RV64IA-TSO-TRAILING-FENCE-LABEL: atomic_load_f32_seq_cst: +; RV64IA-TSO-TRAILING-FENCE: # %bb.0: +; RV64IA-TSO-TRAILING-FENCE-NEXT: fence rw, rw +; RV64IA-TSO-TRAILING-FENCE-NEXT: lw a0, 0(a0) 
+; RV64IA-TSO-TRAILING-FENCE-NEXT: fmv.w.x fa0, a0 +; RV64IA-TSO-TRAILING-FENCE-NEXT: ret + %1 = load atomic float, ptr %a seq_cst, align 4 + ret float %1 +} + +define double @atomic_load_f64_unordered(ptr %a) nounwind { +; RV32I-LABEL: atomic_load_f64_unordered: +; RV32I: # %bb.0: +; RV32I-NEXT: addi sp, sp, -16 +; RV32I-NEXT: sw ra, 12(sp) # 4-byte Folded Spill +; RV32I-NEXT: li a1, 0 +; RV32I-NEXT: call __atomic_load_8 +; RV32I-NEXT: sw a0, 0(sp) +; RV32I-NEXT: sw a1, 4(sp) +; RV32I-NEXT: fld fa0, 0(sp) +; RV32I-NEXT: lw ra, 12(sp) # 4-byte Folded Reload +; RV32I-NEXT: addi sp, sp, 16 +; RV32I-NEXT: ret +; +; RV32IA-LABEL: atomic_load_f64_unordered: +; RV32IA: # %bb.0: +; RV32IA-NEXT: addi sp, sp, -16 +; RV32IA-NEXT: sw ra, 12(sp) # 4-byte Folded Spill +; RV32IA-NEXT: li a1, 0 +; RV32IA-NEXT: call __atomic_load_8 +; RV32IA-NEXT: sw a0, 0(sp) +; RV32IA-NEXT: sw a1, 4(sp) +; RV32IA-NEXT: fld fa0, 0(sp) +; RV32IA-NEXT: lw ra, 12(sp) # 4-byte Folded Reload +; RV32IA-NEXT: addi sp, sp, 16 +; RV32IA-NEXT: ret +; +; RV64I-LABEL: atomic_load_f64_unordered: +; RV64I: # %bb.0: +; RV64I-NEXT: addi sp, sp, -16 +; RV64I-NEXT: sd ra, 8(sp) # 8-byte Folded Spill +; RV64I-NEXT: li a1, 0 +; RV64I-NEXT: call __atomic_load_8 +; RV64I-NEXT: ld ra, 8(sp) # 8-byte Folded Reload +; RV64I-NEXT: addi sp, sp, 16 +; RV64I-NEXT: ret +; +; RV64IA-LABEL: atomic_load_f64_unordered: +; RV64IA: # %bb.0: +; RV64IA-NEXT: ld a0, 0(a0) +; RV64IA-NEXT: fmv.d.x fa0, a0 +; RV64IA-NEXT: ret + %1 = load atomic double, ptr %a unordered, align 8 + ret double %1 +} + +define double @atomic_load_f64_monotonic(ptr %a) nounwind { +; RV32I-LABEL: atomic_load_f64_monotonic: +; RV32I: # %bb.0: +; RV32I-NEXT: addi sp, sp, -16 +; RV32I-NEXT: sw ra, 12(sp) # 4-byte Folded Spill +; RV32I-NEXT: li a1, 0 +; RV32I-NEXT: call __atomic_load_8 +; RV32I-NEXT: sw a0, 0(sp) +; RV32I-NEXT: sw a1, 4(sp) +; RV32I-NEXT: fld fa0, 0(sp) +; RV32I-NEXT: lw ra, 12(sp) # 4-byte Folded Reload +; RV32I-NEXT: addi sp, sp, 16 +; RV32I-NEXT: ret +; +; RV32IA-LABEL: atomic_load_f64_monotonic: +; RV32IA: # %bb.0: +; RV32IA-NEXT: addi sp, sp, -16 +; RV32IA-NEXT: sw ra, 12(sp) # 4-byte Folded Spill +; RV32IA-NEXT: li a1, 0 +; RV32IA-NEXT: call __atomic_load_8 +; RV32IA-NEXT: sw a0, 0(sp) +; RV32IA-NEXT: sw a1, 4(sp) +; RV32IA-NEXT: fld fa0, 0(sp) +; RV32IA-NEXT: lw ra, 12(sp) # 4-byte Folded Reload +; RV32IA-NEXT: addi sp, sp, 16 +; RV32IA-NEXT: ret +; +; RV64I-LABEL: atomic_load_f64_monotonic: +; RV64I: # %bb.0: +; RV64I-NEXT: addi sp, sp, -16 +; RV64I-NEXT: sd ra, 8(sp) # 8-byte Folded Spill +; RV64I-NEXT: li a1, 0 +; RV64I-NEXT: call __atomic_load_8 +; RV64I-NEXT: ld ra, 8(sp) # 8-byte Folded Reload +; RV64I-NEXT: addi sp, sp, 16 +; RV64I-NEXT: ret +; +; RV64IA-LABEL: atomic_load_f64_monotonic: +; RV64IA: # %bb.0: +; RV64IA-NEXT: ld a0, 0(a0) +; RV64IA-NEXT: fmv.d.x fa0, a0 +; RV64IA-NEXT: ret + %1 = load atomic double, ptr %a monotonic, align 8 + ret double %1 +} + +define double @atomic_load_f64_acquire(ptr %a) nounwind { +; RV32I-LABEL: atomic_load_f64_acquire: +; RV32I: # %bb.0: +; RV32I-NEXT: addi sp, sp, -16 +; RV32I-NEXT: sw ra, 12(sp) # 4-byte Folded Spill +; RV32I-NEXT: li a1, 2 +; RV32I-NEXT: call __atomic_load_8 +; RV32I-NEXT: sw a0, 0(sp) +; RV32I-NEXT: sw a1, 4(sp) +; RV32I-NEXT: fld fa0, 0(sp) +; RV32I-NEXT: lw ra, 12(sp) # 4-byte Folded Reload +; RV32I-NEXT: addi sp, sp, 16 +; RV32I-NEXT: ret +; +; RV32IA-LABEL: atomic_load_f64_acquire: +; RV32IA: # %bb.0: +; RV32IA-NEXT: addi sp, sp, -16 +; RV32IA-NEXT: sw ra, 12(sp) # 4-byte Folded Spill +; 
RV32IA-NEXT: li a1, 2 +; RV32IA-NEXT: call __atomic_load_8 +; RV32IA-NEXT: sw a0, 0(sp) +; RV32IA-NEXT: sw a1, 4(sp) +; RV32IA-NEXT: fld fa0, 0(sp) +; RV32IA-NEXT: lw ra, 12(sp) # 4-byte Folded Reload +; RV32IA-NEXT: addi sp, sp, 16 +; RV32IA-NEXT: ret +; +; RV64I-LABEL: atomic_load_f64_acquire: +; RV64I: # %bb.0: +; RV64I-NEXT: addi sp, sp, -16 +; RV64I-NEXT: sd ra, 8(sp) # 8-byte Folded Spill +; RV64I-NEXT: li a1, 2 +; RV64I-NEXT: call __atomic_load_8 +; RV64I-NEXT: ld ra, 8(sp) # 8-byte Folded Reload +; RV64I-NEXT: addi sp, sp, 16 +; RV64I-NEXT: ret +; +; RV64IA-WMO-LABEL: atomic_load_f64_acquire: +; RV64IA-WMO: # %bb.0: +; RV64IA-WMO-NEXT: ld a0, 0(a0) +; RV64IA-WMO-NEXT: fence r, rw +; RV64IA-WMO-NEXT: fmv.d.x fa0, a0 +; RV64IA-WMO-NEXT: ret +; +; RV64IA-TSO-LABEL: atomic_load_f64_acquire: +; RV64IA-TSO: # %bb.0: +; RV64IA-TSO-NEXT: ld a0, 0(a0) +; RV64IA-TSO-NEXT: fmv.d.x fa0, a0 +; RV64IA-TSO-NEXT: ret +; +; RV64IA-WMO-TRAILING-FENCE-LABEL: atomic_load_f64_acquire: +; RV64IA-WMO-TRAILING-FENCE: # %bb.0: +; RV64IA-WMO-TRAILING-FENCE-NEXT: ld a0, 0(a0) +; RV64IA-WMO-TRAILING-FENCE-NEXT: fence r, rw +; RV64IA-WMO-TRAILING-FENCE-NEXT: fmv.d.x fa0, a0 +; RV64IA-WMO-TRAILING-FENCE-NEXT: ret +; +; RV64IA-TSO-TRAILING-FENCE-LABEL: atomic_load_f64_acquire: +; RV64IA-TSO-TRAILING-FENCE: # %bb.0: +; RV64IA-TSO-TRAILING-FENCE-NEXT: ld a0, 0(a0) +; RV64IA-TSO-TRAILING-FENCE-NEXT: fmv.d.x fa0, a0 +; RV64IA-TSO-TRAILING-FENCE-NEXT: ret + %1 = load atomic double, ptr %a acquire, align 8 + ret double %1 +} + +define double @atomic_load_f64_seq_cst(ptr %a) nounwind { +; RV32I-LABEL: atomic_load_f64_seq_cst: +; RV32I: # %bb.0: +; RV32I-NEXT: addi sp, sp, -16 +; RV32I-NEXT: sw ra, 12(sp) # 4-byte Folded Spill +; RV32I-NEXT: li a1, 5 +; RV32I-NEXT: call __atomic_load_8 +; RV32I-NEXT: sw a0, 0(sp) +; RV32I-NEXT: sw a1, 4(sp) +; RV32I-NEXT: fld fa0, 0(sp) +; RV32I-NEXT: lw ra, 12(sp) # 4-byte Folded Reload +; RV32I-NEXT: addi sp, sp, 16 +; RV32I-NEXT: ret +; +; RV32IA-LABEL: atomic_load_f64_seq_cst: +; RV32IA: # %bb.0: +; RV32IA-NEXT: addi sp, sp, -16 +; RV32IA-NEXT: sw ra, 12(sp) # 4-byte Folded Spill +; RV32IA-NEXT: li a1, 5 +; RV32IA-NEXT: call __atomic_load_8 +; RV32IA-NEXT: sw a0, 0(sp) +; RV32IA-NEXT: sw a1, 4(sp) +; RV32IA-NEXT: fld fa0, 0(sp) +; RV32IA-NEXT: lw ra, 12(sp) # 4-byte Folded Reload +; RV32IA-NEXT: addi sp, sp, 16 +; RV32IA-NEXT: ret +; +; RV64I-LABEL: atomic_load_f64_seq_cst: +; RV64I: # %bb.0: +; RV64I-NEXT: addi sp, sp, -16 +; RV64I-NEXT: sd ra, 8(sp) # 8-byte Folded Spill +; RV64I-NEXT: li a1, 5 +; RV64I-NEXT: call __atomic_load_8 +; RV64I-NEXT: ld ra, 8(sp) # 8-byte Folded Reload +; RV64I-NEXT: addi sp, sp, 16 +; RV64I-NEXT: ret +; +; RV64IA-WMO-LABEL: atomic_load_f64_seq_cst: +; RV64IA-WMO: # %bb.0: +; RV64IA-WMO-NEXT: fence rw, rw +; RV64IA-WMO-NEXT: ld a0, 0(a0) +; RV64IA-WMO-NEXT: fence r, rw +; RV64IA-WMO-NEXT: fmv.d.x fa0, a0 +; RV64IA-WMO-NEXT: ret +; +; RV64IA-TSO-LABEL: atomic_load_f64_seq_cst: +; RV64IA-TSO: # %bb.0: +; RV64IA-TSO-NEXT: fence rw, rw +; RV64IA-TSO-NEXT: ld a0, 0(a0) +; RV64IA-TSO-NEXT: fmv.d.x fa0, a0 +; RV64IA-TSO-NEXT: ret +; +; RV64IA-WMO-TRAILING-FENCE-LABEL: atomic_load_f64_seq_cst: +; RV64IA-WMO-TRAILING-FENCE: # %bb.0: +; RV64IA-WMO-TRAILING-FENCE-NEXT: fence rw, rw +; RV64IA-WMO-TRAILING-FENCE-NEXT: ld a0, 0(a0) +; RV64IA-WMO-TRAILING-FENCE-NEXT: fence r, rw +; RV64IA-WMO-TRAILING-FENCE-NEXT: fmv.d.x fa0, a0 +; RV64IA-WMO-TRAILING-FENCE-NEXT: ret +; +; RV64IA-TSO-TRAILING-FENCE-LABEL: atomic_load_f64_seq_cst: +; RV64IA-TSO-TRAILING-FENCE: # %bb.0: 
+; RV64IA-TSO-TRAILING-FENCE-NEXT: fence rw, rw +; RV64IA-TSO-TRAILING-FENCE-NEXT: ld a0, 0(a0) +; RV64IA-TSO-TRAILING-FENCE-NEXT: fmv.d.x fa0, a0 +; RV64IA-TSO-TRAILING-FENCE-NEXT: ret + %1 = load atomic double, ptr %a seq_cst, align 8 + ret double %1 +} + +define void @atomic_store_f32_unordered(ptr %a, float %b) nounwind { +; RV32I-LABEL: atomic_store_f32_unordered: +; RV32I: # %bb.0: +; RV32I-NEXT: addi sp, sp, -16 +; RV32I-NEXT: sw ra, 12(sp) # 4-byte Folded Spill +; RV32I-NEXT: fmv.x.w a1, fa0 +; RV32I-NEXT: li a2, 0 +; RV32I-NEXT: call __atomic_store_4 +; RV32I-NEXT: lw ra, 12(sp) # 4-byte Folded Reload +; RV32I-NEXT: addi sp, sp, 16 +; RV32I-NEXT: ret +; +; RV32IA-LABEL: atomic_store_f32_unordered: +; RV32IA: # %bb.0: +; RV32IA-NEXT: fmv.x.w a1, fa0 +; RV32IA-NEXT: sw a1, 0(a0) +; RV32IA-NEXT: ret +; +; RV64I-LABEL: atomic_store_f32_unordered: +; RV64I: # %bb.0: +; RV64I-NEXT: addi sp, sp, -16 +; RV64I-NEXT: sd ra, 8(sp) # 8-byte Folded Spill +; RV64I-NEXT: li a2, 0 +; RV64I-NEXT: call __atomic_store_4 +; RV64I-NEXT: ld ra, 8(sp) # 8-byte Folded Reload +; RV64I-NEXT: addi sp, sp, 16 +; RV64I-NEXT: ret +; +; RV64IA-LABEL: atomic_store_f32_unordered: +; RV64IA: # %bb.0: +; RV64IA-NEXT: fmv.x.w a1, fa0 +; RV64IA-NEXT: sw a1, 0(a0) +; RV64IA-NEXT: ret + store atomic float %b, ptr %a unordered, align 4 + ret void +} + +define void @atomic_store_f32_monotonic(ptr %a, float %b) nounwind { +; RV32I-LABEL: atomic_store_f32_monotonic: +; RV32I: # %bb.0: +; RV32I-NEXT: addi sp, sp, -16 +; RV32I-NEXT: sw ra, 12(sp) # 4-byte Folded Spill +; RV32I-NEXT: fmv.x.w a1, fa0 +; RV32I-NEXT: li a2, 0 +; RV32I-NEXT: call __atomic_store_4 +; RV32I-NEXT: lw ra, 12(sp) # 4-byte Folded Reload +; RV32I-NEXT: addi sp, sp, 16 +; RV32I-NEXT: ret +; +; RV32IA-LABEL: atomic_store_f32_monotonic: +; RV32IA: # %bb.0: +; RV32IA-NEXT: fmv.x.w a1, fa0 +; RV32IA-NEXT: sw a1, 0(a0) +; RV32IA-NEXT: ret +; +; RV64I-LABEL: atomic_store_f32_monotonic: +; RV64I: # %bb.0: +; RV64I-NEXT: addi sp, sp, -16 +; RV64I-NEXT: sd ra, 8(sp) # 8-byte Folded Spill +; RV64I-NEXT: li a2, 0 +; RV64I-NEXT: call __atomic_store_4 +; RV64I-NEXT: ld ra, 8(sp) # 8-byte Folded Reload +; RV64I-NEXT: addi sp, sp, 16 +; RV64I-NEXT: ret +; +; RV64IA-LABEL: atomic_store_f32_monotonic: +; RV64IA: # %bb.0: +; RV64IA-NEXT: fmv.x.w a1, fa0 +; RV64IA-NEXT: sw a1, 0(a0) +; RV64IA-NEXT: ret + store atomic float %b, ptr %a monotonic, align 4 + ret void +} + +define void @atomic_store_f32_release(ptr %a, float %b) nounwind { +; RV32I-LABEL: atomic_store_f32_release: +; RV32I: # %bb.0: +; RV32I-NEXT: addi sp, sp, -16 +; RV32I-NEXT: sw ra, 12(sp) # 4-byte Folded Spill +; RV32I-NEXT: li a2, 3 +; RV32I-NEXT: fmv.x.w a1, fa0 +; RV32I-NEXT: call __atomic_store_4 +; RV32I-NEXT: lw ra, 12(sp) # 4-byte Folded Reload +; RV32I-NEXT: addi sp, sp, 16 +; RV32I-NEXT: ret +; +; RV32IA-WMO-LABEL: atomic_store_f32_release: +; RV32IA-WMO: # %bb.0: +; RV32IA-WMO-NEXT: fence rw, w +; RV32IA-WMO-NEXT: fmv.x.w a1, fa0 +; RV32IA-WMO-NEXT: sw a1, 0(a0) +; RV32IA-WMO-NEXT: ret +; +; RV32IA-TSO-LABEL: atomic_store_f32_release: +; RV32IA-TSO: # %bb.0: +; RV32IA-TSO-NEXT: fmv.x.w a1, fa0 +; RV32IA-TSO-NEXT: sw a1, 0(a0) +; RV32IA-TSO-NEXT: ret +; +; RV64I-LABEL: atomic_store_f32_release: +; RV64I: # %bb.0: +; RV64I-NEXT: addi sp, sp, -16 +; RV64I-NEXT: sd ra, 8(sp) # 8-byte Folded Spill +; RV64I-NEXT: li a2, 3 +; RV64I-NEXT: call __atomic_store_4 +; RV64I-NEXT: ld ra, 8(sp) # 8-byte Folded Reload +; RV64I-NEXT: addi sp, sp, 16 +; RV64I-NEXT: ret +; +; RV64IA-WMO-LABEL: 
atomic_store_f32_release: +; RV64IA-WMO: # %bb.0: +; RV64IA-WMO-NEXT: fence rw, w +; RV64IA-WMO-NEXT: fmv.x.w a1, fa0 +; RV64IA-WMO-NEXT: sw a1, 0(a0) +; RV64IA-WMO-NEXT: ret +; +; RV64IA-TSO-LABEL: atomic_store_f32_release: +; RV64IA-TSO: # %bb.0: +; RV64IA-TSO-NEXT: fmv.x.w a1, fa0 +; RV64IA-TSO-NEXT: sw a1, 0(a0) +; RV64IA-TSO-NEXT: ret +; +; RV32IA-WMO-TRAILING-FENCE-LABEL: atomic_store_f32_release: +; RV32IA-WMO-TRAILING-FENCE: # %bb.0: +; RV32IA-WMO-TRAILING-FENCE-NEXT: fence rw, w +; RV32IA-WMO-TRAILING-FENCE-NEXT: fmv.x.w a1, fa0 +; RV32IA-WMO-TRAILING-FENCE-NEXT: sw a1, 0(a0) +; RV32IA-WMO-TRAILING-FENCE-NEXT: ret +; +; RV32IA-TSO-TRAILING-FENCE-LABEL: atomic_store_f32_release: +; RV32IA-TSO-TRAILING-FENCE: # %bb.0: +; RV32IA-TSO-TRAILING-FENCE-NEXT: fmv.x.w a1, fa0 +; RV32IA-TSO-TRAILING-FENCE-NEXT: sw a1, 0(a0) +; RV32IA-TSO-TRAILING-FENCE-NEXT: ret +; +; RV64IA-WMO-TRAILING-FENCE-LABEL: atomic_store_f32_release: +; RV64IA-WMO-TRAILING-FENCE: # %bb.0: +; RV64IA-WMO-TRAILING-FENCE-NEXT: fence rw, w +; RV64IA-WMO-TRAILING-FENCE-NEXT: fmv.x.w a1, fa0 +; RV64IA-WMO-TRAILING-FENCE-NEXT: sw a1, 0(a0) +; RV64IA-WMO-TRAILING-FENCE-NEXT: ret +; +; RV64IA-TSO-TRAILING-FENCE-LABEL: atomic_store_f32_release: +; RV64IA-TSO-TRAILING-FENCE: # %bb.0: +; RV64IA-TSO-TRAILING-FENCE-NEXT: fmv.x.w a1, fa0 +; RV64IA-TSO-TRAILING-FENCE-NEXT: sw a1, 0(a0) +; RV64IA-TSO-TRAILING-FENCE-NEXT: ret + store atomic float %b, ptr %a release, align 4 + ret void +} + +define void @atomic_store_f32_seq_cst(ptr %a, float %b) nounwind { +; RV32I-LABEL: atomic_store_f32_seq_cst: +; RV32I: # %bb.0: +; RV32I-NEXT: addi sp, sp, -16 +; RV32I-NEXT: sw ra, 12(sp) # 4-byte Folded Spill +; RV32I-NEXT: li a2, 5 +; RV32I-NEXT: fmv.x.w a1, fa0 +; RV32I-NEXT: call __atomic_store_4 +; RV32I-NEXT: lw ra, 12(sp) # 4-byte Folded Reload +; RV32I-NEXT: addi sp, sp, 16 +; RV32I-NEXT: ret +; +; RV32IA-WMO-LABEL: atomic_store_f32_seq_cst: +; RV32IA-WMO: # %bb.0: +; RV32IA-WMO-NEXT: fence rw, w +; RV32IA-WMO-NEXT: fmv.x.w a1, fa0 +; RV32IA-WMO-NEXT: sw a1, 0(a0) +; RV32IA-WMO-NEXT: ret +; +; RV32IA-TSO-LABEL: atomic_store_f32_seq_cst: +; RV32IA-TSO: # %bb.0: +; RV32IA-TSO-NEXT: fmv.x.w a1, fa0 +; RV32IA-TSO-NEXT: sw a1, 0(a0) +; RV32IA-TSO-NEXT: fence rw, rw +; RV32IA-TSO-NEXT: ret +; +; RV64I-LABEL: atomic_store_f32_seq_cst: +; RV64I: # %bb.0: +; RV64I-NEXT: addi sp, sp, -16 +; RV64I-NEXT: sd ra, 8(sp) # 8-byte Folded Spill +; RV64I-NEXT: li a2, 5 +; RV64I-NEXT: call __atomic_store_4 +; RV64I-NEXT: ld ra, 8(sp) # 8-byte Folded Reload +; RV64I-NEXT: addi sp, sp, 16 +; RV64I-NEXT: ret +; +; RV64IA-WMO-LABEL: atomic_store_f32_seq_cst: +; RV64IA-WMO: # %bb.0: +; RV64IA-WMO-NEXT: fence rw, w +; RV64IA-WMO-NEXT: fmv.x.w a1, fa0 +; RV64IA-WMO-NEXT: sw a1, 0(a0) +; RV64IA-WMO-NEXT: ret +; +; RV64IA-TSO-LABEL: atomic_store_f32_seq_cst: +; RV64IA-TSO: # %bb.0: +; RV64IA-TSO-NEXT: fmv.x.w a1, fa0 +; RV64IA-TSO-NEXT: sw a1, 0(a0) +; RV64IA-TSO-NEXT: fence rw, rw +; RV64IA-TSO-NEXT: ret +; +; RV32IA-WMO-TRAILING-FENCE-LABEL: atomic_store_f32_seq_cst: +; RV32IA-WMO-TRAILING-FENCE: # %bb.0: +; RV32IA-WMO-TRAILING-FENCE-NEXT: fence rw, w +; RV32IA-WMO-TRAILING-FENCE-NEXT: fmv.x.w a1, fa0 +; RV32IA-WMO-TRAILING-FENCE-NEXT: sw a1, 0(a0) +; RV32IA-WMO-TRAILING-FENCE-NEXT: fence rw, rw +; RV32IA-WMO-TRAILING-FENCE-NEXT: ret +; +; RV32IA-TSO-TRAILING-FENCE-LABEL: atomic_store_f32_seq_cst: +; RV32IA-TSO-TRAILING-FENCE: # %bb.0: +; RV32IA-TSO-TRAILING-FENCE-NEXT: fmv.x.w a1, fa0 +; RV32IA-TSO-TRAILING-FENCE-NEXT: sw a1, 0(a0) +; 
RV32IA-TSO-TRAILING-FENCE-NEXT: fence rw, rw +; RV32IA-TSO-TRAILING-FENCE-NEXT: ret +; +; RV64IA-WMO-TRAILING-FENCE-LABEL: atomic_store_f32_seq_cst: +; RV64IA-WMO-TRAILING-FENCE: # %bb.0: +; RV64IA-WMO-TRAILING-FENCE-NEXT: fence rw, w +; RV64IA-WMO-TRAILING-FENCE-NEXT: fmv.x.w a1, fa0 +; RV64IA-WMO-TRAILING-FENCE-NEXT: sw a1, 0(a0) +; RV64IA-WMO-TRAILING-FENCE-NEXT: fence rw, rw +; RV64IA-WMO-TRAILING-FENCE-NEXT: ret +; +; RV64IA-TSO-TRAILING-FENCE-LABEL: atomic_store_f32_seq_cst: +; RV64IA-TSO-TRAILING-FENCE: # %bb.0: +; RV64IA-TSO-TRAILING-FENCE-NEXT: fmv.x.w a1, fa0 +; RV64IA-TSO-TRAILING-FENCE-NEXT: sw a1, 0(a0) +; RV64IA-TSO-TRAILING-FENCE-NEXT: fence rw, rw +; RV64IA-TSO-TRAILING-FENCE-NEXT: ret + store atomic float %b, ptr %a seq_cst, align 4 + ret void +} + +define void @atomic_store_f64_unordered(ptr %a, double %b) nounwind { +; RV32I-LABEL: atomic_store_f64_unordered: +; RV32I: # %bb.0: +; RV32I-NEXT: addi sp, sp, -16 +; RV32I-NEXT: sw ra, 12(sp) # 4-byte Folded Spill +; RV32I-NEXT: fsd fa0, 0(sp) +; RV32I-NEXT: lw a1, 0(sp) +; RV32I-NEXT: lw a2, 4(sp) +; RV32I-NEXT: li a3, 0 +; RV32I-NEXT: call __atomic_store_8 +; RV32I-NEXT: lw ra, 12(sp) # 4-byte Folded Reload +; RV32I-NEXT: addi sp, sp, 16 +; RV32I-NEXT: ret +; +; RV32IA-LABEL: atomic_store_f64_unordered: +; RV32IA: # %bb.0: +; RV32IA-NEXT: addi sp, sp, -16 +; RV32IA-NEXT: sw ra, 12(sp) # 4-byte Folded Spill +; RV32IA-NEXT: fsd fa0, 0(sp) +; RV32IA-NEXT: lw a1, 0(sp) +; RV32IA-NEXT: lw a2, 4(sp) +; RV32IA-NEXT: li a3, 0 +; RV32IA-NEXT: call __atomic_store_8 +; RV32IA-NEXT: lw ra, 12(sp) # 4-byte Folded Reload +; RV32IA-NEXT: addi sp, sp, 16 +; RV32IA-NEXT: ret +; +; RV64I-LABEL: atomic_store_f64_unordered: +; RV64I: # %bb.0: +; RV64I-NEXT: addi sp, sp, -16 +; RV64I-NEXT: sd ra, 8(sp) # 8-byte Folded Spill +; RV64I-NEXT: li a2, 0 +; RV64I-NEXT: call __atomic_store_8 +; RV64I-NEXT: ld ra, 8(sp) # 8-byte Folded Reload +; RV64I-NEXT: addi sp, sp, 16 +; RV64I-NEXT: ret +; +; RV64IA-LABEL: atomic_store_f64_unordered: +; RV64IA: # %bb.0: +; RV64IA-NEXT: fmv.x.d a1, fa0 +; RV64IA-NEXT: sd a1, 0(a0) +; RV64IA-NEXT: ret + store atomic double %b, ptr %a unordered, align 8 + ret void +} + +define void @atomic_store_f64_monotonic(ptr %a, double %b) nounwind { +; RV32I-LABEL: atomic_store_f64_monotonic: +; RV32I: # %bb.0: +; RV32I-NEXT: addi sp, sp, -16 +; RV32I-NEXT: sw ra, 12(sp) # 4-byte Folded Spill +; RV32I-NEXT: fsd fa0, 0(sp) +; RV32I-NEXT: lw a1, 0(sp) +; RV32I-NEXT: lw a2, 4(sp) +; RV32I-NEXT: li a3, 0 +; RV32I-NEXT: call __atomic_store_8 +; RV32I-NEXT: lw ra, 12(sp) # 4-byte Folded Reload +; RV32I-NEXT: addi sp, sp, 16 +; RV32I-NEXT: ret +; +; RV32IA-LABEL: atomic_store_f64_monotonic: +; RV32IA: # %bb.0: +; RV32IA-NEXT: addi sp, sp, -16 +; RV32IA-NEXT: sw ra, 12(sp) # 4-byte Folded Spill +; RV32IA-NEXT: fsd fa0, 0(sp) +; RV32IA-NEXT: lw a1, 0(sp) +; RV32IA-NEXT: lw a2, 4(sp) +; RV32IA-NEXT: li a3, 0 +; RV32IA-NEXT: call __atomic_store_8 +; RV32IA-NEXT: lw ra, 12(sp) # 4-byte Folded Reload +; RV32IA-NEXT: addi sp, sp, 16 +; RV32IA-NEXT: ret +; +; RV64I-LABEL: atomic_store_f64_monotonic: +; RV64I: # %bb.0: +; RV64I-NEXT: addi sp, sp, -16 +; RV64I-NEXT: sd ra, 8(sp) # 8-byte Folded Spill +; RV64I-NEXT: li a2, 0 +; RV64I-NEXT: call __atomic_store_8 +; RV64I-NEXT: ld ra, 8(sp) # 8-byte Folded Reload +; RV64I-NEXT: addi sp, sp, 16 +; RV64I-NEXT: ret +; +; RV64IA-LABEL: atomic_store_f64_monotonic: +; RV64IA: # %bb.0: +; RV64IA-NEXT: fmv.x.d a1, fa0 +; RV64IA-NEXT: sd a1, 0(a0) +; RV64IA-NEXT: ret + store atomic double %b, ptr %a 
monotonic, align 8 + ret void +} + +define void @atomic_store_f64_release(ptr %a, double %b) nounwind { +; RV32I-LABEL: atomic_store_f64_release: +; RV32I: # %bb.0: +; RV32I-NEXT: addi sp, sp, -16 +; RV32I-NEXT: sw ra, 12(sp) # 4-byte Folded Spill +; RV32I-NEXT: fsd fa0, 0(sp) +; RV32I-NEXT: lw a1, 0(sp) +; RV32I-NEXT: lw a2, 4(sp) +; RV32I-NEXT: li a3, 3 +; RV32I-NEXT: call __atomic_store_8 +; RV32I-NEXT: lw ra, 12(sp) # 4-byte Folded Reload +; RV32I-NEXT: addi sp, sp, 16 +; RV32I-NEXT: ret +; +; RV32IA-LABEL: atomic_store_f64_release: +; RV32IA: # %bb.0: +; RV32IA-NEXT: addi sp, sp, -16 +; RV32IA-NEXT: sw ra, 12(sp) # 4-byte Folded Spill +; RV32IA-NEXT: fsd fa0, 0(sp) +; RV32IA-NEXT: lw a1, 0(sp) +; RV32IA-NEXT: lw a2, 4(sp) +; RV32IA-NEXT: li a3, 3 +; RV32IA-NEXT: call __atomic_store_8 +; RV32IA-NEXT: lw ra, 12(sp) # 4-byte Folded Reload +; RV32IA-NEXT: addi sp, sp, 16 +; RV32IA-NEXT: ret +; +; RV64I-LABEL: atomic_store_f64_release: +; RV64I: # %bb.0: +; RV64I-NEXT: addi sp, sp, -16 +; RV64I-NEXT: sd ra, 8(sp) # 8-byte Folded Spill +; RV64I-NEXT: li a2, 3 +; RV64I-NEXT: call __atomic_store_8 +; RV64I-NEXT: ld ra, 8(sp) # 8-byte Folded Reload +; RV64I-NEXT: addi sp, sp, 16 +; RV64I-NEXT: ret +; +; RV64IA-WMO-LABEL: atomic_store_f64_release: +; RV64IA-WMO: # %bb.0: +; RV64IA-WMO-NEXT: fence rw, w +; RV64IA-WMO-NEXT: fmv.x.d a1, fa0 +; RV64IA-WMO-NEXT: sd a1, 0(a0) +; RV64IA-WMO-NEXT: ret +; +; RV64IA-TSO-LABEL: atomic_store_f64_release: +; RV64IA-TSO: # %bb.0: +; RV64IA-TSO-NEXT: fmv.x.d a1, fa0 +; RV64IA-TSO-NEXT: sd a1, 0(a0) +; RV64IA-TSO-NEXT: ret +; +; RV64IA-WMO-TRAILING-FENCE-LABEL: atomic_store_f64_release: +; RV64IA-WMO-TRAILING-FENCE: # %bb.0: +; RV64IA-WMO-TRAILING-FENCE-NEXT: fence rw, w +; RV64IA-WMO-TRAILING-FENCE-NEXT: fmv.x.d a1, fa0 +; RV64IA-WMO-TRAILING-FENCE-NEXT: sd a1, 0(a0) +; RV64IA-WMO-TRAILING-FENCE-NEXT: ret +; +; RV64IA-TSO-TRAILING-FENCE-LABEL: atomic_store_f64_release: +; RV64IA-TSO-TRAILING-FENCE: # %bb.0: +; RV64IA-TSO-TRAILING-FENCE-NEXT: fmv.x.d a1, fa0 +; RV64IA-TSO-TRAILING-FENCE-NEXT: sd a1, 0(a0) +; RV64IA-TSO-TRAILING-FENCE-NEXT: ret + store atomic double %b, ptr %a release, align 8 + ret void +} + +define void @atomic_store_f64_seq_cst(ptr %a, double %b) nounwind { +; RV32I-LABEL: atomic_store_f64_seq_cst: +; RV32I: # %bb.0: +; RV32I-NEXT: addi sp, sp, -16 +; RV32I-NEXT: sw ra, 12(sp) # 4-byte Folded Spill +; RV32I-NEXT: fsd fa0, 0(sp) +; RV32I-NEXT: lw a1, 0(sp) +; RV32I-NEXT: lw a2, 4(sp) +; RV32I-NEXT: li a3, 5 +; RV32I-NEXT: call __atomic_store_8 +; RV32I-NEXT: lw ra, 12(sp) # 4-byte Folded Reload +; RV32I-NEXT: addi sp, sp, 16 +; RV32I-NEXT: ret +; +; RV32IA-LABEL: atomic_store_f64_seq_cst: +; RV32IA: # %bb.0: +; RV32IA-NEXT: addi sp, sp, -16 +; RV32IA-NEXT: sw ra, 12(sp) # 4-byte Folded Spill +; RV32IA-NEXT: fsd fa0, 0(sp) +; RV32IA-NEXT: lw a1, 0(sp) +; RV32IA-NEXT: lw a2, 4(sp) +; RV32IA-NEXT: li a3, 5 +; RV32IA-NEXT: call __atomic_store_8 +; RV32IA-NEXT: lw ra, 12(sp) # 4-byte Folded Reload +; RV32IA-NEXT: addi sp, sp, 16 +; RV32IA-NEXT: ret +; +; RV64I-LABEL: atomic_store_f64_seq_cst: +; RV64I: # %bb.0: +; RV64I-NEXT: addi sp, sp, -16 +; RV64I-NEXT: sd ra, 8(sp) # 8-byte Folded Spill +; RV64I-NEXT: li a2, 5 +; RV64I-NEXT: call __atomic_store_8 +; RV64I-NEXT: ld ra, 8(sp) # 8-byte Folded Reload +; RV64I-NEXT: addi sp, sp, 16 +; RV64I-NEXT: ret +; +; RV64IA-WMO-LABEL: atomic_store_f64_seq_cst: +; RV64IA-WMO: # %bb.0: +; RV64IA-WMO-NEXT: fence rw, w +; RV64IA-WMO-NEXT: fmv.x.d a1, fa0 +; RV64IA-WMO-NEXT: sd a1, 0(a0) +; RV64IA-WMO-NEXT: ret 
+; +; RV64IA-TSO-LABEL: atomic_store_f64_seq_cst: +; RV64IA-TSO: # %bb.0: +; RV64IA-TSO-NEXT: fmv.x.d a1, fa0 +; RV64IA-TSO-NEXT: sd a1, 0(a0) +; RV64IA-TSO-NEXT: fence rw, rw +; RV64IA-TSO-NEXT: ret +; +; RV64IA-WMO-TRAILING-FENCE-LABEL: atomic_store_f64_seq_cst: +; RV64IA-WMO-TRAILING-FENCE: # %bb.0: +; RV64IA-WMO-TRAILING-FENCE-NEXT: fence rw, w +; RV64IA-WMO-TRAILING-FENCE-NEXT: fmv.x.d a1, fa0 +; RV64IA-WMO-TRAILING-FENCE-NEXT: sd a1, 0(a0) +; RV64IA-WMO-TRAILING-FENCE-NEXT: fence rw, rw +; RV64IA-WMO-TRAILING-FENCE-NEXT: ret +; +; RV64IA-TSO-TRAILING-FENCE-LABEL: atomic_store_f64_seq_cst: +; RV64IA-TSO-TRAILING-FENCE: # %bb.0: +; RV64IA-TSO-TRAILING-FENCE-NEXT: fmv.x.d a1, fa0 +; RV64IA-TSO-TRAILING-FENCE-NEXT: sd a1, 0(a0) +; RV64IA-TSO-TRAILING-FENCE-NEXT: fence rw, rw +; RV64IA-TSO-TRAILING-FENCE-NEXT: ret + store atomic double %b, ptr %a seq_cst, align 8 + ret void +} diff --git a/llvm/test/CodeGen/RISCV/GlobalISel/atomic-load-store.ll b/llvm/test/CodeGen/RISCV/GlobalISel/atomic-load-store.ll index 1d5d918..5d3fed4 100644 --- a/llvm/test/CodeGen/RISCV/GlobalISel/atomic-load-store.ll +++ b/llvm/test/CodeGen/RISCV/GlobalISel/atomic-load-store.ll @@ -23,6 +23,15 @@ ; RUN: llc -mtriple=riscv64 -global-isel -mattr=+a,+ztso -verify-machineinstrs < %s \ ; RUN: | FileCheck -check-prefixes=RV64IA,RV64IA-TSO-TRAILING-FENCE %s +; RUN: llc -mtriple=riscv32 -global-isel -mattr=+a,+experimental-zalasr -verify-machineinstrs < %s \ +; RUN: | FileCheck -check-prefixes=RV32IA,RV32IA-ZALASR,RV32IA-ZALASR-WMO %s +; RUN: llc -mtriple=riscv32 -global-isel -mattr=+a,+experimental-zalasr,+ztso -verify-machineinstrs < %s \ +; RUN: | FileCheck -check-prefixes=RV32IA,RV32IA-ZALASR,RV32IA-ZALASR-TSO %s + +; RUN: llc -mtriple=riscv64 -global-isel -mattr=+a,+experimental-zalasr -verify-machineinstrs < %s \ +; RUN: | FileCheck -check-prefixes=RV64IA,RV64IA-ZALASR,RV64IA-ZALASR-WMO %s +; RUN: llc -mtriple=riscv64 -global-isel -mattr=+a,+experimental-zalasr,+ztso -verify-machineinstrs < %s \ +; RUN: | FileCheck -check-prefixes=RV64IA,RV64IA-ZALASR,RV64IA-ZALASR-TSO %s define i8 @atomic_load_i8_unordered(ptr %a) nounwind { ; RV32I-LABEL: atomic_load_i8_unordered: @@ -156,6 +165,26 @@ define i8 @atomic_load_i8_acquire(ptr %a) nounwind { ; RV64IA-TSO-TRAILING-FENCE: # %bb.0: ; RV64IA-TSO-TRAILING-FENCE-NEXT: lbu a0, 0(a0) ; RV64IA-TSO-TRAILING-FENCE-NEXT: ret +; +; RV32IA-ZALASR-WMO-LABEL: atomic_load_i8_acquire: +; RV32IA-ZALASR-WMO: # %bb.0: +; RV32IA-ZALASR-WMO-NEXT: lb.aq a0, (a0) +; RV32IA-ZALASR-WMO-NEXT: ret +; +; RV32IA-ZALASR-TSO-LABEL: atomic_load_i8_acquire: +; RV32IA-ZALASR-TSO: # %bb.0: +; RV32IA-ZALASR-TSO-NEXT: lbu a0, 0(a0) +; RV32IA-ZALASR-TSO-NEXT: ret +; +; RV64IA-ZALASR-WMO-LABEL: atomic_load_i8_acquire: +; RV64IA-ZALASR-WMO: # %bb.0: +; RV64IA-ZALASR-WMO-NEXT: lb.aq a0, (a0) +; RV64IA-ZALASR-WMO-NEXT: ret +; +; RV64IA-ZALASR-TSO-LABEL: atomic_load_i8_acquire: +; RV64IA-ZALASR-TSO: # %bb.0: +; RV64IA-ZALASR-TSO-NEXT: lbu a0, 0(a0) +; RV64IA-ZALASR-TSO-NEXT: ret %1 = load atomic i8, ptr %a acquire, align 1 ret i8 %1 } @@ -232,6 +261,16 @@ define i8 @atomic_load_i8_seq_cst(ptr %a) nounwind { ; RV64IA-TSO-TRAILING-FENCE-NEXT: fence rw, rw ; RV64IA-TSO-TRAILING-FENCE-NEXT: lbu a0, 0(a0) ; RV64IA-TSO-TRAILING-FENCE-NEXT: ret +; +; RV32IA-ZALASR-LABEL: atomic_load_i8_seq_cst: +; RV32IA-ZALASR: # %bb.0: +; RV32IA-ZALASR-NEXT: lb.aq a0, (a0) +; RV32IA-ZALASR-NEXT: ret +; +; RV64IA-ZALASR-LABEL: atomic_load_i8_seq_cst: +; RV64IA-ZALASR: # %bb.0: +; RV64IA-ZALASR-NEXT: lb.aq a0, (a0) +; 
RV64IA-ZALASR-NEXT: ret %1 = load atomic i8, ptr %a seq_cst, align 1 ret i8 %1 } @@ -368,6 +407,26 @@ define i16 @atomic_load_i16_acquire(ptr %a) nounwind { ; RV64IA-TSO-TRAILING-FENCE: # %bb.0: ; RV64IA-TSO-TRAILING-FENCE-NEXT: lh a0, 0(a0) ; RV64IA-TSO-TRAILING-FENCE-NEXT: ret +; +; RV32IA-ZALASR-WMO-LABEL: atomic_load_i16_acquire: +; RV32IA-ZALASR-WMO: # %bb.0: +; RV32IA-ZALASR-WMO-NEXT: lh.aq a0, (a0) +; RV32IA-ZALASR-WMO-NEXT: ret +; +; RV32IA-ZALASR-TSO-LABEL: atomic_load_i16_acquire: +; RV32IA-ZALASR-TSO: # %bb.0: +; RV32IA-ZALASR-TSO-NEXT: lh a0, 0(a0) +; RV32IA-ZALASR-TSO-NEXT: ret +; +; RV64IA-ZALASR-WMO-LABEL: atomic_load_i16_acquire: +; RV64IA-ZALASR-WMO: # %bb.0: +; RV64IA-ZALASR-WMO-NEXT: lh.aq a0, (a0) +; RV64IA-ZALASR-WMO-NEXT: ret +; +; RV64IA-ZALASR-TSO-LABEL: atomic_load_i16_acquire: +; RV64IA-ZALASR-TSO: # %bb.0: +; RV64IA-ZALASR-TSO-NEXT: lh a0, 0(a0) +; RV64IA-ZALASR-TSO-NEXT: ret %1 = load atomic i16, ptr %a acquire, align 2 ret i16 %1 } @@ -444,6 +503,16 @@ define i16 @atomic_load_i16_seq_cst(ptr %a) nounwind { ; RV64IA-TSO-TRAILING-FENCE-NEXT: fence rw, rw ; RV64IA-TSO-TRAILING-FENCE-NEXT: lh a0, 0(a0) ; RV64IA-TSO-TRAILING-FENCE-NEXT: ret +; +; RV32IA-ZALASR-LABEL: atomic_load_i16_seq_cst: +; RV32IA-ZALASR: # %bb.0: +; RV32IA-ZALASR-NEXT: lh.aq a0, (a0) +; RV32IA-ZALASR-NEXT: ret +; +; RV64IA-ZALASR-LABEL: atomic_load_i16_seq_cst: +; RV64IA-ZALASR: # %bb.0: +; RV64IA-ZALASR-NEXT: lh.aq a0, (a0) +; RV64IA-ZALASR-NEXT: ret %1 = load atomic i16, ptr %a seq_cst, align 2 ret i16 %1 } @@ -580,6 +649,26 @@ define i32 @atomic_load_i32_acquire(ptr %a) nounwind { ; RV64IA-TSO-TRAILING-FENCE: # %bb.0: ; RV64IA-TSO-TRAILING-FENCE-NEXT: lw a0, 0(a0) ; RV64IA-TSO-TRAILING-FENCE-NEXT: ret +; +; RV32IA-ZALASR-WMO-LABEL: atomic_load_i32_acquire: +; RV32IA-ZALASR-WMO: # %bb.0: +; RV32IA-ZALASR-WMO-NEXT: lw.aq a0, (a0) +; RV32IA-ZALASR-WMO-NEXT: ret +; +; RV32IA-ZALASR-TSO-LABEL: atomic_load_i32_acquire: +; RV32IA-ZALASR-TSO: # %bb.0: +; RV32IA-ZALASR-TSO-NEXT: lw a0, 0(a0) +; RV32IA-ZALASR-TSO-NEXT: ret +; +; RV64IA-ZALASR-WMO-LABEL: atomic_load_i32_acquire: +; RV64IA-ZALASR-WMO: # %bb.0: +; RV64IA-ZALASR-WMO-NEXT: lw.aq a0, (a0) +; RV64IA-ZALASR-WMO-NEXT: ret +; +; RV64IA-ZALASR-TSO-LABEL: atomic_load_i32_acquire: +; RV64IA-ZALASR-TSO: # %bb.0: +; RV64IA-ZALASR-TSO-NEXT: lw a0, 0(a0) +; RV64IA-ZALASR-TSO-NEXT: ret %1 = load atomic i32, ptr %a acquire, align 4 ret i32 %1 } @@ -656,6 +745,16 @@ define i32 @atomic_load_i32_seq_cst(ptr %a) nounwind { ; RV64IA-TSO-TRAILING-FENCE-NEXT: fence rw, rw ; RV64IA-TSO-TRAILING-FENCE-NEXT: lw a0, 0(a0) ; RV64IA-TSO-TRAILING-FENCE-NEXT: ret +; +; RV32IA-ZALASR-LABEL: atomic_load_i32_seq_cst: +; RV32IA-ZALASR: # %bb.0: +; RV32IA-ZALASR-NEXT: lw.aq a0, (a0) +; RV32IA-ZALASR-NEXT: ret +; +; RV64IA-ZALASR-LABEL: atomic_load_i32_seq_cst: +; RV64IA-ZALASR: # %bb.0: +; RV64IA-ZALASR-NEXT: lw.aq a0, (a0) +; RV64IA-ZALASR-NEXT: ret %1 = load atomic i32, ptr %a seq_cst, align 4 ret i32 %1 } @@ -790,6 +889,16 @@ define i64 @atomic_load_i64_acquire(ptr %a) nounwind { ; RV64IA-TSO-TRAILING-FENCE: # %bb.0: ; RV64IA-TSO-TRAILING-FENCE-NEXT: ld a0, 0(a0) ; RV64IA-TSO-TRAILING-FENCE-NEXT: ret +; +; RV64IA-ZALASR-WMO-LABEL: atomic_load_i64_acquire: +; RV64IA-ZALASR-WMO: # %bb.0: +; RV64IA-ZALASR-WMO-NEXT: ld.aq a0, (a0) +; RV64IA-ZALASR-WMO-NEXT: ret +; +; RV64IA-ZALASR-TSO-LABEL: atomic_load_i64_acquire: +; RV64IA-ZALASR-TSO: # %bb.0: +; RV64IA-ZALASR-TSO-NEXT: ld a0, 0(a0) +; RV64IA-ZALASR-TSO-NEXT: ret %1 = load atomic i64, ptr %a acquire, align 8 ret i64 %1 } 
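For readers skimming the new ZALASR check lines above: the pattern they encode is that, with the experimental Zalasr extension, acquire/seq_cst loads select the `.aq` load forms and release/seq_cst stores select the `.rl` store forms, instead of the fence-based lowerings. A minimal standalone reproducer in the style of these tests — the RUN flags are taken from the patch, but the file itself and the function names are illustrative and not part of the commit — would look like:

; RUN: llc -mtriple=riscv64 -global-isel -mattr=+a,+experimental-zalasr \
; RUN:   -verify-machineinstrs < %s | FileCheck %s

define i32 @acquire_load(ptr %p) nounwind {
; CHECK-LABEL: acquire_load:
; CHECK: lw.aq a0, (a0)
  ; Under WMO, an acquire load selects the .aq form rather than lw + fence r, rw.
  %v = load atomic i32, ptr %p acquire, align 4
  ret i32 %v
}

define void @release_store(ptr %p, i32 %v) nounwind {
; CHECK-LABEL: release_store:
; CHECK: sw.rl a1, (a0)
  ; Under WMO, a release store selects the .rl form rather than fence rw, w + sw.
  store atomic i32 %v, ptr %p release, align 4
  ret void
}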
@@ -850,6 +959,11 @@ define i64 @atomic_load_i64_seq_cst(ptr %a) nounwind { ; RV64IA-TSO-TRAILING-FENCE-NEXT: fence rw, rw ; RV64IA-TSO-TRAILING-FENCE-NEXT: ld a0, 0(a0) ; RV64IA-TSO-TRAILING-FENCE-NEXT: ret +; +; RV64IA-ZALASR-LABEL: atomic_load_i64_seq_cst: +; RV64IA-ZALASR: # %bb.0: +; RV64IA-ZALASR-NEXT: ld.aq a0, (a0) +; RV64IA-ZALASR-NEXT: ret %1 = load atomic i64, ptr %a seq_cst, align 8 ret i64 %1 } @@ -986,6 +1100,26 @@ define void @atomic_store_i8_release(ptr %a, i8 %b) nounwind { ; RV64IA-TSO-TRAILING-FENCE: # %bb.0: ; RV64IA-TSO-TRAILING-FENCE-NEXT: sb a1, 0(a0) ; RV64IA-TSO-TRAILING-FENCE-NEXT: ret +; +; RV32IA-ZALASR-WMO-LABEL: atomic_store_i8_release: +; RV32IA-ZALASR-WMO: # %bb.0: +; RV32IA-ZALASR-WMO-NEXT: sb.rl a1, (a0) +; RV32IA-ZALASR-WMO-NEXT: ret +; +; RV32IA-ZALASR-TSO-LABEL: atomic_store_i8_release: +; RV32IA-ZALASR-TSO: # %bb.0: +; RV32IA-ZALASR-TSO-NEXT: sb a1, 0(a0) +; RV32IA-ZALASR-TSO-NEXT: ret +; +; RV64IA-ZALASR-WMO-LABEL: atomic_store_i8_release: +; RV64IA-ZALASR-WMO: # %bb.0: +; RV64IA-ZALASR-WMO-NEXT: sb.rl a1, (a0) +; RV64IA-ZALASR-WMO-NEXT: ret +; +; RV64IA-ZALASR-TSO-LABEL: atomic_store_i8_release: +; RV64IA-ZALASR-TSO: # %bb.0: +; RV64IA-ZALASR-TSO-NEXT: sb a1, 0(a0) +; RV64IA-ZALASR-TSO-NEXT: ret store atomic i8 %b, ptr %a release, align 1 ret void } @@ -1060,6 +1194,16 @@ define void @atomic_store_i8_seq_cst(ptr %a, i8 %b) nounwind { ; RV64IA-TSO-TRAILING-FENCE-NEXT: sb a1, 0(a0) ; RV64IA-TSO-TRAILING-FENCE-NEXT: fence rw, rw ; RV64IA-TSO-TRAILING-FENCE-NEXT: ret +; +; RV32IA-ZALASR-LABEL: atomic_store_i8_seq_cst: +; RV32IA-ZALASR: # %bb.0: +; RV32IA-ZALASR-NEXT: sb.rl a1, (a0) +; RV32IA-ZALASR-NEXT: ret +; +; RV64IA-ZALASR-LABEL: atomic_store_i8_seq_cst: +; RV64IA-ZALASR: # %bb.0: +; RV64IA-ZALASR-NEXT: sb.rl a1, (a0) +; RV64IA-ZALASR-NEXT: ret store atomic i8 %b, ptr %a seq_cst, align 1 ret void } @@ -1196,6 +1340,26 @@ define void @atomic_store_i16_release(ptr %a, i16 %b) nounwind { ; RV64IA-TSO-TRAILING-FENCE: # %bb.0: ; RV64IA-TSO-TRAILING-FENCE-NEXT: sh a1, 0(a0) ; RV64IA-TSO-TRAILING-FENCE-NEXT: ret +; +; RV32IA-ZALASR-WMO-LABEL: atomic_store_i16_release: +; RV32IA-ZALASR-WMO: # %bb.0: +; RV32IA-ZALASR-WMO-NEXT: sh.rl a1, (a0) +; RV32IA-ZALASR-WMO-NEXT: ret +; +; RV32IA-ZALASR-TSO-LABEL: atomic_store_i16_release: +; RV32IA-ZALASR-TSO: # %bb.0: +; RV32IA-ZALASR-TSO-NEXT: sh a1, 0(a0) +; RV32IA-ZALASR-TSO-NEXT: ret +; +; RV64IA-ZALASR-WMO-LABEL: atomic_store_i16_release: +; RV64IA-ZALASR-WMO: # %bb.0: +; RV64IA-ZALASR-WMO-NEXT: sh.rl a1, (a0) +; RV64IA-ZALASR-WMO-NEXT: ret +; +; RV64IA-ZALASR-TSO-LABEL: atomic_store_i16_release: +; RV64IA-ZALASR-TSO: # %bb.0: +; RV64IA-ZALASR-TSO-NEXT: sh a1, 0(a0) +; RV64IA-ZALASR-TSO-NEXT: ret store atomic i16 %b, ptr %a release, align 2 ret void } @@ -1270,6 +1434,16 @@ define void @atomic_store_i16_seq_cst(ptr %a, i16 %b) nounwind { ; RV64IA-TSO-TRAILING-FENCE-NEXT: sh a1, 0(a0) ; RV64IA-TSO-TRAILING-FENCE-NEXT: fence rw, rw ; RV64IA-TSO-TRAILING-FENCE-NEXT: ret +; +; RV32IA-ZALASR-LABEL: atomic_store_i16_seq_cst: +; RV32IA-ZALASR: # %bb.0: +; RV32IA-ZALASR-NEXT: sh.rl a1, (a0) +; RV32IA-ZALASR-NEXT: ret +; +; RV64IA-ZALASR-LABEL: atomic_store_i16_seq_cst: +; RV64IA-ZALASR: # %bb.0: +; RV64IA-ZALASR-NEXT: sh.rl a1, (a0) +; RV64IA-ZALASR-NEXT: ret store atomic i16 %b, ptr %a seq_cst, align 2 ret void } @@ -1406,6 +1580,26 @@ define void @atomic_store_i32_release(ptr %a, i32 %b) nounwind { ; RV64IA-TSO-TRAILING-FENCE: # %bb.0: ; RV64IA-TSO-TRAILING-FENCE-NEXT: sw a1, 0(a0) ; RV64IA-TSO-TRAILING-FENCE-NEXT: ret 
+; +; RV32IA-ZALASR-WMO-LABEL: atomic_store_i32_release: +; RV32IA-ZALASR-WMO: # %bb.0: +; RV32IA-ZALASR-WMO-NEXT: sw.rl a1, (a0) +; RV32IA-ZALASR-WMO-NEXT: ret +; +; RV32IA-ZALASR-TSO-LABEL: atomic_store_i32_release: +; RV32IA-ZALASR-TSO: # %bb.0: +; RV32IA-ZALASR-TSO-NEXT: sw a1, 0(a0) +; RV32IA-ZALASR-TSO-NEXT: ret +; +; RV64IA-ZALASR-WMO-LABEL: atomic_store_i32_release: +; RV64IA-ZALASR-WMO: # %bb.0: +; RV64IA-ZALASR-WMO-NEXT: sw.rl a1, (a0) +; RV64IA-ZALASR-WMO-NEXT: ret +; +; RV64IA-ZALASR-TSO-LABEL: atomic_store_i32_release: +; RV64IA-ZALASR-TSO: # %bb.0: +; RV64IA-ZALASR-TSO-NEXT: sw a1, 0(a0) +; RV64IA-ZALASR-TSO-NEXT: ret store atomic i32 %b, ptr %a release, align 4 ret void } @@ -1480,6 +1674,16 @@ define void @atomic_store_i32_seq_cst(ptr %a, i32 %b) nounwind { ; RV64IA-TSO-TRAILING-FENCE-NEXT: sw a1, 0(a0) ; RV64IA-TSO-TRAILING-FENCE-NEXT: fence rw, rw ; RV64IA-TSO-TRAILING-FENCE-NEXT: ret +; +; RV32IA-ZALASR-LABEL: atomic_store_i32_seq_cst: +; RV32IA-ZALASR: # %bb.0: +; RV32IA-ZALASR-NEXT: sw.rl a1, (a0) +; RV32IA-ZALASR-NEXT: ret +; +; RV64IA-ZALASR-LABEL: atomic_store_i32_seq_cst: +; RV64IA-ZALASR: # %bb.0: +; RV64IA-ZALASR-NEXT: sw.rl a1, (a0) +; RV64IA-ZALASR-NEXT: ret store atomic i32 %b, ptr %a seq_cst, align 4 ret void } @@ -1614,6 +1818,16 @@ define void @atomic_store_i64_release(ptr %a, i64 %b) nounwind { ; RV64IA-TSO-TRAILING-FENCE: # %bb.0: ; RV64IA-TSO-TRAILING-FENCE-NEXT: sd a1, 0(a0) ; RV64IA-TSO-TRAILING-FENCE-NEXT: ret +; +; RV64IA-ZALASR-WMO-LABEL: atomic_store_i64_release: +; RV64IA-ZALASR-WMO: # %bb.0: +; RV64IA-ZALASR-WMO-NEXT: sd.rl a1, (a0) +; RV64IA-ZALASR-WMO-NEXT: ret +; +; RV64IA-ZALASR-TSO-LABEL: atomic_store_i64_release: +; RV64IA-ZALASR-TSO: # %bb.0: +; RV64IA-ZALASR-TSO-NEXT: sd a1, 0(a0) +; RV64IA-ZALASR-TSO-NEXT: ret store atomic i64 %b, ptr %a release, align 8 ret void } @@ -1673,6 +1887,11 @@ define void @atomic_store_i64_seq_cst(ptr %a, i64 %b) nounwind { ; RV64IA-TSO-TRAILING-FENCE-NEXT: sd a1, 0(a0) ; RV64IA-TSO-TRAILING-FENCE-NEXT: fence rw, rw ; RV64IA-TSO-TRAILING-FENCE-NEXT: ret +; +; RV64IA-ZALASR-LABEL: atomic_store_i64_seq_cst: +; RV64IA-ZALASR: # %bb.0: +; RV64IA-ZALASR-NEXT: sd.rl a1, (a0) +; RV64IA-ZALASR-NEXT: ret store atomic i64 %b, ptr %a seq_cst, align 8 ret void } diff --git a/llvm/test/CodeGen/RISCV/GlobalISel/rvv/fallback-rv32.ll b/llvm/test/CodeGen/RISCV/GlobalISel/rvv/fallback-rv32.ll new file mode 100644 index 0000000..85a5d9a --- /dev/null +++ b/llvm/test/CodeGen/RISCV/GlobalISel/rvv/fallback-rv32.ll @@ -0,0 +1,22 @@ +; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py +; RUN: llc -mtriple=riscv32 -global-isel -global-isel-abort=2 \ +; RUN: -pass-remarks-missed='gisel*' -mattr=+zve64d,+f,+d,+zvfh,+zvfbfmin \ +; RUN: %s -o %t.out 2> %t.err +; RUN: FileCheck %s --check-prefix=FALLBACK-WITH-REPORT-OUT < %t.out +; RUN: FileCheck %s --check-prefix=FALLBACK-WITH-REPORT-ERR < %t.err + +; FALLBACK-WITH-REPORT-ERR: remark: <unknown>:0:0: unable to translate instruction: call +; FALLBACK-WITH-REPORT-OUT-LABEL: test_vlseg2_nxv1i8_triscv.vector.tuple_nxv1i8_2t +define target("riscv.vector.tuple", <vscale x 1 x i8>, 2) @test_vlseg2_nxv1i8_triscv.vector.tuple_nxv1i8_2t(ptr %base, i32 %vl) { +entry: + %0 = tail call target("riscv.vector.tuple", <vscale x 1 x i8>, 2) @llvm.riscv.vlseg2.triscv.vector.tuple_nxv1i8_2t(target("riscv.vector.tuple", <vscale x 1 x i8>, 2) poison, ptr %base, i32 %vl, i32 3) + ret target("riscv.vector.tuple", <vscale x 1 x i8>, 2) %0 +} + +; FALLBACK-WITH-REPORT-ERR: remark: 
<unknown>:0:0: unable to lower arguments +; FALLBACK-WITH-REPORT-OUT-LABEL: test_vsseg2_nxv1i8_triscv.vector.tuple_nxv1i8_2t +define void @test_vsseg2_nxv1i8_triscv.vector.tuple_nxv1i8_2t(target("riscv.vector.tuple", <vscale x 1 x i8>, 2) %val, ptr %base, i32 %vl) { +entry: + tail call void @llvm.riscv.vsseg2.triscv.vector.tuple_nxv1i8_2t(target("riscv.vector.tuple", <vscale x 1 x i8>, 2) %val, ptr %base, i32 %vl, i32 3) + ret void +} diff --git a/llvm/test/CodeGen/RISCV/GlobalISel/rvv/fallback-rv64.ll b/llvm/test/CodeGen/RISCV/GlobalISel/rvv/fallback-rv64.ll new file mode 100644 index 0000000..b5405d3 --- /dev/null +++ b/llvm/test/CodeGen/RISCV/GlobalISel/rvv/fallback-rv64.ll @@ -0,0 +1,22 @@ +; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py +; RUN: llc -mtriple=riscv64 -global-isel -global-isel-abort=2 \ +; RUN: -pass-remarks-missed='gisel*' -mattr=+zve64d,+f,+d,+zvfh,+zvfbfmin \ +; RUN: %s -o %t.out 2> %t.err +; RUN: FileCheck %s --check-prefix=FALLBACK-WITH-REPORT-OUT < %t.out +; RUN: FileCheck %s --check-prefix=FALLBACK-WITH-REPORT-ERR < %t.err + +; FALLBACK-WITH-REPORT-ERR: remark: <unknown>:0:0: unable to translate instruction: call +; FALLBACK-WITH-REPORT-OUT-LABEL: test_vlseg2_nxv1i8_triscv.vector.tuple_nxv1i8_2t +define target("riscv.vector.tuple", <vscale x 1 x i8>, 2) @test_vlseg2_nxv1i8_triscv.vector.tuple_nxv1i8_2t(ptr %base, i64 %vl) { +entry: + %0 = tail call target("riscv.vector.tuple", <vscale x 1 x i8>, 2) @llvm.riscv.vlseg2.triscv.vector.tuple_nxv1i8_2t(target("riscv.vector.tuple", <vscale x 1 x i8>, 2) poison, ptr %base, i64 %vl, i64 3) + ret target("riscv.vector.tuple", <vscale x 1 x i8>, 2) %0 +} + +; FALLBACK-WITH-REPORT-ERR: remark: <unknown>:0:0: unable to lower arguments +; FALLBACK-WITH-REPORT-OUT-LABEL: test_vsseg2_nxv1i8_triscv.vector.tuple_nxv1i8_2t +define void @test_vsseg2_nxv1i8_triscv.vector.tuple_nxv1i8_2t(target("riscv.vector.tuple", <vscale x 1 x i8>, 2) %val, ptr %base, i64 %vl) { +entry: + tail call void @llvm.riscv.vsseg2.triscv.vector.tuple_nxv1i8_2t(target("riscv.vector.tuple", <vscale x 1 x i8>, 2) %val, ptr %base, i64 %vl, i64 3) + ret void +} diff --git a/llvm/test/CodeGen/RISCV/float-imm.ll b/llvm/test/CodeGen/RISCV/float-imm.ll index e4e3454..610c72b 100644 --- a/llvm/test/CodeGen/RISCV/float-imm.ll +++ b/llvm/test/CodeGen/RISCV/float-imm.ll @@ -4,11 +4,10 @@ ; RUN: llc -mtriple=riscv64 -mattr=+f -verify-machineinstrs < %s \ ; RUN: -target-abi=lp64f | FileCheck %s ; RUN: llc -mtriple=riscv32 -mattr=+zfinx -verify-machineinstrs < %s \ -; RUN: -target-abi=ilp32 | FileCheck --check-prefixes=CHECKZFINX,RV32ZFINX %s +; RUN: -target-abi=ilp32 | FileCheck --check-prefixes=CHECKZFINX %s ; RUN: llc -mtriple=riscv64 -mattr=+zfinx -verify-machineinstrs < %s \ -; RUN: -target-abi=lp64 | FileCheck --check-prefixes=CHECKZFINX,RV64ZFINX %s +; RUN: -target-abi=lp64 | FileCheck --check-prefixes=CHECKZFINX %s -; TODO: constant pool shouldn't be necessary for RV64IF. define float @float_imm() nounwind { ; CHECK-LABEL: float_imm: ; CHECK: # %bb.0: @@ -69,6 +68,3 @@ define float @float_negative_zero(ptr %pf) nounwind { ; CHECKZFINX-NEXT: ret ret float -0.0 } -;; NOTE: These prefixes are unused and the list is autogenerated. 
Do not add tests below this line: -; RV32ZFINX: {{.*}} -; RV64ZFINX: {{.*}} diff --git a/llvm/test/CodeGen/RISCV/half-imm.ll b/llvm/test/CodeGen/RISCV/half-imm.ll index 1dc0da8c..ec1a7a4 100644 --- a/llvm/test/CodeGen/RISCV/half-imm.ll +++ b/llvm/test/CodeGen/RISCV/half-imm.ll @@ -5,22 +5,21 @@ ; RUN: -target-abi lp64f < %s | FileCheck %s ; RUN: llc -mtriple=riscv32 -mattr=+zhinx -verify-machineinstrs \ ; RUN: -target-abi ilp32 < %s \ -; RUN: | FileCheck -check-prefix=RV32IZHINX %s +; RUN: | FileCheck -check-prefixes=CHECKIZHINX %s ; RUN: llc -mtriple=riscv64 -mattr=+zhinx -verify-machineinstrs \ ; RUN: -target-abi lp64 < %s \ -; RUN: | FileCheck -check-prefix=RV64IZHINX %s +; RUN: | FileCheck -check-prefixes=CHECKIZHINX %s ; RUN: llc -mtriple=riscv32 -mattr=+zfhmin -verify-machineinstrs \ ; RUN: -target-abi ilp32f < %s | FileCheck -check-prefixes=CHECKIZFHMIN %s ; RUN: llc -mtriple=riscv64 -mattr=+zfhmin -verify-machineinstrs \ ; RUN: -target-abi lp64f < %s | FileCheck -check-prefixes=CHECKIZFHMIN %s ; RUN: llc -mtriple=riscv32 -mattr=+zhinxmin -verify-machineinstrs \ ; RUN: -target-abi ilp32 < %s \ -; RUN: | FileCheck -check-prefixes=CHECKIZHINXMIN,RV32IZHINXMIN %s +; RUN: | FileCheck -check-prefixes=CHECKIZHINXMIN %s ; RUN: llc -mtriple=riscv64 -mattr=+zhinxmin -verify-machineinstrs \ ; RUN: -target-abi lp64 < %s \ -; RUN: | FileCheck -check-prefixes=CHECKIZHINXMIN,RV64IZHINXMIN %s +; RUN: | FileCheck -check-prefixes=CHECKIZHINXMIN %s -; TODO: constant pool shouldn't be necessary for RV32IZfh and RV64IZfh define half @half_imm() nounwind { ; CHECK-LABEL: half_imm: ; CHECK: # %bb.0: @@ -29,19 +28,12 @@ define half @half_imm() nounwind { ; CHECK-NEXT: fmv.h.x fa0, a0 ; CHECK-NEXT: ret ; -; RV32IZHINX-LABEL: half_imm: -; RV32IZHINX: # %bb.0: -; RV32IZHINX-NEXT: lui a0, 4 -; RV32IZHINX-NEXT: addi a0, a0, 512 -; RV32IZHINX-NEXT: # kill: def $x10_h killed $x10_h killed $x10 -; RV32IZHINX-NEXT: ret -; -; RV64IZHINX-LABEL: half_imm: -; RV64IZHINX: # %bb.0: -; RV64IZHINX-NEXT: lui a0, 4 -; RV64IZHINX-NEXT: addi a0, a0, 512 -; RV64IZHINX-NEXT: # kill: def $x10_h killed $x10_h killed $x10 -; RV64IZHINX-NEXT: ret +; CHECKIZHINX-LABEL: half_imm: +; CHECKIZHINX: # %bb.0: +; CHECKIZHINX-NEXT: lui a0, 4 +; CHECKIZHINX-NEXT: addi a0, a0, 512 +; CHECKIZHINX-NEXT: # kill: def $x10_h killed $x10_h killed $x10 +; CHECKIZHINX-NEXT: ret ; ; CHECKIZFHMIN-LABEL: half_imm: ; CHECKIZFHMIN: # %bb.0: @@ -68,19 +60,12 @@ define half @half_imm_op(half %a) nounwind { ; CHECK-NEXT: fadd.h fa0, fa0, fa5 ; CHECK-NEXT: ret ; -; RV32IZHINX-LABEL: half_imm_op: -; RV32IZHINX: # %bb.0: -; RV32IZHINX-NEXT: li a1, 15 -; RV32IZHINX-NEXT: slli a1, a1, 10 -; RV32IZHINX-NEXT: fadd.h a0, a0, a1 -; RV32IZHINX-NEXT: ret -; -; RV64IZHINX-LABEL: half_imm_op: -; RV64IZHINX: # %bb.0: -; RV64IZHINX-NEXT: li a1, 15 -; RV64IZHINX-NEXT: slli a1, a1, 10 -; RV64IZHINX-NEXT: fadd.h a0, a0, a1 -; RV64IZHINX-NEXT: ret +; CHECKIZHINX-LABEL: half_imm_op: +; CHECKIZHINX: # %bb.0: +; CHECKIZHINX-NEXT: li a1, 15 +; CHECKIZHINX-NEXT: slli a1, a1, 10 +; CHECKIZHINX-NEXT: fadd.h a0, a0, a1 +; CHECKIZHINX-NEXT: ret ; ; CHECKIZFHMIN-LABEL: half_imm_op: ; CHECKIZFHMIN: # %bb.0: @@ -108,15 +93,10 @@ define half @half_positive_zero(ptr %pf) nounwind { ; CHECK-NEXT: fmv.h.x fa0, zero ; CHECK-NEXT: ret ; -; RV32IZHINX-LABEL: half_positive_zero: -; RV32IZHINX: # %bb.0: -; RV32IZHINX-NEXT: li a0, 0 -; RV32IZHINX-NEXT: ret -; -; RV64IZHINX-LABEL: half_positive_zero: -; RV64IZHINX: # %bb.0: -; RV64IZHINX-NEXT: li a0, 0 -; RV64IZHINX-NEXT: ret +; CHECKIZHINX-LABEL: 
half_positive_zero: +; CHECKIZHINX: # %bb.0: +; CHECKIZHINX-NEXT: li a0, 0 +; CHECKIZHINX-NEXT: ret ; ; CHECKIZFHMIN-LABEL: half_positive_zero: ; CHECKIZFHMIN: # %bb.0: @@ -137,15 +117,10 @@ define half @half_negative_zero(ptr %pf) nounwind { ; CHECK-NEXT: fmv.h.x fa0, a0 ; CHECK-NEXT: ret ; -; RV32IZHINX-LABEL: half_negative_zero: -; RV32IZHINX: # %bb.0: -; RV32IZHINX-NEXT: lui a0, 1048568 -; RV32IZHINX-NEXT: ret -; -; RV64IZHINX-LABEL: half_negative_zero: -; RV64IZHINX: # %bb.0: -; RV64IZHINX-NEXT: lui a0, 1048568 -; RV64IZHINX-NEXT: ret +; CHECKIZHINX-LABEL: half_negative_zero: +; CHECKIZHINX: # %bb.0: +; CHECKIZHINX-NEXT: lui a0, 1048568 +; CHECKIZHINX-NEXT: ret ; ; CHECKIZFHMIN-LABEL: half_negative_zero: ; CHECKIZFHMIN: # %bb.0: @@ -159,6 +134,3 @@ define half @half_negative_zero(ptr %pf) nounwind { ; CHECKIZHINXMIN-NEXT: ret ret half -0.0 } -;; NOTE: These prefixes are unused and the list is autogenerated. Do not add tests below this line: -; RV32IZHINXMIN: {{.*}} -; RV64IZHINXMIN: {{.*}} diff --git a/llvm/test/CodeGen/RISCV/rv64zba.ll b/llvm/test/CodeGen/RISCV/rv64zba.ll index c028d25..7fd7626 100644 --- a/llvm/test/CodeGen/RISCV/rv64zba.ll +++ b/llvm/test/CodeGen/RISCV/rv64zba.ll @@ -409,15 +409,11 @@ define i64 @sh3adduw_2(i64 %0, i64 %1) { ; ; RV64ZBA-LABEL: sh3adduw_2: ; RV64ZBA: # %bb.0: -; RV64ZBA-NEXT: slli a0, a0, 3 -; RV64ZBA-NEXT: srli a0, a0, 3 ; RV64ZBA-NEXT: sh3add.uw a0, a0, a1 ; RV64ZBA-NEXT: ret ; ; RV64XANDESPERF-LABEL: sh3adduw_2: ; RV64XANDESPERF: # %bb.0: -; RV64XANDESPERF-NEXT: slli a0, a0, 3 -; RV64XANDESPERF-NEXT: srli a0, a0, 3 ; RV64XANDESPERF-NEXT: nds.lea.d.ze a0, a1, a0 ; RV64XANDESPERF-NEXT: ret %3 = shl i64 %0, 3 @@ -436,15 +432,11 @@ define i64 @sh3adduw_3(i64 %0, i64 %1) { ; ; RV64ZBA-LABEL: sh3adduw_3: ; RV64ZBA: # %bb.0: -; RV64ZBA-NEXT: slli a0, a0, 3 -; RV64ZBA-NEXT: srli a0, a0, 3 ; RV64ZBA-NEXT: sh3add.uw a0, a0, a1 ; RV64ZBA-NEXT: ret ; ; RV64XANDESPERF-LABEL: sh3adduw_3: ; RV64XANDESPERF: # %bb.0: -; RV64XANDESPERF-NEXT: slli a0, a0, 3 -; RV64XANDESPERF-NEXT: srli a0, a0, 3 ; RV64XANDESPERF-NEXT: nds.lea.d.ze a0, a1, a0 ; RV64XANDESPERF-NEXT: ret %3 = shl i64 %0, 3 @@ -2681,7 +2673,7 @@ define i64 @srliw_3_sh3add(ptr %0, i32 signext %1) { ; RV64ZBA-LABEL: srliw_3_sh3add: ; RV64ZBA: # %bb.0: ; RV64ZBA-NEXT: srliw a1, a1, 3 -; RV64ZBA-NEXT: sh3add.uw a0, a1, a0 +; RV64ZBA-NEXT: sh3add a0, a1, a0 ; RV64ZBA-NEXT: ld a0, 0(a0) ; RV64ZBA-NEXT: ret ; diff --git a/llvm/test/CodeGen/SPIRV/hlsl-resources/test_counters.ll b/llvm/test/CodeGen/SPIRV/hlsl-resources/test_counters.ll new file mode 100644 index 0000000..b178a56 --- /dev/null +++ b/llvm/test/CodeGen/SPIRV/hlsl-resources/test_counters.ll @@ -0,0 +1,65 @@ +; RUN: llc -O0 -verify-machineinstrs -mtriple=spirv-vulkan-library %s -o - | FileCheck %s +; RUN: %if spirv-tools %{ llc -O0 -mtriple=spirv-vulkan-library %s -o - -filetype=obj | spirv-val --target-env vulkan1.3 %} + +; ModuleID = 'test_counters.hlsl' +source_filename = "test_counters.hlsl" + +; CHECK: OpCapability Int8 +; CHECK-DAG: OpName [[OutputBuffer:%[0-9]+]] "OutputBuffer" +; CHECK-DAG: OpName [[InputBuffer:%[0-9]+]] "InputBuffer" +; CHECK-DAG: OpName [[OutputBufferCounter:%[0-9]+]] "OutputBuffer.counter" +; CHECK-DAG: OpName [[InputBufferCounter:%[0-9]+]] "InputBuffer.counter" +; CHECK-DAG: OpDecorate [[OutputBuffer]] DescriptorSet 0 +; CHECK-DAG: OpDecorate [[OutputBuffer]] Binding 10 +; CHECK-DAG: OpDecorate [[OutputBufferCounter]] DescriptorSet 0 +; CHECK-DAG: OpDecorate [[OutputBufferCounter]] Binding 0 +; CHECK-DAG: OpDecorate 
[[InputBuffer]] DescriptorSet 0 +; CHECK-DAG: OpDecorate [[InputBuffer]] Binding 1 +; CHECK-DAG: OpDecorate [[InputBufferCounter]] DescriptorSet 0 +; CHECK-DAG: OpDecorate [[InputBufferCounter]] Binding 2 +; CHECK-DAG: [[int:%[0-9]+]] = OpTypeInt 32 0 +; CHECK-DAG: [[zero:%[0-9]+]] = OpConstant [[int]] 0{{$}} +; CHECK-DAG: [[one:%[0-9]+]] = OpConstant [[int]] 1{{$}} +; CHECK-DAG: [[minus_one:%[0-9]+]] = OpConstant [[int]] 4294967295 +; CHECK: [[OutputBufferHandle:%[0-9]+]] = OpCopyObject {{%[0-9]+}} [[OutputBuffer]] +; CHECK: [[InputBufferHandle:%[0-9]+]] = OpCopyObject {{%[0-9]+}} [[InputBuffer]] +; CHECK: [[InputCounterAC:%[0-9]+]] = OpAccessChain {{%[0-9]+}} [[InputBufferCounter]] [[zero]] +; CHECK: [[dec:%[0-9]+]] = OpAtomicIAdd [[int]] [[InputCounterAC]] [[one]] [[zero]] [[minus_one]] +; CHECK: [[iadd:%[0-9]+]] = OpIAdd [[int]] [[dec]] [[minus_one]] +; CHECK: [[OutputCounterAC:%[0-9]+]] = OpAccessChain {{%[0-9]+}} [[OutputBufferCounter]] [[zero]] +; CHECK: [[inc:%[0-9]+]] = OpAtomicIAdd [[int]] [[OutputCounterAC]] [[one]] [[zero]] [[one]] +; CHECK: [[InputAC:%[0-9]+]] = OpAccessChain {{%[0-9]+}} [[InputBufferHandle]] [[zero]] [[iadd]] +; CHECK: [[load:%[0-9]+]] = OpLoad {{%[0-9]+}} [[InputAC]] +; CHECK: [[OutputAC:%[0-9]+]] = OpAccessChain {{%[0-9]+}} [[OutputBufferHandle]] [[zero]] [[inc]] +; CHECK: OpStore [[OutputAC]] [[load]] + + +target triple = "spirv1.6-unknown-vulkan1.3-compute" + +@.str = private unnamed_addr constant [13 x i8] c"OutputBuffer\00" +@.str.2 = private unnamed_addr constant [12 x i8] c"InputBuffer\00" + +define void @main() #0 { +entry: + %0 = call target("spirv.VulkanBuffer", [0 x float], 12, 1) @llvm.spv.resource.handlefrombinding.tspirv.VulkanBuffer_a0f32_12_1t(i32 0, i32 10, i32 1, i32 0, ptr @.str) + %1 = call target("spirv.VulkanBuffer", i32, 12, 1) @llvm.spv.resource.counterhandlefromimplicitbinding.tspirv.VulkanBuffer_i32_12_1t.tspirv.VulkanBuffer_a0f32_12_1t(target("spirv.VulkanBuffer", [0 x float], 12, 1) %0, i32 0, i32 0) + %2 = call target("spirv.VulkanBuffer", [0 x float], 12, 1) @llvm.spv.resource.handlefromimplicitbinding.tspirv.VulkanBuffer_a0f32_12_1t(i32 1, i32 0, i32 1, i32 0, ptr @.str.2) + %3 = call target("spirv.VulkanBuffer", i32, 12, 1) @llvm.spv.resource.counterhandlefromimplicitbinding.tspirv.VulkanBuffer_i32_12_1t.tspirv.VulkanBuffer_a0f32_12_1t(target("spirv.VulkanBuffer", [0 x float], 12, 1) %2, i32 2, i32 0) + %4 = call i32 @llvm.spv.resource.updatecounter.tspirv.VulkanBuffer_i32_12_1t(target("spirv.VulkanBuffer", i32, 12, 1) %3, i8 -1) + %5 = call i32 @llvm.spv.resource.updatecounter.tspirv.VulkanBuffer_i32_12_1t(target("spirv.VulkanBuffer", i32, 12, 1) %1, i8 1) + %6 = call ptr addrspace(11) @llvm.spv.resource.getpointer.p11.tspirv.VulkanBuffer_a0f32_12_1t(target("spirv.VulkanBuffer", [0 x float], 12, 1) %2, i32 %4) + %7 = load float, ptr addrspace(11) %6 + %8 = call ptr addrspace(11) @llvm.spv.resource.getpointer.p11.tspirv.VulkanBuffer_a0f32_12_1t(target("spirv.VulkanBuffer", [0 x float], 12, 1) %0, i32 %5) + store float %7, ptr addrspace(11) %8 + ret void +} + +declare target("spirv.VulkanBuffer", [0 x float], 12, 1) @llvm.spv.resource.handlefrombinding.tspirv.VulkanBuffer_a0f32_12_1t(i32, i32, i32, i32, ptr) #1 +declare target("spirv.VulkanBuffer", i32, 12, 1) @llvm.spv.resource.counterhandlefromimplicitbinding.tspirv.VulkanBuffer_i32_12_1t.tspirv.VulkanBuffer_a0f32_12_1t(target("spirv.VulkanBuffer", [0 x float], 12, 1), i32, i32) #1 +declare target("spirv.VulkanBuffer", [0 x float], 12, 1) 
@llvm.spv.resource.handlefromimplicitbinding.tspirv.VulkanBuffer_a0f32_12_1t(i32, i32, i32, i32, ptr) #1 +declare i32 @llvm.spv.resource.updatecounter.tspirv.VulkanBuffer_i32_12_1t(target("spirv.VulkanBuffer", i32, 12, 1), i8) #2 +declare ptr addrspace(11) @llvm.spv.resource.getpointer.p11.tspirv.VulkanBuffer_a0f32_12_1t(target("spirv.VulkanBuffer", [0 x float], 12, 1), i32) #1 + +attributes #0 = { "hlsl.shader"="compute" "hlsl.numthreads"="1,1,1" } +attributes #1 = { memory(none) } +attributes #2 = { memory(argmem: readwrite, inaccessiblemem: readwrite) } diff --git a/llvm/test/CodeGen/X86/GlobalISel/legalize-phi.mir b/llvm/test/CodeGen/X86/GlobalISel/legalize-phi.mir index 31de686..92e4588 100644 --- a/llvm/test/CodeGen/X86/GlobalISel/legalize-phi.mir +++ b/llvm/test/CodeGen/X86/GlobalISel/legalize-phi.mir @@ -148,21 +148,21 @@ body: | ; CHECK-NEXT: {{ $}} ; CHECK-NEXT: [[COPY:%[0-9]+]]:_(s32) = COPY $edi ; CHECK-NEXT: [[COPY1:%[0-9]+]]:_(s32) = COPY $esi + ; CHECK-NEXT: [[TRUNC:%[0-9]+]]:_(s1) = G_TRUNC [[COPY1]](s32) ; CHECK-NEXT: [[COPY2:%[0-9]+]]:_(s32) = COPY $edx + ; CHECK-NEXT: [[TRUNC2:%[0-9]+]]:_(s1) = G_TRUNC [[COPY2]](s32) ; CHECK-NEXT: [[C:%[0-9]+]]:_(s32) = G_CONSTANT i32 0 ; CHECK-NEXT: [[ICMP:%[0-9]+]]:_(s8) = G_ICMP intpred(sgt), [[COPY]](s32), [[C]] - ; CHECK-NEXT: [[TRUNC:%[0-9]+]]:_(s1) = G_TRUNC [[ICMP]](s8) - ; CHECK-NEXT: [[TRUNC1:%[0-9]+]]:_(s8) = G_TRUNC [[COPY1]](s32) - ; CHECK-NEXT: G_BRCOND [[TRUNC]](s1), %bb.2 + ; CHECK-NEXT: [[TRUNC1:%[0-9]+]]:_(s1) = G_TRUNC [[ICMP]](s8) + ; CHECK-NEXT: G_BRCOND [[TRUNC1]](s1), %bb.2 ; CHECK-NEXT: {{ $}} ; CHECK-NEXT: bb.1.cond.false: ; CHECK-NEXT: successors: %bb.2(0x80000000) ; CHECK-NEXT: {{ $}} - ; CHECK-NEXT: [[TRUNC2:%[0-9]+]]:_(s8) = G_TRUNC [[COPY2]](s32) - ; CHECK-NEXT: {{ $}} ; CHECK-NEXT: bb.2.cond.end: - ; CHECK-NEXT: [[PHI:%[0-9]+]]:_(s8) = G_PHI [[TRUNC2]](s8), %bb.1, [[TRUNC1]](s8), %bb.0 - ; CHECK-NEXT: $al = COPY [[PHI]](s8) + ; CHECK-NEXT: [[PHI:%[0-9]+]]:_(s1) = G_PHI [[TRUNC2]](s1), %bb.1, [[TRUNC]](s1), %bb.0 + ; CHECK-NEXT: [[EXT:%[0-9]+]]:_(s8) = G_ANYEXT [[PHI]](s1) + ; CHECK-NEXT: $al = COPY [[EXT]](s8) ; CHECK-NEXT: RET 0, implicit $al bb.1.entry: successors: %bb.3(0x40000000), %bb.2(0x40000000) diff --git a/llvm/test/CodeGen/X86/GlobalISel/legalize-undef-vec-scaling.mir b/llvm/test/CodeGen/X86/GlobalISel/legalize-undef-vec-scaling.mir new file mode 100644 index 0000000..b02832b --- /dev/null +++ b/llvm/test/CodeGen/X86/GlobalISel/legalize-undef-vec-scaling.mir @@ -0,0 +1,32 @@ +# RUN: llc -mtriple=x86_64-linux-gnu -mattr=avx2 -run-pass=legalizer -global-isel-abort=2 -pass-remarks-missed='gisel*' %s -o - | FileCheck %s --check-prefixes=CHECK,AVX2 +# RUN: llc -mtriple=x86_64-linux-gnu -mattr=sse2 -run-pass=legalizer -global-isel-abort=2 -pass-remarks-missed='gisel*' %s -o - | FileCheck %s --check-prefixes=CHECK,SSE2 +# RUN: llc -mtriple=x86_64-linux-gnu -mattr=avx512f -run-pass=legalizer -global-isel-abort=2 -pass-remarks-missed='gisel*' %s -o - | FileCheck %s --check-prefixes=CHECK,AVX512F + + +--- +name: test_basic_g_implicit_def_v8i64 +body: | + bb.0: + ; CHECK-LABEL: name: test_basic_g_implicit_def_v8i64 + ; AVX512F: {{%[0-9]+}}:_(<8 x s64>) = G_IMPLICIT_DEF + ; AVX2: [[DEF_AVX2:%[0-9]+]]:_(<4 x s64>) = G_IMPLICIT_DEF + ; AVX2-NEXT: {{%[0-9]+}}:_(<8 x s64>) = G_CONCAT_VECTORS [[DEF_AVX2]](<4 x s64>), [[DEF_AVX2]](<4 x s64>) + ; SSE2: [[DEF_SSE2:%[0-9]+]]:_(<2 x s64>) = G_IMPLICIT_DEF + ; SSE2-NEXT: {{%[0-9]+}}:_(<8 x s64>) = G_CONCAT_VECTORS [[DEF_SSE2]](<2 x s64>), [[DEF_SSE2]](<2 x s64>), 
[[DEF_SSE2]](<2 x s64>), [[DEF_SSE2]](<2 x s64>) + %0:_(<8 x s64>) = G_IMPLICIT_DEF + RET 0, implicit %0 +... + +--- +name: test_g_implicit_def_sample_size +body: | + bb.1: + ; CHECK-LABEL: name: test_g_implicit_def_sample_size + ; AVX512F: {{%[0-9]+}}:_(<8 x s64>) = G_IMPLICIT_DEF + ; AVX2: {{%[0-9]+}}:_(<4 x s64>) = G_IMPLICIT_DEF + ; SSE2: {{%[0-9]+}}:_(<2 x s64>) = G_IMPLICIT_DEF + %0:_(<5 x s63>) = G_IMPLICIT_DEF + RET 0, implicit %0 +... + + diff --git a/llvm/test/CodeGen/X86/GlobalISel/select-constant-fold-barrier-vec256.mir b/llvm/test/CodeGen/X86/GlobalISel/select-constant-fold-barrier-vec256.mir new file mode 100644 index 0000000..254c1b6 --- /dev/null +++ b/llvm/test/CodeGen/X86/GlobalISel/select-constant-fold-barrier-vec256.mir @@ -0,0 +1,23 @@ +# RUN: llc -mtriple=x86_64-linux-gnu -mattr=+avx -run-pass=instruction-select -verify-machineinstrs %s -o - | FileCheck %s + +--- +name: select_cfb_vec256 +legalized: true +regBankSelected: true +registers: + - { id: 0, class: vecr, preferred-register: '', flags: [ ] } + - { id: 1, class: vecr, preferred-register: '', flags: [ ] } +body: | + bb.0: + liveins: $ymm0 + + ; CHECK-LABEL: name: select_cfb_vec256 + ; CHECK: [[COPY:%[0-9]+]]:vr256 = COPY $ymm0 + ; CHECK-NOT: G_CONSTANT_FOLD_BARRIER + ; CHECK-NEXT: $ymm1 = COPY [[COPY]] + ; CHECK-NEXT: RET 0, implicit $ymm1 + %0:vecr(<8 x s32>) = COPY $ymm0 + %1:vecr(<8 x s32>) = G_CONSTANT_FOLD_BARRIER %0 + $ymm1 = COPY %1(<8 x s32>) + RET 0, implicit $ymm1 +... diff --git a/llvm/test/CodeGen/X86/GlobalISel/select-constant-fold-barrier-vec512.mir b/llvm/test/CodeGen/X86/GlobalISel/select-constant-fold-barrier-vec512.mir new file mode 100644 index 0000000..3da354b --- /dev/null +++ b/llvm/test/CodeGen/X86/GlobalISel/select-constant-fold-barrier-vec512.mir @@ -0,0 +1,23 @@ +# RUN: llc -mtriple=x86_64-linux-gnu -mattr=+avx512f -run-pass=instruction-select -verify-machineinstrs %s -o - | FileCheck %s + +--- +name: select_cfb_vec512 +legalized: true +regBankSelected: true +registers: + - { id: 0, class: vecr, preferred-register: '', flags: [ ] } + - { id: 1, class: vecr, preferred-register: '', flags: [ ] } +body: | + bb.0: + liveins: $zmm0 + + ; CHECK-LABEL: name: select_cfb_vec512 + ; CHECK: [[COPY:%[0-9]+]]:vr512 = COPY $zmm0 + ; CHECK-NOT: G_CONSTANT_FOLD_BARRIER + ; CHECK-NEXT: $zmm1 = COPY [[COPY]] + ; CHECK-NEXT: RET 0, implicit $zmm1 + %0:vecr(<8 x s64>) = COPY $zmm0 + %1:vecr(<8 x s64>) = G_CONSTANT_FOLD_BARRIER %0 + $zmm1 = COPY %1(<8 x s64>) + RET 0, implicit $zmm1 +... diff --git a/llvm/test/CodeGen/X86/GlobalISel/select-constant-fold-barrier.mir b/llvm/test/CodeGen/X86/GlobalISel/select-constant-fold-barrier.mir new file mode 100644 index 0000000..fa012f9 --- /dev/null +++ b/llvm/test/CodeGen/X86/GlobalISel/select-constant-fold-barrier.mir @@ -0,0 +1,77 @@ +# RUN: llc -mtriple=x86_64-linux-gnu -run-pass=instruction-select -verify-machineinstrs %s -o - | FileCheck %s + + +--- +name: select_cfb_scalar_s32 +legalized: true +regBankSelected: true +registers: + - { id: 0, class: gpr, preferred-register: '', flags: [ ] } + - { id: 1, class: gpr, preferred-register: '', flags: [ ] } +liveins: +fixedStack: +stack: +constants: +body: | + bb.0: + liveins: $edi + + ; CHECK-LABEL: name: select_cfb_scalar_s32 + ; CHECK: [[COPY:%[0-9]+]]:gr32 = COPY $edi + ; CHECK-NOT: G_CONSTANT_FOLD_BARRIER + ; CHECK-NEXT: $eax = COPY [[COPY]] + ; CHECK-NEXT: RET 0, implicit $eax + %0:gpr(s32) = COPY $edi + %1:gpr(s32) = G_CONSTANT_FOLD_BARRIER %0 + $eax = COPY %1(s32) + RET 0, implicit $eax +...
+ +--- +name: select_cfb_scalar_s64 +legalized: true +regBankSelected: true +registers: + - { id: 0, class: gpr, preferred-register: '', flags: [ ] } + - { id: 1, class: gpr, preferred-register: '', flags: [ ] } +liveins: +fixedStack: +stack: +constants: +body: | + bb.0: + liveins: $rdi + + ; CHECK-LABEL: name: select_cfb_scalar_s64 + ; CHECK: [[COPY:%[0-9]+]]:gr64 = COPY $rdi + ; CHECK-NOT: G_CONSTANT_FOLD_BARRIER + ; CHECK-NEXT: $rax = COPY [[COPY]] + ; CHECK-NEXT: RET 0, implicit $rax + %0:gpr(s64) = COPY $rdi + %1:gpr(s64) = G_CONSTANT_FOLD_BARRIER %0 + $rax = COPY %1(s64) + RET 0, implicit $rax +... + + +--- +name: select_cfb_vec128 +legalized: true +regBankSelected: true +registers: + - { id: 0, class: vecr, preferred-register: '', flags: [ ] } + - { id: 1, class: vecr, preferred-register: '', flags: [ ] } +body: | + bb.0: + liveins: $xmm0 + + ; CHECK-LABEL: name: select_cfb_vec128 + ; CHECK: [[COPY:%[0-9]+]]:vr128 = COPY $xmm0 + ; CHECK-NOT: G_CONSTANT_FOLD_BARRIER + ; CHECK-NEXT: $xmm1 = COPY [[COPY]] + ; CHECK-NEXT: RET 0, implicit $xmm1 + %0:vecr(<4 x s32>) = COPY $xmm0 + %1:vecr(<4 x s32>) = G_CONSTANT_FOLD_BARRIER %0 + $xmm1 = COPY %1(<4 x s32>) + RET 0, implicit $xmm1 +... diff --git a/llvm/test/CodeGen/X86/GlobalISel/select-freeze-vec256.mir b/llvm/test/CodeGen/X86/GlobalISel/select-freeze-vec256.mir new file mode 100644 index 0000000..11251e4 --- /dev/null +++ b/llvm/test/CodeGen/X86/GlobalISel/select-freeze-vec256.mir @@ -0,0 +1,23 @@ +# RUN: llc -mtriple=x86_64-linux-gnu -mattr=+avx -run-pass=instruction-select -verify-machineinstrs %s -o - | FileCheck %s + +--- +name: select_freeze_vec256 +legalized: true +regBankSelected: true +registers: + - { id: 0, class: vecr, preferred-register: '', flags: [ ] } + - { id: 1, class: vecr, preferred-register: '', flags: [ ] } +body: | + bb.0: + liveins: $ymm0 + + ; CHECK-LABEL: name: select_freeze_vec256 + ; CHECK: [[COPY:%[0-9]+]]:vr256 = COPY $ymm0 + ; CHECK-NOT: G_FREEZE + ; CHECK-NEXT: $ymm1 = COPY [[COPY]] + ; CHECK-NEXT: RET 0, implicit $ymm1 + %0:vecr(<8 x s32>) = COPY $ymm0 + %1:vecr(<8 x s32>) = G_FREEZE %0 + $ymm1 = COPY %1(<8 x s32>) + RET 0, implicit $ymm1 +... diff --git a/llvm/test/CodeGen/X86/GlobalISel/select-freeze-vec512.mir b/llvm/test/CodeGen/X86/GlobalISel/select-freeze-vec512.mir new file mode 100644 index 0000000..bcf299a --- /dev/null +++ b/llvm/test/CodeGen/X86/GlobalISel/select-freeze-vec512.mir @@ -0,0 +1,23 @@ +# RUN: llc -mtriple=x86_64-linux-gnu -mattr=+avx512f -run-pass=instruction-select -verify-machineinstrs %s -o - | FileCheck %s + +--- +name: select_freeze_vec512 +legalized: true +regBankSelected: true +registers: + - { id: 0, class: vecr, preferred-register: '', flags: [ ] } + - { id: 1, class: vecr, preferred-register: '', flags: [ ] } +body: | + bb.0: + liveins: $zmm0 + + ; CHECK-LABEL: name: select_freeze_vec512 + ; CHECK: [[COPY:%[0-9]+]]:vr512 = COPY $zmm0 + ; CHECK-NOT: G_FREEZE + ; CHECK-NEXT: $zmm1 = COPY [[COPY]] + ; CHECK-NEXT: RET 0, implicit $zmm1 + %0:vecr(<8 x s64>) = COPY $zmm0 + %1:vecr(<8 x s64>) = G_FREEZE %0 + $zmm1 = COPY %1(<8 x s64>) + RET 0, implicit $zmm1 +... 
diff --git a/llvm/test/CodeGen/X86/GlobalISel/select-freeze.mir b/llvm/test/CodeGen/X86/GlobalISel/select-freeze.mir new file mode 100644 index 0000000..cf5ad47 --- /dev/null +++ b/llvm/test/CodeGen/X86/GlobalISel/select-freeze.mir @@ -0,0 +1,77 @@ +# RUN: llc -mtriple=x86_64-linux-gnu -run-pass=instruction-select -verify-machineinstrs %s -o - | FileCheck %s + + +--- +name: select_freeze_scalar_s32 +legalized: true +regBankSelected: true +registers: + - { id: 0, class: gpr, preferred-register: '', flags: [ ] } + - { id: 1, class: gpr, preferred-register: '', flags: [ ] } +liveins: +fixedStack: +stack: +constants: +body: | + bb.0: + liveins: $edi + + ; CHECK-LABEL: name: select_freeze_scalar_s32 + ; CHECK: [[COPY:%[0-9]+]]:gr32 = COPY $edi + ; CHECK-NOT: G_FREEZE + ; CHECK-NEXT: $eax = COPY [[COPY]] + ; CHECK-NEXT: RET 0, implicit $eax + %0:gpr(s32) = COPY $edi + %1:gpr(s32) = G_FREEZE %0 + $eax = COPY %1(s32) + RET 0, implicit $eax +... + +--- +name: select_freeze_scalar_s64 +legalized: true +regBankSelected: true +registers: + - { id: 0, class: gpr, preferred-register: '', flags: [ ] } + - { id: 1, class: gpr, preferred-register: '', flags: [ ] } +liveins: +fixedStack: +stack: +constants: +body: | + bb.0: + liveins: $rdi + + ; CHECK-LABEL: name: select_freeze_scalar_s64 + ; CHECK: [[COPY:%[0-9]+]]:gr64 = COPY $rdi + ; CHECK-NOT: G_FREEZE + ; CHECK-NEXT: $rax = COPY [[COPY]] + ; CHECK-NEXT: RET 0, implicit $rax + %0:gpr(s64) = COPY $rdi + %1:gpr(s64) = G_FREEZE %0 + $rax = COPY %1(s64) + RET 0, implicit $rax +... + + +--- +name: select_freeze_vec128 +legalized: true +regBankSelected: true +registers: + - { id: 0, class: vecr, preferred-register: '', flags: [ ] } + - { id: 1, class: vecr, preferred-register: '', flags: [ ] } +body: | + bb.0: + liveins: $xmm0 + + ; CHECK-LABEL: name: select_freeze_vec128 + ; CHECK: [[COPY:%[0-9]+]]:vr128 = COPY $xmm0 + ; CHECK-NOT: G_FREEZE + ; CHECK-NEXT: $xmm1 = COPY [[COPY]] + ; CHECK-NEXT: RET 0, implicit $xmm1 + %0:vecr(<4 x s32>) = COPY $xmm0 + %1:vecr(<4 x s32>) = G_FREEZE %0 + $xmm1 = COPY %1(<4 x s32>) + RET 0, implicit $xmm1 +... 
diff --git a/llvm/test/CodeGen/X86/x86-shrink-wrap-unwind.ll b/llvm/test/CodeGen/X86/x86-shrink-wrap-unwind.ll index 3349d31..b2064b1 100644 --- a/llvm/test/CodeGen/X86/x86-shrink-wrap-unwind.ll +++ b/llvm/test/CodeGen/X86/x86-shrink-wrap-unwind.ll @@ -317,13 +317,13 @@ define void @with_nounwind(i1 %cond) nounwind personality ptr @my_personality { ; CHECK-NEXT: popq %rax ; CHECK-NEXT: retq ; CHECK-NEXT: LBB4_1: ## %throw -; CHECK-NEXT: Ltmp0: +; CHECK-NEXT: Ltmp0: ## EH_LABEL ; CHECK-NEXT: callq _throw_exception -; CHECK-NEXT: Ltmp1: +; CHECK-NEXT: Ltmp1: ## EH_LABEL ; CHECK-NEXT: ## %bb.2: ## %unreachable ; CHECK-NEXT: ud2 ; CHECK-NEXT: LBB4_3: ## %landing -; CHECK-NEXT: Ltmp2: +; CHECK-NEXT: Ltmp2: ## EH_LABEL ; CHECK-NEXT: popq %rax ; CHECK-NEXT: retq ; CHECK-NEXT: Lfunc_end0: @@ -340,12 +340,12 @@ define void @with_nounwind(i1 %cond) nounwind personality ptr @my_personality { ; NOCOMPACTUNWIND-NEXT: retq ; NOCOMPACTUNWIND-NEXT: .LBB4_1: # %throw ; NOCOMPACTUNWIND-NEXT: .cfi_def_cfa_offset 16 -; NOCOMPACTUNWIND-NEXT: .Ltmp0: +; NOCOMPACTUNWIND-NEXT: .Ltmp0: # EH_LABEL ; NOCOMPACTUNWIND-NEXT: callq throw_exception@PLT -; NOCOMPACTUNWIND-NEXT: .Ltmp1: +; NOCOMPACTUNWIND-NEXT: .Ltmp1: # EH_LABEL ; NOCOMPACTUNWIND-NEXT: # %bb.2: # %unreachable ; NOCOMPACTUNWIND-NEXT: .LBB4_3: # %landing -; NOCOMPACTUNWIND-NEXT: .Ltmp2: +; NOCOMPACTUNWIND-NEXT: .Ltmp2: # EH_LABEL ; NOCOMPACTUNWIND-NEXT: popq %rax ; NOCOMPACTUNWIND-NEXT: .cfi_def_cfa_offset 8 ; NOCOMPACTUNWIND-NEXT: retq @@ -379,9 +379,9 @@ define void @with_nounwind_same_succ(i1 %cond) nounwind personality ptr @my_pers ; CHECK-NEXT: ## %bb.1: ## %throw ; CHECK-NEXT: pushq %rax ; CHECK-NEXT: .cfi_def_cfa_offset 16 -; CHECK-NEXT: Ltmp3: +; CHECK-NEXT: Ltmp3: ## EH_LABEL ; CHECK-NEXT: callq _throw_exception -; CHECK-NEXT: Ltmp4: +; CHECK-NEXT: Ltmp4: ## EH_LABEL ; CHECK-NEXT: LBB5_3: ## %fallthrough ; CHECK-NEXT: ## InlineAsm Start ; CHECK-NEXT: nop @@ -390,7 +390,7 @@ define void @with_nounwind_same_succ(i1 %cond) nounwind personality ptr @my_pers ; CHECK-NEXT: LBB5_4: ## %return ; CHECK-NEXT: retq ; CHECK-NEXT: LBB5_2: ## %landing -; CHECK-NEXT: Ltmp5: +; CHECK-NEXT: Ltmp5: ## EH_LABEL ; CHECK-NEXT: jmp LBB5_3 ; CHECK-NEXT: Lfunc_end1: ; @@ -401,9 +401,9 @@ define void @with_nounwind_same_succ(i1 %cond) nounwind personality ptr @my_pers ; NOCOMPACTUNWIND-NEXT: # %bb.1: # %throw ; NOCOMPACTUNWIND-NEXT: pushq %rax ; NOCOMPACTUNWIND-NEXT: .cfi_def_cfa_offset 16 -; NOCOMPACTUNWIND-NEXT: .Ltmp3: +; NOCOMPACTUNWIND-NEXT: .Ltmp3: # EH_LABEL ; NOCOMPACTUNWIND-NEXT: callq throw_exception@PLT -; NOCOMPACTUNWIND-NEXT: .Ltmp4: +; NOCOMPACTUNWIND-NEXT: .Ltmp4: # EH_LABEL ; NOCOMPACTUNWIND-NEXT: .LBB5_3: # %fallthrough ; NOCOMPACTUNWIND-NEXT: #APP ; NOCOMPACTUNWIND-NEXT: nop @@ -414,7 +414,7 @@ define void @with_nounwind_same_succ(i1 %cond) nounwind personality ptr @my_pers ; NOCOMPACTUNWIND-NEXT: retq ; NOCOMPACTUNWIND-NEXT: .LBB5_2: # %landing ; NOCOMPACTUNWIND-NEXT: .cfi_def_cfa_offset 16 -; NOCOMPACTUNWIND-NEXT: .Ltmp5: +; NOCOMPACTUNWIND-NEXT: .Ltmp5: # EH_LABEL ; NOCOMPACTUNWIND-NEXT: jmp .LBB5_3 entry: br i1 %cond, label %throw, label %return diff --git a/llvm/test/DebugInfo/dwarf-complex-int.ll b/llvm/test/DebugInfo/dwarf-complex-int.ll new file mode 100644 index 0000000..effd0ec --- /dev/null +++ b/llvm/test/DebugInfo/dwarf-complex-int.ll @@ -0,0 +1,59 @@ +; REQUIRES: object-emission +; RUN: %llc_dwarf %s -filetype=obj -o - | llvm-dwarfdump - | FileCheck %s + +;; https://github.com/llvm/llvm-project/issues/140362 +;; Don't assert when emitting a 
complex integer type in DWARF. + +;; C source: +;; int g; +;; +;; void foo(_Complex short c) { __builtin_memmove(&g, (char *)&c, 2); } +;; +;; void bar() { foo(0); } + +; CHECK: DW_AT_type ([[complex:0x[0-9a-f]+]] "complex") + +; CHECK: [[complex]]: DW_TAG_base_type +; CHECK-NEXT: DW_AT_name ("complex") +; CHECK-NEXT: DW_AT_encoding (0x80) +; CHECK-NEXT: DW_AT_byte_size (0x04) + +@g = dso_local local_unnamed_addr global i32 0, align 4, !dbg !0 + +define dso_local void @bar() local_unnamed_addr !dbg !18 { +entry: + #dbg_value(i32 0, !21, !DIExpression(), !27) + store i16 0, ptr @g, align 4, !dbg !29 + ret void, !dbg !30 +} + +!llvm.dbg.cu = !{!2} +!llvm.module.flags = !{!10, !11} +!llvm.ident = !{!17} + +!0 = !DIGlobalVariableExpression(var: !1, expr: !DIExpression()) +!1 = distinct !DIGlobalVariable(name: "g", scope: !2, file: !8, line: 1, type: !9, isLocal: false, isDefinition: true) +!2 = distinct !DICompileUnit(language: DW_LANG_C_plus_plus_14, file: !3, producer: "clang version 22.0.0git", isOptimized: true, runtimeVersion: 0, emissionKind: FullDebug, retainedTypes: !4, globals: !7, splitDebugInlining: false, nameTableKind: None) +!3 = !DIFile(filename: "/app/example.cpp", directory: "/app") +!4 = !{!5} +!5 = !DIDerivedType(tag: DW_TAG_pointer_type, baseType: !6, size: 64) +!6 = !DIBasicType(name: "char", size: 8, encoding: DW_ATE_signed_char) +!7 = !{!0} +!8 = !DIFile(filename: "example.cpp", directory: "/app") +!9 = !DIBasicType(name: "int", size: 32, encoding: DW_ATE_signed) +!10 = !{i32 7, !"Dwarf Version", i32 5} +!11 = !{i32 2, !"Debug Info Version", i32 3} +!17 = !{!"clang version 22.0.0git"} +!18 = distinct !DISubprogram(name: "bar", linkageName: "bar()", scope: !8, file: !8, line: 5, type: !19, scopeLine: 5, flags: DIFlagPrototyped | DIFlagAllCallsDescribed, spFlags: DISPFlagDefinition | DISPFlagOptimized, unit: !2, keyInstructions: true) +!19 = !DISubroutineType(types: !20) +!20 = !{null} +!21 = !DILocalVariable(name: "c", arg: 1, scope: !22, file: !8, line: 3, type: !25) +!22 = distinct !DISubprogram(name: "foo", linkageName: "_ZL3fooCs", scope: !8, file: !8, line: 3, type: !23, scopeLine: 3, flags: DIFlagPrototyped | DIFlagAllCallsDescribed, spFlags: DISPFlagLocalToUnit | DISPFlagDefinition | DISPFlagOptimized, unit: !2, retainedNodes: !26, keyInstructions: true) +!23 = !DISubroutineType(types: !24) +!24 = !{null, !25} +!25 = !DIBasicType(name: "complex", size: 32, encoding: 128) +!26 = !{!21} +!27 = !DILocation(line: 0, scope: !22, inlinedAt: !28) +!28 = distinct !DILocation(line: 5, column: 14, scope: !18) +!29 = !DILocation(line: 3, column: 37, scope: !22, inlinedAt: !28, atomGroup: 1, atomRank: 1) +!30 = !DILocation(line: 5, column: 22, scope: !18, atomGroup: 1, atomRank: 1) diff --git a/llvm/test/Instrumentation/AddressSanitizer/RISCV/asan-rvv-intrinsics.ll b/llvm/test/Instrumentation/AddressSanitizer/RISCV/asan-rvv-intrinsics.ll index 919f16b..4b50094 100644 --- a/llvm/test/Instrumentation/AddressSanitizer/RISCV/asan-rvv-intrinsics.ll +++ b/llvm/test/Instrumentation/AddressSanitizer/RISCV/asan-rvv-intrinsics.ll @@ -180,7 +180,29 @@ define <vscale x 1 x i32> @test_vlseg2_nxv1i32(ptr %base, i64 %vl) sanitize_addr ; CHECK-LABEL: @test_vlseg2_nxv1i32( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: [[TMP24:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 2) @llvm.riscv.vlseg2.triscv.vector.tuple_nxv4i8_2t.p0.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) poison, ptr 
[[BASE:%.*]], i64 [[VL:%.*]], i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP1]], label [[TMP2:%.*]], label [[TMP10:%.*]] +; CHECK: 2: +; CHECK-NEXT: [[TMP3:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP4:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP3]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP2]] ], [ [[IV_NEXT:%.*]], [[TMP9:%.*]] ] +; CHECK-NEXT: [[TMP5:%.*]] = extractelement <vscale x 1 x i1> splat (i1 true), i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP5]], label [[TMP6:%.*]], label [[TMP9]] +; CHECK: 6: +; CHECK-NEXT: [[TMP7:%.*]] = getelementptr <vscale x 1 x i64>, ptr [[BASE:%.*]], i64 0, i64 [[IV]] +; CHECK-NEXT: [[TMP8:%.*]] = ptrtoint ptr [[TMP7]] to i64 +; CHECK-NEXT: call void @__asan_loadN(i64 [[TMP8]], i64 8) +; CHECK-NEXT: br label [[TMP9]] +; CHECK: 9: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP4]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP10]] +; CHECK: 10: +; CHECK-NEXT: [[TMP24:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 2) @llvm.riscv.vlseg2.triscv.vector.tuple_nxv4i8_2t.p0.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) poison, ptr [[BASE]], i64 [[VL]], i64 5) ; CHECK-NEXT: [[TMP25:%.*]] = call <vscale x 1 x i32> @llvm.riscv.tuple.extract.nxv1i32.triscv.vector.tuple_nxv4i8_2t(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) [[TMP24]], i32 1) ; CHECK-NEXT: ret <vscale x 1 x i32> [[TMP25]] ; @@ -194,7 +216,29 @@ define <vscale x 1 x i32> @test_vlseg2_mask_nxv1i32(ptr %base, i64 %vl, <vscale ; CHECK-LABEL: @test_vlseg2_mask_nxv1i32( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: [[TMP24:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 2) @llvm.riscv.vlseg2.mask.triscv.vector.tuple_nxv4i8_2t.p0.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) poison, ptr [[BASE:%.*]], <vscale x 1 x i1> [[MASK:%.*]], i64 [[VL:%.*]], i64 1, i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP1]], label [[TMP2:%.*]], label [[TMP10:%.*]] +; CHECK: 2: +; CHECK-NEXT: [[TMP3:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP4:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP3]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP2]] ], [ [[IV_NEXT:%.*]], [[TMP9:%.*]] ] +; CHECK-NEXT: [[TMP5:%.*]] = extractelement <vscale x 1 x i1> [[MASK:%.*]], i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP5]], label [[TMP6:%.*]], label [[TMP9]] +; CHECK: 6: +; CHECK-NEXT: [[TMP7:%.*]] = getelementptr <vscale x 1 x i64>, ptr [[BASE:%.*]], i64 0, i64 [[IV]] +; CHECK-NEXT: [[TMP8:%.*]] = ptrtoint ptr [[TMP7]] to i64 +; CHECK-NEXT: call void @__asan_loadN(i64 [[TMP8]], i64 8) +; CHECK-NEXT: br label [[TMP9]] +; CHECK: 9: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP4]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP10]] +; CHECK: 10: +; CHECK-NEXT: [[TMP24:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 2) @llvm.riscv.vlseg2.mask.triscv.vector.tuple_nxv4i8_2t.p0.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) 
poison, ptr [[BASE]], <vscale x 1 x i1> [[MASK]], i64 [[VL]], i64 1, i64 5) ; CHECK-NEXT: [[TMP25:%.*]] = call <vscale x 1 x i32> @llvm.riscv.tuple.extract.nxv1i32.triscv.vector.tuple_nxv4i8_2t(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) [[TMP24]], i32 1) ; CHECK-NEXT: ret <vscale x 1 x i32> [[TMP25]] ; @@ -212,7 +256,29 @@ define <vscale x 1 x i32> @test_vlseg3_nxv1i32(ptr %base, i64 %vl) sanitize_addr ; CHECK-LABEL: @test_vlseg3_nxv1i32( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: [[TMP36:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 3) @llvm.riscv.vlseg3.triscv.vector.tuple_nxv4i8_3t.p0.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) poison, ptr [[BASE:%.*]], i64 [[VL:%.*]], i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP1]], label [[TMP2:%.*]], label [[TMP10:%.*]] +; CHECK: 2: +; CHECK-NEXT: [[TMP3:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP4:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP3]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP2]] ], [ [[IV_NEXT:%.*]], [[TMP9:%.*]] ] +; CHECK-NEXT: [[TMP5:%.*]] = extractelement <vscale x 1 x i1> splat (i1 true), i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP5]], label [[TMP6:%.*]], label [[TMP9]] +; CHECK: 6: +; CHECK-NEXT: [[TMP7:%.*]] = getelementptr <vscale x 1 x i96>, ptr [[BASE:%.*]], i64 0, i64 [[IV]] +; CHECK-NEXT: [[TMP8:%.*]] = ptrtoint ptr [[TMP7]] to i64 +; CHECK-NEXT: call void @__asan_loadN(i64 [[TMP8]], i64 12) +; CHECK-NEXT: br label [[TMP9]] +; CHECK: 9: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP4]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP10]] +; CHECK: 10: +; CHECK-NEXT: [[TMP36:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 3) @llvm.riscv.vlseg3.triscv.vector.tuple_nxv4i8_3t.p0.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) poison, ptr [[BASE]], i64 [[VL]], i64 5) ; CHECK-NEXT: [[TMP37:%.*]] = call <vscale x 1 x i32> @llvm.riscv.tuple.extract.nxv1i32.triscv.vector.tuple_nxv4i8_3t(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) [[TMP36]], i32 1) ; CHECK-NEXT: ret <vscale x 1 x i32> [[TMP37]] ; @@ -226,7 +292,29 @@ define <vscale x 1 x i32> @test_vlseg3_mask_nxv1i32(ptr %base, i64 %vl, <vscale ; CHECK-LABEL: @test_vlseg3_mask_nxv1i32( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: [[TMP36:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 3) @llvm.riscv.vlseg3.mask.triscv.vector.tuple_nxv4i8_3t.p0.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) poison, ptr [[BASE:%.*]], <vscale x 1 x i1> [[MASK:%.*]], i64 [[VL:%.*]], i64 1, i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP1]], label [[TMP2:%.*]], label [[TMP10:%.*]] +; CHECK: 2: +; CHECK-NEXT: [[TMP3:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP4:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP3]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP2]] ], [ [[IV_NEXT:%.*]], [[TMP9:%.*]] ] +; CHECK-NEXT: [[TMP5:%.*]] = extractelement <vscale x 1 x i1> [[MASK:%.*]], i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP5]], label [[TMP6:%.*]], label 
[[TMP9]] +; CHECK: 6: +; CHECK-NEXT: [[TMP7:%.*]] = getelementptr <vscale x 1 x i96>, ptr [[BASE:%.*]], i64 0, i64 [[IV]] +; CHECK-NEXT: [[TMP8:%.*]] = ptrtoint ptr [[TMP7]] to i64 +; CHECK-NEXT: call void @__asan_loadN(i64 [[TMP8]], i64 12) +; CHECK-NEXT: br label [[TMP9]] +; CHECK: 9: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP4]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP10]] +; CHECK: 10: +; CHECK-NEXT: [[TMP36:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 3) @llvm.riscv.vlseg3.mask.triscv.vector.tuple_nxv4i8_3t.p0.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) poison, ptr [[BASE]], <vscale x 1 x i1> [[MASK]], i64 [[VL]], i64 1, i64 5) ; CHECK-NEXT: [[TMP37:%.*]] = call <vscale x 1 x i32> @llvm.riscv.tuple.extract.nxv1i32.triscv.vector.tuple_nxv4i8_3t(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) [[TMP36]], i32 1) ; CHECK-NEXT: ret <vscale x 1 x i32> [[TMP37]] ; @@ -244,7 +332,29 @@ define <vscale x 1 x i32> @test_vlseg4_nxv1i32(ptr %base, i64 %vl) sanitize_addr ; CHECK-LABEL: @test_vlseg4_nxv1i32( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: [[TMP48:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 4) @llvm.riscv.vlseg4.triscv.vector.tuple_nxv4i8_4t.p0.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) poison, ptr [[BASE:%.*]], i64 [[VL:%.*]], i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP1]], label [[TMP2:%.*]], label [[TMP10:%.*]] +; CHECK: 2: +; CHECK-NEXT: [[TMP3:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP4:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP3]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP2]] ], [ [[IV_NEXT:%.*]], [[TMP9:%.*]] ] +; CHECK-NEXT: [[TMP5:%.*]] = extractelement <vscale x 1 x i1> splat (i1 true), i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP5]], label [[TMP6:%.*]], label [[TMP9]] +; CHECK: 6: +; CHECK-NEXT: [[TMP7:%.*]] = getelementptr <vscale x 1 x i128>, ptr [[BASE:%.*]], i64 0, i64 [[IV]] +; CHECK-NEXT: [[TMP8:%.*]] = ptrtoint ptr [[TMP7]] to i64 +; CHECK-NEXT: call void @__asan_loadN(i64 [[TMP8]], i64 16) +; CHECK-NEXT: br label [[TMP9]] +; CHECK: 9: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP4]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP10]] +; CHECK: 10: +; CHECK-NEXT: [[TMP48:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 4) @llvm.riscv.vlseg4.triscv.vector.tuple_nxv4i8_4t.p0.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) poison, ptr [[BASE]], i64 [[VL]], i64 5) ; CHECK-NEXT: [[TMP49:%.*]] = call <vscale x 1 x i32> @llvm.riscv.tuple.extract.nxv1i32.triscv.vector.tuple_nxv4i8_4t(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) [[TMP48]], i32 1) ; CHECK-NEXT: ret <vscale x 1 x i32> [[TMP49]] ; @@ -258,7 +368,29 @@ define <vscale x 1 x i32> @test_vlseg4_mask_nxv1i32(ptr %base, i64 %vl, <vscale ; CHECK-LABEL: @test_vlseg4_mask_nxv1i32( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: [[TMP48:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 4) 
@llvm.riscv.vlseg4.mask.triscv.vector.tuple_nxv4i8_4t.p0.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) poison, ptr [[BASE:%.*]], <vscale x 1 x i1> [[MASK:%.*]], i64 [[VL:%.*]], i64 1, i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP1]], label [[TMP2:%.*]], label [[TMP10:%.*]] +; CHECK: 2: +; CHECK-NEXT: [[TMP3:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP4:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP3]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP2]] ], [ [[IV_NEXT:%.*]], [[TMP9:%.*]] ] +; CHECK-NEXT: [[TMP5:%.*]] = extractelement <vscale x 1 x i1> [[MASK:%.*]], i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP5]], label [[TMP6:%.*]], label [[TMP9]] +; CHECK: 6: +; CHECK-NEXT: [[TMP7:%.*]] = getelementptr <vscale x 1 x i128>, ptr [[BASE:%.*]], i64 0, i64 [[IV]] +; CHECK-NEXT: [[TMP8:%.*]] = ptrtoint ptr [[TMP7]] to i64 +; CHECK-NEXT: call void @__asan_loadN(i64 [[TMP8]], i64 16) +; CHECK-NEXT: br label [[TMP9]] +; CHECK: 9: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP4]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP10]] +; CHECK: 10: +; CHECK-NEXT: [[TMP48:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 4) @llvm.riscv.vlseg4.mask.triscv.vector.tuple_nxv4i8_4t.p0.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) poison, ptr [[BASE]], <vscale x 1 x i1> [[MASK]], i64 [[VL]], i64 1, i64 5) ; CHECK-NEXT: [[TMP49:%.*]] = call <vscale x 1 x i32> @llvm.riscv.tuple.extract.nxv1i32.triscv.vector.tuple_nxv4i8_4t(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) [[TMP48]], i32 1) ; CHECK-NEXT: ret <vscale x 1 x i32> [[TMP49]] ; @@ -276,7 +408,29 @@ define <vscale x 1 x i32> @test_vlseg5_nxv1i32(ptr %base, i64 %vl) sanitize_addr ; CHECK-LABEL: @test_vlseg5_nxv1i32( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: [[TMP60:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 5) @llvm.riscv.vlseg5.triscv.vector.tuple_nxv4i8_5t.p0.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) poison, ptr [[BASE:%.*]], i64 [[VL:%.*]], i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP1]], label [[TMP2:%.*]], label [[TMP10:%.*]] +; CHECK: 2: +; CHECK-NEXT: [[TMP3:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP4:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP3]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP2]] ], [ [[IV_NEXT:%.*]], [[TMP9:%.*]] ] +; CHECK-NEXT: [[TMP5:%.*]] = extractelement <vscale x 1 x i1> splat (i1 true), i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP5]], label [[TMP6:%.*]], label [[TMP9]] +; CHECK: 6: +; CHECK-NEXT: [[TMP7:%.*]] = getelementptr <vscale x 1 x i160>, ptr [[BASE:%.*]], i64 0, i64 [[IV]] +; CHECK-NEXT: [[TMP8:%.*]] = ptrtoint ptr [[TMP7]] to i64 +; CHECK-NEXT: call void @__asan_loadN(i64 [[TMP8]], i64 20) +; CHECK-NEXT: br label [[TMP9]] +; CHECK: 9: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP4]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP10]] +; CHECK: 10: +; CHECK-NEXT: [[TMP60:%.*]] = tail call 
target("riscv.vector.tuple", <vscale x 4 x i8>, 5) @llvm.riscv.vlseg5.triscv.vector.tuple_nxv4i8_5t.p0.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) poison, ptr [[BASE]], i64 [[VL]], i64 5) ; CHECK-NEXT: [[TMP61:%.*]] = call <vscale x 1 x i32> @llvm.riscv.tuple.extract.nxv1i32.triscv.vector.tuple_nxv4i8_5t(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) [[TMP60]], i32 1) ; CHECK-NEXT: ret <vscale x 1 x i32> [[TMP61]] ; @@ -290,7 +444,29 @@ define <vscale x 1 x i32> @test_vlseg5_mask_nxv1i32(ptr %base, i64 %vl, <vscale ; CHECK-LABEL: @test_vlseg5_mask_nxv1i32( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: [[TMP60:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 5) @llvm.riscv.vlseg5.mask.triscv.vector.tuple_nxv4i8_5t.p0.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) poison, ptr [[BASE:%.*]], <vscale x 1 x i1> [[MASK:%.*]], i64 [[VL:%.*]], i64 1, i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP1]], label [[TMP2:%.*]], label [[TMP10:%.*]] +; CHECK: 2: +; CHECK-NEXT: [[TMP3:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP4:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP3]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP2]] ], [ [[IV_NEXT:%.*]], [[TMP9:%.*]] ] +; CHECK-NEXT: [[TMP5:%.*]] = extractelement <vscale x 1 x i1> [[MASK:%.*]], i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP5]], label [[TMP6:%.*]], label [[TMP9]] +; CHECK: 6: +; CHECK-NEXT: [[TMP7:%.*]] = getelementptr <vscale x 1 x i160>, ptr [[BASE:%.*]], i64 0, i64 [[IV]] +; CHECK-NEXT: [[TMP8:%.*]] = ptrtoint ptr [[TMP7]] to i64 +; CHECK-NEXT: call void @__asan_loadN(i64 [[TMP8]], i64 20) +; CHECK-NEXT: br label [[TMP9]] +; CHECK: 9: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP4]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP10]] +; CHECK: 10: +; CHECK-NEXT: [[TMP60:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 5) @llvm.riscv.vlseg5.mask.triscv.vector.tuple_nxv4i8_5t.p0.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) poison, ptr [[BASE]], <vscale x 1 x i1> [[MASK]], i64 [[VL]], i64 1, i64 5) ; CHECK-NEXT: [[TMP61:%.*]] = call <vscale x 1 x i32> @llvm.riscv.tuple.extract.nxv1i32.triscv.vector.tuple_nxv4i8_5t(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) [[TMP60]], i32 1) ; CHECK-NEXT: ret <vscale x 1 x i32> [[TMP61]] ; @@ -308,7 +484,29 @@ define <vscale x 1 x i32> @test_vlseg6_nxv1i32(ptr %base, i64 %vl) sanitize_addr ; CHECK-LABEL: @test_vlseg6_nxv1i32( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: [[TMP72:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 6) @llvm.riscv.vlseg6.triscv.vector.tuple_nxv4i8_6t.p0.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) poison, ptr [[BASE:%.*]], i64 [[VL:%.*]], i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP1]], label [[TMP2:%.*]], label [[TMP10:%.*]] +; CHECK: 2: +; CHECK-NEXT: [[TMP3:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP4:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP3]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP2]] ], [ [[IV_NEXT:%.*]], 
[[TMP9:%.*]] ] +; CHECK-NEXT: [[TMP5:%.*]] = extractelement <vscale x 1 x i1> splat (i1 true), i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP5]], label [[TMP6:%.*]], label [[TMP9]] +; CHECK: 6: +; CHECK-NEXT: [[TMP7:%.*]] = getelementptr <vscale x 1 x i192>, ptr [[BASE:%.*]], i64 0, i64 [[IV]] +; CHECK-NEXT: [[TMP8:%.*]] = ptrtoint ptr [[TMP7]] to i64 +; CHECK-NEXT: call void @__asan_loadN(i64 [[TMP8]], i64 24) +; CHECK-NEXT: br label [[TMP9]] +; CHECK: 9: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP4]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP10]] +; CHECK: 10: +; CHECK-NEXT: [[TMP72:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 6) @llvm.riscv.vlseg6.triscv.vector.tuple_nxv4i8_6t.p0.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) poison, ptr [[BASE]], i64 [[VL]], i64 5) ; CHECK-NEXT: [[TMP73:%.*]] = call <vscale x 1 x i32> @llvm.riscv.tuple.extract.nxv1i32.triscv.vector.tuple_nxv4i8_6t(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) [[TMP72]], i32 1) ; CHECK-NEXT: ret <vscale x 1 x i32> [[TMP73]] ; @@ -322,7 +520,29 @@ define <vscale x 1 x i32> @test_vlseg6_mask_nxv1i32(ptr %base, i64 %vl, <vscale ; CHECK-LABEL: @test_vlseg6_mask_nxv1i32( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: [[TMP72:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 6) @llvm.riscv.vlseg6.mask.triscv.vector.tuple_nxv4i8_6t.p0.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) poison, ptr [[BASE:%.*]], <vscale x 1 x i1> [[MASK:%.*]], i64 [[VL:%.*]], i64 1, i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP1]], label [[TMP2:%.*]], label [[TMP10:%.*]] +; CHECK: 2: +; CHECK-NEXT: [[TMP3:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP4:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP3]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP2]] ], [ [[IV_NEXT:%.*]], [[TMP9:%.*]] ] +; CHECK-NEXT: [[TMP5:%.*]] = extractelement <vscale x 1 x i1> [[MASK:%.*]], i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP5]], label [[TMP6:%.*]], label [[TMP9]] +; CHECK: 6: +; CHECK-NEXT: [[TMP7:%.*]] = getelementptr <vscale x 1 x i192>, ptr [[BASE:%.*]], i64 0, i64 [[IV]] +; CHECK-NEXT: [[TMP8:%.*]] = ptrtoint ptr [[TMP7]] to i64 +; CHECK-NEXT: call void @__asan_loadN(i64 [[TMP8]], i64 24) +; CHECK-NEXT: br label [[TMP9]] +; CHECK: 9: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP4]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP10]] +; CHECK: 10: +; CHECK-NEXT: [[TMP72:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 6) @llvm.riscv.vlseg6.mask.triscv.vector.tuple_nxv4i8_6t.p0.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) poison, ptr [[BASE]], <vscale x 1 x i1> [[MASK]], i64 [[VL]], i64 1, i64 5) ; CHECK-NEXT: [[TMP73:%.*]] = call <vscale x 1 x i32> @llvm.riscv.tuple.extract.nxv1i32.triscv.vector.tuple_nxv4i8_6t(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) [[TMP72]], i32 1) ; CHECK-NEXT: ret <vscale x 1 x i32> [[TMP73]] ; @@ -340,7 +560,29 @@ define <vscale x 1 x i32> @test_vlseg7_nxv1i32(ptr %base, i64 %vl) sanitize_addr ; CHECK-LABEL: 
@test_vlseg7_nxv1i32( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: [[TMP84:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 7) @llvm.riscv.vlseg7.triscv.vector.tuple_nxv4i8_7t.p0.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) poison, ptr [[BASE:%.*]], i64 [[VL:%.*]], i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP1]], label [[TMP2:%.*]], label [[TMP10:%.*]] +; CHECK: 2: +; CHECK-NEXT: [[TMP3:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP4:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP3]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP2]] ], [ [[IV_NEXT:%.*]], [[TMP9:%.*]] ] +; CHECK-NEXT: [[TMP5:%.*]] = extractelement <vscale x 1 x i1> splat (i1 true), i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP5]], label [[TMP6:%.*]], label [[TMP9]] +; CHECK: 6: +; CHECK-NEXT: [[TMP7:%.*]] = getelementptr <vscale x 1 x i224>, ptr [[BASE:%.*]], i64 0, i64 [[IV]] +; CHECK-NEXT: [[TMP8:%.*]] = ptrtoint ptr [[TMP7]] to i64 +; CHECK-NEXT: call void @__asan_loadN(i64 [[TMP8]], i64 28) +; CHECK-NEXT: br label [[TMP9]] +; CHECK: 9: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP4]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP10]] +; CHECK: 10: +; CHECK-NEXT: [[TMP84:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 7) @llvm.riscv.vlseg7.triscv.vector.tuple_nxv4i8_7t.p0.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) poison, ptr [[BASE]], i64 [[VL]], i64 5) ; CHECK-NEXT: [[TMP85:%.*]] = call <vscale x 1 x i32> @llvm.riscv.tuple.extract.nxv1i32.triscv.vector.tuple_nxv4i8_7t(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) [[TMP84]], i32 1) ; CHECK-NEXT: ret <vscale x 1 x i32> [[TMP85]] ; @@ -354,7 +596,29 @@ define <vscale x 1 x i32> @test_vlseg7_mask_nxv1i32(ptr %base, i64 %vl, <vscale ; CHECK-LABEL: @test_vlseg7_mask_nxv1i32( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: [[TMP84:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 7) @llvm.riscv.vlseg7.mask.triscv.vector.tuple_nxv4i8_7t.p0.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) poison, ptr [[BASE:%.*]], <vscale x 1 x i1> [[MASK:%.*]], i64 [[VL:%.*]], i64 1, i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP1]], label [[TMP2:%.*]], label [[TMP10:%.*]] +; CHECK: 2: +; CHECK-NEXT: [[TMP3:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP4:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP3]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP2]] ], [ [[IV_NEXT:%.*]], [[TMP9:%.*]] ] +; CHECK-NEXT: [[TMP5:%.*]] = extractelement <vscale x 1 x i1> [[MASK:%.*]], i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP5]], label [[TMP6:%.*]], label [[TMP9]] +; CHECK: 6: +; CHECK-NEXT: [[TMP7:%.*]] = getelementptr <vscale x 1 x i224>, ptr [[BASE:%.*]], i64 0, i64 [[IV]] +; CHECK-NEXT: [[TMP8:%.*]] = ptrtoint ptr [[TMP7]] to i64 +; CHECK-NEXT: call void @__asan_loadN(i64 [[TMP8]], i64 28) +; CHECK-NEXT: br label [[TMP9]] +; CHECK: 9: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP4]] +; CHECK-NEXT: br i1 
[[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP10]] +; CHECK: 10: +; CHECK-NEXT: [[TMP84:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 7) @llvm.riscv.vlseg7.mask.triscv.vector.tuple_nxv4i8_7t.p0.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) poison, ptr [[BASE]], <vscale x 1 x i1> [[MASK]], i64 [[VL]], i64 1, i64 5) ; CHECK-NEXT: [[TMP85:%.*]] = call <vscale x 1 x i32> @llvm.riscv.tuple.extract.nxv1i32.triscv.vector.tuple_nxv4i8_7t(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) [[TMP84]], i32 1) ; CHECK-NEXT: ret <vscale x 1 x i32> [[TMP85]] ; @@ -372,7 +636,29 @@ define <vscale x 1 x i32> @test_vlseg8_nxv1i32(ptr %base, i64 %vl) sanitize_addr ; CHECK-LABEL: @test_vlseg8_nxv1i32( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: [[TMP96:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 8) @llvm.riscv.vlseg8.triscv.vector.tuple_nxv4i8_8t.p0.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) poison, ptr [[BASE:%.*]], i64 [[VL:%.*]], i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP1]], label [[TMP2:%.*]], label [[TMP10:%.*]] +; CHECK: 2: +; CHECK-NEXT: [[TMP3:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP4:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP3]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP2]] ], [ [[IV_NEXT:%.*]], [[TMP9:%.*]] ] +; CHECK-NEXT: [[TMP5:%.*]] = extractelement <vscale x 1 x i1> splat (i1 true), i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP5]], label [[TMP6:%.*]], label [[TMP9]] +; CHECK: 6: +; CHECK-NEXT: [[TMP7:%.*]] = getelementptr <vscale x 1 x i256>, ptr [[BASE:%.*]], i64 0, i64 [[IV]] +; CHECK-NEXT: [[TMP8:%.*]] = ptrtoint ptr [[TMP7]] to i64 +; CHECK-NEXT: call void @__asan_loadN(i64 [[TMP8]], i64 32) +; CHECK-NEXT: br label [[TMP9]] +; CHECK: 9: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP4]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP10]] +; CHECK: 10: +; CHECK-NEXT: [[TMP96:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 8) @llvm.riscv.vlseg8.triscv.vector.tuple_nxv4i8_8t.p0.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) poison, ptr [[BASE]], i64 [[VL]], i64 5) ; CHECK-NEXT: [[TMP97:%.*]] = call <vscale x 1 x i32> @llvm.riscv.tuple.extract.nxv1i32.triscv.vector.tuple_nxv4i8_8t(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) [[TMP96]], i32 1) ; CHECK-NEXT: ret <vscale x 1 x i32> [[TMP97]] ; @@ -386,7 +672,29 @@ define <vscale x 1 x i32> @test_vlseg8_mask_nxv1i32(ptr %base, i64 %vl, <vscale ; CHECK-LABEL: @test_vlseg8_mask_nxv1i32( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: [[TMP96:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 8) @llvm.riscv.vlseg8.mask.triscv.vector.tuple_nxv4i8_8t.p0.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) poison, ptr [[BASE:%.*]], <vscale x 1 x i1> [[MASK:%.*]], i64 [[VL:%.*]], i64 1, i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP1]], label [[TMP2:%.*]], label [[TMP10:%.*]] +; CHECK: 2: +; CHECK-NEXT: [[TMP3:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP4:%.*]] = 
call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP3]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP2]] ], [ [[IV_NEXT:%.*]], [[TMP9:%.*]] ] +; CHECK-NEXT: [[TMP5:%.*]] = extractelement <vscale x 1 x i1> [[MASK:%.*]], i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP5]], label [[TMP6:%.*]], label [[TMP9]] +; CHECK: 6: +; CHECK-NEXT: [[TMP7:%.*]] = getelementptr <vscale x 1 x i256>, ptr [[BASE:%.*]], i64 0, i64 [[IV]] +; CHECK-NEXT: [[TMP8:%.*]] = ptrtoint ptr [[TMP7]] to i64 +; CHECK-NEXT: call void @__asan_loadN(i64 [[TMP8]], i64 32) +; CHECK-NEXT: br label [[TMP9]] +; CHECK: 9: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP4]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP10]] +; CHECK: 10: +; CHECK-NEXT: [[TMP96:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 8) @llvm.riscv.vlseg8.mask.triscv.vector.tuple_nxv4i8_8t.p0.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) poison, ptr [[BASE]], <vscale x 1 x i1> [[MASK]], i64 [[VL]], i64 1, i64 5) ; CHECK-NEXT: [[TMP97:%.*]] = call <vscale x 1 x i32> @llvm.riscv.tuple.extract.nxv1i32.triscv.vector.tuple_nxv4i8_8t(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) [[TMP96]], i32 1) ; CHECK-NEXT: ret <vscale x 1 x i32> [[TMP97]] ; @@ -404,7 +712,29 @@ define void @test_vsseg2_nxv1i32(target("riscv.vector.tuple", <vscale x 4 x i8>, ; CHECK-LABEL: @test_vsseg2_nxv1i32( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: tail call void @llvm.riscv.vsseg2.triscv.vector.tuple_nxv4i8_2t.p0.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) [[VAL:%.*]], ptr [[BASE:%.*]], i64 [[VL:%.*]], i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP1]], label [[TMP2:%.*]], label [[TMP10:%.*]] +; CHECK: 2: +; CHECK-NEXT: [[TMP3:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP4:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP3]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP2]] ], [ [[IV_NEXT:%.*]], [[TMP9:%.*]] ] +; CHECK-NEXT: [[TMP5:%.*]] = extractelement <vscale x 1 x i1> splat (i1 true), i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP5]], label [[TMP6:%.*]], label [[TMP9]] +; CHECK: 6: +; CHECK-NEXT: [[TMP7:%.*]] = getelementptr <vscale x 1 x i64>, ptr [[BASE:%.*]], i64 0, i64 [[IV]] +; CHECK-NEXT: [[TMP8:%.*]] = ptrtoint ptr [[TMP7]] to i64 +; CHECK-NEXT: call void @__asan_storeN(i64 [[TMP8]], i64 8) +; CHECK-NEXT: br label [[TMP9]] +; CHECK: 9: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP4]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP10]] +; CHECK: 10: +; CHECK-NEXT: tail call void @llvm.riscv.vsseg2.triscv.vector.tuple_nxv4i8_2t.p0.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) [[VAL:%.*]], ptr [[BASE]], i64 [[VL]], i64 5) ; CHECK-NEXT: ret void ; entry: @@ -416,7 +746,29 @@ define void @test_vsseg2_mask_nxv1i32(target("riscv.vector.tuple", <vscale x 4 x ; CHECK-LABEL: @test_vsseg2_mask_nxv1i32( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: tail call void 
@llvm.riscv.vsseg2.mask.triscv.vector.tuple_nxv4i8_2t.p0.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) [[VAL:%.*]], ptr [[BASE:%.*]], <vscale x 1 x i1> [[MASK:%.*]], i64 [[VL:%.*]], i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP1]], label [[TMP2:%.*]], label [[TMP10:%.*]] +; CHECK: 2: +; CHECK-NEXT: [[TMP3:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP4:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP3]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP2]] ], [ [[IV_NEXT:%.*]], [[TMP9:%.*]] ] +; CHECK-NEXT: [[TMP5:%.*]] = extractelement <vscale x 1 x i1> [[MASK:%.*]], i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP5]], label [[TMP6:%.*]], label [[TMP9]] +; CHECK: 6: +; CHECK-NEXT: [[TMP7:%.*]] = getelementptr <vscale x 1 x i64>, ptr [[BASE:%.*]], i64 0, i64 [[IV]] +; CHECK-NEXT: [[TMP8:%.*]] = ptrtoint ptr [[TMP7]] to i64 +; CHECK-NEXT: call void @__asan_storeN(i64 [[TMP8]], i64 8) +; CHECK-NEXT: br label [[TMP9]] +; CHECK: 9: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP4]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP10]] +; CHECK: 10: +; CHECK-NEXT: tail call void @llvm.riscv.vsseg2.mask.triscv.vector.tuple_nxv4i8_2t.p0.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) [[VAL:%.*]], ptr [[BASE]], <vscale x 1 x i1> [[MASK]], i64 [[VL]], i64 5) ; CHECK-NEXT: ret void ; entry: @@ -432,7 +784,29 @@ define void @test_vsseg3_nxv1i32(target("riscv.vector.tuple", <vscale x 4 x i8>, ; CHECK-LABEL: @test_vsseg3_nxv1i32( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: tail call void @llvm.riscv.vsseg3.triscv.vector.tuple_nxv4i8_3t.p0.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) [[VAL:%.*]], ptr [[BASE:%.*]], i64 [[VL:%.*]], i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP1]], label [[TMP2:%.*]], label [[TMP10:%.*]] +; CHECK: 2: +; CHECK-NEXT: [[TMP3:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP4:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP3]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP2]] ], [ [[IV_NEXT:%.*]], [[TMP9:%.*]] ] +; CHECK-NEXT: [[TMP5:%.*]] = extractelement <vscale x 1 x i1> splat (i1 true), i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP5]], label [[TMP6:%.*]], label [[TMP9]] +; CHECK: 6: +; CHECK-NEXT: [[TMP7:%.*]] = getelementptr <vscale x 1 x i96>, ptr [[BASE:%.*]], i64 0, i64 [[IV]] +; CHECK-NEXT: [[TMP8:%.*]] = ptrtoint ptr [[TMP7]] to i64 +; CHECK-NEXT: call void @__asan_storeN(i64 [[TMP8]], i64 12) +; CHECK-NEXT: br label [[TMP9]] +; CHECK: 9: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP4]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP10]] +; CHECK: 10: +; CHECK-NEXT: tail call void @llvm.riscv.vsseg3.triscv.vector.tuple_nxv4i8_3t.p0.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) [[VAL:%.*]], ptr [[BASE]], i64 [[VL]], i64 5) ; CHECK-NEXT: ret void ; entry: @@ -444,7 +818,29 @@ define void @test_vsseg3_mask_nxv1i32(target("riscv.vector.tuple", <vscale x 4 x ; CHECK-LABEL: @test_vsseg3_mask_nxv1i32( ; CHECK-NEXT: 
entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: tail call void @llvm.riscv.vsseg3.mask.triscv.vector.tuple_nxv4i8_3t.p0.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) [[VAL:%.*]], ptr [[BASE:%.*]], <vscale x 1 x i1> [[MASK:%.*]], i64 [[VL:%.*]], i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP1]], label [[TMP2:%.*]], label [[TMP10:%.*]] +; CHECK: 2: +; CHECK-NEXT: [[TMP3:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP4:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP3]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP2]] ], [ [[IV_NEXT:%.*]], [[TMP9:%.*]] ] +; CHECK-NEXT: [[TMP5:%.*]] = extractelement <vscale x 1 x i1> [[MASK:%.*]], i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP5]], label [[TMP6:%.*]], label [[TMP9]] +; CHECK: 6: +; CHECK-NEXT: [[TMP7:%.*]] = getelementptr <vscale x 1 x i96>, ptr [[BASE:%.*]], i64 0, i64 [[IV]] +; CHECK-NEXT: [[TMP8:%.*]] = ptrtoint ptr [[TMP7]] to i64 +; CHECK-NEXT: call void @__asan_storeN(i64 [[TMP8]], i64 12) +; CHECK-NEXT: br label [[TMP9]] +; CHECK: 9: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP4]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP10]] +; CHECK: 10: +; CHECK-NEXT: tail call void @llvm.riscv.vsseg3.mask.triscv.vector.tuple_nxv4i8_3t.p0.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) [[VAL:%.*]], ptr [[BASE]], <vscale x 1 x i1> [[MASK]], i64 [[VL]], i64 5) ; CHECK-NEXT: ret void ; entry: @@ -460,7 +856,29 @@ define void @test_vsseg4_nxv1i32(target("riscv.vector.tuple", <vscale x 4 x i8>, ; CHECK-LABEL: @test_vsseg4_nxv1i32( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: tail call void @llvm.riscv.vsseg4.triscv.vector.tuple_nxv4i8_4t.p0.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) [[VAL:%.*]], ptr [[BASE:%.*]], i64 [[VL:%.*]], i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP1]], label [[TMP2:%.*]], label [[TMP10:%.*]] +; CHECK: 2: +; CHECK-NEXT: [[TMP3:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP4:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP3]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP2]] ], [ [[IV_NEXT:%.*]], [[TMP9:%.*]] ] +; CHECK-NEXT: [[TMP5:%.*]] = extractelement <vscale x 1 x i1> splat (i1 true), i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP5]], label [[TMP6:%.*]], label [[TMP9]] +; CHECK: 6: +; CHECK-NEXT: [[TMP7:%.*]] = getelementptr <vscale x 1 x i128>, ptr [[BASE:%.*]], i64 0, i64 [[IV]] +; CHECK-NEXT: [[TMP8:%.*]] = ptrtoint ptr [[TMP7]] to i64 +; CHECK-NEXT: call void @__asan_storeN(i64 [[TMP8]], i64 16) +; CHECK-NEXT: br label [[TMP9]] +; CHECK: 9: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP4]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP10]] +; CHECK: 10: +; CHECK-NEXT: tail call void @llvm.riscv.vsseg4.triscv.vector.tuple_nxv4i8_4t.p0.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) [[VAL:%.*]], ptr [[BASE]], i64 [[VL]], i64 5) ; CHECK-NEXT: ret void ; entry: @@ -472,7 +890,29 @@ define void 
@test_vsseg4_mask_nxv1i32(target("riscv.vector.tuple", <vscale x 4 x ; CHECK-LABEL: @test_vsseg4_mask_nxv1i32( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: tail call void @llvm.riscv.vsseg4.mask.triscv.vector.tuple_nxv4i8_4t.p0.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) [[VAL:%.*]], ptr [[BASE:%.*]], <vscale x 1 x i1> [[MASK:%.*]], i64 [[VL:%.*]], i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP1]], label [[TMP2:%.*]], label [[TMP10:%.*]] +; CHECK: 2: +; CHECK-NEXT: [[TMP3:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP4:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP3]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP2]] ], [ [[IV_NEXT:%.*]], [[TMP9:%.*]] ] +; CHECK-NEXT: [[TMP5:%.*]] = extractelement <vscale x 1 x i1> [[MASK:%.*]], i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP5]], label [[TMP6:%.*]], label [[TMP9]] +; CHECK: 6: +; CHECK-NEXT: [[TMP7:%.*]] = getelementptr <vscale x 1 x i128>, ptr [[BASE:%.*]], i64 0, i64 [[IV]] +; CHECK-NEXT: [[TMP8:%.*]] = ptrtoint ptr [[TMP7]] to i64 +; CHECK-NEXT: call void @__asan_storeN(i64 [[TMP8]], i64 16) +; CHECK-NEXT: br label [[TMP9]] +; CHECK: 9: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP4]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP10]] +; CHECK: 10: +; CHECK-NEXT: tail call void @llvm.riscv.vsseg4.mask.triscv.vector.tuple_nxv4i8_4t.p0.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) [[VAL:%.*]], ptr [[BASE]], <vscale x 1 x i1> [[MASK]], i64 [[VL]], i64 5) ; CHECK-NEXT: ret void ; entry: @@ -488,7 +928,29 @@ define void @test_vsseg5_nxv1i32(target("riscv.vector.tuple", <vscale x 4 x i8>, ; CHECK-LABEL: @test_vsseg5_nxv1i32( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: tail call void @llvm.riscv.vsseg5.triscv.vector.tuple_nxv4i8_5t.p0.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) [[VAL:%.*]], ptr [[BASE:%.*]], i64 [[VL:%.*]], i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP1]], label [[TMP2:%.*]], label [[TMP10:%.*]] +; CHECK: 2: +; CHECK-NEXT: [[TMP3:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP4:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP3]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP2]] ], [ [[IV_NEXT:%.*]], [[TMP9:%.*]] ] +; CHECK-NEXT: [[TMP5:%.*]] = extractelement <vscale x 1 x i1> splat (i1 true), i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP5]], label [[TMP6:%.*]], label [[TMP9]] +; CHECK: 6: +; CHECK-NEXT: [[TMP7:%.*]] = getelementptr <vscale x 1 x i160>, ptr [[BASE:%.*]], i64 0, i64 [[IV]] +; CHECK-NEXT: [[TMP8:%.*]] = ptrtoint ptr [[TMP7]] to i64 +; CHECK-NEXT: call void @__asan_storeN(i64 [[TMP8]], i64 20) +; CHECK-NEXT: br label [[TMP9]] +; CHECK: 9: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP4]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP10]] +; CHECK: 10: +; CHECK-NEXT: tail call void @llvm.riscv.vsseg5.triscv.vector.tuple_nxv4i8_5t.p0.i64(target("riscv.vector.tuple", <vscale 
x 4 x i8>, 5) [[VAL:%.*]], ptr [[BASE]], i64 [[VL]], i64 5) ; CHECK-NEXT: ret void ; entry: @@ -500,7 +962,29 @@ define void @test_vsseg5_mask_nxv1i32(target("riscv.vector.tuple", <vscale x 4 x ; CHECK-LABEL: @test_vsseg5_mask_nxv1i32( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: tail call void @llvm.riscv.vsseg5.mask.triscv.vector.tuple_nxv4i8_5t.p0.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) [[VAL:%.*]], ptr [[BASE:%.*]], <vscale x 1 x i1> [[MASK:%.*]], i64 [[VL:%.*]], i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP1]], label [[TMP2:%.*]], label [[TMP10:%.*]] +; CHECK: 2: +; CHECK-NEXT: [[TMP3:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP4:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP3]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP2]] ], [ [[IV_NEXT:%.*]], [[TMP9:%.*]] ] +; CHECK-NEXT: [[TMP5:%.*]] = extractelement <vscale x 1 x i1> [[MASK:%.*]], i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP5]], label [[TMP6:%.*]], label [[TMP9]] +; CHECK: 6: +; CHECK-NEXT: [[TMP7:%.*]] = getelementptr <vscale x 1 x i160>, ptr [[BASE:%.*]], i64 0, i64 [[IV]] +; CHECK-NEXT: [[TMP8:%.*]] = ptrtoint ptr [[TMP7]] to i64 +; CHECK-NEXT: call void @__asan_storeN(i64 [[TMP8]], i64 20) +; CHECK-NEXT: br label [[TMP9]] +; CHECK: 9: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP4]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP10]] +; CHECK: 10: +; CHECK-NEXT: tail call void @llvm.riscv.vsseg5.mask.triscv.vector.tuple_nxv4i8_5t.p0.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) [[VAL:%.*]], ptr [[BASE]], <vscale x 1 x i1> [[MASK]], i64 [[VL]], i64 5) ; CHECK-NEXT: ret void ; entry: @@ -516,7 +1000,29 @@ define void @test_vsseg6_nxv1i32(target("riscv.vector.tuple", <vscale x 4 x i8>, ; CHECK-LABEL: @test_vsseg6_nxv1i32( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: tail call void @llvm.riscv.vsseg6.triscv.vector.tuple_nxv4i8_6t.p0.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) [[VAL:%.*]], ptr [[BASE:%.*]], i64 [[VL:%.*]], i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP1]], label [[TMP2:%.*]], label [[TMP10:%.*]] +; CHECK: 2: +; CHECK-NEXT: [[TMP3:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP4:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP3]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP2]] ], [ [[IV_NEXT:%.*]], [[TMP9:%.*]] ] +; CHECK-NEXT: [[TMP5:%.*]] = extractelement <vscale x 1 x i1> splat (i1 true), i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP5]], label [[TMP6:%.*]], label [[TMP9]] +; CHECK: 6: +; CHECK-NEXT: [[TMP7:%.*]] = getelementptr <vscale x 1 x i192>, ptr [[BASE:%.*]], i64 0, i64 [[IV]] +; CHECK-NEXT: [[TMP8:%.*]] = ptrtoint ptr [[TMP7]] to i64 +; CHECK-NEXT: call void @__asan_storeN(i64 [[TMP8]], i64 24) +; CHECK-NEXT: br label [[TMP9]] +; CHECK: 9: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP4]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP10]] +; CHECK: 
10: +; CHECK-NEXT: tail call void @llvm.riscv.vsseg6.triscv.vector.tuple_nxv4i8_6t.p0.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) [[VAL:%.*]], ptr [[BASE]], i64 [[VL]], i64 5) ; CHECK-NEXT: ret void ; entry: @@ -528,7 +1034,29 @@ define void @test_vsseg6_mask_nxv1i32(target("riscv.vector.tuple", <vscale x 4 x ; CHECK-LABEL: @test_vsseg6_mask_nxv1i32( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: tail call void @llvm.riscv.vsseg6.mask.triscv.vector.tuple_nxv4i8_6t.p0.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) [[VAL:%.*]], ptr [[BASE:%.*]], <vscale x 1 x i1> [[MASK:%.*]], i64 [[VL:%.*]], i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP1]], label [[TMP2:%.*]], label [[TMP10:%.*]] +; CHECK: 2: +; CHECK-NEXT: [[TMP3:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP4:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP3]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP2]] ], [ [[IV_NEXT:%.*]], [[TMP9:%.*]] ] +; CHECK-NEXT: [[TMP5:%.*]] = extractelement <vscale x 1 x i1> [[MASK:%.*]], i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP5]], label [[TMP6:%.*]], label [[TMP9]] +; CHECK: 6: +; CHECK-NEXT: [[TMP7:%.*]] = getelementptr <vscale x 1 x i192>, ptr [[BASE:%.*]], i64 0, i64 [[IV]] +; CHECK-NEXT: [[TMP8:%.*]] = ptrtoint ptr [[TMP7]] to i64 +; CHECK-NEXT: call void @__asan_storeN(i64 [[TMP8]], i64 24) +; CHECK-NEXT: br label [[TMP9]] +; CHECK: 9: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP4]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP10]] +; CHECK: 10: +; CHECK-NEXT: tail call void @llvm.riscv.vsseg6.mask.triscv.vector.tuple_nxv4i8_6t.p0.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) [[VAL:%.*]], ptr [[BASE]], <vscale x 1 x i1> [[MASK]], i64 [[VL]], i64 5) ; CHECK-NEXT: ret void ; entry: @@ -544,7 +1072,29 @@ define void @test_vsseg7_nxv1i32(target("riscv.vector.tuple", <vscale x 4 x i8>, ; CHECK-LABEL: @test_vsseg7_nxv1i32( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: tail call void @llvm.riscv.vsseg7.triscv.vector.tuple_nxv4i8_7t.p0.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) [[VAL:%.*]], ptr [[BASE:%.*]], i64 [[VL:%.*]], i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP1]], label [[TMP2:%.*]], label [[TMP10:%.*]] +; CHECK: 2: +; CHECK-NEXT: [[TMP3:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP4:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP3]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP2]] ], [ [[IV_NEXT:%.*]], [[TMP9:%.*]] ] +; CHECK-NEXT: [[TMP5:%.*]] = extractelement <vscale x 1 x i1> splat (i1 true), i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP5]], label [[TMP6:%.*]], label [[TMP9]] +; CHECK: 6: +; CHECK-NEXT: [[TMP7:%.*]] = getelementptr <vscale x 1 x i224>, ptr [[BASE:%.*]], i64 0, i64 [[IV]] +; CHECK-NEXT: [[TMP8:%.*]] = ptrtoint ptr [[TMP7]] to i64 +; CHECK-NEXT: call void @__asan_storeN(i64 [[TMP8]], i64 28) +; CHECK-NEXT: br label [[TMP9]] +; CHECK: 9: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP4]] +; CHECK-NEXT: br i1 
[[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP10]] +; CHECK: 10: +; CHECK-NEXT: tail call void @llvm.riscv.vsseg7.triscv.vector.tuple_nxv4i8_7t.p0.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) [[VAL:%.*]], ptr [[BASE]], i64 [[VL]], i64 5) ; CHECK-NEXT: ret void ; entry: @@ -556,7 +1106,29 @@ define void @test_vsseg7_mask_nxv1i32(target("riscv.vector.tuple", <vscale x 4 x ; CHECK-LABEL: @test_vsseg7_mask_nxv1i32( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: tail call void @llvm.riscv.vsseg7.mask.triscv.vector.tuple_nxv4i8_7t.p0.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) [[VAL:%.*]], ptr [[BASE:%.*]], <vscale x 1 x i1> [[MASK:%.*]], i64 [[VL:%.*]], i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP1]], label [[TMP2:%.*]], label [[TMP10:%.*]] +; CHECK: 2: +; CHECK-NEXT: [[TMP3:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP4:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP3]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP2]] ], [ [[IV_NEXT:%.*]], [[TMP9:%.*]] ] +; CHECK-NEXT: [[TMP5:%.*]] = extractelement <vscale x 1 x i1> [[MASK:%.*]], i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP5]], label [[TMP6:%.*]], label [[TMP9]] +; CHECK: 6: +; CHECK-NEXT: [[TMP7:%.*]] = getelementptr <vscale x 1 x i224>, ptr [[BASE:%.*]], i64 0, i64 [[IV]] +; CHECK-NEXT: [[TMP8:%.*]] = ptrtoint ptr [[TMP7]] to i64 +; CHECK-NEXT: call void @__asan_storeN(i64 [[TMP8]], i64 28) +; CHECK-NEXT: br label [[TMP9]] +; CHECK: 9: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP4]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP10]] +; CHECK: 10: +; CHECK-NEXT: tail call void @llvm.riscv.vsseg7.mask.triscv.vector.tuple_nxv4i8_7t.p0.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) [[VAL:%.*]], ptr [[BASE]], <vscale x 1 x i1> [[MASK]], i64 [[VL]], i64 5) ; CHECK-NEXT: ret void ; entry: @@ -572,7 +1144,29 @@ define void @test_vsseg8_nxv1i32(target("riscv.vector.tuple", <vscale x 4 x i8>, ; CHECK-LABEL: @test_vsseg8_nxv1i32( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: tail call void @llvm.riscv.vsseg8.triscv.vector.tuple_nxv4i8_8t.p0.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) [[VAL:%.*]], ptr [[BASE:%.*]], i64 [[VL:%.*]], i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP1]], label [[TMP2:%.*]], label [[TMP10:%.*]] +; CHECK: 2: +; CHECK-NEXT: [[TMP3:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP4:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP3]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP2]] ], [ [[IV_NEXT:%.*]], [[TMP9:%.*]] ] +; CHECK-NEXT: [[TMP5:%.*]] = extractelement <vscale x 1 x i1> splat (i1 true), i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP5]], label [[TMP6:%.*]], label [[TMP9]] +; CHECK: 6: +; CHECK-NEXT: [[TMP7:%.*]] = getelementptr <vscale x 1 x i256>, ptr [[BASE:%.*]], i64 0, i64 [[IV]] +; CHECK-NEXT: [[TMP8:%.*]] = ptrtoint ptr [[TMP7]] to i64 +; CHECK-NEXT: call void @__asan_storeN(i64 [[TMP8]], i64 32) +; CHECK-NEXT: br label [[TMP9]] +; CHECK: 9: +; CHECK-NEXT: 
[[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP4]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP10]] +; CHECK: 10: +; CHECK-NEXT: tail call void @llvm.riscv.vsseg8.triscv.vector.tuple_nxv4i8_8t.p0.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) [[VAL:%.*]], ptr [[BASE]], i64 [[VL]], i64 5) ; CHECK-NEXT: ret void ; entry: @@ -584,7 +1178,29 @@ define void @test_vsseg8_mask_nxv1i32(target("riscv.vector.tuple", <vscale x 4 x ; CHECK-LABEL: @test_vsseg8_mask_nxv1i32( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: tail call void @llvm.riscv.vsseg8.mask.triscv.vector.tuple_nxv4i8_8t.p0.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) [[VAL:%.*]], ptr [[BASE:%.*]], <vscale x 1 x i1> [[MASK:%.*]], i64 [[VL:%.*]], i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP1]], label [[TMP2:%.*]], label [[TMP10:%.*]] +; CHECK: 2: +; CHECK-NEXT: [[TMP3:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP4:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP3]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP2]] ], [ [[IV_NEXT:%.*]], [[TMP9:%.*]] ] +; CHECK-NEXT: [[TMP5:%.*]] = extractelement <vscale x 1 x i1> [[MASK:%.*]], i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP5]], label [[TMP6:%.*]], label [[TMP9]] +; CHECK: 6: +; CHECK-NEXT: [[TMP7:%.*]] = getelementptr <vscale x 1 x i256>, ptr [[BASE:%.*]], i64 0, i64 [[IV]] +; CHECK-NEXT: [[TMP8:%.*]] = ptrtoint ptr [[TMP7]] to i64 +; CHECK-NEXT: call void @__asan_storeN(i64 [[TMP8]], i64 32) +; CHECK-NEXT: br label [[TMP9]] +; CHECK: 9: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP4]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP10]] +; CHECK: 10: +; CHECK-NEXT: tail call void @llvm.riscv.vsseg8.mask.triscv.vector.tuple_nxv4i8_8t.p0.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) [[VAL:%.*]], ptr [[BASE]], <vscale x 1 x i1> [[MASK]], i64 [[VL]], i64 5) ; CHECK-NEXT: ret void ; entry: @@ -792,7 +1408,30 @@ define <vscale x 1 x i32> @test_vlsseg2_nxv1i32(ptr %base, i64 %offset, i64 %vl) ; CHECK-LABEL: @test_vlsseg2_nxv1i32( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: [[TMP24:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 2) @llvm.riscv.vlsseg2.triscv.vector.tuple_nxv4i8_2t.p0.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) poison, ptr [[BASE:%.*]], i64 [[OFFSET:%.*]], i64 [[VL:%.*]], i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP1]], label [[TMP2:%.*]], label [[TMP11:%.*]] +; CHECK: 2: +; CHECK-NEXT: [[TMP3:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP4:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP3]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP2]] ], [ [[IV_NEXT:%.*]], [[TMP10:%.*]] ] +; CHECK-NEXT: [[TMP5:%.*]] = extractelement <vscale x 1 x i1> splat (i1 true), i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP5]], label [[TMP6:%.*]], label [[TMP10]] +; CHECK: 6: +; CHECK-NEXT: [[TMP7:%.*]] = mul i64 [[IV]], [[OFFSET:%.*]] +; 
CHECK-NEXT: [[TMP8:%.*]] = getelementptr i8, ptr [[BASE:%.*]], i64 [[TMP7]] +; CHECK-NEXT: [[TMP9:%.*]] = ptrtoint ptr [[TMP8]] to i64 +; CHECK-NEXT: call void @__asan_loadN(i64 [[TMP9]], i64 8) +; CHECK-NEXT: br label [[TMP10]] +; CHECK: 10: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP4]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP11]] +; CHECK: 11: +; CHECK-NEXT: [[TMP24:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 2) @llvm.riscv.vlsseg2.triscv.vector.tuple_nxv4i8_2t.p0.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) poison, ptr [[BASE]], i64 [[OFFSET]], i64 [[VL]], i64 5) ; CHECK-NEXT: [[TMP25:%.*]] = call <vscale x 1 x i32> @llvm.riscv.tuple.extract.nxv1i32.triscv.vector.tuple_nxv4i8_2t(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) [[TMP24]], i32 1) ; CHECK-NEXT: ret <vscale x 1 x i32> [[TMP25]] ; @@ -806,7 +1445,30 @@ define <vscale x 1 x i32> @test_vlsseg2_mask_nxv1i32(ptr %base, i64 %offset, i64 ; CHECK-LABEL: @test_vlsseg2_mask_nxv1i32( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: [[TMP24:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 2) @llvm.riscv.vlsseg2.mask.triscv.vector.tuple_nxv4i8_2t.p0.i64.nxv1i1(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) poison, ptr [[BASE:%.*]], i64 [[OFFSET:%.*]], <vscale x 1 x i1> [[MASK:%.*]], i64 [[VL:%.*]], i64 1, i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP1]], label [[TMP2:%.*]], label [[TMP11:%.*]] +; CHECK: 2: +; CHECK-NEXT: [[TMP3:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP4:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP3]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP2]] ], [ [[IV_NEXT:%.*]], [[TMP10:%.*]] ] +; CHECK-NEXT: [[TMP5:%.*]] = extractelement <vscale x 1 x i1> [[MASK:%.*]], i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP5]], label [[TMP6:%.*]], label [[TMP10]] +; CHECK: 6: +; CHECK-NEXT: [[TMP7:%.*]] = mul i64 [[IV]], [[OFFSET:%.*]] +; CHECK-NEXT: [[TMP8:%.*]] = getelementptr i8, ptr [[BASE:%.*]], i64 [[TMP7]] +; CHECK-NEXT: [[TMP9:%.*]] = ptrtoint ptr [[TMP8]] to i64 +; CHECK-NEXT: call void @__asan_loadN(i64 [[TMP9]], i64 8) +; CHECK-NEXT: br label [[TMP10]] +; CHECK: 10: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP4]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP11]] +; CHECK: 11: +; CHECK-NEXT: [[TMP24:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 2) @llvm.riscv.vlsseg2.mask.triscv.vector.tuple_nxv4i8_2t.p0.i64.nxv1i1(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) poison, ptr [[BASE]], i64 [[OFFSET]], <vscale x 1 x i1> [[MASK]], i64 [[VL]], i64 1, i64 5) ; CHECK-NEXT: [[TMP25:%.*]] = call <vscale x 1 x i32> @llvm.riscv.tuple.extract.nxv1i32.triscv.vector.tuple_nxv4i8_2t(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) [[TMP24]], i32 1) ; CHECK-NEXT: ret <vscale x 1 x i32> [[TMP25]] ; @@ -824,7 +1486,30 @@ define <vscale x 1 x i32> @test_vlsseg3_nxv1i32(ptr %base, i64 %offset, i64 %vl) ; CHECK-LABEL: @test_vlsseg3_nxv1i32( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, 
align 8 -; CHECK-NEXT: [[TMP36:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 3) @llvm.riscv.vlsseg3.triscv.vector.tuple_nxv4i8_3t.p0.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) poison, ptr [[BASE:%.*]], i64 [[OFFSET:%.*]], i64 [[VL:%.*]], i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP1]], label [[TMP2:%.*]], label [[TMP11:%.*]] +; CHECK: 2: +; CHECK-NEXT: [[TMP3:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP4:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP3]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP2]] ], [ [[IV_NEXT:%.*]], [[TMP10:%.*]] ] +; CHECK-NEXT: [[TMP5:%.*]] = extractelement <vscale x 1 x i1> splat (i1 true), i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP5]], label [[TMP6:%.*]], label [[TMP10]] +; CHECK: 6: +; CHECK-NEXT: [[TMP7:%.*]] = mul i64 [[IV]], [[OFFSET:%.*]] +; CHECK-NEXT: [[TMP8:%.*]] = getelementptr i8, ptr [[BASE:%.*]], i64 [[TMP7]] +; CHECK-NEXT: [[TMP9:%.*]] = ptrtoint ptr [[TMP8]] to i64 +; CHECK-NEXT: call void @__asan_loadN(i64 [[TMP9]], i64 12) +; CHECK-NEXT: br label [[TMP10]] +; CHECK: 10: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP4]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP11]] +; CHECK: 11: +; CHECK-NEXT: [[TMP36:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 3) @llvm.riscv.vlsseg3.triscv.vector.tuple_nxv4i8_3t.p0.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) poison, ptr [[BASE]], i64 [[OFFSET]], i64 [[VL]], i64 5) ; CHECK-NEXT: [[TMP37:%.*]] = call <vscale x 1 x i32> @llvm.riscv.tuple.extract.nxv1i32.triscv.vector.tuple_nxv4i8_3t(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) [[TMP36]], i32 1) ; CHECK-NEXT: ret <vscale x 1 x i32> [[TMP37]] ; @@ -838,7 +1523,30 @@ define <vscale x 1 x i32> @test_vlsseg3_mask_nxv1i32(ptr %base, i64 %offset, i64 ; CHECK-LABEL: @test_vlsseg3_mask_nxv1i32( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: [[TMP36:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 3) @llvm.riscv.vlsseg3.mask.triscv.vector.tuple_nxv4i8_3t.p0.i64.nxv1i1(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) poison, ptr [[BASE:%.*]], i64 [[OFFSET:%.*]], <vscale x 1 x i1> [[MASK:%.*]], i64 [[VL:%.*]], i64 1, i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP1]], label [[TMP2:%.*]], label [[TMP11:%.*]] +; CHECK: 2: +; CHECK-NEXT: [[TMP3:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP4:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP3]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP2]] ], [ [[IV_NEXT:%.*]], [[TMP10:%.*]] ] +; CHECK-NEXT: [[TMP5:%.*]] = extractelement <vscale x 1 x i1> [[MASK:%.*]], i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP5]], label [[TMP6:%.*]], label [[TMP10]] +; CHECK: 6: +; CHECK-NEXT: [[TMP7:%.*]] = mul i64 [[IV]], [[OFFSET:%.*]] +; CHECK-NEXT: [[TMP8:%.*]] = getelementptr i8, ptr [[BASE:%.*]], i64 [[TMP7]] +; CHECK-NEXT: [[TMP9:%.*]] = ptrtoint ptr [[TMP8]] to i64 +; CHECK-NEXT: call void @__asan_loadN(i64 [[TMP9]], i64 12) +; CHECK-NEXT: br label [[TMP10]] +; CHECK: 10: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP4]] 
+; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP11]] +; CHECK: 11: +; CHECK-NEXT: [[TMP36:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 3) @llvm.riscv.vlsseg3.mask.triscv.vector.tuple_nxv4i8_3t.p0.i64.nxv1i1(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) poison, ptr [[BASE]], i64 [[OFFSET]], <vscale x 1 x i1> [[MASK]], i64 [[VL]], i64 1, i64 5) ; CHECK-NEXT: [[TMP37:%.*]] = call <vscale x 1 x i32> @llvm.riscv.tuple.extract.nxv1i32.triscv.vector.tuple_nxv4i8_3t(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) [[TMP36]], i32 1) ; CHECK-NEXT: ret <vscale x 1 x i32> [[TMP37]] ; @@ -856,7 +1564,30 @@ define <vscale x 1 x i32> @test_vlsseg4_nxv1i32(ptr %base, i64 %offset, i64 %vl) ; CHECK-LABEL: @test_vlsseg4_nxv1i32( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: [[TMP48:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 4) @llvm.riscv.vlsseg4.triscv.vector.tuple_nxv4i8_4t.p0.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) poison, ptr [[BASE:%.*]], i64 [[OFFSET:%.*]], i64 [[VL:%.*]], i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP1]], label [[TMP2:%.*]], label [[TMP11:%.*]] +; CHECK: 2: +; CHECK-NEXT: [[TMP3:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP4:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP3]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP2]] ], [ [[IV_NEXT:%.*]], [[TMP10:%.*]] ] +; CHECK-NEXT: [[TMP5:%.*]] = extractelement <vscale x 1 x i1> splat (i1 true), i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP5]], label [[TMP6:%.*]], label [[TMP10]] +; CHECK: 6: +; CHECK-NEXT: [[TMP7:%.*]] = mul i64 [[IV]], [[OFFSET:%.*]] +; CHECK-NEXT: [[TMP8:%.*]] = getelementptr i8, ptr [[BASE:%.*]], i64 [[TMP7]] +; CHECK-NEXT: [[TMP9:%.*]] = ptrtoint ptr [[TMP8]] to i64 +; CHECK-NEXT: call void @__asan_loadN(i64 [[TMP9]], i64 16) +; CHECK-NEXT: br label [[TMP10]] +; CHECK: 10: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP4]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP11]] +; CHECK: 11: +; CHECK-NEXT: [[TMP48:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 4) @llvm.riscv.vlsseg4.triscv.vector.tuple_nxv4i8_4t.p0.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) poison, ptr [[BASE]], i64 [[OFFSET]], i64 [[VL]], i64 5) ; CHECK-NEXT: [[TMP49:%.*]] = call <vscale x 1 x i32> @llvm.riscv.tuple.extract.nxv1i32.triscv.vector.tuple_nxv4i8_4t(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) [[TMP48]], i32 1) ; CHECK-NEXT: ret <vscale x 1 x i32> [[TMP49]] ; @@ -870,7 +1601,30 @@ define <vscale x 1 x i32> @test_vlsseg4_mask_nxv1i32(ptr %base, i64 %offset, i64 ; CHECK-LABEL: @test_vlsseg4_mask_nxv1i32( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: [[TMP48:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 4) @llvm.riscv.vlsseg4.mask.triscv.vector.tuple_nxv4i8_4t.p0.i64.nxv1i1(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) poison, ptr [[BASE:%.*]], i64 [[OFFSET:%.*]], <vscale x 1 x i1> [[MASK:%.*]], i64 [[VL:%.*]], i64 1, i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 
[[TMP1]], label [[TMP2:%.*]], label [[TMP11:%.*]] +; CHECK: 2: +; CHECK-NEXT: [[TMP3:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP4:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP3]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP2]] ], [ [[IV_NEXT:%.*]], [[TMP10:%.*]] ] +; CHECK-NEXT: [[TMP5:%.*]] = extractelement <vscale x 1 x i1> [[MASK:%.*]], i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP5]], label [[TMP6:%.*]], label [[TMP10]] +; CHECK: 6: +; CHECK-NEXT: [[TMP7:%.*]] = mul i64 [[IV]], [[OFFSET:%.*]] +; CHECK-NEXT: [[TMP8:%.*]] = getelementptr i8, ptr [[BASE:%.*]], i64 [[TMP7]] +; CHECK-NEXT: [[TMP9:%.*]] = ptrtoint ptr [[TMP8]] to i64 +; CHECK-NEXT: call void @__asan_loadN(i64 [[TMP9]], i64 16) +; CHECK-NEXT: br label [[TMP10]] +; CHECK: 10: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP4]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP11]] +; CHECK: 11: +; CHECK-NEXT: [[TMP48:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 4) @llvm.riscv.vlsseg4.mask.triscv.vector.tuple_nxv4i8_4t.p0.i64.nxv1i1(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) poison, ptr [[BASE]], i64 [[OFFSET]], <vscale x 1 x i1> [[MASK]], i64 [[VL]], i64 1, i64 5) ; CHECK-NEXT: [[TMP49:%.*]] = call <vscale x 1 x i32> @llvm.riscv.tuple.extract.nxv1i32.triscv.vector.tuple_nxv4i8_4t(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) [[TMP48]], i32 1) ; CHECK-NEXT: ret <vscale x 1 x i32> [[TMP49]] ; @@ -888,7 +1642,30 @@ define <vscale x 1 x i32> @test_vlsseg5_nxv1i32(ptr %base, i64 %offset, i64 %vl) ; CHECK-LABEL: @test_vlsseg5_nxv1i32( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: [[TMP60:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 5) @llvm.riscv.vlsseg5.triscv.vector.tuple_nxv4i8_5t.p0.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) poison, ptr [[BASE:%.*]], i64 [[OFFSET:%.*]], i64 [[VL:%.*]], i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP1]], label [[TMP2:%.*]], label [[TMP11:%.*]] +; CHECK: 2: +; CHECK-NEXT: [[TMP3:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP4:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP3]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP2]] ], [ [[IV_NEXT:%.*]], [[TMP10:%.*]] ] +; CHECK-NEXT: [[TMP5:%.*]] = extractelement <vscale x 1 x i1> splat (i1 true), i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP5]], label [[TMP6:%.*]], label [[TMP10]] +; CHECK: 6: +; CHECK-NEXT: [[TMP7:%.*]] = mul i64 [[IV]], [[OFFSET:%.*]] +; CHECK-NEXT: [[TMP8:%.*]] = getelementptr i8, ptr [[BASE:%.*]], i64 [[TMP7]] +; CHECK-NEXT: [[TMP9:%.*]] = ptrtoint ptr [[TMP8]] to i64 +; CHECK-NEXT: call void @__asan_loadN(i64 [[TMP9]], i64 20) +; CHECK-NEXT: br label [[TMP10]] +; CHECK: 10: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP4]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP11]] +; CHECK: 11: +; CHECK-NEXT: [[TMP60:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 5) @llvm.riscv.vlsseg5.triscv.vector.tuple_nxv4i8_5t.p0.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) poison, 
ptr [[BASE]], i64 [[OFFSET]], i64 [[VL]], i64 5) ; CHECK-NEXT: [[TMP61:%.*]] = call <vscale x 1 x i32> @llvm.riscv.tuple.extract.nxv1i32.triscv.vector.tuple_nxv4i8_5t(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) [[TMP60]], i32 1) ; CHECK-NEXT: ret <vscale x 1 x i32> [[TMP61]] ; @@ -902,7 +1679,30 @@ define <vscale x 1 x i32> @test_vlsseg5_mask_nxv1i32(ptr %base, i64 %offset, i64 ; CHECK-LABEL: @test_vlsseg5_mask_nxv1i32( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: [[TMP60:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 5) @llvm.riscv.vlsseg5.mask.triscv.vector.tuple_nxv4i8_5t.p0.i64.nxv1i1(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) poison, ptr [[BASE:%.*]], i64 [[OFFSET:%.*]], <vscale x 1 x i1> [[MASK:%.*]], i64 [[VL:%.*]], i64 1, i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP1]], label [[TMP2:%.*]], label [[TMP11:%.*]] +; CHECK: 2: +; CHECK-NEXT: [[TMP3:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP4:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP3]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP2]] ], [ [[IV_NEXT:%.*]], [[TMP10:%.*]] ] +; CHECK-NEXT: [[TMP5:%.*]] = extractelement <vscale x 1 x i1> [[MASK:%.*]], i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP5]], label [[TMP6:%.*]], label [[TMP10]] +; CHECK: 6: +; CHECK-NEXT: [[TMP7:%.*]] = mul i64 [[IV]], [[OFFSET:%.*]] +; CHECK-NEXT: [[TMP8:%.*]] = getelementptr i8, ptr [[BASE:%.*]], i64 [[TMP7]] +; CHECK-NEXT: [[TMP9:%.*]] = ptrtoint ptr [[TMP8]] to i64 +; CHECK-NEXT: call void @__asan_loadN(i64 [[TMP9]], i64 20) +; CHECK-NEXT: br label [[TMP10]] +; CHECK: 10: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP4]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP11]] +; CHECK: 11: +; CHECK-NEXT: [[TMP60:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 5) @llvm.riscv.vlsseg5.mask.triscv.vector.tuple_nxv4i8_5t.p0.i64.nxv1i1(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) poison, ptr [[BASE]], i64 [[OFFSET]], <vscale x 1 x i1> [[MASK]], i64 [[VL]], i64 1, i64 5) ; CHECK-NEXT: [[TMP61:%.*]] = call <vscale x 1 x i32> @llvm.riscv.tuple.extract.nxv1i32.triscv.vector.tuple_nxv4i8_5t(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) [[TMP60]], i32 1) ; CHECK-NEXT: ret <vscale x 1 x i32> [[TMP61]] ; @@ -920,7 +1720,30 @@ define <vscale x 1 x i32> @test_vlsseg6_nxv1i32(ptr %base, i64 %offset, i64 %vl) ; CHECK-LABEL: @test_vlsseg6_nxv1i32( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: [[TMP72:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 6) @llvm.riscv.vlsseg6.triscv.vector.tuple_nxv4i8_6t.p0.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) poison, ptr [[BASE:%.*]], i64 [[OFFSET:%.*]], i64 [[VL:%.*]], i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP1]], label [[TMP2:%.*]], label [[TMP11:%.*]] +; CHECK: 2: +; CHECK-NEXT: [[TMP3:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP4:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP3]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP2]] ], [ [[IV_NEXT:%.*]], [[TMP10:%.*]] ] +; CHECK-NEXT: 
[[TMP5:%.*]] = extractelement <vscale x 1 x i1> splat (i1 true), i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP5]], label [[TMP6:%.*]], label [[TMP10]] +; CHECK: 6: +; CHECK-NEXT: [[TMP7:%.*]] = mul i64 [[IV]], [[OFFSET:%.*]] +; CHECK-NEXT: [[TMP8:%.*]] = getelementptr i8, ptr [[BASE:%.*]], i64 [[TMP7]] +; CHECK-NEXT: [[TMP9:%.*]] = ptrtoint ptr [[TMP8]] to i64 +; CHECK-NEXT: call void @__asan_loadN(i64 [[TMP9]], i64 24) +; CHECK-NEXT: br label [[TMP10]] +; CHECK: 10: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP4]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP11]] +; CHECK: 11: +; CHECK-NEXT: [[TMP72:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 6) @llvm.riscv.vlsseg6.triscv.vector.tuple_nxv4i8_6t.p0.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) poison, ptr [[BASE]], i64 [[OFFSET]], i64 [[VL]], i64 5) ; CHECK-NEXT: [[TMP73:%.*]] = call <vscale x 1 x i32> @llvm.riscv.tuple.extract.nxv1i32.triscv.vector.tuple_nxv4i8_6t(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) [[TMP72]], i32 1) ; CHECK-NEXT: ret <vscale x 1 x i32> [[TMP73]] ; @@ -934,7 +1757,30 @@ define <vscale x 1 x i32> @test_vlsseg6_mask_nxv1i32(ptr %base, i64 %offset, i64 ; CHECK-LABEL: @test_vlsseg6_mask_nxv1i32( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: [[TMP72:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 6) @llvm.riscv.vlsseg6.mask.triscv.vector.tuple_nxv4i8_6t.p0.i64.nxv1i1(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) poison, ptr [[BASE:%.*]], i64 [[OFFSET:%.*]], <vscale x 1 x i1> [[MASK:%.*]], i64 [[VL:%.*]], i64 1, i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP1]], label [[TMP2:%.*]], label [[TMP11:%.*]] +; CHECK: 2: +; CHECK-NEXT: [[TMP3:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP4:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP3]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP2]] ], [ [[IV_NEXT:%.*]], [[TMP10:%.*]] ] +; CHECK-NEXT: [[TMP5:%.*]] = extractelement <vscale x 1 x i1> [[MASK:%.*]], i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP5]], label [[TMP6:%.*]], label [[TMP10]] +; CHECK: 6: +; CHECK-NEXT: [[TMP7:%.*]] = mul i64 [[IV]], [[OFFSET:%.*]] +; CHECK-NEXT: [[TMP8:%.*]] = getelementptr i8, ptr [[BASE:%.*]], i64 [[TMP7]] +; CHECK-NEXT: [[TMP9:%.*]] = ptrtoint ptr [[TMP8]] to i64 +; CHECK-NEXT: call void @__asan_loadN(i64 [[TMP9]], i64 24) +; CHECK-NEXT: br label [[TMP10]] +; CHECK: 10: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP4]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP11]] +; CHECK: 11: +; CHECK-NEXT: [[TMP72:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 6) @llvm.riscv.vlsseg6.mask.triscv.vector.tuple_nxv4i8_6t.p0.i64.nxv1i1(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) poison, ptr [[BASE]], i64 [[OFFSET]], <vscale x 1 x i1> [[MASK]], i64 [[VL]], i64 1, i64 5) ; CHECK-NEXT: [[TMP73:%.*]] = call <vscale x 1 x i32> @llvm.riscv.tuple.extract.nxv1i32.triscv.vector.tuple_nxv4i8_6t(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) [[TMP72]], i32 1) ; CHECK-NEXT: ret <vscale x 1 x i32> [[TMP73]] ; @@ -952,7 +1798,30 @@ 
define <vscale x 1 x i32> @test_vlsseg7_nxv1i32(ptr %base, i64 %offset, i64 %vl) ; CHECK-LABEL: @test_vlsseg7_nxv1i32( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: [[TMP84:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 7) @llvm.riscv.vlsseg7.triscv.vector.tuple_nxv4i8_7t.p0.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) poison, ptr [[BASE:%.*]], i64 [[OFFSET:%.*]], i64 [[VL:%.*]], i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP1]], label [[TMP2:%.*]], label [[TMP11:%.*]] +; CHECK: 2: +; CHECK-NEXT: [[TMP3:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP4:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP3]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP2]] ], [ [[IV_NEXT:%.*]], [[TMP10:%.*]] ] +; CHECK-NEXT: [[TMP5:%.*]] = extractelement <vscale x 1 x i1> splat (i1 true), i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP5]], label [[TMP6:%.*]], label [[TMP10]] +; CHECK: 6: +; CHECK-NEXT: [[TMP7:%.*]] = mul i64 [[IV]], [[OFFSET:%.*]] +; CHECK-NEXT: [[TMP8:%.*]] = getelementptr i8, ptr [[BASE:%.*]], i64 [[TMP7]] +; CHECK-NEXT: [[TMP9:%.*]] = ptrtoint ptr [[TMP8]] to i64 +; CHECK-NEXT: call void @__asan_loadN(i64 [[TMP9]], i64 28) +; CHECK-NEXT: br label [[TMP10]] +; CHECK: 10: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP4]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP11]] +; CHECK: 11: +; CHECK-NEXT: [[TMP84:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 7) @llvm.riscv.vlsseg7.triscv.vector.tuple_nxv4i8_7t.p0.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) poison, ptr [[BASE]], i64 [[OFFSET]], i64 [[VL]], i64 5) ; CHECK-NEXT: [[TMP85:%.*]] = call <vscale x 1 x i32> @llvm.riscv.tuple.extract.nxv1i32.triscv.vector.tuple_nxv4i8_7t(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) [[TMP84]], i32 1) ; CHECK-NEXT: ret <vscale x 1 x i32> [[TMP85]] ; @@ -966,7 +1835,30 @@ define <vscale x 1 x i32> @test_vlsseg7_mask_nxv1i32(ptr %base, i64 %offset, i64 ; CHECK-LABEL: @test_vlsseg7_mask_nxv1i32( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: [[TMP84:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 7) @llvm.riscv.vlsseg7.mask.triscv.vector.tuple_nxv4i8_7t.p0.i64.nxv1i1(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) poison, ptr [[BASE:%.*]], i64 [[OFFSET:%.*]], <vscale x 1 x i1> [[MASK:%.*]], i64 [[VL:%.*]], i64 1, i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP1]], label [[TMP2:%.*]], label [[TMP11:%.*]] +; CHECK: 2: +; CHECK-NEXT: [[TMP3:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP4:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP3]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP2]] ], [ [[IV_NEXT:%.*]], [[TMP10:%.*]] ] +; CHECK-NEXT: [[TMP5:%.*]] = extractelement <vscale x 1 x i1> [[MASK:%.*]], i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP5]], label [[TMP6:%.*]], label [[TMP10]] +; CHECK: 6: +; CHECK-NEXT: [[TMP7:%.*]] = mul i64 [[IV]], [[OFFSET:%.*]] +; CHECK-NEXT: [[TMP8:%.*]] = getelementptr i8, ptr [[BASE:%.*]], i64 [[TMP7]] +; CHECK-NEXT: [[TMP9:%.*]] = ptrtoint ptr [[TMP8]] to i64 +; 
CHECK-NEXT: call void @__asan_loadN(i64 [[TMP9]], i64 28) +; CHECK-NEXT: br label [[TMP10]] +; CHECK: 10: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP4]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP11]] +; CHECK: 11: +; CHECK-NEXT: [[TMP84:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 7) @llvm.riscv.vlsseg7.mask.triscv.vector.tuple_nxv4i8_7t.p0.i64.nxv1i1(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) poison, ptr [[BASE]], i64 [[OFFSET]], <vscale x 1 x i1> [[MASK]], i64 [[VL]], i64 1, i64 5) ; CHECK-NEXT: [[TMP85:%.*]] = call <vscale x 1 x i32> @llvm.riscv.tuple.extract.nxv1i32.triscv.vector.tuple_nxv4i8_7t(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) [[TMP84]], i32 1) ; CHECK-NEXT: ret <vscale x 1 x i32> [[TMP85]] ; @@ -984,7 +1876,30 @@ define <vscale x 1 x i32> @test_vlsseg8_nxv1i32(ptr %base, i64 %offset, i64 %vl) ; CHECK-LABEL: @test_vlsseg8_nxv1i32( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: [[TMP96:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 8) @llvm.riscv.vlsseg8.triscv.vector.tuple_nxv4i8_8t.p0.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) poison, ptr [[BASE:%.*]], i64 [[OFFSET:%.*]], i64 [[VL:%.*]], i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP1]], label [[TMP2:%.*]], label [[TMP11:%.*]] +; CHECK: 2: +; CHECK-NEXT: [[TMP3:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP4:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP3]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP2]] ], [ [[IV_NEXT:%.*]], [[TMP10:%.*]] ] +; CHECK-NEXT: [[TMP5:%.*]] = extractelement <vscale x 1 x i1> splat (i1 true), i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP5]], label [[TMP6:%.*]], label [[TMP10]] +; CHECK: 6: +; CHECK-NEXT: [[TMP7:%.*]] = mul i64 [[IV]], [[OFFSET:%.*]] +; CHECK-NEXT: [[TMP8:%.*]] = getelementptr i8, ptr [[BASE:%.*]], i64 [[TMP7]] +; CHECK-NEXT: [[TMP9:%.*]] = ptrtoint ptr [[TMP8]] to i64 +; CHECK-NEXT: call void @__asan_loadN(i64 [[TMP9]], i64 32) +; CHECK-NEXT: br label [[TMP10]] +; CHECK: 10: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP4]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP11]] +; CHECK: 11: +; CHECK-NEXT: [[TMP96:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 8) @llvm.riscv.vlsseg8.triscv.vector.tuple_nxv4i8_8t.p0.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) poison, ptr [[BASE]], i64 [[OFFSET]], i64 [[VL]], i64 5) ; CHECK-NEXT: [[TMP97:%.*]] = call <vscale x 1 x i32> @llvm.riscv.tuple.extract.nxv1i32.triscv.vector.tuple_nxv4i8_8t(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) [[TMP96]], i32 1) ; CHECK-NEXT: ret <vscale x 1 x i32> [[TMP97]] ; @@ -998,7 +1913,30 @@ define <vscale x 1 x i32> @test_vlsseg8_mask_nxv1i32(ptr %base, i64 %offset, i64 ; CHECK-LABEL: @test_vlsseg8_mask_nxv1i32( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: [[TMP96:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 8) 
@llvm.riscv.vlsseg8.mask.triscv.vector.tuple_nxv4i8_8t.p0.i64.nxv1i1(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) poison, ptr [[BASE:%.*]], i64 [[OFFSET:%.*]], <vscale x 1 x i1> [[MASK:%.*]], i64 [[VL:%.*]], i64 1, i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP1]], label [[TMP2:%.*]], label [[TMP11:%.*]] +; CHECK: 2: +; CHECK-NEXT: [[TMP3:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP4:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP3]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP2]] ], [ [[IV_NEXT:%.*]], [[TMP10:%.*]] ] +; CHECK-NEXT: [[TMP5:%.*]] = extractelement <vscale x 1 x i1> [[MASK:%.*]], i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP5]], label [[TMP6:%.*]], label [[TMP10]] +; CHECK: 6: +; CHECK-NEXT: [[TMP7:%.*]] = mul i64 [[IV]], [[OFFSET:%.*]] +; CHECK-NEXT: [[TMP8:%.*]] = getelementptr i8, ptr [[BASE:%.*]], i64 [[TMP7]] +; CHECK-NEXT: [[TMP9:%.*]] = ptrtoint ptr [[TMP8]] to i64 +; CHECK-NEXT: call void @__asan_loadN(i64 [[TMP9]], i64 32) +; CHECK-NEXT: br label [[TMP10]] +; CHECK: 10: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP4]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP11]] +; CHECK: 11: +; CHECK-NEXT: [[TMP96:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 8) @llvm.riscv.vlsseg8.mask.triscv.vector.tuple_nxv4i8_8t.p0.i64.nxv1i1(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) poison, ptr [[BASE]], i64 [[OFFSET]], <vscale x 1 x i1> [[MASK]], i64 [[VL]], i64 1, i64 5) ; CHECK-NEXT: [[TMP97:%.*]] = call <vscale x 1 x i32> @llvm.riscv.tuple.extract.nxv1i32.triscv.vector.tuple_nxv4i8_8t(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) [[TMP96]], i32 1) ; CHECK-NEXT: ret <vscale x 1 x i32> [[TMP97]] ; @@ -1016,7 +1954,30 @@ define void @test_vssseg2_nxv1i32(target("riscv.vector.tuple", <vscale x 4 x i8> ; CHECK-LABEL: @test_vssseg2_nxv1i32( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: tail call void @llvm.riscv.vssseg2.triscv.vector.tuple_nxv4i8_2t.p0.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) [[VAL:%.*]], ptr [[BASE:%.*]], i64 [[OFFSET:%.*]], i64 [[VL:%.*]], i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP1]], label [[TMP2:%.*]], label [[TMP11:%.*]] +; CHECK: 2: +; CHECK-NEXT: [[TMP3:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP4:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP3]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP2]] ], [ [[IV_NEXT:%.*]], [[TMP10:%.*]] ] +; CHECK-NEXT: [[TMP5:%.*]] = extractelement <vscale x 1 x i1> splat (i1 true), i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP5]], label [[TMP6:%.*]], label [[TMP10]] +; CHECK: 6: +; CHECK-NEXT: [[TMP7:%.*]] = mul i64 [[IV]], [[OFFSET:%.*]] +; CHECK-NEXT: [[TMP8:%.*]] = getelementptr i8, ptr [[BASE:%.*]], i64 [[TMP7]] +; CHECK-NEXT: [[TMP9:%.*]] = ptrtoint ptr [[TMP8]] to i64 +; CHECK-NEXT: call void @__asan_storeN(i64 [[TMP9]], i64 8) +; CHECK-NEXT: br label [[TMP10]] +; CHECK: 10: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP4]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; 
CHECK-NEXT: br label [[TMP11]] +; CHECK: 11: +; CHECK-NEXT: tail call void @llvm.riscv.vssseg2.triscv.vector.tuple_nxv4i8_2t.p0.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) [[VAL:%.*]], ptr [[BASE]], i64 [[OFFSET]], i64 [[VL]], i64 5) ; CHECK-NEXT: ret void ; entry: @@ -1028,7 +1989,30 @@ define void @test_vssseg2_mask_nxv1i32(target("riscv.vector.tuple", <vscale x 4 ; CHECK-LABEL: @test_vssseg2_mask_nxv1i32( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: tail call void @llvm.riscv.vssseg2.mask.triscv.vector.tuple_nxv4i8_2t.p0.i64.nxv1i1(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) [[VAL:%.*]], ptr [[BASE:%.*]], i64 [[OFFSET:%.*]], <vscale x 1 x i1> [[MASK:%.*]], i64 [[VL:%.*]], i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP1]], label [[TMP2:%.*]], label [[TMP11:%.*]] +; CHECK: 2: +; CHECK-NEXT: [[TMP3:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP4:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP3]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP2]] ], [ [[IV_NEXT:%.*]], [[TMP10:%.*]] ] +; CHECK-NEXT: [[TMP5:%.*]] = extractelement <vscale x 1 x i1> [[MASK:%.*]], i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP5]], label [[TMP6:%.*]], label [[TMP10]] +; CHECK: 6: +; CHECK-NEXT: [[TMP7:%.*]] = mul i64 [[IV]], [[OFFSET:%.*]] +; CHECK-NEXT: [[TMP8:%.*]] = getelementptr i8, ptr [[BASE:%.*]], i64 [[TMP7]] +; CHECK-NEXT: [[TMP9:%.*]] = ptrtoint ptr [[TMP8]] to i64 +; CHECK-NEXT: call void @__asan_storeN(i64 [[TMP9]], i64 8) +; CHECK-NEXT: br label [[TMP10]] +; CHECK: 10: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP4]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP11]] +; CHECK: 11: +; CHECK-NEXT: tail call void @llvm.riscv.vssseg2.mask.triscv.vector.tuple_nxv4i8_2t.p0.i64.nxv1i1(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) [[VAL:%.*]], ptr [[BASE]], i64 [[OFFSET]], <vscale x 1 x i1> [[MASK]], i64 [[VL]], i64 5) ; CHECK-NEXT: ret void ; entry: @@ -1044,7 +2028,30 @@ define void @test_vssseg3_nxv1i32(target("riscv.vector.tuple", <vscale x 4 x i8> ; CHECK-LABEL: @test_vssseg3_nxv1i32( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: tail call void @llvm.riscv.vssseg3.triscv.vector.tuple_nxv4i8_3t.p0.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) [[VAL:%.*]], ptr [[BASE:%.*]], i64 [[OFFSET:%.*]], i64 [[VL:%.*]], i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP1]], label [[TMP2:%.*]], label [[TMP11:%.*]] +; CHECK: 2: +; CHECK-NEXT: [[TMP3:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP4:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP3]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP2]] ], [ [[IV_NEXT:%.*]], [[TMP10:%.*]] ] +; CHECK-NEXT: [[TMP5:%.*]] = extractelement <vscale x 1 x i1> splat (i1 true), i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP5]], label [[TMP6:%.*]], label [[TMP10]] +; CHECK: 6: +; CHECK-NEXT: [[TMP7:%.*]] = mul i64 [[IV]], [[OFFSET:%.*]] +; CHECK-NEXT: [[TMP8:%.*]] = getelementptr i8, ptr [[BASE:%.*]], i64 [[TMP7]] +; CHECK-NEXT: [[TMP9:%.*]] = ptrtoint ptr [[TMP8]] to i64 +; CHECK-NEXT: call void @__asan_storeN(i64 
[[TMP9]], i64 12) +; CHECK-NEXT: br label [[TMP10]] +; CHECK: 10: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP4]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP11]] +; CHECK: 11: +; CHECK-NEXT: tail call void @llvm.riscv.vssseg3.triscv.vector.tuple_nxv4i8_3t.p0.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) [[VAL:%.*]], ptr [[BASE]], i64 [[OFFSET]], i64 [[VL]], i64 5) ; CHECK-NEXT: ret void ; entry: @@ -1056,7 +2063,30 @@ define void @test_vssseg3_mask_nxv1i32(target("riscv.vector.tuple", <vscale x 4 ; CHECK-LABEL: @test_vssseg3_mask_nxv1i32( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: tail call void @llvm.riscv.vssseg3.mask.triscv.vector.tuple_nxv4i8_3t.p0.i64.nxv1i1(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) [[VAL:%.*]], ptr [[BASE:%.*]], i64 [[OFFSET:%.*]], <vscale x 1 x i1> [[MASK:%.*]], i64 [[VL:%.*]], i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP1]], label [[TMP2:%.*]], label [[TMP11:%.*]] +; CHECK: 2: +; CHECK-NEXT: [[TMP3:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP4:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP3]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP2]] ], [ [[IV_NEXT:%.*]], [[TMP10:%.*]] ] +; CHECK-NEXT: [[TMP5:%.*]] = extractelement <vscale x 1 x i1> [[MASK:%.*]], i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP5]], label [[TMP6:%.*]], label [[TMP10]] +; CHECK: 6: +; CHECK-NEXT: [[TMP7:%.*]] = mul i64 [[IV]], [[OFFSET:%.*]] +; CHECK-NEXT: [[TMP8:%.*]] = getelementptr i8, ptr [[BASE:%.*]], i64 [[TMP7]] +; CHECK-NEXT: [[TMP9:%.*]] = ptrtoint ptr [[TMP8]] to i64 +; CHECK-NEXT: call void @__asan_storeN(i64 [[TMP9]], i64 12) +; CHECK-NEXT: br label [[TMP10]] +; CHECK: 10: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP4]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP11]] +; CHECK: 11: +; CHECK-NEXT: tail call void @llvm.riscv.vssseg3.mask.triscv.vector.tuple_nxv4i8_3t.p0.i64.nxv1i1(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) [[VAL:%.*]], ptr [[BASE]], i64 [[OFFSET]], <vscale x 1 x i1> [[MASK]], i64 [[VL]], i64 5) ; CHECK-NEXT: ret void ; entry: @@ -1072,7 +2102,30 @@ define void @test_vssseg4_nxv1i32(target("riscv.vector.tuple", <vscale x 4 x i8> ; CHECK-LABEL: @test_vssseg4_nxv1i32( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: tail call void @llvm.riscv.vssseg4.triscv.vector.tuple_nxv4i8_4t.p0.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) [[VAL:%.*]], ptr [[BASE:%.*]], i64 [[OFFSET:%.*]], i64 [[VL:%.*]], i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP1]], label [[TMP2:%.*]], label [[TMP11:%.*]] +; CHECK: 2: +; CHECK-NEXT: [[TMP3:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP4:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP3]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP2]] ], [ [[IV_NEXT:%.*]], [[TMP10:%.*]] ] +; CHECK-NEXT: [[TMP5:%.*]] = extractelement <vscale x 1 x i1> splat (i1 true), i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP5]], 
label [[TMP6:%.*]], label [[TMP10]] +; CHECK: 6: +; CHECK-NEXT: [[TMP7:%.*]] = mul i64 [[IV]], [[OFFSET:%.*]] +; CHECK-NEXT: [[TMP8:%.*]] = getelementptr i8, ptr [[BASE:%.*]], i64 [[TMP7]] +; CHECK-NEXT: [[TMP9:%.*]] = ptrtoint ptr [[TMP8]] to i64 +; CHECK-NEXT: call void @__asan_storeN(i64 [[TMP9]], i64 16) +; CHECK-NEXT: br label [[TMP10]] +; CHECK: 10: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP4]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP11]] +; CHECK: 11: +; CHECK-NEXT: tail call void @llvm.riscv.vssseg4.triscv.vector.tuple_nxv4i8_4t.p0.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) [[VAL:%.*]], ptr [[BASE]], i64 [[OFFSET]], i64 [[VL]], i64 5) ; CHECK-NEXT: ret void ; entry: @@ -1084,7 +2137,30 @@ define void @test_vssseg4_mask_nxv1i32(target("riscv.vector.tuple", <vscale x 4 ; CHECK-LABEL: @test_vssseg4_mask_nxv1i32( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: tail call void @llvm.riscv.vssseg4.mask.triscv.vector.tuple_nxv4i8_4t.p0.i64.nxv1i1(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) [[VAL:%.*]], ptr [[BASE:%.*]], i64 [[OFFSET:%.*]], <vscale x 1 x i1> [[MASK:%.*]], i64 [[VL:%.*]], i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP1]], label [[TMP2:%.*]], label [[TMP11:%.*]] +; CHECK: 2: +; CHECK-NEXT: [[TMP3:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP4:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP3]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP2]] ], [ [[IV_NEXT:%.*]], [[TMP10:%.*]] ] +; CHECK-NEXT: [[TMP5:%.*]] = extractelement <vscale x 1 x i1> [[MASK:%.*]], i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP5]], label [[TMP6:%.*]], label [[TMP10]] +; CHECK: 6: +; CHECK-NEXT: [[TMP7:%.*]] = mul i64 [[IV]], [[OFFSET:%.*]] +; CHECK-NEXT: [[TMP8:%.*]] = getelementptr i8, ptr [[BASE:%.*]], i64 [[TMP7]] +; CHECK-NEXT: [[TMP9:%.*]] = ptrtoint ptr [[TMP8]] to i64 +; CHECK-NEXT: call void @__asan_storeN(i64 [[TMP9]], i64 16) +; CHECK-NEXT: br label [[TMP10]] +; CHECK: 10: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP4]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP11]] +; CHECK: 11: +; CHECK-NEXT: tail call void @llvm.riscv.vssseg4.mask.triscv.vector.tuple_nxv4i8_4t.p0.i64.nxv1i1(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) [[VAL:%.*]], ptr [[BASE]], i64 [[OFFSET]], <vscale x 1 x i1> [[MASK]], i64 [[VL]], i64 5) ; CHECK-NEXT: ret void ; entry: @@ -1100,7 +2176,30 @@ define void @test_vssseg5_nxv1i32(target("riscv.vector.tuple", <vscale x 4 x i8> ; CHECK-LABEL: @test_vssseg5_nxv1i32( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: tail call void @llvm.riscv.vssseg5.triscv.vector.tuple_nxv4i8_5t.p0.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) [[VAL:%.*]], ptr [[BASE:%.*]], i64 [[OFFSET:%.*]], i64 [[VL:%.*]], i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP1]], label [[TMP2:%.*]], label [[TMP11:%.*]] +; CHECK: 2: +; CHECK-NEXT: [[TMP3:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP4:%.*]] = call i64 
@llvm.umin.i64(i64 [[VL]], i64 [[TMP3]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP2]] ], [ [[IV_NEXT:%.*]], [[TMP10:%.*]] ] +; CHECK-NEXT: [[TMP5:%.*]] = extractelement <vscale x 1 x i1> splat (i1 true), i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP5]], label [[TMP6:%.*]], label [[TMP10]] +; CHECK: 6: +; CHECK-NEXT: [[TMP7:%.*]] = mul i64 [[IV]], [[OFFSET:%.*]] +; CHECK-NEXT: [[TMP8:%.*]] = getelementptr i8, ptr [[BASE:%.*]], i64 [[TMP7]] +; CHECK-NEXT: [[TMP9:%.*]] = ptrtoint ptr [[TMP8]] to i64 +; CHECK-NEXT: call void @__asan_storeN(i64 [[TMP9]], i64 20) +; CHECK-NEXT: br label [[TMP10]] +; CHECK: 10: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP4]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP11]] +; CHECK: 11: +; CHECK-NEXT: tail call void @llvm.riscv.vssseg5.triscv.vector.tuple_nxv4i8_5t.p0.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) [[VAL:%.*]], ptr [[BASE]], i64 [[OFFSET]], i64 [[VL]], i64 5) ; CHECK-NEXT: ret void ; entry: @@ -1112,7 +2211,30 @@ define void @test_vssseg5_mask_nxv1i32(target("riscv.vector.tuple", <vscale x 4 ; CHECK-LABEL: @test_vssseg5_mask_nxv1i32( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: tail call void @llvm.riscv.vssseg5.mask.triscv.vector.tuple_nxv4i8_5t.p0.i64.nxv1i1(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) [[VAL:%.*]], ptr [[BASE:%.*]], i64 [[OFFSET:%.*]], <vscale x 1 x i1> [[MASK:%.*]], i64 [[VL:%.*]], i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP1]], label [[TMP2:%.*]], label [[TMP11:%.*]] +; CHECK: 2: +; CHECK-NEXT: [[TMP3:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP4:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP3]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP2]] ], [ [[IV_NEXT:%.*]], [[TMP10:%.*]] ] +; CHECK-NEXT: [[TMP5:%.*]] = extractelement <vscale x 1 x i1> [[MASK:%.*]], i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP5]], label [[TMP6:%.*]], label [[TMP10]] +; CHECK: 6: +; CHECK-NEXT: [[TMP7:%.*]] = mul i64 [[IV]], [[OFFSET:%.*]] +; CHECK-NEXT: [[TMP8:%.*]] = getelementptr i8, ptr [[BASE:%.*]], i64 [[TMP7]] +; CHECK-NEXT: [[TMP9:%.*]] = ptrtoint ptr [[TMP8]] to i64 +; CHECK-NEXT: call void @__asan_storeN(i64 [[TMP9]], i64 20) +; CHECK-NEXT: br label [[TMP10]] +; CHECK: 10: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP4]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP11]] +; CHECK: 11: +; CHECK-NEXT: tail call void @llvm.riscv.vssseg5.mask.triscv.vector.tuple_nxv4i8_5t.p0.i64.nxv1i1(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) [[VAL:%.*]], ptr [[BASE]], i64 [[OFFSET]], <vscale x 1 x i1> [[MASK]], i64 [[VL]], i64 5) ; CHECK-NEXT: ret void ; entry: @@ -1128,7 +2250,30 @@ define void @test_vssseg6_nxv1i32(target("riscv.vector.tuple", <vscale x 4 x i8> ; CHECK-LABEL: @test_vssseg6_nxv1i32( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: tail call void @llvm.riscv.vssseg6.triscv.vector.tuple_nxv4i8_6t.p0.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) [[VAL:%.*]], ptr 
[[BASE:%.*]], i64 [[OFFSET:%.*]], i64 [[VL:%.*]], i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP1]], label [[TMP2:%.*]], label [[TMP11:%.*]] +; CHECK: 2: +; CHECK-NEXT: [[TMP3:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP4:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP3]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP2]] ], [ [[IV_NEXT:%.*]], [[TMP10:%.*]] ] +; CHECK-NEXT: [[TMP5:%.*]] = extractelement <vscale x 1 x i1> splat (i1 true), i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP5]], label [[TMP6:%.*]], label [[TMP10]] +; CHECK: 6: +; CHECK-NEXT: [[TMP7:%.*]] = mul i64 [[IV]], [[OFFSET:%.*]] +; CHECK-NEXT: [[TMP8:%.*]] = getelementptr i8, ptr [[BASE:%.*]], i64 [[TMP7]] +; CHECK-NEXT: [[TMP9:%.*]] = ptrtoint ptr [[TMP8]] to i64 +; CHECK-NEXT: call void @__asan_storeN(i64 [[TMP9]], i64 24) +; CHECK-NEXT: br label [[TMP10]] +; CHECK: 10: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP4]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP11]] +; CHECK: 11: +; CHECK-NEXT: tail call void @llvm.riscv.vssseg6.triscv.vector.tuple_nxv4i8_6t.p0.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) [[VAL:%.*]], ptr [[BASE]], i64 [[OFFSET]], i64 [[VL]], i64 5) ; CHECK-NEXT: ret void ; entry: @@ -1140,7 +2285,30 @@ define void @test_vssseg6_mask_nxv1i32(target("riscv.vector.tuple", <vscale x 4 ; CHECK-LABEL: @test_vssseg6_mask_nxv1i32( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: tail call void @llvm.riscv.vssseg6.mask.triscv.vector.tuple_nxv4i8_6t.p0.i64.nxv1i1(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) [[VAL:%.*]], ptr [[BASE:%.*]], i64 [[OFFSET:%.*]], <vscale x 1 x i1> [[MASK:%.*]], i64 [[VL:%.*]], i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP1]], label [[TMP2:%.*]], label [[TMP11:%.*]] +; CHECK: 2: +; CHECK-NEXT: [[TMP3:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP4:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP3]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP2]] ], [ [[IV_NEXT:%.*]], [[TMP10:%.*]] ] +; CHECK-NEXT: [[TMP5:%.*]] = extractelement <vscale x 1 x i1> [[MASK:%.*]], i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP5]], label [[TMP6:%.*]], label [[TMP10]] +; CHECK: 6: +; CHECK-NEXT: [[TMP7:%.*]] = mul i64 [[IV]], [[OFFSET:%.*]] +; CHECK-NEXT: [[TMP8:%.*]] = getelementptr i8, ptr [[BASE:%.*]], i64 [[TMP7]] +; CHECK-NEXT: [[TMP9:%.*]] = ptrtoint ptr [[TMP8]] to i64 +; CHECK-NEXT: call void @__asan_storeN(i64 [[TMP9]], i64 24) +; CHECK-NEXT: br label [[TMP10]] +; CHECK: 10: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP4]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP11]] +; CHECK: 11: +; CHECK-NEXT: tail call void @llvm.riscv.vssseg6.mask.triscv.vector.tuple_nxv4i8_6t.p0.i64.nxv1i1(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) [[VAL:%.*]], ptr [[BASE]], i64 [[OFFSET]], <vscale x 1 x i1> [[MASK]], i64 [[VL]], i64 5) ; CHECK-NEXT: ret void ; entry: @@ -1156,7 +2324,30 @@ define void @test_vssseg7_nxv1i32(target("riscv.vector.tuple", <vscale x 4 x i8> ; 
CHECK-LABEL: @test_vssseg7_nxv1i32( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: tail call void @llvm.riscv.vssseg7.triscv.vector.tuple_nxv4i8_7t.p0.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) [[VAL:%.*]], ptr [[BASE:%.*]], i64 [[OFFSET:%.*]], i64 [[VL:%.*]], i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP1]], label [[TMP2:%.*]], label [[TMP11:%.*]] +; CHECK: 2: +; CHECK-NEXT: [[TMP3:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP4:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP3]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP2]] ], [ [[IV_NEXT:%.*]], [[TMP10:%.*]] ] +; CHECK-NEXT: [[TMP5:%.*]] = extractelement <vscale x 1 x i1> splat (i1 true), i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP5]], label [[TMP6:%.*]], label [[TMP10]] +; CHECK: 6: +; CHECK-NEXT: [[TMP7:%.*]] = mul i64 [[IV]], [[OFFSET:%.*]] +; CHECK-NEXT: [[TMP8:%.*]] = getelementptr i8, ptr [[BASE:%.*]], i64 [[TMP7]] +; CHECK-NEXT: [[TMP9:%.*]] = ptrtoint ptr [[TMP8]] to i64 +; CHECK-NEXT: call void @__asan_storeN(i64 [[TMP9]], i64 28) +; CHECK-NEXT: br label [[TMP10]] +; CHECK: 10: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP4]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP11]] +; CHECK: 11: +; CHECK-NEXT: tail call void @llvm.riscv.vssseg7.triscv.vector.tuple_nxv4i8_7t.p0.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) [[VAL:%.*]], ptr [[BASE]], i64 [[OFFSET]], i64 [[VL]], i64 5) ; CHECK-NEXT: ret void ; entry: @@ -1168,7 +2359,30 @@ define void @test_vssseg7_mask_nxv1i32(target("riscv.vector.tuple", <vscale x 4 ; CHECK-LABEL: @test_vssseg7_mask_nxv1i32( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: tail call void @llvm.riscv.vssseg7.mask.triscv.vector.tuple_nxv4i8_7t.p0.i64.nxv1i1(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) [[VAL:%.*]], ptr [[BASE:%.*]], i64 [[OFFSET:%.*]], <vscale x 1 x i1> [[MASK:%.*]], i64 [[VL:%.*]], i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP1]], label [[TMP2:%.*]], label [[TMP11:%.*]] +; CHECK: 2: +; CHECK-NEXT: [[TMP3:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP4:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP3]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP2]] ], [ [[IV_NEXT:%.*]], [[TMP10:%.*]] ] +; CHECK-NEXT: [[TMP5:%.*]] = extractelement <vscale x 1 x i1> [[MASK:%.*]], i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP5]], label [[TMP6:%.*]], label [[TMP10]] +; CHECK: 6: +; CHECK-NEXT: [[TMP7:%.*]] = mul i64 [[IV]], [[OFFSET:%.*]] +; CHECK-NEXT: [[TMP8:%.*]] = getelementptr i8, ptr [[BASE:%.*]], i64 [[TMP7]] +; CHECK-NEXT: [[TMP9:%.*]] = ptrtoint ptr [[TMP8]] to i64 +; CHECK-NEXT: call void @__asan_storeN(i64 [[TMP9]], i64 28) +; CHECK-NEXT: br label [[TMP10]] +; CHECK: 10: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP4]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP11]] +; CHECK: 11: +; CHECK-NEXT: tail call void 
@llvm.riscv.vssseg7.mask.triscv.vector.tuple_nxv4i8_7t.p0.i64.nxv1i1(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) [[VAL:%.*]], ptr [[BASE]], i64 [[OFFSET]], <vscale x 1 x i1> [[MASK]], i64 [[VL]], i64 5) ; CHECK-NEXT: ret void ; entry: @@ -1184,7 +2398,30 @@ define void @test_vssseg8_nxv1i32(target("riscv.vector.tuple", <vscale x 4 x i8> ; CHECK-LABEL: @test_vssseg8_nxv1i32( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: tail call void @llvm.riscv.vssseg8.triscv.vector.tuple_nxv4i8_8t.p0.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) [[VAL:%.*]], ptr [[BASE:%.*]], i64 [[OFFSET:%.*]], i64 [[VL:%.*]], i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP1]], label [[TMP2:%.*]], label [[TMP11:%.*]] +; CHECK: 2: +; CHECK-NEXT: [[TMP3:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP4:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP3]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP2]] ], [ [[IV_NEXT:%.*]], [[TMP10:%.*]] ] +; CHECK-NEXT: [[TMP5:%.*]] = extractelement <vscale x 1 x i1> splat (i1 true), i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP5]], label [[TMP6:%.*]], label [[TMP10]] +; CHECK: 6: +; CHECK-NEXT: [[TMP7:%.*]] = mul i64 [[IV]], [[OFFSET:%.*]] +; CHECK-NEXT: [[TMP8:%.*]] = getelementptr i8, ptr [[BASE:%.*]], i64 [[TMP7]] +; CHECK-NEXT: [[TMP9:%.*]] = ptrtoint ptr [[TMP8]] to i64 +; CHECK-NEXT: call void @__asan_storeN(i64 [[TMP9]], i64 32) +; CHECK-NEXT: br label [[TMP10]] +; CHECK: 10: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP4]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP11]] +; CHECK: 11: +; CHECK-NEXT: tail call void @llvm.riscv.vssseg8.triscv.vector.tuple_nxv4i8_8t.p0.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) [[VAL:%.*]], ptr [[BASE]], i64 [[OFFSET]], i64 [[VL]], i64 5) ; CHECK-NEXT: ret void ; entry: @@ -1196,7 +2433,30 @@ define void @test_vssseg8_mask_nxv1i32(target("riscv.vector.tuple", <vscale x 4 ; CHECK-LABEL: @test_vssseg8_mask_nxv1i32( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: tail call void @llvm.riscv.vssseg8.mask.triscv.vector.tuple_nxv4i8_8t.p0.i64.nxv1i1(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) [[VAL:%.*]], ptr [[BASE:%.*]], i64 [[OFFSET:%.*]], <vscale x 1 x i1> [[MASK:%.*]], i64 [[VL:%.*]], i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP1]], label [[TMP2:%.*]], label [[TMP11:%.*]] +; CHECK: 2: +; CHECK-NEXT: [[TMP3:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP4:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP3]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP2]] ], [ [[IV_NEXT:%.*]], [[TMP10:%.*]] ] +; CHECK-NEXT: [[TMP5:%.*]] = extractelement <vscale x 1 x i1> [[MASK:%.*]], i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP5]], label [[TMP6:%.*]], label [[TMP10]] +; CHECK: 6: +; CHECK-NEXT: [[TMP7:%.*]] = mul i64 [[IV]], [[OFFSET:%.*]] +; CHECK-NEXT: [[TMP8:%.*]] = getelementptr i8, ptr [[BASE:%.*]], i64 [[TMP7]] +; CHECK-NEXT: [[TMP9:%.*]] = ptrtoint ptr [[TMP8]] to i64 +; CHECK-NEXT: call void @__asan_storeN(i64 [[TMP9]], i64 32) +; CHECK-NEXT: br label [[TMP10]] +; CHECK: 10: +; CHECK-NEXT: 
[[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP4]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP11]] +; CHECK: 11: +; CHECK-NEXT: tail call void @llvm.riscv.vssseg8.mask.triscv.vector.tuple_nxv4i8_8t.p0.i64.nxv1i1(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) [[VAL:%.*]], ptr [[BASE]], i64 [[OFFSET]], <vscale x 1 x i1> [[MASK]], i64 [[VL]], i64 5) ; CHECK-NEXT: ret void ; entry: @@ -1687,7 +2947,31 @@ define <vscale x 1 x i32> @test_vloxseg2_nxv1i32_nxv1i16(ptr %base, <vscale x 1 ; CHECK-LABEL: @test_vloxseg2_nxv1i32_nxv1i16( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: [[TMP25:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 2) @llvm.riscv.vloxseg2.triscv.vector.tuple_nxv4i8_2t.p0.nxv1i16.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) poison, ptr [[BASE:%.*]], <vscale x 1 x i16> [[INDEX:%.*]], i64 [[VL:%.*]], i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = zext <vscale x 1 x i16> [[INDEX:%.*]] to <vscale x 1 x i64> +; CHECK-NEXT: [[TMP2:%.*]] = getelementptr i8, ptr [[BASE:%.*]], <vscale x 1 x i64> [[TMP1]] +; CHECK-NEXT: [[TMP3:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP3]], label [[TMP4:%.*]], label [[TMP12:%.*]] +; CHECK: 4: +; CHECK-NEXT: [[TMP5:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP6:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP5]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP4]] ], [ [[IV_NEXT:%.*]], [[TMP11:%.*]] ] +; CHECK-NEXT: [[TMP7:%.*]] = extractelement <vscale x 1 x i1> splat (i1 true), i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP7]], label [[TMP8:%.*]], label [[TMP11]] +; CHECK: 8: +; CHECK-NEXT: [[TMP9:%.*]] = extractelement <vscale x 1 x ptr> [[TMP2]], i64 [[IV]] +; CHECK-NEXT: [[TMP10:%.*]] = ptrtoint ptr [[TMP9]] to i64 +; CHECK-NEXT: call void @__asan_loadN(i64 [[TMP10]], i64 8) +; CHECK-NEXT: br label [[TMP11]] +; CHECK: 11: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP6]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP12]] +; CHECK: 12: +; CHECK-NEXT: [[TMP25:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 2) @llvm.riscv.vloxseg2.triscv.vector.tuple_nxv4i8_2t.p0.nxv1i16.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) poison, ptr [[BASE]], <vscale x 1 x i16> [[INDEX]], i64 [[VL]], i64 5) ; CHECK-NEXT: [[TMP26:%.*]] = call <vscale x 1 x i32> @llvm.riscv.tuple.extract.nxv1i32.triscv.vector.tuple_nxv4i8_2t(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) [[TMP25]], i32 1) ; CHECK-NEXT: ret <vscale x 1 x i32> [[TMP26]] ; @@ -1701,7 +2985,31 @@ define <vscale x 1 x i32> @test_vloxseg2_mask_nxv1i32_nxv1i16(ptr %base, <vscale ; CHECK-LABEL: @test_vloxseg2_mask_nxv1i32_nxv1i16( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: [[TMP25:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 2) @llvm.riscv.vloxseg2.mask.triscv.vector.tuple_nxv4i8_2t.p0.nxv1i16.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) poison, ptr [[BASE:%.*]], <vscale x 1 x i16> [[INDEX:%.*]], <vscale x 1 x i1> [[MASK:%.*]], i64 [[VL:%.*]], i64 1, i64 5) +; CHECK-NEXT: 
[[TMP1:%.*]] = zext <vscale x 1 x i16> [[INDEX:%.*]] to <vscale x 1 x i64> +; CHECK-NEXT: [[TMP2:%.*]] = getelementptr i8, ptr [[BASE:%.*]], <vscale x 1 x i64> [[TMP1]] +; CHECK-NEXT: [[TMP3:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP3]], label [[TMP4:%.*]], label [[TMP12:%.*]] +; CHECK: 4: +; CHECK-NEXT: [[TMP5:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP6:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP5]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP4]] ], [ [[IV_NEXT:%.*]], [[TMP11:%.*]] ] +; CHECK-NEXT: [[TMP7:%.*]] = extractelement <vscale x 1 x i1> [[MASK:%.*]], i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP7]], label [[TMP8:%.*]], label [[TMP11]] +; CHECK: 8: +; CHECK-NEXT: [[TMP9:%.*]] = extractelement <vscale x 1 x ptr> [[TMP2]], i64 [[IV]] +; CHECK-NEXT: [[TMP10:%.*]] = ptrtoint ptr [[TMP9]] to i64 +; CHECK-NEXT: call void @__asan_loadN(i64 [[TMP10]], i64 8) +; CHECK-NEXT: br label [[TMP11]] +; CHECK: 11: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP6]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP12]] +; CHECK: 12: +; CHECK-NEXT: [[TMP25:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 2) @llvm.riscv.vloxseg2.mask.triscv.vector.tuple_nxv4i8_2t.p0.nxv1i16.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) poison, ptr [[BASE]], <vscale x 1 x i16> [[INDEX]], <vscale x 1 x i1> [[MASK]], i64 [[VL]], i64 1, i64 5) ; CHECK-NEXT: [[TMP26:%.*]] = call <vscale x 1 x i32> @llvm.riscv.tuple.extract.nxv1i32.triscv.vector.tuple_nxv4i8_2t(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) [[TMP25]], i32 1) ; CHECK-NEXT: ret <vscale x 1 x i32> [[TMP26]] ; @@ -1719,7 +3027,31 @@ define <vscale x 1 x i32> @test_vloxseg3_nxv1i32_nxv1i16(ptr %base, <vscale x 1 ; CHECK-LABEL: @test_vloxseg3_nxv1i32_nxv1i16( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: [[TMP37:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 3) @llvm.riscv.vloxseg3.triscv.vector.tuple_nxv4i8_3t.p0.nxv1i16.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) poison, ptr [[BASE:%.*]], <vscale x 1 x i16> [[INDEX:%.*]], i64 [[VL:%.*]], i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = zext <vscale x 1 x i16> [[INDEX:%.*]] to <vscale x 1 x i64> +; CHECK-NEXT: [[TMP2:%.*]] = getelementptr i8, ptr [[BASE:%.*]], <vscale x 1 x i64> [[TMP1]] +; CHECK-NEXT: [[TMP3:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP3]], label [[TMP4:%.*]], label [[TMP12:%.*]] +; CHECK: 4: +; CHECK-NEXT: [[TMP5:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP6:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP5]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP4]] ], [ [[IV_NEXT:%.*]], [[TMP11:%.*]] ] +; CHECK-NEXT: [[TMP7:%.*]] = extractelement <vscale x 1 x i1> splat (i1 true), i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP7]], label [[TMP8:%.*]], label [[TMP11]] +; CHECK: 8: +; CHECK-NEXT: [[TMP9:%.*]] = extractelement <vscale x 1 x ptr> [[TMP2]], i64 [[IV]] +; CHECK-NEXT: [[TMP10:%.*]] = ptrtoint ptr [[TMP9]] to i64 +; CHECK-NEXT: call void @__asan_loadN(i64 [[TMP10]], i64 12) +; CHECK-NEXT: br label [[TMP11]] +; CHECK: 11: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 
[[IV_NEXT]], [[TMP6]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP12]] +; CHECK: 12: +; CHECK-NEXT: [[TMP37:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 3) @llvm.riscv.vloxseg3.triscv.vector.tuple_nxv4i8_3t.p0.nxv1i16.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) poison, ptr [[BASE]], <vscale x 1 x i16> [[INDEX]], i64 [[VL]], i64 5) ; CHECK-NEXT: [[TMP38:%.*]] = call <vscale x 1 x i32> @llvm.riscv.tuple.extract.nxv1i32.triscv.vector.tuple_nxv4i8_3t(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) [[TMP37]], i32 1) ; CHECK-NEXT: ret <vscale x 1 x i32> [[TMP38]] ; @@ -1733,7 +3065,31 @@ define <vscale x 1 x i32> @test_vloxseg3_mask_nxv1i32_nxv1i16(ptr %base, <vscale ; CHECK-LABEL: @test_vloxseg3_mask_nxv1i32_nxv1i16( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: [[TMP37:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 3) @llvm.riscv.vloxseg3.mask.triscv.vector.tuple_nxv4i8_3t.p0.nxv1i16.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) poison, ptr [[BASE:%.*]], <vscale x 1 x i16> [[INDEX:%.*]], <vscale x 1 x i1> [[MASK:%.*]], i64 [[VL:%.*]], i64 1, i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = zext <vscale x 1 x i16> [[INDEX:%.*]] to <vscale x 1 x i64> +; CHECK-NEXT: [[TMP2:%.*]] = getelementptr i8, ptr [[BASE:%.*]], <vscale x 1 x i64> [[TMP1]] +; CHECK-NEXT: [[TMP3:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP3]], label [[TMP4:%.*]], label [[TMP12:%.*]] +; CHECK: 4: +; CHECK-NEXT: [[TMP5:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP6:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP5]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP4]] ], [ [[IV_NEXT:%.*]], [[TMP11:%.*]] ] +; CHECK-NEXT: [[TMP7:%.*]] = extractelement <vscale x 1 x i1> [[MASK:%.*]], i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP7]], label [[TMP8:%.*]], label [[TMP11]] +; CHECK: 8: +; CHECK-NEXT: [[TMP9:%.*]] = extractelement <vscale x 1 x ptr> [[TMP2]], i64 [[IV]] +; CHECK-NEXT: [[TMP10:%.*]] = ptrtoint ptr [[TMP9]] to i64 +; CHECK-NEXT: call void @__asan_loadN(i64 [[TMP10]], i64 12) +; CHECK-NEXT: br label [[TMP11]] +; CHECK: 11: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP6]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP12]] +; CHECK: 12: +; CHECK-NEXT: [[TMP37:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 3) @llvm.riscv.vloxseg3.mask.triscv.vector.tuple_nxv4i8_3t.p0.nxv1i16.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) poison, ptr [[BASE]], <vscale x 1 x i16> [[INDEX]], <vscale x 1 x i1> [[MASK]], i64 [[VL]], i64 1, i64 5) ; CHECK-NEXT: [[TMP38:%.*]] = call <vscale x 1 x i32> @llvm.riscv.tuple.extract.nxv1i32.triscv.vector.tuple_nxv4i8_3t(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) [[TMP37]], i32 1) ; CHECK-NEXT: ret <vscale x 1 x i32> [[TMP38]] ; @@ -1751,7 +3107,31 @@ define <vscale x 1 x i32> @test_vloxseg4_nxv1i32_nxv1i16(ptr %base, <vscale x 1 ; CHECK-LABEL: @test_vloxseg4_nxv1i32_nxv1i16( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: [[TMP49:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 4) 
@llvm.riscv.vloxseg4.triscv.vector.tuple_nxv4i8_4t.p0.nxv1i16.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) poison, ptr [[BASE:%.*]], <vscale x 1 x i16> [[INDEX:%.*]], i64 [[VL:%.*]], i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = zext <vscale x 1 x i16> [[INDEX:%.*]] to <vscale x 1 x i64> +; CHECK-NEXT: [[TMP2:%.*]] = getelementptr i8, ptr [[BASE:%.*]], <vscale x 1 x i64> [[TMP1]] +; CHECK-NEXT: [[TMP3:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP3]], label [[TMP4:%.*]], label [[TMP12:%.*]] +; CHECK: 4: +; CHECK-NEXT: [[TMP5:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP6:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP5]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP4]] ], [ [[IV_NEXT:%.*]], [[TMP11:%.*]] ] +; CHECK-NEXT: [[TMP7:%.*]] = extractelement <vscale x 1 x i1> splat (i1 true), i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP7]], label [[TMP8:%.*]], label [[TMP11]] +; CHECK: 8: +; CHECK-NEXT: [[TMP9:%.*]] = extractelement <vscale x 1 x ptr> [[TMP2]], i64 [[IV]] +; CHECK-NEXT: [[TMP10:%.*]] = ptrtoint ptr [[TMP9]] to i64 +; CHECK-NEXT: call void @__asan_loadN(i64 [[TMP10]], i64 16) +; CHECK-NEXT: br label [[TMP11]] +; CHECK: 11: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP6]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP12]] +; CHECK: 12: +; CHECK-NEXT: [[TMP49:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 4) @llvm.riscv.vloxseg4.triscv.vector.tuple_nxv4i8_4t.p0.nxv1i16.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) poison, ptr [[BASE]], <vscale x 1 x i16> [[INDEX]], i64 [[VL]], i64 5) ; CHECK-NEXT: [[TMP50:%.*]] = call <vscale x 1 x i32> @llvm.riscv.tuple.extract.nxv1i32.triscv.vector.tuple_nxv4i8_4t(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) [[TMP49]], i32 1) ; CHECK-NEXT: ret <vscale x 1 x i32> [[TMP50]] ; @@ -1765,7 +3145,31 @@ define <vscale x 1 x i32> @test_vloxseg4_mask_nxv1i32_nxv1i16(ptr %base, <vscale ; CHECK-LABEL: @test_vloxseg4_mask_nxv1i32_nxv1i16( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: [[TMP49:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 4) @llvm.riscv.vloxseg4.mask.triscv.vector.tuple_nxv4i8_4t.p0.nxv1i16.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) poison, ptr [[BASE:%.*]], <vscale x 1 x i16> [[INDEX:%.*]], <vscale x 1 x i1> [[MASK:%.*]], i64 [[VL:%.*]], i64 1, i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = zext <vscale x 1 x i16> [[INDEX:%.*]] to <vscale x 1 x i64> +; CHECK-NEXT: [[TMP2:%.*]] = getelementptr i8, ptr [[BASE:%.*]], <vscale x 1 x i64> [[TMP1]] +; CHECK-NEXT: [[TMP3:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP3]], label [[TMP4:%.*]], label [[TMP12:%.*]] +; CHECK: 4: +; CHECK-NEXT: [[TMP5:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP6:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP5]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP4]] ], [ [[IV_NEXT:%.*]], [[TMP11:%.*]] ] +; CHECK-NEXT: [[TMP7:%.*]] = extractelement <vscale x 1 x i1> [[MASK:%.*]], i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP7]], label [[TMP8:%.*]], label [[TMP11]] +; CHECK: 8: +; CHECK-NEXT: [[TMP9:%.*]] = extractelement <vscale x 1 x ptr> [[TMP2]], i64 [[IV]] +; CHECK-NEXT: [[TMP10:%.*]] = ptrtoint ptr [[TMP9]] 
to i64 +; CHECK-NEXT: call void @__asan_loadN(i64 [[TMP10]], i64 16) +; CHECK-NEXT: br label [[TMP11]] +; CHECK: 11: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP6]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP12]] +; CHECK: 12: +; CHECK-NEXT: [[TMP49:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 4) @llvm.riscv.vloxseg4.mask.triscv.vector.tuple_nxv4i8_4t.p0.nxv1i16.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) poison, ptr [[BASE]], <vscale x 1 x i16> [[INDEX]], <vscale x 1 x i1> [[MASK]], i64 [[VL]], i64 1, i64 5) ; CHECK-NEXT: [[TMP50:%.*]] = call <vscale x 1 x i32> @llvm.riscv.tuple.extract.nxv1i32.triscv.vector.tuple_nxv4i8_4t(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) [[TMP49]], i32 1) ; CHECK-NEXT: ret <vscale x 1 x i32> [[TMP50]] ; @@ -1783,7 +3187,31 @@ define <vscale x 1 x i32> @test_vloxseg5_nxv1i32_nxv1i16(ptr %base, <vscale x 1 ; CHECK-LABEL: @test_vloxseg5_nxv1i32_nxv1i16( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: [[TMP61:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 5) @llvm.riscv.vloxseg5.triscv.vector.tuple_nxv4i8_5t.p0.nxv1i16.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) poison, ptr [[BASE:%.*]], <vscale x 1 x i16> [[INDEX:%.*]], i64 [[VL:%.*]], i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = zext <vscale x 1 x i16> [[INDEX:%.*]] to <vscale x 1 x i64> +; CHECK-NEXT: [[TMP2:%.*]] = getelementptr i8, ptr [[BASE:%.*]], <vscale x 1 x i64> [[TMP1]] +; CHECK-NEXT: [[TMP3:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP3]], label [[TMP4:%.*]], label [[TMP12:%.*]] +; CHECK: 4: +; CHECK-NEXT: [[TMP5:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP6:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP5]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP4]] ], [ [[IV_NEXT:%.*]], [[TMP11:%.*]] ] +; CHECK-NEXT: [[TMP7:%.*]] = extractelement <vscale x 1 x i1> splat (i1 true), i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP7]], label [[TMP8:%.*]], label [[TMP11]] +; CHECK: 8: +; CHECK-NEXT: [[TMP9:%.*]] = extractelement <vscale x 1 x ptr> [[TMP2]], i64 [[IV]] +; CHECK-NEXT: [[TMP10:%.*]] = ptrtoint ptr [[TMP9]] to i64 +; CHECK-NEXT: call void @__asan_loadN(i64 [[TMP10]], i64 20) +; CHECK-NEXT: br label [[TMP11]] +; CHECK: 11: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP6]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP12]] +; CHECK: 12: +; CHECK-NEXT: [[TMP61:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 5) @llvm.riscv.vloxseg5.triscv.vector.tuple_nxv4i8_5t.p0.nxv1i16.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) poison, ptr [[BASE]], <vscale x 1 x i16> [[INDEX]], i64 [[VL]], i64 5) ; CHECK-NEXT: [[TMP62:%.*]] = call <vscale x 1 x i32> @llvm.riscv.tuple.extract.nxv1i32.triscv.vector.tuple_nxv4i8_5t(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) [[TMP61]], i32 1) ; CHECK-NEXT: ret <vscale x 1 x i32> [[TMP62]] ; @@ -1797,7 +3225,31 @@ define <vscale x 1 x i32> @test_vloxseg5_mask_nxv1i32_nxv1i16(ptr %base, <vscale ; CHECK-LABEL: @test_vloxseg5_mask_nxv1i32_nxv1i16( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load 
i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: [[TMP61:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 5) @llvm.riscv.vloxseg5.mask.triscv.vector.tuple_nxv4i8_5t.p0.nxv1i16.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) poison, ptr [[BASE:%.*]], <vscale x 1 x i16> [[INDEX:%.*]], <vscale x 1 x i1> [[MASK:%.*]], i64 [[VL:%.*]], i64 1, i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = zext <vscale x 1 x i16> [[INDEX:%.*]] to <vscale x 1 x i64> +; CHECK-NEXT: [[TMP2:%.*]] = getelementptr i8, ptr [[BASE:%.*]], <vscale x 1 x i64> [[TMP1]] +; CHECK-NEXT: [[TMP3:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP3]], label [[TMP4:%.*]], label [[TMP12:%.*]] +; CHECK: 4: +; CHECK-NEXT: [[TMP5:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP6:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP5]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP4]] ], [ [[IV_NEXT:%.*]], [[TMP11:%.*]] ] +; CHECK-NEXT: [[TMP7:%.*]] = extractelement <vscale x 1 x i1> [[MASK:%.*]], i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP7]], label [[TMP8:%.*]], label [[TMP11]] +; CHECK: 8: +; CHECK-NEXT: [[TMP9:%.*]] = extractelement <vscale x 1 x ptr> [[TMP2]], i64 [[IV]] +; CHECK-NEXT: [[TMP10:%.*]] = ptrtoint ptr [[TMP9]] to i64 +; CHECK-NEXT: call void @__asan_loadN(i64 [[TMP10]], i64 20) +; CHECK-NEXT: br label [[TMP11]] +; CHECK: 11: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP6]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP12]] +; CHECK: 12: +; CHECK-NEXT: [[TMP61:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 5) @llvm.riscv.vloxseg5.mask.triscv.vector.tuple_nxv4i8_5t.p0.nxv1i16.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) poison, ptr [[BASE]], <vscale x 1 x i16> [[INDEX]], <vscale x 1 x i1> [[MASK]], i64 [[VL]], i64 1, i64 5) ; CHECK-NEXT: [[TMP62:%.*]] = call <vscale x 1 x i32> @llvm.riscv.tuple.extract.nxv1i32.triscv.vector.tuple_nxv4i8_5t(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) [[TMP61]], i32 1) ; CHECK-NEXT: ret <vscale x 1 x i32> [[TMP62]] ; @@ -1815,7 +3267,31 @@ define <vscale x 1 x i32> @test_vloxseg6_nxv1i32_nxv1i16(ptr %base, <vscale x 1 ; CHECK-LABEL: @test_vloxseg6_nxv1i32_nxv1i16( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: [[TMP73:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 6) @llvm.riscv.vloxseg6.triscv.vector.tuple_nxv4i8_6t.p0.nxv1i16.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) poison, ptr [[BASE:%.*]], <vscale x 1 x i16> [[INDEX:%.*]], i64 [[VL:%.*]], i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = zext <vscale x 1 x i16> [[INDEX:%.*]] to <vscale x 1 x i64> +; CHECK-NEXT: [[TMP2:%.*]] = getelementptr i8, ptr [[BASE:%.*]], <vscale x 1 x i64> [[TMP1]] +; CHECK-NEXT: [[TMP3:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP3]], label [[TMP4:%.*]], label [[TMP12:%.*]] +; CHECK: 4: +; CHECK-NEXT: [[TMP5:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP6:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP5]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP4]] ], [ [[IV_NEXT:%.*]], [[TMP11:%.*]] ] +; CHECK-NEXT: [[TMP7:%.*]] = extractelement <vscale x 1 x i1> splat (i1 true), i64 [[IV]] +; CHECK-NEXT: br i1 
[[TMP7]], label [[TMP8:%.*]], label [[TMP11]] +; CHECK: 8: +; CHECK-NEXT: [[TMP9:%.*]] = extractelement <vscale x 1 x ptr> [[TMP2]], i64 [[IV]] +; CHECK-NEXT: [[TMP10:%.*]] = ptrtoint ptr [[TMP9]] to i64 +; CHECK-NEXT: call void @__asan_loadN(i64 [[TMP10]], i64 24) +; CHECK-NEXT: br label [[TMP11]] +; CHECK: 11: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP6]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP12]] +; CHECK: 12: +; CHECK-NEXT: [[TMP73:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 6) @llvm.riscv.vloxseg6.triscv.vector.tuple_nxv4i8_6t.p0.nxv1i16.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) poison, ptr [[BASE]], <vscale x 1 x i16> [[INDEX]], i64 [[VL]], i64 5) ; CHECK-NEXT: [[TMP74:%.*]] = call <vscale x 1 x i32> @llvm.riscv.tuple.extract.nxv1i32.triscv.vector.tuple_nxv4i8_6t(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) [[TMP73]], i32 1) ; CHECK-NEXT: ret <vscale x 1 x i32> [[TMP74]] ; @@ -1829,7 +3305,31 @@ define <vscale x 1 x i32> @test_vloxseg6_mask_nxv1i32_nxv1i16(ptr %base, <vscale ; CHECK-LABEL: @test_vloxseg6_mask_nxv1i32_nxv1i16( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: [[TMP73:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 6) @llvm.riscv.vloxseg6.mask.triscv.vector.tuple_nxv4i8_6t.p0.nxv1i16.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) poison, ptr [[BASE:%.*]], <vscale x 1 x i16> [[INDEX:%.*]], <vscale x 1 x i1> [[MASK:%.*]], i64 [[VL:%.*]], i64 1, i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = zext <vscale x 1 x i16> [[INDEX:%.*]] to <vscale x 1 x i64> +; CHECK-NEXT: [[TMP2:%.*]] = getelementptr i8, ptr [[BASE:%.*]], <vscale x 1 x i64> [[TMP1]] +; CHECK-NEXT: [[TMP3:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP3]], label [[TMP4:%.*]], label [[TMP12:%.*]] +; CHECK: 4: +; CHECK-NEXT: [[TMP5:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP6:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP5]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP4]] ], [ [[IV_NEXT:%.*]], [[TMP11:%.*]] ] +; CHECK-NEXT: [[TMP7:%.*]] = extractelement <vscale x 1 x i1> [[MASK:%.*]], i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP7]], label [[TMP8:%.*]], label [[TMP11]] +; CHECK: 8: +; CHECK-NEXT: [[TMP9:%.*]] = extractelement <vscale x 1 x ptr> [[TMP2]], i64 [[IV]] +; CHECK-NEXT: [[TMP10:%.*]] = ptrtoint ptr [[TMP9]] to i64 +; CHECK-NEXT: call void @__asan_loadN(i64 [[TMP10]], i64 24) +; CHECK-NEXT: br label [[TMP11]] +; CHECK: 11: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP6]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP12]] +; CHECK: 12: +; CHECK-NEXT: [[TMP73:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 6) @llvm.riscv.vloxseg6.mask.triscv.vector.tuple_nxv4i8_6t.p0.nxv1i16.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) poison, ptr [[BASE]], <vscale x 1 x i16> [[INDEX]], <vscale x 1 x i1> [[MASK]], i64 [[VL]], i64 1, i64 5) ; CHECK-NEXT: [[TMP74:%.*]] = call <vscale x 1 x i32> @llvm.riscv.tuple.extract.nxv1i32.triscv.vector.tuple_nxv4i8_6t(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) [[TMP73]], i32 1) ; 
CHECK-NEXT: ret <vscale x 1 x i32> [[TMP74]] ; @@ -1847,7 +3347,31 @@ define <vscale x 1 x i32> @test_vloxseg7_nxv1i32_nxv1i16(ptr %base, <vscale x 1 ; CHECK-LABEL: @test_vloxseg7_nxv1i32_nxv1i16( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: [[TMP85:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 7) @llvm.riscv.vloxseg7.triscv.vector.tuple_nxv4i8_7t.p0.nxv1i16.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) poison, ptr [[BASE:%.*]], <vscale x 1 x i16> [[INDEX:%.*]], i64 [[VL:%.*]], i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = zext <vscale x 1 x i16> [[INDEX:%.*]] to <vscale x 1 x i64> +; CHECK-NEXT: [[TMP2:%.*]] = getelementptr i8, ptr [[BASE:%.*]], <vscale x 1 x i64> [[TMP1]] +; CHECK-NEXT: [[TMP3:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP3]], label [[TMP4:%.*]], label [[TMP12:%.*]] +; CHECK: 4: +; CHECK-NEXT: [[TMP5:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP6:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP5]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP4]] ], [ [[IV_NEXT:%.*]], [[TMP11:%.*]] ] +; CHECK-NEXT: [[TMP7:%.*]] = extractelement <vscale x 1 x i1> splat (i1 true), i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP7]], label [[TMP8:%.*]], label [[TMP11]] +; CHECK: 8: +; CHECK-NEXT: [[TMP9:%.*]] = extractelement <vscale x 1 x ptr> [[TMP2]], i64 [[IV]] +; CHECK-NEXT: [[TMP10:%.*]] = ptrtoint ptr [[TMP9]] to i64 +; CHECK-NEXT: call void @__asan_loadN(i64 [[TMP10]], i64 28) +; CHECK-NEXT: br label [[TMP11]] +; CHECK: 11: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP6]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP12]] +; CHECK: 12: +; CHECK-NEXT: [[TMP85:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 7) @llvm.riscv.vloxseg7.triscv.vector.tuple_nxv4i8_7t.p0.nxv1i16.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) poison, ptr [[BASE]], <vscale x 1 x i16> [[INDEX]], i64 [[VL]], i64 5) ; CHECK-NEXT: [[TMP86:%.*]] = call <vscale x 1 x i32> @llvm.riscv.tuple.extract.nxv1i32.triscv.vector.tuple_nxv4i8_7t(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) [[TMP85]], i32 1) ; CHECK-NEXT: ret <vscale x 1 x i32> [[TMP86]] ; @@ -1861,7 +3385,31 @@ define <vscale x 1 x i32> @test_vloxseg7_mask_nxv1i32_nxv1i16(ptr %base, <vscale ; CHECK-LABEL: @test_vloxseg7_mask_nxv1i32_nxv1i16( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: [[TMP85:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 7) @llvm.riscv.vloxseg7.mask.triscv.vector.tuple_nxv4i8_7t.p0.nxv1i16.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) poison, ptr [[BASE:%.*]], <vscale x 1 x i16> [[INDEX:%.*]], <vscale x 1 x i1> [[MASK:%.*]], i64 [[VL:%.*]], i64 1, i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = zext <vscale x 1 x i16> [[INDEX:%.*]] to <vscale x 1 x i64> +; CHECK-NEXT: [[TMP2:%.*]] = getelementptr i8, ptr [[BASE:%.*]], <vscale x 1 x i64> [[TMP1]] +; CHECK-NEXT: [[TMP3:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP3]], label [[TMP4:%.*]], label [[TMP12:%.*]] +; CHECK: 4: +; CHECK-NEXT: [[TMP5:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP6:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP5]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: 
.split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP4]] ], [ [[IV_NEXT:%.*]], [[TMP11:%.*]] ] +; CHECK-NEXT: [[TMP7:%.*]] = extractelement <vscale x 1 x i1> [[MASK:%.*]], i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP7]], label [[TMP8:%.*]], label [[TMP11]] +; CHECK: 8: +; CHECK-NEXT: [[TMP9:%.*]] = extractelement <vscale x 1 x ptr> [[TMP2]], i64 [[IV]] +; CHECK-NEXT: [[TMP10:%.*]] = ptrtoint ptr [[TMP9]] to i64 +; CHECK-NEXT: call void @__asan_loadN(i64 [[TMP10]], i64 28) +; CHECK-NEXT: br label [[TMP11]] +; CHECK: 11: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP6]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP12]] +; CHECK: 12: +; CHECK-NEXT: [[TMP85:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 7) @llvm.riscv.vloxseg7.mask.triscv.vector.tuple_nxv4i8_7t.p0.nxv1i16.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) poison, ptr [[BASE]], <vscale x 1 x i16> [[INDEX]], <vscale x 1 x i1> [[MASK]], i64 [[VL]], i64 1, i64 5) ; CHECK-NEXT: [[TMP86:%.*]] = call <vscale x 1 x i32> @llvm.riscv.tuple.extract.nxv1i32.triscv.vector.tuple_nxv4i8_7t(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) [[TMP85]], i32 1) ; CHECK-NEXT: ret <vscale x 1 x i32> [[TMP86]] ; @@ -1879,7 +3427,31 @@ define <vscale x 1 x i32> @test_vloxseg8_nxv1i32_nxv1i16(ptr %base, <vscale x 1 ; CHECK-LABEL: @test_vloxseg8_nxv1i32_nxv1i16( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: [[TMP97:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 8) @llvm.riscv.vloxseg8.triscv.vector.tuple_nxv4i8_8t.p0.nxv1i16.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) poison, ptr [[BASE:%.*]], <vscale x 1 x i16> [[INDEX:%.*]], i64 [[VL:%.*]], i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = zext <vscale x 1 x i16> [[INDEX:%.*]] to <vscale x 1 x i64> +; CHECK-NEXT: [[TMP2:%.*]] = getelementptr i8, ptr [[BASE:%.*]], <vscale x 1 x i64> [[TMP1]] +; CHECK-NEXT: [[TMP3:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP3]], label [[TMP4:%.*]], label [[TMP12:%.*]] +; CHECK: 4: +; CHECK-NEXT: [[TMP5:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP6:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP5]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP4]] ], [ [[IV_NEXT:%.*]], [[TMP11:%.*]] ] +; CHECK-NEXT: [[TMP7:%.*]] = extractelement <vscale x 1 x i1> splat (i1 true), i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP7]], label [[TMP8:%.*]], label [[TMP11]] +; CHECK: 8: +; CHECK-NEXT: [[TMP9:%.*]] = extractelement <vscale x 1 x ptr> [[TMP2]], i64 [[IV]] +; CHECK-NEXT: [[TMP10:%.*]] = ptrtoint ptr [[TMP9]] to i64 +; CHECK-NEXT: call void @__asan_loadN(i64 [[TMP10]], i64 32) +; CHECK-NEXT: br label [[TMP11]] +; CHECK: 11: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP6]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP12]] +; CHECK: 12: +; CHECK-NEXT: [[TMP97:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 8) @llvm.riscv.vloxseg8.triscv.vector.tuple_nxv4i8_8t.p0.nxv1i16.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) poison, ptr [[BASE]], <vscale x 1 x i16> [[INDEX]], i64 [[VL]], i64 5) ; CHECK-NEXT: [[TMP98:%.*]] = call <vscale x 1 
x i32> @llvm.riscv.tuple.extract.nxv1i32.triscv.vector.tuple_nxv4i8_8t(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) [[TMP97]], i32 1)
; CHECK-NEXT: ret <vscale x 1 x i32> [[TMP98]]
;
@@ -1893,7 +3465,31 @@ define <vscale x 1 x i32> @test_vloxseg8_mask_nxv1i32_nxv1i16(ptr %base, <vscale
; CHECK-LABEL: @test_vloxseg8_mask_nxv1i32_nxv1i16(
; CHECK-NEXT: entry:
; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8
-; CHECK-NEXT: [[TMP97:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 8) @llvm.riscv.vloxseg8.mask.triscv.vector.tuple_nxv4i8_8t.p0.nxv1i16.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) poison, ptr [[BASE:%.*]], <vscale x 1 x i16> [[INDEX:%.*]], <vscale x 1 x i1> [[MASK:%.*]], i64 [[VL:%.*]], i64 1, i64 5)
+; CHECK-NEXT: [[TMP1:%.*]] = zext <vscale x 1 x i16> [[INDEX:%.*]] to <vscale x 1 x i64>
+; CHECK-NEXT: [[TMP2:%.*]] = getelementptr i8, ptr [[BASE:%.*]], <vscale x 1 x i64> [[TMP1]]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp ne i64 [[VL:%.*]], 0
+; CHECK-NEXT: br i1 [[TMP3]], label [[TMP4:%.*]], label [[TMP12:%.*]]
+; CHECK: 4:
+; CHECK-NEXT: [[TMP5:%.*]] = call i64 @llvm.vscale.i64()
+; CHECK-NEXT: [[TMP6:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP5]])
+; CHECK-NEXT: br label [[DOTSPLIT:%.*]]
+; CHECK: .split:
+; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP4]] ], [ [[IV_NEXT:%.*]], [[TMP11:%.*]] ]
+; CHECK-NEXT: [[TMP7:%.*]] = extractelement <vscale x 1 x i1> [[MASK:%.*]], i64 [[IV]]
+; CHECK-NEXT: br i1 [[TMP7]], label [[TMP8:%.*]], label [[TMP11]]
+; CHECK: 8:
+; CHECK-NEXT: [[TMP9:%.*]] = extractelement <vscale x 1 x ptr> [[TMP2]], i64 [[IV]]
+; CHECK-NEXT: [[TMP10:%.*]] = ptrtoint ptr [[TMP9]] to i64
+; CHECK-NEXT: call void @__asan_loadN(i64 [[TMP10]], i64 32)
+; CHECK-NEXT: br label [[TMP11]]
+; CHECK: 11:
+; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1
+; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP6]]
+; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]]
+; CHECK: .split.split:
+; CHECK-NEXT: br label [[TMP12]]
+; CHECK: 12:
+; CHECK-NEXT: [[TMP97:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 8) @llvm.riscv.vloxseg8.mask.triscv.vector.tuple_nxv4i8_8t.p0.nxv1i16.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) poison, ptr [[BASE]], <vscale x 1 x i16> [[INDEX]], <vscale x 1 x i1> [[MASK]], i64 [[VL]], i64 1, i64 5)
; CHECK-NEXT: [[TMP98:%.*]] = call <vscale x 1 x i32> @llvm.riscv.tuple.extract.nxv1i32.triscv.vector.tuple_nxv4i8_8t(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) [[TMP97]], i32 1)
; CHECK-NEXT: ret <vscale x 1 x i32> [[TMP98]]
;
@@ -1911,7 +3507,31 @@ define <vscale x 1 x i32> @test_vluxseg2_nxv1i32_nxv1i16(ptr %base, <vscale x 1
; CHECK-LABEL: @test_vluxseg2_nxv1i32_nxv1i16(
; CHECK-NEXT: entry:
; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8
-; CHECK-NEXT: [[TMP25:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 2) @llvm.riscv.vluxseg2.triscv.vector.tuple_nxv4i8_2t.p0.nxv1i16.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) poison, ptr [[BASE:%.*]], <vscale x 1 x i16> [[INDEX:%.*]], i64 [[VL:%.*]], i64 5)
+; CHECK-NEXT: [[TMP1:%.*]] = zext <vscale x 1 x i16> [[INDEX:%.*]] to <vscale x 1 x i64>
+; CHECK-NEXT: [[TMP2:%.*]] = getelementptr i8, ptr [[BASE:%.*]], <vscale x 1 x i64> [[TMP1]]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp ne i64 [[VL:%.*]], 0
+; CHECK-NEXT: br i1 [[TMP3]], label [[TMP4:%.*]], label [[TMP12:%.*]]
+; CHECK: 4:
+; CHECK-NEXT: [[TMP5:%.*]] = call i64 @llvm.vscale.i64()
+; CHECK-NEXT: [[TMP6:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP5]])
+; CHECK-NEXT: br label [[DOTSPLIT:%.*]]
+; CHECK: .split:
+; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP4]] ], [ [[IV_NEXT:%.*]], [[TMP11:%.*]] ]
+; CHECK-NEXT: [[TMP7:%.*]] = extractelement <vscale x 1 x i1> splat (i1 true), i64 [[IV]]
+; CHECK-NEXT: br i1 [[TMP7]], label [[TMP8:%.*]], label [[TMP11]]
+; CHECK: 8:
+; CHECK-NEXT: [[TMP9:%.*]] = extractelement <vscale x 1 x ptr> [[TMP2]], i64 [[IV]]
+; CHECK-NEXT: [[TMP10:%.*]] = ptrtoint ptr [[TMP9]] to i64
+; CHECK-NEXT: call void @__asan_loadN(i64 [[TMP10]], i64 8)
+; CHECK-NEXT: br label [[TMP11]]
+; CHECK: 11:
+; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1
+; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP6]]
+; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]]
+; CHECK: .split.split:
+; CHECK-NEXT: br label [[TMP12]]
+; CHECK: 12:
+; CHECK-NEXT: [[TMP25:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 2) @llvm.riscv.vluxseg2.triscv.vector.tuple_nxv4i8_2t.p0.nxv1i16.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) poison, ptr [[BASE]], <vscale x 1 x i16> [[INDEX]], i64 [[VL]], i64 5)
; CHECK-NEXT: [[TMP26:%.*]] = call <vscale x 1 x i32> @llvm.riscv.tuple.extract.nxv1i32.triscv.vector.tuple_nxv4i8_2t(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) [[TMP25]], i32 1)
; CHECK-NEXT: ret <vscale x 1 x i32> [[TMP26]]
;
@@ -1925,7 +3545,31 @@ define <vscale x 1 x i32> @test_vluxseg2_mask_nxv1i32_nxv1i16(ptr %base, <vscale
; CHECK-LABEL: @test_vluxseg2_mask_nxv1i32_nxv1i16(
; CHECK-NEXT: entry:
; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8
-; CHECK-NEXT: [[TMP25:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 2) @llvm.riscv.vluxseg2.mask.triscv.vector.tuple_nxv4i8_2t.p0.nxv1i16.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) poison, ptr [[BASE:%.*]], <vscale x 1 x i16> [[INDEX:%.*]], <vscale x 1 x i1> [[MASK:%.*]], i64 [[VL:%.*]], i64 1, i64 5)
+; CHECK-NEXT: [[TMP1:%.*]] = zext <vscale x 1 x i16> [[INDEX:%.*]] to <vscale x 1 x i64>
+; CHECK-NEXT: [[TMP2:%.*]] = getelementptr i8, ptr [[BASE:%.*]], <vscale x 1 x i64> [[TMP1]]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp ne i64 [[VL:%.*]], 0
+; CHECK-NEXT: br i1 [[TMP3]], label [[TMP4:%.*]], label [[TMP12:%.*]]
+; CHECK: 4:
+; CHECK-NEXT: [[TMP5:%.*]] = call i64 @llvm.vscale.i64()
+; CHECK-NEXT: [[TMP6:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP5]])
+; CHECK-NEXT: br label [[DOTSPLIT:%.*]]
+; CHECK: .split:
+; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP4]] ], [ [[IV_NEXT:%.*]], [[TMP11:%.*]] ]
+; CHECK-NEXT: [[TMP7:%.*]] = extractelement <vscale x 1 x i1> [[MASK:%.*]], i64 [[IV]]
+; CHECK-NEXT: br i1 [[TMP7]], label [[TMP8:%.*]], label [[TMP11]]
+; CHECK: 8:
+; CHECK-NEXT: [[TMP9:%.*]] = extractelement <vscale x 1 x ptr> [[TMP2]], i64 [[IV]]
+; CHECK-NEXT: [[TMP10:%.*]] = ptrtoint ptr [[TMP9]] to i64
+; CHECK-NEXT: call void @__asan_loadN(i64 [[TMP10]], i64 8)
+; CHECK-NEXT: br label [[TMP11]]
+; CHECK: 11:
+; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1
+; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP6]]
+; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]]
+; CHECK: .split.split:
+; CHECK-NEXT: br label [[TMP12]]
+; CHECK: 12:
+; CHECK-NEXT: [[TMP25:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 2) @llvm.riscv.vluxseg2.mask.triscv.vector.tuple_nxv4i8_2t.p0.nxv1i16.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) poison, ptr [[BASE]], <vscale x 1 x i16> [[INDEX]], <vscale x 1 x i1> [[MASK]], i64 [[VL]], i64 1, i64 5)
; CHECK-NEXT: [[TMP26:%.*]] = call <vscale x 1 x i32> @llvm.riscv.tuple.extract.nxv1i32.triscv.vector.tuple_nxv4i8_2t(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) [[TMP25]], i32 1)
; CHECK-NEXT: ret <vscale x 1 x i32> [[TMP26]]
;
@@ -1943,7 +3587,31 @@ define <vscale x 1 x i32> @test_vluxseg3_nxv1i32_nxv1i16(ptr %base, <vscale x 1
; CHECK-LABEL: @test_vluxseg3_nxv1i32_nxv1i16(
; CHECK-NEXT: entry:
; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8
-; CHECK-NEXT: [[TMP37:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 3) @llvm.riscv.vluxseg3.triscv.vector.tuple_nxv4i8_3t.p0.nxv1i16.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) poison, ptr [[BASE:%.*]], <vscale x 1 x i16> [[INDEX:%.*]], i64 [[VL:%.*]], i64 5)
+; CHECK-NEXT: [[TMP1:%.*]] = zext <vscale x 1 x i16> [[INDEX:%.*]] to <vscale x 1 x i64>
+; CHECK-NEXT: [[TMP2:%.*]] = getelementptr i8, ptr [[BASE:%.*]], <vscale x 1 x i64> [[TMP1]]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp ne i64 [[VL:%.*]], 0
+; CHECK-NEXT: br i1 [[TMP3]], label [[TMP4:%.*]], label [[TMP12:%.*]]
+; CHECK: 4:
+; CHECK-NEXT: [[TMP5:%.*]] = call i64 @llvm.vscale.i64()
+; CHECK-NEXT: [[TMP6:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP5]])
+; CHECK-NEXT: br label [[DOTSPLIT:%.*]]
+; CHECK: .split:
+; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP4]] ], [ [[IV_NEXT:%.*]], [[TMP11:%.*]] ]
+; CHECK-NEXT: [[TMP7:%.*]] = extractelement <vscale x 1 x i1> splat (i1 true), i64 [[IV]]
+; CHECK-NEXT: br i1 [[TMP7]], label [[TMP8:%.*]], label [[TMP11]]
+; CHECK: 8:
+; CHECK-NEXT: [[TMP9:%.*]] = extractelement <vscale x 1 x ptr> [[TMP2]], i64 [[IV]]
+; CHECK-NEXT: [[TMP10:%.*]] = ptrtoint ptr [[TMP9]] to i64
+; CHECK-NEXT: call void @__asan_loadN(i64 [[TMP10]], i64 12)
+; CHECK-NEXT: br label [[TMP11]]
+; CHECK: 11:
+; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1
+; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP6]]
+; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]]
+; CHECK: .split.split:
+; CHECK-NEXT: br label [[TMP12]]
+; CHECK: 12:
+; CHECK-NEXT: [[TMP37:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 3) @llvm.riscv.vluxseg3.triscv.vector.tuple_nxv4i8_3t.p0.nxv1i16.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) poison, ptr [[BASE]], <vscale x 1 x i16> [[INDEX]], i64 [[VL]], i64 5)
; CHECK-NEXT: [[TMP38:%.*]] = call <vscale x 1 x i32> @llvm.riscv.tuple.extract.nxv1i32.triscv.vector.tuple_nxv4i8_3t(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) [[TMP37]], i32 1)
; CHECK-NEXT: ret <vscale x 1 x i32> [[TMP38]]
;
@@ -1957,7 +3625,31 @@ define <vscale x 1 x i32> @test_vluxseg3_mask_nxv1i32_nxv1i16(ptr %base, <vscale
; CHECK-LABEL: @test_vluxseg3_mask_nxv1i32_nxv1i16(
; CHECK-NEXT: entry:
; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8
-; CHECK-NEXT: [[TMP37:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 3) @llvm.riscv.vluxseg3.mask.triscv.vector.tuple_nxv4i8_3t.p0.nxv1i16.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) poison, ptr [[BASE:%.*]], <vscale x 1 x i16> [[INDEX:%.*]], <vscale x 1 x i1> [[MASK:%.*]], i64 [[VL:%.*]], i64 1, i64 5)
+; CHECK-NEXT: [[TMP1:%.*]] = zext <vscale x 1 x i16> [[INDEX:%.*]] to <vscale x 1 x i64>
+; CHECK-NEXT: [[TMP2:%.*]] = getelementptr i8, ptr [[BASE:%.*]], <vscale x 1 x i64> [[TMP1]]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp ne i64 [[VL:%.*]], 0
+; CHECK-NEXT: br i1 [[TMP3]], label [[TMP4:%.*]], label [[TMP12:%.*]]
+; CHECK: 4:
+; CHECK-NEXT: [[TMP5:%.*]] = call i64 @llvm.vscale.i64()
+; CHECK-NEXT: [[TMP6:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP5]])
+; CHECK-NEXT: br label [[DOTSPLIT:%.*]]
+; CHECK: .split:
+; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP4]] ], [ [[IV_NEXT:%.*]], [[TMP11:%.*]] ]
+; CHECK-NEXT: [[TMP7:%.*]] = extractelement <vscale x 1 x i1> [[MASK:%.*]], i64 [[IV]]
+; CHECK-NEXT: br i1 [[TMP7]], label [[TMP8:%.*]], label [[TMP11]]
+; CHECK: 8:
+; CHECK-NEXT: [[TMP9:%.*]] = extractelement <vscale x 1 x ptr> [[TMP2]], i64 [[IV]]
+; CHECK-NEXT: [[TMP10:%.*]] = ptrtoint ptr [[TMP9]] to i64
+; CHECK-NEXT: call void @__asan_loadN(i64 [[TMP10]], i64 12)
+; CHECK-NEXT: br label [[TMP11]]
+; CHECK: 11:
+; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1
+; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP6]]
+; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]]
+; CHECK: .split.split:
+; CHECK-NEXT: br label [[TMP12]]
+; CHECK: 12:
+; CHECK-NEXT: [[TMP37:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 3) @llvm.riscv.vluxseg3.mask.triscv.vector.tuple_nxv4i8_3t.p0.nxv1i16.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) poison, ptr [[BASE]], <vscale x 1 x i16> [[INDEX]], <vscale x 1 x i1> [[MASK]], i64 [[VL]], i64 1, i64 5)
; CHECK-NEXT: [[TMP38:%.*]] = call <vscale x 1 x i32> @llvm.riscv.tuple.extract.nxv1i32.triscv.vector.tuple_nxv4i8_3t(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) [[TMP37]], i32 1)
; CHECK-NEXT: ret <vscale x 1 x i32> [[TMP38]]
;
@@ -1975,7 +3667,31 @@ define <vscale x 1 x i32> @test_vluxseg4_nxv1i32_nxv1i16(ptr %base, <vscale x 1
; CHECK-LABEL: @test_vluxseg4_nxv1i32_nxv1i16(
; CHECK-NEXT: entry:
; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8
-; CHECK-NEXT: [[TMP49:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 4) @llvm.riscv.vluxseg4.triscv.vector.tuple_nxv4i8_4t.p0.nxv1i16.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) poison, ptr [[BASE:%.*]], <vscale x 1 x i16> [[INDEX:%.*]], i64 [[VL:%.*]], i64 5)
+; CHECK-NEXT: [[TMP1:%.*]] = zext <vscale x 1 x i16> [[INDEX:%.*]] to <vscale x 1 x i64>
+; CHECK-NEXT: [[TMP2:%.*]] = getelementptr i8, ptr [[BASE:%.*]], <vscale x 1 x i64> [[TMP1]]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp ne i64 [[VL:%.*]], 0
+; CHECK-NEXT: br i1 [[TMP3]], label [[TMP4:%.*]], label [[TMP12:%.*]]
+; CHECK: 4:
+; CHECK-NEXT: [[TMP5:%.*]] = call i64 @llvm.vscale.i64()
+; CHECK-NEXT: [[TMP6:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP5]])
+; CHECK-NEXT: br label [[DOTSPLIT:%.*]]
+; CHECK: .split:
+; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP4]] ], [ [[IV_NEXT:%.*]], [[TMP11:%.*]] ]
+; CHECK-NEXT: [[TMP7:%.*]] = extractelement <vscale x 1 x i1> splat (i1 true), i64 [[IV]]
+; CHECK-NEXT: br i1 [[TMP7]], label [[TMP8:%.*]], label [[TMP11]]
+; CHECK: 8:
+; CHECK-NEXT: [[TMP9:%.*]] = extractelement <vscale x 1 x ptr> [[TMP2]], i64 [[IV]]
+; CHECK-NEXT: [[TMP10:%.*]] = ptrtoint ptr [[TMP9]] to i64
+; CHECK-NEXT: call void @__asan_loadN(i64 [[TMP10]], i64 16)
+; CHECK-NEXT: br label [[TMP11]]
+; CHECK: 11:
+; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1
+; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP6]]
+; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]]
+; CHECK: .split.split:
+; CHECK-NEXT: br label [[TMP12]]
+; CHECK: 12:
+; CHECK-NEXT: [[TMP49:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 4) @llvm.riscv.vluxseg4.triscv.vector.tuple_nxv4i8_4t.p0.nxv1i16.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) poison, ptr [[BASE]], <vscale x 1 x i16> [[INDEX]], i64 [[VL]], i64 5)
; CHECK-NEXT: [[TMP50:%.*]] = call <vscale x 1 x i32> @llvm.riscv.tuple.extract.nxv1i32.triscv.vector.tuple_nxv4i8_4t(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) [[TMP49]], i32 1)
; CHECK-NEXT: ret <vscale x 1 x i32> [[TMP50]]
;
@@ -1989,7 +3705,31 @@ define <vscale x 1 x i32> @test_vluxseg4_mask_nxv1i32_nxv1i16(ptr %base, <vscale
; CHECK-LABEL: @test_vluxseg4_mask_nxv1i32_nxv1i16(
; CHECK-NEXT: entry:
; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8
-; CHECK-NEXT: [[TMP49:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 4) @llvm.riscv.vluxseg4.mask.triscv.vector.tuple_nxv4i8_4t.p0.nxv1i16.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) poison, ptr [[BASE:%.*]], <vscale x 1 x i16> [[INDEX:%.*]], <vscale x 1 x i1> [[MASK:%.*]], i64 [[VL:%.*]], i64 1, i64 5)
+; CHECK-NEXT: [[TMP1:%.*]] = zext <vscale x 1 x i16> [[INDEX:%.*]] to <vscale x 1 x i64>
+; CHECK-NEXT: [[TMP2:%.*]] = getelementptr i8, ptr [[BASE:%.*]], <vscale x 1 x i64> [[TMP1]]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp ne i64 [[VL:%.*]], 0
+; CHECK-NEXT: br i1 [[TMP3]], label [[TMP4:%.*]], label [[TMP12:%.*]]
+; CHECK: 4:
+; CHECK-NEXT: [[TMP5:%.*]] = call i64 @llvm.vscale.i64()
+; CHECK-NEXT: [[TMP6:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP5]])
+; CHECK-NEXT: br label [[DOTSPLIT:%.*]]
+; CHECK: .split:
+; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP4]] ], [ [[IV_NEXT:%.*]], [[TMP11:%.*]] ]
+; CHECK-NEXT: [[TMP7:%.*]] = extractelement <vscale x 1 x i1> [[MASK:%.*]], i64 [[IV]]
+; CHECK-NEXT: br i1 [[TMP7]], label [[TMP8:%.*]], label [[TMP11]]
+; CHECK: 8:
+; CHECK-NEXT: [[TMP9:%.*]] = extractelement <vscale x 1 x ptr> [[TMP2]], i64 [[IV]]
+; CHECK-NEXT: [[TMP10:%.*]] = ptrtoint ptr [[TMP9]] to i64
+; CHECK-NEXT: call void @__asan_loadN(i64 [[TMP10]], i64 16)
+; CHECK-NEXT: br label [[TMP11]]
+; CHECK: 11:
+; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1
+; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP6]]
+; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]]
+; CHECK: .split.split:
+; CHECK-NEXT: br label [[TMP12]]
+; CHECK: 12:
+; CHECK-NEXT: [[TMP49:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 4) @llvm.riscv.vluxseg4.mask.triscv.vector.tuple_nxv4i8_4t.p0.nxv1i16.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) poison, ptr [[BASE]], <vscale x 1 x i16> [[INDEX]], <vscale x 1 x i1> [[MASK]], i64 [[VL]], i64 1, i64 5)
; CHECK-NEXT: [[TMP50:%.*]] = call <vscale x 1 x i32> @llvm.riscv.tuple.extract.nxv1i32.triscv.vector.tuple_nxv4i8_4t(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) [[TMP49]], i32 1)
; CHECK-NEXT: ret <vscale x 1 x i32> [[TMP50]]
;
@@ -2007,7 +3747,31 @@ define <vscale x 1 x i32> @test_vluxseg5_nxv1i32_nxv1i16(ptr %base, <vscale x 1
; CHECK-LABEL: @test_vluxseg5_nxv1i32_nxv1i16(
; CHECK-NEXT: entry:
; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8
-; CHECK-NEXT: [[TMP61:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 5) @llvm.riscv.vluxseg5.triscv.vector.tuple_nxv4i8_5t.p0.nxv1i16.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) poison, ptr [[BASE:%.*]], <vscale x 1 x i16> [[INDEX:%.*]], i64 [[VL:%.*]], i64 5)
+; CHECK-NEXT: [[TMP1:%.*]] = zext <vscale x 1 x i16> [[INDEX:%.*]] to <vscale x 1 x i64>
+; CHECK-NEXT: [[TMP2:%.*]] = getelementptr i8, ptr [[BASE:%.*]], <vscale x 1 x i64> [[TMP1]]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp ne i64 [[VL:%.*]], 0
+; CHECK-NEXT: br i1 [[TMP3]], label [[TMP4:%.*]], label [[TMP12:%.*]]
+; CHECK: 4:
+; CHECK-NEXT: [[TMP5:%.*]] = call i64 @llvm.vscale.i64()
+; CHECK-NEXT: [[TMP6:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP5]])
+; CHECK-NEXT: br label [[DOTSPLIT:%.*]]
+; CHECK: .split:
+; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP4]] ], [ [[IV_NEXT:%.*]], [[TMP11:%.*]] ]
+; CHECK-NEXT: [[TMP7:%.*]] = extractelement <vscale x 1 x i1> splat (i1 true), i64 [[IV]]
+; CHECK-NEXT: br i1 [[TMP7]], label [[TMP8:%.*]], label [[TMP11]]
+; CHECK: 8:
+; CHECK-NEXT: [[TMP9:%.*]] = extractelement <vscale x 1 x ptr> [[TMP2]], i64 [[IV]]
+; CHECK-NEXT: [[TMP10:%.*]] = ptrtoint ptr [[TMP9]] to i64
+; CHECK-NEXT: call void @__asan_loadN(i64 [[TMP10]], i64 20)
+; CHECK-NEXT: br label [[TMP11]]
+; CHECK: 11:
+; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1
+; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP6]]
+; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]]
+; CHECK: .split.split:
+; CHECK-NEXT: br label [[TMP12]]
+; CHECK: 12:
+; CHECK-NEXT: [[TMP61:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 5) @llvm.riscv.vluxseg5.triscv.vector.tuple_nxv4i8_5t.p0.nxv1i16.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) poison, ptr [[BASE]], <vscale x 1 x i16> [[INDEX]], i64 [[VL]], i64 5)
; CHECK-NEXT: [[TMP62:%.*]] = call <vscale x 1 x i32> @llvm.riscv.tuple.extract.nxv1i32.triscv.vector.tuple_nxv4i8_5t(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) [[TMP61]], i32 1)
; CHECK-NEXT: ret <vscale x 1 x i32> [[TMP62]]
;
@@ -2021,7 +3785,31 @@ define <vscale x 1 x i32> @test_vluxseg5_mask_nxv1i32_nxv1i16(ptr %base, <vscale
; CHECK-LABEL: @test_vluxseg5_mask_nxv1i32_nxv1i16(
; CHECK-NEXT: entry:
; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8
-; CHECK-NEXT: [[TMP61:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 5) @llvm.riscv.vluxseg5.mask.triscv.vector.tuple_nxv4i8_5t.p0.nxv1i16.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) poison, ptr [[BASE:%.*]], <vscale x 1 x i16> [[INDEX:%.*]], <vscale x 1 x i1> [[MASK:%.*]], i64 [[VL:%.*]], i64 1, i64 5)
+; CHECK-NEXT: [[TMP1:%.*]] = zext <vscale x 1 x i16> [[INDEX:%.*]] to <vscale x 1 x i64>
+; CHECK-NEXT: [[TMP2:%.*]] = getelementptr i8, ptr [[BASE:%.*]], <vscale x 1 x i64> [[TMP1]]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp ne i64 [[VL:%.*]], 0
+; CHECK-NEXT: br i1 [[TMP3]], label [[TMP4:%.*]], label [[TMP12:%.*]]
+; CHECK: 4:
+; CHECK-NEXT: [[TMP5:%.*]] = call i64 @llvm.vscale.i64()
+; CHECK-NEXT: [[TMP6:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP5]])
+; CHECK-NEXT: br label [[DOTSPLIT:%.*]]
+; CHECK: .split:
+; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP4]] ], [ [[IV_NEXT:%.*]], [[TMP11:%.*]] ]
+; CHECK-NEXT: [[TMP7:%.*]] = extractelement <vscale x 1 x i1> [[MASK:%.*]], i64 [[IV]]
+; CHECK-NEXT: br i1 [[TMP7]], label [[TMP8:%.*]], label [[TMP11]]
+; CHECK: 8:
+; CHECK-NEXT: [[TMP9:%.*]] = extractelement <vscale x 1 x ptr> [[TMP2]], i64 [[IV]]
+; CHECK-NEXT: [[TMP10:%.*]] = ptrtoint ptr [[TMP9]] to i64
+; CHECK-NEXT: call void @__asan_loadN(i64 [[TMP10]], i64 20)
+; CHECK-NEXT: br label [[TMP11]]
+; CHECK: 11:
+; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1
+; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP6]]
+; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]]
+; CHECK: .split.split:
+; CHECK-NEXT: br label [[TMP12]]
+; CHECK: 12:
+; CHECK-NEXT: [[TMP61:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 5) @llvm.riscv.vluxseg5.mask.triscv.vector.tuple_nxv4i8_5t.p0.nxv1i16.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) poison, ptr [[BASE]], <vscale x 1 x i16> [[INDEX]], <vscale x 1 x i1> [[MASK]], i64 [[VL]], i64 1, i64 5)
; CHECK-NEXT: [[TMP62:%.*]] = call <vscale x 1 x i32> @llvm.riscv.tuple.extract.nxv1i32.triscv.vector.tuple_nxv4i8_5t(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) [[TMP61]], i32 1)
; CHECK-NEXT: ret <vscale x 1 x i32> [[TMP62]]
;
@@ -2039,7 +3827,31 @@ define <vscale x 1 x i32> @test_vluxseg6_nxv1i32_nxv1i16(ptr %base, <vscale x 1
; CHECK-LABEL: @test_vluxseg6_nxv1i32_nxv1i16(
; CHECK-NEXT: entry:
; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8
-; CHECK-NEXT: [[TMP73:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 6) @llvm.riscv.vluxseg6.triscv.vector.tuple_nxv4i8_6t.p0.nxv1i16.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) poison, ptr [[BASE:%.*]], <vscale x 1 x i16> [[INDEX:%.*]], i64 [[VL:%.*]], i64 5)
+; CHECK-NEXT: [[TMP1:%.*]] = zext <vscale x 1 x i16> [[INDEX:%.*]] to <vscale x 1 x i64>
+; CHECK-NEXT: [[TMP2:%.*]] = getelementptr i8, ptr [[BASE:%.*]], <vscale x 1 x i64> [[TMP1]]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp ne i64 [[VL:%.*]], 0
+; CHECK-NEXT: br i1 [[TMP3]], label [[TMP4:%.*]], label [[TMP12:%.*]]
+; CHECK: 4:
+; CHECK-NEXT: [[TMP5:%.*]] = call i64 @llvm.vscale.i64()
+; CHECK-NEXT: [[TMP6:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP5]])
+; CHECK-NEXT: br label [[DOTSPLIT:%.*]]
+; CHECK: .split:
+; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP4]] ], [ [[IV_NEXT:%.*]], [[TMP11:%.*]] ]
+; CHECK-NEXT: [[TMP7:%.*]] = extractelement <vscale x 1 x i1> splat (i1 true), i64 [[IV]]
+; CHECK-NEXT: br i1 [[TMP7]], label [[TMP8:%.*]], label [[TMP11]]
+; CHECK: 8:
+; CHECK-NEXT: [[TMP9:%.*]] = extractelement <vscale x 1 x ptr> [[TMP2]], i64 [[IV]]
+; CHECK-NEXT: [[TMP10:%.*]] = ptrtoint ptr [[TMP9]] to i64
+; CHECK-NEXT: call void @__asan_loadN(i64 [[TMP10]], i64 24)
+; CHECK-NEXT: br label [[TMP11]]
+; CHECK: 11:
+; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1
+; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP6]]
+; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]]
+; CHECK: .split.split:
+; CHECK-NEXT: br label [[TMP12]]
+; CHECK: 12:
+; CHECK-NEXT: [[TMP73:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 6) @llvm.riscv.vluxseg6.triscv.vector.tuple_nxv4i8_6t.p0.nxv1i16.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) poison, ptr [[BASE]], <vscale x 1 x i16> [[INDEX]], i64 [[VL]], i64 5)
; CHECK-NEXT: [[TMP74:%.*]] = call <vscale x 1 x i32> @llvm.riscv.tuple.extract.nxv1i32.triscv.vector.tuple_nxv4i8_6t(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) [[TMP73]], i32 1)
; CHECK-NEXT: ret <vscale x 1 x i32> [[TMP74]]
;
@@ -2053,7 +3865,31 @@ define <vscale x 1 x i32> @test_vluxseg6_mask_nxv1i32_nxv1i16(ptr %base, <vscale
; CHECK-LABEL: @test_vluxseg6_mask_nxv1i32_nxv1i16(
; CHECK-NEXT: entry:
; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8
-; CHECK-NEXT: [[TMP73:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 6) @llvm.riscv.vluxseg6.mask.triscv.vector.tuple_nxv4i8_6t.p0.nxv1i16.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) poison, ptr [[BASE:%.*]], <vscale x 1 x i16> [[INDEX:%.*]], <vscale x 1 x i1> [[MASK:%.*]], i64 [[VL:%.*]], i64 1, i64 5)
+; CHECK-NEXT: [[TMP1:%.*]] = zext <vscale x 1 x i16> [[INDEX:%.*]] to <vscale x 1 x i64>
+; CHECK-NEXT: [[TMP2:%.*]] = getelementptr i8, ptr [[BASE:%.*]], <vscale x 1 x i64> [[TMP1]]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp ne i64 [[VL:%.*]], 0
+; CHECK-NEXT: br i1 [[TMP3]], label [[TMP4:%.*]], label [[TMP12:%.*]]
+; CHECK: 4:
+; CHECK-NEXT: [[TMP5:%.*]] = call i64 @llvm.vscale.i64()
+; CHECK-NEXT: [[TMP6:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP5]])
+; CHECK-NEXT: br label [[DOTSPLIT:%.*]]
+; CHECK: .split:
+; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP4]] ], [ [[IV_NEXT:%.*]], [[TMP11:%.*]] ]
+; CHECK-NEXT: [[TMP7:%.*]] = extractelement <vscale x 1 x i1> [[MASK:%.*]], i64 [[IV]]
+; CHECK-NEXT: br i1 [[TMP7]], label [[TMP8:%.*]], label [[TMP11]]
+; CHECK: 8:
+; CHECK-NEXT: [[TMP9:%.*]] = extractelement <vscale x 1 x ptr> [[TMP2]], i64 [[IV]]
+; CHECK-NEXT: [[TMP10:%.*]] = ptrtoint ptr [[TMP9]] to i64
+; CHECK-NEXT: call void @__asan_loadN(i64 [[TMP10]], i64 24)
+; CHECK-NEXT: br label [[TMP11]]
+; CHECK: 11:
+; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1
+; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP6]]
+; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]]
+; CHECK: .split.split:
+; CHECK-NEXT: br label [[TMP12]]
+; CHECK: 12:
+; CHECK-NEXT: [[TMP73:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 6) @llvm.riscv.vluxseg6.mask.triscv.vector.tuple_nxv4i8_6t.p0.nxv1i16.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) poison, ptr [[BASE]], <vscale x 1 x i16> [[INDEX]], <vscale x 1 x i1> [[MASK]], i64 [[VL]], i64 1, i64 5)
; CHECK-NEXT: [[TMP74:%.*]] = call <vscale x 1 x i32> @llvm.riscv.tuple.extract.nxv1i32.triscv.vector.tuple_nxv4i8_6t(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) [[TMP73]], i32 1)
; CHECK-NEXT: ret <vscale x 1 x i32> [[TMP74]]
;
@@ -2071,7 +3907,31 @@ define <vscale x 1 x i32> @test_vluxseg7_nxv1i32_nxv1i16(ptr %base, <vscale x 1
; CHECK-LABEL: @test_vluxseg7_nxv1i32_nxv1i16(
; CHECK-NEXT: entry:
; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8
-; CHECK-NEXT: [[TMP85:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 7) @llvm.riscv.vluxseg7.triscv.vector.tuple_nxv4i8_7t.p0.nxv1i16.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) poison, ptr [[BASE:%.*]], <vscale x 1 x i16> [[INDEX:%.*]], i64 [[VL:%.*]], i64 5)
+; CHECK-NEXT: [[TMP1:%.*]] = zext <vscale x 1 x i16> [[INDEX:%.*]] to <vscale x 1 x i64>
+; CHECK-NEXT: [[TMP2:%.*]] = getelementptr i8, ptr [[BASE:%.*]], <vscale x 1 x i64> [[TMP1]]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp ne i64 [[VL:%.*]], 0
+; CHECK-NEXT: br i1 [[TMP3]], label [[TMP4:%.*]], label [[TMP12:%.*]]
+; CHECK: 4:
+; CHECK-NEXT: [[TMP5:%.*]] = call i64 @llvm.vscale.i64()
+; CHECK-NEXT: [[TMP6:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP5]])
+; CHECK-NEXT: br label [[DOTSPLIT:%.*]]
+; CHECK: .split:
+; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP4]] ], [ [[IV_NEXT:%.*]], [[TMP11:%.*]] ]
+; CHECK-NEXT: [[TMP7:%.*]] = extractelement <vscale x 1 x i1> splat (i1 true), i64 [[IV]]
+; CHECK-NEXT: br i1 [[TMP7]], label [[TMP8:%.*]], label [[TMP11]]
+; CHECK: 8:
+; CHECK-NEXT: [[TMP9:%.*]] = extractelement <vscale x 1 x ptr> [[TMP2]], i64 [[IV]]
+; CHECK-NEXT: [[TMP10:%.*]] = ptrtoint ptr [[TMP9]] to i64
+; CHECK-NEXT: call void @__asan_loadN(i64 [[TMP10]], i64 28)
+; CHECK-NEXT: br label [[TMP11]]
+; CHECK: 11:
+; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1
+; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP6]]
+; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]]
+; CHECK: .split.split:
+; CHECK-NEXT: br label [[TMP12]]
+; CHECK: 12:
+; CHECK-NEXT: [[TMP85:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 7) @llvm.riscv.vluxseg7.triscv.vector.tuple_nxv4i8_7t.p0.nxv1i16.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) poison, ptr [[BASE]], <vscale x 1 x i16> [[INDEX]], i64 [[VL]], i64 5)
; CHECK-NEXT: [[TMP86:%.*]] = call <vscale x 1 x i32> @llvm.riscv.tuple.extract.nxv1i32.triscv.vector.tuple_nxv4i8_7t(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) [[TMP85]], i32 1)
; CHECK-NEXT: ret <vscale x 1 x i32> [[TMP86]]
;
@@ -2085,7 +3945,31 @@ define <vscale x 1 x i32> @test_vluxseg7_mask_nxv1i32_nxv1i16(ptr %base, <vscale
; CHECK-LABEL: @test_vluxseg7_mask_nxv1i32_nxv1i16(
; CHECK-NEXT: entry:
; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8
-; CHECK-NEXT: [[TMP85:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 7) @llvm.riscv.vluxseg7.mask.triscv.vector.tuple_nxv4i8_7t.p0.nxv1i16.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) poison, ptr [[BASE:%.*]], <vscale x 1 x i16> [[INDEX:%.*]], <vscale x 1 x i1> [[MASK:%.*]], i64 [[VL:%.*]], i64 1, i64 5)
+; CHECK-NEXT: [[TMP1:%.*]] = zext <vscale x 1 x i16> [[INDEX:%.*]] to <vscale x 1 x i64>
+; CHECK-NEXT: [[TMP2:%.*]] = getelementptr i8, ptr [[BASE:%.*]], <vscale x 1 x i64> [[TMP1]]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp ne i64 [[VL:%.*]], 0
+; CHECK-NEXT: br i1 [[TMP3]], label [[TMP4:%.*]], label [[TMP12:%.*]]
+; CHECK: 4:
+; CHECK-NEXT: [[TMP5:%.*]] = call i64 @llvm.vscale.i64()
+; CHECK-NEXT: [[TMP6:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP5]])
+; CHECK-NEXT: br label [[DOTSPLIT:%.*]]
+; CHECK: .split:
+; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP4]] ], [ [[IV_NEXT:%.*]], [[TMP11:%.*]] ]
+; CHECK-NEXT: [[TMP7:%.*]] = extractelement <vscale x 1 x i1> [[MASK:%.*]], i64 [[IV]]
+; CHECK-NEXT: br i1 [[TMP7]], label [[TMP8:%.*]], label [[TMP11]]
+; CHECK: 8:
+; CHECK-NEXT: [[TMP9:%.*]] = extractelement <vscale x 1 x ptr> [[TMP2]], i64 [[IV]]
+; CHECK-NEXT: [[TMP10:%.*]] = ptrtoint ptr [[TMP9]] to i64
+; CHECK-NEXT: call void @__asan_loadN(i64 [[TMP10]], i64 28)
+; CHECK-NEXT: br label [[TMP11]]
+; CHECK: 11:
+; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1
+; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP6]]
+; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]]
+; CHECK: .split.split:
+; CHECK-NEXT: br label [[TMP12]]
+; CHECK: 12:
+; CHECK-NEXT: [[TMP85:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 7) @llvm.riscv.vluxseg7.mask.triscv.vector.tuple_nxv4i8_7t.p0.nxv1i16.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) poison, ptr [[BASE]], <vscale x 1 x i16> [[INDEX]], <vscale x 1 x i1> [[MASK]], i64 [[VL]], i64 1, i64 5)
; CHECK-NEXT: [[TMP86:%.*]] = call <vscale x 1 x i32> @llvm.riscv.tuple.extract.nxv1i32.triscv.vector.tuple_nxv4i8_7t(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) [[TMP85]], i32 1)
; CHECK-NEXT: ret <vscale x 1 x i32> [[TMP86]]
;
@@ -2103,7 +3987,31 @@ define <vscale x 1 x i32> @test_vluxseg8_nxv1i32_nxv1i16(ptr %base, <vscale x 1
; CHECK-LABEL: @test_vluxseg8_nxv1i32_nxv1i16(
; CHECK-NEXT: entry:
; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8
-; CHECK-NEXT: [[TMP97:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 8) @llvm.riscv.vluxseg8.triscv.vector.tuple_nxv4i8_8t.p0.nxv1i16.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) poison, ptr [[BASE:%.*]], <vscale x 1 x i16> [[INDEX:%.*]], i64 [[VL:%.*]], i64 5)
+; CHECK-NEXT: [[TMP1:%.*]] = zext <vscale x 1 x i16> [[INDEX:%.*]] to <vscale x 1 x i64>
+; CHECK-NEXT: [[TMP2:%.*]] = getelementptr i8, ptr [[BASE:%.*]], <vscale x 1 x i64> [[TMP1]]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp ne i64 [[VL:%.*]], 0
+; CHECK-NEXT: br i1 [[TMP3]], label [[TMP4:%.*]], label [[TMP12:%.*]]
+; CHECK: 4:
+; CHECK-NEXT: [[TMP5:%.*]] = call i64 @llvm.vscale.i64()
+; CHECK-NEXT: [[TMP6:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP5]])
+; CHECK-NEXT: br label [[DOTSPLIT:%.*]]
+; CHECK: .split:
+; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP4]] ], [ [[IV_NEXT:%.*]], [[TMP11:%.*]] ]
+; CHECK-NEXT: [[TMP7:%.*]] = extractelement <vscale x 1 x i1> splat (i1 true), i64 [[IV]]
+; CHECK-NEXT: br i1 [[TMP7]], label [[TMP8:%.*]], label [[TMP11]]
+; CHECK: 8:
+; CHECK-NEXT: [[TMP9:%.*]] = extractelement <vscale x 1 x ptr> [[TMP2]], i64 [[IV]]
+; CHECK-NEXT: [[TMP10:%.*]] = ptrtoint ptr [[TMP9]] to i64
+; CHECK-NEXT: call void @__asan_loadN(i64 [[TMP10]], i64 32)
+; CHECK-NEXT: br label [[TMP11]]
+; CHECK: 11:
+; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1
+; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP6]]
+; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]]
+; CHECK: .split.split:
+; CHECK-NEXT: br label [[TMP12]]
+; CHECK: 12:
+; CHECK-NEXT: [[TMP97:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 8) @llvm.riscv.vluxseg8.triscv.vector.tuple_nxv4i8_8t.p0.nxv1i16.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) poison, ptr [[BASE]], <vscale x 1 x i16> [[INDEX]], i64 [[VL]], i64 5)
; CHECK-NEXT: [[TMP98:%.*]] = call <vscale x 1 x i32> @llvm.riscv.tuple.extract.nxv1i32.triscv.vector.tuple_nxv4i8_8t(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) [[TMP97]], i32 1)
; CHECK-NEXT: ret <vscale x 1 x i32> [[TMP98]]
;
@@ -2117,7 +4025,31 @@ define <vscale x 1 x i32> @test_vluxseg8_mask_nxv1i32_nxv1i16(ptr %base, <vscale
; CHECK-LABEL: @test_vluxseg8_mask_nxv1i32_nxv1i16(
; CHECK-NEXT: entry:
; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8
-; CHECK-NEXT: [[TMP97:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 8) @llvm.riscv.vluxseg8.mask.triscv.vector.tuple_nxv4i8_8t.p0.nxv1i16.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) poison, ptr [[BASE:%.*]], <vscale x 1 x i16> [[INDEX:%.*]], <vscale x 1 x i1> [[MASK:%.*]], i64 [[VL:%.*]], i64 1, i64 5)
+; CHECK-NEXT: [[TMP1:%.*]] = zext <vscale x 1 x i16> [[INDEX:%.*]] to <vscale x 1 x i64>
+; CHECK-NEXT: [[TMP2:%.*]] = getelementptr i8, ptr [[BASE:%.*]], <vscale x 1 x i64> [[TMP1]]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp ne i64 [[VL:%.*]], 0
+; CHECK-NEXT: br i1 [[TMP3]], label [[TMP4:%.*]], label [[TMP12:%.*]]
+; CHECK: 4:
+; CHECK-NEXT: [[TMP5:%.*]] = call i64 @llvm.vscale.i64()
+; CHECK-NEXT: [[TMP6:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP5]])
+; CHECK-NEXT: br label [[DOTSPLIT:%.*]]
+; CHECK: .split:
+; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP4]] ], [ [[IV_NEXT:%.*]], [[TMP11:%.*]] ]
+; CHECK-NEXT: [[TMP7:%.*]] = extractelement <vscale x 1 x i1> [[MASK:%.*]], i64 [[IV]]
+; CHECK-NEXT: br i1 [[TMP7]], label [[TMP8:%.*]], label [[TMP11]]
+; CHECK: 8:
+; CHECK-NEXT: [[TMP9:%.*]] = extractelement <vscale x 1 x ptr> [[TMP2]], i64 [[IV]]
+; CHECK-NEXT: [[TMP10:%.*]] = ptrtoint ptr [[TMP9]] to i64
+; CHECK-NEXT: call void @__asan_loadN(i64 [[TMP10]], i64 32)
+; CHECK-NEXT: br label [[TMP11]]
+; CHECK: 11:
+; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1
+; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP6]]
+; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]]
+; CHECK: .split.split:
+; CHECK-NEXT: br label [[TMP12]]
+; CHECK: 12:
+; CHECK-NEXT: [[TMP97:%.*]] = tail call target("riscv.vector.tuple", <vscale x 4 x i8>, 8) @llvm.riscv.vluxseg8.mask.triscv.vector.tuple_nxv4i8_8t.p0.nxv1i16.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) poison, ptr [[BASE]], <vscale x 1 x i16> [[INDEX]], <vscale x 1 x i1> [[MASK]], i64 [[VL]], i64 1, i64 5)
; CHECK-NEXT: [[TMP98:%.*]] = call <vscale x 1 x i32> @llvm.riscv.tuple.extract.nxv1i32.triscv.vector.tuple_nxv4i8_8t(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) [[TMP97]], i32 1)
; CHECK-NEXT: ret <vscale x 1 x i32> [[TMP98]]
;
@@ -2135,7 +4067,31 @@ define void @test_vsoxseg2_nxv1i32_nxv1i16(target("riscv.vector.tuple", <vscale
; CHECK-LABEL: @test_vsoxseg2_nxv1i32_nxv1i16(
; CHECK-NEXT: entry:
; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8
-; CHECK-NEXT: tail call void @llvm.riscv.vsoxseg2.triscv.vector.tuple_nxv4i8_2t.p0.nxv1i16.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) [[VAL:%.*]], ptr [[BASE:%.*]], <vscale x 1 x i16> [[INDEX:%.*]], i64 [[VL:%.*]], i64 5)
+; CHECK-NEXT: [[TMP1:%.*]] = zext <vscale x 1 x i16> [[INDEX:%.*]] to <vscale x 1 x i64>
+; CHECK-NEXT: [[TMP2:%.*]] = getelementptr i8, ptr [[BASE:%.*]], <vscale x 1 x i64> [[TMP1]]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp ne i64 [[VL:%.*]], 0
+; CHECK-NEXT: br i1 [[TMP3]], label [[TMP4:%.*]], label [[TMP12:%.*]]
+; CHECK: 4:
+; CHECK-NEXT: [[TMP5:%.*]] = call i64 @llvm.vscale.i64()
+; CHECK-NEXT: [[TMP6:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP5]])
+; CHECK-NEXT: br label [[DOTSPLIT:%.*]]
+; CHECK: .split:
+; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP4]] ], [ [[IV_NEXT:%.*]], [[TMP11:%.*]] ]
+; CHECK-NEXT: [[TMP7:%.*]] = extractelement <vscale x 1 x i1> splat (i1 true), i64 [[IV]]
+; CHECK-NEXT: br i1 [[TMP7]], label [[TMP8:%.*]], label [[TMP11]]
+; CHECK: 8:
+; CHECK-NEXT: [[TMP9:%.*]] = extractelement <vscale x 1 x ptr> [[TMP2]], i64 [[IV]]
+; CHECK-NEXT: [[TMP10:%.*]] = ptrtoint ptr [[TMP9]] to i64
+; CHECK-NEXT: call void @__asan_storeN(i64 [[TMP10]], i64 8)
+; CHECK-NEXT: br label [[TMP11]]
+; CHECK: 11:
+; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1
+; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP6]]
+; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]]
+; CHECK: .split.split:
+; CHECK-NEXT: br label [[TMP12]]
+; CHECK: 12:
+; CHECK-NEXT: tail call void @llvm.riscv.vsoxseg2.triscv.vector.tuple_nxv4i8_2t.p0.nxv1i16.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) [[VAL:%.*]], ptr [[BASE]], <vscale x 1 x i16> [[INDEX]], i64 [[VL]], i64 5)
; CHECK-NEXT: ret void
;
entry:
@@ -2147,7 +4103,31 @@ define void @test_vsoxseg2_mask_nxv1i32_nxv1i16(target("riscv.vector.tuple", <vs
; CHECK-LABEL: @test_vsoxseg2_mask_nxv1i32_nxv1i16(
; CHECK-NEXT: entry:
; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8
-; CHECK-NEXT: tail call void @llvm.riscv.vsoxseg2.mask.triscv.vector.tuple_nxv4i8_2t.p0.nxv1i16.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) [[VAL:%.*]], ptr [[BASE:%.*]], <vscale x 1 x i16> [[INDEX:%.*]], <vscale x 1 x i1> [[MASK:%.*]], i64 [[VL:%.*]], i64 5)
+; CHECK-NEXT: [[TMP1:%.*]] = zext <vscale x 1 x i16> [[INDEX:%.*]] to <vscale x 1 x i64>
+; CHECK-NEXT: [[TMP2:%.*]] = getelementptr i8, ptr [[BASE:%.*]], <vscale x 1 x i64> [[TMP1]]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp ne i64 [[VL:%.*]], 0
+; CHECK-NEXT: br i1 [[TMP3]], label [[TMP4:%.*]], label [[TMP12:%.*]]
+; CHECK: 4:
+; CHECK-NEXT: [[TMP5:%.*]] = call i64 @llvm.vscale.i64()
+; CHECK-NEXT: [[TMP6:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP5]])
+; CHECK-NEXT: br label [[DOTSPLIT:%.*]]
+; CHECK: .split:
+; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP4]] ], [ [[IV_NEXT:%.*]], [[TMP11:%.*]] ]
+; CHECK-NEXT: [[TMP7:%.*]] = extractelement <vscale x 1 x i1> [[MASK:%.*]], i64 [[IV]]
+; CHECK-NEXT: br i1 [[TMP7]], label [[TMP8:%.*]], label [[TMP11]]
+; CHECK: 8:
+; CHECK-NEXT: [[TMP9:%.*]] = extractelement <vscale x 1 x ptr> [[TMP2]], i64 [[IV]]
+; CHECK-NEXT: [[TMP10:%.*]] = ptrtoint ptr [[TMP9]] to i64
+; CHECK-NEXT: call void @__asan_storeN(i64 [[TMP10]], i64 8)
+; CHECK-NEXT: br label [[TMP11]]
+; CHECK: 11:
+; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1
+; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP6]]
+; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]]
+; CHECK: .split.split:
+; CHECK-NEXT: br label [[TMP12]]
+; CHECK: 12:
+; CHECK-NEXT: tail call void @llvm.riscv.vsoxseg2.mask.triscv.vector.tuple_nxv4i8_2t.p0.nxv1i16.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) [[VAL:%.*]], ptr [[BASE]], <vscale x 1 x i16> [[INDEX]], <vscale x 1 x i1> [[MASK]], i64 [[VL]], i64 5)
; CHECK-NEXT: ret void
;
entry:
@@ -2163,7 +4143,31 @@ define void @test_vsoxseg3_nxv1i32_nxv1i16(target("riscv.vector.tuple", <vscale
; CHECK-LABEL: @test_vsoxseg3_nxv1i32_nxv1i16(
; CHECK-NEXT: entry:
; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8
-; CHECK-NEXT: tail call void @llvm.riscv.vsoxseg3.triscv.vector.tuple_nxv4i8_3t.p0.nxv1i16.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) [[VAL:%.*]], ptr [[BASE:%.*]], <vscale x 1 x i16> [[INDEX:%.*]], i64 [[VL:%.*]], i64 5)
+; CHECK-NEXT: [[TMP1:%.*]] = zext <vscale x 1 x i16> [[INDEX:%.*]] to <vscale x 1 x i64>
+; CHECK-NEXT: [[TMP2:%.*]] = getelementptr i8, ptr [[BASE:%.*]], <vscale x 1 x i64> [[TMP1]]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp ne i64 [[VL:%.*]], 0
+; CHECK-NEXT: br i1 [[TMP3]], label [[TMP4:%.*]], label [[TMP12:%.*]]
+; CHECK: 4:
+; CHECK-NEXT: [[TMP5:%.*]] = call i64 @llvm.vscale.i64()
+; CHECK-NEXT: [[TMP6:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP5]])
+; CHECK-NEXT: br label [[DOTSPLIT:%.*]]
+; CHECK: .split:
+; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP4]] ], [ [[IV_NEXT:%.*]], [[TMP11:%.*]] ]
+; CHECK-NEXT: [[TMP7:%.*]] = extractelement <vscale x 1 x i1> splat (i1 true), i64 [[IV]]
+; CHECK-NEXT: br i1 [[TMP7]], label [[TMP8:%.*]], label [[TMP11]]
+; CHECK: 8:
+; CHECK-NEXT: [[TMP9:%.*]] = extractelement <vscale x 1 x ptr> [[TMP2]], i64 [[IV]]
+; CHECK-NEXT: [[TMP10:%.*]] = ptrtoint ptr [[TMP9]] to i64
+; CHECK-NEXT: call void @__asan_storeN(i64 [[TMP10]], i64 12)
+; CHECK-NEXT: br label [[TMP11]]
+; CHECK: 11:
+; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1
+; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP6]]
+; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]]
+; CHECK: .split.split:
+; CHECK-NEXT: br label [[TMP12]]
+; CHECK: 12:
+; CHECK-NEXT: tail call void @llvm.riscv.vsoxseg3.triscv.vector.tuple_nxv4i8_3t.p0.nxv1i16.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) [[VAL:%.*]], ptr [[BASE]], <vscale x 1 x i16> [[INDEX]], i64 [[VL]], i64 5)
; CHECK-NEXT: ret void
;
entry:
@@ -2175,7 +4179,31 @@ define void @test_vsoxseg3_mask_nxv1i32_nxv1i16(target("riscv.vector.tuple", <vs
; CHECK-LABEL: @test_vsoxseg3_mask_nxv1i32_nxv1i16(
; CHECK-NEXT: entry:
; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8
-; CHECK-NEXT: tail call void @llvm.riscv.vsoxseg3.mask.triscv.vector.tuple_nxv4i8_3t.p0.nxv1i16.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) [[VAL:%.*]], ptr [[BASE:%.*]], <vscale x 1 x i16> [[INDEX:%.*]], <vscale x 1 x i1> [[MASK:%.*]], i64 [[VL:%.*]], i64 5)
+; CHECK-NEXT: [[TMP1:%.*]] = zext <vscale x 1 x i16> [[INDEX:%.*]] to <vscale x 1 x i64>
+; CHECK-NEXT: [[TMP2:%.*]] = getelementptr i8, ptr [[BASE:%.*]], <vscale x 1 x i64> [[TMP1]]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp ne i64 [[VL:%.*]], 0
+; CHECK-NEXT: br i1 [[TMP3]], label [[TMP4:%.*]], label [[TMP12:%.*]]
+; CHECK: 4:
+; CHECK-NEXT: [[TMP5:%.*]] = call i64 @llvm.vscale.i64()
+; CHECK-NEXT: [[TMP6:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP5]])
+; CHECK-NEXT: br label [[DOTSPLIT:%.*]]
+; CHECK: .split:
+; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP4]] ], [ [[IV_NEXT:%.*]], [[TMP11:%.*]] ]
+; CHECK-NEXT: [[TMP7:%.*]] = extractelement <vscale x 1 x i1> [[MASK:%.*]], i64 [[IV]]
+; CHECK-NEXT: br i1 [[TMP7]], label [[TMP8:%.*]], label [[TMP11]]
+; CHECK: 8:
+; CHECK-NEXT: [[TMP9:%.*]] = extractelement <vscale x 1 x ptr> [[TMP2]], i64 [[IV]]
+; CHECK-NEXT: [[TMP10:%.*]] = ptrtoint ptr [[TMP9]] to i64
+; CHECK-NEXT: call void @__asan_storeN(i64 [[TMP10]], i64 12)
+; CHECK-NEXT: br label [[TMP11]]
+; CHECK: 11:
+; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1
+; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP6]]
+; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]]
+; CHECK: .split.split:
+; CHECK-NEXT: br label [[TMP12]]
+; CHECK: 12:
+; CHECK-NEXT: tail call void @llvm.riscv.vsoxseg3.mask.triscv.vector.tuple_nxv4i8_3t.p0.nxv1i16.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) [[VAL:%.*]], ptr [[BASE]], <vscale x 1 x i16> [[INDEX]], <vscale x 1 x i1> [[MASK]], i64 [[VL]], i64 5)
; CHECK-NEXT: ret void
;
entry:
@@ -2191,7 +4219,31 @@ define void @test_vsoxseg4_nxv1i32_nxv1i16(target("riscv.vector.tuple", <vscale
; CHECK-LABEL: @test_vsoxseg4_nxv1i32_nxv1i16(
; CHECK-NEXT: entry:
; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8
-; CHECK-NEXT: tail call void @llvm.riscv.vsoxseg4.triscv.vector.tuple_nxv4i8_4t.p0.nxv1i16.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) [[VAL:%.*]], ptr [[BASE:%.*]], <vscale x 1 x i16> [[INDEX:%.*]], i64 [[VL:%.*]], i64 5)
+; CHECK-NEXT: [[TMP1:%.*]] = zext <vscale x 1 x i16> [[INDEX:%.*]] to <vscale x 1 x i64>
+; CHECK-NEXT: [[TMP2:%.*]] = getelementptr i8, ptr [[BASE:%.*]], <vscale x 1 x i64> [[TMP1]]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp ne i64 [[VL:%.*]], 0
+; CHECK-NEXT: br i1 [[TMP3]], label [[TMP4:%.*]], label [[TMP12:%.*]]
+; CHECK: 4:
+; CHECK-NEXT: [[TMP5:%.*]] = call i64 @llvm.vscale.i64()
+; CHECK-NEXT: [[TMP6:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP5]])
+; CHECK-NEXT: br label [[DOTSPLIT:%.*]]
+; CHECK: .split:
+; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP4]] ], [ [[IV_NEXT:%.*]], [[TMP11:%.*]] ]
+; CHECK-NEXT: [[TMP7:%.*]] = extractelement <vscale x 1 x i1> splat (i1 true), i64 [[IV]]
+; CHECK-NEXT: br i1 [[TMP7]], label [[TMP8:%.*]], label [[TMP11]]
+; CHECK: 8:
+; CHECK-NEXT: [[TMP9:%.*]] = extractelement <vscale x 1 x ptr> [[TMP2]], i64 [[IV]]
+; CHECK-NEXT: [[TMP10:%.*]] = ptrtoint ptr [[TMP9]] to i64
+; CHECK-NEXT: call void @__asan_storeN(i64 [[TMP10]], i64 16)
+; CHECK-NEXT: br label [[TMP11]]
+; CHECK: 11:
+; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1
+; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP6]]
+; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]]
+; CHECK: .split.split:
+; CHECK-NEXT: br label [[TMP12]]
+; CHECK: 12:
+; CHECK-NEXT: tail call void @llvm.riscv.vsoxseg4.triscv.vector.tuple_nxv4i8_4t.p0.nxv1i16.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) [[VAL:%.*]], ptr [[BASE]], <vscale x 1 x i16> [[INDEX]], i64 [[VL]], i64 5)
; CHECK-NEXT: ret void
;
entry:
@@ -2203,7 +4255,31 @@ define void @test_vsoxseg4_mask_nxv1i32_nxv1i16(target("riscv.vector.tuple", <vs
; CHECK-LABEL: @test_vsoxseg4_mask_nxv1i32_nxv1i16(
; CHECK-NEXT: entry:
; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8
-; CHECK-NEXT: tail call void @llvm.riscv.vsoxseg4.mask.triscv.vector.tuple_nxv4i8_4t.p0.nxv1i16.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) [[VAL:%.*]], ptr [[BASE:%.*]], <vscale x 1 x i16> [[INDEX:%.*]], <vscale x 1 x i1> [[MASK:%.*]], i64 [[VL:%.*]], i64 5)
+; CHECK-NEXT: [[TMP1:%.*]] = zext <vscale x 1 x i16> [[INDEX:%.*]] to <vscale x 1 x i64>
+; CHECK-NEXT: [[TMP2:%.*]] = getelementptr i8, ptr [[BASE:%.*]], <vscale x 1 x i64> [[TMP1]]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp ne i64 [[VL:%.*]], 0
+; CHECK-NEXT: br i1 [[TMP3]], label [[TMP4:%.*]], label [[TMP12:%.*]]
+; CHECK: 4:
+; CHECK-NEXT: [[TMP5:%.*]] = call i64 @llvm.vscale.i64()
+; CHECK-NEXT: [[TMP6:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP5]])
+; CHECK-NEXT: br label [[DOTSPLIT:%.*]]
+; CHECK: .split:
+; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP4]] ], [ [[IV_NEXT:%.*]], [[TMP11:%.*]] ]
+; CHECK-NEXT: [[TMP7:%.*]] = extractelement <vscale x 1 x i1> [[MASK:%.*]], i64 [[IV]]
+; CHECK-NEXT: br i1 [[TMP7]], label [[TMP8:%.*]], label [[TMP11]]
+; CHECK: 8:
+; CHECK-NEXT: [[TMP9:%.*]] = extractelement <vscale x 1 x ptr> [[TMP2]], i64 [[IV]]
+; CHECK-NEXT: [[TMP10:%.*]] = ptrtoint ptr [[TMP9]] to i64
+; CHECK-NEXT: call void @__asan_storeN(i64 [[TMP10]], i64 16)
+; CHECK-NEXT: br label [[TMP11]]
+; CHECK: 11:
+; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1
+; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP6]]
+; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]]
+; CHECK: .split.split:
+; CHECK-NEXT: br label [[TMP12]]
+; CHECK: 12:
+; CHECK-NEXT: tail call void @llvm.riscv.vsoxseg4.mask.triscv.vector.tuple_nxv4i8_4t.p0.nxv1i16.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) [[VAL:%.*]], ptr [[BASE]], <vscale x 1 x i16> [[INDEX]], <vscale x 1 x i1> [[MASK]], i64 [[VL]], i64 5)
; CHECK-NEXT: ret void
;
entry:
@@ -2219,7 +4295,31 @@ define void @test_vsoxseg5_nxv1i32_nxv1i16(target("riscv.vector.tuple", <vscale
; CHECK-LABEL: @test_vsoxseg5_nxv1i32_nxv1i16(
; CHECK-NEXT: entry:
; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8
-; CHECK-NEXT: tail call void @llvm.riscv.vsoxseg5.triscv.vector.tuple_nxv4i8_5t.p0.nxv1i16.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) [[VAL:%.*]], ptr [[BASE:%.*]], <vscale x 1 x i16> [[INDEX:%.*]], i64 [[VL:%.*]], i64 5)
+; CHECK-NEXT: [[TMP1:%.*]] = zext <vscale x 1 x i16> [[INDEX:%.*]] to <vscale x 1 x i64>
+; CHECK-NEXT: [[TMP2:%.*]] = getelementptr i8, ptr [[BASE:%.*]], <vscale x 1 x i64> [[TMP1]]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp ne i64 [[VL:%.*]], 0
+; CHECK-NEXT: br i1 [[TMP3]], label [[TMP4:%.*]], label [[TMP12:%.*]]
+; CHECK: 4:
+; CHECK-NEXT: [[TMP5:%.*]] = call i64 @llvm.vscale.i64()
+; CHECK-NEXT: [[TMP6:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP5]])
+; CHECK-NEXT: br label [[DOTSPLIT:%.*]]
+; CHECK: .split:
+; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP4]] ], [ [[IV_NEXT:%.*]], [[TMP11:%.*]] ]
+; CHECK-NEXT: [[TMP7:%.*]] = extractelement <vscale x 1 x i1> splat (i1 true), i64 [[IV]]
+; CHECK-NEXT: br i1 [[TMP7]], label [[TMP8:%.*]], label [[TMP11]]
+; CHECK: 8:
+; CHECK-NEXT: [[TMP9:%.*]] = extractelement <vscale x 1 x ptr> [[TMP2]], i64 [[IV]]
+; CHECK-NEXT: [[TMP10:%.*]] = ptrtoint ptr [[TMP9]] to i64
+; CHECK-NEXT: call void @__asan_storeN(i64 [[TMP10]], i64 20)
+; CHECK-NEXT: br label [[TMP11]]
+; CHECK: 11:
+; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1
+; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP6]]
+; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]]
+; CHECK: .split.split:
+; CHECK-NEXT: br label [[TMP12]]
+; CHECK: 12:
+; CHECK-NEXT: tail call void @llvm.riscv.vsoxseg5.triscv.vector.tuple_nxv4i8_5t.p0.nxv1i16.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) [[VAL:%.*]], ptr [[BASE]], <vscale x 1 x i16> [[INDEX]], i64 [[VL]], i64 5)
; CHECK-NEXT: ret void
;
entry:
@@ -2231,7 +4331,31 @@ define void @test_vsoxseg5_mask_nxv1i32_nxv1i16(target("riscv.vector.tuple", <vs
; CHECK-LABEL: @test_vsoxseg5_mask_nxv1i32_nxv1i16(
; CHECK-NEXT: entry:
; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8
-; CHECK-NEXT: tail call void @llvm.riscv.vsoxseg5.mask.triscv.vector.tuple_nxv4i8_5t.p0.nxv1i16.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) [[VAL:%.*]], ptr [[BASE:%.*]], <vscale x 1 x i16> [[INDEX:%.*]], <vscale x 1 x i1> [[MASK:%.*]], i64 [[VL:%.*]], i64 5)
+; CHECK-NEXT: [[TMP1:%.*]] = zext <vscale x 1 x i16> [[INDEX:%.*]] to <vscale x 1 x i64>
+; CHECK-NEXT: [[TMP2:%.*]] = getelementptr i8, ptr [[BASE:%.*]], <vscale x 1 x i64> [[TMP1]]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp ne i64 [[VL:%.*]], 0
+; CHECK-NEXT: br i1 [[TMP3]], label [[TMP4:%.*]], label [[TMP12:%.*]]
+; CHECK: 4:
+; CHECK-NEXT: [[TMP5:%.*]] = call i64 @llvm.vscale.i64()
+; CHECK-NEXT: [[TMP6:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP5]])
+; CHECK-NEXT: br label [[DOTSPLIT:%.*]]
+; CHECK: .split:
+; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP4]] ], [ [[IV_NEXT:%.*]], [[TMP11:%.*]] ]
+; CHECK-NEXT: [[TMP7:%.*]] = extractelement <vscale x 1 x i1> [[MASK:%.*]], i64 [[IV]]
+; CHECK-NEXT: br i1 [[TMP7]], label [[TMP8:%.*]], label [[TMP11]]
+; CHECK: 8:
+; CHECK-NEXT: [[TMP9:%.*]] = extractelement <vscale x 1 x ptr> [[TMP2]], i64 [[IV]]
+; CHECK-NEXT: [[TMP10:%.*]] = ptrtoint ptr [[TMP9]] to i64
+; CHECK-NEXT: call void @__asan_storeN(i64 [[TMP10]], i64 20)
+; CHECK-NEXT: br label [[TMP11]]
+; CHECK: 11:
+; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1
+; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP6]]
+; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]]
+; CHECK: .split.split:
+; CHECK-NEXT: br label [[TMP12]]
+; CHECK: 12:
+; CHECK-NEXT: tail call void @llvm.riscv.vsoxseg5.mask.triscv.vector.tuple_nxv4i8_5t.p0.nxv1i16.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) [[VAL:%.*]], ptr [[BASE]], <vscale x 1 x i16> [[INDEX]], <vscale x 1 x i1> [[MASK]], i64 [[VL]], i64 5)
; CHECK-NEXT: ret void
;
entry:
@@ -2247,7 +4371,31 @@ define void @test_vsoxseg6_nxv1i32_nxv1i16(target("riscv.vector.tuple", <vscale
; CHECK-LABEL: @test_vsoxseg6_nxv1i32_nxv1i16(
; CHECK-NEXT: entry:
; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8
-; CHECK-NEXT: tail call void @llvm.riscv.vsoxseg6.triscv.vector.tuple_nxv4i8_6t.p0.nxv1i16.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) [[VAL:%.*]], ptr [[BASE:%.*]], <vscale x 1 x i16> [[INDEX:%.*]], i64 [[VL:%.*]], i64 5)
+; CHECK-NEXT: [[TMP1:%.*]] = zext <vscale x 1 x i16> [[INDEX:%.*]] to <vscale x 1 x i64>
+; CHECK-NEXT: [[TMP2:%.*]] = getelementptr i8, ptr [[BASE:%.*]], <vscale x 1 x i64> [[TMP1]]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp ne i64 [[VL:%.*]], 0
+; CHECK-NEXT: br i1 [[TMP3]], label [[TMP4:%.*]], label [[TMP12:%.*]]
+; CHECK: 4:
+; CHECK-NEXT: [[TMP5:%.*]] = call i64 @llvm.vscale.i64()
+; CHECK-NEXT: [[TMP6:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP5]])
+; CHECK-NEXT: br label [[DOTSPLIT:%.*]]
+; CHECK: .split:
+; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP4]] ], [ [[IV_NEXT:%.*]], [[TMP11:%.*]] ]
+; CHECK-NEXT: [[TMP7:%.*]] = extractelement <vscale x 1 x i1> splat (i1 true), i64 [[IV]]
+; CHECK-NEXT: br i1 [[TMP7]], label [[TMP8:%.*]], label [[TMP11]]
+; CHECK: 8:
+; CHECK-NEXT: [[TMP9:%.*]] = extractelement <vscale x 1 x ptr> [[TMP2]], i64 [[IV]]
+; CHECK-NEXT: [[TMP10:%.*]] = ptrtoint ptr [[TMP9]] to i64
+; CHECK-NEXT: call void @__asan_storeN(i64 [[TMP10]], i64 24)
+; CHECK-NEXT: br label [[TMP11]]
+; CHECK: 11:
+; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1
+; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP6]]
+; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]]
+; CHECK: .split.split:
+; CHECK-NEXT: br label [[TMP12]]
+; CHECK: 12:
+; CHECK-NEXT: tail call void @llvm.riscv.vsoxseg6.triscv.vector.tuple_nxv4i8_6t.p0.nxv1i16.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) [[VAL:%.*]], ptr [[BASE]], <vscale x 1 x i16> [[INDEX]], i64 [[VL]], i64 5)
; CHECK-NEXT: ret void
;
entry:
@@ -2259,7 +4407,31 @@ define void @test_vsoxseg6_mask_nxv1i32_nxv1i16(target("riscv.vector.tuple", <vs
; CHECK-LABEL: @test_vsoxseg6_mask_nxv1i32_nxv1i16(
; CHECK-NEXT: entry:
; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8
-; CHECK-NEXT: tail call void @llvm.riscv.vsoxseg6.mask.triscv.vector.tuple_nxv4i8_6t.p0.nxv1i16.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) [[VAL:%.*]], ptr [[BASE:%.*]], <vscale x 1 x i16> [[INDEX:%.*]], <vscale x 1 x i1> [[MASK:%.*]], i64 [[VL:%.*]], i64 5)
+; CHECK-NEXT: [[TMP1:%.*]] = zext <vscale x 1 x i16> [[INDEX:%.*]] to <vscale x 1 x i64>
+; CHECK-NEXT: [[TMP2:%.*]] = getelementptr i8, ptr [[BASE:%.*]], <vscale x 1 x i64> [[TMP1]]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp ne i64 [[VL:%.*]], 0
+; CHECK-NEXT: br i1 [[TMP3]], label [[TMP4:%.*]], label [[TMP12:%.*]]
+; CHECK: 4:
+; CHECK-NEXT: [[TMP5:%.*]] = call i64 @llvm.vscale.i64()
+; CHECK-NEXT: [[TMP6:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP5]])
+; CHECK-NEXT: br label [[DOTSPLIT:%.*]]
+; CHECK: .split:
+; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP4]] ], [ [[IV_NEXT:%.*]], [[TMP11:%.*]] ]
+; CHECK-NEXT: [[TMP7:%.*]] = extractelement <vscale x 1 x i1> [[MASK:%.*]], i64 [[IV]]
+; CHECK-NEXT: br i1 [[TMP7]], label [[TMP8:%.*]], label [[TMP11]]
+; CHECK: 8:
+; CHECK-NEXT: [[TMP9:%.*]] = extractelement <vscale x 1 x ptr> [[TMP2]], i64 [[IV]]
+; CHECK-NEXT: [[TMP10:%.*]] = ptrtoint ptr [[TMP9]] to i64
+; CHECK-NEXT: call void @__asan_storeN(i64 [[TMP10]], i64 24)
+; CHECK-NEXT: br label [[TMP11]]
+; CHECK: 11:
+; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1
+; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP6]]
+; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]]
+; CHECK: .split.split:
+; CHECK-NEXT: br label [[TMP12]]
+; CHECK: 12:
+; CHECK-NEXT: tail call void @llvm.riscv.vsoxseg6.mask.triscv.vector.tuple_nxv4i8_6t.p0.nxv1i16.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) [[VAL:%.*]], ptr [[BASE]], <vscale x 1 x i16> [[INDEX]], <vscale x 1 x i1> [[MASK]], i64 [[VL]], i64 5)
; CHECK-NEXT: ret void
;
entry:
@@ -2275,7 +4447,31 @@ define void @test_vsoxseg7_nxv1i32_nxv1i16(target("riscv.vector.tuple", <vscale
; CHECK-LABEL: @test_vsoxseg7_nxv1i32_nxv1i16(
; CHECK-NEXT: entry:
; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8
-; CHECK-NEXT: tail call void @llvm.riscv.vsoxseg7.triscv.vector.tuple_nxv4i8_7t.p0.nxv1i16.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) [[VAL:%.*]], ptr [[BASE:%.*]], <vscale x 1 x i16> [[INDEX:%.*]], i64 [[VL:%.*]], i64 5)
+; CHECK-NEXT: [[TMP1:%.*]] = zext <vscale x 1 x i16> [[INDEX:%.*]] to <vscale x 1 x i64>
+; CHECK-NEXT: [[TMP2:%.*]] = getelementptr i8, ptr [[BASE:%.*]], <vscale x 1 x i64> [[TMP1]]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp ne i64 [[VL:%.*]], 0
+; CHECK-NEXT: br i1 [[TMP3]], label [[TMP4:%.*]], label [[TMP12:%.*]]
+; CHECK: 4:
+; CHECK-NEXT: [[TMP5:%.*]] = call i64 @llvm.vscale.i64()
+; CHECK-NEXT: [[TMP6:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP5]])
+; CHECK-NEXT: br label [[DOTSPLIT:%.*]]
+; CHECK: .split:
+; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP4]] ], [ [[IV_NEXT:%.*]], [[TMP11:%.*]] ]
+; CHECK-NEXT: [[TMP7:%.*]] = extractelement <vscale x 1 x i1> splat (i1 true), i64 [[IV]]
+; CHECK-NEXT: br i1 [[TMP7]], label [[TMP8:%.*]], label [[TMP11]]
+; CHECK: 8:
+; CHECK-NEXT: [[TMP9:%.*]] = extractelement <vscale x 1 x ptr> [[TMP2]], i64 [[IV]]
+; CHECK-NEXT: [[TMP10:%.*]] = ptrtoint ptr [[TMP9]] to i64
+; CHECK-NEXT: call void @__asan_storeN(i64 [[TMP10]], i64 28)
+; CHECK-NEXT: br label [[TMP11]]
+; CHECK: 11:
+; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1
+; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP6]]
+; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]]
+; CHECK: .split.split:
+; CHECK-NEXT: br label [[TMP12]]
+; CHECK: 12:
+; CHECK-NEXT: tail call void @llvm.riscv.vsoxseg7.triscv.vector.tuple_nxv4i8_7t.p0.nxv1i16.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) [[VAL:%.*]], ptr [[BASE]], <vscale x 1 x i16> [[INDEX]], i64 [[VL]], i64 5)
; CHECK-NEXT: ret void
;
entry:
@@ -2287,7 +4483,31 @@ define void @test_vsoxseg7_mask_nxv1i32_nxv1i16(target("riscv.vector.tuple", <vs
; CHECK-LABEL: @test_vsoxseg7_mask_nxv1i32_nxv1i16(
; CHECK-NEXT: entry:
; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8
-; CHECK-NEXT: tail call void @llvm.riscv.vsoxseg7.mask.triscv.vector.tuple_nxv4i8_7t.p0.nxv1i16.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) [[VAL:%.*]], ptr [[BASE:%.*]], <vscale x 1 x i16> [[INDEX:%.*]], <vscale x 1 x i1> [[MASK:%.*]], i64 [[VL:%.*]], i64 5)
+; CHECK-NEXT: [[TMP1:%.*]] = zext <vscale x 1 x i16> [[INDEX:%.*]] to <vscale x 1 x i64>
+; CHECK-NEXT: [[TMP2:%.*]] = getelementptr i8, ptr [[BASE:%.*]], <vscale x 1 x i64> [[TMP1]]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp ne i64 [[VL:%.*]], 0
+; CHECK-NEXT: br i1 [[TMP3]], label [[TMP4:%.*]], label [[TMP12:%.*]]
+; CHECK: 4:
+; CHECK-NEXT: [[TMP5:%.*]] = call i64 @llvm.vscale.i64()
+; CHECK-NEXT: [[TMP6:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP5]])
+; CHECK-NEXT: br label [[DOTSPLIT:%.*]]
+; CHECK: .split:
+; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP4]] ], [ [[IV_NEXT:%.*]], [[TMP11:%.*]] ]
+; CHECK-NEXT: [[TMP7:%.*]] = extractelement <vscale x 1 x i1> [[MASK:%.*]], i64 [[IV]]
+; CHECK-NEXT: br i1 [[TMP7]], label [[TMP8:%.*]], label [[TMP11]]
+; CHECK: 8:
+; CHECK-NEXT: [[TMP9:%.*]] = extractelement <vscale x 1 x ptr> [[TMP2]], i64 [[IV]]
+; CHECK-NEXT: [[TMP10:%.*]] = ptrtoint ptr [[TMP9]] to i64
+; CHECK-NEXT: call void @__asan_storeN(i64 [[TMP10]], i64 28)
+; CHECK-NEXT: br label [[TMP11]]
+; CHECK: 11:
+; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1
+; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP6]]
+; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]]
+; CHECK: .split.split:
+; CHECK-NEXT: br label [[TMP12]]
+; CHECK: 12:
+; CHECK-NEXT: tail call void @llvm.riscv.vsoxseg7.mask.triscv.vector.tuple_nxv4i8_7t.p0.nxv1i16.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) [[VAL:%.*]], ptr [[BASE]], <vscale x 1 x i16> [[INDEX]], <vscale x 1 x i1> [[MASK]], i64 [[VL]], i64 5)
; CHECK-NEXT: ret void
;
entry:
@@ -2303,7 +4523,31 @@ define void @test_vsoxseg8_nxv1i32_nxv1i16(target("riscv.vector.tuple", <vscale
; CHECK-LABEL: @test_vsoxseg8_nxv1i32_nxv1i16(
; CHECK-NEXT: entry:
; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8
-; CHECK-NEXT: tail call void @llvm.riscv.vsoxseg8.triscv.vector.tuple_nxv4i8_8t.p0.nxv1i16.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) [[VAL:%.*]], ptr [[BASE:%.*]], <vscale x 1 x i16> [[INDEX:%.*]], i64 [[VL:%.*]], i64 5)
+; CHECK-NEXT: [[TMP1:%.*]] = zext <vscale x 1 x i16> [[INDEX:%.*]] to <vscale x 1 x i64>
+; CHECK-NEXT: [[TMP2:%.*]] = getelementptr i8, ptr [[BASE:%.*]], <vscale x 1 x i64> [[TMP1]]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp ne i64 [[VL:%.*]], 0
+; CHECK-NEXT: br i1 [[TMP3]], label [[TMP4:%.*]], label [[TMP12:%.*]]
+; CHECK: 4:
+; CHECK-NEXT: [[TMP5:%.*]] = call i64 @llvm.vscale.i64()
+; CHECK-NEXT: [[TMP6:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP5]])
+; CHECK-NEXT: br label [[DOTSPLIT:%.*]]
+; CHECK: .split:
+; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP4]] ], [ [[IV_NEXT:%.*]], [[TMP11:%.*]] ]
+; CHECK-NEXT: [[TMP7:%.*]] = extractelement <vscale x 1 x i1> splat (i1 true), i64 [[IV]]
+; CHECK-NEXT: br i1 [[TMP7]], label [[TMP8:%.*]], label [[TMP11]]
+; CHECK: 8:
+; CHECK-NEXT: [[TMP9:%.*]] = extractelement <vscale x 1 x ptr> [[TMP2]], i64 [[IV]]
+; CHECK-NEXT: [[TMP10:%.*]] = ptrtoint ptr [[TMP9]] to i64
+; CHECK-NEXT: call void @__asan_storeN(i64 [[TMP10]], i64 32)
+; CHECK-NEXT: br label [[TMP11]]
+; CHECK: 11:
+; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1
+; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP6]]
+; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]]
+; CHECK: .split.split:
+; CHECK-NEXT: br label [[TMP12]]
+; CHECK: 12:
+; CHECK-NEXT: tail call void @llvm.riscv.vsoxseg8.triscv.vector.tuple_nxv4i8_8t.p0.nxv1i16.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) [[VAL:%.*]], ptr [[BASE]], <vscale x 1 x i16> [[INDEX]], i64 [[VL]], i64 5)
; CHECK-NEXT: ret void
;
entry:
@@ -2315,7 +4559,31 @@ define void @test_vsoxseg8_mask_nxv1i32_nxv1i16(target("riscv.vector.tuple", <vs
; CHECK-LABEL: @test_vsoxseg8_mask_nxv1i32_nxv1i16(
; CHECK-NEXT: entry:
; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8
-; CHECK-NEXT: tail call void @llvm.riscv.vsoxseg8.mask.triscv.vector.tuple_nxv4i8_8t.p0.nxv1i16.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) [[VAL:%.*]], ptr [[BASE:%.*]], <vscale x 1 x i16> [[INDEX:%.*]], <vscale x 1 x i1> [[MASK:%.*]], i64 [[VL:%.*]], i64 5)
+; CHECK-NEXT: [[TMP1:%.*]] = zext <vscale x 1 x i16> [[INDEX:%.*]] to <vscale x 1 x i64>
+; CHECK-NEXT: [[TMP2:%.*]] = getelementptr i8, ptr [[BASE:%.*]], <vscale x 1 x i64> [[TMP1]]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp ne i64 [[VL:%.*]], 0
+; CHECK-NEXT: br i1 [[TMP3]], label [[TMP4:%.*]], label [[TMP12:%.*]]
+; CHECK: 4:
+; CHECK-NEXT: [[TMP5:%.*]] = call i64 @llvm.vscale.i64()
+; CHECK-NEXT: [[TMP6:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP5]])
+; CHECK-NEXT: br label [[DOTSPLIT:%.*]]
+; CHECK: .split:
+; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP4]] ], [ [[IV_NEXT:%.*]], [[TMP11:%.*]] ]
+; CHECK-NEXT: [[TMP7:%.*]] = extractelement <vscale x 1 x i1> [[MASK:%.*]], i64 [[IV]]
+; CHECK-NEXT: br i1 [[TMP7]], label [[TMP8:%.*]], label [[TMP11]]
+; CHECK: 8:
+; CHECK-NEXT: [[TMP9:%.*]] = extractelement <vscale x 1 x ptr> [[TMP2]], i64 [[IV]]
+; CHECK-NEXT: [[TMP10:%.*]] = ptrtoint ptr [[TMP9]] to i64
+; CHECK-NEXT: call void @__asan_storeN(i64 [[TMP10]], i64 32)
+; CHECK-NEXT: br label [[TMP11]]
+; CHECK: 11:
+; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1
+; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP6]]
+; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]]
+; CHECK: .split.split:
+; CHECK-NEXT: br label [[TMP12]]
+; CHECK: 12:
+; CHECK-NEXT: tail call void @llvm.riscv.vsoxseg8.mask.triscv.vector.tuple_nxv4i8_8t.p0.nxv1i16.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) [[VAL:%.*]], ptr [[BASE]], <vscale x 1 x i16> [[INDEX]], <vscale x 1 x i1> [[MASK]], i64 [[VL]], i64 5)
; CHECK-NEXT: ret void
;
entry:
@@ -2331,7 +4599,31 @@ define void @test_vsuxseg2_nxv1i32_nxv1i16(target("riscv.vector.tuple", <vscale
; CHECK-LABEL: @test_vsuxseg2_nxv1i32_nxv1i16(
; CHECK-NEXT: entry:
; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8
-; CHECK-NEXT: tail call void @llvm.riscv.vsuxseg2.triscv.vector.tuple_nxv4i8_2t.p0.nxv1i16.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) [[VAL:%.*]], ptr [[BASE:%.*]], <vscale x 1 x i16> [[INDEX:%.*]], i64 [[VL:%.*]], i64 5)
+; CHECK-NEXT: [[TMP1:%.*]] = zext <vscale x 1 x i16> [[INDEX:%.*]] to <vscale x 1 x i64>
+; CHECK-NEXT: [[TMP2:%.*]] = getelementptr i8, ptr [[BASE:%.*]], <vscale x 1 x i64> [[TMP1]]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp ne i64 [[VL:%.*]], 0
+; CHECK-NEXT: br i1 [[TMP3]], label [[TMP4:%.*]], label [[TMP12:%.*]]
+; CHECK: 4:
+; CHECK-NEXT: [[TMP5:%.*]] = call i64 @llvm.vscale.i64()
+; CHECK-NEXT: [[TMP6:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP5]])
+; CHECK-NEXT: br label [[DOTSPLIT:%.*]]
+; CHECK: .split:
+; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP4]] ], [ [[IV_NEXT:%.*]], [[TMP11:%.*]] ]
+; CHECK-NEXT: [[TMP7:%.*]] = extractelement <vscale x 1 x i1> splat (i1 true), i64 [[IV]]
+; CHECK-NEXT: br i1 [[TMP7]], label [[TMP8:%.*]], label [[TMP11]]
+; CHECK: 8:
+; CHECK-NEXT: [[TMP9:%.*]] = extractelement <vscale x 1 x ptr> [[TMP2]], i64 [[IV]]
+; CHECK-NEXT: [[TMP10:%.*]] = ptrtoint ptr [[TMP9]] to i64
+; CHECK-NEXT: call void @__asan_storeN(i64 [[TMP10]], i64 8)
+; CHECK-NEXT: br label [[TMP11]]
+; CHECK: 11:
+; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1
+; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP6]]
+; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]]
+; CHECK: .split.split:
+; CHECK-NEXT: br label [[TMP12]]
+; CHECK: 12:
+; CHECK-NEXT: tail call void @llvm.riscv.vsuxseg2.triscv.vector.tuple_nxv4i8_2t.p0.nxv1i16.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) [[VAL:%.*]], ptr [[BASE]], <vscale x 1 x i16> [[INDEX]], i64 [[VL]], i64 5)
; CHECK-NEXT: ret void
;
entry:
@@ -2343,7 +4635,31 @@ define void @test_vsuxseg2_mask_nxv1i32_nxv1i16(target("riscv.vector.tuple", <vs
; CHECK-LABEL: @test_vsuxseg2_mask_nxv1i32_nxv1i16(
; CHECK-NEXT: entry:
; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8
-; CHECK-NEXT: tail call void @llvm.riscv.vsoxseg2.mask.triscv.vector.tuple_nxv4i8_2t.p0.nxv1i16.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) [[VAL:%.*]], ptr [[BASE:%.*]], <vscale x 1 x i16> [[INDEX:%.*]], <vscale x 1 x i1> [[MASK:%.*]], i64 [[VL:%.*]], i64 5)
+; CHECK-NEXT: [[TMP1:%.*]] = zext <vscale x 1 x i16> [[INDEX:%.*]] to <vscale x 1 x i64>
+; CHECK-NEXT: [[TMP2:%.*]] = getelementptr i8, ptr [[BASE:%.*]], <vscale x 1 x i64> [[TMP1]]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp ne i64 [[VL:%.*]], 0
+; CHECK-NEXT: br i1 [[TMP3]], label [[TMP4:%.*]], label [[TMP12:%.*]]
+; CHECK: 4:
+; CHECK-NEXT: [[TMP5:%.*]] = call i64 @llvm.vscale.i64()
+; CHECK-NEXT: [[TMP6:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP5]])
+; CHECK-NEXT: br label [[DOTSPLIT:%.*]]
+; CHECK: .split:
+; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP4]] ], [ [[IV_NEXT:%.*]], [[TMP11:%.*]] ]
+; CHECK-NEXT: [[TMP7:%.*]] = extractelement <vscale x 1 x i1> [[MASK:%.*]], i64 [[IV]]
+; CHECK-NEXT: br i1 [[TMP7]], label [[TMP8:%.*]], label [[TMP11]]
+; CHECK: 8:
+; CHECK-NEXT: [[TMP9:%.*]] = extractelement <vscale x 1 x ptr> [[TMP2]], i64 [[IV]]
+; CHECK-NEXT: [[TMP10:%.*]] = ptrtoint ptr [[TMP9]] to i64
+; CHECK-NEXT: call void @__asan_storeN(i64 [[TMP10]], i64 8)
+; CHECK-NEXT: br label [[TMP11]]
+; CHECK: 11:
+; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1
+; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP6]]
+; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]]
+; CHECK: .split.split:
+; CHECK-NEXT: br label [[TMP12]]
+; CHECK: 12:
+; CHECK-NEXT: tail call void @llvm.riscv.vsoxseg2.mask.triscv.vector.tuple_nxv4i8_2t.p0.nxv1i16.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 2) [[VAL:%.*]], ptr [[BASE]], <vscale x 1 x i16> [[INDEX]], <vscale x 1 x i1> [[MASK]], i64 [[VL]], i64 5)
; CHECK-NEXT: ret void
;
entry:
@@ -2359,7 +4675,31 @@ define void @test_vsuxseg3_nxv1i32_nxv1i16(target("riscv.vector.tuple", <vscale
; CHECK-LABEL: @test_vsuxseg3_nxv1i32_nxv1i16(
; CHECK-NEXT: entry:
; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8
-; CHECK-NEXT: tail call void @llvm.riscv.vsuxseg3.triscv.vector.tuple_nxv4i8_3t.p0.nxv1i16.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) [[VAL:%.*]], ptr [[BASE:%.*]], <vscale x 1 x i16> [[INDEX:%.*]], i64 [[VL:%.*]], i64 5)
+; CHECK-NEXT: [[TMP1:%.*]] = zext <vscale x 1 x i16> [[INDEX:%.*]] to <vscale x 1 x i64>
+; CHECK-NEXT: [[TMP2:%.*]] = getelementptr i8, ptr [[BASE:%.*]], <vscale x 1 x i64> [[TMP1]]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp ne i64 [[VL:%.*]], 0
+; CHECK-NEXT: br i1 [[TMP3]], label [[TMP4:%.*]], label [[TMP12:%.*]]
+; CHECK: 4:
+; CHECK-NEXT: [[TMP5:%.*]] = call i64 @llvm.vscale.i64()
+; CHECK-NEXT: [[TMP6:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP5]])
+; CHECK-NEXT: br label [[DOTSPLIT:%.*]]
+; CHECK: .split:
+; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP4]] ], [ [[IV_NEXT:%.*]], [[TMP11:%.*]] ]
+; CHECK-NEXT: [[TMP7:%.*]] = extractelement <vscale x 1 x i1> splat (i1 true), i64 [[IV]]
+; CHECK-NEXT: br i1 [[TMP7]], label [[TMP8:%.*]], label [[TMP11]]
+; CHECK: 8:
+; CHECK-NEXT: [[TMP9:%.*]] = extractelement <vscale x 1 x ptr> [[TMP2]], i64 [[IV]]
+; CHECK-NEXT: [[TMP10:%.*]] = ptrtoint ptr [[TMP9]] to i64
+; CHECK-NEXT: call void @__asan_storeN(i64 [[TMP10]], i64 12)
+; CHECK-NEXT: br label [[TMP11]]
+; CHECK: 11:
+; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1
+; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP6]]
+; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]]
+; CHECK: .split.split:
+; CHECK-NEXT: br label [[TMP12]]
+; CHECK: 12:
+; CHECK-NEXT: tail call void @llvm.riscv.vsuxseg3.triscv.vector.tuple_nxv4i8_3t.p0.nxv1i16.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) [[VAL:%.*]], ptr [[BASE]], <vscale x 1 x i16> [[INDEX]], i64 [[VL]], i64 5)
; CHECK-NEXT: ret void
;
entry:
@@ -2371,7 +4711,31 @@ define void @test_vsuxseg3_mask_nxv1i32_nxv1i16(target("riscv.vector.tuple", <vs
; CHECK-LABEL: @test_vsuxseg3_mask_nxv1i32_nxv1i16(
; CHECK-NEXT: entry:
; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8
-; CHECK-NEXT: tail call void @llvm.riscv.vsoxseg3.mask.triscv.vector.tuple_nxv4i8_3t.p0.nxv1i16.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) [[VAL:%.*]], ptr [[BASE:%.*]], <vscale x 1 x i16> [[INDEX:%.*]], <vscale x 1 x i1> [[MASK:%.*]], i64 [[VL:%.*]], i64 5)
+; CHECK-NEXT: [[TMP1:%.*]] = zext <vscale x 1 x i16> [[INDEX:%.*]] to <vscale x 1 x i64>
+; CHECK-NEXT: [[TMP2:%.*]] = getelementptr i8, ptr [[BASE:%.*]], <vscale x 1 x i64> [[TMP1]]
+; CHECK-NEXT: [[TMP3:%.*]] = icmp ne i64 [[VL:%.*]], 0
+; CHECK-NEXT: br i1 [[TMP3]], label [[TMP4:%.*]], label [[TMP12:%.*]]
+; CHECK: 4:
+; CHECK-NEXT: [[TMP5:%.*]] = call i64 @llvm.vscale.i64()
+; CHECK-NEXT: [[TMP6:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP5]])
+; CHECK-NEXT: br label [[DOTSPLIT:%.*]]
+; CHECK: .split:
+; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP4]] ], [ [[IV_NEXT:%.*]], [[TMP11:%.*]] ]
+; CHECK-NEXT: [[TMP7:%.*]] = extractelement <vscale x 1 x i1> [[MASK:%.*]], i64 [[IV]]
+; CHECK-NEXT: br i1 [[TMP7]], label [[TMP8:%.*]], label [[TMP11]]
+; CHECK: 8:
+; CHECK-NEXT: [[TMP9:%.*]] = extractelement <vscale x 1 x ptr> [[TMP2]], i64 [[IV]]
+; CHECK-NEXT: [[TMP10:%.*]] = ptrtoint ptr [[TMP9]] to i64
+; CHECK-NEXT: call void @__asan_storeN(i64 [[TMP10]], i64 12)
+; CHECK-NEXT: br label [[TMP11]]
+; CHECK: 11:
+; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1
+; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP6]]
+; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label
[[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP12]] +; CHECK: 12: +; CHECK-NEXT: tail call void @llvm.riscv.vsoxseg3.mask.triscv.vector.tuple_nxv4i8_3t.p0.nxv1i16.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 3) [[VAL:%.*]], ptr [[BASE]], <vscale x 1 x i16> [[INDEX]], <vscale x 1 x i1> [[MASK]], i64 [[VL]], i64 5) ; CHECK-NEXT: ret void ; entry: @@ -2387,7 +4751,31 @@ define void @test_vsuxseg4_nxv1i32_nxv1i16(target("riscv.vector.tuple", <vscale ; CHECK-LABEL: @test_vsuxseg4_nxv1i32_nxv1i16( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: tail call void @llvm.riscv.vsuxseg4.triscv.vector.tuple_nxv4i8_4t.p0.nxv1i16.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) [[VAL:%.*]], ptr [[BASE:%.*]], <vscale x 1 x i16> [[INDEX:%.*]], i64 [[VL:%.*]], i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = zext <vscale x 1 x i16> [[INDEX:%.*]] to <vscale x 1 x i64> +; CHECK-NEXT: [[TMP2:%.*]] = getelementptr i8, ptr [[BASE:%.*]], <vscale x 1 x i64> [[TMP1]] +; CHECK-NEXT: [[TMP3:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP3]], label [[TMP4:%.*]], label [[TMP12:%.*]] +; CHECK: 4: +; CHECK-NEXT: [[TMP5:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP6:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP5]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP4]] ], [ [[IV_NEXT:%.*]], [[TMP11:%.*]] ] +; CHECK-NEXT: [[TMP7:%.*]] = extractelement <vscale x 1 x i1> splat (i1 true), i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP7]], label [[TMP8:%.*]], label [[TMP11]] +; CHECK: 8: +; CHECK-NEXT: [[TMP9:%.*]] = extractelement <vscale x 1 x ptr> [[TMP2]], i64 [[IV]] +; CHECK-NEXT: [[TMP10:%.*]] = ptrtoint ptr [[TMP9]] to i64 +; CHECK-NEXT: call void @__asan_storeN(i64 [[TMP10]], i64 16) +; CHECK-NEXT: br label [[TMP11]] +; CHECK: 11: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP6]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP12]] +; CHECK: 12: +; CHECK-NEXT: tail call void @llvm.riscv.vsuxseg4.triscv.vector.tuple_nxv4i8_4t.p0.nxv1i16.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) [[VAL:%.*]], ptr [[BASE]], <vscale x 1 x i16> [[INDEX]], i64 [[VL]], i64 5) ; CHECK-NEXT: ret void ; entry: @@ -2399,7 +4787,31 @@ define void @test_vsuxseg4_mask_nxv1i32_nxv1i16(target("riscv.vector.tuple", <vs ; CHECK-LABEL: @test_vsuxseg4_mask_nxv1i32_nxv1i16( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: tail call void @llvm.riscv.vsoxseg4.mask.triscv.vector.tuple_nxv4i8_4t.p0.nxv1i16.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) [[VAL:%.*]], ptr [[BASE:%.*]], <vscale x 1 x i16> [[INDEX:%.*]], <vscale x 1 x i1> [[MASK:%.*]], i64 [[VL:%.*]], i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = zext <vscale x 1 x i16> [[INDEX:%.*]] to <vscale x 1 x i64> +; CHECK-NEXT: [[TMP2:%.*]] = getelementptr i8, ptr [[BASE:%.*]], <vscale x 1 x i64> [[TMP1]] +; CHECK-NEXT: [[TMP3:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP3]], label [[TMP4:%.*]], label [[TMP12:%.*]] +; CHECK: 4: +; CHECK-NEXT: [[TMP5:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP6:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP5]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, 
[[TMP4]] ], [ [[IV_NEXT:%.*]], [[TMP11:%.*]] ] +; CHECK-NEXT: [[TMP7:%.*]] = extractelement <vscale x 1 x i1> [[MASK:%.*]], i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP7]], label [[TMP8:%.*]], label [[TMP11]] +; CHECK: 8: +; CHECK-NEXT: [[TMP9:%.*]] = extractelement <vscale x 1 x ptr> [[TMP2]], i64 [[IV]] +; CHECK-NEXT: [[TMP10:%.*]] = ptrtoint ptr [[TMP9]] to i64 +; CHECK-NEXT: call void @__asan_storeN(i64 [[TMP10]], i64 16) +; CHECK-NEXT: br label [[TMP11]] +; CHECK: 11: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP6]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP12]] +; CHECK: 12: +; CHECK-NEXT: tail call void @llvm.riscv.vsoxseg4.mask.triscv.vector.tuple_nxv4i8_4t.p0.nxv1i16.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 4) [[VAL:%.*]], ptr [[BASE]], <vscale x 1 x i16> [[INDEX]], <vscale x 1 x i1> [[MASK]], i64 [[VL]], i64 5) ; CHECK-NEXT: ret void ; entry: @@ -2415,7 +4827,31 @@ define void @test_vsuxseg5_nxv1i32_nxv1i16(target("riscv.vector.tuple", <vscale ; CHECK-LABEL: @test_vsuxseg5_nxv1i32_nxv1i16( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: tail call void @llvm.riscv.vsuxseg5.triscv.vector.tuple_nxv4i8_5t.p0.nxv1i16.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) [[VAL:%.*]], ptr [[BASE:%.*]], <vscale x 1 x i16> [[INDEX:%.*]], i64 [[VL:%.*]], i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = zext <vscale x 1 x i16> [[INDEX:%.*]] to <vscale x 1 x i64> +; CHECK-NEXT: [[TMP2:%.*]] = getelementptr i8, ptr [[BASE:%.*]], <vscale x 1 x i64> [[TMP1]] +; CHECK-NEXT: [[TMP3:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP3]], label [[TMP4:%.*]], label [[TMP12:%.*]] +; CHECK: 4: +; CHECK-NEXT: [[TMP5:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP6:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP5]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP4]] ], [ [[IV_NEXT:%.*]], [[TMP11:%.*]] ] +; CHECK-NEXT: [[TMP7:%.*]] = extractelement <vscale x 1 x i1> splat (i1 true), i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP7]], label [[TMP8:%.*]], label [[TMP11]] +; CHECK: 8: +; CHECK-NEXT: [[TMP9:%.*]] = extractelement <vscale x 1 x ptr> [[TMP2]], i64 [[IV]] +; CHECK-NEXT: [[TMP10:%.*]] = ptrtoint ptr [[TMP9]] to i64 +; CHECK-NEXT: call void @__asan_storeN(i64 [[TMP10]], i64 20) +; CHECK-NEXT: br label [[TMP11]] +; CHECK: 11: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP6]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP12]] +; CHECK: 12: +; CHECK-NEXT: tail call void @llvm.riscv.vsuxseg5.triscv.vector.tuple_nxv4i8_5t.p0.nxv1i16.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) [[VAL:%.*]], ptr [[BASE]], <vscale x 1 x i16> [[INDEX]], i64 [[VL]], i64 5) ; CHECK-NEXT: ret void ; entry: @@ -2427,7 +4863,31 @@ define void @test_vsuxseg5_mask_nxv1i32_nxv1i16(target("riscv.vector.tuple", <vs ; CHECK-LABEL: @test_vsuxseg5_mask_nxv1i32_nxv1i16( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: tail call void @llvm.riscv.vsoxseg5.mask.triscv.vector.tuple_nxv4i8_5t.p0.nxv1i16.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) [[VAL:%.*]], ptr 
[[BASE:%.*]], <vscale x 1 x i16> [[INDEX:%.*]], <vscale x 1 x i1> [[MASK:%.*]], i64 [[VL:%.*]], i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = zext <vscale x 1 x i16> [[INDEX:%.*]] to <vscale x 1 x i64> +; CHECK-NEXT: [[TMP2:%.*]] = getelementptr i8, ptr [[BASE:%.*]], <vscale x 1 x i64> [[TMP1]] +; CHECK-NEXT: [[TMP3:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP3]], label [[TMP4:%.*]], label [[TMP12:%.*]] +; CHECK: 4: +; CHECK-NEXT: [[TMP5:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP6:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP5]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP4]] ], [ [[IV_NEXT:%.*]], [[TMP11:%.*]] ] +; CHECK-NEXT: [[TMP7:%.*]] = extractelement <vscale x 1 x i1> [[MASK:%.*]], i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP7]], label [[TMP8:%.*]], label [[TMP11]] +; CHECK: 8: +; CHECK-NEXT: [[TMP9:%.*]] = extractelement <vscale x 1 x ptr> [[TMP2]], i64 [[IV]] +; CHECK-NEXT: [[TMP10:%.*]] = ptrtoint ptr [[TMP9]] to i64 +; CHECK-NEXT: call void @__asan_storeN(i64 [[TMP10]], i64 20) +; CHECK-NEXT: br label [[TMP11]] +; CHECK: 11: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP6]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP12]] +; CHECK: 12: +; CHECK-NEXT: tail call void @llvm.riscv.vsoxseg5.mask.triscv.vector.tuple_nxv4i8_5t.p0.nxv1i16.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 5) [[VAL:%.*]], ptr [[BASE]], <vscale x 1 x i16> [[INDEX]], <vscale x 1 x i1> [[MASK]], i64 [[VL]], i64 5) ; CHECK-NEXT: ret void ; entry: @@ -2443,7 +4903,31 @@ define void @test_vsuxseg6_nxv1i32_nxv1i16(target("riscv.vector.tuple", <vscale ; CHECK-LABEL: @test_vsuxseg6_nxv1i32_nxv1i16( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: tail call void @llvm.riscv.vsuxseg6.triscv.vector.tuple_nxv4i8_6t.p0.nxv1i16.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) [[VAL:%.*]], ptr [[BASE:%.*]], <vscale x 1 x i16> [[INDEX:%.*]], i64 [[VL:%.*]], i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = zext <vscale x 1 x i16> [[INDEX:%.*]] to <vscale x 1 x i64> +; CHECK-NEXT: [[TMP2:%.*]] = getelementptr i8, ptr [[BASE:%.*]], <vscale x 1 x i64> [[TMP1]] +; CHECK-NEXT: [[TMP3:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP3]], label [[TMP4:%.*]], label [[TMP12:%.*]] +; CHECK: 4: +; CHECK-NEXT: [[TMP5:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP6:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP5]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP4]] ], [ [[IV_NEXT:%.*]], [[TMP11:%.*]] ] +; CHECK-NEXT: [[TMP7:%.*]] = extractelement <vscale x 1 x i1> splat (i1 true), i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP7]], label [[TMP8:%.*]], label [[TMP11]] +; CHECK: 8: +; CHECK-NEXT: [[TMP9:%.*]] = extractelement <vscale x 1 x ptr> [[TMP2]], i64 [[IV]] +; CHECK-NEXT: [[TMP10:%.*]] = ptrtoint ptr [[TMP9]] to i64 +; CHECK-NEXT: call void @__asan_storeN(i64 [[TMP10]], i64 24) +; CHECK-NEXT: br label [[TMP11]] +; CHECK: 11: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP6]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP12]] +; CHECK: 12: +; CHECK-NEXT: tail call void 
@llvm.riscv.vsuxseg6.triscv.vector.tuple_nxv4i8_6t.p0.nxv1i16.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) [[VAL:%.*]], ptr [[BASE]], <vscale x 1 x i16> [[INDEX]], i64 [[VL]], i64 5) ; CHECK-NEXT: ret void ; entry: @@ -2455,7 +4939,31 @@ define void @test_vsuxseg6_mask_nxv1i32_nxv1i16(target("riscv.vector.tuple", <vs ; CHECK-LABEL: @test_vsuxseg6_mask_nxv1i32_nxv1i16( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: tail call void @llvm.riscv.vsoxseg6.mask.triscv.vector.tuple_nxv4i8_6t.p0.nxv1i16.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) [[VAL:%.*]], ptr [[BASE:%.*]], <vscale x 1 x i16> [[INDEX:%.*]], <vscale x 1 x i1> [[MASK:%.*]], i64 [[VL:%.*]], i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = zext <vscale x 1 x i16> [[INDEX:%.*]] to <vscale x 1 x i64> +; CHECK-NEXT: [[TMP2:%.*]] = getelementptr i8, ptr [[BASE:%.*]], <vscale x 1 x i64> [[TMP1]] +; CHECK-NEXT: [[TMP3:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP3]], label [[TMP4:%.*]], label [[TMP12:%.*]] +; CHECK: 4: +; CHECK-NEXT: [[TMP5:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP6:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP5]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP4]] ], [ [[IV_NEXT:%.*]], [[TMP11:%.*]] ] +; CHECK-NEXT: [[TMP7:%.*]] = extractelement <vscale x 1 x i1> [[MASK:%.*]], i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP7]], label [[TMP8:%.*]], label [[TMP11]] +; CHECK: 8: +; CHECK-NEXT: [[TMP9:%.*]] = extractelement <vscale x 1 x ptr> [[TMP2]], i64 [[IV]] +; CHECK-NEXT: [[TMP10:%.*]] = ptrtoint ptr [[TMP9]] to i64 +; CHECK-NEXT: call void @__asan_storeN(i64 [[TMP10]], i64 24) +; CHECK-NEXT: br label [[TMP11]] +; CHECK: 11: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP6]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP12]] +; CHECK: 12: +; CHECK-NEXT: tail call void @llvm.riscv.vsoxseg6.mask.triscv.vector.tuple_nxv4i8_6t.p0.nxv1i16.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 6) [[VAL:%.*]], ptr [[BASE]], <vscale x 1 x i16> [[INDEX]], <vscale x 1 x i1> [[MASK]], i64 [[VL]], i64 5) ; CHECK-NEXT: ret void ; entry: @@ -2471,7 +4979,31 @@ define void @test_vsuxseg7_nxv1i32_nxv1i16(target("riscv.vector.tuple", <vscale ; CHECK-LABEL: @test_vsuxseg7_nxv1i32_nxv1i16( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: tail call void @llvm.riscv.vsuxseg7.triscv.vector.tuple_nxv4i8_7t.p0.nxv1i16.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) [[VAL:%.*]], ptr [[BASE:%.*]], <vscale x 1 x i16> [[INDEX:%.*]], i64 [[VL:%.*]], i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = zext <vscale x 1 x i16> [[INDEX:%.*]] to <vscale x 1 x i64> +; CHECK-NEXT: [[TMP2:%.*]] = getelementptr i8, ptr [[BASE:%.*]], <vscale x 1 x i64> [[TMP1]] +; CHECK-NEXT: [[TMP3:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP3]], label [[TMP4:%.*]], label [[TMP12:%.*]] +; CHECK: 4: +; CHECK-NEXT: [[TMP5:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP6:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP5]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP4]] ], [ [[IV_NEXT:%.*]], [[TMP11:%.*]] ] +; CHECK-NEXT: [[TMP7:%.*]] = extractelement <vscale x 1 x i1> splat 
(i1 true), i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP7]], label [[TMP8:%.*]], label [[TMP11]] +; CHECK: 8: +; CHECK-NEXT: [[TMP9:%.*]] = extractelement <vscale x 1 x ptr> [[TMP2]], i64 [[IV]] +; CHECK-NEXT: [[TMP10:%.*]] = ptrtoint ptr [[TMP9]] to i64 +; CHECK-NEXT: call void @__asan_storeN(i64 [[TMP10]], i64 28) +; CHECK-NEXT: br label [[TMP11]] +; CHECK: 11: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP6]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP12]] +; CHECK: 12: +; CHECK-NEXT: tail call void @llvm.riscv.vsuxseg7.triscv.vector.tuple_nxv4i8_7t.p0.nxv1i16.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) [[VAL:%.*]], ptr [[BASE]], <vscale x 1 x i16> [[INDEX]], i64 [[VL]], i64 5) ; CHECK-NEXT: ret void ; entry: @@ -2483,7 +5015,31 @@ define void @test_vsuxseg7_mask_nxv1i32_nxv1i16(target("riscv.vector.tuple", <vs ; CHECK-LABEL: @test_vsuxseg7_mask_nxv1i32_nxv1i16( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: tail call void @llvm.riscv.vsoxseg7.mask.triscv.vector.tuple_nxv4i8_7t.p0.nxv1i16.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) [[VAL:%.*]], ptr [[BASE:%.*]], <vscale x 1 x i16> [[INDEX:%.*]], <vscale x 1 x i1> [[MASK:%.*]], i64 [[VL:%.*]], i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = zext <vscale x 1 x i16> [[INDEX:%.*]] to <vscale x 1 x i64> +; CHECK-NEXT: [[TMP2:%.*]] = getelementptr i8, ptr [[BASE:%.*]], <vscale x 1 x i64> [[TMP1]] +; CHECK-NEXT: [[TMP3:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP3]], label [[TMP4:%.*]], label [[TMP12:%.*]] +; CHECK: 4: +; CHECK-NEXT: [[TMP5:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP6:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP5]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP4]] ], [ [[IV_NEXT:%.*]], [[TMP11:%.*]] ] +; CHECK-NEXT: [[TMP7:%.*]] = extractelement <vscale x 1 x i1> [[MASK:%.*]], i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP7]], label [[TMP8:%.*]], label [[TMP11]] +; CHECK: 8: +; CHECK-NEXT: [[TMP9:%.*]] = extractelement <vscale x 1 x ptr> [[TMP2]], i64 [[IV]] +; CHECK-NEXT: [[TMP10:%.*]] = ptrtoint ptr [[TMP9]] to i64 +; CHECK-NEXT: call void @__asan_storeN(i64 [[TMP10]], i64 28) +; CHECK-NEXT: br label [[TMP11]] +; CHECK: 11: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP6]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP12]] +; CHECK: 12: +; CHECK-NEXT: tail call void @llvm.riscv.vsoxseg7.mask.triscv.vector.tuple_nxv4i8_7t.p0.nxv1i16.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 7) [[VAL:%.*]], ptr [[BASE]], <vscale x 1 x i16> [[INDEX]], <vscale x 1 x i1> [[MASK]], i64 [[VL]], i64 5) ; CHECK-NEXT: ret void ; entry: @@ -2499,7 +5055,31 @@ define void @test_vsuxseg8_nxv1i32_nxv1i16(target("riscv.vector.tuple", <vscale ; CHECK-LABEL: @test_vsuxseg8_nxv1i32_nxv1i16( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: tail call void @llvm.riscv.vsuxseg8.triscv.vector.tuple_nxv4i8_8t.p0.nxv1i16.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) [[VAL:%.*]], ptr [[BASE:%.*]], <vscale x 1 x i16> [[INDEX:%.*]], i64 [[VL:%.*]], i64 5) +; 
CHECK-NEXT: [[TMP1:%.*]] = zext <vscale x 1 x i16> [[INDEX:%.*]] to <vscale x 1 x i64> +; CHECK-NEXT: [[TMP2:%.*]] = getelementptr i8, ptr [[BASE:%.*]], <vscale x 1 x i64> [[TMP1]] +; CHECK-NEXT: [[TMP3:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP3]], label [[TMP4:%.*]], label [[TMP12:%.*]] +; CHECK: 4: +; CHECK-NEXT: [[TMP5:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP6:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP5]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP4]] ], [ [[IV_NEXT:%.*]], [[TMP11:%.*]] ] +; CHECK-NEXT: [[TMP7:%.*]] = extractelement <vscale x 1 x i1> splat (i1 true), i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP7]], label [[TMP8:%.*]], label [[TMP11]] +; CHECK: 8: +; CHECK-NEXT: [[TMP9:%.*]] = extractelement <vscale x 1 x ptr> [[TMP2]], i64 [[IV]] +; CHECK-NEXT: [[TMP10:%.*]] = ptrtoint ptr [[TMP9]] to i64 +; CHECK-NEXT: call void @__asan_storeN(i64 [[TMP10]], i64 32) +; CHECK-NEXT: br label [[TMP11]] +; CHECK: 11: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP6]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP12]] +; CHECK: 12: +; CHECK-NEXT: tail call void @llvm.riscv.vsuxseg8.triscv.vector.tuple_nxv4i8_8t.p0.nxv1i16.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) [[VAL:%.*]], ptr [[BASE]], <vscale x 1 x i16> [[INDEX]], i64 [[VL]], i64 5) ; CHECK-NEXT: ret void ; entry: @@ -2511,7 +5091,31 @@ define void @test_vsuxseg8_mask_nxv1i32_nxv1i16(target("riscv.vector.tuple", <vs ; CHECK-LABEL: @test_vsuxseg8_mask_nxv1i32_nxv1i16( ; CHECK-NEXT: entry: ; CHECK-NEXT: [[TMP0:%.*]] = load i64, ptr @__asan_shadow_memory_dynamic_address, align 8 -; CHECK-NEXT: tail call void @llvm.riscv.vsoxseg8.mask.triscv.vector.tuple_nxv4i8_8t.p0.nxv1i16.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) [[VAL:%.*]], ptr [[BASE:%.*]], <vscale x 1 x i16> [[INDEX:%.*]], <vscale x 1 x i1> [[MASK:%.*]], i64 [[VL:%.*]], i64 5) +; CHECK-NEXT: [[TMP1:%.*]] = zext <vscale x 1 x i16> [[INDEX:%.*]] to <vscale x 1 x i64> +; CHECK-NEXT: [[TMP2:%.*]] = getelementptr i8, ptr [[BASE:%.*]], <vscale x 1 x i64> [[TMP1]] +; CHECK-NEXT: [[TMP3:%.*]] = icmp ne i64 [[VL:%.*]], 0 +; CHECK-NEXT: br i1 [[TMP3]], label [[TMP4:%.*]], label [[TMP12:%.*]] +; CHECK: 4: +; CHECK-NEXT: [[TMP5:%.*]] = call i64 @llvm.vscale.i64() +; CHECK-NEXT: [[TMP6:%.*]] = call i64 @llvm.umin.i64(i64 [[VL]], i64 [[TMP5]]) +; CHECK-NEXT: br label [[DOTSPLIT:%.*]] +; CHECK: .split: +; CHECK-NEXT: [[IV:%.*]] = phi i64 [ 0, [[TMP4]] ], [ [[IV_NEXT:%.*]], [[TMP11:%.*]] ] +; CHECK-NEXT: [[TMP7:%.*]] = extractelement <vscale x 1 x i1> [[MASK:%.*]], i64 [[IV]] +; CHECK-NEXT: br i1 [[TMP7]], label [[TMP8:%.*]], label [[TMP11]] +; CHECK: 8: +; CHECK-NEXT: [[TMP9:%.*]] = extractelement <vscale x 1 x ptr> [[TMP2]], i64 [[IV]] +; CHECK-NEXT: [[TMP10:%.*]] = ptrtoint ptr [[TMP9]] to i64 +; CHECK-NEXT: call void @__asan_storeN(i64 [[TMP10]], i64 32) +; CHECK-NEXT: br label [[TMP11]] +; CHECK: 11: +; CHECK-NEXT: [[IV_NEXT]] = add nuw nsw i64 [[IV]], 1 +; CHECK-NEXT: [[IV_CHECK:%.*]] = icmp eq i64 [[IV_NEXT]], [[TMP6]] +; CHECK-NEXT: br i1 [[IV_CHECK]], label [[DOTSPLIT_SPLIT:%.*]], label [[DOTSPLIT]] +; CHECK: .split.split: +; CHECK-NEXT: br label [[TMP12]] +; CHECK: 12: +; CHECK-NEXT: tail call void 
@llvm.riscv.vsoxseg8.mask.triscv.vector.tuple_nxv4i8_8t.p0.nxv1i16.nxv1i1.i64(target("riscv.vector.tuple", <vscale x 4 x i8>, 8) [[VAL:%.*]], ptr [[BASE]], <vscale x 1 x i16> [[INDEX]], <vscale x 1 x i1> [[MASK]], i64 [[VL]], i64 5) ; CHECK-NEXT: ret void ; entry: diff --git a/llvm/test/Instrumentation/DataFlowSanitizer/abilist_aggregate.ll b/llvm/test/Instrumentation/DataFlowSanitizer/abilist_aggregate.ll index 2cf5771..3cab62b 100644 --- a/llvm/test/Instrumentation/DataFlowSanitizer/abilist_aggregate.ll +++ b/llvm/test/Instrumentation/DataFlowSanitizer/abilist_aggregate.ll @@ -13,7 +13,7 @@ define {i1, i7} @functional({i32, i1} %a, [2 x i7] %b) { define {i1, i7} @call_functional({i32, i1} %a, [2 x i7] %b) { ; CHECK-LABEL: @call_functional.dfsan - ; CHECK-NEXT: %[[#REG:]] = load [2 x i8], ptr inttoptr (i64 add (i64 ptrtoint (ptr @__dfsan_arg_tls to i64), i64 2) to ptr), align [[ALIGN:2]] + ; CHECK-NEXT: %[[#REG:]] = load [2 x i8], ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 2), align [[ALIGN:2]] ; CHECK-NEXT: %[[#REG+1]] = load { i8, i8 }, ptr @__dfsan_arg_tls, align [[ALIGN]] ; CHECK-NEXT: %[[#REG+2]] = extractvalue { i8, i8 } %[[#REG+1]], 0 ; CHECK-NEXT: %[[#REG+3]] = extractvalue { i8, i8 } %[[#REG+1]], 1 @@ -68,7 +68,7 @@ define {i1, i7} @call_uninstrumented({i32, i1} %a, [2 x i7] %b) { define {i1, i7} @call_custom_with_ret({i32, i1} %a, [2 x i7] %b) { ; CHECK: @call_custom_with_ret.dfsan ; CHECK: %labelreturn = alloca i8, align 1 - ; CHECK: [[B:%.*]] = load [2 x i8], ptr inttoptr (i64 add (i64 ptrtoint (ptr @__dfsan_arg_tls to i64), i64 2) to ptr), align [[ALIGN:2]] + ; CHECK: [[B:%.*]] = load [2 x i8], ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 2), align [[ALIGN:2]] ; CHECK: [[A:%.*]] = load { i8, i8 }, ptr @__dfsan_arg_tls, align [[ALIGN]] ; CHECK: [[A0:%.*]] = extractvalue { i8, i8 } [[A]], 0 ; CHECK: [[A1:%.*]] = extractvalue { i8, i8 } [[A]], 1 @@ -89,7 +89,7 @@ define {i1, i7} @call_custom_with_ret({i32, i1} %a, [2 x i7] %b) { define void @call_custom_without_ret({i32, i1} %a, [2 x i7] %b) { ; CHECK: @call_custom_without_ret.dfsan - ; CHECK: [[B:%.*]] = load [2 x i8], ptr inttoptr (i64 add (i64 ptrtoint (ptr @__dfsan_arg_tls to i64), i64 2) to ptr), align [[ALIGN:2]] + ; CHECK: [[B:%.*]] = load [2 x i8], ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 2), align [[ALIGN:2]] ; CHECK: [[A:%.*]] = load { i8, i8 }, ptr @__dfsan_arg_tls, align [[ALIGN]] ; CHECK: [[A0:%.*]] = extractvalue { i8, i8 } [[A]], 0 ; CHECK: [[A1:%.*]] = extractvalue { i8, i8 } [[A]], 1 @@ -105,7 +105,7 @@ define void @call_custom_without_ret({i32, i1} %a, [2 x i7] %b) { define void @call_custom_varg({i32, i1} %a, [2 x i7] %b) { ; CHECK: @call_custom_varg.dfsan - ; CHECK: [[B:%.*]] = load [2 x i8], ptr inttoptr (i64 add (i64 ptrtoint (ptr @__dfsan_arg_tls to i64), i64 2) to ptr), align [[ALIGN:2]] + ; CHECK: [[B:%.*]] = load [2 x i8], ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 2), align [[ALIGN:2]] ; CHECK: %labelva = alloca [1 x i8], align 1 ; CHECK: [[A:%.*]] = load { i8, i8 }, ptr @__dfsan_arg_tls, align [[ALIGN]] ; CHECK: [[A0:%.*]] = extractvalue { i8, i8 } [[A]], 0 @@ -126,7 +126,7 @@ define void @call_custom_varg({i32, i1} %a, [2 x i7] %b) { define {i1, i7} @call_custom_cb({i32, i1} %a, [2 x i7] %b) { ; CHECK: define { i1, i7 } @call_custom_cb.dfsan({ i32, i1 } %a, [2 x i7] %b) { ; CHECK: %labelreturn = alloca i8, align 1 - ; CHECK: [[B:%.*]] = load [2 x i8], ptr inttoptr (i64 add (i64 ptrtoint (ptr @__dfsan_arg_tls to i64), i64 2) to ptr), align [[ALIGN:2]] + ; CHECK: [[B:%.*]] = 
load [2 x i8], ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 2), align [[ALIGN:2]] ; CHECK: [[A:%.*]] = load { i8, i8 }, ptr @__dfsan_arg_tls, align [[ALIGN]] ; CHECK: [[A0:%.*]] = extractvalue { i8, i8 } [[A]], 0 ; CHECK: [[A1:%.*]] = extractvalue { i8, i8 } [[A]], 1 @@ -153,7 +153,7 @@ define {i1, i7} @custom_cb(ptr %cb, {i32, i1} %a, [2 x i7] %b) { define {i1, i7} @cb({i32, i1} %a, [2 x i7] %b) { ; CHECK: define { i1, i7 } @cb.dfsan({ i32, i1 } %a, [2 x i7] %b) - ; CHECK: [[BL:%.*]] = load [2 x i8], ptr inttoptr (i64 add (i64 ptrtoint (ptr @__dfsan_arg_tls to i64), i64 2) to ptr), align [[ALIGN:2]] + ; CHECK: [[BL:%.*]] = load [2 x i8], ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 2), align [[ALIGN:2]] ; CHECK: [[AL:%.*]] = load { i8, i8 }, ptr @__dfsan_arg_tls, align [[ALIGN]] ; CHECK: [[AL1:%.*]] = extractvalue { i8, i8 } [[AL]], 1 ; CHECK: [[BL0:%.*]] = extractvalue [2 x i8] [[BL]], 0 @@ -180,8 +180,8 @@ define ptr @ret_custom() { ; COMM: TODO simplify the expression [[#mul(2,SBYTES) + max(SBYTES,2)]] to ; COMM: [[#mul(3,SBYTES)]], if shadow-tls-alignment is updated to match shadow ; COMM: width bytes. -; CHECK: [[B:%.*]] = load [2 x i8], ptr inttoptr (i64 add (i64 ptrtoint (ptr @__dfsan_arg_tls to i64), i64 4) to ptr), align [[ALIGN:2]] -; CHECK: [[A:%.*]] = load { i8, i8 }, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__dfsan_arg_tls to i64), i64 2) to ptr), align [[ALIGN]] +; CHECK: [[B:%.*]] = load [2 x i8], ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 4), align [[ALIGN:2]] +; CHECK: [[A:%.*]] = load { i8, i8 }, ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 2), align [[ALIGN]] ; CHECK: [[CB:%.*]] = load i8, ptr @__dfsan_arg_tls, align [[ALIGN]] ; CHECK: [[A0:%.*]] = extractvalue { i8, i8 } [[A]], 0 ; CHECK: [[A1:%.*]] = extractvalue { i8, i8 } [[A]], 1 @@ -198,7 +198,7 @@ define ptr @ret_custom() { define {i1, i7} @custom_with_ret({i32, i1} %a, [2 x i7] %b) { ; CHECK: define linkonce_odr { i1, i7 } @"dfsw$custom_with_ret"({ i32, i1 } %0, [2 x i7] %1) ; CHECK: %labelreturn = alloca i8, align 1 - ; CHECK: [[B:%.*]] = load [2 x i8], ptr inttoptr (i64 add (i64 ptrtoint (ptr @__dfsan_arg_tls to i64), i64 2) to ptr), align [[ALIGN:2]] + ; CHECK: [[B:%.*]] = load [2 x i8], ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 2), align [[ALIGN:2]] ; CHECK: [[A:%.*]] = load { i8, i8 }, ptr @__dfsan_arg_tls, align [[ALIGN]] ; CHECK: [[A0:%.*]] = extractvalue { i8, i8 } [[A]], 0 ; CHECK: [[A1:%.*]] = extractvalue { i8, i8 } [[A]], 1 @@ -221,7 +221,7 @@ define {i1, i7} @custom_with_ret({i32, i1} %a, [2 x i7] %b) { define void @custom_without_ret({i32, i1} %a, [2 x i7] %b) { ; CHECK: define linkonce_odr void @"dfsw$custom_without_ret"({ i32, i1 } %0, [2 x i7] %1) - ; CHECK: [[B:%.*]] = load [2 x i8], ptr inttoptr (i64 add (i64 ptrtoint (ptr @__dfsan_arg_tls to i64), i64 2) to ptr), align [[ALIGN:2]] + ; CHECK: [[B:%.*]] = load [2 x i8], ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 2), align [[ALIGN:2]] ; CHECK: [[A:%.*]] = load { i8, i8 }, ptr @__dfsan_arg_tls, align [[ALIGN]] ; CHECK: [[A0:%.*]] = extractvalue { i8, i8 } [[A]], 0 ; CHECK: [[A1:%.*]] = extractvalue { i8, i8 } [[A]], 1 diff --git a/llvm/test/Instrumentation/DataFlowSanitizer/arith.ll b/llvm/test/Instrumentation/DataFlowSanitizer/arith.ll index 8c9eb5f..b474383 100644 --- a/llvm/test/Instrumentation/DataFlowSanitizer/arith.ll +++ b/llvm/test/Instrumentation/DataFlowSanitizer/arith.ll @@ -1,73 +1,86 @@ -; RUN: opt < %s -passes=dfsan -S | FileCheck %s +; NOTE: Assertions have been autogenerated by 
utils/update_test_checks.py UTC_ARGS: --version 6 +; RUN: opt < %s -passes=dfsan -dfsan-add-global-name-suffix=0 -S | FileCheck %s target datalayout = "e-p:64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f32:32:32-f64:64:64-v64:64:64-v128:128:128-a0:0:64-s0:64:64-f80:128:128-n8:16:32:64-S128" target triple = "x86_64-unknown-linux-gnu" define i8 @add(i8 %a, i8 %b) { - ; CHECK: @add.dfsan - ; CHECK-DAG: %[[#ALABEL:]] = load i8, ptr @__dfsan_arg_tls, align [[ALIGN:2]] - ; CHECK-DAG: %[[#BLABEL:]] = load i8, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__dfsan_arg_tls to i64), i64 2) to ptr), align [[ALIGN]] - ; CHECK: %[[#UNION:]] = or i8 %[[#ALABEL]], %[[#BLABEL]] - ; CHECK: %c = add i8 %a, %b - ; CHECK: store i8 %[[#UNION]], ptr @__dfsan_retval_tls, align [[ALIGN]] - ; CHECK: ret i8 %c +; CHECK-LABEL: define i8 @add( +; CHECK-SAME: i8 [[A:%.*]], i8 [[B:%.*]]) { +; CHECK-NEXT: [[TMP1:%.*]] = load i8, ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 2), align 2 +; CHECK-NEXT: [[TMP2:%.*]] = load i8, ptr @__dfsan_arg_tls, align 2 +; CHECK-NEXT: [[TMP3:%.*]] = or i8 [[TMP2]], [[TMP1]] +; CHECK-NEXT: [[C:%.*]] = add i8 [[A]], [[B]] +; CHECK-NEXT: store i8 [[TMP3]], ptr @__dfsan_retval_tls, align 2 +; CHECK-NEXT: ret i8 [[C]] +; %c = add i8 %a, %b ret i8 %c } define i8 @sub(i8 %a, i8 %b) { - ; CHECK: @sub.dfsan - ; CHECK: load{{.*}}__dfsan_arg_tls - ; CHECK: load{{.*}}__dfsan_arg_tls - ; CHECK: or i8 - ; CHECK: %c = sub i8 %a, %b - ; CHECK: store{{.*}}__dfsan_retval_tls - ; CHECK: ret i8 %c +; CHECK-LABEL: define i8 @sub( +; CHECK-SAME: i8 [[A:%.*]], i8 [[B:%.*]]) { +; CHECK-NEXT: [[TMP1:%.*]] = load i8, ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 2), align 2 +; CHECK-NEXT: [[TMP2:%.*]] = load i8, ptr @__dfsan_arg_tls, align 2 +; CHECK-NEXT: [[TMP3:%.*]] = or i8 [[TMP2]], [[TMP1]] +; CHECK-NEXT: [[C:%.*]] = sub i8 [[A]], [[B]] +; CHECK-NEXT: store i8 [[TMP3]], ptr @__dfsan_retval_tls, align 2 +; CHECK-NEXT: ret i8 [[C]] +; %c = sub i8 %a, %b ret i8 %c } define i8 @mul(i8 %a, i8 %b) { - ; CHECK: @mul.dfsan - ; CHECK: load{{.*}}__dfsan_arg_tls - ; CHECK: load{{.*}}__dfsan_arg_tls - ; CHECK: or i8 - ; CHECK: %c = mul i8 %a, %b - ; CHECK: store{{.*}}__dfsan_retval_tls - ; CHECK: ret i8 %c +; CHECK-LABEL: define i8 @mul( +; CHECK-SAME: i8 [[A:%.*]], i8 [[B:%.*]]) { +; CHECK-NEXT: [[TMP1:%.*]] = load i8, ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 2), align 2 +; CHECK-NEXT: [[TMP2:%.*]] = load i8, ptr @__dfsan_arg_tls, align 2 +; CHECK-NEXT: [[TMP3:%.*]] = or i8 [[TMP2]], [[TMP1]] +; CHECK-NEXT: [[C:%.*]] = mul i8 [[A]], [[B]] +; CHECK-NEXT: store i8 [[TMP3]], ptr @__dfsan_retval_tls, align 2 +; CHECK-NEXT: ret i8 [[C]] +; %c = mul i8 %a, %b ret i8 %c } define i8 @sdiv(i8 %a, i8 %b) { - ; CHECK: @sdiv.dfsan - ; CHECK: load{{.*}}__dfsan_arg_tls - ; CHECK: load{{.*}}__dfsan_arg_tls - ; CHECK: or i8 - ; CHECK: %c = sdiv i8 %a, %b - ; CHECK: store{{.*}}__dfsan_retval_tls - ; CHECK: ret i8 %c +; CHECK-LABEL: define i8 @sdiv( +; CHECK-SAME: i8 [[A:%.*]], i8 [[B:%.*]]) { +; CHECK-NEXT: [[TMP1:%.*]] = load i8, ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 2), align 2 +; CHECK-NEXT: [[TMP2:%.*]] = load i8, ptr @__dfsan_arg_tls, align 2 +; CHECK-NEXT: [[TMP3:%.*]] = or i8 [[TMP2]], [[TMP1]] +; CHECK-NEXT: [[C:%.*]] = sdiv i8 [[A]], [[B]] +; CHECK-NEXT: store i8 [[TMP3]], ptr @__dfsan_retval_tls, align 2 +; CHECK-NEXT: ret i8 [[C]] +; %c = sdiv i8 %a, %b ret i8 %c } define i8 @udiv(i8 %a, i8 %b) { - ; CHECK: @udiv.dfsan - ; CHECK: load{{.*}}__dfsan_arg_tls - ; CHECK: load{{.*}}__dfsan_arg_tls - ; 
CHECK: or i8 - ; CHECK: %c = udiv i8 %a, %b - ; CHECK: store{{.*}}__dfsan_retval_tls - ; CHECK: ret i8 %c +; CHECK-LABEL: define i8 @udiv( +; CHECK-SAME: i8 [[A:%.*]], i8 [[B:%.*]]) { +; CHECK-NEXT: [[TMP1:%.*]] = load i8, ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 2), align 2 +; CHECK-NEXT: [[TMP2:%.*]] = load i8, ptr @__dfsan_arg_tls, align 2 +; CHECK-NEXT: [[TMP3:%.*]] = or i8 [[TMP2]], [[TMP1]] +; CHECK-NEXT: [[C:%.*]] = udiv i8 [[A]], [[B]] +; CHECK-NEXT: store i8 [[TMP3]], ptr @__dfsan_retval_tls, align 2 +; CHECK-NEXT: ret i8 [[C]] +; %c = udiv i8 %a, %b ret i8 %c } define double @fneg(double %a) { - ; CHECK: @fneg.dfsan - ; CHECK: load{{.*}}__dfsan_arg_tls - ; CHECK: %c = fneg double %a - ; CHECK: store{{.*}}__dfsan_retval_tls - ; CHECK: ret double %c +; CHECK-LABEL: define double @fneg( +; CHECK-SAME: double [[A:%.*]]) { +; CHECK-NEXT: [[TMP1:%.*]] = load i8, ptr @__dfsan_arg_tls, align 2 +; CHECK-NEXT: [[C:%.*]] = fneg double [[A]] +; CHECK-NEXT: store i8 [[TMP1]], ptr @__dfsan_retval_tls, align 2 +; CHECK-NEXT: ret double [[C]] +; %c = fneg double %a ret double %c } diff --git a/llvm/test/Instrumentation/DataFlowSanitizer/array.ll b/llvm/test/Instrumentation/DataFlowSanitizer/array.ll index 5642edc..14468c1 100644 --- a/llvm/test/Instrumentation/DataFlowSanitizer/array.ll +++ b/llvm/test/Instrumentation/DataFlowSanitizer/array.ll @@ -158,7 +158,7 @@ define i1 @extract_array([4 x i1] %a) { define [4 x i1] @insert_array([4 x i1] %a, i1 %e2) { ; NO_COMBINE_LOAD_PTR: @insert_array.dfsan ; NO_COMBINE_LOAD_PTR: [[EM:%.*]] = load i8, ptr - ; NO_COMBINE_LOAD_PTR-SAME: inttoptr (i64 add (i64 ptrtoint (ptr @__dfsan_arg_tls to i64), i64 4) to ptr), align [[ALIGN:2]] + ; NO_COMBINE_LOAD_PTR-SAME: getelementptr (i8, ptr @__dfsan_arg_tls, i64 4), align [[ALIGN:2]] ; NO_COMBINE_LOAD_PTR: [[AM:%.*]] = load [4 x i8], ptr @__dfsan_arg_tls, align [[ALIGN]] ; NO_COMBINE_LOAD_PTR: [[AM1:%.*]] = insertvalue [4 x i8] [[AM]], i8 [[EM]], 0 ; NO_COMBINE_LOAD_PTR: store [4 x i8] [[AM1]], ptr @__dfsan_retval_tls, align [[ALIGN]] diff --git a/llvm/test/Instrumentation/DataFlowSanitizer/dfsan-pass-second-run.ll b/llvm/test/Instrumentation/DataFlowSanitizer/dfsan-pass-second-run.ll index 7da647b..7f49c14 100644 --- a/llvm/test/Instrumentation/DataFlowSanitizer/dfsan-pass-second-run.ll +++ b/llvm/test/Instrumentation/DataFlowSanitizer/dfsan-pass-second-run.ll @@ -5,7 +5,7 @@ target triple = "x86_64-unknown-linux-gnu" define i8 @add(i8 %a, i8 %b) { ; CHECK: @add.dfsan ; CHECK-DAG: %[[#ALABEL:]] = load i8, ptr @__dfsan_arg_tls, align [[ALIGN:2]] - ; CHECK-DAG: %[[#BLABEL:]] = load i8, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__dfsan_arg_tls to i64), i64 2) to ptr), align [[ALIGN]] + ; CHECK-DAG: %[[#BLABEL:]] = load i8, ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 2), align [[ALIGN]] ; CHECK: %[[#UNION:]] = or i8 %[[#ALABEL]], %[[#BLABEL]] ; CHECK: %c = add i8 %a, %b ; CHECK: store i8 %[[#UNION]], ptr @__dfsan_retval_tls, align [[ALIGN]] diff --git a/llvm/test/Instrumentation/DataFlowSanitizer/dont_combine_offset_labels_on_gep.ll b/llvm/test/Instrumentation/DataFlowSanitizer/dont_combine_offset_labels_on_gep.ll index 997681b..7574346 100644 --- a/llvm/test/Instrumentation/DataFlowSanitizer/dont_combine_offset_labels_on_gep.ll +++ b/llvm/test/Instrumentation/DataFlowSanitizer/dont_combine_offset_labels_on_gep.ll @@ -1,19 +1,26 @@ -; RUN: opt < %s -passes=dfsan -dfsan-combine-offset-labels-on-gep=false -S | FileCheck %s -; RUN: opt < %s -passes=dfsan -dfsan-combine-offset-labels-on-gep=false 
-dfsan-track-origins=1 -S | FileCheck %s --check-prefixes=CHECK,CHECK_ORIGIN +; NOTE: Assertions have been autogenerated by utils/update_test_checks.py UTC_ARGS: --version 6 +; RUN: opt < %s -passes=dfsan -dfsan-combine-offset-labels-on-gep=false -dfsan-add-global-name-suffix=0 -S | FileCheck %s +; RUN: opt < %s -passes=dfsan -dfsan-combine-offset-labels-on-gep=false -dfsan-track-origins=1 -dfsan-add-global-name-suffix=0 -S | FileCheck %s --check-prefix=CHECK_ORIGIN target datalayout = "e-p:64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f32:32:32-f64:64:64-v64:64:64-v128:128:128-a0:0:64-s0:64:64-f80:128:128-n8:16:32:64-S128" target triple = "x86_64-unknown-linux-gnu" -; CHECK: @__dfsan_arg_tls = external thread_local(initialexec) global [[TLS_ARR:\[100 x i64\]]] -; CHECK: @__dfsan_retval_tls = external thread_local(initialexec) global [[TLS_ARR]] define ptr @gepop(ptr %p, i32 %a, i32 %b, i32 %c) { - ; CHECK: @gepop.dfsan - ; CHECK_ORIGIN: %[[#PO:]] = load i32, ptr @__dfsan_arg_origin_tls, align [[ALIGN_O:4]] - ; CHECK: %[[#PS:]] = load i8, ptr @__dfsan_arg_tls, align [[ALIGN_S:2]] - ; CHECK: %e = getelementptr [10 x [20 x i32]], ptr %p, i32 %a, i32 %b, i32 %c - ; CHECK: store i8 %[[#PS]], ptr @__dfsan_retval_tls, align [[ALIGN_S]] - ; CHECK_ORIGIN: store i32 %[[#PO]], ptr @__dfsan_retval_origin_tls, align [[ALIGN_O]] - +; CHECK-LABEL: define ptr @gepop( +; CHECK-SAME: ptr [[P:%.*]], i32 [[A:%.*]], i32 [[B:%.*]], i32 [[C:%.*]]) { +; CHECK-NEXT: [[TMP1:%.*]] = load i8, ptr @__dfsan_arg_tls, align 2 +; CHECK-NEXT: [[E:%.*]] = getelementptr [10 x [20 x i32]], ptr [[P]], i32 [[A]], i32 [[B]], i32 [[C]] +; CHECK-NEXT: store i8 [[TMP1]], ptr @__dfsan_retval_tls, align 2 +; CHECK-NEXT: ret ptr [[E]] +; +; CHECK_ORIGIN-LABEL: define ptr @gepop( +; CHECK_ORIGIN-SAME: ptr [[P:%.*]], i32 [[A:%.*]], i32 [[B:%.*]], i32 [[C:%.*]]) { +; CHECK_ORIGIN-NEXT: [[TMP1:%.*]] = load i32, ptr @__dfsan_arg_origin_tls, align 4 +; CHECK_ORIGIN-NEXT: [[TMP2:%.*]] = load i8, ptr @__dfsan_arg_tls, align 2 +; CHECK_ORIGIN-NEXT: [[E:%.*]] = getelementptr [10 x [20 x i32]], ptr [[P]], i32 [[A]], i32 [[B]], i32 [[C]] +; CHECK_ORIGIN-NEXT: store i8 [[TMP2]], ptr @__dfsan_retval_tls, align 2 +; CHECK_ORIGIN-NEXT: store i32 [[TMP1]], ptr @__dfsan_retval_origin_tls, align 4 +; CHECK_ORIGIN-NEXT: ret ptr [[E]] +; %e = getelementptr [10 x [20 x i32]], ptr %p, i32 %a, i32 %b, i32 %c ret ptr %e } - diff --git a/llvm/test/Instrumentation/DataFlowSanitizer/origin_abilist.ll b/llvm/test/Instrumentation/DataFlowSanitizer/origin_abilist.ll index 031fd1c..fbcdb3d 100644 --- a/llvm/test/Instrumentation/DataFlowSanitizer/origin_abilist.ll +++ b/llvm/test/Instrumentation/DataFlowSanitizer/origin_abilist.ll @@ -114,7 +114,7 @@ define void @call_custom_without_ret(i32 %a, i32 %b) { ; CHECK: @call_custom_without_ret.dfsan ; CHECK: [[BO:%.*]] = load i32, ptr getelementptr inbounds ([200 x i32], ptr @__dfsan_arg_origin_tls, i64 0, i64 1), align 4 ; CHECK: [[AO:%.*]] = load i32, ptr @__dfsan_arg_origin_tls, align 4 - ; CHECK: [[BS:%.*]] = load i8, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__dfsan_arg_tls to i64), i64 2) to ptr), align 2 + ; CHECK: [[BS:%.*]] = load i8, ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 2), align 2 ; CHECK: [[AS:%.*]] = load i8, ptr @__dfsan_arg_tls, align 2 ; CHECK: call void @__dfso_custom_without_ret(i32 %a, i32 %b, i8 zeroext [[AS]], i8 zeroext [[BS]], i32 zeroext [[AO]], i32 zeroext [[BO]]) ; CHECK-NEXT: ret void @@ -129,7 +129,7 @@ define i32 @call_custom_with_ret(i32 %a, i32 %b) { ; CHECK: [[BO:%.*]] = 
load i32, ptr getelementptr inbounds ([200 x i32], ptr @__dfsan_arg_origin_tls, i64 0, i64 1), align 4 ; CHECK: [[AO:%.*]] = load i32, ptr @__dfsan_arg_origin_tls, align 4 ; CHECK: %labelreturn = alloca i8, align 1 - ; CHECK: [[BS:%.*]] = load i8, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__dfsan_arg_tls to i64), i64 2) to ptr), align 2 + ; CHECK: [[BS:%.*]] = load i8, ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 2), align 2 ; CHECK: [[AS:%.*]] = load i8, ptr @__dfsan_arg_tls, align 2 ; CHECK: {{.*}} = call i32 @__dfso_custom_with_ret(i32 %a, i32 %b, i8 zeroext [[AS]], i8 zeroext [[BS]], ptr %labelreturn, i32 zeroext [[AO]], i32 zeroext [[BO]], ptr %originreturn) ; CHECK: [[RS:%.*]] = load i8, ptr %labelreturn, align 1 @@ -147,7 +147,7 @@ define void @call_custom_varg_without_ret(i32 %a, i32 %b) { ; CHECK: [[BO:%.*]] = load i32, ptr getelementptr inbounds ([200 x i32], ptr @__dfsan_arg_origin_tls, i64 0, i64 1), align 4 ; CHECK: [[AO:%.*]] = load i32, ptr @__dfsan_arg_origin_tls, align 4 ; CHECK: %labelva = alloca [1 x i8], align 1 - ; CHECK: [[BS:%.*]] = load i8, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__dfsan_arg_tls to i64), i64 2) to ptr), align 2 + ; CHECK: [[BS:%.*]] = load i8, ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 2), align 2 ; CHECK: [[AS:%.*]] = load i8, ptr @__dfsan_arg_tls, align 2 ; CHECK: [[VS0:%.*]] = getelementptr inbounds nuw [1 x i8], ptr %labelva, i32 0, i32 0 ; CHECK: store i8 [[AS]], ptr [[VS0]], align 1 @@ -170,7 +170,7 @@ define i32 @call_custom_varg_with_ret(i32 %a, i32 %b) { ; CHECK: [[AO:%.*]] = load i32, ptr @__dfsan_arg_origin_tls ; CHECK: %labelreturn = alloca i8, align 1 ; CHECK: %labelva = alloca [1 x i8], align 1 - ; CHECK: [[BS:%.*]] = load i8, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__dfsan_arg_tls to i64), i64 2) to ptr), align 2 + ; CHECK: [[BS:%.*]] = load i8, ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 2), align 2 ; CHECK: [[AS:%.*]] = load i8, ptr @__dfsan_arg_tls, align 2 ; CHECK: [[VS0:%.*]] = getelementptr inbounds nuw [1 x i8], ptr %labelva, i32 0, i32 0 ; CHECK: store i8 [[BS]], ptr [[VS0]], align 1 @@ -194,7 +194,7 @@ define i32 @call_custom_cb_with_ret(i32 %a, i32 %b) { ; CHECK: [[BO:%.*]] = load i32, ptr getelementptr inbounds ([200 x i32], ptr @__dfsan_arg_origin_tls, i64 0, i64 1), align 4 ; CHECK: [[AO:%.*]] = load i32, ptr @__dfsan_arg_origin_tls, align 4 ; CHECK: %labelreturn = alloca i8, align 1 - ; CHECK: [[BS:%.*]] = load i8, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__dfsan_arg_tls to i64), i64 2) to ptr), align 2 + ; CHECK: [[BS:%.*]] = load i8, ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 2), align 2 ; CHECK: [[AS:%.*]] = load i8, ptr @__dfsan_arg_tls, align 2 ; CHECK: {{.*}} = call i32 @__dfso_custom_cb_with_ret(ptr @cb_with_ret.dfsan, i32 %a, i32 %b, i8 zeroext 0, i8 zeroext [[AS]], i8 zeroext [[BS]], ptr %labelreturn, i32 zeroext 0, i32 zeroext [[AO]], i32 zeroext [[BO]], ptr %originreturn) ; CHECK: [[RS:%.*]] = load i8, ptr %labelreturn, align 1 @@ -210,7 +210,7 @@ define void @call_custom_cb_without_ret(i32 %a, i32 %b) { ; CHECK-LABEL: @call_custom_cb_without_ret.dfsan ; CHECK: [[BO:%.*]] = load i32, ptr getelementptr inbounds ([200 x i32], ptr @__dfsan_arg_origin_tls, i64 0, i64 1), align 4 ; CHECK: [[AO:%.*]] = load i32, ptr @__dfsan_arg_origin_tls, align 4 - ; CHECK: [[BS:%.*]] = load i8, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__dfsan_arg_tls to i64), i64 2) to ptr), align 2 + ; CHECK: [[BS:%.*]] = load i8, ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 2), align 2 ; CHECK: [[AS:%.*]] = load 
i8, ptr @__dfsan_arg_tls, align 2 ; CHECK: call void @__dfso_custom_cb_without_ret(ptr @cb_without_ret.dfsan, i32 %a, i32 %b, i8 zeroext 0, i8 zeroext [[AS]], i8 zeroext [[BS]], i32 zeroext 0, i32 zeroext [[AO]], i32 zeroext [[BO]]) ; CHECK-NEXT: ret void @@ -228,7 +228,7 @@ define void @call_custom_cb_without_ret(i32 %a, i32 %b) { ; CHECK: define linkonce_odr void @"dfso$custom_without_ret"(i32 %0, i32 %1) ; CHECK: [[BO:%.*]] = load i32, ptr getelementptr inbounds ([200 x i32], ptr @__dfsan_arg_origin_tls, i64 0, i64 1), align 4 ; CHECK-NEXT: [[AO:%.*]] = load i32, ptr @__dfsan_arg_origin_tls, align 4 -; CHECK-NEXT: [[BS:%.*]] = load i8, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__dfsan_arg_tls to i64), i64 2) to ptr), align 2 +; CHECK-NEXT: [[BS:%.*]] = load i8, ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 2), align 2 ; CHECK-NEXT: [[AS:%.*]] = load i8, ptr @__dfsan_arg_tls, align 2 ; CHECK-NEXT: call void @__dfso_custom_without_ret(i32 %0, i32 %1, i8 zeroext [[AS]], i8 zeroext [[BS]], i32 zeroext [[AO]], i32 zeroext [[BO]]) ; CHECK-NEXT: ret void @@ -238,7 +238,7 @@ define void @call_custom_cb_without_ret(i32 %a, i32 %b) { ; CHECK-NEXT: [[BO:%.*]] = load i32, ptr getelementptr inbounds ([200 x i32], ptr @__dfsan_arg_origin_tls, i64 0, i64 1), align 4 ; CHECK-NEXT: [[AO:%.*]] = load i32, ptr @__dfsan_arg_origin_tls, align 4 ; CHECK-NEXT: %labelreturn = alloca i8, align 1 -; CHECK-NEXT: [[BS:%.*]] = load i8, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__dfsan_arg_tls to i64), i64 2) to ptr), align 2 +; CHECK-NEXT: [[BS:%.*]] = load i8, ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 2), align 2 ; CHECK-NEXT: [[AS:%.*]] = load i8, ptr @__dfsan_arg_tls, align 2 ; CHECK-NEXT: [[R:%.*]] = call i32 @__dfso_custom_with_ret(i32 %0, i32 %1, i8 zeroext [[AS]], i8 zeroext [[BS]], ptr %labelreturn, i32 zeroext [[AO]], i32 zeroext [[BO]], ptr %originreturn) ; CHECK-NEXT: [[RS:%.*]] = load i8, ptr %labelreturn, align 1 @@ -261,8 +261,8 @@ define void @call_custom_cb_without_ret(i32 %a, i32 %b) { ; CHECK-NEXT: [[AO:%.*]] = load i32, ptr getelementptr inbounds ([200 x i32], ptr @__dfsan_arg_origin_tls, i64 0, i64 1), align 4 ; CHECK-NEXT: [[CO:%.*]] = load i32, ptr @__dfsan_arg_origin_tls, align 4 ; CHECK-NEXT: %labelreturn = alloca i8, align 1 -; CHECK-NEXT: [[BS:%.*]] = load i8, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__dfsan_arg_tls to i64), i64 4) to ptr), align 2 -; CHECK-NEXT: [[AS:%.*]] = load i8, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__dfsan_arg_tls to i64), i64 2) to ptr), align 2 +; CHECK-NEXT: [[BS:%.*]] = load i8, ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 4), align 2 +; CHECK-NEXT: [[AS:%.*]] = load i8, ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 2), align 2 ; CHECK-NEXT: [[CS:%.*]] = load i8, ptr @__dfsan_arg_tls, align 2 ; CHECK-NEXT: [[R:%.*]] = call i32 @__dfso_custom_cb_with_ret(ptr %0, i32 %1, i32 %2, i8 zeroext [[CS]], i8 zeroext [[AS]], i8 zeroext [[BS]], ptr %labelreturn, i32 zeroext [[CO]], i32 zeroext [[AO]], i32 zeroext [[BO]], ptr %originreturn) ; CHECK-NEXT: [[RS:%.*]] = load i8, ptr %labelreturn, align 1 @@ -275,8 +275,8 @@ define void @call_custom_cb_without_ret(i32 %a, i32 %b) { ; CHECK: [[BO:%.*]] = load i32, ptr getelementptr inbounds ([200 x i32], ptr @__dfsan_arg_origin_tls, i64 0, i64 2), align 4 ; CHECK-NEXT: [[AO:%.*]] = load i32, ptr getelementptr inbounds ([200 x i32], ptr @__dfsan_arg_origin_tls, i64 0, i64 1), align 4 ; CHECK-NEXT: [[CO:%.*]] = load i32, ptr @__dfsan_arg_origin_tls, align 4 -; CHECK-NEXT: [[BS:%.*]] = load i8, ptr inttoptr 
(i64 add (i64 ptrtoint (ptr @__dfsan_arg_tls to i64), i64 4) to ptr), align 2 -; CHECK-NEXT: [[AS:%.*]] = load i8, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__dfsan_arg_tls to i64), i64 2) to ptr), align 2 +; CHECK-NEXT: [[BS:%.*]] = load i8, ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 4), align 2 +; CHECK-NEXT: [[AS:%.*]] = load i8, ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 2), align 2 ; CHECK-NEXT: [[CS:%.*]] = load i8, ptr @__dfsan_arg_tls, align 2 ; CHECK-NEXT: call void @__dfso_custom_cb_without_ret(ptr %0, i32 %1, i32 %2, i8 zeroext [[CS]], i8 zeroext [[AS]], i8 zeroext [[BS]], i32 zeroext [[CO]], i32 zeroext [[AO]], i32 zeroext [[BO]]) ; CHECK-NEXT: ret void diff --git a/llvm/test/Instrumentation/DataFlowSanitizer/origin_cached_shadows.ll b/llvm/test/Instrumentation/DataFlowSanitizer/origin_cached_shadows.ll index cb9a306e..194a193 100644 --- a/llvm/test/Instrumentation/DataFlowSanitizer/origin_cached_shadows.ll +++ b/llvm/test/Instrumentation/DataFlowSanitizer/origin_cached_shadows.ll @@ -1,4 +1,5 @@ -; RUN: opt < %s -passes=dfsan -dfsan-track-origins=1 -S | FileCheck %s +; NOTE: Assertions have been autogenerated by utils/update_test_checks.py UTC_ARGS: --version 6 +; RUN: opt < %s -passes=dfsan -dfsan-track-origins=1 -dfsan-add-global-name-suffix=0 -S | FileCheck %s ; ; %i13 and %i15 have the same key in shadow cache. They should not reuse the same ; shadow because their blocks do not dominate each other. Origin tracking @@ -7,43 +8,129 @@ target datalayout = "e-p:64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f32:32:32-f64:64:64-v64:64:64-v128:128:128-a0:0:64-s0:64:64-f80:128:128-n8:16:32:64-S128" target triple = "x86_64-unknown-linux-gnu" -; CHECK: @__dfsan_arg_tls = external thread_local(initialexec) global [[TLS_ARR:\[100 x i64\]]] define void @cached_shadows(double %arg) { - ; CHECK: @cached_shadows.dfsan - ; CHECK: [[AO:%.*]] = load i32, ptr @__dfsan_arg_origin_tls, align - ; CHECK: [[AS:%.*]] = load i8, ptr @__dfsan_arg_tls, align [[ALIGN:2]] - ; CHECK: [[L1:.+]]: - ; CHECK: {{.*}} = phi i8 - ; CHECK: {{.*}} = phi i32 - ; CHECK: {{.*}} = phi double [ 3.000000e+00 - ; CHECK: [[S_L1:%.*]] = phi i8 [ 0, %[[L0:.*]] ], [ [[S_L7:%.*]], %[[L7:.*]] ] - ; CHECK: [[O_L1:%.*]] = phi i32 [ 0, %[[L0]] ], [ [[O_L7:%.*]], %[[L7]] ] - ; CHECK: [[V_L1:%.*]] = phi double [ 4.000000e+00, %[[L0]] ], [ [[V_L7:%.*]], %[[L7]] ] - ; CHECK: br i1 {{%.+}}, label %[[L2:.*]], label %[[L4:.*]] - ; CHECK: [[L2]]: - ; CHECK: br i1 {{%.+}}, label %[[L3:.+]], label %[[L7]] - ; CHECK: [[L3]]: - ; CHECK: [[S_L3:%.*]] = or i8 - ; CHECK: [[AS_NE_L3:%.*]] = icmp ne i8 [[AS]], 0 - ; CHECK: [[O_L3:%.*]] = select i1 [[AS_NE_L3]], i32 %{{[0-9]+}}, i32 [[O_L1]] - ; CHECK: [[V_L3:%.*]] = fsub double [[V_L1]], %{{.+}} - ; CHECK: br label %[[L7]] - ; CHECK: [[L4]]: - ; CHECK: br i1 %_dfscmp, label %[[L5:.+]], label %[[L6:.+]], - ; CHECK: [[L5]]: - ; CHECK: br label %[[L6]] - ; CHECK: [[L6]]: - ; CHECK: [[S_L6:%.*]] = or i8 - ; CHECK: [[AS_NE_L6:%.*]] = icmp ne i8 [[AS]], 0 - ; CHECK: [[O_L6:%.*]] = select i1 [[AS_NE_L6]], i32 [[AO]], i32 [[O_L1]] - ; CHECK: [[V_L6:%.*]] = fadd double [[V_L1]], %{{.+}} - ; CHECK: br label %[[L7]] - ; CHECK: [[L7]]: - ; CHECK: [[S_L7]] = phi i8 [ [[S_L3]], %[[L3]] ], [ [[S_L1]], %[[L2]] ], [ [[S_L6]], %[[L6]] ] - ; CHECK: [[O_L7]] = phi i32 [ [[O_L3]], %[[L3]] ], [ [[O_L1]], %[[L2]] ], [ [[O_L6]], %[[L6]] ] - ; CHECK: [[V_L7]] = phi double [ [[V_L3]], %[[L3]] ], [ [[V_L1]], %[[L2]] ], [ [[V_L6]], %[[L6]] ] - ; CHECK: br i1 %{{.+}}, label %[[L1]], label %[[L8:.+]] - ; 
CHECK: [[L8]]: +; CHECK-LABEL: define void @cached_shadows( +; CHECK-SAME: double [[ARG:%.*]]) { +; CHECK-NEXT: [[BB:.*]]: +; CHECK-NEXT: [[TMP0:%.*]] = load i32, ptr @__dfsan_arg_origin_tls, align 4 +; CHECK-NEXT: [[TMP1:%.*]] = load i8, ptr @__dfsan_arg_tls, align 2 +; CHECK-NEXT: [[I:%.*]] = alloca double, align 8 +; CHECK-NEXT: [[I1:%.*]] = alloca double, align 8 +; CHECK-NEXT: [[I2:%.*]] = bitcast ptr [[I]] to ptr +; CHECK-NEXT: [[TMP2:%.*]] = ptrtoint ptr [[I]] to i64 +; CHECK-NEXT: [[TMP3:%.*]] = xor i64 [[TMP2]], 87960930222080 +; CHECK-NEXT: [[TMP4:%.*]] = inttoptr i64 [[TMP3]] to ptr +; CHECK-NEXT: store i64 0, ptr [[TMP4]], align 1 +; CHECK-NEXT: store volatile double 1.000000e+00, ptr [[I]], align 8 +; CHECK-NEXT: [[I3:%.*]] = bitcast ptr [[I1]] to ptr +; CHECK-NEXT: [[TMP5:%.*]] = ptrtoint ptr [[I1]] to i64 +; CHECK-NEXT: [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080 +; CHECK-NEXT: [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr +; CHECK-NEXT: store i64 0, ptr [[TMP7]], align 1 +; CHECK-NEXT: store volatile double 2.000000e+00, ptr [[I1]], align 8 +; CHECK-NEXT: br label %[[BB4:.*]] +; CHECK: [[BB4]]: +; CHECK-NEXT: [[TMP8:%.*]] = phi i8 [ 0, %[[BB]] ], [ [[TMP76:%.*]], %[[BB16:.*]] ] +; CHECK-NEXT: [[TMP9:%.*]] = phi i32 [ 0, %[[BB]] ], [ [[TMP77:%.*]], %[[BB16]] ] +; CHECK-NEXT: [[I5:%.*]] = phi double [ 3.000000e+00, %[[BB]] ], [ [[I17:%.*]], %[[BB16]] ] +; CHECK-NEXT: [[TMP10:%.*]] = phi i8 [ 0, %[[BB]] ], [ [[TMP78:%.*]], %[[BB16]] ] +; CHECK-NEXT: [[TMP11:%.*]] = phi i32 [ 0, %[[BB]] ], [ [[TMP79:%.*]], %[[BB16]] ] +; CHECK-NEXT: [[I6:%.*]] = phi double [ 4.000000e+00, %[[BB]] ], [ [[I18:%.*]], %[[BB16]] ] +; CHECK-NEXT: [[TMP12:%.*]] = ptrtoint ptr [[I1]] to i64 +; CHECK-NEXT: [[TMP13:%.*]] = xor i64 [[TMP12]], 87960930222080 +; CHECK-NEXT: [[TMP14:%.*]] = inttoptr i64 [[TMP13]] to ptr +; CHECK-NEXT: [[TMP15:%.*]] = add i64 [[TMP13]], 17592186044416 +; CHECK-NEXT: [[TMP16:%.*]] = inttoptr i64 [[TMP15]] to ptr +; CHECK-NEXT: [[TMP17:%.*]] = load i32, ptr [[TMP16]], align 8 +; CHECK-NEXT: [[TMP18:%.*]] = load i64, ptr [[TMP14]], align 1 +; CHECK-NEXT: [[TMP19:%.*]] = shl i64 [[TMP18]], 32 +; CHECK-NEXT: [[TMP20:%.*]] = getelementptr i32, ptr [[TMP16]], i64 1 +; CHECK-NEXT: [[TMP21:%.*]] = load i32, ptr [[TMP20]], align 8 +; CHECK-NEXT: [[TMP22:%.*]] = lshr i64 [[TMP18]], 32 +; CHECK-NEXT: [[TMP23:%.*]] = or i64 [[TMP18]], [[TMP22]] +; CHECK-NEXT: [[TMP24:%.*]] = lshr i64 [[TMP23]], 16 +; CHECK-NEXT: [[TMP25:%.*]] = or i64 [[TMP23]], [[TMP24]] +; CHECK-NEXT: [[TMP26:%.*]] = lshr i64 [[TMP25]], 8 +; CHECK-NEXT: [[TMP27:%.*]] = or i64 [[TMP25]], [[TMP26]] +; CHECK-NEXT: [[TMP28:%.*]] = trunc i64 [[TMP27]] to i8 +; CHECK-NEXT: [[TMP29:%.*]] = icmp ne i64 [[TMP19]], 0 +; CHECK-NEXT: [[TMP30:%.*]] = select i1 [[TMP29]], i32 [[TMP17]], i32 [[TMP21]] +; CHECK-NEXT: [[I7:%.*]] = load volatile double, ptr [[I1]], align 8 +; CHECK-NEXT: [[I8:%.*]] = fcmp une double [[I7]], 0.000000e+00 +; CHECK-NEXT: [[TMP31:%.*]] = ptrtoint ptr [[I1]] to i64 +; CHECK-NEXT: [[TMP32:%.*]] = xor i64 [[TMP31]], 87960930222080 +; CHECK-NEXT: [[TMP33:%.*]] = inttoptr i64 [[TMP32]] to ptr +; CHECK-NEXT: [[TMP34:%.*]] = add i64 [[TMP32]], 17592186044416 +; CHECK-NEXT: [[TMP35:%.*]] = inttoptr i64 [[TMP34]] to ptr +; CHECK-NEXT: [[TMP36:%.*]] = load i32, ptr [[TMP35]], align 8 +; CHECK-NEXT: [[TMP37:%.*]] = load i64, ptr [[TMP33]], align 1 +; CHECK-NEXT: [[TMP38:%.*]] = shl i64 [[TMP37]], 32 +; CHECK-NEXT: [[TMP39:%.*]] = getelementptr i32, ptr [[TMP35]], i64 1 +; CHECK-NEXT: [[TMP40:%.*]] = load i32, ptr 
[[TMP39]], align 8 +; CHECK-NEXT: [[TMP41:%.*]] = lshr i64 [[TMP37]], 32 +; CHECK-NEXT: [[TMP42:%.*]] = or i64 [[TMP37]], [[TMP41]] +; CHECK-NEXT: [[TMP43:%.*]] = lshr i64 [[TMP42]], 16 +; CHECK-NEXT: [[TMP44:%.*]] = or i64 [[TMP42]], [[TMP43]] +; CHECK-NEXT: [[TMP45:%.*]] = lshr i64 [[TMP44]], 8 +; CHECK-NEXT: [[TMP46:%.*]] = or i64 [[TMP44]], [[TMP45]] +; CHECK-NEXT: [[TMP47:%.*]] = trunc i64 [[TMP46]] to i8 +; CHECK-NEXT: [[TMP48:%.*]] = icmp ne i64 [[TMP38]], 0 +; CHECK-NEXT: [[TMP49:%.*]] = select i1 [[TMP48]], i32 [[TMP36]], i32 [[TMP40]] +; CHECK-NEXT: [[I9:%.*]] = load volatile double, ptr [[I1]], align 8 +; CHECK-NEXT: br i1 [[I8]], label %[[BB10:.*]], label %[[BB14:.*]] +; CHECK: [[BB10]]: +; CHECK-NEXT: [[I11:%.*]] = fcmp une double [[I9]], 0.000000e+00 +; CHECK-NEXT: br i1 [[I11]], label %[[BB12:.*]], label %[[BB16]] +; CHECK: [[BB12]]: +; CHECK-NEXT: [[TMP50:%.*]] = or i8 [[TMP10]], [[TMP1]] +; CHECK-NEXT: [[TMP51:%.*]] = icmp ne i8 [[TMP1]], 0 +; CHECK-NEXT: [[TMP52:%.*]] = select i1 [[TMP51]], i32 [[TMP0]], i32 [[TMP11]] +; CHECK-NEXT: [[I13:%.*]] = fsub double [[I6]], [[ARG]] +; CHECK-NEXT: br label %[[BB16]] +; CHECK: [[BB14]]: +; CHECK-NEXT: [[TMP53:%.*]] = ptrtoint ptr [[I]] to i64 +; CHECK-NEXT: [[TMP54:%.*]] = xor i64 [[TMP53]], 87960930222080 +; CHECK-NEXT: [[TMP55:%.*]] = inttoptr i64 [[TMP54]] to ptr +; CHECK-NEXT: [[TMP56:%.*]] = add i64 [[TMP54]], 17592186044416 +; CHECK-NEXT: [[TMP57:%.*]] = inttoptr i64 [[TMP56]] to ptr +; CHECK-NEXT: [[TMP58:%.*]] = insertelement <8 x i8> poison, i8 [[TMP47]], i32 0 +; CHECK-NEXT: [[TMP59:%.*]] = insertelement <8 x i8> [[TMP58]], i8 [[TMP47]], i32 1 +; CHECK-NEXT: [[TMP60:%.*]] = insertelement <8 x i8> [[TMP59]], i8 [[TMP47]], i32 2 +; CHECK-NEXT: [[TMP61:%.*]] = insertelement <8 x i8> [[TMP60]], i8 [[TMP47]], i32 3 +; CHECK-NEXT: [[TMP62:%.*]] = insertelement <8 x i8> [[TMP61]], i8 [[TMP47]], i32 4 +; CHECK-NEXT: [[TMP63:%.*]] = insertelement <8 x i8> [[TMP62]], i8 [[TMP47]], i32 5 +; CHECK-NEXT: [[TMP64:%.*]] = insertelement <8 x i8> [[TMP63]], i8 [[TMP47]], i32 6 +; CHECK-NEXT: [[TMP65:%.*]] = insertelement <8 x i8> [[TMP64]], i8 [[TMP47]], i32 7 +; CHECK-NEXT: [[TMP66:%.*]] = getelementptr <8 x i8>, ptr [[TMP55]], i32 0 +; CHECK-NEXT: store <8 x i8> [[TMP65]], ptr [[TMP66]], align 1 +; CHECK-NEXT: [[_DFSCMP:%.*]] = icmp ne i8 [[TMP47]], 0 +; CHECK-NEXT: br i1 [[_DFSCMP]], label %[[BB67:.*]], label %[[BB72:.*]], !prof [[PROF1:![0-9]+]] +; CHECK: [[BB67]]: +; CHECK-NEXT: [[TMP68:%.*]] = call i32 @__dfsan_chain_origin(i32 [[TMP49]]) +; CHECK-NEXT: [[TMP69:%.*]] = zext i32 [[TMP68]] to i64 +; CHECK-NEXT: [[TMP70:%.*]] = shl i64 [[TMP69]], 32 +; CHECK-NEXT: [[TMP71:%.*]] = or i64 [[TMP69]], [[TMP70]] +; CHECK-NEXT: store i64 [[TMP71]], ptr [[TMP57]], align 8 +; CHECK-NEXT: br label %[[BB72]] +; CHECK: [[BB72]]: +; CHECK-NEXT: store volatile double [[I9]], ptr [[I]], align 8 +; CHECK-NEXT: [[TMP73:%.*]] = or i8 [[TMP10]], [[TMP1]] +; CHECK-NEXT: [[TMP74:%.*]] = icmp ne i8 [[TMP1]], 0 +; CHECK-NEXT: [[TMP75:%.*]] = select i1 [[TMP74]], i32 [[TMP0]], i32 [[TMP11]] +; CHECK-NEXT: [[I15:%.*]] = fadd double [[I6]], [[ARG]] +; CHECK-NEXT: br label %[[BB16]] +; CHECK: [[BB16]]: +; CHECK-NEXT: [[TMP76]] = phi i8 [ [[TMP10]], %[[BB12]] ], [ [[TMP8]], %[[BB10]] ], [ [[TMP10]], %[[BB72]] ] +; CHECK-NEXT: [[TMP77]] = phi i32 [ [[TMP11]], %[[BB12]] ], [ [[TMP9]], %[[BB10]] ], [ [[TMP11]], %[[BB72]] ] +; CHECK-NEXT: [[I17]] = phi double [ [[I6]], %[[BB12]] ], [ [[I5]], %[[BB10]] ], [ [[I6]], %[[BB72]] ] +; CHECK-NEXT: [[TMP78]] = phi i8 [ 
[[TMP50]], %[[BB12]] ], [ [[TMP10]], %[[BB10]] ], [ [[TMP73]], %[[BB72]] ] +; CHECK-NEXT: [[TMP79]] = phi i32 [ [[TMP52]], %[[BB12]] ], [ [[TMP11]], %[[BB10]] ], [ [[TMP75]], %[[BB72]] ] +; CHECK-NEXT: [[I18]] = phi double [ [[I13]], %[[BB12]] ], [ [[I6]], %[[BB10]] ], [ [[I15]], %[[BB72]] ] +; CHECK-NEXT: [[I19:%.*]] = fcmp olt double [[I17]], 9.900000e+01 +; CHECK-NEXT: br i1 [[I19]], label %[[BB4]], label %[[BB20:.*]] +; CHECK: [[BB20]]: +; CHECK-NEXT: ret void +; bb: %i = alloca double, align 8 %i1 = alloca double, align 8 @@ -83,3 +170,6 @@ bb16: ; preds = %bb14, %bb12, %bb10 bb20: ; preds = %bb16 ret void } +;. +; CHECK: [[PROF1]] = !{!"branch_weights", i32 1, i32 1048575} +;. diff --git a/llvm/test/Instrumentation/DataFlowSanitizer/origin_call.ll b/llvm/test/Instrumentation/DataFlowSanitizer/origin_call.ll index 5ee9927..9e8d015 100644 --- a/llvm/test/Instrumentation/DataFlowSanitizer/origin_call.ll +++ b/llvm/test/Instrumentation/DataFlowSanitizer/origin_call.ll @@ -37,8 +37,8 @@ i1 %a200 define i1 @param_overflow(i1 %a) { ; CHECK: @param_overflow.dfsan ; CHECK: store i32 %1, ptr getelementptr inbounds ([200 x i32], ptr @__dfsan_arg_origin_tls, i64 0, i64 199), align 4 - ; CHECK-NEXT: store i8 %2, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__dfsan_arg_tls to i64), i64 398) to ptr), align 2 - ; CHECK-NEXT: store i8 %2, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__dfsan_arg_tls to i64), i64 400) to ptr), align 2 + ; CHECK-NEXT: store i8 %2, ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 398), align 2 + ; CHECK-NEXT: store i8 %2, ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 400), align 2 ; CHECK-NEXT: %r = call i1 @arg_overflow.dfsan ; CHECK: %_dfsret_o = load i32, ptr @__dfsan_retval_origin_tls, align 4 ; CHECK: store i32 %_dfsret_o, ptr @__dfsan_retval_origin_tls, align 4 diff --git a/llvm/test/Instrumentation/DataFlowSanitizer/origin_load.ll b/llvm/test/Instrumentation/DataFlowSanitizer/origin_load.ll index 0c84c79..a0c642a 100644 --- a/llvm/test/Instrumentation/DataFlowSanitizer/origin_load.ll +++ b/llvm/test/Instrumentation/DataFlowSanitizer/origin_load.ll @@ -93,7 +93,7 @@ define i16 @load16(i1 %i, ptr %p) { ; CHECK-LABEL: @load16.dfsan ; COMBINE_LOAD_PTR-NEXT: %[[#PO:]] = load i32, ptr getelementptr inbounds ([200 x i32], ptr @__dfsan_arg_origin_tls, i64 0, i64 1), align 4 - ; COMBINE_LOAD_PTR-NEXT: %[[#PS:]] = load i8, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__dfsan_arg_tls to i64), i64 2) to ptr), align [[ALIGN]] + ; COMBINE_LOAD_PTR-NEXT: %[[#PS:]] = load i8, ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 2), align [[ALIGN]] ; CHECK-NEXT: %[[#INTP:]] = ptrtoint ptr %p to i64 ; CHECK-NEXT: %[[#SHADOW_OFFSET:]] = xor i64 %[[#INTP]], [[#MASK]] diff --git a/llvm/test/Instrumentation/DataFlowSanitizer/origin_mem_intrinsic.ll b/llvm/test/Instrumentation/DataFlowSanitizer/origin_mem_intrinsic.ll index f8adb01..f4f3cb5 100644 --- a/llvm/test/Instrumentation/DataFlowSanitizer/origin_mem_intrinsic.ll +++ b/llvm/test/Instrumentation/DataFlowSanitizer/origin_mem_intrinsic.ll @@ -1,4 +1,5 @@ -; RUN: opt < %s -passes=dfsan -dfsan-track-origins=1 -S | FileCheck %s +; NOTE: Assertions have been autogenerated by utils/update_test_checks.py UTC_ARGS: --version 6 +; RUN: opt < %s -passes=dfsan -dfsan-track-origins=1 -dfsan-add-global-name-suffix=0 -S | FileCheck %s target datalayout = "e-p:64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f32:32:32-f64:64:64-v64:64:64-v128:128:128-a0:0:64-s0:64:64-f80:128:128-n8:16:32:64-S128" target triple = "x86_64-unknown-linux-gnu" @@ -7,32 
+8,54 @@ declare void @llvm.memmove.p0.p0.i32(ptr, ptr, i32, i1) declare void @llvm.memset.p0.i64(ptr nocapture, i8, i64, i1) define void @memcpy(ptr %d, ptr %s, i32 %l) { - ; CHECK: @memcpy.dfsan - ; CHECK: [[L64:%.*]] = zext i32 %l to i64 - ; CHECK: call void @__dfsan_mem_origin_transfer(ptr %d, ptr %s, i64 [[L64]]) - ; CHECK: call void @llvm.memcpy.p0.p0.i32(ptr align 1 {{.*}}, ptr align 1 {{.*}}, i32 {{.*}}, i1 false) - ; CHECK: call void @llvm.memcpy.p0.p0.i32(ptr %d, ptr %s, i32 %l, i1 false) - +; CHECK-LABEL: define void @memcpy( +; CHECK-SAME: ptr [[D:%.*]], ptr [[S:%.*]], i32 [[L:%.*]]) { +; CHECK-NEXT: [[TMP1:%.*]] = zext i32 [[L]] to i64 +; CHECK-NEXT: call void @__dfsan_mem_origin_transfer(ptr [[D]], ptr [[S]], i64 [[TMP1]]) +; CHECK-NEXT: [[TMP2:%.*]] = ptrtoint ptr [[D]] to i64 +; CHECK-NEXT: [[TMP3:%.*]] = xor i64 [[TMP2]], 87960930222080 +; CHECK-NEXT: [[TMP4:%.*]] = inttoptr i64 [[TMP3]] to ptr +; CHECK-NEXT: [[TMP5:%.*]] = ptrtoint ptr [[S]] to i64 +; CHECK-NEXT: [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080 +; CHECK-NEXT: [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr +; CHECK-NEXT: [[TMP8:%.*]] = mul i32 [[L]], 1 +; CHECK-NEXT: call void @llvm.memcpy.p0.p0.i32(ptr align 1 [[TMP4]], ptr align 1 [[TMP7]], i32 [[TMP8]], i1 false) +; CHECK-NEXT: call void @llvm.memcpy.p0.p0.i32(ptr [[D]], ptr [[S]], i32 [[L]], i1 false) +; CHECK-NEXT: ret void +; call void @llvm.memcpy.p0.p0.i32(ptr %d, ptr %s, i32 %l, i1 0) ret void } define void @memmove(ptr %d, ptr %s, i32 %l) { - ; CHECK: @memmove.dfsan - ; CHECK: [[L64:%.*]] = zext i32 %l to i64 - ; CHECK: call void @__dfsan_mem_origin_transfer(ptr %d, ptr %s, i64 [[L64]]) - ; CHECK: call void @llvm.memmove.p0.p0.i32(ptr align 1 {{.*}}, ptr align 1 {{.*}}, i32 {{.*}}, i1 false) - ; CHECK: call void @llvm.memmove.p0.p0.i32(ptr %d, ptr %s, i32 %l, i1 false) - +; CHECK-LABEL: define void @memmove( +; CHECK-SAME: ptr [[D:%.*]], ptr [[S:%.*]], i32 [[L:%.*]]) { +; CHECK-NEXT: [[TMP1:%.*]] = zext i32 [[L]] to i64 +; CHECK-NEXT: call void @__dfsan_mem_origin_transfer(ptr [[D]], ptr [[S]], i64 [[TMP1]]) +; CHECK-NEXT: [[TMP2:%.*]] = ptrtoint ptr [[D]] to i64 +; CHECK-NEXT: [[TMP3:%.*]] = xor i64 [[TMP2]], 87960930222080 +; CHECK-NEXT: [[TMP4:%.*]] = inttoptr i64 [[TMP3]] to ptr +; CHECK-NEXT: [[TMP5:%.*]] = ptrtoint ptr [[S]] to i64 +; CHECK-NEXT: [[TMP6:%.*]] = xor i64 [[TMP5]], 87960930222080 +; CHECK-NEXT: [[TMP7:%.*]] = inttoptr i64 [[TMP6]] to ptr +; CHECK-NEXT: [[TMP8:%.*]] = mul i32 [[L]], 1 +; CHECK-NEXT: call void @llvm.memmove.p0.p0.i32(ptr align 1 [[TMP4]], ptr align 1 [[TMP7]], i32 [[TMP8]], i1 false) +; CHECK-NEXT: call void @llvm.memmove.p0.p0.i32(ptr [[D]], ptr [[S]], i32 [[L]], i1 false) +; CHECK-NEXT: ret void +; call void @llvm.memmove.p0.p0.i32(ptr %d, ptr %s, i32 %l, i1 0) ret void } define void @memset(ptr %p, i8 %v) { - ; CHECK: @memset.dfsan - ; CHECK: [[O:%.*]] = load i32, ptr getelementptr inbounds ([200 x i32], ptr @__dfsan_arg_origin_tls, i64 0, i64 1), align 4 - ; CHECK: [[S:%.*]] = load i8, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__dfsan_arg_tls to i64), i64 2) to ptr), align [[ALIGN:2]] - ; CHECK: call void @__dfsan_set_label(i8 [[S]], i32 [[O]], ptr %p, i64 1) +; CHECK-LABEL: define void @memset( +; CHECK-SAME: ptr [[P:%.*]], i8 [[V:%.*]]) { +; CHECK-NEXT: [[TMP1:%.*]] = load i32, ptr getelementptr inbounds ([200 x i32], ptr @__dfsan_arg_origin_tls, i64 0, i64 1), align 4 +; CHECK-NEXT: [[TMP2:%.*]] = load i8, ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 2), align 2 +; CHECK-NEXT: call void 
@__dfsan_set_label(i8 [[TMP2]], i32 [[TMP1]], ptr [[P]], i64 1) +; CHECK-NEXT: call void @llvm.memset.p0.i64(ptr [[P]], i8 [[V]], i64 1, i1 true) +; CHECK-NEXT: ret void +; call void @llvm.memset.p0.i64(ptr %p, i8 %v, i64 1, i1 1) ret void -}
\ No newline at end of file +} diff --git a/llvm/test/Instrumentation/DataFlowSanitizer/origin_other_ops.ll b/llvm/test/Instrumentation/DataFlowSanitizer/origin_other_ops.ll index 3b10204..f409143 100644 --- a/llvm/test/Instrumentation/DataFlowSanitizer/origin_other_ops.ll +++ b/llvm/test/Instrumentation/DataFlowSanitizer/origin_other_ops.ll @@ -1,140 +1,200 @@ -; RUN: opt < %s -passes=dfsan -dfsan-track-origins=1 -S | FileCheck %s +; NOTE: Assertions have been autogenerated by utils/update_test_checks.py UTC_ARGS: --version 6 +; RUN: opt < %s -passes=dfsan -dfsan-track-origins=1 -dfsan-add-global-name-suffix=0 -S | FileCheck %s target datalayout = "e-p:64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f32:32:32-f64:64:64-v64:64:64-v128:128:128-a0:0:64-s0:64:64-f80:128:128-n8:16:32:64-S128" target triple = "x86_64-unknown-linux-gnu" -; CHECK: @__dfsan_arg_tls = external thread_local(initialexec) global [[TLS_ARR:\[100 x i64\]]] -; CHECK: @__dfsan_retval_tls = external thread_local(initialexec) global [[TLS_ARR]] define float @unop(float %f) { - ; CHECK: @unop.dfsan - ; CHECK: [[FO:%.*]] = load i32, ptr @__dfsan_arg_origin_tls, align 4 - ; CHECK: store i32 [[FO]], ptr @__dfsan_retval_origin_tls, align 4 - +; CHECK-LABEL: define float @unop( +; CHECK-SAME: float [[F:%.*]]) { +; CHECK-NEXT: [[TMP1:%.*]] = load i32, ptr @__dfsan_arg_origin_tls, align 4 +; CHECK-NEXT: [[TMP2:%.*]] = load i8, ptr @__dfsan_arg_tls, align 2 +; CHECK-NEXT: [[R:%.*]] = fneg float [[F]] +; CHECK-NEXT: store i8 [[TMP2]], ptr @__dfsan_retval_tls, align 2 +; CHECK-NEXT: store i32 [[TMP1]], ptr @__dfsan_retval_origin_tls, align 4 +; CHECK-NEXT: ret float [[R]] +; %r = fneg float %f ret float %r } define i1 @binop(i1 %a, i1 %b) { - ; CHECK: @binop.dfsan - ; CHECK: [[BO:%.*]] = load i32, ptr getelementptr inbounds ([200 x i32], ptr @__dfsan_arg_origin_tls, i64 0, i64 1), align 4 - ; CHECK: [[AO:%.*]] = load i32, ptr @__dfsan_arg_origin_tls, align 4 - ; CHECK: [[BS:%.*]] = load i8, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__dfsan_arg_tls to i64), i64 2) to ptr), align 2 - ; CHECK: [[NE:%.*]] = icmp ne i8 [[BS]], 0 - ; CHECK: [[MO:%.*]] = select i1 [[NE]], i32 [[BO]], i32 [[AO]] - ; CHECK: store i32 [[MO]], ptr @__dfsan_retval_origin_tls, align 4 - +; CHECK-LABEL: define i1 @binop( +; CHECK-SAME: i1 [[A:%.*]], i1 [[B:%.*]]) { +; CHECK-NEXT: [[TMP1:%.*]] = load i32, ptr getelementptr inbounds ([200 x i32], ptr @__dfsan_arg_origin_tls, i64 0, i64 1), align 4 +; CHECK-NEXT: [[TMP2:%.*]] = load i32, ptr @__dfsan_arg_origin_tls, align 4 +; CHECK-NEXT: [[TMP3:%.*]] = load i8, ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 2), align 2 +; CHECK-NEXT: [[TMP4:%.*]] = load i8, ptr @__dfsan_arg_tls, align 2 +; CHECK-NEXT: [[TMP5:%.*]] = or i8 [[TMP4]], [[TMP3]] +; CHECK-NEXT: [[TMP6:%.*]] = icmp ne i8 [[TMP3]], 0 +; CHECK-NEXT: [[TMP7:%.*]] = select i1 [[TMP6]], i32 [[TMP1]], i32 [[TMP2]] +; CHECK-NEXT: [[R:%.*]] = add i1 [[A]], [[B]] +; CHECK-NEXT: store i8 [[TMP5]], ptr @__dfsan_retval_tls, align 2 +; CHECK-NEXT: store i32 [[TMP7]], ptr @__dfsan_retval_origin_tls, align 4 +; CHECK-NEXT: ret i1 [[R]] +; %r = add i1 %a, %b ret i1 %r } define i8 @castop(ptr %p) { - ; CHECK: @castop.dfsan - ; CHECK: [[PO:%.*]] = load i32, ptr @__dfsan_arg_origin_tls, align 4 - ; CHECK: store i32 [[PO]], ptr @__dfsan_retval_origin_tls, align 4 - +; CHECK-LABEL: define i8 @castop( +; CHECK-SAME: ptr [[P:%.*]]) { +; CHECK-NEXT: [[TMP1:%.*]] = load i32, ptr @__dfsan_arg_origin_tls, align 4 +; CHECK-NEXT: [[TMP2:%.*]] = load i8, ptr @__dfsan_arg_tls, 
align 2 +; CHECK-NEXT: [[R:%.*]] = ptrtoint ptr [[P]] to i8 +; CHECK-NEXT: store i8 [[TMP2]], ptr @__dfsan_retval_tls, align 2 +; CHECK-NEXT: store i32 [[TMP1]], ptr @__dfsan_retval_origin_tls, align 4 +; CHECK-NEXT: ret i8 [[R]] +; %r = ptrtoint ptr %p to i8 ret i8 %r } define i1 @cmpop(i1 %a, i1 %b) { - ; CHECK: @cmpop.dfsan - ; CHECK: [[BO:%.*]] = load i32, ptr getelementptr inbounds ([200 x i32], ptr @__dfsan_arg_origin_tls, i64 0, i64 1), align 4 - ; CHECK: [[AO:%.*]] = load i32, ptr @__dfsan_arg_origin_tls, align 4 - ; CHECK: [[BS:%.*]] = load i8, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__dfsan_arg_tls to i64), i64 2) to ptr), align 2 - ; CHECK: [[NE:%.*]] = icmp ne i8 [[BS]], 0 - ; CHECK: [[MO:%.*]] = select i1 [[NE]], i32 [[BO]], i32 [[AO]] - ; CHECK: store i32 [[MO]], ptr @__dfsan_retval_origin_tls, align 4 - +; CHECK-LABEL: define i1 @cmpop( +; CHECK-SAME: i1 [[A:%.*]], i1 [[B:%.*]]) { +; CHECK-NEXT: [[TMP1:%.*]] = load i32, ptr getelementptr inbounds ([200 x i32], ptr @__dfsan_arg_origin_tls, i64 0, i64 1), align 4 +; CHECK-NEXT: [[TMP2:%.*]] = load i32, ptr @__dfsan_arg_origin_tls, align 4 +; CHECK-NEXT: [[TMP3:%.*]] = load i8, ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 2), align 2 +; CHECK-NEXT: [[TMP4:%.*]] = load i8, ptr @__dfsan_arg_tls, align 2 +; CHECK-NEXT: [[TMP5:%.*]] = or i8 [[TMP4]], [[TMP3]] +; CHECK-NEXT: [[TMP6:%.*]] = icmp ne i8 [[TMP3]], 0 +; CHECK-NEXT: [[TMP7:%.*]] = select i1 [[TMP6]], i32 [[TMP1]], i32 [[TMP2]] +; CHECK-NEXT: [[R:%.*]] = icmp eq i1 [[A]], [[B]] +; CHECK-NEXT: store i8 [[TMP5]], ptr @__dfsan_retval_tls, align 2 +; CHECK-NEXT: store i32 [[TMP7]], ptr @__dfsan_retval_origin_tls, align 4 +; CHECK-NEXT: ret i1 [[R]] +; %r = icmp eq i1 %a, %b ret i1 %r } define ptr @gepop(ptr %p, i32 %a, i32 %b, i32 %c) { - ; CHECK: @gepop.dfsan - ; CHECK: [[CO:%.*]] = load i32, ptr getelementptr inbounds ([200 x i32], ptr @__dfsan_arg_origin_tls, i64 0, i64 3), align 4 - ; CHECK: [[BO:%.*]] = load i32, ptr getelementptr inbounds ([200 x i32], ptr @__dfsan_arg_origin_tls, i64 0, i64 2), align 4 - ; CHECK: [[AO:%.*]] = load i32, ptr getelementptr inbounds ([200 x i32], ptr @__dfsan_arg_origin_tls, i64 0, i64 1), align 4 - ; CHECK: [[PO:%.*]] = load i32, ptr @__dfsan_arg_origin_tls, align 4 - ; CHECK: [[CS:%.*]] = load i8, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__dfsan_arg_tls to i64), i64 6) to ptr), align 2 - ; CHECK: [[BS:%.*]] = load i8, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__dfsan_arg_tls to i64), i64 4) to ptr), align 2 - ; CHECK: [[AS:%.*]] = load i8, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__dfsan_arg_tls to i64), i64 2) to ptr), align 2 - ; CHECK: [[AS_NE:%.*]] = icmp ne i8 [[AS]], 0 - ; CHECK: [[APO:%.*]] = select i1 [[AS_NE]], i32 [[AO]], i32 [[PO]] - ; CHECK: [[BS_NE:%.*]] = icmp ne i8 [[BS]], 0 - ; CHECK: [[ABPO:%.*]] = select i1 [[BS_NE]], i32 [[BO]], i32 [[APO]] - ; CHECK: [[CS_NE:%.*]] = icmp ne i8 [[CS]], 0 - ; CHECK: [[ABCPO:%.*]] = select i1 [[CS_NE]], i32 [[CO]], i32 [[ABPO]] - ; CHECK: store i32 [[ABCPO]], ptr @__dfsan_retval_origin_tls, align 4 - +; CHECK-LABEL: define ptr @gepop( +; CHECK-SAME: ptr [[P:%.*]], i32 [[A:%.*]], i32 [[B:%.*]], i32 [[C:%.*]]) { +; CHECK-NEXT: [[TMP1:%.*]] = load i32, ptr getelementptr inbounds ([200 x i32], ptr @__dfsan_arg_origin_tls, i64 0, i64 3), align 4 +; CHECK-NEXT: [[TMP2:%.*]] = load i32, ptr getelementptr inbounds ([200 x i32], ptr @__dfsan_arg_origin_tls, i64 0, i64 2), align 4 +; CHECK-NEXT: [[TMP3:%.*]] = load i32, ptr getelementptr inbounds ([200 x i32], ptr @__dfsan_arg_origin_tls, 
i64 0, i64 1), align 4 +; CHECK-NEXT: [[TMP4:%.*]] = load i32, ptr @__dfsan_arg_origin_tls, align 4 +; CHECK-NEXT: [[TMP5:%.*]] = load i8, ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 6), align 2 +; CHECK-NEXT: [[TMP6:%.*]] = load i8, ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 4), align 2 +; CHECK-NEXT: [[TMP7:%.*]] = load i8, ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 2), align 2 +; CHECK-NEXT: [[TMP8:%.*]] = load i8, ptr @__dfsan_arg_tls, align 2 +; CHECK-NEXT: [[TMP9:%.*]] = or i8 [[TMP8]], [[TMP7]] +; CHECK-NEXT: [[TMP10:%.*]] = or i8 [[TMP9]], [[TMP6]] +; CHECK-NEXT: [[TMP11:%.*]] = or i8 [[TMP10]], [[TMP5]] +; CHECK-NEXT: [[TMP12:%.*]] = icmp ne i8 [[TMP7]], 0 +; CHECK-NEXT: [[TMP13:%.*]] = select i1 [[TMP12]], i32 [[TMP3]], i32 [[TMP4]] +; CHECK-NEXT: [[TMP14:%.*]] = icmp ne i8 [[TMP6]], 0 +; CHECK-NEXT: [[TMP15:%.*]] = select i1 [[TMP14]], i32 [[TMP2]], i32 [[TMP13]] +; CHECK-NEXT: [[TMP16:%.*]] = icmp ne i8 [[TMP5]], 0 +; CHECK-NEXT: [[TMP17:%.*]] = select i1 [[TMP16]], i32 [[TMP1]], i32 [[TMP15]] +; CHECK-NEXT: [[E:%.*]] = getelementptr [10 x [20 x i32]], ptr [[P]], i32 [[A]], i32 [[B]], i32 [[C]] +; CHECK-NEXT: store i8 [[TMP11]], ptr @__dfsan_retval_tls, align 2 +; CHECK-NEXT: store i32 [[TMP17]], ptr @__dfsan_retval_origin_tls, align 4 +; CHECK-NEXT: ret ptr [[E]] +; %e = getelementptr [10 x [20 x i32]], ptr %p, i32 %a, i32 %b, i32 %c ret ptr %e } define i32 @eeop(<4 x i32> %a, i32 %b) { - ; CHECK: @eeop.dfsan - ; CHECK: [[BO:%.*]] = load i32, ptr getelementptr inbounds ([200 x i32], ptr @__dfsan_arg_origin_tls, i64 0, i64 1), align 4 - ; CHECK: [[AO:%.*]] = load i32, ptr @__dfsan_arg_origin_tls, align 4 - ; CHECK: [[BS:%.*]] = load i8, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__dfsan_arg_tls to i64), i64 2) to ptr), align 2 - ; CHECK: [[NE:%.*]] = icmp ne i8 [[BS]], 0 - ; CHECK: [[MO:%.*]] = select i1 [[NE]], i32 [[BO]], i32 [[AO]] - ; CHECK: store i32 [[MO]], ptr @__dfsan_retval_origin_tls, align 4 - +; CHECK-LABEL: define i32 @eeop( +; CHECK-SAME: <4 x i32> [[A:%.*]], i32 [[B:%.*]]) { +; CHECK-NEXT: [[TMP1:%.*]] = load i32, ptr getelementptr inbounds ([200 x i32], ptr @__dfsan_arg_origin_tls, i64 0, i64 1), align 4 +; CHECK-NEXT: [[TMP2:%.*]] = load i32, ptr @__dfsan_arg_origin_tls, align 4 +; CHECK-NEXT: [[TMP3:%.*]] = load i8, ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 2), align 2 +; CHECK-NEXT: [[TMP4:%.*]] = load i8, ptr @__dfsan_arg_tls, align 2 +; CHECK-NEXT: [[TMP5:%.*]] = or i8 [[TMP4]], [[TMP3]] +; CHECK-NEXT: [[TMP6:%.*]] = icmp ne i8 [[TMP3]], 0 +; CHECK-NEXT: [[TMP7:%.*]] = select i1 [[TMP6]], i32 [[TMP1]], i32 [[TMP2]] +; CHECK-NEXT: [[E:%.*]] = extractelement <4 x i32> [[A]], i32 [[B]] +; CHECK-NEXT: store i8 [[TMP5]], ptr @__dfsan_retval_tls, align 2 +; CHECK-NEXT: store i32 [[TMP7]], ptr @__dfsan_retval_origin_tls, align 4 +; CHECK-NEXT: ret i32 [[E]] +; %e = extractelement <4 x i32> %a, i32 %b ret i32 %e } define <4 x i32> @ieop(<4 x i32> %p, i32 %a, i32 %b) { - ; CHECK: @ieop.dfsan - ; CHECK: [[BO:%.*]] = load i32, ptr getelementptr inbounds ([200 x i32], ptr @__dfsan_arg_origin_tls, i64 0, i64 2), align 4 - ; CHECK: [[AO:%.*]] = load i32, ptr getelementptr inbounds ([200 x i32], ptr @__dfsan_arg_origin_tls, i64 0, i64 1), align 4 - ; CHECK: [[PO:%.*]] = load i32, ptr @__dfsan_arg_origin_tls, align 4 - ; CHECK: [[BS:%.*]] = load i8, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__dfsan_arg_tls to i64), i64 4) to ptr), align 2 - ; CHECK: [[AS:%.*]] = load i8, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__dfsan_arg_tls to i64), i64 2) to 
ptr), align 2 - ; CHECK: [[AS_NE:%.*]] = icmp ne i8 [[AS]], 0 - ; CHECK: [[APO:%.*]] = select i1 [[AS_NE]], i32 [[AO]], i32 [[PO]] - ; CHECK: [[BS_NE:%.*]] = icmp ne i8 [[BS]], 0 - ; CHECK: [[ABPO:%.*]] = select i1 [[BS_NE]], i32 [[BO]], i32 [[APO]] - ; CHECK: store i32 [[ABPO]], ptr @__dfsan_retval_origin_tls, align 4 - +; CHECK-LABEL: define <4 x i32> @ieop( +; CHECK-SAME: <4 x i32> [[P:%.*]], i32 [[A:%.*]], i32 [[B:%.*]]) { +; CHECK-NEXT: [[TMP1:%.*]] = load i32, ptr getelementptr inbounds ([200 x i32], ptr @__dfsan_arg_origin_tls, i64 0, i64 2), align 4 +; CHECK-NEXT: [[TMP2:%.*]] = load i32, ptr getelementptr inbounds ([200 x i32], ptr @__dfsan_arg_origin_tls, i64 0, i64 1), align 4 +; CHECK-NEXT: [[TMP3:%.*]] = load i32, ptr @__dfsan_arg_origin_tls, align 4 +; CHECK-NEXT: [[TMP4:%.*]] = load i8, ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 4), align 2 +; CHECK-NEXT: [[TMP5:%.*]] = load i8, ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 2), align 2 +; CHECK-NEXT: [[TMP6:%.*]] = load i8, ptr @__dfsan_arg_tls, align 2 +; CHECK-NEXT: [[TMP7:%.*]] = or i8 [[TMP6]], [[TMP5]] +; CHECK-NEXT: [[TMP8:%.*]] = or i8 [[TMP7]], [[TMP4]] +; CHECK-NEXT: [[TMP9:%.*]] = icmp ne i8 [[TMP5]], 0 +; CHECK-NEXT: [[TMP10:%.*]] = select i1 [[TMP9]], i32 [[TMP2]], i32 [[TMP3]] +; CHECK-NEXT: [[TMP11:%.*]] = icmp ne i8 [[TMP4]], 0 +; CHECK-NEXT: [[TMP12:%.*]] = select i1 [[TMP11]], i32 [[TMP1]], i32 [[TMP10]] +; CHECK-NEXT: [[E:%.*]] = insertelement <4 x i32> [[P]], i32 [[A]], i32 [[B]] +; CHECK-NEXT: store i8 [[TMP8]], ptr @__dfsan_retval_tls, align 2 +; CHECK-NEXT: store i32 [[TMP12]], ptr @__dfsan_retval_origin_tls, align 4 +; CHECK-NEXT: ret <4 x i32> [[E]] +; %e = insertelement <4 x i32> %p, i32 %a, i32 %b ret <4 x i32> %e } define <4 x i32> @svop(<4 x i32> %a, <4 x i32> %b) { - ; CHECK: @svop.dfsan - ; CHECK: [[BO:%.*]] = load i32, ptr getelementptr inbounds ([200 x i32], ptr @__dfsan_arg_origin_tls, i64 0, i64 1), align 4 - ; CHECK: [[AO:%.*]] = load i32, ptr @__dfsan_arg_origin_tls, align 4 - ; CHECK: [[BS:%.*]] = load i8, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__dfsan_arg_tls to i64), i64 2) to ptr), align 2 - ; CHECK: [[NE:%.*]] = icmp ne i8 [[BS]], 0 - ; CHECK: [[MO:%.*]] = select i1 [[NE]], i32 [[BO]], i32 [[AO]] - ; CHECK: store i32 [[MO]], ptr @__dfsan_retval_origin_tls, align 4 - +; CHECK-LABEL: define <4 x i32> @svop( +; CHECK-SAME: <4 x i32> [[A:%.*]], <4 x i32> [[B:%.*]]) { +; CHECK-NEXT: [[TMP1:%.*]] = load i32, ptr getelementptr inbounds ([200 x i32], ptr @__dfsan_arg_origin_tls, i64 0, i64 1), align 4 +; CHECK-NEXT: [[TMP2:%.*]] = load i32, ptr @__dfsan_arg_origin_tls, align 4 +; CHECK-NEXT: [[TMP3:%.*]] = load i8, ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 2), align 2 +; CHECK-NEXT: [[TMP4:%.*]] = load i8, ptr @__dfsan_arg_tls, align 2 +; CHECK-NEXT: [[TMP5:%.*]] = or i8 [[TMP4]], [[TMP3]] +; CHECK-NEXT: [[TMP6:%.*]] = icmp ne i8 [[TMP3]], 0 +; CHECK-NEXT: [[TMP7:%.*]] = select i1 [[TMP6]], i32 [[TMP1]], i32 [[TMP2]] +; CHECK-NEXT: [[E:%.*]] = shufflevector <4 x i32> [[A]], <4 x i32> [[B]], <4 x i32> <i32 0, i32 4, i32 1, i32 5> +; CHECK-NEXT: store i8 [[TMP5]], ptr @__dfsan_retval_tls, align 2 +; CHECK-NEXT: store i32 [[TMP7]], ptr @__dfsan_retval_origin_tls, align 4 +; CHECK-NEXT: ret <4 x i32> [[E]] +; %e = shufflevector <4 x i32> %a, <4 x i32> %b, <4 x i32> <i32 0, i32 4, i32 1, i32 5> ret <4 x i32> %e -} +} define i32 @evop({i32, float} %a) { - ; CHECK: @evop.dfsan - ; CHECK: [[AO:%.*]] = load i32, ptr @__dfsan_arg_origin_tls, align 4 - ; CHECK: store i32 [[AO]], ptr 
@__dfsan_retval_origin_tls, align 4 - +; CHECK-LABEL: define i32 @evop( +; CHECK-SAME: { i32, float } [[A:%.*]]) { +; CHECK-NEXT: [[TMP1:%.*]] = load i32, ptr @__dfsan_arg_origin_tls, align 4 +; CHECK-NEXT: [[TMP2:%.*]] = load { i8, i8 }, ptr @__dfsan_arg_tls, align 2 +; CHECK-NEXT: [[TMP3:%.*]] = extractvalue { i8, i8 } [[TMP2]], 0 +; CHECK-NEXT: [[E:%.*]] = extractvalue { i32, float } [[A]], 0 +; CHECK-NEXT: store i8 [[TMP3]], ptr @__dfsan_retval_tls, align 2 +; CHECK-NEXT: store i32 [[TMP1]], ptr @__dfsan_retval_origin_tls, align 4 +; CHECK-NEXT: ret i32 [[E]] +; %e = extractvalue {i32, float} %a, 0 ret i32 %e } +; COMM: TODO simplify the expression 4 to +; COMM: 6, if shadow-tls-alignment is updated to match shadow define {i32, {float, float}} @ivop({i32, {float, float}} %a, {float, float} %b) { - ; CHECK: @ivop.dfsan - ; CHECK: [[BO:%.*]] = load i32, ptr getelementptr inbounds ([200 x i32], ptr @__dfsan_arg_origin_tls, i64 0, i64 1), align 4 - ; CHECK: [[AO:%.*]] = load i32, ptr @__dfsan_arg_origin_tls, align 4 - ; COMM: TODO simplify the expression 4 to - ; COMM: 6, if shadow-tls-alignment is updated to match shadow - ; CHECK: [[BS:%.*]] = load { i8, i8 }, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__dfsan_arg_tls to i64), i64 4) to ptr), align 2 - ; CHECK: [[BS0:%.*]] = extractvalue { i8, i8 } [[BS]], 0 - ; CHECK: [[BS1:%.*]] = extractvalue { i8, i8 } [[BS]], 1 - ; CHECK: [[BS01:%.*]] = or i8 [[BS0]], [[BS1]] - ; CHECK: [[NE:%.*]] = icmp ne i8 [[BS01]], 0 - ; CHECK: [[MO:%.*]] = select i1 [[NE]], i32 [[BO]], i32 [[AO]] - ; CHECK: store i32 [[MO]], ptr @__dfsan_retval_origin_tls, align 4 - +; CHECK-LABEL: define { i32, { float, float } } @ivop( +; CHECK-SAME: { i32, { float, float } } [[A:%.*]], { float, float } [[B:%.*]]) { +; CHECK-NEXT: [[TMP1:%.*]] = load i32, ptr getelementptr inbounds ([200 x i32], ptr @__dfsan_arg_origin_tls, i64 0, i64 1), align 4 +; CHECK-NEXT: [[TMP2:%.*]] = load i32, ptr @__dfsan_arg_origin_tls, align 4 +; CHECK-NEXT: [[TMP3:%.*]] = load { i8, i8 }, ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 4), align 2 +; CHECK-NEXT: [[TMP4:%.*]] = load { i8, { i8, i8 } }, ptr @__dfsan_arg_tls, align 2 +; CHECK-NEXT: [[TMP5:%.*]] = insertvalue { i8, { i8, i8 } } [[TMP4]], { i8, i8 } [[TMP3]], 1 +; CHECK-NEXT: [[TMP6:%.*]] = extractvalue { i8, i8 } [[TMP3]], 0 +; CHECK-NEXT: [[TMP7:%.*]] = extractvalue { i8, i8 } [[TMP3]], 1 +; CHECK-NEXT: [[TMP8:%.*]] = or i8 [[TMP6]], [[TMP7]] +; CHECK-NEXT: [[TMP9:%.*]] = icmp ne i8 [[TMP8]], 0 +; CHECK-NEXT: [[TMP10:%.*]] = select i1 [[TMP9]], i32 [[TMP1]], i32 [[TMP2]] +; CHECK-NEXT: [[E:%.*]] = insertvalue { i32, { float, float } } [[A]], { float, float } [[B]], 1 +; CHECK-NEXT: store { i8, { i8, i8 } } [[TMP5]], ptr @__dfsan_retval_tls, align 2 +; CHECK-NEXT: store i32 [[TMP10]], ptr @__dfsan_retval_origin_tls, align 4 +; CHECK-NEXT: ret { i32, { float, float } } [[E]] +; %e = insertvalue {i32, {float, float}} %a, {float, float} %b, 1 ret {i32, {float, float}} %e } diff --git a/llvm/test/Instrumentation/DataFlowSanitizer/origin_phi.ll b/llvm/test/Instrumentation/DataFlowSanitizer/origin_phi.ll index e98dd2b..b69c383 100644 --- a/llvm/test/Instrumentation/DataFlowSanitizer/origin_phi.ll +++ b/llvm/test/Instrumentation/DataFlowSanitizer/origin_phi.ll @@ -1,41 +1,50 @@ -; RUN: opt < %s -passes=dfsan -dfsan-track-origins=1 -S | FileCheck %s +; NOTE: Assertions have been autogenerated by utils/update_test_checks.py UTC_ARGS: --version 6 +; RUN: opt < %s -passes=dfsan -dfsan-track-origins=1 -dfsan-add-global-name-suffix=0 -S | 
FileCheck %s target datalayout = "e-p:64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f32:32:32-f64:64:64-v64:64:64-v128:128:128-a0:0:64-s0:64:64-f80:128:128-n8:16:32:64-S128" target triple = "x86_64-unknown-linux-gnu" -; CHECK: @__dfsan_arg_tls = external thread_local(initialexec) global [[TLS_ARR:\[100 x i64\]]] define i32 @phiop(i32 %a, i32 %b, i1 %c) { - ; CHECK: @phiop.dfsan - ; CHECK: entry: - ; CHECK: [[BO:%.*]] = load i32, ptr getelementptr inbounds ([200 x i32], ptr @__dfsan_arg_origin_tls, i64 0, i64 1), align 4 - ; CHECK: [[AO:%.*]] = load i32, ptr @__dfsan_arg_origin_tls, align 4 - ; CHECK: [[BS:%.*]] = load i8, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__dfsan_arg_tls to i64), i64 2) to ptr), align [[ALIGN:2]] - ; CHECK: [[AS:%.*]] = load i8, ptr @__dfsan_arg_tls, align [[ALIGN]] - ; CHECK: br i1 %c, label %next, label %done - ; CHECK: next: - ; CHECK: br i1 %c, label %T, label %F - ; CHECK: T: - ; CHECK: [[BS_NE:%.*]] = icmp ne i8 [[BS]], 0 - ; CHECK: [[BAO_T:%.*]] = select i1 [[BS_NE]], i32 [[BO]], i32 [[AO]] - ; CHECK: br label %done - ; CHECK: F: - ; CHECK: [[AS_NE:%.*]] = icmp ne i8 [[AS]], 0 - ; CHECK: [[BAO_F:%.*]] = select i1 [[AS_NE]], i32 [[AO]], i32 [[BO]] - ; CHECK: br label %done - ; CHECK: done: - ; CHECK: [[PO:%.*]] = phi i32 [ [[BAO_T]], %T ], [ [[BAO_F]], %F ], [ [[AO]], %entry ] - ; CHECK: store i32 [[PO]], ptr @__dfsan_retval_origin_tls, align 4 - +; CHECK-LABEL: define i32 @phiop( +; CHECK-SAME: i32 [[A:%.*]], i32 [[B:%.*]], i1 [[C:%.*]]) { +; CHECK-NEXT: [[ENTRY:.*]]: +; CHECK-NEXT: [[TMP0:%.*]] = load i32, ptr getelementptr inbounds ([200 x i32], ptr @__dfsan_arg_origin_tls, i64 0, i64 1), align 4 +; CHECK-NEXT: [[TMP1:%.*]] = load i32, ptr @__dfsan_arg_origin_tls, align 4 +; CHECK-NEXT: [[TMP2:%.*]] = load i8, ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 2), align 2 +; CHECK-NEXT: [[TMP3:%.*]] = load i8, ptr @__dfsan_arg_tls, align 2 +; CHECK-NEXT: br i1 [[C]], label %[[NEXT:.*]], label %[[DONE:.*]] +; CHECK: [[NEXT]]: +; CHECK-NEXT: br i1 [[C]], label %[[T:.*]], label %[[F:.*]] +; CHECK: [[T]]: +; CHECK-NEXT: [[TMP4:%.*]] = or i8 [[TMP3]], [[TMP2]] +; CHECK-NEXT: [[TMP5:%.*]] = icmp ne i8 [[TMP2]], 0 +; CHECK-NEXT: [[TMP6:%.*]] = select i1 [[TMP5]], i32 [[TMP0]], i32 [[TMP1]] +; CHECK-NEXT: [[SUM:%.*]] = add i32 [[A]], [[B]] +; CHECK-NEXT: br label %[[DONE]] +; CHECK: [[F]]: +; CHECK-NEXT: [[TMP7:%.*]] = or i8 [[TMP2]], [[TMP3]] +; CHECK-NEXT: [[TMP8:%.*]] = icmp ne i8 [[TMP3]], 0 +; CHECK-NEXT: [[TMP9:%.*]] = select i1 [[TMP8]], i32 [[TMP1]], i32 [[TMP0]] +; CHECK-NEXT: [[DIFF:%.*]] = sub i32 [[B]], [[A]] +; CHECK-NEXT: br label %[[DONE]] +; CHECK: [[DONE]]: +; CHECK-NEXT: [[TMP10:%.*]] = phi i8 [ [[TMP4]], %[[T]] ], [ [[TMP7]], %[[F]] ], [ [[TMP3]], %[[ENTRY]] ] +; CHECK-NEXT: [[TMP11:%.*]] = phi i32 [ [[TMP6]], %[[T]] ], [ [[TMP9]], %[[F]] ], [ [[TMP1]], %[[ENTRY]] ] +; CHECK-NEXT: [[R:%.*]] = phi i32 [ [[SUM]], %[[T]] ], [ [[DIFF]], %[[F]] ], [ [[A]], %[[ENTRY]] ] +; CHECK-NEXT: store i8 [[TMP10]], ptr @__dfsan_retval_tls, align 2 +; CHECK-NEXT: store i32 [[TMP11]], ptr @__dfsan_retval_origin_tls, align 4 +; CHECK-NEXT: ret i32 [[R]] +; entry: br i1 %c, label %next, label %done -next: - br i1 %c, label %T, label %F +next: + br i1 %c, label %T, label %F T: - %sum = add i32 %a, %b + %sum = add i32 %a, %b br label %done F: - %diff = sub i32 %b, %a + %diff = sub i32 %b, %a br label %done done: %r = phi i32 [%sum, %T], [%diff, %F], [%a, %entry] ret i32 %r -}
\ No newline at end of file +} diff --git a/llvm/test/Instrumentation/DataFlowSanitizer/origin_select.ll b/llvm/test/Instrumentation/DataFlowSanitizer/origin_select.ll index 133bf22..2839897 100644 --- a/llvm/test/Instrumentation/DataFlowSanitizer/origin_select.ll +++ b/llvm/test/Instrumentation/DataFlowSanitizer/origin_select.ll @@ -48,7 +48,7 @@ define <4 x i8> @select8v(<4 x i1> %c, <4 x i8> %t, <4 x i8> %f) { ; TRACK_CONTROL_FLOW: [[CO:%.*]] = load i32, ptr @__dfsan_arg_origin_tls, align 4 ; TRACK_CONTROL_FLOW: [[FO:%.*]] = load i32, ptr getelementptr inbounds ([200 x i32], ptr @__dfsan_arg_origin_tls, i64 0, i64 2), align 4 ; TRACK_CONTROL_FLOW: [[TO:%.*]] = load i32, ptr getelementptr inbounds ([200 x i32], ptr @__dfsan_arg_origin_tls, i64 0, i64 1), align 4 - ; TRACK_CONTROL_FLOW: [[FS:%.*]] = load i8, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__dfsan_arg_tls to i64), i64 4) to ptr), align 2 + ; TRACK_CONTROL_FLOW: [[FS:%.*]] = load i8, ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 4), align 2 ; TRACK_CONTROL_FLOW: [[CS:%.*]] = load i8, ptr @__dfsan_arg_tls, align 2 ; TRACK_CONTROL_FLOW: [[FS_NE:%.*]] = icmp ne i8 [[FS]], 0 ; TRACK_CONTROL_FLOW: [[FTO:%.*]] = select i1 [[FS_NE]], i32 [[FO]], i32 [[TO]] @@ -59,11 +59,11 @@ define <4 x i8> @select8v(<4 x i1> %c, <4 x i8> %t, <4 x i8> %f) { ; NO_TRACK_CONTROL_FLOW: @select8v.dfsan ; NO_TRACK_CONTROL_FLOW: [[FO:%.*]] = load i32, ptr getelementptr inbounds ([200 x i32], ptr @__dfsan_arg_origin_tls, i64 0, i64 2), align 4 ; NO_TRACK_CONTROL_FLOW: [[TO:%.*]] = load i32, ptr getelementptr inbounds ([200 x i32], ptr @__dfsan_arg_origin_tls, i64 0, i64 1), align 4 - ; NO_TRACK_CONTROL_FLOW: [[FS:%.*]] = load i8, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__dfsan_arg_tls to i64), i64 4) to ptr), align 2 + ; NO_TRACK_CONTROL_FLOW: [[FS:%.*]] = load i8, ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 4), align 2 ; NO_TRACK_CONTROL_FLOW: [[FS_NE:%.*]] = icmp ne i8 [[FS]], 0 ; NO_TRACK_CONTROL_FLOW: [[FTO:%.*]] = select i1 [[FS_NE]], i32 [[FO]], i32 [[TO]] ; NO_TRACK_CONTROL_FLOW: store i32 [[FTO]], ptr @__dfsan_retval_origin_tls, align 4 %a = select <4 x i1> %c, <4 x i8> %t, <4 x i8> %f ret <4 x i8> %a -}
\ No newline at end of file +} diff --git a/llvm/test/Instrumentation/DataFlowSanitizer/origin_store.ll b/llvm/test/Instrumentation/DataFlowSanitizer/origin_store.ll index 0b0ba40..55b0a01 100644 --- a/llvm/test/Instrumentation/DataFlowSanitizer/origin_store.ll +++ b/llvm/test/Instrumentation/DataFlowSanitizer/origin_store.ll @@ -75,7 +75,7 @@ define void @store64_align8(ptr %p, i64 %a) { ; COMBINE_STORE_PTR-NEXT: %[[#PS:]] = load i8, ptr @__dfsan_arg_tls, align [[ALIGN]] ; CHECK-NEXT: %[[#AO:]] = load i32, ptr getelementptr inbounds ([200 x i32], ptr @__dfsan_arg_origin_tls, i64 0, i64 1), align 4 - ; CHECK-NEXT: %[[#AS:]] = load i8, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__dfsan_arg_tls to i64), i64 2) to ptr), align [[ALIGN]] + ; CHECK-NEXT: %[[#AS:]] = load i8, ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 2), align [[ALIGN]] ; COMBINE_STORE_PTR-NEXT: %[[#AS:]] = or i8 %[[#AS]], %[[#PS]] ; COMBINE_STORE_PTR-NEXT: %[[#NE:]] = icmp ne i8 %[[#PS]], 0 @@ -104,7 +104,7 @@ define void @store64_align2(ptr %p, i64 %a) { ; COMBINE_STORE_PTR-NEXT: %[[#PS:]] = load i8, ptr @__dfsan_arg_tls, align [[ALIGN]] ; CHECK-NEXT: %[[#AO:]] = load i32, ptr getelementptr inbounds ([200 x i32], ptr @__dfsan_arg_origin_tls, i64 0, i64 1), align 4 - ; CHECK-NEXT: %[[#AS:]] = load i8, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__dfsan_arg_tls to i64), i64 2) to ptr), align [[ALIGN]] + ; CHECK-NEXT: %[[#AS:]] = load i8, ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 2), align [[ALIGN]] ; COMBINE_STORE_PTR-NEXT: %[[#AS:]] = or i8 %[[#AS]], %[[#PS]] ; COMBINE_STORE_PTR-NEXT: %[[#NE:]] = icmp ne i8 %[[#PS]], 0 @@ -131,7 +131,7 @@ define void @store96_align8(ptr %p, i96 %a) { ; COMBINE_STORE_PTR-NEXT: %[[#PS:]] = load i8, ptr @__dfsan_arg_tls, align [[ALIGN]] ; CHECK-NEXT: %[[#AO:]] = load i32, ptr getelementptr inbounds ([200 x i32], ptr @__dfsan_arg_origin_tls, i64 0, i64 1), align 4 - ; CHECK-NEXT: %[[#AS:]] = load i8, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__dfsan_arg_tls to i64), i64 2) to ptr), align [[ALIGN]] + ; CHECK-NEXT: %[[#AS:]] = load i8, ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 2), align [[ALIGN]] ; COMBINE_STORE_PTR-NEXT: %[[#AS:]] = or i8 %[[#AS]], %[[#PS]] ; COMBINE_STORE_PTR-NEXT: %[[#NE:]] = icmp ne i8 %[[#PS]], 0 diff --git a/llvm/test/Instrumentation/DataFlowSanitizer/origin_store_threshold.ll b/llvm/test/Instrumentation/DataFlowSanitizer/origin_store_threshold.ll index 3630ebc..8b526f1 100644 --- a/llvm/test/Instrumentation/DataFlowSanitizer/origin_store_threshold.ll +++ b/llvm/test/Instrumentation/DataFlowSanitizer/origin_store_threshold.ll @@ -1,16 +1,37 @@ -; RUN: opt < %s -passes=dfsan -dfsan-track-origins=1 -dfsan-instrument-with-call-threshold=0 -S | FileCheck %s +; NOTE: Assertions have been autogenerated by utils/update_test_checks.py UTC_ARGS: --version 6 +; RUN: opt < %s -passes=dfsan -dfsan-track-origins=1 -dfsan-instrument-with-call-threshold=0 -dfsan-add-global-name-suffix=0 -S | FileCheck %s target datalayout = "e-p:64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f32:32:32-f64:64:64-v64:64:64-v128:128:128-a0:0:64-s0:64:64-f80:128:128-n8:16:32:64-S128" target triple = "x86_64-unknown-linux-gnu" define void @store_threshold(ptr %p, [2 x i64] %a) { - ; CHECK: @store_threshold.dfsan - ; CHECK: [[AO:%.*]] = load i32, ptr getelementptr inbounds ([200 x i32], ptr @__dfsan_arg_origin_tls, i64 0, i64 1), align 4 - ; CHECK: [[AS:%.*]] = load [2 x i8], ptr inttoptr (i64 add (i64 ptrtoint (ptr @__dfsan_arg_tls to i64), i64 2) to ptr), align 2 - ; CHECK: [[AS0:%.*]] = 
extractvalue [2 x i8] [[AS]], 0 - ; CHECK: [[AS1:%.*]] = extractvalue [2 x i8] [[AS]], 1 - ; CHECK: [[AS01:%.*]] = or i8 [[AS0]], [[AS1]] - ; CHECK: call void @__dfsan_maybe_store_origin(i8 [[AS01]], ptr %p, i64 16, i32 [[AO]]) - ; CHECK: store [2 x i64] %a, ptr %p, align 8 +; CHECK-LABEL: define void @store_threshold( +; CHECK-SAME: ptr [[P:%.*]], [2 x i64] [[A:%.*]]) { +; CHECK-NEXT: [[TMP1:%.*]] = load i32, ptr getelementptr inbounds ([200 x i32], ptr @__dfsan_arg_origin_tls, i64 0, i64 1), align 4 +; CHECK-NEXT: [[TMP2:%.*]] = load [2 x i8], ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 2), align 2 +; CHECK-NEXT: [[TMP3:%.*]] = extractvalue [2 x i8] [[TMP2]], 0 +; CHECK-NEXT: [[TMP4:%.*]] = extractvalue [2 x i8] [[TMP2]], 1 +; CHECK-NEXT: [[TMP5:%.*]] = or i8 [[TMP3]], [[TMP4]] +; CHECK-NEXT: [[TMP6:%.*]] = ptrtoint ptr [[P]] to i64 +; CHECK-NEXT: [[TMP7:%.*]] = xor i64 [[TMP6]], 87960930222080 +; CHECK-NEXT: [[TMP8:%.*]] = inttoptr i64 [[TMP7]] to ptr +; CHECK-NEXT: [[TMP9:%.*]] = add i64 [[TMP7]], 17592186044416 +; CHECK-NEXT: [[TMP10:%.*]] = inttoptr i64 [[TMP9]] to ptr +; CHECK-NEXT: [[TMP11:%.*]] = insertelement <8 x i8> poison, i8 [[TMP5]], i32 0 +; CHECK-NEXT: [[TMP12:%.*]] = insertelement <8 x i8> [[TMP11]], i8 [[TMP5]], i32 1 +; CHECK-NEXT: [[TMP13:%.*]] = insertelement <8 x i8> [[TMP12]], i8 [[TMP5]], i32 2 +; CHECK-NEXT: [[TMP14:%.*]] = insertelement <8 x i8> [[TMP13]], i8 [[TMP5]], i32 3 +; CHECK-NEXT: [[TMP15:%.*]] = insertelement <8 x i8> [[TMP14]], i8 [[TMP5]], i32 4 +; CHECK-NEXT: [[TMP16:%.*]] = insertelement <8 x i8> [[TMP15]], i8 [[TMP5]], i32 5 +; CHECK-NEXT: [[TMP17:%.*]] = insertelement <8 x i8> [[TMP16]], i8 [[TMP5]], i32 6 +; CHECK-NEXT: [[TMP18:%.*]] = insertelement <8 x i8> [[TMP17]], i8 [[TMP5]], i32 7 +; CHECK-NEXT: [[TMP19:%.*]] = getelementptr <8 x i8>, ptr [[TMP8]], i32 0 +; CHECK-NEXT: store <8 x i8> [[TMP18]], ptr [[TMP19]], align 1 +; CHECK-NEXT: [[TMP20:%.*]] = getelementptr <8 x i8>, ptr [[TMP8]], i32 1 +; CHECK-NEXT: store <8 x i8> [[TMP18]], ptr [[TMP20]], align 1 +; CHECK-NEXT: call void @__dfsan_maybe_store_origin(i8 [[TMP5]], ptr [[P]], i64 16, i32 [[TMP1]]) +; CHECK-NEXT: store [2 x i64] [[A]], ptr [[P]], align 8 +; CHECK-NEXT: ret void +; store [2 x i64] %a, ptr %p ret void diff --git a/llvm/test/Instrumentation/DataFlowSanitizer/origin_track_load.ll b/llvm/test/Instrumentation/DataFlowSanitizer/origin_track_load.ll index b93d2eb..f967ccf 100644 --- a/llvm/test/Instrumentation/DataFlowSanitizer/origin_track_load.ll +++ b/llvm/test/Instrumentation/DataFlowSanitizer/origin_track_load.ll @@ -1,27 +1,26 @@ -; RUN: opt < %s -passes=dfsan -dfsan-track-origins=2 -S | FileCheck %s +; NOTE: Assertions have been autogenerated by utils/update_test_checks.py UTC_ARGS: --version 6 +; RUN: opt < %s -passes=dfsan -dfsan-track-origins=2 -dfsan-add-global-name-suffix=0 -S | FileCheck %s target datalayout = "e-p:64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f32:32:32-f64:64:64-v64:64:64-v128:128:128-a0:0:64-s0:64:64-f80:128:128-n8:16:32:64-S128" target triple = "x86_64-unknown-linux-gnu" define i64 @load64(ptr %p) { - ; CHECK-LABEL: @load64.dfsan - - ; CHECK-NEXT: %[[#PO:]] = load i32, ptr @__dfsan_arg_origin_tls, align 4 - ; CHECK-NEXT: %[[#PS:]] = load i8, ptr @__dfsan_arg_tls, align [[ALIGN:2]] - - ; CHECK-NEXT: %[[#LABEL_ORIGIN:]] = call zeroext i64 @__dfsan_load_label_and_origin(ptr %p, i64 8) - ; CHECK-NEXT: %[[#LABEL_ORIGIN_H32:]] = lshr i64 %[[#LABEL_ORIGIN]], 32 - ; CHECK-NEXT: %[[#LABEL:]] = trunc i64 %[[#LABEL_ORIGIN_H32]] to i8 - ; 
CHECK-NEXT: %[[#ORIGIN:]] = trunc i64 %[[#LABEL_ORIGIN]] to i32 - ; CHECK-NEXT: %[[#ORIGIN_CHAINED:]] = call i32 @__dfsan_chain_origin_if_tainted(i8 %[[#LABEL]], i32 %[[#ORIGIN]]) - - ; CHECK-NEXT: %[[#LABEL:]] = or i8 %[[#LABEL]], %[[#PS]] - ; CHECK-NEXT: %[[#NZ:]] = icmp ne i8 %[[#PS]], 0 - ; CHECK-NEXT: %[[#ORIGIN_SEL:]] = select i1 %[[#NZ]], i32 %[[#PO]], i32 %[[#ORIGIN_CHAINED]] - - ; CHECK-NEXT: %a = load i64, ptr %p - ; CHECK-NEXT: store i8 %[[#LABEL]], ptr @__dfsan_retval_tls, align [[ALIGN]] - ; CHECK-NEXT: store i32 %[[#ORIGIN_SEL]], ptr @__dfsan_retval_origin_tls, align 4 - +; CHECK-LABEL: define i64 @load64( +; CHECK-SAME: ptr [[P:%.*]]) { +; CHECK-NEXT: [[TMP1:%.*]] = load i32, ptr @__dfsan_arg_origin_tls, align 4 +; CHECK-NEXT: [[TMP2:%.*]] = load i8, ptr @__dfsan_arg_tls, align 2 +; CHECK-NEXT: [[TMP3:%.*]] = call zeroext i64 @__dfsan_load_label_and_origin(ptr [[P]], i64 8) +; CHECK-NEXT: [[TMP4:%.*]] = lshr i64 [[TMP3]], 32 +; CHECK-NEXT: [[TMP5:%.*]] = trunc i64 [[TMP4]] to i8 +; CHECK-NEXT: [[TMP6:%.*]] = trunc i64 [[TMP3]] to i32 +; CHECK-NEXT: [[TMP7:%.*]] = call i32 @__dfsan_chain_origin_if_tainted(i8 [[TMP5]], i32 [[TMP6]]) +; CHECK-NEXT: [[TMP8:%.*]] = or i8 [[TMP5]], [[TMP2]] +; CHECK-NEXT: [[TMP9:%.*]] = icmp ne i8 [[TMP2]], 0 +; CHECK-NEXT: [[TMP10:%.*]] = select i1 [[TMP9]], i32 [[TMP1]], i32 [[TMP7]] +; CHECK-NEXT: [[A:%.*]] = load i64, ptr [[P]], align 8 +; CHECK-NEXT: store i8 [[TMP8]], ptr @__dfsan_retval_tls, align 2 +; CHECK-NEXT: store i32 [[TMP10]], ptr @__dfsan_retval_origin_tls, align 4 +; CHECK-NEXT: ret i64 [[A]] +; %a = load i64, ptr %p ret i64 %a } diff --git a/llvm/test/Instrumentation/DataFlowSanitizer/phi.ll b/llvm/test/Instrumentation/DataFlowSanitizer/phi.ll index 592d3eb..ecf0d9c8 100644 --- a/llvm/test/Instrumentation/DataFlowSanitizer/phi.ll +++ b/llvm/test/Instrumentation/DataFlowSanitizer/phi.ll @@ -1,26 +1,41 @@ -; RUN: opt < %s -passes=dfsan -S | FileCheck %s +; NOTE: Assertions have been autogenerated by utils/update_test_checks.py UTC_ARGS: --version 6 +; RUN: opt < %s -passes=dfsan -dfsan-add-global-name-suffix=0 -S | FileCheck %s target datalayout = "e-p:64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f32:32:32-f64:64:64-v64:64:64-v128:128:128-a0:0:64-s0:64:64-f80:128:128-n8:16:32:64-S128" target triple = "x86_64-unknown-linux-gnu" define {i32, i32} @test({i32, i32} %a, i1 %c) { - ; CHECK: %[[#AL:]] = load { i8, i8 }, ptr @__dfsan_arg_tls, align [[ALIGN:2]] - ; CHECK: %[[#AL0:]] = insertvalue { i8, i8 } %[[#AL]], i8 0, 0 - ; CHECK: %[[#AL1:]] = insertvalue { i8, i8 } %[[#AL]], i8 0, 1 - ; CHECK: %[[#PL:]] = phi { i8, i8 } [ %[[#AL0]], %T ], [ %[[#AL1]], %F ] - ; CHECK: store { i8, i8 } %[[#PL]], ptr @__dfsan_retval_tls, align [[ALIGN]] +; CHECK-LABEL: define { i32, i32 } @test( +; CHECK-SAME: { i32, i32 } [[A:%.*]], i1 [[C:%.*]]) { +; CHECK-NEXT: [[ENTRY:.*:]] +; CHECK-NEXT: [[TMP0:%.*]] = load { i8, i8 }, ptr @__dfsan_arg_tls, align 2 +; CHECK-NEXT: br i1 [[C]], label %[[T:.*]], label %[[F:.*]] +; CHECK: [[T]]: +; CHECK-NEXT: [[TMP1:%.*]] = insertvalue { i8, i8 } [[TMP0]], i8 0, 0 +; CHECK-NEXT: [[AT:%.*]] = insertvalue { i32, i32 } [[A]], i32 1, 0 +; CHECK-NEXT: br label %[[DONE:.*]] +; CHECK: [[F]]: +; CHECK-NEXT: [[TMP2:%.*]] = insertvalue { i8, i8 } [[TMP0]], i8 0, 1 +; CHECK-NEXT: [[AF:%.*]] = insertvalue { i32, i32 } [[A]], i32 1, 1 +; CHECK-NEXT: br label %[[DONE]] +; CHECK: [[DONE]]: +; CHECK-NEXT: [[TMP3:%.*]] = phi { i8, i8 } [ [[TMP1]], %[[T]] ], [ [[TMP2]], %[[F]] ] +; CHECK-NEXT: [[B:%.*]] = phi { i32, i32 } [ 
[[AT]], %[[T]] ], [ [[AF]], %[[F]] ] +; CHECK-NEXT: store { i8, i8 } [[TMP3]], ptr @__dfsan_retval_tls, align 2 +; CHECK-NEXT: ret { i32, i32 } [[B]] +; entry: br i1 %c, label %T, label %F - + T: %at = insertvalue {i32, i32} %a, i32 1, 0 br label %done - + F: %af = insertvalue {i32, i32} %a, i32 1, 1 br label %done - + done: %b = phi {i32, i32} [%at, %T], [%af, %F] - ret {i32, i32} %b + ret {i32, i32} %b } diff --git a/llvm/test/Instrumentation/DataFlowSanitizer/select.ll b/llvm/test/Instrumentation/DataFlowSanitizer/select.ll index 5056616..005648b 100644 --- a/llvm/test/Instrumentation/DataFlowSanitizer/select.ll +++ b/llvm/test/Instrumentation/DataFlowSanitizer/select.ll @@ -1,74 +1,81 @@ -; RUN: opt < %s -passes=dfsan -dfsan-track-select-control-flow=true -S | FileCheck %s --check-prefixes=CHECK,TRACK_CF -; RUN: opt < %s -passes=dfsan -dfsan-track-select-control-flow=false -S | FileCheck %s --check-prefixes=CHECK,NO_TRACK_CF +; NOTE: Assertions have been autogenerated by utils/update_test_checks.py UTC_ARGS: --version 6 +; RUN: opt < %s -passes=dfsan -dfsan-track-select-control-flow=true -dfsan-add-global-name-suffix=0 -S | FileCheck %s --check-prefixes=CHECK,TRACK_CF +; RUN: opt < %s -passes=dfsan -dfsan-track-select-control-flow=false -dfsan-add-global-name-suffix=0 -S | FileCheck %s --check-prefixes=CHECK,NO_TRACK_CF target datalayout = "e-p:64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f32:32:32-f64:64:64-v64:64:64-v128:128:128-a0:0:64-s0:64:64-f80:128:128-n8:16:32:64-S128" target triple = "x86_64-unknown-linux-gnu" -; CHECK: @__dfsan_arg_tls = external thread_local(initialexec) global [[TLS_ARR:\[100 x i64\]]] -; CHECK: @__dfsan_retval_tls = external thread_local(initialexec) global [[TLS_ARR]] define i8 @select8(i1 %c, i8 %t, i8 %f) { - ; TRACK_CF: @select8.dfsan - ; TRACK_CF: %[[#R:]] = load i8, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__dfsan_arg_tls to i64), i64 4) to ptr), align [[ALIGN:2]] - ; TRACK_CF: %[[#R+1]] = load i8, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__dfsan_arg_tls to i64), i64 2) to ptr), align [[ALIGN]] - ; TRACK_CF: %[[#R+2]] = load i8, ptr @__dfsan_arg_tls, align [[ALIGN]] - ; TRACK_CF: %[[#R+3]] = select i1 %c, i8 %[[#R+1]], i8 %[[#R]] - ; TRACK_CF: %[[#RO:]] = or i8 %[[#R+2]], %[[#R+3]] - ; TRACK_CF: %a = select i1 %c, i8 %t, i8 %f - ; TRACK_CF: store i8 %[[#RO]], ptr @__dfsan_retval_tls, align [[ALIGN]] - ; TRACK_CF: ret i8 %a - - ; NO_TRACK_CF: @select8.dfsan - ; NO_TRACK_CF: %[[#R:]] = load i8, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__dfsan_arg_tls to i64), i64 4) to ptr), align [[ALIGN:2]] - ; NO_TRACK_CF: %[[#R+1]] = load i8, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__dfsan_arg_tls to i64), i64 2) to ptr), align [[ALIGN]] - ; NO_TRACK_CF: %[[#R+2]] = load i8, ptr @__dfsan_arg_tls, align [[ALIGN]] - ; NO_TRACK_CF: %[[#R+3]] = select i1 %c, i8 %[[#R+1]], i8 %[[#R]] - ; NO_TRACK_CF: %a = select i1 %c, i8 %t, i8 %f - ; NO_TRACK_CF: store i8 %[[#R+3]], ptr @__dfsan_retval_tls, align [[ALIGN]] - ; NO_TRACK_CF: ret i8 %a - +; TRACK_CF-LABEL: define i8 @select8( +; TRACK_CF-SAME: i1 [[C:%.*]], i8 [[T:%.*]], i8 [[F:%.*]]) { +; TRACK_CF-NEXT: [[TMP1:%.*]] = load i8, ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 4), align 2 +; TRACK_CF-NEXT: [[TMP2:%.*]] = load i8, ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 2), align 2 +; TRACK_CF-NEXT: [[TMP3:%.*]] = load i8, ptr @__dfsan_arg_tls, align 2 +; TRACK_CF-NEXT: [[TMP4:%.*]] = select i1 [[C]], i8 [[TMP2]], i8 [[TMP1]] +; TRACK_CF-NEXT: [[TMP5:%.*]] = or i8 [[TMP3]], [[TMP4]] +; TRACK_CF-NEXT: 
[[A:%.*]] = select i1 [[C]], i8 [[T]], i8 [[F]] +; TRACK_CF-NEXT: store i8 [[TMP5]], ptr @__dfsan_retval_tls, align 2 +; TRACK_CF-NEXT: ret i8 [[A]] +; +; NO_TRACK_CF-LABEL: define i8 @select8( +; NO_TRACK_CF-SAME: i1 [[C:%.*]], i8 [[T:%.*]], i8 [[F:%.*]]) { +; NO_TRACK_CF-NEXT: [[TMP1:%.*]] = load i8, ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 4), align 2 +; NO_TRACK_CF-NEXT: [[TMP2:%.*]] = load i8, ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 2), align 2 +; NO_TRACK_CF-NEXT: [[TMP3:%.*]] = load i8, ptr @__dfsan_arg_tls, align 2 +; NO_TRACK_CF-NEXT: [[TMP4:%.*]] = select i1 [[C]], i8 [[TMP2]], i8 [[TMP1]] +; NO_TRACK_CF-NEXT: [[A:%.*]] = select i1 [[C]], i8 [[T]], i8 [[F]] +; NO_TRACK_CF-NEXT: store i8 [[TMP4]], ptr @__dfsan_retval_tls, align 2 +; NO_TRACK_CF-NEXT: ret i8 [[A]] +; %a = select i1 %c, i8 %t, i8 %f ret i8 %a } define i8 @select8e(i1 %c, i8 %tf) { - ; TRACK_CF: @select8e.dfsan - ; TRACK_CF: %[[#R:]] = load i8, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__dfsan_arg_tls to i64), i64 2) to ptr), align [[ALIGN]] - ; TRACK_CF: %[[#R+1]] = load i8, ptr @__dfsan_arg_tls, align [[ALIGN]] - ; TRACK_CF: %[[#RO:]] = or i8 %[[#R+1]], %[[#R]] - ; TRACK_CF: %a = select i1 %c, i8 %tf, i8 %tf - ; TRACK_CF: store i8 %[[#RO]], ptr @__dfsan_retval_tls, align [[ALIGN]] - ; TRACK_CF: ret i8 %a - - ; NO_TRACK_CF: @select8e.dfsan - ; NO_TRACK_CF: %[[#R:]] = load i8, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__dfsan_arg_tls to i64), i64 2) to ptr), align [[ALIGN]] - ; NO_TRACK_CF: %[[#R+1]] = load i8, ptr @__dfsan_arg_tls, align [[ALIGN]] - ; NO_TRACK_CF: %a = select i1 %c, i8 %tf, i8 %tf - ; NO_TRACK_CF: store i8 %[[#R]], ptr @__dfsan_retval_tls, align [[ALIGN]] - ; NO_TRACK_CF: ret i8 %a - +; TRACK_CF-LABEL: define i8 @select8e( +; TRACK_CF-SAME: i1 [[C:%.*]], i8 [[TF:%.*]]) { +; TRACK_CF-NEXT: [[TMP1:%.*]] = load i8, ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 2), align 2 +; TRACK_CF-NEXT: [[TMP2:%.*]] = load i8, ptr @__dfsan_arg_tls, align 2 +; TRACK_CF-NEXT: [[TMP3:%.*]] = or i8 [[TMP2]], [[TMP1]] +; TRACK_CF-NEXT: [[A:%.*]] = select i1 [[C]], i8 [[TF]], i8 [[TF]] +; TRACK_CF-NEXT: store i8 [[TMP3]], ptr @__dfsan_retval_tls, align 2 +; TRACK_CF-NEXT: ret i8 [[A]] +; +; NO_TRACK_CF-LABEL: define i8 @select8e( +; NO_TRACK_CF-SAME: i1 [[C:%.*]], i8 [[TF:%.*]]) { +; NO_TRACK_CF-NEXT: [[TMP1:%.*]] = load i8, ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 2), align 2 +; NO_TRACK_CF-NEXT: [[TMP2:%.*]] = load i8, ptr @__dfsan_arg_tls, align 2 +; NO_TRACK_CF-NEXT: [[A:%.*]] = select i1 [[C]], i8 [[TF]], i8 [[TF]] +; NO_TRACK_CF-NEXT: store i8 [[TMP1]], ptr @__dfsan_retval_tls, align 2 +; NO_TRACK_CF-NEXT: ret i8 [[A]] +; %a = select i1 %c, i8 %tf, i8 %tf ret i8 %a } define <4 x i8> @select8v(<4 x i1> %c, <4 x i8> %t, <4 x i8> %f) { - ; TRACK_CF: @select8v.dfsan - ; TRACK_CF: %[[#R:]] = load i8, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__dfsan_arg_tls to i64), i64 4) to ptr), align [[ALIGN:2]] - ; TRACK_CF: %[[#R+1]] = load i8, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__dfsan_arg_tls to i64), i64 2) to ptr), align [[ALIGN]] - ; TRACK_CF: %[[#R+2]] = load i8, ptr @__dfsan_arg_tls, align [[ALIGN]] - ; TRACK_CF: %[[#R+3]] = or i8 %[[#R+1]], %[[#R]] - ; TRACK_CF: %[[#RO:]] = or i8 %[[#R+2]], %[[#R+3]] - ; TRACK_CF: %a = select <4 x i1> %c, <4 x i8> %t, <4 x i8> %f - ; TRACK_CF: store i8 %[[#RO]], ptr @__dfsan_retval_tls, align [[ALIGN]] - ; TRACK_CF: ret <4 x i8> %a - - ; NO_TRACK_CF: @select8v.dfsan - ; NO_TRACK_CF: %[[#R:]] = load i8, ptr inttoptr (i64 add (i64 ptrtoint (ptr 
@__dfsan_arg_tls to i64), i64 4) to ptr), align [[ALIGN:2]] - ; NO_TRACK_CF: %[[#R+1]] = load i8, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__dfsan_arg_tls to i64), i64 2) to ptr), align [[ALIGN]] - ; NO_TRACK_CF: %[[#R+2]] = load i8, ptr @__dfsan_arg_tls, align [[ALIGN]] - ; NO_TRACK_CF: %[[#RO:]] = or i8 %[[#R+1]], %[[#R]] - ; NO_TRACK_CF: %a = select <4 x i1> %c, <4 x i8> %t, <4 x i8> %f - ; NO_TRACK_CF: store i8 %[[#RO]], ptr @__dfsan_retval_tls, align [[ALIGN]] - ; NO_TRACK_CF: ret <4 x i8> %a - +; TRACK_CF-LABEL: define <4 x i8> @select8v( +; TRACK_CF-SAME: <4 x i1> [[C:%.*]], <4 x i8> [[T:%.*]], <4 x i8> [[F:%.*]]) { +; TRACK_CF-NEXT: [[TMP1:%.*]] = load i8, ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 4), align 2 +; TRACK_CF-NEXT: [[TMP2:%.*]] = load i8, ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 2), align 2 +; TRACK_CF-NEXT: [[TMP3:%.*]] = load i8, ptr @__dfsan_arg_tls, align 2 +; TRACK_CF-NEXT: [[TMP4:%.*]] = or i8 [[TMP2]], [[TMP1]] +; TRACK_CF-NEXT: [[TMP5:%.*]] = or i8 [[TMP3]], [[TMP4]] +; TRACK_CF-NEXT: [[A:%.*]] = select <4 x i1> [[C]], <4 x i8> [[T]], <4 x i8> [[F]] +; TRACK_CF-NEXT: store i8 [[TMP5]], ptr @__dfsan_retval_tls, align 2 +; TRACK_CF-NEXT: ret <4 x i8> [[A]] +; +; NO_TRACK_CF-LABEL: define <4 x i8> @select8v( +; NO_TRACK_CF-SAME: <4 x i1> [[C:%.*]], <4 x i8> [[T:%.*]], <4 x i8> [[F:%.*]]) { +; NO_TRACK_CF-NEXT: [[TMP1:%.*]] = load i8, ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 4), align 2 +; NO_TRACK_CF-NEXT: [[TMP2:%.*]] = load i8, ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 2), align 2 +; NO_TRACK_CF-NEXT: [[TMP3:%.*]] = load i8, ptr @__dfsan_arg_tls, align 2 +; NO_TRACK_CF-NEXT: [[TMP4:%.*]] = or i8 [[TMP2]], [[TMP1]] +; NO_TRACK_CF-NEXT: [[A:%.*]] = select <4 x i1> [[C]], <4 x i8> [[T]], <4 x i8> [[F]] +; NO_TRACK_CF-NEXT: store i8 [[TMP4]], ptr @__dfsan_retval_tls, align 2 +; NO_TRACK_CF-NEXT: ret <4 x i8> [[A]] +; %a = select <4 x i1> %c, <4 x i8> %t, <4 x i8> %f ret <4 x i8> %a } +;; NOTE: These prefixes are unused and the list is autogenerated. 
Do not add tests below this line: +; CHECK: {{.*}} diff --git a/llvm/test/Instrumentation/DataFlowSanitizer/store.ll b/llvm/test/Instrumentation/DataFlowSanitizer/store.ll index bc2a70e..1c8ab65 100644 --- a/llvm/test/Instrumentation/DataFlowSanitizer/store.ll +++ b/llvm/test/Instrumentation/DataFlowSanitizer/store.ll @@ -16,7 +16,7 @@ define void @store0({} %v, ptr %p) { define void @store8(i8 %v, ptr %p) { ; CHECK-LABEL: @store8.dfsan ; NO_COMBINE_PTR_LABEL: load i8, ptr @__dfsan_arg_tls - ; COMBINE_PTR_LABEL: load i8, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__dfsan_arg_tls to i64), i64 2) to ptr), align 2 + ; COMBINE_PTR_LABEL: load i8, ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 2), align 2 ; COMBINE_PTR_LABEL: load i8, ptr @__dfsan_arg_tls ; COMBINE_PTR_LABEL: or i8 @@ -35,7 +35,7 @@ define void @store8(i8 %v, ptr %p) { define void @store16(i16 %v, ptr %p) { ; CHECK-LABEL: @store16.dfsan ; NO_COMBINE_PTR_LABEL: load i8, ptr @__dfsan_arg_tls - ; COMBINE_PTR_LABEL: load i8, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__dfsan_arg_tls to i64), i64 2) to ptr), align 2 + ; COMBINE_PTR_LABEL: load i8, ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 2), align 2 ; COMBINE_PTR_LABEL: load i8, ptr @__dfsan_arg_tls ; COMBINE_PTR_LABEL: or i8 ; CHECK: ptrtoint ptr {{.*}} i64 @@ -55,7 +55,7 @@ define void @store16(i16 %v, ptr %p) { define void @store32(i32 %v, ptr %p) { ; CHECK-LABEL: @store32.dfsan ; NO_COMBINE_PTR_LABEL: load i8, ptr @__dfsan_arg_tls - ; COMBINE_PTR_LABEL: load i8, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__dfsan_arg_tls to i64), i64 2) to ptr), align 2 + ; COMBINE_PTR_LABEL: load i8, ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 2), align 2 ; COMBINE_PTR_LABEL: load i8, ptr @__dfsan_arg_tls ; COMBINE_PTR_LABEL: or i8 ; CHECK: ptrtoint ptr {{.*}} i64 @@ -79,7 +79,7 @@ define void @store32(i32 %v, ptr %p) { define void @store64(i64 %v, ptr %p) { ; CHECK-LABEL: @store64.dfsan ; NO_COMBINE_PTR_LABEL: load i8, ptr @__dfsan_arg_tls - ; COMBINE_PTR_LABEL: load i8, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__dfsan_arg_tls to i64), i64 2) to ptr), align 2 + ; COMBINE_PTR_LABEL: load i8, ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 2), align 2 ; COMBINE_PTR_LABEL: load i8, ptr @__dfsan_arg_tls ; COMBINE_PTR_LABEL: or i8 ; CHECK: ptrtoint ptr {{.*}} i64 diff --git a/llvm/test/Instrumentation/DataFlowSanitizer/struct.ll b/llvm/test/Instrumentation/DataFlowSanitizer/struct.ll index 8069d28..9b4a350 100644 --- a/llvm/test/Instrumentation/DataFlowSanitizer/struct.ll +++ b/llvm/test/Instrumentation/DataFlowSanitizer/struct.ll @@ -56,15 +56,15 @@ define {i1, i32} @load_global_struct() { define {i1, i32} @select_struct(i1 %c, {i1, i32} %a, {i1, i32} %b) { ; NO_SELECT_CONTROL: @select_struct.dfsan - ; NO_SELECT_CONTROL: [[B:%.*]] = load { i8, i8 }, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__dfsan_arg_tls to i64), i64 4) to ptr), align [[ALIGN:2]] - ; NO_SELECT_CONTROL: [[A:%.*]] = load { i8, i8 }, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__dfsan_arg_tls to i64), i64 2) to ptr), align [[ALIGN]] + ; NO_SELECT_CONTROL: [[B:%.*]] = load { i8, i8 }, ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 4), align [[ALIGN:2]] + ; NO_SELECT_CONTROL: [[A:%.*]] = load { i8, i8 }, ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 2), align [[ALIGN]] ; NO_SELECT_CONTROL: [[C:%.*]] = load i8, ptr @__dfsan_arg_tls, align [[ALIGN]] ; NO_SELECT_CONTROL: [[S:%.*]] = select i1 %c, { i8, i8 } [[A]], { i8, i8 } [[B]] ; NO_SELECT_CONTROL: store { i8, i8 } [[S]], ptr @__dfsan_retval_tls, align [[ALIGN]] ; 
FAST: @select_struct.dfsan - ; FAST: %[[#R:]] = load { i8, i8 }, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__dfsan_arg_tls to i64), i64 4) to ptr), align [[ALIGN:2]] - ; FAST: %[[#R+1]] = load { i8, i8 }, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__dfsan_arg_tls to i64), i64 2) to ptr), align [[ALIGN]] + ; FAST: %[[#R:]] = load { i8, i8 }, ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 4), align [[ALIGN:2]] + ; FAST: %[[#R+1]] = load { i8, i8 }, ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 2), align [[ALIGN]] ; FAST: %[[#R+2]] = load i8, ptr @__dfsan_arg_tls, align [[ALIGN]] ; FAST: %[[#R+3]] = select i1 %c, { i8, i8 } %[[#R+1]], { i8, i8 } %[[#R]] ; FAST: %[[#R+4]] = extractvalue { i8, i8 } %[[#R+3]], 0 @@ -81,7 +81,7 @@ define {i1, i32} @select_struct(i1 %c, {i1, i32} %a, {i1, i32} %b) { define { i32, i32 } @asm_struct(i32 %0, i32 %1) { ; FAST: @asm_struct.dfsan - ; FAST: [[E1:%.*]] = load i8, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__dfsan_arg_tls to i64), i64 2) to ptr), align [[ALIGN:2]] + ; FAST: [[E1:%.*]] = load i8, ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 2), align [[ALIGN:2]] ; FAST: [[E0:%.*]] = load i8, ptr @__dfsan_arg_tls, align [[ALIGN]] ; FAST: [[E01:%.*]] = or i8 [[E0]], [[E1]] ; FAST: [[S0:%.*]] = insertvalue { i8, i8 } undef, i8 [[E01]], 0 @@ -111,7 +111,7 @@ define i1 @extract_struct({i1, i5} %s) { define {i1, i5} @insert_struct({i1, i5} %s, i5 %e1) { ; FAST: @insert_struct.dfsan - ; FAST: [[EM:%.*]] = load i8, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__dfsan_arg_tls to i64), i64 2) to ptr), align [[ALIGN:2]] + ; FAST: [[EM:%.*]] = load i8, ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 2), align [[ALIGN:2]] ; FAST: [[SM:%.*]] = load { i8, i8 }, ptr @__dfsan_arg_tls, align [[ALIGN]] ; FAST: [[SM1:%.*]] = insertvalue { i8, i8 } [[SM]], i8 [[EM]], 1 ; FAST: store { i8, i8 } [[SM1]], ptr @__dfsan_retval_tls, align [[ALIGN]] @@ -138,7 +138,7 @@ define {i1, i1} @load_struct(ptr %p) { define void @store_struct(ptr %p, {i1, i1} %s) { ; FAST: @store_struct.dfsan - ; FAST: [[S:%.*]] = load { i8, i8 }, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__dfsan_arg_tls to i64), i64 2) to ptr), align [[ALIGN:2]] + ; FAST: [[S:%.*]] = load { i8, i8 }, ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 2), align [[ALIGN:2]] ; FAST: [[E0:%.*]] = extractvalue { i8, i8 } [[S]], 0 ; FAST: [[E1:%.*]] = extractvalue { i8, i8 } [[S]], 1 ; FAST: [[E:%.*]] = or i8 [[E0]], [[E1]] @@ -153,7 +153,7 @@ define void @store_struct(ptr %p, {i1, i1} %s) { ; COMBINE_STORE_PTR: @store_struct.dfsan ; COMBINE_STORE_PTR: [[PL:%.*]] = load i8, ptr @__dfsan_arg_tls, align [[ALIGN:2]] - ; COMBINE_STORE_PTR: [[SL:%.*]] = load { i8, i8 }, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__dfsan_arg_tls to i64), i64 2) to ptr), align [[ALIGN]] + ; COMBINE_STORE_PTR: [[SL:%.*]] = load { i8, i8 }, ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 2), align [[ALIGN]] ; COMBINE_STORE_PTR: [[SL0:%.*]] = extractvalue { i8, i8 } [[SL]], 0 ; COMBINE_STORE_PTR: [[SL1:%.*]] = extractvalue { i8, i8 } [[SL]], 1 ; COMBINE_STORE_PTR: [[SL01:%.*]] = or i8 [[SL0]], [[SL1]] @@ -215,7 +215,7 @@ define i1 @extract_struct_of_aggregate31(%StructOfAggr %s) { define %StructOfAggr @insert_struct_of_aggregate11(%StructOfAggr %s, i2 %e11) { ; FAST: @insert_struct_of_aggregate11.dfsan - ; FAST: [[E11:%.*]] = load i8, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__dfsan_arg_tls to i64), i64 8) to ptr), align [[ALIGN:2]] + ; FAST: [[E11:%.*]] = load i8, ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 8), align [[ALIGN:2]] ; FAST: [[S:%.*]] = 
load { i8, [4 x i8], i8, { i8, i8 } }, ptr @__dfsan_arg_tls, align [[ALIGN]] ; FAST: [[S1:%.*]] = insertvalue { i8, [4 x i8], i8, { i8, i8 } } [[S]], i8 [[E11]], 1, 1 ; FAST: store { i8, [4 x i8], i8, { i8, i8 } } [[S1]], ptr @__dfsan_retval_tls, align [[ALIGN]] @@ -239,12 +239,12 @@ declare %StructOfAggr @fun_with_many_aggr_args(<2 x i7> %v, [2 x i5] %a, {i3, i3 define %StructOfAggr @call_many_aggr_args(<2 x i7> %v, [2 x i5] %a, {i3, i3} %s) { ; FAST: @call_many_aggr_args.dfsan - ; FAST: [[S:%.*]] = load { i8, i8 }, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__dfsan_arg_tls to i64), i64 4) to ptr), align [[ALIGN:2]] - ; FAST: [[A:%.*]] = load [2 x i8], ptr inttoptr (i64 add (i64 ptrtoint (ptr @__dfsan_arg_tls to i64), i64 2) to ptr), align [[ALIGN]] + ; FAST: [[S:%.*]] = load { i8, i8 }, ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 4), align [[ALIGN:2]] + ; FAST: [[A:%.*]] = load [2 x i8], ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 2), align [[ALIGN]] ; FAST: [[V:%.*]] = load i8, ptr @__dfsan_arg_tls, align [[ALIGN]] ; FAST: store i8 [[V]], ptr @__dfsan_arg_tls, align [[ALIGN]] - ; FAST: store [2 x i8] [[A]], ptr inttoptr (i64 add (i64 ptrtoint (ptr @__dfsan_arg_tls to i64), i64 2) to ptr), align [[ALIGN]] - ; FAST: store { i8, i8 } [[S]], ptr inttoptr (i64 add (i64 ptrtoint (ptr @__dfsan_arg_tls to i64), i64 4) to ptr), align [[ALIGN]] + ; FAST: store [2 x i8] [[A]], ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 2), align [[ALIGN]] + ; FAST: store { i8, i8 } [[S]], ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 4), align [[ALIGN]] ; FAST: %_dfsret = load { i8, [4 x i8], i8, { i8, i8 } }, ptr @__dfsan_retval_tls, align [[ALIGN]] ; FAST: store { i8, [4 x i8], i8, { i8, i8 } } %_dfsret, ptr @__dfsan_retval_tls, align [[ALIGN]] diff --git a/llvm/test/Instrumentation/DataFlowSanitizer/vector.ll b/llvm/test/Instrumentation/DataFlowSanitizer/vector.ll index 64052d6..0580c18 100644 --- a/llvm/test/Instrumentation/DataFlowSanitizer/vector.ll +++ b/llvm/test/Instrumentation/DataFlowSanitizer/vector.ll @@ -1,19 +1,43 @@ -; RUN: opt < %s -passes=dfsan -S | FileCheck %s +; NOTE: Assertions have been autogenerated by utils/update_test_checks.py UTC_ARGS: --version 6 +; RUN: opt < %s -passes=dfsan -dfsan-add-global-name-suffix=0 -S | FileCheck %s target datalayout = "e-p:64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f32:32:32-f64:64:64-v64:64:64-v128:128:128-a0:0:64-s0:64:64-f80:128:128-n8:16:32:64-S128" target triple = "x86_64-unknown-linux-gnu" define <4 x i4> @pass_vector(<4 x i4> %v) { - ; CHECK-LABEL: @pass_vector.dfsan - ; CHECK-NEXT: %[[#REG:]] = load i8, ptr @__dfsan_arg_tls, align [[ALIGN:2]] - ; CHECK-NEXT: store i8 %[[#REG]], ptr @__dfsan_retval_tls, align [[ALIGN]] - ; CHECK-NEXT: ret <4 x i4> %v +; CHECK-LABEL: define <4 x i4> @pass_vector( +; CHECK-SAME: <4 x i4> [[V:%.*]]) { +; CHECK-NEXT: [[TMP1:%.*]] = load i8, ptr @__dfsan_arg_tls, align 2 +; CHECK-NEXT: store i8 [[TMP1]], ptr @__dfsan_retval_tls, align 2 +; CHECK-NEXT: ret <4 x i4> [[V]] +; ret <4 x i4> %v } define void @load_update_store_vector(ptr %p) { - ; CHECK-LABEL: @load_update_store_vector.dfsan - ; CHECK: {{.*}} = load i8, ptr @__dfsan_arg_tls, align 2 - +; CHECK-LABEL: define void @load_update_store_vector( +; CHECK-SAME: ptr [[P:%.*]]) { +; CHECK-NEXT: [[TMP1:%.*]] = load i8, ptr @__dfsan_arg_tls, align 2 +; CHECK-NEXT: [[TMP2:%.*]] = ptrtoint ptr [[P]] to i64 +; CHECK-NEXT: [[TMP3:%.*]] = xor i64 [[TMP2]], 87960930222080 +; CHECK-NEXT: [[TMP4:%.*]] = inttoptr i64 [[TMP3]] to ptr +; CHECK-NEXT: 
[[TMP5:%.*]] = getelementptr i8, ptr [[TMP4]], i64 1 +; CHECK-NEXT: [[TMP6:%.*]] = load i8, ptr [[TMP4]], align 1 +; CHECK-NEXT: [[TMP7:%.*]] = load i8, ptr [[TMP5]], align 1 +; CHECK-NEXT: [[TMP8:%.*]] = or i8 [[TMP6]], [[TMP7]] +; CHECK-NEXT: [[TMP9:%.*]] = or i8 [[TMP8]], [[TMP1]] +; CHECK-NEXT: [[V:%.*]] = load <4 x i4>, ptr [[P]], align 2 +; CHECK-NEXT: [[E2:%.*]] = extractelement <4 x i4> [[V]], i32 2 +; CHECK-NEXT: [[V1:%.*]] = insertelement <4 x i4> [[V]], i4 [[E2]], i32 0 +; CHECK-NEXT: [[TMP10:%.*]] = ptrtoint ptr [[P]] to i64 +; CHECK-NEXT: [[TMP11:%.*]] = xor i64 [[TMP10]], 87960930222080 +; CHECK-NEXT: [[TMP12:%.*]] = inttoptr i64 [[TMP11]] to ptr +; CHECK-NEXT: [[TMP13:%.*]] = getelementptr i8, ptr [[TMP12]], i32 0 +; CHECK-NEXT: store i8 [[TMP9]], ptr [[TMP13]], align 1 +; CHECK-NEXT: [[TMP14:%.*]] = getelementptr i8, ptr [[TMP12]], i32 1 +; CHECK-NEXT: store i8 [[TMP9]], ptr [[TMP14]], align 1 +; CHECK-NEXT: store <4 x i4> [[V1]], ptr [[P]], align 2 +; CHECK-NEXT: ret void +; %v = load <4 x i4>, ptr %p %e2 = extractelement <4 x i4> %v, i32 2 %v1 = insertelement <4 x i4> %v, i4 %e2, i32 0 @@ -22,36 +46,37 @@ define void @load_update_store_vector(ptr %p) { } define <4 x i1> @icmp_vector(<4 x i8> %a, <4 x i8> %b) { - ; CHECK-LABEL: @icmp_vector.dfsan - ; CHECK-NEXT: %[[B:.*]] = load i8, ptr inttoptr (i64 add (i64 ptrtoint (ptr @__dfsan_arg_tls to i64), i64 2) to ptr), align [[ALIGN:2]] - ; CHECK-NEXT: %[[A:.*]] = load i8, ptr @__dfsan_arg_tls, align [[ALIGN]] - ; CHECK: %[[L:.*]] = or i8 %[[A]], %[[B]] - - ; CHECK: %r = icmp eq <4 x i8> %a, %b - ; CHECK: store i8 %[[L]], ptr @__dfsan_retval_tls, align [[ALIGN]] - ; CHECK: ret <4 x i1> %r - +; CHECK-LABEL: define <4 x i1> @icmp_vector( +; CHECK-SAME: <4 x i8> [[A:%.*]], <4 x i8> [[B:%.*]]) { +; CHECK-NEXT: [[TMP1:%.*]] = load i8, ptr getelementptr (i8, ptr @__dfsan_arg_tls, i64 2), align 2 +; CHECK-NEXT: [[TMP2:%.*]] = load i8, ptr @__dfsan_arg_tls, align 2 +; CHECK-NEXT: [[TMP3:%.*]] = or i8 [[TMP2]], [[TMP1]] +; CHECK-NEXT: [[R:%.*]] = icmp eq <4 x i8> [[A]], [[B]] +; CHECK-NEXT: store i8 [[TMP3]], ptr @__dfsan_retval_tls, align 2 +; CHECK-NEXT: ret <4 x i1> [[R]] +; %r = icmp eq <4 x i8> %a, %b ret <4 x i1> %r } define <2 x i32> @const_vector() { - ; CHECK-LABEL: @const_vector.dfsan - ; CHECK-NEXT: store i8 0, ptr @__dfsan_retval_tls, align 2 - ; CHECK-NEXT: ret <2 x i32> <i32 42, i32 11> - +; CHECK-LABEL: define <2 x i32> @const_vector() { +; CHECK-NEXT: store i8 0, ptr @__dfsan_retval_tls, align 2 +; CHECK-NEXT: ret <2 x i32> <i32 42, i32 11> +; ret <2 x i32> < i32 42, i32 11 > } define <4 x i4> @call_vector(<4 x i4> %v) { - ; CHECK-LABEL: @call_vector.dfsan - ; CHECK-NEXT: %[[V:.*]] = load i8, ptr @__dfsan_arg_tls, align [[ALIGN:2]] - ; CHECK-NEXT: store i8 %[[V]], ptr @__dfsan_arg_tls, align [[ALIGN]] - ; CHECK-NEXT: %r = call <4 x i4> @pass_vector.dfsan(<4 x i4> %v) - ; CHECK-NEXT: %_dfsret = load i8, ptr @__dfsan_retval_tls, align [[ALIGN]] - ; CHECK-NEXT: store i8 %_dfsret, ptr @__dfsan_retval_tls, align [[ALIGN]] - ; CHECK-NEXT: ret <4 x i4> %r - +; CHECK-LABEL: define <4 x i4> @call_vector( +; CHECK-SAME: <4 x i4> [[V:%.*]]) { +; CHECK-NEXT: [[TMP1:%.*]] = load i8, ptr @__dfsan_arg_tls, align 2 +; CHECK-NEXT: store i8 [[TMP1]], ptr @__dfsan_arg_tls, align 2 +; CHECK-NEXT: [[R:%.*]] = call <4 x i4> @pass_vector(<4 x i4> [[V]]) +; CHECK-NEXT: [[_DFSRET:%.*]] = load i8, ptr @__dfsan_retval_tls, align 2 +; CHECK-NEXT: store i8 [[_DFSRET]], ptr @__dfsan_retval_tls, align 2 +; CHECK-NEXT: ret <4 x i4> [[R]] +; %r = call 
<4 x i4> @pass_vector(<4 x i4> %v) ret <4 x i4> %r } diff --git a/llvm/test/MC/AMDGPU/vop3-gfx9.s b/llvm/test/MC/AMDGPU/vop3-gfx9.s index f98f33a..50a7433 100644 --- a/llvm/test/MC/AMDGPU/vop3-gfx9.s +++ b/llvm/test/MC/AMDGPU/vop3-gfx9.s @@ -566,6 +566,141 @@ v_interp_p2_f16 v5, v2, attr0.x, v3 clamp // NOSICI: :[[@LINE-2]]:{{[0-9]+}}: error: instruction not supported on this GPU // VI: v_interp_p2_f16 v5, v2, attr0.x, v3 clamp ; encoding: [0x05,0x80,0x76,0xd2,0x00,0x04,0x0e,0x04] +v_interp_p2_f16 v5, v2, attr0.x, v3 op_sel:[0,0,0] +// GFX9: v_interp_p2_f16 v5, v2, attr0.x, v3 ; encoding: [0x05,0x00,0x77,0xd2,0x00,0x04,0x0e,0x04] +// NOSICI: :[[@LINE-2]]:{{[0-9]+}}: error: instruction not supported on this GPU +// NOVI: :[[@LINE-3]]:{{[0-9]+}}: error: not a valid operand. + +v_interp_p2_f16 v5, v2, attr0.x, v3 op_sel:[0,0,1] +// GFX9: v_interp_p2_f16 v5, v2, attr0.x, v3 op_sel:[0,0,1,0] ; encoding: [0x05,0x20,0x77,0xd2,0x00,0x04,0x0e,0x04] +// NOSICI: :[[@LINE-2]]:{{[0-9]+}}: error: instruction not supported on this GPU +// NOVI: :[[@LINE-3]]:{{[0-9]+}}: error: not a valid operand. + +v_interp_p2_f16 v5, v2, attr0.x, v3 op_sel:[0,1,0] +// GFX9: v_interp_p2_f16 v5, v2, attr0.x, v3 ; encoding: [0x05,0x00,0x77,0xd2,0x00,0x04,0x0e,0x04] +// NOSICI: :[[@LINE-2]]:{{[0-9]+}}: error: instruction not supported on this GPU +// NOVI: :[[@LINE-3]]:{{[0-9]+}}: error: not a valid operand. + +v_interp_p2_f16 v5, v2, attr0.x, v3 op_sel:[0,1,1] +// GFX9: v_interp_p2_f16 v5, v2, attr0.x, v3 op_sel:[0,0,1,0] ; encoding: [0x05,0x20,0x77,0xd2,0x00,0x04,0x0e,0x04] +// NOSICI: :[[@LINE-2]]:{{[0-9]+}}: error: instruction not supported on this GPU +// NOVI: :[[@LINE-3]]:{{[0-9]+}}: error: not a valid operand. + +v_interp_p2_f16 v5, v2, attr0.x, v3 op_sel:[1,0,0] +// GFX9: v_interp_p2_f16 v5, v2, attr0.x, v3 op_sel:[1,0,0,0] ; encoding: [0x05,0x08,0x77,0xd2,0x00,0x04,0x0e,0x04] +// NOSICI: :[[@LINE-2]]:{{[0-9]+}}: error: instruction not supported on this GPU +// NOVI: :[[@LINE-3]]:{{[0-9]+}}: error: not a valid operand. + +v_interp_p2_f16 v5, v2, attr0.x, v3 op_sel:[1,0,1] +// GFX9: v_interp_p2_f16 v5, v2, attr0.x, v3 op_sel:[1,0,1,0] ; encoding: [0x05,0x28,0x77,0xd2,0x00,0x04,0x0e,0x04] +// NOSICI: :[[@LINE-2]]:{{[0-9]+}}: error: instruction not supported on this GPU +// NOVI: :[[@LINE-3]]:{{[0-9]+}}: error: not a valid operand. + +v_interp_p2_f16 v5, v2, attr0.x, v3 op_sel:[0,1,1] +// GFX9: v_interp_p2_f16 v5, v2, attr0.x, v3 op_sel:[0,0,1,0] ; encoding: [0x05,0x20,0x77,0xd2,0x00,0x04,0x0e,0x04] +// NOSICI: :[[@LINE-2]]:{{[0-9]+}}: error: instruction not supported on this GPU +// NOVI: :[[@LINE-3]]:{{[0-9]+}}: error: not a valid operand. + +v_interp_p2_f16 v5, v2, attr0.x, v3 op_sel:[1,0,0] +// GFX9: v_interp_p2_f16 v5, v2, attr0.x, v3 op_sel:[1,0,0,0] ; encoding: [0x05,0x08,0x77,0xd2,0x00,0x04,0x0e,0x04] +// NOSICI: :[[@LINE-2]]:{{[0-9]+}}: error: instruction not supported on this GPU +// NOVI: :[[@LINE-3]]:{{[0-9]+}}: error: not a valid operand. + +v_interp_p2_f16 v5, v2, attr0.x, v3 op_sel:[1,0,1] +// GFX9: v_interp_p2_f16 v5, v2, attr0.x, v3 op_sel:[1,0,1,0] ; encoding: [0x05,0x28,0x77,0xd2,0x00,0x04,0x0e,0x04] +// NOSICI: :[[@LINE-2]]:{{[0-9]+}}: error: instruction not supported on this GPU +// NOVI: :[[@LINE-3]]:{{[0-9]+}}: error: not a valid operand. 
+ +v_interp_p2_f16 v5, v2, attr0.x, v3 op_sel:[1,1,0] +// GFX9: v_interp_p2_f16 v5, v2, attr0.x, v3 op_sel:[1,0,0,0] ; encoding: [0x05,0x08,0x77,0xd2,0x00,0x04,0x0e,0x04] +// NOSICI: :[[@LINE-2]]:{{[0-9]+}}: error: instruction not supported on this GPU +// NOVI: :[[@LINE-3]]:{{[0-9]+}}: error: not a valid operand. + +v_interp_p2_f16 v5, v2, attr0.x, v3 op_sel:[1,1,1] +// GFX9: v_interp_p2_f16 v5, v2, attr0.x, v3 op_sel:[1,0,1,0] ; encoding: [0x05,0x28,0x77,0xd2,0x00,0x04,0x0e,0x04] +// NOSICI: :[[@LINE-2]]:{{[0-9]+}}: error: instruction not supported on this GPU +// NOVI: :[[@LINE-3]]:{{[0-9]+}}: error: not a valid operand. + +v_interp_p2_f16 v5, v2, attr0.x, v3 op_sel:[0,0,0,0] +// GFX9: v_interp_p2_f16 v5, v2, attr0.x, v3 ; encoding: [0x05,0x00,0x77,0xd2,0x00,0x04,0x0e,0x04] +// NOSICI: :[[@LINE-2]]:{{[0-9]+}}: error: instruction not supported on this GPU +// NOVI: :[[@LINE-3]]:{{[0-9]+}}: error: not a valid operand. + +v_interp_p2_f16 v5, v2, attr0.x, v3 op_sel:[0,0,0,1] +// GFX9: v_interp_p2_f16 v5, v2, attr0.x, v3 op_sel:[0,0,0,1] ; encoding: [0x05,0x40,0x77,0xd2,0x00,0x04,0x0e,0x04] +// NOSICI: :[[@LINE-2]]:{{[0-9]+}}: error: instruction not supported on this GPU +// NOVI: :[[@LINE-3]]:{{[0-9]+}}: error: not a valid operand. + +v_interp_p2_f16 v5, v2, attr0.x, v3 op_sel:[0,0,1,0] +// GFX9: v_interp_p2_f16 v5, v2, attr0.x, v3 op_sel:[0,0,1,0] ; encoding: [0x05,0x20,0x77,0xd2,0x00,0x04,0x0e,0x04] +// NOSICI: :[[@LINE-2]]:{{[0-9]+}}: error: instruction not supported on this GPU +// NOVI: :[[@LINE-3]]:{{[0-9]+}}: error: not a valid operand. + +v_interp_p2_f16 v5, v2, attr0.x, v3 op_sel:[0,0,1,1] +// GFX9: v_interp_p2_f16 v5, v2, attr0.x, v3 op_sel:[0,0,1,1] ; encoding: [0x05,0x60,0x77,0xd2,0x00,0x04,0x0e,0x04] +// NOSICI: :[[@LINE-2]]:{{[0-9]+}}: error: instruction not supported on this GPU +// NOVI: :[[@LINE-3]]:{{[0-9]+}}: error: not a valid operand. + +v_interp_p2_f16 v5, v2, attr0.x, v3 op_sel:[0,1,0,0] +// GFX9: v_interp_p2_f16 v5, v2, attr0.x, v3 ; encoding: [0x05,0x00,0x77,0xd2,0x00,0x04,0x0e,0x04] +// NOSICI: :[[@LINE-2]]:{{[0-9]+}}: error: instruction not supported on this GPU +// NOVI: :[[@LINE-3]]:{{[0-9]+}}: error: not a valid operand. + +v_interp_p2_f16 v5, v2, attr0.x, v3 op_sel:[0,1,0,1] +// GFX9: v_interp_p2_f16 v5, v2, attr0.x, v3 op_sel:[0,0,0,1] ; encoding: [0x05,0x40,0x77,0xd2,0x00,0x04,0x0e,0x04] +// NOSICI: :[[@LINE-2]]:{{[0-9]+}}: error: instruction not supported on this GPU +// NOVI: :[[@LINE-3]]:{{[0-9]+}}: error: not a valid operand. + +v_interp_p2_f16 v5, v2, attr0.x, v3 op_sel:[0,1,1,0] +// GFX9: v_interp_p2_f16 v5, v2, attr0.x, v3 op_sel:[0,0,1,0] ; encoding: [0x05,0x20,0x77,0xd2,0x00,0x04,0x0e,0x04] +// NOSICI: :[[@LINE-2]]:{{[0-9]+}}: error: instruction not supported on this GPU +// NOVI: :[[@LINE-3]]:{{[0-9]+}}: error: not a valid operand. + +v_interp_p2_f16 v5, v2, attr0.x, v3 op_sel:[0,1,1,1] +// GFX9: v_interp_p2_f16 v5, v2, attr0.x, v3 op_sel:[0,0,1,1] ; encoding: [0x05,0x60,0x77,0xd2,0x00,0x04,0x0e,0x04] +// NOSICI: :[[@LINE-2]]:{{[0-9]+}}: error: instruction not supported on this GPU +// NOVI: :[[@LINE-3]]:{{[0-9]+}}: error: not a valid operand. + +v_interp_p2_f16 v5, v2, attr0.x, v3 op_sel:[1,0,0,0] +// GFX9: v_interp_p2_f16 v5, v2, attr0.x, v3 op_sel:[1,0,0,0] ; encoding: [0x05,0x08,0x77,0xd2,0x00,0x04,0x0e,0x04] +// NOSICI: :[[@LINE-2]]:{{[0-9]+}}: error: instruction not supported on this GPU +// NOVI: :[[@LINE-3]]:{{[0-9]+}}: error: not a valid operand. 
+ +v_interp_p2_f16 v5, v2, attr0.x, v3 op_sel:[1,0,0,1] +// GFX9: v_interp_p2_f16 v5, v2, attr0.x, v3 op_sel:[1,0,0,1] ; encoding: [0x05,0x48,0x77,0xd2,0x00,0x04,0x0e,0x04] +// NOSICI: :[[@LINE-2]]:{{[0-9]+}}: error: instruction not supported on this GPU +// NOVI: :[[@LINE-3]]:{{[0-9]+}}: error: not a valid operand. + +v_interp_p2_f16 v5, v2, attr0.x, v3 op_sel:[1,0,1,0] +// GFX9: v_interp_p2_f16 v5, v2, attr0.x, v3 op_sel:[1,0,1,0] ; encoding: [0x05,0x28,0x77,0xd2,0x00,0x04,0x0e,0x04] +// NOSICI: :[[@LINE-2]]:{{[0-9]+}}: error: instruction not supported on this GPU +// NOVI: :[[@LINE-3]]:{{[0-9]+}}: error: not a valid operand. + +v_interp_p2_f16 v5, v2, attr0.x, v3 op_sel:[1,0,1,1] +// GFX9: v_interp_p2_f16 v5, v2, attr0.x, v3 op_sel:[1,0,1,1] ; encoding: [0x05,0x68,0x77,0xd2,0x00,0x04,0x0e,0x04] +// NOSICI: :[[@LINE-2]]:{{[0-9]+}}: error: instruction not supported on this GPU +// NOVI: :[[@LINE-3]]:{{[0-9]+}}: error: not a valid operand. + +v_interp_p2_f16 v5, v2, attr0.x, v3 op_sel:[1,1,0,0] +// GFX9: v_interp_p2_f16 v5, v2, attr0.x, v3 op_sel:[1,0,0,0] ; encoding: [0x05,0x08,0x77,0xd2,0x00,0x04,0x0e,0x04] +// NOSICI: :[[@LINE-2]]:{{[0-9]+}}: error: instruction not supported on this GPU +// NOVI: :[[@LINE-3]]:{{[0-9]+}}: error: not a valid operand. + +v_interp_p2_f16 v5, v2, attr0.x, v3 op_sel:[1,1,0,1] +// GFX9: v_interp_p2_f16 v5, v2, attr0.x, v3 op_sel:[1,0,0,1] ; encoding: [0x05,0x48,0x77,0xd2,0x00,0x04,0x0e,0x04] +// NOSICI: :[[@LINE-2]]:{{[0-9]+}}: error: instruction not supported on this GPU +// NOVI: :[[@LINE-3]]:{{[0-9]+}}: error: not a valid operand. + +v_interp_p2_f16 v5, v2, attr0.x, v3 op_sel:[1,1,1,0] +// GFX9: v_interp_p2_f16 v5, v2, attr0.x, v3 op_sel:[1,0,1,0] ; encoding: [0x05,0x28,0x77,0xd2,0x00,0x04,0x0e,0x04] +// NOSICI: :[[@LINE-2]]:{{[0-9]+}}: error: instruction not supported on this GPU +// NOVI: :[[@LINE-3]]:{{[0-9]+}}: error: not a valid operand. + +v_interp_p2_f16 v5, v2, attr0.x, v3 op_sel:[1,1,1,1] +// GFX9: v_interp_p2_f16 v5, v2, attr0.x, v3 op_sel:[1,0,1,1] ; encoding: [0x05,0x68,0x77,0xd2,0x00,0x04,0x0e,0x04] +// NOSICI: :[[@LINE-2]]:{{[0-9]+}}: error: instruction not supported on this GPU +// NOVI: :[[@LINE-3]]:{{[0-9]+}}: error: not a valid operand. 
+ v_interp_p2_legacy_f16 v5, v2, attr31.x, v3 // GFX9: v_interp_p2_legacy_f16 v5, v2, attr31.x, v3 ; encoding: [0x05,0x00,0x76,0xd2,0x1f,0x04,0x0e,0x04] // NOGCN: :[[@LINE-2]]:{{[0-9]+}}: error: instruction not supported on this GPU diff --git a/llvm/test/MC/Disassembler/AMDGPU/gfx9_vop3.txt b/llvm/test/MC/Disassembler/AMDGPU/gfx9_vop3.txt index 802d6368..60f058d 100644 --- a/llvm/test/MC/Disassembler/AMDGPU/gfx9_vop3.txt +++ b/llvm/test/MC/Disassembler/AMDGPU/gfx9_vop3.txt @@ -19311,6 +19311,27 @@ # CHECK: v_interp_p2_f16 v5, v2, attr0.x, v3 clamp ; encoding: [0x05,0x80,0x77,0xd2,0x00,0x04,0x0e,0x04] 0x05,0x80,0x77,0xd2,0x00,0x04,0x0e,0x04 +# CHECK: v_interp_p2_f16 v5, v2, attr0.x, v3 op_sel:[0,0,0,1] ; encoding: [0x05,0x40,0x77,0xd2,0x00,0x04,0x0e,0x04] +0x05,0x40,0x77,0xd2,0x00,0x04,0x0e,0x04 + +# CHECK: v_interp_p2_f16 v5, v2, attr0.x, v3 op_sel:[0,0,1,0] ; encoding: [0x05,0x20,0x77,0xd2,0x00,0x04,0x0e,0x04] +0x05,0x20,0x77,0xd2,0x00,0x04,0x0e,0x04 + +# CHECK: v_interp_p2_f16 v5, v2, attr0.x, v3 op_sel:[0,0,1,1] ; encoding: [0x05,0x60,0x77,0xd2,0x00,0x04,0x0e,0x04] +0x05,0x60,0x77,0xd2,0x00,0x04,0x0e,0x04 + +# CHECK: v_interp_p2_f16 v5, v2, attr0.x, v3 op_sel:[1,0,0,0] ; encoding: [0x05,0x08,0x77,0xd2,0x00,0x04,0x0e,0x04] +0x05,0x08,0x77,0xd2,0x00,0x04,0x0e,0x04 + +# CHECK: v_interp_p2_f16 v5, v2, attr0.x, v3 op_sel:[1,0,0,1] ; encoding: [0x05,0x48,0x77,0xd2,0x00,0x04,0x0e,0x04] +0x05,0x48,0x77,0xd2,0x00,0x04,0x0e,0x04 + +# CHECK: v_interp_p2_f16 v5, v2, attr0.x, v3 op_sel:[1,0,1,0] ; encoding: [0x05,0x28,0x77,0xd2,0x00,0x04,0x0e,0x04] +0x05,0x28,0x77,0xd2,0x00,0x04,0x0e,0x04 + +# CHECK: v_interp_p2_f16 v5, v2, attr0.x, v3 op_sel:[1,0,1,1] ; encoding: [0x05,0x68,0x77,0xd2,0x00,0x04,0x0e,0x04] +0x05,0x68,0x77,0xd2,0x00,0x04,0x0e,0x04 + # CHECK: v_add_f64 v[5:6], v[1:2], v[2:3] ; encoding: [0x05,0x00,0x80,0xd2,0x01,0x05,0x02,0x00] 0x05,0x00,0x80,0xd2,0x01,0x05,0x02,0x00 diff --git a/llvm/test/Other/new-pm-lto-defaults.ll b/llvm/test/Other/new-pm-lto-defaults.ll index 3aea0f2..f595dfe 100644 --- a/llvm/test/Other/new-pm-lto-defaults.ll +++ b/llvm/test/Other/new-pm-lto-defaults.ll @@ -67,6 +67,7 @@ ; CHECK-O1-NEXT: Running analysis: TargetLibraryAnalysis ; CHECK-O-NEXT: Running pass: GlobalSplitPass ; CHECK-O-NEXT: Running pass: WholeProgramDevirtPass +; CHECK-O-NEXT: Running pass: NoRecurseLTOInferencePass ; CHECK-O23SZ-NEXT: Running pass: CoroEarlyPass ; CHECK-O1-NEXT: Running pass: LowerTypeTestsPass ; CHECK-O23SZ-NEXT: Running pass: GlobalOptPass diff --git a/llvm/test/TableGen/RuntimeLibcallEmitter-calling-conv.td b/llvm/test/TableGen/RuntimeLibcallEmitter-calling-conv.td index c224cd6..7ec70b7 100644 --- a/llvm/test/TableGen/RuntimeLibcallEmitter-calling-conv.td +++ b/llvm/test/TableGen/RuntimeLibcallEmitter-calling-conv.td @@ -48,47 +48,39 @@ def MSP430LibraryWithCondCC : SystemRuntimeLibrary<isMSP430, // CHECK-NEXT: Entry = DefaultCC; // CHECK-NEXT: } // CHECK-EMPTY: -// CHECK-NEXT: setLibcallsImpl({ -// CHECK-NEXT: {RTLIB::MALLOC, RTLIB::impl_malloc}, // malloc -// CHECK-NEXT: }); +// CHECK-NEXT: setLibcallImpl(RTLIB::MALLOC, RTLIB::impl_malloc); // malloc // CHECK-EMPTY: -// CHECK-NEXT: setLibcallsImpl({ -// CHECK-NEXT: {RTLIB::SDIVREM_I8, RTLIB::impl___divmodqi4}, // __divmodqi4 -// CHECK-NEXT: {RTLIB::UDIVREM_I16, RTLIB::impl___udivmodhi4}, // __udivmodhi4 -// CHECK-NEXT: }, CallingConv::AVR_BUILTIN); +// CHECK-NEXT: setLibcallImpl(RTLIB::SDIVREM_I8, RTLIB::impl___divmodqi4); // __divmodqi4 +// CHECK-NEXT: setLibcallImplCallingConv(RTLIB::impl___divmodqi4, 
CallingConv::AVR_BUILTIN); +// CHECK-NEXT: setLibcallImpl(RTLIB::UDIVREM_I16, RTLIB::impl___udivmodhi4); // __udivmodhi4 +// CHECK-NEXT: setLibcallImplCallingConv(RTLIB::impl___udivmodhi4, CallingConv::AVR_BUILTIN); // CHECK-EMPTY: // CHECK-NEXT: return; // CHECK-NEXT: } // CHECK-EMPTY: // CHECK-NEXT: if (TT.getArch() == Triple::avr) { -// CHECK-NEXT: setLibcallsImpl({ -// CHECK-NEXT: {RTLIB::MALLOC, RTLIB::impl_malloc}, // malloc -// CHECK-NEXT: }); +// CHECK-NEXT: setLibcallImpl(RTLIB::MALLOC, RTLIB::impl_malloc); // malloc // CHECK-EMPTY: -// CHECK-NEXT: setLibcallsImpl({ -// CHECK-NEXT: {RTLIB::SDIVREM_I8, RTLIB::impl___divmodqi4}, // __divmodqi4 -// CHECK-NEXT: {RTLIB::UDIVREM_I16, RTLIB::impl___udivmodhi4}, // __udivmodhi4 -// CHECK-NEXT: }, CallingConv::AVR_BUILTIN); +// CHECK-NEXT: setLibcallImpl(RTLIB::SDIVREM_I8, RTLIB::impl___divmodqi4); // __divmodqi4 +// CHECK-NEXT: setLibcallImplCallingConv(RTLIB::impl___divmodqi4, CallingConv::AVR_BUILTIN); +// CHECK-NEXT: setLibcallImpl(RTLIB::UDIVREM_I16, RTLIB::impl___udivmodhi4); // __udivmodhi4 +// CHECK-NEXT: setLibcallImplCallingConv(RTLIB::impl___udivmodhi4, CallingConv::AVR_BUILTIN); // CHECK-EMPTY: // CHECK-NEXT: return; // CHECK-NEXT: } // CHECK-EMPTY: // CHECK-NEXT: if (TT.getArch() == Triple::msp430) { -// CHECK-NEXT: setLibcallsImpl({ -// CHECK-NEXT: {RTLIB::MALLOC, RTLIB::impl_malloc}, // malloc -// CHECK-NEXT: }); +// CHECK-NEXT: setLibcallImpl(RTLIB::MALLOC, RTLIB::impl_malloc); // malloc // CHECK-EMPTY: // CHECK-NEXT: if ( isFoo() ) { -// CHECK-NEXT: setLibcallsImpl({ -// CHECK-NEXT: {RTLIB::SDIVREM_I8, RTLIB::impl___divmodqi4}, // __divmodqi4 -// CHECK-NEXT: }, CallingConv::AVR_BUILTIN); +// CHECK-NEXT: setLibcallImpl(RTLIB::SDIVREM_I8, RTLIB::impl___divmodqi4); // __divmodqi4 +// CHECK-NEXT: setLibcallImplCallingConv(RTLIB::impl___divmodqi4, CallingConv::AVR_BUILTIN); // CHECK-EMPTY: // CHECK-NEXT: } // CHECK-EMPTY: // CHECK-NEXT: if ( isBar() ) { -// CHECK-NEXT: setLibcallsImpl({ -// CHECK-NEXT: {RTLIB::UDIVREM_I16, RTLIB::impl___udivmodhi4}, // __udivmodhi4 -// CHECK-NEXT: }, CallingConv::MSP430_BUILTIN); +// CHECK-NEXT: setLibcallImpl(RTLIB::UDIVREM_I16, RTLIB::impl___udivmodhi4); // __udivmodhi4 +// CHECK-NEXT: setLibcallImplCallingConv(RTLIB::impl___udivmodhi4, CallingConv::MSP430_BUILTIN); // CHECK-EMPTY: // CHECK-NEXT: } // CHECK-EMPTY: diff --git a/llvm/test/TableGen/RuntimeLibcallEmitter-conflict-warning.td b/llvm/test/TableGen/RuntimeLibcallEmitter-conflict-warning.td index 8169f56..112c33e 100644 --- a/llvm/test/TableGen/RuntimeLibcallEmitter-conflict-warning.td +++ b/llvm/test/TableGen/RuntimeLibcallEmitter-conflict-warning.td @@ -25,9 +25,7 @@ def dup1 : RuntimeLibcallImpl<ANOTHER_DUP>; // func_a and func_b both provide SOME_FUNC. 
// CHECK: if (isTargetArchA()) { -// CHECK-NEXT: setLibcallsImpl({ -// CHECK-NEXT: {RTLIB::SOME_FUNC, RTLIB::impl_func_b}, // func_b -// CHECK-NEXT: }); +// CHECK-NEXT: setLibcallImpl(RTLIB::SOME_FUNC, RTLIB::impl_func_b); // func_b // ERR: :[[@LINE+1]]:5: warning: conflicting implementations for libcall SOME_FUNC: func_b, func_a def TheSystemLibraryA : SystemRuntimeLibrary<isTargetArchA, @@ -35,10 +33,8 @@ def TheSystemLibraryA : SystemRuntimeLibrary<isTargetArchA, >; // CHECK: if (isTargetArchB()) { -// CHECK-NEXT: setLibcallsImpl({ -// CHECK-NEXT: {RTLIB::OTHER_FUNC, RTLIB::impl_other_func}, // other_func -// CHECK-NEXT: {RTLIB::SOME_FUNC, RTLIB::impl_func_a}, // func_a -// CHECK-NEXT: }); +// CHECK-NEXT: setLibcallImpl(RTLIB::OTHER_FUNC, RTLIB::impl_other_func); // other_func +// CHECK-NEXT: setLibcallImpl(RTLIB::SOME_FUNC, RTLIB::impl_func_a); // func_a // ERR: :[[@LINE+1]]:5: warning: conflicting implementations for libcall SOME_FUNC: func_a, func_b def TheSystemLibraryB : SystemRuntimeLibrary<isTargetArchB, @@ -46,11 +42,9 @@ def TheSystemLibraryB : SystemRuntimeLibrary<isTargetArchB, >; // CHECK: if (isTargetArchC()) { -// CHECK-NEXT: setLibcallsImpl({ -// CHECK-NEXT: {RTLIB::ANOTHER_DUP, RTLIB::impl_dup1}, // dup1 -// CHECK-NEXT: {RTLIB::OTHER_FUNC, RTLIB::impl_other_func}, // other_func -// CHECK-NEXT: {RTLIB::SOME_FUNC, RTLIB::impl_func_a}, // func_a -// CHECK-NEXT: }); +// CHECK-NEXT: setLibcallImpl(RTLIB::ANOTHER_DUP, RTLIB::impl_dup1); // dup1 +// CHECK-NEXT: setLibcallImpl(RTLIB::OTHER_FUNC, RTLIB::impl_other_func); // other_func +// CHECK-NEXT: setLibcallImpl(RTLIB::SOME_FUNC, RTLIB::impl_func_a); // func_a // ERR: :[[@LINE+3]]:5: warning: conflicting implementations for libcall ANOTHER_DUP: dup1, dup0 // ERR: :[[@LINE+2]]:5: warning: conflicting implementations for libcall SOME_FUNC: func_a, func_b diff --git a/llvm/test/TableGen/RuntimeLibcallEmitter.td b/llvm/test/TableGen/RuntimeLibcallEmitter.td index 78705e2..f4577f8 100644 --- a/llvm/test/TableGen/RuntimeLibcallEmitter.td +++ b/llvm/test/TableGen/RuntimeLibcallEmitter.td @@ -190,40 +190,20 @@ def BlahLibrary : SystemRuntimeLibrary<isBlahArch, (add calloc, LibraryWithCondi // CHECK-NEXT: } // CHECK: void llvm::RTLIB::RuntimeLibcallsInfo::setTargetRuntimeLibcallSets(const llvm::Triple &TT, ExceptionHandling ExceptionModel, FloatABI::ABIType FloatABI, EABI EABIVersion, StringRef ABIName) { -// CHECK-NEXT: struct LibcallImplPair { -// CHECK-NEXT: RTLIB::Libcall Func; -// CHECK-NEXT: RTLIB::LibcallImpl Impl; -// CHECK-NEXT: }; -// CHECK-NEXT: auto setLibcallsImpl = [this]( -// CHECK-NEXT: ArrayRef<LibcallImplPair> Libcalls, -// CHECK-NEXT: std::optional<llvm::CallingConv::ID> CC = {}) -// CHECK-NEXT: { -// CHECK-NEXT: for (const auto [Func, Impl] : Libcalls) { -// CHECK-NEXT: setLibcallImpl(Func, Impl); -// CHECK-NEXT: if (CC) -// CHECK-NEXT: setLibcallImplCallingConv(Impl, *CC); -// CHECK-NEXT: } -// CHECK-NEXT: }; // CHECK-EMPTY: // CHECK-NEXT: if (TT.getArch() == Triple::blah) { -// CHECK-NEXT: setLibcallsImpl({ -// CHECK-NEXT: {RTLIB::BZERO, RTLIB::impl_bzero}, // bzero -// CHECK-NEXT: {RTLIB::CALLOC, RTLIB::impl_calloc}, // calloc -// CHECK-NEXT: {RTLIB::SQRT_F128, RTLIB::impl_sqrtl_f128}, // sqrtl -// CHECK-NEXT: }); +// CHECK-NEXT: setLibcallImpl(RTLIB::BZERO, RTLIB::impl_bzero); // bzero +// CHECK-NEXT: setLibcallImpl(RTLIB::CALLOC, RTLIB::impl_calloc); // calloc +// CHECK-NEXT: setLibcallImpl(RTLIB::SQRT_F128, RTLIB::impl_sqrtl_f128); // sqrtl // CHECK-EMPTY: // CHECK-NEXT: if (TT.hasCompilerRT()) { -// 
CHECK-NEXT: setLibcallsImpl({ -// CHECK-NEXT: {RTLIB::SHL_I32, RTLIB::impl___ashlsi3}, // __ashlsi3 -// CHECK-NEXT: {RTLIB::SRL_I64, RTLIB::impl___lshrdi3}, // __lshrdi3 -// CHECK-NEXT: }); +// CHECK-NEXT: setLibcallImpl(RTLIB::SHL_I32, RTLIB::impl___ashlsi3); // __ashlsi3 +// CHECK-NEXT: setLibcallImpl(RTLIB::SRL_I64, RTLIB::impl___lshrdi3); // __lshrdi3 // CHECK-EMPTY: // CHECK-NEXT: } // CHECK-EMPTY: // CHECK-NEXT: if (TT.getOS() == Triple::bar) { -// CHECK-NEXT: setLibcallsImpl({ -// CHECK-NEXT: {RTLIB::MEMSET, RTLIB::impl____memset}, // ___memset -// CHECK-NEXT: }); +// CHECK-NEXT: setLibcallImpl(RTLIB::MEMSET, RTLIB::impl____memset); // ___memset // CHECK-EMPTY: // CHECK-NEXT: } // CHECK-EMPTY: @@ -231,25 +211,19 @@ def BlahLibrary : SystemRuntimeLibrary<isBlahArch, (add calloc, LibraryWithCondi // CHECK-NEXT: } // CHECK-EMPTY: // CHECK-NEXT: if (TT.getArch() == Triple::buzz) { -// CHECK-NEXT: setLibcallsImpl({ -// CHECK-NEXT: {RTLIB::SHL_I32, RTLIB::impl___ashlsi3}, // __ashlsi3 -// CHECK-NEXT: {RTLIB::SQRT_F80, RTLIB::impl_sqrtl_f80}, // sqrtl -// CHECK-NEXT: {RTLIB::SRL_I64, RTLIB::impl___lshrdi3}, // __lshrdi3 -// CHECK-NEXT: }); +// CHECK-NEXT: setLibcallImpl(RTLIB::SHL_I32, RTLIB::impl___ashlsi3); // __ashlsi3 +// CHECK-NEXT: setLibcallImpl(RTLIB::SQRT_F80, RTLIB::impl_sqrtl_f80); // sqrtl +// CHECK-NEXT: setLibcallImpl(RTLIB::SRL_I64, RTLIB::impl___lshrdi3); // __lshrdi3 // CHECK-EMPTY: // CHECK-NEXT: return; // CHECK-NEXT: } // CHECK-EMPTY: // CHECK-NEXT: if (TT.getArch() == Triple::foo) { -// CHECK-NEXT: setLibcallsImpl({ -// CHECK-NEXT: {RTLIB::BZERO, RTLIB::impl_bzero}, // bzero -// CHECK-NEXT: {RTLIB::SQRT_F128, RTLIB::impl_sqrtl_f128}, // sqrtl -// CHECK-NEXT: }); +// CHECK-NEXT: setLibcallImpl(RTLIB::BZERO, RTLIB::impl_bzero); // bzero +// CHECK-NEXT: setLibcallImpl(RTLIB::SQRT_F128, RTLIB::impl_sqrtl_f128); // sqrtl // CHECK-EMPTY: // CHECK-NEXT: if (TT.getOS() == Triple::bar) { -// CHECK-NEXT: setLibcallsImpl({ -// CHECK-NEXT: {RTLIB::MEMSET, RTLIB::impl____memset}, // ___memset -// CHECK-NEXT: }); +// CHECK-NEXT: setLibcallImpl(RTLIB::MEMSET, RTLIB::impl____memset); // ___memset // CHECK-EMPTY: // CHECK-NEXT: } // CHECK-EMPTY: @@ -257,12 +231,10 @@ def BlahLibrary : SystemRuntimeLibrary<isBlahArch, (add calloc, LibraryWithCondi // CHECK-NEXT: } // CHECK-EMPTY: // CHECK-NEXT: if (TT.getArch() == Triple::simple) { -// CHECK-NEXT: setLibcallsImpl({ -// CHECK-NEXT: {RTLIB::CALLOC, RTLIB::impl_calloc}, // calloc -// CHECK-NEXT: {RTLIB::SHL_I32, RTLIB::impl___ashlsi3}, // __ashlsi3 -// CHECK-NEXT: {RTLIB::SQRT_F80, RTLIB::impl_sqrtl_f80}, // sqrtl -// CHECK-NEXT: {RTLIB::SRL_I64, RTLIB::impl___lshrdi3}, // __lshrdi3 -// CHECK-NEXT: }); +// CHECK-NEXT: setLibcallImpl(RTLIB::CALLOC, RTLIB::impl_calloc); // calloc +// CHECK-NEXT: setLibcallImpl(RTLIB::SHL_I32, RTLIB::impl___ashlsi3); // __ashlsi3 +// CHECK-NEXT: setLibcallImpl(RTLIB::SQRT_F80, RTLIB::impl_sqrtl_f80); // sqrtl +// CHECK-NEXT: setLibcallImpl(RTLIB::SRL_I64, RTLIB::impl___lshrdi3); // __lshrdi3 // CHECK-EMPTY: // CHECK-NEXT: return; // CHECK-NEXT: } diff --git a/llvm/test/Transforms/FunctionAttrs/norecurse_libfunc_address_taken.ll b/llvm/test/Transforms/FunctionAttrs/norecurse_libfunc_address_taken.ll new file mode 100644 index 0000000..bcdf75b --- /dev/null +++ b/llvm/test/Transforms/FunctionAttrs/norecurse_libfunc_address_taken.ll @@ -0,0 +1,40 @@ +; NOTE: Assertions have been autogenerated by utils/update_test_checks.py UTC_ARGS: --check-attributes --check-globals all --version 5 +; RUN: opt < %s 
-passes=norecurse-lto-inference -S | FileCheck %s
+
+; This test includes a call to a library function which is not marked as
+; NoCallback. Function bob() does not have internal linkage, which prevents
+; norecurse from being added.
+
+@.str = private unnamed_addr constant [12 x i8] c"Hello World\00", align 1
+
+;.
+; CHECK: @.str = private unnamed_addr constant [12 x i8] c"Hello World\00", align 1
+;.
+define dso_local void @bob() {
+; CHECK-LABEL: define dso_local void @bob() {
+; CHECK-NEXT: [[ENTRY:.*:]]
+; CHECK-NEXT: [[CALL:%.*]] = tail call i32 (ptr, ...) @printf(ptr nonnull dereferenceable(1) @.str)
+; CHECK-NEXT: ret void
+;
+entry:
+ %call = tail call i32 (ptr, ...) @printf(ptr nonnull dereferenceable(1) @.str)
+ ret void
+}
+
+declare i32 @printf(ptr readonly captures(none), ...)
+
+define dso_local i32 @main() norecurse {
+; CHECK: Function Attrs: norecurse
+; CHECK-LABEL: define dso_local i32 @main(
+; CHECK-SAME: ) #[[ATTR0:[0-9]+]] {
+; CHECK-NEXT: [[ENTRY:.*:]]
+; CHECK-NEXT: tail call void @bob()
+; CHECK-NEXT: ret i32 0
+;
+entry:
+ tail call void @bob()
+ ret i32 0
+}
+;.
+; CHECK: attributes #[[ATTR0]] = { norecurse }
+;.
diff --git a/llvm/test/Transforms/FunctionAttrs/norecurse_libfunc_no_address_taken.ll b/llvm/test/Transforms/FunctionAttrs/norecurse_libfunc_no_address_taken.ll
new file mode 100644
index 0000000..a03b4ca
--- /dev/null
+++ b/llvm/test/Transforms/FunctionAttrs/norecurse_libfunc_no_address_taken.ll
@@ -0,0 +1,45 @@
+; NOTE: Assertions have been autogenerated by utils/update_test_checks.py UTC_ARGS: --check-attributes --check-globals all --version 5
+; RUN: opt < %s -passes=norecurse-lto-inference -S | FileCheck %s
+
+; This test includes a call to a library function which is not marked as
+; NoCallback. All functions except main() are internal and main is marked
+; norecurse, so as not to block norecurse from being added to bob().
+
+@.str = private unnamed_addr constant [12 x i8] c"Hello World\00", align 1
+
+; Function Attrs: nofree noinline nounwind uwtable
+;.
+; CHECK: @.str = private unnamed_addr constant [12 x i8] c"Hello World\00", align 1
+;.
+define internal void @bob() {
+; CHECK: Function Attrs: norecurse
+; CHECK-LABEL: define internal void @bob(
+; CHECK-SAME: ) #[[ATTR0:[0-9]+]] {
+; CHECK-NEXT: [[ENTRY:.*:]]
+; CHECK-NEXT: [[CALL:%.*]] = tail call i32 (ptr, ...) @printf(ptr nonnull dereferenceable(1) @.str)
+; CHECK-NEXT: ret void
+;
+entry:
+ %call = tail call i32 (ptr, ...) @printf(ptr nonnull dereferenceable(1) @.str)
+ ret void
+}
+
+; Function Attrs: nofree nounwind
+declare i32 @printf(ptr readonly captures(none), ...)
+
+; Function Attrs: nofree norecurse nounwind uwtable
+define dso_local i32 @main() norecurse {
+; CHECK: Function Attrs: norecurse
+; CHECK-LABEL: define dso_local i32 @main(
+; CHECK-SAME: ) #[[ATTR0]] {
+; CHECK-NEXT: [[ENTRY:.*:]]
+; CHECK-NEXT: tail call void @bob()
+; CHECK-NEXT: ret i32 0
+;
+entry:
+ tail call void @bob()
+ ret i32 0
+}
+;.
+; CHECK: attributes #[[ATTR0]] = { norecurse }
+;.
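Taken together, the two tests above pin down the linkage rule: a function with external linkage can be called from outside the module, so norecurse-lto-inference leaves it alone, while an internal @bob whose only visible caller is already norecurse is eligible. A minimal sketch of the eligible shape (hypothetical names; behavior inferred from the tests above rather than a normative spec):

; RUN: opt < %s -passes=norecurse-lto-inference -S | FileCheck %s

; @leaf is internal and makes no calls at all, so control can never
; re-enter it, and its one visible caller is already norecurse; per the
; tests above the pass should infer norecurse here.
; CHECK: Function Attrs: norecurse
; CHECK-LABEL: define internal void @leaf(
define internal void @leaf() {
  ret void
}

define dso_local i32 @main() norecurse {
  tail call void @leaf()
  ret i32 0
}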
diff --git a/llvm/test/Transforms/FunctionAttrs/norecurse_lto.ll b/llvm/test/Transforms/FunctionAttrs/norecurse_lto.ll
new file mode 100644
index 0000000..5be707b
--- /dev/null
+++ b/llvm/test/Transforms/FunctionAttrs/norecurse_lto.ll
@@ -0,0 +1,69 @@
+; NOTE: Assertions have been autogenerated by utils/update_test_checks.py UTC_ARGS: --check-attributes --check-globals all --version 5
+; RUN: opt < %s -passes=norecurse-lto-inference -S | FileCheck %s
+
+; This test includes a call graph which has a recursive function (foo2) which
+; calls a non-recursive internal function (foo3) satisfying the norecurse
+; attribute criteria.
+
+
+define internal void @foo3() {
+; CHECK: Function Attrs: norecurse
+; CHECK-LABEL: define internal void @foo3(
+; CHECK-SAME: ) #[[ATTR0:[0-9]+]] {
+; CHECK-NEXT: ret void
+;
+ ret void
+}
+
+define internal i32 @foo2(i32 %accum, i32 %n) {
+; CHECK-LABEL: define internal i32 @foo2(
+; CHECK-SAME: i32 [[ACCUM:%.*]], i32 [[N:%.*]]) {
+; CHECK-NEXT: [[ENTRY:.*]]:
+; CHECK-NEXT: [[CMP:%.*]] = icmp eq i32 [[N]], 0
+; CHECK-NEXT: br i1 [[CMP]], label %[[EXIT:.*]], label %[[RECURSE:.*]]
+; CHECK: [[RECURSE]]:
+; CHECK-NEXT: [[SUB:%.*]] = sub i32 [[N]], 1
+; CHECK-NEXT: [[MUL:%.*]] = mul i32 [[ACCUM]], [[SUB]]
+; CHECK-NEXT: [[CALL:%.*]] = call i32 @foo2(i32 [[MUL]], i32 [[SUB]])
+; CHECK-NEXT: call void @foo3()
+; CHECK-NEXT: br label %[[EXIT]]
+; CHECK: [[EXIT]]:
+; CHECK-NEXT: [[RES:%.*]] = phi i32 [ [[ACCUM]], %[[ENTRY]] ], [ [[CALL]], %[[RECURSE]] ]
+; CHECK-NEXT: ret i32 [[RES]]
+;
+entry:
+ %cmp = icmp eq i32 %n, 0
+ br i1 %cmp, label %exit, label %recurse
+
+recurse:
+ %sub = sub i32 %n, 1
+ %mul = mul i32 %accum, %sub
+ %call = call i32 @foo2(i32 %mul, i32 %sub)
+ call void @foo3()
+ br label %exit
+
+exit:
+ %res = phi i32 [ %accum, %entry ], [ %call, %recurse ]
+ ret i32 %res
+}
+
+define internal i32 @foo1() {
+; CHECK-LABEL: define internal i32 @foo1() {
+; CHECK-NEXT: [[RES:%.*]] = call i32 @foo2(i32 1, i32 5)
+; CHECK-NEXT: ret i32 [[RES]]
+;
+ %res = call i32 @foo2(i32 1, i32 5)
+ ret i32 %res
+}
+
+define dso_local i32 @main() {
+; CHECK-LABEL: define dso_local i32 @main() {
+; CHECK-NEXT: [[RES:%.*]] = call i32 @foo1()
+; CHECK-NEXT: ret i32 [[RES]]
+;
+ %res = call i32 @foo1()
+ ret i32 %res
}
+;.
+; CHECK: attributes #[[ATTR0]] = { norecurse }
+;.
diff --git a/llvm/test/Transforms/FunctionAttrs/norecurse_multi_scc_indirect_recursion.ll b/llvm/test/Transforms/FunctionAttrs/norecurse_multi_scc_indirect_recursion.ll
new file mode 100644
index 0000000..e351f60
--- /dev/null
+++ b/llvm/test/Transforms/FunctionAttrs/norecurse_multi_scc_indirect_recursion.ll
@@ -0,0 +1,141 @@
+; NOTE: Assertions have been autogenerated by utils/update_test_checks.py UTC_ARGS: --check-attributes --check-globals all --version 5
+; RUN: opt < %s -passes=norecurse-lto-inference -S | FileCheck %s
+
+; This test includes a call graph with multiple SCCs. The purpose of this is
+; to check that norecurse is not added when a function is part of a
+; non-singular SCC.
+; There are three different SCCs in this test: +; SCC#1: f1, foo, bar, foo1, bar1 +; SCC#2: bar2, bar3, bar4 +; SCC#3: baz, fun +; None of these functions should be marked as norecurse + +define internal void @bar1() { +; CHECK-LABEL: define internal void @bar1() { +; CHECK-NEXT: [[ENTRY:.*:]] +; CHECK-NEXT: tail call void @f1() +; CHECK-NEXT: ret void +; +entry: + tail call void @f1() + ret void +} + +define internal void @f1() { +; CHECK-LABEL: define internal void @f1() { +; CHECK-NEXT: [[ENTRY:.*:]] +; CHECK-NEXT: tail call void @foo() +; CHECK-NEXT: tail call void @bar2() +; CHECK-NEXT: tail call void @baz() +; CHECK-NEXT: ret void +; +entry: + tail call void @foo() + tail call void @bar2() + tail call void @baz() + ret void +} + +define dso_local i32 @main() norecurse { +; CHECK: Function Attrs: norecurse +; CHECK-LABEL: define dso_local i32 @main( +; CHECK-SAME: ) #[[ATTR0:[0-9]+]] { +; CHECK-NEXT: [[ENTRY:.*:]] +; CHECK-NEXT: tail call void @f1() +; CHECK-NEXT: ret i32 0 +; +entry: + tail call void @f1() + ret i32 0 +} + +define internal void @foo1() { +; CHECK-LABEL: define internal void @foo1() { +; CHECK-NEXT: [[ENTRY:.*:]] +; CHECK-NEXT: tail call void @bar1() +; CHECK-NEXT: ret void +; +entry: + tail call void @bar1() + ret void +} + +define internal void @bar() { +; CHECK-LABEL: define internal void @bar() { +; CHECK-NEXT: [[ENTRY:.*:]] +; CHECK-NEXT: tail call void @foo1() +; CHECK-NEXT: ret void +; +entry: + tail call void @foo1() + ret void +} + +define internal void @foo() { +; CHECK-LABEL: define internal void @foo() { +; CHECK-NEXT: [[ENTRY:.*:]] +; CHECK-NEXT: tail call void @bar() +; CHECK-NEXT: ret void +; +entry: + tail call void @bar() + ret void +} + +define internal void @bar4() { +; CHECK-LABEL: define internal void @bar4() { +; CHECK-NEXT: [[ENTRY:.*:]] +; CHECK-NEXT: tail call void @bar2() +; CHECK-NEXT: ret void +; +entry: + tail call void @bar2() + ret void +} + +define internal void @bar2() { +; CHECK-LABEL: define internal void @bar2() { +; CHECK-NEXT: [[ENTRY:.*:]] +; CHECK-NEXT: tail call void @bar3() +; CHECK-NEXT: ret void +; +entry: + tail call void @bar3() + ret void +} + +define internal void @bar3() { +; CHECK-LABEL: define internal void @bar3() { +; CHECK-NEXT: [[ENTRY:.*:]] +; CHECK-NEXT: tail call void @bar4() +; CHECK-NEXT: ret void +; +entry: + tail call void @bar4() + ret void +} + +define internal void @fun() { +; CHECK-LABEL: define internal void @fun() { +; CHECK-NEXT: [[ENTRY:.*:]] +; CHECK-NEXT: tail call void @baz() +; CHECK-NEXT: ret void +; +entry: + tail call void @baz() + ret void +} + +define internal void @baz() { +; CHECK-LABEL: define internal void @baz() { +; CHECK-NEXT: [[ENTRY:.*:]] +; CHECK-NEXT: tail call void @fun() +; CHECK-NEXT: ret void +; +entry: + tail call void @fun() + ret void +} +;. +; CHECK: attributes #[[ATTR0]] = { norecurse } +;. diff --git a/llvm/test/Transforms/FunctionAttrs/norecurse_multi_scc_indirect_recursion1.ll b/llvm/test/Transforms/FunctionAttrs/norecurse_multi_scc_indirect_recursion1.ll new file mode 100644 index 0000000..cd94037 --- /dev/null +++ b/llvm/test/Transforms/FunctionAttrs/norecurse_multi_scc_indirect_recursion1.ll @@ -0,0 +1,98 @@ +; NOTE: Assertions have been autogenerated by utils/update_test_checks.py UTC_ARGS: --check-attributes --check-globals all --version 5 +; RUN: opt < %s -passes=norecurse-lto-inference -S | FileCheck %s + +; This test includes a call graph with multiple SCCs. 
The purpose of this is
+; to check that norecurse is added to a function which calls indirectly
+; recursive functions but is not itself part of the recursive chain.
+; There are two SCCs in this test:
+; SCC#1: bar2, bar3, bar4
+; SCC#2: baz, fun
+; f1() calls bar2 and baz, both of which are part of some indirect recursive
+; chain, but neither calls back into f1(), and hence f1() can be marked as
+; norecurse.
+
+define dso_local i32 @main() norecurse {
+; CHECK: Function Attrs: norecurse
+; CHECK-LABEL: define dso_local i32 @main(
+; CHECK-SAME: ) #[[ATTR0:[0-9]+]] {
+; CHECK-NEXT: [[ENTRY:.*:]]
+; CHECK-NEXT: tail call void @f1()
+; CHECK-NEXT: ret i32 0
+;
+entry:
+ tail call void @f1()
+ ret i32 0
+}
+
+define internal void @f1() {
+; CHECK: Function Attrs: norecurse
+; CHECK-LABEL: define internal void @f1(
+; CHECK-SAME: ) #[[ATTR0]] {
+; CHECK-NEXT: [[ENTRY:.*:]]
+; CHECK-NEXT: tail call void @bar2()
+; CHECK-NEXT: tail call void @baz()
+; CHECK-NEXT: ret void
+;
+entry:
+ tail call void @bar2()
+ tail call void @baz()
+ ret void
+}
+
+define internal void @bar4() {
+; CHECK-LABEL: define internal void @bar4() {
+; CHECK-NEXT: [[ENTRY:.*:]]
+; CHECK-NEXT: tail call void @bar2()
+; CHECK-NEXT: ret void
+;
+entry:
+ tail call void @bar2()
+ ret void
+}
+
+define internal void @bar2() {
+; CHECK-LABEL: define internal void @bar2() {
+; CHECK-NEXT: [[ENTRY:.*:]]
+; CHECK-NEXT: tail call void @bar3()
+; CHECK-NEXT: ret void
+;
+entry:
+ tail call void @bar3()
+ ret void
+}
+
+define internal void @bar3() {
+; CHECK-LABEL: define internal void @bar3() {
+; CHECK-NEXT: [[ENTRY:.*:]]
+; CHECK-NEXT: tail call void @bar4()
+; CHECK-NEXT: ret void
+;
+entry:
+ tail call void @bar4()
+ ret void
+}
+
+define internal void @fun() {
+; CHECK-LABEL: define internal void @fun() {
+; CHECK-NEXT: [[ENTRY:.*:]]
+; CHECK-NEXT: tail call void @baz()
+; CHECK-NEXT: ret void
+;
+entry:
+ tail call void @baz()
+ ret void
+}
+
+define internal void @baz() {
+; CHECK-LABEL: define internal void @baz() {
+; CHECK-NEXT: [[ENTRY:.*:]]
+; CHECK-NEXT: tail call void @fun()
+; CHECK-NEXT: ret void
+;
+entry:
+ tail call void @fun()
+ ret void
+}
+;.
+; CHECK: attributes #[[ATTR0]] = { norecurse }
+;.
diff --git a/llvm/test/Transforms/FunctionAttrs/norecurse_multinode_refscc.ll b/llvm/test/Transforms/FunctionAttrs/norecurse_multinode_refscc.ll
new file mode 100644
index 0000000..8b81a90
--- /dev/null
+++ b/llvm/test/Transforms/FunctionAttrs/norecurse_multinode_refscc.ll
@@ -0,0 +1,41 @@
+; NOTE: Assertions have been autogenerated by utils/update_test_checks.py UTC_ARGS: --check-attributes --check-globals all --version 5
+; RUN: opt -passes=norecurse-lto-inference -S %s | FileCheck %s
+
+; This is a negative test which results in a RefSCC with size > 1.
+
+; RefSCC : [(f2), (f1)]
+; --- SCC A (f1) --- size() = 1
+define internal void @f1() {
+; CHECK-LABEL: define internal void @f1() {
+; CHECK-NEXT: call void @f2()
+; CHECK-NEXT: ret void
+;
+ call void @f2()
+ ret void
+}
+
+; --- SCC B (f2) --- size() = 1
+; f2 indirectly calls f1 using a locally allocated function pointer
+define internal void @f2() {
+; CHECK-LABEL: define internal void @f2() {
+; CHECK-NEXT: [[FP:%.*]] = alloca ptr, align 8
+; CHECK-NEXT: store ptr @f1, ptr [[FP]], align 8
+; CHECK-NEXT: [[TMP:%.*]] = load ptr, ptr [[FP]], align 8
+; CHECK-NEXT: call void [[TMP]]()
+; CHECK-NEXT: ret void
+;
+ %fp = alloca void ()*
+ store void ()* @f1, void ()** %fp
+ %tmp = load void ()*, void ()** %fp
+ call void %tmp()
+ ret void
+}
+
+define i32 @main() {
+; CHECK-LABEL: define i32 @main() {
+; CHECK-NEXT: call void @f1()
+; CHECK-NEXT: ret i32 0
+;
+ call void @f1()
+ ret i32 0
+}
+
diff --git a/llvm/test/Transforms/FunctionAttrs/norecurse_self_recursive_callee.ll b/llvm/test/Transforms/FunctionAttrs/norecurse_self_recursive_callee.ll
new file mode 100644
index 0000000..461e5df
--- /dev/null
+++ b/llvm/test/Transforms/FunctionAttrs/norecurse_self_recursive_callee.ll
@@ -0,0 +1,88 @@
+; NOTE: Assertions have been autogenerated by utils/update_test_checks.py UTC_ARGS: --check-attributes --check-globals all --version 5
+; RUN: opt < %s -passes=norecurse-lto-inference -S | FileCheck %s
+
+; This test includes a call graph with a self recursive function.
+; The purpose of this is to check that norecurse is added to functions
+; which have a self-recursive function in the call-chain.
+; The call-chain in this test is as follows:
+; main -> bob -> callee1 -> callee2
+; where callee1 is self recursive.
+
+@x = dso_local global i32 4, align 4
+@y = dso_local global i32 2, align 4
+
+;.
+; CHECK: @x = dso_local global i32 4, align 4
+; CHECK: @y = dso_local global i32 2, align 4
+;.
+define internal void @callee2() {
+; CHECK: Function Attrs: norecurse
+; CHECK-LABEL: define internal void @callee2(
+; CHECK-SAME: ) #[[ATTR0:[0-9]+]] {
+; CHECK-NEXT: [[ENTRY:.*:]]
+; CHECK-NEXT: [[TMP0:%.*]] = load volatile i32, ptr @y, align 4
+; CHECK-NEXT: [[INC:%.*]] = add nsw i32 [[TMP0]], 1
+; CHECK-NEXT: store volatile i32 [[INC]], ptr @y, align 4
+; CHECK-NEXT: ret void
+;
+entry:
+ %0 = load volatile i32, ptr @y, align 4
+ %inc = add nsw i32 %0, 1
+ store volatile i32 %inc, ptr @y, align 4
+ ret void
+}
+
+define internal void @callee1(i32 %x) {
+; CHECK-LABEL: define internal void @callee1(
+; CHECK-SAME: i32 [[X:%.*]]) {
+; CHECK-NEXT: [[ENTRY:.*:]]
+; CHECK-NEXT: [[CMP:%.*]] = icmp sgt i32 [[X]], 0
+; CHECK-NEXT: br i1 [[CMP]], label %[[IF_THEN:.*]], label %[[IF_END:.*]]
+; CHECK: [[IF_THEN]]:
+; CHECK-NEXT: tail call void @callee1(i32 [[X]])
+; CHECK-NEXT: br label %[[IF_END]]
+; CHECK: [[IF_END]]:
+; CHECK-NEXT: tail call void @callee2()
+; CHECK-NEXT: ret void
+;
+entry:
+ %cmp = icmp sgt i32 %x, 0
+ br i1 %cmp, label %if.then, label %if.end
+
+if.then: ; preds = %entry
+ tail call void @callee1(i32 %x)
+ br label %if.end
+
+if.end: ; preds = %if.then, %entry
+ tail call void @callee2()
+ ret void
+}
+
+define internal void @bob() {
+; CHECK-LABEL: define internal void @bob() {
+; CHECK-NEXT: [[ENTRY:.*:]]
+; CHECK-NEXT: [[TMP0:%.*]] = load volatile i32, ptr @x, align 4
+; CHECK-NEXT: tail call void @callee1(i32 [[TMP0]])
+; CHECK-NEXT: ret void
+;
+entry:
+ %0 = load volatile i32, ptr @x, align 4
+ tail call void @callee1(i32 %0)
+ ret void
+}
+
+define dso_local i32 @main() norecurse {
+; CHECK: Function Attrs: norecurse
+; CHECK-LABEL: define dso_local i32 @main(
+; CHECK-SAME: ) #[[ATTR0]] {
+; CHECK-NEXT: [[ENTRY:.*:]]
+; CHECK-NEXT: tail call void @bob()
+; CHECK-NEXT: ret i32 0
+;
+entry:
+ tail call void @bob()
+ ret i32 0
+}
+;.
+; CHECK: attributes #[[ATTR0]] = { norecurse }
+;.
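norecurse_lto.ll and norecurse_self_recursive_callee.ll above rest on the same point: norecurse describes control re-entering a particular function, not recursion elsewhere in its call chain, so a leaf callee below a self-recursive function is still eligible while the self-recursive function itself is not. A reduced sketch of that distinction (hypothetical names; same pass invocation as the tests above):

; RUN: opt < %s -passes=norecurse-lto-inference -S | FileCheck %s

; @leaf makes no calls, so it can never re-enter itself; mirroring foo3
; in norecurse_lto.ll above, it should be marked norecurse even though
; its only caller recurses.
; CHECK: Function Attrs: norecurse
; CHECK-LABEL: define internal void @leaf(
define internal void @leaf() {
  ret void
}

; @spin calls itself, so it must not be marked norecurse.
define internal void @spin(i32 %n) {
  %c = icmp sgt i32 %n, 0
  br i1 %c, label %again, label %done

again:
  %n.dec = sub i32 %n, 1
  call void @spin(i32 %n.dec)
  call void @leaf()
  br label %done

done:
  ret void
}

define i32 @main() {
  call void @spin(i32 5)
  ret i32 0
}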
diff --git a/llvm/test/Transforms/InstCombine/select-safe-bool-transforms.ll b/llvm/test/Transforms/InstCombine/select-safe-bool-transforms.ll index 9de9150..8b0a5ca 100644 --- a/llvm/test/Transforms/InstCombine/select-safe-bool-transforms.ll +++ b/llvm/test/Transforms/InstCombine/select-safe-bool-transforms.ll @@ -1,4 +1,4 @@ -; NOTE: Assertions have been autogenerated by utils/update_test_checks.py +; NOTE: Assertions have been autogenerated by utils/update_test_checks.py UTC_ARGS: --check-globals ; RUN: opt < %s -passes=instcombine -S | FileCheck %s ; TODO: All of these should be optimized to less than or equal to a single @@ -7,13 +7,13 @@ ; --- (A op B) op' A / (B op A) op' A --- ; (A land B) land A -define i1 @land_land_left1(i1 %A, i1 %B) { +define i1 @land_land_left1(i1 %A, i1 %B) !prof !0 { ; CHECK-LABEL: @land_land_left1( -; CHECK-NEXT: [[C:%.*]] = select i1 [[A:%.*]], i1 [[B:%.*]], i1 false +; CHECK-NEXT: [[C:%.*]] = select i1 [[A:%.*]], i1 [[B:%.*]], i1 false, !prof [[PROF1:![0-9]+]] ; CHECK-NEXT: ret i1 [[C]] ; - %c = select i1 %A, i1 %B, i1 false - %res = select i1 %c, i1 %A, i1 false + %c = select i1 %A, i1 %B, i1 false, !prof !1 + %res = select i1 %c, i1 %A, i1 false, !prof !2 ret i1 %res } define i1 @land_land_left2(i1 %A, i1 %B) { @@ -157,13 +157,13 @@ define i1 @lor_band_left2(i1 %A, i1 %B) { } ; (A lor B) lor A -define i1 @lor_lor_left1(i1 %A, i1 %B) { +define i1 @lor_lor_left1(i1 %A, i1 %B) !prof !0 { ; CHECK-LABEL: @lor_lor_left1( -; CHECK-NEXT: [[C:%.*]] = select i1 [[A:%.*]], i1 true, i1 [[B:%.*]] +; CHECK-NEXT: [[C:%.*]] = select i1 [[A:%.*]], i1 true, i1 [[B:%.*]], !prof [[PROF1]] ; CHECK-NEXT: ret i1 [[C]] ; - %c = select i1 %A, i1 true, i1 %B - %res = select i1 %c, i1 true, i1 %A + %c = select i1 %A, i1 true, i1 %B, !prof !1 + %res = select i1 %c, i1 true, i1 %A, !prof !2 ret i1 %res } define i1 @lor_lor_left2(i1 %A, i1 %B) { @@ -506,3 +506,12 @@ define <2 x i1> @PR50500_falseval(<2 x i1> %a, <2 x i1> %b) { %r = select <2 x i1> %a, <2 x i1> %b, <2 x i1> %s ret <2 x i1> %r } + +!0 = !{!"function_entry_count", i64 1000} +!1 = !{!"branch_weights", i32 2, i32 3} +!2 = !{!"branch_weights", i32 5, i32 7} + +;. +; CHECK: [[META0:![0-9]+]] = !{!"function_entry_count", i64 1000} +; CHECK: [[PROF1]] = !{!"branch_weights", i32 2, i32 3} +;. 
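The select-safe-bool-transforms.ll change above exercises profile-metadata preservation through the fold itself: with %c = select %A, %B, false, the outer select %c, %A, false always evaluates to %c (when %c is true, %A is necessarily true), so InstCombine drops the outer select and only the inner select's !prof branch weights survive, which is what the [[PROF1]] check verifies. A standalone sketch of the input shape (hypothetical function name; assumes only the fold shown above):

define i1 @land_land(i1 %A, i1 %B) {
  ; (%A && %B) && %A, with distinct weights on each select
  %c = select i1 %A, i1 %B, i1 false, !prof !1
  %res = select i1 %c, i1 %A, i1 false, !prof !2
  ret i1 %res
}

!1 = !{!"branch_weights", i32 2, i32 3}
!2 = !{!"branch_weights", i32 5, i32 7}

; After opt -passes=instcombine -S, the body should reduce to:
;   %c = select i1 %A, i1 %B, i1 false, !prof !1
;   ret i1 %c
; with !2 gone along with the folded select.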
diff --git a/llvm/test/Transforms/LoopVectorize/AArch64/neon-inloop-reductions.ll b/llvm/test/Transforms/LoopVectorize/AArch64/neon-inloop-reductions.ll new file mode 100644 index 0000000..22696d0 --- /dev/null +++ b/llvm/test/Transforms/LoopVectorize/AArch64/neon-inloop-reductions.ll @@ -0,0 +1,121 @@ +; NOTE: Assertions have been autogenerated by utils/update_test_checks.py UTC_ARGS: --check-globals none --version 6 +; RUN: opt -p loop-vectorize -prefer-inloop-reductions -mcpu=apple-m1 -force-vector-interleave=1 -S %s | FileCheck %s + +target triple = "arm64-apple-macosx" + +define i32 @mul_used_outside_vpexpression(ptr %src.0, ptr %src.1) { +; CHECK-LABEL: define i32 @mul_used_outside_vpexpression( +; CHECK-SAME: ptr [[SRC_0:%.*]], ptr [[SRC_1:%.*]]) #[[ATTR0:[0-9]+]] { +; CHECK-NEXT: [[ITER_CHECK:.*]]: +; CHECK-NEXT: br i1 false, label %[[VEC_EPILOG_SCALAR_PH:.*]], label %[[VECTOR_MAIN_LOOP_ITER_CHECK:.*]] +; CHECK: [[VECTOR_MAIN_LOOP_ITER_CHECK]]: +; CHECK-NEXT: br i1 false, label %[[VEC_EPILOG_PH:.*]], label %[[VECTOR_PH:.*]] +; CHECK: [[VECTOR_PH]]: +; CHECK-NEXT: [[TMP0:%.*]] = getelementptr i8, ptr [[SRC_1]], i64 1 +; CHECK-NEXT: br label %[[VECTOR_BODY:.*]] +; CHECK: [[VECTOR_BODY]]: +; CHECK-NEXT: [[INDEX:%.*]] = phi i64 [ 0, %[[VECTOR_PH]] ], [ [[INDEX_NEXT:%.*]], %[[VECTOR_BODY]] ] +; CHECK-NEXT: [[VEC_PHI:%.*]] = phi i32 [ 0, %[[VECTOR_PH]] ], [ [[TMP6:%.*]], %[[VECTOR_BODY]] ] +; CHECK-NEXT: [[VEC_PHI1:%.*]] = phi i32 [ 0, %[[VECTOR_PH]] ], [ [[TMP8:%.*]], %[[VECTOR_BODY]] ] +; CHECK-NEXT: [[NEXT_GEP:%.*]] = getelementptr i8, ptr [[SRC_0]], i64 [[INDEX]] +; CHECK-NEXT: [[WIDE_LOAD:%.*]] = load <16 x i8>, ptr [[NEXT_GEP]], align 1 +; CHECK-NEXT: [[TMP1:%.*]] = load i8, ptr [[TMP0]], align 1 +; CHECK-NEXT: [[BROADCAST_SPLATINSERT:%.*]] = insertelement <16 x i8> poison, i8 [[TMP1]], i64 0 +; CHECK-NEXT: [[BROADCAST_SPLAT:%.*]] = shufflevector <16 x i8> [[BROADCAST_SPLATINSERT]], <16 x i8> poison, <16 x i32> zeroinitializer +; CHECK-NEXT: [[TMP2:%.*]] = zext <16 x i8> [[WIDE_LOAD]] to <16 x i32> +; CHECK-NEXT: [[TMP3:%.*]] = zext <16 x i8> [[BROADCAST_SPLAT]] to <16 x i32> +; CHECK-NEXT: [[TMP4:%.*]] = mul <16 x i32> [[TMP2]], [[TMP3]] +; CHECK-NEXT: [[TMP5:%.*]] = call i32 @llvm.vector.reduce.add.v16i32(<16 x i32> [[TMP4]]) +; CHECK-NEXT: [[TMP6]] = add i32 [[VEC_PHI]], [[TMP5]] +; CHECK-NEXT: [[TMP7:%.*]] = call i32 @llvm.vector.reduce.or.v16i32(<16 x i32> [[TMP4]]) +; CHECK-NEXT: [[TMP8]] = or i32 [[VEC_PHI1]], [[TMP7]] +; CHECK-NEXT: [[INDEX_NEXT]] = add nuw i64 [[INDEX]], 16 +; CHECK-NEXT: [[TMP9:%.*]] = icmp eq i64 [[INDEX_NEXT]], 96 +; CHECK-NEXT: br i1 [[TMP9]], label %[[MIDDLE_BLOCK:.*]], label %[[VECTOR_BODY]], !llvm.loop [[LOOP0:![0-9]+]] +; CHECK: [[MIDDLE_BLOCK]]: +; CHECK-NEXT: br i1 false, label %[[EXIT:.*]], label %[[VEC_EPILOG_ITER_CHECK:.*]] +; CHECK: [[VEC_EPILOG_ITER_CHECK]]: +; CHECK-NEXT: [[IND_END:%.*]] = getelementptr i8, ptr [[SRC_0]], i64 96 +; CHECK-NEXT: br i1 false, label %[[VEC_EPILOG_SCALAR_PH]], label %[[VEC_EPILOG_PH]], !prof [[PROF3:![0-9]+]] +; CHECK: [[VEC_EPILOG_PH]]: +; CHECK-NEXT: [[VEC_EPILOG_RESUME_VAL:%.*]] = phi i64 [ 96, %[[VEC_EPILOG_ITER_CHECK]] ], [ 0, %[[VECTOR_MAIN_LOOP_ITER_CHECK]] ] +; CHECK-NEXT: [[BC_MERGE_RDX:%.*]] = phi i32 [ [[TMP6]], %[[VEC_EPILOG_ITER_CHECK]] ], [ 0, %[[VECTOR_MAIN_LOOP_ITER_CHECK]] ] +; CHECK-NEXT: [[BC_MERGE_RDX2:%.*]] = phi i32 [ [[TMP8]], %[[VEC_EPILOG_ITER_CHECK]] ], [ 0, %[[VECTOR_MAIN_LOOP_ITER_CHECK]] ] +; CHECK-NEXT: [[TMP10:%.*]] = getelementptr i8, ptr [[SRC_0]], i64 100 +; CHECK-NEXT: 
[[TMP11:%.*]] = getelementptr i8, ptr [[SRC_1]], i64 1 +; CHECK-NEXT: br label %[[VEC_EPILOG_VECTOR_BODY:.*]] +; CHECK: [[VEC_EPILOG_VECTOR_BODY]]: +; CHECK-NEXT: [[INDEX3:%.*]] = phi i64 [ [[VEC_EPILOG_RESUME_VAL]], %[[VEC_EPILOG_PH]] ], [ [[INDEX_NEXT10:%.*]], %[[VEC_EPILOG_VECTOR_BODY]] ] +; CHECK-NEXT: [[VEC_PHI4:%.*]] = phi i32 [ [[BC_MERGE_RDX]], %[[VEC_EPILOG_PH]] ], [ [[TMP17:%.*]], %[[VEC_EPILOG_VECTOR_BODY]] ] +; CHECK-NEXT: [[VEC_PHI5:%.*]] = phi i32 [ [[BC_MERGE_RDX2]], %[[VEC_EPILOG_PH]] ], [ [[TMP19:%.*]], %[[VEC_EPILOG_VECTOR_BODY]] ] +; CHECK-NEXT: [[NEXT_GEP6:%.*]] = getelementptr i8, ptr [[SRC_0]], i64 [[INDEX3]] +; CHECK-NEXT: [[WIDE_LOAD7:%.*]] = load <4 x i8>, ptr [[NEXT_GEP6]], align 1 +; CHECK-NEXT: [[TMP12:%.*]] = load i8, ptr [[TMP11]], align 1 +; CHECK-NEXT: [[BROADCAST_SPLATINSERT8:%.*]] = insertelement <4 x i8> poison, i8 [[TMP12]], i64 0 +; CHECK-NEXT: [[BROADCAST_SPLAT9:%.*]] = shufflevector <4 x i8> [[BROADCAST_SPLATINSERT8]], <4 x i8> poison, <4 x i32> zeroinitializer +; CHECK-NEXT: [[TMP13:%.*]] = zext <4 x i8> [[WIDE_LOAD7]] to <4 x i32> +; CHECK-NEXT: [[TMP14:%.*]] = zext <4 x i8> [[BROADCAST_SPLAT9]] to <4 x i32> +; CHECK-NEXT: [[TMP15:%.*]] = mul <4 x i32> [[TMP13]], [[TMP14]] +; CHECK-NEXT: [[TMP16:%.*]] = call i32 @llvm.vector.reduce.add.v4i32(<4 x i32> [[TMP15]]) +; CHECK-NEXT: [[TMP17]] = add i32 [[VEC_PHI4]], [[TMP16]] +; CHECK-NEXT: [[TMP18:%.*]] = call i32 @llvm.vector.reduce.or.v4i32(<4 x i32> [[TMP15]]) +; CHECK-NEXT: [[TMP19]] = or i32 [[VEC_PHI5]], [[TMP18]] +; CHECK-NEXT: [[INDEX_NEXT10]] = add nuw i64 [[INDEX3]], 4 +; CHECK-NEXT: [[TMP20:%.*]] = icmp eq i64 [[INDEX_NEXT10]], 100 +; CHECK-NEXT: br i1 [[TMP20]], label %[[VEC_EPILOG_MIDDLE_BLOCK:.*]], label %[[VEC_EPILOG_VECTOR_BODY]], !llvm.loop [[LOOP4:![0-9]+]] +; CHECK: [[VEC_EPILOG_MIDDLE_BLOCK]]: +; CHECK-NEXT: br i1 false, label %[[EXIT]], label %[[VEC_EPILOG_SCALAR_PH]] +; CHECK: [[VEC_EPILOG_SCALAR_PH]]: +; CHECK-NEXT: [[BC_RESUME_VAL:%.*]] = phi i32 [ 100, %[[VEC_EPILOG_MIDDLE_BLOCK]] ], [ 96, %[[VEC_EPILOG_ITER_CHECK]] ], [ 0, %[[ITER_CHECK]] ] +; CHECK-NEXT: [[BC_RESUME_VAL11:%.*]] = phi ptr [ [[TMP10]], %[[VEC_EPILOG_MIDDLE_BLOCK]] ], [ [[IND_END]], %[[VEC_EPILOG_ITER_CHECK]] ], [ [[SRC_0]], %[[ITER_CHECK]] ] +; CHECK-NEXT: [[BC_MERGE_RDX12:%.*]] = phi i32 [ [[TMP17]], %[[VEC_EPILOG_MIDDLE_BLOCK]] ], [ [[TMP6]], %[[VEC_EPILOG_ITER_CHECK]] ], [ 0, %[[ITER_CHECK]] ] +; CHECK-NEXT: [[BC_MERGE_RDX13:%.*]] = phi i32 [ [[TMP19]], %[[VEC_EPILOG_MIDDLE_BLOCK]] ], [ [[TMP8]], %[[VEC_EPILOG_ITER_CHECK]] ], [ 0, %[[ITER_CHECK]] ] +; CHECK-NEXT: br label %[[LOOP:.*]] +; CHECK: [[LOOP]]: +; CHECK-NEXT: [[IV:%.*]] = phi i32 [ [[BC_RESUME_VAL]], %[[VEC_EPILOG_SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[LOOP]] ] +; CHECK-NEXT: [[PTR_IV:%.*]] = phi ptr [ [[BC_RESUME_VAL11]], %[[VEC_EPILOG_SCALAR_PH]] ], [ [[GEP_0:%.*]], %[[LOOP]] ] +; CHECK-NEXT: [[RED_0:%.*]] = phi i32 [ [[BC_MERGE_RDX12]], %[[VEC_EPILOG_SCALAR_PH]] ], [ [[RED_0_NEXT:%.*]], %[[LOOP]] ] +; CHECK-NEXT: [[RED_1:%.*]] = phi i32 [ [[BC_MERGE_RDX13]], %[[VEC_EPILOG_SCALAR_PH]] ], [ [[RED_1_NEXT:%.*]], %[[LOOP]] ] +; CHECK-NEXT: [[GEP_0]] = getelementptr i8, ptr [[PTR_IV]], i64 1 +; CHECK-NEXT: [[L_0:%.*]] = load i8, ptr [[PTR_IV]], align 1 +; CHECK-NEXT: [[GEP_1:%.*]] = getelementptr i8, ptr [[SRC_1]], i64 1 +; CHECK-NEXT: [[L_1:%.*]] = load i8, ptr [[GEP_1]], align 1 +; CHECK-NEXT: [[L_0_EXT:%.*]] = zext i8 [[L_0]] to i32 +; CHECK-NEXT: [[L_1_EXT:%.*]] = zext i8 [[L_1]] to i32 +; CHECK-NEXT: [[MUL_EXT_LL:%.*]] = mul i32 [[L_0_EXT]], 
[[L_1_EXT]] +; CHECK-NEXT: [[RED_1_NEXT]] = or i32 [[MUL_EXT_LL]], [[RED_1]] +; CHECK-NEXT: [[RED_0_NEXT]] = add i32 [[MUL_EXT_LL]], [[RED_0]] +; CHECK-NEXT: [[IV_NEXT]] = add i32 [[IV]], 1 +; CHECK-NEXT: [[EC:%.*]] = icmp eq i32 [[IV]], 101 +; CHECK-NEXT: br i1 [[EC]], label %[[EXIT]], label %[[LOOP]], !llvm.loop [[LOOP5:![0-9]+]] +; CHECK: [[EXIT]]: +; CHECK-NEXT: [[RED_1_NEXT_LCSSA:%.*]] = phi i32 [ [[RED_1_NEXT]], %[[LOOP]] ], [ [[TMP8]], %[[MIDDLE_BLOCK]] ], [ [[TMP19]], %[[VEC_EPILOG_MIDDLE_BLOCK]] ] +; CHECK-NEXT: [[RED_0_NEXT_LCSSA:%.*]] = phi i32 [ [[RED_0_NEXT]], %[[LOOP]] ], [ [[TMP6]], %[[MIDDLE_BLOCK]] ], [ [[TMP17]], %[[VEC_EPILOG_MIDDLE_BLOCK]] ] +; CHECK-NEXT: [[RES:%.*]] = add i32 [[RED_1_NEXT_LCSSA]], [[RED_0_NEXT_LCSSA]] +; CHECK-NEXT: ret i32 [[RES]] +; +entry: + br label %loop + +loop: + %iv = phi i32 [ 0, %entry ], [ %iv.next, %loop ] + %ptr.iv = phi ptr [ %src.0, %entry ], [ %gep.0, %loop ] + %red.0 = phi i32 [ 0, %entry ], [ %red.0.next, %loop ] + %red.1 = phi i32 [ 0, %entry ], [ %red.1.next, %loop ] + %gep.0 = getelementptr i8, ptr %ptr.iv, i64 1 + %l.0 = load i8, ptr %ptr.iv, align 1 + %gep.1 = getelementptr i8, ptr %src.1, i64 1 + %l.1 = load i8, ptr %gep.1, align 1 + %l.0.ext = zext i8 %l.0 to i32 + %l.1.ext = zext i8 %l.1 to i32 + %mul.ext.ll = mul i32 %l.0.ext, %l.1.ext + %red.1.next = or i32 %mul.ext.ll, %red.1 + %red.0.next = add i32 %mul.ext.ll, %red.0 + %iv.next = add i32 %iv, 1 + %ec = icmp eq i32 %iv, 101 + br i1 %ec, label %exit, label %loop + +exit: + %res = add i32 %red.1.next, %red.0.next + ret i32 %res +} diff --git a/llvm/test/Transforms/LoopVectorize/AArch64/pr162009.ll b/llvm/test/Transforms/LoopVectorize/AArch64/pr162009.ll new file mode 100644 index 0000000..6095b24 --- /dev/null +++ b/llvm/test/Transforms/LoopVectorize/AArch64/pr162009.ll @@ -0,0 +1,79 @@ +; NOTE: Assertions have been autogenerated by utils/update_test_checks.py UTC_ARGS: --version 6 +; RUN: opt -passes=loop-vectorize -force-vector-interleave=1 -enable-epilogue-vectorization=false -S < %s | FileCheck %s --check-prefixes=CHECK-NO-PARTIAL-REDUCTION + +target triple = "aarch64" + +define i128 @add_reduc_i32_i128_unsupported(ptr %a, ptr %b) "target-features"="+dotprod" { +; CHECK-NO-PARTIAL-REDUCTION-LABEL: define i128 @add_reduc_i32_i128_unsupported( +; CHECK-NO-PARTIAL-REDUCTION-SAME: ptr [[A:%.*]], ptr [[B:%.*]]) #[[ATTR0:[0-9]+]] { +; CHECK-NO-PARTIAL-REDUCTION-NEXT: [[ENTRY:.*:]] +; CHECK-NO-PARTIAL-REDUCTION-NEXT: br label %[[VECTOR_PH:.*]] +; CHECK-NO-PARTIAL-REDUCTION: [[VECTOR_PH]]: +; CHECK-NO-PARTIAL-REDUCTION-NEXT: br label %[[VECTOR_BODY:.*]] +; CHECK-NO-PARTIAL-REDUCTION: [[VECTOR_BODY]]: +; CHECK-NO-PARTIAL-REDUCTION-NEXT: [[INDEX:%.*]] = phi i64 [ 0, %[[VECTOR_PH]] ], [ [[INDEX_NEXT:%.*]], %[[VECTOR_BODY]] ] +; CHECK-NO-PARTIAL-REDUCTION-NEXT: [[VEC_PHI:%.*]] = phi <4 x i128> [ zeroinitializer, %[[VECTOR_PH]] ], [ [[TMP7:%.*]], %[[VECTOR_BODY]] ] +; CHECK-NO-PARTIAL-REDUCTION-NEXT: [[TMP0:%.*]] = getelementptr i32, ptr [[A]], i64 [[INDEX]] +; CHECK-NO-PARTIAL-REDUCTION-NEXT: [[WIDE_LOAD:%.*]] = load <4 x i32>, ptr [[TMP0]], align 1 +; CHECK-NO-PARTIAL-REDUCTION-NEXT: [[TMP1:%.*]] = zext <4 x i32> [[WIDE_LOAD]] to <4 x i64> +; CHECK-NO-PARTIAL-REDUCTION-NEXT: [[TMP2:%.*]] = getelementptr i32, ptr [[B]], i64 [[INDEX]] +; CHECK-NO-PARTIAL-REDUCTION-NEXT: [[WIDE_LOAD1:%.*]] = load <4 x i32>, ptr [[TMP2]], align 1 +; CHECK-NO-PARTIAL-REDUCTION-NEXT: [[TMP3:%.*]] = zext <4 x i32> [[WIDE_LOAD1]] to <4 x i64> +; CHECK-NO-PARTIAL-REDUCTION-NEXT: [[TMP4:%.*]] = mul nuw <4 x 
i64> [[TMP1]], [[TMP3]] +; CHECK-NO-PARTIAL-REDUCTION-NEXT: [[TMP5:%.*]] = zext <4 x i64> [[TMP4]] to <4 x i128> +; CHECK-NO-PARTIAL-REDUCTION-NEXT: [[TMP7]] = add <4 x i128> [[VEC_PHI]], [[TMP5]] +; CHECK-NO-PARTIAL-REDUCTION-NEXT: [[INDEX_NEXT]] = add nuw i64 [[INDEX]], 4 +; CHECK-NO-PARTIAL-REDUCTION-NEXT: [[TMP6:%.*]] = icmp eq i64 [[INDEX_NEXT]], 4024 +; CHECK-NO-PARTIAL-REDUCTION-NEXT: br i1 [[TMP6]], label %[[MIDDLE_BLOCK:.*]], label %[[VECTOR_BODY]], !llvm.loop [[LOOP0:![0-9]+]] +; CHECK-NO-PARTIAL-REDUCTION: [[MIDDLE_BLOCK]]: +; CHECK-NO-PARTIAL-REDUCTION-NEXT: [[TMP8:%.*]] = call i128 @llvm.vector.reduce.add.v4i128(<4 x i128> [[TMP7]]) +; CHECK-NO-PARTIAL-REDUCTION-NEXT: br label %[[SCALAR_PH:.*]] +; CHECK-NO-PARTIAL-REDUCTION: [[SCALAR_PH]]: +; CHECK-NO-PARTIAL-REDUCTION-NEXT: br label %[[FOR_BODY:.*]] +; CHECK-NO-PARTIAL-REDUCTION: [[FOR_BODY]]: +; CHECK-NO-PARTIAL-REDUCTION-NEXT: [[IV:%.*]] = phi i64 [ 4024, %[[SCALAR_PH]] ], [ [[IV_NEXT:%.*]], %[[FOR_BODY]] ] +; CHECK-NO-PARTIAL-REDUCTION-NEXT: [[ACCUM:%.*]] = phi i128 [ [[TMP8]], %[[SCALAR_PH]] ], [ [[ADD:%.*]], %[[FOR_BODY]] ] +; CHECK-NO-PARTIAL-REDUCTION-NEXT: [[GEP_A:%.*]] = getelementptr i32, ptr [[A]], i64 [[IV]] +; CHECK-NO-PARTIAL-REDUCTION-NEXT: [[LOAD_A:%.*]] = load i32, ptr [[GEP_A]], align 1 +; CHECK-NO-PARTIAL-REDUCTION-NEXT: [[EXT_A:%.*]] = zext i32 [[LOAD_A]] to i64 +; CHECK-NO-PARTIAL-REDUCTION-NEXT: [[GEP_B:%.*]] = getelementptr i32, ptr [[B]], i64 [[IV]] +; CHECK-NO-PARTIAL-REDUCTION-NEXT: [[LOAD_B:%.*]] = load i32, ptr [[GEP_B]], align 1 +; CHECK-NO-PARTIAL-REDUCTION-NEXT: [[EXT_B:%.*]] = zext i32 [[LOAD_B]] to i64 +; CHECK-NO-PARTIAL-REDUCTION-NEXT: [[MUL:%.*]] = mul nuw i64 [[EXT_A]], [[EXT_B]] +; CHECK-NO-PARTIAL-REDUCTION-NEXT: [[MUL_ZEXT:%.*]] = zext i64 [[MUL]] to i128 +; CHECK-NO-PARTIAL-REDUCTION-NEXT: [[ADD]] = add i128 [[ACCUM]], [[MUL_ZEXT]] +; CHECK-NO-PARTIAL-REDUCTION-NEXT: [[IV_NEXT]] = add i64 [[IV]], 1 +; CHECK-NO-PARTIAL-REDUCTION-NEXT: [[EXITCOND_NOT:%.*]] = icmp eq i64 [[IV_NEXT]], 4025 +; CHECK-NO-PARTIAL-REDUCTION-NEXT: br i1 [[EXITCOND_NOT]], label %[[FOR_EXIT:.*]], label %[[FOR_BODY]], !llvm.loop [[LOOP3:![0-9]+]] +; CHECK-NO-PARTIAL-REDUCTION: [[FOR_EXIT]]: +; CHECK-NO-PARTIAL-REDUCTION-NEXT: [[ADD_LCSSA:%.*]] = phi i128 [ [[ADD]], %[[FOR_BODY]] ] +; CHECK-NO-PARTIAL-REDUCTION-NEXT: ret i128 [[ADD_LCSSA]] +; +entry: + br label %for.body + +for.body: + %iv = phi i64 [ 0, %entry ], [ %iv.next, %for.body ] + %accum = phi i128 [ 0, %entry ], [ %add, %for.body ] + %gep.a = getelementptr i32, ptr %a, i64 %iv + %load.a = load i32, ptr %gep.a, align 1 + %ext.a = zext i32 %load.a to i64 + %gep.b = getelementptr i32, ptr %b, i64 %iv + %load.b = load i32, ptr %gep.b, align 1 + %ext.b = zext i32 %load.b to i64 + %mul = mul nuw i64 %ext.a, %ext.b + %mul.zext = zext i64 %mul to i128 + %add = add i128 %accum, %mul.zext + %iv.next = add i64 %iv, 1 + %exitcond.not = icmp eq i64 %iv.next, 4025 + br i1 %exitcond.not, label %for.exit, label %for.body + +for.exit: + ret i128 %add +} +;. +; CHECK-NO-PARTIAL-REDUCTION: [[LOOP0]] = distinct !{[[LOOP0]], [[META1:![0-9]+]], [[META2:![0-9]+]]} +; CHECK-NO-PARTIAL-REDUCTION: [[META1]] = !{!"llvm.loop.isvectorized", i32 1} +; CHECK-NO-PARTIAL-REDUCTION: [[META2]] = !{!"llvm.loop.unroll.runtime.disable"} +; CHECK-NO-PARTIAL-REDUCTION: [[LOOP3]] = distinct !{[[LOOP3]], [[META2]], [[META1]]} +;. 
diff --git a/llvm/test/Transforms/LoopVectorize/ARM/replicating-load-store-costs.ll b/llvm/test/Transforms/LoopVectorize/ARM/replicating-load-store-costs.ll new file mode 100644 index 0000000..fd83a01 --- /dev/null +++ b/llvm/test/Transforms/LoopVectorize/ARM/replicating-load-store-costs.ll @@ -0,0 +1,84 @@ +; NOTE: Assertions have been autogenerated by utils/update_test_checks.py UTC_ARGS: --version 6 +; RUN: opt -p loop-vectorize -S %s | FileCheck %s + +target triple = "armv7-unknown-linux-gnueabihf" + +define void @replicating_load_used_by_other_load(i32 %arg, ptr %a, i32 %b) { +; CHECK-LABEL: define void @replicating_load_used_by_other_load( +; CHECK-SAME: i32 [[ARG:%.*]], ptr [[A:%.*]], i32 [[B:%.*]]) { +; CHECK-NEXT: [[ENTRY:.*]]: +; CHECK-NEXT: br label %[[LOOP:.*]] +; CHECK: [[LOOP]]: +; CHECK-NEXT: [[IV:%.*]] = phi i32 [ [[IV_NEXT:%.*]], %[[LOOP]] ], [ [[ARG]], %[[ENTRY]] ] +; CHECK-NEXT: [[SHR:%.*]] = lshr i32 [[IV]], 1 +; CHECK-NEXT: [[AND_1:%.*]] = and i32 [[IV]], 1 +; CHECK-NEXT: [[SHL_1:%.*]] = shl i32 [[IV]], 2 +; CHECK-NEXT: [[SHL_2:%.*]] = shl i32 [[IV]], 1 +; CHECK-NEXT: [[AND_2:%.*]] = and i32 [[SHL_2]], 2 +; CHECK-NEXT: [[OR_1:%.*]] = or i32 [[AND_2]], [[AND_1]] +; CHECK-NEXT: [[OR_2:%.*]] = or i32 [[OR_1]], [[SHL_1]] +; CHECK-NEXT: [[XOR_1:%.*]] = xor i32 [[B]], [[OR_2]] +; CHECK-NEXT: [[XOR_2:%.*]] = xor i32 [[XOR_1]], [[ARG]] +; CHECK-NEXT: [[SHR_2:%.*]] = lshr i32 [[SHL_1]], 1 +; CHECK-NEXT: [[XOR_3:%.*]] = xor i32 [[SHR]], [[ARG]] +; CHECK-NEXT: [[AND_3:%.*]] = and i32 [[XOR_3]], 1 +; CHECK-NEXT: [[AND_4:%.*]] = and i32 [[IV]], 2147483646 +; CHECK-NEXT: [[OR_3:%.*]] = or i32 [[AND_3]], [[AND_4]] +; CHECK-NEXT: [[AND_5:%.*]] = and i32 [[IV]], 254 +; CHECK-NEXT: [[SHL_3:%.*]] = shl i32 [[OR_3]], 1 +; CHECK-NEXT: [[XOR_4:%.*]] = xor i32 [[SHL_3]], 2 +; CHECK-NEXT: [[OR_4:%.*]] = or i32 [[AND_5]], [[XOR_4]] +; CHECK-NEXT: [[XOR_5:%.*]] = xor i32 [[SHR_2]], [[OR_4]] +; CHECK-NEXT: [[XOR_6:%.*]] = xor i32 [[XOR_5]], [[XOR_2]] +; CHECK-NEXT: [[AND_6:%.*]] = and i32 [[XOR_6]], 255 +; CHECK-NEXT: [[XOR_7:%.*]] = xor i32 [[AND_6]], 1 +; CHECK-NEXT: [[GEP:%.*]] = getelementptr i8, ptr [[A]], i32 [[XOR_7]] +; CHECK-NEXT: [[LD:%.*]] = load i8, ptr [[GEP]], align 1 +; CHECK-NEXT: [[ZEXT:%.*]] = zext i8 [[LD]] to i32 +; CHECK-NEXT: [[GEP_2:%.*]] = getelementptr i32, ptr null, i32 [[ZEXT]] +; CHECK-NEXT: store i32 0, ptr [[GEP_2]], align 4 +; CHECK-NEXT: [[IV_NEXT]] = add i32 [[IV]], 1 +; CHECK-NEXT: [[CMP:%.*]] = icmp eq i32 [[IV_NEXT]], 100 +; CHECK-NEXT: br i1 [[CMP]], label %[[EXIT:.*]], label %[[LOOP]] +; CHECK: [[EXIT]]: +; CHECK-NEXT: ret void +; +entry: + br label %loop + +loop: + %iv = phi i32 [ %iv.next, %loop ], [ %arg, %entry ] + %shr = lshr i32 %iv, 1 + %and.1 = and i32 %iv, 1 + %shl.1 = shl i32 %iv, 2 + %shl.2 = shl i32 %iv, 1 + %and.2 = and i32 %shl.2, 2 + %or.1 = or i32 %and.2, %and.1 + %or.2 = or i32 %or.1, %shl.1 + %xor.1 = xor i32 %b, %or.2 + %xor.2 = xor i32 %xor.1, %arg + %shr.2 = lshr i32 %shl.1, 1 + %xor.3 = xor i32 %shr, %arg + %and.3 = and i32 %xor.3, 1 + %and.4 = and i32 %iv, 2147483646 + %or.3 = or i32 %and.3, %and.4 + %and.5 = and i32 %iv, 254 + %shl.3 = shl i32 %or.3, 1 + %xor.4 = xor i32 %shl.3, 2 + %or.4 = or i32 %and.5, %xor.4 + %xor.5 = xor i32 %shr.2, %or.4 + %xor.6 = xor i32 %xor.5, %xor.2 + %and.6 = and i32 %xor.6, 255 + %xor.7 = xor i32 %and.6, 1 + %gep = getelementptr i8, ptr %a, i32 %xor.7 + %ld = load i8, ptr %gep, align 1 + %zext = zext i8 %ld to i32 + %gep.2 = getelementptr i32, ptr null, i32 %zext + store i32 0, ptr %gep.2, align 4 + 
%iv.next = add i32 %iv, 1 + %cmp = icmp eq i32 %iv.next, 100 + br i1 %cmp, label %exit, label %loop + +exit: + ret void +} diff --git a/llvm/test/Transforms/LoopVectorize/X86/replicating-load-store-costs.ll b/llvm/test/Transforms/LoopVectorize/X86/replicating-load-store-costs.ll index 8784873..f5329cf 100644 --- a/llvm/test/Transforms/LoopVectorize/X86/replicating-load-store-costs.ll +++ b/llvm/test/Transforms/LoopVectorize/X86/replicating-load-store-costs.ll @@ -454,6 +454,132 @@ exit: ret void } +declare i1 @cond() + +define double @test_load_used_by_other_load_scev(ptr %ptr.a, ptr %ptr.b, ptr %ptr.c) { +; I64-LABEL: define double @test_load_used_by_other_load_scev( +; I64-SAME: ptr [[PTR_A:%.*]], ptr [[PTR_B:%.*]], ptr [[PTR_C:%.*]]) { +; I64-NEXT: [[ENTRY:.*]]: +; I64-NEXT: br label %[[OUTER_LOOP:.*]] +; I64: [[OUTER_LOOP_LOOPEXIT:.*]]: +; I64-NEXT: br label %[[OUTER_LOOP]] +; I64: [[OUTER_LOOP]]: +; I64-NEXT: [[ACCUM:%.*]] = phi double [ 0.000000e+00, %[[ENTRY]] ], [ [[TMP29:%.*]], %[[OUTER_LOOP_LOOPEXIT]] ] +; I64-NEXT: [[COND:%.*]] = call i1 @cond() +; I64-NEXT: br i1 [[COND]], label %[[INNER_LOOP_PREHEADER:.*]], label %[[EXIT:.*]] +; I64: [[INNER_LOOP_PREHEADER]]: +; I64-NEXT: br label %[[VECTOR_PH:.*]] +; I64: [[VECTOR_PH]]: +; I64-NEXT: br label %[[VECTOR_BODY:.*]] +; I64: [[VECTOR_BODY]]: +; I64-NEXT: [[TMP0:%.*]] = add i64 0, 1 +; I64-NEXT: [[TMP1:%.*]] = add i64 1, 1 +; I64-NEXT: [[TMP2:%.*]] = getelementptr i8, ptr [[PTR_C]], i64 [[TMP0]] +; I64-NEXT: [[TMP3:%.*]] = getelementptr i8, ptr [[PTR_C]], i64 [[TMP1]] +; I64-NEXT: [[TMP4:%.*]] = getelementptr i64, ptr [[PTR_A]], i64 [[TMP0]] +; I64-NEXT: [[TMP5:%.*]] = getelementptr i64, ptr [[PTR_A]], i64 [[TMP1]] +; I64-NEXT: [[TMP6:%.*]] = load i64, ptr [[TMP4]], align 8 +; I64-NEXT: [[TMP7:%.*]] = load i64, ptr [[TMP5]], align 8 +; I64-NEXT: [[TMP8:%.*]] = getelementptr double, ptr [[PTR_B]], i64 [[TMP6]] +; I64-NEXT: [[TMP9:%.*]] = getelementptr double, ptr [[PTR_B]], i64 [[TMP7]] +; I64-NEXT: [[TMP10:%.*]] = load double, ptr [[PTR_A]], align 8 +; I64-NEXT: [[BROADCAST_SPLATINSERT:%.*]] = insertelement <2 x double> poison, double [[TMP10]], i64 0 +; I64-NEXT: [[BROADCAST_SPLAT:%.*]] = shufflevector <2 x double> [[BROADCAST_SPLATINSERT]], <2 x double> poison, <2 x i32> zeroinitializer +; I64-NEXT: [[TMP11:%.*]] = fadd <2 x double> [[BROADCAST_SPLAT]], zeroinitializer +; I64-NEXT: [[TMP12:%.*]] = getelementptr i8, ptr [[TMP2]], i64 8 +; I64-NEXT: [[TMP13:%.*]] = getelementptr i8, ptr [[TMP3]], i64 8 +; I64-NEXT: [[TMP14:%.*]] = load double, ptr [[TMP12]], align 8 +; I64-NEXT: [[TMP15:%.*]] = load double, ptr [[TMP13]], align 8 +; I64-NEXT: [[TMP16:%.*]] = insertelement <2 x double> poison, double [[TMP14]], i32 0 +; I64-NEXT: [[TMP17:%.*]] = insertelement <2 x double> [[TMP16]], double [[TMP15]], i32 1 +; I64-NEXT: [[TMP18:%.*]] = fmul <2 x double> [[TMP11]], zeroinitializer +; I64-NEXT: [[BROADCAST_SPLATINSERT1:%.*]] = insertelement <2 x double> poison, double [[ACCUM]], i64 0 +; I64-NEXT: [[BROADCAST_SPLAT2:%.*]] = shufflevector <2 x double> [[BROADCAST_SPLATINSERT1]], <2 x double> poison, <2 x i32> zeroinitializer +; I64-NEXT: [[TMP19:%.*]] = shufflevector <2 x double> [[BROADCAST_SPLAT2]], <2 x double> [[TMP18]], <2 x i32> <i32 1, i32 2> +; I64-NEXT: [[TMP20:%.*]] = fmul <2 x double> [[TMP17]], zeroinitializer +; I64-NEXT: [[TMP21:%.*]] = fadd <2 x double> [[TMP20]], zeroinitializer +; I64-NEXT: [[TMP22:%.*]] = fadd <2 x double> [[TMP21]], splat (double 1.000000e+00) +; I64-NEXT: [[TMP23:%.*]] = load double, ptr [[TMP8]], 
align 8 +; I64-NEXT: [[TMP24:%.*]] = load double, ptr [[TMP9]], align 8 +; I64-NEXT: [[TMP25:%.*]] = insertelement <2 x double> poison, double [[TMP23]], i32 0 +; I64-NEXT: [[TMP26:%.*]] = insertelement <2 x double> [[TMP25]], double [[TMP24]], i32 1 +; I64-NEXT: [[TMP27:%.*]] = fdiv <2 x double> [[TMP26]], [[TMP22]] +; I64-NEXT: [[TMP28:%.*]] = fsub <2 x double> [[TMP19]], [[TMP27]] +; I64-NEXT: br label %[[MIDDLE_BLOCK:.*]] +; I64: [[MIDDLE_BLOCK]]: +; I64-NEXT: [[TMP29]] = extractelement <2 x double> [[TMP28]], i32 1 +; I64-NEXT: br label %[[OUTER_LOOP_LOOPEXIT]] +; I64: [[EXIT]]: +; I64-NEXT: ret double [[ACCUM]] +; +; I32-LABEL: define double @test_load_used_by_other_load_scev( +; I32-SAME: ptr [[PTR_A:%.*]], ptr [[PTR_B:%.*]], ptr [[PTR_C:%.*]]) { +; I32-NEXT: [[ENTRY:.*]]: +; I32-NEXT: br label %[[OUTER_LOOP:.*]] +; I32: [[OUTER_LOOP]]: +; I32-NEXT: [[ACCUM:%.*]] = phi double [ 0.000000e+00, %[[ENTRY]] ], [ [[RESULT:%.*]], %[[INNER_LOOP:.*]] ] +; I32-NEXT: [[COND:%.*]] = call i1 @cond() +; I32-NEXT: br i1 [[COND]], label %[[INNER_LOOP]], label %[[EXIT:.*]] +; I32: [[INNER_LOOP]]: +; I32-NEXT: [[IV:%.*]] = phi i64 [ 0, %[[OUTER_LOOP]] ], [ [[IV_NEXT:%.*]], %[[INNER_LOOP]] ] +; I32-NEXT: [[ACCUM_INNER:%.*]] = phi double [ [[ACCUM]], %[[OUTER_LOOP]] ], [ [[MUL1:%.*]], %[[INNER_LOOP]] ] +; I32-NEXT: [[IDX_PLUS1:%.*]] = add i64 [[IV]], 1 +; I32-NEXT: [[GEP_C:%.*]] = getelementptr i8, ptr [[PTR_C]], i64 [[IDX_PLUS1]] +; I32-NEXT: [[GEP_A_I64:%.*]] = getelementptr i64, ptr [[PTR_A]], i64 [[IDX_PLUS1]] +; I32-NEXT: [[LOAD_IDX:%.*]] = load i64, ptr [[GEP_A_I64]], align 8 +; I32-NEXT: [[GEP_B:%.*]] = getelementptr double, ptr [[PTR_B]], i64 [[LOAD_IDX]] +; I32-NEXT: [[LOAD_A:%.*]] = load double, ptr [[PTR_A]], align 8 +; I32-NEXT: [[ADD1:%.*]] = fadd double [[LOAD_A]], 0.000000e+00 +; I32-NEXT: [[GEP_C_OFFSET:%.*]] = getelementptr i8, ptr [[GEP_C]], i64 8 +; I32-NEXT: [[LOAD_C:%.*]] = load double, ptr [[GEP_C_OFFSET]], align 8 +; I32-NEXT: [[MUL1]] = fmul double [[ADD1]], 0.000000e+00 +; I32-NEXT: [[MUL2:%.*]] = fmul double [[LOAD_C]], 0.000000e+00 +; I32-NEXT: [[ADD2:%.*]] = fadd double [[MUL2]], 0.000000e+00 +; I32-NEXT: [[ADD3:%.*]] = fadd double [[ADD2]], 1.000000e+00 +; I32-NEXT: [[LOAD_B:%.*]] = load double, ptr [[GEP_B]], align 8 +; I32-NEXT: [[DIV:%.*]] = fdiv double [[LOAD_B]], [[ADD3]] +; I32-NEXT: [[RESULT]] = fsub double [[ACCUM_INNER]], [[DIV]] +; I32-NEXT: [[IV_NEXT]] = add i64 [[IV]], 1 +; I32-NEXT: [[EXITCOND:%.*]] = icmp eq i64 [[IV]], 1 +; I32-NEXT: br i1 [[EXITCOND]], label %[[OUTER_LOOP]], label %[[INNER_LOOP]] +; I32: [[EXIT]]: +; I32-NEXT: ret double [[ACCUM]] +; +entry: + br label %outer.loop + +outer.loop: + %accum = phi double [ 0.0, %entry ], [ %result, %inner.loop ] + %cond = call i1 @cond() + br i1 %cond, label %inner.loop, label %exit + +inner.loop: + %iv = phi i64 [ 0, %outer.loop ], [ %iv.next, %inner.loop ] + %accum.inner = phi double [ %accum, %outer.loop ], [ %mul1, %inner.loop ] + %idx.plus1 = add i64 %iv, 1 + %gep.c = getelementptr i8, ptr %ptr.c, i64 %idx.plus1 + %gep.a.i64 = getelementptr i64, ptr %ptr.a, i64 %idx.plus1 + %load.idx = load i64, ptr %gep.a.i64, align 8 + %gep.b = getelementptr double, ptr %ptr.b, i64 %load.idx + %load.a = load double, ptr %ptr.a, align 8 + %add1 = fadd double %load.a, 0.000000e+00 + %gep.c.offset = getelementptr i8, ptr %gep.c, i64 8 + %load.c = load double, ptr %gep.c.offset, align 8 + %mul1 = fmul double %add1, 0.000000e+00 + %mul2 = fmul double %load.c, 0.000000e+00 + %add2 = fadd double %mul2, 0.000000e+00 + %add3 = 
fadd double %add2, 1.000000e+00 + %load.b = load double, ptr %gep.b, align 8 + %div = fdiv double %load.b, %add3 + %result = fsub double %accum.inner, %div + %iv.next = add i64 %iv, 1 + %exitcond = icmp eq i64 %iv, 1 + br i1 %exitcond, label %outer.loop, label %inner.loop + +exit: + ret double %accum +} + attributes #0 = { "target-cpu"="znver2" } !0 = distinct !{!0, !1} diff --git a/llvm/test/Transforms/SCCP/relax-range-checks.ll b/llvm/test/Transforms/SCCP/relax-range-checks.ll index 90722f3..34e4813 100644 --- a/llvm/test/Transforms/SCCP/relax-range-checks.ll +++ b/llvm/test/Transforms/SCCP/relax-range-checks.ll @@ -89,4 +89,28 @@ define i1 @relax_range_check_multiuse(i8 range(i8 0, 5) %x) { ret i1 %ret } +define i1 @range_check_to_icmp_eq1(i32 range(i32 0, 4) %x) { +; CHECK-LABEL: define i1 @range_check_to_icmp_eq1( +; CHECK-SAME: i32 range(i32 0, 4) [[X:%.*]]) { +; CHECK-NEXT: [[OFF:%.*]] = add nsw i32 [[X]], -3 +; CHECK-NEXT: [[TMP1:%.*]] = icmp eq i32 [[X]], 3 +; CHECK-NEXT: ret i1 [[TMP1]] +; + %off = add nsw i32 %x, -3 + %cmp = icmp ult i32 %off, 2 + ret i1 %cmp +} + +define i1 @range_check_to_icmp_eq2(i32 range(i32 -1, 2) %x) { +; CHECK-LABEL: define i1 @range_check_to_icmp_eq2( +; CHECK-SAME: i32 range(i32 -1, 2) [[X:%.*]]) { +; CHECK-NEXT: [[OFF:%.*]] = add nsw i32 [[X]], -1 +; CHECK-NEXT: [[CMP:%.*]] = icmp eq i32 [[X]], 1 +; CHECK-NEXT: ret i1 [[CMP]] +; + %off = add nsw i32 %x, -1 + %cmp = icmp ult i32 %off, -2 + ret i1 %cmp +} + declare void @use(i8) diff --git a/llvm/test/Transforms/SLPVectorizer/RISCV/strided-loads-with-external-indices.ll b/llvm/test/Transforms/SLPVectorizer/RISCV/strided-loads-with-external-indices.ll index 655db54..a079203 100644 --- a/llvm/test/Transforms/SLPVectorizer/RISCV/strided-loads-with-external-indices.ll +++ b/llvm/test/Transforms/SLPVectorizer/RISCV/strided-loads-with-external-indices.ll @@ -10,14 +10,10 @@ define void @test() { ; CHECK-NEXT: [[SUB4_I_I65_US:%.*]] = or i64 0, 1 ; CHECK-NEXT: br label [[BODY:%.*]] ; CHECK: body: -; CHECK-NEXT: [[ADD_I_I62_US:%.*]] = shl i64 0, 0 -; CHECK-NEXT: [[TMP0:%.*]] = insertelement <2 x i64> <i64 poison, i64 1>, i64 [[ADD_I_I62_US]], i32 0 -; CHECK-NEXT: [[TMP1:%.*]] = or <2 x i64> zeroinitializer, [[TMP0]] -; CHECK-NEXT: [[TMP2:%.*]] = getelementptr [[CLASS_A:%.*]], <2 x ptr> zeroinitializer, <2 x i64> [[TMP1]] -; CHECK-NEXT: [[TMP3:%.*]] = call <2 x i32> @llvm.masked.gather.v2i32.v2p0(<2 x ptr> [[TMP2]], i32 4, <2 x i1> splat (i1 true), <2 x i32> poison) -; CHECK-NEXT: [[TMP4:%.*]] = extractelement <2 x i32> [[TMP3]], i32 0 -; CHECK-NEXT: [[TMP5:%.*]] = extractelement <2 x i32> [[TMP3]], i32 1 -; CHECK-NEXT: [[CMP_I_I_I_I67_US:%.*]] = icmp slt i32 [[TMP4]], [[TMP5]] +; CHECK-NEXT: [[TMP0:%.*]] = call <2 x i32> @llvm.masked.gather.v2i32.v2p0(<2 x ptr> getelementptr ([[CLASS_A:%.*]], <2 x ptr> zeroinitializer, <2 x i64> <i64 0, i64 1>), i32 4, <2 x i1> splat (i1 true), <2 x i32> poison) +; CHECK-NEXT: [[TMP1:%.*]] = extractelement <2 x i32> [[TMP0]], i32 0 +; CHECK-NEXT: [[TMP2:%.*]] = extractelement <2 x i32> [[TMP0]], i32 1 +; CHECK-NEXT: [[CMP_I_I_I_I67_US:%.*]] = icmp slt i32 [[TMP1]], [[TMP2]] ; CHECK-NEXT: [[SPEC_SELECT_I_I68_US:%.*]] = select i1 false, i64 [[SUB4_I_I65_US]], i64 0 ; CHECK-NEXT: br label [[BODY]] ; diff --git a/llvm/test/Transforms/SLPVectorizer/X86/ext-used-scalar-different-bitwidth.ll b/llvm/test/Transforms/SLPVectorizer/X86/ext-used-scalar-different-bitwidth.ll index 7758596..87f2cca 100644 --- a/llvm/test/Transforms/SLPVectorizer/X86/ext-used-scalar-different-bitwidth.ll +++ 
b/llvm/test/Transforms/SLPVectorizer/X86/ext-used-scalar-different-bitwidth.ll @@ -8,8 +8,8 @@ define i32 @test() { ; CHECK-NEXT: [[ENTRY:.*:]] ; CHECK-NEXT: store i32 152, ptr @f, align 4 ; CHECK-NEXT: [[AGG_TMP_SROA_0_0_COPYLOAD_I:%.*]] = load i32, ptr @f, align 4 -; CHECK-NEXT: [[ADD_I_I:%.*]] = shl i32 [[AGG_TMP_SROA_0_0_COPYLOAD_I]], 24 -; CHECK-NEXT: [[TMP0:%.*]] = insertelement <8 x i32> <i32 poison, i32 83886080, i32 83886080, i32 83886080, i32 83886080, i32 83886080, i32 83886080, i32 83886080>, i32 [[ADD_I_I]], i32 0 +; CHECK-NEXT: [[TMP3:%.*]] = insertelement <8 x i32> <i32 poison, i32 83886080, i32 83886080, i32 83886080, i32 83886080, i32 83886080, i32 83886080, i32 83886080>, i32 [[AGG_TMP_SROA_0_0_COPYLOAD_I]], i32 0 +; CHECK-NEXT: [[TMP0:%.*]] = shl <8 x i32> [[TMP3]], <i32 24, i32 0, i32 0, i32 0, i32 0, i32 0, i32 0, i32 0> ; CHECK-NEXT: [[TMP1:%.*]] = add <8 x i32> <i32 83886080, i32 0, i32 0, i32 0, i32 0, i32 0, i32 0, i32 0>, [[TMP0]] ; CHECK-NEXT: [[TMP2:%.*]] = ashr <8 x i32> [[TMP1]], splat (i32 24) ; CHECK-NEXT: [[TMP5:%.*]] = and <8 x i32> [[TMP2]], <i32 66440127, i32 1, i32 1, i32 1, i32 1, i32 1, i32 1, i32 1> diff --git a/llvm/test/Transforms/SLPVectorizer/X86/vect_copyable_in_binops.ll b/llvm/test/Transforms/SLPVectorizer/X86/vect_copyable_in_binops.ll index 75aec45..3e0a374 100644 --- a/llvm/test/Transforms/SLPVectorizer/X86/vect_copyable_in_binops.ll +++ b/llvm/test/Transforms/SLPVectorizer/X86/vect_copyable_in_binops.ll @@ -247,32 +247,12 @@ entry: } define void @shl0(ptr noalias %dst, ptr noalias %src) { -; NON-POW2-LABEL: @shl0( -; NON-POW2-NEXT: entry: -; NON-POW2-NEXT: [[INCDEC_PTR:%.*]] = getelementptr inbounds i32, ptr [[SRC:%.*]], i64 1 -; NON-POW2-NEXT: [[TMP0:%.*]] = load i32, ptr [[SRC]], align 4 -; NON-POW2-NEXT: [[INCDEC_PTR1:%.*]] = getelementptr inbounds i32, ptr [[DST:%.*]], i64 1 -; NON-POW2-NEXT: store i32 [[TMP0]], ptr [[DST]], align 4 -; NON-POW2-NEXT: [[TMP1:%.*]] = load <3 x i32>, ptr [[INCDEC_PTR]], align 4 -; NON-POW2-NEXT: [[TMP2:%.*]] = shl <3 x i32> [[TMP1]], <i32 1, i32 2, i32 3> -; NON-POW2-NEXT: store <3 x i32> [[TMP2]], ptr [[INCDEC_PTR1]], align 4 -; NON-POW2-NEXT: ret void -; -; POW2-ONLY-LABEL: @shl0( -; POW2-ONLY-NEXT: entry: -; POW2-ONLY-NEXT: [[INCDEC_PTR:%.*]] = getelementptr inbounds i32, ptr [[SRC:%.*]], i64 1 -; POW2-ONLY-NEXT: [[TMP0:%.*]] = load i32, ptr [[SRC]], align 4 -; POW2-ONLY-NEXT: [[INCDEC_PTR1:%.*]] = getelementptr inbounds i32, ptr [[DST:%.*]], i64 1 -; POW2-ONLY-NEXT: store i32 [[TMP0]], ptr [[DST]], align 4 -; POW2-ONLY-NEXT: [[INCDEC_PTR4:%.*]] = getelementptr inbounds i32, ptr [[SRC]], i64 3 -; POW2-ONLY-NEXT: [[INCDEC_PTR6:%.*]] = getelementptr inbounds i32, ptr [[DST]], i64 3 -; POW2-ONLY-NEXT: [[TMP1:%.*]] = load <2 x i32>, ptr [[INCDEC_PTR]], align 4 -; POW2-ONLY-NEXT: [[TMP2:%.*]] = shl <2 x i32> [[TMP1]], <i32 1, i32 2> -; POW2-ONLY-NEXT: store <2 x i32> [[TMP2]], ptr [[INCDEC_PTR1]], align 4 -; POW2-ONLY-NEXT: [[TMP3:%.*]] = load i32, ptr [[INCDEC_PTR4]], align 4 -; POW2-ONLY-NEXT: [[SHL8:%.*]] = shl i32 [[TMP3]], 3 -; POW2-ONLY-NEXT: store i32 [[SHL8]], ptr [[INCDEC_PTR6]], align 4 -; POW2-ONLY-NEXT: ret void +; CHECK-LABEL: @shl0( +; CHECK-NEXT: entry: +; CHECK-NEXT: [[TMP0:%.*]] = load <4 x i32>, ptr [[SRC:%.*]], align 4 +; CHECK-NEXT: [[TMP1:%.*]] = shl <4 x i32> [[TMP0]], <i32 0, i32 1, i32 2, i32 3> +; CHECK-NEXT: store <4 x i32> [[TMP1]], ptr [[DST:%.*]], align 4 +; CHECK-NEXT: ret void ; entry: %incdec.ptr = getelementptr inbounds i32, ptr %src, i64 1 diff --git 
a/llvm/test/Transforms/SLPVectorizer/bool-logical-op-reduction-with-poison.ll b/llvm/test/Transforms/SLPVectorizer/bool-logical-op-reduction-with-poison.ll index a5b1e9b..769b360 100644 --- a/llvm/test/Transforms/SLPVectorizer/bool-logical-op-reduction-with-poison.ll +++ b/llvm/test/Transforms/SLPVectorizer/bool-logical-op-reduction-with-poison.ll @@ -1,25 +1,44 @@ ; NOTE: Assertions have been autogenerated by utils/update_test_checks.py UTC_ARGS: --version 3 -; RUN: %if x86-registered-target %{ opt -S --passes=slp-vectorizer < %s -mtriple=x86_64-unknown-linux-gnu | FileCheck %s %} -; RUN: %if aarch64-registered-target %{ opt -S --passes=slp-vectorizer < %s -mtriple=aarch64-unknown-linux-gnu | FileCheck %s %} +; RUN: %if x86-registered-target %{ opt -S --passes=slp-vectorizer < %s -mtriple=x86_64-unknown-linux-gnu | FileCheck %s --check-prefix=X86 %} +; RUN: %if aarch64-registered-target %{ opt -S --passes=slp-vectorizer < %s -mtriple=aarch64-unknown-linux-gnu | FileCheck %s --check-prefix=AARCH64 %} define i1 @test(i32 %0, i32 %1, i32 %p) { -; CHECK-LABEL: define i1 @test( -; CHECK-SAME: i32 [[TMP0:%.*]], i32 [[TMP1:%.*]], i32 [[P:%.*]]) { -; CHECK-NEXT: entry: -; CHECK-NEXT: [[CMP1:%.*]] = icmp sgt i32 [[TMP0]], 0 -; CHECK-NEXT: [[TMP2:%.*]] = insertelement <4 x i32> poison, i32 [[TMP1]], i32 0 -; CHECK-NEXT: [[TMP3:%.*]] = shufflevector <4 x i32> [[TMP2]], <4 x i32> poison, <4 x i32> zeroinitializer -; CHECK-NEXT: [[TMP4:%.*]] = shl <4 x i32> zeroinitializer, [[TMP3]] -; CHECK-NEXT: [[TMP5:%.*]] = icmp slt <4 x i32> [[TMP4]], zeroinitializer -; CHECK-NEXT: [[CMP6:%.*]] = icmp slt i32 0, [[P]] -; CHECK-NEXT: [[TMP6:%.*]] = freeze <4 x i1> [[TMP5]] -; CHECK-NEXT: [[TMP7:%.*]] = call i1 @llvm.vector.reduce.or.v4i1(<4 x i1> [[TMP6]]) -; CHECK-NEXT: [[OP_RDX:%.*]] = select i1 [[TMP7]], i1 true, i1 [[CMP6]] -; CHECK-NEXT: [[OP_RDX1:%.*]] = select i1 [[CMP1]], i1 true, i1 [[CMP1]] -; CHECK-NEXT: [[TMP8:%.*]] = freeze i1 [[OP_RDX]] -; CHECK-NEXT: [[OP_RDX2:%.*]] = select i1 [[TMP8]], i1 true, i1 [[OP_RDX1]] -; CHECK-NEXT: ret i1 [[OP_RDX2]] +; X86-LABEL: define i1 @test( +; X86-SAME: i32 [[TMP0:%.*]], i32 [[TMP1:%.*]], i32 [[P:%.*]]) { +; X86-NEXT: entry: +; X86-NEXT: [[CMP1:%.*]] = icmp sgt i32 [[TMP0]], 0 +; X86-NEXT: [[TMP2:%.*]] = insertelement <4 x i32> poison, i32 [[TMP1]], i32 0 +; X86-NEXT: [[TMP3:%.*]] = shufflevector <4 x i32> [[TMP2]], <4 x i32> poison, <4 x i32> zeroinitializer +; X86-NEXT: [[TMP4:%.*]] = shl <4 x i32> zeroinitializer, [[TMP3]] +; X86-NEXT: [[TMP5:%.*]] = icmp slt <4 x i32> [[TMP4]], zeroinitializer +; X86-NEXT: [[CMP6:%.*]] = icmp slt i32 0, [[P]] +; X86-NEXT: [[TMP6:%.*]] = freeze <4 x i1> [[TMP5]] +; X86-NEXT: [[TMP7:%.*]] = call i1 @llvm.vector.reduce.or.v4i1(<4 x i1> [[TMP6]]) +; X86-NEXT: [[OP_RDX:%.*]] = select i1 [[TMP7]], i1 true, i1 [[CMP6]] +; X86-NEXT: [[OP_RDX1:%.*]] = select i1 [[CMP1]], i1 true, i1 [[CMP1]] +; X86-NEXT: [[TMP8:%.*]] = freeze i1 [[OP_RDX]] +; X86-NEXT: [[OP_RDX2:%.*]] = select i1 [[TMP8]], i1 true, i1 [[OP_RDX1]] +; X86-NEXT: ret i1 [[OP_RDX2]] +; +; AARCH64-LABEL: define i1 @test( +; AARCH64-SAME: i32 [[TMP0:%.*]], i32 [[TMP1:%.*]], i32 [[P:%.*]]) { +; AARCH64-NEXT: entry: +; AARCH64-NEXT: [[CMP1:%.*]] = icmp sgt i32 [[TMP0]], 0 +; AARCH64-NEXT: [[SHL4:%.*]] = shl i32 0, [[TMP1]] +; AARCH64-NEXT: [[CMP5:%.*]] = icmp slt i32 [[SHL4]], 0 +; AARCH64-NEXT: [[TMP2:%.*]] = insertelement <4 x i32> <i32 0, i32 poison, i32 poison, i32 poison>, i32 [[TMP1]], i32 1 +; AARCH64-NEXT: [[TMP3:%.*]] = shufflevector <4 x i32> [[TMP2]], <4 x i32> 
poison, <4 x i32> <i32 0, i32 1, i32 1, i32 1> +; AARCH64-NEXT: [[TMP4:%.*]] = shl <4 x i32> zeroinitializer, [[TMP3]] +; AARCH64-NEXT: [[TMP5:%.*]] = insertelement <4 x i32> <i32 poison, i32 0, i32 0, i32 0>, i32 [[P]], i32 0 +; AARCH64-NEXT: [[TMP6:%.*]] = icmp slt <4 x i32> [[TMP4]], [[TMP5]] +; AARCH64-NEXT: [[TMP7:%.*]] = freeze <4 x i1> [[TMP6]] +; AARCH64-NEXT: [[TMP8:%.*]] = call i1 @llvm.vector.reduce.or.v4i1(<4 x i1> [[TMP7]]) +; AARCH64-NEXT: [[OP_RDX:%.*]] = select i1 [[TMP8]], i1 true, i1 [[CMP5]] +; AARCH64-NEXT: [[OP_RDX1:%.*]] = select i1 [[CMP1]], i1 true, i1 [[CMP1]] +; AARCH64-NEXT: [[TMP9:%.*]] = freeze i1 [[OP_RDX]] +; AARCH64-NEXT: [[OP_RDX2:%.*]] = select i1 [[TMP9]], i1 true, i1 [[OP_RDX1]] +; AARCH64-NEXT: ret i1 [[OP_RDX2]] ; entry: %cmp1 = icmp sgt i32 %0, 0 diff --git a/llvm/test/Transforms/SimplifyCFG/indirectbr.ll b/llvm/test/Transforms/SimplifyCFG/indirectbr.ll index 87d8b39..2fa36b0 100644 --- a/llvm/test/Transforms/SimplifyCFG/indirectbr.ll +++ b/llvm/test/Transforms/SimplifyCFG/indirectbr.ll @@ -1,4 +1,4 @@ -; NOTE: Assertions have been autogenerated by utils/update_test_checks.py +; NOTE: Assertions have been autogenerated by utils/update_test_checks.py UTC_ARGS: --check-globals ; RUN: opt -S -passes=simplifycfg -simplifycfg-require-and-preserve-domtree=1 < %s | FileCheck %s ; SimplifyCFG should eliminate redundant indirectbr edges. @@ -8,7 +8,11 @@ declare void @A() declare void @B(i32) declare void @C() -define void @indbrtest0(ptr %P, ptr %Q) { +;. +; CHECK: @anchor = constant [13 x ptr] [ptr blockaddress(@indbrtest3, %L1), ptr blockaddress(@indbrtest3, %L2), ptr inttoptr (i32 1 to ptr), ptr blockaddress(@indbrtest4, %L1), ptr inttoptr (i32 1 to ptr), ptr inttoptr (i32 1 to ptr), ptr inttoptr (i32 1 to ptr), ptr inttoptr (i32 1 to ptr), ptr inttoptr (i32 1 to ptr), ptr inttoptr (i32 1 to ptr), ptr inttoptr (i32 1 to ptr), ptr inttoptr (i32 1 to ptr), ptr inttoptr (i32 1 to ptr)] +; CHECK: @xblkx.bbs = internal unnamed_addr constant [9 x ptr] [ptr blockaddress(@indbrtest7, %xlab4x), ptr blockaddress(@indbrtest7, %xlab4x), ptr blockaddress(@indbrtest7, %v2j), ptr blockaddress(@indbrtest7, %xlab4x), ptr blockaddress(@indbrtest7, %xlab4x), ptr blockaddress(@indbrtest7, %xlab4x), ptr blockaddress(@indbrtest7, %xlab4x), ptr blockaddress(@indbrtest7, %xlab4x), ptr blockaddress(@indbrtest7, %v2j)] +;. +define void @indbrtest0(ptr %P, ptr %Q) !prof !0 { ; CHECK-LABEL: @indbrtest0( ; CHECK-NEXT: entry: ; CHECK-NEXT: store ptr blockaddress(@indbrtest0, [[BB0:%.*]]), ptr [[P:%.*]], align 8 @@ -16,7 +20,7 @@ define void @indbrtest0(ptr %P, ptr %Q) { ; CHECK-NEXT: store ptr blockaddress(@indbrtest0, [[BB2:%.*]]), ptr [[P]], align 8 ; CHECK-NEXT: call void @foo() ; CHECK-NEXT: [[T:%.*]] = load ptr, ptr [[Q:%.*]], align 8 -; CHECK-NEXT: indirectbr ptr [[T]], [label [[BB0]], label [[BB1]], label %BB2] +; CHECK-NEXT: indirectbr ptr [[T]], [label [[BB0]], label [[BB1]], label %BB2], !prof [[PROF1:![0-9]+]] ; CHECK: BB0: ; CHECK-NEXT: call void @A() ; CHECK-NEXT: br label [[BB1]] @@ -36,7 +40,7 @@ entry: store ptr blockaddress(@indbrtest0, %BB2), ptr %P call void @foo() %t = load ptr, ptr %Q - indirectbr ptr %t, [label %BB0, label %BB1, label %BB2, label %BB0, label %BB1, label %BB2] + indirectbr ptr %t, [label %BB0, label %BB1, label %BB2, label %BB0, label %BB1, label %BB2], !prof !1 BB0: call void @A() br label %BB1 @@ -103,10 +107,10 @@ BB0: ; SimplifyCFG should turn the indirectbr into a conditional branch on the ; condition of the select. 
-define void @indbrtest3(i1 %cond, ptr %address) nounwind { +define void @indbrtest3(i1 %cond, ptr %address) nounwind !prof !0 { ; CHECK-LABEL: @indbrtest3( ; CHECK-NEXT: entry: -; CHECK-NEXT: br i1 [[COND:%.*]], label [[L1:%.*]], label [[L2:%.*]] +; CHECK-NEXT: br i1 [[COND:%.*]], label [[L1:%.*]], label [[L2:%.*]], !prof [[PROF2:![0-9]+]] ; CHECK: common.ret: ; CHECK-NEXT: ret void ; CHECK: L1: @@ -117,8 +121,8 @@ define void @indbrtest3(i1 %cond, ptr %address) nounwind { ; CHECK-NEXT: br label [[COMMON_RET]] ; entry: - %indirect.goto.dest = select i1 %cond, ptr blockaddress(@indbrtest3, %L1), ptr blockaddress(@indbrtest3, %L2) - indirectbr ptr %indirect.goto.dest, [label %L1, label %L2, label %L3] + %indirect.goto.dest = select i1 %cond, ptr blockaddress(@indbrtest3, %L1), ptr blockaddress(@indbrtest3, %L2), !prof !2 + indirectbr ptr %indirect.goto.dest, [label %L1, label %L2, label %L3], !prof !3 L1: call void @A() @@ -385,3 +389,15 @@ declare i32 @xfunc5x() declare i8 @xfunc7x() declare i32 @xselectorx() declare i32 @xactionx() + +!0 = !{!"function_entry_count", i32 10} +!1 = !{!"branch_weights", i32 3, i32 5, i32 7, i32 11, i32 13, i32 17} +!2 = !{!"branch_weights", i32 3, i32 5} +!3 = !{!"branch_weights", i32 3, i32 5, i32 7} +;. +; CHECK: attributes #[[ATTR0:[0-9]+]] = { nounwind } +;. +; CHECK: [[META0:![0-9]+]] = !{!"function_entry_count", i32 10} +; CHECK: [[PROF1]] = !{!"branch_weights", i32 14, i32 18, i32 24} +; CHECK: [[PROF2]] = !{!"branch_weights", i32 3, i32 5} +;. diff --git a/llvm/test/tools/llvm-exegesis/AArch64/no-aliasing-ld-str.s b/llvm/test/tools/llvm-exegesis/AArch64/no-aliasing-ld-str.s new file mode 100644 index 0000000..c8a5746 --- /dev/null +++ b/llvm/test/tools/llvm-exegesis/AArch64/no-aliasing-ld-str.s @@ -0,0 +1,10 @@ +REQUIRES: aarch64-registered-target +// Flaky on SVE buildbots, disabled pending investigation.
+UNSUPPORTED: target={{.*}} + +RUN: llvm-exegesis -mtriple=aarch64 -mcpu=neoverse-v2 -mode=latency --dump-object-to-disk=%d --opcode-name=FMOVWSr --benchmark-phase=assemble-measured-code 2>&1 +RUN: llvm-objdump -d %d > %t.s +RUN: FileCheck %s < %t.s + +CHECK-NOT: ld{{[1-4]}} +CHECK-NOT: st{{[1-4]}} diff --git a/llvm/test/tools/llvm-mca/X86/Generic/resources-avx512vbmi2.s b/llvm/test/tools/llvm-mca/X86/Generic/resources-avx512vbmi2.s index d777d31..8e0d47e 100644 --- a/llvm/test/tools/llvm-mca/X86/Generic/resources-avx512vbmi2.s +++ b/llvm/test/tools/llvm-mca/X86/Generic/resources-avx512vbmi2.s @@ -153,12 +153,12 @@ vpshrdw $1, (%rax), %zmm17, %zmm19 {k1}{z} # CHECK-NEXT: 2 8 1.00 * vpcompressw %zmm16, (%rax) {%k1} # CHECK-NEXT: 1 1 1.00 vpcompressw %zmm16, %zmm19 {%k1} {z} # CHECK-NEXT: 1 1 1.00 U vpexpandb %zmm16, %zmm19 -# CHECK-NEXT: 2 8 1.00 U vpexpandb (%rax), %zmm19 +# CHECK-NEXT: 2 8 1.00 * U vpexpandb (%rax), %zmm19 # CHECK-NEXT: 1 1 1.00 vpexpandb %zmm16, %zmm19 {%k1} # CHECK-NEXT: 2 8 1.00 * vpexpandb (%rax), %zmm19 {%k1} # CHECK-NEXT: 1 1 1.00 vpexpandb %zmm16, %zmm19 {%k1} {z} # CHECK-NEXT: 1 1 1.00 U vpexpandw %zmm16, %zmm19 -# CHECK-NEXT: 2 8 1.00 U vpexpandw (%rax), %zmm19 +# CHECK-NEXT: 2 8 1.00 * U vpexpandw (%rax), %zmm19 # CHECK-NEXT: 1 1 1.00 vpexpandw %zmm16, %zmm19 {%k1} # CHECK-NEXT: 2 8 1.00 * vpexpandw (%rax), %zmm19 {%k1} # CHECK-NEXT: 1 1 1.00 vpexpandw %zmm16, %zmm19 {%k1} {z} diff --git a/llvm/test/tools/llvm-mca/X86/Generic/resources-avx512vbmi2vl.s b/llvm/test/tools/llvm-mca/X86/Generic/resources-avx512vbmi2vl.s index 99b88fe..f6be964 100644 --- a/llvm/test/tools/llvm-mca/X86/Generic/resources-avx512vbmi2vl.s +++ b/llvm/test/tools/llvm-mca/X86/Generic/resources-avx512vbmi2vl.s @@ -295,22 +295,22 @@ vpshrdw $1, (%rax), %ymm17, %ymm19 {k1}{z} # CHECK-NEXT: 2 8 1.00 * vpcompressw %ymm16, (%rax) {%k1} # CHECK-NEXT: 1 1 1.00 vpcompressw %ymm16, %ymm19 {%k1} {z} # CHECK-NEXT: 1 1 1.00 U vpexpandb %xmm16, %xmm19 -# CHECK-NEXT: 2 8 1.00 U vpexpandb (%rax), %xmm19 +# CHECK-NEXT: 2 8 1.00 * U vpexpandb (%rax), %xmm19 # CHECK-NEXT: 1 1 1.00 vpexpandb %xmm16, %xmm19 {%k1} # CHECK-NEXT: 2 8 1.00 * vpexpandb (%rax), %xmm19 {%k1} # CHECK-NEXT: 1 1 1.00 vpexpandb %xmm16, %xmm19 {%k1} {z} # CHECK-NEXT: 1 1 1.00 U vpexpandb %ymm16, %ymm19 -# CHECK-NEXT: 2 8 1.00 U vpexpandb (%rax), %ymm19 +# CHECK-NEXT: 2 8 1.00 * U vpexpandb (%rax), %ymm19 # CHECK-NEXT: 1 1 1.00 vpexpandb %ymm16, %ymm19 {%k1} # CHECK-NEXT: 2 8 1.00 * vpexpandb (%rax), %ymm19 {%k1} # CHECK-NEXT: 1 1 1.00 vpexpandb %ymm16, %ymm19 {%k1} {z} # CHECK-NEXT: 1 1 1.00 U vpexpandw %xmm16, %xmm19 -# CHECK-NEXT: 2 8 1.00 U vpexpandw (%rax), %xmm19 +# CHECK-NEXT: 2 8 1.00 * U vpexpandw (%rax), %xmm19 # CHECK-NEXT: 1 1 1.00 vpexpandw %xmm16, %xmm19 {%k1} # CHECK-NEXT: 2 8 1.00 * vpexpandw (%rax), %xmm19 {%k1} # CHECK-NEXT: 1 1 1.00 vpexpandw %xmm16, %xmm19 {%k1} {z} # CHECK-NEXT: 1 1 1.00 U vpexpandw %ymm16, %ymm19 -# CHECK-NEXT: 2 8 1.00 U vpexpandw (%rax), %ymm19 +# CHECK-NEXT: 2 8 1.00 * U vpexpandw (%rax), %ymm19 # CHECK-NEXT: 1 1 1.00 vpexpandw %ymm16, %ymm19 {%k1} # CHECK-NEXT: 2 8 1.00 * vpexpandw (%rax), %ymm19 {%k1} # CHECK-NEXT: 1 1 1.00 vpexpandw %ymm16, %ymm19 {%k1} {z} diff --git a/llvm/test/tools/llvm-mca/X86/IceLakeServer/resources-avx512vbmi2.s b/llvm/test/tools/llvm-mca/X86/IceLakeServer/resources-avx512vbmi2.s index 08f07dc..5c987ee 100644 --- a/llvm/test/tools/llvm-mca/X86/IceLakeServer/resources-avx512vbmi2.s +++ b/llvm/test/tools/llvm-mca/X86/IceLakeServer/resources-avx512vbmi2.s @@ -153,12 +153,12 @@ 
vpshrdw $1, (%rax), %zmm17, %zmm19 {k1}{z} # CHECK-NEXT: 2 10 1.00 * vpcompressw %zmm16, (%rax) {%k1} # CHECK-NEXT: 1 3 1.00 vpcompressw %zmm16, %zmm19 {%k1} {z} # CHECK-NEXT: 1 3 1.00 U vpexpandb %zmm16, %zmm19 -# CHECK-NEXT: 2 10 1.00 U vpexpandb (%rax), %zmm19 +# CHECK-NEXT: 2 10 1.00 * U vpexpandb (%rax), %zmm19 # CHECK-NEXT: 1 3 1.00 vpexpandb %zmm16, %zmm19 {%k1} # CHECK-NEXT: 2 10 1.00 * vpexpandb (%rax), %zmm19 {%k1} # CHECK-NEXT: 1 3 1.00 vpexpandb %zmm16, %zmm19 {%k1} {z} # CHECK-NEXT: 1 3 1.00 U vpexpandw %zmm16, %zmm19 -# CHECK-NEXT: 2 10 1.00 U vpexpandw (%rax), %zmm19 +# CHECK-NEXT: 2 10 1.00 * U vpexpandw (%rax), %zmm19 # CHECK-NEXT: 1 3 1.00 vpexpandw %zmm16, %zmm19 {%k1} # CHECK-NEXT: 2 10 1.00 * vpexpandw (%rax), %zmm19 {%k1} # CHECK-NEXT: 1 3 1.00 vpexpandw %zmm16, %zmm19 {%k1} {z} diff --git a/llvm/test/tools/llvm-mca/X86/IceLakeServer/resources-avx512vbmi2vl.s b/llvm/test/tools/llvm-mca/X86/IceLakeServer/resources-avx512vbmi2vl.s index 0194303..023026b 100644 --- a/llvm/test/tools/llvm-mca/X86/IceLakeServer/resources-avx512vbmi2vl.s +++ b/llvm/test/tools/llvm-mca/X86/IceLakeServer/resources-avx512vbmi2vl.s @@ -295,22 +295,22 @@ vpshrdw $1, (%rax), %ymm17, %ymm19 {k1}{z} # CHECK-NEXT: 2 10 1.00 * vpcompressw %ymm16, (%rax) {%k1} # CHECK-NEXT: 1 3 1.00 vpcompressw %ymm16, %ymm19 {%k1} {z} # CHECK-NEXT: 1 3 1.00 U vpexpandb %xmm16, %xmm19 -# CHECK-NEXT: 2 10 1.00 U vpexpandb (%rax), %xmm19 +# CHECK-NEXT: 2 10 1.00 * U vpexpandb (%rax), %xmm19 # CHECK-NEXT: 1 3 1.00 vpexpandb %xmm16, %xmm19 {%k1} # CHECK-NEXT: 2 10 1.00 * vpexpandb (%rax), %xmm19 {%k1} # CHECK-NEXT: 1 3 1.00 vpexpandb %xmm16, %xmm19 {%k1} {z} # CHECK-NEXT: 1 3 1.00 U vpexpandb %ymm16, %ymm19 -# CHECK-NEXT: 2 10 1.00 U vpexpandb (%rax), %ymm19 +# CHECK-NEXT: 2 10 1.00 * U vpexpandb (%rax), %ymm19 # CHECK-NEXT: 1 3 1.00 vpexpandb %ymm16, %ymm19 {%k1} # CHECK-NEXT: 2 10 1.00 * vpexpandb (%rax), %ymm19 {%k1} # CHECK-NEXT: 1 3 1.00 vpexpandb %ymm16, %ymm19 {%k1} {z} # CHECK-NEXT: 1 3 1.00 U vpexpandw %xmm16, %xmm19 -# CHECK-NEXT: 2 10 1.00 U vpexpandw (%rax), %xmm19 +# CHECK-NEXT: 2 10 1.00 * U vpexpandw (%rax), %xmm19 # CHECK-NEXT: 1 3 1.00 vpexpandw %xmm16, %xmm19 {%k1} # CHECK-NEXT: 2 10 1.00 * vpexpandw (%rax), %xmm19 {%k1} # CHECK-NEXT: 1 3 1.00 vpexpandw %xmm16, %xmm19 {%k1} {z} # CHECK-NEXT: 1 3 1.00 U vpexpandw %ymm16, %ymm19 -# CHECK-NEXT: 2 10 1.00 U vpexpandw (%rax), %ymm19 +# CHECK-NEXT: 2 10 1.00 * U vpexpandw (%rax), %ymm19 # CHECK-NEXT: 1 3 1.00 vpexpandw %ymm16, %ymm19 {%k1} # CHECK-NEXT: 2 10 1.00 * vpexpandw (%rax), %ymm19 {%k1} # CHECK-NEXT: 1 3 1.00 vpexpandw %ymm16, %ymm19 {%k1} {z} diff --git a/llvm/test/tools/llvm-mca/X86/SapphireRapids/resources-avx512vbmi2.s b/llvm/test/tools/llvm-mca/X86/SapphireRapids/resources-avx512vbmi2.s index ed8a417..db1f9af 100644 --- a/llvm/test/tools/llvm-mca/X86/SapphireRapids/resources-avx512vbmi2.s +++ b/llvm/test/tools/llvm-mca/X86/SapphireRapids/resources-avx512vbmi2.s @@ -153,12 +153,12 @@ vpshrdw $1, (%rax), %zmm17, %zmm19 {k1}{z} # CHECK-NEXT: 6 14 2.00 * vpcompressw %zmm16, (%rax) {%k1} # CHECK-NEXT: 2 6 2.00 vpcompressw %zmm16, %zmm19 {%k1} {z} # CHECK-NEXT: 2 3 2.00 U vpexpandb %zmm16, %zmm19 -# CHECK-NEXT: 3 11 2.00 U vpexpandb (%rax), %zmm19 +# CHECK-NEXT: 3 11 2.00 * U vpexpandb (%rax), %zmm19 # CHECK-NEXT: 2 8 2.00 vpexpandb %zmm16, %zmm19 {%k1} # CHECK-NEXT: 3 13 2.00 * vpexpandb (%rax), %zmm19 {%k1} # CHECK-NEXT: 2 8 2.00 vpexpandb %zmm16, %zmm19 {%k1} {z} # CHECK-NEXT: 2 3 2.00 U vpexpandw %zmm16, %zmm19 -# CHECK-NEXT: 3 11 2.00 U vpexpandw 
(%rax), %zmm19 +# CHECK-NEXT: 3 11 2.00 * U vpexpandw (%rax), %zmm19 # CHECK-NEXT: 2 8 2.00 vpexpandw %zmm16, %zmm19 {%k1} # CHECK-NEXT: 3 13 2.00 * vpexpandw (%rax), %zmm19 {%k1} # CHECK-NEXT: 2 8 2.00 vpexpandw %zmm16, %zmm19 {%k1} {z} diff --git a/llvm/test/tools/llvm-mca/X86/SapphireRapids/resources-avx512vbmi2vl.s b/llvm/test/tools/llvm-mca/X86/SapphireRapids/resources-avx512vbmi2vl.s index 3db09bc..9277a91 100644 --- a/llvm/test/tools/llvm-mca/X86/SapphireRapids/resources-avx512vbmi2vl.s +++ b/llvm/test/tools/llvm-mca/X86/SapphireRapids/resources-avx512vbmi2vl.s @@ -295,22 +295,22 @@ vpshrdw $1, (%rax), %ymm17, %ymm19 {k1}{z} # CHECK-NEXT: 6 14 2.00 * vpcompressw %ymm16, (%rax) {%k1} # CHECK-NEXT: 2 6 2.00 vpcompressw %ymm16, %ymm19 {%k1} {z} # CHECK-NEXT: 2 3 2.00 U vpexpandb %xmm16, %xmm19 -# CHECK-NEXT: 3 10 2.00 U vpexpandb (%rax), %xmm19 +# CHECK-NEXT: 3 10 2.00 * U vpexpandb (%rax), %xmm19 # CHECK-NEXT: 2 8 2.00 vpexpandb %xmm16, %xmm19 {%k1} # CHECK-NEXT: 3 13 2.00 * vpexpandb (%rax), %xmm19 {%k1} # CHECK-NEXT: 2 8 2.00 vpexpandb %xmm16, %xmm19 {%k1} {z} # CHECK-NEXT: 2 3 2.00 U vpexpandb %ymm16, %ymm19 -# CHECK-NEXT: 3 11 2.00 U vpexpandb (%rax), %ymm19 +# CHECK-NEXT: 3 11 2.00 * U vpexpandb (%rax), %ymm19 # CHECK-NEXT: 2 8 2.00 vpexpandb %ymm16, %ymm19 {%k1} # CHECK-NEXT: 3 13 2.00 * vpexpandb (%rax), %ymm19 {%k1} # CHECK-NEXT: 2 8 2.00 vpexpandb %ymm16, %ymm19 {%k1} {z} # CHECK-NEXT: 2 3 2.00 U vpexpandw %xmm16, %xmm19 -# CHECK-NEXT: 3 10 2.00 U vpexpandw (%rax), %xmm19 +# CHECK-NEXT: 3 10 2.00 * U vpexpandw (%rax), %xmm19 # CHECK-NEXT: 2 8 2.00 vpexpandw %xmm16, %xmm19 {%k1} # CHECK-NEXT: 3 13 2.00 * vpexpandw (%rax), %xmm19 {%k1} # CHECK-NEXT: 2 8 2.00 vpexpandw %xmm16, %xmm19 {%k1} {z} # CHECK-NEXT: 2 3 2.00 U vpexpandw %ymm16, %ymm19 -# CHECK-NEXT: 3 11 2.00 U vpexpandw (%rax), %ymm19 +# CHECK-NEXT: 3 11 2.00 * U vpexpandw (%rax), %ymm19 # CHECK-NEXT: 2 8 2.00 vpexpandw %ymm16, %ymm19 {%k1} # CHECK-NEXT: 3 13 2.00 * vpexpandw (%rax), %ymm19 {%k1} # CHECK-NEXT: 2 8 2.00 vpexpandw %ymm16, %ymm19 {%k1} {z} diff --git a/llvm/test/tools/llvm-mca/X86/Znver4/resources-avx512vbmi2.s b/llvm/test/tools/llvm-mca/X86/Znver4/resources-avx512vbmi2.s index 594518d..88e140d 100644 --- a/llvm/test/tools/llvm-mca/X86/Znver4/resources-avx512vbmi2.s +++ b/llvm/test/tools/llvm-mca/X86/Znver4/resources-avx512vbmi2.s @@ -153,12 +153,12 @@ vpshrdw $1, (%rax), %zmm17, %zmm19 {k1}{z} # CHECK-NEXT: 2 8 0.50 * vpcompressw %zmm16, (%rax) {%k1} # CHECK-NEXT: 1 5 1.00 vpcompressw %zmm16, %zmm19 {%k1} {z} # CHECK-NEXT: 1 5 1.00 U vpexpandb %zmm16, %zmm19 -# CHECK-NEXT: 2 8 0.50 U vpexpandb (%rax), %zmm19 +# CHECK-NEXT: 2 8 0.50 * U vpexpandb (%rax), %zmm19 # CHECK-NEXT: 1 5 1.00 vpexpandb %zmm16, %zmm19 {%k1} # CHECK-NEXT: 2 8 0.50 * vpexpandb (%rax), %zmm19 {%k1} # CHECK-NEXT: 1 5 1.00 vpexpandb %zmm16, %zmm19 {%k1} {z} # CHECK-NEXT: 1 5 1.00 U vpexpandw %zmm16, %zmm19 -# CHECK-NEXT: 2 8 0.50 U vpexpandw (%rax), %zmm19 +# CHECK-NEXT: 2 8 0.50 * U vpexpandw (%rax), %zmm19 # CHECK-NEXT: 1 5 1.00 vpexpandw %zmm16, %zmm19 {%k1} # CHECK-NEXT: 2 8 0.50 * vpexpandw (%rax), %zmm19 {%k1} # CHECK-NEXT: 1 5 1.00 vpexpandw %zmm16, %zmm19 {%k1} {z} diff --git a/llvm/test/tools/llvm-mca/X86/Znver4/resources-avx512vbmi2vl.s b/llvm/test/tools/llvm-mca/X86/Znver4/resources-avx512vbmi2vl.s index 7b9c2516..325835a 100644 --- a/llvm/test/tools/llvm-mca/X86/Znver4/resources-avx512vbmi2vl.s +++ b/llvm/test/tools/llvm-mca/X86/Znver4/resources-avx512vbmi2vl.s @@ -295,22 +295,22 @@ vpshrdw $1, (%rax), %ymm17, %ymm19 {k1}{z} 
# CHECK-NEXT: 2 8 0.50 * vpcompressw %ymm16, (%rax) {%k1} # CHECK-NEXT: 1 4 1.00 vpcompressw %ymm16, %ymm19 {%k1} {z} # CHECK-NEXT: 2 1 0.50 U vpexpandb %xmm16, %xmm19 -# CHECK-NEXT: 2 8 0.50 U vpexpandb (%rax), %xmm19 +# CHECK-NEXT: 2 8 0.50 * U vpexpandb (%rax), %xmm19 # CHECK-NEXT: 2 1 0.50 vpexpandb %xmm16, %xmm19 {%k1} # CHECK-NEXT: 2 8 0.50 * vpexpandb (%rax), %xmm19 {%k1} # CHECK-NEXT: 2 1 0.50 vpexpandb %xmm16, %xmm19 {%k1} {z} # CHECK-NEXT: 1 4 1.00 U vpexpandb %ymm16, %ymm19 -# CHECK-NEXT: 2 8 0.50 U vpexpandb (%rax), %ymm19 +# CHECK-NEXT: 2 8 0.50 * U vpexpandb (%rax), %ymm19 # CHECK-NEXT: 1 4 1.00 vpexpandb %ymm16, %ymm19 {%k1} # CHECK-NEXT: 2 8 0.50 * vpexpandb (%rax), %ymm19 {%k1} # CHECK-NEXT: 1 4 1.00 vpexpandb %ymm16, %ymm19 {%k1} {z} # CHECK-NEXT: 2 1 0.50 U vpexpandw %xmm16, %xmm19 -# CHECK-NEXT: 2 8 0.50 U vpexpandw (%rax), %xmm19 +# CHECK-NEXT: 2 8 0.50 * U vpexpandw (%rax), %xmm19 # CHECK-NEXT: 2 1 0.50 vpexpandw %xmm16, %xmm19 {%k1} # CHECK-NEXT: 2 8 0.50 * vpexpandw (%rax), %xmm19 {%k1} # CHECK-NEXT: 2 1 0.50 vpexpandw %xmm16, %xmm19 {%k1} {z} # CHECK-NEXT: 1 4 1.00 U vpexpandw %ymm16, %ymm19 -# CHECK-NEXT: 2 8 0.50 U vpexpandw (%rax), %ymm19 +# CHECK-NEXT: 2 8 0.50 * U vpexpandw (%rax), %ymm19 # CHECK-NEXT: 1 4 1.00 vpexpandw %ymm16, %ymm19 {%k1} # CHECK-NEXT: 2 8 0.50 * vpexpandw (%rax), %ymm19 {%k1} # CHECK-NEXT: 1 4 1.00 vpexpandw %ymm16, %ymm19 {%k1} {z} diff --git a/llvm/test/tools/llvm-reduce/inline-call-sites-cost.ll b/llvm/test/tools/llvm-reduce/inline-call-sites-cost.ll new file mode 100644 index 0000000..fc25ca4 --- /dev/null +++ b/llvm/test/tools/llvm-reduce/inline-call-sites-cost.ll @@ -0,0 +1,95 @@ +; RUN: llvm-reduce --abort-on-invalid-reduction --delta-passes=inline-call-sites -reduce-callsite-inline-threshold=3 --test FileCheck --test-arg --check-prefix=CHECK --test-arg %s --test-arg --input-file %s -o %t +; RUN: FileCheck -check-prefixes=RESULT,CHECK %s < %t + +declare void @extern_b() +declare void @extern_a() + +; RESULT: @gv_init = global ptr @no_inline_noncall_user +@gv_init = global ptr @no_inline_noncall_user + + +; CHECK-LABEL: define void @no_inline_noncall_user( +define void @no_inline_noncall_user() { + call void @extern_a() + call void @extern_a() + call void @extern_a() + call void @extern_a() + ret void +} + +; RESULT-LABEL: define void @noncall_user_call() { +; RESULT-NEXT: call void @no_inline_noncall_user() +; RESULT-NEXT: ret void +define void @noncall_user_call() { + call void @no_inline_noncall_user() + ret void +} + +; RESULT-LABEL: define void @big_callee_small_caller_callee() { +define void @big_callee_small_caller_callee() { + call void @extern_a() + call void @extern_a() + call void @extern_a() + call void @extern_a() + ret void +} + +; RESULT-LABEL: define void @big_callee_small_caller_caller() { +; RESULT-NEXT: call void @extern_b() +; RESULT-NEXT: call void @extern_a() +; RESULT-NEXT: call void @extern_a() +; RESULT-NEXT: call void @extern_a() +; RESULT-NEXT: call void @extern_a() +; RESULT-NEXT: ret void +define void @big_callee_small_caller_caller() { + call void @extern_b() + call void @big_callee_small_caller_callee() + ret void +} + +; RESULT-LABEL: define void @small_callee_big_caller_callee() { +; RESULT-NEXT: call void @extern_a() +; RESULT-NEXT: ret void +define void @small_callee_big_caller_callee() { + call void @extern_a() + ret void +} + +; RESULT-LABEL: define void @small_callee_big_caller_caller() { +; RESULT-NEXT: call void @extern_b() +; RESULT-NEXT: call void @extern_a() +; RESULT-NEXT: call void 
@extern_b() +; RESULT-NEXT: call void @extern_b() +; RESULT-NEXT: ret void +define void @small_callee_big_caller_caller() { + call void @extern_b() + call void @small_callee_big_caller_callee() + call void @extern_b() + call void @extern_b() + ret void +} + +; RESULT-LABEL: define void @big_callee_big_caller_callee() { +define void @big_callee_big_caller_callee() { + call void @extern_a() + call void @extern_a() + call void @extern_a() + call void @extern_a() + ret void +} + +; RESULT-LABEL: define void @big_callee_big_caller_caller() { +; RESULT-NEXT: call void @extern_b() +; RESULT-NEXT: call void @big_callee_big_caller_callee() +; RESULT-NEXT: call void @extern_b() +; RESULT-NEXT: call void @extern_b() +; RESULT-NEXT: call void @extern_b() +; RESULT-NEXT: ret void +define void @big_callee_big_caller_caller() { + call void @extern_b() + call void @big_callee_big_caller_callee() + call void @extern_b() + call void @extern_b() + call void @extern_b() + ret void +} diff --git a/llvm/test/tools/llvm-reduce/inline-call-sites.ll b/llvm/test/tools/llvm-reduce/inline-call-sites.ll new file mode 100644 index 0000000..34775d9 --- /dev/null +++ b/llvm/test/tools/llvm-reduce/inline-call-sites.ll @@ -0,0 +1,765 @@ +; RUN: llvm-reduce --abort-on-invalid-reduction --delta-passes=inline-call-sites -reduce-callsite-inline-threshold=-1 --test FileCheck --test-arg --check-prefixes=CHECK,INTERESTING --test-arg %s --test-arg --input-file %s -o %t +; RUN: FileCheck -check-prefixes=RESULT,CHECK %s < %t + +; RESULT: @gv = global [2 x ptr] [ptr @only_gv_user, ptr @simple_callee] +@gv = global [2 x ptr] [ptr @only_gv_user, ptr @simple_callee] + +; RESULT: @indirectbr.L = internal unnamed_addr constant [3 x ptr] [ptr blockaddress(@callee_with_indirectbr, %L1), ptr blockaddress(@callee_with_indirectbr, %L2), ptr null], align 8 +@indirectbr.L = internal unnamed_addr constant [3 x ptr] [ptr blockaddress(@callee_with_indirectbr, %L1), ptr blockaddress(@callee_with_indirectbr, %L2), ptr null], align 8 + + +; CHECK-LABEL: define void @simple_callee( +; RESULT-NEXT: store i32 123, ptr %arg, align 4 +; RESULT-NEXT: ret void +define void @simple_callee(ptr %arg) { + store i32 123, ptr %arg + ret void +} + +; CHECK-LABEL: define void @simple_caller( +; RESULT-NEXT: store i32 123, ptr %outer.arg, align 4 +; RESULT-NEXT: ret void +define void @simple_caller(ptr %outer.arg) { + call void @simple_callee(ptr %outer.arg) + ret void +} + +; CHECK-LABEL: define void @multi_simple_caller( +; RESULT-NEXT: store i32 123, ptr %outer.arg, align 4 +; RESULT-NEXT: store i32 123, ptr %outer.arg, align 4 +; RESULT-NEXT: store i32 123, ptr null, align 4 +; RESULT-NEXT: ret void +define void @multi_simple_caller(ptr %outer.arg) { + call void @simple_callee(ptr %outer.arg) + call void @simple_callee(ptr %outer.arg) + call void @simple_callee(ptr null) + ret void +} + +; CHECK-LABEL: define void @only_gv_user( +; RESULT-NEXT: store i32 666, ptr %arg, align 4 +; RESULT-NEXT: ret void +define void @only_gv_user(ptr %arg) { + store i32 666, ptr %arg + ret void +} + +; CHECK-LABEL: define void @recursive( +; RESULT-NEXT: call void @recursive(ptr %arg) +; RESULT-NEXT: ret void +define void @recursive(ptr %arg) { + call void @recursive(ptr %arg) + ret void +} + +; CHECK-LABEL: define void @recursive_with_wrong_callsite_type( +; RESULT-NEXT: call void @recursive_with_wrong_callsite_type(ptr %arg, i32 2) +; RESULT-NEXT: ret void +define void @recursive_with_wrong_callsite_type(ptr %arg) { + call void @recursive_with_wrong_callsite_type(ptr %arg, i32 2) 
+ ret void +} + +; CHECK-LABEL: define void @non_callee_use( +; RESULT-NEXT: store i32 567, ptr %arg, align 4 +; RESULT-NEXT: ret void +define void @non_callee_use(ptr %arg) { + store i32 567, ptr %arg + ret void +} + +declare void @extern_ptr_use(ptr) + +; CHECK-LABEL: define void @non_callee_user( +; RESULT-NEXT: call void @extern_ptr_use(ptr @non_callee_use) +; RESULT-NEXT: ret void +define void @non_callee_user() { + call void @extern_ptr_use(ptr @non_callee_use) + ret void +} + +; CHECK-LABEL: define void @non_call_inst_use( +define void @non_call_inst_use(ptr %arg) { + store i32 999, ptr %arg + ret void +} + +; CHECK-LABEL: define void @non_call_inst_user( +; RESULT-NEXT: store ptr @non_call_inst_use, ptr %arg, align 8 +; RESULT-NEXT: ret void +define void @non_call_inst_user(ptr %arg) { + store ptr @non_call_inst_use, ptr %arg + ret void +} + +; CHECK-LABEL: define i32 @used_wrong_call_type( +; RESULT-NEXT: store i32 123, ptr %arg, align 4 +; RESULT-NEXT: ret i32 8 +define i32 @used_wrong_call_type(ptr %arg) { + store i32 123, ptr %arg + ret i32 8 +} + +; Inlining doesn't support the UB cases +; CHECK-LABEL: define void @use_wrong_call_type( +; RESULT-NEXT: call void @used_wrong_call_type(ptr %outer.arg) +; RESULT-NEXT: ret void +define void @use_wrong_call_type(ptr %outer.arg) { + call void @used_wrong_call_type(ptr %outer.arg) + ret void +} + +; INTERESTING-LABEL: define void @incompatible_gc_callee( + +; RESULT-LABEL: define void @incompatible_gc_callee(ptr %arg) gc "gc0" { +; RESULT-NEXT: store i32 10000, ptr %arg, align 4 +; RESULT-NEXT: ret void +define void @incompatible_gc_callee(ptr %arg) gc "gc0" { + store i32 10000, ptr %arg + ret void +} + +; INTERESTING-LABEL: define void @incompatible_gc_caller( + +; RESULT-LABEL: define void @incompatible_gc_caller(ptr %outer.arg) gc "gc1" { +; RESULT-NEXT: call void @incompatible_gc_callee(ptr %outer.arg) +; RESULT-NEXT: ret void +define void @incompatible_gc_caller(ptr %outer.arg) gc "gc1" { + call void @incompatible_gc_callee(ptr %outer.arg) + ret void +} + +; INTERESTING-LABEL: define void @propagate_callee_gc( + +; RESULT-LABEL: define void @propagate_callee_gc(ptr %arg) gc "propagate-gc" { +; RESULT-NEXT: store i32 10000, ptr %arg, align 4 +; RESULT-NEXT: ret void +define void @propagate_callee_gc(ptr %arg) gc "propagate-gc" { + store i32 10000, ptr %arg + ret void +} + +; INTERESTING-LABEL: define void @propagate_caller_gc( + +; RESULT-LABEL: define void @propagate_caller_gc(ptr %arg) gc "propagate-gc" { +; RESULT-NEXT: store i32 10000, ptr %arg, align 4 +; RESULT-NEXT: ret void +define void @propagate_caller_gc(ptr %arg) { + call void @propagate_callee_gc(ptr %arg) + ret void +} + +declare i32 @__gxx_personality_v0(...) 
+ +; INTERESTING-LABEL: define void @propagate_callee_personality( + +; RESULT-LABEL: define void @propagate_callee_personality(ptr %arg) personality ptr @__gxx_personality_v0 { +; RESULT-NEXT: store i32 2000, ptr %arg, align 4 +; RESULT-NEXT: ret void +define void @propagate_callee_personality(ptr %arg) personality ptr @__gxx_personality_v0 { + store i32 2000, ptr %arg + ret void +} + +; INTERESTING-LABEL: define void @propagate_caller_personality( + +; RESULT-LABEL: define void @propagate_caller_personality(ptr %arg) personality ptr @__gxx_personality_v0 { +; RESULT-NEXT: store i32 2000, ptr %arg, align 4 +; RESULT-NEXT: ret void +define void @propagate_caller_personality(ptr %arg) { + call void @propagate_callee_personality(ptr %arg) + ret void +} + +; CHECK-LABEL: define void @callee_with_indirectbr( +define void @callee_with_indirectbr() { +entry: + br label %L1 + +L1: ; preds = %entry, %L1 + %i = phi i32 [ 0, %entry ], [ %inc, %L1 ] + %inc = add i32 %i, 1 + %idxprom = zext i32 %i to i64 + %arrayidx = getelementptr inbounds [3 x ptr], ptr @indirectbr.L, i64 0, i64 %idxprom + %brtarget = load ptr, ptr %arrayidx, align 8 + indirectbr ptr %brtarget, [label %L1, label %L2] + +L2: ; preds = %L1 + ret void +} + +; CHECK-LABEL: define void @calls_func_with_indirectbr( + +; RESULT: L1.i: +; RESULT-NEXT: %i.i = phi i32 [ 0, %call ], [ %inc.i, %L1.i ] +; RESULT-NEXT: %inc.i = add i32 %i.i, 1 +; RESULT-NEXT: %idxprom.i = zext i32 %i.i to i64 +; RESULT-NEXT: %arrayidx.i = getelementptr inbounds [3 x ptr], ptr @indirectbr.L, i64 0, i64 %idxprom.i +; RESULT-NEXT: %brtarget.i = load ptr, ptr %arrayidx.i, align 8 +; RESULT-NEXT: indirectbr ptr %brtarget.i, [label %L1.i, label %callee_with_indirectbr.exit] + +define void @calls_func_with_indirectbr(i1 %arg0) { +entry: + br i1 %arg0, label %call, label %ret + +call: + call void @callee_with_indirectbr() + br label %ret + +ret: + ret void +} + + +; CHECK-LABEL: define ptr @callee_with_blockaddress_use( +; RESULT: L2: +; RESULT-NEXT: store ptr blockaddress(@callee_with_blockaddress_use, %L1), ptr %alloca, align 8 +; RESULT-NEXT: store ptr blockaddress(@callee_with_blockaddress_use, %L2), ptr %alloca, align 8 +; RESULT-NEXT: store ptr blockaddress(@callee_with_blockaddress_use, %L3), ptr %alloca, align 8 +; RESULT-NEXT: %cond1 = load volatile i1, ptr addrspace(1) null +; RESULT-NEXT: br i1 %cond1, label %L1, label %L3 +define ptr @callee_with_blockaddress_use() { +entry: + %alloca = alloca ptr + %cond0 = load volatile i1, ptr addrspace(1) null + br i1 %cond0, label %L1, label %L2 + +L1: + br label %L2 + +L2: + ; reference an earlier block + store ptr blockaddress(@callee_with_blockaddress_use, %L1), ptr %alloca + + ; reference the block itself from the block + store ptr blockaddress(@callee_with_blockaddress_use, %L2), ptr %alloca + + ; reference a later block + store ptr blockaddress(@callee_with_blockaddress_use, %L3), ptr %alloca + + %cond1 = load volatile i1, ptr addrspace(1) null + br i1 %cond1, label %L1, label %L3 + +L3: + %load = load ptr, ptr %alloca + ret ptr %load +} + +; FIXME: This is not correctly remapping the blockaddress use +; CHECK-LABEL: define void @calls_func_with_blockaddress_use( +; RESULT: entry: +; RESULT-NEXT: %alloca.i = alloca ptr, align 8 +; RESULT-NEXT: store i32 1000, ptr null, align 4 +; RESULT-NEXT: br i1 %arg0, label %call, label %ret + +; RESULT: call: +; RESULT-NEXT: store i32 2000, ptr null, align 4 +; RESULT-NEXT: call void @llvm.lifetime.start.p0(ptr %alloca.i) +; RESULT-NEXT: %cond0.i = load volatile i1, ptr 
addrspace(1) null, align 1 +; RESULT-NEXT: br i1 %cond0.i, label %L1.i, label %L2.i + +; RESULT: L1.i: ; preds = %L2.i, %call +; RESULT-NEXT: br label %L2.i + +; RESULT: L2.i: ; preds = %L1.i, %call +; RESULT-NEXT: store ptr blockaddress(@callee_with_blockaddress_use, %L1), ptr %alloca.i, align 8 +; RESULT-NEXT: store ptr blockaddress(@calls_func_with_blockaddress_use, %L2.i), ptr %alloca.i, align 8 +; RESULT-NEXT: store ptr blockaddress(@callee_with_blockaddress_use, %L3), ptr %alloca.i, align 8 +; RESULT-NEXT: %cond1.i = load volatile i1, ptr addrspace(1) null, align 1 +; RESULT-NEXT: br i1 %cond1.i, label %L1.i, label %callee_with_blockaddress_use.exit + +; RESULT: callee_with_blockaddress_use.exit: ; preds = %L2.i +; RESULT-NEXT: %load.i = load ptr, ptr %alloca.i, align 8 +; RESULT-NEXT: call void @llvm.lifetime.end.p0(ptr %alloca.i) +; RESULT-NEXT: store i32 3000, ptr null, align 4 +; RESULT-NEXT: br label %ret + +; RESULT: ret: ; preds = %callee_with_blockaddress_use.exit, %entry +; RESULT-NEXT: store i32 4000, ptr null, align 4 +; RESULT-NEXT: ret void +define void @calls_func_with_blockaddress_use(i1 %arg0) { +entry: + store i32 1000, ptr null + br i1 %arg0, label %call, label %ret + +call: + store i32 2000, ptr null + call ptr @callee_with_blockaddress_use() + store i32 3000, ptr null + br label %ret + +ret: + store i32 4000, ptr null + ret void +} + +; CHECK-LABEL: define void @callee_with_fallthrough_blockaddress_use( +; RESULT: L2: +; RESULT-NEXT: store ptr blockaddress(@callee_with_fallthrough_blockaddress_use, %L1), ptr %alloca, align 8 +; RESULT-NEXT: store ptr blockaddress(@callee_with_fallthrough_blockaddress_use, %L2), ptr %alloca, align 8 +; RESULT-NEXT: store ptr blockaddress(@callee_with_fallthrough_blockaddress_use, %L3), ptr %alloca, align 8 +; RESULT-NEXT: br label %L3 +define void @callee_with_fallthrough_blockaddress_use() { +entry: + %alloca = alloca ptr + br label %L1 + +L1: + store i32 999, ptr null + br label %L2 + +L2: ; preds = %entry, %L1 + ; reference a block before this block + store ptr blockaddress(@callee_with_fallthrough_blockaddress_use, %L1), ptr %alloca + + ; reference the block itself from the block + store ptr blockaddress(@callee_with_fallthrough_blockaddress_use, %L2), ptr %alloca + + ; reference a block after this block + store ptr blockaddress(@callee_with_fallthrough_blockaddress_use, %L3), ptr %alloca + br label %L3 + +L3: ; preds = %L1 + %load = load ptr, ptr %alloca + ret void +} + + +; CHECK-LABEL: define void @calls_func_with_fallthrough_blockaddress_use( +; RESULT: entry: +; RESULT-NEXT: %alloca.i = alloca ptr, align 8 +; RESULT-NEXT: store i32 1000, ptr null +; RESULT-NEXT: br i1 %arg0, label %call, label %ret + +; RESULT: call: +; RESULT-NEXT: store i32 2000, ptr null, align 4 +; RESULT-NEXT: call void @llvm.lifetime.start.p0(ptr %alloca.i) +; RESULT-NEXT: br label %L1.i + +; RESULT: L1.i: ; preds = %call +; RESULT-NEXT: store i32 999, ptr null, align 4 +; RESULT-NEXT: br label %L2.i + +; RESULT: L2.i: +; RESULT-NEXT: store ptr blockaddress(@calls_func_with_fallthrough_blockaddress_use, %L1.i), ptr %alloca.i, align 8 +; RESULT-NEXT: store ptr blockaddress(@calls_func_with_fallthrough_blockaddress_use, %L2.i), ptr %alloca.i, align 8 +; RESULT-NEXT: store ptr blockaddress(@callee_with_fallthrough_blockaddress_use, %L3), ptr %alloca.i, align 8 +; RESULT-NEXT: br label %callee_with_fallthrough_blockaddress_use.exit + +; RESULT: callee_with_fallthrough_blockaddress_use.exit: ; preds = %L2.i +; RESULT-NEXT: %load.i = load ptr, ptr 
%alloca.i, align 8 +; RESULT-NEXT: call void @llvm.lifetime.end.p0(ptr %alloca.i) +; RESULT-NEXT: store i32 3000, ptr null, align 4 +; RESULT-NEXT: br label %ret + +; RESULT: ret: +; RESULT-NEXT: store i32 4000, ptr null, align 4 +; RESULT-NEXT: ret void +define void @calls_func_with_fallthrough_blockaddress_use(i1 %arg0) { +entry: + store i32 1000, ptr null + br i1 %arg0, label %call, label %ret + +call: + store i32 2000, ptr null + call void @callee_with_fallthrough_blockaddress_use() + store i32 3000, ptr null + br label %ret + +ret: + store i32 4000, ptr null + ret void +} + +declare i32 @extern_returns_twice() returns_twice + +; CHECK-LABEL: define i32 @callee_returns_twice( +; RESULT-NEXT: %call = call i32 @extern_returns_twice() +; RESULT-NEXT: %add = add nsw i32 1, %call +; RESULT-NEXT: ret i32 %add +define i32 @callee_returns_twice() { + %call = call i32 @extern_returns_twice() + %add = add nsw i32 1, %call + ret i32 %add +} + +; CHECK-LABEL: define i32 @caller_returns_twice_calls_callee_returns_twice( +; RESULT-NEXT: %call.i = call i32 @extern_returns_twice() +; RESULT-NEXT: %add.i = add nsw i32 1, %call.i +; RESULT-NEXT: %add = add nsw i32 1, %add.i +; RESULT-NEXT: ret i32 %add + define i32 @caller_returns_twice_calls_callee_returns_twice() returns_twice { + %call = call i32 @callee_returns_twice() + %add = add nsw i32 1, %call + ret i32 %add +} + +; Inliner usually blocks inlining of returns_twice functions into +; non-returns_twice functions +; CHECK-LABEL: define i32 @regular_caller_calls_callee_returns_twice() { +; RESULT-NEXT: %call.i = call i32 @extern_returns_twice() +; RESULT-NEXT: %add.i = add nsw i32 1, %call.i +; RESULT-NEXT: %add = add nsw i32 1, %add.i +; RESULT-NEXT: ret i32 %add +define i32 @regular_caller_calls_callee_returns_twice() { + %call = call i32 @callee_returns_twice() + %add = add nsw i32 1, %call + ret i32 %add +} + +; CHECK-LABEL: define void @caller_with_vastart( +; RESULT-NEXT: %ap = alloca ptr, align 4 +; RESULT-NEXT: %ap2 = alloca ptr, align 4 +; RESULT-NEXT: call void @llvm.va_start.p0(ptr nonnull %ap) +; RESULT-NEXT: call void @llvm.va_end.p0(ptr nonnull %ap) +; RESULT-NEXT: call void @llvm.va_start.p0(ptr nonnull %ap) +; RESULT-NEXT: call void @llvm.va_end.p0(ptr nonnull %ap) +; RESULT-NEXT: ret void +define void @caller_with_vastart(ptr noalias nocapture readnone %args, ...) { + %ap = alloca ptr, align 4 + %ap2 = alloca ptr, align 4 + call void @llvm.va_start.p0(ptr nonnull %ap) + call fastcc void @callee_with_vaend(ptr nonnull %ap) + call void @llvm.va_start.p0(ptr nonnull %ap) + call fastcc void @callee_with_vaend_alwaysinline(ptr nonnull %ap) + ret void +} + +; CHECK-LABEL: define fastcc void @callee_with_vaend( +; RESULT-NEXT: tail call void @llvm.va_end.p0(ptr %a) +; RESULT-NEXT: ret void +define fastcc void @callee_with_vaend(ptr %a) { + tail call void @llvm.va_end.p0(ptr %a) + ret void +} + +; CHECK-LABEL: define internal fastcc void @callee_with_vaend_alwaysinline( +; RESULT-NEXT: tail call void @llvm.va_end.p0(ptr %a) +; RESULT-NEXT: ret void +define internal fastcc void @callee_with_vaend_alwaysinline(ptr %a) alwaysinline { + tail call void @llvm.va_end.p0(ptr %a) + ret void +} + +; CHECK-LABEL: define i32 @callee_with_va_start( +define i32 @callee_with_va_start(ptr %a, ...) 
{ + %vargs = alloca ptr, align 8 + tail call void @llvm.va_start.p0(ptr %a) + %va1 = va_arg ptr %vargs, i32 + call void @llvm.va_end(ptr %vargs) + ret i32 %va1 +} + +; CHECK-LABEL: define i32 @callee_vastart_caller( +; RESULT-NEXT: %vargs.i = alloca ptr, align 8 +; RESULT-NEXT: %ap = alloca ptr, align 4 +; RESULT-NEXT: %b = load i32, ptr null, align 4 +; RESULT-NEXT: call void @llvm.lifetime.start.p0(ptr %vargs.i) +; RESULT-NEXT: call void @llvm.va_start.p0(ptr nonnull %ap) +; RESULT-NEXT: %va1.i = va_arg ptr %vargs.i, i32 +; RESULT-NEXT: call void @llvm.va_end.p0(ptr %vargs.i) +; RESULT-NEXT: call void @llvm.lifetime.end.p0(ptr %vargs.i) +; RESULT-NEXT: ret i32 %va1.i +define i32 @callee_vastart_caller(ptr noalias nocapture readnone %args, ...) { + %ap = alloca ptr, align 4 + %b = load i32, ptr null + %result = call i32 (ptr, ...) @callee_with_va_start(ptr nonnull %ap, i32 %b) + ret i32 %result +} + +declare void @llvm.localescape(...) + +; CHECK-LABEL: define internal void @callee_uses_localrecover( +define internal void @callee_uses_localrecover(ptr %fp) { + %a.i8 = call ptr @llvm.localrecover(ptr @callee_uses_localescape, ptr %fp, i32 0) + store i32 42, ptr %a.i8 + ret void +} + +; CHECK-LABEL: define i32 @callee_uses_localescape( +; RESULT-NEXT: %a = alloca i32, align 4 +; RESULT-NEXT: call void (...) @llvm.localescape(ptr %a) +; RESULT-NEXT: %fp = call ptr @llvm.frameaddress.p0(i32 0) +; RESULT-NEXT: %a.i8.i = call ptr @llvm.localrecover(ptr @callee_uses_localescape, ptr %fp, i32 0) +; RESULT-NEXT: store i32 42, ptr %a.i8.i, align 4 +; RESULT-NEXT: %r = load i32, ptr %a, align 4 +; RESULT-NEXT: ret i32 %r +define i32 @callee_uses_localescape() alwaysinline { + %a = alloca i32 + call void (...) @llvm.localescape(ptr %a) + %fp = call ptr @llvm.frameaddress(i32 0) + tail call void @callee_uses_localrecover(ptr %fp) + %r = load i32, ptr %a + ret i32 %r +} + +; CHECK-LABEL: define i32 @callee_uses_localescape_caller( +; RESULT-NEXT: %a.i = alloca i32, align 4 +; RESULT-NEXT: call void @llvm.lifetime.start.p0(ptr %a.i) +; RESULT-NEXT: call void (...) @llvm.localescape(ptr %a.i) +; RESULT-NEXT: %fp.i = call ptr @llvm.frameaddress.p0(i32 0) +; RESULT-NEXT: %a.i8.i.i = call ptr @llvm.localrecover(ptr @callee_uses_localescape, ptr %fp.i, i32 0) +; RESULT-NEXT: store i32 42, ptr %a.i8.i.i, align 4 +; RESULT-NEXT: %r.i = load i32, ptr %a.i, align 4 +; RESULT-NEXT: call void @llvm.lifetime.end.p0(ptr %a.i) +; RESULT-NEXT: ret i32 %r.i +define i32 @callee_uses_localescape_caller() { + %r = tail call i32 @callee_uses_localescape() + ret i32 %r +} + +declare void @llvm.icall.branch.funnel(...) + +; CHECK-LABEL: define void @callee_uses_branch_funnel( +; RESULT-NEXT: musttail call void (...) @llvm.icall.branch.funnel(...) +; RESULT-NEXT: ret void +define void @callee_uses_branch_funnel(...) { + musttail call void (...) @llvm.icall.branch.funnel(...) + ret void +} + +; FIXME: This should fail the verifier after inlining +; CHECK-LABEL: define void @callee_branch_funnel_musttail_caller( +; RESULT-NEXT: call void (...) @llvm.icall.branch.funnel() +; RESULT-NEXT: ret void +define void @callee_branch_funnel_musttail_caller() { + call void (...) 
@callee_uses_branch_funnel() + ret void +} + +; Ignore noinline on the callee function +; CHECK-LABEL: define void @noinline_callee( +; RESULT-NEXT: store i32 123, ptr %arg, align 4 +; RESULT-NEXT: ret void +define void @noinline_callee(ptr %arg) { + store i32 123, ptr %arg + ret void +} + +; CHECK-LABEL: define void @calls_noinline_func( +; RESULT-NEXT: store i32 123, ptr %outer.arg, align 4 +; RESULT-NEXT: ret void +define void @calls_noinline_func(ptr %outer.arg) { + call void @noinline_callee(ptr %outer.arg) + ret void +} + +; Ignore noinline on the callsite +; CHECK-LABEL: define void @calls_noinline_callsite( +; RESULT-NEXT: store i32 123, ptr %outer.arg, align 4 +; RESULT-NEXT: ret void +define void @calls_noinline_callsite(ptr %outer.arg) { + call void @simple_callee(ptr %outer.arg) noinline + ret void +} + +; Ignore optnone +; CHECK-LABEL: define void @optnone_callee( +; RESULT-NEXT: store i32 5555, ptr %arg, align 4 +; RESULT-NEXT: ret void +define void @optnone_callee(ptr %arg) optnone noinline { + store i32 5555, ptr %arg + ret void +} + +; CHECK-LABEL: define void @calls_optnone_callee( +; RESULT-NEXT: store i32 5555, ptr %outer.arg, align 4 +; RESULT-NEXT: ret void +define void @calls_optnone_callee(ptr %outer.arg) { + call void @optnone_callee(ptr %outer.arg) + ret void +} + +; CHECK-LABEL: define void @optnone_caller( +; RESULT-NEXT: store i32 123, ptr %outer.arg, align 4 +; RESULT-NEXT: ret void +define void @optnone_caller(ptr %outer.arg) optnone noinline { + call void @simple_callee(ptr %outer.arg) + ret void +} + +; CHECK-LABEL: define weak void @interposable_callee( +; RESULT-NEXT: store i32 2024, ptr %arg, align 4 +; RESULT-NEXT: ret void +define weak void @interposable_callee(ptr %arg) { + store i32 2024, ptr %arg + ret void +} + +; Ignore interposable linkage +; CHECK-LABEL: @calls_interposable_callee( +; RESULT-NEXT: store i32 2024, ptr %arg, align 4 +; RESULT-NEXT: ret void +define void @calls_interposable_callee(ptr %arg) { + call void @interposable_callee(ptr %arg) + ret void +} + +; Ignore null_pointer_is_valid +; CHECK-LABEL: @null_pointer_is_valid_callee( +; RESULT-NEXT: store i32 42069, ptr %arg, align 4 +; RESULT-NEXT: ret void +define void @null_pointer_is_valid_callee(ptr %arg) null_pointer_is_valid { + store i32 42069, ptr %arg + ret void +} + +; CHECK-LABEL: @calls_null_pointer_is_valid_callee( +; RESULT-NEXT: store i32 42069, ptr %arg, align 4 +; RESULT-NEXT: ret void +define void @calls_null_pointer_is_valid_callee(ptr %arg) { + call void @null_pointer_is_valid_callee(ptr %arg) + ret void +} + +; CHECK-LABEL: @byval_arg_uses_non_alloca_addrspace( +; RESULT-NEXT: %load = load i32, ptr addrspace(1) %arg, align 4 +; RESULT-NEXT: ret i32 %load +define i32 @byval_arg_uses_non_alloca_addrspace(ptr addrspace(1) byval(i32) %arg) { + %load = load i32, ptr addrspace(1) %arg + ret i32 %load +} + +; CHECK-LABEL: @calls_byval_arg_uses_non_alloca_addrspace( +; RESULT-NEXT: %arg1 = alloca i32, align 4, addrspace(1) +; RESULT-NEXT: call void @llvm.lifetime.start.p1(ptr addrspace(1) %arg1) +; RESULT-NEXT: call void @llvm.memcpy.p1.p1.i64(ptr addrspace(1) align 4 %arg1, ptr addrspace(1) %arg, i64 4, i1 false) +; RESULT-NEXT: %load.i = load i32, ptr addrspace(1) %arg1, align 4 +; RESULT-NEXT: call void @llvm.lifetime.end.p1(ptr addrspace(1) %arg1) +; RESULT-NEXT: ret i32 %load.i +define i32 @calls_byval_arg_uses_non_alloca_addrspace(ptr addrspace(1) %arg) { + %call = call i32 @byval_arg_uses_non_alloca_addrspace(ptr addrspace(1) byval(i32) %arg) + ret i32 %call +} + +; 
CHECK-LABEL: define void @callee_stacksize( +; RESULT-NEXT: %alloca = alloca [4096 x i32] +; RESULT-NEXT: store i32 12345678, ptr %arg +; RESULT-NEXT: store i32 0, ptr %alloca +; RESULT-NEXT: ret void +define void @callee_stacksize(ptr %arg) "inline-max-stacksize"="4" { + %alloca = alloca [4096 x i32] + store i32 12345678, ptr %arg + store i32 0, ptr %alloca + ret void +} + +; CHECK-LABEL: define void @caller_stacksize( +; RESULT-NEXT: %alloca.i = alloca [4096 x i32], align 4 +; RESULT-NEXT: call void @llvm.lifetime.start.p0(ptr %alloca.i) +; RESULT-NEXT: store i32 12345678, ptr %arg, align 4 +; RESULT-NEXT: store i32 0, ptr %alloca.i, align 4 +; RESULT-NEXT: call void @llvm.lifetime.end.p0(ptr %alloca.i) +; RESULT-NEXT: ret void +define void @caller_stacksize(ptr %arg) { + call void @callee_stacksize(ptr %arg) + ret void +} + +; CHECK-LABEL: define void @callee_dynamic_alloca( +; RESULT-NEXT: %alloca = alloca i32, i32 %n, align 4 +; RESULT-NEXT: store i32 12345678, ptr %arg, align 4 +; RESULT-NEXT: store i32 0, ptr %alloca, align 4 +; RESULT-NEXT: ret void +define void @callee_dynamic_alloca(ptr %arg, i32 %n) "inline-max-stacksize"="4" { + %alloca = alloca i32, i32 %n + store i32 12345678, ptr %arg + store i32 0, ptr %alloca + ret void +} + +; CHECK-LABEL: define void @caller_dynamic_alloca( +; RESULT-NEXT: %savedstack = call ptr @llvm.stacksave.p0() +; RESULT-NEXT: %alloca.i = alloca i32, i32 %size, align 4 +; RESULT-NEXT: store i32 12345678, ptr %arg, align 4 +; RESULT-NEXT: store i32 0, ptr %alloca.i, align 4 +; RESULT-NEXT: call void @llvm.stackrestore.p0(ptr %savedstack) +; RESULT-NEXT: ret void +define void @caller_dynamic_alloca(ptr %arg, i32 %size) { + call void @callee_dynamic_alloca(ptr %arg, i32 %size) + ret void +} + +declare void @extern_noduplicate() noduplicate + +; CHECK-LABEL: define void @callee_noduplicate_calls( +; RESULT-NEXT: call void @extern_noduplicate() +; RESULT-NEXT: call void @extern_noduplicate() +; RESULT-NEXT: ret void +define void @callee_noduplicate_calls() { + call void @extern_noduplicate() + call void @extern_noduplicate() + ret void +} + +; Ignore noduplicate restrictions +; CHECK-LABEL: define void @caller_noduplicate_calls_callee( +; RESULT-NEXT: call void @extern_noduplicate() +; RESULT-NEXT: call void @extern_noduplicate() +; RESULT-NEXT: call void @extern_noduplicate() +; RESULT-NEXT: call void @extern_noduplicate() +; RESULT-NEXT: ret void +define void @caller_noduplicate_calls_callee() { + call void @callee_noduplicate_calls() + call void @callee_noduplicate_calls() + ret void +} + +; CHECK-LABEL: define void @sanitize_address_callee( +; RESULT-NEXT: store i32 333, ptr %arg +; RESULT-NEXT: ret void +define void @sanitize_address_callee(ptr %arg) sanitize_address { + store i32 333, ptr %arg + ret void +} + +; CHECK-LABEL: define void @no_sanitize_address_caller( +; RESULT-NEXT: store i32 333, ptr %arg +; RESULT-NEXT: ret void +define void @no_sanitize_address_caller(ptr %arg) { + call void @sanitize_address_callee(ptr %arg) + ret void +} + +; CHECK-LABEL: define float @nonstrictfp_callee( +; RESULT-NEXT: %add = fadd float %a, %a +; RESULT-NEXT: ret float %add +define float @nonstrictfp_callee(float %a) { + %add = fadd float %a, %a + ret float %add +} + +; CHECK-LABEL: define float @strictfp_caller( +; RESULT-NEXT: call float @llvm.experimental.constrained.fadd.f32( +; RESULT-NEXT: call float @llvm.experimental.constrained.fadd.f32( +; RESULT-NEXT: ret float %add +define float @strictfp_caller(float %a) strictfp { + %call = call float 
@nonstrictfp_callee(float %a) strictfp
+  %add = call float @llvm.experimental.constrained.fadd.f32(float %call, float 2.0, metadata !"round.dynamic", metadata !"fpexcept.strict")
+  ret float %add
+}
+
+; CHECK-LABEL: define float @strictfp_callee(
+; RESULT-NEXT: call float @llvm.experimental.constrained.fadd.f32(
+; RESULT-NEXT: ret float
+define float @strictfp_callee(float %a) strictfp {
+  %add = call float @llvm.experimental.constrained.fadd.f32(float %a, float %a, metadata !"round.dynamic", metadata !"fpexcept.strict")
+  ret float %add
+}
+
+; FIXME: This should not inline. The inlined case should fail the
+; verifier, but it does not.
+; CHECK-LABEL: define float @nonstrictfp_caller(
+; RESULT-NEXT: call float @llvm.experimental.constrained.fadd.f32(
+; RESULT-NEXT: fadd float
+; RESULT-NEXT: ret float
+define float @nonstrictfp_caller(float %a) {
+  %call = call float @strictfp_callee(float %a)
+  %add1 = fadd float %call, 2.0
+  ret float %add1
+}
+
+define void @caller_also_has_non_callee_use() {
+  call void @simple_callee(ptr @simple_callee)
+  ret void
+}
diff --git a/llvm/tools/lli/ForwardingMemoryManager.h b/llvm/tools/lli/ForwardingMemoryManager.h
index e5c10d6..d193bef 100644
--- a/llvm/tools/lli/ForwardingMemoryManager.h
+++ b/llvm/tools/lli/ForwardingMemoryManager.h
@@ -109,8 +109,11 @@ public:
       if (Syms->size() != 1)
         return make_error<StringError>("Unexpected remote lookup result",
                                        inconvertibleErrorCode());
-      return JITSymbol(Syms->front().getAddress().getValue(),
-                       Syms->front().getFlags());
+      if (!Syms->front())
+        return make_error<StringError>("Expected valid address",
+                                       inconvertibleErrorCode());
+      return JITSymbol(Syms->front()->getAddress().getValue(),
+                       Syms->front()->getFlags());
     } else
       return Syms.takeError();
   }
diff --git a/llvm/tools/llvm-exegesis/lib/SerialSnippetGenerator.cpp b/llvm/tools/llvm-exegesis/lib/SerialSnippetGenerator.cpp
index bdfc93e..707e6ee 100644
--- a/llvm/tools/llvm-exegesis/lib/SerialSnippetGenerator.cpp
+++ b/llvm/tools/llvm-exegesis/lib/SerialSnippetGenerator.cpp
@@ -57,6 +57,12 @@ computeAliasingInstructions(const LLVMState &State, const Instruction *Instr,
       continue;
     if (OtherInstr.hasMemoryOperands())
       continue;
+    // Filtering out loads/stores might belong in hasMemoryOperands(), but that
+    // complicates things as there are instructions that may load/store but
+    // don't have memory operands (e.g. X86's CLUI instruction). So, it's
+    // easier to filter them out here.
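+    // (Without this check, such instructions would get past the
+    // hasMemoryOperands() test above and remain aliasing candidates.)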
+    if (OtherInstr.Description.mayLoad() || OtherInstr.Description.mayStore())
+      continue;
     if (!ET.allowAsBackToBack(OtherInstr))
       continue;
     if (Instr->hasAliasingRegistersThrough(OtherInstr, ForbiddenRegisters))
diff --git a/llvm/tools/llvm-objdump/OffloadDump.cpp b/llvm/tools/llvm-objdump/OffloadDump.cpp
index 8a0deb3..a77537d 100644
--- a/llvm/tools/llvm-objdump/OffloadDump.cpp
+++ b/llvm/tools/llvm-objdump/OffloadDump.cpp
@@ -87,21 +87,30 @@ void llvm::dumpOffloadBundleFatBinary(const ObjectFile &O, StringRef ArchName) {
   if (Error Err = llvm::object::extractOffloadBundleFatBinary(O, FoundBundles))
     reportError(O.getFileName(), "while extracting offload FatBin bundles: " +
                                      toString(std::move(Err)));
-
   for (const auto &[BundleNum, Bundle] : llvm::enumerate(FoundBundles)) {
     for (OffloadBundleEntry &Entry : Bundle.getEntries()) {
-      if (!ArchName.empty() && !Entry.ID.contains(ArchName))
+      if (!ArchName.empty() && Entry.ID.find(ArchName) == std::string::npos)
         continue;
       // create file name for this object file: <source-filename>.<Bundle
       // Number>.<EntryID>
-      std::string str = Bundle.getFileName().str() + "." + itostr(BundleNum) +
-                        "." + Entry.ID.str();
-      if (Error Err = object::extractCodeObject(O, Entry.Offset, Entry.Size,
-                                                StringRef(str)))
-        reportError(O.getFileName(),
-                    "while extracting offload Bundle Entries: " +
-                        toString(std::move(Err)));
+      std::string str =
+          Bundle.getFileName().str() + "." + itostr(BundleNum) + "." + Entry.ID;
+
+      if (Bundle.isDecompressed()) {
+        if (Error Err = object::extractCodeObject(
+                Bundle.DecompressedBuffer->getMemBufferRef(), Entry.Offset,
+                Entry.Size, StringRef(str)))
+          reportError(O.getFileName(),
+                      "while extracting offload Bundle Entries: " +
+                          toString(std::move(Err)));
+      } else {
+        if (Error Err = object::extractCodeObject(O, Entry.Offset, Entry.Size,
+                                                  StringRef(str)))
+          reportError(O.getFileName(),
+                      "while extracting offload Bundle Entries: " +
+                          toString(std::move(Err)));
+      }
       outs() << "Extracting offload bundle: " << str << "\n";
     }
   }
diff --git a/llvm/tools/llvm-profdata/CMakeLists.txt b/llvm/tools/llvm-profdata/CMakeLists.txt
index 165be9a2..e5aa858 100644
--- a/llvm/tools/llvm-profdata/CMakeLists.txt
+++ b/llvm/tools/llvm-profdata/CMakeLists.txt
@@ -10,9 +10,6 @@ add_llvm_tool(llvm-profdata
   DEPENDS
   intrinsics_gen
-  GENERATE_DRIVER
   )
-if(NOT LLVM_TOOL_LLVM_DRIVER_BUILD)
-  target_link_libraries(llvm-profdata PRIVATE LLVMDebuginfod)
-endif()
+target_link_libraries(llvm-profdata PRIVATE LLVMDebuginfod)
diff --git a/llvm/tools/llvm-profdata/llvm-profdata.cpp b/llvm/tools/llvm-profdata/llvm-profdata.cpp
index d658ea9..15ddb05 100644
--- a/llvm/tools/llvm-profdata/llvm-profdata.cpp
+++ b/llvm/tools/llvm-profdata/llvm-profdata.cpp
@@ -3464,10 +3464,7 @@ static int order_main() {
   return 0;
 }
-int llvm_profdata_main(int argc, char **argvNonConst,
-                       const llvm::ToolContext &) {
-  const char **argv = const_cast<const char **>(argvNonConst);
-
+int main(int argc, const char *argv[]) {
   StringRef ProgName(sys::path::filename(argv[0]));
   if (argc < 2) {
diff --git a/llvm/tools/llvm-reduce/CMakeLists.txt b/llvm/tools/llvm-reduce/CMakeLists.txt
index 7be90bc..c8673b4 100644
--- a/llvm/tools/llvm-reduce/CMakeLists.txt
+++ b/llvm/tools/llvm-reduce/CMakeLists.txt
@@ -39,6 +39,7 @@ add_llvm_tool(llvm-reduce
   deltas/ReduceGlobalValues.cpp
   deltas/ReduceGlobalVarInitializers.cpp
   deltas/ReduceGlobalVars.cpp
+  deltas/ReduceInlineCallSites.cpp
   deltas/ReduceInstructions.cpp
   deltas/ReduceInstructionFlags.cpp
   deltas/ReduceInvokes.cpp
diff --git a/llvm/tools/llvm-reduce/DeltaManager.cpp b/llvm/tools/llvm-reduce/DeltaManager.cpp
index f5c6276..9b13202 100644
--- a/llvm/tools/llvm-reduce/DeltaManager.cpp
+++ b/llvm/tools/llvm-reduce/DeltaManager.cpp
@@ -28,6 +28,7 @@
 #include "deltas/ReduceGlobalVarInitializers.h"
 #include "deltas/ReduceGlobalVars.h"
 #include "deltas/ReduceIRReferences.h"
+#include "deltas/ReduceInlineCallSites.h"
 #include "deltas/ReduceInstructionFlags.h"
 #include "deltas/ReduceInstructionFlagsMIR.h"
 #include "deltas/ReduceInstructions.h"
diff --git a/llvm/tools/llvm-reduce/DeltaPasses.def b/llvm/tools/llvm-reduce/DeltaPasses.def
index 3aed0cc..845b106 100644
--- a/llvm/tools/llvm-reduce/DeltaPasses.def
+++ b/llvm/tools/llvm-reduce/DeltaPasses.def
@@ -58,7 +58,7 @@ DELTA_PASS_IR("volatile", reduceVolatileInstructionsDeltaPass, "Reducing Volatil
 DELTA_PASS_IR("atomic-ordering", reduceAtomicOrderingDeltaPass, "Reducing Atomic Ordering")
 DELTA_PASS_IR("syncscopes", reduceAtomicSyncScopesDeltaPass, "Reducing Atomic Sync Scopes")
 DELTA_PASS_IR("instruction-flags", reduceInstructionFlagsDeltaPass, "Reducing Instruction Flags")
-
+DELTA_PASS_IR("inline-call-sites", reduceInlineCallSitesDeltaPass, "Inlining callsites")
 #ifndef DELTA_PASS_MIR
 #define DELTA_PASS_MIR(NAME, FUNC, DESC)
diff --git a/llvm/tools/llvm-reduce/deltas/ReduceInlineCallSites.cpp b/llvm/tools/llvm-reduce/deltas/ReduceInlineCallSites.cpp
new file mode 100644
index 0000000..cfef367
--- /dev/null
+++ b/llvm/tools/llvm-reduce/deltas/ReduceInlineCallSites.cpp
@@ -0,0 +1,103 @@
+//===- ReduceInlineCallSites.cpp ------------------------------------------===//
+//
+// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
+// See https://llvm.org/LICENSE.txt for license information.
+// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
+//
+//===----------------------------------------------------------------------===//
+
+#include "ReduceInlineCallSites.h"
+#include "llvm/IR/InstrTypes.h"
+#include "llvm/Support/CommandLine.h"
+#include "llvm/Transforms/Utils/Cloning.h"
+
+using namespace llvm;
+
+extern cl::OptionCategory LLVMReduceOptions;
+
+static cl::opt<int> CallsiteInlineThreshold(
+    "reduce-callsite-inline-threshold",
+    cl::desc("Number of instructions in a function to unconditionally inline "
+             "(-1 for inline all)"),
+    cl::init(5), cl::cat(LLVMReduceOptions));
+
+static bool functionHasMoreThanNonTerminatorInsts(const Function &F,
+                                                  uint64_t NumInsts) {
+  uint64_t InstCount = 0;
+  for (const BasicBlock &BB : F) {
+    for (const Instruction &I : make_range(BB.begin(), std::prev(BB.end()))) {
+      (void)I;
+      if (InstCount++ > NumInsts)
+        return true;
+    }
+  }
+
+  return false;
+}
+
+static bool hasOnlyOneCallUse(const Function &F) {
+  unsigned UseCount = 0;
+  for (const Use &U : F.uses()) {
+    const CallBase *CB = dyn_cast<CallBase>(U.getUser());
+    if (!CB || !CB->isCallee(&U))
+      return false;
+    if (UseCount++ > 1)
+      return false;
+  }
+
+  return UseCount == 1;
+}
+
+// TODO: This could use more thought.
+static bool inlineWillReduceComplexity(const Function &Caller,
+                                       const Function &Callee) {
+  // Backdoor to force all possible inlining.
+  if (CallsiteInlineThreshold < 0)
+    return true;
+
+  if (!hasOnlyOneCallUse(Callee))
+    return false;
+
+  // Only permit the inline when both the caller and the callee are under the
+  // size threshold.
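+  // With the default -reduce-callsite-inline-threshold=5, both sides must be
+  // roughly that small (a handful of non-terminator instructions); a negative
+  // threshold skips the size check entirely via the backdoor above.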
+  if (!functionHasMoreThanNonTerminatorInsts(Callee, CallsiteInlineThreshold) &&
+      !functionHasMoreThanNonTerminatorInsts(Caller, CallsiteInlineThreshold))
+    return true;
+
+  return false;
+}
+
+static void reduceCallSites(Oracle &O, Function &F) {
+  std::vector<std::pair<CallBase *, InlineFunctionInfo>> CallSitesToInline;
+
+  for (Use &U : F.uses()) {
+    if (CallBase *CB = dyn_cast<CallBase>(U.getUser())) {
+      // Ignore callsites with wrong call type.
+      if (!CB->isCallee(&U))
+        continue;
+
+      // We do not consider isInlineViable here. It is overly conservative in
+      // cases that the inliner should handle correctly (e.g. disallowing
+      // inlining of functions with indirectbr). Some of the other cases are
+      // for other correctness issues which we do need to worry about here.
+
+      // TODO: Should we delete the function body?
+      InlineFunctionInfo IFI;
+      if (CanInlineCallSite(*CB, IFI).isSuccess() &&
+          inlineWillReduceComplexity(*CB->getFunction(), F) && !O.shouldKeep())
+        CallSitesToInline.emplace_back(CB, std::move(IFI));
+    }
+  }
+
+  // TODO: InlineFunctionImpl will implicitly perform some simplifications /
+  // optimizations which we should be able to opt-out of.
+  for (auto &[CB, IFI] : CallSitesToInline)
+    InlineFunctionImpl(*CB, IFI);
+}
+
+void llvm::reduceInlineCallSitesDeltaPass(Oracle &O, ReducerWorkItem &Program) {
+  for (Function &F : Program.getModule()) {
+    if (!F.isDeclaration())
+      reduceCallSites(O, F);
+  }
+}
diff --git a/llvm/tools/llvm-reduce/deltas/ReduceInlineCallSites.h b/llvm/tools/llvm-reduce/deltas/ReduceInlineCallSites.h
new file mode 100644
index 0000000..1df31a1
--- /dev/null
+++ b/llvm/tools/llvm-reduce/deltas/ReduceInlineCallSites.h
@@ -0,0 +1,18 @@
+//===- ReduceInlineCallSites.h ----------------------------------*- C++ -*-===//
+//
+// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
+// See https://llvm.org/LICENSE.txt for license information.
+// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
+//
+//===----------------------------------------------------------------------===//
+
+#ifndef LLVM_TOOLS_LLVM_REDUCE_DELTAS_REDUCEINLINECALLSITES_H
+#define LLVM_TOOLS_LLVM_REDUCE_DELTAS_REDUCEINLINECALLSITES_H
+
+#include "Delta.h"
+
+namespace llvm {
+void reduceInlineCallSitesDeltaPass(Oracle &O, ReducerWorkItem &Program);
+} // namespace llvm
+
+#endif
diff --git a/llvm/tools/llvm-remarkutil/RemarkFilter.cpp b/llvm/tools/llvm-remarkutil/RemarkFilter.cpp
index acfef66..507ae36 100644
--- a/llvm/tools/llvm-remarkutil/RemarkFilter.cpp
+++ b/llvm/tools/llvm-remarkutil/RemarkFilter.cpp
@@ -20,7 +20,9 @@ using namespace llvm;
 using namespace remarks;
 using namespace llvm::remarkutil;
-namespace filter {
+// Note: Avoid using the identifier "filter" in this file, as it is prone to
+// collision with identifiers from headers that might get included, e.g.
+// curses.h.
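+// (curses.h, for instance, declares a function named "filter", which would
+// clash with a namespace of the same name once that header is pulled in.)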
 static cl::SubCommand FilterSub("filter",
                                 "Filter remarks based on specified criteria.");
@@ -80,5 +82,3 @@ static Error tryFilter() {
 }
 static CommandRegistration FilterReg(&FilterSub, tryFilter);
-
-} // namespace filter
diff --git a/llvm/unittests/ADT/STLExtrasTest.cpp b/llvm/unittests/ADT/STLExtrasTest.cpp
index 5020acd..47469983 100644
--- a/llvm/unittests/ADT/STLExtrasTest.cpp
+++ b/llvm/unittests/ADT/STLExtrasTest.cpp
@@ -14,6 +14,7 @@
 #include <array>
 #include <climits>
 #include <cstddef>
+#include <functional>
 #include <initializer_list>
 #include <iterator>
 #include <list>
@@ -1658,6 +1659,54 @@ TEST(STLExtrasTest, Accumulate) {
   EXPECT_EQ(accumulate(V1, 10), std::accumulate(V1.begin(), V1.end(), 10));
   EXPECT_EQ(accumulate(drop_begin(V1), 7),
             std::accumulate(V1.begin() + 1, V1.end(), 7));
+
+  EXPECT_EQ(accumulate(V1, 2, std::multiplies<>{}), 240);
+}
+
+TEST(STLExtrasTest, SumOf) {
+  EXPECT_EQ(sum_of(std::vector<int>()), 0);
+  EXPECT_EQ(sum_of(std::vector<int>(), 1), 1);
+  std::vector<int> V1 = {1, 2, 3, 4, 5};
+  static_assert(std::is_same_v<decltype(sum_of(V1)), int>);
+  static_assert(std::is_same_v<decltype(sum_of(V1, 1)), int>);
+  EXPECT_EQ(sum_of(V1), 15);
+  EXPECT_EQ(sum_of(V1, 1), 16);
+
+  std::vector<float> V2 = {1.0f, 2.0f, 4.0f};
+  static_assert(std::is_same_v<decltype(sum_of(V2)), float>);
+  static_assert(std::is_same_v<decltype(sum_of(V2, 1.0f)), float>);
+  static_assert(std::is_same_v<decltype(sum_of(V2, 1.0)), double>);
+  EXPECT_EQ(sum_of(V2), 7.0f);
+  EXPECT_EQ(sum_of(V2, 1.0f), 8.0f);
+
+  // Make sure that for a const argument the return value is non-const.
+  const std::vector<float> V3 = {1.0f, 2.0f};
+  static_assert(std::is_same_v<decltype(sum_of(V3)), float>);
+  EXPECT_EQ(sum_of(V3), 3.0f);
+}
+
+TEST(STLExtrasTest, ProductOf) {
+  EXPECT_EQ(product_of(std::vector<int>()), 1);
+  EXPECT_EQ(product_of(std::vector<int>(), 0), 0);
+  EXPECT_EQ(product_of(std::vector<int>(), 1), 1);
+  std::vector<int> V1 = {1, 2, 3, 4, 5};
+  static_assert(std::is_same_v<decltype(product_of(V1)), int>);
+  static_assert(std::is_same_v<decltype(product_of(V1, 1)), int>);
+  EXPECT_EQ(product_of(V1), 120);
+  EXPECT_EQ(product_of(V1, 1), 120);
+  EXPECT_EQ(product_of(V1, 2), 240);
+
+  std::vector<float> V2 = {1.0f, 2.0f, 4.0f};
+  static_assert(std::is_same_v<decltype(product_of(V2)), float>);
+  static_assert(std::is_same_v<decltype(product_of(V2, 1.0f)), float>);
+  static_assert(std::is_same_v<decltype(product_of(V2, 1.0)), double>);
+  EXPECT_EQ(product_of(V2), 8.0f);
+  EXPECT_EQ(product_of(V2, 4.0f), 32.0f);
+
+  // Make sure that for a const argument the return value is non-const.
+ const std::vector<float> V3 = {1.0f, 2.0f}; + static_assert(std::is_same_v<decltype(product_of(V3)), float>); + EXPECT_EQ(product_of(V3), 2.0f); } struct Foo; diff --git a/llvm/unittests/CAS/CMakeLists.txt b/llvm/unittests/CAS/CMakeLists.txt index 0f8fcb9..ee40e6c 100644 --- a/llvm/unittests/CAS/CMakeLists.txt +++ b/llvm/unittests/CAS/CMakeLists.txt @@ -8,6 +8,7 @@ add_llvm_unittest(CASTests ActionCacheTest.cpp CASTestConfig.cpp ObjectStoreTest.cpp + OnDiskDataAllocatorTest.cpp OnDiskTrieRawHashMapTest.cpp ProgramTest.cpp ) diff --git a/llvm/unittests/CAS/OnDiskDataAllocatorTest.cpp b/llvm/unittests/CAS/OnDiskDataAllocatorTest.cpp new file mode 100644 index 0000000..966fa03 --- /dev/null +++ b/llvm/unittests/CAS/OnDiskDataAllocatorTest.cpp @@ -0,0 +1,66 @@ +//===----------------------------------------------------------------------===// +// +// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions. +// See https://llvm.org/LICENSE.txt for license information. +// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception +// +//===----------------------------------------------------------------------===// + +#include "llvm/CAS/OnDiskDataAllocator.h" +#include "llvm/CAS/MappedFileRegionArena.h" +#include "llvm/Config/llvm-config.h" +#include "llvm/Support/Alignment.h" +#include "llvm/Testing/Support/Error.h" +#include "llvm/Testing/Support/SupportHelpers.h" + +#if LLVM_ENABLE_ONDISK_CAS + +using namespace llvm; +using namespace llvm::cas; + +TEST(OnDiskDataAllocatorTest, Allocate) { + unittest::TempDir Temp("data-allocator", /*Unique=*/true); + constexpr size_t MB = 1024u * 1024u; + + std::optional<OnDiskDataAllocator> Allocator; + ASSERT_THAT_ERROR(OnDiskDataAllocator::create( + Temp.path("allocator"), "data", /*MaxFileSize=*/MB, + /*NewFileInitialSize=*/std::nullopt) + .moveInto(Allocator), + Succeeded()); + + // Allocate. + { + for (size_t Size = 1; Size < 16; ++Size) { + OnDiskDataAllocator::OnDiskPtr P; + ASSERT_THAT_ERROR(Allocator->allocate(Size).moveInto(P), Succeeded()); + EXPECT_TRUE( + isAligned(MappedFileRegionArena::getAlign(), P.getOffset().get())); + } + } + + // Out of space. + { + OnDiskDataAllocator::OnDiskPtr P; + ASSERT_THAT_ERROR(Allocator->allocate(MB).moveInto(P), Failed()); + } + + // Check size and capacity. + { + ASSERT_EQ(Allocator->capacity(), MB); + ASSERT_LE(Allocator->size(), MB); + } + + // Get. + { + OnDiskDataAllocator::OnDiskPtr P; + ASSERT_THAT_ERROR(Allocator->allocate(32).moveInto(P), Succeeded()); + ArrayRef<char> Data; + ASSERT_THAT_ERROR(Allocator->get(P.getOffset(), 16).moveInto(Data), + Succeeded()); + ASSERT_THAT_ERROR(Allocator->get(P.getOffset(), 1025).moveInto(Data), + Failed()); + } +} + +#endif // LLVM_ENABLE_ONDISK_CAS diff --git a/llvm/unittests/CAS/OnDiskTrieRawHashMapTest.cpp b/llvm/unittests/CAS/OnDiskTrieRawHashMapTest.cpp index 7bedfe4..6034c70 100644 --- a/llvm/unittests/CAS/OnDiskTrieRawHashMapTest.cpp +++ b/llvm/unittests/CAS/OnDiskTrieRawHashMapTest.cpp @@ -71,7 +71,7 @@ TEST_P(OnDiskTrieRawHashMapTestFixture, General) { std::optional<FileOffset> Offset; std::optional<MutableArrayRef<char>> Data; { - std::optional<OnDiskTrieRawHashMap::pointer> Insertion; + std::optional<OnDiskTrieRawHashMap::OnDiskPtr> Insertion; ASSERT_THAT_ERROR(Trie1->insert({Hash0, Data0v1}).moveInto(Insertion), Succeeded()); EXPECT_EQ(Hash0, (*Insertion)->Hash); @@ -128,7 +128,7 @@ TEST_P(OnDiskTrieRawHashMapTestFixture, General) { // Recover from an offset. 
{ - OnDiskTrieRawHashMap::const_pointer Recovered; + OnDiskTrieRawHashMap::ConstOnDiskPtr Recovered; ASSERT_THAT_ERROR(Trie1->recoverFromFileOffset(*Offset).moveInto(Recovered), Succeeded()); ASSERT_TRUE(Recovered); @@ -140,14 +140,14 @@ TEST_P(OnDiskTrieRawHashMapTestFixture, General) { // Recover from a bad offset. { FileOffset BadOffset(1); - OnDiskTrieRawHashMap::const_pointer Recovered; + OnDiskTrieRawHashMap::ConstOnDiskPtr Recovered; ASSERT_THAT_ERROR( Trie1->recoverFromFileOffset(BadOffset).moveInto(Recovered), Failed()); } // Insert another thing. { - std::optional<OnDiskTrieRawHashMap::pointer> Insertion; + std::optional<OnDiskTrieRawHashMap::OnDiskPtr> Insertion; ASSERT_THAT_ERROR(Trie1->insert({Hash1, Data1}).moveInto(Insertion), Succeeded()); EXPECT_EQ(Hash1, (*Insertion)->Hash); @@ -210,7 +210,7 @@ TEST(OnDiskTrieRawHashMapTest, OutOfSpace) { auto Hash0 = ArrayRef(Hash0Bytes); constexpr StringLiteral Data0v1Bytes = "data0.v1"; ArrayRef<char> Data0v1 = ArrayRef(Data0v1Bytes.data(), Data0v1Bytes.size()); - std::optional<OnDiskTrieRawHashMap::pointer> Insertion; + std::optional<OnDiskTrieRawHashMap::OnDiskPtr> Insertion; ASSERT_THAT_ERROR(Trie->insert({Hash0, Data0v1}).moveInto(Insertion), Failed()); } diff --git a/llvm/unittests/ExecutionEngine/Orc/ObjectLinkingLayerTest.cpp b/llvm/unittests/ExecutionEngine/Orc/ObjectLinkingLayerTest.cpp index 8a6549b..5ff3e26 100644 --- a/llvm/unittests/ExecutionEngine/Orc/ObjectLinkingLayerTest.cpp +++ b/llvm/unittests/ExecutionEngine/Orc/ObjectLinkingLayerTest.cpp @@ -301,7 +301,7 @@ TEST(ObjectLinkingLayerSearchGeneratorTest, AbsoluteSymbolsObjectLayer) { void lookupSymbolsAsync(ArrayRef<LookupRequest> Request, SymbolLookupCompleteFn Complete) override { - std::vector<ExecutorSymbolDef> Result; + std::vector<std::optional<ExecutorSymbolDef>> Result; EXPECT_EQ(Request.size(), 1u); for (auto &LR : Request) { EXPECT_EQ(LR.Symbols.size(), 1u); @@ -309,7 +309,7 @@ TEST(ObjectLinkingLayerSearchGeneratorTest, AbsoluteSymbolsObjectLayer) { if (*Sym.first == "_testFunc") { ExecutorSymbolDef Def{ExecutorAddr::fromPtr((void *)0x1000), JITSymbolFlags::Exported}; - Result.push_back(Def); + Result.emplace_back(Def); } else { ADD_FAILURE() << "unexpected symbol request " << *Sym.first; } diff --git a/llvm/unittests/IR/DebugInfoTest.cpp b/llvm/unittests/IR/DebugInfoTest.cpp index 03333d5..475e0a9 100644 --- a/llvm/unittests/IR/DebugInfoTest.cpp +++ b/llvm/unittests/IR/DebugInfoTest.cpp @@ -1250,6 +1250,82 @@ TEST(MetadataTest, DbgVariableRecordConversionRoutines) { EXPECT_EQ(DVI2->getExpression(), Expr2); } +TEST(MetadataTest, InlinedAtMethodsWithMultipleLevels) { + LLVMContext C; + + // Create IR with 3 levels of inlining: + // main() calls inline1() which calls inline2() which calls inline3() + // We'll test from the perspective of code in inline3() + std::unique_ptr<Module> M = parseIR(C, R"( + define void @main() !dbg !10 { + ret void, !dbg !20 + } + + !llvm.dbg.cu = !{!0} + !llvm.module.flags = !{!2} + + !0 = distinct !DICompileUnit(language: DW_LANG_C99, file: !1) + !1 = !DIFile(filename: "test.c", directory: "/test") + !2 = !{i32 2, !"Debug Info Version", i32 3} + + ; Subprograms for each function in the call chain + !10 = distinct !DISubprogram(name: "main", scope: !1, file: !1, line: 100, unit: !0) + !11 = distinct !DISubprogram(name: "inline1", scope: !1, file: !1, line: 200, unit: !0) + !12 = distinct !DISubprogram(name: "inline2", scope: !1, file: !1, line: 300, unit: !0) + !13 = distinct !DISubprogram(name: "inline3", scope: !1, file: !1, 
line: 400, unit: !0) + + ; Location in inline3 (line 401), inlined at location !21 + !20 = !DILocation(line: 401, column: 5, scope: !13, inlinedAt: !21) + + ; Location in inline2 (line 301) where inline3 was called, inlined at !22 + !21 = !DILocation(line: 301, column: 10, scope: !12, inlinedAt: !22) + + ; Location in inline1 (line 201) where inline2 was called, inlined at !23 + !22 = !DILocation(line: 201, column: 15, scope: !11, inlinedAt: !23) + + ; Location in main (line 101) where inline1 was called (no more inlinedAt) + !23 = !DILocation(line: 101, column: 3, scope: !10) + )"); + + ASSERT_TRUE(M); + + Function *MainFunc = M->getFunction("main"); + ASSERT_TRUE(MainFunc); + Instruction &RetInst = MainFunc->getEntryBlock().front(); + + // Use getDebugLoc() to get the location from the ret instruction. + const DILocation *InnermostLoc = RetInst.getDebugLoc().get(); + ASSERT_TRUE(InnermostLoc); + + // Test getScope() - should return the immediate scope (inline3). + DILocalScope *ImmediateScope = InnermostLoc->getScope(); + ASSERT_TRUE(ImmediateScope); + EXPECT_TRUE(isa<DISubprogram>(ImmediateScope)); + EXPECT_EQ(cast<DISubprogram>(ImmediateScope)->getName(), "inline3"); + + // Test getInlinedAt() - should return the next level in the inlining chain. + const DILocation *NextLevel = InnermostLoc->getInlinedAt(); + ASSERT_TRUE(NextLevel); + EXPECT_EQ(NextLevel->getLine(), 301u); + EXPECT_EQ(cast<DISubprogram>(NextLevel->getScope())->getName(), "inline2"); + + // Test getInlinedAtLocation() - should return the outermost location. + const DILocation *OutermostLoc = InnermostLoc->getInlinedAtLocation(); + ASSERT_TRUE(OutermostLoc); + EXPECT_EQ(OutermostLoc->getLine(), 101u); + EXPECT_EQ(OutermostLoc->getColumn(), 3u); + EXPECT_EQ(OutermostLoc->getInlinedAt(), nullptr); + EXPECT_EQ(cast<DISubprogram>(OutermostLoc->getScope())->getName(), "main"); + + // Test getInlinedAtScope() - should return the scope of the outermost + // location. + DILocalScope *InlinedAtScope = InnermostLoc->getInlinedAtScope(); + ASSERT_TRUE(InlinedAtScope); + EXPECT_TRUE(isa<DISubprogram>(InlinedAtScope)); + EXPECT_EQ(cast<DISubprogram>(InlinedAtScope)->getName(), "main"); + EXPECT_EQ(InlinedAtScope, OutermostLoc->getScope()); +} + // Test that the hashing function for DISubprograms representing methods produce // the same result after replacing their scope (the type containing the // subprogram) from a temporary DIType with the permanent one. 
diff --git a/llvm/unittests/IR/ManglerTest.cpp b/llvm/unittests/IR/ManglerTest.cpp
index bced6ff..bb0b3ed 100644
--- a/llvm/unittests/IR/ManglerTest.cpp
+++ b/llvm/unittests/IR/ManglerTest.cpp
@@ -243,6 +243,9 @@ TEST(ManglerTest, Arm64EC) {
       // public: int __cdecl Wrapper<struct A>::GetValue(struct WW<struct
       // A>::Z)const
       "?GetValue@?$Wrapper@UA@@@@$$hQEBAHUZ@?$WW@UA@@@@@Z",
+
+      // MD5 symbol
+      "??@aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa@$$h@",
   };
   for (const auto &Arm64ECName : Arm64ECNames) {
diff --git a/llvm/unittests/Option/CMakeLists.txt b/llvm/unittests/Option/CMakeLists.txt
index 7be4300..5fefb5e 100644
--- a/llvm/unittests/Option/CMakeLists.txt
+++ b/llvm/unittests/Option/CMakeLists.txt
@@ -4,11 +4,15 @@ set(LLVM_LINK_COMPONENTS
   )
 set(LLVM_TARGET_DEFINITIONS Opts.td)
-
 tablegen(LLVM Opts.inc -gen-opt-parser-defs)
+
+set(LLVM_TARGET_DEFINITIONS SubCommandOpts.td)
+tablegen(LLVM SubCommandOpts.inc -gen-opt-parser-defs)
+
 add_public_tablegen_target(OptsTestTableGen)
 add_llvm_unittest(OptionTests
   OptionParsingTest.cpp
   OptionMarshallingTest.cpp
+  OptionSubCommandsTest.cpp
   )
diff --git a/llvm/unittests/Option/OptionMarshallingTest.cpp b/llvm/unittests/Option/OptionMarshallingTest.cpp
index 005144b..15917cc 100644
--- a/llvm/unittests/Option/OptionMarshallingTest.cpp
+++ b/llvm/unittests/Option/OptionMarshallingTest.cpp
@@ -29,8 +29,9 @@ static const OptionWithMarshallingInfo MarshallingTable[] = {
 #define OPTION_WITH_MARSHALLING( \
     PREFIX_TYPE, PREFIXED_NAME_OFFSET, ID, KIND, GROUP, ALIAS, ALIASARGS, \
     FLAGS, VISIBILITY, PARAM, HELPTEXT, HELPTEXTSFORVARIANTS, METAVAR, VALUES, \
-    SHOULD_PARSE, ALWAYS_EMIT, KEYPATH, DEFAULT_VALUE, IMPLIED_CHECK, \
-    IMPLIED_VALUE, NORMALIZER, DENORMALIZER, MERGER, EXTRACTOR, TABLE_INDEX) \
+    SUBCOMMANDIDS_OFFSET, SHOULD_PARSE, ALWAYS_EMIT, KEYPATH, DEFAULT_VALUE, \
+    IMPLIED_CHECK, IMPLIED_VALUE, NORMALIZER, DENORMALIZER, MERGER, EXTRACTOR, \
+    TABLE_INDEX) \
   {PREFIXED_NAME_OFFSET, #KEYPATH, #IMPLIED_CHECK, #IMPLIED_VALUE},
 #include "Opts.inc"
 #undef OPTION_WITH_MARSHALLING
diff --git a/llvm/unittests/Option/OptionSubCommandsTest.cpp b/llvm/unittests/Option/OptionSubCommandsTest.cpp
new file mode 100644
index 0000000..e31a326
--- /dev/null
+++ b/llvm/unittests/Option/OptionSubCommandsTest.cpp
@@ -0,0 +1,252 @@
+//===- unittest/Option/OptionSubCommandsTest.cpp - OptTable tests ---------===//
+//
+// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
+// See https://llvm.org/LICENSE.txt for license information.
+// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
+//
+//===----------------------------------------------------------------------===//
+
+#include "llvm/ADT/STLExtras.h"
+#include "llvm/Option/Arg.h"
+#include "llvm/Option/ArgList.h"
+#include "llvm/Option/OptTable.h"
+#include "llvm/Option/Option.h"
+#include "llvm/Support/raw_ostream.h"
+#include "gtest/gtest.h"
+
+using namespace llvm;
+using namespace llvm::opt;
+
+#if defined(__clang__)
+#pragma clang diagnostic ignored "-Wdeprecated-declarations"
+#endif
+
+namespace {
+enum ID {
+  OPT_INVALID = 0,
+#define OPTION(PREFIXES, NAME, ID, KIND, GROUP, ALIAS, ALIASARGS, FLAGS, \
+               VISIBILITY, PARAM, HELPTEXT, HELPTEXTSFORVARIANTS, METAVAR, \
+               VALUES, SUBCOMMANDIDS_OFFSET) \
+  OPT_##ID,
+#include "SubCommandOpts.inc"
+#undef OPTION
+};
+#define OPTTABLE_STR_TABLE_CODE
+#include "SubCommandOpts.inc"
+#undef OPTTABLE_STR_TABLE_CODE
+
+#define OPTTABLE_PREFIXES_TABLE_CODE
+#include "SubCommandOpts.inc"
+#undef OPTTABLE_PREFIXES_TABLE_CODE
+
+#define OPTTABLE_SUBCOMMAND_IDS_TABLE_CODE
+#include "SubCommandOpts.inc"
+#undef OPTTABLE_SUBCOMMAND_IDS_TABLE_CODE
+
+#define OPTTABLE_SUBCOMMANDS_CODE
+#include "SubCommandOpts.inc"
+#undef OPTTABLE_SUBCOMMANDS_CODE
+
+static constexpr OptTable::Info InfoTable[] = {
+#define OPTION(...) LLVM_CONSTRUCT_OPT_INFO(__VA_ARGS__),
+#include "SubCommandOpts.inc"
+#undef OPTION
+};
+
+class TestOptSubCommandTable : public GenericOptTable {
+public:
+  TestOptSubCommandTable(bool IgnoreCase = false)
+      : GenericOptTable(OptionStrTable, OptionPrefixesTable, InfoTable,
+                        IgnoreCase, OptionSubCommands,
+                        OptionSubCommandIDsTable) {}
+};
+
+// Test fixture
+template <typename T> class OptSubCommandTableTest : public ::testing::Test {};
+
+// Run the subcommand-enabled OptTable through the same suite of tests.
+using OptSubCommandTableTestTypes = ::testing::Types<TestOptSubCommandTable>;
+
+TYPED_TEST_SUITE(OptSubCommandTableTest, OptSubCommandTableTestTypes, );
+
+TYPED_TEST(OptSubCommandTableTest, SubCommandParsing) {
+  TypeParam T;
+  unsigned MAI, MAC;
+
+  std::string ErrMsg;
+  raw_string_ostream RSO1(ErrMsg);
+
+  auto HandleMultipleSubcommands = [&](ArrayRef<StringRef> SubCommands) {
+    ErrMsg.clear();
+    RSO1 << "Multiple subcommands passed\n";
+    for (auto SC : SubCommands)
+      RSO1 << "\n" << SC;
+  };
+
+  auto HandleOtherPositionals = [&](ArrayRef<StringRef> Positionals) {
+    ErrMsg.clear();
+    RSO1 << "Unregistered positionals passed\n";
+    for (auto SC : Positionals)
+      RSO1 << "\n" << SC;
+  };
+
+  {
+    // Test case 1: Toplevel option, no subcommand
+    const char *Args[] = {"-version"};
+    InputArgList AL = T.ParseArgs(Args, MAI, MAC);
+    EXPECT_TRUE(AL.hasArg(OPT_version));
+    StringRef SC = AL.getSubCommand(
+        T.getSubCommands(), HandleMultipleSubcommands, HandleOtherPositionals);
+    EXPECT_TRUE(SC.empty());
+    EXPECT_FALSE(AL.hasArg(OPT_uppercase));
+    EXPECT_FALSE(AL.hasArg(OPT_lowercase));
+  }
+
+  {
+    // Test case 2: Subcommand 'foo' with its valid options
+    const char *Args[] = {"foo", "-uppercase"};
+    InputArgList AL = T.ParseArgs(Args, MAI, MAC);
+    StringRef SC = AL.getSubCommand(
+        T.getSubCommands(), HandleMultipleSubcommands, HandleOtherPositionals);
+    EXPECT_EQ(SC, "foo");
+    EXPECT_TRUE(AL.hasArg(OPT_uppercase));
+    EXPECT_FALSE(AL.hasArg(OPT_lowercase));
+    EXPECT_FALSE(AL.hasArg(OPT_version));
+    EXPECT_EQ(std::string::npos, ErrMsg.find("Multiple subcommands passed"))
+        << "Did not expect error message as this is a valid use case.";
+    EXPECT_EQ(std::string::npos, ErrMsg.find("Unregistered positionals passed"))
+        << "Did not expect error message as this is a valid use case.";
+  }
+
+  {
+    // Test case 3: Check valid use of subcommand which follows a valid
+    // subcommand option.
+    const char *Args[] = {"-uppercase", "foo"};
+    InputArgList AL = T.ParseArgs(Args, MAI, MAC);
+    StringRef SC = AL.getSubCommand(
+        T.getSubCommands(), HandleMultipleSubcommands, HandleOtherPositionals);
+    EXPECT_EQ(SC, "foo");
+    EXPECT_TRUE(AL.hasArg(OPT_uppercase));
+    EXPECT_FALSE(AL.hasArg(OPT_lowercase));
+    EXPECT_FALSE(AL.hasArg(OPT_version));
+    EXPECT_EQ(std::string::npos, ErrMsg.find("Multiple subcommands passed"))
+        << "Did not expect error message as this is a valid use case.";
+    EXPECT_EQ(std::string::npos, ErrMsg.find("Unregistered positionals passed"))
+        << "Did not expect error message as this is a valid use case.";
+  }
+
+  {
+    // Test case 4: Check invalid use of passing multiple subcommands.
+    const char *Args[] = {"-uppercase", "foo", "bar"};
+    InputArgList AL = T.ParseArgs(Args, MAI, MAC);
+    StringRef SC = AL.getSubCommand(
+        T.getSubCommands(), HandleMultipleSubcommands, HandleOtherPositionals);
+    // No valid subcommand should be returned as this is an invalid invocation.
+    EXPECT_TRUE(SC.empty());
+    // Expect the multiple subcommands error message.
+    EXPECT_NE(std::string::npos, ErrMsg.find("Multiple subcommands passed"));
+    EXPECT_NE(std::string::npos, ErrMsg.find("foo"));
+    EXPECT_NE(std::string::npos, ErrMsg.find("bar"));
+    EXPECT_EQ(std::string::npos, ErrMsg.find("Unregistered positionals passed"))
+        << "Did not expect the unregistered positionals error for this input.";
+  }
+
+  {
+    // Test case 5: Check invalid use of passing unregistered subcommands.
+ const char *Args[] = {"foobar"}; + InputArgList AL = T.ParseArgs(Args, MAI, MAC); + StringRef SC = AL.getSubCommand( + T.getSubCommands(), HandleMultipleSubcommands, HandleOtherPositionals); + // No valid subcommand should be returned as this is an invalid invocation. + EXPECT_TRUE(SC.empty()); + // Expect the unregistered subcommands error message. + EXPECT_NE(std::string::npos, + ErrMsg.find("Unregistered positionals passed")); + EXPECT_NE(std::string::npos, ErrMsg.find("foobar")); + } + + { + // Test case 6: Check invalid use of a valid subcommand which follows a + // valid subcommand option but the option is not registered with the given + // subcommand. + const char *Args[] = {"-lowercase", "bar"}; + InputArgList AL = T.ParseArgs(Args, MAI, MAC); + StringRef SC = AL.getSubCommand( + T.getSubCommands(), HandleMultipleSubcommands, HandleOtherPositionals); + auto HandleSubCommandArg = [&](ID OptionType) { + if (!AL.hasArg(OptionType)) + return false; + auto O = T.getOption(OptionType); + if (!O.isRegisteredSC(SC)) { + ErrMsg.clear(); + RSO1 << "Option [" << O.getName() << "] is not valid for SubCommand [" + << SC << "]\n"; + return false; + } + return true; + }; + EXPECT_EQ(SC, "bar"); // valid subcommand + EXPECT_TRUE(AL.hasArg(OPT_lowercase)); // valid option + EXPECT_FALSE(HandleSubCommandArg(OPT_lowercase)); + EXPECT_NE( + std::string::npos, + ErrMsg.find("Option [lowercase] is not valid for SubCommand [bar]")); + } +} + +TYPED_TEST(OptSubCommandTableTest, SubCommandHelp) { + TypeParam T; + std::string Help; + raw_string_ostream RSO(Help); + + // Toplevel help + T.printHelp(RSO, "Test Usage String", "OverviewString"); + EXPECT_NE(std::string::npos, Help.find("OVERVIEW:")); + EXPECT_NE(std::string::npos, Help.find("OverviewString")); + EXPECT_NE(std::string::npos, Help.find("USAGE:")); + EXPECT_NE(std::string::npos, Help.find("Test Usage String")); + EXPECT_NE(std::string::npos, Help.find("SUBCOMMANDS:")); + EXPECT_NE(std::string::npos, Help.find("foo")); + EXPECT_NE(std::string::npos, Help.find("bar")); + EXPECT_NE(std::string::npos, Help.find("HelpText for SubCommand foo.")); + EXPECT_NE(std::string::npos, Help.find("HelpText for SubCommand bar.")); + EXPECT_NE(std::string::npos, Help.find("OPTIONS:")); + EXPECT_NE(std::string::npos, Help.find("--help")); + EXPECT_NE(std::string::npos, Help.find("-version")); + // uppercase is not a global option and should not be shown. + EXPECT_EQ(std::string::npos, Help.find("-uppercase")); + + // Help for subcommand foo + Help.clear(); + StringRef SC1 = "foo"; + T.printHelp(RSO, "Test Usage String", "OverviewString", false, false, + Visibility(), SC1); + EXPECT_NE(std::string::npos, Help.find("OVERVIEW:")); + EXPECT_NE(std::string::npos, Help.find("OverviewString")); + // SubCommand "foo" definition for tablegen has NO dedicated usage string so + // not expected to see USAGE. 
+ EXPECT_EQ(std::string::npos, Help.find("USAGE:")); + EXPECT_NE(std::string::npos, Help.find("HelpText for SubCommand foo.")); + EXPECT_NE(std::string::npos, Help.find("-uppercase")); + EXPECT_NE(std::string::npos, Help.find("-lowercase")); + EXPECT_EQ(std::string::npos, Help.find("-version")); + EXPECT_EQ(std::string::npos, Help.find("SUBCOMMANDS:")); + + // Help for subcommand bar + Help.clear(); + StringRef SC2 = "bar"; + T.printHelp(RSO, "Test Usage String", "OverviewString", false, false, + Visibility(), SC2); + EXPECT_NE(std::string::npos, Help.find("OVERVIEW:")); + EXPECT_NE(std::string::npos, Help.find("OverviewString")); + // SubCommand "bar" definition for tablegen has a dedicated usage string. + EXPECT_NE(std::string::npos, Help.find("USAGE:")); + EXPECT_NE(std::string::npos, Help.find("Subcommand bar <options>")); + EXPECT_NE(std::string::npos, Help.find("HelpText for SubCommand bar.")); + EXPECT_NE(std::string::npos, Help.find("-uppercase")); + // lowercase is not an option for bar and should not be shown. + EXPECT_EQ(std::string::npos, Help.find("-lowercase")); + // version is a global option and should not be shown. + EXPECT_EQ(std::string::npos, Help.find("-version")); +} +} // end anonymous namespace diff --git a/llvm/unittests/Option/SubCommandOpts.td b/llvm/unittests/Option/SubCommandOpts.td new file mode 100644 index 0000000..b9750da --- /dev/null +++ b/llvm/unittests/Option/SubCommandOpts.td @@ -0,0 +1,16 @@ +include "llvm/Option/OptParser.td" + +def sc_foo : SubCommand<"foo", "HelpText for SubCommand foo.">; + +def sc_bar : SubCommand<"bar", "HelpText for SubCommand bar.", + "Subcommand bar <options>">; + +def help : Flag<["--"], "help">, HelpText<"Subcommand <subcommand> <options>">; + +def version : Flag<["-"], "version">, HelpText<"Display the version number">; + +def uppercase : Flag<["-"], "uppercase", [sc_foo, sc_bar]>, + HelpText<"Print in uppercase">; + +def lowercase : Flag<["-"], "lowercase", [sc_foo]>, + HelpText<"Print in lowercase">; diff --git a/llvm/unittests/Support/GlobPatternTest.cpp b/llvm/unittests/Support/GlobPatternTest.cpp index e4f1025..58fd767 100644 --- a/llvm/unittests/Support/GlobPatternTest.cpp +++ b/llvm/unittests/Support/GlobPatternTest.cpp @@ -257,6 +257,78 @@ TEST_F(GlobPatternTest, NUL) { } } +TEST_F(GlobPatternTest, PrefixSuffix) { + auto Pat = GlobPattern::create(""); + ASSERT_TRUE((bool)Pat); + EXPECT_EQ("", Pat->prefix()); + EXPECT_EQ("", Pat->suffix()); + + Pat = GlobPattern::create("abcd"); + ASSERT_TRUE((bool)Pat); + EXPECT_EQ("abcd", Pat->prefix()); + EXPECT_EQ("", Pat->suffix()); + + Pat = GlobPattern::create("*abcd"); + ASSERT_TRUE((bool)Pat); + EXPECT_EQ("", Pat->prefix()); + EXPECT_EQ("abcd", Pat->suffix()); + + Pat = GlobPattern::create("abcd*"); + ASSERT_TRUE((bool)Pat); + EXPECT_EQ("abcd", Pat->prefix()); + EXPECT_EQ("", Pat->suffix()); + + Pat = GlobPattern::create("ab*cd"); + ASSERT_TRUE((bool)Pat); + EXPECT_EQ("ab", Pat->prefix()); + EXPECT_EQ("cd", Pat->suffix()); + + Pat = GlobPattern::create("ab?cd"); + ASSERT_TRUE((bool)Pat); + EXPECT_EQ("ab", Pat->prefix()); + EXPECT_EQ("cd", Pat->suffix()); + + Pat = GlobPattern::create("ab[n]cd"); + ASSERT_TRUE((bool)Pat); + EXPECT_EQ("ab", Pat->prefix()); + EXPECT_EQ("cd", Pat->suffix()); + + Pat = GlobPattern::create("ab{}cd"); + ASSERT_TRUE((bool)Pat); + EXPECT_EQ("ab", Pat->prefix()); + EXPECT_EQ("cd", Pat->suffix()); + + Pat = GlobPattern::create("ab{cd"); + ASSERT_TRUE((bool)Pat); + EXPECT_EQ("ab", Pat->prefix()); + EXPECT_EQ("cd", Pat->suffix()); + + Pat = 
GlobPattern::create("ab]cd"); + ASSERT_TRUE((bool)Pat); + EXPECT_EQ("ab]cd", Pat->prefix()); + EXPECT_EQ("", Pat->suffix()); + + Pat = GlobPattern::create("ab\\cd"); + ASSERT_TRUE((bool)Pat); + EXPECT_EQ("ab", Pat->prefix()); + EXPECT_EQ("d", Pat->suffix()); + + Pat = GlobPattern::create("ab\\\\cd"); + ASSERT_TRUE((bool)Pat); + EXPECT_EQ("ab", Pat->prefix()); + EXPECT_EQ("d", Pat->suffix()); + + Pat = GlobPattern::create("ab?cd?"); + ASSERT_TRUE((bool)Pat); + EXPECT_EQ("ab", Pat->prefix()); + EXPECT_EQ("", Pat->suffix()); + + Pat = GlobPattern::create("?ab?cd"); + ASSERT_TRUE((bool)Pat); + EXPECT_EQ("", Pat->prefix()); + EXPECT_EQ("cd", Pat->suffix()); +} + TEST_F(GlobPatternTest, Pathological) { std::string P, S(40, 'a'); StringRef Pieces[] = {"a*", "[ba]*", "{b*,a*}*"}; diff --git a/llvm/utils/TableGen/Basic/RuntimeLibcallsEmitter.cpp b/llvm/utils/TableGen/Basic/RuntimeLibcallsEmitter.cpp index 45cb209..c96331c 100644 --- a/llvm/utils/TableGen/Basic/RuntimeLibcallsEmitter.cpp +++ b/llvm/utils/TableGen/Basic/RuntimeLibcallsEmitter.cpp @@ -543,21 +543,8 @@ void RuntimeLibcallEmitter::emitSystemRuntimeLibrarySetCalls( OS << "void llvm::RTLIB::RuntimeLibcallsInfo::setTargetRuntimeLibcallSets(" "const llvm::Triple &TT, ExceptionHandling ExceptionModel, " "FloatABI::ABIType FloatABI, EABI EABIVersion, " - "StringRef ABIName) {\n" - " struct LibcallImplPair {\n" - " RTLIB::Libcall Func;\n" - " RTLIB::LibcallImpl Impl;\n" - " };\n" - " auto setLibcallsImpl = [this](\n" - " ArrayRef<LibcallImplPair> Libcalls,\n" - " std::optional<llvm::CallingConv::ID> CC = {})\n" - " {\n" - " for (const auto [Func, Impl] : Libcalls) {\n" - " setLibcallImpl(Func, Impl);\n" - " if (CC)\n" - " setLibcallImplCallingConv(Impl, *CC);\n" - " }\n" - " };\n"; + "StringRef ABIName) {\n"; + ArrayRef<const Record *> AllLibs = Records.getAllDerivedDefinitions("SystemRuntimeLibrary"); @@ -682,18 +669,21 @@ void RuntimeLibcallEmitter::emitSystemRuntimeLibrarySetCalls( Funcs.erase(UniqueI, Funcs.end()); - OS << indent(IndentDepth + 2) << "setLibcallsImpl({\n"; + StringRef CCEnum; + if (FuncsWithCC.CallingConv) + CCEnum = FuncsWithCC.CallingConv->getValueAsString("CallingConv"); + for (const RuntimeLibcallImpl *LibCallImpl : Funcs) { - OS << indent(IndentDepth + 4); - LibCallImpl->emitTableEntry(OS); - } - OS << indent(IndentDepth + 2) << "}"; - if (FuncsWithCC.CallingConv) { - StringRef CCEnum = - FuncsWithCC.CallingConv->getValueAsString("CallingConv"); - OS << ", " << CCEnum; + OS << indent(IndentDepth + 2); + LibCallImpl->emitSetImplCall(OS); + + if (FuncsWithCC.CallingConv) { + OS << indent(IndentDepth + 2) << "setLibcallImplCallingConv("; + LibCallImpl->emitEnumEntry(OS); + OS << ", " << CCEnum << ");\n"; + } } - OS << ");\n\n"; + OS << '\n'; if (!SubsetPredicate.isAlwaysAvailable()) { OS << indent(IndentDepth); diff --git a/llvm/utils/TableGen/Basic/VTEmitter.cpp b/llvm/utils/TableGen/Basic/VTEmitter.cpp index c6b4d0b..301b27d 100644 --- a/llvm/utils/TableGen/Basic/VTEmitter.cpp +++ b/llvm/utils/TableGen/Basic/VTEmitter.cpp @@ -33,11 +33,11 @@ static void vTtoGetLlvmTyString(raw_ostream &OS, const Record *VT) { bool IsRISCVVecTuple = VT->getValueAsBit("isRISCVVecTuple"); if (IsRISCVVecTuple) { - unsigned NElem = VT->getValueAsInt("nElem"); + unsigned NF = VT->getValueAsInt("NF"); unsigned Sz = VT->getValueAsInt("Size"); OS << "TargetExtType::get(Context, \"riscv.vector.tuple\", " "ScalableVectorType::get(Type::getInt8Ty(Context), " - << (Sz / (NElem * 8)) << "), " << NElem << ")"; + << (Sz / (NF * 8)) << "), " << NF << 
")"; return; } diff --git a/llvm/utils/TableGen/OptionParserEmitter.cpp b/llvm/utils/TableGen/OptionParserEmitter.cpp index a470fbb..48ae1a0 100644 --- a/llvm/utils/TableGen/OptionParserEmitter.cpp +++ b/llvm/utils/TableGen/OptionParserEmitter.cpp @@ -9,8 +9,10 @@ #include "Common/OptEmitter.h" #include "llvm/ADT/STLExtras.h" #include "llvm/ADT/SmallString.h" +#include "llvm/ADT/SmallVector.h" #include "llvm/ADT/StringExtras.h" #include "llvm/ADT/Twine.h" +#include "llvm/Option/OptTable.h" #include "llvm/Support/InterleavedRange.h" #include "llvm/Support/raw_ostream.h" #include "llvm/TableGen/Record.h" @@ -258,6 +260,9 @@ static void emitOptionParser(const RecordKeeper &Records, raw_ostream &OS) { std::vector<const Record *> Opts = Records.getAllDerivedDefinitions("Option"); llvm::sort(Opts, IsOptionRecordsLess); + std::vector<const Record *> SubCommands = + Records.getAllDerivedDefinitions("SubCommand"); + emitSourceFileHeader("Option Parsing Definitions", OS); // Generate prefix groups. @@ -271,6 +276,35 @@ static void emitOptionParser(const RecordKeeper &Records, raw_ostream &OS) { Prefixes.try_emplace(PrefixKey, 0); } + // Generate sub command groups. + typedef SmallVector<StringRef, 2> SubCommandKeyT; + typedef std::map<SubCommandKeyT, unsigned> SubCommandIDsT; + SubCommandIDsT SubCommandIDs; + + auto PrintSubCommandIdsOffset = [&SubCommandIDs, &OS](const Record &R) { + if (R.getValue("SubCommands") != nullptr) { + std::vector<const Record *> SubCommands = + R.getValueAsListOfDefs("SubCommands"); + SubCommandKeyT SubCommandKey; + for (const auto &SubCommand : SubCommands) + SubCommandKey.push_back(SubCommand->getName()); + OS << SubCommandIDs[SubCommandKey]; + } else { + // The option SubCommandIDsOffset (for default top level toolname is 0). + OS << " 0"; + } + }; + + SubCommandIDs.try_emplace(SubCommandKeyT(), 0); + for (const Record &R : llvm::make_pointee_range(Opts)) { + std::vector<const Record *> RSubCommands = + R.getValueAsListOfDefs("SubCommands"); + SubCommandKeyT SubCommandKey; + for (const auto &SubCommand : RSubCommands) + SubCommandKey.push_back(SubCommand->getName()); + SubCommandIDs.try_emplace(SubCommandKey, 0); + } + DenseSet<StringRef> PrefixesUnionSet; for (const auto &[Prefix, _] : Prefixes) PrefixesUnionSet.insert_range(Prefix); @@ -323,6 +357,40 @@ static void emitOptionParser(const RecordKeeper &Records, raw_ostream &OS) { OS << "\n};\n"; OS << "#endif // OPTTABLE_PREFIXES_TABLE_CODE\n\n"; + // Dump subcommand IDs. + OS << "/////////"; + OS << "// SubCommand IDs\n\n"; + OS << "#ifdef OPTTABLE_SUBCOMMAND_IDS_TABLE_CODE\n"; + OS << "static constexpr unsigned OptionSubCommandIDsTable[] = {\n"; + { + // Ensure the first subcommand set is always empty. + assert(!SubCommandIDs.empty() && + "We should always emit an empty set of subcommands"); + assert(SubCommandIDs.begin()->first.empty() && + "First subcommand set should always be empty"); + llvm::ListSeparator Sep(",\n"); + unsigned CurIndex = 0; + for (auto &[SubCommand, SubCommandIndex] : SubCommandIDs) { + // First emit the number of subcommand strings in this list of + // subcommands. 
+      OS << Sep << " " << SubCommand.size() << " /* subcommands */";
+      SubCommandIndex = CurIndex;
+      assert((CurIndex == 0 || !SubCommand.empty()) &&
+             "Only first subcommand set should be empty!");
+      for (const auto &SubCommandKey : SubCommand) {
+        auto It = std::find_if(
+            SubCommands.begin(), SubCommands.end(),
+            [&](const Record *R) { return R->getName() == SubCommandKey; });
+        assert(It != SubCommands.end() && "SubCommand not found");
+        OS << ", " << std::distance(SubCommands.begin(), It) << " /* '"
+           << SubCommandKey << "' */";
+      }
+      CurIndex += SubCommand.size() + 1;
+    }
+  }
+  OS << "\n};\n";
+  OS << "#endif // OPTTABLE_SUBCOMMAND_IDS_TABLE_CODE\n\n";
+
   // Dump prefixes union.
   OS << "/////////\n";
   OS << "// Prefix Union\n\n";
@@ -400,7 +468,12 @@ static void emitOptionParser(const RecordKeeper &Records, raw_ostream &OS) {
       OS << ", nullptr";
 
       // The option Values (unused for groups).
-      OS << ", nullptr)\n";
+      OS << ", nullptr";
+
+      // The option SubCommandIDsOffset.
+      OS << ", ";
+      PrintSubCommandIdsOffset(R);
+      OS << ")\n";
     }
 
   OS << "\n";
@@ -527,6 +600,10 @@ static void emitOptionParser(const RecordKeeper &Records, raw_ostream &OS) {
       OS << getOptionName(R) << "_Values";
     else
       OS << "nullptr";
+
+    // The option SubCommandIDsOffset.
+    OS << ", ";
+    PrintSubCommandIdsOffset(R);
   };
 
   auto IsMarshallingOption = [](const Record &R) {
@@ -595,6 +672,19 @@ static void emitOptionParser(const RecordKeeper &Records, raw_ostream &OS) {
   OS << "#endif // SIMPLE_ENUM_VALUE_TABLE\n";
   OS << "\n";
 
+  OS << "/////////\n";
+  OS << "// SubCommands\n\n";
+  OS << "#ifdef OPTTABLE_SUBCOMMANDS_CODE\n";
+  OS << "static constexpr llvm::opt::OptTable::SubCommand OptionSubCommands[] "
+        "= "
+        "{\n";
+  for (const Record *SubCommand : SubCommands) {
+    OS << "  { \"" << SubCommand->getValueAsString("Name") << "\", ";
+    OS << "\"" << SubCommand->getValueAsString("HelpText") << "\", ";
+    OS << "\"" << SubCommand->getValueAsString("Usage") << "\" },\n";
+  }
+  OS << "};\n";
+  OS << "#endif // OPTTABLE_SUBCOMMANDS_CODE\n\n";
+
   OS << "\n";
 }
diff --git a/llvm/utils/gn/secondary/llvm/lib/CAS/BUILD.gn b/llvm/utils/gn/secondary/llvm/lib/CAS/BUILD.gn
index c37f43c..b4edd8d 100644
--- a/llvm/utils/gn/secondary/llvm/lib/CAS/BUILD.gn
+++ b/llvm/utils/gn/secondary/llvm/lib/CAS/BUILD.gn
@@ -9,6 +9,7 @@ static_library("CAS") {
     "MappedFileRegionArena.cpp",
    "ObjectStore.cpp",
     "OnDiskCommon.cpp",
+    "OnDiskDataAllocator.cpp",
     "OnDiskTrieRawHashMap.cpp",
   ]
 }
diff --git a/llvm/utils/gn/secondary/llvm/unittests/CAS/BUILD.gn b/llvm/utils/gn/secondary/llvm/unittests/CAS/BUILD.gn
index ccb447f..52a64be 100644
--- a/llvm/utils/gn/secondary/llvm/unittests/CAS/BUILD.gn
+++ b/llvm/utils/gn/secondary/llvm/unittests/CAS/BUILD.gn
@@ -10,6 +10,7 @@ unittest("CASTests") {
     "ActionCacheTest.cpp",
     "CASTestConfig.cpp",
     "ObjectStoreTest.cpp",
+    "OnDiskDataAllocatorTest.cpp",
     "OnDiskTrieRawHashMapTest.cpp",
     "ProgramTest.cpp",
   ]
diff --git a/llvm/utils/gn/secondary/llvm/unittests/Option/BUILD.gn b/llvm/utils/gn/secondary/llvm/unittests/Option/BUILD.gn
index 46f3ff9..759fd6e 100644
--- a/llvm/utils/gn/secondary/llvm/unittests/Option/BUILD.gn
+++ b/llvm/utils/gn/secondary/llvm/unittests/Option/BUILD.gn
@@ -6,14 +6,21 @@ tablegen("Opts") {
   args = [ "-gen-opt-parser-defs" ]
 }
 
+tablegen("SubCommandOpts") {
+  visibility = [ ":OptionTests" ]
+  args = [ "-gen-opt-parser-defs" ]
+}
+
 unittest("OptionTests") {
   deps = [
     ":Opts",
+    ":SubCommandOpts",
     "//llvm/lib/Option",
     "//llvm/lib/Support",
   ]
   sources = [
     "OptionMarshallingTest.cpp",
     "OptionParsingTest.cpp",
"OptionSubCommandsTest.cpp", ] } diff --git a/llvm/utils/profcheck-xfail.txt b/llvm/utils/profcheck-xfail.txt index 53187c8..bbc8f59 100644 --- a/llvm/utils/profcheck-xfail.txt +++ b/llvm/utils/profcheck-xfail.txt @@ -1,11 +1,8 @@ Analysis/LoopAccessAnalysis/memcheck-ni.ll Analysis/MemorySSA/pr116227.ll -Analysis/MemorySSA/pr40038.ll Analysis/MemorySSA/pr43641.ll Analysis/MemorySSA/pr46574.ll Analysis/MemorySSA/update-remove-dead-blocks.ll -Analysis/StackSafetyAnalysis/ipa.ll -Analysis/ValueTracking/known-power-of-two-urem.ll Bitcode/fcmp-fast.ll Bitcode/flags.ll CodeGen/AArch64/cgdata-merge-local.ll @@ -70,16 +67,11 @@ CodeGen/AMDGPU/si-annotate-nested-control-flows.ll CodeGen/AMDGPU/simple-indirect-call-2.ll CodeGen/ARM/loopvectorize_pr33804.ll CodeGen/ARM/sjljeh-swifterror.ll -CodeGen/BPF/adjust-opt-icmp1.ll -CodeGen/BPF/adjust-opt-icmp2.ll -CodeGen/BPF/adjust-opt-icmp5.ll -CodeGen/BPF/adjust-opt-icmp6.ll CodeGen/Hexagon/autohvx/interleave.ll CodeGen/Hexagon/loop-idiom/hexagon-memmove1.ll CodeGen/Hexagon/loop-idiom/hexagon-memmove2.ll CodeGen/Hexagon/loop-idiom/memmove-rt-check.ll CodeGen/NVPTX/lower-ctor-dtor.ll -CodeGen/PowerPC/P10-stack-alignment.ll CodeGen/RISCV/zmmul.ll CodeGen/SPIRV/hlsl-resources/UniqueImplicitBindingNumber.ll CodeGen/WebAssembly/memory-interleave.ll @@ -87,11 +79,8 @@ CodeGen/X86/masked_gather_scatter.ll CodeGen/X86/nocfivalue.ll DebugInfo/AArch64/ir-outliner.ll DebugInfo/assignment-tracking/X86/hotcoldsplit.ll -DebugInfo/debugify-each.ll DebugInfo/Generic/block-asan.ll DebugInfo/KeyInstructions/Generic/loop-unswitch.ll -DebugInfo/KeyInstructions/Generic/simplifycfg-branch-fold.ll -DebugInfo/simplify-cfg-preserve-dbg-values.ll DebugInfo/X86/asan_debug_info.ll Instrumentation/AddressSanitizer/aarch64be.ll Instrumentation/AddressSanitizer/adaptive_global_redzones.ll @@ -532,13 +521,9 @@ Instrumentation/TypeSanitizer/nosanitize.ll Instrumentation/TypeSanitizer/sanitize-no-tbaa.ll Instrumentation/TypeSanitizer/swifterror.ll LTO/X86/diagnostic-handler-remarks-with-hotness.ll -Other/ChangePrinters/DotCfg/print-changed-dot-cfg.ll -Other/opt-bisect-print-ir-path.ll Other/optimization-remarks-auto.ll -Other/printer.ll Other/X86/debugcounter-partiallyinlinelibcalls.ll tools/llvm-objcopy/ELF/auto-remove-add-symtab-shndx.test -tools/not/disable-symbolization.test tools/UpdateTestChecks/update_analyze_test_checks/loop-access-analysis.test tools/UpdateTestChecks/update_analyze_test_checks/loop-distribute.test tools/UpdateTestChecks/update_test_checks/argument_name_reuse.test @@ -563,14 +548,10 @@ tools/UpdateTestChecks/update_test_checks/stable_ir_values_funcs.test tools/UpdateTestChecks/update_test_checks/stable_ir_values.test tools/UpdateTestChecks/update_test_checks/tbaa-semantics-checks.test tools/UpdateTestChecks/update_test_checks/various_ir_values_dbgrecords.test -Transforms/AggressiveInstCombine/inline-strcmp-debugloc.ll Transforms/AggressiveInstCombine/lower-table-based-cttz-basics.ll Transforms/AggressiveInstCombine/lower-table-based-cttz-dereferencing-pointer.ll Transforms/AggressiveInstCombine/lower-table-based-cttz-non-argument-value.ll Transforms/AggressiveInstCombine/lower-table-based-cttz-zero-element.ll -Transforms/AggressiveInstCombine/memchr.ll -Transforms/AggressiveInstCombine/strncmp-1.ll -Transforms/AggressiveInstCombine/strncmp-2.ll Transforms/AggressiveInstCombine/trunc_select_cmp.ll Transforms/AggressiveInstCombine/trunc_select.ll Transforms/AtomicExpand/AArch64/atomicrmw-fp.ll @@ -608,7 +589,6 @@ 
Transforms/AtomicExpand/AMDGPU/expand-cmpxchg-flat-maybe-private.ll Transforms/AtomicExpand/ARM/atomic-expansion-v7.ll Transforms/AtomicExpand/ARM/atomic-expansion-v8.ll Transforms/AtomicExpand/ARM/atomicrmw-fp.ll -Transforms/AtomicExpand/ARM/cmpxchg-weak.ll Transforms/AtomicExpand/Hexagon/atomicrmw-fp.ll Transforms/AtomicExpand/LoongArch/atomicrmw-fp.ll Transforms/AtomicExpand/Mips/atomicrmw-fp.ll @@ -688,7 +668,6 @@ Transforms/CodeGenPrepare/NVPTX/bypass-slow-div-not-exact.ll Transforms/CodeGenPrepare/NVPTX/bypass-slow-div-special-cases.ll Transforms/CodeGenPrepare/X86/vec-shift-inseltpoison.ll Transforms/CodeGenPrepare/X86/vec-shift.ll -Transforms/Coroutines/coro-alloca-outside-frame.ll Transforms/Coroutines/coro-await-suspend-lower-invoke.ll Transforms/Coroutines/coro-await-suspend-lower.ll Transforms/Coroutines/coro-byval-param.ll @@ -829,21 +808,17 @@ Transforms/HotColdSplit/unwind.ll Transforms/HotColdSplit/update-split-loop-metadata.ll Transforms/IndirectBrExpand/basic.ll Transforms/IndVarSimplify/debugloc-rem-subst.ll -Transforms/IndVarSimplify/eliminate-backedge.ll Transforms/IndVarSimplify/eliminate-rem.ll Transforms/IndVarSimplify/invalidate-modified-lcssa-phi.ll Transforms/IndVarSimplify/pr45835.ll Transforms/IndVarSimplify/preserving-debugloc-rem-div.ll -Transforms/Inline/optimization-remarks-hotness-threshold.ll Transforms/InstCombine/2004-09-20-BadLoadCombine.ll Transforms/InstCombine/2005-04-07-UDivSelectCrash.ll -Transforms/InstCombine/2011-02-14-InfLoop.ll Transforms/InstCombine/AArch64/sve-intrinsic-sel.ll Transforms/InstCombine/AArch64/sve-intrinsic-simplify-binop.ll Transforms/InstCombine/AArch64/sve-intrinsic-simplify-shift.ll Transforms/InstCombine/add-mask.ll Transforms/InstCombine/add-shl-mul-umax.ll -Transforms/InstCombine/add-shl-sdiv-to-srem.ll Transforms/InstCombine/AMDGPU/addrspacecast.ll Transforms/InstCombine/and2.ll Transforms/InstCombine/and-fcmp.ll @@ -853,13 +828,10 @@ Transforms/InstCombine/and-or-icmps.ll Transforms/InstCombine/and-or-implied-cond-not.ll Transforms/InstCombine/apint-div1.ll Transforms/InstCombine/apint-div2.ll -Transforms/InstCombine/apint-rem1.ll -Transforms/InstCombine/apint-rem2.ll Transforms/InstCombine/ashr-demand.ll Transforms/InstCombine/atomic.ll Transforms/InstCombine/binop-cast.ll Transforms/InstCombine/binop-select-cast-of-select-cond.ll -Transforms/InstCombine/binop-select.ll Transforms/InstCombine/bit-checks.ll Transforms/InstCombine/bitreverse.ll Transforms/InstCombine/branch.ll @@ -931,30 +903,23 @@ Transforms/InstCombine/not.ll Transforms/InstCombine/or-bitmask.ll Transforms/InstCombine/or-fcmp.ll Transforms/InstCombine/or.ll -Transforms/InstCombine/phi-select-constant.ll Transforms/InstCombine/pow-1.ll Transforms/InstCombine/pow-3.ll Transforms/InstCombine/pow-sqrt.ll Transforms/InstCombine/pr24354.ll -Transforms/InstCombine/pr35515.ll -Transforms/InstCombine/ptrtoint-nullgep.ll Transforms/InstCombine/pull-conditional-binop-through-shift.ll Transforms/InstCombine/rem.ll Transforms/InstCombine/sdiv-canonicalize.ll Transforms/InstCombine/sdiv-guard.ll -Transforms/InstCombine/select-and-cmp.ll Transforms/InstCombine/select-and-or.ll -Transforms/InstCombine/select_arithmetic.ll Transforms/InstCombine/select-bitext.ll Transforms/InstCombine/select-cmp-br.ll Transforms/InstCombine/select-cmp.ll Transforms/InstCombine/select-factorize.ll Transforms/InstCombine/select_frexp.ll -Transforms/InstCombine/select-icmp-and.ll Transforms/InstCombine/select.ll Transforms/InstCombine/select-min-max.ll 
Transforms/InstCombine/select-of-symmetric-selects.ll -Transforms/InstCombine/select-or-cmp.ll Transforms/InstCombine/select-safe-bool-transforms.ll Transforms/InstCombine/select-safe-impliedcond-transforms.ll Transforms/InstCombine/select-safe-transforms.ll @@ -974,11 +939,8 @@ Transforms/InstCombine/strlen-1.ll Transforms/InstCombine/strrchr-3.ll Transforms/InstCombine/sub-ashr-and-to-icmp-select.ll Transforms/InstCombine/sub-ashr-or-to-icmp-select.ll -Transforms/InstCombine/sub.ll Transforms/InstCombine/sub-xor-cmp.ll Transforms/InstCombine/truncating-saturate.ll -Transforms/InstCombine/trunc-inseltpoison.ll -Transforms/InstCombine/trunc.ll Transforms/InstCombine/unordered-fcmp-select.ll Transforms/InstCombine/urem-via-cmp-select.ll Transforms/InstCombine/vec_sext.ll @@ -990,7 +952,6 @@ Transforms/InstCombine/X86/x86-avx512-inseltpoison.ll Transforms/InstCombine/X86/x86-avx512.ll Transforms/InstCombine/xor-and-or.ll Transforms/InstCombine/xor-ashr.ll -Transforms/InstCombine/xor.ll Transforms/InstCombine/zext-bool-add-sub.ll Transforms/InstCombine/zext-or-icmp.ll Transforms/IRCE/add-metadata-pre-post-loops.ll @@ -1126,12 +1087,8 @@ Transforms/LoopDistribute/pointer-phi-in-loop.ll Transforms/LoopDistribute/scev-inserted-runtime-check.ll Transforms/LoopDistribute/symbolic-stride.ll Transforms/LoopFlatten/loop-flatten-version.ll -Transforms/LoopFlatten/widen-iv2.ll -Transforms/LoopFlatten/widen-iv.ll Transforms/LoopIdiom/AArch64/byte-compare-index.ll Transforms/LoopIdiom/AArch64/find-first-byte.ll -Transforms/LoopIdiom/memset-runtime-32bit.ll -Transforms/LoopIdiom/memset-runtime-64bit.ll Transforms/LoopIdiom/RISCV/byte-compare-index.ll Transforms/LoopIdiom/X86/arithmetic-right-shift-until-zero.ll Transforms/LoopIdiom/X86/left-shift-until-bittest.ll @@ -1155,10 +1112,6 @@ Transforms/LoopSimplifyCFG/live_block_marking.ll Transforms/LoopSimplifyCFG/mssa_update.ll Transforms/LoopSimplifyCFG/pr117537.ll Transforms/LoopSimplifyCFG/update_parents.ll -Transforms/LoopSimplify/pr26682.ll -Transforms/LoopSimplify/preserve-llvm-loop-metadata.ll -Transforms/LoopUnroll/AArch64/apple-unrolling-multi-exit.ll -Transforms/LoopUnroll/AArch64/unrolling-multi-exit.ll Transforms/LoopUnroll/peel-last-iteration-expansion-cost.ll Transforms/LoopUnroll/peel-last-iteration-with-guards.ll Transforms/LoopUnroll/peel-last-iteration-with-variable-trip-count.ll @@ -1301,7 +1254,6 @@ Transforms/PGOProfile/chr-lifetimes.ll Transforms/PGOProfile/chr.ll Transforms/PGOProfile/chr-poison.ll Transforms/PGOProfile/comdat.ll -Transforms/PGOProfile/cspgo_profile_summary.ll Transforms/PGOProfile/memop_profile_funclet_wasm.ll Transforms/PGOProfile/profcheck-select.ll Transforms/PGOProfile/prof-verify.ll @@ -1310,25 +1262,18 @@ Transforms/PGOProfile/X86/macho.ll Transforms/PhaseOrdering/AArch64/constraint-elimination-placement.ll Transforms/PhaseOrdering/AArch64/globals-aa-required-for-vectorization.ll Transforms/PhaseOrdering/AArch64/hoisting-sinking-required-for-vectorization.ll -Transforms/PhaseOrdering/AArch64/loopflatten.ll -Transforms/PhaseOrdering/AArch64/matrix-extract-insert.ll Transforms/PhaseOrdering/AArch64/predicated-reduction.ll Transforms/PhaseOrdering/AArch64/quant_4x4.ll Transforms/PhaseOrdering/ARM/arm_mean_q7.ll Transforms/PhaseOrdering/lower-table-based-cttz.ll -Transforms/PhaseOrdering/pr44461-br-to-switch-rotate.ll -Transforms/PhaseOrdering/simplifycfg-switch-lowering-vs-correlatedpropagation.ll Transforms/PhaseOrdering/vector-select.ll Transforms/PhaseOrdering/X86/blendv-select.ll 
Transforms/PhaseOrdering/X86/merge-functions2.ll Transforms/PhaseOrdering/X86/merge-functions3.ll Transforms/PhaseOrdering/X86/merge-functions.ll -Transforms/PhaseOrdering/X86/pr48844-br-to-switch-vectorization.ll Transforms/PhaseOrdering/X86/pr52078.ll Transforms/PhaseOrdering/X86/pr67803.ll Transforms/PhaseOrdering/X86/preserve-access-group.ll -Transforms/PhaseOrdering/X86/simplifycfg-late.ll -Transforms/PhaseOrdering/X86/vdiv.ll Transforms/PhaseOrdering/X86/vector-reductions.ll Transforms/PreISelIntrinsicLowering/AArch64/expand-exp.ll Transforms/PreISelIntrinsicLowering/AArch64/expand-log.ll @@ -1338,13 +1283,8 @@ Transforms/PreISelIntrinsicLowering/RISCV/memset-pattern.ll Transforms/PreISelIntrinsicLowering/X86/memcpy-inline-non-constant-len.ll Transforms/PreISelIntrinsicLowering/X86/memset-inline-non-constant-len.ll Transforms/PreISelIntrinsicLowering/X86/memset-pattern.ll -Transforms/Reassociate/basictest.ll -Transforms/SampleProfile/pseudo-probe-dangle.ll -Transforms/SampleProfile/pseudo-probe-emit.ll -Transforms/SampleProfile/pseudo-probe-profile.ll Transforms/SampleProfile/pseudo-probe-profile-mismatch-thinlto.ll Transforms/SampleProfile/remarks-hotness.ll -Transforms/SampleProfile/remarks.ll Transforms/SandboxVectorizer/special_opcodes.ll Transforms/ScalarizeMaskedMemIntrin/AArch64/expand-masked-load.ll Transforms/ScalarizeMaskedMemIntrin/AArch64/expand-masked-store.ll @@ -1387,63 +1327,6 @@ Transforms/SimpleLoopUnswitch/pr60736.ll Transforms/SimpleLoopUnswitch/trivial-unswitch-freeze-individual-conditions.ll Transforms/SimpleLoopUnswitch/trivial-unswitch.ll Transforms/SimpleLoopUnswitch/trivial-unswitch-logical-and-or.ll -Transforms/SimplifyCFG/2006-12-08-Ptr-ICmp-Branch.ll -Transforms/SimplifyCFG/2008-10-03-SpeculativelyExecuteBeforePHI.ll -Transforms/SimplifyCFG/annotations.ll -Transforms/SimplifyCFG/ARM/branch-fold-threshold.ll -Transforms/SimplifyCFG/ARM/phi-eliminate.ll -Transforms/SimplifyCFG/ARM/select-trunc-i64.ll -Transforms/SimplifyCFG/ARM/switch-to-lookup-table.ll -Transforms/SimplifyCFG/basictest.ll -Transforms/SimplifyCFG/branch-cond-dont-merge.ll -Transforms/SimplifyCFG/branch-fold-dbg.ll -Transforms/SimplifyCFG/branch-fold.ll -Transforms/SimplifyCFG/branch-fold-multiple.ll -Transforms/SimplifyCFG/branch-fold-threshold.ll -Transforms/SimplifyCFG/branch-nested.ll -Transforms/SimplifyCFG/clamp.ll -Transforms/SimplifyCFG/common-code-hoisting.ll -Transforms/SimplifyCFG/common-dest-folding.ll -Transforms/SimplifyCFG/extract-cost.ll -Transforms/SimplifyCFG/fold-branch-to-common-dest-free-cost.ll -Transforms/SimplifyCFG/fold-branch-to-common-dest.ll -Transforms/SimplifyCFG/fold-branch-to-common-dest-two-preds-cost.ll -Transforms/SimplifyCFG/fold-debug-location.ll -Transforms/SimplifyCFG/Hexagon/switch-to-lookup-table.ll -Transforms/SimplifyCFG/hoist-dbgvalue.ll -Transforms/SimplifyCFG/indirectbr.ll -Transforms/SimplifyCFG/merge-cond-stores-2.ll -Transforms/SimplifyCFG/merge-cond-stores.ll -Transforms/SimplifyCFG/multiple-phis.ll -Transforms/SimplifyCFG/PhiBlockMerge.ll -Transforms/SimplifyCFG/pr48641.ll -Transforms/SimplifyCFG/preserve-store-alignment.ll -Transforms/SimplifyCFG/rangereduce.ll -Transforms/SimplifyCFG/RISCV/select-trunc-i64.ll -Transforms/SimplifyCFG/RISCV/switch_to_lookup_table-rv32.ll -Transforms/SimplifyCFG/RISCV/switch_to_lookup_table-rv64.ll -Transforms/SimplifyCFG/safe-abs.ll -Transforms/SimplifyCFG/SimplifyEqualityComparisonWithOnlyPredecessor-domtree-preservation-edgecase.ll -Transforms/SimplifyCFG/speculate-blocks.ll 
-Transforms/SimplifyCFG/speculate-derefable-load.ll -Transforms/SimplifyCFG/switch_create-custom-dl.ll -Transforms/SimplifyCFG/switch_create.ll -Transforms/SimplifyCFG/switch-dup-bbs.ll -Transforms/SimplifyCFG/switch_mask.ll -Transforms/SimplifyCFG/switch_msan.ll -Transforms/SimplifyCFG/switch-on-const-select.ll -Transforms/SimplifyCFG/switchToSelect-domtree-preservation-edgecase.ll -Transforms/SimplifyCFG/switch-to-select-multiple-edge-per-block-phi.ll -Transforms/SimplifyCFG/switch-to-select-two-case.ll -Transforms/SimplifyCFG/switch-transformations-no-lut.ll -Transforms/SimplifyCFG/wc-widen-block.ll -Transforms/SimplifyCFG/X86/disable-lookup-table.ll -Transforms/SimplifyCFG/X86/hoist-loads-stores-with-cf.ll -Transforms/SimplifyCFG/X86/SpeculativeExec.ll -Transforms/SimplifyCFG/X86/switch-to-lookup-globals.ll -Transforms/SimplifyCFG/X86/switch-to-lookup-large-types.ll -Transforms/SimplifyCFG/X86/switch_to_lookup_table_big.ll -Transforms/SimplifyCFG/X86/switch_to_lookup_table.ll Transforms/SLPVectorizer/AArch64/gather-root.ll Transforms/SLPVectorizer/AArch64/horizontal.ll Transforms/SLPVectorizer/AArch64/loadi8.ll @@ -1471,7 +1354,6 @@ Transforms/SLPVectorizer/reduction-gather-non-scheduled-extracts.ll Transforms/SLPVectorizer/reorder-node.ll Transforms/SLPVectorizer/reused-buildvector-matching-vectorized-node.ll Transforms/SLPVectorizer/revec.ll -Transforms/SLPVectorizer/RISCV/long-gep-chains.ll Transforms/SLPVectorizer/RISCV/remarks_cmp_sel_min_max.ll Transforms/SLPVectorizer/RISCV/remarks-insert-into-small-vector.ll Transforms/SLPVectorizer/RISCV/reordered-interleaved-loads.ll @@ -1556,4 +1438,3 @@ Transforms/Util/libcalls-opt-remarks.ll Transforms/Util/lowerswitch.ll Transforms/VectorCombine/AArch64/shuffletoidentity.ll Transforms/VectorCombine/X86/shuffle-of-selects.ll -Transforms/WholeProgramDevirt/unique-retval-same-vtable.ll |