Overview
Request 5292 (accepted)
- Update to version 26.1.1:
* win-dshow: Fix dshowcapture not linking audio of certain devices
* linux-jack: fix deadlock when closing the client
* linux-jack: mark ports as JackPortIsTerminal
* linux-jack: fix timestamp calculation
* obs-browser: Initialize CEF early to fix macOS crash
* libobs: Update version to 26.1.1
* rtmp-services: Add Loola.tv service
* rtmp-services: Fix json formatting
* libobs: Avoid unnecessary mallocs in audio processing
* UI: Fix padding on Acri context bar buttons
* image-source: Fix slideshow transition bug when randomized
* docs/sphinx: Add missing obs_frontend_open_projector
* libobs: Update to SIMDe 0.7.1
* libobs: Set lock state when duplicating scene item
* libobs: Add definitions in ARCH_SIMD_DEFINES
* cmake: Add ARCH_SIMD_DEFINES variable
* coreaudio-encoder: Fix cmake for mingw
* Revert "UI: Only apply new scaling behavior on newer installs"
* UI: Only apply new scaling behavior on newer installs
* UI: Support fractional scaling for Canvas/Base size
* mac-virtualcam: Remove unnecessary logging
* mac-virtualcam: Mark parameters as unused
* image-source: Add .webp to "All formats" option
* image-source: Add webp to file filter
* CI: Remove jack, speex and fdk-aac from default builds for macOS
* libobs, obs-ffmpeg: Use correct value for EINVAL error check
* UI/updater: Increase number of download workers
* UI/updater: Enable HTTP2 and TLS 1.3
* UI: Fix name of kab-KAB locale
- Created by boombatower over 4 years ago
- In state accepted
Package maintainers: boombatower, darix, and frispete
obs-studio.changes
Changed
-------------------------------------------------------------------
+Wed Jan 06 18:27:38 UTC 2021 - jimmy@boombatower.com
+
+- Update to version 26.1.1:
+ * win-dshow: Fix dshowcapture not linking audio of certain devices
+ * linux-jack: fix deadlock when closing the client
+ * linux-jack: mark ports as JackPortIsTerminal
+ * linux-jack: fix timestamp calculation
+ * obs-browser: Initialize CEF early to fix macOS crash
+ * libobs: Update version to 26.1.1
+ * rtmp-services: Add Loola.tv service
+ * rtmp-services: Fix json formatting
+ * libobs: Avoid unnecessary mallocs in audio processing
+ * UI: Fix padding on Acri context bar buttons
+ * image-source: Fix slideshow transition bug when randomized
+ * docs/sphinx: Add missing obs_frontend_open_projector
+ * libobs: Update to SIMDe 0.7.1
+ * libobs: Set lock state when duplicating scene item
+ * libobs: Add definitions in ARCH_SIMD_DEFINES
+ * cmake: Add ARCH_SIMD_DEFINES variable
+ * coreaudio-encoder: Fix cmake for mingw
+ * Revert "UI: Only apply new scaling behavior on newer installs"
+ * UI: Only apply new scaling behavior on newer installs
+ * UI: Support fractional scaling for Canvas/Base size
+ * mac-virtualcam: Remove unnecessary logging
+ * mac-virtualcam: Mark parameters as unused
+ * image-source: Add .webp to "All formats" option
+ * image-source: Add webp to file filter
+ * CI: Remove jack, speex and fdk-aac from default builds for macOS
+ * libobs, obs-ffmpeg: Use correct value for EINVAL error check
+ * UI/updater: Increase number of download workers
+ * UI/updater: Enable HTTP2 and TLS 1.3
+ * UI: Fix name of kab-KAB locale
+ * decklink: Fix automatic pixel format detection
+ * CI: Fix macOS 10.13 crashes due to unsupported library symbols
+ * UI/installer: Add additional VS2019 DLL check
+ * mac-virtualcam: Fix file mode
+ * CI: Run make with -j$(nproc)
+ * CI: Remove obsolete and unused files
+ * libobs: Add texture sharing support for macOS/OpenGL
+ * CI: Add necessary changes for CEF 4183
+ * UI/updater: Move in-use files away before writing
+ * UI/updater: Always clean up temporary files
+ * UI: Remove Smashcast from AutoConfig
+ * rtmp-services: Remove Smashcast
+
+-------------------------------------------------------------------
Tue Dec 15 23:25:38 UTC 2020 - Jimmy Berry <jimmy@boombatower.com>
- Add modinfo-use-full-path.patch for new v4l2lookback support.
obs-studio.spec
Changed
Name: obs-studio
-Version: 26.1.0
+Version: 26.1.1
Release: 0
Summary: A recording/broadcasting program
Group: Productivity/Multimedia/Video/Editors and Convertors
_service
Changed
<services>
<service name="tar_scm" mode="disabled">
<param name="versionformat">@PARENT_TAG@</param>
- <param name="revision">refs/tags/26.1.0</param>
+ <param name="revision">refs/tags/26.1.1</param>
<param name="url">git://github.com/jp9000/obs-studio.git</param>
<param name="scm">git</param>
<param name="changesgenerate">enable</param>
_servicedata
Changed
<servicedata>
<service name="tar_scm">
<param name="url">git://github.com/jp9000/obs-studio.git</param>
- <param name="changesrevision">38ad3ba18fc27846e122bd56f589ccb34c4578e2</param>
+ <param name="changesrevision">dffa8221124106bc2a4c92e5f5d0fa21128a61f6</param>
</service>
</servicedata>
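
For context, the _service revision bump and the regenerated _servicedata/changelog above are the kind of output the tar_scm service (with changesgenerate enabled) produces when re-run. A rough sketch of the usual osc workflow that yields such a diff follows; the project path and commit message are illustrative assumptions, not taken from this request:

    # check out the package from its devel project (project name assumed)
    osc checkout <devel-project> obs-studio && cd <devel-project>/obs-studio
    # edit _service: point <param name="revision"> at refs/tags/26.1.1
    osc service runall        # re-runs tar_scm, refreshing _servicedata and obs-studio.changes
    osc vc                    # review the generated changelog entry
    osc commit -m "Update to version 26.1.1"
    osc submitrequest         # open the submit request for review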
obs-studio-26.1.0.tar.xz/CI/before-deploy-osx.sh
Deleted
-hr() {
- echo "───────────────────────────────────────────────────"
- echo $1
- echo "───────────────────────────────────────────────────"
-}
-
-# Exit if something fails
-set -e
-
-# Generate file name variables
-export GIT_TAG=$(git describe --abbrev=0)
-export GIT_HASH=$(git rev-parse --short HEAD)
-export FILE_DATE=$(date +%Y-%m-%d.%H-%M-%S)
-export FILENAME=$FILE_DATE-$GIT_HASH-$TRAVIS_BRANCH-osx.dmg
-
-echo "git tag: $GIT_TAG"
-
-cd ./build
-
-# Move obslua
-hr "Moving OBS LUA"
-mv ./rundir/RelWithDebInfo/data/obs-scripting/obslua.so ./rundir/RelWithDebInfo/bin/
-
-# Move obspython
-hr "Moving OBS Python"
-# mv ./rundir/RelWithDebInfo/data/obs-scripting/_obspython.so ./rundir/RelWithDebInfo/bin/
-# mv ./rundir/RelWithDebInfo/data/obs-scripting/obspython.py ./rundir/RelWithDebInfo/bin/
-
-# Package everything into a nice .app
-hr "Packaging .app"
-STABLE=false
-if [ -n "${TRAVIS_TAG}" ]; then
- STABLE=true
-fi
-
-#sudo python ../CI/install/osx/build_app.py --public-key ../CI/install/osx/OBSPublicDSAKey.pem --sparkle-framework ../../sparkle/Sparkle.framework --stable=$STABLE
-
-../CI/install/osx/packageApp.sh
-
-# fix obs outputs plugin it doesn't play nicely with dylibBundler at the moment
-if [ -f /usr/local/opt/mbedtls/lib/libmbedtls.12.dylib ]; then
- cp /usr/local/opt/mbedtls/lib/libmbedtls.12.dylib ./OBS.app/Contents/Frameworks/
- cp /usr/local/opt/mbedtls/lib/libmbedcrypto.3.dylib ./OBS.app/Contents/Frameworks/
- cp /usr/local/opt/mbedtls/lib/libmbedx509.0.dylib ./OBS.app/Contents/Frameworks/
- chmod +w ./OBS.app/Contents/Frameworks/*.dylib
- install_name_tool -id @executable_path/../Frameworks/libmbedtls.12.dylib ./OBS.app/Contents/Frameworks/libmbedtls.12.dylib
- install_name_tool -id @executable_path/../Frameworks/libmbedcrypto.3.dylib ./OBS.app/Contents/Frameworks/libmbedcrypto.3.dylib
- install_name_tool -id @executable_path/../Frameworks/libmbedx509.0.dylib ./OBS.app/Contents/Frameworks/libmbedx509.0.dylib
- install_name_tool -change libmbedtls.12.dylib @executable_path/../Frameworks/libmbedtls.12.dylib ./OBS.app/Contents/Plugins/obs-outputs.so
- install_name_tool -change libmbedcrypto.3.dylib @executable_path/../Frameworks/libmbedcrypto.3.dylib ./OBS.app/Contents/Plugins/obs-outputs.so
- install_name_tool -change libmbedx509.0.dylib @executable_path/../Frameworks/libmbedx509.0.dylib ./OBS.app/Contents/Plugins/obs-outputs.so
-elif [ -f /usr/local/opt/mbedtls/lib/libmbedtls.13.dylib ]; then
- cp /usr/local/opt/mbedtls/lib/libmbedtls.13.dylib ./OBS.app/Contents/Frameworks/
- cp /usr/local/opt/mbedtls/lib/libmbedcrypto.5.dylib ./OBS.app/Contents/Frameworks/
- cp /usr/local/opt/mbedtls/lib/libmbedx509.1.dylib ./OBS.app/Contents/Frameworks/
- chmod +w ./OBS.app/Contents/Frameworks/*.dylib
- install_name_tool -id @executable_path/../Frameworks/libmbedtls.13.dylib ./OBS.app/Contents/Frameworks/libmbedtls.13.dylib
- install_name_tool -id @executable_path/../Frameworks/libmbedcrypto.5.dylib ./OBS.app/Contents/Frameworks/libmbedcrypto.5.dylib
- install_name_tool -id @executable_path/../Frameworks/libmbedx509.1.dylib ./OBS.app/Contents/Frameworks/libmbedx509.1.dylib
- install_name_tool -change libmbedtls.13.dylib @executable_path/../Frameworks/libmbedtls.13.dylib ./OBS.app/Contents/Plugins/obs-outputs.so
- install_name_tool -change libmbedcrypto.5.dylib @executable_path/../Frameworks/libmbedcrypto.5.dylib ./OBS.app/Contents/Plugins/obs-outputs.so
- install_name_tool -change libmbedx509.1.dylib @executable_path/../Frameworks/libmbedx509.1.dylib ./OBS.app/Contents/Plugins/obs-outputs.so
-fi
-
-install_name_tool -change /usr/local/opt/curl/lib/libcurl.4.dylib @executable_path/../Frameworks/libcurl.4.dylib ./OBS.app/Contents/Plugins/obs-outputs.so
-install_name_tool -change @rpath/libobs.0.dylib @executable_path/../Frameworks/libobs.0.dylib ./OBS.app/Contents/Plugins/obs-outputs.so
-install_name_tool -change /tmp/obsdeps/bin/libjansson.4.dylib @executable_path/../Frameworks/libjansson.4.dylib ./OBS.app/Contents/Plugins/obs-outputs.so
-
-# copy sparkle into the app
-hr "Copying Sparkle.framework"
-cp -R ../../sparkle/Sparkle.framework ./OBS.app/Contents/Frameworks/
-install_name_tool -change @rpath/Sparkle.framework/Versions/A/Sparkle @executable_path/../Frameworks/Sparkle.framework/Versions/A/Sparkle ./OBS.app/Contents/MacOS/obs
-
-# Copy Chromium embedded framework to app Frameworks directory
-hr "Copying Chromium Embedded Framework.framework"
-sudo mkdir -p OBS.app/Contents/Frameworks
-sudo cp -R ../../cef_binary_${CEF_BUILD_VERSION}_macosx64/Release/Chromium\ Embedded\ Framework.framework OBS.app/Contents/Frameworks/
-
-install_name_tool -change /usr/local/opt/qt/lib/QtGui.framework/Versions/5/QtGui @executable_path/../Frameworks/QtGui.framework/Versions/5/QtGui ./OBS.app/Contents/Plugins/obs-browser.so
-install_name_tool -change /usr/local/opt/qt/lib/QtCore.framework/Versions/5/QtCore @executable_path/../Frameworks/QtCore.framework/Versions/5/QtCore ./OBS.app/Contents/Plugins/obs-browser.so
-install_name_tool -change /usr/local/opt/qt/lib/QtWidgets.framework/Versions/5/QtWidgets @executable_path/../Frameworks/QtWidgets.framework/Versions/5/QtWidgets ./OBS.app/Contents/Plugins/obs-browser.so
-
-cp ../CI/install/osx/OBSPublicDSAKey.pem OBS.app/Contents/Resources
-
-# edit plist
-plutil -insert CFBundleVersion -string $GIT_TAG ./OBS.app/Contents/Info.plist
-plutil -insert CFBundleShortVersionString -string $GIT_TAG ./OBS.app/Contents/Info.plist
-plutil -insert OBSFeedsURL -string https://obsproject.com/osx_update/feeds.xml ./OBS.app/Contents/Info.plist
-plutil -insert SUFeedURL -string https://obsproject.com/osx_update/stable/updates.xml ./OBS.app/Contents/Info.plist
-plutil -insert SUPublicDSAKeyFile -string OBSPublicDSAKey.pem ./OBS.app/Contents/Info.plist
-
-dmgbuild -s ../CI/install/osx/settings.json "OBS" obs.dmg
-
-if [ -v "$TRAVIS" ]; then
- # Signing stuff
- hr "Decrypting Cert"
- openssl aes-256-cbc -K $encrypted_dd3c7f5e9db9_key -iv $encrypted_dd3c7f5e9db9_iv -in ../CI/osxcert/Certificates.p12.enc -out Certificates.p12 -d
- hr "Creating Keychain"
- security create-keychain -p mysecretpassword build.keychain
- security default-keychain -s build.keychain
- security unlock-keychain -p mysecretpassword build.keychain
- security set-keychain-settings -t 3600 -u build.keychain
- hr "Importing certs into keychain"
- security import ./Certificates.p12 -k build.keychain -T /usr/bin/productsign -P ""
- # macOS 10.12+
- security set-key-partition-list -S apple-tool:,apple: -s -k mysecretpassword build.keychain
-fi
-
-cp ./OBS.dmg ./$FILENAME
-
-# Move to the folder that travis uses to upload artifacts from
-hr "Moving package to nightly folder for distribution"
-mkdir ../nightly
-sudo mv ./$FILENAME ../nightly
obs-studio-26.1.0.tar.xz/CI/before-script-osx.sh
Deleted
-# Make sure ccache is found
-export PATH=/usr/local/opt/ccache/libexec:$PATH
-
-git fetch --tags
-
-mkdir build
-cd build
-cmake -DENABLE_SPARKLE_UPDATER=ON \
--DCMAKE_OSX_DEPLOYMENT_TARGET=10.13 \
--DDISABLE_PYTHON=ON \
--DQTDIR=/usr/local/Cellar/qt/5.14.1 \
--DDepsPath=/tmp/obsdeps \
--DVLCPath=$PWD/../../vlc-3.0.8 \
--DBUILD_BROWSER=ON \
--DBROWSER_DEPLOY=ON \
--DWITH_RTMPS=ON \
--DCEF_ROOT_DIR=$PWD/../../cef_binary_${CEF_BUILD_VERSION}_macosx64 ..
obs-studio-26.1.0.tar.xz/CI/install
Deleted
-(directory)
obs-studio-26.1.0.tar.xz/CI/install-dependencies-osx.sh
Deleted
-hr() {
- echo "───────────────────────────────────────────────────"
- echo $1
- echo "───────────────────────────────────────────────────"
-}
-
-# Exit if something fails
-set -e
-
-# Echo all commands before executing
-set -v
-
-if [[ $TRAVIS ]]; then
- git fetch --unshallow
-fi
-
-git fetch origin --tags
-
-# Leave obs-studio folder
-cd ../
-
-# Install Packages app so we can build a package later
-# http://s.sudre.free.fr/Software/Packages/about.html
-hr "Downloading Packages app"
-wget --quiet --retry-connrefused --waitretry=1 https://s3-us-west-2.amazonaws.com/obs-nightly/Packages.pkg
-sudo installer -pkg ./Packages.pkg -target /
-
-brew update
-
-#Base OBS Deps and ccache
-for DEPENDENCY in jack speexdsp ccache mbedtls freetype fdk-aac cmocka; do
- if [ ! -d "$(brew --cellar)/${DEPENDENCY}" ]; then
- brew install $DEPENDENCY
- else
- brew upgrade $DEPENDENCY
- fi
-done
-
-brew install https://gist.githubusercontent.com/DDRBoxman/9c7a2b08933166f4b61ed9a44b242609/raw/ef4de6c587c6bd7f50210eccd5bd51ff08e6de13/qt.rb
-if [ -d "$(brew --cellar)/swig" ]; then
- brew unlink swig
-fi
-brew install https://gist.githubusercontent.com/DDRBoxman/4cada55c51803a2f963fa40ce55c9d3e/raw/572c67e908bfbc1bcb8c476ea77ea3935133f5b5/swig.rb
-
-pip install dmgbuild
-
-export PATH=/usr/local/opt/ccache/libexec:$PATH
-ccache -s || echo "CCache is not available."
-
-# Fetch and untar prebuilt OBS deps that are compatible with older versions of OSX
-hr "Downloading OBS deps"
-wget --quiet --retry-connrefused --waitretry=1 https://github.com/obsproject/obs-deps/releases/download/2020-04-24/osx-deps-2020-04-24.tar.gz
-tar -xf ./osx-deps-2020-04-24.tar.gz -C /tmp
-
-# Fetch vlc codebase
-hr "Downloading VLC repo"
-wget --quiet --retry-connrefused --waitretry=1 https://downloads.videolan.org/vlc/3.0.8/vlc-3.0.8.tar.xz
-tar -xf vlc-3.0.8.tar.xz
-
-# Get sparkle
-hr "Downloading Sparkle framework"
-wget --quiet --retry-connrefused --waitretry=1 -O sparkle.tar.bz2 https://github.com/sparkle-project/Sparkle/releases/download/1.23.0/Sparkle-1.23.0.tar.bz2
-mkdir ./sparkle
-tar -xf ./sparkle.tar.bz2 -C ./sparkle
-sudo cp -R ./sparkle/Sparkle.framework /Library/Frameworks/Sparkle.framework
-
-# CEF Stuff
-hr "Downloading CEF"
-wget --quiet --retry-connrefused --waitretry=1 https://obs-nightly.s3-us-west-2.amazonaws.com/cef_binary_${CEF_BUILD_VERSION}_macosx64.tar.bz2
-tar -xf ./cef_binary_${CEF_BUILD_VERSION}_macosx64.tar.bz2
-cd ./cef_binary_${CEF_BUILD_VERSION}_macosx64
-# remove a broken test
-sed -i '.orig' '/add_subdirectory(tests\/ceftests)/d' ./CMakeLists.txt
-# target 10.11
-sed -i '.orig' s/\"10.9\"/\"10.11\"/ ./cmake/cef_variables.cmake
-mkdir build
-cd ./build
-cmake -DCMAKE_CXX_FLAGS="-std=c++11 -stdlib=libc++" -DCMAKE_EXE_LINKER_FLAGS="-std=c++11 -stdlib=libc++" -DCMAKE_OSX_DEPLOYMENT_TARGET=10.11 ..
-make -j4
-mkdir libcef_dll
-cd ../../
obs-studio-26.1.0.tar.xz/CI/install/osx
Deleted
-(directory)
obs-studio-26.1.0.tar.xz/CI/install/osx/CMakeLists.pkgproj
Deleted
-<?xml version="1.0" encoding="UTF-8"?>
-<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
-<plist version="1.0">
-<dict>
- <key>PACKAGES</key>
- <array>
- <dict>
- <key>PACKAGE_FILES</key>
- <dict>
- <key>DEFAULT_INSTALL_LOCATION</key>
- <string>/</string>
- <key>HIERARCHY</key>
- <dict>
- <key>CHILDREN</key>
- <array>
- <dict>
- <key>CHILDREN</key>
- <array>
- <dict>
- <key>CHILDREN</key>
- <array/>
- <key>GID</key>
- <integer>80</integer>
- <key>PATH</key>
- <string>../../../build/OBS.app</string>
- <key>PATH_TYPE</key>
- <integer>3</integer>
- <key>PERMISSIONS</key>
- <integer>493</integer>
- <key>TYPE</key>
- <integer>3</integer>
- <key>UID</key>
- <integer>0</integer>
- </dict>
- <dict>
- <key>CHILDREN</key>
- <array/>
- <key>GID</key>
- <integer>80</integer>
- <key>PATH</key>
- <string>Utilities</string>
- <key>PATH_TYPE</key>
- <integer>0</integer>
- <key>PERMISSIONS</key>
- <integer>493</integer>
- <key>TYPE</key>
- <integer>1</integer>
- <key>UID</key>
- <integer>0</integer>
- </dict>
- </array>
- <key>GID</key>
- <integer>80</integer>
- <key>PATH</key>
- <string>Applications</string>
- <key>PATH_TYPE</key>
- <integer>0</integer>
- <key>PERMISSIONS</key>
- <integer>509</integer>
- <key>TYPE</key>
- <integer>1</integer>
- <key>UID</key>
- <integer>0</integer>
- </dict>
- <dict>
- <key>CHILDREN</key>
- <array>
- <dict>
- <key>CHILDREN</key>
- <array>
- <dict>
- <key>CHILDREN</key>
- <array>
- <dict>
- <key>CHILDREN</key>
- <array>
- <dict>
- <key>CHILDREN</key>
- <array>
- <dict>
- <key>CHILDREN</key>
- <array>
- <dict>
- <key>CHILDREN</key>
- <array/>
- <key>GID</key>
- <integer>80</integer>
- <key>PATH</key>
- <string>../../../build/plugins/obs-browser/obs-browser-page</string>
- <key>PATH_TYPE</key>
- <integer>3</integer>
- <key>PERMISSIONS</key>
- <integer>493</integer>
- <key>TYPE</key>
- <integer>3</integer>
- <key>UID</key>
- <integer>0</integer>
- </dict>
- <dict>
- <key>CHILDREN</key>
- <array/>
- <key>GID</key>
- <integer>80</integer>
- <key>PATH</key>
- <string>../../../build/plugins/obs-browser/obs-browser.so</string>
- <key>PATH_TYPE</key>
- <integer>3</integer>
- <key>PERMISSIONS</key>
- <integer>493</integer>
- <key>TYPE</key>
- <integer>3</integer>
- <key>UID</key>
- <integer>0</integer>
- </dict>
- </array>
- <key>GID</key>
- <integer>80</integer>
- <key>PATH</key>
- <string>bin</string>
- <key>PATH_TYPE</key>
- <integer>0</integer>
- <key>PERMISSIONS</key>
- <integer>493</integer>
- <key>TYPE</key>
- <integer>2</integer>
- <key>UID</key>
- <integer>0</integer>
- </dict>
- </array>
- <key>GID</key>
- <integer>80</integer>
- <key>PATH</key>
- <string>obs-browser</string>
- <key>PATH_TYPE</key>
- <integer>0</integer>
- <key>PERMISSIONS</key>
- <integer>493</integer>
- <key>TYPE</key>
- <integer>2</integer>
- <key>UID</key>
- <integer>0</integer>
- </dict>
- </array>
- <key>GID</key>
- <integer>80</integer>
- <key>PATH</key>
- <string>plugins</string>
- <key>PATH_TYPE</key>
- <integer>0</integer>
- <key>PERMISSIONS</key>
- <integer>493</integer>
- <key>TYPE</key>
- <integer>2</integer>
- <key>UID</key>
- <integer>0</integer>
- </dict>
- </array>
- <key>GID</key>
- <integer>80</integer>
- <key>PATH</key>
- <string>obs-studio</string>
- <key>PATH_TYPE</key>
- <integer>0</integer>
- <key>PERMISSIONS</key>
- <integer>493</integer>
- <key>TYPE</key>
- <integer>2</integer>
- <key>UID</key>
- <integer>0</integer>
- </dict>
- </array>
- <key>GID</key>
- <integer>80</integer>
- <key>PATH</key>
- <string>Application Support</string>
- <key>PATH_TYPE</key>
- <integer>0</integer>
- <key>PERMISSIONS</key>
- <integer>493</integer>
- <key>TYPE</key>
- <integer>1</integer>
- <key>UID</key>
- <integer>0</integer>
- </dict>
- <dict>
- <key>CHILDREN</key>
- <array/>
- <key>GID</key>
- <integer>0</integer>
- <key>PATH</key>
- <string>Documentation</string>
- <key>PATH_TYPE</key>
- <integer>0</integer>
- <key>PERMISSIONS</key>
- <integer>493</integer>
- <key>TYPE</key>
- <integer>1</integer>
- <key>UID</key>
- <integer>0</integer>
- </dict>
- <dict>
- <key>CHILDREN</key>
- <array/>
- <key>GID</key>
- <integer>0</integer>
- <key>PATH</key>
- <string>Filesystems</string>
- <key>PATH_TYPE</key>
- <integer>0</integer>
- <key>PERMISSIONS</key>
- <integer>493</integer>
- <key>TYPE</key>
- <integer>1</integer>
- <key>UID</key>
- <integer>0</integer>
- </dict>
- <dict>
- <key>CHILDREN</key>
- <array/>
- <key>GID</key>
- <integer>0</integer>
- <key>PATH</key>
- <string>Frameworks</string>
- <key>PATH_TYPE</key>
- <integer>0</integer>
- <key>PERMISSIONS</key>
- <integer>493</integer>
- <key>TYPE</key>
- <integer>1</integer>
- <key>UID</key>
- <integer>0</integer>
- </dict>
- <dict>
- <key>CHILDREN</key>
- <array/>
- <key>GID</key>
- <integer>0</integer>
- <key>PATH</key>
- <string>Input Methods</string>
- <key>PATH_TYPE</key>
- <integer>0</integer>
- <key>PERMISSIONS</key>
- <integer>493</integer>
- <key>TYPE</key>
- <integer>1</integer>
- <key>UID</key>
- <integer>0</integer>
- </dict>
- <dict>
- <key>CHILDREN</key>
- <array/>
- <key>GID</key>
- <integer>0</integer>
- <key>PATH</key>
- <string>Internet Plug-Ins</string>
- <key>PATH_TYPE</key>
- <integer>0</integer>
- <key>PERMISSIONS</key>
- <integer>493</integer>
- <key>TYPE</key>
- <integer>1</integer>
- <key>UID</key>
- <integer>0</integer>
- </dict>
- <dict>
- <key>CHILDREN</key>
- <array/>
- <key>GID</key>
- <integer>0</integer>
- <key>PATH</key>
- <string>LaunchAgents</string>
- <key>PATH_TYPE</key>
- <integer>0</integer>
- <key>PERMISSIONS</key>
- <integer>493</integer>
- <key>TYPE</key>
- <integer>1</integer>
- <key>UID</key>
- <integer>0</integer>
- </dict>
- <dict>
- <key>CHILDREN</key>
- <array/>
- <key>GID</key>
- <integer>0</integer>
- <key>PATH</key>
- <string>LaunchDaemons</string>
- <key>PATH_TYPE</key>
- <integer>0</integer>
- <key>PERMISSIONS</key>
- <integer>493</integer>
- <key>TYPE</key>
- <integer>1</integer>
- <key>UID</key>
- <integer>0</integer>
- </dict>
- <dict>
- <key>CHILDREN</key>
- <array/>
- <key>GID</key>
- <integer>0</integer>
- <key>PATH</key>
- <string>PreferencePanes</string>
- <key>PATH_TYPE</key>
- <integer>0</integer>
- <key>PERMISSIONS</key>
- <integer>493</integer>
- <key>TYPE</key>
- <integer>1</integer>
- <key>UID</key>
- <integer>0</integer>
- </dict>
- <dict>
- <key>CHILDREN</key>
- <array/>
- <key>GID</key>
- <integer>0</integer>
- <key>PATH</key>
- <string>Preferences</string>
- <key>PATH_TYPE</key>
- <integer>0</integer>
- <key>PERMISSIONS</key>
- <integer>493</integer>
- <key>TYPE</key>
- <integer>1</integer>
- <key>UID</key>
- <integer>0</integer>
- </dict>
- <dict>
- <key>CHILDREN</key>
- <array/>
- <key>GID</key>
- <integer>80</integer>
- <key>PATH</key>
- <string>Printers</string>
- <key>PATH_TYPE</key>
- <integer>0</integer>
- <key>PERMISSIONS</key>
- <integer>493</integer>
- <key>TYPE</key>
- <integer>1</integer>
- <key>UID</key>
- <integer>0</integer>
- </dict>
- <dict>
- <key>CHILDREN</key>
- <array/>
- <key>GID</key>
- <integer>0</integer>
- <key>PATH</key>
- <string>PrivilegedHelperTools</string>
- <key>PATH_TYPE</key>
- <integer>0</integer>
- <key>PERMISSIONS</key>
- <integer>493</integer>
- <key>TYPE</key>
- <integer>1</integer>
- <key>UID</key>
- <integer>0</integer>
- </dict>
- <dict>
- <key>CHILDREN</key>
- <array/>
- <key>GID</key>
- <integer>0</integer>
- <key>PATH</key>
- <string>QuickLook</string>
- <key>PATH_TYPE</key>
- <integer>0</integer>
- <key>PERMISSIONS</key>
- <integer>493</integer>
- <key>TYPE</key>
- <integer>1</integer>
- <key>UID</key>
- <integer>0</integer>
- </dict>
- <dict>
- <key>CHILDREN</key>
- <array/>
- <key>GID</key>
- <integer>0</integer>
- <key>PATH</key>
- <string>QuickTime</string>
- <key>PATH_TYPE</key>
- <integer>0</integer>
- <key>PERMISSIONS</key>
- <integer>493</integer>
- <key>TYPE</key>
- <integer>1</integer>
- <key>UID</key>
- <integer>0</integer>
- </dict>
- <dict>
- <key>CHILDREN</key>
- <array/>
- <key>GID</key>
- <integer>0</integer>
- <key>PATH</key>
- <string>Screen Savers</string>
- <key>PATH_TYPE</key>
- <integer>0</integer>
- <key>PERMISSIONS</key>
- <integer>493</integer>
- <key>TYPE</key>
- <integer>1</integer>
- <key>UID</key>
- <integer>0</integer>
- </dict>
- <dict>
- <key>CHILDREN</key>
- <array/>
- <key>GID</key>
- <integer>0</integer>
- <key>PATH</key>
- <string>Scripts</string>
- <key>PATH_TYPE</key>
- <integer>0</integer>
- <key>PERMISSIONS</key>
- <integer>493</integer>
- <key>TYPE</key>
- <integer>1</integer>
- <key>UID</key>
- <integer>0</integer>
- </dict>
- <dict>
- <key>CHILDREN</key>
- <array/>
- <key>GID</key>
- <integer>0</integer>
- <key>PATH</key>
- <string>Services</string>
- <key>PATH_TYPE</key>
- <integer>0</integer>
- <key>PERMISSIONS</key>
- <integer>493</integer>
- <key>TYPE</key>
- <integer>1</integer>
- <key>UID</key>
- <integer>0</integer>
- </dict>
- <dict>
- <key>CHILDREN</key>
- <array/>
- <key>GID</key>
- <integer>0</integer>
- <key>PATH</key>
- <string>Widgets</string>
- <key>PATH_TYPE</key>
- <integer>0</integer>
- <key>PERMISSIONS</key>
- <integer>493</integer>
- <key>TYPE</key>
- <integer>1</integer>
- <key>UID</key>
- <integer>0</integer>
- </dict>
- </array>
- <key>GID</key>
- <integer>0</integer>
- <key>PATH</key>
- <string>Library</string>
- <key>PATH_TYPE</key>
- <integer>0</integer>
- <key>PERMISSIONS</key>
- <integer>493</integer>
- <key>TYPE</key>
- <integer>1</integer>
- <key>UID</key>
- <integer>0</integer>
- </dict>
- <dict>
- <key>CHILDREN</key>
- <array>
- <dict>
- <key>CHILDREN</key>
- <array>
- <dict>
- <key>CHILDREN</key>
- <array/>
- <key>GID</key>
- <integer>0</integer>
- <key>PATH</key>
- <string>Extensions</string>
- <key>PATH_TYPE</key>
- <integer>0</integer>
- <key>PERMISSIONS</key>
- <integer>493</integer>
- <key>TYPE</key>
- <integer>1</integer>
- <key>UID</key>
- <integer>0</integer>
- </dict>
- </array>
- <key>GID</key>
- <integer>0</integer>
- <key>PATH</key>
- <string>Library</string>
- <key>PATH_TYPE</key>
- <integer>0</integer>
- <key>PERMISSIONS</key>
- <integer>493</integer>
- <key>TYPE</key>
- <integer>1</integer>
- <key>UID</key>
- <integer>0</integer>
- </dict>
- </array>
- <key>GID</key>
- <integer>0</integer>
- <key>PATH</key>
- <string>System</string>
- <key>PATH_TYPE</key>
- <integer>0</integer>
- <key>PERMISSIONS</key>
- <integer>493</integer>
- <key>TYPE</key>
- <integer>1</integer>
- <key>UID</key>
- <integer>0</integer>
- </dict>
- <dict>
- <key>CHILDREN</key>
- <array>
- <dict>
- <key>CHILDREN</key>
- <array/>
- <key>GID</key>
- <integer>0</integer>
- <key>PATH</key>
- <string>Shared</string>
- <key>PATH_TYPE</key>
- <integer>0</integer>
- <key>PERMISSIONS</key>
- <integer>1023</integer>
- <key>TYPE</key>
- <integer>1</integer>
- <key>UID</key>
- <integer>0</integer>
- </dict>
- </array>
- <key>GID</key>
- <integer>80</integer>
- <key>PATH</key>
- <string>Users</string>
- <key>PATH_TYPE</key>
- <integer>0</integer>
- <key>PERMISSIONS</key>
- <integer>493</integer>
- <key>TYPE</key>
- <integer>1</integer>
- <key>UID</key>
- <integer>0</integer>
- </dict>
- </array>
- <key>GID</key>
- <integer>0</integer>
- <key>PATH</key>
- <string>/</string>
- <key>PATH_TYPE</key>
- <integer>0</integer>
- <key>PERMISSIONS</key>
- <integer>493</integer>
- <key>TYPE</key>
- <integer>1</integer>
- <key>UID</key>
- <integer>0</integer>
- </dict>
- <key>PAYLOAD_TYPE</key>
- <integer>0</integer>
- <key>VERSION</key>
- <integer>2</integer>
- </dict>
- <key>PACKAGE_SCRIPTS</key>
- <dict>
- <key>POSTINSTALL_PATH</key>
- <dict>
- <key>PATH</key>
- <string>post-install.sh</string>
- <key>PATH_TYPE</key>
- <integer>3</integer>
- </dict>
- <key>PREINSTALL_PATH</key>
- <dict/>
- <key>RESOURCES</key>
- <array/>
- </dict>
- <key>PACKAGE_SETTINGS</key>
- <dict>
- <key>AUTHENTICATION</key>
- <integer>1</integer>
- <key>CONCLUSION_ACTION</key>
- <integer>0</integer>
- <key>IDENTIFIER</key>
- <string>org.obsproject.pkg.obs-studio</string>
- <key>NAME</key>
- <string>OBS</string>
- <key>OVERWRITE_PERMISSIONS</key>
- <false/>
- <key>VERSION</key>
- <string>1.0</string>
- </dict>
- <key>UUID</key>
- <string>19CCE3F2-8911-4364-B673-8B5BC3ABD4DA</string>
- </dict>
- <dict>
- <key>PACKAGE_SETTINGS</key>
- <dict>
- <key>LOCATION</key>
- <integer>0</integer>
- <key>NAME</key>
- <string>SyphonInject</string>
- </dict>
- <key>PATH</key>
- <dict>
- <key>PATH</key>
- <string>SyphonInject.pkg</string>
- <key>PATH_TYPE</key>
- <integer>1</integer>
- </dict>
- <key>TYPE</key>
- <integer>1</integer>
- <key>UUID</key>
- <string>0CC9C67E-4D14-4794-9930-019925513B1C</string>
- </dict>
- </array>
- <key>PROJECT</key>
- <dict>
- <key>PROJECT_COMMENTS</key>
- <dict>
- <key>NOTES</key>
- <data>
- PCFET0NUWVBFIGh0bWwgUFVCTElDICItLy9XM0MvL0RURCBIVE1M
- IDQuMDEvL0VOIiAiaHR0cDovL3d3dy53My5vcmcvVFIvaHRtbDQv
- c3RyaWN0LmR0ZCI+CjxodG1sPgo8aGVhZD4KPG1ldGEgaHR0cC1l
- cXVpdj0iQ29udGVudC1UeXBlIiBjb250ZW50PSJ0ZXh0L2h0bWw7
- IGNoYXJzZXQ9VVRGLTgiPgo8bWV0YSBodHRwLWVxdWl2PSJDb250
- ZW50LVN0eWxlLVR5cGUiIGNvbnRlbnQ9InRleHQvY3NzIj4KPHRp
- dGxlPjwvdGl0bGU+CjxtZXRhIG5hbWU9IkdlbmVyYXRvciIgY29u
- dGVudD0iQ29jb2EgSFRNTCBXcml0ZXIiPgo8bWV0YSBuYW1lPSJD
- b2NvYVZlcnNpb24iIGNvbnRlbnQ9IjE1MDQuODEiPgo8c3R5bGUg
- dHlwZT0idGV4dC9jc3MiPgo8L3N0eWxlPgo8L2hlYWQ+Cjxib2R5
- Pgo8L2JvZHk+CjwvaHRtbD4K
- </data>
- </dict>
- <key>PROJECT_PRESENTATION</key>
- <dict>
- <key>BACKGROUND</key>
- <dict>
- <key>ALIGNMENT</key>
- <integer>4</integer>
- <key>BACKGROUND_PATH</key>
- <dict>
- <key>PATH</key>
- <string>obs.png</string>
- <key>PATH_TYPE</key>
- <integer>1</integer>
- </dict>
- <key>CUSTOM</key>
- <integer>1</integer>
- <key>SCALING</key>
- <integer>0</integer>
- </dict>
- <key>INSTALLATION TYPE</key>
- <dict>
- <key>HIERARCHIES</key>
- <dict>
- <key>INSTALLER</key>
- <dict>
- <key>LIST</key>
- <array>
- <dict>
- <key>DESCRIPTION</key>
- <array/>
- <key>OPTIONS</key>
- <dict>
- <key>HIDDEN</key>
- <false/>
- <key>STATE</key>
- <integer>0</integer>
- </dict>
- <key>PACKAGE_UUID</key>
- <string>19CCE3F2-8911-4364-B673-8B5BC3ABD4DA</string>
- <key>REQUIREMENTS</key>
- <array/>
- <key>TITLE</key>
- <array/>
- <key>TOOLTIP</key>
- <array/>
- <key>TYPE</key>
- <integer>0</integer>
- <key>UUID</key>
- <string>7C540711-59F4-479C-9CFD-8C4D6594992E</string>
- </dict>
- <dict>
- <key>DESCRIPTION</key>
- <array/>
- <key>OPTIONS</key>
- <dict>
- <key>HIDDEN</key>
- <false/>
- <key>STATE</key>
- <integer>1</integer>
- </dict>
- <key>PACKAGE_UUID</key>
- <string>0CC9C67E-4D14-4794-9930-019925513B1C</string>
- <key>REQUIREMENTS</key>
- <array/>
- <key>TITLE</key>
- <array/>
- <key>TOOLTIP</key>
- <array/>
- <key>TYPE</key>
- <integer>0</integer>
- <key>UUID</key>
- <string>BBDE08F6-D7EE-47CB-881F-7F208B3A604B</string>
- </dict>
- </array>
- <key>REMOVED</key>
- <dict/>
- </dict>
- </dict>
- <key>INSTALLATION TYPE</key>
- <integer>0</integer>
- <key>MODE</key>
- <integer>0</integer>
- </dict>
- <key>INSTALLATION_STEPS</key>
- <array>
- <dict>
- <key>ICPRESENTATION_CHAPTER_VIEW_CONTROLLER_CLASS</key>
- <string>ICPresentationViewIntroductionController</string>
- <key>INSTALLER_PLUGIN</key>
- <string>Introduction</string>
- <key>LIST_TITLE_KEY</key>
- <string>InstallerSectionTitle</string>
- </dict>
- <dict>
- <key>ICPRESENTATION_CHAPTER_VIEW_CONTROLLER_CLASS</key>
- <string>ICPresentationViewReadMeController</string>
- <key>INSTALLER_PLUGIN</key>
- <string>ReadMe</string>
- <key>LIST_TITLE_KEY</key>
- <string>InstallerSectionTitle</string>
- </dict>
- <dict>
- <key>ICPRESENTATION_CHAPTER_VIEW_CONTROLLER_CLASS</key>
- <string>ICPresentationViewLicenseController</string>
- <key>INSTALLER_PLUGIN</key>
- <string>License</string>
- <key>LIST_TITLE_KEY</key>
- <string>InstallerSectionTitle</string>
- </dict>
- <dict>
- <key>ICPRESENTATION_CHAPTER_VIEW_CONTROLLER_CLASS</key>
- <string>ICPresentationViewDestinationSelectController</string>
- <key>INSTALLER_PLUGIN</key>
- <string>TargetSelect</string>
- <key>LIST_TITLE_KEY</key>
- <string>InstallerSectionTitle</string>
- </dict>
- <dict>
- <key>ICPRESENTATION_CHAPTER_VIEW_CONTROLLER_CLASS</key>
- <string>ICPresentationViewInstallationTypeController</string>
- <key>INSTALLER_PLUGIN</key>
- <string>PackageSelection</string>
- <key>LIST_TITLE_KEY</key>
- <string>InstallerSectionTitle</string>
- </dict>
- <dict>
- <key>ICPRESENTATION_CHAPTER_VIEW_CONTROLLER_CLASS</key>
- <string>ICPresentationViewInstallationController</string>
- <key>INSTALLER_PLUGIN</key>
- <string>Install</string>
- <key>LIST_TITLE_KEY</key>
- <string>InstallerSectionTitle</string>
- </dict>
- <dict>
- <key>ICPRESENTATION_CHAPTER_VIEW_CONTROLLER_CLASS</key>
- <string>ICPresentationViewSummaryController</string>
- <key>INSTALLER_PLUGIN</key>
- <string>Summary</string>
- <key>LIST_TITLE_KEY</key>
- <string>InstallerSectionTitle</string>
- </dict>
- </array>
- <key>INTRODUCTION</key>
- <dict>
- <key>LOCALIZATIONS</key>
- <array/>
- </dict>
- <key>LICENSE</key>
- <dict>
- <key>KEYWORDS</key>
- <dict/>
- <key>LOCALIZATIONS</key>
- <array/>
- <key>MODE</key>
- <integer>0</integer>
- </dict>
- <key>README</key>
- <dict>
- <key>LOCALIZATIONS</key>
- <array/>
- </dict>
- <key>SUMMARY</key>
- <dict>
- <key>LOCALIZATIONS</key>
- <array/>
- </dict>
- <key>TITLE</key>
- <dict>
- <key>LOCALIZATIONS</key>
- <array>
- <dict>
- <key>LANGUAGE</key>
- <string>English</string>
- <key>VALUE</key>
- <string>OBS</string>
- </dict>
- </array>
- </dict>
- </dict>
- <key>PROJECT_REQUIREMENTS</key>
- <dict>
- <key>LIST</key>
- <array/>
- <key>POSTINSTALL_PATH</key>
- <dict/>
- <key>PREINSTALL_PATH</key>
- <dict/>
- <key>RESOURCES</key>
- <array/>
- <key>ROOT_VOLUME_ONLY</key>
- <false/>
- </dict>
- <key>PROJECT_SETTINGS</key>
- <dict>
- <key>ADVANCED_OPTIONS</key>
- <dict/>
- <key>BUILD_FORMAT</key>
- <integer>0</integer>
- <key>BUILD_PATH</key>
- <dict>
- <key>PATH</key>
- <string>../../../build</string>
- <key>PATH_TYPE</key>
- <integer>3</integer>
- </dict>
- <key>EXCLUDED_FILES</key>
- <array>
- <dict>
- <key>PATTERNS_ARRAY</key>
- <array>
- <dict>
- <key>REGULAR_EXPRESSION</key>
- <false/>
- <key>STRING</key>
- <string>.DS_Store</string>
- <key>TYPE</key>
- <integer>0</integer>
- </dict>
- </array>
- <key>PROTECTED</key>
- <true/>
- <key>PROXY_NAME</key>
- <string>Remove .DS_Store files</string>
- <key>PROXY_TOOLTIP</key>
- <string>Remove ".DS_Store" files created by the Finder.</string>
- <key>STATE</key>
- <true/>
- </dict>
- <dict>
- <key>PATTERNS_ARRAY</key>
- <array>
- <dict>
- <key>REGULAR_EXPRESSION</key>
- <false/>
- <key>STRING</key>
- <string>.pbdevelopment</string>
- <key>TYPE</key>
- <integer>0</integer>
- </dict>
- </array>
- <key>PROTECTED</key>
- <true/>
- <key>PROXY_NAME</key>
- <string>Remove .pbdevelopment files</string>
- <key>PROXY_TOOLTIP</key>
- <string>Remove ".pbdevelopment" files created by ProjectBuilder or Xcode.</string>
- <key>STATE</key>
- <true/>
- </dict>
- <dict>
- <key>PATTERNS_ARRAY</key>
- <array>
- <dict>
- <key>REGULAR_EXPRESSION</key>
- <false/>
- <key>STRING</key>
- <string>CVS</string>
- <key>TYPE</key>
- <integer>1</integer>
- </dict>
- <dict>
- <key>REGULAR_EXPRESSION</key>
- <false/>
- <key>STRING</key>
- <string>.cvsignore</string>
- <key>TYPE</key>
- <integer>0</integer>
- </dict>
- <dict>
- <key>REGULAR_EXPRESSION</key>
- <false/>
- <key>STRING</key>
- <string>.cvspass</string>
- <key>TYPE</key>
- <integer>0</integer>
- </dict>
- <dict>
- <key>REGULAR_EXPRESSION</key>
- <false/>
- <key>STRING</key>
- <string>.svn</string>
- <key>TYPE</key>
- <integer>1</integer>
- </dict>
- <dict>
- <key>REGULAR_EXPRESSION</key>
- <false/>
- <key>STRING</key>
- <string>.git</string>
- <key>TYPE</key>
- <integer>1</integer>
- </dict>
- <dict>
- <key>REGULAR_EXPRESSION</key>
- <false/>
- <key>STRING</key>
- <string>.gitignore</string>
- <key>TYPE</key>
- <integer>0</integer>
- </dict>
- </array>
- <key>PROTECTED</key>
- <true/>
- <key>PROXY_NAME</key>
- <string>Remove SCM metadata</string>
- <key>PROXY_TOOLTIP</key>
- <string>Remove helper files and folders used by the CVS, SVN or Git Source Code Management systems.</string>
- <key>STATE</key>
- <true/>
- </dict>
- <dict>
- <key>PATTERNS_ARRAY</key>
- <array>
- <dict>
- <key>REGULAR_EXPRESSION</key>
- <false/>
- <key>STRING</key>
- <string>classes.nib</string>
- <key>TYPE</key>
- <integer>0</integer>
- </dict>
- <dict>
- <key>REGULAR_EXPRESSION</key>
- <false/>
- <key>STRING</key>
- <string>designable.db</string>
- <key>TYPE</key>
- <integer>0</integer>
- </dict>
- <dict>
- <key>REGULAR_EXPRESSION</key>
- <false/>
- <key>STRING</key>
- <string>info.nib</string>
- <key>TYPE</key>
- <integer>0</integer>
- </dict>
- </array>
- <key>PROTECTED</key>
- <true/>
- <key>PROXY_NAME</key>
- <string>Optimize nib files</string>
- <key>PROXY_TOOLTIP</key>
- <string>Remove "classes.nib", "info.nib" and "designable.nib" files within .nib bundles.</string>
- <key>STATE</key>
- <true/>
- </dict>
- <dict>
- <key>PATTERNS_ARRAY</key>
- <array>
- <dict>
- <key>REGULAR_EXPRESSION</key>
- <false/>
- <key>STRING</key>
- <string>Resources Disabled</string>
- <key>TYPE</key>
- <integer>1</integer>
- </dict>
- </array>
- <key>PROTECTED</key>
- <true/>
- <key>PROXY_NAME</key>
- <string>Remove Resources Disabled folders</string>
- <key>PROXY_TOOLTIP</key>
- <string>Remove "Resources Disabled" folders.</string>
- <key>STATE</key>
- <true/>
- </dict>
- <dict>
- <key>SEPARATOR</key>
- <true/>
- </dict>
- </array>
- <key>NAME</key>
- <string>OBS</string>
- </dict>
- </dict>
- <key>TYPE</key>
- <integer>0</integer>
- <key>VERSION</key>
- <integer>2</integer>
-</dict>
-</plist>
obs-studio-26.1.0.tar.xz/CI/install/osx/Info.plist
Deleted
-<?xml version="1.0" encoding="UTF-8"?>
-<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
-<plist version="1.0">
-<dict>
- <key>CFBundleIconFile</key>
- <string>obs.icns</string>
- <key>CFBundleName</key>
- <string>OBS</string>
- <key>CFBundleGetInfoString</key>
- <string>OBS - Free and Open Source Streaming/Recording Software</string>
- <key>CFBundleExecutable</key>
- <string>obs</string>
- <key>CFBundleIdentifier</key>
- <string>com.obsproject.obs-studio</string>
- <key>CFBundlePackageType</key>
- <string>APPL</string>
- <key>LSMinimumSystemVersion</key>
- <string>10.8.5</string>
- <key>NSHighResolutionCapable</key>
- <true/>
- <key>LSAppNapIsDisabled</key>
- <true/>
- <key>NSCameraUsageDescription</key>
- <string>OBS needs to access the camera to enable camera sources to work.</string>
- <key>NSMicrophoneUsageDescription</key>
- <string>OBS needs to access the microphone to enable audio input.</string>
-</dict>
-</plist>
obs-studio-26.1.0.tar.xz/CI/install/osx/OBSPublicDSAKey.pem
Deleted
------BEGIN PUBLIC KEY-----
-MIIGPDCCBC4GByqGSM44BAEwggQhAoICAQCZZZ2y7H2GJmMfP4KQihJTJOoiGNUw
-mue6sqMbH+utRykRnSKBZux6R665eRFMpNgrgFO1TLLGbdD2U31KiGtCvFJOmOl3
-+QP055BuXjEG36NU7AWEFLAlbDlr/2D3oumq3Ib3iMnnr9RrVztJ2VFOvVio1eWr
-ZxboVwKPK8D6BqsWiv15vbYlJnTC4Fls6ySmdjVBxwoPlTaMu1ysi5DfbIZ93s5u
-aQt1FvXuWtPBWjyVUORcNbcWf49E5R2pV0OSBK95Hw2/wXz4vmj+w92dTePGnVaW
-Me4CoF5PIeZILwp6DCLStX4eW2WG1NChJTC8zeQ/3bMMoGyKM/MadyvrDqMywsKY
-caxkIwHrDKOEdXXGo80dIwZMMLipPA8DKhx5ojphfkeXjIhKSx+49knXT3ED5okE
-Wai7tGUXj/8D8sGh+7b+AVsdujvr4v8WQaZiKUOZ2IIHOg3VLz9T9v0zet1Yt987
-KNymFcp2CHeJ6KnDP/ZGQ6Nl0HsPxUgscsXV+R2FEc8Q1j0Ukkuxnopa0E4/huUu
-gjyRzpXD734qFMDf7LcXca6qNjBor6gVj5sRyRKCpZ+KQfMUlr8jp506ztYSyeJu
-dxJV30tQgztwkbrs02CqOt4Z3Peo6sdht7hWKSPVwmja3tq8/TfUSSoo6wKYN9/w
-Mf3dVeRF8hCzJQIVAJnzuzmzQhCKPiQnl3jh5qGII2XfAoICAQCCVATAff89ceHj
-ROHEbHTQFpVxJ/kRZPfxnU46DSw79Tih7tthV68oakPSOTP3cx/Tga0GwogarZ9N
-F2VVan5w9OQSSewXsr5UDT5bnmJF+h+JB7TMy+sXZBYobUqjlUd5VtKc8RsN86P4
-s7xbK0mA+hfe+27r18JT81/eH3xUfh7UOUGSdMN2Ch9f7RFSMZIgUAZUzu2K3ODp
-hPgtc2QJ8QVAp7GLvQgw8ZUME/ChZslyBIyJvYgUIxfxlgRWYro5pQT7/ngkgdXo
-wlghHKkldwMuY3zaFdhPnFNuEUEtc18ILsbz0+AnagCUd6n+3safskCRqLIHMOY6
-iLBSZPX9hJQhVCqSqz1VNDDww8FNa/fojJ1Lr/TI0I+0Ib2pCiY2LChXUqGY5SLZ
-2KNs5qFsyZP+I0L8YsGwqvUYyFwk7Ok224n0NtaOwqpLCrtXd/i6DaDNiaoJuwJC
-1ELCfaZivorgkC5rhBt2H7qWUAR+EtrFE/gb0k/G5EIhjYql7onGbX+G2re38vQA
-fg1pzguhig2dafP/BxMLZrn1Gg61xzmEYPuS9gclktaf675srv8GVb46VkOxXL+D
-YvTmpJPP7UUOVlmAMCo4j4y09MW3jq9TDp42VTLeZVubyjslGnavlnq1O+ZyXUye
-1FMeby65sIbSHHHwoFnRv3hLSEXI5gOCAgYAAoICAQCUkYnZkPfHfOJZI403xUYP
-CE/bLpkza074Xo6EXElsWRnpQgNTx+JFOvItgj3v0OkIqDin9UredKOwfkiftslV
-jxUVKA6I5kwnGvCpvTpQMLyLjq+VQr+J2D6eId6tV/iajhdu5r4JThU8KllT7Ywb
-NAur34ftLNCVAMRUaDNeEoHfePgderW384e+lbvpmtifmBluammGSxxRtUsdjvJZ
-BFkhaJu86CKxcU7D1lbPVOtV/jaxz6d16VdGcfBdi2LzXZzZtYpT9XGPX3NF+xii
-spAURWsoe11LTRXF+eJhgCm5iIDN3kh1HEQKYKAVpmrcM0aFzk/NpS+tFyU72vaq
-IRSSJw/aa1oELOAakG5oPldc4RcYWl32sbnVwXHO7TZvgTrBSC10o65MAC5CHP/s
-b07heDYAIt7re7szvOYq+c/9zAMAlu3pcO8MqaXYMmybdHBXHQ2b+DdJWHmIUWcX
-CbUzr09vzGkJAvqsXqbmJPr8aixrO75DhT0iDTILLWe/GWK51nf+Tg0pNxVgGyAl
-BqvRqqo7SSDu9FMkwQesFFHhuoHLyEHwVPJ+sMQTNwQcm9c6YuW8EYDRSkeKLWYk
-3fkjG+Pe9uVE8a1taDg3FjSY0UqjUT6XMw+i0Lajyus2L6wFBwrrGM6E4xa6x1CC
-MGjmuSOlPA1umQsToIcO4g==
------END PUBLIC KEY-----
obs-studio-26.1.0.tar.xz/CI/install/osx/background.pxd
Deleted
-(directory)
obs-studio-26.1.0.tar.xz/CI/install/osx/background.pxd/QuickLook
Deleted
-(directory)
obs-studio-26.1.0.tar.xz/CI/install/osx/background.pxd/data
Deleted
-(directory)
obs-studio-26.1.0.tar.xz/CI/install/osx/buildDMG
Deleted
-dmgbuild -s ./settings.json "OBS" obs.dmg
obs-studio-26.1.0.tar.xz/CI/install/osx/build_app.py
Deleted
-#!/usr/bin/env python
-
-candidate_paths = "bin obs-plugins data".split()
-
-plist_path = "../cmake/osxbundle/Info.plist"
-icon_path = "../cmake/osxbundle/obs.icns"
-run_path = "../cmake/osxbundle/obslaunch.sh"
-
-#not copied
-blacklist = """/usr /System""".split()
-
-#copied
-whitelist = """/usr/local""".split()
-
-#
-#
-#
-
-
-from sys import argv
-from glob import glob
-from subprocess import check_output, call
-from collections import namedtuple
-from shutil import copy, copytree, rmtree
-from os import makedirs, rename, walk, path as ospath
-import plistlib
-
-import argparse
-
-def _str_to_bool(s):
- """Convert string to bool (in argparse context)."""
- if s.lower() not in ['true', 'false']:
- raise ValueError('Need bool; got %r' % s)
- return {'true': True, 'false': False}[s.lower()]
-
-def add_boolean_argument(parser, name, default=False):
- """Add a boolean argument to an ArgumentParser instance."""
- group = parser.add_mutually_exclusive_group()
- group.add_argument(
- '--' + name, nargs='?', default=default, const=True, type=_str_to_bool)
- group.add_argument('--no' + name, dest=name, action='store_false')
-
-parser = argparse.ArgumentParser(description='obs-studio package util')
-parser.add_argument('-d', '--base-dir', dest='dir', default='rundir/RelWithDebInfo')
-parser.add_argument('-n', '--build-number', dest='build_number', default='0')
-parser.add_argument('-k', '--public-key', dest='public_key', default='OBSPublicDSAKey.pem')
-parser.add_argument('-f', '--sparkle-framework', dest='sparkle', default=None)
-parser.add_argument('-b', '--base-url', dest='base_url', default='https://obsproject.com/osx_update')
-parser.add_argument('-u', '--user', dest='user', default='jp9000')
-parser.add_argument('-c', '--channel', dest='channel', default='master')
-add_boolean_argument(parser, 'stable', default=False)
-parser.add_argument('-p', '--prefix', dest='prefix', default='')
-args = parser.parse_args()
-
-def cmd(cmd):
- import subprocess
- import shlex
- return subprocess.check_output(shlex.split(cmd)).rstrip('\r\n')
-
-LibTarget = namedtuple("LibTarget", ("path", "external", "copy_as"))
-
-inspect = list()
-
-inspected = set()
-
-build_path = args.dir
-build_path = build_path.replace("\\ ", " ")
-
-def add(name, external=False, copy_as=None):
- if external and copy_as is None:
- copy_as = name.split("/")[-1]
- if name[0] != "/":
- name = build_path+"/"+name
- t = LibTarget(name, external, copy_as)
- if t in inspected:
- return
- inspect.append(t)
- inspected.add(t)
-
-
-for i in candidate_paths:
- print("Checking " + i)
- for root, dirs, files in walk(build_path+"/"+i):
- for file_ in files:
- if ".ini" in file_:
- continue
- if ".png" in file_:
- continue
- if ".effect" in file_:
- continue
- if ".py" in file_:
- continue
- if ".json" in file_:
- continue
- path = root + "/" + file_
- try:
- out = check_output("{0}otool -L '{1}'".format(args.prefix, path), shell=True,
- universal_newlines=True)
- if "is not an object file" in out:
- continue
- except:
- continue
- rel_path = path[len(build_path)+1:]
- print(repr(path), repr(rel_path))
- add(rel_path)
-
-def add_plugins(path, replace):
- for img in glob(path.replace(
- "lib/QtCore.framework/Versions/5/QtCore",
- "plugins/%s/*"%replace).replace(
- "Library/Frameworks/QtCore.framework/Versions/5/QtCore",
- "share/qt5/plugins/%s/*"%replace)):
- if "_debug" in img:
- continue
- add(img, True, img.split("plugins/")[-1])
-
-actual_sparkle_path = '@loader_path/Frameworks/Sparkle.framework/Versions/A/Sparkle'
-
-while inspect:
- target = inspect.pop()
- print("inspecting", repr(target))
- path = target.path
- if path[0] == "@":
- continue
- out = check_output("{0}otool -L '{1}'".format(args.prefix, path), shell=True,
- universal_newlines=True)
-
- if "QtCore" in path:
- add_plugins(path, "platforms")
- add_plugins(path, "imageformats")
- add_plugins(path, "accessible")
- add_plugins(path, "styles")
-
-
- for line in out.split("\n")[1:]:
- new = line.strip().split(" (")[0]
- if '@' in new and "sparkle.framework" in new.lower():
- actual_sparkle_path = new
- print "Using sparkle path:", repr(actual_sparkle_path)
- if not new or new[0] == "@" or new.endswith(path.split("/")[-1]):
- continue
- whitelisted = False
- for i in whitelist:
- if new.startswith(i):
- whitelisted = True
- if not whitelisted:
- blacklisted = False
- for i in blacklist:
- if new.startswith(i):
- blacklisted = True
- break
- if blacklisted:
- continue
- add(new, True)
-
-changes = list()
-for path, external, copy_as in inspected:
- if not external:
- continue #built with install_rpath hopefully
- changes.append("-change '%s' '@rpath/%s'"%(path, copy_as))
-changes = " ".join(changes)
-
-info = plistlib.readPlist(plist_path)
-
-latest_tag = cmd('git describe --tags --abbrev=0')
-log = cmd('git log --pretty=oneline {0}...HEAD'.format(latest_tag))
-
-from os import path
-# set version
-if args.stable:
- info["CFBundleVersion"] = latest_tag
- info["CFBundleShortVersionString"] = latest_tag
- info["SUFeedURL"] = '{0}/stable/updates.xml'.format(args.base_url)
-else:
- info["CFBundleVersion"] = args.build_number
- info["CFBundleShortVersionString"] = '{0}.{1}'.format(latest_tag, args.build_number)
- info["SUFeedURL"] = '{0}/{1}/{2}/updates.xml'.format(args.base_url, args.user, args.channel)
-
-info["SUPublicDSAKeyFile"] = path.basename(args.public_key)
-info["OBSFeedsURL"] = '{0}/feeds.xml'.format(args.base_url)
-
-app_name = info["CFBundleName"]+".app"
-icon_file = "tmp/Contents/Resources/%s"%info["CFBundleIconFile"]
-
-copytree(build_path, "tmp/Contents/Resources/", symlinks=True)
-copy(icon_path, icon_file)
-plistlib.writePlist(info, "tmp/Contents/Info.plist")
-makedirs("tmp/Contents/MacOS")
-copy(run_path, "tmp/Contents/MacOS/%s"%info["CFBundleExecutable"])
-try:
- copy(args.public_key, "tmp/Contents/Resources")
-except:
- pass
-
-if args.sparkle is not None:
- copytree(args.sparkle, "tmp/Contents/Frameworks/Sparkle.framework", symlinks=True)
-
-prefix = "tmp/Contents/Resources/"
-sparkle_path = '@loader_path/{0}/Frameworks/Sparkle.framework/Versions/A/Sparkle'
-
-cmd('{0}install_name_tool -change {1} {2} {3}/bin/obs'.format(
- args.prefix, actual_sparkle_path, sparkle_path.format('../..'), prefix))
-
-
-
-for path, external, copy_as in inspected:
- id_ = ""
- filename = path
- rpath = ""
- if external:
- if copy_as == "Python":
- continue
- id_ = "-id '@rpath/%s'"%copy_as
- filename = prefix + "bin/" +copy_as
- rpath = "-add_rpath @loader_path/ -add_rpath @executable_path/"
- if "/" in copy_as:
- try:
- dirs = copy_as.rsplit("/", 1)[0]
- makedirs(prefix + "bin/" + dirs)
- except:
- pass
- copy(path, filename)
- else:
- filename = path[len(build_path)+1:]
- id_ = "-id '@rpath/../%s'"%filename
- if not filename.startswith("bin"):
- print(filename)
- rpath = "-add_rpath '@loader_path/{}/'".format(ospath.relpath("bin/", ospath.dirname(filename)))
- filename = prefix + filename
-
- cmd = "{0}install_name_tool {1} {2} {3} '{4}'".format(args.prefix, changes, id_, rpath, filename)
- call(cmd, shell=True)
-
-try:
- rename("tmp", app_name)
-except:
- print("App already exists")
- rmtree("tmp")
obs-studio-26.1.0.tar.xz/CI/install/osx/makeRetinaBG
Deleted
-tiffutil -cathidpicheck background.png background@2x.png -out background.tiff
obs-studio-26.1.0.tar.xz/CI/install/osx/packageApp.sh
Deleted
-# Exit if something fails
-set -e
-
-rm -rf ./OBS.app
-
-mkdir OBS.app
-mkdir OBS.app/Contents
-mkdir OBS.app/Contents/MacOS
-mkdir OBS.app/Contents/PlugIns
-mkdir OBS.app/Contents/Resources
-
-cp -R rundir/RelWithDebInfo/bin/ ./OBS.app/Contents/MacOS
-cp -R rundir/RelWithDebInfo/data ./OBS.app/Contents/Resources
-cp ../CI/install/osx/obs.icns ./OBS.app/Contents/Resources
-cp -R rundir/RelWithDebInfo/obs-plugins/ ./OBS.app/Contents/PlugIns
-cp ../CI/install/osx/Info.plist ./OBS.app/Contents
-
-../CI/install/osx/dylibBundler -b -cd -d ./OBS.app/Contents/Frameworks -p @executable_path/../Frameworks/ \
--s ./OBS.app/Contents/MacOS \
--s /usr/local/opt/mbedtls/lib/ \
--x ./OBS.app/Contents/PlugIns/coreaudio-encoder.so \
--x ./OBS.app/Contents/PlugIns/decklink-ouput-ui.so \
--x ./OBS.app/Contents/PlugIns/frontend-tools.so \
--x ./OBS.app/Contents/PlugIns/image-source.so \
--x ./OBS.app/Contents/PlugIns/linux-jack.so \
--x ./OBS.app/Contents/PlugIns/mac-avcapture.so \
--x ./OBS.app/Contents/PlugIns/mac-capture.so \
--x ./OBS.app/Contents/PlugIns/mac-decklink.so \
--x ./OBS.app/Contents/PlugIns/mac-syphon.so \
--x ./OBS.app/Contents/PlugIns/mac-vth264.so \
--x ./OBS.app/Contents/PlugIns/obs-browser.so \
--x ./OBS.app/Contents/PlugIns/obs-browser-page \
--x ./OBS.app/Contents/PlugIns/obs-ffmpeg.so \
--x ./OBS.app/Contents/PlugIns/obs-filters.so \
--x ./OBS.app/Contents/PlugIns/obs-transitions.so \
--x ./OBS.app/Contents/PlugIns/obs-vst.so \
--x ./OBS.app/Contents/PlugIns/rtmp-services.so \
--x ./OBS.app/Contents/MacOS/obs \
--x ./OBS.app/Contents/MacOS/obs-ffmpeg-mux \
--x ./OBS.app/Contents/MacOS/obslua.so \
--x ./OBS.app/Contents/PlugIns/obs-x264.so \
--x ./OBS.app/Contents/PlugIns/text-freetype2.so \
--x ./OBS.app/Contents/PlugIns/obs-libfdk.so
-# -x ./OBS.app/Contents/MacOS/_obspython.so \
-# -x ./OBS.app/Contents/PlugIns/obs-outputs.so \
-
-/usr/local/Cellar/qt/5.14.1/bin/macdeployqt ./OBS.app
-
-mv ./OBS.app/Contents/MacOS/libobs-opengl.so ./OBS.app/Contents/Frameworks
-
-rm -f -r ./OBS.app/Contents/Frameworks/QtNetwork.framework
-
-# put qt network in here becasuse streamdeck uses it
-cp -R /usr/local/opt/qt/lib/QtNetwork.framework ./OBS.app/Contents/Frameworks
-chmod -R +w ./OBS.app/Contents/Frameworks/QtNetwork.framework
-rm -r ./OBS.app/Contents/Frameworks/QtNetwork.framework/Headers
-rm -r ./OBS.app/Contents/Frameworks/QtNetwork.framework/Versions/5/Headers/
-chmod 644 ./OBS.app/Contents/Frameworks/QtNetwork.framework/Versions/5/Resources/Info.plist
-install_name_tool -id @executable_path/../Frameworks/QtNetwork.framework/Versions/5/QtNetwork ./OBS.app/Contents/Frameworks/QtNetwork.framework/Versions/5/QtNetwork
-install_name_tool -change /usr/local/Cellar/qt/5.14.1/lib/QtCore.framework/Versions/5/QtCore @executable_path/../Frameworks/QtCore.framework/Versions/5/QtCore ./OBS.app/Contents/Frameworks/QtNetwork.framework/Versions/5/QtNetwork
-
-
-# decklink ui qt
-install_name_tool -change /usr/local/opt/qt/lib/QtGui.framework/Versions/5/QtGui @executable_path/../Frameworks/QtGui.framework/Versions/5/QtGui ./OBS.app/Contents/PlugIns/decklink-ouput-ui.so
-install_name_tool -change /usr/local/opt/qt/lib/QtCore.framework/Versions/5/QtCore @executable_path/../Frameworks/QtCore.framework/Versions/5/QtCore ./OBS.app/Contents/PlugIns/decklink-ouput-ui.so
-install_name_tool -change /usr/local/opt/qt/lib/QtWidgets.framework/Versions/5/QtWidgets @executable_path/../Frameworks/QtWidgets.framework/Versions/5/QtWidgets ./OBS.app/Contents/PlugIns/decklink-ouput-ui.so
-
-# frontend tools qt
-install_name_tool -change /usr/local/opt/qt/lib/QtGui.framework/Versions/5/QtGui @executable_path/../Frameworks/QtGui.framework/Versions/5/QtGui ./OBS.app/Contents/PlugIns/frontend-tools.so
-install_name_tool -change /usr/local/opt/qt/lib/QtCore.framework/Versions/5/QtCore @executable_path/../Frameworks/QtCore.framework/Versions/5/QtCore ./OBS.app/Contents/PlugIns/frontend-tools.so
-install_name_tool -change /usr/local/opt/qt/lib/QtWidgets.framework/Versions/5/QtWidgets @executable_path/../Frameworks/QtWidgets.framework/Versions/5/QtWidgets ./OBS.app/Contents/PlugIns/frontend-tools.so
-
-# vst qt
-install_name_tool -change /usr/local/opt/qt/lib/QtGui.framework/Versions/5/QtGui @executable_path/../Frameworks/QtGui.framework/Versions/5/QtGui ./OBS.app/Contents/PlugIns/obs-vst.so
-install_name_tool -change /usr/local/opt/qt/lib/QtCore.framework/Versions/5/QtCore @executable_path/../Frameworks/QtCore.framework/Versions/5/QtCore ./OBS.app/Contents/PlugIns/obs-vst.so
-install_name_tool -change /usr/local/opt/qt/lib/QtWidgets.framework/Versions/5/QtWidgets @executable_path/../Frameworks/QtWidgets.framework/Versions/5/QtWidgets ./OBS.app/Contents/PlugIns/obs-vst.so
-install_name_tool -change /usr/local/opt/qt/lib/QtMacExtras.framework/Versions/5/QtMacExtras @executable_path/../Frameworks/QtMacExtras.framework/Versions/5/QtMacExtras ./OBS.app/Contents/PlugIns/obs-vst.so
obs-studio-26.1.0.tar.xz/CI/install/osx/package_util.py
Deleted
-def cmd(cmd):
- import subprocess
- import shlex
- return subprocess.check_output(shlex.split(cmd)).rstrip('\r\n')
-
-def get_tag_info(tag):
- rev = cmd('git rev-parse {0}'.format(latest_tag))
- anno = cmd('git cat-file -p {0}'.format(rev))
- tag_info = []
- for i, v in enumerate(anno.splitlines()):
- if i <= 4:
- continue
- tag_info.append(v.lstrip())
-
- return tag_info
-
-def gen_html(github_user, latest_tag):
-
- url = 'https://github.com/{0}/obs-studio/commit/%H'.format(github_user)
-
- with open('readme.html', 'w') as f:
- f.write("<html><body>")
- log_cmd = """git log {0}...HEAD --pretty=format:'<li>• <a href="{1}">(view)</a> %s</li>'"""
- log_res = cmd(log_cmd.format(latest_tag, url))
- if len(log_res.splitlines()):
- f.write('<p>Changes since {0}: (Newest to oldest)</p>'.format(latest_tag))
- f.write(log_res)
-
- ul = False
- f.write('<p>')
- import re
-
- for l in get_tag_info(latest_tag):
- if not len(l):
- continue
- if l.startswith('*'):
- ul = True
- if not ul:
- f.write('<ul>')
- f.write('<li>• {0}</li>'.format(re.sub(r'^(\s*)?[*](\s*)?', '', l)))
- else:
- if ul:
- f.write('</ul><p/>')
- ul = False
- f.write('<p>{0}</p>'.format(l))
- if ul:
- f.write('</ul>')
- f.write('</p></body></html>')
-
- cmd('textutil -convert rtf readme.html -output readme.rtf')
- cmd("""sed -i '' 's/Times-Roman/Verdana/g' readme.rtf""")
-
-def save_manifest(latest_tag, user, jenkins_build, branch, stable):
- log = cmd('git log --pretty=oneline {0}...HEAD'.format(latest_tag))
- manifest = {}
- manifest['commits'] = []
- for v in log.splitlines():
- manifest['commits'].append(v)
- manifest['tag'] = {
- 'name': latest_tag,
- 'description': get_tag_info(latest_tag)
- }
-
- manifest['version'] = cmd('git rev-list HEAD --count')
- manifest['sha1'] = cmd('git rev-parse HEAD')
- manifest['jenkins_build'] = jenkins_build
-
- manifest['user'] = user
- manifest['branch'] = branch
- manifest['stable'] = stable
-
- import cPickle
- with open('manifest', 'w') as f:
- cPickle.dump(manifest, f)
-
-def prepare_pkg(project, package_id):
- cmd('packagesutil --file "{0}" set package-1 identifier {1}'.format(project, package_id))
- cmd('packagesutil --file "{0}" set package-1 version {1}'.format(project, '1.0'))
-
-
-import argparse
-parser = argparse.ArgumentParser(description='obs-studio package util')
-parser.add_argument('-u', '--user', dest='user', default='jp9000')
-parser.add_argument('-p', '--package-id', dest='package_id', default='org.obsproject.pkg.obs-studio')
-parser.add_argument('-f', '--project-file', dest='project', default='OBS.pkgproj')
-parser.add_argument('-j', '--jenkins-build', dest='jenkins_build', default='0')
-parser.add_argument('-b', '--branch', dest='branch', default='master')
-parser.add_argument('-s', '--stable', dest='stable', required=False, action='store_true', default=False)
-args = parser.parse_args()
-
-latest_tag = cmd('git describe --tags --abbrev=0')
-gen_html(args.user, latest_tag)
-prepare_pkg(args.project, args.package_id)
-save_manifest(latest_tag, args.user, args.jenkins_build, args.branch, args.stable)
obs-studio-26.1.0.tar.xz/CI/install/osx/post-install.sh
Deleted
-#!/usr/bin/env bash
obs-studio-26.1.0.tar.xz/CI/install/osx/settings.json
Deleted
-{
- "title": "OBS",
- "background": "../CI/install/osx/background.tiff",
- "format": "UDZO",
- "compression-level": 9,
- "window": { "position": { "x": 100, "y": 100 },
- "size": { "width": 540, "height": 380 } },
- "contents": [
- { "x": 120, "y": 180, "type": "file",
- "path": "./OBS.app" },
- { "x": 420, "y": 180, "type": "link", "path": "/Applications" }
- ]
-}
obs-studio-26.1.0.tar.xz/CI/osxcert
Deleted
-(directory)
obs-studio-26.1.0.tar.xz/CI/util
Deleted
-(directory)
obs-studio-26.1.0.tar.xz/CI/util/build-package-deps-osx.sh
Deleted
-#!/usr/bin/env bash
-
-set -e
-
-# This script builds a tar file that contains the deps that OBS needs for
-# advanced functionality on OSX. Currently this tar file is pulled down from S3
-# and used in the CI build process on Travis (a consumption sketch follows after
-# this script).
-# Mostly this sets build flags so everything compiles against older SDKs and the
-# resulting libs stay portable.
-
-exists()
-{
- command -v "$1" >/dev/null 2>&1
-}
-
-if ! exists nasm; then
- echo "nasm not found. Try brew install nasm"
-    exit 1
-fi
-
-CURDIR=$(pwd)
-
-# the temp directory
-WORK_DIR=`mktemp -d`
-
-# deletes the temp directory
-function cleanup {
- #rm -rf "$WORK_DIR"
- echo "Deleted temp working directory $WORK_DIR"
-}
-
-# register the cleanup function to be called on the EXIT signal
-trap cleanup EXIT
-
-cd $WORK_DIR
-
-DEPS_DEST=$WORK_DIR/obsdeps
-
-# make dest dirs
-mkdir $DEPS_DEST
-mkdir $DEPS_DEST/bin
-mkdir $DEPS_DEST/include
-mkdir $DEPS_DEST/lib
-
-# OSX COMPAT
-export MACOSX_DEPLOYMENT_TARGET=10.11
-
-# If you need an older SDK and Xcode won't give it to you
-# https://github.com/phracker/MacOSX-SDKs
-
-# libopus
-curl -L -O https://ftp.osuosl.org/pub/xiph/releases/opus/opus-1.2.1.tar.gz
-tar -xf opus-1.2.1.tar.gz
-cd ./opus-1.2.1
-mkdir build
-cd ./build
-../configure --disable-shared --enable-static --prefix="/tmp/obsdeps"
-make -j 12
-make install
-
-cd $WORK_DIR
-
-# libogg
-curl -L -O https://ftp.osuosl.org/pub/xiph/releases/ogg/libogg-1.3.3.tar.gz
-tar -xf libogg-1.3.3.tar.gz
-cd ./libogg-1.3.3
-mkdir build
-cd ./build
-../configure --disable-shared --enable-static --prefix="/tmp/obsdeps"
-make -j 12
-make install
-
-cd $WORK_DIR
-
-# libvorbis
-curl -L -O https://ftp.osuosl.org/pub/xiph/releases/vorbis/libvorbis-1.3.6.tar.gz
-tar -xf libvorbis-1.3.6.tar.gz
-cd ./libvorbis-1.3.6
-mkdir build
-cd ./build
-../configure --disable-shared --enable-static --prefix="/tmp/obsdeps"
-make -j 12
-make install
-
-cd $WORK_DIR
-
-# libvpx
-curl -L -O https://chromium.googlesource.com/webm/libvpx/+archive/v1.7.0.tar.gz
-mkdir -p ./libvpx-v1.7.0
-tar -xf v1.7.0.tar.gz -C $PWD/libvpx-v1.7.0
-cd ./libvpx-v1.7.0
-mkdir -p build
-cd ./build
-../configure --disable-shared --prefix="/tmp/obsdeps" --libdir="/tmp/obsdeps/lib"
-make -j 12
-make install
-
-cd $WORK_DIR
-
-# x264
-git clone git://git.videolan.org/x264.git
-cd ./x264
-git checkout origin/stable
-mkdir build
-cd ./build
-../configure --extra-ldflags="-mmacosx-version-min=10.11" --enable-static --prefix="/tmp/obsdeps"
-make -j 12
-make install
-../configure --extra-ldflags="-mmacosx-version-min=10.11" --enable-shared --libdir="/tmp/obsdeps/bin" --prefix="/tmp/obsdeps"
-make -j 12
-ln -f -s libx264.*.dylib libx264.dylib
-find . -name \*.dylib -exec cp \{\} $DEPS_DEST/bin/ \;
-rsync -avh --include="*/" --include="*.h" --exclude="*" ../* $DEPS_DEST/include/
-rsync -avh --include="*/" --include="*.h" --exclude="*" ./* $DEPS_DEST/include/
-
-cd $WORK_DIR
-
-# jansson
-curl -L -O http://www.digip.org/jansson/releases/jansson-2.11.tar.gz
-tar -xf jansson-2.11.tar.gz
-cd jansson-2.11
-mkdir build
-cd ./build
-../configure --libdir="/tmp/obsdeps/bin" --enable-shared --disable-static
-make -j 12
-find . -name \*.dylib -exec cp \{\} $DEPS_DEST/bin/ \;
-rsync -avh --include="*/" --include="*.h" --exclude="*" ../* $DEPS_DEST/include/
-rsync -avh --include="*/" --include="*.h" --exclude="*" ./* $DEPS_DEST/include/
-
-cd $WORK_DIR
-
-export LDFLAGS="-L/tmp/obsdeps/lib"
-export CFLAGS="-I/tmp/obsdeps/include"
-
-# FFMPEG
-curl -L -O https://github.com/FFmpeg/FFmpeg/archive/n4.0.2.zip
-unzip ./n4.0.2.zip
-cd ./FFmpeg-n4.0.2
-mkdir build
-cd ./build
-../configure --pkg-config-flags="--static" --extra-ldflags="-mmacosx-version-min=10.11" --enable-shared --disable-static --shlibdir="/tmp/obsdeps/bin" --enable-gpl --disable-doc --enable-libx264 --enable-libopus --enable-libvorbis --enable-libvpx --disable-outdev=sdl
-make -j 12
-find . -name \*.dylib -exec cp \{\} $DEPS_DEST/bin/ \;
-rsync -avh --include="*/" --include="*.h" --exclude="*" ../* $DEPS_DEST/include/
-rsync -avh --include="*/" --include="*.h" --exclude="*" ./* $DEPS_DEST/include/
-
-# luajit
-curl -L -O https://luajit.org/download/LuaJIT-2.0.5.tar.gz
-tar -xf LuaJIT-2.0.5.tar.gz
-cd LuaJIT-2.0.5
-make PREFIX=/tmp/obsdeps
-make PREFIX=/tmp/obsdeps install
-find /tmp/obsdeps/lib -name libluajit\*.dylib -exec cp \{\} $DEPS_DEST/lib/ \;
-rsync -avh --include="*/" --include="*.h" --exclude="*" src/* $DEPS_DEST/include/
-make PREFIX=/tmp/obsdeps uninstall
-
-cd $WORK_DIR
-
-tar -czf osx-deps.tar.gz obsdeps
-
-cp ./osx-deps.tar.gz $CURDIR
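As noted in the header comment, the tarball produced here is normally fetched during the Travis CI build rather than rebuilt locally. A minimal consumption sketch in Python follows; the URL is hypothetical, since the real CI pulls the file from S3:

import tarfile
import urllib.request

# Hypothetical download location -- the actual CI fetches this from S3.
URL = 'https://example.com/osx-deps.tar.gz'
urllib.request.urlretrieve(URL, 'osx-deps.tar.gz')

# Unpacking under /tmp yields /tmp/obsdeps/{bin,include,lib}, which matches the
# --prefix=/tmp/obsdeps that the libraries above were configured against.
with tarfile.open('osx-deps.tar.gz') as tar:
    tar.extractall('/tmp')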
obs-studio-26.1.0.tar.xz/CI/util/win32.sh
Deleted
-#!/bin/bash
-
-cd x264
-make clean
-LDFLAGS="-static-libgcc" ./configure --enable-shared --enable-win32thread --disable-avs --disable-ffms --disable-gpac --disable-interlaced --disable-lavf --cross-prefix=i686-w64-mingw32- --host=i686-pc-mingw32 --prefix="/home/jim/packages/win32"
-make -j6 fprofiled VIDS="CITY_704x576_60_orig_01.yuv"
-make install
-i686-w64-mingw32-dlltool -z /home/jim/packages/win32/bin/x264.orig.def --export-all-symbols /home/jim/packages/win32/bin/libx264-148.dll
-grep "EXPORTS\|x264" /home/jim/packages/win32/bin/x264.orig.def > /home/jim/packages/win32/bin/x264.def
-rm -f /home/jim/packages/win32/bin/x264.orig.def
-sed -i -e "/\\t.*DATA/d" -e "/\\t\".*/d" -e "s/\s@.*//" /home/jim/packages/win32/bin/x264.def
-i686-w64-mingw32-dlltool -m i386 -d /home/jim/packages/win32/bin/x264.def -l /home/jim/packages/win32/bin/x264.lib -D /home/jim/packages/win32/bin/libx264-148.dll
-cd ..
-
-cd opus
-make clean
-LDFLAGS="-static-libgcc" ./configure -host=i686-w64-mingw32 --prefix="/home/jim/packages/win32" --enable-shared
-make -j6
-make install
-cd ..
-
-cd zlib/build32
-make clean
-cmake .. -DCMAKE_SYSTEM_NAME=Windows -DCMAKE_C_COMPILER=i686-w64-mingw32-gcc -DCMAKE_INSTALL_PREFIX=/home/jim/packages/win32 -DINSTALL_PKGCONFIG_DIR=/home/jim/packages/win32/lib/pkgconfig -DCMAKE_RC_COMPILER=i686-w64-mingw32-windres -DCMAKE_SHARED_LINKER_FLAGS="-static-libgcc"
-make -j6
-make install
-mv ../../win32/lib/libzlib.dll.a ../../win32/lib/libz.dll.a
-mv ../../win32/lib/libzlibstatic.a ../../win32/lib/libz.a
-cp ../win32/zlib.def /home/jim/packages/win32/bin
-i686-w64-mingw32-dlltool -m i386 -d ../win32/zlib.def -l /home/jim/packages/win32/bin/zlib.lib -D /home/jim/packages/win32/bin/zlib.dll
-cd ../..
-
-cd libpng
-make clean
-PKG_CONFIG_PATH="/home/jim/packages/win32/lib/pkgconfig" LDFLAGS="-L/home/jim/packages/win32/lib -static-libgcc" CPPFLAGS="-I/home/jim/packages/win32/include" ./configure -host=i686-w64-mingw32 --prefix="/home/jim/packages/win32" --enable-shared
-make -j6
-make install
-cd ..
-
-cd libogg
-make clean
-PKG_CONFIG_PATH="/home/jim/packages/win32/lib/pkgconfig" LDFLAGS="-L/home/jim/packages/win32/lib -static-libgcc" CPPFLAGS="-I/home/jim/packages/win32/include" ./configure -host=i686-w64-mingw32 --prefix="/home/jim/packages/win32" --enable-shared
-make -j6
-make install
-cd ..
-
-cd libvorbis
-make clean
-PKG_CONFIG_PATH="/home/jim/packages/win32/lib/pkgconfig" LDFLAGS="-L/home/jim/packages/win32/lib -static-libgcc" CPPFLAGS="-I/home/jim/packages/win32/include" ./configure -host=i686-w64-mingw32 --prefix="/home/jim/packages/win32" --enable-shared --with-ogg="/home/jim/packages/win32"
-make -j6
-make install
-cd ..
-
-cd libvpxbuild
-make clean
-PKG_CONFIG_PATH="/home/jim/packages/win32/lib/pkgconfig" CROSS=i686-w64-mingw32- LDFLAGS="-static-libgcc" ../libvpx/configure --prefix=/home/jim/packages/win32 --enable-vp8 --enable-vp9 --disable-docs --disable-examples --enable-shared --disable-static --enable-runtime-cpu-detect --enable-realtime-only --disable-install-bins --disable-install-docs --disable-unit-tests --target=x86-win32-gcc
-make -j6
-make install
-i686-w64-mingw32-dlltool -m i386 -d libvpx.def -l /home/jim/packages/win32/bin/vpx.lib -D /home/jim/packages/win32/bin/libvpx-1.dll
-cd ..
-
-cd ffmpeg
-make clean
-cp /media/sf_linux/nvEncodeAPI.h /home/jim/packages/win32/include
-PKG_CONFIG_PATH="/home/jim/packages/win32/lib/pkgconfig" LDFLAGS="-L/home/jim/packages/win32/lib -static-libgcc" CFLAGS="-I/home/jim/packages/win32/include" ./configure --enable-memalign-hack --enable-gpl --disable-programs --disable-doc --arch=x86 --enable-shared --enable-nvenc --enable-libx264 --enable-libopus --enable-libvorbis --enable-libvpx --disable-debug --cross-prefix=i686-w64-mingw32- --target-os=mingw32 --pkg-config=pkg-config --prefix="/home/jim/packages/win32" --disable-postproc
-read -n1 -r -p "Press any key to continue building FFmpeg..." key
-make -j6
-make install
-cd ..
obs-studio-26.1.0.tar.xz/CI/util/win64.sh
Deleted
-#!/bin/bash
-
-cd x264
-make clean
-LDFLAGS="-static-libgcc" ./configure --enable-shared --enable-win32thread --disable-avs --disable-ffms --disable-gpac --disable-interlaced --disable-lavf --cross-prefix=x86_64-w64-mingw32- --host=x86_64-pc-mingw32 --prefix="/home/jim/packages/win64"
-make -j6 fprofiled VIDS="CITY_704x576_60_orig_01.yuv"
-make install
-x86_64-w64-mingw32-dlltool -z /home/jim/packages/win64/bin/x264.orig.def --export-all-symbols /home/jim/packages/win64/bin/libx264-148.dll
-grep "EXPORTS\|x264" /home/jim/packages/win64/bin/x264.orig.def > /home/jim/packages/win64/bin/x264.def
-rm -f /home/jim/packages/win64/bin/x264.orig.def
-sed -i -e "/\\t.*DATA/d" -e "/\\t\".*/d" -e "s/\s@.*//" /home/jim/packages/win64/bin/x264.def
-x86_64-w64-mingw32-dlltool -m i386:x86-64 -d /home/jim/packages/win64/bin/x264.def -l /home/jim/packages/win64/bin/x264.lib -D /home/jim/packages/win64/bin/libx264-148.dll
-cd ..
-
-cd opus
-make clean
-LDFLAGS="-static-libgcc" ./configure -host=x86_64-w64-mingw32 --prefix="/home/jim/packages/win64" --enable-shared
-make -j6
-make install
-cd ..
-
-cd zlib/build64
-make clean
-cmake .. -DCMAKE_SYSTEM_NAME=Windows -DCMAKE_C_COMPILER=x86_64-w64-mingw32-gcc -DCMAKE_INSTALL_PREFIX=/home/jim/packages/win64 -DCMAKE_RC_COMPILER=x86_64-w64-mingw32-windres -DCMAKE_SHARED_LINKER_FLAGS="-static-libgcc"
-make -j6
-make install
-mv ../../win64/lib/libzlib.dll.a ../../win64/lib/libz.dll.a
-mv ../../win64/lib/libzlibstatic.a ../../win64/lib/libz.a
-cp ../win64/zlib.def /home/jim/packages/win64/bin
-x86_64-w64-mingw32-dlltool -m i386:x86-64 -d ../win64/zlib.def -l /home/jim/packages/win64/bin/zlib.lib -D /home/jim/packages/win64/bin/zlib.dll
-cd ../..
-
-cd libpng
-make clean
-PKG_CONFIG_PATH="/home/jim/packages/win64/lib/pkgconfig" LDFLAGS="-L/home/jim/packages/win64/lib" CPPFLAGS="-I/home/jim/packages/win64/include" ./configure -host=x86_64-w64-mingw32 --prefix="/home/jim/packages/win64" --enable-shared
-make -j6
-make install
-cd ..
-
-cd libogg
-make clean
-PKG_CONFIG_PATH="/home/jim/packages/win64/lib/pkgconfig" LDFLAGS="-L/home/jim/packages/win64/lib -static-libgcc" CPPFLAGS="-I/home/jim/packages/win64/include" ./configure -host=x86_64-w64-mingw32 --prefix="/home/jim/packages/win64" --enable-shared
-make -j6
-make install
-cd ..
-
-cd libvorbis
-make clean
-PKG_CONFIG_PATH="/home/jim/packages/win64/lib/pkgconfig" LDFLAGS="-L/home/jim/packages/win64/lib -static-libgcc" CPPFLAGS="-I/home/jim/packages/win64/include" ./configure -host=x86_64-w64-mingw32 --prefix="/home/jim/packages/win64" --enable-shared --with-ogg="/home/jim/packages/win64"
-make -j6
-make install
-cd ..
-
-cd libvpxbuild
-make clean
-PKG_CONFIG_PATH="/home/jim/packages/win64/lib/pkgconfig" CROSS=x86_64-w64-mingw32- LDFLAGS="-static-libgcc" ../libvpx/configure --prefix=/home/jim/packages/win64 --enable-vp8 --enable-vp9 --disable-docs --disable-examples --enable-shared --disable-static --enable-runtime-cpu-detect --enable-realtime-only --disable-install-bins --disable-install-docs --disable-unit-tests --target=x86_64-win64-gcc
-make -j6
-make install
-x86_64-w64-mingw32-dlltool -m i386:x86-64 -d libvpx.def -l /home/jim/packages/win64/bin/vpx.lib -D /home/jim/packages/win64/bin/libvpx-1.dll
-cd ..
-
-cd ffmpeg
-make clean
-cp /media/sf_linux/nvEncodeAPI.h /home/jim/packages/win64/include
-PKG_CONFIG_PATH="/home/jim/packages/win64/lib/pkgconfig" LDFLAGS="-L/home/jim/packages/win64/lib" CPPFLAGS="-I/home/jim/packages/win64/include" ./configure --enable-memalign-hack --enable-gpl --disable-doc --arch=x86_64 --enable-shared --enable-nvenc --enable-libx264 --enable-libopus --enable-libvorbis --enable-libvpx --disable-debug --cross-prefix=x86_64-w64-mingw32- --target-os=mingw32 --pkg-config=pkg-config --prefix="/home/jim/packages/win64" --disable-postproc
-read -n1 -r -p "Press any key to continue building FFmpeg..." key
-make -j6
-make install
-cd ..
obs-studio-26.1.0.tar.xz/libobs/util/simde/mmx.h
Deleted
-/* SPDX-License-Identifier: MIT
- *
- * Permission is hereby granted, free of charge, to any person
- * obtaining a copy of this software and associated documentation
- * files (the "Software"), to deal in the Software without
- * restriction, including without limitation the rights to use, copy,
- * modify, merge, publish, distribute, sublicense, and/or sell copies
- * of the Software, and to permit persons to whom the Software is
- * furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice shall be
- * included in all copies or substantial portions of the Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
- * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
- * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
- * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
- * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
- * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
- * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
- * SOFTWARE.
- *
- * Copyright:
- * 2017-2020 Evan Nemerson <evan@nemerson.com>
- */
-
-#if !defined(SIMDE_X86_MMX_H)
-#define SIMDE_X86_MMX_H
-
-#include "simde-common.h"
-
-#if !defined(SIMDE_X86_MMX_NATIVE) && defined(SIMDE_ENABLE_NATIVE_ALIASES)
-#define SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES
-#endif
-
-HEDLEY_DIAGNOSTIC_PUSH
-SIMDE_DISABLE_UNWANTED_DIAGNOSTICS
-
-#if defined(SIMDE_X86_MMX_NATIVE)
-#define SIMDE_X86_MMX_USE_NATIVE_TYPE
-#elif defined(SIMDE_X86_SSE_NATIVE)
-#define SIMDE_X86_MMX_USE_NATIVE_TYPE
-#endif
-
-#if defined(SIMDE_X86_MMX_USE_NATIVE_TYPE)
-#include <mmintrin.h>
-#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
-#include <arm_neon.h>
-#endif
-
-#include <stdint.h>
-#include <limits.h>
-
-SIMDE_BEGIN_DECLS_
-
-typedef union {
-#if defined(SIMDE_VECTOR_SUBSCRIPT)
- SIMDE_ALIGN(8) int8_t i8 SIMDE_VECTOR(8) SIMDE_MAY_ALIAS;
- SIMDE_ALIGN(8) int16_t i16 SIMDE_VECTOR(8) SIMDE_MAY_ALIAS;
- SIMDE_ALIGN(8) int32_t i32 SIMDE_VECTOR(8) SIMDE_MAY_ALIAS;
- SIMDE_ALIGN(8) int64_t i64 SIMDE_VECTOR(8) SIMDE_MAY_ALIAS;
- SIMDE_ALIGN(8) uint8_t u8 SIMDE_VECTOR(8) SIMDE_MAY_ALIAS;
- SIMDE_ALIGN(8) uint16_t u16 SIMDE_VECTOR(8) SIMDE_MAY_ALIAS;
- SIMDE_ALIGN(8) uint32_t u32 SIMDE_VECTOR(8) SIMDE_MAY_ALIAS;
- SIMDE_ALIGN(8) uint64_t u64 SIMDE_VECTOR(8) SIMDE_MAY_ALIAS;
- SIMDE_ALIGN(8) simde_float32 f32 SIMDE_VECTOR(8) SIMDE_MAY_ALIAS;
- SIMDE_ALIGN(8) int_fast32_t i32f SIMDE_VECTOR(8) SIMDE_MAY_ALIAS;
- SIMDE_ALIGN(8) uint_fast32_t u32f SIMDE_VECTOR(8) SIMDE_MAY_ALIAS;
-#else
- SIMDE_ALIGN(8) int8_t i8[8];
- SIMDE_ALIGN(8) int16_t i16[4];
- SIMDE_ALIGN(8) int32_t i32[2];
- SIMDE_ALIGN(8) int64_t i64[1];
- SIMDE_ALIGN(8) uint8_t u8[8];
- SIMDE_ALIGN(8) uint16_t u16[4];
- SIMDE_ALIGN(8) uint32_t u32[2];
- SIMDE_ALIGN(8) uint64_t u64[1];
- SIMDE_ALIGN(8) simde_float32 f32[2];
- SIMDE_ALIGN(8) int_fast32_t i32f[8 / sizeof(int_fast32_t)];
- SIMDE_ALIGN(8) uint_fast32_t u32f[8 / sizeof(uint_fast32_t)];
-#endif
-
-#if defined(SIMDE_X86_MMX_USE_NATIVE_TYPE)
- __m64 n;
-#endif
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- int8x8_t neon_i8;
- int16x4_t neon_i16;
- int32x2_t neon_i32;
- int64x1_t neon_i64;
- uint8x8_t neon_u8;
- uint16x4_t neon_u16;
- uint32x2_t neon_u32;
- uint64x1_t neon_u64;
- float32x2_t neon_f32;
-#endif
-} simde__m64_private;
-
-#if defined(SIMDE_X86_MMX_USE_NATIVE_TYPE)
-typedef __m64 simde__m64;
-#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
-typedef int32x2_t simde__m64;
-#elif defined(SIMDE_VECTOR_SUBSCRIPT)
-typedef int32_t simde__m64 SIMDE_ALIGN(8) SIMDE_VECTOR(8) SIMDE_MAY_ALIAS;
-#else
-typedef simde__m64_private simde__m64;
-#endif
-
-#if !defined(SIMDE_X86_MMX_USE_NATIVE_TYPE) && \
- defined(SIMDE_ENABLE_NATIVE_ALIASES)
-#define SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES
-typedef simde__m64 __m64;
-#endif
-
-HEDLEY_STATIC_ASSERT(8 == sizeof(simde__m64), "__m64 size incorrect");
-HEDLEY_STATIC_ASSERT(8 == sizeof(simde__m64_private), "__m64 size incorrect");
-#if defined(SIMDE_CHECK_ALIGNMENT) && defined(SIMDE_ALIGN_OF)
-HEDLEY_STATIC_ASSERT(SIMDE_ALIGN_OF(simde__m64) == 8,
- "simde__m64 is not 8-byte aligned");
-HEDLEY_STATIC_ASSERT(SIMDE_ALIGN_OF(simde__m64_private) == 8,
- "simde__m64_private is not 8-byte aligned");
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde__m64_from_private(simde__m64_private v)
-{
- simde__m64 r;
- simde_memcpy(&r, &v, sizeof(r));
- return r;
-}
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64_private simde__m64_to_private(simde__m64 v)
-{
- simde__m64_private r;
- simde_memcpy(&r, &v, sizeof(r));
- return r;
-}
-
-#define SIMDE_X86_GENERATE_CONVERSION_FUNCTION(simde_type, source_type, isax, \
- fragment) \
- SIMDE_FUNCTION_ATTRIBUTES \
- simde__##simde_type simde__##simde_type##_from_##isax##_##fragment( \
- source_type value) \
- { \
- simde__##simde_type##_private r_; \
- r_.isax##_##fragment = value; \
- return simde__##simde_type##_from_private(r_); \
- } \
- \
- SIMDE_FUNCTION_ATTRIBUTES \
- source_type simde__##simde_type##_to_##isax##_##fragment( \
- simde__##simde_type value) \
- { \
- simde__##simde_type##_private r_ = \
- simde__##simde_type##_to_private(value); \
- return r_.isax##_##fragment; \
- }
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
-SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m64, int8x8_t, neon, i8)
-SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m64, int16x4_t, neon, i16)
-SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m64, int32x2_t, neon, i32)
-SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m64, int64x1_t, neon, i64)
-SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m64, uint8x8_t, neon, u8)
-SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m64, uint16x4_t, neon, u16)
-SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m64, uint32x2_t, neon, u32)
-SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m64, uint64x1_t, neon, u64)
-SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m64, float32x2_t, neon, f32)
-#endif /* defined(SIMDE_ARM_NEON_A32V7_NATIVE) */
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_add_pi8(simde__m64 a, simde__m64 b)
-{
-#if defined(SIMDE_X86_MMX_NATIVE)
- return _mm_add_pi8(a, b);
-#else
- simde__m64_private r_;
- simde__m64_private a_ = simde__m64_to_private(a);
- simde__m64_private b_ = simde__m64_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i8 = vadd_s8(a_.neon_i8, b_.neon_i8);
-#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
- r_.i8 = a_.i8 + b_.i8;
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i8) / sizeof(r_.i8[0])); i++) {
- r_.i8[i] = a_.i8[i] + b_.i8[i];
- }
-#endif
-
- return simde__m64_from_private(r_);
-#endif
-}
-#define simde_m_paddb(a, b) simde_mm_add_pi8(a, b)
-#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
-#define _mm_add_pi8(a, b) simde_mm_add_pi8(a, b)
-#define _m_paddb(a, b) simde_m_paddb(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_add_pi16(simde__m64 a, simde__m64 b)
-{
-#if defined(SIMDE_X86_MMX_NATIVE)
- return _mm_add_pi16(a, b);
-#else
- simde__m64_private r_;
- simde__m64_private a_ = simde__m64_to_private(a);
- simde__m64_private b_ = simde__m64_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i16 = vadd_s16(a_.neon_i16, b_.neon_i16);
-#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
- r_.i16 = a_.i16 + b_.i16;
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i16) / sizeof(r_.i16[0])); i++) {
- r_.i16[i] = a_.i16[i] + b_.i16[i];
- }
-#endif
-
- return simde__m64_from_private(r_);
-#endif
-}
-#define simde_m_paddw(a, b) simde_mm_add_pi16(a, b)
-#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
-#define _mm_add_pi16(a, b) simde_mm_add_pi16(a, b)
-#define _m_paddw(a, b) simde_mm_add_pi16(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_add_pi32(simde__m64 a, simde__m64 b)
-{
-#if defined(SIMDE_X86_MMX_NATIVE)
- return _mm_add_pi32(a, b);
-#else
- simde__m64_private r_;
- simde__m64_private a_ = simde__m64_to_private(a);
- simde__m64_private b_ = simde__m64_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i32 = vadd_s32(a_.neon_i32, b_.neon_i32);
-#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
- r_.i32 = a_.i32 + b_.i32;
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i32) / sizeof(r_.i32[0])); i++) {
- r_.i32[i] = a_.i32[i] + b_.i32[i];
- }
-#endif
-
- return simde__m64_from_private(r_);
-#endif
-}
-#define simde_m_paddd(a, b) simde_mm_add_pi32(a, b)
-#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
-#define _mm_add_pi32(a, b) simde_mm_add_pi32(a, b)
-#define _m_paddd(a, b) simde_mm_add_pi32(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_adds_pi8(simde__m64 a, simde__m64 b)
-{
-#if defined(SIMDE_X86_MMX_NATIVE)
- return _mm_adds_pi8(a, b);
-#else
- simde__m64_private r_, a_ = simde__m64_to_private(a),
- b_ = simde__m64_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i8 = vqadd_s8(a_.neon_i8, b_.neon_i8);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i8) / sizeof(r_.i8[0])); i++) {
- if ((((b_.i8[i]) > 0) &&
- ((a_.i8[i]) > (INT8_MAX - (b_.i8[i]))))) {
- r_.i8[i] = INT8_MAX;
- } else if ((((b_.i8[i]) < 0) &&
- ((a_.i8[i]) < (INT8_MIN - (b_.i8[i]))))) {
- r_.i8[i] = INT8_MIN;
- } else {
- r_.i8[i] = (a_.i8[i]) + (b_.i8[i]);
- }
- }
-#endif
-
- return simde__m64_from_private(r_);
-#endif
-}
-#define simde_m_paddsb(a, b) simde_mm_adds_pi8(a, b)
-#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
-#define _mm_adds_pi8(a, b) simde_mm_adds_pi8(a, b)
-#define _m_paddsb(a, b) simde_mm_adds_pi8(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_adds_pu8(simde__m64 a, simde__m64 b)
-{
-#if defined(SIMDE_X86_MMX_NATIVE)
- return _mm_adds_pu8(a, b);
-#else
- simde__m64_private r_;
- simde__m64_private a_ = simde__m64_to_private(a);
- simde__m64_private b_ = simde__m64_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_u8 = vqadd_u8(a_.neon_u8, b_.neon_u8);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.u8) / sizeof(r_.u8[0])); i++) {
- const uint_fast16_t x =
- HEDLEY_STATIC_CAST(uint_fast16_t, a_.u8[i]) +
- HEDLEY_STATIC_CAST(uint_fast16_t, b_.u8[i]);
- if (x > UINT8_MAX)
- r_.u8[i] = UINT8_MAX;
- else
- r_.u8[i] = HEDLEY_STATIC_CAST(uint8_t, x);
- }
-#endif
-
- return simde__m64_from_private(r_);
-#endif
-}
-#define simde_m_paddusb(a, b) simde_mm_adds_pu8(a, b)
-#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
-#define _mm_adds_pu8(a, b) simde_mm_adds_pu8(a, b)
-#define _m_paddusb(a, b) simde_mm_adds_pu8(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_adds_pi16(simde__m64 a, simde__m64 b)
-{
-#if defined(SIMDE_X86_MMX_NATIVE)
- return _mm_adds_pi16(a, b);
-#else
- simde__m64_private r_;
- simde__m64_private a_ = simde__m64_to_private(a);
- simde__m64_private b_ = simde__m64_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i16 = vqadd_s16(a_.neon_i16, b_.neon_i16);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i16) / sizeof(r_.i16[0])); i++) {
- if ((((b_.i16[i]) > 0) &&
- ((a_.i16[i]) > (INT16_MAX - (b_.i16[i]))))) {
- r_.i16[i] = INT16_MAX;
- } else if ((((b_.i16[i]) < 0) &&
- ((a_.i16[i]) < (SHRT_MIN - (b_.i16[i]))))) {
- r_.i16[i] = SHRT_MIN;
- } else {
- r_.i16[i] = (a_.i16[i]) + (b_.i16[i]);
- }
- }
-#endif
-
- return simde__m64_from_private(r_);
-#endif
-}
-#define simde_m_paddsw(a, b) simde_mm_adds_pi16(a, b)
-#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
-#define _mm_adds_pi16(a, b) simde_mm_adds_pi16(a, b)
-#define _m_paddsw(a, b) simde_mm_adds_pi16(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_adds_pu16(simde__m64 a, simde__m64 b)
-{
-#if defined(SIMDE_X86_MMX_NATIVE)
- return _mm_adds_pu16(a, b);
-#else
- simde__m64_private r_;
- simde__m64_private a_ = simde__m64_to_private(a);
- simde__m64_private b_ = simde__m64_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_u16 = vqadd_u16(a_.neon_u16, b_.neon_u16);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i16) / sizeof(r_.i16[0])); i++) {
- const uint32_t x = a_.u16[i] + b_.u16[i];
- if (x > UINT16_MAX)
- r_.u16[i] = UINT16_MAX;
- else
- r_.u16[i] = HEDLEY_STATIC_CAST(uint16_t, x);
- }
-#endif
-
- return simde__m64_from_private(r_);
-#endif
-}
-#define simde_m_paddusw(a, b) simde_mm_adds_pu16(a, b)
-#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
-#define _mm_adds_pu16(a, b) simde_mm_adds_pu16(a, b)
-#define _m_paddusw(a, b) simde_mm_adds_pu16(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_and_si64(simde__m64 a, simde__m64 b)
-{
-#if defined(SIMDE_X86_MMX_NATIVE)
- return _mm_and_si64(a, b);
-#else
- simde__m64_private r_;
- simde__m64_private a_ = simde__m64_to_private(a);
- simde__m64_private b_ = simde__m64_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i32 = vand_s32(a_.neon_i32, b_.neon_i32);
-#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
- r_.i64 = a_.i64 & b_.i64;
-#else
- r_.i64[0] = a_.i64[0] & b_.i64[0];
-#endif
-
- return simde__m64_from_private(r_);
-#endif
-}
-#define simde_m_pand(a, b) simde_mm_and_si64(a, b)
-#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
-#define _mm_and_si64(a, b) simde_mm_and_si64(a, b)
-#define _m_pand(a, b) simde_mm_and_si64(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_andnot_si64(simde__m64 a, simde__m64 b)
-{
-#if defined(SIMDE_X86_MMX_NATIVE)
- return _mm_andnot_si64(a, b);
-#else
- simde__m64_private r_;
- simde__m64_private a_ = simde__m64_to_private(a);
- simde__m64_private b_ = simde__m64_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i32 = vbic_s32(b_.neon_i32, a_.neon_i32);
-#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
- r_.i32f = ~a_.i32f & b_.i32f;
-#else
- r_.u64[0] = (~(a_.u64[0])) & (b_.u64[0]);
-#endif
-
- return simde__m64_from_private(r_);
-#endif
-}
-#define simde_m_pandn(a, b) simde_mm_andnot_si64(a, b)
-#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
-#define _mm_andnot_si64(a, b) simde_mm_andnot_si64(a, b)
-#define _m_pandn(a, b) simde_mm_andnot_si64(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_cmpeq_pi8(simde__m64 a, simde__m64 b)
-{
-#if defined(SIMDE_X86_MMX_NATIVE)
- return _mm_cmpeq_pi8(a, b);
-#else
- simde__m64_private r_;
- simde__m64_private a_ = simde__m64_to_private(a);
- simde__m64_private b_ = simde__m64_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i8 = vreinterpret_s8_u8(vceq_s8(a_.neon_i8, b_.neon_i8));
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i8) / sizeof(r_.i8[0])); i++) {
- r_.i8[i] = (a_.i8[i] == b_.i8[i]) ? ~INT8_C(0) : INT8_C(0);
- }
-#endif
-
- return simde__m64_from_private(r_);
-#endif
-}
-#define simde_m_pcmpeqb(a, b) simde_mm_cmpeq_pi8(a, b)
-#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
-#define _mm_cmpeq_pi8(a, b) simde_mm_cmpeq_pi8(a, b)
-#define _m_pcmpeqb(a, b) simde_mm_cmpeq_pi8(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_cmpeq_pi16(simde__m64 a, simde__m64 b)
-{
-#if defined(SIMDE_X86_MMX_NATIVE)
- return _mm_cmpeq_pi16(a, b);
-#else
- simde__m64_private r_;
- simde__m64_private a_ = simde__m64_to_private(a);
- simde__m64_private b_ = simde__m64_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i16 = vreinterpret_s16_u16(vceq_s16(a_.neon_i16, b_.neon_i16));
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i16) / sizeof(r_.i16[0])); i++) {
- r_.i16[i] = (a_.i16[i] == b_.i16[i]) ? ~INT16_C(0) : INT16_C(0);
- }
-#endif
-
- return simde__m64_from_private(r_);
-#endif
-}
-#define simde_m_pcmpeqw(a, b) simde_mm_cmpeq_pi16(a, b)
-#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
-#define _mm_cmpeq_pi16(a, b) simde_mm_cmpeq_pi16(a, b)
-#define _m_pcmpeqw(a, b) simde_mm_cmpeq_pi16(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_cmpeq_pi32(simde__m64 a, simde__m64 b)
-{
-#if defined(SIMDE_X86_MMX_NATIVE)
- return _mm_cmpeq_pi32(a, b);
-#else
- simde__m64_private r_;
- simde__m64_private a_ = simde__m64_to_private(a);
- simde__m64_private b_ = simde__m64_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i32 = vreinterpret_s32_u32(vceq_s32(a_.neon_i32, b_.neon_i32));
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i32) / sizeof(r_.i32[0])); i++) {
- r_.i32[i] = (a_.i32[i] == b_.i32[i]) ? ~INT32_C(0) : INT32_C(0);
- }
-#endif
-
- return simde__m64_from_private(r_);
-#endif
-}
-#define simde_m_pcmpeqd(a, b) simde_mm_cmpeq_pi32(a, b)
-#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
-#define _mm_cmpeq_pi32(a, b) simde_mm_cmpeq_pi32(a, b)
-#define _m_pcmpeqd(a, b) simde_mm_cmpeq_pi32(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_cmpgt_pi8(simde__m64 a, simde__m64 b)
-{
-#if defined(SIMDE_X86_MMX_NATIVE)
- return _mm_cmpgt_pi8(a, b);
-#else
- simde__m64_private r_;
- simde__m64_private a_ = simde__m64_to_private(a);
- simde__m64_private b_ = simde__m64_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i8 = vreinterpret_s8_u8(vcgt_s8(a_.neon_i8, b_.neon_i8));
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i8) / sizeof(r_.i8[0])); i++) {
- r_.i8[i] = (a_.i8[i] > b_.i8[i]) ? ~INT8_C(0) : INT8_C(0);
- }
-#endif
-
- return simde__m64_from_private(r_);
-#endif
-}
-#define simde_m_pcmpgtb(a, b) simde_mm_cmpgt_pi8(a, b)
-#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
-#define _mm_cmpgt_pi8(a, b) simde_mm_cmpgt_pi8(a, b)
-#define _m_pcmpgtb(a, b) simde_mm_cmpgt_pi8(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_cmpgt_pi16(simde__m64 a, simde__m64 b)
-{
-#if defined(SIMDE_X86_MMX_NATIVE)
- return _mm_cmpgt_pi16(a, b);
-#else
- simde__m64_private r_;
- simde__m64_private a_ = simde__m64_to_private(a);
- simde__m64_private b_ = simde__m64_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i16 = vreinterpret_s16_u16(vcgt_s16(a_.neon_i16, b_.neon_i16));
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i16) / sizeof(r_.i16[0])); i++) {
- r_.i16[i] = (a_.i16[i] > b_.i16[i]) ? ~INT16_C(0) : INT16_C(0);
- }
-#endif
-
- return simde__m64_from_private(r_);
-#endif
-}
-#define simde_m_pcmpgtw(a, b) simde_mm_cmpgt_pi16(a, b)
-#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
-#define _mm_cmpgt_pi16(a, b) simde_mm_cmpgt_pi16(a, b)
-#define _m_pcmpgtw(a, b) simde_mm_cmpgt_pi16(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_cmpgt_pi32(simde__m64 a, simde__m64 b)
-{
-#if defined(SIMDE_X86_MMX_NATIVE)
- return _mm_cmpgt_pi32(a, b);
-#else
- simde__m64_private r_;
- simde__m64_private a_ = simde__m64_to_private(a);
- simde__m64_private b_ = simde__m64_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i32 = vreinterpret_s32_u32(vcgt_s32(a_.neon_i32, b_.neon_i32));
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i32) / sizeof(r_.i32[0])); i++) {
- r_.i32[i] = (a_.i32[i] > b_.i32[i]) ? ~INT32_C(0) : INT32_C(0);
- }
-#endif
-
- return simde__m64_from_private(r_);
-#endif
-}
-#define simde_m_pcmpgtd(a, b) simde_mm_cmpgt_pi32(a, b)
-#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
-#define _mm_cmpgt_pi32(a, b) simde_mm_cmpgt_pi32(a, b)
-#define _m_pcmpgtd(a, b) simde_mm_cmpgt_pi32(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-int64_t simde_mm_cvtm64_si64(simde__m64 a)
-{
-#if defined(SIMDE_X86_MMX_NATIVE) && defined(SIMDE_ARCH_AMD64) && \
- !defined(__PGI)
- return _mm_cvtm64_si64(a);
-#else
- simde__m64_private a_ = simde__m64_to_private(a);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- return vget_lane_s64(a_.neon_i64, 0);
-#else
- return a_.i64[0];
-#endif
-#endif
-}
-#define simde_m_to_int64(a) simde_mm_cvtm64_si64(a)
-#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
-#define _mm_cvtm64_si64(a) simde_mm_cvtm64_si64(a)
-#define _m_to_int64(a) simde_mm_cvtm64_si64(a)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_cvtsi32_si64(int32_t a)
-{
-#if defined(SIMDE_X86_MMX_NATIVE)
- return _mm_cvtsi32_si64(a);
-#else
- simde__m64_private r_;
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- const int32_t av[sizeof(r_.neon_i32) / sizeof(r_.neon_i32[0])] = {a, 0};
- r_.neon_i32 = vld1_s32(av);
-#else
- r_.i32[0] = a;
- r_.i32[1] = 0;
-#endif
-
- return simde__m64_from_private(r_);
-#endif
-}
-#define simde_m_from_int(a) simde_mm_cvtsi32_si64(a)
-#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
-#define _mm_cvtsi32_si64(a) simde_mm_cvtsi32_si64(a)
-#define _m_from_int(a) simde_mm_cvtsi32_si64(a)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_cvtsi64_m64(int64_t a)
-{
-#if defined(SIMDE_X86_MMX_NATIVE) && defined(SIMDE_ARCH_AMD64) && \
- !defined(__PGI)
- return _mm_cvtsi64_m64(a);
-#else
- simde__m64_private r_;
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i64 = vld1_s64(&a);
-#else
- r_.i64[0] = a;
-#endif
-
- return simde__m64_from_private(r_);
-#endif
-}
-#define simde_m_from_int64(a) simde_mm_cvtsi64_m64(a)
-#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
-#define _mm_cvtsi64_m64(a) simde_mm_cvtsi64_m64(a)
-#define _m_from_int64(a) simde_mm_cvtsi64_m64(a)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-int32_t simde_mm_cvtsi64_si32(simde__m64 a)
-{
-#if defined(SIMDE_X86_MMX_NATIVE)
- return _mm_cvtsi64_si32(a);
-#else
- simde__m64_private a_ = simde__m64_to_private(a);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- return vget_lane_s32(a_.neon_i32, 0);
-#else
- return a_.i32[0];
-#endif
-#endif
-}
-#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
-#define _mm_cvtsi64_si32(a) simde_mm_cvtsi64_si32(a)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-void simde_mm_empty(void)
-{
-#if defined(SIMDE_X86_MMX_NATIVE)
- _mm_empty();
-#else
-#endif
-}
-#define simde_m_empty() simde_mm_empty()
-#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
-#define _mm_empty() simde_mm_empty()
-#define _m_empty() simde_mm_empty()
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_madd_pi16(simde__m64 a, simde__m64 b)
-{
-#if defined(SIMDE_X86_MMX_NATIVE)
- return _mm_madd_pi16(a, b);
-#else
- simde__m64_private r_;
- simde__m64_private a_ = simde__m64_to_private(a);
- simde__m64_private b_ = simde__m64_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- int32x4_t i1 = vmull_s16(a_.neon_i16, b_.neon_i16);
- r_.neon_i32 = vpadd_s32(vget_low_s32(i1), vget_high_s32(i1));
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i16) / sizeof(r_.i16[0])); i += 2) {
- r_.i32[i / 2] = (a_.i16[i] * b_.i16[i]) +
- (a_.i16[i + 1] * b_.i16[i + 1]);
- }
-#endif
-
- return simde__m64_from_private(r_);
-#endif
-}
-#define simde_m_pmaddwd(a, b) simde_mm_madd_pi16(a, b)
-#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
-#define _mm_madd_pi16(a, b) simde_mm_madd_pi16(a, b)
-#define _m_pmaddwd(a, b) simde_mm_madd_pi16(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_mulhi_pi16(simde__m64 a, simde__m64 b)
-{
-#if defined(SIMDE_X86_MMX_NATIVE)
- return _mm_mulhi_pi16(a, b);
-#else
- simde__m64_private r_;
- simde__m64_private a_ = simde__m64_to_private(a);
- simde__m64_private b_ = simde__m64_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- const int32x4_t t1 = vmull_s16(a_.neon_i16, b_.neon_i16);
- const uint32x4_t t2 = vshrq_n_u32(vreinterpretq_u32_s32(t1), 16);
- const uint16x4_t t3 = vmovn_u32(t2);
- r_.neon_i16 = vreinterpret_s16_u16(t3);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i16) / sizeof(r_.i16[0])); i++) {
- r_.i16[i] = HEDLEY_STATIC_CAST(int16_t,
- ((a_.i16[i] * b_.i16[i]) >> 16));
- }
-#endif
-
- return simde__m64_from_private(r_);
-#endif
-}
-#define simde_m_pmulhw(a, b) simde_mm_mulhi_pi16(a, b)
-#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
-#define _mm_mulhi_pi16(a, b) simde_mm_mulhi_pi16(a, b)
-#define _m_pmulhw(a, b) simde_mm_mulhi_pi16(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_mullo_pi16(simde__m64 a, simde__m64 b)
-{
-#if defined(SIMDE_X86_MMX_NATIVE)
- return _mm_mullo_pi16(a, b);
-#else
- simde__m64_private r_;
- simde__m64_private a_ = simde__m64_to_private(a);
- simde__m64_private b_ = simde__m64_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- const int32x4_t t1 = vmull_s16(a_.neon_i16, b_.neon_i16);
- const uint16x4_t t2 = vmovn_u32(vreinterpretq_u32_s32(t1));
- r_.neon_i16 = vreinterpret_s16_u16(t2);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i16) / sizeof(r_.i16[0])); i++) {
- r_.i16[i] = HEDLEY_STATIC_CAST(
- int16_t, ((a_.i16[i] * b_.i16[i]) & 0xffff));
- }
-#endif
-
- return simde__m64_from_private(r_);
-#endif
-}
-#define simde_m_pmullw(a, b) simde_mm_mullo_pi16(a, b)
-#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
-#define _mm_mullo_pi16(a, b) simde_mm_mullo_pi16(a, b)
-#define _m_pmullw(a, b) simde_mm_mullo_pi16(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_or_si64(simde__m64 a, simde__m64 b)
-{
-#if defined(SIMDE_X86_MMX_NATIVE)
- return _mm_or_si64(a, b);
-#else
- simde__m64_private r_;
- simde__m64_private a_ = simde__m64_to_private(a);
- simde__m64_private b_ = simde__m64_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i32 = vorr_s32(a_.neon_i32, b_.neon_i32);
-#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
- r_.i64 = a_.i64 | b_.i64;
-#else
- r_.i64[0] = a_.i64[0] | b_.i64[0];
-#endif
-
- return simde__m64_from_private(r_);
-#endif
-}
-#define simde_m_por(a, b) simde_mm_or_si64(a, b)
-#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
-#define _mm_or_si64(a, b) simde_mm_or_si64(a, b)
-#define _m_por(a, b) simde_mm_or_si64(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_packs_pi16(simde__m64 a, simde__m64 b)
-{
-#if defined(SIMDE_X86_MMX_NATIVE)
- return _mm_packs_pi16(a, b);
-#else
- simde__m64_private r_;
- simde__m64_private a_ = simde__m64_to_private(a);
- simde__m64_private b_ = simde__m64_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i8 = vqmovn_s16(vcombine_s16(a_.neon_i16, b_.neon_i16));
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i16) / sizeof(r_.i16[0])); i++) {
- if (a_.i16[i] < INT8_MIN) {
- r_.i8[i] = INT8_MIN;
- } else if (a_.i16[i] > INT8_MAX) {
- r_.i8[i] = INT8_MAX;
- } else {
- r_.i8[i] = HEDLEY_STATIC_CAST(int8_t, a_.i16[i]);
- }
- }
-
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i16) / sizeof(r_.i16[0])); i++) {
- if (b_.i16[i] < INT8_MIN) {
- r_.i8[i + 4] = INT8_MIN;
- } else if (b_.i16[i] > INT8_MAX) {
- r_.i8[i + 4] = INT8_MAX;
- } else {
- r_.i8[i + 4] = HEDLEY_STATIC_CAST(int8_t, b_.i16[i]);
- }
- }
-#endif
-
- return simde__m64_from_private(r_);
-#endif
-}
-#define simde_m_packsswb(a, b) simde_mm_packs_pi16(a, b)
-#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
-#define _mm_packs_pi16(a, b) simde_mm_packs_pi16(a, b)
-#define _m_packsswb(a, b) simde_mm_packs_pi16(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_packs_pi32(simde__m64 a, simde__m64 b)
-{
-#if defined(SIMDE_X86_MMX_NATIVE)
- return _mm_packs_pi32(a, b);
-#else
- simde__m64_private r_;
- simde__m64_private a_ = simde__m64_to_private(a);
- simde__m64_private b_ = simde__m64_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i16 = vqmovn_s32(vcombine_s32(a_.neon_i32, b_.neon_i32));
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (8 / sizeof(a_.i32[0])); i++) {
- if (a_.i32[i] < SHRT_MIN) {
- r_.i16[i] = SHRT_MIN;
- } else if (a_.i32[i] > INT16_MAX) {
- r_.i16[i] = INT16_MAX;
- } else {
- r_.i16[i] = HEDLEY_STATIC_CAST(int16_t, a_.i32[i]);
- }
- }
-
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (8 / sizeof(b_.i32[0])); i++) {
- if (b_.i32[i] < SHRT_MIN) {
- r_.i16[i + 2] = SHRT_MIN;
- } else if (b_.i32[i] > INT16_MAX) {
- r_.i16[i + 2] = INT16_MAX;
- } else {
- r_.i16[i + 2] = HEDLEY_STATIC_CAST(int16_t, b_.i32[i]);
- }
- }
-#endif
-
- return simde__m64_from_private(r_);
-#endif
-}
-#define simde_m_packssdw(a, b) simde_mm_packs_pi32(a, b)
-#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
-#define _mm_packs_pi32(a, b) simde_mm_packs_pi32(a, b)
-#define _m_packssdw(a, b) simde_mm_packs_pi32(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_packs_pu16(simde__m64 a, simde__m64 b)
-{
-#if defined(SIMDE_X86_MMX_NATIVE)
- return _mm_packs_pu16(a, b);
-#else
- simde__m64_private r_;
- simde__m64_private a_ = simde__m64_to_private(a);
- simde__m64_private b_ = simde__m64_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
- const int16x8_t t1 = vcombine_s16(a_.neon_i16, b_.neon_i16);
-
- /* Set elements which are < 0 to 0 */
- const int16x8_t t2 =
- vandq_s16(t1, vreinterpretq_s16_u16(vcgezq_s16(t1)));
-
- /* Vector with all s16 elements set to UINT8_MAX */
- const int16x8_t vmax = vmovq_n_s16((int16_t)UINT8_MAX);
-
- /* Elements which are within the acceptable range */
- const int16x8_t le_max =
- vandq_s16(t2, vreinterpretq_s16_u16(vcleq_s16(t2, vmax)));
- const int16x8_t gt_max =
- vandq_s16(vmax, vreinterpretq_s16_u16(vcgtq_s16(t2, vmax)));
-
- /* Final values as 16-bit integers */
- const int16x8_t values = vorrq_s16(le_max, gt_max);
-
- r_.neon_u8 = vmovn_u16(vreinterpretq_u16_s16(values));
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i16) / sizeof(r_.i16[0])); i++) {
- if (a_.i16[i] > UINT8_MAX) {
- r_.u8[i] = UINT8_MAX;
- } else if (a_.i16[i] < 0) {
- r_.u8[i] = 0;
- } else {
- r_.u8[i] = HEDLEY_STATIC_CAST(uint8_t, a_.i16[i]);
- }
- }
-
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i16) / sizeof(r_.i16[0])); i++) {
- if (b_.i16[i] > UINT8_MAX) {
- r_.u8[i + 4] = UINT8_MAX;
- } else if (b_.i16[i] < 0) {
- r_.u8[i + 4] = 0;
- } else {
- r_.u8[i + 4] = HEDLEY_STATIC_CAST(uint8_t, b_.i16[i]);
- }
- }
-#endif
-
- return simde__m64_from_private(r_);
-#endif
-}
-#define simde_m_packuswb(a, b) simde_mm_packs_pu16(a, b)
-#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
-#define _mm_packs_pu16(a, b) simde_mm_packs_pu16(a, b)
-#define _m_packuswb(a, b) simde_mm_packs_pu16(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_set_pi8(int8_t e7, int8_t e6, int8_t e5, int8_t e4,
- int8_t e3, int8_t e2, int8_t e1, int8_t e0)
-{
-#if defined(SIMDE_X86_MMX_NATIVE)
- return _mm_set_pi8(e7, e6, e5, e4, e3, e2, e1, e0);
-#else
- simde__m64_private r_;
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- const int8_t v[sizeof(r_.i8) / sizeof(r_.i8[0])] = {e0, e1, e2, e3,
- e4, e5, e6, e7};
- r_.neon_i8 = vld1_s8(v);
-#else
- r_.i8[0] = e0;
- r_.i8[1] = e1;
- r_.i8[2] = e2;
- r_.i8[3] = e3;
- r_.i8[4] = e4;
- r_.i8[5] = e5;
- r_.i8[6] = e6;
- r_.i8[7] = e7;
-#endif
-
- return simde__m64_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
-#define _mm_set_pi8(e7, e6, e5, e4, e3, e2, e1, e0) \
- simde_mm_set_pi8(e7, e6, e5, e4, e3, e2, e1, e0)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_x_mm_set_pu8(uint8_t e7, uint8_t e6, uint8_t e5, uint8_t e4,
- uint8_t e3, uint8_t e2, uint8_t e1, uint8_t e0)
-{
- simde__m64_private r_;
-
-#if defined(SIMDE_X86_MMX_NATIVE)
- r_.n = _mm_set_pi8(
- HEDLEY_STATIC_CAST(int8_t, e7), HEDLEY_STATIC_CAST(int8_t, e6),
- HEDLEY_STATIC_CAST(int8_t, e5), HEDLEY_STATIC_CAST(int8_t, e4),
- HEDLEY_STATIC_CAST(int8_t, e3), HEDLEY_STATIC_CAST(int8_t, e2),
- HEDLEY_STATIC_CAST(int8_t, e1), HEDLEY_STATIC_CAST(int8_t, e0));
-#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- const uint8_t v[sizeof(r_.u8) / sizeof(r_.u8[0])] = {e0, e1, e2, e3,
- e4, e5, e6, e7};
- r_.neon_u8 = vld1_u8(v);
-#else
- r_.u8[0] = e0;
- r_.u8[1] = e1;
- r_.u8[2] = e2;
- r_.u8[3] = e3;
- r_.u8[4] = e4;
- r_.u8[5] = e5;
- r_.u8[6] = e6;
- r_.u8[7] = e7;
-#endif
-
- return simde__m64_from_private(r_);
-}
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_set_pi16(int16_t e3, int16_t e2, int16_t e1, int16_t e0)
-{
-#if defined(SIMDE_X86_MMX_NATIVE)
- return _mm_set_pi16(e3, e2, e1, e0);
-#else
- simde__m64_private r_;
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- const int16_t v[sizeof(r_.i16) / sizeof(r_.i16[0])] = {e0, e1, e2, e3};
- r_.neon_i16 = vld1_s16(v);
-#else
- r_.i16[0] = e0;
- r_.i16[1] = e1;
- r_.i16[2] = e2;
- r_.i16[3] = e3;
-#endif
- return simde__m64_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
-#define _mm_set_pi16(e3, e2, e1, e0) simde_mm_set_pi16(e3, e2, e1, e0)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_x_mm_set_pu16(uint16_t e3, uint16_t e2, uint16_t e1,
- uint16_t e0)
-{
- simde__m64_private r_;
-
-#if defined(SIMDE_X86_MMX_NATIVE)
- r_.n = _mm_set_pi16(HEDLEY_STATIC_CAST(int16_t, e3),
- HEDLEY_STATIC_CAST(int16_t, e2),
- HEDLEY_STATIC_CAST(int16_t, e1),
- HEDLEY_STATIC_CAST(int16_t, e0));
-#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- const uint16_t v[sizeof(r_.u16) / sizeof(r_.u16[0])] = {e0, e1, e2, e3};
- r_.neon_u16 = vld1_u16(v);
-#else
- r_.u16[0] = e0;
- r_.u16[1] = e1;
- r_.u16[2] = e2;
- r_.u16[3] = e3;
-#endif
-
- return simde__m64_from_private(r_);
-}
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_x_mm_set_pu32(uint32_t e1, uint32_t e0)
-{
- simde__m64_private r_;
-
-#if defined(SIMDE_X86_MMX_NATIVE)
- r_.n = _mm_set_pi32(HEDLEY_STATIC_CAST(int32_t, e1),
- HEDLEY_STATIC_CAST(int32_t, e0));
-#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- const uint32_t v[sizeof(r_.u32) / sizeof(r_.u32[0])] = {e0, e1};
- r_.neon_u32 = vld1_u32(v);
-#else
- r_.u32[0] = e0;
- r_.u32[1] = e1;
-#endif
-
- return simde__m64_from_private(r_);
-}
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_set_pi32(int32_t e1, int32_t e0)
-{
- simde__m64_private r_;
-
-#if defined(SIMDE_X86_MMX_NATIVE)
- r_.n = _mm_set_pi32(e1, e0);
-#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- const int32_t v[sizeof(r_.i32) / sizeof(r_.i32[0])] = {e0, e1};
- r_.neon_i32 = vld1_s32(v);
-#else
- r_.i32[0] = e0;
- r_.i32[1] = e1;
-#endif
-
- return simde__m64_from_private(r_);
-}
-#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
-#define _mm_set_pi32(e1, e0) simde_mm_set_pi32(e1, e0)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_x_mm_set_pi64(int64_t e0)
-{
- simde__m64_private r_;
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- const int64_t v[sizeof(r_.i64) / sizeof(r_.i64[0])] = {e0};
- r_.neon_i64 = vld1_s64(v);
-#else
- r_.i64[0] = e0;
-#endif
-
- return simde__m64_from_private(r_);
-}
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_x_mm_set_f32x2(simde_float32 e1, simde_float32 e0)
-{
- simde__m64_private r_;
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- const simde_float32 v[sizeof(r_.f32) / sizeof(r_.f32[0])] = {e0, e1};
- r_.neon_f32 = vld1_f32(v);
-#else
- r_.f32[0] = e0;
- r_.f32[1] = e1;
-#endif
-
- return simde__m64_from_private(r_);
-}
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_set1_pi8(int8_t a)
-{
-#if defined(SIMDE_X86_MMX_NATIVE)
- return _mm_set1_pi8(a);
-#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- simde__m64_private r_;
- r_.neon_i8 = vmov_n_s8(a);
- return simde__m64_from_private(r_);
-#else
- return simde_mm_set_pi8(a, a, a, a, a, a, a, a);
-#endif
-}
-#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
-#define _mm_set1_pi8(a) simde_mm_set1_pi8(a)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_set1_pi16(int16_t a)
-{
-#if defined(SIMDE_X86_MMX_NATIVE)
- return _mm_set1_pi16(a);
-#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- simde__m64_private r_;
- r_.neon_i16 = vmov_n_s16(a);
- return simde__m64_from_private(r_);
-#else
- return simde_mm_set_pi16(a, a, a, a);
-#endif
-}
-#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
-#define _mm_set1_pi16(a) simde_mm_set1_pi16(a)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_set1_pi32(int32_t a)
-{
-#if defined(SIMDE_X86_MMX_NATIVE)
- return _mm_set1_pi32(a);
-#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- simde__m64_private r_;
- r_.neon_i32 = vmov_n_s32(a);
- return simde__m64_from_private(r_);
-#else
- return simde_mm_set_pi32(a, a);
-#endif
-}
-#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
-#define _mm_set1_pi32(a) simde_mm_set1_pi32(a)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_setr_pi8(int8_t e7, int8_t e6, int8_t e5, int8_t e4,
- int8_t e3, int8_t e2, int8_t e1, int8_t e0)
-{
-#if defined(SIMDE_X86_MMX_NATIVE)
- return _mm_setr_pi8(e7, e6, e5, e4, e3, e2, e1, e0);
-#else
- return simde_mm_set_pi8(e0, e1, e2, e3, e4, e5, e6, e7);
-#endif
-}
-#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
-#define _mm_setr_pi8(e7, e6, e5, e4, e3, e2, e1, e0) \
- simde_mm_setr_pi8(e7, e6, e5, e4, e3, e2, e1, e0)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_setr_pi16(int16_t e3, int16_t e2, int16_t e1, int16_t e0)
-{
-#if defined(SIMDE_X86_MMX_NATIVE)
- return _mm_setr_pi16(e3, e2, e1, e0);
-#else
- return simde_mm_set_pi16(e0, e1, e2, e3);
-#endif
-}
-#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
-#define _mm_setr_pi16(e3, e2, e1, e0) simde_mm_setr_pi16(e3, e2, e1, e0)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_setr_pi32(int32_t e1, int32_t e0)
-{
-#if defined(SIMDE_X86_MMX_NATIVE)
- return _mm_setr_pi32(e1, e0);
-#else
- return simde_mm_set_pi32(e0, e1);
-#endif
-}
-#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
-#define _mm_setr_pi32(e1, e0) simde_mm_setr_pi32(e1, e0)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_setzero_si64(void)
-{
-#if defined(SIMDE_X86_MMX_NATIVE)
- return _mm_setzero_si64();
-#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- simde__m64_private r_;
- r_.neon_u32 = vmov_n_u32(0);
- return simde__m64_from_private(r_);
-#else
- return simde_mm_set_pi32(0, 0);
-#endif
-}
-#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
-#define _mm_setzero_si64() simde_mm_setzero_si64()
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_x_mm_setone_si64(void)
-{
- return simde_mm_set1_pi32(~INT32_C(0));
-}
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_sll_pi16(simde__m64 a, simde__m64 count)
-{
-#if defined(SIMDE_X86_MMX_NATIVE)
- return _mm_sll_pi16(a, count);
-#else
- simde__m64_private r_;
- simde__m64_private a_ = simde__m64_to_private(a);
- simde__m64_private count_ = simde__m64_to_private(count);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i16 = vshl_s16(a_.neon_i16, vmov_n_s16((int16_t)vget_lane_u64(
- count_.neon_u64, 0)));
-#elif defined(SIMDE_VECTOR_SUBSCRIPT_SCALAR)
- r_.i16 = a_.i16 << count_.u64[0];
-#else
- if (HEDLEY_UNLIKELY(count_.u64[0] > 15)) {
- simde_memset(&r_, 0, sizeof(r_));
- return simde__m64_from_private(r_);
- }
-
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.u16) / sizeof(r_.u16[0])); i++) {
- r_.u16[i] = HEDLEY_STATIC_CAST(uint16_t,
- a_.u16[i] << count_.u64[0]);
- }
-#endif
-
- return simde__m64_from_private(r_);
-#endif
-}
-#define simde_m_psllw(a, count) simde_mm_sll_pi16(a, count)
-#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
-#define _mm_sll_pi16(a, count) simde_mm_sll_pi16(a, count)
-#define _m_psllw(a, count) simde_mm_sll_pi16(a, count)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_sll_pi32(simde__m64 a, simde__m64 count)
-{
-#if defined(SIMDE_X86_MMX_NATIVE)
- return _mm_sll_pi32(a, count);
-#else
- simde__m64_private r_;
- simde__m64_private a_ = simde__m64_to_private(a);
- simde__m64_private count_ = simde__m64_to_private(count);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i32 = vshl_s32(a_.neon_i32, vmov_n_s32((int32_t)vget_lane_u64(
- count_.neon_u64, 0)));
-#elif defined(SIMDE_VECTOR_SUBSCRIPT_SCALAR)
- r_.i32 = a_.i32 << count_.u64[0];
-#else
- if (HEDLEY_UNLIKELY(count_.u64[0] > 31)) {
- simde_memset(&r_, 0, sizeof(r_));
- return simde__m64_from_private(r_);
- }
-
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.u32) / sizeof(r_.u32[0])); i++) {
- r_.u32[i] = a_.u32[i] << count_.u64[0];
- }
-#endif
-
- return simde__m64_from_private(r_);
-#endif
-}
-#define simde_m_pslld(a, count) simde_mm_sll_pi32(a, count)
-#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
-#define _mm_sll_pi32(a, count) simde_mm_sll_pi32(a, count)
-#define _m_pslld(a, count) simde_mm_sll_pi32(a, count)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_slli_pi16(simde__m64 a, int count)
-{
-#if defined(SIMDE_X86_MMX_NATIVE) && !defined(__PGI)
- return _mm_slli_pi16(a, count);
-#else
- simde__m64_private r_;
- simde__m64_private a_ = simde__m64_to_private(a);
-
-#if defined(SIMDE_VECTOR_SUBSCRIPT_SCALAR)
- r_.i16 = a_.i16 << count;
-#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i16 = vshl_s16(a_.neon_i16, vmov_n_s16((int16_t)count));
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.u16) / sizeof(r_.u16[0])); i++) {
- r_.u16[i] = HEDLEY_STATIC_CAST(uint16_t, a_.u16[i] << count);
- }
-#endif
-
- return simde__m64_from_private(r_);
-#endif
-}
-#define simde_m_psllwi(a, count) simde_mm_slli_pi16(a, count)
-#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
-#define _mm_slli_pi16(a, count) simde_mm_slli_pi16(a, count)
-#define _m_psllwi(a, count) simde_mm_slli_pi16(a, count)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_slli_pi32(simde__m64 a, int count)
-{
-#if defined(SIMDE_X86_MMX_NATIVE) && !defined(__PGI)
- return _mm_slli_pi32(a, count);
-#else
- simde__m64_private r_;
- simde__m64_private a_ = simde__m64_to_private(a);
-
-#if defined(SIMDE_VECTOR_SUBSCRIPT_SCALAR)
- r_.i32 = a_.i32 << count;
-#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i32 = vshl_s32(a_.neon_i32, vmov_n_s32((int32_t)count));
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.u32) / sizeof(r_.u32[0])); i++) {
- r_.u32[i] = a_.u32[i] << count;
- }
-#endif
-
- return simde__m64_from_private(r_);
-#endif
-}
-#define simde_m_pslldi(a, b) simde_mm_slli_pi32(a, b)
-#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
-#define _mm_slli_pi32(a, count) simde_mm_slli_pi32(a, count)
-#define _m_pslldi(a, count) simde_mm_slli_pi32(a, count)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_slli_si64(simde__m64 a, int count)
-{
-#if defined(SIMDE_X86_MMX_NATIVE)
- return _mm_slli_si64(a, count);
-#else
- simde__m64_private r_;
- simde__m64_private a_ = simde__m64_to_private(a);
-
-#if defined(SIMDE_VECTOR_SUBSCRIPT_SCALAR)
- r_.i64 = a_.i64 << count;
-#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i64 = vshl_s64(a_.neon_i64, vmov_n_s64((int64_t)count));
-#else
- r_.u64[0] = a_.u64[0] << count;
-#endif
-
- return simde__m64_from_private(r_);
-#endif
-}
-#define simde_m_psllqi(a, count) simde_mm_slli_si64(a, count)
-#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
-#define _mm_slli_si64(a, count) simde_mm_slli_si64(a, count)
-#define _m_psllqi(a, count) simde_mm_slli_si64(a, count)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_sll_si64(simde__m64 a, simde__m64 count)
-{
-#if defined(SIMDE_X86_MMX_NATIVE)
- return _mm_sll_si64(a, count);
-#else
- simde__m64_private r_;
- simde__m64_private a_ = simde__m64_to_private(a);
- simde__m64_private count_ = simde__m64_to_private(count);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i64 = vshl_s64(a_.neon_i64, count_.neon_i64);
-#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
- r_.i64 = a_.i64 << count_.i64;
-#else
- if (HEDLEY_UNLIKELY(count_.u64[0] > 63)) {
- simde_memset(&r_, 0, sizeof(r_));
- return simde__m64_from_private(r_);
- }
-
- r_.u64[0] = a_.u64[0] << count_.u64[0];
-#endif
-
- return simde__m64_from_private(r_);
-#endif
-}
-#define simde_m_psllq(a, count) simde_mm_sll_si64(a, count)
-#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
-#define _mm_sll_si64(a, count) simde_mm_sll_si64(a, count)
-#define _m_psllq(a, count) simde_mm_sll_si64(a, count)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_srl_pi16(simde__m64 a, simde__m64 count)
-{
-#if defined(SIMDE_X86_MMX_NATIVE)
- return _mm_srl_pi16(a, count);
-#else
- simde__m64_private r_;
- simde__m64_private a_ = simde__m64_to_private(a);
- simde__m64_private count_ = simde__m64_to_private(count);
-
-#if defined(SIMDE_VECTOR_SUBSCRIPT_SCALAR)
- r_.u16 = a_.u16 >> count_.u64[0];
-#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_u16 = vshl_u16(
- a_.neon_u16,
- vmov_n_s16(-((int16_t)vget_lane_u64(count_.neon_u64, 0))));
-#else
- if (HEDLEY_UNLIKELY(count_.u64[0] > 15)) {
- simde_memset(&r_, 0, sizeof(r_));
- return simde__m64_from_private(r_);
- }
-
- SIMDE_VECTORIZE
- for (size_t i = 0; i < sizeof(r_.u16) / sizeof(r_.u16[0]); i++) {
- r_.u16[i] = a_.u16[i] >> count_.u64[0];
- }
-#endif
-
- return simde__m64_from_private(r_);
-#endif
-}
-#define simde_m_psrlw(a, count) simde_mm_srl_pi16(a, count)
-#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
-#define _mm_srl_pi16(a, count) simde_mm_srl_pi16(a, count)
-#define _m_psrlw(a, count) simde_mm_srl_pi16(a, count)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_srl_pi32(simde__m64 a, simde__m64 count)
-{
-#if defined(SIMDE_X86_MMX_NATIVE)
- return _mm_srl_pi32(a, count);
-#else
- simde__m64_private r_;
- simde__m64_private a_ = simde__m64_to_private(a);
- simde__m64_private count_ = simde__m64_to_private(count);
-
-#if defined(SIMDE_VECTOR_SUBSCRIPT_SCALAR)
- r_.u32 = a_.u32 >> count_.u64[0];
-#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_u32 = vshl_u32(
- a_.neon_u32,
- vmov_n_s32(-((int32_t)vget_lane_u64(count_.neon_u64, 0))));
-#else
- if (HEDLEY_UNLIKELY(count_.u64[0] > 31)) {
- simde_memset(&r_, 0, sizeof(r_));
- return simde__m64_from_private(r_);
- }
-
- SIMDE_VECTORIZE
- for (size_t i = 0; i < sizeof(r_.u32) / sizeof(r_.u32[0]); i++) {
- r_.u32[i] = a_.u32[i] >> count_.u64[0];
- }
-#endif
-
- return simde__m64_from_private(r_);
-#endif
-}
-#define simde_m_psrld(a, count) simde_mm_srl_pi32(a, count)
-#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
-#define _mm_srl_pi32(a, count) simde_mm_srl_pi32(a, count)
-#define _m_psrld(a, count) simde_mm_srl_pi32(a, count)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_srli_pi16(simde__m64 a, int count)
-{
-#if defined(SIMDE_X86_MMX_NATIVE) && !defined(__PGI)
- return _mm_srli_pi16(a, count);
-#else
- simde__m64_private r_;
- simde__m64_private a_ = simde__m64_to_private(a);
-
-#if defined(SIMDE_VECTOR_SUBSCRIPT_SCALAR)
- r_.u16 = a_.u16 >> count;
-#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_u16 = vshl_u16(a_.neon_u16, vmov_n_s16(-((int16_t)count)));
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.u16) / sizeof(r_.u16[0])); i++) {
- r_.u16[i] = a_.u16[i] >> count;
- }
-#endif
-
- return simde__m64_from_private(r_);
-#endif
-}
-#define simde_m_psrlwi(a, count) simde_mm_srli_pi16(a, count)
-#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
-#define _mm_srli_pi16(a, count) simde_mm_srli_pi16(a, count)
-#define _m_psrlwi(a, count) simde_mm_srli_pi16(a, count)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_srli_pi32(simde__m64 a, int count)
-{
-#if defined(SIMDE_X86_MMX_NATIVE) && !defined(__PGI)
- return _mm_srli_pi32(a, count);
-#else
- simde__m64_private r_;
- simde__m64_private a_ = simde__m64_to_private(a);
-
-#if defined(SIMDE_VECTOR_SUBSCRIPT_SCALAR)
- r_.u32 = a_.u32 >> count;
-#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_u32 = vshl_u32(a_.neon_u32, vmov_n_s32(-((int32_t)count)));
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.u32) / sizeof(r_.u32[0])); i++) {
- r_.u32[i] = a_.u32[i] >> count;
- }
-#endif
-
- return simde__m64_from_private(r_);
-#endif
-}
-#define simde_m_psrldi(a, count) simde_mm_srli_pi32(a, count)
-#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
-#define _mm_srli_pi32(a, count) simde_mm_srli_pi32(a, count)
-#define _m_psrldi(a, count) simde_mm_srli_pi32(a, count)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_srli_si64(simde__m64 a, int count)
-{
-#if defined(SIMDE_X86_MMX_NATIVE) && !defined(__PGI)
- return _mm_srli_si64(a, count);
-#else
- simde__m64_private r_;
- simde__m64_private a_ = simde__m64_to_private(a);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_u64 = vshl_u64(a_.neon_u64, vmov_n_s64(-count));
-#elif defined(SIMDE_VECTOR_SUBSCRIPT_SCALAR)
- r_.u64 = a_.u64 >> count;
-#else
- r_.u64[0] = a_.u64[0] >> count;
-#endif
-
- return simde__m64_from_private(r_);
-#endif
-}
-#define simde_m_psrlqi(a, count) simde_mm_srli_si64(a, count)
-#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
-#define _mm_srli_si64(a, count) simde_mm_srli_si64(a, count)
-#define _m_psrlqi(a, count) simde_mm_srli_si64(a, count)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_srl_si64(simde__m64 a, simde__m64 count)
-{
-#if defined(SIMDE_X86_MMX_NATIVE)
- return _mm_srl_si64(a, count);
-#else
- simde__m64_private r_;
- simde__m64_private a_ = simde__m64_to_private(a);
- simde__m64_private count_ = simde__m64_to_private(count);
-
-#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
- r_.neon_u64 = vshl_u64(a_.neon_u64, vneg_s64(count_.neon_i64));
-#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
- r_.u64 = a_.u64 >> count_.u64;
-#else
- if (HEDLEY_UNLIKELY(count_.u64[0] > 63)) {
- simde_memset(&r_, 0, sizeof(r_));
- return simde__m64_from_private(r_);
- }
-
- r_.u64[0] = a_.u64[0] >> count_.u64[0];
-#endif
-
- return simde__m64_from_private(r_);
-#endif
-}
-#define simde_m_psrlq(a, count) simde_mm_srl_si64(a, count)
-#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
-#define _mm_srl_si64(a, count) simde_mm_srl_si64(a, count)
-#define _m_psrlq(a, count) simde_mm_srl_si64(a, count)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_srai_pi16(simde__m64 a, int count)
-{
-#if defined(SIMDE_X86_MMX_NATIVE) && !defined(__PGI)
- return _mm_srai_pi16(a, count);
-#else
- simde__m64_private r_;
- simde__m64_private a_ = simde__m64_to_private(a);
-
-#if defined(SIMDE_VECTOR_SUBSCRIPT_SCALAR)
- r_.i16 = a_.i16 >> (count & 0xff);
-#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i16 = vshl_s16(a_.neon_i16, vmov_n_s16(-HEDLEY_STATIC_CAST(int16_t, count)));
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i16) / sizeof(r_.i16[0])); i++) {
- r_.i16[i] = a_.i16[i] >> (count & 0xff);
- }
-#endif
-
- return simde__m64_from_private(r_);
-#endif
-}
-#define simde_m_psrawi(a, count) simde_mm_srai_pi16(a, count)
-#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
-#define _mm_srai_pi16(a, count) simde_mm_srai_pi16(a, count)
-#define _m_psrawi(a, count) simde_mm_srai_pi16(a, count)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_srai_pi32(simde__m64 a, int count)
-{
-#if defined(SIMDE_X86_MMX_NATIVE) && !defined(__PGI)
- return _mm_srai_pi32(a, count);
-#else
- simde__m64_private r_;
- simde__m64_private a_ = simde__m64_to_private(a);
-
-#if defined(SIMDE_VECTOR_SUBSCRIPT_SCALAR)
- r_.i32 = a_.i32 >> (count & 0xff);
-#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i32 = vshl_s32(a_.neon_i32,
- vmov_n_s32(-HEDLEY_STATIC_CAST(int32_t, count)));
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i32) / sizeof(r_.i32[0])); i++) {
- r_.i32[i] = a_.i32[i] >> (count & 0xff);
- }
-#endif
-
- return simde__m64_from_private(r_);
-#endif
-}
-#define simde_m_psradi(a, count) simde_mm_srai_pi32(a, count)
-#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
-#define _mm_srai_pi32(a, count) simde_mm_srai_pi32(a, count)
-#define _m_psradi(a, count) simde_mm_srai_pi32(a, count)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_sra_pi16(simde__m64 a, simde__m64 count)
-{
-#if defined(SIMDE_X86_MMX_NATIVE)
- return _mm_sra_pi16(a, count);
-#else
- simde__m64_private r_;
- simde__m64_private a_ = simde__m64_to_private(a);
- simde__m64_private count_ = simde__m64_to_private(count);
- const int cnt = HEDLEY_STATIC_CAST(
- int, (count_.i64[0] > 15 ? 15 : count_.i64[0]));
-
-#if defined(SIMDE_VECTOR_SUBSCRIPT_SCALAR)
- r_.i16 = a_.i16 >> cnt;
-#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i16 =
- vshl_s16(a_.neon_i16,
- vmov_n_s16(-HEDLEY_STATIC_CAST(
- int16_t, vget_lane_u64(count_.neon_u64, 0))));
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i16) / sizeof(r_.i16[0])); i++) {
- r_.i16[i] = a_.i16[i] >> cnt;
- }
-#endif
-
- return simde__m64_from_private(r_);
-#endif
-}
-#define simde_m_psraw(a, count) simde_mm_sra_pi16(a, count)
-#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
-#define _mm_sra_pi16(a, count) simde_mm_sra_pi16(a, count)
-#define _m_psraw(a, count) simde_mm_sra_pi16(a, count)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_sra_pi32(simde__m64 a, simde__m64 count)
-{
-#if defined(SIMDE_X86_MMX_NATIVE)
- return _mm_sra_pi32(a, count);
-#else
- simde__m64_private r_;
- simde__m64_private a_ = simde__m64_to_private(a);
- simde__m64_private count_ = simde__m64_to_private(count);
- const int32_t cnt =
- (count_.u64[0] > 31)
- ? 31
- : HEDLEY_STATIC_CAST(int32_t, count_.u64[0]);
-
-#if defined(SIMDE_VECTOR_SUBSCRIPT_SCALAR)
- r_.i32 = a_.i32 >> cnt;
-#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i32 =
- vshl_s32(a_.neon_i32,
- vmov_n_s32(-HEDLEY_STATIC_CAST(
- int32_t, vget_lane_u64(count_.neon_u64, 0))));
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i32) / sizeof(r_.i32[0])); i++) {
- r_.i32[i] = a_.i32[i] >> cnt;
- }
-#endif
-
- return simde__m64_from_private(r_);
-#endif
-}
-#define simde_m_psrad(a, b) simde_mm_sra_pi32(a, b)
-#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
-#define _mm_sra_pi32(a, count) simde_mm_sra_pi32(a, count)
-#define _m_psrad(a, count) simde_mm_sra_pi32(a, count)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_sub_pi8(simde__m64 a, simde__m64 b)
-{
-#if defined(SIMDE_X86_MMX_NATIVE)
- return _mm_sub_pi8(a, b);
-#else
- simde__m64_private r_;
- simde__m64_private a_ = simde__m64_to_private(a);
- simde__m64_private b_ = simde__m64_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i8 = vsub_s8(a_.neon_i8, b_.neon_i8);
-#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
- r_.i8 = a_.i8 - b_.i8;
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i8) / sizeof(r_.i8[0])); i++) {
- r_.i8[i] = a_.i8[i] - b_.i8[i];
- }
-#endif
-
- return simde__m64_from_private(r_);
-#endif
-}
-#define simde_m_psubb(a, b) simde_mm_sub_pi8(a, b)
-#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
-#define _mm_sub_pi8(a, b) simde_mm_sub_pi8(a, b)
-#define _m_psubb(a, b) simde_mm_sub_pi8(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_sub_pi16(simde__m64 a, simde__m64 b)
-{
-#if defined(SIMDE_X86_MMX_NATIVE)
- return _mm_sub_pi16(a, b);
-#else
- simde__m64_private r_;
- simde__m64_private a_ = simde__m64_to_private(a);
- simde__m64_private b_ = simde__m64_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i16 = vsub_s16(a_.neon_i16, b_.neon_i16);
-#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
- r_.i16 = a_.i16 - b_.i16;
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i16) / sizeof(r_.i16[0])); i++) {
- r_.i16[i] = a_.i16[i] - b_.i16[i];
- }
-#endif
-
- return simde__m64_from_private(r_);
-#endif
-}
-#define simde_m_psubw(a, b) simde_mm_sub_pi16(a, b)
-#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
-#define _mm_sub_pi16(a, b) simde_mm_sub_pi16(a, b)
-#define _m_psubw(a, b) simde_mm_sub_pi16(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_sub_pi32(simde__m64 a, simde__m64 b)
-{
-#if defined(SIMDE_X86_MMX_NATIVE)
- return _mm_sub_pi32(a, b);
-#else
- simde__m64_private r_;
- simde__m64_private a_ = simde__m64_to_private(a);
- simde__m64_private b_ = simde__m64_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i32 = vsub_s32(a_.neon_i32, b_.neon_i32);
-#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
- r_.i32 = a_.i32 - b_.i32;
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i32) / sizeof(r_.i32[0])); i++) {
- r_.i32[i] = a_.i32[i] - b_.i32[i];
- }
-#endif
-
- return simde__m64_from_private(r_);
-#endif
-}
-#define simde_m_psubd(a, b) simde_mm_sub_pi32(a, b)
-#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
-#define _mm_sub_pi32(a, b) simde_mm_sub_pi32(a, b)
-#define _m_psubd(a, b) simde_mm_sub_pi32(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_subs_pi8(simde__m64 a, simde__m64 b)
-{
-#if defined(SIMDE_X86_MMX_NATIVE)
- return _mm_subs_pi8(a, b);
-#else
- simde__m64_private r_;
- simde__m64_private a_ = simde__m64_to_private(a);
- simde__m64_private b_ = simde__m64_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i8 = vqsub_s8(a_.neon_i8, b_.neon_i8);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i8) / sizeof(r_.i8[0])); i++) {
- if (((b_.i8[i]) > 0 && (a_.i8[i]) < INT8_MIN + (b_.i8[i]))) {
- r_.i8[i] = INT8_MIN;
- } else if ((b_.i8[i]) < 0 &&
- (a_.i8[i]) > INT8_MAX + (b_.i8[i])) {
- r_.i8[i] = INT8_MAX;
- } else {
- r_.i8[i] = (a_.i8[i]) - (b_.i8[i]);
- }
- }
-#endif
-
- return simde__m64_from_private(r_);
-#endif
-}
-#define simde_m_psubsb(a, b) simde_mm_subs_pi8(a, b)
-#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
-#define _mm_subs_pi8(a, b) simde_mm_subs_pi8(a, b)
-#define _m_psubsb(a, b) simde_mm_subs_pi8(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_subs_pu8(simde__m64 a, simde__m64 b)
-{
-#if defined(SIMDE_X86_MMX_NATIVE)
- return _mm_subs_pu8(a, b);
-#else
- simde__m64_private r_;
- simde__m64_private a_ = simde__m64_to_private(a);
- simde__m64_private b_ = simde__m64_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_u8 = vqsub_u8(a_.neon_u8, b_.neon_u8);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.u8) / sizeof(r_.u8[0])); i++) {
- const int32_t x = a_.u8[i] - b_.u8[i];
- if (x < 0) {
- r_.u8[i] = 0;
- } else if (x > UINT8_MAX) {
- r_.u8[i] = UINT8_MAX;
- } else {
- r_.u8[i] = HEDLEY_STATIC_CAST(uint8_t, x);
- }
- }
-#endif
-
- return simde__m64_from_private(r_);
-#endif
-}
-#define simde_m_psubusb(a, b) simde_mm_subs_pu8(a, b)
-#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
-#define _mm_subs_pu8(a, b) simde_mm_subs_pu8(a, b)
-#define _m_psubusb(a, b) simde_mm_subs_pu8(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_subs_pi16(simde__m64 a, simde__m64 b)
-{
-#if defined(SIMDE_X86_MMX_NATIVE)
- return _mm_subs_pi16(a, b);
-#else
- simde__m64_private r_;
- simde__m64_private a_ = simde__m64_to_private(a);
- simde__m64_private b_ = simde__m64_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i16 = vqsub_s16(a_.neon_i16, b_.neon_i16);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i16) / sizeof(r_.i16[0])); i++) {
- if (((b_.i16[i]) > 0 && (a_.i16[i]) < SHRT_MIN + (b_.i16[i]))) {
- r_.i16[i] = SHRT_MIN;
- } else if ((b_.i16[i]) < 0 &&
- (a_.i16[i]) > INT16_MAX + (b_.i16[i])) {
- r_.i16[i] = INT16_MAX;
- } else {
- r_.i16[i] = (a_.i16[i]) - (b_.i16[i]);
- }
- }
-#endif
-
- return simde__m64_from_private(r_);
-#endif
-}
-#define simde_m_psubsw(a, b) simde_mm_subs_pi16(a, b)
-#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
-#define _mm_subs_pi16(a, b) simde_mm_subs_pi16(a, b)
-#define _m_psubsw(a, b) simde_mm_subs_pi16(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_subs_pu16(simde__m64 a, simde__m64 b)
-{
-#if defined(SIMDE_X86_MMX_NATIVE)
- return _mm_subs_pu16(a, b);
-#else
- simde__m64_private r_;
- simde__m64_private a_ = simde__m64_to_private(a);
- simde__m64_private b_ = simde__m64_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_u16 = vqsub_u16(a_.neon_u16, b_.neon_u16);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.u16) / sizeof(r_.u16[0])); i++) {
- const int x = a_.u16[i] - b_.u16[i];
- if (x < 0) {
- r_.u16[i] = 0;
- } else if (x > UINT16_MAX) {
- r_.u16[i] = UINT16_MAX;
- } else {
- r_.u16[i] = HEDLEY_STATIC_CAST(uint16_t, x);
- }
- }
-#endif
-
- return simde__m64_from_private(r_);
-#endif
-}
-#define simde_m_psubusw(a, b) simde_mm_subs_pu16(a, b)
-#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
-#define _mm_subs_pu16(a, b) simde_mm_subs_pu16(a, b)
-#define _m_psubusw(a, b) simde_mm_subs_pu16(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_unpackhi_pi8(simde__m64 a, simde__m64 b)
-{
-#if defined(SIMDE_X86_MMX_NATIVE)
- return _mm_unpackhi_pi8(a, b);
-#else
- simde__m64_private r_;
- simde__m64_private a_ = simde__m64_to_private(a);
- simde__m64_private b_ = simde__m64_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
- r_.neon_i8 = vzip2_s8(a_.neon_i8, b_.neon_i8);
-#elif defined(SIMDE_SHUFFLE_VECTOR_)
- r_.i8 = SIMDE_SHUFFLE_VECTOR_(8, 8, a_.i8, b_.i8, 4, 12, 5, 13, 6, 14,
- 7, 15);
-#else
- r_.i8[0] = a_.i8[4];
- r_.i8[1] = b_.i8[4];
- r_.i8[2] = a_.i8[5];
- r_.i8[3] = b_.i8[5];
- r_.i8[4] = a_.i8[6];
- r_.i8[5] = b_.i8[6];
- r_.i8[6] = a_.i8[7];
- r_.i8[7] = b_.i8[7];
-#endif
-
- return simde__m64_from_private(r_);
-#endif
-}
-#define simde_m_punpckhbw(a, b) simde_mm_unpackhi_pi8(a, b)
-#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
-#define _mm_unpackhi_pi8(a, b) simde_mm_unpackhi_pi8(a, b)
-#define _m_punpckhbw(a, b) simde_mm_unpackhi_pi8(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_unpackhi_pi16(simde__m64 a, simde__m64 b)
-{
-#if defined(SIMDE_X86_MMX_NATIVE)
- return _mm_unpackhi_pi16(a, b);
-#else
- simde__m64_private r_;
- simde__m64_private a_ = simde__m64_to_private(a);
- simde__m64_private b_ = simde__m64_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
- r_.neon_i16 = vzip2_s16(a_.neon_i16, b_.neon_i16);
-#elif defined(SIMDE_SHUFFLE_VECTOR_)
- r_.i16 = SIMDE_SHUFFLE_VECTOR_(16, 8, a_.i16, b_.i16, 2, 6, 3, 7);
-#else
- r_.i16[0] = a_.i16[2];
- r_.i16[1] = b_.i16[2];
- r_.i16[2] = a_.i16[3];
- r_.i16[3] = b_.i16[3];
-#endif
-
- return simde__m64_from_private(r_);
-#endif
-}
-#define simde_m_punpckhwd(a, b) simde_mm_unpackhi_pi16(a, b)
-#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
-#define _mm_unpackhi_pi16(a, b) simde_mm_unpackhi_pi16(a, b)
-#define _m_punpckhwd(a, b) simde_mm_unpackhi_pi16(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_unpackhi_pi32(simde__m64 a, simde__m64 b)
-{
-#if defined(SIMDE_X86_MMX_NATIVE)
- return _mm_unpackhi_pi32(a, b);
-#else
- simde__m64_private r_;
- simde__m64_private a_ = simde__m64_to_private(a);
- simde__m64_private b_ = simde__m64_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
- r_.neon_i32 = vzip2_s32(a_.neon_i32, b_.neon_i32);
-#elif defined(SIMDE_SHUFFLE_VECTOR_)
- r_.i32 = SIMDE_SHUFFLE_VECTOR_(32, 8, a_.i32, b_.i32, 1, 3);
-#else
- r_.i32[0] = a_.i32[1];
- r_.i32[1] = b_.i32[1];
-#endif
-
- return simde__m64_from_private(r_);
-#endif
-}
-#define simde_m_punpckhdq(a, b) simde_mm_unpackhi_pi32(a, b)
-#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
-#define _mm_unpackhi_pi32(a, b) simde_mm_unpackhi_pi32(a, b)
-#define _m_punpckhdq(a, b) simde_mm_unpackhi_pi32(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_unpacklo_pi8(simde__m64 a, simde__m64 b)
-{
-#if defined(SIMDE_X86_MMX_NATIVE)
- return _mm_unpacklo_pi8(a, b);
-#else
- simde__m64_private r_;
- simde__m64_private a_ = simde__m64_to_private(a);
- simde__m64_private b_ = simde__m64_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
- r_.neon_i8 = vzip1_s8(a_.neon_i8, b_.neon_i8);
-#elif defined(SIMDE_SHUFFLE_VECTOR_)
- r_.i8 = SIMDE_SHUFFLE_VECTOR_(8, 8, a_.i8, b_.i8, 0, 8, 1, 9, 2, 10, 3,
- 11);
-#else
- r_.i8[0] = a_.i8[0];
- r_.i8[1] = b_.i8[0];
- r_.i8[2] = a_.i8[1];
- r_.i8[3] = b_.i8[1];
- r_.i8[4] = a_.i8[2];
- r_.i8[5] = b_.i8[2];
- r_.i8[6] = a_.i8[3];
- r_.i8[7] = b_.i8[3];
-#endif
-
- return simde__m64_from_private(r_);
-#endif
-}
-#define simde_m_punpcklbw(a, b) simde_mm_unpacklo_pi8(a, b)
-#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
-#define _mm_unpacklo_pi8(a, b) simde_mm_unpacklo_pi8(a, b)
-#define _m_punpcklbw(a, b) simde_mm_unpacklo_pi8(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_unpacklo_pi16(simde__m64 a, simde__m64 b)
-{
-#if defined(SIMDE_X86_MMX_NATIVE)
- return _mm_unpacklo_pi16(a, b);
-#else
- simde__m64_private r_;
- simde__m64_private a_ = simde__m64_to_private(a);
- simde__m64_private b_ = simde__m64_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
- r_.neon_i16 = vzip1_s16(a_.neon_i16, b_.neon_i16);
-#elif defined(SIMDE_SHUFFLE_VECTOR_)
- r_.i16 = SIMDE_SHUFFLE_VECTOR_(16, 8, a_.i16, b_.i16, 0, 4, 1, 5);
-#else
- r_.i16[0] = a_.i16[0];
- r_.i16[1] = b_.i16[0];
- r_.i16[2] = a_.i16[1];
- r_.i16[3] = b_.i16[1];
-#endif
-
- return simde__m64_from_private(r_);
-#endif
-}
-#define simde_m_punpcklwd(a, b) simde_mm_unpacklo_pi16(a, b)
-#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
-#define _mm_unpacklo_pi16(a, b) simde_mm_unpacklo_pi16(a, b)
-#define _m_punpcklwd(a, b) simde_mm_unpacklo_pi16(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_unpacklo_pi32(simde__m64 a, simde__m64 b)
-{
-#if defined(SIMDE_X86_MMX_NATIVE)
- return _mm_unpacklo_pi32(a, b);
-#else
- simde__m64_private r_;
- simde__m64_private a_ = simde__m64_to_private(a);
- simde__m64_private b_ = simde__m64_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
- r_.neon_i32 = vzip1_s32(a_.neon_i32, b_.neon_i32);
-#elif defined(SIMDE_SHUFFLE_VECTOR_)
- r_.i32 = SIMDE_SHUFFLE_VECTOR_(32, 8, a_.i32, b_.i32, 0, 2);
-#else
- r_.i32[0] = a_.i32[0];
- r_.i32[1] = b_.i32[0];
-#endif
-
- return simde__m64_from_private(r_);
-#endif
-}
-#define simde_m_punpckldq(a, b) simde_mm_unpacklo_pi32(a, b)
-#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
-#define _mm_unpacklo_pi32(a, b) simde_mm_unpacklo_pi32(a, b)
-#define _m_punpckldq(a, b) simde_mm_unpacklo_pi32(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_xor_si64(simde__m64 a, simde__m64 b)
-{
-#if defined(SIMDE_X86_MMX_NATIVE)
- return _mm_xor_si64(a, b);
-#else
- simde__m64_private r_;
- simde__m64_private a_ = simde__m64_to_private(a);
- simde__m64_private b_ = simde__m64_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i32 = veor_s32(a_.neon_i32, b_.neon_i32);
-#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
- r_.i32f = a_.i32f ^ b_.i32f;
-#else
- r_.u64[0] = a_.u64[0] ^ b_.u64[0];
-#endif
-
- return simde__m64_from_private(r_);
-#endif
-}
-#define simde_m_pxor(a, b) simde_mm_xor_si64(a, b)
-#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
-#define _mm_xor_si64(a, b) simde_mm_xor_si64(a, b)
-#define _m_pxor(a, b) simde_mm_xor_si64(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-int32_t simde_m_to_int(simde__m64 a)
-{
-#if defined(SIMDE_X86_MMX_NATIVE)
- return _m_to_int(a);
-#else
- simde__m64_private a_ = simde__m64_to_private(a);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- return vget_lane_s32(a_.neon_i32, 0);
-#else
- return a_.i32[0];
-#endif
-#endif
-}
-#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
-#define _m_to_int(a) simde_m_to_int(a)
-#endif
-
-SIMDE_END_DECLS_
-
-HEDLEY_DIAGNOSTIC_POP
-
-#endif /* !defined(SIMDE_X86_MMX_H) */
obs-studio-26.1.0.tar.xz/libobs/util/simde/sse.h
Deleted
-/* SPDX-License-Identifier: MIT
- *
- * Permission is hereby granted, free of charge, to any person
- * obtaining a copy of this software and associated documentation
- * files (the "Software"), to deal in the Software without
- * restriction, including without limitation the rights to use, copy,
- * modify, merge, publish, distribute, sublicense, and/or sell copies
- * of the Software, and to permit persons to whom the Software is
- * furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice shall be
- * included in all copies or substantial portions of the Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
- * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
- * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
- * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
- * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
- * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
- * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
- * SOFTWARE.
- *
- * Copyright:
- * 2017-2020 Evan Nemerson <evan@nemerson.com>
- * 2015-2017 John W. Ratcliff <jratcliffscarab@gmail.com>
- * 2015 Brandon Rowlett <browlett@nvidia.com>
- * 2015 Ken Fast <kfast@gdeb.com>
- */
-
-#if !defined(SIMDE_X86_SSE_H)
-#define SIMDE_X86_SSE_H
-
-#include "mmx.h"
-
-#if !defined(SIMDE_X86_AVX512F_NATIVE) && defined(SIMDE_ENABLE_NATIVE_ALIASES)
-#define SIMDE_X86_AVX512F_ENABLE_NATIVE_ALIASES
-#endif
-
-#if defined(_WIN32)
-#include <windows.h>
-#endif
-
-HEDLEY_DIAGNOSTIC_PUSH
-SIMDE_DISABLE_UNWANTED_DIAGNOSTICS
-SIMDE_BEGIN_DECLS_
-
-typedef union {
-#if defined(SIMDE_VECTOR_SUBSCRIPT)
- SIMDE_ALIGN(16) int8_t i8 SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
- SIMDE_ALIGN(16) int16_t i16 SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
- SIMDE_ALIGN(16) int32_t i32 SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
- SIMDE_ALIGN(16) int64_t i64 SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
- SIMDE_ALIGN(16) uint8_t u8 SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
- SIMDE_ALIGN(16) uint16_t u16 SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
- SIMDE_ALIGN(16) uint32_t u32 SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
- SIMDE_ALIGN(16) uint64_t u64 SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
-#if defined(SIMDE_HAVE_INT128_)
- SIMDE_ALIGN(16) simde_int128 i128 SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
- SIMDE_ALIGN(16) simde_uint128 u128 SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
-#endif
- SIMDE_ALIGN(16) simde_float32 f32 SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
- SIMDE_ALIGN(16) int_fast32_t i32f SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
- SIMDE_ALIGN(16) uint_fast32_t u32f SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
-#else
- SIMDE_ALIGN(16) int8_t i8[16];
- SIMDE_ALIGN(16) int16_t i16[8];
- SIMDE_ALIGN(16) int32_t i32[4];
- SIMDE_ALIGN(16) int64_t i64[2];
- SIMDE_ALIGN(16) uint8_t u8[16];
- SIMDE_ALIGN(16) uint16_t u16[8];
- SIMDE_ALIGN(16) uint32_t u32[4];
- SIMDE_ALIGN(16) uint64_t u64[2];
-#if defined(SIMDE_HAVE_INT128_)
- SIMDE_ALIGN(16) simde_int128 i128[1];
- SIMDE_ALIGN(16) simde_uint128 u128[1];
-#endif
- SIMDE_ALIGN(16) simde_float32 f32[4];
- SIMDE_ALIGN(16) int_fast32_t i32f[16 / sizeof(int_fast32_t)];
- SIMDE_ALIGN(16) uint_fast32_t u32f[16 / sizeof(uint_fast32_t)];
-#endif
-
- SIMDE_ALIGN(16) simde__m64_private m64_private[2];
- SIMDE_ALIGN(16) simde__m64 m64[2];
-
-#if defined(SIMDE_X86_SSE_NATIVE)
- SIMDE_ALIGN(16) __m128 n;
-#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- SIMDE_ALIGN(16) int8x16_t neon_i8;
- SIMDE_ALIGN(16) int16x8_t neon_i16;
- SIMDE_ALIGN(16) int32x4_t neon_i32;
- SIMDE_ALIGN(16) int64x2_t neon_i64;
- SIMDE_ALIGN(16) uint8x16_t neon_u8;
- SIMDE_ALIGN(16) uint16x8_t neon_u16;
- SIMDE_ALIGN(16) uint32x4_t neon_u32;
- SIMDE_ALIGN(16) uint64x2_t neon_u64;
- SIMDE_ALIGN(16) float32x4_t neon_f32;
-#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
- SIMDE_ALIGN(16) float64x2_t neon_f64;
-#endif
-#elif defined(SIMDE_WASM_SIMD128_NATIVE)
- SIMDE_ALIGN(16) v128_t wasm_v128;
-#elif defined(SIMDE_POWER_ALTIVEC_P5_NATIVE)
- SIMDE_ALIGN(16) SIMDE_POWER_ALTIVEC_VECTOR(unsigned char) altivec_u8;
- SIMDE_ALIGN(16) SIMDE_POWER_ALTIVEC_VECTOR(unsigned short) altivec_u16;
- SIMDE_ALIGN(16) SIMDE_POWER_ALTIVEC_VECTOR(unsigned int) altivec_u32;
- SIMDE_ALIGN(16)
- SIMDE_POWER_ALTIVEC_VECTOR(unsigned long long) altivec_u64;
- SIMDE_ALIGN(16) SIMDE_POWER_ALTIVEC_VECTOR(signed char) altivec_i8;
- SIMDE_ALIGN(16) SIMDE_POWER_ALTIVEC_VECTOR(signed short) altivec_i16;
- SIMDE_ALIGN(16) SIMDE_POWER_ALTIVEC_VECTOR(signed int) altivec_i32;
- SIMDE_ALIGN(16)
- SIMDE_POWER_ALTIVEC_VECTOR(signed long long) altivec_i64;
- SIMDE_ALIGN(16) SIMDE_POWER_ALTIVEC_VECTOR(float) altivec_f32;
-#if defined(SIMDE_POWER_ALTIVEC_P7_NATIVE)
- SIMDE_ALIGN(16) SIMDE_POWER_ALTIVEC_VECTOR(double) altivec_f64;
-#endif
-#endif
-} simde__m128_private;
-
-#if defined(SIMDE_X86_SSE_NATIVE)
-typedef __m128 simde__m128;
-#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
-typedef float32x4_t simde__m128;
-#elif defined(SIMDE_WASM_SIMD128_NATIVE)
-typedef v128_t simde__m128;
-#elif defined(SIMDE_POWER_ALTIVEC_P5_NATIVE)
-typedef SIMDE_POWER_ALTIVEC_VECTOR(float) simde__m128;
-#elif defined(SIMDE_VECTOR_SUBSCRIPT)
-typedef simde_float32 simde__m128 SIMDE_ALIGN(16)
- SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
-#else
-typedef simde__m128_private simde__m128;
-#endif
-
-#if !defined(SIMDE_X86_SSE_NATIVE) && defined(SIMDE_ENABLE_NATIVE_ALIASES)
-#define SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES
-typedef simde__m128 __m128;
-#endif
-
-HEDLEY_STATIC_ASSERT(16 == sizeof(simde__m128), "simde__m128 size incorrect");
-HEDLEY_STATIC_ASSERT(16 == sizeof(simde__m128_private),
- "simde__m128_private size incorrect");
-#if defined(SIMDE_CHECK_ALIGNMENT) && defined(SIMDE_ALIGN_OF)
-HEDLEY_STATIC_ASSERT(SIMDE_ALIGN_OF(simde__m128) == 16,
- "simde__m128 is not 16-byte aligned");
-HEDLEY_STATIC_ASSERT(SIMDE_ALIGN_OF(simde__m128_private) == 16,
- "simde__m128_private is not 16-byte aligned");
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde__m128_from_private(simde__m128_private v)
-{
- simde__m128 r;
- simde_memcpy(&r, &v, sizeof(r));
- return r;
-}
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128_private simde__m128_to_private(simde__m128 v)
-{
- simde__m128_private r;
- simde_memcpy(&r, &v, sizeof(r));
- return r;
-}
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
-SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128, int8x16_t, neon, i8)
-SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128, int16x8_t, neon, i16)
-SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128, int32x4_t, neon, i32)
-SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128, int64x2_t, neon, i64)
-SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128, uint8x16_t, neon, u8)
-SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128, uint16x8_t, neon, u16)
-SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128, uint32x4_t, neon, u32)
-SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128, uint64x2_t, neon, u64)
-SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128, float32x4_t, neon, f32)
-#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
-SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128, float64x2_t, neon, f64)
-#endif
-#endif /* defined(SIMDE_ARM_NEON_A32V7_NATIVE) */
-
-#if defined(SIMDE_DIAGNOSTIC_DISABLE_UNINITIALIZED_)
-HEDLEY_DIAGNOSTIC_POP
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_set_ps(simde_float32 e3, simde_float32 e2,
- simde_float32 e1, simde_float32 e0)
-{
-#if defined(SIMDE_X86_SSE_NATIVE)
- return _mm_set_ps(e3, e2, e1, e0);
-#else
- simde__m128_private r_;
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- SIMDE_ALIGN(16) simde_float32 data[4] = {e0, e1, e2, e3};
- r_.neon_f32 = vld1q_f32(data);
-#elif defined(SIMDE_WASM_SIMD128_NATIVE)
- r_.wasm_v128 = wasm_f32x4_make(e0, e1, e2, e3);
-#else
- r_.f32[0] = e0;
- r_.f32[1] = e1;
- r_.f32[2] = e2;
- r_.f32[3] = e3;
-#endif
-
- return simde__m128_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_set_ps(e3, e2, e1, e0) simde_mm_set_ps(e3, e2, e1, e0)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_set_ps1(simde_float32 a)
-{
-#if defined(SIMDE_X86_SSE_NATIVE)
- return _mm_set_ps1(a);
-#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- return vdupq_n_f32(a);
-#elif defined(SIMDE_POWER_ALTIVEC_P5_NATIVE)
- (void)a;
- return vec_splats(a);
-#else
- return simde_mm_set_ps(a, a, a, a);
-#endif
-}
-#define simde_mm_set1_ps(a) simde_mm_set_ps1(a)
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_set_ps1(a) simde_mm_set_ps1(a)
-#define _mm_set1_ps(a) simde_mm_set1_ps(a)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_move_ss(simde__m128 a, simde__m128 b)
-{
-#if defined(SIMDE_X86_SSE_NATIVE)
- return _mm_move_ss(a, b);
-#else
- simde__m128_private r_, a_ = simde__m128_to_private(a),
- b_ = simde__m128_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_f32 =
- vsetq_lane_f32(vgetq_lane_f32(b_.neon_f32, 0), a_.neon_f32, 0);
-#elif defined(SIMDE_POWER_ALTIVEC_P5_NATIVE)
- SIMDE_POWER_ALTIVEC_VECTOR(unsigned char)
- m = {16, 17, 18, 19, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15};
- r_.altivec_f32 = vec_perm(a_.altivec_f32, b_.altivec_f32, m);
-#elif defined(SIMDE_WASM_SIMD128_NATIVE)
- r_.wasm_v128 = wasm_v8x16_shuffle(b_.wasm_v128, a_.wasm_v128, 0, 1, 2,
- 3, 20, 21, 22, 23, 24, 25, 26, 27, 28,
- 29, 30, 31);
-#elif defined(SIMDE_SHUFFLE_VECTOR_)
- r_.f32 = SIMDE_SHUFFLE_VECTOR_(32, 16, a_.f32, b_.f32, 4, 1, 2, 3);
-#else
- r_.f32[0] = b_.f32[0];
- r_.f32[1] = a_.f32[1];
- r_.f32[2] = a_.f32[2];
- r_.f32[3] = a_.f32[3];
-#endif
-
- return simde__m128_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_move_ss(a, b) simde_mm_move_ss((a), (b))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_add_ps(simde__m128 a, simde__m128 b)
-{
-#if defined(SIMDE_X86_SSE_NATIVE)
- return _mm_add_ps(a, b);
-#else
- simde__m128_private r_, a_ = simde__m128_to_private(a),
- b_ = simde__m128_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_f32 = vaddq_f32(a_.neon_f32, b_.neon_f32);
-#elif defined(SIMDE_WASM_SIMD128_NATIVE)
- r_.wasm_v128 = wasm_f32x4_add(a_.wasm_v128, b_.wasm_v128);
-#elif defined(SIMDE_POWER_ALTIVEC_P5_NATIVE)
- r_.altivec_f32 = vec_add(a_.altivec_f32, b_.altivec_f32);
-#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
- r_.f32 = a_.f32 + b_.f32;
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.f32) / sizeof(r_.f32[0])); i++) {
- r_.f32[i] = a_.f32[i] + b_.f32[i];
- }
-#endif
-
- return simde__m128_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_add_ps(a, b) simde_mm_add_ps((a), (b))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_add_ss(simde__m128 a, simde__m128 b)
-{
-#if defined(SIMDE_X86_SSE_NATIVE)
- return _mm_add_ss(a, b);
-#elif defined(SIMDE_ASSUME_VECTORIZATION)
- return simde_mm_move_ss(a, simde_mm_add_ps(a, b));
-#else
- simde__m128_private r_, a_ = simde__m128_to_private(a),
- b_ = simde__m128_to_private(b);
-
- r_.f32[0] = a_.f32[0] + b_.f32[0];
- r_.f32[1] = a_.f32[1];
- r_.f32[2] = a_.f32[2];
- r_.f32[3] = a_.f32[3];
-
- return simde__m128_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_add_ss(a, b) simde_mm_add_ss((a), (b))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_and_ps(simde__m128 a, simde__m128 b)
-{
-#if defined(SIMDE_X86_SSE_NATIVE)
- return _mm_and_ps(a, b);
-#else
- simde__m128_private r_, a_ = simde__m128_to_private(a),
- b_ = simde__m128_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i32 = vandq_s32(a_.neon_i32, b_.neon_i32);
-#elif defined(SIMDE_WASM_SIMD128_NATIVE)
- r_.wasm_v128 = wasm_v128_and(a_.wasm_v128, b_.wasm_v128);
-#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
- r_.i32 = a_.i32 & b_.i32;
-#elif defined(SIMDE_POWER_ALTIVEC_P5_NATIVE)
- r_.altivec_f32 = vec_and(a_.altivec_f32, b_.altivec_f32);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i32) / sizeof(r_.i32[0])); i++) {
- r_.i32[i] = a_.i32[i] & b_.i32[i];
- }
-#endif
-
- return simde__m128_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_and_ps(a, b) simde_mm_and_ps((a), (b))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_andnot_ps(simde__m128 a, simde__m128 b)
-{
-#if defined(SIMDE_X86_SSE_NATIVE)
- return _mm_andnot_ps(a, b);
-#else
- simde__m128_private r_, a_ = simde__m128_to_private(a),
- b_ = simde__m128_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i32 = vbicq_s32(b_.neon_i32, a_.neon_i32);
-#elif defined(SIMDE_WASM_SIMD128_NATIVE)
- r_.wasm_v128 = wasm_v128_andnot(b_.wasm_v128, a_.wasm_v128);
-#elif defined(SIMDE_POWER_ALTIVEC_P5_NATIVE)
- r_.altivec_f32 = vec_andc(b_.altivec_f32, a_.altivec_f32);
-#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
- r_.i32 = ~a_.i32 & b_.i32;
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i32) / sizeof(r_.i32[0])); i++) {
- r_.i32[i] = ~(a_.i32[i]) & b_.i32[i];
- }
-#endif
-
- return simde__m128_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_andnot_ps(a, b) simde_mm_andnot_ps((a), (b))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_avg_pu16(simde__m64 a, simde__m64 b)
-{
-#if defined(SIMDE_X86_SSE_NATIVE) && defined(SIMDE_X86_MMX_NATIVE)
- return _mm_avg_pu16(a, b);
-#else
- simde__m64_private r_, a_ = simde__m64_to_private(a),
- b_ = simde__m64_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_u16 = vrhadd_u16(b_.neon_u16, a_.neon_u16);
-#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS) && \
- defined(SIMDE_VECTOR_SUBSCRIPT_SCALAR) && \
- defined(SIMDE_CONVERT_VECTOR_)
- uint32_t wa SIMDE_VECTOR(16);
- uint32_t wb SIMDE_VECTOR(16);
- uint32_t wr SIMDE_VECTOR(16);
- SIMDE_CONVERT_VECTOR_(wa, a_.u16);
- SIMDE_CONVERT_VECTOR_(wb, b_.u16);
- wr = (wa + wb + 1) >> 1;
- SIMDE_CONVERT_VECTOR_(r_.u16, wr);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.u16) / sizeof(r_.u16[0])); i++) {
- r_.u16[i] = (a_.u16[i] + b_.u16[i] + 1) >> 1;
- }
-#endif
-
- return simde__m64_from_private(r_);
-#endif
-}
-#define simde_m_pavgw(a, b) simde_mm_avg_pu16(a, b)
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_avg_pu16(a, b) simde_mm_avg_pu16(a, b)
-#define _m_pavgw(a, b) simde_mm_avg_pu16(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_avg_pu8(simde__m64 a, simde__m64 b)
-{
-#if defined(SIMDE_X86_SSE_NATIVE) && defined(SIMDE_X86_MMX_NATIVE)
- return _mm_avg_pu8(a, b);
-#else
- simde__m64_private r_, a_ = simde__m64_to_private(a),
- b_ = simde__m64_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_u8 = vrhadd_u8(b_.neon_u8, a_.neon_u8);
-#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS) && \
- defined(SIMDE_VECTOR_SUBSCRIPT_SCALAR) && \
- defined(SIMDE_CONVERT_VECTOR_)
- uint16_t wa SIMDE_VECTOR(16);
- uint16_t wb SIMDE_VECTOR(16);
- uint16_t wr SIMDE_VECTOR(16);
- SIMDE_CONVERT_VECTOR_(wa, a_.u8);
- SIMDE_CONVERT_VECTOR_(wb, b_.u8);
- wr = (wa + wb + 1) >> 1;
- SIMDE_CONVERT_VECTOR_(r_.u8, wr);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.u8) / sizeof(r_.u8[0])); i++) {
- r_.u8[i] = (a_.u8[i] + b_.u8[i] + 1) >> 1;
- }
-#endif
-
- return simde__m64_from_private(r_);
-#endif
-}
-#define simde_m_pavgb(a, b) simde_mm_avg_pu8(a, b)
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_avg_pu8(a, b) simde_mm_avg_pu8(a, b)
-#define _m_pavgb(a, b) simde_mm_avg_pu8(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_cmpeq_ps(simde__m128 a, simde__m128 b)
-{
-#if defined(SIMDE_X86_SSE_NATIVE)
- return _mm_cmpeq_ps(a, b);
-#else
- simde__m128_private r_, a_ = simde__m128_to_private(a),
- b_ = simde__m128_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_u32 = vceqq_f32(a_.neon_f32, b_.neon_f32);
-#elif defined(SIMDE_WASM_SIMD128_NATIVE)
- r_.wasm_v128 = wasm_f32x4_eq(a_.wasm_v128, b_.wasm_v128);
-#elif defined(SIMDE_POWER_ALTIVEC_P5_NATIVE)
- r_.altivec_f32 = (SIMDE_POWER_ALTIVEC_VECTOR(float))vec_cmpeq(
- a_.altivec_f32, b_.altivec_f32);
-#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
- r_.i32 = HEDLEY_STATIC_CAST(__typeof__(r_.i32), a_.f32 == b_.f32);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.f32) / sizeof(r_.f32[0])); i++) {
- r_.u32[i] = (a_.f32[i] == b_.f32[i]) ? ~UINT32_C(0)
- : UINT32_C(0);
- }
-#endif
-
- return simde__m128_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_cmpeq_ps(a, b) simde_mm_cmpeq_ps((a), (b))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_cmpeq_ss(simde__m128 a, simde__m128 b)
-{
-#if defined(SIMDE_X86_SSE_NATIVE)
- return _mm_cmpeq_ss(a, b);
-#elif defined(SIMDE_ASSUME_VECTORIZATION)
- return simde_mm_move_ss(a, simde_mm_cmpeq_ps(a, b));
-#else
- simde__m128_private r_, a_ = simde__m128_to_private(a),
- b_ = simde__m128_to_private(b);
-
- r_.u32[0] = (a_.f32[0] == b_.f32[0]) ? ~UINT32_C(0) : UINT32_C(0);
- SIMDE_VECTORIZE
- for (size_t i = 1; i < (sizeof(r_.f32) / sizeof(r_.f32[0])); i++) {
- r_.u32[i] = a_.u32[i];
- }
-
- return simde__m128_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_cmpeq_ss(a, b) simde_mm_cmpeq_ss((a), (b))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_cmpge_ps(simde__m128 a, simde__m128 b)
-{
-#if defined(SIMDE_X86_SSE_NATIVE)
- return _mm_cmpge_ps(a, b);
-#else
- simde__m128_private r_, a_ = simde__m128_to_private(a),
- b_ = simde__m128_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_u32 = vcgeq_f32(a_.neon_f32, b_.neon_f32);
-#elif defined(SIMDE_WASM_SIMD128_NATIVE)
- r_.wasm_v128 = wasm_f32x4_ge(a_.wasm_v128, b_.wasm_v128);
-#elif defined(SIMDE_POWER_ALTIVEC_P5_NATIVE)
- r_.altivec_f32 = (SIMDE_POWER_ALTIVEC_VECTOR(float))vec_cmpge(
- a_.altivec_f32, b_.altivec_f32);
-#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
- r_.i32 = HEDLEY_STATIC_CAST(__typeof__(r_.i32), (a_.f32 >= b_.f32));
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.f32) / sizeof(r_.f32[0])); i++) {
- r_.u32[i] = (a_.f32[i] >= b_.f32[i]) ? ~UINT32_C(0)
- : UINT32_C(0);
- }
-#endif
-
- return simde__m128_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_cmpge_ps(a, b) simde_mm_cmpge_ps((a), (b))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_cmpge_ss(simde__m128 a, simde__m128 b)
-{
-#if defined(SIMDE_X86_SSE_NATIVE) && !defined(__PGI)
- return _mm_cmpge_ss(a, b);
-#elif defined(SIMDE_ASSUME_VECTORIZATION)
- return simde_mm_move_ss(a, simde_mm_cmpge_ps(a, b));
-#else
- simde__m128_private r_, a_ = simde__m128_to_private(a),
- b_ = simde__m128_to_private(b);
-
- r_.u32[0] = (a_.f32[0] >= b_.f32[0]) ? ~UINT32_C(0) : UINT32_C(0);
- SIMDE_VECTORIZE
- for (size_t i = 1; i < (sizeof(r_.f32) / sizeof(r_.f32[0])); i++) {
- r_.u32[i] = a_.u32[i];
- }
-
- return simde__m128_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_cmpge_ss(a, b) simde_mm_cmpge_ss((a), (b))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_cmpgt_ps(simde__m128 a, simde__m128 b)
-{
-#if defined(SIMDE_X86_SSE_NATIVE)
- return _mm_cmpgt_ps(a, b);
-#else
- simde__m128_private r_, a_ = simde__m128_to_private(a),
- b_ = simde__m128_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_u32 = vcgtq_f32(a_.neon_f32, b_.neon_f32);
-#elif defined(SIMDE_WASM_SIMD128_NATIVE)
- r_.wasm_v128 = wasm_f32x4_gt(a_.wasm_v128, b_.wasm_v128);
-#elif defined(SIMDE_POWER_ALTIVEC_P5_NATIVE)
- r_.altivec_f32 = (SIMDE_POWER_ALTIVEC_VECTOR(float))vec_cmpgt(
- a_.altivec_f32, b_.altivec_f32);
-#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
- r_.i32 = HEDLEY_STATIC_CAST(__typeof__(r_.i32), (a_.f32 > b_.f32));
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.f32) / sizeof(r_.f32[0])); i++) {
- r_.u32[i] = (a_.f32[i] > b_.f32[i]) ? ~UINT32_C(0)
- : UINT32_C(0);
- }
-#endif
-
- return simde__m128_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_cmpgt_ps(a, b) simde_mm_cmpgt_ps((a), (b))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_cmpgt_ss(simde__m128 a, simde__m128 b)
-{
-#if defined(SIMDE_X86_SSE_NATIVE) && !defined(__PGI)
- return _mm_cmpgt_ss(a, b);
-#elif defined(SIMDE_ASSUME_VECTORIZATION)
- return simde_mm_move_ss(a, simde_mm_cmpgt_ps(a, b));
-#else
- simde__m128_private r_, a_ = simde__m128_to_private(a),
- b_ = simde__m128_to_private(b);
-
- r_.u32[0] = (a_.f32[0] > b_.f32[0]) ? ~UINT32_C(0) : UINT32_C(0);
- SIMDE_VECTORIZE
- for (size_t i = 1; i < (sizeof(r_.f32) / sizeof(r_.f32[0])); i++) {
- r_.u32[i] = a_.u32[i];
- }
-
- return simde__m128_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_cmpgt_ss(a, b) simde_mm_cmpgt_ss((a), (b))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_cmple_ps(simde__m128 a, simde__m128 b)
-{
-#if defined(SIMDE_X86_SSE_NATIVE)
- return _mm_cmple_ps(a, b);
-#else
- simde__m128_private r_, a_ = simde__m128_to_private(a),
- b_ = simde__m128_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_u32 = vcleq_f32(a_.neon_f32, b_.neon_f32);
-#elif defined(SIMDE_WASM_SIMD128_NATIVE)
- r_.wasm_v128 = wasm_f32x4_le(a_.wasm_v128, b_.wasm_v128);
-#elif defined(SIMDE_POWER_ALTIVEC_P5_NATIVE)
- r_.altivec_f32 = (SIMDE_POWER_ALTIVEC_VECTOR(float))vec_cmple(
- a_.altivec_f32, b_.altivec_f32);
-#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
- r_.i32 = HEDLEY_STATIC_CAST(__typeof__(r_.i32), (a_.f32 <= b_.f32));
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.f32) / sizeof(r_.f32[0])); i++) {
- r_.u32[i] = (a_.f32[i] <= b_.f32[i]) ? ~UINT32_C(0)
- : UINT32_C(0);
- }
-#endif
-
- return simde__m128_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_cmple_ps(a, b) simde_mm_cmple_ps((a), (b))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_cmple_ss(simde__m128 a, simde__m128 b)
-{
-#if defined(SIMDE_X86_SSE_NATIVE)
- return _mm_cmple_ss(a, b);
-#elif defined(SIMDE_ASSUME_VECTORIZATION)
- return simde_mm_move_ss(a, simde_mm_cmple_ps(a, b));
-#else
- simde__m128_private r_, a_ = simde__m128_to_private(a),
- b_ = simde__m128_to_private(b);
-
- r_.u32[0] = (a_.f32[0] <= b_.f32[0]) ? ~UINT32_C(0) : UINT32_C(0);
- SIMDE_VECTORIZE
- for (size_t i = 1; i < (sizeof(r_.f32) / sizeof(r_.f32[0])); i++) {
- r_.u32[i] = a_.u32[i];
- }
-
- return simde__m128_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_cmple_ss(a, b) simde_mm_cmple_ss((a), (b))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_cmplt_ps(simde__m128 a, simde__m128 b)
-{
-#if defined(SIMDE_X86_SSE_NATIVE)
- return _mm_cmplt_ps(a, b);
-#else
- simde__m128_private r_, a_ = simde__m128_to_private(a),
- b_ = simde__m128_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_u32 = vcltq_f32(a_.neon_f32, b_.neon_f32);
-#elif defined(SIMDE_WASM_SIMD128_NATIVE)
- r_.wasm_v128 = wasm_f32x4_lt(a_.wasm_v128, b_.wasm_v128);
-#elif defined(SIMDE_POWER_ALTIVEC_P5_NATIVE)
- r_.altivec_f32 = (SIMDE_POWER_ALTIVEC_VECTOR(float))vec_cmplt(
- a_.altivec_f32, b_.altivec_f32);
-#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
- r_.i32 = HEDLEY_STATIC_CAST(__typeof__(r_.i32), (a_.f32 < b_.f32));
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.f32) / sizeof(r_.f32[0])); i++) {
- r_.u32[i] = (a_.f32[i] < b_.f32[i]) ? ~UINT32_C(0)
- : UINT32_C(0);
- }
-#endif
-
- return simde__m128_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_cmplt_ps(a, b) simde_mm_cmplt_ps((a), (b))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_cmplt_ss(simde__m128 a, simde__m128 b)
-{
-#if defined(SIMDE_X86_SSE_NATIVE)
- return _mm_cmplt_ss(a, b);
-#elif defined(SIMDE_ASSUME_VECTORIZATION)
- return simde_mm_move_ss(a, simde_mm_cmplt_ps(a, b));
-#else
- simde__m128_private r_, a_ = simde__m128_to_private(a),
- b_ = simde__m128_to_private(b);
-
- r_.u32[0] = (a_.f32[0] < b_.f32[0]) ? ~UINT32_C(0) : UINT32_C(0);
- SIMDE_VECTORIZE
- for (size_t i = 1; i < (sizeof(r_.f32) / sizeof(r_.f32[0])); i++) {
- r_.u32[i] = a_.u32[i];
- }
-
- return simde__m128_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_cmplt_ss(a, b) simde_mm_cmplt_ss((a), (b))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_cmpneq_ps(simde__m128 a, simde__m128 b)
-{
-#if defined(SIMDE_X86_SSE_NATIVE)
- return _mm_cmpneq_ps(a, b);
-#else
- simde__m128_private r_, a_ = simde__m128_to_private(a),
- b_ = simde__m128_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_u32 = vmvnq_u32(vceqq_f32(a_.neon_f32, b_.neon_f32));
-#elif defined(SIMDE_WASM_SIMD128_NATIVE)
- r_.wasm_v128 = wasm_f32x4_ne(a_.wasm_v128, b_.wasm_v128);
-#elif defined(SIMDE_POWER_ALTIVEC_P9_NATIVE) && SIMDE_ARCH_POWER_CHECK(900) && \
- !defined(HEDLEY_IBM_VERSION)
- /* vec_cmpne(vector float, vector float) is missing from XL C/C++ v16.1.1,
- though the documentation (table 89 on page 432 of the IBM XL C/C++ for
- Linux Compiler Reference, Version 16.1.1) shows that it should be
- present. Both GCC and clang support it. */
- r_.altivec_f32 = (SIMDE_POWER_ALTIVEC_VECTOR(float))vec_cmpne(
- a_.altivec_f32, b_.altivec_f32);
-#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
- r_.i32 = HEDLEY_STATIC_CAST(__typeof__(r_.i32), (a_.f32 != b_.f32));
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.f32) / sizeof(r_.f32[0])); i++) {
- r_.u32[i] = (a_.f32[i] != b_.f32[i]) ? ~UINT32_C(0)
- : UINT32_C(0);
- }
-#endif
-
- return simde__m128_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_cmpneq_ps(a, b) simde_mm_cmpneq_ps((a), (b))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_cmpneq_ss(simde__m128 a, simde__m128 b)
-{
-#if defined(SIMDE_X86_SSE_NATIVE)
- return _mm_cmpneq_ss(a, b);
-#elif defined(SIMDE_ASSUME_VECTORIZATION)
- return simde_mm_move_ss(a, simde_mm_cmpneq_ps(a, b));
-#else
- simde__m128_private r_, a_ = simde__m128_to_private(a),
- b_ = simde__m128_to_private(b);
-
- r_.u32[0] = (a_.f32[0] != b_.f32[0]) ? ~UINT32_C(0) : UINT32_C(0);
- SIMDE_VECTORIZE
- for (size_t i = 1; i < (sizeof(r_.f32) / sizeof(r_.f32[0])); i++) {
- r_.u32[i] = a_.u32[i];
- }
-
- return simde__m128_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_cmpneq_ss(a, b) simde_mm_cmpneq_ss((a), (b))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_cmpnge_ps(simde__m128 a, simde__m128 b)
-{
- return simde_mm_cmplt_ps(a, b);
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_cmpnge_ps(a, b) simde_mm_cmpnge_ps((a), (b))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_cmpnge_ss(simde__m128 a, simde__m128 b)
-{
- return simde_mm_cmplt_ss(a, b);
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_cmpnge_ss(a, b) simde_mm_cmpnge_ss((a), (b))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_cmpngt_ps(simde__m128 a, simde__m128 b)
-{
- return simde_mm_cmple_ps(a, b);
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_cmpngt_ps(a, b) simde_mm_cmpngt_ps((a), (b))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_cmpngt_ss(simde__m128 a, simde__m128 b)
-{
- return simde_mm_cmple_ss(a, b);
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_cmpngt_ss(a, b) simde_mm_cmpngt_ss((a), (b))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_cmpnle_ps(simde__m128 a, simde__m128 b)
-{
- return simde_mm_cmpgt_ps(a, b);
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_cmpnle_ps(a, b) simde_mm_cmpnle_ps((a), (b))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_cmpnle_ss(simde__m128 a, simde__m128 b)
-{
- return simde_mm_cmpgt_ss(a, b);
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_cmpnle_ss(a, b) simde_mm_cmpnle_ss((a), (b))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_cmpnlt_ps(simde__m128 a, simde__m128 b)
-{
- return simde_mm_cmpge_ps(a, b);
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_cmpnlt_ps(a, b) simde_mm_cmpnlt_ps((a), (b))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_cmpnlt_ss(simde__m128 a, simde__m128 b)
-{
- return simde_mm_cmpge_ss(a, b);
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_cmpnlt_ss(a, b) simde_mm_cmpnlt_ss((a), (b))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_cmpord_ps(simde__m128 a, simde__m128 b)
-{
-#if defined(SIMDE_X86_SSE_NATIVE)
- return _mm_cmpord_ps(a, b);
-#elif defined(SIMDE_WASM_SIMD128_NATIVE)
- return wasm_v128_and(wasm_f32x4_eq(a, a), wasm_f32x4_eq(b, b));
-#else
- simde__m128_private r_, a_ = simde__m128_to_private(a),
- b_ = simde__m128_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- /* Note: NEON does not have ordered compare builtin
- Need to compare a eq a and b eq b to check for NaN
- Do AND of results to get final */
- uint32x4_t ceqaa = vceqq_f32(a_.neon_f32, a_.neon_f32);
- uint32x4_t ceqbb = vceqq_f32(b_.neon_f32, b_.neon_f32);
- r_.neon_u32 = vandq_u32(ceqaa, ceqbb);
-#elif defined(simde_math_isnanf)
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.f32) / sizeof(r_.f32[0])); i++) {
- r_.u32[i] = (simde_math_isnanf(a_.f32[i]) ||
- simde_math_isnanf(b_.f32[i]))
- ? UINT32_C(0)
- : ~UINT32_C(0);
- }
-#else
- HEDLEY_UNREACHABLE();
-#endif
-
- return simde__m128_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_cmpord_ps(a, b) simde_mm_cmpord_ps((a), (b))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_cmpunord_ps(simde__m128 a, simde__m128 b)
-{
-#if defined(SIMDE_X86_SSE_NATIVE)
- return _mm_cmpunord_ps(a, b);
-#elif defined(SIMDE_WASM_SIMD128_NATIVE)
- return wasm_v128_or(wasm_f32x4_ne(a, a), wasm_f32x4_ne(b, b));
-#else
- simde__m128_private r_, a_ = simde__m128_to_private(a),
- b_ = simde__m128_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- uint32x4_t ceqaa = vceqq_f32(a_.neon_f32, a_.neon_f32);
- uint32x4_t ceqbb = vceqq_f32(b_.neon_f32, b_.neon_f32);
- r_.neon_u32 = vmvnq_u32(vandq_u32(ceqaa, ceqbb));
-#elif defined(simde_math_isnanf)
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.f32) / sizeof(r_.f32[0])); i++) {
- r_.u32[i] = (simde_math_isnanf(a_.f32[i]) ||
- simde_math_isnanf(b_.f32[i]))
- ? ~UINT32_C(0)
- : UINT32_C(0);
- }
-#else
- HEDLEY_UNREACHABLE();
-#endif
-
- return simde__m128_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_cmpunord_ps(a, b) simde_mm_cmpunord_ps((a), (b))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_cmpunord_ss(simde__m128 a, simde__m128 b)
-{
-#if defined(SIMDE_X86_SSE_NATIVE) && !defined(__PGI)
- return _mm_cmpunord_ss(a, b);
-#elif defined(SIMDE_ASSUME_VECTORIZATION)
- return simde_mm_move_ss(a, simde_mm_cmpunord_ps(a, b));
-#else
- simde__m128_private r_, a_ = simde__m128_to_private(a),
- b_ = simde__m128_to_private(b);
-
-#if defined(simde_math_isnanf)
- r_.u32[0] =
- (simde_math_isnanf(a_.f32[0]) || simde_math_isnanf(b_.f32[0]))
- ? ~UINT32_C(0)
- : UINT32_C(0);
- SIMDE_VECTORIZE
- for (size_t i = 1; i < (sizeof(r_.u32) / sizeof(r_.u32[0])); i++) {
- r_.u32[i] = a_.u32[i];
- }
-#else
- HEDLEY_UNREACHABLE();
-#endif
-
- return simde__m128_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_cmpunord_ss(a, b) simde_mm_cmpunord_ss((a), (b))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-int simde_mm_comieq_ss(simde__m128 a, simde__m128 b)
-{
-#if defined(SIMDE_X86_SSE_NATIVE)
- return _mm_comieq_ss(a, b);
-#else
- simde__m128_private a_ = simde__m128_to_private(a),
- b_ = simde__m128_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- uint32x4_t a_not_nan = vceqq_f32(a_.neon_f32, a_.neon_f32);
- uint32x4_t b_not_nan = vceqq_f32(b_.neon_f32, b_.neon_f32);
- uint32x4_t a_or_b_nan = vmvnq_u32(vandq_u32(a_not_nan, b_not_nan));
- uint32x4_t a_eq_b = vceqq_f32(a_.neon_f32, b_.neon_f32);
- return !!(vgetq_lane_u32(vorrq_u32(a_or_b_nan, a_eq_b), 0) != 0);
-#else
- return a_.f32[0] == b_.f32[0];
-#endif
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_comieq_ss(a, b) simde_mm_comieq_ss((a), (b))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-int simde_mm_comige_ss(simde__m128 a, simde__m128 b)
-{
-#if defined(SIMDE_X86_SSE_NATIVE)
- return _mm_comige_ss(a, b);
-#else
- simde__m128_private a_ = simde__m128_to_private(a),
- b_ = simde__m128_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- uint32x4_t a_not_nan = vceqq_f32(a_.neon_f32, a_.neon_f32);
- uint32x4_t b_not_nan = vceqq_f32(b_.neon_f32, b_.neon_f32);
- uint32x4_t a_and_b_not_nan = vandq_u32(a_not_nan, b_not_nan);
- uint32x4_t a_ge_b = vcgeq_f32(a_.neon_f32, b_.neon_f32);
- return !!(vgetq_lane_u32(vandq_u32(a_and_b_not_nan, a_ge_b), 0) != 0);
-#else
- return a_.f32[0] >= b_.f32[0];
-#endif
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_comige_ss(a, b) simde_mm_comige_ss((a), (b))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-int simde_mm_comigt_ss(simde__m128 a, simde__m128 b)
-{
-#if defined(SIMDE_X86_SSE_NATIVE)
- return _mm_comigt_ss(a, b);
-#else
- simde__m128_private a_ = simde__m128_to_private(a),
- b_ = simde__m128_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- uint32x4_t a_not_nan = vceqq_f32(a_.neon_f32, a_.neon_f32);
- uint32x4_t b_not_nan = vceqq_f32(b_.neon_f32, b_.neon_f32);
- uint32x4_t a_and_b_not_nan = vandq_u32(a_not_nan, b_not_nan);
- uint32x4_t a_gt_b = vcgtq_f32(a_.neon_f32, b_.neon_f32);
- return !!(vgetq_lane_u32(vandq_u32(a_and_b_not_nan, a_gt_b), 0) != 0);
-#else
- return a_.f32[0] > b_.f32[0];
-#endif
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_comigt_ss(a, b) simde_mm_comigt_ss((a), (b))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-int simde_mm_comile_ss(simde__m128 a, simde__m128 b)
-{
-#if defined(SIMDE_X86_SSE_NATIVE)
- return _mm_comile_ss(a, b);
-#else
- simde__m128_private a_ = simde__m128_to_private(a),
- b_ = simde__m128_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- uint32x4_t a_not_nan = vceqq_f32(a_.neon_f32, a_.neon_f32);
- uint32x4_t b_not_nan = vceqq_f32(b_.neon_f32, b_.neon_f32);
- uint32x4_t a_or_b_nan = vmvnq_u32(vandq_u32(a_not_nan, b_not_nan));
- uint32x4_t a_le_b = vcleq_f32(a_.neon_f32, b_.neon_f32);
- return !!(vgetq_lane_u32(vorrq_u32(a_or_b_nan, a_le_b), 0) != 0);
-#else
- return a_.f32[0] <= b_.f32[0];
-#endif
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_comile_ss(a, b) simde_mm_comile_ss((a), (b))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-int simde_mm_comilt_ss(simde__m128 a, simde__m128 b)
-{
-#if defined(SIMDE_X86_SSE_NATIVE)
- return _mm_comilt_ss(a, b);
-#else
- simde__m128_private a_ = simde__m128_to_private(a),
- b_ = simde__m128_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- uint32x4_t a_not_nan = vceqq_f32(a_.neon_f32, a_.neon_f32);
- uint32x4_t b_not_nan = vceqq_f32(b_.neon_f32, b_.neon_f32);
- uint32x4_t a_or_b_nan = vmvnq_u32(vandq_u32(a_not_nan, b_not_nan));
- uint32x4_t a_lt_b = vcltq_f32(a_.neon_f32, b_.neon_f32);
- return !!(vgetq_lane_u32(vorrq_u32(a_or_b_nan, a_lt_b), 0) != 0);
-#else
- return a_.f32[0] < b_.f32[0];
-#endif
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_comilt_ss(a, b) simde_mm_comilt_ss((a), (b))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-int simde_mm_comineq_ss(simde__m128 a, simde__m128 b)
-{
-#if defined(SIMDE_X86_SSE_NATIVE)
- return _mm_comineq_ss(a, b);
-#else
- simde__m128_private a_ = simde__m128_to_private(a),
- b_ = simde__m128_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- uint32x4_t a_not_nan = vceqq_f32(a_.neon_f32, a_.neon_f32);
- uint32x4_t b_not_nan = vceqq_f32(b_.neon_f32, b_.neon_f32);
- uint32x4_t a_and_b_not_nan = vandq_u32(a_not_nan, b_not_nan);
- uint32x4_t a_neq_b = vmvnq_u32(vceqq_f32(a_.neon_f32, b_.neon_f32));
- return !!(vgetq_lane_u32(vandq_u32(a_and_b_not_nan, a_neq_b), 0) != 0);
-#else
- return a_.f32[0] != b_.f32[0];
-#endif
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_comineq_ss(a, b) simde_mm_comineq_ss((a), (b))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_cvt_pi2ps(simde__m128 a, simde__m64 b)
-{
-#if defined(SIMDE_X86_SSE_NATIVE) && defined(SIMDE_X86_MMX_NATIVE)
- return _mm_cvt_pi2ps(a, b);
-#else
- simde__m128_private r_, a_ = simde__m128_to_private(a);
- simde__m64_private b_ = simde__m64_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_f32 = vcombine_f32(vcvt_f32_s32(b_.neon_i32),
- vget_high_f32(a_.neon_f32));
-#elif defined(SIMDE_CONVERT_VECTOR_)
- SIMDE_CONVERT_VECTOR_(r_.m64_private[0].f32, b_.i32);
- r_.m64_private[1] = a_.m64_private[1];
-
-#else
- r_.f32[0] = (simde_float32)b_.i32[0];
- r_.f32[1] = (simde_float32)b_.i32[1];
- r_.i32[2] = a_.i32[2];
- r_.i32[3] = a_.i32[3];
-#endif
-
- return simde__m128_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_cvt_pi2ps(a, b) simde_mm_cvt_pi2ps((a), b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_cvt_ps2pi(simde__m128 a)
-{
-#if defined(SIMDE_X86_SSE_NATIVE) && defined(SIMDE_X86_MMX_NATIVE)
- return _mm_cvt_ps2pi(a);
-#else
- simde__m64_private r_;
- simde__m128_private a_ = simde__m128_to_private(a);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i32 = vcvt_s32_f32(vget_low_f32(a_.neon_f32));
-#elif defined(SIMDE_CONVERT_VECTOR_) && !defined(__clang__)
- SIMDE_CONVERT_VECTOR_(r_.i32, a_.m64_private[0].f32);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i32) / sizeof(r_.i32[0])); i++) {
- r_.i32[i] = HEDLEY_STATIC_CAST(int32_t, a_.f32[i]);
- }
-#endif
-
- return simde__m64_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_cvt_ps2pi(a) simde_mm_cvt_ps2pi((a))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_cvt_si2ss(simde__m128 a, int32_t b)
-{
-#if defined(SIMDE_X86_SSE_NATIVE)
- return _mm_cvt_si2ss(a, b);
-#else
- simde__m128_private r_, a_ = simde__m128_to_private(a);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_f32 = vsetq_lane_f32((float)b, a_.neon_f32, 0);
-#else
- r_.f32[0] = HEDLEY_STATIC_CAST(simde_float32, b);
- r_.i32[1] = a_.i32[1];
- r_.i32[2] = a_.i32[2];
- r_.i32[3] = a_.i32[3];
-#endif
-
- return simde__m128_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_cvt_si2ss(a, b) simde_mm_cvt_si2ss((a), b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-int32_t simde_mm_cvt_ss2si(simde__m128 a)
-{
-#if defined(SIMDE_X86_SSE_NATIVE)
- return _mm_cvt_ss2si(a);
-#else
- simde__m128_private a_ = simde__m128_to_private(a);
-
-#if defined(SIMDE_ARM_NEON_A32V8_NATIVE) && !defined(SIMDE_BUG_GCC_95399)
- return vgetq_lane_s32(vcvtnq_s32_f32(a_.neon_f32), 0);
-#elif defined(simde_math_nearbyintf)
- return SIMDE_CONVERT_FTOI(int32_t, simde_math_nearbyintf(a_.f32[0]));
-#else
- HEDLEY_UNREACHABLE();
-#endif
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_cvt_ss2si(a) simde_mm_cvt_ss2si((a))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_cvtpi16_ps(simde__m64 a)
-{
-#if defined(SIMDE_X86_SSE_NATIVE) && defined(SIMDE_X86_MMX_NATIVE)
- return _mm_cvtpi16_ps(a);
-#else
- simde__m128_private r_;
- simde__m64_private a_ = simde__m64_to_private(a);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE) && 0 /* TODO */
- r_.neon_f32 = vmovl_s16(
- vget_low_s16(vuzp1q_s16(a_.neon_i16, vmovq_n_s16(0))));
-#elif defined(SIMDE_CONVERT_VECTOR_)
- SIMDE_CONVERT_VECTOR_(r_.f32, a_.i16);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.f32) / sizeof(r_.f32[0])); i++) {
- simde_float32 v = a_.i16[i];
- r_.f32[i] = v;
- }
-#endif
-
- return simde__m128_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_cvtpi16_ps(a) simde_mm_cvtpi16_ps(a)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_cvtpi32_ps(simde__m128 a, simde__m64 b)
-{
-#if defined(SIMDE_X86_SSE_NATIVE) && defined(SIMDE_X86_MMX_NATIVE)
- return _mm_cvtpi32_ps(a, b);
-#else
- simde__m128_private r_, a_ = simde__m128_to_private(a);
- simde__m64_private b_ = simde__m64_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_f32 = vcombine_f32(vcvt_f32_s32(b_.neon_i32),
- vget_high_f32(a_.neon_f32));
-#elif defined(SIMDE_CONVERT_VECTOR_)
- SIMDE_CONVERT_VECTOR_(r_.m64_private[0].f32, b_.i32);
- r_.m64_private[1] = a_.m64_private[1];
-#else
- r_.f32[0] = (simde_float32)b_.i32[0];
- r_.f32[1] = (simde_float32)b_.i32[1];
- r_.i32[2] = a_.i32[2];
- r_.i32[3] = a_.i32[3];
-#endif
-
- return simde__m128_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_cvtpi32_ps(a, b) simde_mm_cvtpi32_ps((a), b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_cvtpi32x2_ps(simde__m64 a, simde__m64 b)
-{
-#if defined(SIMDE_X86_SSE_NATIVE) && defined(SIMDE_X86_MMX_NATIVE)
- return _mm_cvtpi32x2_ps(a, b);
-#else
- simde__m128_private r_;
- simde__m64_private a_ = simde__m64_to_private(a),
- b_ = simde__m64_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_f32 = vcvtq_f32_s32(vcombine_s32(a_.neon_i32, b_.neon_i32));
-#elif defined(SIMDE_CONVERT_VECTOR_)
- SIMDE_CONVERT_VECTOR_(r_.m64_private[0].f32, a_.i32);
- SIMDE_CONVERT_VECTOR_(r_.m64_private[1].f32, b_.i32);
-#else
- r_.f32[0] = (simde_float32)a_.i32[0];
- r_.f32[1] = (simde_float32)a_.i32[1];
- r_.f32[2] = (simde_float32)b_.i32[0];
- r_.f32[3] = (simde_float32)b_.i32[1];
-#endif
-
- return simde__m128_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_cvtpi32x2_ps(a, b) simde_mm_cvtpi32x2_ps(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_cvtpi8_ps(simde__m64 a)
-{
-#if defined(SIMDE_X86_SSE_NATIVE) && defined(SIMDE_X86_MMX_NATIVE)
- return _mm_cvtpi8_ps(a);
-#else
- simde__m128_private r_;
- simde__m64_private a_ = simde__m64_to_private(a);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_f32 =
- vcvtq_f32_s32(vmovl_s16(vget_low_s16(vmovl_s8(a_.neon_i8))));
-#else
- r_.f32[0] = HEDLEY_STATIC_CAST(simde_float32, a_.i8[0]);
- r_.f32[1] = HEDLEY_STATIC_CAST(simde_float32, a_.i8[1]);
- r_.f32[2] = HEDLEY_STATIC_CAST(simde_float32, a_.i8[2]);
- r_.f32[3] = HEDLEY_STATIC_CAST(simde_float32, a_.i8[3]);
-#endif
-
- return simde__m128_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_cvtpi8_ps(a) simde_mm_cvtpi8_ps(a)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_cvtps_pi16(simde__m128 a)
-{
-#if defined(SIMDE_X86_SSE_NATIVE) && defined(SIMDE_X86_MMX_NATIVE)
- return _mm_cvtps_pi16(a);
-#else
- simde__m64_private r_;
- simde__m128_private a_ = simde__m128_to_private(a);
-
-#if defined(SIMDE_CONVERT_VECTOR_)
- SIMDE_CONVERT_VECTOR_(r_.i16, a_.f32);
-#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i16 = vmovn_s32(vcvtq_s32_f32(a_.neon_f32));
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i16) / sizeof(r_.i16[0])); i++) {
- r_.i16[i] = SIMDE_CONVERT_FTOI(int16_t, a_.f32[i]);
- }
-#endif
-
- return simde__m64_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_cvtps_pi16(a) simde_mm_cvtps_pi16((a))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_cvtps_pi32(simde__m128 a)
-{
-#if defined(SIMDE_X86_SSE_NATIVE) && defined(SIMDE_X86_MMX_NATIVE)
- return _mm_cvtps_pi32(a);
-#else
- simde__m64_private r_;
- simde__m128_private a_ = simde__m128_to_private(a);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i32 = vcvt_s32_f32(vget_low_f32(a_.neon_f32));
-#elif defined(SIMDE_CONVERT_VECTOR_)
- SIMDE_CONVERT_VECTOR_(r_.i32, a_.m64_private[0].f32);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i32) / sizeof(r_.i32[0])); i++) {
- r_.i32[i] = SIMDE_CONVERT_FTOI(int32_t, a_.f32[i]);
- }
-#endif
-
- return simde__m64_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_cvtps_pi32(a) simde_mm_cvtps_pi32((a))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_cvtps_pi8(simde__m128 a)
-{
-#if defined(SIMDE_X86_SSE_NATIVE) && defined(SIMDE_X86_MMX_NATIVE)
- return _mm_cvtps_pi8(a);
-#else
- simde__m64_private r_;
- simde__m128_private a_ = simde__m128_to_private(a);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- int16x4_t b = vmovn_s32(vcvtq_s32_f32(a_.neon_f32));
- int16x8_t c = vcombine_s16(b, vmov_n_s16(0));
- r_.neon_i8 = vmovn_s16(c);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(a_.f32) / sizeof(a_.f32[0])); i++) {
- r_.i8[i] = SIMDE_CONVERT_FTOI(int8_t, a_.f32[i]);
- }
- /* Note: the upper half is undefined */
-#endif
-
- return simde__m64_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_cvtps_pi8(a) simde_mm_cvtps_pi8((a))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_cvtpu16_ps(simde__m64 a)
-{
-#if defined(SIMDE_X86_SSE_NATIVE) && defined(SIMDE_X86_MMX_NATIVE)
- return _mm_cvtpu16_ps(a);
-#else
- simde__m128_private r_;
- simde__m64_private a_ = simde__m64_to_private(a);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_f32 = vcvtq_f32_u32(vmovl_u16(a_.neon_u16));
-#elif defined(SIMDE_CONVERT_VECTOR_)
- SIMDE_CONVERT_VECTOR_(r_.f32, a_.u16);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.f32) / sizeof(r_.f32[0])); i++) {
- r_.f32[i] = (simde_float32)a_.u16[i];
- }
-#endif
-
- return simde__m128_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_cvtpu16_ps(a) simde_mm_cvtpu16_ps(a)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_cvtpu8_ps(simde__m64 a)
-{
-#if defined(SIMDE_X86_SSE_NATIVE) && defined(SIMDE_X86_MMX_NATIVE)
- return _mm_cvtpu8_ps(a);
-#else
- simde__m128_private r_;
- simde__m64_private a_ = simde__m64_to_private(a);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_f32 =
- vcvtq_f32_u32(vmovl_u16(vget_low_u16(vmovl_u8(a_.neon_u8))));
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.f32) / sizeof(r_.f32[0])); i++) {
- r_.f32[i] = HEDLEY_STATIC_CAST(simde_float32, a_.u8[i]);
- }
-#endif
-
- return simde__m128_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_cvtpu8_ps(a) simde_mm_cvtpu8_ps(a)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_cvtsi32_ss(simde__m128 a, int32_t b)
-{
-#if defined(SIMDE_X86_SSE_NATIVE)
- return _mm_cvtsi32_ss(a, b);
-#else
- simde__m128_private r_;
- simde__m128_private a_ = simde__m128_to_private(a);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_f32 = vsetq_lane_f32((simde_float32)b, a_.neon_f32, 0);
-#else
- r_.f32[0] = HEDLEY_STATIC_CAST(simde_float32, b);
- SIMDE_VECTORIZE
- for (size_t i = 1; i < (sizeof(r_.i32) / sizeof(r_.i32[0])); i++) {
- r_.i32[i] = a_.i32[i];
- }
-#endif
-
- return simde__m128_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_cvtsi32_ss(a, b) simde_mm_cvtsi32_ss((a), b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_cvtsi64_ss(simde__m128 a, int64_t b)
-{
-#if defined(SIMDE_X86_SSE_NATIVE) && defined(SIMDE_ARCH_AMD64)
-#if !defined(__PGI)
- return _mm_cvtsi64_ss(a, b);
-#else
- return _mm_cvtsi64x_ss(a, b);
-#endif
-#else
- simde__m128_private r_;
- simde__m128_private a_ = simde__m128_to_private(a);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_f32 = vsetq_lane_f32((simde_float32)b, a_.neon_f32, 0);
-#else
- r_ = a_;
- r_.f32[0] = HEDLEY_STATIC_CAST(simde_float32, b);
-#endif
-
- return simde__m128_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_cvtsi64_ss(a, b) simde_mm_cvtsi64_ss((a), b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde_float32 simde_mm_cvtss_f32(simde__m128 a)
-{
-#if defined(SIMDE_X86_SSE_NATIVE)
- return _mm_cvtss_f32(a);
-#else
- simde__m128_private a_ = simde__m128_to_private(a);
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- return vgetq_lane_f32(a_.neon_f32, 0);
-#else
- return a_.f32[0];
-#endif
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_cvtss_f32(a) simde_mm_cvtss_f32((a))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-int32_t simde_mm_cvtss_si32(simde__m128 a)
-{
- return simde_mm_cvt_ss2si(a);
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_cvtss_si32(a) simde_mm_cvtss_si32((a))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-int64_t simde_mm_cvtss_si64(simde__m128 a)
-{
-#if defined(SIMDE_X86_SSE_NATIVE) && defined(SIMDE_ARCH_AMD64)
-#if !defined(__PGI)
- return _mm_cvtss_si64(a);
-#else
- return _mm_cvtss_si64x(a);
-#endif
-#else
- simde__m128_private a_ = simde__m128_to_private(a);
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- return SIMDE_CONVERT_FTOI(int64_t, vgetq_lane_f32(a_.neon_f32, 0));
-#else
- return SIMDE_CONVERT_FTOI(int64_t, a_.f32[0]);
-#endif
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_cvtss_si64(a) simde_mm_cvtss_si64((a))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_cvtt_ps2pi(simde__m128 a)
-{
-#if defined(SIMDE_X86_SSE_NATIVE) && defined(SIMDE_X86_MMX_NATIVE)
- return _mm_cvtt_ps2pi(a);
-#else
- simde__m64_private r_;
- simde__m128_private a_ = simde__m128_to_private(a);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i32 = vcvt_s32_f32(vget_low_f32(a_.neon_f32));
-#elif defined(SIMDE_CONVERT_VECTOR_)
- SIMDE_CONVERT_VECTOR_(r_.i32, a_.m64_private[0].f32);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.f32) / sizeof(r_.f32[0])); i++) {
- r_.i32[i] = SIMDE_CONVERT_FTOI(int32_t, a_.f32[i]);
- }
-#endif
-
- return simde__m64_from_private(r_);
-#endif
-}
-#define simde_mm_cvttps_pi32(a) simde_mm_cvtt_ps2pi(a)
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_cvtt_ps2pi(a) simde_mm_cvtt_ps2pi((a))
-#define _mm_cvttps_pi32(a) simde_mm_cvttps_pi32((a))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-int32_t simde_mm_cvtt_ss2si(simde__m128 a)
-{
-#if defined(SIMDE_X86_SSE_NATIVE)
- return _mm_cvtt_ss2si(a);
-#else
- simde__m128_private a_ = simde__m128_to_private(a);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- return SIMDE_CONVERT_FTOI(int32_t, vgetq_lane_f32(a_.neon_f32, 0));
-#else
- return SIMDE_CONVERT_FTOI(int32_t, a_.f32[0]);
-#endif
-#endif
-}
-#define simde_mm_cvttss_si32(a) simde_mm_cvtt_ss2si((a))
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_cvtt_ss2si(a) simde_mm_cvtt_ss2si((a))
-#define _mm_cvttss_si32(a) simde_mm_cvtt_ss2si((a))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-int64_t simde_mm_cvttss_si64(simde__m128 a)
-{
-#if defined(SIMDE_X86_SSE_NATIVE) && defined(SIMDE_ARCH_AMD64) && \
- !defined(_MSC_VER)
-#if defined(__PGI)
- return _mm_cvttss_si64x(a);
-#else
- return _mm_cvttss_si64(a);
-#endif
-#else
- simde__m128_private a_ = simde__m128_to_private(a);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- return SIMDE_CONVERT_FTOI(int64_t, vgetq_lane_f32(a_.neon_f32, 0));
-#else
- return SIMDE_CONVERT_FTOI(int64_t, a_.f32[0]);
-#endif
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_cvttss_si64(a) simde_mm_cvttss_si64((a))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_cmpord_ss(simde__m128 a, simde__m128 b)
-{
-#if defined(SIMDE_X86_SSE_NATIVE)
- return _mm_cmpord_ss(a, b);
-#elif defined(SIMDE_ASSUME_VECTORIZATION)
- return simde_mm_move_ss(a, simde_mm_cmpord_ps(a, b));
-#else
- simde__m128_private r_, a_ = simde__m128_to_private(a);
-
-#if defined(simde_math_isnanf)
- r_.u32[0] = (simde_math_isnanf(simde_mm_cvtss_f32(a)) ||
- simde_math_isnanf(simde_mm_cvtss_f32(b)))
- ? UINT32_C(0)
- : ~UINT32_C(0);
- SIMDE_VECTORIZE
- for (size_t i = 1; i < (sizeof(r_.f32) / sizeof(r_.f32[0])); i++) {
- r_.u32[i] = a_.u32[i];
- }
-#else
- HEDLEY_UNREACHABLE();
-#endif
-
- return simde__m128_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_cmpord_ss(a, b) simde_mm_cmpord_ss((a), (b))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_div_ps(simde__m128 a, simde__m128 b)
-{
-#if defined(SIMDE_X86_SSE_NATIVE)
- return _mm_div_ps(a, b);
-#else
- simde__m128_private r_, a_ = simde__m128_to_private(a),
- b_ = simde__m128_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
- r_.neon_f32 = vdivq_f32(a_.neon_f32, b_.neon_f32);
-#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- float32x4_t recip0 = vrecpeq_f32(b_.neon_f32);
- float32x4_t recip1 =
- vmulq_f32(recip0, vrecpsq_f32(recip0, b_.neon_f32));
- r_.neon_f32 = vmulq_f32(a_.neon_f32, recip1);
-#elif defined(SIMDE_WASM_SIMD128_NATIVE)
- r_.wasm_v128 = wasm_f32x4_div(a_.wasm_v128, b_.wasm_v128);
-#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
- r_.f32 = a_.f32 / b_.f32;
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.f32) / sizeof(r_.f32[0])); i++) {
- r_.f32[i] = a_.f32[i] / b_.f32[i];
- }
-#endif
-
- return simde__m128_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_div_ps(a, b) simde_mm_div_ps((a), (b))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_div_ss(simde__m128 a, simde__m128 b)
-{
-#if defined(SIMDE_X86_SSE_NATIVE)
- return _mm_div_ss(a, b);
-#elif defined(SIMDE_ASSUME_VECTORIZATION)
- return simde_mm_move_ss(a, simde_mm_div_ps(a, b));
-#else
- simde__m128_private r_, a_ = simde__m128_to_private(a),
- b_ = simde__m128_to_private(b);
-
- r_.f32[0] = a_.f32[0] / b_.f32[0];
- SIMDE_VECTORIZE
- for (size_t i = 1; i < (sizeof(r_.f32) / sizeof(r_.f32[0])); i++) {
- r_.f32[i] = a_.f32[i];
- }
-
- return simde__m128_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_div_ss(a, b) simde_mm_div_ss((a), (b))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-int16_t simde_mm_extract_pi16(simde__m64 a, const int imm8)
- SIMDE_REQUIRE_RANGE(imm8, 0, 3)
-{
- simde__m64_private a_ = simde__m64_to_private(a);
- return a_.i16[imm8];
-}
-#if defined(SIMDE_X86_SSE_NATIVE) && defined(SIMDE_X86_MMX_NATIVE) && \
- !defined(HEDLEY_PGI_VERSION)
-#if HEDLEY_HAS_WARNING("-Wvector-conversion")
-/* https://bugs.llvm.org/show_bug.cgi?id=44589 */
-#define simde_mm_extract_pi16(a, imm8) \
- (HEDLEY_DIAGNOSTIC_PUSH _Pragma( \
- "clang diagnostic ignored \"-Wvector-conversion\"") \
- HEDLEY_STATIC_CAST(int16_t, _mm_extract_pi16((a), (imm8))) \
- HEDLEY_DIAGNOSTIC_POP)
-#else
-#define simde_mm_extract_pi16(a, imm8) \
- HEDLEY_STATIC_CAST(int16_t, _mm_extract_pi16(a, imm8))
-#endif
-#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
-#define simde_mm_extract_pi16(a, imm8) \
- vget_lane_s16(simde__m64_to_private(a).neon_i16, imm8)
-#endif
-#define simde_m_pextrw(a, imm8) simde_mm_extract_pi16(a, imm8)
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_extract_pi16(a, imm8) simde_mm_extract_pi16((a), (imm8))
-#endif
-
-enum {
-#if defined(SIMDE_X86_SSE_NATIVE)
- SIMDE_MM_ROUND_NEAREST = _MM_ROUND_NEAREST,
- SIMDE_MM_ROUND_DOWN = _MM_ROUND_DOWN,
- SIMDE_MM_ROUND_UP = _MM_ROUND_UP,
- SIMDE_MM_ROUND_TOWARD_ZERO = _MM_ROUND_TOWARD_ZERO
-#else
- SIMDE_MM_ROUND_NEAREST
-#if defined(FE_TONEAREST)
- = FE_TONEAREST
-#endif
- ,
-
- SIMDE_MM_ROUND_DOWN
-#if defined(FE_DOWNWARD)
- = FE_DOWNWARD
-#endif
- ,
-
- SIMDE_MM_ROUND_UP
-#if defined(FE_UPWARD)
- = FE_UPWARD
-#endif
- ,
-
- SIMDE_MM_ROUND_TOWARD_ZERO
-#if defined(FE_TOWARDZERO)
- = FE_TOWARDZERO
-#endif
-#endif
-};
-
-SIMDE_FUNCTION_ATTRIBUTES
-unsigned int SIMDE_MM_GET_ROUNDING_MODE(void)
-{
-#if defined(SIMDE_X86_SSE_NATIVE)
- return _MM_GET_ROUNDING_MODE();
-#elif defined(SIMDE_HAVE_FENV_H)
- return HEDLEY_STATIC_CAST(unsigned int, fegetround());
-#else
- HEDLEY_UNREACHABLE();
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _MM_GET_ROUNDING_MODE() SIMDE_MM_GET_ROUNDING_MODE()
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-void SIMDE_MM_SET_ROUNDING_MODE(unsigned int a)
-{
-#if defined(SIMDE_X86_SSE_NATIVE)
- _MM_SET_ROUNDING_MODE(a);
-#elif defined(SIMDE_HAVE_FENV_H)
- fesetround(HEDLEY_STATIC_CAST(int, a));
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _MM_SET_ROUNDING_MODE(a) SIMDE_MM_SET_ROUNDING_MODE(a)
-#endif
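
A minimal usage sketch (editor's illustration, not part of the removed file): the rounding mode installed here also governs non-truncating conversions such as simde_mm_cvt_ss2si in the fenv fallback, so a caller that needs truncation can switch modes around the conversion and restore the previous mode afterwards. simde_mm_set_ss is assumed from later in this header.

static int32_t truncating_convert_sketch(simde_float32 x)
{
	/* Editorial sketch; assumes the SIMDe SSE declarations in this header. */
	unsigned int prev = SIMDE_MM_GET_ROUNDING_MODE();
	SIMDE_MM_SET_ROUNDING_MODE(SIMDE_MM_ROUND_TOWARD_ZERO);
	int32_t r = simde_mm_cvt_ss2si(simde_mm_set_ss(x)); /* e.g. 1.7f -> 1 */
	SIMDE_MM_SET_ROUNDING_MODE(prev); /* restore the caller's mode */
	return r;
}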
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_insert_pi16(simde__m64 a, int16_t i, const int imm8)
- SIMDE_REQUIRE_RANGE(imm8, 0, 3)
-{
- simde__m64_private r_, a_ = simde__m64_to_private(a);
-
- r_.i64[0] = a_.i64[0];
- r_.i16[imm8] = i;
-
- return simde__m64_from_private(r_);
-}
-#if defined(SIMDE_X86_SSE_NATIVE) && defined(SIMDE_X86_MMX_NATIVE) && \
- !defined(__PGI)
-#if HEDLEY_HAS_WARNING("-Wvector-conversion")
-/* https://bugs.llvm.org/show_bug.cgi?id=44589 */
-#define simde_mm_insert_pi16(a, i, imm8)                                 \
- (HEDLEY_DIAGNOSTIC_PUSH _Pragma( \
- "clang diagnostic ignored \"-Wvector-conversion\"")( \
- _mm_insert_pi16((a), (i), (imm8))) HEDLEY_DIAGNOSTIC_POP)
-#else
-#define simde_mm_insert_pi16(a, i, imm8) _mm_insert_pi16(a, i, imm8)
-#endif
-#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
-#define simde_mm_insert_pi16(a, i, imm8) \
- simde__m64_from_neon_i16( \
- vset_lane_s16((i), simde__m64_to_neon_i16(a), (imm8)))
-#endif
-#define simde_m_pinsrw(a, i, imm8) (simde_mm_insert_pi16(a, i, imm8))
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_insert_pi16(a, i, imm8) simde_mm_insert_pi16(a, i, imm8)
-#define _m_pinsrw(a, i, imm8) simde_mm_insert_pi16(a, i, imm8)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128
-simde_mm_load_ps(simde_float32 const mem_addr[HEDLEY_ARRAY_PARAM(4)])
-{
- simde_assert_aligned(16, mem_addr);
-
-#if defined(SIMDE_X86_SSE_NATIVE)
- return _mm_load_ps(mem_addr);
-#else
- simde__m128_private r_;
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_f32 = vld1q_f32(mem_addr);
-#elif defined(SIMDE_POWER_ALTIVEC_P7_NATIVE)
- r_.altivec_f32 = vec_vsx_ld(0, mem_addr);
-#elif defined(SIMDE_POWER_ALTIVEC_P5_NATIVE)
- r_.altivec_f32 = vec_ld(0, mem_addr);
-#else
- r_ = *SIMDE_ALIGN_CAST(simde__m128_private const *, mem_addr);
-#endif
-
- return simde__m128_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_load_ps(mem_addr) simde_mm_load_ps(mem_addr)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_load_ps1(simde_float32 const *mem_addr)
-{
-#if defined(SIMDE_X86_SSE_NATIVE)
- return _mm_load_ps1(mem_addr);
-#else
- simde__m128_private r_;
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_f32 = vld1q_dup_f32(mem_addr);
-#else
- r_ = simde__m128_to_private(simde_mm_set1_ps(*mem_addr));
-#endif
-
- return simde__m128_from_private(r_);
-#endif
-}
-#define simde_mm_load1_ps(mem_addr) simde_mm_load_ps1(mem_addr)
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_load_ps1(mem_addr) simde_mm_load_ps1(mem_addr)
-#define _mm_load1_ps(mem_addr) simde_mm_load_ps1(mem_addr)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_load_ss(simde_float32 const *mem_addr)
-{
-#if defined(SIMDE_X86_SSE_NATIVE)
- return _mm_load_ss(mem_addr);
-#else
- simde__m128_private r_;
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_f32 = vsetq_lane_f32(*mem_addr, vdupq_n_f32(0), 0);
-#else
- r_.f32[0] = *mem_addr;
- r_.i32[1] = 0;
- r_.i32[2] = 0;
- r_.i32[3] = 0;
-#endif
-
- return simde__m128_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_load_ss(mem_addr) simde_mm_load_ss(mem_addr)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_loadh_pi(simde__m128 a, simde__m64 const *mem_addr)
-{
-#if defined(SIMDE_X86_SSE_NATIVE) && defined(SIMDE_X86_MMX_NATIVE)
- return _mm_loadh_pi(a,
- HEDLEY_REINTERPRET_CAST(__m64 const *, mem_addr));
-#else
- simde__m128_private r_, a_ = simde__m128_to_private(a);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_f32 = vcombine_f32(
- vget_low_f32(a_.neon_f32),
- vld1_f32(HEDLEY_REINTERPRET_CAST(const float32_t *, mem_addr)));
-#else
- simde__m64_private b_ =
- *HEDLEY_REINTERPRET_CAST(simde__m64_private const *, mem_addr);
- r_.f32[0] = a_.f32[0];
- r_.f32[1] = a_.f32[1];
- r_.f32[2] = b_.f32[0];
- r_.f32[3] = b_.f32[1];
-#endif
-
- return simde__m128_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_loadh_pi(a, mem_addr) \
- simde_mm_loadh_pi((a), (simde__m64 const *)(mem_addr))
-#endif
-
-/* The SSE documentation says that there are no alignment requirements
- for mem_addr. Unfortunately they used the __m64 type for the argument
- which is supposed to be 8-byte aligned, so some compilers (like clang
- with -Wcast-align) will generate a warning if you try to cast, say,
- a simde_float32* to a simde__m64* for this function.
-
- I think the choice of argument type is unfortunate, but I do think we
- need to stick to it here. If there is demand I can always add something
- like simde_x_mm_loadl_f32(simde__m128, simde_float32 mem_addr[2]) */
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_loadl_pi(simde__m128 a, simde__m64 const *mem_addr)
-{
-#if defined(SIMDE_X86_SSE_NATIVE)
- return _mm_loadl_pi(a,
- HEDLEY_REINTERPRET_CAST(__m64 const *, mem_addr));
-#else
- simde__m128_private r_, a_ = simde__m128_to_private(a);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_f32 = vcombine_f32(
- vld1_f32(HEDLEY_REINTERPRET_CAST(const float32_t *, mem_addr)),
- vget_high_f32(a_.neon_f32));
-#else
- simde__m64_private b_;
- simde_memcpy(&b_, mem_addr, sizeof(b_));
- r_.i32[0] = b_.i32[0];
- r_.i32[1] = b_.i32[1];
- r_.i32[2] = a_.i32[2];
- r_.i32[3] = a_.i32[3];
-#endif
-
- return simde__m128_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_loadl_pi(a, mem_addr) \
- simde_mm_loadl_pi((a), (simde__m64 const *)(mem_addr))
-#endif
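
As the comment above simde_mm_loadl_pi notes, casting a simde_float32* to simde__m64* can trip -Wcast-align even though no alignment is actually required. A hedged workaround sketch (the helper name is hypothetical, not a SIMDe API): copy the two floats into a local simde__m64 and pass its address instead.

#include <string.h>

/* Hypothetical helper, editor's sketch only. */
static simde__m128 hypothetical_load_low2(simde__m128 a, const simde_float32 src[2])
{
	simde__m64 tmp;
	memcpy(&tmp, src, sizeof(tmp)); /* 8-byte copy, no pointer cast */
	return simde_mm_loadl_pi(a, &tmp);
}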
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128
-simde_mm_loadr_ps(simde_float32 const mem_addr[HEDLEY_ARRAY_PARAM(4)])
-{
- simde_assert_aligned(16, mem_addr);
-
-#if defined(SIMDE_X86_SSE_NATIVE)
- return _mm_loadr_ps(mem_addr);
-#else
- simde__m128_private r_,
- v_ = simde__m128_to_private(simde_mm_load_ps(mem_addr));
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_f32 = vrev64q_f32(v_.neon_f32);
- r_.neon_f32 = vextq_f32(r_.neon_f32, r_.neon_f32, 2);
-#elif defined(SIMDE_SHUFFLE_VECTOR_)
- r_.f32 = SIMDE_SHUFFLE_VECTOR_(32, 16, v_.f32, v_.f32, 3, 2, 1, 0);
-#else
- r_.f32[0] = v_.f32[3];
- r_.f32[1] = v_.f32[2];
- r_.f32[2] = v_.f32[1];
- r_.f32[3] = v_.f32[0];
-#endif
-
- return simde__m128_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_loadr_ps(mem_addr) simde_mm_loadr_ps(mem_addr)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128
-simde_mm_loadu_ps(simde_float32 const mem_addr[HEDLEY_ARRAY_PARAM(4)])
-{
-#if defined(SIMDE_X86_SSE_NATIVE)
- return _mm_loadu_ps(mem_addr);
-#else
- simde__m128_private r_;
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_f32 =
- vld1q_f32(HEDLEY_REINTERPRET_CAST(const float32_t *, mem_addr));
-#else
- r_.f32[0] = mem_addr[0];
- r_.f32[1] = mem_addr[1];
- r_.f32[2] = mem_addr[2];
- r_.f32[3] = mem_addr[3];
-#endif
-
- return simde__m128_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_loadu_ps(mem_addr) simde_mm_loadu_ps(mem_addr)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-void simde_mm_maskmove_si64(simde__m64 a, simde__m64 mask, int8_t *mem_addr)
-{
-#if defined(SIMDE_X86_SSE_NATIVE) && defined(SIMDE_X86_MMX_NATIVE)
- _mm_maskmove_si64(a, mask, HEDLEY_REINTERPRET_CAST(char *, mem_addr));
-#else
- simde__m64_private a_ = simde__m64_to_private(a),
- mask_ = simde__m64_to_private(mask);
-
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(a_.i8) / sizeof(a_.i8[0])); i++)
- if (mask_.i8[i] < 0)
- mem_addr[i] = a_.i8[i];
-#endif
-}
-#define simde_m_maskmovq(a, mask, mem_addr) \
- simde_mm_maskmove_si64(a, mask, mem_addr)
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_maskmove_si64(a, mask, mem_addr) \
- simde_mm_maskmove_si64( \
- (a), (mask), \
- SIMDE_CHECKED_REINTERPRET_CAST(int8_t *, char *, (mem_addr)))
-#endif
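
For illustration (editor's sketch; simde_mm_set_pi8 is assumed from SIMDe's MMX header): simde_mm_maskmove_si64 stores only the bytes whose corresponding mask byte has its high bit set, leaving the rest of the destination untouched.

static void masked_store_sketch(int8_t buf[8])
{
	simde__m64 data = simde_mm_set_pi8(8, 7, 6, 5, 4, 3, 2, 1);    /* bytes 0..7 = 1..8 */
	simde__m64 mask = simde_mm_set_pi8(0, -1, 0, -1, 0, -1, 0, -1); /* even bytes selected */
	simde_mm_maskmove_si64(data, mask, buf); /* writes bytes 0, 2, 4, 6; others keep their old values */
}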
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_max_pi16(simde__m64 a, simde__m64 b)
-{
-#if defined(SIMDE_X86_SSE_NATIVE) && defined(SIMDE_X86_MMX_NATIVE)
- return _mm_max_pi16(a, b);
-#else
- simde__m64_private r_, a_ = simde__m64_to_private(a),
- b_ = simde__m64_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i16 = vmax_s16(a_.neon_i16, b_.neon_i16);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i16) / sizeof(r_.i16[0])); i++) {
- r_.i16[i] = (a_.i16[i] > b_.i16[i]) ? a_.i16[i] : b_.i16[i];
- }
-#endif
-
- return simde__m64_from_private(r_);
-#endif
-}
-#define simde_m_pmaxsw(a, b) simde_mm_max_pi16(a, b)
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_max_pi16(a, b) simde_mm_max_pi16(a, b)
-#define _m_pmaxsw(a, b) simde_mm_max_pi16(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_max_ps(simde__m128 a, simde__m128 b)
-{
-#if defined(SIMDE_X86_SSE_NATIVE)
- return _mm_max_ps(a, b);
-#else
- simde__m128_private r_, a_ = simde__m128_to_private(a),
- b_ = simde__m128_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_f32 = vmaxq_f32(a_.neon_f32, b_.neon_f32);
-#elif defined(SIMDE_POWER_ALTIVEC_P5_NATIVE)
- r_.altivec_f32 = vec_max(a_.altivec_f32, b_.altivec_f32);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.f32) / sizeof(r_.f32[0])); i++) {
- r_.f32[i] = (a_.f32[i] > b_.f32[i]) ? a_.f32[i] : b_.f32[i];
- }
-#endif
-
- return simde__m128_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_max_ps(a, b) simde_mm_max_ps((a), (b))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_max_pu8(simde__m64 a, simde__m64 b)
-{
-#if defined(SIMDE_X86_SSE_NATIVE) && defined(SIMDE_X86_MMX_NATIVE)
- return _mm_max_pu8(a, b);
-#else
- simde__m64_private r_, a_ = simde__m64_to_private(a),
- b_ = simde__m64_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_u8 = vmax_u8(a_.neon_u8, b_.neon_u8);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.u8) / sizeof(r_.u8[0])); i++) {
- r_.u8[i] = (a_.u8[i] > b_.u8[i]) ? a_.u8[i] : b_.u8[i];
- }
-#endif
-
- return simde__m64_from_private(r_);
-#endif
-}
-#define simde_m_pmaxub(a, b) simde_mm_max_pu8(a, b)
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_max_pu8(a, b) simde_mm_max_pu8(a, b)
-#define _m_pmaxub(a, b) simde_mm_max_pu8(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_max_ss(simde__m128 a, simde__m128 b)
-{
-#if defined(SIMDE_X86_SSE_NATIVE)
- return _mm_max_ss(a, b);
-#elif defined(SIMDE_ASSUME_VECTORIZATION)
- return simde_mm_move_ss(a, simde_mm_max_ps(a, b));
-#else
- simde__m128_private r_, a_ = simde__m128_to_private(a),
- b_ = simde__m128_to_private(b);
-
- r_.f32[0] = (a_.f32[0] > b_.f32[0]) ? a_.f32[0] : b_.f32[0];
- r_.f32[1] = a_.f32[1];
- r_.f32[2] = a_.f32[2];
- r_.f32[3] = a_.f32[3];
-
- return simde__m128_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_max_ss(a, b) simde_mm_max_ss((a), (b))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_min_pi16(simde__m64 a, simde__m64 b)
-{
-#if defined(SIMDE_X86_SSE_NATIVE) && defined(SIMDE_X86_MMX_NATIVE)
- return _mm_min_pi16(a, b);
-#else
- simde__m64_private r_, a_ = simde__m64_to_private(a),
- b_ = simde__m64_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i16 = vmin_s16(a_.neon_i16, b_.neon_i16);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i16) / sizeof(r_.i16[0])); i++) {
- r_.i16[i] = (a_.i16[i] < b_.i16[i]) ? a_.i16[i] : b_.i16[i];
- }
-#endif
-
- return simde__m64_from_private(r_);
-#endif
-}
-#define simde_m_pminsw(a, b) simde_mm_min_pi16(a, b)
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_min_pi16(a, b) simde_mm_min_pi16(a, b)
-#define _m_pminsw(a, b) simde_mm_min_pi16(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_min_ps(simde__m128 a, simde__m128 b)
-{
-#if defined(SIMDE_X86_SSE_NATIVE)
- return _mm_min_ps(a, b);
-#else
- simde__m128_private r_, a_ = simde__m128_to_private(a),
- b_ = simde__m128_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_f32 = vminq_f32(a_.neon_f32, b_.neon_f32);
-#elif defined(SIMDE_POWER_ALTIVEC_P5_NATIVE)
- r_.altivec_f32 = vec_min(a_.altivec_f32, b_.altivec_f32);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.f32) / sizeof(r_.f32[0])); i++) {
- r_.f32[i] = (a_.f32[i] < b_.f32[i]) ? a_.f32[i] : b_.f32[i];
- }
-#endif
-
- return simde__m128_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_min_ps(a, b) simde_mm_min_ps((a), (b))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_min_pu8(simde__m64 a, simde__m64 b)
-{
-#if defined(SIMDE_X86_SSE_NATIVE) && defined(SIMDE_X86_MMX_NATIVE)
- return _mm_min_pu8(a, b);
-#else
- simde__m64_private r_, a_ = simde__m64_to_private(a),
- b_ = simde__m64_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_u8 = vmin_u8(a_.neon_u8, b_.neon_u8);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.u8) / sizeof(r_.u8[0])); i++) {
- r_.u8[i] = (a_.u8[i] < b_.u8[i]) ? a_.u8[i] : b_.u8[i];
- }
-#endif
-
- return simde__m64_from_private(r_);
-#endif
-}
-#define simde_m_pminub(a, b) simde_mm_min_pu8(a, b)
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_min_pu8(a, b) simde_mm_min_pu8(a, b)
-#define _m_pminub(a, b) simde_mm_min_pu8(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_min_ss(simde__m128 a, simde__m128 b)
-{
-#if defined(SIMDE_X86_SSE_NATIVE)
- return _mm_min_ss(a, b);
-#elif defined(SIMDE_ASSUME_VECTORIZATION)
- return simde_mm_move_ss(a, simde_mm_min_ps(a, b));
-#else
- simde__m128_private r_, a_ = simde__m128_to_private(a),
- b_ = simde__m128_to_private(b);
-
- r_.f32[0] = (a_.f32[0] < b_.f32[0]) ? a_.f32[0] : b_.f32[0];
- r_.f32[1] = a_.f32[1];
- r_.f32[2] = a_.f32[2];
- r_.f32[3] = a_.f32[3];
-
- return simde__m128_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_min_ss(a, b) simde_mm_min_ss((a), (b))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_movehl_ps(simde__m128 a, simde__m128 b)
-{
-#if defined(SIMDE_X86_SSE_NATIVE)
- return _mm_movehl_ps(a, b);
-#else
- simde__m128_private r_, a_ = simde__m128_to_private(a),
- b_ = simde__m128_to_private(b);
-
-#if defined(SIMDE_SHUFFLE_VECTOR_)
- r_.f32 = SIMDE_SHUFFLE_VECTOR_(32, 16, a_.f32, b_.f32, 6, 7, 2, 3);
-#else
- r_.f32[0] = b_.f32[2];
- r_.f32[1] = b_.f32[3];
- r_.f32[2] = a_.f32[2];
- r_.f32[3] = a_.f32[3];
-#endif
-
- return simde__m128_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_movehl_ps(a, b) simde_mm_movehl_ps((a), (b))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_movelh_ps(simde__m128 a, simde__m128 b)
-{
-#if defined(SIMDE_X86_SSE_NATIVE)
- return _mm_movelh_ps(a, b);
-#else
- simde__m128_private r_, a_ = simde__m128_to_private(a),
- b_ = simde__m128_to_private(b);
-
-#if defined(SIMDE_SHUFFLE_VECTOR_)
- r_.f32 = SIMDE_SHUFFLE_VECTOR_(32, 16, a_.f32, b_.f32, 0, 1, 4, 5);
-#else
- r_.f32[0] = a_.f32[0];
- r_.f32[1] = a_.f32[1];
- r_.f32[2] = b_.f32[0];
- r_.f32[3] = b_.f32[1];
-#endif
-
- return simde__m128_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_movelh_ps(a, b) simde_mm_movelh_ps((a), (b))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-int simde_mm_movemask_pi8(simde__m64 a)
-{
-#if defined(SIMDE_X86_SSE_NATIVE) && defined(SIMDE_X86_MMX_NATIVE)
- return _mm_movemask_pi8(a);
-#else
- simde__m64_private a_ = simde__m64_to_private(a);
- int r = 0;
- const size_t nmemb = sizeof(a_.i8) / sizeof(a_.i8[0]);
-
- SIMDE_VECTORIZE_REDUCTION(| : r)
- for (size_t i = 0; i < nmemb; i++) {
- r |= (a_.u8[nmemb - 1 - i] >> 7) << (nmemb - 1 - i);
- }
-
- return r;
-#endif
-}
-#define simde_m_pmovmskb(a) simde_mm_movemask_pi8(a)
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_movemask_pi8(a) simde_mm_movemask_pi8(a)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-int simde_mm_movemask_ps(simde__m128 a)
-{
-#if defined(SIMDE_X86_SSE_NATIVE) && defined(SIMDE_X86_MMX_NATIVE)
- return _mm_movemask_ps(a);
-#else
- int r = 0;
- simde__m128_private a_ = simde__m128_to_private(a);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- /* TODO: check to see if NEON version is faster than the portable version */
- static const uint32x4_t movemask = {1, 2, 4, 8};
- static const uint32x4_t highbit = {0x80000000, 0x80000000, 0x80000000,
- 0x80000000};
- uint32x4_t t0 = a_.neon_u32;
- uint32x4_t t1 = vtstq_u32(t0, highbit);
- uint32x4_t t2 = vandq_u32(t1, movemask);
- uint32x2_t t3 = vorr_u32(vget_low_u32(t2), vget_high_u32(t2));
- r = vget_lane_u32(t3, 0) | vget_lane_u32(t3, 1);
-#else
- SIMDE_VECTORIZE_REDUCTION(| : r)
- for (size_t i = 0; i < sizeof(a_.u32) / sizeof(a_.u32[0]); i++) {
- r |= (a_.u32[i] >> ((sizeof(a_.u32[i]) * CHAR_BIT) - 1)) << i;
- }
-#endif
-
- return r;
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_movemask_ps(a) simde_mm_movemask_ps((a))
-#endif
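
A small usage sketch (editor's illustration; simde_mm_set_ps is assumed from earlier in this header): the result packs one sign bit per lane, with lane 0 in bit 0.

static int sign_mask_sketch(void)
{
	simde__m128 v = simde_mm_set_ps(-4.0f, 3.0f, -2.0f, 1.0f); /* lanes 0..3 = 1, -2, 3, -4 */
	return simde_mm_movemask_ps(v); /* lanes 1 and 3 are negative, so 0xA */
}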
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_mul_ps(simde__m128 a, simde__m128 b)
-{
-#if defined(SIMDE_X86_SSE_NATIVE)
- return _mm_mul_ps(a, b);
-#else
- simde__m128_private r_, a_ = simde__m128_to_private(a),
- b_ = simde__m128_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_f32 = vmulq_f32(a_.neon_f32, b_.neon_f32);
-#elif defined(SIMDE_WASM_SIMD128_NATIVE)
- r_.wasm_v128 = wasm_f32x4_mul(a_.wasm_v128, b_.wasm_v128);
-#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
- r_.f32 = a_.f32 * b_.f32;
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.f32) / sizeof(r_.f32[0])); i++) {
- r_.f32[i] = a_.f32[i] * b_.f32[i];
- }
-#endif
-
- return simde__m128_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_mul_ps(a, b) simde_mm_mul_ps((a), (b))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_mul_ss(simde__m128 a, simde__m128 b)
-{
-#if defined(SIMDE_X86_SSE_NATIVE)
- return _mm_mul_ss(a, b);
-#elif defined(SIMDE_ASSUME_VECTORIZATION)
- return simde_mm_move_ss(a, simde_mm_mul_ps(a, b));
-#else
- simde__m128_private r_, a_ = simde__m128_to_private(a),
- b_ = simde__m128_to_private(b);
-
- r_.f32[0] = a_.f32[0] * b_.f32[0];
- r_.f32[1] = a_.f32[1];
- r_.f32[2] = a_.f32[2];
- r_.f32[3] = a_.f32[3];
-
- return simde__m128_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_mul_ss(a, b) simde_mm_mul_ss((a), (b))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_mulhi_pu16(simde__m64 a, simde__m64 b)
-{
-#if defined(SIMDE_X86_SSE_NATIVE) && defined(SIMDE_X86_MMX_NATIVE)
- return _mm_mulhi_pu16(a, b);
-#else
- simde__m64_private r_, a_ = simde__m64_to_private(a),
- b_ = simde__m64_to_private(b);
-
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.u16) / sizeof(r_.u16[0])); i++) {
- r_.u16[i] = HEDLEY_STATIC_CAST(
- uint16_t, ((HEDLEY_STATIC_CAST(uint32_t, a_.u16[i]) *
- HEDLEY_STATIC_CAST(uint32_t, b_.u16[i])) >>
- UINT32_C(16)));
- }
-
- return simde__m64_from_private(r_);
-#endif
-}
-#define simde_m_pmulhuw(a, b) simde_mm_mulhi_pu16(a, b)
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_mulhi_pu16(a, b) simde_mm_mulhi_pu16(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_or_ps(simde__m128 a, simde__m128 b)
-{
-#if defined(SIMDE_X86_SSE_NATIVE)
- return _mm_or_ps(a, b);
-#else
- simde__m128_private r_, a_ = simde__m128_to_private(a),
- b_ = simde__m128_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i32 = vorrq_s32(a_.neon_i32, b_.neon_i32);
-#elif defined(SIMDE_POWER_ALTIVEC_P5_NATIVE)
- r_.altivec_i32 = vec_or(a_.altivec_i32, b_.altivec_i32);
-#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
- r_.i32f = a_.i32f | b_.i32f;
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.u32) / sizeof(r_.u32[0])); i++) {
- r_.u32[i] = a_.u32[i] | b_.u32[i];
- }
-#endif
-
- return simde__m128_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_or_ps(a, b) simde_mm_or_ps((a), (b))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-void simde_mm_prefetch(char const *p, int i)
-{
- (void)p;
- (void)i;
-}
-#if defined(SIMDE_X86_SSE_NATIVE)
-#define simde_mm_prefetch(p, i) _mm_prefetch(p, i)
-#endif
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_prefetch(p, i) simde_mm_prefetch(p, i)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_rcp_ps(simde__m128 a)
-{
-#if defined(SIMDE_X86_SSE_NATIVE)
- return _mm_rcp_ps(a);
-#else
- simde__m128_private r_, a_ = simde__m128_to_private(a);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- float32x4_t recip = vrecpeq_f32(a_.neon_f32);
-
-#if SIMDE_ACCURACY_PREFERENCE > 0
- for (int i = 0; i < SIMDE_ACCURACY_PREFERENCE; ++i) {
- recip = vmulq_f32(recip, vrecpsq_f32(recip, a_.neon_f32));
- }
-#endif
-
- r_.neon_f32 = recip;
-#elif defined(SIMDE_POWER_ALTIVEC_P5_NATIVE)
- r_.altivec_f32 = vec_re(a_.altivec_f32);
-#elif defined(SIMDE_VECTOR_SUBSCRIPT_SCALAR)
- r_.f32 = 1.0f / a_.f32;
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.f32) / sizeof(r_.f32[0])); i++) {
- r_.f32[i] = 1.0f / a_.f32[i];
- }
-#endif
-
- return simde__m128_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_rcp_ps(a) simde_mm_rcp_ps((a))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_rcp_ss(simde__m128 a)
-{
-#if defined(SIMDE_X86_SSE_NATIVE)
- return _mm_rcp_ss(a);
-#elif defined(SIMDE_ASSUME_VECTORIZATION)
- return simde_mm_move_ss(a, simde_mm_rcp_ps(a));
-#else
- simde__m128_private r_, a_ = simde__m128_to_private(a);
-
- r_.f32[0] = 1.0f / a_.f32[0];
- r_.f32[1] = a_.f32[1];
- r_.f32[2] = a_.f32[2];
- r_.f32[3] = a_.f32[3];
-
- return simde__m128_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_rcp_ss(a) simde_mm_rcp_ss((a))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_rsqrt_ps(simde__m128 a)
-{
-#if defined(SIMDE_X86_SSE_NATIVE)
- return _mm_rsqrt_ps(a);
-#else
- simde__m128_private r_, a_ = simde__m128_to_private(a);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_f32 = vrsqrteq_f32(a_.neon_f32);
-#elif defined(__STDC_IEC_559__)
- /* https://basesandframes.files.wordpress.com/2020/04/even_faster_math_functions_green_2020.pdf
- Pages 100 - 103 */
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.f32) / sizeof(r_.f32[0])); i++) {
-#if SIMDE_ACCURACY_PREFERENCE <= 0
- r_.i32[i] = INT32_C(0x5F37624F) - (a_.i32[i] >> 1);
-#else
- simde_float32 x = a_.f32[i];
- simde_float32 xhalf = SIMDE_FLOAT32_C(0.5) * x;
- int32_t ix;
-
- simde_memcpy(&ix, &x, sizeof(ix));
-
-#if SIMDE_ACCURACY_PREFERENCE == 1
- ix = INT32_C(0x5F375A82) - (ix >> 1);
-#else
- ix = INT32_C(0x5F37599E) - (ix >> 1);
-#endif
-
- simde_memcpy(&x, &ix, sizeof(x));
-
-#if SIMDE_ACCURACY_PREFERENCE >= 2
- x = x * (SIMDE_FLOAT32_C(1.5008909) - xhalf * x * x);
-#endif
- x = x * (SIMDE_FLOAT32_C(1.5008909) - xhalf * x * x);
-
- r_.f32[i] = x;
-#endif
- }
-#elif defined(simde_math_sqrtf)
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.f32) / sizeof(r_.f32[0])); i++) {
- r_.f32[i] = 1.0f / simde_math_sqrtf(a_.f32[i]);
- }
-#else
- HEDLEY_UNREACHABLE();
-#endif
-
- return simde__m128_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_rsqrt_ps(a) simde_mm_rsqrt_ps((a))
-#endif
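
The non-NEON fallback above is the integer bit-trick estimate followed by Newton-Raphson refinement; flattened to scalar C for one accuracy step (editor's restatement of the loop body, using the constants shown above) it reads:

#include <stdint.h>
#include <string.h>

/* Editorial restatement of the per-lane fallback math, one refinement step. */
static float rsqrt_scalar_sketch(float x)
{
	float xhalf = 0.5f * x;
	int32_t ix;
	memcpy(&ix, &x, sizeof(ix));          /* reinterpret the float's bits */
	ix = INT32_C(0x5F375A82) - (ix >> 1); /* initial estimate via bit trick */
	memcpy(&x, &ix, sizeof(x));
	return x * (1.5008909f - xhalf * x * x); /* one Newton-Raphson step */
}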
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_rsqrt_ss(simde__m128 a)
-{
-#if defined(SIMDE_X86_SSE_NATIVE)
- return _mm_rsqrt_ss(a);
-#elif defined(SIMDE_ASSUME_VECTORIZATION)
- return simde_mm_move_ss(a, simde_mm_rsqrt_ps(a));
-#else
- simde__m128_private r_, a_ = simde__m128_to_private(a);
-
-#if defined(__STDC_IEC_559__)
- {
-#if SIMDE_ACCURACY_PREFERENCE <= 0
- r_.i32[0] = INT32_C(0x5F37624F) - (a_.i32[0] >> 1);
-#else
- simde_float32 x = a_.f32[0];
- simde_float32 xhalf = SIMDE_FLOAT32_C(0.5) * x;
- int32_t ix;
-
- simde_memcpy(&ix, &x, sizeof(ix));
-
-#if SIMDE_ACCURACY_PREFERENCE == 1
- ix = INT32_C(0x5F375A82) - (ix >> 1);
-#else
- ix = INT32_C(0x5F37599E) - (ix >> 1);
-#endif
-
- simde_memcpy(&x, &ix, sizeof(x));
-
-#if SIMDE_ACCURACY_PREFERENCE >= 2
- x = x * (SIMDE_FLOAT32_C(1.5008909) - xhalf * x * x);
-#endif
- x = x * (SIMDE_FLOAT32_C(1.5008909) - xhalf * x * x);
-
- r_.f32[0] = x;
-#endif
- }
- r_.f32[1] = a_.f32[1];
- r_.f32[2] = a_.f32[2];
- r_.f32[3] = a_.f32[3];
-#elif defined(simde_math_sqrtf)
- r_.f32[0] = 1.0f / simde_math_sqrtf(a_.f32[0]);
- r_.f32[1] = a_.f32[1];
- r_.f32[2] = a_.f32[2];
- r_.f32[3] = a_.f32[3];
-#else
- HEDLEY_UNREACHABLE();
-#endif
-
- return simde__m128_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_rsqrt_ss(a) simde_mm_rsqrt_ss((a))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_sad_pu8(simde__m64 a, simde__m64 b)
-{
-#if defined(SIMDE_X86_SSE_NATIVE) && defined(SIMDE_X86_MMX_NATIVE)
- return _mm_sad_pu8(a, b);
-#else
- simde__m64_private r_, a_ = simde__m64_to_private(a),
- b_ = simde__m64_to_private(b);
- uint16_t sum = 0;
-
-#if defined(SIMDE_HAVE_STDLIB_H)
- SIMDE_VECTORIZE_REDUCTION(+ : sum)
- for (size_t i = 0; i < (sizeof(r_.u8) / sizeof(r_.u8[0])); i++) {
- sum += HEDLEY_STATIC_CAST(uint8_t, abs(a_.u8[i] - b_.u8[i]));
- }
-
- r_.i16[0] = HEDLEY_STATIC_CAST(int16_t, sum);
- r_.i16[1] = 0;
- r_.i16[2] = 0;
- r_.i16[3] = 0;
-#else
- HEDLEY_UNREACHABLE();
-#endif
-
- return simde__m64_from_private(r_);
-#endif
-}
-#define simde_m_psadbw(a, b) simde_mm_sad_pu8(a, b)
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_sad_pu8(a, b) simde_mm_sad_pu8(a, b)
-#define _m_psadbw(a, b) simde_mm_sad_pu8(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_set_ss(simde_float32 a)
-{
-#if defined(SIMDE_X86_SSE_NATIVE)
- return _mm_set_ss(a);
-#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- return vsetq_lane_f32(a, vdupq_n_f32(SIMDE_FLOAT32_C(0.0)), 0);
-#else
- return simde_mm_set_ps(SIMDE_FLOAT32_C(0.0), SIMDE_FLOAT32_C(0.0),
- SIMDE_FLOAT32_C(0.0), a);
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_set_ss(a) simde_mm_set_ss(a)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_setr_ps(simde_float32 e3, simde_float32 e2,
- simde_float32 e1, simde_float32 e0)
-{
-#if defined(SIMDE_X86_SSE_NATIVE)
- return _mm_setr_ps(e3, e2, e1, e0);
-#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- SIMDE_ALIGN(16) simde_float32 data[4] = {e3, e2, e1, e0};
- return vld1q_f32(data);
-#else
- return simde_mm_set_ps(e0, e1, e2, e3);
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_setr_ps(e3, e2, e1, e0) simde_mm_setr_ps(e3, e2, e1, e0)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_setzero_ps(void)
-{
-#if defined(SIMDE_X86_SSE_NATIVE)
- return _mm_setzero_ps();
-#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- return vdupq_n_f32(SIMDE_FLOAT32_C(0.0));
-#elif defined(SIMDE_POWER_ALTIVEC_P5_NATIVE)
- return vec_splats((float)0);
-#else
- simde__m128 r;
- simde_memset(&r, 0, sizeof(r));
- return r;
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_setzero_ps() simde_mm_setzero_ps()
-#endif
-
-#if defined(SIMDE_DIAGNOSTIC_DISABLE_UNINITIALIZED_)
-HEDLEY_DIAGNOSTIC_PUSH
-SIMDE_DIAGNOSTIC_DISABLE_UNINITIALIZED_
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_undefined_ps(void)
-{
- simde__m128_private r_;
-
-#if defined(SIMDE_HAVE_UNDEFINED128)
- r_.n = _mm_undefined_ps();
-#elif !defined(SIMDE_DIAGNOSTIC_DISABLE_UNINITIALIZED_)
- r_ = simde__m128_to_private(simde_mm_setzero_ps());
-#endif
-
- return simde__m128_from_private(r_);
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_undefined_ps() simde_mm_undefined_ps()
-#endif
-
-#if defined(SIMDE_DIAGNOSTIC_DISABLE_UNINITIALIZED_)
-HEDLEY_DIAGNOSTIC_POP
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_x_mm_setone_ps(void)
-{
- simde__m128 t = simde_mm_setzero_ps();
- return simde_mm_cmpeq_ps(t, t);
-}
-
-SIMDE_FUNCTION_ATTRIBUTES
-void simde_mm_sfence(void)
-{
- /* TODO: Use Hedley. */
-#if defined(SIMDE_X86_SSE_NATIVE)
- _mm_sfence();
-#elif defined(__GNUC__) && \
- ((__GNUC__ > 4) || (__GNUC__ == 4 && __GNUC_MINOR__ >= 7))
- __atomic_thread_fence(__ATOMIC_SEQ_CST);
-#elif !defined(__INTEL_COMPILER) && defined(__STDC_VERSION__) && \
- (__STDC_VERSION__ >= 201112L) && !defined(__STDC_NO_ATOMICS__)
-#if defined(__GNUC__) && (__GNUC__ == 4) && (__GNUC_MINOR__ < 9)
- __atomic_thread_fence(__ATOMIC_SEQ_CST);
-#else
- atomic_thread_fence(memory_order_seq_cst);
-#endif
-#elif defined(_MSC_VER)
- MemoryBarrier();
-#elif HEDLEY_HAS_EXTENSION(c_atomic)
- __c11_atomic_thread_fence(__ATOMIC_SEQ_CST);
-#elif defined(__GNUC__) && \
- ((__GNUC__ > 4) || (__GNUC__ == 4 && __GNUC_MINOR__ >= 1))
- __sync_synchronize();
-#elif defined(_OPENMP)
-#pragma omp critical(simde_mm_sfence_)
- {
- }
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_sfence() simde_mm_sfence()
-#endif
-
-#define SIMDE_MM_SHUFFLE(z, y, x, w) \
- (((z) << 6) | ((y) << 4) | ((x) << 2) | (w))
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _MM_SHUFFLE(z, y, x, w) SIMDE_MM_SHUFFLE(z, y, x, w)
-#endif
-
-#if defined(SIMDE_X86_SSE_NATIVE) && defined(SIMDE_X86_MMX_NATIVE) && \
- !defined(__PGI)
-#define simde_mm_shuffle_pi16(a, imm8) _mm_shuffle_pi16(a, imm8)
-#elif defined(SIMDE_SHUFFLE_VECTOR_)
-#define simde_mm_shuffle_pi16(a, imm8) \
- (__extension__({ \
- const simde__m64_private simde__tmp_a_ = \
- simde__m64_to_private(a); \
- simde__m64_from_private((simde__m64_private){ \
- .i16 = SIMDE_SHUFFLE_VECTOR_( \
- 16, 8, (simde__tmp_a_).i16, \
- (simde__tmp_a_).i16, (((imm8)) & 3), \
- (((imm8) >> 2) & 3), (((imm8) >> 4) & 3), \
- (((imm8) >> 6) & 3))}); \
- }))
-#else
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_shuffle_pi16(simde__m64 a, const int imm8)
- SIMDE_REQUIRE_RANGE(imm8, 0, 255)
-{
- simde__m64_private r_;
- simde__m64_private a_ = simde__m64_to_private(a);
-
- for (size_t i = 0; i < sizeof(r_.i16) / sizeof(r_.i16[0]); i++) {
- r_.i16[i] = a_.i16[(imm8 >> (i * 2)) & 3];
- }
-
- HEDLEY_DIAGNOSTIC_PUSH
-#if HEDLEY_HAS_WARNING("-Wconditional-uninitialized")
-#pragma clang diagnostic ignored "-Wconditional-uninitialized"
-#endif
- return simde__m64_from_private(r_);
- HEDLEY_DIAGNOSTIC_POP
-}
-#endif
-#if defined(SIMDE_X86_SSE_NATIVE) && !defined(__PGI)
-#define simde_m_pshufw(a, imm8) _m_pshufw(a, imm8)
-#else
-#define simde_m_pshufw(a, imm8) simde_mm_shuffle_pi16(a, imm8)
-#endif
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_shuffle_pi16(a, imm8) simde_mm_shuffle_pi16(a, imm8)
-#define _m_pshufw(a, imm8) simde_mm_shuffle_pi16(a, imm8)
-#endif
-
-#if defined(SIMDE_X86_SSE_NATIVE) && !defined(__PGI)
-#define simde_mm_shuffle_ps(a, b, imm8) _mm_shuffle_ps(a, b, imm8)
-#elif defined(SIMDE_SHUFFLE_VECTOR_)
-#define simde_mm_shuffle_ps(a, b, imm8) \
- (__extension__({ \
- simde__m128_from_private((simde__m128_private){ \
- .f32 = SIMDE_SHUFFLE_VECTOR_( \
- 32, 16, simde__m128_to_private(a).f32, \
- simde__m128_to_private(b).f32, (((imm8)) & 3), \
- (((imm8) >> 2) & 3), (((imm8) >> 4) & 3) + 4, \
- (((imm8) >> 6) & 3) + 4)}); \
- }))
-#else
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_shuffle_ps(simde__m128 a, simde__m128 b, const int imm8)
- SIMDE_REQUIRE_RANGE(imm8, 0, 255)
-{
- simde__m128_private r_, a_ = simde__m128_to_private(a),
- b_ = simde__m128_to_private(b);
-
- r_.f32[0] = a_.f32[(imm8 >> 0) & 3];
- r_.f32[1] = a_.f32[(imm8 >> 2) & 3];
- r_.f32[2] = b_.f32[(imm8 >> 4) & 3];
- r_.f32[3] = b_.f32[(imm8 >> 6) & 3];
-
- return simde__m128_from_private(r_);
-}
-#endif
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_shuffle_ps(a, b, imm8) simde_mm_shuffle_ps((a), (b), imm8)
-#endif
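
A short sketch of how the immediate drives the shuffle (editor's illustration; simde_mm_set_ps is assumed from earlier in this header): lanes 0-1 of the result are selected from the first argument and lanes 2-3 from the second, two selector bits per lane.

static simde__m128 reverse_lanes_sketch(void)
{
	simde__m128 a = simde_mm_set_ps(4.0f, 3.0f, 2.0f, 1.0f); /* lanes 0..3 = 1, 2, 3, 4 */
	/* SIMDE_MM_SHUFFLE(0, 1, 2, 3) == 0x1B */
	return simde_mm_shuffle_ps(a, a, SIMDE_MM_SHUFFLE(0, 1, 2, 3)); /* lanes become 4, 3, 2, 1 */
}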
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_sqrt_ps(simde__m128 a)
-{
-#if defined(SIMDE_X86_SSE_NATIVE)
- return _mm_sqrt_ps(a);
-#else
- simde__m128_private r_, a_ = simde__m128_to_private(a);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- float32x4_t recipsq = vrsqrteq_f32(a_.neon_f32);
- float32x4_t sq = vrecpeq_f32(recipsq);
- /* ??? use step versions of both sqrt and recip for better accuracy? */
- r_.neon_f32 = sq;
-#elif defined(simde_math_sqrtf)
- SIMDE_VECTORIZE
- for (size_t i = 0; i < sizeof(r_.f32) / sizeof(r_.f32[0]); i++) {
- r_.f32[i] = simde_math_sqrtf(a_.f32[i]);
- }
-#else
- HEDLEY_UNREACHABLE();
-#endif
-
- return simde__m128_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_sqrt_ps(a) simde_mm_sqrt_ps((a))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_sqrt_ss(simde__m128 a)
-{
-#if defined(SIMDE_X86_SSE_NATIVE)
- return _mm_sqrt_ss(a);
-#elif defined(SIMDE_ASSUME_VECTORIZATION)
- return simde_mm_move_ss(a, simde_mm_sqrt_ps(a));
-#else
- simde__m128_private r_, a_ = simde__m128_to_private(a);
-
-#if defined(simde_math_sqrtf)
- r_.f32[0] = simde_math_sqrtf(a_.f32[0]);
- r_.f32[1] = a_.f32[1];
- r_.f32[2] = a_.f32[2];
- r_.f32[3] = a_.f32[3];
-#else
- HEDLEY_UNREACHABLE();
-#endif
-
- return simde__m128_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_sqrt_ss(a) simde_mm_sqrt_ss((a))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-void simde_mm_store_ps(simde_float32 mem_addr[4], simde__m128 a)
-{
- simde_assert_aligned(16, mem_addr);
-
-#if defined(SIMDE_X86_SSE_NATIVE)
- _mm_store_ps(mem_addr, a);
-#else
- simde__m128_private a_ = simde__m128_to_private(a);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- vst1q_f32(mem_addr, a_.neon_f32);
-#elif defined(SIMDE_POWER_ALTIVEC_P7_NATIVE)
-	vec_vsx_st(a_.altivec_f32, 0, mem_addr);
-#elif defined(SIMDE_POWER_ALTIVEC_P5_NATIVE)
-	vec_st(a_.altivec_f32, 0, mem_addr);
-#elif defined(SIMDE_WASM_SIMD128_NATIVE)
- wasm_v128_store(mem_addr, a_.wasm_v128);
-#else
- SIMDE_VECTORIZE_ALIGNED(mem_addr : 16)
- for (size_t i = 0; i < sizeof(a_.f32) / sizeof(a_.f32[0]); i++) {
- mem_addr[i] = a_.f32[i];
- }
-#endif
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_store_ps(mem_addr, a) \
- simde_mm_store_ps(SIMDE_CHECKED_REINTERPRET_CAST( \
- float *, simde_float32 *, mem_addr), \
- (a))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-void simde_mm_store_ps1(simde_float32 mem_addr[4], simde__m128 a)
-{
- simde_assert_aligned(16, mem_addr);
-
-#if defined(SIMDE_X86_SSE_NATIVE)
- _mm_store_ps1(mem_addr, a);
-#else
- simde__m128_private a_ = simde__m128_to_private(a);
-
- SIMDE_VECTORIZE_ALIGNED(mem_addr : 16)
- for (size_t i = 0; i < sizeof(a_.f32) / sizeof(a_.f32[0]); i++) {
- mem_addr[i] = a_.f32[0];
- }
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_store_ps1(mem_addr, a) \
- simde_mm_store_ps1(SIMDE_CHECKED_REINTERPRET_CAST( \
- float *, simde_float32 *, mem_addr), \
- (a))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-void simde_mm_store_ss(simde_float32 *mem_addr, simde__m128 a)
-{
-#if defined(SIMDE_X86_SSE_NATIVE)
- _mm_store_ss(mem_addr, a);
-#else
- simde__m128_private a_ = simde__m128_to_private(a);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- vst1q_lane_f32(mem_addr, a_.neon_f32, 0);
-#else
- *mem_addr = a_.f32[0];
-#endif
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_store_ss(mem_addr, a) \
- simde_mm_store_ss(SIMDE_CHECKED_REINTERPRET_CAST( \
- float *, simde_float32 *, mem_addr), \
- (a))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-void simde_mm_store1_ps(simde_float32 mem_addr[4], simde__m128 a)
-{
- simde_assert_aligned(16, mem_addr);
-
-#if defined(SIMDE_X86_SSE_NATIVE)
- _mm_store1_ps(mem_addr, a);
-#else
- simde_mm_store_ps1(mem_addr, a);
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_store1_ps(mem_addr, a) \
- simde_mm_store1_ps(SIMDE_CHECKED_REINTERPRET_CAST( \
- float *, simde_float32 *, mem_addr), \
- (a))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-void simde_mm_storeh_pi(simde__m64 *mem_addr, simde__m128 a)
-{
-#if defined(SIMDE_X86_SSE_NATIVE)
- _mm_storeh_pi(HEDLEY_REINTERPRET_CAST(__m64 *, mem_addr), a);
-#else
- simde__m64_private *dest_ =
- HEDLEY_REINTERPRET_CAST(simde__m64_private *, mem_addr);
- simde__m128_private a_ = simde__m128_to_private(a);
-
- dest_->f32[0] = a_.f32[2];
- dest_->f32[1] = a_.f32[3];
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_storeh_pi(mem_addr, a) simde_mm_storeh_pi(mem_addr, (a))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-void simde_mm_storel_pi(simde__m64 *mem_addr, simde__m128 a)
-{
-#if defined(SIMDE_X86_SSE_NATIVE)
- _mm_storel_pi(HEDLEY_REINTERPRET_CAST(__m64 *, mem_addr), a);
-#else
- simde__m64_private *dest_ =
- HEDLEY_REINTERPRET_CAST(simde__m64_private *, mem_addr);
- simde__m128_private a_ = simde__m128_to_private(a);
-
- dest_->f32[0] = a_.f32[0];
- dest_->f32[1] = a_.f32[1];
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_storel_pi(mem_addr, a) simde_mm_storel_pi(mem_addr, (a))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-void simde_mm_storer_ps(simde_float32 mem_addr[4], simde__m128 a)
-{
- simde_assert_aligned(16, mem_addr);
-
-#if defined(SIMDE_X86_SSE_NATIVE)
- _mm_storer_ps(mem_addr, a);
-#else
- simde__m128_private a_ = simde__m128_to_private(a);
-
-#if defined(SIMDE_SHUFFLE_VECTOR_)
- a_.f32 = SIMDE_SHUFFLE_VECTOR_(32, 16, a_.f32, a_.f32, 3, 2, 1, 0);
- simde_mm_store_ps(mem_addr, simde__m128_from_private(a_));
-#else
- SIMDE_VECTORIZE_ALIGNED(mem_addr : 16)
- for (size_t i = 0; i < sizeof(a_.f32) / sizeof(a_.f32[0]); i++) {
- mem_addr[i] =
- a_.f32[((sizeof(a_.f32) / sizeof(a_.f32[0])) - 1) - i];
- }
-#endif
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_storer_ps(mem_addr, a) \
- simde_mm_storer_ps(SIMDE_CHECKED_REINTERPRET_CAST( \
- float *, simde_float32 *, mem_addr), \
- (a))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-void simde_mm_storeu_ps(simde_float32 mem_addr[4], simde__m128 a)
-{
-#if defined(SIMDE_X86_SSE_NATIVE)
- _mm_storeu_ps(mem_addr, a);
-#else
- simde__m128_private a_ = simde__m128_to_private(a);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- vst1q_f32(mem_addr, a_.neon_f32);
-#else
- simde_memcpy(mem_addr, &a_, sizeof(a_));
-#endif
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_storeu_ps(mem_addr, a) \
- simde_mm_storeu_ps(SIMDE_CHECKED_REINTERPRET_CAST( \
- float *, simde_float32 *, mem_addr), \
- (a))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_sub_ps(simde__m128 a, simde__m128 b)
-{
-#if defined(SIMDE_X86_SSE_NATIVE)
- return _mm_sub_ps(a, b);
-#else
- simde__m128_private r_, a_ = simde__m128_to_private(a),
- b_ = simde__m128_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_f32 = vsubq_f32(a_.neon_f32, b_.neon_f32);
-#elif defined(SIMDE_WASM_SIMD128_NATIVE)
- r_.wasm_v128 = wasm_f32x4_sub(a_.wasm_v128, b_.wasm_v128);
-#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
- r_.f32 = a_.f32 - b_.f32;
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.f32) / sizeof(r_.f32[0])); i++) {
- r_.f32[i] = a_.f32[i] - b_.f32[i];
- }
-#endif
-
- return simde__m128_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_sub_ps(a, b) simde_mm_sub_ps((a), (b))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_sub_ss(simde__m128 a, simde__m128 b)
-{
-#if defined(SIMDE_X86_SSE_NATIVE)
- return _mm_sub_ss(a, b);
-#elif defined(SIMDE_ASSUME_VECTORIZATION)
- return simde_mm_move_ss(a, simde_mm_sub_ps(a, b));
-#else
- simde__m128_private r_, a_ = simde__m128_to_private(a),
- b_ = simde__m128_to_private(b);
-
- r_.f32[0] = a_.f32[0] - b_.f32[0];
- r_.f32[1] = a_.f32[1];
- r_.f32[2] = a_.f32[2];
- r_.f32[3] = a_.f32[3];
-
- return simde__m128_from_private(r_);
-#endif
-}
-
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_sub_ss(a, b) simde_mm_sub_ss((a), (b))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-int simde_mm_ucomieq_ss(simde__m128 a, simde__m128 b)
-{
-#if defined(SIMDE_X86_SSE_NATIVE)
- return _mm_ucomieq_ss(a, b);
-#else
- simde__m128_private a_ = simde__m128_to_private(a),
- b_ = simde__m128_to_private(b);
- int r;
-
-#if defined(SIMDE_HAVE_FENV_H)
- fenv_t envp;
- int x = feholdexcept(&envp);
- r = a_.f32[0] == b_.f32[0];
- if (HEDLEY_LIKELY(x == 0))
- fesetenv(&envp);
-#else
- r = a_.f32[0] == b_.f32[0];
-#endif
-
- return r;
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_ucomieq_ss(a, b) simde_mm_ucomieq_ss((a), (b))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-int simde_mm_ucomige_ss(simde__m128 a, simde__m128 b)
-{
-#if defined(SIMDE_X86_SSE_NATIVE)
- return _mm_ucomige_ss(a, b);
-#else
- simde__m128_private a_ = simde__m128_to_private(a),
- b_ = simde__m128_to_private(b);
- int r;
-
-#if defined(SIMDE_HAVE_FENV_H)
- fenv_t envp;
- int x = feholdexcept(&envp);
- r = a_.f32[0] >= b_.f32[0];
- if (HEDLEY_LIKELY(x == 0))
- fesetenv(&envp);
-#else
- r = a_.f32[0] >= b_.f32[0];
-#endif
-
- return r;
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_ucomige_ss(a, b) simde_mm_ucomige_ss((a), (b))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-int simde_mm_ucomigt_ss(simde__m128 a, simde__m128 b)
-{
-#if defined(SIMDE_X86_SSE_NATIVE)
- return _mm_ucomigt_ss(a, b);
-#else
- simde__m128_private a_ = simde__m128_to_private(a),
- b_ = simde__m128_to_private(b);
- int r;
-
-#if defined(SIMDE_HAVE_FENV_H)
- fenv_t envp;
- int x = feholdexcept(&envp);
- r = a_.f32[0] > b_.f32[0];
- if (HEDLEY_LIKELY(x == 0))
- fesetenv(&envp);
-#else
- r = a_.f32[0] > b_.f32[0];
-#endif
-
- return r;
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_ucomigt_ss(a, b) simde_mm_ucomigt_ss((a), (b))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-int simde_mm_ucomile_ss(simde__m128 a, simde__m128 b)
-{
-#if defined(SIMDE_X86_SSE_NATIVE)
- return _mm_ucomile_ss(a, b);
-#else
- simde__m128_private a_ = simde__m128_to_private(a),
- b_ = simde__m128_to_private(b);
- int r;
-
-#if defined(SIMDE_HAVE_FENV_H)
- fenv_t envp;
- int x = feholdexcept(&envp);
- r = a_.f32[0] <= b_.f32[0];
- if (HEDLEY_LIKELY(x == 0))
- fesetenv(&envp);
-#else
- r = a_.f32[0] <= b_.f32[0];
-#endif
-
- return r;
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_ucomile_ss(a, b) simde_mm_ucomile_ss((a), (b))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-int simde_mm_ucomilt_ss(simde__m128 a, simde__m128 b)
-{
-#if defined(SIMDE_X86_SSE_NATIVE)
- return _mm_ucomilt_ss(a, b);
-#else
- simde__m128_private a_ = simde__m128_to_private(a),
- b_ = simde__m128_to_private(b);
- int r;
-
-#if defined(SIMDE_HAVE_FENV_H)
- fenv_t envp;
- int x = feholdexcept(&envp);
- r = a_.f32[0] < b_.f32[0];
- if (HEDLEY_LIKELY(x == 0))
- fesetenv(&envp);
-#else
- r = a_.f32[0] < b_.f32[0];
-#endif
-
- return r;
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_ucomilt_ss(a, b) simde_mm_ucomilt_ss((a), (b))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-int simde_mm_ucomineq_ss(simde__m128 a, simde__m128 b)
-{
-#if defined(SIMDE_X86_SSE_NATIVE)
- return _mm_ucomineq_ss(a, b);
-#else
- simde__m128_private a_ = simde__m128_to_private(a),
- b_ = simde__m128_to_private(b);
- int r;
-
-#if defined(SIMDE_HAVE_FENV_H)
- fenv_t envp;
- int x = feholdexcept(&envp);
- r = a_.f32[0] != b_.f32[0];
- if (HEDLEY_LIKELY(x == 0))
- fesetenv(&envp);
-#else
- r = a_.f32[0] != b_.f32[0];
-#endif
-
- return r;
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_ucomineq_ss(a, b) simde_mm_ucomineq_ss((a), (b))
-#endif
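The simde_mm_ucomi*_ss fallbacks above all follow one pattern: perform the scalar comparison, but wrap it in feholdexcept()/fesetenv() so the comparison does not leave floating-point exception flags set when an operand is NaN. A minimal standalone sketch of that pattern, assuming a hosted C99 environment (the helper name is made up, not taken from the header):

#include <fenv.h>
#include <math.h>
#include <stdio.h>

/* Compare two floats "quietly": hold the FP environment, compare, then
 * restore it so any flag raised by the comparison is discarded. */
static int quiet_eq(float a, float b)
{
	fenv_t env;
	int held = feholdexcept(&env); /* returns 0 on success */
	int r = (a == b);
	if (held == 0)
		fesetenv(&env);
	return r;
}

int main(void)
{
	printf("%d\n", quiet_eq(1.0f, 1.0f)); /* 1 */
	printf("%d\n", quiet_eq(NAN, 1.0f));  /* 0 */
	return 0;
}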
-
-#if defined(SIMDE_X86_SSE_NATIVE)
-#if defined(__has_builtin)
-#if __has_builtin(__builtin_ia32_undef128)
-#define SIMDE_HAVE_UNDEFINED128
-#endif
-#elif !defined(__PGI) && !defined(SIMDE_BUG_GCC_REV_208793) && \
- !defined(_MSC_VER)
-#define SIMDE_HAVE_UNDEFINED128
-#endif
-#endif
-
-#if defined(SIMDE_DIAGNOSTIC_DISABLE_UNINITIALIZED_)
-HEDLEY_DIAGNOSTIC_PUSH
-SIMDE_DIAGNOSTIC_DISABLE_UNINITIALIZED_
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_unpackhi_ps(simde__m128 a, simde__m128 b)
-{
-#if defined(SIMDE_X86_SSE_NATIVE)
- return _mm_unpackhi_ps(a, b);
-#else
- simde__m128_private r_, a_ = simde__m128_to_private(a),
- b_ = simde__m128_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- float32x2_t a1 = vget_high_f32(a_.neon_f32);
- float32x2_t b1 = vget_high_f32(b_.neon_f32);
- float32x2x2_t result = vzip_f32(a1, b1);
- r_.neon_f32 = vcombine_f32(result.val[0], result.val[1]);
-#elif defined(SIMDE_SHUFFLE_VECTOR_)
- r_.f32 = SIMDE_SHUFFLE_VECTOR_(32, 16, a_.f32, b_.f32, 2, 6, 3, 7);
-#else
- r_.f32[0] = a_.f32[2];
- r_.f32[1] = b_.f32[2];
- r_.f32[2] = a_.f32[3];
- r_.f32[3] = b_.f32[3];
-#endif
-
- return simde__m128_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_unpackhi_ps(a, b) simde_mm_unpackhi_ps((a), (b))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_unpacklo_ps(simde__m128 a, simde__m128 b)
-{
-#if defined(SIMDE_X86_SSE_NATIVE)
- return _mm_unpacklo_ps(a, b);
-#else
- simde__m128_private r_, a_ = simde__m128_to_private(a),
- b_ = simde__m128_to_private(b);
-
-#if defined(SIMDE_SHUFFLE_VECTOR_)
- r_.f32 = SIMDE_SHUFFLE_VECTOR_(32, 16, a_.f32, b_.f32, 0, 4, 1, 5);
-#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- float32x2_t a1 = vget_low_f32(a_.neon_f32);
- float32x2_t b1 = vget_low_f32(b_.neon_f32);
- float32x2x2_t result = vzip_f32(a1, b1);
- r_.neon_f32 = vcombine_f32(result.val[0], result.val[1]);
-#else
- r_.f32[0] = a_.f32[0];
- r_.f32[1] = b_.f32[0];
- r_.f32[2] = a_.f32[1];
- r_.f32[3] = b_.f32[1];
-#endif
-
- return simde__m128_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_unpacklo_ps(a, b) simde_mm_unpacklo_ps((a), (b))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_xor_ps(simde__m128 a, simde__m128 b)
-{
-#if defined(SIMDE_X86_SSE_NATIVE)
- return _mm_xor_ps(a, b);
-#else
- simde__m128_private r_, a_ = simde__m128_to_private(a),
- b_ = simde__m128_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i32 = veorq_s32(a_.neon_i32, b_.neon_i32);
-#elif defined(SIMDE_POWER_ALTIVEC_P5_NATIVE)
- r_.altivec_i32 = vec_xor(a_.altivec_i32, b_.altivec_i32);
-#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
- r_.i32f = a_.i32f ^ b_.i32f;
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.u32) / sizeof(r_.u32[0])); i++) {
- r_.u32[i] = a_.u32[i] ^ b_.u32[i];
- }
-#endif
-
- return simde__m128_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_xor_ps(a, b) simde_mm_xor_ps((a), (b))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-void simde_mm_stream_pi(simde__m64 *mem_addr, simde__m64 a)
-{
-#if defined(SIMDE_X86_SSE_NATIVE) && defined(SIMDE_X86_MMX_NATIVE)
- _mm_stream_pi(HEDLEY_REINTERPRET_CAST(__m64 *, mem_addr), a);
-#else
- simde__m64_private *dest = HEDLEY_REINTERPRET_CAST(simde__m64_private *,
- mem_addr),
- a_ = simde__m64_to_private(a);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- dest->i64[0] = vget_lane_s64(a_.neon_i64, 0);
-#else
- dest->i64[0] = a_.i64[0];
-#endif
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_stream_pi(mem_addr, a) simde_mm_stream_pi(mem_addr, (a))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-void simde_mm_stream_ps(simde_float32 mem_addr[4], simde__m128 a)
-{
- simde_assert_aligned(16, mem_addr);
-
-#if defined(SIMDE_X86_SSE_NATIVE)
- _mm_stream_ps(mem_addr, a);
-#else
- simde__m128_private a_ = simde__m128_to_private(a);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- vst1q_f32(SIMDE_ASSUME_ALIGNED(16, mem_addr), a_.neon_f32);
-#else
- simde_memcpy(SIMDE_ASSUME_ALIGNED(16, mem_addr), &a_, sizeof(a_));
-#endif
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_stream_ps(mem_addr, a) \
- simde_mm_stream_ps(SIMDE_CHECKED_REINTERPRET_CAST( \
- float *, simde_float32 *, mem_addr), \
- (a))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-uint32_t simde_mm_getcsr(void)
-{
-#if defined(SIMDE_X86_SSE_NATIVE)
- return _mm_getcsr();
-#else
- uint32_t r = 0;
-
-#if defined(SIMDE_HAVE_FENV_H)
- int rounding_mode = fegetround();
-
- switch (rounding_mode) {
-#if defined(FE_TONEAREST)
- case FE_TONEAREST:
- break;
-#endif
-#if defined(FE_UPWARD)
- case FE_UPWARD:
- r |= 2 << 13;
- break;
-#endif
-#if defined(FE_DOWNWARD)
- case FE_DOWNWARD:
- r |= 1 << 13;
- break;
-#endif
-#if defined(FE_TOWARDZERO)
- case FE_TOWARDZERO:
-		r |= 3 << 13;
- break;
-#endif
- }
-#else
- HEDLEY_UNREACHABLE();
-#endif
-
- return r;
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_getcsr() simde_mm_getcsr()
-#endif
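simde_mm_getcsr() above and simde_mm_setcsr() below only model the MXCSR rounding-control field, which occupies bits 13-14 (0 = nearest, 1 = down, 2 = up, 3 = toward zero). A small standalone sketch of that mapping via <fenv.h>, assuming all FE_* macros are available (the helper name is illustrative):

#include <fenv.h>
#include <stdint.h>
#include <stdio.h>

/* Map the current fenv rounding mode to MXCSR rounding-control bits. */
static uint32_t rounding_bits_from_fenv(void)
{
	switch (fegetround()) {
	case FE_TONEAREST:
		return UINT32_C(0) << 13;
	case FE_DOWNWARD:
		return UINT32_C(1) << 13;
	case FE_UPWARD:
		return UINT32_C(2) << 13;
	case FE_TOWARDZERO:
		return UINT32_C(3) << 13;
	default:
		return 0;
	}
}

int main(void)
{
	fesetround(FE_TOWARDZERO);
	printf("0x%04x\n", (unsigned)rounding_bits_from_fenv()); /* 0x6000 */
	return 0;
}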
-
-SIMDE_FUNCTION_ATTRIBUTES
-void simde_mm_setcsr(uint32_t a)
-{
-#if defined(SIMDE_X86_SSE_NATIVE)
- _mm_setcsr(a);
-#else
- switch ((a >> 13) & 3) {
-#if defined(FE_TONEAREST)
- case 0:
- fesetround(FE_TONEAREST);
-#endif
-#if defined(FE_DOWNWARD)
- break;
- case 1:
- fesetround(FE_DOWNWARD);
-#endif
-#if defined(FE_UPWARD)
- break;
- case 2:
- fesetround(FE_UPWARD);
-#endif
-#if defined(FE_TOWARDZERO)
- break;
- case 3:
- fesetround(FE_TOWARDZERO);
- break;
-#endif
- }
-#endif
-}
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _mm_setcsr(a) simde_mm_setcsr(a)
-#endif
-
-#define SIMDE_MM_TRANSPOSE4_PS(row0, row1, row2, row3) \
- do { \
- simde__m128 tmp3, tmp2, tmp1, tmp0; \
- tmp0 = simde_mm_unpacklo_ps((row0), (row1)); \
- tmp2 = simde_mm_unpacklo_ps((row2), (row3)); \
- tmp1 = simde_mm_unpackhi_ps((row0), (row1)); \
- tmp3 = simde_mm_unpackhi_ps((row2), (row3)); \
- row0 = simde_mm_movelh_ps(tmp0, tmp2); \
- row1 = simde_mm_movehl_ps(tmp2, tmp0); \
- row2 = simde_mm_movelh_ps(tmp1, tmp3); \
- row3 = simde_mm_movehl_ps(tmp3, tmp1); \
- } while (0)
-
-#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
-#define _MM_TRANSPOSE4_PS(row0, row1, row2, row3) \
- SIMDE_MM_TRANSPOSE4_PS(row0, row1, row2, row3)
-#endif
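SIMDE_MM_TRANSPOSE4_PS builds a 4x4 transpose from the unpack and move helpers: the unpacklo/unpackhi calls interleave pairs of rows, and movelh/movehl reassemble the halves into columns. A usage sketch, assuming a hypothetical test program that includes this header as "sse.h"; the element values only show where each lane ends up:

#include <stdio.h>
#include "sse.h"

int main(void)
{
	/* simde_mm_set_ps takes arguments high lane first, so row0 = {1,2,3,4}. */
	simde__m128 row0 = simde_mm_set_ps(4.0f, 3.0f, 2.0f, 1.0f);
	simde__m128 row1 = simde_mm_set_ps(8.0f, 7.0f, 6.0f, 5.0f);
	simde__m128 row2 = simde_mm_set_ps(12.0f, 11.0f, 10.0f, 9.0f);
	simde__m128 row3 = simde_mm_set_ps(16.0f, 15.0f, 14.0f, 13.0f);
	simde_float32 out[4];

	SIMDE_MM_TRANSPOSE4_PS(row0, row1, row2, row3);

	simde_mm_storeu_ps(out, row0); /* first column: 1 5 9 13 */
	printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]);
	return 0;
}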
-
-#if defined(_MM_EXCEPT_INVALID)
-#define SIMDE_MM_EXCEPT_INVALID _MM_EXCEPT_INVALID
-#else
-#define SIMDE_MM_EXCEPT_INVALID (0x0001)
-#endif
-#if defined(_MM_EXCEPT_DENORM)
-#define SIMDE_MM_EXCEPT_DENORM _MM_EXCEPT_DENORM
-#else
-#define SIMDE_MM_EXCEPT_DENORM (0x0002)
-#endif
-#if defined(_MM_EXCEPT_DIV_ZERO)
-#define SIMDE_MM_EXCEPT_DIV_ZERO _MM_EXCEPT_DIV_ZERO
-#else
-#define SIMDE_MM_EXCEPT_DIV_ZERO (0x0004)
-#endif
-#if defined(_MM_EXCEPT_OVERFLOW)
-#define SIMDE_MM_EXCEPT_OVERFLOW _MM_EXCEPT_OVERFLOW
-#else
-#define SIMDE_MM_EXCEPT_OVERFLOW (0x0008)
-#endif
-#if defined(_MM_EXCEPT_UNDERFLOW)
-#define SIMDE_MM_EXCEPT_UNDERFLOW _MM_EXCEPT_UNDERFLOW
-#else
-#define SIMDE_MM_EXCEPT_UNDERFLOW (0x0010)
-#endif
-#if defined(_MM_EXCEPT_INEXACT)
-#define SIMDE_MM_EXCEPT_INEXACT _MM_EXCEPT_INEXACT
-#else
-#define SIMDE_MM_EXCEPT_INEXACT (0x0020)
-#endif
-#if defined(_MM_EXCEPT_MASK)
-#define SIMDE_MM_EXCEPT_MASK _MM_EXCEPT_MASK
-#else
-#define SIMDE_MM_EXCEPT_MASK \
- (SIMDE_MM_EXCEPT_INVALID | SIMDE_MM_EXCEPT_DENORM | \
- SIMDE_MM_EXCEPT_DIV_ZERO | SIMDE_MM_EXCEPT_OVERFLOW | \
- SIMDE_MM_EXCEPT_UNDERFLOW | SIMDE_MM_EXCEPT_INEXACT)
-#endif
-
-#if defined(_MM_MASK_INVALID)
-#define SIMDE_MM_MASK_INVALID _MM_MASK_INVALID
-#else
-#define SIMDE_MM_MASK_INVALID (0x0080)
-#endif
-#if defined(_MM_MASK_DENORM)
-#define SIMDE_MM_MASK_DENORM _MM_MASK_DENORM
-#else
-#define SIMDE_MM_MASK_DENORM (0x0100)
-#endif
-#if defined(_MM_MASK_DIV_ZERO)
-#define SIMDE_MM_MASK_DIV_ZERO _MM_MASK_DIV_ZERO
-#else
-#define SIMDE_MM_MASK_DIV_ZERO (0x0200)
-#endif
-#if defined(_MM_MASK_OVERFLOW)
-#define SIMDE_MM_MASK_OVERFLOW _MM_MASK_OVERFLOW
-#else
-#define SIMDE_MM_MASK_OVERFLOW (0x0400)
-#endif
-#if defined(_MM_MASK_UNDERFLOW)
-#define SIMDE_MM_MASK_UNDERFLOW _MM_MASK_UNDERFLOW
-#else
-#define SIMDE_MM_MASK_UNDERFLOW (0x0800)
-#endif
-#if defined(_MM_MASK_INEXACT)
-#define SIMDE_MM_MASK_INEXACT _MM_MASK_INEXACT
-#else
-#define SIMDE_MM_MASK_INEXACT (0x1000)
-#endif
-#if defined(_MM_MASK_MASK)
-#define SIMDE_MM_MASK_MASK _MM_MASK_MASK
-#else
-#define SIMDE_MM_MASK_MASK \
- (SIMDE_MM_MASK_INVALID | SIMDE_MM_MASK_DENORM | \
- SIMDE_MM_MASK_DIV_ZERO | SIMDE_MM_MASK_OVERFLOW | \
- SIMDE_MM_MASK_UNDERFLOW | SIMDE_MM_MASK_INEXACT)
-#endif
-
-#if defined(_MM_FLUSH_ZERO_MASK)
-#define SIMDE_MM_FLUSH_ZERO_MASK _MM_FLUSH_ZERO_MASK
-#else
-#define SIMDE_MM_FLUSH_ZERO_MASK (0x8000)
-#endif
-#if defined(_MM_FLUSH_ZERO_ON)
-#define SIMDE_MM_FLUSH_ZERO_ON _MM_FLUSH_ZERO_ON
-#else
-#define SIMDE_MM_FLUSH_ZERO_ON (0x8000)
-#endif
-#if defined(_MM_FLUSH_ZERO_OFF)
-#define SIMDE_MM_FLUSH_ZERO_OFF _MM_FLUSH_ZERO_OFF
-#else
-#define SIMDE_MM_FLUSH_ZERO_OFF (0x0000)
-#endif
-
-SIMDE_END_DECLS_
-
-HEDLEY_DIAGNOSTIC_POP
-
-#endif /* !defined(SIMDE_X86_SSE_H) */
obs-studio-26.1.0.tar.xz/libobs/util/simde/sse2.h
Deleted
-/* SPDX-License-Identifier: MIT
- *
- * Permission is hereby granted, free of charge, to any person
- * obtaining a copy of this software and associated documentation
- * files (the "Software"), to deal in the Software without
- * restriction, including without limitation the rights to use, copy,
- * modify, merge, publish, distribute, sublicense, and/or sell copies
- * of the Software, and to permit persons to whom the Software is
- * furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice shall be
- * included in all copies or substantial portions of the Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
- * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
- * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
- * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
- * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
- * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
- * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
- * SOFTWARE.
- *
- * Copyright:
- * 2017-2020 Evan Nemerson <evan@nemerson.com>
- * 2015-2017 John W. Ratcliff <jratcliffscarab@gmail.com>
- * 2015 Brandon Rowlett <browlett@nvidia.com>
- * 2015 Ken Fast <kfast@gdeb.com>
- * 2017 Hasindu Gamaarachchi <hasindu@unsw.edu.au>
- * 2018 Jeff Daily <jeff.daily@amd.com>
- */
-
-#if !defined(SIMDE_X86_SSE2_H)
-#define SIMDE_X86_SSE2_H
-
-#include "sse.h"
-
-#if !defined(SIMDE_X86_SSE2_NATIVE) && defined(SIMDE_ENABLE_NATIVE_ALIASES)
-#define SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES
-#endif
-
-HEDLEY_DIAGNOSTIC_PUSH
-SIMDE_DISABLE_UNWANTED_DIAGNOSTICS
-SIMDE_BEGIN_DECLS_
-
-typedef union {
-#if defined(SIMDE_VECTOR_SUBSCRIPT)
- SIMDE_ALIGN(16) int8_t i8 SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
- SIMDE_ALIGN(16) int16_t i16 SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
- SIMDE_ALIGN(16) int32_t i32 SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
- SIMDE_ALIGN(16) int64_t i64 SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
- SIMDE_ALIGN(16) uint8_t u8 SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
- SIMDE_ALIGN(16) uint16_t u16 SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
- SIMDE_ALIGN(16) uint32_t u32 SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
- SIMDE_ALIGN(16) uint64_t u64 SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
-#if defined(SIMDE_HAVE_INT128_)
- SIMDE_ALIGN(16) simde_int128 i128 SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
- SIMDE_ALIGN(16) simde_uint128 u128 SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
-#endif
- SIMDE_ALIGN(16) simde_float32 f32 SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
- SIMDE_ALIGN(16) simde_float64 f64 SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
-
- SIMDE_ALIGN(16) int_fast32_t i32f SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
- SIMDE_ALIGN(16) uint_fast32_t u32f SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
-#else
- SIMDE_ALIGN(16) int8_t i8[16];
- SIMDE_ALIGN(16) int16_t i16[8];
- SIMDE_ALIGN(16) int32_t i32[4];
- SIMDE_ALIGN(16) int64_t i64[2];
- SIMDE_ALIGN(16) uint8_t u8[16];
- SIMDE_ALIGN(16) uint16_t u16[8];
- SIMDE_ALIGN(16) uint32_t u32[4];
- SIMDE_ALIGN(16) uint64_t u64[2];
-#if defined(SIMDE_HAVE_INT128_)
- SIMDE_ALIGN(16) simde_int128 i128[1];
- SIMDE_ALIGN(16) simde_uint128 u128[1];
-#endif
- SIMDE_ALIGN(16) simde_float32 f32[4];
- SIMDE_ALIGN(16) simde_float64 f64[2];
-
- SIMDE_ALIGN(16) int_fast32_t i32f[16 / sizeof(int_fast32_t)];
- SIMDE_ALIGN(16) uint_fast32_t u32f[16 / sizeof(uint_fast32_t)];
-#endif
-
- SIMDE_ALIGN(16) simde__m64_private m64_private[2];
- SIMDE_ALIGN(16) simde__m64 m64[2];
-
-#if defined(SIMDE_X86_SSE2_NATIVE)
- SIMDE_ALIGN(16) __m128i n;
-#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- SIMDE_ALIGN(16) int8x16_t neon_i8;
- SIMDE_ALIGN(16) int16x8_t neon_i16;
- SIMDE_ALIGN(16) int32x4_t neon_i32;
- SIMDE_ALIGN(16) int64x2_t neon_i64;
- SIMDE_ALIGN(16) uint8x16_t neon_u8;
- SIMDE_ALIGN(16) uint16x8_t neon_u16;
- SIMDE_ALIGN(16) uint32x4_t neon_u32;
- SIMDE_ALIGN(16) uint64x2_t neon_u64;
- SIMDE_ALIGN(16) float32x4_t neon_f32;
-#if defined(SIMDE_ARCH_AARCH64)
- SIMDE_ALIGN(16) float64x2_t neon_f64;
-#endif
-#elif defined(SIMDE_WASM_SIMD128_NATIVE)
- SIMDE_ALIGN(16) v128_t wasm_v128;
-#elif defined(SIMDE_POWER_ALTIVEC_P5_NATIVE)
- SIMDE_ALIGN(16) SIMDE_POWER_ALTIVEC_VECTOR(signed char) altivec_i8;
- SIMDE_ALIGN(16) SIMDE_POWER_ALTIVEC_VECTOR(signed short) altivec_i16;
- SIMDE_ALIGN(16) SIMDE_POWER_ALTIVEC_VECTOR(signed int) altivec_i32;
-#if defined(__UINT_FAST32_TYPE__)
- SIMDE_ALIGN(16)
- SIMDE_POWER_ALTIVEC_VECTOR(__INT_FAST32_TYPE__) altivec_i32f;
-#else
- SIMDE_ALIGN(16) SIMDE_POWER_ALTIVEC_VECTOR(signed int) altivec_i32f;
-#endif
- SIMDE_ALIGN(16)
- SIMDE_POWER_ALTIVEC_VECTOR(signed long long) altivec_i64;
- SIMDE_ALIGN(16) SIMDE_POWER_ALTIVEC_VECTOR(unsigned char) altivec_u8;
- SIMDE_ALIGN(16) SIMDE_POWER_ALTIVEC_VECTOR(unsigned short) altivec_u16;
- SIMDE_ALIGN(16) SIMDE_POWER_ALTIVEC_VECTOR(unsigned int) altivec_u32;
-#if defined(__UINT_FAST32_TYPE__)
- SIMDE_ALIGN(16) vector __UINT_FAST32_TYPE__ altivec_u32f;
-#else
- SIMDE_ALIGN(16) SIMDE_POWER_ALTIVEC_VECTOR(unsigned int) altivec_u32f;
-#endif
- SIMDE_ALIGN(16)
- SIMDE_POWER_ALTIVEC_VECTOR(unsigned long long) altivec_u64;
- SIMDE_ALIGN(16) SIMDE_POWER_ALTIVEC_VECTOR(float) altivec_f32;
-#if defined(SIMDE_POWER_ALTIVEC_P7_NATIVE)
- SIMDE_ALIGN(16) SIMDE_POWER_ALTIVEC_VECTOR(double) altivec_f64;
-#endif
-#endif
-} simde__m128i_private;
-
-typedef union {
-#if defined(SIMDE_VECTOR_SUBSCRIPT)
- SIMDE_ALIGN(16) int8_t i8 SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
- SIMDE_ALIGN(16) int16_t i16 SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
- SIMDE_ALIGN(16) int32_t i32 SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
- SIMDE_ALIGN(16) int64_t i64 SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
- SIMDE_ALIGN(16) uint8_t u8 SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
- SIMDE_ALIGN(16) uint16_t u16 SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
- SIMDE_ALIGN(16) uint32_t u32 SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
- SIMDE_ALIGN(16) uint64_t u64 SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
- SIMDE_ALIGN(16) simde_float32 f32 SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
- SIMDE_ALIGN(16) simde_float64 f64 SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
- SIMDE_ALIGN(16) int_fast32_t i32f SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
- SIMDE_ALIGN(16) uint_fast32_t u32f SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
-#else
- SIMDE_ALIGN(16) int8_t i8[16];
- SIMDE_ALIGN(16) int16_t i16[8];
- SIMDE_ALIGN(16) int32_t i32[4];
- SIMDE_ALIGN(16) int64_t i64[2];
- SIMDE_ALIGN(16) uint8_t u8[16];
- SIMDE_ALIGN(16) uint16_t u16[8];
- SIMDE_ALIGN(16) uint32_t u32[4];
- SIMDE_ALIGN(16) uint64_t u64[2];
- SIMDE_ALIGN(16) simde_float32 f32[4];
- SIMDE_ALIGN(16) simde_float64 f64[2];
- SIMDE_ALIGN(16) int_fast32_t i32f[16 / sizeof(int_fast32_t)];
- SIMDE_ALIGN(16) uint_fast32_t u32f[16 / sizeof(uint_fast32_t)];
-#endif
-
- SIMDE_ALIGN(16) simde__m64_private m64_private[2];
- SIMDE_ALIGN(16) simde__m64 m64[2];
-
-#if defined(SIMDE_X86_SSE2_NATIVE)
- SIMDE_ALIGN(16) __m128d n;
-#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- SIMDE_ALIGN(16) int8x16_t neon_i8;
- SIMDE_ALIGN(16) int16x8_t neon_i16;
- SIMDE_ALIGN(16) int32x4_t neon_i32;
- SIMDE_ALIGN(16) int64x2_t neon_i64;
- SIMDE_ALIGN(16) uint8x16_t neon_u8;
- SIMDE_ALIGN(16) uint16x8_t neon_u16;
- SIMDE_ALIGN(16) uint32x4_t neon_u32;
- SIMDE_ALIGN(16) uint64x2_t neon_u64;
- SIMDE_ALIGN(16) float32x4_t neon_f32;
-#if defined(SIMDE_ARCH_AARCH64)
- SIMDE_ALIGN(16) float64x2_t neon_f64;
-#endif
-#elif defined(SIMDE_WASM_SIMD128_NATIVE)
- SIMDE_ALIGN(16) v128_t wasm_v128;
-#elif defined(SIMDE_POWER_ALTIVEC_P5_NATIVE)
- SIMDE_ALIGN(16) SIMDE_POWER_ALTIVEC_VECTOR(signed char) altivec_i8;
- SIMDE_ALIGN(16) SIMDE_POWER_ALTIVEC_VECTOR(signed short) altivec_i16;
- SIMDE_ALIGN(16) SIMDE_POWER_ALTIVEC_VECTOR(signed int) altivec_i32;
-#if defined(__INT_FAST32_TYPE__)
- SIMDE_ALIGN(16)
- SIMDE_POWER_ALTIVEC_VECTOR(__INT_FAST32_TYPE__) altivec_i32f;
-#else
- SIMDE_ALIGN(16) SIMDE_POWER_ALTIVEC_VECTOR(signed int) altivec_i32f;
-#endif
- SIMDE_ALIGN(16)
- SIMDE_POWER_ALTIVEC_VECTOR(signed long long) altivec_i64;
- SIMDE_ALIGN(16) SIMDE_POWER_ALTIVEC_VECTOR(unsigned char) altivec_u8;
- SIMDE_ALIGN(16) SIMDE_POWER_ALTIVEC_VECTOR(unsigned short) altivec_u16;
- SIMDE_ALIGN(16) SIMDE_POWER_ALTIVEC_VECTOR(unsigned int) altivec_u32;
-#if defined(__UINT_FAST32_TYPE__)
- SIMDE_ALIGN(16) vector __UINT_FAST32_TYPE__ altivec_u32f;
-#else
- SIMDE_ALIGN(16) SIMDE_POWER_ALTIVEC_VECTOR(unsigned int) altivec_u32f;
-#endif
- SIMDE_ALIGN(16)
- SIMDE_POWER_ALTIVEC_VECTOR(unsigned long long) altivec_u64;
- SIMDE_ALIGN(16) SIMDE_POWER_ALTIVEC_VECTOR(float) altivec_f32;
-#if defined(SIMDE_POWER_ALTIVEC_P7_NATIVE)
- SIMDE_ALIGN(16) SIMDE_POWER_ALTIVEC_VECTOR(double) altivec_f64;
-#endif
-#endif
-} simde__m128d_private;
-
-#if defined(SIMDE_X86_SSE2_NATIVE)
-typedef __m128i simde__m128i;
-typedef __m128d simde__m128d;
-#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
-typedef int64x2_t simde__m128i;
-#if defined(SIMDE_ARCH_AARCH64)
-typedef float64x2_t simde__m128d;
-#elif defined(SIMDE_VECTOR_SUBSCRIPT)
-typedef simde_float64 simde__m128d SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
-#else
-typedef simde__m128d_private simde__m128d;
-#endif
-#elif defined(SIMDE_WASM_SIMD128_NATIVE)
-typedef v128_t simde__m128i;
-typedef v128_t simde__m128d;
-#elif defined(SIMDE_POWER_ALTIVEC_P5_NATIVE)
-typedef SIMDE_POWER_ALTIVEC_VECTOR(float) simde__m128i;
-typedef SIMDE_POWER_ALTIVEC_VECTOR(double) simde__m128d;
-#elif defined(SIMDE_VECTOR_SUBSCRIPT)
-typedef int_fast32_t simde__m128i SIMDE_ALIGN(16)
- SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
-typedef simde_float64 simde__m128d SIMDE_ALIGN(16)
- SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
-#else
-typedef simde__m128i_private simde__m128i;
-typedef simde__m128d_private simde__m128d;
-#endif
-
-#if !defined(SIMDE_X86_SSE2_NATIVE) && defined(SIMDE_ENABLE_NATIVE_ALIASES)
-#define SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES
-typedef simde__m128i __m128i;
-typedef simde__m128d __m128d;
-#endif
-
-HEDLEY_STATIC_ASSERT(16 == sizeof(simde__m128i), "simde__m128i size incorrect");
-HEDLEY_STATIC_ASSERT(16 == sizeof(simde__m128i_private),
- "simde__m128i_private size incorrect");
-HEDLEY_STATIC_ASSERT(16 == sizeof(simde__m128d), "simde__m128d size incorrect");
-HEDLEY_STATIC_ASSERT(16 == sizeof(simde__m128d_private),
- "simde__m128d_private size incorrect");
-#if defined(SIMDE_CHECK_ALIGNMENT) && defined(SIMDE_ALIGN_OF)
-HEDLEY_STATIC_ASSERT(SIMDE_ALIGN_OF(simde__m128i) == 16,
- "simde__m128i is not 16-byte aligned");
-HEDLEY_STATIC_ASSERT(SIMDE_ALIGN_OF(simde__m128i_private) == 16,
- "simde__m128i_private is not 16-byte aligned");
-HEDLEY_STATIC_ASSERT(SIMDE_ALIGN_OF(simde__m128d) == 16,
- "simde__m128d is not 16-byte aligned");
-HEDLEY_STATIC_ASSERT(SIMDE_ALIGN_OF(simde__m128d_private) == 16,
- "simde__m128d_private is not 16-byte aligned");
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde__m128i_from_private(simde__m128i_private v)
-{
- simde__m128i r;
- simde_memcpy(&r, &v, sizeof(r));
- return r;
-}
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i_private simde__m128i_to_private(simde__m128i v)
-{
- simde__m128i_private r;
- simde_memcpy(&r, &v, sizeof(r));
- return r;
-}
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128d simde__m128d_from_private(simde__m128d_private v)
-{
- simde__m128d r;
- simde_memcpy(&r, &v, sizeof(r));
- return r;
-}
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128d_private simde__m128d_to_private(simde__m128d v)
-{
- simde__m128d_private r;
- simde_memcpy(&r, &v, sizeof(r));
- return r;
-}
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
-SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128i, int8x16_t, neon, i8)
-SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128i, int16x8_t, neon, i16)
-SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128i, int32x4_t, neon, i32)
-SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128i, int64x2_t, neon, i64)
-SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128i, uint8x16_t, neon, u8)
-SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128i, uint16x8_t, neon, u16)
-SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128i, uint32x4_t, neon, u32)
-SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128i, uint64x2_t, neon, u64)
-SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128i, float32x4_t, neon, f32)
-#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
-SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128i, float64x2_t, neon, f64)
-#endif
-#endif /* defined(SIMDE_ARM_NEON_A32V7_NATIVE) */
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
-SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128d, int8x16_t, neon, i8)
-SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128d, int16x8_t, neon, i16)
-SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128d, int32x4_t, neon, i32)
-SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128d, int64x2_t, neon, i64)
-SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128d, uint8x16_t, neon, u8)
-SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128d, uint16x8_t, neon, u16)
-SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128d, uint32x4_t, neon, u32)
-SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128d, uint64x2_t, neon, u64)
-SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128d, float32x4_t, neon, f32)
-#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
-SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128d, float64x2_t, neon, f64)
-#endif
-#endif /* defined(SIMDE_ARM_NEON_A32V7_NATIVE) */
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_add_epi8(simde__m128i a, simde__m128i b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_add_epi8(a, b);
-#else
- simde__m128i_private r_, a_ = simde__m128i_to_private(a),
- b_ = simde__m128i_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i8 = vaddq_s8(a_.neon_i8, b_.neon_i8);
-#elif defined(SIMDE_POWER_ALTIVEC_P5_NATIVE)
- r_.altivec_i8 = vec_add(a_.altivec_i8, b_.altivec_i8);
-#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
- r_.i8 = a_.i8 + b_.i8;
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i8) / sizeof(r_.i8[0])); i++) {
- r_.i8[i] = a_.i8[i] + b_.i8[i];
- }
-#endif
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_add_epi8(a, b) simde_mm_add_epi8(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_add_epi16(simde__m128i a, simde__m128i b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_add_epi16(a, b);
-#else
- simde__m128i_private r_, a_ = simde__m128i_to_private(a),
- b_ = simde__m128i_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i16 = vaddq_s16(a_.neon_i16, b_.neon_i16);
-#elif defined(SIMDE_POWER_ALTIVEC_P5_NATIVE)
- r_.altivec_i16 = vec_add(a_.altivec_i16, b_.altivec_i16);
-#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
- r_.i16 = a_.i16 + b_.i16;
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i16) / sizeof(r_.i16[0])); i++) {
- r_.i16[i] = a_.i16[i] + b_.i16[i];
- }
-#endif
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_add_epi16(a, b) simde_mm_add_epi16(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_add_epi32(simde__m128i a, simde__m128i b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_add_epi32(a, b);
-#else
- simde__m128i_private r_, a_ = simde__m128i_to_private(a),
- b_ = simde__m128i_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i32 = vaddq_s32(a_.neon_i32, b_.neon_i32);
-#elif defined(SIMDE_POWER_ALTIVEC_P5_NATIVE)
- r_.altivec_i32 = vec_add(a_.altivec_i32, b_.altivec_i32);
-#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
- r_.i32 = a_.i32 + b_.i32;
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i32) / sizeof(r_.i32[0])); i++) {
- r_.i32[i] = a_.i32[i] + b_.i32[i];
- }
-#endif
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_add_epi32(a, b) simde_mm_add_epi32(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_add_epi64(simde__m128i a, simde__m128i b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_add_epi64(a, b);
-#else
- simde__m128i_private r_, a_ = simde__m128i_to_private(a),
- b_ = simde__m128i_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i64 = vaddq_s64(a_.neon_i64, b_.neon_i64);
-#elif defined(SIMDE_POWER_ALTIVEC_P5_NATIVE)
- r_.altivec_i64 = vec_add(a_.altivec_i64, b_.altivec_i64);
-#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
- r_.i64 = a_.i64 + b_.i64;
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i64) / sizeof(r_.i64[0])); i++) {
- r_.i64[i] = a_.i64[i] + b_.i64[i];
- }
-#endif
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_add_epi64(a, b) simde_mm_add_epi64(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128d simde_mm_add_pd(simde__m128d a, simde__m128d b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_add_pd(a, b);
-#else
- simde__m128d_private r_, a_ = simde__m128d_to_private(a),
- b_ = simde__m128d_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
- r_.neon_f64 = vaddq_f64(a_.neon_f64, b_.neon_f64);
-#elif defined(SIMDE_WASM_SIMD128_NATIVE)
- r_.wasm_v128 = wasm_f64x2_add(a_.wasm_v128, b_.wasm_v128);
-#elif defined(SIMDE_POWER_ALTIVEC_P5_NATIVE)
- r_.altivec_f64 = vec_add(a_.altivec_f64, b_.altivec_f64);
-#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
- r_.f64 = a_.f64 + b_.f64;
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.f64) / sizeof(r_.f64[0])); i++) {
- r_.f64[i] = a_.f64[i] + b_.f64[i];
- }
-#endif
-
- return simde__m128d_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_add_pd(a, b) simde_mm_add_pd(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128d simde_mm_move_sd(simde__m128d a, simde__m128d b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_move_sd(a, b);
-#else
- simde__m128d_private r_, a_ = simde__m128d_to_private(a),
- b_ = simde__m128d_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
- r_.neon_f64 =
- vsetq_lane_f64(vgetq_lane_f64(b_.neon_f64, 0), a_.neon_f64, 0);
-#elif defined(SIMDE_POWER_ALTIVEC_P5_NATIVE)
- SIMDE_POWER_ALTIVEC_VECTOR(unsigned char)
- m = {16, 17, 18, 19, 20, 21, 22, 23, 8, 9, 10, 11, 12, 13, 14, 15};
- r_.altivec_f64 = vec_perm(a_.altivec_f64, b_.altivec_f64, m);
-#elif defined(SIMDE_SHUFFLE_VECTOR_)
- r_.f64 = SIMDE_SHUFFLE_VECTOR_(64, 16, a_.f64, b_.f64, 2, 1);
-#else
- r_.f64[0] = b_.f64[0];
- r_.f64[1] = a_.f64[1];
-#endif
-
- return simde__m128d_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_move_sd(a, b) simde_mm_move_sd(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128d simde_mm_add_sd(simde__m128d a, simde__m128d b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_add_sd(a, b);
-#else
- simde__m128d_private r_, a_ = simde__m128d_to_private(a),
- b_ = simde__m128d_to_private(b);
-
-#if defined(SIMDE_ASSUME_VECTORIZATION)
- return simde_mm_move_sd(a, simde_mm_add_pd(a, b));
-#else
- r_.f64[0] = a_.f64[0] + b_.f64[0];
- r_.f64[1] = a_.f64[1];
-#endif
-
- return simde__m128d_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_add_sd(a, b) simde_mm_add_sd(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_add_si64(simde__m64 a, simde__m64 b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE) && defined(SIMDE_X86_MMX_NATIVE)
- return _mm_add_si64(a, b);
-#else
- simde__m64_private r_, a_ = simde__m64_to_private(a),
- b_ = simde__m64_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i64 = vadd_s64(a_.neon_i64, b_.neon_i64);
-#else
- r_.i64[0] = a_.i64[0] + b_.i64[0];
-#endif
-
- return simde__m64_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_add_si64(a, b) simde_mm_add_si64(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_adds_epi8(simde__m128i a, simde__m128i b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_adds_epi8(a, b);
-#else
- simde__m128i_private r_, a_ = simde__m128i_to_private(a),
- b_ = simde__m128i_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i8 = vqaddq_s8(a_.neon_i8, b_.neon_i8);
-#elif defined(SIMDE_POWER_ALTIVEC_P5_NATIVE)
- r_.altivec_i8 = vec_adds(a_.altivec_i8, b_.altivec_i8);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i8) / sizeof(r_.i8[0])); i++) {
- const int32_t tmp = HEDLEY_STATIC_CAST(int16_t, a_.i8[i]) +
- HEDLEY_STATIC_CAST(int16_t, b_.i8[i]);
- r_.i8[i] = HEDLEY_STATIC_CAST(
- int8_t,
- ((tmp < INT8_MAX) ? ((tmp > INT8_MIN) ? tmp : INT8_MIN)
- : INT8_MAX));
- }
-#endif
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_adds_epi8(a, b) simde_mm_adds_epi8(a, b)
-#endif
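The portable branch of simde_mm_adds_epi8() above is saturating addition: each pair of lanes is widened so the sum cannot overflow, then clamped into [INT8_MIN, INT8_MAX] instead of wrapping. The same per-lane logic as a scalar sketch (illustrative only, not part of SIMDe):

#include <stdint.h>
#include <stdio.h>

/* Saturating signed 8-bit add: widen, clamp, narrow. */
static int8_t adds_i8(int8_t a, int8_t b)
{
	int16_t tmp = (int16_t)a + (int16_t)b;
	if (tmp > INT8_MAX)
		return INT8_MAX;
	if (tmp < INT8_MIN)
		return INT8_MIN;
	return (int8_t)tmp;
}

int main(void)
{
	printf("%d\n", adds_i8(100, 100));   /* 127, not the wrapped -56 */
	printf("%d\n", adds_i8(-100, -100)); /* -128 */
	return 0;
}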
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_adds_epi16(simde__m128i a, simde__m128i b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_adds_epi16(a, b);
-#else
- simde__m128i_private r_, a_ = simde__m128i_to_private(a),
- b_ = simde__m128i_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i16 = vqaddq_s16(a_.neon_i16, b_.neon_i16);
-#elif defined(SIMDE_POWER_ALTIVEC_P5_NATIVE)
- r_.altivec_i16 = vec_adds(a_.altivec_i16, b_.altivec_i16);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i16) / sizeof(r_.i16[0])); i++) {
- const int32_t tmp = HEDLEY_STATIC_CAST(int32_t, a_.i16[i]) +
- HEDLEY_STATIC_CAST(int32_t, b_.i16[i]);
- r_.i16[i] = HEDLEY_STATIC_CAST(
- int16_t,
- ((tmp < INT16_MAX)
- ? ((tmp > INT16_MIN) ? tmp : INT16_MIN)
- : INT16_MAX));
- }
-#endif
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_adds_epi16(a, b) simde_mm_adds_epi16(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_adds_epu8(simde__m128i a, simde__m128i b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_adds_epu8(a, b);
-#else
- simde__m128i_private r_, a_ = simde__m128i_to_private(a),
- b_ = simde__m128i_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_u8 = vqaddq_u8(a_.neon_u8, b_.neon_u8);
-#elif defined(SIMDE_POWER_ALTIVEC_P5_NATIVE)
- r_.altivec_u8 = vec_adds(a_.altivec_u8, b_.altivec_u8);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.u8) / sizeof(r_.u8[0])); i++) {
- r_.u8[i] = ((UINT8_MAX - a_.u8[i]) > b_.u8[i])
- ? (a_.u8[i] + b_.u8[i])
- : UINT8_MAX;
- }
-#endif
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_adds_epu8(a, b) simde_mm_adds_epu8(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_adds_epu16(simde__m128i a, simde__m128i b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_adds_epu16(a, b);
-#else
- simde__m128i_private r_, a_ = simde__m128i_to_private(a),
- b_ = simde__m128i_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_u16 = vqaddq_u16(a_.neon_u16, b_.neon_u16);
-#elif defined(SIMDE_POWER_ALTIVEC_P5_NATIVE)
- r_.altivec_u16 = vec_adds(a_.altivec_u16, b_.altivec_u16);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.u16) / sizeof(r_.u16[0])); i++) {
- r_.u16[i] = ((UINT16_MAX - a_.u16[i]) > b_.u16[i])
- ? (a_.u16[i] + b_.u16[i])
- : UINT16_MAX;
- }
-#endif
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_adds_epu16(a, b) simde_mm_adds_epu16(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128d simde_mm_and_pd(simde__m128d a, simde__m128d b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_and_pd(a, b);
-#else
- simde__m128d_private r_, a_ = simde__m128d_to_private(a),
- b_ = simde__m128d_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i32 = vandq_s32(a_.neon_i32, b_.neon_i32);
-#elif defined(SIMDE_WASM_SIMD128_NATIVE)
- r_.wasm_v128 = wasm_v128_and(a_.wasm_v128, b_.wasm_v128);
-#elif defined(SIMDE_POWER_ALTIVEC_P5_NATIVE)
- r_.altivec_f64 = vec_and(a_.altivec_f64, b_.altivec_f64);
-#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
- r_.i32f = a_.i32f & b_.i32f;
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i32f) / sizeof(r_.i32f[0])); i++) {
- r_.i32f[i] = a_.i32f[i] & b_.i32f[i];
- }
-#endif
-
- return simde__m128d_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_and_pd(a, b) simde_mm_and_pd(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_and_si128(simde__m128i a, simde__m128i b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_and_si128(a, b);
-#else
- simde__m128i_private r_, a_ = simde__m128i_to_private(a),
- b_ = simde__m128i_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i32 = vandq_s32(b_.neon_i32, a_.neon_i32);
-#elif defined(SIMDE_POWER_ALTIVEC_P5_NATIVE)
- r_.altivec_u32f = vec_and(a_.altivec_u32f, b_.altivec_u32f);
-#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
- r_.i32f = a_.i32f & b_.i32f;
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i32f) / sizeof(r_.i32f[0])); i++) {
- r_.i32f[i] = a_.i32f[i] & b_.i32f[i];
- }
-#endif
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_and_si128(a, b) simde_mm_and_si128(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128d simde_mm_andnot_pd(simde__m128d a, simde__m128d b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_andnot_pd(a, b);
-#else
- simde__m128d_private r_, a_ = simde__m128d_to_private(a),
- b_ = simde__m128d_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i32 = vbicq_s32(a_.neon_i32, b_.neon_i32);
-#elif defined(SIMDE_WASM_SIMD128_NATIVE)
- r_.wasm_v128 = wasm_v128_andnot(b_.wasm_v128, a_.wasm_v128);
-#elif defined(SIMDE_POWER_ALTIVEC_P5_NATIVE)
- r_.altivec_i32f = vec_andc(a_.altivec_i32f, b_.altivec_i32f);
-#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
- r_.i32f = ~a_.i32f & b_.i32f;
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.u64) / sizeof(r_.u64[0])); i++) {
- r_.u64[i] = ~a_.u64[i] & b_.u64[i];
- }
-#endif
-
- return simde__m128d_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_andnot_pd(a, b) simde_mm_andnot_pd(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_andnot_si128(simde__m128i a, simde__m128i b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_andnot_si128(a, b);
-#else
- simde__m128i_private r_, a_ = simde__m128i_to_private(a),
- b_ = simde__m128i_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i32 = vbicq_s32(b_.neon_i32, a_.neon_i32);
-#elif defined(SIMDE_POWER_ALTIVEC_P5_NATIVE)
- r_.altivec_i32 = vec_andc(b_.altivec_i32, a_.altivec_i32);
-#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
- r_.i32f = ~a_.i32f & b_.i32f;
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i32f) / sizeof(r_.i32f[0])); i++) {
- r_.i32f[i] = ~(a_.i32f[i]) & b_.i32f[i];
- }
-#endif
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_andnot_si128(a, b) simde_mm_andnot_si128(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_avg_epu8(simde__m128i a, simde__m128i b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_avg_epu8(a, b);
-#else
- simde__m128i_private r_, a_ = simde__m128i_to_private(a),
- b_ = simde__m128i_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_u8 = vrhaddq_u8(b_.neon_u8, a_.neon_u8);
-#elif defined(SIMDE_POWER_ALTIVEC_P5_NATIVE)
- r_.altivec_u8 = vec_avg(a_.altivec_u8, b_.altivec_u8);
-#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS) && \
- defined(SIMDE_VECTOR_SUBSCRIPT_SCALAR) && \
- defined(SIMDE_CONVERT_VECTOR_)
- uint16_t wa SIMDE_VECTOR(32);
- uint16_t wb SIMDE_VECTOR(32);
- uint16_t wr SIMDE_VECTOR(32);
- SIMDE_CONVERT_VECTOR_(wa, a_.u8);
- SIMDE_CONVERT_VECTOR_(wb, b_.u8);
- wr = (wa + wb + 1) >> 1;
- SIMDE_CONVERT_VECTOR_(r_.u8, wr);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.u8) / sizeof(r_.u8[0])); i++) {
- r_.u8[i] = (a_.u8[i] + b_.u8[i] + 1) >> 1;
- }
-#endif
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_avg_epu8(a, b) simde_mm_avg_epu8(a, b)
-#endif
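The generic fallback in simde_mm_avg_epu8() above computes the rounded average (a + b + 1) >> 1 in a widened type so the intermediate sum cannot overflow, matching the rounding behaviour of pavgb. A scalar sketch of one lane (illustrative only):

#include <stdint.h>
#include <stdio.h>

/* Rounded average of two unsigned bytes without overflow. */
static uint8_t avg_u8(uint8_t a, uint8_t b)
{
	return (uint8_t)(((uint16_t)a + (uint16_t)b + 1) >> 1);
}

int main(void)
{
	printf("%d\n", avg_u8(255, 254)); /* 255, no wraparound */
	printf("%d\n", avg_u8(1, 2));     /* 2, rounds up */
	return 0;
}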
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_avg_epu16(simde__m128i a, simde__m128i b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_avg_epu16(a, b);
-#else
- simde__m128i_private r_, a_ = simde__m128i_to_private(a),
- b_ = simde__m128i_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_u16 = vrhaddq_u16(b_.neon_u16, a_.neon_u16);
-#elif defined(SIMDE_POWER_ALTIVEC_P5_NATIVE)
- r_.altivec_u16 = vec_avg(a_.altivec_u16, b_.altivec_u16);
-#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS) && \
- defined(SIMDE_VECTOR_SUBSCRIPT_SCALAR) && \
- defined(SIMDE_CONVERT_VECTOR_)
- uint32_t wa SIMDE_VECTOR(32);
- uint32_t wb SIMDE_VECTOR(32);
- uint32_t wr SIMDE_VECTOR(32);
- SIMDE_CONVERT_VECTOR_(wa, a_.u16);
- SIMDE_CONVERT_VECTOR_(wb, b_.u16);
- wr = (wa + wb + 1) >> 1;
- SIMDE_CONVERT_VECTOR_(r_.u16, wr);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.u16) / sizeof(r_.u16[0])); i++) {
- r_.u16[i] = (a_.u16[i] + b_.u16[i] + 1) >> 1;
- }
-#endif
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_avg_epu16(a, b) simde_mm_avg_epu16(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_setzero_si128(void)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_setzero_si128();
-#else
- simde__m128i_private r_;
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i32 = vdupq_n_s32(0);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i32f) / sizeof(r_.i32f[0])); i++) {
- r_.i32f[i] = 0;
- }
-#endif
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_setzero_si128() (simde_mm_setzero_si128())
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_bslli_si128(simde__m128i a, const int imm8)
- SIMDE_REQUIRE_RANGE(imm8, 0, 255)
-{
- simde__m128i_private r_, a_ = simde__m128i_to_private(a);
-
- if (HEDLEY_UNLIKELY((imm8 & ~15))) {
- return simde_mm_setzero_si128();
- }
-
-#if defined(SIMDE_HAVE_INT128_) && defined(__BYTE_ORDER__) && \
- (__BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__) && 0
- r_.u128[0] = a_.u128[0] << s;
-#else
- r_ = simde__m128i_to_private(simde_mm_setzero_si128());
- for (int i = imm8;
- i < HEDLEY_STATIC_CAST(int, sizeof(r_.i8) / sizeof(r_.i8[0]));
- i++) {
- r_.i8[i] = a_.i8[i - imm8];
- }
-#endif
-
- return simde__m128i_from_private(r_);
-}
-#if defined(SIMDE_X86_SSE2_NATIVE) && !defined(__PGI)
-#define simde_mm_bslli_si128(a, imm8) _mm_slli_si128(a, imm8)
-#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE) && !defined(__clang__)
-#define simde_mm_bslli_si128(a, imm8) \
- simde__m128i_from_neon_i8( \
- ((imm8) <= 0) \
- ? simde__m128i_to_neon_i8(a) \
- : (((imm8) > 15) \
- ? (vdupq_n_s8(0)) \
- : (vextq_s8(vdupq_n_s8(0), \
- simde__m128i_to_neon_i8(a), \
- 16 - (imm8)))))
-#elif defined(SIMDE_SHUFFLE_VECTOR_)
-#define simde_mm_bslli_si128(a, imm8) \
- (__extension__({ \
- const simde__m128i_private simde__tmp_a_ = \
- simde__m128i_to_private(a); \
- const simde__m128i_private simde__tmp_z_ = \
- simde__m128i_to_private(simde_mm_setzero_si128()); \
- simde__m128i_private simde__tmp_r_; \
- if (HEDLEY_UNLIKELY(imm8 > 15)) { \
- simde__tmp_r_ = simde__m128i_to_private( \
- simde_mm_setzero_si128()); \
- } else { \
- simde__tmp_r_.i8 = SIMDE_SHUFFLE_VECTOR_( \
- 8, 16, simde__tmp_z_.i8, (simde__tmp_a_).i8, \
- HEDLEY_STATIC_CAST(int8_t, (16 - imm8) & 31), \
- HEDLEY_STATIC_CAST(int8_t, (17 - imm8) & 31), \
- HEDLEY_STATIC_CAST(int8_t, (18 - imm8) & 31), \
- HEDLEY_STATIC_CAST(int8_t, (19 - imm8) & 31), \
- HEDLEY_STATIC_CAST(int8_t, (20 - imm8) & 31), \
- HEDLEY_STATIC_CAST(int8_t, (21 - imm8) & 31), \
- HEDLEY_STATIC_CAST(int8_t, (22 - imm8) & 31), \
- HEDLEY_STATIC_CAST(int8_t, (23 - imm8) & 31), \
- HEDLEY_STATIC_CAST(int8_t, (24 - imm8) & 31), \
- HEDLEY_STATIC_CAST(int8_t, (25 - imm8) & 31), \
- HEDLEY_STATIC_CAST(int8_t, (26 - imm8) & 31), \
- HEDLEY_STATIC_CAST(int8_t, (27 - imm8) & 31), \
- HEDLEY_STATIC_CAST(int8_t, (28 - imm8) & 31), \
- HEDLEY_STATIC_CAST(int8_t, (29 - imm8) & 31), \
- HEDLEY_STATIC_CAST(int8_t, (30 - imm8) & 31), \
- HEDLEY_STATIC_CAST(int8_t, (31 - imm8) & 31)); \
- } \
- simde__m128i_from_private(simde__tmp_r_); \
- }))
-#endif
-#define simde_mm_slli_si128(a, imm8) simde_mm_bslli_si128(a, imm8)
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_bslli_si128(a, b) simde_mm_bslli_si128(a, b)
-#define _mm_slli_si128(a, b) simde_mm_bslli_si128(a, b)
-#endif
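simde_mm_bslli_si128() shifts the whole 128-bit value left by imm8 bytes (not bits), zero-filling from the low end and returning all zeros for any count above 15; that is what the portable loop above expresses element by element. A byte-array sketch of the same operation (illustrative only):

#include <stdint.h>
#include <string.h>

/* Shift a 16-byte little-endian value left by imm8 whole bytes. */
static void bslli128(uint8_t r[16], const uint8_t a[16], int imm8)
{
	memset(r, 0, 16);
	if (imm8 < 0 || imm8 > 15)
		return; /* out-of-range counts yield all zeros */
	/* byte i of the result is byte (i - imm8) of the input */
	memcpy(r + imm8, a, (size_t)(16 - imm8));
}

int main(void)
{
	uint8_t a[16] = {1, 2, 3, 4, 5, 6, 7, 8,
			 9, 10, 11, 12, 13, 14, 15, 16};
	uint8_t r[16];
	bslli128(r, a, 4); /* r = {0,0,0,0,1,2,...,12} */
	return r[4] == 1 ? 0 : 1;
}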
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_bsrli_si128(simde__m128i a, const int imm8)
- SIMDE_REQUIRE_RANGE(imm8, 0, 255)
-{
- simde__m128i_private r_, a_ = simde__m128i_to_private(a);
-
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i8) / sizeof(r_.i8[0])); i++) {
- const int e = HEDLEY_STATIC_CAST(int, i) + imm8;
- r_.i8[i] = (e < 16) ? a_.i8[e] : 0;
- }
-
- return simde__m128i_from_private(r_);
-}
-#if defined(SIMDE_X86_SSE2_NATIVE) && !defined(__PGI)
-#define simde_mm_bsrli_si128(a, imm8) _mm_srli_si128(a, imm8)
-#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE) && !defined(__clang__)
-#define simde_mm_bsrli_si128(a, imm8) \
- simde__m128i_from_neon_i8( \
- ((imm8 < 0) || (imm8 > 15)) \
- ? vdupq_n_s8(0) \
- : (vextq_s8(simde__m128i_to_private(a).neon_i8, \
- vdupq_n_s8(0), \
- ((imm8 & 15) != 0) ? imm8 : (imm8 & 15))))
-#elif defined(SIMDE_SHUFFLE_VECTOR_)
-#define simde_mm_bsrli_si128(a, imm8) \
- (__extension__({ \
- const simde__m128i_private simde__tmp_a_ = \
- simde__m128i_to_private(a); \
- const simde__m128i_private simde__tmp_z_ = \
- simde__m128i_to_private(simde_mm_setzero_si128()); \
- simde__m128i_private simde__tmp_r_ = \
- simde__m128i_to_private(a); \
- if (HEDLEY_UNLIKELY(imm8 > 15)) { \
- simde__tmp_r_ = simde__m128i_to_private( \
- simde_mm_setzero_si128()); \
- } else { \
- simde__tmp_r_.i8 = SIMDE_SHUFFLE_VECTOR_( \
- 8, 16, simde__tmp_z_.i8, (simde__tmp_a_).i8, \
- HEDLEY_STATIC_CAST(int8_t, (imm8 + 16) & 31), \
- HEDLEY_STATIC_CAST(int8_t, (imm8 + 17) & 31), \
- HEDLEY_STATIC_CAST(int8_t, (imm8 + 18) & 31), \
- HEDLEY_STATIC_CAST(int8_t, (imm8 + 19) & 31), \
- HEDLEY_STATIC_CAST(int8_t, (imm8 + 20) & 31), \
- HEDLEY_STATIC_CAST(int8_t, (imm8 + 21) & 31), \
- HEDLEY_STATIC_CAST(int8_t, (imm8 + 22) & 31), \
- HEDLEY_STATIC_CAST(int8_t, (imm8 + 23) & 31), \
- HEDLEY_STATIC_CAST(int8_t, (imm8 + 24) & 31), \
- HEDLEY_STATIC_CAST(int8_t, (imm8 + 25) & 31), \
- HEDLEY_STATIC_CAST(int8_t, (imm8 + 26) & 31), \
- HEDLEY_STATIC_CAST(int8_t, (imm8 + 27) & 31), \
- HEDLEY_STATIC_CAST(int8_t, (imm8 + 28) & 31), \
- HEDLEY_STATIC_CAST(int8_t, (imm8 + 29) & 31), \
- HEDLEY_STATIC_CAST(int8_t, (imm8 + 30) & 31), \
- HEDLEY_STATIC_CAST(int8_t, (imm8 + 31) & 31)); \
- } \
- simde__m128i_from_private(simde__tmp_r_); \
- }))
-#endif
-#define simde_mm_srli_si128(a, imm8) simde_mm_bsrli_si128((a), (imm8))
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_bsrli_si128(a, imm8) simde_mm_bsrli_si128((a), (imm8))
-#define _mm_srli_si128(a, imm8) simde_mm_bsrli_si128((a), (imm8))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-void simde_mm_clflush(void const *p)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- _mm_clflush(p);
-#else
- (void)p;
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_clflush(a, b) simde_mm_clflush()
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-int simde_mm_comieq_sd(simde__m128d a, simde__m128d b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_comieq_sd(a, b);
-#else
- simde__m128d_private a_ = simde__m128d_to_private(a),
- b_ = simde__m128d_to_private(b);
-#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
- return !!vgetq_lane_u64(vceqq_f64(a_.neon_f64, b_.neon_f64), 0);
-#else
- return a_.f64[0] == b_.f64[0];
-#endif
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_comieq_sd(a, b) simde_mm_comieq_sd(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-int simde_mm_comige_sd(simde__m128d a, simde__m128d b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_comige_sd(a, b);
-#else
- simde__m128d_private a_ = simde__m128d_to_private(a),
- b_ = simde__m128d_to_private(b);
-#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
- return !!vgetq_lane_u64(vcgeq_f64(a_.neon_f64, b_.neon_f64), 0);
-#else
- return a_.f64[0] >= b_.f64[0];
-#endif
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_comige_sd(a, b) simde_mm_comige_sd(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-int simde_mm_comigt_sd(simde__m128d a, simde__m128d b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_comigt_sd(a, b);
-#else
- simde__m128d_private a_ = simde__m128d_to_private(a),
- b_ = simde__m128d_to_private(b);
-#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
- return !!vgetq_lane_u64(vcgtq_f64(a_.neon_f64, b_.neon_f64), 0);
-#else
- return a_.f64[0] > b_.f64[0];
-#endif
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_comigt_sd(a, b) simde_mm_comigt_sd(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-int simde_mm_comile_sd(simde__m128d a, simde__m128d b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_comile_sd(a, b);
-#else
- simde__m128d_private a_ = simde__m128d_to_private(a),
- b_ = simde__m128d_to_private(b);
-#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
- return !!vgetq_lane_u64(vcleq_f64(a_.neon_f64, b_.neon_f64), 0);
-#else
- return a_.f64[0] <= b_.f64[0];
-#endif
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_comile_sd(a, b) simde_mm_comile_sd(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-int simde_mm_comilt_sd(simde__m128d a, simde__m128d b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_comilt_sd(a, b);
-#else
- simde__m128d_private a_ = simde__m128d_to_private(a),
- b_ = simde__m128d_to_private(b);
-#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
- return !!vgetq_lane_u64(vcltq_f64(a_.neon_f64, b_.neon_f64), 0);
-#else
- return a_.f64[0] < b_.f64[0];
-#endif
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_comilt_sd(a, b) simde_mm_comilt_sd(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-int simde_mm_comineq_sd(simde__m128d a, simde__m128d b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_comineq_sd(a, b);
-#else
- simde__m128d_private a_ = simde__m128d_to_private(a),
- b_ = simde__m128d_to_private(b);
-#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
- return !vgetq_lane_u64(vceqq_f64(a_.neon_f64, b_.neon_f64), 0);
-#else
- return a_.f64[0] != b_.f64[0];
-#endif
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_comineq_sd(a, b) simde_mm_comineq_sd(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_castpd_ps(simde__m128d a)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_castpd_ps(a);
-#else
- simde__m128 r;
- simde_memcpy(&r, &a, sizeof(a));
- return r;
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_castpd_ps(a) simde_mm_castpd_ps(a)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_castpd_si128(simde__m128d a)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_castpd_si128(a);
-#else
- simde__m128i r;
- simde_memcpy(&r, &a, sizeof(a));
- return r;
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_castpd_si128(a) simde_mm_castpd_si128(a)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128d simde_mm_castps_pd(simde__m128 a)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_castps_pd(a);
-#else
- simde__m128d r;
- simde_memcpy(&r, &a, sizeof(a));
- return r;
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_castps_pd(a) simde_mm_castps_pd(a)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_castps_si128(simde__m128 a)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_castps_si128(a);
-#else
- simde__m128i r;
- simde_memcpy(&r, &a, sizeof(a));
- return r;
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_castps_si128(a) simde_mm_castps_si128(a)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128d simde_mm_castsi128_pd(simde__m128i a)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_castsi128_pd(a);
-#else
- simde__m128d r;
- simde_memcpy(&r, &a, sizeof(a));
- return r;
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_castsi128_pd(a) simde_mm_castsi128_pd(a)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_castsi128_ps(simde__m128i a)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_castsi128_ps(a);
-#elif defined(SIMDE_POWER_ALTIVEC_P5_NATIVE)
- return a;
-#else
- simde__m128 r;
- simde_memcpy(&r, &a, sizeof(a));
- return r;
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_castsi128_ps(a) simde_mm_castsi128_ps(a)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_cmpeq_epi8(simde__m128i a, simde__m128i b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_cmpeq_epi8(a, b);
-#else
- simde__m128i_private r_, a_ = simde__m128i_to_private(a),
- b_ = simde__m128i_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i8 = vreinterpretq_s8_u8(vceqq_s8(b_.neon_i8, a_.neon_i8));
-#elif defined(SIMDE_WASM_SIMD128_NATIVE)
- r_.wasm_v128 = wasm_i8x16_eq(a_.wasm_v128, b_.wasm_v128);
-#elif defined(SIMDE_POWER_ALTIVEC_P5_NATIVE)
- r_.altivec_i8 = (SIMDE_POWER_ALTIVEC_VECTOR(signed char))vec_cmpeq(
- a_.altivec_i8, b_.altivec_i8);
-#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
- r_.i8 = HEDLEY_STATIC_CAST(__typeof__(r_.i8), (a_.i8 == b_.i8));
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i8) / sizeof(r_.i8[0])); i++) {
- r_.i8[i] = (a_.i8[i] == b_.i8[i]) ? ~INT8_C(0) : INT8_C(0);
- }
-#endif
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_cmpeq_epi8(a, b) simde_mm_cmpeq_epi8(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_cmpeq_epi16(simde__m128i a, simde__m128i b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_cmpeq_epi16(a, b);
-#else
- simde__m128i_private r_, a_ = simde__m128i_to_private(a),
- b_ = simde__m128i_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i16 =
- vreinterpretq_s16_u16(vceqq_s16(b_.neon_i16, a_.neon_i16));
-#elif defined(SIMDE_WASM_SIMD128_NATIVE)
- r_.wasm_v128 = wasm_i16x8_eq(a_.wasm_v128, b_.wasm_v128);
-#elif defined(SIMDE_POWER_ALTIVEC_P5_NATIVE)
- r_.altivec_i16 = (SIMDE_POWER_ALTIVEC_VECTOR(signed short))vec_cmpeq(
- a_.altivec_i16, b_.altivec_i16);
-#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
- r_.i16 = (a_.i16 == b_.i16);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i16) / sizeof(r_.i16[0])); i++) {
- r_.i16[i] = (a_.i16[i] == b_.i16[i]) ? ~INT16_C(0) : INT16_C(0);
- }
-#endif
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_cmpeq_epi16(a, b) simde_mm_cmpeq_epi16(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_cmpeq_epi32(simde__m128i a, simde__m128i b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_cmpeq_epi32(a, b);
-#else
- simde__m128i_private r_, a_ = simde__m128i_to_private(a),
- b_ = simde__m128i_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i32 =
- vreinterpretq_s32_u32(vceqq_s32(b_.neon_i32, a_.neon_i32));
-#elif defined(SIMDE_WASM_SIMD128_NATIVE)
- r_.wasm_v128 = wasm_i32x4_eq(a_.wasm_v128, b_.wasm_v128);
-#elif defined(SIMDE_POWER_ALTIVEC_P5_NATIVE)
- r_.altivec_i32 = (SIMDE_POWER_ALTIVEC_VECTOR(signed int))vec_cmpeq(
- a_.altivec_i32, b_.altivec_i32);
-#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
- r_.i32 = HEDLEY_STATIC_CAST(__typeof__(r_.i32), a_.i32 == b_.i32);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i32) / sizeof(r_.i32[0])); i++) {
- r_.i32[i] = (a_.i32[i] == b_.i32[i]) ? ~INT32_C(0) : INT32_C(0);
- }
-#endif
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_cmpeq_epi32(a, b) simde_mm_cmpeq_epi32(a, b)
-#endif
-
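The portable loops in the integer comparisons above follow the SSE2 mask convention: a lane that satisfies the predicate becomes all ones (~INT8_C(0), i.e. 0xFF per byte) and every other lane becomes zero, so the result can be used directly as a select mask. A scalar sketch of that convention, with a hypothetical cmpeq_u8_scalar helper rather than anything from SIMDe:

#include <stddef.h>
#include <stdint.h>

static void cmpeq_u8_scalar(const uint8_t *a, const uint8_t *b, uint8_t *r, size_t n)
{
        for (size_t i = 0; i < n; i++)
                r[i] = (a[i] == b[i]) ? UINT8_C(0xFF) : UINT8_C(0x00); /* all ones on match */
}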
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128d simde_mm_cmpeq_pd(simde__m128d a, simde__m128d b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_cmpeq_pd(a, b);
-#else
- simde__m128d_private r_, a_ = simde__m128d_to_private(a),
- b_ = simde__m128d_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
- r_.neon_i64 = vreinterpretq_s64_u64(
- vceqq_s64(vreinterpretq_s64_f64(b_.neon_f64),
- vreinterpretq_s64_f64(a_.neon_f64)));
-#elif defined(SIMDE_WASM_SIMD128_NATIVE)
- r_.wasm_v128 = wasm_f64x2_eq(a_.wasm_v128, b_.wasm_v128);
-#elif defined(SIMDE_POWER_ALTIVEC_P5_NATIVE)
- r_.altivec_f64 = (SIMDE_POWER_ALTIVEC_VECTOR(double))vec_cmpeq(
- a_.altivec_f64, b_.altivec_f64);
-#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
- r_.i64 = HEDLEY_STATIC_CAST(__typeof__(r_.i64), (a_.f64 == b_.f64));
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.f64) / sizeof(r_.f64[0])); i++) {
- r_.u64[i] = (a_.f64[i] == b_.f64[i]) ? ~UINT64_C(0)
- : UINT64_C(0);
- }
-#endif
-
- return simde__m128d_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_cmpeq_pd(a, b) simde_mm_cmpeq_pd(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128d simde_mm_cmpeq_sd(simde__m128d a, simde__m128d b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_cmpeq_sd(a, b);
-#elif defined(SIMDE_ASSUME_VECTORIZATION)
- return simde_mm_move_sd(a, simde_mm_cmpeq_pd(a, b));
-#else
- simde__m128d_private r_, a_ = simde__m128d_to_private(a),
- b_ = simde__m128d_to_private(b);
-
- r_.u64[0] = (a_.u64[0] == b_.u64[0]) ? ~UINT64_C(0) : 0;
- r_.u64[1] = a_.u64[1];
-
- return simde__m128d_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_cmpeq_sd(a, b) simde_mm_cmpeq_sd(a, b)
-#endif
-
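As the cmpeq_sd fallback above shows, the *_sd variants compute only lane 0 and pass lane 1 of the first argument through unchanged. A scalar sketch of that convention, mirroring the bitwise low-lane test used above (vec2d and cmpeq_sd_scalar are illustrative stand-ins, not SIMDe types):

#include <stdint.h>
#include <string.h>

typedef struct { double f64[2]; } vec2d; /* stand-in for simde__m128d_private */

static vec2d cmpeq_sd_scalar(vec2d a, vec2d b)
{
        vec2d r;
        uint64_t ua, ub, mask;
        memcpy(&ua, &a.f64[0], sizeof(ua));
        memcpy(&ub, &b.f64[0], sizeof(ub));
        mask = (ua == ub) ? ~UINT64_C(0) : UINT64_C(0); /* low lane: all-ones/all-zeros */
        memcpy(&r.f64[0], &mask, sizeof(mask));
        r.f64[1] = a.f64[1]; /* high lane passes through from a */
        return r;
}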
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128d simde_mm_cmpneq_pd(simde__m128d a, simde__m128d b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_cmpneq_pd(a, b);
-#else
- simde__m128d_private r_, a_ = simde__m128d_to_private(a),
- b_ = simde__m128d_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_f32 = vreinterpretq_f32_u16(
- vmvnq_u16(vceqq_s16(b_.neon_i16, a_.neon_i16)));
-#elif defined(SIMDE_WASM_SIMD128_NATIVE)
- r_.wasm_v128 = wasm_f64x2_ne(a_.wasm_v128, b_.wasm_v128);
-#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
- r_.i64 = HEDLEY_STATIC_CAST(__typeof__(r_.i64), (a_.f64 != b_.f64));
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.f64) / sizeof(r_.f64[0])); i++) {
- r_.u64[i] = (a_.f64[i] != b_.f64[i]) ? ~UINT64_C(0)
- : UINT64_C(0);
- }
-#endif
-
- return simde__m128d_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_cmpneq_pd(a, b) simde_mm_cmpneq_pd(a, b)
-#endif
-
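The NEON branch of cmpneq_pd above appears to build the mask from a 16-bit integer comparison of the raw bits (vceqq_s16 followed by vmvnq_u16). A bitwise test does not agree with an IEEE double '!=' in the corner cases: NaN compares unequal to itself yet can have an identical bit pattern, and +0.0/-0.0 compare equal yet differ in one bit. A small standalone illustration of the difference (plain C, not SIMDe code):

#include <math.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static int bitwise_neq(double a, double b)
{
        uint64_t ua, ub;
        memcpy(&ua, &a, sizeof(ua));
        memcpy(&ub, &b, sizeof(ub));
        return ua != ub;
}

int main(void)
{
        double n = NAN;
        printf("%d %d\n", n != n, bitwise_neq(n, n));           /* 1 0 */
        printf("%d %d\n", 0.0 != -0.0, bitwise_neq(0.0, -0.0)); /* 0 1 */
        return 0;
}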
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128d simde_mm_cmpneq_sd(simde__m128d a, simde__m128d b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_cmpneq_sd(a, b);
-#elif defined(SIMDE_ASSUME_VECTORIZATION)
- return simde_mm_move_sd(a, simde_mm_cmpneq_pd(a, b));
-#else
- simde__m128d_private r_, a_ = simde__m128d_to_private(a),
- b_ = simde__m128d_to_private(b);
-
- r_.u64[0] = (a_.f64[0] != b_.f64[0]) ? ~UINT64_C(0) : UINT64_C(0);
- r_.u64[1] = a_.u64[1];
-
- return simde__m128d_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_cmpneq_sd(a, b) simde_mm_cmpneq_sd(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_cmplt_epi8(simde__m128i a, simde__m128i b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_cmplt_epi8(a, b);
-#else
- simde__m128i_private r_, a_ = simde__m128i_to_private(a),
- b_ = simde__m128i_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i8 = vreinterpretq_s8_u8(vcltq_s8(a_.neon_i8, b_.neon_i8));
-#elif defined(SIMDE_POWER_ALTIVEC_P5_NATIVE)
- r_.altivec_i8 = HEDLEY_REINTERPRET_CAST(
- SIMDE_POWER_ALTIVEC_VECTOR(signed char),
- vec_cmplt(a_.altivec_i8, b_.altivec_i8));
-#elif defined(SIMDE_WASM_SIMD128_NATIVE)
- r_.wasm_v128 = wasm_i8x16_lt(a_.wasm_v128, b_.wasm_v128);
-#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
- r_.i8 = HEDLEY_STATIC_CAST(__typeof__(r_.i8), (a_.i8 < b_.i8));
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i8) / sizeof(r_.i8[0])); i++) {
- r_.i8[i] = (a_.i8[i] < b_.i8[i]) ? ~INT8_C(0) : INT8_C(0);
- }
-#endif
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_cmplt_epi8(a, b) simde_mm_cmplt_epi8(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_cmplt_epi16(simde__m128i a, simde__m128i b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_cmplt_epi16(a, b);
-#else
- simde__m128i_private r_, a_ = simde__m128i_to_private(a),
- b_ = simde__m128i_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i16 =
- vreinterpretq_s16_u16(vcltq_s16(a_.neon_i16, b_.neon_i16));
-#elif defined(SIMDE_POWER_ALTIVEC_P5_NATIVE)
- r_.altivec_i16 = HEDLEY_REINTERPRET_CAST(
- SIMDE_POWER_ALTIVEC_VECTOR(signed short),
- vec_cmplt(a_.altivec_i16, b_.altivec_i16));
-#elif defined(SIMDE_WASM_SIMD128_NATIVE)
- r_.wasm_v128 = wasm_i16x8_lt(a_.wasm_v128, b_.wasm_v128);
-#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
- r_.i16 = HEDLEY_STATIC_CAST(__typeof__(r_.i16), (a_.i16 < b_.i16));
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i16) / sizeof(r_.i16[0])); i++) {
- r_.i16[i] = (a_.i16[i] < b_.i16[i]) ? ~INT16_C(0) : INT16_C(0);
- }
-#endif
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_cmplt_epi16(a, b) simde_mm_cmplt_epi16(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_cmplt_epi32(simde__m128i a, simde__m128i b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_cmplt_epi32(a, b);
-#else
- simde__m128i_private r_, a_ = simde__m128i_to_private(a),
- b_ = simde__m128i_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i32 =
- vreinterpretq_s32_u32(vcltq_s32(a_.neon_i32, b_.neon_i32));
-#elif defined(SIMDE_POWER_ALTIVEC_P5_NATIVE)
- r_.altivec_i32 = HEDLEY_REINTERPRET_CAST(
- SIMDE_POWER_ALTIVEC_VECTOR(signed int),
- vec_cmplt(a_.altivec_i32, b_.altivec_i32));
-#elif defined(SIMDE_WASM_SIMD128_NATIVE)
- r_.wasm_v128 = wasm_i32x4_lt(a_.wasm_v128, b_.wasm_v128);
-#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
- r_.i32 = HEDLEY_STATIC_CAST(__typeof__(r_.i32), (a_.i32 < b_.i32));
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i32) / sizeof(r_.i32[0])); i++) {
- r_.i32[i] = (a_.i32[i] < b_.i32[i]) ? ~INT32_C(0) : INT32_C(0);
- }
-#endif
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_cmplt_epi32(a, b) simde_mm_cmplt_epi32(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128d simde_mm_cmplt_pd(simde__m128d a, simde__m128d b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_cmplt_pd(a, b);
-#else
- simde__m128d_private r_, a_ = simde__m128d_to_private(a),
- b_ = simde__m128d_to_private(b);
-
-#if defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
- r_.i64 = HEDLEY_STATIC_CAST(__typeof__(r_.i64), (a_.f64 < b_.f64));
-#elif defined(SIMDE_WASM_SIMD128_NATIVE)
- r_.wasm_v128 = wasm_f64x2_lt(a_.wasm_v128, b_.wasm_v128);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.f64) / sizeof(r_.f64[0])); i++) {
- r_.u64[i] = (a_.f64[i] < b_.f64[i]) ? ~UINT64_C(0)
- : UINT64_C(0);
- }
-#endif
-
- return simde__m128d_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_cmplt_pd(a, b) simde_mm_cmplt_pd(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128d simde_mm_cmplt_sd(simde__m128d a, simde__m128d b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_cmplt_sd(a, b);
-#elif defined(SIMDE_ASSUME_VECTORIZATION)
- return simde_mm_move_sd(a, simde_mm_cmplt_pd(a, b));
-#else
- simde__m128d_private r_, a_ = simde__m128d_to_private(a),
- b_ = simde__m128d_to_private(b);
-
- r_.u64[0] = (a_.f64[0] < b_.f64[0]) ? ~UINT64_C(0) : UINT64_C(0);
- r_.u64[1] = a_.u64[1];
-
- return simde__m128d_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_cmplt_sd(a, b) simde_mm_cmplt_sd(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128d simde_mm_cmple_pd(simde__m128d a, simde__m128d b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_cmple_pd(a, b);
-#else
- simde__m128d_private r_, a_ = simde__m128d_to_private(a),
- b_ = simde__m128d_to_private(b);
-
-#if defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
- r_.i64 = HEDLEY_STATIC_CAST(__typeof__(r_.i64), (a_.f64 <= b_.f64));
-#elif defined(SIMDE_WASM_SIMD128_NATIVE)
- r_.wasm_v128 = wasm_f64x2_le(a_.wasm_v128, b_.wasm_v128);
-#elif defined(SIMDE_POWER_ALTIVEC_P5_NATIVE)
- r_.altivec_f64 = (SIMDE_POWER_ALTIVEC_VECTOR(double))vec_cmple(
- a_.altivec_f64, b_.altivec_f64);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.f64) / sizeof(r_.f64[0])); i++) {
- r_.u64[i] = (a_.f64[i] <= b_.f64[i]) ? ~UINT64_C(0)
- : UINT64_C(0);
- }
-#endif
-
- return simde__m128d_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_cmple_pd(a, b) simde_mm_cmple_pd(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128d simde_mm_cmple_sd(simde__m128d a, simde__m128d b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_cmple_sd(a, b);
-#elif defined(SIMDE_ASSUME_VECTORIZATION)
- return simde_mm_move_sd(a, simde_mm_cmple_pd(a, b));
-#else
- simde__m128d_private r_, a_ = simde__m128d_to_private(a),
- b_ = simde__m128d_to_private(b);
-
- r_.u64[0] = (a_.f64[0] <= b_.f64[0]) ? ~UINT64_C(0) : UINT64_C(0);
- r_.u64[1] = a_.u64[1];
-
- return simde__m128d_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_cmple_sd(a, b) simde_mm_cmple_sd(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_cmpgt_epi8(simde__m128i a, simde__m128i b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_cmpgt_epi8(a, b);
-#else
- simde__m128i_private r_, a_ = simde__m128i_to_private(a),
- b_ = simde__m128i_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i8 = vreinterpretq_s8_u8(vcgtq_s8(a_.neon_i8, b_.neon_i8));
-#elif defined(SIMDE_WASM_SIMD128_NATIVE)
- r_.wasm_v128 = wasm_i8x16_gt(a_.wasm_v128, b_.wasm_v128);
-#elif defined(SIMDE_POWER_ALTIVEC_P5_NATIVE)
- r_.altivec_i8 = (SIMDE_POWER_ALTIVEC_VECTOR(signed char))vec_cmpgt(
- a_.altivec_i8, b_.altivec_i8);
-#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
- r_.i8 = HEDLEY_STATIC_CAST(__typeof__(r_.i8), (a_.i8 > b_.i8));
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i8) / sizeof(r_.i8[0])); i++) {
- r_.i8[i] = (a_.i8[i] > b_.i8[i]) ? ~INT8_C(0) : INT8_C(0);
- }
-#endif
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_cmpgt_epi8(a, b) simde_mm_cmpgt_epi8(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_cmpgt_epi16(simde__m128i a, simde__m128i b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_cmpgt_epi16(a, b);
-#else
- simde__m128i_private r_, a_ = simde__m128i_to_private(a),
- b_ = simde__m128i_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i16 =
- vreinterpretq_s16_u16(vcgtq_s16(a_.neon_i16, b_.neon_i16));
-#elif defined(SIMDE_WASM_SIMD128_NATIVE)
- r_.wasm_v128 = wasm_i16x8_gt(a_.wasm_v128, b_.wasm_v128);
-#elif defined(SIMDE_POWER_ALTIVEC_P5_NATIVE)
- r_.altivec_i16 = HEDLEY_REINTERPRET_CAST(
- SIMDE_POWER_ALTIVEC_VECTOR(signed short),
- vec_cmpgt(a_.altivec_i16, b_.altivec_i16));
-#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
- r_.i16 = HEDLEY_STATIC_CAST(__typeof__(r_.i16), (a_.i16 > b_.i16));
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i16) / sizeof(r_.i16[0])); i++) {
- r_.i16[i] = (a_.i16[i] > b_.i16[i]) ? ~INT16_C(0) : INT16_C(0);
- }
-#endif
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_cmpgt_epi16(a, b) simde_mm_cmpgt_epi16(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_cmpgt_epi32(simde__m128i a, simde__m128i b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_cmpgt_epi32(a, b);
-#else
- simde__m128i_private r_, a_ = simde__m128i_to_private(a),
- b_ = simde__m128i_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i32 =
- vreinterpretq_s32_u32(vcgtq_s32(a_.neon_i32, b_.neon_i32));
-#elif defined(SIMDE_WASM_SIMD128_NATIVE)
- r_.wasm_v128 = wasm_i32x4_gt(a_.wasm_v128, b_.wasm_v128);
-#elif defined(SIMDE_POWER_ALTIVEC_P5_NATIVE)
- r_.altivec_i32 = (SIMDE_POWER_ALTIVEC_VECTOR(signed int))vec_cmpgt(
- a_.altivec_i32, b_.altivec_i32);
-#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
- r_.i32 = HEDLEY_STATIC_CAST(__typeof__(r_.i32), (a_.i32 > b_.i32));
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i32) / sizeof(r_.i32[0])); i++) {
- r_.i32[i] = (a_.i32[i] > b_.i32[i]) ? ~INT32_C(0) : INT32_C(0);
- }
-#endif
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_cmpgt_epi32(a, b) simde_mm_cmpgt_epi32(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128d simde_mm_cmpgt_pd(simde__m128d a, simde__m128d b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_cmpgt_pd(a, b);
-#else
- simde__m128d_private r_, a_ = simde__m128d_to_private(a),
- b_ = simde__m128d_to_private(b);
-
-#if defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
- r_.i64 = HEDLEY_STATIC_CAST(__typeof__(r_.i64), (a_.f64 > b_.f64));
-#elif defined(SIMDE_WASM_SIMD128_NATIVE)
- r_.wasm_v128 = wasm_f64x2_gt(a_.wasm_v128, b_.wasm_v128);
-#elif defined(SIMDE_POWER_ALTIVEC_P5_NATIVE)
- r_.altivec_f64 =
- HEDLEY_STATIC_CAST(SIMDE_POWER_ALTIVEC_VECTOR(double),
- vec_cmpgt(a_.altivec_f64, b_.altivec_f64));
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.f64) / sizeof(r_.f64[0])); i++) {
- r_.u64[i] = (a_.f64[i] > b_.f64[i]) ? ~UINT64_C(0)
- : UINT64_C(0);
- }
-#endif
-
- return simde__m128d_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_cmpgt_pd(a, b) simde_mm_cmpgt_pd(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128d simde_mm_cmpgt_sd(simde__m128d a, simde__m128d b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE) && !defined(__PGI)
- return _mm_cmpgt_sd(a, b);
-#elif defined(SIMDE_ASSUME_VECTORIZATION)
- return simde_mm_move_sd(a, simde_mm_cmpgt_pd(a, b));
-#else
- simde__m128d_private r_, a_ = simde__m128d_to_private(a),
- b_ = simde__m128d_to_private(b);
-
- r_.u64[0] = (a_.f64[0] > b_.f64[0]) ? ~UINT64_C(0) : UINT64_C(0);
- r_.u64[1] = a_.u64[1];
-
- return simde__m128d_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_cmpgt_sd(a, b) simde_mm_cmpgt_sd(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128d simde_mm_cmpge_pd(simde__m128d a, simde__m128d b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_cmpge_pd(a, b);
-#else
- simde__m128d_private r_, a_ = simde__m128d_to_private(a),
- b_ = simde__m128d_to_private(b);
-
-#if defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
- r_.i64 = HEDLEY_STATIC_CAST(__typeof__(r_.i64), (a_.f64 >= b_.f64));
-#elif defined(SIMDE_WASM_SIMD128_NATIVE)
- r_.wasm_v128 = wasm_f64x2_ge(a_.wasm_v128, b_.wasm_v128);
-#elif defined(SIMDE_POWER_ALTIVEC_P5_NATIVE)
- r_.altivec_f64 =
- HEDLEY_STATIC_CAST(SIMDE_POWER_ALTIVEC_VECTOR(double),
- vec_cmpge(a_.altivec_f64, b_.altivec_f64));
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.f64) / sizeof(r_.f64[0])); i++) {
- r_.u64[i] = (a_.f64[i] >= b_.f64[i]) ? ~UINT64_C(0)
- : UINT64_C(0);
- }
-#endif
-
- return simde__m128d_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_cmpge_pd(a, b) simde_mm_cmpge_pd(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128d simde_mm_cmpge_sd(simde__m128d a, simde__m128d b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE) && !defined(__PGI)
- return _mm_cmpge_sd(a, b);
-#elif defined(SIMDE_ASSUME_VECTORIZATION)
- return simde_mm_move_sd(a, simde_mm_cmpge_pd(a, b));
-#else
- simde__m128d_private r_, a_ = simde__m128d_to_private(a),
- b_ = simde__m128d_to_private(b);
-
- r_.u64[0] = (a_.f64[0] >= b_.f64[0]) ? ~UINT64_C(0) : UINT64_C(0);
- r_.u64[1] = a_.u64[1];
-
- return simde__m128d_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_cmpge_sd(a, b) simde_mm_cmpge_sd(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128d simde_mm_cmpnge_pd(simde__m128d a, simde__m128d b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_cmpnge_pd(a, b);
-#else
- return simde_mm_cmplt_pd(a, b);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_cmpnge_pd(a, b) simde_mm_cmpnge_pd(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128d simde_mm_cmpnge_sd(simde__m128d a, simde__m128d b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE) && !defined(__PGI)
- return _mm_cmpnge_sd(a, b);
-#else
- return simde_mm_cmplt_sd(a, b);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_cmpnge_sd(a, b) simde_mm_cmpnge_sd(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128d simde_mm_cmpnlt_pd(simde__m128d a, simde__m128d b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_cmpnlt_pd(a, b);
-#else
- return simde_mm_cmpge_pd(a, b);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_cmpnlt_pd(a, b) simde_mm_cmpnlt_pd(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128d simde_mm_cmpnlt_sd(simde__m128d a, simde__m128d b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_cmpnlt_sd(a, b);
-#else
- return simde_mm_cmpge_sd(a, b);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_cmpnlt_sd(a, b) simde_mm_cmpnlt_sd(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128d simde_mm_cmpnle_pd(simde__m128d a, simde__m128d b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_cmpnle_pd(a, b);
-#else
- return simde_mm_cmpgt_pd(a, b);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_cmpnle_pd(a, b) simde_mm_cmpnle_pd(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128d simde_mm_cmpnle_sd(simde__m128d a, simde__m128d b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_cmpnle_sd(a, b);
-#else
- return simde_mm_cmpgt_sd(a, b);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_cmpnle_sd(a, b) simde_mm_cmpnle_sd(a, b)
-#endif
-
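The cmpnge/cmpnlt/cmpnle fallbacks above simply forward to the opposite ordered comparison (cmplt, cmpge, cmpgt). For ordered inputs the result is the same, but the negated predicates are defined to be true when an operand is NaN while the plain comparisons are false, so the emulated results appear to diverge on NaN inputs. A one-line illustration (plain C, not SIMDe):

#include <math.h>
#include <stdio.h>

int main(void)
{
        double a = NAN, b = 1.0;
        printf("%d %d\n", !(a >= b), a < b); /* prints "1 0": NGE is true, LT is false */
        return 0;
}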
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128d simde_mm_cmpord_pd(simde__m128d a, simde__m128d b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_cmpord_pd(a, b);
-#else
- simde__m128d_private r_, a_ = simde__m128d_to_private(a),
- b_ = simde__m128d_to_private(b);
-
-#if defined(simde_math_isnan)
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.f64) / sizeof(r_.f64[0])); i++) {
- r_.u64[i] = (!simde_math_isnan(a_.f64[i]) &&
- !simde_math_isnan(b_.f64[i]))
- ? ~UINT64_C(0)
- : UINT64_C(0);
- }
-#else
- HEDLEY_UNREACHABLE();
-#endif
-
- return simde__m128d_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_cmpord_pd(a, b) simde_mm_cmpord_pd(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde_float64 simde_mm_cvtsd_f64(simde__m128d a)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE) && !defined(__PGI)
- return _mm_cvtsd_f64(a);
-#else
- simde__m128d_private a_ = simde__m128d_to_private(a);
- return a_.f64[0];
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_cvtsd_f64(a) simde_mm_cvtsd_f64(a)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128d simde_mm_cmpord_sd(simde__m128d a, simde__m128d b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_cmpord_sd(a, b);
-#elif defined(SIMDE_ASSUME_VECTORIZATION)
- return simde_mm_move_sd(a, simde_mm_cmpord_pd(a, b));
-#else
- simde__m128d_private r_, a_ = simde__m128d_to_private(a),
- b_ = simde__m128d_to_private(b);
-
-#if defined(simde_math_isnan)
- r_.u64[0] =
- (!simde_math_isnan(a_.f64[0]) && !simde_math_isnan(b_.f64[0]))
- ? ~UINT64_C(0)
- : UINT64_C(0);
- r_.u64[1] = a_.u64[1];
-#else
- HEDLEY_UNREACHABLE();
-#endif
-
- return simde__m128d_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_cmpord_sd(a, b) simde_mm_cmpord_sd(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128d simde_mm_cmpunord_pd(simde__m128d a, simde__m128d b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_cmpunord_pd(a, b);
-#else
- simde__m128d_private r_, a_ = simde__m128d_to_private(a),
- b_ = simde__m128d_to_private(b);
-
-#if defined(simde_math_isnan)
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.f64) / sizeof(r_.f64[0])); i++) {
- r_.u64[i] = (simde_math_isnan(a_.f64[i]) ||
- simde_math_isnan(b_.f64[i]))
- ? ~UINT64_C(0)
- : UINT64_C(0);
- }
-#else
- HEDLEY_UNREACHABLE();
-#endif
-
- return simde__m128d_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_cmpunord_pd(a, b) simde_mm_cmpunord_pd(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128d simde_mm_cmpunord_sd(simde__m128d a, simde__m128d b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_cmpunord_sd(a, b);
-#elif defined(SIMDE_ASSUME_VECTORIZATION)
- return simde_mm_move_sd(a, simde_mm_cmpunord_pd(a, b));
-#else
- simde__m128d_private r_, a_ = simde__m128d_to_private(a),
- b_ = simde__m128d_to_private(b);
-
-#if defined(simde_math_isnan)
- r_.u64[0] = (simde_math_isnan(a_.f64[0]) || simde_math_isnan(b_.f64[0]))
- ? ~UINT64_C(0)
- : UINT64_C(0);
- r_.u64[1] = a_.u64[1];
-
-#else
- HEDLEY_UNREACHABLE();
-#endif
-
- return simde__m128d_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_cmpunord_sd(a, b) simde_mm_cmpunord_sd(a, b)
-#endif
-
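cmpord and cmpunord above reduce to per-lane NaN checks: a pair of lanes is "ordered" when neither value is NaN and "unordered" when at least one is. A scalar sketch of the two predicates (hypothetical helpers, not SIMDe functions):

#include <math.h>

static int ordered(double a, double b)   { return !isnan(a) && !isnan(b); }
static int unordered(double a, double b) { return  isnan(a) ||  isnan(b); }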
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128d simde_mm_cvtepi32_pd(simde__m128i a)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_cvtepi32_pd(a);
-#else
- simde__m128d_private r_;
- simde__m128i_private a_ = simde__m128i_to_private(a);
-
-#if defined(SIMDE_CONVERT_VECTOR_)
- SIMDE_CONVERT_VECTOR_(r_.f64, a_.m64_private[0].i32);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.f64) / sizeof(r_.f64[0])); i++) {
- r_.f64[i] = (simde_float64)a_.i32[i];
- }
-#endif
-
- return simde__m128d_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_cvtepi32_pd(a) simde_mm_cvtepi32_pd(a)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_cvtepi32_ps(simde__m128i a)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_cvtepi32_ps(a);
-#else
- simde__m128_private r_;
- simde__m128i_private a_ = simde__m128i_to_private(a);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_f32 = vcvtq_f32_s32(a_.neon_i32);
-#elif defined(SIMDE_POWER_ALTIVEC_P5_NATIVE)
- r_.altivec_f32 = vec_ctf(a_.altivec_i32, 0);
-#elif defined(SIMDE_CONVERT_VECTOR_)
- SIMDE_CONVERT_VECTOR_(r_.f32, a_.i32);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.f32) / sizeof(r_.f32[0])); i++) {
- r_.f32[i] = (simde_float32)a_.i32[i];
- }
-#endif
-
- return simde__m128_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_cvtepi32_ps(a) simde_mm_cvtepi32_ps(a)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_cvtpd_epi32(simde__m128d a)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_cvtpd_epi32(a);
-#else
- simde__m128i_private r_;
- simde__m128d_private a_ = simde__m128d_to_private(a);
-
-#if defined(SIMDE_CONVERT_VECTOR_)
- SIMDE_CONVERT_VECTOR_(r_.m64_private[0].i32, a_.f64);
- r_.m64_private[1] = simde__m64_to_private(simde_mm_setzero_si64());
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(a_.f64) / sizeof(a_.f64[0])); i++) {
- r_.i32[i] = HEDLEY_STATIC_CAST(int32_t, a_.f64[i]);
- }
- simde_memset(&(r_.m64_private[1]), 0, sizeof(r_.m64_private[1]));
-#endif
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_cvtpd_epi32(a) simde_mm_cvtpd_epi32(a)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_cvtpd_pi32(simde__m128d a)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE) && defined(SIMDE_X86_MMX_NATIVE)
- return _mm_cvtpd_pi32(a);
-#else
- simde__m64_private r_;
- simde__m128d_private a_ = simde__m128d_to_private(a);
-
-#if defined(SIMDE_CONVERT_VECTOR_)
- SIMDE_CONVERT_VECTOR_(r_.i32, a_.f64);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i32) / sizeof(r_.i32[0])); i++) {
- r_.i32[i] = HEDLEY_STATIC_CAST(int32_t, a_.f64[i]);
- }
-#endif
-
- return simde__m64_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_cvtpd_pi32(a) simde_mm_cvtpd_pi32(a)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_cvtpd_ps(simde__m128d a)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_cvtpd_ps(a);
-#else
- simde__m128_private r_;
- simde__m128d_private a_ = simde__m128d_to_private(a);
-
-#if defined(SIMDE_CONVERT_VECTOR_)
- SIMDE_CONVERT_VECTOR_(r_.m64_private[0].f32, a_.f64);
- r_.m64_private[1] = simde__m64_to_private(simde_mm_setzero_si64());
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(a_.f64) / sizeof(a_.f64[0])); i++) {
- r_.f32[i] = (simde_float32)a_.f64[i];
- }
- simde_memset(&(r_.m64_private[1]), 0, sizeof(r_.m64_private[1]));
-#endif
-
- return simde__m128_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_cvtpd_ps(a) simde_mm_cvtpd_ps(a)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128d simde_mm_cvtpi32_pd(simde__m64 a)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE) && defined(SIMDE_X86_MMX_NATIVE)
- return _mm_cvtpi32_pd(a);
-#else
- simde__m128d_private r_;
- simde__m64_private a_ = simde__m64_to_private(a);
-
-#if defined(SIMDE_CONVERT_VECTOR_)
- SIMDE_CONVERT_VECTOR_(r_.f64, a_.i32);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.f64) / sizeof(r_.f64[0])); i++) {
- r_.f64[i] = (simde_float64)a_.i32[i];
- }
-#endif
-
- return simde__m128d_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_cvtpi32_pd(a) simde_mm_cvtpi32_pd(a)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_cvtps_epi32(simde__m128 a)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_cvtps_epi32(a);
-#else
- simde__m128i_private r_;
- simde__m128_private a_ = simde__m128_to_private(a);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
-/* The default rounding mode on SSE is 'round to even', which ArmV7
- does not support! It is supported on ARMv8 however. */
-#if defined(SIMDE_ARCH_AARCH64)
- r_.neon_i32 = vcvtnq_s32_f32(a_.neon_f32);
-#else
- uint32x4_t signmask = vdupq_n_u32(0x80000000);
- float32x4_t half = vbslq_f32(signmask, a_.neon_f32,
- vdupq_n_f32(0.5f)); /* +/- 0.5 */
- int32x4_t r_normal = vcvtq_s32_f32(
- vaddq_f32(a_.neon_f32, half)); /* round to integer: [a + 0.5]*/
- int32x4_t r_trunc =
- vcvtq_s32_f32(a_.neon_f32); /* truncate to integer: [a] */
- int32x4_t plusone = vshrq_n_s32(vnegq_s32(r_trunc), 31); /* 1 or 0 */
- int32x4_t r_even = vbicq_s32(vaddq_s32(r_trunc, plusone),
- vdupq_n_s32(1)); /* ([a] + {0,1}) & ~1 */
- float32x4_t delta = vsubq_f32(
- a_.neon_f32,
- vcvtq_f32_s32(r_trunc)); /* compute delta: delta = (a - [a]) */
- uint32x4_t is_delta_half =
- vceqq_f32(delta, half); /* delta == +/- 0.5 */
- r_.neon_i32 = vbslq_s32(is_delta_half, r_even, r_normal);
-#endif
-#elif defined(SIMDE_POWER_ALTIVEC_P5_NATIVE)
- r_.altivec_i32 = vec_cts(a_.altivec_f32, 0);
-#elif defined(SIMDE_CONVERT_VECTOR_)
- SIMDE_CONVERT_VECTOR_(r_.i32, a_.f32);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i32) / sizeof(r_.i32[0])); i++) {
- r_.i32[i] = HEDLEY_STATIC_CAST(int32_t, a_.f32[i]);
- }
-#endif
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_cvtps_epi32(a) simde_mm_cvtps_epi32(a)
-#endif
-
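The ARMv7 branch of cvtps_epi32 above reconstructs SSE's default rounding, round to nearest with ties to even, which plain vcvtq_s32_f32 (truncation) does not provide. Assuming the C floating-point environment is left in its default FE_TONEAREST mode, the same rule can be expressed in scalar code with nearbyintf; a sketch, not SIMDe code:

#include <math.h>
#include <stdint.h>

static int32_t cvt_round_even(float a)
{
        /* default rounding mode is round-to-nearest, ties to even:
           0.5f -> 0, 1.5f -> 2, 2.5f -> 2 */
        return (int32_t)nearbyintf(a);
}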
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128d simde_mm_cvtps_pd(simde__m128 a)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_cvtps_pd(a);
-#else
- simde__m128d_private r_;
- simde__m128_private a_ = simde__m128_to_private(a);
-
-#if defined(SIMDE_CONVERT_VECTOR_)
- SIMDE_CONVERT_VECTOR_(r_.f64, a_.m64_private[0].f32);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.f64) / sizeof(r_.f64[0])); i++) {
- r_.f64[i] = a_.f32[i];
- }
-#endif
-
- return simde__m128d_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_cvtps_pd(a) simde_mm_cvtps_pd(a)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-int32_t simde_mm_cvtsd_si32(simde__m128d a)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_cvtsd_si32(a);
-#else
- simde__m128d_private a_ = simde__m128d_to_private(a);
- return SIMDE_CONVERT_FTOI(int32_t, a_.f64[0]);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_cvtsd_si32(a) simde_mm_cvtsd_si32(a)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-int64_t simde_mm_cvtsd_si64(simde__m128d a)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE) && defined(SIMDE_ARCH_AMD64)
-#if defined(__PGI)
- return _mm_cvtsd_si64x(a);
-#else
- return _mm_cvtsd_si64(a);
-#endif
-#else
- simde__m128d_private a_ = simde__m128d_to_private(a);
- return SIMDE_CONVERT_FTOI(int64_t, a_.f64[0]);
-#endif
-}
-#define simde_mm_cvtsd_si64x(a) simde_mm_cvtsd_si64(a)
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_cvtsd_si64(a) simde_mm_cvtsd_si64(a)
-#define _mm_cvtsd_si64x(a) simde_mm_cvtsd_si64x(a)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128 simde_mm_cvtsd_ss(simde__m128 a, simde__m128d b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_cvtsd_ss(a, b);
-#else
- simde__m128_private r_, a_ = simde__m128_to_private(a);
- simde__m128d_private b_ = simde__m128d_to_private(b);
-
- r_.f32[0] = HEDLEY_STATIC_CAST(simde_float32, b_.f64[0]);
-
- SIMDE_VECTORIZE
- for (size_t i = 1; i < (sizeof(r_) / sizeof(r_.i32[0])); i++) {
- r_.i32[i] = a_.i32[i];
- }
-
- return simde__m128_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_cvtsd_ss(a, b) simde_mm_cvtsd_ss(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-int32_t simde_mm_cvtsi128_si32(simde__m128i a)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_cvtsi128_si32(a);
-#else
- simde__m128i_private a_ = simde__m128i_to_private(a);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- return vgetq_lane_s32(a_.neon_i32, 0);
-#elif defined(SIMDE_POWER_ALTIVEC_P5_NATIVE)
-#if defined(SIMDE_BUG_GCC_95227)
- (void)a_;
-#endif
- return vec_extract(a_.altivec_i32, 0);
-#else
- return a_.i32[0];
-#endif
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_cvtsi128_si32(a) simde_mm_cvtsi128_si32(a)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-int64_t simde_mm_cvtsi128_si64(simde__m128i a)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE) && defined(SIMDE_ARCH_AMD64)
-#if defined(__PGI)
- return _mm_cvtsi128_si64x(a);
-#else
- return _mm_cvtsi128_si64(a);
-#endif
-#else
- simde__m128i_private a_ = simde__m128i_to_private(a);
-#if defined(SIMDE_POWER_ALTIVEC_P5_NATIVE) && !defined(HEDLEY_IBM_VERSION)
- return vec_extract(a_.i64, 0);
-#endif
- return a_.i64[0];
-#endif
-}
-#define simde_mm_cvtsi128_si64x(a) simde_mm_cvtsi128_si64(a)
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_cvtsi128_si64(a) simde_mm_cvtsi128_si64(a)
-#define _mm_cvtsi128_si64x(a) simde_mm_cvtsi128_si64x(a)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128d simde_mm_cvtsi32_sd(simde__m128d a, int32_t b)
-{
-
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_cvtsi32_sd(a, b);
-#else
- simde__m128d_private r_;
- simde__m128d_private a_ = simde__m128d_to_private(a);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE) && defined(SIMDE_ARCH_AMD64)
- r_.neon_f64 = vsetq_lane_f64((simde_float64)b, a_.neon_f64, 0);
-#else
- r_.f64[0] = HEDLEY_STATIC_CAST(simde_float64, b);
- r_.i64[1] = a_.i64[1];
-#endif
-
- return simde__m128d_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_cvtsi32_sd(a, b) simde_mm_cvtsi32_sd(a, b)
-#endif
-
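One note on the block above: the guard for the vsetq_lane_f64 path in cvtsi32_sd combines SIMDE_ARM_NEON_A32V7_NATIVE with SIMDE_ARCH_AMD64, two conditions that cannot hold at the same time, so that branch looks unreachable as written; the 64-bit variant below (cvtsi64_sd) guards the equivalent code with SIMDE_ARM_NEON_A64V8_NATIVE, which is presumably what was intended here as well.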
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_cvtsi32_si128(int32_t a)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_cvtsi32_si128(a);
-#else
- simde__m128i_private r_;
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i32 = vsetq_lane_s32(a, vdupq_n_s32(0), 0);
-#else
- r_.i32[0] = a;
- r_.i32[1] = 0;
- r_.i32[2] = 0;
- r_.i32[3] = 0;
-#endif
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_cvtsi32_si128(a) simde_mm_cvtsi32_si128(a)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128d simde_mm_cvtsi64_sd(simde__m128d a, int64_t b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE) && defined(SIMDE_ARCH_AMD64)
-#if !defined(__PGI)
- return _mm_cvtsi64_sd(a, b);
-#else
- return _mm_cvtsi64x_sd(a, b);
-#endif
-#else
- simde__m128d_private r_, a_ = simde__m128d_to_private(a);
-
-#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
- r_.neon_f64 = vsetq_lane_f64((simde_float64)b, a_.neon_f64, 0);
-#else
- r_.f64[0] = HEDLEY_STATIC_CAST(simde_float64, b);
- r_.f64[1] = a_.f64[1];
-#endif
-
- return simde__m128d_from_private(r_);
-#endif
-}
-#define simde_mm_cvtsi64x_sd(a, b) simde_mm_cvtsi64_sd(a, b)
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_cvtsi64_sd(a, b) simde_mm_cvtsi64_sd(a, b)
-#define _mm_cvtsi64x_sd(a, b) simde_mm_cvtsi64x_sd(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_cvtsi64_si128(int64_t a)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE) && defined(SIMDE_ARCH_AMD64)
-#if !defined(__PGI)
- return _mm_cvtsi64_si128(a);
-#else
- return _mm_cvtsi64x_si128(a);
-#endif
-#else
- simde__m128i_private r_;
-
- r_.i64[0] = a;
- r_.i64[1] = 0;
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#define simde_mm_cvtsi64x_si128(a) simde_mm_cvtsi64_si128(a)
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_cvtsi64_si128(a) simde_mm_cvtsi64_si128(a)
-#define _mm_cvtsi64x_si128(a) simde_mm_cvtsi64x_si128(a)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128d simde_mm_cvtss_sd(simde__m128d a, simde__m128 b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_cvtss_sd(a, b);
-#else
- simde__m128d_private a_ = simde__m128d_to_private(a);
- simde__m128_private b_ = simde__m128_to_private(b);
-
- a_.f64[0] = HEDLEY_STATIC_CAST(simde_float64, b_.f32[0]);
-
- return simde__m128d_from_private(a_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_cvtss_sd(a, b) simde_mm_cvtss_sd(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_cvttpd_epi32(simde__m128d a)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_cvttpd_epi32(a);
-#else
- simde__m128i_private r_;
- simde__m128d_private a_ = simde__m128d_to_private(a);
-
- for (size_t i = 0; i < (sizeof(a_.f64) / sizeof(a_.f64[0])); i++) {
- r_.i32[i] = SIMDE_CONVERT_FTOI(int32_t, a_.f64[i]);
- }
- simde_memset(&(r_.m64_private[1]), 0, sizeof(r_.m64_private[1]));
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_cvttpd_epi32(a) simde_mm_cvttpd_epi32(a)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_cvttpd_pi32(simde__m128d a)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE) && defined(SIMDE_X86_MMX_NATIVE)
- return _mm_cvttpd_pi32(a);
-#else
- simde__m64_private r_;
- simde__m128d_private a_ = simde__m128d_to_private(a);
-
-#if defined(SIMDE_CONVERT_VECTOR_)
- SIMDE_CONVERT_VECTOR_(r_.i32, a_.f64);
-#else
- for (size_t i = 0; i < (sizeof(r_.i32) / sizeof(r_.i32[0])); i++) {
- r_.i32[i] = SIMDE_CONVERT_FTOI(int32_t, a_.f64[i]);
- }
-#endif
-
- return simde__m64_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_cvttpd_pi32(a) simde_mm_cvttpd_pi32(a)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_cvttps_epi32(simde__m128 a)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_cvttps_epi32(a);
-#else
- simde__m128i_private r_;
- simde__m128_private a_ = simde__m128_to_private(a);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i32 = vcvtq_s32_f32(a_.neon_f32);
-#elif defined(SIMDE_CONVERT_VECTOR_)
- SIMDE_CONVERT_VECTOR_(r_.i32, a_.f32);
-#else
- for (size_t i = 0; i < (sizeof(r_.i32) / sizeof(r_.i32[0])); i++) {
- r_.i32[i] = SIMDE_CONVERT_FTOI(int32_t, a_.f32[i]);
- }
-#endif
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_cvttps_epi32(a) simde_mm_cvttps_epi32(a)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-int32_t simde_mm_cvttsd_si32(simde__m128d a)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_cvttsd_si32(a);
-#else
- simde__m128d_private a_ = simde__m128d_to_private(a);
- return SIMDE_CONVERT_FTOI(int32_t, a_.f64[0]);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_cvttsd_si32(a) simde_mm_cvttsd_si32(a)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-int64_t simde_mm_cvttsd_si64(simde__m128d a)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE) && defined(SIMDE_ARCH_AMD64)
-#if !defined(__PGI)
- return _mm_cvttsd_si64(a);
-#else
- return _mm_cvttsd_si64x(a);
-#endif
-#else
- simde__m128d_private a_ = simde__m128d_to_private(a);
- return SIMDE_CONVERT_FTOI(int64_t, a_.f64[0]);
-#endif
-}
-#define simde_mm_cvttsd_si64x(a) simde_mm_cvttsd_si64(a)
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_cvttsd_si64(a) simde_mm_cvttsd_si64(a)
-#define _mm_cvttsd_si64x(a) simde_mm_cvttsd_si64x(a)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128d simde_mm_div_pd(simde__m128d a, simde__m128d b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_div_pd(a, b);
-#else
- simde__m128d_private r_, a_ = simde__m128d_to_private(a),
- b_ = simde__m128d_to_private(b);
-
-#if defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
- r_.f64 = a_.f64 / b_.f64;
-#elif defined(SIMDE_WASM_SIMD128_NATIVE)
- r_.wasm_v128 = wasm_f64x2_div(a_.wasm_v128, b_.wasm_v128);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.f64) / sizeof(r_.f64[0])); i++) {
- r_.f64[i] = a_.f64[i] / b_.f64[i];
- }
-#endif
-
- return simde__m128d_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_div_pd(a, b) simde_mm_div_pd(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128d simde_mm_div_sd(simde__m128d a, simde__m128d b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_div_sd(a, b);
-#elif defined(SIMDE_ASSUME_VECTORIZATION)
- return simde_mm_move_sd(a, simde_mm_div_pd(a, b));
-#else
- simde__m128d_private r_, a_ = simde__m128d_to_private(a),
- b_ = simde__m128d_to_private(b);
-
- r_.f64[0] = a_.f64[0] / b_.f64[0];
- r_.f64[1] = a_.f64[1];
-
- return simde__m128d_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_div_sd(a, b) simde_mm_div_sd(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-int32_t simde_mm_extract_epi16(simde__m128i a, const int imm8)
- SIMDE_REQUIRE_RANGE(imm8, 0, 7)
-{
- uint16_t r;
- simde__m128i_private a_ = simde__m128i_to_private(a);
-
-#if defined(SIMDE_POWER_ALTIVEC_P5_NATIVE)
-#if defined(SIMDE_BUG_GCC_95227)
- (void)a_;
- (void)imm8;
-#endif
- r = vec_extract(a_.altivec_i16, imm8);
-#else
- r = a_.u16[imm8 & 7];
-#endif
-
- return HEDLEY_STATIC_CAST(int32_t, r);
-}
-#if defined(SIMDE_X86_SSE2_NATIVE) && \
- (!defined(HEDLEY_GCC_VERSION) || HEDLEY_GCC_VERSION_CHECK(4, 6, 0))
-#define simde_mm_extract_epi16(a, imm8) _mm_extract_epi16(a, imm8)
-#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
-#define simde_mm_extract_epi16(a, imm8) \
- HEDLEY_STATIC_CAST(int32_t, \
- vgetq_lane_s16(simde__m128i_to_private(a).neon_i16, \
- (imm8)) & \
- (UINT32_C(0x0000ffff)))
-#endif
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_extract_epi16(a, imm8) simde_mm_extract_epi16(a, imm8)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_insert_epi16(simde__m128i a, int16_t i, const int imm8)
- SIMDE_REQUIRE_RANGE(imm8, 0, 7)
-{
- simde__m128i_private a_ = simde__m128i_to_private(a);
- a_.i16[imm8 & 7] = i;
- return simde__m128i_from_private(a_);
-}
-#if defined(SIMDE_X86_SSE2_NATIVE) && !defined(__PGI)
-#define simde_mm_insert_epi16(a, i, imm8) _mm_insert_epi16((a), (i), (imm8))
-#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
-#define simde_mm_insert_epi16(a, i, imm8) \
- simde__m128i_from_neon_i16( \
- vsetq_lane_s16((i), simde__m128i_to_neon_i16(a), (imm8)))
-#endif
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_insert_epi16(a, i, imm8) simde_mm_insert_epi16(a, i, imm8)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128d
-simde_mm_load_pd(simde_float64 const mem_addr[HEDLEY_ARRAY_PARAM(2)])
-{
- simde_assert_aligned(16, mem_addr);
-
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_load_pd(mem_addr);
-#else
- simde__m128d_private r_;
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_u32 =
- vld1q_u32(HEDLEY_REINTERPRET_CAST(uint32_t const *, mem_addr));
-#elif defined(SIMDE_POWER_ALTIVEC_P5_NATIVE) && !defined(HEDLEY_IBM_VERSION)
- r_.altivec_f64 = vec_ld(
- 0, HEDLEY_REINTERPRET_CAST(SIMDE_POWER_ALTIVEC_VECTOR(double)
- const *,
- mem_addr));
-#else
- r_ = *SIMDE_ALIGN_CAST(simde__m128d_private const *, mem_addr);
-#endif
-
- return simde__m128d_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_load_pd(mem_addr) simde_mm_load_pd(mem_addr)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128d simde_mm_load_pd1(simde_float64 const *mem_addr)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_load1_pd(mem_addr);
-#else
- simde__m128d_private r_;
-
- r_.f64[0] = *mem_addr;
- r_.f64[1] = *mem_addr;
-
- return simde__m128d_from_private(r_);
-#endif
-}
-#define simde_mm_load1_pd(mem_addr) simde_mm_load_pd1(mem_addr)
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_load_pd1(mem_addr) simde_mm_load_pd1(mem_addr)
-#define _mm_load1_pd(mem_addr) simde_mm_load1_pd(mem_addr)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128d simde_mm_load_sd(simde_float64 const *mem_addr)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_load_sd(mem_addr);
-#else
- simde__m128d_private r_;
-
-#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
- r_.neon_f64 = vsetq_lane_f64(*mem_addr, vdupq_n_f64(0), 0);
-#else
- r_.f64[0] = *mem_addr;
- r_.u64[1] = UINT64_C(0);
-#endif
-
- return simde__m128d_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_load_sd(mem_addr) simde_mm_load_sd(mem_addr)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_load_si128(simde__m128i const *mem_addr)
-{
- simde_assert_aligned(16, mem_addr);
-
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_load_si128(
- HEDLEY_REINTERPRET_CAST(__m128i const *, mem_addr));
-#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- simde__m128i_private r_;
-
-#if defined(SIMDE_POWER_ALTIVEC_P5_NATIVE)
- r_.altivec_i32 = vec_ld(
- 0, HEDLEY_REINTERPRET_CAST(
- SIMDE_POWER_ALTIVEC_VECTOR(int) const *, mem_addr));
-#else
- r_.neon_i32 = vld1q_s32((int32_t const *)mem_addr);
-#endif
-
- return simde__m128i_from_private(r_);
-#else
- return *mem_addr;
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_load_si128(mem_addr) simde_mm_load_si128(mem_addr)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128d simde_mm_loadh_pd(simde__m128d a, simde_float64 const *mem_addr)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_loadh_pd(a, mem_addr);
-#else
- simde__m128d_private r_, a_ = simde__m128d_to_private(a);
- simde_float64 t;
-
- simde_memcpy(&t, mem_addr, sizeof(t));
- r_.f64[0] = a_.f64[0];
- r_.f64[1] = t;
-
- return simde__m128d_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_loadh_pd(a, mem_addr) simde_mm_loadh_pd(a, mem_addr)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_loadl_epi64(simde__m128i const *mem_addr)
-{
- simde_assert_aligned(16, mem_addr);
-
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_loadl_epi64(
- HEDLEY_REINTERPRET_CAST(__m128i const *, mem_addr));
-#else
- simde__m128i_private r_;
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i32 = vcombine_s32(vld1_s32((int32_t const *)mem_addr),
- vcreate_s32(0));
-#else
- r_.i64[0] = *HEDLEY_REINTERPRET_CAST(int64_t const *, mem_addr);
- r_.i64[1] = 0;
-#endif
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_loadl_epi64(mem_addr) simde_mm_loadl_epi64(mem_addr)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128d simde_mm_loadl_pd(simde__m128d a, simde_float64 const *mem_addr)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_loadl_pd(a, mem_addr);
-#else
- simde__m128d_private r_, a_ = simde__m128d_to_private(a);
-
- r_.f64[0] = *mem_addr;
- r_.u64[1] = a_.u64[1];
-
- return simde__m128d_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_loadl_pd(a, mem_addr) simde_mm_loadl_pd(a, mem_addr)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128d
-simde_mm_loadr_pd(simde_float64 const mem_addr[HEDLEY_ARRAY_PARAM(2)])
-{
- simde_assert_aligned(16, mem_addr);
-
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_loadr_pd(mem_addr);
-#else
- simde__m128d_private r_;
-
- r_.f64[0] = mem_addr[1];
- r_.f64[1] = mem_addr[0];
-
- return simde__m128d_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_loadr_pd(mem_addr) simde_mm_loadr_pd(mem_addr)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128d
-simde_mm_loadu_pd(simde_float64 const mem_addr[HEDLEY_ARRAY_PARAM(2)])
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_loadu_pd(mem_addr);
-#else
- simde__m128d_private r_;
-
- simde_memcpy(&r_, mem_addr, sizeof(r_));
-
- return simde__m128d_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_loadu_pd(mem_addr) simde_mm_loadu_pd(mem_addr)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_loadu_si128(simde__m128i const *mem_addr)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_loadu_si128(HEDLEY_STATIC_CAST(__m128i const *, mem_addr));
-#else
- simde__m128i_private r_;
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i32 = vld1q_s32((int32_t const *)mem_addr);
-#else
- simde_memcpy(&r_, mem_addr, sizeof(r_));
-#endif
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_loadu_si128(mem_addr) simde_mm_loadu_si128(mem_addr)
-#endif
-
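The unaligned loads above (loadu_pd, loadu_si128) fall back to memcpy into a local, which is well defined for any source alignment and normally compiles down to a single unaligned load. A minimal sketch of the idiom with a hypothetical helper, not SIMDe code:

#include <stdint.h>
#include <string.h>

static uint64_t load_u64_unaligned(const void *p)
{
        uint64_t v;
        memcpy(&v, p, sizeof(v)); /* defined behaviour regardless of alignment */
        return v;
}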
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_madd_epi16(simde__m128i a, simde__m128i b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_madd_epi16(a, b);
-#else
- simde__m128i_private r_, a_ = simde__m128i_to_private(a),
- b_ = simde__m128i_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- int32x4_t pl =
- vmull_s16(vget_low_s16(a_.neon_i16), vget_low_s16(b_.neon_i16));
- int32x4_t ph = vmull_s16(vget_high_s16(a_.neon_i16),
- vget_high_s16(b_.neon_i16));
- int32x2_t rl = vpadd_s32(vget_low_s32(pl), vget_high_s32(pl));
- int32x2_t rh = vpadd_s32(vget_low_s32(ph), vget_high_s32(ph));
- r_.neon_i32 = vcombine_s32(rl, rh);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_) / sizeof(r_.i16[0])); i += 2) {
- r_.i32[i / 2] = (a_.i16[i] * b_.i16[i]) +
- (a_.i16[i + 1] * b_.i16[i + 1]);
- }
-#endif
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_madd_epi16(a, b) simde_mm_madd_epi16(a, b)
-#endif
-
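madd_epi16 above implements the pmaddwd semantics: adjacent pairs of signed 16-bit products are summed into a single 32-bit lane. A scalar sketch of one output lane (hypothetical helper, not SIMDe):

#include <stdint.h>

static int32_t madd16_pair(int16_t a0, int16_t a1, int16_t b0, int16_t b1)
{
        /* widen before multiplying; the 64-bit sum avoids overflow even in the
           all-(-32768) extreme, then the cast truncates like the instruction */
        int64_t sum = (int64_t)a0 * b0 + (int64_t)a1 * b1;
        return (int32_t)sum;
}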
-SIMDE_FUNCTION_ATTRIBUTES
-void simde_mm_maskmoveu_si128(simde__m128i a, simde__m128i mask,
- int8_t mem_addr[HEDLEY_ARRAY_PARAM(16)])
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- _mm_maskmoveu_si128(a, mask, HEDLEY_REINTERPRET_CAST(char *, mem_addr));
-#else
- simde__m128i_private a_ = simde__m128i_to_private(a),
- mask_ = simde__m128i_to_private(mask);
-
- for (size_t i = 0; i < (sizeof(a_.i8) / sizeof(a_.i8[0])); i++) {
- if (mask_.u8[i] & 0x80) {
- mem_addr[i] = a_.i8[i];
- }
- }
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_maskmoveu_si128(a, mask, mem_addr) \
- simde_mm_maskmoveu_si128( \
- (a), (mask), \
- SIMDE_CHECKED_REINTERPRET_CAST(int8_t *, char *, (mem_addr)))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-int32_t simde_mm_movemask_epi8(simde__m128i a)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE) && !defined(__INTEL_COMPILER)
- /* ICC has trouble with _mm_movemask_epi8 at -O2 and above: */
- return _mm_movemask_epi8(a);
-#else
- int32_t r = 0;
- simde__m128i_private a_ = simde__m128i_to_private(a);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- uint8x16_t input = a_.neon_u8;
- SIMDE_ALIGN_AS(16, int8x8_t)
- static const int8_t xr[8] = {-7, -6, -5, -4, -3, -2, -1, 0};
- uint8x8_t mask_and = vdup_n_u8(0x80);
- int8x8_t mask_shift = vld1_s8(xr);
-
- uint8x8_t lo = vget_low_u8(input);
- uint8x8_t hi = vget_high_u8(input);
-
- lo = vand_u8(lo, mask_and);
- lo = vshl_u8(lo, mask_shift);
-
- hi = vand_u8(hi, mask_and);
- hi = vshl_u8(hi, mask_shift);
-
- lo = vpadd_u8(lo, lo);
- lo = vpadd_u8(lo, lo);
- lo = vpadd_u8(lo, lo);
-
- hi = vpadd_u8(hi, hi);
- hi = vpadd_u8(hi, hi);
- hi = vpadd_u8(hi, hi);
-
- r = ((hi[0] << 8) | (lo[0] & 0xFF));
-#elif defined(SIMDE_POWER_ALTIVEC_P8_NATIVE) && !defined(HEDLEY_IBM_VERSION)
- static const SIMDE_POWER_ALTIVEC_VECTOR(unsigned char)
- perm = {120, 112, 104, 96, 88, 80, 72, 64,
- 56, 48, 40, 32, 24, 16, 8, 0};
- r = HEDLEY_STATIC_CAST(
- int32_t, vec_extract(vec_vbpermq(a_.altivec_u8, perm), 1));
-#else
- SIMDE_VECTORIZE_REDUCTION(| : r)
- for (size_t i = 0; i < (sizeof(a_.u8) / sizeof(a_.u8[0])); i++) {
- r |= (a_.u8[15 - i] >> 7) << (15 - i);
- }
-#endif
-
- return r;
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_movemask_epi8(a) simde_mm_movemask_epi8(a)
-#endif
-
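The portable tail of movemask_epi8 above gathers the most significant bit of each byte into bit i of the result; the NEON and AltiVec branches are just faster ways of doing the same gather. A scalar sketch (hypothetical helper, not SIMDe):

#include <stddef.h>
#include <stdint.h>

static int movemask_u8_scalar(const uint8_t v[16])
{
        int r = 0;
        for (size_t i = 0; i < 16; i++)
                r |= (v[i] >> 7) << i; /* MSB of byte i becomes bit i */
        return r;
}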
-SIMDE_FUNCTION_ATTRIBUTES
-int32_t simde_mm_movemask_pd(simde__m128d a)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_movemask_pd(a);
-#else
- int32_t r = 0;
- simde__m128d_private a_ = simde__m128d_to_private(a);
-
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(a_.u64) / sizeof(a_.u64[0])); i++) {
- r |= (a_.u64[i] >> 63) << i;
- }
-
- return r;
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_movemask_pd(a) simde_mm_movemask_pd(a)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_movepi64_pi64(simde__m128i a)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE) && defined(SIMDE_X86_MMX_NATIVE)
- return _mm_movepi64_pi64(a);
-#else
- simde__m64_private r_;
- simde__m128i_private a_ = simde__m128i_to_private(a);
-
- r_.i64[0] = a_.i64[0];
-
- return simde__m64_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_movepi64_pi64(a) simde_mm_movepi64_pi64(a)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_movpi64_epi64(simde__m64 a)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE) && defined(SIMDE_X86_MMX_NATIVE)
- return _mm_movpi64_epi64(a);
-#else
- simde__m128i_private r_;
- simde__m64_private a_ = simde__m64_to_private(a);
-
- r_.i64[0] = a_.i64[0];
- r_.i64[1] = 0;
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_movpi64_epi64(a) simde_mm_movpi64_epi64(a)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_min_epi16(simde__m128i a, simde__m128i b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_min_epi16(a, b);
-#else
- simde__m128i_private r_, a_ = simde__m128i_to_private(a),
- b_ = simde__m128i_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i16 = vminq_s16(a_.neon_i16, b_.neon_i16);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i16) / sizeof(r_.i16[0])); i++) {
- r_.i16[i] = (a_.i16[i] < b_.i16[i]) ? a_.i16[i] : b_.i16[i];
- }
-#endif
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_min_epi16(a, b) simde_mm_min_epi16(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_min_epu8(simde__m128i a, simde__m128i b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_min_epu8(a, b);
-#else
- simde__m128i_private r_, a_ = simde__m128i_to_private(a),
- b_ = simde__m128i_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_u8 = vminq_u8(a_.neon_u8, b_.neon_u8);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.u8) / sizeof(r_.u8[0])); i++) {
- r_.u8[i] = (a_.u8[i] < b_.u8[i]) ? a_.u8[i] : b_.u8[i];
- }
-#endif
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_min_epu8(a, b) simde_mm_min_epu8(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128d simde_mm_min_pd(simde__m128d a, simde__m128d b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_min_pd(a, b);
-#else
- simde__m128d_private r_, a_ = simde__m128d_to_private(a),
- b_ = simde__m128d_to_private(b);
-
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.f64) / sizeof(r_.f64[0])); i++) {
- r_.f64[i] = (a_.f64[i] < b_.f64[i]) ? a_.f64[i] : b_.f64[i];
- }
-
- return simde__m128d_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_min_pd(a, b) simde_mm_min_pd(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128d simde_mm_min_sd(simde__m128d a, simde__m128d b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_min_sd(a, b);
-#elif defined(SIMDE_ASSUME_VECTORIZATION)
- return simde_mm_move_sd(a, simde_mm_min_pd(a, b));
-#else
- simde__m128d_private r_, a_ = simde__m128d_to_private(a),
- b_ = simde__m128d_to_private(b);
-
- r_.f64[0] = (a_.f64[0] < b_.f64[0]) ? a_.f64[0] : b_.f64[0];
- r_.f64[1] = a_.f64[1];
-
- return simde__m128d_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_min_sd(a, b) simde_mm_min_sd(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_max_epi16(simde__m128i a, simde__m128i b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_max_epi16(a, b);
-#else
- simde__m128i_private r_, a_ = simde__m128i_to_private(a),
- b_ = simde__m128i_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i16 = vmaxq_s16(a_.neon_i16, b_.neon_i16);
-#elif defined(SIMDE_POWER_ALTIVEC_P5_NATIVE)
- r_.altivec_i16 = vec_max(a_.altivec_i16, b_.altivec_i16);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i16) / sizeof(r_.i16[0])); i++) {
- r_.i16[i] = (a_.i16[i] > b_.i16[i]) ? a_.i16[i] : b_.i16[i];
- }
-#endif
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_max_epi16(a, b) simde_mm_max_epi16(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_max_epu8(simde__m128i a, simde__m128i b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_max_epu8(a, b);
-#else
- simde__m128i_private r_, a_ = simde__m128i_to_private(a),
- b_ = simde__m128i_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_u8 = vmaxq_u8(a_.neon_u8, b_.neon_u8);
-#elif defined(SIMDE_POWER_ALTIVEC_P5_NATIVE)
- r_.altivec_u8 = vec_max(a_.altivec_u8, b_.altivec_u8);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.u8) / sizeof(r_.u8[0])); i++) {
- r_.u8[i] = (a_.u8[i] > b_.u8[i]) ? a_.u8[i] : b_.u8[i];
- }
-#endif
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_max_epu8(a, b) simde_mm_max_epu8(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128d simde_mm_max_pd(simde__m128d a, simde__m128d b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_max_pd(a, b);
-#else
- simde__m128d_private r_, a_ = simde__m128d_to_private(a),
- b_ = simde__m128d_to_private(b);
-
-#if defined(SIMDE_POWER_ALTIVEC_P5_NATIVE)
- r_.altivec_f64 = vec_max(a_.altivec_f64, b_.altivec_f64);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.f64) / sizeof(r_.f64[0])); i++) {
- r_.f64[i] = (a_.f64[i] > b_.f64[i]) ? a_.f64[i] : b_.f64[i];
- }
-#endif
-
- return simde__m128d_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_max_pd(a, b) simde_mm_max_pd(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128d simde_mm_max_sd(simde__m128d a, simde__m128d b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_max_sd(a, b);
-#elif defined(SIMDE_ASSUME_VECTORIZATION)
- return simde_mm_move_sd(a, simde_mm_max_pd(a, b));
-#else
- simde__m128d_private r_, a_ = simde__m128d_to_private(a),
- b_ = simde__m128d_to_private(b);
-
- r_.f64[0] = (a_.f64[0] > b_.f64[0]) ? a_.f64[0] : b_.f64[0];
- r_.f64[1] = a_.f64[1];
-
- return simde__m128d_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_max_sd(a, b) simde_mm_max_sd(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_move_epi64(simde__m128i a)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_move_epi64(a);
-#else
- simde__m128i_private r_, a_ = simde__m128i_to_private(a);
-
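-	/* Copy the low 64-bit lane of a and zero the upper lane. */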
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i64 = vsetq_lane_s64(0, a_.neon_i64, 1);
-#else
- r_.i64[0] = a_.i64[0];
- r_.i64[1] = 0;
-#endif
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_move_epi64(a) simde_mm_move_epi64(a)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_mul_epu32(simde__m128i a, simde__m128i b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_mul_epu32(a, b);
-#else
- simde__m128i_private r_, a_ = simde__m128i_to_private(a),
- b_ = simde__m128i_to_private(b);
-
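-	/* Multiply the even-indexed unsigned 32-bit lanes of a and b,
-	 * widening each product to 64 bits. */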
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.u64) / sizeof(r_.u64[0])); i++) {
- r_.u64[i] = HEDLEY_STATIC_CAST(uint64_t, a_.u32[i * 2]) *
- HEDLEY_STATIC_CAST(uint64_t, b_.u32[i * 2]);
- }
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_mul_epu32(a, b) simde_mm_mul_epu32(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_x_mm_mul_epi64(simde__m128i a, simde__m128i b)
-{
- simde__m128i_private r_, a_ = simde__m128i_to_private(a),
- b_ = simde__m128i_to_private(b);
-
-#if defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
- r_.i64 = a_.i64 * b_.i64;
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i64) / sizeof(r_.i64[0])); i++) {
- r_.i64[i] = a_.i64[i] * b_.i64[i];
- }
-#endif
-
- return simde__m128i_from_private(r_);
-}
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_x_mm_mod_epi64(simde__m128i a, simde__m128i b)
-{
- simde__m128i_private r_, a_ = simde__m128i_to_private(a),
- b_ = simde__m128i_to_private(b);
-
-#if defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
- r_.i64 = a_.i64 % b_.i64;
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i64) / sizeof(r_.i64[0])); i++) {
- r_.i64[i] = a_.i64[i] % b_.i64[i];
- }
-#endif
-
- return simde__m128i_from_private(r_);
-}
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128d simde_mm_mul_pd(simde__m128d a, simde__m128d b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_mul_pd(a, b);
-#else
- simde__m128d_private r_, a_ = simde__m128d_to_private(a),
- b_ = simde__m128d_to_private(b);
-
-#if defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
- r_.f64 = a_.f64 * b_.f64;
-#elif defined(SIMDE_WASM_SIMD128_NATIVE)
- r_.wasm_v128 = wasm_f64x2_mul(a_.wasm_v128, b_.wasm_v128);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.f64) / sizeof(r_.f64[0])); i++) {
- r_.f64[i] = a_.f64[i] * b_.f64[i];
- }
-#endif
-
- return simde__m128d_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_mul_pd(a, b) simde_mm_mul_pd(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128d simde_mm_mul_sd(simde__m128d a, simde__m128d b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_mul_sd(a, b);
-#elif defined(SIMDE_ASSUME_VECTORIZATION)
- return simde_mm_move_sd(a, simde_mm_mul_pd(a, b));
-#else
- simde__m128d_private r_, a_ = simde__m128d_to_private(a),
- b_ = simde__m128d_to_private(b);
-
- r_.f64[0] = a_.f64[0] * b_.f64[0];
- r_.f64[1] = a_.f64[1];
-
- return simde__m128d_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_mul_sd(a, b) simde_mm_mul_sd(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_mul_su32(simde__m64 a, simde__m64 b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE) && defined(SIMDE_X86_MMX_NATIVE) && \
- !defined(__PGI)
- return _mm_mul_su32(a, b);
-#else
- simde__m64_private r_, a_ = simde__m64_to_private(a),
- b_ = simde__m64_to_private(b);
-
- r_.u64[0] = HEDLEY_STATIC_CAST(uint64_t, a_.u32[0]) *
- HEDLEY_STATIC_CAST(uint64_t, b_.u32[0]);
-
- return simde__m64_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_mul_su32(a, b) simde_mm_mul_su32(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_mulhi_epi16(simde__m128i a, simde__m128i b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_mulhi_epi16(a, b);
-#else
- simde__m128i_private r_, a_ = simde__m128i_to_private(a),
- b_ = simde__m128i_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- int16x4_t a3210 = vget_low_s16(a_.neon_i16);
- int16x4_t b3210 = vget_low_s16(b_.neon_i16);
- int32x4_t ab3210 = vmull_s16(a3210, b3210); /* 3333222211110000 */
- int16x4_t a7654 = vget_high_s16(a_.neon_i16);
- int16x4_t b7654 = vget_high_s16(b_.neon_i16);
- int32x4_t ab7654 = vmull_s16(a7654, b7654); /* 7777666655554444 */
- uint16x8x2_t rv = vuzpq_u16(vreinterpretq_u16_s32(ab3210),
- vreinterpretq_u16_s32(ab7654));
- r_.neon_u16 = rv.val[1];
-#else
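-	/* Portable path: widen each signed 16-bit lane to 32 bits, multiply,
-	 * and keep the upper 16 bits of the product. */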
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i16) / sizeof(r_.i16[0])); i++) {
- r_.u16[i] = HEDLEY_STATIC_CAST(
- uint16_t,
- (HEDLEY_STATIC_CAST(
- uint32_t,
- HEDLEY_STATIC_CAST(int32_t, a_.i16[i]) *
- HEDLEY_STATIC_CAST(int32_t,
- b_.i16[i])) >>
- 16));
- }
-#endif
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_mulhi_epi16(a, b) simde_mm_mulhi_epi16(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_mulhi_epu16(simde__m128i a, simde__m128i b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE) && !defined(__PGI)
- return _mm_mulhi_epu16(a, b);
-#else
- simde__m128i_private r_, a_ = simde__m128i_to_private(a),
- b_ = simde__m128i_to_private(b);
-
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.u16) / sizeof(r_.u16[0])); i++) {
- r_.u16[i] = HEDLEY_STATIC_CAST(
- uint16_t,
- HEDLEY_STATIC_CAST(uint32_t, a_.u16[i]) *
- HEDLEY_STATIC_CAST(uint32_t,
- b_.u16[i]) >>
- 16);
- }
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_mulhi_epu16(a, b) simde_mm_mulhi_epu16(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_mullo_epi16(simde__m128i a, simde__m128i b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_mullo_epi16(a, b);
-#else
- simde__m128i_private r_, a_ = simde__m128i_to_private(a),
- b_ = simde__m128i_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i16 = vmulq_s16(a_.neon_i16, b_.neon_i16);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i16) / sizeof(r_.i16[0])); i++) {
- r_.u16[i] = HEDLEY_STATIC_CAST(
- uint16_t,
- HEDLEY_STATIC_CAST(uint32_t, a_.u16[i]) *
- HEDLEY_STATIC_CAST(uint32_t, b_.u16[i]));
- }
-#endif
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_mullo_epi16(a, b) simde_mm_mullo_epi16(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128d simde_mm_or_pd(simde__m128d a, simde__m128d b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_or_pd(a, b);
-#else
- simde__m128d_private r_, a_ = simde__m128d_to_private(a),
- b_ = simde__m128d_to_private(b);
-
-#if defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
- r_.i32f = a_.i32f | b_.i32f;
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i32f) / sizeof(r_.i32f[0])); i++) {
- r_.i32f[i] = a_.i32f[i] | b_.i32f[i];
- }
-#endif
-
- return simde__m128d_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_or_pd(a, b) simde_mm_or_pd(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_or_si128(simde__m128i a, simde__m128i b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_or_si128(a, b);
-#else
- simde__m128i_private r_, a_ = simde__m128i_to_private(a),
- b_ = simde__m128i_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i32 = vorrq_s32(a_.neon_i32, b_.neon_i32);
-#elif defined(SIMDE_POWER_ALTIVEC_P5_NATIVE)
- r_.altivec_i32 = vec_or(a_.altivec_i32, b_.altivec_i32);
-#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
- r_.i32f = a_.i32f | b_.i32f;
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i32f) / sizeof(r_.i32f[0])); i++) {
- r_.i32f[i] = a_.i32f[i] | b_.i32f[i];
- }
-#endif
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_or_si128(a, b) simde_mm_or_si128(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_packs_epi16(simde__m128i a, simde__m128i b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_packs_epi16(a, b);
-#else
- simde__m128i_private r_, a_ = simde__m128i_to_private(a),
- b_ = simde__m128i_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i8 =
- vcombine_s8(vqmovn_s16(a_.neon_i16), vqmovn_s16(b_.neon_i16));
-#else
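-	/* Saturate each signed 16-bit lane to the int8_t range; lanes from a
-	 * fill the low eight result bytes, lanes from b the high eight. */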
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i16) / sizeof(r_.i16[0])); i++) {
- r_.i8[i] = (a_.i16[i] > INT8_MAX)
- ? INT8_MAX
- : ((a_.i16[i] < INT8_MIN)
- ? INT8_MIN
- : HEDLEY_STATIC_CAST(int8_t,
- a_.i16[i]));
- r_.i8[i + 8] = (b_.i16[i] > INT8_MAX)
- ? INT8_MAX
- : ((b_.i16[i] < INT8_MIN)
- ? INT8_MIN
- : HEDLEY_STATIC_CAST(
- int8_t, b_.i16[i]));
- }
-#endif
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_packs_epi16(a, b) simde_mm_packs_epi16(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_packs_epi32(simde__m128i a, simde__m128i b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_packs_epi32(a, b);
-#else
- simde__m128i_private r_, a_ = simde__m128i_to_private(a),
- b_ = simde__m128i_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i16 =
- vcombine_s16(vqmovn_s32(a_.neon_i32), vqmovn_s32(b_.neon_i32));
-#elif defined(SIMDE_POWER_ALTIVEC_P5_NATIVE)
- r_.altivec_i16 = vec_packs(a_.altivec_i32, b_.altivec_i32);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i32) / sizeof(r_.i32[0])); i++) {
- r_.i16[i] = (a_.i32[i] > INT16_MAX)
- ? INT16_MAX
- : ((a_.i32[i] < INT16_MIN)
- ? INT16_MIN
- : HEDLEY_STATIC_CAST(int16_t,
- a_.i32[i]));
- r_.i16[i + 4] =
- (b_.i32[i] > INT16_MAX)
- ? INT16_MAX
- : ((b_.i32[i] < INT16_MIN)
- ? INT16_MIN
- : HEDLEY_STATIC_CAST(int16_t,
- b_.i32[i]));
- }
-#endif
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_packs_epi32(a, b) simde_mm_packs_epi32(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_packus_epi16(simde__m128i a, simde__m128i b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_packus_epi16(a, b);
-#else
- simde__m128i_private r_, a_ = simde__m128i_to_private(a),
- b_ = simde__m128i_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_u8 =
- vcombine_u8(vqmovun_s16(a_.neon_i16), vqmovun_s16(b_.neon_i16));
-#elif defined(SIMDE_POWER_ALTIVEC_P5_NATIVE)
- r_.altivec_u8 = vec_packsu(a_.altivec_i16, b_.altivec_i16);
-#else
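-	/* Saturate each signed 16-bit lane to the uint8_t range [0, 255];
-	 * a fills the low eight result bytes, b the high eight. */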
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i16) / sizeof(r_.i16[0])); i++) {
- r_.u8[i] = (a_.i16[i] > UINT8_MAX)
- ? UINT8_MAX
- : ((a_.i16[i] < 0)
- ? UINT8_C(0)
- : HEDLEY_STATIC_CAST(uint8_t,
- a_.i16[i]));
- r_.u8[i + 8] =
- (b_.i16[i] > UINT8_MAX)
- ? UINT8_MAX
- : ((b_.i16[i] < 0)
- ? UINT8_C(0)
- : HEDLEY_STATIC_CAST(uint8_t,
- b_.i16[i]));
- }
-#endif
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_packus_epi16(a, b) simde_mm_packus_epi16(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-void simde_mm_pause(void)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- _mm_pause();
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_pause() (simde_mm_pause())
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_sad_epu8(simde__m128i a, simde__m128i b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_sad_epu8(a, b);
-#else
- simde__m128i_private r_, a_ = simde__m128i_to_private(a),
- b_ = simde__m128i_to_private(b);
-
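-	/* Sum of absolute differences: each 64-bit result lane holds the sum
-	 * of |a - b| over the corresponding eight unsigned bytes. */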
- for (size_t i = 0; i < (sizeof(r_.i64) / sizeof(r_.i64[0])); i++) {
- uint16_t tmp = 0;
- SIMDE_VECTORIZE_REDUCTION(+ : tmp)
- for (size_t j = 0; j < ((sizeof(r_.u8) / sizeof(r_.u8[0])) / 2);
- j++) {
- const size_t e = j + (i * 8);
- tmp += (a_.u8[e] > b_.u8[e]) ? (a_.u8[e] - b_.u8[e])
- : (b_.u8[e] - a_.u8[e]);
- }
- r_.i64[i] = tmp;
- }
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_sad_epu8(a, b) simde_mm_sad_epu8(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_set_epi8(int8_t e15, int8_t e14, int8_t e13, int8_t e12,
- int8_t e11, int8_t e10, int8_t e9, int8_t e8,
- int8_t e7, int8_t e6, int8_t e5, int8_t e4,
- int8_t e3, int8_t e2, int8_t e1, int8_t e0)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_set_epi8(e15, e14, e13, e12, e11, e10, e9, e8, e7, e6, e5,
- e4, e3, e2, e1, e0);
-#else
- simde__m128i_private r_;
-
-#if defined(SIMDE_WASM_SIMD128_NATIVE)
- r_.wasm_v128 = wasm_i8x16_make(e0, e1, e2, e3, e4, e5, e6, e7, e8, e9,
- e10, e11, e12, e13, e14, e15);
-#else
- r_.i8[0] = e0;
- r_.i8[1] = e1;
- r_.i8[2] = e2;
- r_.i8[3] = e3;
- r_.i8[4] = e4;
- r_.i8[5] = e5;
- r_.i8[6] = e6;
- r_.i8[7] = e7;
- r_.i8[8] = e8;
- r_.i8[9] = e9;
- r_.i8[10] = e10;
- r_.i8[11] = e11;
- r_.i8[12] = e12;
- r_.i8[13] = e13;
- r_.i8[14] = e14;
- r_.i8[15] = e15;
-#endif
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_set_epi8(e15, e14, e13, e12, e11, e10, e9, e8, e7, e6, e5, e4, e3, \
- e2, e1, e0) \
- simde_mm_set_epi8(e15, e14, e13, e12, e11, e10, e9, e8, e7, e6, e5, \
- e4, e3, e2, e1, e0)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_set_epi16(int16_t e7, int16_t e6, int16_t e5, int16_t e4,
- int16_t e3, int16_t e2, int16_t e1, int16_t e0)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_set_epi16(e7, e6, e5, e4, e3, e2, e1, e0);
-#else
- simde__m128i_private r_;
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- SIMDE_ALIGN_AS(16, int16x8_t)
- int16_t data[8] = {e0, e1, e2, e3, e4, e5, e6, e7};
- r_.neon_i16 = vld1q_s16(data);
-#elif defined(SIMDE_WASM_SIMD128_NATIVE)
- r_.wasm_v128 = wasm_i16x8_make(e0, e1, e2, e3, e4, e5, e6, e7);
-#else
- r_.i16[0] = e0;
- r_.i16[1] = e1;
- r_.i16[2] = e2;
- r_.i16[3] = e3;
- r_.i16[4] = e4;
- r_.i16[5] = e5;
- r_.i16[6] = e6;
- r_.i16[7] = e7;
-#endif
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_set_epi16(e7, e6, e5, e4, e3, e2, e1, e0) \
- simde_mm_set_epi16(e7, e6, e5, e4, e3, e2, e1, e0)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_set_epi32(int32_t e3, int32_t e2, int32_t e1, int32_t e0)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_set_epi32(e3, e2, e1, e0);
-#else
- simde__m128i_private r_;
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- SIMDE_ALIGN_AS(16, int32x4_t) int32_t data[4] = {e0, e1, e2, e3};
- r_.neon_i32 = vld1q_s32(data);
-#elif defined(SIMDE_WASM_SIMD128_NATIVE)
- r_.wasm_v128 = wasm_i32x4_make(e0, e1, e2, e3);
-#else
- r_.i32[0] = e0;
- r_.i32[1] = e1;
- r_.i32[2] = e2;
- r_.i32[3] = e3;
-#endif
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_set_epi32(e3, e2, e1, e0) simde_mm_set_epi32(e3, e2, e1, e0)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_set_epi64(simde__m64 e1, simde__m64 e0)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE) && defined(SIMDE_X86_MMX_NATIVE)
- return _mm_set_epi64(e1, e0);
-#else
- simde__m128i_private r_;
-
- r_.m64_private[0] = simde__m64_to_private(e0);
- r_.m64_private[1] = simde__m64_to_private(e1);
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_set_epi64(e1, e0) (simde_mm_set_epi64((e1), (e0)))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_set_epi64x(int64_t e1, int64_t e0)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE) && \
- (!defined(HEDLEY_MSVC_VERSION) || HEDLEY_MSVC_VERSION_CHECK(19, 0, 0))
- return _mm_set_epi64x(e1, e0);
-#else
- simde__m128i_private r_;
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i64 = vcombine_s64(vdup_n_s64(e0), vdup_n_s64(e1));
-#elif defined(SIMDE_WASM_SIMD128_NATIVE)
- r_.wasm_v128 = wasm_i64x2_make(e0, e1);
-#else
- r_.i64[0] = e0;
- r_.i64[1] = e1;
-#endif
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_set_epi64x(e1, e0) simde_mm_set_epi64x(e1, e0)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_x_mm_set_epu8(uint8_t e15, uint8_t e14, uint8_t e13,
- uint8_t e12, uint8_t e11, uint8_t e10,
- uint8_t e9, uint8_t e8, uint8_t e7, uint8_t e6,
- uint8_t e5, uint8_t e4, uint8_t e3, uint8_t e2,
- uint8_t e1, uint8_t e0)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_set_epi8(
- HEDLEY_STATIC_CAST(char, e15), HEDLEY_STATIC_CAST(char, e14),
- HEDLEY_STATIC_CAST(char, e13), HEDLEY_STATIC_CAST(char, e12),
- HEDLEY_STATIC_CAST(char, e11), HEDLEY_STATIC_CAST(char, e10),
- HEDLEY_STATIC_CAST(char, e9), HEDLEY_STATIC_CAST(char, e8),
- HEDLEY_STATIC_CAST(char, e7), HEDLEY_STATIC_CAST(char, e6),
- HEDLEY_STATIC_CAST(char, e5), HEDLEY_STATIC_CAST(char, e4),
- HEDLEY_STATIC_CAST(char, e3), HEDLEY_STATIC_CAST(char, e2),
- HEDLEY_STATIC_CAST(char, e1), HEDLEY_STATIC_CAST(char, e0));
-#else
- simde__m128i_private r_;
-
- r_.u8[0] = e0;
- r_.u8[1] = e1;
- r_.u8[2] = e2;
- r_.u8[3] = e3;
- r_.u8[4] = e4;
- r_.u8[5] = e5;
- r_.u8[6] = e6;
- r_.u8[7] = e7;
- r_.u8[8] = e8;
- r_.u8[9] = e9;
- r_.u8[10] = e10;
- r_.u8[11] = e11;
- r_.u8[12] = e12;
- r_.u8[13] = e13;
- r_.u8[14] = e14;
- r_.u8[15] = e15;
-
- return simde__m128i_from_private(r_);
-#endif
-}
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_x_mm_set_epu16(uint16_t e7, uint16_t e6, uint16_t e5,
- uint16_t e4, uint16_t e3, uint16_t e2,
- uint16_t e1, uint16_t e0)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_set_epi16(
- HEDLEY_STATIC_CAST(short, e7), HEDLEY_STATIC_CAST(short, e6),
- HEDLEY_STATIC_CAST(short, e5), HEDLEY_STATIC_CAST(short, e4),
- HEDLEY_STATIC_CAST(short, e3), HEDLEY_STATIC_CAST(short, e2),
- HEDLEY_STATIC_CAST(short, e1), HEDLEY_STATIC_CAST(short, e0));
-#else
- simde__m128i_private r_;
-
- r_.u16[0] = e0;
- r_.u16[1] = e1;
- r_.u16[2] = e2;
- r_.u16[3] = e3;
- r_.u16[4] = e4;
- r_.u16[5] = e5;
- r_.u16[6] = e6;
- r_.u16[7] = e7;
-
- return simde__m128i_from_private(r_);
-#endif
-}
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_x_mm_set_epu32(uint32_t e3, uint32_t e2, uint32_t e1,
- uint32_t e0)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_set_epi32(HEDLEY_STATIC_CAST(int, e3),
- HEDLEY_STATIC_CAST(int, e2),
- HEDLEY_STATIC_CAST(int, e1),
- HEDLEY_STATIC_CAST(int, e0));
-#else
- simde__m128i_private r_;
-
- r_.u32[0] = e0;
- r_.u32[1] = e1;
- r_.u32[2] = e2;
- r_.u32[3] = e3;
-
- return simde__m128i_from_private(r_);
-#endif
-}
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_x_mm_set_epu64x(uint64_t e1, uint64_t e0)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE) && \
- (!defined(HEDLEY_MSVC_VERSION) || HEDLEY_MSVC_VERSION_CHECK(19, 0, 0))
- return _mm_set_epi64x(HEDLEY_STATIC_CAST(int64_t, e1),
- HEDLEY_STATIC_CAST(int64_t, e0));
-#else
- simde__m128i_private r_;
-
- r_.u64[0] = e0;
- r_.u64[1] = e1;
-
- return simde__m128i_from_private(r_);
-#endif
-}
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128d simde_mm_set_pd(simde_float64 e1, simde_float64 e0)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_set_pd(e1, e0);
-#else
- simde__m128d_private r_;
-
-#if defined(SIMDE_WASM_SIMD128_NATIVE)
- r_.wasm_v128 = wasm_f64x2_make(e0, e1);
-#else
- r_.f64[0] = e0;
- r_.f64[1] = e1;
-#endif
-
- return simde__m128d_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_set_pd(e1, e0) simde_mm_set_pd(e1, e0)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128d simde_mm_set_pd1(simde_float64 a)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_set1_pd(a);
-#else
- simde__m128d_private r_;
-
- r_.f64[0] = a;
- r_.f64[1] = a;
-
- return simde__m128d_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_set_pd1(a) simde_mm_set1_pd(a)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128d simde_mm_set_sd(simde_float64 a)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_set_sd(a);
-#elif defined(SIMDE_ARM_NEON_A64V8_NATIVE)
-	return vsetq_lane_f64(a, vdupq_n_f64(SIMDE_FLOAT64_C(0.0)), 0);
-#else
- return simde_mm_set_pd(SIMDE_FLOAT64_C(0.0), a);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_set_sd(a) simde_mm_set_sd(a)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_set1_epi8(int8_t a)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_set1_epi8(a);
-#else
- simde__m128i_private r_;
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i8 = vdupq_n_s8(a);
-#elif defined(SIMDE_WASM_SIMD128_NATIVE)
- r_.wasm_v128 = wasm_i8x16_splat(a);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i8) / sizeof(r_.i8[0])); i++) {
- r_.i8[i] = a;
- }
-#endif
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_set1_epi8(a) simde_mm_set1_epi8(a)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_set1_epi16(int16_t a)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_set1_epi16(a);
-#else
- simde__m128i_private r_;
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i16 = vdupq_n_s16(a);
-#elif defined(SIMDE_WASM_SIMD128_NATIVE)
- r_.wasm_v128 = wasm_i16x8_splat(a);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i16) / sizeof(r_.i16[0])); i++) {
- r_.i16[i] = a;
- }
-#endif
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_set1_epi16(a) simde_mm_set1_epi16(a)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_set1_epi32(int32_t a)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_set1_epi32(a);
-#else
- simde__m128i_private r_;
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i32 = vdupq_n_s32(a);
-#elif defined(SIMDE_WASM_SIMD128_NATIVE)
- r_.wasm_v128 = wasm_i32x4_splat(a);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i32) / sizeof(r_.i32[0])); i++) {
- r_.i32[i] = a;
- }
-#endif
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_set1_epi32(a) simde_mm_set1_epi32(a)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_set1_epi64x(int64_t a)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE) && \
- (!defined(HEDLEY_MSVC_VERSION) || HEDLEY_MSVC_VERSION_CHECK(19, 0, 0))
- return _mm_set1_epi64x(a);
-#else
- simde__m128i_private r_;
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i64 = vmovq_n_s64(a);
-#elif defined(SIMDE_WASM_SIMD128_NATIVE)
- r_.wasm_v128 = wasm_i64x2_splat(a);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i64) / sizeof(r_.i64[0])); i++) {
- r_.i64[i] = a;
- }
-#endif
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_set1_epi64x(a) simde_mm_set1_epi64x(a)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_set1_epi64(simde__m64 a)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE) && defined(SIMDE_X86_MMX_NATIVE)
- return _mm_set1_epi64(a);
-#else
- simde__m64_private a_ = simde__m64_to_private(a);
- return simde_mm_set1_epi64x(a_.i64[0]);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_set1_epi64(a) simde_mm_set1_epi64(a)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_x_mm_set1_epu8(uint8_t value)
-{
- return simde_mm_set1_epi8(HEDLEY_STATIC_CAST(int8_t, value));
-}
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_x_mm_set1_epu16(uint16_t value)
-{
- return simde_mm_set1_epi16(HEDLEY_STATIC_CAST(int16_t, value));
-}
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_x_mm_set1_epu32(uint32_t value)
-{
- return simde_mm_set1_epi32(HEDLEY_STATIC_CAST(int32_t, value));
-}
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_x_mm_set1_epu64(uint64_t value)
-{
- return simde_mm_set1_epi64x(HEDLEY_STATIC_CAST(int64_t, value));
-}
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128d simde_mm_set1_pd(simde_float64 a)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_set1_pd(a);
-#else
- simde__m128d_private r_;
-
-#if defined(SIMDE_WASM_SIMD128_NATIVE)
- r_.wasm_v128 = wasm_f64x2_splat(a);
-#else
- SIMDE_VECTORIZE
-	for (size_t i = 0; i < (sizeof(r_.f64) / sizeof(r_.f64[0])); i++) {
- r_.f64[i] = a;
- }
-#endif
-
- return simde__m128d_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_set1_pd(a) simde_mm_set1_pd(a)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_setr_epi8(int8_t e15, int8_t e14, int8_t e13, int8_t e12,
- int8_t e11, int8_t e10, int8_t e9, int8_t e8,
- int8_t e7, int8_t e6, int8_t e5, int8_t e4,
- int8_t e3, int8_t e2, int8_t e1, int8_t e0)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_setr_epi8(e15, e14, e13, e12, e11, e10, e9, e8, e7, e6, e5,
- e4, e3, e2, e1, e0);
-#else
- return simde_mm_set_epi8(e0, e1, e2, e3, e4, e5, e6, e7, e8, e9, e10,
- e11, e12, e13, e14, e15);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_setr_epi8(e15, e14, e13, e12, e11, e10, e9, e8, e7, e6, e5, e4, \
- e3, e2, e1, e0) \
- simde_mm_setr_epi8(e15, e14, e13, e12, e11, e10, e9, e8, e7, e6, e5, \
- e4, e3, e2, e1, e0)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_setr_epi16(int16_t e7, int16_t e6, int16_t e5, int16_t e4,
- int16_t e3, int16_t e2, int16_t e1, int16_t e0)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_setr_epi16(e7, e6, e5, e4, e3, e2, e1, e0);
-#else
- return simde_mm_set_epi16(e0, e1, e2, e3, e4, e5, e6, e7);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_setr_epi16(e7, e6, e5, e4, e3, e2, e1, e0) \
- simde_mm_setr_epi16(e7, e6, e5, e4, e3, e2, e1, e0)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_setr_epi32(int32_t e3, int32_t e2, int32_t e1, int32_t e0)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_setr_epi32(e3, e2, e1, e0);
-#else
- return simde_mm_set_epi32(e0, e1, e2, e3);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_setr_epi32(e3, e2, e1, e0) simde_mm_setr_epi32(e3, e2, e1, e0)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_setr_epi64(simde__m64 e1, simde__m64 e0)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE) && defined(SIMDE_X86_MMX_NATIVE)
- return _mm_setr_epi64(e1, e0);
-#else
- return simde_mm_set_epi64(e0, e1);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_setr_epi64(e1, e0) (simde_mm_setr_epi64((e1), (e0)))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128d simde_mm_setr_pd(simde_float64 e1, simde_float64 e0)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_setr_pd(e1, e0);
-#else
- return simde_mm_set_pd(e0, e1);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_setr_pd(e1, e0) simde_mm_setr_pd(e1, e0)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128d simde_mm_setzero_pd(void)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_setzero_pd();
-#else
- simde__m128d_private r_;
-
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i32f) / sizeof(r_.i32f[0])); i++) {
- r_.i32f[i] = 0;
- }
-
- return simde__m128d_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_setzero_pd() simde_mm_setzero_pd()
-#endif
-
-#if defined(SIMDE_DIAGNOSTIC_DISABLE_UNINITIALIZED_)
-HEDLEY_DIAGNOSTIC_PUSH
-SIMDE_DIAGNOSTIC_DISABLE_UNINITIALIZED_
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128d simde_mm_undefined_pd(void)
-{
- simde__m128d_private r_;
-
-#if defined(SIMDE_X86_SSE2_NATIVE) && defined(SIMDE__HAVE_UNDEFINED128)
- r_.n = _mm_undefined_pd();
-#elif !defined(SIMDE_DIAGNOSTIC_DISABLE_UNINITIALIZED_)
- r_ = simde__m128d_to_private(simde_mm_setzero_pd());
-#endif
-
- return simde__m128d_from_private(r_);
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_undefined_pd() simde_mm_undefined_pd()
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_undefined_si128(void)
-{
- simde__m128i_private r_;
-
-#if defined(SIMDE_X86_SSE2_NATIVE) && defined(SIMDE__HAVE_UNDEFINED128)
- r_.n = _mm_undefined_si128();
-#elif !defined(SIMDE_DIAGNOSTIC_DISABLE_UNINITIALIZED_)
- r_ = simde__m128i_to_private(simde_mm_setzero_si128());
-#endif
-
- return simde__m128i_from_private(r_);
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_undefined_si128() (simde_mm_undefined_si128())
-#endif
-
-#if defined(SIMDE_DIAGNOSTIC_DISABLE_UNINITIALIZED_)
-HEDLEY_DIAGNOSTIC_POP
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128d simde_x_mm_setone_pd(void)
-{
- return simde_mm_castps_pd(simde_x_mm_setone_ps());
-}
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_x_mm_setone_si128(void)
-{
- return simde_mm_castps_si128(simde_x_mm_setone_ps());
-}
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_shuffle_epi32(simde__m128i a, const int imm8)
- SIMDE_REQUIRE_RANGE(imm8, 0, 255)
-{
- simde__m128i_private r_, a_ = simde__m128i_to_private(a);
-
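-	/* Each 2-bit field of imm8 selects which 32-bit lane of a is copied
-	 * into the corresponding result lane. */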
- for (size_t i = 0; i < (sizeof(r_.i32) / sizeof(r_.i32[0])); i++) {
- r_.i32[i] = a_.i32[(imm8 >> (i * 2)) & 3];
- }
-
- return simde__m128i_from_private(r_);
-}
-#if defined(SIMDE_X86_SSE2_NATIVE)
-#define simde_mm_shuffle_epi32(a, imm8) _mm_shuffle_epi32((a), (imm8))
-#elif defined(SIMDE_SHUFFLE_VECTOR_)
-#define simde_mm_shuffle_epi32(a, imm8) \
- (__extension__({ \
- const simde__m128i_private simde__tmp_a_ = \
- simde__m128i_to_private(a); \
- simde__m128i_from_private((simde__m128i_private){ \
- .i32 = SIMDE_SHUFFLE_VECTOR_( \
- 32, 16, (simde__tmp_a_).i32, \
- (simde__tmp_a_).i32, ((imm8)) & 3, \
- ((imm8) >> 2) & 3, ((imm8) >> 4) & 3, \
- ((imm8) >> 6) & 3)}); \
- }))
-#endif
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_shuffle_epi32(a, imm8) simde_mm_shuffle_epi32(a, imm8)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128d simde_mm_shuffle_pd(simde__m128d a, simde__m128d b, const int imm8)
- SIMDE_REQUIRE_RANGE(imm8, 0, 3)
-{
- simde__m128d_private r_, a_ = simde__m128d_to_private(a),
- b_ = simde__m128d_to_private(b);
-
- r_.f64[0] = ((imm8 & 1) == 0) ? a_.f64[0] : a_.f64[1];
- r_.f64[1] = ((imm8 & 2) == 0) ? b_.f64[0] : b_.f64[1];
-
- return simde__m128d_from_private(r_);
-}
-#if defined(SIMDE_X86_SSE2_NATIVE) && !defined(__PGI)
-#define simde_mm_shuffle_pd(a, b, imm8) _mm_shuffle_pd((a), (b), (imm8))
-#elif defined(SIMDE_SHUFFLE_VECTOR_)
-#define simde_mm_shuffle_pd(a, b, imm8) \
- (__extension__({ \
- simde__m128d_from_private((simde__m128d_private){ \
- .f64 = SIMDE_SHUFFLE_VECTOR_( \
- 64, 16, simde__m128d_to_private(a).f64, \
- simde__m128d_to_private(b).f64, \
- (((imm8)) & 1), (((imm8) >> 1) & 1) + 2)}); \
- }))
-#endif
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_shuffle_pd(a, b, imm8) simde_mm_shuffle_pd(a, b, imm8)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_shufflehi_epi16(simde__m128i a, const int imm8)
- SIMDE_REQUIRE_RANGE(imm8, 0, 255)
-{
- simde__m128i_private r_, a_ = simde__m128i_to_private(a);
-
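-	/* The low four 16-bit lanes are copied unchanged; imm8 selects among
-	 * the high four lanes of a for the upper half of the result. */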
- SIMDE_VECTORIZE
- for (size_t i = 0; i < ((sizeof(a_.i16) / sizeof(a_.i16[0])) / 2);
- i++) {
- r_.i16[i] = a_.i16[i];
- }
- for (size_t i = ((sizeof(a_.i16) / sizeof(a_.i16[0])) / 2);
- i < (sizeof(r_.i16) / sizeof(r_.i16[0])); i++) {
- r_.i16[i] = a_.i16[((imm8 >> ((i - 4) * 2)) & 3) + 4];
- }
-
- return simde__m128i_from_private(r_);
-}
-#if defined(SIMDE_X86_SSE2_NATIVE)
-#define simde_mm_shufflehi_epi16(a, imm8) _mm_shufflehi_epi16((a), (imm8))
-#elif defined(SIMDE_SHUFFLE_VECTOR_)
-#define simde_mm_shufflehi_epi16(a, imm8) \
- (__extension__({ \
- const simde__m128i_private simde__tmp_a_ = \
- simde__m128i_to_private(a); \
- simde__m128i_from_private((simde__m128i_private){ \
- .i16 = SIMDE_SHUFFLE_VECTOR_( \
- 16, 16, (simde__tmp_a_).i16, \
- (simde__tmp_a_).i16, 0, 1, 2, 3, \
- (((imm8)) & 3) + 4, (((imm8) >> 2) & 3) + 4, \
- (((imm8) >> 4) & 3) + 4, \
- (((imm8) >> 6) & 3) + 4)}); \
- }))
-#endif
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_shufflehi_epi16(a, imm8) simde_mm_shufflehi_epi16(a, imm8)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_shufflelo_epi16(simde__m128i a, const int imm8)
- SIMDE_REQUIRE_RANGE(imm8, 0, 255)
-{
- simde__m128i_private r_, a_ = simde__m128i_to_private(a);
-
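-	/* imm8 selects among the low four 16-bit lanes of a for the lower
-	 * half of the result; the high four lanes are copied unchanged. */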
- for (size_t i = 0; i < ((sizeof(r_.i16) / sizeof(r_.i16[0])) / 2);
- i++) {
- r_.i16[i] = a_.i16[((imm8 >> (i * 2)) & 3)];
- }
- SIMDE_VECTORIZE
- for (size_t i = ((sizeof(a_.i16) / sizeof(a_.i16[0])) / 2);
- i < (sizeof(r_.i16) / sizeof(r_.i16[0])); i++) {
- r_.i16[i] = a_.i16[i];
- }
-
- return simde__m128i_from_private(r_);
-}
-#if defined(SIMDE_X86_SSE2_NATIVE)
-#define simde_mm_shufflelo_epi16(a, imm8) _mm_shufflelo_epi16((a), (imm8))
-#elif defined(SIMDE_SHUFFLE_VECTOR_)
-#define simde_mm_shufflelo_epi16(a, imm8) \
- (__extension__({ \
- const simde__m128i_private simde__tmp_a_ = \
- simde__m128i_to_private(a); \
- simde__m128i_from_private((simde__m128i_private){ \
- .i16 = SIMDE_SHUFFLE_VECTOR_( \
- 16, 16, (simde__tmp_a_).i16, \
- (simde__tmp_a_).i16, (((imm8)) & 3), \
- (((imm8) >> 2) & 3), (((imm8) >> 4) & 3), \
- (((imm8) >> 6) & 3), 4, 5, 6, 7)}); \
- }))
-#endif
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_shufflelo_epi16(a, imm8) simde_mm_shufflelo_epi16(a, imm8)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_sll_epi16(simde__m128i a, simde__m128i count)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_sll_epi16(a, count);
-#else
- simde__m128i_private r_, a_ = simde__m128i_to_private(a),
- count_ = simde__m128i_to_private(count);
-
- if (count_.u64[0] > 15)
- return simde_mm_setzero_si128();
-
-#if defined(SIMDE_VECTOR_SUBSCRIPT_SCALAR)
- r_.u16 = (a_.u16 << count_.u64[0]);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.u16) / sizeof(r_.u16[0])); i++) {
- r_.u16[i] = HEDLEY_STATIC_CAST(uint16_t,
- (a_.u16[i] << count_.u64[0]));
- }
-#endif
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_sll_epi16(a, count) simde_mm_sll_epi16((a), (count))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_sll_epi32(simde__m128i a, simde__m128i count)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_sll_epi32(a, count);
-#else
- simde__m128i_private r_, a_ = simde__m128i_to_private(a),
- count_ = simde__m128i_to_private(count);
-
- if (count_.u64[0] > 31)
- return simde_mm_setzero_si128();
-
-#if defined(SIMDE_VECTOR_SUBSCRIPT_SCALAR)
- r_.u32 = (a_.u32 << count_.u64[0]);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.u32) / sizeof(r_.u32[0])); i++) {
- r_.u32[i] = HEDLEY_STATIC_CAST(uint32_t,
- (a_.u32[i] << count_.u64[0]));
- }
-#endif
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_sll_epi32(a, count) (simde_mm_sll_epi32(a, (count)))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_sll_epi64(simde__m128i a, simde__m128i count)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_sll_epi64(a, count);
-#else
- simde__m128i_private r_, a_ = simde__m128i_to_private(a),
- count_ = simde__m128i_to_private(count);
-
- if (count_.u64[0] > 63)
- return simde_mm_setzero_si128();
-
- const int_fast16_t s = HEDLEY_STATIC_CAST(int_fast16_t, count_.u64[0]);
-#if !defined(SIMDE_BUG_GCC_94488)
- SIMDE_VECTORIZE
-#endif
- for (size_t i = 0; i < (sizeof(r_.u64) / sizeof(r_.u64[0])); i++) {
- r_.u64[i] = a_.u64[i] << s;
- }
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_sll_epi64(a, count) (simde_mm_sll_epi64(a, (count)))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128d simde_mm_sqrt_pd(simde__m128d a)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_sqrt_pd(a);
-#else
- simde__m128d_private r_, a_ = simde__m128d_to_private(a);
-
-#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
- r_.neon_f64 = vsqrtq_f64(a_.neon_f64);
-#elif defined(simde_math_sqrt)
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.f64) / sizeof(r_.f64[0])); i++) {
- r_.f64[i] = simde_math_sqrt(a_.f64[i]);
- }
-#else
- HEDLEY_UNREACHABLE();
-#endif
-
- return simde__m128d_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_sqrt_pd(a) simde_mm_sqrt_pd(a)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128d simde_mm_sqrt_sd(simde__m128d a, simde__m128d b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_sqrt_sd(a, b);
-#elif defined(SIMDE_ASSUME_VECTORIZATION)
- return simde_mm_move_sd(a, simde_mm_sqrt_pd(b));
-#else
- simde__m128d_private r_, a_ = simde__m128d_to_private(a),
- b_ = simde__m128d_to_private(b);
-
-#if defined(simde_math_sqrt)
- r_.f64[0] = simde_math_sqrt(b_.f64[0]);
- r_.f64[1] = a_.f64[1];
-#else
- HEDLEY_UNREACHABLE();
-#endif
-
- return simde__m128d_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_sqrt_sd(a, b) simde_mm_sqrt_sd(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_srl_epi16(simde__m128i a, simde__m128i count)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_srl_epi16(a, count);
-#else
- simde__m128i_private r_, a_ = simde__m128i_to_private(a),
- count_ = simde__m128i_to_private(count);
-
- const int cnt = HEDLEY_STATIC_CAST(
- int, (count_.i64[0] > 16 ? 16 : count_.i64[0]));
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_u16 = vshlq_u16(a_.neon_u16,
- vdupq_n_s16(HEDLEY_STATIC_CAST(int16_t, -cnt)));
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.u16) / sizeof(r_.u16[0])); i++) {
- r_.u16[i] = a_.u16[i] >> cnt;
- }
-#endif
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_srl_epi16(a, count) (simde_mm_srl_epi16(a, (count)))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_srl_epi32(simde__m128i a, simde__m128i count)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_srl_epi32(a, count);
-#else
- simde__m128i_private r_, a_ = simde__m128i_to_private(a),
- count_ = simde__m128i_to_private(count);
-
- const int cnt = HEDLEY_STATIC_CAST(
- int, (count_.i64[0] > 32 ? 32 : count_.i64[0]));
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_u32 = vshlq_u32(a_.neon_u32,
- vdupq_n_s32(HEDLEY_STATIC_CAST(int32_t, -cnt)));
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.u32) / sizeof(r_.u32[0])); i++) {
- r_.u32[i] = a_.u32[i] >> cnt;
- }
-#endif
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_srl_epi32(a, count) (simde_mm_srl_epi32(a, (count)))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_srl_epi64(simde__m128i a, simde__m128i count)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_srl_epi64(a, count);
-#else
- simde__m128i_private r_, a_ = simde__m128i_to_private(a),
- count_ = simde__m128i_to_private(count);
-
- const int cnt = HEDLEY_STATIC_CAST(
- int, (count_.i64[0] > 64 ? 64 : count_.i64[0]));
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_u64 = vshlq_u64(a_.neon_u64,
- vdupq_n_s64(HEDLEY_STATIC_CAST(int64_t, -cnt)));
-#else
-#if !defined(SIMDE_BUG_GCC_94488)
- SIMDE_VECTORIZE
-#endif
- for (size_t i = 0; i < (sizeof(r_.u64) / sizeof(r_.u64[0])); i++) {
- r_.u64[i] = a_.u64[i] >> cnt;
- }
-#endif
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_srl_epi64(a, count) (simde_mm_srl_epi64(a, (count)))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_srai_epi16(simde__m128i a, const int imm8)
- SIMDE_REQUIRE_CONSTANT_RANGE(imm8, 0, 255)
-{
- /* MSVC requires a range of (0, 255). */
- simde__m128i_private r_, a_ = simde__m128i_to_private(a);
-
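-	/* Emulates _mm_srai_epi16: shift counts above 15 are clamped to 15,
-	 * so each lane ends up filled with its sign bit. */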
- const int cnt = (imm8 & ~15) ? 15 : imm8;
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i16 = vshlq_s16(a_.neon_i16, vdupq_n_s16(-cnt));
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_) / sizeof(r_.i16[0])); i++) {
- r_.i16[i] = a_.i16[i] >> cnt;
- }
-#endif
-
- return simde__m128i_from_private(r_);
-}
-#if defined(SIMDE_X86_SSE2_NATIVE)
-#define simde_mm_srai_epi16(a, imm8) _mm_srai_epi16((a), (imm8))
-#endif
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_srai_epi16(a, imm8) simde_mm_srai_epi16(a, imm8)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_srai_epi32(simde__m128i a, const int imm8)
- SIMDE_REQUIRE_CONSTANT_RANGE(imm8, 0, 255)
-{
- /* MSVC requires a range of (0, 255). */
- simde__m128i_private r_, a_ = simde__m128i_to_private(a);
-
- const int cnt = (imm8 & ~31) ? 31 : imm8;
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i32 = vshlq_s32(a_.neon_i32, vdupq_n_s32(-cnt));
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_) / sizeof(r_.i32[0])); i++) {
- r_.i32[i] = a_.i32[i] >> cnt;
- }
-#endif
-
- return simde__m128i_from_private(r_);
-}
-#if defined(SIMDE_X86_SSE2_NATIVE)
-#define simde_mm_srai_epi32(a, imm8) _mm_srai_epi32((a), (imm8))
-#endif
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_srai_epi32(a, imm8) simde_mm_srai_epi32(a, imm8)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_sra_epi16(simde__m128i a, simde__m128i count)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_sra_epi16(a, count);
-#else
- simde__m128i_private r_, a_ = simde__m128i_to_private(a),
- count_ = simde__m128i_to_private(count);
-
- const int cnt = HEDLEY_STATIC_CAST(
- int, (count_.i64[0] > 15 ? 15 : count_.i64[0]));
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i16 = vshlq_s16(a_.neon_i16,
- vdupq_n_s16(HEDLEY_STATIC_CAST(int16_t, -cnt)));
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i16) / sizeof(r_.i16[0])); i++) {
- r_.i16[i] = a_.i16[i] >> cnt;
- }
-#endif
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_sra_epi16(a, count) (simde_mm_sra_epi16(a, count))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_sra_epi32(simde__m128i a, simde__m128i count)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE) && !defined(SIMDE_BUG_GCC_BAD_MM_SRA_EPI32)
- return _mm_sra_epi32(a, count);
-#else
- simde__m128i_private r_, a_ = simde__m128i_to_private(a),
- count_ = simde__m128i_to_private(count);
-
- const int cnt = count_.u64[0] > 31
- ? 31
- : HEDLEY_STATIC_CAST(int, count_.u64[0]);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i32 = vshlq_s32(a_.neon_i32,
- vdupq_n_s32(HEDLEY_STATIC_CAST(int32_t, -cnt)));
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i32) / sizeof(r_.i32[0])); i++) {
- r_.i32[i] = a_.i32[i] >> cnt;
- }
-#endif
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_sra_epi32(a, count) (simde_mm_sra_epi32(a, (count)))
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_slli_epi16(simde__m128i a, const int imm8)
- SIMDE_REQUIRE_RANGE(imm8, 0, 255)
-{
- simde__m128i_private r_, a_ = simde__m128i_to_private(a);
-
-#if defined(SIMDE_VECTOR_SUBSCRIPT_SCALAR)
- r_.i16 = a_.i16 << (imm8 & 0xff);
-#else
- const int s =
- (imm8 >
- HEDLEY_STATIC_CAST(int, sizeof(r_.i16[0]) * CHAR_BIT) - 1)
- ? 0
- : imm8;
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i16) / sizeof(r_.i16[0])); i++) {
- r_.i16[i] = HEDLEY_STATIC_CAST(int16_t, a_.i16[i] << s);
- }
-#endif
-
- return simde__m128i_from_private(r_);
-}
-#if defined(SIMDE_X86_SSE2_NATIVE)
-#define simde_mm_slli_epi16(a, imm8) _mm_slli_epi16(a, imm8)
-#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
-#define simde_mm_slli_epi16(a, imm8) \
- simde__m128i_from_neon_u16( \
- vshlq_n_u16(simde__m128i_to_neon_u16(a), (imm8)))
-#endif
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_slli_epi16(a, imm8) simde_mm_slli_epi16(a, imm8)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_slli_epi32(simde__m128i a, const int imm8)
- SIMDE_REQUIRE_RANGE(imm8, 0, 255)
-{
- simde__m128i_private r_, a_ = simde__m128i_to_private(a);
-
-#if defined(SIMDE_VECTOR_SUBSCRIPT_SCALAR)
- r_.i32 = a_.i32 << imm8;
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i32) / sizeof(r_.i32[0])); i++) {
- r_.i32[i] = a_.i32[i] << (imm8 & 0xff);
- }
-#endif
-
- return simde__m128i_from_private(r_);
-}
-#if defined(SIMDE_X86_SSE2_NATIVE)
-#define simde_mm_slli_epi32(a, imm8) _mm_slli_epi32(a, imm8)
-#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
-#define simde_mm_slli_epi32(a, imm8) \
- simde__m128i_from_neon_u32( \
- vshlq_n_u32(simde__m128i_to_neon_u32(a), (imm8)))
-#endif
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_slli_epi32(a, imm8) simde_mm_slli_epi32(a, imm8)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_slli_epi64(simde__m128i a, const int imm8)
- SIMDE_REQUIRE_RANGE(imm8, 0, 255)
-{
- simde__m128i_private r_, a_ = simde__m128i_to_private(a);
-
-#if defined(SIMDE_VECTOR_SUBSCRIPT_SCALAR)
- r_.i64 = a_.i64 << imm8;
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i64) / sizeof(r_.i64[0])); i++) {
- r_.i64[i] = a_.i64[i] << (imm8 & 0xff);
- }
-#endif
-
- return simde__m128i_from_private(r_);
-}
-#if defined(SIMDE_X86_SSE2_NATIVE)
-#define simde_mm_slli_epi64(a, imm8) _mm_slli_epi64(a, imm8)
-#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
-#define simde_mm_slli_epi64(a, imm8) \
- simde__m128i_from_neon_u64( \
- vshlq_n_u64(simde__m128i_to_neon_u64(a), (imm8)))
-#endif
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_slli_epi64(a, imm8) simde_mm_slli_epi64(a, imm8)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_srli_epi16(simde__m128i a, const int imm8)
- SIMDE_REQUIRE_RANGE(imm8, 0, 255)
-{
- simde__m128i_private r_, a_ = simde__m128i_to_private(a);
-
-#if defined(SIMDE_VECTOR_SUBSCRIPT_SCALAR)
- r_.u16 = a_.u16 >> imm8;
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i16) / sizeof(r_.i16[0])); i++) {
- r_.u16[i] = a_.u16[i] >> (imm8 & 0xff);
- }
-#endif
-
- return simde__m128i_from_private(r_);
-}
-#if defined(SIMDE_X86_SSE2_NATIVE)
-#define simde_mm_srli_epi16(a, imm8) _mm_srli_epi16(a, imm8)
-#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
-#define simde_mm_srli_epi16(a, imm8) \
- simde__m128i_from_neon_u16( \
- vshrq_n_u16(simde__m128i_to_neon_u16(a), imm8))
-#endif
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_srli_epi16(a, imm8) simde_mm_srli_epi16(a, imm8)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_srli_epi32(simde__m128i a, const int imm8)
- SIMDE_REQUIRE_RANGE(imm8, 0, 255)
-{
- simde__m128i_private r_, a_ = simde__m128i_to_private(a);
-
-#if defined(SIMDE_VECTOR_SUBSCRIPT_SCALAR)
- r_.u32 = a_.u32 >> (imm8 & 0xff);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i32) / sizeof(r_.i32[0])); i++) {
- r_.u32[i] = a_.u32[i] >> (imm8 & 0xff);
- }
-#endif
-
- return simde__m128i_from_private(r_);
-}
-#if defined(SIMDE_X86_SSE2_NATIVE)
-#define simde_mm_srli_epi32(a, imm8) _mm_srli_epi32(a, imm8)
-#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
-#define simde_mm_srli_epi32(a, imm8) \
- simde__m128i_from_neon_u32( \
- vshrq_n_u32(simde__m128i_to_neon_u32(a), imm8))
-#endif
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_srli_epi32(a, imm8) simde_mm_srli_epi32(a, imm8)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_srli_epi64(simde__m128i a, const int imm8)
- SIMDE_REQUIRE_RANGE(imm8, 0, 255)
-{
- simde__m128i_private r_, a_ = simde__m128i_to_private(a);
-
- if (HEDLEY_UNLIKELY((imm8 & 63) != imm8))
- return simde_mm_setzero_si128();
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_u64 = vshlq_u64(a_.neon_u64, vdupq_n_s64(-imm8));
-#else
-#if defined(SIMDE_VECTOR_SUBSCRIPT_SCALAR) && !defined(SIMDE_BUG_GCC_94488)
- r_.u64 = a_.u64 >> imm8;
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i64) / sizeof(r_.i64[0])); i++) {
- r_.u64[i] = a_.u64[i] >> imm8;
- }
-#endif
-#endif
-
- return simde__m128i_from_private(r_);
-}
-#if defined(SIMDE_X86_SSE2_NATIVE)
-#define simde_mm_srli_epi64(a, imm8) _mm_srli_epi64(a, imm8)
-#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE) && !defined(__clang__)
-#define simde_mm_srli_epi64(a, imm8) \
- ((imm8 == 0) ? (a) \
- : (simde__m128i_from_neon_u64(vshrq_n_u64( \
- simde__m128i_to_neon_u64(a), imm8))))
-#endif
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_srli_epi64(a, imm8) simde_mm_srli_epi64(a, imm8)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-void simde_mm_store_pd(simde_float64 mem_addr[HEDLEY_ARRAY_PARAM(2)],
- simde__m128d a)
-{
- simde_assert_aligned(16, mem_addr);
-
-#if defined(SIMDE_X86_SSE2_NATIVE)
- _mm_store_pd(mem_addr, a);
-#elif defined(SIMDE_ARM_NEON_A64V8_NATIVE)
- vst1q_f64(mem_addr, simde__m128d_to_private(a).neon_f64);
-#else
- simde_memcpy(mem_addr, &a, sizeof(a));
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_store_pd(mem_addr, a) \
- simde_mm_store_pd(HEDLEY_REINTERPRET_CAST(double *, mem_addr), a)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-void simde_mm_store1_pd(simde_float64 mem_addr[HEDLEY_ARRAY_PARAM(2)],
- simde__m128d a)
-{
- simde_assert_aligned(16, mem_addr);
-
-#if defined(SIMDE_X86_SSE2_NATIVE)
- _mm_store1_pd(mem_addr, a);
-#else
- simde__m128d_private a_ = simde__m128d_to_private(a);
-
- mem_addr[0] = a_.f64[0];
- mem_addr[1] = a_.f64[0];
-#endif
-}
-#define simde_mm_store_pd1(mem_addr, a) \
- simde_mm_store1_pd(HEDLEY_REINTERPRET_CAST(double *, mem_addr), a)
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_store1_pd(mem_addr, a) \
- simde_mm_store1_pd(HEDLEY_REINTERPRET_CAST(double *, mem_addr), a)
-#define _mm_store_pd1(mem_addr, a) \
- simde_mm_store_pd1(HEDLEY_REINTERPRET_CAST(double *, mem_addr), a)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-void simde_mm_store_sd(simde_float64 *mem_addr, simde__m128d a)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- _mm_store_sd(mem_addr, a);
-#else
- simde__m128d_private a_ = simde__m128d_to_private(a);
-
-#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
- simde_float64 v = vgetq_lane_f64(a_.neon_f64, 0);
- simde_memcpy(mem_addr, &v, sizeof(simde_float64));
-#else
- simde_float64 v = a_.f64[0];
- simde_memcpy(mem_addr, &v, sizeof(simde_float64));
-#endif
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_store_sd(mem_addr, a) \
- simde_mm_store_sd(HEDLEY_REINTERPRET_CAST(double *, mem_addr), a)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-void simde_mm_store_si128(simde__m128i *mem_addr, simde__m128i a)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- _mm_store_si128(HEDLEY_STATIC_CAST(__m128i *, mem_addr), a);
-#else
- simde__m128i_private a_ = simde__m128i_to_private(a);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- vst1q_s32(HEDLEY_REINTERPRET_CAST(int32_t *, mem_addr), a_.neon_i32);
-#else
- simde_memcpy(SIMDE_ASSUME_ALIGNED(16, mem_addr), &a_, sizeof(a_));
-#endif
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_store_si128(mem_addr, a) simde_mm_store_si128(mem_addr, a)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-void simde_mm_storeh_pd(simde_float64 *mem_addr, simde__m128d a)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- _mm_storeh_pd(mem_addr, a);
-#else
- simde__m128d_private a_ = simde__m128d_to_private(a);
-
-#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
- *mem_addr = vgetq_lane_f64(a_.neon_f64, 1);
-#else
- *mem_addr = a_.f64[1];
-#endif
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_storeh_pd(mem_addr, a) \
- simde_mm_storeh_pd(HEDLEY_REINTERPRET_CAST(double *, mem_addr), a)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-void simde_mm_storel_epi64(simde__m128i *mem_addr, simde__m128i a)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- _mm_storel_epi64(HEDLEY_STATIC_CAST(__m128i *, mem_addr), a);
-#else
- simde__m128i_private a_ = simde__m128i_to_private(a);
- int64_t tmp;
-
- /* memcpy to prevent aliasing, tmp because we can't take the
- * address of a vector element. */
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- tmp = vgetq_lane_s64(a_.neon_i64, 0);
-#elif defined(SIMDE_POWER_ALTIVEC_P5_NATIVE)
-#if defined(SIMDE_BUG_GCC_95227)
- (void)a_;
-#endif
- tmp = vec_extract(a_.altivec_i64, 0);
-#else
- tmp = a_.i64[0];
-#endif
-
- simde_memcpy(mem_addr, &tmp, sizeof(tmp));
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_storel_epi64(mem_addr, a) simde_mm_storel_epi64(mem_addr, a)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-void simde_mm_storel_pd(simde_float64 *mem_addr, simde__m128d a)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- _mm_storel_pd(mem_addr, a);
-#else
- simde__m128d_private a_ = simde__m128d_to_private(a);
-
- *mem_addr = a_.f64[0];
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_storel_pd(mem_addr, a) \
- simde_mm_storel_pd(HEDLEY_REINTERPRET_CAST(double *, mem_addr), a)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-void simde_mm_storer_pd(simde_float64 mem_addr[2], simde__m128d a)
-{
- simde_assert_aligned(16, mem_addr);
-
-#if defined(SIMDE_X86_SSE2_NATIVE)
- _mm_storer_pd(mem_addr, a);
-#else
- simde__m128d_private a_ = simde__m128d_to_private(a);
-
- mem_addr[0] = a_.f64[1];
- mem_addr[1] = a_.f64[0];
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_storer_pd(mem_addr, a) \
- simde_mm_storer_pd(HEDLEY_REINTERPRET_CAST(double *, mem_addr), a)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-void simde_mm_storeu_pd(simde_float64 *mem_addr, simde__m128d a)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- _mm_storeu_pd(mem_addr, a);
-#else
- simde_memcpy(mem_addr, &a, sizeof(a));
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_storeu_pd(mem_addr, a) \
- simde_mm_storeu_pd(HEDLEY_REINTERPRET_CAST(double *, mem_addr), a)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-void simde_mm_storeu_si128(simde__m128i *mem_addr, simde__m128i a)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- _mm_storeu_si128(HEDLEY_STATIC_CAST(__m128i *, mem_addr), a);
-#else
- simde__m128i_private a_ = simde__m128i_to_private(a);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- vst1q_s32(HEDLEY_REINTERPRET_CAST(int32_t *, mem_addr), a_.neon_i32);
-#else
- simde_memcpy(mem_addr, &a_, sizeof(a_));
-#endif
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_storeu_si128(mem_addr, a) simde_mm_storeu_si128(mem_addr, a)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-void simde_mm_stream_pd(simde_float64 mem_addr[HEDLEY_ARRAY_PARAM(2)],
- simde__m128d a)
-{
- simde_assert_aligned(16, mem_addr);
-
-#if defined(SIMDE_X86_SSE2_NATIVE)
- _mm_stream_pd(mem_addr, a);
-#else
- simde_memcpy(mem_addr, &a, sizeof(a));
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_stream_pd(mem_addr, a) \
- simde_mm_stream_pd(HEDLEY_REINTERPRET_CAST(double *, mem_addr), a)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-void simde_mm_stream_si128(simde__m128i *mem_addr, simde__m128i a)
-{
- simde_assert_aligned(16, mem_addr);
-
-#if defined(SIMDE_X86_SSE2_NATIVE)
- _mm_stream_si128(HEDLEY_STATIC_CAST(__m128i *, mem_addr), a);
-#else
- simde_memcpy(mem_addr, &a, sizeof(a));
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_stream_si128(mem_addr, a) simde_mm_stream_si128(mem_addr, a)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-void simde_mm_stream_si32(int32_t *mem_addr, int32_t a)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- _mm_stream_si32(mem_addr, a);
-#else
- *mem_addr = a;
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_stream_si32(mem_addr, a) simde_mm_stream_si32(mem_addr, a)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-void simde_mm_stream_si64(int64_t *mem_addr, int64_t a)
-{
- *mem_addr = a;
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_stream_si64(mem_addr, a) \
- simde_mm_stream_si64(SIMDE_CHECKED_REINTERPRET_CAST( \
- int64_t *, __int64 *, mem_addr), \
- a)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_sub_epi8(simde__m128i a, simde__m128i b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_sub_epi8(a, b);
-#else
- simde__m128i_private r_, a_ = simde__m128i_to_private(a),
- b_ = simde__m128i_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i8 = vsubq_s8(a_.neon_i8, b_.neon_i8);
-#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
- r_.i8 = a_.i8 - b_.i8;
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i8) / sizeof(r_.i8[0])); i++) {
- r_.i8[i] = a_.i8[i] - b_.i8[i];
- }
-#endif
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_sub_epi8(a, b) simde_mm_sub_epi8(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_sub_epi16(simde__m128i a, simde__m128i b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_sub_epi16(a, b);
-#else
- simde__m128i_private r_, a_ = simde__m128i_to_private(a),
- b_ = simde__m128i_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i16 = vsubq_s16(a_.neon_i16, b_.neon_i16);
-#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
- r_.i16 = a_.i16 - b_.i16;
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i16) / sizeof(r_.i16[0])); i++) {
- r_.i16[i] = a_.i16[i] - b_.i16[i];
- }
-#endif
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_sub_epi16(a, b) simde_mm_sub_epi16(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_sub_epi32(simde__m128i a, simde__m128i b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_sub_epi32(a, b);
-#else
- simde__m128i_private r_, a_ = simde__m128i_to_private(a),
- b_ = simde__m128i_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i32 = vsubq_s32(a_.neon_i32, b_.neon_i32);
-#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
- r_.i32 = a_.i32 - b_.i32;
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i32) / sizeof(r_.i32[0])); i++) {
- r_.i32[i] = a_.i32[i] - b_.i32[i];
- }
-#endif
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_sub_epi32(a, b) simde_mm_sub_epi32(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_sub_epi64(simde__m128i a, simde__m128i b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_sub_epi64(a, b);
-#else
- simde__m128i_private r_, a_ = simde__m128i_to_private(a),
- b_ = simde__m128i_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i64 = vsubq_s64(a_.neon_i64, b_.neon_i64);
-#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
- r_.i64 = a_.i64 - b_.i64;
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i64) / sizeof(r_.i64[0])); i++) {
- r_.i64[i] = a_.i64[i] - b_.i64[i];
- }
-#endif
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_sub_epi64(a, b) simde_mm_sub_epi64(a, b)
-#endif
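The SIMDE_VECTOR_SUBSCRIPT_OPS branches above rely on GCC/Clang vector extensions, where whole-register arithmetic is spelled like scalar arithmetic. A minimal standalone sketch, assuming a compiler that supports __attribute__((vector_size)) (the v4si name is illustrative):

#include <stdio.h>

typedef int v4si __attribute__((vector_size(16))); /* four 32-bit lanes in 16 bytes */

int main(void)
{
	v4si a = {10, 20, 30, 40};
	v4si b = {1, 2, 3, 4};
	v4si r = a - b; /* lane-wise subtraction, like r_.i32 = a_.i32 - b_.i32 above */
	printf("%d %d %d %d\n", r[0], r[1], r[2], r[3]); /* 9 18 27 36 */
	return 0;
}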
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_x_mm_sub_epu32(simde__m128i a, simde__m128i b)
-{
- simde__m128i_private r_, a_ = simde__m128i_to_private(a),
- b_ = simde__m128i_to_private(b);
-
-#if defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
- r_.u32 = a_.u32 - b_.u32;
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.u32) / sizeof(r_.u32[0])); i++) {
- r_.u32[i] = a_.u32[i] - b_.u32[i];
- }
-#endif
-
- return simde__m128i_from_private(r_);
-}
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128d simde_mm_sub_pd(simde__m128d a, simde__m128d b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_sub_pd(a, b);
-#else
- simde__m128d_private r_, a_ = simde__m128d_to_private(a),
- b_ = simde__m128d_to_private(b);
-
-#if defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
- r_.f64 = a_.f64 - b_.f64;
-#elif defined(SIMDE_WASM_SIMD128_NATIVE)
- r_.wasm_v128 = wasm_f64x2_sub(a_.wasm_v128, b_.wasm_v128);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.f64) / sizeof(r_.f64[0])); i++) {
- r_.f64[i] = a_.f64[i] - b_.f64[i];
- }
-#endif
-
- return simde__m128d_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_sub_pd(a, b) simde_mm_sub_pd(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128d simde_mm_sub_sd(simde__m128d a, simde__m128d b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_sub_sd(a, b);
-#elif defined(SIMDE_ASSUME_VECTORIZATION)
- return simde_mm_move_sd(a, simde_mm_sub_pd(a, b));
-#else
- simde__m128d_private r_, a_ = simde__m128d_to_private(a),
- b_ = simde__m128d_to_private(b);
-
- r_.f64[0] = a_.f64[0] - b_.f64[0];
- r_.f64[1] = a_.f64[1];
-
- return simde__m128d_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_sub_sd(a, b) simde_mm_sub_sd(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m64 simde_mm_sub_si64(simde__m64 a, simde__m64 b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE) && defined(SIMDE_X86_MMX_NATIVE)
- return _mm_sub_si64(a, b);
-#else
- simde__m64_private r_, a_ = simde__m64_to_private(a),
- b_ = simde__m64_to_private(b);
-
-#if defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
- r_.i64 = a_.i64 - b_.i64;
-#else
- r_.i64[0] = a_.i64[0] - b_.i64[0];
-#endif
-
- return simde__m64_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_sub_si64(a, b) simde_mm_sub_si64(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_subs_epi8(simde__m128i a, simde__m128i b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_subs_epi8(a, b);
-#else
- simde__m128i_private r_, a_ = simde__m128i_to_private(a),
- b_ = simde__m128i_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i8 = vqsubq_s8(a_.neon_i8, b_.neon_i8);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_) / sizeof(r_.i8[0])); i++) {
- if (((b_.i8[i]) > 0 && (a_.i8[i]) < INT8_MIN + (b_.i8[i]))) {
- r_.i8[i] = INT8_MIN;
- } else if ((b_.i8[i]) < 0 &&
- (a_.i8[i]) > INT8_MAX + (b_.i8[i])) {
- r_.i8[i] = INT8_MAX;
- } else {
- r_.i8[i] = (a_.i8[i]) - (b_.i8[i]);
- }
- }
-#endif
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_subs_epi8(a, b) simde_mm_subs_epi8(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_subs_epi16(simde__m128i a, simde__m128i b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_subs_epi16(a, b);
-#else
- simde__m128i_private r_, a_ = simde__m128i_to_private(a),
- b_ = simde__m128i_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i16 = vqsubq_s16(a_.neon_i16, b_.neon_i16);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_) / sizeof(r_.i16[0])); i++) {
- if (((b_.i16[i]) > 0 &&
- (a_.i16[i]) < INT16_MIN + (b_.i16[i]))) {
- r_.i16[i] = INT16_MIN;
- } else if ((b_.i16[i]) < 0 &&
- (a_.i16[i]) > INT16_MAX + (b_.i16[i])) {
- r_.i16[i] = INT16_MAX;
- } else {
- r_.i16[i] = (a_.i16[i]) - (b_.i16[i]);
- }
- }
-#endif
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_subs_epi16(a, b) simde_mm_subs_epi16(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_subs_epu8(simde__m128i a, simde__m128i b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_subs_epu8(a, b);
-#else
- simde__m128i_private r_, a_ = simde__m128i_to_private(a),
- b_ = simde__m128i_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_u8 = vqsubq_u8(a_.neon_u8, b_.neon_u8);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_) / sizeof(r_.i8[0])); i++) {
- const int32_t x = a_.u8[i] - b_.u8[i];
- if (x < 0) {
- r_.u8[i] = 0;
- } else if (x > UINT8_MAX) {
- r_.u8[i] = UINT8_MAX;
- } else {
- r_.u8[i] = HEDLEY_STATIC_CAST(uint8_t, x);
- }
- }
-#endif
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_subs_epu8(a, b) simde_mm_subs_epu8(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_subs_epu16(simde__m128i a, simde__m128i b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_subs_epu16(a, b);
-#else
- simde__m128i_private r_, a_ = simde__m128i_to_private(a),
- b_ = simde__m128i_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_u16 = vqsubq_u16(a_.neon_u16, b_.neon_u16);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_) / sizeof(r_.i16[0])); i++) {
- const int32_t x = a_.u16[i] - b_.u16[i];
- if (x < 0) {
- r_.u16[i] = 0;
- } else if (x > UINT16_MAX) {
- r_.u16[i] = UINT16_MAX;
- } else {
- r_.u16[i] = HEDLEY_STATIC_CAST(uint16_t, x);
- }
- }
-#endif
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_subs_epu16(a, b) simde_mm_subs_epu16(a, b)
-#endif
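The scalar fallbacks above widen each lane to int32_t before clamping, so the subtraction itself cannot wrap. A minimal sketch of the same unsigned saturating subtract on a single lane (subs_u8 is an illustrative helper, not part of the header):

#include <stdint.h>
#include <stdio.h>

static uint8_t subs_u8(uint8_t a, uint8_t b)
{
	const int32_t x = (int32_t)a - (int32_t)b; /* widen so the difference cannot wrap */
	return (x < 0) ? 0 : (uint8_t)x;           /* clamp at 0; a u8 - u8 never exceeds UINT8_MAX */
}

int main(void)
{
	printf("%d\n", subs_u8(10, 200)); /* prints 0 instead of the wrapped value 66 */
	return 0;
}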
-
-SIMDE_FUNCTION_ATTRIBUTES
-int simde_mm_ucomieq_sd(simde__m128d a, simde__m128d b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_ucomieq_sd(a, b);
-#else
- simde__m128d_private a_ = simde__m128d_to_private(a),
- b_ = simde__m128d_to_private(b);
- int r;
-
-#if defined(SIMDE_HAVE_FENV_H)
- fenv_t envp;
- int x = feholdexcept(&envp);
- r = a_.f64[0] == b_.f64[0];
- if (HEDLEY_LIKELY(x == 0))
- fesetenv(&envp);
-#else
- r = a_.f64[0] == b_.f64[0];
-#endif
-
- return r;
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_ucomieq_sd(a, b) simde_mm_ucomieq_sd(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-int simde_mm_ucomige_sd(simde__m128d a, simde__m128d b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_ucomige_sd(a, b);
-#else
- simde__m128d_private a_ = simde__m128d_to_private(a),
- b_ = simde__m128d_to_private(b);
- int r;
-
-#if defined(SIMDE_HAVE_FENV_H)
- fenv_t envp;
- int x = feholdexcept(&envp);
- r = a_.f64[0] >= b_.f64[0];
- if (HEDLEY_LIKELY(x == 0))
- fesetenv(&envp);
-#else
- r = a_.f64[0] >= b_.f64[0];
-#endif
-
- return r;
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_ucomige_sd(a, b) simde_mm_ucomige_sd(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-int simde_mm_ucomigt_sd(simde__m128d a, simde__m128d b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_ucomigt_sd(a, b);
-#else
- simde__m128d_private a_ = simde__m128d_to_private(a),
- b_ = simde__m128d_to_private(b);
- int r;
-
-#if defined(SIMDE_HAVE_FENV_H)
- fenv_t envp;
- int x = feholdexcept(&envp);
- r = a_.f64[0] > b_.f64[0];
- if (HEDLEY_LIKELY(x == 0))
- fesetenv(&envp);
-#else
- r = a_.f64[0] > b_.f64[0];
-#endif
-
- return r;
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_ucomigt_sd(a, b) simde_mm_ucomigt_sd(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-int simde_mm_ucomile_sd(simde__m128d a, simde__m128d b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_ucomile_sd(a, b);
-#else
- simde__m128d_private a_ = simde__m128d_to_private(a),
- b_ = simde__m128d_to_private(b);
- int r;
-
-#if defined(SIMDE_HAVE_FENV_H)
- fenv_t envp;
- int x = feholdexcept(&envp);
- r = a_.f64[0] <= b_.f64[0];
- if (HEDLEY_LIKELY(x == 0))
- fesetenv(&envp);
-#else
- r = a_.f64[0] <= b_.f64[0];
-#endif
-
- return r;
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_ucomile_sd(a, b) simde_mm_ucomile_sd(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-int simde_mm_ucomilt_sd(simde__m128d a, simde__m128d b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_ucomilt_sd(a, b);
-#else
- simde__m128d_private a_ = simde__m128d_to_private(a),
- b_ = simde__m128d_to_private(b);
- int r;
-
-#if defined(SIMDE_HAVE_FENV_H)
- fenv_t envp;
- int x = feholdexcept(&envp);
- r = a_.f64[0] < b_.f64[0];
- if (HEDLEY_LIKELY(x == 0))
- fesetenv(&envp);
-#else
- r = a_.f64[0] < b_.f64[0];
-#endif
-
- return r;
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_ucomilt_sd(a, b) simde_mm_ucomilt_sd(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-int simde_mm_ucomineq_sd(simde__m128d a, simde__m128d b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_ucomineq_sd(a, b);
-#else
- simde__m128d_private a_ = simde__m128d_to_private(a),
- b_ = simde__m128d_to_private(b);
- int r;
-
-#if defined(SIMDE_HAVE_FENV_H)
- fenv_t envp;
- int x = feholdexcept(&envp);
- r = a_.f64[0] != b_.f64[0];
- if (HEDLEY_LIKELY(x == 0))
- fesetenv(&envp);
-#else
- r = a_.f64[0] != b_.f64[0];
-#endif
-
- return r;
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_ucomineq_sd(a, b) simde_mm_ucomineq_sd(a, b)
-#endif
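Each ucomi* fallback above brackets its comparison with feholdexcept()/fesetenv() so the compare does not leave a visible floating-point exception flag behind. A minimal sketch of that pattern on plain doubles (quiet_eq is illustrative only):

#include <fenv.h>
#include <stdio.h>

static int quiet_eq(double a, double b)
{
	fenv_t env;
	int held = feholdexcept(&env); /* save the environment, clear flags, go non-stop */
	int r = (a == b);
	if (held == 0)
		fesetenv(&env); /* restore the caller's environment, discarding new flags */
	return r;
}

int main(void)
{
	printf("%d\n", quiet_eq(1.0, 1.0)); /* 1 */
	return 0;
}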
-
-#if defined(SIMDE_DIAGNOSTIC_DISABLE_UNINITIALIZED_)
-HEDLEY_DIAGNOSTIC_PUSH
-SIMDE_DIAGNOSTIC_DISABLE_UNINITIALIZED_
-#endif
-
-#if defined(SIMDE_DIAGNOSTIC_DISABLE_UNINITIALIZED_)
-HEDLEY_DIAGNOSTIC_POP
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-void simde_mm_lfence(void)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- _mm_lfence();
-#else
- simde_mm_sfence();
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_lfence() simde_mm_lfence()
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-void simde_mm_mfence(void)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- _mm_mfence();
-#else
- simde_mm_sfence();
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_mfence() simde_mm_mfence()
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_unpackhi_epi8(simde__m128i a, simde__m128i b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_unpackhi_epi8(a, b);
-#else
- simde__m128i_private r_, a_ = simde__m128i_to_private(a),
- b_ = simde__m128i_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- int8x8_t a1 = vreinterpret_s8_s16(vget_high_s16(a_.neon_i16));
- int8x8_t b1 = vreinterpret_s8_s16(vget_high_s16(b_.neon_i16));
- int8x8x2_t result = vzip_s8(a1, b1);
- r_.neon_i8 = vcombine_s8(result.val[0], result.val[1]);
-#elif defined(SIMDE_SHUFFLE_VECTOR_)
- r_.i8 = SIMDE_SHUFFLE_VECTOR_(8, 16, a_.i8, b_.i8, 8, 24, 9, 25, 10, 26,
- 11, 27, 12, 28, 13, 29, 14, 30, 15, 31);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < ((sizeof(r_) / sizeof(r_.i8[0])) / 2); i++) {
- r_.i8[(i * 2)] =
- a_.i8[i + ((sizeof(r_) / sizeof(r_.i8[0])) / 2)];
- r_.i8[(i * 2) + 1] =
- b_.i8[i + ((sizeof(r_) / sizeof(r_.i8[0])) / 2)];
- }
-#endif
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_unpackhi_epi8(a, b) simde_mm_unpackhi_epi8(a, b)
-#endif
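The portable loop above interleaves the upper halves of the two inputs byte by byte. The same access pattern as a plain function, for reference (unpackhi_u8 is an illustrative name):

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* r gets {a[8], b[8], a[9], b[9], ..., a[15], b[15]}, matching the shuffle indices above. */
static void unpackhi_u8(const uint8_t a[16], const uint8_t b[16], uint8_t r[16])
{
	for (size_t i = 0; i < 8; i++) {
		r[2 * i] = a[i + 8];
		r[2 * i + 1] = b[i + 8];
	}
}

int main(void)
{
	uint8_t a[16], b[16], r[16];
	for (int i = 0; i < 16; i++) {
		a[i] = (uint8_t)i;
		b[i] = (uint8_t)(i + 16);
	}
	unpackhi_u8(a, b, r);
	printf("%d %d %d %d\n", r[0], r[1], r[2], r[3]); /* 8 24 9 25 */
	return 0;
}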
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_unpackhi_epi16(simde__m128i a, simde__m128i b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_unpackhi_epi16(a, b);
-#else
- simde__m128i_private r_, a_ = simde__m128i_to_private(a),
- b_ = simde__m128i_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- int16x4_t a1 = vget_high_s16(a_.neon_i16);
- int16x4_t b1 = vget_high_s16(b_.neon_i16);
- int16x4x2_t result = vzip_s16(a1, b1);
- r_.neon_i16 = vcombine_s16(result.val[0], result.val[1]);
-#elif defined(SIMDE_SHUFFLE_VECTOR_)
- r_.i16 = SIMDE_SHUFFLE_VECTOR_(16, 16, a_.i16, b_.i16, 4, 12, 5, 13, 6,
- 14, 7, 15);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < ((sizeof(r_) / sizeof(r_.i16[0])) / 2); i++) {
- r_.i16[(i * 2)] =
- a_.i16[i + ((sizeof(r_) / sizeof(r_.i16[0])) / 2)];
- r_.i16[(i * 2) + 1] =
- b_.i16[i + ((sizeof(r_) / sizeof(r_.i16[0])) / 2)];
- }
-#endif
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_unpackhi_epi16(a, b) simde_mm_unpackhi_epi16(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_unpackhi_epi32(simde__m128i a, simde__m128i b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_unpackhi_epi32(a, b);
-#else
- simde__m128i_private r_, a_ = simde__m128i_to_private(a),
- b_ = simde__m128i_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- int32x2_t a1 = vget_high_s32(a_.neon_i32);
- int32x2_t b1 = vget_high_s32(b_.neon_i32);
- int32x2x2_t result = vzip_s32(a1, b1);
- r_.neon_i32 = vcombine_s32(result.val[0], result.val[1]);
-#elif defined(SIMDE_SHUFFLE_VECTOR_)
- r_.i32 = SIMDE_SHUFFLE_VECTOR_(32, 16, a_.i32, b_.i32, 2, 6, 3, 7);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < ((sizeof(r_) / sizeof(r_.i32[0])) / 2); i++) {
- r_.i32[(i * 2)] =
- a_.i32[i + ((sizeof(r_) / sizeof(r_.i32[0])) / 2)];
- r_.i32[(i * 2) + 1] =
- b_.i32[i + ((sizeof(r_) / sizeof(r_.i32[0])) / 2)];
- }
-#endif
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_unpackhi_epi32(a, b) simde_mm_unpackhi_epi32(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_unpackhi_epi64(simde__m128i a, simde__m128i b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_unpackhi_epi64(a, b);
-#else
- simde__m128i_private r_, a_ = simde__m128i_to_private(a),
- b_ = simde__m128i_to_private(b);
-
-#if defined(SIMDE_SHUFFLE_VECTOR_)
- r_.i64 = SIMDE_SHUFFLE_VECTOR_(64, 16, a_.i64, b_.i64, 1, 3);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < ((sizeof(r_) / sizeof(r_.i64[0])) / 2); i++) {
- r_.i64[(i * 2)] =
- a_.i64[i + ((sizeof(r_) / sizeof(r_.i64[0])) / 2)];
- r_.i64[(i * 2) + 1] =
- b_.i64[i + ((sizeof(r_) / sizeof(r_.i64[0])) / 2)];
- }
-#endif
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_unpackhi_epi64(a, b) simde_mm_unpackhi_epi64(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128d simde_mm_unpackhi_pd(simde__m128d a, simde__m128d b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_unpackhi_pd(a, b);
-#else
- simde__m128d_private r_, a_ = simde__m128d_to_private(a),
- b_ = simde__m128d_to_private(b);
-
-#if defined(SIMDE_SHUFFLE_VECTOR_)
- r_.f64 = SIMDE_SHUFFLE_VECTOR_(64, 16, a_.f64, b_.f64, 1, 3);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < ((sizeof(r_) / sizeof(r_.f64[0])) / 2); i++) {
- r_.f64[(i * 2)] =
- a_.f64[i + ((sizeof(r_) / sizeof(r_.f64[0])) / 2)];
- r_.f64[(i * 2) + 1] =
- b_.f64[i + ((sizeof(r_) / sizeof(r_.f64[0])) / 2)];
- }
-#endif
-
- return simde__m128d_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_unpackhi_pd(a, b) simde_mm_unpackhi_pd(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_unpacklo_epi8(simde__m128i a, simde__m128i b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_unpacklo_epi8(a, b);
-#else
- simde__m128i_private r_, a_ = simde__m128i_to_private(a),
- b_ = simde__m128i_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- int8x8_t a1 = vreinterpret_s8_s16(vget_low_s16(a_.neon_i16));
- int8x8_t b1 = vreinterpret_s8_s16(vget_low_s16(b_.neon_i16));
- int8x8x2_t result = vzip_s8(a1, b1);
- r_.neon_i8 = vcombine_s8(result.val[0], result.val[1]);
-#elif defined(SIMDE_SHUFFLE_VECTOR_)
- r_.i8 = SIMDE_SHUFFLE_VECTOR_(8, 16, a_.i8, b_.i8, 0, 16, 1, 17, 2, 18,
- 3, 19, 4, 20, 5, 21, 6, 22, 7, 23);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < ((sizeof(r_) / sizeof(r_.i8[0])) / 2); i++) {
- r_.i8[(i * 2)] = a_.i8[i];
- r_.i8[(i * 2) + 1] = b_.i8[i];
- }
-#endif
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_unpacklo_epi8(a, b) simde_mm_unpacklo_epi8(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_unpacklo_epi16(simde__m128i a, simde__m128i b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_unpacklo_epi16(a, b);
-#else
- simde__m128i_private r_, a_ = simde__m128i_to_private(a),
- b_ = simde__m128i_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- int16x4_t a1 = vget_low_s16(a_.neon_i16);
- int16x4_t b1 = vget_low_s16(b_.neon_i16);
- int16x4x2_t result = vzip_s16(a1, b1);
- r_.neon_i16 = vcombine_s16(result.val[0], result.val[1]);
-#elif defined(SIMDE_SHUFFLE_VECTOR_)
- r_.i16 = SIMDE_SHUFFLE_VECTOR_(16, 16, a_.i16, b_.i16, 0, 8, 1, 9, 2,
- 10, 3, 11);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < ((sizeof(r_) / sizeof(r_.i16[0])) / 2); i++) {
- r_.i16[(i * 2)] = a_.i16[i];
- r_.i16[(i * 2) + 1] = b_.i16[i];
- }
-#endif
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_unpacklo_epi16(a, b) simde_mm_unpacklo_epi16(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_unpacklo_epi32(simde__m128i a, simde__m128i b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_unpacklo_epi32(a, b);
-#else
- simde__m128i_private r_, a_ = simde__m128i_to_private(a),
- b_ = simde__m128i_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- int32x2_t a1 = vget_low_s32(a_.neon_i32);
- int32x2_t b1 = vget_low_s32(b_.neon_i32);
- int32x2x2_t result = vzip_s32(a1, b1);
- r_.neon_i32 = vcombine_s32(result.val[0], result.val[1]);
-#elif defined(SIMDE_SHUFFLE_VECTOR_)
- r_.i32 = SIMDE_SHUFFLE_VECTOR_(32, 16, a_.i32, b_.i32, 0, 4, 1, 5);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < ((sizeof(r_) / sizeof(r_.i32[0])) / 2); i++) {
- r_.i32[(i * 2)] = a_.i32[i];
- r_.i32[(i * 2) + 1] = b_.i32[i];
- }
-#endif
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_unpacklo_epi32(a, b) simde_mm_unpacklo_epi32(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_unpacklo_epi64(simde__m128i a, simde__m128i b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_unpacklo_epi64(a, b);
-#else
- simde__m128i_private r_, a_ = simde__m128i_to_private(a),
- b_ = simde__m128i_to_private(b);
-
-#if defined(SIMDE_SHUFFLE_VECTOR_)
- r_.i64 = SIMDE_SHUFFLE_VECTOR_(64, 16, a_.i64, b_.i64, 0, 2);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < ((sizeof(r_) / sizeof(r_.i64[0])) / 2); i++) {
- r_.i64[(i * 2)] = a_.i64[i];
- r_.i64[(i * 2) + 1] = b_.i64[i];
- }
-#endif
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_unpacklo_epi64(a, b) simde_mm_unpacklo_epi64(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128d simde_mm_unpacklo_pd(simde__m128d a, simde__m128d b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_unpacklo_pd(a, b);
-#else
- simde__m128d_private r_, a_ = simde__m128d_to_private(a),
- b_ = simde__m128d_to_private(b);
-
-#if defined(SIMDE_SHUFFLE_VECTOR_)
- r_.f64 = SIMDE_SHUFFLE_VECTOR_(64, 16, a_.f64, b_.f64, 0, 2);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < ((sizeof(r_) / sizeof(r_.f64[0])) / 2); i++) {
- r_.f64[(i * 2)] = a_.f64[i];
- r_.f64[(i * 2) + 1] = b_.f64[i];
- }
-#endif
-
- return simde__m128d_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_unpacklo_pd(a, b) simde_mm_unpacklo_pd(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128d simde_mm_xor_pd(simde__m128d a, simde__m128d b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_xor_pd(a, b);
-#else
- simde__m128d_private r_, a_ = simde__m128d_to_private(a),
- b_ = simde__m128d_to_private(b);
-
-#if defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
- r_.i32f = a_.i32f ^ b_.i32f;
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i32f) / sizeof(r_.i32f[0])); i++) {
- r_.i32f[i] = a_.i32f[i] ^ b_.i32f[i];
- }
-#endif
-
- return simde__m128d_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_xor_pd(a, b) simde_mm_xor_pd(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_mm_xor_si128(simde__m128i a, simde__m128i b)
-{
-#if defined(SIMDE_X86_SSE2_NATIVE)
- return _mm_xor_si128(a, b);
-#else
- simde__m128i_private r_, a_ = simde__m128i_to_private(a),
- b_ = simde__m128i_to_private(b);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i32 = veorq_s32(a_.neon_i32, b_.neon_i32);
-#elif defined(SIMDE_POWER_ALTIVEC_P5_NATIVE)
- r_.altivec_i32 = vec_xor(a_.altivec_i32, b_.altivec_i32);
-#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
- r_.i32f = a_.i32f ^ b_.i32f;
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i32f) / sizeof(r_.i32f[0])); i++) {
- r_.i32f[i] = a_.i32f[i] ^ b_.i32f[i];
- }
-#endif
-
- return simde__m128i_from_private(r_);
-#endif
-}
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _mm_xor_si128(a, b) simde_mm_xor_si128(a, b)
-#endif
-
-SIMDE_FUNCTION_ATTRIBUTES
-simde__m128i simde_x_mm_not_si128(simde__m128i a)
-{
- simde__m128i_private r_, a_ = simde__m128i_to_private(a);
-
-#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
- r_.neon_i32 = vmvnq_s32(a_.neon_i32);
-#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
- r_.i32f = ~(a_.i32f);
-#else
- SIMDE_VECTORIZE
- for (size_t i = 0; i < (sizeof(r_.i32f) / sizeof(r_.i32f[0])); i++) {
- r_.i32f[i] = ~(a_.i32f[i]);
- }
-#endif
-
- return simde__m128i_from_private(r_);
-}
-
-#define SIMDE_MM_SHUFFLE2(x, y) (((x) << 1) | (y))
-#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
-#define _MM_SHUFFLE2(x, y) SIMDE_MM_SHUFFLE2(x, y)
-#endif
-
-SIMDE_END_DECLS_
-
-HEDLEY_DIAGNOSTIC_POP
-
-#endif /* !defined(SIMDE_X86_SSE2_H) */
obs-studio-26.1.0.tar.xz/libobs/util/sse2neon.h
Deleted
-#ifndef SSE2NEON_H
-#define SSE2NEON_H
-
-// This header file provides a simple API translation layer
-// between SSE intrinsics and their corresponding Arm/Aarch64 NEON versions
-//
-// This header file does not yet translate all of the SSE intrinsics.
-//
-// Contributors to this work are:
-// John W. Ratcliff <jratcliffscarab@gmail.com>
-// Brandon Rowlett <browlett@nvidia.com>
-// Ken Fast <kfast@gdeb.com>
-// Eric van Beurden <evanbeurden@nvidia.com>
-// Alexander Potylitsin <apotylitsin@nvidia.com>
-// Hasindu Gamaarachchi <hasindu2008@gmail.com>
-// Jim Huang <jserv@biilabs.io>
-// Mark Cheng <marktwtn@biilabs.io>
-// Malcolm James MacLeod <malcolm@gulden.com>
-// Devin Hussey (easyaspi314) <husseydevin@gmail.com>
-// Sebastian Pop <spop@amazon.com>
-// Developer Ecosystem Engineering <DeveloperEcosystemEngineering@apple.com>
-// Danila Kutenin <danilak@google.com>
-
-/*
- * sse2neon is freely redistributable under the MIT License.
- *
- * Permission is hereby granted, free of charge, to any person obtaining a copy
- * of this software and associated documentation files (the "Software"), to deal
- * in the Software without restriction, including without limitation the rights
- * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
- * copies of the Software, and to permit persons to whom the Software is
- * furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice shall be included in
- * all copies or substantial portions of the Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
- * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
- * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
- * SOFTWARE.
- */
-
-#if defined(__GNUC__) || defined(__clang__)
-#pragma push_macro("FORCE_INLINE")
-#pragma push_macro("ALIGN_STRUCT")
-#define FORCE_INLINE static inline __attribute__((always_inline))
-#define ALIGN_STRUCT(x) __attribute__((aligned(x)))
-#else
-#error "Macro name collisions may happen with unsupported compiler."
-#ifdef FORCE_INLINE
-#undef FORCE_INLINE
-#endif
-#define FORCE_INLINE static inline
-#ifndef ALIGN_STRUCT
-#define ALIGN_STRUCT(x) __declspec(align(x))
-#endif
-#endif
-
-#include <stdint.h>
-#include <stdlib.h>
-
-#include <arm_neon.h>
-
-/* "__has_builtin" can be used to query support for built-in functions
- * provided by gcc/clang and other compilers that support it.
- */
-#ifndef __has_builtin /* GCC prior to 10 or non-clang compilers */
-/* Compatibility with gcc <= 9 */
-#if __GNUC__ <= 9
-#define __has_builtin(x) HAS##x
-#define HAS__builtin_popcount 1
-#define HAS__builtin_popcountll 1
-#else
-#define __has_builtin(x) 0
-#endif
-#endif
-
-/**
- * MACRO for shuffle parameter for _mm_shuffle_ps().
- * Argument fp3 is a digit[0123] that represents the fp from argument "b"
- * of mm_shuffle_ps that will be placed in fp3 of result. fp2 is the same
- * for fp2 in result. fp1 is a digit[0123] that represents the fp from
- * argument "a" of mm_shuffle_ps that will be placed in fp1 of result.
- * fp0 is the same for fp0 of result.
- */
-#define _MM_SHUFFLE(fp3, fp2, fp1, fp0) \
- (((fp3) << 6) | ((fp2) << 4) | ((fp1) << 2) | ((fp0)))
-
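The macro above packs four 2-bit lane selectors into one immediate, with fp3 in the top two bits and fp0 in the bottom two. A small worked example (MY_MM_SHUFFLE mirrors the definition above; the value is worked out by hand):

#include <stdio.h>

#define MY_MM_SHUFFLE(fp3, fp2, fp1, fp0) \
	(((fp3) << 6) | ((fp2) << 4) | ((fp1) << 2) | ((fp0)))

int main(void)
{
	/* Selecting every lane from itself: 0b11100100 == 0xE4. */
	printf("0x%02X\n", (unsigned)MY_MM_SHUFFLE(3, 2, 1, 0)); /* prints 0xE4 */
	return 0;
}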
-/* indicate immediate constant argument in a given range */
-#define __constrange(a, b) const
-
-/* A few intrinsics accept traditional data types like ints or floats, but
- * most operate on data types that are specific to SSE.
- * If a vector type ends in d, it contains doubles, and if it does not have
- * a suffix, it contains floats. An integer vector type can contain any type
- * of integer, from chars to shorts to unsigned long longs.
- */
-typedef float32x2_t __m64;
-typedef float32x4_t __m128; /* 128-bit vector containing 4 floats */
-// On ARM 32-bit architecture, the float64x2_t is not supported.
-// The data type __m128d should be represented in a different way for related
-// intrinsic conversion.
-#if defined(__aarch64__)
-typedef float64x2_t __m128d; /* 128-bit vector containing 2 doubles */
-#else
-typedef float32x4_t __m128d;
-#endif
-typedef int64x1_t __m64i;
-typedef int64x2_t __m128i; /* 128-bit vector containing integers */
-
-/* type-safe casting between types */
-
-#define vreinterpretq_m128_f16(x) vreinterpretq_f32_f16(x)
-#define vreinterpretq_m128_f32(x) (x)
-#define vreinterpretq_m128_f64(x) vreinterpretq_f32_f64(x)
-
-#define vreinterpretq_m128_u8(x) vreinterpretq_f32_u8(x)
-#define vreinterpretq_m128_u16(x) vreinterpretq_f32_u16(x)
-#define vreinterpretq_m128_u32(x) vreinterpretq_f32_u32(x)
-#define vreinterpretq_m128_u64(x) vreinterpretq_f32_u64(x)
-
-#define vreinterpretq_m128_s8(x) vreinterpretq_f32_s8(x)
-#define vreinterpretq_m128_s16(x) vreinterpretq_f32_s16(x)
-#define vreinterpretq_m128_s32(x) vreinterpretq_f32_s32(x)
-#define vreinterpretq_m128_s64(x) vreinterpretq_f32_s64(x)
-
-#define vreinterpretq_f16_m128(x) vreinterpretq_f16_f32(x)
-#define vreinterpretq_f32_m128(x) (x)
-#define vreinterpretq_f64_m128(x) vreinterpretq_f64_f32(x)
-
-#define vreinterpretq_u8_m128(x) vreinterpretq_u8_f32(x)
-#define vreinterpretq_u16_m128(x) vreinterpretq_u16_f32(x)
-#define vreinterpretq_u32_m128(x) vreinterpretq_u32_f32(x)
-#define vreinterpretq_u64_m128(x) vreinterpretq_u64_f32(x)
-
-#define vreinterpretq_s8_m128(x) vreinterpretq_s8_f32(x)
-#define vreinterpretq_s16_m128(x) vreinterpretq_s16_f32(x)
-#define vreinterpretq_s32_m128(x) vreinterpretq_s32_f32(x)
-#define vreinterpretq_s64_m128(x) vreinterpretq_s64_f32(x)
-
-#define vreinterpretq_m128i_s8(x) vreinterpretq_s64_s8(x)
-#define vreinterpretq_m128i_s16(x) vreinterpretq_s64_s16(x)
-#define vreinterpretq_m128i_s32(x) vreinterpretq_s64_s32(x)
-#define vreinterpretq_m128i_s64(x) (x)
-
-#define vreinterpretq_m128i_u8(x) vreinterpretq_s64_u8(x)
-#define vreinterpretq_m128i_u16(x) vreinterpretq_s64_u16(x)
-#define vreinterpretq_m128i_u32(x) vreinterpretq_s64_u32(x)
-#define vreinterpretq_m128i_u64(x) vreinterpretq_s64_u64(x)
-
-#define vreinterpretq_s8_m128i(x) vreinterpretq_s8_s64(x)
-#define vreinterpretq_s16_m128i(x) vreinterpretq_s16_s64(x)
-#define vreinterpretq_s32_m128i(x) vreinterpretq_s32_s64(x)
-#define vreinterpretq_s64_m128i(x) (x)
-
-#define vreinterpretq_u8_m128i(x) vreinterpretq_u8_s64(x)
-#define vreinterpretq_u16_m128i(x) vreinterpretq_u16_s64(x)
-#define vreinterpretq_u32_m128i(x) vreinterpretq_u32_s64(x)
-#define vreinterpretq_u64_m128i(x) vreinterpretq_u64_s64(x)
-
-#define vreinterpret_m64i_s8(x) vreinterpret_s64_s8(x)
-#define vreinterpret_m64i_s16(x) vreinterpret_s64_s16(x)
-#define vreinterpret_m64i_s32(x) vreinterpret_s64_s32(x)
-#define vreinterpret_m64i_s64(x) (x)
-
-#define vreinterpret_m64i_u8(x) vreinterpret_s64_u8(x)
-#define vreinterpret_m64i_u16(x) vreinterpret_s64_u16(x)
-#define vreinterpret_m64i_u32(x) vreinterpret_s64_u32(x)
-#define vreinterpret_m64i_u64(x) vreinterpret_s64_u64(x)
-
-#define vreinterpret_u8_m64i(x) vreinterpret_u8_s64(x)
-#define vreinterpret_u16_m64i(x) vreinterpret_u16_s64(x)
-#define vreinterpret_u32_m64i(x) vreinterpret_u32_s64(x)
-#define vreinterpret_u64_m64i(x) vreinterpret_u64_s64(x)
-
-#define vreinterpret_s8_m64i(x) vreinterpret_s8_s64(x)
-#define vreinterpret_s16_m64i(x) vreinterpret_s16_s64(x)
-#define vreinterpret_s32_m64i(x) vreinterpret_s32_s64(x)
-#define vreinterpret_s64_m64i(x) (x)
-
-// A struct is defined in this header file called 'SIMDVec' which can be used
-// by applications which attempt to access the contents of an __m128 struct
-// directly. It is important to note that accessing the __m128 struct directly
-// is considered bad coding practice by Microsoft: @see:
-// https://msdn.microsoft.com/en-us/library/ayeb3ayc.aspx
-//
-// However, some legacy source code may try to access the contents of an __m128
-// struct directly so the developer can use the SIMDVec as an alias for it. Any
-// casting must be done manually by the developer, as you cannot cast or
-// otherwise alias the base NEON data type for intrinsic operations.
-//
-// union intended to allow direct access to an __m128 variable using the names
-// that the MSVC compiler provides. This union should really only be used when
-// trying to access the members of the vector as integer values. GCC/clang
-// allow native access to the float members through a simple array access
-// operator (in C since 4.6, in C++ since 4.8).
-//
-// Ideally, direct accesses to SIMD vectors should not be used since they can
-// cause a performance hit. If it really is needed, however, the original __m128
-// variable can be aliased with a pointer to this union and used to access
-// individual components. The use of this union should be hidden behind a macro
-// that is used throughout the codebase to access the members instead of always
-// declaring this type of variable.
-typedef union ALIGN_STRUCT(16) SIMDVec {
- float m128_f32[4]; // as floats - DON'T USE. Added for convenience.
- int8_t m128_i8[16]; // as signed 8-bit integers.
- int16_t m128_i16[8]; // as signed 16-bit integers.
- int32_t m128_i32[4]; // as signed 32-bit integers.
- int64_t m128_i64[2]; // as signed 64-bit integers.
- uint8_t m128_u8[16]; // as unsigned 8-bit integers.
- uint16_t m128_u16[8]; // as unsigned 16-bit integers.
- uint32_t m128_u32[4]; // as unsigned 32-bit integers.
- uint64_t m128_u64[2]; // as unsigned 64-bit integers.
-} SIMDVec;
-
-// casting using SIMDVec
-#define vreinterpretq_nth_u64_m128i(x, n) (((SIMDVec *)&x)->m128_u64[n])
-#define vreinterpretq_nth_u32_m128i(x, n) (((SIMDVec *)&x)->m128_u32[n])
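As a hedged sketch of what the casting macros above enable: a 128-bit value stored to memory can be inspected lane by lane through a union. The union and names below are illustrative only, not taken from the header:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

typedef union {
	int32_t i32[4];
	uint64_t u64[2];
} lanes_t; /* stand-in for SIMDVec */

int main(void)
{
	int32_t reg[4] = {1, 2, 3, 4}; /* pretend this is a __m128i stored to memory */
	lanes_t v;
	memcpy(&v, reg, sizeof(v)); /* memcpy sidesteps the aliasing pitfalls of pointer casts */
	printf("0x%016llX\n", (unsigned long long)v.u64[0]); /* 0x0000000200000001 on little-endian */
	return 0;
}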
-
-/* Backwards compatibility for compilers with lack of specific type support */
-
-// Older gcc does not define vld1q_u8_x4 type
-#if defined(__GNUC__) && !defined(__clang__)
-#if __GNUC__ <= 9
-FORCE_INLINE uint8x16x4_t vld1q_u8_x4(const uint8_t *p)
-{
- uint8x16x4_t ret;
- ret.val[0] = vld1q_u8(p + 0);
- ret.val[1] = vld1q_u8(p + 16);
- ret.val[2] = vld1q_u8(p + 32);
- ret.val[3] = vld1q_u8(p + 48);
- return ret;
-}
-#endif
-#endif
-
-/* Function Naming Conventions
- * The naming convention of SSE intrinsics is straightforward. A generic SSE
- * intrinsic function is given as follows:
- * _mm_<name>_<data_type>
- *
- * The parts of this format are given as follows:
- * 1. <name> describes the operation performed by the intrinsic
- * 2. <data_type> identifies the data type of the function's primary arguments
- *
- * This last part, <data_type>, is a little complicated. It identifies the
- * content of the input values, and can be set to any of the following values:
- * + ps - vectors contain floats (ps stands for packed single-precision)
- * + pd - vectors contain doubles (pd stands for packed double-precision)
- * + epi8/epi16/epi32/epi64 - vectors contain 8-bit/16-bit/32-bit/64-bit
- * signed integers
- * + epu8/epu16/epu32/epu64 - vectors contain 8-bit/16-bit/32-bit/64-bit
- * unsigned integers
- * + si128 - unspecified 128-bit vector or 256-bit vector
- * + m128/m128i/m128d - identifies input vector types when they are different
- * than the type of the returned vector
- *
- * For example, _mm_setzero_ps. The _mm implies that the function returns
- * a 128-bit vector. The _ps at the end implies that the argument vectors
- * contain floats.
- *
- * A complete example: Byte Shuffle - pshufb (_mm_shuffle_epi8)
- * // Set packed 16-bit integers. 128 bits, 8 short, per 16 bits
- * __m128i v_in = _mm_setr_epi16(1, 2, 3, 4, 5, 6, 7, 8);
- * // Set packed 8-bit integers
- * // 128 bits, 16 chars, per 8 bits
- * __m128i v_perm = _mm_setr_epi8(1, 0, 2, 3, 8, 9, 10, 11,
- * 4, 5, 12, 13, 6, 7, 14, 15);
- * // Shuffle packed 8-bit integers
- * __m128i v_out = _mm_shuffle_epi8(v_in, v_perm); // pshufb
- *
- * Data (Number, Binary, Byte Index):
- +------+------+------+------+------+------+------+------+
- | 1 | 2 | 3 | 4 | Number
- +------+------+------+------+------+------+------+------+
- | 0000 | 0001 | 0000 | 0010 | 0000 | 0011 | 0000 | 0100 | Binary
- +------+------+------+------+------+------+------+------+
- | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | Index
- +------+------+------+------+------+------+------+------+
-
- +------+------+------+------+------+------+------+------+
- | 5 | 6 | 7 | 8 | Number
- +------+------+------+------+------+------+------+------+
- | 0000 | 0101 | 0000 | 0110 | 0000 | 0111 | 0000 | 1000 | Binary
- +------+------+------+------+------+------+------+------+
- | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | Index
- +------+------+------+------+------+------+------+------+
- * Index (Byte Index):
- +------+------+------+------+------+------+------+------+
- | 1 | 0 | 2 | 3 | 8 | 9 | 10 | 11 |
- +------+------+------+------+------+------+------+------+
-
- +------+------+------+------+------+------+------+------+
- | 4 | 5 | 12 | 13 | 6 | 7 | 14 | 15 |
- +------+------+------+------+------+------+------+------+
- * Result:
- +------+------+------+------+------+------+------+------+
- | 1 | 0 | 2 | 3 | 8 | 9 | 10 | 11 | Index
- +------+------+------+------+------+------+------+------+
- | 0001 | 0000 | 0000 | 0010 | 0000 | 0101 | 0000 | 0110 | Binary
- +------+------+------+------+------+------+------+------+
- | 256 | 2 | 5 | 6 | Number
- +------+------+------+------+------+------+------+------+
-
- +------+------+------+------+------+------+------+------+
- | 4 | 5 | 12 | 13 | 6 | 7 | 14 | 15 | Index
- +------+------+------+------+------+------+------+------+
- | 0000 | 0011 | 0000 | 0111 | 0000 | 0100 | 0000 | 1000 | Binary
- +------+------+------+------+------+------+------+------+
- | 3 | 7 | 4 | 8 | Number
- +------+------+------+------+------+------+------+------+
- */
-
-/* Set/get methods */
-
-/* Constants for use with _mm_prefetch. */
-enum _mm_hint {
- _MM_HINT_NTA = 0, /* load data to L1 and L2 cache, mark it as NTA */
- _MM_HINT_T0 = 1, /* load data to L1 and L2 cache */
- _MM_HINT_T1 = 2, /* load data to L2 cache only */
- _MM_HINT_T2 = 3, /* load data to L2 cache only, mark it as NTA */
- _MM_HINT_ENTA = 4, /* exclusive version of _MM_HINT_NTA */
- _MM_HINT_ET0 = 5, /* exclusive version of _MM_HINT_T0 */
- _MM_HINT_ET1 = 6, /* exclusive version of _MM_HINT_T1 */
- _MM_HINT_ET2 = 7 /* exclusive version of _MM_HINT_T2 */
-};
-
-// Loads one cache line of data from address p to a location closer to the
-// processor. https://msdn.microsoft.com/en-us/library/84szxsww(v=vs.100).aspx
-FORCE_INLINE void _mm_prefetch(const void *p, int i)
-{
- (void)i;
- __builtin_prefetch(p);
-}
-
-// Extracts the lower-order floating-point value from the parameter:
-// https://msdn.microsoft.com/en-us/library/bb514059%28v=vs.120%29.aspx?f=255&MSPPError=-2147217396
-FORCE_INLINE float _mm_cvtss_f32(__m128 a)
-{
- return vgetq_lane_f32(vreinterpretq_f32_m128(a), 0);
-}
-
-// Sets the 128-bit value to zero
-// https://msdn.microsoft.com/en-us/library/vstudio/ys7dw0kh(v=vs.100).aspx
-FORCE_INLINE __m128i _mm_setzero_si128(void)
-{
- return vreinterpretq_m128i_s32(vdupq_n_s32(0));
-}
-
-// Clears the four single-precision, floating-point values.
-// https://msdn.microsoft.com/en-us/library/vstudio/tk1t2tbz(v=vs.100).aspx
-FORCE_INLINE __m128 _mm_setzero_ps(void)
-{
- return vreinterpretq_m128_f32(vdupq_n_f32(0));
-}
-
-// Sets the four single-precision, floating-point values to w.
-//
-// r0 := r1 := r2 := r3 := w
-//
-// https://msdn.microsoft.com/en-us/library/vstudio/2x1se8ha(v=vs.100).aspx
-FORCE_INLINE __m128 _mm_set1_ps(float _w)
-{
- return vreinterpretq_m128_f32(vdupq_n_f32(_w));
-}
-
-// Sets the four single-precision, floating-point values to w.
-// https://msdn.microsoft.com/en-us/library/vstudio/2x1se8ha(v=vs.100).aspx
-FORCE_INLINE __m128 _mm_set_ps1(float _w)
-{
- return vreinterpretq_m128_f32(vdupq_n_f32(_w));
-}
-
-// Sets the four single-precision, floating-point values to the four inputs.
-// https://msdn.microsoft.com/en-us/library/vstudio/afh0zf75(v=vs.100).aspx
-FORCE_INLINE __m128 _mm_set_ps(float w, float z, float y, float x)
-{
- float ALIGN_STRUCT(16) data[4] = {x, y, z, w};
- return vreinterpretq_m128_f32(vld1q_f32(data));
-}
-
-// Copy single-precision (32-bit) floating-point element a to the lower element
-// of dst, and zero the upper 3 elements.
-// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_set_ss&expand=4901,4895,4901
-FORCE_INLINE __m128 _mm_set_ss(float a)
-{
- float ALIGN_STRUCT(16) data[4] = {a, 0, 0, 0};
- return vreinterpretq_m128_f32(vld1q_f32(data));
-}
-
-// Sets the four single-precision, floating-point values to the four inputs in
-// reverse order.
-// https://msdn.microsoft.com/en-us/library/vstudio/d2172ct3(v=vs.100).aspx
-FORCE_INLINE __m128 _mm_setr_ps(float w, float z, float y, float x)
-{
- float ALIGN_STRUCT(16) data[4] = {w, z, y, x};
- return vreinterpretq_m128_f32(vld1q_f32(data));
-}
-
-// Sets the 8 signed 16-bit integer values in reverse order.
-//
-// Return Value
-// r0 := w0
-// r1 := w1
-// ...
-// r7 := w7
-FORCE_INLINE __m128i _mm_setr_epi16(short w0, short w1, short w2, short w3,
- short w4, short w5, short w6, short w7)
-{
- int16_t ALIGN_STRUCT(16) data[8] = {w0, w1, w2, w3, w4, w5, w6, w7};
- return vreinterpretq_m128i_s16(vld1q_s16((int16_t *)data));
-}
-
-// Sets the 4 signed 32-bit integer values in reverse order
-// https://technet.microsoft.com/en-us/library/security/27yb3ee5(v=vs.90).aspx
-FORCE_INLINE __m128i _mm_setr_epi32(int i3, int i2, int i1, int i0)
-{
- int32_t ALIGN_STRUCT(16) data[4] = {i3, i2, i1, i0};
- return vreinterpretq_m128i_s32(vld1q_s32(data));
-}
-
-// Sets the 16 signed 8-bit integer values to b.
-//
-// r0 := b
-// r1 := b
-// ...
-// r15 := b
-//
-// https://msdn.microsoft.com/en-us/library/6e14xhyf(v=vs.100).aspx
-FORCE_INLINE __m128i _mm_set1_epi8(signed char w)
-{
- return vreinterpretq_m128i_s8(vdupq_n_s8(w));
-}
-
-// Sets the 8 signed 16-bit integer values to w.
-//
-// r0 := w
-// r1 := w
-// ...
-// r7 := w
-//
-// https://msdn.microsoft.com/en-us/library/k0ya3x0e(v=vs.90).aspx
-FORCE_INLINE __m128i _mm_set1_epi16(short w)
-{
- return vreinterpretq_m128i_s16(vdupq_n_s16(w));
-}
-
-// Sets the 16 signed 8-bit integer values.
-// https://msdn.microsoft.com/en-us/library/x0cx8zd3(v=vs.90).aspx
-FORCE_INLINE __m128i
-_mm_set_epi8(signed char b15, signed char b14, signed char b13, signed char b12,
- signed char b11, signed char b10, signed char b9, signed char b8,
- signed char b7, signed char b6, signed char b5, signed char b4,
- signed char b3, signed char b2, signed char b1, signed char b0)
-{
- int8_t ALIGN_STRUCT(16)
- data[16] = {(int8_t)b0, (int8_t)b1, (int8_t)b2, (int8_t)b3,
- (int8_t)b4, (int8_t)b5, (int8_t)b6, (int8_t)b7,
- (int8_t)b8, (int8_t)b9, (int8_t)b10, (int8_t)b11,
- (int8_t)b12, (int8_t)b13, (int8_t)b14, (int8_t)b15};
- return (__m128i)vld1q_s8(data);
-}
-
-// Sets the 8 signed 16-bit integer values.
-// https://msdn.microsoft.com/en-au/library/3e0fek84(v=vs.90).aspx
-FORCE_INLINE __m128i _mm_set_epi16(short i7, short i6, short i5, short i4,
- short i3, short i2, short i1, short i0)
-{
- int16_t ALIGN_STRUCT(16) data[8] = {i0, i1, i2, i3, i4, i5, i6, i7};
- return vreinterpretq_m128i_s16(vld1q_s16(data));
-}
-
-// Sets the 16 signed 8-bit integer values in reverse order.
-// https://msdn.microsoft.com/en-us/library/2khb9c7k(v=vs.90).aspx
-FORCE_INLINE __m128i _mm_setr_epi8(
- signed char b0, signed char b1, signed char b2, signed char b3,
- signed char b4, signed char b5, signed char b6, signed char b7,
- signed char b8, signed char b9, signed char b10, signed char b11,
- signed char b12, signed char b13, signed char b14, signed char b15)
-{
- int8_t ALIGN_STRUCT(16)
- data[16] = {(int8_t)b0, (int8_t)b1, (int8_t)b2, (int8_t)b3,
- (int8_t)b4, (int8_t)b5, (int8_t)b6, (int8_t)b7,
- (int8_t)b8, (int8_t)b9, (int8_t)b10, (int8_t)b11,
- (int8_t)b12, (int8_t)b13, (int8_t)b14, (int8_t)b15};
- return (__m128i)vld1q_s8(data);
-}
-
-// Sets the 4 signed 32-bit integer values to i.
-//
-// r0 := i
-// r1 := i
-// r2 := i
-// r3 := i
-//
-// https://msdn.microsoft.com/en-us/library/vstudio/h4xscxat(v=vs.100).aspx
-FORCE_INLINE __m128i _mm_set1_epi32(int _i)
-{
- return vreinterpretq_m128i_s32(vdupq_n_s32(_i));
-}
-
-// Sets the 2 signed 64-bit integer values to i.
-// https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2010/whtfzhzk(v=vs.100)
-FORCE_INLINE __m128i _mm_set1_epi64(int64_t _i)
-{
- return vreinterpretq_m128i_s64(vdupq_n_s64(_i));
-}
-
-// Sets the 2 signed 64-bit integer values to i.
-// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_set1_epi64x&expand=4961
-FORCE_INLINE __m128i _mm_set1_epi64x(int64_t _i)
-{
- return vreinterpretq_m128i_s64(vdupq_n_s64(_i));
-}
-
-// Sets the 4 signed 32-bit integer values.
-// https://msdn.microsoft.com/en-us/library/vstudio/019beekt(v=vs.100).aspx
-FORCE_INLINE __m128i _mm_set_epi32(int i3, int i2, int i1, int i0)
-{
- int32_t ALIGN_STRUCT(16) data[4] = {i0, i1, i2, i3};
- return vreinterpretq_m128i_s32(vld1q_s32(data));
-}
-
-// Returns the __m128i structure with its two 64-bit integer values
-// initialized to the values of the two 64-bit integers passed in.
-// https://msdn.microsoft.com/en-us/library/dk2sdw0h(v=vs.120).aspx
-FORCE_INLINE __m128i _mm_set_epi64x(int64_t i1, int64_t i2)
-{
- int64_t ALIGN_STRUCT(16) data[2] = {i2, i1};
- return vreinterpretq_m128i_s64(vld1q_s64(data));
-}
-
-// Stores four single-precision, floating-point values.
-// https://msdn.microsoft.com/en-us/library/vstudio/s3h4ay6y(v=vs.100).aspx
-FORCE_INLINE void _mm_store_ps(float *p, __m128 a)
-{
- vst1q_f32(p, vreinterpretq_f32_m128(a));
-}
-
-// Stores four single-precision, floating-point values.
-// https://msdn.microsoft.com/en-us/library/44e30x22(v=vs.100).aspx
-FORCE_INLINE void _mm_storeu_ps(float *p, __m128 a)
-{
- vst1q_f32(p, vreinterpretq_f32_m128(a));
-}
-
-// Stores four 32-bit integer values (as a __m128i value) at the address p.
-// https://msdn.microsoft.com/en-us/library/vstudio/edk11s13(v=vs.100).aspx
-FORCE_INLINE void _mm_store_si128(__m128i *p, __m128i a)
-{
- vst1q_s32((int32_t *)p, vreinterpretq_s32_m128i(a));
-}
-
-// Stores four 32-bit integer values (as a __m128i value) at the address p.
-// https://msdn.microsoft.com/en-us/library/vstudio/edk11s13(v=vs.100).aspx
-FORCE_INLINE void _mm_storeu_si128(__m128i *p, __m128i a)
-{
- vst1q_s32((int32_t *)p, vreinterpretq_s32_m128i(a));
-}
-
-// Stores the lower single-precision, floating-point value.
-// https://msdn.microsoft.com/en-us/library/tzz10fbx(v=vs.100).aspx
-FORCE_INLINE void _mm_store_ss(float *p, __m128 a)
-{
- vst1q_lane_f32(p, vreinterpretq_f32_m128(a), 0);
-}
-
-// Reads the lower 64 bits of b and stores them into the lower 64 bits of a.
-// https://msdn.microsoft.com/en-us/library/hhwf428f%28v=vs.90%29.aspx
-FORCE_INLINE void _mm_storel_epi64(__m128i *a, __m128i b)
-{
- uint64x1_t hi = vget_high_u64(vreinterpretq_u64_m128i(*a));
- uint64x1_t lo = vget_low_u64(vreinterpretq_u64_m128i(b));
- *a = vreinterpretq_m128i_u64(vcombine_u64(lo, hi));
-}
-
-// Stores the lower two single-precision floating point values of a to the
-// address p.
-//
-// *p0 := a0
-// *p1 := a1
-//
-// https://msdn.microsoft.com/en-us/library/h54t98ks(v=vs.90).aspx
-FORCE_INLINE void _mm_storel_pi(__m64 *p, __m128 a)
-{
- *p = vget_low_f32(a);
-}
-
-// Stores the upper two single-precision, floating-point values of a to the
-// address p.
-//
-// *p0 := a2
-// *p1 := a3
-//
-// https://msdn.microsoft.com/en-us/library/a7525fs8(v%3dvs.90).aspx
-FORCE_INLINE void _mm_storeh_pi(__m64 *p, __m128 a)
-{
- *p = vget_high_f32(a);
-}
-
-// Loads a single single-precision, floating-point value, copying it into all
-// four words
-// https://msdn.microsoft.com/en-us/library/vstudio/5cdkf716(v=vs.100).aspx
-FORCE_INLINE __m128 _mm_load1_ps(const float *p)
-{
- return vreinterpretq_m128_f32(vld1q_dup_f32(p));
-}
-#define _mm_load_ps1 _mm_load1_ps
-
-// Sets the lower two single-precision, floating-point values with 64
-// bits of data loaded from the address p; the upper two values are passed
-// through from a.
-//
-// Return Value
-// r0 := *p0
-// r1 := *p1
-// r2 := a2
-// r3 := a3
-//
-// https://msdn.microsoft.com/en-us/library/s57cyak2(v=vs.100).aspx
-FORCE_INLINE __m128 _mm_loadl_pi(__m128 a, __m64 const *p)
-{
- return vreinterpretq_m128_f32(
- vcombine_f32(vld1_f32((const float32_t *)p), vget_high_f32(a)));
-}
-
-// Sets the upper two single-precision, floating-point values with 64
-// bits of data loaded from the address p; the lower two values are passed
-// through from a.
-//
-// r0 := a0
-// r1 := a1
-// r2 := *p0
-// r3 := *p1
-//
-// https://msdn.microsoft.com/en-us/library/w92wta0x(v%3dvs.100).aspx
-FORCE_INLINE __m128 _mm_loadh_pi(__m128 a, __m64 const *p)
-{
- return vreinterpretq_m128_f32(
- vcombine_f32(vget_low_f32(a), vld1_f32((const float32_t *)p)));
-}
-
-// Loads four single-precision, floating-point values.
-// https://msdn.microsoft.com/en-us/library/vstudio/zzd50xxt(v=vs.100).aspx
-FORCE_INLINE __m128 _mm_load_ps(const float *p)
-{
- return vreinterpretq_m128_f32(vld1q_f32(p));
-}
-
-// Loads four single-precision, floating-point values.
-// https://msdn.microsoft.com/en-us/library/x1b16s7z%28v=vs.90%29.aspx
-FORCE_INLINE __m128 _mm_loadu_ps(const float *p)
-{
- // for neon, alignment doesn't matter, so _mm_load_ps and _mm_loadu_ps are
- // equivalent for neon
- return vreinterpretq_m128_f32(vld1q_f32(p));
-}
-
-// Loads a double-precision, floating-point value.
-// The upper double-precision, floating-point value is set to zero. The address p does
-// not need to be 16-byte aligned.
-// https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2010/574w9fdd(v%3dvs.100)
-FORCE_INLINE __m128d _mm_load_sd(const double *p)
-{
-#if defined(__aarch64__)
- return vsetq_lane_f64(*p, vdupq_n_f64(0), 0);
-#else
- const float *fp = (const float *)p;
- float ALIGN_STRUCT(16) data[4] = {fp[0], fp[1], 0, 0};
- return vld1q_f32(data);
-#endif
-}
-
-// Loads a single-precision, floating-point value into the low word and
-// clears the upper three words.
-// https://msdn.microsoft.com/en-us/library/548bb9h4%28v=vs.90%29.aspx
-FORCE_INLINE __m128 _mm_load_ss(const float *p)
-{
- return vreinterpretq_m128_f32(vsetq_lane_f32(*p, vdupq_n_f32(0), 0));
-}
-
-FORCE_INLINE __m128i _mm_loadl_epi64(__m128i const *p)
-{
- /* Load the lower 64 bits of the value pointed to by p into the
- * lower 64 bits of the result, zeroing the upper 64 bits of the result.
- */
- return vreinterpretq_m128i_s32(
- vcombine_s32(vld1_s32((int32_t const *)p), vcreate_s32(0)));
-}
-
-/* Logic/Binary operations */
-
-// Compares for inequality.
-// https://msdn.microsoft.com/en-us/library/sf44thbx(v=vs.100).aspx
-FORCE_INLINE __m128 _mm_cmpneq_ps(__m128 a, __m128 b)
-{
- return vreinterpretq_m128_u32(vmvnq_u32(vceqq_f32(
- vreinterpretq_f32_m128(a), vreinterpretq_f32_m128(b))));
-}
-
-// Computes the bitwise AND-NOT of the four single-precision, floating-point
-// values of a and b.
-//
-// r0 := ~a0 & b0
-// r1 := ~a1 & b1
-// r2 := ~a2 & b2
-// r3 := ~a3 & b3
-//
-// https://msdn.microsoft.com/en-us/library/vstudio/68h7wd02(v=vs.100).aspx
-FORCE_INLINE __m128 _mm_andnot_ps(__m128 a, __m128 b)
-{
- return vreinterpretq_m128_s32(
- vbicq_s32(vreinterpretq_s32_m128(b),
- vreinterpretq_s32_m128(a))); // *NOTE* argument swap
-}
-
-// Computes the bitwise AND of the 128-bit value in b and the bitwise NOT of the
-// 128-bit value in a.
-//
-// r := (~a) & b
-//
-// https://msdn.microsoft.com/en-us/library/vstudio/1beaceh8(v=vs.100).aspx
-FORCE_INLINE __m128i _mm_andnot_si128(__m128i a, __m128i b)
-{
- return vreinterpretq_m128i_s32(
- vbicq_s32(vreinterpretq_s32_m128i(b),
- vreinterpretq_s32_m128i(a))); // *NOTE* argument swap
-}
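The "argument swap" notes above exist because vbicq computes (first & ~second), while _mm_andnot_* is defined as (~a) & b; passing (b, a) lines the two up. The scalar equivalent, for reference (andnot_u32 is an illustrative helper):

#include <stdint.h>
#include <stdio.h>

static uint32_t andnot_u32(uint32_t a, uint32_t b)
{
	return ~a & b; /* same value as bic(b, a) = b & ~a */
}

int main(void)
{
	printf("0x%08X\n", andnot_u32(0x0000FFFFu, 0x12345678u)); /* 0x12340000 */
	return 0;
}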
-
-// Computes the bitwise AND of the 128-bit value in a and the 128-bit value in
-// b.
-//
-// r := a & b
-//
-// https://msdn.microsoft.com/en-us/library/vstudio/6d1txsa8(v=vs.100).aspx
-FORCE_INLINE __m128i _mm_and_si128(__m128i a, __m128i b)
-{
- return vreinterpretq_m128i_s32(vandq_s32(vreinterpretq_s32_m128i(a),
- vreinterpretq_s32_m128i(b)));
-}
-
-// Computes the bitwise AND of the four single-precision, floating-point values
-// of a and b.
-//
-// r0 := a0 & b0
-// r1 := a1 & b1
-// r2 := a2 & b2
-// r3 := a3 & b3
-//
-// https://msdn.microsoft.com/en-us/library/vstudio/73ck1xc5(v=vs.100).aspx
-FORCE_INLINE __m128 _mm_and_ps(__m128 a, __m128 b)
-{
- return vreinterpretq_m128_s32(vandq_s32(vreinterpretq_s32_m128(a),
- vreinterpretq_s32_m128(b)));
-}
-
-// Computes the bitwise OR of the four single-precision, floating-point values
-// of a and b.
-// https://msdn.microsoft.com/en-us/library/vstudio/7ctdsyy0(v=vs.100).aspx
-FORCE_INLINE __m128 _mm_or_ps(__m128 a, __m128 b)
-{
- return vreinterpretq_m128_s32(vorrq_s32(vreinterpretq_s32_m128(a),
- vreinterpretq_s32_m128(b)));
-}
-
-// Computes bitwise EXOR (exclusive-or) of the four single-precision,
-// floating-point values of a and b.
-// https://msdn.microsoft.com/en-us/library/ss6k3wk8(v=vs.100).aspx
-FORCE_INLINE __m128 _mm_xor_ps(__m128 a, __m128 b)
-{
- return vreinterpretq_m128_s32(veorq_s32(vreinterpretq_s32_m128(a),
- vreinterpretq_s32_m128(b)));
-}
-
-// Computes the bitwise OR of the 128-bit value in a and the 128-bit value in b.
-//
-// r := a | b
-//
-// https://msdn.microsoft.com/en-us/library/vstudio/ew8ty0db(v=vs.100).aspx
-FORCE_INLINE __m128i _mm_or_si128(__m128i a, __m128i b)
-{
- return vreinterpretq_m128i_s32(vorrq_s32(vreinterpretq_s32_m128i(a),
- vreinterpretq_s32_m128i(b)));
-}
-
-// Computes the bitwise XOR of the 128-bit value in a and the 128-bit value in
-// b. https://msdn.microsoft.com/en-us/library/fzt08www(v=vs.100).aspx
-FORCE_INLINE __m128i _mm_xor_si128(__m128i a, __m128i b)
-{
- return vreinterpretq_m128i_s32(veorq_s32(vreinterpretq_s32_m128i(a),
- vreinterpretq_s32_m128i(b)));
-}
-
-// Moves the upper two values of B into the lower two values of A.
-//
-// r3 := a3
-// r2 := a2
-// r1 := b3
-// r0 := b2
-FORCE_INLINE __m128 _mm_movehl_ps(__m128 __A, __m128 __B)
-{
- float32x2_t a32 = vget_high_f32(vreinterpretq_f32_m128(__A));
- float32x2_t b32 = vget_high_f32(vreinterpretq_f32_m128(__B));
- return vreinterpretq_m128_f32(vcombine_f32(b32, a32));
-}
-
-// Moves the lower two values of B into the upper two values of A.
-//
-// r3 := b1
-// r2 := b0
-// r1 := a1
-// r0 := a0
-FORCE_INLINE __m128 _mm_movelh_ps(__m128 __A, __m128 __B)
-{
- float32x2_t a10 = vget_low_f32(vreinterpretq_f32_m128(__A));
- float32x2_t b10 = vget_low_f32(vreinterpretq_f32_m128(__B));
- return vreinterpretq_m128_f32(vcombine_f32(a10, b10));
-}
-
-FORCE_INLINE __m128i _mm_abs_epi32(__m128i a)
-{
- return vreinterpretq_m128i_s32(vabsq_s32(vreinterpretq_s32_m128i(a)));
-}
-
-FORCE_INLINE __m128i _mm_abs_epi16(__m128i a)
-{
- return vreinterpretq_m128i_s16(vabsq_s16(vreinterpretq_s16_m128i(a)));
-}
-
-FORCE_INLINE __m128i _mm_abs_epi8(__m128i a)
-{
- return vreinterpretq_m128i_s8(vabsq_s8(vreinterpretq_s8_m128i(a)));
-}
-
-// Takes the upper 64 bits of a and places it in the low end of the result
-// Takes the lower 64 bits of b and places it into the high end of the result.
-FORCE_INLINE __m128 _mm_shuffle_ps_1032(__m128 a, __m128 b)
-{
- float32x2_t a32 = vget_high_f32(vreinterpretq_f32_m128(a));
- float32x2_t b10 = vget_low_f32(vreinterpretq_f32_m128(b));
- return vreinterpretq_m128_f32(vcombine_f32(a32, b10));
-}
-
-// Takes the lower two 32-bit values from a, swaps them and places them in the
-// low end of the result; takes the upper two 32-bit values from b, swaps them
-// and places them in the high end of the result.
-FORCE_INLINE __m128 _mm_shuffle_ps_2301(__m128 a, __m128 b)
-{
- float32x2_t a01 = vrev64_f32(vget_low_f32(vreinterpretq_f32_m128(a)));
- float32x2_t b23 = vrev64_f32(vget_high_f32(vreinterpretq_f32_m128(b)));
- return vreinterpretq_m128_f32(vcombine_f32(a01, b23));
-}
-
-FORCE_INLINE __m128 _mm_shuffle_ps_0321(__m128 a, __m128 b)
-{
- float32x2_t a21 = vget_high_f32(vextq_f32(
- vreinterpretq_f32_m128(a), vreinterpretq_f32_m128(a), 3));
- float32x2_t b03 = vget_low_f32(vextq_f32(vreinterpretq_f32_m128(b),
- vreinterpretq_f32_m128(b), 3));
- return vreinterpretq_m128_f32(vcombine_f32(a21, b03));
-}
-
-FORCE_INLINE __m128 _mm_shuffle_ps_2103(__m128 a, __m128 b)
-{
- float32x2_t a03 = vget_low_f32(vextq_f32(vreinterpretq_f32_m128(a),
- vreinterpretq_f32_m128(a), 3));
- float32x2_t b21 = vget_high_f32(vextq_f32(
- vreinterpretq_f32_m128(b), vreinterpretq_f32_m128(b), 3));
- return vreinterpretq_m128_f32(vcombine_f32(a03, b21));
-}
-
-FORCE_INLINE __m128 _mm_shuffle_ps_1010(__m128 a, __m128 b)
-{
- float32x2_t a10 = vget_low_f32(vreinterpretq_f32_m128(a));
- float32x2_t b10 = vget_low_f32(vreinterpretq_f32_m128(b));
- return vreinterpretq_m128_f32(vcombine_f32(a10, b10));
-}
-
-FORCE_INLINE __m128 _mm_shuffle_ps_1001(__m128 a, __m128 b)
-{
- float32x2_t a01 = vrev64_f32(vget_low_f32(vreinterpretq_f32_m128(a)));
- float32x2_t b10 = vget_low_f32(vreinterpretq_f32_m128(b));
- return vreinterpretq_m128_f32(vcombine_f32(a01, b10));
-}
-
-FORCE_INLINE __m128 _mm_shuffle_ps_0101(__m128 a, __m128 b)
-{
- float32x2_t a01 = vrev64_f32(vget_low_f32(vreinterpretq_f32_m128(a)));
- float32x2_t b01 = vrev64_f32(vget_low_f32(vreinterpretq_f32_m128(b)));
- return vreinterpretq_m128_f32(vcombine_f32(a01, b01));
-}
-
-// Keeps the low 64 bits of a in the low half and puts the high 64 bits of b in
-// the high half.
-FORCE_INLINE __m128 _mm_shuffle_ps_3210(__m128 a, __m128 b)
-{
- float32x2_t a10 = vget_low_f32(vreinterpretq_f32_m128(a));
- float32x2_t b32 = vget_high_f32(vreinterpretq_f32_m128(b));
- return vreinterpretq_m128_f32(vcombine_f32(a10, b32));
-}
-
-FORCE_INLINE __m128 _mm_shuffle_ps_0011(__m128 a, __m128 b)
-{
- float32x2_t a11 =
- vdup_lane_f32(vget_low_f32(vreinterpretq_f32_m128(a)), 1);
- float32x2_t b00 =
- vdup_lane_f32(vget_low_f32(vreinterpretq_f32_m128(b)), 0);
- return vreinterpretq_m128_f32(vcombine_f32(a11, b00));
-}
-
-FORCE_INLINE __m128 _mm_shuffle_ps_0022(__m128 a, __m128 b)
-{
- float32x2_t a22 =
- vdup_lane_f32(vget_high_f32(vreinterpretq_f32_m128(a)), 0);
- float32x2_t b00 =
- vdup_lane_f32(vget_low_f32(vreinterpretq_f32_m128(b)), 0);
- return vreinterpretq_m128_f32(vcombine_f32(a22, b00));
-}
-
-FORCE_INLINE __m128 _mm_shuffle_ps_2200(__m128 a, __m128 b)
-{
- float32x2_t a00 =
- vdup_lane_f32(vget_low_f32(vreinterpretq_f32_m128(a)), 0);
- float32x2_t b22 =
- vdup_lane_f32(vget_high_f32(vreinterpretq_f32_m128(b)), 0);
- return vreinterpretq_m128_f32(vcombine_f32(a00, b22));
-}
-
-FORCE_INLINE __m128 _mm_shuffle_ps_3202(__m128 a, __m128 b)
-{
- float32_t a0 = vgetq_lane_f32(vreinterpretq_f32_m128(a), 0);
- float32x2_t a22 =
- vdup_lane_f32(vget_high_f32(vreinterpretq_f32_m128(a)), 0);
- float32x2_t a02 = vset_lane_f32(a0, a22, 1); /* TODO: use vzip ?*/
- float32x2_t b32 = vget_high_f32(vreinterpretq_f32_m128(b));
- return vreinterpretq_m128_f32(vcombine_f32(a02, b32));
-}
-
-FORCE_INLINE __m128 _mm_shuffle_ps_1133(__m128 a, __m128 b)
-{
- float32x2_t a33 =
- vdup_lane_f32(vget_high_f32(vreinterpretq_f32_m128(a)), 1);
- float32x2_t b11 =
- vdup_lane_f32(vget_low_f32(vreinterpretq_f32_m128(b)), 1);
- return vreinterpretq_m128_f32(vcombine_f32(a33, b11));
-}
-
-FORCE_INLINE __m128 _mm_shuffle_ps_2010(__m128 a, __m128 b)
-{
- float32x2_t a10 = vget_low_f32(vreinterpretq_f32_m128(a));
- float32_t b2 = vgetq_lane_f32(vreinterpretq_f32_m128(b), 2);
- float32x2_t b00 =
- vdup_lane_f32(vget_low_f32(vreinterpretq_f32_m128(b)), 0);
- float32x2_t b20 = vset_lane_f32(b2, b00, 1);
- return vreinterpretq_m128_f32(vcombine_f32(a10, b20));
-}
-
-FORCE_INLINE __m128 _mm_shuffle_ps_2001(__m128 a, __m128 b)
-{
- float32x2_t a01 = vrev64_f32(vget_low_f32(vreinterpretq_f32_m128(a)));
- float32_t b2 = vgetq_lane_f32(vreinterpretq_f32_m128(b), 2);
- float32x2_t b00 =
- vdup_lane_f32(vget_low_f32(vreinterpretq_f32_m128(b)), 0);
- float32x2_t b20 = vset_lane_f32(b2, b00, 1);
- return vreinterpretq_m128_f32(vcombine_f32(a01, b20));
-}
-
-FORCE_INLINE __m128 _mm_shuffle_ps_2032(__m128 a, __m128 b)
-{
- float32x2_t a32 = vget_high_f32(vreinterpretq_f32_m128(a));
- float32_t b2 = vgetq_lane_f32(vreinterpretq_f32_m128(b), 2);
- float32x2_t b00 =
- vdup_lane_f32(vget_low_f32(vreinterpretq_f32_m128(b)), 0);
- float32x2_t b20 = vset_lane_f32(b2, b00, 1);
- return vreinterpretq_m128_f32(vcombine_f32(a32, b20));
-}
-
-// NEON does not support a general purpose permute intrinsic
-// Selects four specific single-precision, floating-point values from a and b,
-// based on the mask i.
-// https://msdn.microsoft.com/en-us/library/vstudio/5f0858x0(v=vs.100).aspx
-#if 0 /* C version */
-FORCE_INLINE __m128 _mm_shuffle_ps_default(__m128 a,
- __m128 b,
- __constrange(0, 255) int imm)
-{
- __m128 ret;
- ret[0] = a[imm & 0x3];
- ret[1] = a[(imm >> 2) & 0x3];
- ret[2] = b[(imm >> 4) & 0x03];
- ret[3] = b[(imm >> 6) & 0x03];
- return ret;
-}
-#endif
-#define _mm_shuffle_ps_default(a, b, imm) \
- __extension__({ \
- float32x4_t ret; \
- ret = vmovq_n_f32(vgetq_lane_f32(vreinterpretq_f32_m128(a), \
- (imm) & (0x3))); \
- ret = vsetq_lane_f32(vgetq_lane_f32(vreinterpretq_f32_m128(a), \
- ((imm) >> 2) & 0x3), \
- ret, 1); \
- ret = vsetq_lane_f32(vgetq_lane_f32(vreinterpretq_f32_m128(b), \
- ((imm) >> 4) & 0x3), \
- ret, 2); \
- ret = vsetq_lane_f32(vgetq_lane_f32(vreinterpretq_f32_m128(b), \
- ((imm) >> 6) & 0x3), \
- ret, 3); \
- vreinterpretq_m128_f32(ret); \
- })
-
-// FORCE_INLINE __m128 _mm_shuffle_ps(__m128 a, __m128 b, __constrange(0,255)
-// int imm)
-#if __has_builtin(__builtin_shufflevector)
-#define _mm_shuffle_ps(a, b, imm) \
- __extension__({ \
- float32x4_t _input1 = vreinterpretq_f32_m128(a); \
- float32x4_t _input2 = vreinterpretq_f32_m128(b); \
- float32x4_t _shuf = __builtin_shufflevector( \
- _input1, _input2, (imm) & (0x3), ((imm) >> 2) & 0x3, \
- (((imm) >> 4) & 0x3) + 4, (((imm) >> 6) & 0x3) + 4); \
- vreinterpretq_m128_f32(_shuf); \
- })
-#else // generic
-#define _mm_shuffle_ps(a, b, imm) \
- __extension__({ \
- __m128 ret; \
- switch (imm) { \
- case _MM_SHUFFLE(1, 0, 3, 2): \
- ret = _mm_shuffle_ps_1032((a), (b)); \
- break; \
- case _MM_SHUFFLE(2, 3, 0, 1): \
- ret = _mm_shuffle_ps_2301((a), (b)); \
- break; \
- case _MM_SHUFFLE(0, 3, 2, 1): \
- ret = _mm_shuffle_ps_0321((a), (b)); \
- break; \
- case _MM_SHUFFLE(2, 1, 0, 3): \
- ret = _mm_shuffle_ps_2103((a), (b)); \
- break; \
- case _MM_SHUFFLE(1, 0, 1, 0): \
- ret = _mm_movelh_ps((a), (b)); \
- break; \
- case _MM_SHUFFLE(1, 0, 0, 1): \
- ret = _mm_shuffle_ps_1001((a), (b)); \
- break; \
- case _MM_SHUFFLE(0, 1, 0, 1): \
- ret = _mm_shuffle_ps_0101((a), (b)); \
- break; \
- case _MM_SHUFFLE(3, 2, 1, 0): \
- ret = _mm_shuffle_ps_3210((a), (b)); \
- break; \
- case _MM_SHUFFLE(0, 0, 1, 1): \
- ret = _mm_shuffle_ps_0011((a), (b)); \
- break; \
- case _MM_SHUFFLE(0, 0, 2, 2): \
- ret = _mm_shuffle_ps_0022((a), (b)); \
- break; \
- case _MM_SHUFFLE(2, 2, 0, 0): \
- ret = _mm_shuffle_ps_2200((a), (b)); \
- break; \
- case _MM_SHUFFLE(3, 2, 0, 2): \
- ret = _mm_shuffle_ps_3202((a), (b)); \
- break; \
- case _MM_SHUFFLE(3, 2, 3, 2): \
- ret = _mm_movehl_ps((b), (a)); \
- break; \
- case _MM_SHUFFLE(1, 1, 3, 3): \
- ret = _mm_shuffle_ps_1133((a), (b)); \
- break; \
- case _MM_SHUFFLE(2, 0, 1, 0): \
- ret = _mm_shuffle_ps_2010((a), (b)); \
- break; \
- case _MM_SHUFFLE(2, 0, 0, 1): \
- ret = _mm_shuffle_ps_2001((a), (b)); \
- break; \
- case _MM_SHUFFLE(2, 0, 3, 2): \
- ret = _mm_shuffle_ps_2032((a), (b)); \
- break; \
- default: \
- ret = _mm_shuffle_ps_default((a), (b), (imm)); \
- break; \
- } \
- ret; \
- })
-#endif
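-
-// Usage sketch (illustrative): with a = {a0, a1, a2, a3} and b = {b0, b1, b2,
-// b3}, the _MM_SHUFFLE selector picks two lanes from a and two from b:
-//   _mm_shuffle_ps(a, b, _MM_SHUFFLE(3, 2, 1, 0)) -> {a0, a1, b2, b3}
-//   _mm_shuffle_ps(a, a, _MM_SHUFFLE(0, 0, 0, 0)) -> {a0, a0, a0, a0}
-// The two low 2-bit fields of imm index into a, the two high fields into b.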
-
-// Takes the upper 64 bits of a and places it in the low end of the result
-// Takes the lower 64 bits of a and places it into the high end of the result.
-FORCE_INLINE __m128i _mm_shuffle_epi_1032(__m128i a)
-{
- int32x2_t a32 = vget_high_s32(vreinterpretq_s32_m128i(a));
- int32x2_t a10 = vget_low_s32(vreinterpretq_s32_m128i(a));
- return vreinterpretq_m128i_s32(vcombine_s32(a32, a10));
-}
-
-// Takes the lower two 32-bit values from a, swaps them, and places them in the
-// low end of the result; takes the upper two 32-bit values from a, swaps them,
-// and places them in the high end of the result.
-FORCE_INLINE __m128i _mm_shuffle_epi_2301(__m128i a)
-{
- int32x2_t a01 = vrev64_s32(vget_low_s32(vreinterpretq_s32_m128i(a)));
- int32x2_t a23 = vrev64_s32(vget_high_s32(vreinterpretq_s32_m128i(a)));
- return vreinterpretq_m128i_s32(vcombine_s32(a01, a23));
-}
-
-// Rotates the least significant 32 bits into the most significant 32 bits and
-// shifts the rest down.
-FORCE_INLINE __m128i _mm_shuffle_epi_0321(__m128i a)
-{
- return vreinterpretq_m128i_s32(vextq_s32(
- vreinterpretq_s32_m128i(a), vreinterpretq_s32_m128i(a), 1));
-}
-
-// Rotates the most significant 32 bits into the least significant 32 bits and
-// shifts the rest up.
-FORCE_INLINE __m128i _mm_shuffle_epi_2103(__m128i a)
-{
- return vreinterpretq_m128i_s32(vextq_s32(
- vreinterpretq_s32_m128i(a), vreinterpretq_s32_m128i(a), 3));
-}
-
-// Gets the lower 64 bits of a and places them in both the lower and upper
-// 64 bits of the result.
-FORCE_INLINE __m128i _mm_shuffle_epi_1010(__m128i a)
-{
- int32x2_t a10 = vget_low_s32(vreinterpretq_s32_m128i(a));
- return vreinterpretq_m128i_s32(vcombine_s32(a10, a10));
-}
-
-// Gets the lower 64 bits of a, swaps the 0 and 1 elements, and places them in
-// the lower 64 bits; gets the lower 64 bits of a unchanged and places them in
-// the upper 64 bits.
-FORCE_INLINE __m128i _mm_shuffle_epi_1001(__m128i a)
-{
- int32x2_t a01 = vrev64_s32(vget_low_s32(vreinterpretq_s32_m128i(a)));
- int32x2_t a10 = vget_low_s32(vreinterpretq_s32_m128i(a));
- return vreinterpretq_m128i_s32(vcombine_s32(a01, a10));
-}
-
-// Gets the lower 64 bits of a, swaps the 0 and 1 elements, and places them in
-// both the lower and upper 64 bits of the result.
-FORCE_INLINE __m128i _mm_shuffle_epi_0101(__m128i a)
-{
- int32x2_t a01 = vrev64_s32(vget_low_s32(vreinterpretq_s32_m128i(a)));
- return vreinterpretq_m128i_s32(vcombine_s32(a01, a01));
-}
-
-FORCE_INLINE __m128i _mm_shuffle_epi_2211(__m128i a)
-{
- int32x2_t a11 =
- vdup_lane_s32(vget_low_s32(vreinterpretq_s32_m128i(a)), 1);
- int32x2_t a22 =
- vdup_lane_s32(vget_high_s32(vreinterpretq_s32_m128i(a)), 0);
- return vreinterpretq_m128i_s32(vcombine_s32(a11, a22));
-}
-
-FORCE_INLINE __m128i _mm_shuffle_epi_0122(__m128i a)
-{
- int32x2_t a22 =
- vdup_lane_s32(vget_high_s32(vreinterpretq_s32_m128i(a)), 0);
- int32x2_t a01 = vrev64_s32(vget_low_s32(vreinterpretq_s32_m128i(a)));
- return vreinterpretq_m128i_s32(vcombine_s32(a22, a01));
-}
-
-FORCE_INLINE __m128i _mm_shuffle_epi_3332(__m128i a)
-{
- int32x2_t a32 = vget_high_s32(vreinterpretq_s32_m128i(a));
- int32x2_t a33 =
- vdup_lane_s32(vget_high_s32(vreinterpretq_s32_m128i(a)), 1);
- return vreinterpretq_m128i_s32(vcombine_s32(a32, a33));
-}
-
-// Shuffle packed 8-bit integers in a according to shuffle control mask in the
-// corresponding 8-bit element of b, and store the results in dst.
-// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_shuffle_epi8&expand=5146
-FORCE_INLINE __m128i _mm_shuffle_epi8(__m128i a, __m128i b)
-{
- int8x16_t tbl = vreinterpretq_s8_m128i(a); // input a
- uint8x16_t idx = vreinterpretq_u8_m128i(b); // input b
- uint8x16_t idx_masked =
- vandq_u8(idx, vdupq_n_u8(0x8F)); // avoid using meaningless bits
-#if defined(__aarch64__)
- return vreinterpretq_m128i_s8(vqtbl1q_s8(tbl, idx_masked));
-#elif defined(__GNUC__)
- int8x16_t ret;
- // %e and %f represent the even and odd D registers
- // respectively.
- __asm__ __volatile__("vtbl.8 %e[ret], {%e[tbl], %f[tbl]}, %e[idx]\n"
- "vtbl.8 %f[ret], {%e[tbl], %f[tbl]}, %f[idx]\n"
- : [ret] "=&w"(ret)
- : [tbl] "w"(tbl), [idx] "w"(idx_masked));
- return vreinterpretq_m128i_s8(ret);
-#else
- // Generic fallback (also usable when testing on AArch64): split the
- // table into two 64-bit halves for vtbl2.
- int8x8x2_t a_split = {vget_low_s8(tbl), vget_high_s8(tbl)};
- return vreinterpretq_m128i_s8(
- vcombine_s8(vtbl2_s8(a_split, vget_low_u8(idx_masked)),
- vtbl2_s8(a_split, vget_high_u8(idx_masked))));
-#endif
-}
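-
-// Worked example (illustrative): with byte i of a holding 0x11 * i and the
-// first control bytes of b equal to {3, 0, 0x80, 1}, the first result bytes
-// are {0x33, 0x00, 0x00, 0x11}; the 0x80 control byte zeroes its lane because
-// its top bit is set.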
-
-#if 0 /* C version */
-FORCE_INLINE __m128i _mm_shuffle_epi32_default(__m128i a,
- __constrange(0, 255) int imm)
-{
- __m128i ret;
- ret[0] = a[imm & 0x3];
- ret[1] = a[(imm >> 2) & 0x3];
- ret[2] = a[(imm >> 4) & 0x03];
- ret[3] = a[(imm >> 6) & 0x03];
- return ret;
-}
-#endif
-#define _mm_shuffle_epi32_default(a, imm) \
- __extension__({ \
- int32x4_t ret; \
- ret = vmovq_n_s32(vgetq_lane_s32(vreinterpretq_s32_m128i(a), \
- (imm) & (0x3))); \
- ret = vsetq_lane_s32( \
- vgetq_lane_s32(vreinterpretq_s32_m128i(a), \
- ((imm) >> 2) & 0x3), \
- ret, 1); \
- ret = vsetq_lane_s32( \
- vgetq_lane_s32(vreinterpretq_s32_m128i(a), \
- ((imm) >> 4) & 0x3), \
- ret, 2); \
- ret = vsetq_lane_s32( \
- vgetq_lane_s32(vreinterpretq_s32_m128i(a), \
- ((imm) >> 6) & 0x3), \
- ret, 3); \
- vreinterpretq_m128i_s32(ret); \
- })
-
-// FORCE_INLINE __m128i _mm_shuffle_epi32_splat(__m128i a, __constrange(0,255)
-// int imm)
-#if defined(__aarch64__)
-#define _mm_shuffle_epi32_splat(a, imm) \
- __extension__({ \
- vreinterpretq_m128i_s32( \
- vdupq_laneq_s32(vreinterpretq_s32_m128i(a), (imm))); \
- })
-#else
-#define _mm_shuffle_epi32_splat(a, imm) \
- __extension__({ \
- vreinterpretq_m128i_s32(vdupq_n_s32( \
- vgetq_lane_s32(vreinterpretq_s32_m128i(a), (imm)))); \
- })
-#endif
-
-// Shuffles the 4 signed or unsigned 32-bit integers in a as specified by imm.
-// https://msdn.microsoft.com/en-us/library/56f67xbk%28v=vs.90%29.aspx
-// FORCE_INLINE __m128i _mm_shuffle_epi32(__m128i a,
-// __constrange(0,255) int imm)
-#if __has_builtin(__builtin_shufflevector)
-#define _mm_shuffle_epi32(a, imm) \
- __extension__({ \
- int32x4_t _input = vreinterpretq_s32_m128i(a); \
- int32x4_t _shuf = __builtin_shufflevector( \
- _input, _input, (imm) & (0x3), ((imm) >> 2) & 0x3, \
- ((imm) >> 4) & 0x3, ((imm) >> 6) & 0x3); \
- vreinterpretq_m128i_s32(_shuf); \
- })
-#else // generic
-#define _mm_shuffle_epi32(a, imm) \
- __extension__({ \
- __m128i ret; \
- switch (imm) { \
- case _MM_SHUFFLE(1, 0, 3, 2): \
- ret = _mm_shuffle_epi_1032((a)); \
- break; \
- case _MM_SHUFFLE(2, 3, 0, 1): \
- ret = _mm_shuffle_epi_2301((a)); \
- break; \
- case _MM_SHUFFLE(0, 3, 2, 1): \
- ret = _mm_shuffle_epi_0321((a)); \
- break; \
- case _MM_SHUFFLE(2, 1, 0, 3): \
- ret = _mm_shuffle_epi_2103((a)); \
- break; \
- case _MM_SHUFFLE(1, 0, 1, 0): \
- ret = _mm_shuffle_epi_1010((a)); \
- break; \
- case _MM_SHUFFLE(1, 0, 0, 1): \
- ret = _mm_shuffle_epi_1001((a)); \
- break; \
- case _MM_SHUFFLE(0, 1, 0, 1): \
- ret = _mm_shuffle_epi_0101((a)); \
- break; \
- case _MM_SHUFFLE(2, 2, 1, 1): \
- ret = _mm_shuffle_epi_2211((a)); \
- break; \
- case _MM_SHUFFLE(0, 1, 2, 2): \
- ret = _mm_shuffle_epi_0122((a)); \
- break; \
- case _MM_SHUFFLE(3, 3, 3, 2): \
- ret = _mm_shuffle_epi_3332((a)); \
- break; \
- case _MM_SHUFFLE(0, 0, 0, 0): \
- ret = _mm_shuffle_epi32_splat((a), 0); \
- break; \
- case _MM_SHUFFLE(1, 1, 1, 1): \
- ret = _mm_shuffle_epi32_splat((a), 1); \
- break; \
- case _MM_SHUFFLE(2, 2, 2, 2): \
- ret = _mm_shuffle_epi32_splat((a), 2); \
- break; \
- case _MM_SHUFFLE(3, 3, 3, 3): \
- ret = _mm_shuffle_epi32_splat((a), 3); \
- break; \
- default: \
- ret = _mm_shuffle_epi32_default((a), (imm)); \
- break; \
- } \
- ret; \
- })
-#endif
-
-// Shuffles the lower 4 signed or unsigned 16-bit integers in a as specified
-// by imm.
-// https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2010/y41dkk37(v=vs.100)
-// FORCE_INLINE __m128i _mm_shufflelo_epi16_function(__m128i a,
-// __constrange(0,255) int
-// imm)
-#define _mm_shufflelo_epi16_function(a, imm) \
- __extension__({ \
- int16x8_t ret = vreinterpretq_s16_m128i(a); \
- int16x4_t lowBits = vget_low_s16(ret); \
- ret = vsetq_lane_s16(vget_lane_s16(lowBits, (imm) & (0x3)), \
- ret, 0); \
- ret = vsetq_lane_s16( \
- vget_lane_s16(lowBits, ((imm) >> 2) & 0x3), ret, 1); \
- ret = vsetq_lane_s16( \
- vget_lane_s16(lowBits, ((imm) >> 4) & 0x3), ret, 2); \
- ret = vsetq_lane_s16( \
- vget_lane_s16(lowBits, ((imm) >> 6) & 0x3), ret, 3); \
- vreinterpretq_m128i_s16(ret); \
- })
-
-// FORCE_INLINE __m128i _mm_shufflelo_epi16(__m128i a,
-// __constrange(0,255) int imm)
-#if __has_builtin(__builtin_shufflevector)
-#define _mm_shufflelo_epi16(a, imm) \
- __extension__({ \
- int16x8_t _input = vreinterpretq_s16_m128i(a); \
- int16x8_t _shuf = __builtin_shufflevector( \
- _input, _input, ((imm) & (0x3)), (((imm) >> 2) & 0x3), \
- (((imm) >> 4) & 0x3), (((imm) >> 6) & 0x3), 4, 5, 6, \
- 7); \
- vreinterpretq_m128i_s16(_shuf); \
- })
-#else // generic
-#define _mm_shufflelo_epi16(a, imm) _mm_shufflelo_epi16_function((a), (imm))
-#endif
-
-// Shuffles the upper 4 signed or unsigned 16-bit integers in a as specified
-// by imm.
-// https://msdn.microsoft.com/en-us/library/13ywktbs(v=vs.100).aspx
-// FORCE_INLINE __m128i _mm_shufflehi_epi16_function(__m128i a,
-// __constrange(0,255) int
-// imm)
-#define _mm_shufflehi_epi16_function(a, imm) \
- __extension__({ \
- int16x8_t ret = vreinterpretq_s16_m128i(a); \
- int16x4_t highBits = vget_high_s16(ret); \
- ret = vsetq_lane_s16(vget_lane_s16(highBits, (imm) & (0x3)), \
- ret, 4); \
- ret = vsetq_lane_s16( \
- vget_lane_s16(highBits, ((imm) >> 2) & 0x3), ret, 5); \
- ret = vsetq_lane_s16( \
- vget_lane_s16(highBits, ((imm) >> 4) & 0x3), ret, 6); \
- ret = vsetq_lane_s16( \
- vget_lane_s16(highBits, ((imm) >> 6) & 0x3), ret, 7); \
- vreinterpretq_m128i_s16(ret); \
- })
-
-// FORCE_INLINE __m128i _mm_shufflehi_epi16(__m128i a,
-// __constrange(0,255) int imm)
-#if __has_builtin(__builtin_shufflevector)
-#define _mm_shufflehi_epi16(a, imm) \
- __extension__({ \
- int16x8_t _input = vreinterpretq_s16_m128i(a); \
- int16x8_t _shuf = __builtin_shufflevector( \
- _input, _input, 0, 1, 2, 3, ((imm) & (0x3)) + 4, \
- (((imm) >> 2) & 0x3) + 4, (((imm) >> 4) & 0x3) + 4, \
- (((imm) >> 6) & 0x3) + 4); \
- vreinterpretq_m128i_s16(_shuf); \
- })
-#else // generic
-#define _mm_shufflehi_epi16(a, imm) _mm_shufflehi_epi16_function((a), (imm))
-#endif
-
-// Blend packed 16-bit integers from a and b using control mask imm8, and store
-// the results in dst.
-//
-// FOR j := 0 to 7
-// i := j*16
-// IF imm8[j]
-// dst[i+15:i] := b[i+15:i]
-// ELSE
-// dst[i+15:i] := a[i+15:i]
-// FI
-// ENDFOR
-// FORCE_INLINE __m128i _mm_blend_epi16(__m128i a, __m128i b,
-// __constrange(0,255) int imm)
-#define _mm_blend_epi16(a, b, imm) \
- __extension__({ \
- const uint16_t _mask[8] = { \
- ((imm) & (1 << 0)) ? 0xFFFF : 0x0000, \
- ((imm) & (1 << 1)) ? 0xFFFF : 0x0000, \
- ((imm) & (1 << 2)) ? 0xFFFF : 0x0000, \
- ((imm) & (1 << 3)) ? 0xFFFF : 0x0000, \
- ((imm) & (1 << 4)) ? 0xFFFF : 0x0000, \
- ((imm) & (1 << 5)) ? 0xFFFF : 0x0000, \
- ((imm) & (1 << 6)) ? 0xFFFF : 0x0000, \
- ((imm) & (1 << 7)) ? 0xFFFF : 0x0000}; \
- uint16x8_t _mask_vec = vld1q_u16(_mask); \
- uint16x8_t _a = vreinterpretq_u16_m128i(a); \
- uint16x8_t _b = vreinterpretq_u16_m128i(b); \
- vreinterpretq_m128i_u16(vbslq_u16(_mask_vec, _b, _a)); \
- })
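-
-// Usage sketch (illustrative): _mm_blend_epi16(a, b, 0x0F) takes the four low
-// 16-bit lanes from b (mask bits 0-3 set) and the four high lanes from a,
-// giving {b0, b1, b2, b3, a4, a5, a6, a7}.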
-
-// Blend packed 8-bit integers from a and b using mask, and store the results in
-// dst.
-//
-// FOR j := 0 to 15
-// i := j*8
-// IF mask[i+7]
-// dst[i+7:i] := b[i+7:i]
-// ELSE
-// dst[i+7:i] := a[i+7:i]
-// FI
-// ENDFOR
-FORCE_INLINE __m128i _mm_blendv_epi8(__m128i _a, __m128i _b, __m128i _mask)
-{
- // Use a signed shift right to create a mask with the sign bit
- uint8x16_t mask = vreinterpretq_u8_s8(
- vshrq_n_s8(vreinterpretq_s8_m128i(_mask), 7));
- uint8x16_t a = vreinterpretq_u8_m128i(_a);
- uint8x16_t b = vreinterpretq_u8_m128i(_b);
- return vreinterpretq_m128i_u8(vbslq_u8(mask, b, a));
-}
-
-/* Shifts */
-
-// Shifts the 4 signed 32-bit integers in a right by count bits while shifting
-// in the sign bit.
-//
-// r0 := a0 >> count
-// r1 := a1 >> count
-// r2 := a2 >> count
-// r3 := a3 >> count
-FORCE_INLINE __m128i _mm_srai_epi32(__m128i a, int count)
-{
- return (__m128i)vshlq_s32((int32x4_t)a, vdupq_n_s32(-count));
-}
-
-// Shifts the 8 signed 16-bit integers in a right by count bits while shifting
-// in the sign bit.
-//
-// r0 := a0 >> count
-// r1 := a1 >> count
-// ...
-// r7 := a7 >> count
-FORCE_INLINE __m128i _mm_srai_epi16(__m128i a, int count)
-{
- return (__m128i)vshlq_s16((int16x8_t)a, vdupq_n_s16(-count));
-}
-
-// Shifts the 8 signed or unsigned 16-bit integers in a left by count bits while
-// shifting in zeros.
-//
-// r0 := a0 << count
-// r1 := a1 << count
-// ...
-// r7 := a7 << count
-//
-// https://msdn.microsoft.com/en-us/library/es73bcsy(v=vs.90).aspx
-#define _mm_slli_epi16(a, imm) \
- __extension__({ \
- __m128i ret; \
- if ((imm) <= 0) { \
- ret = a; \
- } else if ((imm) > 15) { \
- ret = _mm_setzero_si128(); \
- } else { \
- ret = vreinterpretq_m128i_s16(vshlq_n_s16( \
- vreinterpretq_s16_m128i(a), (imm))); \
- } \
- ret; \
- })
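-
-// Usage note (illustrative): a count of 16 or more shifts every bit out of a
-// 16-bit lane, so e.g. _mm_slli_epi16(x, 16) yields all-zero lanes via the
-// _mm_setzero_si128() branch above.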
-
-// Shifts the 4 signed or unsigned 32-bit integers in a left by count bits while
-// shifting in zeros.
-// https://msdn.microsoft.com/en-us/library/z2k3bbtb%28v=vs.90%29.aspx
-// FORCE_INLINE __m128i _mm_slli_epi32(__m128i a, __constrange(0,255) int imm)
-#define _mm_slli_epi32(a, imm) \
- __extension__({ \
- __m128i ret; \
- if ((imm) <= 0) { \
- ret = a; \
- } else if ((imm) > 31) { \
- ret = _mm_setzero_si128(); \
- } else { \
- ret = vreinterpretq_m128i_s32(vshlq_n_s32( \
- vreinterpretq_s32_m128i(a), (imm))); \
- } \
- ret; \
- })
-
-// Shift packed 64-bit integers in a left by imm8 while shifting in zeros, and
-// store the results in dst.
-#define _mm_slli_epi64(a, imm) \
- __extension__({ \
- __m128i ret; \
- if ((imm) <= 0) { \
- ret = a; \
- } else if ((imm) > 63) { \
- ret = _mm_setzero_si128(); \
- } else { \
- ret = vreinterpretq_m128i_s64(vshlq_n_s64( \
- vreinterpretq_s64_m128i(a), (imm))); \
- } \
- ret; \
- })
-
-// Shifts the 8 signed or unsigned 16-bit integers in a right by count bits
-// while shifting in zeros.
-//
-// r0 := srl(a0, count)
-// r1 := srl(a1, count)
-// ...
-// r7 := srl(a7, count)
-//
-// https://msdn.microsoft.com/en-us/library/6tcwd38t(v=vs.90).aspx
-#define _mm_srli_epi16(a, imm) \
- __extension__({ \
- __m128i ret; \
- if ((imm) <= 0) { \
- ret = a; \
- } else if ((imm) > 15) { \
- ret = _mm_setzero_si128(); \
- } else { \
- ret = vreinterpretq_m128i_u16(vshrq_n_u16( \
- vreinterpretq_u16_m128i(a), (imm))); \
- } \
- ret; \
- })
-
-// Shifts the 4 signed or unsigned 32-bit integers in a right by count bits
-// while shifting in zeros.
-// https://msdn.microsoft.com/en-us/library/w486zcfa(v=vs.100).aspx
-// FORCE_INLINE __m128i _mm_srli_epi32(__m128i a, __constrange(0,255) int imm)
-#define _mm_srli_epi32(a, imm) \
- __extension__({ \
- __m128i ret; \
- if ((imm) <= 0) { \
- ret = a; \
- } else if ((imm) > 31) { \
- ret = _mm_setzero_si128(); \
- } else { \
- ret = vreinterpretq_m128i_u32(vshrq_n_u32( \
- vreinterpretq_u32_m128i(a), (imm))); \
- } \
- ret; \
- })
-
-// Shift packed 64-bit integers in a right by imm8 while shifting in zeros, and
-// store the results in dst.
-#define _mm_srli_epi64(a, imm) \
- __extension__({ \
- __m128i ret; \
- if ((imm) <= 0) { \
- ret = a; \
- } else if ((imm) > 63) { \
- ret = _mm_setzero_si128(); \
- } else { \
- ret = vreinterpretq_m128i_u64(vshrq_n_u64( \
- vreinterpretq_u64_m128i(a), (imm))); \
- } \
- ret; \
- })
-
-// Shifts the 4 signed 32-bit integers in a right by count bits while shifting
-// in the sign bit.
-// https://msdn.microsoft.com/en-us/library/z1939387(v=vs.100).aspx
-// FORCE_INLINE __m128i _mm_srai_epi32(__m128i a, __constrange(0,255) int imm)
-#define _mm_srai_epi32(a, imm) \
- __extension__({ \
- __m128i ret; \
- if ((imm) <= 0) { \
- ret = a; \
- } else if ((imm) > 31) { \
- ret = vreinterpretq_m128i_s32( \
- vshrq_n_s32(vreinterpretq_s32_m128i(a), 16)); \
- ret = vreinterpretq_m128i_s32(vshrq_n_s32( \
- vreinterpretq_s32_m128i(ret), 16)); \
- } else { \
- ret = vreinterpretq_m128i_s32(vshrq_n_s32( \
- vreinterpretq_s32_m128i(a), (imm))); \
- } \
- ret; \
- })
-
-// Shifts the 128-bit value in a right by imm bytes while shifting in zeros.
-// imm must be an immediate.
-//
-// r := srl(a, imm*8)
-//
-// https://msdn.microsoft.com/en-us/library/305w28yz(v=vs.100).aspx
-// FORCE_INLINE __m128i _mm_srli_si128(__m128i a, __constrange(0,255) int imm)
-#define _mm_srli_si128(a, imm) \
- __extension__({ \
- __m128i ret; \
- if ((imm) <= 0) { \
- ret = a; \
- } else if ((imm) > 15) { \
- ret = _mm_setzero_si128(); \
- } else { \
- ret = vreinterpretq_m128i_s8( \
- vextq_s8(vreinterpretq_s8_m128i(a), \
- vdupq_n_s8(0), (imm))); \
- } \
- ret; \
- })
-
-// Shifts the 128-bit value in a left by imm bytes while shifting in zeros. imm
-// must be an immediate.
-//
-// r := a << (imm * 8)
-//
-// https://msdn.microsoft.com/en-us/library/34d3k2kt(v=vs.100).aspx
-// FORCE_INLINE __m128i _mm_slli_si128(__m128i a, __constrange(0,255) int imm)
-#define _mm_slli_si128(a, imm) \
- __extension__({ \
- __m128i ret; \
- if ((imm) <= 0) { \
- ret = a; \
- } else if ((imm) > 15) { \
- ret = _mm_setzero_si128(); \
- } else { \
- ret = vreinterpretq_m128i_s8(vextq_s8( \
- vdupq_n_s8(0), vreinterpretq_s8_m128i(a), \
- 16 - (imm))); \
- } \
- ret; \
- })
-
-// Shifts the 8 signed or unsigned 16-bit integers in a left by count bits while
-// shifting in zeros.
-//
-// r0 := a0 << count
-// r1 := a1 << count
-// ...
-// r7 := a7 << count
-//
-// https://msdn.microsoft.com/en-us/library/c79w388h(v%3dvs.90).aspx
-FORCE_INLINE __m128i _mm_sll_epi16(__m128i a, __m128i count)
-{
- uint64_t c = vreinterpretq_nth_u64_m128i(count, 0);
- if (c > 15)
- return _mm_setzero_si128();
-
- int16x8_t vc = vdupq_n_s16((int16_t)c);
- return vreinterpretq_m128i_s16(
- vshlq_s16(vreinterpretq_s16_m128i(a), vc));
-}
-
-// Shifts the 4 signed or unsigned 32-bit integers in a left by count bits while
-// shifting in zeros.
-//
-// r0 := a0 << count
-// r1 := a1 << count
-// r2 := a2 << count
-// r3 := a3 << count
-//
-// https://msdn.microsoft.com/en-us/library/6fe5a6s9(v%3dvs.90).aspx
-FORCE_INLINE __m128i _mm_sll_epi32(__m128i a, __m128i count)
-{
- uint64_t c = vreinterpretq_nth_u64_m128i(count, 0);
- if (c > 31)
- return _mm_setzero_si128();
-
- int32x4_t vc = vdupq_n_s32((int32_t)c);
- return vreinterpretq_m128i_s32(
- vshlq_s32(vreinterpretq_s32_m128i(a), vc));
-}
-
-// Shifts the 2 signed or unsigned 64-bit integers in a left by count bits while
-// shifting in zeros.
-//
-// r0 := a0 << count
-// r1 := a1 << count
-//
-// https://msdn.microsoft.com/en-us/library/6ta9dffd(v%3dvs.90).aspx
-FORCE_INLINE __m128i _mm_sll_epi64(__m128i a, __m128i count)
-{
- uint64_t c = vreinterpretq_nth_u64_m128i(count, 0);
- if (c > 63)
- return _mm_setzero_si128();
-
- int64x2_t vc = vdupq_n_s64((int64_t)c);
- return vreinterpretq_m128i_s64(
- vshlq_s64(vreinterpretq_s64_m128i(a), vc));
-}
-
-// Shifts the 8 signed or unsigned 16-bit integers in a right by count bits
-// while shifting in zeros.
-//
-// r0 := srl(a0, count)
-// r1 := srl(a1, count)
-// ...
-// r7 := srl(a7, count)
-//
-// https://msdn.microsoft.com/en-us/library/wd5ax830(v%3dvs.90).aspx
-FORCE_INLINE __m128i _mm_srl_epi16(__m128i a, __m128i count)
-{
- uint64_t c = vreinterpretq_nth_u64_m128i(count, 0);
- if (c > 15)
- return _mm_setzero_si128();
-
- int16x8_t vc = vdupq_n_s16(-(int16_t)c);
- return vreinterpretq_m128i_u16(
- vshlq_u16(vreinterpretq_u16_m128i(a), vc));
-}
-
-// Shifts the 4 signed or unsigned 32-bit integers in a right by count bits
-// while shifting in zeros.
-//
-// r0 := srl(a0, count)
-// r1 := srl(a1, count)
-// r2 := srl(a2, count)
-// r3 := srl(a3, count)
-//
-// https://msdn.microsoft.com/en-us/library/a9cbttf4(v%3dvs.90).aspx
-FORCE_INLINE __m128i _mm_srl_epi32(__m128i a, __m128i count)
-{
- uint64_t c = vreinterpretq_nth_u64_m128i(count, 0);
- if (c > 31)
- return _mm_setzero_si128();
-
- int32x4_t vc = vdupq_n_s32(-(int32_t)c);
- return vreinterpretq_m128i_u32(
- vshlq_u32(vreinterpretq_u32_m128i(a), vc));
-}
-
-// Shifts the 2 signed or unsigned 64-bit integers in a right by count bits
-// while shifting in zeros.
-//
-// r0 := srl(a0, count)
-// r1 := srl(a1, count)
-//
-// https://msdn.microsoft.com/en-us/library/yf6cf9k8(v%3dvs.90).aspx
-FORCE_INLINE __m128i _mm_srl_epi64(__m128i a, __m128i count)
-{
- uint64_t c = vreinterpretq_nth_u64_m128i(count, 0);
- if (c > 63)
- return _mm_setzero_si128();
-
- int64x2_t vc = vdupq_n_s64(-(int64_t)c);
- return vreinterpretq_m128i_u64(
- vshlq_u64(vreinterpretq_u64_m128i(a), vc));
-}
-
-// NEON does not provide a version of this function.
-// Creates a 16-bit mask from the most significant bits of the 16 signed or
-// unsigned 8-bit integers in a and zero extends the upper bits.
-// https://msdn.microsoft.com/en-us/library/vstudio/s090c8fk(v=vs.100).aspx
-FORCE_INLINE int _mm_movemask_epi8(__m128i a)
-{
-#if defined(__aarch64__)
- uint8x16_t input = vreinterpretq_u8_m128i(a);
- const int8_t ALIGN_STRUCT(16) xr[16] = {-7, -6, -5, -4, -3, -2, -1, 0,
- -7, -6, -5, -4, -3, -2, -1, 0};
- const uint8x16_t mask_and = vdupq_n_u8(0x80);
- const int8x16_t mask_shift = vld1q_s8(xr);
- const uint8x16_t mask_result =
- vshlq_u8(vandq_u8(input, mask_and), mask_shift);
- uint8x8_t lo = vget_low_u8(mask_result);
- uint8x8_t hi = vget_high_u8(mask_result);
-
- return vaddv_u8(lo) + (vaddv_u8(hi) << 8);
-#else
- // Use increasingly wide shifts+adds to collect the sign bits
- // together.
- // Since the widening shifts would be rather confusing to follow in little
- // endian, everything will be illustrated in big endian order instead. This
- // has a different result - the bits would actually be reversed on a big
- // endian machine.
-
- // Starting input (only half the elements are shown):
- // 89 ff 1d c0 00 10 99 33
- uint8x16_t input = vreinterpretq_u8_m128i(a);
-
- // Shift out everything but the sign bits with an unsigned shift right.
- //
- // Bytes of the vector:
- // 89 ff 1d c0 00 10 99 33
- // \ \ \ \ \ \ \ \ high_bits = (uint16x4_t)(input >> 7)
- // | | | | | | | |
- // 01 01 00 01 00 00 01 00
- //
- // Bits of first important lane(s):
- // 10001001 (89)
- // \______
- // |
- // 00000001 (01)
- uint16x8_t high_bits = vreinterpretq_u16_u8(vshrq_n_u8(input, 7));
-
- // Merge the even lanes together with a 16-bit unsigned shift right + add.
- // 'xx' represents garbage data which will be ignored in the final result.
- // In the important bytes, the add functions like a binary OR.
- //
- // 01 01 00 01 00 00 01 00
- // \_ | \_ | \_ | \_ | paired16 = (uint32x4_t)(input + (input >> 7))
- // \| \| \| \|
- // xx 03 xx 01 xx 00 xx 02
- //
- // 00000001 00000001 (01 01)
- // \_______ |
- // \|
- // xxxxxxxx xxxxxx11 (xx 03)
- uint32x4_t paired16 =
- vreinterpretq_u32_u16(vsraq_n_u16(high_bits, high_bits, 7));
-
- // Repeat with a wider 32-bit shift + add.
- // xx 03 xx 01 xx 00 xx 02
- // \____ | \____ | paired32 = (uint64x1_t)(paired16 + (paired16 >>
- // 14))
- // \| \|
- // xx xx xx 0d xx xx xx 02
- //
- // 00000011 00000001 (03 01)
- // \\_____ ||
- // '----.\||
- // xxxxxxxx xxxx1101 (xx 0d)
- uint64x2_t paired32 =
- vreinterpretq_u64_u32(vsraq_n_u32(paired16, paired16, 14));
-
- // Last, an even wider 64-bit shift + add to get our result in the low 8 bit
- // lanes. xx xx xx 0d xx xx xx 02
- // \_________ | paired64 = (uint8x8_t)(paired32 + (paired32 >>
- // 28))
- // \|
- // xx xx xx xx xx xx xx d2
- //
- // 00001101 00000010 (0d 02)
- // \ \___ | |
- // '---. \| |
- // xxxxxxxx 11010010 (xx d2)
- uint8x16_t paired64 =
- vreinterpretq_u8_u64(vsraq_n_u64(paired32, paired32, 28));
-
- // Extract the low 8 bits from each 64-bit lane with 2 8-bit extracts.
- // xx xx xx xx xx xx xx d2
- // || return paired64[0]
- // d2
- // Note: Little endian would return the correct value 4b (01001011) instead.
- return vgetq_lane_u8(paired64, 0) |
- ((int)vgetq_lane_u8(paired64, 8) << 8);
-#endif
-}
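-
-// Worked example (illustrative): if only bytes 0 and 15 of a have their sign
-// bit set (say a = {0x80, 0x01, ..., 0x01, 0xFF}), the function returns
-// 0x8001; bit i of the result mirrors bit 7 of byte i.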
-
-// NEON does not provide this method
-// Creates a 4-bit mask from the most significant bits of the four
-// single-precision, floating-point values.
-// https://msdn.microsoft.com/en-us/library/vstudio/4490ys29(v=vs.100).aspx
-FORCE_INLINE int _mm_movemask_ps(__m128 a)
-{
- uint32x4_t input = vreinterpretq_u32_m128(a);
-#if defined(__aarch64__)
- static const int32x4_t shift = {-31, -30, -29, -28};
- static const uint32x4_t highbit = {0x80000000, 0x80000000, 0x80000000,
- 0x80000000};
- return vaddvq_u32(vshlq_u32(vandq_u32(input, highbit), shift));
-#else
- // Uses the exact same method as _mm_movemask_epi8, see that for details.
- // Shift out everything but the sign bits with a 32-bit unsigned shift
- // right.
- uint64x2_t high_bits = vreinterpretq_u64_u32(vshrq_n_u32(input, 31));
- // Merge the two pairs together with a 64-bit unsigned shift right + add.
- uint8x16_t paired =
- vreinterpretq_u8_u64(vsraq_n_u64(high_bits, high_bits, 31));
- // Extract the result.
- return vgetq_lane_u8(paired, 0) | (vgetq_lane_u8(paired, 8) << 2);
-#endif
-}
-
-// Compute the bitwise AND of 128 bits (representing integer data) in a and
-// mask, and return 1 if the result is zero, otherwise return 0.
-// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_test_all_zeros&expand=5871
-FORCE_INLINE int _mm_test_all_zeros(__m128i a, __m128i mask)
-{
- int64x2_t a_and_mask = vandq_s64(vreinterpretq_s64_m128i(a),
- vreinterpretq_s64_m128i(mask));
- return (vgetq_lane_s64(a_and_mask, 0) | vgetq_lane_s64(a_and_mask, 1))
- ? 0
- : 1;
-}
-
-/* Math operations */
-
-// Subtracts the four single-precision, floating-point values of a and b.
-//
-// r0 := a0 - b0
-// r1 := a1 - b1
-// r2 := a2 - b2
-// r3 := a3 - b3
-//
-// https://msdn.microsoft.com/en-us/library/vstudio/1zad2k61(v=vs.100).aspx
-FORCE_INLINE __m128 _mm_sub_ps(__m128 a, __m128 b)
-{
- return vreinterpretq_m128_f32(vsubq_f32(vreinterpretq_f32_m128(a),
- vreinterpretq_f32_m128(b)));
-}
-
-// Subtract 2 packed 64-bit integers in b from 2 packed 64-bit integers in a,
-// and store the results in dst.
-// r0 := a0 - b0
-// r1 := a1 - b1
-FORCE_INLINE __m128i _mm_sub_epi64(__m128i a, __m128i b)
-{
- return vreinterpretq_m128i_s64(vsubq_s64(vreinterpretq_s64_m128i(a),
- vreinterpretq_s64_m128i(b)));
-}
-
-// Subtracts the 4 signed or unsigned 32-bit integers of b from the 4 signed or
-// unsigned 32-bit integers of a.
-//
-// r0 := a0 - b0
-// r1 := a1 - b1
-// r2 := a2 - b2
-// r3 := a3 - b3
-//
-// https://msdn.microsoft.com/en-us/library/vstudio/fhh866h0(v=vs.100).aspx
-FORCE_INLINE __m128i _mm_sub_epi32(__m128i a, __m128i b)
-{
- return vreinterpretq_m128i_s32(vsubq_s32(vreinterpretq_s32_m128i(a),
- vreinterpretq_s32_m128i(b)));
-}
-
-FORCE_INLINE __m128i _mm_sub_epi16(__m128i a, __m128i b)
-{
- return vreinterpretq_m128i_s16(vsubq_s16(vreinterpretq_s16_m128i(a),
- vreinterpretq_s16_m128i(b)));
-}
-
-FORCE_INLINE __m128i _mm_sub_epi8(__m128i a, __m128i b)
-{
- return vreinterpretq_m128i_s8(
- vsubq_s8(vreinterpretq_s8_m128i(a), vreinterpretq_s8_m128i(b)));
-}
-
-// Subtracts the 8 unsigned 16-bit integers of b from the 8 unsigned 16-bit
-// integers of a and saturates.
-// https://technet.microsoft.com/en-us/subscriptions/index/f44y0s19(v=vs.90).aspx
-FORCE_INLINE __m128i _mm_subs_epu16(__m128i a, __m128i b)
-{
- return vreinterpretq_m128i_u16(vqsubq_u16(vreinterpretq_u16_m128i(a),
- vreinterpretq_u16_m128i(b)));
-}
-
-// Subtracts the 16 unsigned 8-bit integers of b from the 16 unsigned 8-bit
-// integers of a and saturates.
-//
-// r0 := UnsignedSaturate(a0 - b0)
-// r1 := UnsignedSaturate(a1 - b1)
-// ...
-// r15 := UnsignedSaturate(a15 - b15)
-//
-// https://technet.microsoft.com/en-us/subscriptions/yadkxc18(v=vs.90)
-FORCE_INLINE __m128i _mm_subs_epu8(__m128i a, __m128i b)
-{
- return vreinterpretq_m128i_u8(vqsubq_u8(vreinterpretq_u8_m128i(a),
- vreinterpretq_u8_m128i(b)));
-}
-
-// Subtracts the 16 signed 8-bit integers of b from the 16 signed 8-bit integers
-// of a and saturates.
-//
-// r0 := SignedSaturate(a0 - b0)
-// r1 := SignedSaturate(a1 - b1)
-// ...
-// r15 := SignedSaturate(a15 - b15)
-//
-// https://technet.microsoft.com/en-us/subscriptions/by7kzks1(v=vs.90)
-FORCE_INLINE __m128i _mm_subs_epi8(__m128i a, __m128i b)
-{
- return vreinterpretq_m128i_s8(vqsubq_s8(vreinterpretq_s8_m128i(a),
- vreinterpretq_s8_m128i(b)));
-}
-
-// Subtracts the 8 signed 16-bit integers of b from the 8 signed 16-bit integers
-// of a and saturates.
-//
-// r0 := SignedSaturate(a0 - b0)
-// r1 := SignedSaturate(a1 - b1)
-// ...
-// r7 := SignedSaturate(a7 - b7)
-//
-// https://technet.microsoft.com/en-us/subscriptions/3247z5b8(v=vs.90)
-FORCE_INLINE __m128i _mm_subs_epi16(__m128i a, __m128i b)
-{
- return vreinterpretq_m128i_s16(vqsubq_s16(vreinterpretq_s16_m128i(a),
- vreinterpretq_s16_m128i(b)));
-}
-
-FORCE_INLINE __m128i _mm_adds_epu16(__m128i a, __m128i b)
-{
- return vreinterpretq_m128i_u16(vqaddq_u16(vreinterpretq_u16_m128i(a),
- vreinterpretq_u16_m128i(b)));
-}
-
-// Negate packed 8-bit integers in a when the corresponding signed
-// 8-bit integer in b is negative, and store the results in dst.
-// Elements in dst are zeroed out when the corresponding element
-// in b is zero.
-//
-// for i in 0..15
-// if b[i] < 0
-// r[i] := -a[i]
-// else if b[i] == 0
-// r[i] := 0
-// else
-// r[i] := a[i]
-// fi
-// done
-FORCE_INLINE __m128i _mm_sign_epi8(__m128i _a, __m128i _b)
-{
- int8x16_t a = vreinterpretq_s8_m128i(_a);
- int8x16_t b = vreinterpretq_s8_m128i(_b);
-
- int8x16_t zero = vdupq_n_s8(0);
- // signed shift right: faster than vclt
- // (b < 0) ? 0xFF : 0
- uint8x16_t ltMask = vreinterpretq_u8_s8(vshrq_n_s8(b, 7));
- // (b == 0) ? 0xFF : 0
- int8x16_t zeroMask = vreinterpretq_s8_u8(vceqq_s8(b, zero));
- // -a
- int8x16_t neg = vnegq_s8(a);
- // bitwise select either a or neg based on ltMask
- int8x16_t masked = vbslq_s8(ltMask, a, neg);
- // res = masked & (~zeroMask)
- int8x16_t res = vbicq_s8(masked, zeroMask);
- return vreinterpretq_m128i_s8(res);
-}
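-
-// Worked example (illustrative): with a = {5, 5, 5, ...} and the first three
-// lanes of b equal to {-1, 0, 7}, the first three result lanes are
-// {-5, 0, 5}.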
-
-// Negate packed 16-bit integers in a when the corresponding signed
-// 16-bit integer in b is negative, and store the results in dst.
-// Elements in dst are zeroed out when the corresponding element
-// in b is zero.
-//
-// for i in 0..7
-// if b[i] < 0
-// r[i] := -a[i]
-// else if b[i] == 0
-// r[i] := 0
-// else
-// r[i] := a[i]
-// fi
-// done
-FORCE_INLINE __m128i _mm_sign_epi16(__m128i _a, __m128i _b)
-{
- int16x8_t a = vreinterpretq_s16_m128i(_a);
- int16x8_t b = vreinterpretq_s16_m128i(_b);
-
- int16x8_t zero = vdupq_n_s16(0);
- // signed shift right: faster than vclt
- // (b < 0) ? 0xFFFF : 0
- uint16x8_t ltMask = vreinterpretq_u16_s16(vshrq_n_s16(b, 15));
- // (b == 0) ? 0xFFFF : 0
- int16x8_t zeroMask = vreinterpretq_s16_u16(vceqq_s16(b, zero));
- // -a
- int16x8_t neg = vnegq_s16(a);
- // bitwise select either a or neg based on ltMask
- int16x8_t masked = vbslq_s16(ltMask, a, neg);
- // res = masked & (~zeroMask)
- int16x8_t res = vbicq_s16(masked, zeroMask);
- return vreinterpretq_m128i_s16(res);
-}
-
-// Negate packed 32-bit integers in a when the corresponding signed
-// 32-bit integer in b is negative, and store the results in dst.
-// Elements in dst are zeroed out when the corresponding element
-// in b is zero.
-//
-// for i in 0..3
-// if b[i] < 0
-// r[i] := -a[i]
-// else if b[i] == 0
-// r[i] := 0
-// else
-// r[i] := a[i]
-// fi
-// done
-FORCE_INLINE __m128i _mm_sign_epi32(__m128i _a, __m128i _b)
-{
- int32x4_t a = vreinterpretq_s32_m128i(_a);
- int32x4_t b = vreinterpretq_s32_m128i(_b);
-
- int32x4_t zero = vdupq_n_s32(0);
- // signed shift right: faster than vclt
- // (b < 0) ? 0xFFFFFFFF : 0
- uint32x4_t ltMask = vreinterpretq_u32_s32(vshrq_n_s32(b, 31));
- // (b == 0) ? 0xFFFFFFFF : 0
- int32x4_t zeroMask = vreinterpretq_s32_u32(vceqq_s32(b, zero));
- // neg = -a
- int32x4_t neg = vnegq_s32(a);
- // bitwise select either a or neg based on ltMask
- int32x4_t masked = vbslq_s32(ltMask, a, neg);
- // res = masked & (~zeroMask)
- int32x4_t res = vbicq_s32(masked, zeroMask);
- return vreinterpretq_m128i_s32(res);
-}
-
-// Computes the average of the 16 unsigned 8-bit integers in a and the 16
-// unsigned 8-bit integers in b and rounds.
-//
-// r0 := (a0 + b0) / 2
-// r1 := (a1 + b1) / 2
-// ...
-// r15 := (a15 + b15) / 2
-//
-// https://msdn.microsoft.com/en-us/library/vstudio/8zwh554a(v%3dvs.90).aspx
-FORCE_INLINE __m128i _mm_avg_epu8(__m128i a, __m128i b)
-{
- return vreinterpretq_m128i_u8(vrhaddq_u8(vreinterpretq_u8_m128i(a),
- vreinterpretq_u8_m128i(b)));
-}
-
-// Computes the average of the 8 unsigned 16-bit integers in a and the 8
-// unsigned 16-bit integers in b and rounds.
-//
-// r0 := (a0 + b0) / 2
-// r1 := (a1 + b1) / 2
-// ...
-// r7 := (a7 + b7) / 2
-//
-// https://msdn.microsoft.com/en-us/library/vstudio/y13ca3c8(v=vs.90).aspx
-FORCE_INLINE __m128i _mm_avg_epu16(__m128i a, __m128i b)
-{
- return (__m128i)vrhaddq_u16(vreinterpretq_u16_m128i(a),
- vreinterpretq_u16_m128i(b));
-}
-
-// Adds the four single-precision, floating-point values of a and b.
-//
-// r0 := a0 + b0
-// r1 := a1 + b1
-// r2 := a2 + b2
-// r3 := a3 + b3
-//
-// https://msdn.microsoft.com/en-us/library/vstudio/c9848chc(v=vs.100).aspx
-FORCE_INLINE __m128 _mm_add_ps(__m128 a, __m128 b)
-{
- return vreinterpretq_m128_f32(vaddq_f32(vreinterpretq_f32_m128(a),
- vreinterpretq_f32_m128(b)));
-}
-
-// Adds the scalar single-precision, floating-point values of a and b.
-// https://msdn.microsoft.com/en-us/library/be94x2y6(v=vs.100).aspx
-FORCE_INLINE __m128 _mm_add_ss(__m128 a, __m128 b)
-{
- float32_t b0 = vgetq_lane_f32(vreinterpretq_f32_m128(b), 0);
- float32x4_t value = vsetq_lane_f32(b0, vdupq_n_f32(0), 0);
- // The upper values in the result must be the remnants of <a>.
- return vreinterpretq_m128_f32(
- vaddq_f32(vreinterpretq_f32_m128(a), value));
-}
-
-// Adds the 2 signed or unsigned 64-bit integers in a to the 2 signed or
-// unsigned 64-bit integers in b.
-// https://msdn.microsoft.com/en-us/library/vstudio/09xs4fkk(v=vs.100).aspx
-FORCE_INLINE __m128i _mm_add_epi64(__m128i a, __m128i b)
-{
- return vreinterpretq_m128i_s64(vaddq_s64(vreinterpretq_s64_m128i(a),
- vreinterpretq_s64_m128i(b)));
-}
-
-// Adds the 4 signed or unsigned 32-bit integers in a to the 4 signed or
-// unsigned 32-bit integers in b.
-//
-// r0 := a0 + b0
-// r1 := a1 + b1
-// r2 := a2 + b2
-// r3 := a3 + b3
-//
-// https://msdn.microsoft.com/en-us/library/vstudio/09xs4fkk(v=vs.100).aspx
-FORCE_INLINE __m128i _mm_add_epi32(__m128i a, __m128i b)
-{
- return vreinterpretq_m128i_s32(vaddq_s32(vreinterpretq_s32_m128i(a),
- vreinterpretq_s32_m128i(b)));
-}
-
-// Adds the 8 signed or unsigned 16-bit integers in a to the 8 signed or
-// unsigned 16-bit integers in b.
-// https://msdn.microsoft.com/en-us/library/fceha5k4(v=vs.100).aspx
-FORCE_INLINE __m128i _mm_add_epi16(__m128i a, __m128i b)
-{
- return vreinterpretq_m128i_s16(vaddq_s16(vreinterpretq_s16_m128i(a),
- vreinterpretq_s16_m128i(b)));
-}
-
-// Adds the 16 signed or unsigned 8-bit integers in a to the 16 signed or
-// unsigned 8-bit integers in b.
-// https://technet.microsoft.com/en-us/subscriptions/yc7tcyzs(v=vs.90)
-FORCE_INLINE __m128i _mm_add_epi8(__m128i a, __m128i b)
-{
- return vreinterpretq_m128i_s8(
- vaddq_s8(vreinterpretq_s8_m128i(a), vreinterpretq_s8_m128i(b)));
-}
-
-// Adds the 8 signed 16-bit integers in a to the 8 signed 16-bit integers in b
-// and saturates.
-//
-// r0 := SignedSaturate(a0 + b0)
-// r1 := SignedSaturate(a1 + b1)
-// ...
-// r7 := SignedSaturate(a7 + b7)
-//
-// https://msdn.microsoft.com/en-us/library/1a306ef8(v=vs.100).aspx
-FORCE_INLINE __m128i _mm_adds_epi16(__m128i a, __m128i b)
-{
- return vreinterpretq_m128i_s16(vqaddq_s16(vreinterpretq_s16_m128i(a),
- vreinterpretq_s16_m128i(b)));
-}
-
-// Adds the 16 unsigned 8-bit integers in a to the 16 unsigned 8-bit integers in
-// b and saturates.
-// https://msdn.microsoft.com/en-us/library/9hahyddy(v=vs.100).aspx
-FORCE_INLINE __m128i _mm_adds_epu8(__m128i a, __m128i b)
-{
- return vreinterpretq_m128i_u8(vqaddq_u8(vreinterpretq_u8_m128i(a),
- vreinterpretq_u8_m128i(b)));
-}
-
-// Multiplies the 8 signed or unsigned 16-bit integers from a by the 8 signed or
-// unsigned 16-bit integers from b.
-//
-// r0 := (a0 * b0)[15:0]
-// r1 := (a1 * b1)[15:0]
-// ...
-// r7 := (a7 * b7)[15:0]
-//
-// https://msdn.microsoft.com/en-us/library/vstudio/9ks1472s(v=vs.100).aspx
-FORCE_INLINE __m128i _mm_mullo_epi16(__m128i a, __m128i b)
-{
- return vreinterpretq_m128i_s16(vmulq_s16(vreinterpretq_s16_m128i(a),
- vreinterpretq_s16_m128i(b)));
-}
-
-// Multiplies the 4 signed or unsigned 32-bit integers from a by the 4 signed or
-// unsigned 32-bit integers from b.
-// https://msdn.microsoft.com/en-us/library/vstudio/bb531409(v=vs.100).aspx
-FORCE_INLINE __m128i _mm_mullo_epi32(__m128i a, __m128i b)
-{
- return vreinterpretq_m128i_s32(vmulq_s32(vreinterpretq_s32_m128i(a),
- vreinterpretq_s32_m128i(b)));
-}
-
-// Multiplies the four single-precision, floating-point values of a and b.
-//
-// r0 := a0 * b0
-// r1 := a1 * b1
-// r2 := a2 * b2
-// r3 := a3 * b3
-//
-// https://msdn.microsoft.com/en-us/library/vstudio/22kbk6t9(v=vs.100).aspx
-FORCE_INLINE __m128 _mm_mul_ps(__m128 a, __m128 b)
-{
- return vreinterpretq_m128_f32(vmulq_f32(vreinterpretq_f32_m128(a),
- vreinterpretq_f32_m128(b)));
-}
-
-// Multiply the low unsigned 32-bit integers from each packed 64-bit element in
-// a and b, and store the unsigned 64-bit results in dst.
-//
-// r0 := (a0 & 0xFFFFFFFF) * (b0 & 0xFFFFFFFF)
-// r1 := (a2 & 0xFFFFFFFF) * (b2 & 0xFFFFFFFF)
-FORCE_INLINE __m128i _mm_mul_epu32(__m128i a, __m128i b)
-{
- // vmull_u32 upcasts instead of masking, so we downcast.
- uint32x2_t a_lo = vmovn_u64(vreinterpretq_u64_m128i(a));
- uint32x2_t b_lo = vmovn_u64(vreinterpretq_u64_m128i(b));
- return vreinterpretq_m128i_u64(vmull_u32(a_lo, b_lo));
-}
-
-// Multiply the low signed 32-bit integers from each packed 64-bit element in
-// a and b, and store the signed 64-bit results in dst.
-//
-// r0 := (int64_t)(int32_t)a0 * (int64_t)(int32_t)b0
-// r1 := (int64_t)(int32_t)a2 * (int64_t)(int32_t)b2
-FORCE_INLINE __m128i _mm_mul_epi32(__m128i a, __m128i b)
-{
- // vmull_s32 upcasts instead of masking, so we downcast.
- int32x2_t a_lo = vmovn_s64(vreinterpretq_s64_m128i(a));
- int32x2_t b_lo = vmovn_s64(vreinterpretq_s64_m128i(b));
- return vreinterpretq_m128i_s64(vmull_s32(a_lo, b_lo));
-}
-
-// Multiplies the 8 signed 16-bit integers from a by the 8 signed 16-bit
-// integers from b.
-//
-// r0 := (a0 * b0) + (a1 * b1)
-// r1 := (a2 * b2) + (a3 * b3)
-// r2 := (a4 * b4) + (a5 * b5)
-// r3 := (a6 * b6) + (a7 * b7)
-// https://msdn.microsoft.com/en-us/library/yht36sa6(v=vs.90).aspx
-FORCE_INLINE __m128i _mm_madd_epi16(__m128i a, __m128i b)
-{
- int32x4_t low = vmull_s16(vget_low_s16(vreinterpretq_s16_m128i(a)),
- vget_low_s16(vreinterpretq_s16_m128i(b)));
- int32x4_t high = vmull_s16(vget_high_s16(vreinterpretq_s16_m128i(a)),
- vget_high_s16(vreinterpretq_s16_m128i(b)));
-
- int32x2_t low_sum = vpadd_s32(vget_low_s32(low), vget_high_s32(low));
- int32x2_t high_sum = vpadd_s32(vget_low_s32(high), vget_high_s32(high));
-
- return vreinterpretq_m128i_s32(vcombine_s32(low_sum, high_sum));
-}
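-
-// Worked example (illustrative): with a = {1, 2, 3, 4, ...} and
-// b = {10, 20, 30, 40, ...}, the first two 32-bit results are
-// 1*10 + 2*20 = 50 and 3*30 + 4*40 = 250.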
-
-// Multiply packed signed 16-bit integers in a and b, producing intermediate
-// signed 32-bit integers. Shift right by 15 bits while rounding up, and store
-// the packed 16-bit integers in dst.
-//
-// r0 := Round(((int32_t)a0 * (int32_t)b0) >> 15)
-// r1 := Round(((int32_t)a1 * (int32_t)b1) >> 15)
-// r2 := Round(((int32_t)a2 * (int32_t)b2) >> 15)
-// ...
-// r7 := Round(((int32_t)a7 * (int32_t)b7) >> 15)
-FORCE_INLINE __m128i _mm_mulhrs_epi16(__m128i a, __m128i b)
-{
- // Has issues due to saturation
- // return vreinterpretq_m128i_s16(vqrdmulhq_s16(a, b));
-
- // Multiply
- int32x4_t mul_lo = vmull_s16(vget_low_s16(vreinterpretq_s16_m128i(a)),
- vget_low_s16(vreinterpretq_s16_m128i(b)));
- int32x4_t mul_hi = vmull_s16(vget_high_s16(vreinterpretq_s16_m128i(a)),
- vget_high_s16(vreinterpretq_s16_m128i(b)));
-
- // Rounding narrowing shift right
- // narrow = (int16_t)((mul + 16384) >> 15);
- int16x4_t narrow_lo = vrshrn_n_s32(mul_lo, 15);
- int16x4_t narrow_hi = vrshrn_n_s32(mul_hi, 15);
-
- // Join together
- return vreinterpretq_m128i_s16(vcombine_s16(narrow_lo, narrow_hi));
-}
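-
-// Worked example (illustrative): treating lanes as Q15 fixed point, 0.5 * 0.5
-// uses a0 = b0 = 16384, so (16384 * 16384 + (1 << 14)) >> 15 = 8192, which is
-// 0.25 in Q15.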
-
-// Vertically multiply each unsigned 8-bit integer from a with the corresponding
-// signed 8-bit integer from b, producing intermediate signed 16-bit integers.
-// Horizontally add adjacent pairs of intermediate signed 16-bit integers,
-// and pack the saturated results in dst.
-//
-// FOR j := 0 to 7
-// i := j*16
-// dst[i+15:i] := Saturate_To_Int16( a[i+15:i+8]*b[i+15:i+8] +
-// a[i+7:i]*b[i+7:i] )
-// ENDFOR
-FORCE_INLINE __m128i _mm_maddubs_epi16(__m128i _a, __m128i _b)
-{
- // This would be much simpler if x86 would choose to zero extend OR sign
- // extend, not both. This could probably be optimized better.
- uint16x8_t a = vreinterpretq_u16_m128i(_a);
- int16x8_t b = vreinterpretq_s16_m128i(_b);
-
- // Zero extend a
- int16x8_t a_odd = vreinterpretq_s16_u16(vshrq_n_u16(a, 8));
- int16x8_t a_even =
- vreinterpretq_s16_u16(vbicq_u16(a, vdupq_n_u16(0xff00)));
-
- // Sign extend by shifting left then shifting right.
- int16x8_t b_even = vshrq_n_s16(vshlq_n_s16(b, 8), 8);
- int16x8_t b_odd = vshrq_n_s16(b, 8);
-
- // multiply
- int16x8_t prod1 = vmulq_s16(a_even, b_even);
- int16x8_t prod2 = vmulq_s16(a_odd, b_odd);
-
- // saturated add
- return vreinterpretq_m128i_s16(vqaddq_s16(prod1, prod2));
-}
-
-// Computes the absolute difference of the 16 unsigned 8-bit integers from a
-// and the 16 unsigned 8-bit integers from b.
-//
-// Return Value
-// Sums the upper 8 differences and lower 8 differences and packs the
-// resulting 2 unsigned 16-bit integers into the upper and lower 64-bit
-// elements.
-//
-// r0 := abs(a0 - b0) + abs(a1 - b1) +...+ abs(a7 - b7)
-// r1 := 0x0
-// r2 := 0x0
-// r3 := 0x0
-// r4 := abs(a8 - b8) + abs(a9 - b9) +...+ abs(a15 - b15)
-// r5 := 0x0
-// r6 := 0x0
-// r7 := 0x0
-FORCE_INLINE __m128i _mm_sad_epu8(__m128i a, __m128i b)
-{
- uint16x8_t t = vpaddlq_u8(vabdq_u8((uint8x16_t)a, (uint8x16_t)b));
- uint16_t r0 = t[0] + t[1] + t[2] + t[3];
- uint16_t r4 = t[4] + t[5] + t[6] + t[7];
- uint16x8_t r = vsetq_lane_u16(r0, vdupq_n_u16(0), 0);
- return (__m128i)vsetq_lane_u16(r4, r, 4);
-}
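-
-// Worked example (illustrative): if every byte of a exceeds the matching byte
-// of b by exactly 2, each sum of eight absolute differences is 16, so 16-bit
-// lanes 0 and 4 of the result hold 16 and every other lane is zero.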
-
-// Divides the four single-precision, floating-point values of a and b.
-//
-// r0 := a0 / b0
-// r1 := a1 / b1
-// r2 := a2 / b2
-// r3 := a3 / b3
-//
-// https://msdn.microsoft.com/en-us/library/edaw8147(v=vs.100).aspx
-FORCE_INLINE __m128 _mm_div_ps(__m128 a, __m128 b)
-{
- float32x4_t recip0 = vrecpeq_f32(vreinterpretq_f32_m128(b));
- float32x4_t recip1 = vmulq_f32(
- recip0, vrecpsq_f32(recip0, vreinterpretq_f32_m128(b)));
- return vreinterpretq_m128_f32(
- vmulq_f32(vreinterpretq_f32_m128(a), recip1));
-}
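-
-// Note on the approximation (illustrative): vrecpsq_f32(x, b) evaluates
-// (2 - x * b), so recip1 = recip0 * (2 - recip0 * b) is one Newton-Raphson
-// refinement of the vrecpeq_f32 estimate; a / b is then approximated as
-// a * recip1 rather than computed with a true division.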
-
-// Divides the scalar single-precision floating point value of a by b.
-// https://msdn.microsoft.com/en-us/library/4y73xa49(v=vs.100).aspx
-FORCE_INLINE __m128 _mm_div_ss(__m128 a, __m128 b)
-{
- float32_t value =
- vgetq_lane_f32(vreinterpretq_f32_m128(_mm_div_ps(a, b)), 0);
- return vreinterpretq_m128_f32(
- vsetq_lane_f32(value, vreinterpretq_f32_m128(a), 0));
-}
-
-// Computes the approximations of reciprocals of the four single-precision,
-// floating-point values of a.
-// https://msdn.microsoft.com/en-us/library/vstudio/796k1tty(v=vs.100).aspx
-FORCE_INLINE __m128 _mm_rcp_ps(__m128 in)
-{
- float32x4_t recip = vrecpeq_f32(vreinterpretq_f32_m128(in));
- recip = vmulq_f32(recip,
- vrecpsq_f32(recip, vreinterpretq_f32_m128(in)));
- return vreinterpretq_m128_f32(recip);
-}
-
-// Computes the approximations of square roots of the four single-precision,
-// floating-point values of a. First computes reciprocal square roots and then
-// reciprocals of the four values.
-//
-// r0 := sqrt(a0)
-// r1 := sqrt(a1)
-// r2 := sqrt(a2)
-// r3 := sqrt(a3)
-//
-// https://msdn.microsoft.com/en-us/library/vstudio/8z67bwwk(v=vs.100).aspx
-FORCE_INLINE __m128 _mm_sqrt_ps(__m128 in)
-{
-#if defined(__aarch64__)
- return vreinterpretq_m128_f32(vsqrtq_f32(vreinterpretq_f32_m128(in)));
-#else
- float32x4_t recipsq = vrsqrteq_f32(vreinterpretq_f32_m128(in));
- float32x4_t sq = vrecpeq_f32(recipsq);
- // ??? use step versions of both sqrt and recip for better accuracy?
- return vreinterpretq_m128_f32(sq);
-#endif
-}
-
-// Computes the approximation of the square root of the scalar single-precision
-// floating point value of in.
-// https://msdn.microsoft.com/en-us/library/ahfsc22d(v=vs.100).aspx
-FORCE_INLINE __m128 _mm_sqrt_ss(__m128 in)
-{
- float32_t value =
- vgetq_lane_f32(vreinterpretq_f32_m128(_mm_sqrt_ps(in)), 0);
- return vreinterpretq_m128_f32(
- vsetq_lane_f32(value, vreinterpretq_f32_m128(in), 0));
-}
-
-// Computes the approximations of the reciprocal square roots of the four
-// single-precision floating point values of in.
-// https://msdn.microsoft.com/en-us/library/22hfsh53(v=vs.100).aspx
-FORCE_INLINE __m128 _mm_rsqrt_ps(__m128 in)
-{
- return vreinterpretq_m128_f32(vrsqrteq_f32(vreinterpretq_f32_m128(in)));
-}
-
-// Compute the approximate reciprocal square root of the lower single-precision
-// (32-bit) floating-point element in a, store the result in the lower element
-// of dst, and copy the upper 3 packed elements from a to the upper elements of
-// dst.
-// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_rsqrt_ss
-FORCE_INLINE __m128 _mm_rsqrt_ss(__m128 in)
-{
- return vsetq_lane_f32(vgetq_lane_f32(_mm_rsqrt_ps(in), 0), in, 0);
-}
-
-// Computes the maximums of the four single-precision, floating-point values of
-// a and b.
-// https://msdn.microsoft.com/en-us/library/vstudio/ff5d607a(v=vs.100).aspx
-FORCE_INLINE __m128 _mm_max_ps(__m128 a, __m128 b)
-{
- return vreinterpretq_m128_f32(vmaxq_f32(vreinterpretq_f32_m128(a),
- vreinterpretq_f32_m128(b)));
-}
-
-// Computes the minima of the four single-precision, floating-point values of a
-// and b.
-// https://msdn.microsoft.com/en-us/library/vstudio/wh13kadz(v=vs.100).aspx
-FORCE_INLINE __m128 _mm_min_ps(__m128 a, __m128 b)
-{
- return vreinterpretq_m128_f32(vminq_f32(vreinterpretq_f32_m128(a),
- vreinterpretq_f32_m128(b)));
-}
-
-// Computes the maximum of the two lower scalar single-precision floating point
-// values of a and b.
-// https://msdn.microsoft.com/en-us/library/s6db5esz(v=vs.100).aspx
-FORCE_INLINE __m128 _mm_max_ss(__m128 a, __m128 b)
-{
- float32_t value = vgetq_lane_f32(vmaxq_f32(vreinterpretq_f32_m128(a),
- vreinterpretq_f32_m128(b)),
- 0);
- return vreinterpretq_m128_f32(
- vsetq_lane_f32(value, vreinterpretq_f32_m128(a), 0));
-}
-
-// Computes the minimum of the two lower scalar single-precision floating point
-// values of a and b.
-// https://msdn.microsoft.com/en-us/library/0a9y7xaa(v=vs.100).aspx
-FORCE_INLINE __m128 _mm_min_ss(__m128 a, __m128 b)
-{
- float32_t value = vgetq_lane_f32(vminq_f32(vreinterpretq_f32_m128(a),
- vreinterpretq_f32_m128(b)),
- 0);
- return vreinterpretq_m128_f32(
- vsetq_lane_f32(value, vreinterpretq_f32_m128(a), 0));
-}
-
-// Computes the pairwise maxima of the 16 unsigned 8-bit integers from a and the
-// 16 unsigned 8-bit integers from b.
-// https://msdn.microsoft.com/en-us/library/st6634za(v=vs.100).aspx
-FORCE_INLINE __m128i _mm_max_epu8(__m128i a, __m128i b)
-{
- return vreinterpretq_m128i_u8(
- vmaxq_u8(vreinterpretq_u8_m128i(a), vreinterpretq_u8_m128i(b)));
-}
-
-// Computes the pairwise minima of the 16 unsigned 8-bit integers from a and the
-// 16 unsigned 8-bit integers from b.
-// https://msdn.microsoft.com/ko-kr/library/17k8cf58(v=vs.100).aspxx
-FORCE_INLINE __m128i _mm_min_epu8(__m128i a, __m128i b)
-{
- return vreinterpretq_m128i_u8(
- vminq_u8(vreinterpretq_u8_m128i(a), vreinterpretq_u8_m128i(b)));
-}
-
-// Computes the pairwise minima of the 8 signed 16-bit integers from a and the 8
-// signed 16-bit integers from b.
-// https://msdn.microsoft.com/en-us/library/vstudio/6te997ew(v=vs.100).aspx
-FORCE_INLINE __m128i _mm_min_epi16(__m128i a, __m128i b)
-{
- return vreinterpretq_m128i_s16(vminq_s16(vreinterpretq_s16_m128i(a),
- vreinterpretq_s16_m128i(b)));
-}
-
-// Computes the pairwise maxima of the 8 signed 16-bit integers from a and the 8
-// signed 16-bit integers from b.
-// https://msdn.microsoft.com/en-us/LIBRary/3x060h7c(v=vs.100).aspx
-FORCE_INLINE __m128i _mm_max_epi16(__m128i a, __m128i b)
-{
- return vreinterpretq_m128i_s16(vmaxq_s16(vreinterpretq_s16_m128i(a),
- vreinterpretq_s16_m128i(b)));
-}
-
-// epi versions of min/max
-// Computes the pairwise maximums of the four signed 32-bit integer values of a
-// and b.
-//
-// A 128-bit parameter that can be defined with the following equations:
-// r0 := (a0 > b0) ? a0 : b0
-// r1 := (a1 > b1) ? a1 : b1
-// r2 := (a2 > b2) ? a2 : b2
-// r3 := (a3 > b3) ? a3 : b3
-//
-// https://msdn.microsoft.com/en-us/library/vstudio/bb514055(v=vs.100).aspx
-FORCE_INLINE __m128i _mm_max_epi32(__m128i a, __m128i b)
-{
- return vreinterpretq_m128i_s32(vmaxq_s32(vreinterpretq_s32_m128i(a),
- vreinterpretq_s32_m128i(b)));
-}
-
-// Computes the pairwise minima of the four signed 32-bit integer values of a
-// and b.
-//
-// A 128-bit parameter that can be defined with the following equations:
-// r0 := (a0 < b0) ? a0 : b0
-// r1 := (a1 < b1) ? a1 : b1
-// r2 := (a2 < b2) ? a2 : b2
-// r3 := (a3 < b3) ? a3 : b3
-//
-// https://msdn.microsoft.com/en-us/library/vstudio/bb531476(v=vs.100).aspx
-FORCE_INLINE __m128i _mm_min_epi32(__m128i a, __m128i b)
-{
- return vreinterpretq_m128i_s32(vminq_s32(vreinterpretq_s32_m128i(a),
- vreinterpretq_s32_m128i(b)));
-}
-
-// Multiplies the 8 signed 16-bit integers from a by the 8 signed 16-bit
-// integers from b.
-//
-// r0 := (a0 * b0)[31:16]
-// r1 := (a1 * b1)[31:16]
-// ...
-// r7 := (a7 * b7)[31:16]
-//
-// https://msdn.microsoft.com/en-us/library/vstudio/59hddw1d(v=vs.100).aspx
-FORCE_INLINE __m128i _mm_mulhi_epi16(__m128i a, __m128i b)
-{
- /* FIXME: issue with large values because of result saturation */
- // int16x8_t ret = vqdmulhq_s16(vreinterpretq_s16_m128i(a),
- // vreinterpretq_s16_m128i(b)); /* =2*a*b */ return
- // vreinterpretq_m128i_s16(vshrq_n_s16(ret, 1));
- int16x4_t a3210 = vget_low_s16(vreinterpretq_s16_m128i(a));
- int16x4_t b3210 = vget_low_s16(vreinterpretq_s16_m128i(b));
- int32x4_t ab3210 = vmull_s16(a3210, b3210); /* 3333222211110000 */
- int16x4_t a7654 = vget_high_s16(vreinterpretq_s16_m128i(a));
- int16x4_t b7654 = vget_high_s16(vreinterpretq_s16_m128i(b));
- int32x4_t ab7654 = vmull_s16(a7654, b7654); /* 7777666655554444 */
- uint16x8x2_t r = vuzpq_u16(vreinterpretq_u16_s32(ab3210),
- vreinterpretq_u16_s32(ab7654));
- return vreinterpretq_m128i_u16(r.val[1]);
-}
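// A minimal scalar model of the operation above (illustrative sketch;
// mulhi_epi16_scalar is a hypothetical helper, not part of this header):
// each result lane keeps bits [31:16] of the full signed product.
static inline void mulhi_epi16_scalar(const int16_t a[8], const int16_t b[8],
				      int16_t r[8])
{
	for (int i = 0; i < 8; i++) {
		int32_t prod = (int32_t)a[i] * (int32_t)b[i];
		r[i] = (int16_t)((uint32_t)prod >> 16);
	}
}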
-
-// Computes the pairwise addition of the single-precision, floating-point
-// values in a and b.
-// https://msdn.microsoft.com/en-us/library/yd9wecaa.aspx
-FORCE_INLINE __m128 _mm_hadd_ps(__m128 a, __m128 b)
-{
-#if defined(__aarch64__)
- return vreinterpretq_m128_f32(vpaddq_f32(vreinterpretq_f32_m128(a),
- vreinterpretq_f32_m128(b)));
-#else
- float32x2_t a10 = vget_low_f32(vreinterpretq_f32_m128(a));
- float32x2_t a32 = vget_high_f32(vreinterpretq_f32_m128(a));
- float32x2_t b10 = vget_low_f32(vreinterpretq_f32_m128(b));
- float32x2_t b32 = vget_high_f32(vreinterpretq_f32_m128(b));
- return vreinterpretq_m128_f32(
- vcombine_f32(vpadd_f32(a10, a32), vpadd_f32(b10, b32)));
-#endif
-}
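// Scalar model of the horizontal add above (illustrative sketch;
// hadd_ps_scalar is a hypothetical helper): r = {a0+a1, a2+a3, b0+b1, b2+b3}.
static inline void hadd_ps_scalar(const float a[4], const float b[4],
				  float r[4])
{
	r[0] = a[0] + a[1];
	r[1] = a[2] + a[3];
	r[2] = b[0] + b[1];
	r[3] = b[2] + b[3];
}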
-
-// Computes the pairwise addition of the 16-bit signed or unsigned integer
-// values in a and b.
-FORCE_INLINE __m128i _mm_hadd_epi16(__m128i _a, __m128i _b)
-{
- int16x8_t a = vreinterpretq_s16_m128i(_a);
- int16x8_t b = vreinterpretq_s16_m128i(_b);
-#if defined(__aarch64__)
- return vreinterpretq_m128i_s16(vpaddq_s16(a, b));
-#else
- return vreinterpretq_m128i_s16(
- vcombine_s16(vpadd_s16(vget_low_s16(a), vget_high_s16(a)),
- vpadd_s16(vget_low_s16(b), vget_high_s16(b))));
-#endif
-}
-
-// Computes the pairwise difference of the 16-bit signed or unsigned integer
-// values in a and b.
-FORCE_INLINE __m128i _mm_hsub_epi16(__m128i _a, __m128i _b)
-{
- int32x4_t a = vreinterpretq_s32_m128i(_a);
- int32x4_t b = vreinterpretq_s32_m128i(_b);
- // Interleave using vshrn/vmovn
- // [a0|a2|a4|a6|b0|b2|b4|b6]
- // [a1|a3|a5|a7|b1|b3|b5|b7]
- int16x8_t ab0246 = vcombine_s16(vmovn_s32(a), vmovn_s32(b));
- int16x8_t ab1357 = vcombine_s16(vshrn_n_s32(a, 16), vshrn_n_s32(b, 16));
- // Subtract
- return vreinterpretq_m128i_s16(vsubq_s16(ab0246, ab1357));
-}
-
-// Computes the saturated pairwise addition of the 16-bit signed integer
-// values in a and b.
-FORCE_INLINE __m128i _mm_hadds_epi16(__m128i _a, __m128i _b)
-{
- int32x4_t a = vreinterpretq_s32_m128i(_a);
- int32x4_t b = vreinterpretq_s32_m128i(_b);
- // Interleave using vshrn/vmovn
- // [a0|a2|a4|a6|b0|b2|b4|b6]
- // [a1|a3|a5|a7|b1|b3|b5|b7]
- int16x8_t ab0246 = vcombine_s16(vmovn_s32(a), vmovn_s32(b));
- int16x8_t ab1357 = vcombine_s16(vshrn_n_s32(a, 16), vshrn_n_s32(b, 16));
- // Saturated add
- return vreinterpretq_m128i_s16(vqaddq_s16(ab0246, ab1357));
-}
-
-// Computes the saturated pairwise difference of the 16-bit signed integer
-// values in a and b.
-FORCE_INLINE __m128i _mm_hsubs_epi16(__m128i _a, __m128i _b)
-{
- int32x4_t a = vreinterpretq_s32_m128i(_a);
- int32x4_t b = vreinterpretq_s32_m128i(_b);
- // Interleave using vshrn/vmovn
- // [a0|a2|a4|a6|b0|b2|b4|b6]
- // [a1|a3|a5|a7|b1|b3|b5|b7]
- int16x8_t ab0246 = vcombine_s16(vmovn_s32(a), vmovn_s32(b));
- int16x8_t ab1357 = vcombine_s16(vshrn_n_s32(a, 16), vshrn_n_s32(b, 16));
- // Saturated subtract
- return vreinterpretq_m128i_s16(vqsubq_s16(ab0246, ab1357));
-}
-
-// Computes the pairwise addition of the 32-bit signed or unsigned integer
-// values in a and b.
-FORCE_INLINE __m128i _mm_hadd_epi32(__m128i _a, __m128i _b)
-{
- int32x4_t a = vreinterpretq_s32_m128i(_a);
- int32x4_t b = vreinterpretq_s32_m128i(_b);
- return vreinterpretq_m128i_s32(
- vcombine_s32(vpadd_s32(vget_low_s32(a), vget_high_s32(a)),
- vpadd_s32(vget_low_s32(b), vget_high_s32(b))));
-}
-
-// Computes the pairwise difference of the 32-bit signed or unsigned integer
-// values in a and b.
-FORCE_INLINE __m128i _mm_hsub_epi32(__m128i _a, __m128i _b)
-{
- int64x2_t a = vreinterpretq_s64_m128i(_a);
- int64x2_t b = vreinterpretq_s64_m128i(_b);
- // Interleave using vshrn/vmovn
- // [a0|a2|b0|b2]
-	// [a1|a3|b1|b3]
- int32x4_t ab02 = vcombine_s32(vmovn_s64(a), vmovn_s64(b));
- int32x4_t ab13 = vcombine_s32(vshrn_n_s64(a, 32), vshrn_n_s64(b, 32));
- // Subtract
- return vreinterpretq_m128i_s32(vsubq_s32(ab02, ab13));
-}
-
-/* Compare operations */
-
-// Compares for less than
-// https://msdn.microsoft.com/en-us/library/vstudio/f330yhc8(v=vs.100).aspx
-FORCE_INLINE __m128 _mm_cmplt_ps(__m128 a, __m128 b)
-{
- return vreinterpretq_m128_u32(vcltq_f32(vreinterpretq_f32_m128(a),
- vreinterpretq_f32_m128(b)));
-}
-
-// Compares for greater than.
-//
-// r0 := (a0 > b0) ? 0xffffffff : 0x0
-// r1 := (a1 > b1) ? 0xffffffff : 0x0
-// r2 := (a2 > b2) ? 0xffffffff : 0x0
-// r3 := (a3 > b3) ? 0xffffffff : 0x0
-//
-// https://msdn.microsoft.com/en-us/library/vstudio/11dy102s(v=vs.100).aspx
-FORCE_INLINE __m128 _mm_cmpgt_ps(__m128 a, __m128 b)
-{
- return vreinterpretq_m128_u32(vcgtq_f32(vreinterpretq_f32_m128(a),
- vreinterpretq_f32_m128(b)));
-}
-
-// Compares for greater than or equal.
-// https://msdn.microsoft.com/en-us/library/vstudio/fs813y2t(v=vs.100).aspx
-FORCE_INLINE __m128 _mm_cmpge_ps(__m128 a, __m128 b)
-{
- return vreinterpretq_m128_u32(vcgeq_f32(vreinterpretq_f32_m128(a),
- vreinterpretq_f32_m128(b)));
-}
-
-// Compares for less than or equal.
-//
-// r0 := (a0 <= b0) ? 0xffffffff : 0x0
-// r1 := (a1 <= b1) ? 0xffffffff : 0x0
-// r2 := (a2 <= b2) ? 0xffffffff : 0x0
-// r3 := (a3 <= b3) ? 0xffffffff : 0x0
-//
-// https://msdn.microsoft.com/en-us/library/vstudio/1s75w83z(v=vs.100).aspx
-FORCE_INLINE __m128 _mm_cmple_ps(__m128 a, __m128 b)
-{
- return vreinterpretq_m128_u32(vcleq_f32(vreinterpretq_f32_m128(a),
- vreinterpretq_f32_m128(b)));
-}
-
-// Compares for equality.
-// https://msdn.microsoft.com/en-us/library/vstudio/36aectz5(v=vs.100).aspx
-FORCE_INLINE __m128 _mm_cmpeq_ps(__m128 a, __m128 b)
-{
- return vreinterpretq_m128_u32(vceqq_f32(vreinterpretq_f32_m128(a),
- vreinterpretq_f32_m128(b)));
-}
-
-// Compares the 16 signed or unsigned 8-bit integers in a and the 16 signed or
-// unsigned 8-bit integers in b for equality.
-// https://msdn.microsoft.com/en-us/library/windows/desktop/bz5xk21a(v=vs.90).aspx
-FORCE_INLINE __m128i _mm_cmpeq_epi8(__m128i a, __m128i b)
-{
- return vreinterpretq_m128i_u8(
- vceqq_s8(vreinterpretq_s8_m128i(a), vreinterpretq_s8_m128i(b)));
-}
-
-// Compares the 8 signed or unsigned 16-bit integers in a and the 8 signed or
-// unsigned 16-bit integers in b for equality.
-// https://msdn.microsoft.com/en-us/library/2ay060te(v=vs.100).aspx
-FORCE_INLINE __m128i _mm_cmpeq_epi16(__m128i a, __m128i b)
-{
- return vreinterpretq_m128i_u16(vceqq_s16(vreinterpretq_s16_m128i(a),
- vreinterpretq_s16_m128i(b)));
-}
-
-// Compare packed 32-bit integers in a and b for equality, and store the results
-// in dst
-FORCE_INLINE __m128i _mm_cmpeq_epi32(__m128i a, __m128i b)
-{
- return vreinterpretq_m128i_u32(vceqq_s32(vreinterpretq_s32_m128i(a),
- vreinterpretq_s32_m128i(b)));
-}
-
-// Compare packed 64-bit integers in a and b for equality, and store the results
-// in dst
-FORCE_INLINE __m128i _mm_cmpeq_epi64(__m128i a, __m128i b)
-{
-#if defined(__aarch64__)
- return vreinterpretq_m128i_u64(vceqq_u64(vreinterpretq_u64_m128i(a),
- vreinterpretq_u64_m128i(b)));
-#else
- // ARMv7 lacks vceqq_u64
- // (a == b) -> (a_lo == b_lo) && (a_hi == b_hi)
- uint32x4_t cmp = vceqq_u32(vreinterpretq_u32_m128i(a),
- vreinterpretq_u32_m128i(b));
- uint32x4_t swapped = vrev64q_u32(cmp);
- return vreinterpretq_m128i_u32(vandq_u32(cmp, swapped));
-#endif
-}
-
-// Compares the 16 signed 8-bit integers in a and the 16 signed 8-bit integers
-// in b for less than.
-// https://msdn.microsoft.com/en-us/library/windows/desktop/9s46csht(v=vs.90).aspx
-FORCE_INLINE __m128i _mm_cmplt_epi8(__m128i a, __m128i b)
-{
- return vreinterpretq_m128i_u8(
- vcltq_s8(vreinterpretq_s8_m128i(a), vreinterpretq_s8_m128i(b)));
-}
-
-// Compares the 16 signed 8-bit integers in a and the 16 signed 8-bit integers
-// in b for greater than.
-//
-// r0 := (a0 > b0) ? 0xff : 0x0
-// r1 := (a1 > b1) ? 0xff : 0x0
-// ...
-// r15 := (a15 > b15) ? 0xff : 0x0
-//
-// https://msdn.microsoft.com/zh-tw/library/wf45zt2b(v=vs.100).aspx
-FORCE_INLINE __m128i _mm_cmpgt_epi8(__m128i a, __m128i b)
-{
- return vreinterpretq_m128i_u8(
- vcgtq_s8(vreinterpretq_s8_m128i(a), vreinterpretq_s8_m128i(b)));
-}
-
-// Compares the 8 signed 16-bit integers in a and the 8 signed 16-bit integers
-// in b for less than.
-//
-// r0 := (a0 < b0) ? 0xffff : 0x0
-// r1 := (a1 < b1) ? 0xffff : 0x0
-// ...
-// r7 := (a7 < b7) ? 0xffff : 0x0
-//
-// https://technet.microsoft.com/en-us/library/t863edb2(v=vs.100).aspx
-FORCE_INLINE __m128i _mm_cmplt_epi16(__m128i a, __m128i b)
-{
- return vreinterpretq_m128i_u16(vcltq_s16(vreinterpretq_s16_m128i(a),
- vreinterpretq_s16_m128i(b)));
-}
-
-// Compares the 8 signed 16-bit integers in a and the 8 signed 16-bit integers
-// in b for greater than.
-//
-// r0 := (a0 > b0) ? 0xffff : 0x0
-// r1 := (a1 > b1) ? 0xffff : 0x0
-// ...
-// r7 := (a7 > b7) ? 0xffff : 0x0
-//
-// https://technet.microsoft.com/en-us/library/xd43yfsa(v=vs.100).aspx
-FORCE_INLINE __m128i _mm_cmpgt_epi16(__m128i a, __m128i b)
-{
- return vreinterpretq_m128i_u16(vcgtq_s16(vreinterpretq_s16_m128i(a),
- vreinterpretq_s16_m128i(b)));
-}
-
-// Compares the 4 signed 32-bit integers in a and the 4 signed 32-bit integers
-// in b for less than.
-// https://msdn.microsoft.com/en-us/library/vstudio/4ak0bf5d(v=vs.100).aspx
-FORCE_INLINE __m128i _mm_cmplt_epi32(__m128i a, __m128i b)
-{
- return vreinterpretq_m128i_u32(vcltq_s32(vreinterpretq_s32_m128i(a),
- vreinterpretq_s32_m128i(b)));
-}
-
-// Compares the 4 signed 32-bit integers in a and the 4 signed 32-bit integers
-// in b for greater than.
-// https://msdn.microsoft.com/en-us/library/vstudio/1s9f2z0y(v=vs.100).aspx
-FORCE_INLINE __m128i _mm_cmpgt_epi32(__m128i a, __m128i b)
-{
- return vreinterpretq_m128i_u32(vcgtq_s32(vreinterpretq_s32_m128i(a),
- vreinterpretq_s32_m128i(b)));
-}
-
-// Compares the 2 signed 64-bit integers in a and the 2 signed 64-bit integers
-// in b for greater than.
-FORCE_INLINE __m128i _mm_cmpgt_epi64(__m128i a, __m128i b)
-{
-#if defined(__aarch64__)
- return vreinterpretq_m128i_u64(vcgtq_s64(vreinterpretq_s64_m128i(a),
- vreinterpretq_s64_m128i(b)));
-#else
- // ARMv7 lacks vcgtq_s64.
- // This is based off of Clang's SSE2 polyfill:
- // (a > b) -> ((a_hi > b_hi) || (a_lo > b_lo && a_hi == b_hi))
-
- // Mask the sign bit out since we need a signed AND an unsigned comparison
- // and it is ugly to try and split them.
- int32x4_t mask = vreinterpretq_s32_s64(vdupq_n_s64(0x80000000ull));
- int32x4_t a_mask = veorq_s32(vreinterpretq_s32_m128i(a), mask);
- int32x4_t b_mask = veorq_s32(vreinterpretq_s32_m128i(b), mask);
- // Check if a > b
- int64x2_t greater = vreinterpretq_s64_u32(vcgtq_s32(a_mask, b_mask));
- // Copy upper mask to lower mask
- // a_hi > b_hi
- int64x2_t gt_hi = vshrq_n_s64(greater, 63);
- // Copy lower mask to upper mask
- // a_lo > b_lo
- int64x2_t gt_lo = vsliq_n_s64(greater, greater, 32);
- // Compare for equality
- int64x2_t equal = vreinterpretq_s64_u32(vceqq_s32(a_mask, b_mask));
- // Copy upper mask to lower mask
- // a_hi == b_hi
- int64x2_t eq_hi = vshrq_n_s64(equal, 63);
- // a_hi > b_hi || (a_lo > b_lo && a_hi == b_hi)
- int64x2_t ret = vorrq_s64(gt_hi, vandq_s64(gt_lo, eq_hi));
- return vreinterpretq_m128i_s64(ret);
-#endif
-}
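// Scalar sketch of the decomposition used by the ARMv7 path above
// (illustrative only; cmpgt_s64_scalar is a hypothetical helper and assumes
// the usual arithmetic right shift for signed values): a > b iff the high
// words compare greater as signed, or the high words are equal and the low
// words compare greater as unsigned.
static inline int64_t cmpgt_s64_scalar(int64_t a, int64_t b)
{
	int32_t a_hi = (int32_t)(a >> 32), b_hi = (int32_t)(b >> 32);
	uint32_t a_lo = (uint32_t)a, b_lo = (uint32_t)b;
	int gt = (a_hi > b_hi) || (a_hi == b_hi && a_lo > b_lo);
	return gt ? -1 : 0; /* all-ones mask on true, as the intrinsic does */
}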
-
-// Compares the four 32-bit floats in a and b element-wise to check whether any
-// values are NaN. An ordered compare returns true ("orderable") only when
-// neither element is NaN, and false ("not orderable") otherwise.
-// https://msdn.microsoft.com/en-us/library/vstudio/0h9w00fx(v=vs.100).aspx see
-// also:
-// http://stackoverflow.com/questions/8627331/what-does-ordered-unordered-comparison-mean
-// http://stackoverflow.com/questions/29349621/neon-isnanval-intrinsics
-FORCE_INLINE __m128 _mm_cmpord_ps(__m128 a, __m128 b)
-{
- // Note: NEON does not have ordered compare builtin
- // Need to compare a eq a and b eq b to check for NaN
- // Do AND of results to get final
- uint32x4_t ceqaa =
- vceqq_f32(vreinterpretq_f32_m128(a), vreinterpretq_f32_m128(a));
- uint32x4_t ceqbb =
- vceqq_f32(vreinterpretq_f32_m128(b), vreinterpretq_f32_m128(b));
- return vreinterpretq_m128_u32(vandq_u32(ceqaa, ceqbb));
-}
-
-// Compares the lower single-precision floating point scalar values of a and b
-// using a less than operation.
-// https://msdn.microsoft.com/en-us/library/2kwe606b(v=vs.90).aspx
-// Important note: the MSDN documentation is incorrect. If either value is a
-// NaN, the docs say the result is one, but in fact it returns zero.
-FORCE_INLINE int _mm_comilt_ss(__m128 a, __m128 b)
-{
- uint32x4_t a_not_nan =
- vceqq_f32(vreinterpretq_f32_m128(a), vreinterpretq_f32_m128(a));
- uint32x4_t b_not_nan =
- vceqq_f32(vreinterpretq_f32_m128(b), vreinterpretq_f32_m128(b));
- uint32x4_t a_and_b_not_nan = vandq_u32(a_not_nan, b_not_nan);
- uint32x4_t a_lt_b =
- vcltq_f32(vreinterpretq_f32_m128(a), vreinterpretq_f32_m128(b));
- return (vgetq_lane_u32(vandq_u32(a_and_b_not_nan, a_lt_b), 0) != 0) ? 1
- : 0;
-}
-
-// Compares the lower single-precision floating point scalar values of a and b
-// using a greater than operation. :
-// https://msdn.microsoft.com/en-us/library/b0738e0t(v=vs.100).aspx
-FORCE_INLINE int _mm_comigt_ss(__m128 a, __m128 b)
-{
- // return vgetq_lane_u32(vcgtq_f32(vreinterpretq_f32_m128(a),
- // vreinterpretq_f32_m128(b)), 0);
- uint32x4_t a_not_nan =
- vceqq_f32(vreinterpretq_f32_m128(a), vreinterpretq_f32_m128(a));
- uint32x4_t b_not_nan =
- vceqq_f32(vreinterpretq_f32_m128(b), vreinterpretq_f32_m128(b));
- uint32x4_t a_and_b_not_nan = vandq_u32(a_not_nan, b_not_nan);
- uint32x4_t a_gt_b =
- vcgtq_f32(vreinterpretq_f32_m128(a), vreinterpretq_f32_m128(b));
- return (vgetq_lane_u32(vandq_u32(a_and_b_not_nan, a_gt_b), 0) != 0) ? 1
- : 0;
-}
-
-// Compares the lower single-precision floating point scalar values of a and b
-// using a less than or equal operation. :
-// https://msdn.microsoft.com/en-us/library/1w4t7c57(v=vs.90).aspx
-FORCE_INLINE int _mm_comile_ss(__m128 a, __m128 b)
-{
- // return vgetq_lane_u32(vcleq_f32(vreinterpretq_f32_m128(a),
- // vreinterpretq_f32_m128(b)), 0);
- uint32x4_t a_not_nan =
- vceqq_f32(vreinterpretq_f32_m128(a), vreinterpretq_f32_m128(a));
- uint32x4_t b_not_nan =
- vceqq_f32(vreinterpretq_f32_m128(b), vreinterpretq_f32_m128(b));
- uint32x4_t a_and_b_not_nan = vandq_u32(a_not_nan, b_not_nan);
- uint32x4_t a_le_b =
- vcleq_f32(vreinterpretq_f32_m128(a), vreinterpretq_f32_m128(b));
- return (vgetq_lane_u32(vandq_u32(a_and_b_not_nan, a_le_b), 0) != 0) ? 1
- : 0;
-}
-
-// Compares the lower single-precision floating point scalar values of a and b
-// using a greater than or equal operation. :
-// https://msdn.microsoft.com/en-us/library/8t80des6(v=vs.100).aspx
-FORCE_INLINE int _mm_comige_ss(__m128 a, __m128 b)
-{
- // return vgetq_lane_u32(vcgeq_f32(vreinterpretq_f32_m128(a),
- // vreinterpretq_f32_m128(b)), 0);
- uint32x4_t a_not_nan =
- vceqq_f32(vreinterpretq_f32_m128(a), vreinterpretq_f32_m128(a));
- uint32x4_t b_not_nan =
- vceqq_f32(vreinterpretq_f32_m128(b), vreinterpretq_f32_m128(b));
- uint32x4_t a_and_b_not_nan = vandq_u32(a_not_nan, b_not_nan);
- uint32x4_t a_ge_b =
- vcgeq_f32(vreinterpretq_f32_m128(a), vreinterpretq_f32_m128(b));
- return (vgetq_lane_u32(vandq_u32(a_and_b_not_nan, a_ge_b), 0) != 0) ? 1
- : 0;
-}
-
-// Compares the lower single-precision floating point scalar values of a and b
-// using an equality operation. :
-// https://msdn.microsoft.com/en-us/library/93yx2h2b(v=vs.100).aspx
-FORCE_INLINE int _mm_comieq_ss(__m128 a, __m128 b)
-{
- // return vgetq_lane_u32(vceqq_f32(vreinterpretq_f32_m128(a),
- // vreinterpretq_f32_m128(b)), 0);
- uint32x4_t a_not_nan =
- vceqq_f32(vreinterpretq_f32_m128(a), vreinterpretq_f32_m128(a));
- uint32x4_t b_not_nan =
- vceqq_f32(vreinterpretq_f32_m128(b), vreinterpretq_f32_m128(b));
- uint32x4_t a_and_b_not_nan = vandq_u32(a_not_nan, b_not_nan);
- uint32x4_t a_eq_b =
- vceqq_f32(vreinterpretq_f32_m128(a), vreinterpretq_f32_m128(b));
- return (vgetq_lane_u32(vandq_u32(a_and_b_not_nan, a_eq_b), 0) != 0) ? 1
- : 0;
-}
-
-// Compares the lower single-precision floating point scalar values of a and b
-// using an inequality operation. :
-// https://msdn.microsoft.com/en-us/library/bafh5e0a(v=vs.90).aspx
-FORCE_INLINE int _mm_comineq_ss(__m128 a, __m128 b)
-{
- // return !vgetq_lane_u32(vceqq_f32(vreinterpretq_f32_m128(a),
- // vreinterpretq_f32_m128(b)), 0);
- uint32x4_t a_not_nan =
- vceqq_f32(vreinterpretq_f32_m128(a), vreinterpretq_f32_m128(a));
- uint32x4_t b_not_nan =
- vceqq_f32(vreinterpretq_f32_m128(b), vreinterpretq_f32_m128(b));
- uint32x4_t a_or_b_nan = vmvnq_u32(vandq_u32(a_not_nan, b_not_nan));
- uint32x4_t a_neq_b = vmvnq_u32(vceqq_f32(vreinterpretq_f32_m128(a),
- vreinterpretq_f32_m128(b)));
- return (vgetq_lane_u32(vorrq_u32(a_or_b_nan, a_neq_b), 0) != 0) ? 1 : 0;
-}
-
-// according to the documentation, these intrinsics behave the same as the
-// non-'u' versions. We'll just alias them here.
-#define _mm_ucomilt_ss _mm_comilt_ss
-#define _mm_ucomile_ss _mm_comile_ss
-#define _mm_ucomigt_ss _mm_comigt_ss
-#define _mm_ucomige_ss _mm_comige_ss
-#define _mm_ucomieq_ss _mm_comieq_ss
-#define _mm_ucomineq_ss _mm_comineq_ss
-
-/* Conversions */
-
-// Converts the four single-precision, floating-point values of a to signed
-// 32-bit integer values using truncate.
-// https://msdn.microsoft.com/en-us/library/vstudio/1h005y6x(v=vs.100).aspx
-FORCE_INLINE __m128i _mm_cvttps_epi32(__m128 a)
-{
- return vreinterpretq_m128i_s32(
- vcvtq_s32_f32(vreinterpretq_f32_m128(a)));
-}
-
-// Converts the four signed 32-bit integer values of a to single-precision,
-// floating-point values
-// https://msdn.microsoft.com/en-us/library/vstudio/36bwxcx5(v=vs.100).aspx
-FORCE_INLINE __m128 _mm_cvtepi32_ps(__m128i a)
-{
- return vreinterpretq_m128_f32(
- vcvtq_f32_s32(vreinterpretq_s32_m128i(a)));
-}
-
-// Converts the eight unsigned 8-bit integers in the lower 64 bits to eight
-// unsigned 16-bit integers.
-FORCE_INLINE __m128i _mm_cvtepu8_epi16(__m128i a)
-{
- uint8x16_t u8x16 = vreinterpretq_u8_m128i(a); /* xxxx xxxx xxxx DCBA */
- uint16x8_t u16x8 =
- vmovl_u8(vget_low_u8(u8x16)); /* 0x0x 0x0x 0D0C 0B0A */
- return vreinterpretq_m128i_u16(u16x8);
-}
-
-// Converts the four unsigned 8-bit integers in the lower 32 bits to four
-// unsigned 32-bit integers.
-// https://msdn.microsoft.com/en-us/library/bb531467%28v=vs.100%29.aspx
-FORCE_INLINE __m128i _mm_cvtepu8_epi32(__m128i a)
-{
- uint8x16_t u8x16 = vreinterpretq_u8_m128i(a); /* xxxx xxxx xxxx DCBA */
- uint16x8_t u16x8 =
- vmovl_u8(vget_low_u8(u8x16)); /* 0x0x 0x0x 0D0C 0B0A */
- uint32x4_t u32x4 =
- vmovl_u16(vget_low_u16(u16x8)); /* 000D 000C 000B 000A */
- return vreinterpretq_m128i_u32(u32x4);
-}
-
-// Converts the two unsigned 8-bit integers in the lower 16 bits to two
-// unsigned 64-bit integers.
-FORCE_INLINE __m128i _mm_cvtepu8_epi64(__m128i a)
-{
- uint8x16_t u8x16 = vreinterpretq_u8_m128i(a); /* xxxx xxxx xxxx xxBA */
- uint16x8_t u16x8 =
- vmovl_u8(vget_low_u8(u8x16)); /* 0x0x 0x0x 0x0x 0B0A */
- uint32x4_t u32x4 =
- vmovl_u16(vget_low_u16(u16x8)); /* 000x 000x 000B 000A */
- uint64x2_t u64x2 =
- vmovl_u32(vget_low_u32(u32x4)); /* 0000 000B 0000 000A */
- return vreinterpretq_m128i_u64(u64x2);
-}
-
-// Converts the eight signed 8-bit integers in the lower 64 bits to eight
-// signed 16-bit integers.
-FORCE_INLINE __m128i _mm_cvtepi8_epi16(__m128i a)
-{
- int8x16_t s8x16 = vreinterpretq_s8_m128i(a); /* xxxx xxxx xxxx DCBA */
- int16x8_t s16x8 =
- vmovl_s8(vget_low_s8(s8x16)); /* 0x0x 0x0x 0D0C 0B0A */
- return vreinterpretq_m128i_s16(s16x8);
-}
-
-// Converts the four signed 8-bit integers in the lower 32 bits to four
-// signed 32-bit integers.
-FORCE_INLINE __m128i _mm_cvtepi8_epi32(__m128i a)
-{
- int8x16_t s8x16 = vreinterpretq_s8_m128i(a); /* xxxx xxxx xxxx DCBA */
- int16x8_t s16x8 =
- vmovl_s8(vget_low_s8(s8x16)); /* 0x0x 0x0x 0D0C 0B0A */
- int32x4_t s32x4 =
- vmovl_s16(vget_low_s16(s16x8)); /* 000D 000C 000B 000A */
- return vreinterpretq_m128i_s32(s32x4);
-}
-
-// Converts the two signed 8-bit integers in the lower 16 bits to two
-// signed 64-bit integers.
-FORCE_INLINE __m128i _mm_cvtepi8_epi64(__m128i a)
-{
- int8x16_t s8x16 = vreinterpretq_s8_m128i(a); /* xxxx xxxx xxxx xxBA */
- int16x8_t s16x8 =
- vmovl_s8(vget_low_s8(s8x16)); /* 0x0x 0x0x 0x0x 0B0A */
- int32x4_t s32x4 =
- vmovl_s16(vget_low_s16(s16x8)); /* 000x 000x 000B 000A */
- int64x2_t s64x2 =
- vmovl_s32(vget_low_s32(s32x4)); /* 0000 000B 0000 000A */
- return vreinterpretq_m128i_s64(s64x2);
-}
-
-// Converts the four signed 16-bit integers in the lower 64 bits to four signed
-// 32-bit integers.
-FORCE_INLINE __m128i _mm_cvtepi16_epi32(__m128i a)
-{
- return vreinterpretq_m128i_s32(
- vmovl_s16(vget_low_s16(vreinterpretq_s16_m128i(a))));
-}
-
-// Converts the two signed 16-bit integers in the lower 32 bits to two signed
-// 64-bit integers.
-FORCE_INLINE __m128i _mm_cvtepi16_epi64(__m128i a)
-{
- int16x8_t s16x8 = vreinterpretq_s16_m128i(a); /* xxxx xxxx xxxx 0B0A */
- int32x4_t s32x4 =
- vmovl_s16(vget_low_s16(s16x8)); /* 000x 000x 000B 000A */
- int64x2_t s64x2 =
- vmovl_s32(vget_low_s32(s32x4)); /* 0000 000B 0000 000A */
- return vreinterpretq_m128i_s64(s64x2);
-}
-
-// Converts the four unsigned 16-bit integers in the lower 64 bits to four
-// unsigned 32-bit integers.
-FORCE_INLINE __m128i _mm_cvtepu16_epi32(__m128i a)
-{
- return vreinterpretq_m128i_u32(
- vmovl_u16(vget_low_u16(vreinterpretq_u16_m128i(a))));
-}
-
-// Converts the two unsigned 16-bit integers in the lower 32 bits to two
-// unsigned 64-bit integers.
-FORCE_INLINE __m128i _mm_cvtepu16_epi64(__m128i a)
-{
- uint16x8_t u16x8 = vreinterpretq_u16_m128i(a); /* xxxx xxxx xxxx 0B0A */
- uint32x4_t u32x4 =
- vmovl_u16(vget_low_u16(u16x8)); /* 000x 000x 000B 000A */
- uint64x2_t u64x2 =
- vmovl_u32(vget_low_u32(u32x4)); /* 0000 000B 0000 000A */
- return vreinterpretq_m128i_u64(u64x2);
-}
-
-// Converts the two unsigned 32-bit integers in the lower 64 bits to two
-// unsigned 64-bit integers.
-FORCE_INLINE __m128i _mm_cvtepu32_epi64(__m128i a)
-{
- return vreinterpretq_m128i_u64(
- vmovl_u32(vget_low_u32(vreinterpretq_u32_m128i(a))));
-}
-
-// Converts the two signed 32-bit integers in the lower 64 bits to two signed
-// 64-bit integers.
-FORCE_INLINE __m128i _mm_cvtepi32_epi64(__m128i a)
-{
- return vreinterpretq_m128i_s64(
- vmovl_s32(vget_low_s32(vreinterpretq_s32_m128i(a))));
-}
-
-// Converts the four single-precision, floating-point values of a to signed
-// 32-bit integer values.
-//
-// r0 := (int) a0
-// r1 := (int) a1
-// r2 := (int) a2
-// r3 := (int) a3
-//
-// https://msdn.microsoft.com/en-us/library/vstudio/xdc42k5e(v=vs.100).aspx
-// *NOTE*. The default rounding mode on SSE is 'round to even', which ARMv7-A
-// does not support! It is supported on ARMv8-A however.
-FORCE_INLINE __m128i _mm_cvtps_epi32(__m128 a)
-{
-#if defined(__aarch64__)
- return vreinterpretq_m128i_s32(vcvtnq_s32_f32(a));
-#else
- uint32x4_t signmask = vdupq_n_u32(0x80000000);
- float32x4_t half = vbslq_f32(signmask, vreinterpretq_f32_m128(a),
- vdupq_n_f32(0.5f)); /* +/- 0.5 */
- int32x4_t r_normal =
- vcvtq_s32_f32(vaddq_f32(vreinterpretq_f32_m128(a),
- half)); /* round to integer: [a + 0.5]*/
- int32x4_t r_trunc = vcvtq_s32_f32(
- vreinterpretq_f32_m128(a)); /* truncate to integer: [a] */
- int32x4_t plusone = vreinterpretq_s32_u32(vshrq_n_u32(
- vreinterpretq_u32_s32(vnegq_s32(r_trunc)), 31)); /* 1 or 0 */
- int32x4_t r_even = vbicq_s32(vaddq_s32(r_trunc, plusone),
- vdupq_n_s32(1)); /* ([a] + {0,1}) & ~1 */
- float32x4_t delta = vsubq_f32(
- vreinterpretq_f32_m128(a),
- vcvtq_f32_s32(r_trunc)); /* compute delta: delta = (a - [a]) */
- uint32x4_t is_delta_half =
- vceqq_f32(delta, half); /* delta == +/- 0.5 */
- return vreinterpretq_m128i_s32(
- vbslq_s32(is_delta_half, r_even, r_normal));
-#endif
-}
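// Scalar sketch of the 'round half to even' rule emulated by the ARMv7 path
// above (illustrative only; round_half_to_even is a hypothetical helper,
// requires <math.h> for floorf, and assumes the value fits in an int32_t).
static inline int32_t round_half_to_even(float a)
{
	float fl = floorf(a);
	float diff = a - fl;
	int32_t r = (int32_t)fl;
	if (diff > 0.5f)
		r += 1;
	else if (diff == 0.5f && (r & 1)) /* tie: go to the even neighbour */
		r += 1;
	return r;
}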
-
-// Moves the least significant 32 bits of a to a 32-bit integer.
-// https://msdn.microsoft.com/en-us/library/5z7a9642%28v=vs.90%29.aspx
-FORCE_INLINE int _mm_cvtsi128_si32(__m128i a)
-{
- return vgetq_lane_s32(vreinterpretq_s32_m128i(a), 0);
-}
-
-// Extracts the low order 64-bit integer from the parameter.
-// https://msdn.microsoft.com/en-us/library/bb531384(v=vs.120).aspx
-FORCE_INLINE uint64_t _mm_cvtsi128_si64(__m128i a)
-{
- return vgetq_lane_s64(vreinterpretq_s64_m128i(a), 0);
-}
-
-// Moves 32-bit integer a to the least significant 32 bits of an __m128 object,
-// zero extending the upper bits.
-//
-// r0 := a
-// r1 := 0x0
-// r2 := 0x0
-// r3 := 0x0
-//
-// https://msdn.microsoft.com/en-us/library/ct3539ha%28v=vs.90%29.aspx
-FORCE_INLINE __m128i _mm_cvtsi32_si128(int a)
-{
- return vreinterpretq_m128i_s32(vsetq_lane_s32(a, vdupq_n_s32(0), 0));
-}
-
-// Moves 64-bit integer a to the least significant 64 bits of an __m128 object,
-// zero extending the upper bits.
-//
-// r0 := a
-// r1 := 0x0
-FORCE_INLINE __m128i _mm_cvtsi64_si128(int64_t a)
-{
- return vreinterpretq_m128i_s64(vsetq_lane_s64(a, vdupq_n_s64(0), 0));
-}
-
-// Applies a type cast to reinterpret four 32-bit floating point values passed
-// in as a 128-bit parameter as packed 32-bit integers.
-// https://msdn.microsoft.com/en-us/library/bb514099.aspx
-FORCE_INLINE __m128i _mm_castps_si128(__m128 a)
-{
- return vreinterpretq_m128i_s32(vreinterpretq_s32_m128(a));
-}
-
-// Applies a type cast to reinterpret four 32-bit integers passed in as a
-// 128-bit parameter as packed 32-bit floating point values.
-// https://msdn.microsoft.com/en-us/library/bb514029.aspx
-FORCE_INLINE __m128 _mm_castsi128_ps(__m128i a)
-{
- return vreinterpretq_m128_s32(vreinterpretq_s32_m128i(a));
-}
-
-// Loads 128-bit value. :
-// https://msdn.microsoft.com/en-us/library/atzzad1h(v=vs.80).aspx
-FORCE_INLINE __m128i _mm_load_si128(const __m128i *p)
-{
- return vreinterpretq_m128i_s32(vld1q_s32((const int32_t *)p));
-}
-
-// Loads 128-bit value. :
-// https://msdn.microsoft.com/zh-cn/library/f4k12ae8(v=vs.90).aspx
-FORCE_INLINE __m128i _mm_loadu_si128(const __m128i *p)
-{
- return vreinterpretq_m128i_s32(vld1q_s32((const int32_t *)p));
-}
-
-// _mm_lddqu_si128 functions the same as _mm_loadu_si128.
-#define _mm_lddqu_si128 _mm_loadu_si128
-
-/* Miscellaneous Operations */
-
-// Shifts the 8 signed 16-bit integers in a right by count bits while shifting
-// in the sign bit.
-//
-// r0 := a0 >> count
-// r1 := a1 >> count
-// ...
-// r7 := a7 >> count
-//
-// https://msdn.microsoft.com/en-us/library/3c9997dk(v%3dvs.90).aspx
-FORCE_INLINE __m128i _mm_sra_epi16(__m128i a, __m128i count)
-{
- int64_t c = (int64_t)vget_low_s64((int64x2_t)count);
- if (c > 15)
- return _mm_cmplt_epi16(a, _mm_setzero_si128());
- return vreinterpretq_m128i_s16(
- vshlq_s16((int16x8_t)a, vdupq_n_s16(-c)));
-}
-
-// Shifts the 4 signed 32-bit integers in a right by count bits while shifting
-// in the sign bit.
-//
-// r0 := a0 >> count
-// r1 := a1 >> count
-// r2 := a2 >> count
-// r3 := a3 >> count
-//
-// https://msdn.microsoft.com/en-us/library/ce40009e(v%3dvs.100).aspx
-FORCE_INLINE __m128i _mm_sra_epi32(__m128i a, __m128i count)
-{
- int64_t c = (int64_t)vget_low_s64((int64x2_t)count);
- if (c > 31)
- return _mm_cmplt_epi32(a, _mm_setzero_si128());
- return vreinterpretq_m128i_s32(
- vshlq_s32((int32x4_t)a, vdupq_n_s32(-c)));
-}
-
-// Packs the 16 signed 16-bit integers from a and b into 8-bit integers and
-// saturates.
-// https://msdn.microsoft.com/en-us/library/k4y4f7w5%28v=vs.90%29.aspx
-FORCE_INLINE __m128i _mm_packs_epi16(__m128i a, __m128i b)
-{
- return vreinterpretq_m128i_s8(
- vcombine_s8(vqmovn_s16(vreinterpretq_s16_m128i(a)),
- vqmovn_s16(vreinterpretq_s16_m128i(b))));
-}
-
-// Packs the 16 signed 16-bit integers from a and b into 8-bit unsigned
-// integers and saturates.
-//
-// r0 := UnsignedSaturate(a0)
-// r1 := UnsignedSaturate(a1)
-// ...
-// r7 := UnsignedSaturate(a7)
-// r8 := UnsignedSaturate(b0)
-// r9 := UnsignedSaturate(b1)
-// ...
-// r15 := UnsignedSaturate(b7)
-//
-// https://msdn.microsoft.com/en-us/library/07ad1wx4(v=vs.100).aspx
-FORCE_INLINE __m128i _mm_packus_epi16(const __m128i a, const __m128i b)
-{
- return vreinterpretq_m128i_u8(
- vcombine_u8(vqmovun_s16(vreinterpretq_s16_m128i(a)),
- vqmovun_s16(vreinterpretq_s16_m128i(b))));
-}
-
-// Packs the 8 signed 32-bit integers from a and b into signed 16-bit integers
-// and saturates.
-//
-// r0 := SignedSaturate(a0)
-// r1 := SignedSaturate(a1)
-// r2 := SignedSaturate(a2)
-// r3 := SignedSaturate(a3)
-// r4 := SignedSaturate(b0)
-// r5 := SignedSaturate(b1)
-// r6 := SignedSaturate(b2)
-// r7 := SignedSaturate(b3)
-//
-// https://msdn.microsoft.com/en-us/library/393t56f9%28v=vs.90%29.aspx
-FORCE_INLINE __m128i _mm_packs_epi32(__m128i a, __m128i b)
-{
- return vreinterpretq_m128i_s16(
- vcombine_s16(vqmovn_s32(vreinterpretq_s32_m128i(a)),
- vqmovn_s32(vreinterpretq_s32_m128i(b))));
-}
-
-// Packs the 8 unsigned 32-bit integers from a and b into unsigned 16-bit
-// integers and saturates.
-//
-// r0 := UnsignedSaturate(a0)
-// r1 := UnsignedSaturate(a1)
-// r2 := UnsignedSaturate(a2)
-// r3 := UnsignedSaturate(a3)
-// r4 := UnsignedSaturate(b0)
-// r5 := UnsignedSaturate(b1)
-// r6 := UnsignedSaturate(b2)
-// r7 := UnsignedSaturate(b3)
-FORCE_INLINE __m128i _mm_packus_epi32(__m128i a, __m128i b)
-{
- return vreinterpretq_m128i_u16(
- vcombine_u16(vqmovn_u32(vreinterpretq_u32_m128i(a)),
- vqmovn_u32(vreinterpretq_u32_m128i(b))));
-}
-
-// Interleaves the lower 8 signed or unsigned 8-bit integers in a with the lower
-// 8 signed or unsigned 8-bit integers in b.
-//
-// r0 := a0
-// r1 := b0
-// r2 := a1
-// r3 := b1
-// ...
-// r14 := a7
-// r15 := b7
-//
-// https://msdn.microsoft.com/en-us/library/xf7k860c%28v=vs.90%29.aspx
-FORCE_INLINE __m128i _mm_unpacklo_epi8(__m128i a, __m128i b)
-{
-#if defined(__aarch64__)
- return vreinterpretq_m128i_s8(vzip1q_s8(vreinterpretq_s8_m128i(a),
- vreinterpretq_s8_m128i(b)));
-#else
- int8x8_t a1 =
- vreinterpret_s8_s16(vget_low_s16(vreinterpretq_s16_m128i(a)));
- int8x8_t b1 =
- vreinterpret_s8_s16(vget_low_s16(vreinterpretq_s16_m128i(b)));
- int8x8x2_t result = vzip_s8(a1, b1);
- return vreinterpretq_m128i_s8(
- vcombine_s8(result.val[0], result.val[1]));
-#endif
-}
-
-// Interleaves the lower 4 signed or unsigned 16-bit integers in a with the
-// lower 4 signed or unsigned 16-bit integers in b.
-//
-// r0 := a0
-// r1 := b0
-// r2 := a1
-// r3 := b1
-// r4 := a2
-// r5 := b2
-// r6 := a3
-// r7 := b3
-//
-// https://msdn.microsoft.com/en-us/library/btxb17bw%28v=vs.90%29.aspx
-FORCE_INLINE __m128i _mm_unpacklo_epi16(__m128i a, __m128i b)
-{
-#if defined(__aarch64__)
- return vreinterpretq_m128i_s16(vzip1q_s16(vreinterpretq_s16_m128i(a),
- vreinterpretq_s16_m128i(b)));
-#else
- int16x4_t a1 = vget_low_s16(vreinterpretq_s16_m128i(a));
- int16x4_t b1 = vget_low_s16(vreinterpretq_s16_m128i(b));
- int16x4x2_t result = vzip_s16(a1, b1);
- return vreinterpretq_m128i_s16(
- vcombine_s16(result.val[0], result.val[1]));
-#endif
-}
-
-// Interleaves the lower 2 signed or unsigned 32-bit integers in a with the
-// lower 2 signed or unsigned 32-bit integers in b.
-//
-// r0 := a0
-// r1 := b0
-// r2 := a1
-// r3 := b1
-//
-// https://msdn.microsoft.com/en-us/library/x8atst9d(v=vs.100).aspx
-FORCE_INLINE __m128i _mm_unpacklo_epi32(__m128i a, __m128i b)
-{
-#if defined(__aarch64__)
- return vreinterpretq_m128i_s32(vzip1q_s32(vreinterpretq_s32_m128i(a),
- vreinterpretq_s32_m128i(b)));
-#else
- int32x2_t a1 = vget_low_s32(vreinterpretq_s32_m128i(a));
- int32x2_t b1 = vget_low_s32(vreinterpretq_s32_m128i(b));
- int32x2x2_t result = vzip_s32(a1, b1);
- return vreinterpretq_m128i_s32(
- vcombine_s32(result.val[0], result.val[1]));
-#endif
-}
-
-FORCE_INLINE __m128i _mm_unpacklo_epi64(__m128i a, __m128i b)
-{
- int64x1_t a_l = vget_low_s64(vreinterpretq_s64_m128i(a));
- int64x1_t b_l = vget_low_s64(vreinterpretq_s64_m128i(b));
- return vreinterpretq_m128i_s64(vcombine_s64(a_l, b_l));
-}
-
-// Selects and interleaves the lower two single-precision, floating-point values
-// from a and b.
-//
-// r0 := a0
-// r1 := b0
-// r2 := a1
-// r3 := b1
-//
-// https://msdn.microsoft.com/en-us/library/25st103b%28v=vs.90%29.aspx
-FORCE_INLINE __m128 _mm_unpacklo_ps(__m128 a, __m128 b)
-{
-#if defined(__aarch64__)
- return vreinterpretq_m128_f32(vzip1q_f32(vreinterpretq_f32_m128(a),
- vreinterpretq_f32_m128(b)));
-#else
- float32x2_t a1 = vget_low_f32(vreinterpretq_f32_m128(a));
- float32x2_t b1 = vget_low_f32(vreinterpretq_f32_m128(b));
- float32x2x2_t result = vzip_f32(a1, b1);
- return vreinterpretq_m128_f32(
- vcombine_f32(result.val[0], result.val[1]));
-#endif
-}
-
-// Selects and interleaves the upper two single-precision, floating-point values
-// from a and b.
-//
-// r0 := a2
-// r1 := b2
-// r2 := a3
-// r3 := b3
-//
-// https://msdn.microsoft.com/en-us/library/skccxx7d%28v=vs.90%29.aspx
-FORCE_INLINE __m128 _mm_unpackhi_ps(__m128 a, __m128 b)
-{
-#if defined(__aarch64__)
- return vreinterpretq_m128_f32(vzip2q_f32(vreinterpretq_f32_m128(a),
- vreinterpretq_f32_m128(b)));
-#else
- float32x2_t a1 = vget_high_f32(vreinterpretq_f32_m128(a));
- float32x2_t b1 = vget_high_f32(vreinterpretq_f32_m128(b));
- float32x2x2_t result = vzip_f32(a1, b1);
- return vreinterpretq_m128_f32(
- vcombine_f32(result.val[0], result.val[1]));
-#endif
-}
-
-// Interleaves the upper 8 signed or unsigned 8-bit integers in a with the upper
-// 8 signed or unsigned 8-bit integers in b.
-//
-// r0 := a8
-// r1 := b8
-// r2 := a9
-// r3 := b9
-// ...
-// r14 := a15
-// r15 := b15
-//
-// https://msdn.microsoft.com/en-us/library/t5h7783k(v=vs.100).aspx
-FORCE_INLINE __m128i _mm_unpackhi_epi8(__m128i a, __m128i b)
-{
-#if defined(__aarch64__)
- return vreinterpretq_m128i_s8(vzip2q_s8(vreinterpretq_s8_m128i(a),
- vreinterpretq_s8_m128i(b)));
-#else
- int8x8_t a1 =
- vreinterpret_s8_s16(vget_high_s16(vreinterpretq_s16_m128i(a)));
- int8x8_t b1 =
- vreinterpret_s8_s16(vget_high_s16(vreinterpretq_s16_m128i(b)));
- int8x8x2_t result = vzip_s8(a1, b1);
- return vreinterpretq_m128i_s8(
- vcombine_s8(result.val[0], result.val[1]));
-#endif
-}
-
-// Interleaves the upper 4 signed or unsigned 16-bit integers in a with the
-// upper 4 signed or unsigned 16-bit integers in b.
-//
-// r0 := a4
-// r1 := b4
-// r2 := a5
-// r3 := b5
-// r4 := a6
-// r5 := b6
-// r6 := a7
-// r7 := b7
-//
-// https://msdn.microsoft.com/en-us/library/03196cz7(v=vs.100).aspx
-FORCE_INLINE __m128i _mm_unpackhi_epi16(__m128i a, __m128i b)
-{
-#if defined(__aarch64__)
- return vreinterpretq_m128i_s16(vzip2q_s16(vreinterpretq_s16_m128i(a),
- vreinterpretq_s16_m128i(b)));
-#else
- int16x4_t a1 = vget_high_s16(vreinterpretq_s16_m128i(a));
- int16x4_t b1 = vget_high_s16(vreinterpretq_s16_m128i(b));
- int16x4x2_t result = vzip_s16(a1, b1);
- return vreinterpretq_m128i_s16(
- vcombine_s16(result.val[0], result.val[1]));
-#endif
-}
-
-// Interleaves the upper 2 signed or unsigned 32-bit integers in a with the
-// upper 2 signed or unsigned 32-bit integers in b.
-// https://msdn.microsoft.com/en-us/library/65sa7cbs(v=vs.100).aspx
-FORCE_INLINE __m128i _mm_unpackhi_epi32(__m128i a, __m128i b)
-{
-#if defined(__aarch64__)
- return vreinterpretq_m128i_s32(vzip2q_s32(vreinterpretq_s32_m128i(a),
- vreinterpretq_s32_m128i(b)));
-#else
- int32x2_t a1 = vget_high_s32(vreinterpretq_s32_m128i(a));
- int32x2_t b1 = vget_high_s32(vreinterpretq_s32_m128i(b));
- int32x2x2_t result = vzip_s32(a1, b1);
- return vreinterpretq_m128i_s32(
- vcombine_s32(result.val[0], result.val[1]));
-#endif
-}
-
-// Interleaves the upper signed or unsigned 64-bit integer in a with the
-// upper signed or unsigned 64-bit integer in b.
-//
-// r0 := a1
-// r1 := b1
-FORCE_INLINE __m128i _mm_unpackhi_epi64(__m128i a, __m128i b)
-{
- int64x1_t a_h = vget_high_s64(vreinterpretq_s64_m128i(a));
- int64x1_t b_h = vget_high_s64(vreinterpretq_s64_m128i(b));
- return vreinterpretq_m128i_s64(vcombine_s64(a_h, b_h));
-}
-
-// Horizontally compute the minimum amongst the packed unsigned 16-bit integers
-// in a, store the minimum and index in dst, and zero the remaining bits in dst.
-//
-// index[2:0] := 0
-// min[15:0] := a[15:0]
-// FOR j := 0 to 7
-// i := j*16
-// IF a[i+15:i] < min[15:0]
-// index[2:0] := j
-// min[15:0] := a[i+15:i]
-// FI
-// ENDFOR
-// dst[15:0] := min[15:0]
-// dst[18:16] := index[2:0]
-// dst[127:19] := 0
-//
-// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_minpos_epu16&expand=3789
-FORCE_INLINE __m128i _mm_minpos_epu16(__m128i a)
-{
- __m128i dst;
- uint16_t min, idx = 0;
- // Find the minimum value
-#if defined(__aarch64__)
- min = vminvq_u16(vreinterpretq_u16_m128i(a));
-#else
- __m64i tmp;
- tmp = vreinterpret_m64i_u16(
- vmin_u16(vget_low_u16(vreinterpretq_u16_m128i(a)),
- vget_high_u16(vreinterpretq_u16_m128i(a))));
- tmp = vreinterpret_m64i_u16(vpmin_u16(vreinterpret_u16_m64i(tmp),
- vreinterpret_u16_m64i(tmp)));
- tmp = vreinterpret_m64i_u16(vpmin_u16(vreinterpret_u16_m64i(tmp),
- vreinterpret_u16_m64i(tmp)));
- min = vget_lane_u16(vreinterpret_u16_m64i(tmp), 0);
-#endif
- // Get the index of the minimum value
- int i;
- for (i = 0; i < 8; i++) {
- if (min == vgetq_lane_u16(vreinterpretq_u16_m128i(a), 0)) {
- idx = (uint16_t)i;
- break;
- }
- a = _mm_srli_si128(a, 2);
- }
- // Generate result
- dst = _mm_setzero_si128();
- dst = vreinterpretq_m128i_u16(
- vsetq_lane_u16(min, vreinterpretq_u16_m128i(dst), 0));
- dst = vreinterpretq_m128i_u16(
- vsetq_lane_u16(idx, vreinterpretq_u16_m128i(dst), 1));
- return dst;
-}
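// Scalar model of the semantics above (illustrative sketch;
// minpos_epu16_scalar is a hypothetical helper): find the smallest 16-bit lane
// and the index of its first occurrence, which the intrinsic packs into
// dst[15:0] and dst[18:16].
static inline void minpos_epu16_scalar(const uint16_t a[8], uint16_t *min_out,
				       uint16_t *idx_out)
{
	uint16_t min = a[0], idx = 0;
	for (int j = 1; j < 8; j++) {
		if (a[j] < min) {
			min = a[j];
			idx = (uint16_t)j;
		}
	}
	*min_out = min;
	*idx_out = idx;
}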
-
-// shift to right
-// https://msdn.microsoft.com/en-us/library/bb514041(v=vs.120).aspx
-// http://blog.csdn.net/hemmingway/article/details/44828303
-// Clang requires a macro here, as it is extremely picky about c being a
-// literal.
-#define _mm_alignr_epi8(a, b, c) \
- ((__m128i)vextq_s8((int8x16_t)(b), (int8x16_t)(a), (c)))
-
-// Extracts the selected signed or unsigned 8-bit integer from a and zero
-// extends.
-// FORCE_INLINE int _mm_extract_epi8(__m128i a, __constrange(0,16) int imm)
-#define _mm_extract_epi8(a, imm) vgetq_lane_u8(vreinterpretq_u8_m128i(a), (imm))
-
-// Inserts the least significant 8 bits of b into the selected 8-bit integer
-// of a.
-// FORCE_INLINE __m128i _mm_insert_epi8(__m128i a, int b,
-// __constrange(0,16) int imm)
-#define _mm_insert_epi8(a, b, imm) \
- __extension__({ \
- vreinterpretq_m128i_s8( \
- vsetq_lane_s8((b), vreinterpretq_s8_m128i(a), (imm))); \
- })
-
-// Extracts the selected signed or unsigned 16-bit integer from a and zero
-// extends.
-// https://msdn.microsoft.com/en-us/library/6dceta0c(v=vs.100).aspx
-// FORCE_INLINE int _mm_extract_epi16(__m128i a, __constrange(0,8) int imm)
-#define _mm_extract_epi16(a, imm) \
- vgetq_lane_u16(vreinterpretq_u16_m128i(a), (imm))
-
-// Inserts the least significant 16 bits of b into the selected 16-bit integer
-// of a.
-// https://msdn.microsoft.com/en-us/library/kaze8hz1%28v=vs.100%29.aspx
-// FORCE_INLINE __m128i _mm_insert_epi16(__m128i a, int b,
-// __constrange(0,8) int imm)
-#define _mm_insert_epi16(a, b, imm) \
- __extension__({ \
- vreinterpretq_m128i_s16(vsetq_lane_s16( \
- (b), vreinterpretq_s16_m128i(a), (imm))); \
- })
-
-// Extracts the selected signed or unsigned 32-bit integer from a and zero
-// extends.
-// FORCE_INLINE int _mm_extract_epi32(__m128i a, __constrange(0,4) int imm)
-#define _mm_extract_epi32(a, imm) \
- vgetq_lane_s32(vreinterpretq_s32_m128i(a), (imm))
-
-// Extracts the selected single-precision (32-bit) floating-point value from a.
-// FORCE_INLINE int _mm_extract_ps(__m128 a, __constrange(0,4) int imm)
-#define _mm_extract_ps(a, imm) vgetq_lane_s32(vreinterpretq_s32_m128(a), (imm))
-
-// Inserts the least significant 32 bits of b into the selected 32-bit integer
-// of a.
-// FORCE_INLINE __m128i _mm_insert_epi32(__m128i a, int b,
-// __constrange(0,4) int imm)
-#define _mm_insert_epi32(a, b, imm) \
- __extension__({ \
- vreinterpretq_m128i_s32(vsetq_lane_s32( \
- (b), vreinterpretq_s32_m128i(a), (imm))); \
- })
-
-// Extracts the selected signed or unsigned 64-bit integer from a and zero
-// extends.
-// FORCE_INLINE __int64 _mm_extract_epi64(__m128i a, __constrange(0,2) int imm)
-#define _mm_extract_epi64(a, imm) \
- vgetq_lane_s64(vreinterpretq_s64_m128i(a), (imm))
-
-// Inserts the least significant 64 bits of b into the selected 64-bit integer
-// of a.
-// FORCE_INLINE __m128i _mm_insert_epi64(__m128i a, __int64 b,
-// __constrange(0,2) int imm)
-#define _mm_insert_epi64(a, b, imm) \
- __extension__({ \
- vreinterpretq_m128i_s64(vsetq_lane_s64( \
- (b), vreinterpretq_s64_m128i(a), (imm))); \
- })
-
-// Count the number of bits set to 1 in unsigned 32-bit integer a, and
-// return that count in dst.
-// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_popcnt_u32
-FORCE_INLINE int _mm_popcnt_u32(unsigned int a)
-{
-#if defined(__aarch64__)
-#if __has_builtin(__builtin_popcount)
- return __builtin_popcount(a);
-#else
- return (int)vaddlv_u8(vcnt_u8(vcreate_u8((uint64_t)a)));
-#endif
-#else
- uint32_t count = 0;
- uint8x8_t input_val, count8x8_val;
- uint16x4_t count16x4_val;
- uint32x2_t count32x2_val;
-
- input_val = vld1_u8((uint8_t *)&a);
- count8x8_val = vcnt_u8(input_val);
- count16x4_val = vpaddl_u8(count8x8_val);
- count32x2_val = vpaddl_u16(count16x4_val);
-
- vst1_u32(&count, count32x2_val);
- return count;
-#endif
-}
-
-// Count the number of bits set to 1 in unsigned 64-bit integer a, and
-// return that count in dst.
-// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=_mm_popcnt_u64
-FORCE_INLINE int64_t _mm_popcnt_u64(uint64_t a)
-{
-#if defined(__aarch64__)
-#if __has_builtin(__builtin_popcountll)
- return __builtin_popcountll(a);
-#else
- return (int64_t)vaddlv_u8(vcnt_u8(vcreate_u8(a)));
-#endif
-#else
- uint64_t count = 0;
- uint8x8_t input_val, count8x8_val;
- uint16x4_t count16x4_val;
- uint32x2_t count32x2_val;
- uint64x1_t count64x1_val;
-
- input_val = vld1_u8((uint8_t *)&a);
- count8x8_val = vcnt_u8(input_val);
- count16x4_val = vpaddl_u8(count8x8_val);
- count32x2_val = vpaddl_u16(count16x4_val);
- count64x1_val = vpaddl_u32(count32x2_val);
- vst1_u64(&count, count64x1_val);
- return count;
-#endif
-}
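// Quick sanity check for the two helpers above (illustrative only):
// 0xF0F0F0F0 has 16 set bits and 0xF0F0F0F0F0F0F0F0 has 32.
static inline int popcnt_sanity_check(void)
{
	return _mm_popcnt_u32(0xF0F0F0F0u) == 16 &&
	       _mm_popcnt_u64(0xF0F0F0F0F0F0F0F0ull) == 32;
}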
-
-// Macro: Transpose the 4x4 matrix formed by the 4 rows of single-precision
-// (32-bit) floating-point elements in row0, row1, row2, and row3, and store the
-// transposed matrix in these vectors (row0 now contains column 0, etc.).
-// https://software.intel.com/sites/landingpage/IntrinsicsGuide/#text=MM_TRANSPOSE4_PS&expand=5949
-#define _MM_TRANSPOSE4_PS(row0, row1, row2, row3) \
- do { \
- __m128 tmp0, tmp1, tmp2, tmp3; \
- tmp0 = _mm_unpacklo_ps(row0, row1); \
- tmp2 = _mm_unpacklo_ps(row2, row3); \
- tmp1 = _mm_unpackhi_ps(row0, row1); \
- tmp3 = _mm_unpackhi_ps(row2, row3); \
- row0 = _mm_movelh_ps(tmp0, tmp2); \
- row1 = _mm_movehl_ps(tmp2, tmp0); \
- row2 = _mm_movelh_ps(tmp1, tmp3); \
- row3 = _mm_movehl_ps(tmp3, tmp1); \
- } while (0)
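// Illustrative use of the macro above (assumes _mm_setr_ps, defined earlier in
// this header; the row values are arbitrary): the four rows are transposed in
// place, so row0 ends up holding the original first column, and so on.
static inline void transpose4x4_example(void)
{
	__m128 row0 = _mm_setr_ps(1.0f, 2.0f, 3.0f, 4.0f);
	__m128 row1 = _mm_setr_ps(5.0f, 6.0f, 7.0f, 8.0f);
	__m128 row2 = _mm_setr_ps(9.0f, 10.0f, 11.0f, 12.0f);
	__m128 row3 = _mm_setr_ps(13.0f, 14.0f, 15.0f, 16.0f);
	_MM_TRANSPOSE4_PS(row0, row1, row2, row3);
	/* row0 == {1, 5, 9, 13}, row1 == {2, 6, 10, 14}, ... */
	(void)row0;
	(void)row1;
	(void)row2;
	(void)row3;
}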
-
-/* Crypto Extensions */
-
-#if defined(__ARM_FEATURE_CRYPTO)
-// Wraps vmull_p64
-FORCE_INLINE uint64x2_t _sse2neon_vmull_p64(uint64x1_t _a, uint64x1_t _b)
-{
- poly64_t a = vget_lane_p64(vreinterpret_p64_u64(_a), 0);
- poly64_t b = vget_lane_p64(vreinterpret_p64_u64(_b), 0);
- return vreinterpretq_u64_p128(vmull_p64(a, b));
-}
-#else // ARMv7 polyfill
-// ARMv7/some A64 lacks vmull_p64, but it has vmull_p8.
-//
-// vmull_p8 calculates 8 8-bit->16-bit polynomial multiplies, but we need a
-// 64-bit->128-bit polynomial multiply.
-//
-// It needs some work and is somewhat slow, but it is still faster than all
-// known scalar methods.
-//
-// Algorithm adapted to C from
-// https://www.workofard.com/2017/07/ghash-for-low-end-cores/, which is adapted
-// from "Fast Software Polynomial Multiplication on ARM Processors Using the
-// NEON Engine" by Danilo Camara, Conrado Gouvea, Julio Lopez and Ricardo Dahab
-// (https://hal.inria.fr/hal-01506572)
-static uint64x2_t _sse2neon_vmull_p64(uint64x1_t _a, uint64x1_t _b)
-{
- poly8x8_t a = vreinterpret_p8_u64(_a);
- poly8x8_t b = vreinterpret_p8_u64(_b);
-
- // Masks
- uint8x16_t k48_32 = vcombine_u8(vcreate_u8(0x0000ffffffffffff),
- vcreate_u8(0x00000000ffffffff));
- uint8x16_t k16_00 = vcombine_u8(vcreate_u8(0x000000000000ffff),
- vcreate_u8(0x0000000000000000));
-
- // Do the multiplies, rotating with vext to get all combinations
- uint8x16_t d = vreinterpretq_u8_p16(vmull_p8(a, b)); // D = A0 * B0
- uint8x16_t e = vreinterpretq_u8_p16(
- vmull_p8(a, vext_p8(b, b, 1))); // E = A0 * B1
- uint8x16_t f = vreinterpretq_u8_p16(
- vmull_p8(vext_p8(a, a, 1), b)); // F = A1 * B0
- uint8x16_t g = vreinterpretq_u8_p16(
- vmull_p8(a, vext_p8(b, b, 2))); // G = A0 * B2
- uint8x16_t h = vreinterpretq_u8_p16(
- vmull_p8(vext_p8(a, a, 2), b)); // H = A2 * B0
- uint8x16_t i = vreinterpretq_u8_p16(
- vmull_p8(a, vext_p8(b, b, 3))); // I = A0 * B3
- uint8x16_t j = vreinterpretq_u8_p16(
- vmull_p8(vext_p8(a, a, 3), b)); // J = A3 * B0
- uint8x16_t k = vreinterpretq_u8_p16(
-		vmull_p8(a, vext_p8(b, b, 4))); // K = A0 * B4
-
- // Add cross products
- uint8x16_t l = veorq_u8(e, f); // L = E + F
- uint8x16_t m = veorq_u8(g, h); // M = G + H
- uint8x16_t n = veorq_u8(i, j); // N = I + J
-
- // Interleave. Using vzip1 and vzip2 prevents Clang from emitting TBL
- // instructions.
-#if defined(__aarch64__)
- uint8x16_t lm_p0 = vreinterpretq_u8_u64(
- vzip1q_u64(vreinterpretq_u64_u8(l), vreinterpretq_u64_u8(m)));
- uint8x16_t lm_p1 = vreinterpretq_u8_u64(
- vzip2q_u64(vreinterpretq_u64_u8(l), vreinterpretq_u64_u8(m)));
- uint8x16_t nk_p0 = vreinterpretq_u8_u64(
- vzip1q_u64(vreinterpretq_u64_u8(n), vreinterpretq_u64_u8(k)));
- uint8x16_t nk_p1 = vreinterpretq_u8_u64(
- vzip2q_u64(vreinterpretq_u64_u8(n), vreinterpretq_u64_u8(k)));
-#else
- uint8x16_t lm_p0 = vcombine_u8(vget_low_u8(l), vget_low_u8(m));
- uint8x16_t lm_p1 = vcombine_u8(vget_high_u8(l), vget_high_u8(m));
- uint8x16_t nk_p0 = vcombine_u8(vget_low_u8(n), vget_low_u8(k));
- uint8x16_t nk_p1 = vcombine_u8(vget_high_u8(n), vget_high_u8(k));
-#endif
- // t0 = (L) (P0 + P1) << 8
- // t1 = (M) (P2 + P3) << 16
- uint8x16_t t0t1_tmp = veorq_u8(lm_p0, lm_p1);
- uint8x16_t t0t1_h = vandq_u8(lm_p1, k48_32);
- uint8x16_t t0t1_l = veorq_u8(t0t1_tmp, t0t1_h);
-
- // t2 = (N) (P4 + P5) << 24
- // t3 = (K) (P6 + P7) << 32
- uint8x16_t t2t3_tmp = veorq_u8(nk_p0, nk_p1);
- uint8x16_t t2t3_h = vandq_u8(nk_p1, k16_00);
- uint8x16_t t2t3_l = veorq_u8(t2t3_tmp, t2t3_h);
-
- // De-interleave
-#if defined(__aarch64__)
- uint8x16_t t0 = vreinterpretq_u8_u64(vuzp1q_u64(
- vreinterpretq_u64_u8(t0t1_l), vreinterpretq_u64_u8(t0t1_h)));
- uint8x16_t t1 = vreinterpretq_u8_u64(vuzp2q_u64(
- vreinterpretq_u64_u8(t0t1_l), vreinterpretq_u64_u8(t0t1_h)));
- uint8x16_t t2 = vreinterpretq_u8_u64(vuzp1q_u64(
- vreinterpretq_u64_u8(t2t3_l), vreinterpretq_u64_u8(t2t3_h)));
- uint8x16_t t3 = vreinterpretq_u8_u64(vuzp2q_u64(
- vreinterpretq_u64_u8(t2t3_l), vreinterpretq_u64_u8(t2t3_h)));
-#else
- uint8x16_t t1 = vcombine_u8(vget_high_u8(t0t1_l), vget_high_u8(t0t1_h));
- uint8x16_t t0 = vcombine_u8(vget_low_u8(t0t1_l), vget_low_u8(t0t1_h));
- uint8x16_t t3 = vcombine_u8(vget_high_u8(t2t3_l), vget_high_u8(t2t3_h));
- uint8x16_t t2 = vcombine_u8(vget_low_u8(t2t3_l), vget_low_u8(t2t3_h));
-#endif
- // Shift the cross products
- uint8x16_t t0_shift = vextq_u8(t0, t0, 15); // t0 << 8
- uint8x16_t t1_shift = vextq_u8(t1, t1, 14); // t1 << 16
- uint8x16_t t2_shift = vextq_u8(t2, t2, 13); // t2 << 24
- uint8x16_t t3_shift = vextq_u8(t3, t3, 12); // t3 << 32
-
- // Accumulate the products
- uint8x16_t cross1 = veorq_u8(t0_shift, t1_shift);
- uint8x16_t cross2 = veorq_u8(t2_shift, t3_shift);
- uint8x16_t mix = veorq_u8(d, cross1);
- uint8x16_t r = veorq_u8(mix, cross2);
- return vreinterpretq_u64_u8(r);
-}
-#endif // ARMv7 polyfill
-
-FORCE_INLINE __m128i _mm_clmulepi64_si128(__m128i _a, __m128i _b, const int imm)
-{
- uint64x2_t a = vreinterpretq_u64_m128i(_a);
- uint64x2_t b = vreinterpretq_u64_m128i(_b);
- switch (imm & 0x11) {
- case 0x00:
- return vreinterpretq_m128i_u64(
- _sse2neon_vmull_p64(vget_low_u64(a), vget_low_u64(b)));
- case 0x01:
- return vreinterpretq_m128i_u64(
- _sse2neon_vmull_p64(vget_high_u64(a), vget_low_u64(b)));
- case 0x10:
- return vreinterpretq_m128i_u64(
- _sse2neon_vmull_p64(vget_low_u64(a), vget_high_u64(b)));
- case 0x11:
- return vreinterpretq_m128i_u64(_sse2neon_vmull_p64(
- vget_high_u64(a), vget_high_u64(b)));
- default:
- abort();
- }
-}
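// Scalar reference for one 64x64 -> 128-bit carry-less multiply as selected by
// imm above (illustrative sketch; clmul64_scalar is a hypothetical helper):
// every set bit i of b XORs (a << i) into the 128-bit result.
static inline void clmul64_scalar(uint64_t a, uint64_t b, uint64_t *lo,
				  uint64_t *hi)
{
	uint64_t r_lo = 0, r_hi = 0;
	for (int i = 0; i < 64; i++) {
		if ((b >> i) & 1) {
			r_lo ^= a << i;
			if (i)
				r_hi ^= a >> (64 - i);
		}
	}
	*lo = r_lo;
	*hi = r_hi;
}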
-
-#if !defined(__ARM_FEATURE_CRYPTO) && defined(__aarch64__)
-// In the absence of crypto extensions, implement aesenc using regular neon
-// intrinsics instead. See:
-// https://www.workofard.com/2017/01/accelerated-aes-for-the-arm64-linux-kernel/
-// https://www.workofard.com/2017/07/ghash-for-low-end-cores/ and
-// https://github.com/ColinIanKing/linux-next-mirror/blob/b5f466091e130caaf0735976648f72bd5e09aa84/crypto/aegis128-neon-inner.c#L52
-// for more information. Reproduced with permission of the author.
-FORCE_INLINE __m128i _mm_aesenc_si128(__m128i EncBlock, __m128i RoundKey)
-{
- static const uint8_t crypto_aes_sbox[256] = {
- 0x63, 0x7c, 0x77, 0x7b, 0xf2, 0x6b, 0x6f, 0xc5, 0x30, 0x01,
- 0x67, 0x2b, 0xfe, 0xd7, 0xab, 0x76, 0xca, 0x82, 0xc9, 0x7d,
- 0xfa, 0x59, 0x47, 0xf0, 0xad, 0xd4, 0xa2, 0xaf, 0x9c, 0xa4,
- 0x72, 0xc0, 0xb7, 0xfd, 0x93, 0x26, 0x36, 0x3f, 0xf7, 0xcc,
- 0x34, 0xa5, 0xe5, 0xf1, 0x71, 0xd8, 0x31, 0x15, 0x04, 0xc7,
- 0x23, 0xc3, 0x18, 0x96, 0x05, 0x9a, 0x07, 0x12, 0x80, 0xe2,
- 0xeb, 0x27, 0xb2, 0x75, 0x09, 0x83, 0x2c, 0x1a, 0x1b, 0x6e,
- 0x5a, 0xa0, 0x52, 0x3b, 0xd6, 0xb3, 0x29, 0xe3, 0x2f, 0x84,
- 0x53, 0xd1, 0x00, 0xed, 0x20, 0xfc, 0xb1, 0x5b, 0x6a, 0xcb,
- 0xbe, 0x39, 0x4a, 0x4c, 0x58, 0xcf, 0xd0, 0xef, 0xaa, 0xfb,
- 0x43, 0x4d, 0x33, 0x85, 0x45, 0xf9, 0x02, 0x7f, 0x50, 0x3c,
- 0x9f, 0xa8, 0x51, 0xa3, 0x40, 0x8f, 0x92, 0x9d, 0x38, 0xf5,
- 0xbc, 0xb6, 0xda, 0x21, 0x10, 0xff, 0xf3, 0xd2, 0xcd, 0x0c,
- 0x13, 0xec, 0x5f, 0x97, 0x44, 0x17, 0xc4, 0xa7, 0x7e, 0x3d,
- 0x64, 0x5d, 0x19, 0x73, 0x60, 0x81, 0x4f, 0xdc, 0x22, 0x2a,
- 0x90, 0x88, 0x46, 0xee, 0xb8, 0x14, 0xde, 0x5e, 0x0b, 0xdb,
- 0xe0, 0x32, 0x3a, 0x0a, 0x49, 0x06, 0x24, 0x5c, 0xc2, 0xd3,
- 0xac, 0x62, 0x91, 0x95, 0xe4, 0x79, 0xe7, 0xc8, 0x37, 0x6d,
- 0x8d, 0xd5, 0x4e, 0xa9, 0x6c, 0x56, 0xf4, 0xea, 0x65, 0x7a,
- 0xae, 0x08, 0xba, 0x78, 0x25, 0x2e, 0x1c, 0xa6, 0xb4, 0xc6,
- 0xe8, 0xdd, 0x74, 0x1f, 0x4b, 0xbd, 0x8b, 0x8a, 0x70, 0x3e,
- 0xb5, 0x66, 0x48, 0x03, 0xf6, 0x0e, 0x61, 0x35, 0x57, 0xb9,
- 0x86, 0xc1, 0x1d, 0x9e, 0xe1, 0xf8, 0x98, 0x11, 0x69, 0xd9,
- 0x8e, 0x94, 0x9b, 0x1e, 0x87, 0xe9, 0xce, 0x55, 0x28, 0xdf,
- 0x8c, 0xa1, 0x89, 0x0d, 0xbf, 0xe6, 0x42, 0x68, 0x41, 0x99,
- 0x2d, 0x0f, 0xb0, 0x54, 0xbb, 0x16};
- static const uint8_t shift_rows[] = {0x0, 0x5, 0xa, 0xf, 0x4, 0x9,
- 0xe, 0x3, 0x8, 0xd, 0x2, 0x7,
- 0xc, 0x1, 0x6, 0xb};
- static const uint8_t ror32by8[] = {0x1, 0x2, 0x3, 0x0, 0x5, 0x6,
- 0x7, 0x4, 0x9, 0xa, 0xb, 0x8,
- 0xd, 0xe, 0xf, 0xc};
-
- uint8x16_t v;
- uint8x16_t w = vreinterpretq_u8_m128i(EncBlock);
-
- // shift rows
- w = vqtbl1q_u8(w, vld1q_u8(shift_rows));
-
- // sub bytes
- v = vqtbl4q_u8(vld1q_u8_x4(crypto_aes_sbox), w);
- v = vqtbx4q_u8(v, vld1q_u8_x4(crypto_aes_sbox + 0x40), w - 0x40);
- v = vqtbx4q_u8(v, vld1q_u8_x4(crypto_aes_sbox + 0x80), w - 0x80);
- v = vqtbx4q_u8(v, vld1q_u8_x4(crypto_aes_sbox + 0xc0), w - 0xc0);
-
- // mix columns
- w = (v << 1) ^ (uint8x16_t)(((int8x16_t)v >> 7) & 0x1b);
- w ^= (uint8x16_t)vrev32q_u16((uint16x8_t)v);
- w ^= vqtbl1q_u8(v ^ w, vld1q_u8(ror32by8));
-
- // add round key
- return vreinterpretq_m128i_u8(w) ^ RoundKey;
-}
-#elif defined(__ARM_FEATURE_CRYPTO)
-// Implements the equivalent of 'aesenc' by combining AESE (with an empty key)
-// and AESMC, then manually applying the real key as an xor operation. This
-// unfortunately means an additional xor op; the compiler should be able to
-// optimise this away for repeated calls. See
-// https://blog.michaelbrase.com/2018/05/08/emulating-x86-aes-intrinsics-on-armv8-a
-// for more details.
-inline __m128i _mm_aesenc_si128(__m128i a, __m128i b)
-{
- return vreinterpretq_m128i_u8(
- vaesmcq_u8(
- vaeseq_u8(vreinterpretq_u8_m128i(a), vdupq_n_u8(0))) ^
- vreinterpretq_u8_m128i(b));
-}
-#endif
-
-/* Streaming Extensions */
-
-// Guarantees that every preceding store is globally visible before any
-// subsequent store.
-// https://msdn.microsoft.com/en-us/library/5h2w73d1%28v=vs.90%29.aspx
-FORCE_INLINE void _mm_sfence(void)
-{
- __sync_synchronize();
-}
-
-// Stores the data in a to the address p without polluting the caches. If the
-// cache line containing address p is already in the cache, the cache will be
-// updated. Address p must be 16-byte aligned.
-// https://msdn.microsoft.com/en-us/library/ba08y07y%28v=vs.90%29.aspx
-FORCE_INLINE void _mm_stream_si128(__m128i *p, __m128i a)
-{
-#if __has_builtin(__builtin_nontemporal_store)
- __builtin_nontemporal_store(a, p);
-#else
- vst1q_s64((int64_t *)p, vreinterpretq_s64_m128i(a));
-#endif
-}
-
-// Cache line containing p is flushed and invalidated from all caches in the
-// coherency domain. :
-// https://msdn.microsoft.com/en-us/library/ba08y07y(v=vs.100).aspx
-FORCE_INLINE void _mm_clflush(void const *p)
-{
- (void)p;
-	// no direct NEON equivalent?
-}
-
-// Allocate aligned blocks of memory.
-// https://software.intel.com/en-us/
-// cpp-compiler-developer-guide-and-reference-allocating-and-freeing-aligned-memory-blocks
-FORCE_INLINE void *_mm_malloc(size_t size, size_t align)
-{
- void *ptr;
- if (align == 1)
- return malloc(size);
- if (align == 2 || (sizeof(void *) == 8 && align == 4))
- align = sizeof(void *);
- if (!posix_memalign(&ptr, align, size))
- return ptr;
- return NULL;
-}
-
-FORCE_INLINE void _mm_free(void *addr)
-{
- free(addr);
-}
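// Typical use of the pair above (illustrative only; assumes _mm_setzero_si128
// and _mm_store_si128 from earlier in this header): allocate a 16-byte-aligned
// buffer suitable for aligned __m128i stores, then release it.
static inline int aligned_alloc_example(void)
{
	__m128i *buf = (__m128i *)_mm_malloc(4 * sizeof(__m128i), 16);
	if (!buf)
		return -1;
	_mm_store_si128(buf, _mm_setzero_si128());
	_mm_free(buf);
	return 0;
}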
-
-// Starting with the initial value in crc, accumulates a CRC32 value for
-// unsigned 8-bit integer v.
-// https://msdn.microsoft.com/en-us/library/bb514036(v=vs.100)
-FORCE_INLINE uint32_t _mm_crc32_u8(uint32_t crc, uint8_t v)
-{
-#if defined(__aarch64__) && defined(__ARM_FEATURE_CRC32)
- __asm__ __volatile__("crc32cb %w[c], %w[c], %w[v]\n\t"
- : [c] "+r"(crc)
- : [v] "r"(v));
-#else
- crc ^= v;
- for (int bit = 0; bit < 8; bit++) {
- if (crc & 1)
- crc = (crc >> 1) ^ UINT32_C(0x82f63b78);
- else
- crc = (crc >> 1);
- }
-#endif
- return crc;
-}
-
-// Starting with the initial value in crc, accumulates a CRC32 value for
-// unsigned 16-bit integer v.
-// https://msdn.microsoft.com/en-us/library/bb531411(v=vs.100)
-FORCE_INLINE uint32_t _mm_crc32_u16(uint32_t crc, uint16_t v)
-{
-#if defined(__aarch64__) && defined(__ARM_FEATURE_CRC32)
- __asm__ __volatile__("crc32ch %w[c], %w[c], %w[v]\n\t"
- : [c] "+r"(crc)
- : [v] "r"(v));
-#else
- crc = _mm_crc32_u8(crc, v & 0xff);
- crc = _mm_crc32_u8(crc, (v >> 8) & 0xff);
-#endif
- return crc;
-}
-
-// Starting with the initial value in crc, accumulates a CRC32 value for
-// unsigned 32-bit integer v.
-// https://msdn.microsoft.com/en-us/library/bb531394(v=vs.100)
-FORCE_INLINE uint32_t _mm_crc32_u32(uint32_t crc, uint32_t v)
-{
-#if defined(__aarch64__) && defined(__ARM_FEATURE_CRC32)
- __asm__ __volatile__("crc32cw %w[c], %w[c], %w[v]\n\t"
- : [c] "+r"(crc)
- : [v] "r"(v));
-#else
- crc = _mm_crc32_u16(crc, v & 0xffff);
- crc = _mm_crc32_u16(crc, (v >> 16) & 0xffff);
-#endif
- return crc;
-}
-
-// Starting with the initial value in crc, accumulates a CRC32 value for
-// unsigned 64-bit integer v.
-// https://msdn.microsoft.com/en-us/library/bb514033(v=vs.100)
-FORCE_INLINE uint64_t _mm_crc32_u64(uint64_t crc, uint64_t v)
-{
-#if defined(__aarch64__) && defined(__ARM_FEATURE_CRC32)
- __asm__ __volatile__("crc32cx %w[c], %w[c], %x[v]\n\t"
- : [c] "+r"(crc)
- : [v] "r"(v));
-#else
- crc = _mm_crc32_u32((uint32_t)(crc), v & 0xffffffff);
- crc = _mm_crc32_u32((uint32_t)(crc), (v >> 32) & 0xffffffff);
-#endif
- return crc;
-}
-
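All four helpers above accumulate a CRC-32C (Castagnoli) value: the AArch64 path uses the crc32c* instructions, and the fallback is a reflected bit-by-bit loop over the polynomial 0x82f63b78. A self-contained C sketch of that same fallback, checked against the well-known CRC-32C test vector for "123456789", looks like this (standalone reimplementation for illustration only, not part of the header):

    #include <assert.h>
    #include <stdint.h>
    #include <string.h>

    /* Same reflected CRC-32C accumulation as the fallback above. */
    static uint32_t crc32c_u8(uint32_t crc, uint8_t v)
    {
            crc ^= v;
            for (int bit = 0; bit < 8; bit++)
                    crc = (crc & 1) ? (crc >> 1) ^ UINT32_C(0x82f63b78)
                                    : (crc >> 1);
            return crc;
    }

    int main(void)
    {
            const char *msg = "123456789";
            uint32_t crc = 0xFFFFFFFF;  /* conventional initial value */
            for (size_t i = 0; i < strlen(msg); i++)
                    crc = crc32c_u8(crc, (uint8_t)msg[i]);
            assert((crc ^ 0xFFFFFFFF) == 0xE3069283); /* CRC-32C check value */
            return 0;
    }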
-#if defined(__GNUC__) || defined(__clang__)
-#pragma pop_macro("ALIGN_STRUCT")
-#pragma pop_macro("FORCE_INLINE")
-#endif
-
-#endif
obs-studio-26.1.0.tar.xz/.github/workflows/main.yml -> obs-studio-26.1.1.tar.xz/.github/workflows/main.yml
Changed
runs-on: [macos-latest]
env:
MIN_MACOS_VERSION: '10.13'
- MACOS_DEPS_VERSION: '2020-12-11'
+ MACOS_DEPS_VERSION: '2020-12-22'
VLC_VERSION: '3.0.8'
SPARKLE_VERSION: '1.23.0'
QT_VERSION: '5.15.2'
shell: bash
run: |
if [ -d /usr/local/opt/openssl@1.0.2t ]; then
- brew uninstall openssl@1.0.2t
- brew untap local/openssl
+ brew uninstall openssl@1.0.2t
+ brew untap local/openssl
fi
if [ -d /usr/local/opt/python@2.7.17 ]; then
- brew uninstall python@2.7.17
- brew untap local/python2
+ brew uninstall python@2.7.17
+ brew untap local/python2
+ fi
+
+ if [ -d /usr/local/opt/speexdsp ]; then
+ brew unlink speexdsp
fi
brew bundle --file ./CI/scripts/macos/Brewfile
- name: 'Restore Chromium Embedded Framework from cache'
run: |
mkdir ./build
cd ./build
- cmake -DENABLE_UNIT_TESTS=YES -DENABLE_SPARKLE_UPDATER=ON -DDISABLE_PYTHON=ON -DCMAKE_OSX_DEPLOYMENT_TARGET=${{ env.MIN_MACOS_VERSION }} -DQTDIR="/tmp/obsdeps" -DSWIGDIR="/tmp/obsdeps" -DDepsPath="/tmp/obsdeps" -DVLCPath="${{ github.workspace }}/cmbuild/vlc-${{ env.VLC_VERSION }}" -DENABLE_VLC=ON -DBUILD_BROWSER=ON -DBROWSER_DEPLOY=ON -DWITH_RTMPS=ON -DCEF_ROOT_DIR="${{ github.workspace }}/cmbuild/cef_binary_${{ env.CEF_BUILD_VERSION }}_macosx64" ..
+ LEGACY_BROWSER="$(test "${{ env.CEF_BUILD_VERSION }}" -le 3770 && echo "ON" || echo "OFF")"
+ cmake -DENABLE_UNIT_TESTS=YES -DENABLE_SPARKLE_UPDATER=ON -DDISABLE_PYTHON=ON -DCMAKE_OSX_DEPLOYMENT_TARGET=${{ env.MIN_MACOS_VERSION }} -DQTDIR="/tmp/obsdeps" -DSWIGDIR="/tmp/obsdeps" -DDepsPath="/tmp/obsdeps" -DVLCPath="${{ github.workspace }}/cmbuild/vlc-${{ env.VLC_VERSION }}" -DENABLE_VLC=ON -DBUILD_BROWSER=ON -DBROWSER_LEGACY=$LEGACY_BROWSER -DWITH_RTMPS=ON -DCEF_ROOT_DIR="${{ github.workspace }}/cmbuild/cef_binary_${{ env.CEF_BUILD_VERSION }}_macosx64" ..
- name: 'Build'
shell: bash
working-directory: ${{ github.workspace }}/build
mkdir -p OBS.app/Contents/MacOS
mkdir OBS.app/Contents/PlugIns
mkdir OBS.app/Contents/Resources
+ mkdir OBS.app/Contents/Frameworks
cp rundir/RelWithDebInfo/bin/obs ./OBS.app/Contents/MacOS
cp rundir/RelWithDebInfo/bin/obs-ffmpeg-mux ./OBS.app/Contents/MacOS
+ if ! [ "${{ env.CEF_BUILD_VERSION }}" -le 3770 ]; then
+ cp -R "rundir/RelWithDebInfo/bin/OBS Helper.app" "./OBS.app/Contents/Frameworks/OBS Helper.app"
+ cp -R "rundir/RelWithDebInfo/bin/OBS Helper (GPU).app" "./OBS.app/Contents/Frameworks/OBS Helper (GPU).app"
+ cp -R "rundir/RelWithDebInfo/bin/OBS Helper (Plugin).app" "./OBS.app/Contents/Frameworks/OBS Helper (Plugin).app"
+ cp -R "rundir/RelWithDebInfo/bin/OBS Helper (Renderer).app" "./OBS.app/Contents/Frameworks/OBS Helper (Renderer).app"
+ fi
cp rundir/RelWithDebInfo/bin/libobsglad.0.dylib ./OBS.app/Contents/MacOS
cp -R rundir/RelWithDebInfo/data ./OBS.app/Contents/Resources
cp ../CI/scripts/macos/app/AppIcon.icns ./OBS.app/Contents/Resources
rm -rf ./OBS.app/Contents/Resources/data/obs-scripting/
fi
+ BUNDLE_PLUGINS=(
+ ./OBS.app/Contents/PlugIns/coreaudio-encoder.so
+ ./OBS.app/Contents/PlugIns/decklink-ouput-ui.so
+ ./OBS.app/Contents/PlugIns/decklink-captions.so
+ ./OBS.app/Contents/PlugIns/frontend-tools.so
+ ./OBS.app/Contents/PlugIns/image-source.so
+ ./OBS.app/Contents/PlugIns/mac-avcapture.so
+ ./OBS.app/Contents/PlugIns/mac-capture.so
+ ./OBS.app/Contents/PlugIns/mac-decklink.so
+ ./OBS.app/Contents/PlugIns/mac-syphon.so
+ ./OBS.app/Contents/PlugIns/mac-vth264.so
+ ./OBS.app/Contents/PlugIns/mac-virtualcam.so
+ ./OBS.app/Contents/PlugIns/obs-browser.so
+ ./OBS.app/Contents/PlugIns/obs-ffmpeg.so
+ ./OBS.app/Contents/PlugIns/obs-filters.so
+ ./OBS.app/Contents/PlugIns/obs-transitions.so
+ ./OBS.app/Contents/PlugIns/obs-vst.so
+ ./OBS.app/Contents/PlugIns/rtmp-services.so
+ ./OBS.app/Contents/MacOS/obs-ffmpeg-mux
+ ./OBS.app/Contents/MacOS/obslua.so
+ ./OBS.app/Contents/PlugIns/obs-x264.so
+ ./OBS.app/Contents/PlugIns/text-freetype2.so
+ ./OBS.app/Contents/PlugIns/obs-outputs.so
+ )
+
+ if ! [ "${{ env.CEF_BUILD_VERSION }}" -le 3770 ]; then
../CI/scripts/macos/app/dylibBundler -cd -of -a ./OBS.app -q -f \
-s ./OBS.app/Contents/MacOS \
-s "${{ github.workspace }}/cmbuild/sparkle/Sparkle.framework" \
-s ./rundir/RelWithDebInfo/bin \
- -x ./OBS.app/Contents/PlugIns/coreaudio-encoder.so \
- -x ./OBS.app/Contents/PlugIns/decklink-ouput-ui.so \
- -x ./OBS.app/Contents/PlugIns/decklink-captions.so \
- -x ./OBS.app/Contents/PlugIns/frontend-tools.so \
- -x ./OBS.app/Contents/PlugIns/image-source.so \
- -x ./OBS.app/Contents/PlugIns/linux-jack.so \
- -x ./OBS.app/Contents/PlugIns/mac-avcapture.so \
- -x ./OBS.app/Contents/PlugIns/mac-capture.so \
- -x ./OBS.app/Contents/PlugIns/mac-decklink.so \
- -x ./OBS.app/Contents/PlugIns/mac-syphon.so \
- -x ./OBS.app/Contents/PlugIns/mac-vth264.so \
- -x ./OBS.app/Contents/PlugIns/mac-virtualcam.so \
- -x ./OBS.app/Contents/PlugIns/obs-browser.so \
- -x ./OBS.app/Contents/PlugIns/obs-browser-page \
- -x ./OBS.app/Contents/PlugIns/obs-ffmpeg.so \
- -x ./OBS.app/Contents/PlugIns/obs-filters.so \
- -x ./OBS.app/Contents/PlugIns/obs-transitions.so \
- -x ./OBS.app/Contents/PlugIns/obs-vst.so \
- -x ./OBS.app/Contents/PlugIns/rtmp-services.so \
- -x ./OBS.app/Contents/MacOS/obs-ffmpeg-mux \
- -x ./OBS.app/Contents/MacOS/obslua.so \
- -x ./OBS.app/Contents/PlugIns/obs-x264.so \
- -x ./OBS.app/Contents/PlugIns/text-freetype2.so \
- -x ./OBS.app/Contents/PlugIns/obs-libfdk.so \
- -x ./OBS.app/Contents/PlugIns/obs-outputs.so
+ $(echo "${BUNDLE_PLUGINS[@]/#/-x }")
+ else
+ ../CI/scripts/macos/app/dylibBundler -cd -of -a ./OBS.app -q -f \
+ -s ./OBS.app/Contents/MacOS \
+ -s "${{ github.workspace }}/cmbuild/sparkle/Sparkle.framework" \
+ -s ./rundir/RelWithDebInfo/bin \
+ $(echo "${BUNDLE_PLUGINS[@]/#/-x }") \
+ -x ./OBS.app/Contents/PlugIns/obs-browser-page
+ fi
mv ./libobs-opengl/libobs-opengl.so ./OBS.app/Contents/Frameworks
codesign --force --options runtime --sign "${SIGN_IDENTITY:--}" "./OBS.app/Contents/Frameworks/Chromium Embedded Framework.framework/Libraries/libswiftshader_libEGL.dylib"
codesign --force --options runtime --sign "${SIGN_IDENTITY:--}" "./OBS.app/Contents/Frameworks/Chromium Embedded Framework.framework/Libraries/libGLESv2.dylib"
codesign --force --options runtime --sign "${SIGN_IDENTITY:--}" "./OBS.app/Contents/Frameworks/Chromium Embedded Framework.framework/Libraries/libswiftshader_libGLESv2.dylib"
+ if ! [ "${{ env.CEF_BUILD_VERSION }}" -le 3770 ]; then
+ codesign --force --options runtime --sign "${SIGN_IDENTITY:--}" "./OBS.app/Contents/Frameworks/Chromium Embedded Framework.framework/Libraries/libvk_swiftshader.dylib"
+ fi
codesign --force --options runtime --sign "${SIGN_IDENTITY:--}" --deep "./OBS.app/Contents/Frameworks/Chromium Embedded Framework.framework"
codesign --force --options runtime --deep --sign "${SIGN_IDENTITY:--}" "./OBS.app/Contents/Resources/data/obs-mac-virtualcam.plugin"
codesign --force --options runtime --entitlements "../CI/scripts/macos/app/entitlements.plist" --sign "${SIGN_IDENTITY:--}" --deep ./OBS.app
+ if ! [ "${{ env.CEF_BUILD_VERSION }}" -le 3770 ]; then
+ codesign --force --options runtime --sign "${SIGN_IDENTITY:--}" --deep "./OBS.app/Contents/Frameworks/OBS Helper.app"
+ codesign --force --options runtime --entitlements "../CI/scripts/macos/helpers/helper-gpu-entitlements.plist" --sign "${SIGN_IDENTITY:--}" --deep "./OBS.app/Contents/Frameworks/OBS Helper (GPU).app"
+ codesign --force --options runtime --entitlements "../CI/scripts/macos/helpers/helper-plugin-entitlements.plist" --sign "${SIGN_IDENTITY:--}" --deep "./OBS.app/Contents/Frameworks/OBS Helper (Plugin).app"
+ codesign --force --options runtime --entitlements "../CI/scripts/macos/helpers/helper-renderer-entitlements.plist" --sign "${SIGN_IDENTITY:--}" --deep "./OBS.app/Contents/Frameworks/OBS Helper (Renderer).app"
+ fi
+
codesign -dvv ./OBS.app
- name: 'Package'
if: success() && (github.event_name != 'pull_request' || env.SEEKING_TESTERS == '1')
obs-studio-26.1.0.tar.xz/CI/full-build-macos.sh -> obs-studio-26.1.1.tar.xz/CI/full-build-macos.sh
Changed
CI_SPARKLE_VERSION=$(cat ${CI_WORKFLOW} | sed -En "s/[ ]+SPARKLE_VERSION: '([0-9\.]+)'/\1/p")
CI_QT_VERSION=$(cat ${CI_WORKFLOW} | sed -En "s/[ ]+QT_VERSION: '([0-9\.]+)'/\1/p" | head -1)
CI_MIN_MACOS_VERSION=$(cat ${CI_WORKFLOW} | sed -En "s/[ ]+MIN_MACOS_VERSION: '([0-9\.]+)'/\1/p")
+NPROC="${NPROC:-$(sysctl -n hw.ncpu)}"
BUILD_DEPS=(
"obs-deps ${MACOS_DEPS_VERSION:-${CI_DEPS_VERSION}}"
-DCMAKE_OSX_DEPLOYMENT_TARGET=${MIN_MACOS_VERSION:-${CI_MIN_MACOS_VERSION}} \
..
step "Build..."
- make -j4
+ make -j${NPROC}
if [ ! -d libcef_dll ]; then mkdir libcef_dll; fi
}
-DDepsPath="/tmp/obsdeps" \
-DVLCPath="${DEPS_BUILD_DIR}/vlc-${VLC_VERSION:-${CI_VLC_VERSION}}" \
-DBUILD_BROWSER=ON \
- -DBROWSER_DEPLOY=ON \
+ -DBROWSER_LEGACY="$(test "${CEF_BUILD_VERSION:-${CI_CEF_VERSION}}" -le 3770 && echo "ON" || echo "OFF")" \
-DWITH_RTMPS=ON \
-DCEF_ROOT_DIR="${DEPS_BUILD_DIR}/cef_binary_${CEF_BUILD_VERSION:-${CI_CEF_VERSION}}_macosx64" \
-DCMAKE_BUILD_TYPE="${BUILD_CONFIG}" \
run_obs_build() {
ensure_dir "${CHECKOUT_DIR}/${BUILD_DIR}"
hr "Build OBS..."
- make -j4
+ make -j${NPROC}
}
## OBS BUNDLE AS MACOS APPLICATION ##
hr "Bundle dylibs for macOS application"
step "Run dylibBundler.."
- ${CI_SCRIPTS}/app/dylibbundler -cd -of -a ./OBS.app -q -f \
- -s ./OBS.app/Contents/MacOS \
- -s "${DEPS_BUILD_DIR}/sparkle/Sparkle.framework" \
- -s ./rundir/${BUILD_CONFIG}/bin/ \
- -x ./OBS.app/Contents/PlugIns/coreaudio-encoder.so \
- -x ./OBS.app/Contents/PlugIns/decklink-ouput-ui.so \
- -x ./OBS.app/Contents/PlugIns/decklink-captions.so \
- -x ./OBS.app/Contents/PlugIns/frontend-tools.so \
- -x ./OBS.app/Contents/PlugIns/image-source.so \
- -x ./OBS.app/Contents/PlugIns/linux-jack.so \
- -x ./OBS.app/Contents/PlugIns/mac-avcapture.so \
- -x ./OBS.app/Contents/PlugIns/mac-capture.so \
- -x ./OBS.app/Contents/PlugIns/mac-decklink.so \
- -x ./OBS.app/Contents/PlugIns/mac-syphon.so \
- -x ./OBS.app/Contents/PlugIns/mac-vth264.so \
- -x ./OBS.app/Contents/PlugIns/mac-virtualcam.so \
- -x ./OBS.app/Contents/PlugIns/obs-browser.so \
- -x ./OBS.app/Contents/PlugIns/obs-browser-page \
- -x ./OBS.app/Contents/PlugIns/obs-ffmpeg.so \
- -x ./OBS.app/Contents/PlugIns/obs-filters.so \
- -x ./OBS.app/Contents/PlugIns/obs-transitions.so \
- -x ./OBS.app/Contents/PlugIns/obs-vst.so \
- -x ./OBS.app/Contents/PlugIns/rtmp-services.so \
- -x ./OBS.app/Contents/MacOS/obs-ffmpeg-mux \
- -x ./OBS.app/Contents/MacOS/obslua.so \
- -x ./OBS.app/Contents/PlugIns/obs-x264.so \
- -x ./OBS.app/Contents/PlugIns/text-freetype2.so \
- -x ./OBS.app/Contents/PlugIns/obs-libfdk.so \
- -x ./OBS.app/Contents/PlugIns/obs-outputs.so
- step "Move libobs-opengl to final destination"
+ BUNDLE_PLUGINS=(
+ ./OBS.app/Contents/PlugIns/coreaudio-encoder.so
+ ./OBS.app/Contents/PlugIns/decklink-ouput-ui.so
+ ./OBS.app/Contents/PlugIns/decklink-captions.so
+ ./OBS.app/Contents/PlugIns/frontend-tools.so
+ ./OBS.app/Contents/PlugIns/image-source.so
+ ./OBS.app/Contents/PlugIns/mac-avcapture.so
+ ./OBS.app/Contents/PlugIns/mac-capture.so
+ ./OBS.app/Contents/PlugIns/mac-decklink.so
+ ./OBS.app/Contents/PlugIns/mac-syphon.so
+ ./OBS.app/Contents/PlugIns/mac-vth264.so
+ ./OBS.app/Contents/PlugIns/mac-virtualcam.so
+ ./OBS.app/Contents/PlugIns/obs-browser.so
+ ./OBS.app/Contents/PlugIns/obs-ffmpeg.so
+ ./OBS.app/Contents/PlugIns/obs-filters.so
+ ./OBS.app/Contents/PlugIns/obs-transitions.so
+ ./OBS.app/Contents/PlugIns/obs-vst.so
+ ./OBS.app/Contents/PlugIns/rtmp-services.so
+ ./OBS.app/Contents/MacOS/obs-ffmpeg-mux
+ ./OBS.app/Contents/MacOS/obslua.so
+ ./OBS.app/Contents/PlugIns/obs-x264.so
+ ./OBS.app/Contents/PlugIns/text-freetype2.so
+ ./OBS.app/Contents/PlugIns/obs-outputs.so
+ )
+ if ! [ "${CEF_BUILD_VERSION:-${CI_CEF_VERSION}}" -le 3770 ]; then
+ ${CI_SCRIPTS}/app/dylibbundler -cd -of -a ./OBS.app -q -f \
+ -s ./OBS.app/Contents/MacOS \
+ -s "${DEPS_BUILD_DIR}/sparkle/Sparkle.framework" \
+ -s ./rundir/${BUILD_CONFIG}/bin/ \
+ $(echo "${BUNDLE_PLUGINS[@]/#/-x }")
+ else
+ ${CI_SCRIPTS}/app/dylibbundler -cd -of -a ./OBS.app -q -f \
+ -s ./OBS.app/Contents/MacOS \
+ -s "${DEPS_BUILD_DIR}/sparkle/Sparkle.framework" \
+ -s ./rundir/${BUILD_CONFIG}/bin/ \
+ $(echo "${BUNDLE_PLUGINS[@]/#/-x }") \
+ -x ./OBS.app/Contents/PlugIns/obs-browser-page
+ fi
+
+ step "Move libobs-opengl to final destination"
if [ -f "./libobs-opengl/libobs-opengl.so" ]; then
cp ./libobs-opengl/libobs-opengl.so ./OBS.app/Contents/Frameworks
else
mkdir -p OBS.app/Contents/MacOS
mkdir OBS.app/Contents/PlugIns
mkdir OBS.app/Contents/Resources
+ mkdir OBS.app/Contents/Frameworks
cp rundir/${BUILD_CONFIG}/bin/obs ./OBS.app/Contents/MacOS
cp rundir/${BUILD_CONFIG}/bin/obs-ffmpeg-mux ./OBS.app/Contents/MacOS
cp rundir/${BUILD_CONFIG}/bin/libobsglad.0.dylib ./OBS.app/Contents/MacOS
+ if ! [ "${CEF_BUILD_VERSION:-${CI_CEF_VERSION}}" -le 3770 ]; then
+ cp -R "rundir/${BUILD_CONFIG}/bin/OBS Helper.app" "./OBS.app/Contents/Frameworks/OBS Helper.app"
+ cp -R "rundir/${BUILD_CONFIG}/bin/OBS Helper (GPU).app" "./OBS.app/Contents/Frameworks/OBS Helper (GPU).app"
+ cp -R "rundir/${BUILD_CONFIG}/bin/OBS Helper (Plugin).app" "./OBS.app/Contents/Frameworks/OBS Helper (Plugin).app"
+ cp -R "rundir/${BUILD_CONFIG}/bin/OBS Helper (Renderer).app" "./OBS.app/Contents/Frameworks/OBS Helper (Renderer).app"
+ fi
cp -R rundir/${BUILD_CONFIG}/data ./OBS.app/Contents/Resources
cp ${CI_SCRIPTS}/app/AppIcon.icns ./OBS.app/Contents/Resources
cp -R rundir/${BUILD_CONFIG}/obs-plugins/ ./OBS.app/Contents/PlugIns
codesign --force --options runtime --sign "${CODESIGN_IDENT}" "./OBS.app/Contents/Frameworks/Chromium Embedded Framework.framework/Libraries/libswiftshader_libEGL.dylib"
codesign --force --options runtime --sign "${CODESIGN_IDENT}" "./OBS.app/Contents/Frameworks/Chromium Embedded Framework.framework/Libraries/libGLESv2.dylib"
codesign --force --options runtime --sign "${CODESIGN_IDENT}" "./OBS.app/Contents/Frameworks/Chromium Embedded Framework.framework/Libraries/libswiftshader_libGLESv2.dylib"
- codesign --force --options runtime --sign "${CODESIGN_IDENT}" --deep "./OBS.app/Contents/Frameworks/Chromium Embedded Framework.framework"
+ if ! [ "${CEF_BUILD_VERSION:-${CI_CEF_VERSION}}" -le 3770 ]; then
+ codesign --force --options runtime --sign "${CODESIGN_IDENT}" "./OBS.app/Contents/Frameworks/Chromium Embedded Framework.framework/Libraries/libvk_swiftshader.dylib"
+ fi
+
echo -n "${COLOR_RESET}"
step "Code-sign DAL Plugin..."
echo -n "${COLOR_ORANGE}"
codesign --force --options runtime --entitlements "${CI_SCRIPTS}/app/entitlements.plist" --sign "${CODESIGN_IDENT}" --deep ./OBS.app
echo -n "${COLOR_RESET}"
+
+ if ! [ "${CEF_BUILD_VERSION:-${CI_CEF_VERSION}}" -le 3770 ]; then
+ step "Code-sign CEF helper apps..."
+ echo -n "${COLOR_ORANGE}"
+ codesign --force --options runtime --sign "${CODESIGN_IDENT}" --deep "./OBS.app/Contents/Frameworks/OBS Helper.app"
+ codesign --force --options runtime --entitlements "${CI_SCRIPTS}/helpers/helper-gpu-entitlements.plist" --sign "${CODESIGN_IDENT}" --deep "./OBS.app/Contents/Frameworks/OBS Helper (GPU).app"
+ codesign --force --options runtime --entitlements "${CI_SCRIPTS}/helpers/helper-plugin-entitlements.plist" --sign "${CODESIGN_IDENT}" --deep "./OBS.app/Contents/Frameworks/OBS Helper (Plugin).app"
+ codesign --force --options runtime --entitlements "${CI_SCRIPTS}/helpers/helper-renderer-entitlements.plist" --sign "${CODESIGN_IDENT}" --deep "./OBS.app/Contents/Frameworks/OBS Helper (Renderer).app"
+ echo -n "${COLOR_RESET}"
+ fi
+
step "Check code-sign result..."
codesign -dvv ./OBS.app
}
obs-studio-26.1.0.tar.xz/CI/scripts/macos/Brewfile -> obs-studio-26.1.1.tar.xz/CI/scripts/macos/Brewfile
Changed
tap "akeru-inc/tap"
-brew "jack"
-brew "speexdsp"
brew "cmake"
brew "freetype"
-brew "fdk-aac"
brew "cmocka"
brew "akeru-inc/tap/xcnotary"
\ No newline at end of file
obs-studio-26.1.1.tar.xz/CI/scripts/macos/helpers
Added
+(directory)
obs-studio-26.1.1.tar.xz/CI/scripts/macos/helpers/helper-gpu-entitlements.plist
Added
+<?xml version="1.0" encoding="UTF-8"?>
+<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
+<plist version="1.0">
+<dict>
+ <key>com.apple.security.cs.allow-jit</key>
+ <true/>
+</dict>
+</plist>
\ No newline at end of file
obs-studio-26.1.1.tar.xz/CI/scripts/macos/helpers/helper-plugin-entitlements.plist
Added
+<?xml version="1.0" encoding="UTF-8"?>
+<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
+<plist version="1.0">
+<dict>
+ <key>com.apple.security.cs.allow-unsigned-executable-memory</key>
+ <true/>
+ <key>com.apple.security.cs.disable-library-validation</key>
+ <true/>
+</dict>
+</plist>
\ No newline at end of file
obs-studio-26.1.1.tar.xz/CI/scripts/macos/helpers/helper-renderer-entitlements.plist
Added
+<?xml version="1.0" encoding="UTF-8"?>
+<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
+<plist version="1.0">
+<dict>
+ <key>com.apple.security.cs.allow-jit</key>
+ <true/>
+</dict>
+</plist>
\ No newline at end of file
obs-studio-26.1.0.tar.xz/CMakeLists.txt -> obs-studio-26.1.1.tar.xz/CMakeLists.txt
Changed
endif ()
if(LOWERCASE_CMAKE_SYSTEM_PROCESSOR MATCHES "(i[3-6]86|x86|x64|x86_64|amd64|e2k)")
- set(NEEDS_SIMDE "0")
if(NOT MSVC)
set(ARCH_SIMD_FLAGS "-mmmx" "-msse" "-msse2")
endif()
elseif(LOWERCASE_CMAKE_SYSTEM_PROCESSOR MATCHES "^(powerpc|ppc)64(le)?")
- set(NEEDS_SIMDE "0")
+ set(ARCH_SIMD_DEFINES "-DNO_WARN_X86_INTRINSICS")
set(ARCH_SIMD_FLAGS "-mvsx")
add_compile_definitions(NO_WARN_X86_INTRINSICS)
else()
- set(NEEDS_SIMDE "1")
- add_definitions(-DNEEDS_SIMDE=1)
if(CMAKE_COMPILER_IS_GNUCC OR CMAKE_COMPILER_IS_GNUCXX)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -DSIMDE_ENABLE_OPENMP")
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -DSIMDE_ENABLE_OPENMP")
obs-studio-26.1.0.tar.xz/UI/data/locale.ini -> obs-studio-26.1.1.tar.xz/UI/data/locale.ini
Changed
Name=Esperanto
[kab-KAB]
-Name=Taglizit
+Name=Taqbaylit
obs-studio-26.1.0.tar.xz/UI/data/themes/Acri.qss -> obs-studio-26.1.1.tar.xz/UI/data/themes/Acri.qss
Changed
max-height: 40px;
}
-#contextContainer QPushButton[themeID2=contextBarButton] {
- padding: 0px;
+#contextContainer QPushButton {
+ padding: 0px 12px;
}
QPushButton#sourcePropertiesButton {
obs-studio-26.1.0.tar.xz/UI/installer/mp-installer.nsi -> obs-studio-26.1.1.tar.xz/UI/installer/mp-installer.nsi
Changed
ClearErrors
GetDLLVersion "vcruntime140.DLL" $R0 $R1
GetDLLVersion "msvcp140.DLL" $R0 $R1
+ GetDLLVersion "msvcp140_1.DLL" $R0 $R1
IfErrors vs2019Missing_32 vs2019OK_32
vs2019Missing_32:
MessageBox MB_YESNO|MB_ICONEXCLAMATION "Your system is missing runtime components that ${APPNAME} requires. Would you like to download them?" IDYES vs2019true_32 IDNO vs2019false_32
obs-studio-26.1.0.tar.xz/UI/win-update/updater/updater.cpp -> obs-studio-26.1.1.tar.xz/UI/win-update/updater/updater.cpp
Changed
} else {
DeleteFile(outputPath.c_str());
}
+ if (state == STATE_INSTALL_FAILED)
+ DeleteFile(tempPath.c_str());
} else if (state == STATE_DOWNLOADED) {
DeleteFile(tempPath.c_str());
}
bool DownloadWorkerThread()
{
- const DWORD tlsProtocols = WINHTTP_FLAG_SECURE_PROTOCOL_TLS1_2;
+ const DWORD tlsProtocols = WINHTTP_FLAG_SECURE_PROTOCOL_TLS1_2 |
+ WINHTTP_FLAG_SECURE_PROTOCOL_TLS1_3;
+
+ const DWORD enableHTTP2Flag = WINHTTP_PROTOCOL_FLAG_HTTP2;
HttpHandle hSession = WinHttpOpen(L"OBS Studio Updater/2.1",
WINHTTP_ACCESS_TYPE_DEFAULT_PROXY,
WinHttpSetOption(hSession, WINHTTP_OPTION_SECURE_PROTOCOLS,
(LPVOID)&tlsProtocols, sizeof(tlsProtocols));
+ WinHttpSetOption(hSession, WINHTTP_OPTION_ENABLE_HTTP_PROTOCOL,
+ (LPVOID)&enableHTTP2Flag, sizeof(enableHTTP2Flag));
+
HttpHandle hConnect = WinHttpConnect(hSession,
L"cdn-fastly.obsproject.com",
INTERNET_DEFAULT_HTTPS_PORT, 0);
}
}
+static bool MoveInUseFileAway(update_t &file)
+{
+ _TCHAR deleteMeName[MAX_PATH];
+ _TCHAR randomStr[MAX_PATH];
+
+ BYTE junk[40];
+ BYTE hash[BLAKE2_HASH_LENGTH];
+
+ CryptGenRandom(hProvider, sizeof(junk), junk);
+ blake2b(hash, sizeof(hash), junk, sizeof(junk), NULL, 0);
+ HashToString(hash, randomStr);
+ randomStr[8] = 0;
+
+ StringCbCopy(deleteMeName, sizeof(deleteMeName),
+ file.outputPath.c_str());
+
+ StringCbCat(deleteMeName, sizeof(deleteMeName), L".");
+ StringCbCat(deleteMeName, sizeof(deleteMeName), randomStr);
+ StringCbCat(deleteMeName, sizeof(deleteMeName), L".deleteme");
+
+ if (MoveFile(file.outputPath.c_str(), deleteMeName)) {
+
+ if (MyCopyFile(deleteMeName, file.outputPath.c_str())) {
+ MoveFileEx(deleteMeName, NULL,
+ MOVEFILE_DELAY_UNTIL_REBOOT);
+
+ return true;
+ } else {
+ MoveFile(deleteMeName, file.outputPath.c_str());
+ }
+ }
+
+ return false;
+}
+
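The MoveInUseFileAway() helper added above exploits the fact that a loaded DLL or EXE on Windows can usually still be renamed even though it cannot be overwritten: the updater renames the locked file to a random *.deleteme name, copies it back to the original path so a fresh, unlocked copy can be patched, and schedules the parked copy for deletion at the next reboot. A stripped-down C sketch of that Win32 pattern (hypothetical paths, error handling reduced to the essentials):

    #include <windows.h>
    #include <stdbool.h>

    /* Minimal sketch of the "rename the locked file out of the way" trick. */
    static bool move_in_use_file_away(const wchar_t *path, const wchar_t *parked)
    {
            if (!MoveFileW(path, parked))           /* rename works while loaded */
                    return false;
            if (!CopyFileW(parked, path, FALSE)) {  /* recreate the original     */
                    MoveFileW(parked, path);        /* roll back on failure      */
                    return false;
            }
            /* delete the parked copy once every handle is gone (next reboot) */
            MoveFileExW(parked, NULL, MOVEFILE_DELAY_UNTIL_REBOOT);
            return true;
    }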
static bool UpdateFile(update_t &file)
{
wchar_t oldFileRenamedPath[MAX_PATH];
int error_code;
bool installed_ok;
+ bool already_tried_to_move = false;
+
+ retryAfterMovingFile:
if (file.patchable) {
error_code = ApplyPatch(file.tempPath.c_str(),
int is_sharing_violation =
(error_code == ERROR_SHARING_VIOLATION);
- if (is_sharing_violation)
+ if (is_sharing_violation) {
+ if (!already_tried_to_move) {
+ already_tried_to_move = true;
+
+ if (MoveInUseFileAway(file))
+ goto retryAfterMovingFile;
+ }
+
Status(L"Update failed: %s is still in use. "
L"Close all "
L"programs and try again.",
curFileName);
- else
+ } else {
Status(L"Update failed: Couldn't update %s "
L"(error %d)",
curFileName, GetLastError());
+ }
file.state = STATE_INSTALL_FAILED;
return false;
/* ------------------------------------- *
* Download Updates */
- if (!RunDownloadWorkers(2))
+ if (!RunDownloadWorkers(4))
return false;
if ((size_t)completedUpdates != updates.size()) {
obs-studio-26.1.0.tar.xz/UI/window-basic-auto-config.cpp -> obs-studio-26.1.1.tar.xz/UI/window-basic-auto-config.cpp
Changed
if (!wiz->customServer) {
if (wiz->serviceName == "Twitch")
wiz->service = AutoConfig::Service::Twitch;
- else if (wiz->serviceName == "Smashcast")
- wiz->service = AutoConfig::Service::Smashcast;
else
wiz->service = AutoConfig::Service::Other;
} else {
return;
std::string service = QT_TO_UTF8(ui->service->currentText());
- bool regionBased = service == "Twitch" || service == "Smashcast";
+ bool regionBased = service == "Twitch";
bool testBandwidth = ui->doBandwidthTest->isChecked();
bool custom = IsCustomService();
} else if (regionOther) {
return true;
}
- } else if (service == Service::Smashcast) {
- if (strcmp(server, "Default") == 0) {
- return true;
- } else if (astrcmp_n(server, "US-West:", 8) == 0 ||
- astrcmp_n(server, "US-East:", 8) == 0) {
- return regionUS;
- } else if (astrcmp_n(server, "EU-", 3) == 0) {
- return regionEU;
- } else if (astrcmp_n(server, "South Korea:", 12) == 0 ||
- astrcmp_n(server, "Asia:", 5) == 0 ||
- astrcmp_n(server, "China:", 6) == 0) {
- return regionAsia;
- } else if (regionOther) {
- return true;
- }
} else {
return true;
}
obs-studio-26.1.0.tar.xz/UI/window-basic-auto-config.hpp -> obs-studio-26.1.1.tar.xz/UI/window-basic-auto-config.hpp
Changed
enum class Service {
Twitch,
- Smashcast,
Other,
};
obs-studio-26.1.0.tar.xz/UI/window-basic-main.cpp -> obs-studio-26.1.1.tar.xz/UI/window-basic-main.cpp
Changed
uint32_t cx = primaryScreen->size().width();
uint32_t cy = primaryScreen->size().height();
+#ifdef SUPPORTS_FRACTIONAL_SCALING
+ cx *= devicePixelRatioF();
+ cy *= devicePixelRatioF();
+#else
+ cx *= devicePixelRatio();
+ cy *= devicePixelRatio();
+#endif
+
bool oldResolutionDefaults = config_get_bool(
App()->GlobalConfig(), "General", "Pre19Defaults");
obs-studio-26.1.0.tar.xz/cmake/Modules/FindLibcurl.cmake -> obs-studio-26.1.1.tar.xz/cmake/Modules/FindLibcurl.cmake
Changed
PATH_SUFFIXES
include)
-find_library(CURL_LIB
- NAMES ${_CURL_LIBRARIES} curl libcurl
- HINTS
- ENV curlPath${_lib_suffix}
- ENV curlPath
- ENV DepsPath${_lib_suffix}
- ENV DepsPath
- ${curlPath${_lib_suffix}}
- ${curlPath}
- ${DepsPath${_lib_suffix}}
- ${DepsPath}
- ${_CURL_LIBRARY_DIRS}
- PATHS
- /usr/lib /usr/local/lib /opt/local/lib /sw/lib
- PATH_SUFFIXES
- lib${_lib_suffix} lib
- libs${_lib_suffix} libs
- bin${_lib_suffix} bin
- ../lib${_lib_suffix} ../lib
- ../libs${_lib_suffix} ../libs
- ../bin${_lib_suffix} ../bin
- "build/Win${_lib_suffix}/VC12/DLL Release - DLL Windows SSPI"
- "../build/Win${_lib_suffix}/VC12/DLL Release - DLL Windows SSPI")
+if(APPLE)
+ find_library(CURL_LIB
+ NAMES ${_CURL_LIBRARIES} curl libcurl
+ HINTS
+ ENV curlPath${_lib_suffix}
+ ENV curlPath
+ ENV DepsPath${_lib_suffix}
+ ENV DepsPath
+ ${curlPath${_lib_suffix}}
+ ${curlPath}
+ ${DepsPath${_lib_suffix}}
+ ${DepsPath}
+ ${_CURL_LIBRARY_DIRS}
+ )
+else()
+ find_library(CURL_LIB
+ NAMES ${_CURL_LIBRARIES} curl libcurl
+ HINTS
+ ENV curlPath${_lib_suffix}
+ ENV curlPath
+ ENV DepsPath${_lib_suffix}
+ ENV DepsPath
+ ${curlPath${_lib_suffix}}
+ ${curlPath}
+ ${DepsPath${_lib_suffix}}
+ ${DepsPath}
+ ${_CURL_LIBRARY_DIRS}
+ PATHS
+ /usr/lib /usr/local/lib /opt/local/lib /sw/lib
+ PATH_SUFFIXES
+ lib${_lib_suffix} lib
+ libs${_lib_suffix} libs
+ bin${_lib_suffix} bin
+ ../lib${_lib_suffix} ../lib
+ ../libs${_lib_suffix} ../libs
+ ../bin${_lib_suffix} ../bin
+ "build/Win${_lib_suffix}/VC12/DLL Release - DLL Windows SSPI"
+ "../build/Win${_lib_suffix}/VC12/DLL Release - DLL Windows SSPI")
+endif()
include(FindPackageHandleStandardArgs)
find_package_handle_standard_args(Libcurl DEFAULT_MSG CURL_LIB CURL_INCLUDE_DIR)
obs-studio-26.1.0.tar.xz/docs/sphinx/reference-frontend-api.rst -> obs-studio-26.1.1.tar.xz/docs/sphinx/reference-frontend-api.rst
Changed
---------------------------------------
+.. function:: void obs_frontend_open_projector(const char *type, int monitor, const char *geometry, const char *name)
+
+ :param type: "Preview", "Source", "Scene", "StudioProgram", or "Multiview" (case insensitive).
+ :param monitor: Monitor to open the projector on. If -1, opens a window.
+ :param geometry: If *monitor* is -1, size and position of the projector window. Encoded in Base64 using Qt's geometry encoding.
+ :param name: If *type* is "Source" or "Scene", name of the source or scene to be displayed.
+
+---------------------------------------
+
.. function:: void obs_frontend_save(void)
Saves the current scene collection.
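For plugin authors, the newly documented obs_frontend_open_projector() call above is straightforward to exercise from a frontend plugin. A hedged sketch, assuming obs-frontend-api.h is available and the code runs after the frontend has loaded; monitor index 0 is an illustrative choice, and geometry/name are unused for this type/monitor combination, so NULL is passed:

    #include <obs-frontend-api.h>

    /* Open a fullscreen Preview projector on the first monitor. */
    static void open_preview_projector(void)
    {
            obs_frontend_open_projector("Preview", 0, NULL, NULL);
    }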
obs-studio-26.1.0.tar.xz/libobs/CMakeLists.txt -> obs-studio-26.1.1.tar.xz/libobs/CMakeLists.txt
Changed
util/pipe-posix.c
util/platform-nix.c)
- if(NEEDS_SIMDE)
- set(libobs_PLATFORM_HEADERS
- util/simde/check.h
- util/simde/hedley.h
- util/simde/mmx.h
- util/simde/simde-arch.h
- util/simde/simde-common.h
- util/simde/sse.h
- util/simde/sse2.h
- util/threading-posix.h)
- else()
- set(libobs_PLATFORM_HEADERS
- util/threading-posix.h)
- endif()
+ set(libobs_PLATFORM_HEADERS
+ util/threading-posix.h)
if(HAVE_PULSEAUDIO)
set(libobs_audio_monitoring_HEADERS
set(libobs_util_HEADERS
util/curl/curl-helper.h
util/sse-intrin.h
- util/sse2neon.h
util/array-serializer.h
util/file-serializer.h
util/utf8.h
obs-video-gpu-encode.c
obs-video.c)
set(libobs_libobs_HEADERS
+ util/simde/check.h
+ util/simde/debug-trap.h
+ util/simde/hedley.h
+ util/simde/simde-align.h
+ util/simde/simde-arch.h
+ util/simde/simde-common.h
+ util/simde/simde-constify.h
+ util/simde/simde-detect-clang.h
+ util/simde/simde-diagnostic.h
+ util/simde/simde-features.h
+ util/simde/simde-math.h
+ util/simde/x86/mmx.h
+ util/simde/x86/sse2.h
+ util/simde/x86/sse.h
${libobs_PLATFORM_HEADERS}
obs-audio-controls.h
obs-defs.h
PUBLIC
HAVE_OBSCONFIG_H)
+target_compile_definitions(libobs
+ PUBLIC
+ ${ARCH_SIMD_DEFINES})
+
target_compile_options(libobs
PUBLIC
${ARCH_SIMD_FLAGS})
obs-studio-26.1.0.tar.xz/libobs/media-io/media-remux.c -> obs-studio-26.1.1.tar.xz/libobs/media-io/media-remux.c
Changed
/* Treat "Invalid data found when processing input" and
* "Invalid argument" as non-fatal */
- if (ret == AVERROR_INVALIDDATA || ret == EINVAL)
+ if (ret == AVERROR_INVALIDDATA || ret == -EINVAL)
continue;
break;
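The sign flip above matters because libavformat reports failures as negative AVERROR codes: on platforms where errno values are positive, AVERROR(EINVAL) expands to -EINVAL, so the old comparison against a positive EINVAL could never match. A minimal illustration of that convention (assuming FFmpeg's libavutil headers are installed):

    #include <errno.h>
    #include <libavutil/error.h>   /* AVERROR(), AVERROR_INVALIDDATA */
    #include <stdio.h>

    int main(void)
    {
            int ret = AVERROR(EINVAL);              /* what av_* calls return */
            printf("%d %d\n", ret == EINVAL, ret == -EINVAL); /* prints: 0 1 */
            return ret == AVERROR_INVALIDDATA;      /* distinct error code    */
    }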
obs-studio-26.1.0.tar.xz/libobs/obs-config.h -> obs-studio-26.1.1.tar.xz/libobs/obs-config.h
Changed
*
* Reset to zero each major or minor version
*/
-#define LIBOBS_API_PATCH_VER 0
+#define LIBOBS_API_PATCH_VER 1
#define MAKE_SEMANTIC_VERSION(major, minor, patch) \
((major << 24) | (minor << 16) | patch)
obs-studio-26.1.0.tar.xz/libobs/obs-scene.c -> obs-studio-26.1.1.tar.xz/libobs/obs-scene.c
Changed
}
static void apply_scene_item_audio_actions(struct obs_scene_item *item,
- float **p_buf, uint64_t ts,
+ float *buf, uint64_t ts,
size_t sample_rate)
{
bool cur_visible = item->visible;
uint64_t frame_num = 0;
size_t deref_count = 0;
- float *buf = NULL;
-
- if (p_buf) {
- if (!*p_buf)
- *p_buf = malloc(AUDIO_OUTPUT_FRAMES * sizeof(float));
- buf = *p_buf;
- }
pthread_mutex_lock(&item->actions_mutex);
}
}
-static bool apply_scene_item_volume(struct obs_scene_item *item, float **buf,
+static bool apply_scene_item_volume(struct obs_scene_item *item, float *buf,
uint64_t ts, size_t sample_rate)
{
bool actions_pending;
size_t sample_rate)
{
uint64_t timestamp = 0;
- float *buf = NULL;
+ float buf[AUDIO_OUTPUT_FRAMES];
struct obs_source_audio_mix child_audio;
struct obs_scene *scene = data;
struct obs_scene_item *item;
size_t pos, count;
bool apply_buf;
- apply_buf = apply_scene_item_volume(item, &buf, timestamp,
+ apply_buf = apply_scene_item_volume(item, buf, timestamp,
sample_rate);
if (obs_source_audio_pending(item->source)) {
*ts_out = timestamp;
audio_unlock(scene);
- free(buf);
return true;
}
}
obs_sceneitem_set_crop(dst, &src->crop);
+ obs_sceneitem_set_locked(dst, src->locked);
if (defer_texture_update) {
os_atomic_set_bool(&dst->update_transform, true);
obs-studio-26.1.0.tar.xz/libobs/obs-source.c -> obs-studio-26.1.1.tar.xz/libobs/obs-source.c
Changed
static void apply_audio_actions(obs_source_t *source, size_t channels,
size_t sample_rate)
{
- float *vol_data = malloc(sizeof(float) * AUDIO_OUTPUT_FRAMES);
+ float vol_data[AUDIO_OUTPUT_FRAMES];
float cur_vol = get_source_volume(source, source->audio_ts);
size_t frame_num = 0;
if ((source->audio_mixers & (1 << mix)) != 0)
multiply_vol_data(source, mix, channels, vol_data);
}
-
- free(vol_data);
}
static void apply_audio_volume(obs_source_t *source, uint32_t mixers,
obs-studio-26.1.0.tar.xz/libobs/obsconfig.h.in -> obs-studio-26.1.1.tar.xz/libobs/obsconfig.h.in
Changed
#define HAVE_DBUS @HAVE_DBUS@
#define HAVE_PULSEAUDIO @HAVE_PULSEAUDIO@
#define USE_XINPUT @USE_XINPUT@
-#define NEEDS_SIMDE @NEEDS_SIMDE@
#define LIBOBS_IMAGEMAGICK_DIR_STYLE_6L 6
#define LIBOBS_IMAGEMAGICK_DIR_STYLE_7GE 7
#define LIBOBS_IMAGEMAGICK_DIR_STYLE @LIBOBS_IMAGEMAGICK_DIR_STYLE@
obs-studio-26.1.0.tar.xz/libobs/util/simde/README.libobs -> obs-studio-26.1.1.tar.xz/libobs/util/simde/README.libobs
Changed
-This is a slightly modified version of https://github.com/nemequ/simde/commit/cafec4b952fa5a31a51a10326f97c2e7c9067771
-sse{,2}.h and mmx.h was moved down from the original "x86" subdirectory,
-subsequently the '#include "../simde-common.h"' line in mmx.h was changed to '#include "simde-common.h"'
+This is a slightly modified version of the simde directory in
+https://github.com/simd-everywhere/simde/commit/c3d7abfaba6729a8b11d09a314b34a4db628911d
+Unused files have been removed.
Then the code was reformatted using the "formatcode.sh" script in the root of this repository.
obs-studio-26.1.0.tar.xz/libobs/util/simde/check.h -> obs-studio-26.1.1.tar.xz/libobs/util/simde/check.h
Changed
#endif
#include "hedley.h"
+#include "simde-diagnostic.h"
#include <stdint.h>
#if !defined(_WIN32)
obs-studio-26.1.0.tar.xz/libobs/util/simde/hedley.h -> obs-studio-26.1.1.tar.xz/libobs/util/simde/hedley.h
Changed
* SPDX-License-Identifier: CC0-1.0
*/
-#if !defined(HEDLEY_VERSION) || (HEDLEY_VERSION < 12)
+#if !defined(HEDLEY_VERSION) || (HEDLEY_VERSION < 14)
#if defined(HEDLEY_VERSION)
#undef HEDLEY_VERSION
#endif
-#define HEDLEY_VERSION 12
+#define HEDLEY_VERSION 14
#if defined(HEDLEY_STRINGIFY_EX)
#undef HEDLEY_STRINGIFY_EX
#endif
#define HEDLEY_CONCAT(a, b) HEDLEY_CONCAT_EX(a, b)
+#if defined(HEDLEY_CONCAT3_EX)
+#undef HEDLEY_CONCAT3_EX
+#endif
+#define HEDLEY_CONCAT3_EX(a, b, c) a##b##c
+
+#if defined(HEDLEY_CONCAT3)
+#undef HEDLEY_CONCAT3
+#endif
+#define HEDLEY_CONCAT3(a, b, c) HEDLEY_CONCAT3_EX(a, b, c)
+
#if defined(HEDLEY_VERSION_ENCODE)
#undef HEDLEY_VERSION_ENCODE
#endif
#if defined(HEDLEY_MSVC_VERSION)
#undef HEDLEY_MSVC_VERSION
#endif
-#if defined(_MSC_FULL_VER) && (_MSC_FULL_VER >= 140000000)
+#if defined(_MSC_FULL_VER) && (_MSC_FULL_VER >= 140000000) && !defined(__ICL)
#define HEDLEY_MSVC_VERSION \
HEDLEY_VERSION_ENCODE(_MSC_FULL_VER / 10000000, \
(_MSC_FULL_VER % 10000000) / 100000, \
(_MSC_FULL_VER % 100000) / 100)
-#elif defined(_MSC_FULL_VER)
+#elif defined(_MSC_FULL_VER) && !defined(__ICL)
#define HEDLEY_MSVC_VERSION \
HEDLEY_VERSION_ENCODE(_MSC_FULL_VER / 1000000, \
(_MSC_FULL_VER % 1000000) / 10000, \
(_MSC_FULL_VER % 10000) / 10)
-#elif defined(_MSC_VER)
+#elif defined(_MSC_VER) && !defined(__ICL)
#define HEDLEY_MSVC_VERSION \
HEDLEY_VERSION_ENCODE(_MSC_VER / 100, _MSC_VER % 100, 0)
#endif
#if defined(HEDLEY_MSVC_VERSION_CHECK)
#undef HEDLEY_MSVC_VERSION_CHECK
#endif
-#if !defined(_MSC_VER)
+#if !defined(HEDLEY_MSVC_VERSION)
#define HEDLEY_MSVC_VERSION_CHECK(major, minor, patch) (0)
#elif defined(_MSC_VER) && (_MSC_VER >= 1400)
#define HEDLEY_MSVC_VERSION_CHECK(major, minor, patch) \
#if defined(HEDLEY_INTEL_VERSION)
#undef HEDLEY_INTEL_VERSION
#endif
-#if defined(__INTEL_COMPILER) && defined(__INTEL_COMPILER_UPDATE)
+#if defined(__INTEL_COMPILER) && defined(__INTEL_COMPILER_UPDATE) && \
+ !defined(__ICL)
#define HEDLEY_INTEL_VERSION \
HEDLEY_VERSION_ENCODE(__INTEL_COMPILER / 100, __INTEL_COMPILER % 100, \
__INTEL_COMPILER_UPDATE)
-#elif defined(__INTEL_COMPILER)
+#elif defined(__INTEL_COMPILER) && !defined(__ICL)
#define HEDLEY_INTEL_VERSION \
HEDLEY_VERSION_ENCODE(__INTEL_COMPILER / 100, __INTEL_COMPILER % 100, 0)
#endif
#define HEDLEY_INTEL_VERSION_CHECK(major, minor, patch) (0)
#endif
+#if defined(HEDLEY_INTEL_CL_VERSION)
+#undef HEDLEY_INTEL_CL_VERSION
+#endif
+#if defined(__INTEL_COMPILER) && defined(__INTEL_COMPILER_UPDATE) && \
+ defined(__ICL)
+#define HEDLEY_INTEL_CL_VERSION \
+ HEDLEY_VERSION_ENCODE(__INTEL_COMPILER, __INTEL_COMPILER_UPDATE, 0)
+#endif
+
+#if defined(HEDLEY_INTEL_CL_VERSION_CHECK)
+#undef HEDLEY_INTEL_CL_VERSION_CHECK
+#endif
+#if defined(HEDLEY_INTEL_CL_VERSION)
+#define HEDLEY_INTEL_CL_VERSION_CHECK(major, minor, patch) \
+ (HEDLEY_INTEL_CL_VERSION >= HEDLEY_VERSION_ENCODE(major, minor, patch))
+#else
+#define HEDLEY_INTEL_CL_VERSION_CHECK(major, minor, patch) (0)
+#endif
+
#if defined(HEDLEY_PGI_VERSION)
#undef HEDLEY_PGI_VERSION
#endif
HEDLEY_GCC_VERSION_CHECK(major, minor, patch)
#endif
+#if (defined(__STDC_VERSION__) && (__STDC_VERSION__ >= 199901L)) || \
+ defined(__clang__) || HEDLEY_GCC_VERSION_CHECK(3, 0, 0) || \
+ HEDLEY_INTEL_VERSION_CHECK(13, 0, 0) || \
+ HEDLEY_IAR_VERSION_CHECK(8, 0, 0) || \
+ HEDLEY_PGI_VERSION_CHECK(18, 4, 0) || \
+ HEDLEY_ARM_VERSION_CHECK(4, 1, 0) || \
+ HEDLEY_TI_VERSION_CHECK(15, 12, 0) || \
+ HEDLEY_TI_ARMCL_VERSION_CHECK(4, 7, 0) || \
+ HEDLEY_TI_CL430_VERSION_CHECK(2, 0, 1) || \
+ HEDLEY_TI_CL2000_VERSION_CHECK(6, 1, 0) || \
+ HEDLEY_TI_CL6X_VERSION_CHECK(7, 0, 0) || \
+ HEDLEY_TI_CL7X_VERSION_CHECK(1, 2, 0) || \
+ HEDLEY_TI_CLPRU_VERSION_CHECK(2, 1, 0) || \
+ HEDLEY_CRAY_VERSION_CHECK(5, 0, 0) || \
+ HEDLEY_TINYC_VERSION_CHECK(0, 9, 17) || \
+ HEDLEY_SUNPRO_VERSION_CHECK(8, 0, 0) || \
+ (HEDLEY_IBM_VERSION_CHECK(10, 1, 0) && defined(__C99_PRAGMA_OPERATOR))
+#define HEDLEY_PRAGMA(value) _Pragma(#value)
+#elif HEDLEY_MSVC_VERSION_CHECK(15, 0, 0)
+#define HEDLEY_PRAGMA(value) __pragma(value)
+#else
+#define HEDLEY_PRAGMA(value)
+#endif
+
+#if defined(HEDLEY_DIAGNOSTIC_PUSH)
+#undef HEDLEY_DIAGNOSTIC_PUSH
+#endif
+#if defined(HEDLEY_DIAGNOSTIC_POP)
+#undef HEDLEY_DIAGNOSTIC_POP
+#endif
+#if defined(__clang__)
+#define HEDLEY_DIAGNOSTIC_PUSH _Pragma("clang diagnostic push")
+#define HEDLEY_DIAGNOSTIC_POP _Pragma("clang diagnostic pop")
+#elif HEDLEY_INTEL_VERSION_CHECK(13, 0, 0)
+#define HEDLEY_DIAGNOSTIC_PUSH _Pragma("warning(push)")
+#define HEDLEY_DIAGNOSTIC_POP _Pragma("warning(pop)")
+#elif HEDLEY_GCC_VERSION_CHECK(4, 6, 0)
+#define HEDLEY_DIAGNOSTIC_PUSH _Pragma("GCC diagnostic push")
+#define HEDLEY_DIAGNOSTIC_POP _Pragma("GCC diagnostic pop")
+#elif HEDLEY_MSVC_VERSION_CHECK(15, 0, 0) || \
+ HEDLEY_INTEL_CL_VERSION_CHECK(2021, 1, 0)
+#define HEDLEY_DIAGNOSTIC_PUSH __pragma(warning(push))
+#define HEDLEY_DIAGNOSTIC_POP __pragma(warning(pop))
+#elif HEDLEY_ARM_VERSION_CHECK(5, 6, 0)
+#define HEDLEY_DIAGNOSTIC_PUSH _Pragma("push")
+#define HEDLEY_DIAGNOSTIC_POP _Pragma("pop")
+#elif HEDLEY_TI_VERSION_CHECK(15, 12, 0) || \
+ HEDLEY_TI_ARMCL_VERSION_CHECK(5, 2, 0) || \
+ HEDLEY_TI_CL430_VERSION_CHECK(4, 4, 0) || \
+ HEDLEY_TI_CL6X_VERSION_CHECK(8, 1, 0) || \
+ HEDLEY_TI_CL7X_VERSION_CHECK(1, 2, 0) || \
+ HEDLEY_TI_CLPRU_VERSION_CHECK(2, 1, 0)
+#define HEDLEY_DIAGNOSTIC_PUSH _Pragma("diag_push")
+#define HEDLEY_DIAGNOSTIC_POP _Pragma("diag_pop")
+#elif HEDLEY_PELLES_VERSION_CHECK(2, 90, 0)
+#define HEDLEY_DIAGNOSTIC_PUSH _Pragma("warning(push)")
+#define HEDLEY_DIAGNOSTIC_POP _Pragma("warning(pop)")
+#else
+#define HEDLEY_DIAGNOSTIC_PUSH
+#define HEDLEY_DIAGNOSTIC_POP
+#endif
+
/* HEDLEY_DIAGNOSTIC_DISABLE_CPP98_COMPAT_WRAP_ is for
HEDLEY INTERNAL USE ONLY. API subject to change without notice. */
#if defined(HEDLEY_DIAGNOSTIC_DISABLE_CPP98_COMPAT_WRAP_)
#if defined(__cplusplus)
#if HEDLEY_HAS_WARNING("-Wc++98-compat")
#if HEDLEY_HAS_WARNING("-Wc++17-extensions")
+#if HEDLEY_HAS_WARNING("-Wc++1z-extensions")
+#define HEDLEY_DIAGNOSTIC_DISABLE_CPP98_COMPAT_WRAP_(xpr) \
+ HEDLEY_DIAGNOSTIC_PUSH \
+ _Pragma("clang diagnostic ignored \"-Wc++98-compat\"") _Pragma( \
+ "clang diagnostic ignored \"-Wc++17-extensions\"") \
+ _Pragma("clang diagnostic ignored \"-Wc++1z-extensions\"") \
+ xpr HEDLEY_DIAGNOSTIC_POP
+#else
#define HEDLEY_DIAGNOSTIC_DISABLE_CPP98_COMPAT_WRAP_(xpr) \
HEDLEY_DIAGNOSTIC_PUSH \
_Pragma("clang diagnostic ignored \"-Wc++98-compat\"") \
_Pragma("clang diagnostic ignored \"-Wc++17-extensions\"") \
xpr HEDLEY_DIAGNOSTIC_POP
+#endif
#else
#define HEDLEY_DIAGNOSTIC_DISABLE_CPP98_COMPAT_WRAP_(xpr) \
HEDLEY_DIAGNOSTIC_PUSH \
#elif HEDLEY_IAR_VERSION_CHECK(8, 3, 0)
#define HEDLEY_CPP_CAST(T, expr) \
HEDLEY_DIAGNOSTIC_PUSH \
- _Pragma("diag_suppress=Pe137") HEDLEY_DIAGNOSTIC_POP #else
+ _Pragma("diag_suppress=Pe137") HEDLEY_DIAGNOSTIC_POP
+#else
#define HEDLEY_CPP_CAST(T, expr) ((T)(expr))
#endif
#else
#define HEDLEY_CPP_CAST(T, expr) (expr)
#endif
-#if (defined(__STDC_VERSION__) && (__STDC_VERSION__ >= 199901L)) || \
- defined(__clang__) || HEDLEY_GCC_VERSION_CHECK(3, 0, 0) || \
- HEDLEY_INTEL_VERSION_CHECK(13, 0, 0) || \
- HEDLEY_IAR_VERSION_CHECK(8, 0, 0) || \
- HEDLEY_PGI_VERSION_CHECK(18, 4, 0) || \
- HEDLEY_ARM_VERSION_CHECK(4, 1, 0) || \
- HEDLEY_TI_VERSION_CHECK(15, 12, 0) || \
- HEDLEY_TI_ARMCL_VERSION_CHECK(4, 7, 0) || \
- HEDLEY_TI_CL430_VERSION_CHECK(2, 0, 1) || \
- HEDLEY_TI_CL2000_VERSION_CHECK(6, 1, 0) || \
- HEDLEY_TI_CL6X_VERSION_CHECK(7, 0, 0) || \
- HEDLEY_TI_CL7X_VERSION_CHECK(1, 2, 0) || \
- HEDLEY_TI_CLPRU_VERSION_CHECK(2, 1, 0) || \
- HEDLEY_CRAY_VERSION_CHECK(5, 0, 0) || \
- HEDLEY_TINYC_VERSION_CHECK(0, 9, 17) || \
- HEDLEY_SUNPRO_VERSION_CHECK(8, 0, 0) || \
- (HEDLEY_IBM_VERSION_CHECK(10, 1, 0) && defined(__C99_PRAGMA_OPERATOR))
-#define HEDLEY_PRAGMA(value) _Pragma(#value)
-#elif HEDLEY_MSVC_VERSION_CHECK(15, 0, 0)
-#define HEDLEY_PRAGMA(value) __pragma(value)
-#else
-#define HEDLEY_PRAGMA(value)
-#endif
-
-#if defined(HEDLEY_DIAGNOSTIC_PUSH)
-#undef HEDLEY_DIAGNOSTIC_PUSH
-#endif
-#if defined(HEDLEY_DIAGNOSTIC_POP)
-#undef HEDLEY_DIAGNOSTIC_POP
-#endif
-#if defined(__clang__)
-#define HEDLEY_DIAGNOSTIC_PUSH _Pragma("clang diagnostic push")
-#define HEDLEY_DIAGNOSTIC_POP _Pragma("clang diagnostic pop")
-#elif HEDLEY_INTEL_VERSION_CHECK(13, 0, 0)
-#define HEDLEY_DIAGNOSTIC_PUSH _Pragma("warning(push)")
-#define HEDLEY_DIAGNOSTIC_POP _Pragma("warning(pop)")
-#elif HEDLEY_GCC_VERSION_CHECK(4, 6, 0)
-#define HEDLEY_DIAGNOSTIC_PUSH _Pragma("GCC diagnostic push")
-#define HEDLEY_DIAGNOSTIC_POP _Pragma("GCC diagnostic pop")
-#elif HEDLEY_MSVC_VERSION_CHECK(15, 0, 0)
-#define HEDLEY_DIAGNOSTIC_PUSH __pragma(warning(push))
-#define HEDLEY_DIAGNOSTIC_POP __pragma(warning(pop))
-#elif HEDLEY_ARM_VERSION_CHECK(5, 6, 0)
-#define HEDLEY_DIAGNOSTIC_PUSH _Pragma("push")
-#define HEDLEY_DIAGNOSTIC_POP _Pragma("pop")
-#elif HEDLEY_TI_VERSION_CHECK(15, 12, 0) || \
- HEDLEY_TI_ARMCL_VERSION_CHECK(5, 2, 0) || \
- HEDLEY_TI_CL430_VERSION_CHECK(4, 4, 0) || \
- HEDLEY_TI_CL6X_VERSION_CHECK(8, 1, 0) || \
- HEDLEY_TI_CL7X_VERSION_CHECK(1, 2, 0) || \
- HEDLEY_TI_CLPRU_VERSION_CHECK(2, 1, 0)
-#define HEDLEY_DIAGNOSTIC_PUSH _Pragma("diag_push")
-#define HEDLEY_DIAGNOSTIC_POP _Pragma("diag_pop")
-#elif HEDLEY_PELLES_VERSION_CHECK(2, 90, 0)
-#define HEDLEY_DIAGNOSTIC_PUSH _Pragma("warning(push)")
-#define HEDLEY_DIAGNOSTIC_POP _Pragma("warning(pop)")
-#else
-#define HEDLEY_DIAGNOSTIC_PUSH
-#define HEDLEY_DIAGNOSTIC_POP
-#endif
-
#if defined(HEDLEY_DIAGNOSTIC_DISABLE_DEPRECATED)
#undef HEDLEY_DIAGNOSTIC_DISABLE_DEPRECATED
#endif
#elif HEDLEY_INTEL_VERSION_CHECK(13, 0, 0)
#define HEDLEY_DIAGNOSTIC_DISABLE_DEPRECATED \
_Pragma("warning(disable:1478 1786)")
+#elif HEDLEY_INTEL_CL_VERSION_CHECK(2021, 1, 0)
+#define HEDLEY_DIAGNOSTIC_DISABLE_DEPRECATED \
+ __pragma(warning(disable : 1478 1786))
+#elif HEDLEY_PGI_VERSION_CHECK(20, 7, 0)
+#define HEDLEY_DIAGNOSTIC_DISABLE_DEPRECATED \
+ _Pragma("diag_suppress 1215,1216,1444,1445")
#elif HEDLEY_PGI_VERSION_CHECK(17, 10, 0)
#define HEDLEY_DIAGNOSTIC_DISABLE_DEPRECATED _Pragma("diag_suppress 1215,1444")
#elif HEDLEY_GCC_VERSION_CHECK(4, 3, 0)
#elif HEDLEY_INTEL_VERSION_CHECK(13, 0, 0)
#define HEDLEY_DIAGNOSTIC_DISABLE_UNKNOWN_PRAGMAS \
_Pragma("warning(disable:161)")
+#elif HEDLEY_INTEL_CL_VERSION_CHECK(2021, 1, 0)
+#define HEDLEY_DIAGNOSTIC_DISABLE_UNKNOWN_PRAGMAS \
+ __pragma(warning(disable : 161))
#elif HEDLEY_PGI_VERSION_CHECK(17, 10, 0)
#define HEDLEY_DIAGNOSTIC_DISABLE_UNKNOWN_PRAGMAS _Pragma("diag_suppress 1675")
#elif HEDLEY_GCC_VERSION_CHECK(4, 3, 0)
#elif HEDLEY_INTEL_VERSION_CHECK(17, 0, 0)
#define HEDLEY_DIAGNOSTIC_DISABLE_UNKNOWN_CPP_ATTRIBUTES \
_Pragma("warning(disable:1292)")
+#elif HEDLEY_INTEL_CL_VERSION_CHECK(2021, 1, 0)
+#define HEDLEY_DIAGNOSTIC_DISABLE_UNKNOWN_CPP_ATTRIBUTES \
+ __pragma(warning(disable : 1292))
#elif HEDLEY_MSVC_VERSION_CHECK(19, 0, 0)
#define HEDLEY_DIAGNOSTIC_DISABLE_UNKNOWN_CPP_ATTRIBUTES \
__pragma(warning(disable : 5030))
+#elif HEDLEY_PGI_VERSION_CHECK(20, 7, 0)
+#define HEDLEY_DIAGNOSTIC_DISABLE_UNKNOWN_CPP_ATTRIBUTES \
+ _Pragma("diag_suppress 1097,1098")
#elif HEDLEY_PGI_VERSION_CHECK(17, 10, 0)
#define HEDLEY_DIAGNOSTIC_DISABLE_UNKNOWN_CPP_ATTRIBUTES \
_Pragma("diag_suppress 1097")
#if defined(HEDLEY_DEPRECATED_FOR)
#undef HEDLEY_DEPRECATED_FOR
#endif
-#if defined(__cplusplus) && (__cplusplus >= 201402L)
-#define HEDLEY_DEPRECATED(since) \
- HEDLEY_DIAGNOSTIC_DISABLE_CPP98_COMPAT_WRAP_( \
- [[deprecated("Since " #since)]])
-#define HEDLEY_DEPRECATED_FOR(since, replacement) \
- HEDLEY_DIAGNOSTIC_DISABLE_CPP98_COMPAT_WRAP_( \
- [[deprecated("Since " #since "; use " #replacement)]])
+#if HEDLEY_MSVC_VERSION_CHECK(14, 0, 0) || \
+ HEDLEY_INTEL_CL_VERSION_CHECK(2021, 1, 0)
+#define HEDLEY_DEPRECATED(since) __declspec(deprecated("Since " #since))
+#define HEDLEY_DEPRECATED_FOR(since, replacement) \
+ __declspec(deprecated("Since " #since "; use " #replacement))
#elif HEDLEY_HAS_EXTENSION(attribute_deprecated_with_message) || \
HEDLEY_GCC_VERSION_CHECK(4, 5, 0) || \
HEDLEY_INTEL_VERSION_CHECK(13, 0, 0) || \
__attribute__((__deprecated__("Since " #since)))
#define HEDLEY_DEPRECATED_FOR(since, replacement) \
__attribute__((__deprecated__("Since " #since "; use " #replacement)))
+#elif defined(__cplusplus) && (__cplusplus >= 201402L)
+#define HEDLEY_DEPRECATED(since) \
+ HEDLEY_DIAGNOSTIC_DISABLE_CPP98_COMPAT_WRAP_( \
+ [[deprecated("Since " #since)]])
+#define HEDLEY_DEPRECATED_FOR(since, replacement) \
+ HEDLEY_DIAGNOSTIC_DISABLE_CPP98_COMPAT_WRAP_( \
+ [[deprecated("Since " #since "; use " #replacement)]])
#elif HEDLEY_HAS_ATTRIBUTE(deprecated) || HEDLEY_GCC_VERSION_CHECK(3, 1, 0) || \
HEDLEY_ARM_VERSION_CHECK(4, 1, 0) || \
HEDLEY_TI_VERSION_CHECK(15, 12, 0) || \
#define HEDLEY_DEPRECATED(since) __attribute__((__deprecated__))
#define HEDLEY_DEPRECATED_FOR(since, replacement) \
__attribute__((__deprecated__))
-#elif HEDLEY_MSVC_VERSION_CHECK(14, 0, 0)
-#define HEDLEY_DEPRECATED(since) __declspec(deprecated("Since " #since))
-#define HEDLEY_DEPRECATED_FOR(since, replacement) \
- __declspec(deprecated("Since " #since "; use " #replacement))
-#elif HEDLEY_MSVC_VERSION_CHECK(13, 10, 0) || \
- HEDLEY_PELLES_VERSION_CHECK(6, 50, 0)
+#elif HEDLEY_MSVC_VERSION_CHECK(13, 10, 0) || \
+ HEDLEY_PELLES_VERSION_CHECK(6, 50, 0) || \
+ HEDLEY_INTEL_CL_VERSION_CHECK(2021, 1, 0)
#define HEDLEY_DEPRECATED(since) __declspec(deprecated)
#define HEDLEY_DEPRECATED_FOR(since, replacement) __declspec(deprecated)
#elif HEDLEY_IAR_VERSION_CHECK(8, 0, 0)
#if defined(HEDLEY_WARN_UNUSED_RESULT_MSG)
#undef HEDLEY_WARN_UNUSED_RESULT_MSG
#endif
-#if (HEDLEY_HAS_CPP_ATTRIBUTE(nodiscard) >= 201907L)
-#define HEDLEY_WARN_UNUSED_RESULT \
- HEDLEY_DIAGNOSTIC_DISABLE_CPP98_COMPAT_WRAP_([[nodiscard]])
-#define HEDLEY_WARN_UNUSED_RESULT_MSG(msg) \
- HEDLEY_DIAGNOSTIC_DISABLE_CPP98_COMPAT_WRAP_([[nodiscard(msg)]])
-#elif HEDLEY_HAS_CPP_ATTRIBUTE(nodiscard)
-#define HEDLEY_WARN_UNUSED_RESULT \
- HEDLEY_DIAGNOSTIC_DISABLE_CPP98_COMPAT_WRAP_([[nodiscard]])
-#define HEDLEY_WARN_UNUSED_RESULT_MSG(msg) \
- HEDLEY_DIAGNOSTIC_DISABLE_CPP98_COMPAT_WRAP_([[nodiscard]])
-#elif HEDLEY_HAS_ATTRIBUTE(warn_unused_result) || \
+#if HEDLEY_HAS_ATTRIBUTE(warn_unused_result) || \
HEDLEY_GCC_VERSION_CHECK(3, 4, 0) || \
HEDLEY_INTEL_VERSION_CHECK(13, 0, 0) || \
HEDLEY_TI_VERSION_CHECK(15, 12, 0) || \
#define HEDLEY_WARN_UNUSED_RESULT __attribute__((__warn_unused_result__))
#define HEDLEY_WARN_UNUSED_RESULT_MSG(msg) \
__attribute__((__warn_unused_result__))
+#elif (HEDLEY_HAS_CPP_ATTRIBUTE(nodiscard) >= 201907L)
+#define HEDLEY_WARN_UNUSED_RESULT \
+ HEDLEY_DIAGNOSTIC_DISABLE_CPP98_COMPAT_WRAP_([[nodiscard]])
+#define HEDLEY_WARN_UNUSED_RESULT_MSG(msg) \
+ HEDLEY_DIAGNOSTIC_DISABLE_CPP98_COMPAT_WRAP_([[nodiscard(msg)]])
+#elif HEDLEY_HAS_CPP_ATTRIBUTE(nodiscard)
+#define HEDLEY_WARN_UNUSED_RESULT \
+ HEDLEY_DIAGNOSTIC_DISABLE_CPP98_COMPAT_WRAP_([[nodiscard]])
+#define HEDLEY_WARN_UNUSED_RESULT_MSG(msg) \
+ HEDLEY_DIAGNOSTIC_DISABLE_CPP98_COMPAT_WRAP_([[nodiscard]])
#elif defined(_Check_return_) /* SAL */
#define HEDLEY_WARN_UNUSED_RESULT _Check_return_
#define HEDLEY_WARN_UNUSED_RESULT_MSG(msg) _Check_return_
#define HEDLEY_NO_RETURN __attribute__((__noreturn__))
#elif HEDLEY_SUNPRO_VERSION_CHECK(5, 10, 0)
#define HEDLEY_NO_RETURN _Pragma("does_not_return")
-#elif HEDLEY_MSVC_VERSION_CHECK(13, 10, 0)
+#elif HEDLEY_MSVC_VERSION_CHECK(13, 10, 0) || \
+ HEDLEY_INTEL_CL_VERSION_CHECK(2021, 1, 0)
#define HEDLEY_NO_RETURN __declspec(noreturn)
#elif HEDLEY_TI_CL6X_VERSION_CHECK(6, 0, 0) && defined(__cplusplus)
#define HEDLEY_NO_RETURN _Pragma("FUNC_NEVER_RETURNS;")
#if defined(HEDLEY_ASSUME)
#undef HEDLEY_ASSUME
#endif
-#if HEDLEY_MSVC_VERSION_CHECK(13, 10, 0) || HEDLEY_INTEL_VERSION_CHECK(13, 0, 0)
+#if HEDLEY_MSVC_VERSION_CHECK(13, 10, 0) || \
+ HEDLEY_INTEL_VERSION_CHECK(13, 0, 0) || \
+ HEDLEY_INTEL_CL_VERSION_CHECK(2021, 1, 0)
#define HEDLEY_ASSUME(expr) __assume(expr)
#elif HEDLEY_HAS_BUILTIN(__builtin_assume)
#define HEDLEY_ASSUME(expr) __builtin_assume(expr)
#if HEDLEY_HAS_BUILTIN(__builtin_unpredictable)
#define HEDLEY_UNPREDICTABLE(expr) __builtin_unpredictable((expr))
#endif
-#if HEDLEY_HAS_BUILTIN(__builtin_expect_with_probability) || \
+#if (HEDLEY_HAS_BUILTIN(__builtin_expect_with_probability) && \
+ !defined(HEDLEY_PGI_VERSION)) || \
HEDLEY_GCC_VERSION_CHECK(9, 0, 0)
#define HEDLEY_PREDICT(expr, value, probability) \
__builtin_expect_with_probability((expr), (value), (probability))
__builtin_expect_with_probability(!!(expr), 0, (probability))
#define HEDLEY_LIKELY(expr) __builtin_expect(!!(expr), 1)
#define HEDLEY_UNLIKELY(expr) __builtin_expect(!!(expr), 0)
-#elif HEDLEY_HAS_BUILTIN(__builtin_expect) || \
+#elif (HEDLEY_HAS_BUILTIN(__builtin_expect) && \
+ !defined(HEDLEY_INTEL_CL_VERSION)) || \
HEDLEY_GCC_VERSION_CHECK(3, 0, 0) || \
HEDLEY_INTEL_VERSION_CHECK(13, 0, 0) || \
(HEDLEY_SUNPRO_VERSION_CHECK(5, 15, 0) && defined(__cplusplus)) || \
#define HEDLEY_MALLOC __attribute__((__malloc__))
#elif HEDLEY_SUNPRO_VERSION_CHECK(5, 10, 0)
#define HEDLEY_MALLOC _Pragma("returns_new_memory")
-#elif HEDLEY_MSVC_VERSION_CHECK(14, 0, 0)
+#elif HEDLEY_MSVC_VERSION_CHECK(14, 0, 0) || \
+ HEDLEY_INTEL_CL_VERSION_CHECK(2021, 1, 0)
#define HEDLEY_MALLOC __declspec(restrict)
#else
#define HEDLEY_MALLOC
#elif HEDLEY_GCC_VERSION_CHECK(3, 1, 0) || \
HEDLEY_MSVC_VERSION_CHECK(14, 0, 0) || \
HEDLEY_INTEL_VERSION_CHECK(13, 0, 0) || \
+ HEDLEY_INTEL_CL_VERSION_CHECK(2021, 1, 0) || \
HEDLEY_ARM_VERSION_CHECK(4, 1, 0) || \
HEDLEY_IBM_VERSION_CHECK(10, 1, 0) || \
HEDLEY_PGI_VERSION_CHECK(17, 10, 0) || \
#define HEDLEY_INLINE inline
#elif defined(HEDLEY_GCC_VERSION) || HEDLEY_ARM_VERSION_CHECK(6, 2, 0)
#define HEDLEY_INLINE __inline__
-#elif HEDLEY_MSVC_VERSION_CHECK(12, 0, 0) || \
- HEDLEY_ARM_VERSION_CHECK(4, 1, 0) || \
- HEDLEY_TI_ARMCL_VERSION_CHECK(5, 1, 0) || \
- HEDLEY_TI_CL430_VERSION_CHECK(3, 1, 0) || \
- HEDLEY_TI_CL2000_VERSION_CHECK(6, 2, 0) || \
- HEDLEY_TI_CL6X_VERSION_CHECK(8, 0, 0) || \
- HEDLEY_TI_CL7X_VERSION_CHECK(1, 2, 0) || \
+#elif HEDLEY_MSVC_VERSION_CHECK(12, 0, 0) || \
+ HEDLEY_INTEL_CL_VERSION_CHECK(2021, 1, 0) || \
+ HEDLEY_ARM_VERSION_CHECK(4, 1, 0) || \
+ HEDLEY_TI_ARMCL_VERSION_CHECK(5, 1, 0) || \
+ HEDLEY_TI_CL430_VERSION_CHECK(3, 1, 0) || \
+ HEDLEY_TI_CL2000_VERSION_CHECK(6, 2, 0) || \
+ HEDLEY_TI_CL6X_VERSION_CHECK(8, 0, 0) || \
+ HEDLEY_TI_CL7X_VERSION_CHECK(1, 2, 0) || \
HEDLEY_TI_CLPRU_VERSION_CHECK(2, 1, 0)
#define HEDLEY_INLINE __inline
#else
HEDLEY_TI_CL7X_VERSION_CHECK(1, 2, 0) || \
HEDLEY_TI_CLPRU_VERSION_CHECK(2, 1, 0)
#define HEDLEY_ALWAYS_INLINE __attribute__((__always_inline__)) HEDLEY_INLINE
-#elif HEDLEY_MSVC_VERSION_CHECK(12, 0, 0)
+#elif HEDLEY_MSVC_VERSION_CHECK(12, 0, 0) || \
+ HEDLEY_INTEL_CL_VERSION_CHECK(2021, 1, 0)
#define HEDLEY_ALWAYS_INLINE __forceinline
#elif defined(__cplusplus) && (HEDLEY_TI_ARMCL_VERSION_CHECK(5, 2, 0) || \
HEDLEY_TI_CL430_VERSION_CHECK(4, 3, 0) || \
HEDLEY_TI_CL7X_VERSION_CHECK(1, 2, 0) || \
HEDLEY_TI_CLPRU_VERSION_CHECK(2, 1, 0)
#define HEDLEY_NEVER_INLINE __attribute__((__noinline__))
-#elif HEDLEY_MSVC_VERSION_CHECK(13, 10, 0)
+#elif HEDLEY_MSVC_VERSION_CHECK(13, 10, 0) || \
+ HEDLEY_INTEL_CL_VERSION_CHECK(2021, 1, 0)
#define HEDLEY_NEVER_INLINE __declspec(noinline)
#elif HEDLEY_PGI_VERSION_CHECK(10, 2, 0)
#define HEDLEY_NEVER_INLINE _Pragma("noinline")
#if HEDLEY_HAS_ATTRIBUTE(nothrow) || HEDLEY_GCC_VERSION_CHECK(3, 3, 0) || \
HEDLEY_INTEL_VERSION_CHECK(13, 0, 0)
#define HEDLEY_NO_THROW __attribute__((__nothrow__))
-#elif HEDLEY_MSVC_VERSION_CHECK(13, 1, 0) || HEDLEY_ARM_VERSION_CHECK(4, 1, 0)
+#elif HEDLEY_MSVC_VERSION_CHECK(13, 1, 0) || \
+ HEDLEY_INTEL_CL_VERSION_CHECK(2021, 1, 0) || \
+ HEDLEY_ARM_VERSION_CHECK(4, 1, 0)
#define HEDLEY_NO_THROW __declspec(nothrow)
#else
#define HEDLEY_NO_THROW
#if defined(HEDLEY_FALL_THROUGH)
#undef HEDLEY_FALL_THROUGH
#endif
-#if HEDLEY_GNUC_HAS_ATTRIBUTE(fallthrough, 7, 0, 0) && \
- !defined(HEDLEY_PGI_VERSION)
+#if HEDLEY_HAS_ATTRIBUTE(fallthrough) || HEDLEY_GCC_VERSION_CHECK(7, 0, 0)
#define HEDLEY_FALL_THROUGH __attribute__((__fallthrough__))
#elif HEDLEY_HAS_CPP_ATTRIBUTE_NS(clang, fallthrough)
#define HEDLEY_FALL_THROUGH \
#endif
#if !defined(__cplusplus) && \
((defined(__STDC_VERSION__) && (__STDC_VERSION__ >= 201112L)) || \
- HEDLEY_HAS_FEATURE(c_static_assert) || \
+ (HEDLEY_HAS_FEATURE(c_static_assert) && \
+ !defined(HEDLEY_INTEL_CL_VERSION)) || \
HEDLEY_GCC_VERSION_CHECK(6, 0, 0) || \
HEDLEY_INTEL_VERSION_CHECK(13, 0, 0) || defined(_Static_assert))
#define HEDLEY_STATIC_ASSERT(expr, message) _Static_assert(expr, message)
#elif (defined(__cplusplus) && (__cplusplus >= 201103L)) || \
- HEDLEY_MSVC_VERSION_CHECK(16, 0, 0)
+ HEDLEY_MSVC_VERSION_CHECK(16, 0, 0) || \
+ HEDLEY_INTEL_CL_VERSION_CHECK(2021, 1, 0)
#define HEDLEY_STATIC_ASSERT(expr, message) \
HEDLEY_DIAGNOSTIC_DISABLE_CPP98_COMPAT_WRAP_( \
static_assert(expr, message))
HEDLEY_PGI_VERSION_CHECK(18, 4, 0) || \
HEDLEY_INTEL_VERSION_CHECK(13, 0, 0)
#define HEDLEY_WARNING(msg) HEDLEY_PRAGMA(GCC warning msg)
-#elif HEDLEY_MSVC_VERSION_CHECK(15, 0, 0)
+#elif HEDLEY_MSVC_VERSION_CHECK(15, 0, 0) || \
+ HEDLEY_INTEL_CL_VERSION_CHECK(2021, 1, 0)
#define HEDLEY_WARNING(msg) HEDLEY_PRAGMA(message(msg))
#else
#define HEDLEY_WARNING(msg) HEDLEY_MESSAGE(msg)
#endif
#if HEDLEY_HAS_ATTRIBUTE(flag_enum)
#define HEDLEY_FLAGS __attribute__((__flag_enum__))
+#else
+#define HEDLEY_FLAGS
#endif
#if defined(HEDLEY_FLAGS_CAST)
#if defined(HEDLEY_EMPTY_BASES)
#undef HEDLEY_EMPTY_BASES
#endif
-#if HEDLEY_MSVC_VERSION_CHECK(19, 0, 23918) && \
- !HEDLEY_MSVC_VERSION_CHECK(20, 0, 0)
+#if (HEDLEY_MSVC_VERSION_CHECK(19, 0, 23918) && \
+ !HEDLEY_MSVC_VERSION_CHECK(20, 0, 0)) || \
+ HEDLEY_INTEL_CL_VERSION_CHECK(2021, 1, 0)
#define HEDLEY_EMPTY_BASES __declspec(empty_bases)
#else
#define HEDLEY_EMPTY_BASES
obs-studio-26.1.1.tar.xz/libobs/util/simde/simde-align.h
Added
+/* Alignment
+ * Created by Evan Nemerson <evan@nemerson.com>
+ *
+ * To the extent possible under law, the authors have waived all
+ * copyright and related or neighboring rights to this code. For
+ * details, see the Creative Commons Zero 1.0 Universal license at
+ * <https://creativecommons.org/publicdomain/zero/1.0/>
+ *
+ * SPDX-License-Identifier: CC0-1.0
+ *
+ **********************************************************************
+ *
+ * This is a portability layer which should help iron out some
+ * differences across various compilers, as well as various versions of
+ * C and C++.
+ *
+ * It was originally developed for SIMD Everywhere
+ * (<https://github.com/simd-everywhere/simde>), but since its only
+ * dependency is Hedley (<https://nemequ.github.io/hedley>, also CC0)
+ * it can easily be used in other projects, so please feel free to do
+ * so.
+ *
+ * If you do use this in your project, please keep a link to SIMDe in
+ * your code to remind you where to report any bugs and/or check for
+ * updated versions.
+ *
+ * # API Overview
+ *
+ * The API has several parts, and most macros have a few variations.
+ * There are APIs for declaring aligned fields/variables, optimization
+ * hints, and run-time alignment checks.
+ *
+ * Briefly, macros ending with "_TO" take numeric values and are great
+ * when you know the value you would like to use. Macros ending with
+ * "_LIKE", on the other hand, accept a type and are used when you want
+ * to use the alignment of a type instead of hardcoding a value.
+ *
+ * Documentation for each section of the API is inline.
+ *
+ * True to form, MSVC is the main problem and imposes several
+ * limitations on the effectiveness of the APIs. Detailed descriptions
+ * of the limitations of each macro are inline, but in general:
+ *
+ * * On C11+ or C++11+ code written using this API will work. The
+ * ASSUME macros may or may not generate a hint to the compiler, but
+ * that is only an optimization issue and will not actually cause
+ * failures.
+ * * If you're using pretty much any compiler other than MSVC,
+ * everything should basically work as well as in C11/C++11.
+ */
+
+#if !defined(SIMDE_ALIGN_H)
+#define SIMDE_ALIGN_H
+
+#include "hedley.h"
+
+/* I know this seems a little silly, but some non-hosted compilers
+ * don't have stddef.h, so we try to accommodate them. */
+#if !defined(SIMDE_ALIGN_SIZE_T_)
+#if defined(__SIZE_TYPE__)
+#define SIMDE_ALIGN_SIZE_T_ __SIZE_TYPE__
+#elif defined(__SIZE_T_TYPE__)
+#define SIMDE_ALIGN_SIZE_T_ __SIZE_TYPE__
+#elif defined(__cplusplus)
+#include <cstddef>
+#define SIMDE_ALIGN_SIZE_T_ size_t
+#else
+#include <stddef.h>
+#define SIMDE_ALIGN_SIZE_T_ size_t
+#endif
+#endif
+
+#if !defined(SIMDE_ALIGN_INTPTR_T_)
+#if defined(__INTPTR_TYPE__)
+#define SIMDE_ALIGN_INTPTR_T_ __INTPTR_TYPE__
+#elif defined(__PTRDIFF_TYPE__)
+#define SIMDE_ALIGN_INTPTR_T_ __PTRDIFF_TYPE__
+#elif defined(__PTRDIFF_T_TYPE__)
+#define SIMDE_ALIGN_INTPTR_T_ __PTRDIFF_T_TYPE__
+#elif defined(__cplusplus)
+#include <cstddef>
+#define SIMDE_ALIGN_INTPTR_T_ ptrdiff_t
+#else
+#include <stddef.h>
+#define SIMDE_ALIGN_INTPTR_T_ ptrdiff_t
+#endif
+#endif
+
+#if defined(SIMDE_ALIGN_DEBUG)
+#if defined(__cplusplus)
+#include <cstdio>
+#else
+#include <stdio.h>
+#endif
+#endif
+
+/* SIMDE_ALIGN_OF(Type)
+ *
+ * The SIMDE_ALIGN_OF macro works like alignof, or _Alignof, or
+ * __alignof, or __alignof__, or __ALIGNOF__, depending on the compiler.
+ * It isn't defined everywhere (only when the compiler has some alignof-
+ * like feature we can use to implement it), but it should work in most
+ * modern compilers, as well as C11 and C++11.
+ *
+ * If we can't find an implementation for SIMDE_ALIGN_OF then the macro
+ * will not be defined, so if you can handle that situation sensibly
+ * you may need to sprinkle some ifdefs into your code.
+ */
+#if (defined(__STDC_VERSION__) && (__STDC_VERSION__ >= 201112L)) || \
+ (0 && HEDLEY_HAS_FEATURE(c_alignof))
+#define SIMDE_ALIGN_OF(Type) _Alignof(Type)
+#elif (defined(__cplusplus) && (__cplusplus >= 201103L)) || \
+ (0 && HEDLEY_HAS_FEATURE(cxx_alignof))
+#define SIMDE_ALIGN_OF(Type) alignof(Type)
+#elif HEDLEY_GCC_VERSION_CHECK(2, 95, 0) || \
+ HEDLEY_ARM_VERSION_CHECK(4, 1, 0) || \
+ HEDLEY_INTEL_VERSION_CHECK(13, 0, 0) || \
+ HEDLEY_SUNPRO_VERSION_CHECK(5, 13, 0) || \
+ HEDLEY_TINYC_VERSION_CHECK(0, 9, 24) || \
+ HEDLEY_PGI_VERSION_CHECK(19, 10, 0) || \
+ HEDLEY_CRAY_VERSION_CHECK(10, 0, 0) || \
+ HEDLEY_TI_ARMCL_VERSION_CHECK(16, 9, 0) || \
+ HEDLEY_TI_CL2000_VERSION_CHECK(16, 9, 0) || \
+ HEDLEY_TI_CL6X_VERSION_CHECK(8, 0, 0) || \
+ HEDLEY_TI_CL7X_VERSION_CHECK(1, 2, 0) || \
+ HEDLEY_TI_CL430_VERSION_CHECK(16, 9, 0) || \
+ HEDLEY_TI_CLPRU_VERSION_CHECK(2, 3, 2) || defined(__IBM__ALIGNOF__) || \
+ defined(__clang__)
+#define SIMDE_ALIGN_OF(Type) __alignof__(Type)
+#elif HEDLEY_IAR_VERSION_CHECK(8, 40, 0)
+#define SIMDE_ALIGN_OF(Type) __ALIGNOF__(Type)
+#elif HEDLEY_MSVC_VERSION_CHECK(19, 0, 0)
+/* Probably goes back much further, but MS takes down their old docs.
+ * If you can verify that this works in earlier versions please let
+ * me know! */
+#define SIMDE_ALIGN_OF(Type) __alignof(Type)
+#endif
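
A minimal sketch of the guarded usage suggested above (illustrative only, not part of the packaged header; the function name and fallback value are assumptions):

#include <stddef.h>
#include "simde-align.h"

static size_t example_int_alignment(void)
{
#if defined(SIMDE_ALIGN_OF)
	return SIMDE_ALIGN_OF(int); /* alignof-like query when the compiler supports it */
#else
	return sizeof(int); /* conservative guess when SIMDE_ALIGN_OF is undefined */
#endif
}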
+
+/* SIMDE_ALIGN_MAXIMUM:
+ *
+ * This is the maximum alignment that the compiler supports. You can
+ * define the value prior to including SIMDe if necessary, but in that
+ * case *please* submit an issue so we can add the platform to the
+ * detection code.
+ *
+ * Most compilers are okay with types which are aligned beyond what
+ * they think is the maximum, as long as the alignment is a power
+ * of two. MSVC is the exception (of course), so we need to cap the
+ * alignment requests at values that the implementation supports.
+ *
+ * XL C/C++ will accept values larger than 16 (which is the alignment
+ * of an AltiVec vector), but will not reliably align to the larger
+ * value, so we cap the value at 16 there.
+ *
+ * If the compiler accepts any power-of-two value within reason then
+ * this macro should be left undefined, and the SIMDE_ALIGN_CAP
+ * macro will just return the value passed to it. */
+#if !defined(SIMDE_ALIGN_MAXIMUM)
+#if defined(HEDLEY_MSVC_VERSION)
+#if defined(_M_IX86) || defined(_M_AMD64)
+#if HEDLEY_MSVC_VERSION_CHECK(19, 14, 0)
+#define SIMDE_ALIGN_PLATFORM_MAXIMUM 64
+#elif HEDLEY_MSVC_VERSION_CHECK(16, 0, 0)
+/* VS 2010 is really a guess based on Wikipedia; if anyone can
+ * test with old VS versions I'd really appreciate it. */
+#define SIMDE_ALIGN_PLATFORM_MAXIMUM 32
+#else
+#define SIMDE_ALIGN_PLATFORM_MAXIMUM 16
+#endif
+#elif defined(_M_ARM) || defined(_M_ARM64)
+#define SIMDE_ALIGN_PLATFORM_MAXIMUM 8
+#endif
+#elif defined(HEDLEY_IBM_VERSION)
+#define SIMDE_ALIGN_PLATFORM_MAXIMUM 16
+#endif
+#endif
+
+/* You can mostly ignore these; they're intended for internal use.
+ * If you do need to use them please let me know; if they fulfill
+ * a common use case I'll probably drop the trailing underscore
+ * and make them part of the public API. */
+#if defined(SIMDE_ALIGN_PLATFORM_MAXIMUM)
+#if SIMDE_ALIGN_PLATFORM_MAXIMUM >= 64
+#define SIMDE_ALIGN_64_ 64
+#define SIMDE_ALIGN_32_ 32
+#define SIMDE_ALIGN_16_ 16
+#define SIMDE_ALIGN_8_ 8
+#elif SIMDE_ALIGN_PLATFORM_MAXIMUM >= 32
+#define SIMDE_ALIGN_64_ 32
+#define SIMDE_ALIGN_32_ 32
+#define SIMDE_ALIGN_16_ 16
+#define SIMDE_ALIGN_8_ 8
+#elif SIMDE_ALIGN_PLATFORM_MAXIMUM >= 16
+#define SIMDE_ALIGN_64_ 16
+#define SIMDE_ALIGN_32_ 16
+#define SIMDE_ALIGN_16_ 16
+#define SIMDE_ALIGN_8_ 8
+#elif SIMDE_ALIGN_PLATFORM_MAXIMUM >= 8
+#define SIMDE_ALIGN_64_ 8
+#define SIMDE_ALIGN_32_ 8
+#define SIMDE_ALIGN_16_ 8
+#define SIMDE_ALIGN_8_ 8
+#else
+#error Max alignment expected to be >= 8
+#endif
+#else
+#define SIMDE_ALIGN_64_ 64
+#define SIMDE_ALIGN_32_ 32
+#define SIMDE_ALIGN_16_ 16
+#define SIMDE_ALIGN_8_ 8
+#endif
+
+/**
+ * SIMDE_ALIGN_CAP(Alignment)
+ *
+ * Returns the minimum of Alignment or SIMDE_ALIGN_MAXIMUM.
+ */
+#if defined(SIMDE_ALIGN_MAXIMUM)
+#define SIMDE_ALIGN_CAP(Alignment) \
+ (((Alignment) < (SIMDE_ALIGN_PLATFORM_MAXIMUM)) \
+ ? (Alignment) \
+ : (SIMDE_ALIGN_PLATFORM_MAXIMUM))
+#else
+#define SIMDE_ALIGN_CAP(Alignment) (Alignment)
+#endif
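
A small sketch of how the cap behaves (illustrative; the enum name is an assumption). On compilers without a detected platform maximum the request passes through unchanged; otherwise it is clamped:

#include "simde-align.h"

/* 64 where no cap applies; 32 or 16 on older 32-bit MSVC,
 * 8 on MSVC for ARM, and 16 under XL C/C++. */
enum { example_capped_alignment = SIMDE_ALIGN_CAP(64) };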
+
+/* SIMDE_ALIGN_TO(Alignment)
+ *
+ * SIMDE_ALIGN_TO is used to declare types or variables. It basically
+ * maps to the align attribute in most compilers, the align declspec
+ * in MSVC, or _Alignas/alignas in C11/C++11.
+ *
+ * Example:
+ *
+ * struct i32x4 {
+ * SIMDE_ALIGN_TO(16) int32_t values[4];
+ * }
+ *
+ * Limitations:
+ *
+ * MSVC requires that the Alignment parameter be numeric; you can't do
+ * something like `SIMDE_ALIGN_TO(SIMDE_ALIGN_OF(int))`. This is
+ * unfortunate because that's really how the LIKE macros are
+ * implemented, and I am not aware of a way to get anything like this
+ * to work without using the C11/C++11 keywords.
+ *
+ * It also means that we can't use SIMDE_ALIGN_CAP to limit the
+ * alignment to the value specified, which MSVC also requires, so on
+ * MSVC you should use the `SIMDE_ALIGN_TO_8/16/32/64` macros instead.
+ * They work like `SIMDE_ALIGN_TO(SIMDE_ALIGN_CAP(Alignment))` would,
+ * but should be safe to use on MSVC.
+ *
+ * All this is to say that, if you want your code to work on MSVC, you
+ * should use the SIMDE_ALIGN_TO_8/16/32/64 macros below instead of
+ * SIMDE_ALIGN_TO(8/16/32/64).
+ */
+#if HEDLEY_HAS_ATTRIBUTE(aligned) || HEDLEY_GCC_VERSION_CHECK(2, 95, 0) || \
+ HEDLEY_CRAY_VERSION_CHECK(8, 4, 0) || \
+ HEDLEY_IBM_VERSION_CHECK(11, 1, 0) || \
+ HEDLEY_INTEL_VERSION_CHECK(13, 0, 0) || \
+ HEDLEY_PGI_VERSION_CHECK(19, 4, 0) || \
+ HEDLEY_ARM_VERSION_CHECK(4, 1, 0) || \
+ HEDLEY_TINYC_VERSION_CHECK(0, 9, 24) || \
+ HEDLEY_TI_ARMCL_VERSION_CHECK(16, 9, 0) || \
+ HEDLEY_TI_CL2000_VERSION_CHECK(16, 9, 0) || \
+ HEDLEY_TI_CL6X_VERSION_CHECK(8, 0, 0) || \
+ HEDLEY_TI_CL7X_VERSION_CHECK(1, 2, 0) || \
+ HEDLEY_TI_CL430_VERSION_CHECK(16, 9, 0) || \
+ HEDLEY_TI_CLPRU_VERSION_CHECK(2, 3, 2)
+#define SIMDE_ALIGN_TO(Alignment) \
+ __attribute__((__aligned__(SIMDE_ALIGN_CAP(Alignment))))
+#elif (defined(__STDC_VERSION__) && (__STDC_VERSION__ >= 201112L))
+#define SIMDE_ALIGN_TO(Alignment) _Alignas(SIMDE_ALIGN_CAP(Alignment))
+#elif (defined(__cplusplus) && (__cplusplus >= 201103L))
+#define SIMDE_ALIGN_TO(Alignment) alignas(SIMDE_ALIGN_CAP(Alignment))
+#elif defined(HEDLEY_MSVC_VERSION)
+#define SIMDE_ALIGN_TO(Alignment) __declspec(align(Alignment))
+/* Unfortunately MSVC can't handle __declspec(align(__alignof(Type)));
+ * the alignment passed to the declspec has to be an integer. */
+#define SIMDE_ALIGN_OF_UNUSABLE_FOR_LIKE
+#endif
+#define SIMDE_ALIGN_TO_64 SIMDE_ALIGN_TO(SIMDE_ALIGN_64_)
+#define SIMDE_ALIGN_TO_32 SIMDE_ALIGN_TO(SIMDE_ALIGN_32_)
+#define SIMDE_ALIGN_TO_16 SIMDE_ALIGN_TO(SIMDE_ALIGN_16_)
+#define SIMDE_ALIGN_TO_8 SIMDE_ALIGN_TO(SIMDE_ALIGN_8_)
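
A sketch mirroring the i32x4 example above, but using the numeric-suffix form so the same declaration also compiles under MSVC's __declspec(align()) (the struct name is illustrative):

#include <stdint.h>
#include "simde-align.h"

struct example_i32x4 {
	SIMDE_ALIGN_TO_16 int32_t values[4]; /* capped automatically where needed */
};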
+
+/* SIMDE_ALIGN_ASSUME_TO(Pointer, Alignment)
+ *
+ * SIMDE_ALIGN_ASSUME_TO is semantically similar to C++20's
+ * std::assume_aligned, or __builtin_assume_aligned. It tells the
+ * compiler to assume that the provided pointer is aligned to an
+ * `Alignment`-byte boundary.
+ *
+ * If you define SIMDE_ALIGN_DEBUG prior to including this header then
+ * SIMDE_ALIGN_ASSUME_TO will turn into a runtime check. We don't
+ * integrate with NDEBUG in this header, but it may be a good idea to
+ * put something like this in your code:
+ *
+ * #if !defined(NDEBUG)
+ * #define SIMDE_ALIGN_DEBUG
+ * #endif
+ * #include <.../simde-align.h>
+ */
+#if HEDLEY_HAS_BUILTIN(__builtin_assume_aligned) || \
+ HEDLEY_GCC_VERSION_CHECK(4, 7, 0)
+#define SIMDE_ALIGN_ASSUME_TO_UNCHECKED(Pointer, Alignment) \
+ HEDLEY_REINTERPRET_CAST( \
+ __typeof__(Pointer), \
+ __builtin_assume_aligned( \
+ HEDLEY_CONST_CAST( \
+ void *, HEDLEY_REINTERPRET_CAST(const void *, \
+ Pointer)), \
+ Alignment))
+#elif HEDLEY_INTEL_VERSION_CHECK(13, 0, 0)
+#define SIMDE_ALIGN_ASSUME_TO_UNCHECKED(Pointer, Alignment) \
+ (__extension__({ \
+		__typeof__(Pointer) simde_assume_aligned_t_ = (Pointer); \
+ __assume_aligned(simde_assume_aligned_t_, Alignment); \
+ simde_assume_aligned_t_; \
+ }))
+#elif defined(__cplusplus) && (__cplusplus > 201703L)
+#include <memory>
+#define SIMDE_ALIGN_ASSUME_TO_UNCHECKED(Pointer, Alignment) \
+ std::assume_aligned<Alignment>(Pointer)
+#else
+#if defined(__cplusplus)
+template<typename T>
+HEDLEY_ALWAYS_INLINE static T *
+simde_align_assume_to_unchecked(T *ptr, const size_t alignment)
+#else
+HEDLEY_ALWAYS_INLINE static void *
+simde_align_assume_to_unchecked(void *ptr, const size_t alignment)
+#endif
+{
+ HEDLEY_ASSUME((HEDLEY_REINTERPRET_CAST(size_t, (ptr)) %
+ SIMDE_ALIGN_CAP(alignment)) == 0);
+ return ptr;
+}
+#if defined(__cplusplus)
+#define SIMDE_ALIGN_ASSUME_TO_UNCHECKED(Pointer, Alignment) \
+ simde_align_assume_to_unchecked((Pointer), (Alignment))
+#else
+#define SIMDE_ALIGN_ASSUME_TO_UNCHECKED(Pointer, Alignment) \
+ simde_align_assume_to_unchecked( \
+ HEDLEY_CONST_CAST(void *, HEDLEY_REINTERPRET_CAST( \
+ const void *, Pointer)), \
+ (Alignment))
+#endif
+#endif
+
+#if !defined(SIMDE_ALIGN_DEBUG)
+#define SIMDE_ALIGN_ASSUME_TO(Pointer, Alignment) \
+ SIMDE_ALIGN_ASSUME_TO_UNCHECKED(Pointer, Alignment)
+#else
+#include <stdio.h>
+#if defined(__cplusplus)
+template<typename T>
+static HEDLEY_ALWAYS_INLINE T *
+simde_align_assume_to_checked_uncapped(T *ptr, const size_t alignment,
+ const char *file, int line,
+ const char *ptrname)
+#else
+static HEDLEY_ALWAYS_INLINE void *
+simde_align_assume_to_checked_uncapped(void *ptr, const size_t alignment,
+ const char *file, int line,
+ const char *ptrname)
+#endif
+{
+ if (HEDLEY_UNLIKELY(
+ (HEDLEY_REINTERPRET_CAST(SIMDE_ALIGN_INTPTR_T_, (ptr)) %
+ HEDLEY_STATIC_CAST(SIMDE_ALIGN_INTPTR_T_,
+ SIMDE_ALIGN_CAP(alignment))) != 0)) {
+ fprintf(stderr,
+ "%s:%d: alignment check failed for `%s' (%p %% %u == %u)\n",
+ file, line, ptrname,
+ HEDLEY_REINTERPRET_CAST(const void *, ptr),
+ HEDLEY_STATIC_CAST(unsigned int,
+ SIMDE_ALIGN_CAP(alignment)),
+ HEDLEY_STATIC_CAST(
+ unsigned int,
+ HEDLEY_REINTERPRET_CAST(SIMDE_ALIGN_INTPTR_T_,
+ (ptr)) %
+ HEDLEY_STATIC_CAST(
+ SIMDE_ALIGN_INTPTR_T_,
+ SIMDE_ALIGN_CAP(alignment))));
+ }
+
+ return ptr;
+}
+
+#if defined(__cplusplus)
+#define SIMDE_ALIGN_ASSUME_TO(Pointer, Alignment) \
+ simde_align_assume_to_checked_uncapped((Pointer), (Alignment), \
+ __FILE__, __LINE__, #Pointer)
+#else
+#define SIMDE_ALIGN_ASSUME_TO(Pointer, Alignment) \
+ simde_align_assume_to_checked_uncapped( \
+ HEDLEY_CONST_CAST(void *, HEDLEY_REINTERPRET_CAST( \
+ const void *, Pointer)), \
+ (Alignment), __FILE__, __LINE__, #Pointer)
+#endif
+#endif
+
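A sketch of the intended usage (the function name is illustrative); with SIMDE_ALIGN_DEBUG defined before inclusion, the same line becomes the runtime check defined above:

#include <stdint.h>
#include "simde-align.h"

static int32_t example_sum4(const int32_t *p)
{
	/* promise the compiler that p sits on a 16-byte boundary */
	p = SIMDE_ALIGN_ASSUME_TO(p, 16);
	return p[0] + p[1] + p[2] + p[3];
}
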
+/* SIMDE_ALIGN_LIKE(Type)
+ * SIMDE_ALIGN_LIKE_#(Type)
+ *
+ * The SIMDE_ALIGN_LIKE macros are similar to the SIMDE_ALIGN_TO macros
+ * except instead of an integer they take a type; basically, it's just
+ * a more convenient way to do something like:
+ *
+ * SIMDE_ALIGN_TO(SIMDE_ALIGN_OF(Type))
+ *
+ * The versions with a numeric suffix will fall back on using a numeric
+ * value in the event we can't use SIMDE_ALIGN_OF(Type). This is
+ * mainly for MSVC, where __declspec(align()) can't handle anything
+ * other than hard-coded numeric values.
+ */
+#if defined(SIMDE_ALIGN_OF) && defined(SIMDE_ALIGN_TO) && \
+ !defined(SIMDE_ALIGN_OF_UNUSABLE_FOR_LIKE)
+#define SIMDE_ALIGN_LIKE(Type) SIMDE_ALIGN_TO(SIMDE_ALIGN_OF(Type))
+#define SIMDE_ALIGN_LIKE_64(Type) SIMDE_ALIGN_LIKE(Type)
+#define SIMDE_ALIGN_LIKE_32(Type) SIMDE_ALIGN_LIKE(Type)
+#define SIMDE_ALIGN_LIKE_16(Type) SIMDE_ALIGN_LIKE(Type)
+#define SIMDE_ALIGN_LIKE_8(Type) SIMDE_ALIGN_LIKE(Type)
+#else
+#define SIMDE_ALIGN_LIKE_64(Type) SIMDE_ALIGN_TO_64
+#define SIMDE_ALIGN_LIKE_32(Type) SIMDE_ALIGN_TO_32
+#define SIMDE_ALIGN_LIKE_16(Type) SIMDE_ALIGN_TO_16
+#define SIMDE_ALIGN_LIKE_8(Type) SIMDE_ALIGN_TO_8
+#endif
+
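A sketch of the LIKE form (the struct name is illustrative): where SIMDE_ALIGN_OF is usable the field takes the alignment of double, otherwise the numeric 8-byte fallback is used:

#include "simde-align.h"

struct example_f64x2 {
	SIMDE_ALIGN_LIKE_8(double) double values[2];
};
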
+/* SIMDE_ALIGN_ASSUME_LIKE(Pointer, Type)
+ *
+ * This is similar to SIMDE_ALIGN_ASSUME_TO, except that it takes a
+ * type instead of a numeric value. */
+#if defined(SIMDE_ALIGN_OF) && defined(SIMDE_ALIGN_ASSUME_TO)
+#define SIMDE_ALIGN_ASSUME_LIKE(Pointer, Type) \
+ SIMDE_ALIGN_ASSUME_TO(Pointer, SIMDE_ALIGN_OF(Type))
+#endif
+
+/* SIMDE_ALIGN_CAST(Type, Pointer)
+ *
+ * SIMDE_ALIGN_CAST is like C++'s reinterpret_cast, but it will try
+ * to silence warnings that some compilers may produce if you try
+ * to assign to a type with increased alignment requirements.
+ *
+ * Note that it does *not* actually attempt to tell the compiler that
+ * the pointer is aligned like the destination should be; that's the
+ * job of the next macro. This macro is necessary for stupid APIs
+ * like _mm_loadu_si128 where the input is a __m128i* but the function
+ * is specifically for data which isn't necessarily aligned to
+ * _Alignof(__m128i).
+ */
+#if HEDLEY_HAS_WARNING("-Wcast-align") || defined(__clang__) || \
+ HEDLEY_GCC_VERSION_CHECK(3, 4, 0)
+#define SIMDE_ALIGN_CAST(Type, Pointer) \
+ (__extension__({ \
+ HEDLEY_DIAGNOSTIC_PUSH \
+ _Pragma("GCC diagnostic ignored \"-Wcast-align\"") \
+ Type simde_r_ = \
+ HEDLEY_REINTERPRET_CAST(Type, Pointer); \
+ HEDLEY_DIAGNOSTIC_POP \
+ simde_r_; \
+ }))
+#else
+#define SIMDE_ALIGN_CAST(Type, Pointer) HEDLEY_REINTERPRET_CAST(Type, Pointer)
+#endif
+
+/* SIMDE_ALIGN_ASSUME_CAST(Type, Pointer)
+ *
+ * This is sort of like a combination of a reinterpret_cast and a
+ * SIMDE_ALIGN_ASSUME_LIKE. It uses SIMDE_ALIGN_ASSUME_LIKE to tell
+ * the compiler that the pointer is aligned like the specified type
+ * and casts the pointer to the specified type while suppressing any
+ * warnings from the compiler about casting to a type with greater
+ * alignment requirements.
+ */
+#define SIMDE_ALIGN_ASSUME_CAST(Type, Pointer) \
+ SIMDE_ALIGN_ASSUME_LIKE(SIMDE_ALIGN_CAST(Type, Pointer), Type)
+
+#endif /* !defined(SIMDE_ALIGN_H) */
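
A closing sketch of SIMDE_ALIGN_CAST (names are illustrative, not from the packaged sources): the byte buffer is reinterpreted without -Wcast-align noise; SIMDE_ALIGN_ASSUME_CAST could be used instead when the alignment promise should be recorded as well:

#include <stdint.h>
#include "simde-align.h"

struct example_v128 {
	SIMDE_ALIGN_TO_16 int32_t i32[4];
};

static struct example_v128 *example_view(unsigned char *buf)
{
	/* silences the increased-alignment warning on GCC/clang */
	return SIMDE_ALIGN_CAST(struct example_v128 *, buf);
}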
obs-studio-26.1.0.tar.xz/libobs/util/simde/simde-arch.h -> obs-studio-26.1.1.tar.xz/libobs/util/simde/simde-arch.h
Changed
* an undefined macro being used (e.g., GCC with -Wundef).
*
* This was originally created for SIMDe
- * <https://github.com/nemequ/simde> (hence the prefix), but this
+ * <https://github.com/simd-everywhere/simde> (hence the prefix), but this
* header has no dependencies and may be used anywhere. It is
* originally based on information from
* <https://sourceforge.net/p/predef/wiki/Architectures/>, though it
* has been enhanced with additional information.
*
* If you improve this file, or find a bug, please file the issue at
- * <https://github.com/nemequ/simde/issues>. If you copy this into
+ * <https://github.com/simd-everywhere/simde/issues>. If you copy this into
* your project, even if you change the prefix, please keep the links
* to SIMDe intact so others know where to report issues, submit
* enhancements, and find the latest version. */
/* AMD64 / x86_64
<https://en.wikipedia.org/wiki/X86-64> */
#if defined(__amd64__) || defined(__amd64) || defined(__x86_64__) || \
- defined(__x86_64) || defined(_M_X66) || defined(_M_AMD64)
+ defined(__x86_64) || defined(_M_X64) || defined(_M_AMD64)
#define SIMDE_ARCH_AMD64 1000
#endif
#define SIMDE_ARCH_ARM_NEON SIMDE_ARCH_ARM
#endif
#endif
+#if defined(__ARM_FEATURE_SVE)
+#define SIMDE_ARCH_ARM_SVE
+#endif
/* Blackfin
<https://en.wikipedia.org/wiki/Blackfin> */
#define SIMDE_ARCH_X86_AVX 1
#endif
#endif
+#if defined(__AVX512VP2INTERSECT__)
+#define SIMDE_ARCH_X86_AVX512VP2INTERSECT 1
+#endif
+#if defined(__AVX512VBMI__)
+#define SIMDE_ARCH_X86_AVX512VBMI 1
+#endif
#if defined(__AVX512BW__)
#define SIMDE_ARCH_X86_AVX512BW 1
#endif
#if defined(__GFNI__)
#define SIMDE_ARCH_X86_GFNI 1
#endif
+#if defined(__PCLMUL__)
+#define SIMDE_ARCH_X86_PCLMUL 1
+#endif
+#if defined(__VPCLMULQDQ__)
+#define SIMDE_ARCH_X86_VPCLMULQDQ 1
+#endif
#endif
/* Itanium
#define SIMDE_ARCH_MIPS_CHECK(version) (0)
#endif
+#if defined(__mips_loongson_mmi)
+#define SIMDE_ARCH_MIPS_LOONGSON_MMI 1
+#endif
+
/* Matsushita MN10300
<https://en.wikipedia.org/wiki/MN103> */
#if defined(__MN10300__) || defined(__mn10300__)
obs-studio-26.1.0.tar.xz/libobs/util/simde/simde-common.h -> obs-studio-26.1.1.tar.xz/libobs/util/simde/simde-common.h
Changed
#include "hedley.h"
#define SIMDE_VERSION_MAJOR 0
-#define SIMDE_VERSION_MINOR 5
-#define SIMDE_VERSION_MICRO 0
+#define SIMDE_VERSION_MINOR 7
+#define SIMDE_VERSION_MICRO 1
#define SIMDE_VERSION \
HEDLEY_VERSION_ENCODE(SIMDE_VERSION_MAJOR, SIMDE_VERSION_MINOR, \
SIMDE_VERSION_MICRO)
-#include "simde-arch.h"
-#include "simde-features.h"
-#include "simde-diagnostic.h"
-
#include <stddef.h>
#include <stdint.h>
-#if HEDLEY_HAS_ATTRIBUTE(aligned) || HEDLEY_GCC_VERSION_CHECK(2, 95, 0) || \
- HEDLEY_CRAY_VERSION_CHECK(8, 4, 0) || \
- HEDLEY_IBM_VERSION_CHECK(11, 1, 0) || \
- HEDLEY_INTEL_VERSION_CHECK(13, 0, 0) || \
- HEDLEY_PGI_VERSION_CHECK(19, 4, 0) || \
- HEDLEY_ARM_VERSION_CHECK(4, 1, 0) || \
- HEDLEY_TINYC_VERSION_CHECK(0, 9, 24) || \
- HEDLEY_TI_VERSION_CHECK(8, 1, 0)
-#define SIMDE_ALIGN(alignment) __attribute__((aligned(alignment)))
-#elif defined(_MSC_VER) && !(defined(_M_ARM) && !defined(_M_ARM64))
-#define SIMDE_ALIGN(alignment) __declspec(align(alignment))
-#elif defined(__STDC_VERSION__) && (__STDC_VERSION__ >= 201112L)
-#define SIMDE_ALIGN(alignment) _Alignas(alignment)
-#elif defined(__cplusplus) && (__cplusplus >= 201103L)
-#define SIMDE_ALIGN(alignment) alignas(alignment)
-#else
-#define SIMDE_ALIGN(alignment)
-#endif
-
-#if HEDLEY_GNUC_VERSION_CHECK(2, 95, 0) || \
- HEDLEY_ARM_VERSION_CHECK(4, 1, 0) || \
- HEDLEY_IBM_VERSION_CHECK(11, 1, 0)
-#define SIMDE_ALIGN_OF(T) (__alignof__(T))
-#elif (defined(__STDC_VERSION__) && (__STDC_VERSION__ >= 201112L)) || \
- HEDLEY_HAS_FEATURE(c11_alignof)
-#define SIMDE_ALIGN_OF(T) (_Alignof(T))
-#elif (defined(__cplusplus) && (__cplusplus >= 201103L)) || \
- HEDLEY_HAS_FEATURE(cxx_alignof)
-#define SIMDE_ALIGN_OF(T) (alignof(T))
-#endif
-
-#if defined(SIMDE_ALIGN_OF)
-#define SIMDE_ALIGN_AS(N, T) SIMDE_ALIGN(SIMDE_ALIGN_OF(T))
-#else
-#define SIMDE_ALIGN_AS(N, T) SIMDE_ALIGN(N)
+#include "simde-detect-clang.h"
+#include "simde-arch.h"
+#include "simde-features.h"
+#include "simde-diagnostic.h"
+#include "simde-math.h"
+#include "simde-constify.h"
+#include "simde-align.h"
+
+/* In some situations, SIMDe has to make large performance sacrifices
+ * for small increases in how faithfully it reproduces an API, but
+ * only a relatively small number of users will actually need the API
+ * to be completely accurate. The SIMDE_FAST_* options can be used to
+ * disable these trade-offs.
+ *
+ * They can be enabled by passing -DSIMDE_FAST_MATH to the compiler, or
+ * the individual defines (e.g., -DSIMDE_FAST_NANS) if you only want to
+ * enable some optimizations. Using -ffast-math and/or
+ * -ffinite-math-only will also enable the relevant options. If you
+ * don't want that you can pass -DSIMDE_NO_FAST_* to disable them. */
+
+/* Most programs avoid NaNs by never passing values which can result in
+ * a NaN; for example, if you only pass non-negative values to the sqrt
+ * functions, it won't generate a NaN. On some platforms, similar
+ * functions handle NaNs differently; for example, the _mm_min_ps SSE
+ * function will return 0.0 if you pass it (0.0, NaN), but the NEON
+ * vminq_f32 function will return NaN. Making them behave like one
+ * another is expensive; it requires generating a mask of all lanes
+ * with NaNs, then performing the operation (e.g., vminq_f32), then
+ * blending together the result with another vector using the mask.
+ *
+ * If you don't want SIMDe to worry about the differences between how
+ * NaNs are handled on the two platforms, define this (or pass
+ * -ffinite-math-only) */
+#if !defined(SIMDE_FAST_MATH) && !defined(SIMDE_NO_FAST_MATH) && \
+ defined(__FAST_MATH__)
+#define SIMDE_FAST_MATH
+#endif
+
+#if !defined(SIMDE_FAST_NANS) && !defined(SIMDE_NO_FAST_NANS)
+#if defined(SIMDE_FAST_MATH)
+#define SIMDE_FAST_NANS
+#elif defined(__FINITE_MATH_ONLY__)
+#if __FINITE_MATH_ONLY__
+#define SIMDE_FAST_NANS
+#endif
+#endif
+#endif
+
+/* Many functions are defined as using the current rounding mode
+ * (i.e., the SIMD version of fegetround()) when converting to
+ * an integer. For example, _mm_cvtpd_epi32. Unfortunately,
+ * on some platforms (such as ARMv8+ where round-to-nearest is
+ * always used, regardless of the FPSCR register) this means we
+ * have to first query the current rounding mode, then choose
+ * the proper function (round, ceil, floor, etc.) */
+#if !defined(SIMDE_FAST_ROUND_MODE) && !defined(SIMDE_NO_FAST_ROUND_MODE) && \
+ defined(SIMDE_FAST_MATH)
+#define SIMDE_FAST_ROUND_MODE
+#endif
+
+/* This controls how ties are rounded. For example, does 10.5 round to
+ * 10 or 11? IEEE 754 specifies round-towards-even, but ARMv7 (for
+ * example) doesn't support it and it must be emulated (which is rather
+ * slow). If you're okay with just using the default for whatever arch
+ * you're on, you should definitely define this.
+ *
+ * Note that we don't use this macro to avoid correct implementations
+ * in functions which are explicitly about rounding (such as vrnd* on
+ * NEON, _mm_round_* on x86, etc.); it is only used for code where
+ * rounding is a component in another function, and even then it isn't
+ * usually a problem since such functions will use the current rounding
+ * mode. */
+#if !defined(SIMDE_FAST_ROUND_TIES) && !defined(SIMDE_NO_FAST_ROUND_TIES) && \
+ defined(SIMDE_FAST_MATH)
+#define SIMDE_FAST_ROUND_TIES
+#endif
+
+/* For functions which convert from one type to another (mostly from
+ * floating point to integer types), sometimes we need to do a range
+ * check and potentially return a different result if the value
+ * falls outside that range. Skipping this check can provide a
+ * performance boost, at the expense of faithfulness to the API we're
+ * emulating. */
+#if !defined(SIMDE_FAST_CONVERSION_RANGE) && \
+ !defined(SIMDE_NO_FAST_CONVERSION_RANGE) && defined(SIMDE_FAST_MATH)
+#define SIMDE_FAST_CONVERSION_RANGE
#endif
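
A short sketch of how the toggles compose (the compiler invocation and file name are illustrative): -DSIMDE_FAST_MATH or -ffast-math switches on all of the SIMDE_FAST_* trade-offs, and an individual -DSIMDE_NO_FAST_* opts back out of one of them:

/* e.g.  cc -O2 -ffast-math -DSIMDE_NO_FAST_NANS -c example.c */
#include "simde-common.h"

#if defined(SIMDE_FAST_CONVERSION_RANGE) && !defined(SIMDE_FAST_NANS)
/* float-to-int conversions skip range checks here, but NaN handling stays faithful */
#endif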
-#define simde_assert_aligned(alignment, val) \
- simde_assert_int(HEDLEY_REINTERPRET_CAST( \
- uintptr_t, HEDLEY_REINTERPRET_CAST( \
- const void *, (val))) % \
- (alignment), \
- ==, 0)
-
#if HEDLEY_HAS_BUILTIN(__builtin_constant_p) || \
HEDLEY_GCC_VERSION_CHECK(3, 4, 0) || \
HEDLEY_INTEL_VERSION_CHECK(13, 0, 0) || \
#define SIMDE_CHECK_CONSTANT_(expr) (std::is_constant_evaluated())
#endif
-/* diagnose_if + __builtin_constant_p was broken until clang 9,
- * which is when __FILE_NAME__ was added. */
-#if defined(SIMDE_CHECK_CONSTANT_) && defined(__FILE_NAME__)
+#if !defined(SIMDE_NO_CHECK_IMMEDIATE_CONSTANT)
+#if defined(SIMDE_CHECK_CONSTANT_) && \
+ SIMDE_DETECT_CLANG_VERSION_CHECK(9, 0, 0) && \
+ (!defined(__apple_build_version__) || \
+ ((__apple_build_version__ < 11000000) || \
+ (__apple_build_version__ >= 12000000)))
#define SIMDE_REQUIRE_CONSTANT(arg) \
HEDLEY_REQUIRE_MSG(SIMDE_CHECK_CONSTANT_(arg), \
"`" #arg "' must be constant")
#else
#define SIMDE_REQUIRE_CONSTANT(arg)
#endif
+#else
+#define SIMDE_REQUIRE_CONSTANT(arg)
+#endif
#define SIMDE_REQUIRE_RANGE(arg, min, max) \
HEDLEY_REQUIRE_MSG((((arg) >= (min)) && ((arg) <= (max))), \
SIMDE_REQUIRE_CONSTANT(arg) \
SIMDE_REQUIRE_RANGE(arg, min, max)
-/* SIMDE_ASSUME_ALIGNED allows you to (try to) tell the compiler
- * that a pointer is aligned to an `alignment`-byte boundary. */
-#if HEDLEY_HAS_BUILTIN(__builtin_assume_aligned) || \
- HEDLEY_GCC_VERSION_CHECK(4, 7, 0)
-#define SIMDE_ASSUME_ALIGNED(alignment, v) \
- HEDLEY_REINTERPRET_CAST(__typeof__(v), \
- __builtin_assume_aligned(v, alignment))
-#elif defined(__cplusplus) && (__cplusplus > 201703L)
-#define SIMDE_ASSUME_ALIGNED(alignment, v) std::assume_aligned<alignment>(v)
-#elif HEDLEY_INTEL_VERSION_CHECK(13, 0, 0)
-#define SIMDE_ASSUME_ALIGNED(alignment, v) \
- (__extension__({ \
- __typeof__(v) simde_assume_aligned_t_ = (v); \
- __assume_aligned(simde_assume_aligned_t_, alignment); \
- simde_assume_aligned_t_; \
- }))
-#else
-#define SIMDE_ASSUME_ALIGNED(alignment, v) (v)
-#endif
-
-/* SIMDE_ALIGN_CAST allows you to convert to a type with greater
- * aligment requirements without triggering a warning. */
-#if HEDLEY_HAS_WARNING("-Wcast-align")
-#define SIMDE_ALIGN_CAST(T, v) \
- (__extension__({ \
- HEDLEY_DIAGNOSTIC_PUSH \
- _Pragma("clang diagnostic ignored \"-Wcast-align\"") \
- T simde_r_ = HEDLEY_REINTERPRET_CAST(T, v); \
- HEDLEY_DIAGNOSTIC_POP \
- simde_r_; \
- }))
-#else
-#define SIMDE_ALIGN_CAST(T, v) HEDLEY_REINTERPRET_CAST(T, v)
+/* A copy of HEDLEY_STATIC_ASSERT, except we don't define an empty
+ * fallback if we can't find an implementation; instead we have to
+ * check if SIMDE_STATIC_ASSERT is defined before using it. */
+#if !defined(__cplusplus) && \
+ ((defined(__STDC_VERSION__) && (__STDC_VERSION__ >= 201112L)) || \
+ HEDLEY_HAS_FEATURE(c_static_assert) || \
+ HEDLEY_GCC_VERSION_CHECK(6, 0, 0) || \
+ HEDLEY_INTEL_VERSION_CHECK(13, 0, 0) || defined(_Static_assert))
+#define SIMDE_STATIC_ASSERT(expr, message) _Static_assert(expr, message)
+#elif (defined(__cplusplus) && (__cplusplus >= 201103L)) || \
+ HEDLEY_MSVC_VERSION_CHECK(16, 0, 0)
+#define SIMDE_STATIC_ASSERT(expr, message) \
+ HEDLEY_DIAGNOSTIC_DISABLE_CPP98_COMPAT_WRAP_( \
+ static_assert(expr, message))
#endif
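
Since there is deliberately no empty fallback, callers guard the macro before using it; a minimal sketch (the assertion itself is illustrative):

#include <stdint.h>
#include "simde-common.h"

#if defined(SIMDE_STATIC_ASSERT)
SIMDE_STATIC_ASSERT(sizeof(int32_t) == 4, "int32_t must be exactly 4 bytes");
#endif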
#if (HEDLEY_HAS_ATTRIBUTE(may_alias) && !defined(HEDLEY_SUNPRO_VERSION)) || \
* SIMDE_VECTOR - Declaring a vector.
* SIMDE_VECTOR_OPS - basic operations (binary and unary).
+ * SIMDE_VECTOR_NEGATE - negating a vector
* SIMDE_VECTOR_SCALAR - For binary operators, the second argument
can be a scalar, in which case the result is as if that scalar
had been broadcast to all lanes of a vector.
#if HEDLEY_GCC_VERSION_CHECK(4, 8, 0)
#define SIMDE_VECTOR(size) __attribute__((__vector_size__(size)))
#define SIMDE_VECTOR_OPS
+#define SIMDE_VECTOR_NEGATE
#define SIMDE_VECTOR_SCALAR
#define SIMDE_VECTOR_SUBSCRIPT
#elif HEDLEY_INTEL_VERSION_CHECK(16, 0, 0)
#define SIMDE_VECTOR(size) __attribute__((__vector_size__(size)))
#define SIMDE_VECTOR_OPS
+#define SIMDE_VECTOR_NEGATE
/* ICC only supports SIMDE_VECTOR_SCALAR for constants */
#define SIMDE_VECTOR_SUBSCRIPT
#elif HEDLEY_GCC_VERSION_CHECK(4, 1, 0) || HEDLEY_INTEL_VERSION_CHECK(13, 0, 0)
#elif HEDLEY_HAS_ATTRIBUTE(vector_size)
#define SIMDE_VECTOR(size) __attribute__((__vector_size__(size)))
#define SIMDE_VECTOR_OPS
+#define SIMDE_VECTOR_NEGATE
#define SIMDE_VECTOR_SUBSCRIPT
-#if HEDLEY_HAS_ATTRIBUTE(diagnose_if) /* clang 4.0 */
+#if SIMDE_DETECT_CLANG_VERSION_CHECK(5, 0, 0)
#define SIMDE_VECTOR_SCALAR
#endif
#endif
#endif
#if defined(SIMDE_ENABLE_OPENMP)
-#define SIMDE_VECTORIZE _Pragma("omp simd")
+#define SIMDE_VECTORIZE HEDLEY_PRAGMA(omp simd)
#define SIMDE_VECTORIZE_SAFELEN(l) HEDLEY_PRAGMA(omp simd safelen(l))
+#if defined(__clang__)
+#define SIMDE_VECTORIZE_REDUCTION(r) \
+ HEDLEY_DIAGNOSTIC_PUSH \
+ _Pragma("clang diagnostic ignored \"-Wsign-conversion\"") \
+ HEDLEY_PRAGMA(omp simd reduction(r)) HEDLEY_DIAGNOSTIC_POP
+#else
#define SIMDE_VECTORIZE_REDUCTION(r) HEDLEY_PRAGMA(omp simd reduction(r))
+#endif
#define SIMDE_VECTORIZE_ALIGNED(a) HEDLEY_PRAGMA(omp simd aligned(a))
#elif defined(SIMDE_ENABLE_CILKPLUS)
-#define SIMDE_VECTORIZE _Pragma("simd")
+#define SIMDE_VECTORIZE HEDLEY_PRAGMA(simd)
#define SIMDE_VECTORIZE_SAFELEN(l) HEDLEY_PRAGMA(simd vectorlength(l))
#define SIMDE_VECTORIZE_REDUCTION(r) HEDLEY_PRAGMA(simd reduction(r))
#define SIMDE_VECTORIZE_ALIGNED(a) HEDLEY_PRAGMA(simd aligned(a))
#elif defined(__clang__) && !defined(HEDLEY_IBM_VERSION)
-#define SIMDE_VECTORIZE _Pragma("clang loop vectorize(enable)")
+#define SIMDE_VECTORIZE HEDLEY_PRAGMA(clang loop vectorize(enable))
#define SIMDE_VECTORIZE_SAFELEN(l) HEDLEY_PRAGMA(clang loop vectorize_width(l))
#define SIMDE_VECTORIZE_REDUCTION(r) SIMDE_VECTORIZE
#define SIMDE_VECTORIZE_ALIGNED(a)
#elif HEDLEY_GCC_VERSION_CHECK(4, 9, 0)
-#define SIMDE_VECTORIZE _Pragma("GCC ivdep")
+#define SIMDE_VECTORIZE HEDLEY_PRAGMA(GCC ivdep)
#define SIMDE_VECTORIZE_SAFELEN(l) SIMDE_VECTORIZE
#define SIMDE_VECTORIZE_REDUCTION(r) SIMDE_VECTORIZE
#define SIMDE_VECTORIZE_ALIGNED(a)
#elif HEDLEY_CRAY_VERSION_CHECK(5, 0, 0)
-#define SIMDE_VECTORIZE _Pragma("_CRI ivdep")
+#define SIMDE_VECTORIZE HEDLEY_PRAGMA(_CRI ivdep)
#define SIMDE_VECTORIZE_SAFELEN(l) SIMDE_VECTORIZE
#define SIMDE_VECTORIZE_REDUCTION(r) SIMDE_VECTORIZE
#define SIMDE_VECTORIZE_ALIGNED(a)
HEDLEY_DIAGNOSTIC_POP
#endif
-#if HEDLEY_HAS_WARNING("-Wpedantic")
-#define SIMDE_DIAGNOSTIC_DISABLE_INT128 \
- _Pragma("clang diagnostic ignored \"-Wpedantic\"")
-#elif defined(HEDLEY_GCC_VERSION)
-#define SIMDE_DIAGNOSTIC_DISABLE_INT128 \
- _Pragma("GCC diagnostic ignored \"-Wpedantic\"")
-#else
-#define SIMDE_DIAGNOSTIC_DISABLE_INT128
-#endif
-
#if defined(__SIZEOF_INT128__)
#define SIMDE_HAVE_INT128_
HEDLEY_DIAGNOSTIC_PUSH
-SIMDE_DIAGNOSTIC_DISABLE_INT128
+SIMDE_DIAGNOSTIC_DISABLE_PEDANTIC_
typedef __int128 simde_int128;
typedef unsigned __int128 simde_uint128;
HEDLEY_DIAGNOSTIC_POP
#endif
typedef SIMDE_FLOAT64_TYPE simde_float64;
-/* Whether to assume that the compiler can auto-vectorize reasonably
- well. This will cause SIMDe to attempt to compose vector
- operations using more simple vector operations instead of minimize
- serial work.
-
- As an example, consider the _mm_add_ss(a, b) function from SSE,
- which returns { a0 + b0, a1, a2, a3 }. This pattern is repeated
- for other operations (sub, mul, etc.).
-
- The naïve implementation would result in loading a0 and b0, adding
- them into a temporary variable, then splicing that value into a new
- vector with the remaining elements from a.
-
- On platforms which support vectorization, it's generally faster to
- simply perform the operation on the entire vector to avoid having
- to move data between SIMD registers and non-SIMD registers.
- Basically, instead of the temporary variable being (a0 + b0) it
- would be a vector of (a + b), which is then combined with a to form
- the result.
-
- By default, SIMDe will prefer the pure-vector versions if we detect
- a vector ISA extension, but this can be overridden by defining
- SIMDE_NO_ASSUME_VECTORIZATION. You can also define
- SIMDE_ASSUME_VECTORIZATION if you want to force SIMDe to use the
- vectorized version. */
-#if !defined(SIMDE_NO_ASSUME_VECTORIZATION) && \
- !defined(SIMDE_ASSUME_VECTORIZATION)
-#if defined(__SSE__) || defined(__ARM_NEON) || defined(__mips_msa) || \
- defined(__ALTIVEC__) || defined(__wasm_simd128__)
-#define SIMDE_ASSUME_VECTORIZATION
-#endif
-#endif
-
#if HEDLEY_HAS_WARNING("-Wbad-function-cast")
#define SIMDE_CONVERT_FTOI(T, v) \
HEDLEY_DIAGNOSTIC_PUSH \
#define SIMDE_CONVERT_FTOI(T, v) ((T)(v))
#endif
+/* TODO: detect compilers which support this outside of C11 mode */
#if defined(__STDC_VERSION__) && (__STDC_VERSION__ >= 201112L)
#define SIMDE_CHECKED_REINTERPRET_CAST(to, from, value) \
- (_Generic((value), to : (value), from : ((to)(value))))
+ _Generic((value), to \
+ : (value), default \
+ : (_Generic((value), from \
+ : ((to)(value)))))
#define SIMDE_CHECKED_STATIC_CAST(to, from, value) \
- (_Generic((value), to : (value), from : ((to)(value))))
+ _Generic((value), to \
+ : (value), default \
+ : (_Generic((value), from \
+ : ((to)(value)))))
#else
#define SIMDE_CHECKED_REINTERPRET_CAST(to, from, value) \
HEDLEY_REINTERPRET_CAST(to, value)
#if defined(__STDC_HOSTED__)
#define SIMDE_STDC_HOSTED __STDC_HOSTED__
#else
-#if defined(HEDLEY_PGI_VERSION_CHECK) || defined(HEDLEY_MSVC_VERSION_CHECK)
+#if defined(HEDLEY_PGI_VERSION) || defined(HEDLEY_MSVC_VERSION)
#define SIMDE_STDC_HOSTED 1
#else
#define SIMDE_STDC_HOSTED 0
#endif
/* Try to deal with environments without a standard library. */
-#if !defined(simde_memcpy) || !defined(simde_memset)
-#if !defined(SIMDE_NO_STRING_H) && defined(__has_include)
-#if __has_include(<string.h>)
-#include <string.h>
#if !defined(simde_memcpy)
-#define simde_memcpy(dest, src, n) memcpy(dest, src, n)
+#if HEDLEY_HAS_BUILTIN(__builtin_memcpy)
+#define simde_memcpy(dest, src, n) __builtin_memcpy(dest, src, n)
+#endif
#endif
#if !defined(simde_memset)
-#define simde_memset(s, c, n) memset(s, c, n)
+#if HEDLEY_HAS_BUILTIN(__builtin_memset)
+#define simde_memset(s, c, n) __builtin_memset(s, c, n)
#endif
-#else
+#endif
+#if !defined(simde_memcmp)
+#if HEDLEY_HAS_BUILTIN(__builtin_memcmp)
+#define simde_memcmp(s1, s2, n) __builtin_memcmp(s1, s2, n)
+#endif
+#endif
+
+#if !defined(simde_memcpy) || !defined(simde_memset) || !defined(simde_memcmp)
+#if !defined(SIMDE_NO_STRING_H)
+#if defined(__has_include)
+#if !__has_include(<string.h>)
#define SIMDE_NO_STRING_H
#endif
+#elif (SIMDE_STDC_HOSTED == 0)
+#define SIMDE_NO_STRING_H
#endif
#endif
-#if !defined(simde_memcpy) || !defined(simde_memset)
-#if !defined(SIMDE_NO_STRING_H) && (SIMDE_STDC_HOSTED == 1)
+
+#if !defined(SIMDE_NO_STRING_H)
#include <string.h>
#if !defined(simde_memcpy)
#define simde_memcpy(dest, src, n) memcpy(dest, src, n)
#if !defined(simde_memset)
#define simde_memset(s, c, n) memset(s, c, n)
#endif
-#elif (HEDLEY_HAS_BUILTIN(__builtin_memcpy) && \
- HEDLEY_HAS_BUILTIN(__builtin_memset)) || \
- HEDLEY_GCC_VERSION_CHECK(4, 2, 0)
-#if !defined(simde_memcpy)
-#define simde_memcpy(dest, src, n) __builtin_memcpy(dest, src, n)
-#endif
-#if !defined(simde_memset)
-#define simde_memset(s, c, n) __builtin_memset(s, c, n)
+#if !defined(simde_memcmp)
+#define simde_memcmp(s1, s2, n) memcmp(s1, s2, n)
#endif
#else
/* These are meant to be portable, not fast. If you're hitting them you
}
#define simde_memset(s, c, n) simde_memset_(s, c, n)
#endif
-#endif /* !defined(SIMDE_NO_STRING_H) && (SIMDE_STDC_HOSTED == 1) */
-#endif /* !defined(simde_memcpy) || !defined(simde_memset) */
-#include "simde-math.h"
+#if !defined(simde_memcmp)
+SIMDE_FUNCTION_ATTRIBUTES
+int simde_memcmp_(const void *s1, const void *s2, size_t n)
+{
+	const unsigned char *s1_ = HEDLEY_STATIC_CAST(const unsigned char *, s1);
+	const unsigned char *s2_ = HEDLEY_STATIC_CAST(const unsigned char *, s2);
+	for (size_t i = 0; i < n; i++) {
+ if (s1_[i] != s2_[i]) {
+ return (int)(s1_[i] - s2_[i]);
+ }
+ }
+ return 0;
+}
+#define simde_memcmp(s1, s2, n) simde_memcmp_(s1, s2, n)
+#endif
+#endif
+#endif
#if defined(FE_ALL_EXCEPT)
#define SIMDE_HAVE_FENV_H
#include "check.h"
+/* GCC/clang have a bunch of functionality in builtins which we would
+ * like to access, but the suffixes indicate whether they operate on
+ * int, long, or long long, not fixed-width types (e.g., int32_t).
+ * We use these macros to attempt to map from fixed-width to the
+ * names GCC uses. Note that you should still cast the input(s) and
+ * return values (to/from SIMDE_BUILTIN_TYPE_*_) since often even if
+ * types are the same size they may not be compatible according to the
+ * compiler. For example, on x86 long and long long are generally
+ * both 64 bits, but platforms vary on whether an int64_t is mapped
+ * to a long or long long. */
+
+#include <limits.h>
+
+HEDLEY_DIAGNOSTIC_PUSH
+SIMDE_DIAGNOSTIC_DISABLE_CPP98_COMPAT_PEDANTIC_
+
+#if (INT8_MAX == INT_MAX) && (INT8_MIN == INT_MIN)
+#define SIMDE_BUILTIN_SUFFIX_8_
+#define SIMDE_BUILTIN_TYPE_8_ int
+#elif (INT8_MAX == LONG_MAX) && (INT8_MIN == LONG_MIN)
+#define SIMDE_BUILTIN_SUFFIX_8_ l
+#define SIMDE_BUILTIN_TYPE_8_ long
+#elif (INT8_MAX == LLONG_MAX) && (INT8_MIN == LLONG_MIN)
+#define SIMDE_BUILTIN_SUFFIX_8_ ll
+#define SIMDE_BUILTIN_TYPE_8_ long long
+#endif
+
+#if (INT16_MAX == INT_MAX) && (INT16_MIN == INT_MIN)
+#define SIMDE_BUILTIN_SUFFIX_16_
+#define SIMDE_BUILTIN_TYPE_16_ int
+#elif (INT16_MAX == LONG_MAX) && (INT16_MIN == LONG_MIN)
+#define SIMDE_BUILTIN_SUFFIX_16_ l
+#define SIMDE_BUILTIN_TYPE_16_ long
+#elif (INT16_MAX == LLONG_MAX) && (INT16_MIN == LLONG_MIN)
+#define SIMDE_BUILTIN_SUFFIX_16_ ll
+#define SIMDE_BUILTIN_TYPE_16_ long long
+#endif
+
+#if (INT32_MAX == INT_MAX) && (INT32_MIN == INT_MIN)
+#define SIMDE_BUILTIN_SUFFIX_32_
+#define SIMDE_BUILTIN_TYPE_32_ int
+#elif (INT32_MAX == LONG_MAX) && (INT32_MIN == LONG_MIN)
+#define SIMDE_BUILTIN_SUFFIX_32_ l
+#define SIMDE_BUILTIN_TYPE_32_ long
+#elif (INT32_MAX == LLONG_MAX) && (INT32_MIN == LLONG_MIN)
+#define SIMDE_BUILTIN_SUFFIX_32_ ll
+#define SIMDE_BUILTIN_TYPE_32_ long long
+#endif
+
+#if (INT64_MAX == INT_MAX) && (INT64_MIN == INT_MIN)
+#define SIMDE_BUILTIN_SUFFIX_64_
+#define SIMDE_BUILTIN_TYPE_64_ int
+#elif (INT64_MAX == LONG_MAX) && (INT64_MIN == LONG_MIN)
+#define SIMDE_BUILTIN_SUFFIX_64_ l
+#define SIMDE_BUILTIN_TYPE_64_ long
+#elif (INT64_MAX == LLONG_MAX) && (INT64_MIN == LLONG_MIN)
+#define SIMDE_BUILTIN_SUFFIX_64_ ll
+#define SIMDE_BUILTIN_TYPE_64_ long long
+#endif
+
+#if defined(SIMDE_BUILTIN_SUFFIX_8_)
+#define SIMDE_BUILTIN_8_(name) \
+ HEDLEY_CONCAT3(__builtin_, name, SIMDE_BUILTIN_SUFFIX_8_)
+#define SIMDE_BUILTIN_HAS_8_(name) \
+ HEDLEY_HAS_BUILTIN( \
+ HEDLEY_CONCAT3(__builtin_, name, SIMDE_BUILTIN_SUFFIX_8_))
+#else
+#define SIMDE_BUILTIN_HAS_8_(name) 0
+#endif
+#if defined(SIMDE_BUILTIN_SUFFIX_16_)
+#define SIMDE_BUILTIN_16_(name) \
+ HEDLEY_CONCAT3(__builtin_, name, SIMDE_BUILTIN_SUFFIX_16_)
+#define SIMDE_BUILTIN_HAS_16_(name) \
+ HEDLEY_HAS_BUILTIN( \
+ HEDLEY_CONCAT3(__builtin_, name, SIMDE_BUILTIN_SUFFIX_16_))
+#else
+#define SIMDE_BUILTIN_HAS_16_(name) 0
+#endif
+#if defined(SIMDE_BUILTIN_SUFFIX_32_)
+#define SIMDE_BUILTIN_32_(name) \
+ HEDLEY_CONCAT3(__builtin_, name, SIMDE_BUILTIN_SUFFIX_32_)
+#define SIMDE_BUILTIN_HAS_32_(name) \
+ HEDLEY_HAS_BUILTIN( \
+ HEDLEY_CONCAT3(__builtin_, name, SIMDE_BUILTIN_SUFFIX_32_))
+#else
+#define SIMDE_BUILTIN_HAS_32_(name) 0
+#endif
+#if defined(SIMDE_BUILTIN_SUFFIX_64_)
+#define SIMDE_BUILTIN_64_(name) \
+ HEDLEY_CONCAT3(__builtin_, name, SIMDE_BUILTIN_SUFFIX_64_)
+#define SIMDE_BUILTIN_HAS_64_(name) \
+ HEDLEY_HAS_BUILTIN( \
+ HEDLEY_CONCAT3(__builtin_, name, SIMDE_BUILTIN_SUFFIX_64_))
+#else
+#define SIMDE_BUILTIN_HAS_64_(name) 0
+#endif
+
+HEDLEY_DIAGNOSTIC_POP
+
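A sketch of the mapping in use (the function name is illustrative): on an LP64 target the 64-bit suffix is 'l', so the call resolves to __builtin_popcountl; on LLP64 it resolves to __builtin_popcountll:

#include <stdint.h>
#include "simde-common.h"

static int example_popcount64(uint64_t v)
{
#if SIMDE_BUILTIN_HAS_64_(popcount)
	return SIMDE_BUILTIN_64_(popcount)(
		HEDLEY_STATIC_CAST(SIMDE_BUILTIN_TYPE_64_, v));
#else
	int count = 0; /* portable fallback when no matching builtin exists */
	while (v) {
		v &= v - 1;
		count++;
	}
	return count;
#endif
}
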
/* Sometimes we run into problems with specific versions of compilers
which make the native versions unusable for us. Often this is due
to missing functions, sometimes buggy implementations, etc. These
#if defined(SIMDE_ARCH_X86) && !defined(SIMDE_ARCH_AMD64)
#define SIMDE_BUG_GCC_94482
#endif
+#if (defined(SIMDE_ARCH_X86) && !defined(SIMDE_ARCH_AMD64)) || \
+ defined(SIMDE_ARCH_SYSTEMZ)
+#define SIMDE_BUG_GCC_53784
+#endif
+#if defined(SIMDE_ARCH_X86) || defined(SIMDE_ARCH_AMD64)
+#if HEDLEY_GCC_VERSION_CHECK(4, 3, 0) /* -Wsign-conversion */
+#define SIMDE_BUG_GCC_95144
+#endif
+#endif
#if !HEDLEY_GCC_VERSION_CHECK(9, 4, 0) && defined(SIMDE_ARCH_AARCH64)
#define SIMDE_BUG_GCC_94488
#endif
-#if defined(SIMDE_ARCH_POWER)
+#if defined(SIMDE_ARCH_ARM)
+#define SIMDE_BUG_GCC_95399
+#define SIMDE_BUG_GCC_95471
+#elif defined(SIMDE_ARCH_POWER)
#define SIMDE_BUG_GCC_95227
+#define SIMDE_BUG_GCC_95782
+#elif defined(SIMDE_ARCH_X86) || defined(SIMDE_ARCH_AMD64)
+#if !HEDLEY_GCC_VERSION_CHECK(10, 2, 0) && !defined(__OPTIMIZE__)
+#define SIMDE_BUG_GCC_96174
+#endif
#endif
#define SIMDE_BUG_GCC_95399
#elif defined(__clang__)
#if defined(SIMDE_ARCH_AARCH64)
#define SIMDE_BUG_CLANG_45541
+#define SIMDE_BUG_CLANG_46844
+#define SIMDE_BUG_CLANG_48257
+#if SIMDE_DETECT_CLANG_VERSION_CHECK(10, 0, 0) && \
+ SIMDE_DETECT_CLANG_VERSION_NOT(11, 0, 0)
+#define SIMDE_BUG_CLANG_BAD_VI64_OPS
+#endif
+#endif
+#if defined(SIMDE_ARCH_POWER)
+#define SIMDE_BUG_CLANG_46770
+#endif
+#if defined(_ARCH_PWR9) && !SIMDE_DETECT_CLANG_VERSION_CHECK(12, 0, 0) && \
+ !defined(__OPTIMIZE__)
+#define SIMDE_BUG_CLANG_POWER9_16x4_BAD_SHIFT
+#endif
+#if defined(SIMDE_ARCH_X86) || defined(SIMDE_ARCH_AMD64)
+#if HEDLEY_HAS_WARNING("-Wsign-conversion") && \
+ SIMDE_DETECT_CLANG_VERSION_NOT(11, 0, 0)
+#define SIMDE_BUG_CLANG_45931
+#endif
+#if HEDLEY_HAS_WARNING("-Wvector-conversion") && \
+ SIMDE_DETECT_CLANG_VERSION_NOT(11, 0, 0)
+#define SIMDE_BUG_CLANG_44589
+#endif
#endif
+#define SIMDE_BUG_CLANG_45959
+#elif defined(HEDLEY_MSVC_VERSION)
+#if defined(SIMDE_ARCH_X86)
+#define SIMDE_BUG_MSVC_ROUND_EXTRACT
#endif
-#if defined(HEDLEY_EMSCRIPTEN_VERSION)
-#define SIMDE_BUG_EMSCRIPTEN_MISSING_IMPL /* Placeholder for (as yet) unfiled issues. */
-#define SIMDE_BUG_EMSCRIPTEN_5242
+#elif defined(HEDLEY_INTEL_VERSION)
+#define SIMDE_BUG_INTEL_857088
#endif
#endif
/* GCC and Clang both have the same issue:
* https://gcc.gnu.org/bugzilla/show_bug.cgi?id=95144
* https://bugs.llvm.org/show_bug.cgi?id=45931
+ * This is just an easy way to work around it.
*/
-#if HEDLEY_HAS_WARNING("-Wsign-conversion") || HEDLEY_GCC_VERSION_CHECK(4, 3, 0)
+#if (HEDLEY_HAS_WARNING("-Wsign-conversion") && \
+ SIMDE_DETECT_CLANG_VERSION_NOT(11, 0, 0)) || \
+ HEDLEY_GCC_VERSION_CHECK(4, 3, 0)
#define SIMDE_BUG_IGNORE_SIGN_CONVERSION(expr) \
(__extension__({ \
HEDLEY_DIAGNOSTIC_PUSH \
obs-studio-26.1.1.tar.xz/libobs/util/simde/simde-constify.h
Added
+/* SPDX-License-Identifier: MIT
+ *
+ * Permission is hereby granted, free of charge, to any person
+ * obtaining a copy of this software and associated documentation
+ * files (the "Software"), to deal in the Software without
+ * restriction, including without limitation the rights to use, copy,
+ * modify, merge, publish, distribute, sublicense, and/or sell copies
+ * of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be
+ * included in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ *
+ * Copyright:
+ * 2020 Evan Nemerson <evan@nemerson.com>
+ */
+
+/* Constify macros. For internal use only.
+ *
+ * These are used to make it possible to call a function which takes
+ * an Integer Constant Expression (ICE) using a compile time constant.
+ * Technically it would also be possible to use a value not trivially
+ * known by the compiler, but there would be a significant performance
+ * hit (a switch statement is used).
+ *
+ * The basic idea is pretty simple; we just emit a do while loop which
+ * contains a switch with a case for every possible value of the
+ * constant.
+ *
+ * As long as the value you pass to the function is constant, pretty
+ * much any compiler should have no problem generating exactly the
+ * same code as if you had used an ICE.
+ *
+ * This is intended to be used in the SIMDe implementations of
+ * functions whose arguments the compilers require to be an ICE, but
+ * the other benefit is that if we also disable the warnings from
+ * SIMDE_REQUIRE_CONSTANT_RANGE we can actually just allow the tests
+ * to use non-ICE parameters.
+ */
+
+#if !defined(SIMDE_CONSTIFY_H)
+#define SIMDE_CONSTIFY_H
+
+#include "simde-diagnostic.h"
+
+HEDLEY_DIAGNOSTIC_PUSH
+SIMDE_DIAGNOSTIC_DISABLE_VARIADIC_MACROS_
+SIMDE_DIAGNOSTIC_DISABLE_CPP98_COMPAT_PEDANTIC_
+
+#define SIMDE_CONSTIFY_2_(func_name, result, default_case, imm, ...) \
+ do { \
+ switch (imm) { \
+ case 0: \
+ result = func_name(__VA_ARGS__, 0); \
+ break; \
+ case 1: \
+ result = func_name(__VA_ARGS__, 1); \
+ break; \
+ default: \
+ result = default_case; \
+ break; \
+ } \
+ } while (0)
+
+#define SIMDE_CONSTIFY_4_(func_name, result, default_case, imm, ...) \
+ do { \
+ switch (imm) { \
+ case 0: \
+ result = func_name(__VA_ARGS__, 0); \
+ break; \
+ case 1: \
+ result = func_name(__VA_ARGS__, 1); \
+ break; \
+ case 2: \
+ result = func_name(__VA_ARGS__, 2); \
+ break; \
+ case 3: \
+ result = func_name(__VA_ARGS__, 3); \
+ break; \
+ default: \
+ result = default_case; \
+ break; \
+ } \
+ } while (0)
+
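A sketch of the pattern in use (example_shl and example_shl4 are illustrative helpers, not part of the header): the macro expands to a switch over the run-time value, so every case calls the helper with a literal last argument:

#include <stdint.h>
#include "simde-constify.h"

#define example_shl(v, imm) ((int32_t)((v) << (imm)))

static int32_t example_shl4(int32_t v, int count)
{
	int32_t r;
	/* switch (count) { case 0: r = example_shl(v, 0); ... default: r = v; } */
	SIMDE_CONSTIFY_4_(example_shl, r, v, count, v);
	return r;
}
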
+#define SIMDE_CONSTIFY_8_(func_name, result, default_case, imm, ...) \
+ do { \
+ switch (imm) { \
+ case 0: \
+ result = func_name(__VA_ARGS__, 0); \
+ break; \
+ case 1: \
+ result = func_name(__VA_ARGS__, 1); \
+ break; \
+ case 2: \
+ result = func_name(__VA_ARGS__, 2); \
+ break; \
+ case 3: \
+ result = func_name(__VA_ARGS__, 3); \
+ break; \
+ case 4: \
+ result = func_name(__VA_ARGS__, 4); \
+ break; \
+ case 5: \
+ result = func_name(__VA_ARGS__, 5); \
+ break; \
+ case 6: \
+ result = func_name(__VA_ARGS__, 6); \
+ break; \
+ case 7: \
+ result = func_name(__VA_ARGS__, 7); \
+ break; \
+ default: \
+ result = default_case; \
+ break; \
+ } \
+ } while (0)
+
+#define SIMDE_CONSTIFY_16_(func_name, result, default_case, imm, ...) \
+ do { \
+ switch (imm) { \
+ case 0: \
+ result = func_name(__VA_ARGS__, 0); \
+ break; \
+ case 1: \
+ result = func_name(__VA_ARGS__, 1); \
+ break; \
+ case 2: \
+ result = func_name(__VA_ARGS__, 2); \
+ break; \
+ case 3: \
+ result = func_name(__VA_ARGS__, 3); \
+ break; \
+ case 4: \
+ result = func_name(__VA_ARGS__, 4); \
+ break; \
+ case 5: \
+ result = func_name(__VA_ARGS__, 5); \
+ break; \
+ case 6: \
+ result = func_name(__VA_ARGS__, 6); \
+ break; \
+ case 7: \
+ result = func_name(__VA_ARGS__, 7); \
+ break; \
+ case 8: \
+ result = func_name(__VA_ARGS__, 8); \
+ break; \
+ case 9: \
+ result = func_name(__VA_ARGS__, 9); \
+ break; \
+ case 10: \
+ result = func_name(__VA_ARGS__, 10); \
+ break; \
+ case 11: \
+ result = func_name(__VA_ARGS__, 11); \
+ break; \
+ case 12: \
+ result = func_name(__VA_ARGS__, 12); \
+ break; \
+ case 13: \
+ result = func_name(__VA_ARGS__, 13); \
+ break; \
+ case 14: \
+ result = func_name(__VA_ARGS__, 14); \
+ break; \
+ case 15: \
+ result = func_name(__VA_ARGS__, 15); \
+ break; \
+ default: \
+ result = default_case; \
+ break; \
+ } \
+ } while (0)
+
+#define SIMDE_CONSTIFY_32_(func_name, result, default_case, imm, ...) \
+ do { \
+ switch (imm) { \
+ case 0: \
+ result = func_name(__VA_ARGS__, 0); \
+ break; \
+ case 1: \
+ result = func_name(__VA_ARGS__, 1); \
+ break; \
+ case 2: \
+ result = func_name(__VA_ARGS__, 2); \
+ break; \
+ case 3: \
+ result = func_name(__VA_ARGS__, 3); \
+ break; \
+ case 4: \
+ result = func_name(__VA_ARGS__, 4); \
+ break; \
+ case 5: \
+ result = func_name(__VA_ARGS__, 5); \
+ break; \
+ case 6: \
+ result = func_name(__VA_ARGS__, 6); \
+ break; \
+ case 7: \
+ result = func_name(__VA_ARGS__, 7); \
+ break; \
+ case 8: \
+ result = func_name(__VA_ARGS__, 8); \
+ break; \
+ case 9: \
+ result = func_name(__VA_ARGS__, 9); \
+ break; \
+ case 10: \
+ result = func_name(__VA_ARGS__, 10); \
+ break; \
+ case 11: \
+ result = func_name(__VA_ARGS__, 11); \
+ break; \
+ case 12: \
+ result = func_name(__VA_ARGS__, 12); \
+ break; \
+ case 13: \
+ result = func_name(__VA_ARGS__, 13); \
+ break; \
+ case 14: \
+ result = func_name(__VA_ARGS__, 14); \
+ break; \
+ case 15: \
+ result = func_name(__VA_ARGS__, 15); \
+ break; \
+ case 16: \
+ result = func_name(__VA_ARGS__, 16); \
+ break; \
+ case 17: \
+ result = func_name(__VA_ARGS__, 17); \
+ break; \
+ case 18: \
+ result = func_name(__VA_ARGS__, 18); \
+ break; \
+ case 19: \
+ result = func_name(__VA_ARGS__, 19); \
+ break; \
+ case 20: \
+ result = func_name(__VA_ARGS__, 20); \
+ break; \
+ case 21: \
+ result = func_name(__VA_ARGS__, 21); \
+ break; \
+ case 22: \
+ result = func_name(__VA_ARGS__, 22); \
+ break; \
+ case 23: \
+ result = func_name(__VA_ARGS__, 23); \
+ break; \
+ case 24: \
+ result = func_name(__VA_ARGS__, 24); \
+ break; \
+ case 25: \
+ result = func_name(__VA_ARGS__, 25); \
+ break; \
+ case 26: \
+ result = func_name(__VA_ARGS__, 26); \
+ break; \
+ case 27: \
+ result = func_name(__VA_ARGS__, 27); \
+ break; \
+ case 28: \
+ result = func_name(__VA_ARGS__, 28); \
+ break; \
+ case 29: \
+ result = func_name(__VA_ARGS__, 29); \
+ break; \
+ case 30: \
+ result = func_name(__VA_ARGS__, 30); \
+ break; \
+ case 31: \
+ result = func_name(__VA_ARGS__, 31); \
+ break; \
+ default: \
+ result = default_case; \
+ break; \
+ } \
+ } while (0)
+
+#define SIMDE_CONSTIFY_64_(func_name, result, default_case, imm, ...) \
+ do { \
+ switch (imm) { \
+ case 0: \
+ result = func_name(__VA_ARGS__, 0); \
+ break; \
+ case 1: \
+ result = func_name(__VA_ARGS__, 1); \
+ break; \
+ case 2: \
+ result = func_name(__VA_ARGS__, 2); \
+ break; \
+ case 3: \
+ result = func_name(__VA_ARGS__, 3); \
+ break; \
+ case 4: \
+ result = func_name(__VA_ARGS__, 4); \
+ break; \
+ case 5: \
+ result = func_name(__VA_ARGS__, 5); \
+ break; \
+ case 6: \
+ result = func_name(__VA_ARGS__, 6); \
+ break; \
+ case 7: \
+ result = func_name(__VA_ARGS__, 7); \
+ break; \
+ case 8: \
+ result = func_name(__VA_ARGS__, 8); \
+ break; \
+ case 9: \
+ result = func_name(__VA_ARGS__, 9); \
+ break; \
+ case 10: \
+ result = func_name(__VA_ARGS__, 10); \
+ break; \
+ case 11: \
+ result = func_name(__VA_ARGS__, 11); \
+ break; \
+ case 12: \
+ result = func_name(__VA_ARGS__, 12); \
+ break; \
+ case 13: \
+ result = func_name(__VA_ARGS__, 13); \
+ break; \
+ case 14: \
+ result = func_name(__VA_ARGS__, 14); \
+ break; \
+ case 15: \
+ result = func_name(__VA_ARGS__, 15); \
+ break; \
+ case 16: \
+ result = func_name(__VA_ARGS__, 16); \
+ break; \
+ case 17: \
+ result = func_name(__VA_ARGS__, 17); \
+ break; \
+ case 18: \
+ result = func_name(__VA_ARGS__, 18); \
+ break; \
+ case 19: \
+ result = func_name(__VA_ARGS__, 19); \
+ break; \
+ case 20: \
+ result = func_name(__VA_ARGS__, 20); \
+ break; \
+ case 21: \
+ result = func_name(__VA_ARGS__, 21); \
+ break; \
+ case 22: \
+ result = func_name(__VA_ARGS__, 22); \
+ break; \
+ case 23: \
+ result = func_name(__VA_ARGS__, 23); \
+ break; \
+ case 24: \
+ result = func_name(__VA_ARGS__, 24); \
+ break; \
+ case 25: \
+ result = func_name(__VA_ARGS__, 25); \
+ break; \
+ case 26: \
+ result = func_name(__VA_ARGS__, 26); \
+ break; \
+ case 27: \
+ result = func_name(__VA_ARGS__, 27); \
+ break; \
+ case 28: \
+ result = func_name(__VA_ARGS__, 28); \
+ break; \
+ case 29: \
+ result = func_name(__VA_ARGS__, 29); \
+ break; \
+ case 30: \
+ result = func_name(__VA_ARGS__, 30); \
+ break; \
+ case 31: \
+ result = func_name(__VA_ARGS__, 31); \
+ break; \
+ case 32: \
+ result = func_name(__VA_ARGS__, 32); \
+ break; \
+ case 33: \
+ result = func_name(__VA_ARGS__, 33); \
+ break; \
+ case 34: \
+ result = func_name(__VA_ARGS__, 34); \
+ break; \
+ case 35: \
+ result = func_name(__VA_ARGS__, 35); \
+ break; \
+ case 36: \
+ result = func_name(__VA_ARGS__, 36); \
+ break; \
+ case 37: \
+ result = func_name(__VA_ARGS__, 37); \
+ break; \
+ case 38: \
+ result = func_name(__VA_ARGS__, 38); \
+ break; \
+ case 39: \
+ result = func_name(__VA_ARGS__, 39); \
+ break; \
+ case 40: \
+ result = func_name(__VA_ARGS__, 40); \
+ break; \
+ case 41: \
+ result = func_name(__VA_ARGS__, 41); \
+ break; \
+ case 42: \
+ result = func_name(__VA_ARGS__, 42); \
+ break; \
+ case 43: \
+ result = func_name(__VA_ARGS__, 43); \
+ break; \
+ case 44: \
+ result = func_name(__VA_ARGS__, 44); \
+ break; \
+ case 45: \
+ result = func_name(__VA_ARGS__, 45); \
+ break; \
+ case 46: \
+ result = func_name(__VA_ARGS__, 46); \
+ break; \
+ case 47: \
+ result = func_name(__VA_ARGS__, 47); \
+ break; \
+ case 48: \
+ result = func_name(__VA_ARGS__, 48); \
+ break; \
+ case 49: \
+ result = func_name(__VA_ARGS__, 49); \
+ break; \
+ case 50: \
+ result = func_name(__VA_ARGS__, 50); \
+ break; \
+ case 51: \
+ result = func_name(__VA_ARGS__, 51); \
+ break; \
+ case 52: \
+ result = func_name(__VA_ARGS__, 52); \
+ break; \
+ case 53: \
+ result = func_name(__VA_ARGS__, 53); \
+ break; \
+ case 54: \
+ result = func_name(__VA_ARGS__, 54); \
+ break; \
+ case 55: \
+ result = func_name(__VA_ARGS__, 55); \
+ break; \
+ case 56: \
+ result = func_name(__VA_ARGS__, 56); \
+ break; \
+ case 57: \
+ result = func_name(__VA_ARGS__, 57); \
+ break; \
+ case 58: \
+ result = func_name(__VA_ARGS__, 58); \
+ break; \
+ case 59: \
+ result = func_name(__VA_ARGS__, 59); \
+ break; \
+ case 60: \
+ result = func_name(__VA_ARGS__, 60); \
+ break; \
+ case 61: \
+ result = func_name(__VA_ARGS__, 61); \
+ break; \
+ case 62: \
+ result = func_name(__VA_ARGS__, 62); \
+ break; \
+ case 63: \
+ result = func_name(__VA_ARGS__, 63); \
+ break; \
+ default: \
+ result = default_case; \
+ break; \
+ } \
+ } while (0)
+
+#define SIMDE_CONSTIFY_2_NO_RESULT_(func_name, default_case, imm, ...) \
+ do { \
+ switch (imm) { \
+ case 0: \
+ func_name(__VA_ARGS__, 0); \
+ break; \
+ case 1: \
+ func_name(__VA_ARGS__, 1); \
+ break; \
+ default: \
+ default_case; \
+ break; \
+ } \
+ } while (0)
+
+#define SIMDE_CONSTIFY_4_NO_RESULT_(func_name, default_case, imm, ...) \
+ do { \
+ switch (imm) { \
+ case 0: \
+ func_name(__VA_ARGS__, 0); \
+ break; \
+ case 1: \
+ func_name(__VA_ARGS__, 1); \
+ break; \
+ case 2: \
+ func_name(__VA_ARGS__, 2); \
+ break; \
+ case 3: \
+ func_name(__VA_ARGS__, 3); \
+ break; \
+ default: \
+ default_case; \
+ break; \
+ } \
+ } while (0)
+
+#define SIMDE_CONSTIFY_8_NO_RESULT_(func_name, default_case, imm, ...) \
+ do { \
+ switch (imm) { \
+ case 0: \
+ func_name(__VA_ARGS__, 0); \
+ break; \
+ case 1: \
+ func_name(__VA_ARGS__, 1); \
+ break; \
+ case 2: \
+ func_name(__VA_ARGS__, 2); \
+ break; \
+ case 3: \
+ func_name(__VA_ARGS__, 3); \
+ break; \
+ case 4: \
+ func_name(__VA_ARGS__, 4); \
+ break; \
+ case 5: \
+ func_name(__VA_ARGS__, 5); \
+ break; \
+ case 6: \
+ func_name(__VA_ARGS__, 6); \
+ break; \
+ case 7: \
+ func_name(__VA_ARGS__, 7); \
+ break; \
+ default: \
+ default_case; \
+ break; \
+ } \
+ } while (0)
+
+#define SIMDE_CONSTIFY_16_NO_RESULT_(func_name, default_case, imm, ...) \
+ do { \
+ switch (imm) { \
+ case 0: \
+ func_name(__VA_ARGS__, 0); \
+ break; \
+ case 1: \
+ func_name(__VA_ARGS__, 1); \
+ break; \
+ case 2: \
+ func_name(__VA_ARGS__, 2); \
+ break; \
+ case 3: \
+ func_name(__VA_ARGS__, 3); \
+ break; \
+ case 4: \
+ func_name(__VA_ARGS__, 4); \
+ break; \
+ case 5: \
+ func_name(__VA_ARGS__, 5); \
+ break; \
+ case 6: \
+ func_name(__VA_ARGS__, 6); \
+ break; \
+ case 7: \
+ func_name(__VA_ARGS__, 7); \
+ break; \
+ case 8: \
+ func_name(__VA_ARGS__, 8); \
+ break; \
+ case 9: \
+ func_name(__VA_ARGS__, 9); \
+ break; \
+ case 10: \
+ func_name(__VA_ARGS__, 10); \
+ break; \
+ case 11: \
+ func_name(__VA_ARGS__, 11); \
+ break; \
+ case 12: \
+ func_name(__VA_ARGS__, 12); \
+ break; \
+ case 13: \
+ func_name(__VA_ARGS__, 13); \
+ break; \
+ case 14: \
+ func_name(__VA_ARGS__, 14); \
+ break; \
+ case 15: \
+ func_name(__VA_ARGS__, 15); \
+ break; \
+ default: \
+ default_case; \
+ break; \
+ } \
+ } while (0)
+
+#define SIMDE_CONSTIFY_32_NO_RESULT_(func_name, default_case, imm, ...) \
+ do { \
+ switch (imm) { \
+ case 0: \
+ func_name(__VA_ARGS__, 0); \
+ break; \
+ case 1: \
+ func_name(__VA_ARGS__, 1); \
+ break; \
+ case 2: \
+ func_name(__VA_ARGS__, 2); \
+ break; \
+ case 3: \
+ func_name(__VA_ARGS__, 3); \
+ break; \
+ case 4: \
+ func_name(__VA_ARGS__, 4); \
+ break; \
+ case 5: \
+ func_name(__VA_ARGS__, 5); \
+ break; \
+ case 6: \
+ func_name(__VA_ARGS__, 6); \
+ break; \
+ case 7: \
+ func_name(__VA_ARGS__, 7); \
+ break; \
+ case 8: \
+ func_name(__VA_ARGS__, 8); \
+ break; \
+ case 9: \
+ func_name(__VA_ARGS__, 9); \
+ break; \
+ case 10: \
+ func_name(__VA_ARGS__, 10); \
+ break; \
+ case 11: \
+ func_name(__VA_ARGS__, 11); \
+ break; \
+ case 12: \
+ func_name(__VA_ARGS__, 12); \
+ break; \
+ case 13: \
+ func_name(__VA_ARGS__, 13); \
+ break; \
+ case 14: \
+ func_name(__VA_ARGS__, 14); \
+ break; \
+ case 15: \
+ func_name(__VA_ARGS__, 15); \
+ break; \
+ case 16: \
+ func_name(__VA_ARGS__, 16); \
+ break; \
+ case 17: \
+ func_name(__VA_ARGS__, 17); \
+ break; \
+ case 18: \
+ func_name(__VA_ARGS__, 18); \
+ break; \
+ case 19: \
+ func_name(__VA_ARGS__, 19); \
+ break; \
+ case 20: \
+ func_name(__VA_ARGS__, 20); \
+ break; \
+ case 21: \
+ func_name(__VA_ARGS__, 21); \
+ break; \
+ case 22: \
+ func_name(__VA_ARGS__, 22); \
+ break; \
+ case 23: \
+ func_name(__VA_ARGS__, 23); \
+ break; \
+ case 24: \
+ func_name(__VA_ARGS__, 24); \
+ break; \
+ case 25: \
+ func_name(__VA_ARGS__, 25); \
+ break; \
+ case 26: \
+ func_name(__VA_ARGS__, 26); \
+ break; \
+ case 27: \
+ func_name(__VA_ARGS__, 27); \
+ break; \
+ case 28: \
+ func_name(__VA_ARGS__, 28); \
+ break; \
+ case 29: \
+ func_name(__VA_ARGS__, 29); \
+ break; \
+ case 30: \
+ func_name(__VA_ARGS__, 30); \
+ break; \
+ case 31: \
+ func_name(__VA_ARGS__, 31); \
+ break; \
+ default: \
+ default_case; \
+ break; \
+ } \
+ } while (0)
+
+#define SIMDE_CONSTIFY_64_NO_RESULT_(func_name, default_case, imm, ...) \
+ do { \
+ switch (imm) { \
+ case 0: \
+ func_name(__VA_ARGS__, 0); \
+ break; \
+ case 1: \
+ func_name(__VA_ARGS__, 1); \
+ break; \
+ case 2: \
+ func_name(__VA_ARGS__, 2); \
+ break; \
+ case 3: \
+ func_name(__VA_ARGS__, 3); \
+ break; \
+ case 4: \
+ func_name(__VA_ARGS__, 4); \
+ break; \
+ case 5: \
+ func_name(__VA_ARGS__, 5); \
+ break; \
+ case 6: \
+ func_name(__VA_ARGS__, 6); \
+ break; \
+ case 7: \
+ func_name(__VA_ARGS__, 7); \
+ break; \
+ case 8: \
+ func_name(__VA_ARGS__, 8); \
+ break; \
+ case 9: \
+ func_name(__VA_ARGS__, 9); \
+ break; \
+ case 10: \
+ func_name(__VA_ARGS__, 10); \
+ break; \
+ case 11: \
+ func_name(__VA_ARGS__, 11); \
+ break; \
+ case 12: \
+ func_name(__VA_ARGS__, 12); \
+ break; \
+ case 13: \
+ func_name(__VA_ARGS__, 13); \
+ break; \
+ case 14: \
+ func_name(__VA_ARGS__, 14); \
+ break; \
+ case 15: \
+ func_name(__VA_ARGS__, 15); \
+ break; \
+ case 16: \
+ func_name(__VA_ARGS__, 16); \
+ break; \
+ case 17: \
+ func_name(__VA_ARGS__, 17); \
+ break; \
+ case 18: \
+ func_name(__VA_ARGS__, 18); \
+ break; \
+ case 19: \
+ func_name(__VA_ARGS__, 19); \
+ break; \
+ case 20: \
+ func_name(__VA_ARGS__, 20); \
+ break; \
+ case 21: \
+ func_name(__VA_ARGS__, 21); \
+ break; \
+ case 22: \
+ func_name(__VA_ARGS__, 22); \
+ break; \
+ case 23: \
+ func_name(__VA_ARGS__, 23); \
+ break; \
+ case 24: \
+ func_name(__VA_ARGS__, 24); \
+ break; \
+ case 25: \
+ func_name(__VA_ARGS__, 25); \
+ break; \
+ case 26: \
+ func_name(__VA_ARGS__, 26); \
+ break; \
+ case 27: \
+ func_name(__VA_ARGS__, 27); \
+ break; \
+ case 28: \
+ func_name(__VA_ARGS__, 28); \
+ break; \
+ case 29: \
+ func_name(__VA_ARGS__, 29); \
+ break; \
+ case 30: \
+ func_name(__VA_ARGS__, 30); \
+ break; \
+ case 31: \
+ func_name(__VA_ARGS__, 31); \
+ break; \
+ case 32: \
+ func_name(__VA_ARGS__, 32); \
+ break; \
+ case 33: \
+ func_name(__VA_ARGS__, 33); \
+ break; \
+ case 34: \
+ func_name(__VA_ARGS__, 34); \
+ break; \
+ case 35: \
+ func_name(__VA_ARGS__, 35); \
+ break; \
+ case 36: \
+ func_name(__VA_ARGS__, 36); \
+ break; \
+ case 37: \
+ func_name(__VA_ARGS__, 37); \
+ break; \
+ case 38: \
+ func_name(__VA_ARGS__, 38); \
+ break; \
+ case 39: \
+ func_name(__VA_ARGS__, 39); \
+ break; \
+ case 40: \
+ func_name(__VA_ARGS__, 40); \
+ break; \
+ case 41: \
+ func_name(__VA_ARGS__, 41); \
+ break; \
+ case 42: \
+ func_name(__VA_ARGS__, 42); \
+ break; \
+ case 43: \
+ func_name(__VA_ARGS__, 43); \
+ break; \
+ case 44: \
+ func_name(__VA_ARGS__, 44); \
+ break; \
+ case 45: \
+ func_name(__VA_ARGS__, 45); \
+ break; \
+ case 46: \
+ func_name(__VA_ARGS__, 46); \
+ break; \
+ case 47: \
+ func_name(__VA_ARGS__, 47); \
+ break; \
+ case 48: \
+ func_name(__VA_ARGS__, 48); \
+ break; \
+ case 49: \
+ func_name(__VA_ARGS__, 49); \
+ break; \
+ case 50: \
+ func_name(__VA_ARGS__, 50); \
+ break; \
+ case 51: \
+ func_name(__VA_ARGS__, 51); \
+ break; \
+ case 52: \
+ func_name(__VA_ARGS__, 52); \
+ break; \
+ case 53: \
+ func_name(__VA_ARGS__, 53); \
+ break; \
+ case 54: \
+ func_name(__VA_ARGS__, 54); \
+ break; \
+ case 55: \
+ func_name(__VA_ARGS__, 55); \
+ break; \
+ case 56: \
+ func_name(__VA_ARGS__, 56); \
+ break; \
+ case 57: \
+ func_name(__VA_ARGS__, 57); \
+ break; \
+ case 58: \
+ func_name(__VA_ARGS__, 58); \
+ break; \
+ case 59: \
+ func_name(__VA_ARGS__, 59); \
+ break; \
+ case 60: \
+ func_name(__VA_ARGS__, 60); \
+ break; \
+ case 61: \
+ func_name(__VA_ARGS__, 61); \
+ break; \
+ case 62: \
+ func_name(__VA_ARGS__, 62); \
+ break; \
+ case 63: \
+ func_name(__VA_ARGS__, 63); \
+ break; \
+ default: \
+ default_case; \
+ break; \
+ } \
+ } while (0)
+
+HEDLEY_DIAGNOSTIC_POP
+
+#endif
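
The SIMDE_CONSTIFY_* family above exists because many intrinsics require their immediate argument to be a compile-time constant: the macro switches over every possible run-time value and re-emits the call with a literal at each case. A minimal sketch of the pattern follows; my_shift_left and MY_CONSTIFY_4_NO_RESULT_ are hypothetical stand-ins written for illustration, not part of this diff.

    #include <stddef.h>
    #include <stdio.h>

    /* Hypothetical function standing in for an intrinsic that would
     * normally require a compile-time constant as its last argument. */
    static void my_shift_left(int *data, size_t n, const int amount)
    {
        for (size_t i = 0; i < n; i++)
            data[i] <<= amount;
    }

    /* Same shape as SIMDE_CONSTIFY_4_NO_RESULT_ above: dispatch a
     * run-time value to call sites that each pass a literal constant. */
    #define MY_CONSTIFY_4_NO_RESULT_(func, default_case, imm, ...) \
        do {                                                        \
            switch (imm) {                                          \
            case 0: func(__VA_ARGS__, 0); break;                    \
            case 1: func(__VA_ARGS__, 1); break;                    \
            case 2: func(__VA_ARGS__, 2); break;                    \
            case 3: func(__VA_ARGS__, 3); break;                    \
            default: default_case; break;                           \
            }                                                       \
        } while (0)

    int main(void)
    {
        int data[4] = {1, 2, 3, 4};
        int imm = 2; /* only known at run time */
        MY_CONSTIFY_4_NO_RESULT_(my_shift_left, (void)0, imm, data, 4);
        printf("%d %d %d %d\n", data[0], data[1], data[2], data[3]); /* 4 8 12 16 */
        return 0;
    }
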
obs-studio-26.1.1.tar.xz/libobs/util/simde/simde-detect-clang.h
Added
+/* Detect Clang Version
+ * Created by Evan Nemerson <evan@nemerson.com>
+ *
+ * To the extent possible under law, the author(s) have dedicated all
+ * copyright and related and neighboring rights to this software to
+ * the public domain worldwide. This software is distributed without
+ * any warranty.
+ *
+ * For details, see <http://creativecommons.org/publicdomain/zero/1.0/>.
+ * SPDX-License-Identifier: CC0-1.0
+ */
+
+/* This file was originally part of SIMDe
+ * (<https://github.com/simd-everywhere/simde>). You're free to do with it as
+ * you please, but I do have a few small requests:
+ *
+ * * If you make improvements, please submit them back to SIMDe
+ * (at <https://github.com/simd-everywhere/simde/issues>) so others can
+ * benefit from them.
+ * * Please keep a link to SIMDe intact so people know where to submit
+ * improvements.
+ * * If you expose it publicly, please change the SIMDE_ prefix to
+ * something specific to your project.
+ *
+ * The version numbers clang exposes (in the __clang_major__,
+ * __clang_minor__, and __clang_patchlevel__ macros) are unreliable.
+ * Vendors such as Apple will define these values to their version
+ * numbers; for example, "Apple Clang 4.0" is really clang 3.1, but
+ * __clang_major__ and __clang_minor__ are defined to 4 and 0
+ * respectively, instead of 3 and 1.
+ *
+ * The solution is *usually* to use clang's feature detection macros
+ * (<https://clang.llvm.org/docs/LanguageExtensions.html#feature-checking-macros>)
+ * to determine if the feature you're interested in is available. This
+ * generally works well, and it should probably be the first thing you
+ * try. Unfortunately, it's not possible to check for everything. In
+ * particular, compiler bugs.
+ *
+ * This file just uses the feature checking macros to detect features
+ * added in specific versions of clang to identify which version of
+ * clang the compiler is based on.
+ *
+ * Right now it only goes back to 3.6, but I'm happy to accept patches
+ * to go back further. And, of course, newer versions are welcome if
+ * they're not already present, and if you find a way to detect a point
+ * release that would be great, too!
+ */
+
+#if !defined(SIMDE_DETECT_CLANG_H)
+#define SIMDE_DETECT_CLANG_H 1
+
+/* Attempt to detect the upstream clang version number. I usually only
+ * worry about major version numbers (at least for 4.0+), but if you
+ * need more resolution I'm happy to accept patches that are able to
+ * detect minor versions as well. That said, you'll probably have a
+ * hard time with detection since AFAIK most minor releases don't add
+ * anything we can detect. */
+
+#if defined(__clang__) && !defined(SIMDE_DETECT_CLANG_VERSION)
+#if __has_warning("-Wformat-insufficient-args")
+#define SIMDE_DETECT_CLANG_VERSION 120000
+#elif __has_warning("-Wimplicit-const-int-float-conversion")
+#define SIMDE_DETECT_CLANG_VERSION 110000
+#elif __has_warning("-Wmisleading-indentation")
+#define SIMDE_DETECT_CLANG_VERSION 100000
+#elif defined(__FILE_NAME__)
+#define SIMDE_DETECT_CLANG_VERSION 90000
+#elif __has_warning("-Wextra-semi-stmt") || \
+ __has_builtin(__builtin_rotateleft32)
+#define SIMDE_DETECT_CLANG_VERSION 80000
+#elif __has_warning("-Wc++98-compat-extra-semi")
+#define SIMDE_DETECT_CLANG_VERSION 70000
+#elif __has_warning("-Wpragma-pack")
+#define SIMDE_DETECT_CLANG_VERSION 60000
+#elif __has_warning("-Wbitfield-enum-conversion")
+#define SIMDE_DETECT_CLANG_VERSION 50000
+#elif __has_attribute(diagnose_if)
+#define SIMDE_DETECT_CLANG_VERSION 40000
+#elif __has_warning("-Wcast-calling-convention")
+#define SIMDE_DETECT_CLANG_VERSION 30900
+#elif __has_warning("-WCL4")
+#define SIMDE_DETECT_CLANG_VERSION 30800
+#elif __has_warning("-WIndependentClass-attribute")
+#define SIMDE_DETECT_CLANG_VERSION 30700
+#elif __has_warning("-Wambiguous-ellipsis")
+#define SIMDE_DETECT_CLANG_VERSION 30600
+#else
+#define SIMDE_DETECT_CLANG_VERSION 1
+#endif
+#endif /* defined(__clang__) && !defined(SIMDE_DETECT_CLANG_VERSION) */
+
+/* The SIMDE_DETECT_CLANG_VERSION_CHECK macro is pretty
+ * straightforward; it returns true if the compiler is a derivative
+ * of clang >= the specified version.
+ *
+ * Since this file is often (primarily?) useful for working around bugs
+ * it is also helpful to have a macro which returns true only if the
+ * compiler is a version of clang *older* than the specified version to
+ * make it a bit easier to ifdef regions to add code for older versions,
+ * such as pragmas to disable a specific warning. */
+
+#if defined(SIMDE_DETECT_CLANG_VERSION)
+#define SIMDE_DETECT_CLANG_VERSION_CHECK(major, minor, revision) \
+ (SIMDE_DETECT_CLANG_VERSION >= \
+ ((major * 10000) + (minor * 1000) + (revision)))
+#define SIMDE_DETECT_CLANG_VERSION_NOT(major, minor, revision) \
+ (SIMDE_DETECT_CLANG_VERSION < \
+ ((major * 10000) + (minor * 1000) + (revision)))
+#else
+#define SIMDE_DETECT_CLANG_VERSION_CHECK(major, minor, revision) (0)
+#define SIMDE_DETECT_CLANG_VERSION_NOT(major, minor, revision) (1)
+#endif
+
+#endif /* !defined(SIMDE_DETECT_CLANG_H) */
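
The check/not macros are meant for gating version-specific workarounds, as the -Wvector-conversion block in simde-diagnostic.h further down this diff does. A small hedged sketch of the same idiom; the specific warning chosen here is only an example:

    #include "hedley.h"
    #include "simde-detect-clang.h"

    /* Disable -Wvector-conversion only on clang derivatives older than
     * 10.0, where the diagnostic is too noisy to be useful. */
    #if defined(__clang__) && SIMDE_DETECT_CLANG_VERSION_NOT(10, 0, 0)
    #pragma clang diagnostic ignored "-Wvector-conversion"
    #endif
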
obs-studio-26.1.0.tar.xz/libobs/util/simde/simde-diagnostic.h -> obs-studio-26.1.1.tar.xz/libobs/util/simde/simde-diagnostic.h
Changed
*/
#if !defined(SIMDE_DIAGNOSTIC_H)
+#define SIMDE_DIAGNOSTIC_H
#include "hedley.h"
+#include "simde-detect-clang.h"
/* This is only to help us implement functions like _mm_undefined_ps. */
#if defined(SIMDE_DIAGNOSTIC_DISABLE_UNINITIALIZED_)
#define SIMDE_DIAGNOSTIC_DISABLE_SIMD_PRAGMA_DEPRECATED_
#endif
+/* MSVC emits a diagnostic when we call a function (like
+ * simde_mm_set_epi32) while initializing a struct. We currently do
+ * this a *lot* in the tests. */
#if defined(HEDLEY_MSVC_VERSION)
#define SIMDE_DIAGNOSTIC_DISABLE_NON_CONSTANT_AGGREGATE_INITIALIZER_ \
__pragma(warning(disable : 4204))
#define SIMDE_DIAGNOSTIC_DISABLE_VARIADIC_MACROS_
#endif
+/* emscripten requires us to use a __wasm_unimplemented_simd128__ macro
+ * before we can access certain SIMD intrinsics, but this diagnostic
+ * warns about it being a reserved name. It is a reserved name, but
+ * it's reserved for the compiler and we are using it to convey
+ * information to the compiler.
+ *
+ * This is also used when enabling native aliases since we don't get to
+ * choose the macro names. */
+#if HEDLEY_HAS_WARNING("-Wreserved-id-macro")
+#define SIMDE_DIAGNOSTIC_DISABLE_RESERVED_ID_MACRO_ \
+ _Pragma("clang diagnostic ignored \"-Wreserved-id-macro\"")
+#else
+#define SIMDE_DIAGNOSTIC_DISABLE_RESERVED_ID_MACRO_
+#endif
+
+/* clang 3.8 warns about the packed attribute being unnecessary when
+ * used in the _mm_loadu_* functions. That *may* be true for version
+ * 3.8, but for later versions it is crucial in order to make unaligned
+ * access safe. */
+#if HEDLEY_HAS_WARNING("-Wpacked")
+#define SIMDE_DIAGNOSTIC_DISABLE_PACKED_ \
+ _Pragma("clang diagnostic ignored \"-Wpacked\"")
+#else
+#define SIMDE_DIAGNOSTIC_DISABLE_PACKED_
+#endif
+
/* Triggered when assigning a float to a double implicitly. We use
* explicit casts in SIMDe, this is only used in the test suite. */
#if HEDLEY_HAS_WARNING("-Wdouble-promotion")
/* Several compilers treat conformant array parameters as VLAs. We
* test to make sure we're in C mode (C++ doesn't support CAPs), and
- * that the version of the standard supports CAPs. We also blacklist
+ * that the version of the standard supports CAPs. We also reject
* some buggy compilers like MSVC (the logic is in Hedley if you want
* to take a look), but with certain warnings enabled some compilers
* still like to emit a diagnostic. */
#elif HEDLEY_GCC_VERSION_CHECK(3, 4, 0)
#define SIMDE_DIAGNOSTIC_DISABLE_UNUSED_FUNCTION_ \
_Pragma("GCC diagnostic ignored \"-Wunused-function\"")
+#elif HEDLEY_MSVC_VERSION_CHECK(19, 0, 0) /* Likely goes back further */
+#define SIMDE_DIAGNOSTIC_DISABLE_UNUSED_FUNCTION_ \
+ __pragma(warning(disable : 4505))
#else
#define SIMDE_DIAGNOSTIC_DISABLE_UNUSED_FUNCTION_
#endif
#define SIMDE_DIAGNOSTIC_DISABLE_PASS_FAILED_
#endif
-/* https://github.com/nemequ/simde/issues/277 */
+#if HEDLEY_HAS_WARNING("-Wpadded")
+#define SIMDE_DIAGNOSTIC_DISABLE_PADDED_ \
+ _Pragma("clang diagnostic ignored \"-Wpadded\"")
+#elif HEDLEY_MSVC_VERSION_CHECK(19, 0, 0) /* Likely goes back further */
+#define SIMDE_DIAGNOSTIC_DISABLE_PADDED_ __pragma(warning(disable : 4324))
+#else
+#define SIMDE_DIAGNOSTIC_DISABLE_PADDED_
+#endif
+
+#if HEDLEY_HAS_WARNING("-Wzero-as-null-pointer-constant")
+#define SIMDE_DIAGNOSTIC_DISABLE_ZERO_AS_NULL_POINTER_CONSTANT_ \
+ _Pragma("clang diagnostic ignored \"-Wzero-as-null-pointer-constant\"")
+#else
+#define SIMDE_DIAGNOSTIC_DISABLE_ZERO_AS_NULL_POINTER_CONSTANT_
+#endif
+
+#if HEDLEY_HAS_WARNING("-Wold-style-cast")
+#define SIMDE_DIAGNOSTIC_DISABLE_OLD_STYLE_CAST_ \
+ _Pragma("clang diagnostic ignored \"-Wold-style-cast\"")
+#else
+#define SIMDE_DIAGNOSTIC_DISABLE_OLD_STYLE_CAST_
+#endif
+
+#if HEDLEY_HAS_WARNING("-Wcast-function-type") || \
+ HEDLEY_GCC_VERSION_CHECK(8, 0, 0)
+#define SIMDE_DIAGNOSTIC_DISABLE_CAST_FUNCTION_TYPE_ \
+ _Pragma("GCC diagnostic ignored \"-Wcast-function-type\"")
+#else
+#define SIMDE_DIAGNOSTIC_DISABLE_CAST_FUNCTION_TYPE_
+#endif
+
+/* clang will emit this warning when we use C99 extensions when not in
+ * C99 mode, even though it does support this. In such cases we check
+ * the compiler and version first, so we know it's not a problem. */
+#if HEDLEY_HAS_WARNING("-Wc99-extensions")
+#define SIMDE_DIAGNOSTIC_DISABLE_C99_EXTENSIONS_ \
+ _Pragma("clang diagnostic ignored \"-Wc99-extensions\"")
+#else
+#define SIMDE_DIAGNOSTIC_DISABLE_C99_EXTENSIONS_
+#endif
+
+/* https://github.com/simd-everywhere/simde/issues/277 */
#if defined(HEDLEY_GCC_VERSION) && HEDLEY_GCC_VERSION_CHECK(4, 6, 0) && \
- !HEDLEY_GCC_VERSION_CHECK(6, 0, 0) && defined(__cplusplus)
-#define SIMDE_DIAGNOSTIC_DISABLE_BUGGY_UNUSED_BUT_SET_VARIBALE \
+ !HEDLEY_GCC_VERSION_CHECK(6, 4, 0) && defined(__cplusplus)
+#define SIMDE_DIAGNOSTIC_DISABLE_BUGGY_UNUSED_BUT_SET_VARIBALE_ \
_Pragma("GCC diagnostic ignored \"-Wunused-but-set-variable\"")
#else
-#define SIMDE_DIAGNOSTIC_DISABLE_BUGGY_UNUSED_BUT_SET_VARIBALE
+#define SIMDE_DIAGNOSTIC_DISABLE_BUGGY_UNUSED_BUT_SET_VARIBALE_
+#endif
+
+/* This is the warning that you normally define _CRT_SECURE_NO_WARNINGS
+ * to silence, but you have to do that before including anything and
+ * that would require reordering includes. */
+#if defined(_MSC_VER)
+#define SIMDE_DIAGNOSTIC_DISABLE_ANNEX_K_ __pragma(warning(disable : 4996))
+#else
+#define SIMDE_DIAGNOSTIC_DISABLE_ANNEX_K_
#endif
/* Some compilers, such as clang, may use `long long` for 64-bit
* -Wc++98-compat-pedantic which says 'long long' is incompatible with
* C++98. */
#if HEDLEY_HAS_WARNING("-Wc++98-compat-pedantic")
-#define SIMDE_DIAGNOSTIC_DISABLE_CPP98_COMPAT_PEDANTIC \
+#define SIMDE_DIAGNOSTIC_DISABLE_CPP98_COMPAT_PEDANTIC_ \
_Pragma("clang diagnostic ignored \"-Wc++98-compat-pedantic\"")
#else
-#define SIMDE_DIAGNOSTIC_DISABLE_CPP98_COMPAT_PEDANTIC
+#define SIMDE_DIAGNOSTIC_DISABLE_CPP98_COMPAT_PEDANTIC_
+#endif
+
+/* Same problem as above */
+#if HEDLEY_HAS_WARNING("-Wc++11-long-long")
+#define SIMDE_DIAGNOSTIC_DISABLE_CPP11_LONG_LONG_ \
+ _Pragma("clang diagnostic ignored \"-Wc++11-long-long\"")
+#else
+#define SIMDE_DIAGNOSTIC_DISABLE_CPP11_LONG_LONG_
+#endif
+
+/* emscripten emits this whenever stdin/stdout/stderr is used in a
+ * macro. */
+#if HEDLEY_HAS_WARNING("-Wdisabled-macro-expansion")
+#define SIMDE_DIAGNOSTIC_DISABLE_DISABLED_MACRO_EXPANSION_ \
+ _Pragma("clang diagnostic ignored \"-Wdisabled-macro-expansion\"")
+#else
+#define SIMDE_DIAGNOSTIC_DISABLE_DISABLED_MACRO_EXPANSION_
+#endif
+
+/* Clang uses C11 generic selections to implement some AltiVec
+ * functions, which triggers this diagnostic when not compiling
+ * in C11 mode */
+#if HEDLEY_HAS_WARNING("-Wc11-extensions")
+#define SIMDE_DIAGNOSTIC_DISABLE_C11_EXTENSIONS_ \
+ _Pragma("clang diagnostic ignored \"-Wc11-extensions\"")
+#else
+#define SIMDE_DIAGNOSTIC_DISABLE_C11_EXTENSIONS_
+#endif
+
+/* Clang sometimes triggers this warning in macros in the AltiVec and
+ * NEON headers, or due to missing functions. */
+#if HEDLEY_HAS_WARNING("-Wvector-conversion")
+#define SIMDE_DIAGNOSTIC_DISABLE_VECTOR_CONVERSION_ \
+ _Pragma("clang diagnostic ignored \"-Wvector-conversion\"")
+/* For NEON, the situation with -Wvector-conversion in clang < 10 is
+ * bad enough that we just disable the warning altogether. */
+#if defined(SIMDE_ARCH_ARM) && SIMDE_DETECT_CLANG_VERSION_NOT(10, 0, 0)
+#define SIMDE_DIAGNOSTIC_DISABLE_BUGGY_VECTOR_CONVERSION_ \
+ SIMDE_DIAGNOSTIC_DISABLE_VECTOR_CONVERSION_
+#endif
+#else
+#define SIMDE_DIAGNOSTIC_DISABLE_VECTOR_CONVERSION_
+#endif
+#if !defined(SIMDE_DIAGNOSTIC_DISABLE_BUGGY_VECTOR_CONVERSION_)
+#define SIMDE_DIAGNOSTIC_DISABLE_BUGGY_VECTOR_CONVERSION_
+#endif
+
+/* SLEEF triggers this a *lot* in their headers */
+#if HEDLEY_HAS_WARNING("-Wignored-qualifiers")
+#define SIMDE_DIAGNOSTIC_DISABLE_IGNORED_QUALIFIERS_ \
+ _Pragma("clang diagnostic ignored \"-Wignored-qualifiers\"")
+#elif HEDLEY_GCC_VERSION_CHECK(4, 3, 0)
+#define SIMDE_DIAGNOSTIC_DISABLE_IGNORED_QUALIFIERS_ \
+ _Pragma("GCC diagnostic ignored \"-Wignored-qualifiers\"")
+#else
+#define SIMDE_DIAGNOSTIC_DISABLE_IGNORED_QUALIFIERS_
+#endif
+
+/* GCC emits this under some circumstances when using __int128 */
+#if HEDLEY_GCC_VERSION_CHECK(4, 8, 0)
+#define SIMDE_DIAGNOSTIC_DISABLE_PEDANTIC_ \
+ _Pragma("GCC diagnostic ignored \"-Wpedantic\"")
+#else
+#define SIMDE_DIAGNOSTIC_DISABLE_PEDANTIC_
+#endif
+
+/* MSVC doesn't like (__assume(0), code) and will warn about code being
+ * unreachable, but we want it there because not all compilers
+ * understand the unreachable macro and will complain if it is missing.
+ * I'm planning on adding a new macro to Hedley to handle this a bit
+ * more elegantly, but until then... */
+#if defined(HEDLEY_MSVC_VERSION)
+#define SIMDE_DIAGNOSTIC_DISABLE_UNREACHABLE_ __pragma(warning(disable : 4702))
+#else
+#define SIMDE_DIAGNOSTIC_DISABLE_UNREACHABLE_
+#endif
+
+/* This is a false positive from GCC in a few places. */
+#if HEDLEY_GCC_VERSION_CHECK(4, 7, 0)
+#define SIMDE_DIAGNOSTIC_DISABLE_MAYBE_UNINITIAZILED_ \
+ _Pragma("GCC diagnostic ignored \"-Wmaybe-uninitialized\"")
+#else
+#define SIMDE_DIAGNOSTIC_DISABLE_MAYBE_UNINITIAZILED_
+#endif
+
+#if defined(SIMDE_ENABLE_NATIVE_ALIASES)
+#define SIMDE_DISABLE_UNWANTED_DIAGNOSTICS_NATIVE_ALIASES_ \
+ SIMDE_DIAGNOSTIC_DISABLE_RESERVED_ID_MACRO_
+#else
+#define SIMDE_DISABLE_UNWANTED_DIAGNOSTICS_NATIVE_ALIASES_
#endif
#define SIMDE_DISABLE_UNWANTED_DIAGNOSTICS \
+ SIMDE_DISABLE_UNWANTED_DIAGNOSTICS_NATIVE_ALIASES_ \
SIMDE_DIAGNOSTIC_DISABLE_PSABI_ \
SIMDE_DIAGNOSTIC_DISABLE_NO_EMMS_INSTRUCTION_ \
SIMDE_DIAGNOSTIC_DISABLE_SIMD_PRAGMA_DEPRECATED_ \
SIMDE_DIAGNOSTIC_DISABLE_USED_BUT_MARKED_UNUSED_ \
SIMDE_DIAGNOSTIC_DISABLE_UNUSED_FUNCTION_ \
SIMDE_DIAGNOSTIC_DISABLE_PASS_FAILED_ \
- SIMDE_DIAGNOSTIC_DISABLE_CPP98_COMPAT_PEDANTIC \
- SIMDE_DIAGNOSTIC_DISABLE_BUGGY_UNUSED_BUT_SET_VARIBALE
+ SIMDE_DIAGNOSTIC_DISABLE_CPP98_COMPAT_PEDANTIC_ \
+ SIMDE_DIAGNOSTIC_DISABLE_CPP11_LONG_LONG_ \
+ SIMDE_DIAGNOSTIC_DISABLE_BUGGY_UNUSED_BUT_SET_VARIBALE_ \
+ SIMDE_DIAGNOSTIC_DISABLE_BUGGY_VECTOR_CONVERSION_
-#endif
+#endif /* !defined(SIMDE_DIAGNOSTIC_H) */
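
The individual SIMDE_DIAGNOSTIC_DISABLE_* macros are rolled up into SIMDE_DISABLE_UNWANTED_DIAGNOSTICS, which headers later in this diff (simde-math.h, x86/mmx.h) bracket with Hedley's push/pop. A sketch of that bracket, assuming hedley.h and simde-diagnostic.h are on the include path:

    #include "hedley.h"
    #include "simde-diagnostic.h"

    HEDLEY_DIAGNOSTIC_PUSH             /* save the caller's diagnostic state */
    SIMDE_DISABLE_UNWANTED_DIAGNOSTICS /* silence the warnings catalogued above */

    /* ...declarations that would otherwise trip those warnings... */

    HEDLEY_DIAGNOSTIC_POP              /* restore the caller's diagnostic state */
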
obs-studio-26.1.0.tar.xz/libobs/util/simde/simde-features.h -> obs-studio-26.1.1.tar.xz/libobs/util/simde/simde-features.h
Changed
#define SIMDE_FEATURES_H
#include "simde-arch.h"
+#include "simde-diagnostic.h"
#if !defined(SIMDE_X86_SVML_NATIVE) && !defined(SIMDE_X86_SVML_NO_NATIVE) && \
!defined(SIMDE_NO_NATIVE)
#define SIMDE_X86_AVX512F_NATIVE
#endif
+#if !defined(SIMDE_X86_AVX512VP2INTERSECT_NATIVE) && \
+ !defined(SIMDE_X86_AVX512VP2INTERSECT_NO_NATIVE) && \
+ !defined(SIMDE_NO_NATIVE)
+#if defined(SIMDE_ARCH_X86_AVX512VP2INTERSECT)
+#define SIMDE_X86_AVX512VP2INTERSECT_NATIVE
+#endif
+#endif
+#if defined(SIMDE_X86_AVX512VP2INTERSECT_NATIVE) && \
+ !defined(SIMDE_X86_AVX512F_NATIVE)
+#define SIMDE_X86_AVX512F_NATIVE
+#endif
+
+#if !defined(SIMDE_X86_AVX512VBMI_NATIVE) && \
+ !defined(SIMDE_X86_AVX512VBMI_NO_NATIVE) && !defined(SIMDE_NO_NATIVE)
+#if defined(SIMDE_ARCH_X86_AVX512VBMI)
+#define SIMDE_X86_AVX512VBMI_NATIVE
+#endif
+#endif
+#if defined(SIMDE_X86_AVX512VBMI_NATIVE) && !defined(SIMDE_X86_AVX512F_NATIVE)
+#define SIMDE_X86_AVX512F_NATIVE
+#endif
+
#if !defined(SIMDE_X86_AVX512CD_NATIVE) && \
!defined(SIMDE_X86_AVX512CD_NO_NATIVE) && !defined(SIMDE_NO_NATIVE)
#if defined(SIMDE_ARCH_X86_AVX512CD)
#endif
#endif
+#if !defined(SIMDE_X86_PCLMUL_NATIVE) && \
+ !defined(SIMDE_X86_PCLMUL_NO_NATIVE) && !defined(SIMDE_NO_NATIVE)
+#if defined(SIMDE_ARCH_X86_PCLMUL)
+#define SIMDE_X86_PCLMUL_NATIVE
+#endif
+#endif
+
+#if !defined(SIMDE_X86_VPCLMULQDQ_NATIVE) && \
+ !defined(SIMDE_X86_VPCLMULQDQ_NO_NATIVE) && !defined(SIMDE_NO_NATIVE)
+#if defined(SIMDE_ARCH_X86_VPCLMULQDQ)
+#define SIMDE_X86_VPCLMULQDQ_NATIVE
+#endif
+#endif
+
#if !defined(SIMDE_X86_SVML_NATIVE) && !defined(SIMDE_X86_SVML_NO_NATIVE) && \
!defined(SIMDE_NO_NATIVE)
#if defined(__INTEL_COMPILER)
#pragma warning(disable : 4799)
#endif
-#if defined(SIMDE_X86_AVX_NATIVE) || defined(SIMDE_X86_GFNI_NATIVE) || \
- defined(SIMDE_X86_SVML_NATIVE)
+#if defined(SIMDE_X86_AVX_NATIVE) || defined(SIMDE_X86_GFNI_NATIVE)
#include <immintrin.h>
#elif defined(SIMDE_X86_SSE4_2_NATIVE)
#include <nmmintrin.h>
#if !defined(SIMDE_ARM_NEON_A32V8_NATIVE) && \
!defined(SIMDE_ARM_NEON_A32V8_NO_NATIVE) && !defined(SIMDE_NO_NATIVE)
-#if defined(SIMDE_ARCH_ARM_NEON) && SIMDE_ARCH_ARM_CHECK(80)
+#if defined(SIMDE_ARCH_ARM_NEON) && SIMDE_ARCH_ARM_CHECK(80) && \
+ (__ARM_NEON_FP & 0x02)
#define SIMDE_ARM_NEON_A32V8_NATIVE
#endif
#endif
#include <arm_neon.h>
#endif
+#if !defined(SIMDE_ARM_SVE_NATIVE) && !defined(SIMDE_ARM_SVE_NO_NATIVE) && \
+ !defined(SIMDE_NO_NATIVE)
+#if defined(SIMDE_ARCH_ARM_SVE)
+#define SIMDE_ARM_SVE_NATIVE
+#include <arm_sve.h>
+#endif
+#endif
+
#if !defined(SIMDE_WASM_SIMD128_NATIVE) && \
!defined(SIMDE_WASM_SIMD128_NO_NATIVE) && !defined(SIMDE_NO_NATIVE)
#if defined(SIMDE_ARCH_WASM_SIMD128)
#endif
#if defined(SIMDE_WASM_SIMD128_NATIVE)
#if !defined(__wasm_unimplemented_simd128__)
+HEDLEY_DIAGNOSTIC_PUSH
+SIMDE_DIAGNOSTIC_DISABLE_RESERVED_ID_MACRO_
#define __wasm_unimplemented_simd128__
+HEDLEY_DIAGNOSTIC_POP
#endif
#include <wasm_simd128.h>
#endif
#define SIMDE_POWER_ALTIVEC_P5_NATIVE
#endif
#endif
-#if defined(SIMDE_POWER_ALTIVEC_P5_NATIVE)
-/* stdbool.h conflicts with the bool in altivec.h */
-#if defined(bool) && !defined(SIMDE_POWER_ALTIVEC_NO_UNDEF_BOOL_)
+
+#if defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+/* AltiVec conflicts with lots of stuff. The bool keyword conflicts
+ * with the bool keyword in C++ and the bool macro in C99+ (defined
+ * in stdbool.h). The vector keyword conflicts with std::vector in
+ * C++ if you are `using namespace std;`.
+ *
+ * Luckily AltiVec allows you to use `__vector`/`__bool`/`__pixel`
+ * instead, but altivec.h will unconditionally define
+ * `vector`/`bool`/`pixel` so we need to work around that.
+ *
+ * Unfortunately this means that if your code uses AltiVec directly
+ * it may break. If this is the case you'll want to define
+ * `SIMDE_POWER_ALTIVEC_NO_UNDEF` before including SIMDe. Or, even
+ * better, port your code to use the double-underscore versions. */
+#if defined(bool)
#undef bool
#endif
+
#include <altivec.h>
-/* GCC allows you to undefine these macros to prevent conflicts with
- * standard types as they become context-sensitive keywords. */
-#if defined(__cplusplus)
+
+#if !defined(SIMDE_POWER_ALTIVEC_NO_UNDEF)
#if defined(vector)
#undef vector
#endif
#if defined(bool)
#undef bool
#endif
-#define SIMDE_POWER_ALTIVEC_VECTOR(T) vector T
-#define SIMDE_POWER_ALTIVEC_PIXEL pixel
-#define SIMDE_POWER_ALTIVEC_BOOL bool
-#else
+#endif /* !defined(SIMDE_POWER_ALTIVEC_NO_UNDEF) */
+
+/* Use these instead of vector/pixel/bool in SIMDe. */
#define SIMDE_POWER_ALTIVEC_VECTOR(T) __vector T
#define SIMDE_POWER_ALTIVEC_PIXEL __pixel
#define SIMDE_POWER_ALTIVEC_BOOL __bool
-#endif /* defined(__cplusplus) */
+
+/* Re-define bool if we're using stdbool.h */
+#if !defined(__cplusplus) && defined(__bool_true_false_are_defined) && \
+ !defined(SIMDE_POWER_ALTIVEC_NO_UNDEF)
+#define bool _Bool
+#endif
+#endif
+
+#if !defined(SIMDE_MIPS_LOONGSON_MMI_NATIVE) && \
+ !defined(SIMDE_MIPS_LOONGSON_MMI_NO_NATIVE) && \
+ !defined(SIMDE_NO_NATIVE)
+#if defined(SIMDE_ARCH_MIPS_LOONGSON_MMI)
+#define SIMDE_MIPS_LOONGSON_MMI_NATIVE 1
+#endif
+#endif
+#if defined(SIMDE_MIPS_LOONGSON_MMI_NATIVE)
+#include <loongson-mmiintrin.h>
+#endif
+
+/* This is used to determine whether or not to fall back on a vector
+ * function in an earlier ISA extension, as well as whether
+ * we expect any attempts at vectorization to be fruitful or if we
+ * expect to always be running serial code. */
+
+#if !defined(SIMDE_NATURAL_VECTOR_SIZE)
+#if defined(SIMDE_X86_AVX512F_NATIVE)
+#define SIMDE_NATURAL_VECTOR_SIZE (512)
+#elif defined(SIMDE_X86_AVX_NATIVE)
+#define SIMDE_NATURAL_VECTOR_SIZE (256)
+#elif defined(SIMDE_X86_SSE_NATIVE) || defined(SIMDE_ARM_NEON_A32V7_NATIVE) || \
+ defined(SIMDE_WASM_SIMD128_NATIVE) || \
+ defined(SIMDE_POWER_ALTIVEC_P5_NATIVE)
+#define SIMDE_NATURAL_VECTOR_SIZE (128)
+#endif
+
+#if !defined(SIMDE_NATURAL_VECTOR_SIZE)
+#define SIMDE_NATURAL_VECTOR_SIZE (0)
+#endif
+#endif
+
+#define SIMDE_NATURAL_VECTOR_SIZE_LE(x) \
+ ((SIMDE_NATURAL_VECTOR_SIZE > 0) && (SIMDE_NATURAL_VECTOR_SIZE <= (x)))
+#define SIMDE_NATURAL_VECTOR_SIZE_GE(x) \
+ ((SIMDE_NATURAL_VECTOR_SIZE > 0) && (SIMDE_NATURAL_VECTOR_SIZE >= (x)))
+
+/* Native aliases */
+#if defined(SIMDE_ENABLE_NATIVE_ALIASES)
+#if !defined(SIMDE_X86_MMX_NATIVE)
+#define SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES
+#endif
+#if !defined(SIMDE_X86_SSE_NATIVE)
+#define SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES
+#endif
+#if !defined(SIMDE_X86_SSE2_NATIVE)
+#define SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES
+#endif
+#if !defined(SIMDE_X86_SSE3_NATIVE)
+#define SIMDE_X86_SSE3_ENABLE_NATIVE_ALIASES
+#endif
+#if !defined(SIMDE_X86_SSSE3_NATIVE)
+#define SIMDE_X86_SSSE3_ENABLE_NATIVE_ALIASES
+#endif
+#if !defined(SIMDE_X86_SSE4_1_NATIVE)
+#define SIMDE_X86_SSE4_1_ENABLE_NATIVE_ALIASES
+#endif
+#if !defined(SIMDE_X86_SSE4_2_NATIVE)
+#define SIMDE_X86_SSE4_2_ENABLE_NATIVE_ALIASES
+#endif
+#if !defined(SIMDE_X86_AVX_NATIVE)
+#define SIMDE_X86_AVX_ENABLE_NATIVE_ALIASES
+#endif
+#if !defined(SIMDE_X86_AVX2_NATIVE)
+#define SIMDE_X86_AVX2_ENABLE_NATIVE_ALIASES
+#endif
+#if !defined(SIMDE_X86_FMA_NATIVE)
+#define SIMDE_X86_FMA_ENABLE_NATIVE_ALIASES
+#endif
+#if !defined(SIMDE_X86_AVX512F_NATIVE)
+#define SIMDE_X86_AVX512F_ENABLE_NATIVE_ALIASES
+#endif
+#if !defined(SIMDE_X86_AVX512VL_NATIVE)
+#define SIMDE_X86_AVX512VL_ENABLE_NATIVE_ALIASES
+#endif
+#if !defined(SIMDE_X86_AVX512BW_NATIVE)
+#define SIMDE_X86_AVX512BW_ENABLE_NATIVE_ALIASES
+#endif
+#if !defined(SIMDE_X86_AVX512DQ_NATIVE)
+#define SIMDE_X86_AVX512DQ_ENABLE_NATIVE_ALIASES
+#endif
+#if !defined(SIMDE_X86_AVX512CD_NATIVE)
+#define SIMDE_X86_AVX512CD_ENABLE_NATIVE_ALIASES
+#endif
+#if !defined(SIMDE_X86_GFNI_NATIVE)
+#define SIMDE_X86_GFNI_ENABLE_NATIVE_ALIASES
+#endif
+#if !defined(SIMDE_X86_PCLMUL_NATIVE)
+#define SIMDE_X86_PCLMUL_ENABLE_NATIVE_ALIASES
+#endif
+#if !defined(SIMDE_X86_VPCLMULQDQ_NATIVE)
+#define SIMDE_X86_VPCLMULQDQ_ENABLE_NATIVE_ALIASES
+#endif
+
+#if !defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+#define SIMDE_ARM_NEON_A32V7_ENABLE_NATIVE_ALIASES
+#endif
+#if !defined(SIMDE_ARM_NEON_A32V8_NATIVE)
+#define SIMDE_ARM_NEON_A32V8_ENABLE_NATIVE_ALIASES
+#endif
+#if !defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+#define SIMDE_ARM_NEON_A64V8_ENABLE_NATIVE_ALIASES
+#endif
+#endif
+
+/* Are floating point values stored using IEEE 754?  Knowing
+ * this during preprocessing is a bit tricky, mostly because what
+ * we're curious about is how values are stored and not whether the
+ * implementation is fully conformant in terms of rounding, NaN
+ * handling, etc.
+ *
+ * For example, if you use -ffast-math or -Ofast on
+ * GCC or clang, IEEE 754 isn't strictly followed, therefore IEEE 754
+ * support is not advertised (__STDC_IEC_559__ is not defined).
+ *
+ * However, what we care about is whether it is safe to assume that
+ * floating point values are stored in IEEE 754 format, in which case
+ * we can provide faster implementations of some functions.
+ *
+ * Luckily every vaguely modern architecture I'm aware of uses IEEE 754,
+ * so we just assume IEEE 754 for now. There is a test which verifies
+ * this; if that test fails somewhere please let us know and we'll add
+ * an exception for that platform. Meanwhile, you can define
+ * SIMDE_NO_IEEE754_STORAGE. */
+#if !defined(SIMDE_IEEE754_STORAGE) && !defined(SIMDE_NO_IEE754_STORAGE)
+#define SIMDE_IEEE754_STORAGE
#endif
#endif /* !defined(SIMDE_FEATURES_H) */
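
SIMDE_NATURAL_VECTOR_SIZE and its _LE/_GE helpers let consumers choose a strategy based on the widest vector width the build can use natively (0 means no native SIMD was detected). A hedged sketch of how code might branch on it; the function and widths below are illustrative only:

    #include <stddef.h>
    #include "simde-features.h"

    static void process(float *data, size_t n)
    {
    #if SIMDE_NATURAL_VECTOR_SIZE_GE(256)
        /* AVX-class build: could process 8 floats per iteration here */
    #elif SIMDE_NATURAL_VECTOR_SIZE_GE(128)
        /* SSE/NEON/AltiVec/WASM build: 4 floats per iteration */
    #else
        /* no native SIMD detected: expect plain serial code */
    #endif
        for (size_t i = 0; i < n; i++) /* serial placeholder body */
            data[i] *= 2.0f;
    }
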
obs-studio-26.1.0.tar.xz/libobs/util/simde/simde-math.h -> obs-studio-26.1.1.tar.xz/libobs/util/simde/simde-math.h
Changed
#include "hedley.h"
#include "simde-features.h"
+#include <stdint.h>
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+#include <arm_neon.h>
+#endif
+
+HEDLEY_DIAGNOSTIC_PUSH
+SIMDE_DISABLE_UNWANTED_DIAGNOSTICS
+
+/* SLEEF support
+ * https://sleef.org/
+ *
+ * If you include <sleef.h> prior to including SIMDe, SIMDe will use
+ * SLEEF. You can also define SIMDE_MATH_SLEEF_ENABLE prior to
+ * including SIMDe to force the issue.
+ *
+ * Note that SLEEF does require linking to libsleef.
+ *
+ * By default, SIMDe will use the 1 ULP functions, but if you use
+ * SIMDE_ACCURACY_PREFERENCE of 0 we will use up to 4 ULP. This is
+ * only the case for the simde_math_* functions; for code in other
+ * SIMDe headers which calls SLEEF directly we may use functions with
+ * greater error if the API we're implementing is less precise (for
+ * example, SVML guarantees 4 ULP, so we will generally use the 3.5
+ * ULP functions from SLEEF). */
+#if !defined(SIMDE_MATH_SLEEF_DISABLE)
+#if defined(__SLEEF_H__)
+#define SIMDE_MATH_SLEEF_ENABLE
+#endif
+#endif
+
+#if defined(SIMDE_MATH_SLEEF_ENABLE) && !defined(__SLEEF_H__)
+HEDLEY_DIAGNOSTIC_PUSH
+SIMDE_DIAGNOSTIC_DISABLE_IGNORED_QUALIFIERS_
+#include <sleef.h>
+HEDLEY_DIAGNOSTIC_POP
+#endif
+
+#if defined(SIMDE_MATH_SLEEF_ENABLE) && defined(__SLEEF_H__)
+#if defined(SLEEF_VERSION_MAJOR)
+#define SIMDE_MATH_SLEEF_VERSION_CHECK(major, minor, patch) \
+ (HEDLEY_VERSION_ENCODE(SLEEF_VERSION_MAJOR, SLEEF_VERSION_MINOR, \
+ SLEEF_VERSION_PATCHLEVEL) >= \
+ HEDLEY_VERSION_ENCODE(major, minor, patch))
+#else
+#define SIMDE_MATH_SLEEF_VERSION_CHECK(major, minor, patch) \
+ (HEDLEY_VERSION_ENCODE(3, 0, 0) >= \
+ HEDLEY_VERSION_ENCODE(major, minor, patch))
+#endif
+#else
+#define SIMDE_MATH_SLEEF_VERSION_CHECK(major, minor, patch) (0)
+#endif
+
#if defined(__has_builtin)
#define SIMDE_MATH_BUILTIN_LIBM(func) __has_builtin(__builtin_##func)
#elif HEDLEY_INTEL_VERSION_CHECK(13, 0, 0) || \
#endif
#endif
-#if !defined(__cplusplus)
-/* If this is a problem we *might* be able to avoid including
- * <complex.h> on some compilers (gcc, clang, and others which
- * implement builtins like __builtin_cexpf). If you don't have
- * a <complex.h> please file an issue and we'll take a look. */
+/* Try to avoid including <complex> since it pulls in a *lot* of code. */
+#if HEDLEY_HAS_BUILTIN(__builtin_creal) || \
+ HEDLEY_GCC_VERSION_CHECK(4, 7, 0) || \
+ HEDLEY_INTEL_VERSION_CHECK(13, 0, 0)
+HEDLEY_DIAGNOSTIC_PUSH
+SIMDE_DIAGNOSTIC_DISABLE_C99_EXTENSIONS_
+typedef __complex__ float simde_cfloat32;
+typedef __complex__ double simde_cfloat64;
+HEDLEY_DIAGNOSTIC_POP
+#define SIMDE_MATH_CMPLX(x, y) \
+ (HEDLEY_STATIC_CAST(double, x) + \
+ HEDLEY_STATIC_CAST(double, y) * (__extension__ 1.0j))
+#define SIMDE_MATH_CMPLXF(x, y) \
+ (HEDLEY_STATIC_CAST(float, x) + \
+ HEDLEY_STATIC_CAST(float, y) * (__extension__ 1.0fj))
+
+#if !defined(simde_math_crealf)
+#define simde_math_crealf(z) __builtin_crealf(z)
+#endif
+#if !defined(simde_math_creal)
+#define simde_math_creal(z) __builtin_creal(z)
+#endif
+#if !defined(simde_math_cimagf)
+#define simde_math_cimagf(z) __builtin_cimagf(z)
+#endif
+#if !defined(simde_math_cimag)
+#define simde_math_cimag(z) __builtin_cimag(z)
+#endif
+#elif !defined(__cplusplus)
#include <complex.h>
#if !defined(HEDLEY_MSVC_VERSION)
typedef _Fcomplex simde_cfloat32;
typedef _Dcomplex simde_cfloat64;
#endif
-#if HEDLEY_HAS_BUILTIN(__builtin_complex) || \
- HEDLEY_GCC_VERSION_CHECK(4, 7, 0) || \
- HEDLEY_INTEL_VERSION_CHECK(13, 0, 0)
-#define SIMDE_MATH_CMPLX(x, y) __builtin_complex((double)(x), (double)(y))
-#define SIMDE_MATH_CMPLXF(x, y) __builtin_complex((float)(x), (float)(y))
-#elif defined(HEDLEY_MSVC_VERSION)
+
+#if defined(HEDLEY_MSVC_VERSION)
#define SIMDE_MATH_CMPLX(x, y) ((simde_cfloat64){(x), (y)})
#define SIMDE_MATH_CMPLXF(x, y) ((simde_cfloat32){(x), (y)})
#elif defined(CMPLX) && defined(CMPLXF)
#define SIMDE_MATH_CMPLX(x, y) CMPLX(x, y)
#define SIMDE_MATH_CMPLXF(x, y) CMPLXF(x, y)
#else
-/* CMPLX / CMPLXF are in C99, but these seem to be necessary in
- * some compilers that aren't even MSVC. */
#define SIMDE_MATH_CMPLX(x, y) \
(HEDLEY_STATIC_CAST(double, x) + HEDLEY_STATIC_CAST(double, y) * I)
#define SIMDE_MATH_CMPLXF(x, y) \
#endif
#if !defined(simde_math_creal)
-#if SIMDE_MATH_BUILTIN_LIBM(creal)
-#define simde_math_creal(z) __builtin_creal(z)
-#else
#define simde_math_creal(z) creal(z)
#endif
-#endif
-
#if !defined(simde_math_crealf)
-#if SIMDE_MATH_BUILTIN_LIBM(crealf)
-#define simde_math_crealf(z) __builtin_crealf(z)
-#else
#define simde_math_crealf(z) crealf(z)
#endif
-#endif
-
#if !defined(simde_math_cimag)
-#if SIMDE_MATH_BUILTIN_LIBM(cimag)
-#define simde_math_cimag(z) __builtin_cimag(z)
-#else
#define simde_math_cimag(z) cimag(z)
#endif
-#endif
-
#if !defined(simde_math_cimagf)
-#if SIMDE_MATH_BUILTIN_LIBM(cimagf)
-#define simde_math_cimagf(z) __builtin_cimagf(z)
-#else
#define simde_math_cimagf(z) cimagf(z)
#endif
-#endif
#else
-
HEDLEY_DIAGNOSTIC_PUSH
#if defined(HEDLEY_MSVC_VERSION)
#pragma warning(disable : 4530)
#endif
#endif
+#if !defined(SIMDE_MATH_PI_OVER_180)
+#define SIMDE_MATH_PI_OVER_180 \
+ 0.0174532925199432957692369076848861271344287188854172545609719144
+#endif
+
+#if !defined(SIMDE_MATH_PI_OVER_180F)
+#define SIMDE_MATH_PI_OVER_180F \
+ 0.0174532925199432957692369076848861271344287188854172545609719144f
+#endif
+
+#if !defined(SIMDE_MATH_180_OVER_PI)
+#define SIMDE_MATH_180_OVER_PI \
+ 57.295779513082320876798154814105170332405472466564321549160243861
+#endif
+
+#if !defined(SIMDE_MATH_180_OVER_PIF)
+#define SIMDE_MATH_180_OVER_PIF \
+ 57.295779513082320876798154814105170332405472466564321549160243861f
+#endif
+
#if !defined(SIMDE_MATH_FLT_MIN)
#if defined(FLT_MIN)
#define SIMDE_MATH_FLT_MIN FLT_MIN
#endif
#endif
+/*** Manipulation functions ***/
+
+#if !defined(simde_math_nextafter)
+#if (HEDLEY_HAS_BUILTIN(__builtin_nextafter) && \
+ !defined(HEDLEY_IBM_VERSION)) || \
+ HEDLEY_ARM_VERSION_CHECK(4, 1, 0) || \
+ HEDLEY_GCC_VERSION_CHECK(3, 4, 0) || \
+ HEDLEY_INTEL_VERSION_CHECK(13, 0, 0)
+#define simde_math_nextafter(x, y) __builtin_nextafter(x, y)
+#elif defined(SIMDE_MATH_HAVE_CMATH)
+#define simde_math_nextafter(x, y) std::nextafter(x, y)
+#elif defined(SIMDE_MATH_HAVE_MATH_H)
+#define simde_math_nextafter(x, y) nextafter(x, y)
+#endif
+#endif
+
+#if !defined(simde_math_nextafterf)
+#if (HEDLEY_HAS_BUILTIN(__builtin_nextafterf) && \
+ !defined(HEDLEY_IBM_VERSION)) || \
+ HEDLEY_ARM_VERSION_CHECK(4, 1, 0) || \
+ HEDLEY_GCC_VERSION_CHECK(3, 4, 0) || \
+ HEDLEY_INTEL_VERSION_CHECK(13, 0, 0)
+#define simde_math_nextafterf(x, y) __builtin_nextafterf(x, y)
+#elif defined(SIMDE_MATH_HAVE_CMATH)
+#define simde_math_nextafterf(x, y) std::nextafter(x, y)
+#elif defined(SIMDE_MATH_HAVE_MATH_H)
+#define simde_math_nextafterf(x, y) nextafterf(x, y)
+#endif
+#endif
+
/*** Functions from C99 ***/
#if !defined(simde_math_abs)
#endif
#endif
-#if !defined(simde_math_absf)
-#if SIMDE_MATH_BUILTIN_LIBM(absf)
-#define simde_math_absf(v) __builtin_absf(v)
+#if !defined(simde_math_fabsf)
+#if SIMDE_MATH_BUILTIN_LIBM(fabsf)
+#define simde_math_fabsf(v) __builtin_fabsf(v)
#elif defined(SIMDE_MATH_HAVE_CMATH)
-#define simde_math_absf(v) std::abs(v)
+#define simde_math_fabsf(v) std::abs(v)
#elif defined(SIMDE_MATH_HAVE_MATH_H)
-#define simde_math_absf(v) absf(v)
+#define simde_math_fabsf(v) fabsf(v)
#endif
#endif
#endif
#if !defined(simde_math_cosf)
-#if SIMDE_MATH_BUILTIN_LIBM(cosf)
+#if defined(SIMDE_MATH_SLEEF_ENABLE)
+#if SIMDE_ACCURACY_PREFERENCE < 1
+#define simde_math_cosf(v) Sleef_cosf_u35(v)
+#else
+#define simde_math_cosf(v) Sleef_cosf_u10(v)
+#endif
+#elif SIMDE_MATH_BUILTIN_LIBM(cosf)
#define simde_math_cosf(v) __builtin_cosf(v)
#elif defined(SIMDE_MATH_HAVE_CMATH)
#define simde_math_cosf(v) std::cos(v)
#endif
#endif
+#if !defined(simde_math_fma)
+#if SIMDE_MATH_BUILTIN_LIBM(fma)
+#define simde_math_fma(x, y, z) __builtin_fma(x, y, z)
+#elif defined(SIMDE_MATH_HAVE_CMATH)
+#define simde_math_fma(x, y, z) std::fma(x, y, z)
+#elif defined(SIMDE_MATH_HAVE_MATH_H)
+#define simde_math_fma(x, y, z) fma(x, y, z)
+#endif
+#endif
+
+#if !defined(simde_math_fmaf)
+#if SIMDE_MATH_BUILTIN_LIBM(fmaf)
+#define simde_math_fmaf(x, y, z) __builtin_fmaf(x, y, z)
+#elif defined(SIMDE_MATH_HAVE_CMATH)
+#define simde_math_fmaf(x, y, z) std::fma(x, y, z)
+#elif defined(SIMDE_MATH_HAVE_MATH_H)
+#define simde_math_fmaf(x, y, z) fmaf(x, y, z)
+#endif
+#endif
+
+#if !defined(simde_math_fmax)
+#if SIMDE_MATH_BUILTIN_LIBM(fmax)
+#define simde_math_fmax(x, y) __builtin_fmax(x, y)
+#elif defined(SIMDE_MATH_HAVE_CMATH)
+#define simde_math_fmax(x, y) std::fmax(x, y)
+#elif defined(SIMDE_MATH_HAVE_MATH_H)
+#define simde_math_fmax(x, y) fmax(x, y)
+#endif
+#endif
+
+#if !defined(simde_math_fmaxf)
+#if SIMDE_MATH_BUILTIN_LIBM(fmaxf)
+#define simde_math_fmaxf(x, y) __builtin_fmaxf(x, y)
+#elif defined(SIMDE_MATH_HAVE_CMATH)
+#define simde_math_fmaxf(x, y) std::fmax(x, y)
+#elif defined(SIMDE_MATH_HAVE_MATH_H)
+#define simde_math_fmaxf(x, y) fmaxf(x, y)
+#endif
+#endif
+
#if !defined(simde_math_hypot)
#if SIMDE_MATH_BUILTIN_LIBM(hypot)
#define simde_math_hypot(y, x) __builtin_hypot(y, x)
#endif
#endif
+#if !defined(simde_math_modf)
+#if SIMDE_MATH_BUILTIN_LIBM(modf)
+#define simde_math_modf(x, iptr) __builtin_modf(x, iptr)
+#elif defined(SIMDE_MATH_HAVE_CMATH)
+#define simde_math_modf(x, iptr) std::modf(x, iptr)
+#elif defined(SIMDE_MATH_HAVE_MATH_H)
+#define simde_math_modf(x, iptr) modf(x, iptr)
+#endif
+#endif
+
+#if !defined(simde_math_modff)
+#if SIMDE_MATH_BUILTIN_LIBM(modff)
+#define simde_math_modff(x, iptr) __builtin_modff(x, iptr)
+#elif defined(SIMDE_MATH_HAVE_CMATH)
+#define simde_math_modff(x, iptr) std::modf(x, iptr)
+#elif defined(SIMDE_MATH_HAVE_MATH_H)
+#define simde_math_modff(x, iptr) modff(x, iptr)
+#endif
+#endif
+
#if !defined(simde_math_nearbyint)
#if SIMDE_MATH_BUILTIN_LIBM(nearbyint)
#define simde_math_nearbyint(v) __builtin_nearbyint(v)
#endif
#endif
+#if !defined(simde_math_roundeven)
+#if HEDLEY_HAS_BUILTIN(__builtin_roundeven) || \
+ HEDLEY_GCC_VERSION_CHECK(10, 0, 0)
+#define simde_math_roundeven(v) __builtin_roundeven(v)
+#elif defined(simde_math_round) && defined(simde_math_fabs)
+static HEDLEY_INLINE double simde_math_roundeven(double v)
+{
+ double rounded = simde_math_round(v);
+ double diff = rounded - v;
+ if (HEDLEY_UNLIKELY(simde_math_fabs(diff) == 0.5) &&
+ (HEDLEY_STATIC_CAST(int64_t, rounded) & 1)) {
+ rounded = v - diff;
+ }
+ return rounded;
+}
+#define simde_math_roundeven simde_math_roundeven
+#endif
+#endif
+
+#if !defined(simde_math_roundevenf)
+#if HEDLEY_HAS_BUILTIN(__builtin_roundevenf) || \
+ HEDLEY_GCC_VERSION_CHECK(10, 0, 0)
+#define simde_math_roundevenf(v) __builtin_roundevenf(v)
+#elif defined(simde_math_roundf) && defined(simde_math_fabsf)
+static HEDLEY_INLINE float simde_math_roundevenf(float v)
+{
+ float rounded = simde_math_roundf(v);
+ float diff = rounded - v;
+ if (HEDLEY_UNLIKELY(simde_math_fabsf(diff) == 0.5f) &&
+ (HEDLEY_STATIC_CAST(int32_t, rounded) & 1)) {
+ rounded = v - diff;
+ }
+ return rounded;
+}
+#define simde_math_roundevenf simde_math_roundevenf
+#endif
+#endif
+
#if !defined(simde_math_sin)
#if SIMDE_MATH_BUILTIN_LIBM(sin)
#define simde_math_sin(v) __builtin_sin(v)
/*** Complex functions ***/
#if !defined(simde_math_cexp)
-#if defined(__cplusplus)
-#define simde_math_cexp(v) std::cexp(v)
-#elif SIMDE_MATH_BUILTIN_LIBM(cexp)
+#if SIMDE_MATH_BUILTIN_LIBM(cexp)
#define simde_math_cexp(v) __builtin_cexp(v)
+#elif defined(__cplusplus)
+#define simde_math_cexp(v) std::cexp(v)
#elif defined(SIMDE_MATH_HAVE_MATH_H)
#define simde_math_cexp(v) cexp(v)
#endif
#endif
#if !defined(simde_math_cexpf)
-#if defined(__cplusplus)
-#define simde_math_cexpf(v) std::exp(v)
-#elif SIMDE_MATH_BUILTIN_LIBM(cexpf)
+#if SIMDE_MATH_BUILTIN_LIBM(cexpf)
#define simde_math_cexpf(v) __builtin_cexpf(v)
+#elif defined(__cplusplus)
+#define simde_math_cexpf(v) std::exp(v)
#elif defined(SIMDE_MATH_HAVE_MATH_H)
#define simde_math_cexpf(v) cexpf(v)
#endif
static HEDLEY_INLINE double simde_math_rad2deg(double radians)
{
- return radians * (180.0 / SIMDE_MATH_PI);
+ return radians * SIMDE_MATH_180_OVER_PI;
}
static HEDLEY_INLINE float simde_math_rad2degf(float radians)
{
- return radians * (180.0f / SIMDE_MATH_PIF);
+ return radians * SIMDE_MATH_180_OVER_PIF;
}
static HEDLEY_INLINE double simde_math_deg2rad(double degrees)
{
- return degrees * (SIMDE_MATH_PI / 180.0);
+ return degrees * SIMDE_MATH_PI_OVER_180;
}
static HEDLEY_INLINE float simde_math_deg2radf(float degrees)
{
- return degrees * (SIMDE_MATH_PIF / 180.0f);
+ return degrees * (SIMDE_MATH_PI_OVER_180F);
}
+/*** Saturated arithmetic ***/
+
+static HEDLEY_INLINE int8_t simde_math_adds_i8(int8_t a, int8_t b)
+{
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ return vqaddb_s8(a, b);
+#else
+ uint8_t a_ = HEDLEY_STATIC_CAST(uint8_t, a);
+ uint8_t b_ = HEDLEY_STATIC_CAST(uint8_t, b);
+ uint8_t r_ = a_ + b_;
+
+ a_ = (a_ >> ((8 * sizeof(r_)) - 1)) + INT8_MAX;
+ if (HEDLEY_STATIC_CAST(int8_t, ((a_ ^ b_) | ~(b_ ^ r_))) >= 0) {
+ r_ = a_;
+ }
+
+ return HEDLEY_STATIC_CAST(int8_t, r_);
+#endif
+}
+
+static HEDLEY_INLINE int16_t simde_math_adds_i16(int16_t a, int16_t b)
+{
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ return vqaddh_s16(a, b);
+#else
+ uint16_t a_ = HEDLEY_STATIC_CAST(uint16_t, a);
+ uint16_t b_ = HEDLEY_STATIC_CAST(uint16_t, b);
+ uint16_t r_ = a_ + b_;
+
+ a_ = (a_ >> ((8 * sizeof(r_)) - 1)) + INT16_MAX;
+ if (HEDLEY_STATIC_CAST(int16_t, ((a_ ^ b_) | ~(b_ ^ r_))) >= 0) {
+ r_ = a_;
+ }
+
+ return HEDLEY_STATIC_CAST(int16_t, r_);
+#endif
+}
+
+static HEDLEY_INLINE int32_t simde_math_adds_i32(int32_t a, int32_t b)
+{
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ return vqadds_s32(a, b);
+#else
+ uint32_t a_ = HEDLEY_STATIC_CAST(uint32_t, a);
+ uint32_t b_ = HEDLEY_STATIC_CAST(uint32_t, b);
+ uint32_t r_ = a_ + b_;
+
+ a_ = (a_ >> ((8 * sizeof(r_)) - 1)) + INT32_MAX;
+ if (HEDLEY_STATIC_CAST(int32_t, ((a_ ^ b_) | ~(b_ ^ r_))) >= 0) {
+ r_ = a_;
+ }
+
+ return HEDLEY_STATIC_CAST(int32_t, r_);
+#endif
+}
+
+static HEDLEY_INLINE int64_t simde_math_adds_i64(int64_t a, int64_t b)
+{
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ return vqaddd_s64(a, b);
+#else
+ uint64_t a_ = HEDLEY_STATIC_CAST(uint64_t, a);
+ uint64_t b_ = HEDLEY_STATIC_CAST(uint64_t, b);
+ uint64_t r_ = a_ + b_;
+
+ a_ = (a_ >> ((8 * sizeof(r_)) - 1)) + INT64_MAX;
+ if (HEDLEY_STATIC_CAST(int64_t, ((a_ ^ b_) | ~(b_ ^ r_))) >= 0) {
+ r_ = a_;
+ }
+
+ return HEDLEY_STATIC_CAST(int64_t, r_);
+#endif
+}
+
+static HEDLEY_INLINE uint8_t simde_math_adds_u8(uint8_t a, uint8_t b)
+{
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ return vqaddb_u8(a, b);
+#else
+ uint8_t r = a + b;
+ r |= -(r < a);
+ return r;
+#endif
+}
+
+static HEDLEY_INLINE uint16_t simde_math_adds_u16(uint16_t a, uint16_t b)
+{
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ return vqaddh_u16(a, b);
+#else
+ uint16_t r = a + b;
+ r |= -(r < a);
+ return r;
+#endif
+}
+
+static HEDLEY_INLINE uint32_t simde_math_adds_u32(uint32_t a, uint32_t b)
+{
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ return vqadds_u32(a, b);
+#else
+ uint32_t r = a + b;
+ r |= -(r < a);
+ return r;
+#endif
+}
+
+static HEDLEY_INLINE uint64_t simde_math_adds_u64(uint64_t a, uint64_t b)
+{
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ return vqaddd_u64(a, b);
+#else
+ uint64_t r = a + b;
+ r |= -(r < a);
+ return r;
+#endif
+}
+
+static HEDLEY_INLINE int8_t simde_math_subs_i8(int8_t a, int8_t b)
+{
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ return vqsubb_s8(a, b);
+#else
+ uint8_t a_ = HEDLEY_STATIC_CAST(uint8_t, a);
+ uint8_t b_ = HEDLEY_STATIC_CAST(uint8_t, b);
+ uint8_t r_ = a_ - b_;
+
+ a_ = (a_ >> 7) + INT8_MAX;
+
+ if (HEDLEY_STATIC_CAST(int8_t, (a_ ^ b_) & (a_ ^ r_)) < 0) {
+ r_ = a_;
+ }
+
+ return HEDLEY_STATIC_CAST(int8_t, r_);
+#endif
+}
+
+static HEDLEY_INLINE int16_t simde_math_subs_i16(int16_t a, int16_t b)
+{
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ return vqsubh_s16(a, b);
+#else
+ uint16_t a_ = HEDLEY_STATIC_CAST(uint16_t, a);
+ uint16_t b_ = HEDLEY_STATIC_CAST(uint16_t, b);
+ uint16_t r_ = a_ - b_;
+
+ a_ = (a_ >> 15) + INT16_MAX;
+
+ if (HEDLEY_STATIC_CAST(int16_t, (a_ ^ b_) & (a_ ^ r_)) < 0) {
+ r_ = a_;
+ }
+
+ return HEDLEY_STATIC_CAST(int16_t, r_);
+#endif
+}
+
+static HEDLEY_INLINE int32_t simde_math_subs_i32(int32_t a, int32_t b)
+{
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ return vqsubs_s32(a, b);
+#else
+ uint32_t a_ = HEDLEY_STATIC_CAST(uint32_t, a);
+ uint32_t b_ = HEDLEY_STATIC_CAST(uint32_t, b);
+ uint32_t r_ = a_ - b_;
+
+ a_ = (a_ >> 31) + INT32_MAX;
+
+ if (HEDLEY_STATIC_CAST(int32_t, (a_ ^ b_) & (a_ ^ r_)) < 0) {
+ r_ = a_;
+ }
+
+ return HEDLEY_STATIC_CAST(int32_t, r_);
+#endif
+}
+
+static HEDLEY_INLINE int64_t simde_math_subs_i64(int64_t a, int64_t b)
+{
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ return vqsubd_s64(a, b);
+#else
+ uint64_t a_ = HEDLEY_STATIC_CAST(uint64_t, a);
+ uint64_t b_ = HEDLEY_STATIC_CAST(uint64_t, b);
+ uint64_t r_ = a_ - b_;
+
+ a_ = (a_ >> 63) + INT64_MAX;
+
+ if (HEDLEY_STATIC_CAST(int64_t, (a_ ^ b_) & (a_ ^ r_)) < 0) {
+ r_ = a_;
+ }
+
+ return HEDLEY_STATIC_CAST(int64_t, r_);
+#endif
+}
+
+static HEDLEY_INLINE uint8_t simde_math_subs_u8(uint8_t a, uint8_t b)
+{
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ return vqsubb_u8(a, b);
+#else
+ uint8_t res = a - b;
+ res &= -(res <= a);
+ return res;
+#endif
+}
+
+static HEDLEY_INLINE uint16_t simde_math_subs_u16(uint16_t a, uint16_t b)
+{
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ return vqsubh_u16(a, b);
+#else
+ uint16_t res = a - b;
+ res &= -(res <= a);
+ return res;
+#endif
+}
+
+static HEDLEY_INLINE uint32_t simde_math_subs_u32(uint32_t a, uint32_t b)
+{
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ return vqsubs_u32(a, b);
+#else
+ uint32_t res = a - b;
+ res &= -(res <= a);
+ return res;
+#endif
+}
+
+static HEDLEY_INLINE uint64_t simde_math_subs_u64(uint64_t a, uint64_t b)
+{
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ return vqsubd_u64(a, b);
+#else
+ uint64_t res = a - b;
+ res &= -(res <= a);
+ return res;
+#endif
+}
+
+HEDLEY_DIAGNOSTIC_POP
+
#endif /* !defined(SIMDE_MATH_H) */
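
The new saturated-arithmetic helpers clamp to the type's limits instead of wrapping, matching the scalar AArch64 vqadd/vqsub intrinsics they map to when those are available. A small hedged sketch of the portable behavior, assuming simde-math.h is on the include path:

    #include <stdio.h>
    #include <stdint.h>
    #include "simde-math.h"

    int main(void)
    {
        /* Signed saturating add/subtract clamp at INT8_MAX / INT8_MIN. */
        printf("%d\n", simde_math_adds_i8(INT8_MAX, 1)); /* 127, not -128 */
        printf("%d\n", simde_math_subs_i8(INT8_MIN, 1)); /* -128, not 127 */

        /* Unsigned saturating subtract clamps at zero. */
        printf("%d\n", (int)simde_math_subs_u8(3, 7));   /* 0, not 252 */
        return 0;
    }
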
obs-studio-26.1.1.tar.xz/libobs/util/simde/x86
Added
+(directory)
obs-studio-26.1.1.tar.xz/libobs/util/simde/x86/mmx.h
Added
+/* SPDX-License-Identifier: MIT
+ *
+ * Permission is hereby granted, free of charge, to any person
+ * obtaining a copy of this software and associated documentation
+ * files (the "Software"), to deal in the Software without
+ * restriction, including without limitation the rights to use, copy,
+ * modify, merge, publish, distribute, sublicense, and/or sell copies
+ * of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be
+ * included in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ *
+ * Copyright:
+ * 2017-2020 Evan Nemerson <evan@nemerson.com>
+ */
+
+#if !defined(SIMDE_X86_MMX_H)
+#define SIMDE_X86_MMX_H
+
+#include "../simde-common.h"
+
+HEDLEY_DIAGNOSTIC_PUSH
+SIMDE_DISABLE_UNWANTED_DIAGNOSTICS
+
+#if defined(SIMDE_X86_MMX_NATIVE)
+#define SIMDE_X86_MMX_USE_NATIVE_TYPE
+#elif defined(SIMDE_X86_SSE_NATIVE)
+#define SIMDE_X86_MMX_USE_NATIVE_TYPE
+#endif
+
+#if defined(SIMDE_X86_MMX_USE_NATIVE_TYPE)
+#include <mmintrin.h>
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+#include <arm_neon.h>
+#elif defined(SIMDE_MIPS_LOONGSON_MMI_NATIVE)
+#include <loongson-mmiintrin.h>
+#endif
+
+#include <stdint.h>
+#include <limits.h>
+
+SIMDE_BEGIN_DECLS_
+
+typedef union {
+#if defined(SIMDE_VECTOR_SUBSCRIPT)
+ SIMDE_ALIGN_TO_8 int8_t i8 SIMDE_VECTOR(8) SIMDE_MAY_ALIAS;
+ SIMDE_ALIGN_TO_8 int16_t i16 SIMDE_VECTOR(8) SIMDE_MAY_ALIAS;
+ SIMDE_ALIGN_TO_8 int32_t i32 SIMDE_VECTOR(8) SIMDE_MAY_ALIAS;
+ SIMDE_ALIGN_TO_8 int64_t i64 SIMDE_VECTOR(8) SIMDE_MAY_ALIAS;
+ SIMDE_ALIGN_TO_8 uint8_t u8 SIMDE_VECTOR(8) SIMDE_MAY_ALIAS;
+ SIMDE_ALIGN_TO_8 uint16_t u16 SIMDE_VECTOR(8) SIMDE_MAY_ALIAS;
+ SIMDE_ALIGN_TO_8 uint32_t u32 SIMDE_VECTOR(8) SIMDE_MAY_ALIAS;
+ SIMDE_ALIGN_TO_8 uint64_t u64 SIMDE_VECTOR(8) SIMDE_MAY_ALIAS;
+ SIMDE_ALIGN_TO_8 simde_float32 f32 SIMDE_VECTOR(8) SIMDE_MAY_ALIAS;
+ SIMDE_ALIGN_TO_8 int_fast32_t i32f SIMDE_VECTOR(8) SIMDE_MAY_ALIAS;
+ SIMDE_ALIGN_TO_8 uint_fast32_t u32f SIMDE_VECTOR(8) SIMDE_MAY_ALIAS;
+#else
+ SIMDE_ALIGN_TO_8 int8_t i8[8];
+ SIMDE_ALIGN_TO_8 int16_t i16[4];
+ SIMDE_ALIGN_TO_8 int32_t i32[2];
+ SIMDE_ALIGN_TO_8 int64_t i64[1];
+ SIMDE_ALIGN_TO_8 uint8_t u8[8];
+ SIMDE_ALIGN_TO_8 uint16_t u16[4];
+ SIMDE_ALIGN_TO_8 uint32_t u32[2];
+ SIMDE_ALIGN_TO_8 uint64_t u64[1];
+ SIMDE_ALIGN_TO_8 simde_float32 f32[2];
+ SIMDE_ALIGN_TO_8 int_fast32_t i32f[8 / sizeof(int_fast32_t)];
+ SIMDE_ALIGN_TO_8 uint_fast32_t u32f[8 / sizeof(uint_fast32_t)];
+#endif
+
+#if defined(SIMDE_X86_MMX_USE_NATIVE_TYPE)
+ __m64 n;
+#endif
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ int8x8_t neon_i8;
+ int16x4_t neon_i16;
+ int32x2_t neon_i32;
+ int64x1_t neon_i64;
+ uint8x8_t neon_u8;
+ uint16x4_t neon_u16;
+ uint32x2_t neon_u32;
+ uint64x1_t neon_u64;
+ float32x2_t neon_f32;
+#endif
+#if defined(SIMDE_MIPS_LOONGSON_MMI_NATIVE)
+ int8x8_t mmi_i8;
+ int16x4_t mmi_i16;
+ int32x2_t mmi_i32;
+ int64_t mmi_i64;
+ uint8x8_t mmi_u8;
+ uint16x4_t mmi_u16;
+ uint32x2_t mmi_u32;
+ uint64_t mmi_u64;
+#endif
+} simde__m64_private;
+
+#if defined(SIMDE_X86_MMX_USE_NATIVE_TYPE)
+typedef __m64 simde__m64;
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+typedef int32x2_t simde__m64;
+#elif defined(SIMDE_MIPS_LOONGSON_MMI_NATIVE)
+typedef int32x2_t simde__m64;
+#elif defined(SIMDE_VECTOR_SUBSCRIPT)
+typedef int32_t simde__m64 SIMDE_ALIGN_TO_8 SIMDE_VECTOR(8) SIMDE_MAY_ALIAS;
+#else
+typedef simde__m64_private simde__m64;
+#endif
+
+#if !defined(SIMDE_X86_MMX_USE_NATIVE_TYPE) && \
+ defined(SIMDE_ENABLE_NATIVE_ALIASES)
+#define SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES
+typedef simde__m64 __m64;
+#endif
+
+HEDLEY_STATIC_ASSERT(8 == sizeof(simde__m64), "__m64 size incorrect");
+HEDLEY_STATIC_ASSERT(8 == sizeof(simde__m64_private), "__m64 size incorrect");
+#if defined(SIMDE_CHECK_ALIGNMENT) && defined(SIMDE_ALIGN_OF)
+HEDLEY_STATIC_ASSERT(SIMDE_ALIGN_OF(simde__m64) == 8,
+ "simde__m64 is not 8-byte aligned");
+HEDLEY_STATIC_ASSERT(SIMDE_ALIGN_OF(simde__m64_private) == 8,
+ "simde__m64_private is not 8-byte aligned");
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde__m64_from_private(simde__m64_private v)
+{
+ simde__m64 r;
+ simde_memcpy(&r, &v, sizeof(r));
+ return r;
+}
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64_private simde__m64_to_private(simde__m64 v)
+{
+ simde__m64_private r;
+ simde_memcpy(&r, &v, sizeof(r));
+ return r;
+}
+
+#define SIMDE_X86_GENERATE_CONVERSION_FUNCTION(simde_type, source_type, isax, \
+ fragment) \
+ SIMDE_FUNCTION_ATTRIBUTES \
+ simde__##simde_type simde__##simde_type##_from_##isax##_##fragment( \
+ source_type value) \
+ { \
+ simde__##simde_type##_private r_; \
+ r_.isax##_##fragment = value; \
+ return simde__##simde_type##_from_private(r_); \
+ } \
+ \
+ SIMDE_FUNCTION_ATTRIBUTES \
+ source_type simde__##simde_type##_to_##isax##_##fragment( \
+ simde__##simde_type value) \
+ { \
+ simde__##simde_type##_private r_ = \
+ simde__##simde_type##_to_private(value); \
+ return r_.isax##_##fragment; \
+ }
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m64, int8x8_t, neon, i8)
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m64, int16x4_t, neon, i16)
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m64, int32x2_t, neon, i32)
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m64, int64x1_t, neon, i64)
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m64, uint8x8_t, neon, u8)
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m64, uint16x4_t, neon, u16)
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m64, uint32x2_t, neon, u32)
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m64, uint64x1_t, neon, u64)
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m64, float32x2_t, neon, f32)
+#endif /* defined(SIMDE_ARM_NEON_A32V7_NATIVE) */
+
+#if defined(SIMDE_MIPS_LOONGSON_MMI_NATIVE)
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m64, int8x8_t, mmi, i8)
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m64, int16x4_t, mmi, i16)
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m64, int32x2_t, mmi, i32)
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m64, int64_t, mmi, i64)
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m64, uint8x8_t, mmi, u8)
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m64, uint16x4_t, mmi, u16)
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m64, uint32x2_t, mmi, u32)
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m64, uint64_t, mmi, u64)
+#endif /* defined(SIMDE_MIPS_LOONGSON_MMI_NATIVE) */
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_add_pi8(simde__m64 a, simde__m64 b)
+{
+#if defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_add_pi8(a, b);
+#else
+ simde__m64_private r_;
+ simde__m64_private a_ = simde__m64_to_private(a);
+ simde__m64_private b_ = simde__m64_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i8 = vadd_s8(a_.neon_i8, b_.neon_i8);
+#elif defined(SIMDE_MIPS_LOONGSON_MMI_NATIVE)
+ r_.mmi_i8 = paddb_s(a_.mmi_i8, b_.mmi_i8);
+#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
+ r_.i8 = a_.i8 + b_.i8;
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i8) / sizeof(r_.i8[0])); i++) {
+ r_.i8[i] = a_.i8[i] + b_.i8[i];
+ }
+#endif
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#define simde_m_paddb(a, b) simde_mm_add_pi8(a, b)
+#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
+#define _mm_add_pi8(a, b) simde_mm_add_pi8(a, b)
+#define _m_paddb(a, b) simde_m_paddb(a, b)
+#endif
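
Each wrapper in mmx.h follows the same pattern: call the native intrinsic when SIMDE_X86_MMX_NATIVE is defined, otherwise route through the simde__m64_private union and operate per lane. A hedged usage sketch of simde_mm_add_pi8, built only from helpers visible in this diff:

    #include <stdio.h>
    #include <stdint.h>
    #include "x86/mmx.h"

    int main(void)
    {
        simde__m64_private ap, bp, rp;
        for (int i = 0; i < 8; i++) {
            ap.i8[i] = (int8_t)i;        /* 0..7  */
            bp.i8[i] = (int8_t)(10 * i); /* 0..70 */
        }

        simde__m64 r = simde_mm_add_pi8(simde__m64_from_private(ap),
                                        simde__m64_from_private(bp));
        rp = simde__m64_to_private(r);

        for (int i = 0; i < 8; i++)
            printf("%d ", rp.i8[i]); /* 0 11 22 33 44 55 66 77 */
        printf("\n");
        return 0;
    }
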
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_add_pi16(simde__m64 a, simde__m64 b)
+{
+#if defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_add_pi16(a, b);
+#else
+ simde__m64_private r_;
+ simde__m64_private a_ = simde__m64_to_private(a);
+ simde__m64_private b_ = simde__m64_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i16 = vadd_s16(a_.neon_i16, b_.neon_i16);
+#elif defined(SIMDE_MIPS_LOONGSON_MMI_NATIVE)
+ r_.mmi_i16 = paddh_s(a_.mmi_i16, b_.mmi_i16);
+#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
+ r_.i16 = a_.i16 + b_.i16;
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i16) / sizeof(r_.i16[0])); i++) {
+ r_.i16[i] = a_.i16[i] + b_.i16[i];
+ }
+#endif
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#define simde_m_paddw(a, b) simde_mm_add_pi16(a, b)
+#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
+#define _mm_add_pi16(a, b) simde_mm_add_pi16(a, b)
+#define _m_paddw(a, b) simde_mm_add_pi16(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_add_pi32(simde__m64 a, simde__m64 b)
+{
+#if defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_add_pi32(a, b);
+#else
+ simde__m64_private r_;
+ simde__m64_private a_ = simde__m64_to_private(a);
+ simde__m64_private b_ = simde__m64_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i32 = vadd_s32(a_.neon_i32, b_.neon_i32);
+#elif defined(SIMDE_MIPS_LOONGSON_MMI_NATIVE)
+ r_.mmi_i32 = paddw_s(a_.mmi_i32, b_.mmi_i32);
+#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
+ r_.i32 = a_.i32 + b_.i32;
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i32) / sizeof(r_.i32[0])); i++) {
+ r_.i32[i] = a_.i32[i] + b_.i32[i];
+ }
+#endif
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#define simde_m_paddd(a, b) simde_mm_add_pi32(a, b)
+#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
+#define _mm_add_pi32(a, b) simde_mm_add_pi32(a, b)
+#define _m_paddd(a, b) simde_mm_add_pi32(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_adds_pi8(simde__m64 a, simde__m64 b)
+{
+#if defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_adds_pi8(a, b);
+#else
+	simde__m64_private r_;
+	simde__m64_private a_ = simde__m64_to_private(a);
+	simde__m64_private b_ = simde__m64_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i8 = vqadd_s8(a_.neon_i8, b_.neon_i8);
+#elif defined(SIMDE_MIPS_LOONGSON_MMI_NATIVE)
+ r_.mmi_i8 = paddsb(a_.mmi_i8, b_.mmi_i8);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i8) / sizeof(r_.i8[0])); i++) {
+ if ((((b_.i8[i]) > 0) &&
+ ((a_.i8[i]) > (INT8_MAX - (b_.i8[i]))))) {
+ r_.i8[i] = INT8_MAX;
+ } else if ((((b_.i8[i]) < 0) &&
+ ((a_.i8[i]) < (INT8_MIN - (b_.i8[i]))))) {
+ r_.i8[i] = INT8_MIN;
+ } else {
+ r_.i8[i] = (a_.i8[i]) + (b_.i8[i]);
+ }
+ }
+#endif
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#define simde_m_paddsb(a, b) simde_mm_adds_pi8(a, b)
+#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
+#define _mm_adds_pi8(a, b) simde_mm_adds_pi8(a, b)
+#define _m_paddsb(a, b) simde_mm_adds_pi8(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_adds_pu8(simde__m64 a, simde__m64 b)
+{
+#if defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_adds_pu8(a, b);
+#else
+ simde__m64_private r_;
+ simde__m64_private a_ = simde__m64_to_private(a);
+ simde__m64_private b_ = simde__m64_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_u8 = vqadd_u8(a_.neon_u8, b_.neon_u8);
+#elif defined(SIMDE_MIPS_LOONGSON_MMI_NATIVE)
+ r_.mmi_u8 = paddusb(a_.mmi_u8, b_.mmi_u8);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.u8) / sizeof(r_.u8[0])); i++) {
+ const uint_fast16_t x =
+ HEDLEY_STATIC_CAST(uint_fast16_t, a_.u8[i]) +
+ HEDLEY_STATIC_CAST(uint_fast16_t, b_.u8[i]);
+ if (x > UINT8_MAX)
+ r_.u8[i] = UINT8_MAX;
+ else
+ r_.u8[i] = HEDLEY_STATIC_CAST(uint8_t, x);
+ }
+#endif
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#define simde_m_paddusb(a, b) simde_mm_adds_pu8(a, b)
+#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
+#define _mm_adds_pu8(a, b) simde_mm_adds_pu8(a, b)
+#define _m_paddusb(a, b) simde_mm_adds_pu8(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_adds_pi16(simde__m64 a, simde__m64 b)
+{
+#if defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_adds_pi16(a, b);
+#else
+ simde__m64_private r_;
+ simde__m64_private a_ = simde__m64_to_private(a);
+ simde__m64_private b_ = simde__m64_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i16 = vqadd_s16(a_.neon_i16, b_.neon_i16);
+#elif defined(SIMDE_MIPS_LOONGSON_MMI_NATIVE)
+ r_.mmi_i16 = paddsh(a_.mmi_i16, b_.mmi_i16);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i16) / sizeof(r_.i16[0])); i++) {
+ if ((((b_.i16[i]) > 0) &&
+ ((a_.i16[i]) > (INT16_MAX - (b_.i16[i]))))) {
+ r_.i16[i] = INT16_MAX;
+ } else if ((((b_.i16[i]) < 0) &&
+ ((a_.i16[i]) < (SHRT_MIN - (b_.i16[i]))))) {
+ r_.i16[i] = SHRT_MIN;
+ } else {
+ r_.i16[i] = (a_.i16[i]) + (b_.i16[i]);
+ }
+ }
+#endif
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#define simde_m_paddsw(a, b) simde_mm_adds_pi16(a, b)
+#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
+#define _mm_adds_pi16(a, b) simde_mm_adds_pi16(a, b)
+#define _m_paddsw(a, b) simde_mm_adds_pi16(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_adds_pu16(simde__m64 a, simde__m64 b)
+{
+#if defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_adds_pu16(a, b);
+#else
+ simde__m64_private r_;
+ simde__m64_private a_ = simde__m64_to_private(a);
+ simde__m64_private b_ = simde__m64_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_u16 = vqadd_u16(a_.neon_u16, b_.neon_u16);
+#elif defined(SIMDE_MIPS_LOONGSON_MMI_NATIVE)
+ r_.mmi_u16 = paddush(a_.mmi_u16, b_.mmi_u16);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i16) / sizeof(r_.i16[0])); i++) {
+ const uint32_t x = a_.u16[i] + b_.u16[i];
+ if (x > UINT16_MAX)
+ r_.u16[i] = UINT16_MAX;
+ else
+ r_.u16[i] = HEDLEY_STATIC_CAST(uint16_t, x);
+ }
+#endif
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#define simde_m_paddusw(a, b) simde_mm_adds_pu16(a, b)
+#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
+#define _mm_adds_pu16(a, b) simde_mm_adds_pu16(a, b)
+#define _m_paddusw(a, b) simde_mm_adds_pu16(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_and_si64(simde__m64 a, simde__m64 b)
+{
+#if defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_and_si64(a, b);
+#else
+ simde__m64_private r_;
+ simde__m64_private a_ = simde__m64_to_private(a);
+ simde__m64_private b_ = simde__m64_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i32 = vand_s32(a_.neon_i32, b_.neon_i32);
+#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
+ r_.i64 = a_.i64 & b_.i64;
+#else
+ r_.i64[0] = a_.i64[0] & b_.i64[0];
+#endif
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#define simde_m_pand(a, b) simde_mm_and_si64(a, b)
+#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
+#define _mm_and_si64(a, b) simde_mm_and_si64(a, b)
+#define _m_pand(a, b) simde_mm_and_si64(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_andnot_si64(simde__m64 a, simde__m64 b)
+{
+#if defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_andnot_si64(a, b);
+#else
+ simde__m64_private r_;
+ simde__m64_private a_ = simde__m64_to_private(a);
+ simde__m64_private b_ = simde__m64_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i32 = vbic_s32(b_.neon_i32, a_.neon_i32);
+#elif defined(SIMDE_MIPS_LOONGSON_MMI_NATIVE)
+ r_.mmi_i32 = pandn_sw(a_.mmi_i32, b_.mmi_i32);
+#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
+ r_.i32f = ~a_.i32f & b_.i32f;
+#else
+ r_.u64[0] = (~(a_.u64[0])) & (b_.u64[0]);
+#endif
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#define simde_m_pandn(a, b) simde_mm_andnot_si64(a, b)
+#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
+#define _mm_andnot_si64(a, b) simde_mm_andnot_si64(a, b)
+#define _m_pandn(a, b) simde_mm_andnot_si64(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_cmpeq_pi8(simde__m64 a, simde__m64 b)
+{
+#if defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_cmpeq_pi8(a, b);
+#else
+ simde__m64_private r_;
+ simde__m64_private a_ = simde__m64_to_private(a);
+ simde__m64_private b_ = simde__m64_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_u8 = vceq_s8(a_.neon_i8, b_.neon_i8);
+#elif defined(SIMDE_MIPS_LOONGSON_MMI_NATIVE)
+ r_.mmi_i8 = pcmpeqb_s(a_.mmi_i8, b_.mmi_i8);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i8) / sizeof(r_.i8[0])); i++) {
+ r_.i8[i] = (a_.i8[i] == b_.i8[i]) ? ~INT8_C(0) : INT8_C(0);
+ }
+#endif
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#define simde_m_pcmpeqb(a, b) simde_mm_cmpeq_pi8(a, b)
+#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
+#define _mm_cmpeq_pi8(a, b) simde_mm_cmpeq_pi8(a, b)
+#define _m_pcmpeqb(a, b) simde_mm_cmpeq_pi8(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_cmpeq_pi16(simde__m64 a, simde__m64 b)
+{
+#if defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_cmpeq_pi16(a, b);
+#else
+ simde__m64_private r_;
+ simde__m64_private a_ = simde__m64_to_private(a);
+ simde__m64_private b_ = simde__m64_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_u16 = vceq_s16(a_.neon_i16, b_.neon_i16);
+#elif defined(SIMDE_MIPS_LOONGSON_MMI_NATIVE)
+ r_.mmi_i16 = pcmpeqh_s(a_.mmi_i16, b_.mmi_i16);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i16) / sizeof(r_.i16[0])); i++) {
+ r_.i16[i] = (a_.i16[i] == b_.i16[i]) ? ~INT16_C(0) : INT16_C(0);
+ }
+#endif
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#define simde_m_pcmpeqw(a, b) simde_mm_cmpeq_pi16(a, b)
+#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
+#define _mm_cmpeq_pi16(a, b) simde_mm_cmpeq_pi16(a, b)
+#define _m_pcmpeqw(a, b) simde_mm_cmpeq_pi16(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_cmpeq_pi32(simde__m64 a, simde__m64 b)
+{
+#if defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_cmpeq_pi32(a, b);
+#else
+ simde__m64_private r_;
+ simde__m64_private a_ = simde__m64_to_private(a);
+ simde__m64_private b_ = simde__m64_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_u32 = vceq_s32(a_.neon_i32, b_.neon_i32);
+#elif defined(SIMDE_MIPS_LOONGSON_MMI_NATIVE)
+ r_.mmi_i32 = pcmpeqw_s(a_.mmi_i32, b_.mmi_i32);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i32) / sizeof(r_.i32[0])); i++) {
+ r_.i32[i] = (a_.i32[i] == b_.i32[i]) ? ~INT32_C(0) : INT32_C(0);
+ }
+#endif
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#define simde_m_pcmpeqd(a, b) simde_mm_cmpeq_pi32(a, b)
+#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
+#define _mm_cmpeq_pi32(a, b) simde_mm_cmpeq_pi32(a, b)
+#define _m_pcmpeqd(a, b) simde_mm_cmpeq_pi32(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_cmpgt_pi8(simde__m64 a, simde__m64 b)
+{
+#if defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_cmpgt_pi8(a, b);
+#else
+ simde__m64_private r_;
+ simde__m64_private a_ = simde__m64_to_private(a);
+ simde__m64_private b_ = simde__m64_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_u8 = vcgt_s8(a_.neon_i8, b_.neon_i8);
+#elif defined(SIMDE_MIPS_LOONGSON_MMI_NATIVE)
+ r_.mmi_i8 = pcmpgtb_s(a_.mmi_i8, b_.mmi_i8);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i8) / sizeof(r_.i8[0])); i++) {
+ r_.i8[i] = (a_.i8[i] > b_.i8[i]) ? ~INT8_C(0) : INT8_C(0);
+ }
+#endif
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#define simde_m_pcmpgtb(a, b) simde_mm_cmpgt_pi8(a, b)
+#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
+#define _mm_cmpgt_pi8(a, b) simde_mm_cmpgt_pi8(a, b)
+#define _m_pcmpgtb(a, b) simde_mm_cmpgt_pi8(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_cmpgt_pi16(simde__m64 a, simde__m64 b)
+{
+#if defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_cmpgt_pi16(a, b);
+#else
+ simde__m64_private r_;
+ simde__m64_private a_ = simde__m64_to_private(a);
+ simde__m64_private b_ = simde__m64_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_u16 = vcgt_s16(a_.neon_i16, b_.neon_i16);
+#elif defined(SIMDE_MIPS_LOONGSON_MMI_NATIVE)
+ r_.mmi_i16 = pcmpgth_s(a_.mmi_i16, b_.mmi_i16);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i16) / sizeof(r_.i16[0])); i++) {
+ r_.i16[i] = (a_.i16[i] > b_.i16[i]) ? ~INT16_C(0) : INT16_C(0);
+ }
+#endif
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#define simde_m_pcmpgtw(a, b) simde_mm_cmpgt_pi16(a, b)
+#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
+#define _mm_cmpgt_pi16(a, b) simde_mm_cmpgt_pi16(a, b)
+#define _m_pcmpgtw(a, b) simde_mm_cmpgt_pi16(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_cmpgt_pi32(simde__m64 a, simde__m64 b)
+{
+#if defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_cmpgt_pi32(a, b);
+#else
+ simde__m64_private r_;
+ simde__m64_private a_ = simde__m64_to_private(a);
+ simde__m64_private b_ = simde__m64_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_u32 = vcgt_s32(a_.neon_i32, b_.neon_i32);
+#elif defined(SIMDE_MIPS_LOONGSON_MMI_NATIVE)
+ r_.mmi_i32 = pcmpgtw_s(a_.mmi_i32, b_.mmi_i32);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i32) / sizeof(r_.i32[0])); i++) {
+ r_.i32[i] = (a_.i32[i] > b_.i32[i]) ? ~INT32_C(0) : INT32_C(0);
+ }
+#endif
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#define simde_m_pcmpgtd(a, b) simde_mm_cmpgt_pi32(a, b)
+#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
+#define _mm_cmpgt_pi32(a, b) simde_mm_cmpgt_pi32(a, b)
+#define _m_pcmpgtd(a, b) simde_mm_cmpgt_pi32(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+int64_t simde_mm_cvtm64_si64(simde__m64 a)
+{
+#if defined(SIMDE_X86_MMX_NATIVE) && defined(SIMDE_ARCH_AMD64) && \
+ !defined(__PGI)
+ return _mm_cvtm64_si64(a);
+#else
+ simde__m64_private a_ = simde__m64_to_private(a);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ HEDLEY_DIAGNOSTIC_PUSH
+#if HEDLEY_HAS_WARNING("-Wvector-conversion") && \
+ SIMDE_DETECT_CLANG_VERSION_NOT(10, 0, 0)
+#pragma clang diagnostic ignored "-Wvector-conversion"
+#endif
+ return vget_lane_s64(a_.neon_i64, 0);
+ HEDLEY_DIAGNOSTIC_POP
+#else
+ return a_.i64[0];
+#endif
+#endif
+}
+#define simde_m_to_int64(a) simde_mm_cvtm64_si64(a)
+#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
+#define _mm_cvtm64_si64(a) simde_mm_cvtm64_si64(a)
+#define _m_to_int64(a) simde_mm_cvtm64_si64(a)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_cvtsi32_si64(int32_t a)
+{
+#if defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_cvtsi32_si64(a);
+#else
+ simde__m64_private r_;
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ const int32_t av[sizeof(r_.neon_i32) / sizeof(r_.neon_i32[0])] = {a, 0};
+ r_.neon_i32 = vld1_s32(av);
+#else
+ r_.i32[0] = a;
+ r_.i32[1] = 0;
+#endif
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#define simde_m_from_int(a) simde_mm_cvtsi32_si64(a)
+#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
+#define _mm_cvtsi32_si64(a) simde_mm_cvtsi32_si64(a)
+#define _m_from_int(a) simde_mm_cvtsi32_si64(a)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_cvtsi64_m64(int64_t a)
+{
+#if defined(SIMDE_X86_MMX_NATIVE) && defined(SIMDE_ARCH_AMD64) && \
+ !defined(__PGI)
+ return _mm_cvtsi64_m64(a);
+#else
+ simde__m64_private r_;
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i64 = vld1_s64(&a);
+#else
+ r_.i64[0] = a;
+#endif
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#define simde_m_from_int64(a) simde_mm_cvtsi64_m64(a)
+#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
+#define _mm_cvtsi64_m64(a) simde_mm_cvtsi64_m64(a)
+#define _m_from_int64(a) simde_mm_cvtsi64_m64(a)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+int32_t simde_mm_cvtsi64_si32(simde__m64 a)
+{
+#if defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_cvtsi64_si32(a);
+#else
+ simde__m64_private a_ = simde__m64_to_private(a);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ HEDLEY_DIAGNOSTIC_PUSH
+#if HEDLEY_HAS_WARNING("-Wvector-conversion") && \
+ SIMDE_DETECT_CLANG_VERSION_NOT(10, 0, 0)
+#pragma clang diagnostic ignored "-Wvector-conversion"
+#endif
+ return vget_lane_s32(a_.neon_i32, 0);
+ HEDLEY_DIAGNOSTIC_POP
+#else
+ return a_.i32[0];
+#endif
+#endif
+}
+#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
+#define _mm_cvtsi64_si32(a) simde_mm_cvtsi64_si32(a)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+void simde_mm_empty(void)
+{
+#if defined(SIMDE_X86_MMX_NATIVE)
+ _mm_empty();
+#else
+ /* noop */
+#endif
+}
+#define simde_m_empty() simde_mm_empty()
+#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
+#define _mm_empty() simde_mm_empty()
+#define _m_empty() simde_mm_empty()
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_madd_pi16(simde__m64 a, simde__m64 b)
+{
+#if defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_madd_pi16(a, b);
+#else
+ simde__m64_private r_;
+ simde__m64_private a_ = simde__m64_to_private(a);
+ simde__m64_private b_ = simde__m64_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ int32x4_t i1 = vmull_s16(a_.neon_i16, b_.neon_i16);
+ r_.neon_i32 = vpadd_s32(vget_low_s32(i1), vget_high_s32(i1));
+#elif defined(SIMDE_MIPS_LOONGSON_MMI_NATIVE)
+ r_.mmi_i32 = pmaddhw(a_.mmi_i16, b_.mmi_i16);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i16) / sizeof(r_.i16[0])); i += 2) {
+ r_.i32[i / 2] = (a_.i16[i] * b_.i16[i]) +
+ (a_.i16[i + 1] * b_.i16[i + 1]);
+ }
+#endif
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#define simde_m_pmaddwd(a, b) simde_mm_madd_pi16(a, b)
+#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
+#define _mm_madd_pi16(a, b) simde_mm_madd_pi16(a, b)
+#define _m_pmaddwd(a, b) simde_mm_madd_pi16(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_mulhi_pi16(simde__m64 a, simde__m64 b)
+{
+#if defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_mulhi_pi16(a, b);
+#else
+ simde__m64_private r_;
+ simde__m64_private a_ = simde__m64_to_private(a);
+ simde__m64_private b_ = simde__m64_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ const int32x4_t t1 = vmull_s16(a_.neon_i16, b_.neon_i16);
+ const uint32x4_t t2 = vshrq_n_u32(vreinterpretq_u32_s32(t1), 16);
+ const uint16x4_t t3 = vmovn_u32(t2);
+ r_.neon_u16 = t3;
+#elif defined(SIMDE_MIPS_LOONGSON_MMI_NATIVE)
+ r_.mmi_i16 = pmulhh(a_.mmi_i16, b_.mmi_i16);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i16) / sizeof(r_.i16[0])); i++) {
+ r_.i16[i] = HEDLEY_STATIC_CAST(int16_t,
+ ((a_.i16[i] * b_.i16[i]) >> 16));
+ }
+#endif
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#define simde_m_pmulhw(a, b) simde_mm_mulhi_pi16(a, b)
+#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
+#define _mm_mulhi_pi16(a, b) simde_mm_mulhi_pi16(a, b)
+#define _m_pmulhw(a, b) simde_mm_mulhi_pi16(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_mullo_pi16(simde__m64 a, simde__m64 b)
+{
+#if defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_mullo_pi16(a, b);
+#else
+ simde__m64_private r_;
+ simde__m64_private a_ = simde__m64_to_private(a);
+ simde__m64_private b_ = simde__m64_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ const int32x4_t t1 = vmull_s16(a_.neon_i16, b_.neon_i16);
+ const uint16x4_t t2 = vmovn_u32(vreinterpretq_u32_s32(t1));
+ r_.neon_u16 = t2;
+#elif defined(SIMDE_MIPS_LOONGSON_MMI_NATIVE)
+ r_.mmi_i16 = pmullh(a_.mmi_i16, b_.mmi_i16);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i16) / sizeof(r_.i16[0])); i++) {
+ r_.i16[i] = HEDLEY_STATIC_CAST(
+ int16_t, ((a_.i16[i] * b_.i16[i]) & 0xffff));
+ }
+#endif
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#define simde_m_pmullw(a, b) simde_mm_mullo_pi16(a, b)
+#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
+#define _mm_mullo_pi16(a, b) simde_mm_mullo_pi16(a, b)
+#define _m_pmullw(a, b) simde_mm_mullo_pi16(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_or_si64(simde__m64 a, simde__m64 b)
+{
+#if defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_or_si64(a, b);
+#else
+ simde__m64_private r_;
+ simde__m64_private a_ = simde__m64_to_private(a);
+ simde__m64_private b_ = simde__m64_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i32 = vorr_s32(a_.neon_i32, b_.neon_i32);
+#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
+ r_.i64 = a_.i64 | b_.i64;
+#else
+ r_.i64[0] = a_.i64[0] | b_.i64[0];
+#endif
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#define simde_m_por(a, b) simde_mm_or_si64(a, b)
+#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
+#define _mm_or_si64(a, b) simde_mm_or_si64(a, b)
+#define _m_por(a, b) simde_mm_or_si64(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_packs_pi16(simde__m64 a, simde__m64 b)
+{
+#if defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_packs_pi16(a, b);
+#else
+ simde__m64_private r_;
+ simde__m64_private a_ = simde__m64_to_private(a);
+ simde__m64_private b_ = simde__m64_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i8 = vqmovn_s16(vcombine_s16(a_.neon_i16, b_.neon_i16));
+#elif defined(SIMDE_MIPS_LOONGSON_MMI_NATIVE)
+ r_.mmi_i8 = packsshb(a_.mmi_i16, b_.mmi_i16);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i16) / sizeof(r_.i16[0])); i++) {
+ if (a_.i16[i] < INT8_MIN) {
+ r_.i8[i] = INT8_MIN;
+ } else if (a_.i16[i] > INT8_MAX) {
+ r_.i8[i] = INT8_MAX;
+ } else {
+ r_.i8[i] = HEDLEY_STATIC_CAST(int8_t, a_.i16[i]);
+ }
+ }
+
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i16) / sizeof(r_.i16[0])); i++) {
+ if (b_.i16[i] < INT8_MIN) {
+ r_.i8[i + 4] = INT8_MIN;
+ } else if (b_.i16[i] > INT8_MAX) {
+ r_.i8[i + 4] = INT8_MAX;
+ } else {
+ r_.i8[i + 4] = HEDLEY_STATIC_CAST(int8_t, b_.i16[i]);
+ }
+ }
+#endif
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#define simde_m_packsswb(a, b) simde_mm_packs_pi16(a, b)
+#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
+#define _mm_packs_pi16(a, b) simde_mm_packs_pi16(a, b)
+#define _m_packsswb(a, b) simde_mm_packs_pi16(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_packs_pi32(simde__m64 a, simde__m64 b)
+{
+#if defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_packs_pi32(a, b);
+#else
+ simde__m64_private r_;
+ simde__m64_private a_ = simde__m64_to_private(a);
+ simde__m64_private b_ = simde__m64_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i16 = vqmovn_s32(vcombine_s32(a_.neon_i32, b_.neon_i32));
+#elif defined(SIMDE_MIPS_LOONGSON_MMI_NATIVE)
+ r_.mmi_i16 = packsswh(a_.mmi_i32, b_.mmi_i32);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (8 / sizeof(a_.i32[0])); i++) {
+ if (a_.i32[i] < SHRT_MIN) {
+ r_.i16[i] = SHRT_MIN;
+ } else if (a_.i32[i] > INT16_MAX) {
+ r_.i16[i] = INT16_MAX;
+ } else {
+ r_.i16[i] = HEDLEY_STATIC_CAST(int16_t, a_.i32[i]);
+ }
+ }
+
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (8 / sizeof(b_.i32[0])); i++) {
+ if (b_.i32[i] < SHRT_MIN) {
+ r_.i16[i + 2] = SHRT_MIN;
+ } else if (b_.i32[i] > INT16_MAX) {
+ r_.i16[i + 2] = INT16_MAX;
+ } else {
+ r_.i16[i + 2] = HEDLEY_STATIC_CAST(int16_t, b_.i32[i]);
+ }
+ }
+#endif
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#define simde_m_packssdw(a, b) simde_mm_packs_pi32(a, b)
+#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
+#define _mm_packs_pi32(a, b) simde_mm_packs_pi32(a, b)
+#define _m_packssdw(a, b) simde_mm_packs_pi32(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_packs_pu16(simde__m64 a, simde__m64 b)
+{
+#if defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_packs_pu16(a, b);
+#else
+ simde__m64_private r_;
+ simde__m64_private a_ = simde__m64_to_private(a);
+ simde__m64_private b_ = simde__m64_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ const int16x8_t t1 = vcombine_s16(a_.neon_i16, b_.neon_i16);
+
+ /* Set elements which are < 0 to 0 */
+ const int16x8_t t2 =
+ vandq_s16(t1, vreinterpretq_s16_u16(vcgezq_s16(t1)));
+
+ /* Vector with all s16 elements set to UINT8_MAX */
+ const int16x8_t vmax =
+ vmovq_n_s16(HEDLEY_STATIC_CAST(int16_t, UINT8_MAX));
+
+ /* Elements which are within the acceptable range */
+ const int16x8_t le_max =
+ vandq_s16(t2, vreinterpretq_s16_u16(vcleq_s16(t2, vmax)));
+ const int16x8_t gt_max =
+ vandq_s16(vmax, vreinterpretq_s16_u16(vcgtq_s16(t2, vmax)));
+
+ /* Final values as 16-bit integers */
+ const int16x8_t values = vorrq_s16(le_max, gt_max);
+
+ r_.neon_u8 = vmovn_u16(vreinterpretq_u16_s16(values));
+#elif defined(SIMDE_MIPS_LOONGSON_MMI_NATIVE)
+ r_.mmi_u8 = packushb(a_.mmi_u16, b_.mmi_u16);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i16) / sizeof(r_.i16[0])); i++) {
+ if (a_.i16[i] > UINT8_MAX) {
+ r_.u8[i] = UINT8_MAX;
+ } else if (a_.i16[i] < 0) {
+ r_.u8[i] = 0;
+ } else {
+ r_.u8[i] = HEDLEY_STATIC_CAST(uint8_t, a_.i16[i]);
+ }
+ }
+
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i16) / sizeof(r_.i16[0])); i++) {
+ if (b_.i16[i] > UINT8_MAX) {
+ r_.u8[i + 4] = UINT8_MAX;
+ } else if (b_.i16[i] < 0) {
+ r_.u8[i + 4] = 0;
+ } else {
+ r_.u8[i + 4] = HEDLEY_STATIC_CAST(uint8_t, b_.i16[i]);
+ }
+ }
+#endif
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#define simde_m_packuswb(a, b) simde_mm_packs_pu16(a, b)
+#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
+#define _mm_packs_pu16(a, b) simde_mm_packs_pu16(a, b)
+#define _m_packuswb(a, b) simde_mm_packs_pu16(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_set_pi8(int8_t e7, int8_t e6, int8_t e5, int8_t e4,
+ int8_t e3, int8_t e2, int8_t e1, int8_t e0)
+{
+#if defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_set_pi8(e7, e6, e5, e4, e3, e2, e1, e0);
+#else
+ simde__m64_private r_;
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ const int8_t v[sizeof(r_.i8) / sizeof(r_.i8[0])] = {e0, e1, e2, e3,
+ e4, e5, e6, e7};
+ r_.neon_i8 = vld1_s8(v);
+#else
+ r_.i8[0] = e0;
+ r_.i8[1] = e1;
+ r_.i8[2] = e2;
+ r_.i8[3] = e3;
+ r_.i8[4] = e4;
+ r_.i8[5] = e5;
+ r_.i8[6] = e6;
+ r_.i8[7] = e7;
+#endif
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
+#define _mm_set_pi8(e7, e6, e5, e4, e3, e2, e1, e0) \
+ simde_mm_set_pi8(e7, e6, e5, e4, e3, e2, e1, e0)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_x_mm_set_pu8(uint8_t e7, uint8_t e6, uint8_t e5, uint8_t e4,
+ uint8_t e3, uint8_t e2, uint8_t e1, uint8_t e0)
+{
+ simde__m64_private r_;
+
+#if defined(SIMDE_X86_MMX_NATIVE)
+ r_.n = _mm_set_pi8(
+ HEDLEY_STATIC_CAST(int8_t, e7), HEDLEY_STATIC_CAST(int8_t, e6),
+ HEDLEY_STATIC_CAST(int8_t, e5), HEDLEY_STATIC_CAST(int8_t, e4),
+ HEDLEY_STATIC_CAST(int8_t, e3), HEDLEY_STATIC_CAST(int8_t, e2),
+ HEDLEY_STATIC_CAST(int8_t, e1), HEDLEY_STATIC_CAST(int8_t, e0));
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ const uint8_t v[sizeof(r_.u8) / sizeof(r_.u8[0])] = {e0, e1, e2, e3,
+ e4, e5, e6, e7};
+ r_.neon_u8 = vld1_u8(v);
+#else
+ r_.u8[0] = e0;
+ r_.u8[1] = e1;
+ r_.u8[2] = e2;
+ r_.u8[3] = e3;
+ r_.u8[4] = e4;
+ r_.u8[5] = e5;
+ r_.u8[6] = e6;
+ r_.u8[7] = e7;
+#endif
+
+ return simde__m64_from_private(r_);
+}
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_set_pi16(int16_t e3, int16_t e2, int16_t e1, int16_t e0)
+{
+#if defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_set_pi16(e3, e2, e1, e0);
+#else
+ simde__m64_private r_;
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ const int16_t v[sizeof(r_.i16) / sizeof(r_.i16[0])] = {e0, e1, e2, e3};
+ r_.neon_i16 = vld1_s16(v);
+#else
+ r_.i16[0] = e0;
+ r_.i16[1] = e1;
+ r_.i16[2] = e2;
+ r_.i16[3] = e3;
+#endif
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
+#define _mm_set_pi16(e3, e2, e1, e0) simde_mm_set_pi16(e3, e2, e1, e0)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_x_mm_set_pu16(uint16_t e3, uint16_t e2, uint16_t e1,
+ uint16_t e0)
+{
+ simde__m64_private r_;
+
+#if defined(SIMDE_X86_MMX_NATIVE)
+ r_.n = _mm_set_pi16(HEDLEY_STATIC_CAST(int16_t, e3),
+ HEDLEY_STATIC_CAST(int16_t, e2),
+ HEDLEY_STATIC_CAST(int16_t, e1),
+ HEDLEY_STATIC_CAST(int16_t, e0));
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ const uint16_t v[sizeof(r_.u16) / sizeof(r_.u16[0])] = {e0, e1, e2, e3};
+ r_.neon_u16 = vld1_u16(v);
+#else
+ r_.u16[0] = e0;
+ r_.u16[1] = e1;
+ r_.u16[2] = e2;
+ r_.u16[3] = e3;
+#endif
+
+ return simde__m64_from_private(r_);
+}
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_x_mm_set_pu32(uint32_t e1, uint32_t e0)
+{
+ simde__m64_private r_;
+
+#if defined(SIMDE_X86_MMX_NATIVE)
+ r_.n = _mm_set_pi32(HEDLEY_STATIC_CAST(int32_t, e1),
+ HEDLEY_STATIC_CAST(int32_t, e0));
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ const uint32_t v[sizeof(r_.u32) / sizeof(r_.u32[0])] = {e0, e1};
+ r_.neon_u32 = vld1_u32(v);
+#else
+ r_.u32[0] = e0;
+ r_.u32[1] = e1;
+#endif
+
+ return simde__m64_from_private(r_);
+}
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_set_pi32(int32_t e1, int32_t e0)
+{
+ simde__m64_private r_;
+
+#if defined(SIMDE_X86_MMX_NATIVE)
+ r_.n = _mm_set_pi32(e1, e0);
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ const int32_t v[sizeof(r_.i32) / sizeof(r_.i32[0])] = {e0, e1};
+ r_.neon_i32 = vld1_s32(v);
+#else
+ r_.i32[0] = e0;
+ r_.i32[1] = e1;
+#endif
+
+ return simde__m64_from_private(r_);
+}
+#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
+#define _mm_set_pi32(e1, e0) simde_mm_set_pi32(e1, e0)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_x_mm_set_pi64(int64_t e0)
+{
+ simde__m64_private r_;
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ const int64_t v[sizeof(r_.i64) / sizeof(r_.i64[0])] = {e0};
+ r_.neon_i64 = vld1_s64(v);
+#else
+ r_.i64[0] = e0;
+#endif
+
+ return simde__m64_from_private(r_);
+}
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_x_mm_set_f32x2(simde_float32 e1, simde_float32 e0)
+{
+ simde__m64_private r_;
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ const simde_float32 v[sizeof(r_.f32) / sizeof(r_.f32[0])] = {e0, e1};
+ r_.neon_f32 = vld1_f32(v);
+#else
+ r_.f32[0] = e0;
+ r_.f32[1] = e1;
+#endif
+
+ return simde__m64_from_private(r_);
+}
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_set1_pi8(int8_t a)
+{
+#if defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_set1_pi8(a);
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ simde__m64_private r_;
+ r_.neon_i8 = vmov_n_s8(a);
+ return simde__m64_from_private(r_);
+#else
+ return simde_mm_set_pi8(a, a, a, a, a, a, a, a);
+#endif
+}
+#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
+#define _mm_set1_pi8(a) simde_mm_set1_pi8(a)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_set1_pi16(int16_t a)
+{
+#if defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_set1_pi16(a);
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ simde__m64_private r_;
+ r_.neon_i16 = vmov_n_s16(a);
+ return simde__m64_from_private(r_);
+#else
+ return simde_mm_set_pi16(a, a, a, a);
+#endif
+}
+#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
+#define _mm_set1_pi16(a) simde_mm_set1_pi16(a)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_set1_pi32(int32_t a)
+{
+#if defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_set1_pi32(a);
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ simde__m64_private r_;
+ r_.neon_i32 = vmov_n_s32(a);
+ return simde__m64_from_private(r_);
+#else
+ return simde_mm_set_pi32(a, a);
+#endif
+}
+#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
+#define _mm_set1_pi32(a) simde_mm_set1_pi32(a)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_setr_pi8(int8_t e7, int8_t e6, int8_t e5, int8_t e4,
+ int8_t e3, int8_t e2, int8_t e1, int8_t e0)
+{
+#if defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_setr_pi8(e7, e6, e5, e4, e3, e2, e1, e0);
+#else
+ return simde_mm_set_pi8(e0, e1, e2, e3, e4, e5, e6, e7);
+#endif
+}
+#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
+#define _mm_setr_pi8(e7, e6, e5, e4, e3, e2, e1, e0) \
+ simde_mm_setr_pi8(e7, e6, e5, e4, e3, e2, e1, e0)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_setr_pi16(int16_t e3, int16_t e2, int16_t e1, int16_t e0)
+{
+#if defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_setr_pi16(e3, e2, e1, e0);
+#else
+ return simde_mm_set_pi16(e0, e1, e2, e3);
+#endif
+}
+#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
+#define _mm_setr_pi16(e3, e2, e1, e0) simde_mm_setr_pi16(e3, e2, e1, e0)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_setr_pi32(int32_t e1, int32_t e0)
+{
+#if defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_setr_pi32(e1, e0);
+#else
+ return simde_mm_set_pi32(e0, e1);
+#endif
+}
+#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
+#define _mm_setr_pi32(e1, e0) simde_mm_setr_pi32(e1, e0)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_setzero_si64(void)
+{
+#if defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_setzero_si64();
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ simde__m64_private r_;
+ r_.neon_u32 = vmov_n_u32(0);
+ return simde__m64_from_private(r_);
+#else
+ return simde_mm_set_pi32(0, 0);
+#endif
+}
+#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
+#define _mm_setzero_si64() simde_mm_setzero_si64()
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_x_mm_load_si64(const void *mem_addr)
+{
+ simde__m64 r;
+ simde_memcpy(&r, SIMDE_ALIGN_ASSUME_LIKE(mem_addr, simde__m64),
+ sizeof(r));
+ return r;
+}
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_x_mm_loadu_si64(const void *mem_addr)
+{
+ simde__m64 r;
+ simde_memcpy(&r, mem_addr, sizeof(r));
+ return r;
+}
+
+SIMDE_FUNCTION_ATTRIBUTES
+void simde_x_mm_store_si64(void *mem_addr, simde__m64 value)
+{
+ simde_memcpy(SIMDE_ALIGN_ASSUME_LIKE(mem_addr, simde__m64), &value,
+ sizeof(value));
+}
+
+SIMDE_FUNCTION_ATTRIBUTES
+void simde_x_mm_storeu_si64(void *mem_addr, simde__m64 value)
+{
+ simde_memcpy(mem_addr, &value, sizeof(value));
+}
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_x_mm_setone_si64(void)
+{
+ return simde_mm_set1_pi32(~INT32_C(0));
+}
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_sll_pi16(simde__m64 a, simde__m64 count)
+{
+#if defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_sll_pi16(a, count);
+#else
+ simde__m64_private r_;
+ simde__m64_private a_ = simde__m64_to_private(a);
+ simde__m64_private count_ = simde__m64_to_private(count);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ HEDLEY_DIAGNOSTIC_PUSH
+#if HEDLEY_HAS_WARNING("-Wvector-conversion") && \
+ SIMDE_DETECT_CLANG_VERSION_NOT(10, 0, 0)
+#pragma clang diagnostic ignored "-Wvector-conversion"
+#endif
+ r_.neon_i16 =
+ vshl_s16(a_.neon_i16,
+ vmov_n_s16(HEDLEY_STATIC_CAST(
+ int16_t, vget_lane_u64(count_.neon_u64, 0))));
+ HEDLEY_DIAGNOSTIC_POP
+#elif defined(SIMDE_VECTOR_SUBSCRIPT_SCALAR) && \
+ defined(SIMDE_BUG_CLANG_POWER9_16x4_BAD_SHIFT)
+ if (HEDLEY_UNLIKELY(count_.u64[0] > 15))
+ return simde_mm_setzero_si64();
+
+ r_.i16 = a_.i16 << HEDLEY_STATIC_CAST(int16_t, count_.u64[0]);
+#elif defined(SIMDE_VECTOR_SUBSCRIPT_SCALAR)
+ r_.i16 = a_.i16 << count_.u64[0];
+#else
+ if (HEDLEY_UNLIKELY(count_.u64[0] > 15)) {
+ simde_memset(&r_, 0, sizeof(r_));
+ return simde__m64_from_private(r_);
+ }
+
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.u16) / sizeof(r_.u16[0])); i++) {
+ r_.u16[i] = HEDLEY_STATIC_CAST(uint16_t,
+ a_.u16[i] << count_.u64[0]);
+ }
+#endif
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#define simde_m_psllw(a, count) simde_mm_sll_pi16(a, count)
+#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
+#define _mm_sll_pi16(a, count) simde_mm_sll_pi16(a, count)
+#define _m_psllw(a, count) simde_mm_sll_pi16(a, count)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_sll_pi32(simde__m64 a, simde__m64 count)
+{
+#if defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_sll_pi32(a, count);
+#else
+ simde__m64_private r_;
+ simde__m64_private a_ = simde__m64_to_private(a);
+ simde__m64_private count_ = simde__m64_to_private(count);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ HEDLEY_DIAGNOSTIC_PUSH
+#if HEDLEY_HAS_WARNING("-Wvector-conversion") && \
+ SIMDE_DETECT_CLANG_VERSION_NOT(10, 0, 0)
+#pragma clang diagnostic ignored "-Wvector-conversion"
+#endif
+ r_.neon_i32 =
+ vshl_s32(a_.neon_i32,
+ vmov_n_s32(HEDLEY_STATIC_CAST(
+ int32_t, vget_lane_u64(count_.neon_u64, 0))));
+ HEDLEY_DIAGNOSTIC_POP
+#elif defined(SIMDE_VECTOR_SUBSCRIPT_SCALAR)
+ r_.i32 = a_.i32 << count_.u64[0];
+#else
+ if (HEDLEY_UNLIKELY(count_.u64[0] > 31)) {
+ simde_memset(&r_, 0, sizeof(r_));
+ return simde__m64_from_private(r_);
+ }
+
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.u32) / sizeof(r_.u32[0])); i++) {
+ r_.u32[i] = a_.u32[i] << count_.u64[0];
+ }
+#endif
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#define simde_m_pslld(a, count) simde_mm_sll_pi32(a, count)
+#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
+#define _mm_sll_pi32(a, count) simde_mm_sll_pi32(a, count)
+#define _m_pslld(a, count) simde_mm_sll_pi32(a, count)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_slli_pi16(simde__m64 a, int count)
+{
+#if defined(SIMDE_X86_MMX_NATIVE) && !defined(__PGI)
+ return _mm_slli_pi16(a, count);
+#else
+ simde__m64_private r_;
+ simde__m64_private a_ = simde__m64_to_private(a);
+
+#if defined(SIMDE_VECTOR_SUBSCRIPT_SCALAR) && \
+ defined(SIMDE_BUG_CLANG_POWER9_16x4_BAD_SHIFT)
+ if (HEDLEY_UNLIKELY(count > 15))
+ return simde_mm_setzero_si64();
+
+ r_.i16 = a_.i16 << HEDLEY_STATIC_CAST(int16_t, count);
+#elif defined(SIMDE_VECTOR_SUBSCRIPT_SCALAR)
+ r_.i16 = a_.i16 << count;
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i16 = vshl_s16(a_.neon_i16, vmov_n_s16((int16_t)count));
+#elif defined(SIMDE_MIPS_LOONGSON_MMI_NATIVE)
+	r_.mmi_i16 = psllh_s(a_.mmi_i16, count);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.u16) / sizeof(r_.u16[0])); i++) {
+ r_.u16[i] = HEDLEY_STATIC_CAST(uint16_t, a_.u16[i] << count);
+ }
+#endif
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#define simde_m_psllwi(a, count) simde_mm_slli_pi16(a, count)
+#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
+#define _mm_slli_pi16(a, count) simde_mm_slli_pi16(a, count)
+#define _m_psllwi(a, count) simde_mm_slli_pi16(a, count)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_slli_pi32(simde__m64 a, int count)
+{
+#if defined(SIMDE_X86_MMX_NATIVE) && !defined(__PGI)
+ return _mm_slli_pi32(a, count);
+#else
+ simde__m64_private r_;
+ simde__m64_private a_ = simde__m64_to_private(a);
+
+#if defined(SIMDE_VECTOR_SUBSCRIPT_SCALAR)
+ r_.i32 = a_.i32 << count;
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i32 = vshl_s32(a_.neon_i32, vmov_n_s32((int32_t)count));
+#elif defined(SIMDE_MIPS_LOONGSON_MMI_NATIVE)
+	r_.mmi_i32 = psllw_s(a_.mmi_i32, count);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.u32) / sizeof(r_.u32[0])); i++) {
+ r_.u32[i] = a_.u32[i] << count;
+ }
+#endif
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#define simde_m_pslldi(a, b) simde_mm_slli_pi32(a, b)
+#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
+#define _mm_slli_pi32(a, count) simde_mm_slli_pi32(a, count)
+#define _m_pslldi(a, count) simde_mm_slli_pi32(a, count)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_slli_si64(simde__m64 a, int count)
+{
+#if defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_slli_si64(a, count);
+#else
+ simde__m64_private r_;
+ simde__m64_private a_ = simde__m64_to_private(a);
+
+#if defined(SIMDE_VECTOR_SUBSCRIPT_SCALAR)
+ r_.i64 = a_.i64 << count;
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i64 = vshl_s64(a_.neon_i64, vmov_n_s64((int64_t)count));
+#else
+ r_.u64[0] = a_.u64[0] << count;
+#endif
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#define simde_m_psllqi(a, count) simde_mm_slli_si64(a, count)
+#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
+#define _mm_slli_si64(a, count) simde_mm_slli_si64(a, count)
+#define _m_psllqi(a, count) simde_mm_slli_si64(a, count)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_sll_si64(simde__m64 a, simde__m64 count)
+{
+#if defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_sll_si64(a, count);
+#else
+ simde__m64_private r_;
+ simde__m64_private a_ = simde__m64_to_private(a);
+ simde__m64_private count_ = simde__m64_to_private(count);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i64 = vshl_s64(a_.neon_i64, count_.neon_i64);
+#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
+ r_.i64 = a_.i64 << count_.i64;
+#else
+ if (HEDLEY_UNLIKELY(count_.u64[0] > 63)) {
+ simde_memset(&r_, 0, sizeof(r_));
+ return simde__m64_from_private(r_);
+ }
+
+ r_.u64[0] = a_.u64[0] << count_.u64[0];
+#endif
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#define simde_m_psllq(a, count) simde_mm_sll_si64(a, count)
+#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
+#define _mm_sll_si64(a, count) simde_mm_sll_si64(a, count)
+#define _m_psllq(a, count) simde_mm_sll_si64(a, count)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_srl_pi16(simde__m64 a, simde__m64 count)
+{
+#if defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_srl_pi16(a, count);
+#else
+ simde__m64_private r_;
+ simde__m64_private a_ = simde__m64_to_private(a);
+ simde__m64_private count_ = simde__m64_to_private(count);
+
+#if defined(SIMDE_VECTOR_SUBSCRIPT_SCALAR) && \
+ defined(SIMDE_BUG_CLANG_POWER9_16x4_BAD_SHIFT)
+ if (HEDLEY_UNLIKELY(count_.u64[0] > 15))
+ return simde_mm_setzero_si64();
+
+ r_.i16 = a_.i16 >> HEDLEY_STATIC_CAST(int16_t, count_.u64[0]);
+#elif defined(SIMDE_VECTOR_SUBSCRIPT_SCALAR)
+ r_.u16 = a_.u16 >> count_.u64[0];
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_u16 = vshl_u16(
+ a_.neon_u16,
+ vmov_n_s16(-((int16_t)vget_lane_u64(count_.neon_u64, 0))));
+#else
+ if (HEDLEY_UNLIKELY(count_.u64[0] > 15)) {
+ simde_memset(&r_, 0, sizeof(r_));
+ return simde__m64_from_private(r_);
+ }
+
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < sizeof(r_.u16) / sizeof(r_.u16[0]); i++) {
+ r_.u16[i] = a_.u16[i] >> count_.u64[0];
+ }
+#endif
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#define simde_m_psrlw(a, count) simde_mm_srl_pi16(a, count)
+#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
+#define _mm_srl_pi16(a, count) simde_mm_srl_pi16(a, count)
+#define _m_psrlw(a, count) simde_mm_srl_pi16(a, count)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_srl_pi32(simde__m64 a, simde__m64 count)
+{
+#if defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_srl_pi32(a, count);
+#else
+ simde__m64_private r_;
+ simde__m64_private a_ = simde__m64_to_private(a);
+ simde__m64_private count_ = simde__m64_to_private(count);
+
+#if defined(SIMDE_VECTOR_SUBSCRIPT_SCALAR)
+ r_.u32 = a_.u32 >> count_.u64[0];
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_u32 = vshl_u32(
+ a_.neon_u32,
+ vmov_n_s32(-((int32_t)vget_lane_u64(count_.neon_u64, 0))));
+#else
+ if (HEDLEY_UNLIKELY(count_.u64[0] > 31)) {
+ simde_memset(&r_, 0, sizeof(r_));
+ return simde__m64_from_private(r_);
+ }
+
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < sizeof(r_.u32) / sizeof(r_.u32[0]); i++) {
+ r_.u32[i] = a_.u32[i] >> count_.u64[0];
+ }
+#endif
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#define simde_m_psrld(a, count) simde_mm_srl_pi32(a, count)
+#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
+#define _mm_srl_pi32(a, count) simde_mm_srl_pi32(a, count)
+#define _m_psrld(a, count) simde_mm_srl_pi32(a, count)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_srli_pi16(simde__m64 a, int count)
+{
+#if defined(SIMDE_X86_MMX_NATIVE) && !defined(__PGI)
+ return _mm_srli_pi16(a, count);
+#else
+ simde__m64_private r_;
+ simde__m64_private a_ = simde__m64_to_private(a);
+
+#if defined(SIMDE_VECTOR_SUBSCRIPT_SCALAR)
+ r_.u16 = a_.u16 >> count;
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_u16 = vshl_u16(a_.neon_u16, vmov_n_s16(-((int16_t)count)));
+#elif defined(SIMDE_MIPS_LOONGSON_MMI_NATIVE)
+	r_.mmi_i16 = psrlh_s(a_.mmi_i16, count);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.u16) / sizeof(r_.u16[0])); i++) {
+ r_.u16[i] = a_.u16[i] >> count;
+ }
+#endif
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#define simde_m_psrlwi(a, count) simde_mm_srli_pi16(a, count)
+#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
+#define _mm_srli_pi16(a, count) simde_mm_srli_pi16(a, count)
+#define _m_psrlwi(a, count) simde_mm_srli_pi16(a, count)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_srli_pi32(simde__m64 a, int count)
+{
+#if defined(SIMDE_X86_MMX_NATIVE) && !defined(__PGI)
+ return _mm_srli_pi32(a, count);
+#else
+ simde__m64_private r_;
+ simde__m64_private a_ = simde__m64_to_private(a);
+
+#if defined(SIMDE_VECTOR_SUBSCRIPT_SCALAR)
+ r_.u32 = a_.u32 >> count;
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_u32 = vshl_u32(a_.neon_u32, vmov_n_s32(-((int32_t)count)));
+#elif defined(SIMDE_MIPS_LOONGSON_MMI_NATIVE)
+	r_.mmi_i32 = psrlw_s(a_.mmi_i32, count);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.u32) / sizeof(r_.u32[0])); i++) {
+ r_.u32[i] = a_.u32[i] >> count;
+ }
+#endif
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#define simde_m_psrldi(a, count) simde_mm_srli_pi32(a, count)
+#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
+#define _mm_srli_pi32(a, count) simde_mm_srli_pi32(a, count)
+#define _m_psrldi(a, count) simde_mm_srli_pi32(a, count)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_srli_si64(simde__m64 a, int count)
+{
+#if defined(SIMDE_X86_MMX_NATIVE) && !defined(__PGI)
+ return _mm_srli_si64(a, count);
+#else
+ simde__m64_private r_;
+ simde__m64_private a_ = simde__m64_to_private(a);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_u64 = vshl_u64(a_.neon_u64, vmov_n_s64(-count));
+#elif defined(SIMDE_VECTOR_SUBSCRIPT_SCALAR)
+ r_.u64 = a_.u64 >> count;
+#else
+ r_.u64[0] = a_.u64[0] >> count;
+#endif
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#define simde_m_psrlqi(a, count) simde_mm_srli_si64(a, count)
+#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
+#define _mm_srli_si64(a, count) simde_mm_srli_si64(a, count)
+#define _m_psrlqi(a, count) simde_mm_srli_si64(a, count)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_srl_si64(simde__m64 a, simde__m64 count)
+{
+#if defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_srl_si64(a, count);
+#else
+ simde__m64_private r_;
+ simde__m64_private a_ = simde__m64_to_private(a);
+ simde__m64_private count_ = simde__m64_to_private(count);
+
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ r_.neon_u64 = vshl_u64(a_.neon_u64, vneg_s64(count_.neon_i64));
+#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
+ r_.u64 = a_.u64 >> count_.u64;
+#else
+ if (HEDLEY_UNLIKELY(count_.u64[0] > 63)) {
+ simde_memset(&r_, 0, sizeof(r_));
+ return simde__m64_from_private(r_);
+ }
+
+ r_.u64[0] = a_.u64[0] >> count_.u64[0];
+#endif
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#define simde_m_psrlq(a, count) simde_mm_srl_si64(a, count)
+#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
+#define _mm_srl_si64(a, count) simde_mm_srl_si64(a, count)
+#define _m_psrlq(a, count) simde_mm_srl_si64(a, count)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_srai_pi16(simde__m64 a, int count)
+{
+#if defined(SIMDE_X86_MMX_NATIVE) && !defined(__PGI)
+ return _mm_srai_pi16(a, count);
+#else
+ simde__m64_private r_;
+ simde__m64_private a_ = simde__m64_to_private(a);
+
+#if defined(SIMDE_VECTOR_SUBSCRIPT_SCALAR)
+ r_.i16 = a_.i16 >> (count & 0xff);
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i16 = vshl_s16(a_.neon_i16,
+ vmov_n_s16(-HEDLEY_STATIC_CAST(int16_t, count)));
+#elif defined(SIMDE_MIPS_LOONGSON_MMI_NATIVE)
+ r_.mmi_i16 = psrah_s(a_.mmi_i16, count);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i16) / sizeof(r_.i16[0])); i++) {
+ r_.i16[i] = a_.i16[i] >> (count & 0xff);
+ }
+#endif
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#define simde_m_psrawi(a, count) simde_mm_srai_pi16(a, count)
+#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
+#define _mm_srai_pi16(a, count) simde_mm_srai_pi16(a, count)
+#define _m_psrawi(a, count) simde_mm_srai_pi16(a, count)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_srai_pi32(simde__m64 a, int count)
+{
+#if defined(SIMDE_X86_MMX_NATIVE) && !defined(__PGI)
+ return _mm_srai_pi32(a, count);
+#else
+ simde__m64_private r_;
+ simde__m64_private a_ = simde__m64_to_private(a);
+
+#if defined(SIMDE_VECTOR_SUBSCRIPT_SCALAR)
+ r_.i32 = a_.i32 >> (count & 0xff);
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i32 = vshl_s32(a_.neon_i32,
+ vmov_n_s32(-HEDLEY_STATIC_CAST(int32_t, count)));
+#elif defined(SIMDE_MIPS_LOONGSON_MMI_NATIVE)
+ r_.mmi_i32 = psraw_s(a_.mmi_i32, count);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i32) / sizeof(r_.i32[0])); i++) {
+ r_.i32[i] = a_.i32[i] >> (count & 0xff);
+ }
+#endif
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#define simde_m_psradi(a, count) simde_mm_srai_pi32(a, count)
+#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
+#define _mm_srai_pi32(a, count) simde_mm_srai_pi32(a, count)
+#define _m_psradi(a, count) simde_mm_srai_pi32(a, count)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_sra_pi16(simde__m64 a, simde__m64 count)
+{
+#if defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_sra_pi16(a, count);
+#else
+ simde__m64_private r_;
+ simde__m64_private a_ = simde__m64_to_private(a);
+ simde__m64_private count_ = simde__m64_to_private(count);
+ const int cnt = HEDLEY_STATIC_CAST(
+ int, (count_.i64[0] > 15 ? 15 : count_.i64[0]));
+
+#if defined(SIMDE_VECTOR_SUBSCRIPT_SCALAR)
+ r_.i16 = a_.i16 >> cnt;
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i16 =
+ vshl_s16(a_.neon_i16,
+ vmov_n_s16(-HEDLEY_STATIC_CAST(
+ int16_t, vget_lane_u64(count_.neon_u64, 0))));
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i16) / sizeof(r_.i16[0])); i++) {
+ r_.i16[i] = a_.i16[i] >> cnt;
+ }
+#endif
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#define simde_m_psraw(a, count) simde_mm_sra_pi16(a, count)
+#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
+#define _mm_sra_pi16(a, count) simde_mm_sra_pi16(a, count)
+#define _m_psraw(a, count) simde_mm_sra_pi16(a, count)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_sra_pi32(simde__m64 a, simde__m64 count)
+{
+#if defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_sra_pi32(a, count);
+#else
+ simde__m64_private r_;
+ simde__m64_private a_ = simde__m64_to_private(a);
+ simde__m64_private count_ = simde__m64_to_private(count);
+ const int32_t cnt =
+ (count_.u64[0] > 31)
+ ? 31
+ : HEDLEY_STATIC_CAST(int32_t, count_.u64[0]);
+
+#if defined(SIMDE_VECTOR_SUBSCRIPT_SCALAR)
+ r_.i32 = a_.i32 >> cnt;
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i32 =
+ vshl_s32(a_.neon_i32,
+ vmov_n_s32(-HEDLEY_STATIC_CAST(
+ int32_t, vget_lane_u64(count_.neon_u64, 0))));
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i32) / sizeof(r_.i32[0])); i++) {
+ r_.i32[i] = a_.i32[i] >> cnt;
+ }
+#endif
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#define simde_m_psrad(a, b) simde_mm_sra_pi32(a, b)
+#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
+#define _mm_sra_pi32(a, count) simde_mm_sra_pi32(a, count)
+#define _m_psrad(a, count) simde_mm_sra_pi32(a, count)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_sub_pi8(simde__m64 a, simde__m64 b)
+{
+#if defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_sub_pi8(a, b);
+#else
+ simde__m64_private r_;
+ simde__m64_private a_ = simde__m64_to_private(a);
+ simde__m64_private b_ = simde__m64_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i8 = vsub_s8(a_.neon_i8, b_.neon_i8);
+#elif defined(SIMDE_MIPS_LOONGSON_MMI_NATIVE)
+ r_.mmi_i8 = psubb_s(a_.mmi_i8, b_.mmi_i8);
+#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
+ r_.i8 = a_.i8 - b_.i8;
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i8) / sizeof(r_.i8[0])); i++) {
+ r_.i8[i] = a_.i8[i] - b_.i8[i];
+ }
+#endif
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#define simde_m_psubb(a, b) simde_mm_sub_pi8(a, b)
+#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
+#define _mm_sub_pi8(a, b) simde_mm_sub_pi8(a, b)
+#define _m_psubb(a, b) simde_mm_sub_pi8(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_sub_pi16(simde__m64 a, simde__m64 b)
+{
+#if defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_sub_pi16(a, b);
+#else
+ simde__m64_private r_;
+ simde__m64_private a_ = simde__m64_to_private(a);
+ simde__m64_private b_ = simde__m64_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i16 = vsub_s16(a_.neon_i16, b_.neon_i16);
+#elif defined(SIMDE_MIPS_LOONGSON_MMI_NATIVE)
+ r_.mmi_i16 = psubh_s(a_.mmi_i16, b_.mmi_i16);
+#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
+ r_.i16 = a_.i16 - b_.i16;
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i16) / sizeof(r_.i16[0])); i++) {
+ r_.i16[i] = a_.i16[i] - b_.i16[i];
+ }
+#endif
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#define simde_m_psubw(a, b) simde_mm_sub_pi16(a, b)
+#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
+#define _mm_sub_pi16(a, b) simde_mm_sub_pi16(a, b)
+#define _m_psubw(a, b) simde_mm_sub_pi16(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_sub_pi32(simde__m64 a, simde__m64 b)
+{
+#if defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_sub_pi32(a, b);
+#else
+ simde__m64_private r_;
+ simde__m64_private a_ = simde__m64_to_private(a);
+ simde__m64_private b_ = simde__m64_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i32 = vsub_s32(a_.neon_i32, b_.neon_i32);
+#elif defined(SIMDE_MIPS_LOONGSON_MMI_NATIVE)
+ r_.mmi_i32 = psubw_s(a_.mmi_i32, b_.mmi_i32);
+#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
+ r_.i32 = a_.i32 - b_.i32;
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i32) / sizeof(r_.i32[0])); i++) {
+ r_.i32[i] = a_.i32[i] - b_.i32[i];
+ }
+#endif
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#define simde_m_psubd(a, b) simde_mm_sub_pi32(a, b)
+#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
+#define _mm_sub_pi32(a, b) simde_mm_sub_pi32(a, b)
+#define _m_psubd(a, b) simde_mm_sub_pi32(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_subs_pi8(simde__m64 a, simde__m64 b)
+{
+#if defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_subs_pi8(a, b);
+#else
+ simde__m64_private r_;
+ simde__m64_private a_ = simde__m64_to_private(a);
+ simde__m64_private b_ = simde__m64_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i8 = vqsub_s8(a_.neon_i8, b_.neon_i8);
+#elif defined(SIMDE_MIPS_LOONGSON_MMI_NATIVE)
+ r_.mmi_i8 = psubsb(a_.mmi_i8, b_.mmi_i8);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i8) / sizeof(r_.i8[0])); i++) {
+ if (((b_.i8[i]) > 0 && (a_.i8[i]) < INT8_MIN + (b_.i8[i]))) {
+ r_.i8[i] = INT8_MIN;
+ } else if ((b_.i8[i]) < 0 &&
+ (a_.i8[i]) > INT8_MAX + (b_.i8[i])) {
+ r_.i8[i] = INT8_MAX;
+ } else {
+ r_.i8[i] = (a_.i8[i]) - (b_.i8[i]);
+ }
+ }
+#endif
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#define simde_m_psubsb(a, b) simde_mm_subs_pi8(a, b)
+#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
+#define _mm_subs_pi8(a, b) simde_mm_subs_pi8(a, b)
+#define _m_psubsb(a, b) simde_mm_subs_pi8(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_subs_pu8(simde__m64 a, simde__m64 b)
+{
+#if defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_subs_pu8(a, b);
+#else
+ simde__m64_private r_;
+ simde__m64_private a_ = simde__m64_to_private(a);
+ simde__m64_private b_ = simde__m64_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_u8 = vqsub_u8(a_.neon_u8, b_.neon_u8);
+#elif defined(SIMDE_MIPS_LOONGSON_MMI_NATIVE)
+ r_.mmi_u8 = psubusb(a_.mmi_u8, b_.mmi_u8);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.u8) / sizeof(r_.u8[0])); i++) {
+ const int32_t x = a_.u8[i] - b_.u8[i];
+ if (x < 0) {
+ r_.u8[i] = 0;
+ } else if (x > UINT8_MAX) {
+ r_.u8[i] = UINT8_MAX;
+ } else {
+ r_.u8[i] = HEDLEY_STATIC_CAST(uint8_t, x);
+ }
+ }
+#endif
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#define simde_m_psubusb(a, b) simde_mm_subs_pu8(a, b)
+#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
+#define _mm_subs_pu8(a, b) simde_mm_subs_pu8(a, b)
+#define _m_psubusb(a, b) simde_mm_subs_pu8(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_subs_pi16(simde__m64 a, simde__m64 b)
+{
+#if defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_subs_pi16(a, b);
+#else
+ simde__m64_private r_;
+ simde__m64_private a_ = simde__m64_to_private(a);
+ simde__m64_private b_ = simde__m64_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i16 = vqsub_s16(a_.neon_i16, b_.neon_i16);
+#elif defined(SIMDE_MIPS_LOONGSON_MMI_NATIVE)
+ r_.mmi_i16 = psubsh(a_.mmi_i16, b_.mmi_i16);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i16) / sizeof(r_.i16[0])); i++) {
+ if (((b_.i16[i]) > 0 && (a_.i16[i]) < SHRT_MIN + (b_.i16[i]))) {
+ r_.i16[i] = SHRT_MIN;
+ } else if ((b_.i16[i]) < 0 &&
+ (a_.i16[i]) > INT16_MAX + (b_.i16[i])) {
+ r_.i16[i] = INT16_MAX;
+ } else {
+ r_.i16[i] = (a_.i16[i]) - (b_.i16[i]);
+ }
+ }
+#endif
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#define simde_m_psubsw(a, b) simde_mm_subs_pi16(a, b)
+#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
+#define _mm_subs_pi16(a, b) simde_mm_subs_pi16(a, b)
+#define _m_psubsw(a, b) simde_mm_subs_pi16(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_subs_pu16(simde__m64 a, simde__m64 b)
+{
+#if defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_subs_pu16(a, b);
+#else
+ simde__m64_private r_;
+ simde__m64_private a_ = simde__m64_to_private(a);
+ simde__m64_private b_ = simde__m64_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_u16 = vqsub_u16(a_.neon_u16, b_.neon_u16);
+#elif defined(SIMDE_MIPS_LOONGSON_MMI_NATIVE)
+ r_.mmi_u16 = psubush(a_.mmi_u16, b_.mmi_u16);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.u16) / sizeof(r_.u16[0])); i++) {
+ const int x = a_.u16[i] - b_.u16[i];
+ if (x < 0) {
+ r_.u16[i] = 0;
+ } else if (x > UINT16_MAX) {
+ r_.u16[i] = UINT16_MAX;
+ } else {
+ r_.u16[i] = HEDLEY_STATIC_CAST(uint16_t, x);
+ }
+ }
+#endif
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#define simde_m_psubusw(a, b) simde_mm_subs_pu16(a, b)
+#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
+#define _mm_subs_pu16(a, b) simde_mm_subs_pu16(a, b)
+#define _m_psubusw(a, b) simde_mm_subs_pu16(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_unpackhi_pi8(simde__m64 a, simde__m64 b)
+{
+#if defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_unpackhi_pi8(a, b);
+#else
+ simde__m64_private r_;
+ simde__m64_private a_ = simde__m64_to_private(a);
+ simde__m64_private b_ = simde__m64_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ r_.neon_i8 = vzip2_s8(a_.neon_i8, b_.neon_i8);
+#elif defined(SIMDE_SHUFFLE_VECTOR_)
+ r_.i8 = SIMDE_SHUFFLE_VECTOR_(8, 8, a_.i8, b_.i8, 4, 12, 5, 13, 6, 14,
+ 7, 15);
+#elif defined(SIMDE_MIPS_LOONGSON_MMI_NATIVE)
+ r_.mmi_i8 = punpckhbh_s(a_.mmi_i8, b_.mmi_i8);
+#else
+ r_.i8[0] = a_.i8[4];
+ r_.i8[1] = b_.i8[4];
+ r_.i8[2] = a_.i8[5];
+ r_.i8[3] = b_.i8[5];
+ r_.i8[4] = a_.i8[6];
+ r_.i8[5] = b_.i8[6];
+ r_.i8[6] = a_.i8[7];
+ r_.i8[7] = b_.i8[7];
+#endif
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#define simde_m_punpckhbw(a, b) simde_mm_unpackhi_pi8(a, b)
+#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
+#define _mm_unpackhi_pi8(a, b) simde_mm_unpackhi_pi8(a, b)
+#define _m_punpckhbw(a, b) simde_mm_unpackhi_pi8(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_unpackhi_pi16(simde__m64 a, simde__m64 b)
+{
+#if defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_unpackhi_pi16(a, b);
+#else
+ simde__m64_private r_;
+ simde__m64_private a_ = simde__m64_to_private(a);
+ simde__m64_private b_ = simde__m64_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ r_.neon_i16 = vzip2_s16(a_.neon_i16, b_.neon_i16);
+#elif defined(SIMDE_MIPS_LOONGSON_MMI_NATIVE)
+ r_.mmi_i16 = punpckhhw_s(a_.mmi_i16, b_.mmi_i16);
+#elif defined(SIMDE_SHUFFLE_VECTOR_)
+ r_.i16 = SIMDE_SHUFFLE_VECTOR_(16, 8, a_.i16, b_.i16, 2, 6, 3, 7);
+#else
+ r_.i16[0] = a_.i16[2];
+ r_.i16[1] = b_.i16[2];
+ r_.i16[2] = a_.i16[3];
+ r_.i16[3] = b_.i16[3];
+#endif
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#define simde_m_punpckhwd(a, b) simde_mm_unpackhi_pi16(a, b)
+#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
+#define _mm_unpackhi_pi16(a, b) simde_mm_unpackhi_pi16(a, b)
+#define _m_punpckhwd(a, b) simde_mm_unpackhi_pi16(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_unpackhi_pi32(simde__m64 a, simde__m64 b)
+{
+#if defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_unpackhi_pi32(a, b);
+#else
+ simde__m64_private r_;
+ simde__m64_private a_ = simde__m64_to_private(a);
+ simde__m64_private b_ = simde__m64_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ r_.neon_i32 = vzip2_s32(a_.neon_i32, b_.neon_i32);
+#elif defined(SIMDE_MIPS_LOONGSON_MMI_NATIVE)
+ r_.mmi_i32 = punpckhwd_s(a_.mmi_i32, b_.mmi_i32);
+#elif defined(SIMDE_SHUFFLE_VECTOR_)
+ r_.i32 = SIMDE_SHUFFLE_VECTOR_(32, 8, a_.i32, b_.i32, 1, 3);
+#else
+ r_.i32[0] = a_.i32[1];
+ r_.i32[1] = b_.i32[1];
+#endif
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#define simde_m_punpckhdq(a, b) simde_mm_unpackhi_pi32(a, b)
+#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
+#define _mm_unpackhi_pi32(a, b) simde_mm_unpackhi_pi32(a, b)
+#define _m_punpckhdq(a, b) simde_mm_unpackhi_pi32(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_unpacklo_pi8(simde__m64 a, simde__m64 b)
+{
+#if defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_unpacklo_pi8(a, b);
+#else
+ simde__m64_private r_;
+ simde__m64_private a_ = simde__m64_to_private(a);
+ simde__m64_private b_ = simde__m64_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ r_.neon_i8 = vzip1_s8(a_.neon_i8, b_.neon_i8);
+#elif defined(SIMDE_MIPS_LOONGSON_MMI_NATIVE)
+ r_.mmi_i8 = punpcklbh_s(a_.mmi_i8, b_.mmi_i8);
+#elif defined(SIMDE_SHUFFLE_VECTOR_)
+ r_.i8 = SIMDE_SHUFFLE_VECTOR_(8, 8, a_.i8, b_.i8, 0, 8, 1, 9, 2, 10, 3,
+ 11);
+#else
+ r_.i8[0] = a_.i8[0];
+ r_.i8[1] = b_.i8[0];
+ r_.i8[2] = a_.i8[1];
+ r_.i8[3] = b_.i8[1];
+ r_.i8[4] = a_.i8[2];
+ r_.i8[5] = b_.i8[2];
+ r_.i8[6] = a_.i8[3];
+ r_.i8[7] = b_.i8[3];
+#endif
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#define simde_m_punpcklbw(a, b) simde_mm_unpacklo_pi8(a, b)
+#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
+#define _mm_unpacklo_pi8(a, b) simde_mm_unpacklo_pi8(a, b)
+#define _m_punpcklbw(a, b) simde_mm_unpacklo_pi8(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_unpacklo_pi16(simde__m64 a, simde__m64 b)
+{
+#if defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_unpacklo_pi16(a, b);
+#else
+ simde__m64_private r_;
+ simde__m64_private a_ = simde__m64_to_private(a);
+ simde__m64_private b_ = simde__m64_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ r_.neon_i16 = vzip1_s16(a_.neon_i16, b_.neon_i16);
+#elif defined(SIMDE_MIPS_LOONGSON_MMI_NATIVE)
+ r_.mmi_i16 = punpcklhw_s(a_.mmi_i16, b_.mmi_i16);
+#elif defined(SIMDE_SHUFFLE_VECTOR_)
+ r_.i16 = SIMDE_SHUFFLE_VECTOR_(16, 8, a_.i16, b_.i16, 0, 4, 1, 5);
+#else
+ r_.i16[0] = a_.i16[0];
+ r_.i16[1] = b_.i16[0];
+ r_.i16[2] = a_.i16[1];
+ r_.i16[3] = b_.i16[1];
+#endif
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#define simde_m_punpcklwd(a, b) simde_mm_unpacklo_pi16(a, b)
+#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
+#define _mm_unpacklo_pi16(a, b) simde_mm_unpacklo_pi16(a, b)
+#define _m_punpcklwd(a, b) simde_mm_unpacklo_pi16(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_unpacklo_pi32(simde__m64 a, simde__m64 b)
+{
+#if defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_unpacklo_pi32(a, b);
+#else
+ simde__m64_private r_;
+ simde__m64_private a_ = simde__m64_to_private(a);
+ simde__m64_private b_ = simde__m64_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ r_.neon_i32 = vzip1_s32(a_.neon_i32, b_.neon_i32);
+#elif defined(SIMDE_MIPS_LOONGSON_MMI_NATIVE)
+ r_.mmi_i32 = punpcklwd_s(a_.mmi_i32, b_.mmi_i32);
+#elif defined(SIMDE_SHUFFLE_VECTOR_)
+ r_.i32 = SIMDE_SHUFFLE_VECTOR_(32, 8, a_.i32, b_.i32, 0, 2);
+#else
+ r_.i32[0] = a_.i32[0];
+ r_.i32[1] = b_.i32[0];
+#endif
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#define simde_m_punpckldq(a, b) simde_mm_unpacklo_pi32(a, b)
+#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
+#define _mm_unpacklo_pi32(a, b) simde_mm_unpacklo_pi32(a, b)
+#define _m_punpckldq(a, b) simde_mm_unpacklo_pi32(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_xor_si64(simde__m64 a, simde__m64 b)
+{
+#if defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_xor_si64(a, b);
+#else
+ simde__m64_private r_;
+ simde__m64_private a_ = simde__m64_to_private(a);
+ simde__m64_private b_ = simde__m64_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i32 = veor_s32(a_.neon_i32, b_.neon_i32);
+#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
+ r_.i32f = a_.i32f ^ b_.i32f;
+#else
+ r_.u64[0] = a_.u64[0] ^ b_.u64[0];
+#endif
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#define simde_m_pxor(a, b) simde_mm_xor_si64(a, b)
+#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
+#define _mm_xor_si64(a, b) simde_mm_xor_si64(a, b)
+#define _m_pxor(a, b) simde_mm_xor_si64(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+int32_t simde_m_to_int(simde__m64 a)
+{
+#if defined(SIMDE_X86_MMX_NATIVE)
+ return _m_to_int(a);
+#else
+ simde__m64_private a_ = simde__m64_to_private(a);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ HEDLEY_DIAGNOSTIC_PUSH
+#if HEDLEY_HAS_WARNING("-Wvector-conversion") && \
+ SIMDE_DETECT_CLANG_VERSION_NOT(10, 0, 0)
+#pragma clang diagnostic ignored "-Wvector-conversion"
+#endif
+ return vget_lane_s32(a_.neon_i32, 0);
+ HEDLEY_DIAGNOSTIC_POP
+#else
+ return a_.i32[0];
+#endif
+#endif
+}
+#if defined(SIMDE_X86_MMX_ENABLE_NATIVE_ALIASES)
+#define _m_to_int(a) simde_m_to_int(a)
+#endif
+
+SIMDE_END_DECLS_
+
+HEDLEY_DIAGNOSTIC_POP
+
+#endif /* !defined(SIMDE_X86_MMX_H) */
obs-studio-26.1.1.tar.xz/libobs/util/simde/x86/sse.h
Added
+/* SPDX-License-Identifier: MIT
+ *
+ * Permission is hereby granted, free of charge, to any person
+ * obtaining a copy of this software and associated documentation
+ * files (the "Software"), to deal in the Software without
+ * restriction, including without limitation the rights to use, copy,
+ * modify, merge, publish, distribute, sublicense, and/or sell copies
+ * of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be
+ * included in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ *
+ * Copyright:
+ * 2017-2020 Evan Nemerson <evan@nemerson.com>
+ * 2015-2017 John W. Ratcliff <jratcliffscarab@gmail.com>
+ * 2015 Brandon Rowlett <browlett@nvidia.com>
+ * 2015 Ken Fast <kfast@gdeb.com>
+ */
+
+#if !defined(SIMDE_X86_SSE_H)
+#define SIMDE_X86_SSE_H
+
+#include "mmx.h"
+
+#if defined(_WIN32)
+#include <windows.h>
+#endif
+
+HEDLEY_DIAGNOSTIC_PUSH
+SIMDE_DISABLE_UNWANTED_DIAGNOSTICS
+SIMDE_BEGIN_DECLS_
+
+typedef union {
+#if defined(SIMDE_VECTOR_SUBSCRIPT)
+ SIMDE_ALIGN_TO_16 int8_t i8 SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
+ SIMDE_ALIGN_TO_16 int16_t i16 SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
+ SIMDE_ALIGN_TO_16 int32_t i32 SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
+ SIMDE_ALIGN_TO_16 int64_t i64 SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
+ SIMDE_ALIGN_TO_16 uint8_t u8 SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
+ SIMDE_ALIGN_TO_16 uint16_t u16 SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
+ SIMDE_ALIGN_TO_16 uint32_t u32 SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
+ SIMDE_ALIGN_TO_16 uint64_t u64 SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
+#if defined(SIMDE_HAVE_INT128_)
+ SIMDE_ALIGN_TO_16 simde_int128 i128 SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
+ SIMDE_ALIGN_TO_16 simde_uint128 u128 SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
+#endif
+ SIMDE_ALIGN_TO_16 simde_float32 f32 SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
+ SIMDE_ALIGN_TO_16 int_fast32_t i32f SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
+ SIMDE_ALIGN_TO_16 uint_fast32_t u32f SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
+#else
+ SIMDE_ALIGN_TO_16 int8_t i8[16];
+ SIMDE_ALIGN_TO_16 int16_t i16[8];
+ SIMDE_ALIGN_TO_16 int32_t i32[4];
+ SIMDE_ALIGN_TO_16 int64_t i64[2];
+ SIMDE_ALIGN_TO_16 uint8_t u8[16];
+ SIMDE_ALIGN_TO_16 uint16_t u16[8];
+ SIMDE_ALIGN_TO_16 uint32_t u32[4];
+ SIMDE_ALIGN_TO_16 uint64_t u64[2];
+#if defined(SIMDE_HAVE_INT128_)
+ SIMDE_ALIGN_TO_16 simde_int128 i128[1];
+ SIMDE_ALIGN_TO_16 simde_uint128 u128[1];
+#endif
+ SIMDE_ALIGN_TO_16 simde_float32 f32[4];
+ SIMDE_ALIGN_TO_16 int_fast32_t i32f[16 / sizeof(int_fast32_t)];
+ SIMDE_ALIGN_TO_16 uint_fast32_t u32f[16 / sizeof(uint_fast32_t)];
+#endif
+
+ SIMDE_ALIGN_TO_16 simde__m64_private m64_private[2];
+ SIMDE_ALIGN_TO_16 simde__m64 m64[2];
+
+#if defined(SIMDE_X86_SSE_NATIVE)
+ SIMDE_ALIGN_TO_16 __m128 n;
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ SIMDE_ALIGN_TO_16 int8x16_t neon_i8;
+ SIMDE_ALIGN_TO_16 int16x8_t neon_i16;
+ SIMDE_ALIGN_TO_16 int32x4_t neon_i32;
+ SIMDE_ALIGN_TO_16 int64x2_t neon_i64;
+ SIMDE_ALIGN_TO_16 uint8x16_t neon_u8;
+ SIMDE_ALIGN_TO_16 uint16x8_t neon_u16;
+ SIMDE_ALIGN_TO_16 uint32x4_t neon_u32;
+ SIMDE_ALIGN_TO_16 uint64x2_t neon_u64;
+ SIMDE_ALIGN_TO_16 float32x4_t neon_f32;
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ SIMDE_ALIGN_TO_16 float64x2_t neon_f64;
+#endif
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ SIMDE_ALIGN_TO_16 v128_t wasm_v128;
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ SIMDE_ALIGN_TO_16 SIMDE_POWER_ALTIVEC_VECTOR(unsigned char) altivec_u8;
+ SIMDE_ALIGN_TO_16
+ SIMDE_POWER_ALTIVEC_VECTOR(unsigned short) altivec_u16;
+ SIMDE_ALIGN_TO_16 SIMDE_POWER_ALTIVEC_VECTOR(unsigned int) altivec_u32;
+ SIMDE_ALIGN_TO_16 SIMDE_POWER_ALTIVEC_VECTOR(signed char) altivec_i8;
+ SIMDE_ALIGN_TO_16 SIMDE_POWER_ALTIVEC_VECTOR(signed short) altivec_i16;
+ SIMDE_ALIGN_TO_16 SIMDE_POWER_ALTIVEC_VECTOR(signed int) altivec_i32;
+ SIMDE_ALIGN_TO_16 SIMDE_POWER_ALTIVEC_VECTOR(float) altivec_f32;
+#if defined(SIMDE_POWER_ALTIVEC_P7_NATIVE)
+ SIMDE_ALIGN_TO_16
+ SIMDE_POWER_ALTIVEC_VECTOR(unsigned long long) altivec_u64;
+ SIMDE_ALIGN_TO_16
+ SIMDE_POWER_ALTIVEC_VECTOR(signed long long) altivec_i64;
+ SIMDE_ALIGN_TO_16 SIMDE_POWER_ALTIVEC_VECTOR(double) altivec_f64;
+#endif
+#endif
+} simde__m128_private;
+
+#if defined(SIMDE_X86_SSE_NATIVE)
+typedef __m128 simde__m128;
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+typedef float32x4_t simde__m128;
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+typedef v128_t simde__m128;
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+typedef SIMDE_POWER_ALTIVEC_VECTOR(float) simde__m128;
+#elif defined(SIMDE_VECTOR_SUBSCRIPT)
+typedef simde_float32
+ simde__m128 SIMDE_ALIGN_TO_16 SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
+#else
+typedef simde__m128_private simde__m128;
+#endif
+
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+typedef simde__m128 __m128;
+#endif
+
+HEDLEY_STATIC_ASSERT(16 == sizeof(simde__m128), "simde__m128 size incorrect");
+HEDLEY_STATIC_ASSERT(16 == sizeof(simde__m128_private),
+ "simde__m128_private size incorrect");
+#if defined(SIMDE_CHECK_ALIGNMENT) && defined(SIMDE_ALIGN_OF)
+HEDLEY_STATIC_ASSERT(SIMDE_ALIGN_OF(simde__m128) == 16,
+ "simde__m128 is not 16-byte aligned");
+HEDLEY_STATIC_ASSERT(SIMDE_ALIGN_OF(simde__m128_private) == 16,
+ "simde__m128_private is not 16-byte aligned");
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde__m128_from_private(simde__m128_private v)
+{
+ simde__m128 r;
+ simde_memcpy(&r, &v, sizeof(r));
+ return r;
+}
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128_private simde__m128_to_private(simde__m128 v)
+{
+ simde__m128_private r;
+ simde_memcpy(&r, &v, sizeof(r));
+ return r;
+}
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128, int8x16_t, neon, i8)
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128, int16x8_t, neon, i16)
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128, int32x4_t, neon, i32)
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128, int64x2_t, neon, i64)
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128, uint8x16_t, neon, u8)
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128, uint16x8_t, neon, u16)
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128, uint32x4_t, neon, u32)
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128, uint64x2_t, neon, u64)
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128, float32x4_t, neon, f32)
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128, float64x2_t, neon, f64)
+#endif
+#endif /* defined(SIMDE_ARM_NEON_A32V7_NATIVE) */
+
+#if defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128,
+ SIMDE_POWER_ALTIVEC_VECTOR(signed char),
+ altivec, i8)
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128,
+ SIMDE_POWER_ALTIVEC_VECTOR(signed short),
+ altivec, i16)
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128,
+ SIMDE_POWER_ALTIVEC_VECTOR(signed int),
+ altivec, i32)
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(
+ m128, SIMDE_POWER_ALTIVEC_VECTOR(unsigned char), altivec, u8)
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(
+ m128, SIMDE_POWER_ALTIVEC_VECTOR(unsigned short), altivec, u16)
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128,
+ SIMDE_POWER_ALTIVEC_VECTOR(unsigned int),
+ altivec, u32)
+
+#if defined(SIMDE_BUG_GCC_95782)
+SIMDE_FUNCTION_ATTRIBUTES
+SIMDE_POWER_ALTIVEC_VECTOR(float)
+simde__m128_to_altivec_f32(simde__m128 value)
+{
+ simde__m128_private r_ = simde__m128_to_private(value);
+ return r_.altivec_f32;
+}
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde__m128_from_altivec_f32(SIMDE_POWER_ALTIVEC_VECTOR(float)
+ value)
+{
+ simde__m128_private r_;
+ r_.altivec_f32 = value;
+ return simde__m128_from_private(r_);
+}
+#else
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128, SIMDE_POWER_ALTIVEC_VECTOR(float),
+ altivec, f32)
+#endif
+
+#if defined(SIMDE_POWER_ALTIVEC_P7_NATIVE)
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(
+ m128, SIMDE_POWER_ALTIVEC_VECTOR(signed long long), altivec, i64)
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(
+ m128, SIMDE_POWER_ALTIVEC_VECTOR(unsigned long long), altivec, u64)
+#endif
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128, v128_t, wasm, v128);
+#endif /* defined(SIMDE_POWER_ALTIVEC_P6_NATIVE) */
+
+enum {
+#if defined(SIMDE_X86_SSE_NATIVE)
+ SIMDE_MM_ROUND_NEAREST = _MM_ROUND_NEAREST,
+ SIMDE_MM_ROUND_DOWN = _MM_ROUND_DOWN,
+ SIMDE_MM_ROUND_UP = _MM_ROUND_UP,
+ SIMDE_MM_ROUND_TOWARD_ZERO = _MM_ROUND_TOWARD_ZERO
+#else
+ SIMDE_MM_ROUND_NEAREST = 0x0000,
+ SIMDE_MM_ROUND_DOWN = 0x2000,
+ SIMDE_MM_ROUND_UP = 0x4000,
+ SIMDE_MM_ROUND_TOWARD_ZERO = 0x6000
+#endif
+};
+
+#if defined(_MM_FROUND_TO_NEAREST_INT)
+#define SIMDE_MM_FROUND_TO_NEAREST_INT _MM_FROUND_TO_NEAREST_INT
+#define SIMDE_MM_FROUND_TO_NEG_INF _MM_FROUND_TO_NEG_INF
+#define SIMDE_MM_FROUND_TO_POS_INF _MM_FROUND_TO_POS_INF
+#define SIMDE_MM_FROUND_TO_ZERO _MM_FROUND_TO_ZERO
+#define SIMDE_MM_FROUND_CUR_DIRECTION _MM_FROUND_CUR_DIRECTION
+
+#define SIMDE_MM_FROUND_RAISE_EXC _MM_FROUND_RAISE_EXC
+#define SIMDE_MM_FROUND_NO_EXC _MM_FROUND_NO_EXC
+#else
+#define SIMDE_MM_FROUND_TO_NEAREST_INT 0x00
+#define SIMDE_MM_FROUND_TO_NEG_INF 0x01
+#define SIMDE_MM_FROUND_TO_POS_INF 0x02
+#define SIMDE_MM_FROUND_TO_ZERO 0x03
+#define SIMDE_MM_FROUND_CUR_DIRECTION 0x04
+
+#define SIMDE_MM_FROUND_RAISE_EXC 0x00
+#define SIMDE_MM_FROUND_NO_EXC 0x08
+#endif
+
+#define SIMDE_MM_FROUND_NINT \
+ (SIMDE_MM_FROUND_TO_NEAREST_INT | SIMDE_MM_FROUND_RAISE_EXC)
+#define SIMDE_MM_FROUND_FLOOR \
+ (SIMDE_MM_FROUND_TO_NEG_INF | SIMDE_MM_FROUND_RAISE_EXC)
+#define SIMDE_MM_FROUND_CEIL \
+ (SIMDE_MM_FROUND_TO_POS_INF | SIMDE_MM_FROUND_RAISE_EXC)
+#define SIMDE_MM_FROUND_TRUNC \
+ (SIMDE_MM_FROUND_TO_ZERO | SIMDE_MM_FROUND_RAISE_EXC)
+#define SIMDE_MM_FROUND_RINT \
+ (SIMDE_MM_FROUND_CUR_DIRECTION | SIMDE_MM_FROUND_RAISE_EXC)
+#define SIMDE_MM_FROUND_NEARBYINT \
+ (SIMDE_MM_FROUND_CUR_DIRECTION | SIMDE_MM_FROUND_NO_EXC)
+
+#if defined(SIMDE_X86_SSE4_1_ENABLE_NATIVE_ALIASES) && \
+ !defined(_MM_FROUND_TO_NEAREST_INT)
+#define _MM_FROUND_TO_NEAREST_INT SIMDE_MM_FROUND_TO_NEAREST_INT
+#define _MM_FROUND_TO_NEG_INF SIMDE_MM_FROUND_TO_NEG_INF
+#define _MM_FROUND_TO_POS_INF SIMDE_MM_FROUND_TO_POS_INF
+#define _MM_FROUND_TO_ZERO SIMDE_MM_FROUND_TO_ZERO
+#define _MM_FROUND_CUR_DIRECTION SIMDE_MM_FROUND_CUR_DIRECTION
+#define _MM_FROUND_RAISE_EXC SIMDE_MM_FROUND_RAISE_EXC
+#define _MM_FROUND_NINT SIMDE_MM_FROUND_NINT
+#define _MM_FROUND_FLOOR SIMDE_MM_FROUND_FLOOR
+#define _MM_FROUND_CEIL SIMDE_MM_FROUND_CEIL
+#define _MM_FROUND_TRUNC SIMDE_MM_FROUND_TRUNC
+#define _MM_FROUND_RINT SIMDE_MM_FROUND_RINT
+#define _MM_FROUND_NEARBYINT SIMDE_MM_FROUND_NEARBYINT
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+unsigned int SIMDE_MM_GET_ROUNDING_MODE(void)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ return _MM_GET_ROUNDING_MODE();
+#elif defined(SIMDE_HAVE_FENV_H)
+ unsigned int vfe_mode;
+
+ switch (fegetround()) {
+#if defined(FE_TONEAREST)
+ case FE_TONEAREST:
+ vfe_mode = SIMDE_MM_ROUND_NEAREST;
+ break;
+#endif
+
+#if defined(FE_TOWARDZERO)
+ case FE_TOWARDZERO:
+ vfe_mode = SIMDE_MM_ROUND_TOWARD_ZERO;
+ break;
+#endif
+
+#if defined(FE_UPWARD)
+ case FE_UPWARD:
+ vfe_mode = SIMDE_MM_ROUND_UP;
+ break;
+#endif
+
+#if defined(FE_DOWNWARD)
+ case FE_DOWNWARD:
+ vfe_mode = SIMDE_MM_ROUND_DOWN;
+ break;
+#endif
+
+ default:
+ vfe_mode = SIMDE_MM_ROUND_NEAREST;
+ break;
+ }
+
+ return vfe_mode;
+#else
+ return SIMDE_MM_ROUND_NEAREST;
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _MM_GET_ROUNDING_MODE() SIMDE_MM_GET_ROUNDING_MODE()
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+void SIMDE_MM_SET_ROUNDING_MODE(unsigned int a)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ _MM_SET_ROUNDING_MODE(a);
+#elif defined(SIMDE_HAVE_FENV_H)
+ int fe_mode = FE_TONEAREST;
+
+ switch (a) {
+#if defined(FE_TONEAREST)
+ case SIMDE_MM_ROUND_NEAREST:
+ fe_mode = FE_TONEAREST;
+ break;
+#endif
+
+#if defined(FE_TOWARDZERO)
+ case SIMDE_MM_ROUND_TOWARD_ZERO:
+ fe_mode = FE_TOWARDZERO;
+ break;
+#endif
+
+#if defined(FE_DOWNWARD)
+ case SIMDE_MM_ROUND_DOWN:
+ fe_mode = FE_DOWNWARD;
+ break;
+#endif
+
+#if defined(FE_UPWARD)
+ case SIMDE_MM_ROUND_UP:
+ fe_mode = FE_UPWARD;
+ break;
+#endif
+
+ default:
+ return;
+ }
+
+ fesetround(fe_mode);
+#else
+ (void)a;
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _MM_SET_ROUNDING_MODE(a) SIMDE_MM_SET_ROUNDING_MODE(a)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+uint32_t simde_mm_getcsr(void)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ return _mm_getcsr();
+#else
+ return SIMDE_MM_GET_ROUNDING_MODE();
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_getcsr() simde_mm_getcsr()
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+void simde_mm_setcsr(uint32_t a)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ _mm_setcsr(a);
+#else
+ SIMDE_MM_SET_ROUNDING_MODE(HEDLEY_STATIC_CAST(unsigned int, a));
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_setcsr(a) simde_mm_setcsr(a)
+#endif
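A minimal usage sketch of the rounding-mode wrappers defined above (save, switch to truncation, restore); when neither native SSE nor <fenv.h> is available, the getter simply reports nearest and the setter is a no-op:
  unsigned int saved = SIMDE_MM_GET_ROUNDING_MODE();
  SIMDE_MM_SET_ROUNDING_MODE(SIMDE_MM_ROUND_TOWARD_ZERO);
  /* FP operations and nearbyint()/rint() now round toward zero */
  SIMDE_MM_SET_ROUNDING_MODE(saved);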
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_x_mm_round_ps(simde__m128 a, int rounding, int lax_rounding)
+ SIMDE_REQUIRE_CONSTANT_RANGE(rounding, 0, 15)
+ SIMDE_REQUIRE_CONSTANT_RANGE(lax_rounding, 0, 1)
+{
+ simde__m128_private r_, a_ = simde__m128_to_private(a);
+
+ (void)lax_rounding;
+
+/* For architectures which lack a current direction SIMD instruction.
+ *
+ * Note that NEON actually has a current rounding mode instruction,
+ * but in ARMv8+ the rounding mode is ignored and nearest is always
+ * used, so we treat ARMv7 as having a rounding mode but ARMv8 as
+ * not. */
+#if defined(SIMDE_POWER_ALTIVEC_P6_NATIVE) || defined(SIMDE_ARM_NEON_A32V8)
+ if ((rounding & 7) == SIMDE_MM_FROUND_CUR_DIRECTION)
+ rounding = HEDLEY_STATIC_CAST(int, SIMDE_MM_GET_ROUNDING_MODE())
+ << 13;
+#endif
+
+ switch (rounding & ~SIMDE_MM_FROUND_NO_EXC) {
+ case SIMDE_MM_FROUND_CUR_DIRECTION:
+#if defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ r_.altivec_f32 = HEDLEY_REINTERPRET_CAST(
+ SIMDE_POWER_ALTIVEC_VECTOR(float),
+ vec_round(a_.altivec_f32));
+#elif defined(SIMDE_ARM_NEON_A32V8_NATIVE) && !defined(SIMDE_BUG_GCC_95399)
+ r_.neon_f32 = vrndiq_f32(a_.neon_f32);
+#elif defined(simde_math_nearbyintf)
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.f32) / sizeof(r_.f32[0]));
+ i++) {
+ r_.f32[i] = simde_math_nearbyintf(a_.f32[i]);
+ }
+#else
+ HEDLEY_UNREACHABLE_RETURN(simde_mm_undefined_pd());
+#endif
+ break;
+
+ case SIMDE_MM_FROUND_TO_NEAREST_INT:
+#if defined(SIMDE_POWER_ALTIVEC_P8_NATIVE)
+ r_.altivec_f32 = HEDLEY_REINTERPRET_CAST(
+ SIMDE_POWER_ALTIVEC_VECTOR(float),
+ vec_rint(a_.altivec_f32));
+#elif defined(SIMDE_ARM_NEON_A32V8_NATIVE)
+ r_.neon_f32 = vrndnq_f32(a_.neon_f32);
+#elif defined(simde_math_roundevenf)
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.f32) / sizeof(r_.f32[0]));
+ i++) {
+ r_.f32[i] = simde_math_roundevenf(a_.f32[i]);
+ }
+#else
+ HEDLEY_UNREACHABLE_RETURN(simde_mm_undefined_pd());
+#endif
+ break;
+
+ case SIMDE_MM_FROUND_TO_NEG_INF:
+#if defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ r_.altivec_f32 = HEDLEY_REINTERPRET_CAST(
+ SIMDE_POWER_ALTIVEC_VECTOR(float),
+ vec_floor(a_.altivec_f32));
+#elif defined(SIMDE_ARM_NEON_A32V8_NATIVE)
+ r_.neon_f32 = vrndmq_f32(a_.neon_f32);
+#elif defined(simde_math_floorf)
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.f32) / sizeof(r_.f32[0]));
+ i++) {
+ r_.f32[i] = simde_math_floorf(a_.f32[i]);
+ }
+#else
+ HEDLEY_UNREACHABLE_RETURN(simde_mm_undefined_pd());
+#endif
+ break;
+
+ case SIMDE_MM_FROUND_TO_POS_INF:
+#if defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ r_.altivec_f32 = HEDLEY_REINTERPRET_CAST(
+ SIMDE_POWER_ALTIVEC_VECTOR(float),
+ vec_ceil(a_.altivec_f32));
+#elif defined(SIMDE_ARM_NEON_A32V8_NATIVE)
+ r_.neon_f32 = vrndpq_f32(a_.neon_f32);
+#elif defined(simde_math_ceilf)
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.f32) / sizeof(r_.f32[0]));
+ i++) {
+ r_.f32[i] = simde_math_ceilf(a_.f32[i]);
+ }
+#else
+ HEDLEY_UNREACHABLE_RETURN(simde_mm_undefined_pd());
+#endif
+ break;
+
+ case SIMDE_MM_FROUND_TO_ZERO:
+#if defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ r_.altivec_f32 = HEDLEY_REINTERPRET_CAST(
+ SIMDE_POWER_ALTIVEC_VECTOR(float),
+ vec_trunc(a_.altivec_f32));
+#elif defined(SIMDE_ARM_NEON_A32V8_NATIVE)
+ r_.neon_f32 = vrndq_f32(a_.neon_f32);
+#elif defined(simde_math_truncf)
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.f32) / sizeof(r_.f32[0]));
+ i++) {
+ r_.f32[i] = simde_math_truncf(a_.f32[i]);
+ }
+#else
+ HEDLEY_UNREACHABLE_RETURN(simde_mm_undefined_pd());
+#endif
+ break;
+
+ default:
+ HEDLEY_UNREACHABLE_RETURN(simde_mm_undefined_pd());
+ }
+
+ return simde__m128_from_private(r_);
+}
+#if defined(SIMDE_X86_SSE4_1_NATIVE)
+#define simde_mm_round_ps(a, rounding) _mm_round_ps((a), (rounding))
+#else
+#define simde_mm_round_ps(a, rounding) simde_x_mm_round_ps((a), (rounding), 0)
+#endif
+#if defined(SIMDE_X86_SSE4_1_ENABLE_NATIVE_ALIASES)
+#define _mm_round_ps(a, rounding) simde_mm_round_ps((a), (rounding))
+#endif
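A short sketch of the dispatch above using the explicit nearest-integer mode; lane order follows simde_mm_set_ps (the last argument is the lowest lane):
  simde__m128 v = simde_mm_round_ps(simde_mm_set_ps(2.5f, -1.5f, 0.4f, 1.6f),
                                    SIMDE_MM_FROUND_TO_NEAREST_INT);
  /* lanes (low to high) round half-to-even: 2.0f, 0.0f, -2.0f, 2.0f */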
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_set_ps(simde_float32 e3, simde_float32 e2,
+ simde_float32 e1, simde_float32 e0)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ return _mm_set_ps(e3, e2, e1, e0);
+#else
+ simde__m128_private r_;
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ SIMDE_ALIGN_TO_16 simde_float32 data[4] = {e0, e1, e2, e3};
+ r_.neon_f32 = vld1q_f32(data);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_f32x4_make(e0, e1, e2, e3);
+#else
+ r_.f32[0] = e0;
+ r_.f32[1] = e1;
+ r_.f32[2] = e2;
+ r_.f32[3] = e3;
+#endif
+
+ return simde__m128_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_set_ps(e3, e2, e1, e0) simde_mm_set_ps(e3, e2, e1, e0)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_set_ps1(simde_float32 a)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ return _mm_set_ps1(a);
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ return vdupq_n_f32(a);
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ (void)a;
+ return vec_splats(a);
+#else
+ return simde_mm_set_ps(a, a, a, a);
+#endif
+}
+#define simde_mm_set1_ps(a) simde_mm_set_ps1(a)
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_set_ps1(a) simde_mm_set_ps1(a)
+#define _mm_set1_ps(a) simde_mm_set1_ps(a)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_move_ss(simde__m128 a, simde__m128 b)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ return _mm_move_ss(a, b);
+#else
+ simde__m128_private r_, a_ = simde__m128_to_private(a),
+ b_ = simde__m128_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_f32 =
+ vsetq_lane_f32(vgetq_lane_f32(b_.neon_f32, 0), a_.neon_f32, 0);
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ SIMDE_POWER_ALTIVEC_VECTOR(unsigned char)
+ m = {16, 17, 18, 19, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15};
+ r_.altivec_f32 = vec_perm(a_.altivec_f32, b_.altivec_f32, m);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_v8x16_shuffle(b_.wasm_v128, a_.wasm_v128, 0, 1, 2,
+ 3, 20, 21, 22, 23, 24, 25, 26, 27, 28,
+ 29, 30, 31);
+#elif defined(SIMDE_SHUFFLE_VECTOR_)
+ r_.f32 = SIMDE_SHUFFLE_VECTOR_(32, 16, a_.f32, b_.f32, 4, 1, 2, 3);
+#else
+ r_.f32[0] = b_.f32[0];
+ r_.f32[1] = a_.f32[1];
+ r_.f32[2] = a_.f32[2];
+ r_.f32[3] = a_.f32[3];
+#endif
+
+ return simde__m128_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_move_ss(a, b) simde_mm_move_ss((a), (b))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_add_ps(simde__m128 a, simde__m128 b)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ return _mm_add_ps(a, b);
+#else
+ simde__m128_private r_, a_ = simde__m128_to_private(a),
+ b_ = simde__m128_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_f32 = vaddq_f32(a_.neon_f32, b_.neon_f32);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_f32x4_add(a_.wasm_v128, b_.wasm_v128);
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ r_.altivec_f32 = vec_add(a_.altivec_f32, b_.altivec_f32);
+#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
+ r_.f32 = a_.f32 + b_.f32;
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.f32) / sizeof(r_.f32[0])); i++) {
+ r_.f32[i] = a_.f32[i] + b_.f32[i];
+ }
+#endif
+
+ return simde__m128_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_add_ps(a, b) simde_mm_add_ps((a), (b))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_add_ss(simde__m128 a, simde__m128 b)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ return _mm_add_ss(a, b);
+#elif (SIMDE_NATURAL_VECTOR_SIZE > 0)
+ return simde_mm_move_ss(a, simde_mm_add_ps(a, b));
+#else
+ simde__m128_private r_, a_ = simde__m128_to_private(a),
+ b_ = simde__m128_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ float32_t b0 = vgetq_lane_f32(b_.neon_f32, 0);
+ float32x4_t value = vsetq_lane_f32(b0, vdupq_n_f32(0), 0);
+ // the upper values in the result must be the remnants of <a>.
+ r_.neon_f32 = vaddq_f32(a_.neon_f32, value);
+#else
+ r_.f32[0] = a_.f32[0] + b_.f32[0];
+ r_.f32[1] = a_.f32[1];
+ r_.f32[2] = a_.f32[2];
+ r_.f32[3] = a_.f32[3];
+#endif
+
+ return simde__m128_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_add_ss(a, b) simde_mm_add_ss((a), (b))
+#endif
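A small sketch of the scalar-lane contract the *_ss forms above implement: only lane 0 is computed, and lanes 1-3 are carried over from a:
  simde__m128 a = simde_mm_set_ps(40.0f, 30.0f, 20.0f, 10.0f); /* lanes: 10, 20, 30, 40 */
  simde__m128 b = simde_mm_set_ps(4.0f, 3.0f, 2.0f, 1.0f);     /* lanes:  1,  2,  3,  4 */
  simde__m128 r = simde_mm_add_ss(a, b);                       /* lanes: 11, 20, 30, 40 */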
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_and_ps(simde__m128 a, simde__m128 b)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ return _mm_and_ps(a, b);
+#else
+ simde__m128_private r_, a_ = simde__m128_to_private(a),
+ b_ = simde__m128_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i32 = vandq_s32(a_.neon_i32, b_.neon_i32);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_v128_and(a_.wasm_v128, b_.wasm_v128);
+#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
+ r_.i32 = a_.i32 & b_.i32;
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ r_.altivec_f32 = vec_and(a_.altivec_f32, b_.altivec_f32);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i32) / sizeof(r_.i32[0])); i++) {
+ r_.i32[i] = a_.i32[i] & b_.i32[i];
+ }
+#endif
+
+ return simde__m128_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_and_ps(a, b) simde_mm_and_ps((a), (b))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_andnot_ps(simde__m128 a, simde__m128 b)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ return _mm_andnot_ps(a, b);
+#else
+ simde__m128_private r_, a_ = simde__m128_to_private(a),
+ b_ = simde__m128_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i32 = vbicq_s32(b_.neon_i32, a_.neon_i32);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_v128_andnot(b_.wasm_v128, a_.wasm_v128);
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ r_.altivec_f32 = vec_andc(b_.altivec_f32, a_.altivec_f32);
+#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
+ r_.i32 = ~a_.i32 & b_.i32;
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i32) / sizeof(r_.i32[0])); i++) {
+ r_.i32[i] = ~(a_.i32[i]) & b_.i32[i];
+ }
+#endif
+
+ return simde__m128_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_andnot_ps(a, b) simde_mm_andnot_ps((a), (b))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_xor_ps(simde__m128 a, simde__m128 b)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ return _mm_xor_ps(a, b);
+#else
+ simde__m128_private r_, a_ = simde__m128_to_private(a),
+ b_ = simde__m128_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i32 = veorq_s32(a_.neon_i32, b_.neon_i32);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_v128_xor(a_.wasm_v128, b_.wasm_v128);
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ r_.altivec_i32 = vec_xor(a_.altivec_i32, b_.altivec_i32);
+#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
+ r_.i32f = a_.i32f ^ b_.i32f;
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.u32) / sizeof(r_.u32[0])); i++) {
+ r_.u32[i] = a_.u32[i] ^ b_.u32[i];
+ }
+#endif
+
+ return simde__m128_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_xor_ps(a, b) simde_mm_xor_ps((a), (b))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_or_ps(simde__m128 a, simde__m128 b)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ return _mm_or_ps(a, b);
+#else
+ simde__m128_private r_, a_ = simde__m128_to_private(a),
+ b_ = simde__m128_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i32 = vorrq_s32(a_.neon_i32, b_.neon_i32);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_v128_or(a_.wasm_v128, b_.wasm_v128);
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ r_.altivec_i32 = vec_or(a_.altivec_i32, b_.altivec_i32);
+#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
+ r_.i32f = a_.i32f | b_.i32f;
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.u32) / sizeof(r_.u32[0])); i++) {
+ r_.u32[i] = a_.u32[i] | b_.u32[i];
+ }
+#endif
+
+ return simde__m128_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_or_ps(a, b) simde_mm_or_ps((a), (b))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_x_mm_not_ps(simde__m128 a)
+{
+#if defined(SIMDE_X86_AVX512VL_NATIVE)
+ __m128i ai = _mm_castps_si128(a);
+ return _mm_castsi128_ps(_mm_ternarylogic_epi32(ai, ai, ai, 0x55));
+#elif defined(SIMDE_X86_SSE2_NATIVE)
+ /* Note: we use ints instead of floats because we don't want cmpeq
+ * to return false for (NaN, NaN) */
+ __m128i ai = _mm_castps_si128(a);
+ return _mm_castsi128_ps(_mm_andnot_si128(ai, _mm_cmpeq_epi32(ai, ai)));
+#else
+ simde__m128_private r_, a_ = simde__m128_to_private(a);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i32 = vmvnq_s32(a_.neon_i32);
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ r_.altivec_i32 = vec_nor(a_.altivec_i32, a_.altivec_i32);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_v128_not(a_.wasm_v128);
+#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
+ r_.i32 = ~a_.i32;
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i32) / sizeof(r_.i32[0])); i++) {
+ r_.i32[i] = ~(a_.i32[i]);
+ }
+#endif
+
+ return simde__m128_from_private(r_);
+#endif
+}
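A tiny sketch of what the helper above computes, a bitwise NOT of each 32-bit lane pattern:
  simde__m128 one = simde_mm_set_ps1(1.0f);  /* each lane holds the bits 0x3f800000 */
  simde__m128 inv = simde_x_mm_not_ps(one);  /* each lane now holds the bits 0xc07fffff */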
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_x_mm_select_ps(simde__m128 a, simde__m128 b, simde__m128 mask)
+{
+/* This function is for when you want to blend two elements together
+ * according to a mask. It is similar to _mm_blendv_ps, except that
+ * it is undefined whether the blend is based on the highest bit in
+ * each lane (like blendv) or just bitwise operations. This allows
+ * us to implement the function efficiently everywhere.
+ *
+ * Basically, you promise that all the lanes in mask are either 0 or
+ * ~0. */
+#if defined(SIMDE_X86_SSE4_1_NATIVE)
+ return _mm_blendv_ps(a, b, mask);
+#else
+ simde__m128_private r_, a_ = simde__m128_to_private(a),
+ b_ = simde__m128_to_private(b),
+ mask_ = simde__m128_to_private(mask);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i32 = vbslq_s32(mask_.neon_u32, b_.neon_i32, a_.neon_i32);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_v128_bitselect(b_.wasm_v128, a_.wasm_v128,
+ mask_.wasm_v128);
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ r_.altivec_i32 =
+ vec_sel(a_.altivec_i32, b_.altivec_i32, mask_.altivec_u32);
+#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
+ r_.i32 = a_.i32 ^ ((a_.i32 ^ b_.i32) & mask_.i32);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i32) / sizeof(r_.i32[0])); i++) {
+ r_.i32[i] = a_.i32[i] ^
+ ((a_.i32[i] ^ b_.i32[i]) & mask_.i32[i]);
+ }
+#endif
+
+ return simde__m128_from_private(r_);
+#endif
+}
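A short sketch of the mask contract described in the comment above: the mask lanes come from a comparison, so each lane is either all ones or all zeros (simde_mm_cmplt_ps is defined further down in this header):
  simde__m128 x = simde_mm_set_ps(3.0f, 1.0f, 4.0f, 1.0f);
  simde__m128 y = simde_mm_set_ps(2.0f, 7.0f, 1.0f, 8.0f);
  simde__m128 m = simde_mm_cmplt_ps(x, y);       /* ~0 where x < y, 0 elsewhere */
  simde__m128 r = simde_x_mm_select_ps(x, y, m); /* picks y where x < y, i.e. a per-lane maximum */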
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_avg_pu16(simde__m64 a, simde__m64 b)
+{
+#if defined(SIMDE_X86_SSE_NATIVE) && defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_avg_pu16(a, b);
+#else
+ simde__m64_private r_, a_ = simde__m64_to_private(a),
+ b_ = simde__m64_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_u16 = vrhadd_u16(b_.neon_u16, a_.neon_u16);
+#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS) && \
+ defined(SIMDE_VECTOR_SUBSCRIPT_SCALAR) && \
+ defined(SIMDE_CONVERT_VECTOR_)
+ uint32_t wa SIMDE_VECTOR(16);
+ uint32_t wb SIMDE_VECTOR(16);
+ uint32_t wr SIMDE_VECTOR(16);
+ SIMDE_CONVERT_VECTOR_(wa, a_.u16);
+ SIMDE_CONVERT_VECTOR_(wb, b_.u16);
+ wr = (wa + wb + 1) >> 1;
+ SIMDE_CONVERT_VECTOR_(r_.u16, wr);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.u16) / sizeof(r_.u16[0])); i++) {
+ r_.u16[i] = (a_.u16[i] + b_.u16[i] + 1) >> 1;
+ }
+#endif
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#define simde_m_pavgw(a, b) simde_mm_avg_pu16(a, b)
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_avg_pu16(a, b) simde_mm_avg_pu16(a, b)
+#define _m_pavgw(a, b) simde_mm_avg_pu16(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_avg_pu8(simde__m64 a, simde__m64 b)
+{
+#if defined(SIMDE_X86_SSE_NATIVE) && defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_avg_pu8(a, b);
+#else
+ simde__m64_private r_, a_ = simde__m64_to_private(a),
+ b_ = simde__m64_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_u8 = vrhadd_u8(b_.neon_u8, a_.neon_u8);
+#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS) && \
+ defined(SIMDE_VECTOR_SUBSCRIPT_SCALAR) && \
+ defined(SIMDE_CONVERT_VECTOR_)
+ uint16_t wa SIMDE_VECTOR(16);
+ uint16_t wb SIMDE_VECTOR(16);
+ uint16_t wr SIMDE_VECTOR(16);
+ SIMDE_CONVERT_VECTOR_(wa, a_.u8);
+ SIMDE_CONVERT_VECTOR_(wb, b_.u8);
+ wr = (wa + wb + 1) >> 1;
+ SIMDE_CONVERT_VECTOR_(r_.u8, wr);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.u8) / sizeof(r_.u8[0])); i++) {
+ r_.u8[i] = (a_.u8[i] + b_.u8[i] + 1) >> 1;
+ }
+#endif
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#define simde_m_pavgb(a, b) simde_mm_avg_pu8(a, b)
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_avg_pu8(a, b) simde_mm_avg_pu8(a, b)
+#define _m_pavgb(a, b) simde_mm_avg_pu8(a, b)
+#endif
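A one-line arithmetic check of the rounding-average formula used above, (a + b + 1) >> 1; simde_mm_set1_pi8 is assumed to come from the mmx.h part of this header set:
  simde__m64 r = simde_mm_avg_pu8(simde_mm_set1_pi8(7), simde_mm_set1_pi8(4)); /* (7 + 4 + 1) >> 1 = 6 per byte */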
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_x_mm_abs_ps(simde__m128 a)
+{
+#if defined(SIMDE_X86_AVX512F_NATIVE) && \
+ (!defined(HEDLEY_GCC_VERSION) || HEDLEY_GCC_VERSION_CHECK(7, 1, 0))
+ return _mm512_castps512_ps128(_mm512_abs_ps(_mm512_castps128_ps512(a)));
+#else
+ simde__m128_private r_, a_ = simde__m128_to_private(a);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_f32 = vabsq_f32(a_.neon_f32);
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ r_.altivec_f32 = vec_abs(a_.altivec_f32);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_f32x4_abs(a_.wasm_v128);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.f32) / sizeof(r_.f32[0])); i++) {
+ r_.f32[i] = simde_math_fabsf(a_.f32[i]);
+ }
+#endif
+
+ return simde__m128_from_private(r_);
+#endif
+}
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_cmpeq_ps(simde__m128 a, simde__m128 b)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ return _mm_cmpeq_ps(a, b);
+#else
+ simde__m128_private r_, a_ = simde__m128_to_private(a),
+ b_ = simde__m128_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_u32 = vceqq_f32(a_.neon_f32, b_.neon_f32);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_f32x4_eq(a_.wasm_v128, b_.wasm_v128);
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ r_.altivec_f32 = HEDLEY_REINTERPRET_CAST(
+ SIMDE_POWER_ALTIVEC_VECTOR(float),
+ vec_cmpeq(a_.altivec_f32, b_.altivec_f32));
+#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
+ r_.i32 = HEDLEY_STATIC_CAST(__typeof__(r_.i32), a_.f32 == b_.f32);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.f32) / sizeof(r_.f32[0])); i++) {
+ r_.u32[i] = (a_.f32[i] == b_.f32[i]) ? ~UINT32_C(0)
+ : UINT32_C(0);
+ }
+#endif
+
+ return simde__m128_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_cmpeq_ps(a, b) simde_mm_cmpeq_ps((a), (b))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_cmpeq_ss(simde__m128 a, simde__m128 b)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ return _mm_cmpeq_ss(a, b);
+#elif (SIMDE_NATURAL_VECTOR_SIZE > 0)
+ return simde_mm_move_ss(a, simde_mm_cmpeq_ps(a, b));
+#else
+ simde__m128_private r_, a_ = simde__m128_to_private(a),
+ b_ = simde__m128_to_private(b);
+
+ r_.u32[0] = (a_.f32[0] == b_.f32[0]) ? ~UINT32_C(0) : UINT32_C(0);
+ SIMDE_VECTORIZE
+ for (size_t i = 1; i < (sizeof(r_.f32) / sizeof(r_.f32[0])); i++) {
+ r_.u32[i] = a_.u32[i];
+ }
+
+ return simde__m128_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_cmpeq_ss(a, b) simde_mm_cmpeq_ss((a), (b))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_cmpge_ps(simde__m128 a, simde__m128 b)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ return _mm_cmpge_ps(a, b);
+#else
+ simde__m128_private r_, a_ = simde__m128_to_private(a),
+ b_ = simde__m128_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_u32 = vcgeq_f32(a_.neon_f32, b_.neon_f32);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_f32x4_ge(a_.wasm_v128, b_.wasm_v128);
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ r_.altivec_f32 = HEDLEY_REINTERPRET_CAST(
+ SIMDE_POWER_ALTIVEC_VECTOR(float),
+ vec_cmpge(a_.altivec_f32, b_.altivec_f32));
+#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
+ r_.i32 = HEDLEY_STATIC_CAST(__typeof__(r_.i32), (a_.f32 >= b_.f32));
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.f32) / sizeof(r_.f32[0])); i++) {
+ r_.u32[i] = (a_.f32[i] >= b_.f32[i]) ? ~UINT32_C(0)
+ : UINT32_C(0);
+ }
+#endif
+
+ return simde__m128_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_cmpge_ps(a, b) simde_mm_cmpge_ps((a), (b))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_cmpge_ss(simde__m128 a, simde__m128 b)
+{
+#if defined(SIMDE_X86_SSE_NATIVE) && !defined(__PGI)
+ return _mm_cmpge_ss(a, b);
+#elif (SIMDE_NATURAL_VECTOR_SIZE > 0)
+ return simde_mm_move_ss(a, simde_mm_cmpge_ps(a, b));
+#else
+ simde__m128_private r_, a_ = simde__m128_to_private(a),
+ b_ = simde__m128_to_private(b);
+
+ r_.u32[0] = (a_.f32[0] >= b_.f32[0]) ? ~UINT32_C(0) : UINT32_C(0);
+ SIMDE_VECTORIZE
+ for (size_t i = 1; i < (sizeof(r_.f32) / sizeof(r_.f32[0])); i++) {
+ r_.u32[i] = a_.u32[i];
+ }
+
+ return simde__m128_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_cmpge_ss(a, b) simde_mm_cmpge_ss((a), (b))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_cmpgt_ps(simde__m128 a, simde__m128 b)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ return _mm_cmpgt_ps(a, b);
+#else
+ simde__m128_private r_, a_ = simde__m128_to_private(a),
+ b_ = simde__m128_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_u32 = vcgtq_f32(a_.neon_f32, b_.neon_f32);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_f32x4_gt(a_.wasm_v128, b_.wasm_v128);
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ r_.altivec_f32 = HEDLEY_REINTERPRET_CAST(
+ SIMDE_POWER_ALTIVEC_VECTOR(float),
+ vec_cmpgt(a_.altivec_f32, b_.altivec_f32));
+#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
+ r_.i32 = HEDLEY_STATIC_CAST(__typeof__(r_.i32), (a_.f32 > b_.f32));
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.f32) / sizeof(r_.f32[0])); i++) {
+ r_.u32[i] = (a_.f32[i] > b_.f32[i]) ? ~UINT32_C(0)
+ : UINT32_C(0);
+ }
+#endif
+
+ return simde__m128_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_cmpgt_ps(a, b) simde_mm_cmpgt_ps((a), (b))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_cmpgt_ss(simde__m128 a, simde__m128 b)
+{
+#if defined(SIMDE_X86_SSE_NATIVE) && !defined(__PGI)
+ return _mm_cmpgt_ss(a, b);
+#elif (SIMDE_NATURAL_VECTOR_SIZE > 0)
+ return simde_mm_move_ss(a, simde_mm_cmpgt_ps(a, b));
+#else
+ simde__m128_private r_, a_ = simde__m128_to_private(a),
+ b_ = simde__m128_to_private(b);
+
+ r_.u32[0] = (a_.f32[0] > b_.f32[0]) ? ~UINT32_C(0) : UINT32_C(0);
+ SIMDE_VECTORIZE
+ for (size_t i = 1; i < (sizeof(r_.f32) / sizeof(r_.f32[0])); i++) {
+ r_.u32[i] = a_.u32[i];
+ }
+
+ return simde__m128_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_cmpgt_ss(a, b) simde_mm_cmpgt_ss((a), (b))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_cmple_ps(simde__m128 a, simde__m128 b)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ return _mm_cmple_ps(a, b);
+#else
+ simde__m128_private r_, a_ = simde__m128_to_private(a),
+ b_ = simde__m128_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_u32 = vcleq_f32(a_.neon_f32, b_.neon_f32);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_f32x4_le(a_.wasm_v128, b_.wasm_v128);
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ r_.altivec_f32 = HEDLEY_REINTERPRET_CAST(
+ SIMDE_POWER_ALTIVEC_VECTOR(float),
+ vec_cmple(a_.altivec_f32, b_.altivec_f32));
+#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
+ r_.i32 = HEDLEY_STATIC_CAST(__typeof__(r_.i32), (a_.f32 <= b_.f32));
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.f32) / sizeof(r_.f32[0])); i++) {
+ r_.u32[i] = (a_.f32[i] <= b_.f32[i]) ? ~UINT32_C(0)
+ : UINT32_C(0);
+ }
+#endif
+
+ return simde__m128_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_cmple_ps(a, b) simde_mm_cmple_ps((a), (b))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_cmple_ss(simde__m128 a, simde__m128 b)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ return _mm_cmple_ss(a, b);
+#elif (SIMDE_NATURAL_VECTOR_SIZE > 0)
+ return simde_mm_move_ss(a, simde_mm_cmple_ps(a, b));
+#else
+ simde__m128_private r_, a_ = simde__m128_to_private(a),
+ b_ = simde__m128_to_private(b);
+
+ r_.u32[0] = (a_.f32[0] <= b_.f32[0]) ? ~UINT32_C(0) : UINT32_C(0);
+ SIMDE_VECTORIZE
+ for (size_t i = 1; i < (sizeof(r_.f32) / sizeof(r_.f32[0])); i++) {
+ r_.u32[i] = a_.u32[i];
+ }
+
+ return simde__m128_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_cmple_ss(a, b) simde_mm_cmple_ss((a), (b))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_cmplt_ps(simde__m128 a, simde__m128 b)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ return _mm_cmplt_ps(a, b);
+#else
+ simde__m128_private r_, a_ = simde__m128_to_private(a),
+ b_ = simde__m128_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_u32 = vcltq_f32(a_.neon_f32, b_.neon_f32);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_f32x4_lt(a_.wasm_v128, b_.wasm_v128);
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ r_.altivec_f32 = HEDLEY_REINTERPRET_CAST(
+ SIMDE_POWER_ALTIVEC_VECTOR(float),
+ vec_cmplt(a_.altivec_f32, b_.altivec_f32));
+#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
+ r_.i32 = HEDLEY_STATIC_CAST(__typeof__(r_.i32), (a_.f32 < b_.f32));
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.f32) / sizeof(r_.f32[0])); i++) {
+ r_.u32[i] = (a_.f32[i] < b_.f32[i]) ? ~UINT32_C(0)
+ : UINT32_C(0);
+ }
+#endif
+
+ return simde__m128_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_cmplt_ps(a, b) simde_mm_cmplt_ps((a), (b))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_cmplt_ss(simde__m128 a, simde__m128 b)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ return _mm_cmplt_ss(a, b);
+#elif (SIMDE_NATURAL_VECTOR_SIZE > 0)
+ return simde_mm_move_ss(a, simde_mm_cmplt_ps(a, b));
+#else
+ simde__m128_private r_, a_ = simde__m128_to_private(a),
+ b_ = simde__m128_to_private(b);
+
+ r_.u32[0] = (a_.f32[0] < b_.f32[0]) ? ~UINT32_C(0) : UINT32_C(0);
+ SIMDE_VECTORIZE
+ for (size_t i = 1; i < (sizeof(r_.f32) / sizeof(r_.f32[0])); i++) {
+ r_.u32[i] = a_.u32[i];
+ }
+
+ return simde__m128_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_cmplt_ss(a, b) simde_mm_cmplt_ss((a), (b))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_cmpneq_ps(simde__m128 a, simde__m128 b)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ return _mm_cmpneq_ps(a, b);
+#else
+ simde__m128_private r_, a_ = simde__m128_to_private(a),
+ b_ = simde__m128_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_u32 = vmvnq_u32(vceqq_f32(a_.neon_f32, b_.neon_f32));
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_f32x4_ne(a_.wasm_v128, b_.wasm_v128);
+#elif defined(SIMDE_POWER_ALTIVEC_P9_NATIVE) && SIMDE_ARCH_POWER_CHECK(900) && \
+ !defined(HEDLEY_IBM_VERSION)
+ /* vec_cmpne(SIMDE_POWER_ALTIVEC_VECTOR(float), SIMDE_POWER_ALTIVEC_VECTOR(float))
+ is missing from XL C/C++ v16.1.1,
+ though the documentation (table 89 on page 432 of the IBM XL C/C++ for
+ Linux Compiler Reference, Version 16.1.1) shows that it should be
+ present. Both GCC and clang support it. */
+ r_.altivec_f32 = HEDLEY_REINTERPRET_CAST(
+ SIMDE_POWER_ALTIVEC_VECTOR(float),
+ vec_cmpne(a_.altivec_f32, b_.altivec_f32));
+#elif defined(SIMDE_POWER_ALTIVEC_P7_NATIVE)
+ r_.altivec_f32 = HEDLEY_REINTERPRET_CAST(
+ SIMDE_POWER_ALTIVEC_VECTOR(float),
+ vec_cmpeq(a_.altivec_f32, b_.altivec_f32));
+ r_.altivec_f32 = HEDLEY_REINTERPRET_CAST(
+ SIMDE_POWER_ALTIVEC_VECTOR(float),
+ vec_nor(r_.altivec_f32, r_.altivec_f32));
+#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
+ r_.i32 = HEDLEY_STATIC_CAST(__typeof__(r_.i32), (a_.f32 != b_.f32));
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.f32) / sizeof(r_.f32[0])); i++) {
+ r_.u32[i] = (a_.f32[i] != b_.f32[i]) ? ~UINT32_C(0)
+ : UINT32_C(0);
+ }
+#endif
+
+ return simde__m128_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_cmpneq_ps(a, b) simde_mm_cmpneq_ps((a), (b))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_cmpneq_ss(simde__m128 a, simde__m128 b)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ return _mm_cmpneq_ss(a, b);
+#elif (SIMDE_NATURAL_VECTOR_SIZE > 0)
+ return simde_mm_move_ss(a, simde_mm_cmpneq_ps(a, b));
+#else
+ simde__m128_private r_, a_ = simde__m128_to_private(a),
+ b_ = simde__m128_to_private(b);
+
+ r_.u32[0] = (a_.f32[0] != b_.f32[0]) ? ~UINT32_C(0) : UINT32_C(0);
+ SIMDE_VECTORIZE
+ for (size_t i = 1; i < (sizeof(r_.f32) / sizeof(r_.f32[0])); i++) {
+ r_.u32[i] = a_.u32[i];
+ }
+
+ return simde__m128_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_cmpneq_ss(a, b) simde_mm_cmpneq_ss((a), (b))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_cmpnge_ps(simde__m128 a, simde__m128 b)
+{
+ return simde_mm_cmplt_ps(a, b);
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_cmpnge_ps(a, b) simde_mm_cmpnge_ps((a), (b))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_cmpnge_ss(simde__m128 a, simde__m128 b)
+{
+ return simde_mm_cmplt_ss(a, b);
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_cmpnge_ss(a, b) simde_mm_cmpnge_ss((a), (b))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_cmpngt_ps(simde__m128 a, simde__m128 b)
+{
+ return simde_mm_cmple_ps(a, b);
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_cmpngt_ps(a, b) simde_mm_cmpngt_ps((a), (b))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_cmpngt_ss(simde__m128 a, simde__m128 b)
+{
+ return simde_mm_cmple_ss(a, b);
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_cmpngt_ss(a, b) simde_mm_cmpngt_ss((a), (b))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_cmpnle_ps(simde__m128 a, simde__m128 b)
+{
+ return simde_mm_cmpgt_ps(a, b);
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_cmpnle_ps(a, b) simde_mm_cmpnle_ps((a), (b))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_cmpnle_ss(simde__m128 a, simde__m128 b)
+{
+ return simde_mm_cmpgt_ss(a, b);
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_cmpnle_ss(a, b) simde_mm_cmpnle_ss((a), (b))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_cmpnlt_ps(simde__m128 a, simde__m128 b)
+{
+ return simde_mm_cmpge_ps(a, b);
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_cmpnlt_ps(a, b) simde_mm_cmpnlt_ps((a), (b))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_cmpnlt_ss(simde__m128 a, simde__m128 b)
+{
+ return simde_mm_cmpge_ss(a, b);
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_cmpnlt_ss(a, b) simde_mm_cmpnlt_ss((a), (b))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_cmpord_ps(simde__m128 a, simde__m128 b)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ return _mm_cmpord_ps(a, b);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ return wasm_v128_and(wasm_f32x4_eq(a, a), wasm_f32x4_eq(b, b));
+#else
+ simde__m128_private r_, a_ = simde__m128_to_private(a),
+ b_ = simde__m128_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ /* Note: NEON does not have an ordered-compare builtin, so compare
+ a == a and b == b to detect NaN lanes, then AND the two results
+ to produce the final mask. */
+ uint32x4_t ceqaa = vceqq_f32(a_.neon_f32, a_.neon_f32);
+ uint32x4_t ceqbb = vceqq_f32(b_.neon_f32, b_.neon_f32);
+ r_.neon_u32 = vandq_u32(ceqaa, ceqbb);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_v128_and(wasm_f32x4_eq(a_.wasm_v128, a_.wasm_v128),
+ wasm_f32x4_eq(b_.wasm_v128, b_.wasm_v128));
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ r_.altivec_f32 = HEDLEY_REINTERPRET_CAST(
+ SIMDE_POWER_ALTIVEC_VECTOR(float),
+ vec_and(vec_cmpeq(a_.altivec_f32, a_.altivec_f32),
+ vec_cmpeq(b_.altivec_f32, b_.altivec_f32)));
+#elif defined(simde_math_isnanf)
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.f32) / sizeof(r_.f32[0])); i++) {
+ r_.u32[i] = (simde_math_isnanf(a_.f32[i]) ||
+ simde_math_isnanf(b_.f32[i]))
+ ? UINT32_C(0)
+ : ~UINT32_C(0);
+ }
+#else
+ HEDLEY_UNREACHABLE();
+#endif
+
+ return simde__m128_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_cmpord_ps(a, b) simde_mm_cmpord_ps((a), (b))
+#endif
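A brief sketch of the ordered-compare semantics implemented above; NAN is the <math.h> macro:
  simde__m128 a = simde_mm_set_ps(1.0f, NAN, 2.0f, 3.0f);
  simde__m128 b = simde_mm_set_ps(4.0f, 5.0f, NAN, 6.0f);
  simde__m128 m = simde_mm_cmpord_ps(a, b); /* lanes 0 and 3 -> ~0 (both ordered), lanes 1 and 2 -> 0 */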
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_cmpunord_ps(simde__m128 a, simde__m128 b)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ return _mm_cmpunord_ps(a, b);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ return wasm_v128_or(wasm_f32x4_ne(a, a), wasm_f32x4_ne(b, b));
+#else
+ simde__m128_private r_, a_ = simde__m128_to_private(a),
+ b_ = simde__m128_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ uint32x4_t ceqaa = vceqq_f32(a_.neon_f32, a_.neon_f32);
+ uint32x4_t ceqbb = vceqq_f32(b_.neon_f32, b_.neon_f32);
+ r_.neon_u32 = vmvnq_u32(vandq_u32(ceqaa, ceqbb));
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_v128_or(wasm_f32x4_ne(a_.wasm_v128, a_.wasm_v128),
+ wasm_f32x4_ne(b_.wasm_v128, b_.wasm_v128));
+#elif defined(SIMDE_POWER_ALTIVEC_P8_NATIVE)
+ r_.altivec_f32 = HEDLEY_REINTERPRET_CAST(
+ SIMDE_POWER_ALTIVEC_VECTOR(float),
+ vec_nand(vec_cmpeq(a_.altivec_f32, a_.altivec_f32),
+ vec_cmpeq(b_.altivec_f32, b_.altivec_f32)));
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ r_.altivec_f32 = HEDLEY_REINTERPRET_CAST(
+ SIMDE_POWER_ALTIVEC_VECTOR(float),
+ vec_and(vec_cmpeq(a_.altivec_f32, a_.altivec_f32),
+ vec_cmpeq(b_.altivec_f32, b_.altivec_f32)));
+ r_.altivec_f32 = vec_nor(r_.altivec_f32, r_.altivec_f32);
+#elif defined(simde_math_isnanf)
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.f32) / sizeof(r_.f32[0])); i++) {
+ r_.u32[i] = (simde_math_isnanf(a_.f32[i]) ||
+ simde_math_isnanf(b_.f32[i]))
+ ? ~UINT32_C(0)
+ : UINT32_C(0);
+ }
+#else
+ HEDLEY_UNREACHABLE();
+#endif
+
+ return simde__m128_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_cmpunord_ps(a, b) simde_mm_cmpunord_ps((a), (b))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_cmpunord_ss(simde__m128 a, simde__m128 b)
+{
+#if defined(SIMDE_X86_SSE_NATIVE) && !defined(__PGI)
+ return _mm_cmpunord_ss(a, b);
+#elif (SIMDE_NATURAL_VECTOR_SIZE > 0)
+ return simde_mm_move_ss(a, simde_mm_cmpunord_ps(a, b));
+#else
+ simde__m128_private r_, a_ = simde__m128_to_private(a),
+ b_ = simde__m128_to_private(b);
+
+#if defined(simde_math_isnanf)
+ r_.u32[0] =
+ (simde_math_isnanf(a_.f32[0]) || simde_math_isnanf(b_.f32[0]))
+ ? ~UINT32_C(0)
+ : UINT32_C(0);
+ SIMDE_VECTORIZE
+ for (size_t i = 1; i < (sizeof(r_.u32) / sizeof(r_.u32[0])); i++) {
+ r_.u32[i] = a_.u32[i];
+ }
+#else
+ HEDLEY_UNREACHABLE();
+#endif
+
+ return simde__m128_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_cmpunord_ss(a, b) simde_mm_cmpunord_ss((a), (b))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+int simde_mm_comieq_ss(simde__m128 a, simde__m128 b)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ return _mm_comieq_ss(a, b);
+#else
+ simde__m128_private a_ = simde__m128_to_private(a),
+ b_ = simde__m128_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ uint32x4_t a_not_nan = vceqq_f32(a_.neon_f32, a_.neon_f32);
+ uint32x4_t b_not_nan = vceqq_f32(b_.neon_f32, b_.neon_f32);
+ uint32x4_t a_or_b_nan = vmvnq_u32(vandq_u32(a_not_nan, b_not_nan));
+ uint32x4_t a_eq_b = vceqq_f32(a_.neon_f32, b_.neon_f32);
+ return !!(vgetq_lane_u32(vorrq_u32(a_or_b_nan, a_eq_b), 0) != 0);
+#else
+ return a_.f32[0] == b_.f32[0];
+#endif
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_comieq_ss(a, b) simde_mm_comieq_ss((a), (b))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+int simde_mm_comige_ss(simde__m128 a, simde__m128 b)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ return _mm_comige_ss(a, b);
+#else
+ simde__m128_private a_ = simde__m128_to_private(a),
+ b_ = simde__m128_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ uint32x4_t a_not_nan = vceqq_f32(a_.neon_f32, a_.neon_f32);
+ uint32x4_t b_not_nan = vceqq_f32(b_.neon_f32, b_.neon_f32);
+ uint32x4_t a_and_b_not_nan = vandq_u32(a_not_nan, b_not_nan);
+ uint32x4_t a_ge_b = vcgeq_f32(a_.neon_f32, b_.neon_f32);
+ return !!(vgetq_lane_u32(vandq_u32(a_and_b_not_nan, a_ge_b), 0) != 0);
+#else
+ return a_.f32[0] >= b_.f32[0];
+#endif
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_comige_ss(a, b) simde_mm_comige_ss((a), (b))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+int simde_mm_comigt_ss(simde__m128 a, simde__m128 b)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ return _mm_comigt_ss(a, b);
+#else
+ simde__m128_private a_ = simde__m128_to_private(a),
+ b_ = simde__m128_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ uint32x4_t a_not_nan = vceqq_f32(a_.neon_f32, a_.neon_f32);
+ uint32x4_t b_not_nan = vceqq_f32(b_.neon_f32, b_.neon_f32);
+ uint32x4_t a_and_b_not_nan = vandq_u32(a_not_nan, b_not_nan);
+ uint32x4_t a_gt_b = vcgtq_f32(a_.neon_f32, b_.neon_f32);
+ return !!(vgetq_lane_u32(vandq_u32(a_and_b_not_nan, a_gt_b), 0) != 0);
+#else
+ return a_.f32[0] > b_.f32[0];
+#endif
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_comigt_ss(a, b) simde_mm_comigt_ss((a), (b))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+int simde_mm_comile_ss(simde__m128 a, simde__m128 b)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ return _mm_comile_ss(a, b);
+#else
+ simde__m128_private a_ = simde__m128_to_private(a),
+ b_ = simde__m128_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ uint32x4_t a_not_nan = vceqq_f32(a_.neon_f32, a_.neon_f32);
+ uint32x4_t b_not_nan = vceqq_f32(b_.neon_f32, b_.neon_f32);
+ uint32x4_t a_or_b_nan = vmvnq_u32(vandq_u32(a_not_nan, b_not_nan));
+ uint32x4_t a_le_b = vcleq_f32(a_.neon_f32, b_.neon_f32);
+ return !!(vgetq_lane_u32(vorrq_u32(a_or_b_nan, a_le_b), 0) != 0);
+#else
+ return a_.f32[0] <= b_.f32[0];
+#endif
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_comile_ss(a, b) simde_mm_comile_ss((a), (b))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+int simde_mm_comilt_ss(simde__m128 a, simde__m128 b)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ return _mm_comilt_ss(a, b);
+#else
+ simde__m128_private a_ = simde__m128_to_private(a),
+ b_ = simde__m128_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ uint32x4_t a_not_nan = vceqq_f32(a_.neon_f32, a_.neon_f32);
+ uint32x4_t b_not_nan = vceqq_f32(b_.neon_f32, b_.neon_f32);
+ uint32x4_t a_or_b_nan = vmvnq_u32(vandq_u32(a_not_nan, b_not_nan));
+ uint32x4_t a_lt_b = vcltq_f32(a_.neon_f32, b_.neon_f32);
+ return !!(vgetq_lane_u32(vorrq_u32(a_or_b_nan, a_lt_b), 0) != 0);
+#else
+ return a_.f32[0] < b_.f32[0];
+#endif
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_comilt_ss(a, b) simde_mm_comilt_ss((a), (b))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+int simde_mm_comineq_ss(simde__m128 a, simde__m128 b)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ return _mm_comineq_ss(a, b);
+#else
+ simde__m128_private a_ = simde__m128_to_private(a),
+ b_ = simde__m128_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ uint32x4_t a_not_nan = vceqq_f32(a_.neon_f32, a_.neon_f32);
+ uint32x4_t b_not_nan = vceqq_f32(b_.neon_f32, b_.neon_f32);
+ uint32x4_t a_and_b_not_nan = vandq_u32(a_not_nan, b_not_nan);
+ uint32x4_t a_neq_b = vmvnq_u32(vceqq_f32(a_.neon_f32, b_.neon_f32));
+ return !!(vgetq_lane_u32(vandq_u32(a_and_b_not_nan, a_neq_b), 0) != 0);
+#else
+ return a_.f32[0] != b_.f32[0];
+#endif
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_comineq_ss(a, b) simde_mm_comineq_ss((a), (b))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_x_mm_copysign_ps(simde__m128 dest, simde__m128 src)
+{
+ simde__m128_private r_, dest_ = simde__m128_to_private(dest),
+ src_ = simde__m128_to_private(src);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ const uint32x4_t sign_pos =
+ vreinterpretq_u32_f32(vdupq_n_f32(-SIMDE_FLOAT32_C(0.0)));
+ r_.neon_u32 = vbslq_u32(sign_pos, src_.neon_u32, dest_.neon_u32);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ const v128_t sign_pos = wasm_f32x4_splat(-0.0f);
+ r_.wasm_v128 =
+ wasm_v128_bitselect(src_.wasm_v128, dest_.wasm_v128, sign_pos);
+#elif defined(SIMDE_POWER_ALTIVEC_P9_NATIVE)
+#if !defined(HEDLEY_IBM_VERSION)
+ r_.altivec_f32 = vec_cpsgn(dest_.altivec_f32, src_.altivec_f32);
+#else
+ r_.altivec_f32 = vec_cpsgn(src_.altivec_f32, dest_.altivec_f32);
+#endif
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ const SIMDE_POWER_ALTIVEC_VECTOR(unsigned int)
+ sign_pos = HEDLEY_REINTERPRET_CAST(
+ SIMDE_POWER_ALTIVEC_VECTOR(unsigned int),
+ vec_splats(-0.0f));
+ r_.altivec_f32 = vec_sel(dest_.altivec_f32, src_.altivec_f32, sign_pos);
+#elif defined(SIMDE_IEEE754_STORAGE)
+ (void)src_;
+ (void)dest_;
+ simde__m128 sign_pos = simde_mm_set1_ps(-0.0f);
+ r_ = simde__m128_to_private(simde_mm_xor_ps(
+ dest, simde_mm_and_ps(simde_mm_xor_ps(dest, src), sign_pos)));
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.f32) / sizeof(r_.f32[0])); i++) {
+ r_.f32[i] = simde_math_copysignf(dest_.f32[i], src_.f32[i]);
+ }
+#endif
+
+ return simde__m128_from_private(r_);
+}
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_x_mm_xorsign_ps(simde__m128 dest, simde__m128 src)
+{
+ return simde_mm_xor_ps(simde_mm_and_ps(simde_mm_set1_ps(-0.0f), src),
+ dest);
+}
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_cvt_pi2ps(simde__m128 a, simde__m64 b)
+{
+#if defined(SIMDE_X86_SSE_NATIVE) && defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_cvt_pi2ps(a, b);
+#else
+ simde__m128_private r_, a_ = simde__m128_to_private(a);
+ simde__m64_private b_ = simde__m64_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_f32 = vcombine_f32(vcvt_f32_s32(b_.neon_i32),
+ vget_high_f32(a_.neon_f32));
+#elif defined(SIMDE_CONVERT_VECTOR_)
+ SIMDE_CONVERT_VECTOR_(r_.m64_private[0].f32, b_.i32);
+ r_.m64_private[1] = a_.m64_private[1];
+#else
+ r_.f32[0] = (simde_float32)b_.i32[0];
+ r_.f32[1] = (simde_float32)b_.i32[1];
+ r_.i32[2] = a_.i32[2];
+ r_.i32[3] = a_.i32[3];
+#endif
+
+ return simde__m128_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_cvt_pi2ps(a, b) simde_mm_cvt_pi2ps((a), (b))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_cvt_ps2pi(simde__m128 a)
+{
+#if defined(SIMDE_X86_SSE_NATIVE) && defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_cvt_ps2pi(a);
+#else
+ simde__m64_private r_;
+ simde__m128_private a_;
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ a_ = simde__m128_to_private(
+ simde_mm_round_ps(a, SIMDE_MM_FROUND_CUR_DIRECTION));
+ r_.neon_i32 = vcvt_s32_f32(vget_low_f32(a_.neon_f32));
+#elif defined(SIMDE_CONVERT_VECTOR_) && SIMDE_NATURAL_VECTOR_SIZE_GE(128)
+ a_ = simde__m128_to_private(
+ simde_mm_round_ps(a, SIMDE_MM_FROUND_CUR_DIRECTION));
+ SIMDE_CONVERT_VECTOR_(r_.i32, a_.m64_private[0].f32);
+#else
+ a_ = simde__m128_to_private(a);
+
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i32) / sizeof(r_.i32[0])); i++) {
+ r_.i32[i] = HEDLEY_STATIC_CAST(
+ int32_t, simde_math_nearbyintf(a_.f32[i]));
+ }
+#endif
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_cvt_ps2pi(a) simde_mm_cvt_ps2pi((a))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_cvt_si2ss(simde__m128 a, int32_t b)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ return _mm_cvt_si2ss(a, b);
+#else
+ simde__m128_private r_, a_ = simde__m128_to_private(a);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_f32 =
+ vsetq_lane_f32(HEDLEY_STATIC_CAST(float, b), a_.neon_f32, 0);
+#else
+ r_.f32[0] = HEDLEY_STATIC_CAST(simde_float32, b);
+ r_.i32[1] = a_.i32[1];
+ r_.i32[2] = a_.i32[2];
+ r_.i32[3] = a_.i32[3];
+#endif
+
+ return simde__m128_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_cvt_si2ss(a, b) simde_mm_cvt_si2ss((a), b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+int32_t simde_mm_cvt_ss2si(simde__m128 a)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ return _mm_cvt_ss2si(a);
+#elif defined(SIMDE_ARM_NEON_A32V8_NATIVE) && !defined(SIMDE_BUG_GCC_95399)
+ return vgetq_lane_s32(vcvtnq_s32_f32(simde__m128_to_neon_f32(a)), 0);
+#else
+ simde__m128_private a_ = simde__m128_to_private(
+ simde_mm_round_ps(a, SIMDE_MM_FROUND_CUR_DIRECTION));
+ return SIMDE_CONVERT_FTOI(int32_t, a_.f32[0]);
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_cvt_ss2si(a) simde_mm_cvt_ss2si((a))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_cvtpi16_ps(simde__m64 a)
+{
+#if defined(SIMDE_X86_SSE_NATIVE) && defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_cvtpi16_ps(a);
+#else
+ simde__m128_private r_;
+ simde__m64_private a_ = simde__m64_to_private(a);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_f32 = vcvtq_f32_s32(vmovl_s16(a_.neon_i16));
+#elif defined(SIMDE_CONVERT_VECTOR_)
+ SIMDE_CONVERT_VECTOR_(r_.f32, a_.i16);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.f32) / sizeof(r_.f32[0])); i++) {
+ simde_float32 v = a_.i16[i];
+ r_.f32[i] = v;
+ }
+#endif
+
+ return simde__m128_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_cvtpi16_ps(a) simde_mm_cvtpi16_ps(a)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_cvtpi32_ps(simde__m128 a, simde__m64 b)
+{
+#if defined(SIMDE_X86_SSE_NATIVE) && defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_cvtpi32_ps(a, b);
+#else
+ simde__m128_private r_, a_ = simde__m128_to_private(a);
+ simde__m64_private b_ = simde__m64_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_f32 = vcombine_f32(vcvt_f32_s32(b_.neon_i32),
+ vget_high_f32(a_.neon_f32));
+#elif defined(SIMDE_CONVERT_VECTOR_)
+ SIMDE_CONVERT_VECTOR_(r_.m64_private[0].f32, b_.i32);
+ r_.m64_private[1] = a_.m64_private[1];
+#else
+ r_.f32[0] = (simde_float32)b_.i32[0];
+ r_.f32[1] = (simde_float32)b_.i32[1];
+ r_.i32[2] = a_.i32[2];
+ r_.i32[3] = a_.i32[3];
+#endif
+
+ return simde__m128_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_cvtpi32_ps(a, b) simde_mm_cvtpi32_ps((a), b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_cvtpi32x2_ps(simde__m64 a, simde__m64 b)
+{
+#if defined(SIMDE_X86_SSE_NATIVE) && defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_cvtpi32x2_ps(a, b);
+#else
+ simde__m128_private r_;
+ simde__m64_private a_ = simde__m64_to_private(a),
+ b_ = simde__m64_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_f32 = vcvtq_f32_s32(vcombine_s32(a_.neon_i32, b_.neon_i32));
+#elif defined(SIMDE_CONVERT_VECTOR_)
+ SIMDE_CONVERT_VECTOR_(r_.m64_private[0].f32, a_.i32);
+ SIMDE_CONVERT_VECTOR_(r_.m64_private[1].f32, b_.i32);
+#else
+ r_.f32[0] = (simde_float32)a_.i32[0];
+ r_.f32[1] = (simde_float32)a_.i32[1];
+ r_.f32[2] = (simde_float32)b_.i32[0];
+ r_.f32[3] = (simde_float32)b_.i32[1];
+#endif
+
+ return simde__m128_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_cvtpi32x2_ps(a, b) simde_mm_cvtpi32x2_ps(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_cvtpi8_ps(simde__m64 a)
+{
+#if defined(SIMDE_X86_SSE_NATIVE) && defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_cvtpi8_ps(a);
+#else
+ simde__m128_private r_;
+ simde__m64_private a_ = simde__m64_to_private(a);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_f32 =
+ vcvtq_f32_s32(vmovl_s16(vget_low_s16(vmovl_s8(a_.neon_i8))));
+#else
+ r_.f32[0] = HEDLEY_STATIC_CAST(simde_float32, a_.i8[0]);
+ r_.f32[1] = HEDLEY_STATIC_CAST(simde_float32, a_.i8[1]);
+ r_.f32[2] = HEDLEY_STATIC_CAST(simde_float32, a_.i8[2]);
+ r_.f32[3] = HEDLEY_STATIC_CAST(simde_float32, a_.i8[3]);
+#endif
+
+ return simde__m128_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_cvtpi8_ps(a) simde_mm_cvtpi8_ps(a)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_cvtps_pi16(simde__m128 a)
+{
+#if defined(SIMDE_X86_SSE_NATIVE) && defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_cvtps_pi16(a);
+#else
+ simde__m64_private r_;
+ simde__m128_private a_ = simde__m128_to_private(a);
+
+#if defined(SIMDE_ARM_NEON_A32V8_NATIVE) && !defined(SIMDE_BUG_GCC_95399)
+ r_.neon_i16 = vmovn_s32(vcvtq_s32_f32(vrndiq_f32(a_.neon_f32)));
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i16) / sizeof(r_.i16[0])); i++) {
+ r_.i16[i] = SIMDE_CONVERT_FTOI(int16_t,
+ simde_math_roundf(a_.f32[i]));
+ }
+#endif
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_cvtps_pi16(a) simde_mm_cvtps_pi16((a))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_cvtps_pi32(simde__m128 a)
+{
+#if defined(SIMDE_X86_SSE_NATIVE) && defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_cvtps_pi32(a);
+#else
+ simde__m64_private r_;
+ simde__m128_private a_ = simde__m128_to_private(a);
+
+#if defined(SIMDE_ARM_NEON_A32V8_NATIVE) && \
+ defined(SIMDE_FAST_CONVERSION_RANGE) && !defined(SIMDE_BUG_GCC_95399)
+ r_.neon_i32 = vcvt_s32_f32(vget_low_f32(vrndiq_f32(a_.neon_f32)));
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i32) / sizeof(r_.i32[0])); i++) {
+ simde_float32 v = simde_math_roundf(a_.f32[i]);
+#if !defined(SIMDE_FAST_CONVERSION_RANGE)
+ r_.i32[i] =
+ ((v > HEDLEY_STATIC_CAST(simde_float32, INT32_MIN)) &&
+ (v < HEDLEY_STATIC_CAST(simde_float32, INT32_MAX)))
+ ? SIMDE_CONVERT_FTOI(int32_t, v)
+ : INT32_MIN;
+#else
+ r_.i32[i] = SIMDE_CONVERT_FTOI(int32_t, v);
+#endif
+ }
+#endif
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_cvtps_pi32(a) simde_mm_cvtps_pi32((a))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_cvtps_pi8(simde__m128 a)
+{
+#if defined(SIMDE_X86_SSE_NATIVE) && defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_cvtps_pi8(a);
+#else
+ simde__m64_private r_;
+ simde__m128_private a_ = simde__m128_to_private(a);
+
+#if defined(SIMDE_ARM_NEON_A32V8_NATIVE) && !defined(SIMDE_BUG_GCC_95471)
+ /* Clamp the input to [INT8_MIN, INT8_MAX], round, convert to i32, narrow to
+ * i16, combine with an all-zero vector of i16 (which will become the upper
+ * half), narrow to i8. */
+ float32x4_t max =
+ vdupq_n_f32(HEDLEY_STATIC_CAST(simde_float32, INT8_MAX));
+ float32x4_t min =
+ vdupq_n_f32(HEDLEY_STATIC_CAST(simde_float32, INT8_MIN));
+ float32x4_t values =
+ vrndnq_f32(vmaxq_f32(vminq_f32(max, a_.neon_f32), min));
+ r_.neon_i8 = vmovn_s16(
+ vcombine_s16(vmovn_s32(vcvtq_s32_f32(values)), vdup_n_s16(0)));
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(a_.f32) / sizeof(a_.f32[0])); i++) {
+ if (a_.f32[i] > HEDLEY_STATIC_CAST(simde_float32, INT8_MAX))
+ r_.i8[i] = INT8_MAX;
+ else if (a_.f32[i] <
+ HEDLEY_STATIC_CAST(simde_float32, INT8_MIN))
+ r_.i8[i] = INT8_MIN;
+ else
+ r_.i8[i] = SIMDE_CONVERT_FTOI(
+ int8_t, simde_math_roundf(a_.f32[i]));
+ }
+ /* Note: the upper half is undefined */
+#endif
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_cvtps_pi8(a) simde_mm_cvtps_pi8((a))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_cvtpu16_ps(simde__m64 a)
+{
+#if defined(SIMDE_X86_SSE_NATIVE) && defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_cvtpu16_ps(a);
+#else
+ simde__m128_private r_;
+ simde__m64_private a_ = simde__m64_to_private(a);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_f32 = vcvtq_f32_u32(vmovl_u16(a_.neon_u16));
+#elif defined(SIMDE_CONVERT_VECTOR_)
+ SIMDE_CONVERT_VECTOR_(r_.f32, a_.u16);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.f32) / sizeof(r_.f32[0])); i++) {
+ r_.f32[i] = (simde_float32)a_.u16[i];
+ }
+#endif
+
+ return simde__m128_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_cvtpu16_ps(a) simde_mm_cvtpu16_ps(a)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_cvtpu8_ps(simde__m64 a)
+{
+#if defined(SIMDE_X86_SSE_NATIVE) && defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_cvtpu8_ps(a);
+#else
+ simde__m128_private r_;
+ simde__m64_private a_ = simde__m64_to_private(a);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_f32 =
+ vcvtq_f32_u32(vmovl_u16(vget_low_u16(vmovl_u8(a_.neon_u8))));
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.f32) / sizeof(r_.f32[0])); i++) {
+ r_.f32[i] = HEDLEY_STATIC_CAST(simde_float32, a_.u8[i]);
+ }
+#endif
+
+ return simde__m128_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_cvtpu8_ps(a) simde_mm_cvtpu8_ps(a)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_cvtsi32_ss(simde__m128 a, int32_t b)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ return _mm_cvtsi32_ss(a, b);
+#else
+ simde__m128_private r_;
+ simde__m128_private a_ = simde__m128_to_private(a);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_f32 = vsetq_lane_f32(HEDLEY_STATIC_CAST(float32_t, b),
+ a_.neon_f32, 0);
+#else
+ r_ = a_;
+ r_.f32[0] = HEDLEY_STATIC_CAST(simde_float32, b);
+#endif
+
+ return simde__m128_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_cvtsi32_ss(a, b) simde_mm_cvtsi32_ss((a), b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_cvtsi64_ss(simde__m128 a, int64_t b)
+{
+#if defined(SIMDE_X86_SSE_NATIVE) && defined(SIMDE_ARCH_AMD64)
+#if !defined(__PGI)
+ return _mm_cvtsi64_ss(a, b);
+#else
+ return _mm_cvtsi64x_ss(a, b);
+#endif
+#else
+ simde__m128_private r_;
+ simde__m128_private a_ = simde__m128_to_private(a);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_f32 = vsetq_lane_f32(HEDLEY_STATIC_CAST(float32_t, b),
+ a_.neon_f32, 0);
+#else
+ r_ = a_;
+ r_.f32[0] = HEDLEY_STATIC_CAST(simde_float32, b);
+#endif
+
+ return simde__m128_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_cvtsi64_ss(a, b) simde_mm_cvtsi64_ss((a), b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde_float32 simde_mm_cvtss_f32(simde__m128 a)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ return _mm_cvtss_f32(a);
+#else
+ simde__m128_private a_ = simde__m128_to_private(a);
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ return vgetq_lane_f32(a_.neon_f32, 0);
+#else
+ return a_.f32[0];
+#endif
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_cvtss_f32(a) simde_mm_cvtss_f32((a))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+int32_t simde_mm_cvtss_si32(simde__m128 a)
+{
+ return simde_mm_cvt_ss2si(a);
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_cvtss_si32(a) simde_mm_cvtss_si32((a))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+int64_t simde_mm_cvtss_si64(simde__m128 a)
+{
+#if defined(SIMDE_X86_SSE_NATIVE) && defined(SIMDE_ARCH_AMD64)
+#if !defined(__PGI)
+ return _mm_cvtss_si64(a);
+#else
+ return _mm_cvtss_si64x(a);
+#endif
+#else
+ simde__m128_private a_ = simde__m128_to_private(a);
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ return SIMDE_CONVERT_FTOI(
+ int64_t, simde_math_roundf(vgetq_lane_f32(a_.neon_f32, 0)));
+#else
+ return SIMDE_CONVERT_FTOI(int64_t, simde_math_roundf(a_.f32[0]));
+#endif
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_cvtss_si64(a) simde_mm_cvtss_si64((a))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_cvtt_ps2pi(simde__m128 a)
+{
+#if defined(SIMDE_X86_SSE_NATIVE) && defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_cvtt_ps2pi(a);
+#else
+ simde__m64_private r_;
+ simde__m128_private a_ = simde__m128_to_private(a);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE) && defined(SIMDE_FAST_CONVERSION_RANGE)
+ r_.neon_i32 = vcvt_s32_f32(vget_low_f32(a_.neon_f32));
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.f32) / sizeof(r_.f32[0])); i++) {
+ simde_float32 v = a_.f32[i];
+#if !defined(SIMDE_FAST_CONVERSION_RANGE)
+ r_.i32[i] =
+ ((v > HEDLEY_STATIC_CAST(simde_float32, INT32_MIN)) &&
+ (v < HEDLEY_STATIC_CAST(simde_float32, INT32_MAX)))
+ ? SIMDE_CONVERT_FTOI(int32_t, v)
+ : INT32_MIN;
+#else
+ r_.i32[i] = SIMDE_CONVERT_FTOI(int32_t, v);
+#endif
+ }
+#endif
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#define simde_mm_cvttps_pi32(a) simde_mm_cvtt_ps2pi(a)
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_cvtt_ps2pi(a) simde_mm_cvtt_ps2pi((a))
+#define _mm_cvttps_pi32(a) simde_mm_cvttps_pi32((a))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+int32_t simde_mm_cvtt_ss2si(simde__m128 a)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ return _mm_cvtt_ss2si(a);
+#else
+ simde__m128_private a_ = simde__m128_to_private(a);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE) && defined(SIMDE_FAST_CONVERSION_RANGE)
+ return SIMDE_CONVERT_FTOI(int32_t, vgetq_lane_f32(a_.neon_f32, 0));
+#else
+ simde_float32 v = a_.f32[0];
+#if !defined(SIMDE_FAST_CONVERSION_RANGE)
+ return ((v > HEDLEY_STATIC_CAST(simde_float32, INT32_MIN)) &&
+ (v < HEDLEY_STATIC_CAST(simde_float32, INT32_MAX)))
+ ? SIMDE_CONVERT_FTOI(int32_t, v)
+ : INT32_MIN;
+#else
+ return SIMDE_CONVERT_FTOI(int32_t, v);
+#endif
+#endif
+#endif
+}
+#define simde_mm_cvttss_si32(a) simde_mm_cvtt_ss2si((a))
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_cvtt_ss2si(a) simde_mm_cvtt_ss2si((a))
+#define _mm_cvttss_si32(a) simde_mm_cvtt_ss2si((a))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+int64_t simde_mm_cvttss_si64(simde__m128 a)
+{
+#if defined(SIMDE_X86_SSE_NATIVE) && defined(SIMDE_ARCH_AMD64) && \
+ !defined(_MSC_VER)
+#if defined(__PGI)
+ return _mm_cvttss_si64x(a);
+#else
+ return _mm_cvttss_si64(a);
+#endif
+#else
+ simde__m128_private a_ = simde__m128_to_private(a);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ return SIMDE_CONVERT_FTOI(int64_t, vgetq_lane_f32(a_.neon_f32, 0));
+#else
+ return SIMDE_CONVERT_FTOI(int64_t, a_.f32[0]);
+#endif
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_cvttss_si64(a) simde_mm_cvttss_si64((a))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_cmpord_ss(simde__m128 a, simde__m128 b)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ return _mm_cmpord_ss(a, b);
+#elif (SIMDE_NATURAL_VECTOR_SIZE > 0)
+ return simde_mm_move_ss(a, simde_mm_cmpord_ps(a, b));
+#else
+ simde__m128_private r_, a_ = simde__m128_to_private(a);
+
+#if defined(simde_math_isnanf)
+ r_.u32[0] = (simde_math_isnanf(simde_mm_cvtss_f32(a)) ||
+ simde_math_isnanf(simde_mm_cvtss_f32(b)))
+ ? UINT32_C(0)
+ : ~UINT32_C(0);
+ SIMDE_VECTORIZE
+ for (size_t i = 1; i < (sizeof(r_.f32) / sizeof(r_.f32[0])); i++) {
+ r_.u32[i] = a_.u32[i];
+ }
+#else
+ HEDLEY_UNREACHABLE();
+#endif
+
+ return simde__m128_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_cmpord_ss(a, b) simde_mm_cmpord_ss((a), (b))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_div_ps(simde__m128 a, simde__m128 b)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ return _mm_div_ps(a, b);
+#else
+ simde__m128_private r_, a_ = simde__m128_to_private(a),
+ b_ = simde__m128_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ r_.neon_f32 = vdivq_f32(a_.neon_f32, b_.neon_f32);
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ float32x4_t recip0 = vrecpeq_f32(b_.neon_f32);
+ float32x4_t recip1 =
+ vmulq_f32(recip0, vrecpsq_f32(recip0, b_.neon_f32));
+ r_.neon_f32 = vmulq_f32(a_.neon_f32, recip1);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_f32x4_div(a_.wasm_v128, b_.wasm_v128);
+#elif defined(SIMDE_POWER_ALTIVEC_P7_NATIVE)
+ r_.altivec_f32 = vec_div(a_.altivec_f32, b_.altivec_f32);
+#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
+ r_.f32 = a_.f32 / b_.f32;
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.f32) / sizeof(r_.f32[0])); i++) {
+ r_.f32[i] = a_.f32[i] / b_.f32[i];
+ }
+#endif
+
+ return simde__m128_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_div_ps(a, b) simde_mm_div_ps((a), (b))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_div_ss(simde__m128 a, simde__m128 b)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ return _mm_div_ss(a, b);
+#elif (SIMDE_NATURAL_VECTOR_SIZE > 0)
+ return simde_mm_move_ss(a, simde_mm_div_ps(a, b));
+#else
+ simde__m128_private r_, a_ = simde__m128_to_private(a),
+ b_ = simde__m128_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ float32_t value = vgetq_lane_f32(
+ simde__m128_to_private(simde_mm_div_ps(a, b)).neon_f32, 0);
+ r_.neon_f32 = vsetq_lane_f32(value, a_.neon_f32, 0);
+#else
+ r_.f32[0] = a_.f32[0] / b_.f32[0];
+ SIMDE_VECTORIZE
+ for (size_t i = 1; i < (sizeof(r_.f32) / sizeof(r_.f32[0])); i++) {
+ r_.f32[i] = a_.f32[i];
+ }
+#endif
+
+ return simde__m128_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_div_ss(a, b) simde_mm_div_ss((a), (b))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+int16_t simde_mm_extract_pi16(simde__m64 a, const int imm8)
+ SIMDE_REQUIRE_CONSTANT_RANGE(imm8, 0, 3)
+{
+ simde__m64_private a_ = simde__m64_to_private(a);
+ return a_.i16[imm8];
+}
+#if defined(SIMDE_X86_SSE_NATIVE) && defined(SIMDE_X86_MMX_NATIVE) && \
+ !defined(HEDLEY_PGI_VERSION)
+#if defined(SIMDE_BUG_CLANG_44589)
+#define simde_mm_extract_pi16(a, imm8) \
+ (HEDLEY_DIAGNOSTIC_PUSH _Pragma( \
+ "clang diagnostic ignored \"-Wvector-conversion\"") \
+ HEDLEY_STATIC_CAST(int16_t, _mm_extract_pi16((a), (imm8))) \
+ HEDLEY_DIAGNOSTIC_POP)
+#else
+#define simde_mm_extract_pi16(a, imm8) \
+ HEDLEY_STATIC_CAST(int16_t, _mm_extract_pi16(a, imm8))
+#endif
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+#define simde_mm_extract_pi16(a, imm8) \
+ vget_lane_s16(simde__m64_to_private(a).neon_i16, imm8)
+#endif
+#define simde_m_pextrw(a, imm8) simde_mm_extract_pi16(a, imm8)
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_extract_pi16(a, imm8) simde_mm_extract_pi16((a), (imm8))
+#define _m_pextrw(a, imm8) simde_mm_extract_pi16((a), (imm8))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_insert_pi16(simde__m64 a, int16_t i, const int imm8)
+ SIMDE_REQUIRE_CONSTANT_RANGE(imm8, 0, 3)
+{
+ simde__m64_private r_, a_ = simde__m64_to_private(a);
+
+ r_.i64[0] = a_.i64[0];
+ r_.i16[imm8] = i;
+
+ return simde__m64_from_private(r_);
+}
+#if defined(SIMDE_X86_SSE_NATIVE) && defined(SIMDE_X86_MMX_NATIVE) && \
+ !defined(__PGI)
+#if defined(SIMDE_BUG_CLANG_44589)
+#define simde_mm_insert_pi16(a, i, imm8) \
+ (HEDLEY_DIAGNOSTIC_PUSH _Pragma( \
+ "clang diagnostic ignored \"-Wvector-conversion\"")( \
+ _mm_insert_pi16((a), (i), (imm8))) HEDLEY_DIAGNOSTIC_POP)
+#else
+#define simde_mm_insert_pi16(a, i, imm8) _mm_insert_pi16(a, i, imm8)
+#endif
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+#define simde_mm_insert_pi16(a, i, imm8) \
+ simde__m64_from_neon_i16( \
+ vset_lane_s16((i), simde__m64_to_neon_i16(a), (imm8)))
+#endif
+#define simde_m_pinsrw(a, i, imm8) (simde_mm_insert_pi16(a, i, imm8))
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_insert_pi16(a, i, imm8) simde_mm_insert_pi16(a, i, imm8)
+#define _m_pinsrw(a, i, imm8) simde_mm_insert_pi16(a, i, imm8)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128
+simde_mm_load_ps(simde_float32 const mem_addr[HEDLEY_ARRAY_PARAM(4)])
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ return _mm_load_ps(mem_addr);
+#else
+ simde__m128_private r_;
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_f32 = vld1q_f32(mem_addr);
+#elif defined(SIMDE_POWER_ALTIVEC_P7_NATIVE)
+ r_.altivec_f32 = vec_vsx_ld(0, mem_addr);
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ r_.altivec_f32 = vec_ld(0, mem_addr);
+#else
+ simde_memcpy(&r_, SIMDE_ALIGN_ASSUME_LIKE(mem_addr, simde__m128),
+ sizeof(r_));
+#endif
+
+ return simde__m128_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_load_ps(mem_addr) simde_mm_load_ps(mem_addr)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_load1_ps(simde_float32 const *mem_addr)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ return _mm_load_ps1(mem_addr);
+#else
+ simde__m128_private r_;
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_f32 = vld1q_dup_f32(mem_addr);
+#else
+ r_ = simde__m128_to_private(simde_mm_set1_ps(*mem_addr));
+#endif
+
+ return simde__m128_from_private(r_);
+#endif
+}
+#define simde_mm_load_ps1(mem_addr) simde_mm_load1_ps(mem_addr)
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_load_ps1(mem_addr) simde_mm_load1_ps(mem_addr)
+#define _mm_load1_ps(mem_addr) simde_mm_load1_ps(mem_addr)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_load_ss(simde_float32 const *mem_addr)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ return _mm_load_ss(mem_addr);
+#else
+ simde__m128_private r_;
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_f32 = vsetq_lane_f32(*mem_addr, vdupq_n_f32(0), 0);
+#else
+ r_.f32[0] = *mem_addr;
+ r_.i32[1] = 0;
+ r_.i32[2] = 0;
+ r_.i32[3] = 0;
+#endif
+
+ return simde__m128_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_load_ss(mem_addr) simde_mm_load_ss(mem_addr)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_loadh_pi(simde__m128 a, simde__m64 const *mem_addr)
+{
+#if defined(SIMDE_X86_SSE_NATIVE) && defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_loadh_pi(a,
+ HEDLEY_REINTERPRET_CAST(__m64 const *, mem_addr));
+#else
+ simde__m128_private r_, a_ = simde__m128_to_private(a);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_f32 = vcombine_f32(
+ vget_low_f32(a_.neon_f32),
+ vld1_f32(HEDLEY_REINTERPRET_CAST(const float32_t *, mem_addr)));
+#else
+ simde__m64_private b_ =
+ *HEDLEY_REINTERPRET_CAST(simde__m64_private const *, mem_addr);
+ r_.f32[0] = a_.f32[0];
+ r_.f32[1] = a_.f32[1];
+ r_.f32[2] = b_.f32[0];
+ r_.f32[3] = b_.f32[1];
+#endif
+
+ return simde__m128_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#if HEDLEY_HAS_WARNING("-Wold-style-cast")
+#define _mm_loadh_pi(a, mem_addr) \
+ simde_mm_loadh_pi((a), HEDLEY_REINTERPRET_CAST(simde__m64 const *, \
+ (mem_addr)))
+#else
+#define _mm_loadh_pi(a, mem_addr) \
+ simde_mm_loadh_pi((a), (simde__m64 const *)(mem_addr))
+#endif
+#endif
+
+/* The SSE documentation says that there are no alignment requirements
+ for mem_addr. Unfortunately they used the __m64 type for the argument
+ which is supposed to be 8-byte aligned, so some compilers (like clang
+ with -Wcast-align) will generate a warning if you try to cast, say,
+ a simde_float32* to a simde__m64* for this function.
+
+ I think the choice of argument type is unfortunate, but I do think we
+ need to stick to it here. If there is demand I can always add something
+ like simde_x_mm_loadl_f32(simde__m128, simde_float32 mem_addr[2]) */
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_loadl_pi(simde__m128 a, simde__m64 const *mem_addr)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ return _mm_loadl_pi(a,
+ HEDLEY_REINTERPRET_CAST(__m64 const *, mem_addr));
+#else
+ simde__m128_private r_, a_ = simde__m128_to_private(a);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_f32 = vcombine_f32(
+ vld1_f32(HEDLEY_REINTERPRET_CAST(const float32_t *, mem_addr)),
+ vget_high_f32(a_.neon_f32));
+#else
+ simde__m64_private b_;
+ simde_memcpy(&b_, mem_addr, sizeof(b_));
+ r_.i32[0] = b_.i32[0];
+ r_.i32[1] = b_.i32[1];
+ r_.i32[2] = a_.i32[2];
+ r_.i32[3] = a_.i32[3];
+#endif
+
+ return simde__m128_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#if HEDLEY_HAS_WARNING("-Wold-style-cast")
+#define _mm_loadl_pi(a, mem_addr) \
+ simde_mm_loadl_pi((a), HEDLEY_REINTERPRET_CAST(simde__m64 const *, \
+ (mem_addr)))
+#else
+#define _mm_loadl_pi(a, mem_addr) \
+ simde_mm_loadl_pi((a), (simde__m64 const *)(mem_addr))
+#endif
+#endif
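+
+/* Editor's note: illustrative sketch only, not part of upstream SIMDe. The
+ * comment above simde_mm_loadl_pi explains that the argument keeps the
+ * (8-byte aligned) simde__m64 type even though SSE imposes no alignment, so
+ * casting a plain float pointer can trip -Wcast-align. A caller can avoid the
+ * cast entirely by copying into a properly typed temporary, as sketched below;
+ * the helper name simde_x_example_loadl_f32 is hypothetical. */
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128
+simde_x_example_loadl_f32(simde__m128 a,
+			  simde_float32 const mem_addr[HEDLEY_ARRAY_PARAM(2)])
+{
+	simde__m64 tmp;
+	/* memcpy sidesteps both the alignment assumption and strict aliasing. */
+	simde_memcpy(&tmp, mem_addr, sizeof(tmp));
+	return simde_mm_loadl_pi(a, &tmp);
+}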
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128
+simde_mm_loadr_ps(simde_float32 const mem_addr[HEDLEY_ARRAY_PARAM(4)])
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ return _mm_loadr_ps(mem_addr);
+#else
+ simde__m128_private r_,
+ v_ = simde__m128_to_private(simde_mm_load_ps(mem_addr));
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_f32 = vrev64q_f32(v_.neon_f32);
+ r_.neon_f32 = vextq_f32(r_.neon_f32, r_.neon_f32, 2);
+#elif defined(SIMDE_POWER_ALTIVEC_P7_NATIVE) && defined(__PPC64__)
+ r_.altivec_f32 = vec_reve(v_.altivec_f32);
+#elif defined(SIMDE_SHUFFLE_VECTOR_)
+ r_.f32 = SIMDE_SHUFFLE_VECTOR_(32, 16, v_.f32, v_.f32, 3, 2, 1, 0);
+#else
+ r_.f32[0] = v_.f32[3];
+ r_.f32[1] = v_.f32[2];
+ r_.f32[2] = v_.f32[1];
+ r_.f32[3] = v_.f32[0];
+#endif
+
+ return simde__m128_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_loadr_ps(mem_addr) simde_mm_loadr_ps(mem_addr)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128
+simde_mm_loadu_ps(simde_float32 const mem_addr[HEDLEY_ARRAY_PARAM(4)])
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ return _mm_loadu_ps(mem_addr);
+#else
+ simde__m128_private r_;
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_f32 =
+ vld1q_f32(HEDLEY_REINTERPRET_CAST(const float32_t *, mem_addr));
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_v128_load(mem_addr);
+#elif defined(SIMDE_POWER_ALTIVEC_P7_NATIVE) && defined(__PPC64__)
+ r_.altivec_f32 = vec_vsx_ld(0, mem_addr);
+#else
+ simde_memcpy(&r_, mem_addr, sizeof(r_));
+#endif
+
+ return simde__m128_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_loadu_ps(mem_addr) simde_mm_loadu_ps(mem_addr)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+void simde_mm_maskmove_si64(simde__m64 a, simde__m64 mask, int8_t *mem_addr)
+{
+#if defined(SIMDE_X86_SSE_NATIVE) && defined(SIMDE_X86_MMX_NATIVE)
+ _mm_maskmove_si64(a, mask, HEDLEY_REINTERPRET_CAST(char *, mem_addr));
+#else
+ simde__m64_private a_ = simde__m64_to_private(a),
+ mask_ = simde__m64_to_private(mask);
+
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(a_.i8) / sizeof(a_.i8[0])); i++)
+ if (mask_.i8[i] < 0)
+ mem_addr[i] = a_.i8[i];
+#endif
+}
+#define simde_m_maskmovq(a, mask, mem_addr) \
+ simde_mm_maskmove_si64(a, mask, mem_addr)
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_maskmove_si64(a, mask, mem_addr) \
+ simde_mm_maskmove_si64( \
+ (a), (mask), \
+ SIMDE_CHECKED_REINTERPRET_CAST(int8_t *, char *, (mem_addr)))
+#define _m_maskmovq(a, mask, mem_addr) \
+ simde_mm_maskmove_si64( \
+ (a), (mask), \
+ SIMDE_CHECKED_REINTERPRET_CAST(int8_t *, char *, (mem_addr)))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_max_pi16(simde__m64 a, simde__m64 b)
+{
+#if defined(SIMDE_X86_SSE_NATIVE) && defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_max_pi16(a, b);
+#else
+ simde__m64_private r_, a_ = simde__m64_to_private(a),
+ b_ = simde__m64_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i16 = vmax_s16(a_.neon_i16, b_.neon_i16);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i16) / sizeof(r_.i16[0])); i++) {
+ r_.i16[i] = (a_.i16[i] > b_.i16[i]) ? a_.i16[i] : b_.i16[i];
+ }
+#endif
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#define simde_m_pmaxsw(a, b) simde_mm_max_pi16(a, b)
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_max_pi16(a, b) simde_mm_max_pi16(a, b)
+#define _m_pmaxsw(a, b) simde_mm_max_pi16(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_max_ps(simde__m128 a, simde__m128 b)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ return _mm_max_ps(a, b);
+#else
+ simde__m128_private r_, a_ = simde__m128_to_private(a),
+ b_ = simde__m128_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE) && defined(SIMDE_FAST_NANS)
+ r_.neon_f32 = vmaxq_f32(a_.neon_f32, b_.neon_f32);
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_f32 = vbslq_f32(vcgtq_f32(a_.neon_f32, b_.neon_f32),
+ a_.neon_f32, b_.neon_f32);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE) && defined(SIMDE_FAST_NANS)
+ r_.wasm_v128 = wasm_f32x4_max(a_.wasm_v128, b_.wasm_v128);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 =
+ wasm_v128_bitselect(a_.wasm_v128, b_.wasm_v128,
+ wasm_f32x4_gt(a_.wasm_v128, b_.wasm_v128));
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE) && defined(SIMDE_FAST_NANS)
+ r_.altivec_f32 = vec_max(a_.altivec_f32, b_.altivec_f32);
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ r_.altivec_f32 = vec_sel(b_.altivec_f32, a_.altivec_f32,
+ vec_cmpgt(a_.altivec_f32, b_.altivec_f32));
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.f32) / sizeof(r_.f32[0])); i++) {
+ r_.f32[i] = (a_.f32[i] > b_.f32[i]) ? a_.f32[i] : b_.f32[i];
+ }
+#endif
+
+ return simde__m128_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_max_ps(a, b) simde_mm_max_ps((a), (b))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_max_pu8(simde__m64 a, simde__m64 b)
+{
+#if defined(SIMDE_X86_SSE_NATIVE) && defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_max_pu8(a, b);
+#else
+ simde__m64_private r_, a_ = simde__m64_to_private(a),
+ b_ = simde__m64_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_u8 = vmax_u8(a_.neon_u8, b_.neon_u8);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.u8) / sizeof(r_.u8[0])); i++) {
+ r_.u8[i] = (a_.u8[i] > b_.u8[i]) ? a_.u8[i] : b_.u8[i];
+ }
+#endif
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#define simde_m_pmaxub(a, b) simde_mm_max_pu8(a, b)
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_max_pu8(a, b) simde_mm_max_pu8(a, b)
+#define _m_pmaxub(a, b) simde_mm_max_pu8(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_max_ss(simde__m128 a, simde__m128 b)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ return _mm_max_ss(a, b);
+#elif (SIMDE_NATURAL_VECTOR_SIZE > 0)
+ return simde_mm_move_ss(a, simde_mm_max_ps(a, b));
+#else
+ simde__m128_private r_, a_ = simde__m128_to_private(a),
+ b_ = simde__m128_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+	float32_t value =
+		vgetq_lane_f32(vmaxq_f32(a_.neon_f32, b_.neon_f32), 0);
+ r_.neon_f32 = vsetq_lane_f32(value, a_.neon_f32, 0);
+#else
+ r_.f32[0] = (a_.f32[0] > b_.f32[0]) ? a_.f32[0] : b_.f32[0];
+ r_.f32[1] = a_.f32[1];
+ r_.f32[2] = a_.f32[2];
+ r_.f32[3] = a_.f32[3];
+#endif
+
+ return simde__m128_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_max_ss(a, b) simde_mm_max_ss((a), (b))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_min_pi16(simde__m64 a, simde__m64 b)
+{
+#if defined(SIMDE_X86_SSE_NATIVE) && defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_min_pi16(a, b);
+#else
+ simde__m64_private r_, a_ = simde__m64_to_private(a),
+ b_ = simde__m64_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i16 = vmin_s16(a_.neon_i16, b_.neon_i16);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i16) / sizeof(r_.i16[0])); i++) {
+ r_.i16[i] = (a_.i16[i] < b_.i16[i]) ? a_.i16[i] : b_.i16[i];
+ }
+#endif
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#define simde_m_pminsw(a, b) simde_mm_min_pi16(a, b)
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_min_pi16(a, b) simde_mm_min_pi16(a, b)
+#define _m_pminsw(a, b) simde_mm_min_pi16(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_min_ps(simde__m128 a, simde__m128 b)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ return _mm_min_ps(a, b);
+#elif defined(SIMDE_FAST_NANS) && defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ return simde__m128_from_neon_f32(vminq_f32(simde__m128_to_neon_f32(a),
+ simde__m128_to_neon_f32(b)));
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ simde__m128_private r_, a_ = simde__m128_to_private(a),
+ b_ = simde__m128_to_private(b);
+#if defined(SIMDE_FAST_NANS)
+ r_.wasm_v128 = wasm_f32x4_min(a_.wasm_v128, b_.wasm_v128);
+#else
+ r_.wasm_v128 =
+ wasm_v128_bitselect(a_.wasm_v128, b_.wasm_v128,
+ wasm_f32x4_lt(a_.wasm_v128, b_.wasm_v128));
+#endif
+ return simde__m128_from_private(r_);
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ simde__m128_private r_, a_ = simde__m128_to_private(a),
+ b_ = simde__m128_to_private(b);
+
+#if defined(SIMDE_FAST_NANS)
+ r_.altivec_f32 = vec_min(a_.altivec_f32, b_.altivec_f32);
+#else
+ r_.altivec_f32 = vec_sel(b_.altivec_f32, a_.altivec_f32,
+ vec_cmpgt(b_.altivec_f32, a_.altivec_f32));
+#endif
+
+ return simde__m128_from_private(r_);
+#elif (SIMDE_NATURAL_VECTOR_SIZE > 0)
+ simde__m128 mask = simde_mm_cmplt_ps(a, b);
+ return simde_mm_or_ps(simde_mm_and_ps(mask, a),
+ simde_mm_andnot_ps(mask, b));
+#else
+ simde__m128_private r_, a_ = simde__m128_to_private(a),
+ b_ = simde__m128_to_private(b);
+
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.f32) / sizeof(r_.f32[0])); i++) {
+ r_.f32[i] = (a_.f32[i] < b_.f32[i]) ? a_.f32[i] : b_.f32[i];
+ }
+
+ return simde__m128_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_min_ps(a, b) simde_mm_min_ps((a), (b))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_min_pu8(simde__m64 a, simde__m64 b)
+{
+#if defined(SIMDE_X86_SSE_NATIVE) && defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_min_pu8(a, b);
+#else
+ simde__m64_private r_, a_ = simde__m64_to_private(a),
+ b_ = simde__m64_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_u8 = vmin_u8(a_.neon_u8, b_.neon_u8);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.u8) / sizeof(r_.u8[0])); i++) {
+ r_.u8[i] = (a_.u8[i] < b_.u8[i]) ? a_.u8[i] : b_.u8[i];
+ }
+#endif
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#define simde_m_pminub(a, b) simde_mm_min_pu8(a, b)
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_min_pu8(a, b) simde_mm_min_pu8(a, b)
+#define _m_pminub(a, b) simde_mm_min_pu8(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_min_ss(simde__m128 a, simde__m128 b)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ return _mm_min_ss(a, b);
+#elif (SIMDE_NATURAL_VECTOR_SIZE > 0)
+ return simde_mm_move_ss(a, simde_mm_min_ps(a, b));
+#else
+ simde__m128_private r_, a_ = simde__m128_to_private(a),
+ b_ = simde__m128_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ float32_t value =
+ vgetq_lane_f32(vminq_f32(a_.neon_f32, b_.neon_f32), 0);
+ r_.neon_f32 = vsetq_lane_f32(value, a_.neon_f32, 0);
+#else
+ r_.f32[0] = (a_.f32[0] < b_.f32[0]) ? a_.f32[0] : b_.f32[0];
+ r_.f32[1] = a_.f32[1];
+ r_.f32[2] = a_.f32[2];
+ r_.f32[3] = a_.f32[3];
+#endif
+
+ return simde__m128_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_min_ss(a, b) simde_mm_min_ss((a), (b))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_movehl_ps(simde__m128 a, simde__m128 b)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ return _mm_movehl_ps(a, b);
+#else
+ simde__m128_private r_, a_ = simde__m128_to_private(a),
+ b_ = simde__m128_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ float32x2_t a32 = vget_high_f32(a_.neon_f32);
+ float32x2_t b32 = vget_high_f32(b_.neon_f32);
+ r_.neon_f32 = vcombine_f32(b32, a32);
+#elif defined(SIMDE_POWER_ALTIVEC_P8_NATIVE)
+ r_.altivec_f32 = HEDLEY_REINTERPRET_CAST(
+ SIMDE_POWER_ALTIVEC_VECTOR(float),
+ vec_mergel(b_.altivec_i64, a_.altivec_i64));
+#elif defined(SIMDE_SHUFFLE_VECTOR_)
+ r_.f32 = SIMDE_SHUFFLE_VECTOR_(32, 16, a_.f32, b_.f32, 6, 7, 2, 3);
+#else
+ r_.f32[0] = b_.f32[2];
+ r_.f32[1] = b_.f32[3];
+ r_.f32[2] = a_.f32[2];
+ r_.f32[3] = a_.f32[3];
+#endif
+
+ return simde__m128_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_movehl_ps(a, b) simde_mm_movehl_ps((a), (b))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_movelh_ps(simde__m128 a, simde__m128 b)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ return _mm_movelh_ps(a, b);
+#else
+ simde__m128_private r_, a_ = simde__m128_to_private(a),
+ b_ = simde__m128_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ float32x2_t a10 = vget_low_f32(a_.neon_f32);
+ float32x2_t b10 = vget_low_f32(b_.neon_f32);
+ r_.neon_f32 = vcombine_f32(a10, b10);
+#elif defined(SIMDE_SHUFFLE_VECTOR_)
+ r_.f32 = SIMDE_SHUFFLE_VECTOR_(32, 16, a_.f32, b_.f32, 0, 1, 4, 5);
+#elif defined(SIMDE_POWER_ALTIVEC_P8_NATIVE)
+ r_.altivec_f32 = HEDLEY_REINTERPRET_CAST(
+ SIMDE_POWER_ALTIVEC_VECTOR(float),
+ vec_mergeh(a_.altivec_i64, b_.altivec_i64));
+#else
+ r_.f32[0] = a_.f32[0];
+ r_.f32[1] = a_.f32[1];
+ r_.f32[2] = b_.f32[0];
+ r_.f32[3] = b_.f32[1];
+#endif
+
+ return simde__m128_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_movelh_ps(a, b) simde_mm_movelh_ps((a), (b))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+int simde_mm_movemask_pi8(simde__m64 a)
+{
+#if defined(SIMDE_X86_SSE_NATIVE) && defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_movemask_pi8(a);
+#else
+ simde__m64_private a_ = simde__m64_to_private(a);
+ int r = 0;
+
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ uint8x8_t input = a_.neon_u8;
+ const int8_t xr[8] = {-7, -6, -5, -4, -3, -2, -1, 0};
+ const uint8x8_t mask_and = vdup_n_u8(0x80);
+ const int8x8_t mask_shift = vld1_s8(xr);
+ const uint8x8_t mask_result =
+ vshl_u8(vand_u8(input, mask_and), mask_shift);
+ uint8x8_t lo = mask_result;
+ r = vaddv_u8(lo);
+#else
+ const size_t nmemb = sizeof(a_.i8) / sizeof(a_.i8[0]);
+ SIMDE_VECTORIZE_REDUCTION(| : r)
+ for (size_t i = 0; i < nmemb; i++) {
+ r |= (a_.u8[nmemb - 1 - i] >> 7) << (nmemb - 1 - i);
+ }
+#endif
+
+ return r;
+#endif
+}
+#define simde_m_pmovmskb(a) simde_mm_movemask_pi8(a)
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_movemask_pi8(a) simde_mm_movemask_pi8(a)
+#define _m_pmovmskb(a) simde_mm_movemask_pi8(a)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+int simde_mm_movemask_ps(simde__m128 a)
+{
+#if defined(SIMDE_X86_SSE_NATIVE) && defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_movemask_ps(a);
+#else
+ int r = 0;
+ simde__m128_private a_ = simde__m128_to_private(a);
+
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ static const int32_t shift_amount[] = {0, 1, 2, 3};
+ const int32x4_t shift = vld1q_s32(shift_amount);
+ uint32x4_t tmp = vshrq_n_u32(a_.neon_u32, 31);
+ return HEDLEY_STATIC_CAST(int, vaddvq_u32(vshlq_u32(tmp, shift)));
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ // Shift out everything but the sign bits with a 32-bit unsigned shift right.
+ uint64x2_t high_bits =
+ vreinterpretq_u64_u32(vshrq_n_u32(a_.neon_u32, 31));
+ // Merge the two pairs together with a 64-bit unsigned shift right + add.
+ uint8x16_t paired =
+ vreinterpretq_u8_u64(vsraq_n_u64(high_bits, high_bits, 31));
+ // Extract the result.
+ return vgetq_lane_u8(paired, 0) | (vgetq_lane_u8(paired, 8) << 2);
+#else
+ SIMDE_VECTORIZE_REDUCTION(| : r)
+ for (size_t i = 0; i < sizeof(a_.u32) / sizeof(a_.u32[0]); i++) {
+ r |= (a_.u32[i] >> ((sizeof(a_.u32[i]) * CHAR_BIT) - 1)) << i;
+ }
+#endif
+
+ return r;
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_movemask_ps(a) simde_mm_movemask_ps((a))
+#endif
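+
+/* Editor's note: illustrative usage sketch, not part of upstream SIMDe; the
+ * function name simde_x_example_movemask_ps is hypothetical. It shows what
+ * simde_mm_movemask_ps computes on every backend above: bit i of the result
+ * is the sign bit of lane i. For lanes {-1, 2, -3, 4} the sign bits are
+ * 1,0,1,0, so the expected mask is 0x5. */
+SIMDE_FUNCTION_ATTRIBUTES
+int simde_x_example_movemask_ps(void)
+{
+	/* simde_mm_set_ps takes lanes in e3..e0 order, so lane 0 is -1.0f. */
+	simde__m128 v =
+		simde_mm_set_ps(SIMDE_FLOAT32_C(4.0), SIMDE_FLOAT32_C(-3.0),
+				SIMDE_FLOAT32_C(2.0), SIMDE_FLOAT32_C(-1.0));
+	return simde_mm_movemask_ps(v); /* expected: 5 */
+}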
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_mul_ps(simde__m128 a, simde__m128 b)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ return _mm_mul_ps(a, b);
+#else
+ simde__m128_private r_, a_ = simde__m128_to_private(a),
+ b_ = simde__m128_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_f32 = vmulq_f32(a_.neon_f32, b_.neon_f32);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_f32x4_mul(a_.wasm_v128, b_.wasm_v128);
+#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
+ r_.f32 = a_.f32 * b_.f32;
+#elif defined(SIMDE_POWER_ALTIVEC_P7_NATIVE)
+ r_.altivec_f32 = vec_mul(a_.altivec_f32, b_.altivec_f32);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.f32) / sizeof(r_.f32[0])); i++) {
+ r_.f32[i] = a_.f32[i] * b_.f32[i];
+ }
+#endif
+
+ return simde__m128_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_mul_ps(a, b) simde_mm_mul_ps((a), (b))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_mul_ss(simde__m128 a, simde__m128 b)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ return _mm_mul_ss(a, b);
+#elif (SIMDE_NATURAL_VECTOR_SIZE > 0)
+ return simde_mm_move_ss(a, simde_mm_mul_ps(a, b));
+#else
+ simde__m128_private r_, a_ = simde__m128_to_private(a),
+ b_ = simde__m128_to_private(b);
+
+ r_.f32[0] = a_.f32[0] * b_.f32[0];
+ r_.f32[1] = a_.f32[1];
+ r_.f32[2] = a_.f32[2];
+ r_.f32[3] = a_.f32[3];
+
+ return simde__m128_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_mul_ss(a, b) simde_mm_mul_ss((a), (b))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_mulhi_pu16(simde__m64 a, simde__m64 b)
+{
+#if defined(SIMDE_X86_SSE_NATIVE) && defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_mulhi_pu16(a, b);
+#else
+ simde__m64_private r_, a_ = simde__m64_to_private(a),
+ b_ = simde__m64_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ const uint32x4_t t1 = vmull_u16(a_.neon_u16, b_.neon_u16);
+ const uint32x4_t t2 = vshrq_n_u32(t1, 16);
+ const uint16x4_t t3 = vmovn_u32(t2);
+ r_.neon_u16 = t3;
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.u16) / sizeof(r_.u16[0])); i++) {
+ r_.u16[i] = HEDLEY_STATIC_CAST(
+ uint16_t, ((HEDLEY_STATIC_CAST(uint32_t, a_.u16[i]) *
+ HEDLEY_STATIC_CAST(uint32_t, b_.u16[i])) >>
+ UINT32_C(16)));
+ }
+#endif
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#define simde_m_pmulhuw(a, b) simde_mm_mulhi_pu16(a, b)
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_mulhi_pu16(a, b) simde_mm_mulhi_pu16(a, b)
+#define _m_pmulhuw(a, b) simde_mm_mulhi_pu16(a, b)
+#endif
+
+#if defined(SIMDE_X86_SSE_NATIVE) && defined(HEDLEY_GCC_VERSION)
+#define SIMDE_MM_HINT_NTA HEDLEY_STATIC_CAST(enum _mm_hint, 0)
+#define SIMDE_MM_HINT_T0 HEDLEY_STATIC_CAST(enum _mm_hint, 1)
+#define SIMDE_MM_HINT_T1 HEDLEY_STATIC_CAST(enum _mm_hint, 2)
+#define SIMDE_MM_HINT_T2 HEDLEY_STATIC_CAST(enum _mm_hint, 3)
+#define SIMDE_MM_HINT_ENTA HEDLEY_STATIC_CAST(enum _mm_hint, 4)
+#define SIMDE_MM_HINT_ET0 HEDLEY_STATIC_CAST(enum _mm_hint, 5)
+#define SIMDE_MM_HINT_ET1 HEDLEY_STATIC_CAST(enum _mm_hint, 6)
+#define SIMDE_MM_HINT_ET2 HEDLEY_STATIC_CAST(enum _mm_hint, 7)
+#else
+#define SIMDE_MM_HINT_NTA 0
+#define SIMDE_MM_HINT_T0 1
+#define SIMDE_MM_HINT_T1 2
+#define SIMDE_MM_HINT_T2 3
+#define SIMDE_MM_HINT_ENTA 4
+#define SIMDE_MM_HINT_ET0 5
+#define SIMDE_MM_HINT_ET1 6
+#define SIMDE_MM_HINT_ET2 7
+#endif
+
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+HEDLEY_DIAGNOSTIC_PUSH
+#if HEDLEY_HAS_WARNING("-Wreserved-id-macro")
+_Pragma("clang diagnostic ignored \"-Wreserved-id-macro\"")
+#endif
+#undef _MM_HINT_NTA
+#define _MM_HINT_NTA SIMDE_MM_HINT_NTA
+#undef _MM_HINT_T0
+#define _MM_HINT_T0 SIMDE_MM_HINT_T0
+#undef _MM_HINT_T1
+#define _MM_HINT_T1 SIMDE_MM_HINT_T1
+#undef _MM_HINT_T2
+#define _MM_HINT_T2 SIMDE_MM_HINT_T2
+#undef _MM_HINT_ENTA
+#define _MM_HINT_ENTA SIMDE_MM_HINT_ENTA
+#undef _MM_HINT_ET0
+#define _MM_HINT_ET0 SIMDE_MM_HINT_ET0
+#undef _MM_HINT_ET1
+#define _MM_HINT_ET1 SIMDE_MM_HINT_ET1
+#undef _MM_HINT_ET2
+#define _MM_HINT_ET2 SIMDE_MM_HINT_ET2
+HEDLEY_DIAGNOSTIC_POP
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+void simde_mm_prefetch(char const *p, int i)
+{
+#if defined(HEDLEY_GCC_VERSION)
+ __builtin_prefetch(p);
+#else
+ (void)p;
+#endif
+
+ (void)i;
+}
+#if defined(SIMDE_X86_SSE_NATIVE)
+#if defined(__clang__) && \
+ !SIMDE_DETECT_CLANG_VERSION_CHECK( \
+ 10, 0, 0) /* https://reviews.llvm.org/D71718 */
+#define simde_mm_prefetch(p, i) \
+ (__extension__({ \
+ HEDLEY_DIAGNOSTIC_PUSH \
+ HEDLEY_DIAGNOSTIC_DISABLE_CAST_QUAL \
+ _mm_prefetch((p), (i)); \
+ HEDLEY_DIAGNOSTIC_POP \
+ }))
+#else
+#define simde_mm_prefetch(p, i) _mm_prefetch(p, i)
+#endif
+#endif
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_prefetch(p, i) simde_mm_prefetch(p, i)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_x_mm_negate_ps(simde__m128 a)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ return simde_mm_xor_ps(a, _mm_set1_ps(SIMDE_FLOAT32_C(-0.0)));
+#else
+ simde__m128_private r_, a_ = simde__m128_to_private(a);
+
+#if defined(SIMDE_POWER_ALTIVEC_P8_NATIVE) && \
+ (!defined(HEDLEY_GCC_VERSION) || HEDLEY_GCC_VERSION_CHECK(8, 1, 0))
+ r_.altivec_f32 = vec_neg(a_.altivec_f32);
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_f32 = vnegq_f32(a_.neon_f32);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_f32x4_neg(a_.wasm_v128);
+#elif defined(SIMDE_POWER_ALTIVEC_P8_NATIVE)
+ r_.altivec_f32 = vec_neg(a_.altivec_f32);
+#elif defined(SIMDE_VECTOR_NEGATE)
+ r_.f32 = -a_.f32;
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.f32) / sizeof(r_.f32[0])); i++) {
+ r_.f32[i] = -a_.f32[i];
+ }
+#endif
+
+ return simde__m128_from_private(r_);
+#endif
+}
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_rcp_ps(simde__m128 a)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ return _mm_rcp_ps(a);
+#else
+ simde__m128_private r_, a_ = simde__m128_to_private(a);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ float32x4_t recip = vrecpeq_f32(a_.neon_f32);
+
+#if SIMDE_ACCURACY_PREFERENCE > 0
+ for (int i = 0; i < SIMDE_ACCURACY_PREFERENCE; ++i) {
+ recip = vmulq_f32(recip, vrecpsq_f32(recip, a_.neon_f32));
+ }
+#endif
+
+ r_.neon_f32 = recip;
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_f32x4_div(simde_mm_set1_ps(1.0f), a_.wasm_v128);
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ r_.altivec_f32 = vec_re(a_.altivec_f32);
+#elif defined(SIMDE_VECTOR_SUBSCRIPT_SCALAR)
+ r_.f32 = 1.0f / a_.f32;
+#elif defined(SIMDE_IEEE754_STORAGE)
+ /* https://stackoverflow.com/questions/12227126/division-as-multiply-and-lut-fast-float-division-reciprocal/12228234#12228234 */
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.f32) / sizeof(r_.f32[0])); i++) {
+ int32_t ix;
+ simde_float32 fx = a_.f32[i];
+ simde_memcpy(&ix, &fx, sizeof(ix));
+ int32_t x = INT32_C(0x7EF311C3) - ix;
+ simde_float32 temp;
+ simde_memcpy(&temp, &x, sizeof(temp));
+ r_.f32[i] = temp * (SIMDE_FLOAT32_C(2.0) - temp * fx);
+ }
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.f32) / sizeof(r_.f32[0])); i++) {
+ r_.f32[i] = 1.0f / a_.f32[i];
+ }
+#endif
+
+ return simde__m128_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_rcp_ps(a) simde_mm_rcp_ps((a))
+#endif
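+
+/* Editor's note: scalar sketch of the SIMDE_IEEE754_STORAGE path in
+ * simde_mm_rcp_ps above, added for illustration only (not part of upstream
+ * SIMDe; the name simde_x_example_fast_rcp is hypothetical). Subtracting the
+ * float's bit pattern from the magic constant 0x7EF311C3 gives a rough
+ * reciprocal estimate, and one Newton-Raphson step r = r * (2 - r * x)
+ * refines it, which is what the vector loop does lane by lane. */
+SIMDE_FUNCTION_ATTRIBUTES
+simde_float32 simde_x_example_fast_rcp(simde_float32 x)
+{
+	int32_t ix;
+	simde_float32 r;
+	simde_memcpy(&ix, &x, sizeof(ix));
+	/* Initial estimate from the integer bit trick. */
+	ix = INT32_C(0x7EF311C3) - ix;
+	simde_memcpy(&r, &ix, sizeof(r));
+	/* One Newton-Raphson refinement step. */
+	return r * (SIMDE_FLOAT32_C(2.0) - r * x);
+}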
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_rcp_ss(simde__m128 a)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ return _mm_rcp_ss(a);
+#elif (SIMDE_NATURAL_VECTOR_SIZE > 0)
+ return simde_mm_move_ss(a, simde_mm_rcp_ps(a));
+#else
+ simde__m128_private r_, a_ = simde__m128_to_private(a);
+
+ r_.f32[0] = 1.0f / a_.f32[0];
+ r_.f32[1] = a_.f32[1];
+ r_.f32[2] = a_.f32[2];
+ r_.f32[3] = a_.f32[3];
+
+ return simde__m128_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_rcp_ss(a) simde_mm_rcp_ss((a))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_rsqrt_ps(simde__m128 a)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ return _mm_rsqrt_ps(a);
+#else
+ simde__m128_private r_, a_ = simde__m128_to_private(a);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_f32 = vrsqrteq_f32(a_.neon_f32);
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ r_.altivec_f32 = vec_rsqrte(a_.altivec_f32);
+#elif defined(SIMDE_IEEE754_STORAGE)
+ /* https://basesandframes.files.wordpress.com/2020/04/even_faster_math_functions_green_2020.pdf
+ Pages 100 - 103 */
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.f32) / sizeof(r_.f32[0])); i++) {
+#if SIMDE_ACCURACY_PREFERENCE <= 0
+ r_.i32[i] = INT32_C(0x5F37624F) - (a_.i32[i] >> 1);
+#else
+ simde_float32 x = a_.f32[i];
+ simde_float32 xhalf = SIMDE_FLOAT32_C(0.5) * x;
+ int32_t ix;
+
+ simde_memcpy(&ix, &x, sizeof(ix));
+
+#if SIMDE_ACCURACY_PREFERENCE == 1
+ ix = INT32_C(0x5F375A82) - (ix >> 1);
+#else
+ ix = INT32_C(0x5F37599E) - (ix >> 1);
+#endif
+
+ simde_memcpy(&x, &ix, sizeof(x));
+
+#if SIMDE_ACCURACY_PREFERENCE >= 2
+ x = x * (SIMDE_FLOAT32_C(1.5008909) - xhalf * x * x);
+#endif
+ x = x * (SIMDE_FLOAT32_C(1.5008909) - xhalf * x * x);
+
+ r_.f32[i] = x;
+#endif
+ }
+#elif defined(simde_math_sqrtf)
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.f32) / sizeof(r_.f32[0])); i++) {
+ r_.f32[i] = 1.0f / simde_math_sqrtf(a_.f32[i]);
+ }
+#else
+ HEDLEY_UNREACHABLE();
+#endif
+
+ return simde__m128_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_rsqrt_ps(a) simde_mm_rsqrt_ps((a))
+#endif
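+
+/* Editor's note: scalar sketch of the SIMDE_IEEE754_STORAGE path in
+ * simde_mm_rsqrt_ps above, added for illustration only (not part of upstream
+ * SIMDe; the name simde_x_example_fast_rsqrt is hypothetical). This is the
+ * classic fast inverse square root: shift the bit pattern right by one,
+ * subtract from a magic constant, then apply a Newton-Raphson step of the
+ * form x = x * (1.5 - xhalf * x * x); the 1.5008909 used above is a tuned
+ * variant of the textbook 1.5. */
+SIMDE_FUNCTION_ATTRIBUTES
+simde_float32 simde_x_example_fast_rsqrt(simde_float32 x)
+{
+	simde_float32 xhalf = SIMDE_FLOAT32_C(0.5) * x;
+	int32_t ix;
+	simde_memcpy(&ix, &x, sizeof(ix));
+	/* Initial estimate from the integer bit trick. */
+	ix = INT32_C(0x5F37599E) - (ix >> 1);
+	simde_memcpy(&x, &ix, sizeof(x));
+	/* One refinement step. */
+	x = x * (SIMDE_FLOAT32_C(1.5008909) - xhalf * x * x);
+	return x;
+}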
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_rsqrt_ss(simde__m128 a)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ return _mm_rsqrt_ss(a);
+#elif (SIMDE_NATURAL_VECTOR_SIZE > 0)
+ return simde_mm_move_ss(a, simde_mm_rsqrt_ps(a));
+#else
+ simde__m128_private r_, a_ = simde__m128_to_private(a);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_f32 =
+ vsetq_lane_f32(vgetq_lane_f32(simde_mm_rsqrt_ps(a).neon_f32, 0),
+ a_.neon_f32, 0);
+#elif defined(SIMDE_IEEE754_STORAGE)
+ {
+#if SIMDE_ACCURACY_PREFERENCE <= 0
+ r_.i32[0] = INT32_C(0x5F37624F) - (a_.i32[0] >> 1);
+#else
+ simde_float32 x = a_.f32[0];
+ simde_float32 xhalf = SIMDE_FLOAT32_C(0.5) * x;
+ int32_t ix;
+
+ simde_memcpy(&ix, &x, sizeof(ix));
+
+#if SIMDE_ACCURACY_PREFERENCE == 1
+ ix = INT32_C(0x5F375A82) - (ix >> 1);
+#else
+ ix = INT32_C(0x5F37599E) - (ix >> 1);
+#endif
+
+ simde_memcpy(&x, &ix, sizeof(x));
+
+#if SIMDE_ACCURACY_PREFERENCE >= 2
+ x = x * (SIMDE_FLOAT32_C(1.5008909) - xhalf * x * x);
+#endif
+ x = x * (SIMDE_FLOAT32_C(1.5008909) - xhalf * x * x);
+
+ r_.f32[0] = x;
+#endif
+ }
+ r_.f32[1] = a_.f32[1];
+ r_.f32[2] = a_.f32[2];
+ r_.f32[3] = a_.f32[3];
+#elif defined(simde_math_sqrtf)
+ r_.f32[0] = 1.0f / simde_math_sqrtf(a_.f32[0]);
+ r_.f32[1] = a_.f32[1];
+ r_.f32[2] = a_.f32[2];
+ r_.f32[3] = a_.f32[3];
+#else
+ HEDLEY_UNREACHABLE();
+#endif
+
+ return simde__m128_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_rsqrt_ss(a) simde_mm_rsqrt_ss((a))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_sad_pu8(simde__m64 a, simde__m64 b)
+{
+#if defined(SIMDE_X86_SSE_NATIVE) && defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_sad_pu8(a, b);
+#else
+ simde__m64_private r_, a_ = simde__m64_to_private(a),
+ b_ = simde__m64_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ uint16x4_t t = vpaddl_u8(vabd_u8(a_.neon_u8, b_.neon_u8));
+ uint16_t r0 = t[0] + t[1] + t[2] + t[3];
+ r_.neon_u16 = vset_lane_u16(r0, vdup_n_u16(0), 0);
+#else
+ uint16_t sum = 0;
+
+#if defined(SIMDE_HAVE_STDLIB_H)
+ SIMDE_VECTORIZE_REDUCTION(+ : sum)
+ for (size_t i = 0; i < (sizeof(r_.u8) / sizeof(r_.u8[0])); i++) {
+ sum += HEDLEY_STATIC_CAST(uint8_t, abs(a_.u8[i] - b_.u8[i]));
+ }
+
+ r_.i16[0] = HEDLEY_STATIC_CAST(int16_t, sum);
+ r_.i16[1] = 0;
+ r_.i16[2] = 0;
+ r_.i16[3] = 0;
+#else
+ HEDLEY_UNREACHABLE();
+#endif
+#endif
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#define simde_m_psadbw(a, b) simde_mm_sad_pu8(a, b)
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_sad_pu8(a, b) simde_mm_sad_pu8(a, b)
+#define _m_psadbw(a, b) simde_mm_sad_pu8(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_set_ss(simde_float32 a)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ return _mm_set_ss(a);
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ return vsetq_lane_f32(a, vdupq_n_f32(SIMDE_FLOAT32_C(0.0)), 0);
+#else
+ return simde_mm_set_ps(SIMDE_FLOAT32_C(0.0), SIMDE_FLOAT32_C(0.0),
+ SIMDE_FLOAT32_C(0.0), a);
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_set_ss(a) simde_mm_set_ss(a)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_setr_ps(simde_float32 e3, simde_float32 e2,
+ simde_float32 e1, simde_float32 e0)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ return _mm_setr_ps(e3, e2, e1, e0);
+#else
+ return simde_mm_set_ps(e0, e1, e2, e3);
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_setr_ps(e3, e2, e1, e0) simde_mm_setr_ps(e3, e2, e1, e0)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_setzero_ps(void)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ return _mm_setzero_ps();
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ return vdupq_n_f32(SIMDE_FLOAT32_C(0.0));
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ return vec_splats(SIMDE_FLOAT32_C(0.0));
+#else
+ simde__m128 r;
+ simde_memset(&r, 0, sizeof(r));
+ return r;
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_setzero_ps() simde_mm_setzero_ps()
+#endif
+
+#if defined(SIMDE_DIAGNOSTIC_DISABLE_UNINITIALIZED_)
+HEDLEY_DIAGNOSTIC_PUSH
+SIMDE_DIAGNOSTIC_DISABLE_UNINITIALIZED_
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_undefined_ps(void)
+{
+ simde__m128_private r_;
+
+#if defined(SIMDE_HAVE_UNDEFINED128)
+ r_.n = _mm_undefined_ps();
+#elif !defined(SIMDE_DIAGNOSTIC_DISABLE_UNINITIALIZED_)
+ r_ = simde__m128_to_private(simde_mm_setzero_ps());
+#endif
+
+ return simde__m128_from_private(r_);
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_undefined_ps() simde_mm_undefined_ps()
+#endif
+
+#if defined(SIMDE_DIAGNOSTIC_DISABLE_UNINITIALIZED_)
+HEDLEY_DIAGNOSTIC_POP
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_x_mm_setone_ps(void)
+{
+ simde__m128 t = simde_mm_setzero_ps();
+ return simde_mm_cmpeq_ps(t, t);
+}
+
+SIMDE_FUNCTION_ATTRIBUTES
+void simde_mm_sfence(void)
+{
+ /* TODO: Use Hedley. */
+#if defined(SIMDE_X86_SSE_NATIVE)
+ _mm_sfence();
+#elif defined(__GNUC__) && \
+ ((__GNUC__ > 4) || (__GNUC__ == 4 && __GNUC_MINOR__ >= 7))
+ __atomic_thread_fence(__ATOMIC_SEQ_CST);
+#elif !defined(__INTEL_COMPILER) && defined(__STDC_VERSION__) && \
+ (__STDC_VERSION__ >= 201112L) && !defined(__STDC_NO_ATOMICS__)
+#if defined(__GNUC__) && (__GNUC__ == 4) && (__GNUC_MINOR__ < 9)
+ __atomic_thread_fence(__ATOMIC_SEQ_CST);
+#else
+ atomic_thread_fence(memory_order_seq_cst);
+#endif
+#elif defined(_MSC_VER)
+ MemoryBarrier();
+#elif HEDLEY_HAS_EXTENSION(c_atomic)
+ __c11_atomic_thread_fence(__ATOMIC_SEQ_CST);
+#elif defined(__GNUC__) && \
+ ((__GNUC__ > 4) || (__GNUC__ == 4 && __GNUC_MINOR__ >= 1))
+ __sync_synchronize();
+#elif defined(_OPENMP)
+#pragma omp critical(simde_mm_sfence_)
+ {
+ }
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_sfence() simde_mm_sfence()
+#endif
+
+#define SIMDE_MM_SHUFFLE(z, y, x, w) \
+ (((z) << 6) | ((y) << 4) | ((x) << 2) | (w))
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _MM_SHUFFLE(z, y, x, w) SIMDE_MM_SHUFFLE(z, y, x, w)
+#endif
+
+#if defined(SIMDE_X86_SSE_NATIVE) && defined(SIMDE_X86_MMX_NATIVE) && \
+ !defined(__PGI)
+#define simde_mm_shuffle_pi16(a, imm8) _mm_shuffle_pi16(a, imm8)
+#elif defined(SIMDE_SHUFFLE_VECTOR_)
+#define simde_mm_shuffle_pi16(a, imm8) \
+ (__extension__({ \
+ const simde__m64_private simde__tmp_a_ = \
+ simde__m64_to_private(a); \
+ simde__m64_from_private((simde__m64_private){ \
+ .i16 = SIMDE_SHUFFLE_VECTOR_( \
+ 16, 8, (simde__tmp_a_).i16, \
+ (simde__tmp_a_).i16, (((imm8)) & 3), \
+ (((imm8) >> 2) & 3), (((imm8) >> 4) & 3), \
+ (((imm8) >> 6) & 3))}); \
+ }))
+#else
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_shuffle_pi16(simde__m64 a, const int imm8)
+ SIMDE_REQUIRE_CONSTANT_RANGE(imm8, 0, 255)
+{
+ simde__m64_private r_;
+ simde__m64_private a_ = simde__m64_to_private(a);
+
+ for (size_t i = 0; i < sizeof(r_.i16) / sizeof(r_.i16[0]); i++) {
+ r_.i16[i] = a_.i16[(imm8 >> (i * 2)) & 3];
+ }
+
+ HEDLEY_DIAGNOSTIC_PUSH
+#if HEDLEY_HAS_WARNING("-Wconditional-uninitialized")
+#pragma clang diagnostic ignored "-Wconditional-uninitialized"
+#endif
+ return simde__m64_from_private(r_);
+ HEDLEY_DIAGNOSTIC_POP
+}
+#endif
+#if defined(SIMDE_X86_SSE_NATIVE) && defined(SIMDE_X86_MMX_NATIVE) && \
+ !defined(__PGI)
+#define simde_m_pshufw(a, imm8) _m_pshufw(a, imm8)
+#else
+#define simde_m_pshufw(a, imm8) simde_mm_shuffle_pi16(a, imm8)
+#endif
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_shuffle_pi16(a, imm8) simde_mm_shuffle_pi16(a, imm8)
+#define _m_pshufw(a, imm8) simde_mm_shuffle_pi16(a, imm8)
+#endif
+
+#if defined(SIMDE_X86_SSE_NATIVE) && !defined(__PGI)
+#define simde_mm_shuffle_ps(a, b, imm8) _mm_shuffle_ps(a, b, imm8)
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+#define simde_mm_shuffle_ps(a, b, imm8) \
+ __extension__({ \
+ float32x4_t ret; \
+ ret = vmovq_n_f32(vgetq_lane_f32(a, (imm8) & (0x3))); \
+ ret = vsetq_lane_f32(vgetq_lane_f32(a, ((imm8) >> 2) & 0x3), \
+ ret, 1); \
+ ret = vsetq_lane_f32(vgetq_lane_f32(b, ((imm8) >> 4) & 0x3), \
+ ret, 2); \
+ ret = vsetq_lane_f32(vgetq_lane_f32(b, ((imm8) >> 6) & 0x3), \
+ ret, 3); \
+ })
+#elif defined(SIMDE_SHUFFLE_VECTOR_)
+#define simde_mm_shuffle_ps(a, b, imm8) \
+ (__extension__({ \
+ simde__m128_from_private((simde__m128_private){ \
+ .f32 = SIMDE_SHUFFLE_VECTOR_( \
+ 32, 16, simde__m128_to_private(a).f32, \
+ simde__m128_to_private(b).f32, (((imm8)) & 3), \
+ (((imm8) >> 2) & 3), (((imm8) >> 4) & 3) + 4, \
+ (((imm8) >> 6) & 3) + 4)}); \
+ }))
+#else
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_shuffle_ps(simde__m128 a, simde__m128 b, const int imm8)
+ SIMDE_REQUIRE_CONSTANT_RANGE(imm8, 0, 255)
+{
+ simde__m128_private r_, a_ = simde__m128_to_private(a),
+ b_ = simde__m128_to_private(b);
+
+ r_.f32[0] = a_.f32[(imm8 >> 0) & 3];
+ r_.f32[1] = a_.f32[(imm8 >> 2) & 3];
+ r_.f32[2] = b_.f32[(imm8 >> 4) & 3];
+ r_.f32[3] = b_.f32[(imm8 >> 6) & 3];
+
+ return simde__m128_from_private(r_);
+}
+#endif
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_shuffle_ps(a, b, imm8) simde_mm_shuffle_ps((a), (b), imm8)
+#endif
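The SIMDE_MM_SHUFFLE macro packs four 2-bit lane indices into the imm8 that simde_mm_shuffle_ps consumes; as the portable fallback shows, the two low fields select lanes of a and the two high fields select lanes of b. A small usage sketch, assuming sse.h is included; the function name and values are purely illustrative:

    /* result = { a[w], a[x], b[y], b[z] } for SIMDE_MM_SHUFFLE(z, y, x, w) */
    static simde__m128 shuffle_example(void)
    {
        simde__m128 a = simde_mm_setr_ps(0.0f, 1.0f, 2.0f, 3.0f);
        simde__m128 b = simde_mm_setr_ps(4.0f, 5.0f, 6.0f, 7.0f);
        /* lanes 0 and 1 from a, lanes 2 and 3 from b -> { 0, 1, 6, 7 } */
        return simde_mm_shuffle_ps(a, b, SIMDE_MM_SHUFFLE(3, 2, 1, 0));
    }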
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_sqrt_ps(simde__m128 a)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ return _mm_sqrt_ps(a);
+#else
+ simde__m128_private r_, a_ = simde__m128_to_private(a);
+
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ r_.neon_f32 = vsqrtq_f32(a_.neon_f32);
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ float32x4_t est = vrsqrteq_f32(a_.neon_f32);
+ for (int i = 0; i <= SIMDE_ACCURACY_PREFERENCE; i++) {
+ est = vmulq_f32(vrsqrtsq_f32(vmulq_f32(a_.neon_f32, est), est),
+ est);
+ }
+ r_.neon_f32 = vmulq_f32(a_.neon_f32, est);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_f32x4_sqrt(a_.wasm_v128);
+#elif defined(SIMDE_POWER_ALTIVEC_P7_NATIVE)
+ r_.altivec_f32 = vec_sqrt(a_.altivec_f32);
+#elif defined(simde_math_sqrt)
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < sizeof(r_.f32) / sizeof(r_.f32[0]); i++) {
+ r_.f32[i] = simde_math_sqrtf(a_.f32[i]);
+ }
+#else
+ HEDLEY_UNREACHABLE();
+#endif
+
+ return simde__m128_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_sqrt_ps(a) simde_mm_sqrt_ps((a))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_sqrt_ss(simde__m128 a)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ return _mm_sqrt_ss(a);
+#elif (SIMDE_NATURAL_VECTOR_SIZE > 0)
+ return simde_mm_move_ss(a, simde_mm_sqrt_ps(a));
+#else
+ simde__m128_private r_, a_ = simde__m128_to_private(a);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ float32_t value = vgetq_lane_f32(
+ simde__m128_to_private(simde_mm_sqrt_ps(a)).neon_f32, 0);
+ r_.neon_f32 = vsetq_lane_f32(value, a_.neon_f32, 0);
+#elif defined(simde_math_sqrtf)
+ r_.f32[0] = simde_math_sqrtf(a_.f32[0]);
+ r_.f32[1] = a_.f32[1];
+ r_.f32[2] = a_.f32[2];
+ r_.f32[3] = a_.f32[3];
+#else
+ HEDLEY_UNREACHABLE();
+#endif
+
+ return simde__m128_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_sqrt_ss(a) simde_mm_sqrt_ss((a))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+void simde_mm_store_ps(simde_float32 mem_addr[4], simde__m128 a)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ _mm_store_ps(mem_addr, a);
+#else
+ simde__m128_private a_ = simde__m128_to_private(a);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ vst1q_f32(mem_addr, a_.neon_f32);
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ vec_st(a_.altivec_f32, 0, mem_addr);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ wasm_v128_store(mem_addr, a_.wasm_v128);
+#else
+ simde_memcpy(mem_addr, &a_, sizeof(a));
+#endif
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_store_ps(mem_addr, a) \
+ simde_mm_store_ps(SIMDE_CHECKED_REINTERPRET_CAST( \
+ float *, simde_float32 *, mem_addr), \
+ (a))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+void simde_mm_store1_ps(simde_float32 mem_addr[4], simde__m128 a)
+{
+ simde_float32 *mem_addr_ =
+ SIMDE_ALIGN_ASSUME_LIKE(mem_addr, simde__m128);
+
+#if defined(SIMDE_X86_SSE_NATIVE)
+ _mm_store_ps1(mem_addr_, a);
+#else
+ simde__m128_private a_ = simde__m128_to_private(a);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ vst1q_f32(mem_addr_, vdupq_lane_f32(vget_low_f32(a_.neon_f32), 0));
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ wasm_v128_store(mem_addr_,
+ wasm_v32x4_shuffle(a_.wasm_v128, a_.wasm_v128, 0, 0, 0,
+ 0));
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ vec_st(vec_splat(a_.altivec_f32, 0), 0, mem_addr_);
+#elif defined(SIMDE_SHUFFLE_VECTOR_)
+ simde__m128_private tmp_;
+ tmp_.f32 = SIMDE_SHUFFLE_VECTOR_(32, 16, a_.f32, a_.f32, 0, 0, 0, 0);
+ simde_mm_store_ps(mem_addr_, tmp_.f32);
+#else
+ SIMDE_VECTORIZE_ALIGNED(mem_addr_ : 16)
+ for (size_t i = 0; i < sizeof(a_.f32) / sizeof(a_.f32[0]); i++) {
+ mem_addr_[i] = a_.f32[0];
+ }
+#endif
+#endif
+}
+#define simde_mm_store_ps1(mem_addr, a) simde_mm_store1_ps(mem_addr, a)
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_store_ps1(mem_addr, a) \
+ simde_mm_store1_ps(SIMDE_CHECKED_REINTERPRET_CAST( \
+ float *, simde_float32 *, mem_addr), \
+ (a))
+#define _mm_store1_ps(mem_addr, a) \
+ simde_mm_store1_ps(SIMDE_CHECKED_REINTERPRET_CAST( \
+ float *, simde_float32 *, mem_addr), \
+ (a))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+void simde_mm_store_ss(simde_float32 *mem_addr, simde__m128 a)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ _mm_store_ss(mem_addr, a);
+#else
+ simde__m128_private a_ = simde__m128_to_private(a);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ vst1q_lane_f32(mem_addr, a_.neon_f32, 0);
+#else
+ *mem_addr = a_.f32[0];
+#endif
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_store_ss(mem_addr, a) \
+ simde_mm_store_ss(SIMDE_CHECKED_REINTERPRET_CAST( \
+ float *, simde_float32 *, mem_addr), \
+ (a))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+void simde_mm_storeh_pi(simde__m64 *mem_addr, simde__m128 a)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ _mm_storeh_pi(HEDLEY_REINTERPRET_CAST(__m64 *, mem_addr), a);
+#else
+ simde__m128_private a_ = simde__m128_to_private(a);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ vst1_f32(HEDLEY_REINTERPRET_CAST(float32_t *, mem_addr),
+ vget_high_f32(a_.neon_f32));
+#else
+ simde_memcpy(mem_addr, &(a_.m64[1]), sizeof(a_.m64[1]));
+#endif
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_storeh_pi(mem_addr, a) simde_mm_storeh_pi(mem_addr, (a))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+void simde_mm_storel_pi(simde__m64 *mem_addr, simde__m128 a)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ _mm_storel_pi(HEDLEY_REINTERPRET_CAST(__m64 *, mem_addr), a);
+#else
+ simde__m64_private *dest_ =
+ HEDLEY_REINTERPRET_CAST(simde__m64_private *, mem_addr);
+ simde__m128_private a_ = simde__m128_to_private(a);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ dest_->neon_f32 = vget_low_f32(a_.neon_f32);
+#else
+ dest_->f32[0] = a_.f32[0];
+ dest_->f32[1] = a_.f32[1];
+#endif
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_storel_pi(mem_addr, a) simde_mm_storel_pi(mem_addr, (a))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+void simde_mm_storer_ps(simde_float32 mem_addr[4], simde__m128 a)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ _mm_storer_ps(mem_addr, a);
+#else
+ simde__m128_private a_ = simde__m128_to_private(a);
+
+#if defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ vec_st(vec_reve(a_.altivec_f32), 0, mem_addr);
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ float32x4_t tmp = vrev64q_f32(a_.neon_f32);
+ vst1q_f32(mem_addr, vextq_f32(tmp, tmp, 2));
+#elif defined(SIMDE_SHUFFLE_VECTOR_)
+ a_.f32 = SIMDE_SHUFFLE_VECTOR_(32, 16, a_.f32, a_.f32, 3, 2, 1, 0);
+ simde_mm_store_ps(mem_addr, simde__m128_from_private(a_));
+#else
+ SIMDE_VECTORIZE_ALIGNED(mem_addr : 16)
+ for (size_t i = 0; i < sizeof(a_.f32) / sizeof(a_.f32[0]); i++) {
+ mem_addr[i] =
+ a_.f32[((sizeof(a_.f32) / sizeof(a_.f32[0])) - 1) - i];
+ }
+#endif
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_storer_ps(mem_addr, a) \
+ simde_mm_storer_ps(SIMDE_CHECKED_REINTERPRET_CAST( \
+ float *, simde_float32 *, mem_addr), \
+ (a))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+void simde_mm_storeu_ps(simde_float32 mem_addr[4], simde__m128 a)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ _mm_storeu_ps(mem_addr, a);
+#else
+ simde__m128_private a_ = simde__m128_to_private(a);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ vst1q_f32(mem_addr, a_.neon_f32);
+#elif defined(SIMDE_POWER_ALTIVEC_P7_NATIVE)
+ vec_vsx_st(a_.altivec_f32, 0, mem_addr);
+#else
+ simde_memcpy(mem_addr, &a_, sizeof(a_));
+#endif
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_storeu_ps(mem_addr, a) \
+ simde_mm_storeu_ps(SIMDE_CHECKED_REINTERPRET_CAST( \
+ float *, simde_float32 *, mem_addr), \
+ (a))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_sub_ps(simde__m128 a, simde__m128 b)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ return _mm_sub_ps(a, b);
+#else
+ simde__m128_private r_, a_ = simde__m128_to_private(a),
+ b_ = simde__m128_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_f32 = vsubq_f32(a_.neon_f32, b_.neon_f32);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_f32x4_sub(a_.wasm_v128, b_.wasm_v128);
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ r_.altivec_f32 = vec_sub(a_.altivec_f32, b_.altivec_f32);
+#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
+ r_.f32 = a_.f32 - b_.f32;
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.f32) / sizeof(r_.f32[0])); i++) {
+ r_.f32[i] = a_.f32[i] - b_.f32[i];
+ }
+#endif
+
+ return simde__m128_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_sub_ps(a, b) simde_mm_sub_ps((a), (b))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_sub_ss(simde__m128 a, simde__m128 b)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ return _mm_sub_ss(a, b);
+#elif (SIMDE_NATURAL_VECTOR_SIZE > 0)
+ return simde_mm_move_ss(a, simde_mm_sub_ps(a, b));
+#else
+ simde__m128_private r_, a_ = simde__m128_to_private(a),
+ b_ = simde__m128_to_private(b);
+
+ r_.f32[0] = a_.f32[0] - b_.f32[0];
+ r_.f32[1] = a_.f32[1];
+ r_.f32[2] = a_.f32[2];
+ r_.f32[3] = a_.f32[3];
+
+ return simde__m128_from_private(r_);
+#endif
+}
+
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_sub_ss(a, b) simde_mm_sub_ss((a), (b))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+int simde_mm_ucomieq_ss(simde__m128 a, simde__m128 b)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ return _mm_ucomieq_ss(a, b);
+#else
+ simde__m128_private a_ = simde__m128_to_private(a),
+ b_ = simde__m128_to_private(b);
+ int r;
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ uint32x4_t a_not_nan = vceqq_f32(a_.neon_f32, a_.neon_f32);
+ uint32x4_t b_not_nan = vceqq_f32(b_.neon_f32, b_.neon_f32);
+ uint32x4_t a_or_b_nan = vmvnq_u32(vandq_u32(a_not_nan, b_not_nan));
+ uint32x4_t a_eq_b = vceqq_f32(a_.neon_f32, b_.neon_f32);
+ r = !!(vgetq_lane_u32(vorrq_u32(a_or_b_nan, a_eq_b), 0) != 0);
+#elif defined(SIMDE_HAVE_FENV_H)
+ fenv_t envp;
+ int x = feholdexcept(&envp);
+ r = a_.f32[0] == b_.f32[0];
+ if (HEDLEY_LIKELY(x == 0))
+ fesetenv(&envp);
+#else
+ r = a_.f32[0] == b_.f32[0];
+#endif
+
+ return r;
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_ucomieq_ss(a, b) simde_mm_ucomieq_ss((a), (b))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+int simde_mm_ucomige_ss(simde__m128 a, simde__m128 b)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ return _mm_ucomige_ss(a, b);
+#else
+ simde__m128_private a_ = simde__m128_to_private(a),
+ b_ = simde__m128_to_private(b);
+ int r;
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ uint32x4_t a_not_nan = vceqq_f32(a_.neon_f32, a_.neon_f32);
+ uint32x4_t b_not_nan = vceqq_f32(b_.neon_f32, b_.neon_f32);
+ uint32x4_t a_and_b_not_nan = vandq_u32(a_not_nan, b_not_nan);
+ uint32x4_t a_ge_b = vcgeq_f32(a_.neon_f32, b_.neon_f32);
+ r = !!(vgetq_lane_u32(vandq_u32(a_and_b_not_nan, a_ge_b), 0) != 0);
+#elif defined(SIMDE_HAVE_FENV_H)
+ fenv_t envp;
+ int x = feholdexcept(&envp);
+ r = a_.f32[0] >= b_.f32[0];
+ if (HEDLEY_LIKELY(x == 0))
+ fesetenv(&envp);
+#else
+ r = a_.f32[0] >= b_.f32[0];
+#endif
+
+ return r;
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_ucomige_ss(a, b) simde_mm_ucomige_ss((a), (b))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+int simde_mm_ucomigt_ss(simde__m128 a, simde__m128 b)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ return _mm_ucomigt_ss(a, b);
+#else
+ simde__m128_private a_ = simde__m128_to_private(a),
+ b_ = simde__m128_to_private(b);
+ int r;
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ uint32x4_t a_not_nan = vceqq_f32(a_.neon_f32, a_.neon_f32);
+ uint32x4_t b_not_nan = vceqq_f32(b_.neon_f32, b_.neon_f32);
+ uint32x4_t a_and_b_not_nan = vandq_u32(a_not_nan, b_not_nan);
+ uint32x4_t a_gt_b = vcgtq_f32(a_.neon_f32, b_.neon_f32);
+ r = !!(vgetq_lane_u32(vandq_u32(a_and_b_not_nan, a_gt_b), 0) != 0);
+#elif defined(SIMDE_HAVE_FENV_H)
+ fenv_t envp;
+ int x = feholdexcept(&envp);
+ r = a_.f32[0] > b_.f32[0];
+ if (HEDLEY_LIKELY(x == 0))
+ fesetenv(&envp);
+#else
+ r = a_.f32[0] > b_.f32[0];
+#endif
+
+ return r;
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_ucomigt_ss(a, b) simde_mm_ucomigt_ss((a), (b))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+int simde_mm_ucomile_ss(simde__m128 a, simde__m128 b)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ return _mm_ucomile_ss(a, b);
+#else
+ simde__m128_private a_ = simde__m128_to_private(a),
+ b_ = simde__m128_to_private(b);
+ int r;
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ uint32x4_t a_not_nan = vceqq_f32(a_.neon_f32, a_.neon_f32);
+ uint32x4_t b_not_nan = vceqq_f32(b_.neon_f32, b_.neon_f32);
+ uint32x4_t a_or_b_nan = vmvnq_u32(vandq_u32(a_not_nan, b_not_nan));
+ uint32x4_t a_le_b = vcleq_f32(a_.neon_f32, b_.neon_f32);
+ r = !!(vgetq_lane_u32(vorrq_u32(a_or_b_nan, a_le_b), 0) != 0);
+#elif defined(SIMDE_HAVE_FENV_H)
+ fenv_t envp;
+ int x = feholdexcept(&envp);
+ r = a_.f32[0] <= b_.f32[0];
+ if (HEDLEY_LIKELY(x == 0))
+ fesetenv(&envp);
+#else
+ r = a_.f32[0] <= b_.f32[0];
+#endif
+
+ return r;
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_ucomile_ss(a, b) simde_mm_ucomile_ss((a), (b))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+int simde_mm_ucomilt_ss(simde__m128 a, simde__m128 b)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ return _mm_ucomilt_ss(a, b);
+#else
+ simde__m128_private a_ = simde__m128_to_private(a),
+ b_ = simde__m128_to_private(b);
+ int r;
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ uint32x4_t a_not_nan = vceqq_f32(a_.neon_f32, a_.neon_f32);
+ uint32x4_t b_not_nan = vceqq_f32(b_.neon_f32, b_.neon_f32);
+ uint32x4_t a_or_b_nan = vmvnq_u32(vandq_u32(a_not_nan, b_not_nan));
+ uint32x4_t a_lt_b = vcltq_f32(a_.neon_f32, b_.neon_f32);
+ r = !!(vgetq_lane_u32(vorrq_u32(a_or_b_nan, a_lt_b), 0) != 0);
+#elif defined(SIMDE_HAVE_FENV_H)
+ fenv_t envp;
+ int x = feholdexcept(&envp);
+ r = a_.f32[0] < b_.f32[0];
+ if (HEDLEY_LIKELY(x == 0))
+ fesetenv(&envp);
+#else
+ r = a_.f32[0] < b_.f32[0];
+#endif
+
+ return r;
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_ucomilt_ss(a, b) simde_mm_ucomilt_ss((a), (b))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+int simde_mm_ucomineq_ss(simde__m128 a, simde__m128 b)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ return _mm_ucomineq_ss(a, b);
+#else
+ simde__m128_private a_ = simde__m128_to_private(a),
+ b_ = simde__m128_to_private(b);
+ int r;
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ uint32x4_t a_not_nan = vceqq_f32(a_.neon_f32, a_.neon_f32);
+ uint32x4_t b_not_nan = vceqq_f32(b_.neon_f32, b_.neon_f32);
+ uint32x4_t a_and_b_not_nan = vandq_u32(a_not_nan, b_not_nan);
+ uint32x4_t a_neq_b = vmvnq_u32(vceqq_f32(a_.neon_f32, b_.neon_f32));
+ r = !!(vgetq_lane_u32(vandq_u32(a_and_b_not_nan, a_neq_b), 0) != 0);
+#elif defined(SIMDE_HAVE_FENV_H)
+ fenv_t envp;
+ int x = feholdexcept(&envp);
+ r = a_.f32[0] != b_.f32[0];
+ if (HEDLEY_LIKELY(x == 0))
+ fesetenv(&envp);
+#else
+ r = a_.f32[0] != b_.f32[0];
+#endif
+
+ return r;
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_ucomineq_ss(a, b) simde_mm_ucomineq_ss((a), (b))
+#endif
+
+#if defined(SIMDE_X86_SSE_NATIVE)
+#if defined(__has_builtin)
+#if __has_builtin(__builtin_ia32_undef128)
+#define SIMDE_HAVE_UNDEFINED128
+#endif
+#elif !defined(__PGI) && !defined(SIMDE_BUG_GCC_REV_208793) && \
+ !defined(_MSC_VER)
+#define SIMDE_HAVE_UNDEFINED128
+#endif
+#endif
+
+#if defined(SIMDE_DIAGNOSTIC_DISABLE_UNINITIALIZED_)
+HEDLEY_DIAGNOSTIC_PUSH
+SIMDE_DIAGNOSTIC_DISABLE_UNINITIALIZED_
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_unpackhi_ps(simde__m128 a, simde__m128 b)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ return _mm_unpackhi_ps(a, b);
+#else
+ simde__m128_private r_, a_ = simde__m128_to_private(a),
+ b_ = simde__m128_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ r_.neon_f32 = vzip2q_f32(a_.neon_f32, b_.neon_f32);
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ float32x2_t a1 = vget_high_f32(a_.neon_f32);
+ float32x2_t b1 = vget_high_f32(b_.neon_f32);
+ float32x2x2_t result = vzip_f32(a1, b1);
+ r_.neon_f32 = vcombine_f32(result.val[0], result.val[1]);
+#elif defined(SIMDE_SHUFFLE_VECTOR_)
+ r_.f32 = SIMDE_SHUFFLE_VECTOR_(32, 16, a_.f32, b_.f32, 2, 6, 3, 7);
+#else
+ r_.f32[0] = a_.f32[2];
+ r_.f32[1] = b_.f32[2];
+ r_.f32[2] = a_.f32[3];
+ r_.f32[3] = b_.f32[3];
+#endif
+
+ return simde__m128_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_unpackhi_ps(a, b) simde_mm_unpackhi_ps((a), (b))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_unpacklo_ps(simde__m128 a, simde__m128 b)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ return _mm_unpacklo_ps(a, b);
+#else
+ simde__m128_private r_, a_ = simde__m128_to_private(a),
+ b_ = simde__m128_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ r_.neon_f32 = vzip1q_f32(a_.neon_f32, b_.neon_f32);
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ r_.altivec_f32 = vec_mergeh(a_.altivec_f32, b_.altivec_f32);
+#elif defined(SIMDE_SHUFFLE_VECTOR_)
+ r_.f32 = SIMDE_SHUFFLE_VECTOR_(32, 16, a_.f32, b_.f32, 0, 4, 1, 5);
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ float32x2_t a1 = vget_low_f32(a_.neon_f32);
+ float32x2_t b1 = vget_low_f32(b_.neon_f32);
+ float32x2x2_t result = vzip_f32(a1, b1);
+ r_.neon_f32 = vcombine_f32(result.val[0], result.val[1]);
+#else
+ r_.f32[0] = a_.f32[0];
+ r_.f32[1] = b_.f32[0];
+ r_.f32[2] = a_.f32[1];
+ r_.f32[3] = b_.f32[1];
+#endif
+
+ return simde__m128_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_unpacklo_ps(a, b) simde_mm_unpacklo_ps((a), (b))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+void simde_mm_stream_pi(simde__m64 *mem_addr, simde__m64 a)
+{
+#if defined(SIMDE_X86_SSE_NATIVE) && defined(SIMDE_X86_MMX_NATIVE)
+ _mm_stream_pi(HEDLEY_REINTERPRET_CAST(__m64 *, mem_addr), a);
+#else
+ simde__m64_private *dest = HEDLEY_REINTERPRET_CAST(simde__m64_private *,
+ mem_addr),
+ a_ = simde__m64_to_private(a);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ dest->i64[0] = vget_lane_s64(a_.neon_i64, 0);
+#else
+ dest->i64[0] = a_.i64[0];
+#endif
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_stream_pi(mem_addr, a) simde_mm_stream_pi(mem_addr, (a))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+void simde_mm_stream_ps(simde_float32 mem_addr[4], simde__m128 a)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ _mm_stream_ps(mem_addr, a);
+#elif HEDLEY_HAS_BUILTIN(__builtin_nontemporal_store) && \
+ defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
+ simde__m128_private a_ = simde__m128_to_private(a);
+ __builtin_nontemporal_store(
+ a_.f32, SIMDE_ALIGN_CAST(__typeof__(a_.f32) *, mem_addr));
+#else
+ simde_mm_store_ps(mem_addr, a);
+#endif
+}
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _mm_stream_ps(mem_addr, a) \
+ simde_mm_stream_ps(SIMDE_CHECKED_REINTERPRET_CAST( \
+ float *, simde_float32 *, mem_addr), \
+ (a))
+#endif
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+#define SIMDE_MM_TRANSPOSE4_PS(row0, row1, row2, row3) \
+ do { \
+ float32x4x2_t ROW01 = vtrnq_f32(row0, row1); \
+ float32x4x2_t ROW23 = vtrnq_f32(row2, row3); \
+ row0 = vcombine_f32(vget_low_f32(ROW01.val[0]), \
+ vget_low_f32(ROW23.val[0])); \
+ row1 = vcombine_f32(vget_low_f32(ROW01.val[1]), \
+ vget_low_f32(ROW23.val[1])); \
+ row2 = vcombine_f32(vget_high_f32(ROW01.val[0]), \
+ vget_high_f32(ROW23.val[0])); \
+ row3 = vcombine_f32(vget_high_f32(ROW01.val[1]), \
+ vget_high_f32(ROW23.val[1])); \
+ } while (0)
+#else
+#define SIMDE_MM_TRANSPOSE4_PS(row0, row1, row2, row3) \
+ do { \
+ simde__m128 tmp3, tmp2, tmp1, tmp0; \
+ tmp0 = simde_mm_unpacklo_ps((row0), (row1)); \
+ tmp2 = simde_mm_unpacklo_ps((row2), (row3)); \
+ tmp1 = simde_mm_unpackhi_ps((row0), (row1)); \
+ tmp3 = simde_mm_unpackhi_ps((row2), (row3)); \
+ row0 = simde_mm_movelh_ps(tmp0, tmp2); \
+ row1 = simde_mm_movehl_ps(tmp2, tmp0); \
+ row2 = simde_mm_movelh_ps(tmp1, tmp3); \
+ row3 = simde_mm_movehl_ps(tmp3, tmp1); \
+ } while (0)
+#endif
+#if defined(SIMDE_X86_SSE_ENABLE_NATIVE_ALIASES)
+#define _MM_TRANSPOSE4_PS(row0, row1, row2, row3) \
+ SIMDE_MM_TRANSPOSE4_PS(row0, row1, row2, row3)
+#endif
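SIMDE_MM_TRANSPOSE4_PS rewrites its four row arguments in place, turning rows into columns either through the NEON vtrnq_f32 pairs or through the unpack/movelh/movehl sequence shown above. A short usage sketch; the helper is illustrative, and it assumes simde_mm_loadu_ps from earlier in this header:

    /* Transpose a 4x4 float matrix stored row-major. */
    static void transpose4x4(simde_float32 m[4][4])
    {
        simde__m128 row0 = simde_mm_loadu_ps(m[0]);
        simde__m128 row1 = simde_mm_loadu_ps(m[1]);
        simde__m128 row2 = simde_mm_loadu_ps(m[2]);
        simde__m128 row3 = simde_mm_loadu_ps(m[3]);

        SIMDE_MM_TRANSPOSE4_PS(row0, row1, row2, row3);

        simde_mm_storeu_ps(m[0], row0); /* now holds the original column 0 */
        simde_mm_storeu_ps(m[1], row1);
        simde_mm_storeu_ps(m[2], row2);
        simde_mm_storeu_ps(m[3], row3);
    }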
+
+#if defined(_MM_EXCEPT_INVALID)
+#define SIMDE_MM_EXCEPT_INVALID _MM_EXCEPT_INVALID
+#else
+#define SIMDE_MM_EXCEPT_INVALID (0x0001)
+#endif
+#if defined(_MM_EXCEPT_DENORM)
+#define SIMDE_MM_EXCEPT_DENORM _MM_EXCEPT_DENORM
+#else
+#define SIMDE_MM_EXCEPT_DENORM (0x0002)
+#endif
+#if defined(_MM_EXCEPT_DIV_ZERO)
+#define SIMDE_MM_EXCEPT_DIV_ZERO _MM_EXCEPT_DIV_ZERO
+#else
+#define SIMDE_MM_EXCEPT_DIV_ZERO (0x0004)
+#endif
+#if defined(_MM_EXCEPT_OVERFLOW)
+#define SIMDE_MM_EXCEPT_OVERFLOW _MM_EXCEPT_OVERFLOW
+#else
+#define SIMDE_MM_EXCEPT_OVERFLOW (0x0008)
+#endif
+#if defined(_MM_EXCEPT_UNDERFLOW)
+#define SIMDE_MM_EXCEPT_UNDERFLOW _MM_EXCEPT_UNDERFLOW
+#else
+#define SIMDE_MM_EXCEPT_UNDERFLOW (0x0010)
+#endif
+#if defined(_MM_EXCEPT_INEXACT)
+#define SIMDE_MM_EXCEPT_INEXACT _MM_EXCEPT_INEXACT
+#else
+#define SIMDE_MM_EXCEPT_INEXACT (0x0020)
+#endif
+#if defined(_MM_EXCEPT_MASK)
+#define SIMDE_MM_EXCEPT_MASK _MM_EXCEPT_MASK
+#else
+#define SIMDE_MM_EXCEPT_MASK \
+ (SIMDE_MM_EXCEPT_INVALID | SIMDE_MM_EXCEPT_DENORM | \
+ SIMDE_MM_EXCEPT_DIV_ZERO | SIMDE_MM_EXCEPT_OVERFLOW | \
+ SIMDE_MM_EXCEPT_UNDERFLOW | SIMDE_MM_EXCEPT_INEXACT)
+#endif
+
+#if defined(_MM_MASK_INVALID)
+#define SIMDE_MM_MASK_INVALID _MM_MASK_INVALID
+#else
+#define SIMDE_MM_MASK_INVALID (0x0080)
+#endif
+#if defined(_MM_MASK_DENORM)
+#define SIMDE_MM_MASK_DENORM _MM_MASK_DENORM
+#else
+#define SIMDE_MM_MASK_DENORM (0x0100)
+#endif
+#if defined(_MM_MASK_DIV_ZERO)
+#define SIMDE_MM_MASK_DIV_ZERO _MM_MASK_DIV_ZERO
+#else
+#define SIMDE_MM_MASK_DIV_ZERO (0x0200)
+#endif
+#if defined(_MM_MASK_OVERFLOW)
+#define SIMDE_MM_MASK_OVERFLOW _MM_MASK_OVERFLOW
+#else
+#define SIMDE_MM_MASK_OVERFLOW (0x0400)
+#endif
+#if defined(_MM_MASK_UNDERFLOW)
+#define SIMDE_MM_MASK_UNDERFLOW _MM_MASK_UNDERFLOW
+#else
+#define SIMDE_MM_MASK_UNDERFLOW (0x0800)
+#endif
+#if defined(_MM_MASK_INEXACT)
+#define SIMDE_MM_MASK_INEXACT _MM_MASK_INEXACT
+#else
+#define SIMDE_MM_MASK_INEXACT (0x1000)
+#endif
+#if defined(_MM_MASK_MASK)
+#define SIMDE_MM_MASK_MASK _MM_MASK_MASK
+#else
+#define SIMDE_MM_MASK_MASK \
+ (SIMDE_MM_MASK_INVALID | SIMDE_MM_MASK_DENORM | \
+ SIMDE_MM_MASK_DIV_ZERO | SIMDE_MM_MASK_OVERFLOW | \
+ SIMDE_MM_MASK_UNDERFLOW | SIMDE_MM_MASK_INEXACT)
+#endif
+
+#if defined(_MM_FLUSH_ZERO_MASK)
+#define SIMDE_MM_FLUSH_ZERO_MASK _MM_FLUSH_ZERO_MASK
+#else
+#define SIMDE_MM_FLUSH_ZERO_MASK (0x8000)
+#endif
+#if defined(_MM_FLUSH_ZERO_ON)
+#define SIMDE_MM_FLUSH_ZERO_ON _MM_FLUSH_ZERO_ON
+#else
+#define SIMDE_MM_FLUSH_ZERO_ON (0x8000)
+#endif
+#if defined(_MM_FLUSH_ZERO_OFF)
+#define SIMDE_MM_FLUSH_ZERO_OFF _MM_FLUSH_ZERO_OFF
+#else
+#define SIMDE_MM_FLUSH_ZERO_OFF (0x0000)
+#endif
+
+SIMDE_END_DECLS_
+
+HEDLEY_DIAGNOSTIC_POP
+
+#endif /* !defined(SIMDE_X86_SSE_H) */
obs-studio-26.1.1.tar.xz/libobs/util/simde/x86/sse2.h
Added
+/* SPDX-License-Identifier: MIT
+ *
+ * Permission is hereby granted, free of charge, to any person
+ * obtaining a copy of this software and associated documentation
+ * files (the "Software"), to deal in the Software without
+ * restriction, including without limitation the rights to use, copy,
+ * modify, merge, publish, distribute, sublicense, and/or sell copies
+ * of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be
+ * included in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ *
+ * Copyright:
+ * 2017-2020 Evan Nemerson <evan@nemerson.com>
+ * 2015-2017 John W. Ratcliff <jratcliffscarab@gmail.com>
+ * 2015 Brandon Rowlett <browlett@nvidia.com>
+ * 2015 Ken Fast <kfast@gdeb.com>
+ * 2017 Hasindu Gamaarachchi <hasindu@unsw.edu.au>
+ * 2018 Jeff Daily <jeff.daily@amd.com>
+ */
+
+#if !defined(SIMDE_X86_SSE2_H)
+#define SIMDE_X86_SSE2_H
+
+#include "sse.h"
+
+HEDLEY_DIAGNOSTIC_PUSH
+SIMDE_DISABLE_UNWANTED_DIAGNOSTICS
+SIMDE_BEGIN_DECLS_
+
+typedef union {
+#if defined(SIMDE_VECTOR_SUBSCRIPT)
+ SIMDE_ALIGN_TO_16 int8_t i8 SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
+ SIMDE_ALIGN_TO_16 int16_t i16 SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
+ SIMDE_ALIGN_TO_16 int32_t i32 SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
+ SIMDE_ALIGN_TO_16 int64_t i64 SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
+ SIMDE_ALIGN_TO_16 uint8_t u8 SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
+ SIMDE_ALIGN_TO_16 uint16_t u16 SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
+ SIMDE_ALIGN_TO_16 uint32_t u32 SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
+ SIMDE_ALIGN_TO_16 uint64_t u64 SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
+#if defined(SIMDE_HAVE_INT128_)
+ SIMDE_ALIGN_TO_16 simde_int128 i128 SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
+ SIMDE_ALIGN_TO_16 simde_uint128 u128 SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
+#endif
+ SIMDE_ALIGN_TO_16 simde_float32 f32 SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
+ SIMDE_ALIGN_TO_16 simde_float64 f64 SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
+
+ SIMDE_ALIGN_TO_16 int_fast32_t i32f SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
+ SIMDE_ALIGN_TO_16 uint_fast32_t u32f SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
+#else
+ SIMDE_ALIGN_TO_16 int8_t i8[16];
+ SIMDE_ALIGN_TO_16 int16_t i16[8];
+ SIMDE_ALIGN_TO_16 int32_t i32[4];
+ SIMDE_ALIGN_TO_16 int64_t i64[2];
+ SIMDE_ALIGN_TO_16 uint8_t u8[16];
+ SIMDE_ALIGN_TO_16 uint16_t u16[8];
+ SIMDE_ALIGN_TO_16 uint32_t u32[4];
+ SIMDE_ALIGN_TO_16 uint64_t u64[2];
+#if defined(SIMDE_HAVE_INT128_)
+ SIMDE_ALIGN_TO_16 simde_int128 i128[1];
+ SIMDE_ALIGN_TO_16 simde_uint128 u128[1];
+#endif
+ SIMDE_ALIGN_TO_16 simde_float32 f32[4];
+ SIMDE_ALIGN_TO_16 simde_float64 f64[2];
+
+ SIMDE_ALIGN_TO_16 int_fast32_t i32f[16 / sizeof(int_fast32_t)];
+ SIMDE_ALIGN_TO_16 uint_fast32_t u32f[16 / sizeof(uint_fast32_t)];
+#endif
+
+ SIMDE_ALIGN_TO_16 simde__m64_private m64_private[2];
+ SIMDE_ALIGN_TO_16 simde__m64 m64[2];
+
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ SIMDE_ALIGN_TO_16 __m128i n;
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ SIMDE_ALIGN_TO_16 int8x16_t neon_i8;
+ SIMDE_ALIGN_TO_16 int16x8_t neon_i16;
+ SIMDE_ALIGN_TO_16 int32x4_t neon_i32;
+ SIMDE_ALIGN_TO_16 int64x2_t neon_i64;
+ SIMDE_ALIGN_TO_16 uint8x16_t neon_u8;
+ SIMDE_ALIGN_TO_16 uint16x8_t neon_u16;
+ SIMDE_ALIGN_TO_16 uint32x4_t neon_u32;
+ SIMDE_ALIGN_TO_16 uint64x2_t neon_u64;
+ SIMDE_ALIGN_TO_16 float32x4_t neon_f32;
+#if defined(SIMDE_ARCH_AARCH64)
+ SIMDE_ALIGN_TO_16 float64x2_t neon_f64;
+#endif
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ SIMDE_ALIGN_TO_16 v128_t wasm_v128;
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ SIMDE_ALIGN_TO_16 SIMDE_POWER_ALTIVEC_VECTOR(signed char) altivec_i8;
+ SIMDE_ALIGN_TO_16 SIMDE_POWER_ALTIVEC_VECTOR(signed short) altivec_i16;
+ SIMDE_ALIGN_TO_16 SIMDE_POWER_ALTIVEC_VECTOR(signed int) altivec_i32;
+#if defined(__UINT_FAST32_TYPE__) && defined(SIMDE_POWER_ALTIVEC_P7_NATIVE)
+ SIMDE_ALIGN_TO_16
+ SIMDE_POWER_ALTIVEC_VECTOR(__INT_FAST32_TYPE__) altivec_i32f;
+#else
+ SIMDE_ALIGN_TO_16 SIMDE_POWER_ALTIVEC_VECTOR(signed int) altivec_i32f;
+#endif
+ SIMDE_ALIGN_TO_16 SIMDE_POWER_ALTIVEC_VECTOR(unsigned char) altivec_u8;
+ SIMDE_ALIGN_TO_16
+ SIMDE_POWER_ALTIVEC_VECTOR(unsigned short) altivec_u16;
+ SIMDE_ALIGN_TO_16 SIMDE_POWER_ALTIVEC_VECTOR(unsigned int) altivec_u32;
+#if defined(__UINT_FAST32_TYPE__) && defined(SIMDE_POWER_ALTIVEC_P7_NATIVE)
+ SIMDE_ALIGN_TO_16
+ SIMDE_POWER_ALTIVEC_VECTOR(__UINT_FAST32_TYPE__) altivec_u32f;
+#else
+ SIMDE_ALIGN_TO_16 SIMDE_POWER_ALTIVEC_VECTOR(unsigned int) altivec_u32f;
+#endif
+ SIMDE_ALIGN_TO_16 SIMDE_POWER_ALTIVEC_VECTOR(float) altivec_f32;
+#if defined(SIMDE_POWER_ALTIVEC_P7_NATIVE)
+ SIMDE_ALIGN_TO_16
+ SIMDE_POWER_ALTIVEC_VECTOR(signed long long) altivec_i64;
+ SIMDE_ALIGN_TO_16
+ SIMDE_POWER_ALTIVEC_VECTOR(unsigned long long) altivec_u64;
+ SIMDE_ALIGN_TO_16 SIMDE_POWER_ALTIVEC_VECTOR(double) altivec_f64;
+#endif
+#endif
+} simde__m128i_private;
+
+typedef union {
+#if defined(SIMDE_VECTOR_SUBSCRIPT)
+ SIMDE_ALIGN_TO_16 int8_t i8 SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
+ SIMDE_ALIGN_TO_16 int16_t i16 SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
+ SIMDE_ALIGN_TO_16 int32_t i32 SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
+ SIMDE_ALIGN_TO_16 int64_t i64 SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
+ SIMDE_ALIGN_TO_16 uint8_t u8 SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
+ SIMDE_ALIGN_TO_16 uint16_t u16 SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
+ SIMDE_ALIGN_TO_16 uint32_t u32 SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
+ SIMDE_ALIGN_TO_16 uint64_t u64 SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
+ SIMDE_ALIGN_TO_16 simde_float32 f32 SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
+ SIMDE_ALIGN_TO_16 simde_float64 f64 SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
+ SIMDE_ALIGN_TO_16 int_fast32_t i32f SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
+ SIMDE_ALIGN_TO_16 uint_fast32_t u32f SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
+#else
+ SIMDE_ALIGN_TO_16 int8_t i8[16];
+ SIMDE_ALIGN_TO_16 int16_t i16[8];
+ SIMDE_ALIGN_TO_16 int32_t i32[4];
+ SIMDE_ALIGN_TO_16 int64_t i64[2];
+ SIMDE_ALIGN_TO_16 uint8_t u8[16];
+ SIMDE_ALIGN_TO_16 uint16_t u16[8];
+ SIMDE_ALIGN_TO_16 uint32_t u32[4];
+ SIMDE_ALIGN_TO_16 uint64_t u64[2];
+ SIMDE_ALIGN_TO_16 simde_float32 f32[4];
+ SIMDE_ALIGN_TO_16 simde_float64 f64[2];
+ SIMDE_ALIGN_TO_16 int_fast32_t i32f[16 / sizeof(int_fast32_t)];
+ SIMDE_ALIGN_TO_16 uint_fast32_t u32f[16 / sizeof(uint_fast32_t)];
+#endif
+
+ SIMDE_ALIGN_TO_16 simde__m64_private m64_private[2];
+ SIMDE_ALIGN_TO_16 simde__m64 m64[2];
+
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ SIMDE_ALIGN_TO_16 __m128d n;
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ SIMDE_ALIGN_TO_16 int8x16_t neon_i8;
+ SIMDE_ALIGN_TO_16 int16x8_t neon_i16;
+ SIMDE_ALIGN_TO_16 int32x4_t neon_i32;
+ SIMDE_ALIGN_TO_16 int64x2_t neon_i64;
+ SIMDE_ALIGN_TO_16 uint8x16_t neon_u8;
+ SIMDE_ALIGN_TO_16 uint16x8_t neon_u16;
+ SIMDE_ALIGN_TO_16 uint32x4_t neon_u32;
+ SIMDE_ALIGN_TO_16 uint64x2_t neon_u64;
+ SIMDE_ALIGN_TO_16 float32x4_t neon_f32;
+#if defined(SIMDE_ARCH_AARCH64)
+ SIMDE_ALIGN_TO_16 float64x2_t neon_f64;
+#endif
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ SIMDE_ALIGN_TO_16 v128_t wasm_v128;
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ SIMDE_ALIGN_TO_16 SIMDE_POWER_ALTIVEC_VECTOR(signed char) altivec_i8;
+ SIMDE_ALIGN_TO_16 SIMDE_POWER_ALTIVEC_VECTOR(signed short) altivec_i16;
+ SIMDE_ALIGN_TO_16 SIMDE_POWER_ALTIVEC_VECTOR(signed int) altivec_i32;
+#if defined(__INT_FAST32_TYPE__) && defined(SIMDE_POWER_ALTIVEC_P7_NATIVE)
+ SIMDE_ALIGN_TO_16
+ SIMDE_POWER_ALTIVEC_VECTOR(__INT_FAST32_TYPE__) altivec_i32f;
+#else
+ SIMDE_ALIGN_TO_16 SIMDE_POWER_ALTIVEC_VECTOR(signed int) altivec_i32f;
+#endif
+ SIMDE_ALIGN_TO_16 SIMDE_POWER_ALTIVEC_VECTOR(unsigned char) altivec_u8;
+ SIMDE_ALIGN_TO_16
+ SIMDE_POWER_ALTIVEC_VECTOR(unsigned short) altivec_u16;
+ SIMDE_ALIGN_TO_16 SIMDE_POWER_ALTIVEC_VECTOR(unsigned int) altivec_u32;
+#if defined(__UINT_FAST32_TYPE__) && defined(SIMDE_POWER_ALTIVEC_P7_NATIVE)
+ SIMDE_ALIGN_TO_16
+ SIMDE_POWER_ALTIVEC_VECTOR(__UINT_FAST32_TYPE__) altivec_u32f;
+#else
+ SIMDE_ALIGN_TO_16 SIMDE_POWER_ALTIVEC_VECTOR(unsigned int) altivec_u32f;
+#endif
+ SIMDE_ALIGN_TO_16 SIMDE_POWER_ALTIVEC_VECTOR(float) altivec_f32;
+#if defined(SIMDE_POWER_ALTIVEC_P7_NATIVE)
+ SIMDE_ALIGN_TO_16
+ SIMDE_POWER_ALTIVEC_VECTOR(signed long long) altivec_i64;
+ SIMDE_ALIGN_TO_16
+ SIMDE_POWER_ALTIVEC_VECTOR(unsigned long long) altivec_u64;
+ SIMDE_ALIGN_TO_16 SIMDE_POWER_ALTIVEC_VECTOR(double) altivec_f64;
+#endif
+#endif
+} simde__m128d_private;
+
+#if defined(SIMDE_X86_SSE2_NATIVE)
+typedef __m128i simde__m128i;
+typedef __m128d simde__m128d;
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+typedef int64x2_t simde__m128i;
+#if defined(SIMDE_ARCH_AARCH64)
+typedef float64x2_t simde__m128d;
+#elif defined(SIMDE_VECTOR_SUBSCRIPT)
+typedef simde_float64 simde__m128d SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
+#else
+typedef simde__m128d_private simde__m128d;
+#endif
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+typedef v128_t simde__m128i;
+typedef v128_t simde__m128d;
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+typedef SIMDE_POWER_ALTIVEC_VECTOR(float) simde__m128i;
+#if defined(SIMDE_POWER_ALTIVEC_P7_NATIVE)
+typedef SIMDE_POWER_ALTIVEC_VECTOR(double) simde__m128d;
+#else
+typedef simde__m128d_private simde__m128d;
+#endif
+#elif defined(SIMDE_VECTOR_SUBSCRIPT)
+typedef int64_t simde__m128i SIMDE_ALIGN_TO_16 SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
+typedef simde_float64
+ simde__m128d SIMDE_ALIGN_TO_16 SIMDE_VECTOR(16) SIMDE_MAY_ALIAS;
+#else
+typedef simde__m128i_private simde__m128i;
+typedef simde__m128d_private simde__m128d;
+#endif
+
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+typedef simde__m128i __m128i;
+typedef simde__m128d __m128d;
+#endif
+
+HEDLEY_STATIC_ASSERT(16 == sizeof(simde__m128i), "simde__m128i size incorrect");
+HEDLEY_STATIC_ASSERT(16 == sizeof(simde__m128i_private),
+ "simde__m128i_private size incorrect");
+HEDLEY_STATIC_ASSERT(16 == sizeof(simde__m128d), "simde__m128d size incorrect");
+HEDLEY_STATIC_ASSERT(16 == sizeof(simde__m128d_private),
+ "simde__m128d_private size incorrect");
+#if defined(SIMDE_CHECK_ALIGNMENT) && defined(SIMDE_ALIGN_OF)
+HEDLEY_STATIC_ASSERT(SIMDE_ALIGN_OF(simde__m128i) == 16,
+ "simde__m128i is not 16-byte aligned");
+HEDLEY_STATIC_ASSERT(SIMDE_ALIGN_OF(simde__m128i_private) == 16,
+ "simde__m128i_private is not 16-byte aligned");
+HEDLEY_STATIC_ASSERT(SIMDE_ALIGN_OF(simde__m128d) == 16,
+ "simde__m128d is not 16-byte aligned");
+HEDLEY_STATIC_ASSERT(SIMDE_ALIGN_OF(simde__m128d_private) == 16,
+ "simde__m128d_private is not 16-byte aligned");
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde__m128i_from_private(simde__m128i_private v)
+{
+ simde__m128i r;
+ simde_memcpy(&r, &v, sizeof(r));
+ return r;
+}
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i_private simde__m128i_to_private(simde__m128i v)
+{
+ simde__m128i_private r;
+ simde_memcpy(&r, &v, sizeof(r));
+ return r;
+}
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d simde__m128d_from_private(simde__m128d_private v)
+{
+ simde__m128d r;
+ simde_memcpy(&r, &v, sizeof(r));
+ return r;
+}
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d_private simde__m128d_to_private(simde__m128d v)
+{
+ simde__m128d_private r;
+ simde_memcpy(&r, &v, sizeof(r));
+ return r;
+}
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128i, int8x16_t, neon, i8)
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128i, int16x8_t, neon, i16)
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128i, int32x4_t, neon, i32)
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128i, int64x2_t, neon, i64)
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128i, uint8x16_t, neon, u8)
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128i, uint16x8_t, neon, u16)
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128i, uint32x4_t, neon, u32)
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128i, uint64x2_t, neon, u64)
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128i, float32x4_t, neon, f32)
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128i, float64x2_t, neon, f64)
+#endif
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128i,
+ SIMDE_POWER_ALTIVEC_VECTOR(signed char),
+ altivec, i8)
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128i,
+ SIMDE_POWER_ALTIVEC_VECTOR(signed short),
+ altivec, i16)
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128i,
+ SIMDE_POWER_ALTIVEC_VECTOR(signed int),
+ altivec, i32)
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(
+ m128i, SIMDE_POWER_ALTIVEC_VECTOR(unsigned char), altivec, u8)
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(
+ m128i, SIMDE_POWER_ALTIVEC_VECTOR(unsigned short), altivec, u16)
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128i,
+ SIMDE_POWER_ALTIVEC_VECTOR(unsigned int),
+ altivec, u32)
+#if defined(SIMDE_POWER_ALTIVEC_P7_NATIVE)
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(
+ m128i, SIMDE_POWER_ALTIVEC_VECTOR(unsigned long long), altivec, u64)
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(
+ m128i, SIMDE_POWER_ALTIVEC_VECTOR(signed long long), altivec, i64)
+#endif
+#endif /* defined(SIMDE_ARM_NEON_A32V7_NATIVE) */
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128d, int8x16_t, neon, i8)
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128d, int16x8_t, neon, i16)
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128d, int32x4_t, neon, i32)
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128d, int64x2_t, neon, i64)
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128d, uint8x16_t, neon, u8)
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128d, uint16x8_t, neon, u16)
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128d, uint32x4_t, neon, u32)
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128d, uint64x2_t, neon, u64)
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128d, float32x4_t, neon, f32)
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128d, float64x2_t, neon, f64)
+#endif
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128d,
+ SIMDE_POWER_ALTIVEC_VECTOR(signed char),
+ altivec, i8)
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128d,
+ SIMDE_POWER_ALTIVEC_VECTOR(signed short),
+ altivec, i16)
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128d,
+ SIMDE_POWER_ALTIVEC_VECTOR(signed int),
+ altivec, i32)
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(
+ m128d, SIMDE_POWER_ALTIVEC_VECTOR(unsigned char), altivec, u8)
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(
+ m128d, SIMDE_POWER_ALTIVEC_VECTOR(unsigned short), altivec, u16)
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128d,
+ SIMDE_POWER_ALTIVEC_VECTOR(unsigned int),
+ altivec, u32)
+#if defined(SIMDE_POWER_ALTIVEC_P7_NATIVE)
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(
+ m128d, SIMDE_POWER_ALTIVEC_VECTOR(unsigned long long), altivec, u64)
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(
+ m128d, SIMDE_POWER_ALTIVEC_VECTOR(signed long long), altivec, i64)
+#if defined(SIMDE_BUG_GCC_95782)
+SIMDE_FUNCTION_ATTRIBUTES
+SIMDE_POWER_ALTIVEC_VECTOR(double)
+simde__m128d_to_altivec_f64(simde__m128d value)
+{
+ simde__m128d_private r_ = simde__m128d_to_private(value);
+ return r_.altivec_f64;
+}
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d simde__m128d_from_altivec_f64(SIMDE_POWER_ALTIVEC_VECTOR(double)
+ value)
+{
+ simde__m128d_private r_;
+ r_.altivec_f64 = value;
+ return simde__m128d_from_private(r_);
+}
+#else
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128d,
+ SIMDE_POWER_ALTIVEC_VECTOR(double),
+ altivec, f64)
+#endif
+#endif
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128d, v128_t, wasm, v128);
+SIMDE_X86_GENERATE_CONVERSION_FUNCTION(m128i, v128_t, wasm, v128);
+#endif /* defined(SIMDE_ARM_NEON_A32V7_NATIVE) */
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d simde_mm_set_pd(simde_float64 e1, simde_float64 e0)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_set_pd(e1, e0);
+#else
+ simde__m128d_private r_;
+
+#if defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_f64x2_make(e0, e1);
+#elif defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ SIMDE_ALIGN_TO_16 simde_float64 data[2] = {e0, e1};
+ r_.neon_f64 = vld1q_f64(data);
+#else
+ r_.f64[0] = e0;
+ r_.f64[1] = e1;
+#endif
+
+ return simde__m128d_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_set_pd(e1, e0) simde_mm_set_pd(e1, e0)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d simde_mm_set1_pd(simde_float64 a)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_set1_pd(a);
+#else
+ simde__m128d_private r_;
+
+#if defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_f64x2_splat(a);
+#elif defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ r_.neon_f64 = vdupq_n_f64(a);
+#elif defined(SIMDE_POWER_ALTIVEC_P8_NATIVE)
+ r_.altivec_f64 = vec_splats(HEDLEY_STATIC_CAST(double, a));
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i64) / sizeof(r_.i64[0])); i++) {
+ r_.f64[i] = a;
+ }
+#endif
+
+ return simde__m128d_from_private(r_);
+#endif
+}
+#define simde_mm_set_pd1(a) simde_mm_set1_pd(a)
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_set1_pd(a) simde_mm_set1_pd(a)
+#define _mm_set_pd1(a) simde_mm_set1_pd(a)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d simde_x_mm_abs_pd(simde__m128d a)
+{
+#if defined(SIMDE_X86_AVX512F_NATIVE) && \
+ (!defined(HEDLEY_GCC_VERSION) || HEDLEY_GCC_VERSION_CHECK(7, 4, 0))
+ return _mm512_castpd512_pd128(_mm512_abs_pd(_mm512_castpd128_pd512(a)));
+#else
+ simde__m128d_private r_, a_ = simde__m128d_to_private(a);
+
+#if defined(SIMDE_ARM_NEON_A32V8_NATIVE)
+ r_.neon_f32 = vabsq_f32(a_.neon_f32);
+#elif defined(SIMDE_POWER_ALTIVEC_P7_NATIVE)
+ r_.altivec_f32 = vec_abs(a_.altivec_f32);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.f64) / sizeof(r_.f64[0])); i++) {
+ r_.f64[i] = simde_math_fabs(a_.f64[i]);
+ }
+#endif
+
+ return simde__m128d_from_private(r_);
+#endif
+}
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d simde_x_mm_not_pd(simde__m128d a)
+{
+#if defined(SIMDE_X86_AVX512VL_NATIVE)
+ __m128i ai = _mm_castpd_si128(a);
+ return _mm_castsi128_pd(_mm_ternarylogic_epi64(ai, ai, ai, 0x55));
+#else
+ simde__m128d_private r_, a_ = simde__m128d_to_private(a);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i32 = vmvnq_s32(a_.neon_i32);
+#elif defined(SIMDE_POWER_ALTIVEC_P8_NATIVE)
+ r_.altivec_f64 = vec_nor(a_.altivec_f64, a_.altivec_f64);
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ r_.altivec_i32 = vec_nor(a_.altivec_i32, a_.altivec_i32);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_v128_not(a_.wasm_v128);
+#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
+ r_.i32f = ~a_.i32f;
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i32f) / sizeof(r_.i32f[0])); i++) {
+ r_.i32f[i] = ~(a_.i32f[i]);
+ }
+#endif
+
+ return simde__m128d_from_private(r_);
+#endif
+}
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d simde_x_mm_select_pd(simde__m128d a, simde__m128d b,
+ simde__m128d mask)
+{
+/* This function is for when you want to blend two elements together
+ * according to a mask. It is similar to _mm_blendv_pd, except that
+ * it is undefined whether the blend is based on the highest bit in
+ * each lane (like blendv) or just bitwise operations. This allows
+ * us to implement the function efficiently everywhere.
+ *
+ * Basically, you promise that all the lanes in mask are either 0 or
+ * ~0. */
+#if defined(SIMDE_X86_SSE4_1_NATIVE)
+ return _mm_blendv_pd(a, b, mask);
+#else
+ simde__m128d_private r_, a_ = simde__m128d_to_private(a),
+ b_ = simde__m128d_to_private(b),
+ mask_ = simde__m128d_to_private(mask);
+
+#if defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
+ r_.i64 = a_.i64 ^ ((a_.i64 ^ b_.i64) & mask_.i64);
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i64 = vbslq_s64(mask_.neon_u64, b_.neon_i64, a_.neon_i64);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i64) / sizeof(r_.i64[0])); i++) {
+ r_.i64[i] = a_.i64[i] ^
+ ((a_.i64[i] ^ b_.i64[i]) & mask_.i64[i]);
+ }
+#endif
+
+ return simde__m128d_from_private(r_);
+#endif
+}
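Per the comment above, callers of simde_x_mm_select_pd must supply a mask whose 64-bit lanes are all-zeros or all-ones; the function then reduces to a ^ ((a ^ b) & mask) per lane, yielding b where the mask is set and a elsewhere. A minimal caller sketch, assuming the comparison and setzero wrappers defined elsewhere in this header; the helper name is illustrative:

    /* Take b wherever a is negative, keep a otherwise.  simde_mm_cmplt_pd
     * yields all-ones/all-zeros lanes, satisfying the mask contract. */
    static simde__m128d select_where_negative(simde__m128d a, simde__m128d b)
    {
        simde__m128d mask = simde_mm_cmplt_pd(a, simde_mm_setzero_pd());
        return simde_x_mm_select_pd(a, b, mask);
    }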
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_add_epi8(simde__m128i a, simde__m128i b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_add_epi8(a, b);
+#else
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a),
+ b_ = simde__m128i_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i8 = vaddq_s8(a_.neon_i8, b_.neon_i8);
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ r_.altivec_i8 = vec_add(a_.altivec_i8, b_.altivec_i8);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_i8x16_add(a_.wasm_v128, b_.wasm_v128);
+#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
+ r_.i8 = a_.i8 + b_.i8;
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i8) / sizeof(r_.i8[0])); i++) {
+ r_.i8[i] = a_.i8[i] + b_.i8[i];
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_add_epi8(a, b) simde_mm_add_epi8(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_add_epi16(simde__m128i a, simde__m128i b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_add_epi16(a, b);
+#else
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a),
+ b_ = simde__m128i_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i16 = vaddq_s16(a_.neon_i16, b_.neon_i16);
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ r_.altivec_i16 = vec_add(a_.altivec_i16, b_.altivec_i16);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_i16x8_add(a_.wasm_v128, b_.wasm_v128);
+#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
+ r_.i16 = a_.i16 + b_.i16;
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i16) / sizeof(r_.i16[0])); i++) {
+ r_.i16[i] = a_.i16[i] + b_.i16[i];
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_add_epi16(a, b) simde_mm_add_epi16(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_add_epi32(simde__m128i a, simde__m128i b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_add_epi32(a, b);
+#else
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a),
+ b_ = simde__m128i_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i32 = vaddq_s32(a_.neon_i32, b_.neon_i32);
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ r_.altivec_i32 = vec_add(a_.altivec_i32, b_.altivec_i32);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_i32x4_add(a_.wasm_v128, b_.wasm_v128);
+#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
+ r_.i32 = a_.i32 + b_.i32;
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i32) / sizeof(r_.i32[0])); i++) {
+ r_.i32[i] = a_.i32[i] + b_.i32[i];
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_add_epi32(a, b) simde_mm_add_epi32(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_add_epi64(simde__m128i a, simde__m128i b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_add_epi64(a, b);
+#else
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a),
+ b_ = simde__m128i_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i64 = vaddq_s64(a_.neon_i64, b_.neon_i64);
+#elif defined(SIMDE_POWER_ALTIVEC_P8_NATIVE)
+ r_.altivec_i64 = vec_add(a_.altivec_i64, b_.altivec_i64);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_i64x2_add(a_.wasm_v128, b_.wasm_v128);
+#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
+ r_.i64 = a_.i64 + b_.i64;
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i64) / sizeof(r_.i64[0])); i++) {
+ r_.i64[i] = a_.i64[i] + b_.i64[i];
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_add_epi64(a, b) simde_mm_add_epi64(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d simde_mm_add_pd(simde__m128d a, simde__m128d b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_add_pd(a, b);
+#else
+ simde__m128d_private r_, a_ = simde__m128d_to_private(a),
+ b_ = simde__m128d_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ r_.neon_f64 = vaddq_f64(a_.neon_f64, b_.neon_f64);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_f64x2_add(a_.wasm_v128, b_.wasm_v128);
+#elif defined(SIMDE_POWER_ALTIVEC_P7_NATIVE)
+ r_.altivec_f64 = vec_add(a_.altivec_f64, b_.altivec_f64);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_f64x2_add(a_.wasm_v128, b_.wasm_v128);
+#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
+ r_.f64 = a_.f64 + b_.f64;
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.f64) / sizeof(r_.f64[0])); i++) {
+ r_.f64[i] = a_.f64[i] + b_.f64[i];
+ }
+#endif
+
+ return simde__m128d_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_add_pd(a, b) simde_mm_add_pd(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d simde_mm_move_sd(simde__m128d a, simde__m128d b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_move_sd(a, b);
+#else
+ simde__m128d_private r_, a_ = simde__m128d_to_private(a),
+ b_ = simde__m128d_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ r_.neon_f64 =
+ vsetq_lane_f64(vgetq_lane_f64(b_.neon_f64, 0), a_.neon_f64, 0);
+#elif defined(SIMDE_POWER_ALTIVEC_P7_NATIVE)
+#if defined(HEDLEY_IBM_VERSION)
+ r_.altivec_f64 = vec_xxpermdi(a_.altivec_f64, b_.altivec_f64, 1);
+#else
+ r_.altivec_f64 = vec_xxpermdi(b_.altivec_f64, a_.altivec_f64, 1);
+#endif
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_v64x2_shuffle(a_.wasm_v128, b_.wasm_v128, 2, 1);
+#elif defined(SIMDE_SHUFFLE_VECTOR_)
+ r_.f64 = SIMDE_SHUFFLE_VECTOR_(64, 16, a_.f64, b_.f64, 2, 1);
+#else
+ r_.f64[0] = b_.f64[0];
+ r_.f64[1] = a_.f64[1];
+#endif
+
+ return simde__m128d_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_move_sd(a, b) simde_mm_move_sd(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d simde_mm_add_sd(simde__m128d a, simde__m128d b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_add_sd(a, b);
+#elif (SIMDE_NATURAL_VECTOR_SIZE > 0)
+ return simde_mm_move_sd(a, simde_mm_add_pd(a, b));
+#else
+ simde__m128d_private r_, a_ = simde__m128d_to_private(a),
+ b_ = simde__m128d_to_private(b);
+
+ r_.f64[0] = a_.f64[0] + b_.f64[0];
+ r_.f64[1] = a_.f64[1];
+
+ return simde__m128d_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_add_sd(a, b) simde_mm_add_sd(a, b)
+#endif
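+
+/* The *_sd ("scalar double") forms such as simde_mm_add_sd above operate on
+ * element 0 only and pass element 1 of `a` through unchanged.  For example,
+ * with a = {1.0, 2.0} and b = {10.0, 20.0}, simde_mm_add_sd(a, b) yields
+ * {11.0, 2.0}.  When a wide enough vector type is available the result is
+ * built by doing the packed operation and splicing lane 0 back in with
+ * simde_mm_move_sd; the scalar compare fallbacks later in this file follow
+ * the same pattern. */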
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_add_si64(simde__m64 a, simde__m64 b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE) && defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_add_si64(a, b);
+#else
+ simde__m64_private r_, a_ = simde__m64_to_private(a),
+ b_ = simde__m64_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i64 = vadd_s64(a_.neon_i64, b_.neon_i64);
+#else
+ r_.i64[0] = a_.i64[0] + b_.i64[0];
+#endif
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_add_si64(a, b) simde_mm_add_si64(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_adds_epi8(simde__m128i a, simde__m128i b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_adds_epi8(a, b);
+#else
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a),
+ b_ = simde__m128i_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i8 = vqaddq_s8(a_.neon_i8, b_.neon_i8);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_i8x16_add_saturate(a_.wasm_v128, b_.wasm_v128);
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ r_.altivec_i8 = vec_adds(a_.altivec_i8, b_.altivec_i8);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i8) / sizeof(r_.i8[0])); i++) {
+ const int_fast16_t tmp =
+ HEDLEY_STATIC_CAST(int_fast16_t, a_.i8[i]) +
+ HEDLEY_STATIC_CAST(int_fast16_t, b_.i8[i]);
+ r_.i8[i] = HEDLEY_STATIC_CAST(
+ int8_t,
+ ((tmp < INT8_MAX) ? ((tmp > INT8_MIN) ? tmp : INT8_MIN)
+ : INT8_MAX));
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_adds_epi8(a, b) simde_mm_adds_epi8(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_adds_epi16(simde__m128i a, simde__m128i b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_adds_epi16(a, b);
+#else
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a),
+ b_ = simde__m128i_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i16 = vqaddq_s16(a_.neon_i16, b_.neon_i16);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_i16x8_add_saturate(a_.wasm_v128, b_.wasm_v128);
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ r_.altivec_i16 = vec_adds(a_.altivec_i16, b_.altivec_i16);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i16) / sizeof(r_.i16[0])); i++) {
+ const int_fast32_t tmp =
+ HEDLEY_STATIC_CAST(int_fast32_t, a_.i16[i]) +
+ HEDLEY_STATIC_CAST(int_fast32_t, b_.i16[i]);
+ r_.i16[i] = HEDLEY_STATIC_CAST(
+ int16_t,
+ ((tmp < INT16_MAX)
+ ? ((tmp > INT16_MIN) ? tmp : INT16_MIN)
+ : INT16_MAX));
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_adds_epi16(a, b) simde_mm_adds_epi16(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_adds_epu8(simde__m128i a, simde__m128i b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_adds_epu8(a, b);
+#else
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a),
+ b_ = simde__m128i_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_u8 = vqaddq_u8(a_.neon_u8, b_.neon_u8);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_u8x16_add_saturate(a_.wasm_v128, b_.wasm_v128);
+#elif defined(SIMDE_POWER_ALTIVEC_P7_NATIVE)
+ r_.altivec_u8 = vec_adds(a_.altivec_u8, b_.altivec_u8);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.u8) / sizeof(r_.u8[0])); i++) {
+ r_.u8[i] = ((UINT8_MAX - a_.u8[i]) > b_.u8[i])
+ ? (a_.u8[i] + b_.u8[i])
+ : UINT8_MAX;
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_adds_epu8(a, b) simde_mm_adds_epu8(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_adds_epu16(simde__m128i a, simde__m128i b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_adds_epu16(a, b);
+#else
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a),
+ b_ = simde__m128i_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_u16 = vqaddq_u16(a_.neon_u16, b_.neon_u16);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_u16x8_add_saturate(a_.wasm_v128, b_.wasm_v128);
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ r_.altivec_u16 = vec_adds(a_.altivec_u16, b_.altivec_u16);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.u16) / sizeof(r_.u16[0])); i++) {
+ r_.u16[i] = ((UINT16_MAX - a_.u16[i]) > b_.u16[i])
+ ? (a_.u16[i] + b_.u16[i])
+ : UINT16_MAX;
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_adds_epu16(a, b) simde_mm_adds_epu16(a, b)
+#endif
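+
+/* The adds_* implementations above perform saturating addition: out-of-range
+ * sums are clamped to the limits of the element type instead of wrapping.
+ * For example, simde_mm_adds_epu8 with inputs 250 and 10 produces 255, and
+ * simde_mm_adds_epi8 with inputs 120 and 20 produces 127.  The portable
+ * fallbacks widen to a larger integer type (or, for the unsigned variants,
+ * compare against the remaining headroom) before clamping. */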
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d simde_mm_and_pd(simde__m128d a, simde__m128d b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_and_pd(a, b);
+#else
+ simde__m128d_private r_, a_ = simde__m128d_to_private(a),
+ b_ = simde__m128d_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i32 = vandq_s32(a_.neon_i32, b_.neon_i32);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_v128_and(a_.wasm_v128, b_.wasm_v128);
+#elif defined(SIMDE_POWER_ALTIVEC_P7_NATIVE)
+ r_.altivec_f64 = vec_and(a_.altivec_f64, b_.altivec_f64);
+#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
+ r_.i32f = a_.i32f & b_.i32f;
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i32f) / sizeof(r_.i32f[0])); i++) {
+ r_.i32f[i] = a_.i32f[i] & b_.i32f[i];
+ }
+#endif
+
+ return simde__m128d_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_and_pd(a, b) simde_mm_and_pd(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_and_si128(simde__m128i a, simde__m128i b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_and_si128(a, b);
+#else
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a),
+ b_ = simde__m128i_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i32 = vandq_s32(b_.neon_i32, a_.neon_i32);
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ r_.altivec_u32f = vec_and(a_.altivec_u32f, b_.altivec_u32f);
+#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
+ r_.i32f = a_.i32f & b_.i32f;
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i32f) / sizeof(r_.i32f[0])); i++) {
+ r_.i32f[i] = a_.i32f[i] & b_.i32f[i];
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_and_si128(a, b) simde_mm_and_si128(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d simde_mm_andnot_pd(simde__m128d a, simde__m128d b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_andnot_pd(a, b);
+#else
+ simde__m128d_private r_, a_ = simde__m128d_to_private(a),
+ b_ = simde__m128d_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i32 = vbicq_s32(b_.neon_i32, a_.neon_i32);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_v128_andnot(b_.wasm_v128, a_.wasm_v128);
+#elif defined(SIMDE_POWER_ALTIVEC_P7_NATIVE)
+ r_.altivec_f64 = vec_andc(b_.altivec_f64, a_.altivec_f64);
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ r_.altivec_i32f = vec_andc(b_.altivec_i32f, a_.altivec_i32f);
+#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
+ r_.i32f = ~a_.i32f & b_.i32f;
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.u64) / sizeof(r_.u64[0])); i++) {
+ r_.u64[i] = ~a_.u64[i] & b_.u64[i];
+ }
+#endif
+
+ return simde__m128d_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_andnot_pd(a, b) simde_mm_andnot_pd(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_andnot_si128(simde__m128i a, simde__m128i b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_andnot_si128(a, b);
+#else
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a),
+ b_ = simde__m128i_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i32 = vbicq_s32(b_.neon_i32, a_.neon_i32);
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ r_.altivec_i32 = vec_andc(b_.altivec_i32, a_.altivec_i32);
+#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
+ r_.i32f = ~a_.i32f & b_.i32f;
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i32f) / sizeof(r_.i32f[0])); i++) {
+ r_.i32f[i] = ~(a_.i32f[i]) & b_.i32f[i];
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_andnot_si128(a, b) simde_mm_andnot_si128(a, b)
+#endif
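+
+/* _mm_andnot_* computes (~a & b): the complement is applied to the FIRST
+ * operand.  Together with _mm_and_* and _mm_xor_* this is what makes
+ * branchless selection possible, e.g. (mask & x) | (~mask & y) picks x where
+ * the mask bits are set and y elsewhere. */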
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d simde_mm_xor_pd(simde__m128d a, simde__m128d b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_xor_pd(a, b);
+#else
+ simde__m128d_private r_, a_ = simde__m128d_to_private(a),
+ b_ = simde__m128d_to_private(b);
+
+#if defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
+ r_.i32f = a_.i32f ^ b_.i32f;
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_v128_xor(a_.wasm_v128, b_.wasm_v128);
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i64 = veorq_s64(a_.neon_i64, b_.neon_i64);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i32f) / sizeof(r_.i32f[0])); i++) {
+ r_.i32f[i] = a_.i32f[i] ^ b_.i32f[i];
+ }
+#endif
+
+ return simde__m128d_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_xor_pd(a, b) simde_mm_xor_pd(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_avg_epu8(simde__m128i a, simde__m128i b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_avg_epu8(a, b);
+#else
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a),
+ b_ = simde__m128i_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_u8 = vrhaddq_u8(b_.neon_u8, a_.neon_u8);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_u8x16_avgr(a_.wasm_v128, b_.wasm_v128);
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ r_.altivec_u8 = vec_avg(a_.altivec_u8, b_.altivec_u8);
+#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS) && \
+ defined(SIMDE_VECTOR_SUBSCRIPT_SCALAR) && \
+ defined(SIMDE_CONVERT_VECTOR_)
+ uint16_t wa SIMDE_VECTOR(32);
+ uint16_t wb SIMDE_VECTOR(32);
+ uint16_t wr SIMDE_VECTOR(32);
+ SIMDE_CONVERT_VECTOR_(wa, a_.u8);
+ SIMDE_CONVERT_VECTOR_(wb, b_.u8);
+ wr = (wa + wb + 1) >> 1;
+ SIMDE_CONVERT_VECTOR_(r_.u8, wr);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.u8) / sizeof(r_.u8[0])); i++) {
+ r_.u8[i] = (a_.u8[i] + b_.u8[i] + 1) >> 1;
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_avg_epu8(a, b) simde_mm_avg_epu8(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_avg_epu16(simde__m128i a, simde__m128i b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_avg_epu16(a, b);
+#else
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a),
+ b_ = simde__m128i_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_u16 = vrhaddq_u16(b_.neon_u16, a_.neon_u16);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_u16x8_avgr(a_.wasm_v128, b_.wasm_v128);
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ r_.altivec_u16 = vec_avg(a_.altivec_u16, b_.altivec_u16);
+#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS) && \
+ defined(SIMDE_VECTOR_SUBSCRIPT_SCALAR) && \
+ defined(SIMDE_CONVERT_VECTOR_)
+ uint32_t wa SIMDE_VECTOR(32);
+ uint32_t wb SIMDE_VECTOR(32);
+ uint32_t wr SIMDE_VECTOR(32);
+ SIMDE_CONVERT_VECTOR_(wa, a_.u16);
+ SIMDE_CONVERT_VECTOR_(wb, b_.u16);
+ wr = (wa + wb + 1) >> 1;
+ SIMDE_CONVERT_VECTOR_(r_.u16, wr);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.u16) / sizeof(r_.u16[0])); i++) {
+ r_.u16[i] = (a_.u16[i] + b_.u16[i] + 1) >> 1;
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_avg_epu16(a, b) simde_mm_avg_epu16(a, b)
+#endif
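+
+/* _mm_avg_epu8/_mm_avg_epu16 compute the rounding average (a + b + 1) >> 1.
+ * The widening fallback converts to 16/32-bit lanes first so the
+ * intermediate sum cannot overflow; e.g. avg_epu8(255, 255) correctly
+ * yields 255. */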
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_setzero_si128(void)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_setzero_si128();
+#else
+ simde__m128i_private r_;
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i32 = vdupq_n_s32(0);
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ r_.altivec_i32 = vec_splats(HEDLEY_STATIC_CAST(signed int, 0));
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_i32x4_splat(INT32_C(0));
+#elif defined(SIMDE_VECTOR_SUBSCRIPT)
+ r_.i32 = __extension__(__typeof__(r_.i32)){0, 0, 0, 0};
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i32f) / sizeof(r_.i32f[0])); i++) {
+ r_.i32f[i] = 0;
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_setzero_si128() (simde_mm_setzero_si128())
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_bslli_si128(simde__m128i a, const int imm8)
+ SIMDE_REQUIRE_CONSTANT_RANGE(imm8, 0, 255)
+{
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a);
+
+ if (HEDLEY_UNLIKELY((imm8 & ~15))) {
+ return simde_mm_setzero_si128();
+ }
+
+#if defined(SIMDE_POWER_ALTIVEC_P6_NATIVE) && defined(SIMDE_ENDIAN_ORDER)
+ r_.altivec_i8 =
+#if (SIMDE_ENDIAN_ORDER == SIMDE_ENDIAN_LITTLE)
+ vec_slo
+#else /* SIMDE_ENDIAN_ORDER == SIMDE_ENDIAN_BIG */
+ vec_sro
+#endif
+ (a_.altivec_i8,
+ vec_splats(HEDLEY_STATIC_CAST(unsigned char, imm8 * 8)));
+#elif defined(SIMDE_HAVE_INT128_) && (SIMDE_ENDIAN_ORDER == SIMDE_ENDIAN_LITTLE)
+ r_.u128[0] = a_.u128[0] << (imm8 * 8);
+#else
+ r_ = simde__m128i_to_private(simde_mm_setzero_si128());
+ for (int i = imm8;
+ i < HEDLEY_STATIC_CAST(int, sizeof(r_.i8) / sizeof(r_.i8[0]));
+ i++) {
+ r_.i8[i] = a_.i8[i - imm8];
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+}
+#if defined(SIMDE_X86_SSE2_NATIVE) && !defined(__PGI)
+#define simde_mm_bslli_si128(a, imm8) _mm_slli_si128(a, imm8)
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE) && !defined(__clang__)
+#define simde_mm_bslli_si128(a, imm8) \
+ simde__m128i_from_neon_i8( \
+ ((imm8) <= 0) \
+ ? simde__m128i_to_neon_i8(a) \
+ : (((imm8) > 15) \
+ ? (vdupq_n_s8(0)) \
+ : (vextq_s8(vdupq_n_s8(0), \
+ simde__m128i_to_neon_i8(a), \
+ 16 - (imm8)))))
+#elif defined(SIMDE_SHUFFLE_VECTOR_) && !defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+#define simde_mm_bslli_si128(a, imm8) \
+ (__extension__({ \
+ const simde__m128i_private simde__tmp_a_ = \
+ simde__m128i_to_private(a); \
+ const simde__m128i_private simde__tmp_z_ = \
+ simde__m128i_to_private(simde_mm_setzero_si128()); \
+ simde__m128i_private simde__tmp_r_; \
+ if (HEDLEY_UNLIKELY(imm8 > 15)) { \
+ simde__tmp_r_ = simde__m128i_to_private( \
+ simde_mm_setzero_si128()); \
+ } else { \
+ simde__tmp_r_.i8 = SIMDE_SHUFFLE_VECTOR_( \
+ 8, 16, simde__tmp_z_.i8, (simde__tmp_a_).i8, \
+ HEDLEY_STATIC_CAST(int8_t, (16 - imm8) & 31), \
+ HEDLEY_STATIC_CAST(int8_t, (17 - imm8) & 31), \
+ HEDLEY_STATIC_CAST(int8_t, (18 - imm8) & 31), \
+ HEDLEY_STATIC_CAST(int8_t, (19 - imm8) & 31), \
+ HEDLEY_STATIC_CAST(int8_t, (20 - imm8) & 31), \
+ HEDLEY_STATIC_CAST(int8_t, (21 - imm8) & 31), \
+ HEDLEY_STATIC_CAST(int8_t, (22 - imm8) & 31), \
+ HEDLEY_STATIC_CAST(int8_t, (23 - imm8) & 31), \
+ HEDLEY_STATIC_CAST(int8_t, (24 - imm8) & 31), \
+ HEDLEY_STATIC_CAST(int8_t, (25 - imm8) & 31), \
+ HEDLEY_STATIC_CAST(int8_t, (26 - imm8) & 31), \
+ HEDLEY_STATIC_CAST(int8_t, (27 - imm8) & 31), \
+ HEDLEY_STATIC_CAST(int8_t, (28 - imm8) & 31), \
+ HEDLEY_STATIC_CAST(int8_t, (29 - imm8) & 31), \
+ HEDLEY_STATIC_CAST(int8_t, (30 - imm8) & 31), \
+ HEDLEY_STATIC_CAST(int8_t, (31 - imm8) & 31)); \
+ } \
+ simde__m128i_from_private(simde__tmp_r_); \
+ }))
+#endif
+#define simde_mm_slli_si128(a, imm8) simde_mm_bslli_si128(a, imm8)
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_bslli_si128(a, imm8) simde_mm_bslli_si128(a, imm8)
+#define _mm_slli_si128(a, imm8) simde_mm_bslli_si128(a, imm8)
+#endif
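+
+/* _mm_bslli_si128/_mm_slli_si128 shift the whole 128-bit value left by imm8
+ * bytes with zero fill; any imm8 > 15 yields an all-zero vector.  Bytes
+ * {0, 1, 2, ..., 15} shifted by 2 become {0, 0, 0, 1, 2, ..., 13}.  The
+ * SIMDE_SHUFFLE_VECTOR_ form builds the same result by selecting 16 bytes
+ * out of the concatenation of a zero vector and the input. */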
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_bsrli_si128(simde__m128i a, const int imm8)
+ SIMDE_REQUIRE_CONSTANT_RANGE(imm8, 0, 255)
+{
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a);
+
+ if (HEDLEY_UNLIKELY((imm8 & ~15))) {
+ return simde_mm_setzero_si128();
+ }
+
+#if defined(SIMDE_POWER_ALTIVEC_P6_NATIVE) && defined(SIMDE_ENDIAN_ORDER)
+ r_.altivec_i8 =
+#if (SIMDE_ENDIAN_ORDER == SIMDE_ENDIAN_LITTLE)
+ vec_sro
+#else /* SIMDE_ENDIAN_ORDER == SIMDE_ENDIAN_BIG */
+ vec_slo
+#endif
+ (a_.altivec_i8,
+ vec_splats(HEDLEY_STATIC_CAST(unsigned char, imm8 * 8)));
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i8) / sizeof(r_.i8[0])); i++) {
+ const int e = HEDLEY_STATIC_CAST(int, i) + imm8;
+ r_.i8[i] = (e < 16) ? a_.i8[e] : 0;
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+}
+#if defined(SIMDE_X86_SSE2_NATIVE) && !defined(__PGI)
+#define simde_mm_bsrli_si128(a, imm8) _mm_srli_si128(a, imm8)
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE) && !defined(__clang__)
+#define simde_mm_bsrli_si128(a, imm8) \
+ simde__m128i_from_neon_i8( \
+ ((imm8 < 0) || (imm8 > 15)) \
+ ? vdupq_n_s8(0) \
+ : (vextq_s8(simde__m128i_to_private(a).neon_i8, \
+ vdupq_n_s8(0), \
+ ((imm8 & 15) != 0) ? imm8 : (imm8 & 15))))
+#elif defined(SIMDE_SHUFFLE_VECTOR_) && !defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+#define simde_mm_bsrli_si128(a, imm8) \
+ (__extension__({ \
+ const simde__m128i_private simde__tmp_a_ = \
+ simde__m128i_to_private(a); \
+ const simde__m128i_private simde__tmp_z_ = \
+ simde__m128i_to_private(simde_mm_setzero_si128()); \
+ simde__m128i_private simde__tmp_r_ = \
+ simde__m128i_to_private(a); \
+ if (HEDLEY_UNLIKELY(imm8 > 15)) { \
+ simde__tmp_r_ = simde__m128i_to_private( \
+ simde_mm_setzero_si128()); \
+ } else { \
+ simde__tmp_r_.i8 = SIMDE_SHUFFLE_VECTOR_( \
+ 8, 16, simde__tmp_z_.i8, (simde__tmp_a_).i8, \
+ HEDLEY_STATIC_CAST(int8_t, (imm8 + 16) & 31), \
+ HEDLEY_STATIC_CAST(int8_t, (imm8 + 17) & 31), \
+ HEDLEY_STATIC_CAST(int8_t, (imm8 + 18) & 31), \
+ HEDLEY_STATIC_CAST(int8_t, (imm8 + 19) & 31), \
+ HEDLEY_STATIC_CAST(int8_t, (imm8 + 20) & 31), \
+ HEDLEY_STATIC_CAST(int8_t, (imm8 + 21) & 31), \
+ HEDLEY_STATIC_CAST(int8_t, (imm8 + 22) & 31), \
+ HEDLEY_STATIC_CAST(int8_t, (imm8 + 23) & 31), \
+ HEDLEY_STATIC_CAST(int8_t, (imm8 + 24) & 31), \
+ HEDLEY_STATIC_CAST(int8_t, (imm8 + 25) & 31), \
+ HEDLEY_STATIC_CAST(int8_t, (imm8 + 26) & 31), \
+ HEDLEY_STATIC_CAST(int8_t, (imm8 + 27) & 31), \
+ HEDLEY_STATIC_CAST(int8_t, (imm8 + 28) & 31), \
+ HEDLEY_STATIC_CAST(int8_t, (imm8 + 29) & 31), \
+ HEDLEY_STATIC_CAST(int8_t, (imm8 + 30) & 31), \
+ HEDLEY_STATIC_CAST(int8_t, (imm8 + 31) & 31)); \
+ } \
+ simde__m128i_from_private(simde__tmp_r_); \
+ }))
+#endif
+#define simde_mm_srli_si128(a, imm8) simde_mm_bsrli_si128((a), (imm8))
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_bsrli_si128(a, imm8) simde_mm_bsrli_si128((a), (imm8))
+#define _mm_srli_si128(a, imm8) simde_mm_bsrli_si128((a), (imm8))
+#endif
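+
+/* _mm_bsrli_si128/_mm_srli_si128 is the mirror image: element i becomes
+ * a[i + imm8], or zero once that index passes 15. */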
+
+SIMDE_FUNCTION_ATTRIBUTES
+void simde_mm_clflush(void const *p)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ _mm_clflush(p);
+#else
+ (void)p;
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_clflush(p) simde_mm_clflush(p)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+int simde_mm_comieq_sd(simde__m128d a, simde__m128d b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_comieq_sd(a, b);
+#else
+ simde__m128d_private a_ = simde__m128d_to_private(a),
+ b_ = simde__m128d_to_private(b);
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ return !!vgetq_lane_u64(vceqq_f64(a_.neon_f64, b_.neon_f64), 0);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ return wasm_f64x2_extract_lane(a_.wasm_v128, 0) ==
+ wasm_f64x2_extract_lane(b_.wasm_v128, 0);
+#else
+ return a_.f64[0] == b_.f64[0];
+#endif
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_comieq_sd(a, b) simde_mm_comieq_sd(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+int simde_mm_comige_sd(simde__m128d a, simde__m128d b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_comige_sd(a, b);
+#else
+ simde__m128d_private a_ = simde__m128d_to_private(a),
+ b_ = simde__m128d_to_private(b);
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ return !!vgetq_lane_u64(vcgeq_f64(a_.neon_f64, b_.neon_f64), 0);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ return wasm_f64x2_extract_lane(a_.wasm_v128, 0) >=
+ wasm_f64x2_extract_lane(b_.wasm_v128, 0);
+#else
+ return a_.f64[0] >= b_.f64[0];
+#endif
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_comige_sd(a, b) simde_mm_comige_sd(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+int simde_mm_comigt_sd(simde__m128d a, simde__m128d b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_comigt_sd(a, b);
+#else
+ simde__m128d_private a_ = simde__m128d_to_private(a),
+ b_ = simde__m128d_to_private(b);
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ return !!vgetq_lane_u64(vcgtq_f64(a_.neon_f64, b_.neon_f64), 0);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ return wasm_f64x2_extract_lane(a_.wasm_v128, 0) >
+ wasm_f64x2_extract_lane(b_.wasm_v128, 0);
+#else
+ return a_.f64[0] > b_.f64[0];
+#endif
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_comigt_sd(a, b) simde_mm_comigt_sd(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+int simde_mm_comile_sd(simde__m128d a, simde__m128d b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_comile_sd(a, b);
+#else
+ simde__m128d_private a_ = simde__m128d_to_private(a),
+ b_ = simde__m128d_to_private(b);
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ return !!vgetq_lane_u64(vcleq_f64(a_.neon_f64, b_.neon_f64), 0);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ return wasm_f64x2_extract_lane(a_.wasm_v128, 0) <=
+ wasm_f64x2_extract_lane(b_.wasm_v128, 0);
+#else
+ return a_.f64[0] <= b_.f64[0];
+#endif
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_comile_sd(a, b) simde_mm_comile_sd(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+int simde_mm_comilt_sd(simde__m128d a, simde__m128d b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_comilt_sd(a, b);
+#else
+ simde__m128d_private a_ = simde__m128d_to_private(a),
+ b_ = simde__m128d_to_private(b);
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ return !!vgetq_lane_u64(vcltq_f64(a_.neon_f64, b_.neon_f64), 0);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ return wasm_f64x2_extract_lane(a_.wasm_v128, 0) <
+ wasm_f64x2_extract_lane(b_.wasm_v128, 0);
+#else
+ return a_.f64[0] < b_.f64[0];
+#endif
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_comilt_sd(a, b) simde_mm_comilt_sd(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+int simde_mm_comineq_sd(simde__m128d a, simde__m128d b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_comineq_sd(a, b);
+#else
+ simde__m128d_private a_ = simde__m128d_to_private(a),
+ b_ = simde__m128d_to_private(b);
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ return !vgetq_lane_u64(vceqq_f64(a_.neon_f64, b_.neon_f64), 0);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ return wasm_f64x2_extract_lane(a_.wasm_v128, 0) !=
+ wasm_f64x2_extract_lane(b_.wasm_v128, 0);
+#else
+ return a_.f64[0] != b_.f64[0];
+#endif
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_comineq_sd(a, b) simde_mm_comineq_sd(a, b)
+#endif
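+
+/* The _mm_comi*_sd predicates above compare only element 0 and return an int
+ * (0 or 1) rather than a mask, so they can be used directly in `if`
+ * conditions.  The fallbacks use plain C comparison operators, so a NaN
+ * operand makes every predicate return 0 except comineq, which returns 1. */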
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d simde_x_mm_copysign_pd(simde__m128d dest, simde__m128d src)
+{
+ simde__m128d_private r_, dest_ = simde__m128d_to_private(dest),
+ src_ = simde__m128d_to_private(src);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ uint64x2_t sign_pos =
+ vreinterpretq_u64_f64(vdupq_n_f64(-SIMDE_FLOAT64_C(0.0)));
+#else
+ simde_float64 dbl_nz = -SIMDE_FLOAT64_C(0.0);
+ uint64_t u64_nz;
+ simde_memcpy(&u64_nz, &dbl_nz, sizeof(u64_nz));
+ uint64x2_t sign_pos = vdupq_n_u64(u64_nz);
+#endif
+ r_.neon_u64 = vbslq_u64(sign_pos, src_.neon_u64, dest_.neon_u64);
+#elif defined(SIMDE_POWER_ALTIVEC_P9_NATIVE)
+#if !defined(HEDLEY_IBM_VERSION)
+ r_.altivec_f64 = vec_cpsgn(dest_.altivec_f64, src_.altivec_f64);
+#else
+ r_.altivec_f64 = vec_cpsgn(src_.altivec_f64, dest_.altivec_f64);
+#endif
+#elif defined(simde_math_copysign)
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.f64) / sizeof(r_.f64[0])); i++) {
+ r_.f64[i] = simde_math_copysign(dest_.f64[i], src_.f64[i]);
+ }
+#else
+ simde__m128d sgnbit = simde_mm_set1_pd(-SIMDE_FLOAT64_C(0.0));
+ return simde_mm_xor_pd(simde_mm_and_pd(sgnbit, src),
+ simde_mm_andnot_pd(sgnbit, dest));
+#endif
+
+ return simde__m128d_from_private(r_);
+}
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d simde_x_mm_xorsign_pd(simde__m128d dest, simde__m128d src)
+{
+ return simde_mm_xor_pd(simde_mm_and_pd(simde_mm_set1_pd(-0.0), src),
+ dest);
+}
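+
+/* simde_x_mm_copysign_pd transfers the sign bit of `src` onto the magnitude
+ * of `dest`; the double-precision sign bit is exactly the mask -0.0.  The
+ * pure-SSE fallback expresses this as (sign & src) | (~sign & dest) using
+ * and/andnot/xor.  simde_x_mm_xorsign_pd instead flips the sign of `dest`
+ * wherever `src` is negative, i.e. it multiplies the two signs together. */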
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_castpd_ps(simde__m128d a)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_castpd_ps(a);
+#elif defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ return vreinterpretq_f32_f64(a);
+#else
+ simde__m128 r;
+ simde_memcpy(&r, &a, sizeof(a));
+ return r;
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_castpd_ps(a) simde_mm_castpd_ps(a)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_castpd_si128(simde__m128d a)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_castpd_si128(a);
+#elif defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ return vreinterpretq_s64_f64(a);
+#else
+ simde__m128i r;
+ simde_memcpy(&r, &a, sizeof(a));
+ return r;
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_castpd_si128(a) simde_mm_castpd_si128(a)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d simde_mm_castps_pd(simde__m128 a)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_castps_pd(a);
+#elif defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ return vreinterpretq_f64_f32(a);
+#else
+ simde__m128d r;
+ simde_memcpy(&r, &a, sizeof(a));
+ return r;
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_castps_pd(a) simde_mm_castps_pd(a)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_castps_si128(simde__m128 a)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_castps_si128(a);
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ return simde__m128i_from_neon_i32(simde__m128_to_private(a).neon_i32);
+#else
+ simde__m128i r;
+ simde_memcpy(&r, &a, sizeof(a));
+ return r;
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_castps_si128(a) simde_mm_castps_si128(a)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d simde_mm_castsi128_pd(simde__m128i a)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_castsi128_pd(a);
+#elif defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ return vreinterpretq_f64_s64(a);
+#else
+ simde__m128d r;
+ simde_memcpy(&r, &a, sizeof(a));
+ return r;
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_castsi128_pd(a) simde_mm_castsi128_pd(a)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_castsi128_ps(simde__m128i a)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_castsi128_ps(a);
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ return HEDLEY_REINTERPRET_CAST(SIMDE_POWER_ALTIVEC_VECTOR(float), a);
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ return simde__m128_from_neon_i32(simde__m128i_to_private(a).neon_i32);
+#else
+ simde__m128 r;
+ simde_memcpy(&r, &a, sizeof(a));
+ return r;
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_castsi128_ps(a) simde_mm_castsi128_ps(a)
+#endif
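+
+/* The _mm_cast* helpers only reinterpret the 128 bits as a different lane
+ * type; no value conversion takes place.  Casting {1.0, 2.0} (doubles) to
+ * __m128i yields the raw IEEE-754 bit patterns, not the integers 1 and 2.
+ * The _mm_cvt* functions below perform numeric conversion. */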
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_cmpeq_epi8(simde__m128i a, simde__m128i b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_cmpeq_epi8(a, b);
+#else
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a),
+ b_ = simde__m128i_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_u8 = vceqq_s8(b_.neon_i8, a_.neon_i8);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_i8x16_eq(a_.wasm_v128, b_.wasm_v128);
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ r_.altivec_i8 = HEDLEY_REINTERPRET_CAST(
+ SIMDE_POWER_ALTIVEC_VECTOR(signed char),
+ vec_cmpeq(a_.altivec_i8, b_.altivec_i8));
+#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
+ r_.i8 = HEDLEY_STATIC_CAST(__typeof__(r_.i8), (a_.i8 == b_.i8));
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i8) / sizeof(r_.i8[0])); i++) {
+ r_.i8[i] = (a_.i8[i] == b_.i8[i]) ? ~INT8_C(0) : INT8_C(0);
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_cmpeq_epi8(a, b) simde_mm_cmpeq_epi8(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_cmpeq_epi16(simde__m128i a, simde__m128i b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_cmpeq_epi16(a, b);
+#else
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a),
+ b_ = simde__m128i_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_u16 = vceqq_s16(b_.neon_i16, a_.neon_i16);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_i16x8_eq(a_.wasm_v128, b_.wasm_v128);
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ r_.altivec_i16 = HEDLEY_REINTERPRET_CAST(
+ SIMDE_POWER_ALTIVEC_VECTOR(signed short),
+ vec_cmpeq(a_.altivec_i16, b_.altivec_i16));
+#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
+ r_.i16 = (a_.i16 == b_.i16);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i16) / sizeof(r_.i16[0])); i++) {
+ r_.i16[i] = (a_.i16[i] == b_.i16[i]) ? ~INT16_C(0) : INT16_C(0);
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_cmpeq_epi16(a, b) simde_mm_cmpeq_epi16(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_cmpeq_epi32(simde__m128i a, simde__m128i b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_cmpeq_epi32(a, b);
+#else
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a),
+ b_ = simde__m128i_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_u32 = vceqq_s32(b_.neon_i32, a_.neon_i32);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_i32x4_eq(a_.wasm_v128, b_.wasm_v128);
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ r_.altivec_i32 = HEDLEY_REINTERPRET_CAST(
+ SIMDE_POWER_ALTIVEC_VECTOR(signed int),
+ vec_cmpeq(a_.altivec_i32, b_.altivec_i32));
+#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
+ r_.i32 = HEDLEY_STATIC_CAST(__typeof__(r_.i32), a_.i32 == b_.i32);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i32) / sizeof(r_.i32[0])); i++) {
+ r_.i32[i] = (a_.i32[i] == b_.i32[i]) ? ~INT32_C(0) : INT32_C(0);
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_cmpeq_epi32(a, b) simde_mm_cmpeq_epi32(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d simde_mm_cmpeq_pd(simde__m128d a, simde__m128d b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_cmpeq_pd(a, b);
+#else
+ simde__m128d_private r_, a_ = simde__m128d_to_private(a),
+ b_ = simde__m128d_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+	r_.neon_u64 = vceqq_f64(a_.neon_f64, b_.neon_f64);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_f64x2_eq(a_.wasm_v128, b_.wasm_v128);
+#elif defined(SIMDE_POWER_ALTIVEC_P7_NATIVE)
+ r_.altivec_f64 = HEDLEY_REINTERPRET_CAST(
+ SIMDE_POWER_ALTIVEC_VECTOR(double),
+ vec_cmpeq(a_.altivec_f64, b_.altivec_f64));
+#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
+ r_.i64 = HEDLEY_STATIC_CAST(__typeof__(r_.i64), (a_.f64 == b_.f64));
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.f64) / sizeof(r_.f64[0])); i++) {
+ r_.u64[i] = (a_.f64[i] == b_.f64[i]) ? ~UINT64_C(0)
+ : UINT64_C(0);
+ }
+#endif
+
+ return simde__m128d_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_cmpeq_pd(a, b) simde_mm_cmpeq_pd(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d simde_mm_cmpeq_sd(simde__m128d a, simde__m128d b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_cmpeq_sd(a, b);
+#elif (SIMDE_NATURAL_VECTOR_SIZE > 0)
+ return simde_mm_move_sd(a, simde_mm_cmpeq_pd(a, b));
+#else
+ simde__m128d_private r_, a_ = simde__m128d_to_private(a),
+ b_ = simde__m128d_to_private(b);
+
+ r_.u64[0] = (a_.u64[0] == b_.u64[0]) ? ~UINT64_C(0) : 0;
+ r_.u64[1] = a_.u64[1];
+
+ return simde__m128d_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_cmpeq_sd(a, b) simde_mm_cmpeq_sd(a, b)
+#endif
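+
+/* Unlike the _mm_comi* predicates, the packed _mm_cmp* operations return a
+ * mask vector: each lane is set to all ones (e.g. ~UINT64_C(0)) when the
+ * comparison holds and to zero otherwise, so the result can be fed straight
+ * into and/andnot/or bit selection. */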
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d simde_mm_cmpneq_pd(simde__m128d a, simde__m128d b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_cmpneq_pd(a, b);
+#else
+ simde__m128d_private r_, a_ = simde__m128d_to_private(a),
+ b_ = simde__m128d_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ r_.neon_u32 = vmvnq_u32(
+ vreinterpretq_u32_u64(vceqq_f64(b_.neon_f64, a_.neon_f64)));
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_f64x2_ne(a_.wasm_v128, b_.wasm_v128);
+#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
+ r_.i64 = HEDLEY_STATIC_CAST(__typeof__(r_.i64), (a_.f64 != b_.f64));
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.f64) / sizeof(r_.f64[0])); i++) {
+ r_.u64[i] = (a_.f64[i] != b_.f64[i]) ? ~UINT64_C(0)
+ : UINT64_C(0);
+ }
+#endif
+
+ return simde__m128d_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_cmpneq_pd(a, b) simde_mm_cmpneq_pd(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d simde_mm_cmpneq_sd(simde__m128d a, simde__m128d b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_cmpneq_sd(a, b);
+#elif (SIMDE_NATURAL_VECTOR_SIZE > 0)
+ return simde_mm_move_sd(a, simde_mm_cmpneq_pd(a, b));
+#else
+ simde__m128d_private r_, a_ = simde__m128d_to_private(a),
+ b_ = simde__m128d_to_private(b);
+
+ r_.u64[0] = (a_.f64[0] != b_.f64[0]) ? ~UINT64_C(0) : UINT64_C(0);
+ r_.u64[1] = a_.u64[1];
+
+ return simde__m128d_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_cmpneq_sd(a, b) simde_mm_cmpneq_sd(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_cmplt_epi8(simde__m128i a, simde__m128i b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_cmplt_epi8(a, b);
+#else
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a),
+ b_ = simde__m128i_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_u8 = vcltq_s8(a_.neon_i8, b_.neon_i8);
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ r_.altivec_i8 = HEDLEY_REINTERPRET_CAST(
+ SIMDE_POWER_ALTIVEC_VECTOR(signed char),
+ vec_cmplt(a_.altivec_i8, b_.altivec_i8));
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_i8x16_lt(a_.wasm_v128, b_.wasm_v128);
+#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
+ r_.i8 = HEDLEY_STATIC_CAST(__typeof__(r_.i8), (a_.i8 < b_.i8));
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i8) / sizeof(r_.i8[0])); i++) {
+ r_.i8[i] = (a_.i8[i] < b_.i8[i]) ? ~INT8_C(0) : INT8_C(0);
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_cmplt_epi8(a, b) simde_mm_cmplt_epi8(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_cmplt_epi16(simde__m128i a, simde__m128i b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_cmplt_epi16(a, b);
+#else
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a),
+ b_ = simde__m128i_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_u16 = vcltq_s16(a_.neon_i16, b_.neon_i16);
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ r_.altivec_i16 = HEDLEY_REINTERPRET_CAST(
+ SIMDE_POWER_ALTIVEC_VECTOR(signed short),
+ vec_cmplt(a_.altivec_i16, b_.altivec_i16));
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_i16x8_lt(a_.wasm_v128, b_.wasm_v128);
+#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
+ r_.i16 = HEDLEY_STATIC_CAST(__typeof__(r_.i16), (a_.i16 < b_.i16));
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i16) / sizeof(r_.i16[0])); i++) {
+ r_.i16[i] = (a_.i16[i] < b_.i16[i]) ? ~INT16_C(0) : INT16_C(0);
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_cmplt_epi16(a, b) simde_mm_cmplt_epi16(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_cmplt_epi32(simde__m128i a, simde__m128i b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_cmplt_epi32(a, b);
+#else
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a),
+ b_ = simde__m128i_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_u32 = vcltq_s32(a_.neon_i32, b_.neon_i32);
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ r_.altivec_i32 = HEDLEY_REINTERPRET_CAST(
+ SIMDE_POWER_ALTIVEC_VECTOR(signed int),
+ vec_cmplt(a_.altivec_i32, b_.altivec_i32));
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_i32x4_lt(a_.wasm_v128, b_.wasm_v128);
+#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
+ r_.i32 = HEDLEY_STATIC_CAST(__typeof__(r_.i32), (a_.i32 < b_.i32));
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i32) / sizeof(r_.i32[0])); i++) {
+ r_.i32[i] = (a_.i32[i] < b_.i32[i]) ? ~INT32_C(0) : INT32_C(0);
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_cmplt_epi32(a, b) simde_mm_cmplt_epi32(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d simde_mm_cmplt_pd(simde__m128d a, simde__m128d b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_cmplt_pd(a, b);
+#else
+ simde__m128d_private r_, a_ = simde__m128d_to_private(a),
+ b_ = simde__m128d_to_private(b);
+
+#if defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
+ r_.i64 = HEDLEY_STATIC_CAST(__typeof__(r_.i64), (a_.f64 < b_.f64));
+#elif defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ r_.neon_u64 = vcltq_f64(a_.neon_f64, b_.neon_f64);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_f64x2_lt(a_.wasm_v128, b_.wasm_v128);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.f64) / sizeof(r_.f64[0])); i++) {
+ r_.u64[i] = (a_.f64[i] < b_.f64[i]) ? ~UINT64_C(0)
+ : UINT64_C(0);
+ }
+#endif
+
+ return simde__m128d_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_cmplt_pd(a, b) simde_mm_cmplt_pd(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d simde_mm_cmplt_sd(simde__m128d a, simde__m128d b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_cmplt_sd(a, b);
+#elif (SIMDE_NATURAL_VECTOR_SIZE > 0)
+ return simde_mm_move_sd(a, simde_mm_cmplt_pd(a, b));
+#else
+ simde__m128d_private r_, a_ = simde__m128d_to_private(a),
+ b_ = simde__m128d_to_private(b);
+
+ r_.u64[0] = (a_.f64[0] < b_.f64[0]) ? ~UINT64_C(0) : UINT64_C(0);
+ r_.u64[1] = a_.u64[1];
+
+ return simde__m128d_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_cmplt_sd(a, b) simde_mm_cmplt_sd(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d simde_mm_cmple_pd(simde__m128d a, simde__m128d b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_cmple_pd(a, b);
+#else
+ simde__m128d_private r_, a_ = simde__m128d_to_private(a),
+ b_ = simde__m128d_to_private(b);
+
+#if defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
+ r_.i64 = HEDLEY_STATIC_CAST(__typeof__(r_.i64), (a_.f64 <= b_.f64));
+#elif defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ r_.neon_u64 = vcleq_f64(a_.neon_f64, b_.neon_f64);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_f64x2_le(a_.wasm_v128, b_.wasm_v128);
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ r_.altivec_f64 = HEDLEY_REINTERPRET_CAST(
+ SIMDE_POWER_ALTIVEC_VECTOR(double),
+ vec_cmple(a_.altivec_f64, b_.altivec_f64));
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.f64) / sizeof(r_.f64[0])); i++) {
+ r_.u64[i] = (a_.f64[i] <= b_.f64[i]) ? ~UINT64_C(0)
+ : UINT64_C(0);
+ }
+#endif
+
+ return simde__m128d_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_cmple_pd(a, b) simde_mm_cmple_pd(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d simde_mm_cmple_sd(simde__m128d a, simde__m128d b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_cmple_sd(a, b);
+#elif (SIMDE_NATURAL_VECTOR_SIZE > 0)
+ return simde_mm_move_sd(a, simde_mm_cmple_pd(a, b));
+#else
+ simde__m128d_private r_, a_ = simde__m128d_to_private(a),
+ b_ = simde__m128d_to_private(b);
+
+ r_.u64[0] = (a_.f64[0] <= b_.f64[0]) ? ~UINT64_C(0) : UINT64_C(0);
+ r_.u64[1] = a_.u64[1];
+
+ return simde__m128d_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_cmple_sd(a, b) simde_mm_cmple_sd(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_cmpgt_epi8(simde__m128i a, simde__m128i b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_cmpgt_epi8(a, b);
+#else
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a),
+ b_ = simde__m128i_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_u8 = vcgtq_s8(a_.neon_i8, b_.neon_i8);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_i8x16_gt(a_.wasm_v128, b_.wasm_v128);
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ r_.altivec_i8 = HEDLEY_REINTERPRET_CAST(
+ SIMDE_POWER_ALTIVEC_VECTOR(signed char),
+ vec_cmpgt(a_.altivec_i8, b_.altivec_i8));
+#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
+ r_.i8 = HEDLEY_STATIC_CAST(__typeof__(r_.i8), (a_.i8 > b_.i8));
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i8) / sizeof(r_.i8[0])); i++) {
+ r_.i8[i] = (a_.i8[i] > b_.i8[i]) ? ~INT8_C(0) : INT8_C(0);
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_cmpgt_epi8(a, b) simde_mm_cmpgt_epi8(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_cmpgt_epi16(simde__m128i a, simde__m128i b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_cmpgt_epi16(a, b);
+#else
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a),
+ b_ = simde__m128i_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_u16 = vcgtq_s16(a_.neon_i16, b_.neon_i16);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_i16x8_gt(a_.wasm_v128, b_.wasm_v128);
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ r_.altivec_i16 = HEDLEY_REINTERPRET_CAST(
+ SIMDE_POWER_ALTIVEC_VECTOR(signed short),
+ vec_cmpgt(a_.altivec_i16, b_.altivec_i16));
+#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
+ r_.i16 = HEDLEY_STATIC_CAST(__typeof__(r_.i16), (a_.i16 > b_.i16));
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i16) / sizeof(r_.i16[0])); i++) {
+ r_.i16[i] = (a_.i16[i] > b_.i16[i]) ? ~INT16_C(0) : INT16_C(0);
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_cmpgt_epi16(a, b) simde_mm_cmpgt_epi16(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_cmpgt_epi32(simde__m128i a, simde__m128i b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_cmpgt_epi32(a, b);
+#else
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a),
+ b_ = simde__m128i_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_u32 = vcgtq_s32(a_.neon_i32, b_.neon_i32);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_i32x4_gt(a_.wasm_v128, b_.wasm_v128);
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ r_.altivec_i32 = HEDLEY_REINTERPRET_CAST(
+ SIMDE_POWER_ALTIVEC_VECTOR(signed int),
+ vec_cmpgt(a_.altivec_i32, b_.altivec_i32));
+#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
+ r_.i32 = HEDLEY_STATIC_CAST(__typeof__(r_.i32), (a_.i32 > b_.i32));
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i32) / sizeof(r_.i32[0])); i++) {
+ r_.i32[i] = (a_.i32[i] > b_.i32[i]) ? ~INT32_C(0) : INT32_C(0);
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_cmpgt_epi32(a, b) simde_mm_cmpgt_epi32(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d simde_mm_cmpgt_pd(simde__m128d a, simde__m128d b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_cmpgt_pd(a, b);
+#else
+ simde__m128d_private r_, a_ = simde__m128d_to_private(a),
+ b_ = simde__m128d_to_private(b);
+
+#if defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
+ r_.i64 = HEDLEY_STATIC_CAST(__typeof__(r_.i64), (a_.f64 > b_.f64));
+#elif defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ r_.neon_u64 = vcgtq_f64(a_.neon_f64, b_.neon_f64);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_f64x2_gt(a_.wasm_v128, b_.wasm_v128);
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ r_.altivec_f64 =
+ HEDLEY_STATIC_CAST(SIMDE_POWER_ALTIVEC_VECTOR(double),
+ vec_cmpgt(a_.altivec_f64, b_.altivec_f64));
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.f64) / sizeof(r_.f64[0])); i++) {
+ r_.u64[i] = (a_.f64[i] > b_.f64[i]) ? ~UINT64_C(0)
+ : UINT64_C(0);
+ }
+#endif
+
+ return simde__m128d_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_cmpgt_pd(a, b) simde_mm_cmpgt_pd(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d simde_mm_cmpgt_sd(simde__m128d a, simde__m128d b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE) && !defined(__PGI)
+ return _mm_cmpgt_sd(a, b);
+#elif (SIMDE_NATURAL_VECTOR_SIZE > 0)
+ return simde_mm_move_sd(a, simde_mm_cmpgt_pd(a, b));
+#else
+ simde__m128d_private r_, a_ = simde__m128d_to_private(a),
+ b_ = simde__m128d_to_private(b);
+
+ r_.u64[0] = (a_.f64[0] > b_.f64[0]) ? ~UINT64_C(0) : UINT64_C(0);
+ r_.u64[1] = a_.u64[1];
+
+ return simde__m128d_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_cmpgt_sd(a, b) simde_mm_cmpgt_sd(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d simde_mm_cmpge_pd(simde__m128d a, simde__m128d b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_cmpge_pd(a, b);
+#else
+ simde__m128d_private r_, a_ = simde__m128d_to_private(a),
+ b_ = simde__m128d_to_private(b);
+
+#if defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
+ r_.i64 = HEDLEY_STATIC_CAST(__typeof__(r_.i64), (a_.f64 >= b_.f64));
+#elif defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ r_.neon_u64 = vcgeq_f64(a_.neon_f64, b_.neon_f64);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_f64x2_ge(a_.wasm_v128, b_.wasm_v128);
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ r_.altivec_f64 =
+ HEDLEY_STATIC_CAST(SIMDE_POWER_ALTIVEC_VECTOR(double),
+ vec_cmpge(a_.altivec_f64, b_.altivec_f64));
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.f64) / sizeof(r_.f64[0])); i++) {
+ r_.u64[i] = (a_.f64[i] >= b_.f64[i]) ? ~UINT64_C(0)
+ : UINT64_C(0);
+ }
+#endif
+
+ return simde__m128d_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_cmpge_pd(a, b) simde_mm_cmpge_pd(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d simde_mm_cmpge_sd(simde__m128d a, simde__m128d b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE) && !defined(__PGI)
+ return _mm_cmpge_sd(a, b);
+#elif (SIMDE_NATURAL_VECTOR_SIZE > 0)
+ return simde_mm_move_sd(a, simde_mm_cmpge_pd(a, b));
+#else
+ simde__m128d_private r_, a_ = simde__m128d_to_private(a),
+ b_ = simde__m128d_to_private(b);
+
+ r_.u64[0] = (a_.f64[0] >= b_.f64[0]) ? ~UINT64_C(0) : UINT64_C(0);
+ r_.u64[1] = a_.u64[1];
+
+ return simde__m128d_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_cmpge_sd(a, b) simde_mm_cmpge_sd(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d simde_mm_cmpngt_pd(simde__m128d a, simde__m128d b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_cmpngt_pd(a, b);
+#else
+ return simde_mm_cmple_pd(a, b);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_cmpngt_pd(a, b) simde_mm_cmpngt_pd(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d simde_mm_cmpngt_sd(simde__m128d a, simde__m128d b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE) && !defined(__PGI)
+ return _mm_cmpngt_sd(a, b);
+#else
+ return simde_mm_cmple_sd(a, b);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_cmpngt_sd(a, b) simde_mm_cmpngt_sd(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d simde_mm_cmpnge_pd(simde__m128d a, simde__m128d b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_cmpnge_pd(a, b);
+#else
+ return simde_mm_cmplt_pd(a, b);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_cmpnge_pd(a, b) simde_mm_cmpnge_pd(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d simde_mm_cmpnge_sd(simde__m128d a, simde__m128d b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE) && !defined(__PGI)
+ return _mm_cmpnge_sd(a, b);
+#else
+ return simde_mm_cmplt_sd(a, b);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_cmpnge_sd(a, b) simde_mm_cmpnge_sd(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d simde_mm_cmpnlt_pd(simde__m128d a, simde__m128d b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_cmpnlt_pd(a, b);
+#else
+ return simde_mm_cmpge_pd(a, b);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_cmpnlt_pd(a, b) simde_mm_cmpnlt_pd(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d simde_mm_cmpnlt_sd(simde__m128d a, simde__m128d b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_cmpnlt_sd(a, b);
+#else
+ return simde_mm_cmpge_sd(a, b);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_cmpnlt_sd(a, b) simde_mm_cmpnlt_sd(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d simde_mm_cmpnle_pd(simde__m128d a, simde__m128d b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_cmpnle_pd(a, b);
+#else
+ return simde_mm_cmpgt_pd(a, b);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_cmpnle_pd(a, b) simde_mm_cmpnle_pd(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d simde_mm_cmpnle_sd(simde__m128d a, simde__m128d b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_cmpnle_sd(a, b);
+#else
+ return simde_mm_cmpgt_sd(a, b);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_cmpnle_sd(a, b) simde_mm_cmpnle_sd(a, b)
+#endif
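+
+/* The portable fallbacks above implement _mm_cmpngt/cmpnge/cmpnlt/cmpnle as
+ * their complementary ordered comparisons (cmple, cmplt, cmpge, cmpgt).
+ * That is equivalent for ordinary values but differs from the x86
+ * instructions when an operand is NaN: the native "not" predicates return
+ * all ones for unordered inputs, while these fallbacks return zero. */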
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d simde_mm_cmpord_pd(simde__m128d a, simde__m128d b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_cmpord_pd(a, b);
+#else
+ simde__m128d_private r_, a_ = simde__m128d_to_private(a),
+ b_ = simde__m128d_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+	/* NEON has no ordered-compare builtin, so compare a == a and b == b to
+	   detect NaNs, then AND the two results to form the final mask. */
+ uint64x2_t ceqaa = vceqq_f64(a_.neon_f64, a_.neon_f64);
+ uint64x2_t ceqbb = vceqq_f64(b_.neon_f64, b_.neon_f64);
+ r_.neon_u64 = vandq_u64(ceqaa, ceqbb);
+#elif defined(simde_math_isnan)
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.f64) / sizeof(r_.f64[0])); i++) {
+ r_.u64[i] = (!simde_math_isnan(a_.f64[i]) &&
+ !simde_math_isnan(b_.f64[i]))
+ ? ~UINT64_C(0)
+ : UINT64_C(0);
+ }
+#else
+ HEDLEY_UNREACHABLE();
+#endif
+
+ return simde__m128d_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_cmpord_pd(a, b) simde_mm_cmpord_pd(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde_float64 simde_mm_cvtsd_f64(simde__m128d a)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE) && !defined(__PGI)
+ return _mm_cvtsd_f64(a);
+#else
+ simde__m128d_private a_ = simde__m128d_to_private(a);
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ return HEDLEY_STATIC_CAST(simde_float64,
+ vgetq_lane_f64(a_.neon_f64, 0));
+#else
+ return a_.f64[0];
+#endif
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_cvtsd_f64(a) simde_mm_cvtsd_f64(a)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d simde_mm_cmpord_sd(simde__m128d a, simde__m128d b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_cmpord_sd(a, b);
+#elif (SIMDE_NATURAL_VECTOR_SIZE > 0)
+ return simde_mm_move_sd(a, simde_mm_cmpord_pd(a, b));
+#else
+ simde__m128d_private r_, a_ = simde__m128d_to_private(a),
+ b_ = simde__m128d_to_private(b);
+
+#if defined(simde_math_isnan)
+ r_.u64[0] =
+ (!simde_math_isnan(a_.f64[0]) && !simde_math_isnan(b_.f64[0]))
+ ? ~UINT64_C(0)
+ : UINT64_C(0);
+ r_.u64[1] = a_.u64[1];
+#else
+ HEDLEY_UNREACHABLE();
+#endif
+
+ return simde__m128d_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_cmpord_sd(a, b) simde_mm_cmpord_sd(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d simde_mm_cmpunord_pd(simde__m128d a, simde__m128d b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_cmpunord_pd(a, b);
+#else
+ simde__m128d_private r_, a_ = simde__m128d_to_private(a),
+ b_ = simde__m128d_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ uint64x2_t ceqaa = vceqq_f64(a_.neon_f64, a_.neon_f64);
+ uint64x2_t ceqbb = vceqq_f64(b_.neon_f64, b_.neon_f64);
+ r_.neon_u64 = vreinterpretq_u64_u32(
+ vmvnq_u32(vreinterpretq_u32_u64(vandq_u64(ceqaa, ceqbb))));
+#elif defined(simde_math_isnan)
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.f64) / sizeof(r_.f64[0])); i++) {
+ r_.u64[i] = (simde_math_isnan(a_.f64[i]) ||
+ simde_math_isnan(b_.f64[i]))
+ ? ~UINT64_C(0)
+ : UINT64_C(0);
+ }
+#else
+ HEDLEY_UNREACHABLE();
+#endif
+
+ return simde__m128d_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_cmpunord_pd(a, b) simde_mm_cmpunord_pd(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d simde_mm_cmpunord_sd(simde__m128d a, simde__m128d b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_cmpunord_sd(a, b);
+#elif (SIMDE_NATURAL_VECTOR_SIZE > 0)
+ return simde_mm_move_sd(a, simde_mm_cmpunord_pd(a, b));
+#else
+ simde__m128d_private r_, a_ = simde__m128d_to_private(a),
+ b_ = simde__m128d_to_private(b);
+
+#if defined(simde_math_isnan)
+ r_.u64[0] = (simde_math_isnan(a_.f64[0]) || simde_math_isnan(b_.f64[0]))
+ ? ~UINT64_C(0)
+ : UINT64_C(0);
+ r_.u64[1] = a_.u64[1];
+#else
+ HEDLEY_UNREACHABLE();
+#endif
+
+ return simde__m128d_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_cmpunord_sd(a, b) simde_mm_cmpunord_sd(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d simde_mm_cvtepi32_pd(simde__m128i a)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_cvtepi32_pd(a);
+#else
+ simde__m128d_private r_;
+ simde__m128i_private a_ = simde__m128i_to_private(a);
+
+#if defined(SIMDE_CONVERT_VECTOR_)
+ SIMDE_CONVERT_VECTOR_(r_.f64, a_.m64_private[0].i32);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.f64) / sizeof(r_.f64[0])); i++) {
+ r_.f64[i] = (simde_float64)a_.i32[i];
+ }
+#endif
+
+ return simde__m128d_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_cvtepi32_pd(a) simde_mm_cvtepi32_pd(a)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_cvtepi32_ps(simde__m128i a)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_cvtepi32_ps(a);
+#else
+ simde__m128_private r_;
+ simde__m128i_private a_ = simde__m128i_to_private(a);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_f32 = vcvtq_f32_s32(a_.neon_i32);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_f32x4_convert_i32x4(a_.wasm_v128);
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ HEDLEY_DIAGNOSTIC_PUSH
+#if HEDLEY_HAS_WARNING("-Wc11-extensions")
+#pragma clang diagnostic ignored "-Wc11-extensions"
+#endif
+ r_.altivec_f32 = vec_ctf(a_.altivec_i32, 0);
+ HEDLEY_DIAGNOSTIC_POP
+#elif defined(SIMDE_CONVERT_VECTOR_)
+ SIMDE_CONVERT_VECTOR_(r_.f32, a_.i32);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.f32) / sizeof(r_.f32[0])); i++) {
+ r_.f32[i] = (simde_float32)a_.i32[i];
+ }
+#endif
+
+ return simde__m128_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_cvtepi32_ps(a) simde_mm_cvtepi32_ps(a)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_cvtpd_pi32(simde__m128d a)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE) && defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_cvtpd_pi32(a);
+#else
+ simde__m64_private r_;
+ simde__m128d_private a_ = simde__m128d_to_private(a);
+
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i32) / sizeof(r_.i32[0])); i++) {
+ simde_float64 v = simde_math_round(a_.f64[i]);
+#if defined(SIMDE_FAST_CONVERSION_RANGE)
+ r_.i32[i] = SIMDE_CONVERT_FTOI(int32_t, v);
+#else
+ r_.i32[i] =
+ ((v > HEDLEY_STATIC_CAST(simde_float64, INT32_MIN)) &&
+ (v < HEDLEY_STATIC_CAST(simde_float64, INT32_MAX)))
+ ? SIMDE_CONVERT_FTOI(int32_t, v)
+ : INT32_MIN;
+#endif
+ }
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_cvtpd_pi32(a) simde_mm_cvtpd_pi32(a)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_cvtpd_epi32(simde__m128d a)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_cvtpd_epi32(a);
+#else
+ simde__m128i_private r_;
+
+ r_.m64[0] = simde_mm_cvtpd_pi32(a);
+ r_.m64[1] = simde_mm_setzero_si64();
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_cvtpd_epi32(a) simde_mm_cvtpd_epi32(a)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_cvtpd_ps(simde__m128d a)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_cvtpd_ps(a);
+#else
+ simde__m128_private r_;
+ simde__m128d_private a_ = simde__m128d_to_private(a);
+
+#if defined(SIMDE_CONVERT_VECTOR_)
+ SIMDE_CONVERT_VECTOR_(r_.m64_private[0].f32, a_.f64);
+ r_.m64_private[1] = simde__m64_to_private(simde_mm_setzero_si64());
+#elif defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ r_.neon_f32 = vreinterpretq_f32_f64(
+ vcombine_f64(vreinterpret_f64_f32(vcvtx_f32_f64(a_.neon_f64)),
+ vdup_n_f64(0)));
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(a_.f64) / sizeof(a_.f64[0])); i++) {
+ r_.f32[i] = (simde_float32)a_.f64[i];
+ }
+ simde_memset(&(r_.m64_private[1]), 0, sizeof(r_.m64_private[1]));
+#endif
+
+ return simde__m128_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_cvtpd_ps(a) simde_mm_cvtpd_ps(a)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d simde_mm_cvtpi32_pd(simde__m64 a)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE) && defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_cvtpi32_pd(a);
+#else
+ simde__m128d_private r_;
+ simde__m64_private a_ = simde__m64_to_private(a);
+
+#if defined(SIMDE_CONVERT_VECTOR_)
+ SIMDE_CONVERT_VECTOR_(r_.f64, a_.i32);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.f64) / sizeof(r_.f64[0])); i++) {
+ r_.f64[i] = (simde_float64)a_.i32[i];
+ }
+#endif
+
+ return simde__m128d_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_cvtpi32_pd(a) simde_mm_cvtpi32_pd(a)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_cvtps_epi32(simde__m128 a)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_cvtps_epi32(a);
+#else
+ simde__m128i_private r_;
+ simde__m128_private a_ = simde__m128_to_private(a);
+
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE) && defined(SIMDE_FAST_CONVERSION_RANGE)
+ r_.neon_i32 = vcvtnq_s32_f32(a_.neon_f32);
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE) && \
+ defined(SIMDE_FAST_CONVERSION_RANGE) && defined(SIMDE_FAST_ROUND_TIES)
+ r_.neon_i32 = vcvtnq_s32_f32(a_.neon_f32);
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE) && \
+ defined(SIMDE_FAST_CONVERSION_RANGE) && defined(SIMDE_FAST_ROUND_TIES)
+ HEDLEY_DIAGNOSTIC_PUSH
+ SIMDE_DIAGNOSTIC_DISABLE_C11_EXTENSIONS_
+ SIMDE_DIAGNOSTIC_DISABLE_VECTOR_CONVERSION_
+ r_.altivec_i32 = vec_cts(a_.altivec_f32, 1);
+ HEDLEY_DIAGNOSTIC_POP
+#else
+ a_ = simde__m128_to_private(
+ simde_x_mm_round_ps(a, SIMDE_MM_FROUND_TO_NEAREST_INT, 1));
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i32) / sizeof(r_.i32[0])); i++) {
+ simde_float32 v = simde_math_roundf(a_.f32[i]);
+#if defined(SIMDE_FAST_CONVERSION_RANGE)
+ r_.i32[i] = SIMDE_CONVERT_FTOI(int32_t, v);
+#else
+ r_.i32[i] =
+ ((v > HEDLEY_STATIC_CAST(simde_float32, INT32_MIN)) &&
+ (v < HEDLEY_STATIC_CAST(simde_float32, INT32_MAX)))
+ ? SIMDE_CONVERT_FTOI(int32_t, v)
+ : INT32_MIN;
+#endif
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_cvtps_epi32(a) simde_mm_cvtps_epi32(a)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d simde_mm_cvtps_pd(simde__m128 a)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_cvtps_pd(a);
+#else
+ simde__m128d_private r_;
+ simde__m128_private a_ = simde__m128_to_private(a);
+
+#if defined(SIMDE_CONVERT_VECTOR_)
+ SIMDE_CONVERT_VECTOR_(r_.f64, a_.m64_private[0].f32);
+#elif defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ r_.neon_f64 = vcvt_f64_f32(vget_low_f32(a_.neon_f32));
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.f64) / sizeof(r_.f64[0])); i++) {
+ r_.f64[i] = a_.f32[i];
+ }
+#endif
+
+ return simde__m128d_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_cvtps_pd(a) simde_mm_cvtps_pd(a)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+int32_t simde_mm_cvtsd_si32(simde__m128d a)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_cvtsd_si32(a);
+#else
+ simde__m128d_private a_ = simde__m128d_to_private(a);
+
+ simde_float64 v = simde_math_round(a_.f64[0]);
+#if defined(SIMDE_FAST_CONVERSION_RANGE)
+ return SIMDE_CONVERT_FTOI(int32_t, v);
+#else
+ return ((v > HEDLEY_STATIC_CAST(simde_float64, INT32_MIN)) &&
+ (v < HEDLEY_STATIC_CAST(simde_float64, INT32_MAX)))
+ ? SIMDE_CONVERT_FTOI(int32_t, v)
+ : INT32_MIN;
+#endif
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_cvtsd_si32(a) simde_mm_cvtsd_si32(a)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+int64_t simde_mm_cvtsd_si64(simde__m128d a)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE) && defined(SIMDE_ARCH_AMD64)
+#if defined(__PGI)
+ return _mm_cvtsd_si64x(a);
+#else
+ return _mm_cvtsd_si64(a);
+#endif
+#else
+ simde__m128d_private a_ = simde__m128d_to_private(a);
+ return SIMDE_CONVERT_FTOI(int64_t, simde_math_round(a_.f64[0]));
+#endif
+}
+#define simde_mm_cvtsd_si64x(a) simde_mm_cvtsd_si64(a)
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_cvtsd_si64(a) simde_mm_cvtsd_si64(a)
+#define _mm_cvtsd_si64x(a) simde_mm_cvtsd_si64x(a)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128 simde_mm_cvtsd_ss(simde__m128 a, simde__m128d b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_cvtsd_ss(a, b);
+#else
+ simde__m128_private r_, a_ = simde__m128_to_private(a);
+ simde__m128d_private b_ = simde__m128d_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ r_.neon_f32 = vsetq_lane_f32(
+ vcvtxd_f32_f64(vgetq_lane_f64(b_.neon_f64, 0)), a_.neon_f32, 0);
+#else
+ r_.f32[0] = HEDLEY_STATIC_CAST(simde_float32, b_.f64[0]);
+
+ SIMDE_VECTORIZE
+ for (size_t i = 1; i < (sizeof(r_) / sizeof(r_.i32[0])); i++) {
+ r_.i32[i] = a_.i32[i];
+ }
+#endif
+ return simde__m128_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_cvtsd_ss(a, b) simde_mm_cvtsd_ss(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+int16_t simde_x_mm_cvtsi128_si16(simde__m128i a)
+{
+ simde__m128i_private a_ = simde__m128i_to_private(a);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ return vgetq_lane_s16(a_.neon_i16, 0);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ return HEDLEY_STATIC_CAST(int16_t,
+ wasm_i16x8_extract_lane(a_.wasm_v128, 0));
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+#if defined(SIMDE_BUG_GCC_95227)
+ (void)a_;
+#endif
+ return vec_extract(a_.altivec_i16, 0);
+#else
+ return a_.i16[0];
+#endif
+}
+
+SIMDE_FUNCTION_ATTRIBUTES
+int32_t simde_mm_cvtsi128_si32(simde__m128i a)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_cvtsi128_si32(a);
+#else
+ simde__m128i_private a_ = simde__m128i_to_private(a);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ return vgetq_lane_s32(a_.neon_i32, 0);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ return HEDLEY_STATIC_CAST(int32_t,
+ wasm_i32x4_extract_lane(a_.wasm_v128, 0));
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+#if defined(SIMDE_BUG_GCC_95227)
+ (void)a_;
+#endif
+ return vec_extract(a_.altivec_i32, 0);
+#else
+ return a_.i32[0];
+#endif
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_cvtsi128_si32(a) simde_mm_cvtsi128_si32(a)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+int64_t simde_mm_cvtsi128_si64(simde__m128i a)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE) && defined(SIMDE_ARCH_AMD64)
+#if defined(__PGI)
+ return _mm_cvtsi128_si64x(a);
+#else
+ return _mm_cvtsi128_si64(a);
+#endif
+#else
+ simde__m128i_private a_ = simde__m128i_to_private(a);
+#if defined(SIMDE_POWER_ALTIVEC_P7_NATIVE) && !defined(HEDLEY_IBM_VERSION)
+ return vec_extract(HEDLEY_REINTERPRET_CAST(
+ SIMDE_POWER_ALTIVEC_VECTOR(signed long long),
+ a_.i64),
+ 0);
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ return vgetq_lane_s64(a_.neon_i64, 0);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ return HEDLEY_STATIC_CAST(int64_t,
+ wasm_i64x2_extract_lane(a_.wasm_v128, 0));
+#endif
+ return a_.i64[0];
+#endif
+}
+#define simde_mm_cvtsi128_si64x(a) simde_mm_cvtsi128_si64(a)
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_cvtsi128_si64(a) simde_mm_cvtsi128_si64(a)
+#define _mm_cvtsi128_si64x(a) simde_mm_cvtsi128_si64x(a)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d simde_mm_cvtsi32_sd(simde__m128d a, int32_t b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_cvtsi32_sd(a, b);
+#else
+ simde__m128d_private r_;
+ simde__m128d_private a_ = simde__m128d_to_private(a);
+
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ r_.neon_f64 = vsetq_lane_f64(HEDLEY_STATIC_CAST(float64_t, b),
+ a_.neon_f64, 0);
+#else
+ r_.f64[0] = HEDLEY_STATIC_CAST(simde_float64, b);
+ r_.i64[1] = a_.i64[1];
+#endif
+
+ return simde__m128d_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_cvtsi32_sd(a, b) simde_mm_cvtsi32_sd(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_x_mm_cvtsi16_si128(int16_t a)
+{
+ simde__m128i_private r_;
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i16 = vsetq_lane_s16(a, vdupq_n_s16(0), 0);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_i16x8_make(a, 0, 0, 0, 0, 0, 0, 0);
+#else
+ r_.i16[0] = a;
+ r_.i16[1] = 0;
+ r_.i16[2] = 0;
+ r_.i16[3] = 0;
+ r_.i16[4] = 0;
+ r_.i16[5] = 0;
+ r_.i16[6] = 0;
+ r_.i16[7] = 0;
+#endif
+
+ return simde__m128i_from_private(r_);
+}
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_cvtsi32_si128(int32_t a)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_cvtsi32_si128(a);
+#else
+ simde__m128i_private r_;
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i32 = vsetq_lane_s32(a, vdupq_n_s32(0), 0);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_i32x4_make(a, 0, 0, 0);
+#else
+ r_.i32[0] = a;
+ r_.i32[1] = 0;
+ r_.i32[2] = 0;
+ r_.i32[3] = 0;
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_cvtsi32_si128(a) simde_mm_cvtsi32_si128(a)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d simde_mm_cvtsi64_sd(simde__m128d a, int64_t b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE) && defined(SIMDE_ARCH_AMD64)
+#if !defined(__PGI)
+ return _mm_cvtsi64_sd(a, b);
+#else
+ return _mm_cvtsi64x_sd(a, b);
+#endif
+#else
+ simde__m128d_private r_, a_ = simde__m128d_to_private(a);
+
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ r_.neon_f64 = vsetq_lane_f64(HEDLEY_STATIC_CAST(float64_t, b),
+ a_.neon_f64, 0);
+#else
+ r_.f64[0] = HEDLEY_STATIC_CAST(simde_float64, b);
+ r_.f64[1] = a_.f64[1];
+#endif
+
+ return simde__m128d_from_private(r_);
+#endif
+}
+#define simde_mm_cvtsi64x_sd(a, b) simde_mm_cvtsi64_sd(a, b)
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_cvtsi64_sd(a, b) simde_mm_cvtsi64_sd(a, b)
+#define _mm_cvtsi64x_sd(a, b) simde_mm_cvtsi64x_sd(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_cvtsi64_si128(int64_t a)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE) && defined(SIMDE_ARCH_AMD64)
+#if !defined(__PGI)
+ return _mm_cvtsi64_si128(a);
+#else
+ return _mm_cvtsi64x_si128(a);
+#endif
+#else
+ simde__m128i_private r_;
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i64 = vsetq_lane_s64(a, vdupq_n_s64(0), 0);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_i64x2_make(a, 0);
+#else
+ r_.i64[0] = a;
+ r_.i64[1] = 0;
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#define simde_mm_cvtsi64x_si128(a) simde_mm_cvtsi64_si128(a)
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_cvtsi64_si128(a) simde_mm_cvtsi64_si128(a)
+#define _mm_cvtsi64x_si128(a) simde_mm_cvtsi64x_si128(a)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d simde_mm_cvtss_sd(simde__m128d a, simde__m128 b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_cvtss_sd(a, b);
+#elif defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ float64x2_t temp = vcvt_f64_f32(vset_lane_f32(
+ vgetq_lane_f32(simde__m128_to_private(b).neon_f32, 0),
+ vdup_n_f32(0), 0));
+ return vsetq_lane_f64(
+ vgetq_lane_f64(simde__m128d_to_private(a).neon_f64, 1), temp,
+ 1);
+#else
+ simde__m128d_private a_ = simde__m128d_to_private(a);
+ simde__m128_private b_ = simde__m128_to_private(b);
+
+ a_.f64[0] = HEDLEY_STATIC_CAST(simde_float64, b_.f32[0]);
+
+ return simde__m128d_from_private(a_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_cvtss_sd(a, b) simde_mm_cvtss_sd(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_cvttpd_pi32(simde__m128d a)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE) && defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_cvttpd_pi32(a);
+#else
+ simde__m64_private r_;
+ simde__m128d_private a_ = simde__m128d_to_private(a);
+
+#if defined(SIMDE_CONVERT_VECTOR_) && defined(SIMDE_FAST_CONVERSION_RANGE)
+ SIMDE_CONVERT_VECTOR_(r_.i32, a_.f64);
+#else
+ for (size_t i = 0; i < (sizeof(r_.i32) / sizeof(r_.i32[0])); i++) {
+ simde_float64 v = a_.f64[i];
+#if defined(SIMDE_FAST_CONVERSION_RANGE)
+ r_.i32[i] = SIMDE_CONVERT_FTOI(int32_t, v);
+#else
+ r_.i32[i] =
+ ((v > HEDLEY_STATIC_CAST(simde_float64, INT32_MIN)) &&
+ (v < HEDLEY_STATIC_CAST(simde_float64, INT32_MAX)))
+ ? SIMDE_CONVERT_FTOI(int32_t, v)
+ : INT32_MIN;
+#endif
+ }
+#endif
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_cvttpd_pi32(a) simde_mm_cvttpd_pi32(a)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_cvttpd_epi32(simde__m128d a)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_cvttpd_epi32(a);
+#else
+ simde__m128i_private r_;
+
+ r_.m64[0] = simde_mm_cvttpd_pi32(a);
+ r_.m64[1] = simde_mm_setzero_si64();
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_cvttpd_epi32(a) simde_mm_cvttpd_epi32(a)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_cvttps_epi32(simde__m128 a)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_cvttps_epi32(a);
+#else
+ simde__m128i_private r_;
+ simde__m128_private a_ = simde__m128_to_private(a);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE) && defined(SIMDE_FAST_CONVERSION_RANGE)
+ r_.neon_i32 = vcvtq_s32_f32(a_.neon_f32);
+#elif defined(SIMDE_CONVERT_VECTOR_) && defined(SIMDE_FAST_CONVERSION_RANGE)
+ SIMDE_CONVERT_VECTOR_(r_.i32, a_.f32);
+#else
+ for (size_t i = 0; i < (sizeof(r_.i32) / sizeof(r_.i32[0])); i++) {
+ simde_float32 v = a_.f32[i];
+#if defined(SIMDE_FAST_CONVERSION_RANGE)
+ r_.i32[i] = SIMDE_CONVERT_FTOI(int32_t, v);
+#else
+ r_.i32[i] =
+ ((v > HEDLEY_STATIC_CAST(simde_float32, INT32_MIN)) &&
+ (v < HEDLEY_STATIC_CAST(simde_float32, INT32_MAX)))
+ ? SIMDE_CONVERT_FTOI(int32_t, v)
+ : INT32_MIN;
+#endif
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_cvttps_epi32(a) simde_mm_cvttps_epi32(a)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+int32_t simde_mm_cvttsd_si32(simde__m128d a)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_cvttsd_si32(a);
+#else
+ simde__m128d_private a_ = simde__m128d_to_private(a);
+ simde_float64 v = a_.f64[0];
+#if defined(SIMDE_FAST_CONVERSION_RANGE)
+ return SIMDE_CONVERT_FTOI(int32_t, v);
+#else
+ return ((v > HEDLEY_STATIC_CAST(simde_float64, INT32_MIN)) &&
+ (v < HEDLEY_STATIC_CAST(simde_float64, INT32_MAX)))
+ ? SIMDE_CONVERT_FTOI(int32_t, v)
+ : INT32_MIN;
+#endif
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_cvttsd_si32(a) simde_mm_cvttsd_si32(a)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+int64_t simde_mm_cvttsd_si64(simde__m128d a)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE) && defined(SIMDE_ARCH_AMD64)
+#if !defined(__PGI)
+ return _mm_cvttsd_si64(a);
+#else
+ return _mm_cvttsd_si64x(a);
+#endif
+#else
+ simde__m128d_private a_ = simde__m128d_to_private(a);
+ return SIMDE_CONVERT_FTOI(int64_t, a_.f64[0]);
+#endif
+}
+#define simde_mm_cvttsd_si64x(a) simde_mm_cvttsd_si64(a)
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_cvttsd_si64(a) simde_mm_cvttsd_si64(a)
+#define _mm_cvttsd_si64x(a) simde_mm_cvttsd_si64x(a)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d simde_mm_div_pd(simde__m128d a, simde__m128d b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_div_pd(a, b);
+#else
+ simde__m128d_private r_, a_ = simde__m128d_to_private(a),
+ b_ = simde__m128d_to_private(b);
+
+#if defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
+ r_.f64 = a_.f64 / b_.f64;
+#elif defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ r_.neon_f64 = vdivq_f64(a_.neon_f64, b_.neon_f64);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_f64x2_div(a_.wasm_v128, b_.wasm_v128);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.f64) / sizeof(r_.f64[0])); i++) {
+ r_.f64[i] = a_.f64[i] / b_.f64[i];
+ }
+#endif
+
+ return simde__m128d_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_div_pd(a, b) simde_mm_div_pd(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d simde_mm_div_sd(simde__m128d a, simde__m128d b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_div_sd(a, b);
+#elif (SIMDE_NATURAL_VECTOR_SIZE > 0)
+ return simde_mm_move_sd(a, simde_mm_div_pd(a, b));
+#else
+ simde__m128d_private r_, a_ = simde__m128d_to_private(a),
+ b_ = simde__m128d_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ float64x2_t temp = vdivq_f64(a_.neon_f64, b_.neon_f64);
+ r_.neon_f64 = vsetq_lane_f64(vgetq_lane_f64(a_.neon_f64, 1), temp, 1);
+#else
+ r_.f64[0] = a_.f64[0] / b_.f64[0];
+ r_.f64[1] = a_.f64[1];
+#endif
+
+ return simde__m128d_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_div_sd(a, b) simde_mm_div_sd(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+int32_t simde_mm_extract_epi16(simde__m128i a, const int imm8)
+ SIMDE_REQUIRE_CONSTANT_RANGE(imm8, 0, 7)
+{
+ uint16_t r;
+ simde__m128i_private a_ = simde__m128i_to_private(a);
+
+#if defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+#if defined(SIMDE_BUG_GCC_95227)
+ (void)a_;
+ (void)imm8;
+#endif
+ r = HEDLEY_STATIC_CAST(uint16_t, vec_extract(a_.altivec_i16, imm8));
+#else
+ r = a_.u16[imm8 & 7];
+#endif
+
+ return HEDLEY_STATIC_CAST(int32_t, r);
+}
+#if defined(SIMDE_X86_SSE2_NATIVE) && \
+ (!defined(HEDLEY_GCC_VERSION) || HEDLEY_GCC_VERSION_CHECK(4, 6, 0))
+#define simde_mm_extract_epi16(a, imm8) _mm_extract_epi16(a, imm8)
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+#define simde_mm_extract_epi16(a, imm8) \
+ (HEDLEY_STATIC_CAST( \
+ int32_t, vgetq_lane_s16(simde__m128i_to_private(a).neon_i16, \
+ (imm8))) & \
+ (INT32_C(0x0000ffff)))
+#endif
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_extract_epi16(a, imm8) simde_mm_extract_epi16(a, imm8)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_insert_epi16(simde__m128i a, int16_t i, const int imm8)
+ SIMDE_REQUIRE_CONSTANT_RANGE(imm8, 0, 7)
+{
+ simde__m128i_private a_ = simde__m128i_to_private(a);
+ a_.i16[imm8 & 7] = i;
+ return simde__m128i_from_private(a_);
+}
+#if defined(SIMDE_X86_SSE2_NATIVE) && !defined(__PGI)
+#define simde_mm_insert_epi16(a, i, imm8) _mm_insert_epi16((a), (i), (imm8))
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+#define simde_mm_insert_epi16(a, i, imm8) \
+ simde__m128i_from_neon_i16( \
+ vsetq_lane_s16((i), simde__m128i_to_neon_i16(a), (imm8)))
+#endif
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_insert_epi16(a, i, imm8) simde_mm_insert_epi16(a, i, imm8)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d
+simde_mm_load_pd(simde_float64 const mem_addr[HEDLEY_ARRAY_PARAM(2)])
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_load_pd(mem_addr);
+#else
+ simde__m128d_private r_;
+
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ r_.neon_f64 = vld1q_f64(mem_addr);
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_u32 =
+ vld1q_u32(HEDLEY_REINTERPRET_CAST(uint32_t const *, mem_addr));
+#else
+ simde_memcpy(&r_, SIMDE_ALIGN_ASSUME_LIKE(mem_addr, simde__m128d),
+ sizeof(r_));
+#endif
+
+ return simde__m128d_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_load_pd(mem_addr) simde_mm_load_pd(mem_addr)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d simde_mm_load1_pd(simde_float64 const *mem_addr)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_load1_pd(mem_addr);
+#elif defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ return simde__m128d_from_neon_f64(vld1q_dup_f64(mem_addr));
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ return simde__m128d_from_wasm_v128(wasm_v64x2_load_splat(mem_addr));
+#else
+ return simde_mm_set1_pd(*mem_addr);
+#endif
+}
+#define simde_mm_load_pd1(mem_addr) simde_mm_load1_pd(mem_addr)
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_load_pd1(mem_addr) simde_mm_load1_pd(mem_addr)
+#define _mm_load1_pd(mem_addr) simde_mm_load1_pd(mem_addr)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d simde_mm_load_sd(simde_float64 const *mem_addr)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_load_sd(mem_addr);
+#else
+ simde__m128d_private r_;
+
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ r_.neon_f64 = vsetq_lane_f64(*mem_addr, vdupq_n_f64(0), 0);
+#else
+ r_.f64[0] = *mem_addr;
+ r_.u64[1] = UINT64_C(0);
+#endif
+
+ return simde__m128d_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_load_sd(mem_addr) simde_mm_load_sd(mem_addr)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_load_si128(simde__m128i const *mem_addr)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_load_si128(
+ HEDLEY_REINTERPRET_CAST(__m128i const *, mem_addr));
+#else
+ simde__m128i_private r_;
+
+#if defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ r_.altivec_i32 = vec_ld(
+ 0, HEDLEY_REINTERPRET_CAST(
+ SIMDE_POWER_ALTIVEC_VECTOR(int) const *, mem_addr));
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i32 =
+ vld1q_s32(HEDLEY_REINTERPRET_CAST(int32_t const *, mem_addr));
+#else
+ simde_memcpy(&r_, SIMDE_ALIGN_ASSUME_LIKE(mem_addr, simde__m128i),
+ sizeof(simde__m128i));
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_load_si128(mem_addr) simde_mm_load_si128(mem_addr)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d simde_mm_loadh_pd(simde__m128d a, simde_float64 const *mem_addr)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_loadh_pd(a, mem_addr);
+#else
+ simde__m128d_private r_, a_ = simde__m128d_to_private(a);
+
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ r_.neon_f64 = vcombine_f64(
+ vget_low_f64(a_.neon_f64),
+ vld1_f64(HEDLEY_REINTERPRET_CAST(const float64_t *, mem_addr)));
+#else
+ simde_float64 t;
+
+ simde_memcpy(&t, mem_addr, sizeof(t));
+ r_.f64[0] = a_.f64[0];
+ r_.f64[1] = t;
+#endif
+
+ return simde__m128d_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_loadh_pd(a, mem_addr) simde_mm_loadh_pd(a, mem_addr)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_loadl_epi64(simde__m128i const *mem_addr)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_loadl_epi64(mem_addr);
+#else
+ simde__m128i_private r_;
+
+ int64_t value;
+ simde_memcpy(&value, mem_addr, sizeof(value));
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i64 = vcombine_s64(
+ vld1_s64(HEDLEY_REINTERPRET_CAST(int64_t const *, mem_addr)),
+ vdup_n_s64(0));
+#else
+ r_.i64[0] = value;
+ r_.i64[1] = 0;
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_loadl_epi64(mem_addr) simde_mm_loadl_epi64(mem_addr)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d simde_mm_loadl_pd(simde__m128d a, simde_float64 const *mem_addr)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_loadl_pd(a, mem_addr);
+#else
+ simde__m128d_private r_, a_ = simde__m128d_to_private(a);
+
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ r_.neon_f64 = vcombine_f64(
+ vld1_f64(HEDLEY_REINTERPRET_CAST(const float64_t *, mem_addr)),
+ vget_high_f64(a_.neon_f64));
+#else
+ r_.f64[0] = *mem_addr;
+ r_.u64[1] = a_.u64[1];
+#endif
+
+ return simde__m128d_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_loadl_pd(a, mem_addr) simde_mm_loadl_pd(a, mem_addr)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d
+simde_mm_loadr_pd(simde_float64 const mem_addr[HEDLEY_ARRAY_PARAM(2)])
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_loadr_pd(mem_addr);
+#else
+ simde__m128d_private r_;
+
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ r_.neon_f64 = vld1q_f64(mem_addr);
+ r_.neon_f64 = vextq_f64(r_.neon_f64, r_.neon_f64, 1);
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i64 =
+ vld1q_s64(HEDLEY_REINTERPRET_CAST(int64_t const *, mem_addr));
+ r_.neon_i64 = vextq_s64(r_.neon_i64, r_.neon_i64, 1);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ v128_t tmp = wasm_v128_load(mem_addr);
+ r_.wasm_v128 = wasm_v64x2_shuffle(tmp, tmp, 1, 0);
+#else
+ r_.f64[0] = mem_addr[1];
+ r_.f64[1] = mem_addr[0];
+#endif
+
+ return simde__m128d_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_loadr_pd(mem_addr) simde_mm_loadr_pd(mem_addr)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d
+simde_mm_loadu_pd(simde_float64 const mem_addr[HEDLEY_ARRAY_PARAM(2)])
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_loadu_pd(mem_addr);
+#elif defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ return vld1q_f64(mem_addr);
+#else
+ simde__m128d_private r_;
+
+ simde_memcpy(&r_, mem_addr, sizeof(r_));
+
+ return simde__m128d_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_loadu_pd(mem_addr) simde_mm_loadu_pd(mem_addr)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_x_mm_loadu_epi8(int8_t const *mem_addr)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_loadu_si128(
+ SIMDE_ALIGN_CAST(simde__m128i const *, mem_addr));
+#else
+ simde__m128i_private r_;
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i8 =
+ vld1q_s8(HEDLEY_REINTERPRET_CAST(int8_t const *, mem_addr));
+#else
+ simde_memcpy(&r_, mem_addr, sizeof(r_));
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_x_mm_loadu_epi16(int16_t const *mem_addr)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_loadu_si128(
+ SIMDE_ALIGN_CAST(simde__m128i const *, mem_addr));
+#else
+ simde__m128i_private r_;
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i16 =
+ vld1q_s16(HEDLEY_REINTERPRET_CAST(int16_t const *, mem_addr));
+#else
+ simde_memcpy(&r_, mem_addr, sizeof(r_));
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_x_mm_loadu_epi32(int32_t const *mem_addr)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_loadu_si128(
+ SIMDE_ALIGN_CAST(simde__m128i const *, mem_addr));
+#else
+ simde__m128i_private r_;
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i32 =
+ vld1q_s32(HEDLEY_REINTERPRET_CAST(int32_t const *, mem_addr));
+#else
+ simde_memcpy(&r_, mem_addr, sizeof(r_));
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_x_mm_loadu_epi64(int64_t const *mem_addr)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_loadu_si128(
+ SIMDE_ALIGN_CAST(simde__m128i const *, mem_addr));
+#else
+ simde__m128i_private r_;
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i64 =
+ vld1q_s64(HEDLEY_REINTERPRET_CAST(int64_t const *, mem_addr));
+#else
+ simde_memcpy(&r_, mem_addr, sizeof(r_));
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_loadu_si128(void const *mem_addr)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_loadu_si128(HEDLEY_STATIC_CAST(__m128i const *, mem_addr));
+#else
+ simde__m128i_private r_;
+
+#if HEDLEY_GNUC_HAS_ATTRIBUTE(may_alias, 3, 3, 0)
+ HEDLEY_DIAGNOSTIC_PUSH
+ SIMDE_DIAGNOSTIC_DISABLE_PACKED_
+ struct simde_mm_loadu_si128_s {
+ __typeof__(r_) v;
+ } __attribute__((__packed__, __may_alias__));
+ r_ = HEDLEY_REINTERPRET_CAST(const struct simde_mm_loadu_si128_s *,
+ mem_addr)
+ ->v;
+ HEDLEY_DIAGNOSTIC_POP
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ /* Note that this is a lower priority than the struct above since
+ * clang assumes mem_addr is aligned (since it is a __m128i*). */
+ r_.neon_i32 =
+ vld1q_s32(HEDLEY_REINTERPRET_CAST(int32_t const *, mem_addr));
+#else
+ simde_memcpy(&r_, mem_addr, sizeof(r_));
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_loadu_si128(mem_addr) simde_mm_loadu_si128(mem_addr)
+#endif
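
As an aside (illustrative only, not part of the vendored header): a minimal usage sketch of the load/extract wrappers above, assuming the canonical <simde/x86/sse2.h> include path; the copy vendored with this package may be reached through a different path.

#include <stdint.h>
#include <stdio.h>
#include <simde/x86/sse2.h>

int main(void)
{
    int32_t data[4] = {10, 20, 30, 40};

    /* simde_mm_loadu_si128 performs an unaligned 128-bit load on every
     * backend (SSE2, NEON, AltiVec, WASM SIMD, or the memcpy fallback). */
    simde__m128i v = simde_mm_loadu_si128(data);

    /* Lane 0 comes back out via the _mm_cvtsi128_si32 wrapper. */
    printf("%d\n", simde_mm_cvtsi128_si32(v)); /* prints 10 */
    return 0;
}
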
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_madd_epi16(simde__m128i a, simde__m128i b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_madd_epi16(a, b);
+#else
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a),
+ b_ = simde__m128i_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ int32x4_t pl =
+ vmull_s16(vget_low_s16(a_.neon_i16), vget_low_s16(b_.neon_i16));
+ int32x4_t ph = vmull_high_s16(a_.neon_i16, b_.neon_i16);
+ r_.neon_i32 = vpaddq_s32(pl, ph);
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ int32x4_t pl =
+ vmull_s16(vget_low_s16(a_.neon_i16), vget_low_s16(b_.neon_i16));
+ int32x4_t ph = vmull_s16(vget_high_s16(a_.neon_i16),
+ vget_high_s16(b_.neon_i16));
+ int32x2_t rl = vpadd_s32(vget_low_s32(pl), vget_high_s32(pl));
+ int32x2_t rh = vpadd_s32(vget_low_s32(ph), vget_high_s32(ph));
+ r_.neon_i32 = vcombine_s32(rl, rh);
+#elif defined(SIMDE_POWER_ALTIVEC_P7_NATIVE)
+ static const SIMDE_POWER_ALTIVEC_VECTOR(int) tz = {0, 0, 0, 0};
+ r_.altivec_i32 = vec_msum(a_.altivec_i16, b_.altivec_i16, tz);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_) / sizeof(r_.i16[0])); i += 2) {
+ r_.i32[i / 2] = (a_.i16[i] * b_.i16[i]) +
+ (a_.i16[i + 1] * b_.i16[i + 1]);
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_madd_epi16(a, b) simde_mm_madd_epi16(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+void simde_mm_maskmoveu_si128(simde__m128i a, simde__m128i mask,
+ int8_t mem_addr[HEDLEY_ARRAY_PARAM(16)])
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ _mm_maskmoveu_si128(a, mask, HEDLEY_REINTERPRET_CAST(char *, mem_addr));
+#else
+ simde__m128i_private a_ = simde__m128i_to_private(a),
+ mask_ = simde__m128i_to_private(mask);
+
+ for (size_t i = 0; i < (sizeof(a_.i8) / sizeof(a_.i8[0])); i++) {
+ if (mask_.u8[i] & 0x80) {
+ mem_addr[i] = a_.i8[i];
+ }
+ }
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_maskmoveu_si128(a, mask, mem_addr) \
+ simde_mm_maskmoveu_si128( \
+ (a), (mask), \
+ SIMDE_CHECKED_REINTERPRET_CAST(int8_t *, char *, (mem_addr)))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+int32_t simde_mm_movemask_epi8(simde__m128i a)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE) && !defined(__INTEL_COMPILER)
+ /* ICC has trouble with _mm_movemask_epi8 at -O2 and above: */
+ return _mm_movemask_epi8(a);
+#else
+ int32_t r = 0;
+ simde__m128i_private a_ = simde__m128i_to_private(a);
+
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ uint8x16_t input = a_.neon_u8;
+ const int8_t xr[16] = {-7, -6, -5, -4, -3, -2, -1, 0,
+ -7, -6, -5, -4, -3, -2, -1, 0};
+ const uint8x16_t mask_and = vdupq_n_u8(0x80);
+ const int8x16_t mask_shift = vld1q_s8(xr);
+ const uint8x16_t mask_result =
+ vshlq_u8(vandq_u8(input, mask_and), mask_shift);
+ uint8x8_t lo = vget_low_u8(mask_result);
+ uint8x8_t hi = vget_high_u8(mask_result);
+ r = vaddv_u8(lo) + (vaddv_u8(hi) << 8);
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ // Use increasingly wide shifts+adds to collect the sign bits
+ // together.
+ // Since the widening shifts would be rather confusing to follow in little endian, everything
+ // will be illustrated in big endian order instead. This has a different result - the bits
+ // would actually be reversed on a big endian machine.
+
+ // Starting input (only half the elements are shown):
+ // 89 ff 1d c0 00 10 99 33
+ uint8x16_t input = a_.neon_u8;
+
+ // Shift out everything but the sign bits with an unsigned shift right.
+ //
+ // Bytes of the vector:
+ // 89 ff 1d c0 00 10 99 33
+ // \ \ \ \ \ \ \ \ high_bits = (uint16x4_t)(input >> 7)
+ // | | | | | | | |
+ // 01 01 00 01 00 00 01 00
+ //
+ // Bits of first important lane(s):
+ // 10001001 (89)
+ // \______
+ // |
+ // 00000001 (01)
+ uint16x8_t high_bits = vreinterpretq_u16_u8(vshrq_n_u8(input, 7));
+
+ // Merge the even lanes together with a 16-bit unsigned shift right + add.
+ // 'xx' represents garbage data which will be ignored in the final result.
+ // In the important bytes, the add functions like a binary OR.
+ //
+ // 01 01 00 01 00 00 01 00
+ // \_ | \_ | \_ | \_ | paired16 = (uint32x4_t)(input + (input >> 7))
+ // \| \| \| \|
+ // xx 03 xx 01 xx 00 xx 02
+ //
+ // 00000001 00000001 (01 01)
+ // \_______ |
+ // \|
+ // xxxxxxxx xxxxxx11 (xx 03)
+ uint32x4_t paired16 =
+ vreinterpretq_u32_u16(vsraq_n_u16(high_bits, high_bits, 7));
+
+ // Repeat with a wider 32-bit shift + add.
+ // xx 03 xx 01 xx 00 xx 02
+ // \____ | \____ | paired32 = (uint64x1_t)(paired16 + (paired16 >> 14))
+ // \| \|
+ // xx xx xx 0d xx xx xx 02
+ //
+ // 00000011 00000001 (03 01)
+ // \\_____ ||
+ // '----.\||
+ // xxxxxxxx xxxx1101 (xx 0d)
+ uint64x2_t paired32 =
+ vreinterpretq_u64_u32(vsraq_n_u32(paired16, paired16, 14));
+
+ // Last, an even wider 64-bit shift + add to get our result in the low 8 bit lanes.
+ // xx xx xx 0d xx xx xx 02
+ // \_________ | paired64 = (uint8x8_t)(paired32 + (paired32 >> 28))
+ // \|
+ // xx xx xx xx xx xx xx d2
+ //
+ // 00001101 00000010 (0d 02)
+ // \ \___ | |
+ // '---. \| |
+ // xxxxxxxx 11010010 (xx d2)
+ uint8x16_t paired64 =
+ vreinterpretq_u8_u64(vsraq_n_u64(paired32, paired32, 28));
+
+ // Extract the low 8 bits from each 64-bit lane with 2 8-bit extracts.
+ // xx xx xx xx xx xx xx d2
+ // || return paired64[0]
+ // d2
+ // Note: Little endian would return the correct value 4b (01001011) instead.
+ r = vgetq_lane_u8(paired64, 0) |
+ (HEDLEY_STATIC_CAST(int32_t, vgetq_lane_u8(paired64, 8)) << 8);
+#elif defined(SIMDE_POWER_ALTIVEC_P8_NATIVE) && \
+ !defined(HEDLEY_IBM_VERSION) && \
+ (SIMDE_ENDIAN_ORDER == SIMDE_ENDIAN_LITTLE)
+ static const SIMDE_POWER_ALTIVEC_VECTOR(unsigned char)
+ perm = {120, 112, 104, 96, 88, 80, 72, 64,
+ 56, 48, 40, 32, 24, 16, 8, 0};
+ r = HEDLEY_STATIC_CAST(
+ int32_t, vec_extract(vec_vbpermq(a_.altivec_u8, perm), 1));
+#elif defined(SIMDE_POWER_ALTIVEC_P8_NATIVE) && \
+ !defined(HEDLEY_IBM_VERSION) && \
+ (SIMDE_ENDIAN_ORDER == SIMDE_ENDIAN_BIG)
+ static const SIMDE_POWER_ALTIVEC_VECTOR(unsigned char)
+ perm = {120, 112, 104, 96, 88, 80, 72, 64,
+ 56, 48, 40, 32, 24, 16, 8, 0};
+ r = HEDLEY_STATIC_CAST(
+ int32_t, vec_extract(vec_vbpermq(a_.altivec_u8, perm), 14));
+#else
+ SIMDE_VECTORIZE_REDUCTION(| : r)
+ for (size_t i = 0; i < (sizeof(a_.u8) / sizeof(a_.u8[0])); i++) {
+ r |= (a_.u8[15 - i] >> 7) << (15 - i);
+ }
+#endif
+
+ return r;
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_movemask_epi8(a) simde_mm_movemask_epi8(a)
+#endif
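
A small self-check sketch (illustrative, not from the header, same include-path assumption as above) pinning down the semantics described in the comments above: the result is a 16-bit mask built from the most-significant bit of each byte lane.

#include <assert.h>
#include <stdint.h>
#include <simde/x86/sse2.h>

int main(void)
{
    /* Byte 15 (0xff) and byte 0 (0x80) have their sign bits set, every
     * other byte is zero, so the expected mask is 0x8001. */
    simde__m128i v = simde_mm_set_epi8(-1, 0, 0, 0, 0, 0, 0, 0,
                                       0, 0, 0, 0, 0, 0, 0, INT8_MIN);
    assert(simde_mm_movemask_epi8(v) == 0x8001);
    return 0;
}
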
+
+SIMDE_FUNCTION_ATTRIBUTES
+int32_t simde_mm_movemask_pd(simde__m128d a)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_movemask_pd(a);
+#else
+ int32_t r = 0;
+ simde__m128d_private a_ = simde__m128d_to_private(a);
+
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ static const int64_t shift_amount[] = {0, 1};
+ const int64x2_t shift = vld1q_s64(shift_amount);
+ uint64x2_t tmp = vshrq_n_u64(a_.neon_u64, 63);
+ return HEDLEY_STATIC_CAST(int32_t, vaddvq_u64(vshlq_u64(tmp, shift)));
+#else
+ SIMDE_VECTORIZE_REDUCTION(| : r)
+ for (size_t i = 0; i < (sizeof(a_.u64) / sizeof(a_.u64[0])); i++) {
+ r |= (a_.u64[i] >> 63) << i;
+ }
+#endif
+
+ return r;
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_movemask_pd(a) simde_mm_movemask_pd(a)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_movepi64_pi64(simde__m128i a)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE) && defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_movepi64_pi64(a);
+#else
+ simde__m64_private r_;
+ simde__m128i_private a_ = simde__m128i_to_private(a);
+
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ r_.neon_i64 = vget_low_s64(a_.neon_i64);
+#else
+ r_.i64[0] = a_.i64[0];
+#endif
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_movepi64_pi64(a) simde_mm_movepi64_pi64(a)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_movpi64_epi64(simde__m64 a)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE) && defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_movpi64_epi64(a);
+#else
+ simde__m128i_private r_;
+ simde__m64_private a_ = simde__m64_to_private(a);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i64 = vcombine_s64(a_.neon_i64, vdup_n_s64(0));
+#else
+ r_.i64[0] = a_.i64[0];
+ r_.i64[1] = 0;
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_movpi64_epi64(a) simde_mm_movpi64_epi64(a)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_min_epi16(simde__m128i a, simde__m128i b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_min_epi16(a, b);
+#else
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a),
+ b_ = simde__m128i_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i16 = vminq_s16(a_.neon_i16, b_.neon_i16);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_i16x8_min(a_.wasm_v128, b_.wasm_v128);
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ r_.altivec_i16 = vec_min(a_.altivec_i16, b_.altivec_i16);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i16) / sizeof(r_.i16[0])); i++) {
+ r_.i16[i] = (a_.i16[i] < b_.i16[i]) ? a_.i16[i] : b_.i16[i];
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_min_epi16(a, b) simde_mm_min_epi16(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_min_epu8(simde__m128i a, simde__m128i b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_min_epu8(a, b);
+#else
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a),
+ b_ = simde__m128i_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_u8 = vminq_u8(a_.neon_u8, b_.neon_u8);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_u8x16_min(a_.wasm_v128, b_.wasm_v128);
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ r_.altivec_u8 = vec_min(a_.altivec_u8, b_.altivec_u8);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.u8) / sizeof(r_.u8[0])); i++) {
+ r_.u8[i] = (a_.u8[i] < b_.u8[i]) ? a_.u8[i] : b_.u8[i];
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_min_epu8(a, b) simde_mm_min_epu8(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d simde_mm_min_pd(simde__m128d a, simde__m128d b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_min_pd(a, b);
+#else
+ simde__m128d_private r_, a_ = simde__m128d_to_private(a),
+ b_ = simde__m128d_to_private(b);
+
+#if defined(SIMDE_POWER_ALTIVEC_P8_NATIVE)
+ r_.altivec_f64 = vec_min(a_.altivec_f64, b_.altivec_f64);
+#elif defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ r_.neon_f64 = vminq_f64(a_.neon_f64, b_.neon_f64);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_f64x2_min(a_.wasm_v128, b_.wasm_v128);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.f64) / sizeof(r_.f64[0])); i++) {
+ r_.f64[i] = (a_.f64[i] < b_.f64[i]) ? a_.f64[i] : b_.f64[i];
+ }
+#endif
+
+ return simde__m128d_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_min_pd(a, b) simde_mm_min_pd(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d simde_mm_min_sd(simde__m128d a, simde__m128d b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_min_sd(a, b);
+#elif (SIMDE_NATURAL_VECTOR_SIZE > 0)
+ return simde_mm_move_sd(a, simde_mm_min_pd(a, b));
+#else
+ simde__m128d_private r_, a_ = simde__m128d_to_private(a),
+ b_ = simde__m128d_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ float64x2_t temp = vminq_f64(a_.neon_f64, b_.neon_f64);
+ r_.neon_f64 = vsetq_lane_f64(vgetq_lane_f64(a_.neon_f64, 1), temp, 1);
+#else
+ r_.f64[0] = (a_.f64[0] < b_.f64[0]) ? a_.f64[0] : b_.f64[0];
+ r_.f64[1] = a_.f64[1];
+#endif
+
+ return simde__m128d_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_min_sd(a, b) simde_mm_min_sd(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_max_epi16(simde__m128i a, simde__m128i b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_max_epi16(a, b);
+#else
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a),
+ b_ = simde__m128i_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i16 = vmaxq_s16(a_.neon_i16, b_.neon_i16);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_i16x8_max(a_.wasm_v128, b_.wasm_v128);
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ r_.altivec_i16 = vec_max(a_.altivec_i16, b_.altivec_i16);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i16) / sizeof(r_.i16[0])); i++) {
+ r_.i16[i] = (a_.i16[i] > b_.i16[i]) ? a_.i16[i] : b_.i16[i];
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_max_epi16(a, b) simde_mm_max_epi16(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_max_epu8(simde__m128i a, simde__m128i b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_max_epu8(a, b);
+#else
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a),
+ b_ = simde__m128i_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_u8 = vmaxq_u8(a_.neon_u8, b_.neon_u8);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_u8x16_max(a_.wasm_v128, b_.wasm_v128);
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ r_.altivec_u8 = vec_max(a_.altivec_u8, b_.altivec_u8);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.u8) / sizeof(r_.u8[0])); i++) {
+ r_.u8[i] = (a_.u8[i] > b_.u8[i]) ? a_.u8[i] : b_.u8[i];
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_max_epu8(a, b) simde_mm_max_epu8(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d simde_mm_max_pd(simde__m128d a, simde__m128d b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_max_pd(a, b);
+#else
+ simde__m128d_private r_, a_ = simde__m128d_to_private(a),
+ b_ = simde__m128d_to_private(b);
+
+#if defined(SIMDE_POWER_ALTIVEC_P7_NATIVE)
+ r_.altivec_f64 = vec_max(a_.altivec_f64, b_.altivec_f64);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_f64x2_max(a_.wasm_v128, b_.wasm_v128);
+#elif defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ r_.neon_f64 = vmaxq_f64(a_.neon_f64, b_.neon_f64);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.f64) / sizeof(r_.f64[0])); i++) {
+ r_.f64[i] = (a_.f64[i] > b_.f64[i]) ? a_.f64[i] : b_.f64[i];
+ }
+#endif
+
+ return simde__m128d_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_max_pd(a, b) simde_mm_max_pd(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d simde_mm_max_sd(simde__m128d a, simde__m128d b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_max_sd(a, b);
+#elif (SIMDE_NATURAL_VECTOR_SIZE > 0)
+ return simde_mm_move_sd(a, simde_mm_max_pd(a, b));
+#else
+ simde__m128d_private r_, a_ = simde__m128d_to_private(a),
+ b_ = simde__m128d_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ float64x2_t temp = vmaxq_f64(a_.neon_f64, b_.neon_f64);
+ r_.neon_f64 = vsetq_lane_f64(vgetq_lane_f64(a_.neon_f64, 1), temp, 1);
+#else
+ r_.f64[0] = (a_.f64[0] > b_.f64[0]) ? a_.f64[0] : b_.f64[0];
+ r_.f64[1] = a_.f64[1];
+#endif
+
+ return simde__m128d_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_max_sd(a, b) simde_mm_max_sd(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_move_epi64(simde__m128i a)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_move_epi64(a);
+#else
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i64 = vsetq_lane_s64(0, a_.neon_i64, 1);
+#else
+ r_.i64[0] = a_.i64[0];
+ r_.i64[1] = 0;
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_move_epi64(a) simde_mm_move_epi64(a)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_mul_epu32(simde__m128i a, simde__m128i b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_mul_epu32(a, b);
+#else
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a),
+ b_ = simde__m128i_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ uint32x2_t a_lo = vmovn_u64(a_.neon_u64);
+ uint32x2_t b_lo = vmovn_u64(b_.neon_u64);
+ r_.neon_u64 = vmull_u32(a_lo, b_lo);
+#elif defined(SIMDE_SHUFFLE_VECTOR_) && \
+ (SIMDE_ENDIAN_ORDER == SIMDE_ENDIAN_LITTLE)
+ __typeof__(a_.u32) z = {
+ 0,
+ };
+ a_.u32 = SIMDE_SHUFFLE_VECTOR_(32, 16, a_.u32, z, 0, 4, 2, 6);
+ b_.u32 = SIMDE_SHUFFLE_VECTOR_(32, 16, b_.u32, z, 0, 4, 2, 6);
+ r_.u64 = HEDLEY_REINTERPRET_CAST(__typeof__(r_.u64), a_.u32) *
+ HEDLEY_REINTERPRET_CAST(__typeof__(r_.u64), b_.u32);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.u64) / sizeof(r_.u64[0])); i++) {
+ r_.u64[i] = HEDLEY_STATIC_CAST(uint64_t, a_.u32[i * 2]) *
+ HEDLEY_STATIC_CAST(uint64_t, b_.u32[i * 2]);
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_mul_epu32(a, b) simde_mm_mul_epu32(a, b)
+#endif
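
For reference, a sketch (again assuming <simde/x86/sse2.h>) of the widening behaviour above: only the even (index 0 and 2) unsigned 32-bit lanes are multiplied, and each product is kept at full 64-bit width.

#include <assert.h>
#include <stdint.h>
#include <simde/x86/sse2.h>

int main(void)
{
    int32_t a[4] = {3, 7, 5, 9};
    int32_t b[4] = {1000000000, 0, 1000000000, 0};

    simde__m128i prod = simde_mm_mul_epu32(simde_x_mm_loadu_epi32(a),
                                           simde_x_mm_loadu_epi32(b));

    /* Lane 0 holds 3 * 1000000000, which would not fit in 32 bits. */
    assert(simde_mm_cvtsi128_si64(prod) == INT64_C(3000000000));
    return 0;
}
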
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_x_mm_mul_epi64(simde__m128i a, simde__m128i b)
+{
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a),
+ b_ = simde__m128i_to_private(b);
+
+#if defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
+ r_.i64 = a_.i64 * b_.i64;
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i64) / sizeof(r_.i64[0])); i++) {
+ r_.i64[i] = a_.i64[i] * b_.i64[i];
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+}
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_x_mm_mod_epi64(simde__m128i a, simde__m128i b)
+{
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a),
+ b_ = simde__m128i_to_private(b);
+
+#if defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
+ r_.i64 = a_.i64 % b_.i64;
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i64) / sizeof(r_.i64[0])); i++) {
+ r_.i64[i] = a_.i64[i] % b_.i64[i];
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+}
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d simde_mm_mul_pd(simde__m128d a, simde__m128d b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_mul_pd(a, b);
+#else
+ simde__m128d_private r_, a_ = simde__m128d_to_private(a),
+ b_ = simde__m128d_to_private(b);
+
+#if defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
+ r_.f64 = a_.f64 * b_.f64;
+#elif defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ r_.neon_f64 = vmulq_f64(a_.neon_f64, b_.neon_f64);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_f64x2_mul(a_.wasm_v128, b_.wasm_v128);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.f64) / sizeof(r_.f64[0])); i++) {
+ r_.f64[i] = a_.f64[i] * b_.f64[i];
+ }
+#endif
+
+ return simde__m128d_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_mul_pd(a, b) simde_mm_mul_pd(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d simde_mm_mul_sd(simde__m128d a, simde__m128d b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_mul_sd(a, b);
+#elif (SIMDE_NATURAL_VECTOR_SIZE > 0)
+ return simde_mm_move_sd(a, simde_mm_mul_pd(a, b));
+#else
+ simde__m128d_private r_, a_ = simde__m128d_to_private(a),
+ b_ = simde__m128d_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ float64x2_t temp = vmulq_f64(a_.neon_f64, b_.neon_f64);
+ r_.neon_f64 = vsetq_lane_f64(vgetq_lane_f64(a_.neon_f64, 1), temp, 1);
+#else
+ r_.f64[0] = a_.f64[0] * b_.f64[0];
+ r_.f64[1] = a_.f64[1];
+#endif
+
+ return simde__m128d_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_mul_sd(a, b) simde_mm_mul_sd(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_mul_su32(simde__m64 a, simde__m64 b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE) && defined(SIMDE_X86_MMX_NATIVE) && \
+ !defined(__PGI)
+ return _mm_mul_su32(a, b);
+#else
+ simde__m64_private r_, a_ = simde__m64_to_private(a),
+ b_ = simde__m64_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.u64[0] = vget_lane_u64(
+ vget_low_u64(vmull_u32(vreinterpret_u32_s64(a_.neon_i64),
+ vreinterpret_u32_s64(b_.neon_i64))),
+ 0);
+#else
+ r_.u64[0] = HEDLEY_STATIC_CAST(uint64_t, a_.u32[0]) *
+ HEDLEY_STATIC_CAST(uint64_t, b_.u32[0]);
+#endif
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_mul_su32(a, b) simde_mm_mul_su32(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_mulhi_epi16(simde__m128i a, simde__m128i b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_mulhi_epi16(a, b);
+#else
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a),
+ b_ = simde__m128i_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ int16x4_t a3210 = vget_low_s16(a_.neon_i16);
+ int16x4_t b3210 = vget_low_s16(b_.neon_i16);
+ int32x4_t ab3210 = vmull_s16(a3210, b3210); /* 3333222211110000 */
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ int32x4_t ab7654 = vmull_high_s16(a_.neon_i16, b_.neon_i16);
+ r_.neon_i16 = vuzp2q_s16(vreinterpretq_s16_s32(ab3210),
+ vreinterpretq_s16_s32(ab7654));
+#else
+ int16x4_t a7654 = vget_high_s16(a_.neon_i16);
+ int16x4_t b7654 = vget_high_s16(b_.neon_i16);
+ int32x4_t ab7654 = vmull_s16(a7654, b7654); /* 7777666655554444 */
+ uint16x8x2_t rv = vuzpq_u16(vreinterpretq_u16_s32(ab3210),
+ vreinterpretq_u16_s32(ab7654));
+ r_.neon_u16 = rv.val[1];
+#endif
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i16) / sizeof(r_.i16[0])); i++) {
+ r_.u16[i] = HEDLEY_STATIC_CAST(
+ uint16_t,
+ (HEDLEY_STATIC_CAST(
+ uint32_t,
+ HEDLEY_STATIC_CAST(int32_t, a_.i16[i]) *
+ HEDLEY_STATIC_CAST(int32_t,
+ b_.i16[i])) >>
+ 16));
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_mulhi_epi16(a, b) simde_mm_mulhi_epi16(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_mulhi_epu16(simde__m128i a, simde__m128i b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE) && !defined(__PGI)
+ return _mm_mulhi_epu16(a, b);
+#else
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a),
+ b_ = simde__m128i_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ uint16x4_t a3210 = vget_low_u16(a_.neon_u16);
+ uint16x4_t b3210 = vget_low_u16(b_.neon_u16);
+ uint32x4_t ab3210 = vmull_u16(a3210, b3210); /* 3333222211110000 */
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ uint32x4_t ab7654 = vmull_high_u16(a_.neon_u16, b_.neon_u16);
+ r_.neon_u16 = vuzp2q_u16(vreinterpretq_u16_u32(ab3210),
+ vreinterpretq_u16_u32(ab7654));
+#else
+ uint16x4_t a7654 = vget_high_u16(a_.neon_u16);
+ uint16x4_t b7654 = vget_high_u16(b_.neon_u16);
+ uint32x4_t ab7654 = vmull_u16(a7654, b7654); /* 7777666655554444 */
+ uint16x8x2_t neon_r = vuzpq_u16(vreinterpretq_u16_u32(ab3210),
+ vreinterpretq_u16_u32(ab7654));
+ r_.neon_u16 = neon_r.val[1];
+#endif
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.u16) / sizeof(r_.u16[0])); i++) {
+ r_.u16[i] = HEDLEY_STATIC_CAST(
+ uint16_t,
+ HEDLEY_STATIC_CAST(uint32_t, a_.u16[i]) *
+ HEDLEY_STATIC_CAST(uint32_t,
+ b_.u16[i]) >>
+ 16);
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_mulhi_epu16(a, b) simde_mm_mulhi_epu16(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_mullo_epi16(simde__m128i a, simde__m128i b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_mullo_epi16(a, b);
+#else
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a),
+ b_ = simde__m128i_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i16 = vmulq_s16(a_.neon_i16, b_.neon_i16);
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ (void)a_;
+ (void)b_;
+ r_.altivec_i16 = vec_mul(a_.altivec_i16, b_.altivec_i16);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i16) / sizeof(r_.i16[0])); i++) {
+ r_.u16[i] = HEDLEY_STATIC_CAST(
+ uint16_t,
+ HEDLEY_STATIC_CAST(uint32_t, a_.u16[i]) *
+ HEDLEY_STATIC_CAST(uint32_t, b_.u16[i]));
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_mullo_epi16(a, b) simde_mm_mullo_epi16(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d simde_mm_or_pd(simde__m128d a, simde__m128d b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_or_pd(a, b);
+#else
+ simde__m128d_private r_, a_ = simde__m128d_to_private(a),
+ b_ = simde__m128d_to_private(b);
+
+#if defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
+ r_.i32f = a_.i32f | b_.i32f;
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_v128_or(a_.wasm_v128, b_.wasm_v128);
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i64 = vorrq_s64(a_.neon_i64, b_.neon_i64);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i32f) / sizeof(r_.i32f[0])); i++) {
+ r_.i32f[i] = a_.i32f[i] | b_.i32f[i];
+ }
+#endif
+
+ return simde__m128d_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_or_pd(a, b) simde_mm_or_pd(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_or_si128(simde__m128i a, simde__m128i b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_or_si128(a, b);
+#else
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a),
+ b_ = simde__m128i_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i32 = vorrq_s32(a_.neon_i32, b_.neon_i32);
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ r_.altivec_i32 = vec_or(a_.altivec_i32, b_.altivec_i32);
+#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
+ r_.i32f = a_.i32f | b_.i32f;
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i32f) / sizeof(r_.i32f[0])); i++) {
+ r_.i32f[i] = a_.i32f[i] | b_.i32f[i];
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_or_si128(a, b) simde_mm_or_si128(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_packs_epi16(simde__m128i a, simde__m128i b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_packs_epi16(a, b);
+#else
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a),
+ b_ = simde__m128i_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i8 =
+ vcombine_s8(vqmovn_s16(a_.neon_i16), vqmovn_s16(b_.neon_i16));
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i16) / sizeof(r_.i16[0])); i++) {
+ r_.i8[i] = (a_.i16[i] > INT8_MAX)
+ ? INT8_MAX
+ : ((a_.i16[i] < INT8_MIN)
+ ? INT8_MIN
+ : HEDLEY_STATIC_CAST(int8_t,
+ a_.i16[i]));
+ r_.i8[i + 8] = (b_.i16[i] > INT8_MAX)
+ ? INT8_MAX
+ : ((b_.i16[i] < INT8_MIN)
+ ? INT8_MIN
+ : HEDLEY_STATIC_CAST(
+ int8_t, b_.i16[i]));
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_packs_epi16(a, b) simde_mm_packs_epi16(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_packs_epi32(simde__m128i a, simde__m128i b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_packs_epi32(a, b);
+#else
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a),
+ b_ = simde__m128i_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i16 =
+ vcombine_s16(vqmovn_s32(a_.neon_i32), vqmovn_s32(b_.neon_i32));
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ r_.altivec_i16 = vec_packs(a_.altivec_i32, b_.altivec_i32);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i32) / sizeof(r_.i32[0])); i++) {
+ r_.i16[i] = (a_.i32[i] > INT16_MAX)
+ ? INT16_MAX
+ : ((a_.i32[i] < INT16_MIN)
+ ? INT16_MIN
+ : HEDLEY_STATIC_CAST(int16_t,
+ a_.i32[i]));
+ r_.i16[i + 4] =
+ (b_.i32[i] > INT16_MAX)
+ ? INT16_MAX
+ : ((b_.i32[i] < INT16_MIN)
+ ? INT16_MIN
+ : HEDLEY_STATIC_CAST(int16_t,
+ b_.i32[i]));
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_packs_epi32(a, b) simde_mm_packs_epi32(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_packus_epi16(simde__m128i a, simde__m128i b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_packus_epi16(a, b);
+#else
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a),
+ b_ = simde__m128i_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_u8 =
+ vcombine_u8(vqmovun_s16(a_.neon_i16), vqmovun_s16(b_.neon_i16));
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ r_.altivec_u8 = vec_packsu(a_.altivec_i16, b_.altivec_i16);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i16) / sizeof(r_.i16[0])); i++) {
+ r_.u8[i] = (a_.i16[i] > UINT8_MAX)
+ ? UINT8_MAX
+ : ((a_.i16[i] < 0)
+ ? UINT8_C(0)
+ : HEDLEY_STATIC_CAST(uint8_t,
+ a_.i16[i]));
+ r_.u8[i + 8] =
+ (b_.i16[i] > UINT8_MAX)
+ ? UINT8_MAX
+ : ((b_.i16[i] < 0)
+ ? UINT8_C(0)
+ : HEDLEY_STATIC_CAST(uint8_t,
+ b_.i16[i]));
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_packus_epi16(a, b) simde_mm_packus_epi16(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+void simde_mm_pause(void)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ _mm_pause();
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_pause() (simde_mm_pause())
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_sad_epu8(simde__m128i a, simde__m128i b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_sad_epu8(a, b);
+#else
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a),
+ b_ = simde__m128i_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ const uint16x8_t t = vpaddlq_u8(vabdq_u8(a_.neon_u8, b_.neon_u8));
+ r_.neon_u64 = vcombine_u64(vpaddl_u32(vpaddl_u16(vget_low_u16(t))),
+ vpaddl_u32(vpaddl_u16(vget_high_u16(t))));
+#else
+ for (size_t i = 0; i < (sizeof(r_.i64) / sizeof(r_.i64[0])); i++) {
+ uint16_t tmp = 0;
+ SIMDE_VECTORIZE_REDUCTION(+ : tmp)
+ for (size_t j = 0; j < ((sizeof(r_.u8) / sizeof(r_.u8[0])) / 2);
+ j++) {
+ const size_t e = j + (i * 8);
+ tmp += (a_.u8[e] > b_.u8[e]) ? (a_.u8[e] - b_.u8[e])
+ : (b_.u8[e] - a_.u8[e]);
+ }
+ r_.i64[i] = tmp;
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_sad_epu8(a, b) simde_mm_sad_epu8(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_set_epi8(int8_t e15, int8_t e14, int8_t e13, int8_t e12,
+ int8_t e11, int8_t e10, int8_t e9, int8_t e8,
+ int8_t e7, int8_t e6, int8_t e5, int8_t e4,
+ int8_t e3, int8_t e2, int8_t e1, int8_t e0)
+{
+
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_set_epi8(e15, e14, e13, e12, e11, e10, e9, e8, e7, e6, e5,
+ e4, e3, e2, e1, e0);
+#else
+ simde__m128i_private r_;
+
+#if defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_i8x16_make(e0, e1, e2, e3, e4, e5, e6, e7, e8, e9,
+ e10, e11, e12, e13, e14, e15);
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ SIMDE_ALIGN_LIKE_16(int8x16_t)
+ int8_t data[16] = {e0, e1, e2, e3, e4, e5, e6, e7,
+ e8, e9, e10, e11, e12, e13, e14, e15};
+ r_.neon_i8 = vld1q_s8(data);
+#else
+ r_.i8[0] = e0;
+ r_.i8[1] = e1;
+ r_.i8[2] = e2;
+ r_.i8[3] = e3;
+ r_.i8[4] = e4;
+ r_.i8[5] = e5;
+ r_.i8[6] = e6;
+ r_.i8[7] = e7;
+ r_.i8[8] = e8;
+ r_.i8[9] = e9;
+ r_.i8[10] = e10;
+ r_.i8[11] = e11;
+ r_.i8[12] = e12;
+ r_.i8[13] = e13;
+ r_.i8[14] = e14;
+ r_.i8[15] = e15;
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_set_epi8(e15, e14, e13, e12, e11, e10, e9, e8, e7, e6, e5, e4, e3, \
+ e2, e1, e0) \
+ simde_mm_set_epi8(e15, e14, e13, e12, e11, e10, e9, e8, e7, e6, e5, \
+ e4, e3, e2, e1, e0)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_set_epi16(int16_t e7, int16_t e6, int16_t e5, int16_t e4,
+ int16_t e3, int16_t e2, int16_t e1, int16_t e0)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_set_epi16(e7, e6, e5, e4, e3, e2, e1, e0);
+#else
+ simde__m128i_private r_;
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ SIMDE_ALIGN_LIKE_16(int16x8_t)
+ int16_t data[8] = {e0, e1, e2, e3, e4, e5, e6, e7};
+ r_.neon_i16 = vld1q_s16(data);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_i16x8_make(e0, e1, e2, e3, e4, e5, e6, e7);
+#else
+ r_.i16[0] = e0;
+ r_.i16[1] = e1;
+ r_.i16[2] = e2;
+ r_.i16[3] = e3;
+ r_.i16[4] = e4;
+ r_.i16[5] = e5;
+ r_.i16[6] = e6;
+ r_.i16[7] = e7;
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_set_epi16(e7, e6, e5, e4, e3, e2, e1, e0) \
+ simde_mm_set_epi16(e7, e6, e5, e4, e3, e2, e1, e0)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_loadu_si16(void const *mem_addr)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE) && \
+ (SIMDE_DETECT_CLANG_VERSION_CHECK(8, 0, 0) || \
+ HEDLEY_GCC_VERSION_CHECK(11, 0, 0) || \
+ HEDLEY_INTEL_VERSION_CHECK(20, 21, 1))
+ return _mm_loadu_si16(mem_addr);
+#else
+ int16_t val;
+ simde_memcpy(&val, mem_addr, sizeof(val));
+ return simde_x_mm_cvtsi16_si128(val);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_loadu_si16(mem_addr) simde_mm_loadu_si16(mem_addr)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_set_epi32(int32_t e3, int32_t e2, int32_t e1, int32_t e0)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_set_epi32(e3, e2, e1, e0);
+#else
+ simde__m128i_private r_;
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ SIMDE_ALIGN_LIKE_16(int32x4_t) int32_t data[4] = {e0, e1, e2, e3};
+ r_.neon_i32 = vld1q_s32(data);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_i32x4_make(e0, e1, e2, e3);
+#else
+ r_.i32[0] = e0;
+ r_.i32[1] = e1;
+ r_.i32[2] = e2;
+ r_.i32[3] = e3;
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_set_epi32(e3, e2, e1, e0) simde_mm_set_epi32(e3, e2, e1, e0)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_loadu_si32(void const *mem_addr)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE) && \
+ (SIMDE_DETECT_CLANG_VERSION_CHECK(8, 0, 0) || \
+ HEDLEY_GCC_VERSION_CHECK(11, 0, 0) || \
+ HEDLEY_INTEL_VERSION_CHECK(20, 21, 1))
+ return _mm_loadu_si32(mem_addr);
+#else
+ int32_t val;
+ simde_memcpy(&val, mem_addr, sizeof(val));
+ return simde_mm_cvtsi32_si128(val);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_loadu_si32(mem_addr) simde_mm_loadu_si32(mem_addr)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_set_epi64(simde__m64 e1, simde__m64 e0)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE) && defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_set_epi64(e1, e0);
+#else
+ simde__m128i_private r_;
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i64 = vcombine_s64(simde__m64_to_neon_i64(e0),
+ simde__m64_to_neon_i64(e1));
+#else
+ r_.m64[0] = e0;
+ r_.m64[1] = e1;
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_set_epi64(e1, e0) (simde_mm_set_epi64((e1), (e0)))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_set_epi64x(int64_t e1, int64_t e0)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE) && \
+ (!defined(HEDLEY_MSVC_VERSION) || HEDLEY_MSVC_VERSION_CHECK(19, 0, 0))
+ return _mm_set_epi64x(e1, e0);
+#else
+ simde__m128i_private r_;
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ SIMDE_ALIGN_LIKE_16(int64x2_t) int64_t data[2] = {e0, e1};
+ r_.neon_i64 = vld1q_s64(data);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_i64x2_make(e0, e1);
+#else
+ r_.i64[0] = e0;
+ r_.i64[1] = e1;
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_set_epi64x(e1, e0) simde_mm_set_epi64x(e1, e0)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_loadu_si64(void const *mem_addr)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE) && \
+ (SIMDE_DETECT_CLANG_VERSION_CHECK(8, 0, 0) || \
+ HEDLEY_GCC_VERSION_CHECK(11, 0, 0) || \
+ HEDLEY_INTEL_VERSION_CHECK(20, 21, 1))
+ return _mm_loadu_si64(mem_addr);
+#else
+ int64_t val;
+ simde_memcpy(&val, mem_addr, sizeof(val));
+ return simde_mm_cvtsi64_si128(val);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_loadu_si64(mem_addr) simde_mm_loadu_si64(mem_addr)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_x_mm_set_epu8(uint8_t e15, uint8_t e14, uint8_t e13,
+ uint8_t e12, uint8_t e11, uint8_t e10,
+ uint8_t e9, uint8_t e8, uint8_t e7, uint8_t e6,
+ uint8_t e5, uint8_t e4, uint8_t e3, uint8_t e2,
+ uint8_t e1, uint8_t e0)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_set_epi8(
+ HEDLEY_STATIC_CAST(char, e15), HEDLEY_STATIC_CAST(char, e14),
+ HEDLEY_STATIC_CAST(char, e13), HEDLEY_STATIC_CAST(char, e12),
+ HEDLEY_STATIC_CAST(char, e11), HEDLEY_STATIC_CAST(char, e10),
+ HEDLEY_STATIC_CAST(char, e9), HEDLEY_STATIC_CAST(char, e8),
+ HEDLEY_STATIC_CAST(char, e7), HEDLEY_STATIC_CAST(char, e6),
+ HEDLEY_STATIC_CAST(char, e5), HEDLEY_STATIC_CAST(char, e4),
+ HEDLEY_STATIC_CAST(char, e3), HEDLEY_STATIC_CAST(char, e2),
+ HEDLEY_STATIC_CAST(char, e1), HEDLEY_STATIC_CAST(char, e0));
+#else
+ simde__m128i_private r_;
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ SIMDE_ALIGN_LIKE_16(uint8x16_t)
+ uint8_t data[16] = {e0, e1, e2, e3, e4, e5, e6, e7,
+ e8, e9, e10, e11, e12, e13, e14, e15};
+ r_.neon_u8 = vld1q_u8(data);
+#else
+ r_.u8[0] = e0;
+ r_.u8[1] = e1;
+ r_.u8[2] = e2;
+ r_.u8[3] = e3;
+ r_.u8[4] = e4;
+ r_.u8[5] = e5;
+ r_.u8[6] = e6;
+ r_.u8[7] = e7;
+ r_.u8[8] = e8;
+ r_.u8[9] = e9;
+ r_.u8[10] = e10;
+ r_.u8[11] = e11;
+ r_.u8[12] = e12;
+ r_.u8[13] = e13;
+ r_.u8[14] = e14;
+ r_.u8[15] = e15;
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_x_mm_set_epu16(uint16_t e7, uint16_t e6, uint16_t e5,
+ uint16_t e4, uint16_t e3, uint16_t e2,
+ uint16_t e1, uint16_t e0)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_set_epi16(
+ HEDLEY_STATIC_CAST(short, e7), HEDLEY_STATIC_CAST(short, e6),
+ HEDLEY_STATIC_CAST(short, e5), HEDLEY_STATIC_CAST(short, e4),
+ HEDLEY_STATIC_CAST(short, e3), HEDLEY_STATIC_CAST(short, e2),
+ HEDLEY_STATIC_CAST(short, e1), HEDLEY_STATIC_CAST(short, e0));
+#else
+ simde__m128i_private r_;
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ SIMDE_ALIGN_LIKE_16(uint16x8_t)
+ uint16_t data[8] = {e0, e1, e2, e3, e4, e5, e6, e7};
+ r_.neon_u16 = vld1q_u16(data);
+#else
+ r_.u16[0] = e0;
+ r_.u16[1] = e1;
+ r_.u16[2] = e2;
+ r_.u16[3] = e3;
+ r_.u16[4] = e4;
+ r_.u16[5] = e5;
+ r_.u16[6] = e6;
+ r_.u16[7] = e7;
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_x_mm_set_epu32(uint32_t e3, uint32_t e2, uint32_t e1,
+ uint32_t e0)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_set_epi32(HEDLEY_STATIC_CAST(int, e3),
+ HEDLEY_STATIC_CAST(int, e2),
+ HEDLEY_STATIC_CAST(int, e1),
+ HEDLEY_STATIC_CAST(int, e0));
+#else
+ simde__m128i_private r_;
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ SIMDE_ALIGN_LIKE_16(uint32x4_t) uint32_t data[4] = {e0, e1, e2, e3};
+ r_.neon_u32 = vld1q_u32(data);
+#else
+ r_.u32[0] = e0;
+ r_.u32[1] = e1;
+ r_.u32[2] = e2;
+ r_.u32[3] = e3;
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_x_mm_set_epu64x(uint64_t e1, uint64_t e0)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE) && \
+ (!defined(HEDLEY_MSVC_VERSION) || HEDLEY_MSVC_VERSION_CHECK(19, 0, 0))
+ return _mm_set_epi64x(HEDLEY_STATIC_CAST(int64_t, e1),
+ HEDLEY_STATIC_CAST(int64_t, e0));
+#else
+ simde__m128i_private r_;
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ SIMDE_ALIGN_LIKE_16(uint64x2_t) uint64_t data[2] = {e0, e1};
+ r_.neon_u64 = vld1q_u64(data);
+#else
+ r_.u64[0] = e0;
+ r_.u64[1] = e1;
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d simde_mm_set_sd(simde_float64 a)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_set_sd(a);
+#elif defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ return vsetq_lane_f64(a, vdupq_n_f64(SIMDE_FLOAT64_C(0.0)), 0);
+#else
+ return simde_mm_set_pd(SIMDE_FLOAT64_C(0.0), a);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_set_sd(a) simde_mm_set_sd(a)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_set1_epi8(int8_t a)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_set1_epi8(a);
+#else
+ simde__m128i_private r_;
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i8 = vdupq_n_s8(a);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_i8x16_splat(a);
+#elif defined(SIMDE_POWER_ALTIVEC_P8_NATIVE)
+ r_.altivec_i8 = vec_splats(HEDLEY_STATIC_CAST(signed char, a));
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i8) / sizeof(r_.i8[0])); i++) {
+ r_.i8[i] = a;
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_set1_epi8(a) simde_mm_set1_epi8(a)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_set1_epi16(int16_t a)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_set1_epi16(a);
+#else
+ simde__m128i_private r_;
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i16 = vdupq_n_s16(a);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_i16x8_splat(a);
+#elif defined(SIMDE_POWER_ALTIVEC_P8_NATIVE)
+ r_.altivec_i16 = vec_splats(HEDLEY_STATIC_CAST(signed short, a));
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i16) / sizeof(r_.i16[0])); i++) {
+ r_.i16[i] = a;
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_set1_epi16(a) simde_mm_set1_epi16(a)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_set1_epi32(int32_t a)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_set1_epi32(a);
+#else
+ simde__m128i_private r_;
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i32 = vdupq_n_s32(a);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_i32x4_splat(a);
+#elif defined(SIMDE_POWER_ALTIVEC_P8_NATIVE)
+ r_.altivec_i32 = vec_splats(HEDLEY_STATIC_CAST(signed int, a));
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i32) / sizeof(r_.i32[0])); i++) {
+ r_.i32[i] = a;
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_set1_epi32(a) simde_mm_set1_epi32(a)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_set1_epi64x(int64_t a)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE) && \
+ (!defined(HEDLEY_MSVC_VERSION) || HEDLEY_MSVC_VERSION_CHECK(19, 0, 0))
+ return _mm_set1_epi64x(a);
+#else
+ simde__m128i_private r_;
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i64 = vdupq_n_s64(a);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_i64x2_splat(a);
+#elif defined(SIMDE_POWER_ALTIVEC_P8_NATIVE)
+ r_.altivec_i64 = vec_splats(HEDLEY_STATIC_CAST(signed long long, a));
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i64) / sizeof(r_.i64[0])); i++) {
+ r_.i64[i] = a;
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_set1_epi64x(a) simde_mm_set1_epi64x(a)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_set1_epi64(simde__m64 a)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE) && defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_set1_epi64(a);
+#else
+ simde__m64_private a_ = simde__m64_to_private(a);
+ return simde_mm_set1_epi64x(a_.i64[0]);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_set1_epi64(a) simde_mm_set1_epi64(a)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_x_mm_set1_epu8(uint8_t value)
+{
+#if defined(SIMDE_POWER_ALTIVEC_P8_NATIVE)
+ return simde__m128i_from_altivec_u8(
+ vec_splats(HEDLEY_STATIC_CAST(unsigned char, value)));
+#else
+ return simde_mm_set1_epi8(HEDLEY_STATIC_CAST(int8_t, value));
+#endif
+}
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_x_mm_set1_epu16(uint16_t value)
+{
+#if defined(SIMDE_POWER_ALTIVEC_P8_NATIVE)
+ return simde__m128i_from_altivec_u16(
+ vec_splats(HEDLEY_STATIC_CAST(unsigned short, value)));
+#else
+ return simde_mm_set1_epi16(HEDLEY_STATIC_CAST(int16_t, value));
+#endif
+}
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_x_mm_set1_epu32(uint32_t value)
+{
+#if defined(SIMDE_POWER_ALTIVEC_P8_NATIVE)
+ return simde__m128i_from_altivec_u32(
+ vec_splats(HEDLEY_STATIC_CAST(unsigned int, value)));
+#else
+ return simde_mm_set1_epi32(HEDLEY_STATIC_CAST(int32_t, value));
+#endif
+}
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_x_mm_set1_epu64(uint64_t value)
+{
+#if defined(SIMDE_POWER_ALTIVEC_P8_NATIVE)
+ return simde__m128i_from_altivec_u64(
+ vec_splats(HEDLEY_STATIC_CAST(unsigned long long, value)));
+#else
+ return simde_mm_set1_epi64x(HEDLEY_STATIC_CAST(int64_t, value));
+#endif
+}
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_setr_epi8(int8_t e15, int8_t e14, int8_t e13, int8_t e12,
+ int8_t e11, int8_t e10, int8_t e9, int8_t e8,
+ int8_t e7, int8_t e6, int8_t e5, int8_t e4,
+ int8_t e3, int8_t e2, int8_t e1, int8_t e0)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_setr_epi8(e15, e14, e13, e12, e11, e10, e9, e8, e7, e6, e5,
+ e4, e3, e2, e1, e0);
+#else
+ return simde_mm_set_epi8(e0, e1, e2, e3, e4, e5, e6, e7, e8, e9, e10,
+ e11, e12, e13, e14, e15);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_setr_epi8(e15, e14, e13, e12, e11, e10, e9, e8, e7, e6, e5, e4, \
+ e3, e2, e1, e0) \
+ simde_mm_setr_epi8(e15, e14, e13, e12, e11, e10, e9, e8, e7, e6, e5, \
+ e4, e3, e2, e1, e0)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_setr_epi16(int16_t e7, int16_t e6, int16_t e5, int16_t e4,
+ int16_t e3, int16_t e2, int16_t e1, int16_t e0)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_setr_epi16(e7, e6, e5, e4, e3, e2, e1, e0);
+#else
+ return simde_mm_set_epi16(e0, e1, e2, e3, e4, e5, e6, e7);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_setr_epi16(e7, e6, e5, e4, e3, e2, e1, e0) \
+ simde_mm_setr_epi16(e7, e6, e5, e4, e3, e2, e1, e0)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_setr_epi32(int32_t e3, int32_t e2, int32_t e1, int32_t e0)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_setr_epi32(e3, e2, e1, e0);
+#else
+ return simde_mm_set_epi32(e0, e1, e2, e3);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_setr_epi32(e3, e2, e1, e0) simde_mm_setr_epi32(e3, e2, e1, e0)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_setr_epi64(simde__m64 e1, simde__m64 e0)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE) && defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_setr_epi64(e1, e0);
+#else
+ return simde_mm_set_epi64(e0, e1);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_setr_epi64(e1, e0) (simde_mm_setr_epi64((e1), (e0)))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d simde_mm_setr_pd(simde_float64 e1, simde_float64 e0)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_setr_pd(e1, e0);
+#else
+ return simde_mm_set_pd(e0, e1);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_setr_pd(e1, e0) simde_mm_setr_pd(e1, e0)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d simde_mm_setzero_pd(void)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_setzero_pd();
+#else
+ return simde_mm_castsi128_pd(simde_mm_setzero_si128());
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_setzero_pd() simde_mm_setzero_pd()
+#endif
+
+#if defined(SIMDE_DIAGNOSTIC_DISABLE_UNINITIALIZED_)
+HEDLEY_DIAGNOSTIC_PUSH
+SIMDE_DIAGNOSTIC_DISABLE_UNINITIALIZED_
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d simde_mm_undefined_pd(void)
+{
+ simde__m128d_private r_;
+
+#if defined(SIMDE_X86_SSE2_NATIVE) && defined(SIMDE__HAVE_UNDEFINED128)
+ r_.n = _mm_undefined_pd();
+#elif !defined(SIMDE_DIAGNOSTIC_DISABLE_UNINITIALIZED_)
+ r_ = simde__m128d_to_private(simde_mm_setzero_pd());
+#endif
+
+ return simde__m128d_from_private(r_);
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_undefined_pd() simde_mm_undefined_pd()
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_undefined_si128(void)
+{
+ simde__m128i_private r_;
+
+#if defined(SIMDE_X86_SSE2_NATIVE) && defined(SIMDE__HAVE_UNDEFINED128)
+ r_.n = _mm_undefined_si128();
+#elif !defined(SIMDE_DIAGNOSTIC_DISABLE_UNINITIALIZED_)
+ r_ = simde__m128i_to_private(simde_mm_setzero_si128());
+#endif
+
+ return simde__m128i_from_private(r_);
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_undefined_si128() (simde_mm_undefined_si128())
+#endif
+
+#if defined(SIMDE_DIAGNOSTIC_DISABLE_UNINITIALIZED_)
+HEDLEY_DIAGNOSTIC_POP
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d simde_x_mm_setone_pd(void)
+{
+ return simde_mm_castps_pd(simde_x_mm_setone_ps());
+}
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_x_mm_setone_si128(void)
+{
+ return simde_mm_castps_si128(simde_x_mm_setone_ps());
+}
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_shuffle_epi32(simde__m128i a, const int imm8)
+ SIMDE_REQUIRE_CONSTANT_RANGE(imm8, 0, 255)
+{
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a);
+
+ for (size_t i = 0; i < (sizeof(r_.i32) / sizeof(r_.i32[0])); i++) {
+ r_.i32[i] = a_.i32[(imm8 >> (i * 2)) & 3];
+ }
+
+ return simde__m128i_from_private(r_);
+}
+#if defined(SIMDE_X86_SSE2_NATIVE)
+#define simde_mm_shuffle_epi32(a, imm8) _mm_shuffle_epi32((a), (imm8))
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+#define simde_mm_shuffle_epi32(a, imm8) \
+ __extension__({ \
+ int32x4_t ret; \
+ ret = vmovq_n_s32(vgetq_lane_s32(vreinterpretq_s32_s64(a), \
+ (imm8) & (0x3))); \
+ ret = vsetq_lane_s32(vgetq_lane_s32(vreinterpretq_s32_s64(a), \
+ ((imm8) >> 2) & 0x3), \
+ ret, 1); \
+ ret = vsetq_lane_s32(vgetq_lane_s32(vreinterpretq_s32_s64(a), \
+ ((imm8) >> 4) & 0x3), \
+ ret, 2); \
+ ret = vsetq_lane_s32(vgetq_lane_s32(vreinterpretq_s32_s64(a), \
+ ((imm8) >> 6) & 0x3), \
+ ret, 3); \
+ vreinterpretq_s64_s32(ret); \
+ })
+#elif defined(SIMDE_SHUFFLE_VECTOR_)
+#define simde_mm_shuffle_epi32(a, imm8) \
+ (__extension__({ \
+ const simde__m128i_private simde__tmp_a_ = \
+ simde__m128i_to_private(a); \
+ simde__m128i_from_private((simde__m128i_private){ \
+ .i32 = SIMDE_SHUFFLE_VECTOR_( \
+ 32, 16, (simde__tmp_a_).i32, \
+ (simde__tmp_a_).i32, ((imm8)) & 3, \
+ ((imm8) >> 2) & 3, ((imm8) >> 4) & 3, \
+ ((imm8) >> 6) & 3)}); \
+ }))
+#endif
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_shuffle_epi32(a, imm8) simde_mm_shuffle_epi32(a, imm8)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d simde_mm_shuffle_pd(simde__m128d a, simde__m128d b, const int imm8)
+ SIMDE_REQUIRE_CONSTANT_RANGE(imm8, 0, 3)
+{
+ simde__m128d_private r_, a_ = simde__m128d_to_private(a),
+ b_ = simde__m128d_to_private(b);
+
+ r_.f64[0] = ((imm8 & 1) == 0) ? a_.f64[0] : a_.f64[1];
+ r_.f64[1] = ((imm8 & 2) == 0) ? b_.f64[0] : b_.f64[1];
+
+ return simde__m128d_from_private(r_);
+}
+#if defined(SIMDE_X86_SSE2_NATIVE) && !defined(__PGI)
+#define simde_mm_shuffle_pd(a, b, imm8) _mm_shuffle_pd((a), (b), (imm8))
+#elif defined(SIMDE_SHUFFLE_VECTOR_)
+#define simde_mm_shuffle_pd(a, b, imm8) \
+ (__extension__({ \
+ simde__m128d_from_private((simde__m128d_private){ \
+ .f64 = SIMDE_SHUFFLE_VECTOR_( \
+ 64, 16, simde__m128d_to_private(a).f64, \
+ simde__m128d_to_private(b).f64, \
+ (((imm8)) & 1), (((imm8) >> 1) & 1) + 2)}); \
+ }))
+#endif
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_shuffle_pd(a, b, imm8) simde_mm_shuffle_pd(a, b, imm8)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_shufflehi_epi16(simde__m128i a, const int imm8)
+ SIMDE_REQUIRE_CONSTANT_RANGE(imm8, 0, 255)
+{
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a);
+
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < ((sizeof(a_.i16) / sizeof(a_.i16[0])) / 2);
+ i++) {
+ r_.i16[i] = a_.i16[i];
+ }
+ for (size_t i = ((sizeof(a_.i16) / sizeof(a_.i16[0])) / 2);
+ i < (sizeof(r_.i16) / sizeof(r_.i16[0])); i++) {
+ r_.i16[i] = a_.i16[((imm8 >> ((i - 4) * 2)) & 3) + 4];
+ }
+
+ return simde__m128i_from_private(r_);
+}
+#if defined(SIMDE_X86_SSE2_NATIVE)
+#define simde_mm_shufflehi_epi16(a, imm8) _mm_shufflehi_epi16((a), (imm8))
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+#define simde_mm_shufflehi_epi16(a, imm8) \
+ __extension__({ \
+ int16x8_t ret = vreinterpretq_s16_s64(a); \
+ int16x4_t highBits = vget_high_s16(ret); \
+ ret = vsetq_lane_s16(vget_lane_s16(highBits, (imm8) & (0x3)), \
+ ret, 4); \
+ ret = vsetq_lane_s16( \
+ vget_lane_s16(highBits, ((imm8) >> 2) & 0x3), ret, 5); \
+ ret = vsetq_lane_s16( \
+ vget_lane_s16(highBits, ((imm8) >> 4) & 0x3), ret, 6); \
+ ret = vsetq_lane_s16( \
+ vget_lane_s16(highBits, ((imm8) >> 6) & 0x3), ret, 7); \
+ vreinterpretq_s64_s16(ret); \
+ })
+#elif defined(SIMDE_SHUFFLE_VECTOR_)
+#define simde_mm_shufflehi_epi16(a, imm8) \
+ (__extension__({ \
+ const simde__m128i_private simde__tmp_a_ = \
+ simde__m128i_to_private(a); \
+ simde__m128i_from_private((simde__m128i_private){ \
+ .i16 = SIMDE_SHUFFLE_VECTOR_( \
+ 16, 16, (simde__tmp_a_).i16, \
+ (simde__tmp_a_).i16, 0, 1, 2, 3, \
+ (((imm8)) & 3) + 4, (((imm8) >> 2) & 3) + 4, \
+ (((imm8) >> 4) & 3) + 4, \
+ (((imm8) >> 6) & 3) + 4)}); \
+ }))
+#endif
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_shufflehi_epi16(a, imm8) simde_mm_shufflehi_epi16(a, imm8)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_shufflelo_epi16(simde__m128i a, const int imm8)
+ SIMDE_REQUIRE_CONSTANT_RANGE(imm8, 0, 255)
+{
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a);
+
+ for (size_t i = 0; i < ((sizeof(r_.i16) / sizeof(r_.i16[0])) / 2);
+ i++) {
+ r_.i16[i] = a_.i16[((imm8 >> (i * 2)) & 3)];
+ }
+ SIMDE_VECTORIZE
+ for (size_t i = ((sizeof(a_.i16) / sizeof(a_.i16[0])) / 2);
+ i < (sizeof(r_.i16) / sizeof(r_.i16[0])); i++) {
+ r_.i16[i] = a_.i16[i];
+ }
+
+ return simde__m128i_from_private(r_);
+}
+#if defined(SIMDE_X86_SSE2_NATIVE)
+#define simde_mm_shufflelo_epi16(a, imm8) _mm_shufflelo_epi16((a), (imm8))
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+#define simde_mm_shufflelo_epi16(a, imm8) \
+ __extension__({ \
+ int16x8_t ret = vreinterpretq_s16_s64(a); \
+ int16x4_t lowBits = vget_low_s16(ret); \
+ ret = vsetq_lane_s16(vget_lane_s16(lowBits, (imm8) & (0x3)), \
+ ret, 0); \
+ ret = vsetq_lane_s16( \
+ vget_lane_s16(lowBits, ((imm8) >> 2) & 0x3), ret, 1); \
+ ret = vsetq_lane_s16( \
+ vget_lane_s16(lowBits, ((imm8) >> 4) & 0x3), ret, 2); \
+ ret = vsetq_lane_s16( \
+ vget_lane_s16(lowBits, ((imm8) >> 6) & 0x3), ret, 3); \
+ vreinterpretq_s64_s16(ret); \
+ })
+#elif defined(SIMDE_SHUFFLE_VECTOR_)
+#define simde_mm_shufflelo_epi16(a, imm8) \
+ (__extension__({ \
+ const simde__m128i_private simde__tmp_a_ = \
+ simde__m128i_to_private(a); \
+ simde__m128i_from_private((simde__m128i_private){ \
+ .i16 = SIMDE_SHUFFLE_VECTOR_( \
+ 16, 16, (simde__tmp_a_).i16, \
+ (simde__tmp_a_).i16, (((imm8)) & 3), \
+ (((imm8) >> 2) & 3), (((imm8) >> 4) & 3), \
+ (((imm8) >> 6) & 3), 4, 5, 6, 7)}); \
+ }))
+#endif
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_shufflelo_epi16(a, imm8) simde_mm_shufflelo_epi16(a, imm8)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_sll_epi16(simde__m128i a, simde__m128i count)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_sll_epi16(a, count);
+#else
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a),
+ count_ = simde__m128i_to_private(count);
+
+ if (count_.u64[0] > 15)
+ return simde_mm_setzero_si128();
+
+#if defined(SIMDE_VECTOR_SUBSCRIPT_SCALAR)
+ r_.u16 = (a_.u16 << count_.u64[0]);
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_u16 = vshlq_u16(a_.neon_u16, vdupq_n_s16(HEDLEY_STATIC_CAST(
+ int16_t, count_.u64[0])));
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 =
+ ((wasm_i64x2_extract_lane(count_.wasm_v128, 0) < 16)
+ ? wasm_i16x8_shl(a_.wasm_v128,
+ HEDLEY_STATIC_CAST(
+ int32_t,
+ wasm_i64x2_extract_lane(
+ count_.wasm_v128, 0)))
+ : wasm_i16x8_const(0, 0, 0, 0, 0, 0, 0, 0));
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.u16) / sizeof(r_.u16[0])); i++) {
+ r_.u16[i] = HEDLEY_STATIC_CAST(uint16_t,
+ (a_.u16[i] << count_.u64[0]));
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_sll_epi16(a, count) simde_mm_sll_epi16((a), (count))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_sll_epi32(simde__m128i a, simde__m128i count)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_sll_epi32(a, count);
+#else
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a),
+ count_ = simde__m128i_to_private(count);
+
+ if (count_.u64[0] > 31)
+ return simde_mm_setzero_si128();
+
+#if defined(SIMDE_VECTOR_SUBSCRIPT_SCALAR)
+ r_.u32 = (a_.u32 << count_.u64[0]);
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_u32 = vshlq_u32(a_.neon_u32, vdupq_n_s32(HEDLEY_STATIC_CAST(
+ int32_t, count_.u64[0])));
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 =
+ ((wasm_i64x2_extract_lane(count_.wasm_v128, 0) < 32)
+ ? wasm_i32x4_shl(a_.wasm_v128,
+ HEDLEY_STATIC_CAST(
+ int32_t,
+ wasm_i64x2_extract_lane(
+ count_.wasm_v128, 0)))
+ : wasm_i32x4_const(0, 0, 0, 0));
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.u32) / sizeof(r_.u32[0])); i++) {
+ r_.u32[i] = HEDLEY_STATIC_CAST(uint32_t,
+ (a_.u32[i] << count_.u64[0]));
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_sll_epi32(a, count) (simde_mm_sll_epi32(a, (count)))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_sll_epi64(simde__m128i a, simde__m128i count)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_sll_epi64(a, count);
+#else
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a),
+ count_ = simde__m128i_to_private(count);
+
+ if (count_.u64[0] > 63)
+ return simde_mm_setzero_si128();
+
+ const int_fast16_t s = HEDLEY_STATIC_CAST(int_fast16_t, count_.u64[0]);
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_u64 = vshlq_u64(a_.neon_u64,
+ vdupq_n_s64(HEDLEY_STATIC_CAST(int64_t, s)));
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = (s < 64) ? wasm_i64x2_shl(a_.wasm_v128, s)
+ : wasm_i64x2_const(0, 0);
+#else
+#if !defined(SIMDE_BUG_GCC_94488)
+ SIMDE_VECTORIZE
+#endif
+ for (size_t i = 0; i < (sizeof(r_.u64) / sizeof(r_.u64[0])); i++) {
+ r_.u64[i] = a_.u64[i] << s;
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_sll_epi64(a, count) (simde_mm_sll_epi64(a, (count)))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d simde_mm_sqrt_pd(simde__m128d a)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_sqrt_pd(a);
+#else
+ simde__m128d_private r_, a_ = simde__m128d_to_private(a);
+
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ r_.neon_f64 = vsqrtq_f64(a_.neon_f64);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_f64x2_sqrt(a_.wasm_v128);
+#elif defined(simde_math_sqrt)
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.f64) / sizeof(r_.f64[0])); i++) {
+ r_.f64[i] = simde_math_sqrt(a_.f64[i]);
+ }
+#else
+ HEDLEY_UNREACHABLE();
+#endif
+
+ return simde__m128d_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_sqrt_pd(a) simde_mm_sqrt_pd(a)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d simde_mm_sqrt_sd(simde__m128d a, simde__m128d b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_sqrt_sd(a, b);
+#elif (SIMDE_NATURAL_VECTOR_SIZE > 0)
+ return simde_mm_move_sd(a, simde_mm_sqrt_pd(b));
+#else
+ simde__m128d_private r_, a_ = simde__m128d_to_private(a),
+ b_ = simde__m128d_to_private(b);
+
+#if defined(simde_math_sqrt)
+ r_.f64[0] = simde_math_sqrt(b_.f64[0]);
+ r_.f64[1] = a_.f64[1];
+#else
+ HEDLEY_UNREACHABLE();
+#endif
+
+ return simde__m128d_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_sqrt_sd(a, b) simde_mm_sqrt_sd(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_srl_epi16(simde__m128i a, simde__m128i count)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_srl_epi16(a, count);
+#else
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a),
+ count_ = simde__m128i_to_private(count);
+
+ const int cnt = HEDLEY_STATIC_CAST(
+ int, (count_.i64[0] > 16 ? 16 : count_.i64[0]));
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_u16 = vshlq_u16(a_.neon_u16,
+ vdupq_n_s16(HEDLEY_STATIC_CAST(int16_t, -cnt)));
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.u16) / sizeof(r_.u16[0])); i++) {
+ r_.u16[i] = a_.u16[i] >> cnt;
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_srl_epi16(a, count) (simde_mm_srl_epi16(a, (count)))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_srl_epi32(simde__m128i a, simde__m128i count)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_srl_epi32(a, count);
+#else
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a),
+ count_ = simde__m128i_to_private(count);
+
+ const int cnt = HEDLEY_STATIC_CAST(
+ int, (count_.i64[0] > 32 ? 32 : count_.i64[0]));
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_u32 = vshlq_u32(a_.neon_u32,
+ vdupq_n_s32(HEDLEY_STATIC_CAST(int32_t, -cnt)));
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_u32x4_shr(a_.wasm_v128, cnt);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.u32) / sizeof(r_.u32[0])); i++) {
+ r_.u32[i] = a_.u32[i] >> cnt;
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_srl_epi32(a, count) (simde_mm_srl_epi32(a, (count)))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_srl_epi64(simde__m128i a, simde__m128i count)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_srl_epi64(a, count);
+#else
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a),
+ count_ = simde__m128i_to_private(count);
+
+ const int cnt = HEDLEY_STATIC_CAST(
+ int, (count_.i64[0] > 64 ? 64 : count_.i64[0]));
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_u64 = vshlq_u64(a_.neon_u64,
+ vdupq_n_s64(HEDLEY_STATIC_CAST(int64_t, -cnt)));
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_u64x2_shr(a_.wasm_v128, cnt);
+#else
+#if !defined(SIMDE_BUG_GCC_94488)
+ SIMDE_VECTORIZE
+#endif
+ for (size_t i = 0; i < (sizeof(r_.u64) / sizeof(r_.u64[0])); i++) {
+ r_.u64[i] = a_.u64[i] >> cnt;
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_srl_epi64(a, count) (simde_mm_srl_epi64(a, (count)))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_srai_epi16(simde__m128i a, const int imm8)
+ SIMDE_REQUIRE_CONSTANT_RANGE(imm8, 0, 255)
+{
+ /* MSVC requires a range of (0, 255). */
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a);
+
+ const int cnt = (imm8 & ~15) ? 15 : imm8;
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i16 = vshlq_s16(a_.neon_i16,
+ vdupq_n_s16(HEDLEY_STATIC_CAST(int16_t, -cnt)));
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_i16x8_shr(a_.wasm_v128, cnt);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_) / sizeof(r_.i16[0])); i++) {
+ r_.i16[i] = a_.i16[i] >> cnt;
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+}
+#if defined(SIMDE_X86_SSE2_NATIVE)
+#define simde_mm_srai_epi16(a, imm8) _mm_srai_epi16((a), (imm8))
+#endif
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_srai_epi16(a, imm8) simde_mm_srai_epi16(a, imm8)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_srai_epi32(simde__m128i a, const int imm8)
+ SIMDE_REQUIRE_CONSTANT_RANGE(imm8, 0, 255)
+{
+ /* MSVC requires a range of (0, 255). */
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a);
+
+ const int cnt = (imm8 & ~31) ? 31 : imm8;
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i32 = vshlq_s32(a_.neon_i32, vdupq_n_s32(-cnt));
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_i32x4_shr(a_.wasm_v128, cnt);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_) / sizeof(r_.i32[0])); i++) {
+ r_.i32[i] = a_.i32[i] >> cnt;
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+}
+#if defined(SIMDE_X86_SSE2_NATIVE)
+#define simde_mm_srai_epi32(a, imm8) _mm_srai_epi32((a), (imm8))
+#endif
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_srai_epi32(a, imm8) simde_mm_srai_epi32(a, imm8)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_sra_epi16(simde__m128i a, simde__m128i count)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_sra_epi16(a, count);
+#else
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a),
+ count_ = simde__m128i_to_private(count);
+
+ const int cnt = HEDLEY_STATIC_CAST(
+ int, (count_.i64[0] > 15 ? 15 : count_.i64[0]));
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i16 = vshlq_s16(a_.neon_i16,
+ vdupq_n_s16(HEDLEY_STATIC_CAST(int16_t, -cnt)));
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_i16x8_shr(a_.wasm_v128, cnt);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i16) / sizeof(r_.i16[0])); i++) {
+ r_.i16[i] = a_.i16[i] >> cnt;
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_sra_epi16(a, count) (simde_mm_sra_epi16(a, count))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_sra_epi32(simde__m128i a, simde__m128i count)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE) && !defined(SIMDE_BUG_GCC_BAD_MM_SRA_EPI32)
+ return _mm_sra_epi32(a, count);
+#else
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a),
+ count_ = simde__m128i_to_private(count);
+
+ const int cnt = count_.u64[0] > 31
+ ? 31
+ : HEDLEY_STATIC_CAST(int, count_.u64[0]);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i32 = vshlq_s32(a_.neon_i32,
+ vdupq_n_s32(HEDLEY_STATIC_CAST(int32_t, -cnt)));
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_i32x4_shr(a_.wasm_v128, cnt);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i32) / sizeof(r_.i32[0])); i++) {
+ r_.i32[i] = a_.i32[i] >> cnt;
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_sra_epi32(a, count) (simde_mm_sra_epi32(a, (count)))
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_slli_epi16(simde__m128i a, const int imm8)
+ SIMDE_REQUIRE_CONSTANT_RANGE(imm8, 0, 255)
+{
+ if (HEDLEY_UNLIKELY((imm8 > 15))) {
+ return simde_mm_setzero_si128();
+ }
+
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a);
+
+#if defined(SIMDE_VECTOR_SUBSCRIPT_SCALAR)
+ r_.i16 = a_.i16 << (imm8 & 0xff);
+#else
+ const int s =
+ (imm8 >
+ HEDLEY_STATIC_CAST(int, sizeof(r_.i16[0]) * CHAR_BIT) - 1)
+ ? 0
+ : imm8;
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i16) / sizeof(r_.i16[0])); i++) {
+ r_.i16[i] = HEDLEY_STATIC_CAST(int16_t, a_.i16[i] << s);
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+}
+#if defined(SIMDE_X86_SSE2_NATIVE)
+#define simde_mm_slli_epi16(a, imm8) _mm_slli_epi16(a, imm8)
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+#define simde_mm_slli_epi16(a, imm8) \
+ (__extension__({ \
+ simde__m128i ret; \
+ if ((imm8) <= 0) { \
+ ret = a; \
+ } else if ((imm8) > 15) { \
+ ret = simde_mm_setzero_si128(); \
+ } else { \
+ ret = simde__m128i_from_neon_i16(vshlq_n_s16( \
+ simde__m128i_to_neon_i16(a), ((imm8)&15))); \
+ } \
+ ret; \
+ }))
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+#define simde_mm_slli_epi16(a, imm8) \
+ ((imm8 < 16) \
+ ? wasm_i16x8_shl(simde__m128i_to_private(a).wasm_v128, imm8) \
+ : wasm_i16x8_const(0, 0, 0, 0, 0, 0, 0, 0))
+#elif defined(SIMDE_POWER_ALTIVEC_P8_NATIVE)
+#define simde_mm_slli_epi16(a, imm8) \
+ ((imm8 & ~15) ? simde_mm_setzero_si128() \
+ : simde__m128i_from_altivec_i16( \
+ vec_sl(simde__m128i_to_altivec_i16(a), \
+ vec_splat_u16(HEDLEY_STATIC_CAST( \
+ unsigned short, imm8)))))
+#endif
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_slli_epi16(a, imm8) simde_mm_slli_epi16(a, imm8)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_slli_epi32(simde__m128i a, const int imm8)
+ SIMDE_REQUIRE_CONSTANT_RANGE(imm8, 0, 255)
+{
+ if (HEDLEY_UNLIKELY((imm8 > 31))) {
+ return simde_mm_setzero_si128();
+ }
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a);
+
+#if defined(SIMDE_VECTOR_SUBSCRIPT_SCALAR)
+ r_.i32 = a_.i32 << imm8;
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i32) / sizeof(r_.i32[0])); i++) {
+ r_.i32[i] = a_.i32[i] << (imm8 & 0xff);
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+}
+#if defined(SIMDE_X86_SSE2_NATIVE)
+#define simde_mm_slli_epi32(a, imm8) _mm_slli_epi32(a, imm8)
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+#define simde_mm_slli_epi32(a, imm8) \
+ (__extension__({ \
+ simde__m128i ret; \
+ if ((imm8) <= 0) { \
+ ret = a; \
+ } else if ((imm8) > 31) { \
+ ret = simde_mm_setzero_si128(); \
+ } else { \
+ ret = simde__m128i_from_neon_i32(vshlq_n_s32( \
+ simde__m128i_to_neon_i32(a), ((imm8)&31))); \
+ } \
+ ret; \
+ }))
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+#define simde_mm_slli_epi32(a, imm8) \
+ ((imm8 < 32) \
+ ? wasm_i32x4_shl(simde__m128i_to_private(a).wasm_v128, imm8) \
+ : wasm_i32x4_const(0, 0, 0, 0))
+#elif defined(SIMDE_POWER_ALTIVEC_P8_NATIVE)
+#define simde_mm_slli_epi32(a, imm8) \
+ (__extension__({ \
+ simde__m128i ret; \
+ if ((imm8) <= 0) { \
+ ret = a; \
+ } else if ((imm8) > 31) { \
+ ret = simde_mm_setzero_si128(); \
+ } else { \
+ ret = simde__m128i_from_altivec_i32( \
+ vec_sl(simde__m128i_to_altivec_i32(a), \
+ vec_splats(HEDLEY_STATIC_CAST( \
+ unsigned int, (imm8)&31)))); \
+ } \
+ ret; \
+ }))
+#endif
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_slli_epi32(a, imm8) simde_mm_slli_epi32(a, imm8)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_slli_epi64(simde__m128i a, const int imm8)
+ SIMDE_REQUIRE_CONSTANT_RANGE(imm8, 0, 255)
+{
+ if (HEDLEY_UNLIKELY((imm8 > 63))) {
+ return simde_mm_setzero_si128();
+ }
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a);
+
+#if defined(SIMDE_VECTOR_SUBSCRIPT_SCALAR)
+ r_.i64 = a_.i64 << imm8;
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i64) / sizeof(r_.i64[0])); i++) {
+ r_.i64[i] = a_.i64[i] << (imm8 & 0xff);
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+}
+#if defined(SIMDE_X86_SSE2_NATIVE)
+#define simde_mm_slli_epi64(a, imm8) _mm_slli_epi64(a, imm8)
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+#define simde_mm_slli_epi64(a, imm8) \
+ (__extension__({ \
+ simde__m128i ret; \
+ if ((imm8) <= 0) { \
+ ret = a; \
+ } else if ((imm8) > 63) { \
+ ret = simde_mm_setzero_si128(); \
+ } else { \
+ ret = simde__m128i_from_neon_i64(vshlq_n_s64( \
+ simde__m128i_to_neon_i64(a), ((imm8)&63))); \
+ } \
+ ret; \
+ }))
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+#define simde_mm_slli_epi64(a, imm8) \
+ ((imm8 < 64) \
+ ? wasm_i64x2_shl(simde__m128i_to_private(a).wasm_v128, imm8) \
+ : wasm_i64x2_const(0, 0))
+#endif
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_slli_epi64(a, imm8) simde_mm_slli_epi64(a, imm8)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_srli_epi16(simde__m128i a, const int imm8)
+ SIMDE_REQUIRE_CONSTANT_RANGE(imm8, 0, 255)
+{
+ if (HEDLEY_UNLIKELY((imm8 > 15))) {
+ return simde_mm_setzero_si128();
+ }
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a);
+
+#if defined(SIMDE_VECTOR_SUBSCRIPT_SCALAR)
+ r_.u16 = a_.u16 >> imm8;
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i16) / sizeof(r_.i16[0])); i++) {
+ r_.u16[i] = a_.u16[i] >> (imm8 & 0xff);
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+}
+#if defined(SIMDE_X86_SSE2_NATIVE)
+#define simde_mm_srli_epi16(a, imm8) _mm_srli_epi16(a, imm8)
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+#define simde_mm_srli_epi16(a, imm8) \
+ (__extension__({ \
+ simde__m128i ret; \
+ if ((imm8) <= 0) { \
+ ret = a; \
+ } else if ((imm8) > 15) { \
+ ret = simde_mm_setzero_si128(); \
+ } else { \
+ ret = simde__m128i_from_neon_u16(vshrq_n_u16( \
+ simde__m128i_to_neon_u16(a), \
+ (((imm8)&15) | (((imm8)&15) == 0)))); \
+ } \
+ ret; \
+ }))
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+#define simde_mm_srli_epi16(a, imm8) \
+ ((imm8 < 16) \
+ ? wasm_u16x8_shr(simde__m128i_to_private(a).wasm_v128, imm8) \
+ : wasm_i16x8_const(0, 0, 0, 0, 0, 0, 0, 0))
+#elif defined(SIMDE_POWER_ALTIVEC_P8_NATIVE)
+#define simde_mm_srli_epi16(a, imm8) \
+ ((imm8 & ~15) ? simde_mm_setzero_si128() \
+ : simde__m128i_from_altivec_i16( \
+ vec_sr(simde__m128i_to_altivec_i16(a), \
+ vec_splat_u16(HEDLEY_STATIC_CAST( \
+ unsigned short, imm8)))))
+#endif
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_srli_epi16(a, imm8) simde_mm_srli_epi16(a, imm8)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_srli_epi32(simde__m128i a, const int imm8)
+ SIMDE_REQUIRE_CONSTANT_RANGE(imm8, 0, 255)
+{
+ if (HEDLEY_UNLIKELY((imm8 > 31))) {
+ return simde_mm_setzero_si128();
+ }
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a);
+
+#if defined(SIMDE_VECTOR_SUBSCRIPT_SCALAR)
+ r_.u32 = a_.u32 >> (imm8 & 0xff);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i32) / sizeof(r_.i32[0])); i++) {
+ r_.u32[i] = a_.u32[i] >> (imm8 & 0xff);
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+}
+#if defined(SIMDE_X86_SSE2_NATIVE)
+#define simde_mm_srli_epi32(a, imm8) _mm_srli_epi32(a, imm8)
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+#define simde_mm_srli_epi32(a, imm8) \
+ (__extension__({ \
+ simde__m128i ret; \
+ if ((imm8) <= 0) { \
+ ret = a; \
+ } else if ((imm8) > 31) { \
+ ret = simde_mm_setzero_si128(); \
+ } else { \
+ ret = simde__m128i_from_neon_u32(vshrq_n_u32( \
+ simde__m128i_to_neon_u32(a), \
+ (((imm8)&31) | (((imm8)&31) == 0)))); \
+ } \
+ ret; \
+ }))
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+#define simde_mm_srli_epi32(a, imm8) \
+ ((imm8 < 32) \
+ ? wasm_u32x4_shr(simde__m128i_to_private(a).wasm_v128, imm8) \
+ : wasm_i32x4_const(0, 0, 0, 0))
+#elif defined(SIMDE_POWER_ALTIVEC_P8_NATIVE)
+#define simde_mm_srli_epi32(a, imm8) \
+ (__extension__({ \
+ simde__m128i ret; \
+ if ((imm8) <= 0) { \
+ ret = a; \
+ } else if ((imm8) > 31) { \
+ ret = simde_mm_setzero_si128(); \
+ } else { \
+ ret = simde__m128i_from_altivec_i32( \
+ vec_sr(simde__m128i_to_altivec_i32(a), \
+ vec_splats(HEDLEY_STATIC_CAST( \
+ unsigned int, (imm8)&31)))); \
+ } \
+ ret; \
+ }))
+#endif
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_srli_epi32(a, imm8) simde_mm_srli_epi32(a, imm8)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_srli_epi64(simde__m128i a, const int imm8)
+ SIMDE_REQUIRE_CONSTANT_RANGE(imm8, 0, 255)
+{
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a);
+
+ if (HEDLEY_UNLIKELY((imm8 & 63) != imm8))
+ return simde_mm_setzero_si128();
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_u64 = vshlq_u64(a_.neon_u64, vdupq_n_s64(-imm8));
+#else
+#if defined(SIMDE_VECTOR_SUBSCRIPT_SCALAR) && !defined(SIMDE_BUG_GCC_94488)
+ r_.u64 = a_.u64 >> imm8;
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i64) / sizeof(r_.i64[0])); i++) {
+ r_.u64[i] = a_.u64[i] >> imm8;
+ }
+#endif
+#endif
+
+ return simde__m128i_from_private(r_);
+}
+#if defined(SIMDE_X86_SSE2_NATIVE)
+#define simde_mm_srli_epi64(a, imm8) _mm_srli_epi64(a, imm8)
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+#define simde_mm_srli_epi64(a, imm8) \
+ (__extension__({ \
+ simde__m128i ret; \
+ if ((imm8) <= 0) { \
+ ret = a; \
+ } else if ((imm8) > 63) { \
+ ret = simde_mm_setzero_si128(); \
+ } else { \
+ ret = simde__m128i_from_neon_u64(vshrq_n_u64( \
+ simde__m128i_to_neon_u64(a), \
+ (((imm8)&63) | (((imm8)&63) == 0)))); \
+ } \
+ ret; \
+ }))
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+#define simde_mm_srli_epi64(a, imm8) \
+ ((imm8 < 64) \
+ ? wasm_u64x2_shr(simde__m128i_to_private(a).wasm_v128, imm8) \
+ : wasm_i64x2_const(0, 0))
+#endif
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_srli_epi64(a, imm8) simde_mm_srli_epi64(a, imm8)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+void simde_mm_store_pd(simde_float64 mem_addr[HEDLEY_ARRAY_PARAM(2)],
+ simde__m128d a)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ _mm_store_pd(mem_addr, a);
+#elif defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ vst1q_f64(mem_addr, simde__m128d_to_private(a).neon_f64);
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ vst1q_s64(HEDLEY_REINTERPRET_CAST(int64_t *, mem_addr),
+ simde__m128d_to_private(a).neon_i64);
+#else
+ simde_memcpy(SIMDE_ALIGN_ASSUME_LIKE(mem_addr, simde__m128d), &a,
+ sizeof(a));
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_store_pd(mem_addr, a) \
+ simde_mm_store_pd(HEDLEY_REINTERPRET_CAST(double *, mem_addr), a)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+void simde_mm_store1_pd(simde_float64 mem_addr[HEDLEY_ARRAY_PARAM(2)],
+ simde__m128d a)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ _mm_store1_pd(mem_addr, a);
+#else
+ simde__m128d_private a_ = simde__m128d_to_private(a);
+
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ vst1q_f64(mem_addr, vdupq_laneq_f64(a_.neon_f64, 0));
+#else
+ mem_addr[0] = a_.f64[0];
+ mem_addr[1] = a_.f64[0];
+#endif
+#endif
+}
+#define simde_mm_store_pd1(mem_addr, a) \
+ simde_mm_store1_pd(HEDLEY_REINTERPRET_CAST(double *, mem_addr), a)
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_store1_pd(mem_addr, a) \
+ simde_mm_store1_pd(HEDLEY_REINTERPRET_CAST(double *, mem_addr), a)
+#define _mm_store_pd1(mem_addr, a) \
+ simde_mm_store_pd1(HEDLEY_REINTERPRET_CAST(double *, mem_addr), a)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+void simde_mm_store_sd(simde_float64 *mem_addr, simde__m128d a)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ _mm_store_sd(mem_addr, a);
+#else
+ simde__m128d_private a_ = simde__m128d_to_private(a);
+
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ const simde_float64 v = vgetq_lane_f64(a_.neon_f64, 0);
+ simde_memcpy(mem_addr, &v, sizeof(v));
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ const int64_t v = vgetq_lane_s64(a_.neon_i64, 0);
+ simde_memcpy(HEDLEY_REINTERPRET_CAST(int64_t *, mem_addr), &v,
+ sizeof(v));
+#else
+ simde_float64 v = a_.f64[0];
+ simde_memcpy(mem_addr, &v, sizeof(simde_float64));
+#endif
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_store_sd(mem_addr, a) \
+ simde_mm_store_sd(HEDLEY_REINTERPRET_CAST(double *, mem_addr), a)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+void simde_mm_store_si128(simde__m128i *mem_addr, simde__m128i a)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ _mm_store_si128(HEDLEY_STATIC_CAST(__m128i *, mem_addr), a);
+#else
+ simde__m128i_private a_ = simde__m128i_to_private(a);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ vst1q_s32(HEDLEY_REINTERPRET_CAST(int32_t *, mem_addr), a_.neon_i32);
+#else
+ simde_memcpy(SIMDE_ALIGN_ASSUME_LIKE(mem_addr, simde__m128i), &a_,
+ sizeof(a_));
+#endif
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_store_si128(mem_addr, a) simde_mm_store_si128(mem_addr, a)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+void simde_mm_storeh_pd(simde_float64 *mem_addr, simde__m128d a)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ _mm_storeh_pd(mem_addr, a);
+#else
+ simde__m128d_private a_ = simde__m128d_to_private(a);
+
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ *mem_addr = vgetq_lane_f64(a_.neon_f64, 1);
+#else
+ *mem_addr = a_.f64[1];
+#endif
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_storeh_pd(mem_addr, a) \
+ simde_mm_storeh_pd(HEDLEY_REINTERPRET_CAST(double *, mem_addr), a)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+void simde_mm_storel_epi64(simde__m128i *mem_addr, simde__m128i a)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ _mm_storel_epi64(HEDLEY_STATIC_CAST(__m128i *, mem_addr), a);
+#else
+ simde__m128i_private a_ = simde__m128i_to_private(a);
+ int64_t tmp;
+
+ /* memcpy to prevent aliasing, tmp because we can't take the
+ * address of a vector element. */
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ tmp = vgetq_lane_s64(a_.neon_i64, 0);
+#elif defined(SIMDE_POWER_ALTIVEC_P7_NATIVE)
+#if defined(SIMDE_BUG_GCC_95227)
+ (void)a_;
+#endif
+ tmp = vec_extract(a_.altivec_i64, 0);
+#else
+ tmp = a_.i64[0];
+#endif
+
+ simde_memcpy(mem_addr, &tmp, sizeof(tmp));
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_storel_epi64(mem_addr, a) simde_mm_storel_epi64(mem_addr, a)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+void simde_mm_storel_pd(simde_float64 *mem_addr, simde__m128d a)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ _mm_storel_pd(mem_addr, a);
+#else
+ simde__m128d_private a_ = simde__m128d_to_private(a);
+
+ simde_float64 tmp;
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ tmp = vgetq_lane_f64(a_.neon_f64, 0);
+#else
+ tmp = a_.f64[0];
+#endif
+ simde_memcpy(mem_addr, &tmp, sizeof(tmp));
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_storel_pd(mem_addr, a) \
+ simde_mm_storel_pd(HEDLEY_REINTERPRET_CAST(double *, mem_addr), a)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+void simde_mm_storer_pd(simde_float64 mem_addr[2], simde__m128d a)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ _mm_storer_pd(mem_addr, a);
+#else
+ simde__m128d_private a_ = simde__m128d_to_private(a);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ vst1q_s64(HEDLEY_REINTERPRET_CAST(int64_t *, mem_addr),
+ vextq_s64(a_.neon_i64, a_.neon_i64, 1));
+#elif defined(SIMDE_SHUFFLE_VECTOR_)
+ a_.f64 = SIMDE_SHUFFLE_VECTOR_(64, 16, a_.f64, a_.f64, 1, 0);
+ simde_mm_store_pd(mem_addr, simde__m128d_from_private(a_));
+#else
+ mem_addr[0] = a_.f64[1];
+ mem_addr[1] = a_.f64[0];
+#endif
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_storer_pd(mem_addr, a) \
+ simde_mm_storer_pd(HEDLEY_REINTERPRET_CAST(double *, mem_addr), a)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+void simde_mm_storeu_pd(simde_float64 *mem_addr, simde__m128d a)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ _mm_storeu_pd(mem_addr, a);
+#elif defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ vst1q_f64(mem_addr, simde__m128d_to_private(a).neon_f64);
+#else
+ simde_memcpy(mem_addr, &a, sizeof(a));
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_storeu_pd(mem_addr, a) \
+ simde_mm_storeu_pd(HEDLEY_REINTERPRET_CAST(double *, mem_addr), a)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+void simde_mm_storeu_si128(simde__m128i *mem_addr, simde__m128i a)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ _mm_storeu_si128(HEDLEY_STATIC_CAST(__m128i *, mem_addr), a);
+#else
+ simde_memcpy(mem_addr, &a, sizeof(a));
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_storeu_si128(mem_addr, a) simde_mm_storeu_si128(mem_addr, a)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+void simde_mm_storeu_si16(void *mem_addr, simde__m128i a)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE) && \
+ (SIMDE_DETECT_CLANG_VERSION_CHECK(8, 0, 0) || \
+ HEDLEY_GCC_VERSION_CHECK(11, 0, 0) || \
+ HEDLEY_INTEL_VERSION_CHECK(20, 21, 1))
+ _mm_storeu_si16(mem_addr, a);
+#else
+ int16_t val = simde_x_mm_cvtsi128_si16(a);
+ simde_memcpy(mem_addr, &val, sizeof(val));
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_storeu_si16(mem_addr, a) simde_mm_storeu_si16(mem_addr, a)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+void simde_mm_storeu_si32(void *mem_addr, simde__m128i a)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE) && \
+ (SIMDE_DETECT_CLANG_VERSION_CHECK(8, 0, 0) || \
+ HEDLEY_GCC_VERSION_CHECK(11, 0, 0) || \
+ HEDLEY_INTEL_VERSION_CHECK(20, 21, 1))
+ _mm_storeu_si32(mem_addr, a);
+#else
+ int32_t val = simde_mm_cvtsi128_si32(a);
+ simde_memcpy(mem_addr, &val, sizeof(val));
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_storeu_si32(mem_addr, a) simde_mm_storeu_si32(mem_addr, a)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+void simde_mm_storeu_si64(void *mem_addr, simde__m128i a)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE) && \
+ (SIMDE_DETECT_CLANG_VERSION_CHECK(8, 0, 0) || \
+ HEDLEY_GCC_VERSION_CHECK(11, 0, 0) || \
+ HEDLEY_INTEL_VERSION_CHECK(20, 21, 1))
+ _mm_storeu_si64(mem_addr, a);
+#else
+ int64_t val = simde_mm_cvtsi128_si64(a);
+ simde_memcpy(mem_addr, &val, sizeof(val));
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_storeu_si64(mem_addr, a) simde_mm_storeu_si64(mem_addr, a)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+void simde_mm_stream_pd(simde_float64 mem_addr[HEDLEY_ARRAY_PARAM(2)],
+ simde__m128d a)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ _mm_stream_pd(mem_addr, a);
+#else
+ simde_memcpy(mem_addr, &a, sizeof(a));
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_stream_pd(mem_addr, a) \
+ simde_mm_stream_pd(HEDLEY_REINTERPRET_CAST(double *, mem_addr), a)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+void simde_mm_stream_si128(simde__m128i *mem_addr, simde__m128i a)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE) && defined(SIMDE_ARCH_AMD64)
+ _mm_stream_si128(HEDLEY_STATIC_CAST(__m128i *, mem_addr), a);
+#else
+ simde_memcpy(mem_addr, &a, sizeof(a));
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_stream_si128(mem_addr, a) simde_mm_stream_si128(mem_addr, a)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+void simde_mm_stream_si32(int32_t *mem_addr, int32_t a)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ _mm_stream_si32(mem_addr, a);
+#else
+ *mem_addr = a;
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_stream_si32(mem_addr, a) simde_mm_stream_si32(mem_addr, a)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+void simde_mm_stream_si64(int64_t *mem_addr, int64_t a)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE) && defined(SIMDE_ARCH_AMD64) && \
+ !defined(HEDLEY_MSVC_VERSION)
+ _mm_stream_si64(SIMDE_CHECKED_REINTERPRET_CAST(long long int *,
+ int64_t *, mem_addr),
+ a);
+#else
+ *mem_addr = a;
+#endif
+}
+#define simde_mm_stream_si64x(mem_addr, a) simde_mm_stream_si64(mem_addr, a)
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_stream_si64(mem_addr, a) \
+ simde_mm_stream_si64(SIMDE_CHECKED_REINTERPRET_CAST( \
+ int64_t *, __int64 *, mem_addr), \
+ a)
+#define _mm_stream_si64x(mem_addr, a) \
+ simde_mm_stream_si64(SIMDE_CHECKED_REINTERPRET_CAST( \
+ int64_t *, __int64 *, mem_addr), \
+ a)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_sub_epi8(simde__m128i a, simde__m128i b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_sub_epi8(a, b);
+#else
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a),
+ b_ = simde__m128i_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i8 = vsubq_s8(a_.neon_i8, b_.neon_i8);
+#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
+ r_.i8 = a_.i8 - b_.i8;
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i8) / sizeof(r_.i8[0])); i++) {
+ r_.i8[i] = a_.i8[i] - b_.i8[i];
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_sub_epi8(a, b) simde_mm_sub_epi8(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_sub_epi16(simde__m128i a, simde__m128i b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_sub_epi16(a, b);
+#else
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a),
+ b_ = simde__m128i_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i16 = vsubq_s16(a_.neon_i16, b_.neon_i16);
+#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
+ r_.i16 = a_.i16 - b_.i16;
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i16) / sizeof(r_.i16[0])); i++) {
+ r_.i16[i] = a_.i16[i] - b_.i16[i];
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_sub_epi16(a, b) simde_mm_sub_epi16(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_sub_epi32(simde__m128i a, simde__m128i b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_sub_epi32(a, b);
+#else
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a),
+ b_ = simde__m128i_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i32 = vsubq_s32(a_.neon_i32, b_.neon_i32);
+#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
+ r_.i32 = a_.i32 - b_.i32;
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i32) / sizeof(r_.i32[0])); i++) {
+ r_.i32[i] = a_.i32[i] - b_.i32[i];
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_sub_epi32(a, b) simde_mm_sub_epi32(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_sub_epi64(simde__m128i a, simde__m128i b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_sub_epi64(a, b);
+#else
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a),
+ b_ = simde__m128i_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i64 = vsubq_s64(a_.neon_i64, b_.neon_i64);
+#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
+ r_.i64 = a_.i64 - b_.i64;
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i64) / sizeof(r_.i64[0])); i++) {
+ r_.i64[i] = a_.i64[i] - b_.i64[i];
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_sub_epi64(a, b) simde_mm_sub_epi64(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_x_mm_sub_epu32(simde__m128i a, simde__m128i b)
+{
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a),
+ b_ = simde__m128i_to_private(b);
+
+#if defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
+ r_.u32 = a_.u32 - b_.u32;
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_u32 = vsubq_u32(a_.neon_u32, b_.neon_u32);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.u32) / sizeof(r_.u32[0])); i++) {
+ r_.u32[i] = a_.u32[i] - b_.u32[i];
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+}
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d simde_mm_sub_pd(simde__m128d a, simde__m128d b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_sub_pd(a, b);
+#else
+ simde__m128d_private r_, a_ = simde__m128d_to_private(a),
+ b_ = simde__m128d_to_private(b);
+
+#if defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
+ r_.f64 = a_.f64 - b_.f64;
+#elif defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ r_.neon_f64 = vsubq_f64(a_.neon_f64, b_.neon_f64);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_f64x2_sub(a_.wasm_v128, b_.wasm_v128);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.f64) / sizeof(r_.f64[0])); i++) {
+ r_.f64[i] = a_.f64[i] - b_.f64[i];
+ }
+#endif
+
+ return simde__m128d_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_sub_pd(a, b) simde_mm_sub_pd(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d simde_mm_sub_sd(simde__m128d a, simde__m128d b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_sub_sd(a, b);
+#elif (SIMDE_NATURAL_VECTOR_SIZE > 0)
+ return simde_mm_move_sd(a, simde_mm_sub_pd(a, b));
+#else
+ simde__m128d_private r_, a_ = simde__m128d_to_private(a),
+ b_ = simde__m128d_to_private(b);
+
+ r_.f64[0] = a_.f64[0] - b_.f64[0];
+ r_.f64[1] = a_.f64[1];
+
+ return simde__m128d_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_sub_sd(a, b) simde_mm_sub_sd(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m64 simde_mm_sub_si64(simde__m64 a, simde__m64 b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE) && defined(SIMDE_X86_MMX_NATIVE)
+ return _mm_sub_si64(a, b);
+#else
+ simde__m64_private r_, a_ = simde__m64_to_private(a),
+ b_ = simde__m64_to_private(b);
+
+#if defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
+ r_.i64 = a_.i64 - b_.i64;
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i64 = vsub_s64(a_.neon_i64, b_.neon_i64);
+#else
+ r_.i64[0] = a_.i64[0] - b_.i64[0];
+#endif
+
+ return simde__m64_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_sub_si64(a, b) simde_mm_sub_si64(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_subs_epi8(simde__m128i a, simde__m128i b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_subs_epi8(a, b);
+#else
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a),
+ b_ = simde__m128i_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i8 = vqsubq_s8(a_.neon_i8, b_.neon_i8);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_i8x16_sub_saturate(a_.wasm_v128, b_.wasm_v128);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_) / sizeof(r_.i8[0])); i++) {
+ if (((b_.i8[i]) > 0 && (a_.i8[i]) < INT8_MIN + (b_.i8[i]))) {
+ r_.i8[i] = INT8_MIN;
+ } else if ((b_.i8[i]) < 0 &&
+ (a_.i8[i]) > INT8_MAX + (b_.i8[i])) {
+ r_.i8[i] = INT8_MAX;
+ } else {
+ r_.i8[i] = (a_.i8[i]) - (b_.i8[i]);
+ }
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_subs_epi8(a, b) simde_mm_subs_epi8(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_subs_epi16(simde__m128i a, simde__m128i b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_subs_epi16(a, b);
+#else
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a),
+ b_ = simde__m128i_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i16 = vqsubq_s16(a_.neon_i16, b_.neon_i16);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_i16x8_sub_saturate(a_.wasm_v128, b_.wasm_v128);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_) / sizeof(r_.i16[0])); i++) {
+ if (((b_.i16[i]) > 0 &&
+ (a_.i16[i]) < INT16_MIN + (b_.i16[i]))) {
+ r_.i16[i] = INT16_MIN;
+ } else if ((b_.i16[i]) < 0 &&
+ (a_.i16[i]) > INT16_MAX + (b_.i16[i])) {
+ r_.i16[i] = INT16_MAX;
+ } else {
+ r_.i16[i] = (a_.i16[i]) - (b_.i16[i]);
+ }
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_subs_epi16(a, b) simde_mm_subs_epi16(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_subs_epu8(simde__m128i a, simde__m128i b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_subs_epu8(a, b);
+#else
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a),
+ b_ = simde__m128i_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_u8 = vqsubq_u8(a_.neon_u8, b_.neon_u8);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_u8x16_sub_saturate(a_.wasm_v128, b_.wasm_v128);
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ r_.altivec_u8 = vec_subs(a_.altivec_u8, b_.altivec_u8);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_) / sizeof(r_.i8[0])); i++) {
+ const int32_t x = a_.u8[i] - b_.u8[i];
+ if (x < 0) {
+ r_.u8[i] = 0;
+ } else if (x > UINT8_MAX) {
+ r_.u8[i] = UINT8_MAX;
+ } else {
+ r_.u8[i] = HEDLEY_STATIC_CAST(uint8_t, x);
+ }
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_subs_epu8(a, b) simde_mm_subs_epu8(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_subs_epu16(simde__m128i a, simde__m128i b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_subs_epu16(a, b);
+#else
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a),
+ b_ = simde__m128i_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_u16 = vqsubq_u16(a_.neon_u16, b_.neon_u16);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_u16x8_sub_saturate(a_.wasm_v128, b_.wasm_v128);
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ r_.altivec_u16 = vec_subs(a_.altivec_u16, b_.altivec_u16);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_) / sizeof(r_.i16[0])); i++) {
+ const int32_t x = a_.u16[i] - b_.u16[i];
+ if (x < 0) {
+ r_.u16[i] = 0;
+ } else if (x > UINT16_MAX) {
+ r_.u16[i] = UINT16_MAX;
+ } else {
+ r_.u16[i] = HEDLEY_STATIC_CAST(uint16_t, x);
+ }
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_subs_epu16(a, b) simde_mm_subs_epu16(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+int simde_mm_ucomieq_sd(simde__m128d a, simde__m128d b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_ucomieq_sd(a, b);
+#else
+ simde__m128d_private a_ = simde__m128d_to_private(a),
+ b_ = simde__m128d_to_private(b);
+ int r;
+
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ uint64x2_t a_not_nan = vceqq_f64(a_.neon_f64, a_.neon_f64);
+ uint64x2_t b_not_nan = vceqq_f64(b_.neon_f64, b_.neon_f64);
+ uint64x2_t a_or_b_nan = vreinterpretq_u64_u32(vmvnq_u32(
+ vreinterpretq_u32_u64(vandq_u64(a_not_nan, b_not_nan))));
+ uint64x2_t a_eq_b = vceqq_f64(a_.neon_f64, b_.neon_f64);
+ r = !!(vgetq_lane_u64(vorrq_u64(a_or_b_nan, a_eq_b), 0) != 0);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ return wasm_f64x2_extract_lane(a_.wasm_v128, 0) ==
+ wasm_f64x2_extract_lane(b_.wasm_v128, 0);
+#elif defined(SIMDE_HAVE_FENV_H)
+ fenv_t envp;
+ int x = feholdexcept(&envp);
+ r = a_.f64[0] == b_.f64[0];
+ if (HEDLEY_LIKELY(x == 0))
+ fesetenv(&envp);
+#else
+ r = a_.f64[0] == b_.f64[0];
+#endif
+
+ return r;
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_ucomieq_sd(a, b) simde_mm_ucomieq_sd(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+int simde_mm_ucomige_sd(simde__m128d a, simde__m128d b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_ucomige_sd(a, b);
+#else
+ simde__m128d_private a_ = simde__m128d_to_private(a),
+ b_ = simde__m128d_to_private(b);
+ int r;
+
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ uint64x2_t a_not_nan = vceqq_f64(a_.neon_f64, a_.neon_f64);
+ uint64x2_t b_not_nan = vceqq_f64(b_.neon_f64, b_.neon_f64);
+ uint64x2_t a_and_b_not_nan = vandq_u64(a_not_nan, b_not_nan);
+ uint64x2_t a_ge_b = vcgeq_f64(a_.neon_f64, b_.neon_f64);
+ r = !!(vgetq_lane_u64(vandq_u64(a_and_b_not_nan, a_ge_b), 0) != 0);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ return wasm_f64x2_extract_lane(a_.wasm_v128, 0) >=
+ wasm_f64x2_extract_lane(b_.wasm_v128, 0);
+#elif defined(SIMDE_HAVE_FENV_H)
+ fenv_t envp;
+ int x = feholdexcept(&envp);
+ r = a_.f64[0] >= b_.f64[0];
+ if (HEDLEY_LIKELY(x == 0))
+ fesetenv(&envp);
+#else
+ r = a_.f64[0] >= b_.f64[0];
+#endif
+
+ return r;
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_ucomige_sd(a, b) simde_mm_ucomige_sd(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+int simde_mm_ucomigt_sd(simde__m128d a, simde__m128d b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_ucomigt_sd(a, b);
+#else
+ simde__m128d_private a_ = simde__m128d_to_private(a),
+ b_ = simde__m128d_to_private(b);
+ int r;
+
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ uint64x2_t a_not_nan = vceqq_f64(a_.neon_f64, a_.neon_f64);
+ uint64x2_t b_not_nan = vceqq_f64(b_.neon_f64, b_.neon_f64);
+ uint64x2_t a_and_b_not_nan = vandq_u64(a_not_nan, b_not_nan);
+ uint64x2_t a_gt_b = vcgtq_f64(a_.neon_f64, b_.neon_f64);
+ r = !!(vgetq_lane_u64(vandq_u64(a_and_b_not_nan, a_gt_b), 0) != 0);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ return wasm_f64x2_extract_lane(a_.wasm_v128, 0) >
+ wasm_f64x2_extract_lane(b_.wasm_v128, 0);
+#elif defined(SIMDE_HAVE_FENV_H)
+ fenv_t envp;
+ int x = feholdexcept(&envp);
+ r = a_.f64[0] > b_.f64[0];
+ if (HEDLEY_LIKELY(x == 0))
+ fesetenv(&envp);
+#else
+ r = a_.f64[0] > b_.f64[0];
+#endif
+
+ return r;
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_ucomigt_sd(a, b) simde_mm_ucomigt_sd(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+int simde_mm_ucomile_sd(simde__m128d a, simde__m128d b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_ucomile_sd(a, b);
+#else
+ simde__m128d_private a_ = simde__m128d_to_private(a),
+ b_ = simde__m128d_to_private(b);
+ int r;
+
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ uint64x2_t a_not_nan = vceqq_f64(a_.neon_f64, a_.neon_f64);
+ uint64x2_t b_not_nan = vceqq_f64(b_.neon_f64, b_.neon_f64);
+ uint64x2_t a_or_b_nan = vreinterpretq_u64_u32(vmvnq_u32(
+ vreinterpretq_u32_u64(vandq_u64(a_not_nan, b_not_nan))));
+ uint64x2_t a_le_b = vcleq_f64(a_.neon_f64, b_.neon_f64);
+ r = !!(vgetq_lane_u64(vorrq_u64(a_or_b_nan, a_le_b), 0) != 0);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ return wasm_f64x2_extract_lane(a_.wasm_v128, 0) <=
+ wasm_f64x2_extract_lane(b_.wasm_v128, 0);
+#elif defined(SIMDE_HAVE_FENV_H)
+ fenv_t envp;
+ int x = feholdexcept(&envp);
+ r = a_.f64[0] <= b_.f64[0];
+ if (HEDLEY_LIKELY(x == 0))
+ fesetenv(&envp);
+#else
+ r = a_.f64[0] <= b_.f64[0];
+#endif
+
+ return r;
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_ucomile_sd(a, b) simde_mm_ucomile_sd(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+int simde_mm_ucomilt_sd(simde__m128d a, simde__m128d b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_ucomilt_sd(a, b);
+#else
+ simde__m128d_private a_ = simde__m128d_to_private(a),
+ b_ = simde__m128d_to_private(b);
+ int r;
+
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ uint64x2_t a_not_nan = vceqq_f64(a_.neon_f64, a_.neon_f64);
+ uint64x2_t b_not_nan = vceqq_f64(b_.neon_f64, b_.neon_f64);
+ uint64x2_t a_or_b_nan = vreinterpretq_u64_u32(vmvnq_u32(
+ vreinterpretq_u32_u64(vandq_u64(a_not_nan, b_not_nan))));
+ uint64x2_t a_lt_b = vcltq_f64(a_.neon_f64, b_.neon_f64);
+ r = !!(vgetq_lane_u64(vorrq_u64(a_or_b_nan, a_lt_b), 0) != 0);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ return wasm_f64x2_extract_lane(a_.wasm_v128, 0) <
+ wasm_f64x2_extract_lane(b_.wasm_v128, 0);
+#elif defined(SIMDE_HAVE_FENV_H)
+ fenv_t envp;
+ int x = feholdexcept(&envp);
+ r = a_.f64[0] < b_.f64[0];
+ if (HEDLEY_LIKELY(x == 0))
+ fesetenv(&envp);
+#else
+ r = a_.f64[0] < b_.f64[0];
+#endif
+
+ return r;
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_ucomilt_sd(a, b) simde_mm_ucomilt_sd(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+int simde_mm_ucomineq_sd(simde__m128d a, simde__m128d b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_ucomineq_sd(a, b);
+#else
+ simde__m128d_private a_ = simde__m128d_to_private(a),
+ b_ = simde__m128d_to_private(b);
+ int r;
+
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ uint64x2_t a_not_nan = vceqq_f64(a_.neon_f64, a_.neon_f64);
+ uint64x2_t b_not_nan = vceqq_f64(b_.neon_f64, b_.neon_f64);
+ uint64x2_t a_and_b_not_nan = vandq_u64(a_not_nan, b_not_nan);
+ uint64x2_t a_neq_b = vreinterpretq_u64_u32(vmvnq_u32(
+ vreinterpretq_u32_u64(vceqq_f64(a_.neon_f64, b_.neon_f64))));
+ r = !!(vgetq_lane_u64(vandq_u64(a_and_b_not_nan, a_neq_b), 0) != 0);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ return wasm_f64x2_extract_lane(a_.wasm_v128, 0) !=
+ wasm_f64x2_extract_lane(b_.wasm_v128, 0);
+#elif defined(SIMDE_HAVE_FENV_H)
+ fenv_t envp;
+ int x = feholdexcept(&envp);
+ r = a_.f64[0] != b_.f64[0];
+ if (HEDLEY_LIKELY(x == 0))
+ fesetenv(&envp);
+#else
+ r = a_.f64[0] != b_.f64[0];
+#endif
+
+ return r;
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_ucomineq_sd(a, b) simde_mm_ucomineq_sd(a, b)
+#endif
+
+#if defined(SIMDE_DIAGNOSTIC_DISABLE_UNINITIALIZED_)
+HEDLEY_DIAGNOSTIC_PUSH
+SIMDE_DIAGNOSTIC_DISABLE_UNINITIALIZED_
+#endif
+
+#if defined(SIMDE_DIAGNOSTIC_DISABLE_UNINITIALIZED_)
+HEDLEY_DIAGNOSTIC_POP
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+void simde_mm_lfence(void)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ _mm_lfence();
+#else
+ simde_mm_sfence();
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_lfence() simde_mm_lfence()
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+void simde_mm_mfence(void)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ _mm_mfence();
+#else
+ simde_mm_sfence();
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_mfence() simde_mm_mfence()
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_unpackhi_epi8(simde__m128i a, simde__m128i b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_unpackhi_epi8(a, b);
+#else
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a),
+ b_ = simde__m128i_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ r_.neon_i8 = vzip2q_s8(a_.neon_i8, b_.neon_i8);
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ int8x8_t a1 = vreinterpret_s8_s16(vget_high_s16(a_.neon_i16));
+ int8x8_t b1 = vreinterpret_s8_s16(vget_high_s16(b_.neon_i16));
+ int8x8x2_t result = vzip_s8(a1, b1);
+ r_.neon_i8 = vcombine_s8(result.val[0], result.val[1]);
+#elif defined(SIMDE_SHUFFLE_VECTOR_)
+ r_.i8 = SIMDE_SHUFFLE_VECTOR_(8, 16, a_.i8, b_.i8, 8, 24, 9, 25, 10, 26,
+ 11, 27, 12, 28, 13, 29, 14, 30, 15, 31);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < ((sizeof(r_) / sizeof(r_.i8[0])) / 2); i++) {
+ r_.i8[(i * 2)] =
+ a_.i8[i + ((sizeof(r_) / sizeof(r_.i8[0])) / 2)];
+ r_.i8[(i * 2) + 1] =
+ b_.i8[i + ((sizeof(r_) / sizeof(r_.i8[0])) / 2)];
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_unpackhi_epi8(a, b) simde_mm_unpackhi_epi8(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_unpackhi_epi16(simde__m128i a, simde__m128i b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_unpackhi_epi16(a, b);
+#else
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a),
+ b_ = simde__m128i_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ r_.neon_i16 = vzip2q_s16(a_.neon_i16, b_.neon_i16);
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ int16x4_t a1 = vget_high_s16(a_.neon_i16);
+ int16x4_t b1 = vget_high_s16(b_.neon_i16);
+ int16x4x2_t result = vzip_s16(a1, b1);
+ r_.neon_i16 = vcombine_s16(result.val[0], result.val[1]);
+#elif defined(SIMDE_SHUFFLE_VECTOR_)
+ r_.i16 = SIMDE_SHUFFLE_VECTOR_(16, 16, a_.i16, b_.i16, 4, 12, 5, 13, 6,
+ 14, 7, 15);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < ((sizeof(r_) / sizeof(r_.i16[0])) / 2); i++) {
+ r_.i16[(i * 2)] =
+ a_.i16[i + ((sizeof(r_) / sizeof(r_.i16[0])) / 2)];
+ r_.i16[(i * 2) + 1] =
+ b_.i16[i + ((sizeof(r_) / sizeof(r_.i16[0])) / 2)];
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_unpackhi_epi16(a, b) simde_mm_unpackhi_epi16(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_unpackhi_epi32(simde__m128i a, simde__m128i b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_unpackhi_epi32(a, b);
+#else
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a),
+ b_ = simde__m128i_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ r_.neon_i32 = vzip2q_s32(a_.neon_i32, b_.neon_i32);
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ int32x2_t a1 = vget_high_s32(a_.neon_i32);
+ int32x2_t b1 = vget_high_s32(b_.neon_i32);
+ int32x2x2_t result = vzip_s32(a1, b1);
+ r_.neon_i32 = vcombine_s32(result.val[0], result.val[1]);
+#elif defined(SIMDE_SHUFFLE_VECTOR_)
+ r_.i32 = SIMDE_SHUFFLE_VECTOR_(32, 16, a_.i32, b_.i32, 2, 6, 3, 7);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < ((sizeof(r_) / sizeof(r_.i32[0])) / 2); i++) {
+ r_.i32[(i * 2)] =
+ a_.i32[i + ((sizeof(r_) / sizeof(r_.i32[0])) / 2)];
+ r_.i32[(i * 2) + 1] =
+ b_.i32[i + ((sizeof(r_) / sizeof(r_.i32[0])) / 2)];
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_unpackhi_epi32(a, b) simde_mm_unpackhi_epi32(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_unpackhi_epi64(simde__m128i a, simde__m128i b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_unpackhi_epi64(a, b);
+#else
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a),
+ b_ = simde__m128i_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ int64x1_t a_h = vget_high_s64(a_.neon_i64);
+ int64x1_t b_h = vget_high_s64(b_.neon_i64);
+ r_.neon_i64 = vcombine_s64(a_h, b_h);
+#elif defined(SIMDE_SHUFFLE_VECTOR_)
+ r_.i64 = SIMDE_SHUFFLE_VECTOR_(64, 16, a_.i64, b_.i64, 1, 3);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < ((sizeof(r_) / sizeof(r_.i64[0])) / 2); i++) {
+ r_.i64[(i * 2)] =
+ a_.i64[i + ((sizeof(r_) / sizeof(r_.i64[0])) / 2)];
+ r_.i64[(i * 2) + 1] =
+ b_.i64[i + ((sizeof(r_) / sizeof(r_.i64[0])) / 2)];
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_unpackhi_epi64(a, b) simde_mm_unpackhi_epi64(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d simde_mm_unpackhi_pd(simde__m128d a, simde__m128d b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_unpackhi_pd(a, b);
+#else
+ simde__m128d_private r_, a_ = simde__m128d_to_private(a),
+ b_ = simde__m128d_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ float64x1_t a_l = vget_high_f64(a_.f64);
+ float64x1_t b_l = vget_high_f64(b_.f64);
+ r_.neon_f64 = vcombine_f64(a_l, b_l);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_v64x2_shuffle(a_.wasm_v128, b_.wasm_v128, 1, 3);
+#elif defined(SIMDE_SHUFFLE_VECTOR_)
+ r_.f64 = SIMDE_SHUFFLE_VECTOR_(64, 16, a_.f64, b_.f64, 1, 3);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < ((sizeof(r_) / sizeof(r_.f64[0])) / 2); i++) {
+ r_.f64[(i * 2)] =
+ a_.f64[i + ((sizeof(r_) / sizeof(r_.f64[0])) / 2)];
+ r_.f64[(i * 2) + 1] =
+ b_.f64[i + ((sizeof(r_) / sizeof(r_.f64[0])) / 2)];
+ }
+#endif
+
+ return simde__m128d_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_unpackhi_pd(a, b) simde_mm_unpackhi_pd(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_unpacklo_epi8(simde__m128i a, simde__m128i b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_unpacklo_epi8(a, b);
+#else
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a),
+ b_ = simde__m128i_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ r_.neon_i8 = vzip1q_s8(a_.neon_i8, b_.neon_i8);
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ int8x8_t a1 = vreinterpret_s8_s16(vget_low_s16(a_.neon_i16));
+ int8x8_t b1 = vreinterpret_s8_s16(vget_low_s16(b_.neon_i16));
+ int8x8x2_t result = vzip_s8(a1, b1);
+ r_.neon_i8 = vcombine_s8(result.val[0], result.val[1]);
+#elif defined(SIMDE_SHUFFLE_VECTOR_)
+ r_.i8 = SIMDE_SHUFFLE_VECTOR_(8, 16, a_.i8, b_.i8, 0, 16, 1, 17, 2, 18,
+ 3, 19, 4, 20, 5, 21, 6, 22, 7, 23);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < ((sizeof(r_) / sizeof(r_.i8[0])) / 2); i++) {
+ r_.i8[(i * 2)] = a_.i8[i];
+ r_.i8[(i * 2) + 1] = b_.i8[i];
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_unpacklo_epi8(a, b) simde_mm_unpacklo_epi8(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_unpacklo_epi16(simde__m128i a, simde__m128i b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_unpacklo_epi16(a, b);
+#else
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a),
+ b_ = simde__m128i_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ r_.neon_i16 = vzip1q_s16(a_.neon_i16, b_.neon_i16);
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ int16x4_t a1 = vget_low_s16(a_.neon_i16);
+ int16x4_t b1 = vget_low_s16(b_.neon_i16);
+ int16x4x2_t result = vzip_s16(a1, b1);
+ r_.neon_i16 = vcombine_s16(result.val[0], result.val[1]);
+#elif defined(SIMDE_SHUFFLE_VECTOR_)
+ r_.i16 = SIMDE_SHUFFLE_VECTOR_(16, 16, a_.i16, b_.i16, 0, 8, 1, 9, 2,
+ 10, 3, 11);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < ((sizeof(r_) / sizeof(r_.i16[0])) / 2); i++) {
+ r_.i16[(i * 2)] = a_.i16[i];
+ r_.i16[(i * 2) + 1] = b_.i16[i];
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_unpacklo_epi16(a, b) simde_mm_unpacklo_epi16(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_unpacklo_epi32(simde__m128i a, simde__m128i b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_unpacklo_epi32(a, b);
+#else
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a),
+ b_ = simde__m128i_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ r_.neon_i32 = vzip1q_s32(a_.neon_i32, b_.neon_i32);
+#elif defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ int32x2_t a1 = vget_low_s32(a_.neon_i32);
+ int32x2_t b1 = vget_low_s32(b_.neon_i32);
+ int32x2x2_t result = vzip_s32(a1, b1);
+ r_.neon_i32 = vcombine_s32(result.val[0], result.val[1]);
+#elif defined(SIMDE_SHUFFLE_VECTOR_)
+ r_.i32 = SIMDE_SHUFFLE_VECTOR_(32, 16, a_.i32, b_.i32, 0, 4, 1, 5);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < ((sizeof(r_) / sizeof(r_.i32[0])) / 2); i++) {
+ r_.i32[(i * 2)] = a_.i32[i];
+ r_.i32[(i * 2) + 1] = b_.i32[i];
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_unpacklo_epi32(a, b) simde_mm_unpacklo_epi32(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_unpacklo_epi64(simde__m128i a, simde__m128i b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_unpacklo_epi64(a, b);
+#else
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a),
+ b_ = simde__m128i_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ int64x1_t a_l = vget_low_s64(a_.i64);
+ int64x1_t b_l = vget_low_s64(b_.i64);
+ r_.neon_i64 = vcombine_s64(a_l, b_l);
+#elif defined(SIMDE_SHUFFLE_VECTOR_)
+ r_.i64 = SIMDE_SHUFFLE_VECTOR_(64, 16, a_.i64, b_.i64, 0, 2);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < ((sizeof(r_) / sizeof(r_.i64[0])) / 2); i++) {
+ r_.i64[(i * 2)] = a_.i64[i];
+ r_.i64[(i * 2) + 1] = b_.i64[i];
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_unpacklo_epi64(a, b) simde_mm_unpacklo_epi64(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d simde_mm_unpacklo_pd(simde__m128d a, simde__m128d b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_unpacklo_pd(a, b);
+#else
+ simde__m128d_private r_, a_ = simde__m128d_to_private(a),
+ b_ = simde__m128d_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ float64x1_t a_l = vget_low_f64(a_.f64);
+ float64x1_t b_l = vget_low_f64(b_.f64);
+ r_.neon_f64 = vcombine_f64(a_l, b_l);
+#elif defined(SIMDE_SHUFFLE_VECTOR_)
+ r_.f64 = SIMDE_SHUFFLE_VECTOR_(64, 16, a_.f64, b_.f64, 0, 2);
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < ((sizeof(r_) / sizeof(r_.f64[0])) / 2); i++) {
+ r_.f64[(i * 2)] = a_.f64[i];
+ r_.f64[(i * 2) + 1] = b_.f64[i];
+ }
+#endif
+
+ return simde__m128d_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_unpacklo_pd(a, b) simde_mm_unpacklo_pd(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128d simde_x_mm_negate_pd(simde__m128d a)
+{
+#if defined(SIMDE_X86_SSE_NATIVE)
+ return simde_mm_xor_pd(a, _mm_set1_pd(SIMDE_FLOAT64_C(-0.0)));
+#else
+ simde__m128d_private r_, a_ = simde__m128d_to_private(a);
+
+#if defined(SIMDE_POWER_ALTIVEC_P8_NATIVE) && \
+ (!defined(HEDLEY_GCC_VERSION) || HEDLEY_GCC_VERSION_CHECK(8, 1, 0))
+ r_.altivec_f64 = vec_neg(a_.altivec_f64);
+#elif defined(SIMDE_ARM_NEON_A64V8_NATIVE)
+ r_.neon_f64 = vnegq_f64(a_.neon_f64);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_f64x2_neg(a_.wasm_v128);
+#elif defined(SIMDE_VECTOR_NEGATE)
+ r_.f64 = -a_.f64;
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.f64) / sizeof(r_.f64[0])); i++) {
+ r_.f64[i] = -a_.f64[i];
+ }
+#endif
+
+ return simde__m128d_from_private(r_);
+#endif
+}
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_mm_xor_si128(simde__m128i a, simde__m128i b)
+{
+#if defined(SIMDE_X86_SSE2_NATIVE)
+ return _mm_xor_si128(a, b);
+#else
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a),
+ b_ = simde__m128i_to_private(b);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i32 = veorq_s32(a_.neon_i32, b_.neon_i32);
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ r_.altivec_i32 = vec_xor(a_.altivec_i32, b_.altivec_i32);
+#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
+ r_.i32f = a_.i32f ^ b_.i32f;
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i32f) / sizeof(r_.i32f[0])); i++) {
+ r_.i32f[i] = a_.i32f[i] ^ b_.i32f[i];
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _mm_xor_si128(a, b) simde_mm_xor_si128(a, b)
+#endif
+
+SIMDE_FUNCTION_ATTRIBUTES
+simde__m128i simde_x_mm_not_si128(simde__m128i a)
+{
+#if defined(SIMDE_X86_AVX512VL_NATIVE)
+ return _mm_ternarylogic_epi32(a, a, a, 0x55);
+#else
+ simde__m128i_private r_, a_ = simde__m128i_to_private(a);
+
+#if defined(SIMDE_ARM_NEON_A32V7_NATIVE)
+ r_.neon_i32 = vmvnq_s32(a_.neon_i32);
+#elif defined(SIMDE_POWER_ALTIVEC_P6_NATIVE)
+ r_.altivec_i32 = vec_nor(a_.altivec_i32, a_.altivec_i32);
+#elif defined(SIMDE_WASM_SIMD128_NATIVE)
+ r_.wasm_v128 = wasm_v128_not(a_.wasm_v128);
+#elif defined(SIMDE_VECTOR_SUBSCRIPT_OPS)
+ r_.i32f = ~a_.i32f;
+#else
+ SIMDE_VECTORIZE
+ for (size_t i = 0; i < (sizeof(r_.i32f) / sizeof(r_.i32f[0])); i++) {
+ r_.i32f[i] = ~(a_.i32f[i]);
+ }
+#endif
+
+ return simde__m128i_from_private(r_);
+#endif
+}
+
+#define SIMDE_MM_SHUFFLE2(x, y) (((x) << 1) | (y))
+#if defined(SIMDE_X86_SSE2_ENABLE_NATIVE_ALIASES)
+#define _MM_SHUFFLE2(x, y) SIMDE_MM_SHUFFLE2(x, y)
+#endif
+
+SIMDE_END_DECLS_
+
+HEDLEY_DIAGNOSTIC_POP
+
+#endif /* !defined(SIMDE_X86_SSE2_H) */
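Note on the scalar fallbacks for the _mm_ucomi*_sd family added above: they wrap the plain C comparison in feholdexcept()/fesetenv() so that comparing a NaN operand cannot leave a floating-point exception flag pending or trigger a trap in the caller's environment, mirroring the non-trapping behaviour of the hardware instruction. A minimal standalone sketch of that pattern (illustrative only, not taken from the sources; the function name is hypothetical):

    #include <fenv.h>
    #include <stdbool.h>

    /* Compare two doubles without disturbing the caller's floating-point
     * exception state (same pattern as the scalar ucomi fallbacks). */
    static bool quiet_less(double a, double b)
    {
        fenv_t env;
        int held = feholdexcept(&env); /* save env, clear flags, non-stop mode */
        bool r = (a < b);
        if (held == 0)
            fesetenv(&env); /* restore, discarding flags raised by the compare */
        return r;
    }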
obs-studio-26.1.0.tar.xz/libobs/util/sse-intrin.h -> obs-studio-26.1.1.tar.xz/libobs/util/sse-intrin.h
Changed
#pragma once
-#if NEEDS_SIMDE
-
-#include "simde/sse2.h"
-
-#define __m128 simde__m128
-#define _mm_setzero_ps simde_mm_setzero_ps
-#define _mm_set_ps simde_mm_set_ps
-#define _mm_add_ps simde_mm_add_ps
-#define _mm_sub_ps simde_mm_sub_ps
-#define _mm_mul_ps simde_mm_mul_ps
-#define _mm_div_ps simde_mm_div_ps
-#define _mm_set1_ps simde_mm_set1_ps
-#define _mm_movehl_ps simde_mm_movehl_ps
-#define _mm_shuffle_ps simde_mm_shuffle_ps
-#define _mm_min_ps simde_mm_min_ps
-#define _mm_max_ps simde_mm_max_ps
-#define _mm_movelh_ps simde_mm_movelh_ps
-#define _mm_unpacklo_ps simde_mm_unpacklo_ps
-#define _mm_unpackhi_ps simde_mm_unpackhi_ps
-#define _mm_load_ps simde_mm_load_ps
-#define _mm_andnot_ps simde_mm_andnot_ps
-#define _mm_storeu_ps simde_mm_storeu_ps
-#define _mm_loadu_ps simde_mm_loadu_ps
-
-#define __m128i simde__m128i
-#define _mm_set1_epi32 simde_mm_set1_epi32
-#define _mm_set1_epi16 simde_mm_set1_epi16
-#define _mm_load_si128 simde_mm_load_si128
-#define _mm_packs_epi32 simde_mm_packs_epi32
-#define _mm_srli_si128 simde_mm_srli_si128
-#define _mm_and_si128 simde_mm_and_si128
-#define _mm_packus_epi16 simde_mm_packus_epi16
-#define _mm_add_epi64 simde_mm_add_epi64
-#define _mm_shuffle_epi32 simde_mm_shuffle_epi32
-#define _mm_srai_epi16 simde_mm_srai_epi16
-#define _mm_shufflelo_epi16 simde_mm_shufflelo_epi16
-#define _mm_storeu_si128 simde_mm_storeu_si128
-
-#define _MM_SHUFFLE SIMDE_MM_SHUFFLE
-#define _MM_TRANSPOSE4_PS SIMDE_MM_TRANSPOSE4_PS
-
-#else
-
-#if defined(__aarch64__) || defined(__arm__)
-#include <arm_neon.h>
-#include "sse2neon.h"
-#else
-#include <xmmintrin.h>
+#if defined(_MSC_VER)
#include <emmintrin.h>
-#endif
-
+#else
+#define SIMDE_ENABLE_NATIVE_ALIASES
+#include "simde/x86/sse2.h"
#endif
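With the rewritten wrapper above, MSVC builds keep the real <emmintrin.h> intrinsics, while every other toolchain gets them from the system-wide SIMDe header; SIMDE_ENABLE_NATIVE_ALIASES makes the simde_* implementations answer to the stock _mm_* names and types, so callers need no changes. A minimal standalone sketch of a consumer (illustrative only, not from the OBS sources; add4_i32 is a hypothetical helper):

    #include <stdint.h>
    #define SIMDE_ENABLE_NATIVE_ALIASES
    #include "simde/x86/sse2.h" /* supplies __m128i and _mm_* on any target */

    /* Add four packed 32-bit integers element-wise. */
    static void add4_i32(const int32_t a[4], const int32_t b[4], int32_t out[4])
    {
        __m128i va = _mm_loadu_si128((const __m128i *)a);
        __m128i vb = _mm_loadu_si128((const __m128i *)b);
        _mm_storeu_si128((__m128i *)out, _mm_add_epi32(va, vb));
    }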
obs-studio-26.1.0.tar.xz/plugins/coreaudio-encoder/CMakeLists.txt -> obs-studio-26.1.1.tar.xz/plugins/coreaudio-encoder/CMakeLists.txt
Changed
encoder.cpp)
if (WIN32)
+ # Set compiler flag before adding resource file
+ if (MINGW)
+ set_source_files_properties(${coreaudio-encoder_SOURCES}
+ PROPERTIES COMPILE_FLAGS "-Wno-multichar")
+ endif()
+
set(MODULE_DESCRIPTION "OBS Core Audio encoder")
configure_file(${CMAKE_SOURCE_DIR}/cmake/winrc/obs-module.rc.in coreaudio-encoder.rc)
list(APPEND coreaudio-encoder_SOURCES
coreaudio-encoder.rc)
set(coreaudio-encoder_HEADERS windows-imports.h)
set(coreaudio-encoder_LIBS )
-
- if (MINGW)
- set_source_files_properties(${coreaudio-encoder_SOURCES}
- PROPERTIES COMPILE_FLAGS "-Wno-multichar")
- endif()
else()
find_library(COREFOUNDATION CoreFoundation)
find_library(COREAUDIO CoreAudio)
obs-studio-26.1.0.tar.xz/plugins/decklink/DecklinkInput.cpp -> obs-studio-26.1.1.tar.xz/plugins/decklink/DecklinkInput.cpp
Changed
return false;
}
- if (!instance->StartCapture(mode, bmdVideoConnection,
+ if (!instance->StartCapture(mode, allow10Bit, bmdVideoConnection,
bmdAudioConnection)) {
instance = nullptr;
return false;
obs-studio-26.1.0.tar.xz/plugins/decklink/DecklinkInput.hpp -> obs-studio-26.1.1.tar.xz/plugins/decklink/DecklinkInput.hpp
Changed
std::string hash;
long long id;
bool swap = false;
+ bool allow10Bit = false;
BMDVideoConnection videoConnection;
BMDAudioConnection audioConnection;
};
obs-studio-26.1.0.tar.xz/plugins/decklink/OBSVideoFrame.cpp -> obs-studio-26.1.1.tar.xz/plugins/decklink/OBSVideoFrame.cpp
Changed
#include "OBSVideoFrame.h"
-OBSVideoFrame::OBSVideoFrame(long width, long height)
+OBSVideoFrame::OBSVideoFrame(long width, long height,
+ BMDPixelFormat pixelFormat)
{
+ int bpp = 2;
this->width = width;
this->height = height;
- this->rowBytes = width * 2;
- this->data = new unsigned char[width * height * 2 + 1];
+ this->rowBytes = width * bpp;
+ this->data = new unsigned char[width * height * bpp + 1];
+ this->pixelFormat = pixelFormat;
}
HRESULT OBSVideoFrame::SetFlags(BMDFrameFlags newFlags)
obs-studio-26.1.0.tar.xz/plugins/decklink/OBSVideoFrame.h -> obs-studio-26.1.1.tar.xz/plugins/decklink/OBSVideoFrame.h
Changed
unsigned char *data;
public:
- OBSVideoFrame(long width, long height);
+ OBSVideoFrame(long width, long height, BMDPixelFormat pixelFormat);
HRESULT STDMETHODCALLTYPE SetFlags(BMDFrameFlags newFlags) override;
obs-studio-26.1.0.tar.xz/plugins/decklink/const.h -> obs-studio-26.1.1.tar.xz/plugins/decklink/const.h
Changed
#define AUTO_START "auto_start"
#define KEYER "keyer"
#define SWAP "swap"
+#define ALLOW_10_BIT "allow_10_bit"
#define TEXT_DEVICE obs_module_text("Device")
#define TEXT_VIDEO_CONNECTION obs_module_text("VideoConnection")
#define TEXT_ENABLE_KEYER obs_module_text("Keyer")
#define TEXT_SWAP obs_module_text("SwapFC-LFE")
#define TEXT_SWAP_TOOLTIP obs_module_text("SwapFC-LFE.Tooltip")
+#define TEXT_ALLOW_10_BIT obs_module_text("Allow10Bit")
obs-studio-26.1.0.tar.xz/plugins/decklink/data/locale/en-US.ini -> obs-studio-26.1.1.tar.xz/plugins/decklink/data/locale/en-US.ini
Changed
SwapFC-LFE.Tooltip="Swap Front Center Channel and LFE Channel"
VideoConnection="Video Connection"
AudioConnection="Audio Connection"
+Allow10Bit="Allow 10 Bit (Required for SDI captions, may cause performance overhead)"
\ No newline at end of file
obs-studio-26.1.0.tar.xz/plugins/decklink/decklink-device-instance.cpp -> obs-studio-26.1.1.tar.xz/plugins/decklink/decklink-device-instance.cpp
Changed
return VIDEO_FORMAT_BGRX;
default:
- case bmdFormat8BitYUV:;
+ case bmdFormat8BitYUV:
+ case bmdFormat10BitYUV:;
+ return VIDEO_FORMAT_UYVY;
}
-
- return VIDEO_FORMAT_UYVY;
}
static inline int ConvertChannelFormat(speaker_layout format)
packets->Release();
}
- IDeckLinkVideoConversion *frameConverter =
- CreateVideoConversionInstance();
+ IDeckLinkVideoFrame *frame;
+ if (videoFrame->GetPixelFormat() != convertFrame->GetPixelFormat()) {
+ IDeckLinkVideoConversion *frameConverter =
+ CreateVideoConversionInstance();
+
+ frameConverter->ConvertFrame(videoFrame, convertFrame);
- frameConverter->ConvertFrame(videoFrame, convertFrame);
+ frame = convertFrame;
+ } else {
+ frame = videoFrame;
+ }
void *bytes;
- if (convertFrame->GetBytes(&bytes) != S_OK) {
+ if (frame->GetBytes(&bytes) != S_OK) {
LOG(LOG_WARNING, "Failed to get video frame data");
return;
}
currentFrame.data[0] = (uint8_t *)bytes;
- currentFrame.linesize[0] = (uint32_t)convertFrame->GetRowBytes();
- currentFrame.width = (uint32_t)convertFrame->GetWidth();
- currentFrame.height = (uint32_t)convertFrame->GetHeight();
+ currentFrame.linesize[0] = (uint32_t)frame->GetRowBytes();
+ currentFrame.width = (uint32_t)frame->GetWidth();
+ currentFrame.height = (uint32_t)frame->GetHeight();
currentFrame.timestamp = timestamp;
obs_source_output_video2(
currentFrame.color_range_min,
currentFrame.color_range_max);
- if (convertFrame) {
- delete convertFrame;
+ delete convertFrame;
+
+ BMDPixelFormat convertFormat;
+ switch (pixelFormat) {
+ case bmdFormat8BitBGRA:
+ convertFormat = bmdFormat8BitBGRA;
+ break;
+ default:
+ case bmdFormat10BitYUV:
+ case bmdFormat8BitYUV:;
+ convertFormat = bmdFormat8BitYUV;
+ break;
}
- convertFrame = new OBSVideoFrame(mode_->GetWidth(), mode_->GetHeight());
+
+ convertFrame = new OBSVideoFrame(mode_->GetWidth(), mode_->GetHeight(),
+ convertFormat);
#ifdef LOG_SETUP_VIDEO_FORMAT
LOG(LOG_INFO, "Setup video format: %s, %s, %s",
}
bool DeckLinkDeviceInstance::StartCapture(DeckLinkDeviceMode *mode_,
+ bool allow10Bit_,
BMDVideoConnection bmdVideoConnection,
BMDAudioConnection bmdAudioConnection)
{
bool isauto = mode_->GetName() == "Auto";
if (isauto) {
displayMode = bmdModeNTSC;
- pixelFormat = bmdFormat10BitYUV;
+ if (allow10Bit) {
+ pixelFormat = bmdFormat10BitYUV;
+ } else {
+ pixelFormat = bmdFormat8BitYUV;
+ }
flags = bmdVideoInputEnableFormatDetection;
} else {
displayMode = mode_->GetDisplayMode();
flags = bmdVideoInputFlagDefault;
}
+ allow10Bit = allow10Bit_;
+
const HRESULT videoResult =
input->EnableVideoInput(displayMode, pixelFormat, flags);
if (videoResult != S_OK) {
{
if (events & bmdVideoInputColorspaceChanged) {
- switch (detectedSignalFlags) {
- case bmdDetectedVideoInputRGB444:
+ if (detectedSignalFlags & bmdDetectedVideoInputRGB444) {
pixelFormat = bmdFormat8BitBGRA;
- break;
-
- default:
- case bmdDetectedVideoInputYCbCr422:
- pixelFormat = bmdFormat10BitYUV;
- break;
+ }
+ if (detectedSignalFlags & bmdDetectedVideoInputYCbCr422) {
+ if (detectedSignalFlags &
+ bmdDetectedVideoInput10BitDepth) {
+ if (allow10Bit) {
+ pixelFormat = bmdFormat10BitYUV;
+ } else {
+ pixelFormat = bmdFormat8BitYUV;
+ }
+ }
+ if (detectedSignalFlags &
+ bmdDetectedVideoInput8BitDepth) {
+ pixelFormat = bmdFormat8BitYUV;
+ }
}
}
obs-studio-26.1.0.tar.xz/plugins/decklink/decklink-device-instance.hpp -> obs-studio-26.1.1.tar.xz/plugins/decklink/decklink-device-instance.hpp
Changed
AudioRepacker *audioRepacker = nullptr;
speaker_layout channelFormat = SPEAKERS_STEREO;
bool swap;
+ bool allow10Bit;
OBSVideoFrame *convertFrame = nullptr;
IDeckLinkMutableVideoFrame *decklinkOutputFrame = nullptr;
inline DeckLinkDeviceMode *GetMode() const { return mode; }
- bool StartCapture(DeckLinkDeviceMode *mode,
+ bool StartCapture(DeckLinkDeviceMode *mode, bool allow10Bit,
BMDVideoConnection bmdVideoConnection,
BMDAudioConnection bmdAudioConnection);
bool StopCapture(void);
obs-studio-26.1.0.tar.xz/plugins/decklink/decklink-source.cpp -> obs-studio-26.1.1.tar.xz/plugins/decklink/decklink-source.cpp
Changed
decklink->SetChannelFormat(channelFormat);
decklink->hash = std::string(hash);
decklink->swap = obs_data_get_bool(settings, SWAP);
+ decklink->allow10Bit = obs_data_get_bool(settings, ALLOW_10_BIT);
decklink->Activate(device, id, videoConnection, audioConnection);
}
list = obs_properties_get(props, PIXEL_FORMAT);
obs_property_set_visible(list, id != MODE_ID_AUTO);
+ auto allow10BitProp = obs_properties_get(props, ALLOW_10_BIT);
+ obs_property_set_visible(allow10BitProp, id == MODE_ID_AUTO);
+
return true;
}
OBS_COMBO_FORMAT_INT);
obs_property_list_add_int(list, "8-bit YUV", bmdFormat8BitYUV);
+ obs_property_list_add_int(list, "10-bit YUV", bmdFormat10BitYUV);
obs_property_list_add_int(list, "8-bit BGRA", bmdFormat8BitBGRA);
list = obs_properties_add_list(props, COLOR_SPACE, TEXT_COLOR_SPACE,
obs_properties_add_bool(props, DEACTIVATE_WNS, TEXT_DWNS);
+ obs_properties_add_bool(props, ALLOW_10_BIT, TEXT_ALLOW_10_BIT);
+
UNUSED_PARAMETER(data);
return props;
}
obs-studio-26.1.0.tar.xz/plugins/image-source/image-source.c -> obs-studio-26.1.1.tar.xz/plugins/image-source/image-source.c
Changed
}
static const char *image_filter =
- "All formats (*.bmp *.tga *.png *.jpeg *.jpg *.gif *.psd);;"
+ "All formats (*.bmp *.tga *.png *.jpeg *.jpg *.gif *.psd *.webp);;"
"BMP Files (*.bmp);;"
"Targa Files (*.tga);;"
"PNG Files (*.png);;"
"JPEG Files (*.jpeg *.jpg);;"
"GIF Files (*.gif);;"
"PSD Files (*.psd);;"
+ "WebP Files (*.webp);;"
"All Files (*.*)";
static obs_properties_t *image_source_properties(void *data)
obs-studio-26.1.0.tar.xz/plugins/image-source/obs-slideshow.c -> obs-studio-26.1.1.tar.xz/plugins/image-source/obs-slideshow.c
Changed
if (!ss->transition || !ss->slide_time)
return;
- if (ss->restart_on_activate && !ss->randomize && ss->use_cut) {
+ if (ss->restart_on_activate && ss->use_cut) {
ss->elapsed = 0.0f;
- ss->cur_item = 0;
+ ss->cur_item = ss->randomize ? random_file(ss) : 0;
do_transition(ss, false);
ss->restart_on_activate = false;
ss->use_cut = false;
}
static const char *file_filter =
- "Image files (*.bmp *.tga *.png *.jpeg *.jpg *.gif)";
+ "Image files (*.bmp *.tga *.png *.jpeg *.jpg *.gif *.webp)";
static const char *aspects[] = {"16:9", "16:10", "4:3", "1:1"};
obs-studio-26.1.0.tar.xz/plugins/linux-jack/jack-wrapper.c -> obs-studio-26.1.1.tar.xz/plugins/linux-jack/jack-wrapper.c
Changed
int jack_process_callback(jack_nframes_t nframes, void *arg)
{
struct jack_data *data = (struct jack_data *)arg;
+ jack_nframes_t current_frames;
+ jack_time_t current_usecs, next_usecs;
+ float period_usecs;
+
+ uint64_t now = os_gettime_ns();
+
if (data == 0)
return 0;
- pthread_mutex_lock(&data->jack_mutex);
-
struct obs_source_audio out;
out.speakers = jack_channels_to_obs_speakers(data->channels);
out.samples_per_sec = jack_get_sample_rate(data->jack_client);
}
out.frames = nframes;
- out.timestamp = os_gettime_ns() -
- jack_frames_to_time(data->jack_client, nframes);
+ if (!jack_get_cycle_times(data->jack_client, &current_frames,
+ &current_usecs, &next_usecs, &period_usecs)) {
+ out.timestamp = now - (int64_t)(period_usecs * 1000);
+ } else {
+ out.timestamp = now - util_mul_div64(nframes, 1000000000ULL,
+ data->samples_per_sec);
+ blog(LOG_WARNING,
+ "jack_get_cycle_times error: guessing timestamp");
+ }
+ /* FIXME: this function is not realtime-safe, we should do something
+ * about this */
obs_source_output_audio(data->source, &out);
- pthread_mutex_unlock(&data->jack_mutex);
return 0;
}
data->jack_ports[i] = jack_port_register(
data->jack_client, port_name, JACK_DEFAULT_AUDIO_TYPE,
- JackPortIsInput, 0);
+ JackPortIsInput | JackPortIsTerminal, 0);
if (data->jack_ports[i] == NULL) {
blog(LOG_ERROR,
"jack_port_register Error:"
pthread_mutex_lock(&data->jack_mutex);
if (data->jack_client) {
+ jack_client_close(data->jack_client);
if (data->jack_ports != NULL) {
- for (int i = 0; i < data->channels; ++i) {
- if (data->jack_ports[i] != NULL)
- jack_port_unregister(
- data->jack_client,
- data->jack_ports[i]);
- }
bfree(data->jack_ports);
data->jack_ports = NULL;
}
- jack_client_close(data->jack_client);
data->jack_client = NULL;
}
pthread_mutex_unlock(&data->jack_mutex);
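For reference on the timestamp fallback above: util_mul_div64(nframes, 1000000000ULL, samples_per_sec) is libobs' a * b / c helper for 64-bit values, here converting one JACK cycle's frame count into nanoseconds. As a worked example with assumed figures (not from the sources), a 256-frame cycle at 48000 Hz spans 256 * 1000000000 / 48000 ≈ 5,333,333 ns (about 5.3 ms); subtracting that from os_gettime_ns() places the reported timestamp at the start of the cycle, which is what the preferred jack_get_cycle_times() path also approximates via period_usecs * 1000.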
obs-studio-26.1.0.tar.xz/plugins/mac-virtualcam/src/dal-plugin/CMSampleBufferUtils.mm -> obs-studio-26.1.1.tar.xz/plugins/mac-virtualcam/src/dal-plugin/CMSampleBufferUtils.mm
Changed
static void releaseNSData(void *o, void *block, size_t size)
{
+ UNUSED_PARAMETER(block);
+ UNUSED_PARAMETER(size);
+
NSData *data = (__bridge_transfer NSData *)o;
data = nil; // Assuming ARC is enabled
}
obs-studio-26.1.0.tar.xz/plugins/mac-virtualcam/src/dal-plugin/Logging.h -> obs-studio-26.1.1.tar.xz/plugins/mac-virtualcam/src/dal-plugin/Logging.h
Changed
#define VLogFunc(fmt, ...)
#define ELog(fmt, ...) DLog(fmt, ##__VA_ARGS__)
+#define UNUSED_PARAMETER(param) (void)param
+
#endif /* Logging_h */
obs-studio-26.1.0.tar.xz/plugins/mac-virtualcam/src/dal-plugin/OBSDALDevice.mm -> obs-studio-26.1.1.tar.xz/plugins/mac-virtualcam/src/dal-plugin/OBSDALDevice.mm
Changed
case kCMIODevicePropertyDeviceMaster:
return sizeof(pid_t);
default:
- DLog(@"Device unhandled getPropertyDataSizeWithAddress for %@",
- [OBSDALObjectStore
- StringFromPropertySelector:address.mSelector]);
+ break;
};
return 0;
*dataUsed = sizeof(pid_t);
break;
default:
- DLog(@"Device unhandled getPropertyDataWithAddress for %@",
- [OBSDALObjectStore
- StringFromPropertySelector:address.mSelector]);
- *dataUsed = 0;
break;
};
}
case kCMIODevicePropertyLinkedCoreAudioDeviceUID:
return false;
default:
- DLog(@"Device unhandled hasPropertyWithAddress for %@",
- [OBSDALObjectStore
- StringFromPropertySelector:address.mSelector]);
return false;
};
}
case kCMIODevicePropertyDeviceMaster:
return true;
default:
- DLog(@"Device unhandled isPropertySettableWithAddress for %@",
- [OBSDALObjectStore
- StringFromPropertySelector:address.mSelector]);
return false;
};
}
self.masterPid = *static_cast<const pid_t *>(data);
break;
default:
- DLog(@"Device unhandled setPropertyDataWithAddress for %@",
- [OBSDALObjectStore
- StringFromPropertySelector:address.mSelector]);
break;
};
}
obs-studio-26.1.0.tar.xz/plugins/mac-virtualcam/src/dal-plugin/OBSDALPlugInInterface.mm -> obs-studio-26.1.1.tar.xz/plugins/mac-virtualcam/src/dal-plugin/OBSDALPlugInInterface.mm
Changed
ULONG HardwarePlugIn_AddRef(CMIOHardwarePlugInRef self)
{
+ UNUSED_PARAMETER(self);
+
sRefCount += 1;
DLogFunc(@"sRefCount now = %d", sRefCount);
return sRefCount;
ULONG HardwarePlugIn_Release(CMIOHardwarePlugInRef self)
{
+ UNUSED_PARAMETER(self);
+
sRefCount -= 1;
DLogFunc(@"sRefCount now = %d", sRefCount);
return sRefCount;
HRESULT HardwarePlugIn_QueryInterface(CMIOHardwarePlugInRef self, REFIID uuid,
LPVOID *interface)
{
+ UNUSED_PARAMETER(self);
DLogFunc(@"");
if (!interface) {
void HardwarePlugIn_ObjectShow(CMIOHardwarePlugInRef self,
CMIOObjectID objectID)
{
+ UNUSED_PARAMETER(objectID);
DLogFunc(@"self=%p", self);
}
CMIOObjectID objectID,
const CMIOObjectPropertyAddress *address)
{
+ UNUSED_PARAMETER(self);
NSObject<CMIOObject> *object =
[OBSDALObjectStore GetObjectWithId:objectID];
const CMIOObjectPropertyAddress *address, UInt32 qualifierDataSize,
const void *qualifierData, UInt32 *dataSize)
{
+ UNUSED_PARAMETER(self);
NSObject<CMIOObject> *object =
[OBSDALObjectStore GetObjectWithId:objectID];
const void *qualifierData, UInt32 dataSize, UInt32 *dataUsed,
void *data)
{
+ UNUSED_PARAMETER(self);
NSObject<CMIOObject> *object =
[OBSDALObjectStore GetObjectWithId:objectID];
OSStatus HardwarePlugIn_DeviceSuspend(CMIOHardwarePlugInRef self,
CMIODeviceID deviceID)
{
+ UNUSED_PARAMETER(deviceID);
+
DLogFunc(@"self=%p", self);
return kCMIOHardwareNoError;
}
OSStatus HardwarePlugIn_DeviceResume(CMIOHardwarePlugInRef self,
CMIODeviceID deviceID)
{
+ UNUSED_PARAMETER(deviceID);
+
DLogFunc(@"self=%p", self);
return kCMIOHardwareNoError;
}
CMIODeviceID deviceID,
CMIODeviceAVCCommand *ioAVCCommand)
{
+ UNUSED_PARAMETER(deviceID);
+ UNUSED_PARAMETER(ioAVCCommand);
+
DLogFunc(@"self=%p", self);
return kCMIOHardwareNoError;
}
CMIODeviceID deviceID,
CMIODeviceRS422Command *ioRS422Command)
{
+ UNUSED_PARAMETER(deviceID);
+ UNUSED_PARAMETER(ioRS422Command);
+
DLogFunc(@"self=%p", self);
return kCMIOHardwareNoError;
}
OSStatus HardwarePlugIn_StreamDeckPlay(CMIOHardwarePlugInRef self,
CMIOStreamID streamID)
{
+ UNUSED_PARAMETER(streamID);
+
DLogFunc(@"self=%p", self);
return kCMIOHardwareIllegalOperationError;
}
OSStatus HardwarePlugIn_StreamDeckStop(CMIOHardwarePlugInRef self,
CMIOStreamID streamID)
{
+ UNUSED_PARAMETER(streamID);
+
DLogFunc(@"self=%p", self);
return kCMIOHardwareIllegalOperationError;
}
OSStatus HardwarePlugIn_StreamDeckJog(CMIOHardwarePlugInRef self,
CMIOStreamID streamID, SInt32 speed)
{
+ UNUSED_PARAMETER(streamID);
+ UNUSED_PARAMETER(speed);
+
DLogFunc(@"self=%p", self);
return kCMIOHardwareIllegalOperationError;
}
Float64 requestedTimecode,
Boolean playOnCue)
{
+ UNUSED_PARAMETER(streamID);
+ UNUSED_PARAMETER(requestedTimecode);
+ UNUSED_PARAMETER(playOnCue);
+
DLogFunc(@"self=%p", self);
return kCMIOHardwareIllegalOperationError;
}
obs-studio-26.1.0.tar.xz/plugins/mac-virtualcam/src/dal-plugin/OBSDALPluginMain.mm -> obs-studio-26.1.1.tar.xz/plugins/mac-virtualcam/src/dal-plugin/OBSDALPluginMain.mm
Changed
extern "C" {
void *PlugInMain(CFAllocatorRef allocator, CFUUIDRef requestedTypeUUID)
{
+ UNUSED_PARAMETER(allocator);
+
DLogFunc(@"version=%@", PLUGIN_VERSION);
if (!CFEqual(requestedTypeUUID, kCMIOHardwarePlugInTypeID)) {
return 0;
obs-studio-26.1.0.tar.xz/plugins/mac-virtualcam/src/dal-plugin/OBSDALStream.mm -> obs-studio-26.1.1.tar.xz/plugins/mac-virtualcam/src/dal-plugin/OBSDALStream.mm
Changed
- (void)fillFrame
{
if (CMSimpleQueueGetFullness(self.queue) >= 1.0) {
- DLog(@"Queue is full, bailing out");
return;
}
case kCMIOStreamPropertyClock:
return sizeof(CFTypeRef);
default:
- DLog(@"Stream unhandled getPropertyDataSizeWithAddress for %@",
- [OBSDALObjectStore
- StringFromPropertySelector:address.mSelector]);
return 0;
};
}
*dataUsed = sizeof(CFTypeRef);
break;
default:
- DLog(@"Stream unhandled getPropertyDataWithAddress for %@",
- [OBSDALObjectStore
- StringFromPropertySelector:address.mSelector]);
*dataUsed = 0;
};
}
StringFromPropertySelector:address.mSelector]);
return false;
default:
- DLog(@"Stream unhandled hasPropertyWithAddress for %@",
- [OBSDALObjectStore
- StringFromPropertySelector:address.mSelector]);
return false;
};
}
- (BOOL)isPropertySettableWithAddress:(CMIOObjectPropertyAddress)address
{
- DLog(@"Stream unhandled isPropertySettableWithAddress for %@",
- [OBSDALObjectStore StringFromPropertySelector:address.mSelector]);
return false;
}
dataSize:(UInt32)dataSize
data:(nonnull const void *)data
{
- DLog(@"Stream unhandled setPropertyDataWithAddress for %@",
- [OBSDALObjectStore StringFromPropertySelector:address.mSelector]);
}
@end
obs-studio-26.1.0.tar.xz/plugins/mac-virtualcam/src/obs-plugin/plugin-main.mm -> obs-studio-26.1.1.tar.xz/plugins/mac-virtualcam/src/obs-plugin/plugin-main.mm
Changed
static void *virtualcam_output_create(obs_data_t *settings,
obs_output_t *output)
{
+ UNUSED_PARAMETER(settings);
+
outputRef = output;
blog(LOG_DEBUG, "output_create");
static void virtualcam_output_destroy(void *data)
{
+ UNUSED_PARAMETER(data);
blog(LOG_DEBUG, "output_destroy");
sMachServer = nil;
}
static bool virtualcam_output_start(void *data)
{
+ UNUSED_PARAMETER(data);
+
bool hasDalPlugin = check_dal_plugin();
if (!hasDalPlugin) {
static void virtualcam_output_stop(void *data, uint64_t ts)
{
+ UNUSED_PARAMETER(data);
+ UNUSED_PARAMETER(ts);
+
blog(LOG_DEBUG, "output_stop");
obs_output_end_data_capture(outputRef);
[sMachServer stop];
static void virtualcam_output_raw_video(void *data, struct video_data *frame)
{
+ UNUSED_PARAMETER(data);
+
uint8_t *outData = frame->data[0];
if (frame->linesize[0] != (videoInfo.output_width * 2)) {
blog(LOG_ERROR,
obs-studio-26.1.0.tar.xz/plugins/obs-browser/obs-browser-plugin.cpp -> obs-studio-26.1.1.tar.xz/plugins/obs-browser/obs-browser-plugin.cpp
Changed
}
obs_data_release(private_data);
#endif
+
+#if defined(__APPLE__) && CHROME_VERSION_BUILD < 4183
+ // Make sure CEF malloc hijacking happens early in the process
+ obs_browser_initialize();
+#endif
+
return true;
}
obs-studio-26.1.0.tar.xz/plugins/obs-ffmpeg/ffmpeg-mux/ffmpeg-mux.c -> obs-studio-26.1.1.tar.xz/plugins/obs-ffmpeg/ffmpeg-mux/ffmpeg-mux.c
Changed
}
/* Treat "Invalid data found when processing input" and "Invalid argument" as non-fatal */
- if (ret == AVERROR_INVALIDDATA || ret == EINVAL) {
+ if (ret == AVERROR_INVALIDDATA || ret == -EINVAL) {
return true;
}
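The fix above works because FFmpeg's muxing functions report errors as negative values (AVERROR(e) expands to -e on POSIX-style platforms), so the old comparison against the positive errno constant EINVAL could never match a failure return. An equivalent, slightly more self-documenting form using the AVERROR macro (a sketch; the helper name is hypothetical, the shipped code compares against -EINVAL directly):

    #include <errno.h>
    #include <stdbool.h>
    #include <libavutil/error.h> /* AVERROR, AVERROR_INVALIDDATA */

    /* Treat "invalid data" and "invalid argument" as non-fatal mux errors. */
    static bool is_nonfatal_mux_error(int ret)
    {
        return ret == AVERROR_INVALIDDATA || ret == AVERROR(EINVAL);
    }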
obs-studio-26.1.0.tar.xz/plugins/rtmp-services/data/package.json -> obs-studio-26.1.1.tar.xz/plugins/rtmp-services/data/package.json
Changed
{
"url": "https://obsproject.com/obs2_update/rtmp-services",
- "version": 161,
+ "version": 163,
"files": [
{
"name": "services.json",
- "version": 161
+ "version": 163
}
]
}
obs-studio-26.1.0.tar.xz/plugins/rtmp-services/data/services.json -> obs-studio-26.1.1.tar.xz/plugins/rtmp-services/data/services.json
Changed
}
},
{
- "name": "VIMM",
+ "name": "Loola.tv",
+ "common": false,
"servers": [
{
- "name": "Europe: Frankfurt",
- "url": "rtmp://eu.vimm.tv/live"
+ "name": "US East: Virginia",
+ "url": "rtmp://rtmp.loola.tv/push"
},
{
- "name": "North America: Montreal",
- "url": "rtmp://us.vimm.tv/live"
+ "name": "EU Central: Germany",
+ "url": "rtmp://rtmp-eu.loola.tv/push"
+ },
+ {
+ "name": "South America: Brazil",
+ "url": "rtmp://rtmp-sa.loola.tv/push"
+ },
+ {
+ "name": "Asia/Pacific: Singapore",
+ "url": "rtmp://rtmp-sg.loola.tv/push"
+ },
+ {
+ "name": "Middle East: Bahrain",
+ "url": "rtmp://rtmp-me.loola.tv/push"
}
],
"recommended": {
"keyint": 2,
- "max video bitrate": 8000,
- "max audio bitrate": 320,
+ "profile": "high",
+ "max video bitrate": 2500,
+ "max audio bitrate": 160,
+ "bframes": 2,
"x264opts": "scenecut=0"
}
},
{
- "name": "Smashcast",
+ "name": "VIMM",
"servers": [
{
- "name": "Default",
- "url": "rtmp://live.hitbox.tv/push"
- },
- {
- "name": "EU-North: Amsterdam, Netherlands",
- "url": "rtmp://live.ams.hitbox.tv/push"
- },
- {
- "name": "EU-West: Paris, France",
- "url": "rtmp://live.cdg.hitbox.tv/push"
- },
- {
- "name": "EU-South: Milan, Italia",
- "url": "rtmp://live.mxp.hitbox.tv/push"
- },
- {
- "name": "Russia: Moscow",
- "url": "rtmp://live.dme.hitbox.tv/push"
- },
- {
- "name": "US-East: New York",
- "url": "rtmp://live.jfk.hitbox.tv/push"
- },
- {
- "name": "US-West: San Francisco",
- "url": "rtmp://live.sfo.hitbox.tv/push"
- },
- {
- "name": "US-West: Los Angeles",
- "url": "rtmp://live.lax.hitbox.tv/push"
- },
- {
- "name": "South America: Sao Paulo, Brazil",
- "url": "rtmp://live.gru.hitbox.tv/push"
- },
- {
- "name": "Asia: Singapore",
- "url": "rtmp://live.sin.hitbox.tv/push"
+ "name": "Europe: Frankfurt",
+ "url": "rtmp://eu.vimm.tv/live"
},
{
- "name": "Oceania: Sydney, Australia",
- "url": "rtmp://live.syd.hitbox.tv/push"
+ "name": "North America: Montreal",
+ "url": "rtmp://us.vimm.tv/live"
}
],
"recommended": {
"keyint": 2,
- "profile": "high",
- "max video bitrate": 3500,
- "max audio bitrate": 320
+ "max video bitrate": 8000,
+ "max audio bitrate": 320,
+ "x264opts": "scenecut=0"
}
},
{
{
"name": "US: New York, NY",
"url": "rtmp://live-nyc.vaughnsoft.net/live"
- },
+ },
{
"name": "US: Miami, FL",
"url": "rtmp://live-mia.vaughnsoft.net/live"
{
"name": "US: New York, NY",
"url": "rtmp://live-nyc.vaughnsoft.net/live"
- },
+ },
{
"name": "US: Miami, FL",
"url": "rtmp://live-mia.vaughnsoft.net/live"
"max audio bitrate": 160,
"x264opts": "tune=zerolatency"
}
- },
+ },
{
"name": "Mux",
"servers": [
obs-studio-26.1.0.tar.xz/plugins/win-dshow/libdshowcapture/dshowcapture.hpp -> obs-studio-26.1.1.tar.xz/plugins/win-dshow/libdshowcapture/dshowcapture.hpp
Changed
#define DSHOWCAPTURE_VERSION_MAJOR 0
#define DSHOWCAPTURE_VERSION_MINOR 8
-#define DSHOWCAPTURE_VERSION_PATCH 5
+#define DSHOWCAPTURE_VERSION_PATCH 6
#define MAKE_DSHOWCAPTURE_VERSION(major, minor, patch) \
((major << 24) | (minor << 16) | (patch))
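As a side note on the version bump above, MAKE_DSHOWCAPTURE_VERSION packs the three components into a single integer (major in bits 24 and up, minor in bits 16-23, patch in the low bits), so 0.8.6 encodes as 0x00080006. A quick compile-time check, assuming only the macro and constants shown in the header:

static_assert(MAKE_DSHOWCAPTURE_VERSION(DSHOWCAPTURE_VERSION_MAJOR,
					DSHOWCAPTURE_VERSION_MINOR,
					DSHOWCAPTURE_VERSION_PATCH) ==
		      0x00080006,
	      "libdshowcapture 0.8.6 packs to (0 << 24) | (8 << 16) | 6");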
obs-studio-26.1.0.tar.xz/plugins/win-dshow/libdshowcapture/source/dshow-base.cpp -> obs-studio-26.1.1.tar.xz/plugins/win-dshow/libdshowcapture/source/dshow-base.cpp
Changed
return hr;
}
+static HRESULT GetFriendlyName(REFCLSID deviceClass, const wchar_t *devPath,
+ wchar_t *name, int nameSize)
+{
+ /* Sanity checks */
+ if (!devPath)
+ return E_POINTER;
+ if (!name)
+ return E_POINTER;
+
+ /* Create device enumerator */
+ ComPtr<ICreateDevEnum> createDevEnum;
+ HRESULT hr = CoCreateInstance(CLSID_SystemDeviceEnum, NULL,
+ CLSCTX_INPROC_SERVER, IID_ICreateDevEnum,
+ (void **)&createDevEnum);
+
+ /* Enumerate filters */
+ ComPtr<IEnumMoniker> enumMoniker;
+ if (SUCCEEDED(hr)) {
+ /* returns S_FALSE if no devices are installed */
+ hr = createDevEnum->CreateClassEnumerator(deviceClass,
+ &enumMoniker, 0);
+ if (!enumMoniker)
+ hr = E_FAIL;
+ }
+
+ /* Cycle through the enumeration */
+ if (SUCCEEDED(hr)) {
+ ULONG fetched = 0;
+ ComPtr<IMoniker> moniker;
+
+ enumMoniker->Reset();
+
+ while (enumMoniker->Next(1, &moniker, &fetched) == S_OK) {
+
+ /* Get device path from moniker */
+ wchar_t monikerDevPath[512];
+ hr = ReadProperty(moniker, L"DevicePath",
+ monikerDevPath,
+ _ARRAYSIZE(monikerDevPath));
+
+ /* Find desired filter */
+ if (wcscmp(devPath, monikerDevPath) == 0) {
+
+ /* Get friendly name */
+ hr = ReadProperty(moniker, L"FriendlyName",
+ name, nameSize);
+ return hr;
+ }
+ }
+ }
+
+ return E_FAIL;
+}
+
+static bool MatchFriendlyNames(const wchar_t *vidName, const wchar_t *audName)
+{
+ /* Sanity checks */
+ if (!vidName)
+ return false;
+ if (!audName)
+ return false;
+
+ /* Convert strings to lower case */
+ wstring strVidName = vidName;
+ for (wchar_t &c : strVidName)
+ c = (wchar_t)tolower(c);
+ wstring strAudName = audName;
+ for (wchar_t &c : strAudName)
+ c = (wchar_t)tolower(c);
+
+ /* Remove 'video' from friendly name */
+ size_t posVid;
+ wstring searchVid[] = {L"(video) ", L"(video)", L"video ", L"video"};
+ for (int i = 0; i < _ARRAYSIZE(searchVid); i++) {
+ wstring &search = searchVid[i];
+ while ((posVid = strVidName.find(search)) !=
+ std::string::npos) {
+ strVidName.replace(posVid, search.length(), L"");
+ }
+ }
+
+ /* Remove 'audio' from friendly name */
+ size_t posAud;
+ wstring searchAud[] = {L"(audio) ", L"(audio)", L"audio ", L"audio"};
+ for (int i = 0; i < _ARRAYSIZE(searchAud); i++) {
+ wstring &search = searchAud[i];
+ while ((posAud = strAudName.find(search)) !=
+ std::string::npos) {
+ strAudName.replace(posAud, search.length(), L"");
+ }
+ }
+
+ return strVidName == strAudName;
+}
+
static bool GetDeviceAudioFilterInternal(REFCLSID deviceClass,
const wchar_t *vidDevPath,
- IBaseFilter **audioCaptureFilter)
+ IBaseFilter **audioCaptureFilter,
+ bool matchFilterName = false)
{
/* Get video device instance path */
wchar_t vidDevInstPath[512];
return false;
#endif
+ /* Get friendly name */
+ wchar_t vidName[512];
+ if (matchFilterName) {
+ hr = GetFriendlyName(CLSID_VideoInputDeviceCategory, vidDevPath,
+ vidName, _ARRAYSIZE(vidName));
+ if (FAILED(hr))
+ return false;
+ }
+
/* Create device enumerator */
ComPtr<ICreateDevEnum> createDevEnum;
if (SUCCEEDED(hr))
while (enumMoniker->Next(1, &moniker, &fetched) == S_OK) {
bool samePath = false;
-#if 0
- /* Get friendly name (helpful for debugging) */
- wchar_t friendlyName[512];
- ReadProperty(moniker, L"FriendlyName", friendlyName,
- _ARRAYSIZE(friendlyName));
-#endif
/* Get device path */
wchar_t audDevPath[512];
/* Get audio capture filter */
if (samePath) {
- hr = moniker->BindToObject(
- 0, 0, IID_IBaseFilter,
- (void **)audioCaptureFilter);
- if (SUCCEEDED(hr))
- return true;
+ /* Match video and audio filter names */
+ bool isSameFilterName = false;
+ if (matchFilterName) {
+ wchar_t audName[512];
+ hr = ReadProperty(moniker,
+ L"FriendlyName",
+ audName,
+ _ARRAYSIZE(audName));
+ if (SUCCEEDED(hr)) {
+ isSameFilterName =
+ MatchFriendlyNames(
+ vidName,
+ audName);
+ }
+ }
+
+ if (!matchFilterName || isSameFilterName) {
+ hr = moniker->BindToObject(
+ 0, 0, IID_IBaseFilter,
+ (void **)audioCaptureFilter);
+ if (SUCCEEDED(hr))
+ return true;
+ }
}
}
}
bool GetDeviceAudioFilter(const wchar_t *vidDevPath,
IBaseFilter **audioCaptureFilter)
{
- /* Search in "Audio capture sources" */
+ /* Search in "Audio capture sources" and match filter name */
bool success = GetDeviceAudioFilterInternal(
- CLSID_AudioInputDeviceCategory, vidDevPath, audioCaptureFilter);
+ CLSID_AudioInputDeviceCategory, vidDevPath, audioCaptureFilter,
+ true);
+
+ /* Search in "WDM Streaming Capture Devices" and match filter name */
+ if (!success)
+ success = GetDeviceAudioFilterInternal(KSCATEGORY_CAPTURE,
+ vidDevPath,
+ audioCaptureFilter,
+ true);
+
+ /* Search in "Audio capture sources" */
+ if (!success)
+ success = GetDeviceAudioFilterInternal(
+ CLSID_AudioInputDeviceCategory, vidDevPath,
+ audioCaptureFilter);
/* Search in "WDM Streaming Capture Devices" */
if (!success)
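To summarize the dshow-base.cpp change above: GetDeviceAudioFilter now first tries to pair the video device with an audio filter whose friendly name matches (searching "Audio capture sources" and then the WDM streaming capture category), and only afterwards falls back to the previous path-only matching. MatchFriendlyNames compares the names case-insensitively after stripping "video"/"audio" tokens. A standalone sketch of that normalization idea, using hypothetical device names and not the library's own code:

#include <cwctype>
#include <string>

/* Illustrative only: lower-case a friendly name and strip "video"/"audio"
 * tokens, the same idea MatchFriendlyNames uses, so that e.g.
 * "Elgato Game Capture HD60 S (Video)" and
 * "Elgato Game Capture HD60 S (Audio)" (hypothetical names) compare equal. */
static std::wstring NormalizeName(std::wstring s, const std::wstring &kind)
{
	for (wchar_t &c : s)
		c = (wchar_t)std::towlower(c);

	const std::wstring tokens[] = {L"(" + kind + L") ", L"(" + kind + L")",
				       kind + L" ", kind};
	for (const std::wstring &t : tokens) {
		size_t pos;
		while ((pos = s.find(t)) != std::wstring::npos)
			s.erase(pos, t.length());
	}
	return s;
}

/* Usage: NormalizeName(vidName, L"video") == NormalizeName(audName, L"audio") */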
obs-studio-26.1.0.tar.xz/plugins/win-dshow/libdshowcapture/source/output-filter.cpp -> obs-studio-26.1.1.tar.xz/plugins/win-dshow/libdshowcapture/source/output-filter.cpp
Changed
* USA
*/
-#include <strsafe.h>
#include "output-filter.hpp"
#include "dshow-formats.hpp"
#include "log.hpp"
+#include <strsafe.h>
+
namespace DShow {
#if 0
Request History
boombatower created request over 4 years ago
- Update to version 26.1.1 (changelog as listed above)
boombatower accepted request over 4 years ago
ok