Compare commits


No commits in common. "c27a5944c7ee852794bbe6823610f356e6ca4535" and "912fc54f253e70a72728ac4813283f5373f6a13f" have entirely different histories.

63 changed files with 372 additions and 6412 deletions

.gitignore

@@ -52,7 +52,6 @@ tests/sdr/
# Sphinx documentation
docs/build/
docs/_build/
# Jupyter Notebook
.ipynb_checkpoints


@@ -1,21 +1,5 @@
# Changelog
## [0.1.0] - 2026-02-20
### Added
- **Dual-Threshold Detection:** Logic to capture the start and end of signals, not just the peak.
- **Signal Smoothing & Noise Filters:** Prevents detections from breaking into fragments and ignores short interference spikes.
- **Auto-Frequency Calculation:** Automatically adjusts bounding boxes to fit signal frequency ranges tightly.
### Changed
- **Signal Power Detection:** Switched from raw signal strength to power for improved accuracy.
- **CLI Workflow:** `Clear` and `Remove` commands now modify files directly (in-place) to avoid redundant copies.
- **Metadata Logic:** Updated labels to show detection percentages and overhauled internal metadata cleaning.
- **Viewer UI:** Moved legend outside the plot, added a black background, and adjusted transparency for better spectrogram visibility.
### Fixed
- Prevented redundant `_annotated` suffixes in file naming patterns.
- Simplified internal math to increase processing speed and precision.
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/) and [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
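The dual-threshold detection, smoothing, and spike-rejection entries above can be sketched together. This is a minimal illustration of the general hysteresis technique, not the toolkit's actual implementation; all names and parameters are hypothetical.

```python
def detect_bursts(power_db, start_db, stop_db, min_len=3, max_gap=2):
    """Dual-threshold (hysteresis) burst detection on a 1-D power trace.

    A burst opens when power rises above `start_db` and stays open until it
    falls below the lower `stop_db`, capturing the start *and* end of a
    signal rather than just its peak. Bursts separated by fewer than
    `max_gap` samples are merged (smoothing), and detections shorter than
    `min_len` samples are dropped (short-spike rejection).
    Illustrative only; not the ria-toolkit-oss API.
    """
    bursts, start = [], None
    for i, p in enumerate(power_db):
        if start is None and p >= start_db:
            start = i                  # rising edge: open a burst
        elif start is not None and p < stop_db:
            bursts.append((start, i))  # falling edge: close it
            start = None
    if start is not None:
        bursts.append((start, len(power_db)))
    merged = []
    for b in bursts:                   # merge fragments across small gaps
        if merged and b[0] - merged[-1][1] <= max_gap:
            merged[-1] = (merged[-1][0], b[1])
        else:
            merged.append(b)
    return [b for b in merged if b[1] - b[0] >= min_len]
```

For example, a trace that dips briefly mid-burst (`[0, 0, 10, 12, 11, 9, 8, 0, 0, 10, 0, 0]` with `start_db=10`, `stop_db=5`) yields a single merged detection spanning both fragments rather than two broken ones.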

File diff suppressed because it is too large


@@ -1,29 +0,0 @@
/* Change the hex values below to customize heading colours */
.rst-content h1 { color: #2c3e50; }
.rst-content h2,
.rst-content h2 a { color: #ffffff !important; font-size: 22px !important; }
.rst-content h3,
.rst-content h3 a { color: #ffffff !important; font-size: 16px !important; }
.rst-content h3 code { font-size: inherit !important; }
.rst-content .admonition.warning {
background: #1a1a2e !important;
border-left: 4px solid #c0392b !important;
}
.rst-content .admonition.warning .admonition-title {
background: #c0392b !important;
color: #ffffff !important;
}
.rst-content .admonition.warning p {
color: #ffffff !important;
}
.rst-content h4 { color: #404040; }
.highlight * { color: #ffffff !important; }
.ria-cmd { color: #2980b9 !important; }


@@ -1,8 +0,0 @@
document.addEventListener('DOMContentLoaded', function () {
document.querySelectorAll('.highlight pre').forEach(function (pre) {
pre.innerHTML = pre.innerHTML.replace(
/((?:^|\n|>))(ria)(?=[ \t]|<)/g,
'$1<span class="ria-cmd">$2</span>'
);
});
});


@@ -14,7 +14,7 @@ sys.path.insert(0, os.path.abspath(os.path.join('..', '..')))
project = 'ria-toolkit-oss'
copyright = '2025, Qoherent Inc'
author = 'Qoherent Inc.'
release = '0.1.5'
release = '0.1.4'
# -- General configuration ---------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#general-configuration
@@ -73,6 +73,3 @@ def setup(app):
# https://www.sphinx-doc.org/en/master/usage/configuration.html#options-for-html-output
html_theme = 'sphinx_rtd_theme'
html_static_path = ['_static']
html_css_files = ['custom.css']
html_js_files = ['custom.js']

File diff suppressed because it is too large


@@ -11,15 +11,15 @@ The Radio Dataset Framework provides a software interface to access and manipula
the need for users to interface with the source files directly. Instead, users initialize and interact with a Python
object, while the complexities of efficient data retrieval and source file manipulation are managed behind the scenes.
Ria Toolkit OSS includes an abstract class called :py:obj:`ria_toolkit_oss.datatypes.datasets.RadioDataset`, which defines common properties and
Utils includes an abstract class called :py:obj:`ria_toolkit_oss.datatypes.datasets.RadioDataset`, which defines common properties and
behaviors for all radio datasets. :py:obj:`ria_toolkit_oss.datatypes.datasets.RadioDataset` can be considered a blueprint for all
other radio dataset classes. This class is then subclassed to define more specific blueprints for different types
of radio datasets. For example, :py:obj:`ria_toolkit_oss.datatypes.datasets.IQDataset`, which is tailored for machine learning tasks
involving the processing of signals represented as IQ (In-phase and Quadrature) samples.
Then, in the various project backends, there are concrete dataset classes, which inherit from both Ria Toolkit OSS and the base
Then, in the various project backends, there are concrete dataset classes, which inherit from both Utils and the base
dataset class from the respective backend. For example, the :py:obj:`TorchIQDataset` class extends both
:py:obj:`ria_toolkit_oss.datatypes.datasets.IQDataset` from Ria Toolkit OSS and :py:obj:`torch.ria_toolkit_oss.datatypes.IterableDataset` from
:py:obj:`ria_toolkit_oss.datatypes.datasets.IQDataset` from Utils and :py:obj:`torch.ria_toolkit_oss.datatypes.IterableDataset` from
PyTorch, providing a concrete dataset class tailored for IQ datasets and optimized for the PyTorch backend.
Dataset initialization
@@ -130,7 +130,7 @@ Dataset processing and manipulation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
All radio datasets support methods tailored specifically for radio processing. These methods are backend-independent,
inherited from the blueprints in Ria Toolkit OSS like :py:obj:`ria_toolkit_oss.datatypes.datasets.RadioDataset`.
inherited from the blueprints in Utils like :py:obj:`ria_toolkit_oss.datatypes.datasets.RadioDataset`.
For example, we can trim down the length of the examples from 1,024 to 512 samples, and then augment the dataset:
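The blueprint-and-subclass pattern described above can be sketched as follows. This is an illustrative reduction under assumed names, not the actual `ria_toolkit_oss.datatypes.datasets` source; a real backend class like `TorchIQDataset` would additionally inherit from the backend's own base dataset class.

```python
from abc import ABC, abstractmethod

class RadioDataset(ABC):
    """Blueprint for all radio datasets: abstract access, concrete
    backend-independent processing methods (illustrative stand-in)."""

    @abstractmethod
    def __len__(self): ...

    @abstractmethod
    def __getitem__(self, idx): ...

    def trim(self, num_samples):
        """Backend-independent method inherited by every subclass:
        shorten each example to the first `num_samples` samples."""
        return [self[i][:num_samples] for i in range(len(self))]

class IQDataset(RadioDataset):
    """Subclass blueprint tailored to examples stored as IQ samples."""

    def __init__(self, examples):
        self.examples = examples

    def __len__(self):
        return len(self.examples)

    def __getitem__(self, idx):
        return self.examples[idx]
```

With this sketch, trimming examples from 1,024 down to 512 samples is a call on the dataset object, with the source-file handling hidden behind the interface: `IQDataset(examples).trim(512)`.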

poetry.lock

@@ -1,4 +1,4 @@
# This file is automatically @generated by Poetry 2.1.2 and should not be changed by hand.
# This file is automatically @generated by Poetry 2.3.4 and should not be changed by hand.
[[package]]
name = "alabaster"
@@ -98,83 +98,6 @@ files = [
[package.extras]
dev = ["backports.zoneinfo ; python_version < \"3.9\"", "freezegun (>=1.0,<2.0)", "jinja2 (>=3.0)", "pytest (>=6.0)", "pytest-cov", "pytz", "setuptools", "tzdata ; sys_platform == \"win32\""]
[[package]]
name = "bcrypt"
version = "5.0.0"
description = "Modern password hashing for your software and your servers"
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
{file = "bcrypt-5.0.0-cp313-cp313t-macosx_10_12_universal2.whl", hash = "sha256:f3c08197f3039bec79cee59a606d62b96b16669cff3949f21e74796b6e3cd2be"},
{file = "bcrypt-5.0.0-cp313-cp313t-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:200af71bc25f22006f4069060c88ed36f8aa4ff7f53e67ff04d2ab3f1e79a5b2"},
{file = "bcrypt-5.0.0-cp313-cp313t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:baade0a5657654c2984468efb7d6c110db87ea63ef5a4b54732e7e337253e44f"},
{file = "bcrypt-5.0.0-cp313-cp313t-manylinux_2_28_aarch64.whl", hash = "sha256:c58b56cdfb03202b3bcc9fd8daee8e8e9b6d7e3163aa97c631dfcfcc24d36c86"},
{file = "bcrypt-5.0.0-cp313-cp313t-manylinux_2_28_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:4bfd2a34de661f34d0bda43c3e4e79df586e4716ef401fe31ea39d69d581ef23"},
{file = "bcrypt-5.0.0-cp313-cp313t-manylinux_2_28_x86_64.whl", hash = "sha256:ed2e1365e31fc73f1825fa830f1c8f8917ca1b3ca6185773b349c20fd606cec2"},
{file = "bcrypt-5.0.0-cp313-cp313t-manylinux_2_34_aarch64.whl", hash = "sha256:83e787d7a84dbbfba6f250dd7a5efd689e935f03dd83b0f919d39349e1f23f83"},
{file = "bcrypt-5.0.0-cp313-cp313t-manylinux_2_34_x86_64.whl", hash = "sha256:137c5156524328a24b9fac1cb5db0ba618bc97d11970b39184c1d87dc4bf1746"},
{file = "bcrypt-5.0.0-cp313-cp313t-musllinux_1_1_aarch64.whl", hash = "sha256:38cac74101777a6a7d3b3e3cfefa57089b5ada650dce2baf0cbdd9d65db22a9e"},
{file = "bcrypt-5.0.0-cp313-cp313t-musllinux_1_1_x86_64.whl", hash = "sha256:d8d65b564ec849643d9f7ea05c6d9f0cd7ca23bdd4ac0c2dbef1104ab504543d"},
{file = "bcrypt-5.0.0-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:741449132f64b3524e95cd30e5cd3343006ce146088f074f31ab26b94e6c75ba"},
{file = "bcrypt-5.0.0-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:212139484ab3207b1f0c00633d3be92fef3c5f0af17cad155679d03ff2ee1e41"},
{file = "bcrypt-5.0.0-cp313-cp313t-win32.whl", hash = "sha256:9d52ed507c2488eddd6a95bccee4e808d3234fa78dd370e24bac65a21212b861"},
{file = "bcrypt-5.0.0-cp313-cp313t-win_amd64.whl", hash = "sha256:f6984a24db30548fd39a44360532898c33528b74aedf81c26cf29c51ee47057e"},
{file = "bcrypt-5.0.0-cp313-cp313t-win_arm64.whl", hash = "sha256:9fffdb387abe6aa775af36ef16f55e318dcda4194ddbf82007a6f21da29de8f5"},
{file = "bcrypt-5.0.0-cp314-cp314t-macosx_10_12_universal2.whl", hash = "sha256:4870a52610537037adb382444fefd3706d96d663ac44cbb2f37e3919dca3d7ef"},
{file = "bcrypt-5.0.0-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:48f753100931605686f74e27a7b49238122aa761a9aefe9373265b8b7aa43ea4"},
{file = "bcrypt-5.0.0-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:f70aadb7a809305226daedf75d90379c397b094755a710d7014b8b117df1ebbf"},
{file = "bcrypt-5.0.0-cp314-cp314t-manylinux_2_28_aarch64.whl", hash = "sha256:744d3c6b164caa658adcb72cb8cc9ad9b4b75c7db507ab4bc2480474a51989da"},
{file = "bcrypt-5.0.0-cp314-cp314t-manylinux_2_28_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:a28bc05039bdf3289d757f49d616ab3efe8cf40d8e8001ccdd621cd4f98f4fc9"},
{file = "bcrypt-5.0.0-cp314-cp314t-manylinux_2_28_x86_64.whl", hash = "sha256:7f277a4b3390ab4bebe597800a90da0edae882c6196d3038a73adf446c4f969f"},
{file = "bcrypt-5.0.0-cp314-cp314t-manylinux_2_34_aarch64.whl", hash = "sha256:79cfa161eda8d2ddf29acad370356b47f02387153b11d46042e93a0a95127493"},
{file = "bcrypt-5.0.0-cp314-cp314t-manylinux_2_34_x86_64.whl", hash = "sha256:a5393eae5722bcef046a990b84dff02b954904c36a194f6cfc817d7dca6c6f0b"},
{file = "bcrypt-5.0.0-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:7f4c94dec1b5ab5d522750cb059bb9409ea8872d4494fd152b53cca99f1ddd8c"},
{file = "bcrypt-5.0.0-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:0cae4cb350934dfd74c020525eeae0a5f79257e8a201c0c176f4b84fdbf2a4b4"},
{file = "bcrypt-5.0.0-cp314-cp314t-win32.whl", hash = "sha256:b17366316c654e1ad0306a6858e189fc835eca39f7eb2cafd6aaca8ce0c40a2e"},
{file = "bcrypt-5.0.0-cp314-cp314t-win_amd64.whl", hash = "sha256:92864f54fb48b4c718fc92a32825d0e42265a627f956bc0361fe869f1adc3e7d"},
{file = "bcrypt-5.0.0-cp314-cp314t-win_arm64.whl", hash = "sha256:dd19cf5184a90c873009244586396a6a884d591a5323f0e8a5922560718d4993"},
{file = "bcrypt-5.0.0-cp38-abi3-macosx_10_12_universal2.whl", hash = "sha256:fc746432b951e92b58317af8e0ca746efe93e66555f1b40888865ef5bf56446b"},
{file = "bcrypt-5.0.0-cp38-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:c2388ca94ffee269b6038d48747f4ce8df0ffbea43f31abfa18ac72f0218effb"},
{file = "bcrypt-5.0.0-cp38-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:560ddb6ec730386e7b3b26b8b4c88197aaed924430e7b74666a586ac997249ef"},
{file = "bcrypt-5.0.0-cp38-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:d79e5c65dcc9af213594d6f7f1fa2c98ad3fc10431e7aa53c176b441943efbdd"},
{file = "bcrypt-5.0.0-cp38-abi3-manylinux_2_28_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:2b732e7d388fa22d48920baa267ba5d97cca38070b69c0e2d37087b381c681fd"},
{file = "bcrypt-5.0.0-cp38-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:0c8e093ea2532601a6f686edbc2c6b2ec24131ff5c52f7610dd64fa4553b5464"},
{file = "bcrypt-5.0.0-cp38-abi3-manylinux_2_34_aarch64.whl", hash = "sha256:5b1589f4839a0899c146e8892efe320c0fa096568abd9b95593efac50a87cb75"},
{file = "bcrypt-5.0.0-cp38-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:89042e61b5e808b67daf24a434d89bab164d4de1746b37a8d173b6b14f3db9ff"},
{file = "bcrypt-5.0.0-cp38-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:e3cf5b2560c7b5a142286f69bde914494b6d8f901aaa71e453078388a50881c4"},
{file = "bcrypt-5.0.0-cp38-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:f632fd56fc4e61564f78b46a2269153122db34988e78b6be8b32d28507b7eaeb"},
{file = "bcrypt-5.0.0-cp38-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:801cad5ccb6b87d1b430f183269b94c24f248dddbbc5c1f78b6ed231743e001c"},
{file = "bcrypt-5.0.0-cp38-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:3cf67a804fc66fc217e6914a5635000259fbbbb12e78a99488e4d5ba445a71eb"},
{file = "bcrypt-5.0.0-cp38-abi3-win32.whl", hash = "sha256:3abeb543874b2c0524ff40c57a4e14e5d3a66ff33fb423529c88f180fd756538"},
{file = "bcrypt-5.0.0-cp38-abi3-win_amd64.whl", hash = "sha256:35a77ec55b541e5e583eb3436ffbbf53b0ffa1fa16ca6782279daf95d146dcd9"},
{file = "bcrypt-5.0.0-cp38-abi3-win_arm64.whl", hash = "sha256:cde08734f12c6a4e28dc6755cd11d3bdfea608d93d958fffbe95a7026ebe4980"},
{file = "bcrypt-5.0.0-cp39-abi3-macosx_10_12_universal2.whl", hash = "sha256:0c418ca99fd47e9c59a301744d63328f17798b5947b0f791e9af3c1c499c2d0a"},
{file = "bcrypt-5.0.0-cp39-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:ddb4e1500f6efdd402218ffe34d040a1196c072e07929b9820f363a1fd1f4191"},
{file = "bcrypt-5.0.0-cp39-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:7aeef54b60ceddb6f30ee3db090351ecf0d40ec6e2abf41430997407a46d2254"},
{file = "bcrypt-5.0.0-cp39-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:f0ce778135f60799d89c9693b9b398819d15f1921ba15fe719acb3178215a7db"},
{file = "bcrypt-5.0.0-cp39-abi3-manylinux_2_28_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:a71f70ee269671460b37a449f5ff26982a6f2ba493b3eabdd687b4bf35f875ac"},
{file = "bcrypt-5.0.0-cp39-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:f8429e1c410b4073944f03bd778a9e066e7fad723564a52ff91841d278dfc822"},
{file = "bcrypt-5.0.0-cp39-abi3-manylinux_2_34_aarch64.whl", hash = "sha256:edfcdcedd0d0f05850c52ba3127b1fce70b9f89e0fe5ff16517df7e81fa3cbb8"},
{file = "bcrypt-5.0.0-cp39-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:611f0a17aa4a25a69362dcc299fda5c8a3d4f160e2abb3831041feb77393a14a"},
{file = "bcrypt-5.0.0-cp39-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:db99dca3b1fdc3db87d7c57eac0c82281242d1eabf19dcb8a6b10eb29a2e72d1"},
{file = "bcrypt-5.0.0-cp39-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:5feebf85a9cefda32966d8171f5db7e3ba964b77fdfe31919622256f80f9cf42"},
{file = "bcrypt-5.0.0-cp39-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:3ca8a166b1140436e058298a34d88032ab62f15aae1c598580333dc21d27ef10"},
{file = "bcrypt-5.0.0-cp39-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:61afc381250c3182d9078551e3ac3a41da14154fbff647ddf52a769f588c4172"},
{file = "bcrypt-5.0.0-cp39-abi3-win32.whl", hash = "sha256:64d7ce196203e468c457c37ec22390f1a61c85c6f0b8160fd752940ccfb3a683"},
{file = "bcrypt-5.0.0-cp39-abi3-win_amd64.whl", hash = "sha256:64ee8434b0da054d830fa8e89e1c8bf30061d539044a39524ff7dec90481e5c2"},
{file = "bcrypt-5.0.0-cp39-abi3-win_arm64.whl", hash = "sha256:f2347d3534e76bf50bca5500989d6c1d05ed64b440408057a37673282c654927"},
{file = "bcrypt-5.0.0-pp311-pypy311_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:7edda91d5ab52b15636d9c30da87d2cc84f426c72b9dba7a9b4fe142ba11f534"},
{file = "bcrypt-5.0.0-pp311-pypy311_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:046ad6db88edb3c5ece4369af997938fb1c19d6a699b9c1b27b0db432faae4c4"},
{file = "bcrypt-5.0.0-pp311-pypy311_pp73-manylinux_2_34_aarch64.whl", hash = "sha256:dcd58e2b3a908b5ecc9b9df2f0085592506ac2d5110786018ee5e160f28e0911"},
{file = "bcrypt-5.0.0-pp311-pypy311_pp73-manylinux_2_34_x86_64.whl", hash = "sha256:6b8f520b61e8781efee73cba14e3e8c9556ccfb375623f4f97429544734545b4"},
{file = "bcrypt-5.0.0.tar.gz", hash = "sha256:f748f7c2d6fd375cc93d3fba7ef4a9e3a092421b8dbf34d8d4dc06be9492dfdd"},
]
[package.extras]
tests = ["pytest (>=3.2.1,!=3.3.0)"]
typecheck = ["mypy"]
[[package]]
name = "black"
version = "26.3.1"
@@ -230,14 +153,14 @@ uvloop = ["uvloop (>=0.15.2) ; sys_platform != \"win32\"", "winloop (>=0.5.0) ;
[[package]]
name = "cachetools"
version = "7.0.6"
version = "7.0.5"
description = "Extensible memoizing collections and decorators"
optional = false
python-versions = ">=3.10"
groups = ["test"]
files = [
{file = "cachetools-7.0.6-py3-none-any.whl", hash = "sha256:4e94956cfdd3086f12042cdd29318f5ced3893014f7d0d059bf3ead3f85b7f8b"},
{file = "cachetools-7.0.6.tar.gz", hash = "sha256:e5d524d36d65703a87243a26ff08ad84f73352adbeafb1cde81e207b456aaf24"},
{file = "cachetools-7.0.5-py3-none-any.whl", hash = "sha256:46bc8ebefbe485407621d0a4264b23c080cedd913921bad7ac3ed2f26c183114"},
{file = "cachetools-7.0.5.tar.gz", hash = "sha256:0cd042c24377200c1dcd225f8b7b12b0ca53cc2c961b43757e774ebe190fd990"},
]
[[package]]
@@ -259,7 +182,7 @@ description = "Foreign Function Interface for Python calling C code."
optional = false
python-versions = ">=3.9"
groups = ["main"]
markers = "implementation_name == \"pypy\" or platform_python_implementation != \"PyPy\""
markers = "implementation_name == \"pypy\""
files = [
{file = "cffi-2.0.0-cp310-cp310-macosx_10_13_x86_64.whl", hash = "sha256:0cf2d91ecc3fcc0625c2c530fe004f82c110405f101548512cce44322fa8ac44"},
{file = "cffi-2.0.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:f73b96c41e3b2adedc34a7356e64c8eb96e03a3782b535e043a986276ce12a49"},
@@ -688,79 +611,6 @@ mypy = ["bokeh", "contourpy[bokeh,docs]", "docutils-stubs", "mypy (==1.17.0)", "
test = ["Pillow", "contourpy[test-no-images]", "matplotlib"]
test-no-images = ["pytest", "pytest-cov", "pytest-rerunfailures", "pytest-xdist", "wurlitzer"]
[[package]]
name = "cryptography"
version = "46.0.7"
description = "cryptography is a package which provides cryptographic recipes and primitives to Python developers."
optional = false
python-versions = "!=3.9.0,!=3.9.1,>=3.8"
groups = ["main"]
files = [
{file = "cryptography-46.0.7-cp311-abi3-macosx_10_9_universal2.whl", hash = "sha256:ea42cbe97209df307fdc3b155f1b6fa2577c0defa8f1f7d3be7d31d189108ad4"},
{file = "cryptography-46.0.7-cp311-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:b36a4695e29fe69215d75960b22577197aca3f7a25b9cf9d165dcfe9d80bc325"},
{file = "cryptography-46.0.7-cp311-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:5ad9ef796328c5e3c4ceed237a183f5d41d21150f972455a9d926593a1dcb308"},
{file = "cryptography-46.0.7-cp311-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:73510b83623e080a2c35c62c15298096e2a5dc8d51c3b4e1740211839d0dea77"},
{file = "cryptography-46.0.7-cp311-abi3-manylinux_2_28_ppc64le.whl", hash = "sha256:cbd5fb06b62bd0721e1170273d3f4d5a277044c47ca27ee257025146c34cbdd1"},
{file = "cryptography-46.0.7-cp311-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:420b1e4109cc95f0e5700eed79908cef9268265c773d3a66f7af1eef53d409ef"},
{file = "cryptography-46.0.7-cp311-abi3-manylinux_2_31_armv7l.whl", hash = "sha256:24402210aa54baae71d99441d15bb5a1919c195398a87b563df84468160a65de"},
{file = "cryptography-46.0.7-cp311-abi3-manylinux_2_34_aarch64.whl", hash = "sha256:8a469028a86f12eb7d2fe97162d0634026d92a21f3ae0ac87ed1c4a447886c83"},
{file = "cryptography-46.0.7-cp311-abi3-manylinux_2_34_ppc64le.whl", hash = "sha256:9694078c5d44c157ef3162e3bf3946510b857df5a3955458381d1c7cfc143ddb"},
{file = "cryptography-46.0.7-cp311-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:42a1e5f98abb6391717978baf9f90dc28a743b7d9be7f0751a6f56a75d14065b"},
{file = "cryptography-46.0.7-cp311-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:91bbcb08347344f810cbe49065914fe048949648f6bd5c2519f34619142bbe85"},
{file = "cryptography-46.0.7-cp311-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:5d1c02a14ceb9148cc7816249f64f623fbfee39e8c03b3650d842ad3f34d637e"},
{file = "cryptography-46.0.7-cp311-abi3-win32.whl", hash = "sha256:d23c8ca48e44ee015cd0a54aeccdf9f09004eba9fc96f38c911011d9ff1bd457"},
{file = "cryptography-46.0.7-cp311-abi3-win_amd64.whl", hash = "sha256:397655da831414d165029da9bc483bed2fe0e75dde6a1523ec2fe63f3c46046b"},
{file = "cryptography-46.0.7-cp314-cp314t-macosx_10_9_universal2.whl", hash = "sha256:d151173275e1728cf7839aaa80c34fe550c04ddb27b34f48c232193df8db5842"},
{file = "cryptography-46.0.7-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:db0f493b9181c7820c8134437eb8b0b4792085d37dbb24da050476ccb664e59c"},
{file = "cryptography-46.0.7-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:ebd6daf519b9f189f85c479427bbd6e9c9037862cf8fe89ee35503bd209ed902"},
{file = "cryptography-46.0.7-cp314-cp314t-manylinux_2_28_aarch64.whl", hash = "sha256:b7b412817be92117ec5ed95f880defe9cf18a832e8cafacf0a22337dc1981b4d"},
{file = "cryptography-46.0.7-cp314-cp314t-manylinux_2_28_ppc64le.whl", hash = "sha256:fbfd0e5f273877695cb93baf14b185f4878128b250cc9f8e617ea0c025dfb022"},
{file = "cryptography-46.0.7-cp314-cp314t-manylinux_2_28_x86_64.whl", hash = "sha256:ffca7aa1d00cf7d6469b988c581598f2259e46215e0140af408966a24cf086ce"},
{file = "cryptography-46.0.7-cp314-cp314t-manylinux_2_31_armv7l.whl", hash = "sha256:60627cf07e0d9274338521205899337c5d18249db56865f943cbe753aa96f40f"},
{file = "cryptography-46.0.7-cp314-cp314t-manylinux_2_34_aarch64.whl", hash = "sha256:80406c3065e2c55d7f49a9550fe0c49b3f12e5bfff5dedb727e319e1afb9bf99"},
{file = "cryptography-46.0.7-cp314-cp314t-manylinux_2_34_ppc64le.whl", hash = "sha256:c5b1ccd1239f48b7151a65bc6dd54bcfcc15e028c8ac126d3fada09db0e07ef1"},
{file = "cryptography-46.0.7-cp314-cp314t-manylinux_2_34_x86_64.whl", hash = "sha256:d5f7520159cd9c2154eb61eb67548ca05c5774d39e9c2c4339fd793fe7d097b2"},
{file = "cryptography-46.0.7-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:fcd8eac50d9138c1d7fc53a653ba60a2bee81a505f9f8850b6b2888555a45d0e"},
{file = "cryptography-46.0.7-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:65814c60f8cc400c63131584e3e1fad01235edba2614b61fbfbfa954082db0ee"},
{file = "cryptography-46.0.7-cp314-cp314t-win32.whl", hash = "sha256:fdd1736fed309b4300346f88f74cd120c27c56852c3838cab416e7a166f67298"},
{file = "cryptography-46.0.7-cp314-cp314t-win_amd64.whl", hash = "sha256:e06acf3c99be55aa3b516397fe42f5855597f430add9c17fa46bf2e0fb34c9bb"},
{file = "cryptography-46.0.7-cp38-abi3-macosx_10_9_universal2.whl", hash = "sha256:462ad5cb1c148a22b2e3bcc5ad52504dff325d17daf5df8d88c17dda1f75f2a4"},
{file = "cryptography-46.0.7-cp38-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:84d4cced91f0f159a7ddacad249cc077e63195c36aac40b4150e7a57e84fffe7"},
{file = "cryptography-46.0.7-cp38-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:128c5edfe5e5938b86b03941e94fac9ee793a94452ad1365c9fc3f4f62216832"},
{file = "cryptography-46.0.7-cp38-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:5e51be372b26ef4ba3de3c167cd3d1022934bc838ae9eaad7e644986d2a3d163"},
{file = "cryptography-46.0.7-cp38-abi3-manylinux_2_28_ppc64le.whl", hash = "sha256:cdf1a610ef82abb396451862739e3fc93b071c844399e15b90726ef7470eeaf2"},
{file = "cryptography-46.0.7-cp38-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:1d25aee46d0c6f1a501adcddb2d2fee4b979381346a78558ed13e50aa8a59067"},
{file = "cryptography-46.0.7-cp38-abi3-manylinux_2_31_armv7l.whl", hash = "sha256:cdfbe22376065ffcf8be74dc9a909f032df19bc58a699456a21712d6e5eabfd0"},
{file = "cryptography-46.0.7-cp38-abi3-manylinux_2_34_aarch64.whl", hash = "sha256:abad9dac36cbf55de6eb49badd4016806b3165d396f64925bf2999bcb67837ba"},
{file = "cryptography-46.0.7-cp38-abi3-manylinux_2_34_ppc64le.whl", hash = "sha256:935ce7e3cfdb53e3536119a542b839bb94ec1ad081013e9ab9b7cfd478b05006"},
{file = "cryptography-46.0.7-cp38-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:35719dc79d4730d30f1c2b6474bd6acda36ae2dfae1e3c16f2051f215df33ce0"},
{file = "cryptography-46.0.7-cp38-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:7bbc6ccf49d05ac8f7d7b5e2e2c33830d4fe2061def88210a126d130d7f71a85"},
{file = "cryptography-46.0.7-cp38-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:a1529d614f44b863a7b480c6d000fe93b59acee9c82ffa027cfadc77521a9f5e"},
{file = "cryptography-46.0.7-cp38-abi3-win32.whl", hash = "sha256:f247c8c1a1fb45e12586afbb436ef21ff1e80670b2861a90353d9b025583d246"},
{file = "cryptography-46.0.7-cp38-abi3-win_amd64.whl", hash = "sha256:506c4ff91eff4f82bdac7633318a526b1d1309fc07ca76a3ad182cb5b686d6d3"},
{file = "cryptography-46.0.7-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:fc9ab8856ae6cf7c9358430e49b368f3108f050031442eaeb6b9d87e4dcf4e4f"},
{file = "cryptography-46.0.7-pp311-pypy311_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:d3b99c535a9de0adced13d159c5a9cf65c325601aa30f4be08afd680643e9c15"},
{file = "cryptography-46.0.7-pp311-pypy311_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:d02c738dacda7dc2a74d1b2b3177042009d5cab7c7079db74afc19e56ca1b455"},
{file = "cryptography-46.0.7-pp311-pypy311_pp73-manylinux_2_34_aarch64.whl", hash = "sha256:04959522f938493042d595a736e7dbdff6eb6cc2339c11465b3ff89343b65f65"},
{file = "cryptography-46.0.7-pp311-pypy311_pp73-manylinux_2_34_x86_64.whl", hash = "sha256:3986ac1dee6def53797289999eabe84798ad7817f3e97779b5061a95b0ee4968"},
{file = "cryptography-46.0.7-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:258514877e15963bd43b558917bc9f54cf7cf866c38aa576ebf47a77ddbc43a4"},
{file = "cryptography-46.0.7.tar.gz", hash = "sha256:e4cfd68c5f3e0bfdad0d38e023239b96a2fe84146481852dffbcca442c245aa5"},
]
[package.dependencies]
cffi = {version = ">=2.0.0", markers = "python_full_version >= \"3.9.0\" and platform_python_implementation != \"PyPy\""}
typing-extensions = {version = ">=4.13.2", markers = "python_full_version < \"3.11.0\""}
[package.extras]
docs = ["sphinx (>=5.3.0)", "sphinx-inline-tabs", "sphinx-rtd-theme (>=3.0.0)"]
docstest = ["pyenchant (>=3)", "readme-renderer (>=30.0)", "sphinxcontrib-spelling (>=7.3.1)"]
nox = ["nox[uv] (>=2024.4.15)"]
pep8test = ["check-sdist", "click (>=8.0.1)", "mypy (>=1.14)", "ruff (>=0.11.11)"]
sdist = ["build (>=1.0.0)"]
ssh = ["bcrypt (>=3.1.5)"]
test = ["certifi (>=2024)", "cryptography-vectors (==46.0.7)", "pretend (>=0.7)", "pytest (>=7.4.0)", "pytest-benchmark (>=4.0)", "pytest-cov (>=2.10.1)", "pytest-xdist (>=3.5.0)"]
test-randomorder = ["pytest-randomly"]
[[package]]
name = "cycler"
version = "0.12.1"
@@ -908,7 +758,6 @@ description = "The FlatBuffers serialization format for Python"
optional = false
python-versions = "*"
groups = ["server", "test"]
markers = "python_version >= \"3.11\""
files = [
{file = "flatbuffers-25.12.19-py2.py3-none-any.whl", hash = "sha256:7634f50c427838bb021c2d66a3d1168e9d199b0607e6329399f04846d42e20b4"},
]
@@ -1212,18 +1061,6 @@ files = [
{file = "iniconfig-2.3.0.tar.gz", hash = "sha256:c76315c77db068650d49c5b56314774a7804df16fee4402c1f19d6d15d8c4730"},
]
[[package]]
name = "invoke"
version = "3.0.3"
description = "Pythonic task execution"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "invoke-3.0.3-py3-none-any.whl", hash = "sha256:f11327165e5cbb89b2ad1d88d3292b5113332c43b8553b494da435d6ec6f5053"},
{file = "invoke-3.0.3.tar.gz", hash = "sha256:437b6a622223824380bfb4e64f612711a6b648c795f565efc8625af66fb57f0c"},
]
[[package]]
name = "isort"
version = "5.13.2"
@@ -1271,7 +1108,7 @@ files = [
[package.dependencies]
attrs = ">=22.2.0"
jsonschema-specifications = ">=2023.03.6"
jsonschema-specifications = ">=2023.3.6"
referencing = ">=0.28.4"
rpds-py = ">=0.25.0"
@@ -1618,7 +1455,6 @@ description = "Python library for arbitrary-precision floating-point arithmetic"
optional = false
python-versions = "*"
groups = ["server", "test"]
markers = "python_version >= \"3.11\""
files = [
{file = "mpmath-1.3.0-py3-none-any.whl", hash = "sha256:a0b2b9fe80bbcd81a6647ff13108738cfb482d481d826cc0e02f5b35e5c88d2c"},
{file = "mpmath-1.3.0.tar.gz", hash = "sha256:7a28eb2a9774d00c7bc92411c19a89209d5da7c4c9a9e227be8330a23a25b91f"},
@@ -1713,7 +1549,48 @@ files = [
{file = "numpy-1.26.4-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:7e50d0a0cc3189f9cb0aeb3a6a6af18c16f59f004b866cd2be1c14b36134a4a0"},
{file = "numpy-1.26.4.tar.gz", hash = "sha256:2a02aba9ed12e4ac4eb3ea9421c420301a0c6460d9830d74a9df87efa4912010"},
]
markers = {server = "python_version >= \"3.11\"", test = "python_version >= \"3.11\""}
[[package]]
name = "onnxruntime"
version = "1.24.3"
description = "ONNX Runtime is a runtime accelerator for Machine Learning models"
optional = false
python-versions = ">=3.10"
groups = ["server", "test"]
markers = "python_version == \"3.10\""
files = [
{file = "onnxruntime-1.24.3-cp311-cp311-macosx_14_0_arm64.whl", hash = "sha256:3e6456801c66b095c5cd68e690ca25db970ea5202bd0c5b84a2c3ef7731c5a3c"},
{file = "onnxruntime-1.24.3-cp311-cp311-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:8b2ebc54c6d8281dccff78d4b06e47d4cf07535937584ab759448390a70f4978"},
{file = "onnxruntime-1.24.3-cp311-cp311-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:fb56575d7794bf0781156955610c9e651c9504c64d42ec880784b6106244882d"},
{file = "onnxruntime-1.24.3-cp311-cp311-win_amd64.whl", hash = "sha256:c958222ef9eff54018332beecd32d5d94a3ab079d8821937b333811bf4da0d39"},
{file = "onnxruntime-1.24.3-cp311-cp311-win_arm64.whl", hash = "sha256:a8f761857ebaf58a85b9e42422d03207f1d39e6bb8fecfdbf613bac5b9710723"},
{file = "onnxruntime-1.24.3-cp312-cp312-macosx_14_0_arm64.whl", hash = "sha256:0d244227dc5e00a9ae15a7ac1eba4c4460d7876dfecafe73fb00db9f1d914d91"},
{file = "onnxruntime-1.24.3-cp312-cp312-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:0a9847b870b6cb462652b547bc98c49e0efb67553410a082fde1918a38707452"},
{file = "onnxruntime-1.24.3-cp312-cp312-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:b354afce3333f2859c7e8706d84b6c552beac39233bcd3141ce7ab77b4cabb5d"},
{file = "onnxruntime-1.24.3-cp312-cp312-win_amd64.whl", hash = "sha256:44ea708c34965439170d811267c51281d3897ecfc4aa0087fa25d4a4c3eb2e4a"},
{file = "onnxruntime-1.24.3-cp312-cp312-win_arm64.whl", hash = "sha256:48d1092b44ca2ba6f9543892e7c422c15a568481403c10440945685faf27a8d8"},
{file = "onnxruntime-1.24.3-cp313-cp313-macosx_14_0_arm64.whl", hash = "sha256:34a0ea5ff191d8420d9c1332355644148b1bf1a0d10c411af890a63a9f662aa7"},
{file = "onnxruntime-1.24.3-cp313-cp313-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:1fd2ec7bb0fabe42f55e8337cfc9b1969d0d14622711aac73d69b4bd5abb5ed7"},
{file = "onnxruntime-1.24.3-cp313-cp313-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:df8e70e732fe26346faaeec9147fa38bef35d232d2495d27e93dd221a2d473a9"},
{file = "onnxruntime-1.24.3-cp313-cp313-win_amd64.whl", hash = "sha256:2d3706719be6ad41d38a2250998b1d87758a20f6ea4546962e21dc79f1f1fd2b"},
{file = "onnxruntime-1.24.3-cp313-cp313-win_arm64.whl", hash = "sha256:b082f3ba9519f0a1a1e754556bc7e635c7526ef81b98b3f78da4455d25f0437b"},
{file = "onnxruntime-1.24.3-cp313-cp313t-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:72f956634bc2e4bd2e8b006bef111849bd42c42dea37bd0a4c728404fdaf4d34"},
{file = "onnxruntime-1.24.3-cp313-cp313t-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:78d1f25eed4ab9959db70a626ed50ee24cf497e60774f59f1207ac8556399c4d"},
{file = "onnxruntime-1.24.3-cp314-cp314-macosx_14_0_arm64.whl", hash = "sha256:a6b4bce87d96f78f0a9bf5cefab3303ae95d558c5bfea53d0bf7f9ea207880a8"},
{file = "onnxruntime-1.24.3-cp314-cp314-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:d48f36c87b25ab3b2b4c88826c96cf1399a5631e3c2c03cc27d6a1e5d6b18eb4"},
{file = "onnxruntime-1.24.3-cp314-cp314-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:e104d33a409bf6e3f30f0e8198ec2aaf8d445b8395490a80f6e6ad56da98e400"},
{file = "onnxruntime-1.24.3-cp314-cp314-win_amd64.whl", hash = "sha256:e785d73fbd17421c2513b0bb09eb25d88fa22c8c10c3f5d6060589efa5537c5b"},
{file = "onnxruntime-1.24.3-cp314-cp314-win_arm64.whl", hash = "sha256:951e897a275f897a05ffbcaa615d98777882decaeb80c9216c68cdc62f849f53"},
{file = "onnxruntime-1.24.3-cp314-cp314t-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:4d4e70ce578aa214c74c7a7a9226bc8e229814db4a5b2d097333b81279ecde36"},
{file = "onnxruntime-1.24.3-cp314-cp314t-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:02aaf6ddfa784523b6873b4176a79d508e599efe12ab0ea1a3a6e7314408b7aa"},
]
[package.dependencies]
flatbuffers = "*"
numpy = ">=1.21.6"
packaging = "*"
protobuf = "*"
sympy = "*"
[[package]]
name = "onnxruntime"
@ -1768,7 +1645,6 @@ files = [
{file = "packaging-26.1-py3-none-any.whl", hash = "sha256:5d9c0669c6285e491e0ced2eee587eaf67b670d94a19e94e3984a481aba6802f"},
{file = "packaging-26.1.tar.gz", hash = "sha256:f042152b681c4bfac5cae2742a55e103d27ab2ec0f3d88037136b6bfe7c9c5de"},
]
markers = {server = "python_version >= \"3.11\""}
[[package]]
name = "pandas"
@ -1837,9 +1713,9 @@ files = [
[package.dependencies]
numpy = [
{version = ">=1.22.4", markers = "python_version < \"3.11\""},
{version = ">=1.26.0", markers = "python_version >= \"3.12\""},
{version = ">=1.23.2", markers = "python_version == \"3.11\""},
{version = ">=1.22.4", markers = "python_version < \"3.11\""},
]
python-dateutil = ">=2.8.2"
pytz = ">=2020.1"
@ -1870,27 +1746,6 @@ sql-other = ["SQLAlchemy (>=2.0.0)", "adbc-driver-postgresql (>=0.8.0)", "adbc-d
test = ["hypothesis (>=6.46.1)", "pytest (>=7.3.2)", "pytest-xdist (>=2.2.0)"]
xml = ["lxml (>=4.9.2)"]
[[package]]
name = "paramiko"
version = "4.0.0"
description = "SSH2 protocol library"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "paramiko-4.0.0-py3-none-any.whl", hash = "sha256:0e20e00ac666503bf0b4eda3b6d833465a2b7aff2e2b3d79a8bba5ef144ee3b9"},
{file = "paramiko-4.0.0.tar.gz", hash = "sha256:6a25f07b380cc9c9a88d2b920ad37167ac4667f8d9886ccebd8f90f654b5d69f"},
]
[package.dependencies]
bcrypt = ">=3.2"
cryptography = ">=3.3"
invoke = ">=2.0"
pynacl = ">=1.5"
[package.extras]
gssapi = ["gssapi (>=1.4.1) ; platform_system != \"Windows\"", "pyasn1 (>=0.1.7)", "pywin32 (>=2.1.8) ; platform_system == \"Windows\""]
[[package]]
name = "pathspec"
version = "1.0.4"
@ -2077,7 +1932,6 @@ description = ""
optional = false
python-versions = ">=3.10"
groups = ["server", "test"]
markers = "python_version >= \"3.11\""
files = [
{file = "protobuf-7.34.1-cp310-abi3-macosx_10_9_universal2.whl", hash = "sha256:d8b2cc79c4d8f62b293ad9b11ec3aebce9af481fa73e64556969f7345ebf9fc7"},
{file = "protobuf-7.34.1-cp310-abi3-manylinux2014_aarch64.whl", hash = "sha256:5185e0e948d07abe94bb76ec9b8416b604cfe5da6f871d67aad30cbf24c3110b"},
@ -2108,7 +1962,7 @@ description = "C parser in Python"
optional = false
python-versions = ">=3.10"
groups = ["main"]
markers = "(platform_python_implementation != \"PyPy\" or implementation_name == \"pypy\") and implementation_name != \"PyPy\""
markers = "implementation_name == \"pypy\""
files = [
{file = "pycparser-3.0-py3-none-any.whl", hash = "sha256:b727414169a36b7d524c1c3e31839a521725078d7b2ff038656844266160a992"},
{file = "pycparser-3.0.tar.gz", hash = "sha256:600f49d217304a5902ac3c37e1281c9fe94e4d0489de643a9504c5cdfdfc6b29"},
@ -2312,9 +2166,9 @@ files = [
astroid = ">=3.3.8,<=3.4.0.dev0"
colorama = {version = ">=0.4.5", markers = "sys_platform == \"win32\""}
dill = [
{version = ">=0.2", markers = "python_version < \"3.11\""},
{version = ">=0.3.7", markers = "python_version >= \"3.12\""},
{version = ">=0.3.6", markers = "python_version == \"3.11\""},
{version = ">=0.2", markers = "python_version < \"3.11\""},
]
isort = ">=4.2.5,<5.13 || >5.13,<7"
mccabe = ">=0.6,<0.8"
@ -2326,48 +2180,6 @@ tomlkit = ">=0.10.1"
spelling = ["pyenchant (>=3.2,<4.0)"]
testutils = ["gitpython (>3)"]
[[package]]
name = "pynacl"
version = "1.6.2"
description = "Python binding to the Networking and Cryptography (NaCl) library"
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
{file = "pynacl-1.6.2-cp314-cp314t-macosx_10_10_universal2.whl", hash = "sha256:622d7b07cc5c02c666795792931b50c91f3ce3c2649762efb1ef0d5684c81594"},
{file = "pynacl-1.6.2-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:d071c6a9a4c94d79eb665db4ce5cedc537faf74f2355e4d502591d850d3913c0"},
{file = "pynacl-1.6.2-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:fe9847ca47d287af41e82be1dd5e23023d3c31a951da134121ab02e42ac218c9"},
{file = "pynacl-1.6.2-cp314-cp314t-manylinux_2_26_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:04316d1fc625d860b6c162fff704eb8426b1a8bcd3abacea11142cbd99a6b574"},
{file = "pynacl-1.6.2-cp314-cp314t-manylinux_2_26_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:44081faff368d6c5553ccf55322ef2819abb40e25afaec7e740f159f74813634"},
{file = "pynacl-1.6.2-cp314-cp314t-manylinux_2_34_aarch64.whl", hash = "sha256:a9f9932d8d2811ce1a8ffa79dcbdf3970e7355b5c8eb0c1a881a57e7f7d96e88"},
{file = "pynacl-1.6.2-cp314-cp314t-manylinux_2_34_x86_64.whl", hash = "sha256:bc4a36b28dd72fb4845e5d8f9760610588a96d5a51f01d84d8c6ff9849968c14"},
{file = "pynacl-1.6.2-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:3bffb6d0f6becacb6526f8f42adfb5efb26337056ee0831fb9a7044d1a964444"},
{file = "pynacl-1.6.2-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:2fef529ef3ee487ad8113d287a593fa26f48ee3620d92ecc6f1d09ea38e0709b"},
{file = "pynacl-1.6.2-cp314-cp314t-win32.whl", hash = "sha256:a84bf1c20339d06dc0c85d9aea9637a24f718f375d861b2668b2f9f96fa51145"},
{file = "pynacl-1.6.2-cp314-cp314t-win_amd64.whl", hash = "sha256:320ef68a41c87547c91a8b58903c9caa641ab01e8512ce291085b5fe2fcb7590"},
{file = "pynacl-1.6.2-cp314-cp314t-win_arm64.whl", hash = "sha256:d29bfe37e20e015a7d8b23cfc8bd6aa7909c92a1b8f41ee416bbb3e79ef182b2"},
{file = "pynacl-1.6.2-cp38-abi3-macosx_10_10_universal2.whl", hash = "sha256:c949ea47e4206af7c8f604b8278093b674f7c79ed0d4719cc836902bf4517465"},
{file = "pynacl-1.6.2-cp38-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:8845c0631c0be43abdd865511c41eab235e0be69c81dc66a50911594198679b0"},
{file = "pynacl-1.6.2-cp38-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:22de65bb9010a725b0dac248f353bb072969c94fa8d6b1f34b87d7953cf7bbe4"},
{file = "pynacl-1.6.2-cp38-abi3-manylinux_2_26_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:46065496ab748469cdd999246d17e301b2c24ae2fdf739132e580a0e94c94a87"},
{file = "pynacl-1.6.2-cp38-abi3-manylinux_2_26_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:8a66d6fb6ae7661c58995f9c6435bda2b1e68b54b598a6a10247bfcdadac996c"},
{file = "pynacl-1.6.2-cp38-abi3-manylinux_2_34_aarch64.whl", hash = "sha256:26bfcd00dcf2cf160f122186af731ae30ab120c18e8375684ec2670dccd28130"},
{file = "pynacl-1.6.2-cp38-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:c8a231e36ec2cab018c4ad4358c386e36eede0319a0c41fed24f840b1dac59f6"},
{file = "pynacl-1.6.2-cp38-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:68be3a09455743ff9505491220b64440ced8973fe930f270c8e07ccfa25b1f9e"},
{file = "pynacl-1.6.2-cp38-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:8b097553b380236d51ed11356c953bf8ce36a29a3e596e934ecabe76c985a577"},
{file = "pynacl-1.6.2-cp38-abi3-win32.whl", hash = "sha256:5811c72b473b2f38f7e2a3dc4f8642e3a3e9b5e7317266e4ced1fba85cae41aa"},
{file = "pynacl-1.6.2-cp38-abi3-win_amd64.whl", hash = "sha256:62985f233210dee6548c223301b6c25440852e13d59a8b81490203c3227c5ba0"},
{file = "pynacl-1.6.2-cp38-abi3-win_arm64.whl", hash = "sha256:834a43af110f743a754448463e8fd61259cd4ab5bbedcf70f9dabad1d28a394c"},
{file = "pynacl-1.6.2.tar.gz", hash = "sha256:018494d6d696ae03c7e656e5e74cdfd8ea1326962cc401bcf018f1ed8436811c"},
]
[package.dependencies]
cffi = {version = ">=2.0.0", markers = "platform_python_implementation != \"PyPy\" and python_version >= \"3.9\""}
[package.extras]
docs = ["sphinx (<7)", "sphinx_rtd_theme"]
tests = ["hypothesis (>=3.27.0)", "pytest (>=7.4.0)", "pytest-cov (>=2.10.1)", "pytest-xdist (>=3.5.0)"]
[[package]]
name = "pyparsing"
version = "3.3.2"
@ -3236,7 +3048,6 @@ description = "Computer algebra system (CAS) in Python"
optional = false
python-versions = ">=3.9"
groups = ["server", "test"]
markers = "python_version >= \"3.11\""
files = [
{file = "sympy-1.14.0-py3-none-any.whl", hash = "sha256:e091cc3e99d2141a0ba2847328f5479b05d94a6635cb96148ccb3f34671bd8f5"},
{file = "sympy-1.14.0.tar.gz", hash = "sha256:d3d3fe8df1e5a0b42f0e7bdf50541697dbe7d23746e894990c030e2b05e72517"},
@ -3370,7 +3181,7 @@ files = [
{file = "typing_extensions-4.15.0-py3-none-any.whl", hash = "sha256:f0fa19c6845758ab08074a0cfa8b7aecb71c999ca73d62883bc25cc018c4e548"},
{file = "typing_extensions-4.15.0.tar.gz", hash = "sha256:0cea48d173cc12fa28ecabc3b837ea3cf6f38c6d1136f85cbaaf598984861466"},
]
markers = {main = "python_version < \"3.13\"", dev = "python_version == \"3.10\"", docs = "python_version < \"3.13\""}
markers = {main = "python_version <= \"3.12\"", dev = "python_version == \"3.10\"", docs = "python_version <= \"3.12\""}
[[package]]
name = "typing-inspection"
@ -3749,4 +3560,4 @@ files = [
[metadata]
lock-version = "2.1"
python-versions = ">=3.10"
content-hash = "ffde300b2fc93161d2279a6e2b899bc988d3b5eb3833135821830affc9a5fb62"
content-hash = "7ddbf7d85e9ae7bd3a1b99ae481df20aaf6fd185d5f628b0fdf9b7bd278730ed"

2
poetry.toml Normal file
View File

@ -0,0 +1,2 @@
[virtualenvs.options]
system-site-packages = true

View File

@ -1,6 +1,6 @@
[project]
name = "ria-toolkit-oss"
version = "0.1.5"
version = "0.1.4"
description = "An open-source version of the RIA Toolkit, including the fundamental tools to get started developing, testing, and deploying radio intelligence applications"
license = { text = "AGPL-3.0-only" }
readme = "README.md"
@ -49,8 +49,7 @@ dependencies = [
"pyzmq (>=27.1.0,<28.0.0)",
"pyyaml (>=6.0.3,<7.0.0)",
"click (>=8.1.0,<9.0.0)",
"matplotlib (>=3.8.0,<4.0.0)",
"paramiko (>=4.0.0)"
"matplotlib (>=3.8.0,<4.0.0)"
]
# [project.optional-dependencies] Commented out to prevent Tox tests from failing
@ -88,7 +87,7 @@ pytest = "^8.0.0"
tox = "^4.19.0"
fastapi = ">=0.111,<1.0"
uvicorn = {version = ">=0.29,<1.0", extras = ["standard"]}
onnxruntime = {version = ">=1.17,<2.0", python = ">=3.11"}
onnxruntime = ">=1.17,<2.0"
httpx = ">=0.27,<1.0"
[tool.poetry.group.docs.dependencies]
@ -124,7 +123,7 @@ ria-app = "ria_toolkit_oss.app.cli:main"
[tool.poetry.group.server.dependencies]
fastapi = ">=0.111,<1.0"
uvicorn = {version = ">=0.29,<1.0", extras = ["standard"]}
onnxruntime = {version = ">=1.17,<2.0", python = ">=3.11"}
onnxruntime = ">=1.17,<2.0"
[tool.black]
line-length = 119

View File

@ -66,9 +66,8 @@ class LoggingFakeWs:
pass
def _make_iq_frame(
buffer_size: int, tone_hz: float, sample_rate: float, phase_offset: float = 0.0
) -> tuple[bytes, float]:
def _make_iq_frame(buffer_size: int, tone_hz: float, sample_rate: float,
phase_offset: float = 0.0) -> tuple[bytes, float]:
"""Return ``(interleaved_float32_bytes, next_phase)`` for a sine tone.
Emitting one continuous phase-coherent tone requires threading the phase
@ -94,9 +93,7 @@ def _make_pluto_factory(identifier: str | None):
if device != "pluto":
raise ValueError(f"this script only drives pluto; got device={device!r}")
from ria_toolkit_oss.sdr.pluto import Pluto
return Pluto(identifier=identifier)
return factory
@ -133,14 +130,13 @@ async def _run(args: argparse.Namespace) -> int:
# Abort if tx_start was rejected by an interlock (no session → nothing to do).
if streamer._tx is None:
print("tx_start rejected — see [tx_status] line above for the reason.", file=sys.stderr)
print("tx_start rejected — see [tx_status] line above for the reason.",
file=sys.stderr)
return 2
print(
f"Transmitting at {args.frequency/1e6:.3f} MHz with "
print(f"Transmitting at {args.frequency/1e6:.3f} MHz with "
f"{args.tone/1e3:.1f} kHz baseband tone at gain {args.gain} dB. "
f"{'Running for ' + str(args.duration) + 's' if args.duration > 0 else 'Run until Ctrl-C'}."
)
f"{'Running for ' + str(args.duration) + 's' if args.duration > 0 else 'Run until Ctrl-C'}.")
# Arrange a clean shutdown on Ctrl-C.
stop = asyncio.Event()
@ -161,11 +157,12 @@ async def _run(args: argparse.Namespace) -> int:
# topped up. The queue's own backpressure keeps us from spinning.
produce_interval = buffer_dt * 0.5
try:
async def producer():
nonlocal phase
while not stop.is_set():
frame, phase = _make_iq_frame(args.buffer_size, args.tone, args.sample_rate, phase)
frame, phase = _make_iq_frame(
args.buffer_size, args.tone, args.sample_rate, phase
)
await streamer.on_binary(frame)
await asyncio.sleep(produce_interval)
@ -196,17 +193,20 @@ def main() -> int:
p = argparse.ArgumentParser(
description="End-to-end TX smoke test: agent → Pluto continuous tone.",
)
p.add_argument("--identifier", default=None, help="Pluto IP/hostname (default: auto-discover pluto.local)")
p.add_argument("--frequency", type=float, default=3_410_000_000.0, help="TX LO in Hz (default 2.45 GHz)")
p.add_argument("--gain", type=float, default=-0.0, help="TX gain in dB; Pluto range [-89, 0] (default -30)")
p.add_argument("--sample-rate", type=float, default=1_000_000.0, help="Baseband sample rate (default 1 Msps)")
p.add_argument(
"--tone", type=float, default=100_000.0, help="Baseband tone offset in Hz; 0 = DC (default 100 kHz)"
)
p.add_argument("--buffer-size", type=int, default=4096, help="Complex samples per frame (default 4096)")
p.add_argument(
"--duration", type=float, default=60.0, help="Seconds to transmit; 0 = run until Ctrl-C (default 30)"
)
p.add_argument("--identifier", default=None,
help="Pluto IP/hostname (default: auto-discover pluto.local)")
p.add_argument("--frequency", type=float, default=3_410_000_000.0,
help="TX LO in Hz (default 2.45 GHz)")
p.add_argument("--gain", type=float, default=-0.0,
help="TX gain in dB; Pluto range [-89, 0] (default -30)")
p.add_argument("--sample-rate", type=float, default=1_000_000.0,
help="Baseband sample rate (default 1 Msps)")
p.add_argument("--tone", type=float, default=100_000.0,
help="Baseband tone offset in Hz; 0 = DC (default 100 kHz)")
p.add_argument("--buffer-size", type=int, default=4096,
help="Complex samples per frame (default 4096)")
p.add_argument("--duration", type=float, default=60.0,
help="Seconds to transmit; 0 = run until Ctrl-C (default 30)")
p.add_argument("--log-level", default="INFO")
args = p.parse_args()

View File

@ -41,7 +41,8 @@ from ria_toolkit_oss.agent.streamer import Streamer
from ria_toolkit_oss.agent.ws_client import WsClient
def _make_iq_frame(buffer_size: int, tone_hz: float, sample_rate: float, phase_offset: float) -> tuple[bytes, float]:
def _make_iq_frame(buffer_size: int, tone_hz: float, sample_rate: float,
phase_offset: float) -> tuple[bytes, float]:
n = np.arange(buffer_size, dtype=np.float64)
phase = 2.0 * np.pi * tone_hz / sample_rate * n + phase_offset
amp = 0.7
@ -58,9 +59,7 @@ def _make_pluto_factory(identifier: str | None):
if device != "pluto":
raise ValueError(f"this script only drives pluto; got device={device!r}")
from ria_toolkit_oss.sdr.pluto import Pluto
return Pluto(identifier=identifier)
return factory
@ -74,14 +73,13 @@ async def _mock_hub_handler(ws, args, stop: asyncio.Event):
payload = json.loads(first)
if payload.get("type") == "heartbeat":
caps = payload.get("capabilities")
print(f"[mock-hub] agent heartbeat: capabilities={caps} " f"tx_enabled={payload.get('tx_enabled')}")
print(f"[mock-hub] agent heartbeat: capabilities={caps} "
f"tx_enabled={payload.get('tx_enabled')}")
except asyncio.TimeoutError:
print("[mock-hub] warning: no heartbeat received in first 2s")
# Arm the agent's TX path.
await ws.send(
json.dumps(
{
await ws.send(json.dumps({
"type": "tx_start",
"app_id": "ws-smoke",
"radio_config": {
@ -93,10 +91,9 @@ async def _mock_hub_handler(ws, args, stop: asyncio.Event):
"buffer_size": int(args.buffer_size),
"underrun_policy": "repeat",
},
}
)
)
print(f"[mock-hub] sent tx_start at {args.frequency/1e6:.3f} MHz, " f"gain={args.gain} dB")
}))
print(f"[mock-hub] sent tx_start at {args.frequency/1e6:.3f} MHz, "
f"gain={args.gain} dB")
# Producer: push IQ frames at a steady clip. Use a concurrent receiver so
# tx_status frames show up in real time rather than being queued behind
@ -115,11 +112,15 @@ async def _mock_hub_handler(ws, args, stop: asyncio.Event):
recv_task = asyncio.create_task(receiver())
try:
deadline = None if args.duration <= 0 else (asyncio.get_event_loop().time() + args.duration)
deadline = None if args.duration <= 0 else (
asyncio.get_event_loop().time() + args.duration
)
while not stop.is_set():
if deadline is not None and asyncio.get_event_loop().time() >= deadline:
break
frame, phase = _make_iq_frame(args.buffer_size, args.tone, args.sample_rate, phase)
frame, phase = _make_iq_frame(
args.buffer_size, args.tone, args.sample_rate, phase
)
try:
await ws.send(frame)
except websockets.ConnectionClosed:
@ -203,15 +204,20 @@ def main() -> int:
p = argparse.ArgumentParser(
description="Full-stack TX smoke: localhost mock-hub → WS → agent → Pluto.",
)
p.add_argument("--identifier", default=None, help="Pluto IP/hostname (default: auto-discover pluto.local)")
p.add_argument("--frequency", type=float, default=2_450_000_000.0, help="TX LO in Hz (default 2.45 GHz)")
p.add_argument("--gain", type=float, default=0.0, help="TX gain in dB; Pluto range [-89, 0] (default 0)")
p.add_argument("--sample-rate", type=float, default=1_000_000.0, help="Baseband sample rate (default 1 Msps)")
p.add_argument("--tone", type=float, default=100_000.0, help="Baseband tone offset in Hz (default 100 kHz)")
p.add_argument("--buffer-size", type=int, default=4096, help="Complex samples per frame (default 4096)")
p.add_argument(
"--duration", type=float, default=30.0, help="Seconds to transmit; 0 = run until Ctrl-C (default 30)"
)
p.add_argument("--identifier", default=None,
help="Pluto IP/hostname (default: auto-discover pluto.local)")
p.add_argument("--frequency", type=float, default=2_450_000_000.0,
help="TX LO in Hz (default 2.45 GHz)")
p.add_argument("--gain", type=float, default=0.0,
help="TX gain in dB; Pluto range [-89, 0] (default 0)")
p.add_argument("--sample-rate", type=float, default=1_000_000.0,
help="Baseband sample rate (default 1 Msps)")
p.add_argument("--tone", type=float, default=100_000.0,
help="Baseband tone offset in Hz (default 100 kHz)")
p.add_argument("--buffer-size", type=int, default=4096,
help="Complex samples per frame (default 4096)")
p.add_argument("--duration", type=float, default=30.0,
help="Seconds to transmit; 0 = run until Ctrl-C (default 30)")
p.add_argument("--log-level", default="INFO")
args = p.parse_args()

View File

@ -22,7 +22,6 @@ import os
from dataclasses import asdict, dataclass, field
from pathlib import Path
def _resolve_default_path() -> Path:
return Path(os.environ.get("RIA_AGENT_CONFIG", str(Path.home() / ".ria" / "agent.json")))

View File

@ -46,7 +46,9 @@ def heartbeat_payload(
if c.tx_max_duration_s is not None:
payload["tx_max_duration_s"] = float(c.tx_max_duration_s)
if c.tx_allowed_freq_ranges:
payload["tx_allowed_freq_ranges"] = [[float(lo), float(hi)] for lo, hi in c.tx_allowed_freq_ranges]
payload["tx_allowed_freq_ranges"] = [
[float(lo), float(hi)] for lo, hi in c.tx_allowed_freq_ranges
]
if app_id:
payload["app_id"] = app_id
if sessions:

View File

@ -931,7 +931,11 @@ def main() -> None:
"--role",
default=None,
choices=["general", "rx", "tx"],
help=("Node role reported to the hub. " "'tx' enables synthetic transmission commands. " "Default: general"),
help=(
"Node role reported to the hub. "
"'tx' enables synthetic transmission commands. "
"Default: general"
),
)
parser.add_argument(
"--session-code",

View File

@ -270,7 +270,9 @@ class Streamer:
)
self._rx = session
await self._send_status("streaming", app_id)
session.task = asyncio.create_task(self._capture_loop(session), name="ria-streamer-capture")
session.task = asyncio.create_task(
self._capture_loop(session), name="ria-streamer-capture"
)
async def _handle_rx_stop(self, msg: dict) -> None:
session = self._rx
@ -308,7 +310,9 @@ class Streamer:
logger.warning("Applying configure failed: %s", exc)
try:
samples = await loop.run_in_executor(None, session.sdr.rx, session.buffer_size)
samples = await loop.run_in_executor(
None, session.sdr.rx, session.buffer_size
)
except Exception as exc:
from ria_toolkit_oss.sdr import SdrDisconnectedError
@ -338,7 +342,7 @@ class Streamer:
# ==================================================================
# TX
async def _handle_tx_start(self, msg: dict) -> None: # noqa: C901
async def _handle_tx_start(self, msg: dict) -> None:
app_id = msg.get("app_id") or ""
radio_config = dict(msg.get("radio_config") or {})
@ -379,7 +383,9 @@ class Streamer:
buffer_size = int(radio_config.pop("buffer_size", _DEFAULT_BUFFER_SIZE))
underrun_policy = str(radio_config.pop("underrun_policy", "pause"))
if underrun_policy not in ("pause", "zero", "repeat"):
await self._send_tx_status(app_id, "error", f"invalid underrun_policy {underrun_policy!r}")
await self._send_tx_status(
app_id, "error", f"invalid underrun_policy {underrun_policy!r}"
)
return
if not device:
await self._send_tx_status(app_id, "error", "tx_start missing radio_config.device")
@ -398,10 +404,15 @@ class Streamer:
# manifest bug and we want it surfaced immediately, not papered
# over with stale radio state.
if hasattr(sdr, "init_tx"):
init_args = {k: radio_config.get(f"tx_{k}") for k in ("sample_rate", "center_frequency", "gain")}
init_args = {
k: radio_config.get(f"tx_{k}")
for k in ("sample_rate", "center_frequency", "gain")
}
missing = [f"tx_{k}" for k, v in init_args.items() if v is None]
if missing:
raise ValueError(f"tx_start missing required radio_config keys: {missing}")
raise ValueError(
f"tx_start missing required radio_config keys: {missing}"
)
sdr.init_tx(
sample_rate=init_args["sample_rate"],
center_frequency=init_args["center_frequency"],
@ -487,8 +498,9 @@ class Streamer:
return _silence(n)
# Max-duration watchdog.
if session.max_duration_s is not None and (time.monotonic() - session.started_at) >= float(
session.max_duration_s
if (
session.max_duration_s is not None
and (time.monotonic() - session.started_at) >= float(session.max_duration_s)
):
session.stop_event.set()
try:
@ -516,7 +528,7 @@ class Streamer:
if arr.size < 2 or arr.size % 2 != 0:
logger.warning("Malformed TX frame: %d floats (must be non-zero even count)", arr.size)
return self._underrun_fill(session, n)
samples = arr[0::2].astype(np.complex64) + 1j * arr[1::2].astype(np.complex64)
samples = (arr[0::2].astype(np.complex64) + 1j * arr[1::2].astype(np.complex64))
if samples.size < n:
out = np.zeros(n, dtype=np.complex64)
out[: samples.size] = samples
@ -735,7 +747,6 @@ def _default_sdr_factory(device: str, identifier: str | None):
# ---------------------------------------------------------------------------
# Top-level entry
async def run_streamer(ws_url: str, token: str, *, cfg: AgentConfig | None = None) -> None:
"""Connect to *ws_url* and run the streamer loop until cancelled."""
ws = WsClient(ws_url, token)

View File

@ -1,54 +0,0 @@
"""
The annotations package contains tools and utilities for creating, managing, and processing annotations.
Provides automatic annotation generation using various signal detection algorithms:
- Energy-based detection (detect_signals_energy)
- CUSUM-based segmentation (annotate_with_cusum)
- Threshold-based qualification (threshold_qualifier)
- Signal isolation and extraction (isolate_signal)
- Occupied bandwidth analysis (calculate_occupied_bandwidth, calculate_nominal_bandwidth)
All detection functions return Recording objects with added annotations.
"""
__all__ = [
# Energy-based detection
"detect_signals_energy",
"calculate_occupied_bandwidth",
"calculate_nominal_bandwidth",
"calculate_full_detected_bandwidth",
"annotate_with_obw",
# CUSUM detection
"annotate_with_cusum",
# Threshold detection
"threshold_qualifier",
# Parallel signal separation (Phase 2)
"find_spectral_components",
"split_annotation_by_components",
"split_recording_annotations",
# Signal isolation
"isolate_signal",
# Annotation transforms
"remove_contained_boxes",
"is_annotation_contained",
# Dataset creation
"qualify_slice_from_annotations",
]
from .annotation_transforms import is_annotation_contained, remove_contained_boxes
from .cusum_annotator import annotate_with_cusum
from .energy_detector import (
annotate_with_obw,
calculate_full_detected_bandwidth,
calculate_nominal_bandwidth,
calculate_occupied_bandwidth,
detect_signals_energy,
)
from .parallel_signal_separator import (
find_spectral_components,
split_annotation_by_components,
split_recording_annotations,
)
from .qualify_slice import qualify_slice_from_annotations
from .signal_isolation import isolate_signal
from .threshold_qualifier import threshold_qualifier

View File

@ -1,55 +0,0 @@
from ria_toolkit_oss.datatypes.annotation import Annotation
# TODO figure out how to transfer labels in the merge case
def remove_contained_boxes(annotations: list[Annotation]):
"""
Remove all annotations (bounding boxes) that are entirely contained within other boxes in the list.
:param annotations: A list of Annotation objects.
:type annotations: list[Annotation]
:returns: A new list of Annotation objects.
:rtype: list[Annotation]
"""
output_boxes = []
for i in range(len(annotations)):
contained = False
for j in range(len(annotations)):
if i != j and is_annotation_contained(annotations[i], annotations[j]):
contained = True
break
if not contained:
output_boxes.append(annotations[i])
return output_boxes
def is_annotation_contained(inner: Annotation, outer: Annotation) -> bool:
"""
Check if an annotation box is entirely contained within another annotation bounding box.
:param inner: The inner box.
:type inner: Annotation.
:param outer: The outer box.
:type outer: Annotation.
:returns: True if inner is within outer, false otherwise.
:rtype: bool
"""
inner_sample_stop = inner.sample_start + inner.sample_count
outer_sample_stop = outer.sample_start + outer.sample_count
if inner.sample_start > outer.sample_start and inner_sample_stop < outer_sample_stop:
if inner.freq_lower_edge > outer.freq_lower_edge and inner.freq_upper_edge < outer.freq_upper_edge:
return True
return False
def merge_annotations(annotations: list[Annotation], overlap_threshold) -> list[Annotation]:
raise NotImplementedError

View File

@ -1,203 +0,0 @@
import json
from typing import Optional
import numpy as np
from ria_toolkit_oss.datatypes import Annotation, Recording
def annotate_with_cusum(
recording: Recording,
label: Optional[str] = "segment",
window_size: Optional[int] = 1,
min_duration: Optional[float] = None,
tolerance: Optional[int] = None,
annotation_type: Optional[str] = "standalone",
):
"""
Add annotations that divide the recording into distinct time segments.
This algorithm computes the cumulative sum of the sample magnitudes and
determines break points in the signal.
This tool can be used to find points where a signal turns on or off, or
changes between a low and high amplitude.
:param recording: A ``Recording`` object to annotate.
:type recording: ``ria_toolkit_oss.datatypes.Recording``
:param label: Label for the detected segments.
:type label: str
:param window_size: The length (in samples) of the moving average window.
:type window_size: int
:param min_duration: The minimum duration (in ms) of a segment.
The algorithm will not produce annotations shorter than this length.
:type min_duration: float
:param tolerance: The minimum length (in samples) of a segment.
:type tolerance: int
:param annotation_type: Annotation type (standalone, parallel, intersection).
:type annotation_type: str
"""
sample_rate = recording.metadata["sample_rate"]
center_frequency = recording.metadata.get("center_frequency", 0)
# Create an object of the time segmenter
time_segmenter = TimeSegmenter(sample_rate, min_duration, window_size, tolerance)
change_points = time_segmenter.apply(recording.data[0])
time_segments_indices = np.append(np.insert(change_points, 0, 0), len(recording.data[0]))
annotations = []
for i in range(len(time_segments_indices) - 1):
# Build comment JSON with type metadata
comment_data = {
"type": annotation_type,
"generator": "cusum_annotator",
"params": {
"window_size": window_size,
"min_duration": min_duration,
"tolerance": tolerance,
},
}
f_min, f_max = detect_frequency(
signal=recording.data[0],
start=time_segments_indices[i],
stop=time_segments_indices[i + 1],
sample_rate=sample_rate,
)
annotations.append(
Annotation(
sample_start=time_segments_indices[i],
sample_count=time_segments_indices[i + 1] - time_segments_indices[i],
freq_lower_edge=center_frequency + f_min,
freq_upper_edge=center_frequency + f_max,
label=label,
comment=json.dumps(comment_data),
detail={"generator": "cusum_annotator"},
)
)
return Recording(data=recording.data, metadata=recording.metadata, annotations=recording.annotations + annotations)
def _compute_cusum(_signal, sample_rate: int, tolerance: int = None, min_duration: float = -1):
"""
Efficiently compute the cumulative sum of deviations from the mean of a given list (_signal), with an optional tolerance.
Args:
- _signal: array of IQ samples.
- sample_rate (int): sample rate of the signal in Hz.
- tolerance (int): the minimum acceptable length of a segment, in samples. Defaults to None.
- min_duration (float): the minimum acceptable time width of each segment, in ms. Defaults to -1.
Returns:
- cusum (array): array of the cumulative sum of the mean-removed signal.
- change_points (array): array of the indices at which the direction of the CUSUM changes.
"""
# efficiently calculate the running sum of the signal
# cusum = list(itertools.accumulate((_signal - np.mean(_signal))))
x = _signal - np.mean(_signal)
cusum = np.cumsum(x)
# 'diff' computes the differences between the consecutive values,
# then 'sign' determines if it is +ve or -ve.
change_indicators = np.sign(np.diff(cusum))
change_points = np.where(np.diff(change_indicators))[0] + 1
# Limit the change_points
# Reject those whose number of samples < minimum accepted #n of samples in (min duration) ms.
if min_duration is not None and min_duration > 0:
min_samples_wide = int(min_duration * sample_rate / 1000)
segments_lengths = np.diff(change_points)
segments_lengths = np.insert(segments_lengths, 0, change_points[0])
change_points = change_points[np.where(segments_lengths > min_samples_wide)[0]]
return cusum, change_points
def detect_frequency(signal, start, stop, sample_rate):
signal_segment = signal[start:stop]
if len(signal_segment) > 0:
fft_data = np.abs(np.fft.fftshift(np.fft.fft(signal_segment)))
fft_freqs = np.fft.fftshift(np.fft.fftfreq(len(signal_segment), 1 / sample_rate))
# Use a spectral threshold to find the occupied frequency extent of the segment
spectral_thresh = np.max(fft_data) * 0.15
sig_indices = np.where(fft_data > spectral_thresh)[0]
if len(sig_indices) > 4:
return fft_freqs[sig_indices[0]], fft_freqs[sig_indices[-1]]
else:
return -sample_rate / 4, sample_rate / 4
else:
return -sample_rate / 4, sample_rate / 4
class TimeSegmenter:
"""Time Segmenter class: creates a segmenter object that splits an input
signal into segments based on the cumulative sum of deviations from the
signal mean (CUSUM).
"""
def __init__(
self, sample_rate: int, min_duration: float = 1, moving_average_window: int = 3, tolerance: int = None
):
"""Configure a time segmenter.
Args:
sample_rate (int): Sample rate of the input signal in Hz.
min_duration (float, optional): Minimum segment duration in ms. Defaults to 1.
moving_average_window (int, optional): Length of the moving-average smoothing window, in samples. Defaults to 3.
tolerance (int, optional): Minimum segment length in samples. Defaults to None.
"""
self.sample_rate = sample_rate
self.min_duration = min_duration
self.moving_average_window = moving_average_window
self._moving_avg_filter = self._init_filter()
self.tolerance = tolerance
def _init_filter(self):
"""Build the moving-average filter kernel.
Returns:
np.ndarray: Normalized window of ones with length ``moving_average_window``.
"""
return np.ones(self.moving_average_window) / self.moving_average_window
def _apply_filter(self, iqsignal: np.array):
"""Smooth the magnitude of an IQ signal with the moving-average filter.
Args:
iqsignal (np.array): Array of complex IQ samples.
Returns:
np.ndarray: Smoothed magnitude envelope, same length as the input.
"""
return np.convolve(abs(iqsignal), self._moving_avg_filter, mode="same")
def _create_segments(self, iq_signal: np.array, change_points: np.array):
"""Split the IQ signal at the given change points.
Args:
iq_signal (np.array): Array of complex IQ samples.
change_points (np.array): Indices at which to split the signal.
Returns:
list[np.ndarray]: The resulting signal segments.
"""
return np.split(iq_signal, change_points)
def apply(self, iq_signal: np.array):
"""Smooth the signal and compute the CUSUM change points.
Args:
iq_signal (np.array): Array of complex IQ samples.
Returns:
np.ndarray: Indices of the detected change points.
"""
smoothed_signal = self._apply_filter(iq_signal)
_, change_points = _compute_cusum(smoothed_signal, self.sample_rate, self.tolerance, self.min_duration)
# segments = self._create_segments(iq_signal, change_points)
return change_points
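The `_compute_cusum` helper that `TimeSegmenter.apply` relies on is not included in this diff. As a rough sketch of the technique the docstring names (cumulative sum of deviations from the signal mean), a minimal CUSUM change-point detector might look like the following; the function name, `target`, `drift`, and `threshold` values are all hypothetical, not part of the toolkit:

```python
import numpy as np

def cusum_change_point(x, target, drift, threshold):
    """Two-sided CUSUM: accumulate deviations from the target mean
    (minus a drift allowance) and flag the first sample where either
    accumulator exceeds the threshold."""
    pos = neg = 0.0
    for i, v in enumerate(x):
        pos = max(0.0, pos + (v - target - drift))
        neg = max(0.0, neg + (target - v - drift))
        if pos > threshold or neg > threshold:
            return i
    return None

# Mean steps from 0 to 5 at sample 50; detection lags by a few samples
x = np.concatenate([np.zeros(50), 5.0 * np.ones(50)])
cp = cusum_change_point(x, target=0.0, drift=1.0, threshold=10.0)  # -> 52
```

The drift term suppresses slow wander around the mean, so only sustained shifts accumulate; the detection lag shrinks as the threshold is lowered, at the cost of more false alarms.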


@@ -1,438 +0,0 @@
"""
Energy-based signal detection and bandwidth analysis.
Provides automatic annotation generation using energy-based signal detection
and occupied bandwidth calculation following ITU-R SM.328 standard.
"""
import json
from typing import Tuple
import numpy as np
from scipy.signal import filtfilt
from ria_toolkit_oss.datatypes import Annotation, Recording
def detect_signals_energy(
recording: Recording,
k: int = 10,
threshold_factor: float = 1.2,
window_size: int = 200,
min_distance: int = 5000,
label: str = "signal",
annotation_type: str = "standalone",
freq_method: str = "nbw",
nfft: int = None,
obw_power: float = 0.99,
) -> Recording:
"""
Detect signal bursts using energy-based method with adaptive noise floor estimation.
This algorithm smooths the signal with a moving average filter, estimates the noise
floor from k segments, applies a threshold to detect regions above noise, and merges
nearby detections. Detected time boundaries are then assigned frequency bounds based
on the selected frequency method.
Time Detection Algorithm:
1. Smooth signal using moving average (envelope detection)
2. Divide smoothed signal into k segments
3. Estimate noise floor as median of segment mean powers
4. Detect regions where power exceeds threshold_factor * noise_floor
5. Merge regions closer than min_distance samples
Frequency Bounding (freq_method):
- 'nbw': Nominal bandwidth (OBW + center frequency) - DEFAULT
- 'obw': Occupied bandwidth (obw_power fraction of total power, includes sidelobes)
- 'full-detected': Lowest to highest spectral component
- 'full-bandwidth': Entire Nyquist span (center_freq ± sample_rate/2)
:param recording: Recording to analyze
:type recording: Recording
:param k: Number of segments for noise floor estimation (default: 10)
:type k: int
:param threshold_factor: Threshold multiplier above noise floor (typical: 1.2-2.0, default: 1.2)
:type threshold_factor: float
:param window_size: Moving average window size in samples (default: 200)
:type window_size: int
:param min_distance: Minimum distance between separate signals in samples (default: 5000)
:type min_distance: int
:param label: Label for detected annotations (default: "signal")
:type label: str
:param annotation_type: Annotation type (standalone, parallel, intersection, default: standalone)
:type annotation_type: str
:param freq_method: How to calculate frequency bounds (default: 'nbw')
:type freq_method: str
:param nfft: FFT size for frequency calculations (default: None)
:type nfft: int
:param obw_power: Fraction of total power for OBW (e.g., 0.99 = 99%; default: 0.99)
:type obw_power: float
:returns: New Recording with added annotations
:rtype: Recording
**Example**::
>>> from ria.io import load_recording
>>> from ria_toolkit_oss.annotations import detect_signals_energy
>>> recording = load_recording("capture.sigmf")
>>> # Detect with NBW frequency bounds (default, best for real signals)
>>> annotated = detect_signals_energy(recording, label="burst")
>>> # Detect with OBW (more conservative, includes sidelobes)
>>> annotated = detect_signals_energy(
... recording, label="burst", freq_method="obw"
... )
>>> # Detect with full detected range (captures all spectral components)
>>> annotated = detect_signals_energy(
... recording, label="burst", freq_method="full-detected"
... )
"""
# Extract signal data (use first channel only)
signal = recording.data[0]
# Calculate smoothed signal power
kernel = np.ones(window_size) / window_size
smoothed_power = filtfilt(kernel, [1], np.abs(signal) ** 2)
# Estimate noise floor using segment-based median (robust to signal presence)
segments = np.array_split(smoothed_power, k)
noise_floor = np.median([np.mean(s) for s in segments])
# Detect signal boundaries (regions above threshold)
enter = noise_floor * threshold_factor
exit_level = enter * 0.8  # hysteresis: release threshold sits below the trigger
boundaries = []
start = None
active = False
for i, p in enumerate(smoothed_power):
if not active and p > enter:
start = i
active = True
elif active and p < exit_level:
boundaries.append((start, i - start))
active = False
if active:
boundaries.append((start, len(smoothed_power) - start))
# Merge boundaries that are closer than min_distance
merged_boundaries = []
if boundaries:
start, length = boundaries[0]
for next_start, next_length in boundaries[1:]:
if next_start - (start + length) < min_distance:
# Merge with current boundary
length = next_start + next_length - start
else:
# Save current and start new boundary
merged_boundaries.append((start, length))
start, length = next_start, next_length
# Add final boundary
merged_boundaries.append((start, length))
# Create annotations from detected boundaries
sample_rate = recording.metadata["sample_rate"]
center_frequency = recording.metadata.get("center_frequency", 0)
# Validate frequency method
valid_freq_methods = ["nbw", "obw", "full-detected", "full-bandwidth"]
if freq_method not in valid_freq_methods:
raise ValueError(f"Invalid freq_method '{freq_method}'. " f"Must be one of: {', '.join(valid_freq_methods)}")
annotations = []
for start_sample, sample_count in merged_boundaries:
# Calculate frequency bounds based on method
freq_lower, freq_upper = calculate_frequency_bounds(
freq_method, center_frequency, sample_rate, nfft, signal, start_sample, sample_count, obw_power
)
# Build comment JSON with type metadata
comment_data = {
"type": annotation_type,
"generator": "energy_detector",
"freq_method": freq_method,
"params": {
"threshold_factor": threshold_factor,
"window_size": window_size,
"noise_floor": float(noise_floor),
"threshold": float(enter),
},
}
anno = Annotation(
sample_start=start_sample,
sample_count=sample_count,
freq_lower_edge=freq_lower,
freq_upper_edge=freq_upper,
label=label,
comment=json.dumps(comment_data),
detail={"generator": "energy_detector", "freq_method": freq_method},
)
annotations.append(anno)
return Recording(data=recording.data, metadata=recording.metadata, annotations=recording.annotations + annotations)
def calculate_occupied_bandwidth(
signal: np.ndarray,
sampling_rate: float,
nfft: int = None,
power_percentage: float = 0.99,
):
"""Compute occupied bandwidth per ITU-R SM.328: the band containing
power_percentage of the total power, with equal tails excluded on each side.
Returns (obw_hz, lower_freq_hz, upper_freq_hz) as baseband offsets."""
if nfft is None:
nfft = max(65536, 2 ** int(np.floor(np.log2(len(signal)))))
window = np.blackman(len(signal))
spec = np.fft.fftshift(np.fft.fft(signal * window, n=nfft))
psd = np.abs(spec) ** 2
psd = psd / psd.sum() # normalize
freqs = np.fft.fftshift(np.fft.fftfreq(nfft, 1 / sampling_rate))
cdf = np.cumsum(psd)
tail = (1 - power_percentage) / 2
lower_idx = np.searchsorted(cdf, tail)
upper_idx = np.searchsorted(cdf, 1 - tail)
return freqs[upper_idx] - freqs[lower_idx], freqs[lower_idx], freqs[upper_idx]
def calculate_nominal_bandwidth(
signal: np.ndarray,
sampling_rate: float,
nfft: int = None,
power_percentage: float = 0.99,
) -> Tuple[float, float, float]:
"""
Calculate nominal bandwidth and center frequency.
Nominal bandwidth (NBW) is the occupied bandwidth along with the center
frequency of the signal's spectral occupancy. Useful for characterizing
signals with unknown or drifting center frequencies.
:param signal: Complex IQ signal samples
:type signal: np.ndarray
:param sampling_rate: Sample rate in Hz
:type sampling_rate: float
:param nfft: FFT size
:type nfft: int
:param power_percentage: Fraction of power to contain
:type power_percentage: float
:returns: Tuple of (lower_freq_hz, upper_freq_hz, center_frequency_hz)
:rtype: Tuple[float, float, float]
**Example**::
>>> from ria_toolkit_oss.annotations import calculate_nominal_bandwidth
>>> f_lo, f_hi, center = calculate_nominal_bandwidth(signal, sampling_rate=10e6)
>>> print(f"NBW: {(f_hi - f_lo)/1e6:.3f} MHz, Center: {center/1e6:.3f} MHz")
"""
bw, lower_freq, upper_freq = calculate_occupied_bandwidth(signal, sampling_rate, nfft, power_percentage)
# Center frequency is midpoint of occupied band
center_freq = (lower_freq + upper_freq) / 2
return lower_freq, upper_freq, center_freq
def calculate_full_detected_bandwidth(
signal: np.ndarray,
sampling_rate: float,
nfft: int = None,
start_offset: int = 1000,
) -> Tuple[float, float, float]:
"""
Calculate frequency range from lowest to highest spectral component.
Unlike OBW/NBW which define a power-based bandwidth, this calculates
the absolute frequency span from the lowest non-zero spectral component
to the highest non-zero component.
Useful for:
- Signals with spectral gaps
- Multiple parallel signals (captures all of them)
- Understanding total occupied spectrum vs. actual bandwidth
:param signal: Complex IQ signal samples
:type signal: np.ndarray
:param sampling_rate: Sample rate in Hz
:type sampling_rate: float
:param nfft: FFT size
:type nfft: int
:param start_offset: Skip samples at start
:type start_offset: int
:returns: Tuple of (bandwidth_hz, lower_freq_hz, upper_freq_hz)
:rtype: Tuple[float, float, float]
**Example**::
>>> # Signal with two components at different frequencies
>>> bw, f_low, f_high = calculate_full_detected_bandwidth(
... signal, sampling_rate=10e6, nfft=65536
... )
>>> print(f"Full span: {f_low/1e6:.3f} to {f_high/1e6:.3f} MHz")
"""
# Default the FFT size, then validate input (nfft may be None)
if nfft is None:
nfft = 65536
if len(signal) < nfft + start_offset:
raise ValueError(
f"Signal too short: need {nfft + start_offset} samples, "
f"got {len(signal)}. Reduce nfft or start_offset."
)
# Extract segment
signal_segment = signal[start_offset : nfft + start_offset]
# Compute FFT and power spectral density
freq_spectrum = np.fft.fft(signal_segment, n=nfft)
psd = np.abs(freq_spectrum) ** 2
# Shift to center DC
psd_shifted = np.fft.fftshift(psd)
freq_bins = np.fft.fftshift(np.fft.fftfreq(nfft, 1 / sampling_rate))
# Find noise floor (mean of lowest 10% of bins) and all bins above noise floor
noise_floor = np.mean(np.sort(psd_shifted)[: int(len(psd_shifted) * 0.1)])
above_noise = np.where(psd_shifted > noise_floor * 1.5)[0]
if len(above_noise) == 0:
# No signal above noise, return zero bandwidth
return 0.0, 0.0, 0.0
# Get frequency range of signal components
lower_idx = above_noise[0]
upper_idx = above_noise[-1]
lower_freq = freq_bins[lower_idx]
upper_freq = freq_bins[upper_idx]
bandwidth = upper_freq - lower_freq
return bandwidth, lower_freq, upper_freq
def annotate_with_obw(
recording: Recording,
label: str = "signal",
annotation_type: str = "standalone",
nfft: int = None,
power_percentage: float = 0.99,
) -> Recording:
"""
Create a single annotation spanning the occupied bandwidth of the entire recording.
Analyzes the full recording to find its occupied bandwidth and creates an annotation
covering that frequency range for the entire time duration.
:param recording: Recording to analyze
:type recording: Recording
:param label: Annotation label
:type label: str
:param annotation_type: Annotation type
:type annotation_type: str
:param nfft: FFT size
:type nfft: int
:param power_percentage: Power percentage for OBW calculation
:type power_percentage: float
:returns: Recording with OBW annotation added
:rtype: Recording
**Example**::
>>> from ria_toolkit_oss.annotations import annotate_with_obw
>>> annotated = annotate_with_obw(recording, label="signal_obw")
"""
signal = recording.data[0]
sample_rate = recording.metadata["sample_rate"]
center_freq = recording.metadata.get("center_frequency", 0)
# Calculate OBW
obw, lower_offset, upper_offset = calculate_occupied_bandwidth(signal, sample_rate, nfft, power_percentage)
# Convert baseband offsets to absolute frequencies
freq_lower = center_freq + lower_offset
freq_upper = center_freq + upper_offset
# Create comment JSON
comment_data = {
"type": annotation_type,
"generator": "obw_annotator",
"obw_hz": float(obw),
"power_percentage": power_percentage,
"params": {"nfft": nfft},
}
# Create annotation spanning entire recording
anno = Annotation(
sample_start=0,
sample_count=len(signal),
freq_lower_edge=freq_lower,
freq_upper_edge=freq_upper,
label=label,
comment=json.dumps(comment_data),
detail={"generator": "obw_annotator", "obw_hz": float(obw)},
)
return Recording(data=recording.data, metadata=recording.metadata, annotations=recording.annotations + [anno])
def calculate_frequency_bounds(
freq_method, center_frequency, sample_rate, nfft, signal, start_sample, sample_count, obw_power
):
if freq_method == "full-bandwidth":
# Full Nyquist span
freq_lower = center_frequency - (sample_rate / 2)
freq_upper = center_frequency + (sample_rate / 2)
else:
# Extract segment for frequency analysis
segment_start = start_sample
segment_end = min(start_sample + sample_count, len(signal))
segment = signal[segment_start:segment_end]
if nfft is None or len(segment) >= nfft:
if freq_method == "nbw":
# Nominal bandwidth (OBW + center frequency)
try:
lower_freq, upper_freq, _ = calculate_nominal_bandwidth(segment, sample_rate, nfft, obw_power)
freq_lower = center_frequency + lower_freq
freq_upper = center_frequency + upper_freq
except (ValueError, IndexError):
# Fallback if calculation fails
freq_lower = center_frequency - (sample_rate / 2)
freq_upper = center_frequency + (sample_rate / 2)
elif freq_method == "obw":
# Occupied bandwidth
try:
_, f_lower, f_upper = calculate_occupied_bandwidth(segment, sample_rate, nfft, obw_power)
freq_lower = center_frequency + f_lower
freq_upper = center_frequency + f_upper
except (ValueError, IndexError):
# Fallback if calculation fails
freq_lower = center_frequency - (sample_rate / 2)
freq_upper = center_frequency + (sample_rate / 2)
elif freq_method == "full-detected":
# Full detected range (lowest to highest component)
try:
_, f_lower, f_upper = calculate_full_detected_bandwidth(segment, sample_rate, nfft)
freq_lower = center_frequency + f_lower
freq_upper = center_frequency + f_upper
except (ValueError, IndexError):
# Fallback if calculation fails
freq_lower = center_frequency - (sample_rate / 2)
freq_upper = center_frequency + (sample_rate / 2)
else:
# Segment too short for FFT, use full bandwidth
freq_lower = center_frequency - (sample_rate / 2)
freq_upper = center_frequency + (sample_rate / 2)
return freq_lower, freq_upper
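The enter/exit trigger at the heart of `detect_signals_energy` can be exercised in isolation. The following is a standalone sketch of the same dual-threshold (hysteresis) loop; the function name and the synthetic power trace are illustrative, not part of the module:

```python
import numpy as np

def detect_bursts(power, noise_floor, threshold_factor=1.2, exit_ratio=0.8):
    """Dual-threshold burst detection: trigger when smoothed power rises
    above enter = noise_floor * threshold_factor, release only when it
    falls below exit = enter * exit_ratio (prevents threshold chatter)."""
    enter = noise_floor * threshold_factor
    exit_level = enter * exit_ratio
    bursts, start, active = [], None, False
    for i, p in enumerate(power):
        if not active and p > enter:
            start, active = i, True
        elif active and p < exit_level:
            bursts.append((start, i - start))
            active = False
    if active:  # burst runs to the end of the capture
        bursts.append((start, len(power) - start))
    return bursts

# Synthetic smoothed power: noise floor at 1.0 with one burst at 3.0
power = np.ones(100)
power[40:60] = 3.0
bursts = detect_bursts(power, noise_floor=1.0, threshold_factor=1.5)  # [(40, 20)]
```

Because the release threshold sits below the trigger, a burst whose power dips briefly between the two levels stays a single detection instead of fragmenting.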


@@ -1,435 +0,0 @@
"""
Parallel signal separation for multi-component frequency-offset signals.
Provides methods to detect and separate overlapping frequency-domain signals
that occupy the same time window but different frequency bands.
This module implements **spectral peak detection** to identify distinct frequency
components and split single time-domain annotations into frequency-specific
sub-annotations.
**Key Design Decisions** (per Codex review):
1. **Complex IQ Support**: Uses `scipy.signal.stft` with `return_onesided=False`
for proper complex signal handling. Window length automatically adapts to
signal length via `nperseg=min(nfft, len(signal))` to handle bursts <nfft.
2. **Frequency Representation**: Components are detected in **relative** frequency
(baseband, centered at 0 Hz). Caller must add RF center_frequency_hz when
writing to SigMF annotations. This separation of concerns avoids the frequency
context bug where absolute Hz would be meaningless for baseband processing.
3. **Bandwidth Estimation**: Component bounds are taken from the contiguous
frequency region above the detection threshold (3 dB over the estimated
noise floor); components narrower than min_component_bw are dropped.
4. **Noise Floor**: Auto-estimated from the data (20th percentile of the
time-aggregated frequency profile) to adapt across hardware (Pluto vs.
ThinkRF). User can override if needed.
5. **Filter Sizing (Optional FIR extraction)**: When extracting components,
uses Kaiser window FIR with proper stopband specification. Auto-sizes
numtaps based on desired transition bandwidth. Includes downsampling
guidance for long captures.
6. **CLI Surface**: Single `separate` subcommand for all separation operations.
Can be chained after any detector or used standalone on existing annotations.
Example:
Two WiFi channels captured simultaneously:
>>> from ria_toolkit_oss.annotations import find_spectral_components
>>> # Detect the two distinct channels (returns relative frequencies)
>>> components = find_spectral_components(signal, sampling_rate=20e6)
>>> print(f"Found {len(components)} components")
Found 2 components
The module is designed to work with detected time-domain annotations,
allowing splitting of overlapping signals into separate training samples.
"""
import json
from typing import List, Optional, Tuple
import numpy as np
from scipy import ndimage
from scipy import signal as scipy_signal
from ria_toolkit_oss.datatypes import Annotation, Recording
def find_spectral_components(
signal_data: np.ndarray,
sampling_rate: float,
nfft: int = 65536,
noise_threshold_db: Optional[float] = None,
min_component_bw: float = 50e3,
time_percentile: float = 70.0,
) -> List[Tuple[float, float, float]]:
"""
Find distinct frequency components using spectral peak detection.
Identifies separate frequency components in a signal by analyzing the power
spectral density and finding peaks corresponding to distinct signals. This is
useful for separating parallel signals that occupy different frequency bands.
**Frequency Representation**: Returns frequencies in **baseband/relative** Hz
(centered at 0). To get absolute RF frequencies, add center_frequency_hz from
recording metadata to all returned values.
Algorithm:
1. Compute a spectrogram via STFT (two-sided, properly handles complex IQ)
2. Aggregate power across time using a percentile (robust to short bursts)
3. Auto-estimate the noise floor from the data if not specified
4. Lightly smooth the frequency profile to reduce spurious peaks
5. Threshold and label contiguous frequency regions as components
6. Filter components below the minimum bandwidth threshold
:param signal_data: Complex IQ signal samples (np.complex64/128)
:type signal_data: np.ndarray
:param sampling_rate: Sample rate in Hz
:type sampling_rate: float
:param nfft: FFT size / window length for the STFT. Automatically capped at
signal length to handle bursts (default: 65536)
:type nfft: int
:param noise_threshold_db: Noise floor estimate in dB. If None (default),
auto-estimated as the 20th percentile of the time-aggregated
frequency profile. Adjust across hardware (Pluto: ~-100, ThinkRF: ~-60).
:type noise_threshold_db: Optional[float]
:param min_component_bw: Minimum component bandwidth in Hz (default: 50 kHz)
:type min_component_bw: float
:param time_percentile: Percentile used to aggregate spectrogram power across
time (default: 70.0); higher values emphasize short bursts
:type time_percentile: float
:returns: List of (center_freq_hz, lower_freq_hz, upper_freq_hz) tuples.
**All frequencies are relative (baseband, 0-centered).**
Add recording metadata['center_frequency'] to get absolute RF frequencies.
:rtype: List[Tuple[float, float, float]]
:raises ValueError: If signal has fewer than 256 samples
**Example**::
>>> from ria.io import load_recording
>>> from ria_toolkit_oss.annotations import find_spectral_components
>>> recording = load_recording("capture.sigmf")
>>> segment = recording.data[0][start:end]
>>> # Components in relative (baseband) frequency
>>> components = find_spectral_components(segment, sampling_rate=20e6)
>>> for center_rel, lower_rel, upper_rel in components:
... # Convert to absolute RF frequency
... center_abs = recording.metadata['center_frequency'] + center_rel
... print(f"Component @ {center_abs/1e9:.3f} GHz")
"""
# Validate input
min_samples = 256
if len(signal_data) < min_samples:
raise ValueError(f"Signal too short: need at least {min_samples} samples, " f"got {len(signal_data)}.")
# Compute a spectrogram via STFT for complex IQ signals
# CRITICAL: return_onesided=False for proper complex signal handling
nperseg = min(nfft, len(signal_data))
noverlap = nperseg // 2
# --- STFT ---
freqs, times, Zxx = scipy_signal.stft(
signal_data,
fs=sampling_rate,
window="blackman",
nperseg=nperseg,
noverlap=noverlap,
return_onesided=False,
boundary=None,
)
# Shift zero freq to center
Zxx = np.fft.fftshift(Zxx, axes=0)
freqs = np.fft.fftshift(freqs)
# Power spectrogram
power = np.abs(Zxx) ** 2
power_db = 10 * np.log10(power + 1e-12)
# --- Aggregate across time robustly ---
# Using percentile instead of mean prevents short signals from being diluted
freq_profile_db = np.percentile(power_db, time_percentile, axis=1)
# --- Noise floor estimation ---
if noise_threshold_db is None:
noise_threshold_db = np.percentile(freq_profile_db, 20)
threshold = noise_threshold_db + 3 # 3 dB above noise floor
# --- Smooth lightly (avoid merging nearby signals) ---
freq_profile_db = ndimage.gaussian_filter1d(freq_profile_db, sigma=1.5)
# --- Binary mask of significant frequencies ---
mask = freq_profile_db > threshold
# --- Find contiguous frequency regions ---
labeled, num_features = ndimage.label(mask)
components = []
for region_label in range(1, num_features + 1):
region_indices = np.where(labeled == region_label)[0]
if len(region_indices) == 0:
continue
lower_idx = region_indices[0]
upper_idx = region_indices[-1]
lower_freq = freqs[lower_idx]
upper_freq = freqs[upper_idx]
bw = upper_freq - lower_freq
if bw < min_component_bw:
continue
center_freq = (lower_freq + upper_freq) / 2
components.append((center_freq, lower_freq, upper_freq))
return components
def split_annotation_by_components(
annotation: Annotation,
signal: np.ndarray,
sampling_rate: float,
center_frequency_hz: float = 0.0,
nfft: int = 65536,
noise_threshold_db: Optional[float] = None,
min_component_bw: float = 50e3,
) -> List[Annotation]:
"""
Split an annotation into multiple annotations by detected frequency components.
Takes an existing annotation spanning multiple frequency components and
analyzes the frequency content to create separate sub-annotations for
each distinct frequency component.
**Use case**: Energy detection found a time window with 2-3 parallel WiFi
channels. This function splits it into separate annotations per channel.
**Frequency Handling**: `find_spectral_components` returns relative (baseband)
frequencies. This function adds `center_frequency_hz` to convert to absolute
RF frequencies for SigMF annotation bounds. This ensures correct frequency
context across baseband and RF domains.
:param annotation: Original annotation to split
:type annotation: Annotation
:param signal: Full signal array (complex IQ)
:type signal: np.ndarray
:param sampling_rate: Sample rate in Hz
:type sampling_rate: float
:param center_frequency_hz: RF center frequency to add to relative frequencies
from peak detection (default: 0.0 = baseband)
:type center_frequency_hz: float
:param nfft: FFT size for analysis (default: 65536, auto-capped at signal length)
:type nfft: int
:param noise_threshold_db: Noise floor threshold in dB. If None (default),
auto-estimates from data.
:type noise_threshold_db: Optional[float]
:param min_component_bw: Minimum component bandwidth in Hz (default: 50 kHz)
:type min_component_bw: float
:returns: List of new annotations (one per detected component).
Returns empty list if no components found or segment too short.
:rtype: List[Annotation]
**Example**::
>>> from ria.io import load_recording
>>> from ria_toolkit_oss.annotations import split_annotation_by_components
>>> recording = load_recording("capture.sigmf")
>>> # Original annotation spans multiple channels
>>> original = recording.annotations[0]
>>> # Split using RF center frequency from metadata
>>> components = split_annotation_by_components(
... original,
... recording.data[0],
... recording.metadata['sample_rate'],
... center_frequency_hz=recording.metadata.get('center_frequency', 0.0)
... )
>>> print(f"Split into {len(components)} components")
Split into 2 components
**Algorithm**:
1. Extract segment corresponding to annotation time bounds
2. Find frequency components in that segment (returns relative frequencies)
3. Add center_frequency_hz to get absolute RF frequencies
4. Create new annotation for each component
5. Preserve original metadata (label, type, etc.)
6. Add component info to comment JSON
**Notes**:
- Original annotation is not modified
- Returns empty list if segment too short (<256 samples)
- For segments <nfft, the STFT window is capped at the segment length (see find_spectral_components)
- Each component inherits label from original
- Component frequencies in comment JSON are absolute (RF) frequencies
"""
# Extract segment corresponding to annotation time bounds
start_sample = annotation.sample_start
end_sample = min(start_sample + annotation.sample_count, len(signal))
segment = signal[start_sample:end_sample]
# Validate segment length is enough for spectral analysis
if len(segment) < 256:
return []
# Find components in this segment (returns relative/baseband frequencies)
try:
components = find_spectral_components(segment, sampling_rate, nfft, noise_threshold_db, min_component_bw)
except ValueError:
# Spectral analysis failed (e.g., not complex IQ)
return []
if not components:
# No components found
return []
# Create annotations for each component
new_annotations = []
for center_freq_rel, lower_freq_rel, upper_freq_rel in components:
# Convert relative (baseband) frequencies to absolute (RF) frequencies
center_freq_abs = center_frequency_hz + center_freq_rel
lower_freq_abs = center_frequency_hz + lower_freq_rel
upper_freq_abs = center_frequency_hz + upper_freq_rel
# Parse original annotation metadata
try:
comment_data = json.loads(annotation.comment)
except (json.JSONDecodeError, TypeError):
comment_data = {"type": "standalone"}
# Add component information (with absolute RF frequencies)
comment_data["split_from_annotation"] = True
comment_data["original_freq_bounds"] = {
"lower": float(annotation.freq_lower_edge),
"upper": float(annotation.freq_upper_edge),
}
comment_data["component_freq_bounds_rf"] = {
"center": float(center_freq_abs),
"lower": float(lower_freq_abs),
"upper": float(upper_freq_abs),
}
# Create new annotation with absolute RF frequency bounds
new_anno = Annotation(
sample_start=annotation.sample_start,
sample_count=annotation.sample_count,
freq_lower_edge=lower_freq_abs,
freq_upper_edge=upper_freq_abs,
label=annotation.label,
comment=json.dumps(comment_data),
detail={
"generator": "parallel_signal_separator",
"center_freq_hz": float(center_freq_abs),
},
)
new_annotations.append(new_anno)
return new_annotations
def split_recording_annotations(
recording: Recording,
indices: Optional[List[int]] = None,
nfft: int = 65536,
noise_threshold_db: Optional[float] = None,
min_component_bw: float = 50e3,
) -> Recording:
"""
Split multiple annotations in a recording by frequency components.
Processes specified annotations (or all if indices=None), replacing each
with its frequency-separated components. Uses RF center_frequency from
recording metadata for proper absolute frequency conversion.
:param recording: Recording to process
:type recording: Recording
:param indices: Annotation indices to split (None = all, default: None).
Use indices=[] to skip splitting (returns unchanged recording).
:type indices: Optional[List[int]]
:param nfft: FFT size for spectral analysis (default: 65536,
auto-capped at signal segment length)
:type nfft: int
:param noise_threshold_db: Noise floor threshold in dB. If None (default),
auto-estimates from each segment.
:type noise_threshold_db: Optional[float]
:param min_component_bw: Minimum component bandwidth in Hz (default: 50 kHz).
Components narrower than this are filtered out.
:type min_component_bw: float
:returns: New Recording with split annotations
:rtype: Recording
**Example**::
>>> from ria.io import load_recording
>>> from ria_toolkit_oss.annotations import split_recording_annotations
>>> recording = load_recording("capture.sigmf")
>>> # Split all annotations
>>> split_rec = split_recording_annotations(recording)
>>> print(f"Original: {len(recording.annotations)} annotations")
>>> print(f"Split: {len(split_rec.annotations)} annotations")
Original: 5 annotations
Split: 9 annotations
**Algorithm**:
1. For each annotation in indices (or all if None):
2. Call split_annotation_by_components with RF center_frequency
3. If components found, replace annotation with components
4. If no components found, keep original annotation
5. Annotations not in indices are kept unchanged
**Notes**:
- Original recording is not modified
- Returns empty Recording.annotations if recording has no annotations
- RF center_frequency from metadata ensures correct absolute frequencies
- If an annotation can't be split (too short, wrong format), original kept
"""
if indices is None:
# Split all annotations
indices = list(range(len(recording.annotations)))
if not recording.annotations:
# No annotations to split
return recording
signal = recording.data[0]
sample_rate = recording.metadata["sample_rate"]
center_frequency = recording.metadata.get("center_frequency", 0.0)
# Build new annotation list
new_annotations = []
for i, anno in enumerate(recording.annotations):
if i in indices:
# Attempt to split this annotation
try:
components = split_annotation_by_components(
anno,
signal,
sample_rate,
center_frequency_hz=center_frequency,
nfft=nfft,
noise_threshold_db=noise_threshold_db,
min_component_bw=min_component_bw,
)
if components:
# Split successful, use components
new_annotations.extend(components)
else:
# No components found, keep original
new_annotations.append(anno)
except Exception:
# Split failed for any reason, keep original
new_annotations.append(anno)
else:
# Not in split list, keep as-is
new_annotations.append(anno)
return Recording(data=recording.data, metadata=recording.metadata, annotations=new_annotations)
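The threshold-and-label approach used by `find_spectral_components` can be illustrated on a leakage-free synthetic signal. The sketch below uses bin-aligned complex tones and illustrative parameters (sample rate, tone frequencies, percentile) rather than the module's defaults:

```python
import numpy as np
from scipy import ndimage

fs = 1e6
n = 1000
t = np.arange(n) / fs
# Two complex tones at -200 kHz and +150 kHz (exact FFT bins, so no leakage)
sig = np.exp(2j * np.pi * -200e3 * t) + np.exp(2j * np.pi * 150e3 * t)

# Power spectrum in dB, DC shifted to the center
psd = np.abs(np.fft.fftshift(np.fft.fft(sig))) ** 2
freqs = np.fft.fftshift(np.fft.fftfreq(n, 1 / fs))
psd_db = 10 * np.log10(psd + 1e-12)

# Threshold 3 dB above an estimated noise floor, then label contiguous bins
threshold = np.percentile(psd_db, 20) + 3
labeled, num = ndimage.label(psd_db > threshold)
components = []
for k in range(1, num + 1):
    idx = np.where(labeled == k)[0]
    components.append((freqs[idx[0]], freqs[idx[-1]]))  # (lower, upper) per region
```

Here `num` comes out as 2 and the component bounds land on the tone frequencies; real captures add leakage and noise, which is why the module smooths the profile and enforces a minimum component bandwidth.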


@@ -1,35 +0,0 @@
import numpy as np
from ria_toolkit_oss.datatypes import Recording
def qualify_slice_from_annotations(recording: Recording, slice_length: int):
"""
Slice a recording into many smaller recordings,
discarding any slices which do not have annotations that apply to those samples.
Used together with an annotation-based qualifier.
:param recording: The recording to slice.
:type recording: Recording
:param slice_length: The length in samples of a slice.
:type slice_length: int"""
if len(recording.annotations) == 0:
print("Warning, no annotations.")
annotation_mask = np.zeros(len(recording.data[0]))
for annotation in recording.annotations:
annotation_mask[annotation.sample_start : annotation.sample_start + annotation.sample_count] = 1
output_recordings = []
for i in range(len(recording.data[0]) // slice_length):  # every full slice, including the last
start_index = slice_length * i
end_index = slice_length * (i + 1)
if 1 in annotation_mask[start_index:end_index]:
sl = recording.data[:, start_index:end_index]
output_recordings.append(Recording(data=sl, metadata=recording.metadata))
return output_recordings
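The annotation-mask test above can be sketched with plain arrays; all sizes and sample positions below are illustrative. A slice is kept whenever its window overlaps any annotated sample:

```python
import numpy as np

# A 10 000-sample recording with one annotated burst at samples 3000-4999
num_samples, slice_length = 10_000, 1_000
mask = np.zeros(num_samples, dtype=bool)
mask[3000:5000] = True  # samples covered by an annotation

# Keep the index of every full slice whose window touches the mask
kept = [
    i
    for i in range(num_samples // slice_length)
    if mask[i * slice_length : (i + 1) * slice_length].any()
]
# kept == [3, 4]
```

A boolean mask makes the overlap test a cheap `any()` per window, which scales to millions of samples without per-annotation interval arithmetic.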


@@ -1,97 +0,0 @@
import numpy as np
from scipy.signal import butter, lfilter
from ria_toolkit_oss.datatypes.annotation import Annotation
from ria_toolkit_oss.datatypes.recording import Recording
def isolate_signal(recording: Recording, annotation: Annotation) -> Recording:
"""
Slice, filter and frequency shift the input recording according to the bounding box defined by the annotation.
:param recording: The input Recording to be sliced.
:type recording: Recording
:param annotation: The Annotation object defining the area of the recording to isolate.
:type annotation: Annotation
:returns: The subsection of the original recording defined by the annotation.
:rtype: Recording"""
sample_start = max(0, annotation.sample_start)
sample_stop = min(len(recording), annotation.sample_start + annotation.sample_count)
anno_base_center_freq = (annotation.freq_lower_edge + annotation.freq_upper_edge) / 2 - recording.metadata.get(
"center_frequency", 0
)
anno_bw = annotation.freq_upper_edge - annotation.freq_lower_edge
signal_slice = recording.data[0, sample_start:sample_stop]
# normalize
signal_slice = signal_slice / np.max(np.abs(signal_slice))
isolation_bw = anno_bw
# frequency shift the center of the box about zero
shifted_signal_slice = frequency_shift_iq_samples(
iq_samples=signal_slice,
sample_rate=recording.metadata["sample_rate"],
shift_frequency=-1 * anno_base_center_freq,
)
# filter
if isolation_bw < recording.metadata["sample_rate"] - 1:
filtered_signal = apply_complex_lowpass_filter(
signal=shifted_signal_slice, cutoff_frequency=isolation_bw, sample_rate=recording.metadata["sample_rate"]
)
else:
filtered_signal = shifted_signal_slice
output = Recording(data=[filtered_signal], metadata=recording.metadata)
return output
def frequency_shift_iq_samples(iq_samples, sample_rate, shift_frequency):
# Number of samples
num_samples = len(iq_samples)
# Create a time vector from 0 to the total duration in seconds
time_vector = np.arange(num_samples) / sample_rate
# Generate the complex exponential for the frequency shift
complex_exponential = np.exp(1j * 2 * np.pi * shift_frequency * time_vector)
# Apply the frequency shift to the IQ samples
shifted_samples = iq_samples * complex_exponential
return shifted_samples
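The complex-exponential mixing in `frequency_shift_iq_samples` can be checked with a pure tone: shifting a +50 kHz tone by -50 kHz should move its FFT peak to DC. A self-contained sketch (the tone is placed on an exact FFT bin so the peak is unambiguous):

```python
import numpy as np

fs = 1e6
n = 1000                                   # 1 kHz bin spacing; 50 kHz is an exact bin
t = np.arange(n) / fs
tone = np.exp(2j * np.pi * 50e3 * t)       # complex tone at +50 kHz

# Same operation as frequency_shift_iq_samples: multiply by exp(j*2*pi*f_shift*t).
shifted = tone * np.exp(1j * 2 * np.pi * -50e3 * t)

freqs = np.fft.fftfreq(n, 1 / fs)
peak_before = freqs[np.argmax(np.abs(np.fft.fft(tone)))]
peak_after = freqs[np.argmax(np.abs(np.fft.fft(shifted)))]
# peak_before sits at 50 kHz; peak_after sits at 0 Hz
```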
# Function to apply a lowpass Butterworth filter to a complex signal
def apply_complex_lowpass_filter(signal, cutoff_frequency, sample_rate, order=5):
# Design the lowpass filter
b, a = design_complex_lowpass_filter(cutoff_frequency, sample_rate, order)
# Apply the lowpass filter
filtered_signal = lfilter(b, a, signal)
return filtered_signal
def design_complex_lowpass_filter(cutoff_frequency, sample_rate, order=5):
# Nyquist frequency for complex signals is the sample rate
nyquist = sample_rate
# Ensure the cutoff frequency is positive and within the Nyquist limit
if cutoff_frequency <= 0 or cutoff_frequency > nyquist:
raise ValueError("Cutoff frequency must be between 0 and the Nyquist frequency.")
# Normalize the cutoff frequency to the Nyquist frequency
cutoff_normalized = cutoff_frequency / nyquist
# Create a Butterworth lowpass filter
b, a = butter(order, cutoff_normalized, btype="low")
return b, a
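A usage sketch of the filter chain above on a two-tone signal. One detail worth noting: `butter` called without `fs=` interprets its normalized cutoff relative to fs/2, so normalizing by the full sample rate as `design_complex_lowpass_filter` does appears to yield an effective cutoff of half the requested frequency; the tones below are chosen well clear of that boundary either way:

```python
import numpy as np
from scipy.signal import butter, lfilter

fs = 1e6
t = np.arange(10_000) / fs
# In-band tone at 10 kHz plus an out-of-band tone at 300 kHz.
sig = np.exp(2j * np.pi * 10e3 * t) + np.exp(2j * np.pi * 300e3 * t)

# Same design as above: order-5 Butterworth, cutoff normalized by the sample rate.
b, a = butter(5, 100e3 / fs, btype="low")
filtered = lfilter(b, a, sig)

spec = np.abs(np.fft.fft(filtered))
freqs = np.fft.fftfreq(len(t), 1 / fs)
p_in = spec[np.argmin(np.abs(freqs - 10e3))]    # 10 kHz bin: preserved
p_out = spec[np.argmin(np.abs(freqs - 300e3))]  # 300 kHz bin: strongly attenuated
```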

View File

@ -1,359 +0,0 @@
"""
Temporal signal detection and boundary refinement via Hysteresis Thresholding.
Provides methods to detect signal bursts in the time domain by triggering on
smoothed power peaks and expanding boundaries to capture the full energy envelope.
This module implements a **dual-threshold trigger** to solve the 'chatter'
problem in noisy environments, ensuring that signal annotations encapsulate
the entire rise and fall of a burst rather than just the peak.
**Key Design Decisions**:
1. **Hysteresis Logic (Dual-Threshold)**:
- **Trigger**: High threshold (`threshold * max_power`) ensures high confidence
in signal presence.
- **Boundary**: Low threshold (`0.5 * trigger`) allows the annotation to
"crawl" outward, capturing the lower-energy start and end of the burst
often missed by simple single-threshold detectors.
2. **Temporal Smoothing**: Uses a moving average window (`window_size`) prior
to thresholding. This prevents high-frequency noise spikes from causing
fragmented annotations and provides a more stable estimate of the
signal's power envelope.
3. **Spectral Profiling**: Once a temporal segment is isolated, the module
performs an automated FFT analysis. It thresholds the smoothed magnitude
spectrum between the spectral floor and peak to define the frequency
boundaries (`f_min`, `f_max`), allowing the detector to work on narrowband
and wideband signals without manual frequency tuning.
4. **Baseband/RF Mapping**: Automatically handles the conversion from
relative FFT bin frequencies to absolute RF frequencies by referencing
`recording.metadata["center_frequency"]`.
5. **False Positive Mitigation**: Implements a hard minimum duration check
(5 ms) to ignore transient hardware spikes or noise floor fluctuations
that do not constitute a valid signal burst.
The module is designed to be the primary "first-pass" detector for pulsed
waveforms (like ADS-B, Lora, or bursty FSK) before passing them to
classification or demodulation stages.
"""
import json
from typing import Optional
import numpy as np
from ria_toolkit_oss.datatypes import Annotation, Recording
def _find_ranges(indices, max_gap):
"""
Groups individual indices into continuous temporal ranges.
Args:
indices: Array of indices where the signal exceeded a threshold.
max_gap: Maximum gap allowed between indices to consider them part
of the same range.
Returns:
A list of (start, stop) tuples representing detected signal segments.
"""
if len(indices) == 0:
return []
start = indices[0]
prev = indices[0]
ranges = []
for i in range(1, len(indices)):
if indices[i] - prev > max_gap:
ranges.append((start, prev))
start = indices[i]
prev = indices[i]
ranges.append((start, prev))
return ranges
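A quick illustration of the grouping rule, reimplemented inline so the snippet is self-contained:

```python
def find_ranges(indices, max_gap):
    # Same grouping logic as _find_ranges above.
    if len(indices) == 0:
        return []
    start = prev = indices[0]
    ranges = []
    for idx in indices[1:]:
        if idx - prev > max_gap:      # gap too large: close the current range
            ranges.append((start, prev))
            start = idx
        prev = idx
    ranges.append((start, prev))
    return ranges

print(find_ranges([3, 4, 5, 20, 21, 40], max_gap=2))
# → [(3, 5), (20, 21), (40, 40)]
```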
def _expand_and_filter_ranges(
smoothed_power: np.ndarray,
initial_ranges: list[tuple[int, int]],
boundary_val: float,
min_duration_samples: int,
) -> list[tuple[int, int]]:
"""Apply hysteresis expansion and minimum-duration filtering."""
out: list[tuple[int, int]] = []
n = len(smoothed_power)
for start, stop in initial_ranges:
if (stop - start) < min_duration_samples:
continue
true_start = start
while true_start > 0 and smoothed_power[true_start] > boundary_val:
true_start -= 1
true_stop = stop
while true_stop < n - 1 and smoothed_power[true_stop] > boundary_val:
true_stop += 1
if (true_stop - true_start) >= min_duration_samples:
out.append((true_start, true_stop))
return out
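The hysteresis expansion is the key step: a range triggered by the high threshold crawls outward while the envelope stays above the lower boundary. A self-contained sketch of the same walk for a single range:

```python
def expand(power, start, stop, boundary, min_dur):
    # Mirror of the per-range logic in _expand_and_filter_ranges above.
    if stop - start < min_dur:        # too short to be a real burst
        return None
    while start > 0 and power[start] > boundary:
        start -= 1                    # crawl left toward the burst's true onset
    n = len(power)
    while stop < n - 1 and power[stop] > boundary:
        stop += 1                     # crawl right toward the true tail
    return (start, stop) if stop - start >= min_dur else None

envelope = [0.1, 0.1, 0.6, 0.8, 1.0, 0.8, 0.6, 0.1, 0.1]
print(expand(envelope, 3, 5, boundary=0.5, min_dur=2))  # → (1, 7)
```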
def _merge_ranges(ranges: list[tuple[int, int]], max_gap: int) -> list[tuple[int, int]]:
"""Merge overlapping or near-adjacent ranges."""
if not ranges:
return []
ranges = sorted(ranges, key=lambda r: r[0])
merged = [ranges[0]]
for s, e in ranges[1:]:
last_s, last_e = merged[-1]
if s <= last_e + max_gap:
merged[-1] = (last_s, max(last_e, e))
else:
merged.append((s, e))
return merged
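Merging then collapses overlapping or near-adjacent detections from the three passes; an inline reimplementation for illustration:

```python
def merge(ranges, max_gap):
    # Same merge rule as _merge_ranges above.
    if not ranges:
        return []
    ranges = sorted(ranges)
    out = [ranges[0]]
    for s, e in ranges[1:]:
        last_s, last_e = out[-1]
        if s <= last_e + max_gap:            # close enough: absorb into the last range
            out[-1] = (last_s, max(last_e, e))
        else:
            out.append((s, e))
    return out

print(merge([(40, 50), (0, 10), (12, 20)], max_gap=3))  # → [(0, 20), (40, 50)]
```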
def _estimate_noise_floor(power: np.ndarray, quantile: float = 20.0) -> float:
"""Estimate baseline from the quieter portion of the envelope."""
return float(np.percentile(power, quantile))
def _estimate_group_gap(sample_rate: float) -> int:
"""Use a fixed temporal grouping gap instead of reusing the smoothing window."""
return max(1, int(0.001 * sample_rate))
def _estimate_spectral_bounds(signal_segment: np.ndarray, sample_rate: float) -> tuple[float, float]:
"""Estimate occupied bandwidth from a smoothed magnitude spectrum."""
if len(signal_segment) == 0:
return -sample_rate / 4, sample_rate / 4
window = np.hanning(len(signal_segment))
windowed = signal_segment * window
fft_data = np.abs(np.fft.fftshift(np.fft.fft(windowed)))
fft_freqs = np.fft.fftshift(np.fft.fftfreq(len(signal_segment), 1 / sample_rate))
# Smooth the spectrum so noise-like wideband bursts form a contiguous mask
# instead of thousands of tiny isolated runs.
spectral_smooth_bins = max(5, min(257, (len(signal_segment) // 512) | 1))
spectral_kernel = np.ones(spectral_smooth_bins, dtype=np.float64) / spectral_smooth_bins
smoothed_fft = np.convolve(fft_data, spectral_kernel, mode="same")
spectral_floor = float(np.percentile(smoothed_fft, 20))
spectral_peak = float(np.max(smoothed_fft))
spectral_ratio = spectral_peak / max(spectral_floor, 1e-12)
if spectral_ratio < 1.2:
return -sample_rate / 4, sample_rate / 4
spectral_thresh = spectral_floor + 0.1 * (spectral_peak - spectral_floor)
sig_indices = np.where(smoothed_fft > spectral_thresh)[0]
if len(sig_indices) == 0:
peak_idx = int(np.argmax(smoothed_fft))
bin_hz = sample_rate / len(signal_segment)
half_bins = max(1, int(np.ceil(10_000.0 / bin_hz)))
lo_idx = max(0, peak_idx - half_bins)
hi_idx = min(len(smoothed_fft) - 1, peak_idx + half_bins)
else:
runs = _find_ranges(sig_indices, max_gap=max(1, spectral_smooth_bins // 2))
peak_idx = int(np.argmax(smoothed_fft))
lo_idx, hi_idx = min(
runs,
key=lambda run: 0 if run[0] <= peak_idx <= run[1] else min(abs(run[0] - peak_idx), abs(run[1] - peak_idx)),
)
# Prevent extremely narrow tone boxes from collapsing to just a few bins.
min_total_bw_hz = 20_000.0
min_half_bins = max(1, int(np.ceil((min_total_bw_hz / 2) / (sample_rate / len(signal_segment)))))
center_idx = int(round((lo_idx + hi_idx) / 2))
lo_idx = max(0, min(lo_idx, center_idx - min_half_bins))
hi_idx = min(len(smoothed_fft) - 1, max(hi_idx, center_idx + min_half_bins))
return float(fft_freqs[lo_idx]), float(fft_freqs[hi_idx])
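The spectral-bounds idea can be demonstrated with a simplified standalone version (smaller smoothing kernel and a 25% threshold rather than the exact guards above); a +100 kHz tone should produce a narrow band bracketing 100 kHz:

```python
import numpy as np

def occupied_band(segment, fs, floor_q=20, frac=0.25):
    # Simplified sketch of _estimate_spectral_bounds: windowed spectrum,
    # light smoothing, then a threshold between the spectral floor and peak.
    spec = np.abs(np.fft.fftshift(np.fft.fft(segment * np.hanning(len(segment)))))
    freqs = np.fft.fftshift(np.fft.fftfreq(len(segment), 1 / fs))
    smooth = np.convolve(spec, np.ones(5) / 5, mode="same")
    floor = np.percentile(smooth, floor_q)
    thresh = floor + frac * (smooth.max() - floor)
    idx = np.where(smooth > thresh)[0]
    return freqs[idx[0]], freqs[idx[-1]]

fs = 1e6
t = np.arange(4096) / fs
f_lo, f_hi = occupied_band(np.exp(2j * np.pi * 100e3 * t), fs)
# f_lo and f_hi tightly bracket the 100 kHz tone
```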
def threshold_qualifier(
recording: Recording,
threshold: float,
window_size: Optional[int] = None,
label: Optional[str] = None,
annotation_type: Optional[str] = "standalone",
channel: int = 0,
) -> Recording:
"""
Annotate a recording with bounding boxes for regions above a threshold.
Detects and annotates signals using energy thresholding and spectral analysis.
The algorithm follows these steps:
1. Smooths power data using a moving average.
2. Identifies 'peak' regions exceeding a high trigger threshold.
3. Uses hysteresis to expand boundaries until power drops below a lower threshold.
4. Performs an FFT on each segment to determine frequency occupancy.
Args:
recording: The Recording object containing IQ or real signal data.
threshold: Sensitivity multiplier (0.0 to 1.0) applied to the dynamic range
between the estimated noise floor and the peak smoothed power.
window_size: Size of the smoothing filter in samples. Defaults to 1ms worth of samples.
label: Custom string label for annotations.
annotation_type: Metadata string for the 'type' field in the annotation.
channel: Index of the channel to annotate. Defaults to 0.
Returns:
A new Recording object populated with detected Annotations.
"""
# Extract signal and metadata
sample_data = recording.data[channel]
sample_rate = recording.metadata["sample_rate"]
center_frequency = recording.metadata.get("center_frequency", 0)
if window_size is None:
window_size = max(64, int(sample_rate * 0.001))
# --- 1. SIGNAL CONDITIONING ---
# Convert to power (Magnitude squared)
power_data = np.abs(sample_data) ** 2
smoothing_window = np.ones(window_size) / window_size
smoothed_power = np.convolve(power_data, smoothing_window, mode="same")
group_gap_samples = _estimate_group_gap(sample_rate)
# Define thresholds using peak relative to baseline.
max_power = np.max(smoothed_power)
noise_floor = _estimate_noise_floor(smoothed_power)
dynamic_range_ratio = max_power / max(noise_floor, 1e-12)
# Soft early exit: keep a guard for low-contrast noise, but compute it from
# the quieter tail of the envelope so burst-heavy captures are not rejected.
if dynamic_range_ratio < 1.5:
return Recording(data=recording.data, metadata=recording.metadata, annotations=recording.annotations)
trigger_val = noise_floor + threshold * (max_power - noise_floor)
boundary_val = noise_floor + 0.5 * threshold * (max_power - noise_floor)
# --- 2. INITIAL DETECTION ---
# Enforce an explicit minimum duration in seconds; this is stable across
# varying capture lengths and avoids over-fitting to recording length.
min_duration_samples = max(1, int(0.005 * sample_rate))
annotations = []
# Pass 1: Detect stronger bursts.
indices = np.where(smoothed_power > trigger_val)[0]
pass1_initial = _find_ranges(indices=indices, max_gap=group_gap_samples)
pass1_ranges = _expand_and_filter_ranges(
smoothed_power=smoothed_power,
initial_ranges=pass1_initial,
boundary_val=boundary_val,
min_duration_samples=min_duration_samples,
)
# Pass 2: Recover weaker bursts on residual power not already covered.
# This improves recall in mixed-amplitude captures.
# Expand each Pass-1 range by the smoothing window on both sides so the
# smoothing skirts of a strong burst are not re-detected as a weak burst
# immediately adjacent to it (mirrors the guard used in Pass 3).
mask = np.ones_like(smoothed_power, dtype=np.float32)
pass2_mask_expand = window_size
for s, e in pass1_ranges:
mask[max(0, s - pass2_mask_expand) : min(len(mask), e + pass2_mask_expand)] = 0.0
residual_power = smoothed_power * mask
residual_max = float(np.max(residual_power))
residual_ratio = residual_max / max(noise_floor, 1e-12)
pass2_ranges: list[tuple[int, int]] = []
if residual_ratio >= 2.0:
weak_threshold = max(0.3, threshold * 0.7)
weak_trigger = noise_floor + weak_threshold * (residual_max - noise_floor)
weak_boundary = noise_floor + 0.5 * weak_threshold * (residual_max - noise_floor)
weak_indices = np.where(residual_power > weak_trigger)[0]
pass2_initial = _find_ranges(indices=weak_indices, max_gap=group_gap_samples)
pass2_ranges = _expand_and_filter_ranges(
smoothed_power=residual_power,
initial_ranges=pass2_initial,
boundary_val=weak_boundary,
min_duration_samples=min_duration_samples,
)
# Pass 3: Detect sustained faint bursts via macro-window averaging.
# Targets bursts whose peak power is near the trigger level but whose
# *average* power is consistently elevated above the noise floor — these
# are missed by peak-based detection because only a few short spikes exceed
# the trigger, all too brief to pass the minimum-duration filter.
#
# The mask is applied to power_data *before* convolving so that bright
# burst energy does not bleed through the long window into adjacent regions,
# which would inflate macro_residual_max and push the trigger above the
# faint burst's average power.
macro_window_size = max(window_size * 16, int(sample_rate * 0.02))
macro_kernel = np.ones(macro_window_size, dtype=np.float64) / macro_window_size
# Expand each annotated range by half the macro window on both sides so that
# the long convolution cannot "see" the leading/trailing edges of already-
# annotated bursts, which would produce spurious short fragments in Pass 3.
macro_expand = macro_window_size * 2
masked_power_for_macro = power_data.copy()
n = len(masked_power_for_macro)
for s, e in pass1_ranges + pass2_ranges:
masked_power_for_macro[max(0, s - macro_expand) : min(n, e + macro_expand)] = 0.0
macro_residual = np.convolve(masked_power_for_macro, macro_kernel, mode="same")
macro_residual_max = float(np.max(macro_residual))
pass3_ranges: list[tuple[int, int]] = []
if macro_residual_max / max(noise_floor, 1e-12) >= 1.3:
macro_trigger = noise_floor + threshold * (macro_residual_max - noise_floor)
macro_boundary = noise_floor + 0.5 * threshold * (macro_residual_max - noise_floor)
macro_indices = np.where(macro_residual > macro_trigger)[0]
macro_initial = _find_ranges(indices=macro_indices, max_gap=group_gap_samples)
pass3_ranges = _expand_and_filter_ranges(
smoothed_power=macro_residual,
initial_ranges=macro_initial,
boundary_val=macro_boundary,
min_duration_samples=min_duration_samples,
)
all_ranges = _merge_ranges(pass1_ranges + pass2_ranges + pass3_ranges, max_gap=group_gap_samples)
for true_start, true_stop in all_ranges:
# --- 4. SPECTRAL ANALYSIS (Frequency Detection) ---
signal_segment = sample_data[true_start:true_stop]
f_min, f_max = _estimate_spectral_bounds(signal_segment, sample_rate)
# --- 5. ANNOTATION GENERATION ---
ann_label = label if label is not None else f"{int(threshold*100)}%"
# Pack metadata for the UI/Downstream processing
comment_data = {
"type": annotation_type,
"generator": "threshold_qualifier",
"params": {
"threshold": threshold,
"window_size": window_size,
},
}
anno = Annotation(
sample_start=true_start,
sample_count=true_stop - true_start,
freq_lower_edge=center_frequency + f_min,
freq_upper_edge=center_frequency + f_max,
label=ann_label,
comment=json.dumps(comment_data),
detail={"generator": "hysteresis_qualifier"},
)
annotations.append(anno)
# Return a new Recording object including the new annotations
return Recording(data=recording.data, metadata=recording.metadata, annotations=recording.annotations + annotations)
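End to end, the first-pass behaviour (smooth, trigger, hysteresis crawl) can be reproduced on a synthetic capture without the toolkit. A minimal sketch, assuming a single burst so the triggered indices form one range:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1e6
n = 100_000
capture = (rng.normal(size=n) + 1j * rng.normal(size=n)) * 0.05   # noise floor
capture[30_000:60_000] += np.exp(2j * np.pi * 50e3 * np.arange(30_000) / fs)

# 1. Smooth the power envelope over ~1 ms, as threshold_qualifier does.
win = int(fs * 0.001)
smoothed = np.convolve(np.abs(capture) ** 2, np.ones(win) / win, mode="same")

# 2. Thresholds relative to the noise floor (20th percentile) and peak.
floor = np.percentile(smoothed, 20)
peak = smoothed.max()
trigger = floor + 0.5 * (peak - floor)          # threshold = 0.5
boundary = floor + 0.5 * 0.5 * (peak - floor)   # half the trigger depth

# 3. Trigger on the high threshold, then crawl out to the low one.
idx = np.where(smoothed > trigger)[0]
start, stop = idx[0], idx[-1]                   # single burst assumed
while start > 0 and smoothed[start] > boundary:
    start -= 1
while stop < n - 1 and smoothed[stop] > boundary:
    stop += 1
# start/stop land near the true burst edges at samples 30_000 and 60_000
```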

View File

@ -37,7 +37,7 @@ def _engine(cfg: _config.AppConfig, sudo_override: bool = False) -> list[str]:
for exe in ("docker", "podman"):
if shutil.which(exe):
use_sudo = sudo_override or cfg.sudo
return ["sudo", exe] if use_sudo else [exe]
return (["sudo", exe] if use_sudo else [exe])
print("error: neither 'docker' nor 'podman' found on PATH", file=sys.stderr)
sys.exit(2)
@ -96,9 +96,7 @@ def _hardware_flags(labels: dict, no_gpu: bool, no_usb: bool, no_host_net: bool)
if _gpu_available():
flags += ["--gpus", "all"]
else:
notes.append(
"image wants GPU but no NVIDIA runtime detected — skipping --gpus (use --force-gpu to override)"
)
notes.append("image wants GPU but no NVIDIA runtime detected — skipping --gpus (use --force-gpu to override)")
if hw_items & {"pluto", "rtlsdr", "hackrf", "bladerf"} and not no_usb:
flags += ["--device", "/dev/bus/usb"]

View File

@ -1,8 +0,0 @@
"""
The Data package contains abstract data types tailored for radio machine learning, such as ``Recording``, as well
as the abstract interfaces for the radio dataset and radio dataset builder framework.
"""
__all__ = ["Annotation", "Recording"]
from .annotation import Annotation
from .recording import Recording

View File

@ -1,128 +0,0 @@
from __future__ import annotations
import json
from typing import Any, Optional
from sigmf import SigMFFile
class Annotation:
"""Signal annotations are labels or additional information associated with specific data points or segments within
a signal. These annotations could be used for tasks like supervised learning, where the goal is to train a model
to recognize patterns or characteristics in the signal associated with these annotations.
Annotations can be used to label interesting points in your recording.
:param sample_start: The index of the starting sample of the annotation.
:type sample_start: int
:param sample_count: The number of samples in the annotation.
:type sample_count: int
:param freq_lower_edge: The lower frequency of the annotation.
:type freq_lower_edge: float
:param freq_upper_edge: The upper frequency of the annotation.
:type freq_upper_edge: float
:param label: The label that will be displayed with the bounding box in compatible viewers including IQEngine.
Defaults to an empty string.
:type label: str, optional
:param comment: A human-readable comment. Defaults to an empty string.
:type comment: str, optional
:param detail: A dictionary of user defined annotation-specific metadata. Defaults to None.
:type detail: dict, optional
"""
def __init__(
self,
sample_start: int,
sample_count: int,
freq_lower_edge: float,
freq_upper_edge: float,
label: Optional[str] = "",
comment: Optional[str] = "",
detail: Optional[dict] = None,
):
"""Initialize a new Annotation instance."""
self.sample_start = int(sample_start)
self.sample_count = int(sample_count)
self.freq_lower_edge = float(freq_lower_edge)
self.freq_upper_edge = float(freq_upper_edge)
self.label = str(label)
self.comment = str(comment)
if detail is None:
self.detail = {}
elif not _is_jsonable(detail):
raise ValueError(f"Detail object is not json serializable: {detail}")
else:
self.detail = detail
def is_valid(self) -> bool:
"""
Check that the annotation sample count is > 0 and that freq_lower_edge < freq_upper_edge.
:returns: True if valid, False if not.
"""
return self.sample_count > 0 and self.freq_lower_edge < self.freq_upper_edge
def overlap(self, other):
"""
Quantify how much the bounding box in this annotation overlaps with another annotation.
:param other: The other annotation.
:type other: Annotation
:returns: The area of the overlap in samples*frequency, or 0 if they do not overlap."""
sample_overlap_start = max(self.sample_start, other.sample_start)
sample_overlap_end = min(self.sample_start + self.sample_count, other.sample_start + other.sample_count)
freq_overlap_start = max(self.freq_lower_edge, other.freq_lower_edge)
freq_overlap_end = min(self.freq_upper_edge, other.freq_upper_edge)
if freq_overlap_start >= freq_overlap_end or sample_overlap_start >= sample_overlap_end:
return 0
else:
return (sample_overlap_end - sample_overlap_start) * (freq_overlap_end - freq_overlap_start)
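The overlap computation is plain interval intersection in both axes; a standalone sketch using tuples in place of `Annotation` objects:

```python
def overlap_area(a, b):
    # Each box: (sample_start, sample_count, freq_lower_edge, freq_upper_edge).
    s0 = max(a[0], b[0])
    s1 = min(a[0] + a[1], b[0] + b[1])
    f0 = max(a[2], b[2])
    f1 = min(a[3], b[3])
    if s0 >= s1 or f0 >= f1:      # disjoint in time or frequency
        return 0
    return (s1 - s0) * (f1 - f0)

box_a = (0, 100, 0.0, 10.0)     # samples 0-100, 0-10 Hz
box_b = (50, 100, 5.0, 20.0)    # samples 50-150, 5-20 Hz
print(overlap_area(box_a, box_b))  # → 250.0  (50 samples x 5 Hz)
```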
def area(self):
"""
The 'area' of the bounding box, samples*frequency.
Useful to quantify annotation size.
:returns: sample length multiplied by bandwidth."""
return self.sample_count * (self.freq_upper_edge - self.freq_lower_edge)
def __eq__(self, other: Annotation) -> bool:
return self.__dict__ == other.__dict__
def to_sigmf_format(self):
"""
Returns a JSON dictionary representing this annotation formatted to be saved in a .sigmf-meta file.
"""
annotation_dict = {SigMFFile.START_INDEX_KEY: self.sample_start, SigMFFile.LENGTH_INDEX_KEY: self.sample_count}
annotation_dict["metadata"] = {
SigMFFile.LABEL_KEY: self.label,
SigMFFile.COMMENT_KEY: self.comment,
SigMFFile.FHI_KEY: self.freq_upper_edge,
SigMFFile.FLO_KEY: self.freq_lower_edge,
"ria:detail": self.detail,
}
if _is_jsonable(annotation_dict):
return annotation_dict
else:
raise ValueError("Annotation dictionary was not json serializable.")
def _is_jsonable(x: Any) -> bool:
"""
:return: True if x is JSON serializable, False otherwise.
"""
try:
json.dumps(x)
return True
except (TypeError, OverflowError):
return False
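A quick check of the serializability guard: nested lists and primitives pass, while a `set` (which `json` cannot encode) fails:

```python
import json

def is_jsonable(x):
    # Same probe as _is_jsonable above: attempt a dump and catch failures.
    try:
        json.dumps(x)
        return True
    except (TypeError, OverflowError):
        return False

print(is_jsonable({"detail": [1, 2.5, "ok", None]}))  # → True
print(is_jsonable({"bad": {1, 2, 3}}))                # → False (sets are not JSON)
```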

View File

@ -1,853 +0,0 @@
from __future__ import annotations
import copy
import hashlib
import json
import os
import re
import time
import warnings
from typing import Any, Iterator, Optional
import numpy as np
from numpy.typing import ArrayLike
from ria_toolkit_oss.datatypes.annotation import Annotation
PROTECTED_KEYS = ["rec_id", "timestamp"]
class Recording:
"""Tape of complex IQ (in-phase and quadrature) samples with associated metadata and annotations.
Recording data is a complex array of shape C x N, where C is the number of channels
and N is the number of samples in each channel.
Metadata is stored in a dictionary of key value pairs,
to include information such as sample_rate and center_frequency.
Annotations are a list of :ref:`Annotation <utils.data.Annotation>`,
defining bounding boxes in time and frequency with labels and metadata.
Here, signal data is represented as a NumPy array. This class is then extended in the RIA Backends to provide
support for different data structures, such as Tensors.
Recordings are long-form tapes that can be obtained either from a software-defined radio (SDR) or generated
synthetically. Machine learning datasets are then curated from collections of recordings by segmenting these
longer-form tapes into shorter units called slices.
All recordings are assigned a unique 64-character recording ID, ``rec_id``. If this field is missing from the
provided metadata, a new ID will be generated upon object instantiation.
:param data: Signal data as a tape of IQ samples, a C x N complex array, where C is the number of
channels and N is the number of samples in the signal. If data is a one-dimensional array of complex samples with
length N, it will be reshaped to a two-dimensional array with dimensions 1 x N.
:type data: array_like
:param metadata: Additional information associated with the recording.
:type metadata: dict, optional
:param annotations: A collection of ``Annotation`` objects defining bounding boxes.
:type annotations: list of Annotations, optional
:param dtype: Explicitly specify the data-type of the complex samples. Must be a complex NumPy type, such as
``np.complex64`` or ``np.complex128``. Default is None, in which case the type is determined implicitly. If
``data`` is a NumPy array, the Recording will use the dtype of ``data`` directly without any conversion.
:type dtype: numpy dtype object, optional
:param timestamp: The timestamp when the recording data was generated. If provided, it should be a float or integer
representing the time in seconds since epoch (e.g., ``time.time()``). Only used if the `timestamp` field is not
present in the provided metadata.
:type timestamp: float or int, optional
:raises ValueError: If data is not complex 1xN or CxN.
:raises ValueError: If metadata is not a python dict.
:raises ValueError: If metadata is not json serializable.
:raises ValueError: If annotations is not a list of valid annotation objects.
**Examples:**
>>> import numpy
>>> from ria_toolkit_oss.datatypes import Recording, Annotation
>>> # Create an array of complex samples, just 1s in this case.
>>> samples = numpy.ones(10000, dtype=numpy.complex64)
>>> # Create a dictionary of relevant metadata.
>>> sample_rate = 1e6
>>> center_frequency = 2.44e9
>>> metadata = {
... "sample_rate": sample_rate,
... "center_frequency": center_frequency,
... "author": "me",
... }
>>> # Create an annotation for the annotations list.
>>> annotations = [
... Annotation(
... sample_start=0,
... sample_count=1000,
... freq_lower_edge=center_frequency - (sample_rate / 2),
... freq_upper_edge=center_frequency + (sample_rate / 2),
... label="example",
... )
... ]
>>> # Store samples, metadata, and annotations together in a convenient object.
>>> recording = Recording(data=samples, metadata=metadata, annotations=annotations)
>>> print(recording.metadata)
{'sample_rate': 1000000.0, 'center_frequency': 2440000000.0, 'author': 'me'}
>>> print(recording.annotations[0].label)
example
"""
def __init__( # noqa C901
self,
data: ArrayLike | list[list],
metadata: Optional[dict[str, Any]] = None,
dtype: Optional[np.dtype] = None,
timestamp: Optional[float | int] = None,
annotations: Optional[list[Annotation]] = None,
):
data_arr = np.asarray(data)
if np.iscomplexobj(data_arr):
# Expect C x N
if data_arr.ndim == 1:
self._data = np.expand_dims(data_arr, axis=0) # N -> 1 x N
elif data_arr.ndim == 2:
self._data = data_arr
else:
raise ValueError("Complex data must be C x N.")
else:
raise ValueError("Input data must be complex.")
if dtype is not None:
self._data = self._data.astype(dtype)
assert np.iscomplexobj(self._data)
if metadata is None:
self._metadata = {}
elif isinstance(metadata, dict):
self._metadata = metadata
else:
raise ValueError(f"Metadata must be a python dict, but was {type(metadata)}.")
if not _is_jsonable(metadata):
raise ValueError("Value must be JSON serializable.")
if "timestamp" not in self.metadata:
if timestamp is not None:
if not isinstance(timestamp, (int, float)):
raise ValueError(f"timestamp must be int or float, not {type(timestamp)}")
self._metadata["timestamp"] = timestamp
else:
self._metadata["timestamp"] = time.time()
else:
if not isinstance(self._metadata["timestamp"], (int, float)):
raise ValueError(f"timestamp must be int or float, not {type(self._metadata['timestamp'])}")
if "rec_id" not in self.metadata:
self._metadata["rec_id"] = generate_recording_id(data=self.data, timestamp=self._metadata["timestamp"])
if annotations is None:
self._annotations = []
elif isinstance(annotations, list):
self._annotations = annotations
else:
raise ValueError("Annotations must be a list or None.")
if not all(isinstance(annotation, Annotation) for annotation in self._annotations):
raise ValueError("All elements in self._annotations must be of type Annotation.")
self._index = 0
@property
def data(self) -> np.ndarray:
"""
:return: Recording data, as a complex array.
:type: np.ndarray
.. note::
For recordings with more than 1,024 samples, this property returns a read-only view of the data.
.. note::
To access specific samples, consider indexing the object directly with ``rec[c, n]``.
"""
if self._data.size > 1024:
# Returning a read-only view prevents mutation at a distance while maintaining performance.
v = self._data.view()
v.setflags(write=False)
return v
else:
return self._data.copy()
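The read-only-view guard used by `data` can be reproduced with plain NumPy: the view shares the buffer (no copy) but rejects writes:

```python
import numpy as np

samples = np.ones(2048, dtype=np.complex64)
view = samples.view()
view.setflags(write=False)      # read-only view: shares memory, blocks mutation

blocked = False
try:
    view[0] = 0                 # raises ValueError on a read-only array
except ValueError:
    blocked = True
# blocked is True, and samples itself is untouched
```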
@property
def metadata(self) -> dict:
"""
:return: Dictionary of recording metadata.
:type: dict
"""
return self._metadata.copy()
@property
def annotations(self) -> list[Annotation]:
"""
:return: List of recording annotations
:type: list of Annotation objects
"""
return self._annotations.copy()
@property
def shape(self) -> tuple[int, ...]:
"""
:return: The shape of the data array.
:type: tuple of ints
"""
return np.shape(self.data)
@property
def n_chan(self) -> int:
"""
:return: The number of channels in the recording.
:type: int
"""
return self.shape[0]
@property
def rec_id(self) -> str:
"""
:return: Recording ID.
:type: str
"""
return self.metadata["rec_id"]
@property
def dtype(self) -> str:
"""
:return: Data-type of the data array's elements.
:type: numpy dtype object
"""
return self.data.dtype
@property
def timestamp(self) -> float | int:
"""
:return: Recording timestamp (time in seconds since epoch).
:type: float or int
"""
return self.metadata["timestamp"]
@property
def sample_rate(self) -> float | None:
"""
:return: Sample rate of the recording, or None if 'sample_rate' is not in metadata.
:type: float or None
"""
return self.metadata.get("sample_rate")
@sample_rate.setter
def sample_rate(self, sample_rate: float | int) -> None:
"""Set the sample rate of the recording.
:param sample_rate: The sample rate of the recording.
:type sample_rate: float or int
:return: None
"""
self.add_to_metadata(key="sample_rate", value=sample_rate)
def astype(self, dtype: np.dtype) -> Recording:
"""Copy of the recording, data cast to a specified type.
:param dtype: Data-type to which the array is cast. Must be a complex scalar type, such as ``np.complex64`` or
``np.complex128``.
:type dtype: NumPy data type, optional
.. note: Casting to a data type with less precision can risk losing data by truncating or rounding values,
potentially resulting in a loss of accuracy and significant information.
:return: A new recording with the same metadata and data, with dtype.
TODO: Add example usage.
"""
# Rather than check for a valid datatype, let's cast and check the result. This makes it easier to provide
# cross-platform support where the types are aliased across platforms.
with warnings.catch_warnings():
warnings.simplefilter("ignore") # Casting may generate user warnings. E.g., complex -> real
data = self.data.astype(dtype)
if np.iscomplexobj(data):
return Recording(data=data, metadata=self.metadata, annotations=self.annotations)
else:
raise ValueError("dtype must be a complex number scalar type.")
def add_to_metadata(self, key: str, value: Any) -> None:
"""Add a new key-value pair to the recording metadata.
:param key: New metadata key, must be snake_case.
:type key: str
:param value: Corresponding metadata value.
:type value: any
:raises ValueError: If key is already in metadata or if key is not a valid metadata key.
:raises ValueError: If value is not JSON serializable.
:return: None.
**Examples:**
Create a recording and add metadata:
>>> import numpy
>>> from ria_toolkit_oss.datatypes import Recording
>>>
>>> samples = numpy.ones(10000, dtype=numpy.complex64)
>>> metadata = {
...     "sample_rate": 1e6,
...     "center_frequency": 2.44e9,
... }
>>>
>>> recording = Recording(data=samples, metadata=metadata)
>>> print(recording.metadata)
{'sample_rate': 1000000.0,
'center_frequency': 2440000000.0,
'timestamp': 17369...,
'rec_id': 'fda0f41...'}
>>>
>>> recording.add_to_metadata(key="author", value="me")
>>> print(recording.metadata)
{'sample_rate': 1000000.0,
'center_frequency': 2440000000.0,
'author': 'me',
'timestamp': 17369...,
'rec_id': 'fda0f41...'}
"""
if key in self.metadata:
raise ValueError(
f"Key {key} already in metadata. Use Recording.update_metadata() to modify existing fields."
)
if not _is_valid_metadata_key(key):
raise ValueError(f"Invalid metadata key: {key}.")
if not _is_jsonable(value):
raise ValueError("Value must be JSON serializable.")
self._metadata[key] = value
def update_metadata(self, key: str, value: Any) -> None:
"""Update the value of an existing metadata key,
or add the key value pair if it does not already exist.
:param key: Existing metadata key.
:type key: str
:param value: New value to enter at key.
:type value: any
:raises ValueError: If value is not JSON serializable
:raises ValueError: If key is protected.
:return: None.
**Examples:**
Create a recording and update metadata:
>>> import numpy
>>> from ria_toolkit_oss.datatypes import Recording
>>> samples = numpy.ones(10000, dtype=numpy.complex64)
>>> metadata = {
...     "sample_rate": 1e6,
...     "center_frequency": 2.44e9,
...     "author": "me",
... }
>>> recording = Recording(data=samples, metadata=metadata)
>>> print(recording.metadata)
{'sample_rate': 1000000.0,
'center_frequency': 2440000000.0,
'author': "me",
'timestamp': 17369...
'rec_id': 'fda0f41...'}
>>> recording.update_metadata(key="author", value="you")
>>> print(recording.metadata)
{'sample_rate': 1000000.0,
'center_frequency': 2440000000.0,
'author': "you",
'timestamp': 17369...
'rec_id': 'fda0f41...'}
"""
if key not in self.metadata:
self.add_to_metadata(key=key, value=value)
return
if not _is_jsonable(value):
raise ValueError("Value must be JSON serializable.")
if key in PROTECTED_KEYS: # Check protected keys.
raise ValueError(f"Key {key} is protected and cannot be modified or removed.")
else:
self._metadata[key] = value
def remove_from_metadata(self, key: str):
"""
Remove a key from the recording metadata.
Does not remove key if it is protected.
:param key: The key to remove.
:type key: str
:raises ValueError: If key is protected.
:return: None.
**Examples:**
Create a recording with an "author" key, then remove it:
>>> import numpy
>>> from ria_toolkit_oss.datatypes import Recording
>>> samples = numpy.ones(10000, dtype=numpy.complex64)
>>> metadata = {
...     "sample_rate": 1e6,
...     "center_frequency": 2.44e9,
...     "author": "me",
... }
>>> recording = Recording(data=samples, metadata=metadata)
>>> print(recording.metadata)
{'sample_rate': 1000000.0,
'center_frequency': 2440000000.0,
'author': 'me',
'timestamp': 17369..., # Example value
'rec_id': 'fda0f41...'} # Example value
>>> recording.remove_from_metadata(key="author")
>>> print(recording.metadata)
{'sample_rate': 1000000.0,
'center_frequency': 2440000000.0,
'timestamp': 17369..., # Example value
'rec_id': 'fda0f41...'} # Example value
"""
if key not in PROTECTED_KEYS:
self._metadata.pop(key)
else:
raise ValueError(f"Key {key} is protected and cannot be modified or removed.")
def view(self, output_path: Optional[str] = "images/signal.png", **kwargs) -> None:
"""Create a plot of various signal visualizations as a PNG image.
:param output_path: The output image path. Defaults to "images/signal.png".
:type output_path: str, optional
:param kwargs: Keyword arguments passed on to utils.view.view_sig.
:type kwargs: dict of keyword arguments
**Examples:**
Create a recording and view it as a plot in a .png image:
>>> import numpy
>>> from ria_toolkit_oss.datatypes import Recording
>>> samples = numpy.ones(10000, dtype=numpy.complex64)
>>> metadata = {
...     "sample_rate": 1e6,
...     "center_frequency": 2.44e9,
... }
>>> recording = Recording(data=samples, metadata=metadata)
>>> recording.view()
"""
from ria_toolkit_oss.view import view_sig
view_sig(recording=self, output_path=output_path, **kwargs)
def simple_view(self, **kwargs) -> None:
"""Create a plot of various signal visualizations as a PNG or SVG image.
:param kwargs: Keyword arguments passed on to ria_toolkit_oss.view.view_signal_simple.create_plots.
:type kwargs: dict of keyword arguments
**Examples:**
Create a recording and view it as a plot in a .png image:
>>> import numpy
>>> from ria_toolkit_oss.datatypes import Recording
>>> samples = numpy.ones(10000, dtype=numpy.complex64)
>>> metadata = {
...     "sample_rate": 1e6,
...     "center_frequency": 2.44e9,
... }
>>> recording = Recording(data=samples, metadata=metadata)
>>> recording.simple_view()
"""
from ria_toolkit_oss.view.view_signal_simple import view_simple_sig
view_simple_sig(recording=self, **kwargs)
def to_sigmf(
self, filename: Optional[str] = None, path: Optional[os.PathLike | str] = None, overwrite: bool = False
) -> None:
"""Write recording to a set of SigMF files.
The SigMF format is defined by the `SigMF Specification Project <https://github.com/sigmf/SigMF>`_.
:param filename: The name of the file where the recording is to be saved. Defaults to auto generated filename.
:type filename: os.PathLike or str, optional
:param path: The directory path to where the recording is to be saved. Defaults to recordings/.
:type path: os.PathLike or str, optional
:raises IOError: If there is an issue encountered during the file writing process.
:return: None
**Examples:**
Create a recording and write it to a pair of SigMF files:
>>> import numpy
>>> from ria_toolkit_oss.datatypes import Recording
>>> samples = numpy.ones(10000, dtype=numpy.complex64)
>>> metadata = {
...     "sample_rate": 1e6,
...     "center_frequency": 2.44e9,
... }
>>> recording = Recording(data=samples, metadata=metadata)
>>> recording.to_sigmf()
"""
from ria_toolkit_oss.io.recording import to_sigmf
to_sigmf(filename=filename, path=path, recording=self, overwrite=overwrite)
def to_npy(
self, filename: Optional[str] = None, path: Optional[os.PathLike | str] = None, overwrite: bool = False
) -> str:
"""Write recording to ``.npy`` binary file.
:param filename: The name of the file where the recording is to be saved. Defaults to auto generated filename.
:type filename: os.PathLike or str, optional
:param path: The directory path to where the recording is to be saved. Defaults to recordings/.
:type path: os.PathLike or str, optional
:raises IOError: If there is an issue encountered during the file writing process.
:return: Path where the file was saved.
:rtype: str
**Examples:**
Create a recording and save it to a .npy file:
>>> import numpy
>>> from ria_toolkit_oss.datatypes import Recording
>>> samples = numpy.ones(10000, dtype=numpy.complex64)
>>> metadata = {
...     "sample_rate": 1e6,
...     "center_frequency": 2.44e9,
... }
>>> recording = Recording(data=samples, metadata=metadata)
>>> recording.to_npy()
"""
from ria_toolkit_oss.io.recording import to_npy
return to_npy(recording=self, filename=filename, path=path, overwrite=overwrite)
def to_wav(
self,
filename: Optional[str] = None,
path: Optional[os.PathLike | str] = None,
target_sample_rate: Optional[int] = 48000,
bits_per_sample: int = 32,
overwrite: bool = False,
) -> str:
"""Write recording to WAV file with embedded YAML metadata.
WAV format uses stereo audio with I (in-phase) in left channel and Q (quadrature) in right channel.
Metadata is stored in standard LIST INFO chunks with RF-specific metadata encoded as YAML
in the ICMT (comment) field for human readability.
:param filename: The name of the file where the recording is to be saved. Defaults to auto generated filename.
:type filename: os.PathLike or str, optional
:param path: The directory path to where the recording is to be saved. Defaults to recordings/.
:type path: os.PathLike or str, optional
:param target_sample_rate: Sample rate stored in the WAV header when no sample_rate metadata
is present. IQ samples are written without decimation or interpolation. Default is 48000 Hz.
:type target_sample_rate: int, optional
:param bits_per_sample: Bits per sample (32 for float32, 16 for int16). Default is 32.
:type bits_per_sample: int, optional
:param overwrite: Whether to overwrite existing files. Default is False.
:type overwrite: bool, optional
:raises IOError: If there is an issue encountered during the file writing process.
:return: Path where the file was saved.
:rtype: str
**Examples:**
Create a recording and save it to a .wav file:
>>> import numpy
>>> from ria_toolkit_oss.datatypes import Recording
>>> samples = numpy.exp(1j * 2 * numpy.pi * 0.1 * numpy.arange(10000))
>>> metadata = {"sample_rate": 1e6, "center_frequency": 915e6}
>>> recording = Recording(data=samples, metadata=metadata)
>>> recording.to_wav()
"""
from ria_toolkit_oss.io.recording import to_wav
return to_wav(
recording=self,
filename=filename,
path=path,
target_sample_rate=target_sample_rate,
bits_per_sample=bits_per_sample,
overwrite=overwrite,
)
def to_blue(
self,
filename: Optional[str] = None,
path: Optional[os.PathLike | str] = None,
data_format: str = "CI",
overwrite: bool = False,
) -> str:
"""Write recording to MIDAS Blue file format.
MIDAS Blue is a legacy RF file format with a 512-byte binary header.
Commonly used with X-Midas and other RF/radar signal processing tools.
:param filename: The name of the file where the recording is to be saved. Defaults to auto generated filename.
:type filename: os.PathLike or str, optional
:param path: The directory path to where the recording is to be saved. Defaults to recordings/.
:type path: os.PathLike or str, optional
:param data_format: Format code (default 'CI' = complex int16).
Common formats: 'CI' (complex int16), 'CF' (complex float32), 'CD' (complex float64).
Integer formats require the IQ samples to already be scaled within [-1, 1).
:type data_format: str, optional
:param overwrite: Whether to overwrite existing files. Default is False.
:type overwrite: bool, optional
:raises IOError: If there is an issue encountered during the file writing process.
:return: Path where the file was saved.
:rtype: str
**Examples:**
Create a recording and save it to a .blue file:
>>> import numpy
>>> from ria_toolkit_oss.datatypes import Recording
>>> samples = numpy.ones(10000, dtype=numpy.complex64)
>>> metadata = {"sample_rate": 1e6, "center_frequency": 2.44e9}
>>> recording = Recording(data=samples, metadata=metadata)
>>> recording.to_blue()
"""
from ria_toolkit_oss.io.recording import to_blue
return to_blue(recording=self, filename=filename, path=path, data_format=data_format, overwrite=overwrite)
def trim(self, num_samples: int, start_sample: Optional[int] = 0) -> Recording:
"""Trim Recording samples to a desired length, shifting annotations to maintain alignment.
:param start_sample: The start index of the desired trimmed recording. Defaults to 0.
:type start_sample: int, optional
:param num_samples: The number of samples that the output trimmed recording will have.
:type num_samples: int
:raises IndexError: If start_sample + num_samples is greater than the length of the recording.
:raises IndexError: If start_sample < 0 or num_samples < 0.
:return: The trimmed Recording.
:rtype: Recording
**Examples:**
Create a recording and trim it:
>>> import numpy
>>> from ria_toolkit_oss.datatypes import Recording
>>> samples = numpy.ones(10000, dtype=numpy.complex64)
>>> metadata = {
... "sample_rate": 1e6,
... "center_frequency": 2.44e9,
... }
>>> recording = Recording(data=samples, metadata=metadata)
>>> print(len(recording))
10000
>>> trimmed_recording = recording.trim(start_sample=1000, num_samples=1000)
>>> print(len(trimmed_recording))
1000
"""
if start_sample < 0:
raise IndexError("start_sample cannot be < 0.")
elif start_sample + num_samples > len(self):
raise IndexError(
f"start_sample {start_sample} + num_samples {num_samples} > recording length {len(self)}."
)
end_sample = start_sample + num_samples
data = self.data[:, start_sample:end_sample]
new_annotations = []
for annotation in copy.deepcopy(self.annotations):
# drop annotations that fall entirely outside the trim window
if annotation.sample_start >= end_sample or annotation.sample_start + annotation.sample_count <= start_sample:
continue
# clip annotation if it extends past the trim boundaries
if annotation.sample_start < start_sample:
annotation.sample_count = annotation.sample_count - (start_sample - annotation.sample_start)
annotation.sample_start = start_sample
if annotation.sample_start + annotation.sample_count > end_sample:
annotation.sample_count = end_sample - annotation.sample_start
# shift annotation to align with the new start point
annotation.sample_start = annotation.sample_start - start_sample
new_annotations.append(annotation)
return Recording(data=data, metadata=self.metadata, annotations=new_annotations)
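The clip-and-rebase arithmetic that `trim` applies to each annotation can be sketched in isolation. `Ann` below is a hypothetical stand-in for the real `Annotation` class, kept to just the two fields the arithmetic touches:

```python
# Minimal stand-in for an annotation: a start index and a sample count.
class Ann:
    def __init__(self, sample_start, sample_count):
        self.sample_start = sample_start
        self.sample_count = sample_count

def clip_and_rebase(ann, start_sample, end_sample):
    """Clip an annotation to [start_sample, end_sample) and rebase it to 0."""
    if ann.sample_start < start_sample:
        ann.sample_count -= start_sample - ann.sample_start
        ann.sample_start = start_sample
    if ann.sample_start + ann.sample_count > end_sample:
        ann.sample_count = end_sample - ann.sample_start
    ann.sample_start -= start_sample
    return ann

# An annotation spanning samples 500-2500, trimmed to the window 1000-2000:
a = clip_and_rebase(Ann(500, 2000), start_sample=1000, end_sample=2000)
print(a.sample_start, a.sample_count)  # 0 1000
```

An annotation that starts before the window loses its leading samples, one that runs past the window loses its tail, and every survivor is shifted so sample 0 of the annotation lines up with sample 0 of the trimmed recording.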
def normalize(self) -> Recording:
"""Scale the recording data, relative to its maximum value, so that the magnitude of the maximum sample is 1.
:return: Recording where the maximum sample amplitude is 1.
:rtype: Recording
**Examples:**
Create a recording with maximum amplitude 0.5 and normalize to a maximum amplitude of 1:
>>> import numpy
>>> from ria_toolkit_oss.datatypes import Recording
>>> samples = numpy.ones(10000, dtype=numpy.complex64) * 0.5
>>> metadata = {
... "sample_rate": 1e6,
... "center_frequency": 2.44e9,
... }
>>> recording = Recording(data=samples, metadata=metadata)
>>> print(numpy.max(numpy.abs(recording.data)))
0.5
>>> normalized_recording = recording.normalize()
>>> print(numpy.max(numpy.abs(normalized_recording.data)))
1
"""
scaled_data = self.data / np.max(np.abs(self.data))
return Recording(data=scaled_data, metadata=self.metadata, annotations=self.annotations)
def __len__(self) -> int:
"""The length of a recording is defined by the number of complex samples in each channel of the recording."""
return self.shape[1]
def __eq__(self, other: Recording) -> bool:
"""Two Recordings are equal if all data, metadata, and annotations are the same."""
# Note: annotation lists are compared directly, so they must be in the same order to compare equal.
return (
np.array_equal(self.data, other.data)
and self.metadata == other.metadata
and self.annotations == other.annotations
)
def __ne__(self, other: Recording) -> bool:
"""Two Recordings are unequal if any of their data, metadata, or annotations differ."""
return not self.__eq__(other=other)
def __iter__(self) -> Iterator:
self._index = 0
return self
def __next__(self) -> np.ndarray:
if self._index < self.n_chan:
to_ret = self.data[self._index]
self._index += 1
return to_ret
else:
raise StopIteration
def __getitem__(self, key: int | tuple[int] | slice) -> np.ndarray | np.complexfloating:
"""If key is an integer, tuple of integers, or a slice, return the corresponding samples.
For arrays with 1,024 or fewer samples, return a copy of the recording data. For larger arrays, return a
read-only view. This prevents mutation at a distance while maintaining performance.
"""
if isinstance(key, (int, tuple, slice)):
v = self._data[key]
if isinstance(v, np.complexfloating):
return v
elif v.size > 1024:
v.setflags(write=False) # Make view read-only.
return v
else:
return v.copy()
else:
raise ValueError(f"Key must be an integer, tuple, or slice but was {type(key)}.")
def __setitem__(self, *args, **kwargs) -> None:
"""Raise an error if an attempt is made to assign to the recording."""
raise ValueError("Assignment to Recording is not allowed.")
def generate_recording_id(data: np.ndarray, timestamp: Optional[float | int] = None) -> str:
"""Generate a unique 64-character recording ID. The recording ID is generated by hashing the recording data with
the datetime that the recording data was generated. If no timestamp is provided, the current time is used.
:param data: Array of IQ samples, as a NumPy array.
:type data: np.ndarray
:param timestamp: Unix timestamp in seconds. Defaults to None.
:type timestamp: float or int, optional
:return: 64-character hex digest, to be used as the recording ID.
:rtype: str
"""
if timestamp is None:
timestamp = time.time()
byte_sequence = data.tobytes() + str(timestamp).encode("utf-8")
sha256_hash = hashlib.sha256(byte_sequence)
return sha256_hash.hexdigest()
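The ID construction above reduces to a single SHA-256 digest over the raw sample bytes concatenated with the timestamp string; a minimal sketch (the sample values and timestamp are arbitrary):

```python
import hashlib

import numpy as np

data = np.ones(16, dtype=np.complex64)
timestamp = 1736900000.0

# Hash the raw IQ bytes concatenated with the UTF-8 encoded timestamp string.
byte_sequence = data.tobytes() + str(timestamp).encode("utf-8")
rec_id = hashlib.sha256(byte_sequence).hexdigest()

print(len(rec_id))  # 64 hex characters
```

Because the timestamp is part of the hashed input, two recordings with identical samples still get distinct IDs as long as their timestamps differ.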
def _is_jsonable(x: Any) -> bool:
"""
:return: True if x is JSON serializable, False otherwise.
"""
try:
json.dumps(x)
return True
except (TypeError, OverflowError):
return False
def _is_valid_metadata_key(key: Any) -> bool:
"""
:return: True if key is a valid metadata key, False otherwise.
"""
if isinstance(key, str) and key.islower() and re.match(pattern=r"^[a-z_]+$", string=key) is not None:
return True
else:
return False
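The two validators above gate what can enter `Recording` metadata: keys must be lowercase strings of letters and underscores, and values must survive a round trip through `json.dumps`. A standalone sketch of the same checks:

```python
import json
import re

def is_jsonable(x):
    """True if x can be serialized to JSON."""
    try:
        json.dumps(x)
        return True
    except (TypeError, OverflowError):
        return False

def is_valid_metadata_key(key):
    """True for lowercase strings containing only letters and underscores."""
    return isinstance(key, str) and re.match(r"^[a-z_]+$", key) is not None

print(is_valid_metadata_key("sample_rate"))  # True
print(is_valid_metadata_key("SampleRate"))   # False
print(is_jsonable({"gain": 20.0}))           # True
print(is_jsonable(1 + 2j))                   # False
```

Complex numbers, NumPy arrays, and other non-JSON types are rejected up front, which keeps the metadata dict safe to serialize into a SigMF meta file later.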

View File

@ -367,7 +367,9 @@ def to_sigmf(
meta_dict = sigMF_metafile.ordered_metadata()
meta_dict["ria"] = metadata
sigMF_metafile.tofile(meta_file_path, overwrite=overwrite)
if overwrite and os.path.isfile(meta_file_path):
os.remove(meta_file_path)
sigMF_metafile.tofile(meta_file_path)
def from_sigmf(file: os.PathLike | str) -> Recording:

View File

@ -33,6 +33,7 @@ from __future__ import annotations
import logging
import threading
import time
from typing import Any
logger = logging.getLogger(__name__)
@ -116,12 +117,7 @@ class TxExecutor:
logger.info(
"TX step '%s': %.0f s, %s @ %.3f MHz (sps=%d, filter=%s)",
label,
duration,
modulation,
symbol_rate / 1e6,
sps,
filter_type,
label, duration, modulation, symbol_rate / 1e6, sps, filter_type,
)
num_samples = int(duration * sample_rate)
@ -137,7 +133,9 @@ class TxExecutor:
logger.error("TX step '%s' SDR error: %s", label, exc)
else:
# No SDR available — simulate by sleeping for the step duration.
logger.warning("TX step '%s': no SDR — simulating %.0f s delay", label, duration)
logger.warning(
"TX step '%s': no SDR — simulating %.0f s delay", label, duration
)
self.stop_event.wait(timeout=duration)
def _synthesise(
@ -151,7 +149,6 @@ class TxExecutor:
"""Build a block-generator chain and return IQ samples as a numpy array."""
try:
import numpy as np
from ria_toolkit_oss.signal.block_generator import (
BinarySource,
GMSKModulator,
@ -234,7 +231,6 @@ class TxExecutor:
def _init_sdr(self, sample_rate: float, center_freq: float) -> None:
try:
from ria_toolkit_oss.sdr import get_sdr_device
self._sdr = get_sdr_device(self.sdr_device)
self._sdr.init_tx(
sample_rate=sample_rate,
@ -243,9 +239,7 @@ class TxExecutor:
channel=0,
gain_mode="manual",
)
logger.info(
"TX SDR initialised: %s @ %.3f MHz, %.1f Msps", self.sdr_device, center_freq / 1e6, sample_rate / 1e6
)
logger.info("TX SDR initialised: %s @ %.3f MHz, %.1f Msps", self.sdr_device, center_freq / 1e6, sample_rate / 1e6)
except Exception as exc:
logger.warning("TX SDR init failed (%s) — will simulate: %s", self.sdr_device, exc)
self._sdr = None

View File

@ -40,19 +40,15 @@ class RemoteTransmitter:
try:
if radio_str in ("pluto", "plutosdr"):
from ria_toolkit_oss.sdr.pluto import Pluto
self._sdr = Pluto(identifier)
elif radio_str in ("usrp",):
from ria_toolkit_oss.sdr.usrp import USRP
self._sdr = USRP(identifier)
elif radio_str in ("hackrf", "hackrf_one"):
from ria_toolkit_oss.sdr.hackrf import HackRF
self._sdr = HackRF(identifier)
elif radio_str in ("bladerf", "blade"):
from ria_toolkit_oss.sdr.blade import Blade
self._sdr = Blade(identifier)
else:
raise ValueError(f"Unknown SDR type: {radio_str!r}")
@ -81,7 +77,6 @@ class RemoteTransmitter:
if self._sdr is None:
raise RuntimeError("Call set_radio() and init_tx() before transmit()")
import time
# Transmit in a loop until duration has elapsed
end = time.monotonic() + duration_s
while time.monotonic() < end:

View File

@ -14,9 +14,6 @@ import logging
import threading
import time
import paramiko
import zmq
logger = logging.getLogger(__name__)
_STARTUP_WAIT_S = 2.0 # seconds to wait for remote ZMQ server to bind
@ -161,21 +158,16 @@ class RemoteTransmitterController:
"""
logger.info(
"init_tx: fc=%.3f MHz, fs=%.3f MHz, gain=%.1f dB, ch=%d",
center_frequency / 1e6,
sample_rate / 1e6,
gain,
channel,
center_frequency / 1e6, sample_rate / 1e6, gain, channel,
)
self._send(
{
self._send({
"function_name": "init_tx",
"center_frequency": center_frequency,
"sample_rate": sample_rate,
"gain": gain,
"channel": channel,
"gain_mode": gain_mode,
}
)
})
def transmit_async(self, duration_s: float) -> None:
"""Start a timed CW transmission in a background thread.

View File

@ -15,13 +15,8 @@ __all__ = [
]
from .mock import MockSDR
from .sdr import ( # noqa: F401
SDR,
SdrDisconnectedError,
SDRError,
SDRParameterError,
translate_disconnect,
)
from .sdr import SDR, SDRError, SdrDisconnectedError, SDRParameterError, translate_disconnect # noqa: F401
_DRIVER_CANDIDATES: tuple[tuple[str, str, str], ...] = (
("mock", "ria_toolkit_oss.sdr.mock", "MockSDR"),

View File

@ -8,12 +8,7 @@ import adi
import numpy as np
from ria_toolkit_oss.datatypes.recording import Recording
from ria_toolkit_oss.sdr.sdr import (
SDR,
SDRError,
SDRParameterError,
translate_disconnect,
)
from ria_toolkit_oss.sdr.sdr import SDR, SDRError, SDRParameterError, translate_disconnect
class Pluto(SDR):

View File

@ -11,7 +11,7 @@ def spectrogram(rec: Recording, thumbnail: bool = False) -> Figure:
"""Create a spectrogram for the recording.
:param rec: Signal to plot.
:type rec: ria_toolkit_oss.datatypes.Recording
:type rec: utils.data.Recording
:param thumbnail: Whether to return a small thumbnail version or full plot.
:type thumbnail: bool
@ -95,7 +95,7 @@ def iq_time_series(rec: Recording) -> Figure:
"""Create a time series plot of the real and imaginary parts of signal.
:param rec: Signal to plot.
:type rec: ria_toolkit_oss.datatypes.Recording
:type rec: utils.data.Recording
:return: Time series plot as a Plotly figure.
"""
@ -125,7 +125,7 @@ def frequency_spectrum(rec: Recording) -> Figure:
"""Create a frequency spectrum plot from the recording.
:param rec: Input signal to plot.
:type rec: ria_toolkit_oss.datatypes.Recording
:type rec: utils.data.Recording
:return: Frequency spectrum as a Plotly figure.
"""
@ -160,7 +160,7 @@ def constellation(rec: Recording) -> Figure:
"""Create a constellation plot from the recording.
:param rec: Input signal to plot.
:type rec: ria_toolkit_oss.datatypes.Recording
:type rec: utils.data.Recording
:return: Constellation as a Plotly figure.
"""

View File

@ -6,7 +6,6 @@ from typing import Optional
import matplotlib.pyplot as plt
import numpy as np
from matplotlib import gridspec
from matplotlib.patches import Patch
from PIL import Image
from scipy.fft import fft, fftshift
from scipy.signal import spectrogram
@ -40,76 +39,6 @@ def set_spines(ax, spines):
ax.spines["left"].set_visible(False)
def view_annotations(
recording: Recording,
channel: Optional[int] = 0,
output_path: Optional[str] = "images/annotations.png",
title: Optional[str] = "Annotated Spectrogram",
dpi: Optional[int] = 300,
title_fontsize: Optional[int] = 15,
dark: Optional[bool] = True,
) -> None:
# 1. Setup Plotting Environment
plt.close("all")
if dark:
plt.style.use("dark_background")
else:
plt.style.use("default")
fig, ax = plt.subplots(figsize=(12, 8))
complex_signal = recording.data[channel]
sample_rate, center_frequency, _ = extract_metadata_fields(recording.metadata)
annotations = recording.annotations
# 2. Setup Color Mapping
palette = ["#2196F3", "#9C27B0", "#64B5F6", "#7B1FA2", "#5C6BC0", "#CE93D8", "#1565C0", "#7C4DFF"]
unique_labels = sorted(list(set(ann.label for ann in annotations if ann.label)))
label_to_color = {label: palette[i % len(palette)] for i, label in enumerate(unique_labels)}
# 3. Generate Spectrogram
Pxx, freqs, times, im = ax.specgram(
complex_signal, NFFT=256, Fs=sample_rate, Fc=center_frequency, noverlap=128, cmap="twilight"
)
# 4. Draw Annotations (highest threshold % first so lower % renders on top)
def _threshold_sort_key(ann):
try:
return int(ann.label.rstrip("%"))
except (ValueError, AttributeError):
return 0
for annotation in sorted(annotations, key=_threshold_sort_key, reverse=True):
t_start = annotation.sample_start / sample_rate
t_width = annotation.sample_count / sample_rate
f_start = annotation.freq_lower_edge
f_height = annotation.freq_upper_edge - annotation.freq_lower_edge
ann_color = label_to_color.get(annotation.label, "gray")
rect = plt.Rectangle(
(t_start, f_start), t_width, f_height, linewidth=1.5, edgecolor=ann_color, facecolor="none", alpha=0.8
)
ax.add_patch(rect)
if unique_labels:
legend_elements = [
Patch(facecolor=label_to_color[label], alpha=0.3, edgecolor=label_to_color[label], label=label)
for label in unique_labels
]
ax.legend(handles=legend_elements, loc="upper right", framealpha=0.2)
ax.set_title(title, fontsize=title_fontsize, pad=20)
ax.set_xlabel("Time (s)", fontsize=12)
ax.set_ylabel("Frequency (MHz)", fontsize=12)
ax.grid(alpha=0.1)
output_path, _ = set_path(output_path=output_path)
plt.savefig(output_path, dpi=dpi, bbox_inches="tight")
plt.close(fig)
print(f"Professional annotation plot saved to {output_path}")
def view_channels(
recording: Recording,
output_path: Optional[str] = "images/signal.png",
@ -280,7 +209,9 @@ def view_sig(
)
set_spines(spec_ax, spines)
spec_ax.set_title("Spectrogram", loc="center", fontsize=subtitle_fontsize)
spec_ax.set_title("Spectrogram", fontsize=subtitle_fontsize)
spec_ax.set_ylabel("Frequency (Hz)")
spec_ax.set_xlabel("Time (s)")
if iq:
iq_ax = plt.subplot(gs[plot_y_indx : plot_y_indx + 2, :])
@ -364,11 +295,7 @@ def view_sig(
set_spines(meta_ax, spines)
if logo and os.path.isfile(logo_path):
# logo_ax = plt.subplot(gs[plot_y_indx:, 2])
logo_pos = [0.75, 0.05, 0.2, 0.08]
logo_ax = fig.add_axes(logo_pos, anchor="SE", zorder=10)
plot_x_indx = plot_x_indx + 1
logo_ax = plt.subplot(gs[plot_y_indx + 2 :, 2])
logo_ax.axis("off")
try:
@ -387,6 +314,7 @@ def view_sig(
hspace=2.5, # Vertical space between subplots
)
# save path handling
output_path, _ = set_path(output_path=output_path)
plt.savefig(output_path, dpi=dpi)
print(f"Saved signal plot to {output_path}")

View File

@ -3,7 +3,6 @@
from __future__ import annotations
import gc
import json
from typing import Optional
import matplotlib
@ -21,52 +20,6 @@ from ria_toolkit_oss.view.tools import (
)
def _add_annotations(annotations, compact_mode, show_labels, sample_rate_hz, center_freq_hz, ax2):
if annotations and not compact_mode:
for annotation in annotations:
start_idx = annotation.get("core:sample_start", 0)
length = annotation.get("core:sample_count", 0)
start_time = start_idx / sample_rate_hz
end_time = (start_idx + length) / sample_rate_hz
freq_low = annotation.get("core:freq_lower_edge", center_freq_hz - sample_rate_hz / 4)
freq_high = annotation.get("core:freq_upper_edge", center_freq_hz + sample_rate_hz / 4)
comment = annotation.get("core:comment", "{}")
try:
comment_data = json.loads(comment) if isinstance(comment, str) else comment
ann_type = comment_data.get("type", "unknown")
if ann_type == "intersection":
color = COLORS["success"]
elif ann_type == "parallel":
color = COLORS["primary"]
elif ann_type == "standalone":
color = COLORS["warning"]
else:
color = COLORS["error"]
except Exception:
color = COLORS["error"]
rect = plt.Rectangle(
(start_time, freq_low),
end_time - start_time,
freq_high - freq_low,
color=color,
alpha=0.4,
linewidth=2,
)
ax2.add_patch(rect)
if show_labels:
label = annotation.get("core:label", "Signal")
ax2.text(
start_time,
freq_high,
label,
color=COLORS["light"],
fontsize=10,
bbox=dict(boxstyle="round,pad=0.2", facecolor=color, alpha=0.7),
)
def _get_nfft_size(signal, fast_mode):
if len(signal) < 1000:
nfft = 128
@ -185,7 +138,6 @@ def detect_constellation_symbols(signal: np.ndarray, method: str = "differential
def view_simple_sig(
recording: Recording,
annotations: Optional[list] = None,
output_path: Optional[str] = "images/signal.png",
saveplot: Optional[bool] = True,
fast_mode: Optional[bool] = False,
@ -309,15 +261,6 @@ def view_simple_sig(
ax2.set_title("Spectrogram", loc="left", pad=10)
_add_annotations(
annotations=annotations,
compact_mode=compact_mode,
show_labels=show_labels,
sample_rate_hz=sample_rate_hz,
center_freq_hz=center_freq_hz,
ax2=ax2,
)
if ax_constellation is not None:
constellation_samples = _get_plot_samples(signal=signal, fast_mode=fast_mode, slow_max=50_000, fast_max=20_000)
method = "differential" if fast_mode else "combined"
@ -367,7 +310,7 @@ def view_simple_sig(
else:
plt.tight_layout()
if show_title:
plt.subplots_adjust(top=0.92)
plt.subplots_adjust(top=0.90)
if saveplot:
output_path, extension = set_path(output_path=output_path)

View File

@ -1,828 +0,0 @@
"""Annotate command - Automatic detection and manual annotation management."""
import json
from pathlib import Path
import click
from ria_toolkit_oss.annotations import (
annotate_with_cusum,
detect_signals_energy,
split_recording_annotations,
threshold_qualifier,
)
from ria_toolkit_oss.datatypes import Annotation
from ria_toolkit_oss.datatypes.recording import Recording
from ria_toolkit_oss.io import load_recording, to_blue, to_npy, to_sigmf, to_wav
from ria_toolkit_oss_cli.ria_toolkit_oss.common import (
format_frequency,
format_sample_count,
)
def normalize_sigmf_path(filepath):
"""Normalize SigMF path to base name without extension."""
path = Path(filepath)
# Handle .sigmf-data, .sigmf-meta, or .sigmf
if ".sigmf" in path.suffix:
# Remove the suffix to get base name
return path.with_suffix("")
else:
return path
def detect_input_format(filepath):
"""Detect file format from extension."""
path = Path(filepath)
ext = path.suffix.lower()
if ext in [".sigmf-data", ".sigmf-meta"]:
return "sigmf"
elif path.name.endswith(".sigmf"):
return "sigmf"
elif ext == ".npy":
return "npy"
elif ext == ".wav":
return "wav"
elif ext == ".blue":
return "blue"
else:
raise click.ClickException(f"Unknown format for '{filepath}'. Supported: .sigmf, .npy, .wav, .blue")
def determine_output_path(input_path, output_path, fmt, quiet, overwrite):
input_path = Path(input_path)
input_is_annotated = input_path.stem.endswith("_annotated")
if output_path:
target = Path(output_path)
elif overwrite and input_is_annotated:
# Write back in-place only when the input is already an _annotated file
target = input_path
else:
target = input_path.with_name(f"{input_path.stem}_annotated{input_path.suffix}")
if fmt == "sigmf":
final_path = normalize_sigmf_path(target)
if not quiet:
click.echo(f"Saving SigMF metadata to: {final_path}")
else:
final_path = target
if not quiet:
click.echo(f"Saving to: {final_path}")
# Always allow writing to _annotated files; guard against overwriting originals
target_is_annotated = final_path.stem.endswith("_annotated")
if final_path.exists() and not target_is_annotated and final_path != input_path:
click.echo(f"Error: {final_path} is not an annotated file and cannot be overwritten.", err=True)
return None
return final_path
def save_recording_auto(recording, output_path, input_path, quiet=False, overwrite=False):
"""Save recording, auto-detecting format from extension.
For SigMF: Only overwrites metadata file, data file is unchanged
For other formats: Creates _annotated copy by default, unless overwrite=True
"""
input_path = Path(input_path)
fmt = detect_input_format(input_path)
# Determine output path
output_path = determine_output_path(
input_path=input_path, output_path=output_path, fmt=fmt, quiet=quiet, overwrite=overwrite
)
if fmt == "sigmf":
# Normalize path for SigMF
base_path = output_path
stem = base_path.name
parent = base_path.parent
# For SigMF: only save metadata, copy data if needed
meta_path = parent / f"{stem}.sigmf-meta"
data_path = parent / f"{stem}.sigmf-data"
# If output is different from input, copy data file
input_base = normalize_sigmf_path(input_path)
if input_base != base_path:
import shutil
# Construct input data path correctly
# input_base is like /path/to/recording or /path/to/recording.sigmf
# We need /path/to/recording.sigmf-data
if str(input_base).endswith(".sigmf"):
input_data = Path(str(input_base).replace(".sigmf", ".sigmf-data"))
else:
input_data = input_base.parent / f"{input_base.name}.sigmf-data"
if not quiet:
click.echo(f" Copying: {data_path}")
shutil.copy2(input_data, data_path)
# Always save metadata (this is the whole point)
to_sigmf(recording, filename=stem, path=parent, overwrite=True)
if not quiet:
click.echo(f" Updated: {meta_path}")
if input_base != base_path:
click.echo(f" Created: {data_path}")
elif fmt == "npy":
to_npy(recording, filename=output_path.stem, path=output_path.parent, overwrite=True)
if not quiet:
click.echo(f" Created: {output_path}")
elif fmt == "wav":
to_wav(recording, filename=output_path.stem, path=output_path.parent, overwrite=True)
if not quiet:
click.echo(f" Created: {output_path}")
elif fmt == "blue":
to_blue(recording, filename=output_path.stem, path=output_path.parent, overwrite=True)
if not quiet:
click.echo(f" Created: {output_path}")
def determine_frequency_bounds(recording: Recording, freq_lower, freq_upper):
# Handle frequency bounds
if (freq_lower is None) != (freq_upper is None):
raise click.ClickException("Must specify both --freq-lower and --freq-upper, or neither")
if freq_lower is None:
# Default to full bandwidth
sample_rate = recording.metadata.get("sample_rate", 1)
center_freq = recording.metadata.get("center_frequency", 0)
freq_lower = center_freq - (sample_rate / 2)
freq_upper = center_freq + (sample_rate / 2)
freq_default = True
else:
freq_default = False
if freq_lower >= freq_upper:
raise click.ClickException(
f"Invalid frequency range: lower ({format_frequency(freq_lower)}) "
f"must be < upper ({format_frequency(freq_upper)})"
)
return freq_lower, freq_upper, freq_default
def get_indices_list(indices, recording: Recording):
if indices:
try:
indices_list = [int(idx.strip()) for idx in indices.split(",")]
# Validate indices
for idx in indices_list:
if idx < 0 or idx >= len(recording.annotations):
raise click.ClickException(
f"Invalid index {idx}. Recording has {len(recording.annotations)} annotation(s)"
)
except ValueError as e:
raise click.ClickException(f"Invalid indices format. Expected comma-separated integers: {e}")
return indices_list
else:
return None
# ============================================================================
# Main command group
# ============================================================================
@click.group()
def annotate():
"""Manage and auto-detect annotations on RF recordings.
\b
MANUAL MANAGEMENT:
list - List all current annotations
add - Manually add a specific annotation
remove - Delete an annotation by its index
clear - Remove all annotations from the recording
\b
DETECTION & SEPARATION:
energy - Auto-detect using energy-based thresholding
cusum - Auto-detect segments using signal state changes
threshold - Auto-detect samples above a fraction of peak magnitude
separate - Auto-detect parallel frequency-offset signals, split into sub-bands
\b
File Path Handling:
- SigMF files: Pass .sigmf-data, .sigmf-meta, or base name
- Other formats: .npy, .wav, .blue files
\b
Output Behavior:
- SigMF: Updates .sigmf-meta only (data unchanged), in-place
- Other: Creates _annotated copy unless --overwrite specified
"""
pass
# ============================================================================
# List subcommand
# ============================================================================
@annotate.command()
@click.argument("input", type=click.Path(exists=True))
@click.option("--verbose", is_flag=True, help="Show detailed annotation info")
def list(input, verbose):
"""List all annotations in a recording.
\b
Examples:
ria annotate list recording.sigmf-data
ria annotate list signal.npy --verbose
"""
try:
recording = load_recording(input)
except Exception as e:
raise click.ClickException(f"Failed to load recording: {e}")
if len(recording.annotations) == 0:
click.echo(f"No annotations in {Path(input).name}")
return
click.echo(f"\nAnnotations in {Path(input).name}:")
for i, ann in enumerate(recording.annotations):
# Parse type from comment JSON
try:
comment_data = json.loads(ann.comment)
ann_type = comment_data.get("type", "unknown")
user_comment = comment_data.get("user_comment", "")
except (json.JSONDecodeError, TypeError):
ann_type = "unknown"
user_comment = ann.comment or ""
# Basic info
freq_range = f"{format_frequency(ann.freq_lower_edge)} - {format_frequency(ann.freq_upper_edge)}"
click.echo(
f" [{i}] Samples {format_sample_count(ann.sample_start)}-"
f"{format_sample_count(ann.sample_start + ann.sample_count)}: {ann.label}"
)
click.echo(f" Type: {ann_type}")
if verbose:
if user_comment:
click.echo(f" Comment: {user_comment}")
click.echo(f" Frequency: {freq_range}")
if ann.detail:
click.echo(f" Detail: {ann.detail}")
click.echo(f"\nTotal: {len(recording.annotations)} annotation(s)")
# ============================================================================
# Add subcommand
# ============================================================================
@annotate.command(context_settings={"max_content_width": 200})
@click.argument("input", type=click.Path(exists=True))
@click.option("--start", type=int, required=True, help="Start sample index")
@click.option("--count", type=int, required=True, help="Sample count")
@click.option("--label", type=str, required=True, help="Annotation label")
@click.option("--freq-lower", type=float, help="Lower frequency edge (Hz)")
@click.option("--freq-upper", type=float, help="Upper frequency edge (Hz)")
@click.option("--comment", type=str, help="Human-readable comment")
@click.option(
"--type",
"annotation_type",
type=click.Choice(["standalone", "parallel", "intersection"]),
default="standalone",
help="Annotation type",
)
@click.option("--output", "-o", type=click.Path(), help="Output file path")
@click.option("--overwrite", is_flag=True, help="Overwrite input file (non-SigMF only)")
@click.option("--quiet", is_flag=True, help="Quiet mode")
def add(input, start, count, label, freq_lower, freq_upper, comment, annotation_type, output, overwrite, quiet):
"""Add a manual annotation.
\b
Examples:
ria annotate add file.npy --start 1000 --count 500 --label wifi
ria annotate add signal.sigmf-data --start 0 --count 1000 --label burst --comment "Strong signal"
"""
try:
recording = load_recording(input)
if not quiet:
click.echo(f"Loaded: {input}")
except Exception as e:
raise click.ClickException(f"Failed to load recording: {e}")
# Validate sample range
n_samples = len(recording.data[0])
if start < 0:
raise click.ClickException(f"--start must be >= 0, got {start}")
if count <= 0:
raise click.ClickException(f"--count must be > 0, got {count}")
if start + count > n_samples:
raise click.ClickException(
f"Invalid annotation range:\n"
f" Start: {start:,}\n"
f" Count: {count:,}\n"
f" End: {start + count:,}\n"
f"Recording only has {n_samples:,} samples"
)
# Handle frequency bounds
freq_lower, freq_upper, freq_default = determine_frequency_bounds(
recording=recording, freq_lower=freq_lower, freq_upper=freq_upper
)
# Build comment JSON
comment_data = {"type": annotation_type}
if comment:
comment_data["user_comment"] = comment
# Create annotation
ann = Annotation(
sample_start=start,
sample_count=count,
freq_lower_edge=freq_lower,
freq_upper_edge=freq_upper,
label=label,
comment=json.dumps(comment_data),
detail={},
)
recording._annotations.append(ann)
if not quiet:
click.echo("\nAdding annotation:")
click.echo(f" Start: {format_sample_count(start)}")
click.echo(f" Count: {format_sample_count(count)} samples")
freq_str = (
"full bandwidth" if freq_default else f"{format_frequency(freq_lower)} - {format_frequency(freq_upper)}"
)
click.echo(f" Frequency: {freq_str}")
click.echo(f" Label: {label}")
click.echo(f" Type: {annotation_type}")
if comment:
click.echo(f" Comment: {comment}")
try:
save_recording_auto(recording, output, input, quiet, overwrite)
if not quiet:
click.echo(" ✓ Saved")
except Exception as e:
raise click.ClickException(f"Failed to save: {e}")
# ============================================================================
# Remove subcommand
# ============================================================================
@annotate.command(context_settings={"max_content_width": 200})
@click.argument("input", type=click.Path(exists=True))
@click.argument("index", type=int)
@click.option("--output", "-o", type=click.Path(), help="Output file path")
@click.option("--overwrite", is_flag=True, help="Overwrite input file (non-SigMF only)")
@click.option("--quiet", is_flag=True, help="Quiet mode")
def remove(input, index, output, overwrite, quiet):
"""Remove annotation by index.
Use 'ria annotate list' to see annotation indices.
\b
Examples:
ria annotate remove signal.sigmf-data 2
ria annotate remove file.npy 0
"""
try:
recording = load_recording(input)
if not quiet:
click.echo(f"Loaded: {input}")
except Exception as e:
raise click.ClickException(f"Failed to load recording: {e}")
if index < 0 or index >= len(recording.annotations):
raise click.ClickException(
f"Cannot remove annotation at index {index}\n"
f"Recording has {len(recording.annotations)} annotation(s) (indices 0-{len(recording.annotations)-1})"
)
removed_ann = recording.annotations[index]
recording._annotations.pop(index)
if not quiet:
click.echo(f"\nRemoving annotation [{index}]:")
click.echo(
f" Removed: samples {format_sample_count(removed_ann.sample_start)}-"
f"{format_sample_count(removed_ann.sample_start + removed_ann.sample_count)} ({removed_ann.label})"
)
try:
save_recording_auto(recording, output_path=input, input_path=input, quiet=quiet, overwrite=True)
if not quiet:
click.echo(" ✓ Saved")
except Exception as e:
raise click.ClickException(f"Failed to save: {e}")
# ============================================================================
# Clear subcommand
# ============================================================================
@annotate.command(context_settings={"max_content_width": 175})
@click.argument("input", type=click.Path(exists=True))
@click.option("--output", "-o", type=click.Path(), help="Output file path")
@click.option("--overwrite", is_flag=True, help="Overwrite input file (non-SigMF only)")
@click.option("--force", is_flag=True, help="Skip confirmation")
@click.option("--quiet", is_flag=True, help="Quiet mode")
def clear(input, output, overwrite, force, quiet):
"""Clear all annotations.
\b
Examples:
ria annotate clear signal.sigmf-data
ria annotate clear file.npy --force
"""
try:
recording = load_recording(input)
if not quiet:
click.echo(f"Loaded: {input}")
except Exception as e:
raise click.ClickException(f"Failed to load recording: {e}")
count_before = len(recording.annotations)
if count_before == 0:
if not quiet:
click.echo("No annotations to clear")
return
# Confirm unless --force
if not force and not quiet:
click.echo(f"\nWarning: This will remove all {count_before} annotation(s)")
click.confirm("Continue?", abort=True)
recording._annotations = []
if not quiet:
click.echo(f"\nCleared {count_before} annotation(s)")
try:
save_recording_auto(recording, output_path=input, input_path=input, quiet=quiet, overwrite=True)
if not quiet:
click.echo(" ✓ Saved")
except Exception as e:
raise click.ClickException(f"Failed to save: {e}")
# ============================================================================
# Energy detection subcommand
# ============================================================================
@annotate.command(context_settings={"max_content_width": 200})
@click.argument("input", type=click.Path(exists=True))
@click.option("--label", type=str, default="signal", help="Annotation label")
@click.option("--threshold", type=float, default=1.2, help="Threshold multiplier above noise floor")
@click.option("--segments", type=int, default=10, help="Number of segments for noise estimation")
@click.option("--window-size", type=int, default=200, help="Smoothing window size")
@click.option("--min-distance", type=int, default=5000, help="Min distance between detections")
@click.option(
"--freq-method",
type=click.Choice(["nbw", "obw", "full-detected", "full-bandwidth"]),
default="nbw",
help="Frequency bounding method",
)
@click.option("--nfft", type=int, default=None, help="FFT size for frequency calculation")
@click.option("--obw-power", type=float, default=0.99, help="Power percentage for OBW/NBW (0.98-0.9999)")
@click.option(
"--type",
"annotation_type",
type=click.Choice(["standalone", "parallel", "intersection"]),
default="standalone",
help="Annotation type",
)
@click.option("--output", "-o", type=click.Path(), help="Output file path")
@click.option("--overwrite", is_flag=True, help="Overwrite input file (non-SigMF only)")
@click.option("--quiet", is_flag=True, help="Quiet mode")
def energy(
input,
label,
threshold,
segments,
window_size,
min_distance,
freq_method,
nfft,
obw_power,
annotation_type,
output,
overwrite,
quiet,
):
"""Auto-detect signals using energy-based method.
Detects bursts based on energy above noise floor. Best for bursty signals
and intermittent transmissions.
\b
Frequency Bounding Methods:
nbw - Nominal bandwidth (default, best for real signals)
obw - Occupied bandwidth (more conservative, includes sidelobes)
full-detected - Lowest to highest spectral component
full-bandwidth - Entire Nyquist span
\b
Examples:
ria annotate energy capture.sigmf-data --label burst
ria annotate energy signal.npy --threshold 1.5 --min-distance 10000
ria annotate energy signal.sigmf-data --freq-method obw
ria annotate energy signal.sigmf-data --freq-method full-detected
"""
try:
recording = load_recording(input)
if not quiet:
click.echo(f"Loaded: {input}")
except Exception as e:
raise click.ClickException(f"Failed to load recording: {e}")
if not quiet:
click.echo("\nDetecting signals using energy-based method...")
click.echo(" Time detection:")
click.echo(f" Segments: {segments}")
click.echo(f" Threshold: {threshold}x noise floor")
click.echo(f" Window size: {window_size} samples")
click.echo(f" Min distance: {min_distance} samples")
click.echo(f" Frequency bounds: {freq_method}")
try:
initial_count = len(recording.annotations)
recording = detect_signals_energy(
recording,
k=segments,
threshold_factor=threshold,
window_size=window_size,
min_distance=min_distance,
label=label,
annotation_type=annotation_type,
freq_method=freq_method,
nfft=nfft,
obw_power=obw_power,
)
added = len(recording.annotations) - initial_count
if not quiet:
click.echo(f" ✓ Added {added} annotation(s)")
save_recording_auto(recording, output, input, quiet, overwrite)
if not quiet:
click.echo(" ✓ Saved")
except Exception as e:
raise click.ClickException(f"Energy detection failed: {e}")
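The energy method the command wraps can be sketched as follows: estimate a noise floor from the quietest of k equal segments, smooth the instantaneous power with a moving average, and keep runs above threshold_factor times the floor. This is an illustration of the approach, not the detect_signals_energy implementation; parameter names are borrowed from the CLI options:

```python
import numpy as np


def energy_detect(iq, k=10, threshold_factor=1.2, window_size=200):
    power = np.abs(iq) ** 2
    # Noise floor: mean power of the quietest of k equal segments.
    noise_floor = min(seg.mean() for seg in np.array_split(power, k))
    # Moving-average smoothing before thresholding.
    smoothed = np.convolve(power, np.ones(window_size) / window_size, mode="same")
    mask = smoothed > threshold_factor * noise_floor
    # Collapse the boolean mask into (sample_start, sample_count) runs.
    edges = np.flatnonzero(np.diff(np.r_[False, mask, False].astype(np.int8)))
    return [(int(s), int(e - s)) for s, e in zip(edges[::2], edges[1::2])]
```

Note the smoothing window widens each detection by roughly half a window on each side; the real detector also enforces min_distance between runs, which this sketch omits.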
# ============================================================================
# CUSUM detection subcommand
# ============================================================================
@annotate.command()
@click.argument("input", type=click.Path(exists=True))
@click.option("--label", type=str, default="segment", help="Annotation label")
@click.option("--min-duration", type=float, default=5.0, help="Min duration in ms (prevents over-segmentation)")
@click.option("--window-size", type=int, default=1, help="Smoothing window size")
@click.option("--tolerance", type=int, default=-1, help="Sample tolerance for merging")
@click.option(
"--type",
"annotation_type",
type=click.Choice(["standalone", "parallel", "intersection"]),
default="standalone",
help="Annotation type",
)
@click.option("--output", "-o", type=click.Path(), help="Output file path")
@click.option("--overwrite", is_flag=True, help="Overwrite input file (non-SigMF only)")
@click.option("--quiet", is_flag=True, help="Quiet mode")
def cusum(input, label, min_duration, window_size, tolerance, annotation_type, output, overwrite, quiet):
"""Auto-detect segments using CUSUM method.
Detects signal state changes (on/off, amplitude transitions). Best for
segmenting continuous signals.
IMPORTANT: Always specify --min-duration to prevent excessive segmentation.
\b
Examples:
ria annotate cusum signal.sigmf-data --min-duration 5.0
ria annotate cusum data.npy --min-duration 10.0 --label state
"""
try:
recording = load_recording(input)
if not quiet:
click.echo(f"Loaded: {input}")
except Exception as e:
raise click.ClickException(f"Failed to load recording: {e}")
if not quiet:
click.echo("\nDetecting segments using CUSUM...")
click.echo(f" Min duration: {min_duration} ms")
if window_size != 1:
click.echo(f" Window size: {window_size} samples")
try:
initial_count = len(recording.annotations)
recording = annotate_with_cusum(
recording,
label=label,
window_size=window_size,
min_duration=min_duration,
tolerance=tolerance,
annotation_type=annotation_type,
)
added = len(recording.annotations) - initial_count
if not quiet:
click.echo(f" ✓ Added {added} annotation(s)")
save_recording_auto(recording, output, input, quiet, overwrite)
if not quiet:
click.echo(" ✓ Saved")
except Exception as e:
raise click.ClickException(f"CUSUM detection failed: {e}")
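The underlying change-point idea can be illustrated with a one-sided CUSUM: accumulate deviations above a running mean and declare a change when the statistic exceeds a threshold. A toy sketch, not the annotate_with_cusum implementation; the drift and threshold values here are illustrative:

```python
def cusum_change_points(x, drift=0.01, threshold=1.0):
    change_points = []
    mean = 0.0
    s = 0.0
    for i, xi in enumerate(x):
        mean += (xi - mean) / (i + 1)  # running mean of samples seen so far
        s = max(0.0, s + xi - mean - drift)  # one-sided cumulative sum
        if s > threshold:
            change_points.append(i)
            s = 0.0  # reset after each detection
    return change_points
```

Applied to a step from 0 to 1 at sample 500, the first change point lands a couple of samples after the step; practical detectors add the mirrored downward statistic and minimum-duration merging, which is what --min-duration controls.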
# ============================================================================
# Threshold detection subcommand
# ============================================================================
@annotate.command()
@click.argument("input", type=click.Path(exists=True))
@click.option("--threshold", type=float, required=True, help="Threshold (0.0-1.0, fraction of max magnitude)")
@click.option("--label", type=str, default=None, help="Annotation label")
@click.option(
"--window-size",
type=int,
default=None,
help="Smoothing window size in samples (default: 1ms at recording sample rate)",
)
@click.option(
"--type",
"annotation_type",
type=click.Choice(["standalone", "parallel", "intersection"]),
default="standalone",
help="Annotation type",
)
@click.option("--channel", type=int, default=0, help="Channel index to annotate (default: 0)")
@click.option("--output", "-o", type=click.Path(), help="Output file path")
@click.option("--overwrite", is_flag=True, help="Overwrite input file (non-SigMF only)")
@click.option("--quiet", is_flag=True, help="Quiet mode")
def threshold(input, threshold, label, window_size, annotation_type, channel, output, overwrite, quiet):
"""Auto-detect signals using threshold method.
Detects samples above a percentage of maximum magnitude. Best for simple
power-based detection.
\b
Examples:
ria annotate threshold signal.sigmf-data --threshold 0.7 --label wifi
ria annotate threshold data.npy --threshold 0.5 --window-size 2048
"""
if not (0.0 <= threshold <= 1.0):
raise click.ClickException(f"--threshold must be between 0.0 and 1.0, got {threshold}")
try:
recording = load_recording(input)
if not quiet:
click.echo(f"Loaded: {input}")
except Exception as e:
raise click.ClickException(f"Failed to load recording: {e}")
if not quiet:
click.echo("\nDetecting signals using threshold qualifier...")
click.echo(f" Threshold: {threshold * 100:.1f}% of max magnitude")
click.echo(f" Window size: {'auto (1ms)' if window_size is None else f'{window_size} samples'}")
click.echo(f" Channel: {channel}")
try:
initial_count = len(recording.annotations)
recording = threshold_qualifier(
recording,
threshold=threshold,
window_size=window_size,
label=label,
annotation_type=annotation_type,
channel=channel,
)
added = len(recording.annotations) - initial_count
if not quiet:
click.echo(f" ✓ Added {added} annotation(s)")
save_recording_auto(recording, output, input, quiet, overwrite)
if not quiet:
click.echo(" ✓ Saved")
except Exception as e:
raise click.ClickException(f"Threshold detection failed: {e}")
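The threshold method amounts to a smoothed-magnitude mask against a fraction of the peak; a self-contained sketch (illustrative only — the smoothing and run-merging inside threshold_qualifier may differ):

```python
import numpy as np


def threshold_mask(iq, threshold, window_size=1024):
    # Smooth |x| with a moving average, then mark samples whose smoothed
    # magnitude exceeds `threshold` times the smoothed maximum.
    magnitude = np.abs(iq)
    smoothed = np.convolve(magnitude, np.ones(window_size) / window_size, mode="same")
    return smoothed > threshold * smoothed.max()
```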
# ============================================================================
# Separate subcommand (Phase 2: Parallel signal separation)
# ============================================================================
@annotate.command()
@click.argument("input", type=click.Path(exists=True))
@click.option("--indices", type=str, help="Comma-separated annotation indices to split (default: all)")
@click.option("--nfft", type=int, default=65536, help="FFT size for spectral analysis")
@click.option("--noise-threshold-db", type=float, help="Noise floor threshold in dB (auto-estimated if not specified)")
@click.option("--min-component-bw", type=float, default=50e3, help="Min component bandwidth in Hz")
@click.option("--output", "-o", type=click.Path(), help="Output file path")
@click.option("--overwrite", is_flag=True, help="Overwrite input file (non-SigMF only)")
@click.option("--quiet", is_flag=True, help="Quiet mode")
@click.option("--verbose", is_flag=True, help="Verbose output (show detected components)")
def separate(input, indices, nfft, noise_threshold_db, min_component_bw, output, overwrite, quiet, verbose):
"""
Auto-detect parallel frequency-offset signals and split into sub-bands.
Separates overlapping signals that occupy the same time window but
different frequency bands: multiple frequency components found within a
single annotation are split into separate annotations, using spectral
peak detection with dual bandwidth estimation.
\b
Key Features:
- Spectral peak detection for frequency components
- Auto noise floor estimation (or user-specified)
- Dual bandwidth estimation: -3dB primary, cumulative power fallback
- Handles narrowband and wide signals (OFDM)
\b
Examples:
ria annotate separate capture.sigmf-data
ria annotate separate signal.npy --indices 0,1,2
ria annotate separate data.sigmf-data --noise-threshold-db -70
ria annotate separate signal.npy --min-component-bw 100000
"""
try:
recording = load_recording(input)
if not quiet:
click.echo(f"Loaded: {input}")
except Exception as e:
raise click.ClickException(f"Failed to load recording: {e}")
# Parse indices if specified
indices_list = get_indices_list(indices=indices, recording=recording)
if len(recording.annotations) == 0:
if not quiet:
click.echo("No annotations to split")
return
if not quiet:
click.echo("\nSplitting annotations by frequency components...")
click.echo(f" Input annotations: {len(recording.annotations)}")
if indices_list:
click.echo(f" Splitting indices: {indices_list}")
click.echo(f" FFT size: {nfft}")
if noise_threshold_db is not None:
click.echo(f" Noise threshold: {noise_threshold_db} dB")
else:
click.echo(" Noise threshold: auto-estimated")
click.echo(f" Min component BW: {format_frequency(min_component_bw)}")
try:
initial_count = len(recording.annotations)
recording = split_recording_annotations(
recording,
indices=indices_list,
nfft=nfft,
noise_threshold_db=noise_threshold_db,
min_component_bw=min_component_bw,
)
final_count = len(recording.annotations)
added = final_count - initial_count
if not quiet:
click.echo(f" ✓ Output annotations: {final_count} ({'+' if added >= 0 else ''}{added} change)")
if verbose and added > 0:
click.echo("\n Details:")
for i in range(initial_count, final_count):
ann = recording.annotations[i]
freq_range = f"{format_frequency(ann.freq_lower_edge)} - {format_frequency(ann.freq_upper_edge)}"
click.echo(
f" [{i}] samples {format_sample_count(ann.sample_start)}-"
f"{format_sample_count(ann.sample_start + ann.sample_count)}: {freq_range}"
)
save_recording_auto(recording, output, input, quiet, overwrite)
if not quiet:
click.echo(" ✓ Saved")
except Exception as e:
raise click.ClickException(f"Spectral separation failed: {e}")
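The separation step can be sketched end to end: average a power spectrum over segments, threshold it above the estimated noise floor, and report contiguous above-threshold frequency runs as components. A simplified illustration with assumed defaults, not the split_recording_annotations algorithm (which also performs -3 dB / cumulative-power bandwidth estimation):

```python
import numpy as np


def spectral_components(iq, sample_rate, nfft=4096, noise_margin_db=10.0):
    # Averaged periodogram over non-overlapping segments (Welch-style,
    # rectangular window).
    n_seg = len(iq) // nfft
    segs = iq[: n_seg * nfft].reshape(n_seg, nfft)
    spectra = np.fft.fftshift(np.fft.fft(segs, axis=1), axes=1)
    psd_db = 10 * np.log10(np.mean(np.abs(spectra) ** 2, axis=0) + 1e-30)
    # Noise floor from the median bin; keep runs noise_margin_db above it.
    mask = psd_db > np.median(psd_db) + noise_margin_db
    freqs = np.fft.fftshift(np.fft.fftfreq(nfft, d=1 / sample_rate))
    edges = np.flatnonzero(np.diff(np.r_[False, mask, False].astype(np.int8)))
    return [(freqs[s], freqs[e - 1]) for s, e in zip(edges[::2], edges[1::2])]
```

For two tones at -200 kHz and +100 kHz in noise, this returns two frequency runs, one per tone, in ascending frequency order.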

View File

@ -3,7 +3,6 @@
This module contains all the CLI bindings for the ria package.
"""
from .annotate import annotate
from .campaign import campaign
from .capture import capture
from .combine import combine

View File

@ -232,8 +232,8 @@ def generate():
\b
Examples:
ria synth chirp -b 1e6 -p 0.01 -s 10e6 -o chirp_basic.sigmf
ria synth fsk -M 2 -r 100e3 -s 2e6 -o fsk2_basic.sigmf
utils synth chirp -b 1e6 -p 0.01 -s 10e6 -o chirp_basic.sigmf
utils synth fsk -M 2 -r 100e3 -s 2e6 -o fsk2_basic.sigmf
"""
pass

View File

@ -270,13 +270,13 @@ def transform():
Examples:\n
\b
# List available augmentations
ria transform augment --list
utils transform augment --list
\b
# Apply channel swap
ria transform augment channel_swap input.npy
utils transform augment channel_swap input.npy
\b
# Apply AWGN impairment
ria transform impair awgn input.npy --snr-db 15
utils transform impair awgn input.npy --snr-db 15
"""
pass

View File

@ -7,7 +7,7 @@ from typing import Optional
import click
from ria_toolkit_oss.io.recording import from_npy, load_recording
from ria_toolkit_oss.view.view_signal import view_annotations, view_channels, view_sig
from ria_toolkit_oss.view.view_signal import view_channels, view_sig
from ria_toolkit_oss.view.view_signal_simple import view_simple_sig
from .common import echo_progress, echo_verbose, load_yaml_config
@ -34,11 +34,6 @@ VISUALIZATION_TYPES = {
"spines",
],
},
"annotations": {
"function": view_annotations,
"description": "Annotation-focused spectrogram view",
"options": ["channel", "dark"],
},
"channels": {"function": view_channels, "description": "Multi-channel IQ and spectrogram view", "options": []},
}
@ -199,7 +194,7 @@ def print_metadata(recording, quiet):
@click.option(
"--type",
"viz_type",
type=click.Choice(list(VISUALIZATION_TYPES.keys()) + ["annotate", "annotation"]),
type=click.Choice(list(VISUALIZATION_TYPES.keys())),
default="simple",
show_default=True,
help="Visualization type",
@ -243,7 +238,7 @@ def print_metadata(recording, quiet):
@click.option("--verbose", "-v", is_flag=True, help="Verbose output")
@click.option("--quiet", "-q", is_flag=True, help="Suppress output")
@click.option("--overwrite", is_flag=True, help="Overwrite existing output file")
def view( # noqa: C901
def view(
input,
viz_type,
output,
@ -302,9 +297,6 @@ def view( # noqa: C901
# Legacy NPY file
ria view old_capture.npy --legacy --type simple
"""
if viz_type in ["annotate", "annotation"]:
viz_type = "annotations"
# Load config file if specified
if config:
_ = load_yaml_config(config)

View File

@ -26,11 +26,9 @@ class _FakeResp:
def _run_register(argv: list[str], cfg_path) -> int:
fake_resp = _FakeResp({"agent_id": "agent-1", "token": "tok-abc"})
with (
patch.dict("os.environ", {"RIA_AGENT_CONFIG": str(cfg_path)}, clear=False),
patch("urllib.request.urlopen", return_value=fake_resp),
patch.object(sys, "argv", ["ria-agent", *argv]),
):
with patch.dict("os.environ", {"RIA_AGENT_CONFIG": str(cfg_path)}, clear=False), \
patch("urllib.request.urlopen", return_value=fake_resp), \
patch.object(sys, "argv", ["ria-agent", *argv]):
try:
agent_cli.main()
except SystemExit as exc:
@ -98,11 +96,9 @@ def test_stream_allow_tx_does_not_persist(tmp_path):
captured["cfg"] = cfg
return None
with (
patch.dict("os.environ", {"RIA_AGENT_CONFIG": str(cfg_path)}, clear=False),
patch("ria_toolkit_oss.agent.streamer.run_streamer", new=_fake_run_streamer),
patch.object(sys, "argv", ["ria-agent", "stream", "--allow-tx"]),
):
with patch.dict("os.environ", {"RIA_AGENT_CONFIG": str(cfg_path)}, clear=False), \
patch("ria_toolkit_oss.agent.streamer.run_streamer", new=_fake_run_streamer), \
patch.object(sys, "argv", ["ria-agent", "stream", "--allow-tx"]):
try:
agent_cli.main()
except SystemExit:

View File

@ -70,7 +70,9 @@ def test_server_start_stream_stop_cycle_over_real_ws():
reconnect_pause=0.05,
)
streamer = Streamer(ws=client, sdr_factory=lambda d, i: MockSDR(buffer_size=32, seed=0))
task = asyncio.create_task(client.run(on_message=streamer.on_message, heartbeat=streamer.build_heartbeat))
task = asyncio.create_task(
client.run(on_message=streamer.on_message, heartbeat=streamer.build_heartbeat)
)
await asyncio.wait_for(ready.wait(), timeout=3.0)
await asyncio.wait_for(stopped.wait(), timeout=3.0)
client.stop()

View File

@ -77,7 +77,10 @@ def test_server_tx_start_binary_stop_cycle_over_real_ws():
msg = await asyncio.wait_for(ws.recv(), timeout=2.0)
if isinstance(msg, str):
control_frames.append(json.loads(msg))
if any(f.get("type") == "tx_status" and f.get("state") == "transmitting" for f in control_frames):
if any(
f.get("type") == "tx_status" and f.get("state") == "transmitting"
for f in control_frames
):
break
await ws.send(json.dumps({"type": "tx_stop", "app_id": "tx-app"}))

View File

@ -30,6 +30,7 @@ from ria_toolkit_oss.agent.config import AgentConfig
from ria_toolkit_oss.agent.streamer import Streamer
from ria_toolkit_oss.sdr.mock import MockSDR
_STRESS_S = float(os.environ.get("RIA_LOCK_STRESS_S", "2.0"))
@ -155,21 +156,18 @@ def test_full_duplex_stays_healthy_over_stress_window():
s = Streamer(ws=ws, sdr_factory=lambda d, i: sdr, cfg=AgentConfig(tx_enabled=True))
await s.on_message(
{"type": "start", "app_id": "app-1", "radio_config": {"device": "mock", "buffer_size": BUF}}
{"type": "start", "app_id": "app-1",
"radio_config": {"device": "mock", "buffer_size": BUF}}
)
await s.on_message(
{
"type": "tx_start",
"app_id": "app-1",
{"type": "tx_start", "app_id": "app-1",
"radio_config": {
"device": "mock",
"buffer_size": BUF,
"device": "mock", "buffer_size": BUF,
"tx_sample_rate": 1_000_000,
"tx_center_frequency": 2.45e9,
"tx_gain": -20,
"underrun_policy": "zero",
},
}
}}
)
marker = np.arange(BUF, dtype=np.complex64) + 1
@ -182,10 +180,12 @@ def test_full_duplex_stays_healthy_over_stress_window():
# which routes through the same setters the stress test above
# verifies.
await s.on_message(
{"type": "tx_configure", "app_id": "app-1", "radio_config": {"tx_sample_rate": 1_000_000 + i}}
{"type": "tx_configure", "app_id": "app-1",
"radio_config": {"tx_sample_rate": 1_000_000 + i}}
)
await s.on_message(
{"type": "configure", "app_id": "app-1", "radio_config": {"sample_rate": 2_000_000 + i}}
{"type": "configure", "app_id": "app-1",
"radio_config": {"sample_rate": 2_000_000 + i}}
)
i += 1
await asyncio.sleep(0.005)
@ -197,7 +197,8 @@ def test_full_duplex_stays_healthy_over_stress_window():
ws, s = asyncio.run(scenario())
# No error frame leaked out.
errors = [m for m in ws.json_sent if m.get("type") in ("error", "tx_status") and m.get("state") == "error"]
errors = [m for m in ws.json_sent
if m.get("type") in ("error", "tx_status") and m.get("state") == "error"]
assert errors == [], f"Unexpected error frames: {errors}"
# RX produced IQ frames and TX's callback ran — heartbeat-level contention
# check: both setter paths were hit at least once during configure dispatch.

View File

@ -121,7 +121,9 @@ def test_start_without_device_emits_error():
def test_configure_queues_update():
async def scenario():
streamer = Streamer(ws=FakeWs(), sdr_factory=_factory)
await streamer.on_message({"type": "configure", "app_id": "x", "radio_config": {"center_frequency": 915e6}})
await streamer.on_message(
{"type": "configure", "app_id": "x", "radio_config": {"center_frequency": 915e6}}
)
# Before start(), pending config lives on the standalone dict exposed via the _pending_config shim.
return streamer._pending_config

View File

@ -143,7 +143,10 @@ def test_rejects_duplicate_tx_session():
return ws
ws = asyncio.run(scenario())
errors = [m for m in ws.json_sent if m.get("type") == "tx_status" and m.get("state") == "error"]
errors = [
m for m in ws.json_sent
if m.get("type") == "tx_status" and m.get("state") == "error"
]
assert any("already active" in e.get("message", "") for e in errors)

View File

@ -70,7 +70,10 @@ def test_underrun_pause_stops_session_and_emits_status():
# Do not push any buffers. The callback underruns on first tick and
# the watchdog should emit "underrun" and tear down.
for _ in range(100):
if any(m.get("type") == "tx_status" and m.get("state") == "underrun" for m in ws.json_sent):
if any(
m.get("type") == "tx_status" and m.get("state") == "underrun"
for m in ws.json_sent
):
break
await asyncio.sleep(0.01)
for _ in range(50):
@ -100,7 +103,9 @@ def test_underrun_zero_keeps_session_alive():
ws, still_alive = asyncio.run(scenario())
# No underrun status emitted (policy absorbs it silently).
assert not any(m.get("type") == "tx_status" and m.get("state") == "underrun" for m in ws.json_sent)
assert not any(
m.get("type") == "tx_status" and m.get("state") == "underrun" for m in ws.json_sent
)
assert still_alive
# All produced buffers are zero (no real data was pushed).
assert sdr.tx_produced, "expected at least one TX callback invocation"
@ -124,7 +129,9 @@ def test_underrun_repeat_replays_last_buffer():
ws, sdr = asyncio.run(scenario())
# No underrun status emitted.
assert not any(m.get("type") == "tx_status" and m.get("state") == "underrun" for m in ws.json_sent)
assert not any(
m.get("type") == "tx_status" and m.get("state") == "underrun" for m in ws.json_sent
)
# At least two buffers equal to the marker — the real one and ≥1 repeat.
matching = [b for b in sdr.tx_produced if np.array_equal(b, marker)]
assert len(matching) >= 2, f"expected ≥2 buffers matching marker, got {len(matching)}"

View File

@ -142,7 +142,9 @@ def test_malformed_control_frame_does_not_crash():
async def on_msg(m):
handled.append(m)
task = asyncio.create_task(client.run(on_message=on_msg, heartbeat=lambda: {"type": "heartbeat"}))
task = asyncio.create_task(
client.run(on_message=on_msg, heartbeat=lambda: {"type": "heartbeat"})
)
for _ in range(50):
if handled:
break

View File

@ -102,7 +102,9 @@ def test_binary_frame_dropped_when_no_handler():
async def on_msg(m):
messages.append(m)
task = asyncio.create_task(client.run(on_message=on_msg, heartbeat=lambda: {"type": "heartbeat"}))
task = asyncio.create_task(
client.run(on_message=on_msg, heartbeat=lambda: {"type": "heartbeat"})
)
for _ in range(50):
if messages:
break

View File

@ -4,6 +4,7 @@ from __future__ import annotations
import json
import stat
import threading
from types import SimpleNamespace
import pytest
@ -107,47 +108,37 @@ class TestCampaignResult:
return r
def test_total_steps(self):
r = self._make(
[
r = self._make([
StepResult("tx1", "s1", "/out", _ok_qa(), 0.0),
StepResult("tx1", "s2", "/out", _ok_qa(), 0.0),
]
)
])
assert r.total_steps == 2
def test_passed_count(self):
r = self._make(
[
r = self._make([
StepResult("tx1", "s1", "/out", _ok_qa(), 0.0),
StepResult("tx1", "s2", "/out", _failed_qa(), 0.0),
]
)
])
assert r.passed == 1
def test_failed_count(self):
r = self._make(
[
r = self._make([
StepResult("tx1", "s1", "/out", _ok_qa(), 0.0),
StepResult("tx1", "s2", "/out", _failed_qa(), 0.0),
]
)
])
assert r.failed == 1
def test_flagged_count(self):
r = self._make(
[
r = self._make([
StepResult("tx1", "s1", "/out", _ok_qa(), 0.0),
StepResult("tx1", "s2", "/out", _flagged_qa(), 0.0),
]
)
])
assert r.flagged == 1
def test_error_step_counts_as_failed_not_passed(self):
r = self._make(
[
r = self._make([
StepResult("tx1", "s1", None, _ok_qa(), 0.0, error="disk full"),
]
)
])
assert r.failed == 1
assert r.passed == 0
@ -241,45 +232,37 @@ class TestExtractTxParams:
assert _extract_tx_params(tx) is None
def test_returns_signal_params(self):
tx = SimpleNamespace(
sdr_agent={
tx = SimpleNamespace(sdr_agent={
"modulation": "QPSK",
"symbol_rate": 1e6,
"center_frequency": 2.4e9,
}
)
})
result = _extract_tx_params(tx)
assert result == {"modulation": "QPSK", "symbol_rate": 1e6, "center_frequency": 2.4e9}
def test_strips_infra_key_node_id(self):
tx = SimpleNamespace(
sdr_agent={
tx = SimpleNamespace(sdr_agent={
"modulation": "BPSK",
"node_id": "node_abc123",
}
)
})
result = _extract_tx_params(tx)
assert "node_id" not in result
assert result == {"modulation": "BPSK"}
def test_strips_infra_key_session_code(self):
tx = SimpleNamespace(
sdr_agent={
tx = SimpleNamespace(sdr_agent={
"modulation": "FSK",
"session_code": "amber-peak-transmit",
}
)
})
result = _extract_tx_params(tx)
assert "session_code" not in result
def test_strips_none_values(self):
tx = SimpleNamespace(
sdr_agent={
tx = SimpleNamespace(sdr_agent={
"modulation": "QPSK",
"order": None,
"rolloff": 0.35,
}
)
})
result = _extract_tx_params(tx)
assert "order" not in result
assert result == {"modulation": "QPSK", "rolloff": 0.35}
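Taken together, the assertions in this hunk pin down `_extract_tx_params`'s contract: infrastructure keys and `None` values are dropped from the `sdr_agent` mapping, and a missing config yields `None`. A minimal sketch consistent with those assertions — the key set and function name are inferred from the tests, and the real implementation may differ:

```python
from types import SimpleNamespace

# Inferred from the tests above; the toolkit may strip other keys too.
_INFRA_KEYS = {"node_id", "session_code"}

def extract_tx_params(tx):
    """Sketch: return signal parameters from tx.sdr_agent, or None if absent."""
    cfg = getattr(tx, "sdr_agent", None)
    if not cfg:
        return None
    # Drop infrastructure keys and unset (None) values; keep everything else.
    return {k: v for k, v in cfg.items() if k not in _INFRA_KEYS and v is not None}

tx = SimpleNamespace(sdr_agent={"modulation": "BPSK", "node_id": "node_abc123", "order": None})
print(extract_tx_params(tx))  # {'modulation': 'BPSK'}
```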
@@ -291,8 +274,7 @@ class TestExtractTxParams:
assert "node_id" in cfg
def test_full_sdr_agent_config(self):
tx = SimpleNamespace(
sdr_agent={
tx = SimpleNamespace(sdr_agent={
"modulation": "16QAM",
"order": 4,
"symbol_rate": 5e6,
@@ -301,8 +283,7 @@ class TestExtractTxParams:
"rolloff": 0.35,
"node_id": "node_xyz",
"session_code": "some-code",
}
)
})
result = _extract_tx_params(tx)
assert result == {
"modulation": "16QAM",
View File
@@ -116,7 +116,9 @@ class TestLabelRecording:
def test_tx_params_written_as_tx_prefix_keys(self):
params = {"modulation": "QPSK", "symbol_rate": 1e6}
rec = label_recording(_simple_recording(), "dev", _wifi_step(), time.time(), tx_params=params)
rec = label_recording(
_simple_recording(), "dev", _wifi_step(), time.time(), tx_params=params
)
assert rec.metadata["tx_modulation"] == "QPSK"
assert rec.metadata["tx_symbol_rate"] == pytest.approx(1e6)
@@ -129,15 +131,17 @@
"filter": "rrc",
"rolloff": 0.35,
}
rec = label_recording(_simple_recording(), "dev", _wifi_step(), time.time(), tx_params=params)
rec = label_recording(
_simple_recording(), "dev", _wifi_step(), time.time(), tx_params=params
)
for k, v in params.items():
assert f"tx_{k}" in rec.metadata
assert (
rec.metadata[f"tx_{k}"] == pytest.approx(v) if isinstance(v, float) else rec.metadata[f"tx_{k}"] == v
)
assert rec.metadata[f"tx_{k}"] == pytest.approx(v) if isinstance(v, float) else rec.metadata[f"tx_{k}"] == v
def test_tx_params_empty_dict_writes_nothing(self):
rec = label_recording(_simple_recording(), "dev", _wifi_step(), time.time(), tx_params={})
rec = label_recording(
_simple_recording(), "dev", _wifi_step(), time.time(), tx_params={}
)
tx_keys = [k for k in rec.metadata if k.startswith("tx_") and k != "tx_power_dbm"]
assert tx_keys == []
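The labelling rule these tests pin down — each signal parameter lands in the recording's metadata under a `tx_` prefix, and an empty mapping writes nothing — can be sketched on its own. `apply_tx_params` is a hypothetical helper, not the toolkit's `label_recording`, which does more than this:

```python
def apply_tx_params(metadata, tx_params):
    """Sketch: copy signal parameters into metadata under a tx_ prefix."""
    for key, value in (tx_params or {}).items():
        metadata[f"tx_{key}"] = value
    return metadata

print(apply_tx_params({}, {"modulation": "QPSK", "symbol_rate": 1e6}))
# {'tx_modulation': 'QPSK', 'tx_symbol_rate': 1000000.0}
```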
View File
@@ -3,7 +3,7 @@
from __future__ import annotations
import threading
from unittest.mock import patch
from unittest.mock import MagicMock, patch
import numpy as np
import pytest
@@ -73,6 +73,8 @@ class TestTxExecutorRun:
waited = []
real_ev = threading.Event()
orig_wait = real_ev.wait
def _fake_wait(timeout=None):
waited.append(timeout)
return False
View File
@@ -12,6 +12,7 @@ import pytest
from ria_toolkit_oss.remote_control.remote_transmitter import RemoteTransmitter
# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------
@@ -62,18 +63,14 @@ class TestSetRadio:
def test_hackrf_alias(self):
tx = RemoteTransmitter()
mock_sdr = _make_mock_sdr()
mock_module = MagicMock()
mock_module.HackRF = MagicMock(return_value=mock_sdr)
with patch.dict("sys.modules", {"ria_toolkit_oss.sdr.hackrf": mock_module}):
with patch("ria_toolkit_oss.sdr.hackrf.HackRF", return_value=mock_sdr):
tx.set_radio("hackrf", "")
assert tx._sdr is mock_sdr
def test_hackrf_one_alias(self):
tx = RemoteTransmitter()
mock_sdr = _make_mock_sdr()
mock_module = MagicMock()
mock_module.HackRF = MagicMock(return_value=mock_sdr)
with patch.dict("sys.modules", {"ria_toolkit_oss.sdr.hackrf": mock_module}):
with patch("ria_toolkit_oss.sdr.hackrf.HackRF", return_value=mock_sdr):
tx.set_radio("hackrf_one", "")
assert tx._sdr is mock_sdr
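The change in this hunk swaps `patch.dict("sys.modules", ...)` with a whole mock module for `patch("...HackRF", ...)` on the class attribute directly. The mechanics can be shown with a throwaway module (`demo_hackrf` is a stand-in for `ria_toolkit_oss.sdr.hackrf`, so the example runs without the toolkit installed):

```python
import sys
import types
from unittest.mock import MagicMock, patch

# Throwaway module standing in for ria_toolkit_oss.sdr.hackrf.
demo = types.ModuleType("demo_hackrf")
demo.HackRF = type("HackRF", (), {})
sys.modules["demo_hackrf"] = demo

mock_sdr = MagicMock(name="sdr")
# patch() swaps the class attribute in place: any code that constructs
# demo_hackrf.HackRF() inside the with-block gets the mock instance,
# and the original class is restored on exit -- no sys.modules juggling.
with patch("demo_hackrf.HackRF", return_value=mock_sdr):
    assert sys.modules["demo_hackrf"].HackRF() is mock_sdr
```

Patching the attribute keeps the rest of the module intact, which is why it reads more precisely than replacing the whole module in `sys.modules`.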
@@ -244,40 +241,34 @@ class TestRunFunction:
def test_init_tx_without_radio_returns_failure(self):
tx = RemoteTransmitter()
resp = tx.run_function(
{
resp = tx.run_function({
"function_name": "init_tx",
"center_frequency": 2.4e9,
"sample_rate": 20e6,
"gain": 0,
}
)
})
assert resp["status"] is False
assert resp["error_message"]
def test_init_tx_with_radio_success(self):
tx = self._tx_with_mock_sdr()
resp = tx.run_function(
{
resp = tx.run_function({
"function_name": "init_tx",
"center_frequency": 2.4e9,
"sample_rate": 20e6,
"gain": 30,
}
)
})
assert resp["status"] is True
def test_transmit_runs_for_short_duration(self):
tx = self._tx_with_mock_sdr()
tx._sdr.init_tx = MagicMock()
resp = tx.run_function(
{
resp = tx.run_function({
"function_name": "init_tx",
"center_frequency": 2.4e9,
"sample_rate": 20e6,
"gain": 0,
}
)
})
resp = tx.run_function({"function_name": "transmit", "duration_s": 0.02})
assert resp["status"] is True
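These tests imply a dispatch contract for `run_function`: it takes a request dict keyed by `function_name` and always answers with `{"status": bool, "error_message": str}`. A hedged sketch of just that shape — the real `RemoteTransmitter.run_function` supports more functions than `init_tx`:

```python
from unittest.mock import MagicMock

def run_function(request, sdr=None):
    """Dispatch sketch mirroring the response shape the tests assert on."""
    if request.get("function_name") == "init_tx":
        if sdr is None:
            # Matches test_init_tx_without_radio_returns_failure.
            return {"status": False, "error_message": "no radio configured"}
        sdr.init_tx(
            center_frequency=request["center_frequency"],
            sample_rate=request["sample_rate"],
            gain=request["gain"],
        )
        return {"status": True, "error_message": ""}
    return {"status": False, "error_message": "unknown function"}

req = {"function_name": "init_tx", "center_frequency": 2.4e9, "sample_rate": 20e6, "gain": 30}
print(run_function(req))               # no radio: status False
print(run_function(req, MagicMock()))  # mocked radio: status True
```

Returning a failure dict instead of raising keeps the remote protocol uniform: the caller only ever inspects `status` and `error_message`.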
View File
@@ -7,6 +7,8 @@ sys.modules so they run regardless of whether the packages are installed.
from __future__ import annotations
import json
import sys
import threading
import time
from types import ModuleType
from unittest.mock import MagicMock, patch
@@ -197,11 +199,15 @@ class TestErrorHandling:
def test_missing_paramiko_raises_runtime_error(self):
"""If paramiko is absent, connecting gives a clear RuntimeError."""
import importlib
import ria_toolkit_oss.remote_control.remote_transmitter_controller as mod
with patch.dict("sys.modules", {"paramiko": None}):
with pytest.raises((RuntimeError, ImportError)):
mod.RemoteTransmitterController(host="h", ssh_user="u", ssh_key_path="/k")
mod.RemoteTransmitterController(
host="h", ssh_user="u", ssh_key_path="/k"
)
# ---------------------------------------------------------------------------
View File
@@ -2,7 +2,7 @@
from __future__ import annotations
from unittest.mock import MagicMock, patch
from unittest.mock import MagicMock, call, patch
import pytest
@@ -12,6 +12,7 @@ from ria_toolkit_oss.orchestration.campaign import (
TransmitterConfig,
)
# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------
@@ -178,7 +179,9 @@ class TestInitRemoteTxControllers:
}
]
executor = _make_executor(d)
with patch("ria_toolkit_oss.remote_control.RemoteTransmitterController") as mock_cls:
with patch(
"ria_toolkit_oss.remote_control.RemoteTransmitterController"
) as mock_cls:
executor._init_remote_tx_controllers()
mock_cls.assert_not_called()
assert executor._remote_tx_controllers == {}
@@ -261,7 +264,7 @@ class TestStartTransmitterSdrRemote:
tx = executor.config.transmitters[0]
step = CaptureStep(duration=5.0, label="nochan")
executor._start_transmitter(tx, step)
_, kwargs = ctrl.init_tx.call_args
_, kwargs = mock_ctrl_kwarg = ctrl.init_tx.call_args
assert kwargs["channel"] == 0
def test_missing_controller_raises(self):
@@ -378,11 +381,7 @@ class TestRunWithSdrRemote:
),
patch.object(executor, "_close_sdr"),
patch.object(executor, "_close_remote_tx_controllers"),
patch.object(
executor,
"_execute_step",
return_value=MagicMock(error=None, qa=MagicMock(flagged=False, snr_db=20.0, duration_s=10.0)),
),
patch.object(executor, "_execute_step", return_value=MagicMock(error=None, qa=MagicMock(flagged=False, snr_db=20.0, duration_s=10.0))),
):
executor.run()
@@ -402,7 +401,6 @@ class TestTransmitBufferAndTimeout:
def _executor_with_ctrl(self):
from ria_toolkit_oss.orchestration.executor import CampaignExecutor
cfg = CampaignConfig.from_dict(_FULL_CAMPAIGN_DICT)
executor = CampaignExecutor(cfg)
ctrl = MagicMock()
View File
@@ -1,6 +1,6 @@
# CLI Tests
Comprehensive test suite for the ria CLI commands.
Comprehensive test suite for the utils CLI commands.
## Test Structure
@@ -13,25 +13,25 @@ Comprehensive test suite for the ria CLI commands.
### Run all CLI tests:
```bash
poetry run pytest tests/ria_toolkit_oss_cli/ -v
poetry run pytest tests/utils_cli/ -v
```
### Run specific test file:
```bash
poetry run pytest tests/ria_toolkit_oss_cli/test_common.py -v
poetry run pytest tests/ria_toolkit_oss_cli/test_discover.py -v
poetry run pytest tests/ria_toolkit_oss_cli/test_capture.py -v
poetry run pytest tests/utils_cli/test_common.py -v
poetry run pytest tests/utils_cli/test_discover.py -v
poetry run pytest tests/utils_cli/test_capture.py -v
```
### Run specific test class or function:
```bash
poetry run pytest tests/ria_toolkit_oss_cli/test_capture.py::TestCaptureCommand::test_capture_basic -v
poetry run pytest tests/ria_toolkit_oss_cli/test_common.py::test_parse_frequency -v
poetry run pytest tests/utils_cli/test_capture.py::TestCaptureCommand::test_capture_basic -v
poetry run pytest tests/utils_cli/test_common.py::test_parse_frequency -v
```
### Run with coverage:
```bash
poetry run pytest tests/ria_toolkit_oss_cli/ --cov=utils_cli --cov-report=html
poetry run pytest tests/utils_cli/ --cov=utils_cli --cov-report=html
```
## Test Coverage
View File
@@ -1 +1 @@
"""Tests for ria CLI commands."""
"""Tests for utils CLI commands."""
View File
@@ -6,6 +6,8 @@ import threading
import time
from unittest.mock import MagicMock, patch
import pytest
from ria_toolkit_oss.agent import NodeAgent