Compare commits

..

21 Commits

Author SHA1 Message Date
a4cd158b2a  Merge branch 'main' into docs/sdr-guides-update  2026-04-24 14:36:53 -04:00  (all checks successful)
2baae2f63e  Merge pull request 'Update SDR guides, Getting Started Guide and fix Sphinx warnings for release' (#29) from docs/sdr-guides-update into main  2026-04-24 11:52:45 -04:00  (all checks successful; Reviewed-on: #29; Reviewed-by: muq <muq@noreply.localhost>)
4df5455af4  Merge branch 'main' into docs/sdr-guides-update  2026-04-24 10:36:18 -04:00  (all checks successful)
2881aaf06e  Merge pull request 'zfp-oss' (#27) from zfp-oss into main  2026-04-23 11:10:43 -04:00  (some checks failed: Build Project (3.11) failing; all other checks successful; Reviewed-on: #27)
ben  50d04161b7  Merge remote-tracking branch 'origin/main' into zfp-oss  2026-04-22 15:44:12 -04:00  (all checks successful)
ben  07c72294f5  removing orchestrator references  2026-04-22 10:10:25 -04:00  (all checks successful)
ben  c9b19949ad  timeout chunk improvements  2026-04-21 17:11:16 -04:00  (all checks successful)
ben  53e8e5adb6  chunk timeout error  2026-04-21 16:40:49 -04:00  (all checks successful)
a502dd97a9  Merge pull request 'Moved all contents of datatypes to data, refactored accordingly' (#28) from fix/unify_data_folders into main  2026-04-21 16:04:28 -04:00  (all checks successful; Reviewed-on: #28; Reviewed-by: gillian <gillian@qoherent.ai>)
ben  34b67c0c17  campaign loop support  2026-04-21 15:56:04 -04:00  (all checks successful)
ben  39d5d74d6a  large memory fix  2026-04-21 15:03:57 -04:00  (all checks successful)
8a66860d33  Moved all contents of datatypes to data, refactored accordingly  2026-04-21 14:38:06 -04:00  (all checks successful)
ben  4d3aaf6ec8  json access issue  2026-04-21 14:34:48 -04:00  (all checks successful)
ben  4aea2841be  two-machine TX/RX  2026-04-21 14:09:36 -04:00  (all checks successful)
ben  4c2c9c0288  rx and tx test  2026-04-21 13:23:49 -04:00  (all checks successful)
ben  c27a5944c7  formats  2026-04-20 16:49:52 -04:00  (all checks successful)
ben  062a0e766f  Merge origin/main into zfp-oss; regenerate poetry.lock  2026-04-20 16:44:59 -04:00  (Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>)
ben  cdcc03327b  Merge remote-tracking branch 'origin/main' into zfp-oss  2026-04-20 16:42:08 -04:00
ben  912fc54f25  Merge remote-tracking branch 'origin/qac-cli-commands' into zfp-oss  2026-04-20 13:28:34 -04:00  (some checks failed: Test with tox (3.10, 3.11, 3.12) failing; builds and docs successful)
ben  b884397f1f  Merge remote-tracking branch 'origin/main' into zfp-oss  2026-04-20 13:28:12 -04:00
ben  dae9510981  transmission code  2026-04-20 12:33:14 -04:00
82 changed files with 2443 additions and 2194 deletions

View File

@@ -159,7 +159,7 @@ Finally, RIA Toolkit OSS can be installed directly from the source code. This ap
 Once the project is installed, you can import modules, functions, and classes from the Toolkit for use in your Python code. For example, you can use the following import statement to access the `Recording` object:
 ```python
-from ria_toolkit_oss.datatypes import Recording
+from ria_toolkit_oss.data import Recording
 ```
 Additional usage information is provided in the project documentation: [RIA Toolkit OSS Documentation](https://ria-toolkit-oss.readthedocs.io/).
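This PR renames `ria_toolkit_oss.datatypes` to `ria_toolkit_oss.data` throughout the docs and source. For downstream code that still uses the old paths, a one-off rewrite can be sketched as below; `migrate_imports` is a hypothetical helper, not part of the toolkit, and the regex only assumes the package names shown in this diff.

```python
import re

# Maps the old package path to the new one, as renamed in PR #28.
# \b prevents matching names like "datatypes_extra".
_OLD = re.compile(r"\bria_toolkit_oss\.datatypes\b")

def migrate_imports(source: str) -> str:
    """Rewrite old-style ria_toolkit_oss import paths in a source string."""
    return _OLD.sub("ria_toolkit_oss.data", source)

print(migrate_imports("from ria_toolkit_oss.datatypes import Recording"))
# → from ria_toolkit_oss.data import Recording
```

Dotted submodule paths (e.g. `ria_toolkit_oss.datatypes.recording`) are rewritten too, since the word boundary matches before the following dot.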

View File

@@ -25,7 +25,7 @@ In this example, we initialize the `Blade` SDR, configure it to record a signal
 import time

-from ria_toolkit_oss.datatypes.recording import Recording
+from ria_toolkit_oss.data.recording import Recording
 from ria_toolkit_oss.sdr.blade import Blade

 my_radio = Blade()

View File

@@ -21,7 +21,7 @@ Code
 import numpy as np

-from ria_toolkit_oss.datatypes.recording import Recording
+from ria_toolkit_oss.data.recording import Recording
 from ria_toolkit_oss.sdr.blade import Blade

 # Parameters

View File

@@ -1038,7 +1038,7 @@ For quick non-CLI use:
 .. code-block:: python

-   from ria_toolkit_oss.datatypes import Recording
+   from ria_toolkit_oss.data import Recording
    from ria_toolkit_oss.io import load_recording, to_sigmf
    from ria_toolkit_oss.transforms import iq_augmentations, iq_impairments

View File

@@ -11,15 +11,15 @@ The Radio Dataset Framework provides a software interface to access and manipula
 the need for users to interface with the source files directly. Instead, users initialize and interact with a Python
 object, while the complexities of efficient data retrieval and source file manipulation are managed behind the scenes.

-Ria Toolkit OSS includes an abstract class called :py:obj:`ria_toolkit_oss.datatypes.datasets.RadioDataset`, which defines common properties and
-behaviors for all radio datasets. :py:obj:`ria_toolkit_oss.datatypes.datasets.RadioDataset` can be considered a blueprint for all
+Ria Toolkit OSS includes an abstract class called :py:obj:`ria_toolkit_oss.data.datasets.RadioDataset`, which defines common properties and
+behaviors for all radio datasets. :py:obj:`ria_toolkit_oss.data.datasets.RadioDataset` can be considered a blueprint for all
 other radio dataset classes. This class is then subclassed to define more specific blueprints for different types
-of radio datasets. For example, :py:obj:`ria_toolkit_oss.datatypes.datasets.IQDataset`, which is tailored for machine learning tasks
+of radio datasets. For example, :py:obj:`ria_toolkit_oss.data.datasets.IQDataset`, which is tailored for machine learning tasks
 involving the processing of signals represented as IQ (In-phase and Quadrature) samples.

 Then, in the various project backends, there are concrete dataset classes, which inherit from both Ria Toolkit OSS and the base
 dataset class from the respective backend. For example, the :py:obj:`TorchIQDataset` class extends both
-:py:obj:`ria_toolkit_oss.datatypes.datasets.IQDataset` from Ria Toolkit OSS and :py:obj:`torch.ria_toolkit_oss.datatypes.IterableDataset` from
+:py:obj:`ria_toolkit_oss.data.datasets.IQDataset` from Ria Toolkit OSS and :py:obj:`torch.ria_toolkit_oss.data.IterableDataset` from
 PyTorch, providing a concrete dataset class tailored for IQ datasets and optimized for the PyTorch backend.

 Dataset initialization
@@ -130,7 +130,7 @@ Dataset processing and manipulation
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

 All radio datasets support methods tailored specifically for radio processing. These methods are backend-independent,
-inherited from the blueprints in Ria Toolkit OSS like :py:obj:`ria_toolkit_oss.datatypes.datasets.RadioDataset`.
+inherited from the blueprints in Ria Toolkit OSS like :py:obj:`ria_toolkit_oss.data.datasets.RadioDataset`.

 For example, we can trim down the length of the examples from 1,024 to 512 samples, and then augment the dataset:
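The blueprint/backend pattern the docs describe can be sketched with the stdlib `abc` module. Everything below other than the cited class names `RadioDataset` and `IQDataset` is illustrative; this is not the toolkit's actual API, and `ListIQDataset` is a toy backend invented for the example.

```python
from abc import ABC, abstractmethod

class RadioDataset(ABC):
    """Blueprint: backend-independent radio-dataset behaviour (illustrative)."""
    @abstractmethod
    def __len__(self) -> int: ...

    def trim(self, num_samples: int) -> "RadioDataset":
        # Backend-independent processing methods would live here.
        raise NotImplementedError

class IQDataset(RadioDataset):
    """More specific blueprint for IQ-sample datasets (illustrative)."""

class ListIQDataset(IQDataset):
    """Toy concrete backend: examples held in a Python list."""
    def __init__(self, examples):
        self.examples = [list(e) for e in examples]

    def __len__(self) -> int:
        return len(self.examples)

    def trim(self, num_samples: int) -> "ListIQDataset":
        # e.g. cut each example from 1,024 down to 512 samples
        return ListIQDataset(e[:num_samples] for e in self.examples)

ds = ListIQDataset([[0.1] * 1024, [0.2] * 1024])
print(len(ds.trim(512).examples[0]))  # → 512
```

A real backend class would instead inherit from both `IQDataset` and the backend's native dataset base (as `TorchIQDataset` does with PyTorch), keeping the radio-processing methods backend-independent.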

View File

@@ -1,7 +1,7 @@
 Dataset License SubModule
 =========================

-.. automodule:: ria_toolkit_oss.datatypes.datasets.license
+.. automodule:: ria_toolkit_oss.data.datasets.license
    :members:
    :undoc-members:
    :show-inheritance:

View File

@@ -1,11 +1,11 @@
-Datatypes Package (ria_toolkit_oss.datatypes)
+Datatypes Package (ria_toolkit_oss.data)
 =============================================

 .. |br| raw:: html

    <br />

-.. automodule:: ria_toolkit_oss.datatypes
+.. automodule:: ria_toolkit_oss.data
    :members:
    :undoc-members:
    :show-inheritance:
@@ -13,7 +13,7 @@ Datatypes Package (ria_toolkit_oss.datatypes)

 Radio Dataset SubPackage
 ------------------------
-.. automodule:: ria_toolkit_oss.datatypes.datasets
+.. automodule:: ria_toolkit_oss.data.datasets
    :members:
    :undoc-members:
    :show-inheritance:
@@ -21,5 +21,5 @@ Radio Dataset SubPackage
 .. toctree::
    :maxdepth: 2

-   Dataset License SubModule <ria_toolkit_oss.datatypes.datasets.license>
+   Dataset License SubModule <ria_toolkit_oss.data.datasets.license>
    Radio Datasets <radio_datasets>

View File

@@ -11,7 +11,7 @@ class and function signatures, and doctest examples where available.
    :maxdepth: 2
    :caption: Contents:

-   Datatypes Package <datatypes/ria_toolkit_oss.datatypes>
+   Data Package <data/ria_toolkit_oss.data>
    SDR Package <ria_toolkit_oss.sdr>
    IO Package <ria_toolkit_oss.io>
    Transforms Package <ria_toolkit_oss.transforms>

poetry.lock generated
View File

@@ -1,4 +1,4 @@
-# This file is automatically @generated by Poetry 2.3.3 and should not be changed by hand.
+# This file is automatically @generated by Poetry 2.1.2 and should not be changed by hand.

 [[package]]
 name = "alabaster"
@@ -230,14 +230,14 @@ uvloop = ["uvloop (>=0.15.2) ; sys_platform != \"win32\"", "winloop (>=0.5.0) ;

 [[package]]
 name = "cachetools"
-version = "7.0.5"
+version = "7.0.6"
 description = "Extensible memoizing collections and decorators"
 optional = false
 python-versions = ">=3.10"
 groups = ["test"]
 files = [
-    {file = "cachetools-7.0.5-py3-none-any.whl", hash = "sha256:46bc8ebefbe485407621d0a4264b23c080cedd913921bad7ac3ed2f26c183114"},
-    {file = "cachetools-7.0.5.tar.gz", hash = "sha256:0cd042c24377200c1dcd225f8b7b12b0ca53cc2c961b43757e774ebe190fd990"},
+    {file = "cachetools-7.0.6-py3-none-any.whl", hash = "sha256:4e94956cfdd3086f12042cdd29318f5ced3893014f7d0d059bf3ead3f85b7f8b"},
+    {file = "cachetools-7.0.6.tar.gz", hash = "sha256:e5d524d36d65703a87243a26ff08ad84f73352adbeafb1cde81e207b456aaf24"},
 ]

@@ -1271,7 +1271,7 @@ files = [
 [package.dependencies]
 attrs = ">=22.2.0"
-jsonschema-specifications = ">=2023.3.6"
+jsonschema-specifications = ">=2023.03.6"
 referencing = ">=0.28.4"
 rpds-py = ">=0.25.0"

@@ -3749,4 +3749,4 @@ files = [
 [metadata]
 lock-version = "2.1"
 python-versions = ">=3.10"
-content-hash = "ffde300b2fc93161d2279a6e2b899bc988d3b5eb3833135821830affc9a5fb62"
+content-hash = "66c9adf647316db90f963da05e8a83574378bfa4db2c69ce751446b5ee7c408c"

View File

@@ -50,7 +50,7 @@ dependencies = [
     "pyyaml (>=6.0.3,<7.0.0)",
     "click (>=8.1.0,<9.0.0)",
     "matplotlib (>=3.8.0,<4.0.0)",
-    "paramiko (>=4.0.0)"
+    "paramiko (>=3.5.1)"
 ]

 # [project.optional-dependencies] Commented out to prevent Tox tests from failing
@@ -149,6 +149,11 @@ exclude = '''
 [tool.pytest.ini_options]
 pythonpath = ["src"]
+filterwarnings = [
+    # FastAPI emits this internally when handling 422 responses; the constant
+    # is not yet renamed in the installed starlette version, so we can't migrate.
+    "ignore:'HTTP_422_UNPROCESSABLE_ENTITY' is deprecated:DeprecationWarning",
+]

 [tool.isort]
 profile = "black"
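The added `filterwarnings` entry uses pytest's `action:message:category` filter syntax, where the message part is a regex matched against the start of the warning text. Its runtime effect can be sketched with the stdlib `warnings` module; the message string below mirrors the one added to pyproject.toml, and the snippet is only an illustration of the filter semantics.

```python
import warnings

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    # Same filter the pyproject entry installs: match on message prefix + category.
    warnings.filterwarnings(
        "ignore",
        message="'HTTP_422_UNPROCESSABLE_ENTITY' is deprecated",
        category=DeprecationWarning,
    )
    warnings.warn("'HTTP_422_UNPROCESSABLE_ENTITY' is deprecated, use 422 instead", DeprecationWarning)
    warnings.warn("something else", UserWarning)

print(len(caught))  # → 1 — only the UserWarning gets through
```

Pinning the filter to both the message prefix and `DeprecationWarning` keeps other deprecation warnings visible in the test run.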

View File

@ -68,7 +68,7 @@ _HEARTBEAT_INTERVAL = 30 # seconds between heartbeats
_POLL_TIMEOUT = 30 # server-side long-poll duration _POLL_TIMEOUT = 30 # server-side long-poll duration
_POLL_CLIENT_TIMEOUT = 40 # client read timeout — slightly longer than server _POLL_CLIENT_TIMEOUT = 40 # client read timeout — slightly longer than server
_RECONNECT_PAUSE = 5 # seconds to wait after a poll error before retrying _RECONNECT_PAUSE = 5 # seconds to wait after a poll error before retrying
_CHUNK_SIZE = 50 * 1024 * 1024 # 50 MB — well below Cloudflare's 100 MB limit _CHUNK_SIZE = 10 * 1024 * 1024 # 10 MB per chunk — fast enough for git-LFS to process within timeout
_DIRECT_THRESHOLD = 90 * 1024 * 1024 # files above this use chunked upload _DIRECT_THRESHOLD = 90 * 1024 * 1024 # files above this use chunked upload
_CAPTURE_SAMPLES = 4096 # IQ samples per inference window _CAPTURE_SAMPLES = 4096 # IQ samples per inference window
_IDLE_LABELS = frozenset({"noise", "idle", "no_signal", "unknown_protocol", "background"}) _IDLE_LABELS = frozenset({"noise", "idle", "no_signal", "unknown_protocol", "background"})
@ -93,16 +93,24 @@ class NodeAgent:
name: str, name: str,
sdr_device: str = "unknown", sdr_device: str = "unknown",
insecure: bool = False, insecure: bool = False,
role: str = "general",
session_code: str | None = None,
) -> None: ) -> None:
self.hub_url = hub_url.rstrip("/") self.hub_url = hub_url.rstrip("/")
self.api_key = api_key self.api_key = api_key
self.name = name self.name = name
self.sdr_device = sdr_device self.sdr_device = sdr_device
self.insecure = insecure self.insecure = insecure
self.role = role
self.session_code = session_code
self.node_id: str | None = None self.node_id: str | None = None
self._stop = threading.Event() self._stop = threading.Event()
# ── TX state ────────────────────────────────────────────────────────
self._tx_stop = threading.Event()
self._tx_thread: threading.Thread | None = None
# ── Inference state ───────────────────────────────────────────────── # ── Inference state ─────────────────────────────────────────────────
# Protected by _inf_lock for cross-thread model swaps. # Protected by _inf_lock for cross-thread model swaps.
self._inf_lock = threading.Lock() self._inf_lock = threading.Lock()
@ -172,19 +180,27 @@ class NodeAgent:
capabilities = ["campaign"] capabilities = ["campaign"]
if self._ort_available: if self._ort_available:
capabilities.append("inference") capabilities.append("inference")
resp = self._post( if self.role == "tx":
"/composer/nodes/register", capabilities.append("transmit")
json={ payload: dict = {
"name": self.name, "name": self.name,
"sdr_device": self.sdr_device, "sdr_device": self.sdr_device,
"ria_toolkit_version": self._ria_version, "ria_toolkit_version": self._ria_version,
"capabilities": capabilities, "capabilities": capabilities,
}, "role": self.role,
timeout=15, }
) if self.session_code:
payload["session_code"] = self.session_code
resp = self._post("/composer/nodes/register", json=payload, timeout=15)
resp.raise_for_status() resp.raise_for_status()
self.node_id = resp.json()["node_id"] self.node_id = resp.json()["node_id"]
logger.info("Registered as %r (node_id=%s)", self.name, self.node_id) logger.info(
"Registered as %r (node_id=%s, role=%s%s)",
self.name,
self.node_id,
self.role,
f", session_code={self.session_code!r}" if self.session_code else "",
)
def _deregister(self) -> None: def _deregister(self) -> None:
if not self.node_id: if not self.node_id:
@ -245,9 +261,10 @@ class NodeAgent:
if command == "run_campaign": if command == "run_campaign":
campaign_id: str = cmd.get("campaign_id") or str(uuid.uuid4()) campaign_id: str = cmd.get("campaign_id") or str(uuid.uuid4())
config_dict: dict = cmd.get("payload") or {} config_dict: dict = cmd.get("payload") or {}
skip_local_tx: bool = bool(cmd.get("skip_local_tx", False))
threading.Thread( threading.Thread(
target=self._run_campaign, target=self._run_campaign,
args=(campaign_id, config_dict), args=(campaign_id, config_dict, skip_local_tx),
daemon=True, daemon=True,
name=f"campaign-{campaign_id[:8]}", name=f"campaign-{campaign_id[:8]}",
).start() ).start()
@ -269,6 +286,17 @@ class NodeAgent:
self._stop_inference() self._stop_inference()
elif command == "configure_inference": elif command == "configure_inference":
self._queue_sdr_config(cmd) self._queue_sdr_config(cmd)
elif command == "start_transmit":
threading.Thread(
target=self._start_transmit,
args=(cmd,),
daemon=True,
name="ria-start-tx",
).start()
elif command == "stop_transmit":
self._stop_transmit()
elif command == "configure_transmit":
logger.info("configure_transmit received — will apply on next step boundary")
else: else:
logger.warning("Unknown command %r — ignored", command) logger.warning("Unknown command %r — ignored", command)
@ -276,7 +304,7 @@ class NodeAgent:
# Campaign execution # Campaign execution
# ------------------------------------------------------------------ # ------------------------------------------------------------------
def _run_campaign(self, campaign_id: str, config_dict: dict) -> None: def _run_campaign(self, campaign_id: str, config_dict: dict, skip_local_tx: bool = False) -> None:
try: try:
from ria_toolkit_oss.orchestration.campaign import CampaignConfig from ria_toolkit_oss.orchestration.campaign import CampaignConfig
from ria_toolkit_oss.orchestration.executor import CampaignExecutor from ria_toolkit_oss.orchestration.executor import CampaignExecutor
@ -288,10 +316,10 @@ class NodeAgent:
) )
return return
logger.info("Campaign %s starting", campaign_id[:8]) logger.info("Campaign %s starting (skip_local_tx=%s)", campaign_id[:8], skip_local_tx)
try: try:
config = CampaignConfig.from_dict(config_dict) config = CampaignConfig.from_dict(config_dict)
executor = CampaignExecutor(config) executor = CampaignExecutor(config, skip_local_tx=skip_local_tx)
result = executor.run() result = executor.run()
logger.info("Campaign %s completed — uploading recordings", campaign_id[:8]) logger.info("Campaign %s completed — uploading recordings", campaign_id[:8])
self._upload_recordings(campaign_id, config, result) self._upload_recordings(campaign_id, config, result)
@ -301,6 +329,58 @@ class NodeAgent:
logger.error("Campaign %s failed: %s", campaign_id[:8], exc) logger.error("Campaign %s failed: %s", campaign_id[:8], exc)
self._report_campaign_status(campaign_id, "failed", error=str(exc)) self._report_campaign_status(campaign_id, "failed", error=str(exc))
# ------------------------------------------------------------------
# TX execution
# ------------------------------------------------------------------
def _start_transmit(self, cmd: dict) -> None:
"""Execute a synthetic transmit campaign using TxExecutor.
The command payload mirrors a TransmitterConfig dict with an optional
``schedule`` of steps. Each step synthesises a signal and transmits it
via the local SDR in TX mode.
"""
try:
from ria_toolkit_oss.orchestration.tx_executor import TxExecutor
except ImportError as exc:
logger.error("start_transmit: TxExecutor not available: %s", exc)
return
if self._tx_thread and self._tx_thread.is_alive():
logger.warning("start_transmit: TX already running — ignoring duplicate command")
return
self._tx_stop.clear()
campaign_id: str = cmd.get("campaign_id") or str(uuid.uuid4())
executor = TxExecutor(
config=cmd,
sdr_device=self.sdr_device,
stop_event=self._tx_stop,
)
self._tx_thread = threading.Thread(
target=self._run_tx_campaign,
args=(executor, campaign_id),
daemon=True,
name=f"tx-campaign-{campaign_id[:8]}",
)
self._tx_thread.start()
def _run_tx_campaign(self, executor: Any, campaign_id: str) -> None:
try:
executor.run()
logger.info("TX campaign %s completed", campaign_id[:8])
self._report_campaign_status(campaign_id, "completed")
except Exception as exc:
logger.error("TX campaign %s failed: %s", campaign_id[:8], exc)
self._report_campaign_status(campaign_id, "failed", error=str(exc))
def _stop_transmit(self) -> None:
"""Signal the TX loop to stop gracefully."""
self._tx_stop.set()
if self._tx_thread and self._tx_thread.is_alive():
self._tx_thread.join(timeout=5.0)
logger.info("TX stopped")
# ------------------------------------------------------------------ # ------------------------------------------------------------------
# Inference — model loading # Inference — model loading
# ------------------------------------------------------------------ # ------------------------------------------------------------------
@ -579,13 +659,18 @@ class NodeAgent:
base_url = f"{self.hub_url}/datasets/upload" base_url = f"{self.hub_url}/datasets/upload"
steps = (result.get("steps") if isinstance(result, dict) else getattr(result, "steps", None)) or [] steps = (result.get("steps") if isinstance(result, dict) else getattr(result, "steps", None)) or []
output_obj = getattr(config, "output", None)
folder = getattr(output_obj, "folder", None)
campaign_name: str = folder if folder is not None else (getattr(config, "name", None) or "")
for step in steps: for step in steps:
output_path: str | None = getattr(step, "output_path", None) output_path: str | None = getattr(step, "output_path", None)
if not output_path: if not output_path:
continue continue
device_id: str = getattr(step, "transmitter_id", "") or "" device_id: str = getattr(step, "transmitter_id", "") or ""
for fpath in _sigmf_files(output_path): for fpath in _sigmf_files(output_path):
filename = os.path.basename(fpath) basename = os.path.basename(fpath)
path_parts = [p for p in (campaign_name, device_id) if p]
filename = "/".join(path_parts + [basename])
metadata = { metadata = {
"filename": filename, "filename": filename,
"repo_owner": repo_owner, "repo_owner": repo_owner,
@ -671,7 +756,7 @@ class NodeAgent:
headers=headers, headers=headers,
files={"file": (filename, chunk, "application/octet-stream")}, files={"file": (filename, chunk, "application/octet-stream")},
data={**metadata, "upload_id": upload_id, "chunk_index": i, "total_chunks": total_chunks}, data={**metadata, "upload_id": upload_id, "chunk_index": i, "total_chunks": total_chunks},
timeout=120, timeout=(30, None), # 30s connect, no read timeout — server may take minutes on final chunk
verify=verify, verify=verify,
) )
if not resp.ok: if not resp.ok:
@@ -848,6 +933,21 @@ def main() -> None:
     choices=["DEBUG", "INFO", "WARNING", "ERROR"],
     help="Logging verbosity (default: INFO)",
 )
+parser.add_argument(
+    "--role",
+    default=None,
+    choices=["general", "rx", "tx"],
+    help=("Node role reported to the hub. " "'tx' enables synthetic transmission commands. " "Default: general"),
+)
+parser.add_argument(
+    "--session-code",
+    default=None,
+    metavar="CODE",
+    help=(
+        "3-word session code to pair this TX agent with a waiting campaign, "
+        "e.g. 'amber-peak-transmit'. Supplied by the campaign UI."
+    ),
+)
 args = parser.parse_args()
@@ -861,6 +961,8 @@ def main() -> None:
     device = args.device or cfg.get("device", "unknown")
     insecure = args.insecure if args.insecure is not None else cfg.get("insecure", False)
     log_level = args.log_level or cfg.get("log_level", "INFO")
+    role = args.role or cfg.get("role", "general")
+    session_code = args.session_code or cfg.get("session_code")
     if not hub:
         parser.error("--hub is required (or set 'hub' in the config file)")
@@ -888,6 +990,8 @@ def main() -> None:
         name=name,
         sdr_device=device,
         insecure=insecure,
+        role=role,
+        session_code=session_code,
     )
     agent.run()
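The upload hunk above builds the remote filename by prefixing the local basename with the campaign and device components, dropping any that are empty. A minimal standalone sketch of that logic (the helper name `build_remote_filename` is illustrative, not from the source):

```python
import os


def build_remote_filename(campaign_name: str, device_id: str, fpath: str) -> str:
    """Join non-empty campaign/device components with the file's basename,
    using '/' separators, mirroring the logic in the upload hunk."""
    basename = os.path.basename(fpath)
    path_parts = [p for p in (campaign_name, device_id) if p]
    return "/".join(path_parts + [basename])
```

With both components set, `build_remote_filename("campaign-a", "tx01", "/tmp/rec.sigmf-data")` yields `"campaign-a/tx01/rec.sigmf-data"`; empty components are simply omitted rather than producing `//` in the path.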


@@ -1,4 +1,4 @@
-from ria_toolkit_oss.datatypes.annotation import Annotation
+from ria_toolkit_oss.data.annotation import Annotation
 # TODO figure out how to transfer labels in the merge case


@@ -3,7 +3,7 @@ from typing import Optional
 import numpy as np
-from ria_toolkit_oss.datatypes import Annotation, Recording
+from ria_toolkit_oss.data import Annotation, Recording
 def annotate_with_cusum(
@@ -24,7 +24,7 @@ def annotate_with_cusum(
     changes between a low and high amplitude.
     :param recording: A ``Recording`` object to annotate.
-    :type recording: ``ria_toolkit_oss.datatypes.Recording``
+    :type recording: ``ria_toolkit_oss.data.Recording``
     :param label: Label for the detected segments.
     :type label: str
     :param window_size: The length (in samples) of the moving average window.


@@ -11,7 +11,7 @@ from typing import Tuple
 import numpy as np
 from scipy.signal import filtfilt
-from ria_toolkit_oss.datatypes import Annotation, Recording
+from ria_toolkit_oss.data import Annotation, Recording
 def detect_signals_energy(


@@ -55,7 +55,7 @@ import numpy as np
 from scipy import ndimage
 from scipy import signal as scipy_signal
-from ria_toolkit_oss.datatypes import Annotation, Recording
+from ria_toolkit_oss.data import Annotation, Recording
 def find_spectral_components(


@@ -1,6 +1,6 @@
 import numpy as np
-from ria_toolkit_oss.datatypes import Recording
+from ria_toolkit_oss.data import Recording
 def qualify_slice_from_annotations(recording: Recording, slice_length: int):


@@ -1,8 +1,8 @@
 import numpy as np
 from scipy.signal import butter, lfilter
-from ria_toolkit_oss.datatypes.annotation import Annotation
-from ria_toolkit_oss.datatypes.recording import Recording
+from ria_toolkit_oss.data.annotation import Annotation
+from ria_toolkit_oss.data.recording import Recording
 def isolate_signal(recording: Recording, annotation: Annotation) -> Recording:


@@ -46,7 +46,7 @@ from typing import Optional
 import numpy as np
-from ria_toolkit_oss.datatypes import Annotation, Recording
+from ria_toolkit_oss.data import Annotation, Recording
 def _find_ranges(indices, max_gap):


@@ -57,7 +57,7 @@ class Annotation:
     def is_valid(self) -> bool:
         """
-        Check that the annotation sample count is > 0 and the freq_lower_edge<freq_upper_edge.
+        Verify ``sample_count > 0`` and the ``freq_lower_edge < freq_upper_edge``.
         :returns: True if valid, False if not.
         """
@@ -96,9 +96,9 @@ class Annotation:
     def __eq__(self, other: Annotation) -> bool:
         return self.__dict__ == other.__dict__
-    def to_sigmf_format(self):
+    def to_sigmf_format(self) -> dict:
         """
-        Returns a JSON dictionary representing this annotation formatted to be saved in a .sigmf-meta file.
+        Returns a JSON dictionary representation, formatted for saving in a ``.sigmf-meta`` file.
         """
         annotation_dict = {SigMFFile.START_INDEX_KEY: self.sample_start, SigMFFile.LENGTH_INDEX_KEY: self.sample_count}
@@ -119,7 +119,8 @@ class Annotation:
 def _is_jsonable(x: Any) -> bool:
     """
-    :return: True if x is JSON serializable, False otherwise.
+    :return: True if ``x`` is JSON serializable, False otherwise.
+    :rtype: bool
     """
     try:
         json.dumps(x)


@@ -7,8 +7,8 @@ from typing import Any, Optional
 from packaging.version import Version
-from ria_toolkit_oss.datatypes.datasets.license.dataset_license import DatasetLicense
-from ria_toolkit_oss.datatypes.datasets.radio_dataset import RadioDataset
+from ria_toolkit_oss.data.datasets.license.dataset_license import DatasetLicense
+from ria_toolkit_oss.data.datasets.radio_dataset import RadioDataset
 from ria_toolkit_oss.utils.abstract_attribute import abstract_attribute


@@ -7,11 +7,11 @@ from typing import Optional
 import h5py
 import numpy as np
-from ria_toolkit_oss.datatypes.datasets.h5helpers import (
+from ria_toolkit_oss.data.datasets.h5helpers import (
     append_entry_inplace,
     copy_dataset_entry_by_index,
 )
-from ria_toolkit_oss.datatypes.datasets.radio_dataset import RadioDataset
+from ria_toolkit_oss.data.datasets.radio_dataset import RadioDataset
 class IQDataset(RadioDataset, ABC):
@@ -19,7 +19,7 @@ class IQDataset(RadioDataset, ABC):
     radiofrequency (RF) signals represented as In-phase (I) and Quadrature (Q) samples.
     For machine learning tasks that involve processing spectrograms, please use
-    ria_toolkit_oss.datatypes.datasets.SpectDataset instead.
+    ria_toolkit_oss.data.datasets.SpectDataset instead.
     This is an abstract interface defining common properties and behaviour of IQDatasets. Therefore, this class
     should not be instantiated directly. Instead, it is subclassed to define custom interfaces for specific machine


@@ -12,7 +12,7 @@ import numpy as np
 import pandas as pd
 from numpy.typing import ArrayLike
-from ria_toolkit_oss.datatypes.datasets.h5helpers import (
+from ria_toolkit_oss.data.datasets.h5helpers import (
     append_entry_inplace,
     copy_file,
     copy_over_example,
@@ -29,7 +29,7 @@ class RadioDataset(ABC):
     This is an abstract interface defining common properties and behavior of radio datasets. Therefore, this class
     should not be instantiated directly. Instead, it should be subclassed to define specific interfaces for different
-    types of radio datasets. For example, see ria_toolkit_oss.datatypes.datasets.IQDataset, which is a radio dataset
+    types of radio datasets. For example, see ria_toolkit_oss.data.datasets.IQDataset, which is a radio dataset
     subclass tailored for tasks involving the processing of radio signals represented as IQ (In-phase and Quadrature)
     samples.


@@ -3,7 +3,7 @@ from __future__ import annotations
 import os
 from abc import ABC
-from ria_toolkit_oss.datatypes.datasets.radio_dataset import RadioDataset
+from ria_toolkit_oss.data.datasets.radio_dataset import RadioDataset
 class SpectDataset(RadioDataset, ABC):
@@ -13,7 +13,7 @@ class SpectDataset(RadioDataset, ABC):
     radio signal spectrograms.
     For machine learning tasks that involve processing on IQ samples, please use
-    ria_toolkit_oss.datatypes.datasets.IQDataset instead.
+    ria_toolkit_oss.data.datasets.IQDataset instead.
     This is an abstract interface defining common properties and behaviour of IQDatasets. Therefore, this class
     should not be instantiated directly. Instead, it is subclassed to define custom interfaces for specific machine


@@ -6,11 +6,8 @@ from typing import Optional
 import numpy as np
 from numpy.random import Generator
-from ria_toolkit_oss.datatypes.datasets import RadioDataset
-from ria_toolkit_oss.datatypes.datasets.h5helpers import (
-    copy_over_example,
-    make_empty_clone,
-)
+from ria_toolkit_oss.data.datasets import RadioDataset
+from ria_toolkit_oss.data.datasets.h5helpers import copy_over_example, make_empty_clone
 def split(dataset: RadioDataset, lengths: list[int | float]) -> list[RadioDataset]:
@@ -31,7 +28,7 @@ def split(dataset: RadioDataset, lengths: list[int | float]) -> list[RadioDatase
     cases.
     This function is deterministic, meaning it will always produce the same split. For a random split, see
-    ria_toolkit_oss.datatypes.datasets.random_split.
+    ria_toolkit_oss.data.datasets.random_split.
     :param dataset: Dataset to be split.
     :type dataset: RadioDataset
@@ -50,7 +47,7 @@ def split(dataset: RadioDataset, lengths: list[int | float]) -> list[RadioDatase
     >>> import string
     >>> import numpy as np
     >>> import pandas as pd
-    >>> from ria_toolkit_oss.datatypes.datasets import split
+    >>> from ria_toolkit_oss.data.datasets import split
     First, let's generate some random data:
@@ -126,7 +123,7 @@ def random_split(
     training and test datasets.
     This restriction makes it unlikely that a random split will produce datasets with the exact lengths specified.
-    If it is important to ensure the closest possible split, consider using ria_toolkit_oss.datatypes.datasets.split
+    If it is important to ensure the closest possible split, consider using ria_toolkit_oss.data.datasets.split
     instead.
     :param dataset: Dataset to be split.
@@ -144,7 +141,7 @@ def random_split(
     :rtype: list of RadioDataset
     See Also:
-        ria_toolkit_oss.datatypes.datasets.split: Usage is the same as for ``random_split()``.
+        ria_toolkit_oss.data.datasets.split: Usage is the same as for ``random_split()``.
     """
     if not isinstance(dataset, RadioDataset):
         raise ValueError(f"'dataset' must be RadioDataset or one of its subclasses, got {type(dataset)}.")
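`split()` accepts a mix of fractional and absolute lengths. The toolkit's exact rounding policy is not visible in this diff, so the sketch below is only an illustration of the idea: floor each fraction, then assign any leftover examples to the first split so the counts sum to the dataset length (the helper name `resolve_lengths` is hypothetical):

```python
def resolve_lengths(total: int, lengths: list) -> list:
    """Turn a mix of fractions (floats) and absolute counts (ints) into
    integer split sizes summing to `total`. Rounding policy (floor, with
    the remainder added to the first split) is an assumption."""
    counts = [int(length * total) if isinstance(length, float) else int(length)
              for length in lengths]
    remainder = total - sum(counts)
    if remainder:
        counts[0] += remainder  # keep the invariant sum(counts) == total
    return counts
```

For example, `resolve_lengths(10, [0.5, 0.5])` gives `[5, 5]`, while `[0.33, 0.33, 0.34]` floors to 3+3+3 and the leftover example lands in the first split.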


@@ -12,7 +12,7 @@ from typing import Any, Iterator, Optional
 import numpy as np
 from numpy.typing import ArrayLike
-from ria_toolkit_oss.datatypes.annotation import Annotation
+from ria_toolkit_oss.data.annotation import Annotation
 PROTECTED_KEYS = ["rec_id", "timestamp"]
@@ -26,7 +26,7 @@ class Recording:
     Metadata is stored in a dictionary of key value pairs,
     to include information such as sample_rate and center_frequency.
-    Annotations are a list of :ref:`Annotation <utils.data.Annotation>`,
+    Annotations are a list of :class:`~ria_toolkit_oss.data.Annotation`,
     defining bounding boxes in time and frequency with labels and metadata.
     Here, signal data is represented as a NumPy array. This class is then extended in the RIA Backends to provide
@@ -46,7 +46,7 @@ class Recording:
     :param metadata: Additional information associated with the recording.
     :type metadata: dict, optional
-    :param annotations: A collection of ``Annotation`` objects defining bounding boxes.
+    :param annotations: A collection of :class:`~ria_toolkit_oss.data.Annotation` objects defining bounding boxes.
     :type annotations: list of Annotations, optional
     :param dtype: Explicitly specify the data-type of the complex samples. Must be a complex NumPy type, such as
@@ -66,7 +66,7 @@ class Recording:
     **Examples:**
     >>> import numpy
-    >>> from ria_toolkit_oss.datatypes import Recording, Annotation
+    >>> from ria_toolkit_oss.data import Recording, Annotation
     >>> # Create an array of complex samples, just 1s in this case.
     >>> samples = numpy.ones(10000, dtype=numpy.complex64)
@@ -146,7 +146,7 @@ class Recording:
             self._metadata["timestamp"] = time.time()
         else:
             if not isinstance(self._metadata["timestamp"], (int, float)):
-                raise ValueError("timestamp must be int or float, not ", type(self._metadata["timestamp"]))
+                raise ValueError(f"timestamp must be int or float, not {type(self._metadata['timestamp'])}")
         if "rec_id" not in self.metadata:
             self._metadata["rec_id"] = generate_recording_id(data=self.data, timestamp=self._metadata["timestamp"])
@@ -274,7 +274,13 @@ class Recording:
         :return: A new recording with the same metadata and data, with dtype.
-        TODO: Add example usage.
+        **Examples:**
+        .. todo::
+            Usage examples coming soon!
         """
         # Rather than check for a valid datatype, let's cast and check the result. This makes it easier to provide
         # cross-platform support where the types are aliased across platforms.
@@ -305,7 +311,7 @@ class Recording:
         Create a recording and add metadata:
         >>> import numpy
-        >>> from ria_toolkit_oss.datatypes import Recording
+        >>> from ria_toolkit_oss.data import Recording
        >>>
         >>> samples = numpy.ones(10000, dtype=numpy.complex64)
         >>> metadata = {
@@ -360,7 +366,7 @@ class Recording:
         Create a recording and update metadata:
         >>> import numpy
-        >>> from ria_toolkit_oss.datatypes import Recording
+        >>> from ria_toolkit_oss.data import Recording
         >>> samples = numpy.ones(10000, dtype=numpy.complex64)
         >>> metadata = {
@@ -387,6 +393,7 @@ class Recording:
         """
         if key not in self.metadata:
             self.add_to_metadata(key=key, value=value)
+            return
         if not _is_jsonable(value):
             raise ValueError("Value must be JSON serializable.")
@@ -414,7 +421,7 @@ class Recording:
         Create a recording and add metadata:
         >>> import numpy
-        >>> from ria_toolkit_oss.datatypes import Recording
+        >>> from ria_toolkit_oss.data import Recording
         >>> samples = numpy.ones(10000, dtype=numpy.complex64)
         >>> metadata = {
@@ -438,7 +445,7 @@ class Recording:
         'rec_id': 'fda0f41...'} # Example value
         """
         if key not in PROTECTED_KEYS:
-            self._metadata.pop(key)
+            self._metadata.pop(key, None)
         else:
             raise ValueError(f"Key {key} is protected and cannot be modified or removed.")
@@ -447,7 +454,7 @@ class Recording:
         :param output_path: The output image path. Defaults to "images/signal.png".
         :type output_path: str, optional
-        :param kwargs: Keyword arguments passed on to utils.view.view_sig.
+        :param kwargs: Keyword arguments passed on to ria_toolkit_oss.view.view_sig.
         :type: dict of keyword arguments
         **Examples:**
@@ -455,7 +462,7 @@ class Recording:
         Create a recording and view it as a plot in a .png image:
         >>> import numpy
-        >>> from ria_toolkit_oss.datatypes import Recording
+        >>> from ria_toolkit_oss.data import Recording
         >>> samples = numpy.ones(10000, dtype=numpy.complex64)
         >>> metadata = {
@@ -466,14 +473,14 @@ class Recording:
         >>> recording = Recording(data=samples, metadata=metadata)
         >>> recording.view()
         """
-        from ria_toolkit_oss.view import view_sig
+        from ria_toolkit_oss.view.view_signal import view_sig
         view_sig(recording=self, output_path=output_path, **kwargs)
     def simple_view(self, **kwargs) -> None:
         """Create a plot of various signal visualizations as a PNG or SVG image.
-        :param kwargs: Keyword arguments passed on to ria_toolkit_oss.view.view_signal_simple.create_plots.
+        :param kwargs: Keyword arguments passed on to ria_toolkit_oss.view.view_signal_simple.view_simple_sig.
         :type: dict of keyword arguments
         **Examples:**
@@ -481,7 +488,7 @@ class Recording:
         Create a recording and view it as a plot in a .png image:
         >>> import numpy
-        >>> from ria_toolkit_oss.datatypes import Recording
+        >>> from ria_toolkit_oss.data import Recording
         >>> samples = numpy.ones(10000, dtype=numpy.complex64)
         >>> metadata = {
@@ -504,7 +511,7 @@ class Recording:
         The SigMF io format is defined by the `SigMF Specification Project <https://github.com/sigmf/SigMF>`_
         :param recording: The recording to be written to file.
-        :type recording: utils.data.Recording
+        :type recording: ria_toolkit_oss.data.Recording
         :param filename: The name of the file where the recording is to be saved. Defaults to auto generated filename.
         :type filename: os.PathLike or str, optional
         :param path: The directory path to where the recording is to be saved. Defaults to recordings/.
@@ -513,22 +520,6 @@ class Recording:
         :raises IOError: If there is an issue encountered during the file writing process.
         :return: None
-        **Examples:**
-        Create a recording and view it as a plot in a `.png` image:
-        >>> import numpy
-        >>> from utils.data import Recording
-        >>> samples = numpy.ones(10000, dtype=numpy.complex64)
-        >>> metadata = {
-        ...     "sample_rate": 1e6,
-        ...     "center_frequency": 2.44e9,
-        ... }
-        >>> recording = Recording(data=samples, metadata=metadata)
-        >>> recording.view()
         """
         from ria_toolkit_oss.io.recording import to_sigmf
@@ -554,7 +545,7 @@ class Recording:
         Create a recording and save it to a .npy file:
         >>> import numpy
-        >>> from utils.data import Recording
+        >>> from ria_toolkit_oss.data import Recording
         >>> samples = numpy.ones(10000, dtype=numpy.complex64)
         >>> metadata = {
@@ -605,7 +596,7 @@ class Recording:
         Create a recording and save it to a .wav file:
         >>> import numpy
-        >>> from utils.data import Recording
+        >>> from ria_toolkit_oss.data import Recording
         >>> samples = numpy.exp(1j * 2 * numpy.pi * 0.1 * numpy.arange(10000))
         >>> metadata = {"sample_rate": 1e6, "center_frequency": 915e6}
         >>> recording = Recording(data=samples, metadata=metadata)
@@ -655,7 +646,7 @@ class Recording:
         Create a recording and save it to a .blue file:
         >>> import numpy
-        >>> from utils.data import Recording
+        >>> from ria_toolkit_oss.data import Recording
         >>> samples = numpy.ones(10000, dtype=numpy.complex64)
         >>> metadata = {"sample_rate": 1e6, "center_frequency": 2.44e9}
         >>> recording = Recording(data=samples, metadata=metadata)
@@ -683,7 +674,7 @@ class Recording:
         Create a recording and trim it:
         >>> import numpy
-        >>> from utils.data import Recording
+        >>> from ria_toolkit_oss.data import Recording
         >>> samples = numpy.ones(10000, dtype=numpy.complex64)
         >>> metadata = {
@@ -712,7 +703,14 @@ class Recording:
         data = self.data[:, start_sample:end_sample]
         new_annotations = copy.deepcopy(self.annotations)
+        trimmed_annotations = []
         for annotation in new_annotations:
+            # skip annotations entirely outside the trim window
+            if annotation.sample_start + annotation.sample_count <= start_sample:
+                continue
+            if annotation.sample_start >= end_sample:
+                continue
             # trim annotation if it goes outside the trim boundaries
             if annotation.sample_start < start_sample:
                 annotation.sample_count = annotation.sample_count - (start_sample - annotation.sample_start)
@@ -723,8 +721,9 @@ class Recording:
             # shift annotation to align with the new start point
             annotation.sample_start = annotation.sample_start - start_sample
-        return Recording(data=data, metadata=self.metadata, annotations=new_annotations)
+            trimmed_annotations.append(annotation)
+        return Recording(data=data, metadata=self.metadata, annotations=trimmed_annotations)
     def normalize(self) -> Recording:
         """Scale the recording data, relative to its maximum value, so that the magnitude of the maximum sample is 1.
@@ -737,7 +736,7 @@ class Recording:
         Create a recording with maximum amplitude 0.5 and normalize to a maximum amplitude of 1:
         >>> import numpy
-        >>> from utils.data import Recording
+        >>> from ria_toolkit_oss.data import Recording
         >>> samples = numpy.ones(10000, dtype=numpy.complex64) * 0.5
         >>> metadata = {
@@ -753,7 +752,10 @@ class Recording:
         >>> print(numpy.max(numpy.abs(normalized_recording.data)))
         1
         """
-        scaled_data = self.data / np.max(abs(self.data))
+        max_val = np.max(abs(self.data))
+        if max_val == 0:
+            raise ValueError("Cannot normalize a recording with all-zero data.")
+        scaled_data = self.data / max_val
         return Recording(data=scaled_data, metadata=self.metadata, annotations=self.annotations)
     def __len__(self) -> int:
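The `trim` fix above drops annotations that fall entirely outside the trim window, clips partial overlaps, and shifts survivors to the new origin. A self-contained sketch of that clipping logic, using a minimal `Box` stand-in for `Annotation` (the names `Box` and `clip_to_window` are illustrative, not from the toolkit):

```python
from dataclasses import dataclass


@dataclass
class Box:
    sample_start: int
    sample_count: int


def clip_to_window(boxes: list, start_sample: int, end_sample: int) -> list:
    """Keep only the parts of each box inside [start_sample, end_sample),
    re-expressed relative to start_sample, mirroring the trim fix."""
    out = []
    for b in boxes:
        if b.sample_start + b.sample_count <= start_sample:
            continue  # ends before the window: drop
        if b.sample_start >= end_sample:
            continue  # starts after the window: drop
        start = max(b.sample_start, start_sample)
        end = min(b.sample_start + b.sample_count, end_sample)
        out.append(Box(sample_start=start - start_sample, sample_count=end - start))
    return out
```

Before the fix, fully out-of-window annotations survived trimming with nonsensical (even negative) coordinates; the two `continue` branches are what the new code adds.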


@@ -1,8 +0,0 @@
-"""
-The datatypes package contains abstract data types tailored for radio machine learning.
-"""
-__all__ = ["Annotation", "Recording"]
-from .annotation import Annotation
-from .recording import Recording


@@ -1,129 +0,0 @@
-from __future__ import annotations
-import json
-from typing import Any, Optional
-from sigmf import SigMFFile
-class Annotation:
-    """Signal annotations are labels or additional information associated with specific data points or segments within
-    a signal. These annotations could be used for tasks like supervised learning, where the goal is to train a model
-    to recognize patterns or characteristics in the signal associated with these annotations.
-    Annotations can be used to label interesting points in your recording.
-    :param sample_start: The index of the starting sample of the annotation.
-    :type sample_start: int
-    :param sample_count: The index of the ending sample of the annotation, inclusive.
-    :type sample_count: int
-    :param freq_lower_edge: The lower frequency of the annotation.
-    :type freq_lower_edge: float
-    :param freq_upper_edge: The upper frequency of the annotation.
-    :type freq_upper_edge: float
-    :param label: The label that will be displayed with the bounding box in compatible viewers including IQEngine.
-        Defaults to an emtpy string.
-    :type label: str, optional
-    :param comment: A human-readable comment. Defaults to an empty string.
-    :type comment: str, optional
-    :param detail: A dictionary of user defined annotation-specific metadata. Defaults to None.
-    :type detail: dict, optional
-    """
-    def __init__(
-        self,
-        sample_start: int,
-        sample_count: int,
-        freq_lower_edge: float,
-        freq_upper_edge: float,
-        label: Optional[str] = "",
-        comment: Optional[str] = "",
-        detail: Optional[dict] = None,
-    ):
-        """Initialize a new Annotation instance."""
-        self.sample_start = int(sample_start)
-        self.sample_count = int(sample_count)
-        self.freq_lower_edge = float(freq_lower_edge)
-        self.freq_upper_edge = float(freq_upper_edge)
-        self.label = str(label)
-        self.comment = str(comment)
-        if detail is None:
-            self.detail = {}
-        elif not _is_jsonable(detail):
-            raise ValueError(f"Detail object is not json serializable: {detail}")
-        else:
-            self.detail = detail
-    def is_valid(self) -> bool:
-        """
-        Verify ``sample_count > 0`` and the ``freq_lower_edge < freq_upper_edge``.
-        :returns: True if valid, False if not.
-        """
-        return self.sample_count > 0 and self.freq_lower_edge < self.freq_upper_edge
-    def overlap(self, other):
-        """
-        Quantify how much the bounding box in this annotation overlaps with another annotation.
-        :param other: The other annotation.
-        :type other: Annotation
-        :returns: The area of the overlap in samples*frequency, or 0 if they do not overlap."""
-        sample_overlap_start = max(self.sample_start, other.sample_start)
-        sample_overlap_end = min(self.sample_start + self.sample_count, other.sample_start + other.sample_count)
-        freq_overlap_start = max(self.freq_lower_edge, other.freq_lower_edge)
-        freq_overlap_end = min(self.freq_upper_edge, other.freq_upper_edge)
-        if freq_overlap_start >= freq_overlap_end or sample_overlap_start >= sample_overlap_end:
-            return 0
-        else:
-            return (sample_overlap_end - sample_overlap_start) * (freq_overlap_end - freq_overlap_start)
-    def area(self):
-        """
-        The 'area' of the bounding box, samples*frequency.
-        Useful to quantify annotation size.
-        :returns: sample length multiplied by bandwidth."""
-        return self.sample_count * (self.freq_upper_edge - self.freq_lower_edge)
-    def __eq__(self, other: Annotation) -> bool:
-        return self.__dict__ == other.__dict__
-    def to_sigmf_format(self) -> dict:
-        """
-        Returns a JSON dictionary representation, formatted for saving in a ``.sigmf-meta`` file.
-        """
-        annotation_dict = {SigMFFile.START_INDEX_KEY: self.sample_start, SigMFFile.LENGTH_INDEX_KEY: self.sample_count}
-        annotation_dict["metadata"] = {
-            SigMFFile.LABEL_KEY: self.label,
-            SigMFFile.COMMENT_KEY: self.comment,
-            SigMFFile.FHI_KEY: self.freq_upper_edge,
-            SigMFFile.FLO_KEY: self.freq_lower_edge,
-            "ria:detail": self.detail,
-        }
-        if _is_jsonable(annotation_dict):
-            return annotation_dict
-        else:
-            raise ValueError("Annotation dictionary was not json serializable.")
-def _is_jsonable(x: Any) -> bool:
-    """
-    :return: True if ``x`` is JSON serializable, False otherwise.
-    :rtype: bool
-    """
-    try:
-        json.dumps(x)
-        return True
-    except (TypeError, OverflowError):
-        return False
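The `Annotation.overlap` method in the file above computes the intersection of two time-frequency bounding boxes in samples × Hz. The same computation as a standalone function over plain tuples (the name `overlap_area` and the tuple layout are illustrative):

```python
def overlap_area(a: tuple, b: tuple) -> float:
    """Intersection area of two time-frequency boxes, in samples * Hz.
    Each box is (sample_start, sample_count, freq_lower_edge, freq_upper_edge)."""
    s0 = max(a[0], b[0])                  # later of the two starts
    s1 = min(a[0] + a[1], b[0] + b[1])    # earlier of the two ends
    f0 = max(a[2], b[2])
    f1 = min(a[3], b[3])
    if s0 >= s1 or f0 >= f1:
        return 0  # disjoint in time or frequency
    return (s1 - s0) * (f1 - f0)
```

As in the class method, boxes that are disjoint in either dimension contribute zero, so the value can serve directly as an intersection term when comparing detected annotations against ground truth.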


@ -1,855 +0,0 @@
from __future__ import annotations
import copy
import hashlib
import json
import os
import re
import time
import warnings
from typing import Any, Iterator, Optional
import numpy as np
from numpy.typing import ArrayLike
from ria_toolkit_oss.datatypes.annotation import Annotation
PROTECTED_KEYS = ["rec_id", "timestamp"]
class Recording:
"""Tape of complex IQ (in-phase and quadrature) samples with associated metadata and annotations.
Recording data is a complex array of shape C x N, where C is the number of channels
and N is the number of samples in each channel.
Metadata is stored in a dictionary of key value pairs,
to include information such as sample_rate and center_frequency.
Annotations are a list of :class:`~ria_toolkit_oss.datatypes.Annotation`,
defining bounding boxes in time and frequency with labels and metadata.
Here, signal data is represented as a NumPy array. This class is then extended in the RIA Backends to provide
support for different data structures, such as Tensors.
Recordings are long-form tapes that can be obtained either from a software-defined radio (SDR) or generated
synthetically. Machine learning datasets are then curated from collections of recordings by segmenting these
longer-form tapes into shorter units called slices.
All recordings are assigned a unique 64-character recording ID, ``rec_id``. If this field is missing from the
provided metadata, a new ID will be generated upon object instantiation.
:param data: Signal data as a tape of IQ samples: a C x N complex array, where C is the number of
channels and N is the number of samples in the signal. If data is a one-dimensional array of complex samples with
length N, it will be reshaped to a two-dimensional array with dimensions 1 x N.
:type data: array_like
:param metadata: Additional information associated with the recording.
:type metadata: dict, optional
:param annotations: A collection of :class:`~ria_toolkit_oss.datatypes.Annotation` objects defining bounding boxes.
:type annotations: list of Annotations, optional
:param dtype: Explicitly specify the data-type of the complex samples. Must be a complex NumPy type, such as
``np.complex64`` or ``np.complex128``. Default is None, in which case the type is determined implicitly. If
``data`` is a NumPy array, the Recording will use the dtype of ``data`` directly without any conversion.
:type dtype: numpy dtype object, optional
:param timestamp: The timestamp when the recording data was generated. If provided, it should be a float or integer
representing the time in seconds since epoch (e.g., ``time.time()``). Only used if the `timestamp` field is not
present in the provided metadata.
:type timestamp: float or int, optional
:raises ValueError: If data is not complex 1xN or CxN.
:raises ValueError: If metadata is not a python dict.
:raises ValueError: If metadata is not json serializable.
:raises ValueError: If annotations is not a list of valid annotation objects.
**Examples:**
>>> import numpy
>>> from ria_toolkit_oss.datatypes import Recording, Annotation
>>> # Create an array of complex samples, just 1s in this case.
>>> samples = numpy.ones(10000, dtype=numpy.complex64)
>>> # Create a dictionary of relevant metadata.
>>> sample_rate = 1e6
>>> center_frequency = 2.44e9
>>> metadata = {
... "sample_rate": sample_rate,
... "center_frequency": center_frequency,
... "author": "me",
... }
>>> # Create an annotation for the annotations list.
>>> annotations = [
... Annotation(
... sample_start=0,
... sample_count=1000,
... freq_lower_edge=center_frequency - (sample_rate / 2),
... freq_upper_edge=center_frequency + (sample_rate / 2),
... label="example",
... )
... ]
>>> # Store samples, metadata, and annotations together in a convenient object.
>>> recording = Recording(data=samples, metadata=metadata, annotations=annotations)
>>> print(recording.metadata)
{'sample_rate': 1000000.0, 'center_frequency': 2440000000.0, 'author': 'me', 'timestamp': 17369..., 'rec_id': 'fda0f41...'}
>>> print(recording.annotations[0].label)
example
"""
def __init__( # noqa C901
self,
data: ArrayLike | list[list],
metadata: Optional[dict[str, Any]] = None,
dtype: Optional[np.dtype] = None,
timestamp: Optional[float | int] = None,
annotations: Optional[list[Annotation]] = None,
):
data_arr = np.asarray(data)
if np.iscomplexobj(data_arr):
# Expect C x N
if data_arr.ndim == 1:
self._data = np.expand_dims(data_arr, axis=0) # N -> 1 x N
elif data_arr.ndim == 2:
self._data = data_arr
else:
raise ValueError("Complex data must be C x N.")
else:
raise ValueError("Input data must be complex.")
if dtype is not None:
self._data = self._data.astype(dtype)
assert np.iscomplexobj(self._data)
if metadata is None:
self._metadata = {}
elif isinstance(metadata, dict):
self._metadata = metadata
else:
raise ValueError(f"Metadata must be a python dict, but was {type(metadata)}.")
if not _is_jsonable(metadata):
raise ValueError("Value must be JSON serializable.")
if "timestamp" not in self.metadata:
if timestamp is not None:
if not isinstance(timestamp, (int, float)):
raise ValueError(f"timestamp must be int or float, not {type(timestamp)}")
self._metadata["timestamp"] = timestamp
else:
self._metadata["timestamp"] = time.time()
else:
if not isinstance(self._metadata["timestamp"], (int, float)):
raise ValueError(f"timestamp must be int or float, not {type(self._metadata['timestamp'])}")
if "rec_id" not in self.metadata:
self._metadata["rec_id"] = generate_recording_id(data=self.data, timestamp=self._metadata["timestamp"])
if annotations is None:
self._annotations = []
elif isinstance(annotations, list):
self._annotations = annotations
else:
raise ValueError("Annotations must be a list or None.")
if not all(isinstance(annotation, Annotation) for annotation in self._annotations):
raise ValueError("All elements in self._annotations must be of type Annotation.")
self._index = 0
@property
def data(self) -> np.ndarray:
"""
:return: Recording data, as a complex array.
:type: np.ndarray
.. note::
For recordings with more than 1,024 samples, this property returns a read-only view of the data.
.. note::
To access specific samples, consider indexing the object directly with ``rec[c, n]``.
"""
if self._data.size > 1024:
# Returning a read-only view prevents mutation at a distance while maintaining performance.
v = self._data.view()
v.setflags(write=False)
return v
else:
return self._data.copy()
@property
def metadata(self) -> dict:
"""
:return: Dictionary of recording metadata.
:type: dict
"""
return self._metadata.copy()
@property
def annotations(self) -> list[Annotation]:
"""
:return: List of recording annotations
:type: list of Annotation objects
"""
return self._annotations.copy()
@property
def shape(self) -> tuple[int]:
"""
:return: The shape of the data array.
:type: tuple of ints
"""
return np.shape(self.data)
@property
def n_chan(self) -> int:
"""
:return: The number of channels in the recording.
:type: int
"""
return self.shape[0]
@property
def rec_id(self) -> str:
"""
:return: Recording ID.
:type: str
"""
return self.metadata["rec_id"]
@property
def dtype(self) -> np.dtype:
"""
:return: Data-type of the data array's elements.
:type: numpy dtype object
"""
return self.data.dtype
@property
def timestamp(self) -> float | int:
"""
:return: Recording timestamp (time in seconds since epoch).
:type: float or int
"""
return self.metadata["timestamp"]
@property
def sample_rate(self) -> float | None:
"""
:return: Sample rate of the recording, or None if 'sample_rate' is not in metadata.
:type: float or None
"""
return self.metadata.get("sample_rate")
@sample_rate.setter
def sample_rate(self, sample_rate: float | int) -> None:
"""Set the sample rate of the recording.
:param sample_rate: The sample rate of the recording.
:type sample_rate: float or int
:return: None
"""
self.add_to_metadata(key="sample_rate", value=sample_rate)
def astype(self, dtype: np.dtype) -> Recording:
"""Copy of the recording, data cast to a specified type.
.. todo: This method is not yet implemented.
:param dtype: Data-type to which the array is cast. Must be a complex scalar type, such as ``np.complex64`` or
``np.complex128``.
:type dtype: NumPy data type, optional
.. note: Casting to a data type with less precision can risk losing data by truncating or rounding values,
potentially resulting in a loss of accuracy and significant information.
:return: A new recording with the same metadata and data, with dtype.
**Examples:**
.. todo::
Usage examples coming soon!
"""
# Rather than check for a valid datatype, let's cast and check the result. This makes it easier to provide
# cross-platform support where the types are aliased across platforms.
with warnings.catch_warnings():
warnings.simplefilter("ignore") # Casting may generate user warnings. E.g., complex -> real
data = self.data.astype(dtype)
if np.iscomplexobj(data):
return Recording(data=data, metadata=self.metadata, annotations=self.annotations)
else:
raise ValueError("dtype must be a complex number scalar type.")
def add_to_metadata(self, key: str, value: Any) -> None:
"""Add a new key-value pair to the recording metadata.
:param key: New metadata key, must be snake_case.
:type key: str
:param value: Corresponding metadata value.
:type value: any
:raises ValueError: If key is already in metadata or if key is not a valid metadata key.
:raises ValueError: If value is not JSON serializable.
:return: None.
**Examples:**
Create a recording and add metadata:
>>> import numpy
>>> from ria_toolkit_oss.datatypes import Recording
>>>
>>> samples = numpy.ones(10000, dtype=numpy.complex64)
>>> metadata = {
>>> "sample_rate": 1e6,
>>> "center_frequency": 2.44e9,
>>> }
>>>
>>> recording = Recording(data=samples, metadata=metadata)
>>> print(recording.metadata)
{'sample_rate': 1000000.0,
'center_frequency': 2440000000.0,
'timestamp': 17369...,
'rec_id': 'fda0f41...'}
>>>
>>> recording.add_to_metadata(key="author", value="me")
>>> print(recording.metadata)
{'sample_rate': 1000000.0,
'center_frequency': 2440000000.0,
'author': 'me',
'timestamp': 17369...,
'rec_id': 'fda0f41...'}
"""
if key in self.metadata:
raise ValueError(
f"Key {key} already in metadata. Use Recording.update_metadata() to modify existing fields."
)
if not _is_valid_metadata_key(key):
raise ValueError(f"Invalid metadata key: {key}.")
if not _is_jsonable(value):
raise ValueError("Value must be JSON serializable.")
self._metadata[key] = value
def update_metadata(self, key: str, value: Any) -> None:
"""Update the value of an existing metadata key,
or add the key value pair if it does not already exist.
:param key: Existing metadata key.
:type key: str
:param value: New value to enter at key.
:type value: any
:raises ValueError: If value is not JSON serializable
:raises ValueError: If key is protected.
:return: None.
**Examples:**
Create a recording and update metadata:
>>> import numpy
>>> from ria_toolkit_oss.datatypes import Recording
>>> samples = numpy.ones(10000, dtype=numpy.complex64)
>>> metadata = {
>>> "sample_rate": 1e6,
>>> "center_frequency": 2.44e9,
>>> "author": "me"
>>> }
>>> recording = Recording(data=samples, metadata=metadata)
>>> print(recording.metadata)
{'sample_rate': 1000000.0,
'center_frequency': 2440000000.0,
'author': 'me',
'timestamp': 17369...,
'rec_id': 'fda0f41...'}
>>> recording.update_metadata(key="author", value="you")
>>> print(recording.metadata)
{'sample_rate': 1000000.0,
'center_frequency': 2440000000.0,
'author': 'you',
'timestamp': 17369...,
'rec_id': 'fda0f41...'}
"""
if key not in self.metadata:
self.add_to_metadata(key=key, value=value)
return
if not _is_jsonable(value):
raise ValueError("Value must be JSON serializable.")
if key in PROTECTED_KEYS: # Check protected keys.
raise ValueError(f"Key {key} is protected and cannot be modified or removed.")
else:
self._metadata[key] = value
def remove_from_metadata(self, key: str):
"""
Remove a key from the recording metadata.
Does not remove key if it is protected.
:param key: The key to remove.
:type key: str
:raises ValueError: If key is protected.
:return: None.
**Examples:**
Create a recording and remove a metadata key:
>>> import numpy
>>> from ria_toolkit_oss.datatypes import Recording
>>> samples = numpy.ones(10000, dtype=numpy.complex64)
>>> metadata = {
... "sample_rate": 1e6,
... "center_frequency": 2.44e9,
... "author": "me",
... }
>>> recording = Recording(data=samples, metadata=metadata)
>>> print(recording.metadata)
{'sample_rate': 1000000.0,
'center_frequency': 2440000000.0,
'author': 'me',
'timestamp': 17369..., # Example value
'rec_id': 'fda0f41...'} # Example value
>>> recording.remove_from_metadata(key="author")
>>> print(recording.metadata)
{'sample_rate': 1000000.0,
'center_frequency': 2440000000.0,
'timestamp': 17369..., # Example value
'rec_id': 'fda0f41...'} # Example value
"""
if key not in PROTECTED_KEYS:
self._metadata.pop(key, None)
else:
raise ValueError(f"Key {key} is protected and cannot be modified or removed.")
def view(self, output_path: Optional[str] = "images/signal.png", **kwargs) -> None:
"""Create a plot of various signal visualizations as a PNG image.
:param output_path: The output image path. Defaults to "images/signal.png".
:type output_path: str, optional
:param kwargs: Keyword arguments passed on to ria_toolkit_oss.view.view_signal.view_sig.
:type: dict of keyword arguments
**Examples:**
Create a recording and view it as a plot in a .png image:
>>> import numpy
>>> from ria_toolkit_oss.datatypes import Recording
>>> samples = numpy.ones(10000, dtype=numpy.complex64)
>>> metadata = {
... "sample_rate": 1e6,
... "center_frequency": 2.44e9,
... }
>>> recording = Recording(data=samples, metadata=metadata)
>>> recording.view()
"""
from ria_toolkit_oss.view.view_signal import view_sig
view_sig(recording=self, output_path=output_path, **kwargs)
def simple_view(self, **kwargs) -> None:
"""Create a plot of various signal visualizations as a PNG or SVG image.
:param kwargs: Keyword arguments passed on to ria_toolkit_oss.view.view_signal_simple.view_simple_sig.
:type: dict of keyword arguments
**Examples:**
Create a recording and view it as a plot in a .png image:
>>> import numpy
>>> from ria_toolkit_oss.datatypes import Recording
>>> samples = numpy.ones(10000, dtype=numpy.complex64)
>>> metadata = {
... "sample_rate": 1e6,
... "center_frequency": 2.44e9,
... }
>>> recording = Recording(data=samples, metadata=metadata)
>>> recording.simple_view()
"""
from ria_toolkit_oss.view.view_signal_simple import view_simple_sig
view_simple_sig(recording=self, **kwargs)
def to_sigmf(
self, filename: Optional[str] = None, path: Optional[os.PathLike | str] = None, overwrite: bool = False
) -> None:
"""Write recording to a set of SigMF files.
The SigMF io format is defined by the `SigMF Specification Project <https://github.com/sigmf/SigMF>`_
:param filename: The name of the file where the recording is to be saved. Defaults to auto generated filename.
:type filename: os.PathLike or str, optional
:param path: The directory path to where the recording is to be saved. Defaults to recordings/.
:type path: os.PathLike or str, optional
:param overwrite: Whether to overwrite existing files. Default is False.
:type overwrite: bool, optional
:raises IOError: If there is an issue encountered during the file writing process.
:return: None
"""
from ria_toolkit_oss.io.recording import to_sigmf
to_sigmf(filename=filename, path=path, recording=self, overwrite=overwrite)
def to_npy(
self, filename: Optional[str] = None, path: Optional[os.PathLike | str] = None, overwrite: bool = False
) -> str:
"""Write recording to ``.npy`` binary file.
:param filename: The name of the file where the recording is to be saved. Defaults to auto generated filename.
:type filename: os.PathLike or str, optional
:param path: The directory path to where the recording is to be saved. Defaults to recordings/.
:type path: os.PathLike or str, optional
:raises IOError: If there is an issue encountered during the file writing process.
:return: Path where the file was saved.
:rtype: str
**Examples:**
Create a recording and save it to a .npy file:
>>> import numpy
>>> from ria_toolkit_oss.datatypes import Recording
>>> samples = numpy.ones(10000, dtype=numpy.complex64)
>>> metadata = {
... "sample_rate": 1e6,
... "center_frequency": 2.44e9,
... }
>>> recording = Recording(data=samples, metadata=metadata)
>>> recording.to_npy()
"""
from ria_toolkit_oss.io.recording import to_npy
return to_npy(recording=self, filename=filename, path=path, overwrite=overwrite)
def to_wav(
self,
filename: Optional[str] = None,
path: Optional[os.PathLike | str] = None,
target_sample_rate: Optional[int] = 48000,
bits_per_sample: int = 32,
overwrite: bool = False,
) -> str:
"""Write recording to WAV file with embedded YAML metadata.
WAV format uses stereo audio with I (in-phase) in left channel and Q (quadrature) in right channel.
Metadata is stored in standard LIST INFO chunks with RF-specific metadata encoded as YAML
in the ICMT (comment) field for human readability.
:param filename: The name of the file where the recording is to be saved. Defaults to auto generated filename.
:type filename: os.PathLike or str, optional
:param path: The directory path to where the recording is to be saved. Defaults to recordings/.
:type path: os.PathLike or str, optional
:param target_sample_rate: Sample rate stored in the WAV header when no sample_rate metadata
is present. IQ samples are written without decimation or interpolation. Default is 48000 Hz.
:type target_sample_rate: int, optional
:param bits_per_sample: Bits per sample (32 for float32, 16 for int16). Default is 32.
:type bits_per_sample: int, optional
:param overwrite: Whether to overwrite existing files. Default is False.
:type overwrite: bool, optional
:raises IOError: If there is an issue encountered during the file writing process.
:return: Path where the file was saved.
:rtype: str
**Examples:**
Create a recording and save it to a .wav file:
>>> import numpy
>>> from ria_toolkit_oss.datatypes import Recording
>>> samples = numpy.exp(1j * 2 * numpy.pi * 0.1 * numpy.arange(10000))
>>> metadata = {"sample_rate": 1e6, "center_frequency": 915e6}
>>> recording = Recording(data=samples, metadata=metadata)
>>> recording.to_wav()
"""
from ria_toolkit_oss.io.recording import to_wav
return to_wav(
recording=self,
filename=filename,
path=path,
target_sample_rate=target_sample_rate,
bits_per_sample=bits_per_sample,
overwrite=overwrite,
)
def to_blue(
self,
filename: Optional[str] = None,
path: Optional[os.PathLike | str] = None,
data_format: str = "CI",
overwrite: bool = False,
) -> str:
"""Write recording to MIDAS Blue file format.
MIDAS Blue is a legacy RF file format with a 512-byte binary header.
Commonly used with X-Midas and other RF/radar signal processing tools.
:param filename: The name of the file where the recording is to be saved. Defaults to auto generated filename.
:type filename: os.PathLike or str, optional
:param path: The directory path to where the recording is to be saved. Defaults to recordings/.
:type path: os.PathLike or str, optional
:param data_format: Format code (default 'CI' = complex int16).
Common formats: 'CI' (complex int16), 'CF' (complex float32), 'CD' (complex float64).
Integer formats require the IQ samples to already be scaled within [-1, 1).
:type data_format: str, optional
:param overwrite: Whether to overwrite existing files. Default is False.
:type overwrite: bool, optional
:raises IOError: If there is an issue encountered during the file writing process.
:return: Path where the file was saved.
:rtype: str
**Examples:**
Create a recording and save it to a .blue file:
>>> import numpy
>>> from ria_toolkit_oss.datatypes import Recording
>>> samples = numpy.ones(10000, dtype=numpy.complex64)
>>> metadata = {"sample_rate": 1e6, "center_frequency": 2.44e9}
>>> recording = Recording(data=samples, metadata=metadata)
>>> recording.to_blue()
"""
from ria_toolkit_oss.io.recording import to_blue
return to_blue(recording=self, filename=filename, path=path, data_format=data_format, overwrite=overwrite)
def trim(self, num_samples: int, start_sample: Optional[int] = 0) -> Recording:
"""Trim Recording samples to a desired length, shifting annotations to maintain alignment.
:param start_sample: The start index of the desired trimmed recording. Defaults to 0.
:type start_sample: int, optional
:param num_samples: The number of samples that the output trimmed recording will have.
:type num_samples: int
:raises IndexError: If start_sample + num_samples is greater than the length of the recording.
:raises IndexError: If start_sample < 0 or num_samples < 0.
:return: The trimmed Recording.
:rtype: Recording
**Examples:**
Create a recording and trim it:
>>> import numpy
>>> from ria_toolkit_oss.datatypes import Recording
>>> samples = numpy.ones(10000, dtype=numpy.complex64)
>>> metadata = {
... "sample_rate": 1e6,
... "center_frequency": 2.44e9,
... }
>>> recording = Recording(data=samples, metadata=metadata)
>>> print(len(recording))
10000
>>> trimmed_recording = recording.trim(start_sample=1000, num_samples=1000)
>>> print(len(trimmed_recording))
1000
"""
if start_sample < 0:
raise IndexError("start_sample cannot be < 0.")
elif start_sample + num_samples > len(self):
raise IndexError(
f"start_sample {start_sample} + num_samples {num_samples} > recording length {len(self)}."
)
end_sample = start_sample + num_samples
data = self.data[:, start_sample:end_sample]
new_annotations = copy.deepcopy(self.annotations)
trimmed_annotations = []
for annotation in new_annotations:
# skip annotations entirely outside the trim window
if annotation.sample_start + annotation.sample_count <= start_sample:
continue
if annotation.sample_start >= end_sample:
continue
# trim annotation if it goes outside the trim boundaries
if annotation.sample_start < start_sample:
annotation.sample_count = annotation.sample_count - (start_sample - annotation.sample_start)
annotation.sample_start = start_sample
if annotation.sample_start + annotation.sample_count > end_sample:
annotation.sample_count = end_sample - annotation.sample_start
# shift annotation to align with the new start point
annotation.sample_start = annotation.sample_start - start_sample
trimmed_annotations.append(annotation)
return Recording(data=data, metadata=self.metadata, annotations=trimmed_annotations)
def normalize(self) -> Recording:
"""Scale the recording data, relative to its maximum value, so that the magnitude of the maximum sample is 1.
:return: Recording where the maximum sample amplitude is 1.
:rtype: Recording
**Examples:**
Create a recording with maximum amplitude 0.5 and normalize to a maximum amplitude of 1:
>>> import numpy
>>> from ria_toolkit_oss.datatypes import Recording
>>> samples = numpy.ones(10000, dtype=numpy.complex64) * 0.5
>>> metadata = {
... "sample_rate": 1e6,
... "center_frequency": 2.44e9,
... }
>>> recording = Recording(data=samples, metadata=metadata)
>>> print(numpy.max(numpy.abs(recording.data)))
0.5
>>> normalized_recording = recording.normalize()
>>> print(numpy.max(numpy.abs(normalized_recording.data)))
1.0
"""
max_val = np.max(abs(self.data))
if max_val == 0:
raise ValueError("Cannot normalize a recording with all-zero data.")
scaled_data = self.data / max_val
return Recording(data=scaled_data, metadata=self.metadata, annotations=self.annotations)
def __len__(self) -> int:
"""The length of a recording is defined by the number of complex samples in each channel of the recording."""
return self.shape[1]
def __eq__(self, other: Recording) -> bool:
"""Two Recordings are equal if all data, metadata, and annotations are the same."""
return (
np.array_equal(self.data, other.data)
and self.metadata == other.metadata
and self.annotations == other.annotations
)
def __ne__(self, other: Recording) -> bool:
"""Two Recordings are equal if all data, and metadata, and annotations are the same."""
return not self.__eq__(other=other)
def __iter__(self) -> Iterator:
self._index = 0
return self
def __next__(self) -> np.ndarray:
if self._index < self.n_chan:
to_ret = self.data[self._index]
self._index += 1
return to_ret
else:
raise StopIteration
def __getitem__(self, key: int | tuple[int] | slice) -> np.ndarray | np.complexfloating:
"""If key is an integer, tuple of integers, or a slice, return the corresponding samples.
For arrays with 1,024 or fewer samples, return a copy of the recording data. For larger arrays, return a
read-only view. This prevents mutation at a distance while maintaining performance.
"""
if isinstance(key, (int, tuple, slice)):
v = self._data[key]
if isinstance(v, np.complexfloating):
return v
elif v.size > 1024:
v.setflags(write=False) # Make view read-only.
return v
else:
return v.copy()
else:
raise ValueError(f"Key must be an integer, tuple, or slice but was {type(key)}.")
def __setitem__(self, *args, **kwargs) -> None:
"""Raise an error if an attempt is made to assign to the recording."""
raise ValueError("Assignment to Recording is not allowed.")
def generate_recording_id(data: np.ndarray, timestamp: Optional[float | int] = None) -> str:
"""Generate unique 64-character recording ID. The recording ID is generated by hashing the recording data with
the datetime that the recording data was generated. If no datatime is provided, the current datatime is used.
:param data: Tape of IQ samples, as a NumPy array.
:type data: np.ndarray
:param timestamp: Unix timestamp in seconds. Defaults to None.
:type timestamp: float or int, optional
:return: 64-character hexadecimal SHA-256 digest, to be used as the recording ID.
:rtype: str
"""
if timestamp is None:
timestamp = time.time()
byte_sequence = data.tobytes() + str(timestamp).encode("utf-8")
sha256_hash = hashlib.sha256(byte_sequence)
return sha256_hash.hexdigest()
def _is_jsonable(x: Any) -> bool:
"""
:return: True if x is JSON serializable, False otherwise.
"""
try:
json.dumps(x)
return True
except (TypeError, OverflowError):
return False
def _is_valid_metadata_key(key: Any) -> bool:
"""
:return: True if key is a valid metadata key, False otherwise.
"""
if isinstance(key, str) and key.islower() and re.match(pattern=r"^[a-z_]+$", string=key) is not None:
return True
else:
return False
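The annotation clipping performed by Recording.trim can be sketched as a standalone function. This simplified version (hypothetical, not part of the package) operates on plain (sample_start, sample_count) pairs; real annotations also carry frequency edges and labels:

```python
def trim_annotations(annotations, start_sample, num_samples):
    """Clip and shift (sample_start, sample_count) pairs into a trim window."""
    end_sample = start_sample + num_samples
    trimmed = []
    for start, count in annotations:
        # Skip annotations entirely outside the trim window.
        if start + count <= start_sample or start >= end_sample:
            continue
        # Clip annotations that straddle a window boundary.
        if start < start_sample:
            count -= start_sample - start
            start = start_sample
        if start + count > end_sample:
            count = end_sample - start
        # Shift so the trimmed recording starts at sample 0.
        trimmed.append((start - start_sample, count))
    return trimmed

print(trim_annotations([(0, 1000), (500, 1000), (5000, 100)], 1000, 1000))
# -> [(0, 500)]
```

The first annotation ends exactly at the window start and is dropped; the second is clipped to the window and shifted; the third lies past the window end.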


@@ -1,5 +1,5 @@
 """
-Utilities for input/output operations on the ria_toolkit_oss.datatypes.Recording object.
+Utilities for input/output operations on the ria_toolkit_oss.data.Recording object.
 """
 import datetime
@@ -19,8 +19,8 @@ from quantiphy import Quantity
 from sigmf import SigMFFile, sigmffile
 from sigmf.utils import get_data_type_str
-from ria_toolkit_oss.datatypes import Annotation
-from ria_toolkit_oss.datatypes.recording import Recording
+from ria_toolkit_oss.data import Annotation
+from ria_toolkit_oss.data.recording import Recording
 _BLUE_META_PREFIX = "META_"
 _BLUE_META_TAG_MAX_LEN = 60
@@ -64,7 +64,7 @@ def to_npy(
 """Write recording to ``.npy`` binary file.
 :param recording: The recording to be written to file.
-:type recording: ria_toolkit_oss.datatypes.Recording
+:type recording: ria_toolkit_oss.data.Recording
 :param filename: The name of the file where the recording is to be saved. Defaults to auto generated filename.
 :type filename: os.PathLike or str, optional
 :param path: The directory path to where the recording is to be saved. Defaults to recordings/.
@@ -135,7 +135,7 @@ def from_npy(file: os.PathLike | str, legacy: bool = False) -> Recording:
 :raises IOError: If there is an issue encountered during the file reading process.
 :return: The recording, as initialized from the ``.npy`` file.
-:rtype: ria_toolkit_oss.datatypes.Recording
+:rtype: ria_toolkit_oss.data.Recording
 """
 filename, extension = os.path.splitext(file)
@@ -161,7 +161,7 @@ def from_npy(file: os.PathLike | str, legacy: bool = False) -> Recording:
 try:
 raw_ann = np.load(f, allow_pickle=False)
 ann_list = json.loads(raw_ann.tobytes().decode())
-from ria_toolkit_oss.datatypes.annotation import Annotation
+from ria_toolkit_oss.data.annotation import Annotation
 annotations = [Annotation(**a) for a in ann_list]
 except EOFError:
@@ -198,7 +198,7 @@ def from_npy_legacy(file: os.PathLike | str) -> Recording:
 :raises IOError: If there is an issue encountered during the file reading process.
 :return: The recording, as initialized from the legacy ``.npy`` file.
-:rtype: ria_toolkit_oss.datatypes.Recording
+:rtype: ria_toolkit_oss.data.Recording
 **Examples:**
@@ -270,7 +270,7 @@ def to_sigmf(
 The SigMF io format is defined by the `SigMF Specification Project <https://github.com/sigmf/SigMF>`_
 :param recording: The recording to be written to file.
-:type recording: ria_toolkit_oss.datatypes.Recording
+:type recording: ria_toolkit_oss.data.Recording
 :param filename: The name of the file where the recording is to be saved. Defaults to auto generated filename.
 :type filename: os.PathLike or str, optional
 :param path: The directory path to where the recording is to be saved. Defaults to recordings/.
@@ -381,7 +381,7 @@ def from_sigmf(file: os.PathLike | str) -> Recording:
 :raises IOError: If there is an issue encountered during the file reading process.
 :return: The recording, as initialized from the SigMF files.
-:rtype: ria_toolkit_oss.datatypes.Recording
+:rtype: ria_toolkit_oss.data.Recording
 """
 file = str(file)
@@ -443,7 +443,7 @@ def to_wav(
 in the ICMT (comment) field for human readability.
 :param recording: The recording to be written to file.
-:type recording: ria_toolkit_oss.datatypes.Recording
+:type recording: ria_toolkit_oss.data.Recording
 :param filename: The name of the file where the recording is to be saved.
 Defaults to auto-generated filename.
 :type filename: str, optional
@@ -553,7 +553,7 @@ def from_wav(file: os.PathLike | str) -> Recording:
 :raises ValueError: If file is not stereo or has unsupported format.
 :return: The recording, as initialized from the WAV file.
-:rtype: ria_toolkit_oss.datatypes.Recording
+:rtype: ria_toolkit_oss.data.Recording
 """
 import wave
@@ -635,7 +635,7 @@ def to_blue(
 Commonly used with X-Midas and other RF/radar signal processing tools.
 :param recording: The recording to be written to file.
-:type recording: ria_toolkit_oss.datatypes.Recording
+:type recording: ria_toolkit_oss.data.Recording
 :param filename: The name of the file where the recording is to be saved.
 Defaults to auto-generated filename.
 :type filename: str, optional
@@ -792,7 +792,7 @@ def from_blue(file: os.PathLike | str) -> Recording:
 :raises ValueError: If file format is not valid or unsupported.
 :return: The recording, as initialized from the Blue file.
-:rtype: ria_toolkit_oss.datatypes.Recording
+:rtype: ria_toolkit_oss.data.Recording
 """
 filename = str(file)
 if not filename.endswith(".blue"):
@@ -917,7 +917,7 @@ def load_recording(file: os.PathLike) -> Recording:
 :raises ValueError: If the inferred file extension is not supported.
 :return: The recording, as initialized from file(s).
-:rtype: ria_toolkit_oss.datatypes.Recording
+:rtype: ria_toolkit_oss.data.Recording
 """
 _, extension = os.path.splitext(file)
 extension = extension.lstrip(".")

View File

@@ -233,6 +233,9 @@ class TransmitterConfig:
     # For sdr_remote control — keys: host, ssh_user, ssh_key_path, device_type, device_id, zmq_port
     sdr_remote: Optional[dict] = None

+    # For sdr_agent control — keys: modulation, order, symbol_rate, center_frequency, filter, rolloff
+    sdr_agent: Optional[dict] = None
+
     @classmethod
     def from_dict(cls, d: dict) -> "TransmitterConfig":
         schedule = [CaptureStep.from_dict(s) for s in d.get("schedule", [])]
@@ -244,6 +247,7 @@
             script=d.get("script"),
             device=d.get("device"),
             sdr_remote=d.get("sdr_remote"),
+            sdr_agent=d.get("sdr_agent"),
         )
@@ -272,6 +276,7 @@ class OutputConfig:
     path: str = "recordings"
     device_id: Optional[str] = None  # for device-profile campaigns
     repo: Optional[str] = None
+    folder: Optional[str] = None  # repo subfolder: None = use campaign name, "" = no subfolder, str = custom

     @classmethod
     def from_dict(cls, d: dict) -> "OutputConfig":
@@ -280,6 +285,7 @@
             path=str(d.get("path", "recordings")),
             device_id=d.get("device_id"),
             repo=d.get("repo"),
+            folder=d.get("folder"),
         )
@@ -293,6 +299,7 @@ class CampaignConfig:
     qa: QAConfig = field(default_factory=QAConfig)
     output: OutputConfig = field(default_factory=OutputConfig)
     mode: str = "controlled_testbed"
+    loops: int = 1  # repeat full schedule this many times; labels get _run{N:02d} suffix

     # ---------------------------------------------------------------------------
     # Loaders
@@ -320,6 +327,7 @@
         return cls(
             name=safe_name,
             mode=str(campaign_meta.get("mode", "controlled_testbed")),
+            loops=max(1, int(campaign_meta.get("loops", 1))),
             recorder=RecorderConfig.from_dict(raw["recorder"]),
             transmitters=transmitters,
             qa=QAConfig.from_dict(raw.get("qa", {})),
@@ -384,6 +392,7 @@
         return cls(
             name=safe_name,
             mode=str(campaign_meta.get("mode", "controlled_testbed")),
+            loops=max(1, int(campaign_meta.get("loops", 1))),
             recorder=RecorderConfig.from_dict(raw["recorder"]),
             transmitters=transmitters,
             qa=QAConfig.from_dict(raw.get("qa", {})),
@@ -486,9 +495,9 @@
         )

     def total_capture_time_s(self) -> float:
-        """Sum of all step durations across all transmitters."""
-        return sum(step.duration for tx in self.transmitters for step in tx.schedule)
+        """Sum of all step durations across all transmitters and loops."""
+        return sum(step.duration for tx in self.transmitters for step in tx.schedule) * self.loops

     def total_steps(self) -> int:
-        """Total number of capture steps across all transmitters."""
-        return sum(len(tx.schedule) for tx in self.transmitters)
+        """Total number of capture steps across all transmitters and loops."""
+        return sum(len(tx.schedule) for tx in self.transmitters) * self.loops
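The `loops` field multiplies both totals and gives each repeated step label a zero-padded `_run{N:02d}` suffix. A minimal standalone sketch of that arithmetic, with hypothetical numbers and no toolkit imports:

```python
# Hypothetical campaign: 2 transmitters x 2 steps x 5 s each, repeated 3 times.
schedules = {"tx-a": [5.0, 5.0], "tx-b": [5.0, 5.0]}  # step durations per transmitter
loops = 3

# Same scaling rule as total_steps() / total_capture_time_s() above.
total_steps = sum(len(s) for s in schedules.values()) * loops
total_capture_time_s = sum(d for s in schedules.values() for d in s) * loops

# Same label suffix rule as the _run{N:02d} convention.
labels = [f"step1_run{i + 1:02d}" for i in range(loops)]

print(total_steps)           # 12
print(total_capture_time_s)  # 60.0
print(labels[0])             # step1_run01
```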

View File

@@ -5,17 +5,19 @@ from __future__ import annotations

 import json
 import logging
 import subprocess
+import threading
 import time
-from dataclasses import dataclass, field
+from dataclasses import dataclass, field, replace
 from pathlib import Path
 from typing import Callable, Optional

-from ria_toolkit_oss.datatypes.recording import Recording
+from ria_toolkit_oss.data.recording import Recording
 from ria_toolkit_oss.io.recording import to_sigmf

 from .campaign import CampaignConfig, CaptureStep, TransmitterConfig
 from .labeler import build_output_filename, label_recording
 from .qa import QAResult, check_recording
+from .tx_executor import TxExecutor

 logger = logging.getLogger(__name__)
@@ -169,6 +171,21 @@ def _run_script(script: str, *args: str, timeout: float = 15.0) -> str:

 # ---------------------------------------------------------------------------

+def _extract_tx_params(transmitter: TransmitterConfig) -> dict | None:
+    """Build a tx_params dict from a transmitter's signal config for SigMF labeling.
+
+    For sdr_agent transmitters, returns the synthetic generation parameters
+    (modulation, order, symbol_rate, etc.) so recordings capture what was
+    transmitted. Returns None for control methods without signal-level params.
+    """
+    sdr_agent_cfg = getattr(transmitter, "sdr_agent", None)
+    if not sdr_agent_cfg:
+        return None
+    # Extract known signal-level fields; ignore infra fields
+    _INFRA_KEYS = {"node_id", "session_code"}
+    return {k: v for k, v in sdr_agent_cfg.items() if k not in _INFRA_KEYS and v is not None}
+
+
 class CampaignExecutor:
     """Executes a :class:`CampaignConfig` end-to-end.
@@ -192,11 +209,14 @@
         config: CampaignConfig,
         progress_cb: Optional[Callable[[int, int, StepResult], None]] = None,
         verbose: bool = False,
+        skip_local_tx: bool = False,
     ):
         self.config = config
         self.progress_cb = progress_cb
+        self.skip_local_tx = skip_local_tx
         self._sdr = None
         self._remote_tx_controllers: dict = {}
+        self._tx_executors: dict[str, tuple] = {}  # tx_id → (TxExecutor, stop_event, thread)

         if verbose:
             logging.basicConfig(level=logging.DEBUG)
@@ -216,10 +236,12 @@
         """
         result = CampaignResult(campaign_name=self.config.name)
+        loops = self.config.loops

         logger.info(
             f"Starting campaign '{self.config.name}': "
-            f"{self.config.total_steps()} steps, "
-            f"~{self.config.total_capture_time_s():.0f}s capture time"
+            f"{self.config.total_steps()} steps"
+            + (f" ({self.config.total_steps() // loops} × {loops} loops)" if loops > 1 else "")
+            + f", ~{self.config.total_capture_time_s():.0f}s capture time"
         )

         self._init_sdr()
@@ -228,10 +250,14 @@
             total = self.config.total_steps()
             step_index = 0

-            for transmitter in self.config.transmitters:
-                logger.info(f"Transmitter: {transmitter.id} ({len(transmitter.schedule)} steps)")
-                for step in transmitter.schedule:
-                    step_result = self._execute_step(transmitter, step)
-                    result.steps.append(step_result)
-                    step_index += 1
+            for loop_idx in range(loops):
+                if loops > 1:
+                    logger.info(f"Loop {loop_idx + 1}/{loops}")
+                for transmitter in self.config.transmitters:
+                    logger.info(f"Transmitter: {transmitter.id} ({len(transmitter.schedule)} steps)")
+                    for step in transmitter.schedule:
+                        looped_step = replace(step, label=f"{step.label}_run{loop_idx + 1:02d}") if loops > 1 else step
+                        step_result = self._execute_step(transmitter, looped_step)
+                        result.steps.append(step_result)
+                        step_index += 1
@@ -239,18 +265,21 @@
                             self.progress_cb(step_index, total, step_result)

                         if step_result.error:
-                            logger.warning(f"Step '{step.label}' error: {step_result.error}")
+                            logger.warning(f"Step '{looped_step.label}' error: {step_result.error}")
                         elif step_result.qa.flagged:
-                            logger.warning(f"Step '{step.label}' flagged for review: " + "; ".join(step_result.qa.issues))
+                            logger.warning(
+                                f"Step '{looped_step.label}' flagged for review: " + "; ".join(step_result.qa.issues)
+                            )
                         else:
                             logger.info(
-                                f"Step '{step.label}' OK "
+                                f"Step '{looped_step.label}' OK "
                                 f"(SNR {step_result.qa.snr_db:.1f} dB, "
                                 f"{step_result.qa.duration_s:.1f}s)"
                             )
         finally:
             self._close_sdr()
             self._close_remote_tx_controllers()
+            self._close_tx_executors()
         result.end_time = time.time()
         logger.info(
@@ -325,6 +354,12 @@
                 logger.warning(f"Error closing remote Tx controller {tx_id}: {exc}")
         self._remote_tx_controllers.clear()

+    def _close_tx_executors(self) -> None:
+        for tx_id, (_, stop_event, t) in list(self._tx_executors.items()):
+            stop_event.set()
+            t.join(timeout=5.0)
+        self._tx_executors.clear()
+
     def _record(self, duration_s: float) -> Recording:
         """Capture ``duration_s`` seconds of IQ samples."""
         num_samples = int(duration_s * self.config.recorder.sample_rate)
@@ -369,6 +404,7 @@
             step=step,
             capture_timestamp=capture_timestamp,
             campaign_name=self.config.name,
+            tx_params=_extract_tx_params(transmitter),
         )

         # QA
@@ -437,6 +473,30 @@
             # Start transmission in background; _record() runs concurrently
             ctrl.transmit_async(step.duration + 1.0)

+        elif transmitter.control_method == "sdr_agent":
+            if self.skip_local_tx:
+                logger.debug(f"skip_local_tx — TX for '{transmitter.id}' delegated to TX agent node")
+                return
+            if not transmitter.sdr_agent:
+                logger.warning(f"Transmitter '{transmitter.id}' has no sdr_agent config — skipping")
+                return
+
+            step_dict: dict = {"label": step.label, "duration": step.duration + 1.0}
+            if step.power_dbm is not None:
+                step_dict["power_dbm"] = step.power_dbm
+            tx_config = {
+                "id": transmitter.id,
+                "sdr_agent": transmitter.sdr_agent,
+                "schedule": [step_dict],
+            }
+
+            rec = self.config.recorder
+            tx_device = transmitter.device or rec.device
+            sdr_device = _DEVICE_ALIASES.get(tx_device.lower(), tx_device.lower())
+
+            stop_event = threading.Event()
+            executor = TxExecutor(tx_config, sdr_device=sdr_device, stop_event=stop_event)
+            t = threading.Thread(target=executor.run, daemon=True, name=f"tx-{transmitter.id}")
+            self._tx_executors[transmitter.id] = (executor, stop_event, t)
+            t.start()
+
         else:
             logger.warning(f"Unknown control method '{transmitter.control_method}' — skipping")
@@ -459,6 +519,13 @@
             if ctrl is not None:
                 ctrl.wait_transmit(timeout=step.duration + 10.0)

+        elif transmitter.control_method == "sdr_agent":
+            entry = self._tx_executors.pop(transmitter.id, None)
+            if entry is not None:
+                _, stop_event, t = entry
+                stop_event.set()
+                t.join(timeout=step.duration + 10.0)
+
     @staticmethod
     def _step_params_json(transmitter: TransmitterConfig, step: CaptureStep) -> str:
         """Serialise step parameters to a JSON string for the control script."""

View File

@@ -4,7 +4,7 @@ from __future__ import annotations

 from typing import Optional

-from ria_toolkit_oss.datatypes.recording import Recording
+from ria_toolkit_oss.data.recording import Recording

 from .campaign import CaptureStep
@@ -15,6 +15,7 @@ def label_recording(
     step: CaptureStep,
     capture_timestamp: float,
     campaign_name: Optional[str] = None,
+    tx_params: Optional[dict] = None,
 ) -> Recording:
     """Apply device identity and capture configuration labels to a recording's metadata.
@@ -27,6 +28,9 @@
         step: The capture step that was active during this recording.
         capture_timestamp: Unix timestamp (float) of when capture started.
         campaign_name: Optional campaign name for cross-recording reference.
+        tx_params: Optional dict of transmitter signal parameters (e.g. modulation,
+            order, symbol_rate) written as ``ria:tx_<key>`` fields so downstream
+            training pipelines know what was transmitted into the recording.

     Returns:
         The same recording with updated metadata.
@@ -57,6 +61,11 @@
     if step.power_dbm is not None:
         recording.update_metadata("tx_power_dbm", step.power_dbm)

+    # Transmitter signal parameters (e.g. from sdr_agent synthetic generation)
+    if tx_params:
+        for key, value in tx_params.items():
+            recording.update_metadata(f"tx_{key}", value)
+
     return recording
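The tx_params → metadata mapping above can be sketched with a plain dict standing in for the recording's metadata store. `update_metadata` here is a hypothetical stand-in, not the toolkit API:

```python
# Each tx_params key becomes a "tx_<key>" metadata field, mirroring the
# labeling loop above.
metadata: dict = {}

def update_metadata(key: str, value) -> None:
    # Hypothetical stand-in for Recording.update_metadata.
    metadata[key] = value

tx_params = {"modulation": "QPSK", "order": 4, "symbol_rate": 1_000_000}
for key, value in tx_params.items():
    update_metadata(f"tx_{key}", value)

print(sorted(metadata))  # ['tx_modulation', 'tx_order', 'tx_symbol_rate']
```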

View File

@@ -6,7 +6,7 @@ from dataclasses import dataclass, field

 import numpy as np

-from ria_toolkit_oss.datatypes.recording import Recording
+from ria_toolkit_oss.data.recording import Recording

 from .campaign import QAConfig

View File

@@ -0,0 +1,299 @@
+"""TX campaign executor — synthesises and transmits signals via a local SDR.
+
+The TxExecutor receives a transmitter config dict (matching the
+``sdr_agent`` control method's schema) and a step schedule, then for each
+step builds a signal chain with the block generator and transmits it via
+the local SDR device.
+
+Supported modulations (``modulation`` field in config):
+    BPSK, QPSK, 8PSK, 16QAM, 64QAM, 256QAM, FSK, OOK, GMSK, OQPSK
+
+Example config dict (matches CampaignConfig transmitter with
+``control_method: sdr_agent``)::
+
+    {
+        "id": "synthetic-tx",
+        "type": "sdr",
+        "control_method": "sdr_agent",
+        "sdr_agent": {
+            "modulation": "QPSK",
+            "order": 4,
+            "symbol_rate": 1000000,
+            "center_frequency": 0.0,
+            "filter": "rrc",
+            "rolloff": 0.35
+        },
+        "schedule": [
+            {"label": "step1", "duration": 10, "power_dbm": -10}
+        ]
+    }
+"""
+
+from __future__ import annotations
+
+import logging
+import threading
+from typing import Any
+
+logger = logging.getLogger(__name__)
+
+
+def _parse_hz(val: object) -> float:
+    """Parse a frequency value that may be a float (Hz) or a string like '2.45GHz'."""
+    if isinstance(val, (int, float)):
+        return float(val)
+    s = str(val).strip()
+    for suffix, mult in (("GHz", 1e9), ("MHz", 1e6), ("kHz", 1e3), ("Hz", 1.0)):
+        if s.endswith(suffix):
+            return float(s[: -len(suffix)]) * mult
+    return float(s)
+
+
+def _parse_seconds(val: object) -> float:
+    """Parse a duration value that may be a float (seconds) or a string like '5s'."""
+    if isinstance(val, (int, float)):
+        return float(val)
+    s = str(val).strip()
+    return float(s[:-1]) if s.endswith("s") else float(s)
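Restating the two parsers above as a standalone sketch makes their accepted inputs concrete (same logic, renamed without the leading underscore so it runs outside the module):

```python
def parse_hz(val) -> float:
    """Accept a number (Hz) or a suffixed string like '2.45GHz' or '915MHz'."""
    if isinstance(val, (int, float)):
        return float(val)
    s = str(val).strip()
    # Longer suffixes must be checked first: "915kHz" also ends with "Hz".
    for suffix, mult in (("GHz", 1e9), ("MHz", 1e6), ("kHz", 1e3), ("Hz", 1.0)):
        if s.endswith(suffix):
            return float(s[: -len(suffix)]) * mult
    return float(s)

def parse_seconds(val) -> float:
    """Accept a number (seconds) or a string like '5s'."""
    if isinstance(val, (int, float)):
        return float(val)
    s = str(val).strip()
    return float(s[:-1]) if s.endswith("s") else float(s)

print(parse_hz("915MHz"))   # 915000000.0
print(parse_seconds("5s"))  # 5.0
```

Note the design choice in the frequency parser: the suffix tuple is ordered from longest to shortest precisely so that `"kHz"`/`"MHz"`/`"GHz"` are matched before the bare `"Hz"` they all end with.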
+
+
+# Mapping from modulation name → (bits per symbol, constellation family)
+# 'psk' selects the PSK constellation path, 'qam' the QAM path
+_MOD_TABLE: dict[str, tuple[int, str]] = {
+    "BPSK": (1, "psk"),
+    "QPSK": (2, "psk"),
+    "8PSK": (3, "psk"),
+    "16QAM": (4, "qam"),
+    "64QAM": (6, "qam"),
+    "256QAM": (8, "qam"),
+}
+
+_SPECIAL_MODS = {"FSK", "OOK", "GMSK", "OQPSK"}
+
+# usrp-uhd-client's tx_recording() streams 2 000-sample chunks and loops the
+# source buffer for the full tx_time, so only this many samples ever need to
+# be in RAM regardless of step duration or sample rate.
+# 50 000 complex64 samples ≈ 400 kB — enough spectral diversity for looping.
+_SYNTH_BLOCK_SAMPLES = 50_000
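A small sketch of both constants above: the table lookup yields the bits-per-symbol and constellation family later passed to the mapper, and the buffer-size comment's "≈ 400 kB" follows from complex64 being two float32s (8 bytes per sample):

```python
# Mirror of _MOD_TABLE: modulation → (bits per symbol, constellation family).
mod_table = {
    "BPSK": (1, "psk"), "QPSK": (2, "psk"), "8PSK": (3, "psk"),
    "16QAM": (4, "qam"), "64QAM": (6, "qam"), "256QAM": (8, "qam"),
}

bits_per_sym, family = mod_table["64QAM"]
print(bits_per_sym, family)  # 6 qam

# complex64 = 2 x float32 = 8 bytes/sample, so the fixed synthesis buffer is:
block_bytes = 50_000 * 8
print(block_bytes)  # 400000  (~400 kB, independent of step duration)
```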
+
+
+class TxExecutor:
+    """Synthesise and transmit a signal campaign via a local SDR.
+
+    Args:
+        config: Transmitter config dict (must have ``sdr_agent`` sub-dict with
+            modulation params, and ``schedule`` list of step dicts).
+        sdr_device: SDR device name to open in TX mode (e.g. "pluto", "usrp").
+        stop_event: External event that aborts the TX loop mid-step.
+    """
+
+    def __init__(
+        self,
+        config: dict,
+        sdr_device: str = "unknown",
+        stop_event: threading.Event | None = None,
+    ) -> None:
+        self.config = config
+        self.sdr_device = sdr_device
+        self.stop_event = stop_event or threading.Event()
+        self._sdr: Any = None
+
+    def run(self) -> None:
+        """Execute all steps in the schedule, transmitting for each step duration."""
+        agent_cfg: dict = self.config.get("sdr_agent") or {}
+        schedule: list[dict] = self.config.get("schedule") or []
+        if not schedule:
+            logger.warning("TxExecutor: no schedule steps — nothing to transmit")
+            return
+
+        modulation: str = agent_cfg.get("modulation", "QPSK").upper()
+        symbol_rate: float = float(agent_cfg.get("symbol_rate", 1e6))
+        center_freq: float = _parse_hz(agent_cfg.get("center_frequency", 0.0))
+        filter_type: str = agent_cfg.get("filter", "rrc").lower()
+        rolloff: float = float(agent_cfg.get("rolloff", 0.35))
+        loops: int = max(1, int(self.config.get("loops", 1)))
+
+        # Upsampling factor: samples_per_symbol, fixed at 8 for SDR compatibility.
+        sps = 8
+        sample_rate = symbol_rate * sps
+
+        self._init_sdr(sample_rate, center_freq)
+        try:
+            for loop_idx in range(loops):
+                if self.stop_event.is_set():
+                    break
+                if loops > 1:
+                    logger.info("TX loop %d/%d", loop_idx + 1, loops)
+                for step in schedule:
+                    if self.stop_event.is_set():
+                        break
+                    looped_step = (
+                        {**step, "label": f"{step.get('label', 'step')}_run{loop_idx + 1:02d}"} if loops > 1 else step
+                    )
+                    self._execute_step(looped_step, modulation, sps, symbol_rate, filter_type, rolloff)
+        finally:
+            self._close_sdr()
+
+    def _execute_step(
+        self,
+        step: dict,
+        modulation: str,
+        sps: int,
+        symbol_rate: float,
+        filter_type: str,
+        rolloff: float,
+    ) -> None:
+        duration: float = _parse_seconds(step.get("duration", 10.0))
+        label: str = step.get("label", "step")
+        gain: float = float(step.get("power_dbm") or 0.0)
+        sample_rate = symbol_rate * sps
+
+        logger.info(
+            "TX step '%s': %.0f s, %s @ %.3f MHz (sps=%d, filter=%s)",
+            label,
+            duration,
+            modulation,
+            symbol_rate / 1e6,
+            sps,
+            filter_type,
+        )
+
+        num_samples = int(duration * sample_rate)
+        # Synthesise a short representative block. tx_recording() loops this
+        # buffer for the full tx_time using a 2 000-sample streaming callback,
+        # so peak memory is O(_SYNTH_BLOCK_SAMPLES) regardless of duration.
+        block_size = min(num_samples, _SYNTH_BLOCK_SAMPLES)
+        signal = self._synthesise(modulation, sps, block_size, filter_type, rolloff)
+
+        if self._sdr is not None:
+            try:
+                # Apply gain update if SDR supports it
+                if hasattr(self._sdr, "set_tx_gain"):
+                    self._sdr.set_tx_gain(gain)
+                self._sdr.tx_recording(signal, tx_time=duration)
+            except Exception as exc:
+                logger.error("TX step '%s' SDR error: %s", label, exc)
+        else:
+            # No SDR available — simulate by sleeping for the step duration.
+            logger.warning("TX step '%s': no SDR — simulating %.0f s delay", label, duration)
+            self.stop_event.wait(timeout=duration)
+
+    def _synthesise(
+        self,
+        modulation: str,
+        sps: int,
+        num_samples: int,
+        filter_type: str,
+        rolloff: float,
+    ):
+        """Build a block-generator chain and return IQ samples as a numpy array."""
+        try:
+            import numpy as np
+
+            from ria_toolkit_oss.signal.block_generator import (
+                BinarySource,
+                GMSKModulator,
+                Mapper,
+                OOKModulator,
+                OQPSKModulator,
+                RaisedCosineFilter,
+                RootRaisedCosineFilter,
+                Upsampling,
+            )
+            from ria_toolkit_oss.signal.block_generator.continuous_modulation.fsk_modulator import (
+                FSKModulator,
+            )
+        except ImportError as exc:
+            raise RuntimeError(f"ria_toolkit_oss block generator not available: {exc}") from exc
+
+        # ── Special modulations with their own source-connected modulator ──
+        if modulation in ("OOK", "GMSK", "OQPSK"):
+            src = BinarySource()
+            if modulation == "OOK":
+                mod = OOKModulator(src, samples_per_symbol=sps)
+            elif modulation == "GMSK":
+                mod = GMSKModulator(src, samples_per_symbol=sps)
+            else:
+                mod = OQPSKModulator(src, samples_per_symbol=sps)
+            recording = mod.record(num_samples)
+            flat = np.asarray(recording.data).flatten().astype(np.complex64)
+            if len(flat) < num_samples:
+                flat = np.tile(flat, num_samples // len(flat) + 1)
+            return flat[:num_samples]
+
+        if modulation == "FSK":
+            # Treat the block's symbol count as a normalised symbol rate, so the
+            # generated block spans one normalised second at num_samples samples.
+            symbol_rate = num_samples / sps
+            bits_per_sym = 1  # 2-FSK
+            num_bits = max(num_samples // sps, 128) * bits_per_sym
+            bits = BinarySource()((1, num_bits))
+            mod = FSKModulator(
+                num_bits_per_symbol=bits_per_sym,
+                frequency_spacing=symbol_rate * 0.5,
+                symbol_duration=1.0 / max(symbol_rate, 1.0),
+                sampling_frequency=symbol_rate * sps,
+            )
+            flat = np.asarray(mod(bits)).flatten().astype(np.complex64)
+            if len(flat) < num_samples:
+                flat = np.tile(flat, num_samples // len(flat) + 1)
+            return flat[:num_samples]
+
+        # ── PSK / QAM via Mapper → Upsampling → pulse filter ──────────────
+        if modulation not in _MOD_TABLE:
+            logger.warning("Unknown modulation %r — defaulting to QPSK", modulation)
+            modulation = "QPSK"
+        bits_per_sym, gen_type = _MOD_TABLE[modulation]
+        mod_family = "QAM" if gen_type == "qam" else "PSK"
+
+        source = BinarySource()
+        mapper = Mapper(constellation_type=mod_family, num_bits_per_symbol=bits_per_sym)
+        upsampler = Upsampling(factor=sps)
+        mapper.connect_input([source])
+        upsampler.connect_input([mapper])
+
+        if filter_type in ("rrc",):
+            pulse_filter = RootRaisedCosineFilter(span_in_symbols=6, upsampling_factor=sps, beta=rolloff)
+            pulse_filter.connect_input([upsampler])
+            recording = pulse_filter.record(num_samples)
+        elif filter_type in ("rc",):
+            pulse_filter = RaisedCosineFilter(span_in_symbols=6, upsampling_factor=sps, beta=rolloff)
+            pulse_filter.connect_input([upsampler])
+            recording = pulse_filter.record(num_samples)
+        else:
+            # "none", "rect", "gaussian" — use upsampler output directly
+            recording = upsampler.record(num_samples)
+
+        flat = np.asarray(recording.data).flatten().astype(np.complex64)
+        if len(flat) < num_samples:
+            flat = np.tile(flat, num_samples // len(flat) + 1)
+        return flat[:num_samples]
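Each branch of `_synthesise` pads a too-short generator output by tiling, then truncates to the requested length. The following numpy sketch isolates that padding rule (`pad_by_tiling` is a hypothetical name for illustration):

```python
import numpy as np

def pad_by_tiling(flat: np.ndarray, num_samples: int) -> np.ndarray:
    # Same rule as in _synthesise: repeat the block until it covers
    # num_samples, then slice off the excess.
    if len(flat) < num_samples:
        flat = np.tile(flat, num_samples // len(flat) + 1)
    return flat[:num_samples]

block = np.arange(4, dtype=np.complex64)  # stand-in for a short synthesised block
out = pad_by_tiling(block, 10)
print(len(out))  # 10
```

The `num_samples // len(flat) + 1` repeat count guarantees at least `num_samples` samples before the final slice, so the output length is exact even when the block length does not divide it.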
+
+    def _init_sdr(self, sample_rate: float, center_freq: float) -> None:
+        try:
+            from ria_toolkit_oss.sdr import get_sdr_device
+
+            self._sdr = get_sdr_device(self.sdr_device)
+            self._sdr.init_tx(
+                sample_rate=sample_rate,
+                center_frequency=center_freq,
+                gain=0,
+                channel=0,
+                gain_mode="manual",
+            )
+            logger.info(
+                "TX SDR initialised: %s @ %.3f MHz, %.1f Msps", self.sdr_device, center_freq / 1e6, sample_rate / 1e6
+            )
+        except Exception as exc:
+            logger.warning("TX SDR init failed (%s) — will simulate: %s", self.sdr_device, exc)
+            self._sdr = None
+
+    def _close_sdr(self) -> None:
+        if self._sdr is not None:
+            try:
+                self._sdr.close()
+            except Exception as exc:
+                logger.debug("TX SDR close error: %s", exc)
+            self._sdr = None

View File

@@ -5,7 +5,7 @@ from typing import Optional

 import numpy as np
 from bladerf import _bladerf

-from ria_toolkit_oss.datatypes import Recording
+from ria_toolkit_oss.data import Recording
 from ria_toolkit_oss.sdr import SDR, SDRError, SDRParameterError

View File

@@ -4,7 +4,7 @@ from typing import Optional

 import numpy as np

-from ria_toolkit_oss.datatypes.recording import Recording
+from ria_toolkit_oss.data.recording import Recording
 from ria_toolkit_oss.sdr._external.libhackrf import HackRF as hrf
 from ria_toolkit_oss.sdr.sdr import SDR, SDRParameterError

View File

@@ -7,7 +7,7 @@ from typing import Optional

 import adi
 import numpy as np

-from ria_toolkit_oss.datatypes.recording import Recording
+from ria_toolkit_oss.data.recording import Recording
 from ria_toolkit_oss.sdr.sdr import (
     SDR,
     SDRError,

View File

@@ -11,7 +11,7 @@ try:
 except ImportError as exc:  # pragma: no cover - dependency provided by end user
     raise ImportError("pyrtlsdr is required to use the RTLSDR class") from exc

-from ria_toolkit_oss.datatypes.recording import Recording
+from ria_toolkit_oss.data.recording import Recording
 from ria_toolkit_oss.sdr.sdr import SDR, SDRParameterError

View File

@@ -8,7 +8,7 @@ from typing import Optional

 import numpy as np
 import zmq

-from ria_toolkit_oss.datatypes.recording import Recording
+from ria_toolkit_oss.data.recording import Recording


 class SDR(ABC):

View File

@@ -6,7 +6,7 @@ from typing import Optional

 import numpy as np
 import uhd

-from ria_toolkit_oss.datatypes.recording import Recording
+from ria_toolkit_oss.data.recording import Recording
 from ria_toolkit_oss.sdr.sdr import SDR, SDRParameterError

View File

@@ -3,7 +3,7 @@

 from fastapi import Depends, FastAPI

 from .auth import require_api_key
-from .routers import inference, orchestrator
+from .routers import conductor, inference


 def create_app(api_key: str = "") -> FastAPI:
@@ -28,9 +28,9 @@ def create_app(api_key: str = "") -> FastAPI:
     app.state.api_key = api_key

     app.include_router(
-        orchestrator.router,
-        prefix="/orchestrator",
-        tags=["Orchestrator"],
+        conductor.router,
+        prefix="/conductor",
+        tags=["Conductor"],
         dependencies=[Depends(require_api_key)],
     )
     app.include_router(

View File

@@ -7,7 +7,7 @@ from pathlib import Path

 from pydantic import BaseModel, field_validator

 # ---------------------------------------------------------------------------
-# Orchestrator
+# Conductor
 # ---------------------------------------------------------------------------

View File

@@ -1,4 +1,4 @@
-"""Orchestrator routes: campaign deployment, status, and cancellation."""
+"""Conductor routes: campaign deployment, status, and cancellation."""

 from __future__ import annotations

View File

@@ -11,7 +11,7 @@ from scipy.signal import butter
 from scipy.signal import chirp as sci_chirp
 from scipy.signal import hilbert, lfilter

-from ria_toolkit_oss.datatypes.recording import Recording
+from ria_toolkit_oss.data.recording import Recording


 def sine(

View File

@@ -1,4 +1,4 @@
-from ria_toolkit_oss.datatypes.recording import Recording
+from ria_toolkit_oss.data.recording import Recording
 from ria_toolkit_oss.signal.block_generator.generators.signal_generator import (
     SignalGenerator,
 )

View File

@@ -1,4 +1,4 @@
-from ria_toolkit_oss.datatypes.recording import Recording
+from ria_toolkit_oss.data.recording import Recording
 from ria_toolkit_oss.signal.block_generator.generators.signal_generator import (
     SignalGenerator,
 )

View File

@@ -1,4 +1,4 @@
-from ria_toolkit_oss.datatypes.recording import Recording
+from ria_toolkit_oss.data.recording import Recording
 from ria_toolkit_oss.signal.block_generator.generators.signal_generator import (
     SignalGenerator,
 )

View File

@@ -1,4 +1,4 @@
-from ria_toolkit_oss.datatypes import Recording
+from ria_toolkit_oss.data import Recording
 from ria_toolkit_oss.signal import Recordable
 from ria_toolkit_oss.signal.block_generator.block import Block

View File

@@ -4,7 +4,7 @@ from datetime import datetime

 import click
 import numpy as np

-from ria_toolkit_oss.datatypes.recording import Recording
+from ria_toolkit_oss.data.recording import Recording
 from ria_toolkit_oss.signal.block_generator.mapping.mapper import Mapper
 from ria_toolkit_oss.signal.block_generator.multirate.upsampling import Upsampling
 from ria_toolkit_oss.signal.block_generator.pulse_shaping.raised_cosine_filter import (

View File

@@ -1,4 +1,4 @@
-from ria_toolkit_oss.datatypes import Recording
+from ria_toolkit_oss.data import Recording
 from ria_toolkit_oss.signal.block_generator.data_types import DataType
 from ria_toolkit_oss.signal.block_generator.recordable_block import RecordableBlock
 from ria_toolkit_oss.signal.block_generator.source_block import SourceBlock

View File

@@ -1,6 +1,6 @@
 from abc import ABC, abstractmethod

-from ria_toolkit_oss.datatypes import Recording
+from ria_toolkit_oss.data import Recording


 class Recordable(ABC):

View File

@@ -11,7 +11,7 @@ from typing import Optional
 import numpy as np
 from numpy.typing import ArrayLike
-from ria_toolkit_oss.datatypes.recording import Recording
+from ria_toolkit_oss.data.recording import Recording
 from ria_toolkit_oss.utils.array_conversion import convert_to_2xn
 # TODO: For round 2 of index generation, should j be at min 2 spots away from where it was to prevent adjacent patches.
@@ -29,7 +29,7 @@ def generate_awgn(signal: ArrayLike | Recording, snr: Optional[float] = 1) -> np
 :param signal: Input IQ data as a complex CxN array or `Recording`, where C is the number of channels and N
 is the length of the IQ examples.
-:type signal: array_like or ria_toolkit_oss.datatypes.Recording
+:type signal: array_like or ria_toolkit_oss.data.Recording
 :param snr: The signal-to-noise ratio in dB. Default is 1.
 :type snr: float, optional
@@ -37,7 +37,7 @@ def generate_awgn(signal: ArrayLike | Recording, snr: Optional[float] = 1) -> np
 :return: A numpy array representing the generated noise which matches the SNR of `signal`. If `signal` is a
 Recording, returns a Recording object with its `data` attribute containing the generated noise array.
-:rtype: np.ndarray or ria_toolkit_oss.datatypes.Recording
+:rtype: np.ndarray or ria_toolkit_oss.data.Recording
 >>> rec = Recording(data=[[2 + 5j, 1 + 8j]])
 >>> new_rec = generate_awgn(rec)
@@ -80,14 +80,14 @@ def time_reversal(signal: ArrayLike | Recording) -> np.ndarray | Recording:
 :param signal: Input IQ data as a complex CxN array or `Recording`, where C is the number of channels and N
 is the length of the IQ examples.
-:type signal: array_like or ria_toolkit_oss.datatypes.Recording
+:type signal: array_like or ria_toolkit_oss.data.Recording
 :raises ValueError: If `signal` is not CxN complex.
 :return: A numpy array containing the reversed I and Q data samples if `signal` is an array.
 If `signal` is a `Recording`, returns a `Recording` object with its `data` attribute containing the
 reversed array.
-:rtype: np.ndarray or ria_toolkit_oss.datatypes.Recording
+:rtype: np.ndarray or ria_toolkit_oss.data.Recording
 >>> rec = Recording(data=[[1+2j, 3+4j, 5+6j]])
 >>> new_rec = time_reversal(rec)
@@ -123,14 +123,14 @@ def spectral_inversion(signal: ArrayLike | Recording) -> np.ndarray | Recording:
 :param signal: Input IQ data as a complex CxN array or `Recording`, where C is the number of channels and N
 is the length of the IQ examples.
-:type signal: array_like or ria_toolkit_oss.datatypes.Recording
+:type signal: array_like or ria_toolkit_oss.data.Recording
 :raises ValueError: If `signal` is not CxN complex.
 :return: A numpy array containing the original I and negated Q data samples if `signal` is an array.
 If `signal` is a `Recording`, returns a `Recording` object with its `data` attribute containing the
 inverted array.
-:rtype: np.ndarray or ria_toolkit_oss.datatypes.Recording
+:rtype: np.ndarray or ria_toolkit_oss.data.Recording
 >>> rec = Recording(data=[[0+45j, 2-10j]])
 >>> new_rec = spectral_inversion(rec)
@@ -165,14 +165,14 @@ def channel_swap(signal: ArrayLike | Recording) -> np.ndarray | Recording:
 :param signal: Input IQ data as a complex CxN array or `Recording`, where C is the number of channels and N
 is the length of the IQ examples.
-:type signal: array_like or ria_toolkit_oss.datatypes.Recording
+:type signal: array_like or ria_toolkit_oss.data.Recording
 :raises ValueError: If `signal` is not CxN complex.
 :return: A numpy array containing the swapped I and Q data samples if `signal` is an array.
 If `signal` is a `Recording`, returns a `Recording` object with its `data` attribute containing the
 swapped array.
-:rtype: np.ndarray or ria_toolkit_oss.datatypes.Recording
+:rtype: np.ndarray or ria_toolkit_oss.data.Recording
 >>> rec = Recording(data=[[10+20j, 7+35j]])
 >>> new_rec = channel_swap(rec)
@@ -207,14 +207,14 @@ def amplitude_reversal(signal: ArrayLike | Recording) -> np.ndarray | Recording:
 :param signal: Input IQ data as a complex CxN array or `Recording`, where C is the number of channels and N
 is the length of the IQ examples.
-:type signal: array_like or ria_toolkit_oss.datatypes.Recording
+:type signal: array_like or ria_toolkit_oss.data.Recording
 :raises ValueError: If `signal` is not CxN complex.
 :return: A numpy array containing the negated I and Q data samples if `signal` is an array.
 If `signal` is a `Recording`, returns a `Recording` object with its `data` attribute containing the
 negated array.
-:rtype: np.ndarray or ria_toolkit_oss.datatypes.Recording
+:rtype: np.ndarray or ria_toolkit_oss.data.Recording
 >>> rec = Recording(data=[[4-3j, -5-2j, -9+1j]])
 >>> new_rec = amplitude_reversal(rec)
@@ -253,7 +253,7 @@ def drop_samples( # noqa: C901 # TODO: Simplify function
 :param signal: Input IQ data as a complex CxN array or `Recording`, where C is the number of channels and N
 is the length of the IQ examples.
-:type signal: array_like or ria_toolkit_oss.datatypes.Recording
+:type signal: array_like or ria_toolkit_oss.data.Recording
 :param max_section_size: Maximum allowable size of the section to be dropped and replaced. Default is 2.
 :type max_section_size: int, optional
 :param fill_type: Fill option used to replace dropped section of data (back-fill, front-fill, mean, zeros).
@@ -275,7 +275,7 @@ def drop_samples( # noqa: C901 # TODO: Simplify function
 :return: A numpy array containing the I and Q data samples with replaced subsections if
 `signal` is an array. If `signal` is a `Recording`, returns a `Recording` object with its `data`
 attribute containing the array with dropped samples.
-:rtype: np.ndarray or ria_toolkit_oss.datatypes.Recording
+:rtype: np.ndarray or ria_toolkit_oss.data.Recording
 >>> rec = Recording(data=[[2+5j, 1+8j, 6+4j, 3+7j, 4+9j]])
 >>> new_rec = drop_samples(rec)
@@ -346,7 +346,7 @@ def quantize_tape(
 :param signal: Input IQ data as a complex CxN array or `Recording`, where C is the number of channels and N
 is the length of the IQ examples.
-:type signal: array_like or ria_toolkit_oss.datatypes.Recording
+:type signal: array_like or ria_toolkit_oss.data.Recording
 :param bin_number: The number of bins the signal should be divided into. Default is 4.
 :type bin_number: int, optional
 :param rounding_type: The type of rounding applied during processing. Default is "floor".
@@ -362,7 +362,7 @@ def quantize_tape(
 :return: A numpy array containing the quantized I and Q data samples if `signal` is an array.
 If `signal` is a `Recording`, returns a `Recording` object with its `data` attribute containing
 the quantized array.
-:rtype: np.ndarray or ria_toolkit_oss.datatypes.Recording
+:rtype: np.ndarray or ria_toolkit_oss.data.Recording
 >>> rec = Recording(data=[[1+1j, 4+4j, 1+2j, 1+4j]])
 >>> new_rec = quantize_tape(rec)
@@ -421,7 +421,7 @@ def quantize_parts(
 :param signal: Input IQ data as a complex CxN array or `Recording`, where C is the number of channels and N
 is the length of the IQ examples.
-:type signal: array_like or ria_toolkit_oss.datatypes.Recording
+:type signal: array_like or ria_toolkit_oss.data.Recording
 :param max_section_size: Maximum allowable size of the section to be quantized. Default is 2.
 :type max_section_size: int, optional
 :param bin_number: The number of bins the signal should be divided into. Default is 4.
@@ -439,7 +439,7 @@ def quantize_parts(
 :return: A numpy array containing the I and Q data samples with quantized subsections if `signal`
 is an array. If `signal` is a `Recording`, returns a `Recording` object with its `data` attribute
 containing the partially quantized array.
-:rtype: np.ndarray or ria_toolkit_oss.datatypes.Recording
+:rtype: np.ndarray or ria_toolkit_oss.data.Recording
 >>> rec = Recording(data=[[2+5j, 1+8j, 6+4j, 3+7j, 4+9j]])
 >>> new_rec = quantize_parts(rec)
@@ -510,7 +510,7 @@ def magnitude_rescale(
 :param signal: Input IQ data as a complex CxN array or `Recording`, where C is the number of channels and N
 is the length of the IQ examples.
-:type signal: array_like or ria_toolkit_oss.datatypes.Recording
+:type signal: array_like or ria_toolkit_oss.data.Recording
 :param starting_bounds: The bounds (inclusive) as indices in which the starting position of the rescaling occurs.
 Default is None, but if the user does not assign any bounds, the bounds become (random index, N-1).
 :type starting_bounds: tuple, optional
@@ -522,7 +522,7 @@ def magnitude_rescale(
 :return: A numpy array containing the I and Q data samples with the rescaled magnitude after the random
 starting point if `signal` is an array. If `signal` is a `Recording`, returns a `Recording`
 object with its `data` attribute containing the rescaled array.
-:rtype: np.ndarray or ria_toolkit_oss.datatypes.Recording
+:rtype: np.ndarray or ria_toolkit_oss.data.Recording
 >>> rec = Recording(data=[[2+5j, 1+8j, 6+4j, 3+7j, 4+9j]])
 >>> new_rec = magnitude_rescale(rec)
@@ -571,7 +571,7 @@ def cut_out( # noqa: C901 # TODO: Simplify function
 :param signal: Input IQ data as a complex CxN array or `Recording`, where C is the number of channels and N
 is the length of the IQ examples.
-:type signal: array_like or ria_toolkit_oss.datatypes.Recording
+:type signal: array_like or ria_toolkit_oss.data.Recording
 :param max_section_size: Maximum allowable size of the section to be cut out. Default is 3.
 :type max_section_size: int, optional
 :param fill_type: Fill option used to replace cutout section of data (zeros, ones, low-snr, avg-snr-1, avg-snr-2).
@@ -596,7 +596,7 @@ def cut_out( # noqa: C901 # TODO: Simplify function
 :return: A numpy array containing the I and Q data samples with random sections cut out and replaced according to
 `fill_type` if `signal` is an array. If `signal` is a `Recording`, returns a `Recording` object
 with its `data` attribute containing the cut out and replaced array.
-:rtype: np.ndarray or ria_toolkit_oss.datatypes.Recording
+:rtype: np.ndarray or ria_toolkit_oss.data.Recording
 >>> rec = Recording(data=[[2+5j, 1+8j, 6+4j, 3+7j, 4+9j]])
 >>> new_rec = cut_out(rec)
@@ -666,7 +666,7 @@ def patch_shuffle(signal: ArrayLike | Recording, max_patch_size: Optional[int] =
 :param signal: Input IQ data as a complex CxN array or `Recording`, where C is the number of channels and N
 is the length of the IQ examples.
-:type signal: array_like or ria_toolkit_oss.datatypes.Recording
+:type signal: array_like or ria_toolkit_oss.data.Recording
 :param max_patch_size: Maximum allowable patch size of the data that can be shuffled. Default is 3.
 :type max_patch_size: int, optional
@@ -676,7 +676,7 @@ def patch_shuffle(signal: ArrayLike | Recording, max_patch_size: Optional[int] =
 :return: A numpy array containing the I and Q data samples with randomly shuffled regions if `signal` is
 an array. If `signal` is a `Recording`, returns a `Recording` object with its `data` attribute containing
 the shuffled array.
-:rtype: np.ndarray or ria_toolkit_oss.datatypes.Recording
+:rtype: np.ndarray or ria_toolkit_oss.data.Recording
 >>> rec = Recording(data=[[2+5j, 1+8j, 6+4j, 3+7j, 4+9j]])
 >>> new_rec = patch_shuffle(rec)
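The renamed docstrings above pin down each augmentation's behaviour on the array path. As a quick illustration only (not the toolkit's implementation, which also handles `Recording` inputs and shape validation), the four simplest transforms can be sketched with plain numpy:

```python
import numpy as np

def time_reversal(iq):
    # Reverse the sample order along the time axis, per channel.
    return np.flip(np.asarray(iq), axis=-1)

def spectral_inversion(iq):
    # Keep I, negate Q: complex conjugation, which mirrors the
    # spectrum about 0 Hz.
    return np.conj(np.asarray(iq))

def channel_swap(iq):
    # Swap the I and Q components of every sample.
    x = np.asarray(iq)
    return x.imag + 1j * x.real

def amplitude_reversal(iq):
    # Negate both I and Q (equivalent to a 180-degree phase rotation).
    return -np.asarray(iq)
```

Each sketch accepts a complex CxN array, matching the docstring convention above.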


@@ -16,7 +16,7 @@ import numpy as np
 from numpy.typing import ArrayLike
 from scipy.signal import resample_poly
-from ria_toolkit_oss.datatypes import Recording
+from ria_toolkit_oss.data import Recording
 from ria_toolkit_oss.transforms import iq_augmentations
@@ -31,7 +31,7 @@ def add_awgn_to_signal(signal: ArrayLike | Recording, snr: Optional[float] = 1)
 :param signal: Input IQ data as a complex ``C x N`` array or `Recording`, where ``C`` is the number of channels
 and ``N`` is the length of the IQ examples.
-:type signal: array_like or ria_toolkit_oss.datatypes.Recording
+:type signal: array_like or ria_toolkit_oss.data.Recording
 :param snr: The signal-to-noise ratio in dB. Default is 1.
 :type snr: float, optional
@@ -39,7 +39,7 @@ def add_awgn_to_signal(signal: ArrayLike | Recording, snr: Optional[float] = 1)
 :return: A numpy array which is the sum of the noise (which matches the SNR) and the original signal. If `signal`
 is a `Recording`, returns a `Recording` object with its `data` attribute containing the noisy signal array.
-:rtype: np.ndarray or ria_toolkit_oss.datatypes.Recording
+:rtype: np.ndarray or ria_toolkit_oss.data.Recording
 >>> rec = Recording(data=[[1+1j, 2+2j]])
 >>> new_rec = add_awgn_to_signal(rec)
@@ -71,7 +71,7 @@ def time_shift(signal: ArrayLike | Recording, shift: Optional[int] = 1) -> np.nd
 :param signal: Input IQ data as a complex CxN array or `Recording`, where C is the number of channels and N
 is the length of the IQ examples.
-:type signal: array_like or ria_toolkit_oss.datatypes.Recording
+:type signal: array_like or ria_toolkit_oss.data.Recording
 :param shift: The number of indices to shift by. Default is 1.
 :type shift: int, optional
@@ -80,7 +80,7 @@ def time_shift(signal: ArrayLike | Recording, shift: Optional[int] = 1) -> np.nd
 :return: A numpy array which represents the time-shifted signal. If `signal` is a `Recording`,
 returns a `Recording` object with its `data` attribute containing the time-shifted array.
-:rtype: np.ndarray or ria_toolkit_oss.datatypes.Recording
+:rtype: np.ndarray or ria_toolkit_oss.data.Recording
 >>> rec = Recording(data=[[1+1j, 2+2j, 3+3j, 4+4j, 5+5j]])
 >>> new_rec = time_shift(rec, -2)
@@ -134,7 +134,7 @@ def frequency_shift(signal: ArrayLike | Recording, shift: Optional[float] = 0.5)
 :param signal: Input IQ data as a complex CxN array or `Recording`, where C is the number of channels and N
 is the length of the IQ examples.
-:type signal: array_like or ria_toolkit_oss.datatypes.Recording
+:type signal: array_like or ria_toolkit_oss.data.Recording
 :param shift: The frequency shift relative to the sample rate. Must be in the range ``[-0.5, 0.5]``.
 Default is 0.5.
 :type shift: float, optional
@@ -144,7 +144,7 @@ def frequency_shift(signal: ArrayLike | Recording, shift: Optional[float] = 0.5)
 :return: A numpy array which represents the frequency-shifted signal. If `signal` is a `Recording`,
 returns a `Recording` object with its `data` attribute containing the frequency-shifted array.
-:rtype: np.ndarray or ria_toolkit_oss.datatypes.Recording
+:rtype: np.ndarray or ria_toolkit_oss.data.Recording
 >>> rec = Recording(data=[[1+1j, 2+2j, 3+3j, 4+4j]])
 >>> new_rec = frequency_shift(rec, -0.4)
@@ -189,7 +189,7 @@ def phase_shift(signal: ArrayLike | Recording, phase: Optional[float] = np.pi) -
 :param signal: Input IQ data as a complex CxN array or `Recording`, where C is the number of channels and N
 is the length of the IQ examples.
-:type signal: array_like or ria_toolkit_oss.datatypes.Recording
+:type signal: array_like or ria_toolkit_oss.data.Recording
 :param phase: The phase angle by which to rotate the IQ samples, in radians. Must be in the range ``[-π, π]``.
 Default is π.
 :type phase: float, optional
@@ -199,7 +199,7 @@ def phase_shift(signal: ArrayLike | Recording, phase: Optional[float] = np.pi) -
 :return: A numpy array which represents the phase-shifted signal. If `signal` is a `Recording`,
 returns a `Recording` object with its `data` attribute containing the phase-shifted array.
-:rtype: np.ndarray or ria_toolkit_oss.datatypes.Recording
+:rtype: np.ndarray or ria_toolkit_oss.data.Recording
 >>> rec = Recording(data=[[1+1j, 2+2j, 3+3j, 4+4j]])
 >>> new_rec = phase_shift(rec, np.pi/2)
@@ -246,7 +246,7 @@ def iq_imbalance(
 :param signal: Input IQ data as a complex CxN array or `Recording`, where C is the number of channels and N
 is the length of the IQ examples.
-:type signal: array_like or ria_toolkit_oss.datatypes.Recording
+:type signal: array_like or ria_toolkit_oss.data.Recording
 :param amplitude_imbalance: The IQ amplitude imbalance to apply, in dB. Default is 1.5.
 :type amplitude_imbalance: float, optional
 :param phase_imbalance: The IQ phase imbalance to apply, in radians. Default is π.
@@ -260,7 +260,7 @@ def iq_imbalance(
 :return: A numpy array which is the original signal with an applied IQ imbalance. If `signal` is a `Recording`,
 returns a `Recording` object with its `data` attribute containing the IQ imbalanced signal array.
-:rtype: np.ndarray or ria_toolkit_oss.datatypes.Recording
+:rtype: np.ndarray or ria_toolkit_oss.data.Recording
 >>> rec = Recording(data=[[2+18j, -34+2j, 3+9j]])
 >>> new_rec = iq_imbalance(rec, 1, np.pi, 2)
@@ -315,7 +315,7 @@ def resample(signal: ArrayLike | Recording, up: Optional[int] = 4, down: Optiona
 :param signal: Input IQ data as a complex CxN array or `Recording`, where C is the number of channels and N
 is the length of the IQ examples.
-:type signal: array_like or ria_toolkit_oss.datatypes.Recording
+:type signal: array_like or ria_toolkit_oss.data.Recording
 :param up: The upsampling factor. Default is 4.
 :type up: int, optional
 :param down: The downsampling factor. Default is 2.
@@ -325,7 +325,7 @@ def resample(signal: ArrayLike | Recording, up: Optional[int] = 4, down: Optiona
 :return: A numpy array which represents the resampled signal. If `signal` is a `Recording`,
 returns a `Recording` object with its `data` attribute containing the resampled array.
-:rtype: np.ndarray or ria_toolkit_oss.datatypes.Recording
+:rtype: np.ndarray or ria_toolkit_oss.data.Recording
 >>> rec = Recording(data=[[1+1j, 2+2j]])
 >>> new_rec = resample(rec, 2, 1)
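The transform docstrings above fully specify the math for the noise and frequency-shift operations. A numpy-only sketch of that behaviour (illustrative only, not the toolkit's implementation; it assumes SNR is the ratio of mean signal power to mean noise power, expressed in dB, as the docstrings describe):

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_awgn(iq, snr=1.0):
    # Complex white Gaussian noise scaled so that
    # 10*log10(signal_power / noise_power) == snr (in dB).
    x = np.asarray(iq)
    sig_power = np.mean(np.abs(x) ** 2)
    noise_power = sig_power / 10 ** (snr / 10)
    scale = np.sqrt(noise_power / 2)  # split power across I and Q
    return scale * (rng.standard_normal(x.shape) + 1j * rng.standard_normal(x.shape))

def frequency_shift(iq, shift=0.5):
    # Multiply by a complex exponential; `shift` is relative to the
    # sample rate and must lie in [-0.5, 0.5].
    x = np.asarray(iq)
    n = np.arange(x.shape[-1])
    return x * np.exp(2j * np.pi * shift * n)
```

Note that a frequency shift is a pure rotation, so it leaves the magnitude of every sample unchanged.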


@@ -4,14 +4,14 @@ import scipy.signal as signal
 from plotly.graph_objs import Figure
 from scipy.fft import fft, fftshift
-from ria_toolkit_oss.datatypes import Recording
+from ria_toolkit_oss.data import Recording
 def spectrogram(rec: Recording, thumbnail: bool = False) -> Figure:
 """Create a spectrogram for the recording.
 :param rec: Signal to plot.
-:type rec: ria_toolkit_oss.datatypes.Recording
+:type rec: ria_toolkit_oss.data.Recording
 :param thumbnail: Whether to return a small thumbnail version or full plot.
 :type thumbnail: bool
@@ -95,7 +95,7 @@ def iq_time_series(rec: Recording) -> Figure:
 """Create a time series plot of the real and imaginary parts of signal.
 :param rec: Signal to plot.
-:type rec: ria_toolkit_oss.datatypes.Recording
+:type rec: ria_toolkit_oss.data.Recording
 :return: Time series plot as a Plotly figure.
 """
@@ -125,7 +125,7 @@ def frequency_spectrum(rec: Recording) -> Figure:
 """Create a frequency spectrum plot from the recording.
 :param rec: Input signal to plot.
-:type rec: ria_toolkit_oss.datatypes.Recording
+:type rec: ria_toolkit_oss.data.Recording
 :return: Frequency spectrum as a Plotly figure.
 """
@@ -160,7 +160,7 @@ def constellation(rec: Recording) -> Figure:
 """Create a constellation plot from the recording.
 :param rec: Input signal to plot.
-:type rec: ria_toolkit_oss.datatypes.Recording
+:type rec: ria_toolkit_oss.data.Recording
 :return: Constellation as a Plotly figure.
 """


@@ -12,7 +12,7 @@ from scipy.fft import fft, fftshift
 from scipy.signal import spectrogram
 from scipy.signal.windows import hann
-from ria_toolkit_oss.datatypes.recording import Recording
+from ria_toolkit_oss.data.recording import Recording
 from ria_toolkit_oss.view.tools import (
 COLORS,
 decimate,


@@ -12,7 +12,7 @@ import numpy as np
 from scipy.fft import fft, fftshift
 from scipy.signal.windows import hann
-from ria_toolkit_oss.datatypes.recording import Recording
+from ria_toolkit_oss.data.recording import Recording
 from ria_toolkit_oss.view.tools import (
 COLORS,
 decimate,


@@ -4,14 +4,14 @@ import scipy.signal as signal
 from plotly.graph_objs import Figure
 from scipy.fft import fft, fftshift
-from ria_toolkit_oss.datatypes import Recording
+from ria_toolkit_oss.data import Recording
 def spectrogram(rec: Recording, thumbnail: bool = False) -> Figure:
 """Create a spectrogram for the recording.
 :param rec: Signal to plot.
-:type rec: ria_toolkit_oss.datatypes.Recording
+:type rec: ria_toolkit_oss.data.Recording
 :param thumbnail: Whether to return a small thumbnail version or full plot.
 :type thumbnail: bool
@@ -107,7 +107,7 @@ def iq_time_series(rec: Recording) -> Figure:
 """Create a time series plot of the real and imaginary parts of signal.
 :param rec: Signal to plot.
-:type rec: ria_toolkit_oss.datatypes.Recording
+:type rec: ria_toolkit_oss.data.Recording
 :return: Time series plot, as a Plotly Figure.
 """
@@ -145,7 +145,7 @@ def frequency_spectrum(rec: Recording) -> Figure:
 """Create a frequency spectrum plot from the recording.
 :param rec: Input signal to plot.
-:type rec: ria_toolkit_oss.datatypes.Recording
+:type rec: ria_toolkit_oss.data.Recording
 :return: Frequency spectrum, as a Plotly figure.
 """
@@ -187,7 +187,7 @@ def constellation(rec: Recording) -> Figure:
 """Create a constellation plot from the recording.
 :param rec: Input signal to plot.
-:type rec: ria_toolkit_oss.datatypes.Recording
+:type rec: ria_toolkit_oss.data.Recording
 :return: Constellation, as a Plotly Figure.
 """
@@ -222,7 +222,7 @@ def power_spectral_density(rec: Recording) -> Figure:
 """Create a Power Spectral Density (PSD) plot from the recording.
 :param rec: Input signal to plot.
-:type rec: ria_toolkit_oss.datatypes.Recording
+:type rec: ria_toolkit_oss.data.Recording
 :return: PSD plot, as a Plotly Figure.
 """
@@ -268,7 +268,7 @@ def fft_plot(rec: Recording) -> Figure:
 """Create an FFT magnitude plot from the recording.
 :param rec: Input signal to plot.
-:type rec: ria_toolkit_oss.datatypes.Recording
+:type rec: ria_toolkit_oss.data.Recording
 :return: FFT plot, as a Plotly Figure.
 """
@@ -312,7 +312,7 @@ def spectrogram_3d(rec: Recording) -> Figure:
 """Create a 3D spectrogram plot from the recording.
 :param rec: Input signal to plot.
-:type rec: ria_toolkit_oss.datatypes.Recording
+:type rec: ria_toolkit_oss.data.Recording
 :return: 3D Spectrogram, as a Plotly Figure.
 """

View File

@@ -11,8 +11,8 @@ from ria_toolkit_oss.annotations import (
     split_recording_annotations,
     threshold_qualifier,
 )
-from ria_toolkit_oss.datatypes import Annotation
-from ria_toolkit_oss.datatypes.recording import Recording
+from ria_toolkit_oss.data import Annotation
+from ria_toolkit_oss.data.recording import Recording
 from ria_toolkit_oss.io import load_recording, to_blue, to_npy, to_sigmf, to_wav
 from ria_toolkit_oss_cli.ria_toolkit_oss.common import (
     format_frequency,

View File

@@ -7,7 +7,7 @@ from pathlib import Path
 import click
 import numpy as np
-from ria_toolkit_oss.datatypes import Recording
+from ria_toolkit_oss.data import Recording
 from ria_toolkit_oss.io import from_npy_legacy, load_recording
 from ria_toolkit_oss_cli.ria_toolkit_oss.common import (
     echo_progress,

View File

@@ -7,7 +7,7 @@ from typing import Any, Dict, List, Optional
 import click
 import yaml
-from ria_toolkit_oss.datatypes.recording import Recording
+from ria_toolkit_oss.data.recording import Recording
 from ria_toolkit_oss.io.recording import to_blue, to_npy, to_sigmf, to_wav

View File

@@ -8,7 +8,7 @@ import numpy as np
 import yaml
 import ria_toolkit_oss.signal.basic_signal_generator as basic_gen
-from ria_toolkit_oss.datatypes import Recording
+from ria_toolkit_oss.data import Recording
 from ria_toolkit_oss.signal.block_generator.basic import FrequencyShift
 from ria_toolkit_oss.signal.block_generator.continuous_modulation.fsk_modulator import (
     FSKModulator,

View File

@@ -23,9 +23,9 @@ def serve(host: str, port: int, api_key: str, log_level: str):
     \b
     Endpoints:
-        POST /orchestrator/deploy
-        GET /orchestrator/status/{campaign_id}
-        POST /orchestrator/cancel/{campaign_id}
+        POST /conductor/deploy
+        GET /conductor/status/{campaign_id}
+        POST /conductor/cancel/{campaign_id}
         POST /inference/load
         POST /inference/start
         POST /inference/stop

View File

@@ -8,7 +8,7 @@ from pathlib import Path
 import click
-from ria_toolkit_oss.datatypes.recording import Recording
+from ria_toolkit_oss.data.recording import Recording
 from ria_toolkit_oss.io.recording import load_recording
 from ria_toolkit_oss.transforms import iq_augmentations, iq_impairments
 from ria_toolkit_oss_cli.ria_toolkit_oss.common import (
from ria_toolkit_oss_cli.ria_toolkit_oss.common import ( from ria_toolkit_oss_cli.ria_toolkit_oss.common import (

View File

@@ -6,7 +6,7 @@ import time
 import click
-from ria_toolkit_oss.datatypes import Recording
+from ria_toolkit_oss.data import Recording
 from ria_toolkit_oss.io import from_npy_legacy, load_recording
 from .common import (

View File

@@ -1,4 +1,4 @@
-from ria_toolkit_oss.datatypes import Annotation
+from ria_toolkit_oss.data import Annotation
 def test_annotation_creation():

View File

@@ -3,8 +3,8 @@ from typing import Iterable
 import numpy as np
 import pytest
-from ria_toolkit_oss.datatypes import Annotation, Recording
-from ria_toolkit_oss.datatypes.recording import generate_recording_id
+from ria_toolkit_oss.data import Annotation, Recording
+from ria_toolkit_oss.data.recording import generate_recording_id
 COMPLEX_DATA_1 = [[0.5 + 0.5j, 0.1 + 0.1j, 0.3 + 0.3j, 0.4 + 0.4j, 0.5 + 0.5j]]

View File

@@ -1,6 +1,6 @@
 import numpy as np
-from ria_toolkit_oss.datatypes import Annotation, Recording
+from ria_toolkit_oss.data import Annotation, Recording
 from ria_toolkit_oss.io.recording import (
     from_npy,
     from_sigmf,

View File

@@ -0,0 +1,314 @@
"""Tests for orchestration executor — StepResult, CampaignResult, _run_script, _extract_tx_params."""
from __future__ import annotations
import json
import stat
from types import SimpleNamespace
import pytest
from ria_toolkit_oss.orchestration.executor import (
CampaignResult,
StepResult,
_extract_tx_params,
_run_script,
)
from ria_toolkit_oss.orchestration.qa import QAResult
def _ok_qa() -> QAResult:
return QAResult(passed=True, flagged=False, snr_db=20.0, duration_s=1.0)
def _flagged_qa() -> QAResult:
return QAResult(passed=True, flagged=True, snr_db=5.0, duration_s=1.0, issues=["low SNR"])
def _failed_qa() -> QAResult:
return QAResult(passed=False, flagged=True, snr_db=0.0, duration_s=0.0, issues=["no signal"])
# ---------------------------------------------------------------------------
# StepResult
# ---------------------------------------------------------------------------
class TestStepResult:
def test_ok_true_when_no_error_and_qa_passed(self):
r = StepResult(
transmitter_id="tx1",
step_label="step1",
output_path="/out/rec.sigmf-data",
qa=_ok_qa(),
capture_timestamp=0.0,
)
assert r.ok is True
def test_ok_false_when_error_set(self):
r = StepResult(
transmitter_id="tx1",
step_label="step1",
output_path=None,
qa=_ok_qa(),
capture_timestamp=0.0,
error="SDR failed",
)
assert r.ok is False
def test_ok_false_when_qa_not_passed(self):
r = StepResult(
transmitter_id="tx1",
step_label="step1",
output_path="/out",
qa=_failed_qa(),
capture_timestamp=0.0,
)
assert r.ok is False
def test_to_dict_contains_required_keys(self):
r = StepResult(
transmitter_id="tx1",
step_label="step1",
output_path="/out/rec.sigmf-data",
qa=_ok_qa(),
capture_timestamp=1234.5,
)
d = r.to_dict()
assert d["transmitter_id"] == "tx1"
assert d["step_label"] == "step1"
assert d["output_path"] == "/out/rec.sigmf-data"
assert d["capture_timestamp"] == pytest.approx(1234.5)
assert d["error"] is None
assert d["qa"]["passed"] is True
def test_to_dict_includes_error_when_set(self):
r = StepResult(
transmitter_id="tx1",
step_label="step1",
output_path=None,
qa=_failed_qa(),
capture_timestamp=0.0,
error="disk full",
)
assert r.to_dict()["error"] == "disk full"
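As a quick reference for readers of this diff, the `ok` semantics these tests pin down — an error always fails the step, and otherwise QA must pass — can be sketched as a standalone reimplementation (hypothetical; the real `StepResult` lives in `ria_toolkit_oss.orchestration.executor` and carries more behavior):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class QAResult:
    passed: bool
    flagged: bool = False

@dataclass
class StepResult:
    transmitter_id: str
    step_label: str
    output_path: Optional[str]
    qa: QAResult
    capture_timestamp: float
    error: Optional[str] = None

    @property
    def ok(self) -> bool:
        # An error always fails the step; otherwise defer to QA.
        return self.error is None and self.qa.passed
```

Note that `ok` deliberately ignores `flagged`: flagged steps still count as passed, which is why the suite tracks `flagged` separately.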
# ---------------------------------------------------------------------------
# CampaignResult
# ---------------------------------------------------------------------------
class TestCampaignResult:
def _make(self, steps: list) -> CampaignResult:
r = CampaignResult(campaign_name="test_campaign")
r.steps = steps
r.end_time = r.start_time + 5.0
return r
def test_total_steps(self):
r = self._make(
[
StepResult("tx1", "s1", "/out", _ok_qa(), 0.0),
StepResult("tx1", "s2", "/out", _ok_qa(), 0.0),
]
)
assert r.total_steps == 2
def test_passed_count(self):
r = self._make(
[
StepResult("tx1", "s1", "/out", _ok_qa(), 0.0),
StepResult("tx1", "s2", "/out", _failed_qa(), 0.0),
]
)
assert r.passed == 1
def test_failed_count(self):
r = self._make(
[
StepResult("tx1", "s1", "/out", _ok_qa(), 0.0),
StepResult("tx1", "s2", "/out", _failed_qa(), 0.0),
]
)
assert r.failed == 1
def test_flagged_count(self):
r = self._make(
[
StepResult("tx1", "s1", "/out", _ok_qa(), 0.0),
StepResult("tx1", "s2", "/out", _flagged_qa(), 0.0),
]
)
assert r.flagged == 1
def test_error_step_counts_as_failed_not_passed(self):
r = self._make(
[
StepResult("tx1", "s1", None, _ok_qa(), 0.0, error="disk full"),
]
)
assert r.failed == 1
assert r.passed == 0
def test_duration_s_from_end_time(self):
r = CampaignResult(campaign_name="c")
r.start_time = 100.0
r.end_time = 115.0
assert r.duration_s == pytest.approx(15.0)
def test_to_dict_structure(self):
r = self._make([StepResult("tx1", "s1", "/out", _ok_qa(), 0.0)])
d = r.to_dict()
assert d["campaign_name"] == "test_campaign"
assert d["total_steps"] == 1
assert d["passed"] == 1
assert len(d["steps"]) == 1
def test_write_report(self, tmp_path):
r = self._make([StepResult("tx1", "s1", "/out", _ok_qa(), 0.0)])
out = tmp_path / "report.json"
r.write_report(str(out))
assert out.exists()
data = json.loads(out.read_text())
assert data["campaign_name"] == "test_campaign"
def test_write_report_creates_nested_dirs(self, tmp_path):
r = self._make([])
out = tmp_path / "nested" / "deep" / "report.json"
r.write_report(str(out))
assert out.exists()
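The counting properties exercised above amount to simple reductions over `steps`; a minimal sketch of what these assertions require (hypothetical stand-in, not the shipped `CampaignResult`):

```python
from types import SimpleNamespace

class CampaignResult:
    """Minimal stand-in: aggregates per-step outcomes."""

    def __init__(self, campaign_name: str):
        self.campaign_name = campaign_name
        self.steps = []

    @property
    def total_steps(self) -> int:
        return len(self.steps)

    @property
    def passed(self) -> int:
        # Steps with an error have ok == False, so they never count here.
        return sum(1 for s in self.steps if s.ok)

    @property
    def failed(self) -> int:
        return sum(1 for s in self.steps if not s.ok)

    @property
    def flagged(self) -> int:
        return sum(1 for s in self.steps if s.qa.flagged)
```

Because `failed` is defined as `not ok`, an errored step is counted as failed without any separate error tally, matching `test_error_step_counts_as_failed_not_passed`.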
# ---------------------------------------------------------------------------
# _run_script
# ---------------------------------------------------------------------------
class TestRunScript:
def _script(self, tmp_path, body: str) -> str:
s = tmp_path / "script.sh"
s.write_text("#!/bin/sh\n" + body)
s.chmod(s.stat().st_mode | stat.S_IEXEC)
return str(s)
def test_returns_stdout(self, tmp_path):
out = _run_script(self._script(tmp_path, 'echo "hello world"'))
assert out == "hello world"
def test_passes_args_to_script(self, tmp_path):
out = _run_script(self._script(tmp_path, 'echo "$1 $2"'), "configure", "arg2")
assert "configure" in out
def test_raises_on_nonzero_exit(self, tmp_path):
with pytest.raises(RuntimeError, match="exited 1"):
_run_script(self._script(tmp_path, "exit 1"))
def test_raises_on_relative_path(self):
with pytest.raises(RuntimeError, match="absolute"):
_run_script("relative/script.sh")
def test_raises_on_missing_file(self, tmp_path):
with pytest.raises(RuntimeError):
_run_script(str(tmp_path / "nonexistent.sh"))
def test_raises_on_timeout(self, tmp_path):
with pytest.raises(RuntimeError, match="timed out"):
_run_script(self._script(tmp_path, "sleep 60"), timeout=0.1)
def test_stderr_included_in_error_message(self, tmp_path):
with pytest.raises(RuntimeError) as exc_info:
_run_script(self._script(tmp_path, "echo 'bad thing' >&2; exit 1"))
assert "bad thing" in str(exc_info.value)
# ---------------------------------------------------------------------------
# _extract_tx_params
# ---------------------------------------------------------------------------
class TestExtractTxParams:
def test_returns_none_when_no_sdr_agent_attribute(self):
tx = SimpleNamespace()
assert _extract_tx_params(tx) is None
def test_returns_none_when_sdr_agent_is_none(self):
tx = SimpleNamespace(sdr_agent=None)
assert _extract_tx_params(tx) is None
def test_returns_none_when_sdr_agent_is_empty_dict(self):
tx = SimpleNamespace(sdr_agent={})
assert _extract_tx_params(tx) is None
def test_returns_signal_params(self):
tx = SimpleNamespace(
sdr_agent={
"modulation": "QPSK",
"symbol_rate": 1e6,
"center_frequency": 2.4e9,
}
)
result = _extract_tx_params(tx)
assert result == {"modulation": "QPSK", "symbol_rate": 1e6, "center_frequency": 2.4e9}
def test_strips_infra_key_node_id(self):
tx = SimpleNamespace(
sdr_agent={
"modulation": "BPSK",
"node_id": "node_abc123",
}
)
result = _extract_tx_params(tx)
assert "node_id" not in result
assert result == {"modulation": "BPSK"}
def test_strips_infra_key_session_code(self):
tx = SimpleNamespace(
sdr_agent={
"modulation": "FSK",
"session_code": "amber-peak-transmit",
}
)
result = _extract_tx_params(tx)
assert "session_code" not in result
def test_strips_none_values(self):
tx = SimpleNamespace(
sdr_agent={
"modulation": "QPSK",
"order": None,
"rolloff": 0.35,
}
)
result = _extract_tx_params(tx)
assert "order" not in result
assert result == {"modulation": "QPSK", "rolloff": 0.35}
def test_does_not_mutate_source_dict(self):
cfg = {"modulation": "QPSK", "node_id": "nid", "session_code": "code"}
tx = SimpleNamespace(sdr_agent=cfg)
_extract_tx_params(tx)
assert "node_id" in cfg
def test_full_sdr_agent_config(self):
tx = SimpleNamespace(
sdr_agent={
"modulation": "16QAM",
"order": 4,
"symbol_rate": 5e6,
"center_frequency": 915e6,
"filter": "rrc",
"rolloff": 0.35,
"node_id": "node_xyz",
"session_code": "some-code",
}
)
result = _extract_tx_params(tx)
assert result == {
"modulation": "16QAM",
"order": 4,
"symbol_rate": 5e6,
"center_frequency": 915e6,
"filter": "rrc",
"rolloff": 0.35,
}

View File

@@ -5,7 +5,7 @@ import time
 import numpy as np
 import pytest
-from ria_toolkit_oss.datatypes.recording import Recording
+from ria_toolkit_oss.data.recording import Recording
 from ria_toolkit_oss.orchestration.campaign import CaptureStep
 from ria_toolkit_oss.orchestration.labeler import build_output_filename, label_recording
@@ -109,6 +109,38 @@ class TestLabelRecording:
         result = label_recording(rec, "iphone13_001", _wifi_step(), time.time())
         assert result is rec
def test_tx_params_none_by_default(self):
rec = label_recording(_simple_recording(), "iphone13_001", _wifi_step(), time.time())
tx_keys = [k for k in rec.metadata if k.startswith("tx_")]
assert tx_keys == []
def test_tx_params_written_as_tx_prefix_keys(self):
params = {"modulation": "QPSK", "symbol_rate": 1e6}
rec = label_recording(_simple_recording(), "dev", _wifi_step(), time.time(), tx_params=params)
assert rec.metadata["tx_modulation"] == "QPSK"
assert rec.metadata["tx_symbol_rate"] == pytest.approx(1e6)
def test_tx_params_multiple_fields(self):
params = {
"modulation": "16QAM",
"order": 4,
"symbol_rate": 5e6,
"center_frequency": 915e6,
"filter": "rrc",
"rolloff": 0.35,
}
rec = label_recording(_simple_recording(), "dev", _wifi_step(), time.time(), tx_params=params)
for k, v in params.items():
assert f"tx_{k}" in rec.metadata
assert (
rec.metadata[f"tx_{k}"] == pytest.approx(v) if isinstance(v, float) else rec.metadata[f"tx_{k}"] == v
)
def test_tx_params_empty_dict_writes_nothing(self):
rec = label_recording(_simple_recording(), "dev", _wifi_step(), time.time(), tx_params={})
tx_keys = [k for k in rec.metadata if k.startswith("tx_") and k != "tx_power_dbm"]
assert tx_keys == []
 # ---------------------------------------------------------------------------
 # build_output_filename

View File

@@ -3,7 +3,7 @@
 import numpy as np
 import pytest
-from ria_toolkit_oss.datatypes.recording import Recording
+from ria_toolkit_oss.data.recording import Recording
 from ria_toolkit_oss.orchestration.campaign import QAConfig
 from ria_toolkit_oss.orchestration.qa import QAResult, check_recording, estimate_snr_db

View File

@@ -0,0 +1,153 @@
"""Tests for TxExecutor — signal synthesis and step execution."""
from __future__ import annotations
import threading
from unittest.mock import patch
import numpy as np
import pytest
from ria_toolkit_oss.orchestration.tx_executor import TxExecutor
def _cfg(modulation="QPSK", symbol_rate=100_000, steps=None):
return {
"id": "test-tx",
"type": "sdr",
"control_method": "sdr_agent",
"sdr_agent": {
"modulation": modulation,
"symbol_rate": symbol_rate,
"center_frequency": 0.0,
"filter": "rrc",
"rolloff": 0.35,
},
"schedule": steps or [{"label": "step1", "duration": 0.001, "power_dbm": -10}],
}
# ---------------------------------------------------------------------------
# Initialisation
# ---------------------------------------------------------------------------
class TestTxExecutorInit:
def test_stores_sdr_device(self):
ex = TxExecutor(_cfg(), sdr_device="pluto")
assert ex.sdr_device == "pluto"
def test_stop_event_created_when_not_supplied(self):
ex = TxExecutor(_cfg())
assert isinstance(ex.stop_event, threading.Event)
assert not ex.stop_event.is_set()
def test_accepts_external_stop_event(self):
ev = threading.Event()
ex = TxExecutor(_cfg(), stop_event=ev)
assert ex.stop_event is ev
# ---------------------------------------------------------------------------
# run() — schedule iteration
# ---------------------------------------------------------------------------
class TestTxExecutorRun:
def test_empty_schedule_returns_immediately(self):
cfg = _cfg(steps=[])
ex = TxExecutor(cfg)
ex.run() # must not raise or block
def test_pre_set_stop_event_skips_all_steps(self):
ev = threading.Event()
ev.set()
ex = TxExecutor(_cfg(), stop_event=ev)
# If stop was set, _execute_step should never be called.
# run() should return cleanly without attempting synthesis.
ex.run()
def test_no_sdr_falls_back_to_simulation(self, monkeypatch):
"""Without SDR hardware TxExecutor simulates by calling stop_event.wait."""
cfg = _cfg(steps=[{"label": "s", "duration": 0.001, "power_dbm": 0}])
waited = []
real_ev = threading.Event()
def _fake_wait(timeout=None):
waited.append(timeout)
return False
monkeypatch.setattr(real_ev, "wait", _fake_wait)
# Patch SDR init to always fail (forces simulation path)
with patch.object(TxExecutor, "_init_sdr", lambda self, *a, **kw: setattr(self, "_sdr", None)):
ex = TxExecutor(cfg, sdr_device="nonexistent_xyz", stop_event=real_ev)
ex.run()
assert len(waited) >= 1, "expected stop_event.wait to be called for simulation"
# ---------------------------------------------------------------------------
# _synthesise — all modulation types and filter types
# ---------------------------------------------------------------------------
class TestSynthesise:
@pytest.fixture(autouse=True)
def _ex(self):
self.ex = TxExecutor(_cfg())
def _synth(self, mod, num_samples=256):
return self.ex._synthesise(mod, sps=4, num_samples=num_samples, filter_type="rrc", rolloff=0.35)
@pytest.mark.parametrize("mod", ["BPSK", "QPSK", "8PSK", "16QAM", "64QAM", "256QAM"])
def test_psk_qam_returns_complex64_array(self, mod):
sig = self._synth(mod)
assert sig.dtype == np.complex64
assert len(sig) == 256
def test_fsk_returns_correct_length(self):
sig = self._synth("FSK")
assert len(sig) == 256
def test_ook_returns_correct_length(self):
sig = self._synth("OOK")
assert len(sig) == 256
def test_gmsk_returns_correct_length(self):
sig = self._synth("GMSK")
assert len(sig) == 256
def test_oqpsk_returns_correct_length(self):
sig = self._synth("OQPSK")
assert len(sig) == 256
@pytest.mark.parametrize("mod", ["BPSK", "QPSK", "16QAM", "FSK", "OOK", "GMSK"])
def test_samples_are_finite(self, mod):
sig = self._synth(mod)
assert np.all(np.isfinite(sig.real)), f"{mod}: non-finite real samples"
assert np.all(np.isfinite(sig.imag)), f"{mod}: non-finite imag samples"
def test_unknown_modulation_defaults_to_qpsk(self):
sig = self._synth("UNKNOWN_MOD_XYZ")
assert len(sig) == 256
assert sig.dtype == np.complex64
@pytest.mark.parametrize("filter_type", ["rrc", "rc", "gaussian", "rect", "none"])
def test_all_filter_types(self, filter_type):
sig = self.ex._synthesise("QPSK", sps=4, num_samples=128, filter_type=filter_type, rolloff=0.35)
assert len(sig) == 128
@pytest.mark.parametrize("n", [64, 128, 512, 1024])
def test_output_length_matches_requested_samples(self, n):
sig = self._synth("QPSK", num_samples=n)
assert len(sig) == n
def test_bpsk_output_is_complex_not_real(self):
sig = self._synth("BPSK")
# complex64 always has imag part; just check dtype
assert sig.dtype == np.complex64
def test_256qam_correct_length(self):
sig = self._synth("256QAM")
assert len(sig) == 256
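For intuition, synthesising a fixed-length complex64 baseband from random symbols can be as small as the following (a sketch of the general map-upsample-trim approach, not the toolkit's `_synthesise`, and with only two constellations filled in):

```python
import numpy as np

_CONSTELLATIONS = {
    "BPSK": np.array([-1, 1], dtype=np.complex64),
    "QPSK": np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j],
                     dtype=np.complex64) / np.sqrt(2),
}

def synthesise(modulation: str, sps: int, num_samples: int) -> np.ndarray:
    """Map random symbols to the constellation, upsample, trim to length."""
    # Unknown modulations fall back to QPSK, mirroring the tested behavior.
    points = _CONSTELLATIONS.get(modulation, _CONSTELLATIONS["QPSK"])
    n_syms = -(-num_samples // sps)  # ceil division: enough symbols to cover
    symbols = points[np.random.randint(0, len(points), n_syms)]
    sig = np.repeat(symbols, sps)[:num_samples]  # rectangular pulse shaping
    return sig.astype(np.complex64)
```

The trim after `np.repeat` is what makes arbitrary `num_samples` values (64, 100, 1024, ...) come out exact, which is the property the parametrised length tests check.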

View File

@@ -7,7 +7,7 @@ import numpy as np
 import pytest
 from click.testing import CliRunner
-from ria_toolkit_oss.datatypes import Annotation, Recording
+from ria_toolkit_oss.data import Annotation, Recording
 from ria_toolkit_oss.io import load_recording, to_npy, to_sigmf
 from ria_toolkit_oss_cli.cli import cli

View File

@@ -189,6 +189,8 @@ class TestNoiseCommand:
             "10000",
             "--noise-type",
             "gaussian",
+            "--power",
+            "0.01",
             "--output",
             output,
             "-q",
@@ -234,7 +236,7 @@ class TestNoiseCommand:
             "--num-samples",
             "10000",
             "--power",
-            "0.5",
+            "0.01",
             "--output",
             output,
             "-q",

View File

@@ -7,7 +7,7 @@ import numpy as np
 import pytest
 from click.testing import CliRunner
-from ria_toolkit_oss.datatypes import Annotation, Recording
+from ria_toolkit_oss.data import Annotation, Recording
 from ria_toolkit_oss.io import load_recording, to_sigmf
 from ria_toolkit_oss_cli.cli import cli

View File

@@ -1,6 +1,6 @@
 """Tests for the RT-OSS HTTP server.
-Covers: auth, inference lifecycle (without SDR/ONNX hardware), orchestrator
+Covers: auth, inference lifecycle (without SDR/ONNX hardware), conductor
 lifecycle (with mocked executor), and state helpers.
 ``start_inference`` and ``_inference_loop`` require real SDR hardware and an
@@ -286,17 +286,17 @@ class TestInferenceStop:
 # ---------------------------------------------------------------------------
-# POST /orchestrator/deploy
+# POST /conductor/deploy
 # ---------------------------------------------------------------------------
-class TestOrchestratorDeploy:
+class TestConductorDeploy:
     def test_deploy_422_on_invalid_config(self, client):
         with patch(
-            "ria_toolkit_oss.server.routers.orchestrator.CampaignConfig.from_dict",
+            "ria_toolkit_oss.server.routers.conductor.CampaignConfig.from_dict",
             side_effect=ValueError("missing required field 'name'"),
         ):
-            resp = client.post("/orchestrator/deploy", json={"config": {}})
+            resp = client.post("/conductor/deploy", json={"config": {}})
         assert resp.status_code == 422
     def test_deploy_returns_campaign_id(self, client):
@@ -307,10 +307,10 @@ class TestConductorDeploy:
         mock_executor.return_value.run.return_value = MagicMock(to_dict=lambda: {})
         with (
-            patch("ria_toolkit_oss.server.routers.orchestrator.CampaignConfig.from_dict", return_value=mock_cfg),
-            patch("ria_toolkit_oss.server.routers.orchestrator.CampaignExecutor", mock_executor),
+            patch("ria_toolkit_oss.server.routers.conductor.CampaignConfig.from_dict", return_value=mock_cfg),
+            patch("ria_toolkit_oss.server.routers.conductor.CampaignExecutor", mock_executor),
         ):
-            resp = client.post("/orchestrator/deploy", json={"config": {"name": "test_campaign"}})
+            resp = client.post("/conductor/deploy", json={"config": {"name": "test_campaign"}})
         assert resp.status_code == 200
         body = resp.json()
@@ -325,23 +325,23 @@ class TestConductorDeploy:
         mock_executor.return_value.run.return_value = MagicMock(to_dict=lambda: {})
         with (
-            patch("ria_toolkit_oss.server.routers.orchestrator.CampaignConfig.from_dict", return_value=mock_cfg),
-            patch("ria_toolkit_oss.server.routers.orchestrator.CampaignExecutor", mock_executor),
+            patch("ria_toolkit_oss.server.routers.conductor.CampaignConfig.from_dict", return_value=mock_cfg),
+            patch("ria_toolkit_oss.server.routers.conductor.CampaignExecutor", mock_executor),
         ):
-            resp = client.post("/orchestrator/deploy", json={"config": {}})
+            resp = client.post("/conductor/deploy", json={"config": {}})
         campaign_id = resp.json()["campaign_id"]
         assert state_module._campaigns.get(campaign_id) is not None
 # ---------------------------------------------------------------------------
-# GET /orchestrator/status/{campaign_id}
+# GET /conductor/status/{campaign_id}
 # ---------------------------------------------------------------------------
-class TestOrchestratorStatus:
+class TestConductorStatus:
     def test_status_404_for_unknown_id(self, client):
-        resp = client.get("/orchestrator/status/nonexistent-id")
+        resp = client.get("/conductor/status/nonexistent-id")
         assert resp.status_code == 404
     def test_status_returns_campaign_state(self, client):
@@ -357,7 +357,7 @@ class TestConductorStatus:
         )
         state_module._campaigns["abc-123"] = state
-        resp = client.get("/orchestrator/status/abc-123")
+        resp = client.get("/conductor/status/abc-123")
         assert resp.status_code == 200
         body = resp.json()
         assert body["campaign_id"] == "abc-123"
@@ -367,13 +367,13 @@ class TestConductorStatus:
 # ---------------------------------------------------------------------------
-# POST /orchestrator/cancel/{campaign_id}
+# POST /conductor/cancel/{campaign_id}
 # ---------------------------------------------------------------------------
-class TestOrchestratorCancel:
+class TestConductorCancel:
     def test_cancel_404_for_unknown_id(self, client):
-        resp = client.post("/orchestrator/cancel/no-such-id")
+        resp = client.post("/conductor/cancel/no-such-id")
         assert resp.status_code == 404
     def test_cancel_sets_cancel_event(self, client):
@@ -387,7 +387,7 @@ class TestConductorCancel:
         )
         state_module._campaigns["camp-to-cancel"] = state
-        resp = client.post("/orchestrator/cancel/camp-to-cancel")
+        resp = client.post("/conductor/cancel/camp-to-cancel")
         assert resp.status_code == 200
         assert resp.json()["cancelled"] is True
         assert cancel_event.is_set()
@@ -403,7 +403,7 @@ class TestConductorCancel:
         )
         state_module._campaigns["done"] = state
-        resp = client.post("/orchestrator/cancel/done")
+        resp = client.post("/conductor/cancel/done")
         assert resp.status_code == 200
         assert resp.json()["cancelled"] is False
         assert not cancel_event.is_set()

tests/test_agent.py (new file, 247 lines)
View File

@@ -0,0 +1,247 @@
"""Tests for NodeAgent — TX role, session code, and TX command dispatch."""
from __future__ import annotations
import threading
import time
from unittest.mock import MagicMock, patch
from ria_toolkit_oss.agent import NodeAgent
def _agent(role="general", session_code=None, **kwargs):
return NodeAgent(
hub_url="http://hub.test",
api_key="test-key",
name="test-node",
sdr_device="mock",
role=role,
session_code=session_code,
**kwargs,
)
def _mock_register(agent, node_id="node_abc123"):
"""Patch _post so _register() returns a fake node_id response."""
resp = MagicMock()
resp.json.return_value = {"node_id": node_id}
resp.raise_for_status.return_value = None
agent._post = MagicMock(return_value=resp)
return agent._post
# ---------------------------------------------------------------------------
# Initialisation
# ---------------------------------------------------------------------------
class TestNodeAgentInit:
def test_stores_role_general(self):
assert _agent(role="general").role == "general"
def test_stores_role_tx(self):
assert _agent(role="tx").role == "tx"
def test_stores_role_rx(self):
assert _agent(role="rx").role == "rx"
def test_session_code_stored(self):
assert _agent(session_code="amber-peak-transmit").session_code == "amber-peak-transmit"
def test_session_code_none_by_default(self):
assert _agent().session_code is None
def test_tx_stop_event_created(self):
a = _agent()
assert isinstance(a._tx_stop, threading.Event)
def test_tx_thread_none_initially(self):
assert _agent()._tx_thread is None
def test_hub_url_trailing_slash_stripped(self):
a = NodeAgent(hub_url="http://hub.test/", api_key="k", name="n")
assert a.hub_url == "http://hub.test"
# ---------------------------------------------------------------------------
# _register payload
# ---------------------------------------------------------------------------
class TestNodeAgentRegisterPayload:
    def _payload(self, agent):
        post = _mock_register(agent)
        agent._register()
        _, kwargs = post.call_args
        return kwargs["json"]

    def test_general_role_in_payload(self):
        payload = self._payload(_agent(role="general"))
        assert payload["role"] == "general"

    def test_tx_role_in_payload(self):
        payload = self._payload(_agent(role="tx"))
        assert payload["role"] == "tx"

    def test_tx_role_adds_transmit_capability(self):
        payload = self._payload(_agent(role="tx"))
        assert "transmit" in payload["capabilities"]

    def test_general_role_omits_transmit_capability(self):
        payload = self._payload(_agent(role="general"))
        assert "transmit" not in payload.get("capabilities", [])

    def test_session_code_included_when_set(self):
        payload = self._payload(_agent(role="tx", session_code="amber-peak-transmit"))
        assert payload["session_code"] == "amber-peak-transmit"

    def test_session_code_omitted_when_none(self):
        payload = self._payload(_agent())
        assert "session_code" not in payload

    def test_register_stores_returned_node_id(self):
        a = _agent()
        _mock_register(a, node_id="node_xyz999")
        a._register()
        assert a.node_id == "node_xyz999"

    def test_name_in_payload(self):
        a = NodeAgent(hub_url="http://h", api_key="k", name="my-bench")
        _mock_register(a)
        a._register()
        _, kwargs = a._post.call_args
        assert kwargs["json"]["name"] == "my-bench"

    def test_sdr_device_in_payload(self):
        a = _agent()
        post = _mock_register(a)
        a._register()
        _, kwargs = post.call_args
        assert kwargs["json"]["sdr_device"] == "mock"

    def test_campaign_capability_always_present(self):
        for role in ("general", "rx", "tx"):
            a = _agent(role=role)
            payload = self._payload(a)
            assert "campaign" in payload["capabilities"]
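
Taken together, the payload tests above pin down a role-to-capabilities rule: every role advertises `campaign`, and only `tx` additionally advertises `transmit`. A minimal sketch of that mapping (the real `_register()` body is not shown here, so this function name is illustrative only):

```python
# Hypothetical sketch of the role -> capabilities mapping described by the
# register-payload tests above; not the actual NodeAgent implementation.
def build_capabilities(role: str) -> list[str]:
    caps = ["campaign"]          # present for every role (general, rx, tx)
    if role == "tx":
        caps.append("transmit")  # only tx nodes advertise transmit
    return caps

print(build_capabilities("tx"))       # ['campaign', 'transmit']
print(build_capabilities("general"))  # ['campaign']
```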
# ---------------------------------------------------------------------------
# _dispatch — TX commands
# ---------------------------------------------------------------------------
class TestNodeAgentDispatch:
    def _make_agent(self):
        a = _agent(role="tx")
        a.node_id = "node_abc"
        a._report_campaign_status = MagicMock()
        return a

    def test_start_transmit_spawns_thread(self):
        a = self._make_agent()
        done = threading.Event()

        class _FakeExecutor:
            def run(self_):
                done.wait(timeout=2)

        with patch("ria_toolkit_oss.orchestration.tx_executor.TxExecutor", return_value=_FakeExecutor()):
            a._dispatch({"command": "start_transmit", "sdr_agent": {}, "schedule": []})
            time.sleep(0.05)
            assert a._tx_thread is not None
        done.set()

    def test_start_transmit_clears_stop_event(self):
        a = self._make_agent()
        a._tx_stop.set()  # pre-set
        done = threading.Event()

        class _FakeExecutor:
            def run(self_):
                done.wait(timeout=2)

        with patch("ria_toolkit_oss.orchestration.tx_executor.TxExecutor", return_value=_FakeExecutor()):
            a._dispatch({"command": "start_transmit", "sdr_agent": {}, "schedule": []})
            time.sleep(0.05)
            assert not a._tx_stop.is_set()
        done.set()

    def test_stop_transmit_sets_stop_event(self):
        a = self._make_agent()
        a._dispatch({"command": "stop_transmit"})
        assert a._tx_stop.is_set()

    def test_configure_transmit_does_not_raise(self):
        a = self._make_agent()
        a._dispatch({"command": "configure_transmit", "modulation": "BPSK"})

    def test_unknown_command_is_ignored(self):
        a = self._make_agent()
        a._dispatch({"command": "frobnicate_xyz"})

    def test_duplicate_start_transmit_ignored_while_running(self):
        a = self._make_agent()
        done = threading.Event()
        run_calls = []

        class _FakeExecutor:
            def run(self_):
                run_calls.append(1)
                done.wait(timeout=2)

        with patch("ria_toolkit_oss.orchestration.tx_executor.TxExecutor", return_value=_FakeExecutor()):
            a._dispatch({"command": "start_transmit"})
            time.sleep(0.05)
            a._dispatch({"command": "start_transmit"})  # second while first alive
        done.set()
        time.sleep(0.05)
        assert len(run_calls) == 1

    def test_run_campaign_dispatched_in_thread(self):
        a = self._make_agent()
        done = threading.Event()

        with patch("ria_toolkit_oss.agent.NodeAgent._run_campaign") as mock_run:
            mock_run.side_effect = lambda *_: done.set()
            a._dispatch({"command": "run_campaign", "campaign_id": "c1", "payload": {}})
            done.wait(timeout=2)
        assert mock_run.called
# ---------------------------------------------------------------------------
# _stop_transmit
# ---------------------------------------------------------------------------
class TestStopTransmit:
    def test_no_thread_noop(self):
        a = _agent()
        a._stop_transmit()  # must not raise

    def test_sets_stop_event(self):
        a = _agent()
        a._stop_transmit()
        assert a._tx_stop.is_set()

    def test_joins_live_thread(self):
        a = _agent()
        finished = threading.Event()
        unblock = threading.Event()

        def _task():
            unblock.wait(timeout=2)
            finished.set()

        t = threading.Thread(target=_task, daemon=True)
        t.start()
        a._tx_thread = t
        # Signal stop and trigger thread exit
        a._tx_stop.set()
        unblock.set()
        a._stop_transmit()
        assert not t.is_alive()
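
The stop tests above revolve around a common shutdown pattern: set a `threading.Event` to signal the worker, then `join()` its thread. A self-contained, generic illustration of that pattern (this is a sketch of the idiom, not the actual `NodeAgent._stop_transmit` body):

```python
# Generic stop-event + join pattern, as exercised by test_joins_live_thread;
# the worker blocks on the event the way a transmit loop would.
import threading

stop = threading.Event()

def worker():
    # Wait until asked to stop (with a safety timeout, as in the tests).
    stop.wait(timeout=2)

t = threading.Thread(target=worker, daemon=True)
t.start()

stop.set()         # signal the worker to exit its wait
t.join(timeout=2)  # wait for it to finish, mirroring _stop_transmit()
print(t.is_alive())  # False
```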


@@ -1,7 +1,7 @@
 import numpy as np
 import pytest
-from ria_toolkit_oss.datatypes import Recording
+from ria_toolkit_oss.data import Recording
 from ria_toolkit_oss.transforms import iq_augmentations
 TEST_DATA1 = [[1 + 1j, 2 + 2j, 3 + 3j, 4 + 4j]]


@@ -13,7 +13,7 @@ Bugs/issues identified during review:
 import numpy as np
 import pytest
-from ria_toolkit_oss.datatypes import Recording
+from ria_toolkit_oss.data import Recording
 from ria_toolkit_oss.transforms import iq_impairments
 # ---------------------------------------------------------------------------