Wireless Core

Introduction

The Wireless Core is implemented as a custom network stack. Its purpose is to facilitate transceiver setup, making it possible to optimize performance for data rate, latency and power consumption. It manages wireless functionalities within the SDK and is controllable via its top-level API: the Wireless Core API.

Several concepts must be fully understood before delving into the design of a network with the Wireless Core. This documentation describes those concepts, provides tips on how to design an optimal schedule and gives guidelines on how to optimize resource utilization.

Concepts

Time Division Multiple Access

The Wireless Core always operates in Time Division Multiple Access (TDMA) mode. It divides time into multiple slices called timeslots. Each timeslot is a portion of time reserved for a specific event. The sum of all the timeslots is called a schedule. This TDMA approach is very efficient in terms of power consumption. Most of the time, a device can be asleep. It will wake up only to transmit or receive a frame on its dedicated timeslot.

Network

A network encompasses a schedule, connections and their configuration.

Schedule

To maximize the usage of air time, the length of the timeslots in a TDMA schedule can be adjusted with a precision of 1 microsecond. On a given network, the schedule must be known to all the Nodes on the network and cannot change at runtime. The schedule specifies when and for how long a given Node is allowed to transmit or receive a frame. When the schedule reaches the last timeslot, it loops back to the first one.

Figure 29: Basic schedule

The illustration above shows a basic schedule which enables bi-directional communication between two devices. Information is exchanged in one direction during the first timeslot, and in the other direction during the second timeslot. The schedule repeats indefinitely.

Timeslot duration is determined by the frame air time and the inter-frame spacing required for the MCU to fetch and process data for each frame. The inter-frame spacing mostly depends on the MCU performance characteristics and the configured SPI speed.

Figure 30: Inter-frame spacing

The figure above shows an example of a star network schedule. The diagram shows air time (colored) and processing time. Processing time must fit within a single timeslot when consecutive over-the-air events involve the same device (e.g. Coordinator Timeslot Index 1 to Index 2). If the over-the-air events are non-consecutive, processing can occur during the other devices' timeslots (e.g. Coordinator Timeslot Index 3).

Frequency Switching

To optimize UWB spectrum usage and avoid violating the regulatory emission limits, frequency switching is used.

The strategy consists of spreading pulse energy over a wider spectrum by cycling through selected frequencies at each timeslot.

The Wireless Core can be configured to use different frequencies in consecutive timeslots by providing it with a list of channels. For every timeslot, the Wireless Core will tune the radio to the next channel in the list. As a result, the list is cycled independently from the connections.
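As a sketch of this behavior, the channel used for a given timeslot depends only on a running timeslot counter, not on which connection owns the timeslot. The array contents and helper name below are hypothetical, not the Wireless Core API:

```c
#include <stdint.h>

/* Illustrative per-timeslot channel cycling: the radio is tuned to the
 * next channel in the user-provided list at every timeslot, independently
 * of the connections. */
#define CHANNEL_SEQUENCE_LEN 3

static const uint8_t channel_sequence[CHANNEL_SEQUENCE_LEN] = {0, 2, 4};

static uint8_t channel_for_timeslot(uint32_t timeslot_counter)
{
    /* The list wraps around once its last entry has been used. */
    return channel_sequence[timeslot_counter % CHANNEL_SEQUENCE_LEN];
}
```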

Addressing

Wireless Core addresses are 20 bits long. The 12 most significant bits are used for the PAN ID and the 8 least significant bits are used for the node address.

PAN ID     Node Address
12 bits    8 bits

PAN ID

The PAN ID (Personal Area Network Identification) is used to group multiple nodes into logical networks. Nodes from different networks are not able to communicate with each other. It is one of the features enabling concurrency.

Valid PAN IDs range from 0x001 to 0xFFE. 0x000 and 0xFFF are reserved.

Node Address

The node address is used to identify nodes within the same PAN.

The node addresses range from 0x00 to 0xFE. Address 0xFF is used for broadcasting on the PAN. Thus, a single PAN can support up to 255 active nodes.
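The 12/8-bit split can be expressed with a couple of bit operations. The helper names below are illustrative, not part of the Wireless Core API:

```c
#include <stdint.h>

/* Compose and split a 20-bit Wireless Core address: the 12 most
 * significant bits hold the PAN ID and the 8 least significant bits
 * hold the node address. */
static uint32_t wc_make_address(uint16_t pan_id, uint8_t node_addr)
{
    return ((uint32_t)(pan_id & 0x0FFFu) << 8) | node_addr;
}

static uint16_t wc_pan_id(uint32_t address)
{
    return (uint16_t)((address >> 8) & 0x0FFFu);
}

static uint8_t wc_node_address(uint32_t address)
{
    return (uint8_t)(address & 0xFFu);
}
```

For example, PAN ID 0xABC combined with node address 0x01 yields the 20-bit address 0xABC01.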

PAN Broadcast

A frame sent to destination address 0xFF is treated as a broadcast within the selected PAN. Every node that is part of the PAN will receive the frame as if it were specifically sent to it, with one condition: the connection's destination address on the receiving node (RX) must be set to the node's local address.

Connection

A connection is defined by a unidirectional link between a source address and a destination address. A single device can use one or multiple connections. For a device to send data, a connection must be associated with one or more timeslots within the schedule. Since a connection defines a unidirectional data stream, a timeslot can contain two connections: one for the main transmission and one for the auto-reply (if enabled).

A connection between two devices must share the same network configuration. Within this configuration, a payload queue is assigned to the connection. When the connection’s active timeslot occurs, the oldest element is removed from the queue and scheduled for transmission.

It is possible to send data to multiple devices within the same network over a single connection by using the broadcast address 255 (0xFF). The transmitting device will then communicate with every device within reach that is part of the same network (same PAN ID).

Channel

Each channel defines the frequency and the power settings of the transceiver to apply when a transmission occurs over said channel. The Wireless Core must be provided with an array of channels on which it will iterate; that is the channel sequence. Each connection has its own set of configurations for a given channel.

Stream Types

There are three types of data streams which are described in the table below:

Table 12: Stream Types Description

Best Effort: Frames are sent only once. The transmitter does not know whether the frame was successfully received on the remote side.

Limited Retransmissions: Frame reception is acknowledged (ACKed). If a frame is not ACKed after a configurable number of retransmissions, the transmitter drops it.

Guaranteed Delivery: Frame reception is ACKed. The transmitter will retransmit a frame indefinitely until it is ACKed, thus guaranteeing delivery.

Auto-Reply

The SPARK transceivers have a built-in auto-reply functionality that allows a receiver to transmit a payload immediately after a successful reception. While generally used for acknowledgements only, this auto-replied frame can also contain an effective payload. Bear in mind that while the Wireless Core allows transmission of data inside them, it does not support acknowledgements of auto-replied frames. A data stream transmitted through auto-replied frames is considered a “best-effort” stream since no retransmission mechanism can be leveraged in case of loss.

Timeslot Events

When auto-replies are enabled, two exchanges can happen inside a single timeslot. The Wireless Core treats each exchange as a separate event.

Main Frame Transmission

The main event in a timeslot is the main frame transmission. That data stream will be a best-effort transmission unless Auto-Replies are enabled.

Figure 31: Main Transmission Event

Auto-Replied ACK

When enabled, the receiver of a main frame will auto-reply an acknowledgement (ACK) to notify the transmitter that reception was successful. This allows the Wireless Core to put a retransmission mechanism in place in case a main frame is not acknowledged, enabling guaranteed delivery or limited retransmission links. The ACK frame does not contain any payload.

Figure 32: Auto-reply event (ACK only)

Auto-Replied Payload

A connection can make use of the auto-reply to transmit data in a best-effort manner. The auto-replied frame can also optionally contain an acknowledgement of the main frame alongside the payload, keeping the ability of the main frame transmitter to use a retransmission mechanism. However, keep in mind that the Wireless Core does not offer the possibility to acknowledge auto-replied frames, rendering this type of data transmission a best-effort one.

Figure 33: Auto-reply event (with data)

Synchronization Methods

Standard Sync

Upon power up, the Node will listen for a message from the Coordinator. If none is received, it will go to sleep for a predetermined amount of time and then retry listening. This method will limit power consumption of the receiver by duty cycling the listening periods instead of listening continuously for a message from the Coordinator which might not be active yet. This method usually allows the device pair to achieve synchronization in a short period of time when the frame rate on the Coordinator is high.

Fast Sync

Upon power up, the Node will continuously listen for a message from the Coordinator until it successfully receives a frame. This synchronization method is less power efficient, but will yield faster synchronization because the Node should catch frames from the Coordinator as soon as they are sent. This method should be used when the frame rate on the Coordinator is too low for the standard sync to maintain a reliable synchronization or to ensure a quick synchronization at startup.

Note

The synchronization delay depends on the defined RF schedule and the RF conditions, regardless of the synchronization method. A schedule with shorter timeslots or more synchronization frames and good RF conditions will result in a short synchronization delay. On the other hand, a schedule with longer timeslots or less frequent transmissions from the coordinator will lead to longer node synchronization delays.

Dual Radio

In applications where antenna placement is challenging, two transceivers can be used to increase coverage. Since this allows both transceivers to redundantly receive frames, the Wireless Core is free to choose which transceiver to use for transmission based on the received signal strength. Bear in mind that enabling this feature will increase the processor load of the Wireless Core by about 1.5 times the equivalent single radio scenario.

The second transceiver must use a dedicated SPI bus and DMA channel, independent of the first one, to allow for simultaneous data transfers. A 16-bit timer running at 20.48 MHz provided by the MCU is also required. The timer period must be configurable by the Wireless Core.

Topology

In any topology, the link between two Nodes can be either unidirectional or bidirectional. A given Node should always receive frames from its syncing Node at least every 10 ms in order to maintain synchronization on the network. Note that the topology of the network is entirely defined by the user configuration. Different topologies are discussed in the sections that follow.

Note

Routing is not currently supported by the Wireless Core and must be handled by the application.

Star Network Topology

In a star network, all communications are established directly between a central Node configured as Coordinator and several other Nodes.

Figure 34: Star network Topology

Peer to Peer Network Topology

In a peer-to-peer network, each Node can communicate with any other Node, but the Coordinator must send a frame to every other Node at least every 10 ms to maintain synchronization. In most cases, this frame will be sent in the form of a broadcast and is often referred to as a beacon.

Figure 35: P2P Network Topology

Cluster Tree Network Topology

In a cluster tree network, Nodes can sync on a child Node of the Coordinator rather than on the Coordinator itself. The support of this configuration is still limited and the relaying of a frame through multiple Nodes must be handled by the application.

Figure 36: Cluster Tree Network Topology

Concurrency

The Wireless Core supports a set of features which can be used to optimize concurrent operation. Those features are described in the next sections.

Personal Area Network ID (PAN ID)

The PAN ID is used to logically split the addressing space into networks. This ensures that transceivers with a given PAN ID will not receive frames sent by devices with a different PAN ID, even if the destination node address matches.

Channel Sequence Randomization

A random channel sequence can be generated by the Wireless Core to reduce the probability of collisions in the frequency domain. The sequence is generated using the PAN ID as the seed and overrides the channel sequence specified by the user. Other channel properties are maintained.

Note

The Wireless Core will override the channel sequence provided by the user with its own auto-generated random sequence when this feature is enabled.

Clear Channel Assessment (CCA)

Clear Channel Assessment (CCA) is a mechanism used to determine whether a given frequency is currently in use. The Wireless Core uses this mechanism to avoid collisions resulting from concurrent transmissions. Collision avoidance is achieved by delaying the transmission within the same timeslot. If the connection's timeslot expires before the CCA succeeds, the transmission can either be postponed to the connection's next timeslot or be transmitted anyway, depending on the CCA configuration.

Random Datarate Offset Protocol (RDO)

The Random Datarate Offset (RDO) feature intentionally adds a small amount of jitter to the link timings. The timing offset helps CCA performance in situations where devices listen at the same time and then transmit at the same time. This timing offset does not affect the overall link throughput and has the most impact when concurrent networks have the exact same schedules.

Distributed De-Synchronization algorithm

The Distributed De-Synchronization algorithm aims to optimize the air-time allocation of concurrent networks by de-synchronizing a network's coordinator from other transmissions. To do so, transmissions are delayed to reduce the number of CCA retries required per transmission. To ensure that the node stays synchronized to the coordinator, only a small delay is applied to each transmission. Furthermore, the delay is only applied while transmissions are successful. When no successful transmission has been achieved for a certain amount of time, an offset of 1024 PLL cycles is applied to find a free slot in the air-time allocation.

Sleep Modes

SPARK transceivers support 3 sleep modes: idle, shallow and deep. The transceiver will automatically enter the selected sleep mode once it has completed a transmission or reception and wake up for the next event, enabling considerable power savings. The deeper the sleep mode, the longer the wake-up delay. This has to be taken into consideration when configuring the timeslot durations. For example, idle sleep supports shorter timeslots since the transceiver wakes up from idle sleep much faster than from the other sleep modes.

Table 13: Sleep Modes Wake-Up Delay

Sleep Mode    Wake-Up Delay (us)
Idle          0.5
Shallow       62
Deep          3062

Note

Please refer to the transceiver’s datasheet for more information on sleep modes.

Note

The sleep modes can only be set globally for all timeslots since the transceiver can’t switch between modes without losing synchronization. Consequently, the Wireless Core must be completely shut down and reconfigured to be able to change the global sleep mode setting. Doing so requires considerable time (several milliseconds), so application downtime is to be expected if it must be done at runtime.

Important

Shallow sleep requires the timeslot duration to be greater than 185 us. Deep sleep requires the timeslot duration to be greater than 3062 us.
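A configuration-time check derived from these limits might look like the sketch below. The enum and helper are illustrative, not part of the Wireless Core API:

```c
#include <stdbool.h>
#include <stdint.h>

/* Validate a timeslot duration against the minimum required by the
 * selected global sleep mode (greater than 185 us for shallow sleep,
 * greater than 3062 us for deep sleep). */
typedef enum { SLEEP_IDLE, SLEEP_SHALLOW, SLEEP_DEEP } sleep_mode_t;

static bool timeslot_duration_ok(sleep_mode_t mode, uint32_t duration_us)
{
    switch (mode) {
    case SLEEP_SHALLOW: return duration_us > 185;
    case SLEEP_DEEP:    return duration_us > 3062;
    default:            return true; /* idle sleep: no extra constraint */
    }
}
```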

Modulation

Two modulation schemes are available: Inverted On-Off Keying (IOOK) and 2-bit Pulse Position Modulation (2-bit PPM). Each scheme has its merits. For example, transmitting a frame takes half the time in IOOK compared to 2-bit PPM, but a better link budget is expected when using 2-bit PPM.

Guidelines for choosing between the two schemes:

  • IOOK: Use IOOK modulation for high data rate applications.

  • 2-bit PPM: Use 2-bit PPM to obtain a higher link budget.

The bit coding of both schemes is shown in the figure below.

Figure 37: Modulation bit sequence

Forward Error Correction (FEC) Level

The goal of the FEC is to correct errors which might occur during over-the-air transmissions. FEC adds redundant bits to a bitstream and thus increases the frame size.

The SPARK radio supports four levels of FEC. Each level is defined by its redundancy rate, the factor by which the frame size is inflated. FEC level 0 corresponds to disabling FEC and does not inflate the frame. At its maximum level of 3, the FEC doubles the frame size.

Table 14: Forward Error Correction Rates

FEC Level    Frame Inflation Rate
0            x1
1            x1.334
2            x1.667
3            x2
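The inflation rates above can be applied as follows to estimate the over-the-air frame size. The integer arithmetic and round-up behavior are assumptions made for this sketch:

```c
#include <stdint.h>

/* Approximate frame size after FEC inflation, using the rates from
 * Table 14 expressed in thousandths to stay in integer arithmetic. */
static uint32_t fec_inflated_size(uint32_t frame_bytes, uint8_t fec_level)
{
    static const uint32_t rate_x1000[4] = {1000, 1334, 1667, 2000};
    /* +999 rounds the division result up to the next whole byte. */
    return (frame_bytes * rate_x1000[fec_level] + 999) / 1000;
}
```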

Stop and Wait

The Stop and Wait control mechanism handles frame retransmissions automatically when a frame is not acknowledged. Acknowledgements must be enabled on the connection to use this setting. There are two modes for the Stop and Wait mechanism: “Retry Count” and “Deadline”. The first mode counts the number of retransmission attempts before dropping a frame. The second mode drops a frame if it is not delivered after a certain amount of time, expressed in ticks. The tick duration is defined by the user and depends on the HAL implementation.

For a guaranteed delivery quality of service, both timeout settings must be set to 0. That way, the Stop and Wait module will retransmit the frame indefinitely until it is acknowledged. This type of service is commonly used for non-real-time data transfers (e.g. file transfers or firmware updates).
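The drop decision for the two modes can be sketched as follows, with 0 meaning "unlimited" in both modes as described above. Types and names are hypothetical, not the Wireless Core API:

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative Stop and Wait drop decision. A limit of 0 in both
 * fields means "retransmit forever" (guaranteed delivery). */
typedef struct {
    uint32_t retry_count_limit; /* "Retry Count" mode, 0 = unlimited */
    uint32_t deadline_ticks;    /* "Deadline" mode, 0 = unlimited    */
} saw_config_t;

static bool saw_should_drop(const saw_config_t *cfg,
                            uint32_t retries_done, uint32_t ticks_elapsed)
{
    if (cfg->retry_count_limit != 0 && retries_done >= cfg->retry_count_limit)
        return true; /* retry budget exhausted */
    if (cfg->deadline_ticks != 0 && ticks_elapsed >= cfg->deadline_ticks)
        return true; /* deadline expired */
    return false;    /* keep retransmitting */
}
```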

Frame Fragmentation

The frame fragmentation mode enables users to transmit payloads that exceed the maximum payload size of the connection. The Wireless Core will automatically segment the user payload into multiple fragments and enqueue them as multiple connection queue frames. Upon reception, fragments are buffered in the RX queue until the last fragment is received. The reception of the last fragment triggers reconstruction of the frame, and the Wireless Core then notifies the user through an RX callback. The biggest payload that can be sent is limited by the size of the connection queue and the connection's maximum payload size. The fragmentation adds 3 bytes to the header of the first fragment and 2 bytes to the header of subsequent fragments.
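Assuming the fragmentation header bytes are carried inside the connection's maximum payload size (an assumption for this sketch; the exact accounting is internal to the Wireless Core), the fragment count and total overhead can be estimated as:

```c
#include <stdint.h>

/* Estimate how many fragments a payload needs: the first fragment
 * carries (max_payload - 3) user bytes, subsequent fragments carry
 * (max_payload - 2) user bytes each. */
static uint32_t frag_count(uint32_t payload, uint32_t max_payload)
{
    uint32_t first = max_payload - 3; /* first fragment: 3-byte overhead  */
    uint32_t rest  = max_payload - 2; /* later fragments: 2-byte overhead */
    if (payload <= first)
        return 1;
    return 1 + (payload - first + rest - 1) / rest; /* ceiling division */
}

static uint32_t frag_overhead_bytes(uint32_t payload, uint32_t max_payload)
{
    return 3 + 2 * (frag_count(payload, max_payload) - 1);
}
```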

Fallback Mode

The Wireless Core gives the user the possibility to define an alternative channel configuration (power settings) to switch to when a predetermined failure threshold is crossed. This mechanism is called the fallback mode. In that mode, the application is expected to reduce its data throughput by providing smaller payloads to compensate for the increase in power emission. This allows the system to tradeoff throughput for increased range or robustness in suboptimal environmental conditions.

Connection Priority

The Wireless Core allows the user to set a priority for each defined connection. This priority is used by the Wireless Core to determine which connection should be scheduled first when multiple connections are assigned to the same timeslot. The priority is set by the user when the connection is created.

On the reception timeslot, the parameters from the first connection added to the timeslot will be used to configure the radio. This could lead to unexpected results if some parameters are different between connections, such as CCA settings and maximum payload size.

If auto-reply connections are used, only one remote device can be linked to a timeslot. Otherwise, any connection can be linked to any device on the network.

Here are some use cases:

  1. Audio streaming with data payload every 10 ms.

    In this use case, an audio streaming application uses about 75% of the transmission timeslots, which gives it a 25% retransmission margin. This connection transmits time-sensitive payloads, so it should have the highest priority. The data connection can then be set to a lower priority and will only transmit on timeslots not used by the audio connection.

    • If the link quality degrades, the audio connection will use a bigger percentage of the transmission timeslots. This could limit the ability of the data connection to transmit its payload.

    • This use case allows the audio connection to stop transmitting its payload if nothing needs to be transmitted (audio paused) since the data connection would keep the sync between the coordinator and the node.

  2. Star Network with a beacon every 10 ms.

    In this use case, a coordinator transmits data to one node and receives data from other nodes. To maintain the synchronization of all the nodes in the network, a beacon must be sent periodically to all the nodes. The beacon can be added to one of the coordinator's transmission timeslots with the highest priority to guarantee its transmission.

  3. Multiple low frame rate connections.

    In this use case, multiple low frame rate connections can share one timeslot. For example, three connections generating ~500 packets per second each could share one 2000-frame-per-second timeslot.

Ranging Mode

The Wireless Core gives the user the possibility to activate ranging mode when needed. When it is enabled on a connection, it will either serve as an Initiator or a Responder. The latter is responsible for transferring fine-grained frame reception timing information to the Initiator in the form of preamble phase correlation metrics. These metrics are used for time-of-flight (ToF) fine-tuning and allow for better ranging precision. Therefore, a responder can only be configured on an auto-reply transmitting connection, as a deterministic reply time is required by a ToF ranging technique. The responder will bundle up the four preamble phase correlation metrics and include them in the corresponding header field. Moreover, when ranging mode is enabled, the Wireless Core triggers a configured application callback when enough ranging data sets are available for distance processing. The required number of data sets to be collected is configurable.

Frame Structure

There are three types of frames that can be generated by the Wireless Core: data frames, sync frames and acknowledgement frames. It’s important to note that a few bytes of the PHY payload will be utilized for the MAC layer. The exact number of bytes depends on the configured header for the specific connection.

Data frames

Data frames are frames that contain an application payload. They can be sent as main frames or in an auto-reply.

Figure 41: Data frame

Sync frames

Sync frames are used when auto-sync mode is enabled on a connection. When the queue of a connection is empty, a frame containing only a header will be sent to keep the network synchronized.

Figure 42: Sync frame

Acknowledgement frames

Acknowledgement frames can only be sent in an auto-reply. This type of frame does not contain a payload.

Figure 43: Acknowledgment frame

Header Configuration

The Wireless Core packet header is composed of a variable number of fields which are determined at connection creation and depend on the enabled features. For example, when a connection is configured for ranging, the Wireless Core appends ranging-specific fields to the header prior to transmission. Here is the list of possible fields:

Table 15: List of Header Fields

Configuration            Description                                          Size (bits)   Mandatory
Header Size              Size of the header in bytes                          8             Yes
Stop and Wait            See Stop and Wait section.                           1             Yes
Timeslot ID              See Schedule section.                                7             Yes
Channel Index            See Channel section.                                 8             Yes
Random Datarate Offset   See Random Datarate Offset Protocol (RDO) section.   16            No
Ranging Initiator        See Ranging Mode section.                            8             No
Ranging Responder        See Ranging Mode section.                            40            No
Connection ID            See Connection Priority section.                     8             No
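For budgeting payload space, the header size can be approximated by summing the field sizes from the table above. The exact bit packing is internal to the Wireless Core, so treat this as a rough estimate, not a wire-format definition:

```c
#include <stdbool.h>
#include <stdint.h>

/* Sum of header field sizes: the four mandatory fields total 24 bits,
 * and each enabled optional field adds its own size. */
static uint32_t header_bits(bool rdo, bool rng_initiator,
                            bool rng_responder, bool conn_id)
{
    uint32_t bits = 8 + 1 + 7 + 8; /* header size, stop and wait,
                                      timeslot ID, channel index */
    if (rdo)           bits += 16;
    if (rng_initiator) bits += 8;
    if (rng_responder) bits += 40;
    if (conn_id)       bits += 8;
    return bits;
}

static uint32_t header_bytes(bool rdo, bool rng_i, bool rng_r, bool cid)
{
    return (header_bits(rdo, rng_i, rng_r, cid) + 7) / 8; /* round up */
}
```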

Data Validation

The transceiver implements multiple layers of data protection in a frame to ensure proper transmission. These mechanisms are handled automatically by the Wireless Core.

CRC

A frame with a failing cyclic redundancy check (CRC) is discarded and not acknowledged.

Address filtering

Only frames with a destination address matching the local address are passed on to the Wireless Core. Frames with a non-matching address are discarded by the transceiver.

Syncword match

Frames with the wrong syncword are automatically discarded by the transceiver.

Max frame size

Each connection has a maximum frame size. If a larger frame is received, it will automatically be discarded to avoid buffer overflows.

MCU Requirements

MCU Footprint

Memory

The table below shows the memory usage of the Wireless Core library compiled with GCC from the GNU Arm Embedded Toolchain v9.3.1 using the -O2 optimization flag.

Table 16: Memory Usage of Wireless Core Library

Memory Section    Memory Usage (kB)
RAM               2
Flash             35

Memory usage is affected by factors such as the number of wireless connections and their queue size. Expect an increase of ~1 kB of RAM for each extra connection, and ~0.5 kB for each extra element in the connection queue.
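These rules of thumb can be captured in a quick estimator (rough planning numbers only, in bytes; they are not guarantees):

```c
#include <stdint.h>

/* Ballpark RAM estimate from the figures above: ~2 kB base,
 * ~1 kB per extra connection and ~0.5 kB per extra connection
 * queue element. */
static uint32_t wc_ram_estimate_bytes(uint32_t extra_connections,
                                      uint32_t extra_queue_elements)
{
    return 2048 + 1024 * extra_connections + 512 * extra_queue_elements;
}
```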

Additional RAM and flash space must be allocated for the application layer which is not included in these estimates.

CPU

CPU usage varies depending on the selected MCU. The Wireless Core is entirely driven by interrupts from the transceiver IRQ pin and from the SPI DMA. The handlers for these two interrupts call the functions that process the Wireless Core state machine. These interrupts should always be processed with high priority; delaying their handling can greatly reduce the performance of the wireless link.

The Wireless Core timing diagram shown below illustrates the processor usage and the interrupt model of the Wireless Core.

Figure 44: Wireless Core timing diagram

Function                    Description
read_event                  Asks the radio for the IRQ flags after a radio interrupt.
process_event               Reads the IRQ flags and takes action depending on the outcome.
get_header                  Reads the link header from the radio FIFO.
get_payload                 Reads the payload from the radio FIFO.
mac_process_frame           Processes the MAC layer at the end of a frame.
mac_prepare_frame           Processes the link layer at the beginning of a frame.
prepare_radio_cfg           Prepares the radio registers for the next transmission/reception.
send_radio_cfg              Sends the radio configuration registers prepared in the prepare_radio_cfg state.
callback_context_switch     User-provided function to trigger the callback context switch.
set_header                  Writes the link header to the radio FIFO.
set_payload                 Writes the user payload to the radio FIFO.
close_spi                   Resets the SPI chip select pin, then waits for a radio event.

Example of processing times on the EVK1.4 board:

Configuration                         Min     Average   Max
Unidirectional 1 byte payload TX      47 us   54 us     56 us
Unidirectional 125 byte payload TX    77 us   83 us     85 us
Unidirectional 1 byte payload RX      53 us   53 us     54 us
Unidirectional 125 byte payload RX    42 us   59 us     66 us
Bidirectional 1 byte payload          44 us   54 us     71 us
Bidirectional 125 byte payload        44 us   69 us     97 us

To determine the processing time for a given MCU implementation, one can measure the delay between the rising edge of the transceiver IRQ pin, and the last rising edge of the CS pin as shown by the red markers in the following image:

Figure 45: Processing Time Measurement

Schedule Design Considerations

Bandwidth

When designing the schedule, the user must consider the bandwidth requirements for every device. The peak data rate of a specific connection can be determined from the number of allocated timeslots within the schedule. The diagram below illustrates a scenario where a device with the address 0xCD01 has 3 times more link bandwidth than the device at the address 0xAAAA (assuming equally sized payloads and equal timeslot durations).

Figure 46: Wireless Core data rate

The timeslot duration and the schedule period itself are other factors that come into play for the bandwidth. The user must configure the duration of every timeslot when configuring the schedule.

The maximum datarate in kbps of a connection is given as:

\(datarate = \frac{nTs \times Ps \times 8 \times 1000}{Sl \times Tt} kbps\)

where \(nTs\) is the number of timeslots allocated to the connection in the schedule, \(Ps\) is the maximum payload size in bytes of the connection, \(Tt\) is the timeslot duration in microseconds and \(Sl\) is the total number of timeslots in the schedule, so that \(Sl \times Tt\) is the schedule period in microseconds.

Example:

\(nTs = 2\);

\(Ps = 120\);

\(Sl = 15\);

\(Tt = 250us\);

\(datarate = \frac{2 \times 120 \times 8 \times 1000}{15 \times 250} = 512kbps\)
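The same computation in code, interpreting \(Sl\) as the total number of timeslots in the schedule so that \(Sl \times Tt\) is the schedule period in microseconds (helper name is illustrative):

```c
#include <stdint.h>

/* Maximum connection datarate in kbps: bits sent per schedule period,
 * divided by the schedule period (Sl * Tt microseconds), scaled to kbps. */
static uint32_t max_datarate_kbps(uint32_t n_ts, uint32_t ps_bytes,
                                  uint32_t sl, uint32_t tt_us)
{
    return (n_ts * ps_bytes * 8u * 1000u) / (sl * tt_us);
}
```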

Retransmission Margin

The retransmission margin consists of normally unused timeslots that can be used for retransmissions when the link starts to degrade. By accounting for a retransmission margin in the wireless schedule, we are in fact designing a link which allows for a higher throughput than what the application requires. A higher retransmission margin will increase the robustness of the link. The concept of retransmission margin only applies when the connection is using the Stop and Wait feature.

The retransmission margin is given as:

\(margin = \frac {cD - aD}{cD}\times100\)

where \(cD\) is the maximum data rate of the connection and \(aD\) is the data rate of the application for a given connection.

Example:

\(cD = 512kbps\);

\(aD = 384kbps\);

\(\frac {512kbps - 384kbps}{512kbps}\times100 = 25\%\)
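The margin computation in code (helper name is illustrative):

```c
/* Retransmission margin in percent: (cD - aD) / cD * 100, where cD is
 * the maximum connection datarate and aD is the application datarate. */
static double retransmission_margin_pct(double connection_kbps,
                                        double application_kbps)
{
    return (connection_kbps - application_kbps) / connection_kbps * 100.0;
}
```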

Sync Timeslot (Beacon)

The Sync Timeslot, or beacon, is a timeslot where the coordinator sends either a normal frame, which contains a header and a payload, or an empty frame, which contains only a header. The maximum transmission period for this beacon is 10 ms; a greater interval will result in a loss of sync between the coordinator and the receiving node. If the coordinator does not have data to send at the time of the sync timeslot, the auto-sync feature can be used, which lets the Wireless Core handle the transmission of the sync frame automatically. If the coordinator never has data to send, its sync connection can have a FIFO size of 0. In a star or peer-to-peer network, this sync frame should be sent by the coordinator as a broadcast, e.g. with the broadcast node address 0xFF.

Concurrency Considerations

The following figure shows 2 timeslots of a generic schedule operating with concurrency:

Figure 47: Generic concurrency schedule

During the first timeslot, the Network #1 transmitter executed CCA, determined that the air was free, and started to transmit on Channel #0. The Network #2 transmitter also performed CCA, but slightly after the Network #1 transmitter started transmitting, which resulted in a failed CCA because Channel #0 was in use. The Network #2 transmitter then waited for a predetermined delay before retrying. At some point, its CCA succeeded and it was able to transmit over Channel #0 because the other transmitter had completed its transmission.

During the second timeslot, both transmitters were able to transmit simultaneously as both their CCAs succeeded. This is because the frequency separation between Channel #3 and Channel #1 is high enough (at least 1 GHz), thanks to the random channel sequence mechanism managed by the Wireless Core.

In order to achieve proper concurrency, the following equations must be true:

\(T_{CCA} \geq T_{AIR}\)
\(T_{SLOT} - (T_{CCA} \times Retry_{COUNT}) - T_{AIR} - T_{PROC} \geq 0\)

Where \(T_{CCA}\) is the CCA delay, \(Retry_{COUNT}\) is the number of times the CCA check is done, \(T_{AIR}\) is the air time of the frame, \(T_{SLOT}\) is the timeslot duration and \(T_{PROC}\) is the processing time required by the Wireless Core.

The number of links supported without any frame delivery degradation \(N\) is equal to the following, rounded down:

\(N = \frac{T_{CCA} \times Retry_{COUNT}}{T_{AIR}}\)

Latency Calculation

Here, we will look at latency and how it is affected by the 3 types of data transfer: best effort, limited retransmission and guaranteed delivery.

  • Best effort: There is only one attempt to transmit every frame. If the frame is not received correctly, it is dropped. This is done by configuring a connection without acknowledgement and Stop and Wait. Minimum latency (\(LatencyMin\)) and maximum latency (\(LatencyMax\)) for a successfully transmitted frame can be evaluated as follows:

    \(LatencyMin = (AirTime + CallbackDelay)\) \(\mu\)\(s\)
    \(LatencyMax = (AirTime + CallbackDelay + RefreshPeriod)\) \(\mu\)\(s\)

    where \(AirTime\) is the frame’s air time, \(CallbackDelay\) is the delay to trigger the RX callback and \(RefreshPeriod\) is the connection refresh period.

    The connection refresh period is the duration between data transfers for the maximum data rate case on a connection.

  • Limited retransmission: A frame is retransmitted until the frame is either successfully transmitted or dropped after a timeout period has expired. This is done by configuring a connection with acknowledgement and Stop and Wait with a timeout. Minimum latency and maximum latency for a successfully transmitted frame can be evaluated as follows:

    \(LatencyMin = (AirTime + CallbackDelay)\) \(\mu\)\(s\)
    “Retry count” mode (see Stop and Wait):
    \(LatencyMax = (AirTime + CallbackDelay + RefreshPeriod + RetxCount \times RefreshPeriod)\) \(\mu\)\(s\)
    “Deadline” mode (see Stop and Wait):
    \(LatencyMax = (AirTime + CallbackDelay + RefreshPeriod + DeadlineDelay)\) \(\mu\)\(s\)

    where \(RetxCount\) is the maximum number of retransmissions selected and \(DeadlineDelay\) is the deadline time value selected.

  • Guaranteed delivery: A frame is retransmitted until it is successfully transmitted. This is done by configuring a connection with acknowledgement and Stop and Wait with the timeout set to 0. Minimum latency and maximum latency for a successfully transmitted frame can be evaluated as follows:

    \(LatencyMin = (AirTime + CallbackDelay)\) \(\mu\)\(s\)
    \(LatencyMax = \infty\)

Note

For real-time streaming applications, the connection’s TX and RX queue size will generally act as buffers and thus induce latency in the system. A greater queue size will allow for greater instantaneous PER spikes without interruption of service but will yield higher latency figures. The ideal buffering to put in place in an application varies depending on the schedule definition. As a rule of thumb, set the buffering length (latency) to a duration equal to 15 transmissions of the associated connection. In other words, a device needs to be able to transmit 15 packets in a period equal to the added latency. For example, if a buffer length of 5ms is used, the associated connection should be able to send packets at >3000 packets/second (15 packets/5 ms). The RX buffer of the receiver should have the same size as the TX buffer of the transmitter.

Case Study: Latency Evaluation of an Application

In this section, the Wireless Core latency is examined and the expected variations that can occur are explained. First, it must be understood that the interface between the Application and the Wireless Core is an asynchronous interface. A buffer (Queue) is implemented between the two interfaces. In a transmit scenario, data is written to the buffer (TX Queue) by the Application at the Application Rate, and data is read from the buffer (TX Queue) at the Wireless Core rate. The two rates are not synchronized with each other and are not usually the same frequency (Wireless Core frequency is typically higher than the Application frequency to allow for retransmissions). In such a scenario, there will be latency variations.

The latency introduced by the Wireless Core can be measured by toggling GPIOs on both the Transmitting and Receiving devices. The first GPIO is activated when the application writes data into the TX Queue and the second GPIO is activated when the Wireless Core writes data into the RX Queue (please refer to the Wireless Core Latency Representation figure).

An oscilloscope can then be used to capture the IO state changes and compile a histogram of the handling time of the data by the Wireless Core including over-the-air (OTA) transmission time. Latency statistics such as minimum, maximum, average and standard deviation can then be obtained from that data to effectively qualify it.

In the example that follows, it is assumed that the application does not buffer data (application data is sent directly to the TX Queue); It is also assumed that all transmissions are successful (perfect RF link with no retransmissions). The figure below illustrates the signal path where the latency measurements are taken:

figure not found: Wireless Core Latency Representation

Figure 48: Wireless Core Latency Representation.

Latency variations depend on the schedule and on application timing. Let’s evaluate them for the following sample schedule:

figure not found: Example Schedule

Figure 49: Example schedule. Note that the schedule loops indefinitely.

Looking at a single iteration of the schedule (every 1 ms), we can see that the Coordinator has four transmission opportunities whereas the Node has only one. The following section shows how timeslot allocation plays a major role in the latency jitter that occurs in practice. The amount of jitter depends on when the application data is yielded to the Wireless Core.

An important detail must be understood before diving into the analysis: the Wireless Core always prepares the action to be taken in the next timeslot (e.g., timeslot #3) at the beginning of the preceding timeslot (e.g., timeslot #2). This means that the application data is guaranteed to wait a little more than one timeslot’s duration (e.g., timeslot #2 + air time). Here is a breakdown of the theoretical latency for both data stream directions.

Coordinator TX -> Node RX Latency

Starting with the best case scenario, one in which the application yields data to the Wireless Core at the perfect moment (just before timeslot #1), we can assume that the next transmission opportunity will occur in a little more than a timeslot’s duration (in timeslot #2), making ~200us the lowest expected transmission latency for the Coordinator. Note that it will be slightly higher than that due to the air time and data handling time which can vary. On the other hand, if the application provides data to the Wireless Core at the least convenient moment (during timeslot #3, just after missing the preparation for timeslot #4), the Wireless Core will not be able to send the data in the next timeslot (#4). Furthermore, because timeslot #5 is an RX timeslot, the next transmission opportunity will be in timeslot #6 (which is identical to timeslot #1). This means that the data will wait in the Wireless Core queue for a little more than three timeslots (#3 to #6), making the worst case theoretical latency ~600us (+ air time and data handling time).

Table 17: Coordinator TX Min/Max Latencies

  Min: ~200us
  Max: ~600us

Node TX -> Coordinator RX Latency

Starting with the best case scenario, one in which the application yields data to the Wireless Core at the perfect moment, we can assume that the next transmission opportunity will occur in a little more than a timeslot’s duration, making ~200us the lowest expected transmission latency for the Node. On the other hand, if the application provides data to the Wireless Core at the least convenient moment (during timeslot #4, just after missing the preparation for timeslot #5), the Wireless Core will not be able to send the data in the next timeslot (#5). Furthermore, because only timeslot #5 is dedicated for transmission, the next transmission opportunity will be in a complete schedule loop (which is equivalent to timeslot #10). This means that the data will wait in the Wireless Core queue for a little more than six timeslots (#4 to #10), making the worst case theoretical latency ~1.2ms (+ air time and data handling time).

Table 18: Node TX Min/Max Latencies

  Min: ~200us
  Max: ~1.2ms

Those results can be confirmed by using the method described at the beginning of this section. Generating a histogram of the latency value per frame with an oscilloscope will highlight the fact that those minimum and maximum latency values are corner cases. In practice, latencies between the minimum and maximum are far more likely to occur than the minimum or maximum themselves. In the end, the average latency can be obtained from the histogram and will generally sit somewhere between the expected minimum and maximum, with a low standard deviation.

Schedule Design Trade-off

There are always trade-offs between latency, maximum data rate and link budget for any schedule design.

From the Latency Calculation equations, frame air time and the connection refresh period should be decreased in order to reduce latency. However, decreasing those parameters can result in lower maximum data rate and/or link budget.

The 3 main factors that impact frame air time and connection refresh period are modulation, FEC level and payload size. For modulation and FEC level, there are 2 recommended configurations:

  • 2 bit PPM, FEC1: This will yield better link budget but longer frame air time.

  • OOK, FEC2: This will yield lower link budget but shorter frame air time.

A large payload which minimizes PHY overhead results in higher application data rates at the expense of higher latency and lower link budget.

A smaller payload with shorter air-time results in lower latency and higher link budgets at the expense of lower application data rates due to higher PHY overhead.

The figure below summarizes the effects that payload size and Modulation/FEC have on link budget, latency and data rate.

Figure not found: Schedule design trade-offs

Figure 50: Schedule design trade-offs

Power Saving Strategies

In this section, some strategies are outlined for reducing power consumption.

  • Reduce the transceiver’s input voltage:

    Keep the supply voltage as low as possible. The transceiver’s supply voltage can range from 1.8V to 3.3V.

  • Avoid use of firmware delay:

    A software delay keeps the CPU active, needlessly consuming power. Use timer peripherals instead and apply a power saving method while waiting for the timer period to end.

  • Use non-blocking transfer:

    Always try to use non-blocking transfers with the help of DMA peripherals when using a communication interface (SPI, I2C, USART, etc.). Instead of blocking the CPU while waiting on a transfer complete event, a non-blocking transfer allows the MCU to perform other processing tasks or go into sleep mode.

  • Change the transceiver’s sleep level:

    If the target data rate allows for a deeper sleep level, do not hesitate to use the transceiver’s shallow or deep sleep levels. Those sleep levels decrease the current consumption of the transceiver when not transmitting or receiving by shutting down its internal DC/DC converter. See the Sleep Modes section for further details on sleep modes.

  • Disable application logging:

    If possible, disable all logging. Don’t forget to disable unused serial peripherals accordingly (e.g.: UART).

  • Keep unused peripherals disabled:

    Do not initialize unused peripherals to prevent them from consuming any power.

  • Reduce peripherals clock speed:

    Lowering the clock speed of active peripherals (e.g.: SPI, UART, GPIOs) will decrease power consumption. For instance, if your application does not require a high data throughput and has loose or no latency requirements, lower the clock speed of the SPI peripheral controlling the transceiver. The default speed of 40MHz is not always required.

  • Use link throttling:

    The link throttling allows the user to define an alternative, lower power mode of operation. If the application can reduce its data generation rate, link throttling can be activated to reduce the Wireless Core footprint and free up CPU cycles. See the Link Throttling section for more details on this feature.

Pairing Module

Description

The Pairing Module is used by applications to dynamically assign addresses to new nodes wanting to join a network. This module runs at the application level, so it uses the SWC API to create a dedicated schedule with its own connections. The predefined pairing schedule complies with FCC, ETSI and ARIB regulations (user selectable).

Pairing Usage

To initiate the Pairing Procedure, the application needs to invoke either the pairing_coordinator_start() or the pairing_node_start() function from the Pairing API, depending on the device’s network role.

Both functions require the Pairing Configuration, Pairing Assigned Address, and Pairing Error structures as parameters.

In the case of the coordinator, it is also necessary to provide the Pairing Discovery List along with its corresponding size. The list size depends on the number of devices within the application’s network.

For the node, in addition to these parameters, it is crucial to include the Pairing Device Role. This role is application-specific and allows the coordinator to store the node’s unique ID and address within the Pairing Discovery List.

Both functions yield a Pairing Event as a return value, which needs to be appropriately handled at the application level.

/** @brief Start a pairing procedure with the coordinator.
 *
 *  @param[in]  pairing_cfg               Pairing configurations from the application.
 *  @param[out] pairing_assigned_address  Pairing addresses exchanged during the pairing procedure.
 *  @param[in]  discovery_list            List of discovered devices used by the coordinator.
 *  @param[in]  discovery_list_size       The size of the discovery list.
 *  @param[out] pairing_err               Pairing module error code.
 *  @return The pairing event.
 */
pairing_event_t pairing_coordinator_start(pairing_cfg_t *pairing_cfg, pairing_assigned_address_t *pairing_assigned_address,
                                          pairing_discovery_list_t *discovery_list, uint8_t discovery_list_size,
                                          pairing_error_t *pairing_err);

/** @brief Start a pairing procedure with the node.
 *
 *  @param[in]  pairing_cfg               Pairing configurations from the application.
 *  @param[out] pairing_assigned_address  Pairing addresses exchanged during the pairing procedure.
 *  @param[in]  device_role               Application level device role from the node.
 *  @param[out] pairing_err               Pairing module error code.
 *  @return The pairing event.
 */
pairing_event_t pairing_node_start(pairing_cfg_t *pairing_cfg, pairing_assigned_address_t *pairing_assigned_address,
                                   uint8_t device_role, pairing_error_t *pairing_err);

Pairing Configuration

The pairing configuration structure is used to configure the Pairing Module. It is essential to populate all parameters within the structure to ensure proper function of the Pairing Module. Only the application callback is optional.

/** @brief Pairing parameters from the application.
 */
typedef struct pairing_cfg {
    /*! Application code to prevent pairing unwanted devices. */
    uint64_t app_code;
    /*! The timeout period in seconds after which the pairing process will stop. */
    uint16_t timeout_sec;
    /*! Wireless Core hardware abstraction layer. */
    swc_hal_t *hal;
    /*! Optional application callback function pointer. */
    void (*application_callback)(void);
    /*! Ultra-wideband regulation used for the pairing process. */
    swc_uwb_regulation_t uwb_regulation;
    /*! Memory pool instance from which memory allocation is done. */
    uint8_t *memory_pool;
    /*! Memory pool size in bytes. */
    uint32_t memory_pool_size;
} pairing_cfg_t;

Pairing Assigned Address

The Pairing Assigned Address structure serves as an output parameter that gets populated during the pairing procedure. Upon successful completion of the pairing process, the application can utilize the PAN ID, coordinator address, and node address from this structure to establish a wireless connection. For applications with multiple Nodes, the Node’s assigned address will correspond to the most recently paired device.

/** @brief Pairing addresses discoverable during the pairing procedure.
 */
typedef struct pairing_assigned_address {
    /*! Coordinator's PAN ID. */
    uint16_t pan_id;
    /*! Coordinator's address. */
    uint8_t coordinator_address;
    /*! Node's assigned address. */
    uint8_t node_address;
} pairing_assigned_address_t;

Pairing Discovery List

The pairing_coordinator_start() function requires the Discovery List structure and the discovery list size parameter as inputs.

The Discovery List serves a crucial purpose when an application involves multiple nodes. It is used by the coordinator to remember the different paired nodes inside the network.

As the application already possesses knowledge of the number of nodes it encompasses, it is responsible for creating the Discovery List locally and passing it through the pairing_coordinator_start() function, along with the corresponding list size.

Upon a successful pairing procedure, both the coordinator and the newly paired device will be added to the discovery list. This information can then be utilized by the application to configure wireless connections.

/** @brief Pairing list of discovered devices used by the coordinator.
 */
typedef struct pairing_discovery_list {
    /*! Generated unique ID. */
    uint64_t unique_id;
    /*! Address of the node. */
    uint8_t node_address;
} pairing_discovery_list_t;

Device Role

The Device Role parameter is utilized by the pairing_node_start() function and serves as the index within the Pairing Discovery List.

In any network, the Coordinator is assigned the “0” device role and thus always occupies index 0 of the discovery list. It is therefore the responsibility of the application to assign a Device Role to each node, ensuring proper indexing within the discovery list.

Pairing Error

The Pairing Error parameter serves as a tool to communicate any critical issues that might occur during the Pairing Procedure, leading to an automatic pairing process abortion in such cases. It is essential for the user to handle the returned error by utilizing the pairing error parameter.

/** @brief Pairing API error structure.
 */
typedef enum pairing_error {
    /*! No error occurred. */
    PAIRING_ERR_NONE = 0,
    /*! A NULL pointer is passed as argument. */
    PAIRING_ERR_NULL_PTR,
    /*! Discovery list size must be 2 or more. */
    PAIRING_ERR_DISCOVERY_LIST_SIZE_TOO_SMALL,
    /*! The application code is not configured. */
    PAIRING_ERR_APP_CODE_NOT_CONFIGURED,
    /*! Timeout is shorter than the minimum timeout duration. */
    PAIRING_ERR_TIMEOUT,
    /*! HAL has not been initialized at the application level. */
    PAIRING_ERR_HAL_NOT_INITIALIZED,
    /*! The wireless regulation chosen is not supported. */
    PAIRING_ERR_REGULATION_OPTION_NOT_SUPPORTED,
    /*! The node's device role conflicts with the coordinator's reserved role. */
    PAIRING_ERR_DEVICE_ROLE,
    /*! A wireless error occurred. */
    PAIRING_ERR_WIRELESS_ERROR,
    /*! Wireless configurations can't be changed while the SWC is running. */
    PAIRING_ERR_CHANGING_WIRELESS_CONFIG_WHILE_RUNNING,
} pairing_error_t;

Pairing Events

Pairing Events are used to communicate the outcome of the Pairing Procedure back to the application. The application must then handle those events and act accordingly whether the pairing procedure was successful or not.

/** @brief Pairing event when exiting pairing procedure.
 */
typedef enum pairing_event {
    /*! No event occurred. */
    PAIRING_EVENT_NONE = 0,
    /*! The pairing procedure is successful. */
    PAIRING_EVENT_SUCCESS,
    /*! The timeout was reached. */
    PAIRING_EVENT_TIMEOUT,
    /*! The application code is not valid. */
    PAIRING_EVENT_INVALID_APP_CODE,
    /*! The pairing procedure was aborted from an external source. */
    PAIRING_EVENT_ABORT,
} pairing_event_t;

Pairing Application Callback

The Application callback is an optional feature aimed at preventing the user application from being blocked while the pairing procedure is underway.

With the Application callback, the application is given CPU control at the completion of each state, allowing it to handle background tasks such as button press detection or LED management.

It is advised for the user to exercise caution and refrain from implementing lengthy functions or delays within this callback, as this may increase the pairing procedure time.

To use this feature, the application must assign a function pointer to the application callback member of the pairing configuration structure before starting the Pairing Procedure.

Pairing Abort

The pairing abort feature provides a mechanism for externally stopping the pairing procedure and making it return an “Abort Event”.

This feature can be effectively combined with the application callback, granting the application the ability to initiate the cancellation of the pairing procedure if necessary.

To use this feature, the pairing_abort() function must be called while the Pairing Procedure is underway.

Timeout

As part of the pairing configuration, the user is required to define a timeout duration in seconds for the pairing procedure. This timeout duration is internally managed by the pairing module.

This timeout is checked between each step of the pairing process. Once the specified time limit is reached, the pairing procedure will automatically cease, and the application will be notified through the timeout event.

Application Code

The application code serves as a crucial security feature to prevent unauthorized pairing between incompatible devices. As all SPARK devices utilize the Wireless Core and have access to the pairing module, the possibility exists for any device to attempt pairing with others. By specifying an application code, users can limit the chances of undesired pairing by ensuring that only devices sharing the same application code can complete the pairing procedure successfully.

During the authentication phase, the coordinator will transmit its application code, which is then compared by the node to check for a match. If the codes match, the pairing continues. However, if there is a mismatch, the pairing procedure is promptly terminated, and the application is notified through the invalid application code event.

Unique ID

The unique ID comes from SPARK radio’s 64-bit serial number. The serial number has been assigned during manufacturing and is unique among all SPARK transceivers of the same model. The serial number is automatically stored when starting a Pairing Procedure, no user action is needed.

Pairing Address Generation

During the pairing procedure, the coordinator generates its own address and the nodes’ addresses. Addresses are generated using a CRC16-CCITT scheme which uses the device’s unique ID as its seed. The following PAN IDs are reserved: 0x000 and 0xFFF. The following addresses are reserved: 0x00 and 0xFF. If the address generator lands on a reserved address or an address already in use by another device in the network, it starts again with a different seed until a valid address is generated.

Pairing Procedure

Pairing Phases

The Pairing Procedure will pass through three phases: the Authentication phase, the Identification phase and the Addressing phase.

Authentication Phase

The Authentication phase is essential for verifying the compatibility of the unpaired devices. It involves the verification of the devices’ Application Code. If the Application Codes do not match, the pairing procedure is aborted since the node’s response field for ‘authentication_action’ will reflect a failure. This mismatch indicates an incompatibility between the devices. On the other hand, if the Application Codes match, the next step in the pairing process is initiated, signifying that both devices and the application they run are compatible.

This phase helps to prevent the pairing of two devices that possess pairing capabilities but lack compatibility. For instance, it ensures that a mouse device cannot pair with a headset device, as such a pairing would be functionally inappropriate.

Identification Phase

The Identification phase involves the Node transmitting its device role and unique identifier number to the Coordinator. The Coordinator generates the Node address with the received unique identifier number. It then stores the Node’s device role, the generated Node address and the unique identifier number in its Pairing Discovery List. If the device role was already present in the Discovery List, it will be overwritten by the new Node address and unique identifier number. The Coordinator then proceeds to initiate the Addressing States.

Addressing phase

The Addressing phase involves the Coordinator sending to the Node device the network PAN ID, the Coordinator address and the newly generated Node address.

Upon completion of this phase, the pairing procedure is considered successful.

Pairing Module Wireless Configuration

When entering the Pairing Mode, a pre-defined wireless configuration is employed, overriding any previously used wireless configurations within the application. The handling of the wireless configuration is internally managed by the Pairing Module through the SWC API.

During the Pairing Procedure, the PAN ID for both devices is fixed at 0x000, as it is the reserved address specifically designated for the Pairing Module. The coordinator is assigned the fixed address of 0x01, while the node is assigned the fixed address of 0x02. These predetermined addresses ensure proper identification and communication between the devices within the pairing framework.

The Pairing Schedule follows an alternating pattern of transmissions between the Coordinator and Node, with each device taking turns every 3 milliseconds (ms).

The wireless configuration utilizes a single channel, and the frequency of this channel is determined by the chosen regulatory standard during the configuration of the Pairing Module. The specific frequency will vary based on the regulatory standards selected, such as FCC, ETSI, or ARIB.

Table 19: Pairing regulation frequencies.

  Regulatory body    Frequency (MHz)
  FCC                7004.2
  ETSI               7004.2
  ARIB               8478.7

Figure not found: Pairing Module schedule

Figure 51: Pairing Module’s Schedule

After the completion of the Pairing Procedure, the wireless configuration is reset. As a result, the application must reconfigure the Wireless Core using the newly obtained network information.