Wireless Core¶
Introduction¶
The Wireless Core is implemented as a custom network stack. Its purpose is to facilitate transceiver setup, making it possible to optimize performance for data rate, latency and power consumption. It manages wireless functionalities within the SDK and is controllable via its top-level API: the Wireless Core API.
Several concepts must be fully understood before delving into the design of a network with the Wireless Core. This documentation describes all those concepts, provides tips on how to design an optimal schedule and gives guidelines on how to optimize resource utilization.
Concepts¶
Time Division Multiple Access¶
The Wireless Core always operates in Time Division Multiple Access (TDMA) mode. It divides time into multiple slices called timeslots. Each timeslot is a portion of time reserved for a specific event. The sum of all the timeslots is called a schedule. This TDMA approach is very efficient in terms of power consumption. Most of the time, a device can be asleep. It will wake up only to transmit or receive a frame on its dedicated timeslot.
Network¶
A network encompasses a schedule, connections and their configuration.
Schedule¶
To maximize the usage of air time, the length of the timeslots in a TDMA schedule can be adjusted with a precision of 1 microsecond. On a given network, the schedule must be known to all the Nodes on the network and cannot change at runtime. The schedule specifies when and for how long a given Node is allowed to transmit or receive a frame. When the schedule reaches the last timeslot, it loops back to the first one.

Figure 31: Basic schedule¶
The illustration above shows a basic schedule which enables bi-directional communication between two devices. Information is exchanged in one direction during the first timeslot, and in the other direction during the second timeslot. The schedule repeats indefinitely.
Timeslot duration is determined by the frame air time and by the inter-frame spacing required for the MCU to fetch and process data for each frame. The inter-frame spacing mostly depends on the MCU performance characteristics and the configured SPI speed.

Figure 32: Inter-frame spacing¶
The figure above shows an example of a star network schedule. The diagram shows air time (colored) and processing time. For consecutive over-the-air events on the same device (e.g. Coordinator Timeslot Index 1 to Index 2), the processing time must fit within a single timeslot. If the over-the-air events are non-consecutive, processing can occur during the other devices' timeslots (e.g. Coordinator Timeslot Index 3).
Frequency Switching¶
To optimize UWB spectrum usage and avoid violating the regulatory emission limits, frequency switching is used.
The strategy consists of spreading pulse energy over a wider spectrum by cycling through selected frequencies at each timeslot.
The Wireless Core can be configured to use different frequencies in consecutive timeslots by providing it with a list of channels. For every timeslot, the Wireless Core will tune the radio to the next channel in the list. As a result, the list is cycled independently from the connections.
Addressing¶
The Wireless Core addresses are 20 bits long. The 12 most significant bits are used for the PAN ID and the 8 least significant bits are used for the node address.

| PAN ID | Node Address |
|---|---|
| 12 bits | 8 bits |
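As an illustration of the layout above, a hypothetical helper (not part of the Wireless Core API) can pack and unpack the 20-bit address:

```python
def pack_address(pan_id: int, node_addr: int) -> int:
    """Pack a 12-bit PAN ID and an 8-bit node address into a 20-bit address."""
    assert 0x001 <= pan_id <= 0xFFE, "PAN IDs 0x000 and 0xFFF are reserved"
    assert 0x00 <= node_addr <= 0xFF
    return (pan_id << 8) | node_addr

def unpack_address(addr: int) -> tuple:
    """Split a 20-bit address back into (PAN ID, node address)."""
    return ((addr >> 8) & 0xFFF, addr & 0xFF)

# Illustrative values only: PAN 0xCD0, node 0x01 -> 20-bit address 0xCD001.
assert pack_address(0xCD0, 0x01) == 0xCD001
assert unpack_address(0xCD001) == (0xCD0, 0x01)
```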
PAN ID¶
The PAN ID (Personal Area Network Identification) is used to group multiple nodes into logical networks. Nodes from different networks are not able to communicate with each other. It is one of the features enabling concurrency.
Valid PAN IDs range from 0x001 to 0xFFE. 0x000 and 0xFFF are reserved.
Node Address¶
The node address is used to identify nodes within the same PAN.
The node addresses range from 0x00 to 0xFE. Address 0xFF is used for broadcasting on the PAN. Thus, a single PAN can support up to 255 active nodes.
PAN Broadcast¶
A frame containing the 0xFF destination address will be treated as a broadcast in the selected PAN. Every node part of the PAN will receive the frame as if it was specifically sent to them, with one condition: the connection’s destination address on the receiver node (RX) must be set to its local address.
Connection¶
A connection is defined by a unidirectional link between a source address and a destination address. A single device can use one or multiple connections. For a device to send data, a connection must be associated with one or more timeslots within the schedule. Since a connection defines a unidirectional data stream, a timeslot can contain two connections: one for the main transmission and one for the auto-reply (if enabled).
Both ends of a connection must share the same network configuration. Within this configuration, a payload queue is assigned to the connection. When the connection's active timeslot occurs, the oldest element is removed from the queue and scheduled for transmission.
It is possible to send data to multiple devices within the same network over one connection by using the broadcast address: 255 (or 0xFF). The transmitting device will then communicate with every device within reach that is part of the same network (same PAN ID).
Channel¶
Each channel defines the frequency and the power settings of the transceiver to apply when a transmission occurs over said channel. The Wireless Core must be provided with an array of channels on which it will iterate; that is the channel sequence. Each connection has its own set of configurations for a given channel.
Stream Types¶
There are three types of data streams which are described in the table below:
| Stream Type | Description |
|---|---|
| Best Effort | Frames are only sent once. The transmitter does not know whether the frame was successfully received on the remote side. |
| Limited Retransmissions | Frame reception is ACKed. If a frame is not ACKed after a configurable number of retransmissions, the transmitter drops it. |
| Guaranteed Delivery | Frame reception is ACKed. The transmitter retransmits a frame indefinitely until it is ACKed, guaranteeing delivery. |
Auto-Reply¶
The SPARK transceivers have a built-in auto-reply functionality that allows a receiver to transmit a payload immediately after a successful reception. While generally used for acknowledgements only, this auto-replied frame can also contain an effective payload. Bear in mind that while the Wireless Core allows transmission of data inside them, it does not support acknowledgements of auto-replied frames. A data stream transmitted through auto-replied frames is considered a “best-effort” stream since no retransmission mechanism can be leveraged in case of loss.
Timeslot Events¶
When auto-replies are enabled, two exchanges can happen inside a single timeslot. The Wireless Core treats each exchange as a separate event.
Main Frame Transmission¶
The main event in a timeslot is the main frame transmission. That data stream will be a best-effort transmission unless Auto-Replies are enabled.
Figure 33: Main Transmission Event¶
Auto-Replied ACK¶
When enabled, the receiver of a main frame will auto-reply an acknowledgement (ACK) to notify the transmitter that reception was successful. This allows the Wireless Core to put a retransmission mechanism in place in case a main frame is not acknowledged, enabling guaranteed delivery or limited retransmission links. The ACK frame does not contain any payload.
Figure 34: Auto-reply event (ACK only)¶
Auto-Replied Payload¶
A connection can make use of the auto-reply to transmit data in a best-effort manner. The auto-replied frame can also optionally contain an acknowledgement of the main frame alongside the payload, keeping the ability of the main frame transmitter to use a retransmission mechanism. However, keep in mind that the Wireless Core does not offer the possibility to acknowledge auto-replied frames, rendering this type of data transmission a best-effort one.
Figure 35: Auto-reply event (with data)¶
Synchronization Methods¶
Standard Sync¶
Upon power up, the Node will listen for a message from the Coordinator. If none is received, it will go to sleep for a predetermined amount of time and then retry listening. This method will limit power consumption of the receiver by duty cycling the listening periods instead of listening continuously for a message from the Coordinator which might not be active yet. This method usually allows the device pair to achieve synchronization in a short period of time when the frame rate on the Coordinator is high.
Fast Sync¶
Upon power up, the Node will continuously listen for a message from the Coordinator until it successfully receives a frame. This synchronization method is less power efficient, but will yield faster synchronization because the Node should catch frames from the Coordinator as soon as they are sent. This method should be used when the frame rate on the Coordinator is too low for the standard sync to maintain a reliable synchronization or to ensure a quick synchronization at startup.
Note
The synchronization delay depends on the defined RF schedule and the RF conditions, regardless of the synchronization method. A schedule with shorter timeslots or more synchronization frames and good RF conditions will result in a short synchronization delay. On the other hand, a schedule with longer timeslots or less frequent transmissions from the coordinator will lead to longer node synchronization delays.
Dual Radio¶
In applications where antenna placement is challenging, two transceivers can be used to increase coverage. Since this allows both transceivers to redundantly receive frames, the Wireless Core is free to choose which transceiver to use for transmission based on the received signal strength. Bear in mind that enabling this feature will increase the processor load of the Wireless Core to about 1.5 times that of the equivalent single-radio scenario.
The second transceiver must use a dedicated SPI bus and DMA channel, independent of the first one, to allow for simultaneous data transfers. A 16-bit timer running at 20.48 MHz provided by the MCU is also required. The timer period must be configurable by the Wireless Core.
Topology¶
In any topology, the link between two Nodes can be either unidirectional or bidirectional. A given Node should always receive frames from its syncing Node at least every 10 ms in order to maintain synchronization on the network. Note that the topology of the network is entirely defined by the user configuration. Different topologies are discussed in the sections that follow.
Note
Routing is not currently supported by the Wireless Core and must be handled by the application.
Star Network Topology¶
In a star network, all communications are established directly between a central Node configured as Coordinator and several other Nodes.

Figure 36: Star network Topology¶
Peer to Peer Network Topology¶
In a peer-to-peer network, each Node can communicate with any other Node, but the Coordinator must send a frame to every other Node at least every 10 ms to maintain synchronization. In most cases, this frame will be sent in the form of a broadcast and is often referred to as a beacon.

Figure 37: P2P Network Topology¶
Cluster Tree Network Topology¶
In a cluster tree network, Nodes can sync on a child Node of the Coordinator rather than on the Coordinator itself. The support of this configuration is still limited and the relaying of a frame through multiple Nodes must be handled by the application.

Figure 38: Cluster Tree Network Topology¶
Concurrency¶
The Wireless Core supports a set of features which can be used to optimize concurrent operation. Those features are described in the next sections.
PAN ID¶
A Personal Area Network ID (PAN ID) is used to logically split the addressing space into networks. This ensures that transceivers with a certain PAN ID will not receive frames sent by devices that have a different PAN ID, even if the destination node address matches.
Channel Sequence Randomization¶
A random channel sequence can be generated by the Wireless Core to reduce the probability of collisions in the frequency domain. The sequence is generated using the PAN ID as the seed and overrides the channel sequence specified by the user. Other channel properties are maintained.
Note
The Wireless Core will override the channel sequence provided by the user with its own auto-generated random sequence when this feature is enabled.
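The exact generation algorithm is internal to the Wireless Core; the sketch below only illustrates the idea of a deterministic, PAN-ID-seeded sequence that every node in the network can reproduce (the function name and the use of a generic PRNG are assumptions):

```python
import random

def randomized_channel_sequence(pan_id: int, channels: list) -> list:
    """Illustrative only: derive a deterministic pseudo-random channel order
    from the PAN ID. Two devices sharing a PAN ID compute the same sequence;
    different PAN IDs are likely to produce different orders, reducing the
    chance of frequency-domain collisions between concurrent networks."""
    seq = list(channels)
    random.Random(pan_id).shuffle(seq)  # the PAN ID acts as the seed
    return seq

# Every device in the same PAN derives the same sequence.
assert randomized_channel_sequence(0x123, [0, 1, 2, 3]) == \
       randomized_channel_sequence(0x123, [0, 1, 2, 3])
# The sequence is a permutation: channel properties are maintained.
assert sorted(randomized_channel_sequence(0x456, [0, 1, 2, 3])) == [0, 1, 2, 3]
```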
Clear Channel Assessment (CCA)¶
Clear Channel Assessment (CCA) is a mechanism used to determine whether a certain frequency is currently in use. This mechanism is used by the Wireless Core to avoid collisions resulting from concurrent transmissions. Collision avoidance is achieved by delaying the transmission within the same timeslot. If the connection’s timeslot expires and the CCA did not succeed, the transmission can be postponed to the connection’s next timeslot or transmitted anyway, depending on the CCA configuration.
Random Datarate Offset Protocol (RDO)¶
The Random Datarate Offset (RDO) feature intentionally adds a small amount of jitter to the link timings. The timing offset helps CCA performance in situations where devices listen at the same time and then transmit at the same time. This timing offset does not affect the overall link throughput and has the most impact when concurrent networks have the exact same schedules.
Sleep Modes¶
SPARK transceivers support 3 sleep modes: idle, shallow and deep. The transceiver will automatically fall into the selected sleep mode once it has completed a transmission or reception, and wake up for the next event, enabling considerable power savings. The deeper the sleep mode, the higher the wake-up delay. This has to be taken into consideration when configuring the timeslot durations. For example, idle sleep supports shorter timeslots, since the transceiver can wake up from idle sleep much faster than from the other sleep modes.
| Sleep Mode | Wake-Up Delay (us) |
|---|---|
| Idle | 0.5 |
| Shallow | 62 |
| Deep | 3062 |
Note
Please refer to the transceiver’s datasheet for more information on sleep modes.
Note
The sleep modes can only be set globally for all timeslots since the transceiver can’t switch between modes without losing synchronization. Consequently, the Wireless Core must be completely shut down and reconfigured to be able to change the global sleep mode setting. Doing so requires considerable time (several milliseconds), so application downtime is to be expected if it must be done at runtime.
Important
Shallow sleep usage requires the timeslot duration to be higher than 185us. Deep sleep usage requires the timeslot duration to be higher than 3062us.
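These constraints can be checked when designing a schedule. The sketch below uses the thresholds stated above; the helper itself is hypothetical, not part of the Wireless Core API:

```python
# Minimum timeslot durations (µs) implied by each sleep mode; idle imposes
# no practical minimum given its 0.5 µs wake-up delay.
MIN_TIMESLOT_US = {"idle": 0, "shallow": 185, "deep": 3062}

def timeslots_allow_sleep_mode(timeslot_durations_us: list, mode: str) -> bool:
    """Check that every timeslot is long enough for the chosen global sleep
    mode. The mode is global, so a single too-short timeslot rules it out."""
    return all(d > MIN_TIMESLOT_US[mode] for d in timeslot_durations_us)

# A 200-300 µs schedule can use shallow sleep but not deep sleep.
assert timeslots_allow_sleep_mode([200, 250, 300], "shallow")
assert not timeslots_allow_sleep_mode([200, 250, 300], "deep")
```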
Modulation¶
Two modulation schemes are available: Inverted On-Off Keying (IOOK) and 2-bit Pulse Position Modulation (2-bit PPM). Each of these modulation schemes has its merits. For example, it will take half the time to transmit a frame in IOOK when compared to 2-bit PPM, but a better link budget is expected when using 2-bit PPM.
The coding for both these schemes is shown below:
IOOK: Use IOOK modulation for high data rate applications.
2-bit PPM: Use 2-bit PPM to obtain higher link budget.

Figure 39: Modulation bit sequence¶
Forward Error Correction (FEC) Level¶
The goal of the FEC is to correct errors which might occur during over-the-air transmissions. FEC adds redundant bits to a bitstream and thus increases the frame size.
The SPARK radio can support four levels of FEC. Each level is defined by its redundancy rate. The redundancy rate is the factor by which the frame size is inflated. The FEC level 0 corresponds to disabling it, and as such, will not inflate the frame. At its maximum level of 3, the FEC will double the frame size.
| FEC Level | Frame Inflation Rate |
|---|---|
| 0 | x1 |
| 1 | x1.334 |
| 2 | x1.667 |
| 3 | x2 |
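The table translates directly into a frame-size calculation. The following sketch (a hypothetical helper using the inflation factors above) estimates the over-the-air frame size:

```python
# Frame inflation factors per FEC level, from the table above.
FEC_INFLATION = {0: 1.0, 1: 1.334, 2: 1.667, 3: 2.0}

def inflated_frame_size(frame_bytes: int, fec_level: int) -> float:
    """Return the over-the-air frame size after FEC redundancy is added."""
    return frame_bytes * FEC_INFLATION[fec_level]

assert inflated_frame_size(100, 0) == 100.0  # FEC disabled: no inflation
assert inflated_frame_size(100, 3) == 200.0  # maximum level doubles the frame
```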
Stop and Wait¶
The Stop and Wait control mechanism handles frame retransmissions automatically when a frame is not acknowledged. The acknowledgements on the connection must be enabled to use this setting. There are two modes for the Stop and Wait mechanism: “Retry Count” and “Deadline”. The first mode will count the number of retransmission attempts before deciding to drop a frame. The second mode will drop a frame if it is not delivered after a certain amount of time. The timeout for the second mode is set in increments of 250 microseconds.
For a guaranteed delivery quality of service, both timeout settings must be set to 0. That way, the Stop and Wait module will retransmit the frame indefinitely until it is acknowledged. This type of service is commonly used for non-real-time data transfers (e.g. file transfers for firmware updates).
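Since the "Deadline" timeout is set in 250-microsecond increments, a desired deadline must be converted to ticks. A minimal sketch (the helper name is an assumption):

```python
import math

TICK_US = 250  # "Deadline" mode timeout granularity (250 µs increments)

def deadline_ticks(deadline_us: int) -> int:
    """Hypothetical helper: round a desired deadline up to the next 250 µs tick."""
    return math.ceil(deadline_us / TICK_US)

assert deadline_ticks(1000) == 4  # 1 ms -> 4 increments of 250 µs
assert deadline_ticks(1100) == 5  # non-multiples round up
```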
Fallback Mode¶
The Wireless Core gives the user the possibility to define an alternative channel configuration (power settings) to switch to when a predetermined failure threshold is crossed. This mechanism is called the fallback mode. In that mode, the application is expected to reduce its data throughput by providing smaller payloads to compensate for the increase in power emission. This allows the system to tradeoff throughput for increased range or robustness in suboptimal environmental conditions.
Link Throttling¶
This feature allows the user to limit the predefined throughput of the schedule by disabling a portion of its timeslots. The Wireless Core achieves this by making the transceiver sleep over the normally active transmission periods of the schedule; in other words, the Wireless Core can disable timeslots on-the-fly. This allows the transceiver to sleep for longer periods of time while freeing up MCU resources. MCU power saving methods can also be applied during that time (e.g.: lowering MCU core clock or using MCU sleep modes).
Link throttling can be enabled on a per-connection basis via the Wireless Core API. It is implemented by specifying an active timeslot ratio (%) used for transmissions.
For example, let’s assume a bidirectional link containing 3 connections:
Connection #1: Device #1 -> Device #2
Connection #2: Device #2 -> Device #1
Connection #3: Device #1 -> Device #2
From the Device #1 perspective, only Connection #1 and Connection #3 can use the link throttling feature, since these connections are used for transmissions. By defining a 20% active timeslot ratio on Connection #1, only 1 timeslot out of 5 will be used and the rest will be disabled. Link throttling only affects the timeslots linked to the connection where it is applied; other connections remain unchanged. In this example, Connection #2 and Connection #3 will keep a 100% active timeslot ratio. In summary, Device #1 will:
Be able to transmit in only 1 timeslot out of 5 over the Connection #1 (and sleep for the remaining 4).
Always wake up and listen for a packet on timeslots assigned to Connection #2. (Unaffected by link throttling on Connection #1)
Transmit in every timeslot assigned to Connection #3. (Unaffected by link throttling on Connection #1)
Limitations¶
Link throttling will have an impact on the Wireless Core latency, since it reduces the transmission opportunities in a defined period of time. Latency requirements should be considered before implementing this feature.
Also, since the maximum sleep duration possible using IDLE sleep is 3.19ms, the throttling ratio should not be set such that the transceiver fails to wake up within this maximum sleep duration. The link throttling feature evenly distributes the active timeslots based on the configured ratio to help with this limitation, but this may not be sufficient when a low active timeslot ratio is combined with long timeslot durations.
Another limitation of this feature is that an RX connection cannot be throttled, as this could impact synchronization. All timeslots of an RX connection are always active, so if throttling is activated on a transmitting device, the receiving device will keep listening in all the timeslots regardless (leading to a higher number of timeouts on the receiver).
Complete Example¶
Let’s assume a unidirectional link consisting of 4 timeslots and a single connection:

Figure 40: Link Throttling Example - Schedule¶
During normal operation, the Wireless Core will be able to send a frame in any of the timeslots. However, when an active timeslot ratio lower than 100% is configured, some timeslots will be disabled. For example, when setting the active timeslots ratio to 90%, the Wireless Core has 10% less transmission opportunities:
Figure 41: Link Throttling Example - 90% Active Ratio¶
Applying the same principle, throttling the connection even more by setting the active timeslot ratio to 50%, the Wireless Core throughput for that connection is effectively halved:
Figure 42: Link Throttling Example - 50% Active Ratio¶
In short, a connection normally allowing transmission of 1000 packets per second, configured with an active timeslot ratio of 50%, will see its packet rate reduced to 500 packets per second. In that case, if the application generates packets at a rate higher than 500 per second, the Wireless Core will be forced to drop excess packets, causing data loss. One must consider the application packet rate before enabling the throttling feature and selecting an active timeslot ratio. Please refer to the Retransmission Margin section for more details on the relationship between the application packet rate and the Wireless Core's schedule packet rate.
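The arithmetic above can be captured in a small sketch (hypothetical helper, using the figures from the example):

```python
def throttled_packet_rate(nominal_pps: float, active_ratio_pct: float) -> float:
    """Effective packet rate of a connection under link throttling."""
    return nominal_pps * active_ratio_pct / 100.0

# The 50% example from the text: 1000 packets/s falls to 500 packets/s.
assert throttled_packet_rate(1000, 50) == 500.0
# The application must generate at or below this rate to avoid dropped packets.
assert throttled_packet_rate(1000, 90) == 900.0
```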
Frame Structure¶
There are three types of frames that can be generated by the Wireless Core: data frames, sync frames and acknowledgement frames.
Data frames¶
Data frames are frames that contain an application payload. They can be sent as main frames or in an auto-reply.

Figure 43: Data frame¶
Data Validation¶
The transceiver implements multiple layers of data protection in a frame to ensure proper transmission. These mechanisms are handled automatically by the Wireless Core.
CRC¶
A frame with a failing cyclical redundancy check (CRC) is discarded and not acknowledged.
Address filtering¶
Only frames with a destination address matching the local address are passed on to the Wireless Core. Frames with a non-matching address are discarded by the transceiver.
Syncword match¶
Frames with the wrong syncword are automatically discarded by the transceiver.
Max frame size¶
Each connection has a maximum frame size; if a larger frame is received, it is automatically discarded to avoid buffer overflows.
MCU Requirements¶
For the complete list of hardware requirements, please refer to the Hardware Platform Requirements article located in the Porting Guide.
MCU Footprint¶
Memory¶
The table below shows the memory usage of the Wireless Core library compiled with GCC from the GNU Arm Embedded Toolchain v9.3.1 using the -O2 optimization flag.
| Memory Section | Memory Usage (kB) |
|---|---|
| RAM | 2 |
| Flash | 35 |
Memory usage is affected by factors such as the number of wireless connections and their queue size. Expect an increase of ~1 kB of RAM for each extra connection, and ~0.5 kB for each extra element in the connection queue.
Additional RAM and flash space must be allocated for the application layer which is not included in these estimates.
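These figures allow a rough budgeting sketch (a hypothetical estimator based on the numbers above; actual usage depends on the configuration and excludes the application layer):

```python
def estimate_ram_kb(connections: int = 0, extra_queue_elements: int = 0,
                    base_kb: float = 2.0) -> float:
    """Rough RAM estimate from the figures above: ~1 kB per extra connection,
    ~0.5 kB per extra element in the connection queues."""
    return base_kb + 1.0 * connections + 0.5 * extra_queue_elements

# E.g. 2 extra connections with 4 extra queue elements in total: ~6 kB of RAM.
assert estimate_ram_kb(connections=2, extra_queue_elements=4) == 6.0
```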
CPU¶
CPU usage will vary depending on the selected MCU. The Wireless Core is entirely driven by interrupts from the transceiver IRQ pin and from the SPI DMA. The interrupt handlers from these two interrupts call the functions that process the Wireless Core state machine. These interrupts should always be processed with high priority. Delaying the handling of these interrupts could greatly reduce the performance of the wireless link.
The Wireless Core timing diagram shown below illustrates the processor usage and the interrupt model of the Wireless Core.

Figure 46: Wireless Core timing diagram¶
| Function | Description |
|---|---|
| read_event | Ask the radio for the IRQ flags after a radio interrupt. |
| process_event | Read the IRQ flags and act according to the outcome. |
| get_header | Read the link header from the radio FIFO. |
| get_payload | Read the payload from the radio FIFO. |
| mac_process_frame | Process the MAC layer at the end of a frame. |
| mac_prepare_frame | Process the link layer at the beginning of a frame. |
| prepare_radio_cfg | Prepare the radio registers to send for the next transmission/reception. |
| send_radio_cfg | Send the radio configuration registers prepared in the prepare_radio_cfg state. |
| callback_context_switch | User-provided function to trigger the callback context switch. |
| set_header | Write the link header to the radio FIFO. |
| set_payload | Write the user payload to the radio FIFO. |
| close_spi | Reset the SPI chip select pin, then wait for a radio event. |
Example of processing times on the EVK1.4 board:

| Configuration | Min | Average | Max |
|---|---|---|---|
| Unidirectional 1 byte payload TX | 47 us | 54 us | 56 us |
| Unidirectional 125 byte payload TX | 77 us | 83 us | 85 us |
| Unidirectional 1 byte payload RX | 53 us | 53 us | 54 us |
| Unidirectional 125 byte payload RX | 42 us | 59 us | 66 us |
| Bidirectional 1 byte payload | 44 us | 54 us | 71 us |
| Bidirectional 125 byte payload | 44 us | 69 us | 97 us |
To determine the processing time for a given MCU implementation, one can measure the delay between the rising edge of the transceiver IRQ pin, and the last rising edge of the CS pin as shown by the red markers in the following image:

Figure 47: Processing Time Measurement¶
Schedule Design Consideration¶
Bandwidth¶
When designing the schedule, the user must consider the bandwidth requirements for every device. The peak data rate of a specific connection can be determined from the number of allocated timeslots within the schedule. The diagram below illustrates a scenario where a device with the address 0xCD01 has 3 times more link bandwidth than the device at the address 0xAAAA (assuming equally sized payloads and equal timeslot durations).

Figure 48: Wireless Core data rate¶
The timeslot duration and the schedule period itself are other factors that come into play for the bandwidth. The user must configure the duration of every timeslot when configuring the schedule.
The maximum datarate in kbps of a connection is given as:
\(datarate = \frac{nTs \times Ps \times 8 \times 1000}{Sl \times Tt}\) kbps
where \(nTs\) is the number of timeslots allocated to the connection in the schedule, \(Ps\) is the maximum payload size in bytes of the connection, \(Tt\) is the timeslot duration in microseconds and \(Sl\) is the total number of timeslots in the schedule (\(Sl \times Tt\) being the schedule period).
- Example:
\(nTs = 2\);
\(Ps = 120\);
\(Sl = 4\);
\(Tt = 250us\);
\(datarate = \frac{2 \times 120 \times 8 \times 1000}{4 \times 250} = 1920 kbps\)
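The datarate formula can be evaluated numerically: bits sent per schedule pass, divided by the schedule period. A sketch (the helper and its input values are illustrative):

```python
def max_datarate_kbps(n_ts: int, payload_bytes: int,
                      n_slots: int, slot_us: float) -> float:
    """Maximum connection datarate: bits per schedule pass divided by the
    schedule period (n_slots * slot_us, in µs), converted to kbps."""
    return n_ts * payload_bytes * 8 * 1000 / (n_slots * slot_us)

# Illustrative configuration: 1 allocated timeslot out of 4, 100-byte
# payloads, 500 µs timeslots -> 800 bits every 2000 µs -> 400 kbps.
assert max_datarate_kbps(1, 100, 4, 500) == 400.0
```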
Retransmission Margin¶
The retransmission margin consists of normally unused timeslots that can be used for retransmissions when the link starts to degrade. By accounting for a retransmission margin in the wireless schedule, we are in fact designing a link which allows for a higher throughput than what the application requires. A higher retransmission margin will increase the robustness of the link. The concept of retransmission margin only applies when the connection is using the Stop and Wait feature.
The retransmission margin is given as:
\(margin = \frac {cD - aD}{cD}\times100\)
where \(cD\) is the maximum data rate of the connection and \(aD\) is the data rate of the application for a given connection.
- Example:
\(cD = 512kbps\);
\(aD = 384kbps\);
\(\frac {512kbps - 384kbps}{512kbps}\times100 = 25\%\)
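A minimal sketch of the margin calculation (hypothetical helper, using the example figures):

```python
def retransmission_margin_pct(connection_kbps: float, app_kbps: float) -> float:
    """Retransmission margin as a percentage of the connection's capacity."""
    return (connection_kbps - app_kbps) / connection_kbps * 100.0

# Example from the text: a 512 kbps connection carrying 384 kbps of
# application data leaves a 25% margin for retransmissions.
assert retransmission_margin_pct(512, 384) == 25.0
```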
Sync Timeslot (Beacon)¶
The Sync Timeslot, or beacon, consists of a timeslot where the coordinator sends either a normal frame which contains a header and a payload, or an empty frame which only contains a header. The maximum transmission period for this beacon is 10 ms. Having an interval greater than this will result in a loss of sync between the coordinator and the receiving node. If the coordinator does not have data to send at the time of the sync timeslot, the auto-sync feature can be used. The auto-sync feature lets the Wireless Core handle the transmission of the sync frame automatically. If the coordinator never has data to send, its sync connection can have a FIFO size of 0. In a star or peer-to-peer network, this sync frame should be sent by the coordinator as a broadcast, e.g. with the broadcast node address 0xFF.
Concurrency Considerations¶
The following figure shows 2 timeslots of a generic schedule operating with concurrency:

Figure 49: Generic concurrency schedule¶
During the first timeslot, the Network #1 transmitter executed CCA, determined that the air was free, and started to transmit on Channel #0. The Network #2 transmitter also performed a CCA, but slightly after the Network #1 transmitter had started transmitting, which resulted in a failed CCA because Channel #0 was in use. The Network #2 transmitter then waited for a predetermined delay before retrying. At some point, its CCA succeeded and it was able to transmit over Channel #0, because the other transmitter had completed its transmission.
During the second timeslot, both transmitters were able to transmit simultaneously, as both of their CCAs succeeded. This is because the frequency separation between Channel 3 and Channel 1 is high enough (at least 1 GHz), thanks to the random channel sequence mechanism managed by the Wireless Core.
In order to achieve proper concurrency, the following equations must be true:
\(T_{CCA} >= T_{AIR}\)

\(T_{SLOT} - (T_{CCA} * Retry_{COUNT}) - T_{AIR} - T_{PROC} >= 0\)
Where \(T_{CCA}\) is the CCA delay, \(Retry_{COUNT}\) is the number of times the CCA check is done, \(T_{AIR}\) is the air time of the frame, \(T_{SLOT}\) is the timeslot duration and \(T_{PROC}\) is the processing time required by the Wireless Core.
The number of links supported without any frame delivery degradation \(N\) is equal to the following, rounded down:
\(N = \frac {(T_{CCA} * Retry_{COUNT})}{T_{AIR}}\)
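Both conditions and the supported link count can be checked with a sketch (hypothetical helper; the timing values are illustrative, not from a real configuration):

```python
def concurrency_ok(t_cca: float, retry_count: int, t_air: float,
                   t_slot: float, t_proc: float) -> bool:
    """Check the two timing conditions for proper concurrent operation
    (all times in µs)."""
    return t_cca >= t_air and t_slot - t_cca * retry_count - t_air - t_proc >= 0

def max_links(t_cca: float, retry_count: int, t_air: float) -> int:
    """Number of links supported without frame delivery degradation,
    rounded down."""
    return int(t_cca * retry_count / t_air)

# Illustrative numbers only: 100 µs CCA delay, 4 retries, 100 µs air time,
# 600 µs timeslot, 80 µs processing time.
assert concurrency_ok(t_cca=100, retry_count=4, t_air=100, t_slot=600, t_proc=80)
assert max_links(t_cca=100, retry_count=4, t_air=100) == 4
```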
Latency Calculation¶
Here, we will look at latency and how it is affected by the 3 types of data transfer: best effort, limited retransmission and guaranteed delivery.
Best effort: There is only one attempt to transmit every frame. If the frame is not received correctly, it is dropped. This is done by configuring a connection without acknowledgement and Stop and Wait. Minimum latency (\(LatencyMin\)) and maximum latency (\(LatencyMax\)) for a successfully transmitted frame can be evaluated as follows:
\(LatencyMin = (AirTime + CallbackDelay)\) \(\mu s\)

\(LatencyMax = (AirTime + CallbackDelay + RefreshPeriod)\) \(\mu s\)

where \(AirTime\) is the frame’s air time, \(CallbackDelay\) is the delay to trigger the RX callback and \(RefreshPeriod\) is the connection refresh period.
The connection refresh period is the duration between data transfers for the maximum data rate case on a connection.
Limited retransmission: A frame is retransmitted until the frame is either successfully transmitted or dropped after a timeout period has expired. This is done by configuring a connection with acknowledgement and Stop and Wait with a timeout. Minimum latency and maximum latency for a successfully transmitted frame can be evaluated as follows:
\(LatencyMin = (AirTime + CallbackDelay)\) \(\mu s\)

“Retry count” mode (see Stop and Wait):

\(LatencyMax = (AirTime + CallbackDelay + RefreshPeriod + RetxCount \times RefreshPeriod)\) \(\mu s\)

“Deadline” mode (see Stop and Wait):

\(LatencyMax = (AirTime + CallbackDelay + RefreshPeriod + DeadlineDelay)\) \(\mu s\)

where \(RetxCount\) is the maximum number of retransmissions selected and \(DeadlineDelay\) is the deadline time value selected.
Guaranteed delivery: A frame is retransmitted until it is successfully transmitted. This is done by configuring a connection with acknowledgement and Stop and Wait with both timeout settings set to 0. Minimum latency and maximum latency for a successfully transmitted frame can be evaluated as follows:
\(LatencyMin = (AirTime + CallbackDelay)\) \(\mu s\)

\(LatencyMax = infinite\)
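As a sketch, the latency bounds for the three stream types can be evaluated numerically (hypothetical helper; all timing values are illustrative):

```python
def latency_bounds_us(air_time, callback_delay, refresh_period,
                      retx_count=None, deadline_delay=None, guaranteed=False):
    """Latency bounds (µs) per the formulas above. Pass retx_count for
    "Retry Count" mode, deadline_delay for "Deadline" mode, guaranteed=True
    for guaranteed delivery, and none of them for best effort."""
    lat_min = air_time + callback_delay
    if guaranteed:
        return lat_min, float("inf")  # retransmits until ACKed
    lat_max = air_time + callback_delay + refresh_period
    if retx_count is not None:
        lat_max += retx_count * refresh_period
    elif deadline_delay is not None:
        lat_max += deadline_delay
    return lat_min, lat_max

# Illustrative numbers (µs): best effort with a 1000 µs refresh period.
assert latency_bounds_us(100, 50, 1000) == (150, 1150)
# Limited retransmission, "Retry Count" mode with 3 retransmissions.
assert latency_bounds_us(100, 50, 1000, retx_count=3) == (150, 4150)
```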
Note
For real-time streaming applications, the connection’s TX and RX queue size will generally act as buffers and thus induce latency in the system. A greater queue size will allow for greater instantaneous PER spikes without interruption of service but will yield higher latency figures. The ideal buffering to put in place in an application varies depending on the schedule definition. As a rule of thumb, set the buffering length (latency) to a duration equal to 15 transmissions of the associated connection. In other words, a device needs to be able to transmit 15 packets in a period equal to the added latency. For example, if a buffer length of 5ms is used, the associated connection should be able to send packets at >3000 packets/second (15 packets/5 ms). The RX buffer of the receiver should have the same size as the TX buffer of the transmitter.
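The rule of thumb above can be expressed as a quick check (hypothetical helper):

```python
def required_packet_rate(buffer_latency_s: float, packets: int = 15) -> float:
    """Rule-of-thumb packet rate (packets/s) a connection needs so that the
    chosen buffering length absorbs PER spikes: 15 transmissions of the
    associated connection within the added latency."""
    return packets / buffer_latency_s

# Example from the text: a 5 ms buffer calls for >3000 packets/second.
assert required_packet_rate(0.005) == 3000.0
```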
Case Study: Latency Evaluation of an Application¶
In this section, the Wireless Core latency is examined and the expected variations that can occur are explained. First, it must be understood that the interface between the Application and the Wireless Core is an asynchronous interface. A buffer (Queue) is implemented between the two interfaces. In a transmit scenario, data is written to the buffer (TX Queue) by the Application at the Application Rate, and data is read from the buffer (TX Queue) at the Wireless Core rate. The two rates are not synchronized with each other and are not usually the same frequency (Wireless Core frequency is typically higher than the Application frequency to allow for retransmissions). In such a scenario, there will be latency variations.
The latency introduced by the Wireless Core can be measured by toggling GPIOs on both the transmitting and receiving devices. The first GPIO is activated when the application writes data into the TX Queue, and the second GPIO is activated when the Wireless Core writes data into the RX Queue (see the Wireless Core Latency Representation figure).
An oscilloscope can then be used to capture the IO state changes and compile a histogram of the handling time of the data by the Wireless Core including over-the-air (OTA) transmission time. Latency statistics such as minimum, maximum, average and standard deviation can then be obtained from that data to effectively qualify it.
In the example that follows, it is assumed that the application does not buffer data (application data is sent directly to the TX Queue); It is also assumed that all transmissions are successful (perfect RF link with no retransmissions). The figure below illustrates the signal path where the latency measurements are taken:

Figure 50: Wireless Core Latency Representation.¶
Latency variation depends on the schedule and application timing. Let's evaluate it for the following sample schedule:

Figure 51: Example schedule. Note that the schedule loops indefinitely.¶
Looking at a single iteration of the schedule (which repeats every 1 ms), the Coordinator has four transmission opportunities whereas the Node has only one. The following section shows how timeslot allocation plays a major role in the latency jitter observed in practice. The amount of jitter depends on when the application data is yielded to the Wireless Core.
An important detail must be understood before diving in the analysis: the Wireless Core always prepares the action to be taken in the next timeslot (e.g.: timeslot #3) at the beginning of the preceding timeslot (e.g.: timeslot #2). This means that the application data is guaranteed to wait a little bit more than the timeslot’s duration (e.g.: timeslot #2 + air time). Here is a breakdown of the theoretical latency for both data stream directions.
Coordinator TX -> Node RX Latency
Starting with the best case scenario, one in which the application yields data to the Wireless Core at the perfect moment (just before timeslot #1), we can assume that the next transmission opportunity will occur in a little more than a timeslot’s duration (in timeslot #2), making ~200us the lowest expected transmission latency for the Coordinator. Note that it will be slightly higher than that due to the air time and data handling time which can vary. On the other hand, if the application provides data to the Wireless Core at the least convenient moment (during timeslot #3, just after missing the preparation for timeslot #4), the Wireless Core will not be able to send the data in the next timeslot (#4). Furthermore, because timeslot #5 is an RX timeslot, the next transmission opportunity will be in timeslot #6 (which is identical to timeslot #1). This means that the data will wait in the Wireless Core queue for a little more than three timeslots (#3 to #6), making the worst case theoretical latency ~600us (+ air time and data handling time).
| Min | Max |
|---|---|
| ~200us | ~600us |
Node TX -> Coordinator RX Latency
Starting with the best case scenario, one in which the application yields data to the Wireless Core at the perfect moment, we can assume that the next transmission opportunity will occur in a little more than a timeslot’s duration, making ~200us the lowest expected transmission latency for the Node. On the other hand, if the application provides data to the Wireless Core at the least convenient moment (during timeslot #4, just after missing the preparation for timeslot #5), the Wireless Core will not be able to send the data in the next timeslot (#5). Furthermore, because only timeslot #5 is dedicated for transmission, the next transmission opportunity will be in a complete schedule loop (which is equivalent to timeslot #10). This means that the data will wait in the Wireless Core queue for a little more than six timeslots (#4 to #10), making the worst case theoretical latency ~1.2ms (+ air time and data handling time).
| Min | Max |
|---|---|
| ~200us | ~1.2ms |
Those results can be confirmed by using the method described at the beginning of this section. Generating a histogram of the latency value per frame with an oscilloscope will highlight the fact that those minimum and maximum latency values are corner cases. In practice, latencies between the minimum and maximum are far more likely to occur than the minimum or maximum themselves. In the end, the average latency can be obtained from the histogram and will generally sit somewhere between the expected minimum and maximum, with a low standard deviation.
Schedule Design Trade-off¶
There are always trade-offs between latency, maximum data rate and link budget for any schedule design.
From the Latency Calculation equations, frame air time and the connection refresh period should be decreased in order to reduce latency. However, decreasing those parameters can result in lower maximum data rate and/or link budget.
The 3 main factors that impact frame air time and connection refresh period are modulation, FEC level and payload size. For modulation and FEC level, there are 2 recommended configurations:
2-bit PPM, FEC1: This will yield better link budget but longer frame air time.
IOOK, FEC2: This will yield lower link budget but shorter frame air time.
A large payload which minimizes PHY overhead results in higher application data rates at the expense of higher latency and lower link budget.
A smaller payload with shorter air-time results in lower latency and higher link budgets at the expense of lower application data rates due to higher PHY overhead.
The table below summarizes the effects that payload size and modulation/FEC have on link budget, latency and data rate.

Figure 52: Schedule design trade-offs¶
Power Saving Strategies¶
In this section, some strategies are outlined for reducing power consumption.
Reduce the transceiver’s input voltage:
Keep the supply voltage as low as possible. The transceiver’s supply voltage can range from 1.8V to 3.3V.
Avoid use of firmware delay:
A software delay keeps the CPU active, needlessly consuming power. Use timer peripherals instead and apply a power saving method while waiting for the timer period to end.
Use non-blocking transfer:
Always try to use non-blocking transfers with the help of DMA peripherals when using a communication interface (SPI, I2C, USART, etc.). Instead of blocking the CPU while waiting on a transfer complete event, a non-blocking transfer allows the MCU to perform other processing tasks or go into sleep mode.
Change the transceiver’s sleep level:
If the target data rate allows for a deeper sleep level, do not hesitate to use the transceiver’s shallow or deep sleep levels. Those sleep levels decrease the current consumption of the transceiver when not transmitting or receiving by shutting down its internal DC/DC converter. See the Sleep Modes section for further details on sleep modes.
Disable application logging:
If possible, disable all logging. Don’t forget to disable unused serial peripherals accordingly (e.g.: UART).
Keep unused peripherals disabled:
Do not initialize unused peripherals to prevent them from consuming any power.
Reduce peripherals clock speed:
Lowering the clock speed of active peripherals (e.g.: SPI, UART, GPIOs) will decrease power consumption. For instance, if your application does not require a high data throughput and has loose or no latency requirements, lower the clock speed of the SPI peripheral controlling the transceiver. The default speed of 40MHz is not always required.
Use link throttling:
The link throttling allows the user to define an alternative, lower power mode of operation. If the application can reduce its data generation rate, link throttling can be activated to reduce the Wireless Core footprint and free up CPU cycles. See the Link Throttling section for more details on this feature.
Pairing Module¶
Description¶
The Pairing Module is used by applications to dynamically assign addresses to new nodes wanting to join a network. This module runs at the application level, so it uses the SWC API to create a dedicated schedule with its own connections.
Pairing Configuration¶
The Pairing Module configuration structure is as follows:
/** @brief Paired device identification.
 */
typedef struct swc_paired_device {
    uint16_t node_address; /*!< Address of the node */
    uint64_t unique_id;    /*!< Generated unique ID */
} swc_pairing_device_t;

/** @brief Pairing parameters.
 */
typedef struct swc_pairing {
    uint8_t *memory_pool;        /*!< Memory pool instance from which memory allocation is done */
    uint32_t memory_pool_size;   /*!< Memory pool size in bytes */
    swc_role_t network_role;     /*!< Network role of the current device */
    uint16_t pan_id;             /*!< Coordinator's PAN ID */
    uint8_t coordinator_address; /*!< Coordinator's address */
    uint8_t node_address;        /*!< Node's assigned address */
    uint8_t device_role;         /*!< Device role */
    swc_pairing_device_t paired_device[SWC_PAIRING_DEVICE_LIST_MAX_COUNT]; /*!< List of paired devices */
} swc_pairing_t;
Pairing Addressing¶
Default PAN ID and Address¶
To simplify the pairing process, default addresses and PAN ID are used:
#define PAIRING_COORD_ADDRESS 0xF1
#define PAIRING_NODE_ADDRESS 0xF2
#define PAIRING_PAN_ID 0xBCD
Address Generation¶
An address is internally generated by each node prior to pairing. Such addresses are generated using a CRC16-CCITT scheme seeded with the 5-byte SPARK transceiver's unique ID. The following PAN IDs are reserved: 0x000 and 0xFFF. The following addresses are reserved: 0x00 and 0xFF. If the address generator lands on a reserved address, it starts again using a different seed until a valid address is generated.
Pairing Procedure¶

Figure 53: Pairing Procedure Diagram¶
Pairing Request¶
The Pairing Request message is sent by the Coordinator to the Node. An available Node address is included in the payload.
The Coordinator sends the Pairing Request message continuously until it receives the Node’s response. Upon receiving the Pairing Request message, the Node learns the Coordinator’s PAN ID and address and receives its own assigned address.
Pairing Response¶
The Pairing Response message is sent by the Node to the Coordinator after it receives a Pairing Request message. This message includes the Node’s device role and Unique ID number. Once the Node has sent the Pairing Response message, it waits for the Pairing Confirmation message from the Coordinator.
When the Coordinator receives the Pairing Response, it saves the Node address along with its Unique ID in a Paired Device list with the Node’s device Role as its index. It then sends a Pairing Confirmation message back to the Node.
Pairing Confirmation¶
The Pairing Confirmation message is sent by the Coordinator to the Node to validate that the Node was added to the Paired Device list. When the Node receives this message, an acknowledgment is sent back to the Coordinator. The Pairing Process is now completed. The user application can then start.
Payload Composition¶
Pairing Request Payload Composition¶
| Code (0xCAFE) | Command (PAIRING_REQUEST) | PAN ID | Coordinator Address | Node Address |
|---|---|---|---|---|
| 0-1 | 2 | 3-4 | 5 | 6 |
Pairing Response Payload Composition¶
| Code (0xCAFE) | Command (PAIRING_RESPONSE) | Node Role* | Unique ID* |
|---|---|---|---|
| 0-1 | 2 | 3 | 4-8 |
*Node Role: The Node’s role is a number from 1 to 255. The number is used to identify the role of a particular Node on the network. Role 0 is reserved for the Coordinator. Every other Node on the network must have a unique role and all the roles must be sequential, starting from 1. For example, in a Gaming Hub application, the Hub has the role 0 since it is the Coordinator. The Headset can have role 1, the Mouse can have role 2 and the Keyboard can have role 3.
*Unique ID: The 5-byte Unique Identification number extracted from the SPARK Radio.
Paired Device List¶
The Paired Device List is an array which contains information about paired devices. The maximum number of entries can be changed by the programmer by modifying the constant SWC_PAIRING_DEVICE_LIST_MAX_COUNT. A maximum of 256 entries (i.e. 255 Nodes and 1 Coordinator) is supported.
This list is mainly used in Point to Multi Point Pairing applications. When the Coordinator receives the Node Device Role and Unique ID during the Pairing Response state, it adds that information to the Paired Device List.
The Coordinator assigns itself to index 0 of the array, storing its own address and Unique ID there.
Here is an example of what a Paired Device List might look like in a Gaming Hub application.
| Index | Node Address | Unique ID | Note |
|---|---|---|---|
| 0 | 0x37501 | 0x9834570293 | Dongle (Coordinator) |
| 1 | 0x37502 | 0x8794358929 | Headset (Node) |
| 2 | 0x37503 | 0x4984753489 | Mouse (Node) |
| 3 | 0x37504 | 0x2346987609 | Keyboard (Node) |
If the Coordinator is reset, the Paired Device List is lost. It is the application’s responsibility to save it in non-volatile memory (e.g., flash memory) to make it persistent.
Pairing Module Schedule¶
When entering the Pairing Mode, a pre-defined Pairing Schedule is used.
The Pairing Schedule alternates transmissions between the Coordinator and the Node every 3000 us while cycling through 5 different frequencies.

Figure 54: Pairing Module’s schedule¶
Once the Pairing Procedure is completed, the user application schedule can be applied and the application can start.
Running the Pairing Process¶
To start the pairing process, initialize the pairing module using:
/** @brief Initialize the pairing process.
*
* @param[in] pairing Pairing structure to exchange data between applications.
* @param[in] hal Board specific functions.
*/
void swc_pairing_init_pairing_process(swc_pairing_t *pairing, swc_hal_t *hal);
Then, call the process function in a loop until it returns a pairing success:
/** @brief Function to be called by an application to run the pairing process.
*
* @return True if the pairing was successful.
* False if the pairing is still running.
*/
bool swc_pairing_process(void);
The application can implement a timeout to stop the pairing process execution loop if desired. Otherwise, the pairing process will continue until a device is found.
Deinitializing the Pairing Process¶
The pairing process can be stopped using this function:
void swc_pairing_deinit(void);
This function will release any resources that were allocated during the pairing process and reset the SPARK Wireless Core. It should be called once the application gets out of the pairing process loop.
Disclaimer¶
This pairing module provides a basic pairing algorithm intended for demonstration purposes. It is very generic and not feature complete. Applications that require more stringent pairing may need to build upon this example.
The known limitations are as follows:
Any device can pair with any other device. There is no protection preventing devices from different projects from pairing with each other. For example, a headset dongle (Coordinator) could pair with a mouse (Node), or a mouse dongle (Coordinator) could pair with a headset (Node).
The PAN ID used is arbitrary. The PAN ID used in this pairing example could also be used by another application, causing potential conflicts.