BLE central connection processor usage patterns

This section describes the processor availability and interrupt processing time for the SoftDevice in a central connection event.

Figure 1. Central connection events (some priority levels left out for clarity)

In a central connection event, the pattern of SoftDevice processing activity at interrupt priority level 0 is typically as shown in Figure 1.

SoftDevice processing activity in the different priority levels during central connection events is outlined in Table 1. The typical case is seen when receiving GATT write commands writing 20 bytes. The maximum case can be seen when sending and receiving maximum-length packets while simultaneously initiating encryption, maintaining the maximum number of connections, and using the Radio Timeslot API and Flash memory API.

Table 1. Processor usage latency when connected
Parameter                 Description                                       Min      Typical    Max
tISR(0),RadioPrepare      Processing preparing the radio for a
                          connection event.                                          29 μs      52 μs
tISR(0),RadioStart        Processing when starting the connection event.             21 μs      25 μs
tISR(0),RadioProcessing   Processing after sending or receiving a packet.            30 μs      60 μs
tISR(0),PostProcessing    Processing at the end of a connection event.               90 μs      170 μs
tnISR(0)                  Distance between connection event interrupts.     30 μs    > 195 μs
tISR(4)                   Priority level 4 interrupt after a packet is
                          sent or received.                                          40 μs

From the table, we can calculate a typical processing time for a central connection event where one packet is sent and received to be

tISR(0),RadioPrepare + tISR(0),RadioStart + tISR(0),RadioProcessing + tISR(0),PostProcessing + 2 * tISR(4) = 250 μs

which means that typically more than 99% of the processor time is available to the application when one central link is established and one packet is sent in each direction with a 100 ms connection interval.

Updated 2017-03-10