BLE Missing Packets (Protocol / Spec question)

Problem description

I have been learning the nuts and bolts of BLE lately, because I intend to do some development work using a BLE stack. I have learned a lot from the online documentation and the spec, but there is one aspect that I cannot seem to find.

BLE uses frequency hopping for communication. Once two devices are connected (one master and one slave), it looks like all communication is then initiated via the master and the slave responds to each packet. My question involves loss of packets in the air. There are two major cases I am concerned with:

  1. Master sends a packet that is received by the slave, and the slave sends a packet back to the master. The master doesn't receive the packet, or if it does, it is corrupt.
  2. Master sends a packet that is not received by the slave.

Case 1 to me is a "don't care" (I think). Basically the master doesn't get a reply, but at the very least the slave got the packet and can "sync" to it. The master does whatever and tries transmitting the packet at the next connection event.

Case 2 is the harder case. The slave doesn't receive the packet and therefore cannot "sync" its communication to the current frequency channel.

How exactly do devices synchronize the channel hopping sequence with each other when packets are lost in the air (specifically case 2)? Yes, there is a channel map, so the slave technically knows what frequency to jump to for the next connection event. However, the only way I can see all of this happening is via a "self-timed" mechanism based on the connection parameters. Is this good enough? I mean, given the clock drift, there will be slight differences in the amount of time the master and slave are transmitting and receiving on the same channel... and eventually they will be off by 1 channel, 2 channels, etc. Is this not really an issue, because for that to happen a lot of time needs to pass, given the 500 ppm clock spec? I understand there is a supervision timer that would declare the connection dead after no valid data is transferred for some time. However, I still wonder about the "hopping drift", which brings me to the next point.
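To put rough numbers on the drift concern: with 500 ppm sleep-clock accuracy on each side (1000 ppm combined, worst case), the timing uncertainty grows with the time since the last successfully received packet, not with the total lifetime of the connection, because every received packet re-anchors the slave. A back-of-the-envelope calculation (the 1 s interval is just an example value):

```python
# Worst-case timing uncertainty between two BLE devices.
# 500 ppm per side is the maximum the spec allows for the sleep clock;
# worst case, the two errors add.
master_ppm = 500
slave_ppm = 500
combined_ppm = master_ppm + slave_ppm  # 1000 ppm

conn_interval_s = 1.0  # example connection interval (assumed)
drift_per_interval_s = conn_interval_s * combined_ppm / 1_000_000

print(f"drift per {conn_interval_s:.1f} s interval: "
      f"{drift_per_interval_s * 1e6:.0f} us")  # 1000 us = 1 ms
```

Even after several missed connection events the accumulated error is a few milliseconds, far smaller than a connection interval, so the devices do not silently end up "off by a channel"; they only need a wider listening window (see the answer below).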

How much "self timing" is employed / mandated within the protocol? Do slave devices use a valid start of packet from the master every connection interval to re synchronize the channel hopping? For example if the (connection interval + some window) elapses, hop to the next channel, OR if packet received re sync / restart timeout timer. This would be a hop timer separate from the supervisor timer.

I can't really find this information in the Core 5.2 spec. It's pretty dense at a mere 3,000+ pages... If somebody could point me to the relevant sections in the spec (or somewhere else), or even answer the questions, that would be great.

Tags: bluetooth-lowenergy, protocols, specifications, packet-loss

Solution


The slave knows the channel map. If it does not receive a packet from the master, it listens again on the next channel one connection interval later. If it misses that one too, it adds another connection interval and moves on to the next channel again.
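This per-event advance is Channel Selection Algorithm #1 from the core spec: the unmapped channel index increments by a fixed hopIncrement modulo 37 on every connection event, whether or not anything was received, and indices that fall on unused channels are remapped onto the used ones. (Connections using the newer CSA#2 hop pseudo-randomly instead, but the re-synchronization logic is the same.) A minimal sketch, with function and variable names of my choosing:

```python
def next_channel(last_unmapped, hop_increment, channel_map):
    """One step of Channel Selection Algorithm #1.

    channel_map: iterable of used data-channel indices (0..36).
    Returns (new_unmapped, data_channel); the unmapped index must be
    tracked across events even when it gets remapped.
    """
    unmapped = (last_unmapped + hop_increment) % 37
    used = sorted(channel_map)
    if unmapped in used:
        return unmapped, unmapped
    # Unused channel: remap by indexing into the table of used channels.
    return unmapped, used[unmapped % len(used)]

# Example: hopIncrement 7, all 37 data channels in use.
unmapped, ch = next_channel(last_unmapped=0, hop_increment=7,
                            channel_map=range(37))
print(ch)  # 7
```

Because both sides run this deterministic sequence from the same parameters, the slave can keep hopping "blind" through missed events and still be on the right channel when the master's packet finally gets through.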

The slave also stores a timestamp (or event counter) for the moment the last packet from the master was detected, regardless of whether the CRC was correct. This is called the anchor point. It is a different point in time from the one used for the supervision timeout.
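The distinction matters: any detected packet (an access-address match) is good enough to re-anchor the slave's timing, but only valid data resets the supervision timer. A toy model of the two timers (the class and method names are mine, not a real stack's API):

```python
class SlaveLinkLayer:
    """Toy model of the anchor point vs. the supervision timer."""

    def __init__(self):
        self.anchor = None         # time of last *detected* master packet
        self.last_valid_rx = None  # time of last CRC-valid packet

    def on_packet_detected(self, now, crc_ok):
        # Any detected packet from the master re-anchors the slave's
        # timing, even if the payload CRC fails...
        self.anchor = now
        # ...but only a CRC-valid packet counts against the
        # supervision timeout.
        if crc_ok:
            self.last_valid_rx = now

    def supervision_expired(self, now, supervision_timeout):
        return (now - self.last_valid_rx) > supervision_timeout
```

So a run of corrupted packets keeps the hop timing tightly synchronized while still counting down toward a dropped connection.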

The amount of time between the anchor point and the next expected packet is multiplied by the combined master + slave clock accuracy (e.g. 500 ppm each), and 16 microseconds are added, to obtain the receive window widening. The slave then listens for this amount of time both before and after the expected packet arrival time.
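That calculation, sketched out (500 ppm per side and the 50 ms interval are example values; the names are mine, not a spec API):

```python
def window_widening_us(master_sca_ppm, slave_sca_ppm, time_since_anchor_us):
    """Receive window widening as described above: the slave opens its
    receiver this long before the expected packet time and keeps it
    open this long after."""
    combined = (master_sca_ppm + slave_sca_ppm) / 1_000_000
    return combined * time_since_anchor_us + 16  # +16 us fixed allowance

# Example: 500 ppm each side, 3 missed connection events of 50 ms each.
w = window_widening_us(500, 500, 3 * 50_000)
print(w)  # 166.0 microseconds
```

This is the "self-timed" mechanism the question asks about: the window grows linearly with every missed event, and snaps back to its minimum as soon as a packet from the master is detected and a fresh anchor point is set.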

