Frequently Asked Questions
How will updated Semtech solvers be made available to the market; system integrators, network operators and end customers?
Semtech will make the updated solvers available directly to licensees of the location service; in general, this is the system integrator. Semtech provides the solvers as source code and, following receipt, the system integrator chooses whether to integrate the new version into their product. Since the solver is provided as source code, development of the solvers is not expected to be limited to the work conducted within Semtech; third parties are expected to become involved and create differentiation in the marketplace. Therefore, the schedule of updates should be discussed between the end user and the provider of the location service.
Gateway firmware updates are released by Semtech to (i) gateway manufacturer licensees and (ii) location service licensees. Firmware updates are expected to be infrequent, no more often than annually unless an urgent bug fix is required. The location service licensees are responsible for integrating the new firmware into their solution and planning the in-field deployment to the gateways. For further details, contact the system integrator partner or location service provider.
The time-stamping firmware has been extensively tested. Its resolution of the received signal's arrival time is very close to the Cramér-Rao lower bound for the estimation of arrival time. As shown in the graph below, the uncertainty depends on the signal-to-noise ratio at the receiver relative to the sensitivity level. The time-stamping firmware is not expected to make any big leaps forward. There may be some minor improvements but, in general, improvements are expected to come from the solver and from statistical analysis of packets and location, rather than from better time-stamps.
As can be seen, the estimation uncertainty reduces very rapidly with increasing signal-to-noise ratio above the sensitivity threshold. This is one of the reasons why antenna diversity is important: when a packet arrives with a poor signal-to-noise ratio on one antenna, there is a chance that the second antenna will receive it at a significantly better signal-to-noise ratio and hence produce a much improved arrival-time estimate.
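As a rough illustration (not Semtech's actual firmware model), the Cramér-Rao-style bound on time-of-arrival error can be sketched as σ_t ≈ 1/(2πβ√SNR), where β is the effective signal bandwidth; the 125 kHz channel width and the constant factor here are assumptions for the sketch:

```python
import math

def toa_uncertainty_m(snr_db, beta_hz=125e3, c=3e8):
    """Illustrative Cramer-Rao-style time-of-arrival bound, in metres.

    sigma_t ~ 1 / (2 * pi * beta * sqrt(snr)); beta_hz is the assumed
    effective signal bandwidth (a 125 kHz LoRa channel here).
    """
    snr = 10 ** (snr_db / 10)                       # dB -> linear
    sigma_t = 1 / (2 * math.pi * beta_hz * math.sqrt(snr))
    return c * sigma_t                              # seconds -> metres

# Uncertainty shrinks quickly as SNR rises above the sensitivity threshold
for snr_db in (0, 6, 12, 18):
    print(f"{snr_db:2d} dB SNR -> ~{toa_uncertainty_m(snr_db):.0f} m")
```

This reproduces the qualitative shape of the graph: each 6 dB of extra SNR halves the bound.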
Which frequency offset has to be taken into account to avoid disturbance of 3G and 4G telecom networks?
The gateway reference designs are specified to coexist with 3G, and more importantly with 4G base stations, when co-located with a 4G base station with 50 dB of antenna isolation. The main risk is the injection by the LoRa® GW of noise into the LTE uplink band 20, but this is taken care of by the Semtech reference designs.
No data is available on using sectored antennas with LoRa location at this time.
The GPS time-base has been measured to contribute, in general, less than 20-30 meters of uncertainty to the calculation. A more stable time-base, one that mitigates the problems of GPS (GNSS) transmissions and multipath, would remove one of the uncertainties. Semtech has measured several different GNSS receivers with a variety of antennas, and it is possible to improve the precision. The GPS time-base error is, however, one of the smaller contributors to the uncertainty in the resolved location.
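To put the 20-30 meter figure in time-domain terms, a ranging error converts to a timing error via the speed of light, as this small sketch shows:

```python
C = 299_792_458  # speed of light in vacuum, m/s

def timing_error_ns(distance_m):
    """Timing error (in ns) corresponding to a given ranging error."""
    return distance_m / C * 1e9

# The 20-30 m GPS time-base contribution maps to roughly 67-100 ns
print(f"{timing_error_ns(20):.1f} ns")  # ~66.7 ns
print(f"{timing_error_ns(30):.1f} ns")  # ~100.1 ns
```

So improving the recovered time-base by tens of nanoseconds only shaves off tens of meters, consistent with it being one of the smaller error contributors.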
It is feasible to create a near-line-of-sight deployment in a rural area. It is almost impossible to create a true line-of-sight deployment (i.e., one without any multipath component in the received signal). Semtech has conducted experiments with near-line-of-sight deployments and, even without keeping the first Fresnel zone clear (due to lack of antenna height), very good results were achieved.
Semtech states that target gateway diversity is 4 for location and 2 for data, does that mean that a deployment for location requires twice as many gateways?
The answer to this question is 'not necessarily'. A gateway diversity of at least 2 is always recommended for data coverage for a fixed sensor, for a number of reasons including minimization of black spots in coverage, downlink collisions, and interference local to the gateway. To answer the question on the density of gateways required for location, one first has to answer:
a: is the data coverage planned to cover indoor?
b: Is the location required primarily outdoor?
If the answer to both of the questions above is 'yes', then the number of additional gateways required for location is likely to be very small. The reason is that the path loss from outdoors to indoors is normally well in excess of 20 dB. Therefore, if a deployment provides the recommended diversity of two indoors, then it is highly likely that outdoors in a similar location the gateway diversity will be 4.
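The reasoning above can be sketched with a hypothetical link budget. The path losses and the 150 dB budget below are illustrative numbers, not measured values; only the ~20 dB building penetration loss comes from the text:

```python
def gateways_in_range(path_losses_db, budget_db, penetration_db=0):
    """Count gateways whose link still closes, given per-gateway path
    loss (dB), an available link budget (dB), and any extra building
    penetration loss for an indoor sensor."""
    return sum(1 for pl in path_losses_db
               if pl + penetration_db <= budget_db)

# Hypothetical path losses from a sensor's position to five gateways (dB)
losses = [120, 128, 135, 141, 155]
budget = 150  # illustrative maximum coupling loss

indoor = gateways_in_range(losses, budget, penetration_db=20)
outdoor = gateways_in_range(losses, budget)
print(indoor, outdoor)  # 2 indoors -> 4 outdoors at the same spot
```

Removing the ~20 dB penetration loss brings more distant gateways back within the budget, which is exactly why indoor diversity 2 tends to imply outdoor diversity 4.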
If, however, indoor coverage is targeted with a diversity of only one, or not at all, then additional gateways should be planned to operate the location service.
If gateways are using GPS for precision timing, do you have any guidelines for the GPS installation?
There is a lot of data confirming that GPS timing performance is significantly affected by (1) the constellation of satellites in view and (2) interference or multipath. The recommendation is to choose the GPS antenna location carefully to give the maximum 'view' of the open sky and, if possible, to reduce the effects of signal reflections from other sources. Semtech can provide a short study on the effects of GPS antennas and chipset configuration on the timing accuracy of the recovered 1 PPS signal. For deployment, the main influence is the antenna installation.
Results show that location uncertainty reduces with the number of packet repetitions, but the effect diminishes after 6 to 8 packets, why is that? Would increasing the number of channels help?
Evidence suggests that increasing the number of channels would have a positive impact on accuracy, since it would increase frequency diversity. Any kind of diversity helps the algorithms in the solver make better choices and weight the presented data more effectively. So far, field data has only been collected for 8-channel European LoRaWAN deployments. Once more data is available, this assumption can be validated and more detail added.
Results show that location uncertainty reduces with the number of packet repetitions, but the effect diminishes after 6 to 8 packets. Why is that, if I need better precision can I just send more packets?
To some extent the answer is yes; however, that is not the whole story. The main benefit of packet repetition comes from the fact that the different packets are sent on different frequencies. Since the channel selection is random, there is no guarantee that 8 packets will cover all the channels in an 8-channel system; however, one would expect that, statistically, most of the channels would have been used. The effect of changing the frequency is to change the multipath experienced by the signal bouncing off buildings and other objects. Once the algorithm has received a packet on each available frequency, the further improvement becomes rather small. There is some variance of the multipath over time, but field data suggests that it is much smaller than the frequency-related variance and would therefore give minimal benefit. So, for an N-channel LoRaWAN deployment, most of the benefit is obtained with approximately N packets transmitted.
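The diminishing return after 6 to 8 packets follows directly from uniform random channel selection, as a quick Monte Carlo sketch shows (the trial count and seed are arbitrary):

```python
import random

def expected_channels_hit(n_channels, n_packets, trials=20_000, seed=1):
    """Monte Carlo estimate of how many distinct channels end up being
    used when each packet picks its channel uniformly at random."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        total += len({rng.randrange(n_channels) for _ in range(n_packets)})
    return total / trials

for k in (4, 8, 16, 24):
    print(f"{k:2d} packets -> ~{expected_channels_hit(8, k):.2f} of 8 channels")
```

After ~8 packets most of the 8 channels have been hit, so extra repetitions add little new frequency diversity; only the slow time-variation of multipath remains.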
As of June 2016, the only non-urban data available is from a rural near-line-of-sight trial. The trial is detailed in a separate report, but in summary, the mean uncertainty measured in this near-line-of-sight trial was 20 to 50 metres, as shown below:
The length of a receive window must be at least the time required by the end-device's radio transceiver to effectively detect a downlink preamble. A minimum of 6 symbols is required to do so, e.g.:
- For BW = 125kHz, SF7 it should be at least 6.1ms
- For BW = 125kHz, SF12 it should be at least 196.6ms
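These figures follow from the LoRa symbol duration, T_sym = 2^SF / BW, multiplied by the 6-symbol minimum:

```python
def min_rx_window_ms(sf, bw_hz, n_symbols=6):
    """Minimum receive-window length in ms: n_symbols LoRa symbols,
    where one symbol lasts T_sym = 2**SF / BW seconds."""
    t_sym_s = (2 ** sf) / bw_hz
    return n_symbols * t_sym_s * 1e3

print(round(min_rx_window_ms(7, 125_000), 1))   # 6.1 ms
print(round(min_rx_window_ms(12, 125_000), 1))  # 196.6 ms
```

Each step up in spreading factor doubles the symbol time, hence the large gap between SF7 and SF12.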
In the Class B scheme, the latency is not defined by the number of nodes but by the latency that the node requests from the network. If it negotiates 32 seconds with the network, then on average it will listen to the network every 32 seconds. When the Class B load increases on a specific gateway, the impact is not delay but potentially more than one device sharing a defined "meeting point": in other words, if the gateway runs out of time slots, it may assign the same time slot to multiple devices. This causes those devices to lengthen their listening time (and therefore consume more energy) even when the "other" device is being interrogated. This impact is obviously mild, given that network actuation is meant to be scarce (typically a few times per day).
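Per the LoRaWAN Class B specification, the negotiated listening interval comes from a ping-slot periodicity parameter within the fixed 128-second beacon period; a minimal sketch of that relationship:

```python
BEACON_PERIOD_S = 128  # LoRaWAN Class B beacon period

def ping_interval_s(periodicity):
    """Average interval between Class B ping slots.

    The device opens pingNb = 2**(7 - periodicity) ping slots per
    beacon period, for periodicity values 0..7."""
    ping_nb = 2 ** (7 - periodicity)
    return BEACON_PERIOD_S / ping_nb

# A device that negotiates periodicity 5 listens every 32 s on average
print(ping_interval_s(5))  # 32.0
print(ping_interval_s(0))  # 1.0 (the most frequent setting)
```

The 32-second example in the text corresponds to periodicity 5.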
If the end-device did not receive the downlink frame during the first receive window "RX1", it must open a second receive window "RX2". Note that the end-device does not open the second receive window if, during the first receive window, it received a frame whose address and MIC (message integrity code) checked correctly, indicating the frame was intended for this end-device.
It is region specific. For EU863-870, the maximum application payload length is:
- 51 bytes at SF12 / 125 kHz (lowest data rate)
- 51 bytes at SF11 / 125 kHz
- 51 bytes at SF10 / 125 kHz
- 115 bytes at SF9 / 125 kHz
- 222 bytes at SF8 / 125 kHz
- 222 bytes at SF7 / 125 kHz
- 222 bytes at SF7 / 250 kHz
- 222 bytes at FSK / 50 kbps
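The table above can be expressed as a simple lookup, which an application might use to size its payloads before the data rate is known; the dictionary keys are an assumed naming scheme, and the byte counts are the figures listed above:

```python
# EU863-870 maximum application payload per data rate (from the table above)
MAX_PAYLOAD_EU868 = {
    "SF12/125kHz": 51,
    "SF11/125kHz": 51,
    "SF10/125kHz": 51,
    "SF9/125kHz": 115,
    "SF8/125kHz": 222,
    "SF7/125kHz": 222,
    "SF7/250kHz": 222,
    "FSK/50kbps": 222,
}

def max_payload(data_rate):
    """Maximum application payload (bytes) for an EU868 data rate."""
    return MAX_PAYLOAD_EU868[data_rate]

print(max_payload("SF9/125kHz"))  # 115
```

A payload of at most 51 bytes is therefore the safe choice if the device may be pushed down to the lowest data rates by ADR.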
ADR stands for Adaptive Data Rate. The ADR feature is used to adapt and optimize the following parameters of a static end-device:
- Data rate,
- Tx power level,
- Channel mask,
- The number of repetitions for each uplink message.
It is the end-device that decides to enable ADR. Once ADR is requested by the end-device, the network can optimize the end-device's data rate, Tx power, channel mask, and the number of repetitions for each uplink message.
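In the LoRaWAN frame format, the end-device signals this request via the ADR flag, the most significant bit of the FCtrl byte in the frame header; a minimal sketch of setting and clearing it:

```python
ADR_BIT = 1 << 7  # ADR flag: most significant bit of the FCtrl byte

def set_adr(fctrl, enabled=True):
    """Set or clear the ADR flag in an uplink FCtrl byte."""
    if enabled:
        return fctrl | ADR_BIT
    return fctrl & ~ADR_BIT & 0xFF

fctrl = 0x00
fctrl = set_adr(fctrl)  # device asks the network to manage its data rate
print(hex(fctrl))  # 0x80
```

With the bit set, the network server is free to send LinkADRReq MAC commands adjusting the parameters listed above.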
Commissioning (or on-boarding) an end-device on a network is the process of securely transferring to the network server database and to the device:
- The end-device’s DevAddr.
- The end-device’s Network Session Key (NwkSKey) and Application Session Key (AppSKey).
- The destination (IP address of the application server) to which the end-device's uplink frames should be routed.
- The end-device's important characteristics (class, type, short description).
This happens only once, at the start of the end-device's life.
- RECEIVE_DELAY1 is a fixed, configurable delay in seconds. The default is 1 second. If the RECEIVE_DELAY1 implemented in the end-device differs from the default value, it must be communicated to the network using an out-of-band channel during the end-device commissioning process. The network server may not accept a parameter different from its default value.
- RX1 frequency uses the same frequency channel as the uplink.
- RX1 Data Rate is programmable and can be equal to or lower than the uplink data rate. By default, the first receive window's data rate is identical to the data rate of the last uplink.
- RECEIVE_DELAY2 is a fixed, configurable delay in seconds. It must be RECEIVE_DELAY1 + 1 second; the default is 2 seconds. If the RECEIVE_DELAY2 implemented in the end-device differs from the default value, it must be communicated to the network using an out-of-band channel during the end-device commissioning process. The network server may not accept a parameter different from its default value.
- RX2 frequency is a fixed configurable frequency.
- RX2 Data Rate is a fixed configurable data rate.
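Putting the delays above together, the Class A receive-window schedule can be sketched as follows (default delay values assumed):

```python
RECEIVE_DELAY1_S = 1.0                      # default, configurable
RECEIVE_DELAY2_S = RECEIVE_DELAY1_S + 1.0   # must be RECEIVE_DELAY1 + 1 s

def rx_window_opens(uplink_end_s):
    """Times at which a Class A end-device opens RX1 and RX2,
    measured from the end of its uplink transmission."""
    return (uplink_end_s + RECEIVE_DELAY1_S,
            uplink_end_s + RECEIVE_DELAY2_S)

rx1, rx2 = rx_window_opens(0.0)
print(rx1, rx2)  # 1.0 2.0
```

The network server must therefore have any downlink queued at the gateway in time for one of these two fixed windows.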
It is region specific. For EU863-870, it is from 250 bps to 11 kbps with LoRa® modulation and up to 50 kbps in FSK modulation mode.