
Misty Community Forum

Expected round-trip times in REST API?

I have noticed that the total request-response time of REST API commands is somewhat slow: often 400 ms or more (sometimes approaching 1000 ms), though occasionally fast, e.g., 40 ms.

I have carefully tested different wireless network configurations and concluded that the cause is almost surely not my network. Furthermore, my non-Misty wireless devices, such as a Raspberry Pi named “molly”, have much lower ping times; e.g.,

$ ping molly
PING molly (192.168.2.161) 56(84) bytes of data.
64 bytes from molly (192.168.2.161): icmp_seq=1 ttl=64 time=6.45 ms
64 bytes from molly (192.168.2.161): icmp_seq=2 ttl=64 time=5.51 ms
64 bytes from molly (192.168.2.161): icmp_seq=3 ttl=64 time=6.62 ms
64 bytes from molly (192.168.2.161): icmp_seq=4 ttl=64 time=6.17 ms
^C
--- molly ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3003ms
rtt min/avg/max/mdev = 5.511/6.188/6.623/0.422 ms

$ ping 192.168.2.214
PING 192.168.2.214 (192.168.2.214) 56(84) bytes of data.
64 bytes from 192.168.2.214: icmp_seq=1 ttl=64 time=400 ms
64 bytes from 192.168.2.214: icmp_seq=2 ttl=64 time=505 ms
64 bytes from 192.168.2.214: icmp_seq=3 ttl=64 time=427 ms
64 bytes from 192.168.2.214: icmp_seq=4 ttl=64 time=143 ms

Am I missing something?

Here are several calls to GET /api/battery via curl together with execution durations:

$ time curl  192.168.2.134/api/battery
{"result":{"chargePercent":1.0,"created":"2020-05-01T22:53:26.723011Z","current":0.086,"healthPercent":null,"isCharging":true,"sensorId":"charge","state":"Charging","temperature":0,"trained":false,"voltage":8.374},"status":"Success"}
curl 192.168.2.134/api/battery  0.00s user 0.01s system 21% cpu 0.040 total

$ time curl  192.168.2.134/api/battery
{"result":{"chargePercent":1.0,"created":"2020-05-01T22:53:28.2270119Z","current":0.1,"healthPercent":null,"isCharging":true,"sensorId":"charge","state":"Charging","temperature":0,"trained":false,"voltage":8.375},"status":"Success"}
curl 192.168.2.134/api/battery  0.01s user 0.00s system 2% cpu 0.467 total

$ time curl  192.168.2.134/api/battery
{"result":{"chargePercent":1.0,"created":"2020-05-01T22:53:54.9199124Z","current":0.102,"healthPercent":null,"isCharging":true,"sensorId":"charge","state":"Charging","temperature":0,"trained":false,"voltage":8.375},"status":"Success"}
curl 192.168.2.134/api/battery  0.01s user 0.00s system 1% cpu 0.802 total

$ time curl  192.168.2.134/api/battery
{"result":{"chargePercent":1.0,"created":"2020-05-01T22:53:55.9611052Z","current":0.077,"healthPercent":null,"isCharging":true,"sensorId":"charge","state":"Charging","temperature":0,"trained":false,"voltage":8.373},"status":"Success"}
curl 192.168.2.134/api/battery  0.00s user 0.00s system 5% cpu 0.172 total
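For anyone who wants to reproduce these measurements with more samples, here is a minimal Python sketch. The IP address and endpoint are the ones from my curl examples above; `measure_rtt` and `summarize` are just helper names I made up for this post, and the sample count is arbitrary:

```python
import statistics
import time
import urllib.request


def measure_rtt(url, samples=20):
    """Time repeated GET requests; return round-trip times in milliseconds."""
    times_ms = []
    for _ in range(samples):
        start = time.perf_counter()
        with urllib.request.urlopen(url) as resp:
            resp.read()  # drain the body so the full response is counted
        times_ms.append((time.perf_counter() - start) * 1000.0)
    return times_ms


def summarize(times_ms):
    """Summarize latencies in the style of ping's min/avg/max/mdev line."""
    return {
        "min": min(times_ms),
        "avg": statistics.mean(times_ms),
        "max": max(times_ms),
        "mdev": statistics.pstdev(times_ms),
    }


# Example (assumes a Misty reachable at 192.168.2.134, as above):
# rtts = measure_rtt("http://192.168.2.134/api/battery")
# print("min/avg/max/mdev = {min:.1f}/{avg:.1f}/{max:.1f}/{mdev:.1f} ms"
#       .format(**summarize(rtts)))
```

Printing the summary in ping's format makes it easy to compare directly against the ICMP numbers.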

My informal observation is that round-trip time performance is best shortly after reboot, but it degrades after the robot is powered on continuously for a while. Just now, immediately after reboot, round-trip time of GET /api/battery is below 200 ms on almost all calls, with occasional deviation to 500 ms or so.

In contrast, after being continuously powered on for several days, the performance is much worse, as described earlier in this post, including some times above 1500 ms.
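To make the uptime-correlated degradation easier to report, one could log latency samples over a long window and look at the trend. A rough sketch, assuming the same endpoint as above (the file name, interval, and duration are arbitrary, and failed requests are recorded as NaN):

```python
import csv
import time
import urllib.request


def log_latency(url, path="latency_log.csv", interval_s=60, duration_s=3600):
    """Append (elapsed seconds, round-trip ms) rows to a CSV file."""
    start = time.monotonic()
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        while time.monotonic() - start < duration_s:
            t0 = time.perf_counter()
            try:
                with urllib.request.urlopen(url, timeout=5) as resp:
                    resp.read()
                rtt_ms = (time.perf_counter() - t0) * 1000.0
            except OSError:
                rtt_ms = float("nan")  # request failed or timed out
            writer.writerow(
                [round(time.monotonic() - start, 1), round(rtt_ms, 1)]
            )
            f.flush()  # keep the log current even if the script is killed
            time.sleep(interval_s)


# Example (assumes a Misty reachable at 192.168.2.134):
# log_latency("http://192.168.2.134/api/battery", duration_s=86400)
```

Running this from reboot onward should show whether the slowdown is gradual or happens in steps.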

I can work around this by rebooting more often, but this information might be useful for your engineering team: memory leaks, abandoned processes, socket leaks, etc., could cause this kind of gradual performance degradation.


Thanks @slivingston, it’s so helpful to see this quantified!

For the purposes of human-robot interaction, specifically education and therapeutic applications, it’s important to keep the response time below 1000 ms as much as possible (ideally close to 400 ms). Beyond 1500 ms is not acceptable for these applications, as it breaks continuity, trust, conversation, etc., particularly with impatient young learners.

Misty usually responds within a suitable time for my applications, but it’s helpful to know that there may be room for global system improvements to support longer-term deployments.