I am studying the time it takes to send different variables to the same device using the HTTP protocol. The first test I carried out was to send a single variable to a device 50 times; however, when analyzing the results in ThingsBoard to verify that the variable had been received 50 times, I realized that the platform did not receive all 50 transmissions and the last 7 were lost. After running different tests to study this phenomenon, I have concluded that from roughly 40 transmissions onward, most of them are lost. Is it known why this happens?
Please have a look at the troubleshooting guide in the documentation:
https://thingsboard.io/docs/user-guide/troubleshooting/
You should check whether there are any timeouts or processing errors.
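As a quick way to see such failures from the sending side, here is a minimal sketch (the host and access token are placeholders; it assumes the standard ThingsBoard device HTTP telemetry endpoint) that logs any non-200 responses or timeouts so lost sends show up immediately:

# Minimal test sketch; HOST and ACCESS_TOKEN are placeholders.
import time
import requests

HOST = "http://my-thingsboard-host:8080"      # placeholder ThingsBoard host
ACCESS_TOKEN = "YOUR_DEVICE_ACCESS_TOKEN"     # placeholder device access token
URL = f"{HOST}/api/v1/{ACCESS_TOKEN}/telemetry"

failures = 0
for i in range(50):
    try:
        # Each payload carries a counter so missing values are easy to spot in ThingsBoard.
        response = requests.post(URL, json={"testVar": i}, timeout=5)
        if response.status_code != 200:
            failures += 1
            print(f"send {i}: HTTP {response.status_code} - {response.text}")
    except requests.RequestException as exc:
        failures += 1
        print(f"send {i}: request failed: {exc}")
    time.sleep(0.1)  # short pause between sends

print(f"done, {failures} of 50 sends failed")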
I also witnessed the same behaviour occasionally when using the in-memory queue. After switching to Kafka as the queue, I was able to resolve this problem:
https://thingsboard.io/docs/user-guide/install/ubuntu/?ubuntuThingsboardQueue=kafka#step-4-choose-thingsboard-queue-service
Overview
I have a Go Echo HTTP server running on Go 1.13.
$ go version
go version go1.13.7 linux/amd64
I'm monitoring a number of different statistics about the server, including the number of goroutines. I periodically see brief spikes of thousands of goroutines, even though high load shouldn't push it past a few hundred. These spikes do not correlate with an increase in HTTP requests as logged by the labstack echo middleware.
To better debug this situation, I added a periodic check in the program which sends me a pprof report on the goroutines if the number spikes.
The added goroutines surprised me, as when the server is in "normal" operating mode, I see 0 goroutines of the listed functions.
goroutine profile: total 1946
601 # 0x4435f0 0x4542e1 0x8f09dc 0x472c61
# 0x8f09db net/http.(*persistConn).readLoop+0xf0b /usr/local/go/src/net/http/transport.go:2027
601 # 0x4435f0 0x4542e1 0x8f2943 0x472c61
# 0x8f2942 net/http.(*persistConn).writeLoop+0x1c2 /usr/local/go/src/net/http/transport.go:2205
601 # 0x4435f0 0x4542e1 0x8f705a 0x472c61
# 0x8f7059 net/http.setRequestCancel.func3+0x129 /usr/local/go/src/net/http/client.go:321
What I'm struggling with, however, is where these are coming from, what they indicate, and at what point in an http request would I expect them.
To my untrained eye, it looks as if something is briefly attempting to open a connection and then immediately trying to close it.
But it would be good to have confirmation of this. In what part of an http request do readLoop, writeLoop, and setRequestCancel goroutines get started? What do these goroutines indicate?
Notes
A few things I've looked at:
I tried adding middleware to capture request frequencies from IP addresses as they come in, and to report on those when the spikes happen. The total request number remains low, in the 30-40 range, even as this spike is happening. No IP address is anomalous.
I've considered executing something like lsof to find open connections but that seems like a tenuous approach at best, and relies on my understanding of what these goroutines mean.
I've tried to cross-correlate the timing of seeing this with other things on the network, but without understanding what could cause this, I can't make much sense of where the potential culprit may lie.
If the number of goroutines exceeds 8192, the program crashes with the error: race: limit on 8192 simultaneously alive goroutines is exceeded, dying. A search for this error leads me to this GitHub issue, which feels relevant because I am, in fact, using gorilla websockets in the program. However, the binary was compiled with -race and no race condition is reported along with my error, so my situation is entirely different from the aforementioned question.
I'm trying to send data from multiple ESP-8266 to feeds on my Adafruit IO account.
The problem is that when I try to send new values, I get banned from publishing because the 2-second time limit is violated when two or more of my MCUs happen to send data at the same time (I can't synchronize them to avoid this).
Is there any possible solution to this problem?
I suggest considering these three options:
A sending token which is passed from one ESP to the next. So basically no ESP is allowed to send by default; once an ESP receives the token it is allowed to send, waits the appropriate time limit, and then hands the token to the next ESP. This solution has all the Arduinos connected via an AP/router and would use client-to-client communication. It can be set up fail-safe, so if the next ESP is not available (reset, out of battery, etc.) you take the next one on the list and issue an additional warning to the server.
The second solution could be (more flexible and dynamic, but a SPOF, single point of failure) to set up one ESP as a data collector that does the sending.
If the ESPs are in different locations, you have to set them up so that they meet the following requirement:
If you have a free Adafruit IO Account, the rate limit is 30 data
points per minute.
If you exceed this limit, a notice will be sent to the
{username}/throttle MQTT topic. You can subscribe to the topic if you
wish to know when the Adafruit IO rate limit has been exceeded for
your user account. This limit applies to all Data record modification
actions over the HTTP and MQTT APIs, so if you have multiple devices
or clients publishing data, be sure to delay their updates enough that
the total rate is below your account limit.
So it is not a 2-second limit but 30 per minute (60 per minute on the Pro plan). You therefore limit each ESP's sending according to the formula:
30 / number of ESPs sending to Adafruit IO -> 30 / 5 = 6 ==> 5 incl. safety margin
This means each ESP is only allowed to send 5 times within a minute. Important: once the 5-send limit is used up, it HAS to wait for the minute to pass before the next send.
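As a rough illustration of that budget (a sketch only; the device count, sensor reading, and publish call are placeholders, and it assumes the free-tier limit of 30 data points per minute shared across all devices):

# Per-device rate budget sketch; the numbers follow the formula above.
import time

ACCOUNT_LIMIT_PER_MIN = 30   # free Adafruit IO account
NUM_DEVICES = 5
SAFETY_MARGIN = 1            # keep one point per minute in reserve per device

per_device_per_min = ACCOUNT_LIMIT_PER_MIN // NUM_DEVICES - SAFETY_MARGIN   # 30 / 5 = 6 -> 5
MIN_INTERVAL = 60.0 / per_device_per_min                                    # 12 seconds between sends

def read_sensor():
    return 42                      # placeholder measurement

def publish(value):
    print("publishing", value)     # placeholder for the real MQTT/HTTP publish

last_sent = 0.0
while True:
    now = time.monotonic()
    if now - last_sent >= MIN_INTERVAL:
        publish(read_sensor())
        last_sent = now
    time.sleep(0.5)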
The answer is simple: just don't send that frequently.
In the IoT world:
If data needs frequent updates (such as motor/servo position, accelerometer readings, etc.), you often want to keep it local and don't want or need to send it to the cloud.
If the data needs to be in the cloud, it often does not need to be updated so frequently (such as temperature/humidity).
Alternatively, if you still think your data is so critical that it needs to be updated that frequently, dedicate one ESP as your edge gateway to collect the data from the sensor nodes and send it to the cloud in one go; that is actually the proper way to design an IoT network with multiple sensor nodes.
If that still doesn't work for you, you still have the choice of paying for the premium service to raise the rate limit, or building your own cloud service and integrating it with your edge gateway.
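A rough sketch of the edge-gateway idea described above, assuming the local transport between the sensor nodes and the gateway is abstracted away as a queue and the cloud publish call is a placeholder:

# Edge-gateway sketch: collect the newest reading per node locally and
# publish one combined update per interval, so only the gateway counts
# against the account rate limit.
import json
import queue
import time

readings = queue.Queue()          # filled by whatever local transport the nodes use

def publish_to_cloud(payload):
    print("cloud publish:", payload)   # placeholder for the real Adafruit IO publish

PUBLISH_INTERVAL = 10             # seconds between cloud updates

def gateway_loop():
    latest = {}                   # newest value per node id
    next_publish = time.monotonic() + PUBLISH_INTERVAL
    while True:
        try:
            node_id, value = readings.get(timeout=1)
            latest[node_id] = value
        except queue.Empty:
            pass
        if time.monotonic() >= next_publish and latest:
            publish_to_cloud(json.dumps(latest))   # one request covers all nodes
            latest.clear()
            next_publish = time.monotonic() + PUBLISH_INTERVAL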
I am new to Apache Camel and Netty, and this is my first project. I am trying to use Camel with the Netty component to load balance heavy traffic in a back-end load test scenario. This is the setup I have right now:
from("netty:tcp:\\this-ip:9445?defaultCodec=false&sync=true").loadBalance().roundRobin().to("netty:tcp:\\backend1:9445?defaultCodec=false&sync=true,netty:tcp:\\backend2:9445?defaultCodec=false&sync=true)
The issue is the unexpected buffer sizes I am receiving in the responses seen at the client system that sends TCP traffic to Camel. When I send multiple requests one after the other I see no issues and the buffer size is as expected. But when I run multiple users sending similar requests to Camel on the same port, I intermittently see unexpected buffer sizes, ranging from 0 bytes to more than the expected number of bytes. I tried playing around with multiple options mentioned on the Camel Netty page, like:
Increasing backlog
keepAlive
buffersizes
timeouts
poolSizes
workerCount
synchronous
stream caching (did not work)
disabled useOriginalMessage for performance
System level TCP parameters, etc. among others.
I am yet to resolve the issue and am not sure if I'm fundamentally missing something. I did take a look at the encoders/decoders and wondered whether that could be the issue, but I don't understand why a load balancer needs to encode/decode messages. I have worked with other load balancers which just require endpoint configurations, so I am assuming that Camel does not require this either. Am I right? Note that the issue is not with my client or back end: I ran a 2000-user load test from my client directly to the back end with less than 1% failures, but I see a large number of failures (though not only failures) with Camel. I have the following questions:
1. Is this a valid use case for Apache Camel with Netty? Should I be looking at Mina or others?
2. Can I try to route TCP traffic to JMS or other components and then finally to the TCP endpoint?
3. Do I need encoders/decoders, or should this configuration work?
4. Should I continue with this approach or try some other load balancer?
Please let me know if you have any other suggestions. TIA.
Edit1:
I also tried the same approach with netty4 and mina components. The route looks similar to the one in netty. The route with netty4 is as follows:
from("netty4:tcp:\\this-ip:9445?defaultCodec=false&sync=true").to("netty4:tcp:\\backend1:9445?defaultCodec=false&sync=true")
I read a few posts which had the same issue but did not find any solution relevant to my issue.
Edit2:
I increased the receive timeout at my client and immediately saw the buffer-length mismatches fall to less than 1%. However, the response time per transaction is huge when going through Camel compared to not using it; almost 10 times higher. Can you help me reduce the response time of each transaction? The message received back at my client varies from 5000 to 20000 bytes. Here is my latest route:
from("netty:tcp://this-ip:9445?sync=true&allowDefaultCodec=false&workerCount=20&requestTimeout=30000")
.threads(20)
.loadBalance()
.roundRobin()
.to("netty:tcp://backend-1:9445?sync=true&allowDefaultCodec=false","netty:tcp://backend-2:9445?sync=true&allowDefaultCodec=false")
I also used certain performance enhancements like:
context.setAllowUseOriginalMessage(false);
context.disableJMX();
context.setMessageHistory(false);
context.setLazyLoadTypeConverters(true);
Can you point me in the right direction about how I can reduce the individual transaction times?
For the netty4 component there is no parameter called defaultCodec; it is called allowDefaultCodec. See http://camel.apache.org/netty4.html
Also, try something like this first.
from("netty4:tcp:\\this-ip:9445?textline=true&sync=true").to("netty4:tcp:\\backend1:9445?textline=true&sync=true")
The above means that the data being sent is normal text. If you are sending bytes or something else, you will need to provide decoding/encoding for Netty to handle the data.
And a side note: before running the Camel route, test manually by sending test messages via a standard TCP tool like SocketTest to verify that everything works, then implement the same via Camel. You can find SocketTest at http://sockettest.sourceforge.net/.
I finally solved the issue with the same route settings as above. The problem was that the request and response delimiters were not configured properly, so the connection was either being closed too early, leading to unexpected buffer sizes, or kept waiting too long even after the entire buffer had been received, leading to high response times.
I have seen a number of examples of Paho clients reading sensor data and then publishing it, e.g. https://github.com/jamesmoulding/motion-sensor/blob/master/open.py. None that I have seen start a network loop as suggested in https://eclipse.org/paho/clients/python/docs/#network-loop. I am wondering whether the network loop is unnecessary for publishing. Perhaps it is only needed if I am subscribed to something?
To expand on what @hardillb has said a bit, his point 2 "To send the ping packets needed to keep a connection alive" is only strictly necessary if you aren't publishing at a rate sufficient to match the keepalive you set when connecting. In other words, it's entirely possible the client will never need to send a PINGREQ and hence never need to receive a PINGRESP.
However, the more important point is that it is impossible to guarantee that calling publish() will actually complete sending the message without using the network loop. It may work some of the time, but could fail to complete sending a message at any time.
The next version of the client will allow you to do this:
m = mqttc.publish("class", "bar", qos=2)
m.wait_for_publish()
But this will require that the network loop is being processed in a separate thread, as with loop_start().
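For reference, a minimal sketch of that pattern (paho-mqtt 1.x style; the broker address is a placeholder):

import paho.mqtt.client as mqtt

mqttc = mqtt.Client()                         # paho-mqtt 1.x style constructor
mqttc.connect("broker.example.com", 1883, keepalive=60)   # placeholder broker
mqttc.loop_start()                            # runs the network loop in a background thread

msg_info = mqttc.publish("class", "bar", qos=2)
msg_info.wait_for_publish()                   # blocks until the QoS 2 flow completes

mqttc.loop_stop()
mqttc.disconnect()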
The network loop is needed for a number of things:
To deal with incoming messages
To send the ping packets needed to keep a connection alive
To handle the extra packets needed for high QOS
To send messages that take up more than one network packet (e.g. bigger than the local MTU)
The ping messages are only needed if you have a low message rate (less than 1 msg per keep alive period).
Given that you can start the network loop in the background on a separate thread these days, I would recommend starting it regardless.
I'm using SignalR with Redis as a message bus on a server that sits behind an Nginx proxy for load balancing. I used SignalR's PersistentConnection class to write a simple chat program that broadcasts messages to users belonging to the same certain group. Users are added to a group in OnConnectedAsync, removed in OnDisconnectAsync, and the user-to-group mapping is deterministic.
Currently, the client side falls back to long polling for whatever reason (I'm not entirely sure why), and whenever the client sets up a new connection after waiting for and receiving a response, the server will sometimes, seemingly at random, respond to the new connection immediately with the previous response, despite there having been only one POST.
The message IDs tend to differ by exactly one (the smaller ID coming first), with the rest of the response remaining the same. I logged some debug info and am quite positive that my override of OnReceivedAsync sends one response per request. I tried the same implementation without the Redis message bus and got the same problem. Running locally (with long polling), however, yielded good results, so I suspect the problem might lie in the way the message bus buffers messages to refresh clients that might not be caught up, combined with some odd timing in the cutting and setting up of connections with the Nginx load balancer, but beyond that I am very much at a loss.
Any help would be appreciated.
EDIT: Further investigation reveals that duplication occurs at somewhat regular intervals of approximately 20-30 seconds. I'm led to believe that the message expiration in the message bus might have something to do with the bug.
EDIT: Bug can be seen here: http://tinyurl.com/9q5t3va
The server is simply broadcasting a counter being sent by the client. You will notice some responses are duplicated every 20 or so.
Reducing the number of worker processes in the IIS (6.0) Server Manager from 2 to 1 solved the problem.