Hyperledger Besu max connection configuration - networking

Today I started a node using the Hyperledger Besu client on a PoA network. The network is made up of 13 validators and has been active for about a year.
My node took about 40 minutes to synchronize but generated a peak of 30k connections/s. Is it normal for the node to generate this many connection requests per second?
Do you know a way to set a maximum number of connections?
Thanks!
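One thing worth checking: current Besu releases document a --max-peers option that caps the number of concurrent P2P peer connections (the default is 25). A hedged sketch, with 20 as a purely illustrative value; note this limits concurrent peers rather than inbound connection attempts per second:

    # config.toml (illustrative value)
    max-peers=20

    # or directly on the command line
    besu --max-peers=20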

Related

Loss of variables in ThingsBoard using HTTP

I am studying the time it takes to send different variables to the same device using the HTTP protocol. The first test I carried out was to send a single variable to a device more than 50 times; however, when analyzing the results in ThingsBoard to verify that the variable had been sent 50 times, I realized that the platform did not receive all 50 transmissions and the last 7 were lost. After running different tests to study this phenomenon, I have concluded that beyond about 40 transmissions, most of them are lost. Is it known why this happens?
Please have a look at the troubleshooting guide in the documentation:
https://thingsboard.io/docs/user-guide/troubleshooting/
There you can check whether there are any timeouts or processing errors.
I have sometimes seen the same behaviour when using the in-memory queue. After switching to Kafka as the queue, I was able to resolve this problem:
https://thingsboard.io/docs/user-guide/install/ubuntu/?ubuntuThingsboardQueue=kafka#step-4-choose-thingsboard-queue-service
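If it helps, on a Ubuntu install that switch is made through environment variables in the ThingsBoard configuration file. A hedged sketch, assuming a local Kafka broker on the default port; check the linked guide for the exact variable names in your version:

    # /etc/thingsboard/conf/thingsboard.conf
    export TB_QUEUE_TYPE=kafka
    export TB_KAFKA_SERVERS=localhost:9092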

BizTalk send port retry interval and retry count

There is one dynamic send port (request/response) in my orchestration.
The request is sent to an external system and the response is received back in the orchestration. There is a chance the external system has a monthly maintenance window of 2 days. To handle that scenario:
If I set the retry interval to 2 days, will it impact performance? Is it a good idea?
I wouldn't think it is a good idea, as even a transitory error of another type would then mean that the message would be delayed by two days.
As maintenance is usually scheduled, either stop the send port (but don't unenlist it) or stop the receive port that picks up the messages to send (preferable, especially if it is high volume), and start them again after the maintenance period.
The other option would be to build that logic into the orchestration, so that if it catches an exception it increases the retry interval on each retry (a generic sketch of that pattern follows). However, as above, if it is high volume you might be better off switching off the receive location, as otherwise you will have a high number of running instances.
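As a rough illustration of that pattern (not BizTalk/XLANG code, just the general idea in Java; the names, the starting interval, and the doubling policy are all illustrative):

    // Generic retry loop that widens the wait after every failure.
    public final class RetryWithBackoff {
        public static void main(String[] args) throws InterruptedException {
            long intervalMillis = 10 * 60 * 1000L; // start at 10 minutes (illustrative)
            int maxRetries = 10;                   // illustrative cap
            for (int attempt = 1; attempt <= maxRetries; attempt++) {
                try {
                    callExternalSystem();          // placeholder for the actual send
                    return;                        // success: stop retrying
                } catch (RuntimeException e) {
                    if (attempt == maxRetries) {
                        throw e;                   // out of retries: give up
                    }
                    Thread.sleep(intervalMillis);  // wait before the next attempt
                    intervalMillis *= 2;           // increase the interval each retry
                }
            }
        }

        private static void callExternalSystem() {
            // placeholder: the call to the external system would go here
        }
    }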
Set a service window on the send port if you know when the receiving system will be down. If the schedule is unknown, I would rather set:
retry count = 290
retry interval = 10 minutes
so that the messages keep being retried for just over two days (290 × 10 minutes = 2,900 minutes ≈ 48.3 hours).

BizTalk 2013 R2 - Rate based Throttling on flooding messages

We have a solution that takes a message, and sends it to a web API.
Every day, an automatic procedure run by another department passes thousands of records into the MessageBox, which seems to cause errors on the API solicit-response port (strangely, these errors don't allude to a timeout, but they only trigger when such a massive quantity of data is sent downstream).
I've contacted the service supplier to determine the capacity of their API calls, so I'll be able to tailor our flow once I have a better idea.
I've been reading up on rate-based throttling this morning, and have a few questions I can't find an answer to:
If throttling is enabled, does it only process the Minimum number of samples/messages? If so, what happens to the remaining messages? I read somewhere that they're queued in memory, but only up to a maximum of 100, so where do all the others go?
If I have 2350 messages flood through in the space of 2 seconds, and I want to control the flow, would changing my Sampling Window duration down to 1 second and setting Throttling override to initiate throttling make a difference?
If you are talking about the Host Throttling settings, the remaining messages will sit in the MessageBox database and will show as being in a Dehydrated state.
You would have to test the throttling settings under load. If you get them wrong it can be very bad: I've come across one server where the settings were configured incorrectly and it was constantly throttling.

NiFi HandleHttpResponse 503 Errors

In NiFi, I have an HTTP endpoint that accepts POST requests with payloads varying from 7 KB to 387 KB or larger (up to 4 MB). The goal is a clustered implementation capable of handling approximately 10,000 requests per second. However, whether NiFi is clustered with 3 nodes or running as a single instance, I've never been able to average more than 15-20 requests/second without the Jetty service returning a 503 error. I've tried reducing the time penalty and increasing the number of Maximum Outstanding Requests in the StandardHttpContextMap. No matter what I try, whether on my local machine or on a remote VM, I cannot get any impressive number of requests to go through.
Any idea why this is occurring and how to fix this? Even when clustering, I notice one node (not even the primary node) does the majority of the work and I think this explains why the throughput isn't much higher for a clustered implementation.
Regardless of the bulletin level, this is the error I get in the nifi-app.log:
2016-08-09 09:54:41,568 INFO [qtp1644282758-117] o.a.n.p.standard.HandleHttpRequest HandleHttpRequest[id=6e30cb0d-221f-4b36-b35f-735484e05bf0] Sending back a SERVICE_UNAVAILABLE response to 127.0.0.1; request was POST 127.0.0.1
This is the same whether I'm running just two processors (HandleHttpRequest and HandleHttpResponse) or my general flow, where I'm routing on content, replacing some text, and writing to a database or JMS messaging system. I can get higher throughput (up to 40 requests/sec) when running just the web service without the entire flow, but it still has a KO rate of about 90%, so it's not much better; it still seems to be an issue with the Jetty service.
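For reference, the knobs I've been adjusting live on the StandardHttpContextMap controller service; a hedged sketch of the two relevant properties (the values are illustrative, and the names are worth double-checking against your NiFi version):

    StandardHttpContextMap (controller service)
        Maximum Outstanding Requests : 5000
        Request Expiration           : 1 min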

JBoss 5.1 Servlet repeats request after 60 seconds

We have a servlet that accepts requests and performs certain actions on external systems. Many times these external systems respond slowly and the requests take longer than 60 seconds. In the log we notice that exactly after 60 seconds a new request is made to the servlet (with the same post parameters) as long as the client is still connected.
Googling shows that the same thing is reported on other app servers such as GlassFish. The reason seems to be that after a 60-second timeout, the servlet container times out the call and repeats the request. Note that this appears to be a servlet- or container-initiated retry and not really a re-post from the client. The way to avoid this is apparently to increase the timeout. (Read more on a similar issue: Java - multiple requests from two WebContainer threads)
I increased the connectionTimeout in deploy/jbossweb.sar/server.xml to 120000 (2 minutes), but the call still repeats after exactly 60 seconds.
Any idea how to increase the timeout or to avoid this behaviour in JBoss?
Thanks
Srini
Found the issue. The problem had nothing to do with JBoss at all. Our JBoss servers run on Amazon EC2 instances behind an ELB load balancer. The AWS ELB times out after 60 seconds of idle time and resubmits the request.
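For anyone hitting the same thing: the classic ELB idle timeout defaults to 60 seconds and can be raised above the duration of the slow backend calls. A hedged sketch using the AWS CLI (my-loadbalancer is a placeholder name; the same setting is also available in the console's connection settings):

    aws elb modify-load-balancer-attributes \
        --load-balancer-name my-loadbalancer \
        --load-balancer-attributes '{"ConnectionSettings":{"IdleTimeout":120}}'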
