Resource unavailable error when issuing multiple write IO requests to a single file - io-uring

I have created an application which uses io_uring and generates many write IO requests to a single file. As a result, I get a resource temporarily unavailable error (OS error 11). Once I limit the number of simultaneous requests to 1K, the error is gone.
Could someone suggest how to detect the limit on the number of simultaneous requests I can issue using io_uring?
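As far as I know there is no single queryable limit; the practical approach is to bound the number of in-flight writes yourself (roughly to the ring size) and reap completions before submitting more. Below is a minimal sketch in C with liburing; the write_many helper, the QUEUE_DEPTH of 1024 and the fixed-size blocks are illustrative assumptions, not values mandated by io_uring.

```c
/* Minimal sketch, assuming liburing and an already-open fd: cap the number of
 * in-flight writes at the ring size and reap completions before queuing more,
 * so the kernel is never asked to hold an unbounded backlog of requests. */
#include <liburing.h>
#include <stdio.h>
#include <string.h>

#define QUEUE_DEPTH 1024  /* illustrative cap; tune for your workload */

int write_many(int fd, const char *buf, size_t blk, unsigned nblocks)
{
    struct io_uring ring;
    unsigned inflight = 0, submitted = 0;

    if (io_uring_queue_init(QUEUE_DEPTH, &ring, 0) < 0)
        return -1;

    while (submitted < nblocks) {
        /* Back-pressure: once QUEUE_DEPTH writes are outstanding,
         * wait for at least one completion before queuing another. */
        while (inflight >= QUEUE_DEPTH) {
            struct io_uring_cqe *cqe;
            if (io_uring_wait_cqe(&ring, &cqe) < 0)
                goto out;
            if (cqe->res < 0)
                fprintf(stderr, "write failed: %s\n", strerror(-cqe->res));
            io_uring_cqe_seen(&ring, cqe);
            inflight--;
        }

        struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
        if (!sqe) {
            io_uring_submit(&ring);  /* SQ ring full: flush queued entries */
            continue;
        }
        io_uring_prep_write(sqe, fd, buf, blk, (__u64)submitted * blk);
        io_uring_submit(&ring);
        submitted++;
        inflight++;
    }

    /* Drain the remaining completions. */
    while (inflight) {
        struct io_uring_cqe *cqe;
        if (io_uring_wait_cqe(&ring, &cqe) < 0)
            break;
        io_uring_cqe_seen(&ring, cqe);
        inflight--;
    }
out:
    io_uring_queue_exit(&ring);
    return 0;
}
```

If io_uring_submit itself returns -EAGAIN or -EBUSY, the usual advice is the same: wait for completions and retry rather than raising the cap.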

Related

Nginx: high number of open TCP connections

My web page got error 500 and went down. Checking my Nginx metrics in GCP, I noticed:
Too many open TCP connections.
Too many accepted and handled connections.
Too many writing connections.
A normal number of requests per minute for each distinct IP in access.log (compared with other days and months).
=> The drop in the graphs is because I restarted the server.
So, according to these metrics, I don't see any relation between the number of connections (TCP, accepted, handled and writing) and the requests in access.log.
Also, is this number of open TCP connections normal? I don't think so.
I'd appreciate your opinions and possible reasons why this happened.
500 is a server-side error that generally occurs when the server is unable to process a request. The web server throws a 500 Internal Server Error when it encounters an unexpected condition which prevents it from fulfilling the client's request.
Probable causes of a 500 error include:
Permission errors
Incorrect code in the .htaccess file
PHP timeouts
Syntax or code errors in CGI/Perl scripts
PHP memory limit exceeded

Mule SFTP inbound connector stopped polling files after processing 20K files

Use case: process a large number of files (30K files per day) using the SFTP inbound connector.
Issue: after processing 20K files, the SFTP inbound connector stops polling files and remains idle.
Current implementation: we have used the queued-asynchronous processing strategy at the flow level. The flow stopped after processing 20K files.
We got a similar issue when we tried the synchronous processing strategy, using minThread=8, threadWaitTimeout=-1.
At the SFTP connector level we used a threading configuration such as maxThreadsIdle=16.
Mule Runtime: 3.8.3
Mule Runtime: 3.8.3
Below are the exceptions we got while trying different approaches.
Root Exception stack trace:
java.util.concurrent.RejectedExecutionException: ThreadPoolExecutor did not accept within 30000 MILLISECONDS
Root Exception stack trace:
org.mule.api.service.FailedToQueueEventException: The queue for 'SEDA Stage mypi_gw_formsFlow.stage1' did not accept new event within -1 MILLISECONDS.
This issue and its solutions are explained in this KB article: https://support.mulesoft.com/s/article/Error-The-queue-for-SEDA-queue-name-did-not-accept-new-event-within-30000-MILLISECONDS

Perforce (P4) constantly fails with error WSAECONNABORTED when trying to submit a specific file

I'm not entirely sure what is going on here, but Perforce fails 100% of the time when trying to submit one very specific file in our depot. Things I have tried to solve this problem:
Submitting the file on its own or in a changelist with other files
Submitting with parallel sync and without it
Adjusting the batch and thread numbers
Submitting via the command line and the visual client
Increasing or turning off net.maxwait
Increasing or turning off net.keepalive.idle as well as net.keepalive.interval
Checking disk space on the server (150GB free)
Deleting the current instance of the file and copying a backup over top of the current one
The error messages I receive are:
From Client side CLI:
write: socket: WSAECONNABORTED
From Server logs:
TCP send failed. write: socket: WSAECONNABORTED Perforce client error:
TCP send failed.
write: socket: WSAECONNABORTED
Perforce is just flat-out refusing to let me submit this one particular file. It's a 63MB file, larger than some but smaller than others which have gone through successfully. We are on a 30Mbps connection, so I usually get the failure message after 10-20 seconds. I'm at a loss here, so any help or ideas would be greatly appreciated. Thanks!

Connection timed out error in JMeter test execution

When I run my JMeter scripts using the GUI, a few of the samplers sometimes get a 'Connection timed out' error and no response, but if I run the same test a few minutes later I get responses for those same samplers.
Can anybody please tell me what the solution for this is?
Currently I am checking the response time of each page; if I add timers, the page response time will show as higher, right?
There are at least 3 possible reasons:
Your server (meaning the web servers handling requests and any components behind them) is not handling the load correctly and is slowing down; monitor the system and check.
You have exhausted your injector's ephemeral ports; you need to adjust your OS TCP settings to increase the port range (see the sketch after this list).
You're running the load test in GUI mode with a View Results Tree listener in the test plan; this is bad practice, as GC will happen frequently, possibly triggering stop-the-world pauses that lead to this. As per best practices, use non-GUI mode:
https://jmeter.apache.org/usermanual/best-practices.html
https://www.ubik-ingenierie.com/blog/jmeter_performance_tuning_tips/
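For the second point, here is a quick way to see how close the injector is to port exhaustion (a Linux-only sketch; the 60-second TIME_WAIT figure used in the comment is a typical default, not a universal value):

```c
/* Sketch: read the injector's ephemeral port range to estimate how many
 * concurrent outbound connections one source IP can hold towards a single
 * destination ip:port before running out of local ports. Linux-only. */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/proc/sys/net/ipv4/ip_local_port_range", "r");
    int lo = 0, hi = 0;

    if (!f) {
        perror("ip_local_port_range");
        return 1;
    }
    if (fscanf(f, "%d %d", &lo, &hi) != 2) {
        fclose(f);
        return 1;
    }
    fclose(f);

    /* With sockets lingering in TIME_WAIT for ~60s, roughly (hi - lo) / 60
     * new connections per second per destination before ports run out. */
    printf("ephemeral ports: %d-%d (%d usable)\n", lo, hi, hi - lo + 1);
    return 0;
}
```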

What is HTTP Status code 000?

Just switched some downloads over to the Akamai CDN network and I'm seeing some strange stuff in the log files they deliver. A number of entries have the status code 000. When I asked them they said that 000 is the status when the client disconnects without transferring the entire file. Since 000 doesn't appear to be a valid HTTP response code (from the RFC), I have to wonder if that's right.
There's a knowledge base article (requires login) which lists their log values:
Log Delivery Services (LDS) LDS will show a 000 for any 200 or 206
responses with a client abort: the object was served correctly from
the origin or edge, but the end-user terminated the
connection/transaction before it completed.
This is indeed a custom status because the standard log format doesn't include a field which can indicate a client abort.
000 is a common code to use when no HTTP code was received due to a network error. According to a knowledge base article for Amazon CloudFront, 000 also means that the client disconnected before completing the request for that service.
It normally means: no valid HTTP response code
(i.e. the connection failed, or was aborted before any data was exchanged).
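To illustrate the convention (just an illustration, these CDN logs are obviously not produced by curl): libcurl reports a response code of 0 when no HTTP response was received, and zero-padding it to three digits gives exactly the "000" seen in such logs.

```c
/* Sketch: when a transfer fails before any HTTP response arrives,
 * CURLINFO_RESPONSE_CODE is 0, which prints as "000" when zero-padded. */
#include <curl/curl.h>
#include <stdio.h>

int main(void)
{
    long code = 0;
    CURL *curl;

    curl_global_init(CURL_GLOBAL_DEFAULT);
    curl = curl_easy_init();
    if (!curl)
        return 1;

    /* 192.0.2.1 (TEST-NET-1) is reserved, so the connection will not succeed. */
    curl_easy_setopt(curl, CURLOPT_URL, "http://192.0.2.1/");
    curl_easy_setopt(curl, CURLOPT_CONNECTTIMEOUT, 2L);
    curl_easy_perform(curl);                       /* fails: no response */
    curl_easy_getinfo(curl, CURLINFO_RESPONSE_CODE, &code);

    printf("%03ld\n", code);                       /* prints 000 */

    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return 0;
}
```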
I would guess that there are either network issues or Akamai isn't managing their web servers correctly.
