gRPC testing stats are not collected

I am following the gRPC example from the Locust examples and use the same locustfile.py, replacing the stub and server with my own stub (class and calls) and server. I can verify that the requests are sent successfully and that the responses are correct; however, Locust fails to collect any stats. The output looks like this for the entire run:
Name                          # reqs      # fails  |     Avg     Min     Max  Median  |   req/s  failures/s
------------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------------------
Aggregated                         0     0(0.00%)  |       0       0       0       0  |    0.00        0.00
What am I missing here? Any pointer or direction for debugging is appreciated.

If you replaced the stub and server with your own code, you're likely no longer firing the event that reports request stats to Locust. In the example you linked, that's on line 46.
events.request.fire(**request_meta)
It doesn't have to be on that exact line, but it has to be somewhere. When using custom clients like the gRPC client, you need to tell Locust what happened, and that's done with request events. Without them, Locust has no idea what the code you're running is doing.
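For illustration, here is a minimal sketch of that pattern, assuming grpc is installed and YourServiceStub is a hypothetical stand-in for your own generated stub class: time the call yourself, then report it through the request event.

import time
import grpc
from locust import User, events

class GrpcUser(User):
    abstract = True
    host = "localhost:50051"

    def on_start(self):
        self._channel = grpc.insecure_channel(self.host)
        self.stub = YourServiceStub(self._channel)  # hypothetical generated stub

    def grpc_call(self, name, method, request):
        # Measure the call and collect the metadata Locust expects.
        request_meta = {
            "request_type": "grpc",
            "name": name,
            "response_length": 0,
            "response": None,
            "context": None,
            "exception": None,
        }
        start = time.perf_counter()
        try:
            request_meta["response"] = method(request)
        except grpc.RpcError as e:
            request_meta["exception"] = e
        request_meta["response_time"] = (time.perf_counter() - start) * 1000
        # Without this line, the stats table stays at zero.
        events.request.fire(**request_meta)
        return request_meta["response"]

A concrete user would subclass this and call self.grpc_call(...) from its @task methods.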

Related

grpc-swift: How to set timeout for an RPC in Swift?

I am using https://github.com/grpc/grpc-swift for inter-process communication. I have a gRPC server written in Go that listens on a Unix domain socket, and a macOS app written in Swift that communicates with it over the socket.
Let's say the Go server process is not running and I make an RPC call from my Swift program. The default timeout before the call will fail is 20 seconds, but I would like to shorten it to 1 second. I am trying to do something like this:
let callOptions = CallOptions(timeLimit: .seconds(1)) // <-- Does not compile
This fails with compile error Type 'TimeLimit' has no member 'seconds'.
What is the correct way to decrease the timeout interval for Swift gRPC calls?
As the error says, TimeLimit doesn't have a member seconds. The seconds function you are trying to access belongs to TimeAmount. So if you want to use a deadline, you will need to use:
CallOptions(timeLimit: .deadline(.now() + .seconds(1)))
Here .now() is defined on NIODeadline, which has a + operator for adding a TimeAmount (check here).
And for a timeout:
CallOptions(timeLimit: .timeout(.seconds(1)))
Note that I'm not an expert in Swift, but I checked TimeLimitTests.swift and that seems to be the idea.
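Putting both forms together, a rough sketch (Greeter_GreeterClient and Greeter_HelloRequest are hypothetical stand-ins for your generated code; any grpc-swift generated client accepts callOptions):

import GRPC
import NIO

let group = MultiThreadedEventLoopGroup(numberOfThreads: 1)
defer { try? group.syncShutdownGracefully() }

let channel = ClientConnection.insecure(group: group)
    .connect(host: "localhost", port: 8080)

// Relative limit: the RPC fails if it takes longer than 1 second.
var options = CallOptions(timeLimit: .timeout(.seconds(1)))
// Equivalent absolute form: a deadline 1 second from now.
// options.timeLimit = .deadline(.now() + .seconds(1))

let client = Greeter_GreeterClient(channel: channel) // hypothetical generated client
let call = client.sayHello(Greeter_HelloRequest(), callOptions: options)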

dp:url-open's timeout value is getting ignored in DataPower

I am providing a timeout of one second; however, when the URL is down it takes 120+ seconds for the response to come. Is there some variable or setting that overrides the timeout in dp:url-open?
Update: I was calling dp:url-open in the request transformation as well as in the response transformation, so with the overriding 60-second timeout applied on each side, the total came to 120 seconds.
Here's how I am calling this (I store the time before and after the dp:url-open calls, and then return them in the response):
Case 1: When the URL is reachable I get a result like:
Case 2: When the URL is not reachable:
Update: FIXED: It seems the port I was using was timing out in the firewall first, which is where it spent 1 minute. I was originally trying to hit an application running on port 8077; after I changed that to 8088, I started seeing the same timeout that I was passing.
The dp:url-open() timeout only affects the operation done in the script, not the service itself. It depends on how you have built the solution, but the timeout from dp:url-open() should be honored.
You can check this by setting the log level to debug and adding a <xsl:message>Before url-open</xsl:message> and a matching one after the call, to see in the log whether it is your url-open call or the service that waits 120+ seconds.
If it is the url-open, you most likely have an error in the script; if it is the service that halts the response, you need to return from the script (or throw an error, depending on your needs) to stop the service.
You can set the time-out for the service itself or set a time-out in the User Agent for the specific URL you are calling as well.
Please note that if you set the timeout at the service level, it will terminate the service after that time, so 1 second would not be recommended!
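A rough sketch of that debug approach (the target URL is a placeholder, the timeout attribute is in seconds, and the dp extension namespace is assumed to be declared on the stylesheet):

<xsl:message>Before url-open</xsl:message>
<dp:url-open target="http://backend.example.com:8088/service"
             response="responsecode"
             timeout="1"/>
<xsl:message>After url-open</xsl:message>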

What's the difference between transfer-response and forward-request errors in API Management?

A large number of requests over our Azure API Management result in the ClientConnectionFailure exception.
By querying the logs I see two variants of the error:
exceptions
| where cloud_RoleName == "..."
| summarize num = count(itemCount) by problemId, outerMessage
| order by num
problemId: ClientConnectionFailure at transfer-response, outerMessage: A task was canceled, count: 403,249
problemId: ClientConnectionFailure at forward-request, outerMessage: The operation was canceled, count: 55,531
Based on this post, the problem could be timeouts or clients abandoning connections. With response times generally within 500 ms, I'm inclined to rule out the first.
The question is: what is the difference between transfer-response and forward-request, and does it provide any clues as to what is going on?
Transfer-response means that the client dropped the connection after it started receiving the response.
Forward-request means that the client dropped the connection while the APIM gateway was sending the request to the back end or waiting for a response from the back end.
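For orientation, forward-request is also the name of the policy that represents that stage in the gateway pipeline; there is no explicit transfer-response policy, as it refers to the gateway's internal step of streaming the response back to the client. A typical policy file, for reference:

<policies>
    <inbound>
        <base />
    </inbound>
    <backend>
        <!-- "at forward-request": the gateway is sending the request to the
             backend or waiting for its response here -->
        <forward-request />
    </backend>
    <outbound>
        <base />
    </outbound>
    <!-- "at transfer-response": after outbound, the gateway streams the
         response back to the client -->
</policies>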

How to make async requests using HTTPoison?

Background
We have an app that deals with a considerable number of requests per second. This app needs to notify an external service by making a GET call via HTTPS to one of our servers.
Objective
The objective here is to use HTTPoison to make async GET requests. I don't really care about the response of the requests; all I care about is knowing whether they failed or not, so I can write any errors to a logger.
If a request succeeds, I don't want to do anything.
Research
I have checked the official documentation for HTTPoison and I see that they support async requests:
https://hexdocs.pm/httpoison/readme.html#usage
However, I have 2 issues with this approach:
They use flush to show the request was completed. I can't log into the app and manually flush to see how the requests are going; that would be insane.
They don't show any notification mechanism for when we get the responses or errors.
So, I have a simple question:
How do I get asynchronously notified that my request failed or succeeded?
I assume that the default HTTPoison.get is synchronous, as shown in the documentation.
This can be achieved by spawning a new process per request. Consider something like:
notify = fn response ->
  # Any handling logic - write to DB? Send a message to another process?
  # Here, I'll just print the result
  IO.inspect(response)
end

spawn(fn ->
  resp = HTTPoison.get("http://google.com")
  notify.(resp)
end) # spawn will not block, so it will move on to the next spawn straight away

spawn(fn ->
  resp = HTTPoison.get("http://yahoo.com")
  notify.(resp)
end) # This will be executed immediately after the previous `spawn`
Please take a look at the documentation of spawn/1 I've pointed out here.
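As a side note, since the goal is to write failures to a logger, a variant of the same fire-and-forget idea using Task.start/1 and Logger might look like this (a sketch; the URL is a placeholder and the success branch deliberately does nothing):

require Logger

Task.start(fn ->
  case HTTPoison.get("https://example.com/notify") do
    {:ok, %HTTPoison.Response{status_code: code}} when code in 200..299 ->
      :ok # success: nothing to do, per the question
    {:ok, %HTTPoison.Response{status_code: code}} ->
      Logger.error("Notification failed with status #{code}")
    {:error, %HTTPoison.Error{reason: reason}} ->
      Logger.error("Notification failed: #{inspect(reason)}")
  end
end)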
Hope that helps!

Understanding Wapiti results

I ran Wapiti on my webserver. I dumped the database before and after, deleted the last line (which is the timestamp), and found that both files gave me the same hash value, so I know the database hasn't been changed.
But according to the report I failed a number of tests. This is the data in the info:
500 HTTP Error code.
Internal Server Error. The server encountered an unexpected condition which prevented it from fulfilling the request.
* World Wide Web Consortium: HTTP/1.1 Status Code Definitions
* Wikipedia: List of HTTP status codes
It appears each and every one of these is caused by ill-formed strings that ASP.NET does not like (note: I use a Debian machine with XSP to host; it works well).
Should I not care what the generated reports say? Should I only check whether anything was changed or corrupted by manually looking through the pages?
Category                      High  Medium  Low
SQL Injection                   14       0    0
Blind SQL Injection             14       0    0
File Handling                   13       0    0
Cross Site Scripting             0       0    0
CRLF                             0       0    0
Commands execution              14       0    0
Resource consumption             0       0    0
Htaccess Bypass                  0       0    0
Backup file                      0       0    0
Potentially dangerous file       0       0    0
The database restoration is a very good idea. You do need a populated database to get proper code coverage. You also need to make sure that error reporting is enabled: nasty input must cause a SQL error or Wapiti might not find it. Wapiti does have blind SQL injection testing, but it's not as accurate.
I would look at the normal output from ./wapiti.py http://yourdomain.com, which lists all of the vulnerabilities found so that you can patch them. After your first round of patching, re-run Wapiti to make sure the patches work. The reports it generates are mostly meant for managers and the like who don't know what a vulnerability is; they just want to know whether they are safe or not. SQL injection probably won't corrupt the database or any of the pages. Wapiti does do stored XSS testing, which will corrupt a page, but if you are restoring the database then everything should be fine.
If you want to test for SQL injection, I recommend using a tool that is particularly good at it, namely sqlmap:
http://sqlmap.sourceforge.net/
Note that the Debian repository version is horribly out of date.
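A typical first run against a single parameter looks something like this (the URL and parameter are placeholders; --batch just accepts sqlmap's default answers so it can run unattended):

python sqlmap.py -u "http://yourdomain.com/page.aspx?id=1" --batch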
