I am testing NTLM auth using release v0.20.0-dev-dccb254. The command line to run the script below looks like this:
bin\k6.exe run --vus 3 --duration 30s scripts/getOrder.js
import http from "k6/http";
import { check, sleep } from "k6";
export default function() {
  let res = http.get("http://user:password#localhost:8247/orders/2377", {auth: "ntlm"});
  console.log("Status code: " + res.status);
  check(res, {
    "status was 200": (r) => r.status == 200
  });
  sleep(1);
};
The first request succeeds for each VU, but all subsequent ones fail. This is the output from k6:
duration: 30s, iterations: -
vus: 3, max: 3
INFO[0002] Status code: 200
INFO[0002] Status code: 200
INFO[0002] Status code: 200
WARN[0003] Request Failed error="Get http://user:password#localhost:8247/orders/2377: Invalid WWW-Authenticate header"
INFO[0003] Status code: 0
WARN[0003] Request Failed error="Get http://user:password#localhost:8247/orders/2377: Invalid WWW-Authenticate header"
INFO[0003] Status code: 0
...
I'm writing a function to call the Rasa API for intent prediction. Here is my code:
import requests

def run_test():
    url = "http://localhost:5005/model/parse"
    obj = {"text": "What is your name?"}
    response = requests.post(url, data=obj)
    print(response.json())
I also start the Rasa server with this command: rasa run -m models --enable-api --cors "*" --debug
And here is what I got in the Rasa server terminal:
In the terminal where I executed run_test(), I got this result:
{'version': '2.7.1', 'status': 'failure', 'message': 'An unexpected error occurred. Error: Failed when parsing body as json', 'reason': 'ParsingError', 'details': {}, 'help': None, 'code': 500}
Can anybody help me solve this problem? Thank you very much!
I found the solution:
You only need to use json.dumps() on the object, because an object in Python is not the same thing as a JSON object.
import json
import requests

def run_test():
    url = "http://localhost:5005/model/parse"
    obj = {"text": "What is your name?"}
    response = requests.post(url, data=json.dumps(obj))
    print(response.json())
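As a side note, the requests library can also do this serialization for you: passing the dictionary via the json keyword argument, i.e. requests.post(url, json=obj), encodes it as JSON and sets the Content-Type: application/json header automatically.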
I am trying to hit an nginx server that is behind an ALB. I am sending a simple HTTP/2 POST request using pycurl. My request looks like this:
import io
import pycurl
curl = pycurl.Curl()
curl.setopt(curl.URL, "https://alb-xxxx.us-east-1.elb.amazonaws.com/endpoint")
curl.setopt(pycurl.HTTP_VERSION, pycurl.CURL_HTTP_VERSION_2_PRIOR_KNOWLEDGE)
curl.setopt(pycurl.POST, 1)
curl.setopt(pycurl.VERBOSE, 1)
curl.setopt(pycurl.SSL_VERIFYPEER, 0)
curl.setopt(pycurl.SSL_VERIFYHOST, 0)
with io.BytesIO() as buf:
    curl.setopt(curl.WRITEFUNCTION, buf.write)
    curl.perform()
    print(buf.getvalue())
    curl.close()
The error I am getting is:
error Traceback (most recent call last)
<ipython-input-91-0756005b6c1b> in <module>()
16 with io.BytesIO() as buf:
17 curl.setopt(curl.WRITEFUNCTION, buf.write)
---> 18 curl.perform()
19 print(buf.getvalue())
20 curl.close()
error: (92, 'HTTP/2 stream 1 was not closed cleanly: INTERNAL_ERROR (err 2)')
The error occurs randomly: sometimes I get the correct response and sometimes I get an error like this. But when I try the same request with command-line curl, I always get the correct response. This is my curl command:
curl -v -k --http2-prior-knowledge -X POST https://alb-xxx.us-east-1.elb.amazonaws.com/endpoint
My POST request's response is a string ({"field":"value"}). My nginx configuration file looks like this:
location =/endpoint {
    if ($request_method = POST) {
        return 200 '{"field":"value"}';
    }
}
I don't know why I get the correct response with curl but a different result with pycurl. Am I doing anything wrong in pycurl?
Pycurl version -> 7.43.0.6
Python version -> 3.8.5
Thanks. Any help is appreciated.
In my project I use Karate 0.9.5 and Oracle JDK 8.
From time to time I see a problem in our pipeline: Karate fails to close previously started Chrome processes. Is there any solution to this problem?
I tried to solve it by explicitly calling close() and quit(), but it didn't help. After the run finished, I found a couple of active Chrome processes on the server.
Here's a log:
12:51:45.227 preferred port 9222 not available, will use: 52436
12:51:47.524 request:
1 > GET http://localhost:52436/json
1 > Accept-Encoding: gzip,deflate
1 > Connection: Keep-Alive
1 > Host: localhost:52436
1 > User-Agent: Apache-HttpClient/4.5.11 (Java/1.8.0_181)
12:51:47.584 response time in milliseconds: 58,31
1 < 200
1 < Content-Length: 361
1 < Content-Type: application/json; charset=UTF-8
[ {
"description": "",
"devtoolsFrontendUrl": "/devtools/inspector.html?ws=localhost:52436/devtools/page/4BD6A5C19E01B01D88995CD69367F81F",
"id": "4BD6A5C19E01B01D88995CD69367F81F",
"title": "",
"type": "page",
"url": "about:blank",
"webSocketDebuggerUrl": "ws://localhost:52436/devtools/page/4BD6A5C19E01B01D88995CD69367F81F"
} ]
12:51:47.584 root frame id: 4BD6A5C19E01B01D88995CD69367F81F
12:51:47.632 >> {"method":"Target.activateTarget","params":{"targetId":"4BD6A5C19E01B01D88995CD69367F81F"},"id":1}
12:51:47.638 << {"id":1,"result":{}}
12:51:47.639 >> {"method":"Page.enable","id":2}
12:51:47.883 << {"id":2,"result":{}}
12:51:47.885 >> {"method":"Runtime.enable","id":3}
12:51:47.889 << {"method":"Runtime.executionContextCreated","params":{"context":{"id":1,"origin":"://","name":"","auxData":{"isDefault":true,"type":"default","frameId":"4BD6A5C19E01B01D88995CD69367F81F"}}}}
12:51:47.890 << {"id":3,"result":{}}
12:51:47.890 >> {"method":"Target.setAutoAttach","params":{"autoAttach":true,"waitForDebuggerOnStart":false,"flatten":true},"id":4}
12:51:47.892 << {"id":4,"result":{}}
12:52:02.894 << timed out after milliseconds: 15000 - [id: 4, method: Target.setAutoAttach, params: {autoAttach=true, waitForDebuggerOnStart=false, flatten=true}]
12:52:02.917 driver config / start failed: failed to get reply for: [id: 4, method: Target.setAutoAttach, params: {autoAttach=true, waitForDebuggerOnStart=false, flatten=true}], options: {type=chrome, showDriverLog=true, httpConfig={readTimeout=60000}, headless=true, target=null}
I think the easiest way to solve it is to extend the root web driver interface com.intuit.karate.driver.Driver by adding an additional method like getPID(). This method would return the PID of the launched process.
Can you log a bug and also provide some information as to what happened before this (are you using the Docker image, etc.)? It looks like the previous Scenario did not shut down cleanly.
But before that, do you mind upgrading to 0.9.6.RC3 (just released) and trying that? There have been a couple of tweaks to the life-cycle, especially trying to close Chrome without waiting for a response. Also, it may be worth adding a couple of changes: a) send Target.setAutoAttach without waiting, and b) add a configure option to not start Chrome if the port is in use.
I am running k6 tests with http.batch, where the number of requests can change for each test.
req = [req0, req1, req2, ...];
let res = http.batch(req);
Then I try to run a "check" for each request, and I use a while loop to do so:
let i = 0;
while (i < req.length) {
  check(res[i], {
    " ${i} - status 200": (r) => r.status === 200
  });
  i++;
}
However, k6 accumulates all the check results under a single check, because the check name doesn't interpolate the variable I pass.
The output prints this message at the end of the test:
done [===============] 10s / 10s
✓ ${i} - status 200
I have tried different ways to add the parameter, but with no luck:
{ i + " - status 200": (r) => r.status === 200 }
{' ${i} - status 200': (r) => r.status === 200 }
{` ${i} - status 200`: (r) => r.status === 200 }
{" %d - status 200", i : (r) => r.status === 200 }
I would like to know if there is any way to pass a customized message here.
Not sure if you have the answer already, but for others: you can use bracket notation for assigning the property name. See the example below and the following link: How to create an object property from a variable value in JavaScript?
import http from "k6/http";
import { check } from "k6";
export default function() {
  let response = http.get("https://test.loadimpact.com");
  let test = "x";
  check(response, {
    [test + " - status 200"]: (r) => r.status === 200
  });
};
This results in the following output:
done [==========================================================] 1 / 1i
✓ x - status 200
checks.....................: 100.00% ✓ 1 ✗ 0
data_received..............: 6.8 kB 15 kB/s
data_sent..................: 527 B 1.2 kB/s
http_req_blocked...........: avg=362.37ms min=362.37ms med=362.37ms max=362.37ms p(90)=362.37ms p(95)=362.37ms
http_req_connecting........: avg=138.16ms min=138.16ms med=138.16ms max=138.16ms p(90)=138.16ms p(95)=138.16ms
http_req_duration..........: avg=91.62ms min=91.62ms med=91.62ms max=91.62ms p(90)=91.62ms p(95)=91.62ms
http_req_receiving.........: avg=0s min=0s med=0s max=0s p(90)=0s p(95)=0s
http_req_sending...........: avg=992.2µs min=992.2µs med=992.2µs max=992.2µs p(90)=992.2µs p(95)=992.2µs
http_req_tls_handshaking...: avg=214.19ms min=214.19ms med=214.19ms max=214.19ms p(90)=214.19ms p(95)=214.19ms
http_req_waiting...........: avg=90.63ms min=90.63ms med=90.63ms max=90.63ms p(90)=90.63ms p(95)=90.63ms
http_reqs..................: 1 2.207509/s
iteration_duration.........: avg=453.99ms min=453.99ms med=453.99ms max=453.99ms p(90)=453.99ms p(95)=453.99ms
iterations.................: 1 2.207509/s
vus........................: 1 min=1 max=1
vus_max....................: 1 min=1 max=1
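Applied to the original http.batch scenario, a minimal sketch of the same idea could look like this (the request list below is just a placeholder, substitute your real requests):
import http from "k6/http";
import { check } from "k6";

export default function() {
  // Placeholder batch; replace with the real list of requests.
  let req = [
    ["GET", "https://test.loadimpact.com"],
    ["GET", "https://test.loadimpact.com/news.php"]
  ];
  let res = http.batch(req);

  for (let i = 0; i < req.length; i++) {
    check(res[i], {
      // Computed property name: each check gets its own label,
      // e.g. "0 - status 200", "1 - status 200", ...
      [i + " - status 200"]: (r) => r.status === 200
    });
  }
};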
Python's BaseHTTPRequestHandler has an issue with forms sent through POST!
I have seen other people asking the same question (Why GET method is faster than POST?), but the time difference in my case is too large (1 second).
Python server:
from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer
import datetime

def get_ms_since_start(start=False):
    global start_ms
    cur_time = datetime.datetime.now()
    # I made sure to stay within hour boundaries while making requests
    ms = cur_time.minute*60000 + cur_time.second*1000 + int(cur_time.microsecond/1000)
    if start:
        start_ms = ms
        return 0
    else:
        return ms - start_ms

class MyServer(BaseHTTPRequestHandler, object):
    def do_GET(self):
        print "Start get method at %d ms" % get_ms_since_start(True)
        field_data = self.path
        self.send_response(200)
        self.end_headers()
        self.wfile.write(str(field_data))
        print "Sent response at %d ms" % get_ms_since_start()
        return

    def do_POST(self):
        print "Start post method at %d ms" % get_ms_since_start(True)
        length = int(self.headers.getheader('content-length'))
        print "Length to read is %d at %d ms" % (length, get_ms_since_start())
        field_data = self.rfile.read(length)
        print "Reading rfile completed at %d ms" % get_ms_since_start()
        self.send_response(200)
        self.end_headers()
        self.wfile.write(str(field_data))
        print "Sent response at %d ms" % get_ms_since_start()
        return

if __name__ == '__main__':
    server = HTTPServer(('0.0.0.0', 8082), MyServer)
    print 'Starting server, use <Ctrl-C> to stop'
    server.serve_forever()
A GET request to the Python server is very fast
time curl -i http://0.0.0.0:8082\?one\=1
prints
HTTP/1.0 200 OK
Server: BaseHTTP/0.3 Python/2.7.6
Date: Sun, 18 Sep 2016 07:13:47 GMT
/?one=1curl http://0.0.0.0:8082\?one\=1 0.00s user 0.00s system 45% cpu 0.012 total
and on the server side:
Start get method at 0 ms
127.0.0.1 - - [18/Sep/2016 00:26:30] "GET /?one=1 HTTP/1.1" 200 -
Sent response at 0 ms
Instantaneous!
A POST request sending a form to the Python server is very slow
time curl http://0.0.0.0:8082 -F one=1
prints
--------------------------2b10061ae9d79733
Content-Disposition: form-data; name="one"
1
--------------------------2b10061ae9d79733--
curl http://0.0.0.0:8082 -F one=1 0.00s user 0.00s system 0% cpu 1.015 total
and on the server side:
Start post method at 0 ms
Length to read is 139 at 0 ms
Reading rfile completed at 1002 ms
127.0.0.1 - - [18/Sep/2016 00:27:16] "POST / HTTP/1.1" 200 -
Sent response at 1002 ms
Specifically, self.rfile.read(length) is taking 1 second for very little form data.
A POST request sending data (not a form) to the Python server is very fast
time curl -i http://0.0.0.0:8082 -d one=1
prints
HTTP/1.0 200 OK
Server: BaseHTTP/0.3 Python/2.7.6
Date: Sun, 18 Sep 2016 09:09:25 GMT
one=1curl -i http://0.0.0.0:8082 -d one=1 0.00s user 0.00s system 32% cpu 0.022 total
and on the server side:
Start post method at 0
Length to read is 5 at 0
Reading rfile completed at 0
127.0.0.1 - - [18/Sep/2016 02:10:18] "POST / HTTP/1.1" 200 -
Sent response at 0
node.js server:
var http = require('http');
var qs = require('querystring');

http.createServer(function(req, res) {
    if (req.method == 'POST') {
        whole = ''
        req.on('data', function(chunk) {
            whole += chunk.toString()
        })
        req.on('end', function() {
            console.log(whole)
            res.writeHead(200, 'OK', {'Content-Type': 'text/html'})
            res.end('Data received.')
        })
    }
}).listen(8082)
A POST request sending a form to the node.js server is very fast
time curl -i http://0.0.0.0:8082 -F one=1
prints:
HTTP/1.1 100 Continue
HTTP/1.1 200 OK
Content-Type: text/html
Date: Sun, 18 Sep 2016 10:31:38 GMT
Connection: keep-alive
Transfer-Encoding: chunked
Data received.curl -i http://0.0.0.0:8082 -F one=1 0.00s user 0.00s system 42% cpu 0.013 total
I think this is the answer to your problem: libcurl delays for 1 second before uploading data, command-line curl does not
libcurl is sending the Expect: 100-Continue header, and waiting 1 second for a response before sending the form data (in the case of the -F option).
In the case of -d, it does not send the 100-Continue header, for whatever reason.
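If I remember correctly, curl only adds Expect: 100-continue for multipart form posts (-F) and for -d bodies above a certain size, which would explain why the tiny -d one=1 request is fast. One way to confirm this is to suppress the header on the client and see whether the delay disappears:
time curl -i http://0.0.0.0:8082 -F one=1 -H 'Expect:'
With the header suppressed, curl sends the form body immediately instead of first waiting for the server's 100 Continue response (or for libcurl's internal timeout to expire).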