curl CONNECT method without path (destination hostname and port instead) - http

I want to test a proxy server. In order to make an https request, a browser sends a CONNECT request beforehand (e.g. Firefox does this when a proxy is specified).
I cannot send the same request with curl:
The following has the root slash, i.e. the path /www.example.com:443:
curl -X CONNECT http://proxy_host:proxy_port/www.example.com:443
The following (without the slash) will not work:
curl -X CONNECT http://proxy_host:proxy_portwww.example.com:443
The following is not what I want:
curl -X CONNECT http://proxy_host:proxy_port/some_path
So the first line of the HTTP request should be CONNECT www.example.com:443 HTTP/1.1, not CONNECT /www.example.com:443 HTTP/1.1 as curl sends it in this case.
Maybe this question is also somehow related, if only I knew how not to send a path.
NOTE! I do not want to use curl -x http://proxy_host:proxy_port https://www.example.com, because the -x option/flag does not work with custom SSL certificates (--cacert ... --key ... --cert ...).
Any ideas how to send plain header data, or not specify a path, or specify the host and port as the path?
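(Aside: if the goal is just to watch how the proxy answers a hand-crafted CONNECT, one option is to skip curl and send the raw bytes yourself. A minimal sketch with netcat, reusing the placeholder proxy_host and proxy_port from above:)
printf 'CONNECT www.example.com:443 HTTP/1.1\r\nHost: www.example.com:443\r\n\r\n' | nc proxy_host proxy_port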

(-X simply replaces the method string in the request, so of course setting it to CONNECT will not issue a proper CONNECT request and will certainly not make curl handle it correctly.)
curl will do a CONNECT by itself when connecting to a TLS server through an HTTP proxy, and even though you claim -x breaks the certificate options, that is an incorrect statement. The --cacert and other options work the same even when the connection is done through an HTTP proxy.
You can also make curl do a CONNECT through an HTTP(S) proxy for other protocols by using -p, --proxytunnel - also in combination with -x.
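For example, to tunnel an HTTPS request through an HTTP proxy while keeping the custom certificate options, something like the following should work (a sketch; the proxy address and certificate file names are placeholders):
curl -x http://proxy_host:proxy_port --proxytunnel --cacert ca.pem --cert client.pem --key client.key https://www.example.com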

Related

Linux server returning old SSL certs via curl

When I try to retrieve the most up-to-date SSL cert info from a URL on my CentOS 7 machine, I keep getting some sort of old cached result.
Example curl:
curl --insecure -v https://www.google.com 2>&1 | awk 'BEGIN { cert=0 } /^\* Server certificate:/ { cert=1 } /^\*/ { if (cert) print }'
I know for a fact (and even Chrome agrees) that the expiry is in the future, but the curl request always returns the old cert, which has expired.
Is there some sort of cache on the machine itself?
The issue looks to be around bad configurations of Microsoft IIS servers, and potentially any ISAs sitting in front of them.
Unless you fully remove and reboot the server that you've installed the new SSL cert onto, there will still be instances of that server sending out the previous (expired) SSL cert, so you will never see the latest one.
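To see exactly which certificate (and validity dates) a given server is handing out right now, bypassing any browser state, you can query it directly with openssl (a sketch; replace the hostname with your own):
echo | openssl s_client -connect www.google.com:443 -servername www.google.com 2>/dev/null | openssl x509 -noout -dates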

How to add and resolve a URL the way we do a hostname after adding it to the /etc/hosts file

I have a web app running on a machine with IP 172.10.10.10.
The basic API call exposed by this app is GET http://172.10.10.10,
and it returns a response of OK.
On another machine I added an entry in the /etc/hosts file as below:
172.10.10.10 webserver1.com
With this, the ping command resolves successfully, e.g. ping webserver1.com.
Now I want the curl command to resolve as well,
e.g. curl http://webserver1.com
Result: curl: (6) Could not resolve host: webserver1.com
How can I achieve this for the curl command with an http URL?
You can set up a DNS server and point to it in /etc/resolv.conf.
There are many options on the market (paid/free) for a local DNS server, dockerized and non-dockerized.
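As a lighter-weight alternative, curl itself can map a hostname to an IP for a single request with --resolve, sidestepping DNS entirely (a sketch reusing the addresses from the question):
curl --resolve webserver1.com:80:172.10.10.10 http://webserver1.com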

AB load testing on local ip or domain name?

I am using DigitalOcean as a VPS for my web server.
I added a second droplet with Ubuntu 18 that is part of the private network (a DigitalOcean feature) with the web server.
I am using Cloudflare as my DNS provider and am also using their SSL certificates.
What is the most accurate load test with ab (please note the http/https in the examples below)?
ab -n 100 -c 1 -k -H "Accept-Encoding: gzip, deflate" https://www.example.com/
Requests per second: 12.66
ab -n 100 -c 1 -k -H "Accept-Encoding: gzip, deflate" http://www.example.com/
Requests per second: 60.90
ab -n 100 -c 1 -k -H "Accept-Encoding: gzip, deflate" https://private.network.local.ip/
Requests per second: 36.70
ab -n 100 -c 1 -k -H "Accept-Encoding: gzip, deflate" http://private.network.local.ip/
Requests per second: 1849
How should I use ab: with http or https, and with the domain or the local IP?
A well-behaved load test should represent real-life application usage as closely as possible, otherwise it doesn't make sense. So you should use the same settings that real users of your application will use; my expectations are:
domain name instead of IP address
https protocol
Is there any reason for comparing the response time of your application against http://example.com, which is a live website? You should be comparing your application's DNS hostname against your application's IP address; in that case the results should be the same.
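One way to make that comparison fair is to request the private IP directly while still sending the site's Host header, so name-based virtual hosting behaves the same (a sketch with the placeholders from the question; recent versions of ab honor a Host header passed via -H):
ab -n 100 -c 1 -k -H "Host: www.example.com" http://private.network.local.ip/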
ab is not the best tool for simulating real user activity; it basically "hammers" a single URL, which doesn't represent real user behaviour. Real users:
establish an SSL session once; further communication is made over this channel
send HTTP headers which may trigger response compression, reducing response size
have an HTTP cache implemented in their browsers, so embedded resources like images, scripts, styles, fonts, etc. are requested only once
have cookies which represent the user session
Given all of the above, I would recommend switching to a more advanced load testing tool which is capable of acting like a real browser.

How to use InvokeHTTP to perform a GET request in NiFi?

I need to perform a GET request from NiFi to Couchbase. The curl command is:
curl http://HOST:PORT/query/service -d "statement=select item.Date from bucket unnest bucket as item" -u USER:PASSWORD
I tried using InvokeHTTP and ExecuteStreamCommand, but it keeps returning errors (status code 400). The full error message is:
{ "requestID": "bff62c0b-36fd-401d-bca0-0959e0944323", "errors":
[{"code":1050,"msg":"No statement or prepared value"}], "status":
"fatal", "metrics": {"elapsedTime": "113.31µs","executionTime":
"74.321µs","resultCount": 0,"resultSize": 0,"errorCount": 1
It's important to say that I would prefer the HTTP request to be triggered by an incoming flowfile. I tried using the processors in various ways, but none of them worked.
When I run the command from the NiFi server itself, it works fine.
Thanks for the help.
The -d parameter of the curl utility forces the HTTP POST method
and the application/x-www-form-urlencoded MIME type.
So, in the NiFi InvokeHTTP processor, set the following properties:
HTTP Method = POST
Remote URL = <your url here>
Basic Authentication Username = <username>
Basic Authentication Password = <password>
Content-Type = application/x-www-form-urlencoded
and the body of the flow file should be:
statement=select item.Date from bucket unnest bucket as item
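For reference, the original curl command is equivalent to this more explicit form (same HOST/PORT/USER/PASSWORD placeholders), which is exactly what the InvokeHTTP configuration above reproduces:
curl -X POST -H "Content-Type: application/x-www-form-urlencoded" -u USER:PASSWORD -d "statement=select item.Date from bucket unnest bucket as item" http://HOST:PORT/query/service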
I don't know NiFi, but based on the error message, either the "statement=" part of the request isn't being included, or you are not sending the request as a POST.

HTTP client request Authorization: Negotiate option

I would like to access HDFS files using WebHDFS. curl gives me the --negotiate -u : option to use an existing Kerberos token. How do we pass the negotiate option using HTTP request headers? I know that we can use the "Authorization: Negotiate" header. However, I get the following error:
GSSException: Defective token detected
You can do it like this:
kinit -kt ${your_keytab_file_full_path} ${your_principal}
curl --negotiate -u : -o ${output_file} ${URL}
(-o here is just the file to save the response to; it should not point at your keytab.)
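With the ticket in place, a typical WebHDFS read would then look something like this (a sketch; the namenode host, port, and file path are assumptions, and the port differs between Hadoop versions; -L follows the redirect to a datanode):
curl -L --negotiate -u : -o file.txt "http://namenode_host:9870/webhdfs/v1/user/hdfs/file.txt?op=OPEN"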
