How does curl -u turn the username and password into a hash?

I'm trying to figure out how safe curl -u is to use with a real username and password. Investigating the header of such a request, it seems the username and password are turned into some kind of hash.
In the example below, jujuba:lalalala seems to be turned into anVqdWJhOmxhbGFsYWxh.
Is this encryption or compression? Is it safe? How does the recipient decode this data?
curl -u jujuba:lalalala -i -X Get http://localhost:80/api/resource -v
* timeout on name lookup is not supported
* Trying 127.0.0.1...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
* Connected to localhost (127.0.0.1) port 80 (#0)
* Server auth using Basic with user 'jujuba'
> Get /api/resource HTTP/1.1
> Host: localhost
> Authorization: Basic anVqdWJhOmxhbGFsYWxh

If you run the command:
echo anVqdWJhOmxhbGFsYWxh | base64 -d
You will get jujuba:lalalala, showing that the content is just Base64-encoded, which is the standard for HTTP Basic authentication. It is an encoding, not encryption or compression: anyone who can read the header can decode it.
You should use HTTPS for any site that requires authentication.
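You can also reproduce the header value in the other direction (a quick check; printf is used instead of echo to avoid encoding a trailing newline):
printf '%s' 'jujuba:lalalala' | base64
This prints anVqdWJhOmxhbGFsYWxh, the exact value from the Authorization header above.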


AB load testing on local ip or domain name?

I am using DigitalOcean as a VPS for my web server.
I added a second droplet running Ubuntu 18 that is part of the same private network (a DigitalOcean feature) as the web server.
I am using Cloudflare as my DNS provider and I am also using their SSL certificates.
What is the most accurate load test with ab (please note the http/https difference in the examples below)?
ab -n 100 -c 1 -k -H "Accept-Encoding: gzip, deflate" https://www.example.com/
Requests per second: 12.66
ab -n 100 -c 1 -k -H "Accept-Encoding: gzip, deflate" http://www.example.com/
Requests per second: 60.90
ab -n 100 -c 1 -k -H "Accept-Encoding: gzip, deflate" https://private.network.local.ip/
Requests per second: 36.70
ab -n 100 -c 1 -k -H "Accept-Encoding: gzip, deflate" http://private.network.local.ip/
Requests per second: 1849
How should I use ab with http or https and with domain or local ip?
A well-behaved load test should represent real-life application usage as closely as possible, otherwise it doesn't make sense. So you should use the same settings that real users of your application will use; my expectations are:
domain name instead of IP address
https protocol
Is there any reason for comparing the response time of your application against http://example.com, which is a live website? You should be comparing the DNS hostname of your application against the IP address of your application; in that case the results should be roughly the same.
ab is not the best tool for simulating real user activity; it basically "hammers" a single URL, which doesn't represent real user behaviour. Real users:
establish the SSL session once, with further communication made over that channel
send HTTP headers which may trigger response compression, reducing response size
have an HTTP cache implemented in their browsers, so embedded resources like images, scripts, styles and fonts are requested only once
have cookies which represent the user session
Given all of the above, I would recommend switching to a more advanced load testing tool that is capable of acting like a real browser.
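If you want to measure your own stack without the Cloudflare proxy in front of it, one rough approach (a sketch: 10.132.0.2 is a hypothetical stand-in for the droplet's private IP, and the commands assume a Linux client) is to pin the hostname to the backend for the duration of the test:
# temporarily resolve the domain straight to the backend, bypassing Cloudflare
echo '10.132.0.2 www.example.com' | sudo tee -a /etc/hosts
ab -n 100 -c 1 -k -H "Accept-Encoding: gzip, deflate" https://www.example.com/
# remove the /etc/hosts line again when the test is done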

Why isn't uWSGI respecting the "--http-keepalive" flag?

I installed uWSGI in a Docker container running ubuntu:16.04 using the following commands:
apt-get update
apt-get install -y build-essential python-dev python-pip
pip install uwsgi
I then created a single static file:
cd /root
mkdir static
dd if=/dev/zero bs=1 count=1024 of=static/data
...and finally started uWSGI with the following command:
uwsgi \
--buffer-size 32768 \
--http-socket 0.0.0.0:80 \
--processes 4 \
--http-timeout 60 \
--http-keepalive \
--static-map2=/static=./
I was able to access the static file without any problems. However, despite passing the --http-keepalive option, issuing multiple requests with cURL resulted in the following output:
# curl -v 'http://myserver/static/data' -o /dev/null 'http://myserver/static/data' -o /dev/null
* Trying 192.168.1.101...
...
> GET /static/data HTTP/1.1
> Host: 192.168.1.101:8100
> User-Agent: curl/7.47.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Length: 1024
< Last-Modified: Sat, 03 Dec 2016 22:06:49 GMT
<
{ [1024 bytes data]
100 1024 100 1024 0 0 577k 0 --:--:-- --:--:-- --:--:-- 1000k
* Connection #0 to host 192.168.1.101 left intact
* Found bundle for host 192.168.1.101: 0x563fbc855630 [can pipeline]
* Connection 0 seems to be dead!
* Closing connection 0
...
Of particular interest is the line:
* Connection 0 seems to be dead!
This was confirmed with Wireshark: the capture showed two completely separate TCP connections, with the first one closed by uWSGI (packet #10, the [FIN, ACK]).
What am I doing wrong? Why doesn't uWSGI honor the --http-keepalive flag instead of immediately closing the connection?
In my case, I was getting random 502 responses from AWS ALB/ELB.
I provided the configuration via an .ini file, like:
http-keepalive = true
but after hours of debugging I saw a similar picture in Wireshark: after each response the connection was closed by the server, so the keep-alive option was being ignored.
In uWSGI issue #2018, the discussion points out that the option should be an integer, but unfortunately I can't find exact info on whether it represents the socket lifetime in seconds or can simply be '1'. After this change the random 502s disappeared and uWSGI started working in the expected mode.
Hope this is helpful for somebody else.
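For reference, a minimal sketch of the corrected .ini setting (the value 75 is purely illustrative, since as noted above the docs don't pin down the unit):
[uwsgi]
http-keepalive = 75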
I was finally able to get keep-alive working by switching from --http-socket to simply --http. According to the uWSGI docs:
If your web server does not support the uwsgi protocol but is able to speak to upstream HTTP proxies, or if you are using a service like Webfaction or Heroku to host your application, you can use http-socket. If you plan to expose your app to the world with uWSGI only, use the http option instead, as the router/proxy/load-balancer will then be your shield.
In my particular case, it was also necessary to load the http plugin.
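Putting the two answers together, the original command line becomes something like this (a sketch: the keep-alive value of 60 simply mirrors the --http-timeout above, and --plugin http is only needed on builds where the HTTP router is shipped as a separate plugin):
uwsgi \
--plugin http \
--buffer-size 32768 \
--http 0.0.0.0:80 \
--processes 4 \
--http-timeout 60 \
--http-keepalive 60 \
--static-map2=/static=./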

Ubuntu server, monit email, postfix piping

I have a specific setup to pipe incoming emails on an Ubuntu server. When emails are sent to name@myserver.com, they are filtered and piped to a PHP script.
The postfix specific configuration for this piping is as follow (in brief):
main.cf:
...
smtpd_recipient_restrictions = permit_sasl_authenticated, permit_mynetworks, reject_unauth_destination, check_recipient_access hash:/etc/postfix/access
...
master.cf:
mydestination = myserver.com, localhost.myserver, localhost
...
smtp inet n - - - - smtpd
-o content_filter=myhook:dummy
...
pickup fifo n - - 60 1 pickup
cleanup unix n - - - 0 cleanup
qmgr fifo n - n 300 1 qmgr
#qmgr fifo n - - 300 1 oqmgr
tlsmgr unix - - - 1000? 1 tlsmgr
...
myhook unix - n n - - pipe
flags=F user=www-data null_sender= argv=//admin/get_mail.php ${sender} ${size} ${recipient}
access file:
name@myserver.com FILTER myhook:dummy
Now everything works fine when emails are sent to myserver.com: messages are filtered and the PHP script is triggered.
The problem comes with the monit service that is running on the server.
Emails sent by monit from myserver.com are filtered by myhook and handed to the piped PHP script, while they should not be; they should be sent directly out to the recipient...
It looks like the postfix filter settings are not being applied in that case.
Curiously, emails sent by other web applications on the server go out as they should (from www-data@myserver.com).
The specific configuration in monitrc is:
set mailserver localhost
set mail-format { from: monit@myserver.com }
set alert monit@anotherdomain.com
Could you help me figure out what the conflict between monit and postfix could be here?
Thank you.
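One detail worth checking: the -o content_filter=myhook:dummy override on the smtp listener in master.cf applies to every message that listener accepts, including the alerts monit submits through set mailserver localhost, so the filter fires regardless of the access map. A common workaround (a sketch with a hypothetical port, untested against this exact setup) is a dedicated listener without the filter for locally generated mail:
master.cf:
127.0.0.1:10026 inet n - n - - smtpd
  -o content_filter=
monitrc:
set mailserver localhost port 10026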

Why does this simple Hydra command not work?

I'm trying to get the hang of hydra.
When I do this to test against my ftp site, it works. I'm hitting my own ftp site (ex. www.mysite.com) with the correct username and password (ex. username1 and password1):
./hydra -l username1 -p password1 -vV -f www.mysite.com ftp
Hydra v7.4.1 (c)2012 by van Hauser/THC & David Maciejak - for legal purposes only
Hydra (http://www.thc.org/thc-hydra) starting at 2012-12-29 21:06:20
[VERBOSE] More tasks defined than login/pass pairs exist. Tasks reduced to 1.
[DATA] 1 task, 1 server, 1 login try (l:1/p:1), ~1 try per task
[DATA] attacking service ftp on port 21
[VERBOSE] Resolving addresses ... done
[ATTEMPT] target www.mysite.com - login "username1" - pass "password1" - 1 of 1 [child 0]
[21][ftp] host: 200.200.240.240 login: username1 password: password1
[STATUS] attack finished for www.mysite.com (valid pair found)
1 of 1 target successfully completed, 1 valid password found
Hydra (http://www.thc.org/thc-hydra) finished at 2012-12-29 21:06:21
However, when I do this to test a public basic authentication test page (http://browserspy.dk/password-ok.php) with the correct username and password (test and test), hydra just stops with a 'Resolving address ... done' message.
./hydra -l test -p test -vV -f browserspy.dk http-get /password-ok.php
Hydra v7.4.1 (c)2012 by van Hauser/THC & David Maciejak - for legal purposes only
Hydra (http://www.thc.org/thc-hydra) starting at 2012-12-29 21:02:58
[VERBOSE] More tasks defined than login/pass pairs exist. Tasks reduced to 1.
[DATA] 1 task, 1 server, 1 login try (l:1/p:1), ~1 try per task
[DATA] attacking service http-get on port 80
[VERBOSE] Resolving addresses ... done
The hydra process just seems to die here and I'm returned to the command prompt.
What am I doing wrong?
You are not doing anything wrong; it's a bug in Hydra which affects the http-get, http-head and irc modes. Downgrade to v7.3 or wait for v7.5, which will fix this issue.

Exit status of ping command

My Perl script needs to handle the exit status of the ping command it runs.
According to this website:
If ping does not receive any reply packets at all it will exit with code 1. If a packet count and deadline are both specified, and fewer than count packets are received by the time the deadline has arrived, it will also exit with code 1. On other error it exits with code 2. Otherwise it exits with code 0. This makes it possible to use the exit code to see if a host is alive or not.
To list the results:
Success: code 0
No reply: code 1
Other errors: code 2
Note that the page I link to says "Linux/Unix ping command", but other systems, or even other variants of Linux and Unix, might use different values.
If possible, I would test on the system in question to make sure you have the right ones.
It's worth doing some testing of this on your OS, e.g. on OS X:
Resolvable host which is up
ping -c 1 google.com ; echo $?
Replies:
PING google.com (173.194.38.14): 56 data bytes
64 bytes from 173.194.38.14: icmp_seq=0 ttl=51 time=16.878 ms
--- google.com ping statistics ---
1 packets transmitted, 1 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 16.878/16.878/16.878/0.000 ms
Returns
0
Resolvable host which is down/does not respond to ping
ping -c 1 localhost ; echo $?
Replies:
PING stuart-woodward.local (127.0.0.1): 56 data bytes
--- stuart-woodward.local ping statistics ---
1 packets transmitted, 0 packets received, 100.0% packet loss
Returns:
2
Non Resolvable host
ping -c 1 sdhjfjsd ; echo $?
Replies:
ping: cannot resolve sdhjfjsd: Unknown host
Returns:
68
The ping utility returns an exit status of zero if at least one response was heard from the specified host; a status of two if the transmission was successful but no responses were received; or another value (from <sysexits.h>) if an error occurred.
http://www.manpagez.com/man/8/ping
The actual return values may depend on your system.
A successful ping will always return exit code 0, whilst failed pings return exit code 1 or above.
To test this out, try this snippet:
#!/bin/bash
# printf format strings for coloured output
light_red='\e[1;91m%s\e[0m\n'
light_green='\e[1;92m%s\e[0m\n'
# -c 4: send four packets, -q: only print the summary
ping -c 4 -q google.com
if [ "$?" -eq 0 ]; then
    printf "$light_green" "[ CONNECTION AVAILABLE ]"
else
    printf "$light_red" "[ HOST DISCONNECTED ]"
fi
You should also take into account that if ping receives, for example, a 'network unreachable' ICMP reply, it is counted as a reply and thus ping returns success status 0 (tested with Cygwin ping on Windows). So it is not really useful for testing whether a host is alive, and in my opinion this is a bug.
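If the exit code alone turns out to be unreliable on your platform, a crude cross-check is to look for an actual echo reply in the output (a sketch: it assumes a Linux ping where -W sets the reply timeout in seconds, with example.com as a stand-in host):
if ping -c 1 -W 2 example.com 2>/dev/null | grep -q 'bytes from'; then
    echo "host is alive"
else
    echo "no echo reply received"
fi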
Try man ping from the command line. The return values are listed near the bottom.
