SQLMap: Can't establish SSL Connection: Need Solution - sqlmap

I am trying to use SQLMap over HTTPS, but when I run
"C:\Python27\sqlmap>sqlmap.py -u https://localhost:8774/App/console/index.jsp --force-ssl" it returns
"Can't establish SSL Connection".
So is there any way that I can pass an SSL certificate to SQLMap?
Environment Details:
OS: Windows 10
Python: 2.7
SQLMap: 1.4.2.42

Remove https:// from the -u parameter and just put:
-u localhost:8774/App/console/index.jsp

A simple solution for that is to set up a proxy listener like Burp Suite, browse to the site with the bad SSL certificate, and trust it.
After that, you can include the following option in your SQLMap command:
--proxy="http://PROXY-IP:PROXY-PORT"
where PROXY-IP is generally 127.0.0.1 and PROXY-PORT is 8080.
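Putting this together with the command from the question (assuming Burp is listening on its default address and port), the full invocation might look like:
sqlmap.py -u https://localhost:8774/App/console/index.jsp --force-ssl --proxy="http://127.0.0.1:8080"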

Related

Maxscale: maxctrl error when admin_ssl parameters are set in maxscale.cnf

System:
Maxscale 2.5.9
Ubuntu 20.04
In order to access the Web AdminGUI, my maxscale.cnf file looks like this:
[maxscale]
threads=auto
admin_host=0.0.0.0
admin_secure_gui=1
admin_auth=1
admin_enabled=1
admin_gui=1
admin_ssl_key=/etc/ssl/certs/maxscale-key.pem
admin_ssl_cert=/etc/ssl/certs/maxscale-cert.pem
admin_ssl_ca_cert=/etc/ssl/certs/ca-certificates.crt
[...all other configuration..]
With this configuration I can access the Web AdminGUI on port 8989 from the internal IP address (not 127.0.0.1) in a browser.
The SSL key/certs are self-signed.
BUT
When using the command line like:
maxctrl list servers
I get the following error:
Error: Error: socket hang up
When I remove or comment out the lines with the admin_ssl_XXX parameters and restart MaxScale, the command line works again, but of course the Web AdminGUI does not.
I tried various ways of creating the SSL certificates (including the one listed on the mariadb.com website,
https://mariadb.com/docs/security/encryption/in-transit/create-self-signed-certificates-keys-openssl/#create-self-signed-certificates-keys-openssl),
but the issue remains.
No errors in the maxscale.log whatsoever.
What is the best way to debug this issue?
Or do you by any chance have the right answer at hand?
YOUR help is greatly appreciated!
BR. Martin
You should use maxctrl --secure to encrypt the connections used by it.
Since you are using self-signed certificates, you have to also specify the CA certificate with --tls-ca-cert=/etc/ssl/certs/ca-certificates.crt if it's not installed in the system certificate store.
In addition, you probably need to use --tls-verify-server-cert=false to disable any warnings about self-signed certificates.
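Combining those options, a full invocation for the setup from the question (reusing the CA path from maxscale.cnf) would look roughly like:
maxctrl --secure --tls-ca-cert=/etc/ssl/certs/ca-certificates.crt --tls-verify-server-cert=false list servers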

How to set up a secure connection between Filbeat and Elasticsearch using SSL

I'm unable to set up an SSL connection between Filebeat and Elasticsearch.
My knowledge is lacking when it comes to SSL. I'm using X-Pack to generate a certificate using the certutil command. bin/xpack/certutil ca generates a certificate authority under the name elastic-stack-ca.p12.
Then
$ bin/x-pack/certutil cert --ca elastic-stack-ca.p12
Which I believe creates a certificate signed by that CA. This results in the file elastic-certificates.p12. From here I'm clueless.
I tried testing whether the certificates work by setting up an HTTPS connection to ES.
I put
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.key: /path/to/elastic-certificates.p12
xpack.security.http.ssl.certificate: /path/to/elastic-certificates.p12
xpack.security.http.ssl.certificate_authorities: [ "/path/to/elastic-stack-ca.p12" ]
However, this brings up quite a few errors, one of them being:
caught exception while handling client http traffic, closing connection
When I add the HTTPS IP and the CA in Kibana, it fails to connect to ES.
I would like to know how to successfully set up HTTPS. Also, how can an SSL connection be established between two servers: one containing Filebeat but no X-Pack, and the receiving server with ES on it alongside X-Pack installed?
After adding those SSL settings in your elasticsearch.yml, you also need to add the password to the Elasticsearch keystore and truststore. You should've set a password when you ran the certutil command. You can do that with:
$ echo password | /usr/share/elasticsearch/bin/elasticsearch-keystore add --stdin xpack.security.transport.ssl.keystore.secure_password
$ echo password | /usr/share/elasticsearch/bin/elasticsearch-keystore add --stdin xpack.security.transport.ssl.truststore.secure_password
Make sure you restart Elasticsearch after making these changes.
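Note also that elastic-certificates.p12 is a PKCS#12 keystore, so for the HTTPS layer it is usually referenced through the keystore/truststore settings rather than key/certificate. A minimal sketch, assuming the same file serves as both keystore and truststore (paths are placeholders):
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: /path/to/elastic-certificates.p12
xpack.security.http.ssl.truststore.path: /path/to/elastic-certificates.p12
The matching secure passwords (xpack.security.http.ssl.keystore.secure_password and xpack.security.http.ssl.truststore.secure_password) can be added with elasticsearch-keystore in the same way as the transport ones above.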

Let's Encrypt checks the previous certificate and throws an error

I set up my own landing page on my server with Nginx on top of it. I followed a DigitalOcean 'How To' to get an SSL certificate for it.
Now I have finished setting up a WordPress site for my wife. Everything works well on plain HTTP, but if I try to redo the process with Let's Encrypt: sudo certbot --nginx -d pamelajoa.com -d www.pamelajoa.com, certbot tries to challenge the server but finds out that there is already a certificate for my own website:
IMPORTANT NOTES:
- The following errors were reported by the server:
Domain: pamelajoa.com
Type: unauthorized
Detail: Incorrect validation certificate for tls-sni-01 challenge.
Requested
XXX.YYY.acme.invalid
from [2001:41d0:8:6d9b::1]:443. Received 2 certificate(s), first
certificate had names "gfelot.xyz, www.gfelot.xyz"
Domain: www.pamelajoa.com
Type: unauthorized
Detail: Incorrect validation certificate for tls-sni-01 challenge.
Requested
XXX.YYY.acme.invalid
from [2001:41d0:8:6d9b::1]:443. Received 2 certificate(s), first
certificate had names "gfelot.xyz, www.gfelot.xyz"
To fix these errors, please make sure that your domain name was
entered correctly and the DNS A/AAAA record(s) for that domain
contain(s) the right IP address.
Once again, my own website works on HTTPS and the WordPress site works on HTTP, so I don't think it's coming from my Nginx conf.
Any idea?
I found a solution that worked for me: use this option in your command:
--preferred-challenges http-01
or you may try this one:
--preferred-challenges http
Full command here :
sudo certbot --nginx --preferred-challenges http-01 -d www.kaokeb.com
Full post for this solution in this thread :
https://community.letsencrypt.org/t/expired-certification/60185/23
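Applied to the domains from the question, the command would look something like:
sudo certbot --nginx --preferred-challenges http-01 -d pamelajoa.com -d www.pamelajoa.com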

Tyk gateway with Nginx and Apache Tomcat 8 (ubuntu 14.04)

Just wondering what I am missing here when trying to create an API with Tyk Dashboard.
My setup is:
Nginx > Apache Tomcat 8 > Java Web Application > (database)
Nginx is already working, redirecting calls to Apache Tomcat at the default port 8080.
Example: tomcat.myserver.com/webapp/get/1
200-OK
I have previously set up tyk-dashboard and tyk-gateway as follows, using a custom node port 8011:
Tyk dashboard:
$ sudo /opt/tyk-dashboard/install/setup.sh --listenport=3000 --redishost=localhost --redisport=6379 --mongo=mongodb://127.0.0.1/tyk_analytics --tyk_api_hostname=$HOSTNAME --tyk_node_hostname=http://127.0.0.1 --tyk_node_port=8011 --portal_root=/portal --domain="dashboard.tyk-local.com"
Tyk gateway:
/opt/tyk-gateway/install/setup.sh --dashboard=1 --listenport=8011 --redishost=127.0.0.1 --redisport=6379 --domain=""
/etc/hosts is already configured (not really needed):
127.0.0.1 dashboard.tyk-local.com
127.0.0.1 portal.tyk-local.com
Tyk Dashboard configurations (nothing special here):
API name: foo
Listen path: /foo
API slug: foo
Target URL: tomcat.myserver.com/webapp/
What URI am I supposed to call? Is there any setup I need to add in Nginx?
myserver.com/foo 502 nginx
myserver.com:8011/foo does not respond
foo.myserver.com 502 nginx
(everything is running under the same server)
SOLVED:
The Tyk Gateway configuration was incorrect.
I needed to add the --mongo directive and remove the --domain directive from the setup.sh call:
/opt/tyk-gateway/install/setup.sh --dashboard=1 --listenport=8011 --redishost=localhost --redisport=6379 --mongo=mongodb://127.0.0.1/tyk_analytics
So, calling curl -H "Authorization: null" 127.0.0.1:8011/foo
I get:
{
"error": "Key not authorised"
}
I am not sure about the /foo path; I think that was previously what the /hello path is now. But it appears there is a "Key not authorised" issue. If the call is made using the Gateway API, then the secret value may be missing; it is required when making calls to the gateway (except for the hello and reload paths):
x-tyk-authorization: <your-secret>
However, since there is a dashboard present, then I would suggest using the Dashboard APIs to create the API definition instead.
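As a rough illustration (assuming the default /tyk Gateway API prefix and the secret value from your tyk.conf), a direct Gateway API call would look like:
curl -H "x-tyk-authorization: <your-secret>" http://127.0.0.1:8011/tyk/apis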

Exposing localhost to the internet via tunneling (using ngrok): HTTP error 400: bad request; invalid hostname

From previous versions of the question, there is this: Browse website with ip address rather than localhost, which outlines pretty much what I've done so far...I've got the local IP working. Then I found ngrok, and apparently I don't need to connect via the IP.
What I am trying to do is expose my website running on localhost to the internet. I found a tool that will do this: ngrok.
Running the website in Visual Studio, the website starts up on localhost/port#. I run the command "ngrok http port#" in the command line. Everything seems to start up fine. I generate a couple of URLs, and the ngrok inspection URL (localhost:4040) works.
The only problem is that when I go to the generated URLs, I get an HTTP error 400: bad request invalid hostname. This is a different error than when I run "ngrok http wrongport#", which is a host not found error...so I think something good is happening. I just can't tell what...
Is there a step I am missing in exposing my site to the internet via the tunneling service? If there is, I can't find it in the ngrok documentation.
I troubleshot this issue with ngrok. In the words of inconshreveable, some applications get angry when they see a different host header than expected.
Running the following command should fix the problem:
ngrok http [port] --host-header="localhost:[port]"
Depending on the version, you may also want to try the single-dash form:
ngrok http [port] -host-header="localhost:[port]"
The following command will fix the issue:
ngrok http -host-header=localhost 8080
This didn't work for me; instead, you could do the following:
For IIS Express
In VS 2015:
Go to the .vs\config\applicationhost.config file in your project
In VS 2013 and earlier:
Go to %USERPROFILE%\My Documents\IISExpress\config\applicationhost.config
Find the binding that says:
<binding protocol="http" bindingInformation="*:5219:localhost" />
For me it was a project running on port 5219.
Change it to:
<binding protocol="http" bindingInformation="*:5219:" />
IIS Express will now accept all incoming connections on that port.
Disadvantage: you need to run IIS Express as admin.
Or you could rewrite the host header in ngrok:
ngrok.exe http -host-header=rewrite localhost:5219
For https this works:
ngrok http https://localhost:<PORT> --host-header="localhost:<PORT>"
UPDATED COMMAND FOR LATEST VERSION
Tested with: Windows, ngrok v3.0.5
Use -- instead of -:
ngrok http --host-header=localhost 8080
The simplest thing for me was using iisexpress-proxy + ngrok.
First I install iisexpress-proxy globally with npm
npm install -g iisexpress-proxy
Then I proxy my localhost with it. Say, for instance, my site is running on 3003:
iisexpress-proxy 3003 to 12345, where 12345 is the new HTTP port I want to proxy to.
Then I can run ngrok on it.
./ngrok.exe http 12345
It just works! 😃
But I think it works only with HTTP. Right now I'm not testing with HTTPS, but even if it works, it's usually a lot of work, as always.
Try different locations from Global infrastructure > Locations:
ngrok http -region eu 8080
You can make a request and view any traffic passing through your tunnel using the ngrok traffic inspector at http://localhost:4040.
Or on the command line:
ngrok http -region eu 8080 --log=stdout
If one region fails then try with another.
ngrok runs tunnel servers in datacenters around the world. The location of the datacenter within a given region may change without notice (e.g. the European servers may move from Frankfurt to London).
us - United States (Ohio)
eu - Europe (Frankfurt)
ap - Asia/Pacific (Singapore)
au - Australia (Sydney)
sa - South America (Sao Paulo)
jp - Japan (Tokyo)
in - India (Mumbai)
First open the ngrok configuration YAML file by running this from the terminal:
ngrok config edit
Example YAML for a localhost setup (client & server):
version: "2"
authtoken: {YOUR_AUTH_TOKEN_FROM_NGROK_WEBSITE}
tunnels:
  client:
    addr: 3000
    proto: http
    host_header: localhost
  server:
    addr: 4000
    proto: http
    host_header: localhost
Save the config file based on your client and server ports and run the following command:
ngrok start --all
This will make ngrok open a tunnel for every configuration declared in the YAML file.
I had an IIS Express .NET web API and had installed ngrok in Docker (with Windows as the host).
I got a "Bad Request" error; the following command worked for me:
docker run -it -e NGROK_AUTHTOKEN=<token> ngrok/ngrok --host-header=localhost:21852 http host.docker.internal:21852
As I understood later, --host-header is needed because IIS Express refuses all requests from outside (the header must be "localhost:port"). I used host.docker.internal instead of localhost because ngrok was running inside Docker, while IIS Express was running on the Windows host.
I had the same issue and used the following solution:
Make sure your application binding in IIS is set to the "All Unassigned" IP address.
Run ngrok http 127.0.0.1:173 --region=eu --hostname=yourcustomdomain.eu.ngrok.io
That's it. It works perfectly. This solution is also for paid pro accounts.
Steps:
Run the command from your console in the ngrok.exe directory: ngrok http <port>, e.g. ngrok http 80 (https://www.screencast.com/t/oyuEPlR6Z).
Set the ngrok URL in your app.
It will create a tunnel to your application.
Thanks.
