Ti.Network.createHTTPClient SSL - tidesdk

When I try to connect my application to my Apache SSLv3 server, TideSDK reports: "SSL connect error". Here is my code:
var url = 'https://www.sipmeeting.com/';
var client = Ti.Network.createHTTPClient({
    onload: function(e) {
        // request complete; do something with the data
        // (assuming that we are not working with XML)
        alert('Response received: ' + this.responseText);
    },
    onerror: function(e) {
        // error received; surface it so failures are visible
        alert('Request failed');
    }
});
client.open('GET', url, true);
client.send();
Is TideSDK SSLv3 compatible? Other sites, like https://mail.google.com/mail, open fine.
Thank you for the help!
--John

In case anyone was on the edge of their seat, I solved this. The issue was on my Apache server: I needed to set ServerName in my Apache SSL vhost to the same value as the Common Name (CN) on my SSL certificate.
To debug, I turned on SSL logging in Apache:
ErrorLog /var/log/apache2/ssl_engine.log
LogLevel debug
Nothing showed up in the logs when I ran
curl https://www.sipmeeting.com
but curl would return:
server:~ john$ curl -v https://www.sipmeeting.com
* About to connect() to www.sipmeeting.com port 443 (#0)
* Trying 208.126.100.54...
* connected
* Connected to www.sipmeeting.com (208.126.100.54) port 443 (#0)
* SSLv3, TLS handshake, Client hello (1):
* error:14077458:SSL routines:SSL23_GET_SERVER_HELLO:reason(1112)
* Closing connection #0
curl: (35) error:14077458:SSL routines:SSL23_GET_SERVER_HELLO:reason(1112)
But when I added -3 (force SSLv3) to the curl command, i.e.
curl -v -3 https://www.sipmeeting.com
the handshake completed. I then deduced that something wasn't correlating between the vhost and the certificate, and the CN was the most probable suspect. Once the ServerName and the CN matched, it was fixed for both curl and TideSDK.
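For reference, the relevant vhost change looked something like this (a sketch; the certificate file paths are hypothetical):
<VirtualHost *:443>
    # ServerName must match the certificate's Common Name (CN)
    ServerName www.sipmeeting.com
    SSLEngine on
    SSLCertificateFile /etc/apache2/ssl/www.sipmeeting.com.crt
    SSLCertificateKeyFile /etc/apache2/ssl/www.sipmeeting.com.key
</VirtualHost>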
Thanks!
--John

Related

GCP deployment with nginx - uwsgi - flask fails

I have a very simple Flask app deployed on GKE and exposed via a Google external load balancer, and I am getting random 502 responses from the backend service. (I added custom headers on both the backend service and nginx to identify the source of each response; I can see the backend service's header but not nginx's.)
The setup is:
LB -> backend-service -> NEG -> pod (nginx -> uwsgi), where the pod is the application built with Flask and deployed via uwsgi and nginx.
The scenario is to handle image uploads in a simple, secured way. The sender sends me a token with the upload request. My Flask app:
receives the request and checks the sent token via another service, using "requests";
if the token is valid, proceeds to handle the image and returns 200;
if the token is not valid, stops and sends back a 401 response.
First, I got suspicious about the mix of 200s and 401s, so I reverted all responses to 200. After some of the expected responses, the server starts to respond with 502 and keeps sending it (some of the messages at the very beginning succeeded).
The nginx error logs contain lines like this:
2023/02/08 18:22:29 [error] 10#10: *145 readv() failed (104: Connection reset by peer) while reading upstream, client: 35.191.17.139, server: _, request: "POST /api/v1/imageUpload/image HTTP/1.1", upstream: "uwsgi://127.0.0.1:21270", host: "example-host.com"
My uwsgi.ini file is as below:
[uwsgi]
socket = 127.0.0.1:21270
master = true
processes = 8
threads = 1
buffer-size = 32768
stats = 127.0.0.1:21290
log-maxsize = 104857600
logdate = true
log-reopen = true
log-x-forwarded-for = true
uid = image_processor
gid = image_processor
need-app = true
chdir = /server/
wsgi-file = image_processor_application.py
callable = app
py-auto-reload = 1
pidfile = /tmp/uwsgi-imgproc-py.pid
My nginx.conf is as below:
location ~ ^/api/ {
    client_max_body_size 15M;
    include uwsgi_params;
    uwsgi_pass 127.0.0.1:21270;
}
Lastly, my app has a healthcheck method with a simple JSON response. It does no extra work and simply returns; as noted above, it never fails.
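(For reference, the healthcheck is roughly this shape; the route name here is a hypothetical stand-in:)
from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/healthz')
def healthcheck():
    # no body reading, no external calls; just a static JSON response
    return jsonify(status='ok')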
Edit: the nginx access logs in the pod show the response as 401 while the client receives 502.
For anyone who runs into the same issue: the problem was reading the POST data (or rather, not reading it).
nginx expects the POST data to be read by the proxied app, in our case uwsgi. But with my logic, in some cases I was not reading it and was returning the response anyway.
Setting uwsgi's post-buffering solved the issue:
post-buffering = %(16 * 1024 * 1024)
This led me to the solution:
https://stackoverflow.com/a/26765936/631965 (Nginx uwsgi (104: Connection reset by peer) while reading response header from upstream)
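An application-side alternative (my own sketch, not from the answer above; token_is_valid and the X-Token header are hypothetical stand-ins for the real token check) is to drain the request body before returning an early response:
from flask import Flask, request, jsonify

app = Flask(__name__)

def token_is_valid(token):
    # stand-in for the real call to the token service
    return token == 'expected-token'

@app.route('/api/v1/imageUpload/image', methods=['POST'])
def upload_image():
    if not token_is_valid(request.headers.get('X-Token')):
        # consume the body so the upstream never resets an unread POST
        request.get_data()
        return jsonify(error='invalid token'), 401
    image_bytes = request.get_data()
    # ... handle image_bytes ...
    return jsonify(status='ok'), 200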

Dotnet watch on macOS gives curl a tlsv1 alert protocol version error

I am developing an ASP.NET application on macOS with F# (.NET 6.0.301). While writing code I run dotnet watch:
dotnet watch run -v --project Server/Server.fsproj
and send a curl request to one of the API endpoints of the server:
curl -k -i -d "#loginInfo.json" -H "Accept: application/json" -H "Content-Type: application/json" -v 'https://localhost:5001/services/IAdminApi/login'
* Trying 127.0.0.1:5001...
* Connected to localhost (127.0.0.1) port 5001 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/cert.pem
* CApath: none
* (304) (OUT), TLS handshake, Client hello (1):
* (304) (IN), TLS handshake, Server hello (2):
[...] // More handshake data
< HTTP/1.1 200 OK
HTTP/1.1 200 OK
which returns the expected result. This worked seamlessly until a few months ago, when I started to receive the following error:
* Trying 127.0.0.1:5001...
* Connected to localhost (127.0.0.1) port 5001 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/cert.pem
* CApath: none
* (304) (OUT), TLS handshake, Client hello (1):
* error:1404B42E:SSL routines:ST_CONNECT:tlsv1 alert protocol version
* Closing connection 0
curl: (35) error:1404B42E:SSL routines:ST_CONNECT:tlsv1 alert protocol version
However, when I run the server directly without the watch command:
/usr/local/share/dotnet/dotnet Server/bin/Debug/net6.0/Server.dll
everything works perfectly, and the API sends back the proper info. The server uses a self-signed certificate that is read from file.
Everything is running locally on a macOS machine. I have tried on two machines with different macOS versions, and the problems started after updating to Monterey 12.6 and Ventura 13. Both machines now run updated versions (Monterey 12.6.2 and Ventura 13.1), but the problem persists.
However, dotnet watch works as expected on Windows 10. Commands are run from a terminal, without any intervention from the IDE (Rider in my case). Even though I lean towards something at the OS level, I also tried sending the curl command with the --tlsv1.x and --tls-max 1.x options (x = 0,1,2,3), with no luck. The curl version is 7.79.1.
Any pointer to keep investigating will be greatly appreciated.
I think this might be related to how the dotnet watch command handles encrypted traffic. As per this page: https://learn.microsoft.com/en-us/dotnet/core/tools/dotnet-watch
"As part of dotnet watch, the browser refresh server mechanism reads this value to determine the WebSocket host environment. The value 127.0.0.1 is replaced by localhost, and the http:// and https:// schemes are replaced with ws:// and wss:// respectively."
So perhaps the HTTPS traffic works fine when you run the application without dotnet watch because it uses appropriate ciphers and TLS versions, while there is a bug or omission in the implementation of the wss protocol, where TLS is fixed to version 1.
It would appear that you have two options:
run your application on localhost without HTTPS, or
configure your operating system to allow TLS 1.0.
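For the first option, one way to do it (a sketch; the port is an assumption, the project path is from the question) is to bind the app to a plain HTTP URL:
dotnet watch run --project Server/Server.fsproj --urls "http://localhost:5000"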

HTTP/HTTPS timeouts in/out because of DHCP?

I'm trying to debug a new server I ordered at OVH.com, and they insist everything is working properly even though it times out when doing a curl request towards, for example, github.com (it times out in around 9 of 10 tries):
curl -L -v https://github.com
I get
* Rebuilt URL to: https://github.com/
* Trying 140.82.118.4...
* connect to 140.82.118.4 port 443 failed: Connection timed out
* Failed to connect to github.com port 443: Connection timed out
* Closing connection 0
curl: (7) Failed to connect to github.com port 443: Connection timed out
Even when I set up an NGINX server, the site times out on almost every second request.
So I thought perhaps the DHCP server could be an issue, so I checked it and I see this (from /var/lib/dhcp..):
lease {
    interface "ens4";
    fixed-address 10.0.X.XX;
    option subnet-mask 255.255.255.0;
    option routers 10.0.X.X;
    option dhcp-lease-time 86400;
    option dhcp-message-type 5;
    option dhcp-server-identifier 10.0.X.X;
    option domain-name-servers 10.0.X.X;
    renew 6 2020/03/28 02:16:19;
    rebind 6 2020/03/28 13:47:57;
    expire 6 2020/03/28 16:47:57;
}
lease {
    interface "ens4";
    fixed-address 10.0.X.XX;
    option subnet-mask 255.255.255.0;
    option routers 10.0.X.X;
    option dhcp-lease-time 86400;
    option dhcp-message-type 5;
    option dhcp-server-identifier 10.0.X.X;
    option domain-name-servers 10.0.X.X;
    renew 5 2020/03/27 16:51:54;
    rebind 5 2020/03/27 16:51:54;
    expire 5 2020/03/27 16:51:54;
}
I tried getting a new lease with the following command, but nothing changes; it's still the same as above:
sudo dhclient -r
Am I reading the DHCP leases wrong, or do they look normal? For the record, the public IP on this dedicated server starts with 5, not 1, and it runs Ubuntu 16.04 LTS.
What is the offer you have at OVH? They usually don't give a private IP to a dedicated server or a virtual private server, so that's quite odd.
You may want to collect some traces to check what is going wrong, with tools like the following (example invocations below):
tcptraceroute, to check whether the path to a domain on port 80 or 443 looks strange;
ping, to see if there is packet loss;
tcpdump, to capture raw network packets while a timeout is occurring and see what's going on.
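For example (the interface name comes from the lease file above and the IP from the curl output; adjust as needed):
sudo tcptraceroute github.com 443
ping -c 20 github.com
sudo tcpdump -i ens4 host 140.82.118.4 -w timeout.pcap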
That's a good start and may also help you go back to OVH support and prove to them that something is wrong.

SQLMap: Can't establish SSL Connection: Need Solution

I am trying to use SQLMap over HTTPS, but when I run
C:\Python27\sqlmap>sqlmap.py -u https://localhost:8774/App/console/index.jsp --force-ssl
it returns "Can't establish SSL connection".
Is there any way I can pass an SSL certificate to SQLMap?
Environment Details:
OS: Windows 10
Python: 2.7
SQLMap: 1.4.2.42
Remove https:// from the -u parameter; just put:
-u localhost:8774/App/console/index.jsp
A simple solution is to set up a proxy listener like Burp Suite, browse to the site with the bad SSL certificate, and trust it.
After that, you can include the following option in your SQLMap command:
--proxy="http://PROXY-IP:PROXY-PORT"
where the proxy IP is generally 127.0.0.1 and the proxy port 8080.
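Putting it together with the URL from the question (assuming Burp is listening on its default 127.0.0.1:8080):
sqlmap.py -u https://localhost:8774/App/console/index.jsp --force-ssl --proxy="http://127.0.0.1:8080"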

Nginx memcached with fallback to remote service

I can't get Nginx working with the memcached module. The requirement is to query a remote service, cache the data in memcached, and never hit the remote endpoint again until the backend invalidates the cache. I have 2 containers with memcached v1.4.35 and one with Nginx v1.11.10.
The configuration is the following:
upstream http_memcached {
    server 172.17.0.6:11211;
    server 172.17.0.7:11211;
}
upstream remote {
    server api.example.com:443;
    keepalive 16;
}
server {
    listen 80;
    location / {
        set $memcached_key "$uri?$args";
        memcached_pass http_memcached;
        error_page 404 502 504 = @remote;
    }
    location @remote {
        internal;
        proxy_pass https://remote;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}
If I deliberately misconfigure the memcached upstream, I get HTTP 499 instead, along with warnings:
*3 upstream server temporarily disabled while connecting to upstream
It seems that with the described configuration Nginx can reach memcached successfully but can't write to or read from it. I can write to and read from memcached with telnet successfully.
Can you help me, please?
My guesses on what's going on with your configuration:
1. 499 codes
HTTP 499 is nginx's custom code meaning the client terminated the connection before receiving the response (http://lxr.nginx.org/source/src/http/ngx_http_request.h#0120).
We can easily reproduce it: put something that never answers in place of a memcached server, e.g.
nc -k -l 172.17.0.6 11211
and curl your resource. curl will hang for a while; press Ctrl+C, and you'll have this message in your access logs.
2. upstream server temporarily disabled while connecting to upstream
This means nginx didn't manage to reach your memcached server and removed it from the pool of upstreams. It suffices to shut down both memcached servers and you'll see it constantly in your error logs (I see it every time with error_log ... info).
Since you do see these messages, your assumption that nginx can freely communicate with the memcached servers doesn't seem to hold.
Consider explicitly setting memcached_bind (http://nginx.org/en/docs/http/ngx_http_memcached_module.html#memcached_bind), and use the -b option with telnet to make sure you're testing the memcached servers' availability from the same source address.
3. nginx can reach memcached successfully but can't write to or read from it
Nginx can only read from memcached via its built-in module (http://nginx.org/en/docs/http/ngx_http_memcached_module.html):
"The ngx_http_memcached_module module is used to obtain responses from a memcached server. The key is set in the $memcached_key variable. A response should be put in memcached in advance by means external to nginx."
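In other words, something else has to do the writing. A sketch of what that could look like (the pymemcache client and the example key/value are my assumptions; any memcached client works):
# populate the key that nginx's $memcached_key ("$uri?$args") will look up
from pymemcache.client.base import Client

client = Client(('172.17.0.6', 11211))
client.set('/api/items?id=1', b'{"id": 1, "name": "example"}', expire=900)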
4. overall architecture
It's not fully clear from your question how the overall scheme is supposed to work:
nginx's upstream block uses weighted round-robin by default, which means each request queries one of your memcached servers chosen in turn. You can change this by setting memcached_next_upstream not_found, so a missing key is treated as an error and all of your servers are polled. That's probably OK for a farm of 2 servers, but unlikely to be what you want for 20 servers;
the same is ordinarily the case for memcached client libraries: they pick a server out of a pool according to some hashing scheme, so a given key ends up on only 1 server out of the pool (see the sketch after this list).
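If you need nginx and your writers to agree on which server holds a key, one option (my own sketch, not part of the original setup) is to hash on the key in the upstream block:
upstream http_memcached {
    # pin each key to one server, using ketama-style consistent hashing
    hash $memcached_key consistent;
    server 172.17.0.6:11211;
    server 172.17.0.7:11211;
}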
5. what to do
I managed to set up a similar configuration in 10 minutes on my local box, and it works as expected. To simplify debugging, I'd get rid of the docker containers to avoid networking overcomplication, run 2 memcached servers on different ports in single-threaded mode with the -vv option to see when requests reach them (memcached -p 11211 -U 0 -vv), and then play with tail -f and curl to see what's really happening.
6. working solution
nginx config (https and http/1.1 are not used here, but it doesn't matter):
upstream http_memcached {
    server 127.0.0.1:11211;
    server 127.0.0.1:11212;
}
upstream remote {
    server 127.0.0.1:8080;
}
server {
    listen 80;
    server_name server.lan;
    access_log /var/log/nginx/server.access.log;
    error_log /var/log/nginx/server.error.log info;
    location / {
        set $memcached_key "$uri?$args";
        memcached_next_upstream not_found;
        memcached_pass http_memcached;
        error_page 404 = @remote;
    }
    location @remote {
        internal;
        access_log /var/log/nginx/server.fallback.access.log;
        proxy_pass http://remote;
        proxy_set_header Connection "";
    }
}
server.py, my dummy server (Python):
from random import randint
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello: {}\n'.format(randint(1, 100000))
This is how to run it (you just need to install Flask first):
FLASK_APP=server.py flask run -p 8080
Filling in my first memcached server (the key /? is what $memcached_key evaluates to for a request to / with no args):
$ telnet 127.0.0.1 11211
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
set /? 0 900 5
cache
STORED
quit
Connection closed by foreign host.
Checking (note that we get a result every time, although we stored the data only in the first server):
$ curl http://server.lan && echo
cache
$ curl http://server.lan && echo
cache
$ curl http://server.lan && echo
cache
This one is not in the cache, so we'll get a response from server.py:
$ curl http://server.lan/?q=1 && echo
Hello: 32337
The whole picture: the 2 windows on the right are running
memcached -p 11211 -U 0 -vv
and
memcached -p 11212 -U 0 -vv
