Why do I get Error code 404? - tsung

After I run the test as:
tsung -f test.xml start
I get this:
$ cd /Users/samir/.tsung/log/20160910-1035
Apple-Mac-mini:20160910-1035 samir$ /usr/local/Cellar/tsung/1.6.0/lib/tsung/bin/tsung_stats.pl
creating subdirectory data
creating subdirectory gnuplot_scripts
creating subdirectory images
warn, last interval (2) not equal to the first, use the first one (10)
No data for Bosh
No data for Match
No data for Event
No data for Async
Then, I try to run the local server to see the results:
python -m SimpleHTTPServer 8080
In the browser, if I visit http://localhost:8080, it redirects to http://localhost:8080/es/ts_web:status, which results in:
Error response
Error code 404.
Message: File not found.
Error code explanation: 404 = Nothing matches the given URI.
However, other reports work fine, like http://localhost:8080/graph.html
Any idea? Is http://localhost:8080/es/ts_web:status for the real-time status? Why do I get error 404?

This is right. The tsung_stats.pl script generates a report.html file, so in the browser you should visit http://localhost:8080/report.html. The index.html page is created at tsung runtime by ts_controller_sup.beam, which listens on port 8091, so you need to visit http://localhost:8091/index.html while tsung is running. You can read the code around line 100 of ts_controller_sup.erl.
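For reference, the end-to-end flow could look roughly like this (a sketch based on the paths in the question and the 8091 port mentioned above; the log directory name changes with every run):
# start the load test; while it runs, the controller serves the live status page
tsung -f test.xml start
# live dashboard during the run (served by ts_controller_sup on port 8091):
#   http://localhost:8091/
# after the run, generate the static report inside that run's log directory
cd /Users/samir/.tsung/log/20160910-1035
/usr/local/Cellar/tsung/1.6.0/lib/tsung/bin/tsung_stats.pl
# serve the generated files and open report.html (not /es/ts_web:status)
python -m SimpleHTTPServer 8080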

Related

Debugging "Segmentation fault (core dumped)" for Flask App deployed on Apache with mod-wsgi-py3 (Ubuntu)

I created a Flask app that uses Beautiful Soup and Selenium to scrape and track Amazon product prices. The data was stored using CS50's version of SQLAlchemy.
I then created an account to use Oracle's always-free VM, with Ubuntu. I followed this excellent guide to the letter https://asdkazmi.medium.com/deploying-flask-app-with-wsgi-and-apache-server-on-ubuntu-20-04-396607e0e40f and set up Apache's conf file and the WSGI file. I also added the network rules on Oracle's Virtual Cloud Network and to iptables, which I believe work fine.
Following this, the website still couldn't launch. Apache's error log showed "PermissionError: [Errno 13] Permission denied: '/flask_session'". Based on this post, Permission issue when writing file on webserver (flask, apache & wsgi), I changed the working directory with os.chdir('/home/ubuntu/flaskapp') and used chown/chmod to grant rights:
sudo chown -R ubuntu:www-data flaskapp
sudo chmod -R g+s flaskapp
Now, my front page is accessible at http://129.150.38.171/. However, upon any request to the server, Chrome displays "This page isn't working: 129.150.38.171 didn't send any data." Apache's log shows a "segmentation fault (core dumped) python flask". Based on the sequence of my code, the error begins when I try to execute SQL, e.g. rows = usersdb.execute("SELECT * FROM users WHERE username = ?", request.form.get("username")).
I do not think it is my code's error, as it runs fine locally, and the production server also worked when I set it up on the Oracle VM using this guide https://docs.oracle.com/en-us/iaas/developer-tutorials/tutorials/flask-on-ubuntu/01oci-ubuntu-flask-summary.htm
I've found this guide on debugging with gdb: https://www.bustawin.com/debug-segmentation-faults-in-apache-from-mod_wsgi/. But with
source /etc/apache2/envvars
sudo -E gdb /usr/sbin/apache
it just tells me "No executable file specified".
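For what it's worth, on Ubuntu the Apache binary is normally /usr/sbin/apache2 rather than /usr/sbin/apache, so the gdb session from that guide would presumably look more like this (the -X flag, which keeps Apache in a single foreground process, is an assumption from the usual mod_wsgi debugging workflow rather than from the guide):
source /etc/apache2/envvars
sudo -E gdb /usr/sbin/apache2
# inside gdb: run Apache in single-process mode, reproduce the failing request, then inspect the crash
(gdb) run -X
(gdb) bt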
Any ideas on what could be the error?

How to control vhost_traffic_status memory in the K8s nginx ingress?

Background
We run a Kubernetes cluster that handles several php/lumen microservices. We started seeing the app's php-fpm/nginx reporting a 499 status code in its logs, and it seems to correspond with the client getting a blank response (curl returns curl: (52) Empty reply from server) while the applications log 499.
10.10.x.x - - [09/Mar/2020:18:26:46 +0000] "POST /some/path/ HTTP/1.1" 499 0 "-" "curl/7.65.3"
My understanding is nginx will return the 499 code when the client socket is no longer open/available to return the content to. In this situation that appears to mean something before the nginx/application layer is terminating this connection. Our configuration currently is:
ELB -> k8s nginx ingress -> application
So my thoughts are it is either the ELB or the ingress, since the application is the one that has no socket left to return to. So I started digging through the ingress logs...
Potential core problem?
While looking through the ingress logs I'm seeing quite a few of these:
2020/03/06 17:40:01 [crit] 11006#11006: ngx_slab_alloc() failed: no memory in vhost_traffic_status_zone "vhost_traffic_status"
Potential Solution
I imagine if I gave vhost_traffic_status_zone some more memory, at least that error would go away and I could move on to finding the next error, but I can't seem to find any configmap value or annotation that would allow me to control this. I've checked the docs:
https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/
https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/
Thanks in advance for any insight / suggestions / documentation I might be missing!
Here is the standard way to look up how to modify the nginx.conf in the ingress controller. After that, I'll link in some info with suggestions on how much memory you should give the zone.
First, get the ingress controller version by checking the image version on the deployment:
kubectl -n <namespace> get deployment <deployment-name> -o yaml | grep 'image:'
From there, you can retrieve the code for your version from the following URL. In the following, I will be using version 0.10.2.
https://github.com/kubernetes/ingress-nginx/releases/tag/nginx-0.10.2
The nginx.conf template can be found at rootfs/etc/nginx/template/nginx.tmpl in the code, or at /etc/nginx/template/nginx.tmpl on a pod. This can be grepped for the line of interest. In the example case, we find the following line in nginx.tmpl:
vhost_traffic_status_zone shared:vhost_traffic_status:{{ $cfg.VtsStatusZoneSize }};
This gives us the config variable to look up in the code. Our next grep, for VtsStatusZoneSize, leads us to these lines in internal/ingress/controller/config/config.go:
// Description: Sets parameters for a shared memory zone that will keep states for various keys. The cache is shared between all worker processes
// https://github.com/vozlt/nginx-module-vts#vhost_traffic_status_zone
// Default value is 10m
VtsStatusZoneSize string `json:"vts-status-zone-size,omitempty"`
This gives us the key "vts-status-zone-size" to be added to the configmap "ingress-nginx-ingress-controller". The current value can be found in the rendered nginx.conf template on a pod at /etc/nginx/nginx.conf.
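As a rough sketch of applying that change from the command line (the ingress-nginx namespace and the configmap name are assumptions based on a typical install, and 20m is just an illustrative value; see the sizing advice below):
kubectl -n ingress-nginx patch configmap ingress-nginx-ingress-controller \
  --type merge -p '{"data":{"vts-status-zone-size":"20m"}}'
# the controller watches the configmap and re-renders /etc/nginx/nginx.conf on the pods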
When it comes to what size you may want to set the zone to, the docs here suggest setting it to 2 * usedSize:
If the message("ngx_slab_alloc() failed: no memory in vhost_traffic_status_zone") printed in error_log, increase to more than (usedSize * 2).
https://github.com/vozlt/nginx-module-vts#vhost_traffic_status_zone
"usedSize" can be found by hitting the stats page for nginx or through the JSON endpoint. Here is the request to get the JSON version of the stats and if you have jq the path to the value: curl http://localhost:18080/nginx_status/format/json 2> /dev/null | jq .sharedZones.usedSize
Hope this helps.

Proxy authentication using wget on cygwin

My institute recently installed a new proxy server for our network. I am trying to configure my Cygwin environment to be able to run wget and download data from a remote repository.
Browsing the internet I have found two different solutions to my problem, but neither of them seems to work in my case.
The first one I tried was to follow these instructions, so in Cygwin:
cd /cygdrive/c/cygwin64/etc/
nano wgetrc
at the end of the file, I added:
use_proxy = on
http_proxy=http://username:password@my.proxy.ip:my.port/
https_proxy=https://username:password@my.proxy.ip:my.port/
ftp_proxy=http://username:password@my.proxy.ip:my.port/
(of course, using my user and password)
The second approach was what was suggested by this SO post, so in my Cygwin environment:
export http_proxy=http://username:password@my.proxy.ip:my.port/
export https_proxy=https://username:password@my.proxy.ip:my.port/
export ftp_proxy=http://username:password@my.proxy.ip:my.port/
In both cases, when I try to test wget, I get the following:
$ wget http://www.google.com
--2020-01-30 12:12:22-- http://www.google.com/
Resolving my.proxy.ip (my.proxy.ip)... 10.1XX.XXX.XX
Connecting to my.proxy.ip (my.proxy.ip)|10.1XX.XXX.XX|:8XXX... connected.
Proxy request sent, awaiting response... 407 Proxy Authentication Required
2020-01-30 12:12:22 ERROR 407: Proxy Authentication Required.
It looks as if my user and password are not correct, but I actually checked them in my browsers and my credentials work just fine.
Any idea on what this could be due to?
This problem was solved thanks to the suggestion of a user of the AskUbuntu community.
Basically, instead of editing the global configuration file wgetrc, I should have created a new .wgetrc file with my proxy configuration in my Cygwin home directory.
In summary:
Step 1 - Create a .wgetrc file:
nano ~/.wgetrc
Step 2 - Record the proxy info in this file:
use_proxy=on
http_proxy=http://my.proxy.ip:my.port
https_proxy=https://my.proxy.ip:my.port
ftp_proxy=http://my.proxy.ip:my.port
proxy_user=username
proxy_password=password
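Since this file stores the credentials in clear text, it is probably also worth restricting its permissions before testing again:
chmod 600 ~/.wgetrc
wget http://www.google.com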

Running WireMock server as a stand alone

I am trying to set up a mock server using WireMock as a standalone process. I downloaded the JAR file and executed the following command:
java -jar wiremock-standalone-2.23.2.jar --port 0
I had to determine a port dynamically because another program running on my machine is already using the default port 8080. It gave me port number 55142, but when I tried accessing that in the browser, it gave me the following error:
HTTP ERROR 403
Problem accessing /__files/. Reason:
Forbidden
Powered by Jetty://
It's probably because you just entered http://localhost:55142, and there are no mappings in the ./mappings directory and no files in the ./__files directory (both located next to your wiremock JAR file), so the request was not matched:
2019-06-04 00:10:58.890 Request was not matched as there were no stubs registered:
{
"url" : "/"
...
}
Please try calling the __admin endpoint to see if WireMock is working:
http://localhost:55142/__admin
Please also see the docs here for more nice admin commands.
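If the __admin endpoint responds, you can also register a quick stub through the admin API and hit it, for example (the /hello URL is just an illustration):
curl -X POST http://localhost:55142/__admin/mappings \
  -d '{"request": {"method": "GET", "url": "/hello"}, "response": {"status": 200, "body": "Hello world"}}'
curl http://localhost:55142/hello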

Docker restart not showing the desired effect

I have a small nginx-based test application that I want to run inside a docker container, so I followed the example given here: docker installation.
So I have a folder named restartTest and it contains an index.html file with a single line that says Docker Test 1. I mount this as my volume at runtime for the docker container. So the command I use is
docker run -dP -v /Users/Sachin/restartTest/:/usr/share/nginx/html --name engine2 nginx
And it runs fine. I use curl to verify that the volume has mounted properly and the application is running as desired. Now what I do is change the content of the index.html file (on my localhost) to Docker Test 2 and then restart the container. I execute the following command to verify that the content has indeed changed inside the docker container:
docker exec engine2 cat /usr/share/nginx/html/index.html
And as expected, the file reads Docker Test 2. However, when I use the curl command to see if the webpage also reflects the change, I still get Docker Test 1 as the response. The index.html reflects the change, yet when I run the curl command or access the app from the browser, I still get the same result. I have tried the following, but to no avail:
Restart the service
Stop and start the container
Stop and start the boot2docker VM and docker daemon.
I have no clue as to why this is happening.
So I found this known bug with the VirtualBox VM that is used for running Docker on Mac.
We only hit this bug when there is shared content between our host machine and the VirtualBox VM. There is an optimisation in web servers like nginx and apache (and apparently vertx): whenever we request a static file from the server, it uses sendfile to provide us with the file. The bug is that in the case of VirtualBox (in the scenario described above) we always get the first version of the file no matter what we try. The workaround for nginx and apache is to turn sendfile off (see the sketch after the list below). However, there is a hack that we use as far as vertx is concerned:
rename the file say login.html to login.html.moved (anything)
curl :/….../login.html (we won’t get anything)
rename the file back to its original name login.html.moved to login.html
Hard refresh the page (Command + Shift + R).
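For nginx in the docker setup above, turning sendfile off could look roughly like this (a sketch; the default.conf below is a simplified stand-in for the official nginx image's default server block, and the file paths are taken from the question):
cat > /Users/Sachin/restartTest/default.conf <<'EOF'
server {
    listen 80;
    sendfile off;   # work around the VirtualBox shared-folder bug
    location / {
        root  /usr/share/nginx/html;
        index index.html;
    }
}
EOF
docker run -dP \
  -v /Users/Sachin/restartTest/:/usr/share/nginx/html \
  -v /Users/Sachin/restartTest/default.conf:/etc/nginx/conf.d/default.conf \
  --name engine2 nginx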
For further reading about this bug consult the following
Link1
Link2
Link3
Link4
I assume it is a caching problem. Did you try to set expires -1 in your index.html location configuration to disable server side caching for static files?
