Web server (nginx) in Mininet

After using sudo mn to build a simple network in Mininet, I used nginx to set up a web server on host1.
I ran systemctl start nginx in host1's xterm. But it seems to start the web server on my localhost, not inside the Mininet network: I cannot reach the web server from host1 or host2 with Firefox inside Mininet.
Is there anything wrong with what I'm doing?

The reason you cannot connect to the server on host1 is, as you said, that it isn't there: it's running on 127.0.0.1 (localhost) of your host machine, not on any of your Mininet hosts.
The way to get around this is to tell nginx explicitly, via the server conf file, to listen on the Mininet host's IP.
Here's an example that works for me (tested with nginx 1.4.6, Mininet 2.3.0 and Ubuntu 18.04):
from mininet.topo import Topo
from mininet.node import CPULimitedHost
from mininet.link import TCLink
from mininet.net import Mininet
import time


class DumbbellTopo(Topo):
    def build(self, bw=8, delay="10ms", loss=0):
        switch1 = self.addSwitch('switch1')
        switch2 = self.addSwitch('switch2')
        appClient = self.addHost('aClient')
        appServer = self.addHost('aServer')
        crossClient = self.addHost('cClient')
        crossServer = self.addHost('cServer')
        self.addLink(appClient, switch1)
        self.addLink(crossClient, switch1)
        self.addLink(appServer, switch2)
        self.addLink(crossServer, switch2)
        self.addLink(switch1, switch2, bw=bw, delay=delay, loss=loss, max_queue_size=14)


def simulate():
    dumbbell = DumbbellTopo()
    network = Mininet(topo=dumbbell, host=CPULimitedHost, link=TCLink, autoPinCpus=True)
    network.start()
    appClient = network.get('aClient')
    appServer = network.get('aServer')
    wd = str(appServer.cmd("pwd"))[:-2]  # Server's working directory, minus the trailing '\r\n'
    appServer.cmd("echo 'b a n a n a s' > available-fruits.html")  # Page to serve
    appServer.cmd("echo 'events { } http { server { listen " + appServer.IP() + "; root " + wd + "; } }' > nginx-conf.conf")  # Create server config file listening on this host's IP
    appServer.cmd("sudo nginx -c " + wd + "/nginx-conf.conf &")  # Tell nginx to use the configuration file we just created
    time.sleep(1)  # Server might need some time to start
    fruits = appClient.cmd("curl http://" + appServer.IP() + "/available-fruits.html")
    print(fruits)
    appServer.cmd("sudo nginx -s stop")
    network.stop()


if __name__ == '__main__':
    simulate()
This way we create the nginx conf file (nginx-conf.conf) and tell nginx to use it for its configuration.
Alternatively, if you want to start nginx from a terminal on the Mininet host, create the same conf file there and run nginx with it, exactly as the appServer.cmd calls in the code above do.
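If you save the script (as, say, dumbbell.py; the name is just an example) and run it with sudo python3 dumbbell.py (Mininet needs root), the curl output printed should be the contents of available-fruits.html.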

Related

Can't access the fastapi page using the public ipv4 address of the deployed aws ec2 instance with uvicorn running service

I was testing a simple FastAPI backend by deploying it on an AWS EC2 instance. The service runs fine on the default port 8000 on my local machine. When I ran the script on the EC2 instance with
uvicorn main:app --reload it started just fine with the following output:
INFO: Will watch for changes in these directories: ['file/path']
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
INFO: Started reloader process [4698] using StatReload
INFO: Started server process [4700]
INFO: Waiting for application startup.
INFO: Application startup complete.
Then, in the EC2 security group configuration, TCP on port 8000 was allowed (screenshot: ec2 security group port detail).
Then, to test and access the service, I opened the public IPv4 address with the port as https://ec2-public-ipv4-ip:8000/ in Chrome.
But there is no response whatsoever (screenshot: result webpage).
The error in the console is as below
VM697:6747 crbug/1173575, non-JS module files deprecated.
(anonymous) # VM697:6747
The FastAPI main file contains:
from fastapi import FastAPI, Form, Depends
from fastapi.middleware.cors import CORSMiddleware
from fastapi.encoders import jsonable_encoder
import joblib
import numpy as np
import os
from own.preprocess import Preprocess
import sklearn

col_data = joblib.load("col_bool_mod.z")
app = FastAPI()


@app.get("/predict")
async def test():
    return jsonable_encoder(col_data)


@app.post("/predict")
async def provide(data: list):
    print(data)
    output = main(data)
    return output


def predict_main(df):
    num_folds = len(os.listdir("./models/"))
    result_li = []
    for fold in range(num_folds):
        print(f"predicting for fold {fold} / {num_folds}")
        model = joblib.load(f"./models/tabnet/{fold}_tabnet_reg_adam/{fold}_model.z")
        result = model.predict(df)
        print(result)
        result_li.append(result)
    return np.mean(result_li)


def main(data):
    df = Preprocess(data)
    res = predict_main(df)
    print(res)
    return {"value": f"{np.float64(res).item():.3f}" if res >= 0 else f"{np.float64(0).item()}"}
The service runs fine with the same steps for the React frontend on port 3000, but FastAPI on port 8000 is somehow not working.
Thank you for your patience.
I wanted to retrieve basic API responses from the FastAPI/uvicorn server deployed on the AWS EC2 instance, but there is no response even with port 8000 open and the instance accessed by its IPv4 address.
One way the problem was worked around is by adding the public IPv4 address with port 3000 to the CORS origins. But the remaining issue is hiding the GET request data in the browser that is accessed via port 8000.
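Note that the startup log above shows uvicorn listening on http://127.0.0.1:8000, which only accepts connections made from the instance itself, so opening port 8000 in the security group is not enough. A minimal sketch of binding to all interfaces instead (assuming the entry point really is main:app):

import uvicorn

# Bind to 0.0.0.0 so the server is reachable via the instance's public IP;
# 127.0.0.1 only accepts connections originating on the instance itself.
if __name__ == "__main__":
    uvicorn.run("main:app", host="0.0.0.0", port=8000)

The equivalent command line is uvicorn main:app --host 0.0.0.0 --port 8000. Note also that the log shows plain http, so requesting https://ec2-public-ipv4-ip:8000/ will not get an answer; use http:// or put TLS termination in front.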

How do I deploy Apache-Airflow via uWSGI and nginx?

I'm trying to deploy Airflow in a production environment on a server running nginx and uWSGI.
I've searched the web and found instructions on installing Airflow behind a reverse proxy, but those instructions only have nginx config examples. Due to permissions, I can't change nginx.conf itself and have to solve this on the uWSGI side.
My folder structure is:
project_folder
|_ airflow
|  |_ airflow.cfg
|  |_ webserver_config.py
|  |_ wsgi.py
|_ env
|_ start
|_ stop
|_ uwsgi.ini
My path/to/myproject/uwsgi.ini file is configured as follows:
[uwsgi]
master = True
http-socket = 127.0.0.1:9999
virtualenv = /path/to/myproject/env/
daemonize = /path/to/myproject/uwsgi.log
pidfile = /path/to/myproject/tmp/myapp.pid
workers = 2
threads = 2
# adjust the following to point to your project
wsgi-file = /path/to/myproject/airflow/wsgi.py
touch-reload = /path/to/myproject/airflow/wsgi.py
and currently the /path/to/myproject/airflow/wsgi.py looks as follows:
def application(env, start_response):
    start_response('200 OK', [('Content-Type', 'text/html')])
    return [b'Hello World!']
I'm assuming I have to somehow call the Airflow Flask app from the wsgi.py file (perhaps also adjusting some reverse-proxy fix settings, since I'm behind SSL), but I'm stuck: what do I have to configure?
Will this procedure then be identical for the workers and the scheduler?
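For what it's worth, a minimal sketch of the kind of wsgi.py this is pointing at, assuming Airflow 2.x, where airflow.www.app.cached_app() returns the webserver's Flask app (the paths are the placeholders from above):

# /path/to/myproject/airflow/wsgi.py
import os

# Make sure Airflow picks up the right airflow.cfg before the app is built.
os.environ.setdefault("AIRFLOW_HOME", "/path/to/myproject/airflow")

from airflow.www.app import cached_app

# uWSGI's wsgi-file mode looks for a module-level callable named 'application'.
application = cached_app()

The scheduler and workers are not WSGI apps, so they would still run as separate processes (airflow scheduler, etc.) rather than through uWSGI.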

request.client.host is working in one server but not in other server?

We are using the same gunicorn and nginx configuration on both servers. One server returns the client IP, but the other does not. Both are Ubuntu servers.
We are developing REST API services using the FastAPI framework, running gunicorn behind nginx.
Below is the gunicorn.py file:
import os
errorlog = '/var/log/gunicorn/gunicorn.log'
loglevel = 'debug'
bind = 'unix:/tmp/gunicorn.sock'
daemon = True
workers = os.cpu_count() * 2
timeout = 600
graceful_timeout = 600
keepalive = 60
worker_class = "uvicorn.workers.UvicornWorker"
max_requests = 2048
preload_app = True
max_requests_jitter = 1024
worker_connections = 1000
proxy_protocol = True
forwarded_allow_ips = "*"
proxy_allow_ips = "*"
We run the above gunicorn.py file with gunicorn -c gunicorn.py base.main:app.
We get the client IP using request.client.host.
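For reference, a minimal sketch of the kind of endpoint involved (illustrative, not from the original post); with forwarded_allow_ips = "*", the UvicornWorker should fill in request.client from the X-Forwarded-For header nginx passes along:

from fastapi import FastAPI, Request

app = FastAPI()

# Echo back the address the app sees for the caller; behind nginx this
# should be the real client IP once forwarded headers are trusted.
@app.get("/whoami")
async def whoami(request: Request):
    return {"client_host": request.client.host}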
The issue got resolved by recreating my virtual environment: I removed the existing environment and created it again on the server. It's working now.

Pytorch model prediction in production with uwsgi

I have a problem deploying a PyTorch model in production. For a demonstration, I built a simple model and a Flask app. I put everything in a Docker container (pytorch + flask + uwsgi) plus another container for nginx. Everything runs well: my app is rendered and I can navigate inside it. However, when I navigate to the URL that launches a prediction of the model, the server hangs and does not seem to compute anything.
The uWSGI is run like this:
/opt/conda/bin/uwsgi --ini /usr/src/web/uwsgi.ini
with uwsgi.ini
[uwsgi]
#application's base folder
chdir = /usr/src/web/
#python module to import
wsgi-file = /usr/src/web/wsgi.py
callable = app
#socket file's location
socket = /usr/src/web/uwsgi.sock
#permissions for the socket file
chmod-socket = 666
# Port to expose
http = :5000
# Cleanup the socket when process stops
vacuum = true
#Log directory
logto = /usr/src/web/app.log
# minimum number of workers to keep at all times
cheaper = 2
processes = 16
As said, the server hangs and I finally get a timeout. What is strange is that when I run the Flask application directly (also in the container) with
python /usr/src/web/manage.py runserver --host 0.0.0.0
I get my prediction in no time
I think this is related to
https://discuss.pytorch.org/t/basic-operations-do-not-work-in-1-1-0-with-uwsgi-flask/50257
Maybe try as mentioned there:
import flask

app = flask.Flask(__name__)
segmentator = None


@app.before_first_request
def load_segmentator():
    global segmentator
    segmentator = Segmentator()
where Segmentator is a class with pytorch’s nn.Module, which loads weights in __init__
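A minimal sketch of what such a wrapper might look like (SegNet, the weights path and the predict method here are illustrative assumptions, not from the original post):

import torch
import torch.nn as nn


class SegNet(nn.Module):  # stand-in for the real network architecture
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 1, kernel_size=3, padding=1)

    def forward(self, x):
        return self.conv(x)


class Segmentator:
    def __init__(self, weights_path="weights.pth"):
        self.model = SegNet()
        # Loading the weights here means it happens inside the worker
        # process, after uWSGI forks, which is the point of the fix above.
        self.model.load_state_dict(torch.load(weights_path, map_location="cpu"))
        self.model.eval()

    def predict(self, x):
        with torch.no_grad():
            return self.model(x)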
FYI this solution worked for me with one app but not the other

uWSGI and Flask Server Sent Events

I want to run a Flask application on my Raspberry Pi 3. I have already developed the Flask app and it works fine, but on Flask's development server.
I want to use a production server, so I'm using nginx as the web server and uWSGI as the application server on the Pi. Now, the Flask app uses server-sent events (SSE) to get live data from the server. When I run the app using uWSGI, it stalls. I believe it's because of the SSE: I had a similar problem on the Flask development server, but there all I did was enable threading and the problem was solved. Enabling threading in uWSGI (when running the uWSGI script) doesn't solve the issue though. Help!
This is my uWSGI .ini file.
[uwsgi]
base = /home/pi/heap
app = app
module = %(app)
home = %(base)/venv
pythonpath = %(base)
socket = /home/pi/heap/%n.sock
chmod-socket = 666
callable = app
Thank you!
Try running it on an HTTP port instead of socket mode, with processes and threads defined:
[uwsgi]
base = project_path
chdir = project_path
module = your_module_name
callable = your_app_name
enable-threads = true
master = true
processes = 5
threads = 2
http = :5000
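For context, an SSE endpoint keeps one connection open per client, which is why the processes/threads settings above matter under uWSGI. A minimal sketch of such an endpoint (illustrative, not the asker's code):

import time
from flask import Flask, Response

app = Flask(__name__)


@app.route("/stream")
def stream():
    def events():
        # Each connected client sits in this loop, holding a worker
        # thread for as long as its EventSource stays open.
        while True:
            yield f"data: {time.time()}\n\n"  # SSE framing: 'data: ...' plus a blank line
            time.sleep(1)
    return Response(events(), mimetype="text/event-stream")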
