Can't access the FastAPI page using the public IPv4 address of a deployed AWS EC2 instance with uvicorn running the service (HTTP)

I was testing a simple FastAPI backend by deploying it on an AWS EC2 instance. The service runs fine on the default port 8000 on my local machine. When I ran the script on the EC2 instance with
uvicorn main:app --reload it started fine with the following output:
INFO: Will watch for changes in these directories: ['file/path']
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
INFO: Started reloader process [4698] using StatReload
INFO: Started server process [4700]
INFO: Waiting for application startup.
INFO: Application startup complete.
Then, in the EC2 security group configuration, inbound TCP traffic on port 8000 was allowed, as shown in the image below.
[image: EC2 security group port detail]
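(For reference, a rule like this can also be added from the AWS CLI; the security-group ID below is a hypothetical placeholder:)
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 8000 --cidr 0.0.0.0/0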
Then, to test and access the service, I opened the public IPv4 address with the port in Chrome, as https://ec2-public-ipv4-ip:8000/.
But there is no response whatsoever.
The webpage is shown below.
[image: result webpage]
The error in the console is:
VM697:6747 crbug/1173575, non-JS module files deprecated.
(anonymous) @ VM697:6747
The FastAPI main file contains:
from fastapi import FastAPI, Form, Depends
from fastapi.middleware.cors import CORSMiddleware
from fastapi.encoders import jsonable_encoder
import joblib
import numpy as np
import os
from own.preprocess import Preprocess
import sklearn

col_data = joblib.load("col_bool_mod.z")

app = FastAPI()

@app.get("/predict")
async def test():
    return jsonable_encoder(col_data)

@app.post("/predict")
async def provide(data: list):
    print(data)
    output = main(data)
    return output

def predict_main(df):
    num_folds = len(os.listdir("./models/"))
    result_li = []
    for fold in range(num_folds):
        print(f"predicting for fold {fold} / {num_folds}")
        model = joblib.load(f"./models/tabnet/{fold}_tabnet_reg_adam/{fold}_model.z")
        result = model.predict(df)
        print(result)
        result_li.append(result)
    return np.mean(result_li)

def main(data):
    df = Preprocess(data)
    res = predict_main(df)
    print(res)
    return {"value": f"{np.float64(res).item():.3f}" if res >= 0 else f"{np.float64(0).item()}"}
The service runs fine with the same steps for the React frontend on port 3000, but FastAPI on port 8000 is somehow not working.
Thank you for your patience.
I wanted to retrieve basic API responses from the FastAPI/uvicorn server deployed on an AWS EC2 instance, but there is no response even with port 8000 open and the EC2 instance accessed on its public IPv4 address.
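(One thing worth checking in the startup log above: Uvicorn reports it is listening on 127.0.0.1, which only accepts connections from the instance itself. Assuming nothing else is blocking the port, binding to all interfaces should make the service reachable from the public IP:)
uvicorn main:app --host 0.0.0.0 --port 8000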

One way the problem is partially fixed is by adding the public IPv4 address followed by port 3000 to the CORS allowed origins. But the remaining issue is hiding the GET request data that the browser can access directly on port 8000.
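(A minimal sketch of that CORS setup with FastAPI's CORSMiddleware, using a placeholder address for the frontend origin:)
app.add_middleware(
    CORSMiddleware,
    allow_origins=["http://ec2-public-ipv4-ip:3000"],  # placeholder frontend origin
    allow_methods=["*"],
    allow_headers=["*"],
)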

Related

How to redirect HTTPS traffic to local HTTP server using mitmproxy?

I am trying to set up mitmproxy so that I can make a request from my browser to https://{my-domain} and have it return a response from my local server running at http://localhost:3000 instead, but I cannot get the HTTPS request to reach my local server. I see the debugging statements from mitmproxy. Also, I can get it working for HTTP traffic, but not for HTTPS.
I read the mitmproxy addon docs and API docs.
I've installed the cert and I can monitor HTTPS through the proxy.
I'm using mitmproxy 4.0.4 and Python 3.7.4.
This is my addon (local-redirect.py) and how I run mitmproxy:
from mitmproxy import ctx
import mitmproxy.http

class LocalRedirect:
    def __init__(self):
        print('Loaded redirect addon')

    def request(self, flow: mitmproxy.http.HTTPFlow):
        if 'my-actual-domain-here' in flow.request.pretty_host:
            ctx.log.info("pretty host is: %s" % flow.request.pretty_host)
            # Rewrite the intercepted request to target the local HTTP server
            flow.request.host = "localhost"
            flow.request.port = 3000
            flow.request.scheme = 'http'

addons = [
    LocalRedirect()
]
$ mitmdump -s local-redirect.py | grep pretty
When I visit the URL for my server, I see the logging statement, but my browser hangs on the request and no request ever reaches my local server.
The above addon was fine; however, my local server did not support HTTP/2.
Using the --no-http2 option was a quick fix:
mitmproxy -s local-redirect.py --no-http2 --view-filter localhost
or
mitmdump -s local-redirect.py --no-http2 localhost
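(If you would rather not pass the flag every time, the same option can, assuming mitmproxy's usual option-to-config mapping, be set persistently in ~/.mitmproxy/config.yaml:)
http2: false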

Pytorch model prediction in production with uwsgi

I have a problem deploying a PyTorch model in production. For a demonstration, I built a simple model and a Flask app. I put everything in a Docker container (PyTorch + Flask + uWSGI) plus another container for nginx. Everything runs well: my app is rendered and I can navigate inside it. However, when I navigate to the URL that launches a prediction, the server hangs and does not seem to compute anything.
The uWSGI is run like this:
/opt/conda/bin/uwsgi --ini /usr/src/web/uwsgi.ini
with uwsgi.ini
[uwsgi]
#application's base folder
chdir = /usr/src/web/
#python module to import
wsgi-file = /usr/src/web/wsgi.py
callable = app
#socket file's location
socket = /usr/src/web/uwsgi.sock
#permissions for the socket file
chmod-socket = 666
# Port to expose
http = :5000
# Cleanup the socket when process stops
vacuum = true
#Log directory
logto = /usr/src/web/app.log
# minimum number of workers to keep at all times
cheaper = 2
processes = 16
As said, the server hangs and I finally get a timeout. What is strange is that when I run the Flask application directly (also in the container) with
python /usr/src/web/manage.py runserver --host 0.0.0.0
I get my prediction in no time.
I think this is related to
https://discuss.pytorch.org/t/basic-operations-do-not-work-in-1-1-0-with-uwsgi-flask/50257
Maybe try as mentioned there:
app = flask.Flask(__name__)
segmentator = None

@app.before_first_request
def load_segmentator():
    global segmentator
    segmentator = Segmentator()
where Segmentator is a class wrapping PyTorch's nn.Module, which loads its weights in __init__.
FYI this solution worked for me with one app but not the other
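(Another uWSGI setting often suggested for this symptom: PyTorch can deadlock when the app is loaded in the master process and then forked into workers. lazy-apps makes each worker import the application itself after forking; a sketch of the lines to add to the existing [uwsgi] section:)
# load the app in each worker after fork instead of inheriting it from the master
lazy-apps = true
# allow the Python threads PyTorch may spawn
enable-threads = true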

Airflow - Failed to fetch log file from worker. 404 Client Error: NOT FOUND for url

I am running Airflow v1.9 with the Celery Executor. I have 5 Airflow workers running on 5 different machines. The Airflow scheduler also runs on one of these machines. I have copied the same airflow.cfg file across all 5 machines.
I have daily workflows set up in different queues like DEV, QA, etc. (each worker runs with an individual queue name), and they run fine.
While scheduling a DAG on one of the workers (no other DAG had been set up for this worker/machine previously), I see this error in the first task, and as a result the downstream tasks fail:
*** Log file isn't local.
*** Fetching here: http://<worker hostname>:8793/log/PDI_Incr_20190407_v2/checkBCWatermarkDt/2019-04-07T17:00:00/1.log
*** Failed to fetch log file from worker. 404 Client Error: NOT FOUND for url: http://<worker hostname>:8793/log/PDI_Incr_20190407_v2/checkBCWatermarkDt/2019-04-07T17:00:00/1.log
I have configured MySQL for storing the DAG metadata. When I checked task_instance table, I see proper hostnames are populated against the task.
I also checked the log location and found that the log is getting created.
airflow.cfg snippet:
base_log_folder = /var/log/airflow
base_url = http://<webserver ip>:8082
worker_log_server_port = 8793
api_client = airflow.api.client.local_client
endpoint_url = http://localhost:8080
What am I missing here? What configurations do I need to check additionally for resolving this issue?
Looks like the worker's hostname is not being correctly resolved.
Add a file hostname_resolver.py:
import os
import socket

import requests

def resolve():
    """
    Resolves Airflow external hostname for accessing logs on a worker
    """
    if 'AWS_REGION' in os.environ:
        # Return the EC2 instance's private IP from the instance metadata service:
        return requests.get(
            'http://169.254.169.254/latest/meta-data/local-ipv4').text

    # Use a DNS request to find out our external IP:
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.connect(('1.1.1.1', 53))
    external_ip = s.getsockname()[0]
    s.close()
    return external_ip
And export: AIRFLOW__CORE__HOSTNAME_CALLABLE=airflow.hostname_resolver:resolve
The master's web process needs to reach the worker to fetch the log and display it on the front-end page, and to do that it must resolve the worker's hostname. Since the hostname evidently cannot be resolved, add a hostname-to-IP mapping to /etc/hosts on the master.
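(For example, with a hypothetical worker hostname and address, the line added to the master's /etc/hosts would look like:)
192.168.xxx.yyy    worker_hostname_0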
If this happens as part of a Docker Compose Airflow setup, the hostname resolution needs to be passed to the container hosting the webserver, e.g. through extra_hosts:
# docker-compose.yml
version: "3.9"
services:
  webserver:
    extra_hosts:
      - "worker_hostname_0:192.168.xxx.yyy"
      - "worker_hostname_1:192.168.xxx.zzz"
    ...
...

Web server(nginx) in mininet

After using sudo mn to build a simple network in Mininet, I used nginx to set up a web server on host1.
I ran systemctl start nginx in host1's xterm. But it seems this starts the web server on my localhost, not inside Mininet: I cannot access the web server from host1 or host2 using Firefox within Mininet.
Is there anything wrong with my operation?
The reason you cannot connect to the server on host1 is, as you said, that the server isn't there: it's running on 127.0.0.1 (localhost) of your host machine, not on any of your Mininet hosts.
The way to get around this is by telling nginx to listen on the Mininet host's (local) IP explicitly via the server conf file.
Here's an example that works for me (tested with nginx 1.4.6, Mininet 2.3.0 and Ubuntu 18.04):
from mininet.topo import Topo
from mininet.node import CPULimitedHost
from mininet.link import TCLink
from mininet.net import Mininet
import time

class DumbbellTopo(Topo):
    def build(self, bw=8, delay="10ms", loss=0):
        switch1 = self.addSwitch('switch1')
        switch2 = self.addSwitch('switch2')

        appClient = self.addHost('aClient')
        appServer = self.addHost('aServer')
        crossClient = self.addHost('cClient')
        crossServer = self.addHost('cServer')

        self.addLink(appClient, switch1)
        self.addLink(crossClient, switch1)
        self.addLink(appServer, switch2)
        self.addLink(crossServer, switch2)
        self.addLink(switch1, switch2, bw=bw, delay=delay, loss=loss, max_queue_size=14)

def simulate():
    dumbbell = DumbbellTopo()
    network = Mininet(topo=dumbbell, host=CPULimitedHost, link=TCLink, autoPinCpus=True)
    network.start()

    appClient = network.get('aClient')
    appServer = network.get('aServer')

    wd = str(appServer.cmd("pwd"))[:-2]

    appServer.cmd("echo 'b a n a n a s' > available-fruits.html")
    # Create server config file
    appServer.cmd("echo 'events { } http { server { listen " + appServer.IP() + "; root " + wd + "; } }' > nginx-conf.conf")
    # Tell nginx to use the configuration file we just created
    appServer.cmd("sudo nginx -c " + wd + "/nginx-conf.conf &")
    time.sleep(1)  # Server might need some time to start

    fruits = appClient.cmd("curl http://" + appServer.IP() + "/available-fruits.html")
    print(fruits)

    appServer.cmd("sudo nginx -s stop")
    network.stop()

if __name__ == '__main__':
    simulate()
This way we create the nginx conf file (nginx-conf.conf), then tell nginx to use it for its configuration.
Alternatively, if you want to start nginx from a terminal on a Mininet host, create the conf file and then tell nginx to run with that file, as shown in the code above.
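(For illustration, with a hypothetical server IP of 10.0.0.2 and working directory /home/mininet, the generated nginx-conf.conf is equivalent to:)
events { }
http {
    server {
        listen 10.0.0.2;
        root /home/mininet;
    }
}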

uWSGI and Flask Server Sent Events

I want to run a Flask application on my Raspberry Pi 3. I have already developed the Flask app and it works fine, but on Flask's development server.
I want to use a production server, so I'm using nginx as the web server and uWSGI as the application server on the Pi. The Flask app uses server-sent events (SSE) to get live data from the server. When I run the app using uWSGI, it stalls. I believe it's because I'm using SSE: I had a similar problem on the Flask development server, and there, simply enabling threading solved it. Enabling threading in uWSGI (when running the uWSGI script) doesn't solve the issue, though. HELP!
This is my uWSGI .ini file.
[uwsgi]
base = /home/pi/heap
app = app
module = %(app)
home = %(base)/venv
pythonpath = %(base)
socket = /home/pi/heap/%n.sock
chmod-socket = 666
callable = app
Thank you!
Try running it in HTTP (port) mode instead of socket mode, with processes and threads defined:
[uwsgi]
base = project_path
chdir = project_path
module = your_module_name
callable = your_app_name
enable-threads = true
master = true
processes = 5
threads = 2
http = :5000
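(Background on why threads matter here: each connected SSE client holds its HTTP connection, and therefore a uWSGI worker thread, open indefinitely, so with processes = 5 and threads = 2 at most 10 clients can stream at once. A minimal sketch of the kind of endpoint involved, with a hypothetical route name:)
from flask import Flask, Response
import time

app = Flask(__name__)

@app.route("/stream")  # hypothetical SSE route
def stream():
    def generate():
        # Each connected client keeps this generator (and its worker thread) busy.
        while True:
            yield "data: ping\n\n"  # SSE frames: "data: ..." followed by a blank line
            time.sleep(1)
    return Response(generate(), mimetype="text/event-stream")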
