How to run my Flask server so I can get to it with my domain name? - http

I currently have a very small HTTP server that I would like to deploy to my VM on Google Cloud Platform and then access through my domain name. But my server cannot bind to port 443 because the port is already occupied. How can I change this?
ERROR:
* Serving Flask app 'main'
* Debug mode: off
Address already in use
Port 443 is in use by another program. Either identify and stop that program, or start the server with a different port.
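To identify the process that already owns 443, one option (a sketch assuming the third-party psutil package, installed with pip install psutil, and enough privileges to inspect other processes) is:

import psutil

# Print the PID and name of every process listening on TCP port 443.
for conn in psutil.net_connections(kind="tcp"):
    if conn.laddr and conn.laddr.port == 443 and conn.status == psutil.CONN_LISTEN:
        name = psutil.Process(conn.pid).name() if conn.pid else "unknown"
        print(conn.pid, name)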
SCRIPT:
from flask import Flask, redirect, url_for, request, render_template

app = Flask(__name__)

@app.route("/")  # home page
def home():
    return "Hello! This is the main page <h1>HELLO</h1>"

@app.route("/admin")
def admin():
    return redirect(url_for("home"))

@app.route("/<usr>")
def user(usr):
    return f"<h1>{usr}</h1>"

@app.route('/login', methods=["POST", "GET"])
def login():
    if request.method == "POST":
        user = request.form['id']
        return redirect(url_for("user", usr=user))
    else:
        return 'GET'

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=443)
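If stopping whatever owns 443 isn't an option, a common workaround (a sketch, assuming a reverse proxy or the GCP load balancer terminates HTTPS on 443 and forwards to the app; port 8080 is illustrative) is to bind Flask to an unprivileged port instead:

from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    return "Hello! This is the main page <h1>HELLO</h1>"

if __name__ == "__main__":
    # Ports below 1024 require root (or CAP_NET_BIND_SERVICE) to bind,
    # and 443 is already taken here anyway. Serve on a high port and
    # let the proxy or load balancer own 443 for the domain.
    app.run(host="0.0.0.0", port=8080)

Pointing the domain's A record at the VM's external IP then makes the proxy reachable by name.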

Related

Azure Web App returns Internal Server Error when executing a GET request using FastAPI in Python code

We are trying to execute a GET request from the Azure Web App using our Python code. The request is made to our DevOps repo following the available API. This is just a part (example) of the entire code:
from fastapi import FastAPI
import requests
import base64
import uvicorn

app = FastAPI()

@app.get("/")
async def root():
    return {"Hello": "Pycharm"}

@app.get('/file')
async def sqlcode(p: str = '/ppppppppp',
                  v: str = 'vvvv',
                  r: str = 'rrrrrr',
                  pn: str = 'nnnnnnn'):
    organizational_url = f'https://xxxxxxxx/{pn}/_apis/git/repositories/{r}/items?path={p}&versionDescriptor.version={v}&api-version=6.1-preview.1'
    username = 'username'
    password = 'password'
    basic_authentication = base64.b64encode((f'{username}:{password}').encode('utf-8')).decode('utf-8')
    headers = {
        'Authorization': f'Basic {basic_authentication}',
    }
    response = requests.get(url=organizational_url, headers=headers)
    return {"sqlcode": response.text}

# Code ends for FastAPI with swagger

def print_hi(name):
    print(f'Hi, {name}')  # Press Ctrl+F8 to toggle the breakpoint.

# Press the green button in the gutter to run the script.
if __name__ == '__main__':
    print_hi('PyCharm')
And this is the requirements.txt
Flask==2.0.1
asyncpg==0.21.0
databases==0.4.1
fastapi==0.63.0
gunicorn==20.0.4
pydantic==1.7.3
SQLAlchemy==1.3.22
uvicorn==0.11.5
config==0.3.9
pandas==1.5.3
dask==2023.1.0
azure.storage.blob==12.14.1
pysftp==0.2.9
azure-identity==1.12.0
azure-keyvault-secrets==4.6.0
When we execute the code in PyCharm, it works without any issue. But when we execute the same code from the Web App, we get the Internal Server Error message.
I'm just trying to help the team understand this error; I'm not a cloud engineer. What could be missing: is it something in the code, a particular port to use, or is a permission needed in Azure Web Service?
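One thing worth checking (an assumption on my part, since the App Service logs aren't shown): Azure App Service routes traffic to the port it expects the app to listen on, and a FastAPI app needs an ASGI server actually bound there. A minimal sketch that reads the port from the environment (PORT is the variable App Service commonly injects; 8000 is a local fallback):

import os

import uvicorn
from fastapi import FastAPI

app = FastAPI()

@app.get("/")
async def root():
    return {"Hello": "Pycharm"}

if __name__ == '__main__':
    # Bind to 0.0.0.0 on the platform-provided port; if nothing answers
    # there, requests from outside fail even though the same code runs
    # fine locally in PyCharm.
    uvicorn.run(app, host="0.0.0.0", port=int(os.environ.get("PORT", 8000)))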

In gRPC python, the Service class and the Serve method are always in the same file, why?

For example: the service class (link) and serve() (link). Similarly: service class and serve().
I am new to Python and gRPC. In my project I wrote the serve() method in a different file, importing the service class. The server seems to start, but when I invoke it from client code (Postman), it doesn't work.
Here is my code - ems_validator_service.py contains the service class
and main.py has the serve() method
File: ems_validator_service.py -
from validator.src.grpc import ems_validator_service_pb2
from validator.src.grpc.ems_validator_service_pb2_grpc import EmsValidatorServiceServicer

class EmsValidatorServiceServicer(EmsValidatorServiceServicer):
    def Validate(self, request, context):
        # TODO: logic to validate
        return ems_validator_service_pb2.GetStatusResponse(
            validation_status=ems_validator_service_pb2.VALIDATION_STATUS_IN_PROGRESS)

    def GetStatus(self, request, context):
        # TODO: logic to get actual status
        return ems_validator_service_pb2.GetStatusResponse(
            validation_status=ems_validator_service_pb2.VALIDATION_STATUS_IN_PROGRESS)
File: main.py -
from validator.src.grpc.ems_validator_service_pb2_grpc import (
    EmsValidatorServiceServicer,
    add_EmsValidatorServiceServicer_to_server
)
from concurrent import futures
import grpc

def serve():
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
    add_EmsValidatorServiceServicer_to_server(EmsValidatorServiceServicer(), server)
    server.add_insecure_port('localhost:50051')  # todo change it
    server.start()
    server.wait_for_termination()

if __name__ == "__main__":
    serve()
With the above code I can't invoke the RPC, but if I move the serve() method into ems_validator_service.py and call it from main.py, it works fine. I'm not sure if this is a Python thing or a gRPC thing.
The error I get from client.py -
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/grpc/_channel.py", line 946, in __call__
return _end_unary_response_blocking(state, call, False, None)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/grpc/_channel.py", line 849, in _end_unary_response_blocking
raise _InactiveRpcError(state)
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.UNIMPLEMENTED
details = "Method not implemented!"
debug_error_string = "UNKNOWN:Error received from peer ipv6:%5B::1%5D:50051 {created_time:"2022-10-19T22:10:18.898439-07:00", grpc_status:12, grpc_message:"Method not implemented!"}"
>
As already mentioned, the same client works fine if I move the above serve() method into the service-class file.
These are just simple examples; it's totally fine to call serve() from a different file.
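A likely culprit in the split-file version above is the import in main.py: it takes EmsValidatorServiceServicer from the generated ems_validator_service_pb2_grpc module, i.e. the base class whose method stubs all respond UNIMPLEMENTED, rather than the subclass defined in ems_validator_service.py. That matches the StatusCode.UNIMPLEMENTED the client sees. A corrected main.py might look like this (the import path for the service module is assumed from the question):

from concurrent import futures

import grpc

from validator.src.grpc.ems_validator_service_pb2_grpc import (
    add_EmsValidatorServiceServicer_to_server,
)
# Import the concrete implementation, not the generated base class.
from ems_validator_service import EmsValidatorServiceServicer

def serve():
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
    # Register the subclass that actually implements Validate/GetStatus.
    add_EmsValidatorServiceServicer_to_server(EmsValidatorServiceServicer(), server)
    server.add_insecure_port('localhost:50051')
    server.start()
    server.wait_for_termination()

if __name__ == "__main__":
    serve()

Moving serve() into ems_validator_service.py "fixed" it only because there the name EmsValidatorServiceServicer refers to the subclass.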

How to set the FastAPI version so HTTP clients can specify the version in the Accept header?

I am working on a project that requires me to version FastAPI endpoints. We want to version the endpoints through the HTTP Accept header, e.g. headers={'Accept': 'application/json;version=1.0.1'} or headers={'Accept': 'application/json;version=1.0.2'}. Setting up the API version like this alone does not seem to work:
app = FastAPI(
    version=version,
    title="A title",
    description="Some description.",
)
Does anyone know what else I need to do with this?
Well, maybe putting the version in the path URL could be better; see the sub-apps docs.
from fastapi import FastAPI

app = FastAPI()
v1 = FastAPI()

@v1.get("/app/")
def read_main():
    return {"message": "Hello World from api v1"}

v2 = FastAPI()

@v2.get("/app/")
def read_sub():
    return {"message": "Hello World from api v2"}

app.mount("/api/v1", v1)
app.mount("/api/v2", v2)
You will see the auto docs for each app
localhost:8000/api/v1/docs
localhost:8000/api/v2/docs
But you can always get the headers from the request:
from starlette.requests import Request
from fastapi import FastAPI

app = FastAPI()

@app.post("/hyper_mega_fast_service")
def fast_service(request: Request):
    accept = request.headers.get('Accept')
    value = great_function_to_get_version_from_header(accept)
    if value == '1.0.1':
        "Do something"
    if value == '1.0.2':
        "Do something"
Try fastapi-versioning for FastAPI web applications.
Installation
pip install fastapi-versioning
Examples
from fastapi import FastAPI
from fastapi_versioning import VersionedFastAPI, version

app = FastAPI(title="My App")

@app.get("/greet")
@version(1, 0)
def greet_with_hello():
    return "Hello"

@app.get("/greet")
@version(1, 1)
def greet_with_hi():
    return "Hi"

app = VersionedFastAPI(app)
this will generate two endpoints:
/v1_0/greet
/v1_1/greet
as well as:
/docs
/v1_0/docs
/v1_1/docs
/v1_0/openapi.json
/v1_1/openapi.json
There's also the possibility of adding a set of additional endpoints that redirect to the most recent API version. To do that, set the argument enable_latest to True:
app = VersionedFastAPI(app, enable_latest=True)
this will generate the following additional endpoints:
/latest/greet
/latest/docs
/latest/openapi.json
In this example, /latest endpoints will reflect the same data as /v1.1.
Try it out:
pip install pipenv
pipenv install --dev
pipenv run uvicorn example.annotation.app:app
# pipenv run uvicorn example.folder_name.app:app
Usage without minor version
from fastapi import FastAPI
from fastapi_versioning import VersionedFastAPI, version

app = FastAPI(title='My App')

@app.get('/greet')
@version(1)
def greet():
    return 'Hello'

@app.get('/greet')
@version(2)
def greet():
    return 'Hi'

app = VersionedFastAPI(app,
    version_format='{major}',
    prefix_format='/v{major}')
this will generate two endpoints:
/v1/greet
/v2/greet
as well as:
/docs
/v1/docs
/v2/docs
/v1/openapi.json
/v2/openapi.json
Extra FastAPI constructor arguments
It's important to note that only the title from the original FastAPI app will be provided to the VersionedAPI app. If you have any middleware, event handlers, etc., these arguments will also need to be provided to the VersionedAPI function call, as in the example below:
from fastapi import FastAPI, Request
from fastapi_versioning import VersionedFastAPI, version
from starlette.middleware import Middleware
from starlette.middleware.sessions import SessionMiddleware

app = FastAPI(
    title='My App',
    description='Greet users with a nice message',
    middleware=[
        Middleware(SessionMiddleware, secret_key='mysecretkey')
    ]
)

@app.get('/greet')
@version(1)
def greet(request: Request):
    request.session['last_version_used'] = 1
    return 'Hello'

@app.get('/greet')
@version(2)
def greet(request: Request):
    request.session['last_version_used'] = 2
    return 'Hi'

@app.get('/version')
def last_version(request: Request):
    return f'Your last greeting was sent from version {request.session["last_version_used"]}'

app = VersionedFastAPI(app,
    version_format='{major}',
    prefix_format='/v{major}',
    description='Greet users with a nice message',
    middleware=[
        Middleware(SessionMiddleware, secret_key='mysecretkey')
    ]
)

How to disable "check_hostname" using Requests library and Python 3.8.5?

Using the latest Requests library and Python 3.8.5, I can't seem to disable certificate checking on my API call. I understand the reasons not to disable it, but I'd like this to work.
When I attempt to use verify=True, the servers I connect to throw this error:
(Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1123)')))
When I attempt to use verify=False, I get:
Error making PS request to [<redacted server name>] at URL https://<redacted server name/rest/v2/api_endpoint: Cannot set verify_mode to CERT_NONE when check_hostname is enabled.
I don't know how to also disable "check_hostname" as I haven't seen a way to do that with the requests library (which I plan to keep and use).
My code:
self.ps_server = server
self.ps_base_url = 'https://{}/rest/v2/'.format(self.ps_server)
url = self.ps_base_url + endpoint
response = None
try:
    if req_type == 'POST':
        response = requests.post(url, json=post_data, auth=(self.ps_username, self.ps_password), verify=self.verify, timeout=60)
        return json.loads(response.text)
    elif req_type == 'GET':
        response = requests.get(url, auth=(self.ps_username, self.ps_password), verify=self.verify, timeout=60)
        if response.status_code == 200:
            return json.loads(response.text)
        else:
            logging.error("Error making PS request to [{}] at URL {} [{}]".format(server, url, response.status_code))
            return {'status': 'error', 'trace': '{} - {}'.format(response.text, response.status_code)}
    elif req_type == 'DELETE':
        response = requests.delete(url, auth=(self.ps_username, self.ps_password), verify=self.verify, timeout=60)
        return response.text
    elif req_type == 'PUT':
        response = requests.put(url, json=post_data, auth=(self.ps_username, self.ps_password), verify=self.verify, timeout=60)
        return response.text
except Exception as e:
    logging.error("Error making PS request to [{}] at URL {}: {}".format(server, url, e))
    return {'status': 'error', 'trace': '{}'.format(e)}
Can someone shed some light on how I can disable check_hostname as well, so that I can test this without SSL checking?
If you have pip-system-certs installed, it monkey-patches requests as well. Here's a link to the code: https://gitlab.com/alelec/pip-system-certs/-/blob/master/pip_system_certs/wrapt_requests.py
After digging through the requests and urllib3 source for a while, this is the culprit in pip-system-certs:
ssl_context = ssl.create_default_context()
ssl_context.load_default_certs()
kwargs['ssl_context'] = ssl_context
That dict is used later to grab an ssl_context from a urllib3 connection pool, but the context has .check_hostname set to True.
As far as replacing the utility of the pip-system-certs package goes, I think forking it and making it only monkey-patch pip would be the right way forward. That, or just adding --trusted-host args to any pip install commands.
EDIT:
Here's how it's normally initialized through requests (versions I'm using):
https://github.com/psf/requests/blob/v2.21.0/requests/adapters.py#L163
def init_poolmanager(self, connections, maxsize, block=DEFAULT_POOLBLOCK, **pool_kwargs):
    """Initializes a urllib3 PoolManager.

    This method should not be called from user code, and is only
    exposed for use when subclassing the
    :class:`HTTPAdapter <requests.adapters.HTTPAdapter>`.

    :param connections: The number of urllib3 connection pools to cache.
    :param maxsize: The maximum number of connections to save in the pool.
    :param block: Block when no free connections are available.
    :param pool_kwargs: Extra keyword arguments used to initialize the Pool Manager.
    """
    # save these values for pickling
    self._pool_connections = connections
    self._pool_maxsize = maxsize
    self._pool_block = block

    # NOTE: pool_kwargs doesn't have ssl_context in it
    self.poolmanager = PoolManager(num_pools=connections, maxsize=maxsize,
                                   block=block, strict=True, **pool_kwargs)
And here's how it's monkey-patched:
def init_poolmanager(self, *args, **kwargs):
    import ssl
    ssl_context = ssl.create_default_context()
    ssl_context.load_default_certs()
    kwargs['ssl_context'] = ssl_context
    return super(SslContextHttpAdapter, self).init_poolmanager(*args, **kwargs)
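If the goal is just to make verify=False usable again despite the patched adapter, one workaround for testing (a sketch, not something to ship) is to mount your own adapter whose SSL context disables hostname checking before relaxing the verify mode; that ordering is exactly what the ssl module enforces and what produces the error in the question:

import ssl

import requests
from requests.adapters import HTTPAdapter

class InsecureAdapter(HTTPAdapter):
    """HTTPAdapter with hostname checking and cert verification off."""

    def init_poolmanager(self, *args, **kwargs):
        ctx = ssl.create_default_context()
        # check_hostname must be switched off *before* verify_mode can be
        # set to CERT_NONE; the reverse order raises "Cannot set
        # verify_mode to CERT_NONE when check_hostname is enabled".
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
        kwargs['ssl_context'] = ctx
        return super().init_poolmanager(*args, **kwargs)

session = requests.Session()
session.mount('https://', InsecureAdapter())
# The URL is a placeholder for the redacted server in the question.
response = session.get('https://server.example/rest/v2/api_endpoint', verify=False)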

Qt WebEngine set socks5 proxy

I want to set a SOCKS5 proxy for my Qt WebEngine app. I use PyQt 5.8 and Qt 5.8.
I set up a SOCKS5 server with danted v1.4.1. I tested the server and it worked fine. But when I use it in my app, danted logs this error:
This is my code:
def set_proxy():
    from PyQt5.QtNetwork import QNetworkProxy
    from six.moves.urllib import parse as urlparse

    string_proxy = "socks5://username:password@ip:port"
    urlinfo = urlparse.urlparse(string_proxy)
    proxy = QNetworkProxy()
    if urlinfo.scheme == 'socks5':
        proxy.setType(QNetworkProxy.Socks5Proxy)
    else:
        proxy.setType(QNetworkProxy.NoProxy)
    if urlinfo.hostname != None:
        proxy.setHostName(urlinfo.hostname)
    if urlinfo.port != None:
        proxy.setPort(urlinfo.port)
    if urlinfo.username != None:
        proxy.setUser(urlinfo.username)
    else:
        proxy.setUser('')
    if urlinfo.password != None:
        proxy.setPassword(urlinfo.password)
    else:
        proxy.setPassword('')
    QNetworkProxy.setApplicationProxy(proxy)
Can anyone help me?
Update on 2017/03/29: added the proxyAuthenticationRequired signal.
def set_proxy(string_proxy):
    proxy = QNetworkProxy()
    urlinfo = urlparse.urlparse(string_proxy)
    if urlinfo.scheme == 'socks5':
        proxy.setType(QNetworkProxy.Socks5Proxy)
    elif urlinfo.scheme == 'http':
        proxy.setType(QNetworkProxy.HttpProxy)
    else:
        proxy.setType(QNetworkProxy.NoProxy)
    proxy.setHostName(urlinfo.hostname)
    proxy.setPort(urlinfo.port)
    proxy.setUser(urlinfo.username)
    proxy.setPassword(urlinfo.password)
    QNetworkProxy.setApplicationProxy(proxy)

def handleProxyAuthReq(url, auth, proxyhost):
    auth.setUser(username)
    auth.setPassword(password)

webView = QtWebEngineWidgets.QWebEngineView()
#proxy_string = "http://username:password@ip:port"
proxy_string = "socks5://username:password@ip:port"
set_proxy(proxy_string)
webView.page().proxyAuthenticationRequired.connect(handleProxyAuthReq)
I tested it with my HTTP proxy and it worked. But when I use the SOCKS5 proxy, the proxyAuthenticationRequired signal is not emitted.
QtWebEngine does not handle the username/password information from QNetworkProxy:
All other proxy settings such as QNetworkProxy::rawHeader(), QNetworkProxy::user(), or QNetworkProxy::password() are ignored.
You'll need to handle proxyAuthenticationRequired and supply the credentials there.
Update on 2017/03/30: it looks like Chromium does not support authentication with SOCKS proxies.
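Given that, one workaround (a sketch; it assumes an unauthenticated SOCKS5 endpoint, e.g. a local ssh -D tunnel, since Chromium ignores SOCKS credentials) is to hand the proxy straight to the embedded Chromium via the QTWEBENGINE_CHROMIUM_FLAGS environment variable before the application starts; the host and port are placeholders:

import os
import sys

# Must be set before QtWebEngine initializes Chromium.
os.environ["QTWEBENGINE_CHROMIUM_FLAGS"] = "--proxy-server=socks5://127.0.0.1:1080"

from PyQt5 import QtCore, QtWidgets, QtWebEngineWidgets

app = QtWidgets.QApplication(sys.argv)
view = QtWebEngineWidgets.QWebEngineView()
view.load(QtCore.QUrl("https://example.com"))
view.show()
sys.exit(app.exec_())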
