Hi, I created an EC2 instance and attached an Elastic IP to it.
On the instance itself I installed a server that runs at this URL: http://172.17.0.2:5000/api/v1.0/,
and the Elastic IP is at this address (for example, 54.193.250.150).
I have now installed nginx and am trying to route requests from my PC to the server sitting in EC2,
so I tried to create a sites-available file, but this is not working for me. I would be very glad to get help with this issue.
server {
    listen 80;
    server_name 54.193.250.150;

    location api/v1.0
    {
        proxy_pass http://172.17.0.2:5000/api/v1.0;
    }
}
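For reference, a minimal corrected sketch (assuming nginx really can reach the backend at 172.17.0.2:5000 — that looks like a Docker bridge address, so this only works if the container runs on the same host): a `location` prefix must start with `/`, and keeping the trailing slashes on `location` and `proxy_pass` consistent makes the forwarded path predictable.

```nginx
server {
    listen 80;
    server_name 54.193.250.150;

    # a location prefix must begin with "/"; without it, nginx -t fails
    location /api/v1.0/ {
        # trailing slash on both sides: /api/v1.0/foo is forwarded as /api/v1.0/foo
        proxy_pass http://172.17.0.2:5000/api/v1.0/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

If the server runs inside a container, publishing its port (e.g. `-p 5000:5000`) and proxying to `127.0.0.1:5000` is usually more reliable than targeting the container's internal bridge IP, which can change between restarts.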
I have an NGINX configuration that forwards HTTP to HTTPS. It works fine on my local system but fails on an AWS EC2 instance.
Here is the only configuration I added to NGINX; the rest is left intact:
server {
    listen 58080;
    server_name localhost;

    location / {
        proxy_pass https://acme.com;
    }
}
When serving the content (a single-page UI app) from my laptop, everything works as expected. When trying to serve the content from AWS, I keep seeing errors in the dev-tools console as below:
GET https://ec2-*****.compute.amazonaws.com:58080/assets/index.5c112cd8.css net::ERR_SSL_PROTOCOL_ERROR
GET https://ec2-*****.compute.amazonaws.com:58080/assets/index.5660fafb.js net::ERR_SSL_PROTOCOL_ERROR
GET https://ec2-*****.compute.amazonaws.com:58080/favicon.ico net::ERR_SSL_PROTOCOL_ERROR
Something rewrites the protocol from HTTP to HTTPS. If I copy and paste one of these links and change the protocol back to HTTP, it works for that link specifically.
I tried adding additional directives such as proxy_set_header Host $proxy_host; and/or changing server_name to _ or to the specific host of my EC2 instance, but I still got the same result.
Environment on both systems:
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 22.04.1 LTS
Release: 22.04
Codename: jammy
$ nginx -v
nginx version: nginx/1.22.0
I will definitely add a self-signed certificate to NGINX next, which I believe will resolve my issues, but I am curious why it works on localhost while on AWS it does not.
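One plausible cause (an assumption, not confirmed from the question): the upstream at acme.com may send a `Content-Security-Policy: upgrade-insecure-requests` or `Strict-Transport-Security` header, which the proxy forwards by default. Browsers treat `http://localhost` as a potentially trustworthy origin, so no upgrade happens there; on the plain-HTTP EC2 origin, the browser upgrades every asset URL to HTTPS and then fails with ERR_SSL_PROTOCOL_ERROR on the non-TLS port. A sketch of how to test that theory:

```nginx
server {
    listen 58080;
    server_name localhost;

    location / {
        proxy_pass https://acme.com;
        # Hypothetical fix: stop forwarding upstream security headers that
        # instruct the browser to upgrade subresource requests to HTTPS.
        proxy_hide_header Content-Security-Policy;
        proxy_hide_header Strict-Transport-Security;
    }
}
```

Inspecting the response headers with `curl -sI http://<ec2-host>:58080/` from both environments would confirm or rule this out.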
Good day!
The question may seem strange, but I am trying to understand the situation and whether it can be implemented. I will be grateful for your answer and your time.
Components:
a virtual machine running nginx as a reverse proxy
an ingress load balancer listening on ports 80 and 443 in a k8s cluster
a DNS entry for *.k8s.test.lab
Can I use the following construction:
turn the VM running nginx into the load balancer directly, bypassing the ingress load balancer in the k8s cluster?
Can I then get content from site.test.lab? If so, where do I make changes for this?
After applying the following configuration on the external nginx, I get a 502 Bad Gateway error.
[block scheme]
upstream loadbalancer {
    server srv-k8s-worker0.test.lab:80;
    server srv-k8s-worker1.test.lab:80;
    server srv-k8s-worker2.test.lab:80;
}

server {
    listen 80;
    server_name site.test.lab; # name in browser

    location / {
        proxy_pass http://loadbalancer;
    }
}
Also, for verification, I created DNS records for:
srv-k8s-worker0.test.lab
srv-k8s-worker1.test.lab
srv-k8s-worker2.test.lab
In general, I need an answer about whether this configuration is possible at all and whether it makes sense. NodePort is not an option.
The only variant I managed to get working, which only changes the domain name, is:
server {
    listen 80;
    server_name site.test.lab; # name in browser

    location / {
        proxy_pass http://site.k8s.test.lab;
    }
}
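A likely reason the upstream variant fails while this one works (an assumption based on how ingress controllers route): a k8s ingress routes requests by the HTTP Host header. With `proxy_pass http://site.k8s.test.lab`, nginx sends `Host: site.k8s.test.lab` by default, which matches an ingress rule under `*.k8s.test.lab`; with `proxy_pass http://loadbalancer`, it sends `Host: loadbalancer`, which matches nothing. A sketch combining the upstream block with an explicit Host header:

```nginx
upstream loadbalancer {
    server srv-k8s-worker0.test.lab:80;
    server srv-k8s-worker1.test.lab:80;
    server srv-k8s-worker2.test.lab:80;
}

server {
    listen 80;
    server_name site.test.lab;

    location / {
        proxy_pass http://loadbalancer;
        # By default nginx would send "Host: loadbalancer" (the upstream name),
        # which no ingress rule matches; send the name the ingress expects.
        proxy_set_header Host site.k8s.test.lab;
    }
}
```

This still assumes something on the worker nodes actually listens on port 80 (e.g. an ingress controller using hostNetwork or a hostPort); otherwise the connection is refused and nginx returns 502 regardless of headers.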
Thanks!
I'm using the Gitea version control system in a Docker environment. The Gitea image used is the rootless type.
The HTTP port mapping is 8084:3000 and the SSH port mapping is 2224:2222.
I generated the keys on my Linux host and added the generated public key to my Gitea account.
1.Test environment
Later, I created the SSH config file (nano /home/campos/.ssh/config):
Host localhost
    HostName localhost
    User git
    Port 2224
    IdentityFile ~/.ssh/id_rsa
After finishing the settings, I created the repository myRepo and cloned it.
To perform the clone, I changed the URL from ssh://git@localhost:2224/campos/myRepo.git to git@localhost:/campos/myRepo.git.
To clone the repository I typed: git clone git@localhost:/campos/myRepo.git
This worked perfectly!
2.Production environment
However, when defining a reverse proxy and a domain name, it was not possible to clone the repository.
Before performing the clone, I changed the ssh configuration file:
Host gitea.domain.com
    HostName gitea.domain.com
    User git
    Port 2224
    IdentityFile ~/.ssh/id_rsa
Then I tried to clone the repository again:
git clone git@gitea.domain.com:/campos/myRepo.git
A connection refused message was shown:
Cloning into 'myRepo'...
ssh: connect to host gitea.domain.com port 2224: Connection refused
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
I understand the message appears because, by default, the proxy doesn't handle SSH requests.
Searching a bit, some links say to use the "stream" module in Nginx.
But I still don't understand how to do this configuration. I need to keep accessing the proxy server itself on port 22, and redirect port 2224 on the proxy to port 2224 on the Docker host.
The gitea.conf configuration file I use is as follows:
server {
    listen 443 ssl http2;
    server_name gitea.domain.com;

    # SSL
    ssl_certificate /etc/nginx/ssl/mycert_bundle.crt;
    ssl_certificate_key /etc/nginx/ssl/mycert.key;

    # logging
    access_log /var/log/nginx/gitea.access.log;
    error_log /var/log/nginx/gitea.error.log warn;

    # reverse proxy
    location / {
        proxy_pass http://192.168.10.2:8084;
        include myconfig/proxy.conf;
    }
}

# HTTP redirect
server {
    listen 80;
    server_name gitea.domain.com;
    return 301 https://gitea.domain.com$request_uri;
}
3. Redirection in Nginx
I spent several hours trying to understand how to configure Nginx's "stream" feature. Below is what I did.
At the end of the nginx.conf file I added:
stream {
    include /etc/nginx/conf.d/stream;
}
In the stream file in conf.d, I added the content below:
upstream ssh-gitea {
    server 10.0.200.39:2224;
}

server {
    listen 2224;
    proxy_pass ssh-gitea;
}
I tested the Nginx configuration and restarted the service:
nginx -t && systemctl restart nginx.service
I checked that ports 80, 443, 22 and 2224 were open on the proxy server:
ss -tulpn
This configuration made it possible to perform the ssh clone of a repository with a domain name.
4. Clone with ssh correctly
After all the settings I made, I understood that it is possible to use the original URL ssh://git@gitea.domain.com:2224/campos/myRepo.git in the clone.
When typing git clone ssh://git@gitea.domain.com:2224/campos/myRepo.git, it is not necessary to define the config file in SSH.
This link helped me:
https://discourse.gitea.io/t/password-is-required-to-clone-repository-using-ssh/5006/2
In the sections above I explained my solution, so I'm marking this question as solved.
I would like to set up nginx to route requests to different servers depending on which domain they point to.
The nginx server environment is below.
CentOS Linux release 7.3.1611 (Core)
nginx 1.11.8
* configured with the --with-stream parameter; built and installed from source.
My image is:
server1.testdomain.com ssh request -> (global IP) nginx server -> (local IP) 192.168.1.101 server
server2.testdomain.com ssh request -> (global IP) nginx server -> (local IP) 192.168.1.102 server
The nginx server is the same machine with the same global IP in both cases.
nginx.conf is ...
stream {
    error_log /usr/local/nginx/logs/stream.log info;

    upstream server1 {
        server 192.168.1.101:22;
    }
    upstream server2 {
        server 192.168.1.102:22;
    }

    server {
        listen 22 server1.testdomain.com;
        proxy_pass server1;
    }
    server {
        listen 22 server2.testdomain.com;
        proxy_pass server2;
    }
}
But this error occurred:
nginx: [emerg] the invalid "server1.testdomain.com" parameter in ...
It seems impossible to write something like listen 22 server1.testdomain.com.
I also tried writing "server_name" inside the "server" block:
nginx: [emerg] "server_name" directive is not allowed here in ...
so "server_name" is not permitted inside a stream "server" block.
How do I write the config file to route requests for different domains to different servers?
If you have any ideas or information, could you share them?
It's not possible with nginx's stream module, because it is an L4 (TCP) balancer and name-based routing would need application-layer information.
In fact, it's not possible at all, because the SSH negotiation does not include the destination host name (unlike TLS, which exposes it via SNI).
You can do what you want only by using two different IPs or two different ports. In both cases nginx can forward the connection, but iptables is a much better fit for this.
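As a sketch of the port-based variant (the port numbers 2201 and 2202 are arbitrary choices for illustration), the stream module can forward each listening port to a different backend:

```nginx
stream {
    # one listening port per backend, since SSH carries no hostname to route on
    server {
        listen 2201;
        proxy_pass 192.168.1.101:22;
    }
    server {
        listen 2202;
        proxy_pass 192.168.1.102:22;
    }
}
```

Clients would then connect with `ssh -p 2201 user@<global IP>` to reach server1 and `ssh -p 2202 user@<global IP>` to reach server2, leaving port 22 free for the proxy host itself.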
I am trying to set up my django-socketio with uWSGI and nginx, and when I ran
sudo uwsgi --ini uwsgi.ini
I got an error saying "Address already in use".
I think the problem is that running sudo uwsgi --ini uwsgi.ini creates a SocketIOServer on port 80, and since nginx is also started and listening on port 80, the two conflict. But I don't know how to solve it.
Could someone help?
My wsgi.py file looks like:
import os
PORT = 80
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "project.settings")
import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()
from socketio import SocketIOServer
print 'Listening on port %s and on port 843 (flash policy server)' % PORT
SocketIOServer(('', PORT), application, resource="socket.io").serve_forever()
And my nginx file looks like:
upstream django {
    server unix:///tmp/uwsgi.sock;
}

server {
    listen 80;
    charset utf-8;
    error_log /home/ubuntu/nginxerror.log;

    location /static {
        alias /home/ubuntu/project/static;
    }

    location / {
        uwsgi_pass django;
        include /etc/nginx/uwsgi_params;
    }
}
I was looking at django-socketio recently; I remember I only had socketio listen on port 843.
Any reason why you need to listen on both 80 and 843?
During development, you might open port 843 instead and see if that solves your problem.
Instead of creating a SocketIOServer in your wsgi file, use the built-in runserver_socketio management command and start it on port 9000 using supervisor; then have nginx proxy any requests for /socket.io/ to port 9000.
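A minimal sketch of the nginx side of that split (assuming runserver_socketio listens on 127.0.0.1:9000 and reusing the `django` upstream from the question):

```nginx
server {
    listen 80;

    # regular Django traffic still goes to uWSGI over the unix socket
    location / {
        uwsgi_pass django;
        include /etc/nginx/uwsgi_params;
    }

    # socket.io traffic goes to the runserver_socketio process instead,
    # with the headers needed for WebSocket upgrades to pass through
    location /socket.io/ {
        proxy_pass http://127.0.0.1:9000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```

This removes the port-80 conflict entirely: nginx owns port 80, uWSGI owns the socket, and the socketio server owns port 9000.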