I have a web app developed in VisualAge Smalltalk that uses ABTWSAC (Web Connect) to do CGI handling.
In Apache, I simply add AddHandler cgi-script .exe in the mime module and Options -Indexes FollowSymLinks ExecCGI in the Directory block.
(There is also an ISAPI handler that works in IIS.)
How on earth do you do this in nginx? Nginx seems to always want a running service on a port or a 'unix' socket (which is clearly not supported on Windows).
All my googling shows that people assume CGI in nginx means PHP. None of the examples or explanations tell me how to do what I specifically want to do.
As far as I know, nginx does not have native CGI support. It supports "only" FastCGI.
In my eyes you have four options:
1) Change from ABTWSAC (Web Connect) to Seaside, then use Seaside with VisualAge Smalltalk. I would go with this guide.
Copied from the link for later reference:
Our Bare Bones Nginx FastCGI Configuration
worker_processes 1;

events
{
    worker_connections 1024;
}

http
{
    include mime.types;
    default_type application/octet-stream;

    upstream seaside
    {
        server localhost:9001;
        server localhost:9002;
        server localhost:9003;
    }

    server
    {
        root /var/www/glass/;

        location /
        {
            error_page 403 404 = @seaside;
        }

        location @seaside
        {
            include fastcgi_params;
            fastcgi_pass seaside;
        }
    }
}
2) Reverse proxy to Seaside (again requiring a switch away from ABTWSAC (Web Connect)); for more, see this link.
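A rough sketch of what such a reverse proxy could look like (inside the http context), assuming a Seaside adaptor listening on localhost:8080; the upstream name and port are assumptions, adjust them to your image:
upstream seaside_adaptor {
    server 127.0.0.1:8080;  # assumed Seaside adaptor port
}

server {
    listen 80;

    location / {
        proxy_pass http://seaside_adaptor;
        proxy_set_header Host $host;  # hand the original Host header to the adaptor
    }
}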
3) Install Apache or lighttpd on the same server, on a different port than nginx, and have nginx proxy the cgi-bin folder to it. I know it kind of defeats the purpose of having only nginx, but it is also a possible solution, so I'm writing it here.
With Apache/lighttpd listening on port 8888, you can add to your nginx configuration:
location /cgi-bin {
    proxy_pass http://127.0.0.1:8888;
}
4) As you already suggested, run a web server with native CGI support, like the Apache or lighttpd you mentioned.
Dusty,
If I remember correctly, you can also use Web Connect on top of SST, which basically is just an in-image HTTP server.
So your web server (nginx) only needs to act as an HTTP (reverse) proxy. It is not faster than FastCGI, but it requires only minimal changes to your Web Connect setup in the image startup procedure...
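To illustrate, a minimal sketch of such a proxy block (it goes inside the http context), assuming the in-image SST HTTP server listens on port 8080 on the same machine; the port is an assumption, use whatever your image is configured for:
server {
    listen 80;

    location / {
        # forward everything to the in-image HTTP server (assumed port 8080)
        proxy_pass http://127.0.0.1:8080;
        # pass the original host and client address on to the image
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}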
I have a server running an Nginx reverse proxy.
We have our application running on another server, which is served through this Nginx proxy. Below is the configuration I have used, and it's working fine.
location / {
    rewrite ^/(.*) /$1 break;
    proxy_pass http://10.0.0.121:8000;
}
I need to download a PDF file that lives on the application machine (10.0.0.121) at /home/ubuntu/app/pdf/data-2021-03-25.pdf.
How could I make the file on the application machine downloadable through the proxy server? Please help.
Thanks in advance.
I would simply install another nginx instance on 10.0.0.121 and configure it like this. NOT PRODUCTION-READY!
server {
    listen 8080;
    server_name ...;
    root /home/ubuntu/app/pdf;

    location = /data-2021-03-25.pdf {
        try_files $uri $uri/ =404;
    }
}

server {
    listen 8090;

    location / {
        proxy_pass http://localhost:8080;
    }
}
Not tested, but this server will handle the request and serve the file. Then you could just use proxy_pass on the other server to proxy the request.
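For example, a minimal sketch of the matching block on the proxy server, reusing 10.0.0.121 and port 8090 from above; the exact-match location exposes only that one file:
location = /data-2021-03-25.pdf {
    # forward only this path to the helper instance on the application machine
    proxy_pass http://10.0.0.121:8090;
}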
But besides this option, you can use Python, Perl, PHP, Java, Node.js, assembly or whatever programming language you want to open an HTTP port and serve the file on an incoming request. It's really your choice.
Just make sure, if you're going for the proxy solution, that you sanitize the requests on your proxy. For example, with a small change in the setup above you could cheat and fetch other files from your /home/ubuntu/app directory by sending a request like curl -v localhost:8090/pdf/../other/file. So make sure you use the root /home/ubuntu/app/pdf/ directive and set a location matching the PDF file on the proxy server as well.
That worked in my demo app.
I have the following configuration that works for only 1 page:
location /mypage.html/ {
    proxy_pass http://${remote_server}/;
}
When I try to navigate to other pages on the remote server, I get "page not found".
Is there any way to keep the reverse proxy open for all pages on the remote server?
This will match everything (technically, everything that starts with /) and route the request to your remote server:
location / {
    proxy_pass http://${remote_server}/;
}
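If you still want that one page handled by its own block, both can coexist; here is a sketch using the same ${remote_server} placeholder. nginx picks the longest matching prefix, so the more specific location still wins for that path:
# catch-all: everything else goes to the remote server
location / {
    proxy_pass http://${remote_server}/;
}

# longest prefix match wins, so this block still handles /mypage.html/ itself
location /mypage.html/ {
    proxy_pass http://${remote_server}/;
}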
I have several IIS web servers, each hosting multiple web applications.
They each have a public certificate.
Every IIS server has a unique IP.
All IIS servers are placed in the same DMZ.
I have set up an nginx system in another DMZ.
My goal is to have nginx handle all requests to the IIS servers from the Internet and JUST pass through all the SSL and certificate checking to IIS, so it works as it did before nginx. I don't want nginx to break open the connections or offload the certificates, etc.
Before I try to wrestle with the nginx reverse proxy to get this done (since I'm not very familiar with nginx), my question is: is this possible at all?
Believe me, I've googled time and time again and could not find something that answers my question(s).
Or maybe I'm too dumb to google correctly. I've searched for passthrough, reverse proxy, offloading.
So far I've gathered that nginx probably needs some extra modules. Since I have an apt-get installation, I don't even know how to add them.
Never mind, I found the solution:
Issue:
Several webservers with various applications on each are running behind a firewall and responding only on port 443.
The webservers have a wildcard certificate; they are IIS webservers (whoooho, very brave) and each has a public IP address.
It is requested that the webservers no longer be exposed to the Internet and be moved to a DMZ.
Since IPv4 addresses are scarce these days, it is not possible to get more IP addresses.
Nginx should only pass through the requests: no certificate break, decrypt or re-encrypt between webserver and reverse proxy whatsoever.
Solution:
All webservers should be moved to an internal DMZ.
A single nginx reverse proxy should handle all requests based on the webservers' DNS entries and map them. This removes the need for public IPv4 addresses.
All webservers get a private IP.
A wildcard certificate is just fine to handle all aliases for DNS forwarding.
Steps to be done:
1. A single nginx RP should be placed in the external DMZ.
2. Configure nginx:
- Install nginx on a fully patched Debian with apt-get install nginx. At this point you'll get nginx version 1.14. Of course, you may compile it yourself too.
3. If you have installed nginx the apt-get way, it comes with the modules you will need later: ngx_stream_ssl_preread, ngx_stream_map, and stream. Don't worry, they are already in the package. You can check with nginx -V.
4. External DNS configuration:
- All DNS requests from the Internet should point to nginx, e.g.:
  webserver1.domain.com --> nginx
  webserver2.domain.com --> nginx
  webserver3.domain.com --> nginx
5. Configure the nginx reverse proxy:
cd to /etc/nginx/modules-enabled
vi a filename of your choice (e.g. passtru.conf; note that only *.conf files in this directory are picked up)
Content of this file:
stream {
    map $ssl_preread_server_name $name {
        webserver01.domain.com webserver01_backend;
        webserver02.domain.com webserver02_backend;
    }

    upstream webserver01_backend {
        server 192.168.0.1:443; # or DNS name
    }

    upstream webserver02_backend {
        server 192.168.0.2:443; # or DNS name
    }

    log_format basic '$remote_addr [$time_local] '
                     '$protocol $status $bytes_sent $bytes_received '
                     '$session_time "$upstream_addr" '
                     '"$upstream_bytes_sent" "$upstream_bytes_received" "$upstream_connect_time"';

    access_log /var/log/nginx/access.log basic;
    error_log /var/log/nginx/error.log;

    server {
        listen 443;
        proxy_pass $name; # pass all requests to the upstream selected via $name
        ssl_preread on;
    }
}
6. Unlink the default virtual webserver
rm /etc/nginx/sites-enabled/default
7. Redirect all HTTP traffic to HTTPS:
Create a file: vi /etc/nginx/conf.d/redirect.conf
and add the following code:
server {
    listen 80;
    return 301 https://$host$request_uri;
}
Test with nginx -t
Reload with systemctl reload nginx
Open a browser and check /var/log/nginx/access.log while calling the webservers.
Finish
I've read that it's not necessary to use sites-enabled, and I've even seen it suggested not to use it.
In any case, its merit is not part of the question (so please consider that discussion off topic).
What I'm trying to do is set up an absolutely barebones basic nginx.conf file which does some super basic use case stuff: various forms of redirection.
By my understanding, this conf should be enough:
http {
    # default server
    server {
        root /var/www/html/production-site;

        # reverse proxy for external blog, makes example.com/blog display the blog. helps with SEO.
        location /blog/ {
            proxy_pass https://example.some-external-blog.com/;
        }
    }

    # dev server
    server {
        server_name dev.example.com;
        root /var/www/html/dev-site;
    }
}
Unfortunately, my example doesn't work. The proxy bit works, but subdomains don't seem to. I don't honestly believe that server_name does anything at this point.
So, how does one write a simple (no extras) nginx.conf file which will exemplify these super trivial functionalities (subdomains and reverse proxies)?
I tried your config on my sandbox VM. nginx refuses to start, and when I run the nginx -t command (which is always a good idea after a significant configuration change), it says:
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: [emerg] no "events" section in configuration
nginx: configuration file /etc/nginx/nginx.conf test failed
So I've added an events {} line to the config. After that nginx successfully starts and everything works as expected.
Another thing that I wouldn't skip is including the mime.types file. So the final minimal configuration looks as follows:
events {}

http {
    include mime.types;

    # default server
    server {
        root /var/www/html/production-site;

        # reverse proxy for external blog, makes example.com/blog display the blog. helps with SEO.
        location /blog/ {
            proxy_pass https://example.some-external-blog.com/;
        }
    }

    # dev server
    server {
        server_name dev.example.com;
        root /var/www/html/dev-site;
    }
}
Is there an equivalent of Apache's ProxyRemote directive for NginX?
The scenario is: I am behind a corporate proxy and I want to set up proxy passes for various services with NginX. I would do it in Apache with the following:
ProxyPass /localStackOverflow/ https://stackoverflow.com/
ProxyPassReverse /localStackOverflow/ https://stackoverflow.com/
ProxyRemote https://stackoverflow.com/ http://(my corporate proxy IP)
I know I need the proxy_pass directive in NginX but can't find what I would use for the ProxyRemote.
Thanks
Not sure how @tacos' response can work (possibly something I'm missing), but the only way I could sort of get this to work was by rewriting the URL and passing it on to the corporate proxy. This is shown below:
http {
    server {
        listen 80;

        location / {
            rewrite ^(.*)$ "http://www.externalsite.com$1" break;
            proxy_pass http://corporate-proxy.mycorp.com:8080;
        }
    }
}
This works, but it does rewrite the URL; I'm not sure if that matters for the original use case.
The servers you proxy behind an nginx front-end web server are referred to as upstream servers. You will want to refer to the documentation for the HttpUpstreamModule. It's very similar to what you are familiar with. If you don't need load balancing, you just set up the one upstream server in the configuration and it will serve your purpose.
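As a rough sketch of what that can look like (the upstream name and backend address are placeholders, not taken from the question): an upstream block with a single server, referenced from proxy_pass. The trailing slash in proxy_pass replaces the matched prefix, much like the Apache ProxyPass mapping:
http {
    # a single upstream server; add more "server" lines here if you ever need load balancing
    upstream remote_backend {
        server 10.0.0.5:8080;  # placeholder backend address
    }

    server {
        listen 80;

        location /localStackOverflow/ {
            # the URI part ("/") replaces the matched prefix, like ProxyPass does
            proxy_pass http://remote_backend/;
        }
    }
}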