My program uploads a picture to an FTP server, and I need to get the HTTP address for the picture. How can I do this so that it is dynamic and independent of a specific server?
There is nothing whatsoever that says the file has an HTTP address at all, and if it does, that address is totally under the control of the server configuration. There is no defined mapping.
There is no fixed rule that derives an HTTP address from an FTP address, and this has nothing to do with Java.
It's purely a configuration problem, whatever language you use: when you upload a file to an FTP server, you have to know whether that file will be reachable through an HTTP server and at what address.
Paths on FTP servers and HTTP servers do not correlate. The only solution I can imagine is:
Know the "server root path" for both HTTP and FTP server
Know the path relative to this root
Combine these two to the full path
Nevertheless there is no guarantee that it will work in every case. It relies on the fact that you can establish a manual mapping of the server roots (see EJP's answer).
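A minimal sketch of that combination in Python, assuming you have established both roots by hand (FTP_ROOT and HTTP_ROOT here are illustrative values, not something that can be discovered automatically):

```python
from urllib.parse import quote

# Site-specific configuration you must establish manually (see EJP's answer):
FTP_ROOT = "/var/ftp/uploads"                  # hypothetical FTP root directory
HTTP_ROOT = "http://www.example.com/uploads"   # hypothetical HTTP root URL

def ftp_path_to_http_url(ftp_path: str) -> str:
    """Translate an absolute FTP path into the HTTP URL it is served at."""
    if not ftp_path.startswith(FTP_ROOT):
        raise ValueError("path is outside the mapped FTP root")
    relative = ftp_path[len(FTP_ROOT):].lstrip("/")
    return f"{HTTP_ROOT}/{quote(relative)}"

print(ftp_path_to_http_url("/var/ftp/uploads/pics/cat.jpg"))
# -> http://www.example.com/uploads/pics/cat.jpg
```

Again, this only works if such a mapping actually exists in your servers' configuration.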
Let me open with the question that I really want answered: I want the URL at which outside users can access a particular part of my application.
In my server's setup, we're using Nginx as a reverse proxy, so my app is configured to listen on port 9000. But I can't point users at this, because they can't access that port. Users can access port 8080. But this is part of my system configuration and could (I think) change. It also changes from development to staging to production. So I would like to avoid hard-coding it if possible.
So then my question: can I somehow dynamically determine the "outermost" port at which an incoming request was received? Possibly by passing a header down from Nginx? I'm thinking of something like X-Forwarded-For, except that I want to know what URL the client contacted to reach me (the server), not what IP address the client is contacting the server from. Is this possible?
The $server_port variable holds the port the client connected to.
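If the application behind the proxy needs that value, a common pattern is to forward it explicitly as a request header. A sketch, assuming the port numbers from the question (the X-Forwarded-* header names are conventional choices, not anything nginx mandates):

```nginx
server {
    listen 8080;                          # the "outermost" port users hit
    location / {
        proxy_pass http://127.0.0.1:9000; # your app behind the proxy
        # Forward the port and host the client actually connected to,
        # so the app can build externally reachable URLs dynamically.
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Host $host;
    }
}
```

Your app can then read X-Forwarded-Port when constructing URLs instead of hard-coding 8080.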
If I have an entry in my hosts file, and the same hostname also appears in a server block of the nginx configuration file, which one is applied first, and what does each one do specifically? Could someone tell me?
The hosts file is used by your OS to resolve hostnames to IPs and is usually evaluated first (this can be customized, at least on Unix-based OSes). If you tell an application to look for some host, e.g. www.example.com, it looks up the name in the hosts file and uses the IP to connect to that host. If the hostname can't be found in the file, it will usually ask the configured DNS servers for it. See Hosts File and DNS for more info.
The hostname in the server block, on the other hand, is used by nginx to determine the appropriate action to take. nginx evaluates the Host header of the request and tries to match it against the values configured in the server_name directive of each block. See Server names and How nginx processes a request.
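A sketch to make the division of labor concrete (all names illustrative): the hosts file only influences which IP the client connects to; nginx only looks at the Host header it receives and never consults the hosts file for this decision.

```nginx
# Client side, /etc/hosts (OS-level name resolution, not read by nginx):
#   192.0.2.10  www.example.com

server {
    listen 80;
    server_name www.example.com;   # matched against the request's Host header
    root /var/www/example;
}
server {
    listen 80 default_server;      # handles requests whose Host matches nothing
    return 444;                    # nginx-specific: close the connection
}
```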
What I am trying to achieve is a central webmail client that I can use in an ISP environment but that has the capability to connect to multiple mail servers.
I have now been looking at Perdition, NGINX and Dovecot.
But most of the articles have not been updated in a very long time.
The one that I am really looking at is the NGINX IMAP proxy, as it can do almost everything I require.
http://wiki.nginx.org/ImapAuthenticateWithEmbeddedPerlScript
But firstly, the issue I have is that you can no longer compile NGINX from source with those flags.
And secondly, the Git repo for this project, https://github.com/falcacibar/nginx_auth_imap_perl, does not give detailed information about the updated project.
So all I am trying to achieve is one webmail server that can connect to any one of my mail servers, where the mailbox location resides in a database. But the location is a hostname, not an IP.
You can point Nginx's auth_http at any HTTP URL you set up.
You don't need an embedded Perl script specifically.
See http://nginx.org/en/docs/mail/ngx_mail_auth_http_module.html to get an idea of the header-based protocol Nginx uses.
You can implement the protocol described there in any language; a CGI script under Apache, if you like.
You do the auth and database query and return the appropriate backend servers in this script.
(Personally, I use a python + WSGI server setup.)
Say you set up your script under Apache at http://localhost:9000/cgi-bin/nginx_auth.py
In your Nginx config, you use:
auth_http http://localhost:9000/cgi-bin/nginx_auth.py
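For illustration, here is a sketch of such an auth script as a Python WSGI app. The Auth-* header names are the real auth_http protocol from the docs linked above; backend_for and check_password are hypothetical stand-ins for your database lookup. Note that Auth-Server must be an IP address, which is exactly where you resolve the hostname stored in your database:

```python
import socket

def backend_for(user):
    # Hypothetical stand-in for your real database query mapping a
    # username to the hostname of the mail server holding the mailbox.
    return {"alice": "mail1.example.com"}.get(user)

def check_password(user, password):
    # Hypothetical: validate the credentials against your user database.
    return bool(user and password)

def application(environ, start_response):
    # nginx passes the credentials as Auth-* request headers.
    user = environ.get("HTTP_AUTH_USER", "")
    password = environ.get("HTTP_AUTH_PASS", "")
    protocol = environ.get("HTTP_AUTH_PROTOCOL", "imap")  # imap, pop3 or smtp

    host = backend_for(user)
    if host is None or not check_password(user, password):
        # Failures are still HTTP 200; the verdict lives in Auth-Status.
        start_response("200 OK", [("Auth-Status", "Invalid login or password"),
                                  ("Auth-Wait", "3")])
        return [b""]

    # Auth-Server must be an IP address, so resolve the stored hostname here.
    ip = socket.gethostbyname(host)
    port = {"imap": 143, "pop3": 110, "smtp": 25}[protocol]
    start_response("200 OK", [("Auth-Status", "OK"),
                              ("Auth-Server", ip),
                              ("Auth-Port", str(port))])
    return [b""]
```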
I am trying to make an HTTP request from a micro-controller. The request is successful when I make it to google.com using its IP (173.194.33.104), but it fails when I use my server's IP.
When I put the IP address in the browser, it shows me a message like this:
"Great Success ! Apache is working on your cPanel® and WHM™ Server" and some more info about the Apache server. I get my IP from the terminal (ping www.xxxxxx.com).
Also, if I put my IP with the username in the browser, I see my pages on my server (xx.xx.xx.xx/~aymanj/).
I want to make the HTTP request go directly to my pages on the server.
How can I do that?
Let's assume you are using a browser (like IE, Firefox or Chrome) and that we aren't talking about using network functions from the Arduino. When you put a URL or an IP address in the browser's address bar and hit enter, the browser parses that string to determine an address. Then, based on your network settings, it attempts to make a network connection to that address (typically over port 80). Once it connects, the next step in the HTTP protocol is to request a single page over that connection. That page is everything after the address. If no page is specified after the address, then it's just the default "/", or default page.

Each web server (Apache, IIS, etc.) has options to set up different defaults. Typically that's something like index.html or default.aspx located in the root directory of wherever the web server lives. Extending that, a web server can map other directories to other paths. In your case, someone mapped a directory "/~aymanj" to your username. In that directory, there is probably a file named something like "index.html". When you go to the address without a directory or path, you are requesting the web server root, and apparently no one has set a default page for it.

Hope this helps you get started.
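Since you are ultimately doing this from a micro-controller, it may help to see the request the browser sends spelled out as raw bytes. A sketch in Python (the domain is a placeholder and 203.0.113.5 stands in for your server's IP; the path /~aymanj/ is from the question):

```python
import socket

# A minimal raw HTTP/1.1 request: the same bytes a micro-controller would
# send. The Host header matters on shared (cPanel-style) servers that host
# several sites on one IP.
request = (
    "GET /~aymanj/ HTTP/1.1\r\n"   # the path after the address selects the page
    "Host: www.example.com\r\n"    # hypothetical; put your real domain here
    "Connection: close\r\n"
    "\r\n"
)

with socket.create_connection(("203.0.113.5", 80)) as conn:  # your server's IP
    conn.sendall(request.encode("ascii"))
    response = b""
    while chunk := conn.recv(4096):
        response += chunk

print(response.decode("utf-8", errors="replace"))
```

Requesting "/" instead of "/~aymanj/" is what gets you the default cPanel page.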
I have servers spread across several data centers, each storing different files. I want users to be able to access the files on all servers through a single domain and have the individual servers return the files directly to the users.
The following shows a simple example:
1) The user's browser requests http://www.example.com/files/file1.zip
2) The request goes to server A, based on the DNS A record for www.example.com.
3) Server A analyzes the request and works out that /files/file1.zip is stored on server B.
4) Server A forwards the request to server B.
5) Server B returns file1.zip directly to the user without going through server A.
Note: steps 4 and 5 must be transparent to the user and cannot involve sending a redirect to the user as that would violate the requirement of a single domain.
From my research, what I want to achieve is called "Direct Server Return" and it is a common setup for load balancing. It is also sometimes called a half reverse proxy.
For step 4, it sounds like I need to do MAC address translation and then pass the request back onto the network, and for servers outside server A's network, tunneling will be required.
For step 5, I simply need to configure server B, as per the real servers in a load balancing setup. Namely, server B should have server A's IP address on the loopback interface and it should not answer any ARP requests for that IP address.
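For reference, the real-server side of step 5 usually comes down to a few commands on Linux; a hedged sketch (the VIP 203.0.113.10 is illustrative):

```sh
# On server B: put server A's public IP (the VIP) on the loopback interface.
ip addr add 203.0.113.10/32 dev lo

# Keep server B from answering ARP requests for the VIP on its real NIC.
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2
```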
My problem is how to actually achieve step 4?
I have found plenty of hardware and software that can do this for simple load balancing at layer 4, but these solutions fall short and cannot handle the kind of custom routing I require. It seems like I will need to roll my own solution.
Ideally, I would like to do the routing / forwarding at the web server level, i.e. in PHP or C# / ASP.net. However, I am open to doing it at a lower level such as Apache or IIS, or at an even lower level, i.e. a custom proxy service in front of everything.
Forgive my ignorance, but why not set up server A to mount the files located on the other servers, either via NFS or SMB, depending on whether you're using a Unix variant or Windows?
It seems like what you're trying to do is over-complicate something that could be very simple. In addition, using network-mounted files will allow you to mount those files on additional machines in the future when you need them. At that point, you could put a load balancer in front of server A (and servers X, Y, and Z, which also all mount files from server B).
Granted, this would not solve the problem of bypassing server A on the return; technically server A would be returning the file instead of server B. But if a load balancer were put in front of A, then A would become B anyway, so technically B would still be returning the file, because the load balancer would use direct server return (it's been a standard feature for a long time now).
If I did miss something, please do elaborate.
Edit: Yes I realize this was posted nearly 3 years ago. Oh well.
Why not send an HTTP response of status code 307 Temporary Redirect?
At that point the client will re-issue the request to the specified server.
I know you want a single domain, but you could have both individual subdomains plus a single common domain.
For example:
example.com has IP1, IP2, IP3.
example1.example.com has IP1
example2.example.com has IP2
example3.example.com has IP3
If a request comes to a server that can't handle it itself, that server forwards the user to make another request to the correct specific server. An HTTP browser will follow this redirect transparently, by the way.
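A sketch of that dispatch logic using Python's standard library (the file-to-subdomain mapping and host names are made up for illustration; any language or web server could do the same):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical lookup table: which subdomain actually stores each file.
FILE_LOCATIONS = {"/files/file1.zip": "example2.example.com"}

class Dispatcher(BaseHTTPRequestHandler):
    def do_GET(self):
        target = FILE_LOCATIONS.get(self.path)
        if target is None:
            self.send_error(404)
            return
        # 307 preserves the method and body; browsers follow it silently.
        self.send_response(307)
        self.send_header("Location", f"http://{target}{self.path}")
        self.end_headers()

HTTPServer(("", 8080), Dispatcher).serve_forever()
```

A server that does hold the requested file would simply serve it instead of redirecting.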