Finding an HTTP proxy that will intercept static resource requests

Background
I develop a web application that lives on an embedded device. To keep development times sane, frontend development is done with Apache serving static documents, with PHP proxying out to the embedded device for specifically configured dynamic resources. This requires that we keep various server-simulation scripts hanging around in source control, and that we update those scripts whenever we add a new dynamic resource.
Problem
I'd like to invert the logic: if the requested document is available in the static documents directory, serve it; otherwise, proxy the request to the embedded device.
Optimally, I want a software package that will do this for me (for Windows, or buildable on Cygwin). I could force Apache to do it with PHP, but I'm unsure how to configure that. I've looked at Squid and Privoxy, but neither seems to do what I want.
Any ideas? I'd rather not have to roll my own.
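For concreteness, here is the behaviour I'm after, sketched as an Apache config (this assumes mod_rewrite and mod_proxy are enabled; 192.168.0.10 is a placeholder for the embedded device's address):
# Serve any file that exists in the document root;
# proxy everything else to the embedded device.
RewriteEngine On
RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} !-f
RewriteRule ^/(.*)$ http://192.168.0.10/$1 [P,L]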

I think what you want is Varnish.
Varnish is now available in Cygwin; see the installation instructions: http://varnish-cache.org/trac/wiki/VarnishOnCygwinWindows

Now that I've looked at Varnish, I understand that what I actually want is a special case of a reverse proxy, and that Squid can be configured to do what I need. (With the added bonus that it's available as a Cygwin package.)

Related

How to manage multiple Symfony projects on a development computer

I've seen some posts, including How to manage multiple backend stacks for development?, but nothing about using LXC for a stable, safe, and separate development environment that matches production, regardless of the desktop and/or Linux distribution.
Before the Symfony CLI was released, there was a feature that allowed specifying a socket as ip:port, which made it possible to use different names in /etc/hosts via the 127.0.0.0/8 loopback network. I could always use "bin/console server:start -p:myproject:8000", and I knew that http://myproject:8000 (specified in /etc/hosts) would reach my project and keep the sessions, etc.
As far as I've tried, the Symfony CLI doesn't allow this. Reading the docs, there's a built-in proxy in the Symfony CLI, but although I've set a couple of projects to use it in the container, clicking an entry in the list doesn't open the project (with the .wip suffix) and raises an error about proxy redirections. If I browse to the container's IP and port, it works perfectly, but the port can change with every reboot of the container.
If there's nothing that can be set on the proxy side to solve this scenario, I'd ask for the socket feature that existed previously to be brought back, so I can manage this situation as I used to.
Thanks in advance.
I think I've finally found a good solution. I've created an issue to improve the part that didn't seem to work, and I'll try to explain it here for whoever might be interested.
I've set up the proxy server built into the Symfony CLI, but instead of letting it run with the defaults, I had to specify --host=proxyhost (resolvable from the host) and set proxy exceptions for .com, .org, .net, .tv, etc. Together with attaching a name to every project (issuing symfony proxy:domain:attach myproject from inside the project dir), I can now go to http://myproject.wip just like http://proxyhost:portX, no matter which port portX is.
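Sketched as commands (using the example names from above):
# start the built-in proxy with a host name that resolves from the host machine
symfony proxy:start --host=proxyhost
# from inside each project directory, attach a domain to the project
symfony proxy:domain:attach myproject
# the project is then reachable at http://myproject.wip, whatever its port is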

How to proxy subdomains to other servers with dokku?

I want my dokku host to run the main nginx for my domain (let's say cooldok.ku).
For various reasons I have other virtual machines serving content on the cooldok.ku host. I want to expose this content on a subdomain (say vm.cooldok.ku, served by a VM at 10.0.0.7 on the cooldok.ku host).
I figured the methodology involved is called reverse proxying.
In an optimal world, there would be a dokku-only way to register and 'link'/proxy the subdomains. As an added bonus, the cooldok.ku host would handle the SSL side of HTTPS itself (like ssltunnel), so that I could leverage existing certificates and/or use the awesome Let's Encrypt on the same machine, and secure applications in the VM that were not meant to be served via HTTPS.
How can this scenario be realised with dokku? How difficult would it be to write a plugin doing that?
Update
So, basically, dokku (0.8) comes equipped with everything it needs. The question is how much of what dokku wants to achieve (firing up those yummy Docker containers) gets in the way. To hack together a setup that does what I want, the following can be done:
# create the app (which also creates its folder under /home/dokku)
dokku apps:create vm
Now, these files have to be created or be present (vanilla 0.8 dokku installation):
#/home/dokku/vm/DOCKER_OPTIONS_DEPLOY
--restart=on-failure:10
#/home/dokku/vm/IP.web.1
10.0.0.7
#/home/dokku/vm/PORT.web.1
80
#/home/dokku/vm/URLS
# THIS FILE IS GENERATED BY DOKKU - DO NOT EDIT, YOUR CHANGES WILL BE OVERWRITTEN - I did it nonetheless
http://vm.cooldok.ku
#/home/dokku/vm/VHOST
vm.cooldok.ku
#/home/dokku/vm/nginx.conf
# Just listing the changes relative to a default app's config
[...]
proxy_pass http://vm-host;
[...]
upstream vm-host {
server 10.0.0.7:80;
}
Afterwards, nginx needs a manual restart (or ... dokku can do something for us here)
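On a stock Ubuntu-based dokku host that could be, for instance (my assumption; adjust to your init system):
sudo service nginx reload
# dokku can also regenerate and reload the app's nginx config, but note that
# this rebuilds nginx.conf from the template and would overwrite the edits above:
# dokku nginx:build-config vm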
I am pretty sure that some of the (redundant) information can be left out, as dokku should puzzle together the nginx.conf itself, for example. I am not sure whether this setup survives a reboot/nginx restart. Also, in tests, letsencrypt would not let me install the certificates/rebuild the nginx configuration because it sees the app vm as not being deployed.
Update2
To overcome the "app not deployed" issue, it suffices to touch /home/dokku/vm/CONTAINER, but this gets messier and messier ...
I bundled the information from the updates of my post into a dirty script at https://github.com/econya/scripts/blob/master/scripts/virt-helpers/fake-dokku-app.sh .
I guess the cleanest solution as-is, with upwards compatibility, would be to create a Dockerfile that launches a reverse proxy itself (configured via env/config:set variables) - but I am happy to learn about a smarter and nicer solution, or to get paid to write a proper plugin ;)
A second approach would be to use a "null" Docker image together with a custom nginx template, I guess.
Update 2021
According to the release notes it works now (look for "Routing to non-Dokku managed apps"):
https://dokku.github.io/release/dokku-0.25.0
I still use an older dokku and the solution written above, though.

Canonical way to serve a webapp via SSL only

I just installed ownCloud (7.0.10-1) from the EPEL (6) repository on a CentOS 6 server. Given that this RPM contains some "best practices", I want to minimize the changes to the config files that come with the RPM.
The app does work out of the box.
However, the app is available via HTTP, and I prefer HTTPS-only access. Is there a simple switch to turn off HTTP access, or do I have to do it manually? Is there any suggested canonical way to force SSL at all? The wiki pages do not seem to mention this at all.
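One generic way, sketched for the stock Apache httpd on CentOS 6 without touching the RPM-provided files (the file name and paths are placeholders; this assumes mod_rewrite and mod_ssl are present and HTTPS already works):
# /etc/httpd/conf.d/owncloud-force-ssl.conf
# permanently redirect all plain-HTTP requests for ownCloud to HTTPS
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^/owncloud(.*)$ https://%{HTTP_HOST}/owncloud$1 [R=301,L]
(If I remember correctly, ownCloud 7 itself also had an "Enforce HTTPS" switch on the admin page, but HTTPS still has to be working at the web-server level first.)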

Mercurial file protocol

Does Mercurial have an HTTP protocol with which we could browse files/folders/branches, instead of cloning/pulling changesets?
I've seen something using the TortoiseHg web server, accessing http://localhost:8080/ in a browser, but completely different HTML is served when you use a project on https://bitbucket.org/ (at least I could not find the same representation).
Update: the HttpCommandProtocol document describes only changesets, not files/folders. The task is to download only a few files for a particular revision (for example, the revision 'stable') plus a list of files. I do not want to download a complete repository for this.
Non-HTTP protocols are welcome, but the conditions are the same: do not download a complete repository.
Update 2: hgweb serves static HTML and files. Is the HTML format always the same across hgweb versions? What about bitbucket.org? Is there any common protocol?
As you noticed already, the HttpCommandProtocol defines the exchange of repository information and changesets - it ensures that you can clone/push/pull from/to any repo served over HTTP. But AFAIK there's no standard for how to browse a repo (e.g. getting a single file at a certain revision).
You'll have to adapt to whatever URL scheme your hosting system of choice uses (as you also noticed, hgweb and Bitbucket have different schemes). Depending on your use case, you could define your own file access protocol and feed it to a converter.
For instance you might want to access files with this scheme:
<repo-url>/<rev>/<path>
Where <repo-url> is the URL you use to clone/push/pull. In practice you would then use URLs like these:
https://bitbucket.org/user/repo/<rev>/<path>
https://hgwebhost.org/.../repo/<rev>/<path>
Obviously these are virtual URLs which do not exist. That's where your converter comes in: check the hosting system type and convert URLs accordingly:
https://bitbucket.org/user/repo/raw/<rev>/<path>
https://hgwebhost.org/.../repo/raw-file/<rev>/<path>
If your converter knows bitbucket and hgweb, then it already works with a good deal of repositories out there.
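A sketch of such a converter in Python (the host detection is deliberately naive, and the URL schemes are the two listed above):
def raw_url(repo_url, rev, path):
    # Map the virtual <repo-url>/<rev>/<path> scheme onto a real raw URL.
    if "bitbucket.org" in repo_url:
        return "%s/raw/%s/%s" % (repo_url, rev, path)    # bitbucket scheme
    return "%s/raw-file/%s/%s" % (repo_url, rev, path)   # assume hgweb
# e.g. raw_url("https://bitbucket.org/user/repo", "stable", "setup.py")
# yields https://bitbucket.org/user/repo/raw/stable/setup.py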
Mercurial has hgweb. It can be deployed via any WSGI container, and I think it even has CGI support.
If you just go to any hg repo and type
hg serve
you will have a web server listening at a URL that you can point a browser at. The formatting of the web pages generated by hg can be changed via templates. It is very likely that bitbucket.org has their own fancier templates, hence their prettier web pages.
Furthermore, the listening URL can be used to push and pull from as well, using hg. This is in fact the same website that is channeled via hgweb.cgi, and it is also the underlying mechanism for doing push/pull over SSH.
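To make that concrete: against a local hg serve instance (default port 8000) you can fetch a single file at a given revision via hgweb's raw-file scheme, without cloning anything (the file path is a placeholder):
hg serve -p 8000
curl http://localhost:8000/raw-file/stable/path/to/file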

Run a remote python script from ASP.Net

I have a Python script on a Linux server that I can SSH into. I want to run the script on the Linux server (and pass it parameters entered by the user) and get the output on an ASP.NET web page running on IIS. How would I be able to do that?
Would it be easier if I were running a WAMP server?
Edit: The servers are in the same internal intranet.
Probably the best approach is the least coupled one. If you can settle on a protocol that you're comfortable having the two sides (ASP/Python) talk over, it will go a long way toward reducing headaches.
Let's say you pick XML.
Set up the Python script to run as a WSGI application with either CherryPy or Apache (or whatever). The script formats its response as XML and hands it to WSGI, which returns the XML over HTTP.
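A bare-bones sketch of that WSGI side (the script name and the parameter convention are illustrative, not prescribed):
# wsgi_app.py - run the existing script and return its output as XML
import subprocess
from urllib.parse import parse_qs
from xml.sax.saxutils import escape

def application(environ, start_response):
    # user-entered parameters arrive in the query string, e.g. ?args=foo+bar
    qs = parse_qs(environ.get("QUERY_STRING", ""))
    args = qs.get("args", [""])[0].split()
    # run the existing script (placeholder name) with those parameters
    proc = subprocess.run(["python3", "script.py"] + args,
                          capture_output=True, text=True, timeout=30)
    body = ("<result><stdout>%s</stdout></result>" % escape(proc.stdout)).encode("utf-8")
    start_response("200 OK", [("Content-Type", "application/xml"),
                              ("Content-Length", str(len(body)))])
    return [body]
Any WSGI server (CherryPy, mod_wsgi, ...) can host this application as-is.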
On the ASP.NET side of things, whenever you want to "run the script", you simply query the URL with the WebRequest class, then parse the results with LINQ-to-XML (which, on a side note, is a really cool technology).
Here's where this becomes relevant: later on, if either the ASP.NET implementation or the Python implementation changes, you don't have to re-code/refactor the other. And if you later realize that the ASP.NET app and some desktop app both need to do this, you've standardized on a protocol, and implementing it should be easy and well supported.
