Serving two websites written in Google Go within a single VM - http

I have a VM from Digital Ocean. It currently has two domains linked to the VM.
I do not use any other web server, just Golang's built-in http package. Performance-wise I like it, and I feel like I have full control over it.
Currently I am using a single Go program that has multiple websites built in.
http.HandleFunc("test.com/", serveTest)
http.HandleFunc("123.com/", serve123)
http.HandleFunc("/", serve123)
As they are websites, the Go program listens on port 80.
The problem is that when I want to update only one website, I have to recompile the whole thing, since both sites live in the same program.
1) Is there a way to make it hot-swappable using only Golang (without Nginx or Apache)?
2) What would be a standard best practice?
Thank you so much!

Well, you can do hot-swapping in Go, but I really wouldn't do it unless it's truly necessary, as the added complexity isn't negligible (and I'm not just talking about code).
You can get something close with a small proxy that sits in front of the program and does a graceful swap whenever your binary changes: the principle is to run your binary on one port and the proxy on another. When a new binary is ready, you start it on a different port, point the proxy at the new port, then gracefully shut down the old process.
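To make the idea concrete, here is a minimal sketch of such a front proxy in Go; the ports, the admin listener and the /swap endpoint are all invented for illustration, not a production setup:

// proxy.go: a tiny front proxy whose backend address can be swapped at runtime.
package main

import (
    "net/http"
    "net/http/httputil"
    "net/url"
    "sync/atomic"
)

func main() {
    var backend atomic.Value
    backend.Store("http://127.0.0.1:8081") // the currently running binary

    proxy := &httputil.ReverseProxy{
        Director: func(r *http.Request) {
            target, _ := url.Parse(backend.Load().(string))
            r.URL.Scheme = target.Scheme
            r.URL.Host = target.Host
        },
    }

    // Toy admin endpoint: start the new binary on another port, then call
    // e.g. /swap?to=http://127.0.0.1:8082 and gracefully stop the old process.
    http.HandleFunc("/swap", func(w http.ResponseWriter, r *http.Request) {
        to := r.URL.Query().Get("to")
        if _, err := url.Parse(to); err != nil || to == "" {
            http.Error(w, "bad target", http.StatusBadRequest)
            return
        }
        backend.Store(to)
    })
    go http.ListenAndServe("127.0.0.1:9000", nil) // admin listener

    http.ListenAndServe(":80", proxy) // public listener
}

The old process can finish its in-flight requests before exiting; the proxy itself never goes down, which is what makes the swap graceful.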
There was a tool for that in Go that I can't remember the name of…
EDIT: not the one I had in mind, but close call https://github.com/rcrowley/goagain
Personal advice: use a reverse proxy for that, it's much simpler to do. My personal setup is to use h2o to terminate SSL, HTTP/2, etc., and send the requests to the various websites running in the background. Not only Go ones, though, but also PHP ones, a GitLab instance, etc. It's much more flexible, and the performance penalty of the proxy is small…
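If you want to stay Go-only rather than bring in h2o or nginx, the same front-proxy idea also addresses your original problem: run each site as its own binary on its own port and route by Host header, so either site can be rebuilt and restarted without touching the other. A rough sketch (the backend ports are invented for the example):

// frontproxy.go: route requests to per-site backends by Host header.
package main

import (
    "net/http"
    "net/http/httputil"
    "net/url"
)

func mustProxy(raw string) *httputil.ReverseProxy {
    u, err := url.Parse(raw)
    if err != nil {
        panic(err)
    }
    return httputil.NewSingleHostReverseProxy(u)
}

func main() {
    backends := map[string]*httputil.ReverseProxy{
        "test.com": mustProxy("http://127.0.0.1:8081"), // the serveTest program
        "123.com":  mustProxy("http://127.0.0.1:8082"), // the serve123 program
    }

    http.ListenAndServe(":80", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        // r.Host may include a port (e.g. "test.com:80"); strip it if needed.
        if p, ok := backends[r.Host]; ok {
            p.ServeHTTP(w, r)
            return
        }
        http.NotFound(w, r)
    }))
}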

Related

Simultaneously running nginx and apache2

I am running my websites on Ubuntu 14.04 server with nginx. I would like to set up a mail server (probably using squirrelmail) which uses apache2 (presently not in use). I intend to keep the port numbers for the web server and mail server entirely different.
Can I do this? Do I have to do anything out of the ordinary (secret handshakes) to set this up, and if so, what exactly do I need to do?
Yes, it's entirely possible. Generally the complications you will have to sort out are both servers trying to listen on the same port, and directory permission issues.
In this case you will have to make sure the Nginx master process still starts as root (its workers then run as the www-data user), because only root is allowed to bind ports 80 and 443 (which I'm assuming you are using for your website).
If you work out the kinks though it's entirely possible and has been done many times for situations like yours.
With that said, I think it would be a lot of extra work to set up an entire Apache web server just to run SquirrelMail. You can configure SquirrelMail to work with Nginx, and it's not that difficult.
Tutorial here.
Imho that is the easiest way forward, with the fewest possible ways to crash your entire server and be forced to start over. Which brings me to another point.
Before you do any work on a server, BACK UP EVERYTHING. Clone your hard drive if you can; if not, make sure you get all databases, static files, config files, etc.
It's also best practice not to work on a live production server if you can avoid it. Use a backup in the meantime while you play around with yours.
It will save you a lot of stress to work at a comfortable pace rather than work against the clock of hundreds of customers trying to view your website.
I know that doesn't apply in all cases, but it's a good practice to get into if you can to prepare you for any bigger jobs that come down the road.

How to configure GWAN as a reverse proxy?

I saw some performance figures for G-WAN and am interested in testing it as a reverse proxy for static content in front of Apache with APC for optimizing PHP opcode, to run a WordPress multisite. I can get G-WAN up and running, but I have no idea how to configure it as a reverse proxy, as there seems to be almost no information on it. Has anyone used G-WAN as a reverse proxy?
It's still an undocumented feature… maybe in a future release? #gil?
Right now there's no easy way for you to do that. That will change with the next release.
We first hardcoded the reverse-proxy feature in G-WAN along with the load-balancer. Then, as we needed to personalize reverse-proxying, we implemented it as a protocol handler script.
Protocol handler scripts allow users to implement any protocol (like SMTP, LDAP, etc.) without having to deal with multi-threading or socket events.
But finally, to reduce complexity for users, we might revert to the hard-coded implementation, with connection handler scripts to let people personalize the reverse proxy.
It's maturing under different use cases, hence the delay in publicly releasing this feature and a few others.
Rushing to implement features and interfaces is not always optimal, if the goal is to stay flexible and easy to use.

Simulating a remote website locally for testing

I am developing a browser extension. The extension works on external websites we have no control over.
I would like to be able to test the extension. One of the major problems I'm facing is displaying a website 'as-is' locally.
Is it possible to display a website 'as-is' locally?
I want to be able to serve the website exactly as-is locally for testing. This means I want to simulate the exact same HTTP data, including iframe ads, etc.
Is there an easy way to do this?
More info:
I'd like my system to act as closely to the remote website as possible. I'd like to run a command, fetch for example, which would then allow me to go to the site in my browser (with the internet off) and get the exact same thing I would otherwise (including content that does not come from a single domain, Google ads, etc.).
I don't mind using a virtual machine if this helps.
I figured this was quite a useful thing in testing. Especially when I have a bug I need to reliably reproduce in sites that have many random factors (what ads show, etc).
As was already mentioned, caching proxies should do the trick for you (BTW, this is the simplest solution). There are quite a lot of different implementations, so you just need to spend some time selecting a proper one (in my experience, Squid is a good choice). Anyway, I would like to highlight two other interesting options:
Option 1: Betamax
Betamax is a tool for mocking external HTTP resources such as web services and REST APIs in your tests. The project was inspired by the VCR library for Ruby. Betamax aims to solve these problems by intercepting HTTP connections initiated by your application and replaying previously recorded responses.
Betamax comes in two flavors. The first is an HTTP and HTTPS proxy that can intercept traffic made in any way that respects Java’s http.proxyHost and http.proxyPort system properties. The second is a simple wrapper for Apache HttpClient.
BTW, Betamax has a very interesting feature for you:
Betamax is a testing tool and not a spec-compliant HTTP proxy. It ignores any and all headers that would normally be used to prevent a proxy caching or storing HTTP traffic.
Option 2: Wireshark and replay proxy
Grab all the traffic you are interested in using Wireshark and replay it. I would say it is not that hard to implement the required replaying tool yourself, but you can also use an available solution called replayproxy, which:
- parses HTTP streams from .pcap files
- opens a TCP socket on port 3128 and listens as an HTTP proxy, using the extracted HTTP responses as a cache while refusing all requests for unknown URLs.
This approach gives you full control and a bit-for-bit precise simulation.
I don't know if there is an easy way, but there is a way.
You can set up a local webserver, something like IIS, Apache, or minihttpd.
Then you can grab the website contents using wget (it has an option for mirroring). Many browsers also have a "save whole web page" option that will grab everything, like images.
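If you would rather not install a full web server just for this, a few lines of Go can serve the mirrored directory (the path below is only a placeholder for wherever wget put the files):

// servemirror.go: serve a wget-mirrored copy of a site on localhost.
package main

import (
    "log"
    "net/http"
)

func main() {
    fs := http.FileServer(http.Dir("./mirror/example.com")) // placeholder path
    // Binding port 80 usually requires root; any high port works too.
    log.Fatal(http.ListenAndServe("127.0.0.1:80", fs))
}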
Ads will most likely come from remote sites, so you may have to manually edit those lines in the HTML to either not reference the actual ad-servers, or set up a mock ad yourself (like a banner image).
Then you can navigate your browser to http://localhost to visit your local website, assuming port 80 which is the default.
Hope this helps!
I assume you want to serve a remote site that's not under your control. In that case you can use a proxy server and have that server cache every response aggressively. However, this has its limits: first, you will have to visit every site you intend to use through this proxy (with a browser, for example); second, you will not be able to emulate form processing.
Alternatively you could use a spider to download all content of a certain website. Depending on the spider software, it may even be able to download JavaScript-built links. You then can use a webserver to serve that content.
This service, http://www.json-gen.com, provides mocks for HTML, JSON and XML via REST. This way, you can test your frontend separately from the backend.

Secure data transfer over HTTP when HTTPS is not an option

I would like to write an application to manage files, directories and processes on hundreds of remote PCs. There are measurement programs running on these machines, which are currently managed manually using TightVNC / RealVNC. Since the number of machines is large (and increasing) there is a need for automatic management. The plan is that our operators would get a scriptable client application, from which they could send queries and commands to server applications running on each remote PC.
For the communication, I would like to use a TCP-based custom protocol, but it is administratively complicated and would take very long to open pinholes in every firewall in the way. Fortunately, there is a program with a built-in TinyWeb-based custom web server running on every remote PC, and port 80 is opened in every firewall. These web servers serve requests coming from a central server, by starting a CGI program, which loads and sends back parts of the log files of measurement programs.
So the plan is to write a CGI program, and communicate with it from the clients through HTTP (using GET and POST). Although (most of) the remote PCs are inside the corporate intranet, they are scattered all over the country, so I would like to secure the communication. It would not be wise to send commands which manipulate files and processes in plain text. Unfortunately the program which contains the web server cannot be touched, so I cannot simply prepare it for HTTPS. I can only implement the security layer in the client and in the CGI program. What should I do?
I have read all similar questions in SO, but I am still not sure what to do in this specific situation. Thank you for your help.
There are several webshells but as far as I can see ( http://www-personal.umich.edu/~mressl/webshell/features.html ) they run on the top of an existing SSL/TLS layer.
There is also S-HTTP.
There are several ways of authenticating to a server (username/password) in a protected way without SSL: http://www.switchonthecode.com/tutorials/secure-authentication-without-ssl-using-javascript . But these solutions are focused only on sending a username/password to the server.
Would it be possible to implement something like message-level security, as in SOAP/WS-Security? I realise this might be a bit heavy duty and complicated to implement, but at least it is:
- standardised
- definitely secure
- possibly supported by some libraries or frameworks you could use
- suitable for HTTP
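As a rough illustration of what message-level security means in practice (and only an illustration: key distribution, replay protection and sender authentication are deliberately ignored here, and a vetted standard like WS-Security is preferable), the client could encrypt each command with a pre-shared key before POSTing it over plain HTTP, and the CGI program would decrypt it. The URL and command below are hypothetical:

// encryptcmd.go: illustrative message-level encryption with a pre-shared key.
// This sketch ignores key distribution, replay protection and authentication
// of the sender; it only shows the payload encryption step.
package main

import (
    "bytes"
    "crypto/aes"
    "crypto/cipher"
    "crypto/rand"
    "encoding/base64"
    "fmt"
    "io"
    "net/http"
)

// key would be a 32-byte secret shared between client and CGI program.
var key = make([]byte, 32) // placeholder: all zeros, use a real secret

func encrypt(plaintext []byte) (string, error) {
    block, err := aes.NewCipher(key)
    if err != nil {
        return "", err
    }
    gcm, err := cipher.NewGCM(block)
    if err != nil {
        return "", err
    }
    nonce := make([]byte, gcm.NonceSize())
    if _, err := io.ReadFull(rand.Reader, nonce); err != nil {
        return "", err
    }
    // Prepend the nonce so the receiver can decrypt.
    sealed := gcm.Seal(nonce, nonce, plaintext, nil)
    return base64.StdEncoding.EncodeToString(sealed), nil
}

func main() {
    body, err := encrypt([]byte("restart measurement-program-42")) // hypothetical command
    if err != nil {
        panic(err)
    }
    // POST the opaque ciphertext to the (hypothetical) CGI endpoint over plain HTTP.
    resp, err := http.Post("http://remote-pc/cgi-bin/manage", "text/plain",
        bytes.NewBufferString(body))
    if err != nil {
        panic(err)
    }
    fmt.Println(resp.Status)
}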

Why use Mongrel2?

I'm confused about what purpose Mongrel2 serves/provides that nginx doesn't already cover.
(Yes, I've read the manual, but I must be too much of a noob to understand how it's fundamentally different from nginx.)
My current web application stack is:
- nginx: webserver
- Lua: programming language
- FastCGI + LuaJIT: to connect nginx to Lua
- Postgres: database
If you could only name one thing, then it would be that Mongrel2 is built around ZeroMQ, which means that scaling your web server has never been easier.
When a request comes in, Mongrel2 receives it (nothing unusual here, same as nginx and any other httpd). The next thing that happens is that Mongrel2 distributes the task of compiling a response to n (ZeroMQ-enabled) backends, waits for them to do the work, receives the results, compiles the response and sends it off to the client.
Now, the magic is in the fact that n can be any number, that each of those n backends can be written in any language supported by ZeroMQ (20 or so), and that it all goes across the network, so each backend can be a dedicated box, possibly in another datacenter.
In other words: with nginx and all the rest, you have to handle scalability in your logic tier; Mongrel2 allows you to start it (from a request/response-cycle point of view) right where the request hits your infrastructure, at the httpd, rather than letting complexity penetrate down to your logic tier, which blows complexity upwards by at least one order of magnitude imo.
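For a feel of the handler side, here is a minimal sketch of the socket pattern Mongrel2 handlers use (PULL to receive requests, PUB to send responses), using the pebbe/zmq4 binding. It deliberately skips Mongrel2's actual message framing (sender UUID, connection id, netstring-encoded headers), and the endpoints are examples, so treat it as an outline rather than a working handler:

// handler_sketch.go: outline of a Mongrel2-style ZeroMQ backend in Go.
package main

import (
    "log"

    zmq "github.com/pebbe/zmq4"
)

func main() {
    pull, err := zmq.NewSocket(zmq.PULL) // receives requests from Mongrel2
    if err != nil {
        log.Fatal(err)
    }
    pull.Connect("tcp://127.0.0.1:9997")

    pub, err := zmq.NewSocket(zmq.PUB) // publishes responses back to Mongrel2
    if err != nil {
        log.Fatal(err)
    }
    pub.Connect("tcp://127.0.0.1:9996")

    for {
        msg, err := pull.Recv(0)
        if err != nil {
            log.Fatal(err)
        }
        // A real handler would parse the request here and build a properly framed reply.
        log.Printf("got request: %q", msg)
        pub.Send("response placeholder", 0)
    }
}

Any number of such workers, in any ZeroMQ-supported language and on any machine, can connect to the same two endpoints, which is where the easy scaling comes from.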
You should look at the strengths of each and decide to use either or both depending on your use cases.
While it seems on the surface that nginx does everything Mongrel2 provides, you'll find there are major differences in focus between the two.
Nginx shines as a front-end webserver that can proxy requests to your backend webservers/appservers and also serve static content.
Mongrel2 is a slight change in the stack. As mentioned, its power comes from its use of ZeroMQ as the transport layer between it and the backend appservers. It can serve dynamic request URLs (app requests) and direct the compute portion of the task out to different backends using ZeroMQ.
Mongrel2 allows you to serve not just HTTP and WebSockets but other protocols as well (if you're inclined to do so), all from the same server. The user would never know that portions of the app are being served from different backends.
If your requirements for the functionality of your webapp keep changing, or you want to add things like streaming or the ability to code the backend in different languages, then I would definitely look at Mongrel2. Or even go hybrid, where you use nginx/haproxy/varnish for static files and caching, and everything else is directed to Mongrel2.
