I want my dokku host to run the main nginx for my domain (let's say cooldok.ku).
On cooldok.ku, for various reasons, I have other virtual machines serving content. I want to expose that content on a subdomain (say vm.cooldok.ku, served by a VM at 10.0.0.7 on the cooldok.ku host).
I figure the technique involved is called reverse proxying.
Ideally, there would be a dokku-only way to register and 'link'/proxy the subdomains. As an added bonus, the cooldok.ku host would handle the SSL side of https itself (like ssltunnel does), so that I could leverage existing certificates and/or use the awesome letsencrypt on the same machine, and secure applications in the VM that were never meant to be served via https.
How can this scenario be realised with dokku? How difficult would it be to write a plugin doing that?
Update
So, basically dokku (0.8) comes equipped with pretty much everything it needs for this. The question is how much of what dokku wants to do (fire up those yummy docker containers) gets in the way. To hack together a setup that does what I want, the following can be done:
# create the app (and with it the /home/dokku/vm directory)
dokku apps:create vm
Now, the following files have to be created or be present (vanilla dokku 0.8 installation):
#/home/dokku/vm/DOCKER_OPTIONS_DEPLOY
--restart=on-failure:10
#/home/dokku/vm/IP.web.1
10.0.0.7
#/home/dokku/vm/PORT.web.1
80
#/home/dokku/vm/URLS
# THIS FILE IS GENERATED BY DOKKU - DO NOT EDIT, YOUR CHANGES WILL BE OVERWRITTEN - I did it nonetheless
http://vm.cooldok.ku
#/home/dokku/vm/VHOST
vm.cooldok.ku
#/home/dokku/vm/nginx.conf
# Only the changes relative to a default app's nginx.conf are listed
[...]
proxy_pass http://vm-host;
[...]
upstream vm-host {
server 10.0.0.7:80;
}
Afterwards, nginx needs a manual restart (or ... dokku can do something for us here)
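To apply the hand-edited configuration, a plain manual reload works; something like this (standard commands on an Ubuntu-ish host, nothing dokku-specific):
sudo nginx -t                  # sanity-check the edited config first
sudo service nginx reload      # or: sudo systemctl reload nginx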
I am pretty sure that some of the (redundant) information could be left out, as dokku should be able to puzzle the nginx.conf together itself, for example. I am not sure whether this setup survives a reboot/nginx restart. Also, in my tests, letsencrypt would not install the certificates/rebuild the nginx configuration because it sees the app vm as not being deployed.
Update 2
To overcome the "app not deployed" issue, it suffices to touch /home/dokku/vm/CONTAINER, but this gets messier and messier ...
I bundled the information from the updates of my post into a dirty script at https://github.com/econya/scripts/blob/master/scripts/virt-helpers/fake-dokku-app.sh .
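Condensed, and with the VM's address hard-coded, the whole hack boils down to roughly this (untested sketch, run as the dokku user or with sudo; adjust names, IP and port to taste):
dokku apps:create vm
cd /home/dokku/vm
echo "--restart=on-failure:10" > DOCKER_OPTIONS_DEPLOY
echo "10.0.0.7"                > IP.web.1
echo "80"                      > PORT.web.1
echo "vm.cooldok.ku"           > VHOST
echo "http://vm.cooldok.ku"    > URLS
touch CONTAINER                # so dokku/letsencrypt treat the app as deployed
# edit nginx.conf as shown above, then:
sudo service nginx reload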
I guess the cleanest solution as-is, with upwards compatibility, would be to create a Dockerfile that launches a reverse proxy itself (configured via env/config:set variables) - but I am happy to learn of a smarter and nicer solution, or to get paid to write a proper plugin ;)
A second approach would be to use a "null" Docker image together with a custom nginx template, I guess.
Update 2021
According to the release notes it works now (look for "Routing to non-Dokku managed apps"):
https://dokku.github.io/release/dokku-0.25.0
I still use an older dokku and the solution written above, though.
Related
I've seen some posts, including How to manage multiple backend stacks for development?, but nothing related to using lxc for a stable, safe and separate development environment that matches the production environment, regardless of the desktop and/or linux distribution.
Before the symfony cli was released, there was a feature that allowed specifying a socket via ip:port, which made it possible to use different names in /etc/hosts on the 127.0.0.0/8 loopback network. I could always use "bin/console server:start -p:myproject:8000", and I knew that by going to http://myproject:8000 (with myproject defined in /etc/hosts) I could access my project and keep the sessions, etc.
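For illustration, that old workflow looked roughly like this (127.0.0.5 and myproject are made-up examples; if I remember correctly, the old WebServerBundle command also accepted an explicit ip:port):
# /etc/hosts on the host machine
# 127.0.0.5   myproject
# bind the built-in dev server to that loopback address
bin/console server:start 127.0.0.5:8000
# then browse to http://myproject:8000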
The symfony cli, as far as I've tried, doesn't allow this. Reading the docs, there's a built-in proxy in the symfony cli, but although I've set a couple of projects to use it in the container, clicking on them in the list doesn't open the project (with the .wip suffix) and issues an error about proxy redirections. If I browse to the container's ip and port directly, it works perfectly, but the port can change with every reboot of the container.
If there's nothing that can be set on the proxy side to solve this scenario, I'd ask for the previously existing socket feature to be brought back, so I can manage this situation the way I used to.
Thanks in advance.
I think I've finally found a good solution. I've created an issue about the part that didn't seem to work, and I'll try to explain the setup for whoever might be interested.
I've set up the proxy server built into the symfony cli, but instead of letting it run with the defaults I had to specify --host=proxyhost (a name resolvable from the host) and set proxy exceptions for .com, .org, .net, .tv, etc. Together with attaching a name to every project (issuing symfony proxy:domain:attach myproject from inside each project dir), I can now go to http://myproject.wip just as I would to http://proxyhost:portX, no matter which port portX happens to be.
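In short, the moving parts are roughly these (proxyhost stands for whatever name resolves to the container from the host):
# start the built-in proxy, bound to a name the host can resolve
symfony proxy:start --host=proxyhost
# from inside each project directory, attach a stable domain
symfony proxy:domain:attach myproject
# the project is then reachable as http://myproject.wip through the proxy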
As we all know, we can run a meteor app by just typing meteor in a terminal.
By default it will start a server and use port 3000.
So why do I need to deploy it using MUP etc.?
I can configure it to use port 80 or use nginx to route to port 80 for the app. So the port is not the point.
Edit:
Assume meteor is running on a VPS or cloud server with public IP address, not a personal computer.
MUP does a few extra things you can do yourself:
it 'bundles' the code into a single archive, using meteor build
the JavaScript ends up in one file and the CSS in another; both are minified and obfuscated, so they are smaller, faster to load, and harder to decipher on the client.
some packages are also meant to be removed when running in production. For example meteorToys, the utility toolset for looking up collections and much more, is not included in the production bundle, as per the instructions in its package. This ensures you don't deploy code with security vulnerabilities (Meteor Toys basically opens up client-side deletes/updates etc. if you're not careful).
So, in short, it installs a minimal version of your site, making sure that what's meant for development only doesn't get pushed to a production environment.
EDIT: One other reason to do this is that you don't need all the Meteor build tools on your production server; that can add up to a lot of stuff, especially if you keep caches around for a while...
I believe it also takes care of hooking up to a remote MongoDB instance (at least that used to be the case with the free meteor hosting), which is more scalable and fault-tolerant than running the database on the same instance as the web server, as well as provisioning storage etc. if needed.
Basically, to deploy a Meteor app yourself manually, you need to do the following (a condensed sketch follows the list):
on your dev box:
run meteor build to bundle your app into a tar file (using the --architecture flag corresponding to the OS of the server)
on the server:
install node v0.10 (or whatever is the current version of node required by Meteor)
you might have to install Fiber#1.0.5 (but I believe this is now part of meteor install already)
untar the bundle, get into bundle/programs/server/ and run npm install
run the server with node main.js in the bundle folder.
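Put together, a bare-bones manual deployment looks something like this (app name, paths and URLs are placeholders; MONGO_URL, ROOT_URL and PORT are the environment variables the bundle expects):
# on the dev box: build a server bundle for the target OS
meteor build ../build --architecture os.linux.x86_64
# on the server: unpack, install server dependencies, run
tar -xzf myapp.tar.gz
(cd bundle/programs/server && npm install)
export MONGO_URL='mongodb://localhost:27017/myapp'
export ROOT_URL='http://example.com'
export PORT=3000
node bundle/main.js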
The purpose of deploying an application is to situate your project on hardware outside of your local machine. For example, if you deploy an application to Heroku, you create a repository on Heroku's systems and that codebase is used to serve your application from their servers.
If you just run an application on your personal system, you will suffer from limited network and resource availability, and the machine sits under-used at off-peak hours, since it has to stay available for additional users while having no other work to do. Hosting providers provision resources as needed, and their diverse client base lets their systems work around the clock on a global scale.
I just installed owncloud (7.0.10-1) from the EPEL (6) repository on a CentOS 6 server. Given that this rpm encodes some "best practices", I want to minimize the changes to the config files that come with the rpm.
The app does work out of the box.
However, the app is available via http, and I would prefer https-only access. Is there some simple switch to turn off http access, or do I have to do it manually? Is there any suggested canonical way to force SSL at all? The wiki pages do not seem to mention this at all.
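For reference, I know I could force it at the Apache level with something like the sketch below (untested; the file name is a placeholder on a stock CentOS 6 httpd), but I'd prefer a switch sanctioned by the rpm/owncloud itself:
# drop a redirect rule next to the packaged config and reload Apache
cat > /etc/httpd/conf.d/owncloud-force-ssl.conf <<'EOF'
# redirect plain-http requests for owncloud to https (needs mod_rewrite)
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^/owncloud(.*)$ https://%{HTTP_HOST}/owncloud$1 [R=301,L]
EOF
service httpd reload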
I am just starting to get my head wrapped around continuous deployment with Jenkins, but I am running into some roadblocks and I haven't really found very many good, definitive resources on the topic in regards to ASP.NET applications.
I have set up a local build server that successfully pulls down code from an SVN repo and builds it OK with MSBuild. This works well so far, but now I'd like to automate pushing this compiled code to a development server.
My problem is this: from what I gather based on what I've read (which may be an incorrect assumption...), the staging server is typically within the same network as the build server, meaning you can share network resources, servers, etc.
In my case, I want to run the Jenkins server on a remote VPS, then deploy to other remote VPSes (so, essentially individual isolated machines communicating with each other).
I have seen a lot of terms thrown around, but my sysadmin/DevOps skills are very new.
So, my question is this:
Is it even possible, using Jenkins on a VPS, to deploy to any particular server I choose? (I have full access to all of them, so if it's a security thing, I can fix that... but they are not within the same network/domain.)
What is the method to achieve this? I've seen xcopy, Web Deployment Packages (msdeploy), batch scripts, etc. mentioned, but not much guidance on what to use in which situations. Are any of these methods useful for achieving my goal?
Thanks for any help or guidance!
How is your Powershell? ;) You should check out psake.
psake is a build automation tool written in PowerShell. It avoids the angle-bracket tax associated with executable XML by leveraging the PowerShell syntax in your build scripts. psake has a syntax inspired by rake (aka make in Ruby) and bake (aka make in Boo), but is easier to script because it leverages your existent command-line knowledge.
psake is pronounced sake – as in Japanese rice wine. It does NOT rhyme with make, bake, or rake.
You can deploy your files to the target server over SSH. Jenkins does support transfers over SSH. All you need to do is set up an SSH server (e.g. CopSSH) and a user account with admin permissions, and configure Jenkins to transfer over SSH.
Create host configurations in the main Jenkins configuration
Add an SSH Server
Add the public key to the remote server (the build server)
Click "Test Configuration"
Save
Configure a job to Publish Over SSH (Post Build Action)
Add Transfer Set.
Refer to the Publish Over SSH plugin documentation for more details. Conceptually, the transfer amounts to the sketch below.
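A rough equivalent of what the post-build step does for you (host, user and paths are made up; CopSSH exposes Windows paths under /cygdrive):
# copy the MSBuild output to the target server
scp -r build-output/* deployuser@target.example.com:/cygdrive/c/inetpub/wwwroot/MyApp/
# optionally run a command on the target afterwards, e.g. restart IIS
ssh deployuser@target.example.com "iisreset"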
Background
I develop a web application that lives on an embedded device. In order to make dev times sane, frontend development is done using apache serving static documents, with PHP proxying out to the embedded device for specifically configured dynamic resources. This requires that we keep various server-simulation scripts hanging around in source control, and it requires updating those scripts whenever we add a new dynamic resource.
Problem
I'd like to invert the logic: if the requested document is available in the static documents directory, serve it; otherwise, proxy the request to the embedded device.
Ideally, I want a software package that will do this for me (for Windows, or buildable on cygwin). I can deal with forcing apache to do it with PHP, but I'm unsure how to configure it to make that happen. I've looked at squid and privoxy, but neither of them seems to do what I want.
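For what it's worth, the apache-side fallback I have in mind would be roughly the following mod_rewrite/mod_proxy setup (untested; the config path and device address are placeholders, and would differ on a cygwin install), but a ready-made package would still be nicer:
# append to the vhost serving the static documents (needs mod_rewrite + mod_proxy_http)
cat >> /etc/apache2/conf.d/dev-fallback.conf <<'EOF'
RewriteEngine On
# if the requested file does not exist under the static docroot...
RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} !-f
# ...proxy the request through to the embedded device
RewriteRule ^ http://embedded-device.example%{REQUEST_URI} [P,L]
EOF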
Any ideas? I'd rather not have to roll my own.
I think what you want is varnish.
Varnish is now available in cygwin; installation instructions: http://varnish-cache.org/trac/wiki/VarnishOnCygwinWindows
Now that I've looked at varnish, I understand that what I actually want is a special case of a reverse proxy, and that squid can be configured to do what I need. (With the added bonus of having it available as a cygwin package.)