Play Framework: Deploy website

So, I have completed developing a website using the Play 2.2 framework, and I have a basic question: how do I deploy the Play application? I followed the steps given in Play Production Mode and generated the files in target/universal/stage/bin (and all related files) using the [project]$ dist command. Now, what do I do with the generated files? How can I put this live? Please specify the steps required to deploy this application, or point me to a helpful article.
I am a newbie, so this question may be too simple to ask.
Thanks :)

That's easy:
send the unzipped files to the server
find the start script and make it executable: chmod +x start
start the application on port 80, e.g. ./start -Dhttp.port=80 (don't forget to use nohup, otherwise the application will terminate when you close the SSH session; see the sketch below)
That's all
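Putting those steps together, a minimal session might look like this (my-app-1.0 is a placeholder for whatever archive dist produced for your project):
unzip my-app-1.0.zip
cd my-app-1.0
chmod +x start                  # make the generated script executable (newer Play versions name it bin/<app> instead)
nohup ./start -Dhttp.port=80 &  # nohup keeps the app alive after the SSH session ends
Note that binding to port 80 usually requires root; the reverse-proxy tip below avoids that.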
Tip: for easier maintenance you can use, for instance, Jenkins (or some other CI system); with the rsync command you can prepare a one-click task for redeploying the app, even to a distant location.
If you need to run more than one application on port 80 on the same machine, use some lightweight HTTP server for reverse proxying and/or load balancing; nginx, for example, works perfectly.
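A minimal nginx reverse-proxy configuration might look like the following sketch (the domain is hypothetical; 9000 is Play's default HTTP port):
server {
    listen 80;
    server_name example.com;               # hypothetical domain

    location / {
        proxy_pass http://127.0.0.1:9000;  # the Play app running on its default port
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}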

Related

Why do we need to deploy a meteor app instead of just starting it?

As we all know, we can run a meteor app by just typing meteor in a terminal.
By default it will start a server and use port 3000.
So why do I need to deploy it using MUP, etc.?
I can configure it to use port 80, or use nginx to route port 80 to the app, so the port is not the point.
Edit:
Assume meteor is running on a VPS or cloud server with public IP address, not a personal computer.
MUP does a few extra things you can do yourself:
it 'bundles' the code into a single archive, using meteor build
the JavaScript is one file and the CSS another; both are minified and obfuscated, so they are smaller, faster to load, and less easy to decipher on the client
some packages are also meant to be removed when running in production. For example Meteor Toys, the utility toolset for inspecting collections and much more, is not included in the production bundle, as per the instructions in its package. This ensures you don't deploy code with security vulnerabilities (Meteor Toys basically opens up client-side deletes/updates etc. if you're not careful)
So, in short, it installs a minimal version of your site, making sure that what's meant for development only doesn't get pushed to a production environment.
EDIT: One other reason to do this is that you don't need all the Meteor build tools on your production server; those can add up to a lot of stuff, especially if you keep caches around for a while...
I believe it also takes care of hooking up to a remote MongoDB instance (at least it used to be the case on the free meteor site), which is more scalable and fault tolerant than running on the same instance as the web server, as well as provisioning storage etc... if needed.
Basically, to deploy a Meteor app yourself manually, you need to:
on your dev box:
meteor build your app into a tar file (using the architecture flag corresponding to the OS you will use)
on the server:
install node v0.10 (or whatever is the current version of node required by Meteor)
you might have to install Fiber#1.0.5 (but I believe this is now part of the meteor install already)
untar the bundle, go into bundle/programs/server/ and run npm install
run the server with node main.js in the bundle folder (see the sketch below)
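A rough sketch of those steps, assuming a Linux server, an app called myapp, and MongoDB already running (names, paths, and URLs are placeholders):
# on the dev box: build a server bundle for the target OS
meteor build ../build --architecture os.linux.x86_64

# on the server: unpack and install the server dependencies
tar -xzf myapp.tar.gz
(cd bundle/programs/server && npm install)

# run it; Meteor reads its configuration from environment variables
export MONGO_URL='mongodb://localhost:27017/myapp'
export ROOT_URL='http://example.com'
export PORT=80
node bundle/main.js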
The purpose of deploying an application is to situate your project on hardware outside of your local machine. For example, if you deploy an application on Heroku, you create a repository on Heroku's systems, and that code base is used to serve your application from their servers.
If you just start an application on your personal system, you will suffer a lack of network and resource availability, and your machine will sit under-used at off-peak hours, since it must remain attentive for additional users without having alternative tasks. Hosting providers provide resources as needed, and their diverse client base allows their systems to work around the clock on a global scale.

Would it be a security risk to create an Automator application to start Plone?

In order to make a "one-click" start-up solution for Plone on a dedicated Mac web server, I would like to create an Automator application. The purpose of this would be to launch it on login, so that if the computer encountered a power outage or needed to restart for maintenance, Plone would automatically start once the machine was powered on again. That said, because the installation would be as root, the user and ".../zeocluster/bin/*" would need to be blessed in sudoers so that plonectl can be started without needing a password.
Basic question: is it a huge security risk on a production server to add /bin/* to sudoers?
Zope will start as root, but will not run as root. After attaching to the port, it changes to the effective user specified in your buildout.
That said, unless you need to bind to a privileged port (like port 80 or 443), I would try to avoid starting Zope as root. It's just not necessary, and it increases the attack surface. For much the same reason, I'd avoid using Automator for an app that starts as root.
Instead, take a look at the init_scripts directory in the Unified Installer. It has example startup scripts and packaging lists for OS X. These haven't been touched in a long time, so there's a good chance you'll need to edit them to match the actual start commands. I'd also have it sudo to the effective user rather than start as root. So:
sudo -u plone_daemon /usr/local/Plone/zeocluster/bin/plonectl start
Adjust the path to your install location.
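To answer the sudoers question directly: granting NOPASSWD on bin/* is broader than necessary; you can restrict the rule to the single command. A sketch, assuming a login user named youruser and the default install path (both hypothetical):
# /etc/sudoers.d/plone (hypothetical file; always edit sudoers with visudo)
# let youruser run plonectl as plone_daemon without a password
youruser ALL=(plone_daemon) NOPASSWD: /usr/local/Plone/zeocluster/bin/plonectl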

Deployment of PrecompiledApp issue

The autogenerated PrecompiledApp.config is causing me some headaches.
I'm automating the deployment of an older web site, and 50% of the time when I deploy I get this error:
System.IO.IOException: The process cannot access the file '\\web.prod.local\c$\Sites\Website\PrecompiledApp.config' because it is being used by another process.
Content:
<precompiledApp version="2" updatable="true"/>
To the best of my knowledge, web sites use a shadow-copy feature to allow updating the site at runtime, for things such as app.config etc.
However, this one file seems to be an exception.
Can anyone suggest a workaround, besides stopping the website while deploying?
Kind regards
Judging by the path in the error message, you're copying the files over a network share while deploying. Updating the files directly over a network share, FTP, etc. is bad practice, and it is actually the reason for this error: network deployment is slow, and while some files are still being updated/uploaded, ASP.NET on the server is already trying to recycle the app, copy the files to the "Temporary ASP.NET Files" folder, and so on.
Deployment best practice:
ZIP your precompiled site, upload it, then run UNZIP on the server remotely.
Here's how you run UNZIP remotely:
plink -ssh -l USERNAME -pw PASSWORD web.prod.local c:\Sites\Website\unzip -q -o c:\Sites\Website\site.zip -d c:\Sites\Website\
"plink" is a free command-line SSH tool for Windows; you need it on your dev machine
"web.prod.local" is your server address
"c:\Sites\Website\" is the path to your website on the server
You need SSH installed on your server to run commands remotely; the simplest option is to install the free tool "freesshd" (google it)
Drop "unzip.exe" on the server as well; you can see it being called right there. The simplest way is to drop it right into c:\Sites\Website\ (the full sequence is sketched below)
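Putting the upload-then-unzip sequence together (pscp ships alongside plink in the PuTTY tools; user name, password, and paths are placeholders):
rem upload the zipped site to the server
pscp -l USERNAME -pw PASSWORD site.zip web.prod.local:c:\Sites\Website\site.zip
rem unpack it in place, overwriting the old files
plink -ssh -l USERNAME -pw PASSWORD web.prod.local c:\Sites\Website\unzip -q -o c:\Sites\Website\site.zip -d c:\Sites\Website\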
PS. This is just an example, you can come up with your own solution

Deploying an ASP.NET web site to a remote VPS with Jenkins

I am just starting to get my head wrapped around continuous deployment with Jenkins, but I am running into some roadblocks, and I haven't really found many good, definitive resources on the topic in regard to ASP.NET applications.
I have set up a local build server that successfully pulls down code from an SVN repo and builds it OK with MSBuild. This works well so far, but now I'd like to automate pushing this compiled code to a development server.
My problem is this: from what I gather based on what I've read (which may be an incorrect assumption...), the staging server is typically within the same network as the build server, meaning you can share network resources, servers, etc.
In my case, I want to run the Jenkins server on a remote VPS, then deploy to other remote VPSes (so, essentially individual isolated machines communicating with each other).
I have seen a lot of terms, but I am very new to sys admin / DevOps type skills.
So, my question is this:
Is it even possible, using Jenkins on a VPS, to deploy to any particular server I choose? (I have full access to all of them, so if it's a security thing, I can fix that... but they are not within the same network/domain.)
What is the method to achieve this? I've seen xcopy, Web Deployment Packages (msdeploy), batch scripts, etc. mentioned, but not much guidance on what to use in which situations. Are any of these methods useful for achieving my goal?
Thanks for any help or guidance!
How is your PowerShell? ;) You should check out psake.
psake is a build automation tool written in PowerShell. It avoids the angle-bracket tax associated with executable XML by leveraging the PowerShell syntax in your build scripts. psake has a syntax inspired by rake (aka make in Ruby) and bake (aka make in Boo), but is easier to script because it leverages your existent command-line knowledge. psake is pronounced sake, as in Japanese rice wine. It does NOT rhyme with make, bake, or rake.
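A minimal psake build script might look like the following sketch (the solution name, output path, and target share are hypothetical):
# default.ps1 - run with: Invoke-psake .\default.ps1
Task Default -Depends Build, Deploy

Task Build {
    # exec throws if the external command exits with a non-zero code
    exec { msbuild .\MySite.sln /p:Configuration=Release }
}

Task Deploy -Depends Build {
    # copy the build output to the target server (hypothetical UNC path)
    Copy-Item .\MySite\bin\Release\* -Destination \\web.prod.local\c$\Sites\MySite -Recurse -Force
}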
You can deploy your files to the target server through SSH; Jenkins does support transfers through SSH. All you need to do is set up an SSH server (e.g. CopSSH) and a user account with admin permissions, and configure Jenkins to transfer through SSH:
Create host configurations in the main Jenkins configuration
Add an SSH Server
Add the public key to the remote server (the build server)
Click "Test Configuration"
Save
Configure a job to Publish Over SSH (Post Build Action)
Add a Transfer Set.
Refer to Publish Over SSH for more details.

How to use a virtual machine with automated tests?

I am attempting to set up automated tests for our applications using a virtual machine environment.
What I would like to have is something like the following scenario:
The build server is automatically triggered to start an automated test for the application
A "build" script is then run, which consists of:
Copy application files and a test script to a location accessible by the VM
Start the VM
In the VM, a special application looks in the shared folder and starts the test script
The test script does its job; results are output to the shared folder
The test script ends
The special application then deletes the test script
The special application somehow has the VM manager close the VM and revert to the previous snapshot
When the VM has exited, process the results and send them to the build server.
I am using TeamCity if that matters.
For virtual machines we use VirtualBox, but we are open to others if needed.
Are there any applications/suites that would manage this scenario?
If there are none, I would code it myself; it should be easy, but the one part I am not sure about is the handling of the virtual machine.
What I need is for the VM to close itself after the test and revert to a previous snapshot, since I want it to be in a known state for the next test.
Any pointers?
I have a similar setup running, and I chose to use Vagrant, as it's the same thing our developers were using to normalize the development environment.
The initial state of the virtual machine was scripted using Puppet, but we didn't run the deployment scripts from scratch on each test, only once a day.
You could use Puppet/Chef for everything, but for all other operations on the VM we would use Fabric scripts, as they were used for the real deployment too and somehow fitted how we worked better. In sum, the script would look something like the following:
vagrant up # fire up the vm, and run the puppet provisioning tool
fab vm run_test # run tests on vm
fab local process_result # process results on local shared folder
vagrant destroy # destroy the vm
The advantage is that your developers can also use Vagrant to mimic your production environment without having to take care of that themselves (i.e. changes to your database settings get synced to all your developers' VMs wherever they are), and the same scripts can be used in production too.
VirtualBox does have a COM API. I have no experience with it, but it may be possible to use that. One option would be to have TeamCity fire off a script to do this. I'd suggest starting with NAnt (supported natively by TeamCity) and possibly executing PowerShell if necessary.
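If scripting against the COM API proves awkward, VirtualBox's command-line front end VBoxManage handles the close-and-revert step directly; a sketch, assuming a VM named "TestVM" and a snapshot named "clean-state" (both hypothetical):
# stop the VM, roll back to the known-good snapshot, then start it again headless
VBoxManage controlvm "TestVM" poweroff
VBoxManage snapshot "TestVM" restore "clean-state"
VBoxManage startvm "TestVM" --type headless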
Though I don't have any experience with either, I happen to have heard of a couple of applications in this space recently:
http://www.infoq.com/news/2011/05/virtual_machine_test_harness
http://www.automatedqa.com/techpapers/testcomplete/automated-testing-in-virtual-labs/
