Dart lang app with OpenStack / Docker / Vagrant - nginx

I'm a newbie with these techs (OpenStack / Docker / Vagrant) and not sure I understood them correctly (most likely I did not). My understanding is that they give you something like a portable application that runs with the same development configuration, so the whole development team has the same setup. But I did not understand what comes after development, and how to benefit from them with a Dart app.
My questions are:
1. Correct my understanding.
2. Does the end user need these things installed on his system in order to run my application through them, the same as in the development stage?
3. How can I build/develop/distribute a Dart lang app through them? Maybe because these, as well as Dart, are new, I could not find enough info while googling.
Thanks.

Docker is similar to a virtual machine like VMware or VirtualBox in that it creates an abstraction layer between the host operating system and the operating system running within a Docker container. The difference is that Docker doesn't emulate the entire hardware. The disadvantage is that Docker only runs on Linux and only Linux can be run inside Docker. If your host is an Intel system you can't run an ARM Linux inside the container. (Theoretically you can run VirtualBox inside Docker and run Windows or other OSes in it.)
With Docker you can test your application locally in the same environment as the application will run when deployed.
When you, for example, create an application you want to run in Google Compute Engine, you install and test it locally inside a Docker container and then deploy the Docker container to Google Compute Engine as a whole unit. When there is a bug in the deployed application, you should be able to reproduce it locally as well, because it's just a 1:1 copy. No bug could have been introduced because the operating system or other dependencies were installed differently in the deployment environment than in the development/test environment.
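A sketch of that workflow (the image name and registry are hypothetical; any registry would do):
docker build -t myapp:1.0 .              # build the image from your Dockerfile
docker run --rm -p 8080:8080 myapp:1.0   # test it locally
docker tag myapp:1.0 gcr.io/my-project/myapp:1.0
docker push gcr.io/my-project/myapp:1.0  # ship the exact same unit you tested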
The Dockerfile is a set of instructions to set up a Docker container. If you want to create a new Docker container (for example for a new developer), you just let Docker process the Dockerfile and a new Docker container is created from it. This makes it very easy to create new containers.
If you want to update one dependency to a newer version, or add or remove components to/from the environment, you change the Dockerfile and create a new container from it. This avoids the divergence you get when the containers of different developers/testers/deployments are modified manually.
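As an illustrative sketch of such a Dockerfile for a Dart server app (the google/dart base image and the bin/server.dart entry point are assumptions):
FROM google/dart:latest
WORKDIR /app
# copy dependency manifests first so dependency resolution is cached as its own layer
COPY pubspec.yaml pubspec.lock /app/
RUN pub get
# then copy the application source
COPY . /app
CMD ["dart", "bin/server.dart"]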
I haven't used OpenStack myself but from the web page it seems to provide components and tools to build and manage your own cloud infrastructure.
I also haven't used Vagrant myself, but it seems to help automate a lot of tasks related to creating and managing virtual machines with VMware, VirtualBox, Docker and probably others.
When you have, for example, a server application, it probably consists of a number of components you don't want to run all in one container, but split up into several containers: one container for the database, one for the web server, one for the backend application (created in Dart, for example), and others. It can become cumbersome to manage all those containers; Vagrant helps to automate the related tasks.

Related

Deploy Docker to offline PC

I am new to Docker and have hit a roadblock I am having trouble figuring out.
Here is my scenario.
Current (pre-container)
We do a Visual Studio Online build that outputs an MSI. The build uses the full .Net Framework (we will later go to .Net core).
The MSI is put on a flash drive and installed on an offline (no internet access) computer.
The MSI installs several Windows services that expose web api web services.
Local clients can query those web services.
Desired (containers)
We wish to replace the Windows services with Docker containers.
The installation still needs to be performed offline (no internet access)
We wish to use Docker Community Edition to avoid cost.
One assumption: the offline computer will have docker installed and will have already downloaded the base image "microsoft/aspnet".
To start figuring this out, I simply created a new ASP.NET Web Application from VS 2017. I chose Web Api and to enable docker support. Great, now I have a container running with a web site / service in it. I next wanted to try to figure out how to deploy the container / image. For reference, here is the default Dockerfile that was created when I created the empty project.
# base image with IIS and the full .NET Framework
FROM microsoft/aspnet:latest
# build-time argument pointing at the publish output location
ARG source
# IIS web root inside the container
WORKDIR /inetpub/wwwroot
# copy the published site; defaults to obj/Docker/publish if source is not set
COPY ${source:-obj/Docker/publish} .
I first looked at "docker save". My thinking was that I could save the image as a file and use that to deploy the container. However, because I am using the full .NET Framework, the saved file is 7.7 GB. I understand why it is so large; that image has not only my sample web site in it, but also the microsoft/aspnet image in it too. After some googling, I found references to being able to exclude layers (https://github.com/moby/moby/pull/9304), but it does not appear that "docker save" supports that. Ultimately though, that is what I think I want: to be able to save just my layer to a file.
Am I going down the right path with trying to figure out how to save a layer? We are pretty open on how to accomplish this, but we are not able to deploy a 7.7 GB file for every software update.
Any suggestions on how to do this, especially any that incorporate the VS Online build, are greatly appreciated.
Thanks.
The only way to transfer an image offline is to save it into a tarball using docker save.
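For example (the image name myapp is hypothetical):
docker save -o myapp.tar myapp:latest
# copy myapp.tar to the offline machine (e.g. by flash drive), then there:
docker load -i myapp.tar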
As for the size of the image, the solution is to use a smaller aspnet base image; the one you are using is 7 GB. Choose the smallest of the available aspnet images that is still sufficient for your application.
Another solution is to transfer the source code and build the image on the target machine. In this case, you save microsoft/aspnet:latest to a tarball and transfer it once to the target machine. Whenever you have new updates in the source, you copy the source and the Dockerfile to the target machine and build the image there.
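A sketch of that workflow (the app image name is hypothetical):
# one-time: ship the base image to the offline machine
docker save -o aspnet.tar microsoft/aspnet:latest
docker load -i aspnet.tar
# per update: copy the source and Dockerfile over, then build on the target
docker build -t myapp:latest .
docker run -d --name myapp myapp:latest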

Protecting docker filesystem access

We use docker during development and everything works well. Our software is written in PHP and dockerized with MySQL, Apache and a lot of frameworks and libraries.
For some of our customers we want to ship Docker images so they can test, evaluate and use our software. With Docker images they just need to run the container and they get a fully installed and configured system: very easy!
But: how can we prevent customers from seeing our code by simply attaching to Docker or running execs inside the containers?
Are there techniques to completely lock down every kind of access to the filesystem inside a container? We would just like to access our software via ssh.
It is possible to override almost everything about the construction of an image at runtime using the docker run command. So they wouldn't even need to do exec; they could just override CMD or ENTRYPOINT to bash or whatever. Any time a customer has your code (even compiled / encrypted / etc.), they have your code. If this is really a big deal, think about a SaaS model.
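For illustration (the image and container names are hypothetical), a customer could simply do:
docker run --rm -it --entrypoint /bin/bash yourimage:latest   # interactive shell, full filesystem access
docker cp some-container:/var/www/html ./code                 # or just copy the code out of a running container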

Why do we need to deploy a meteor app instead of just starting it?

As we all know, we can run a meteor app by just typing meteor in a terminal.
By default it will start a server and use port 3000.
So why do I need to deploy it using MUP etc.?
I can configure it to use port 80 or use nginx to route to port 80 for the app. So the port is not the point.
Edit:
Assume meteor is running on a VPS or cloud server with public IP address, not a personal computer.
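For reference, a minimal nginx sketch of that port-80 routing (the server name and upstream port are assumptions):
server {
    listen 80;
    server_name example.com;
    location / {
        proxy_pass http://127.0.0.1:3000;         # the Meteor app
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;   # keep Meteor's websockets working
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}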
MUP does a few extra things you can do yourself:
it 'bundles' the code into a single file, using meteor build bundle
the JavaScript is one file and the CSS another; both are minified and obfuscated, so they're smaller, faster to load, and less easy to decipher on the client.
some packages are also meant to be removed when running in production. For example MeteorToys, the utility toolset to look up collections and much more, is not bundled into the production bundle, as per the instructions in its package. This ensures you don't deploy code with security vulnerabilities (Meteor Toys basically opens up client-side deletes / updates etc. if you're not careful).
So, in short, it installs a minimal version of your site, making sure that what's meant for development only doesn't get pushed to a production environment.
EDIT: One other reason to do this is that you don't need all the Meteor build tools on your production server; they can add up to a lot of stuff, especially if you keep caches going for a while...
I believe it also takes care of hooking up to a remote MongoDB instance (at least it used to be the case on the free meteor site), which is more scalable and fault tolerant than running on the same instance as the web server, as well as provisioning storage etc. if needed.
Basically, to deploy a Meteor app yourself manually, you need to do the following (see the command sketch after this list):
on your dev box:
bundle your app to a tar file with meteor build (using the architecture flag corresponding to the OS you will use)
on the server:
install node v0.10 (or whatever is the current version of node required by Meteor)
you might have to install Fiber#1.0.5 (but I believe this is now part of meteor install already)
untar the bundle, get into bundle/programs/server/ and run npm install
run the server with node main.js in the bundle folder.
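A command sketch of those steps (the app name, database and URL are hypothetical):
meteor build /tmp/build --architecture os.linux.x86_64   # on the dev box
# copy /tmp/build/myapp.tar.gz to the server, then there:
tar -xzf myapp.tar.gz
(cd bundle/programs/server && npm install)
MONGO_URL=mongodb://localhost/myapp ROOT_URL=http://example.com PORT=80 node bundle/main.js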
The purpose of deploying an application is that you are situating your project on hardware outside of your local machine. For example, if you deploy an application on Heroku, you create a repository on Heroku's systems and that code base is used to serve your application off of their servers.
If you just start an application on your personal system, you will suffer from limited network and resource availability, and your machine's computing time is wasted at non-peak hours because it must stay attentive for additional users while having no alternative tasks. Hosting providers provide resources as needed, and their diverse client base allows their systems to work around the clock on a global scale.

How to scale Docker containers in production

So I recently discovered this awesome tool, and it says
Docker is an open-source project to easily create lightweight, portable, self-sufficient containers from any application. The same container that a developer builds and tests on a laptop can run at scale, in production, on VMs, bare metal, OpenStack clusters, public clouds and more.
Let's say I have a Docker image which runs Nginx and a website that connects to an external database. How do I scale the container in production?
Update: 2019-03-11
First of all thanks for those who have upvoted this answer over the years.
Please be aware that this question was asked in August 2013, when Docker was still a very new technology. Since then: Kubernetes was launched in June 2014, Docker Swarm was integrated into the Docker engine in February 2015, Amazon launched its container solution, ECS, in April 2015, and Google launched GKE in August 2015. It's fair to say the production container landscape has changed substantially.
The short answer is that you'd have to write your own logic to do this.
I would expect this kind of feature to emerge from the following projects, built on top of docker, and designed to support applications in production:
flynn
deis
coreos
Mesos
Update 1
Another related project I recently discovered:
maestro
Update 2
The latest OpenStack release contains support for managing Docker containers:
Docker Openstack
Paas zone within OpenStack
Update 3
System for managing Docker instances
Shipyard
And a presentation on how to use tools like Packer, Docker and Serf to deliver an immutable server infrastructure pattern
FutureOps with Immutable Infrastructure
Slides
Update 4
A neat article on how to wire together docker containers using serf:
Decentralizing Docker: How to use serf with Docker
Update 5
Run Docker on Mesos using the Marathon framework
Mesosphere Docker Developer Tutorial
Update 6
Run Docker on Tsuru, as it supports docker-cluster and segregated scheduler deploys
http://blog.tsuru.io/2014/04/04/running-tsuru-in-production-scaling-and-segregating-docker-containers/
Update 7
Docker-based environments orchestration
maestro-ng
Update 8
decking.io
Update 9
Google kubernetes
Update 10
Red Hat has refactored its OpenShift PaaS to integrate Docker
Project Atomic
Geard
Update 11
A NodeJS lib wrapping the Docker command line and managing it from a JSON file.
docker-cmd
Update 12
Amazon's new container service enables scaling in the cluster.
Update 13
Strictly speaking, Flocker does not "scale" applications, but it is designed to fulfil a related function of making stateful containers (e.g. those running database services) portable across multiple Docker hosts:
https://clusterhq.com/
Update 14
A project to create portable templates that describe Docker applications:
http://panamax.io/
Update 15
The Docker project is now addressing orchestration natively (see the announcement and the sketch after this list):
Docker machine
Docker swarm
Docker compose
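For illustration of where this went: with swarm mode, built into the Docker engine since 1.12, scaling a service is a one-liner (the service name web is hypothetical):
docker swarm init                                             # turn this engine into a swarm manager
docker service create --name web --replicas 2 -p 80:80 nginx
docker service scale web=5                                    # scale out to 5 containers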
Update 16
Spotify Helios
See also:
https://blog.docker.com/tag/helios/
Update 17
The OpenStack project now has a new "container as a service" project called Magnum:
https://wiki.openstack.org/wiki/Magnum
It shows a lot of promise and enables easy setup of Docker orchestration frameworks like Kubernetes and Docker Swarm.
Update 18
Rancher is a project that is maturing rapidly
http://rancher.com/
Nice UI and a strong focus on hybrid Docker infrastructures
Update 19
The Lattice project is an offshoot of Cloud Foundry for managing container clusters.
Update 20
Docker recently bought Tutum:
https://www.docker.com/tutum
Update 21
Package manager for applications deployed on Kubernetes.
http://helm.sh/
Update 22
Vamp is an open source and self-hosted platform for managing (micro)service oriented architectures that rely on container technology.
http://vamp.io/
Update 23
A Distributed, Highly Available, Datacenter-Aware Scheduler
https://www.nomadproject.io/
From the guys that gave us Vagrant and other powerful tools.
Update 24
Container hosting solution for AWS, open source and based on Kubernetes
https://supergiant.io/
Update 25
Apache Mesos-based container hosting located in Germany
https://sloppy.io/features/#features
And Docker Inc. also provides a container hosting service called Docker Cloud
https://cloud.docker.com/
Update 26
Jelastic is a hosted PaaS service that scales containers automatically.
Deis automates scaling of Docker containers (among other things).
Deis (pronounced DAY-iss) is an open source PaaS that makes it easy to deploy and manage applications on your own servers. Deis builds upon Docker and CoreOS to provide a lightweight PaaS with a Heroku-inspired workflow.
Here is the developer workflow:
deis create myapp # create a new deis app called "myapp"
git push deis master # built with a buildpack or dockerfile
deis scale web=16 worker=4 # scale up docker containers
Deis automatically deploys your Docker containers across a CoreOS cluster and configures the Nginx routers to route requests to healthy Docker containers. If a host dies, containers are automatically restarted on another host in seconds. Just browse to the proxy URL or use deis open to hit your app.
Some other useful commands:
deis config:set DATABASE_URL= # attach to a database w/ an envvar
deis run make test # run ephemeral containers for one-off tasks
deis logs # get aggregated logs for troubleshooting
deis rollback v23 # rollback to a prior release
To see this in action, check out the terminal video at http://deis.io/overview/. You can also learn about Deis concepts or jump right into deploying your own private PaaS.
You can try Tsuru. Tsuru is an open source PaaS inspired by Heroku, and it is already running some products in production at Globo.com (the internet arm of the biggest broadcast television company in Brazil).
It manages the entire flow of an application, from container creation through deploy and routing (with hipache), with many nice features such as Docker clusters, scaling of units, segregated deploys, etc.
Take a look at our documentation below:
http://docs.tsuru.io/
Here is our post covering our environment:
http://blog.tsuru.io/2014/04/04/running-tsuru-in-production-scaling-and-segregating-docker-containers/
Have a look at Rancher.com - it can manage multiple Docker hosts and much more.
A sensible approach to scaling Docker could be:
Each service will be a docker container
Intra-container service discovery managed through links (a new feature since Docker 0.6.5; see the sketch after this list)
Containers will be deployed through Dokku
Applications will be managed through Shipyard which in its turn is using hipache
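A sketch of the links part (image names are hypothetical; links are a legacy feature in modern Docker):
docker run -d --name db mysql                  # start the database container
docker run -d --name web --link db:db mywebapp
# inside "web", Docker injects env vars such as DB_PORT_3306_TCP_ADDR pointing at "db"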
Another open source Docker project, from Yandex:
cocaine
The OpenShift guys also created a project. You can find more information here, try a test container, and find detailed info here.
The only problem is that the solution is Red Hat-centric for now :)
While we're big fans of Deis (deis.io) and are actively deploying to it, there are other Heroku-like, PaaS-style deployment solutions out there, including:
Longshoreman from the Wayfinder folks:
https://github.com/longshoreman/longshoreman
Decker from the CloudCredo folks, using CloudFoundry:
http://www.cloudcredo.com/decker-docker-cloud-foundry/
As for straight-up orchestration, NewRelic's open source Centurion project seems quite promising:
https://github.com/newrelic/centurion
Take a look also at etcd and Consul.
Panamax: Docker Management for Humans. panamax.io
Fig: Fast, isolated development environments using Docker. fig.sh
One option not mentioned in other posts is Helios. It is built by Spotify and does not try to do too much.
https://github.com/spotify/helios

How to use a virtual machine with automated tests?

I am attempting to set up automated tests for our applications using a virtual machine environment.
What I would like to have is something like the following scenario:
Build server is automatically triggered to start an automated test for the application
A "build" script is then run which consist of:
Copy application files and a test script to a location accessible by the VM
Start the VM
In the VM, a special application looks in the shared folder and starts the test script
The test script does its job; results are output to the shared folder
Test script ends
The special application then deletes the test script
The special application somehow has the VM manager close the VM and revert to the previous snapshot
When the VM has exited, process the result and send to build server.
I am using TeamCity if that matters.
For virtual machines, we use VirtualBox but we are open to any other if needed.
Is there any applications/suite that would manage this scenario?
If there are none, then I will code it myself; it should be easy, but the one part I am not sure about is the handling of the virtual machine.
What I need is for the VM to close itself after the test and revert to a previous snapshot, since I want it to be in a known state for the next test.
Any pointers?
I have a similar setup running, and I chose to use Vagrant as it's the same thing our developers were using to normalize the development environment.
The initial state of the virtual machine was scripted using Puppet, but we didn't run the deployment scripts from scratch on each test, only once a day.
You could use Puppet/Chef for everything, but for all other operations on the VM we used Fabric scripts, as they were also used for the real deployment and somehow fitted how we worked better. In sum, the script looked something like the following:
vagrant up # fire up the vm, and run the puppet provisioning tool
fab vm run_test # run tests on vm
fab local process_result # process results on local shared folder
vagrant destroy # destroy the vm
The advantage is that your developers can also use Vagrant to mimic your production environment without having to take care of it themselves (i.e. changes to your database settings get synced to all your developers' VMs wherever they are), and the same scripts can be used in production too.
VirtualBox does have a COM API. I have no experience with it, but it may be possible to use that. One option would be to have TeamCity fire off a script to do this. I'd suggest starting with NAnt (supported natively by TeamCity) and possibly executing PowerShell if necessary.
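As an alternative to the COM API, the VBoxManage command line offers the same control; a sketch assuming a VM named TestVM with a snapshot named clean-state:
VBoxManage controlvm "TestVM" poweroff              # stop the VM after the test run
VBoxManage snapshot "TestVM" restore "clean-state"  # revert to the known-good snapshot
VBoxManage startvm "TestVM" --type headless         # boot it again for the next test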
Though I don't have any experience with either, I happen to have heard of a couple of applications in this space recently:
http://www.infoq.com/news/2011/05/virtual_machine_test_harness
http://www.automatedqa.com/techpapers/testcomplete/automated-testing-in-virtual-labs/
