I currently use a local dev setup with Vagrant to manage my VMs per platform, so I have a VM/Vagrantfile for WordPress, Laravel, static sites, etc. I use Scotch Box, but with a multiple-vhost Apache setup, synced to my local files on the host machine. This works, but the performance is obviously not great, especially with so many projects on each VM. I have also played around with using one VM per project, but I want something better.
I have done some reading about Docker, and about using Vagrant with Docker, and would like to go that route. The problem is that I keep running into issues, and I have tried several different approaches. I did manage to get a setup going where I used a host VM to run Docker and then spun up a container for Nginx; initially I had some port-forwarding issues, but I resolved those.
My question is, how have some of you gone about setting this up? What does your Vagrantfile look like for the host and for the project? What other scripts are you loading? How are you handling multiple projects, file sharing, and hostnames?
I have read so many different questions/answers and walkthroughs, and none of them addresses specifically what I am asking, so any discussion on the topic is greatly appreciated!
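For context, the setup I did get working boils down to something like this (the container name, port, and paths are placeholders for my actual layout):

    # On the Docker host VM: run Nginx in a container, forwarding host port
    # 8080 to container port 80 and mounting the synced project folder as the
    # (read-only) document root.
    docker run -d --name mysite-web \
        -p 8080:80 \
        -v /vagrant/sites/mysite:/usr/share/nginx/html:ro \
        nginx

    # On my machine, a hostname then maps to the VM's IP in /etc/hosts, e.g.:
    #   192.168.33.10  mysite.test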
I recently migrated my site to Lightsail. My DB has about 2 million records and is slightly under 1 GB. I connected to the DB through an external client, and while I was connected but not running any queries, the site became slow.
Then I tried uploading some images through FTP; at that point, the site came to a halt again and wouldn't even open.
Looking at the metrics, I got into the burstable zone here and there, but it's not sustained.
Are there any tools I can use to diagnose what the problem is?
What size instance did you deploy? Also, is this Linux or Windows? It would be good to look at the metrics from the Lightsail dashboard, but it would also be good to know what's running inside your instance. I'd be curious to know whether your instance is overburdened (undersized) or not.
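If it's Linux, a few standard tools run over SSH will show what's consuming resources inside the instance (these are generic Linux commands, nothing Lightsail-specific):

    # Show load average and which processes are using CPU and memory
    top

    # Show free memory and swap usage; a database on a small instance
    # often starts swapping under load
    free -m

    # Show disk usage, and CPU/IO statistics sampled 5 times at 5s intervals
    df -h
    vmstat 5 5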
SFTP is generally slower than FTP due to the security built into the protocol. The data is encrypted, which takes time, but perhaps more importantly, the protocol itself functions differently; it's not "streamed" like FTP.
I tried a lot of different tools like Acquia, but most of them support only MySQL/MariaDB. What's the easiest way to run Drupal with PostgreSQL? Do I have to create a VirtualBox VM from scratch?
You don't have to create a VM box from scratch; just find one that fulfills Drupal's requirements. Better yet, there are some suggested by the Drupal community:
https://www.drupal.org/docs/develop/local-server-setup/virtual-machine-development-environments
I'm personally using the "geerlingguy/ubuntu1804" box with Vagrant, additionally configured with Ansible (written by somebody else).
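Getting the box itself running takes only a couple of commands; the provisioning on top of it (PHP, PostgreSQL, and the rest of Drupal's requirements) is what the Ansible configuration handles in my case:

    # Generate a Vagrantfile for the box and boot the VM
    vagrant init geerlingguy/ubuntu1804
    vagrant up

    # SSH in to provision it (or wire up Ansible as a Vagrant provisioner)
    vagrant ssh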
And IMHO it's better to use a VM, for at least two strong reasons:
If someone else is working on the project with you, with a VM they will also have an identical working environment.
When you decide to launch your site, it will most likely run on a Linux box, and it's better to avoid "unpredictable" problems when you move the project to another server.
We have an existing VMware virtualization setup containing 4 hosts, each host running nearly 6 VMs. Now we are planning to deploy OpenStack. Which OpenStack version is good to deploy in a VM? I have installed CentOS 7 on the VM.
I need to confirm which version of OpenStack is good for a real-world environment.
If anyone knows, please suggest a version and an installation URL; that would help my understanding a lot.
Get started with DevStack, which is easier to install, as you just have to run one script (stack.sh) and it will deploy all the services on the same machine. You can use that to practice creating VMs, making security groups, and assigning floating IPs to the VMs. After that, try to configure a multinode architecture; I would suggest that you get a Ravello account (https://www.ravellosystems.com/) for that instead of using your own servers. This link might help you with the configuration (https://docs.oracle.com/cd/E36784_01/html/E54155/archover.html#scrolltoc).
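For a rough idea, a single-node DevStack install looks like this (the password values in local.conf are placeholders, and the repository URL assumes the current hosting location; check the DevStack documentation for the full set of options):

    # Clone DevStack and create a minimal local.conf with service passwords
    git clone https://opendev.org/openstack/devstack
    cd devstack
    cat > local.conf <<'EOF'
    [[local|localrc]]
    ADMIN_PASSWORD=secret
    DATABASE_PASSWORD=$ADMIN_PASSWORD
    RABBIT_PASSWORD=$ADMIN_PASSWORD
    SERVICE_PASSWORD=$ADMIN_PASSWORD
    EOF

    # Run the install script; it deploys all OpenStack services on this machine
    ./stack.sh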
Search "openstack multinode deployment" on google. You will have plenty of links.
I work for a small web startup. They have decided to use OpenStack as IaaS and, on top of it, Cloud Foundry as PaaS. I am trying to learn this technology stack, but I am really confused even after going through the documentation and related material on the web.
What do I want?
I have a website that currently runs on a RHEL system (an AWS instance), with nginx as the web server. I want to shift this to the OpenStack/Cloud Foundry stack because the company's management has decided to do so. They also want me to evaluate whether I can put Docker to use anywhere.
From my understanding, OpenStack (IaaS) will provide me with everything related to hardware and software needs, and Cloud Foundry will help me on the development front.
Now, where does nginx (or any web server) come into the picture? Is it part of OpenStack, or is it part of Cloud Foundry?
On my AWS RHEL system, do I just install OpenStack and Cloud Foundry, and then push my app without bothering at all about what happens beneath? I am really confused; please help out.
And is there anywhere I can utilize Docker in this setup?
You would generally not deploy OpenStack on top of AWS. OpenStack is similar to AWS in that it provides a service for you to create and destroy virtual machine instances, manage networking between and around your VMs, attach and detach block devices to instances, etc. In other words, both are services for managing "infrastructure", where "infrastructure" here means a virtualized datacenter: at its core, a bunch of hardware running hypervisors that allows you to regard the datacenter as a pool of virtual machines that can be spun up and down on demand, rather than a bunch of "static" physical machines.
AWS is Infrastructure-as-a-Service provided by Amazon, so you don't have to install AWS yourself; you can just start using it to provision VM instances within Amazon's datacenters. OpenStack is software you install yourself (or pay a vendor to manage for you) on hardware you own or pay for yourself, and once installed, OpenStack provides a similar service/interface to AWS.
With a Platform-as-a-Service, you concern yourself more with your application code and "just pushing it", and less with what's happening on the underlying machine. You don't have to worry as much about the underlying OS, about making sure you have the right runtime and code dependencies for your application, and generally don't have to care about the web server that's serving your code. And you get many more higher-level features, e.g. the easy ability to scale vertically or horizontally, dynamic routing, automatic log aggregation, automatic health management, etc.
As far as how nginx fits in, it depends on how you're using nginx and what kind of application you have. Cloud Foundry has a couple of ways of dealing with applications.
One is the buildpack model, where you simply push your source code to the platform, and it will automatically detect the appropriate runtime and dependencies for your application. For instance, if your application is a Ruby application, it will automatically detect this and, by default, run the application using the WEBrick server. However, you can choose other Ruby web servers such as Phusion Passenger. [1]
If your application is primarily serving static content, it will use nginx as the web server. [2]
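For example, to push a plain static site, you drop an empty file named Staticfile into the project root so the staticfile buildpack (with its bundled nginx) is selected (the app and directory names here are placeholders):

    # An empty Staticfile in the app root tells Cloud Foundry to stage the app
    # with the staticfile buildpack, which serves the files with nginx
    cd my-static-site
    touch Staticfile
    cf push my-static-site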
Another is using Docker. You can deploy applications based on Docker images on Cloud Foundry, in which case you could have nginx running alongside your application inside the container, or not; it depends on whether you still need nginx. Pushing a Docker application is as simple as:
cf push trainingwebapp --docker-image training/webapp -c 'python app.py'
Here, this uses the sample Hello World web app from the Docker documentation. [3]
[1] https://docs.cloudfoundry.org/buildpacks/ruby/ruby-prod-server.html
[2] https://docs.cloudfoundry.org/buildpacks/staticfile/index.html
[3] https://docs.docker.com/engine/userguide/containers/usingdocker/
With OpenStack's architecture, is it possible to, for instance, have a PowerPC64 (AltiVec) machine, an Intel Core Duo machine, and an ARMv6 machine all in the same cluster?
Or is this impossible because of restrictions on building buildpacks when deploying to multiple architectures?
EDIT: Whoops, I meant OpenStack, not OpenShift ;)
The answer above (from developercorey) is correct.
Whether this suits you, though, depends on how it's managed and what you're trying to achieve. Typically, when you add servers with different physical attributes such as CPU, disk, network cards, etc., you group them into different host aggregates.
By default, when you launch a VM, the scheduler will try to find a suitable host, but you can also tag it. For example, if your VM requires a lot of disk I/O, you might want to place it on a host that has SSD drives, so you can put those hosts into an 'SSD' aggregate and then, when launching your VM, make sure it goes to a host in that aggregate.
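With the OpenStack CLI, that looks roughly like this (the aggregate name, host name, flavor sizing, and the 'ssd' property are examples; the AggregateInstanceExtraSpecsFilter must be enabled in the scheduler for the flavor property to take effect):

    # Create an aggregate for SSD-backed hosts and add a compute node to it
    openstack aggregate create --property ssd=true ssd-hosts
    openstack aggregate add host ssd-hosts compute1

    # Create a flavor that requires the ssd property; VMs launched with this
    # flavor are scheduled only onto hosts in the matching aggregate
    openstack flavor create --ram 4096 --disk 40 --vcpus 2 m1.ssd
    openstack flavor set --property aggregate_instance_extra_specs:ssd=true m1.ssd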
If you're just trying to make the most of the hardware you have, then I don't see any issue with mixing them.
I don't think they have to be, but I do believe that they only build packages for one or two architectures, so I'm not sure how many options you really have there.