I am evaluating a sample application with WebSphere 8 running in Docker.
https://hub.docker.com/r/ibmcom/websphere-traditional/
Following this tech note, I want to change some server settings.
http://www-01.ibm.com/support/docview.wss?uid=swg21614221
But as you know, a Docker container throws away any changes made to its writable storage once the container is removed and recreated.
So my question is: how can I preserve these server-settings changes across container recreation, for example by using a Docker volume?
If you want to make changes via the UI (admin console), then consider using docker commit to save your modified container as a new image, and then spawn containers from that new image: https://docs.docker.com/engine/reference/commandline/commit/
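For example (a sketch: the container name, image tag, and ports are placeholders; check the image's documentation for the ports it actually exposes):

    # after changing settings in the admin console of the running container:
    docker commit was-server mycompany/websphere-custom:1.0
    # new containers started from the committed image keep those settings
    docker run -d --name was-custom -p 9043:9043 -p 9443:9443 mycompany/websphere-custom:1.0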
What you want is to customize the image. The simplest way is to have your own Dockerfile with FROM ibmcom/websphere-traditional:latest, then RUN whatever wsadmin commands (with -conntype NONE) you need to perform the customization, including your application deployment.
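A minimal sketch of that approach (the script name is hypothetical, and the AppSrv01 profile path is the image's documented default; verify both against the tag you use):

    FROM ibmcom/websphere-traditional:latest
    # a Jython script containing your settings changes from the tech note
    COPY server-settings.py /work/server-settings.py
    # -conntype NONE runs wsadmin offline, so no server needs to be running at build time
    RUN /opt/IBM/WebSphere/AppServer/profiles/AppSrv01/bin/wsadmin.sh \
        -lang jython -conntype NONE -f /work/server-settings.py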
Related
I do most of my dev work locally, but occasionally I have to switch over to using a preconfigured JupyterLab instance on GCP. The way things are set up now, I'm unable to SSH into these notebook servers, and the only way for me to interact with them is through the JupyterLab integrated terminal.
I have custom save-hook functions set up in my local environment for testing, linting, etc., which comes in really handy for keeping everything in a production-ready state. I'd like to set up a sort of "environment as code" system where, when I pull updated code into a new environment, the customized configuration moves with it and takes effect automatically. I suppose the proper way to do this would be to use a Docker image and rebuild the cloud instance from scratch every time it needs an update, but that seems like overkill for such minor changes. (Also, the Google Docker images don't work that well on my M1 MacBook.)
I'm new to Docker and was wondering if it was possible to set the following up:
I have my personal computer, on which I'm working on my WordPress site via a Dockerfile. All is well and the data is persistent.
What I'd like to do is save that work, possibly to Docker Hub or GitHub (I assume the updated images would be backed up on my Docker Hub account), and then work on a totally different computer, picking up where I left off.
Is that possible?
Generally you should be able to set up your Docker containers such that there is no persistent state inside the container at all; you can freely delete and recreate the container without losing data. The best and easiest case of this is a container that just depends on some external database, in which case you don’t need to do anything.
If you have something like a WordPress installation with local customizations, or something that stores persistent data in the filesystem, you should use the docker run -v option or the Docker Compose volumes: option to inject parts of the host filesystem into the container. Those volumes then need to be backed up (and, for all that the Docker documentation endorses named volumes, if you use host directories your normal backup solution will work fine).
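For example, with docker run (the host path is illustrative; the official wordpress image keeps its document root under /var/www/html):

    # persist only the mutable part of the WordPress tree on the host
    docker run -d -p 8080:80 \
      -v /srv/wordpress/wp-content:/var/www/html/wp-content \
      wordpress:latest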
In short, I’d recommend the following (a sketch tying it together appears after the list):
Build a custom image for your application, and check the Dockerfile and any supporting artifacts into source control. They don’t need to be separately backed up; even if you lose your image you can docker build again.
Inject customizations using bind mounts, and check those customizations into source control. They don’t need to be separately backed up.
Store mutable data using volumes or bind mounts, and back these up normally.
Containers are disposable. You don’t need to back up a container per se; you should always be able to recreate it from the artifacts above.
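Putting those points together, a minimal Docker Compose sketch for a WordPress setup (service names, credentials, and paths are placeholders):

    version: "3.8"
    services:
      db:
        image: mysql:8.0
        environment:
          MYSQL_DATABASE: wordpress
          MYSQL_USER: wordpress
          MYSQL_PASSWORD: example      # placeholder; don't commit real credentials
          MYSQL_ROOT_PASSWORD: example
        volumes:
          - db_data:/var/lib/mysql     # mutable data: named volume, back it up
      wordpress:
        image: wordpress:latest
        depends_on: [db]
        ports:
          - "8080:80"
        environment:
          WORDPRESS_DB_HOST: db
          WORDPRESS_DB_NAME: wordpress
          WORDPRESS_DB_USER: wordpress
          WORDPRESS_DB_PASSWORD: example
        volumes:
          - ./wp-content:/var/www/html/wp-content   # customizations: bind mount, in source control
    volumes:
      db_data: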
I am using Meteor server v1.8.
I want to create a backup server.
If the main server goes down, users should be automatically transferred to a backup server to avoid any downtime.
How can I achieve such behaviour?
Thanks in advance.
You could use process-spawning tools like Phusion Passenger to make your application fail-safe. If your app crashes, Passenger restarts it immediately.
Some resources on that:
https://github.com/phusion/passenger/wiki/Phusion-Passenger:-Meteor-tutorial
https://www.phusionpassenger.com/docs/tutorials/installation/meteor/
Or use some container orchestration and make your app available on more than one machine. If one instance fails, your app is still available.
In both cases: install your MongoDB on a separate server. This is also why you define the MONGO_URL environment variable for your Meteor deployment: it keeps your app process separate from the database process.
In such a setup you won't need to "submit" data to a separate server on failure, which I think might not even be a realistic approach in a production environment.
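As a sketch (the hostnames, image name, and port are placeholders; MONGO_URL, ROOT_URL, and PORT are the standard Meteor server environment variables), each app instance is then started the same way against the shared, external database:

    # run the same app image on two machines; both point at the shared MongoDB host
    docker run -d -p 3000:3000 \
      -e MONGO_URL='mongodb://db.internal.example:27017/myapp' \
      -e ROOT_URL='https://app.example.com' \
      -e PORT=3000 \
      myorg/meteor-app:1.8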
We use Docker during development and everything works well. Our software is written in PHP and dockerized with MySQL, Apache, and a lot of frameworks and libraries.
For some of our customers we want to ship Docker images so that they can test, evaluate, and use our software. With Docker images they just need to run the container and they get a fully installed and configured system - very easy!
But: how can we prevent customers from seeing our code by simply attaching to the container or running exec inside it?
Are there techniques to completely lock down every kind of access to the filesystem inside a container? We would like access to our software to be possible only via SSH.
It is possible to override almost everything about the construction of an image at runtime using the docker run command. So they wouldn't even need exec; they could just override the cmd or entrypoint to bash or whatever. Any time a customer has your code (even compiled, encrypted, etc.), they have your code. If this is really a big deal, think about a SaaS model.
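To illustrate the point (the image name and path are placeholders), a customer could simply run:

    # start a shell in the image instead of your application
    docker run --rm -it --entrypoint /bin/sh yourcompany/yourapp:latest
    # or extract the code without running the container's process at all
    docker create --name tmp yourcompany/yourapp:latest
    docker cp tmp:/var/www/html ./extracted-code
    docker rm tmp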
I'm a newbie with these technologies (OpenStack / Docker / Vagrant) and I'm not sure I've understood them correctly (most likely I haven't). My understanding is that they give you something like a portable application that runs with the same development configuration, to ensure the whole development team has the same setup. But I don't understand what happens after development, and how to benefit from them with a Dart app.
My questions are:
1. Please correct my understanding.
2. Does the end user need to have these tools installed on their system, and to run my application through them, the same as in the development stage?
3. How can I build/develop/distribute a Dart app through them? Maybe because these tools, as well as Dart, are new, I could not find enough info while googling.
Thanks.
Docker is similar to a virtual machine like VMware or VirtualBox in that it creates an abstraction layer between the host operating system and the operating system running within a Docker container. The difference is that Docker doesn't emulate the entire hardware. The disadvantage is that Docker only runs on Linux, and only Linux can be run inside Docker. If your host is an Intel system, you can't run an ARM Linux inside the container. (Theoretically you can run VirtualBox inside Docker and run Windows or other OSes in it.)
With Docker you can test your application locally in the same environment in which it will run when deployed.
When you, for example, create an application you want to run on Google Compute Engine, you install and test it locally inside a Docker container and then deploy the Docker container to Google Compute Engine as a whole unit. When there is a bug in the deployed application, you should be able to reproduce it locally as well, because it's just a 1:1 copy. No bug could have been introduced because the operating system or other dependencies were installed differently in the deployment environment than in the development/test environment.
The Dockerfile is a set of instructions for setting up a Docker container. If you want to create a new Docker container (for example, for a new developer), you just let Docker process the Dockerfile and a new Docker container is created from it. This makes it easy to create new containers.
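A toy Dockerfile for a Dart server app, as a sketch (the dart base image name varies by registry and Dart version, and the bin/server.dart entry point is an assumption):

    # base image with the Dart SDK
    FROM dart:stable
    WORKDIR /app
    # fetch dependencies first so they are cached between builds
    COPY pubspec.* ./
    RUN dart pub get
    # copy the sources and compile the (hypothetical) entry point to a native binary
    COPY . .
    RUN dart compile exe bin/server.dart -o bin/server
    CMD ["/app/bin/server"]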
If you want to update a dependency to a newer version, or to add or remove components to/from the environment, you change the Dockerfile and create a new container from it. This way you avoid the situation where manual additions to and removals from an existing container let the containers of different developers/testers/deployments diverge from each other.
I haven't used OpenStack myself, but from the web page it seems to provide components and tools to build and manage your own cloud infrastructure.
I also haven't used Vagrant myself, but it seems to help automate a lot of tasks related to creating and managing virtual machines and containers (VMware, VirtualBox, Docker, and probably others).
A server application, for example, probably consists of a number of components that you don't want to run all in one container, but rather split up into several containers: one container for the database, one for the web server, one for the backend application (created in Dart, for example), and so on. It can become cumbersome to manage all those containers; Vagrant helps to automate the related tasks.
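Docker Compose is another tool that addresses exactly this kind of multi-container split; a sketch of such a setup (service names and images are hypothetical):

    version: "3.8"
    services:
      db:
        image: postgres:16
        environment:
          POSTGRES_PASSWORD: example   # placeholder
      backend:
        build: ./backend               # e.g. the Dart server from the Dockerfile above
        depends_on: [db]
      web:
        image: nginx:latest
        ports: ["8080:80"]
        depends_on: [backend]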