I'm trying to deploy a Docker image on AWS Elastic Beanstalk, running on a single instance for now. It all works fine apart from WebSockets, which I am using through Socket.IO.
Another post suggests removing nginx, but that is either no longer possible or simply not an option for Docker deployments.
I have a Python script that changes the nginx configuration to allow WebSocket connections. When I ssh into the instance and run that script, it works. However, that part of the nginx configuration does not exist yet when ebextensions are run, so I cannot run the script automatically.
If you want to try it yourself, I am deploying databench_examples. It works when you deploy with eb init and eb start, then ssh into the instance, go to /var/app/current, and run sudo python nginx_socketio_conf.py, which changes the nginx configuration file. Without that step, you see a 500 error in the browser console for the Socket.IO handshake when running the simplepi analysis.
You're correct that the nginx configuration file does not exist when ebextensions are run. Here's why: that config file is dynamically generated after the application is deployed, because the port mapping for the Docker container isn't known until after the container starts. So your Python script, executed by ebextensions, doesn't have a config file to operate on.
Another conventional approach, writing an nginx config file to /etc/nginx/conf.d, doesn't work either, because the location directive has to live inside the server block in the sites-enabled config. So that's a no-go.
I created a PR to illustrate an approach that will work: https://github.com/svenkreiss/databench_examples/pull/3 It uses an undocumented technique that drops the Python nginx-mutation script into the right place in Elastic Beanstalk's hooks directory. The script is then executed by Elastic Beanstalk immediately after the nginx configuration is generated (Elastic Beanstalk runs executable scripts in the hooks subdirectories in alphabetical order, hence the 01_ prefix).
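For reference, a minimal sketch of the idea in .ebextensions form. The hook path /opt/elasticbeanstalk/hooks/appdeploy/enact/ and the file name are my assumptions based on the older Docker platform layout; see the PR for the exact placement:

files:
  "/opt/elasticbeanstalk/hooks/appdeploy/enact/01_nginx_socketio.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/bin/bash
      # Runs after Elastic Beanstalk has generated the nginx config,
      # so the file the mutation script edits now exists.
      python /var/app/current/nginx_socketio_conf.py
      service nginx restart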
Thanks,
Evan
Related
I have a Meteor app that needs to periodically read a file located on the host's file system, outside of the app package. I am using Node's fs module to accomplish this, and it works fine on my (macOS) development machine.
However, when I run mup deploy to deploy it to my (Ubuntu 14) server, mup returns the following error after starting Meteor:
Error: ENOENT: no such file or directory, open '/home/sam/data/all_data.json'
at Object.fs.openSync (fs.js:652:18)
at Object.fs.readFileSync (fs.js:553:33)
Does anyone know why this might be happening?
You should follow the mup documentation closely. Have you seen the volumes setup in the mup config? Try that to solve your issue.
Reason: mup runs the app in Docker without any access to the host file system unless specified. The volumes setup does this for you in a mup deployment.
Below is the relevant part of the mup config from http://meteor-up.com/docs.html ("Everything Configured"); read more there to get a better idea. A sketch for your specific path follows after the snippet.
  name: 'app',
  path: '../app',
  // lets you add docker volumes (optional). Can be used to
  // store files between app deploys and restarts.
  volumes: {
    // passed as '-v /host/path:/container/path' to the docker run command
    '/host/path': '/container/path',
    '/second/host/path': '/second/container/path'
  },
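Applied to your case, a minimal sketch (assuming this sits in the app section of your mup.js; mapping the host path to the identical container path means your fs.readFileSync call keeps working unchanged):

app: {
  name: 'app',
  path: '../app',
  volumes: {
    // make /home/sam/data on the host visible at the same path inside the container
    '/home/sam/data': '/home/sam/data'
  },
  // ...rest of your app config
},

Then run mup deploy again so the container is recreated with the new volume.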
The user running your Meteor build on the server needs read access to that folder. I would store the file in a directory other than the home one, because you don't want to mess it up. Either way, something like chmod -R a+rX /home/sam/data should give any user read access to all files in that directory (the capital X adds execute only to directories so they can be traversed; a plain chmod -R 444 would also strip the directories' execute bits and break access). You are probably running Meteor as your local user (sam?) in development mode on your macOS machine, but the built app gets run as meteor or some other user on Ubuntu, because of mup and forever.
I installed GitLab with my own Nginx (I disabled the built-in Nginx). Everything worked fine until I tried to use CI.
After I commit, the pipeline starts and gives this message:
Running with gitlab-runner 10.1.0 (c1ecf97f)
  on the runner for wxEditor (fccd792d)
Using Docker executor with image node:8.8.1 ...
Using docker image sha256:c3a98397674933da01d4fc1e90dc880b1fb0760fedc43515e7660dcc58c6af28 for predefined container...
Pulling docker image node:8.8.1 ...
Using docker image node:8.8.1 ID=sha256:d575940ea42b064ac3fa5b00c36ec099968ce2ae542488d1d8673f100dc0a622 for build container...
Running on runner-fccd792d-project-2-concurrent-0 via vps143760...
Cloning repository...
Cloning into '/builds/zhaozhong/wxEditor'...
fatal: unable to access 'http://gitlab-ci-token:xxxxxxxxxxxxxxxxxxxx@gitlab.dingshao.cc/zhaozhong/wxEditor.git/': The requested URL returned error: 500
ERROR: Job failed: exit code 1
I've been stuck here for a whole day.
GitLab EE 10.1.0, Ubuntu 16.04
Thanks
Problem solved!
I ran GitLab behind my existing Nginx using an old Nginx config file, but GitLab needs Nginx + Passenger to work.
After reading this answer, I reinstalled Nginx together with Passenger and copied the bundled Nginx config file gitlab-http.conf into Nginx's sites-enabled folder; everything works perfectly now.
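For anyone repeating this, roughly the commands involved (a sketch; the source path assumes an Omnibus-style install, so adjust it to wherever your bundled config actually lives):

# copy the config that was generated for the bundled Nginx
sudo cp /var/opt/gitlab/nginx/conf/gitlab-http.conf /etc/nginx/sites-enabled/
# validate and reload the external Nginx
sudo nginx -t && sudo service nginx reload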
When configuring the Loggly agent for Nginx, it returns: ERROR: Nginx is not configured as a service. Is something missing in a configuration file?
Looking at Loggly's nginx install script, I saw that it just checks for SysV init.
Apparently you've set up a new box that uses systemd instead of SysV init. The Loggly installer fails under systemd; they have to update their scripts.
You'll have to configure everything manually for the time being.
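To see why the check fails (a sketch; the exact probe in Loggly's script may differ, but the gist is that it looks for a SysV service entry):

ls /etc/init.d/nginx      # missing on a systemd-only box, so the installer's check fails
systemctl status nginx    # how the same service is actually managed under systemd

Until the script is fixed, follow Loggly's manual setup instead, i.e. point an rsyslog file monitor at /var/log/nginx/access.log and /var/log/nginx/error.log yourself.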
I'm just getting started with Docker. Using the official NGINX image on my OS X development machine (with Docker Machine as the Docker host), I ran up against the bug with sendfile and VirtualBox, which means the server fails to show changes I make to files.
The workaround is to use a modified nginx.conf file that turns off sendfile. One suggested solution adds an instruction to the Dockerfile to copy a customised conf file into the container. Alternatively, another maps the NGINX configuration directory to a folder containing the modified conf file.
This kind of thing works OK locally. But what if I don't need this modification on my cloud host? How should I handle this and other differences when it comes to deployment?
You could mount your custom nginx.conf into the container in development, e.g. via --volume "$(pwd)/nginx/nginx.conf":/etc/nginx/nginx.conf (docker run requires an absolute host path, hence $(pwd)), and simply omit this parameter from docker run in production.
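Concretely, a sketch assuming the official nginx image and a conf file at ./nginx/nginx.conf:

# development: overlay the sendfile-off config on top of the image's stock one
docker run -d -p 80:80 \
  --volume "$(pwd)/nginx/nginx.conf":/etc/nginx/nginx.conf:ro \
  nginx

# production: the same command without the volume flag, so the image's stock config is used
docker run -d -p 80:80 nginx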
If you are using docker-compose, the two options I would recommend are (both sketched after this list):
Employ the limited support for environment variable interpolation and add something like the following under volumes in your container definition: ./nginx/nginx.${APP_ENV}.conf:/etc/nginx/nginx.conf
Use a separate YAML file for production overrides.
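A minimal sketch of both options, assuming a compose v2 file with a single service named web (names and file layout are illustrative). Option 1 interpolates APP_ENV, so you run e.g. APP_ENV=dev docker-compose up:

# docker-compose.yml (option 1)
version: '2'
services:
  web:
    image: nginx
    ports:
      - "80:80"
    volumes:
      - ./nginx/nginx.${APP_ENV}.conf:/etc/nginx/nginx.conf

Option 2 keeps docker-compose.yml production-ready (no custom conf mount) and puts the dev-only mount in docker-compose.override.yml, which a plain docker-compose up picks up automatically on the dev machine:

# docker-compose.yml (option 2)
version: '2'
services:
  web:
    image: nginx
    ports:
      - "80:80"

# docker-compose.override.yml (option 2, dev machine only)
version: '2'
services:
  web:
    volumes:
      - ./nginx/nginx.dev.conf:/etc/nginx/nginx.conf

In production you would run docker-compose -f docker-compose.yml up -d, so the override file is ignored.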
When running behind a proxy (NGINX), I get the message
It appears that your reverse proxy set up is broken.
I referred to this link, but still get the same message. It refers to the file /etc/default/jenkins, which does not apply in my case because I downloaded the zip file and am running Jenkins in GlassFish.
As I understand it, all I need is to pass the --prefix argument via JENKINS_ARGS. How do I do that when running in GlassFish behind Nginx?
Thanks.
Just a quick guess, but if you run Jenkins inside GlassFish, I think you have to set the proxy on the GlassFish JVM.
You can either add the following JVM options via the GlassFish admin GUI (server-config -> JVM Settings -> JVM Options):
-Dhttp.proxyHost=proxyhostname
-Dhttp.proxyPort=8080
-Dhttps.proxyHost=proxyhostname
-Dhttps.proxyPort=8080
or you can set them via asadmin in the following way:
asadmin create-jvm-options -Dhttp.proxyHost=proxyhostname
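If you want to set several options at once, create-jvm-options also accepts a colon-separated list. A sketch (quoting keeps the shell from mangling the option string; domain1 is GlassFish's default domain name, so adjust as needed):

asadmin create-jvm-options "-Dhttp.proxyHost=proxyhostname:-Dhttp.proxyPort=8080"
asadmin create-jvm-options "-Dhttps.proxyHost=proxyhostname:-Dhttps.proxyPort=8080"
# restart the domain so the new JVM options take effect
asadmin restart-domain domain1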