Elastic Beanstalk does not copy my nginx config - nginx

I want to add a max-upload-size.conf file to the /etc/nginx/conf.d directory in Elastic Beanstalk, but it is not working.
My environment is Java 8 running on 64bit Amazon Linux/2.11.3
Neither of the approaches below gets my config file copied:
.platform/nginx/conf.d/max-upload-size.conf:
client_max_body_size 50M;

.ebextensions/nginx/conf.d/max-upload-size.conf:
client_max_body_size 50M;
I followed this process:
1. Check the documentation.
2. Create max-upload-size.conf.
3. ./gradlew clean && ./gradlew bootJar
4. eb deploy
5. eb ssh and check the nginx directory (/etc/nginx/conf.d)
I already checked https://stackoverflow.com/a/63626941/7770508 and https://stackoverflow.com/a/51888100/7770508.
Is there really no way to extend it?

Your platform, 64bit Amazon Linux/2.11.3, is Amazon Linux 1 (AL1), not AL2; the 2.11.3 is the platform version, not the OS version. From your post it's not clear whether you checked the official AWS documentation for nginx on the Java platform for AL1. The example of how to set up nginx for Java on AL1 is in:
Configuring the proxy on Amazon Linux AMI (preceding Amazon Linux 2)
with the full config file on GitHub.
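Note that on the AL1 Java platform the documented location is .ebextensions/nginx/conf.d/ inside the application source bundle, so the usual culprit is that the deployed bundle does not actually contain that directory, for example when the EB CLI is configured (via deploy: artifact: in .elasticbeanstalk/config.yml) to upload only the bootJar output. A minimal sketch of a source bundle that should work; the zip name and Procfile contents are assumptions:

my-app.zip
├── application.jar
├── Procfile                  # e.g. web: java -jar application.jar
└── .ebextensions/
    └── nginx/
        └── conf.d/
            └── max-upload-size.conf   # client_max_body_size 50M;

You can sanity-check the layout locally with unzip -l my-app.zip before deploying.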

Related

How to add/edit the Elastic Beanstalk nginx configuration for a Ktor application?

I am getting 413 Request Entity Too Large when I try to upload a file of more than 1 MB on Elastic Beanstalk.
I tried to add my configuration in the .platform/nginx/conf.d/proxy.conf file. Here is the file content:
client_max_body_size 200M;
I followed the advice here to configure the nginx reverse proxy to allow files larger than the default 1MB.
I tried to create a .config file in .ebextensions. The file content looks like this:
files: "/etc/nginx/conf.d/proxy.conf" : mode: "000755" owner: root group: root content: | client_max_body_size 200M; commands: 01_reload_nginx: command: "service nginx reload"
I also tried everything mentioned in this answer here.
I tried to access the EC2 instance via the terminal and manually add client_max_body_size 200M; to the /etc/nginx/nginx.conf file. This worked, but the problem is that whenever we deploy a new version on Beanstalk, the deployment rewrites the nginx configuration, so the same steps have to be repeated after every deployment; Beanstalk itself warns about exactly this.
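For what it's worth, on Amazon Linux 2 platforms the .platform/nginx/conf.d/ approach is the documented one, and it only takes effect when the directory ships at the root of the deployed source bundle. A minimal sketch of the expected layout; the file name is arbitrary:

.platform/
└── nginx/
    └── conf.d/
        └── proxy.conf        # client_max_body_size 200M;

On an Amazon Linux 1 platform, .platform/ is ignored entirely and the .ebextensions files: approach above is the one to use.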

SSL: Certbot + AWS Lightsail + LetsEncrypt + Really Simple SSL Plugin

Scenario:
The current server at example.com is running an older AWS Lightsail instance with WordPress (Ubuntu), and we just had a new certificate issued using Let's Encrypt. All is well. The original cert was requested as a wildcard, so it is functional for any subdomain.
Now we needed to spin up a fresh server for a subdomain; let's call it development.example.com.
The new AWS Lightsail instances are no longer Ubuntu but Debian!
The idea was to install certbot on the new Debian instance and then copy over the certificate files from the primary server at example.com.
I've done this successfully in the past when going from Ubuntu to Ubuntu, but now that the new instance is Debian, the Really Simple SSL plugin does not recognize that a certificate is installed.
STEPS I took to move the certificate files:
What I've done before is simply to copy /etc/letsencrypt/* from one server to another and then follow the steps outlined in the AWS documentation here:
https://lightsail.aws.amazon.com/ls/docs/en_us/articles/amazon-lightsail-using-lets-encrypt-certificates-with-wordpress#complete-the-prerequisites-lets-encrypt-wordpress
In this case, performing the steps 7.4, 7.5, 7.6 and section 8.
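For reference, the copy itself has to preserve the live/ -> archive/ symlinks that certbot maintains, so an archive copy is safer than a plain cp of /etc/letsencrypt/*. A sketch, with hostnames as placeholders:

# on the old Ubuntu server: pack the whole tree, keeping symlinks intact
sudo tar czf letsencrypt.tar.gz -C /etc letsencrypt
# copy across and unpack on the new Debian instance
scp letsencrypt.tar.gz admin@development.example.com:~
sudo tar xzf letsencrypt.tar.gz -C /etc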
However, the steps described in section 8.1 no longer appear to be valid for Debian, because there is no such location on Debian:
sudo chmod 666 /opt/bitnami/apps/wordpress/htdocs/wp-config.php
AND because it seems an .htaccess file does not exist either:
sudo chmod 666 /opt/bitnami/apps/wordpress/conf/htaccess.conf
Are there additional steps now which I've missed to be able to copy the necessary files for SSL to work properly on this new subdomain server now running Debian?
I was going to go through a new certificate request on the development server, but wouldn't that invalidate the certificate currently installed for the primary domain?
In other words, how to properly copy the SSL files from the main Ubuntu server and configure the Debian subdomain server so that both wordpress installations have SSL correctly installed?
Thank you @mikemoy. Indeed, one can issue multiple wildcard certificates for the same domain from different servers. I just went ahead and issued a new certificate on the subdomain server.
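For anyone following along, issuing the new wildcard certificate on the Debian instance looks roughly like this; a sketch, noting that wildcard certs require the DNS-01 challenge and that the domain is a placeholder:

sudo certbot certonly --manual --preferred-challenges dns -d "example.com" -d "*.example.com"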

502 gateway error with Meteor, BrowserPolicy, HTTP connecting to S3

I am using Meteor with the BrowserPolicy package and Meteor Up with the abernix/meteord:base Docker image to deploy my app to an EC2 instance. I use HTTPS via nginx, all on the same server. The trouble comes when I allow connections to an AWS S3 bucket using the following line:
BrowserPolicy.content.allowOriginForAll('*.s3-us-west-2.amazonaws.com');
It works locally, but when I deploy to the EC2 server I get a 502 bad gateway error for the entire app.
I have read that this problem can sometimes be caused by the response header size being too large, and that it can be fixed by setting proxy_buffer_size 8k; in the /var/lib/docker/aufs/mnt/CHECKEDID/opt/nginx/conf/nginx.conf file. I checked, and my header size is 499 bytes for a random SVG that I have on S3.
If I do indeed need to change the Docker image to allow this larger header size, how do I do that? I believe this is the source repo for the Docker image. If I am totally off base and there is a different problem, please let me know that too.
Thanks!
I ended up figuring it out. It turned out to be a configuration error with nginx. I configured my EC2 instance using this guide. To fix nginx, I first logged in to my instance and opened this file:
sudo vi /etc/nginx/sites-available/default
I then added the proxy_buffer_size 8k; line to the server block of the configuration file. Finally, I checked the syntax with sudo nginx -t and restarted nginx with sudo service nginx restart. That was it!
The best part is that since I configured nginx manually and deploy my Meteor instance on top of it (running on port 3000), these settings persist even when I deploy new versions of my app.
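For context, the relevant part of /etc/nginx/sites-available/default ends up looking something like this; a sketch in which the server name, TLS setup, and upstream port are assumptions based on the setup described above:

server {
    listen 443 ssl;
    server_name example.com;

    # BrowserPolicy can emit long Content-Security-Policy headers;
    # a larger buffer stops nginx from returning 502 on big upstream headers
    proxy_buffer_size 8k;

    location / {
        proxy_pass http://127.0.0.1:3000;          # the Meteor app
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;    # websockets (DDP)
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}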

How can I deploy my Angular 2 + TypeScript + Webpack app

I am currently learning Angular 2 with TypeScript and developed a little app based on the angular-seed project. I have built the app for production and got a dist folder ready to be deployed, containing my bundle files like this:
dist/
main.bundle.js
main.map
polyfills.bundle.js
polyfills.map
vendor.bundle.js
vendor.map
However, as a fresher, I have no idea how to deploy it on my EC2 server. I read that I have to configure an Nginx server to serve my static files, but do I have to configure it specifically to work with my bundle files?
Excuse my mistakes if any. Thanks a lot in advance!
You are on the right track.
Just install nginx on your EC2 instance. In my case I had Ubuntu 14.04 installed on DigitalOcean.
First I updated the apt-get package lists:
sudo apt-get update
Then install Nginx using apt-get:
sudo apt-get install nginx
Then open the default server block configuration file for editing:
sudo vi /etc/nginx/sites-available/default
Delete everything in this configuration file and paste the following content:
server {
    listen 80 default_server;
    root /path/dist-nginx;
    index index.html index.htm;
    server_name localhost;
    location / {
        try_files $uri $uri/ =404;
    }
}
To make the changes active, restart nginx:
sudo service nginx restart
Then copy index.html and the bundle files to /path/dist-nginx on your server and you are up and running.
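For example, from your local machine (the host and paths are placeholders):

scp -r dist/* ubuntu@your-ec2-host:/path/dist-nginx/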
If anyone is still struggling with a production setup of an Angular 2/4/5 app + Nginx, here is the complete solution:
Suppose you want to deploy your Angular app at HOST http://example.com and PORT 8080.
Note - HOST and PORT might be different in your case.
Make sure you have <base href="/"> in your index.html head tag.
First, go to your Angular repo (e.g. /home/user/helloWorld) on your machine.
Then build /dist for your production server using the following command:
ng build --prod --base-href http://example.com:8080
Now copy the /dist folder (i.e. /home/user/helloWorld/dist) from your machine's Angular repo to the remote machine where you want to host your production server.
Now log in to your remote server and add the following nginx server configuration.
server {
    listen 8080;
    server_name example.com;   # note: no http:// scheme in server_name
    root /path/to/your/dist/location;
    # e.g. root /home/admin/helloWorld/dist
    index index.html index.htm;
    location / {
        try_files $uri $uri/ /index.html;
        # Falling back to /index.html lets the Angular router handle
        # page refreshes instead of returning 404.
    }
}
Now restart nginx.
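For example, assuming a systemd-based distribution:

sudo nginx -t && sudo systemctl restart nginx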
That's it! Now you can access your Angular app at http://example.com:8080
Hope it will be helpful.
A quicker way to deploy is as below:
1. Install nginx as mentioned by Herman.
2. Copy your dist/* files to /var/www/html/ without disturbing /etc/nginx/sites-available/default.
sudo cp /your/path/to/dist/* /var/www/html/
3. Restart nginx:
sudo systemctl restart nginx
I'm using the official Angular CLI to deploy to production, and it is very easy to do. You can build for pre-production or production this way:
ng build --env=pre --output-path=build/pre/
ng build --env=prod --output-path=build/prod/

Installing nginx via MacPorts with the ngx_echo module available

The ngx_echo module isn't included when I install nginx by:
sudo port install nginx
I looked at the Portfile and the variants are:
addition dav debug flv geoip google_perftools gzip_static ipv6 mail perl5 realip redis secure_download ssl status substitution upload zip
Is ngx_echo included in any of those options?
I compiled it using MacPorts as described in the link below:
https://serverfault.com/questions/328416/how-to-set-random-value-in-the-specified-range-in-variable/347191#347191
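The general shape of the build, whether done through a custom Portfile or by hand, is to pass the module to nginx's configure step. A from-source sketch; the module version and relative paths are assumptions:

# fetch the echo module (openresty/echo-nginx-module on GitHub)
curl -LO https://github.com/openresty/echo-nginx-module/archive/v0.63.tar.gz
tar xzf v0.63.tar.gz
# from inside an unpacked nginx source tree:
./configure --add-module=../echo-nginx-module-0.63
make
sudo make install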
