ERROR: Could not find a profile matching 'Nginx Full' - nginx

I have installed the latest version of nginx, and it installed successfully.
But I get an error when running the command below:
sudo ufw allow 'Nginx Full'
Error: ERROR: Could not find a profile matching 'Nginx Full'
sudo ufw app list
shows only:
Available applications:
OpenSSH
How do I add the following application profiles?
Nginx Full
Nginx HTTP
Nginx HTTPS
OpenSSH
I have installed the nginx server twice, and I still get:
ERROR: Could not find a profile matching 'Nginx Full'

Ubuntu (18.04)
You can see which apps are available by running this command:
ufw app list
Ports:
HTTP - 80
HTTPS - 443
Simple way to add them to UFW:
ufw allow 80,443/tcp
If you want to accomplish this via an application profile, you will need to create a profile file within /etc/ufw/applications.d.
Example:
vi /etc/ufw/applications.d/nginx.ini
Place this inside the file:
[Nginx HTTP]
title=Web Server
description=Enable NGINX HTTP traffic
ports=80/tcp
[Nginx HTTPS]
title=Web Server (HTTPS)
description=Enable NGINX HTTPS traffic
ports=443/tcp
[Nginx Full]
title=Web Server (HTTP,HTTPS)
description=Enable NGINX HTTP and HTTPS traffic
ports=80,443/tcp
Then run these commands:
ufw app update nginx
ufw app info 'Nginx HTTP'
ufw allow 'Nginx HTTP'
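As a sketch, the whole profile file can also be generated in one step with a heredoc. Writing to /etc/ufw requires root, so this builds the file in a temp directory first; the profile names and ports match the answer above:

```shell
# Sketch: build the UFW application profile in a temp dir, then copy it
# into place as root. Section names must match what you pass to `ufw allow`.
tmpdir=$(mktemp -d)
cat > "$tmpdir/nginx.ini" <<'EOF'
[Nginx HTTP]
title=Web Server
description=Enable NGINX HTTP traffic
ports=80/tcp

[Nginx HTTPS]
title=Web Server (HTTPS)
description=Enable NGINX HTTPS traffic
ports=443/tcp

[Nginx Full]
title=Web Server (HTTP,HTTPS)
description=Enable NGINX HTTP and HTTPS traffic
ports=80,443/tcp
EOF
# As root:
#   cp "$tmpdir/nginx.ini" /etc/ufw/applications.d/
#   ufw app update nginx
grep -c '^\[' "$tmpdir/nginx.ini"   # prints 3: one per profile section
```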

I had the same problem. It turned out Nginx had not actually been installed for some reason, so only OpenSSH was shown by
sudo ufw app list
I discovered this when I tried to uninstall Nginx using the command
sudo apt-get remove nginx
The output showed something like this:
Package 'nginx' is not installed, so not removed
Reinstall Nginx using these commands:
sudo apt update
sudo apt install nginx
sudo ufw app list
Now the profiles will be available; check to see:
Available applications:
Nginx Full
Nginx HTTP
Nginx HTTPS
OpenSSH
Now allow the HTTP profile using the command:
sudo ufw allow 'Nginx HTTP'
And finally run this command:
sudo ufw enable
Now open the server's URL in a browser; it will show the Nginx default page.

If you get ERROR: Could not find a profile matching 'OpenSSH', first install the SSH server with:
sudo apt-get install ssh
After installing the package, allow OpenSSH:
sudo ufw allow OpenSSH
sudo ufw status
Tested

This happened to me after installing Nginx using the official site's instructions for Ubuntu.
Simply reinstall it like this (removing it first if already installed):
sudo apt-get remove nginx
sudo apt install nginx

Related

How do I fix 502 Bad Gateway error with GCP and NGINX

I'm trying to follow a tutorial on creating an Apache Airflow pipeline on a GCP VM instance (https://towardsdatascience.com/10-minutes-to-building-a-machine-learning-pipeline-with-apache-airflow-53cd09268977), but after building and running the docker container, I get this "502 Bad Gateway" error with Nginx 1.14 when I try to access the webserver using:
http://<VM external ip>/
I'm quite new to using GCP and can't figure out how to fix this.
Some online research has suggested editing NGINX configuration files to:
keepalive_timeout 650;
keepalive_requests 10000;
But this hasn't changed anything.
The GCP instance is a N1-standard-8 with Ubuntu 18.04, and Cloud, HTTPS and HTTP access enabled.
The Nginx sites enabled are :
server {
    listen 80;
    location / {
        proxy_pass http://0.0.0.0:8080/;
    }
}
Root Cause:
The issue you are experiencing has nothing to do with keepalives; it is rather simpler. The docker container exits and isn't running, so when nginx tries to proxy your request to the container, it fails, hence the error. That failure is due to an incompatibility between airflow and current versions of sqlalchemy.
Verification:
Run this command to see the logs of the failed container:
sudo docker logs `sudo docker ps -a -f "ancestor=greenr-airflow" --format '{{.ID}}'`
and you will see that Python inside the container fails to import a package with the following error:
No module named 'sqlalchemy.ext.declarative.clsregistry'
Solution:
While I followed the tutorial to the letter, I'd recommend against running commands with sudo; you may want to deviate from the tutorial a wee bit in order not to.
Before running the
sudo docker build -t greenr-airflow:latest .
command, edit the Dockerfile and add the following two lines:
&& pip install SQLAlchemy==1.3.23 \
&& pip install Flask-SQLAlchemy==2.4.4 \
somewhere in the list of packages being installed. I added them after
&& pip install -U pip setuptools wheel \
which is line 54 at the time of writing.
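For context, a hedged sketch of what that region of the Dockerfile would look like after the edit (the surrounding lines are assumptions based on the tutorial's file, not a verbatim copy):

```dockerfile
# Sketch: pin compatible versions before the remaining packages install.
    && pip install -U pip setuptools wheel \
    && pip install SQLAlchemy==1.3.23 \
    && pip install Flask-SQLAlchemy==2.4.4 \
    # ...remaining pip installs from the tutorial's Dockerfile...
```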
If you would like to re-use the same instance, delete the image and rebuild it after making changes to the file:
sudo docker rmi greenr-airflow
sudo docker build -t greenr-airflow:latest .

How to Install SSL on AWS EC2 WordPress Site

I've created and launched my WordPress site on AWS using EC2. I followed this tutorial to create the site. It's currently mapped to a domain using Route 53. All development on the site is done online in my instance.
I would now like to install an SSL Certificate on my site. How would I do so?
If you created WordPress on AWS using "Bitnami",
you may ssh to your instance and run:
sudo /opt/bitnami/bncert-tool
See bitnami docs for details
If you're looking for an easy and free solution, try https://letsencrypt.org/. They have an easy-to-follow doc for anyone.
TLDR; Head to https://certbot.eff.org/, choose your OS and server type, and they will give you a 4-5 line procedure to install a certificate automatically.
Before attempting, make sure your domain name is correctly pointed to your EC2 using Route53 or Elastic IP.
For example, here's all you need to run to automatically get and install SSL on a Ubuntu EC2 running nginx
$ sudo apt-get update
$ sudo apt-get install software-properties-common
$ sudo add-apt-repository ppa:certbot/certbot
$ sudo apt-get update
$ sudo apt-get install python-certbot-nginx
Best of luck!
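The packages above only install certbot; the issuance step itself is one more command. A sketch, assuming your domain (example.com here is a placeholder) already resolves to the instance:

```shell
# certbot's nginx plugin obtains the certificate and edits your nginx
# server blocks for you; it will prompt about HTTP-to-HTTPS redirection.
sudo certbot --nginx -d example.com -d www.example.com
```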
This tutorial provides a simple 3 step guide to setting up your Wordpress on AWS using LetsEncrypt / Certbot:
https://blog.brainycheetah.com/index.php/2018/11/02/wordpress-switching-to-https-ssl-hosted-on-aws/
Step 1: Get SSL certificate
Step 2: Configure redirects
Step 3: Update firewall
At each stage replace 'example.com' with your own site address.
Install certbot:
$ sudo apt-get update
$ sudo apt-get install software-properties-common
$ sudo add-apt-repository ppa:certbot/certbot
$ sudo apt-get update
$ sudo apt-get install python-certbot-apache
Create certificates:
$ sudo certbot --apache -m admin@example.com -d example.com -d www.example.com
To configure redirects, first open the wp-config file:
$ sudo vim /var/www/html/example.com/wp-config.php
Insert the following above the "stop editing" comment line:
// HTTPS configuration
define('WP_HOME','https://example.com');
define('WP_SITEURL','https://example.com');
define('FORCE_SSL_ADMIN', true);
And finally, update firewall via the AWS console:
Login to your AWS control panel for your EC2 / Lightsail instance
Select the Networking tab; within the Firewall section, just below the table,
select Add another
Custom and TCP should be pre-populated within the first two fields by default; leave these as they are
Within the Port range field, enter 443, then select Save
Then just reload your apache config:
sudo service apache2 reload
And you should be good to go.
According to the tutorial, since you have configured only an EC2 instance, the direct approach is to purchase an SSL certificate and install it in the Apache server. For detailed steps, follow this tutorial:
How to Add SSL and HTTPS in WordPress.
If you plan to use the free SSL certificates issued by AWS Certificate Manager, then it requires configuring either an Elastic Load Balancer or the CloudFront CDN. This can get complicated if you are new to AWS. If you want to try AWS CloudFront, follow the steps in How To Use Your Own Secure Domain with CloudFront.
Using CloudFront also provides a performance boost, since it caches your content and reduces the load on your EC2 instance. However, one of the challenges you will face is avoiding mixed-content issues. There are WordPress plugins capable of resolving mixed-content issues, so do try them out.
This is how I enabled SSL on my WordPress website.
I used Let's Encrypt X.509 certificates. Let's Encrypt is a certificate authority that provides X.509 certificates in an automated fashion, for free. You can find more information on the Let's Encrypt site.
Steps to follow:
SSH into the instance and switch to root.
Download Certbot
wget https://dl.eff.org/certbot-auto
chmod a+x certbot-auto
Run certbot to fetch the certificates
sudo ./certbot-auto --debug -v --server https://acme-v01.api.letsencrypt.org/directory certonly -d "your-domain-name"
A wizard will launch asking you to select among Apache, WebRoot, and Standalone options. Select the WebRoot option and continue. Note the directory of your domain.
Usually /var/www/html will be the directory for your domain. After success, you will have three certificate files at the following paths:
Certificate: /etc/letsencrypt/live/<<<"Domain-Name">>>/cert.pem
Full Chain: /etc/letsencrypt/live/<<<"Domain-Name">>>/fullchain.pem
Private Key: /etc/letsencrypt/live/<<<"Domain-Name">>>/privkey.pem
Add the pem file paths to /etc/httpd/conf.d/ssl.conf, then restart Apache:
sudo service httpd restart
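The ssl.conf edit mentioned above might look like the following sketch (example.com is a placeholder for your domain; with fullchain.pem as the certificate file, no separate chain directive is needed on Apache 2.4.8+):

```apache
# Hypothetical excerpt of /etc/httpd/conf.d/ssl.conf
SSLEngine on
SSLCertificateFile /etc/letsencrypt/live/example.com/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/example.com/privkey.pem
```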
And finally, I enabled the Really Simple SSL plugin in WordPress. That's it!

Nginx doesn't show the default html page

I'm running nginx on a Raspberry Pi.
I ran update and upgrade commands and then installed nginx.
1. sudo apt-get update
2. sudo apt-get upgrade
3. sudo apt-get install nginx
Started the server
4. sudo /etc/init.d/nginx start
Output
[ ok ] Starting nginx (via systemctl): nginx.service.
When I enter ip address into the browser nothing appears. What could be the problem here?
FIXED
I changed the root in /etc/nginx/sites-available/default
from root /var/www/html;
to root /usr/share/nginx/www;
I also renamed the html folder to www, because the www folder was missing.
Then I restarted nginx for the changes to take effect:
sudo systemctl restart nginx
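A small habit worth adding when editing these files: validate the configuration before restarting, so a typo doesn't take the server down.

```shell
# `nginx -t` parses the config and reports syntax errors without
# touching the running server; only restart if the check passes.
sudo nginx -t && sudo systemctl restart nginx
```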

How to setup Nginx as a load balancer using the StrongLoop Nginx Controller

I'm attempting to setup Nginx as a load balancer using the StrongLoop Nginx Controller. Nginx will be acting as a load balancer for a StrongLoop LoopBack application hosted by the standalone StrongLoop Process Manager. However, I've been unsuccessful at making the Nginx deployment following the official directions from StrongLoop. Here are the steps I've taken:
Step #1 -- My first step was to install Nginx and the StrongLoop Nginx Controller on an AWS EC2 instance. I launched an EC2 sever (Ubuntu 14.04) to host the load balancer, and attached an Elastic IP to the server. Then I executed the following commands:
$ ssh -i ~/mykey.pem ubuntu@[nginx-ec2-ip-address]
$ sudo apt-get update
$ sudo apt-get install nginx
$ sudo apt-get install build-essential
$ curl -sL https://deb.nodesource.com/setup_0.12 | sudo bash -
$ sudo apt-get install -y nodejs
$ sudo npm install -g strong-nginx-controller
$ sudo sl-nginx-ctl-install -c 444
Then I opened up port 444 in the security group of the EC2 instance using a Custom TCP Rule.
Step #2 -- My second step was to setup two Loopback application servers. To accomplish this I launched two more EC2 servers (both Ubuntu 14.04) for the application servers, and attached an Elastic IP to each server. Then I ran the following series of commands, once on each application server:
$ ssh -i ~/mykey.pem ubuntu@[application-server-ec2-ip-address]
$ sudo apt-get update
$ sudo apt-get install build-essential
$ curl -sL https://deb.nodesource.com/setup_0.12 | sudo bash -
$ sudo apt-get install -y nodejs
$ sudo npm install -g strong-pm
$ sudo sl-pm-install
$ sudo /sbin/initctl start strong-pm
Step #3 -- My third step was to deploy the application to each of the application servers. For this I used StrongLoop Arc:
$ cd /path/to/loopback-getting-started-intermediate # my application
$ slc arc
Once in the StrongLoop Arc web console, I built a tar for the application, and deployed it to both application servers. Then in the Arc Process Manager, I connected to both application servers. Once connected, I clicked "load balancer," and entered the Nginx host and port into the form and pressed save. This caused a message to pop up saying "load balancer config saved."
Something strange happened at this point: The fields in StrongLoop Arc where I just typed the settings for the load balancer (host and port) reverted back to the original values the fields had before I started typing. (The original port value was 555 and the original value in the host field was the address of my second application server.)
Don't know what to do next -- This is where I really don't know what to do next. (I tried opening my web browser and navigating to the IP address of the Nginx load balancer, using several different port values. I tried 80, 8080, 3001, and 80, having opened up each in the security group, in an attempt to find the place to which I need to navigate in order to see "load balancing" in action. However, I saw nothing by navigating to each of these places, with the exception of port 80 which served up the "welcome to Nginx page," not what I'm looking for.)
How do I setup Nginx as a load balancer using the StrongLoop Nginx Controller? What's the next step in the process, assuming all of my steps listed are correct.
What I usually do is this:
sudo sl-nginx-ctl-install -c http://0.0.0.0:444
Maybe this can solve your problem.
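For comparison, the configuration the StrongLoop controller manages is ordinary nginx load balancing. A minimal hand-written equivalent, assuming the two application servers listen on port 3001 (the IPs are placeholders):

```nginx
# Round-robin load balancing across the two LoopBack app servers.
upstream loopback_app {
    server 10.0.0.11:3001;   # application server 1 (placeholder IP)
    server 10.0.0.12:3001;   # application server 2 (placeholder IP)
}

server {
    listen 80;
    location / {
        proxy_pass http://loopback_app;
    }
}
```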

Install phpmyadmin without selection apache2 or lighttpd

On Ubuntu I use the command: sudo apt-get install phpmyadmin
During installation I am asked to select the type of server: apache2 or lighttpd.
My server is built only on Nginx + php_fpm.
How can I install phpmyadmin without selecting apache2 or lighttpd?
Sorry for the stupid question.
First install php5-fpm and then install phpmyadmin.
sudo apt-get install php5-fpm
sudo apt-get install phpmyadmin
The software, phpMyAdmin, requires a Web server and PHP. If PHP and a Web server have not yet been installed, then the default action is to use Apache. The package, php5-fpm, satisfies the requirements; thus, installing phpmyadmin after php5-fpm results in only the following additional package dependencies.
dbconfig-common javascript-common libjs-codemirror libjs-jquery
libjs-jquery-cookie libjs-jquery-event-drag libjs-jquery-metadata
libjs-jquery-mousewheel libjs-jquery-tablesorter libjs-jquery-ui
libjs-underscore libmcrypt4 php-gettext php5 php5-gd php5-mcrypt php5-mysql
Although PHP-FPM is not a Web server, the package maintainer understood that if php5-fpm has been installed, then the Ubuntu server will utilize some other Web server that uses the FastCGI Process Manager (FPM), and there is no need to know which Web server.
Considering you have a LEMP stack, you could also skip the question by tabbing to "OK".
This might force phpmyadmin to install apache2 (at least on the newest build; it wasn't like that before).
Then, when an error appears saying apache2 could not start (because nginx/php-fpm is already using the port), just prevent apache2 from starting with this shell command:
sudo update-rc.d -f apache2 remove
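Alternatively (this is an assumption about the package's debconf question name, so verify it with `debconf-show phpmyadmin` first), you can preseed the answer so the apache2/lighttpd menu never appears:

```shell
# Sketch: leave the web-server multiselect empty so neither apache2 nor
# lighttpd is configured, then install non-interactively.
echo 'phpmyadmin phpmyadmin/reconfigure-webserver multiselect' | sudo debconf-set-selections
sudo DEBIAN_FRONTEND=noninteractive apt-get install -y phpmyadmin
```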
