How can I enable FTP/SFTP on a Google Compute Engine VM instance for WordPress?

I'm trying to update WordPress, but it asks for FTP credentials.
I successfully changed the password using this command:
sudo passwd
and I entered the FTP credentials on the form but still can't update WordPress.

Here are the instructions:
1. SSH into the instance and run the commands below:
$ sudo su
$ apt-get update
$ apt-get install vsftpd
$ echo -e "pasv_enable=Yes\npasv_max_port=10101\npasv_min_port=10100\npasv_promiscuous=YES" >> /etc/vsftpd.conf
$ systemctl restart vsftpd
2. Create a firewall rule and assign it to a target tag:
gcloud compute --project=[your-project] firewall-rules create myftp --direction=INGRESS --priority=1000 --network=default --action=ALLOW --rules=tcp:20,tcp:21,tcp:10100-10101 --source-ranges=0.0.0.0/0 --target-tags=ftp
3. Add the firewall tag "ftp" to the WordPress instance:
$ gcloud compute instances add-tags [vm-name] --zone=[vm-zone] --tags ftp

You have to make sure that you have an FTP server such as vsftpd running on your VM. You can check by SSH-ing into your VM and running:
# ps aux | grep ftp
If there's no ftp server running, you need to configure and start it.
Also make sure that your GCP firewall settings allow FTP.
IN: TCP 20,21,60000-65535
OUT: TCP 20,21,60000-65535
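If WordPress keeps prompting even with the server reachable, you can also pin the connection details in wp-config.php so the update form is skipped. This is a minimal sketch using WordPress's standard FTP constants; the user and password values are placeholders, not taken from the original post:

```php
// wp-config.php -- placeholder values; adjust to the FTP user you created
define('FTP_HOST', '127.0.0.1');         // the VM itself, since vsftpd runs locally
define('FTP_USER', 'your-ftp-user');     // hypothetical username
define('FTP_PASS', 'your-ftp-password'); // hypothetical password
```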

Related

How do I create an FTP user in Amazon Lightsail to update WordPress plugins

I successfully migrated a WordPress site from BlueHost to AWS Lightsail. When I go to update the plugins, WordPress asks for FTP credentials.
By default, you can only connect to the Lightsail instance via an SSH certificate, which I have successfully done via Transmit.
In your Lightsail firewall rules, make sure you allow access to TCP ports 21 and 1024-1048 from 127.0.0.1.
SSH to your Lightsail instance (use PuTTY for Windows unless you know how to edit files with vim).
Run the following commands to install vsftpd:
sudo apt install vsftpd
sudo nano /etc/vsftpd.conf
uncomment these lines:
local_enable=YES
write_enable=YES
add these lines:
pasv_enable=YES
pasv_min_port=1024
pasv_max_port=1048
pasv_address=127.0.0.1
Press Ctrl+X, Y, Enter to save the changes to the file (this is why I said to use PuTTY).
Run this command to see which group owns the wp-content directory:
ls -l /home/bitnami/apps/wordpress/htdocs/
In my Lightsail instance, it was the "daemon" group.
Note: other articles suggest adding this user to the bitnami group, but in my experience this resulted in errors during updates, citing that it was not able to create directories.
Run the following to create a new user and assign it to this group so that it will have access to write to the wp-content directory.
(in the following lines, substitute ftpuser for the new username)
sudo /etc/init.d/vsftpd restart
sudo adduser ftpuser
sudo usermod -d /home/bitnami ftpuser
sudo usermod -a -G daemon ftpuser
sudo /etc/init.d/vsftpd restart
Now you can try your updates again and it should work.
Use 127.0.0.1 for the hostname and specify the new ftpuser credentials you just created.

Multi user signin on JupyterHub

I'm trying to set up JupyterHub on an Amazon EC2 instance using these instructions.
In the step titled Run the Hub Server, I'm running the server using sudo jupyterhub, but I'm not able to log in using the credentials of other Linux users (those apart from the one used to run the server).
The logs say No such file or directory: 'jupyterhub-singleuser' and I get a 500 internal server error in the browser. Please help!
Here's how to set up JupyterHub for use with multiple users:
My GitHub repo may help you:
Github/Jupyter
Create a group:
$ sudo groupadd <groupname>
Add a user to a group:
$ sudo adduser <username> <groupname>
Then whitelist the group in your JupyterHub configuration:
c.LocalAuthenticator.group_whitelist = ['<groupname>']
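Put together, the group-based whitelist lives in jupyterhub_config.py. This is a sketch assuming the default PAM authenticator (local Linux users); the group name jupyter is hypothetical:

```python
# jupyterhub_config.py -- sketch; 'jupyter' is a hypothetical group name
c.JupyterHub.authenticator_class = 'jupyterhub.auth.PAMAuthenticator'
c.LocalAuthenticator.group_whitelist = {'jupyter'}  # only members of this group may log in
```

Any local user you add to that group (sudo adduser alice jupyter) can then sign in with their Linux password.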
It's been a long time since you asked this, but I think I can help other users that have similar problems.
I think the problem is that jupyterhub-singleuser is not on the PATH for all users. The solution I used was to make symbolic links for the binaries JupyterHub requires.
sudo ln -s /your/jupyterhub/install/location/jupyterhub /usr/bin/jupyterhub
sudo ln -s /your/jupyterhub/install/location/configurable-http-proxy /usr/bin/configurable-http-proxy
sudo ln -s /your/jupyterhub/install/path/node /usr/bin/node
sudo ln -s /your/jupyterhub/install/path/jupyterhub-singleuser /usr/bin/jupyterhub-singleuser
I think it will work

How to setup Nginx as a load balancer using the StrongLoop Nginx Controller

I'm attempting to setup Nginx as a load balancer using the StrongLoop Nginx Controller. Nginx will be acting as a load balancer for a StrongLoop LoopBack application hosted by the standalone StrongLoop Process Manager. However, I've been unsuccessful at making the Nginx deployment following the official directions from StrongLoop. Here are the steps I've taken:
Step #1 -- My first step was to install Nginx and the StrongLoop Nginx Controller on an AWS EC2 instance. I launched an EC2 server (Ubuntu 14.04) to host the load balancer, and attached an Elastic IP to the server. Then I executed the following commands:
$ ssh -i ~/mykey.pem ubuntu@[nginx-ec2-ip-address]
$ sudo apt-get update
$ sudo apt-get install nginx
$ sudo apt-get install build-essential
$ curl -sL https://deb.nodesource.com/setup_0.12 | sudo bash -
$ sudo apt-get install -y nodejs
$ sudo npm install -g strong-nginx-controller
$ sudo sl-nginx-ctl-install -c 444
Then I opened up port 444 in the security group of the EC2 instance using a Custom TCP Rule.
Step #2 -- My second step was to setup two Loopback application servers. To accomplish this I launched two more EC2 servers (both Ubuntu 14.04) for the application servers, and attached an Elastic IP to each server. Then I ran the following series of commands, once on each application server:
$ ssh -i ~/mykey.pem ubuntu@[application-server-ec2-ip-address]
$ sudo apt-get update
$ sudo apt-get install build-essential
$ curl -sL https://deb.nodesource.com/setup_0.12 | sudo bash -
$ sudo apt-get install -y nodejs
$ sudo npm install -g strong-pm
$ sudo sl-pm-install
$ sudo /sbin/initctl start strong-pm
Step #3 -- My third step was to deploy the application to each of the application servers. For this I used StrongLoop Arc:
$ cd /path/to/loopback-getting-started-intermediate # my application
$ slc arc
Once in the StrongLoop Arc web console, I built a tar for the application, and deployed it to both application servers. Then in the Arc Process Manager, I connected to both application servers. Once connected, I clicked "load balancer," and entered the Nginx host and port into the form and pressed save. This caused a message to pop up saying "load balancer config saved."
Something strange happened at this point: The fields in StrongLoop Arc where I just typed the settings for the load balancer (host and port) reverted back to the original values the fields had before I started typing. (The original port value was 555 and the original value in the host field was the address of my second application server.)
Don't know what to do next -- This is where I really don't know what to do next. (I tried opening my web browser and navigating to the IP address of the Nginx load balancer, using several different port values. I tried 80, 8080, and 3001, having opened up each in the security group, in an attempt to find the place to which I need to navigate in order to see "load balancing" in action. However, I saw nothing by navigating to each of these places, with the exception of port 80, which served up the "Welcome to nginx" page, not what I'm looking for.)
How do I set up Nginx as a load balancer using the StrongLoop Nginx Controller? What's the next step in the process, assuming all of my steps listed are correct?
What I usually do is this:
sudo sl-nginx-ctl-install -c http://0.0.0.0:444
Maybe this can solve your problem.
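For reference, what the controller ultimately manages is an ordinary nginx upstream configuration. If you want to verify load balancing independently of StrongLoop, a hand-written equivalent looks roughly like this sketch; the server addresses are placeholders for your two application servers, and port 3001 is the default port strong-pm serves applications on:

```nginx
# Sketch of an equivalent load-balancing config (placeholder addresses)
upstream loopback_app {
    server app-server-1.example.com:3001;
    server app-server-2.example.com:3001;
}

server {
    listen 80;
    location / {
        proxy_pass http://loopback_app;
    }
}
```

With such a block in place, requests to port 80 alternate between the two application servers, which is the behavior you were looking for when browsing to the load balancer's IP.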

gitlab docker ssh issue

I have looked at the different posts concerning GitLab, Docker, and SSH issues without finding any help. So, I am asking my question here.
I have the following setting:
a Linux box with Ubuntu Server 14.04 and IP 192.168.1.104
DNS: git.mydomain.com = 192.168.1.104
a GitLab Docker container that I start, according to the official doc, this way:
sudo docker run --detach --name gitlab_app --publish 8080:80 --publish 2222:22 --volumes-from gitlab_data gitlab_image
or
sudo docker run --detach --name gitlab_app --publish 8080:80 --publish 2222:22 -e "GITLAB_SHELL_SSH_PORT=2222" --volumes-from gitlab_data gitlab_image
The Linux box runs an nginx which redirects (proxy_pass) git.mydomain.com to 192.168.1.104:8080.
I access git.mydomain.com without any issue; everything works over HTTP.
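The proxy_pass setup described above would look roughly like this sketch of an assumed nginx server block (not the poster's actual config):

```nginx
# Sketch of the reverse proxy on the Linux box (assumed, not the original config)
server {
    listen 80;
    server_name git.mydomain.com;

    location / {
        proxy_pass http://192.168.1.104:8080;  # the GitLab container's published HTTP port
        proxy_set_header Host $host;
    }
}
```

Note this only proxies HTTP; SSH traffic on port 2222 goes straight to the container's published port and never touches nginx.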
I generated an ssh key that I have added to my profile on gitlab and added the following lines to my ~/.ssh/config
Host git.mydomain.com
User git
Port 2222
IdentityFile /home/user/.ssh/id_rsa
If I try
ssh -p 2222 git@git.mydomain.com
the connection is closed. I assume it is because only a git-shell is permitted.
But, if I try
mkdir test
cd test
git init
touch README.md
git add README.md
git commit -m "first commit"
git remote add origin git@git.mydomain.com:user/test.git
git push -u origin master
it gets stuck with
Connection closed by 192.168.1.104
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
I also tried with
git remote add origin git@git.mydomain.com:2222/user/
and the result was the same.
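A likely culprit for that second attempt: the scp-style remote syntax (git@host:path) cannot carry a port number, so the 2222 is parsed as part of the path. With a non-standard SSH port you need the ssh:// URL form. A sketch, reusing the repository path from the question:

```shell
# Demo in a throwaway repo: the ssh:// form is the only remote syntax
# that can embed a non-standard port such as 2222
cd "$(mktemp -d)"
git init -q demo && cd demo
git remote add origin ssh://git@git.mydomain.com:2222/user/test.git
git remote -v
```

Alternatively, keep the plain git@git.mydomain.com:user/test.git remote and let the Port 2222 line in ~/.ssh/config supply the port, as one of the answers notes.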
Note that the logs of gitlab docker include
[2015-03-06T11:04:43+00:00] INFO: group[git] created
[2015-03-06T11:04:43+00:00] INFO: user[git] created
[2015-03-06T11:04:44+00:00] INFO: group[gitlab-www] created
[2015-03-06T11:04:44+00:00] INFO: user[gitlab-www] created
Any idea how I can fix this issue?
Thanks in advance for your help.
I would guess that you have an authentication problem.
Here are a few things you can try:
Make sure you added your public key in GitLab.
Check the permissions of your id_rsa file.
Try temporarily disabling host key verification with:
StrictHostKeyChecking no
UserKnownHostsFile /dev/null
I have the same setup as you (docker container in VM, DNS points to VM). I also configured .ssh/config like you.
But when I log in with SSH I get:
ssh -p 2222 git@gitlab
PTY allocation request failed on channel 0
Welcome to GitLab, tomzo!
Connection to gitlab closed.
Git remotes do not need port 2222 configured. This is OK (works for me):
$ git remote -v
origin git@gitlab:lab/workflow.git (fetch)
origin git@gitlab:lab/workflow.git (push)
And I can push and pull with git.
$ git push
Everything up-to-date

Restart nginx without sudo?

So I want to be able to cap deploy without having to type any passwords. I have set up all the private keys so I can get to the remote servers fine, and am now using svn over ssh, so no passwords there.
I have one last problem: I need to be able to restart nginx. Right now I have sudo /etc/init.d/nginx reload. That is a problem because it uses the capistrano password, the one I just removed because I am using keys. Any ideas on how to restart nginx without a password?
I just spent a good hour looking at sudoer wildcards and the like trying to solve this exact problem. In truth, all you really need is a root executable script that restarts nginx.
Add this to the /etc/sudoers file:
username ALL=NOPASSWD: /path/to/script
Write the script as root:
#!/bin/bash
/bin/kill -HUP `cat /var/run/nginx.pid`
Make the script executable:
sudo chmod 755 /path/to/script
Test it:
sudo /path/to/script
There is a better answer on Stack Overflow that does not involve writing a custom script:
The best practice is to use /etc/sudoers.d/myusername
The /etc/sudoers.d/ folder can contain multiple files that allow users
to call stuff using sudo without being root.
The file usually contains a user and a list of commands that the user
can run without having to specify a password.
Instructions:
In all commands, replace myusername with the name of your user that you want to use to restart nginx without sudo.
Open sudoers file for your user:
$ sudo visudo -f /etc/sudoers.d/myusername
The editor will open. There you paste the following line, which will allow that user to run nginx start, restart, and stop:
myusername ALL=(ALL) NOPASSWD: /usr/sbin/service nginx start,/usr/sbin/service nginx stop,/usr/sbin/service nginx restart
Save by hitting Ctrl+O. It will ask where you want to save; simply press Enter to confirm the default. Then exit out of the editor with Ctrl+X.
Now you can restart (and start and stop) nginx without password. Let's try it.
Open a new session (otherwise, you might simply not be asked for your sudo password because it has not timed out):
$ ssh myusername@myserver
Stop nginx
$ sudo /usr/sbin/service nginx stop
Confirm that nginx has stopped by checking your website or running ps aux | grep nginx
Start nginx
$ sudo /usr/sbin/service nginx start
Confirm that nginx has started by checking your website or running ps aux | grep nginx
PS: Make sure to use sudo /usr/sbin/service nginx start|restart|stop, and not sudo service nginx start|restart|stop, because the sudoers entry only matches the full path of the command.
Run sudo visudo
Append the lines below (in this example you can add multiple scripts and services separated by commas):
# Run scripts without asking for pass
<your-user> ALL=(root) NOPASSWD: /opt/fixdns.sh,/usr/sbin/service nginx *,/usr/sbin/service docker *
Save and exit with :wq
Create a rake task in Rails_App/lib/capistrano/tasks/nginx.rake and paste in the code below.
namespace :nginx do
%w(start stop restart reload).each do |command|
desc "#{command.capitalize} Nginx"
task command do
on roles(:app) do
execute :sudo, "service nginx #{command}"
end
end
end
end
Then SSH to your remote server and open the sudoers file:
sudo visudo
and then paste this line (after the line %sudo ALL=(ALL:ALL) ALL):
deploy ALL=(ALL:ALL) NOPASSWD: /usr/sbin/service nginx *
Or, as in your case,
deploy ALL=(ALL:ALL) NOPASSWD: /etc/init.d/nginx *
Here I am assuming your deployment user is deploy.
You can add other commands here too for which you don't want to be asked for a password. For example:
deploy ALL=(ALL:ALL) NOPASSWD: /usr/sbin/service nginx *, /etc/init.d/mysqld, /etc/init.d/apache2
