I installed WordPress on a GCP VM and tried installing plugins and themes through the wp-admin dashboard, but it asks for an FTP server.
I installed vsftpd but couldn't connect to the server, even after creating firewall rules. I was hoping someone could help.
As others have pointed out, knowing which firewall rules you configured (and how), or whether you followed a specific tutorial, would be very helpful for giving a specific answer. I'll do my best to provide a general answer based on the details you shared.
It's not clear to me whether you modified the firewall rules inside your instance or in the Cloud Console. This page describes the commands for working with firewall rules in GCP and offers some examples of using them. In case you were setting firewall rules within the instance, make sure both firewalls are configured properly.
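As an illustration, a hedged sketch of opening the FTP control port at the GCP level with gcloud (the rule name is a placeholder, and passive FTP needs additional ports beyond 21):

gcloud compute firewall-rules create allow-ftp \
    --allow tcp:21 \
    --source-ranges 0.0.0.0/0

Even with this rule in place, an instance-level firewall such as iptables or ufw can still block the traffic.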
I'm not familiar with vsftpd, but I found this tutorial that you may find helpful, as it's specific to GCP.
As Gurpreet mentioned in his reply, you can use SSH keys to connect via SFTP instead. This is a tutorial to configure an SFTP connection with FileZilla and is also specific to GCP.
If you expand your question with more details, screenshots, etc., maybe we can provide better suggestions to solve your issue.
You can use FileZilla to connect to GCP through SFTP.
Host is your public IP.
Username should be root by default, unless you changed it.
Password is your root password.
If you don't have the root password or can't connect via SFTP, you can use SSH keys.
Read this carefully regarding how to add SSH keys in Google Cloud Console:
https://cloud.google.com/compute/docs/instances/adding-removing-ssh-keys
After adding SSH keys, you can connect with FileZilla using your SSH key instead of the root password:
https://tecadmin.net/import-private-key-in-filezilla/
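If you want to sanity-check the key outside FileZilla first, a hedged command-line equivalent (the username, key path, and IP are placeholders):

sftp -i ~/.ssh/my_gcp_key wpuser@203.0.113.10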
I am trying to import my local SQL Server database into Azure and I have all the requirements (storage, bacpac file, etc.). When I try to import the DB, I get the error below.
The Azure SQL Server firewall did not allow the operation to connect.
To resolve this, please select the "Allow All Azure" checkbox in the
Sql Server's configuration blade.
I have already checked the "Allow Azure services and resources to access this server" option in the firewall settings and added my client IP. Is there something behind the scenes preventing it from allowing access? I am running my SQL Server in a Docker container.
(Screenshots: imported bacpac file, Azure import operation, import error, firewall settings.)
After a week of trial and error, the database imported fine with no problems, so I'll answer my own question. What is interesting is that I don't have a concrete answer, since I don't know exactly why it worked, but I'll give two tips anyway.
It might have been the cache on Azure's side. I got in contact with an Azure rep recently, and they stated that the cache may not have updated yet; a stale cache could be the source of the problem. To clear the cache, see this document and run:
DBCC FLUSHAUTHCACHE;
Creating a new rule that spans from 0.0.0.0 to 255.255.255.255 in your firewall settings (this allows every IP, so treat it as a temporary diagnostic step).
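If you prefer the CLI, a hedged sketch of that second tip with the Azure CLI (the resource group, server, and rule names are placeholders):

az sql server firewall-rule create \
    --resource-group my-rg \
    --server my-sqlserver \
    --name AllowAllTemp \
    --start-ip-address 0.0.0.0 \
    --end-ip-address 255.255.255.255

Remember to delete the rule once the import succeeds.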
Feel free to provide more solutions in the answers. Like I said, it was likely the cache on their side. It was really odd that it didn't work for a while, even with the firewall settings configured correctly.
I'm trying to connect Zapier to the MySQL DB installed with a Bitnami WP site in AWS, but it seems I cannot connect them if access is restricted to localhost only. Any idea how to change this in order to make it work?
This is what my security group looks like:
https://pasteboard.co/HL2ALzP.png
This is what my registered targets look like:
https://pasteboard.co/HL2Aa1G.png
This is the info Zapier asks for in order to connect:
https://pasteboard.co/HLa6HMC.png
WARNING: These instructions pose a serious security risk. I do not recommend opening your database to the public under any circumstances.
You need to edit /opt/bitnami/mysql/my.cnf and change
bind-address=127.0.0.1
to
bind-address=0.0.0.0
Then restart MySQL:
sudo /opt/bitnami/ctlscript.sh restart mysql
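Note that MySQL also enforces per-user host permissions, so the WordPress account may still be limited to localhost. A hedged sketch of creating a separate account Zapier can use (the user name, password, and the bitnami_wordpress database name are assumptions; check your actual schema name):

mysql -u root -p <<'SQL'
CREATE USER 'zapier'@'%' IDENTIFIED BY 'use-a-strong-password';
GRANT ALL PRIVILEGES ON bitnami_wordpress.* TO 'zapier'@'%';
FLUSH PRIVILEGES;
SQL

If possible, replace '%' with a specific source IP range rather than allowing every host.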
Apart from that, you will need to open your firewall port in AWS. Here is a guide for that:
https://docs.bitnami.com/aws/faq/administration/use-firewall/
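For reference, a hedged AWS CLI version of opening port 3306 (the security group ID is a placeholder, and 0.0.0.0/0 carries the same warning as above):

aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 3306 \
    --cidr 0.0.0.0/0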
WooCommerce webhooks aren't firing at all for me, even on a fresh install. I did the following:
Created a new MySQL database.
Installed WP from the zip file.
Set up WP.
Installed WooCommerce.
Enabled the REST API and created a key.
Added a "Coupon created" webhook, made sure it's set to active, and pointed it at a publicly accessible site.
When I create a coupon, the webhook does not fire, and no entry is created in the log. I tried this with orders as well, and it also doesn't work.
I think it's a machine configuration problem, but I'm not sure what to change. The machine is an EC2 instance and has all ports opened in its security group policy.
Weirdest of all, it does work on a different EC2 instance, but that's a production machine, and I want a dev server working so I can test things out. The only config differences between the production and dev machines that I can think of are the subnets and the firewall, but I don't understand why the subnet should matter, and I opened all the firewall ports on the dev machine.
What Linux distributions are you running for prod and dev?
CentOS ships with SELinux enabled, which does not allow HTTPD scripts and modules to connect to the network by default. Allow it with:
setsebool -P httpd_can_network_connect on
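You can check the current value first; getsebool is part of the standard SELinux tooling:

getsebool httpd_can_network_connect

If it reports "off", outbound HTTP requests from PHP (including webhook deliveries) are being blocked, and the setsebool command above should fix it.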
If the above is not valid, please identify network problems by trying to connect to AWS RDS from an SSH CLI session. If you can open a connection from the SSH CLI, the problem is with your application; if you can't, it's a network problem. The first thing to check in that case is the AWS RDS security group. For testing, you can open 3306 to the public.
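A hedged sketch of that connectivity test from an SSH session on the dev instance (the endpoint, user, and port are placeholders):

# quick TCP reachability check
nc -zv mydb.abc123.us-east-1.rds.amazonaws.com 3306
# full client connection
mysql -h mydb.abc123.us-east-1.rds.amazonaws.com -P 3306 -u admin -p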
Let me know how it goes.
Let's say I want to set up a poor man's authentication scheme for a simple network service.
I don't want to bother with username/password authentication; for simplicity, I just want to have a list of public keys in my application, and anyone who can prove they own one of those keys can use my service.
For the purposes of my application, this would greatly simplify the authentication process, since all my users are on the local network and they all use Unix. Any time I onboard a new user, I can just ask them for their SSH public key.
Is there a simple way to reuse the mechanism involved in SSH public key authentication in a non-SSH application? This question is intended to be language agnostic.
If you just have a list of users who can use your application and no need to see who did what, you can set up your server so that it listens only on localhost (127.0.0.1) rather than 0.0.0.0, and provide a restricted sshd that forwards the port required to connect to the application.
~/.ssh/authorized_keys on the server provides the list of keys authorized to connect.
ssh -i private_key_file -L 3000:localhost:3000 <hostname>
For a basic setup and help with configuring your sshd, check out this answer:
https://askubuntu.com/questions/48129/how-to-create-a-restricted-ssh-user-for-port-forwarding
Note: Be warned that if you don't lock it down, any user will have full shell access on the box where the application is hosted.
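One hedged way to lock it down is with per-key options in ~/.ssh/authorized_keys (the key body and comment are shortened placeholders):

command="/bin/false",no-pty,no-agent-forwarding,no-X11-forwarding,permitopen="localhost:3000" ssh-ed25519 AAAA... alice@example

This restricts the key to forwarding port 3000 with no interactive shell; the client would connect with ssh -N so no remote command is run.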
A dirty hack off the top of my head: could you wrap the application so that it creates an actual SSH tunnel from localhost to your server, and use that for authentication?
Assuming you are talking about a web-based application, what you are really looking for is X.509 client certificates (OID 1.3.6.1.5.5.7.3.2, TLS client authentication). This will allow you to identify a user individually to your application.
These face the same issues usually faced with key distribution, which is generally considered a hard problem.
If you wanted to head down this road here is what you would need to do.
Generate a root certificate (once).
Set up the web server with the appropriate modules to parse the certificate (nginx/apache).
Generate a certificate for each user (openssl).
Download the certificate from a centralized server (maybe use their SSH pub key here).
Install the X.509 cert locally (OS dependent).
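A hedged openssl sketch of steps 1 and 3 (subject names, key sizes, and validity periods are arbitrary choices, not requirements):

# one-time root CA
openssl req -x509 -newkey rsa:4096 -keyout ca.key -out ca.crt -days 3650 -nodes -subj "/CN=MyRootCA"
# per-user key and CSR, then sign with the CA
openssl req -newkey rsa:2048 -keyout alice.key -out alice.csr -nodes -subj "/CN=alice"
openssl x509 -req -in alice.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out alice.crt -days 365
# optional PKCS#12 bundle so the user can import the cert into a browser/OS store
openssl pkcs12 -export -in alice.crt -inkey alice.key -out alice.p12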
On the server side, you would need to process the cert as part of the web server (nginx or apache should have modules to do this) and then pass the name on to your application as a header field, which you can then process internally.
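For example, a hedged nginx sketch (file paths, the upstream port, and the X-Client-DN header name are assumptions; ssl_verify_client and $ssl_client_s_dn are standard nginx SSL module features):

server {
    listen 443 ssl;
    ssl_certificate        /etc/nginx/server.crt;
    ssl_certificate_key    /etc/nginx/server.key;
    ssl_client_certificate /etc/nginx/ca.crt;  # root CA from the sketch above
    ssl_verify_client      on;                 # reject clients without a valid cert

    location / {
        proxy_pass http://127.0.0.1:8080;                # your application
        proxy_set_header X-Client-DN $ssl_client_s_dn;   # pass the identity along
    }
}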
This is a much better security solution than usernames and passwords, but it is complex because of the key distribution issue. Most people wouldn't bother, since in most applications it is easy enough to integrate logins with LDAP or RADIUS.
I have recently set up a VM on Google Cloud to develop and host my web site/application. The setup went fine, and I even have the gcloud SDK up and running. I also have Apache installed and configured. My question is, how do I set up my editing environment (PHP Storm) and upload my files? They seem to have the ports for FTP and SFTP blocked.
FTP uses a clear-text protocol and is thus not recommended. To use SFTP:
Make sure you can ssh to your instance: gcutil --project=<project> ssh <instance>. This does two things: (a) makes sure that port 22 is open on your VM, and (b) propagates your public key to the instance, if it's not already there.
Configure PHP Storm to use the Key pair authentication mechanism using the key ~/.ssh/google_compute_engine to log in to the instance.
That's it.
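If PHP Storm then fails to connect, a hedged sanity check from a terminal (the username and external IP are placeholders):

ssh -i ~/.ssh/google_compute_engine myuser@203.0.113.5

If that works, point PHP Storm's SFTP deployment configuration at the same host, user, and key file.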