I have recently set up a VM on Google Cloud to develop and host my web site/application. The setup went fine, and I even have the gcloud SDK up and running. I also have Apache installed and configured. My question is: how do I set up my editing environment (PhpStorm) and upload my files? They seem to have the ports for FTP and SFTP blocked.
FTP is a clear-text protocol and is thus not recommended. To use SFTP:
Make sure you can ssh to your instance: gcutil --project=<project> ssh <instance>. This does two things: (a) makes sure that port 22 is open on your VM, and (b) propagates your private key to the instance, if it's not already there.
Configure PhpStorm to use the key-pair authentication mechanism, with the key ~/.ssh/google_compute_engine, to log in to the instance.
That's it.
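If you want to sanity-check the connection outside PhpStorm first, you can test SFTP from a terminal. A minimal check, assuming your instance's external IP and login user (the placeholders below are hypothetical). Note that on current SDK versions gcutil has been replaced by gcloud compute ssh <instance>, which likewise generates ~/.ssh/google_compute_engine on first use.
sftp -i ~/.ssh/google_compute_engine <username>@<instance-external-ip>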
I'm using Ant Media Server on AWS and it works perfectly fine. However, some of our users have blocked UDP ports and therefore I want to know if it is possible to use TCP instead of UDP for WebRTC.
And with this in your User Data in AWS, you'll get the current instance's public IP inserted into server.name automatically on boot:
sed -i "s/server.name=.*/server.name=$(curl -s http://169.254.169.254/latest/meta-data/public-ipv4)/g" /usr/local/antmedia/conf/red5.properties
Yes, we can make use of TCP ports for WebRTC.
Please open the TCP port range 50000-60000 on the AWS security group (for AMS v2.4.2.1 and above; for older versions use the port range 5000-65000).
Go to the Application settings:
/usr/local/antmedia/webapps/<AppName>/WEB-INF/red5-web.properties
Edit the red5-web.properties file and set
settings.webrtc.tcpCandidateEnabled=true
Restart Ant Media Server
sudo service antmedia restart
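If you'd rather make that edit from a shell, a sketch like this should work, assuming the default install path and a hypothetical application named LiveApp (append the key only if it isn't already present in the file; otherwise edit the existing line):
echo "settings.webrtc.tcpCandidateEnabled=true" | sudo tee -a /usr/local/antmedia/webapps/LiveApp/WEB-INF/red5-web.properties
sudo service antmedia restart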
If you are using a cloud service like OVH, or there is a public IP directly associated with the instance, then WebRTC should work.
If you are using a cloud service like AWS with a private/public IP pair, then some additional settings need to be configured.
Go to server configuration settings
/usr/local/antmedia/conf/red5.properties
Edit the red5.properties file and set
server.name=Instance_Public_IP
Go to the application settings again and edit the red5-web.properties
/usr/local/antmedia/webapps/<AppName>/WEB-INF/red5-web.properties
set
settings.replaceCandidateAddrWithServerAddr=true
Save the settings and restart Ant Media Server
sudo service antmedia restart
WebRTC should work fine afterwards.
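To script the AWS-specific part, the same metadata trick from the User Data snippet above can set both values at boot. A sketch, assuming the default install path and a hypothetical application named LiveApp:
#!/bin/bash
# Point server.name at the instance's current public IP (IMDSv1 metadata endpoint)
PUBLIC_IP=$(curl -s http://169.254.169.254/latest/meta-data/public-ipv4)
sed -i "s/server.name=.*/server.name=${PUBLIC_IP}/g" /usr/local/antmedia/conf/red5.properties
# Make WebRTC candidates advertise the server address instead of the private IP
echo "settings.replaceCandidateAddrWithServerAddr=true" >> /usr/local/antmedia/webapps/LiveApp/WEB-INF/red5-web.properties
service antmedia restart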
I have completed an automated ansible install and have most of the wrinkles worked out.
All of my services except the Nodes are running on a single box over non-secure HTTP. Though I specified 443 in my inventory, I now see that does not imply an HTTPS configuration, so I have non-secure API endpoints listening on 443.
Is there any way around the requirement of operating the CLC and Cluster Controller on different hardware, as described in the SSL how-to (https://docs.eucalyptus.cloud/eucalyptus/5/admin_guide/managing_system/bps/configuring_ssl/)?
I've read that how-to and can only guess that installing certs on the CLC messes up the Cluster Controller keys, but I don't fully grasp it. Am I wasting my time trying to find a workaround, or can I keep these services on the same box and still achieve SSL?
When you deploy Eucalyptus using the ansible playbook, a script will be available:
# /usr/local/bin/eucalyptus-cloud-https-import --help
Usage:
eucalyptus-cloud-https-import [--alias ALIAS] [--key FILE] [--certs FILE]
which can be used to import a key and certificate chain from PEM files.
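For example, to import a key and certificate chain from PEM files (the alias and file paths here are hypothetical):
# /usr/local/bin/eucalyptus-cloud-https-import --alias cloud --key /root/cloud-key.pem --certs /root/cloud-chain.pem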
Alternatively you can follow the manual steps from the documentation that you referenced.
It is fine to use HTTPS with all components on a single host; the documentation is out of date.
Eucalyptus will detect if an HTTP(S) connection is using TLS (SSL) and use the configured certificate when appropriate.
It is recommended to use the ansible playbook certbot / Let's Encrypt integration for the HTTPS certificate when possible.
When manually provisioning certificates, wildcards can be used (*.DOMAIN *.s3.DOMAIN) so that all services and S3 buckets are covered. If a wildcard certificate is not possible, then the certificate should include the service endpoint names (autoscaling, bootstrap, cloudformation, ec2, elasticloadbalancing, iam, monitoring, properties, route53, s3, sqs, sts, swf).
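If you are provisioning the wildcard certificate yourself rather than through the playbook's certbot integration, one way to request it is a manual DNS-01 challenge with certbot (a sketch with a hypothetical domain):
certbot certonly --manual --preferred-challenges dns -d '*.example.com' -d '*.s3.example.com'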
I installed WordPress on a GCP VM and tried installing plugins and themes through the wp-admin dashboard, but it asks for an FTP server.
I installed vsftpd but couldn't connect to the server, even after creating firewall rules. I was hoping someone could help.
As others have pointed out, knowing what type of firewall rules you have configured (and how), or whether you followed a specific tutorial, would be very helpful for providing a specific answer. I'll do my best to give a general answer based on the details you shared.
It's not clear to me whether you modified the firewall rules inside your instance or in the Cloud Console. This page describes the commands for working with firewall rules in GCP and offers some examples of using them. In case you were setting firewall rules within the instance, make sure both firewalls are configured properly.
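For instance, a Cloud Console-level rule for vsftpd could look like this (the passive port range is hypothetical and must match the pasv_min_port/pasv_max_port values in your vsftpd.conf):
gcloud compute firewall-rules create allow-ftp --allow=tcp:21,tcp:40000-40100 --source-ranges=0.0.0.0/0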
I'm not familiar with vsftpd, but I found this tutorial that you may find helpful, as it's specific to GCP.
As Gurpreet mentioned in his reply, you can use SSH keys to connect via SFTP instead. This is a tutorial for configuring an SFTP connection with FileZilla, and it is also specific to GCP.
If you expand your question with more details, screenshots, etc., maybe we can provide better suggestions to solve your issue.
You can use FileZilla to connect to GCP through SFTP.
Host is your public IP.
Username should be root by default, unless you changed it.
And Password is your root password.
If you don't have the root password or are not able to connect via SFTP, you can use SSH keys.
Read this carefully regarding how to add SSH keys in Google Cloud Console:
https://cloud.google.com/compute/docs/instances/adding-removing-ssh-keys
After adding SSH keys, you can use FileZilla without the root password by authenticating with the SSH keys:
https://tecadmin.net/import-private-key-in-filezilla/
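If you still need to generate a key pair, something like this works on your local machine (the file name is arbitrary; GCP reads the login username from the key's comment, so set it to the user you want):
ssh-keygen -t rsa -f ~/.ssh/gcp_sftp -C your_username
cat ~/.ssh/gcp_sftp.pub
Paste the printed public key into the Cloud Console SSH keys page, then point FileZilla at the private key file.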
Say I have a Docker image, and I deployed it on some server. But I don't want other users to access this image. Is there a good way to encrypt the Docker image?
Realistically, no: if a user has permission to run the Docker daemon, then they are going to have access to all of the images. This is due to the elevated permissions Docker requires in order to run.
See the extract from the Docker security guide below for more info on why this is.
Docker daemon attack surface
Running containers (and applications) with Docker implies running the Docker daemon. This daemon currently requires root privileges, and you should therefore be aware of some important details.
First of all, only trusted users should be allowed to control your Docker daemon. This is a direct consequence of some powerful Docker features. Specifically, Docker allows you to share a directory between the Docker host and a guest container; and it allows you to do so without limiting the access rights of the container. This means that you can start a container where the /host directory will be the / directory on your host; and the container will be able to alter your host filesystem without any restriction. This is similar to how virtualization systems allow filesystem resource sharing. Nothing prevents you from sharing your root filesystem (or even your root block device) with a virtual machine.
This has a strong security implication: for example, if you instrument Docker from a web server to provision containers through an API, you should be even more careful than usual with parameter checking, to make sure that a malicious user cannot pass crafted parameters causing Docker to create arbitrary containers.
For this reason, the REST API endpoint (used by the Docker CLI to communicate with the Docker daemon) changed in Docker 0.5.2, and now uses a UNIX socket instead of a TCP socket bound on 127.0.0.1 (the latter being prone to cross-site request forgery attacks if you happen to run Docker directly on your local machine, outside of a VM). You can then use traditional UNIX permission checks to limit access to the control socket.
You can also expose the REST API over HTTP if you explicitly decide to do so. However, if you do that, being aware of the above mentioned security implication, you should ensure that it will be reachable only from a trusted network or VPN; or protected with e.g., stunnel and client SSL certificates. You can also secure them with HTTPS and certificates.
The daemon is also potentially vulnerable to other inputs, such as image loading from either disk with ‘docker load’, or from the network with ‘docker pull’. This has been a focus of improvement in the community, especially for ‘pull’ security. While these overlap, it should be noted that ‘docker load’ is a mechanism for backup and restore and is not currently considered a secure mechanism for loading images. As of Docker 1.3.2, images are now extracted in a chrooted subprocess on Linux/Unix platforms, being the first step in a wider effort toward privilege separation.
Eventually, it is expected that the Docker daemon will run with restricted privileges, delegating operations to well-audited sub-processes, each with its own (very limited) scope of Linux capabilities, virtual network setup, filesystem management, etc. That is, most likely, pieces of the Docker engine itself will run inside of containers.
Finally, if you run Docker on a server, it is recommended to run exclusively Docker on the server, and move all other services within containers controlled by Docker. Of course, it is fine to keep your favorite admin tools (probably at least an SSH server), as well as existing monitoring/supervision processes (e.g., NRPE, collectd, etc).
If only some strings need to be encrypted, you could encrypt that data using openssl or an alternative tool. The encryption would be set up inside the Docker container: when the image is built, the data is encrypted; when the container is run, the data is decrypted (possibly by an entrypoint using a passphrase passed from a .env file). This way the image can be stored safely.
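A minimal sketch of that idea, assuming a single secret file, OpenSSL 1.1.1+ (for -pbkdf2), and passphrases supplied via environment variables (all file and variable names here are hypothetical):
# at build time: encrypt the secret so only the ciphertext is baked into the image
openssl enc -aes-256-cbc -pbkdf2 -salt -in secret.txt -out secret.enc -pass env:BUILD_PASSPHRASE
# entrypoint.sh: decrypt at container start, then hand off to the real command
openssl enc -d -aes-256-cbc -pbkdf2 -in secret.enc -out /run/secret.txt -pass env:SECRET_PASSPHRASE
exec "$@"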
I am going to play with it this week as time permits, as I am pretty curious myself.
WooCommerce webhooks aren't firing at all for me, even on a fresh install. I did the following:
Created a new MySQL database.
Installed WP from the zip file.
Set up WP.
Installed WooCommerce.
Enabled the REST API and created a key.
Added a "Coupon created" webhook, made sure it's set to active, and pointed it at a publicly accessible site.
When I create a coupon, the webhook does not fire, and no entry is created in the log. I tried this with orders as well, and it also doesn't work.
I think it's a machine configuration problem, but I'm not sure what to change. The machine is an EC2 instance and has all ports opened in its security group policy.
Weirdest of all is that it does work on a different EC2 instance, but that's a production machine and I want to have a dev server working so I can test things out. The only config differences between the production and dev machines that I can think of are the subnets and the firewall, but I don't understand why the subnet should matter, and I opened all the firewall ports on the dev machine.
What Linux distributions are you running for prod and dev?
CentOS with SELinux enabled does not allow HTTPD scripts and modules to connect to the network by default. Allow it with:
setsebool -P httpd_can_network_connect on
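To confirm the boolean is now set:
getsebool httpd_can_network_connect
# expected output: httpd_can_network_connect --> on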
If the above is not the issue, identify network problems by trying to connect to AWS RDS from the instance's CLI. If you can open a connection from the CLI, the problem is with your application; if you can't, it is a network problem. The first thing to check in that case is the AWS RDS security group. For testing, you can open 3306 to the public.
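For example, a quick reachability test from the dev box (the endpoint below is a hypothetical placeholder for your RDS instance's hostname):
nc -zv <your-db-instance>.<region>.rds.amazonaws.com 3306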
Let me know how it goes.