Root login in instance with eucalyptus - eucalyptus

I have a Eucalyptus setup with one node and a frontend. I created an instance and I can connect with cloud-user@IpOfInstance. But if I run the sudo su command, the instance asks for the root password, which I never created. How can I connect to the instance as root? root@ip doesn't work.

I would be inclined to think that the image you installed isn't properly configured.
Are you able to reproduce the issue with one of the images located at the following URL?
http://emis.eucalyptus.com/
In particular, check out the "Quick start" section to make it more straightforward to install one of the images located there.
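In the meantime, a quick way to check whether the image is the problem: a properly configured cloud image lets the default user become root without a root password, because sudoers grants that account NOPASSWD. A sketch, assuming the default account is cloud-user as in the question:

```shell
# Log in as the image's default user, then escalate with sudo;
# a correctly built cloud image grants this user passwordless
# sudo, so no root password is ever needed:
ssh cloud-user@<instance-ip>
sudo -i

# The rule that makes this work typically lives in sudoers:
sudo grep -r NOPASSWD /etc/sudoers /etc/sudoers.d/
```

If `sudo -i` prompts for a root password on the stock EMI images too, the problem is in the environment rather than your custom image.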

Related

How to set private file directory of Drupal by Bitnami on AWS ECS Fargate?

Drupal 9 requires that the private file directory be set by editing the Drupal settings.php file. However, when I deploy the Drupal by Bitnami container (https://hub.docker.com/r/bitnami/drupal/) on AWS ECS Fargate, I am unable to SSH in to the machine and update the settings.php file.
I'm aware that one solution may be to configure ECS Exec (https://aws.amazon.com/blogs/containers/new-using-amazon-ecs-exec-access-your-containers-fargate-ec2/) so that I can tunnel into the container and edit the settings.php file. I'm wondering if that's the ideal solution, or if there is some way to use Docker/ECS commands to potentially:
Copy a custom settings.php file from an outside source (local, or S3)
or
Append text to the settings.php file this container creates after site installation
or
Something I'm not considering
The ultimate goal would be to set a private file directory which I can mount to an EFS file system. So, if there's an EFS-related shortcut I'm missing here, that could also prove useful.
I tried looking through the Drupal by Bitnami documentation to see if there was an environment variable or parameter I could set for the private file directory, but I did not see anything.
Thanks in advance,
Jordan
Trying to modify the settings after the container is running is the wrong way to do something like this. That method doesn't scale, and doesn't work in any sort of automated/fault-tolerant/highly-available environment. You shouldn't think of a running docker container as a server that you can then modify after the fact.
If you absolutely want to change the volume path in settings.php, then you probably need to create a custom Docker image, using this Bitnami image as the base; as part of the Dockerfile for that custom image you would either copy a new settings.php file in, or modify the existing file.
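A minimal sketch of that custom-image approach; the settings.php path inside the image is an assumption here and should be verified against the Bitnami image's documentation:

```shell
# Hypothetical: derive an image from bitnami/drupal and bake in
# a custom settings.php. The COPY target path is an assumption --
# check where the image actually keeps settings.php before using it.
cat > Dockerfile <<'EOF'
FROM bitnami/drupal:9
COPY my-settings.php /opt/bitnami/drupal/sites/default/settings.php
EOF
docker build -t my-drupal .
```

The derived image then replaces bitnami/drupal in the ECS task definition.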
However, I'm confused why you need to edit settings.php to change the path. The documentation for the image you are using clearly states that you need to create a persistent volume that is mapped to the /bitnami/drupal path. There's no need to change that to anything else, just configure your ECS Fargate task definition to map an EFS volume into the container at /bitnami/drupal.
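The relevant fragment of the Fargate task definition might look like the following sketch; the file system ID, volume name, and container name are placeholders:

```json
{
  "volumes": [
    {
      "name": "drupal-data",
      "efsVolumeConfiguration": { "fileSystemId": "fs-XXXXXXXX" }
    }
  ],
  "containerDefinitions": [
    {
      "name": "drupal",
      "image": "bitnami/drupal",
      "mountPoints": [
        { "sourceVolume": "drupal-data", "containerPath": "/bitnami/drupal" }
      ]
    }
  ]
}
```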

Edit file permissions using SFTP to Google Cloud Engine Wordpress website

My goal is to update 1 file on my Wordpress website on Google Cloud Engine using Filezilla to transfer it.
I am successfully logged into my files using SFTP. I'm on a Mac. I have my vm instance name from Google Cloud Engine but cannot find how to create a password.
I think if I can figure out how to create a ding dang password, my next step is to type this in the terminal:
sudo chown "vm-name" /var/www/html
Any direction is much appreciated. My website has been down since yesterday b/c I messed with https plugin. I'm a designer and got in way over my head. Learned more than bargained for so far.
To run any Linux command inside your instance you need to:
Generate a private/public key pair using PuTTYgen.
Add the content of your public key to GCP, as in the answer below:
How can i setup remote project with PhpStorm with Google Compute Engine running LEMP?
Open PuTTY.
In the Session menu, enter yourusername@instance_external_ip as the hostname, using connection type SSH.
In the Connection/SSH/Auth menu, browse to and select your private key (the one generated in step 1).
Click Open. A new terminal window will open. You will be able to navigate inside your Linux instance and run whatever sudo commands you wish.
Good luck
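Since the asker is on a Mac, the PuTTY steps above have a simpler equivalent; a sketch assuming the gcloud CLI is installed and authenticated (instance name, zone, and username are placeholders):

```shell
# gcloud generates an SSH key, pushes it to the instance,
# and opens a shell in one step -- no PuTTY needed:
gcloud compute ssh your-vm-name --zone=your-zone

# Once inside, run the intended command with sudo, e.g.:
sudo chown -R your-username /var/www/html
```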

Google Cloud Engine - Issues When Mounting Storage Bucket To Use With WordPress

I searched thoroughly and could not find any info on this issue.
Currently, I am building a WordPress site, hosted on Google Cloud's Compute Engine. The goal is to build an autoscaling WP site. I am currently attempting to build the instance template, which I can then use to deploy an instance group and subsequently set up the HTTP load balancer.
I am using a base Ubuntu 16.04 image, and from there I installed the rest of the LEMP stack.
I was successfully able to use the SQL proxy to connect to Cloud SQL so that all of the ephemeral instances to be deployed will share the same database. To have the proxy initiated each time a new instance is spun up, I am using a startup script:
./cloud_sql_proxy -dir=/cloudsql &
But here's my issue right now: the WordPress files. I can't figure out a way to have all of the instances use the same WP files. I already tried to use Google Cloud Storage FUSE, but when I mount the bucket at the root directory, it deletes all of the WordPress folders (wp-content, wp-admin, wp-includes).
It's as if the instance can't "see" the folders I have in the google cloud storage. Here was my work flow:
setup LEMP with WordPress--got the site working on the front end--all systems go
copied all of the files/folders in my root directory (/var/www/html/) to google cloud bucket
gsutil -m cp -r /var/www/html/. gs://example-bucket/
Then I mounted the bucket at my directory
gcsfuse --dir-mode "777" -o allow_other example-bucket /var/www/html/
But now, when I "ls" inside of the root directory from the Linux terminal, I DON'T see any of the WP folders (wp-includes, wp-admin, wp-content).
My end goal is to have a startup script that:
initiates the DB proxy so all WP instances can share the same DB
initiates the mounting of the google cloud bucket so all instances can share the same WP files
Obviously, that isn't working right now.
What is going wrong here? I am also open to other ideas for how to make the WP files "persistent" and "shared" across all of the ephemeral instances that the instance group spins up.
Try:
sudo nano /etc/rc.local
In the file, after the line that reads:
#By default this script does nothing...
add:
gcsfuse --dir-mode=777 -o allow_other [bucket-name] /var/www/html/[folder-name, if applicable]
exit 0
This will mount your bucket in every new instance the group scales up.
Hope this helps somebody; I realize your question is more than six months old.
:)
Well, there should not be a problem with giving each instance its own WordPress files, unless those files will be modified by the instance.
Each instance can have its own copy of the WordPress files and all instances can share a common database. You can host the media files, such as images and videos, as well as JavaScript and CSS files, on a Content Delivery Network (CDN).
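One more detail worth checking against the original symptom: gsutil cp does not create explicit directory objects, and by default gcsfuse hides directories that exist only implicitly as object prefixes, which would make wp-admin, wp-content, and wp-includes invisible after the mount. A sketch of a startup script covering both goals from the question, mounting with the --implicit-dirs flag (bucket name and paths taken from the question; flag behavior per the gcsfuse docs):

```shell
#!/bin/bash
# Goal 1: start the Cloud SQL proxy so all instances share one DB.
./cloud_sql_proxy -dir=/cloudsql &

# Goal 2: mount the shared bucket; --implicit-dirs makes folders
# that exist only as object prefixes (wp-admin, wp-content, ...)
# visible in the mounted file system.
gcsfuse --implicit-dirs --dir-mode 777 -o allow_other example-bucket /var/www/html/
```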

Password protect wordpress on Bitnami

I'm new to Bitnami and currently using a Google Cloud Platform for my VMs.
I'm trying to password protect a wordpress installation for 1 of my VMs. It's a dev site so only going to be using the IP address to access the site.
However, following the instructions I found here, I am unable to write in the /opt/bitnami/apache2/ folder. Every time I try to run the commands in the instructions linked above, I get the following error:
/opt/bitnami/apache2/bin/htpasswd.bin: cannot create file opt/bitnami/apache2/wordpress_users
I've tried to manually change the permissions on the folder and it doesn't work. I can't seem to run any commands with su because there is no password provided for shell access (access is via an SSH key file only).
Can anyone can offer help as to what I'm doing wrong or what I'm missing?
Thanks
Bitnami developer here,
It is not necessary to modify permissions on the apache2 folder or the htpasswd.bin file. The bitnami user does not have a password by default, but it is allowed to run commands with higher privileges using sudo. Could you please revert the permission changes and run the command with sudo?
cd /opt/bitnami
sudo apache2/bin/htpasswd -cb apache2/wordpress_users your_desired_username your_desired_password
Let me know if you have any other issue or if it worked for you.
Best regards,
Gonzalo
Bitnami developer here,
Sorry, I didn't mention that you need to change to the /opt/bitnami directory first, as I thought you had already done so. Also, you have to run the command with sudo because root is the owner of the apache2 folder.
Apart from that, I'm glad you could fix your issue!
Best regards,
Gonzalo
Thanks for the idea, but it didn't work; I tried with sudo, having my apache2/ folder set to 755 and ownership back to its original setting.
On the other hand... I managed to solve my problem. It seems a part is missing from the command in the original instructions! Well, it's missing for me, because by adding the full path to the original file I was able to successfully get a password in place, and things are now working perfectly:
$ /opt/bitnami/apache2/bin/htpasswd -cb apache2/wordpress_users username password
Took me too long to get that working... Thanks for the help; without you I wouldn't have made a typo that made me rethink the error! Thanks again
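For completeness: the file that htpasswd creates only takes effect once Apache is pointed at it. The directives typically look like the following sketch; the Directory path is an assumption for a Bitnami WordPress layout and should be checked against the actual install:

```apacheconf
<Directory "/opt/bitnami/apps/wordpress/htdocs">
    AuthType Basic
    AuthName "Restricted dev site"
    AuthUserFile "/opt/bitnami/apache2/wordpress_users"
    Require valid-user
</Directory>
```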

Deploying GF4.0 in Solaris

I'm trying to install GlassFish 4.0 on a Solaris system, outside the user's home folder.
I set up the asenv.conf file and I can create the domain, but when performing several different actions, like restarting the domain or enabling secure admin, I get the message:
"Unable to create client data directory:/export/home/appbill/.gfclient"
"NCLS-ADMIN-00010"
/export/home/appbill is the home folder of my user, but I don't want any files to be created there.
GlassFish is installed at /opt/folder1/glassfish4/.
I don't know if there's an env variable I must set, or what exactly is happening, but this shouldn't be hard to solve.
Any ideas?
