Using rhc snapshot-save Returns Empty File - wordpress

I have a WordPress site on OpenShift and I'm attempting to backup the site. I've used commands:
rhc tidy-app
and
rhc snapshot-save
After reporting that a snapshot is being pulled down, "Success" is displayed a few seconds later, but only an empty tar.gz file is created (it should be roughly 50 MB).
This has happened before and, after a few repeated attempts, it usually worked eventually. This time I've tried several times without the backup being downloaded.
Anyone have any thoughts? Thanks.
FYI, the gear is well below the size and file count quotas

I came across this post because I was having the same issue. I was getting empty hello.tar.gz files when running the following command:
rhc snapshot save -a hello
After some research I found that I was missing an option. The hello.tar.gz file contained the expected contents after running the following command:
rhc snapshot save -a hello --deployment
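For anyone who wants a quick sanity check that the snapshot actually worked, something like the following should do it (assuming the app is named hello, so the archive lands in hello.tar.gz as described above):
rhc tidy-app                                  # clean up logs and tmp files first, as in the question
rhc snapshot save -a hello --deployment
tar -tzf hello.tar.gz | head                  # list a few entries; an empty archive lists nothing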

Related

(Dagster) Schedule my_hourly_schedule was started from a location that can no longer be found

I'm getting the following Warning message when trying to start the dagster-daemon:
Schedule my_hourly_schedule was started from a location Scheduler that can no longer be found in the workspace, or has metadata that has changed since the schedule was started. You can turn off this schedule in the Dagit UI from the Status tab.
I'm trying to automate some pipelines with dagster and created a new project using dagster new-project Scheduler, where "Scheduler" is my project name.
This command, as expected, created a directory with some hello_world files. Inside it I put the dagster.yaml file with the configuration for a Postgres DB to which I want to write the logs. The whole thing looks like this:
However, whenever I run dagster-daemon run from the directory where the workspace.yaml file is located, I get the message above. I tried running the daemon from other folders, but it then complains that it can't find any workspace.yaml files.
I guess I'm running into a "beginner mistake", but could anyone help me with this?
I appreciate any counsel.
One thing to note is that the dagster.yaml file will not do anything unless you've set your DAGSTER_HOME environment variable to point at the directory that this file lives in.
That being said, I think what's going on here is that you don't have the Scheduler package installed into the python environment that you're running your dagster-daemon in.
To fix this, you can run pip install -e . in the Scheduler directory, although the README.md inside that directory has more specific instructions for working with virtualenvs.
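Putting both points together, a rough sequence could look like this (the path is a placeholder for wherever your Scheduler project and its dagster.yaml actually live):
export DAGSTER_HOME=/path/to/Scheduler    # directory that contains dagster.yaml
cd /path/to/Scheduler
pip install -e .                          # install the Scheduler package into the active environment
dagster-daemon run                        # run from the directory that holds workspace.yaml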

How can I mount my Google drive to Colab when the runtime type is Julia instead of Python?

I have two copies of a 400MB dataset file in my personal computer and in my Google drive. I want to play with the dataset with the programming language Julia on the Google Colab Jupyter notebook. I found a working code piece that changes the default Colab runtime type from Python 3 to Julia 1.3.1. If you run the following code in a code cell, and then reload the Colab page, the runtime type becomes Julia:
%%shell
# install Julia 1.3.1 into /usr/local if it is not already present
if ! command -v julia > /dev/null 2>&1
then
    wget 'https://julialang-s3.julialang.org/bin/linux/x64/1.3/julia-1.3.1-linux-x86_64.tar.gz' \
        -O /tmp/julia.tar.gz
    tar -x -f /tmp/julia.tar.gz -C /usr/local --strip-components 1
    rm /tmp/julia.tar.gz
fi
# add the packages needed for the IJulia kernel and plotting
julia -e 'using Pkg; pkg"add Plots; add PyPlot; add IJulia; add Knet;"'
julia -e 'using Pkg; pkg"build Knet;"'
When the runtime type becomes Julia, clicking on the Mount Drive button returns the following error message:
Mounting your Google Drive is only available on hosted Python runtimes.
When I mount the drive on the Python runtime and then switch the runtime type to Julia, Colab clears everything, including the mounted drive, so this method does not work either.
When I try to upload the dataset to Colab from my computer, everything starts smoothly. However, each time I try uploading from my computer in place of mounting the drive, I run into one of two problems: either the upload fails, or Colab stops the Julia runtime due to inactivity (how can I stay active without my dataset?). When the upload stops before completing, the yellow-green circle at the bottom left of the page, which indicates the percentage of the task completed, turns completely red; there is no error message other than this red circle. When I download the incomplete uploaded file back to my computer, it is only around 20 MB (the original file is 400 MB), so I can tell the upload has failed.
The same question has been asked here before. However, the answer suggests mounting the drive in the Python runtime and changing the runtime type afterwards. This does not work for me because, when the runtime changes, everything goes away as I stated above.
By the way, my dataset cannot be found anywhere else, so the sample datasets folder does not help.
So, how can I use my dataset on Google Colab with Julia?
If the dataset is not top secret, you can share it publicly and use the gdown command to download it:
run(`gdown --id 1-7dVdjCIZIxh8hHJnGTK-RA1-jL1tor4`)
Here 1-7dV...or4 is the file_id taken from the shared URL.
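That run call just shells out, so the equivalent shell commands look like the following; the output name dataset.tar.gz is only an example, and the pip install is there in case gdown isn't already available on the Colab image:
pip install gdown                                                # only needed if gdown is not preinstalled
gdown --id 1-7dVdjCIZIxh8hHJnGTK-RA1-jL1tor4 -O dataset.tar.gz   # -O sets the local file name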

Artifactory backup directory is not recognized

I wanted to move the backups to a different disk. I mounted the disk at /mnt2 on CentOS, and when I navigate to Admin > Backups > Backup Daily > Edit backup-daily Backup, I see an option Server Path For Backup. I tried the following two things.
I entered the mount directory /mnt2 and hit Run Now. The background job fails with the following error in the logs:
An error occurred while performing a backup: Backup directory provided
in configuration: '/mnt2' cannot be created or is not a directory.
I also tried creating a tmp2 directory on the local drive, entered /tmp2, and hit Run Now. The background job fails with the same error as above.
Note 1:
I restarted the Docker container just to check whether it wasn't picking up file system changes in real time. That did not help.
Note 2:
There is a Browse button next to Server Path For Backup and I don't see the /mnt2 or /tmp2 directories I created. I could not find anything useful in the documentation either.
How do I change the backup directory for Artifactory?
The setup is Artifactory with Docker.
For an Artifactory Docker instance, a volume needs to be specified so that a path inside the container maps to a local folder, say /opt/artifactory/.
In my case, /var/opt/jfrog/artifactory (Docker) is mapped to /opt/artifactory (local).
So I create the folder /opt/artifactory/backup_mount and give read and write access to the 1039 user and group. It shows up in the Artifactory UI as /var/opt/jfrog/artifactory/backup_mount.
Note:
If you create a directory, it shows up without any Docker restart.
If you create a mount, restart Docker so the mount is recognized.
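As a rough sketch with plain Docker (the image name, tag, and port are just examples, not taken from the original setup; the paths and UID are the ones mentioned above):
mkdir -p /opt/artifactory/backup_mount
chown -R 1039:1039 /opt/artifactory/backup_mount     # the user/group noted above
docker run -d --name artifactory \
  -p 8081:8081 \
  -v /opt/artifactory:/var/opt/jfrog/artifactory \
  docker.bintray.io/jfrog/artifactory-oss:latest
# in the UI, set Server Path For Backup to /var/opt/jfrog/artifactory/backup_mount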

openstack image-delete can not delete image

Our OpenStack is Kilo and manages VMware 6.
We back up using the following command:
nova backup 762ecf86-f5eb-4c7c-a512-3cf15e23dd1a \
backup-snapshot-a-daily-$(date "+%Y%m%d-%H:%M") daily 1
A new image appears when running glance image-list.
To delete it, we run glance image-delete without errors.
We run glance image-list again and find that the image has disappeared.
Yet in the vSphere client, the directory is still there in VMFS.
When we run glance image-list again a minute later, the image is back.
So how can we delete this image?

Installation issue: Directory bin in server/dynamodb-titan100-storage-backend-1.0.6-SNAPSHOT-hadoop1 does not exist

I have been trying for a couple of days to install the AWS DynamoDB Titan Storage Backend on Windows Subsystem for Linux without any success. I am using the following instructions: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Tools.TitanDB.DownloadingAndRunning.html
I am currently stuck at step 5, where I am supposed to install Gremlin Server using the following command:
src/test/resources/install-gremlin-server.sh
The command runs successfully without any error, but when I try to run the next command
bin/gremlin-server.sh ${PWD}/conf/gremlin-server/gremlin-server-local.yaml
after changing directory to
server/dynamodb-titan100-storage-backend-1.0.0-hadoop1
it fails because the bin directory does not exist; on inspection, only two directories exist (badlibs and ext). I have searched for a solution in vain; hopefully someone can help. Thanks.
In order to install Gremlin, you need to do it as root on Linux, otherwise it won't work. I repeated the procedure as root and everything worked. To run it, you also need to be root, otherwise you will run into some problems.
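In other words, the steps from the question, just run with elevated privileges (the directory name is the one given in the question):
sudo src/test/resources/install-gremlin-server.sh
cd server/dynamodb-titan100-storage-backend-1.0.0-hadoop1
sudo bin/gremlin-server.sh ${PWD}/conf/gremlin-server/gremlin-server-local.yaml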
