How do I restore phabricator if I deleted the files but the database is still intact? - phabricator

So, I did a stupid rm -rf on the directory that contained my complete Phabricator installation.
The whole Phabricator database is still intact, though.
I cloned the required repos on the same old location:
somewhere/ $ git clone https://github.com/phacility/libphutil.git
somewhere/ $ git clone https://github.com/phacility/arcanist.git
somewhere/ $ git clone https://github.com/phacility/phabricator.git
Apache was already configured during previous install.
I then ran:
./bin/storage upgrade
After that I went to the address that points to the phabricator folder. Now I get the following error:
1146: Table 'phabricator_user.user_cache' doesn't exist
How do I resolve it? Or in general, what's the best way to reinstall phabricator using the old database?
Thanks

Well, if you still have the database, make a mysqldump of the data (export the DB data; ideally you already have this by default, e.g. a cron job running a backup script to another backup machine, USB drive, hard disk, or cloud storage).
Do a fresh reinstall of Phabricator (even of the whole LAMP stack, if necessary).
Import the previous backup.sql you made.
After setting the user/password/host/port in "path_to_phab/conf/local/local.json", either via the command line or by simply editing the file, try to run:
./bin/storage upgrade
This should work fine if your file storage engine is set to the MySQL database (not recommended). If you use a different storage engine (such as local disk), also restore that data by reproducing the paths configured in the fresh installation's config files, in addition to the MySQL import.
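A minimal sketch of that process, assuming the default "phabricator_" database prefix and placeholder MySQL credentials (adjust host, user, and password to your environment):
# 1. Export the existing data. Phabricator's own tool dumps all phabricator_* databases:
./bin/storage dump > phabricator-backup.sql
#    (plain mysqldump also works if the Phabricator tree is gone, e.g.
#     mysqldump -u root -p --all-databases > phabricator-backup.sql)

# 2. On the fresh clone, point Phabricator at the database from the command line:
./bin/config set mysql.host localhost
./bin/config set mysql.user root
./bin/config set mysql.pass 'secret'     # placeholder credentials

# 3. Import the backup (only needed if you moved to a new MySQL server), then upgrade the schema:
mysql -u root -p < phabricator-backup.sql
./bin/storage upgrade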

Related

Getting this weird error when trying to run DVC pull

I am new to using DVC and just exploring it. I am trying to pull data from s3 that was pushed by another person on my team. But I am getting this error:
WARNING: Some of the cache files do not exist neither locally nor on remote. Missing cache files:
name: head_test_file.csv, md5: 45db668193ba44228d61115b1d0304fe
WARNING: Cache '45db668193ba44228d61115b1d0304fe' not found. File 'head_test_file.csv' won't be created.
No changes.
ERROR: failed to pull data from the cloud - Checkout failed for following targets:
head_test_file.csv
Did you forget to fetch?
My mistake. I had run dvc add but missed running dvc push. Running that fixed it.
When I ran dvc add it did create the my_file.csv.dvc file, but the data itself was never pushed. So when I tried to pull, DVC saw the .dvc file but could not find the corresponding cache entry on the remote.
It was a simple solution, but it took me a while to figure out. Since this is a new tool, I am asking and answering my own question in case someone else makes the same mistake.
You may want to run dvc install, which installs a Git hook that automates dvc push before git push:
Push: While publishing changes to the Git remote with git push, it's easy to forget that the dvc push command is necessary to upload new or updated data files and directories tracked by DVC to remote storage.
This hook automates dvc push.
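For reference, a minimal sketch of the intended workflow on the machine that adds the data (file name taken from the question; this assumes the S3 remote is already configured):
# Track the file with DVC, commit the pointer file, and push the data.
dvc add head_test_file.csv
git add head_test_file.csv.dvc .gitignore
git commit -m "Track head_test_file.csv with DVC"
dvc push        # uploads the cached data to the S3 remote
git push

# Optional: install Git hooks so dvc push runs automatically with git push.
dvc install

# On the other machine:
git pull
dvc pull        # now finds the cache entry on the remote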

Git - Can't Push - "! [remote rejected] master -> master (Working directory has unstaged changes)"

I am trying to set up a "simple" git workflow for a Wordpress installation that has a local version, a staging version and a production version. All I want to do is make changes locally and push from local to staging and from local to production. At first I thought this would be a simple task. I have been able to initialize the repository, add the two remotes "staging" and "production", initialize git on the remotes and push changes from my local version to the staging and production servers normally using the commands:
git add .
git commit -m "some changes"
git push staging master
git push production master
However, at some point during my work something changed, and while I am still able to push to Staging, now I am unable to push to the Production server without getting the error:
! [remote rejected] master -> master (Working directory has unstaged changes)
When I do "git status" it says:
On branch master
nothing to commit, working tree clean
After reading the answers to several SIMILAR BUT DIFFERENT questions on Stack Overflow I have tried the following:
git pull staging master
git pull staging master --rebase
git pull production master
git pull production master --rebase
I also tried executing this command on the remote servers
git config --local receive.denyCurrentBranch updateInstead
I have already completely re-created the servers and repositories a few times just to re-install git entirely from scratch, but this problem keeps happening after a while and at this point Git is actually HURTING my workflow instead of helping it. If anyone has any insight into what my mistake is, it would be much appreciated!
I had similar problems, pushing to a non-bare remote repo where I wanted the working copy files to be checked out immediately.
My remote was configured with receive.denyCurrentBranch updateInstead, but it still refused to accept pushes at unpredictable times.
Git 2.4 added a push-to-checkout hook, which can override such push failures.
I wrote a short script based on the example in the githooks documentation.
#!/bin/sh
# push-to-checkout hook: $1 is the commit being pushed.
set -ex
# Overwrite the index and working tree with the pushed commit (like git checkout -f).
git read-tree --reset -u HEAD "$1"
I installed this script in .git/hooks/push-to-checkout in my remote repo.
Note that this script overwrites the working copy in the remote — it does not attempt to merge any changes to those files.
That's because I want the working copy to simply reflect the files in the repo.
Pushing to a bare Git repo is the best practice.
You could push to a non-bare one... but only if you are not modifying files on the destination side while you are pushing files from the source side. See push-to-deploy.
But the best practice remains to add a post-receive hook (as in my other answer) in order to check out the files you have received into an actual working folder.
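A minimal sketch of that approach, assuming a bare repository on the server and a placeholder deployment path of /var/www/site:
#!/bin/sh
# hooks/post-receive in the bare repository on the server.
# Check out the newly pushed master branch into the deployment folder.
GIT_WORK_TREE=/var/www/site git checkout -f master
Remember to make the hook executable (chmod +x hooks/post-receive), otherwise Git will not run it.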

Artifactory: Converting remote repo to local repo

My employer has been misusing Bintray as our binary repository for some time. We are finally moving to Artifactory instead and closing down Bintray. But this seems to be an almost impossible task. There is no way of exporting Bintray repos to a zip. Downloading the repos means manually downloading each file from the UI or through their API. I have tried two approaches for automation:
1) wget for crawling our bintray like this:
wget -e robots=off -o ~/wget.log -w 1 -m -np --user --password "https://.bintray.com"
which yielded all of the files in the repos. But this only solves half the problem. I couldn't find out how to import the files into a repository in Artifactory (all the repos are over 100 MB each and therefore can't be uploaded, for some reason).
2) I set the Bintray repos up as remote repositories and enabled Active Replication. That seems to have worked for now. But I don't know if they will be removed when the Bintray account is closed down, or even whether they are actually stored in Artifactory. Therefore I would like to convert the remote repo to a local repo, to make sure that the content is permanently stored in Artifactory. Is there a way of doing this? If so, how?
I'll try to address both of your questions below.
What do you mean you can't upload more than 100 MB? Which version of Artifactory are you using? On-prem or SaaS-based installation? How are you trying to upload your files to Artifactory? Have you tried importing the content using Artifactory's import feature (Admin --> Import & Export --> Repository Import)?
It sounds like you are using the UI for the upload; if so, you can configure the max upload size in the Admin --> General Configuration page.
If you mean that you have all of the content from Bintray cached in your remote repository cache in Artifactory, just use the "Copy" or "Move" option and move the content to a local repository. This will ensure that all of the content is stored locally.
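The same copy can also be done through Artifactory's REST API (the Copy Item call). A sketch with curl, where the repository names, path, and credentials are placeholders; note that a remote repository's cache is addressed as <repo-key>-cache:
# Copy a cached folder from the remote repo's cache into a local repository.
curl -u admin:password -X POST \
  "https://artifactory.example.com/artifactory/api/copy/bintray-remote-cache/org/mylib?to=/libs-release-local/org/mylib"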

Error while updating Drupal with drush

I want to update my Drupal site, but when I execute drush up I get an error:
The tb_sirate_starter directory could not be found within the profiles directory at /var/www/html/project/sites/all/modules/tb_megamenu, perhaps the project is enabled but has been deleted from disk.
I have tb_megamenu, and I tried to install tb_sirate_starter, but the error does not disappear.
The update might have moved files around and failed before the system table in Drupal's database was updated.
Try installing the Registry Rebuild project (if it is not installed already) and running a registry rebuild: https://www.drupal.org/project/registry_rebuild.
You can see if you already have it installed and run it by executing:
drush rr
If you do not have it installed already, go ahead and install it, then follow the directions on the project page to run it.
Once that is complete, run a database update and a cache clear as well:
drush updb -y
drush cc all
Be sure to make a backup of your database and codebase before doing any of this (as you should have before running drush up in your original attempt and as you should before any and all upgrades).
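For reference, a sketch of the full sequence with Drush on a Drupal 7 site (the backup filename is a placeholder; the drush rr command is provided by registry_rebuild):
# Back up the database first (back up the codebase separately).
drush sql-dump --gzip --result-file=~/backup-before-update.sql

# Download Registry Rebuild if needed, then rebuild the registry.
drush dl registry_rebuild
drush rr

# Run pending database updates and clear all caches.
drush updb -y
drush cc all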

Unignore vendor folder for deployment

I have been trying to deploy my Symfony2 project on OpenShift with one small gear. My plan was to execute composer update once I finished pushing my code to the server. Unfortunately, an error keeps telling me that there is not enough memory to execute the command.
So I thought of unignoring the bundles I need in the vendor folder by removing them from the .gitignore file, but they still don't get included in the commit.
Composer is known to consume a lot of memory. You should build your project (composer update, php app/console assetic:dump, ...) and then push it to the server. Take a look at Jenkins, it's a great tool.
Anyway, if you want to force tracking of ignored files, you can use the git add -f command.
You should only run composer update in dev; on the prod server use composer install (you need to have composer.lock under version control).
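A minimal sketch combining those suggestions (committing vendor/ is a workaround rather than a best practice; the commands assume Composer is available on both machines):
# On the dev machine: resolve dependencies and commit the lock file.
composer update
git add composer.json composer.lock
git commit -m "Update dependencies"

# If you really want to ship vendor/ despite .gitignore, force-add it:
git add -f vendor/
git commit -m "Commit vendor directory for deployment"

# On the production server: install exactly what the lock file specifies.
composer install --no-dev --optimize-autoloader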
This is a common problem caused by insufficient memory on the server. The solution is to add swap space, so that you have enough memory to complete the update.
free -m                                                # check current memory and swap
/bin/dd if=/dev/zero of=/var/swap.1 bs=1M count=1024   # create a 1 GB swap file, or
/bin/dd if=/dev/zero of=/var/swap.1 bs=1M count=2048   # a 2 GB one
/sbin/mkswap /var/swap.1                               # format the file as swap
/sbin/swapon /var/swap.1                               # enable it
Once you have added the swap, you can run the update command and it should complete successfully.
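If you want the swap file to survive a reboot, you can also register it in /etc/fstab (the path matches the commands above):
chmod 600 /var/swap.1                                  # restrict permissions on the swap file
echo '/var/swap.1 none swap sw 0 0' >> /etc/fstab      # enable it automatically at boot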
