Phabricator - Git push hangs

I finally got Phabricator installed and Git over SSH sort of working. I can clone a repository down fine; however, when it comes to pushing, I end up with the following:
git.exe push --progress "origin" master:master
Counting objects: 391, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (383/383), done.
Writing objects: 88% (345/391), 2.87 MiB | 1.28 MiB/s
It always hangs at 88% and doesn't progress from there.
Any idea what's going on and how to fix this?

It seems this is a permissions issue (although it's not obvious). Adding write access for all users to the repository directory fixed it:
chmod -R a+rw /var/repo
I should probably get more granular and restrict it to the correct user and group (as sketched below), but for now this will suffice.
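A tighter setup is hypothetical on my part, but assuming the Phabricator daemons and the SSH push user run as (or share a group with) a user named phd, it would look roughly like this:
# hypothetical user/group names - substitute whatever account actually runs the daemons
sudo chown -R phd:phd /var/repo
# owner and group get read/write (and traverse on directories), everyone else gets nothing
sudo chmod -R u+rwX,g+rwX,o-rwx /var/repo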

Related

rsync with --fake-super not preserving owner after restore - Monterey/Synology DS920+/rsync 3

I'm working through debugging backup/restore in a backup script on:
Mac Studio M1 / macOS Monterey <-> Synology DS920+
On the Mac, I've installed Homebrew rsync 3.2.4.
On the Synology, I'm running what it shipped with: rsync 3.1.2.
For debugging, I used /Volumes/Recovery, which has files with
owner set to root and group set to wheel.
src="/Volumes/Recovery/"
dest="$userID#$remoteIP::NetBackup/MacStudio1/Volumes/Recovery/"
restore="/tmp/RestoreBackup/"
userID has admin privileges on the NAS.
rsync services are enabled on the NAS.
User directories are enabled on the NAS.
Backup:
rsync -ahX --delete -M--fake-super $src $dest
Restore:
rsync -ahX --delete -M--fake-super $dest $restore
It all seems to work without error. Files are restored as expected, except that the restored files have the owner set to my ID.
For example, ls -laR shows (abridged):
/Volumes/Recovery/E4A28DF2-7007-4ED8-A427-320FCCA8AC36/usr/standalone/firmware:
-rw-rw-rw- 1 root wheel 1821914899 Jun 4 11:42 arm64eBaseSystem.dmg
/tmp/RestoreBackup//E4A28DF2-7007-4ED8-A427-320FCCA8AC36/usr/standalone/firmware:
-rw-rw-rw- 1 myID wheel 1821914899 Jun 4 11:42 arm64eBaseSystem.dmg
I've looked at the rsync man (more than once) and I see words like "To affect the remote side of a remote-shell connection...".
However, I'm not sure how to apply that to a backup or a restore.
Do I want to affect the remote side on the backup?
Do I want to affect the remote side on the restore?
Any guidance on what I should set the options to?
It looks like I'm not getting any responses, so I'll wrap this up with my observations.
In testing I've done on a user directory (with test data files), rsync is correctly saving and restoring files with extended attributes (I verified they got set and that they matched on restore), so I think the overall switches on the rsync commands are correct.
Backing up and restoring the "Recovery" volume has the following issues:
1. All regular files have the wrong owner. The groups look correct.
2. The one linked directory has the wrong owner and the wrong group.
I believe problem (1) is caused because I need to use sudo rsync on the restore. I'm guessing that the files that are backed up have the correct owner/group in metadata, but the restore doesn't have the authority to set the owner to root. I tried using sudo briefly and it died with some errors I didn't quite understand; I believe I need to add something to the /etc/sudoers file. Problem (2) may partially go away if I fix (1), or it may need some additional rsync flags to do with linked files and directories.
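Based on that hypothesis (untested), the restore would keep -M--fake-super on the NAS side, which is sending, and run the local receiving side as root so it is allowed to set the owner back to root:
# sketch only: the local rsync runs as root so it can chown restored files to root:wheel,
# while the NAS side still reads ownership back out of its fake-super xattrs
sudo rsync -ahX --delete -M--fake-super "$dest" "$restore"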
Overall, my backup script is working, but I'm now starting to question whether I know enough about what to back up on macOS. A rather lengthy article by the CCC folks seems to explain this, but it leaves me feeling I don't know enough about macOS data structures, and it seems some of this may change over time as new versions are released. I had started with the idea of just backing up everything under /* (Macintosh HD), and perhaps this would work, though there are at least some things that need to be excluded (like /Volumes/* and perhaps /tmp/*). I also noticed that there is a /System tree that doesn't show up with ls /* which the CCC folks say to leave alone. So I don't exactly have a good feeling that I understand what I need to know.
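If I do go back to backing up /* directly, I imagine the command would need an exclude list along these lines (only the two exclusions I mentioned above; I haven't worked out the full list):
# sketch: back up the root volume, skipping mounted volumes and temp files
rsync -ahX --delete -M--fake-super --exclude='/Volumes/*' --exclude='/tmp/*' / "$dest"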
So for the moment I'm going to sideline this effort. I've got Time Machine running to my NAS, and I need to get the NAS backed up to a cloud first. My fallback positions are either (1) to depend on Time Machine only, (2) to buy and use CCC as a secondary backup, or (3) to create a backup of just my user directories as a secondary backup, which would require reinstalling any 3rd-party software in the event that I can't recover with Time Machine.

Repository packages-microsoft-com-prod is listed more than once in the configuration

Whenever I run any yum command, I get the error below:
Repository packages-microsoft-com-prod is listed more than once in the configuration
Any ideas on how to resolve the issue?
Repository packages-microsoft-com-prod is listed more than once in the configuration
HDP-2.6 | 2.9 kB 00:00:00
HDP-UTILS-1.1.0.21 | 2.9 kB 00:00:00
Updates-ambari-2.5.2.0 | 2.9 kB 00:00:00
https://packages.microsoft.com/rhel/7/mssql-server/repodata/repomd.xml: [Errno 14] curl#60 - "Peer's certificate issuer has been marked as not trusted by the user."
Trying other mirror.
In the folder /etc/yum.repos.d you have two or more .repo files with the same repository name ([packages-microsoft-com-prod]). I had the same problem, and I deleted one of the files, which was irrelevant to my OS. Later I understood that was not a good idea.
Find the files that are related to one another, in this case the files that relate to Microsoft, and open them in your favorite editor. The files are probably named differently, but the contents will be the same:
[packages-microsoft-com-prod]
name=packages-microsoft-com-prod
baseurl=https://packages.microsoft.com/rhel/7/prod/
enabled=1
gpgcheck=1
gpgkey=https://packages.microsoft.com/keys/microsoft.asc
If this is the case, then it is safe to delete one of them. But if they are different, I wouldn't touch them. You could probably rename one of them, but I'm not sure if that would break something important.
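A quick way to find which files declare the same repository id (using the id from the error message above) is to grep the section headers, e.g.:
# list every .repo file that defines [packages-microsoft-com-prod]
grep -l '^\[packages-microsoft-com-prod\]' /etc/yum.repos.d/*.repo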
First of all, the specific repository is not the problem; with third-party repos in general the message looks like this:
Repository XXX is listed more than once in the configuration
If that happens, the problem is solved by deleting the duplicate files for that repository in /etc/yum.repos.d/.
You can delete such files by typing:
sudo rm -rf XXX.repo
in the terminal, in that location. You also need to run:
yum clean all
or
dnf clean all
depending on which package manager is triggering the problem (a worked example follows below).
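Putting the two steps together (the duplicate filename here is illustrative; use whichever file you identified as redundant):
sudo rm /etc/yum.repos.d/duplicate-microsoft.repo   # the redundant copy you found
yum clean all
yum repolist | grep packages-microsoft-com-prod     # should now appear only once, with no warning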
Also, the repository or the app itself is not usually the problem; according to Red Hat's official site, this is due to a bug.
Finally, if you want to uninstall the app, you need to run the same command to delete its files in these locations:
/var/cache/dnf
/var/cache/log
Example:
sudo rm -rf XXX
rm is the command to remove files from the terminal, while rmdir is for deleting directories.
The -r flag recurses into directories and -f forces the deletion, even when the files are protected or the directories are filled with protected and/or unprotected files, so be careful with this command.
NOTE: Just for THIRD-PARTY repos.

Do not re-create repositories after updating

We manage systems and thus manage repositories. We remove the repositories we do not use, which are defined in files under /etc/yum.repos.d/.
Our problem is: after an update/upgrade of the system, CentOS automatically re-creates the repositories which were removed, which is an issue for us.
Question: Is there a command/method to ensure repositories are not re-created after an upgrade on CentOS 7 systems?
Those repositories are created by something; the OS doesn't recreate them on its own.
Either they are restored by an update of an RPM package such as centos-release, or by an automatic script you set up/run (Ansible?).
I'm not aware of an automatic method to delete a repo; I see a couple of solutions:
1. Exclude centos-release from the upgradable packages by adding
exclude=centos-release
to /etc/yum.conf (space-separated list), but this could break some updates;
2. Disable them with:
# yum-config-manager --disable base,updates,extras,centosplus,epel,whatever
(this can be easily scripted and put in a cron job or in your Ansible playbook);
3. Write a small script and place it in /etc/cron.hourly/, e.g. /etc/cron.hourly/wipe_repos, containing:
#!/usr/bin/env bash
rm -f /etc/yum.repos.d/CentOS-Base.repo
or, better:
#!/usr/bin/env bash
yum-config-manager --disable base,updates,extras,centosplus,epel,whatever
I would suggest using solution 2, since the repo files aren't overwritten by updates; the new versions are placed alongside the old ones in .rpmnew files.
This is guaranteed by the flag %config(noreplace) in the source rpm of centos-release, applied to all files in /etc/yum.repos.d/.
You can check this by downloading the .src.rpm and opening the centos-release.spec file.
$ mkdir test && cd test
$ yumdownloader --source centos-release
$ rpm2cpio centos-release*.rpm | cpio -idmv
$ cat centos-release.spec
(or search for the package online and download the src.rpm)
Then scroll down to section %files and you'll notice:
%config(noreplace) /etc/yum.repos.d/*
%config(noreplace) means that those files are not replaced with new files from an update; instead, the files from the new rpm are saved with the extension .rpmnew, so you'll have:
$ ls /etc/yum.repos.d/
CentOS-Base.repo <-- here you set them as disabled
CentOS-Base.repo.rpmnew <-- this comes from the update, but yum will ignore it
For reference, see http://people.ds.cam.ac.uk/jw35/docs/rpm_config.html or https://serverfault.com/a/48819/.
As I already said in the comments below the question, the reason those repositories keep reappearing after an update is quite simple: the files defining the system repositories are owned by the package centos-release, and whenever this package gets updated or reinstalled, the repositories reappear.
The package centos-release is a very basic package: it provides the capabilities redhat-release and system-release, and a number of other basic packages depend on it.
[local ~]$ rpm -q --provides centos-release
centos-release = 7-6.1810.2.el7.centos
centos-release(upstream) = 7.6
centos-release(x86-64) = 7-6.1810.2.el7.centos
config(centos-release) = 7-6.1810.2.el7.centos
redhat-release = 7.6-1
system-release = 7.6-1
system-release(releasever) = 7
[local ~]$ rpm -q --whatrequires system-release
setup-2.8.71-10.el7.noarch
grubby-8.28-25.el7.x86_64
[local ~]$ rpm -q --whatrequires redhat-release
initscripts-9.49.46-1.el7.x86_64
systemd-219-62.el7_6.5.x86_64
There is no easy way out of this.
But one possible solution might be to create a customized RPM package to replace centos-release. It should contain the pointers to your own repositories and of course needs to provide the capabilities redhat-release and system-release.
Please be aware that I have no idea if this is actually going to work, it's just something that came to my mind while thinking about the problem. It might save you the work of creating a full custom distribution derived from CentOS, which is the only other way I can think of to achieve what you seem to want.
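For illustration only, the key parts of such a spec file might look roughly like this; the package name and versions are made up, and I haven't verified that this is enough for yum to accept it as a drop-in replacement:
Name:           mycompany-release
Version:        7.6
Release:        1
Summary:        Custom release package carrying our own repo definitions
License:        GPLv2
Provides:       redhat-release = %{version}-%{release}
Provides:       system-release = %{version}-%{release}
Provides:       system-release(releasever) = 7
Obsoletes:      centos-release

%description
Replaces centos-release and ships only our internal .repo files.

%files
%config(noreplace) /etc/yum.repos.d/*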
My solution doesn't exactly solve the problem as you stated it ("how do I delete default repository config files forever?"), but it does stabilize your config changes. If you zero out the files instead of deleting them, then system updates will leave your 'edited' versions unchanged.
I do feel that this is a 'hack', leaving named ghost files, but it's one I can live with. No need to disable or customize redhat-release or system-release.
My problem was slightly different from yours: I maintained different configs for the same repositories for different situations, indicated by filename. On updates, the original files would return, leaving me with redundant and incorrect definitions. Now they don't.
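In practice, zeroing out just means truncating the stock file to zero bytes instead of removing it, for example (using CentOS-Base.repo as the target):
# keep the filename but drop the contents, so a package update leaves it alone
sudo truncate -s 0 /etc/yum.repos.d/CentOS-Base.repo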

Update deployed meteor app while running with minimum downtime - best practice

I run my meteor app on EC2 like this: node main.js (in a tmux session).
Here are the steps I use to update my meteor app:
1) meteor bundle app.tgz
2) scp app.tgz EC2-server:/path
3) ssh EC2-server and attach to tmux
4) kill the current meteor node process with Ctrl-C
5) extract app.tgz
6) run "node main.js" of the extracted app.tgz
Is this the standard practice?
I realize forever can be used too, but do I still have to kill the old node process and start a new one every time I update my app? Can the upgrade be more seamless, without killing the Node process?
You can't do this without killing the node process, but I haven't found that it really matters. What's actually more annoying is the browser refresh on the client, but there isn't much you can do about that.
First, let's assume the application is already running. We start our app via forever with a script like the one in my answer here. I'd show you my whole upgrade script but it contains all kinds of Edthena-specific stuff, so I'll outline the steps we take below:
Build a new bundle. We do this on the server itself, which avoids any missing fibers issues. The bundle file is written to /home/ubuntu/apps/edthena/edthena.tar.gz.
We cd into the /home/ubuntu/apps/edthena directory and rm -rf bundle. That will blow away the files used by the currently running process. Because the server is still running in memory, it will keep executing. However, this step is problematic if your app regularly does uncached disk operations, like reading from the private directory after startup. We don't, and all of the static assets are served by nginx, so I feel safe in doing this. Alternatively, you can move the old bundle directory to something like bundle.old and it should work.
tar xzf edthena.tar.gz
cd bundle/programs/server && npm install
forever restart /home/ubuntu/apps/edthena/bundle/main.js
There really isn't any downtime with this approach - it just restarts the app in the same way it would if the server threw an exception. Forever also keeps the environment from your original script, so you don't need to specify your environment variables again.
Finally, you can have a look at the log files in your ~/.forever directory. The exact path can be found via forever list.
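Put together, the steps above amount to something like this (same Edthena paths as in the list; I haven't run it as a single script):
# condensed sketch of the restart-based upgrade
cd /home/ubuntu/apps/edthena
rm -rf bundle                                   # or: mv bundle bundle.old
tar xzf edthena.tar.gz
(cd bundle/programs/server && npm install)
forever restart /home/ubuntu/apps/edthena/bundle/main.js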
David's method is better than this one, because there's less downtime when using forever restart compared to forever stop; ...; forever start.
Here's the deploy script spelled out, using the latter technique. In ~/MyApp, I run this bash script:
echo "Meteor bundling..."
meteor bundle myapp.tgz
mkdir ~/myapp.prod 2> /dev/null
cd ~/myapp.prod
forever stop myapp.js
rm -rf bundle
echo "Unpacking bundle"
tar xzf ~/MyApp/myapp.tgz
mv bundle/main.js bundle/myapp.js
# `pwd` is there because ./myapp.log would actually create the log in ~/.forever/myapp.log
PORT=3030 ROOT_URL=http://myapp.example.com MONGO_URL=mongodb://localhost:27017/myapp forever -a -l `pwd`/myapp.log start myapp.js
You're asking about best practices.
I'd recommend mup and cluster.
They allow for horizontal scaling and a bunch of other nice features, while using simple commands and configuration.

Minimum permissions needed to allow PHP PDO / Apache access to SQLite database in /var/www

SOLVED: See my answer below
I'm experiencing the same issue that Austin Hyde experienced in this question. I have an SQLite database that I can read, but not write.
Specifically, I'm getting "General error: 8 attempt to write a readonly database" in /var/www/html/green/database.php on line 34.
My issue diverges from his as follows:
- As recommended in the answers to his question, I've made the database world-writable, as well as the folder in which the database resides, with no luck. I've also set the owner of the database to "apache" as well as "nobody", without success.
- I've set the entire path to 777, beginning at /var (which I hate to do); no joy.
- I've messed about with SELinux (I'm running Fedora 12) to let httpd do whatever it wants; nothing.
I feel that I'm almost certainly missing something simple here, but I'm out of ideas.
What permissions need to be on an SQLite file in order to allow PHP / Apache to read and write to it via PDO?
Edit: Another related question, adding weight to the hypothesis that I've got a write permissions conflict somewhere.
For those who cannot afford to disable SELinux entirely, here is the way to go.
To make a directory (say rw_data) and all its contents writable by any process running in the httpd_t domain type (i.e. web-server processes), use the following command as root:
chcon -R -t httpd_sys_content_rw_t "/var/www/html/mysite/rw_data/"
You can check the SELinux context labels with the following command:
ls -Z /var/www/html/mysite | grep httpd_sys_content_rw_t
This works on Fedora 16 and should work on other SELinux-enabled distros too.
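One caveat: chcon changes can be lost if the filesystem is ever relabeled. To make the rw label persistent, you can record it with semanage and then apply it with restorecon (semanage ships in the policycoreutils-python package on these distros, if I remember correctly):
# record the label rule, then relabel the directory to match
semanage fcontext -a -t httpd_sys_content_rw_t "/var/www/html/mysite/rw_data(/.*)?"
restorecon -Rv /var/www/html/mysite/rw_data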
