Git merge results in 400 rename/rename conflicts, how do I resolve them quickly? - wordpress

So, I have a number of WordPress sites managed with a Git repository, all of which are branches off of a central upstream Git repository. I recently applied a bunch of updates to the parent repo, but one of the child website repos had a plugin updated to a different version and now throws up about 400 rename/rename conflicts. All of these conflicts are in an upstream plugin directory that would be safe to just resolve in favor of the upstream branch.
I want to do the following:
Ensure the upstream version of the files 'wins' the merge conflict (e.g. what the --theirs flag does with checkout)
Produce a mergeable history (If it's not safe for a coworker to type "git pull origin master" with an old repo, it's not an option. I'm religiously opposed to rebasing.)
Not restructure my Git repository (My hosting provider, Pantheon, will not install Composer dependencies at deploy time. Upstream plugins have to be part of the repo.)
Not get a repetitive stress injury (Has to be a reasonably small number of commands because I have to resolve these kinds of messes once a month or so.)
If I just type "git checkout wp-content/plugins/** --theirs", I get hit in the face with about 400 errors, and Git refuses to check out the files. They look like this:
....400 or so errors omitted...
error: path 'wp-content/plugins/wordpress-seo/js/dist/wp-seo-quick-edit-handler-710.min.js' does not have their version
error: path 'wp-content/plugins/wordpress-seo/js/dist/wp-seo-quick-edit-handler-720.min.js' does not have their version
error: path 'wp-content/plugins/wordpress-seo/js/dist/wp-seo-recalculate-710.min.js' does not have their version
error: path 'wp-content/plugins/wordpress-seo/js/dist/wp-seo-recalculate-720.min.js' does not have their version
I categorically refuse to type 400 git rm/git add commands with each individual path included. git checkout --force is not an option, as --theirs and --force are mutually incompatible (for some reason). My current solution is to open Git GUI and manually right-click -> Use Remote Version and then click Yes... 400 times. I don't have to type the path at least but this is still time consuming.
How do I efficiently resolve a large number of rename/rename conflicts in favor of the remote repository?

Do you want to just resolve the conflicted files in favour of the remote, or just take a whole tree as it is in the remote?
For the latter, you could do this:
Just accept the files as-is with conflicts. git add . or similar
Commit the merge.
rm -Rf path/in/question
git checkout origin/branch -- path/in/question
git commit --amend -a
For the former, it's probably something pretty similar:
Just accept the files as-is with conflicts. git add . or similar
Commit the merge.
Find files with conflicts. e.g. grep -r -l '>>>>' path/in/question > /tmp/conflicts.txt
Delete the files with conflicts, check out the desired versions, and amend the commit in a similar manner to the above.
(If there are files/paths with spaces in them, small adjustments to the above commands may be necessary. I've given the simpler versions for clarity.)
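For the conflicted-files-only case, a rough consolidated sketch might look like this (assuming no spaces in paths and that the remote branch is origin/master; adjust both to your situation):
git add .                                                   # accept the conflicted files as-is
git commit                                                  # record the merge
grep -r -l '>>>>' path/in/question > /tmp/conflicts.txt     # list the files with conflict markers
xargs rm -f < /tmp/conflicts.txt                            # delete the conflicted copies
xargs git checkout origin/master -- < /tmp/conflicts.txt    # take the remote versions instead
git commit --amend -a                                       # fold the fixes into the merge commit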

Related

Can I point pre-commit mypy hook to use a requirements.txt for the additional_dependencies?

I would like to use exactly the same version of flake8 in requirements.txt and in .pre-commit-config.yaml.
To avoid redundancy I would like to keep the version number of flake8 exactly once in my repo.
Can pre-commit.com read the version number of flake8 from requirements.txt?
it cannot
pre-commit intentionally does not read from the repository under test as this makes caching intractable
you can read more in this issue and the many duplicate issues linked there
for me, I no longer include flake8, etc. in my requirements files as pre-commit replaces the need to install linters / code formatters elsewhere
disclaimer: I created pre-commit
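For illustration, a minimal .pre-commit-config.yaml sketch that keeps the flake8 pin in one place (the rev below is a placeholder; pick whichever version you standardize on):
repos:
  - repo: https://github.com/PyCQA/flake8
    rev: 6.1.0   # placeholder version; this becomes the single source of truth
    hooks:
      - id: flake8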

Meson / Ninja build system - How to run custom script at Uninstall?

Meson/Ninja provide an easy method to run a script at install time.
For example, this line will tell Meson to run the glib-compile-schemas command to compile the GSettings schemas on Linux (system configuration options).
meson.add_install_script('glib-compile-schemas', schemas_dir)
(this command will be automatically run when the user executes ninja install)
How can I tell Meson to run a custom command at uninstall?
In this specific case I would like to delete (or at least reset to default) the key-value pairs in GSettings. To reset them, I have found that the command is gsettings reset-recursively <path> (successfully tested in terminal).
Adding a custom uninstall script is still being discussed; it was proposed quite some time ago but has not yet been implemented. It looks like this task is typically left to the package manager (and therefore to the corresponding packaging scripts).
But I agree, there is some illogical asymmetry in the case of the meson install command. As a workaround, you can create your own target:
run_target('my-uninstall', command : ['scripts/uninstall.sh'])
The drawbacks, of course, are that it has to be invoked explicitly, it cannot override, append to or rename the internal uninstall target, and the script must have executable permissions.
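A minimal sketch of what scripts/uninstall.sh could contain in this case (the schema ID is a placeholder, not taken from the question):
#!/usr/bin/env bash
# Hypothetical uninstall helper, run with: ninja -C builddir my-uninstall
set -e
# Reset the application's GSettings keys to their defaults, as described above.
gsettings reset-recursively org.example.MyApp
# The built-in uninstall target still has to be run separately to remove files:
# ninja -C builddir uninstall
Remember to give the script executable permissions (chmod +x scripts/uninstall.sh).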
The internal, reserved uninstall target, however, does revert all explicit install operations:
Meson allows you to uninstall an install step by invoking the uninstall target. This will remove all files installed as part of install. Note that this does not restore the original files. This also does not undo changes done by custom install scripts (because they can do arbitrary install operations).

Repository packages-microsoft-com-prod is listed more than once in the configuration

Whenever I run any yum command I get the error below:
Repository packages-microsoft-com-prod is listed more than once in the configuration
Any ideas on how to resolve the issue?
Repository packages-microsoft-com-prod is listed more than once in the configuration
HDP-2.6 | 2.9 kB 00:00:00
HDP-UTILS-1.1.0.21 | 2.9 kB 00:00:00
Updates-ambari-2.5.2.0 | 2.9 kB 00:00:00
https://packages.microsoft.com/rhel/7/mssql-server/repodata/repomd.xml: [Errno 14] curl#60 - "Peer's certificate issuer has been marked as not trusted by the user."
Trying other mirror.
In the folder /etc/yum.repos.d you have two or more .repo files with the same repository name [packages-microsoft-com-prod]. I had the same problem, deleted one of the files that seemed irrelevant to my OS, and then realized that was not a good idea.
Find the files that relate to the same repository, in this case the ones that relate to Microsoft, and open them in your favorite editor. The files are probably named differently, but the contents will be the same.
[packages-microsoft-com-prod]
name=packages-microsoft-com-prod
baseurl=https://packages.microsoft.com/rhel/7/prod/
enabled=1
gpgcheck=1
gpgkey=https://packages.microsoft.com/keys/microsoft.asc
If this is the case, then it is safe to delete one of them. But if they are different, I wouldn't touch them. You could probably rename one of them, but I'm not sure if that would break something important.
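A quick way to see which files define the duplicated repository id, assuming the id from your error message (adjust as needed):
grep -l '^\[packages-microsoft-com-prod\]' /etc/yum.repos.d/*.repo   # lists the offending .repo files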
First of all, the specific repository is not the problem; with any third-party repo the general message looks like this:
Repository XXX is listed more than once in the configuration
If that happens, the problem is solved simply by deleting the files related to that repository in /etc/yum.repos.d/
You can delete such files by typing:
sudo rm -rf XXX.repo
in the terminal, from that directory. You also need to run:
yum clean all
or
dnf clean all
depending on which command is triggering the problem.
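Put together, a hypothetical cleanup for a duplicated third-party repo id XXX might look like this:
sudo rm -f /etc/yum.repos.d/XXX.repo   # remove the duplicate definition file
sudo yum clean all                     # or: sudo dnf clean all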
Also, the repository or the app itself is usually not the problem; according to Red Hat's official site, this is due to a bug.
Finally, if you want to uninstall the app, you need to run the same command to delete the files in these locations:
/var/cache/dnf
/var/cache/log
Example:
sudo rm -rf XXX
rm is the command to remove files from the terminal, while rmdir is for deleting (empty) directories.
By using -rf you force a recursive deletion, even when the chosen files are protected or the directories are filled with protected and/or unprotected files, so you should be careful with this command.
NOTE: Just for THIRD-PARTY repos.

Do not re-create repositories after updating

We manage systems and thus manage repositories. We remove repositories which we do not use, present in /etc/yum.repos.d/<file>
Our problem is: after an update/upgrade of the system, CentOS automatically re-creates the repositories which were removed, which is an issue for us.
Question: Is there a command / method to ensure repositories are not re-created after an upgrade on CentOS 7 systems?
Those repositories are created by something; the OS doesn't recreate them by itself.
Either they are restored by an update of an RPM package such as centos-release, or by an automatic script you set up/run (Ansible?).
I'm not aware of an automatic method to delete a repo; I see a couple of solutions:
Exclude centos-release from the upgradable packages, by adding
exclude=centos-release
to /etc/yum.conf (space separated list), but this could break some updates;
Disable them with:
# yum-config-manager --disable base,updates,extras,centosplus,epel,whatever
(this can be easily scripted and put in a cron or in your ansible playbook)
Write a small script and place it in /etc/cron.hourly/, e.g. /etc/cron.hourly/wipe_repos, containing:
#!/usr/bin/env bash
rm -f /etc/yum.repos.d/CentOS-Base.repo
or, better:
#!/usr/bin/env bash
yum-config-manager --disable base,updates,extras,centosplus,epel,whatever
I would suggest using solution 2, since the repo files aren't overwritten by updates; the new versions are placed alongside the old ones in .rpmnew files.
This is guaranteed by the flag %config(noreplace) in the source rpm of centos-release, applied to all files in /etc/yum.repos.d/.
You can check this by downloading the .src.rpm and opening the centos-release.spec file.
$ mkdir test && cd test
$ yumdownloader --source centos-release
$ rpm2cpio centos-release*.rpm | cpio -idmv
$ cat centos-release.spec
(or search for the package online and download the src.rpm)
Then scroll down to section %files and you'll notice:
%config(noreplace) /etc/yum.repos.d/*
%config(noreplace) means that all those files are not replaced with new files from an update, but the files from the new rpm are saved with the extension .rpmnew, so you'll have:
$ ls /etc/yum.repos.d/
CentOS-Base.repo <-- here you set them as disabled
CentOS-Base.repo.rpmnew <-- this comes from the update, but yum will ignore it
For reference, see http://people.ds.cam.ac.uk/jw35/docs/rpm_config.html or https://serverfault.com/a/48819/.
As I already said in the comments below the question, the reason why those repositories keep reappearing after an update is quite simple: the files defining the system repositories are owned by the package centos-release and whenever this package gets updated or reinstalled, the repositories reappear.
The package centos-release is a very basic package: it provides the capabilities redhat-release and system-release, and a number of other basic packages depend on it.
[local ~]$ rpm -q --provides centos-release
centos-release = 7-6.1810.2.el7.centos
centos-release(upstream) = 7.6
centos-release(x86-64) = 7-6.1810.2.el7.centos
config(centos-release) = 7-6.1810.2.el7.centos
redhat-release = 7.6-1
system-release = 7.6-1
system-release(releasever) = 7
[local ~]$ rpm -q --whatrequires system-release
setup-2.8.71-10.el7.noarch
grubby-8.28-25.el7.x86_64
[local ~]$ rpm -q --whatrequires redhat-release
initscripts-9.49.46-1.el7.x86_64
systemd-219-62.el7_6.5.x86_64
There is no easy way out of this.
But one possible solution might be to create a customized RPM package to replace centos-release. It should contain the pointers to your own repositories and of course needs to provide the capabilities redhat-release and system-release.
Please be aware that I have no idea if this is actually going to work, it's just something that came to my mind while thinking about the problem. It might save you the work of creating a full custom distribution derived from CentOS, which is the only other way I can think of to achieve what you seem to want.
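For what it's worth, a very rough, untested sketch of the key part such a replacement package's .spec would need (names and versions are placeholders; sources, %install and the %files list with your own .repo files are omitted):
Name:       mycompany-release
Version:    7.6
Release:    1
Summary:    Custom release package carrying only our own repository definitions
License:    GPLv2
Provides:   redhat-release = %{version}-%{release}
Provides:   system-release = %{version}-%{release}
Provides:   system-release(releasever) = 7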
My solution doesn't exactly solve the problem you request ("how do I delete default repository config files forever?"), but it does stabilize your config changes. If you zero out the files instead of deleting them, then system updates will leave your 'edited' versions unchanged.
I do feel that this is a 'hack', leaving named ghost files, but it's one I can live with. No need to disable or customize redhat-release or system-release.
My problem was slightly different from yours: I maintained different configs for the same repositories for different situations, indicated by filename. On updates the original files would return, leaving me with redundant and incorrect definitions. Now they don't.
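For example, something along these lines (CentOS-Base.repo is just the stock file name; repeat for whichever files you want to neutralize):
: > /etc/yum.repos.d/CentOS-Base.repo   # truncate to zero length instead of deleting
# Later centos-release updates keep this empty file and drop their copy alongside
# it as CentOS-Base.repo.rpmnew, which yum ignores.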

Preserve files/directories for rpm upgrade in .spec file(rpmbuild)

I wrote a .spec file on RHEL and I am building an RPM using rpmbuild. I need ideas on how to handle the situation below.
My RPM creates an empty log directory inside the installation folder when it installs for the first time, like below:
/opt/MyInstallation-1.0.0-1/          some executables
/opt/MyInstallation-1.0.0-1/lib/      shared objects (.so files)
/opt/MyInstallation-1.0.0-1/config/   XML and custom configuration files (.xml, etc.)
/opt/MyInstallation-1.0.0-1/log/      this is where the application writes logs
When my RPM upgrades MyInstallation-1.0.0-1 to MyInstallation-1.0.0-2, for example, everything works as I wanted.
But my question is: how do I preserve the log files written in MyInstallation-1.0.0-1, or, more precisely, copy the log directory over to MyInstallation-1.0.0-2?
I believe if you tag the directory as %config, it is expected that the user will have files in there, so it will leave it alone.
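For illustration only, a hedged %files fragment in that spirit, reusing the %{_prefix}/%{name} layout from the answer below; note it marks the log directory with %dir (so rpm owns the directory but leaves the application's files in it alone) and the user-editable config files with %config(noreplace), which is a slight variation on the plain %config suggestion above:
%files
%dir %{_prefix}/%{name}/log
%config(noreplace) %{_prefix}/%{name}/config/*.xml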
I found a solution, or a workaround, for this by trial and error :)
I am using rpmbuild version 4.8.0 on RHEL 6.3 x86_64. I believe it will work on other distros as well.
If you install under one name only, like "MyInstallation", rather than "MyInstallation-<version>-<RPM build number>", and create the log directory as a standard directory (no additional flags on it) [see the original question for the scenario], then whenever you upgrade, the log directory is normally left untouched and RPM leaves its contents as they are. All you have to do is ensure that you keep the line below in the %install section.
%install
install --directory $RPM_BUILD_ROOT%{_prefix}/%{name}/log
Here, %{_prefix} and %{name} are macros; they have nothing to do with the underlying concept.
Regarding config files, the following is a very precise table that will help you guard your config files. Again, these rules can't be applied to the logs our applications create.
http://www-uxsup.csx.cam.ac.uk/~jw35/docs/rpm_config.html
Thanks & Regards.
