I use some Julia packages that require a specific version (namely ≥ v0.3 and ≤ v0.4). I could not find a way to compile Julia from source (I'm using Linux) at a specific version. Is there a way to do this that I am not aware of? Neither the GitHub Julia README nor an extensive internet search revealed any insights. I'm sure there is an easy way that I'm unaware of.
My usual procedure is
git clone git://github.com/JuliaLang/julia.git && cd julia && printf "prefix=/usr/local" > Make.user && make && make install
Is it sufficient to edit the entry in the VERSION file in the Julia source?
You can git checkout the branch or tag that you'd like to follow. For right now, you can follow the release-0.3 branch until the "chaos period" ends.
So, you can simply modify your command sequence to be:
git clone git://github.com/JuliaLang/julia.git && cd julia && git checkout release-0.3 && …
You could similarly grab the release-0.2 or release-0.1 branches if you'd like. Now, this doesn't actually follow an exact version; it follows all development on the 0.x series. For example, during the 0.3 development period, release-0.2 was occasionally updated with back-ported fixes and after a time 0.2.1 was tagged from this branch. By following the release-x branch, git pull && make clean && make will grab and compile recent updates, even if they haven't been tagged into a point-release yet. You're still living on the edge and may run into occasional hiccups (although that is much less likely than on master).
If you'd like to get an exact version, you can checkout a tag instead of a branch:
git clone git://github.com/JuliaLang/julia.git && cd julia && git checkout v0.3.0-rc4 && …
This will then be a stable version and will not change with git pull. You will have to manually checkout the next tagged version if you want, for example, to update to the final release of 0.3.0.
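For example, to move to a newer tag later on (a minimal sketch; the exact tag names depend on what has been published):
git fetch --tags            # grab any newly published tags
git tag --list 'v0.3*'      # see which 0.3 tags are available
git checkout v0.3.0         # switch once the final release is tagged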
(Changing the VERSION file will simply make Julia lie to you; you could enter 1.0 into VERSION and Julia would happily report that it's living in the future.)
I'm using Ubuntu on WSL2, having installed powerlevel10k on oh-my-zsh.
What is that weird question-mark-and-number next to the git branch name? What does it mean?
You have 1 untracked file in your repo. Do git status --short and you'll see the same symbol.
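For example (the file name here is made up):
$ git status --short
?? notes.txt                # '??' marks an untracked file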
Most of the other symbols used by P10k, however, are unfortunately different from the ones used by git status (or other prompts; see below). See the P10k documentation for an overview. As you can see, it uses a lot of Unicode symbols, whereas git status restricts itself to ASCII symbols.
The majority of other Git prompt themes out there, however, use yet another, entirely different set of symbols, based on those defined in git-prompt.sh, which is distributed with Git itself. These, too, are restricted to ASCII symbols. Here's an overview of them:
Symbol   Meaning
------   -------
*        unstaged
+        staged
$        stash
%        untracked files
<        behind
>        ahead
<>       diverged
=        no difference
|        operation in progress
?        sparse checkout
In any case, when in doubt, check your prompt theme's documentation.
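If you end up with git-prompt.sh itself, note that most of its symbols are opt-in. A minimal bash sketch (the source path varies by distribution):
source /usr/share/git/completion/git-prompt.sh   # path varies by distro
GIT_PS1_SHOWDIRTYSTATE=1       # enables * (unstaged) and + (staged)
GIT_PS1_SHOWSTASHSTATE=1       # enables $ (stash)
GIT_PS1_SHOWUNTRACKEDFILES=1   # enables % (untracked files)
GIT_PS1_SHOWUPSTREAM=auto      # enables <, >, <>, = (upstream state)
PS1='\w$(__git_ps1 " (%s)")\$ '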
We manage systems and thus manage their repositories. We remove the repositories we do not use, defined in /etc/yum.repos.d/<file>
Our problem: after an update/upgrade of the system, CentOS automatically re-creates the repositories that were removed, which is an issue for us.
Question: Is there a command or method to ensure repositories are not re-created after an upgrade on CentOS 7 systems?
Those repositories are created by something; the OS doesn't recreate them on its own.
Either they are restored by an update of an RPM package such as centos-release, or by an automatic script you set up and run (Ansible?).
I'm not aware of an automatic method to delete a repo; I see a couple of solutions:
Exclude centos-release from the upgradable packages, by adding
exclude=centos-release
to /etc/yum.conf (space separated list), but this could break some updates;
Disable them with:
# yum-config-manager --disable base,updates,extras,centosplus,epel,whatever
(this can be easily scripted and put in a cron or in your ansible playbook)
Write a small script and place it in /etc/cron.hourly/, e.g. /etc/cron.hourly/wipe_repos, containing:
#!/usr/bin/env bash
rm -f /etc/yum.repos.d/CentOS-Base.repo
or, better:
#!/usr/bin/env bash
yum-config-manager --disable base,updates,extras,centosplus,epel,whatever
I would suggest using solution 2, since the repo files aren't overwritten by updates; the new versions are placed alongside the old ones as .rpmnew files.
This is guaranteed by the flag %config(noreplace) in the source rpm of centos-release, applied to all files in /etc/yum.repos.d/.
You can check this by downloading the .src.rpm and opening the centos-release.spec file.
$ mkdir test && cd test
$ yumdownloader --source centos-release
$ rpm2cpio centos-release*.rpm | cpio -idmv
$ cat centos-release.spec
(or search for the package online and download the src.rpm)
Then scroll down to section %files and you'll notice:
%config(noreplace) /etc/yum.repos.d/*
%config(noreplace) means that those files are not replaced with new files from an update; instead, the files from the new rpm are saved with the extension .rpmnew, so you'll have:
$ ls /etc/yum.repos.d/
CentOS-Base.repo <-- here you set them as disabled
CentOS-Base.repo.rpmnew <-- this comes from the update, but yum will ignore it
For reference, see http://people.ds.cam.ac.uk/jw35/docs/rpm_config.html or https://serverfault.com/a/48819/.
As I already said in the comments below the question, the reason why those repositories keep reappearing after an update is quite simple: the files defining the system repositories are owned by the package centos-release and whenever this package gets updated or reinstalled, the repositories reappear.
The package centos-release is a very basic package: it provides the capabilities redhat-release and system-release, and a number of other basic packages depend on it.
[local ~]$ rpm -q --provides centos-release
centos-release = 7-6.1810.2.el7.centos
centos-release(upstream) = 7.6
centos-release(x86-64) = 7-6.1810.2.el7.centos
config(centos-release) = 7-6.1810.2.el7.centos
redhat-release = 7.6-1
system-release = 7.6-1
system-release(releasever) = 7
[local ~]$ rpm -q --whatrequires system-release
setup-2.8.71-10.el7.noarch
grubby-8.28-25.el7.x86_64
[local ~]$ rpm -q --whatrequires redhat-release
initscripts-9.49.46-1.el7.x86_64
systemd-219-62.el7_6.5.x86_64
There is no easy way out of this.
But one possible solution might be to create a customized RPM package to replace centos-release. It should contain the pointers to your own repositories and of course needs to provide the capabilities redhat-release and system-release.
Please be aware that I have no idea if this is actually going to work, it's just something that came to my mind while thinking about the problem. It might save you the work of creating a full custom distribution derived from CentOS, which is the only other way I can think of to achieve what you seem to want.
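For the record, a rough sketch of what such a spec might declare (untested; all names here are placeholders, and a real package would also need to ship your own .repo files):
Name:      mycorp-release
Version:   7.6
Release:   1
Summary:   Custom release package pointing at internal repositories
License:   GPLv2
Provides:  redhat-release = %{version}-%{release}
Provides:  system-release = %{version}-%{release}
Obsoletes: centos-release

%description
Replaces centos-release and ships only our own repository definitions.

%files
%config(noreplace) /etc/yum.repos.d/*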
My solution doesn't exactly solve the problem as asked ("how do I delete default repository config files forever?"), but it does stabilize your config changes. If you zero out the files instead of deleting them, then system updates will leave your 'edited' versions unchanged.
I do feel that this is a 'hack', leaving named ghost files, but it's one I can live with. No need to disable or customize redhat-release or system-release.
My problem was slightly different than yours - I maintained different configs for the same repositories for different situations, indicated by filename. On updates the original files would return, leaving me with redundant and incorrect definitions. Now they don't.
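A minimal sketch of that zero-out step (adjust the glob to the files you want to neutralize):
# truncate the unwanted repo files instead of deleting them; rpm then sees
# a modified %config(noreplace) file and leaves it alone on updates
for f in /etc/yum.repos.d/CentOS-*.repo; do
    : > "$f"
done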
So, I have a number of Wordpress sites managed with a Git repository, all of which are branches off of a central upstream Git repository. I recently applied a bunch of updates to the parent repo, but one of the child website repos had a plugin updated to a different version and now throws up about 400 rename/rename conflicts. All of these conflicts are in an upstream plugin directory that would be safe to just resolve in favor of the upstream branch.
I want to do the following:
Ensure the upstream version of the files 'wins' the merge conflict (e.g. what the --theirs flag does with checkout)
Produce a mergeable history (If it's not safe for a coworker to type "git pull origin master" with an old repo, it's not an option. I'm religiously opposed to rebasing.)
Not restructure my Git repository (My hosting provider, Pantheon, will not install Composer dependencies at deploy time. Upstream plugins have to be part of the repo.)
Not get a repetitive stress injury (Has to be a reasonably small number of commands because I have to resolve these kinds of messes once a month or so.)
If I just type "git checkout wp-content/plugins/** --theirs", I get hit in the face with about 400 errors, and Git refuses to check out the files. They look like this:
....400 or so errors omitted...
error: path 'wp-content/plugins/wordpress-seo/js/dist/wp-seo-quick-edit-handler-710.min.js' does not have their version
error: path 'wp-content/plugins/wordpress-seo/js/dist/wp-seo-quick-edit-handler-720.min.js' does not have their version
error: path 'wp-content/plugins/wordpress-seo/js/dist/wp-seo-recalculate-710.min.js' does not have their version
error: path 'wp-content/plugins/wordpress-seo/js/dist/wp-seo-recalculate-720.min.js' does not have their version
I categorically refuse to type 400 git rm/git add commands with each individual path included. git checkout --force is not an option, as --theirs and --force are mutually incompatible (for some reason). My current solution is to open Git GUI and manually right-click -> Use Remote Version and then click Yes... 400 times. At least I don't have to type the paths, but this is still time-consuming.
How do I efficiently resolve a large number of rename/rename conflicts in favor of the remote repository?
Do you want to just resolve the conflicted files in favour of the remote, or just take a whole tree as it is in the remote?
For the latter, you could do this:
Just accept the files as-is with conflicts. git add . or similar
Commit the merge.
rm -Rf path/in/question
git checkout origin/branch -- path/in/question
git commit --amend -a
For the former, it's probably something pretty similar
Just accept the files as-is with conflicts. git add . or similar
Commit the merge.
Find files with conflicts. e.g. grep -r -l '>>>>' path/in/question > /tmp/conflicts.txt
Delete the files with conflicts, check out the desired versions, and amend the commit in a manner similar to the above (see the sketch below).
(If there are files/paths with spaces in them, small adjustments to the above commands may be necessary. I've given the simpler versions for clarity.)
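Putting the former recipe together, a rough, untested sketch (origin/branch and path/in/question are placeholders):
git add . && git commit -m "WIP merge"                   # commit the merge as-is
grep -r -l '>>>>' path/in/question > /tmp/conflicts.txt  # list conflicted files
while IFS= read -r f; do
    rm -f "$f"                            # drop the conflicted version
    git checkout origin/branch -- "$f"    # take the remote's version instead
done < /tmp/conflicts.txt
git commit --amend -a                     # fold the fixes into the merge commit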
I'd like to discover the Guile ecosystem. I looked at how to install a library and I didn't find a package manager like Python's pip. Does such a thing exist for Guile?
Looks like guildhall is the closest thing to pip out there. There has been some discussion on the Guile mailing lists recently around it. The posts by Wingo, Boubekki, Zaretskii, and a few others who are heavily involved with Guile development indicate a push towards making guildhall an upstream source for something called Guix that is a more general package manager intended to be independent of platform.
If you consult the Guix list of packages you will see guile there and a number of other guile related items (e.g. guile-json, guile-ncurses, etc..). I'd give that a shot. Otherwise you're on your own and you'll have to either fall back to the OS package manager or pull down the source yourself, build, and install.
Full disclosure: I haven't tried Guix myself but I've been meaning to. I'd be very interested to see how it turns out for you so if you do go this route it'd be awesome if you could provide an update with your Guix experience.
There's also been a recent call to update the libraries page, and from a quick inspection there have been a small number of updates that you may find useful.
@unclejamil This is an update of my attempt to install the Guix package manager.
Documentation
First of all, the links:
the official page: https://www.gnu.org/software/guix/
the download page: http://alpha.gnu.org/gnu/guix/ (guix-the-system and guix the package manager are listed together)
Installation (Debian)
Guix needs Guile-2.0-dev and more dependencies, which are present in Debian's repositories:
apt-get install guile-2.0-dev guile-2.0 libgcrypt20-dev libbz2-dev libsqlite3-dev autopoint
Download guix. See the above links to download a binary. Or get the sources:
git clone git://git.savannah.gnu.org/guix.git
The installation follows the classic ./configure && make && make install.
make will take several minutes and make install needs root access. If you install from source, make will build Guile objects for the 346 base packages (python, zsh, abiword, …), so it will take a long time (the package database is included in the guix program itself, so this step is required; you can still tweak the list in the Makefile, under MODULES).
Note: your current directory must not contain non-ASCII characters.
Note: see also this complete tutorial, with the focus on how to install guix locally, i.e. not to run make install: http://dustycloud.org/blog/guix-package-manager-without-make-install/
Usage
To install packages with guix, we need a running server.
The first method, for testing purposes, is simply to run the server in a terminal:
sudo guix-daemon
and the client in another one:
guix package -s "guile.*curses" # search with regexps
sudo guix package -i guile-ncurses # install. All start with the "package" command.
For the proper method, see https://www.gnu.org/software/guix/manual/html_node/Build-Environment-Setup.html#Build-Environment-Setup
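In short, the manual has you create a pool of unprivileged build users for the daemon to use; roughly (run as root, and check the linked page for the authoritative version):
groupadd --system guixbuild
for i in $(seq -w 1 10); do
    useradd -g guixbuild -G guixbuild -d /var/empty \
            -s "$(which nologin)" -c "Guix build user $i" \
            --system "guixbuilder$i"
done
guix-daemon --build-users-group=guixbuild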
To be continued.
This answer is a community wiki; feel free to complete it, thanks!
I am building Guix right now and encountered the same error about not finding guile-2.0. I managed to fix it by installing the development files for guile-2.0
sudo apt-get install guile-2.0-dev
I encountered some more errors later on, and each one just meant I needed to install the development files for the library in question.
Every time you compile something from source, you go through the same 3 steps:
$ ./configure
$ make
$ make install
I understand that it makes sense to divide the installation process into different steps, but I don't get why each and every coder on this planet has to write the same three commands again and again just to get one single job done. From my point of view, it would make total sense to have an ./install.sh script automatically delivered with the source code, containing the following text:
#!/bin/sh
./configure
make
make install
Why would people do the 3 steps separately?
Because each step does different things
Prepare (set up) the environment for building
./configure
This script has lots of options that you may want to change, like --prefix or --with-dir=/foo. That means every system can have a different configuration. ./configure also checks for missing libraries that need to be installed; anything that goes wrong here keeps your application from building. That's why distros have packages installed in different places: every distro thinks it's better to install certain libraries and files into certain directories. The instructions may say to just run ./configure, but in practice you should almost always pass it options.
For example, have a look at the Arch Linux packages site. Here you'll see that each package uses different configure parameters (assuming they use autotools as the build system).
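For example, a typical invocation might look like this (the exact flags depend on the package; --with-ssl here is purely illustrative):
./configure --prefix=/usr/local --sysconfdir=/etc --with-ssl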
Building the system
make
This is actually make all by default. And every package's make does different things: some just build, some run tests after building, some check sources out of external SCM repositories. Usually you don't have to give any parameters, but again, some packages execute it differently.
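A couple of common variations (what is available depends on the package's Makefile):
make -j"$(nproc)"   # parallel build using all CPU cores
make check          # run the test suite, if the package defines one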
Install to the system
make install
This installs the package in the place specified with configure. If you want, you can tell ./configure to point at your home directory. However, lots of configure options point to /usr or /usr/local. That means you actually have to use sudo make install, because only root can copy files to /usr and /usr/local.
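For example, an unprivileged install into your home directory might look like this:
./configure --prefix="$HOME/.local"
make
make install    # no sudo needed; everything lands under ~/.local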
Now you see that each step is a prerequisite for the next one. Each step is a preparation to make things work in a smooth flow. Distros use this same pattern to build packages (like RPM, deb, etc.).
Here you'll see that each step is actually a different state. That's why package managers have different wrappers. Below is an example of a wrapper that lets you build the whole package in one step. But remember that each packaging system has a different wrapper (these wrappers have names like spec, PKGBUILD, etc.):
def setup:
    ... # use ./configure if autotools is used

def build:
    ... # use make if autotools is used

def install:
    ... # use make install if autotools is used
Here one package can use autotools, meaning ./configure, make, and make install. But another might use SCons, a Python-based setup, or something else entirely.
As you can see, splitting up the stages makes things much easier for maintenance and deployment, especially for package maintainers and distros.
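As a concrete illustration, an Arch-style PKGBUILD wraps exactly these stages in shell functions (a simplified sketch; the variables are the ones makepkg provides):
build() {
    cd "$srcdir/$pkgname-$pkgver"
    ./configure --prefix=/usr
    make
}

package() {
    cd "$srcdir/$pkgname-$pkgver"
    make DESTDIR="$pkgdir" install
}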
First, it should be ./configure && make && make install, since each step depends on the success of the previous one. Part of the reason is evolution and part of the reason is convenience for the development workflow.
Originally, most Makefiles would only contain the commands to compile a program, and installation was left to the user. An extra rule allows make install to place the compiled output in a place that might be correct; there are still plenty of good reasons you might not want to do this, including not being the system administrator, or not wanting to install it at all. Moreover, if I am developing the software, I probably don't want to install it: I want to make some changes and test the version sitting in my directory. This becomes even more salient if I'm going to have multiple versions lying around.
./configure goes and detects what is available in the environment and/or desired by the user in order to determine how to build the software. This is not something that needs to change very often, and it can take some time to run. Again, if I am a developer, it's not worth the time to reconfigure constantly. More importantly, since make uses timestamps to rebuild modules, if I rerun configure there is a possibility that flags will change, and then some of the components in my build will be compiled with one set of flags and others with a different set, which might lead to different, incompatible behaviour. As long as I don't rerun configure, I know that my compilation environment remains the same even if I change my sources. If I do rerun configure, I should run make clean first to remove any built objects and ensure everything is built uniformly.
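So the cautious workflow after changing configure flags looks something like this (--enable-debug is just an example flag):
./configure --enable-debug   # flags changed, so...
make clean                   # ...drop objects built with the old flags
make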
The only cases where the three commands are run in a row are when users install the program or a package is built (e.g., Debian's debuild or Red Hat's rpmbuild). And that assumes the package can be given a plain configure, which is not usually the case for packaging, where at least --prefix=/usr is desired. Packagers are also likely to have to deal with fake roots when doing the make install part. Since there are lots of exceptions, making ./configure && make && make install the rule would be inconvenient for a lot of people who do it on a far more frequent basis!
configure may fail if it finds that dependencies are missing.
make runs a default target, the first one listed in the Makefile. Often this target is all, but not always. So you could only run make all install if you knew that all was the target.
So ...
#!/bin/sh
if ./configure "$@"; then
    if make; then
        make install
    fi
fi
or:
./configure "$@" && make && make install
The "$@" is included because one often has to provide options to configure; unlike an unquoted $*, it also preserves arguments that contain spaces.
But why not just let people do it themselves? Is this really such a big win?
Firstly ./configure doesn't always find everything that it needs, or in other cases it finds everything it requires but not everything it could use. In that case you would want to know about it (and your ./install.sh script would fail anyway!) The classic example of non-failure with unintended consequences, from my point of view, is compiling large applications like ffmpeg or mplayer. These will use libraries if they are available but will compile anyway if they aren't, leaving some options disabled. The problem is that you then discover later that it was compiled without support for some format or another, thus requiring you to go back and redo it.
Another thing ./configure does somewhat interactively is give you the option to customise where on the system the application will be installed. Different distributions/environments have different conventions, and you would probably want to stick to the convention on your system. Also, you might want to install it locally (solely for yourself). Traditionally, the ./configure and make steps aren't run as root, while make install (unless it is installed solely for yourself) has to be run as root.
Specific distributions often provide scripts that perform this ./install.sh functionality in a distribution-sensitive manner - for example, source RPMs + spec file + rpmbuild or slackbuilds.
(Footnote: that being said, I agree that ./configure; make; make install; can get extremely tedious.)