Downgrade 2sxc from 9.32 to 9.14

Is there a procedure to follow to downgrade 2sxc? We are currently on 9.32 and would like to downgrade to 9.14 due to a critical feature not working properly for us on 9.32.
Installing 9.14 would fail since we have 9.32 installed.
Thank you

Some versions of 2sxc allow downgrades, but I believe there were breaking changes in the data model between 9.14 and 9.32, so I think a downgrade would be very risky, if not impossible.
It's best to report the issue and, if it's critical, perhaps help get it fixed as soon as possible.

Related

Issue running airflow on Mac M1: error in Flask-OpenID setup command: use_2to3 is invalid

I'm having an issue running Airflow on my M1 Mac. It keeps erroring out with error in Flask-OpenID setup command: use_2to3 is invalid. I have setuptools < 58 and am still having issues.
ERROR: Could not find a version that satisfies the requirement flask-openid==1.2.5 (from versions: 0.9, 0.9.1, 1.0, 1.0.1, 1.1, 1.1.1, 1.2, 1.2.1, 1.2.2, 1.2.3, 1.2.4, 1.2.5, 1.3.0)
ERROR: No matching distribution found for flask-openid==1.2.5
Yes. It's been fixed in flask_openid 1.2.6 (it's not a problem with Airflow but with Flask-OpenID).
It looks like, for some reason, your setuptools is not what you think it is. See:
https://github.com/pallets-eco/flask-openid/issues/59
You have not explained some crucial things: how you are installing Airflow, nor which version of Airflow you are trying to install. That unfortunately makes it harder to help you, so I have to make some guesses. Here is what you can do if you cannot, for any reason, downgrade to setuptools < 58.
If you are using Airflow 2 with constraints (as you should; this is the only supported way: https://airflow.apache.org/docs/apache-airflow/stable/installation/installing-from-pypi.html) for some older version of Airflow, then flask-openid is possibly pinned to 1.2.5 in those old constraint files. Please check, and if you REALLY want to stay with an older version, download the constraint file locally, change the flask-openid version to 1.2.6, and point pip to that file instead of the GitHub URL you would normally use. (If you don't use constraints, start using them immediately.)
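The constraint-file edit described above can be sketched like this. It's only an illustration: the constraints URL, the Airflow/Python versions, and the stand-in file contents are assumptions, not the exact pins of any real constraints file.

```shell
# Illustrative sketch (GNU sed syntax): bump flask-openid to the fixed 1.2.6
# release in a local copy of the Airflow constraints file.
# In reality you would first download the real constraints file, e.g.:
#   curl -LO https://raw.githubusercontent.com/apache/airflow/constraints-2.1.4/constraints-3.8.txt
# Here a tiny stand-in file is created instead so the edit is easy to see.
cat > constraints-sample.txt <<'EOF'
flask-login==0.4.1
flask-openid==1.2.5
flask-wtf==0.14.3
EOF

# Change the pin from the broken release to the fixed one.
sed -i 's/^flask-openid==1\.2\.5$/flask-openid==1.2.6/' constraints-sample.txt
grep 'flask-openid' constraints-sample.txt   # → flask-openid==1.2.6

# Then install pointing at the local file instead of the GitHub URL:
#   pip install "apache-airflow==2.1.4" --constraint ./constraints-sample.txt
```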
However, a better option than installing an old version of Airflow is to update to the latest version (currently 2.2.2, but we are about to start voting on 2.2.3), where this problem is fixed for sure (as it is in a few other versions). Airflow follows SemVer, so you should generally be safe migrating to 2.2.2 if you've used an earlier version of Airflow 2.
If you are trying to install Airflow 1.10.*, then don't. Move to Airflow 2 immediately. Airflow 1.10 reached end of life in June 2021, and for almost half a year it has not received any fixes; it won't receive a fix for the Flask-OpenID problem, so you are pretty much on your own here.
You also make yourself vulnerable to unpatched security issues (Airflow 1.10 also stopped receiving critical security fixes as of June 2021).

Julia libllvm_system not defined

I am trying to install the packages CUDAnative, CuArrays, and CUDAdrv in Julia.
I'm getting the following error when I write
Pkg.build("CUDAnative")
LoadError: UndefVarError: libllvm_system not defined
Any idea why?
(I am using an Asus Zephyrus G14 with Nvidia RTX 2060 and AMD Ryzen 9)
I've realized that the CUDAnative, CuArrays, and CUDAdrv packages are deprecated and that using CUDA.jl alone is enough. The problem arises when I try to install those libraries: they downgrade CUDA.jl to a pre-1.0 version.
Please see the CUDA.jl docs here: https://juliagpu.gitlab.io/CUDA.jl/installation/conditional/ for details on this issue, and the troubleshooting steps here: https://juliagpu.gitlab.io/CUDA.jl/installation/troubleshooting/.
If you continue to have this issue, please open an issue for the package on GitHub.

Upgrading Drupal 7.43 to 8 error: Source database is Drupal version 8 but version 7 was selected

I am new to Drupal, but I am trying to upgrade a site from version 7.43 to 8, following these steps: https://www.drupal.org/docs/8/upgrade/upgrade-using-web-browser
After installing the Migrate modules and defining the source site, I have the following error message:
Source database is Drupal version 8 but version 7 was selected.
In my old site's database, the drupal_version table is empty.
I managed to solve it.
It was caused by a PHP update, which also caused blank pages, the "white screen of death" (WSOD), in the admin.
After updating to 7.59, it was OK.
This article helped me: https://www.drupal.org/node/158043
Conclusion: first update to the latest 7.x release, then migrate to 8.
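For the WSOD part, the linked drupal.org page suggests temporarily enabling PHP error display so the blank page shows the actual error instead. A minimal settings.php fragment for Drupal 7 (debugging only; remove it again afterwards):

```php
// Temporary debugging only: surface PHP errors instead of a blank page.
// Add near the bottom of sites/default/settings.php, remove when done.
error_reporting(E_ALL);
ini_set('display_errors', TRUE);
ini_set('display_startup_errors', TRUE);
```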

FreeBSD quarterly binaries mariadb101-server missing, why?

On FreeBSD 11.1 I am having problems building mariadb101-server. So as a last resort I thought I might grab the binary from the quarterly (kids, you should normally not mix ports with packages) repo, but there's no package for mariadb101-server:
http://pkg.freebsd.org/FreeBSD:11:amd64/quarterly/All/
Why is it not there?
You could try the latest:
http://pkg.freebsd.org/FreeBSD:11:amd64/latest/All/
The mariadb101-server package:
http://pkg.freebsd.org/FreeBSD:11:amd64/latest/All/mariadb101-server-10.1.33.txz
The reason it is not listed in the quarterly repo seems to be some critical vulnerabilities; you can read more about it here: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=219045
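If you want pkg itself to track the latest branch rather than quarterly (instead of fetching the .txz by hand), the usual mechanism is a repository override file. A sketch, assuming the standard layout described in pkg.conf(5); adjust the path and branch to taste:

```
# /usr/local/etc/pkg/repos/FreeBSD.conf
FreeBSD: {
  url: "pkg+http://pkg.FreeBSD.org/${ABI}/latest"
}
```

Then run `pkg update -f` before installing, so the new repository catalogue is picked up.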

How to optimize R performance

We have a recent performance benchmark that I am trying to understand. We have a large script whose performance appears 50% slower on a Red Hat Linux machine than on a Windows 7 laptop with comparable specs. The Linux machine is virtualized using KVM and has 4 cores assigned to it along with 16 GB of memory. The script is not I/O intensive but has quite a few for loops. Mainly I am wondering if there are any R compile options I can use to optimize, or any kernel compile options that might help make this more comparable. Any pointers would be appreciated. I will also try to get another machine and test on bare metal for a better comparison.
These are the configure flags I am using to compile R on the Linux machine. I have experimented quite a bit, and this seems to cut about 12 seconds off the execution time for larger data sets. On the benchmarked script, I went from 2.087 to 1.48 seconds with these options.
./configure CFLAGS="-O3 -g -std=gnu99" CXXFLAGS="-O3 -g" FFLAGS="-O2 -g" LDFLAGS="-Bdirect,--hash-style=both,-Wl,-O1" --enable-R-shlib --without-x --with-cairo --with-libpng --with-jpeglib
Update 1
The script has not been optimized yet. Another group is actually working on the script, and we have put in requests to use the apply functions, but I am not sure how that explains the disparity in the times.
The top of the profile looks like this. Most of these functions will later be optimized using the apply functions, but right now it is benchmarked apples to apples on both machines.
"eval.with.vis" 8.66 100.00 0.12 1.39
"source" 8.66 100.00 0.00 0.00
"[" 5.38 62.12 0.16 1.85
"GenerateQCC" 5.30 61.20 0.00 0.00
"[.data.frame" 5.20 60.05 2.40 27.71
"ZoneCalculation" 5.12 59.12 0.02 0.23
"cbind" 3.22 37.18 0.34 3.93
"[[" 2.26 26.10 0.38 4.39
"[[.data.frame" 1.88 21.71 0.82 9.47
My first suspicion, which I will be testing shortly and updating with my findings, is that KVM Linux virtualization is to blame. This script is very memory intensive, and due to the large number of array operations and R being pass-by-copy (which of course has to malloc), this may be causing the problem. Since the VM does not have direct access to the memory controller and must share it with the other VMs, this could very likely cause the problem. I will be getting a bare-metal machine later today and will update with my findings.
Thank you all for the quick updates.
Update 2
We originally thought the performance problem was caused by hyper-threading within a VM, but this turned out to be incorrect: performance was comparable on a bare-metal machine.
We later realized that the Windows laptop was using a 32-bit version of R for the computations. This led us to try the 64-bit version of R, and the result was ~140% slower than 32-bit on the exact same script. This leads me to the question: how is it possible that 64-bit R could be ~140% slower than the 32-bit version?
What we are seeing:
Windows 32-bit execution time: 48 seconds
Windows 64-bit execution time: 2.33 seconds
Linux 64-bit execution time: 2.15 seconds
Linux 32-bit execution time: <in progress> (built a 32-bit version of R on RHEL 6.3 x86_64 but did not see a performance improvement; going to reload with the 32-bit version of RHEL 6.3)
I found this link but it only explains a 15-20% hit on some 64 bit machines.
http://www.hep.by/gnu/r-patched/r-admin/R-admin_51.html
Sorry I cannot legally post the script.
Have a look at the sections on "profiling" in the Writing R Extensions manual.
From 30,000 feet, you can't say much else; you will need profiling information. "General consensus" (and I am putting this in quotes, as you can't really generalize these things) is that Linux tends to do better at memory management and file access, so I am a little astonished by your results.
Building R with --enable-R-shlib can cause a performance penalty. This is discussed in R Installation and Administration, in Appendix B, Section 1. That alone could explain 10-20% of the variation. Other sources could be from the differences of the "comparable specs".
The issue was resolved: it was caused by a non-optimized BLAS library.
This article was a great help; switching to ATLAS made the difference:
http://www.cybaea.net/Blogs/Data/Faster-R-through-better-BLAS.html
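As a sketch of what "switching BLAS" can look like when building R from source: the library names and -L path below are assumptions that vary by distribution, so treat this as an outline and check the BLAS appendix of the R Installation and Administration manual for your system.

```shell
# Build R against ATLAS instead of the bundled reference BLAS.
# The -L path and library names are typical but distribution-specific.
./configure --with-blas="-L/usr/lib64/atlas -lf77blas -latlas" \
            --with-lapack --enable-R-shlib
make

# On Debian/Ubuntu, an alternative is swapping the system-wide BLAS at
# runtime instead of rebuilding R:
#   sudo update-alternatives --config libblas.so.3
```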
