On FreeBSD 11.1 I am having problems building mariadb101-server. So as a last resort I thought I might grab the binary from the quarterly repo (kids, you should normally not mix ports with packages), but there's no package for mariadb101-server:
http://pkg.freebsd.org/FreeBSD:11:amd64/quarterly/All/
Why is it not there?
You could try the latest repository:
http://pkg.freebsd.org/FreeBSD:11:amd64/latest/All/
The mariadb101-server package is there:
http://pkg.freebsd.org/FreeBSD:11:amd64/latest/All/mariadb101-server-10.1.33.txz
The reason why it is not listed in the quarterly repo seems to be some critical vulnerabilities; you can read more about it here: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=219045
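If you do decide to pull that package from latest, a minimal sketch of switching the pkg repository (assuming the stock pkg setup; the override file path is the standard one documented in the Handbook):

# /usr/local/etc/pkg/repos/FreeBSD.conf -- overrides the default quarterly repo
FreeBSD: {
  url: "pkg+http://pkg.FreeBSD.org/${ABI}/latest"
}

Then run pkg update followed by pkg install mariadb101-server.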
I'm trying to import the Blackboard.zip file created by R/exams, but I get the following warning:
Dec 10, 2020 2:00:36 PM - [FATAL] Fatal: An error has occurred. The error recorded is:
The .zip file you provided failed to import. Please try again with a new file.
For more information, see the detailed log.
Dec 10, 2020 2:00:36 PM - [WARNING] Status: The operation import did not complete.
I thought it was a problem with my R installation, and tried to import the file provided at http://www.r-exams.org/assets/posts/2018-04-16-exams2blackboard//blackboard.zip
and even so I get the same error message.
Does anyone know what is happening? I even tried to import a QTI 2.1 file generated by the exams package and got the same error.
I really appreciate any help.
I asked my co-author Niels Smits - the main author behind exams2blackboard - to check what is going on. He confirms that the old blackboard.zip file shared with our blog post from April 2018 does not work in current versions of Blackboard anymore. Potentially this is caused by a duplication of some files in the .zip that we had overlooked in earlier versions of R/exams.
However, R/exams version 2.4-0 has fixed several problems with exams2blackboard which apparently also solves the problems you are describing. Thus, please update R/exams to a version of at least 2.4-0 and you should be able to avoid the reported issues. After that the output from exams2blackboard should work as an input for current Blackboard versions.
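For reference, a minimal sketch of the update-and-regenerate workflow in R (the exercise file name below is a placeholder, not from the original post):

install.packages("exams")                      # should pull >= 2.4-0 from CRAN
library("exams")
stopifnot(packageVersion("exams") >= "2.4-0")  # verify the fixed version is loaded
## "myexercise.Rmd" stands in for one of your own exercise files
exams2blackboard("myexercise.Rmd", n = 1, name = "blackboard-test")

This should write a blackboard-test.zip to the current directory, which can then be imported into Blackboard.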
Regarding the QTI 2.1 support: the exams2qti21 function tries to follow the guidance in the official IMS QTI 2.1 specification and has mostly been tested with OpenOLAT. The QTI 2.1 XML files produced by exams2qti21 do not work with Blackboard yet, presumably because Blackboard has introduced its own QTI 2.1 "flavor". (Note that Blackboard's QTI 1.2 flavor also deviated quite substantially from a plain QTI 1.2 implementation.) We still have to look into this to check whether we can support Blackboard's QTI 2.1 flavor as well.
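For completeness, plain QTI 2.1 output for systems like OpenOLAT uses the same interface (again, the exercise file name is a placeholder):

exams2qti21("myexercise.Rmd", n = 1, name = "qti21-test")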
I have been having this problem for more than a week now and I am running out of time and patience. It occurs both when I run my script on a Mac and when I run it on a PC (more RAM makes no difference to the result; it just aborts faster). When I try to run this line on my dataset, the session aborts.
set.seed(119)
tax_PR2 <- assignTaxonomy(seqtab,
                          "~/Desktop/Documents/Bruts/aeDNA_data_shared/pr2_version_4.11.1_dada2.fasta",
                          multithread = TRUE)
Does anyone have any idea what the problem is? I verified my dataset (seqtab is currently seen by R as a large matrix of 3930724 elements, 20.2 Mb), I verified the free space on my computer, I have all the packages needed to run this line of code, and I tried different releases of the PR2 reference database (version 4.11.1, 4.12.0, etc.); it always ends the same way.
If you have any ideas I would appreciate them. I hope the information I gave is sufficient.
Packages installed:
library(BiocManager)
library(Rcpp)
library(dada2)
library(ff)
library(ggplot2)
library(gridExtra)
library(phyloseq)
library(vegan)
This is probably caused by a bug that was introduced in 1.14; see the GitHub issue here for more information: https://github.com/benjjneb/dada2/issues/916
We've just identified the cause, and a fix should be out soon. For immediate use, the workaround is to turn off multithreading, or to revert to the previous release 1.12.
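In code, the immediate workaround amounts to re-running the call from the question with multithreading disabled:

library(dada2)
set.seed(119)
tax_PR2 <- assignTaxonomy(seqtab,
                          "~/Desktop/Documents/Bruts/aeDNA_data_shared/pr2_version_4.11.1_dada2.fasta",
                          multithread = FALSE)  # single-threaded avoids the crash in 1.14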
I'm writing an R package which depends upon many other packages. When I load too many packages into the session, I frequently get this error:
Error in dyn.load(file, DLLpath = DLLpath, ...) :
unable to load shared object '/Library/Frameworks/R.framework/Versions/3.2/Resources/library/proxy/libs/proxy.so':
maximal number of DLLs reached...
This post, Exceeded maximum number of DLLs in R, pointed out that the issue is with Rdynload.c in the base R code:
#define MAX_NUM_DLLS 100
Is there any way to bypass this issue except modifying and building from source?
As of R 3.4, you can set a different maximum number of DLLs using an environment variable, R_MAX_NUM_DLLS. From the release notes:
The maximum number of DLLs that can be loaded into R e.g. via
dyn.load() can now be increased by setting the environment
variable R_MAX_NUM_DLLS before starting R.
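A convenient way to do this (a sketch; 500 is an arbitrary value) is to put the variable in your ~/.Renviron file, which R reads at startup:

R_MAX_NUM_DLLS=500

You can verify it took effect from within R with Sys.getenv("R_MAX_NUM_DLLS").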
Increasing that number is of course "possible"... but it also costs a bit
(adding to the fixed memory footprint of R).
I did not set that limit, but I'm pretty sure it was also meant as a reminder for the useR to "clean up" a bit in her/his R session, i.e., not load package namespaces unnecessarily. I cannot yet imagine that you need more than 100 packages/namespaces loaded in your R session.
OTOH, some packages nowadays have a host of dependencies, so I agree that this at least may happen accidentally more frequently than in the past.
The real solution of course would be a code improvement that starts with a relatively small number of "DLLinfo" structures (say 32), and then allocates more batches (of size say 32) if needed.
Patches to the R sources (development trunk in subversion at https://svn.r-project.org/R/trunk/ ) are very welcome!
---- added Jan. 26, 2017: In the meantime, we've had a public bug report about this and a proposed patch (which was not good enough: there is always an OS-dependent limit on the number of open files), and today that bug report has been closed by R Core member Tomas Kalibera, who implemented new code where the maximal number of loaded DLLs is set at
pmax(100, pmin(1000, 0.6* OS_dependent_getrlimit_or_equivalent()))
and so on Windows and Linux (and not yet tested, but "almost surely" macOS), the limit should be considerably higher than previously.
----- Update #2 (written Jan.5, 2018):
In Oct'17, the above change was made more automatic with the following commit to the sources (of the development version of R - only!)
r73545 | kalibera | 2017-10-12 14:41:20
Increase the number of DLLs that can be loaded by default. If needed,
increase the soft limit on open files.
and on the help page ?dyn.load (https://stat.ethz.ch/R-manual/R-devel/library/base/html/dynload.html) the ulimit -n <num_open_files> is now mentioned (section Note close to bottom).
So you might consider using R's development version till that becomes "main stream" in April.
Alternatively, you do (in a terminal / shell)
ulimit -n 2048
and then start R from that terminal. Tomas Kalibera mentioned this to work on macOS.
I had this issue with the simpleSingleCell library in Bioconductor.
On macOS you can't exceed 256, so I set this in my .Renviron in my home dir:
R_MAX_NUM_DLLS=150
It's easy:
Go to the environment variables and edit:
variable name = R_MAX_NUM_DLLS
value = 1000
Restart R.
It worked well for me.
I've been fighting this problem for the second day straight, after a completely sleepless night, and I'm really starting to lose my patience and strength. It all started after I decided to provision another (paid) AWS EC2 instance in order to test my R code for dissertation data analysis. Previously I was using a single free-tier t1.micro instance, which is painfully slow, especially when testing/running particular code. Time is much more valuable than the reasonable number of cents per hour that Amazon is charging.
Therefore, I provisioned an m3.large instance, which I hope should have enough power to crunch my data comfortably fast. After EC2-specific setup, which included selecting Ubuntu 14.04 LTS as the operating system and some security setup, I installed R and RStudio Server per instructions, via sudo apt-get install r-base r-base-dev, as the ubuntu user. I also created ruser as a special user for running R sessions. Basically, the same procedure as on the smaller instance.
The current situation is that any command I issue at the R session command line results in messages like this: Error: could not find function "sessionInfo". The only function that works is q(). I suspect a permissions problem here; however, I'm not sure how to approach investigating permission-related problems in an R environment. I'm also curious what the reasons for this situation could be, considering that I was following recommendations from R Project and RStudio sources.
I was able to pinpoint the place that I think caused all that horror - it was just a small configuration file, "/etc/R/Rprofile.site", which I had previously updated with directives borrowed from R experts' posts here on StackOverflow. After removing the questionable contents, I was able to run R commands successfully. Out of curiosity, and to share this hard-earned knowledge, here are the removed contents:
local({
    # add DISS_FLOSS_PKGS to the default packages, set a CRAN mirror
    DISS_FLOSS_PKGS <- c("RCurl", "digest", "jsonlite",
                         "stringr", "XML", "plyr")

    #old <- getOption("defaultPackages")
    r <- getOption("repos")
    r["CRAN"] <- "http://cran.us.r-project.org"
    #options(defaultPackages = c(old, DISS_FLOSS_PKGS), repos = r)
    options(defaultPackages = DISS_FLOSS_PKGS, repos = r)

    #lapply(list(DISS_FLOSS_PKGS), function() library)
    library(RCurl)
    library(digest)
    library(jsonlite)
    library(stringr)
    library(XML)
    library(plyr)
})
Any comments on this will be appreciated!
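For what it's worth, the likely culprit is the options(defaultPackages = DISS_FLOSS_PKGS, repos = r) line: it replaces R's standard default packages (including utils, which is where sessionInfo lives) instead of appending to them, which would explain the "could not find function" errors. A corrected sketch that keeps the standard defaults:

local({
    DISS_FLOSS_PKGS <- c("RCurl", "digest", "jsonlite",
                         "stringr", "XML", "plyr")
    r <- getOption("repos")
    r["CRAN"] <- "http://cran.us.r-project.org"
    ## append to, rather than replace, the standard default packages
    old <- getOption("defaultPackages")
    options(defaultPackages = c(old, DISS_FLOSS_PKGS), repos = r)
})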
I'm working with Mac OS X 10.9.2 and R version 3.0.2.
I used dbDriver() and dbConnect() to initiate the connection to my database. Next, I tried to connect to my postgres database using
c = readOGR("PG:dbname=OB", layer="geo.countries")
This does not work and always returns a "Cannot open file" error.
I understood from https://stat.ethz.ch/pipermail/r-sig-geo/2010-January/007519.html that the reason for this is the absence of a driver for PostgreSQL, as can be seen by using the command ogrDrivers().
Can anybody help me with installing the driver, or with making this work? Any help is much appreciated!
Thanks!
In the absence of the right driver, gdal/ogr usually throws an error like
Unable to find driver PostgreSQL
First, make sure that the database and layer exist. If it's true that the driver for Postgres isn't installed, you'll have to re-install gdal. Using homebrew:
brew uninstall gdal
brew install gdal --with-postgresql
See also this question.
If you are really sure that gdal is properly installed, make sure that your dsn is fully specified (it has more parameters than just dbname):
dsn="PG:dbname=DB host=HOST user=USER password=PSSWD port=5432"
Moreover, if your table contains more than one spatial column (layer), you have to specify the one you want:
layer = "DB.TABLE(YOUR_SPATIAL_COLUMN)"
It took me a while to find this out, but it was obvious after calling the function
ogrListLayers()
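Putting the pieces together, a minimal sketch (host, user, password, and the spatial column name are placeholders):

library(rgdal)
dsn <- "PG:dbname=OB host=localhost user=USER password=PSSWD port=5432"
ogrListLayers(dsn)   # inspect which layers / spatial columns are available
countries <- readOGR(dsn, layer = "geo.countries(geom)")  # pick the spatial column explicitly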
It works for me. Just one issue: if you have a raster column in your table, it will not be excluded (as all the other layers/spatial columns are). Instead, it will be loaded into the spatialobject@data data frame as a text column. Quite annoying in the case of big rasters. I have posted a question about this.