Rmpi, OpenCPU, and AppArmor: DENIED request for "/"

I have an R package that sends out a job to the OpenMPI cluster I have running, by means of the Rmpi package. All works as expected within an R session run from the console. However, when I try to execute the relevant function from my OpenCPU server like this (details changed to protect the innocent):
curl -XPOST http://99.999.999.99/ocpu/library/MyPackage/R/my_cluster_function
I get this error:
R call failed: process died.
(Other functions in the package that don't call the cluster work as expected via OpenCPU.) I noticed in /var/log/kern.log a variety of requests being DENIED by AppArmor, and I have been able to resolve most of them by adding entries to /etc/apparmor.d/opencpu.d/custom to allow OpenMPI to access the files it needs. However, I cannot resolve these two issues (again, IP address changed) related to "open" requests for location "/":
Oct 26 03:49:58 99.999.999.99 kernel: [142952.551234] type=1400 audit(1414295398.849:957): apparmor="DENIED" operation="open" profile="opencpu-main" name="/" pid=22486 comm="orted" requested_mask="r" denied_mask="r" fsuid=33 ouid=0
Oct 26 03:49:58 99.999.999.99 kernel: [142952.556422] type=1400 audit(1414295398.857:958): apparmor="DENIED" operation="open" profile="opencpu-main" name="/" pid=22485 comm="apache2" requested_mask="r" denied_mask="r" fsuid=33 ouid=0
Adding this to my AppArmor rules did not help:
/* r,
Two questions:
Why is OpenCPU trying to read from my root-level directory (or does this mean something else)?
More urgently, how can I resolve this apparmor issue?
Thanks.

You might need to add both AppArmor rules:
/ r,
/* r,
The first rule allows directory listing of / and the second rule allows read access to any file under /.
I don't understand why Rmpi wants to read /, or why you were getting a "process died" error instead of "access denied". Are you sure the problem is completely resolved?
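For reference, a minimal sketch of the custom profile fragment with both rules (the file path comes from the question; the reload command assumes the OpenCPU profile lives at /etc/apparmor.d/opencpu, which may differ on your system):
# /etc/apparmor.d/opencpu.d/custom
# allow directory listing of / ...
/ r,
# ... and read access to files directly under /
/* r,
Then reload the profile so the new rules take effect:
sudo apparmor_parser -r /etc/apparmor.d/opencpu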

Related

jupyterhub fails to spawn server with systemdspawner

I am trying to run jupyterhub on an Ubuntu 20.04 LTS server. My idea is to run python/jupyterhub in a conda virtual environment as a system service. As I want to be able to limit the resources available to individual users, I installed the systemdspawner.
After installing everything and starting the jupyterhub service I can log in through my web browser. However, when trying to start the server the spawner gets stuck, and after a while I get an error message saying "Spawn failed: Timeout".
In journalctl I can see the following messages:
User logged in: me 302 POST /hub/login?next= -> /hub/spawn (me@::ffff:[my IP address]) 59.42ms
Adding role server to token: <APIToken('93c8...', user='me', client_id='jupyterhub')
Creating oauth client jupyterhub-user-me
pam_loginuid(login:session): Error writing /proc/self/loginuid: Operation not permitted
pam_loginuid(login:session): set_loginuid failed
pam_unix(login:session): session opened for user me by (uid=0)
Failed to open PAM session for me: [PAM Error 14] Cannot make/remove an entry for the specified session
Disabling PAM sessions from now on. user:me
Unit jupyter-me-singleuser in a failed state. Resetting state.
Disclaimer: My Jupyter/Python installation is replacing a former installation that was set up by someone else and got messed up a bit over time. I tried to remove everything related and start with a clean installation from scratch. However, as I had very little documentation about the old setup, there is a certain risk that some leftovers of the previous installation may cause trouble.
Any ideas?
Solved it myself. In the end, the PAM-related messages seem to be non-critical and were not related to the timeout at all. Instead I found a mistake in /etc/systemd/system/jupyterhub.service, where the PATH variable did not include the bin directory of my miniconda installation.
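For anyone hitting the same thing, here is a minimal sketch of the relevant part of the unit file; the miniconda and config paths are placeholders for my setup, not universal values:
# /etc/systemd/system/jupyterhub.service (excerpt)
[Service]
# PATH must include the conda environment's bin directory,
# otherwise jupyterhub cannot find the executables it spawns
Environment="PATH=/opt/miniconda3/envs/jupyterhub/bin:/usr/local/bin:/usr/bin:/bin"
ExecStart=/opt/miniconda3/envs/jupyterhub/bin/jupyterhub -f /etc/jupyterhub/jupyterhub_config.py
After editing, run sudo systemctl daemon-reload and restart the jupyterhub service.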

Grakn Error; trying to load schema for "phone calls" example

I am trying to run the example Grakn migration "phone_calls" (using Python and JSON files).
Before reaching there, I need to load the schema, but I am having trouble getting it loaded, as shown here: https://dev.grakn.ai/docs/examples/phone-calls-schema
System:
- Mac OS 10.15
- grakn-core 1.8.3
- python 3.7.3
The Grakn server is started. I checked and the 48555 TCP port is open, so I don't think there is any firewall issue. The schema file is in the same folder (phone_calls) as the JSON data files for the next step. I am using a virtual environment. The error is below:
(project1_env) (base) tiffanytoor1@MacBook-Pro-2 onco % grakn server start
Storage is already running
Grakn Core Server is already running
(project1_env) (base) tiffanytoor1@MacBook-Pro-2 onco % grakn console --keyspace phone_calls --file phone_calls/schema.gql
Unable to create connection to Grakn instance at localhost:48555
Cause: io.grpc.StatusRuntimeException
UNKNOWN: Could not reach any contact point, make sure you've provided valid addresses (showing first 1, use getErrors() for more: Node(endPoint=/127.0.0.1:9042, hostId=null, hashCode=5f59fd46): com.datastax.oss.driver.api.core.connection.ConnectionInitException: [JanusGraph Session|control|connecting...] init query OPTIONS: error writing ). Please check server logs for the stack trace.
I would appreciate any help! Thanks!
Never mind -- I found the solution, in case anyone else runs into a similar problem. The server configuration file needs to be edited: point the data directory to your project data files (here: the phone_calls data files) and change the server IP address to your own.
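For illustration, the kind of edit involved looks roughly like this; the key names are from memory of Grakn Core 1.8's grakn.properties and the values are placeholders, so check your own distribution:
# server/conf/grakn.properties (excerpt)
# address the server binds to -- set this to your own machine's IP
server.host=127.0.0.1
# data directory -- point this at your project's data location
data-dir=/path/to/phone_calls/data/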

Lucene index gets broken segments after every restart of liferay-tomcat

I have a corrupted Lucene index. If I run "CheckIndex -fix" the problem is resolved, but as soon as I restart Tomcat it becomes corrupted again.
The index directory is shared between two application servers running Liferay-Tomcat. I am fixing the index on one server and restarting it whilst the other is running. This is a production environment, so I cannot bring them both down.
Any suggestions please?
Before fix, CheckIndex says:
Opening index @ /usr/local/tomcat/liferay/lucene/0
Segments file=segments_5yk numSegments=1 version=FORMAT_SINGLE_NORM_FILE [Lucene 2.2]
1 of 1: name=_2vg docCount=31
compound=false
hasProx=true
numFiles=8
size (MB)=0.016
no deletions
test: open reader.........FAILED
WARNING: fixIndex() would remove reference to this segment; full exception:
java.io.IOException: read past EOF
at org.apache.lucene.store.BufferedIndexInput.refill(BufferedIndexInput.java:151)
at org.apache.lucene.store.BufferedIndexInput.readByte(BufferedIndexInput.java:38)
at org.apache.lucene.store.IndexInput.readVInt(IndexInput.java:78)
at org.apache.lucene.index.FieldInfos.read(FieldInfos.java:335)
at org.apache.lucene.index.FieldInfos.<init>(FieldInfos.java:71)
at org.apache.lucene.index.SegmentReader$CoreReaders.<init>(SegmentReader.java:119)
at org.apache.lucene.index.SegmentReader.get(SegmentReader.java:652)
at org.apache.lucene.index.SegmentReader.get(SegmentReader.java:605)
at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:491)
at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:903)
WARNING: 1 broken segments (containing 31 documents) detected
WARNING: would write new segments file, and 31 documents would be lost, if -fix were specified
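For reference, the check/fix was invoked along these lines; the class name and index path come from the output above, while the exact jar name is an assumption (match it to your Lucene version). Stop Tomcat before running with -fix:
# dry run (report only)
java -cp lucene-core-2.2.0.jar org.apache.lucene.index.CheckIndex /usr/local/tomcat/liferay/lucene/0
# rewrite the segments file, dropping broken segments
java -cp lucene-core-2.2.0.jar org.apache.lucene.index.CheckIndex /usr/local/tomcat/liferay/lucene/0 -fix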
If you access your search index with more than one application server, I would suggest integrating a Solr server, so you don't have the problem of two app servers trying to write to the same files. This is error-prone, as you have already found out.
To get Solr up and running you have to follow these steps:
Install a Solr server on any machine you like. A dedicated machine running only Solr is preferable.
Install the Solr search portlet in Liferay.
Adjust the config files according to the setup document of the Solr Search portlet (see the sketch below the links).
Here are some additional links:
http://www.liferay.com/de/marketplace/-/mp/application/15193648
http://www.liferay.com/de/community/wiki/-/wiki/Main/Pluggable+Enterprise+Search+with+Solr
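As an illustration of the config-file adjustment in the last step: in the Liferay 6.x solr-web plugin, the Solr server location is set in solr-spring.xml, roughly like this (bean names and file layout vary by plugin version, so treat this as a sketch):
<!-- WEB-INF/classes/META-INF/solr-spring.xml (excerpt) -->
<bean class="com.liferay.portal.search.solr.server.BasicAuthSolrServer" id="solrServer">
    <!-- URL of the standalone Solr instance both app servers will share -->
    <constructor-arg type="java.lang.String" value="http://solr.example.com:8080/solr" />
</bean>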

pgpool-II connection pooling - ERROR: "MD5" authentication with pgpool failed

Using the following for connection pooling only, no master_slave mode or replication: RHEL 6, PostgreSQL 9.1.9, and pgpool-II 3.1.3 (also tried 3.2.5).
Followed solution suggested in http://www.pgpool.net/pipermail/pgpool-general/2013-May/001773.html
After following the instructions for MD5, I also tried setting both pg_hba.conf and pool_hba.conf to trust for local and subnet, but I still get the following error when attempting to connect to the pool locally:
ERROR: "MD5" authentication with pgpool failed for user foo
I tried locally on Fedora 18 with PostgreSQL 9.2 and pgpool from the Fedora repo, and it worked right out of the box.
I am at the end of all the routes suggested everywhere I could find.
Help would be greatly appreciated.
After hitting the same problem, the solution was to change ownership of the pool_passwd file to postgres.
Even though this file has 644 permissions, if the owner isn't postgres you'll always get the aforementioned error. I guess this file's owner must match the user running pgpool.
I'm running PostgreSQL 9.2 and pgpool-II 3.3.2, BTW.
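In other words, something along these lines; the pool_passwd location depends on your build (/etc/pgpool-II/ is common on RHEL), and the service name may differ:
# make pool_passwd's owner match the user pgpool runs as
sudo chown postgres /etc/pgpool-II/pool_passwd
sudo chmod 644 /etc/pgpool-II/pool_passwd
# restart pgpool so it re-reads the file
sudo service pgpool restart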

How do I tell the R interpreter how to use the proxy server?

I'm trying to get R (running on Windows) to download some packages from the Internet, but the download fails because I can't get it to correctly use the necessary proxy server. The output text when I try the Windows menu option Packages > Install package(s)... and select a CRAN mirror is:
> utils:::menuInstallPkgs()
--- Please select a CRAN mirror for use in this session ---
Warning: unable to access index for repository http://cran.opensourceresources.org/bin/windows/contrib/2.12
Warning: unable to access index for repository http://www.stats.ox.ac.uk/pub/RWin/bin/windows/contrib/2.12
Error in install.packages(NULL, .libPaths()[1L], dependencies = NA, type = type) :
no packages were specified
In addition: Warning message:
In open.connection(con, "r") :
cannot open: HTTP status was '407 Proxy Authentication Required'
I know the address and port of the proxy, and I also know the address of the automatic configuration script. I don't know what the authentication is called, but when using the proxy (in a browser and some other applications), I enter a username and password in a dialog window that pops up.
To set the proxy, I tried each of the following:
Sys.setenv(http_proxy="http://proxy.example.com:8080")
Sys.setenv("http_proxy"="http://proxy.example.com:8080")
Sys.setenv(HTTP_PROXY="http://proxy.example.com:8080")
Sys.setenv("HTTP_PROXY"="http://proxy.example.com:8080")
For authentication, I similarly tried setting the http_proxy_user environment variable to:
ask
user:passwd
Leaving it untouched
Am I using the right commands in the right way?
You have two options:
1. Use --internet2 or setInternet2(TRUE) and set the proxy details in the Control Panel, under Internet Options.
2. Don't use internet2 (omit --internet2, or call setInternet2(FALSE)), and instead specify the environment variables.
EDIT: One catch is that you cannot change your mind between 1 and 2 after you have tried one in a session; i.e. if you run setInternet2(TRUE) and try to use it, e.g. install.packages('reshape2'), and this fails, you cannot then call setInternet2(FALSE). You have to restart the R session.
As of R version 3.2.0, the setInternet2 function can set internet connection settings and change them within the same R session. No need to restart.
When using option 2, one way (which is nice and compact) to specify the username and password is http_proxy="http://user:password@proxy.example.com:8080/"
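For example, combining this with the Sys.setenv call from the question (same placeholder host and credentials):
# set the proxy for this session, then install as usual
Sys.setenv(http_proxy = "http://user:password@proxy.example.com:8080/")
install.packages("reshape2")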
In the past, I have had the most luck with option 2.
If you want internet2 to be used every time you use R, you could add the following line to the Rprofile.site file, which is located at R.x.x\etc\Rprofile.site:
utils::setInternet2(TRUE)
I've solved my trouble by editing the file .Renviron, as documented in "Proxy setting for R".
EDIT:
The solutions based on setInternet2 do not work with recent R versions because setInternet2 is declared defunct.
I'm using 4.2.1 (on Windows 11 Pro), while I never had any problems in previous versions.
So to solve the problem you need to modify some config files, in order to fix the proxy issue not only for package installation but, in general, also for access to remote resources (e.g. boundary maps in my case).
The question "Proxy setting for R" collects a lot of solutions. I've found that the one explaining step by step how to edit the file .Renviron solved both my problems (package installation and remote resources).
Other solutions, based on customizing the file Renviron.site, did not work for me.
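A minimal sketch of the entries involved; host, port, and credentials are placeholders, and .Renviron normally lives in your home directory:
# ~/.Renviron
http_proxy=http://proxy.example.com:8080/
https_proxy=http://proxy.example.com:8080/
http_proxy_user=user:password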
install.packages("RCurl")
That will solve your problem.
