On my CentOS 6.5 system I want to install chrony:
# yum install chrony
I get the following error
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
Setting up Install Process
No package chrony available.
Error: Nothing to do
How can I install chrony on CentOS 6.5?
I would greatly appreciate any help you can give me with this problem.
It sounds like you're missing the base repo from your install. The package is available from the standard 'base' repository, as yum info chrony shows:
Available Packages
Name : chrony
Arch : x86_64
Version : 2.1.1
Release : 2.el6_8
Size : 266 k
Repo : base
Summary : An NTP client/server
URL : http://chrony.tuxfamily.org
License : GPLv2
Description : A client/server for the Network Time Protocol, this program keeps your
: computer's clock accurate. It was specially designed to support
: systems with intermittent internet connections, but it also works well
: in permanently connected environments. It can use also hardware reference
: clocks, system real-time clock or manual input as time references.
It sounds like you may be missing the 'CentOS-Base' repo from your config. Check /etc/yum.repos.d/CentOS-Base.repo. If it doesn't exist, create it and add the following config block:
[base]
name=CentOS-$releasever - Base
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os&infra=$infra
#baseurl=http://mirror.centos.org/centos/$releasever/os/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
#released updates
[updates]
name=CentOS-$releasever - Updates
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates&infra=$infra
#baseurl=http://mirror.centos.org/centos/$releasever/updates/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
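With the repo file in place, refresh the yum cache and retry the install (a minimal sketch of the follow-up steps, not part of the original answer):
# yum clean all
# yum makecache
# yum install chrony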
You can also find the package via http://rpm.pbone.net.
After installing all prerequisites and setting up the environment, when I try to install the repository 'strain' package I get this error: 'strain': not a valid cloud name. Must be one of ['folsom', 'folsom-proposed', 'grizzly', 'grizzly-proposed', 'havana', 'havana-proposed', 'icehouse', 'icehouse-proposed', 'juno', 'juno-proposed', 'octa', 'octa-proposed', 'newton', 'newton-proposed',
I have tried removing the VM, recreating another one, and following the OpenStack instruction page, but the same issue is still there.
I want to know which points I need to check to find out what is responsible for this error.
I want to set up rstudio-server on an iMac with support for multiple users and remote login. I followed the steps in the INSTALL tutorial: I built the source, set up the configuration files and the launchd daemon. At first, it works fine, but after some time, I get these warnings/errors when I plot:
2022-06-09 08:02:29.438 rsession[3050:139329] XType: failed to connect - Error Domain=NSCocoaErrorDomain Code=4099 "The connection to service named com.apple.fonts was invalidated: failed at lookup with error 3 - No such process." UserInfo={NSDebugDescription=The connection to service named com.apple.fonts was invalidated: failed at lookup with error 3 - No such process.}
2022-06-09 08:02:29.438 rsession[3050:139329] Font server protocol version mismatch (expected:5 got:0), falling back to local fonts
2022-06-09 08:02:29.438 rsession[3050:139329] XType: unable to make a connection to the font daemon!
2022-06-09 08:02:29.438 rsession[3050:139329] XType: XTFontStaticRegistry is enabled as fontd is not available.
Then I can't plot any more unless I restart R and re-run my code. Do you know what the issue could be? I could not get any help when opening an issue on the rstudio-server GitHub, since macOS is not officially supported.
I was also looking at running rstudio-server via docker, but I couldn't find a good way to map the user namespace from macOS to the container.
Any help or suggestion would be greatly appreciated!
EDIT: It seems I was able to solve the issue by launching the fontd daemon with:
sudo launchctl load -w /System/Library/LaunchAgents/com.apple.fontd.useragent.plist
This seems like an issue with the macOS font daemon, not with RStudio itself.
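To verify that the agent actually loaded after running the command above, you can ask launchctl for it (a quick check I'd suggest; not from the original post):
$ launchctl list | grep -i fontd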
Someone reported a similar issue on PhantomJS. Rebooting resolved it for them.
This answer reported the same error for a different build; they were able to resolve it by installing the correct "Apple Worldwide Developer Relations Certification Authority" certificate in Keychain:
The one I had had an expiration date of February 2023. I deleted that one and went here, downloaded the one called "Worldwide Developer Relations - G3 (Expiring 02/20/2030 00:00:00 UTC)", then retried the build and it worked.
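For reference, you can inspect the expiry date of the copy currently in your keychain before deleting it; a sketch using the macOS security CLI (command suggested here, not from the original answer):
$ security find-certificate -c "Apple Worldwide Developer Relations" -p | openssl x509 -noout -enddate
(This prints the notAfter date of the first matching certificate.)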
I'm having issues with Azure Machine Learning SDK for R: "module 'azureml' has no attribute 'core'"...
For reasons that aren't my own, I have to use azureml to apply machine learning (my own stuff, written in R) to data from our data warehouse that is put in the blob storage. The modelled output should be put back into the blob storage so it can be accessed from the data warehouse.
I've written the code in R on my local machine (stored in a git repo). Preferably, I'd find some method to pull my code from git into a pipeline in the azureml environment so that it can be directly run whenever new data is available in the blob storage.
I've embarked on a tutorial-spree and found this seemingly relevant walkthrough: Train and deploy your first model with Azure ML (and this one).
But... after trying everything I could think of, I'm stuck on the first steps. After installing all (or at least, that's what I think) packages, modules, apps, etc., and running the following code in RStudio:
library(azuremlsdk)
existing_ws <- get_workspace(name = name,
                             subscription_id = subscription_id,
                             resource_group = resource_group)
I run into an error that I haven't been able to fix:
AttributeError: module 'azureml' has no attribute 'core'
It seems that azureml is supposed to have an attribute "core", but when looking at it more closely, there is indeed no such attribute.
The function "get_workspace()" is trying to access: "azureml$core$Workspace$get".
I found that "azureml$Workspace" does exist, but then I can't figure out how to make that work.
Can anyone explain to me why I'm encountering this error?
Does anyone know of a better tutorial on how to connect my R code to azureml's cloud service?
Any pointers in the right direction are much appreciated!
EDITS - still not solved:
After advice from others, I double, triple and quadruple checked the installation.
I updated R and I'm now running:
R.version
platform x86_64-w64-mingw32
arch x86_64
os mingw32
system x86_64, mingw32
status
major 3
minor 6.2
year 2019
month 12
day 12
svn rev 77560
language R
version.string R version 3.6.2 (2019-12-12)
nickname Dark and Stormy Night
I installed Conda with Python 3.6.10.
I installed the azuremlsdk R package (I tried both provided options).
I then realized that there are some inconsistencies with the versions of the azure modules, so I also tried installing it with the '--no-multiarch' option:
remotes::install_cran('azuremlsdk', repos = 'http://cran.us.r-project.org', INSTALL_opts=c("--no-multiarch"))
Then I installed the azureml Python SDK.
I had a look at all the versions again (using python -m pip freeze):
azure-common==1.1.24
azure-graphrbac==0.61.1
azure-mgmt-authorization==0.60.0
azure-mgmt-containerregistry==2.8.0
azure-mgmt-keyvault==2.0.0
azure-mgmt-resource==7.0.0
azure-mgmt-storage==7.1.0
azureml==0.2.7
azureml-automl-core==1.0.83.1
azureml-core==1.0.69
azureml-dataprep==1.1.36
azureml-dataprep-native==13.2.0
azureml-pipeline==1.0.69
azureml-pipeline-core==1.0.69
azureml-pipeline-steps==1.0.69
azureml-sdk==1.0.69
azureml-telemetry==1.0.69
azureml-train==1.0.69
azureml-train-automl-client==1.0.83
azureml-train-core==1.0.69
azureml-train-restclients-hyperdrive==1.0.69
Since I was surprised to see all the 1.0.69 versions instead of the 1.0.83 versions, I re-installed the azureml Python SDK using:
azuremlsdk::install_azureml(version = "1.0.83")
This worked, in the sense that indeed all versions are now 1.0.83:
azure-common==1.1.24
azure-graphrbac==0.61.1
azure-mgmt-authorization==0.60.0
azure-mgmt-containerregistry==2.8.0
azure-mgmt-keyvault==2.0.0
azure-mgmt-resource==7.0.0
azure-mgmt-storage==7.1.0
azureml==0.2.7
azureml-automl-core==1.0.83.1
azureml-core==1.0.83
azureml-dataprep==1.1.36
azureml-dataprep-native==13.2.0
azureml-pipeline==1.0.83
azureml-pipeline-core==1.0.83
azureml-pipeline-steps==1.0.83
azureml-sdk==1.0.83
azureml-telemetry==1.0.83
azureml-train==1.0.83
azureml-train-automl-client==1.0.83
azureml-train-core==1.0.83
azureml-train-restclients-hyperdrive==1.0.83
But still... I get the error about the missing 'core' attribute. I get it both when running:
library(azuremlsdk)
get_current_run()
and also when running:
library(azuremlsdk)
existing_ws <- get_workspace(name = name,
                             subscription_id = subscription_id,
                             resource_group = resource_group)
Note that the first time running this code after starting up RStudio, I get the error:
Error in py_get_attr_impl(x, name, silent) :
AttributeError: module 'azureml' has no attribute '_base_sdk_common'
And every time after that I get this error:
Error in py_get_attr_impl(x, name, silent) :
AttributeError: module 'azureml' has no attribute 'core'
Any help would be much appreciated!
This issue was introduced by the latest reticulate 1.14 release, in which reticulate would create a default r-reticulate conda environment. Since Azure ML was installing the python SDK in an environment named r-azureml, the r-reticulate environment used by reticulate was missing the python SDK. A fix for this issue was addressed in a PR and has been merged into master. Please install from GitHub for now if you have reticulate version 1.14 and are running into this issue. We will be releasing an update to CRAN shortly.
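In practice that means something like the following, assuming the package lives in the Azure/azureml-sdk-for-r repository (a sketch of the suggested workaround):
# install the patched R package from GitHub
remotes::install_github("Azure/azureml-sdk-for-r")
# then re-run the SDK installer so the python packages land in the
# conda environment that reticulate actually uses
azuremlsdk::install_azureml()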
I seem to have fixed the issue by specifically installing the Python packages azureml AND azureml.core:
python -m pip install azureml
and then...
python -m pip install azureml.core
I did this for the Conda environment that is called by R (r-reticulate). It's a bit odd not to be able to use the Conda environment 'r-azureml' without R switching back to 'r-reticulate', but ah well... at least I don't get the 'azureml' has no attribute 'core' error anymore.
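An alternative that avoids duplicating packages into r-reticulate might be to point reticulate at the r-azureml environment before the SDK initializes Python (a sketch, not something tested in this thread):
library(reticulate)
# must run before azuremlsdk touches python, or the default env wins
use_condaenv("r-azureml", required = TRUE)
library(azuremlsdk)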
I've been trying to install and run keras in RStudio (Windows) in vain.
I installed the keras package as a normal package
(didn't use GitHub).
I've installed the latest Python (3.6) and Anaconda.
Then I use
> library(keras)
> install_keras()
and I get this error:
Creating r-tensorflow conda environment for TensorFlow installation...
Fetching package metadata ...
CondaHTTPError: HTTP 000 CONNECTION FAILED for url https://repo.continuum.io/pkgs/main/win-64/repodata.json.bz2
Elapsed: -
An HTTP error occurred when trying to retrieve this URL. HTTP errors are often intermittent, and a simple retry will get you on your way.
ConnectTimeout(MaxRetryError("HTTPSConnectionPool(host='repo.continuum.io', port=443): Max retries exceeded with url: /pkgs/main/win-64/repodata.json.bz2 (Caused by ConnectTimeoutError(, 'Connection to repo.continuum.io timed out. (connect timeout=9.15)'))",),)
Error: Error 1 occurred creating conda environment r-tensorflow
In addition: Warning message:
running command '"C:\PROGRA~3\ANACON~1\Scripts\conda.exe" "create" "--yes" "--name" "r-tensorflow" "python=3.6"' had status 1
I've looked everywhere on the web and can't figure out how to install keras and tensorflow properly. I'm using the latest version of R (3.4.2).
Every method fails somewhere.
Just to add to the misery, I've also tried:
> devtools::install_github("rstudio/keras")
and I get this error:
Installation failed: Timeout was reached: Connection timed out after 10015 milliseconds
I am not behind any authenticated proxies. So, after multiple failures, I just downloaded the zip file from GitHub and manually installed it using the zip file.
I also tried install.packages("keras") and that didn't give me any errors either.
When I call the library I don't get any errors (as shown above).
UPDATE: I was able to install and use the package very easily on another computer that doesn't have Python/Anaconda installed on it already.
UPDATE 2: My proxy does not need authentication and there is no https_proxy either.
OK, FINALLY found a solution.
It turns out RStudio uses a lot of default proxy settings, so I needed to change all that and set up my own proxy settings.
First step:
RStudio --> Tools --> Global Options --> Packages --> uncheck both "Use secure download method for HTTP" and "Use Internet Explorer library/proxy for HTTP"
Second step, in RStudio type:
> file.edit('./.Renviron')
Either an empty file or a file with existing proxy settings will open. (Mine was empty.) Then I included the following two lines:
http_proxy=http://myusername:password@proxy.server.com:port/
https_proxy=http://myusername:password@proxy.server.com:port/
(A few notes: I didn't have an https_proxy setting, but I still needed to use the http_proxy details for my https_proxy setting; this was one of the culprits for my issue. Also, I needed to include the username:password even though my proxy doesn't need authentication. The same goes for the port: the port number had to be included, otherwise it wouldn't work.)
Step 3:
I saved the new changes in the .Renviron file and restarted RStudio.
I checked my proxy settings in RStudio after restart by typing:
> Sys.getenv("http_proxy")
> Sys.getenv("https_proxy")
The first few times I did this, I realised that the proxy settings were not being changed in RStudio because I was editing the wrong .Renviron file. So it's best to use file.edit('~/.Renviron') in step 2 to make sure it's the right file.
After all this, when I ran install_keras(), it installed successfully, including Tensorflow. Again, initially I had skipped step 1, so keras started installing but failed at installing tensorflow.
It was only by going through all the steps that I was able to install both keras and tensorflow successfully over a proxy. Hope this helps.
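For a quick test before committing the settings to .Renviron, the same variables can be set for the current session only (same placeholder username/password/host/port as above):
# placeholders: substitute your real proxy details before running
Sys.setenv(http_proxy  = "http://myusername:password@proxy.server.com:port/",
           https_proxy = "http://myusername:password@proxy.server.com:port/")
install_keras()  # retry once the proxy is reachable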
Uninstalling Anaconda3 and installing Anaconda2 (i.e. Python 2.7) did the trick for me: https://www.anaconda.com/download/
I have already installed OpenCPU on an Ubuntu server - Ubuntu 14.04.1 LTS (GNU/Linux 3.13.0-32-generic x86_64) - and everything worked perfectly without any problems.
Here I want to say that I really like this API and I am very thankful for all the effort from the people (I think mostly Jeroen Ooms) working on it.
Now I installed it again, but on another server hosted at another provider. It is also an Ubuntu server - Ubuntu 14.04.1 LTS (GNU/Linux 3.13.0-042stab093.4 x86_64) - and therefore I expected it to work as smoothly as before.
But now I have a problem. After installing and starting the service, I wanted to check through my browser whether everything was OK.
So I just opened http://xxx.xxx.xxx.xxx/ocpu like it worked on my other server. This time my browser doesn't show the OpenCPU API Explorer, but the following message:
Failed to set rlimit. ENOSYS
In call:
rlimit_wrapper("rlimit_as", hardlim, softlim, pid, verbose)
The server only has 1 GB of physical memory, so I thought changing "rlimit.as" (in /etc/opencpu/server.conf) to 1e9 instead of the standard 2e9 would fix the problem (I also tried 750000000 and 500000000), but nothing helped (of course I restarted the opencpu service again after each change).
I also think that this is not the problem, because I guess the server would use virtual memory when an operation uses more than one GB.
I think the problem has to do with RAppArmor. So I tried to disable it and restart opencpu, but the problem didn't vanish:
$ sudo aa-disable usr.bin.r
Disabling /etc/apparmor.d/usr.bin.r.
Traceback (most recent call last):
File "/usr/sbin/aa-disable", line 30, in
tool.cmd_disable()
File "/usr/lib/python3/dist-packages/apparmor/tools.py", line 148, in cmd_disable
raise apparmor.AppArmorException(cmd_info[1])
apparmor.common.AppArmorException: 'Warning: unable to find a suitable fs in /proc/mounts, is it mounted?\nUse --subdomainfs to override.\n'
So does anyone know what the problem could be here, or have any suggestions on where to look for a solution? (I already tried Google but didn't find anything helpful.)
I don't think any of the following is the cause of the problem, but since I'm not sure, I'll add these warnings anyway:
The only strange thing I encountered during the OpenCPU installation was this message (which appeared 4 times):
iptables v1.4.21: can't initialize iptables table `nat': Table does not exist (do you need to insmod?)
Perhaps iptables or your kernel needs to be upgraded.
But afterwards it said:
* Reloading nginx configuration nginx [ OK ]
OK
Setting up opencpu (1.4.4-trusty15) ...
Also when I tried to install RAppArmor separately, I got the following warning:
perl: warning: Please check that your locale settings:
LANGUAGE = (unset),
LC_ALL = (unset),
LC_CTYPE = "UTF-8",
LANG = (unset)
are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").
locale: Cannot set LC_CTYPE to default locale: No such file or directory
locale: Cannot set LC_ALL to default locale: No such file or directory
Selecting previously unselected package r-cran-rapparmor.
And also this one:
Warning: unable to find a suitable fs in /proc/mounts, is it mounted?\nUse --subdomainfs to override.\n
Thanks in advance!
It looks like your new hosting provider uses some sort of virtualization system with a shared kernel which limits all kinds of Linux functionality, including rlimit, iptables, and probably AppArmor. Is it an actual cloud host, or something you set up yourself?
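The "042stab" suffix in your kernel version is characteristic of OpenVZ/Virtuozzo containers, which share the host's kernel. A quick way to check (my suggestion, not guaranteed for every setup):
$ ls /proc/vz 2>/dev/null && echo "looks like an OpenVZ container"
$ grep securityfs /proc/mounts || echo "securityfs not mounted (AppArmor cannot work)"
AppArmor needs securityfs, which such shared kernels typically don't provide; that would match the "unable to find a suitable fs in /proc/mounts" error above.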
It would be helpful to debug this in R (outside of opencpu). On your server, start R in the console and type:
library(RAppArmor, lib="/usr/lib/opencpu/library")  # load from opencpu's library path
rlimit_as(1e9)     # cap address space (virtual memory) at ~1 GB
rlimit_fsize(1e9)  # cap created file size at ~1 GB
rlimit_cpu(1e5)    # cap CPU time at 1e5 seconds
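If the shared kernel is the culprit, each of these calls should fail with the same ENOSYS error; wrapping them in try() lets you see all the failures in one pass (a small variation on the snippet above):
library(RAppArmor, lib="/usr/lib/opencpu/library")
# try() keeps going after a failure so every rlimit error is reported
try(rlimit_as(1e9))
try(rlimit_fsize(1e9))
try(rlimit_cpu(1e5))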