I'm trying to get the spark.sas7bdat package to work so that I can read SAS tables into Spark. For some reason, it seems to make Spark fail.
For example, the following code works fine and launches Spark:
Sys.setenv(JAVA_HOME = "/Library/Java/JavaVirtualMachines/zulu-8.jdk/Contents/Home")
library(sparklyr)
sc <- spark_connect(master = "local", version = "2.0.1")
However, if I simply load the spark.sas7bdat package, this fails:
Sys.setenv(JAVA_HOME = "/Library/Java/JavaVirtualMachines/zulu-8.jdk/Contents/Home")
library(sparklyr)
library(spark.sas7bdat)
sc <- spark_connect(master = "local", version = "2.0.1")
I get the following error:
Error in spark_connect_gateway(gatewayAddress, gatewayPort, sessionId,
: Gateway in localhost:8880 did not respond. Try running
options(sparklyr.log.console = TRUE) followed by sc <- spark_connect(...) for more debugging info.
I'm running this on an M1 MacBook Air with R 4.2.1. I have Maven installed via Homebrew.
Any suggestions?
Others seem to have the same issue, see: https://githubmemory.com/repo/bnosac/spark.sas7bdat/issues/9
Running options(sparklyr.log.console = TRUE) gives me the following:
Ivy Default Cache set to: /Users/henrydehe/.ivy2/cache
The jars for the packages stored in: /Users/henrydehe/.ivy2/jars
:: loading settings :: url = jar:file:/Users/henrydehe/spark/spark-2.0.1-bin-hadoop2.7/jars/ivy-2.4.0.jar!/org/apache/ivy/core/settings/ivysettings.xml
saurfang#spark-sas7bdat added as a dependency
:: resolving dependencies :: org.apache.spark#spark-submit-parent;1.0
confs: [default]
:: resolution report :: resolve 460ms :: artifacts dl 1ms
:: modules in use:
---------------------------------------------------------------------
| | modules || artifacts |
| conf | number| search|dwnlded|evicted|| number|dwnlded|
---------------------------------------------------------------------
| default | 1 | 0 | 0 | 0 || 0 | 0 |
---------------------------------------------------------------------
:: problems summary ::
:::: WARNINGS
module not found: saurfang#spark-sas7bdat;2.0.0-s_2.11
==== local-m2-cache: tried
file:/Users/henrydehe/.m2/repository/saurfang/spark-sas7bdat/2.0.0-s_2.11/spark-sas7bdat-2.0.0-s_2.11.pom
-- artifact saurfang#spark-sas7bdat;2.0.0-s_2.11!spark-sas7bdat.jar:
file:/Users/henrydehe/.m2/repository/saurfang/spark-sas7bdat/2.0.0-s_2.11/spark-sas7bdat-2.0.0-s_2.11.jar
==== local-ivy-cache: tried
/Users/henrydehe/.ivy2/local/saurfang/spark-sas7bdat/2.0.0-s_2.11/ivys/ivy.xml
-- artifact saurfang#spark-sas7bdat;2.0.0-s_2.11!spark-sas7bdat.jar:
/Users/henrydehe/.ivy2/local/saurfang/spark-sas7bdat/2.0.0-s_2.11/jars/spark-sas7bdat.jar
==== central: tried
https://repo1.maven.org/maven2/saurfang/spark-sas7bdat/2.0.0-s_2.11/spark-sas7bdat-2.0.0-s_2.11.pom
-- artifact saurfang#spark-sas7bdat;2.0.0-s_2.11!spark-sas7bdat.jar:
https://repo1.maven.org/maven2/saurfang/spark-sas7bdat/2.0.0-s_2.11/spark-sas7bdat-2.0.0-s_2.11.jar
==== spark-packages: tried
http://dl.bintray.com/spark-packages/maven/saurfang/spark-sas7bdat/2.0.0-s_2.11/spark-sas7bdat-2.0.0-s_2.11.pom
-- artifact saurfang#spark-sas7bdat;2.0.0-s_2.11!spark-sas7bdat.jar:
http://dl.bintray.com/spark-packages/maven/saurfang/spark-sas7bdat/2.0.0-s_2.11/spark-sas7bdat-2.0.0-s_2.11.jar
::::::::::::::::::::::::::::::::::::::::::::::
:: UNRESOLVED DEPENDENCIES ::
::::::::::::::::::::::::::::::::::::::::::::::
:: saurfang#spark-sas7bdat;2.0.0-s_2.11: not found
::::::::::::::::::::::::::::::::::::::::::::::
:::: ERRORS
Server access error at url http://dl.bintray.com/spark-packages/maven/saurfang/spark-sas7bdat/2.0.0-s_2.11/spark-sas7bdat-2.0.0-s_2.11.pom (java.net.SocketException: Unexpected end of file from server)
Server access error at url http://dl.bintray.com/spark-packages/maven/saurfang/spark-sas7bdat/2.0.0-s_2.11/spark-sas7bdat-2.0.0-s_2.11.jar (java.net.SocketException: Unexpected end of file from server)
:: USE VERBOSE OR DEBUG MESSAGE LEVEL FOR MORE DETAILS
Exception in thread "main" java.lang.RuntimeException: [unresolved dependency: saurfang#spark-sas7bdat;2.0.0-s_2.11: not found]
at org.apache.spark.deploy.SparkSubmitUtils$.resolveMavenCoordinates(SparkSubmit.scala:1076)
at org.apache.spark.deploy.SparkSubmit$.prepareSubmitEnvironment(SparkSubmit.scala:294)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:158)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:124)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Error in spark_connect_gateway(gatewayAddress, gatewayPort, sessionId, :
Gateway in localhost:8880 did not respond.
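Looking at the log, the only repository that actually fails is the old spark-packages mirror at dl.bintray.com, which has been shut down. One workaround I could try (untested sketch; it assumes sparklyr passes sparklyr.shell.* config entries through to spark-submit, and that the saurfang artifact is still published at https://repos.spark-packages.org) would be to add the current spark-packages repository before connecting:
Sys.setenv(JAVA_HOME = "/Library/Java/JavaVirtualMachines/zulu-8.jdk/Contents/Home")
library(sparklyr)
library(spark.sas7bdat)
# Untested assumption: sparklyr.shell.* entries become spark-submit flags, so this
# adds --repositories https://repos.spark-packages.org and keeps the
# saurfang#spark-sas7bdat dependency from being resolved via dl.bintray.com.
config <- spark_config()
config[["sparklyr.shell.repositories"]] <- "https://repos.spark-packages.org"
sc <- spark_connect(master = "local", version = "2.0.1", config = config)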
After some trials, I was able to install the Rmpi package on my computer using the following command:
R CMD INSTALL -l /storage/home/***/.R Rmpi_0.6-7.tar.gz --configure-args="--with-Rmpi-type=OPENMPI --disable-dlopen --with-Rmpi-include=/gpfs/group/RISE/sw7/openmpi_4.1.4_gcc-9.3.1/include --with-Rmpi-libpath=/gpfs/group/RISE/sw7/openmpi_4.1.4_gcc-9.3.1/lib"
I tried to run the following test code:
# Load the R MPI package if it is not already loaded.
if (!is.loaded("mpi_initialize")) {
    library("Rmpi")
}
ns <- mpi.universe.size() - 1
mpi.spawn.Rslaves(nslaves=ns)
#
# In case R exits unexpectedly, have it automatically clean up
# resources taken up by Rmpi (slaves, memory, etc...)
.Last <- function(){
    if (is.loaded("mpi_initialize")){
        if (mpi.comm.size(1) > 0){
            print("Please use mpi.close.Rslaves() to close slaves.")
            mpi.close.Rslaves()
        }
        print("Please use mpi.quit() to quit R")
        .Call("mpi_finalize")
    }
}
# Tell all slaves to return a message identifying themselves
mpi.bcast.cmd( id <- mpi.comm.rank() )
mpi.bcast.cmd( ns <- mpi.comm.size() )
mpi.bcast.cmd( host <- mpi.get.processor.name() )
mpi.remote.exec(paste("I am",mpi.comm.rank(),"of",mpi.comm.size()))
# Test computations
x <- 5
x <- mpi.remote.exec(rnorm, x)
length(x)
x
# Tell all slaves to close down, and exit the program
mpi.close.Rslaves(dellog = FALSE)
mpi.quit()
On my HPC I run the following:
qsub -A open -l walltime=6:00:00 -l nodes=4:ppn=4:stmem -I
module use /gpfs/group/RISE/sw7/modules
module load openmpi/4.1.4-gcc.9.3.1 r/4.0.3
mpirun -np 4 Rscript "codes/test/test4.R"
But then I get the following error, indicating that only one MPI process is available:
--------------------------------------------------------------------------
By default, for Open MPI 4.0 and later, infiniband ports on a device
are not used by default. The intent is to use UCX for these devices.
You can override this policy by setting the btl_openib_allow_ib MCA parameter
to true.
Local host: comp-sc-0222
Local adapter: mlx4_0
Local port: 1
--------------------------------------------------------------------------
--------------------------------------------------------------------------
By default, for Open MPI 4.0 and later, infiniband ports on a device
are not used by default. The intent is to use UCX for these devices.
You can override this policy by setting the btl_openib_allow_ib MCA parameter
to true.
Local host: comp-sc-0222
Local adapter: mlx4_0
Local port: 1
--------------------------------------------------------------------------
--------------------------------------------------------------------------
By default, for Open MPI 4.0 and later, infiniband ports on a device
are not used by default. The intent is to use UCX for these devices.
You can override this policy by setting the btl_openib_allow_ib MCA parameter
to true.
Local host: comp-sc-0222
Local adapter: mlx4_0
Local port: 1
--------------------------------------------------------------------------
--------------------------------------------------------------------------
By default, for Open MPI 4.0 and later, infiniband ports on a device
are not used by default. The intent is to use UCX for these devices.
You can override this policy by setting the btl_openib_allow_ib MCA parameter
to true.
Local host: comp-sc-0222
Local adapter: mlx4_0
Local port: 1
--------------------------------------------------------------------------
--------------------------------------------------------------------------
WARNING: There was an error initializing an OpenFabrics device.
Local host: comp-sc-0222
Local device: mlx4_0
--------------------------------------------------------------------------
--------------------------------------------------------------------------
WARNING: There was an error initializing an OpenFabrics device.
Local host: comp-sc-0222
Local device: mlx4_0
--------------------------------------------------------------------------
--------------------------------------------------------------------------
WARNING: There was an error initializing an OpenFabrics device.
Local host: comp-sc-0222
Local device: mlx4_0
--------------------------------------------------------------------------
--------------------------------------------------------------------------
WARNING: There was an error initializing an OpenFabrics device.
Local host: comp-sc-0222
Local device: mlx4_0
--------------------------------------------------------------------------
Error in mpi.comm.spawn(slave = system.file("Rslaves.sh", package = "Rmpi"), :
Choose a positive number of slaves.
Calls: mpi.spawn.Rslaves -> mpi.comm.spawn
Execution halted
Error in mpi.comm.spawn(slave = system.file("Rslaves.sh", package = "Rmpi"), :
Choose a positive number of slaves.
Calls: mpi.spawn.Rslaves -> mpi.comm.spawn
Execution halted
Error in mpi.comm.spawn(slave = system.file("Rslaves.sh", package = "Rmpi"), :
Choose a positive number of slaves.
Calls: mpi.spawn.Rslaves -> mpi.comm.spawn
Execution halted
Error in mpi.comm.spawn(slave = system.file("Rslaves.sh", package = "Rmpi"), :
Choose a positive number of slaves.
Calls: mpi.spawn.Rslaves -> mpi.comm.spawn
Execution halted
I have tried specifying different values for -np but I still get the same error. What could be the cause here?
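The 'Choose a positive number of slaves' error means that ns <- mpi.universe.size() - 1 evaluated to 0, i.e. each process only sees an MPI universe of size 1. A minimal check of what Rmpi sees under mpirun (check_universe.R is a hypothetical file name, not part of my original setup):
# check_universe.R (hypothetical); run with: mpirun -np 4 Rscript check_universe.R
library(Rmpi)
cat("mpi.universe.size() =", mpi.universe.size(), "\n")  # expected 4, not 1
mpi.quit()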
============================================================
(EDIT)
It seems that my original command to load the modules also loads intel/19.1.2 and mkl/2020.3. If I unload them, I do see that OMPI_UNIVERSE_SIZE=4.
[****@comp-sc-0220 work]$ module purge
[****@comp-sc-0220 work]$ module load openmpi/4.1.4-gcc.9.3.1 r/4.0.3
[****@comp-sc-0220 work]$ module list
Currently Loaded Modules:
1) openmpi/4.1.4-gcc.9.3.1 2) intel/19.1.2 3) mkl/2020.3 4) r/4.0.3
[****@comp-sc-0220 work]$ mpirun -np 4 env | grep OMPI_UNIVERSE_SIZE
[****@comp-sc-0220 work]$ type mpirun; mpirun --version; mpirun -np 1 env | grep OMPI
mpirun is /opt/aci/intel/compilers_and_libraries_2020.2.254/linux/mpi/intel64/bin/mpirun
Intel(R) MPI Library for Linux* OS, Version 2019 Update 8 Build 20200624 (id: 4f16ad915)
Copyright 2003-2020, Intel Corporation.
LMOD_FAMILY_COMPILER_VERSION=19.1.2
LMOD_FAMILY_COMPILER=intel
[****@comp-sc-0220 work]$ module purge
[****@comp-sc-0220 work]$ module load openmpi/4.1.4-gcc.9.3.1 r/4.0.3
[****@comp-sc-0220 work]$ module unload intel mkl
[****@comp-sc-0220 work]$ module list
Currently Loaded Modules:
1) openmpi/4.1.4-gcc.9.3.1 2) r/4.0.3
[****@comp-sc-0220 work]$ mpirun -np 4 env | grep OMPI_UNIVERSE_SIZE
OMPI_UNIVERSE_SIZE=4
OMPI_UNIVERSE_SIZE=4
OMPI_UNIVERSE_SIZE=4
OMPI_UNIVERSE_SIZE=4
[****@comp-sc-0220 work]$ type mpirun; mpirun --version; mpirun -np 1 env | grep OMPI
mpirun is /gpfs/group/RISE/sw7/openmpi_4.1.4_gcc-9.3.1/bin/mpirun
mpirun (Open MPI) 4.1.4
Report bugs to http://www.open-mpi.org/community/help/
OMPI_MCA_pmix=^s1,s2,cray,isolated
OMPI_COMMAND=env
OMPI_MCA_orte_precondition_transports=954e2ae0a9569e46-2223294369d728a3
OMPI_MCA_orte_local_daemon_uri=4134338560.0;tcp://10.102.201.220:58039
OMPI_MCA_orte_hnp_uri=4134338560.0;tcp://10.102.201.220:58039
OMPI_MCA_mpi_oversubscribe=0
OMPI_MCA_orte_app_num=0
OMPI_UNIVERSE_SIZE=4
OMPI_MCA_orte_num_nodes=1
OMPI_MCA_shmem_RUNTIME_QUERY_hint=mmap
OMPI_MCA_orte_bound_at_launch=1
OMPI_MCA_ess=^singleton
OMPI_MCA_orte_ess_num_procs=1
OMPI_COMM_WORLD_SIZE=1
OMPI_COMM_WORLD_LOCAL_SIZE=1
OMPI_MCA_orte_tmpdir_base=/tmp
OMPI_MCA_orte_top_session_dir=/tmp/ompi.comp-sc-0220.26954
OMPI_MCA_orte_jobfam_session_dir=/tmp/ompi.comp-sc-0220.26954/pid.8212
OMPI_NUM_APP_CTX=1
OMPI_FIRST_RANKS=0
OMPI_APP_CTX_NUM_PROCS=1
OMPI_MCA_initial_wdir=/storage/work/k/****
OMPI_MCA_orte_launch=1
OMPI_MCA_ess_base_jobid=4134338561
OMPI_MCA_ess_base_vpid=0
OMPI_COMM_WORLD_RANK=0
OMPI_COMM_WORLD_LOCAL_RANK=0
OMPI_COMM_WORLD_NODE_RANK=0
OMPI_MCA_orte_ess_node_rank=0
OMPI_FILE_LOCATION=/tmp/ompi.comp-sc-0220.26954/pid.8212/0/0
But if I run the same test4.R again, I get the following error:
/gpfs/group/RISE/sw7/R-4.0.3-intel-19.1.2-mkl-2020.3/R-4.0.3/../install/lib64/R/bin/exec/R: error while loading shared libraries: libiomp5.so: cannot open shared object file: No such file or directory
--------------------------------------------------------------------------
Primary job terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
--------------------------------------------------------------------------
/gpfs/group/RISE/sw7/R-4.0.3-intel-19.1.2-mkl-2020.3/R-4.0.3/../install/lib64/R/bin/exec/R: error while loading shared libraries: libiomp5.so: cannot open shared object file: No such file or directory
/gpfs/group/RISE/sw7/R-4.0.3-intel-19.1.2-mkl-2020.3/R-4.0.3/../install/lib64/R/bin/exec/R: error while loading shared libraries: libiomp5.so: cannot open shared object file: No such file or directory
/gpfs/group/RISE/sw7/R-4.0.3-intel-19.1.2-mkl-2020.3/R-4.0.3/../install/lib64/R/bin/exec/R: error while loading shared libraries: libiomp5.so: cannot open shared object file: No such file or directory
--------------------------------------------------------------------------
mpirun detected that one or more processes exited with non-zero status, thus causing
the job to be terminated. The first process to do so was:
Process name: [[63743,1],0]
Exit code: 127
--------------------------------------------------------------------------
============================================================
(EDIT 2)
I changed my module load command again to module load openmpi/4.1.4-gcc.9.3.1 r/4.0.5-gcc-9.3.1. With this newer version of R, I ran my test4.R script again with mpirun -np 4 Rscript "codes/test/test4.R". It now returns a new error message:
[1] "/storage/home/k/kxk5678/.R"
[2] "/gpfs/group/RISE/sw7/R-4.0.5-gcc-9.3.1/install/lib64/R/library"
[1] "/storage/home/k/kxk5678/.R"
[2] "/gpfs/group/RISE/sw7/R-4.0.5-gcc-9.3.1/install/lib64/R/library"
[1] "/storage/home/k/kxk5678/.R"
[2] "/gpfs/group/RISE/sw7/R-4.0.5-gcc-9.3.1/install/lib64/R/library"
[1] "/storage/home/k/kxk5678/.R"
[2] "/gpfs/group/RISE/sw7/R-4.0.5-gcc-9.3.1/install/lib64/R/library"
[1] 4
[1] 4
[1] 4
[1] 4
--------------------------------------------------------------------------
All nodes which are allocated for this job are already filled.
--------------------------------------------------------------------------
Error in mpi.comm.spawn(slave = system.file("Rslaves.sh", package = "Rmpi"), :
MPI_ERR_SPAWN: could not spawn processes
Calls: mpi.spawn.Rslaves -> mpi.comm.spawn
Execution halted
Error in mpi.comm.spawn(slave = system.file("Rslaves.sh", package = "Rmpi"), :
MPI_ERR_SPAWN: could not spawn processes
Calls: mpi.spawn.Rslaves -> mpi.comm.spawn
Execution halted
Error in mpi.comm.spawn(slave = system.file("Rslaves.sh", package = "Rmpi"), :
MPI_ERR_SPAWN: could not spawn processes
Calls: mpi.spawn.Rslaves -> mpi.comm.spawn
Execution halted
Error in mpi.comm.spawn(slave = system.file("Rslaves.sh", package = "Rmpi"), :
MPI_ERR_SPAWN: could not spawn processes
Calls: mpi.spawn.Rslaves -> mpi.comm.spawn
Execution halted
--------------------------------------------------------------------------
Primary job terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun detected that one or more processes exited with non-zero status, thus causing
the job to be terminated. The first process to do so was:
Process name: [[62996,1],1]
Exit code: 1
--------------------------------------------------------------------------
Install the pbdMPI package in an R session on the login node and run the following translation of the Rmpi test code to pbdMPI:
library(pbdMPI)
ns <- comm.size()
# Tell all R sessions to return a message identifying themselves
id <- comm.rank()
ns <- comm.size()
host <- system("hostname", intern = TRUE)
comm.cat("I am", id, "on", host, "of", ns, "\n", all.rank = TRUE)
# Test computations
x <- 5
x <- rnorm(x)
comm.print(length(x))
comm.print(x, all.rank = TRUE)
finalize()
You run it the same way as the Rmpi version: mpirun -np 4 Rscript your_new_script_file.
Spawning MPI processes (as in the Rmpi example) was appropriate when running on clusters of workstations, but on an HPC cluster the prevalent way to program with MPI is SPMD: single program, multiple data. SPMD means that your code is a generalization of a serial code that is able to run as several copies of itself cooperating with each other.
In the above example, cooperation happens only in the printing (the comm.* functions). There is no manager/master, just several R sessions running the same code (usually computing something different based on comm.rank()) and cooperating/communicating via MPI. This is the prevalent way of large-scale parallel computing on HPC clusters.
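To make the rank-based cooperation concrete, here is a minimal SPMD sketch in the same style (the file name spmd_sum.R and the sum-of-1:100 example are hypothetical, not part of the original test): each rank works on its own slice of the data, and the partial results are combined with an MPI reduction.
# spmd_sum.R (hypothetical file name); run with: mpirun -np 4 Rscript spmd_sum.R
library(pbdMPI)
my_rank <- comm.rank()   # 0 .. comm.size() - 1
n_ranks <- comm.size()
# Each rank picks its own slice of 1:100 based purely on its rank.
my_part <- seq(my_rank + 1, 100, by = n_ranks)
my_sum  <- sum(my_part)
# Cooperation happens through MPI: combine the partial sums across all ranks.
total <- allreduce(my_sum, op = "sum")   # every rank receives 5050
comm.cat("Rank", my_rank, "summed", length(my_part), "values; total =", total, "\n",
         all.rank = TRUE)
finalize()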
Here is my /etc/salt/master config:
#GitFS
gitfs_provider: pygit2
gitfs_base: DEVELOPMENT
gitfs_env_whitelist:
  - base
fileserver_backend:
  - git
gitfs_remotes:
  - ssh://git@github.com/myrepo/salt-states.git:
    - pubkey: /root/.ssh/my.pub
    - privkey: /root/.ssh/my
    - mountpoint: salt:///srv/salt/salt-states
Here is my directory structure for the repo:
.
|-- README.md
|-- formulas
|   `-- test
|       |-- test.sls
`-- top.sls
Here is my very basic top.sls:
base:
  '*':
    - test
If I try to run a highstate on my test node I get:
[root@saltmaster:/etc/salt] salt -v '*' state.highstate
Executing job with jid 1234567890
-------------------------------------------
test-minion.domain:
----------
ID: states
Function: no.None
Result: False
Comment: No Top file or external nodes data matches found.
Started:
Duration:
Changes:
Summary for test-minion.domain
------------
Succeeded: 0
Failed: 1
------------
Total states run: 1
Total run time: 0.000 ms
I'm not sure why this isn't working and would appreciate any help. I've tried applying just test.sls to see if the top file was the issue, but I got this:
[root@saltmaster:/etc/salt] salt -v '*' state.sls test
Executing job with jid 1234567890
-------------------------------------------
test-minion.domain:
Data failed to compile:
----------
No matching sls found for 'test' in env 'base'
I had a similar problem, which was due to the cache being out of sync and not updating. If I tried to run:
salt-run fileserver.update
I got:
[WARNING ] Update lock file is present for gitfs remote 'git@github.com:mention-me/Salt.git', skipping. If this warning persists, it is possible that the update process was interrupted, but the lock could also have been manually set. Removing /var/cache/salt/master/gitfs/7d8d9790a933949777fd5a58284b8850/.git/update.lk or running 'salt-run cache.clear_git_lock gitfs type=update' will allow updates to continue for this remote.
Deleting the lock file specified in the warning and re-running the above command fixed the problem.
I talked to the folks on the SaltStack IRC channel and someone helped me fix the problem. It turns out that adding a mountpoint was breaking everything; removing the mountpoint entry from gitfs_remotes fixed it. Credit goes to:
[12:20] == realname : Thomas Phipps
[12:20] == channels : #salt
[12:20] == server : orwell.freenode.net [NL]
[12:20] == : is using a secure connection
[12:20] == account : whytewolf
[12:20] == End of WHOIS
Good afternoon,
I have installed the Cosmos Generic Enabler, following the BigData Analysis - Installation and Administration Guide. When I reached 'Step 7: applying Puppet' and executed the commands, the following errors appeared in the puppet.err file:
Error: Could not prefetch yumrepo provider 'inifile': Section 'openvz-utils' is already defined, cannot re-define in /etc/yum.repos.d/openvz.repo
Description: there is a conflict between the repository section titles of the files /etc/yum.repos.d/cosmos-openvz.repo and /etc/yum.repos.d/openvz.repo:
cat /etc/yum.repos.d/cosmos-openvz.repo
[openvz-utils]
...
[openvz-kernel-rhel6]
...
cat /etc/yum.repos.d/openvz.repo
[openvz-utils]
...
[openvz-kernel-rhel6]
...
[openvz-kernel-rhel6-testing]
...
Solution: I changed the section titles in the file /etc/yum.repos.d/openvz.repo, for example [openvz-utils] to [openvz-utils_1].
Error: Could not prefetch database_grant provider 'mysql': Execution of '/usr/bin/mysql mysql -Be describe user' returned 1: ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2)
Description: the file mysql.sock was not found in the folder /var/lib/mysql/.
Solution: I installed mysql-server.x86_64:
yum install mysql-server.x86_64
After the installation finished, I restarted the service:
/etc/init.d/mysqld stop
/etc/init.d/mysqld start
Error: Could not update: Execution of '/usr/bin/yum -d 0 -e 0 -y list vzstats' returned 1: Error: Cannot retrieve repository metadata (repomd.xml) for repository: ambari. Please verify its path and try again
Description: this error appears on the Master node machine. It is caused by the configuration of the file [COSMOS_TMP_PATH]/puppet/modules/cosmos/manifests/hieradata/my_environment/common.yaml, indicated in 'Step 6: Puppet configuration'; specifically, by the URL that uses the IP 130.206.81.65.
Solution: in the file [COSMOS_TMP_PATH]/puppet/modules/cosmos/manifests/hieradata/my_environment/common.yaml, change the line:
ambari::params::repo_url: 'http://130.206.81.65/cosmos/ambari/'
to:
ambari::params::repo_url: 'http://public-repo-1.hortonworks.com/ambari/centos6/1.x/GA'
Error: Could not update: Execution of '/usr/bin/yum -d 0 -e 0 -y list vzstats' returned 1: Error: Cannot retrieve repository metadata (repomd.xml) for repository: cosmos-libvirt. Please verify its path and try again
Description: it is the same problem as the previous error. The difficulty in this case is that I cannot simply modify the line:
cosmos::params::cosmos_repo_deps_url: 'http://130.206.81.65/cosmos/rpms/cosmos-deps'
in the file [COSMOS_TMP_PATH]/puppet/modules/cosmos/manifests/hieradata/my_environment/common.yaml, because this line is used in several files:
cat /etc/yum.repos.d/cosmos-libvirt.repo
[cosmos-libvirt]
name=Cosmos LibVirt with OpenVZ - v1.0.5 - NO PolKIT
baseurl=http://130.206.81.65/cosmos/rpms/cosmos-deps//libvirt
gpgcheck=0
priority=10
enabled=1
cat /etc/yum.repos.d/cosmos-openvz.repo
[openvz-utils]
name=OpenVZ utilities
baseurl=http://130.206.81.65/cosmos/rpms/cosmos-deps//OpenVZ/openvz-utils
enabled=1
gpgcheck=0
priority=1
[openvz-kernel-rhel6]
name=OpenVZ RHEL6-based kernel
baseurl=http://130.206.81.65/cosmos/rpms/cosmos-deps//OpenVZ/openvz-kernel-rhel6
enabled=1
gpgcheck=0
priority=1
It also does not work to simply modify the previous file, because executing the command (indicated in 'Step 7: applying Puppet'):
puppet apply --debug --verbose \
--modulepath [COSMOS_TMP_PATH]/puppet/modules/:[COSMOS_TMP_PATH]/puppet/modules_third_party/ \
--environment my_environment --hiera_config [COSMOS_TMP_PATH]/puppet/modules/cosmos/manifests/hiera.yaml \
--manifestdir [COSMOS_TMP_PATH]/puppet/modules/cosmos/manifests/ [COSMOS_TMP_PATH]/puppet/modules/cosmos/manifests/site.pp \
> puppet.out 2> puppet.err
erases the modification.
Solution: https://github.com/telefonicaid/fiware-cosmos-platform/issues/4
I need help with the error:
Error: /Stage[main]/Ambari::Server::Config/Augeas[ambari-config-repoinfo]: Could not evaluate: Saving failed, see debug
Could you give me a hand with this last error?
Thank you in advance.
PS: apologies if this is badly written.
I am setting up a system to connect to an AWS Redshift database from Python. I think there is something wrong in the Python script, because I can connect via isql. I've installed all the relevant packages, and I am able to connect via isql as follows:
$ isql rndredshift readonly ***** -v
+---------------------------------------+
| Connected! |
| |
| sql-statement |
| help [tablename] |
| quit |
| |
+---------------------------------------+
SQL> quit
However, my python script is failing to connect. Here's the script:
import pyodbc
import sys
import pandas.io.sql as psql  # assumed: the script uses psql.read_sql below but omitted this import

def main():
    redshift_conn_str = assemble_connection_string(
        Driver='{PostgreSQL}',
        Server='10.191.4.97',
        ServerName='rndredshift',
        Port='5439',
        Database='prod',
        Uid='readonly',
        Pwd='*******'
    )
    print("===========")
    print(redshift_conn_str)
    print("===========")
    new_conn2 = pyodbc.connect(redshift_conn_str)
    print(psql.read_sql('select top 10 * from rawdb.raw_imprequest_20150101', new_conn2))

def assemble_connection_string(**kwargs):
    return ';'.join([k + '=' + v for (k, v) in kwargs.items()])

if __name__ == '__main__':
    sys.exit(main())
Here's the output:
===========
Uid=readonly;Database=prod;ServerName=rndredshift;Driver={PostgreSQL}; Server=10.191.4.97;Pwd=********;Port=5439
===========
Traceback (most recent call last):
File "test_redshift.py", line 24, in <module>
sys.exit(main())
File "test_redshift.py", line 17, in main
new_conn2 = pyodbc.connect(redshift_conn_str)
pyodbc.Error: ('IM002', '[IM002] [unixODBC][Driver Manager]Data source name not found, and no default driver specified (0) (SQLDriverConnectW)')
The PostgreSQL driver is installed:
$ odbcinst -q -d
[PostgreSQL]
[MySQL]
And the data source is configured:
$ odbcinst -q -s
[rndredshift]
If you're using DSNs, you're going to need to specify that in your connection string. Also, if you want to use DSN-less connections, I believe the keyword is SERVER and not SERVERNAME.
Try this connection string?
Uid=readonly;Database=prod;DSN=rndredshift;Driver={PostgreSQL};Pwd=********;
Make sure you specify the full server name and port in odbc.ini as well. Also, since you're using PostgreSQL, any reason you're not using the native PostgreSQL driver?
https://wiki.postgresql.org/wiki/Psycopg
Good luck!
Also, I've been perplexed about how to obtain and install the PostgreSQL driver. When I installed unixODBC, the odbcinst.ini file was created and contained an entry for the PostgreSQL driver that looked like this:
[PostgreSQL]
Description = ODBC for PostgreSQL
Driver = /usr/lib/psqlodbc.so
Setup = /usr/lib/libodbcpsqlS.so
Driver64 = /usr/lib64/psqlodbc.so
Setup64 = /usr/lib64/libodbcpsqlS.so
FileUsage = 1
However, the files for Driver and Driver64 were not on the system. So I then installed postgresql-odbc, which gave me the missing libraries. Is there a better way to do this? As I mentioned earlier, isql works fine, so I'm still thinking it's a Python issue.
I decided to try using the psycopg2 package, and I got a connection to work! Here's my script:
import sys
import psycopg2

def main():
    conn_string = "host='10.191.4.97' dbname='prod' user='readonly' password='****' port='5439'"
    print("===========")
    print(conn_string)
    print("===========")
    new_conn2 = psycopg2.connect(conn_string)
    print("Connected using psycopg2!")

if __name__ == '__main__':
    sys.exit(main())
So, while I'm happy that I can connect, the question still remains about pyodbc and the PostgreSQL connection string. Thoughts?
Here's the connection string:
Uid=readonly;Database=prod;ServerName=rndredshift;Driver={PostgreSQL}; Server=10.191.4.97;Pwd=********;Port=5439
Using DSN instead of ServerName didn't work.
I am trying to install https://github.com/roots/bedrock-ansible to get a Bedrock deployment (http://roots.io/wordpress-stack/) running.
When I run "vagrant up", after some time I get the error:
TASK: [capistrano-setup | Setup deploy group] *********************************
skipping: [default]
TASK: [capistrano-setup | Setup deploy user] **********************************
skipping: [default]
TASK: [capistrano-setup | Adding public key to server] ************************
fatal: [default] => could not locate file in lookup: ~/.ssh/id_rsa.pub
FATAL: all hosts have already failed -- aborting
PLAY RECAP ********************************************************************
to retry, use: --limit @/Users/johannes/site.retry
default : ok=46 changed=16 unreachable=1 failed=0
Ansible failed to complete successfully. Any error output should be
visible above. Please fix these errors and try again.
I do not have a clue how I can fix this. Do you have an idea?
It seems the role is trying to find your local public key. It should be in the location in the error message '~/.ssh/id_rsa.pub', but it's not. So either you don't have one, or you keep it in another location.
If you're not familiar with generating SSH keys you probably don't have one. I personally like the GitHub help page for this: https://help.github.com/articles/generating-ssh-keys/
(you only have to perform steps 1 and 2).
If you do have SSH keys, but in a different location, the capistrano-install role in bedrock uses some variables:
deploy_user: deploy
deploy_keys:
- "~/.ssh/id_rsa.pub"
So you can set (multiple) public key files in the deploy_keys list and they will be added to the deploy_user's authorized keys.
All this is needed because Capistrano will use the deploy user to connect to the remote server later. http://blakesmith.me/2010/02/08/understanding-public-key-private-key-concepts.html