How to run the Hazelcast command line console

Following this suggestion: Hazelcast access using CLI, I tried to run the Hazelcast console in the following way:
1) Downloaded file hazelcast-client-X.Y.Z.jar
2) Run
/path/to/java -cp "/path/to/hazelcast-client-X.Y.Z.jar" com.hazelcast.client.console.ClientConsoleApp
and I got
Error: Could not find or load main class com.hazelcast.client.console.ClientConsoleApp
Any suggestions?
Thanks

ClientConsoleApp is not included in hazelcast-client.jar. Use hazelcast-all.jar instead. I just checked the latest version and ClientConsoleApp works fine.
$ java -cp hazelcast-all-3.12.jar com.hazelcast.client.console.ClientConsoleApp
Apr 12, 2019 9:02:15 AM com.hazelcast.config.AbstractConfigLocator
INFO: Loading 'hazelcast-client-default.xml' from the classpath.
Apr 12, 2019 9:02:18 AM com.hazelcast.client.HazelcastClient
...
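Once the client connects, the console accepts simple commands against the cluster's distributed data structures: ns switches the active namespace, m.put and m.get operate on the IMap of that name, and m.size prints its size. A hedged sketch of a session (command names recalled from the 3.x console; run help at the prompt for the authoritative list, and key1/value1 are placeholders):
ns test
m.put key1 value1
m.get key1
m.size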

I wrote a simple shell script (https://github.com/YongJieYongJie/hazelcast-cli) to automate the process of:
Downloading the correct version of hazelcast-all-<version>.jar (containing the ClientConsoleApp class),
Creating the appropriate hazelcast-client.xml file for connecting to the cluster, and
Running the appropriate java command to actually connect to the cluster (a minimal sketch follows below).
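This sketch is not the linked script itself; the Maven Central URL layout, the version, and the member address 127.0.0.1:5701 are assumptions to adjust for your cluster:
#!/bin/sh
VERSION=3.12
JAR=hazelcast-all-$VERSION.jar
# Fetch the all-in-one jar if it is not already present.
[ -f "$JAR" ] || curl -fLO "https://repo1.maven.org/maven2/com/hazelcast/hazelcast-all/$VERSION/$JAR"
# A hazelcast-client.xml next to the jar takes precedence over hazelcast-client-default.xml.
cat > hazelcast-client.xml <<'EOF'
<hazelcast-client xmlns="http://www.hazelcast.com/schema/client-config">
    <network>
        <cluster-members>
            <address>127.0.0.1:5701</address>
        </cluster-members>
    </network>
</hazelcast-client>
EOF
# '.' puts the generated config on the classpath alongside the jar.
java -cp "$JAR:." com.hazelcast.client.console.ClientConsoleApp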
Hope this proves helpful.
(Note: This is based on discussion over at this question: Hazelcast access using CLI)

Related

Unable to communicate with the runtime for 'R' script in SQL Server 2017

I'm having trouble getting R to work on SQL Server 2017 on one server (I've successfully installed it on about 8 other servers). I've already installed the latest cumulative update.
When I execute a stored procedure that runs a simple hello world R script, I can see that LaunchPad.exe and rterm.exe are both running. After 60 seconds, however, I get the following error:
Msg 39012, Level 16, State 1, Line 0
Unable to communicate with the runtime for 'R' script. Please check the requirements of 'R' runtime.
STDERR message(s) from external script: Fatal error: creation of tmpfile failed -- set TMPDIR suitably?
This is the script that fails:
EXEC sp_execute_external_script
    @language = N'R', @script = N'print("hello")';
Any ideas on what I need to do to resolve this error?
The problem was that Named Pipes wasn't enabled for SQL Server. Enabling it and restarting the services solved my issue.
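To confirm the fix, you can re-run a hello-world script from the command line. A hedged one-liner, assuming a default local instance, Windows authentication, and sqlcmd on the PATH (print(1+1) just avoids nested quotes in the shell):
sqlcmd -S localhost -E -Q "EXEC sp_execute_external_script @language = N'R', @script = N'print(1+1)';"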
My assumption is that you applied the CU after the installation of Machine Learning Services? If so, the CU somehow messes up the folder permissions.
I wrote a blog post about how to fix it here. The blog post is about CU7, but it should apply to any CU.
I do not guarantee that it works, as I have seen other issues where the ML Services stop working; in those cases, what fixes it is a repair of the SQL installation.

Asterisk-Java can't find fastagi-mapping.properties

I'm using Asterisk and I'd like to emit a call from my Java app and then use an AGI script to control what happens. So I've got a first class that contacts the Asterisk server and uses an OriginateAction to start the call (this works well), and an AGI server that runs and should serve AGI requests. However, it doesn't work because it can't find the fastagi-mapping.properties file.
Here is my fastagi-mapping.properties:
alertcall.agi = AlertCallScript
(It only has this one entry.)
In the same folder, I have AlertCallScript.java (and asterisk-java.jar) that I compile like this:
javac -cp asterisk-java.jar:. AlertCallScript.java ExamplesAsteriskSettings.java
And then I start my AGI server using this (found in the doc):
java -cp asterisk-java.jar:. -jar asterisk-java.jar
When I emit my call, I get the following error in the AGI server output:
Jun 13, 2018 6:28:12 AM org.asteriskjava.fastagi.ResourceBundleMappingStrategy loadResourceBundle
INFO: Resource bundle 'fastagi-mapping' not found.
Jun 13, 2018 6:28:12 AM org.asteriskjava.fastagi.internal.AgiConnectionHandler run
SEVERE: No script configured for URL 'agi://localhost/alertcall.agi' (script 'alertcall.agi')
And I don't know why... I've been looking into this for more than an hour and I probably made a stupid mistake, though I can't find it.
Notes:
I use the classpath asterisk-java.jar:. so that both asterisk-java and the current folder, which contains the fastagi-mapping.properties file, are on the classpath; the file should therefore be found without any problem.
I have already tried deleting and recreating the file; it didn't change anything.
Please suggest.
Part of the problem is probably trying to use -cp and -jar in the same java command; this is not supported, and there are a number of questions on Stack Overflow about that (here is one).
You can use something like
java -cp asterisk-java.jar:. org.asteriskjava.fastagi.DefaultAgiServer
to start asterisk-java's DefaultAgiServer explicitly (which is what specifying -jar asterisk-java.jar is going to do anyway, if I remember the entry in the manifest correctly). Note that the class needs its fully qualified name when started this way.
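Putting it together, a hedged end-to-end sequence, assuming asterisk-java.jar, AlertCallScript.java, and fastagi-mapping.properties all sit in the current directory:
# Compile the AGI script with asterisk-java and '.' on the classpath.
javac -cp asterisk-java.jar:. AlertCallScript.java ExamplesAsteriskSettings.java
# Start the AGI server; '.' keeps fastagi-mapping.properties visible on the classpath.
java -cp asterisk-java.jar:. org.asteriskjava.fastagi.DefaultAgiServer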

Demos not working on Windows 10

I have been trying to run all the examples and demos provided by R3 Corda on my local machine, which runs Windows 10.
https://docs.corda.net/releases/release-V1.0/running-the-demos.html
Examples 1 and 2 work fine, but I am unable to run the demos. The nodes get deployed, but once I run them using the runnodes command from the command line, they open in terminals and immediately close. Can someone from the #r3corda #corda team help sort this out?
I was running into a similar issue on Win 10.
Running the following:
>java -jar corda.jar
Exception in thread "main" java.lang.UnsatisfiedLinkError: Could not load library. Reasons: [no jansi64-1.14 in java.library.path, no jansi-1.14 in java.library.path, no jansi in java.library.path,
C:\Users\<user>\AppData\Local\Temp\jansi-64-1-6925657630491746639.14: Access is denied]
Downloading the missing library to C:\windows resolved it.
Download Link:
http://www.java2s.com/Code/Jar/j/Downloadjansi14jar.htm
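Since the "Access is denied" part of the error points at the temp file jansi extracts its native DLL into, another thing worth trying (an assumption on my part, not a confirmed fix) is pointing the JVM at a temp directory you can definitely write to:
REM C:\corda\tmp is a placeholder; create the directory first.
>java -Djava.io.tmpdir=C:\corda\tmp -jar corda.jar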

Multiple versions of Spark on CDH 5.10: spark-submit is failing to launch

I have installed Spark 2.0 on CDH 5.10 by following the link https://www.cloudera.com/documentation/spark2/latest/topics/spark2_installing.html
After all the configuration, when I run spark2-submit --version it gives me the correct version, which is 2.0.
However, when I submit a Spark job, it first says:
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/fs/FSDataInputStream
This clearly indicates that the Hadoop libs are not on the classpath. Is something wrong with my installation of Spark 2? Also, once we add the jars with sparkExtraLibClasspath for driver and core, it then says SPARK_HADOOP_CONF is not set.
How can I verify my installation is correct?
I am also trying to understand where my spark2 conf dirs are.
I saw a few previous questions on Stack Overflow, like https://community.cloudera.com/t5/Cloudera-Manager-Installation/CHD-5-7-spark-shell-java-lang-ClassNotFoundException-org-apache/td-p/42209 and NoClassDefFoundError com.apache.hadoop.fs.FSDataInputStream when execute spark-shell, but these don't help.
I am using the spark2-shell and spark2-submit commands.
Some more investigation with https://community.cloudera.com/t5/Cloudera-Manager-Installation/CDH-5-5-pyspark-java-lang-NoClassDefFoundError-org-apache-hadoop/td-p/34424 suggests that if I can correctly set SPARK_EXTRA_LIB_PATH for spark2, then I can fix this issue. Can somebody guide me, please? Thanks.
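For reference, a missing org.apache.hadoop.fs.FSDataInputStream usually means Spark cannot see the Hadoop jars at all. The generic approach from Spark's own "Hadoop free" build documentation (a hedged sketch; whether it applies under CDH parcels, which normally manage this for you, is an assumption) is:
# Put the Hadoop jars on Spark's classpath, then re-test.
export SPARK_DIST_CLASSPATH=$(hadoop classpath)
spark2-submit --version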

How to submit jobs to spark master running locally

I am using R and Spark to run a simple example to test Spark.
I have a spark master running locally using the following:
spark-class org.apache.spark.deploy.master.Master
I can see the status page at http://localhost:8080/
Code:
system("spark-submit --packages com.databricks:spark-csv_2.10:1.0.3 --master local[*]")
suppressPackageStartupMessages(library(SparkR)) # Load the library
sc <- sparkR.session(master = "local[*]")
df <- as.DataFrame(faithful)
head(df)
Now this runs fine when I do the following (the code is saved as 'sparkcode.R'):
Rscript sparkcode.R
Problem:
But what happens is that a new Spark instance is created; I want R to use the existing master instance (I should see this as a completed job at http://localhost:8080/#completed-app).
P.S.: using Mac OS X, Spark 2.1.0 and R 3.3.2
A number of things:
If you use a standalone cluster, use the correct URL, which should be sparkR.session(master = "spark://hostname:port"). Both hostname and port depend on the configuration, but the standard port is 7077, and the hostname should default to the machine's hostname. This is the main problem.
Avoid using spark-class directly. That is what the $SPARK_HOME/sbin/ scripts are for (like start-master.sh). They are not crucial, but they handle small and tedious tasks for you.
The standalone master is only a resource manager. You have to start worker nodes as well (start-slave*; see the sketch after this list).
It is usually better to use bin/spark-submit, though it shouldn't matter much here.
spark-csv is no longer necessary in Spark 2.x, and even if it were, Spark 2.1 uses Scala 2.11 by default. Not to mention that 1.0.3 is extremely old (from around Spark 1.3 or so).
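A hedged sketch of those two points with the Spark 2.x scripts (default port 7077; $(hostname) assumes the master binds to the machine's hostname):
# Start the standalone master, then attach one worker to it.
$SPARK_HOME/sbin/start-master.sh
$SPARK_HOME/sbin/start-slave.sh spark://$(hostname):7077
# R then connects with: sparkR.session(master = "spark://<hostname>:7077")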
