I'm trying to run my R script within JavaFX. I use Renjin for this purpose, and it seems to work properly with statements I run internally. But I want to run an external R script. The project is set up with Maven, so the path should be easy, as the R script is in the resources folder. The path works when I load FXML files, so I'm pretty confused why it can't find my script.
Here's a short example:
package survey;

import javax.script.*;
import org.renjin.script.*;
import java.io.FileReader;

public class calcFunction {
    public static void main(String[] args) throws Exception {
        // create a script engine manager:
        RenjinScriptEngineFactory factory = new RenjinScriptEngineFactory();
        // create a Renjin engine:
        ScriptEngine engine = factory.getScriptEngine();
        engine.put("x", 4);
        engine.put("y", 5);
        engine.eval(new FileReader("/test.R"));
    }
}
Is something missing? Thanks in advance!
EDIT1:
With my FXML files it works with the "/" path like this:
root = FXMLLoader.load(getClass().getResource("/moduleDa.fxml"));
EDIT2:
Someone who deleted his comment proposed this:
engine.eval(new FileReader(new File(".").getAbsolutePath()+"/test.R"));
It works if the script is in the root directory, where the pom.xml file is located. @James_D made it work so the R script can be located in the resources folder - thanks a lot!
If your R script is bundled as part of the application, it can't be treated as a file - you need to treat it as a resource. Typically, you will deploy your application as a Jar file, and the resources will be elements within that jar file (they won't be files in their own right).
So just treat the R script as a resource and load it as such. I don't know the renjin framework, but I assume ScriptEngine here is a javax.script.ScriptEngine, in which case ScriptEngine.eval(...) takes a Reader as a parameter, and so (if your R script is located in the root of the class path) you can do
engine.eval(new InputStreamReader(getClass().getResourceAsStream("/test.R")));
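For a bit of robustness you can fail fast when the resource is missing and read it with an explicit charset. A minimal sketch of the same idea applied to the question's class (the null check, try-with-resources, and explicit UTF-8 charset are additions of mine, not part of the original answer):

import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import javax.script.ScriptEngine;
import org.renjin.script.RenjinScriptEngineFactory;

public class calcFunction {
    public static void main(String[] args) throws Exception {
        ScriptEngine engine = new RenjinScriptEngineFactory().getScriptEngine();
        engine.put("x", 4);
        engine.put("y", 5);
        // look the script up on the class path instead of the file system
        InputStream in = calcFunction.class.getResourceAsStream("/test.R");
        if (in == null) {
            throw new IllegalStateException("test.R not found on the class path");
        }
        try (InputStreamReader reader = new InputStreamReader(in, StandardCharsets.UTF_8)) {
            engine.eval(reader);
        }
    }
}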
Related
I'm trying to create an app based on Jetty 9.4.20 (embedded) and Vaadin Flow 14.0.12.
It is based on the very nice project vaadin14-embedded-jetty.
I want to package the app as one main JAR, with all dependency libs in a folder 'libs' next to it.
I removed maven-assembly-plugin and use maven-dependency-plugin and maven-jar-plugin instead. In maven-dependency-plugin I added a section <execution>get-dependencies</execution> where I unpack the directories META-INF/resources/ and META-INF/services/ from the Vaadin Flow libs into the resulting JAR.
In this case the app works fine. But if I comment out the <execution>get-dependencies</execution> section, the resulting package doesn't contain those directories and the app doesn't work.
It simply cannot serve some static files from the Vaadin Flow libs.
This error occurs only if I launch the packaged app with ...
$ java -jar vaadin14-embedded-jetty-1.0-SNAPSHOT.jar
... but from IntelliJ IDEA it launches correctly.
There was an opinion that Jetty is started with the wrong ClassLoader and cannot serve requests for static files that live inside the dependency JARs.
The META-INF/services/ files MUST be preserved from the Jetty libs.
That's important, because Jetty uses java.util.ServiceLoader, which discovers implementations through those files.
If you are merging the contents of multiple JAR files into a single JAR file, that's called an "uber jar".
There are many techniques to do this, but if you are using maven-assembly-plugin or maven-dependency-plugin to build this "uber jar", then you will not be merging critical files that have the same name across multiple JAR files.
Consider using maven-shade-plugin and its associated Resource Transformers to properly merge these files.
http://maven.apache.org/plugins/maven-shade-plugin/
http://maven.apache.org/plugins/maven-shade-plugin/examples/resource-transformers.html
The ServicesResourceTransformer is the one that merges META-INF/services/ files; use it.
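A META-INF/services/<interface> file is just a list of implementation class names, one per line, so a naive copy where the last JAR wins silently drops entries. As a rough sketch, the relevant pom.xml fragment could look like the following (the plugin and transformer class are real; the version number and the surrounding build section are assumptions to adapt to your project):

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>3.2.4</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <transformers>
          <!-- Concatenates META-INF/services/ files that share a name,
               instead of letting one JAR's copy overwrite the others. -->
          <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
        </transformers>
      </configuration>
    </execution>
  </executions>
</plugin>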
As for static content, that works fine, but you have to set up your Base Resource properly.
Looking at your source, you do the following ...
final URI webRootUri = ManualJetty.class.getResource("/webapp/").toURI();
final WebAppContext context = new WebAppContext();
context.setBaseResource(Resource.newResource(webRootUri));
That won't work reliably in 100% of cases (as you have noticed when running in the IDE vs the command line).
Class.getResource(String) is only reliable if you look up a file (not a directory).
Consider that the Jetty Project Embedded Cookbook recipes have techniques for this.
See:
WebAppContextFromClasspath.java
ResourceHandlerFromClasspath.java
DefaultServletFileServer.java
DefaultServletMultipleBases.java
XmlEnhancedServer.java
MultipartMimeUploadExample.java
Example:
// Figure out what path to serve content from
ClassLoader cl = ManualJetty.class.getClassLoader();
// We look for a file, as ClassLoader.getResource() is not
// designed to look for directories (we resolve the directory later)
URL f = cl.getResource("webapp/index.html");
if (f == null)
{
    throw new RuntimeException("Unable to find resource directory");
}
// Resolve file to directory
URI webRootUri = f.toURI().resolve("./").normalize();
System.err.println("WebRoot is " + webRootUri);
WebAppContext context = new WebAppContext();
context.setBaseResource(Resource.newResource(webRootUri));
I am trying to access a static resource (e.g. first.html) packed inside the same JAR file (testJetty.jar), which also has the class that starts the Jetty (v8) server (MainTest.java). I am unable to set the resource base correctly.
The structure of my jar file (testJetty.jar):
testJetty.jar
first.html
MainTest.java
It works fine on my local machine, but when I wrap it in a JAR file and then run it, it doesn't work, giving a "404: File not found" error.
I tried to set the resource base with the following values, all of which failed:
a) Tried setting it to .
resource_handler.setResourceBase("."); // Results in directory containing the jar file, D:\Work\eclipseworkspace\testJettyResult
b) Tried getting it from getResource
ClassLoader loader = this.getClass().getClassLoader();
File indexLoc = new File(loader.getResource("first.html").getFile());
String htmlLoc = indexLoc.getAbsolutePath();
resource_handler.setResourceBase(htmlLoc); // Results in D:\Work\eclipseworkspace\testJettyResult\file:\D:\Work\eclipseworkspace\testJettyResult\testJetty1.jar!\first.html
c) Tried getting the webdir
String webDir = this.getClass().getProtectionDomain()
.getCodeSource().getLocation().toExternalForm();
resource_handler.setResourceBase(webDir); // Results in D:/Work/eclipseworkspace/testJettyResult/testJetty1.jar
None of these 3 approaches worked.
Any help or alternative would be appreciated.
Thanks,
abbas
The solutions provided in this thread work, but I think some clarity on the solution could be useful.
If you are building a fat JAR and use the ProtectionDomain approach, you may hit some issues, because you are loading the whole JAR!
class.getProtectionDomain().getCodeSource().getLocation().toExternalForm();
So the better approach is the other solution provided:
contextHandler.setResourceBase(
YourClass.class
.getClassLoader()
.getResource("WEB-INF")
.toExternalForm());
The problem here is that if you are building a fat JAR, you are not really dumping your webapp resources into WEB-INF; they probably go into the root of the JAR. So a simple workaround is to create a folder XXX and use the second approach as follows:
contextHandler.setResourceBase(
YourClass.class
.getClassLoader()
.getResource("XXX")
.toExternalForm());
Or change your build tool to export the webapp files into that given directory. Maven may do this in a JAR for you, but Gradle does not.
As it turns out, I found a solution to my own problem. The third approach mentioned by Stephen in Embedded Jetty: how to use a .war that is included in the .jar from which Jetty starts? worked!
So I changed from a ResourceHandler to a WebAppContext, where the WebAppContext points to the same JAR (testJetty.jar), and it worked!
String webDir = MainTest.class.getProtectionDomain()
.getCodeSource().getLocation().toExternalForm(); // Results in D:/Work/eclipseworkspace/testJettyResult/testJetty.jar
WebAppContext webappContext = new WebAppContext(webDir, "/");
It looks like ClassLoader.getResource does not understand an empty string, ., or / as an argument. In my JAR file I had to move all the stuff to WEB-INF (any other wrapping dir will do). So the code looks like:
contextHandler.setResourceBase(EmbeddedJetty.class.getClassLoader().getResource("WEB-INF").toExternalForm());
so the context then looks like this:
ContextHandler:744 - Started o.e.j.w.WebAppContext#48b3806{/,jar:file:/Users/xxx/projects/dropbox/ui/target/ui-1.0-SNAPSHOT.jar!/WEB-INF,AVAILABLE}
I have a TestNG project. It doesn't have any main class; currently it runs via "Run As TestNG".
I want to export it as a runnable JAR (or plain JAR) so that anyone can just run a command from the command line and the test cases start running.
Could anyone help me out with this, or suggest any other way to deliver the code in runnable form?
I am not using Ant or Maven.
Thanks
I seem to have found the solution after a bit of googling. This works fine in Eclipse (Juno).
Say you have a TestNG file named 'Tests.java'. As you rightly pointed out, there won't be a class with a main method.
So we have to create a new Java class file under the same package. Let us name it 'MainOne.java'. This will have a class with a main method.
Here is the code you need:
import org.testng.TestNG;

public class MainOne {
    public static void main(String[] args) {
        TestNG testng = new TestNG();
        Class[] classes = new Class[]{Tests.class};
        testng.setTestClasses(classes);
        testng.run();
    }
}
Run 'MainOne.java' as a Java application. Then right-click on the package -> Export -> Runnable JAR [choose 'MainOne' as the launch configuration] -> Finish.
My current understanding is that, in order to benefit from the parallel niftiness of TestNG, one should use the static main method in org.testng's jar file when running the Java class from the command line rather than from inside Eclipse IDE.
The issue then becomes the classpath, which defines how Java finds all the JAR files. I found http://javarevisited.blogspot.com/2012/10/5-ways-to-add-multiple-jar-to-classpath-java.html to be the most useful, because it mentions the * wildcard: VERY helpful when you need to reference all the JAR files required for Selenium + TestNG + custom test suites.
This is my current Windows BAT file, and it works. ADV.jar contains my custom class but no main method.
setlocal
set r=d:\Apps\Selenium\
cd /d %~dp0
java -classpath %r%Downloaded\*;%r%MyCompany\ADV.jar; org.testng.TestNG .\testng-customsuite-adv.xml
pause
All the JAR files that I downloaded from public places went into my d:\Apps\Selenium\Downloaded folder. I put my custom ADV.jar file in d:\Apps\Selenium\MyCompany to keep it separate.
I created my ADV.jar file from Eclipse using Export Jar file and ignored warnings about a missing main method.
Aside: while this https://stackoverflow.com/a/16879386/424855 was very intriguing, I could not figure out how to make that work.
Here is a better way to do it.
You can just create a main method that lists all the test classes to be executed, as follows:
import org.testng.TestListenerAdapter;
import org.testng.TestNG;

public static void main(String[] args) {
    TestListenerAdapter tla = new TestListenerAdapter();
    TestNG testng = new TestNG();
    testng.setTestClasses(new Class[] { test_start.class });
    testng.addListener(tla);
    testng.run();
}
Here is the reference URL from the official testng website.
So I have a bunch of data that I want to load into the database from CSV. I've hacked together a solution that works in local development, but when I deploy to meteor.com, it no longer works.
I'm loading the CSV file from the folder /server/data/:
function readData(name) {
    var fs = __meteor_bootstrap__.require('fs');
    var path = __meteor_bootstrap__.require('path');
    var base = path.resolve('.');
    var data = fs.readFileSync(path.join(base, '/server/data/', name));
    return CSVToArray(data);
}
After I deployed to meteor.com, I got:
INFO Error: ENOENT, no such file or directory '/meteor/containers/98eb1286-120b-ee84-8e98-ce673fa2eab7/public/data/categories.csv'
at Object.openSync (fs.js:240:18)
at Object.readFileSync (fs.js:128:15)
at readData (app/server/models.js:10:16)
at app/server/categories.js:6:7
at /meteor/containers/98eb1286-120b-ee84-8e98-ce673fa2eab7/bundle/server/server.js:132:63
at Array.forEach (native)
at Function.<anonymous> (/meteor/containers/98eb1286-120b-ee84-8e98-ce673fa2eab7/bundle/server/underscore.js:76:11)
at /meteor/containers/98eb1286-120b-ee84-8e98-ce673fa2eab7/bundle/server/server.js:132:7
Any idea how I can get Meteor to see the CSV file after deployment?
I realize this question is old, but it still ranks high on certain keyword searches. So, if you're using Meteor 0.6.5+, you can use the new Assets API.
The issue is that Meteor only bundles up files that it knows about (i.e. JS/CSS/HTML, plus more depending on which packages you use) when it deploys.
Try putting the file you need in the public directory (this directory is exempt from the above rule).
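For Meteor 0.6.5+, a minimal sketch of the Assets approach mentioned above (it assumes the CSV is moved into the app's private/ directory; the file name is carried over from the question, and CSVToArray is the question's own helper):

// server-side only: files under private/ are bundled as assets
var data = Assets.getText('categories.csv'); // reads private/categories.csv
var rows = CSVToArray(data);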
Thanks to SamuelDavis's and Tom Coleman's tips, I ended up figuring out what the problem was. It turns out the bundled app is no longer laid out as client, public, and server. I debugged it by running meteor bundle to create a tarball, extracting it, and looking inside to find where the data folder was. Tom was also right that the data folder needed to be in the public folder in order to get bundled in.
It appears that the base directory is not in the same location that contains the file '/server/data/xxx.csv'.
Before you try anything else, log the base path after calling var base = path.resolve('.'). If that value is what you expected, log the files that appear in that directory. Again, if the files are what you expected, navigate into the /server folder, print out those directories, and so forth.
This should pinpoint which folder and/or directory is missing and indicate where you should place the CSV file in future.
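A quick sketch of that kind of logging (Node's fs and path modules are standard; the __meteor_bootstrap__.require mechanism is taken from the question's old Meteor setup):

var fs = __meteor_bootstrap__.require('fs');
var path = __meteor_bootstrap__.require('path');

var base = path.resolve('.');
console.log('base path: ' + base); // where the app is actually running from
console.log(fs.readdirSync(base)); // list what that directory really contains
// repeat readdirSync on subdirectories (e.g. 'server') until the CSV shows up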
Below is the driver code of a simple MapReduce program:
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
@SuppressWarnings("deprecation")
public class CsvParserDriver {
    @SuppressWarnings("deprecation")
    public static void main(String[] args) throws Exception
    {
        if (args.length != 2)
        {
            System.out.println("usage: [input] [output]");
            System.exit(-1);
        }

        JobConf conf = new JobConf(CsvParserDriver.class);
        Job job = new Job(conf);
        conf.setJobName("CsvParserDriver");

        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        conf.setMapperClass(CsvParserMapper.class);
        conf.setMapOutputKeyClass(IntWritable.class);
        conf.setMapOutputValueClass(Text.class);

        conf.setReducerClass(CsvParserReducer.class);
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(Text.class);

        conf.set("splitNode", "NUM_AE");

        JobClient.runJob(conf);
    }
}
I am running my code using the command below:
hadoop jar CsvParser.jar CsvParserDriver /user/sritamd/TestData /user/sritamd/output
(All the respective JARs and directories in the above command exist.)
I get the error:
Exception in thread "main" org.apache.hadoop.mapred.InvalidJobConfException: Output directory not set in JobConf.
You didn't create the HDFS input and output directories, as specified in the Apache Hadoop tutorial.
If you want to use a local directory, use file:///user/sritamd/TestData - i.e. add the FS prefix.
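For example, the HDFS input directory can be created and populated with the standard HDFS shell commands (the paths are taken from the question; the local file name is a placeholder). Note that the output directory should not be created in advance: Hadoop creates it itself and fails if it already exists.

hadoop fs -mkdir /user/sritamd/TestData                 # create the HDFS input directory
hadoop fs -put local-data.csv /user/sritamd/TestData    # upload the input file into it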
This might be caused by mixing the old API and the new API.
Here is my new-API Job configuration.
Step 1: import the new API lib
import org.apache.hadoop.mapreduce.Job
Step 2: do the configuration with a new-API Job.
val job = Job.getInstance(conf)
job.getConfiguration.set(TableOutputFormat.OUTPUT_TABLE, tableName)
job.setOutputFormatClass(classOf[TableOutputFormat[Put]])
Hope this can help you.
Try this
Configuration configuration = new Configuration();
Job job = new Job(configuration, "MyConfig");
then
FileInputFormat.setInputPaths(job, new Path(args[0]));
FileOutputFormat.setOutputPath(job, new Path(args[1]));
Your HDFS filesystem might not have been created yet. You first need to format it; then it can hold the input and output files for Hadoop:
/usr/local/hadoop/bin/hadoop namenode -format
Use this link: http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
and follow each step.
I think you need to set the input and output directories on conf instead of job, like:
FileInputFormat.setInputPaths(conf, new Path(args[0]));
FileOutputFormat.setOutputPath(conf, new Path(args[1]));
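Putting that together, a consistent old-API version of the driver might look like this (a sketch that assumes the question's CsvParserMapper and CsvParserReducer classes; the key point is that every setting, including the paths, goes on the JobConf that is actually submitted):

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class CsvParserDriver {
    public static void main(String[] args) throws Exception {
        if (args.length != 2) {
            System.out.println("usage: [input] [output]");
            System.exit(-1);
        }

        JobConf conf = new JobConf(CsvParserDriver.class);
        conf.setJobName("CsvParserDriver");

        // Set the paths on the same JobConf that is submitted below, so
        // "Output directory not set in JobConf" cannot happen.
        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));

        conf.setMapperClass(CsvParserMapper.class);
        conf.setMapOutputKeyClass(IntWritable.class);
        conf.setMapOutputValueClass(Text.class);

        conf.setReducerClass(CsvParserReducer.class);
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(Text.class);

        conf.set("splitNode", "NUM_AE");

        JobClient.runJob(conf);
    }
}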
If you are running Hadoop in standalone mode (without a cluster) to test the code, you don't need the fs prefix on the output path. You can initialize the Job and set the paths. The following code should work (make sure you are consistently using either Job (from org.apache.hadoop.mapreduce.Job) or JobConf (from org.apache.hadoop.mapred.JobConf)):
Job job = new Job();
job.setJobName("Job Name");
job.setJarByClass(MapReduceJob.class);
FileInputFormat.setInputPaths(job,new Path(args[0]));
FileOutputFormat.setOutputPath(job,new Path(args[1]));
job.setMapperClass(MaxTemperatureMapper.class);
job.setReducerClass(MaxTemperatureReducer.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(IntWritable.class);
System.exit(job.waitForCompletion(true) ? 0 : 1);
I had the same issue, but fixed it. I was using job.waitForCompletion(true), which caused Spark on HBase to crash when using saveAsNewAPIHadoopFile(...).
You should not wait for your job, since it is using the old Hadoop API instead of the new API.
First, make sure that your output directory does not already exist. If it exists, delete it.
Second, run your code in Eclipse; check whether it runs properly or gives an ArrayOutOfBounds warning.
Otherwise, check the libraries you inserted: make sure to insert all the CLIENT libraries, and check that your class is in a package.
If all of the above conditions are met, your job will execute.