Unable to write data to CSV using Rserve

I am able to perform remote command execution and function calling in an R script through Rserve in my Java application. But when my function tries to save a dataframe to a CSV file using
write.csv(MyData, file = "MyData.csv")
the MyData.csv file is not generated, and no error is shown. When I execute the same steps in the R console, it works fine.
The Rserve is running in my local machine itself and I am using the following to connect and execute:
RConnection connection = new RConnection();
connection.eval("makecsv()");
P.S. I've omitted the "source the R script" step above.
Just for reference, this is the dummy R script that I'm trying to run:
makecsv <- function() {
  x <- rnorm(10)
  y <- rnorm(10)
  df1 <- data.frame(x, y)
  write.csv(df1, file = "MyData.csv")
  return(df1)
}

You probably have to use an absolute path, something like this:
write.csv(MyData, file = "/var/MyData.csv")

This can happen if your Rserve is dead. Wrapping the call in a try-catch block with proper error handling can help in debugging.
This version works for me:
import org.rosuda.REngine.*;
import org.rosuda.REngine.Rserve.*;

public class Main {
    public static void main(String[] args) {
        try {
            RConnection c = new RConnection();
            REXP getwd = c.eval("getwd()");
            System.out.println(getwd.asString());
            c.eval("source(\"main.R\")");
            c.eval("makecsv()");
            c.close();
        } catch (REngineException | REXPMismatchException e) {
            e.printStackTrace();
        }
    }
}
The output is:
C:/Users/moon/Documents
Process finished with exit code 0
And in Documents folder I have the MyData.csv.

Here are two suggestions:
Try parsing the string into an expression first, e.g. with connection.parseAndEval("makecsv()")
Check the working dir with connection.eval("getwd()")
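Related to the suggestions above: Rserve's eval can swallow R-side errors silently, so a handy debugging trick is to wrap the call in R's try() and inspect the result in Java. A minimal sketch, assuming the same makecsv() function has already been sourced:
REXP result = connection.eval("try(makecsv(), silent = TRUE)");
if (result.inherits("try-error")) {
    // The "try-error" object carries the R error message.
    System.err.println("R error: " + result.asString());
}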

Related

Unable to create folder with RCurl

I'm having trouble using the ftpUpload() function of RCurl to upload a file to a non-existent folder on an SFTP server. I want the folder to be created if it's not there, using the ftp.create.missing.dirs option. Here's my current code:
.opts <- list(ftp.create.missing.dirs = TRUE)
ftpUpload(what = "test.txt",
          to = "sftp://ftp.testserver.com:22/newFolder/existingfile.txt",
          userpwd = paste(user, pwd, sep = ":"), .opts = .opts)
It doesn't seem to be working as I get the following error:
* Initialized password authentication
* Authentication complete
* Failed to close libssh2 file
I can upload a file to an existing folder with success; it's just when the folder isn't there that I get the error.
The problem seems to be due to the fact that you are trying to create the new folder, as seen in this question: Create an remote directory using SFTP / RCurl
The error message comes from code you can find in the Microsoft R Open Git repository:
case SSH_SFTP_CLOSE:
  if(sshc->sftp_handle) {
    rc = libssh2_sftp_close(sshc->sftp_handle);
    if(rc == LIBSSH2_ERROR_EAGAIN) {
      break;
    }
    else if(rc < 0) {
      infof(data, "Failed to close libssh2 file\n");
    }
    sshc->sftp_handle = NULL;
  }
  if(sftp_scp)
    Curl_safefree(sftp_scp->path);
In this code, rc is the return value of the libssh2_sftp_close function (more info here: https://www.libssh2.org/libssh2_sftp_close_handle.html), which tries to close the handle on the nonexistent directory, resulting in the error.
Try using curlPerform, e.g.:
curlPerform(url = "ftp.xxx.xxx.xxx.xxx/", postquote = "MkDir /newFolder/", userpwd = "user:pass")

Can't get the names of the files that exist in a specific directory using File or InputStream [duplicate]

I have a resources folder/package in the root of my project. I "don't" want to load a certain file; if I wanted to load a certain file, I would use class.getResourceAsStream and I would be fine! What I actually want to do is load a "folder" within the resources folder, loop over the files inside that folder, and get a stream to each one and read in its content... Assume that the file names are not determined before runtime... What should I do? Is there a way to get a list of the files inside a folder in your jar file?
Notice that the jar file with the resources is the same jar file from which the code is being run...
Finally, I found the solution:
final String path = "sample/folder";
final File jarFile = new File(getClass().getProtectionDomain().getCodeSource().getLocation().getPath());
if (jarFile.isFile()) { // Run with JAR file
    final JarFile jar = new JarFile(jarFile);
    final Enumeration<JarEntry> entries = jar.entries(); // gives ALL entries in jar
    while (entries.hasMoreElements()) {
        final String name = entries.nextElement().getName();
        if (name.startsWith(path + "/")) { // filter according to the path
            System.out.println(name);
        }
    }
    jar.close();
} else { // Run with IDE
    final URL url = Launcher.class.getResource("/" + path);
    if (url != null) {
        try {
            final File apps = new File(url.toURI());
            for (File app : apps.listFiles()) {
                System.out.println(app);
            }
        } catch (URISyntaxException ex) {
            // never happens
        }
    }
}
The second block only works when you run the application from the IDE (not from the jar file); you can remove it if you don't need that.
Try the following.
Make the resource path "<PathRelativeToThisClassFile>/<ResourceDirectory>". E.g. if your class is com.abc.package.MyClass and your resource files are within src/com/abc/package/resources/:
URL url = MyClass.class.getResource("resources/");
if (url == null) {
    // error - missing folder
} else {
    File dir = new File(url.toURI());
    for (File nextFile : dir.listFiles()) {
        // Do something with nextFile
    }
}
You can also use
URL url = MyClass.class.getResource("/com/abc/package/resources/");
The following code returns the wanted "folder" as a Path, regardless of whether it is inside a jar or not:
private Path getFolderPath() throws URISyntaxException, IOException {
    URI uri = getClass().getClassLoader().getResource("folder").toURI();
    if ("jar".equals(uri.getScheme())) {
        FileSystem fileSystem = FileSystems.newFileSystem(uri, Collections.emptyMap(), null);
        return fileSystem.getPath("path/to/folder/inside/jar");
    } else {
        return Paths.get(uri);
    }
}
Requires Java 7+.
I know this was many years ago, but just for other people who come across this topic:
What you could do is use the getResourceAsStream() method with the directory path; the returned input stream will have all the file names from that dir. After that you can concat the dir path with each file name and call getResourceAsStream for each file in a loop.
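A minimal sketch of that approach (MyClass and /folder are placeholders). Note this behaviour is classloader-dependent: it typically works for filesystem classpath entries but is not guaranteed from inside a jar, so test it in your packaging:
import java.io.BufferedReader;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

try (InputStream dir = MyClass.class.getResourceAsStream("/folder");
     BufferedReader reader = new BufferedReader(new InputStreamReader(dir, StandardCharsets.UTF_8))) {
    String fileName;
    while ((fileName = reader.readLine()) != null) { // one resource name per line
        try (InputStream file = MyClass.class.getResourceAsStream("/folder/" + fileName)) {
            // read the file's content here
        }
    }
}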
I had the same problem at hand while attempting to load some Hadoop configurations from resources packed in the jar... in both the IDE and the jar (release version).
I found java.nio.file.DirectoryStream to work best for iterating over directory contents on both the local filesystem and the jar:
String fooFolder = "/foo/folder";
....
ClassLoader classLoader = FooClass.class.getClassLoader();
URI uri = null;
try {
    uri = classLoader.getResource(fooFolder).toURI();
} catch (URISyntaxException e) {
    throw new FooException(e.getMessage());
} catch (NullPointerException e) {
    throw new FooException(e.getMessage());
}
if (uri == null) {
    throw new FooException("something is wrong, directory or files missing");
}
/** I want to know if I am inside the jar or working in the IDE */
if (uri.getScheme().contains("jar")) {
    /** jar case */
    try {
        URL jar = FooClass.class.getProtectionDomain().getCodeSource().getLocation();
        // jar.toString() begins with "file:", which needs to be trimmed off
        Path jarFile = Paths.get(jar.toString().substring("file:".length()));
        FileSystem fs = FileSystems.newFileSystem(jarFile, null);
        DirectoryStream<Path> directoryStream = Files.newDirectoryStream(fs.getPath(fooFolder));
        for (Path p : directoryStream) {
            InputStream is = FooClass.class.getResourceAsStream(p.toString());
            performFooOverInputStream(is);
            /** your logic here **/
        }
    } catch (IOException e) {
        throw new FooException(e.getMessage());
    }
} else {
    /** IDE case */
    Path path = Paths.get(uri);
    try {
        DirectoryStream<Path> directoryStream = Files.newDirectoryStream(path);
        for (Path p : directoryStream) {
            InputStream is = new FileInputStream(p.toFile());
            performFooOverInputStream(is);
        }
    } catch (IOException e) {
        throw new FooException(e.getMessage());
    }
}
Another solution: you can do it using a ResourceLoader, like this:
import org.springframework.core.io.Resource;
import org.springframework.core.io.ResourceLoader;
import org.apache.commons.io.FileUtils;

@Autowired
private ResourceLoader resourceLoader;
...
Resource resource = resourceLoader.getResource("classpath:/path/to/your/dir");
File file = resource.getFile();
Iterator<File> fi = FileUtils.iterateFiles(file, null, true);
while (fi.hasNext()) {
    load(fi.next());
}
If you are using Spring you can use org.springframework.core.io.support.PathMatchingResourcePatternResolver and deal with Resource objects rather than files. This works when running inside and outside of a Jar file.
PathMatchingResourcePatternResolver r = new PathMatchingResourcePatternResolver();
Resource[] resources = r.getResources("/myfolder/*");
Then you can access the data using getInputStream, and get the filename from getFilename; see the sketch below.
Note that it will still fail if you try to use getFile while running from a jar.
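A minimal sketch of that loop (the /myfolder/* pattern comes from the snippet above):
import java.io.InputStream;
import org.springframework.core.io.Resource;
import org.springframework.core.io.support.PathMatchingResourcePatternResolver;

PathMatchingResourcePatternResolver resolver = new PathMatchingResourcePatternResolver();
for (Resource res : resolver.getResources("/myfolder/*")) {
    try (InputStream in = res.getInputStream()) { // works from inside or outside a jar
        System.out.println(res.getFilename());
        // read from 'in' here
    }
}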
As the other answers point out, once the resources are inside a jar file, things get really ugly. In our case, this solution:
https://stackoverflow.com/a/13227570/516188
works very well in the tests (since when the tests run, the code is not packed in a jar file), but doesn't work when the app actually runs normally. So what I've done is... I hardcode the list of the files in the app, but I have a test which reads the actual list from disk (it can, since that works in tests) and fails if the actual list doesn't match the list the app returns. A sketch of such a test is below.
That way I have simple code in my app (no tricks), and thanks to the test I'm sure I didn't forget to add a new entry to the list.
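A minimal sketch of such a guard test (App.RESOURCE_FILES and src/main/resources/folder are hypothetical names for the app's hardcoded list and the resource directory):
import static org.junit.Assert.assertEquals;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;
import java.util.stream.Collectors;
import org.junit.Test;

public class ResourceListTest {
    @Test
    public void hardcodedListMatchesDisk() throws Exception {
        // What is actually on disk (readable here because tests don't run from the jar).
        List<String> onDisk = Files.list(Paths.get("src/main/resources/folder"))
                .map(p -> p.getFileName().toString())
                .sorted()
                .collect(Collectors.toList());
        // The list the app hardcodes; the assert fails when someone forgets to update it.
        List<String> hardcoded = App.RESOURCE_FILES.stream().sorted().collect(Collectors.toList());
        assertEquals(onDisk, hardcoded);
    }
}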
The code below gets .yaml files from a custom resource directory:
ClassLoader classLoader = this.getClass().getClassLoader();
URI uri = classLoader.getResource(directoryPath).toURI();
if ("jar".equalsIgnoreCase(uri.getScheme())) {
    Pattern pattern = Pattern.compile("^.+" + "/classes/" + directoryPath + "/.+.yaml$");
    log.debug("pattern {} ", pattern.pattern());
    ApplicationHome home = new ApplicationHome(SomeApplication.class);
    JarFile file = new JarFile(home.getSource());
    Enumeration<JarEntry> jarEntries = file.entries();
    while (jarEntries.hasMoreElements()) {
        JarEntry entry = jarEntries.nextElement();
        Matcher matcher = pattern.matcher(entry.getName());
        if (matcher.find()) {
            InputStream in = file.getInputStream(entry);
            // work on the stream
        }
    }
} else {
    // When the Spring Boot application is executed through a non-jar strategy, e.g. through the IDE or as a WAR.
    String path = uri.getPath();
    File[] files = new File(path).listFiles();
    for (File file : files) {
        if (file != null) {
            try {
                InputStream is = new FileInputStream(file);
                // work on the stream
            } catch (Exception e) {
                log.error("Exception while parsing yaml file {} : {}", file.getAbsolutePath(), e.getMessage());
            }
        } else {
            log.warn("File object is null while parsing yaml file");
        }
    }
}
It took me 2-3 days to get this working. In order to have the same URL work both from the jar and locally, the URL (or path) needs to be a relative path from the repository root.
...meaning the location of your file or folder from your src folder.
It could be "/main/resources/your-folder/" or "/client/notes/somefile.md".
Whatever it is, in order for your JAR file to find it, the URL must be a relative path from the repository root:
it must be "src/main/resources/your-folder/" or "src/client/notes/somefile.md".
Now you get the drill. Luckily for IntelliJ IDEA users, you can get the correct path with a right-click on the folder or file -> Copy Path/Reference... -> Path From Repository Root (this is it).
Last, paste it and do your thing.
Simple... use OSGi. In OSGi you can iterate over your Bundle's entries with findEntries and findPaths.
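A minimal sketch of the findEntries variant (MyClass stands in for any class living in the bundle):
import java.net.URL;
import java.util.Enumeration;
import org.osgi.framework.Bundle;
import org.osgi.framework.FrameworkUtil;

Bundle bundle = FrameworkUtil.getBundle(MyClass.class);
// Recursively list every entry under /folder inside this bundle.
Enumeration<URL> entries = bundle.findEntries("/folder", "*", true);
while (entries != null && entries.hasMoreElements()) {
    URL entry = entries.nextElement();
    // entry.openStream() gives you the content
}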
Inside my jar file I had a folder called upload; this folder had three other text files inside it, and I needed to create exactly the same folder and files outside of the jar file. I used the code below:
URL inputUrl = getClass().getResource("/upload/blabla1.txt");
File dest1 = new File("upload/blabla1.txt");
FileUtils.copyURLToFile(inputUrl, dest1);
URL inputUrl2 = getClass().getResource("/upload/blabla2.txt");
File dest2 = new File("upload/blabla2.txt");
FileUtils.copyURLToFile(inputUrl2, dest2);
URL inputUrl3 = getClass().getResource("/upload/blabla3.txt");
File dest3 = new File("upload/blabla3.txt");
FileUtils.copyURLToFile(inputUrl3, dest3);

Is there anyway to test run EDIDisassembler component with pipeline testing tool?

First, let me explain why I need to do this.
I have an inbound port with an EDIReceive pipeline configuration. It receives EDI X12 837I files and disassembles them into 837I messages.
One file failed with the error description below:
The following elements are not closed: ns0:X12_00501_837_I. Line 1, position 829925.
It looks like the incoming file has some structural issue that keeps the disassembler from producing the messages correctly. But the error itself doesn't help locate the issue, and no TA1 or 999 was generated to help us locate it either.
So I created a little console application using the Pipeline Component Test Library, to run this file through the EdiDisassembler pipeline component and see if I can find what causes the error.
The code is pretty straightforward:
namespace TestEDIDasm
{
    using System;
    using System.IO;
    using Microsoft.BizTalk.Edi.Pipelines;
    using Microsoft.BizTalk.Message.Interop;
    using Winterdom.BizTalk.PipelineTesting;
    using Microsoft.BizTalk.Edi.BatchMarker;

    class Program
    {
        static void Main(string[] args)
        {
            var ediDasmComp = new EdiDisassembler();
            ediDasmComp.UseIsa11AsRepetitionSeparator = true;
            ediDasmComp.XmlSchemaValidation = true;
            var batchMaker = new PartyBatchMarker();
            IBaseMessage testingMessage = MessageHelper.LoadMessage(@"c:\temp\{1C9420EB-5C54-43E5-9D9D-7297DE65B36C}_context.xml");
            ReceivePipelineWrapper testPipelineWrapper = PipelineFactory.CreateEmptyReceivePipeline();
            testPipelineWrapper.AddComponent(ediDasmComp, PipelineStage.Disassemble);
            testPipelineWrapper.AddComponent(batchMaker, PipelineStage.ResolveParty);
            var outputMessages = testPipelineWrapper.Execute(testingMessage);
            if (outputMessages.Count <= 0)
            {
                Console.WriteLine("No output message");
                Console.ReadKey();
                return;
            }
            var msg = outputMessages[0];
            StreamReader sr = new StreamReader(msg.BodyPart.Data);
            Console.WriteLine(sr.ReadToEnd());
            Console.ReadKey();
        }
    }
}
I added some breakpoints but ended up with the following error in the message context:
"X12 service schema not found"
Clearly, the EdiDisassembler component relies on some other stuff to do its job.
Now to my questions:
Is there any way to make EdiDisassembler work in a testing environment?
Is there any other way to debug/trace the disassembler component processing a file, other than the Pipeline Component Test Library?
Theoretically, sure, but you have to replicate a lot of the engine context that exists during pipeline execution. The EDI components have issues running inside Orchestrations, so it's likely a pretty tall order.
Have you tried a Preserve Interchange Pipeline with the Fallback Settings? That's about as simple as you can get with the EDI Disassembler.

groovy upload jar to nexus

I have a jar file (custom) which I need to publish to a Sonatype Nexus repository from a Groovy script.
The jar is located at some path on the machine where the Groovy script runs (for instance: c:\temp\module.jar).
My Nexus repo URL is http://:/nexus/content/repositories/
On this repo I have a folder structure like: folder1->folder2->folder3
While publishing my jar I need to, in folder3:
Create a new directory with the module's revision (my Groovy script knows this revision)
Upload the jar to this directory
Create pom, md5 and sha1 files for the uploaded jar
After several days of investigation I still have no idea how to write such a script, but this way looks much cleaner to me than direct uploading.
I found http://groovy.codehaus.org/Using+Ant+Libraries+with+AntBuilder and some other stuff (a stackoverflow non-script solution).
I understand how to create ivy.xml in my Groovy script, but I don't understand how to create build.xml and ivysettings.xml on the fly and set the whole system up to work.
Could you please help me understand the Groovy way?
UPDATE:
I found that the following command works fine for me:
curl -v -F r=thirdparty -F hasPom=false -F e=jar -F g=<my_groupId> -F a=<my_artifactId> -F v=<my_artifactVersion> -F p=jar -F file=@module.jar -u admin:admin123 http://<my_nexusServer>:8081/nexus/service/local/repositories
As I understand it, curl performs a POST request to the Nexus service. Am I correct?
And now I'm trying to build the HTTP POST request using Groovy's HTTPBuilder.
How should I transform the curl command parameters into Groovy's HTTPBuilder request?
Found a way to do this with the Groovy HTTPBuilder, based on info from Sonatype and a few other sources.
This works with http-builder version 0.7.2 (not with earlier versions), and also needs an extra dependency: 'org.apache.httpcomponents:httpmime:4.2.1'.
The example also uses basic auth against Nexus.
import groovyx.net.http.HTTPBuilder
import groovyx.net.http.HttpResponseException
import groovyx.net.http.Method
import groovyx.net.http.ContentType
import org.apache.http.HttpRequest
import org.apache.http.HttpRequestInterceptor
import org.apache.http.entity.mime.MultipartEntity
import org.apache.http.entity.mime.content.FileBody
import org.apache.http.entity.mime.content.StringBody
import org.apache.http.protocol.HttpContext

class NexusUpload {
    def uploadArtifact(Map artifact, File fileToUpload, String user, String password) {
        def path = "/service/local/artifact/maven/content"
        HTTPBuilder http = new HTTPBuilder("http://my-nexus.org/")
        String basicAuthString = "Basic " + "$user:$password".bytes.encodeBase64().toString()
        http.client.addRequestInterceptor(new HttpRequestInterceptor() {
            void process(HttpRequest httpRequest, HttpContext httpContext) {
                httpRequest.addHeader('Authorization', basicAuthString)
            }
        })
        try {
            http.request(Method.POST, ContentType.ANY) { req ->
                uri.path = path
                MultipartEntity entity = new MultipartEntity()
                entity.addPart("hasPom", new StringBody("false"))
                entity.addPart("file", new FileBody(fileToUpload))
                entity.addPart("a", new StringBody("my-artifact-id"))
                entity.addPart("g", new StringBody("my-group-id"))
                entity.addPart("r", new StringBody("my-repository"))
                entity.addPart("v", new StringBody("my-version"))
                req.entity = entity
                response.success = { resp, reader ->
                    if (resp.status == 201) {
                        println "success!"
                    }
                }
            }
        } catch (HttpResponseException e) {
            e.printStackTrace()
        }
    }
}
Ivy is an open source library, so one approach would be to call the classes directly. The problem with that approach is that there are few examples of how to invoke Ivy programmatically.
Since Groovy has excellent support for generating XML, I favour the slightly dumber approach of creating the files I understand as an Ivy user.
The following example is designed to publish files into Nexus, generating both the ivy and ivysettings files:
import groovy.xml.NamespaceBuilder
import groovy.xml.MarkupBuilder

// Methods
// =======
def generateIvyFile(String fileName) {
    def file = new File(fileName)
    file.withWriter { writer ->
        xml = new MarkupBuilder(writer)
        xml."ivy-module"(version: "2.0") {
            info(organisation: "org.dummy", module: "dummy")
            publications() {
                artifact(name: "dummy", type: "pom")
                artifact(name: "dummy", type: "jar")
            }
        }
    }
    return file
}

def generateSettingsFile(String fileName) {
    def file = new File(fileName)
    file.withWriter { writer ->
        xml = new MarkupBuilder(writer)
        xml.ivysettings() {
            settings(defaultResolver: "central")
            credentials(host: "myrepo.com", realm: "Sonatype Nexus Repository Manager", username: "deployment", passwd: "deployment123")
            resolvers() {
                ibiblio(name: "central", m2compatible: true)
                ibiblio(name: "myrepo", root: "http://myrepo.com/nexus", m2compatible: true)
            }
        }
    }
    return file
}

// Main program
// ============
def ant = new AntBuilder()
def ivy = NamespaceBuilder.newInstance(ant, 'antlib:org.apache.ivy.ant')

generateSettingsFile("ivysettings.xml").deleteOnExit()
generateIvyFile("ivy.xml").deleteOnExit()

ivy.resolve()
ivy.publish(resolver: "myrepo", pubrevision: "1.0", publishivy: false) {
    artifacts(pattern: "build/poms/[artifact].[ext]")
    artifacts(pattern: "build/jars/[artifact].[ext]")
}
Notes:
More complex? Perhaps... however, if you're not generating the ivy file (using it to manage your dependencies), you can easily call the makepom task to generate the Maven POM files prior to uploading into Nexus.
The REST APIs for Nexus work fine. I find them a little cryptic, and of course a solution that uses them cannot support more than one repository manager (Nexus is not the only repository manager technology available).
The "deleteOnExit" File method call ensures the working files are cleaned up properly.

Flyway output to SQL File

Is it possible to output the db migration to an SQL file instead of directly invoking database changes in Flyway?
Most of the time this will not be needed, as with Flyway the DB migrations themselves will already be written in SQL.
Yes, it's possible, and as far as I am concerned the feature is an absolute must for DBAs who don't want to allow Flyway in prod.
I made do with modifying code from here (it's a dry-run command for Flyway); you can add a FileWriter and write out the migrationDetails:
https://github.com/killbill/killbill/commit/996a3d5fd096525689dced825eac7a95a8a7817e
I did it like so... Project structure (just copied out of killbill's project, with the package renamed to flywaydr):
.
./main
./main/java
./main/java/com
./main/java/com/flywaydr
./main/java/com/flywaydr/CapturingMetaDataTable.java
./main/java/com/flywaydr/CapturingSqlMigrationExecutor.java
./main/java/com/flywaydr/DbMigrateWithDryRun.java
./main/java/com/flywaydr/MigrationInfoCallback.java
./main/java/com/flywaydr/Migrator.java
./main/java/org
./main/java/org/flywaydb
./main/java/org/flywaydb/core
./main/java/org/flywaydb/core/FlywayWithDryRun.java
In Migrator.java add (implement the callback and put it in DbMigrateWithDryRun.java):
} else if ("dryRunMigrate".equals(operation)) {
    MigrationInfoCallback mcb = new MigrationInfoCallback();
    flyway.dryRunMigrate();
    MigrationInfoImpl[] migrationDetails = mcb.getPendingMigrationDetails();
    if (migrationDetails.length > 0) {
        writeMasterScriptToFile(migrationDetails);
    }
}
Then, to write the output to a file, something like this (scriptspathloc, sqlStatements and stuff are placeholders you'd fill in):
private static void writeMasterScriptToFile(MigrationInfoImpl[] migrationDetails) {
    FileWriter fw = null;
    try {
        String masterScriptLoc = "path/to/file";
        fw = new FileWriter(masterScriptLoc);
        LOG.info("Writing output to " + masterScriptLoc);
        for (final MigrationInfoImpl migration : migrationDetails) {
            Path file = Paths.get(migration.getResolvedMigration().getPhysicalLocation());
            // if you want to copy the actual script files parsed by flyway:
            Files.copy(file, Paths.get(new StringBuilder(scriptspathloc).append(File.separator).append(file.getFileName().toString()).toString()), REPLACE_EXISTING);
        }
        // or just get the sql:
        for (final SqlStatement sqlStatement : sqlStatements) {
            // sqlStatement.getSql();
        }
        fw.write(stuff.toString());
    } catch (Exception e) {
        LOG.error("Could not write to file, io exception was thrown.", e);
    } finally {
        try { fw.close(); } catch (Exception e) { LOG.error("Could not close file writer.", e); }
    }
}
One last thing to mention: I compile and package this into a jar "with dependencies" (aka a fatjar) via Maven (Google "assembly plugin" + "jar with dependencies") and run it via a command like the one below; or you can include it as a dependency and call it via the mvn exec:exec goal, which is something I had success with as well.
$ java -jar /path/to/flywaydr-fatjar.jar dryRunMigrate -regular.flyway.configs -etc -etc
I didn't find a way, so I switched to MyBatis Migrations. It looks quite nice.
