I am new to Splunk, so I will try to be as clear as possible. I wanted to test the visualization of NetworkX graphs in the Splunk 3D Graph Network Topology App. I was able to load the CSV file of the graph successfully, and I can see the data and the graph visualization. However, when I run the community detection algorithm, it shows me the following error:
Unknown search command: 'fit'
Can somebody please help me fix this issue? Thanks.
Make sure you have the prerequisite apps installed, especially the Machine Learning Toolkit, which provides the fit command. See https://splunkbase.splunk.com/app/4611/#/details.
I followed all the steps to instrument, run, and analyse the obtained output (gmon.out), but the resulting text file shows the analysis of the execution of the main task only.
I've read about similar problems on Linux, but I'm running on Windows.
Has anyone experienced something similar?
I am trying to train a TensorFlow/Keras/TFRuns model and tune hyperparameters on Google Cloud ML. I wish to do this from my laptop, following the example here:
https://blogs.rstudio.com/tensorflow/posts/2018-01-24-keras-fraud-autoencoder/
The issue is that, because I have installed some packages from sources outside of CRAN (e.g. SparkR, assertthat, aws.s3, et al.), I keep getting an error stating "Unable to retrieve package records for the following packages: ...<<some package goes here>>".
I only need a few packages to follow the example in the link above. Is there a way to ask Google Cloud ML to use only a specific subset of all my installed packages? Would it be better for me to set up some sort of virtual environment for R? If so, is there a how-to guide I could follow? Should I try to do this in Docker? I'd love to be able to follow this example, and I'm hoping someone can point me in the right direction.
Thank you in advance for any help.
All the best,
Nate
You can stage these dependencies on GCS and provide their URIs in the job request. Check out the section on this in the public documentation.
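If you are using RStudio's cloudml package (as in the linked example), a complementary workaround is to submit the job from a clean project directory whose script loads only CRAN-installable packages, so the dependency snapshot never touches your non-CRAN installs. A minimal sketch, assuming the cloudml package; the script name train.R is a hypothetical placeholder:

# sketch: submit the job from a directory containing only what it needs;
# train.R should library() only packages that are available from CRAN
library(cloudml)
cloudml_train("train.R")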
I am a beginner with Gephi, and I want to apply the Girvan-Newman and Markov Cluster algorithms in Gephi 0.9.1 on my graph (nodes and edges).
I downloaded this plugin from gephi.org, https://marketplace.gephi.org/plugin/girvan-newman-clustering/, but when I try to install it in Gephi, this error is shown to me:
(screenshot of the plugin installation error)
I understand it, and I downloaded the missing plugins, but they come as .zip or .jar files, which Gephi does not accept. As another attempt to resolve this error, I also installed Gephi 0.8.2 beta and 0.7; they installed correctly, but they simply fail to start.
I hope you can help me.
Thanks a lot!
The plugins were made for Gephi 0.8.2; the developers made some major changes in version 0.9, so the plugins will need to be ported.
It might be worth contacting the author to see if they plan on porting it, or trying to do it yourself.
The code is here: https://github.com/jaroslav-kuchar/GirmanNewmanClustering. Gephi has also made a tool to help port old plugins, so it may be trivial; more info can be found at https://github.com/gephi/gephi-maven-plugin
I have an R script that I develop on my laptop. When I am done, I FTP the script up to my university cluster and run my code there (in parallel if needed). Most of my functions return data frames that I'd like to plot using ggplot. This works fine; however, I'd like to use tikzDevice to create TikZ (LaTeX code) for my plots so they have the same font and style as my thesis.
The problem:
I can't run tikzDevice on the university cluster because it lacks the necessary LaTeX packages, and I can't install them since I have no sudo access. Essentially, this route is a dead end for me.
Solution:
I can run tikzDevice on my own laptop. Since I am working on my LaTeX document (thesis) on my laptop, it's a seamless \include.
The problem is that the data (as data frames) lives on the university cluster. I COULD save the data frames as text files, download them onto my laptop, and read.table them, but that would kill my productivity.
Are there any packages, tools, software, anything that will let me "extract" my data from the university server?
A possible solution is https://gist.github.com/SachaEpskamp/5796467, but I have no idea how to use it.
Note: I also don't know which part of the SE network this could go on.
I've found a workaround for this.
For those looking to transfer data back and forth between server and client, you can send and receive R objects by serializing them.
On the server you use saveRDS, and on the client you use readRDS. To pass a URL to readRDS, you must wrap it in gzcon, like the following:
con <- gzcon(url("http://path.com/to/your/object/serialized"))
a <- readRDS(file = con)
Obviously, this depends on the server exposing the file over some protocol (such as HTTP).
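For completeness, here is a minimal sketch of both sides; the paths and the URL are hypothetical placeholders, and it assumes the cluster serves one of your directories over HTTP:

# on the server: serialize the data frame into a web-served directory
# (~/public_html is a hypothetical location; adjust to your setup)
results <- data.frame(x = 1:10, y = rnorm(10))
saveRDS(results, file = "~/public_html/results.rds")

# on the laptop: read it back directly over HTTP
con <- gzcon(url("http://cluster.example.edu/~user/results.rds"))
results <- readRDS(con)
close(con)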
I want to install R on Amazon EC2 servers. This is the first time I am doing so.
I have been using R installed on my laptop, but it gives up when processing large datasets! I will be using the visualization capabilities for graphs and plots. I am looking for a solution that lets me view plots generated on the server without using X11 or port forwarding from my local machine when I log in remotely.
Thanks for your responses.
While you won't be able to visualize the plots using X11, you should still be able to create PDFs:
pdf("test.pdf")
plot(1:20)
dev.off()
You can then download and view those PDFs.
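If a bitmap format is more convenient to download and view, the same pattern works with the built-in png device; a minimal sketch (the file name and dimensions are arbitrary):

png("test.png", width = 800, height = 600)  # open a headless bitmap device
plot(1:20)                                  # draw the plot into the file
dev.off()                                   # close the device to flush the file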
I highly recommend RStudio Server, which lets you run the full-featured RStudio IDE from a browser on your local machine/laptop. I've been using it with EC2 and it has been great: no need to save and transfer those pesky PDF files every time you need to look at a plot.