gremlin> graph.io(graphml()).readGraph('airroutes.xml')
airroutes.xml (The system cannot find the file specified)
Type ':help' or ':h' for help.
Display stack trace? [yN]
g=graph.traversal()
This is the error I'm getting when I'm trying to load an XML file. What is the correct way?
Note (and I need to update this in Practical Gremlin as well) that graph.io is now deprecated in favor of g.io. If you are loading locally into an in-memory TinkerGraph, a simple file system path should be sufficient. If you are loading into a Gremlin Server, the server will need to have access to that file on its own file system.
If loading locally, the format will look like this (I used the older graph.io form).
graph.io(graphml()).readGraph('/myfiles/air-routes.graphml')
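For reference, the newer g.io form of the same load should look something like this:
g.io('/myfiles/air-routes.graphml').read().iterate()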
If you are looking to load the air-routes data into a Gremlin Server, you should be able to follow the steps here.
I explored the log4r package and found that it has console and file appenders. I am looking to push the logs to a MongoDB database, but I am unable to find any way to do this. Is there an appender for MongoDB?
TIA
You can try the mongoimport tool.
If you do not specify a file (option --file=), mongoimport reads data from standard input (e.g. "stdin").
Or write logs into a temporary file and use mongoimport to import it.
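For example, the temporary-file route might look something like this (the database, collection and file names are placeholders, and it assumes the log file is written as one JSON document per line):
mongoimport --db applogs --collection log4r --type json --file /tmp/app-log.json
If you drop the --file option, you can pipe the log lines to mongoimport on standard input instead.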
I have a Node-RED flow. It uses a sqlite node. I am using node-red-node-sqlite. My OS is Windows 10.
My SQLite database is configured with just the name "db":
My question is, where is located the sqlite database file?
I have already searched in the following places, but didn't find it:
C:\Users\user\AppData\Roaming\npm\node_modules\node-red
C:\Users\user\.node-red
Thanks in advance.
Edit
I am also using pm2 with pm2-windows-service to start Node-RED.
If you don't specify a full path to the file in the Database field it will create the file in the current working directory for the process, which will be where you ran either node-red or npm start.
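If you are not sure what that working directory was (for example when Node-RED is started through pm2), one way to track the file down is to search for it by name from a command prompt; since the node was configured with just the name "db", the file will literally be called db. Searching from the drive root can take a while:
cd /d C:\
dir /s /b db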
Use the full path, including the file name.
It should work, I guess.
This isn't a valid answer, just a workaround for those who have the same problem.
I couldn't find my database file, but inside Node-RED everything worked just fine. So this is what I did as a workaround:
In Node-RED, make some select nodes to get all data from tables
Store the tables values somewhere (in a .txt file or something like that)
Create your database outside Node-RED, somewhere like c:\sqlite\db.db. Check read/write permissions
Create the tables and insert the values stored from old database
In Node-RED, inside "Database", put the complete path of the database. For example, c:\sqlite\db.db
In my case this was easy because I only had two databases with fewer than 10 rows.
Hope this can help others.
Anyway, still waiting for a valid answer :)
I am trying to use sqlite3 in a Native Client application. There is a port available in the Chromium project, but I was unable to make it run properly.
My problem is that, for some reason, the application is unable to open a DB since a call like sqlite3_open("/filename.db", &db); fails with an I/O error.
I mounted / to a html5fs file system.
Has anybody managed to use SQLite with Native Client?
If so, I would really like to see a simple code that does something like opening a DB, CREATE a table, INSERT something and do a SELECT.
After struggling for almost a day, I found a workaround that avoids the disk I/O error by specifying a VFS parameter:
sqlite3_open_v2(filename, &db, SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE | SQLITE_OPEN_NOMUTEX, "unix-none");
For more information about VFS, please see http://www.sqlite.org/vfs.html
My testing environment is a Chrome extension.
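Since the question also asks for a simple open/CREATE/INSERT/SELECT example, here is a rough sketch built around that call (plain SQLite C API; it assumes / has already been mounted to html5fs before this runs, and the file and table names are just placeholders):

#include <stdio.h>
#include <sqlite3.h>

/* Rough sketch: open with the "unix-none" VFS, create a table,
   insert a row and read it back. Error handling is kept minimal. */
int run_demo(void) {
    sqlite3 *db = NULL;
    sqlite3_stmt *stmt = NULL;
    char *err = NULL;

    int rc = sqlite3_open_v2("/filename.db", &db,
                             SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE | SQLITE_OPEN_NOMUTEX,
                             "unix-none");
    if (rc != SQLITE_OK) {
        fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
        sqlite3_close(db);
        return rc;
    }

    /* CREATE and INSERT in one call; sqlite3_exec runs multiple statements. */
    rc = sqlite3_exec(db,
                      "CREATE TABLE IF NOT EXISTS demo (id INTEGER PRIMARY KEY, name TEXT);"
                      "INSERT INTO demo (name) VALUES ('hello');",
                      NULL, NULL, &err);
    if (rc != SQLITE_OK) {
        fprintf(stderr, "exec failed: %s\n", err);
        sqlite3_free(err);
        sqlite3_close(db);
        return rc;
    }

    /* SELECT the rows back. */
    rc = sqlite3_prepare_v2(db, "SELECT id, name FROM demo", -1, &stmt, NULL);
    if (rc == SQLITE_OK) {
        while (sqlite3_step(stmt) == SQLITE_ROW) {
            printf("%d: %s\n", sqlite3_column_int(stmt, 0),
                   (const char *)sqlite3_column_text(stmt, 1));
        }
        sqlite3_finalize(stmt);
    }

    sqlite3_close(db);
    return rc;
}

The only Native Client specific part is the "unix-none" VFS in the open call; the rest is standard SQLite usage.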
We have a folder of elmah error logs in XML format. There will be millions of these files, and each file might be up to 50 KB in size. We need to be able to search the files (e.g. what errors occurred, which system failed, etc.). Is there an open source system that will index the files and perhaps help us search through them using keywords? I have looked at Lucene.net, but it seems I would have to code the application myself.
Please advise.
If you need to have the logs in a folder in XML, elmah-loganalyzer might be of use.
You can also use Microsoft's Log Parser to perform SQL-like queries over the XML files:
LogParser -i:XML "SELECT * FROM *.xml WHERE detail like '%something%'"
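For instance, a rough breakdown of which error types occurred could look something like this (the type field is an assumption; the exact field names depend on how Log Parser flattens your ELMAH XML):
LogParser -i:XML "SELECT type, COUNT(*) AS hits FROM *.xml GROUP BY type ORDER BY hits DESC"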
EDIT:
You could use a combination of Nutch + Solr or Logstash + Elasticsearch as an indexing solution:
http://wiki.apache.org/nutch/NutchTutorial
http://lucene.apache.org/solr/tutorial.html
http://blog.building-blocks.com/building-a-search-engine-with-nutch-and-solr-in-10-minutes
http://www.logstash.net/
http://www.elasticsearch.org/tutorials/using-elasticsearch-for-logs/
http://www.javacodegeeks.com/2013/02/your-logs-are-your-data-logstash-elasticsearch.html
We are a couple of developers building the website http://elmah.io. elmah.io indexes all your errors (in Elasticsearch) and makes it possible to do funky searches, group errors, hide errors, filter errors by time, and more. We are currently in beta, but you will get a link to the beta site if you sign up at http://elmah.io.
Unfortunately, elmah.io doesn't import your existing error logs. We will open source an implementation of the ELMAH ErrorLog type which indexes your errors in your own Elasticsearch (watch https://github.com/elmahio for the project). Again, this error logger will not index your existing error logs, but you could implement a parser that runs through your XML files and indexes everything using our open source error logger. You could also import the errors directly into elmah.io through our API if you don't want to implement a new UI on top of Elasticsearch.
Is it possible to know the exact path of a file on the server? For example, this URL http://www.hdfcsec.com/Research/ResearchDetails.aspx?report_id=2987918 resolves to a PDF. How can I determine the direct location of the PDF file? Any tools (such as network connection traces) or pointers to find this would be appreciated.
Thanks.
Short of examining the actual source of ResearchDetails.aspx and figuring out where it takes its file from, no. Server-side scripts (and non-script binaries) can handle the request in any way they need and produce any data. There are cases where PDFs are dynamically generated by scripts and do not exist as actual files at all.