Does firebase-util exist for Java? - firebase

I found firebase-util and it is great.
Does firebase-util exist for Java? Or is it possible to use "join" in Java?
I was testing firebase-util and found that it is a little bit slow. Is it more appropriate to join 1:1 rather than, say, 10,000 rows with 100 rows (where it might be better to load the 10,000 first and then join only if needed)?
Thanks for any reply.

There is not currently a version of Firebase-util for Java. Since this is still an experimental lib, the Firebase team is still soliciting feedback and determining the appropriate API. At some point in the near future, there will also be querying features rolled into the core API, which will be considerably more efficient and complete than this client-side helper lib.
It shouldn't matter if you join 1:1 or 1:many, but 10,000 rows is a huge number for a join utility on the client. You wouldn't be able to display that many in the DOM at one time anyway, and that would be even slower. A better solution would be to create an index and do an intersection against that, only fetching a small subset of the records:
// only get the first twenty from path A
var refA = new Firebase('URL/path/A').limit(20);
var refB = new Firebase('URL/path/B');
// intersection only loads if key exists in both paths,
// so we've filtered B by the subset of A
var joinedRef = new Firebase.util.intersection(refA, refB);
This would only fetch records in refB that exist in refA, and thus only the first 20. You can also create an index of specific record ids to fetch, or query for a subset based on priorities, and then use intersection to reduce the payload.

Related

Indexing in Titan/Janus

I have 2 questions:
How to index this query?
g.V(vertexId).repeat(out().hasLabel('location')).emit().tree().next()
In the Titan 1.0 documentation, the only examples given index the graph after the data has already been inserted.
However in the generate-modern.groovy file here
we see that indexing is done before the creation of vertices, which seems reasonable. However, I am unable to do the same when trying to use buildMixedIndex, as it throws me an
illegal argument exception: Unknown external index backend search
My approach was
def location = mgmt.makeVertexLabel("location").make()
def displayName = mgmt.makePropertyKey("displayName").dataType(String.class).cardinality(Cardinality.SINGLE).make()
def shortName = mgmt.makePropertyKey("shortName").dataType(String.class).cardinality(Cardinality.SINGLE).make()
def description = mgmt.makePropertyKey("description").dataType(String.class).cardinality(Cardinality.SINGLE).make()
def latitude = mgmt.makePropertyKey("latitude").dataType(String.class).cardinality(Cardinality.SINGLE).make()
def longitude = mgmt.makePropertyKey("longitude").dataType(String.class).cardinality(Cardinality.SINGLE).make()
def locationByName = mgmt.buildIndex("displayNameAndShortNameAndDescriptionAndLatitudeAndLongitude", Vertex.class).addKey(displayName).addKey(shortName).addKey(description)
.addKey(latitude).addKey(longitude).indexOnly(location).buildMixedIndex('search')
Where am I getting it wrong?
If that query is taking a long time, the problem is likely that it is visiting too many elements or is stuck in an infinite loop. The existing JanusGraph/Titan indexes won't help with that. You already have a direct vertex lookup by id, g.V(vertexId), and the rest of the query traverses the neighborhood from that vertex. I'd suggest using edge labels, e.g. out('friends'), to limit the number of edges you visit. You could also use simplePath() to eliminate cyclic paths, and times() or until() to keep a limit on the number of times you loop with the repeat() step.
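As a rough sketch of how those suggestions combine (written against Gremlin's Java API; the Groovy console form is nearly identical, and the edge label "locatedIn" and depth cap of 5 are made-up placeholders, not taken from the question):
import static org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.__.out;

// g is a GraphTraversalSource; vertexId is an existing vertex id.
// "locatedIn" is a hypothetical edge label standing in for whatever
// connects a vertex to its location children.
Object locationTree = g.V(vertexId)
        .repeat(out("locatedIn").simplePath())  // one edge label, no cyclic paths
        .times(5)                               // hard cap on the recursion depth
        .emit()
        .tree()
        .next();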
The configuration example you referenced only used composite indexes, which do not require an indexing backend.
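For reference, here is roughly what a composite index on the question's schema would look like (a sketch using the JanusGraph management API in Java; the Groovy console form drops the types and imports, Titan 1.0 uses TitanManagement instead, and the index name is made up):
import org.apache.tinkerpop.gremlin.structure.Vertex;
import org.janusgraph.core.Cardinality;
import org.janusgraph.core.PropertyKey;
import org.janusgraph.core.VertexLabel;
import org.janusgraph.core.schema.JanusGraphManagement;

// graph is an open JanusGraph instance and the schema does not exist yet
JanusGraphManagement mgmt = graph.openManagement();
VertexLabel location = mgmt.makeVertexLabel("location").make();
PropertyKey displayName = mgmt.makePropertyKey("displayName")
        .dataType(String.class).cardinality(Cardinality.SINGLE).make();
// Composite index: exact-match lookups only, no external index backend required
mgmt.buildIndex("locationByDisplayName", Vertex.class)
        .addKey(displayName)
        .indexOnly(location)
        .buildCompositeIndex();
mgmt.commit();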
Mixed indexes require configuring an indexing backend, either Elasticsearch, Lucene, or Solr. Pick one of these, then make sure you pass the correct configuration properties when you initialize your graph. You can find several examples in the distribution zip file in the conf directory. For example, in the janusgraph-cassandra-es.properties, you'll find:
index.search.backend=elasticsearch
index.search.hostname=127.0.0.1
index.search.elasticsearch.client-only=true
where the X in index.X.backend (here, search) is the index configuration name you must pass to buildMixedIndex(X).
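As a sketch (assuming the JanusGraph distribution; for Titan 1.0 the factory is TitanFactory and the file is titan-cassandra-es.properties), opening the graph so that the search index configuration is available could look like:
import org.janusgraph.core.JanusGraph;
import org.janusgraph.core.JanusGraphFactory;

// Open the graph with a config file that defines the "search" index backend ...
JanusGraph graph = JanusGraphFactory.open("conf/janusgraph-cassandra-es.properties");

// ... or build the equivalent configuration in code
JanusGraph graph2 = JanusGraphFactory.build()
        .set("storage.backend", "cassandrathrift")
        .set("index.search.backend", "elasticsearch")
        .set("index.search.hostname", "127.0.0.1")
        .open();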
Here's an answer.
Both composite and mixed indexes are only available for the first-level Gremlin query, not for the second level. A vertex-centric index is required for the second-level query.
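As a sketch of what that looks like with the management API (Java shown; "locatedIn" and "since" are made-up edge label and sort-key names standing in for whatever your second-level traversal filters or sorts on):
import org.apache.tinkerpop.gremlin.structure.Direction;
import org.janusgraph.core.EdgeLabel;
import org.janusgraph.core.PropertyKey;
import org.janusgraph.core.schema.JanusGraphManagement;

// graph is an open JanusGraph instance
JanusGraphManagement mgmt = graph.openManagement();
EdgeLabel locatedIn = mgmt.makeEdgeLabel("locatedIn").make();
PropertyKey since = mgmt.makePropertyKey("since").dataType(Integer.class).make();
// Vertex-centric index: speeds up traversals over this edge label per vertex
mgmt.buildEdgeIndex(locatedIn, "locatedInBySince", Direction.BOTH, since);
mgmt.commit();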

How to insert large number of nodes into Neo4J

I need to insert about 1 million nodes into Neo4j. I need to specify that each node is unique, so every time I insert a node it has to be checked that the same node does not already exist. The relationships must also be unique.
I'm using Python and Cypher:
uq = 'CREATE CONSTRAINT ON (a:ipNode8) ASSERT a.ip IS UNIQUE'
...
queryProbe = 'MERGE (a:ipNode8 {ip:"' + prev + '"})'
...
queryUpdateRelationship= 'MATCH (a:ipNode8 {ip:"' + prev + '"}),(b:ipNode8 {ip:"' + next + '"}) MERGE (a)-[:precede]->(b)'
The problem is that after inserting 40-50K nodes into Neo4j, the insertion speed slows down quickly and I cannot insert anything else.
Your question is quite open-ended. In addition to @InverseFalcon's recommendations, here are some other things you can investigate to speed things up.
Read the Performance Tuning documentation, and follow the recommendations. In particular, you might be running into memory-related issues, so the Memory Tuning section may be very helpful.
Your Cypher query(ies) can probably be sped up. For instance, if it makes sense, you can try something like the following. The data parameter is expected to be a list of objects having the format {a: 123, b: 234}. You can make the list as long as appropriate (e.g., 20K) to avoid running out of memory on the server while it processes the list within a single transaction. (This query assumes that you also want to create b if it does not exist.)
UNWIND {data} AS d
MERGE (a:ipNode8 {ip: d.a})
MERGE (b:ipNode8 {ip: d.b})
MERGE (a)-[:precede]->(b)
There are also periodic execution APOC procedures that you might be able to use.
For mass inserts like this, it's best to use LOAD CSV with periodic commit or the import tool.
I believe it's also best practice to use a parameterized query instead of appending values into a string.
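The question's code is Python, but as a sketch of the batched, parameterized form, here is the same UNWIND/MERGE query with the official Neo4j Java driver (the Python driver's session.run takes the same query text plus a dict of parameters); the URI and credentials are placeholders, and newer servers write the parameter as $data where the snippet above uses {data}:
import org.neo4j.driver.AuthTokens;
import org.neo4j.driver.Driver;
import org.neo4j.driver.GraphDatabase;
import org.neo4j.driver.Session;

import java.util.List;
import java.util.Map;

import static org.neo4j.driver.Values.parameters;

String cypher = "UNWIND $data AS d "
        + "MERGE (a:ipNode8 {ip: d.a}) "
        + "MERGE (b:ipNode8 {ip: d.b}) "
        + "MERGE (a)-[:precede]->(b)";

// One parameterized call per batch instead of one string-concatenated query per row
List<Map<String, String>> batch = List.of(
        Map.of("a", "10.0.0.1", "b", "10.0.0.2"),
        Map.of("a", "10.0.0.2", "b", "10.0.0.3"));

try (Driver driver = GraphDatabase.driver("bolt://localhost:7687",
        AuthTokens.basic("neo4j", "password"));
     Session session = driver.session()) {
    session.run(cypher, parameters("data", batch));
}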
Also, you created a unique property constraint on :ipNode8, but not :ipNode, which is the first one you MERGE. Seems like you'll need a unique constraint for that one too.

Dealing with Indexer timeout when importing documentdb into Azure Search

I'm trying to import a rather large (~200M docs) DocumentDB collection into Azure Search, but I'm finding the indexer times out after ~24 hours. When the indexer restarts, it starts again from the beginning rather than from where it got to, meaning I can't get more than ~40M docs into the search index. The data source has a high-water mark set like this:
var source = new DataSource();
source.Name = DataSourceName;
source.Type = DataSourceType.DocumentDb;
source.Credentials = new DataSourceCredentials(myEnvDef.ConnectionString);
source.Container = new DataContainer(myEnvDef.CollectionName, QueryString);
source.DataChangeDetectionPolicy = new HighWaterMarkChangeDetectionPolicy("_ts");
serviceClient.DataSources.Create(source);
The high-water mark appears to work correctly when testing on a small db.
Should the high-water mark be respected when the indexer fails like this, and if not, how can I index such a large data set?
The reason the indexer is not making incremental progress, even while timing out after 24 hours (the 24-hour execution time limit is expected), is that a user-specified query (the QueryString argument passed to the DataContainer constructor) is being used. With a user-specified query, we cannot guarantee, and therefore cannot assume, that the query response stream of documents will be ordered by the _ts column, which is a necessary assumption to support incremental progress.
So, if a custom query isn't required for your scenario, consider not using it.
Alternatively, consider partitioning your data and creating multiple datasource/indexer pairs that all write into the same index. You can use the Datasource.Container.Query parameter to provide a DocumentDB query that partitions your data using a WHERE filter. That way, each of the indexers will have less work to do, and with sufficient partitioning, will fit under the 24-hour limit. Moreover, if your search service has multiple search units, multiple indexers will run in parallel, further increasing the indexing throughput and decreasing the overall time to index your entire dataset.

MongoTemplate Limit query issue

I am using MongoTemplate to execute my Mongo queries.
I want to know whether count works with a limit set.
Also, why does the find query search the full collection (according to the query) even though a limit is set?
For example, the query I wrote might match 10,000 records, but I want only 100 of them, so I set the limit to 100 and then fired the find query. The query still goes on to search the full 10,000 records.
dataQuery.limit(100);
List<logs> logResultsTemp = mongoTemplate1.find(dataQuery, logs.class);
Are there any limitations in using the limit command?
Limit works fine (at least on Spring Data version 1.2.1, which I use). Perhaps it is a problem with your version?
About count, there is a specific method to get your collection count, so you don't need to care about the amount of data that your system will fetch:
mongoTemplate.count(new Query(), MyCollection.class)
Btw, if you try this directly on your MongoDB console: db.myCollection.find().limit(1).count(), you will get the actual total of documents in your collection, not only one. And so it is for the mongoTemplate.count method, so:
mongoTemplate.count(new Query().limit(1), MyCollection.class)
will work the same way.
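Tying that back to the question's snippet: if what you actually want is the number of documents the limited find returned, take the size of the fetched list; count is for the total. A sketch reusing the question's names (logs, mongoTemplate1):
import java.util.List;
import org.springframework.data.mongodb.core.query.Query;

Query dataQuery = new Query();
dataQuery.limit(100);

// find() honors the limit: at most 100 documents are fetched
List<logs> page = mongoTemplate1.find(dataQuery, logs.class);
int returnedCount = page.size();            // <= 100

// count() reports how many documents match, ignoring any limit
long totalMatching = mongoTemplate1.count(new Query(), logs.class);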

Linq 'contains' query taking too long

I have this query:
var newComponents = from ic in importedComponents
where !existingComponents.Contains(ic)
select ic;
importedComponents and existingComponents are of type List<ImportedComponent>, and exist only in memory (are not tied to a data context). In this instance, importedComponents has just over 6,100 items, and existingComponents has 511 items.
This statement is taking too long to complete (I don't know how long; I stopped the script after 20 minutes). I've tried the following with no improvement in execution speed:
var existingComponentIDs = from ec in existingComponents
select ec.ID;
var newComponents = from ic in importedComponents
where !existingComponentIDs.Contains(ic.ID)
select ic;
Any help will be much appreciated.
The problem is the quadratic complexity of this algorithm. Put the IDs of all the existing components into a HashSet and use the HashSet.Contains method. It has O(1) lookup cost compared to O(N) for Contains/Any on a list.
The morelinq project contains a method that does all of that in one convenient step: ExceptBy.
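The question is C#, but to illustrate the shape of that fix, here is the same idea sketched in Java (getId() is an assumed accessor for the component's ID property): build the lookup set once, then every membership test is O(1) instead of a scan of the whole list.
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

// Build the set of existing IDs once: O(existing)
Set<Integer> existingIds = existingComponents.stream()
        .map(ImportedComponent::getId)
        .collect(Collectors.toSet());

// One O(1) lookup per imported component: O(imported) overall
List<ImportedComponent> newComponents = importedComponents.stream()
        .filter(c -> !existingIds.contains(c.getId()))
        .collect(Collectors.toList());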
You could use Except to get the set difference:
var existingComponentIDs = existingComponents.Select(c => c.ID);
var importedComponentIDs = importedComponents.Select(c => c.ID);
var newComponentIDs = importedComponentIDs.Except(existingComponentIDs);
var newComponents = from ic in importedComponents
join newID in newComponentIDs on ic.ID equals newID
select ic;
foreach (var c in newComponents)
{
// insert into database?
}
Why is LINQ JOIN so much faster than linking with WHERE?
In short: the Join method can set up a hash table to use as an index to quickly zip two tables together.
Well, based on the logic and numbers you provided, you are basically performing 3,117,100 comparisons when you run that statement. Obviously that is not entirely accurate, because your condition may be satisfied before running through the entire array, but you get my point.
With collections this large, you are going to want to use a collection where you can index your key (in this case your component ID) to help reduce the overhead of the search. The thing to remember is that even though LINQ looks like SQL, there are no magic indexes here; it is mainly for convenience. In fact, I have seen articles where a LINQ lookup is actually slightly slower than a brute-force lookup.
EDIT: If it is possible I would suggest trying a Dictionary or SortedList for your values. I believe either one would have slightly better lookup performance.
