How to use geoshape in gremlinpython - gremlin

In JanusGraph, there are functions like
g.E().has('place', geoWithin(Geoshape.circle(37.97, 23.72, 50)))
to search geospatial data. Now I want to do the same with gremlinpython, but I can't find a suitable API in the documentation.

Gremlin does not yet support geo data types and predicates. The bits of syntax you are referencing are specific to JanusGraph and are part of its libraries. At this point, I don't believe that JanusGraph has a Python-specific library that gives you direct access to those things. If you need geo searches then, for now, you will need to submit a Gremlin script with that syntax to JanusGraph Server.

Something like this:
g.V().has('polygon',geoIntersect(Geoshape.point(55.70,37.55)))
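Since the geo predicates only exist server-side, one workaround is to send the whole traversal as a script string through gremlinpython's driver. A minimal sketch, assuming a JanusGraph Server at localhost:8182 and an edge property named 'place':

```python
def build_geo_query(lat, lon, radius_km):
    # Geoshape.circle(...) is JanusGraph-specific syntax, so it has to
    # travel inside the script text rather than as gremlinpython bytecode.
    return ("g.E().has('place', "
            f"geoWithin(Geoshape.circle({lat}, {lon}, {radius_km})))")

# Submitting it (requires gremlinpython and a running JanusGraph Server):
# from gremlin_python.driver import client
# c = client.Client('ws://localhost:8182/gremlin', 'g')
# results = c.submit(build_geo_query(37.97, 23.72, 50)).all().result()
# c.close()
```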

Related

adding a new language in google trans api python

I'm trying to add my country's (Senegal) language (wo = Wolof) to googletrans. I have already built a list of words, and now I want to integrate them into the googletrans Python library.
I don't think it is possible. The Google Translate APIs are simply a client that sends requests to Google's servers, where the translation work is actually done. There is no way to add a new language to the API. (You can confirm this by looking at the (unofficial) API source code.)
Besides, you need more than just a word list to do a reasonable job of translating from one language to another. (Word mapping without any context tends to produce nonsense.)
Having said that ... if you believe that you can do reasonable translation based on simple word maps, then you don't need to use Google Translate APIs at all. You can use your word lists / maps directly in your Python program.
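A word-map "translator" is just dictionary substitution, which is also why it produces rough output. A minimal sketch (the Wolof entries here are illustrative examples, not a vetted dictionary):

```python
# Tiny illustrative word map; real entries would come from your word list.
word_map = {
    'bonjour': 'salaam aleekum',
    'merci': 'jërëjëf',
}

def translate(sentence, mapping):
    # Word-by-word substitution; unknown words pass through unchanged,
    # which shows why context-free mapping tends to produce nonsense.
    return ' '.join(mapping.get(w, w) for w in sentence.lower().split())
```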

Does Datastore support bulk updates?

I've combed through the Google Cloud Datastore documentation, but I haven't found anything useful about bulk updating documents. Does anyone have best practices for bulk updates in Datastore? Is that even a valid use case for Datastore? (I'm thinking something along the lines of MongoDB's Bulk.find.update or db.Collection.bulkWrite.)
You can use the projects.commit API call to write multiple changes at once. Most client libraries have a way to do this; for example, in Python use put_multi.
In C# the DatastoreDb.Insert method has an overload for adding multiple entities at once.
If this is a one-off, consider using gcloud's import function.
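A sketch of batched writes with the Python client; single commits are capped (500 mutations at the time of writing), so larger sets need chunking. The kind name 'Task' and its field are assumptions for illustration:

```python
def chunks(items, size=500):
    # Split a list into commit-sized batches (Datastore's per-commit limit).
    for i in range(0, len(items), size):
        yield items[i:i + size]

# With a real client (requires google-cloud-datastore and credentials):
# from google.cloud import datastore
# client = datastore.Client()
# entities = []
# for n in range(1200):
#     e = datastore.Entity(key=client.key('Task'))
#     e['done'] = False
#     entities.append(e)
# for batch in chunks(entities):
#     client.put_multi(batch)   # one projects.commit call per batch
```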

Add or get vertex in Azure Cosmos DB Graph API

Using Gremlin, I can create a vertex in an Azure Cosmos DB graph by issuing
g.addV('the-label').property('id', 'the-id')
and subsequently find it using
g.V('the-label').has('id', 'the-id')
However, I haven't found a way to issue a query that will insert the node if it is missing, and just get the reference to it if it already exists. Is there a way?
My concrete use case is that I want to add an edge between two nodes, regardless of whether those nodes (or the edge, for that matter) exist already or not, in a single query. I tried this upsert approach, but apparently Cosmos DB does not support Groovy closures, so it won't work.
The "upsert pattern" is relatively well defined and accepted at this point. It is described here. If you want to extend that to also add an edge, that's possible too:
g.V().has('event','id','1').
  fold().
  coalesce(unfold(),
           addV('event').property('id','1')).as('start').
  coalesce(outE('link').has('id','3'),
           coalesce(V().has('event','id','2'),
                    addV('event').property('id','2')).
           addE('link').from('start').property('id','3'))
If that looks a bit complex, you can definitely simplify it with a Gremlin DSL (though I'm not sure that CosmosDB supports Gremlin bytecode at this point). Here's an example with even more complex upsert logic simplified by a DSL; it's discussed in more detail in this blog post.
Please have a look at the coalesce() step:
http://tinkerpop.apache.org/docs/current/reference/#coalesce-step
You can try
g.inject(0).coalesce(__.V().has('id', 'the-id'), addV('the-label').property('id', 'the-id'))
By the way, you won't be able to find the vertex using g.V('the-label').has('id', 'the-id').
g.V() accepts vertex ids as parameters, not vertex labels.
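When driving this from Python, the same get-or-create traversal can be sent as a script string (the Gremlin endpoint accepts plain script text). A sketch that builds the query above; it assumes label and id are trusted values, not raw user input:

```python
def upsert_vertex_script(label, vid):
    # Builds the inject()/coalesce() get-or-create traversal as text.
    return ("g.inject(0).coalesce("
            f"__.V().has('id', '{vid}'), "
            f"__.addV('{label}').property('id', '{vid}'))")

# The resulting string can then be submitted via your Gremlin client's
# script-submission call against the Cosmos DB endpoint.
```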

QtSql Fuzzy search

I am working with a QSQLITE database in Qt and am attempting to implement fuzzy search in our program. Our SQL query is something like this:
select name from things where name like '%arg%'
(It's not exactly the same; the real query is longer, has joins, etc.)
I tried using SOUND() and SOUNDEX(), but I don't think either of them is supported in QSQLITE. Is there any way I can implement fuzzy search here?
For SOUNDEX() support, you would have to recompile the SQLite library embedded in Qt.
There is no other built-in 'fuzzy' function.
Either implement your own custom function, or store a normalized version of your string in the database so that you can compare it directly.
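The custom-function route can be sketched with Python's sqlite3 module (the same idea applies from C++/Qt via sqlite3_create_function); the Soundex below is a simplified version of the classic algorithm, for illustration only:

```python
import sqlite3

def soundex(name):
    # Simplified 4-character Soundex: keep the first letter, encode the
    # rest by consonant group, and drop adjacent duplicate codes.
    codes = {**dict.fromkeys('bfpv', '1'), **dict.fromkeys('cgjkqsxz', '2'),
             **dict.fromkeys('dt', '3'), 'l': '4',
             **dict.fromkeys('mn', '5'), 'r': '6'}
    name = name.lower()
    if not name:
        return ''
    out = name[0].upper()
    prev = codes.get(name[0], '')
    for ch in name[1:]:
        code = codes.get(ch, '')
        if code and code != prev:
            out += code
        prev = code
    return (out + '000')[:4]

# Register the function so SQL queries can call it directly.
con = sqlite3.connect(':memory:')
con.create_function('soundex', 1, soundex)
con.execute("CREATE TABLE things (name TEXT)")
con.executemany("INSERT INTO things VALUES (?)",
                [('Robert',), ('Rupert',), ('Alice',)])
rows = con.execute(
    "SELECT name FROM things WHERE soundex(name) = soundex(?)",
    ('Robert',)).fetchall()
# rows -> [('Robert',), ('Rupert',)]
```

The same trick works for the normalized-column approach: compute soundex(name) once at insert time, store it in an extra column, and index that column instead of calling the function per row.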

Accessing CoreData tables from fmdb

I'm using Core Data in my application for DML statements, and everything is fine with it.
However, I don't want to use NSFetchedResultsController for simple queries like getting the count of rows, etc.
I've decided to use fmdb, but I don't know the actual table names to write SQL against. Entity and table names don't match.
I've even looked inside the .sqlite file with TextEdit, but no hope :)
FMResultSet *rs = [db getSchema] doesn't return any rows.
Maybe there's a better solution to my problem?
Thanks in advance.
Core Data prefixes all its SQL names with Z. Use the sqlite3 command-line tool to inspect your persistent store file and see what names it uses.
However, this is a complicated and fragile solution. The Core Data schema is undocumented and changes without warning, because Core Data does not support direct SQL access. You are likely to make errors accessing the store file directly, and your solution may break at random when the API is next updated.
The Core Data API provides the functionality you are seeking. Just use a fetch request that fetches a specific value, using an NSExpressionDescription to perform a function. This lets you get information like counts, minimums, maximums, etc. You can create and use such fetches independently of an NSFetchedResultsController.
The Core Data API is very feature rich. If you find yourself looking outside the API for a data solution, chances are you've missed something in the API.
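If you do want to peek at the generated names, you can open a copy of the store with Python's sqlite3 and list what's in sqlite_master; entity tables typically show up with the Z prefix. A sketch (the store path would be your own file):

```python
import sqlite3

def list_tables(path):
    # Read the table names straight out of SQLite's schema catalog.
    con = sqlite3.connect(path)
    try:
        return [r[0] for r in con.execute(
            "SELECT name FROM sqlite_master "
            "WHERE type = 'table' ORDER BY name")]
    finally:
        con.close()

# e.g. list_tables('MyApp.sqlite') might show ZEVENT, Z_METADATA, ...
```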
