choose and option steps in Cosmos DB Gremlin API

Although choose is not on the list of supported Gremlin steps for Cosmos DB (along with some others), it seems to be supported. Given an example graph with people, a query like
g.V().hasLabel('person').choose(values('name'))
.option('josh', constant("it's Josh!"))
returns a JSON array ["it's Josh!"]. Adding more options also works, e.g.
g.V().hasLabel('person').choose(values('name'))
.option('josh', constant("it's Josh!"))
.option('marco', constant("it's marco!"))
but what doesn't seem to work is using Pick.none / none to specify a default case, as described in the Gremlin docs for choose, e.g.
g.V().hasLabel('person').choose(values('name'))
.option('josh', constant("it's Josh!"))
.option('marco', constant("it's marco!"))
.option(none, constant("it's somebody else!"))
Does anybody know how to specify the default case in Cosmos DB? I have already tried every permutation of Pick and/or none that I could think of, e.g. Pick.none, Pick().none(), none, none(), ...
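Not an authoritative answer, but one possible workaround sketch: coalesce is on Cosmos DB's supported-step list, so a default case can be emulated by rewriting each option as a filtered branch and letting an unconditional final branch act as the default:
g.V().hasLabel('person').coalesce(
  has('name', 'josh').constant("it's Josh!"),
  has('name', 'marco').constant("it's marco!"),
  constant("it's somebody else!"))
Each has() branch yields a result only for the matching name, and coalesce returns the first branch that produces anything, so the trailing constant() fires only when no name matched.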

Related

Microsoft Graph API - list all users with OneDrive license

I want to list all users that have a OneDrive license.
I am using this URL, but it doesn't work.
https://graph.microsoft.com/v1.0/users?$filter=assignedLicenses/any(x:x/skuId eq 4b585984-651b-448a-9e53-3b10f069cf7f or x/skuId eq c7df2760-2c81-4ef7-b578-5b5392b571df)
Do you have any idea how to do it?
Unfortunately, a complex query (whatever you're trying to do above) on the property assignedLicenses is not supported. If you try, the API will throw the error:
Complex query on property assignedLicenses is not supported
That being said, I can see that it works for a simple filter, like:
https://graph.microsoft.com/v1.0/users?$filter=assignedLicenses/any(x:x/skuId eq 4b585984-651b-448a-9e53-3b10f069cf7f)
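Since only one skuId can be filtered per request, a possible workaround (a sketch, untested; the access token and its acquisition are placeholders) is to issue one simple-filter request per license and merge the results client-side:
import requests

GRAPH_USERS = "https://graph.microsoft.com/v1.0/users"
SKU_IDS = [
    "4b585984-651b-448a-9e53-3b10f069cf7f",
    "c7df2760-2c81-4ef7-b578-5b5392b571df",
]
headers = {"Authorization": "Bearer <access-token>"}  # token acquisition not shown

users = {}
for sku in SKU_IDS:
    params = {"$filter": f"assignedLicenses/any(x:x/skuId eq {sku})"}
    response = requests.get(GRAPH_USERS, headers=headers, params=params)
    response.raise_for_status()
    # paging via @odata.nextLink omitted for brevity
    for user in response.json().get("value", []):
        users[user["id"]] = user  # keyed by id to de-duplicate across licenses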

Pass search strategy to filter from rest URI

This is my first time using api-platform and Symfony 4 to create an API interface for a MySQL database.
I'm updating an old search interface for the database, for which I need to replicate many of the search options. This includes being able to search on a given field using various matching operators/strategies, e.g. starts with, contains, exactly equals, etc.
I've set everything up for the API using annotations.
The @ApiFilter(SearchFilter::class, properties={"fieldname": "strategy"}) annotation on my table class works as designed, but I am limited to one and only one strategy per field. I need to be able to pass the strategy to the API search function in the URL, something like:
/api/staff?lastname[start]=dav
or
/api/staff?lastname=david&match=contains
or
/api/staff/lastname/son?searchtype=end
would be fine.
I can't figure out how to set this up. Shockingly, to me anyway, this common requirement doesn't seem to be documented at all.
The file CustomSearchFilter.php in the repo https://github.com/jordonedavidson/custom_search_filter solves this use case using the /api/staff?lastname[start]=dav syntax.
The file was written by Kévin Dunglas (the author of Api Platform) and is presented with his blessing.
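For illustration, here are hypothetical client calls against that filter. The bracket syntax for start comes from the repo above; the other strategy names (partial, exact) mirror api-platform's built-in strategies and are assumptions, as is the host:
import requests

BASE = "https://example.org/api"  # placeholder host

requests.get(f"{BASE}/staff", params={"lastname[start]": "dav"})       # starts with "dav"
requests.get(f"{BASE}/staff", params={"lastname[partial]": "avid"})    # contains "avid"
requests.get(f"{BASE}/staff", params={"lastname[exact]": "davidson"})  # exactly equals "davidson"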

OpenJPA 2.0: How to map an Oracle sys.XMLTYPE column to String

I made changes in persistence.xml.
I also changed the column definition (columnDefinition="XDB.XMLType") for the XML fields.
I checked the OpenJPA (http://openjpa.208410.n2.nabble.com/Oracle-XMLType-fetch-problems-td6208344.html) and IBM (http://www.ibm.com/support/knowledgecenter/SS7J6S_7.5.0/com.ibm.wsadapters.jca.jdbc.doc/env/doc/rjdb_problemsolutions.html) sites.
My environment is OpenJPA 2.0 and WAS 7.
It's throwing this exception:
org.apache.openjpa.persistence.PersistenceException: ORA-06502: PL/SQL: numeric or value error: character string buffer too small
ORA-06512: at "SYS.XMLTYPE", line 169
Please suggest how I can handle sys.XMLTYPE data without changing OpenJPA 2.0, as it is part of IBM WebSphere Application Server V7.0. I am migrating my application from DB2 to Oracle in the same environment.
Writing XML data can be tricky sometimes! Getting the correct drivers and things defined properly can have its challenges. I cannot say exactly what you need to do given the lack of info on your domain model and such, but let me give some general things to look for. First, there is an XML test in the OpenJPA test framework if you want to reference it. It can be seen publicly here:
https://apache.googlesource.com/openjpa/+/refs/heads/trunk/openjpa-persistence-jdbc/src/test/java/org/apache/openjpa/jdbc/oracle/
Or, another test using an "XMLValueHandler" (likely this is beyond the scope of what you are looking for):
https://apache.googlesource.com/openjpa/+/refs/heads/trunk/openjpa-persistence-jdbc/src/test/java/org/apache/openjpa/persistence/xmlmapping/query/
Second, (stating the obvious) I assume you have a column in Oracle defined as "XMLTYPE". Also, I see you are using the schema SYS. I'm sure you are aware, but this is a system/admin schema... just for sanity's sake you might want to first get things running using a non-system/admin schema, so we don't get hung up on any issues with your OpenJPA client not having the correct permissions.
Next, you need the following definition:
@Lob @Basic
@Column(name = "ANXMLCOLUMN", columnDefinition="XMLCOLUMN XMLType")
private String anXMLString;
The @Lob will, I think, be necessary if you are using data greater than 4000 chars (this was mentioned in one of the comments). To start, I'd use a very small set of data (a couple of characters); once that works, then experiment with > 4k.
Next, make sure to use the correct JDBC driver. The last time I experimented with an XMLType I used the Oracle JDBC 11.2.0.2 driver.
Finally, you might need to use the property "openjpa.jdbc.DBDictionary" with value "oracle(supportsSetClob=true,maxEmbeddedClobSize=-1)". Again, experiment with this AND look at the OpenJPA documentation on these properties to determine if they are necessary in your scenario. I think supportsSetClob=true will only be necessary for older versions (pre-2.2.x) of OpenJPA. You might also need to use property "openjpa.jdbc.SchemaFactory" with value "native". I would suggest you first try without either of these two properties. If that doesn't help, then experiment with them. I know this is vague, but I don't know what your DDL or domain model looks like, so I have to keep it vague.
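For reference, a hypothetical persistence.xml excerpt wiring up the properties mentioned above (adjust or drop them according to the experiments suggested):
<properties>
    <property name="openjpa.jdbc.DBDictionary"
              value="oracle(supportsSetClob=true,maxEmbeddedClobSize=-1)"/>
    <!-- only if needed, see above -->
    <property name="openjpa.jdbc.SchemaFactory" value="native"/>
</properties>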
Thanks,
Heath Thomann

AND/OR query search using MarkLogic (XQuery or equivalent)

I am new to MarkLogic and we are evaluating MarkLogic for our product use case.
We evaluated a few NoSQL databases like MongoDB, Couchbase, etc.
I am looking for the below type of query search:
(Condition1 OR Condition2) AND (Condition3 OR Condition4) AND (Condition5)
Can MarkLogic provide such type of search query?
I have just started learning MarkLogic and am trying to understand the architecture.
Thanks,
Sameer
Yes, MarkLogic provides some high-level libraries for this type of functionality. Take a look at the Search API.
Start here: https://developer.marklogic.com/learn/2009-07-search-api-walkthrough
And more thorough documentation is here: https://docs.marklogic.com/guide/search-dev/search-api
MarkLogic can handle this kind of logic in many ways, as mentioned.
For example, this is how you could set up a search query using the CTS library (I highly recommend the CTS library, since it uses indexes much better, and the construction of queries is much more flexible):
cts:search(//elementName,
  cts:and-query((
    cts:element-attribute-value-query(xs:QName("entry"), xs:QName("private"), "true"),
    cts:or-query((
      cts:element-attribute-value-query(xs:QName("entry"), xs:QName("forced"), "false"),
      cts:element-attribute-value-query(xs:QName("entry"), xs:QName("forced"), "pending")
    ))
  ))
)
This snippet shows both AND and OR logic. The cts:and-query() and cts:or-query() functions each take a list of queries. The above query says: "Find an element (called elementName) that has an attribute of private='true' AND has either one of the following: forced='false' or forced='pending'".
For much simpler data, you can use XQuery predicates by doing something like the following:
for $node in $xml/parent/child[@param1 eq "test" and @param2 eq "OK"]/grandchild[@service eq "yahoo" or @service eq "google"]
return $node
The short answer to the original question is "yes". The details of "how" will depend on the approach used to express the queries.
The reference architecture recommends a three-tier approach using the Java or Node.js Client APIs if you use one of those, or HTTP calls to the REST API if you use a different language in your middle tier.
You can also use the Search API (as mentioned by wst) if you're working in MarkLogic's application server (typically as a two-tier architecture). You can do that with either XQuery or Server-side JavaScript, as of MarkLogic 8.
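As a rough sketch of the REST route from a middle tier in another language (assuming MarkLogic's /v1/search endpoint and structured-query JSON; host, port, credentials, and the term-query conditions are all placeholders), the (Condition1 OR Condition2) AND (Condition3 OR Condition4) AND (Condition5) shape could look like:
import requests
from requests.auth import HTTPDigestAuth  # MarkLogic defaults to digest auth

# Structured query: (c1 OR c2) AND (c3 OR c4) AND c5,
# with term-query standing in for the real conditions.
query = {
    "query": {
        "queries": [{
            "and-query": {
                "queries": [
                    {"or-query": {"queries": [
                        {"term-query": {"text": ["condition1"]}},
                        {"term-query": {"text": ["condition2"]}},
                    ]}},
                    {"or-query": {"queries": [
                        {"term-query": {"text": ["condition3"]}},
                        {"term-query": {"text": ["condition4"]}},
                    ]}},
                    {"term-query": {"text": ["condition5"]}},
                ]
            }
        }]
    }
}

response = requests.post(
    "http://localhost:8000/v1/search",  # placeholder host/port
    json=query,
    headers={"Accept": "application/json"},
    auth=HTTPDigestAuth("user", "password"),
)
print(response.json())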

Schema qualified tables with SQLAlchemy, SQLite and PostgreSQL?

I have a Pylons project and a SQLAlchemy model that implements schema qualified tables:
class Hockey(Base):
    __tablename__ = "hockey"
    __table_args__ = {'schema': 'winter'}

    hockey_id = sa.Column(sa.types.Integer, sa.Sequence('score_id_seq', optional=True), primary_key=True)
    baseball_id = sa.Column(sa.types.Integer, sa.ForeignKey('summer.baseball.baseball_id'))
This code works great with PostgreSQL, but fails when using SQLite on table and foreign key names (due to SQLite's lack of schema support):
sqlalchemy.exc.OperationalError: (OperationalError) unknown database "winter" 'PRAGMA "winter".table_info("hockey")' ()
I'd like to continue using SQLite for dev and testing.
Is there a way to have this fail gracefully on SQLite?
I'd like to continue using SQLite for dev and testing.
Is there a way to have this fail gracefully on SQLite?
It's hard to know where to start with that kind of question. So . . .
Stop it. Just stop it.
There are some developers who don't have the luxury of developing on their target platform. Their life is a hard one--moving code (and sometimes compilers) from one environment to the other, debugging twice (sometimes having to debug remotely on the target platform), gradually coming to an awareness that the gnawing in their gut is actually the start of an ulcer.
Install PostgreSQL.
When you can use the same database environment for development, testing, and deployment, you should.
Not to mention the QA team. Why on earth are they testing stuff they're not going to ship? If you're deploying on PostgreSQL, assure the quality of your work on PostgreSQL.
Seriously.
I'm not sure if this works with foreign keys, but someone could try SQLAlchemy's Multi-Tenancy Schema Translation for Table objects. It worked for me, but I have used custom primaryjoin and secondaryjoin expressions in combination with composite primary keys.
The schema translation map can be passed directly to the engine creator:
...
if dialect == "sqlite":
    url = lambda: "sqlite:///:memory:"
    execution_options = {"schema_translate_map": {"winter": None, "summer": None}}
else:
    # 'pass' is a Python keyword, so the password variable is named password here
    url = lambda: f"postgresql://{user}:{password}@{host}:{port}/{name}"
    execution_options = None

engine = create_engine(url(), execution_options=execution_options)
...
Here is the doc for create_engine. There is another question on SO which might be related in that regard.
But one might get colliding table names if all schema names are mapped to None.
I'm just a beginner myself, and I haven't used Pylons, but...
I notice that you are combining the table and the associated class together. How about if you separate them?
import sqlalchemy as sa

meta = sa.MetaData('sqlite:///tutorial.sqlite')
schema = None

hockey_table = sa.Table('hockey', meta,
    sa.Column('score_id', sa.types.Integer, sa.Sequence('score_id_seq', optional=True), primary_key=True),
    sa.Column('baseball_id', sa.types.Integer, sa.ForeignKey('summer.baseball.baseball_id')),
    schema=schema,
)

meta.create_all()
Then you could create a separate
class Hockey(object):
...
and
mapper(Hockey, hockey_table)
Then just set schema = None everywhere above if you are using SQLite, and to the value(s) you want otherwise.
You didn't give a working example, so the example above isn't a working one either. However, as other people have pointed out, trying to maintain portability across databases is in the end a losing game. I'd add a +1 to the people suggesting you just use PostgreSQL everywhere.
HTH, Regards.
I know this is a 10+ year old question, but I ran into the same problem recently: Postgres in production and SQLite in development.
The solution was to register an event listener for when the engine calls the "connect" method.
@sqlalchemy.event.listens_for(engine, "connect")
def connect(dbapi_connection, connection_record):
    dbapi_connection.execute('ATTACH "your_data_base_name.db" AS "schema_name"')
Running the ATTACH statement only once will not work, because it affects only a single connection. This is why we need the event listener: to run the ATTACH statement on every new connection.
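Tying it together, a minimal sketch (the file names are placeholders) that ATTACHes one database file per schema used by the models, so schema-qualified names like winter.hockey resolve under SQLite:
import sqlalchemy
from sqlalchemy import create_engine

engine = create_engine("sqlite:///main.db")

@sqlalchemy.event.listens_for(engine, "connect")
def attach_schemas(dbapi_connection, connection_record):
    # one database file per schema referenced by the models
    dbapi_connection.execute('ATTACH DATABASE "winter.db" AS "winter"')
    dbapi_connection.execute('ATTACH DATABASE "summer.db" AS "summer"')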
