DynamoDB PartiQL does not yet support an IFNULL/COALESCE-equivalent built-in function (even though COALESCE is a reserved keyword, and DynamoDB's sibling QLDB appears to support it in its PartiQL implementation, so it seems theoretically possible). Using only PartiQL, is there any way to construct an equivalent of SQL's
SELECT COALESCE(col1, col2) FROM sometable;
with the functions/syntax currently supported?
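One partial workaround, assuming a client-side merge is acceptable (the table and column names below are the hypothetical ones from the example above): DynamoDB PartiQL does support the IS MISSING / IS NOT MISSING predicates, so the two fallback cases can be fetched separately and combined in application code.
-- rows where col1 is present: take col1
SELECT col1 FROM sometable WHERE col1 IS NOT MISSING;
-- rows where col1 is absent: fall back to col2
SELECT col2 FROM sometable WHERE col1 IS MISSING;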
I understand how to use dplyr::select_if() and dplyr::mutate_at(). But I don't understand what dplyr::select_at() provides that a basic select() doesn't provide.
As far as I understand, the verb_at() functions allow you to use the select helper functions (like matches() and starts_with()). But select() already supports the select helpers, so why would you use select_at() instead of just select()?
The primary benefit of select_at() (as opposed to the vanilla select()) is that it provides a .funs= parameter, so that you can apply a function, e.g. toupper(), to rename columns as you select them.
This makes a ton of sense for something like rename_at(). Providing similar functionality with select_at() makes sense from a tidyverse-style "everything works the same" perspective.
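For a concrete sketch (using the built-in mtcars data set):
library(dplyr)

# select_at() selects and renames in one step, via .funs=
mtcars %>% select_at(vars(starts_with("d")), .funs = toupper)
# columns come back renamed: DISP, DRAT

# plain select() with the same helper can only select
mtcars %>% select(starts_with("d"))
# columns keep their original names: disp, drat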
Does GraphDB offer configurability on materialization strategies to allow for non-monotonic entailment? I.e. adding new explicit statements to the graph might require retracting existing implicit statements that were already inferred based on previous assertions made to the graph.
From the GraphDB documentation this does indeed seem possible to some extent:
GraphDB stores explicit and implicit statements, i.e., the statements inferred (materialised) from the explicit statements. So, when explicit statements are removed from the repository, any implicit statements that rely on the removed statement must also be removed.
I.e., if a new triple causes a previously implicit or explicit triple to be removed, any implicit triples that depend on the removed triple will also be removed.
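A minimal illustration of that retraction behaviour (a hypothetical RDFS example; the :alice/:Student names are made up):
@prefix :     <http://example.org/> .
@prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

# explicit statements loaded into the repository
:alice rdf:type :Student .
:Student rdfs:subClassOf :Person .

# with RDFS reasoning, GraphDB materialises the implicit statement
#   :alice rdf:type :Person .
# Deleting ":alice rdf:type :Student ." also retracts that inferred
# statement, provided no other explicit statement still supports it.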
You can read more on GraphDB reasoning strategies here.
The GraphDB inference engine (and its rule language) does not support negation in any form, so non-monotonic reasoning is not supported by it.
Datalog is a lovely language for querying relational data. It is simple, clear, composes well, and supports recursive queries without additional syntax.
SQLite is a fantastic embedded database with what seems to be a powerful query engine able to handle recursive queries – see the examples at the bottom of that page for generating Mandelbrot sets and finding all possible solutions to Sudoku puzzles!
I'm interested to know if there is a fairly standard way to translate from a Datalog query into recursive SQL as supported by SQLite, or if there are libraries that provide this facility.
DLVDB is an interpreter for recursive Datalog that uses an ODBC database connection for its extensional data: http://www.dlvsystem.com/dlvdb/
Apart from that, the paper
S. Ceri, G. Gottlob, and L. Tanca. 1989. What You Always Wanted to Know About Datalog (And Never Dared to Ask). IEEE Trans. on Knowl. and Data Eng. 1, 1 (March 1989), 146-166. http://dx.doi.org/10.1109/69.43410
provides theoretical background and some pointers for translating Datalog into relational algebra.
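As a sketch of the general idea (assuming a hypothetical table edge(src, dst)), a linearly recursive Datalog program maps quite directly onto a SQLite recursive CTE:
-- Datalog:
--   path(X, Y) :- edge(X, Y).
--   path(X, Y) :- edge(X, Z), path(Z, Y).
WITH RECURSIVE path(src, dst) AS (
  SELECT src, dst FROM edge        -- base rule
  UNION                            -- UNION (not UNION ALL) gives Datalog's set semantics
  SELECT edge.src, path.dst        -- recursive rule
  FROM edge JOIN path ON edge.dst = path.src
)
SELECT * FROM path;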
We're building an application with a DocumentDB backend that will get lots of hits, and its responsiveness is absolutely paramount.
I wanted to see if there was a "preferred" approach, from a performance standpoint, to querying DocumentDB. Should we use SQL for our queries or LINQ?
Theoretically, there shouldn't be a noticeable difference in responsiveness.
LINQ is simply a fluent wrapper API that, given a LINQ expression, generates a SQL expression. You can view the generated SQL expression by calling ToString() on the LINQ expression. The performance hit of converting a LINQ expression to SQL is negligible compared to the time it takes to perform I/O.
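As a quick sketch of that inspection trick (Book, client, and collectionLink are hypothetical names; assumes the DocumentDB .NET SDK, where CreateDocumentQuery<T>() returns an IQueryable):
// 'client' is an existing DocumentClient; 'collectionLink' identifies the target collection
var query = client.CreateDocumentQuery<Book>(collectionLink)
                  .Where(b => b.Title == "War and Peace");

// ToString() on the queryable reveals the SQL that the LINQ provider generated, e.g.
// {"query":"SELECT * FROM root WHERE (root[\"Title\"] = \"War and Peace\")"}
Console.WriteLine(query.ToString());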
In practice, the translation from LINQ may result in a sub-optimal SQL expression in certain corner cases. For those cases, working directly with SQL is preferable.
Is it possible to use Neo4j's Cypher query language (or another declarative language) but still reference custom code snippets (for instance, to do custom WHERE-clauses based on, say, the result of an ElasticSearch/Lucene search)?
If other graph databases have declarative languages that support this, please shoot. I'm in no way bound to Neo4j.
Background:
I'm doing some research whether to include Neo4J in my current stack, which in the backend already consists of ElasticSearch, MongoDB and Redis.
Particularly with Redis's fast set-intersection capability, I could potentially create some crude graph-like querying (although likely not as performant as a graph DB). I'm a long way into defining a DSL, with the types of queries to support.
However, I'm designing a CMS, so the content types, and the relationships between these content types that I would like to model with a graph, are not known beforehand.
Therefore, the ideal case of populating the needed Redis collections (with Mongo as the source) to support all my querying over content types and relationships that are not known at design time will be messy, to say the least. Hope you're still following.
Which leads me to conclude that another solution may be needed, which is why I'm looking at graph DBs, and Neo4j in particular (if others are potentially better suited to my use case, do shoot).
If you model your content types as nodes, you don't need to know them beforehand.
User-defined functions in JavaScript are planned for Cypher later this year.
You can use a language like Gremlin to declare your functions in Groovy, though.
You can store the node IDs in Redis and then pass an array of IDs returned by Redis into a Cypher query for further processing:
// n is bound to the nodes whose IDs came back from Redis
START n=node({ids})
MATCH n-[:HAS_TYPE]->content_type<-[:HAS_TYPE]-other_content
RETURN content_type, count(*)
ORDER BY count(*) DESC
LIMIT 10
parameters: {"ids": [1,2,3,5]}