Translation from blaze query to GraphQL query - jupyter-notebook

I have a data consumer which is a Jupyter Notebook. Is there any way to translate queries written in Blaze to GraphQL queries?
For example, in Blaze we have:
accounts[accounts.balance < 0].name
and in GraphQL we might have this:
{
  accounts(balance < 0) {
    name
  }
}

GraphQL doesn't expose any built-in filtering capability; it has to be implemented manually in the form of field arguments. So depending on your needs you may define a query field like:
type Query {
  accounts(balanceBelow: Int!): [Account!]
}
or:
input Constraint {
  fieldName: String!
  operator: OperatorEnum!
  value: String!
}

type Query {
  accounts(where: Constraint): [Account!]
}
or something in between (note that a type used as an argument must be declared with the input keyword, hence input Constraint). But if things like custom query filtering and aggregations are common in your domain, you may find GraphQL a cumbersome choice to work with.
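To make the "manual" part concrete, here is a minimal resolver sketch using graphql-js; the accountsData array and its shape are assumptions for illustration, not part of the question:

const { graphql, buildSchema } = require("graphql");

// Hypothetical in-memory data standing in for a real data source.
const accountsData = [
  { name: "alice", balance: -42 },
  { name: "bob", balance: 100 },
];

const schema = buildSchema(`
  type Account { name: String! balance: Int! }
  type Query { accounts(balanceBelow: Int!): [Account!] }
`);

// The "filter" is ordinary resolver code - GraphQL itself does none of it.
const rootValue = {
  accounts: ({ balanceBelow }) =>
    accountsData.filter((a) => a.balance < balanceBelow),
};

graphql({ schema, source: "{ accounts(balanceBelow: 0) { name } }", rootValue })
  .then((result) => console.log(result.data)); // { accounts: [ { name: 'alice' } ] }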

Related

Use Extension to filter OneToMany relationship

I'm using API Platform 3 and Gedmo's Softdeleteable extension. For my entities, I'm using extensions to add $queryBuilder->andWhere('o.deletedAt IS NULL') to all queries.
This works fine, but when using OneToMany relations, API Platform doesn't apply these extensions and therefore shows soft-deleted entities.
Let's take a slightly modified example from their documentation:
#[ApiResource]
class User
{
    #[ORM\OneToMany]
    #[Serializer\Groups(['user'])]
    public Collection $offers;
}
/api/offers/1 correctly shows a 404, because it has been softdeleted. But /api/users/1 shows something like this:
{
    "username": "foo",
    "offers": [
        {
            "@id": "/api/offers/1",
            "deletedAt": "2022-01-27T12:04:45+01:00"
        }
    ]
}
How can I change the query that API Platform uses to fetch the relationships?
You have two methods of achieving this.
Filter hydrated objects:
Inside the existing (or a completely new) getter:
public function getOffers(): Collection
{
    // keep only offers that have not been soft-deleted
    return $this->offers->filter(function (Offer $offer) {
        return $offer->getDeletedAt() === null;
    });
}
Apply criteria to query:
Again, inside the existing (or a completely new) getter:
public function getOffers(): Collection
{
    $criteria = Criteria::create()
        ->andWhere(Criteria::expr()->eq('deletedAt', null));

    return $this->offers->matching($criteria);
}
The Criteria method is preferable if your user could have LOTS of deleted offers: the filtering is performed at the database level instead of in PHP after hydration.
Hope this helps.
Btw, both of these methods are well explained in SymfonyCasts tutorials:
https://symfonycasts.com/screencast/api-platform-security/filtered-collection
https://symfonycasts.com/screencast/symfony4-doctrine-relations/collection-criteria

Querying wordpress with meta queries from Gatsby

I'm trying to fetch data from my WordPress backend using a meta query. I'm using this plugin:
https://www.wpgraphql.com/extenstion-plugins/wpgraphql-meta-query/
I can run my query in the GraphiQL IDE in WordPress, but not in Gatsby's GraphiQL tool.
I get this error:
Unknown argument "where" on field "Query.allWpPage"
Query:
query test {
  allWpPage(
    where: {
      metaQuery: {
        relation: OR
        metaArray: [
          { key: "some_value", value: null, compare: EQUAL_TO }
          { key: "some_value", value: "536", compare: EQUAL_TO }
        ]
      }
    }
  ) {
    edges {
      node {
        id
        uri
      }
    }
  }
}
I've tried deleting the cache directory and rebuilding, which didn't help.
And just to clarify: I have no problem running other queries and getting ACF data and whatnot. The only problem I have (right now) is exposing the where argument to Gatsby.
The where filter is restricted in Gatsby. Here you have a detailed list of comparators; they are:
eq (equals)
ne (not equals)
in (is in)
nin (is not in)
lt, lte, gt, gte (less than, less than or equal, greater than, greater than or equal, respectively)
regex, glob (regular expression and glob pattern matching)
elemMatch (element matches)
On the other hand, there is a list of filters available. In your case, filter is what you are looking for. Your final query should look like:
query test {
  allWpPage(filter: { uri: { ne: "" } }) {
    edges {
      node {
        id
        uri
      }
    }
  }
}
Of course, adapt the filter to your needs; elemMatch may also work for you, as sketched below. You will need to add a condition for each property of the object you're trying to match.
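For example, a hypothetical elemMatch query in a Gatsby page component; this assumes your pages expose an array-of-objects field in Gatsby's schema (metaFields here is an assumed name - check what GraphiQL actually shows for your site):

import * as React from "react"
import { graphql } from "gatsby"

// Renders the raw result - the point is the page query below.
export default function PageList({ data }) {
  return <pre>{JSON.stringify(data.allWpPage, null, 2)}</pre>
}

// elemMatch matches pages where at least one element of the (assumed)
// metaFields array satisfies every listed condition.
export const query = graphql`
  query {
    allWpPage(
      filter: {
        metaFields: {
          elemMatch: {
            key: { eq: "some_value" }
            value: { eq: "536" }
          }
        }
      }
    ) {
      edges {
        node {
          id
          uri
        }
      }
    }
  }
`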
Why is where restricted?
Because it came from Sift, a library that Gatsby used to support MongoDB-style queries, where where is available. Since Gatsby 2.23.0 (June 2020) this library is no longer used. More details at History and Sift:
"For a long time Gatsby used the Sift library through which you can use MongoDB queries in JavaScript. Unfortunately Sift did not align with how Gatsby used it and so a custom system was written to slowly replace it. This system was called "fast filters" and as of gatsby@2.23.0 (June 2020) the Sift library is no longer used."

DynamoDB PartiQL pagination using SDK

I'm currently working on pagination in DynamoDB using the JS AWS SDK's executeStatement with PartiQL, but my returned object does not contain a NextToken (only the Items array), which is needed to paginate.
This is what the code looks like (pretty simple):
const statement = `SELECT "user", "id" FROM "TABLE-X" WHERE "activity" = 'XXXX'`;
const params = { Statement: statement };

try {
  const posted = await dynamodb.executeStatement(params).promise();
  return { posted: posted };
} catch (err) {
  throw new Error(err);
}
I was wondering if anyone has dealt with pagination using PartiQL for DynamoDB.
Could this be because my partition key is a string type?
Still trying to figure it out.
Thanks, in advance!
It turns out that if you want a NextToken, DO NOT use version 2 of the AWS SDK for JavaScript; use version 3. Version 3 will always return a NextToken field, even if it is undefined.
From there you can figure out your limits, etc. (a single response is capped at 1 MB by default, at which point you actually get a NextToken). You'll need to look into the DynamoDB v3 ExecuteStatement method; a sketch follows.
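A minimal pagination sketch with the v3 client (@aws-sdk/client-dynamodb); the statement is the one from the question, and note that the low-level client returns items in DynamoDB's AttributeValue format:

const { DynamoDBClient, ExecuteStatementCommand } = require("@aws-sdk/client-dynamodb");

const client = new DynamoDBClient({});

async function queryAllPages() {
  const items = [];
  let nextToken; // undefined on the first request
  do {
    const response = await client.send(new ExecuteStatementCommand({
      Statement: `SELECT "user", "id" FROM "TABLE-X" WHERE "activity" = 'XXXX'`,
      NextToken: nextToken,
    }));
    items.push(...(response.Items ?? []));
    nextToken = response.NextToken; // undefined once the last page is reached
  } while (nextToken);
  return items;
}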
You can also look into dynamodb paginators, which I've never used, but plan on studying.

make book.randomID key in amazon dynamodb table

For some reason I want to use book.randomID as the key in an Amazon DynamoDB table, using Java code. When I tried, it added a new field to the item named "book.randomID".
List<KeySchemaElement> keySchema = new ArrayList<KeySchemaElement>();
keySchema.add(new KeySchemaElement()
    .withAttributeName("conceptDetailInfo.conceptId")
    .withKeyType(KeyType.HASH)); // partition key
And here is the JSON structure:
{
  "_id": "123",
  "book": {
    "chapters": {
      "chapterList": [
        {
          "_id": "11310674",
          "preferred": true,
          "name": "1993"
        }
      ],
      "count": 1
    },
    "randomID": "1234"
  }
}
So is it possible to use such an element as a key? If yes, how?
When creating DynamoDB tables AWS limits key attributes to the types String, Binary and Number. Your attribute book.randomID seems to be a String, and as long as it's not one of the other data types like List, Map or Set you should be fine.
Note, however, that a key must be a top-level attribute: a key named "book.randomID" is a literal attribute name that happens to contain a dot, not a path into the nested book map, which is why your attempt added a new field to the item. Just going to the AWS console and trying it out worked for me.
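If you want the nested value to drive the key, the usual workaround is to copy it into that top-level attribute whenever you write the item. A sketch with the JavaScript v3 document client (the table name and item shape are assumptions based on the question):

const { DynamoDBClient } = require("@aws-sdk/client-dynamodb");
const { DynamoDBDocumentClient, PutCommand } = require("@aws-sdk/lib-dynamodb");

const docClient = DynamoDBDocumentClient.from(new DynamoDBClient({}));

async function saveBook(item) {
  await docClient.send(new PutCommand({
    TableName: "books", // hypothetical table name
    Item: {
      ...item,
      // duplicate the nested value into the top-level partition-key attribute
      "book.randomID": item.book.randomID,
    },
  }));
}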

How to regularly update a meteor collection from an external JSON feed?

I'm building an app that pulls its data in from a couple of external JSON feeds that I'm supplying from another domain.
I want to build collections on the server so that I can use $near location queries, but I'm going to need to check these JSON feeds regularly to see if they have updated, and if they have I'll need to update my collection accordingly.
Is there a 'standard' way of doing this (i.e. checking that what I have on the server is the latest version)?
edit... more info!
My JSON feeds both have a unique identifier (called url_title, and definitely unique across all feeds) that I could use in an upsert, if I could just figure out how to format the query. Would that work in combination with setInterval? One of my feeds looks like this:
[{
  "offer_title": "NEW TEST OFFER!!!",
  "url_title": "new-test-offer_144",
  "offer_desc": "description here",
  "offer_start": "2014-07-10",
  "offer_end": "2014-07-12",
  "offer_category": "food-drink",
  "offer_advertiser": "Testing Corp",
  "location": {
    "type": "Point",
    "coordinates": [-5.53596, 50.12121]
  }
} ...
And I'm pulling in the data (currently client-side) like this:
Offers = new Meteor.Collection(null);

HTTP.get("http://myappurl.com/offers.json", function (err, result) {
  if (result.statusCode === 200) {
    var respJson = JSON.parse(result.content);
    for (var i = 0; i < respJson.length; i++) {
      console.log('inserting ' + respJson[i]['offer_advertiser']);
      Offers.insert(respJson[i]);
    }
    // Commented out because I need to move this server side
    // Offers._ensureIndex({ location: "2dsphere" });
  } else {
    console.log(result.statusCode);
  }
});
If I wrap this in Meteor.setInterval and give it a relatively long interval time, would that work? What would the insert/upsert format look like?
Quick solution:
Mirror your external feeds with a Mongo collection: create a named server-side collection and periodically update it from the external feed, then write your queries and publications as normal. A sketch follows.
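For instance, a minimal server-side sketch (the interval length and collection name are arbitrary choices; url_title is the unique key from your feed, and classic Meteor lets you call HTTP.get synchronously on the server):

Offers = new Meteor.Collection("offers");

if (Meteor.isServer) {
  Meteor.startup(function () {
    // server-side index to support $near location queries
    Offers._ensureIndex({ location: "2dsphere" });
  });

  Meteor.setInterval(function () {
    var result = HTTP.get("http://myappurl.com/offers.json");
    if (result.statusCode === 200) {
      JSON.parse(result.content).forEach(function (offer) {
        // upsert keyed on the feed's unique identifier, so re-fetching
        // updates existing documents instead of duplicating them
        Offers.upsert({ url_title: offer.url_title }, { $set: offer });
      });
    }
  }, 60 * 60 * 1000); // re-check the feed hourly
}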
