Hasura Remote Schema to Database Array relationship returns "invalid input syntax for type integer"

I have a custom GraphQL server running on a separate server, and I wanted to include a query from it in Hasura using the remote schema feature. I've created the remote schema and it is working well, but I also need to create an array relationship between the response of the remote schema query and a table in Postgres.
Currently, it returns the error below.
{ "errors": [ { "extensions": { "code": "data-exception", "path": "$" }, "message": "invalid input syntax for type integer: "["36", "37", "38", "39", "40", "41", "42", "43"]"" } ] }
What should the response of the remote schema query look like so that the related entries can be fetched from the database using the list of ids returned by the remote schema query?
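For what it's worth, the error suggests the remote field is returning the id list as one serialized string, which Hasura then hands to Postgres to compare against an integer column. A sketch of a response shape that should join cleanly, assuming a hypothetical remote field product_ids declared as [Int!] in the remote schema:

{
  "data": {
    "someRemoteQuery": {
      "product_ids": [36, 37, 38, 39, 40, 41, 42, 43]
    }
  }
}

The array relationship can then map each integer in product_ids to the integer id column of the Postgres table.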

Related

Azure Data Factory Cosmos DB sql api 'DateTimeFromParts' is not a recognized built-in function name

I am using a Copy Activity in my Data Factory (V2) to query Cosmos DB (NoSQL/SQL API). I have a where clause that builds a datetime from parts using the DateTimeFromParts datetime function. This query works fine when I execute it in the Cosmos DB Data Explorer query window, but when I use the same query from my Copy Activity I get the following error:
"message":"'DateTimeFromParts' is not a recognized built-in function name."}]}
ActivityId: ac322e36-73b2-4d54-a840-6a55e456e15e, documentdb-dotnet-sdk/2.5.1 Host/64-bit
I am trying to convert a string attribute like '20221231' (which translates to Dec 31, 2022) to a date so I can compare it with the current date. I use DateTimeFromParts to build the date. Is there another way to convert '20221231' to a valid date?
Select * from c where
DateTimeFromParts(StringToNumber(LEFT(c.userDate, 4)), StringToNumber(SUBSTRING(c.userDate,4, 2)), StringToNumber(RIGHT(c.userDate, 2))) < GetCurrentDateTime()
I suspect the error might be because the documentdb-dotnet-sdk is an old version. Is there a way to specify which SDK the activity should use?
I tried to repro this and got the same error.
Instead of changing the format of the userDate column with the DateTimeFromParts function, try converting the output of GetCurrentDateTime() to the userDate column's format.
Workaround query:
SELECT * FROM c
WHERE c.userDate < REPLACE(LEFT(GetCurrentDateTime(), 10), '-', '')
Input data
[
  {
    "id": "1",
    "userDate": "20221231"
  },
  {
    "id": "2",
    "userDate": "20211231"
  }
]
Output data
[
  {
    "id": "2",
    "userDate": "20211231"
  }
]
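For what it's worth, this works because GetCurrentDateTime() returns an ISO-8601 timestamp, so its first 10 characters with the dashes removed have the same yyyyMMdd shape as userDate, making a plain string comparison valid. A quick Python illustration of the same transformation (just to show the string manipulation, not part of the ADF pipeline):

from datetime import datetime, timezone

# An ISO timestamp, analogous to GetCurrentDateTime(), e.g. "2023-01-05T12:34:56+00:00"
now_iso = datetime.now(timezone.utc).isoformat()

# Keep the date part and drop the dashes -> "20230105", same shape as userDate
yyyymmdd = now_iso[:10].replace("-", "")
print(yyyymmdd)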
Apologies for the slow reply here. Holidays slowed getting an answer for this.
There is a workaround that allows you to use SDK v3, which gives you access to the DateTimeFromParts() system function, released in .NET SDK v3.13.0.
Option 1: Use AAD authentication (i.e. a Service Principal or a System or User Managed Identity) for the Linked Service object in ADF to Cosmos DB. This will automatically pick up the .NET SDK v3.
Option 2: Modify the linked service template. First, click on Manage in the ADF designer, next click on Linked Services, then select the connection and click the {} to open the JSON template; you can then modify it and set useV3 to true. Here is an example.
{
  "name": "<CosmosDbV3>",
  "type": "Microsoft.DataFactory/factories/linkedservices",
  "properties": {
    "annotations": [],
    "type": "CosmosDb",
    "typeProperties": {
      "useV3": true,
      "accountEndpoint": "<https://sample.documents.azure.com:443/>",
      "database": "<db>",
      "accountKey": {
        "type": "SecureString",
        "value": "<account key>"
      }
    }
  }
}

MS Graph API v1.0 cannot filter by onPremisesSamAccountName using Python requests

I'm attempting to use Python + requests to talk to the MS Graph API (v1.0) in order to filter user objects by the onPremisesSamAccountName property, but I receive the error below when sending this simple query:
endpoint = "https://graph.microsoft.com/v1.0/users"
query_parameters = {
'$filter': 'onPremisesSamAccountName eq \'somevalue\'',
'$select': 'id,displayName,mail,onPremisesSamAccountName'
}
user_graph_data = requests.get(
endpoint,
headers={'Authorization': 'Bearer ' + result['access_token']},
params=query_parameters
).json()
The response:
{
  "error": {
    "code": "Request_UnsupportedQuery",
    "message": "Unsupported or invalid query filter clause specified for property 'onPremisesSamAccountName' of resource 'User'.",
    "innerError": {
      "date": "...",
      "request-id": "...",
      "client-request-id": "..."
    }
  }
}
I am able to filter using this field in Microsoft's Graph Explorer (https://developer.microsoft.com/en-us/graph/graph-explorer), and the corresponding JavaScript call in the browser's developer console shows a successful call and response based on the onPremisesSamAccountName filter.
The MS Graph docs for v1.0 state that this is a supported field for filtering as well:
Returned only on $select. Supports $filter (eq, ne, NOT, ge, le, in, startsWith).
I'm also able to filter successfully on other fields such as mail (i.e. changing the $filter string from 'onPremisesSamAccountName eq \'somevalue\'' to 'mail eq \'somevalue\'' works just fine), so I don't believe this is a syntax error.
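A likely explanation (an assumption on my part, but consistent with the Graph Explorer behavior): onPremisesSamAccountName is one of the Azure AD properties that is only filterable as an "advanced query", which requires the ConsistencyLevel: eventual header together with $count=true, and Graph Explorer adds that header automatically. A sketch of the request with those two additions, reusing the endpoint and result['access_token'] from above:

import requests

endpoint = "https://graph.microsoft.com/v1.0/users"
query_parameters = {
    '$filter': "onPremisesSamAccountName eq 'somevalue'",
    '$select': 'id,displayName,mail,onPremisesSamAccountName',
    '$count': 'true'  # advanced queries against directory objects require $count=true
}
user_graph_data = requests.get(
    endpoint,
    headers={
        'Authorization': 'Bearer ' + result['access_token'],
        'ConsistencyLevel': 'eventual'  # header that Graph Explorer sends implicitly
    },
    params=query_parameters
).json()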

Using nginx to redirect dynamic request

I have a Druid service running on my local machine on port 8082, which I query as follows:
Method POST: http://localhost:8082/druid/v2/?pretty
Body:
{
  "queryType": "topN",
  "dataSource": "some_source",
  "intervals": ["2015-09-12/2015-09-13"],
  "granularity": "all",
  "dimension": "page",
  "metric": "edits",
  "threshold": 25,
  "filter": {
    "type": "and",
    "fields": [
      {
        "type": "selector",
        "dimension": "pix_id",
        "value": "1234"
      }
    ]
  }
}
Hitting this query gives me a list of records based on the value of the dimension 'pix_id'.
Now, I want to set up nginx so that the external application has no clue about my Druid service. I just want the external application to hit the URL:
http://localhost:80/pix_id/98765
This URL should dynamically generate a JSON body with the above-mentioned pix_id, send the request to Druid, and return the response to the user.
Is it possible to do this in nginx?
Yes, you can do this, but I would rather suggest putting a PHP or Python script in between to serve the results (see the sketch after this list).
So the setup would be:
Have a PHP page receive the request.
Make a curl call from PHP to Druid, locally.
Get the result and pass on the response.
There are multiple benefits of doing this, e.g.:
You completely mask Druid, and you are not necessarily limited to Druid.
You can do more calculations in PHP before sending the request to Druid.
Caching at the PHP end.
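As a rough illustration of that middle layer, here is a minimal Python sketch using Flask and requests; the route path, Druid URL, and query body come from the question, while the framework choice and port are assumptions, not a definitive implementation:

import requests
from flask import Flask, jsonify

app = Flask(__name__)
DRUID_URL = "http://localhost:8082/druid/v2/?pretty"

@app.route("/pix_id/<pix_id>")
def query_druid(pix_id):
    # Druid topN query from the question, with the requested pix_id injected
    body = {
        "queryType": "topN",
        "dataSource": "some_source",
        "intervals": ["2015-09-12/2015-09-13"],
        "granularity": "all",
        "dimension": "page",
        "metric": "edits",
        "threshold": 25,
        "filter": {
            "type": "and",
            "fields": [
                {"type": "selector", "dimension": "pix_id", "value": pix_id}
            ]
        }
    }
    # Forward to Druid and relay the response; the caller never sees Druid
    resp = requests.post(DRUID_URL, json=body)
    return jsonify(resp.json()), resp.status_code

if __name__ == "__main__":
    # Port 80 typically needs elevated privileges; any free port works behind nginx
    app.run(port=80)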

What is the issue in this Google Maps Engine query?

I need to query some data with a where clause. As per the Google Maps Engine API, I have the request below:
https://www.googleapis.com/mapsengine/v1/tables/14538994882799551513-11853667273131550346/features?where=gx_id%3D900
For the above URL it says BAD REQUEST for where=gx_id=900, and the response is:
{
  "error": {
    "errors": [
      {
        "domain": "global",
        "reason": "invalid",
        "message": "The value is invalid.",
        "locationType": "parameter",
        "location": "query"
      }
    ],
    "code": 400,
    "message": "The value is invalid."
  }
}
Please suggest what is wrong with this URL.
You use a number in your query, but this particular gx_id seems to be of type String. Enclose the number in single quotes:
https://www.googleapis.com/mapsengine/v1/tables/14538994882799551513-11853667273131550346/features?where=gx_id%3D%27900%27
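For illustration, the %27 in that URL is just a percent-encoded single quote; if you build the request in Python, requests does the encoding for you (table ID taken from the question, everything else a sketch):

import requests

table_id = "14538994882799551513-11853667273131550346"
endpoint = "https://www.googleapis.com/mapsengine/v1/tables/" + table_id + "/features"

# requests percent-encodes the value, producing where=gx_id%3D%27900%27
params = {"where": "gx_id='900'"}
resp = requests.get(endpoint, params=params)
print(resp.status_code, resp.json())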
Also, you don't have to worry about creating and keeping track of your IDs; the API says you need to include gx_id only to make sure that no feature is sent twice.
Therefore, you can just throw in a string representing your system's current time, for example.

Google Cloud Datastore runQuery returning 412 "no matching index found"

** UPDATE **
Thanks to Alfred Fuller for pointing out that I need to create a manual index for this query.
Unfortunately, using the JSON API from a .NET application, there does not appear to be an officially supported way of doing so. In fact, there does not appear to be an official way to do this at all from an app outside of App Engine, which is strange since the Cloud Datastore API was designed to allow access to the Datastore from outside App Engine.
The closest hack I could find was to POST the index definition using RPC to http://appengine.google.com/api/datastore/index/add. Can someone give me the raw spec for how to do this exactly (i.e. URL parameters, what exactly should the body look like, etc), perhaps using Fiddler to inspect the call made by appcfg.cmd?
** ORIGINAL QUESTION **
According to the docs, "a query can combine equality (EQUAL) filters for different properties, along with one or more inequality filters on a single property".
However, this query fails:
{
  "query": {
    "kinds": [
      {
        "name": "CodeProse.Pogo.Tests.TestPerson"
      }
    ],
    "filter": {
      "compositeFilter": {
        "operator": "and",
        "filters": [
          {
            "propertyFilter": {
              "operator": "equal",
              "property": {
                "name": "DepartmentCode"
              },
              "value": {
                "integerValue": "123"
              }
            }
          },
          {
            "propertyFilter": {
              "operator": "greaterThan",
              "property": {
                "name": "HourlyRate"
              },
              "value": {
                "doubleValue": 50
              }
            }
          },
          {
            "propertyFilter": {
              "operator": "lessThan",
              "property": {
                "name": "HourlyRate"
              },
              "value": {
                "doubleValue": 100
              }
            }
          }
        ]
      }
    }
  }
}
with the following response:
{
  "error": {
    "errors": [
      {
        "domain": "global",
        "reason": "FAILED_PRECONDITION",
        "message": "no matching index found.",
        "locationType": "header",
        "location": "If-Match"
      }
    ],
    "code": 412,
    "message": "no matching index found."
  }
}
The JSON API does not yet support local index generation, but we've documented a process that you can follow to generate the xml definition of the index at https://developers.google.com/datastore/docs/tools/indexconfig#Datastore_Manual_index_configuration
Please give this a shot and let us know if it doesn't work.
This is a temporary solution that we hope to replace with automatic local index generation as soon as we can.
The error "no matching index found." indicates that an index needs to be added for the query to work. See the auto index generation documentation.
In this case you need an index with the properties DepartmentCode and HourlyRate (in that order).
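For illustration, expressed as an index.yaml entry that index might look like the sketch below (kind name taken from the query above; treat this as a sketch rather than the exact generated file):

indexes:
- kind: CodeProse.Pogo.Tests.TestPerson
  properties:
  - name: DepartmentCode
  - name: HourlyRate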
For gcloud-node, I fixed it with these three links:
https://github.com/GoogleCloudPlatform/gcloud-node/issues/369
https://github.com/GoogleCloudPlatform/gcloud-node/blob/master/system-test/data/index.yaml
and, most importantly, https://cloud.google.com/appengine/docs/python/config/indexconfig#Python_About_index_yaml to write your index.yaml file.
As explained in the last link, an index is what allows complex queries to run faster, by storing the result set of the query. When you get "no matching index found", it means you tried to run a complex query involving order or filter. To make your query work, you need to create the index in the Google Datastore indexes by writing a config file that defines the indexes representing the query you are trying to run. Here is how to fix it:
Create an index.yaml file, in a folder named for example indexes in your app directory, following the directives for the Python config file (https://cloud.google.com/appengine/docs/python/config/indexconfig#Python_About_index_yaml) or drawing inspiration from the gcloud-node tests (https://github.com/GoogleCloudPlatform/gcloud-node/blob/master/system-test/data/index.yaml).
Create the indexes from the config file with this command:
gcloud preview datastore create-indexes indexes/index.yaml
(see https://cloud.google.com/sdk/gcloud/reference/preview/datastore/create-indexes)
Wait for the indexes to serve in your developer console under Cloud Datastore/Indexes; the interface should display "serving" once the index is built.
Once it is serving, your query should work.
For example for this query:
var q = ds.createQuery('project')
  .filter('tags =', category)
  .order('-date');
index.yaml looks like:
indexes:
- kind: project
  ancestor: no
  properties:
  - name: tags
  - name: date
    direction: desc
Try not to order the result. After removing orderby(), it worked for me.
