I can successfully use the Realm GraphQL Client with a realm path like myinstance.us1.cloud.realm.io/~/realmName, but when trying to use a global path, i.e., myinstance.us1.cloud.realm.io/realmName, I always get a 502 response from the server.
Any thoughts?
TL;DR:
I have been fighting, with no luck, to get data from a global/shared realm, i.e., one with no /~/ in the realm path. I always get a 502 Bad Gateway in response to executing a query. If I add the /~/ to the realm path, a connection is established and a new, empty user-specific realm is created (as expected), but then queries fail because the realm is empty (also expected).
Does the GraphQL Service provided by Realm Cloud support connecting to global/shared realms? I’ve skimmed over the source for both the server and client and did not see any specific reason why global/shared would not be supported.
I also tried passing isQueryBasedSync to the GraphQLConfig, which results in a connection and a successfully executed query, but the query responses are always empty.
Any advice is greatly appreciated.
I got past the 502 Bad Gateway error using the undocumented API(s) shown below (I had to find them by reading the current code in the realm-graphql repo):
import { Credentials, User, GraphQLConfig } from 'realm-graphql-client';
// Authenticate against the Realm Object Server instance with username/password credentials.
const credentials = Credentials.usernamePassword(<username>, <password>);
const user = await User.authenticate(credentials, <server>);
// The third and fourth arguments are undocumented; the last one appears to toggle query-based sync.
const config = await GraphQLConfig.create(user, <realm_name>, undefined, false);
const client = config.createApolloClient();
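For completeness, the queries themselves go through Apollo in the usual way; a minimal sketch of what I am executing (the items type and its fields are placeholders for my actual schema):
import gql from 'graphql-tag';

// Placeholder query; "items", "itemId" and "name" stand in for types/fields from my schema.
const response = await client.query({
  query: gql`
    query {
      items {
        itemId
        name
      }
    }
  `
});
console.log(response.data);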
However, I now frequently receive the following error during GraphQLConfig.create execution:
network timeout at: https://.cloud.realm.io/auth
Additionally, I posted this question on the Realm Forums (which you may want to follow) and received the following response:
Getting a 502 in the GraphQL service usually means you were trying to open a very large Realm that runs into some resourcing limits.
I am still waiting for more information from the Realm team and will update this answer accordingly.
Related
I've been using @aws-sdk/client-dynamodb server-side (SvelteKit / Node.js), connecting to a localhost Docker container running amazon/dynamodb-local:latest, which works well. I used the AWS CLI to configure tables, etc. I've created the client using the simplest configuration:
const client = new DynamoDBClient({ endpoint: 'http://localhost:8000' });
This works server-side, but when the same code is executed client-side along with a command, I get a message that the region is missing. I've tried passing region: 'none', but then I get a message that the credentials are missing. Adding dummy credentials enables the command to execute, but I don't get the expected response. For example, sending the ListTablesCommand returns an empty array. If I do the same from the AWS CLI, I get the correct response.
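For reference, the client-side attempt boils down to something like this (the region and credentials are the dummy values mentioned above):
import { DynamoDBClient, ListTablesCommand } from '@aws-sdk/client-dynamodb';

// Dummy region and credentials, added only to satisfy the client-side checks.
const client = new DynamoDBClient({
  endpoint: 'http://localhost:8000',
  region: 'none',
  credentials: { accessKeyId: 'dummy', secretAccessKey: 'dummy' }
});

// Runs inside an async context in the browser.
const result = await client.send(new ListTablesCommand({}));
console.log(result.TableNames); // [] in the browser, even though the CLI lists my tables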
Does the DynamoDB client run client-side, i.e., in the browser? Or am I missing something else?
No, it doesn't run in a browser. You will need API Gateway and some backend code to connect a browser to DynamoDB.
I am using the Auth0 Management API v2 with .NET Core 2.1 to fetch users' logs. Now I want to fetch only success and failed logs, and for that my URL is:
{domain}/api/v2/users/{user_id}/logs?q=type:s type:f&page=1&per_page=10&include_totals=true
But this URL just returns 10 records, no matter whether they are success logs or failed logs. I have also tried changing it to q=type:s only, but even that didn't work. q=type:\"s\" is not working either. The other query params are working fine. So I want to know: what am I doing wrong? What is the correct syntax?
Update:
By exploring the Management API docs, I have realized that the user logs API doesn't accept the query param (q=). So is there any way to add a search-by-type criterion for user logs?
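One workaround I am considering (not verified yet): the tenant-wide log search endpoint, GET /api/v2/logs, does accept a q parameter in Lucene syntax, so the same filter could perhaps be expressed by combining user_id and type there. A rough sketch (shown as JavaScript just to illustrate the URL shape; all values are placeholders):
// Sketch only: querying the tenant-level logs endpoint instead of the per-user one.
const domain = '<your_tenant>.auth0.com';
const accessToken = '<management_api_token>';
const userId = '<user_id>';

const q = encodeURIComponent(`user_id:"${userId}" AND (type:s OR type:f)`);
const url = `https://${domain}/api/v2/logs?q=${q}&page=0&per_page=10&include_totals=true`;

const response = await fetch(url, {
  headers: { Authorization: `Bearer ${accessToken}` }
});
const logs = await response.json();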
I need to make calls to a REST API service via a BizTalk send adapter. The API simply uses a token in the header for authentication/authorization. I have tested this in a C# console app using HttpClient and it works fine:
// Requires System.Net.Http and Newtonsoft.Json (JsonConvert); jobList is my response DTO.
string apiUrl = "https://api.site.com/endpoint/<method>?";
string dateFormat = "dateFormat = 2017-05-01T00:00:00";

using (var client = new HttpClient())
{
    // The API authenticates with a token passed as a plain HTTP header.
    client.DefaultRequestHeaders.Add("token", "<token>");
    client.DefaultRequestHeaders.Add("Accept", "application/json");

    string finalurl = apiUrl + dateFormat;
    HttpResponseMessage resp = await client.GetAsync(finalurl);

    if (resp.IsSuccessStatusCode)
    {
        string result = await resp.Content.ReadAsStringAsync();
        var rootresult = JsonConvert.DeserializeObject<jobList>(result);
        return rootresult;
    }
    else
    {
        return null;
    }
}
However, I want to use BizTalk to make the call and handle the response.
I have tried using the WCF-HTTP adapter, selecting 'Transport' security (it is an HTTPS site, so security is required(?)) with no credential type specified, and placed the header with the token in the 'Messages' tab of the adapter configuration. This fails, though, with the exception: System.IO.IOException: Authentication failed because the remote party has closed the transport stream.
I have tried googling for this specific scenario and cannot find a solution. I did find this article with suggestions for OAuth handling, but I'm surprised that even with BizTalk 2016 I still have to create a custom assembly for something so simple.
Does anyone know how this might be done in the WCF-HTTP send adapter?
Yes, you have to write a custom endpoint behaviour and add it to the send port. In fact, with the WCF-WebHttp adapter even Basic Auth doesn't work, so I'm currently writing an endpoint behaviour to address this.
One of the issues with OAuth is that there isn't one standard that everyone follows; so far I've had to write two different OAuth behaviours because the providers implemented things differently: one uses a secret and timestamp hashed together to get a token, and the other uses Basic Auth to get a token. Also, with one of them you could get multiple tokens using the same credentials, whereas the other would expire the old token straight away.
Another thing I've had to write a custom behaviour for is the version of TLS the endpoint expects, as by default BizTalk 2013 R2 tries TLS 1.0 and then fails if the web site does not allow it.
You can give feedback to Microsoft that you wish to have this feature by voting on "Add support for OAuth 2.0 / OpenID Connect authentication".
Maybe someone will open source their solution. See Announcement: BizTalk Server embrace open source!
Figured it out. I should have used 'Certificate' for the client credential type.
I just had to:
Add the token in the Outbound HTTP Headers box in the Messages tab, and select 'Transport' security with 'Certificate' as the transport client credential type.
Download the certificate from the API's website via the browser (manually) and install it in the local server's certificate store.
Then select that certificate and its thumbprint in the corresponding fields of the adapter via the 'Browse' buttons (I had to scroll through the available certificates and select the certificate for the API/website I was trying to connect to).
I discovered this by accident when I had Fiddler running and set the adapter's proxy setting to the local Fiddler address (http://localhost:8888). I realized that since Fiddler negotiates the TLS connection/certificate to the remote server (I had enabled TLS 1.2 in Fiddler), messages were able to get through, but not directly between the adapter and the remote API server (when Fiddler WASN'T running).
I am trying to migrate my Parse application over to DigitalOcean and followed this guide:
https://www.digitalocean.com/community/tutorials/how-to-migrate-a-parse-app-to-parse-server-on-ubuntu-14-04
Everything works perfectly fine until I get to the very last section, Test Parse Server (Executing Example Cloud Code).
I tested the sample cloud code that was provided in the tutorial:
Parse.Cloud.define('hello', function(req, res) {
  res.success('Hi');
});
So I got a 'Hi' back in my browser as well as in Postman.
See image here: https://cloudup.com/cH2dbBx1KTo
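For reference, the Postman request in that screenshot boils down to something like this (the server address and app ID are placeholders):
// Calling the cloud function over Parse Server's REST API.
const response = await fetch('http://your_domain_or_ip/parse/functions/hello', {
  method: 'POST',
  headers: {
    'X-Parse-Application-Id': 'YOUR_APP_ID',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({})
});
console.log(await response.json()); // { "result": "Hi" }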

Then I tested the function that uses SendGrid's service to send emails (http://blog.parse.com/announcements/introducing-the-sendgrid-cloud-module/); my cloud code file looks like this:
See image: https://cloudup.com/cD6MNRP3Tft
Now when I try to run my POST request from Postman, I get an error, even on my hello function that was working before.
See image: https://cloudup.com/cIkwJ6552_5
So I looked around and figured out that it's an issue with my SendGrid import:
var sendgrid = require("sendgrid");
sendgrid.initialize("xxxxxx", "xxxxx.");
in these lines.
Does anyone have any experience with DigitalOcean, Parse cloud code, and the SendGrid email service? Please help me out; I will be grateful, as this is the last step left and then I will be done with my migration :)
Cheers,
Tanzeel
You have to specify the server URL in the Parse config file. It is required and could be the reason why you can't run cloud code:
"PARSE_SERVER_URL": "http://localhost:1337/parse"
The URL has to be the same one you are using. There is also an error in the Nginx config in that tutorial; I explained it here: https://serverfault.com/questions/765627/cannot-post-get-over-ssl/766428#766428
So I looked into pm2, and to see real-time logs the command is:
pm2 logs
At first, when I ran the command, I saw some errors; maybe they were there from before:
Then I tried the hello cloud function from the Postman app to test its output in the pm2 logs, and I got the following:
Next I tried to run my sendMail SendGrid function and found out that the API key I had used in my SendGrid function was throwing an error:
ReferenceError: XXXXXXXXXXXX is not defined
So I went back to my cloud code, put quotes around my API key parameter, and passed it as a string to my SendGrid initialize function. Then I retried and got:
[Error: The provided authorization grant is invalid, expired, or revoked]
So I went back to my SendGrid account and made sure that the API key I was using was the correct one, and it seemed to be fine. I tested again and got the same error, so I decided to generate a new API key just in case.
That's when I realized that I was not using the API key but instead the API Key ID:
When we create a new API key on SendGrid, they show us the actual API key only once and ask us to store it somewhere safe:
We can only display the key above one time. Please store it somewhere safe because as soon as you navigate away from this page, we will not be able to retrieve or restore this generated token.
So after I used the actual API key I was able to send emails 😃
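In other words, the working initialization ended up looking like this (the values are placeholders; the second argument is the full API key shown once at creation time, passed as a quoted string, not the API Key ID):
var sendgrid = require("sendgrid");
// Full SendGrid API key, quoted, instead of the unquoted API Key ID I had before.
sendgrid.initialize("SENDGRID_USERNAME", "SG.xxxxxxxxxxxxxxxx");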
But one small issue still remains, and I am not sure whether it's because of Postman (which I am using to run the cloud code) or something in the Parse Server or Nginx, but I am still getting a 502 Bad Gateway as a response.
When I look at the logs for my Parse Server I do see a
parse-wrapper-0 { message: 'success' }
but it never gets back to me in Postman; instead I am getting a 502 error. Not sure why, but the emails are being sent successfully :)
Background:
I have PHP code that runs queries against the Google Analytics API on behalf of my users.
I am using OAuth2 for authentication and storing the users' access tokens in my DB.
My code makes sure not to exceed the per-user quota (10 QPS), and I'm using the "quotaUser" parameter in my queries.
The issue:
About 50% of my queries to GA are responded with error 403 ("insufficientPermissions", "User does not have sufficient permissions for this profile.").
The strange thing is that the other ~50% get their results from GA successfully.
Some important points:
The only thing common to all the successful queries, and likewise to the unsuccessful ones, is the batch they are in: my code sends "batches" of queries (one after another with a very short delay between them) to GA's API, and every batch either passes or fails with 403 as a whole.
Adding/removing permissions in the scope did not solve the issue.
It is worth mentioning that this is not a View ID / Account ID issue, etc., as the same query can pass or fail for the same user and view.
I saw a related unanswered issue here and couldn't find any other truly related issues.
A snippet from my code:
//Create a Google Client
$client = new Google_Client();
$client->setAuthConfigFile($this->secretJson);
$client->addScope(Google_Service_Analytics::ANALYTICS_READONLY, Google_Service_Oauth2::PLUS_LOGIN, Google_Service_Analytics::ANALYTICS);
// Set the access token on the client.
$client->setAccessToken($accessToken);
// Create an authorized analytics service object.
$this->analytics = new Google_Service_Analytics($client);
...
//Run the query
$results = $this->analytics->data_ga->get($id, $startDate, $endDate, $metrics, $opts);
Ido, can you still reproduce this issue? It sounds very much like there is a problem with your access token. I would start troubleshooting by making sure you are setting the same access token for all batch runs and validating that the token is not expired (isAccessTokenExpired()).