I've been using @aws-sdk/client-dynamodb server-side (SvelteKit / Node.js), connecting to a localhost Docker container running amazon/dynamodb-local:latest, which works well. I used the AWS CLI to configure tables, etc. I've created the client using the simplest configuration:
const client = new DynamoDBClient({ endpoint: 'http://localhost:8000' });
This works server-side, but when the same is executed client-side along with a command, I get a message that the region is missing. I've tried passing region: 'none', but then I get a message that the credentials are missing. Adding dummy credentials enables the command to execute, but I don't get the expected response. For example, sending the ListTablesCommand returns an empty array. If I do the same from the AWS CLI, I get the correct response.
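For reference, roughly what the browser-side configuration ends up looking like once the region and dummy credentials are added (the values here are placeholders, nothing special I used):

import { DynamoDBClient, ListTablesCommand } from '@aws-sdk/client-dynamodb';

// Placeholder region and dummy credentials, only there to satisfy the SDK in the browser
const client = new DynamoDBClient({
  endpoint: 'http://localhost:8000',
  region: 'us-east-1',
  credentials: { accessKeyId: 'dummy', secretAccessKey: 'dummy' },
});

// Executes without throwing, but TableNames comes back as an empty array
const { TableNames } = await client.send(new ListTablesCommand({}));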
Does the DynamoDB client run client-side, i.e., in the browser? Or am I missing something else?
No, it doesn't run in a browser. You will need API Gateway and some backend code to connect a browser to DynamoDB.
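A minimal sketch of what that backend piece could look like in your SvelteKit setup (the route path is just an example; the idea is that only the server route talks to DynamoDB and the browser calls the route):

// src/routes/api/tables/+server.js (example route path)
import { json } from '@sveltejs/kit';
import { DynamoDBClient, ListTablesCommand } from '@aws-sdk/client-dynamodb';

const client = new DynamoDBClient({ endpoint: 'http://localhost:8000' });

export async function GET() {
  // The browser fetches /api/tables; only the server talks to DynamoDB
  const { TableNames } = await client.send(new ListTablesCommand({}));
  return json({ tables: TableNames });
}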
So I am using Prisma as an ORM on my project to communicate with the database that I set up with AWS. Not happy with the AWS service, I am now switching my database to railway.app, which is working out well for me. However, I had set up a Prisma Data Proxy on my app with the AWS connection string, and now that I don't seem to want/need it anymore I removed it, but I am getting an error:
error - InvalidDatasourceError: Datasource URL should use Prisma:// protocol.
If you are not using the Data Proxy, remove the data proxy from the preview features in your
schema and ensure that PRISMA_CLIENT_ENGINE_TYPE environment variable is not set to data proxy.
Since getting the error I have removed previewFeatures = ["dataProxy"] from the schema.prisma file to make it look like this (back to what it was before configuring the Data Proxy):
generator client {
  provider = "prisma-client-js"
}

datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}
But the error still persists. How do I fix this?
Running prisma generate fixes this issue.
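If it helps, after removing the preview feature and re-running prisma generate (and making sure PRISMA_CLIENT_ENGINE_TYPE is not set), a quick sanity check like this should connect over the plain postgresql:// URL again (just a sketch):

// Quick check after re-running prisma generate
const { PrismaClient } = require('@prisma/client');
const prisma = new PrismaClient();

async function main() {
  await prisma.$connect(); // should now use the postgresql:// URL, not the Data Proxy
  console.log('Connected without the Data Proxy');
  await prisma.$disconnect();
}

main().catch(console.error);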
I can successfully use the Realm GraphQL Client with a realm path like myInstance.us1.cloud.realm.io/~/realmName but when trying to use a global path, i.e., myinstance.us1.cloud.realm.io/realmName, I always get a 502 response from the server.
Any thoughts?
TLDR;
I have been fighting with getting data from a global/shared realm (i.e., no /~/ in the realm path) with no luck. I always get a 502 Bad Gateway in response to executing a query. If I add the /~/ to the realm path, a connection is established and a new, empty user-specific realm is created (as expected), but then queries fail because the realm is empty (also expected).
Does the GraphQL Service provided by Realm Cloud support connecting to global/shared realms? I’ve skimmed over the source for both the server and client and did not see any specific reason why global/shared would not be supported.
I also tried passing isQueryBasedSync to the GraphQLConfig, which results in a connection and a successfully executed query, but the query responses are always empty.
Any advice is greatly appreciated.
I got past the 502 Bad Gateway error using the undocumented API(s) shown below (I had to find them by reading the current code in the realm-graphql repo):
const credentials = Credentials.usernamePassword(<username>, <password>);
const user = await User.authenticate(credentials, <server>);
const config = await GraphQLConfig.create(user, <realm_name>, undefined, false);
const client = config.createApolloClient();
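A sketch of how queries are then issued against that client (the query and field names below are placeholders, not my actual schema):

const gql = require('graphql-tag');

// Placeholder query; the type and fields depend on the realm's schema
const response = await client.query({
  query: gql`
    query {
      items {
        itemId
        name
      }
    }
  `,
});
console.log(response.data);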
However, I now frequently receive the following error during GraphQLConfig.create execution:
network timeout at: https://.cloud.realm.io/auth
Additionally, I posted this question on the Realm Forums, which you may want to follow, and I received the following response:
Getting a 502 in the GraphQL service usually means you were trying to open a very large Realm that runs into some resourcing limits.
I am still waiting for more information from the Realm team and will update this answer accordingly.
I need to make calls to a REST API service via a BizTalk send adapter. The API simply uses a token in the header for authentication/authorization. I have tested this in a C# console app using HttpClient and it works fine:
string apiUrl = "https://api.site.com/endpoint/<method>?";
string dateFormat = "dateFormat = 2017-05-01T00:00:00";

using (var client = new HttpClient())
{
    // The API only needs this token header for authentication/authorization
    client.DefaultRequestHeaders.Add("token", "<token>");
    client.DefaultRequestHeaders.Add("Accept", "application/json");

    string finalurl = apiUrl + dateFormat;
    HttpResponseMessage resp = await client.GetAsync(finalurl);
    if (resp.IsSuccessStatusCode)
    {
        string result = await resp.Content.ReadAsStringAsync();
        var rootresult = JsonConvert.DeserializeObject<jobList>(result);
        return rootresult;
    }
    else
    {
        return null;
    }
}
However, I want to use BizTalk to make the call and handle the response.
I have tried using the wcf-http adapter, selecting 'Transport' for security (it is an HTTPS site, so security is required(?)), with no credential type specified, and placed the header with the token in the 'Messages' tab of the adapter configuration. This fails, though, with the exception: System.IO.IOException: Authentication failed because the remote party has closed the transport stream.
I have tried googling for this specific scenario and cannot find a solution. I did find this article with suggestions for OAuth handling, but I'm surprised that even with BizTalk 2016 I still have to create a custom assembly for something so simple.
Does anyone know how this might be done in the wcf-http send adapter?
Yes, you have to write a custom Endpoint Behaviour and add it to the send port. In fact, with the WCF-WebHttp adapter even Basic Auth doesn't work, so I'm currently writing an Endpoint Behaviour to address this.
One of the issues with OAuth is that there isn't one standard that everyone follows; so far I've had to write two different OAuth behaviours, as they have implemented things differently: one using a secret and timestamp hashed together to get a token, and the other using Basic Auth to get a token. Also, with one of them you could get multiple tokens using the same credentials, whereas the other would expire the old token straight away.
Another thing I've had to write a custom behaviour for is the version of TLS the endpoint expects, as by default BizTalk 2013 R2 tries TLS 1.0 and then fails if the web site does not allow it.
You can give Microsoft feedback that you wish to have this feature by voting on 'Add support for OAuth 2.0 / OpenID Connect authentication'.
Maybe someone will open source their solution. See Announcement: BizTalk Server embrace open source!
Figured it out. I should have used 'Certificate' for the client credential type.
I just had to do the following:
Added the token in the Outbound HTTP Headers box in the Messages tab, and selected 'Transport' security and 'Certificate' for the transport client credential type.
Downloaded the certificate from the API's website via the browser (manually) and installed it in the local server's certificate store.
I then selected that certificate and thumbprint in the corresponding fields in the adapter via the 'Browse' buttons (I had to scroll through the available certificates and select the certificate of the API/website I was trying to connect to).
I discovered this by accident when I had Fiddler running and set the adapter proxy setting to the local Fiddler address (http://localhost:8888). I realized that since Fiddler negotiates the TLS connection/certificate (I enabled TLS 1.2 in Fiddler) with the remote server, messages were able to get through, but they could not get through directly between the adapter and the remote API server (when Fiddler WASN'T running).
I am trying to migrate my Parse application over to DigitalOcean and followed this guide:
https://www.digitalocean.com/community/tutorials/how-to-migrate-a-parse-app-to-parse-server-on-ubuntu-14-04
Everything works perfectly fine until I get to the very end, the Test Parse Server (Executing Example Cloud Code) section.
I tested the sample cloud code that was provided in the tutorial:
Parse.Cloud.define('hello', function(req, res) {
  res.success('Hi');
});
So I got 'Hi' back in my browser as well as in Postman.
See image here: https://cloudup.com/cH2dbBx1KTo

Then I tested the function that uses SendGrid's service to send emails (http://blog.parse.com/announcements/introducing-the-sendgrid-cloud-module/); my cloud code file looks like this:
See image: https://cloudup.com/cD6MNRP3Tft
Now when I try to run my POST request from Postman, I get an error even on my hello function that was working before.
See image: https://cloudup.com/cIkwJ6552_5
So I looked around and figured out that it's an issue with my SendGrid import
var sendgrid = require("sendgrid");
sendgrid.initialize("xxxxxx", "xxxxx.");
in these lines.
Does anyone have any experience with DigitalOcean cloud code and the SendGrid emailing service? Please help me out; I will be grateful, as this is the last step left and then I will be done with my migration :)
Cheers,
Tanzeel
You have to specify the server URL in the Parse config file. It is required and could be the reason why you can't run cloud code.
"PARSE_SERVER_URL": "http://localhost:1337/parse"
The URL has to be the same one you are using. There is also an error in the Nginx config in that tutorial; I explained it here: https://serverfault.com/questions/765627/cannot-post-get-over-ssl/766428#766428
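For example, a minimal sketch of what that could look like in the pm2 config used by the tutorial (the app name, script path, and other values here are placeholders; the important key is PARSE_SERVER_URL):

{
  "apps": [{
    "name": "parse-wrapper",
    "script": "/usr/bin/parse-server",
    "env": {
      "PARSE_SERVER_APPLICATION_ID": "your-app-id",
      "PARSE_SERVER_MASTER_KEY": "your-master-key",
      "PARSE_SERVER_DATABASE_URI": "mongodb://localhost:27017/parse",
      "PARSE_SERVER_URL": "http://localhost:1337/parse"
    }
  }]
}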
So I looked up pm2, and to see real-time logs the command is
pm2 logs
At first when I ran the command I saw some errors; maybe they were there from before:
Then I tried the hello cloud function from the Postman app to test its output in the pm2 logs, and I got the following:
Next I tried to run my sendMail SendGrid function and found out that the API key I had used in my SendGrid function was throwing an error:
ReferenceError: XXXXXXXXXXXX is not defined
So I went back to my cloud code, put quotes around my API key parameter, and passed it as a string to my SendGrid initialize function. Then I retried and got:
[Error: The provided authorization grant is invalid, expired, or revoked]
So I went back to my SendGrid account and made sure that the API key I was using was the correct one, and it seemed to be just fine. I tested again and got the same error, so I decided to generate a new API key just in case.
Then I realized that I was not using the API key but instead the API Key ID:
When we create a new API key on SendGrid, they give us the actual API key once and ask us to store it somewhere secure:
We can only display the key above one time. Please store it somewhere safe because as soon as you navigate away from this page, we will not be able to retrieve or restore this generated token.
So after I used an actual API key I was able to send emails 😃
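For anyone else who hits this, the shape of the fix in my cloud code looked roughly like this (the values are placeholders; the point is that the second argument is the full secret API key, quoted as a string, not the API Key ID from the dashboard):

var sendgrid = require("sendgrid");
// The full secret key (shown only once when the key is created), passed as a quoted string
sendgrid.initialize("xxxxxx", "SG.xxxxxxxxxxxxxxxxxxxxxxxx");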
But one small issue still remains, and I am not sure if it's because of Postman (which I am using to run the cloud code) or something in the Parse Server or Nginx that is still returning a 502 Bad Gateway as the response.
When I look at the logs for my Parse Server I do see a
parse-wrapper-0 { message: 'success' }
but it never gets back to me in Postman; instead I am getting a 502 error. Not sure why, but the emails are being sent successfully :)
I have a backend Meteor server which serves and shares common collections across multiple apps (just sharing the Mongo DB is not enough; realtime updates are needed).
BACKEND
/ \
APP1 APP2
| |
CLIENT CLIENT
I have server-to-server DDP connections running between the backend server and the app servers.
At the moment I'm just re-publishing the collections in the app server after subscribing to them from the backend server.
It all seems to be working quite well. The only problem, though, is that the app server can't query any collections on the server side; all the find() responses are empty. On the client side (browser) it all works fine.
Is it just a coincidence that it works at all, or how do you suggest I should set it up?
Thanks
I realize that this is a pretty old question, but I thought I would share my solution. I had a similar problem as I have two applications (App1 and App2) that will be sharing data with a third application (App3).
I couldn't figure out why the server side of my App1 could not see the shared collections in App3, even though the client side of App1 was seeing them. Then it hit me that the server side of my App1 was acting like a "client" of App3, so it needed to subscribe to the publication, too.
I moved my DDP.connection.subscribe() call outside the client folder of App1, so that it would be shared between the client and server of App1. Then, I used a Meteor.setInterval() call to wait for the subscription to be ready on the server side in order to use it. That seemed to do the trick.
Here's a quick example:
in lib/common.js:
Meteor.myRemoteConnection = DDP.connect(url_to_App3);
SharedWidgets = new Meteor.Collection('widgets', Meteor.myRemoteConnection);
Meteor.sharedWidgetsSubscription = Meteor.myRemoteConnection.subscribe('allWidgets');
in server/fixtures.js:
Meteor.startup(function () {
  // check once every second to see if the subscription is ready
  var subIsReadyInterval = Meteor.setInterval(function () {
    if (Meteor.sharedWidgetsSubscription.ready()) {
      // SharedWidgets should be available now...
      console.log('widget count: ' + SharedWidgets.find().count());
      // clean up the interval...
      Meteor.clearInterval(subIsReadyInterval);
    }
  }, 1000);
});
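One possible alternative to polling with Meteor.setInterval() (just a sketch, reusing the same names as above) is to pass an onReady callback to the remote subscribe call, so the server-side code runs as soon as the subscription is ready:

in server/fixtures.js:

Meteor.startup(function () {
  Meteor.myRemoteConnection.subscribe('allWidgets', {
    onReady: function () {
      // SharedWidgets should be available now...
      console.log('widget count: ' + SharedWidgets.find().count());
    },
    onStop: function (error) {
      if (error) console.log('allWidgets subscription stopped with error:', error);
    }
  });
});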
If there is a better way to set this up, I'd love to know.
I have done this already;
check my app Tapmate or youtap.meteor.com on Android and iPhone.
I know it works up to Meteor version 0.6.4;
I haven't checked whether it works on later versions.
You have to manually override the default DDP URL while connecting,
i.e., go to the live-data package in .meteor/packages/live-data/stream_client_socket.js
overwrite this - Meteor._DdpClientStream = function (url) {
url = "ddp+sockjs://ddp--**-youtap.meteor.com/sockjs";
Now you won't see things happening locally, but it will point to the Meteor server.
Also disable the reload JS from reloading.
Thanks