Include zipped secure connect file when building a Next.js app on Amplify - next.js

There is a driver for connecting to the DataStax Astra DB Cassandra database in Node.js called 'cassandra-driver'. For a legacy connection it uses a secret connection file (the secure connect bundle) named secure-connect-{DB-Name}.zip, with syntax like this:
const { Client } = require('cassandra-driver');

const client = new Client({
  cloud: {
    secureConnectBundle: 'Address of the zipped file'
  },
  credentials: {
    username: 'This is client_id',
    password: 'This is client_secret',
  },
});
Locally this works well, but when I deploy it on AWS Amplify the file is not included in the Next.js bundle, so a file-not-found error is raised. Now the question: is there any way to keep the file inside the Next.js bundle on Amplify itself, rather than uploading it to external storage (like S3, a silly way!) just to access it?
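Not a confirmed answer, but one thing worth trying is Next.js's output file tracing include option, which exists to pull extra files into the serverless output; a minimal sketch, assuming a Next.js version that supports experimental.outputFileTracingIncludes and an API route at /api/astra (both the route path and the glob are placeholders):

// next.config.js - sketch only; not verified on Amplify
module.exports = {
  experimental: {
    // Trace the secure connect bundle into the output for this route
    outputFileTracingIncludes: {
      '/api/astra': ['./secure-connect-*.zip'],
    },
  },
};

The secureConnectBundle path would then be resolved relative to process.cwd() at runtime; whether Amplify's Next.js build honours this tracing config is exactly what the question is asking.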

Related

Next Auth CLIENT_FETCH_ERROR when deploying on vercel

I am trying to deploy a Next.js application to Vercel, and I am using NextAuth to authenticate via Discord.
Locally it is working fine. The right callback URLs are all configured, and NEXT_AUTH_URL is configured too.
export default NextAuth({
  providers: [
    DiscordProvider({
      clientId: process.env.DISCORD_CLIENT_ID,
      clientSecret: process.env.DISCORD_CLIENT_SECRET
    })
  ],
  secret: process.env.SECRET,
  jwt: {
    signingKey: process.env.JWT_SIGNING_PRIVATE_KEY,
  },
  database: process.env.DATABASE_URL,
  adapter: PrismaAdapter(prisma),
})
When trying to log in on the deployed site, however, I am receiving a 500 CLIENT_FETCH_ERROR.
Maybe it is something with the keys in the environment:
use keys in this format: process.env.NEXT_PUBLIC_JWT_SIGNING_PRIVATE_KEY
I found out what it was.
I had made changes to the Prisma schema before it stopped working.
Vercel had cached the old Prisma client, which was generated with the old schema.
I had to update my build settings and redeploy to clear the cache and regenerate the Prisma client so it was up to date.
This is the build command I configured on Vercel; after that I simply had to trigger a redeployment.
prisma generate && next build
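Alternatively, the same command can live in the project's package.json build script, which Vercel runs by default; a minimal sketch:

{
  "scripts": {
    "build": "prisma generate && next build"
  }
}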

How to use newsecret in capacitor-community/sqlite when creating connection

In my Ionic 5 app, I am using the capacitor-community/sqlite plugin to create and encrypt a database. I am doing the following as per the documentation.
await this.sqlite.createConnection('database1', false, "no-encryption", 1);
await this.sqlite.createConnection('database1', true, "encryption", 1);
await this.sqlite.createConnection('database1', true, "secret", 1);
the mode "encryption" is to be used when you have an already existing database non encrypted and you want to encrypt it.
the mode "secret" is to be used when you want to open an encrypted database.
the mode "newsecret" is to be used when you want to change the secret of an encrypted database with the newsecret.
A secret and a newsecret are maintained in the configuration file as encryption passwords. When I create the connection with "secret" it works fine, but I am unable to use "newsecret".
await this.sqlite.createConnection('database1', true, "newsecret", 1);
The above code is supposed to change my connection secret, but it's not working. When I run this code it executes with no error, but when I then run await this.db.open();, it fails with the error "Open command failed: Failed in openOrCreateDatabase Wrong Secret". I didn't find the correct way to implement this method in the official documentation.

Download a file from AWS S3 with Angular 2+

I have an ASP.NET Core web app and I'm using Angular 4. There are a lot of resources showing how to upload a file to S3, which I've done. But there doesn't seem to be anything about reading the file.
I want to give users the ability to upload a JSON file, save it to S3, then on a different view show the user all of the files they've uploaded as well as display the content of the file.
Are there any resources for showing how to do this?
If the items are publicly available, you can use the S3 JavaScript SDK's 'getObject' method to download a file.
/* The following example retrieves an object from an S3 bucket. */
var AWS = require('aws-sdk');
var s3 = new AWS.S3();

var params = {
  Bucket: "examplebucket",
  Key: "HappyFace.jpg"
};
s3.getObject(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else     console.log(data);           // successful response
  /*
  data = {
    AcceptRanges: "bytes",
    ContentLength: 3191,
    ContentType: "image/jpeg",
    ETag: "\"6805f2cfc46c0f04559748bb039d69ae\"",
    LastModified: <Date Representation>,
    Metadata: {
    },
    TagCount: 2,
    VersionId: "null"
  }
  */
});
If the files are private, use S3 signed URLs or CloudFront signed URLs (or cookies) to generate a download URL from your backend after authorizing the user. Using this download URL, download the file from S3 directly in the Angular app.
Examples:
Using CloudFront signed URLs.
Using an S3 signed URL generated with the S3 SDK's getSignedUrl method (see the sketch below).
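For instance, a minimal backend sketch with the v2 JavaScript SDK's getSignedUrl method (the bucket name, key, and expiry are placeholders):

// Backend (Node.js): generate a short-lived download URL for a private object
var AWS = require('aws-sdk');
var s3 = new AWS.S3();

var url = s3.getSignedUrl('getObject', {
  Bucket: 'examplebucket',       // placeholder bucket
  Key: 'uploads/user-file.json', // placeholder key
  Expires: 300                   // URL valid for 5 minutes
});
// Return `url` to the Angular app, which can download the file directly from S3.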
Another option is to generate temporary S3 access credentials from AWS STS directly in your backend and send them back to the Angular app, or to use an authentication service such as AWS Cognito, so that the Angular app can use the credentials to invoke the S3 SDK itself.
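As a rough sketch of the STS option with the v2 JavaScript SDK (the role ARN, session name, and duration are placeholders, not values from the original answer):

// Backend (Node.js): issue short-lived, scoped credentials for the Angular app
var AWS = require('aws-sdk');
var sts = new AWS.STS();

sts.assumeRole({
  RoleArn: 'arn:aws:iam::123456789012:role/angular-s3-read', // hypothetical role
  RoleSessionName: 'angular-download',
  DurationSeconds: 900
}, function(err, data) {
  if (err) return console.log(err, err.stack);
  // data.Credentials contains AccessKeyId, SecretAccessKey and SessionToken;
  // send these to the Angular app so it can configure its own AWS.S3 client.
});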

Upload TLS client certificate to Firebase cloud functions

I'm trying to figure out if it is possible to upload a TLS client certificate to be used by my Cloud Functions in Firebase. The TLS client certificate is required by a third-party payment solution called Swish.
This is my first Firebase project, and it seems silly that a small issue like this would render the platform unusable for me.
After some headache and experimentation, I found a fairly easy way to handle Swish payments through Cloud Functions:
Using request-js instead of the built-in libraries, I only need to build the options object to use in the request.post() method as follows:
const fs = require('fs');

const swishOptions = {
  url: 'LINK TO SWISH SERVER',
  json: true,
  pfx: fs.readFileSync('cert.p12'),
  passphrase: 'swish',
  body: swishRequestBody // the request body object for the Swish API
}
The cert.p12 file should be placed in the same folder as index.js and will be uploaded together with the functions.
const rq = require('request');

// Called inside a Promise executor, so that `reject` is available:
rq.post(swishOptions, (err, res) => {
  if (err) {
    console.log('payment creation error: ' + JSON.stringify(err))
    reject(err)
  }
  if (res) {
    console.log('Payment-token: ' + res.headers.paymentrequesttoken)
  }
});
The body object should contain all the fields specified in the Swish API; use console.log() to read the error messages from the Swish server.
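To tie the pieces together, here is a minimal sketch of how the snippet might sit inside an HTTPS Cloud Function, including the Promise wrapper that reject refers to; the function name, request handling, and response shape are assumptions, not part of the original answer:

const functions = require('firebase-functions');
const fs = require('fs');
const rq = require('request');

// Hypothetical function name; cert.p12 is deployed next to index.js as described above.
exports.createSwishPayment = functions.https.onRequest((req, res) => {
  const swishOptions = {
    url: 'LINK TO SWISH SERVER',
    json: true,
    pfx: fs.readFileSync('cert.p12'),
    passphrase: 'swish',
    body: req.body // the Swish request body, forwarded from the client
  };

  new Promise((resolve, reject) => {
    rq.post(swishOptions, (err, response) => {
      if (err) return reject(err);
      resolve(response.headers.paymentrequesttoken);
    });
  })
    .then((token) => res.status(201).send({ token }))
    .catch((err) => {
      console.log('payment creation error: ' + JSON.stringify(err));
      res.status(500).send({ error: 'payment creation failed' });
    });
});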

Meteor deploy - MAIL_URL not being set

I recently started deploying a Meteor app off of my local machine, and it seems that the MAIL_URL property is not being set when deploying to a *.meteor.com domain. No matter what I have tried, the email is sent via the default Mailgun.
What I have tried so far:
- Verified that process.env.MAIL_URL is set and works locally - ensures that I am setting MAIL_URL correctly
- Verified that process.env.MAIL_URL is set on the *.meteor.com domain by checking meteor logs - ensures that the process.env settings are being set on *.meteor.com
- Tried multiple *.meteor.com domains - ensures it was not a subdomain-specific issue
- Tried multiple SMTP providers (Gmail and Mandrill) - ensures that it was not an issue with the SMTP provider
- Tried creating a simple app with a simple test email button - ensures the problem was not related to my app code
Nothing works. With the simple app, my code is the following:
if (Meteor.isClient) {
  Template.hello.greeting = function () {
    return "Welcome to testmail.";
  };
  Template.hello.events({
    'click input': function () {
      console.log("calling send mail");
      Meteor.call('sendEmail',
        'xxx@gmail.com',
        'xxx@domain.com',
        'Hello from Meteor!',
        'This is a test of Email.send.');
    }
  });
}

if (Meteor.isServer) {
  // In your server code: define a method that the client can call
  Meteor.methods({
    sendEmail: function (to, from, subject, text) {
      check([to, from, subject, text], [String]);
      // Let other method calls from the same client start running,
      // without waiting for the email sending to complete.
      this.unblock();
      Email.send({
        to: to,
        from: from,
        subject: subject,
        text: text
      });
    }
  });
  Meteor.startup(function () {
    // code to run on server at startup
    process.env.MAIL_URL = 'smtp://blahblah:token@smtp.mandrillapp.com:587/';
    console.log(process.env);
  });
}
I am out of ideas at this point. Has anybody else experienced this before, and what was the resolution? Thanks.
By default, meteor deploy can only use Mailgun, since you can't alter the environment variables on meteor deploy hosting. Additionally, meteor deploy hosting uses a Galaxy configuration which takes precedence over environment variables.
If you take a look at the email package source, meteor deploy hosting uses an app configuration that overrides the environment variable (see https://github.com/meteor/meteor/blob/devel/packages/email/email.js#L42). This is part of the Galaxy configuration engine.
You have to modify the email package to use a custom SMTP server. To do this:
Get the files from https://github.com/meteor/meteor/tree/devel/packages/email and place them in a directory in your project at /packages/email.
Add this package to your Meteor project with meteor add email. It should override the default core package. If it says the package is already being used, that's okay.
Modify /packages/email/email.js around line 36 to be:
var smtpPool = makePool("<YOUR CUSTOM MAIL_URL>");
Then you should be good to go. Meteor should use this SMTP host instead, even on meteor.com hosting.
