I'm migrating a database from mLab to Cosmos DB.
After failing with other methods, I finally exported via mongodump and imported into Cosmos DB. I can read the collections in Studio 3T.
But when connecting in Express/Mongoose via the connection string, the collections all return [].
I've added my IP to the firewall.
It's not the code: when I point back to the mLab db, everything retrieves correctly.
What am I missing in Cosmos DB?
The issue is also posted here: https://learn.microsoft.com/en-us/answers/questions/153583/cosmosdb-migration-mern-can39t-read-collections.html?childToView=154602#answer-154602
In Azure, the generated connection strings don't quite match the one in the exercise Kaylan sent, and I also wasn't setting some of the options. Without the options I get the message "Password contains an illegal unescaped character".
The Primary Connection String and Secondary Connection String append the following:
&retrywrites=false
&maxIdleTimeMS=120000
&appName=#<appName>#
Once I removed those from the connection string AND used the options, things worked.
My final connection example:
// pulled from config for clarity
const config = {
  'database': process.env.MONGODB_URI || 'mongodb://<user>:<password>@<db>.mongo.cosmos.azure.com:10255/<dbname>?ssl=true&replicaSet=globaldb'
};

mongoose.connect(config.database, // with user and pass inside
  {
    useNewUrlParser: true,
    useUnifiedTopology: true,
    retryWrites: false,
    useFindAndModify: false,
    useCreateIndex: true
  })
  // .then(() => console.log('Connection to CosmosDB successful'))
  .catch((err) => console.error('CosmosDB error:', err));
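On the "Password contains an illegal unescaped character" error specifically: it appears when the password contains reserved URI characters (=, @, /, etc.), so the credentials have to be percent-encoded before being spliced into the connection string. A minimal sketch, assuming the user and password live in their own environment variables (COSMOS_USER and COSMOS_PASS are hypothetical names):

// hypothetical env var names; encode before building the URI
const user = encodeURIComponent(process.env.COSMOS_USER);
const pass = encodeURIComponent(process.env.COSMOS_PASS);
const uri = `mongodb://${user}:${pass}@<db>.mongo.cosmos.azure.com:10255/<dbname>?ssl=true&replicaSet=globaldb`;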
My DataStore is not syncing with DynamoDB for some reason.
I've read every issue on Stack Overflow to see if I can find a resolution, but no dice.
There are no errors.
Hub is showing the events firing.
Here is an example of the issue:
try {
  const result = await DataStore.save(
    new Employer({
      name: 'Test Employer',
      rate: 123.45,
    }),
  );
  console.log('Employer saved successfully!');
  console.log(result);
  // const employer = await DataStore.query(Employer);
  // console.log('EMPLOYER = ');
  // console.log(employer);
} catch (err) {
  console.log('ERROR: An error occurred during getEmployer');
  console.log('Error message was ' + JSON.stringify(err));
}
DataStore never seems to sync with DynamoDB.
Other than that everything is fine. No issues. DataStore contains the correct data.
The only difference between my project and the code examples is that I built the data model with Amplify Studio and ran amplify pull to update the project.
When I do an "amplify status" the API looks correct.
The aws-exports.js file seems to be correct.
It contains entries for Auth, API, Storage etc.
Auth is working correctly.
What am I missing?
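One way to see where the sync stalls is to log DataStore's Hub events. The channel name and the event names mentioned below are the documented DataStore Hub events; the rest is only a debugging sketch:

import { Hub } from 'aws-amplify';

// Print every DataStore sync event so you can see how far the sync gets
Hub.listen('datastore', ({ payload }) => {
  console.log('DataStore event:', payload.event, payload.data);
});

If outboxMutationEnqueued fires but outboxMutationProcessed never follows, the save is being queued locally and never accepted by AppSync, which would point at the API/auth configuration rather than the local store.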
I have created a new Realtime Database in the same project.
I followed the instructions here but got some errors.
final db = FirebaseDatabase.instance.ref(); // default database instance

// using ref()
final db2 = FirebaseDatabase.instance.ref('https://mydb2.us-central1.firebasedatabase.app');
// error: Invalid Firebase Database path
// Firebase Database paths must not contain '.', '#', '$', '[', or ']'

// using refFromURL()
final db3 = FirebaseDatabase.instance.refFromURL('https://mydb2.us-central1.firebasedatabase.app');
// error: Invalid argument (must equal the current FirebaseDatabase instance databaseURL)
The same happens when using .firebaseio.com in the URL.
So what is the correct way to get a DatabaseReference for a secondary database in the same project?
I figured it out: use instanceFor():
final db2 = FirebaseDatabase.instanceFor(
  app: Firebase.app(),
  databaseURL: 'https://mydb2.us-central1.firebasedatabase.app/',
).ref();
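To sanity-check that the reference really targets the secondary instance, a quick write/read through it works. A minimal sketch (the 'test/message' path is just an example, and this has to run inside an async function):

// hypothetical smoke test against the secondary database
await db2.child('test/message').set({'hello': 'world'});
final snapshot = await db2.child('test/message').get();
print(snapshot.value);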
While creating the connection from NetSuite to SFTP using the N/sftp module, I'm facing an error that states:
"FTP_INCORRECT_HOST_KEY","message":"Provided host key does not match remote server's fingerprint."
I have tried checking with my server team, but no luck. Can anyone suggest how to resolve this, or how I can get an authorized host key fingerprint from the server?
I have tried the SuiteScript 2.0 module (N/sftp) with the help of the tool mentioned below:
https://ursuscode.com/netsuite-tips/suitescript-2-0-sftp-tool/
/**
 * @NApiVersion 2.x
 * @NScriptType ScheduledScript
 */
define(['N/sftp', 'N/file', 'N/runtime'], function (sftp, file, runtime) {
    function execute(context) {
        var myPwdGuid = "Encrypted password by GUID";
        var myHostKey = "Some long Host key around 380 characters";

        // establish connection to the remote SFTP server
        var connection = sftp.createConnection({
            username: 'fuel_integration',
            passwordGuid: myPwdGuid, // references var myPwdGuid
            url: '59.165.215.45', // example IP
            directory: '/sftproot/TaleoSync',
            restrictToScriptIds: runtime.getCurrentScript().id,
            restrictToCurrentUser: false,
            hostKey: myHostKey // references var myHostKey
        });

        // download the file from the remote server
        var downloadedFile = connection.download({
            directory: '/sftproot/TaleoSync',
            filename: 'Fuel Funnel Report_without filter.csv'
        });
        downloadedFile.folder = 123; // placeholder: internal ID of the destination File Cabinet folder (value missing in the original)
        downloadedFile.save();
        // scheduled scripts have no context.response, so log instead
        log.audit('SFTP download', 'Downloaded "Fuel Funnel Report_without filter" to the File Cabinet');
    }
    return {
        execute: execute
    };
});
I expect to create a connection between the SFTP server and NetSuite, download a file from SFTP, and place it in the NetSuite File Cabinet.
A couple of things:
restrictToScriptIds: runtime.getCurrentScript().id,
restrictToCurrentUser: false,
are not part of the createConnection signature. They should have been used when you created a Suitelet to vault your credential.
However, the host key complaint may be dealt with by using ssh-keyscan from a Linux box.
ssh-keyscan 59.165.215.45
should reply with the server name, then ssh-rsa, then a long base64 string. Copy that string so it ends up in myHostKey, and set the hostKeyType to RSA.
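For reference, the output looks like this (the key below is an illustrative stand-in, not a real key):

$ ssh-keyscan 59.165.215.45
59.165.215.45 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQ...

Only the base64 blob after ssh-rsa goes into myHostKey; the leading host name and the ssh-rsa type label are not part of the key.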
I'm using the google cloud nodejs storage library to upload some images to cloud storage. This all works fine. I'm then trying to generate a signed URL immediately after uploading the file, using the same storage object that uploaded the file in the first place but I receive the following error:
Request had insufficient authentication scopes
I'm not sure why this would be happening if it's all linked to the same service account that uploaded in the first place. (For what it's worth, it's a Firebase app.)
The code is below:
const { Storage } = require('@google-cloud/storage');
const storage = new Storage();

storage.bucket(bucketName).upload(event.file.pathName, {
  // Support for HTTP requests made with `Accept-Encoding: gzip`
  gzip: true,
  destination: gcsname,
  metadata: {
    // Enable long-lived HTTP caching headers
    // Use only if the contents of the file will never change
    // (If the contents will change, use cacheControl: 'no-cache')
    cacheControl: 'public, max-age=31536000'
  },
}).then(result => {
  let url = `https://storage.googleapis.com/${bucketName}/${gcsname}`;
  const options = {
    action: 'read',
    expires: Date.now() + 1000 * 60 * 60, // one hour
  };
  // Get a signed URL for the file
  storage.bucket(bucketName).file(gcsname).getSignedUrl(options).then(result => {
    console.log("generated signed url", result);
  }).catch(err => {
    console.log("err occurred", err);
  });
});
The bucket itself isn't public and neither are the objects, but it's my understanding that I should still be able to generate a signed URL. The app is running on GCP Compute Engine, hence I'm not passing any options to new Storage(); in fact, passing options also makes the upload fail.
Can anyone advise on what I'm doing wrong?
With the limited amount of information I have, here are a few things you could be missing, based on the error you are receiving:
The Identity and Access Management (IAM) API must be enabled for the project.
The Compute Engine service account needs the iam.serviceAccounts.signBlob permission, which comes with the "Service Account Token Creator" role.
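If the role is what's missing, granting it to the Compute Engine default service account looks roughly like this (PROJECT_ID and PROJECT_NUMBER are placeholders for your own values):

gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="serviceAccount:PROJECT_NUMBER-compute@developer.gserviceaccount.com" \
    --role="roles/iam.serviceAccountTokenCreator"

On Compute Engine there is no local private key, so getSignedUrl signs via the IAM signBlob API behind the scenes, which is why this permission matters even though the upload itself succeeds.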
Additionally, you can find more documentation regarding the topic here.
https://cloud.google.com/storage/docs/access-control/signed-urls
https://cloud.google.com/storage/docs/access-control/signing-urls-manually
I'm using syslog-ng to receive JSON-format logs and store them in a local file, but the logs get changed.
Original log:
{"input_name":"sensor_alert","machine":"10.200.249.27"}
Log as currently stored:
"sensor_alert","machine":"10.200.249.27"}
The key "input_name" was deleted.
syslog-ng config:
source test_src {
    udp(
        ip(0.0.0.0) port(5115)
    );
};

destination test_dest {
    file("/data/test_${YEAR}${MONTH}${DAY}.log"
        template("$MSG\n")
        template-escape(no)
    );
};

log {
    source(test_src);
    destination(test_dest);
};
Can anyone tell me the reason? Thanks.
If you only send the above-mentioned string (without any other framing), you should probably turn off parsing in the source with:
udp(... flags(no-parse));
This puts everything the source receives into the MSG macro.
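Applied to the source from the question, the change looks like this (a sketch based on the config above):

source test_src {
    udp(
        ip(0.0.0.0) port(5115)
        flags(no-parse)
    );
};

With no-parse, syslog-ng no longer tries to split the datagram into a program-name field and a message, which is what was eating the leading {"input_name": part.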
If you have some kind of framing (like syslog), please provide a sample message; otherwise I can only guess.