sequelize migration not working - sqlite

I created a migration and ran it. It says it worked fine, but nothing happened. I don't think it is even connecting to my database.
My Migration file:
var util = require("util");

module.exports = {
  up: function(migration, DataTypes, done) {
    migration.createTable('nameOfTheNewTable', {
      attr1: DataTypes.STRING,
      attr2: DataTypes.INTEGER,
      attr3: {
        type: DataTypes.BOOLEAN,
        defaultValue: false,
        allowNull: false
      }
    }).success(function() {
      migration.describeTable('nameOfTheNewTable').success(function(attributes) {
        util.puts("nameOfTheNewTable Schema: " + JSON.stringify(attributes));
        done();
      });
    });
  },

  down: function(migration, DataTypes, done) {
    // logic for reverting the changes
  }
};
My Config.json:
{
  "development": {
    "username": "user",
    "password": "pw",
    "database": "my-db",
    "dialect": "sqlite",
    "host": "localhost"
  }
}
The command and its output:
./node_modules/sequelize/bin/sequelize --migrate --env development
Loaded configuration file "config/config.json".
Using environment "development".
Running migrations...
20130921234513-initial.js
nameOfTheNewTable Schema: {"attr1":{"type":"VARCHAR(255)","allowNull":true,"defaultValue":null},"attr2":{"type":"INTEGER","allowNull":true,"defaultValue":null},"attr3":{"type":"TINYINT(1)","allowNull":false,"defaultValue":false}}
Completed in 8ms
I can run this over and over and the output is always the same. I've tried it on a database which I know has existing tables, tried describing those tables, and still nothing happens.
Am I doing something wrong?
EDIT:
I'm pretty sure I'm not connecting to the db, but try as I might I cannot connect using the migration. I can connect using sqlite3 my-db.sqlite and run commands such as .tables to see tables I have created previously, but I cannot for the life of me get the "nameOfTheNewTable" table created using a migration. (I want to create indexes in the migration too). I have tried using "development", changing values in the config.json like the host, database (my-db, ../my-db, my-db.sqlite), etc.
Here's a telling example: in the config.json I put "database": "bad-db" and the output from the migration is exactly the same. When it is done, there is no bad-db.sqlite file to be found.

You need to specify the 'storage' parameter in your config.json, so that sequelize knows what file to use as the sqlite DB.
Sequelize defaults to using memory storage for sqlite, so it's migrating an in-memory database, then exiting, effectively destroying the DB it just migrated.
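For example, the development block with an explicit storage file might look like this (the filename is illustrative; any writable path works):

{
  "development": {
    "username": "user",
    "password": "pw",
    "database": "my-db",
    "dialect": "sqlite",
    "storage": "my-db.sqlite",
    "host": "localhost"
  }
}

With that in place, the migration writes to my-db.sqlite on disk instead of a throwaway in-memory database.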

You most likely have to wait for migration.createTable to finish:

migration.createTable(/* your options */).success(function() {
  migration.describeTable('nameOfTheNewTable').success(function(attributes) {
    util.puts("nameOfTheNewTable Schema: " + JSON.stringify(attributes));
    done();
  });
});
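The question also mentions wanting to create indexes in the migration. The migration object passed to up is the query interface, which exposes addIndex as well; a minimal sketch in the same emitter style, assuming addIndex supports the same success callback as createTable (the column choice is illustrative):

migration.createTable('nameOfTheNewTable', { /* attributes as above */ }).success(function() {
  migration.addIndex('nameOfTheNewTable', ['attr2']).success(function() {
    done();
  });
});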

Related

Azure Bicep - can't access output from external module

I have a module called "privateendpoints.bicep" that creates a private endpoint as follows:
resource privateEndpoint_resource 'Microsoft.Network/privateEndpoints@2020-07-01' = {
  name: privateEndpointName
  location: resourceGroup().location
  properties: {
    subnet: {
      id: '${vnet_resource.id}/subnets/${subnetName}'
    }
    privateLinkServiceConnections: [
      {
        name: privateEndpointName
        properties: {
          privateLinkServiceId: resourceId
          groupIds: [
            pvtEndpointGroupName_var
          ]
        }
      }
    ]
  }
}

output privateEndpointIpAddress string = privateEndpoint_resource.properties.networkInterfaces[0].properties.ipConfigurations[0].properties.privateIPAddress
This is then referenced by a calling bicep file as follows:
module sqlPE '../../Azure.Modules/Microsoft.Network.PrivateEndpoints/1.0.0/privateendpoints.bicep' = {
  name: 'sqlPE'
  params: {
    privateEndpointName: 'pe-utrngen-sql-${env}-001'
    resourceId: sqlDeploy.outputs.sqlServerId
    serviceType: 'sql'
    subnetName: 'sub-${env}-utrngenerator01'
    vnetName: 'vnet-${env}-uksouth'
    vnetResourceGroup: 'rg-net-${env}-001'
  }
}

var sqlPrivateLinkIpAddress = sqlPE.outputs.privateEndpointIpAddress
My problem is, it won't build. In VS Code I get the error: The type "outputs" does not contain property "privateEndpointIpAddress".
This is the property I just added. Prior to adding it, everything worked fine. I've made sure to save the updated external module, and I've even right-clicked it in VS Code and selected build; it built fine and created a JSON file.
So it seems the calling bicep file is not picking up changes in the external module.
Any suggestions please?
The problem seemed to be caused by the fact that I had the external module open in a separate VS Code instance. Once I closed it and opened the file in the same instance as the calling bicep file, everything worked.

Azure Functions .NET 5 fails to start after changing value in local settings file

This is a very strange problem and I want to see if anyone else can replicate the issue.
I start a brand new Azure Functions app targeting .NET 5. Mine is a timer function, but I don't think it matters what type of function it is.
I then add a value in my local.settings.json file:
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "MY_CONNECTION_STRING_FOR_AZURE_STORAGE",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet-isolated",
    "MY_APP_ID": "1324"
  }
}
I modify the Program.cs to read this value:
var host = new HostBuilder()
    .ConfigureAppConfiguration(c =>
    {
        c.AddEnvironmentVariables();
        var config = c.Build();
        var id = config.GetValue<string>("MY_APP_ID");
    })
    .ConfigureFunctionsWorkerDefaults()
    .Build();
I then run the app and it seems to start up fine.
Then, I modify the local.settings.json file and add a section -- see below:
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "MY_CONNECTION_STRING_FOR_AZURE_STORAGE",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet-isolated",
    "MY_APP_ID": "1324",
    "MyApp": {
      "MY_APP_ID": "1324"
    }
  }
}
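Note that the Values section of local.settings.json is expected to contain flat string key/value pairs, so a nested object like "MyApp" is not valid there, and that likely explains the startup failure. A flattened form (the key name is illustrative; double underscores are the separator that environment-variable-based configuration understands):

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "MY_CONNECTION_STRING_FOR_AZURE_STORAGE",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet-isolated",
    "MyApp__MY_APP_ID": "1324"
  }
}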
When I try to run the app, I get the following error:
I then remove this section from my local.settings.json file and try running the app again and I get the same error.
It looks like, from this point on, there's nothing I can do to make this app run! If you noticed, I didn't change any code in my Program.cs file. Simply adding a section and removing it from local.settings.json file seems to render the app useless!
Any idea what's going on here?

alexa skill local could not write to dynamodb

I am writing a Node.js skill using ask-sdk and using alexa-skill-local to test the endpoint. I need to persist data to DynamoDB in one of the handlers, but I keep getting a "missing region" error. Please find my code below:
'use strict';

// use 'ask-sdk' if standard SDK module is installed
const Alexa = require('ask-sdk');
const { launchRequestHandler, HelpIntentHandler, CancelAndStopIntentHandler, SessionEndedRequestHandler } = require('./commonHandlers');

const ErrorHandler = {
  canHandle() {
    return true;
  },
  handle(handlerInput, error) {
    return handlerInput.responseBuilder
      .speak('Sorry, I can\'t understand the command. Please say again.')
      .reprompt('Sorry, I can\'t understand the command. Please say again.')
      .getResponse();
  },
};
////////////////////////////////
// Code for the handlers here //
////////////////////////////////
exports.handler = Alexa.SkillBuilders
  .standard()
  .addRequestHandlers(
    launchRequestHandler,
    HelpIntentHandler,
    CancelAndStopIntentHandler,
    SessionEndedRequestHandler,
    ErrorHandler
  )
  .withTableName('devtable')
  .withDynamoDbClient()
  .lambda();
And in one of the handlers I am trying to get the persisted attributes like below:
handlerInput.attributesManager.getPersistentAttributes().then((data) => {
  console.log('--- the attributes are ----', data);
});
But I keep getting the following error:
(node:12528) UnhandledPromiseRejectionWarning: AskSdk.DynamoDbPersistenceAdapter Error: Could not read item (amzn1.ask.account.AHJECJ7DTOPSTT25R36BZKKET4TKTCGZ7HJWEJEBWTX6YYTLG5SJVLZH5QH257NFKHXLIG7KREDKWO4D4N36IT6GUHT3PNJ4QPOUE4FHT2OYNXHO6Z77FUGHH3EVAH3I2KG6OAFLV2HSO3VMDQTKNX4OVWBWUGJ7NP3F6JHRLWKF2F6BTWND7GSF7OVQM25YBH5H723VO123ABC) from table (EucerinSkinCareDev): Missing region in config
at Object.createAskSdkError (E:\projects\nodejs-alexa-sdk-v2-eucerin-skincare-dev\node_modules\ask-sdk-dynamodb-persistence-adapter\dist\utils\AskSdkUtils.js:22:17)
at DynamoDbPersistenceAdapter.<anonymous> (E:\projects\nodejs-alexa-sdk-v2-eucerin-skincare-dev\node_modules\ask-sdk-dynamodb-persistence-adapter\dist\attributes\persistence\DynamoDbPersistenceAdapter.js:121:45)
Can we read and write attributes from DynamoDb using alexa-skill-local? Do we need a different setup to achieve this?
Thanks
I know that this is a really old topic, but I had the same problem a few days ago, and I'm going to explain how I made it work.
You have to download DynamoDB Local and follow the instructions from here.
Once you have configured your local DynamoDB and checked that it is working, you have to pass it through your code to the DynamoDbPersistenceAdapter constructor.
Your code should look similar to:
var awsSdk = require('aws-sdk');

var myDynamoDB = new awsSdk.DynamoDB({
  endpoint: 'http://localhost:8000', // If you change the default url, change it here
  accessKeyId: <your-access-key-id>,
  secretAccessKey: <your-secret-access-key>,
  region: <your-region>,
  apiVersion: 'latest'
});

const { DynamoDbPersistenceAdapter } = require('ask-sdk-dynamodb-persistence-adapter');

return new DynamoDbPersistenceAdapter({
  tableName: tableName || 'my-table-name',
  createTable: true,
  dynamoDBClient: myDynamoDB
});
Where <your-access-key-id>, <your-secret-access-key> and <your-region> are defined in your AWS config and credentials files.
The next step is to launch your server with the alexa-skill-local command as always.
Hope this will be helpful! =)
Presumably you have an AWS config profile that your skill is using when running locally.
You need to edit the config file and set the default region (e.g. us-east-1) there. The region should match the region where your table exists.
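That file normally lives at ~/.aws/config, and the region entry looks like this:

[default]
region = us-east-1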
Alternatively, if you want to be able to run completely isolated, you may need to write some conditional logic and swap the dynamo client with one targeting an instance of DynamoDB Local running on your machine, as sketched below.
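A minimal sketch of that swap, building on the adapter code above (the DYNAMODB_LOCAL environment variable is an illustrative name, not an ask-sdk or alexa-skill-local convention):

var awsSdk = require('aws-sdk');
var { DynamoDbPersistenceAdapter } = require('ask-sdk-dynamodb-persistence-adapter');

function buildPersistenceAdapter(tableName) {
  // When running locally, point at DynamoDB Local; otherwise fall back to
  // the region from the normal AWS configuration.
  var dynamoDBClient = process.env.DYNAMODB_LOCAL === 'true'
    ? new awsSdk.DynamoDB({ endpoint: 'http://localhost:8000', region: 'us-east-1' })
    : new awsSdk.DynamoDB({ region: process.env.AWS_REGION || 'us-east-1' });

  return new DynamoDbPersistenceAdapter({
    tableName: tableName || 'devtable',
    createTable: true,
    dynamoDBClient: dynamoDBClient
  });
}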

"SQLITE_CANTOPEN: unable to open database file" using a memory SQLite database in Knex,

I'm getting Knex:Error Pool2 - Error: SQLITE_CANTOPEN: unable to open database file when running a Mocha test with Knex/SQLite.
My knexfile:
module.exports = {
  test: {
    client: 'sqlite3',
    connection: {
      filename: ':memory:',
    }
  }
  ...
The only reference I find when I google is to someone who misspelled ':memory'.
How should I debug this? Is there a way to turn on verbose logging for knex/SQLite?
I believe the error happens in the initialization, before the tests, and before the beforeEach(). When I switch from sqlite3 to mysql (which we use in our dev setup), it just works.
The problem was that knex wasn't properly initialized. I had

var knexfile = require('../knexfile');
var knex = require('knex')(environment);
// instead of require('knex')(knexfile[environment])
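So the corrected initialization passes the selected configuration object rather than the environment name (assuming environment holds a key such as 'test'):

var environment = process.env.NODE_ENV || 'test';
var knexfile = require('../knexfile');
var knex = require('knex')(knexfile[environment]);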

How to list files in folder

How can I list all files inside a folder with Meteor? I have FS.Collection and cfs:filesystem installed in my app. I didn't find it in the docs.
Another way of doing this is by adding the shelljs npm module.
To add npm modules see: https://github.com/meteorhacks/npm
Then you just need to do something like:
var shell = Meteor.npmRequire('shelljs');
var list = shell.ls('/yourfolder');
Shelljs docs:
https://github.com/arturadib/shelljs
The short answer is that FS.Collection creates a Mongo collection that you can treat like any other, i.e., you can list entries using find().
The long answer...
Using cfs:filesystem, you can create a mongo database that mirrors a given folder on the server, like so:
// in lib/files.js
files = new FS.Collection("my_files", {
  stores: [new FS.Store.FileSystem("my_files", {path: "~/test"})] // creates a ~/test folder at the home directory of your server and will put files there on insert
});
You can then access this collection on the client to upload files to the server to the ~/test/ directory:
files.insert(new File(['Test file contents'], 'my_test_file'));
And then you can list the files on the server like so:
files.find(); // returns:
[ { createdByTransform: true,
    _id: 't6NoXZZdx6hmJDEQh',
    original:
     { name: 'my_test_file',
       updatedAt: (Date),
       size: (N),
       type: '' },
    uploadedAt: (Date),
    copies: { my_files: [Object] },
    collectionName: 'my_files' } ]
The copies object appears to contain the actual names of the files created, e.g.,
files.findOne().copies
{
  "my_files" : {
    "name" : "testy1",
    "type" : "",
    "size" : 6,
    "key" : "my_files-t6NoXZZdx6hmJDEQh-my_test_file", // This is the name of the file on the server at ~/test/
    "updatedAt" : ISODate("2015-03-29T16:53:33Z"),
    "createdAt" : ISODate("2015-03-29T16:53:33Z")
  }
}
The problem with this approach is that it only tracks the changes made through the Collection; if you add something manually to the ~/test directory, it won't get mirrored into the Collection. For instance, if on the server I run something like...
mkfile 1k ~/test/my_files-aaaaaaaaaa-manually-created
Then, when I look for it in the collection, it won't be there:
files.findOne({"original.name": {$regex: ".*manually.*"}}) // returns undefined
If you just want a straightforward list of files on the server, you might consider just running an ls. From https://gentlenode.com/journal/meteor-14-execute-a-unix-command/33 you can execute any arbitrary UNIX command using Node's child_process.exec(). You can access the app root directory with process.env.PWD (from this question). So in the end if you wanted to list all the files in your public directory, for instance, you might do something like this:
exec = Npm.require('child_process').exec;

console.log("This is the root dir:");
console.log(process.env.PWD); // running from localhost returns: /Users/me/meteor_apps/test

child = exec('ls -la ' + process.env.PWD + '/public', function(error, stdout, stderr) {
  // Fill in this callback with whatever you actually want to do with the information
  console.log('stdout: ' + stdout);
  console.log('stderr: ' + stderr);
  if (error !== null) {
    console.log('exec error: ' + error);
  }
});
This will have to run on the server, so if you want the information on the client, you'll have to put it in a method (sketched below). This is also pretty insecure, depending on how you structure it, so you'd want to think about how to stop people from listing all the files and folders on your server, or worse, running arbitrary execs.
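A minimal sketch of such a method, restricted to a fixed directory (listPublicFiles is an illustrative name):

// server-side
var execSync = Npm.require('child_process').execSync;

Meteor.methods({
  listPublicFiles: function () {
    // Only lists the app's own public directory; never interpolate client input here.
    return execSync('ls -la ' + process.env.PWD + '/public').toString();
  }
});

// client-side usage
Meteor.call('listPublicFiles', function (err, listing) {
  console.log(listing);
});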
Which method you choose probably depends on what you're really trying to accomplish.
