I am getting "ERROR: Unable to resolve the storage:" when running a Sequelize seeder, although I was able to run migrations (to create and update tables).
Following is my config file:
module.exports = {
  development: {
    dialect: 'sqlite',
    storage: 'path-to-db',
    seederStorage: 'path-to-db',
    password: 'some-private-key-to-make-db-password-protected',
    dialectModulePath: '@journeyapps/sqlcipher',
    dialectOptions: {
      options: {
        encrypt: true,
      },
    },
  },
  staging: { .. // and so on
  production: { .. // and so on
I figured out what I was doing wrong. Notice this line:
seederStorage: 'path-to-db',
We need to pass 'sequelize' here:
seederStorage: 'sequelize'
That will work like a charm.
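For clarity, a sketch of the corrected development block (everything else unchanged; 'sequelize' tells sequelize-cli to track executed seeders in a database table rather than treating the value as a file path):
development: {
  dialect: 'sqlite',
  storage: 'path-to-db',      // still the file path of the SQLite database
  seederStorage: 'sequelize', // track seeders in the database, not at a path
  ...
},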
I am trying to deploy Next.js and NextAuth.js to AWS using the CDK (Cloud Development Kit). I have cloned the NextAuth.js example project (https://github.com/nextauthjs/next-auth-example) and installed serverless-http to handle the binding from Lambda to Next.js. I attempted to follow this guide, https://remaster.com/blog/nextjs-lambda-serverless-framework, but using the AWS CDK instead of the serverless.yml file, as I am integrating it with existing infrastructure.
next.config.js:
/** @type {import('next').NextConfig} */
const nextConfig = {
reactStrictMode: true,
swcMinify: true,
images: {
unoptimized: true
},
output: 'standalone'
}
module.exports = nextConfig
[...nextauth].ts (from the example, but using a simple credentials provider that always resolves):
import NextAuth, { NextAuthOptions } from "next-auth"
import CredentialsProvider from "next-auth/providers/credentials";
export const authOptions: NextAuthOptions = {
  providers: [
    CredentialsProvider({
      name: "Credentials",
      credentials: {
        username: { label: "Username", type: "text", placeholder: "jsmith" },
        password: { label: "Password", type: "password" }
      },
      async authorize(credentials, req) {
        return { id: "1", name: "J Smith", email: "jsmith@example.com" }
      }
    })
  ],
  theme: {
    colorScheme: "light",
  },
  callbacks: {
    async jwt({ token }) {
      token.userRole = "admin"
      return token
    },
  },
}
export default NextAuth(authOptions)
server.ts:
import { NextConfig } from "next";
import NextServer from "next/dist/server/next-server";
import serverless from "serverless-http";
// @ts-ignore
import { config } from "./.next/required-server-files.json";
const nextServer = new NextServer({
  hostname: "localhost",
  port: 3000,
  dir: __dirname,
  dev: false,
  conf: {
    ...(config as NextConfig),
  },
});
export const handler = serverless(nextServer.getRequestHandler());
It is being built using the following script:
#!/bin/bash
BUILD_FOLDER=.dist

# Build the app, then move Next's standalone output into the deploy folder
yarn build
rm -rf $BUILD_FOLDER
mv .next/standalone/ $BUILD_FOLDER/

# Static assets and public/ are not part of the standalone output, so copy them in
cp -r .next/static $BUILD_FOLDER/.next

# Replace Next's generated server.js with the compiled custom handler
rm $BUILD_FOLDER/server.js
cp -r next.config.js $BUILD_FOLDER/
cp -r node_modules/serverless-http $BUILD_FOLDER/node_modules/serverless-http
tsc server.ts --outDir .dist --esModuleInterop true
cp -r public $BUILD_FOLDER/
This is deployed using the AWS CDK written in C#, primarily using an HttpApi and a single Lambda, each configured as shown below:
Lambda:
var function = new Function(this, "nextjs-function", new FunctionProps
{
    Code = Code.FromAsset(...".dist"),
    Handler = "server.handler",
    Runtime = Runtime.NODEJS_16_X,
    ...
    Environment = new Dictionary<string, string>
    {
        { "NEXTAUTH_URL", "https://myDomainName.com" },
        { "NEXTAUTH_SECRET", portalSecret },
    }
});
HttpApi:
var httpApi = new HttpApi(this, "http-api", new HttpApiProps
{
    DisableExecuteApiEndpoint = true,
    DefaultIntegration = new HttpLambdaIntegration("nextjs-route", function),
    DefaultDomainMapping = new DomainMappingOptions
    {
        DomainName = "myDomainName.com"
    }
});
Opening the deployed webpage and clicking the "Sign In" button at the top, I get taken to /api/auth/signin?callbackUrl=%2F with a form. Without touching the credentials, I click "Sign in with credentials". This results in the page reloading and nothing happening. The expected behaviour is a session and a redirect back to the home page (/), which is exactly what happens when running locally with either yarn dev or yarn build && yarn start.
I get no errors client- or server-side, which leaves me in the dark.
I suspect it has to do with domain configuration, but I am unable to find the problem. I tested with another Next.js/NextAuth project using an AWS Cognito provider. That also had problems: when I clicked the sign-in button, I got an "Unexpected token" error because the underlying signIn(...) function (from the NextAuth library) tried to parse the fetched page as JSON, and the response turned out to be the sign-in page itself. Hence my suspicion of something domain-related.
I'm doing my first Next.js project and am currently looking at logging. Looking around, I found this blog post, which suggests configuring Pino to log the commit SHA in each message. This makes sense to me for a lot of reasons. The question is: how do I get the SHA when Next is doing its static build? In the blog example, they're deploying on Vercel and get it from there:
const logger = pino({
  base: {
    env: process.env.NODE_ENV,
    revision: process.env.VERCEL_GITHUB_COMMIT_SHA,
  },
});
But I almost certainly will not be deploying on Vercel. Equally, there's lots of documentation on how to generate a custom build ID, which comes down to doing something like this in next.config.js:
const nextConfig = {
  generateBuildId: async () => {
    const buildId = await determineBuildId()
    console.log(`> Build ID: ${buildId}`)
    return buildId
  },
}
But I can't find any documentation on how to actually access the buildId within Next, so that it's a constant in my logging:
const logger = pino({
  base: {
    env: process.env.NODE_ENV,
    revision: // how do I access the generated buildId here?
  },
});
This really must just be a failing of my google-fu, because this seems like it should be common practice, but ... I can't find it. Any ideas? Or pointers to documentation?
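For what it's worth, the closest workaround I can come up with (an untested sketch, assuming the build runs inside a git checkout; NEXT_PUBLIC_COMMIT_SHA is a name I made up) is to compute the SHA myself in next.config.js and inline it via the env key, rather than reading Next's generated buildId back out:
// next.config.js -- sketch: derive the SHA once, reuse it everywhere
const { execSync } = require('child_process')
const commitSha = execSync('git rev-parse HEAD').toString().trim()

const nextConfig = {
  // use the SHA as the build ID...
  generateBuildId: async () => commitSha,
  // ...and inline it into the bundle at build time
  env: {
    NEXT_PUBLIC_COMMIT_SHA: commitSha,
  },
}
module.exports = nextConfig

// logger.js -- the logger then picks it up like the Vercel example
const pino = require('pino')
const logger = pino({
  base: {
    env: process.env.NODE_ENV,
    revision: process.env.NEXT_PUBLIC_COMMIT_SHA,
  },
})
But that sidesteps rather than answers the question of how to access the buildId that Next itself generates.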
I deployed a simple NFT smart contract on the Polygon Mumbai testnet, but when I try to verify it, it shows an error. Please guide me on how to verify it.
This is the error I am getting:
PS C:\Users\Sumits\Desktop\truffle> truffle run verify MyNFT --network matic --debug
DEBUG logging is turned ON
Running truffle-plugin-verify v0.5.20
Retrieving network's chain ID
Verifying MyNFT
Reading artifact file at C:\Users\Sumits\Desktop\truffle\build\contracts\MyNFT.json
Failed to verify 1 contract(s): MyNFT
PS C:\Users\Sumits\Desktop\truffle>
This is my truffle-config.js:
const HDWalletProvider = require('@truffle/hdwallet-provider');
const fs = require('fs');
const mnemonic = fs.readFileSync(".secret").toString().trim();
module.exports = {
  networks: {
    development: {
      host: "127.0.0.1",  // Localhost (default: none)
      port: 8545,         // Standard Ethereum port (default: none)
      network_id: "*",    // Any network (default: none)
    },
    matic: {
      provider: () => new HDWalletProvider(mnemonic, `https://rpc-mumbai.maticvigil.com`),
      network_id: 80001,
      confirmations: 2,
      timeoutBlocks: 200,
      skipDryRun: true
    },
  },
  // Set default mocha options here, use special reporters etc.
  mocha: {
    // timeout: 100000
  },
  // Configure your compilers
  compilers: {
    solc: {
      version: "^0.8.0",
    }
  },
  plugins: ['truffle-plugin-verify'],
  api_keys: {
    polygonscan: 'BTWY55K812M*******WM9NAAQP1H3'
  }
}
First deploy the contract:
truffle migrate --network matic --reset
I am not sure whether you successfully deployed it to the matic network, because your configuration does not seem to be correct:
matic: {
  // make sure you set up the provider correctly
  provider: () => new HDWalletProvider(mnemonic, `https://rpc-mumbai.maticvigil.com/v1/YOURPROJECTID`),
  network_id: 80001,
  confirmations: 2,
  timeoutBlocks: 200,
  skipDryRun: true
},
Then verify:
truffle run verify ContractName --network matic
ContractName should be the name of the contract, not the name of the file.
Also make sure the polygonscan key name in api_keys is lowercase, as shown below.
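For reference, a minimal sketch of the relevant truffle-config.js fragment (YOUR_POLYGONSCAN_API_KEY is a placeholder for your own key):
plugins: ['truffle-plugin-verify'],
api_keys: {
  polygonscan: 'YOUR_POLYGONSCAN_API_KEY' // the key name must be lowercase
}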
I am trying to deploy a Next.js (SSR) application to AWS Amplify using the CDK, but Amplify fails to identify the app as Next.js SSR. When I do it manually through the AWS UI, the app is identified as SSR and works as expected.
This is generated by @aws-cdk/aws-amplify v118 as:
import * as cdk from '@aws-cdk/core';
import * as amplify from '@aws-cdk/aws-amplify';
import codebuild = require('@aws-cdk/aws-codebuild');
export class AmplifyStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props: cdk.StackProps) {
    super(scope, id, props);

    const sourceCodeProvider = new amplify.GitHubSourceCodeProvider({
      owner: '.....',
      repository: '....',
      oauthToken: cdk.SecretValue.secretsManager('github-token'),
    });

    const buildSpec = codebuild.BuildSpec.fromObjectToYaml({
      version: 1,
      applications: [
        {
          frontend: {
            phases: {
              preBuild: {
                commands: ["npm install"]
              },
              build: {
                commands: ["npm run build"]
              }
            },
            artifacts: {
              baseDirectory: ".next",
              files: ["**/*"]
            },
            cache: {
              paths: ["node_modules/**/*"]
            }
          }
        }
      ]
    });

    const amplifyApp = new amplify.App(this, "cdk-nf-web-app", {
      sourceCodeProvider: sourceCodeProvider,
      buildSpec: buildSpec
    });

    amplifyApp.addBranch('develop', {
      basicAuth: amplify.BasicAuth.fromGeneratedPassword('dev')
    });

    amplifyApp.addCustomRule({
      source: "</^[^.]+$|\\.(?!(css|gif|ico|jpg|js|png|txt|svg|woff|ttf|map|json)$)([^.]+$)/>",
      target: "/index.html",
      status: amplify.RedirectStatus.REWRITE
    });
  }
}
This is identical to what AWS generates when I do it manually from the UI. The difference is the lack of framework identification, as shown in the picture. Any ideas?
To answer my own question: I was missing the role, and without it AWS won't create the necessary resources (role: https://docs.aws.amazon.com/cdk/api/latest/docs/aws-iam-readme.html).
Edit to elaborate on how I fixed it:
I added a new role that can be used by Amplify:
const role = new iam.Role(this, 'amplify-role-webapp-'+props.environment, {
  assumedBy: new iam.ServicePrincipal('amplify.amazonaws.com'),
  description: 'Custom role permitting resources creation from Amplify',
});
and assigned a policy (AdministratorAccess) to that role:
let iManagedPolicy = iam.ManagedPolicy.fromAwsManagedPolicyName(
  'AdministratorAccess',
);
role.addManagedPolicy(iManagedPolicy);
Then, upon creating the app, I assigned the role to it:
const amplifyApp = new amplify.App(this, "cdk-nf-web-app", {
  sourceCodeProvider: sourceCodeProvider,
  buildSpec: buildSpec,
  role: role // <--- this line here
});
The Amplify app requires authorisation to create the relevant resources:
// This is for demonstration purposes only; do not give full access in production!
amplifyApp.grantPrincipal.addToPrincipalPolicy(new iam.PolicyStatement({
  resources: ["*"],
  actions: ['*'],
}));
I have a Meteor app based on Angular 1.3 + Meteor 1.5.2.2, running on Ubuntu 17.
I am trying to deploy the app on a local machine first, before going to a live server, using Meteor Up. But I am facing this issue when running the mup setup command:
martinihenry#martinihenry:~/mytestapp-prod/.deploy$ mup setup
Started TaskList: Setup Docker
[192.168.100.12] - Setup Docker
events.js:141
throw er; // Unhandled 'error' event
^
Error: connect ECONNREFUSED 192.168.100.12:22
at Object.exports._errnoException (util.js:907:11)
at exports._exceptionWithHostPort (util.js:930:20)
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1078:14)
Here is my mup.json:
module.exports = {
  servers: {
    one: {
      // TODO: set host address, username, and authentication method
      host: '192.168.100.12',
      username: 'root',
      // pem: './path/to/pem'
      // password: 'server-password'
      // or neither for authenticate from ssh-agent
    }
  },
  app: {
    // TODO: change app name and path
    name: 'mytestapp-prod',
    path: '../',
    servers: {
      one: {},
    },
    buildOptions: {
      serverOnly: true,
    },
    env: {
      // TODO: Change to your app's url
      // If you are using ssl, it needs to start with https://
      ROOT_URL: '192.168.100.12:3000',
      MONGO_URL: 'mongodb://localhost/meteor',
    },
    // ssl: { // (optional)
    //   // Enables let's encrypt (optional)
    //   autogenerate: {
    //     email: 'email.address@domain.com',
    //     // comma separated list of domains
    //     domains: 'website.com,www.website.com'
    //   }
    // },
    docker: {
      // change to 'kadirahq/meteord' if your app is using Meteor 1.3 or older
      image: 'abernix/meteord:base',
    },
    // Show progress bar while uploading bundle to server
    // You might need to disable it on CI servers
    enableUploadProgressBar: true
  },
  mongo: {
    version: '3.4.1',
    servers: {
      one: {}
    }
  }
};
What could be wrong here?
It looks like you don't have sshd running on your machine, or you have not enabled remote ssh access for root.
You need to edit /etc/ssh/sshd_config, and comment out the following line:
PermitRootLogin without-password
Just below it, add the following line:
PermitRootLogin yes
Then restart SSH:
service ssh restart
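Once SSH has restarted, you can confirm that port 22 accepts connections before re-running mup setup (using the host from the question); getting a login prompt instead of "connection refused" means the fix worked:
ssh root@192.168.100.12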
I know this is late, but this is a known and reproducible bug resulting from inotify using all of the available slots for watches, and while the error message is very misleading, it actually has absolutely nothing to do with disk space.
The easy fix? Increase the watch slots:
sudo -i
echo 1048576 > /proc/sys/fs/inotify/max_user_watches
exit
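Note that writing to /proc only lasts until the next reboot. To persist the setting (a sketch; the sysctl configuration file can vary by distro), add it to /etc/sysctl.conf and reload:
echo fs.inotify.max_user_watches=1048576 | sudo tee -a /etc/sysctl.conf
sudo sysctl -p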