When I send a transaction to peer/transactions, I can send it with or without signatures. Both transactions are accepted. What is the difference?
Example: create a new chain/dapp using asch-js.
Situation 1: using signatures (and a transaction id):
function createDApp(options, secret, secondSecret) {
  var keys = crypto.getKeys(secret);
  var transaction = {
    secret: secret,
    type: 200,
    amount: 0,
    fee: constants.fees.dapp,
    recipientId: null,
    senderId: crypto.getAddress(keys.publicKey),
    timestamp: slots.getTime() - globalOptions.get('clientDriftSeconds'),
    args: [options.name, options.description, options.link, options.icon, options.delegates, options.unlockDelegates],
    signatures: []
  };
  transaction.signatures.push(crypto.sign(transaction, keys));
  if (secondSecret) {
    var secondKeys = crypto.getKeys(secondSecret);
    transaction.signatures.push(crypto.secondSign(transaction, secondKeys));
  }
  transaction.id = crypto.getId(transaction);
  return transaction;
}
Situation 2: no signatures
function createDApp(options, secret, secondSecret) {
  var keys = crypto.getKeys(secret);
  var transaction = {
    secret: secret,
    type: 200,
    amount: 0,
    fee: constants.fees.dapp,
    recipientId: null,
    senderId: crypto.getAddress(keys.publicKey),
    timestamp: slots.getTime() - globalOptions.get('clientDriftSeconds'),
    args: [options.name, options.description, options.link, options.icon, options.delegates, options.unlockDelegates],
    signatures: []
  };
  return transaction;
}
Both transactions are accepted (and create a new chain). So what is the difference, and what is best practice?
As far as I understand, the api/transactions endpoint is mainly for unsigned transactions (like your second example). That is why you need to provide the secret property there: without it, the ASCH node can't sign your unsigned transaction for you.
In your first example you sign the transaction yourself, so you don't need to send the secret property to the peer/transactions endpoint.
From a security standpoint it is better to sign your transactions locally, so that no malicious node can steal your funds when you send your secret to a blockchain endpoint.
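As a minimal sketch (reusing createDApp from Situation 1 above; the broadcast step itself is assumed), local signing just means the secret never leaves your machine:

var transaction = createDApp(options, secret, secondSecret); // signed locally
delete transaction.secret; // the secret must never be sent to a remote node
// POST the fully signed transaction (with its id and signatures) to peer/transactions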
Reddit's access token expires after 1 hour, but I want users who log in to my app to be able to, for example, post comments on Reddit. This means I need to refresh their access token once it has expired. Since I'm using a database (PlanetScale + Prisma) and not a JWT strategy, the documentation found here https://next-auth.js.org/tutorials/refresh-token-rotation is not useful to me (the jwt callback is never called).
As far as I understand it, this means it's not really possible to check the expiration in the session callback and refresh the token there without hitting the database each time?
What can I do if I want to refresh the access token in my database? Should I use a JWT strategy instead, even though I'm using a database?
To do refresh token rotation when using a database strategy, you can do something like this:
import axios from "axios";
import { PrismaClient } from "@prisma/client";
import type { Session } from "next-auth";

const prisma = new PrismaClient(); // or import your shared prisma instance

// Minimal shape of Reddit's token endpoint response.
interface RedditResponse {
  access_token: string;
  token_type: string;
  expires_in: number;
  refresh_token: string;
  scope: string;
}

// Assumes the Session type has been augmented so that session.user.id exists.
async function refreshAccessToken(session: Session) {
  if (!session.user?.id) {
    return;
  }
  const {
    id,
    refresh_token: refreshToken,
    expires_at: expiresAt,
  } = (await prisma.account.findFirst({
    where: { userId: session.user.id, provider: "reddit" },
  })) ?? {};
  if (!id || !refreshToken) {
    return;
  }
  // If the token is expired, refresh it
  if (expiresAt && Date.now() / 1000 > expiresAt) {
    const authorizationString = Buffer.from(
      `${process.env?.["REDDIT_CLIENT_ID"]}:${process.env?.["REDDIT_CLIENT_SECRET"]}`,
    ).toString("base64");
    const headers = {
      Authorization: `Basic ${authorizationString}`,
      "Content-Type": "application/x-www-form-urlencoded",
    };
    const urlSearchParams = new URLSearchParams();
    urlSearchParams.append("grant_type", "refresh_token");
    urlSearchParams.append("refresh_token", refreshToken);
    urlSearchParams.append("redirect_uri", `${process.env?.["NEXTAUTH_URL"]}/api/auth/callback/reddit`);
    const { data } = await axios.post<RedditResponse>("https://www.reddit.com/api/v1/access_token", urlSearchParams, {
      headers,
    });
    await prisma.account.update({
      where: { id },
      data: {
        access_token: data.access_token,
        expires_at: Math.floor(Date.now() / 1000) + data.expires_in,
        refresh_token: data.refresh_token,
        token_type: data.token_type,
        scope: data.scope,
      },
    });
  }
}
You can call this from anywhere, I guess. I don't know whether it makes sense to run it in the session callback, since that's probably a performance hit, so maybe just call it each time you actually need the access token for something? I'm not knowledgeable enough to say what best practice is in this regard...
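As a rough usage sketch (the route, the authOptions import, and exporting refreshAccessToken from a shared module are my assumptions, not part of the answer above), calling it lazily from an API route right before you need the token could look like:

// pages/api/comment.js (hypothetical; assumes next-auth v4.20+, which exports getServerSession)
import { getServerSession } from "next-auth";
import { authOptions } from "./auth/[...nextauth]"; // assumed export
import { refreshAccessToken } from "../../lib/refresh-access-token"; // wherever the helper lives

export default async function handler(req, res) {
  const session = await getServerSession(req, res, authOptions);
  if (!session) return res.status(401).end();
  await refreshAccessToken(session); // refresh only when the token is about to be used
  // ...then read the fresh access_token from the account row and call Reddit...
  res.status(200).end();
}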
After many hours of tinkering I just found out how to get the refresh token into the database!
Following the first part of the NextAuth token refresh tutorial, add the authorization param to the provider options:
const GOOGLE_AUTHORIZATION_URL =
  "https://accounts.google.com/o/oauth2/v2/auth?" +
  new URLSearchParams({
    prompt: "consent",
    access_type: "offline",
    response_type: "code",
  });
and
export default NextAuth({
  providers: [
    GoogleProvider({
      clientId: process.env.GOOGLE_ID,
      clientSecret: process.env.GOOGLE_SECRET,
      authorization: GOOGLE_AUTHORIZATION_URL,
    }),
  ],
  // ...rest of your NextAuth options
});
This will send me well on my way to figuring out the rest of the process... hope it works for you too!
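For completeness, a hedged sketch of reading the stored refresh token back out (assuming the standard NextAuth Prisma adapter schema, where tokens live on the Account model):

// userId is whatever id your session exposes for the signed-in user
const account = await prisma.account.findFirst({
  where: { userId, provider: "google" },
});
console.log(account?.refresh_token); // populated after a sign-in with prompt=consent + access_type=offline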
So to import into Firebase, you need the hashing parameters to verify the password properly. We're using the Microsoft OWIN security packages from ASP.NET. I have a machineKey set up in the web.config (I presume this is the HMAC key I need to provide?)
<machineKey validationKey="<key>" decryptionKey="<decodeKey>" validation="HMACSHA256" />
and in the startup it's simply:
public void ConfigureOAuth(IAppBuilder app)
{
    OAuthAuthorizationServerOptions OAuthServerOptions = new OAuthAuthorizationServerOptions()
    {
        TokenEndpointPath = new PathString("/token"),
        AccessTokenExpireTimeSpan = TimeSpan.FromDays(1),
#if DEBUG
        AllowInsecureHttp = true,
#endif
        Provider = new SimpleAuthorizationServerProvider()
    };

    // Token Generation
    app.UseOAuthAuthorizationServer(OAuthServerOptions);
    app.UseOAuthBearerAuthentication(new OAuthBearerAuthenticationOptions());
}
To import into Firebase, I'm using the following (this is a dev DB user, so it's OK that I'm showing the hash):
let userImportRecords = [
  {
    uid: 'afd698c4-4172-49f1-a6c4-77d175efbc1c',
    email: '<usersEmail>',
    passwordHash: Buffer.from('APwDZFRV6tnEREBsmECj2LMkgUZAqfOAb9/u0nMra5WrHgVG0F7ggFQABt0Qdtsw3w=='),
    passwordSalt: Buffer.from('APwDZFRV6tnEREBs')
  }
];

var admin = require("firebase-admin");
var serviceAccount = require("./socket-server/serviceAccountKey.json");

admin.initializeApp({
  credential: admin.credential.cert(serviceAccount),
  databaseURL: "<DB URL>"
});

admin.auth().importUsers(userImportRecords, {
  hash: {
    algorithm: 'PBKDF2_SHA256',
    rounds: 10000,
    saltSeparator: 16,
    // Must be provided in a byte buffer.
    key: Buffer.from('<webconfig machine decryptionKey key?>')
  }
});
This imports fine, but the password doesn't work. I'm assuming there's something wrong with how I have the salt and/or the hash key. I didn't see in the ASP.NET documentation where the key is set, but I know I have that machineKey set up, so I presume that's what it's using (we have another server that validates the tokens itself).
I'm fairly certain the algorithm and rounds are correct (pulled from documentation). I know the salt is at the front of the password hash, and I think it's 16 bytes. But in this setup, do I leave the 16 bytes included in the hash, or do they need to be removed? Did I even remove the correct number of characters, or do I need to load the string into memory and extract the 16 bytes manually? I feel like this is super close; I just need to re-arrange my values.
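For reference, here is a sketch of splitting the salt out in code, assuming the hash uses the default ASP.NET Identity v2 layout (one 0x00 format byte, then a 16-byte salt, then a 32-byte PBKDF2 subkey); the 68-character base64 string above decodes to exactly 49 bytes, which matches that layout:

// Sketch: split an assumed ASP.NET Identity v2 hash into salt and subkey.
// Layout assumption: [0x00 format byte][16-byte salt][32-byte subkey]
const raw = Buffer.from('APwDZFRV6tnEREBsmECj2LMkgUZAqfOAb9/u0nMra5WrHgVG0F7ggFQABt0Qdtsw3w==', 'base64');
const salt = raw.subarray(1, 17);  // bytes 1..16: the salt
const subkey = raw.subarray(17);   // bytes 17..48: the derived key itself
console.log(raw.length, salt.length, subkey.length); // 49 16 32

Note that Buffer.from(str) without an encoding argument decodes UTF-8 text, not base64, so the 'base64' argument matters when building these buffers.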
I run a program that sends data to DynamoDB using API Gateway and Lambdas.
All the data sent to the DB is small, and it's only sent from about 200 machines.
I'm still on the free tier, and sometimes, unexpectedly, in the middle of the month I start getting a higher provisioned read/write capacity, and from that day on I pay a constant amount each day until the end of the month.
Can someone tell from the image below what happened on 03/13 that caused this spike in the charts and caused the provisioned capacity to rise from 50 to 65?
I can't tell what happened based on those charts alone, but some things to consider:
You may not be aware of the new "PAY_PER_REQUEST" billing mode option for DynamoDB tables, which allows you to mostly forget about manually provisioning your throughput capacity: https://aws.amazon.com/blogs/aws/amazon-dynamodb-on-demand-no-capacity-planning-and-pay-per-request-pricing/
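If you want to try that, a minimal sketch for switching an existing table to on-demand billing (the table name is a placeholder; aws-sdk v2 assumed):

// Sketch: move an existing DynamoDB table to PAY_PER_REQUEST billing.
const AWS = require('aws-sdk');
const dynamodb = new AWS.DynamoDB();

dynamodb.updateTable({
  TableName: 'my-table',          // placeholder table name
  BillingMode: 'PAY_PER_REQUEST', // stop managing provisioned RCUs/WCUs
}).promise()
  .then(() => console.log('billing mode updated'))
  .catch(console.error);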
Also, this might not make sense for your use case, but for free tier projects I've found it useful to proxy all writes to DynamoDB through an SQS queue (use the queue as an event source for a Lambda with a reserved concurrency that is compatible with your provisioned throughput). This is easy if your project is reasonably event-driven: build your DynamoDB request object/params, write them to SQS, and have the next step be a Lambda that is triggered from the DynamoDB stream (so you aren't expecting a synchronous response from the write operation in the first Lambda). Like this:
Example serverless config for SQS-triggered Lambda:
dynamodb_proxy:
  description: SQS event function to write to DynamoDB table '${self:custom.dynamodb_table_name}'
  handler: handlers/dynamodb_proxy.handler
  memorySize: 128
  reservedConcurrency: 95 # see custom.dynamodb_active_write_capacity_units
  environment:
    DYNAMODB_TABLE_NAME: ${self:custom.dynamodb_table_name}
  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:PutItem
      Resource:
        - Fn::GetAtt: [ DynamoDbTable, Arn ]
    - Effect: Allow
      Action:
        - sqs:ReceiveMessage
        - sqs:DeleteMessage
        - sqs:GetQueueAttributes
      Resource:
        - Fn::GetAtt: [ DynamoDbProxySqsQueue, Arn ]
  events:
    - sqs:
        batchSize: 1
        arn:
          Fn::GetAtt: [ DynamoDbProxySqsQueue, Arn ]
Example write to SQS:
await sqs.sendMessage({
  MessageBody: JSON.stringify({
    method: 'putItem',
    params: {
      TableName: DYNAMODB_TABLE_NAME,
      Item: {
        ...attributes,
        created_at: {
          S: createdAt.toString(),
        },
        created_ts: {
          N: createdAtTs.toString(),
        },
      },
      ...conditionExpression,
    },
  }),
  QueueUrl: SQS_QUEUE_URL_DYNAMODB_PROXY,
}).promise();
SQS-triggered Lambda:
import retry from 'async-retry';
import { dynamodb } from '../lib/aws-clients';

const {
  DYNAMODB_TABLE_NAME
} = process.env;

export const handler = async (event) => {
  const message = JSON.parse(event.Records[0].body);
  if (message.params.TableName !== DYNAMODB_TABLE_NAME) {
    console.log(`DynamoDB proxy event table '${message.params.TableName}' does not match current table name '${DYNAMODB_TABLE_NAME}', skipping.`);
  } else if (message.method === 'putItem') {
    let attemptsTaken;
    await retry(async (bail, attempt) => {
      attemptsTaken = attempt;
      try {
        await dynamodb.putItem(message.params).promise();
      } catch (err) {
        if (err.code && err.code === 'ConditionalCheckFailedException') {
          // expected exception
          // if (message.params.ConditionExpression) {
          //   const conditionExpression = message.params.ConditionExpression;
          //   console.log(`ConditionalCheckFailed: ${conditionExpression}. Skipping.`);
          // }
        } else if (err.code && err.code === 'ProvisionedThroughputExceededException') {
          // retry
          throw err;
        } else {
          bail(err);
        }
      }
    }, {
      retries: 5,
      randomize: true,
    });
    if (attemptsTaken > 1) {
      console.log(`DynamoDB proxy event succeeded after ${attemptsTaken} attempts`);
    }
  } else {
    console.log(`Unsupported method ${message.method}, skipping.`);
  }
};
I'm migrating to the new database and 3.0 client libs. I'm updating the part which generates a custom auth token (on our server) to do a PATCH to update a resource in the Firebase DB.
These PATCH requests used to be made by our server to Firebase using admin claims based on this: https://www.firebase.com/docs/rest/guide/user-auth.htm
For the new DB, I'm generating the JWT (using ruby-jwt) like this:
payload = {
  aud: "https://identitytoolkit.googleapis.com/google.identity.identitytoolkit.v1.IdentityToolkit",
  claims: custom_claims.merge({ admin: true }),
  exp: now_seconds + (60 * 60), # Maximum expiration time is one hour
  iat: now_seconds,
  iss: service_account_email,
  sub: service_account_email,
  uid: uid
}

JWT.encode(payload, private_key, "RS256")
A PATCH request with this token to the Firebase DB fails with: Missing claim 'kid' in auth header.
In the new Firebase you need to directly use a Service Account to create administrative access credentials. Here is a Node.js snippet that shows how to make a REST call to the Database:
// key.json is a service account key downloaded from the Firebase Console
var key = require('./key.json');
var google = require('googleapis');
var request = require('request');

var DATABASE_URL = 'https://<databaseName>.firebaseio.com';

var jwtClient = new google.auth.JWT(key.client_email, null, key.private_key, [
  'https://www.googleapis.com/auth/userinfo.email',
  'https://www.googleapis.com/auth/firebase.database'
]);

jwtClient.authorize(function(err, tokens) {
  request({
    url: DATABASE_URL + '/.json',
    method: 'GET',
    headers: {
      'Authorization': 'Bearer ' + tokens.access_token
    }
  }, function(err, resp) {
    console.log(resp.body);
  });
});
To do the same in Ruby, you might take a look at the googleauth gem for fetching the access token using Service Account credentials.
Here is the equivalent of Michael Bleigh's answer using the ruby googleauth module:
require 'googleauth'

scopes = ['https://www.googleapis.com/auth/userinfo.email', 'https://www.googleapis.com/auth/firebase.database']
auth = ::Google::Auth.get_application_default(scopes)
auth_client = auth.dup
auth_client.sub = "service-account-email-here@yourapp.iam.gserviceaccount.com"
token = auth_client.fetch_access_token!
You will also need to set the GOOGLE_APPLICATION_CREDENTIALS environment variable to the path of your service account JSON file. The value for auth_client.sub comes from client_email in that JSON file.
Of course, as above, this is only valid in a server application you control.
Also, making the request to the Firebase REST API is still an exercise for the reader, though a rough sketch follows.
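As a minimal sketch only (reusing request, DATABASE_URL, and tokens from the Node snippet above; the path and payload are placeholders), the PATCH itself could look like:

// Sketch: PATCH a resource using the service-account access token from above.
request({
  url: DATABASE_URL + '/some/path.json',  // placeholder path
  method: 'PATCH',
  headers: {
    'Authorization': 'Bearer ' + tokens.access_token
  },
  body: JSON.stringify({ updated: true }) // placeholder payload
}, function(err, resp) {
  console.log(resp.statusCode, resp.body);
});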
References:
https://developers.google.com/api-client-library/ruby/auth/service-accounts#authorizingrequests
https://developers.google.com/identity/protocols/application-default-credentials#whentouse