I am storing my users' logged-in status in Redux to determine which sections of the application they should see, like below:
state: {
  user: {
    loggedIn: true
  }
}
I am using the code below to try to utilize migrations in redux-persist#^5.9.1 to log the user out on a version bump.
const migrations = {
  1: state => {
    return {
      ...state,
      user: {
        ...state.user,
        loggedIn: false
      }
    }
  }
}

const persistConfig = {
  key: 'root',
  version: 1,
  storage,
  migrate: createMigrate(migrations, { debug: true })
}
But the state does not seem to be reconciled with the new loggedIn value; it stays true. I can see in the console that the migrations are running:
redux-persist: migrationKeys [1]
createMigrate.js:39 redux-persist: running migration for versionKey 1
From reading the documentation and looking at the example, this should be possible. I am unsure whether I am doing something incorrectly or have misread the documentation.
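For reference, createMigrate's selection of migrations can be sketched roughly like this (a simplified standalone model, not redux-persist's actual code; it shows that a store persisted at the default version -1 should indeed run migration 1):

```javascript
// Simplified standalone model of how redux-persist's createMigrate picks
// migrations (not the library's actual code): every migration whose key is
// greater than the stored version and at most the configured version runs,
// in ascending order.
function runMigrations(migrations, storedState, storedVersion, currentVersion) {
  if (storedVersion === currentVersion) return storedState;
  return Object.keys(migrations)
    .map(Number)
    .filter(v => v > storedVersion && v <= currentVersion)
    .sort((a, b) => a - b)
    .reduce((state, key) => migrations[key](state), storedState);
}

const migrations = {
  1: state => ({ ...state, user: { ...state.user, loggedIn: false } }),
};

// A store persisted before any version was set defaults to version -1,
// so bumping the config to version 1 should run migration 1:
const migrated = runMigrations(migrations, { user: { loggedIn: true } }, -1, 1);
console.log(migrated.user.loggedIn); // false
```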
In an Amplify w/ GraphQL project, I have a conversation schema with both members of the conversation saved in an array. When creating a conversation, I want only one conversation to exist between two users, so creating the conversation entry should fail when a conversation already exists.
When creating the mutation, I tried to use the condition input to make the mutation fail when the condition is false, but I have not found a way to check the members array.
Any advice on using the condition input is appreciated!
type Conversation @model {
  id: ID!
  members: [String!]!
  ...
}
const createConversationInput: CreateConversationInput = {
  members: [userOneId, userTwoId],
  ...
};

const createConversationMutation = (await API.graphql({
  query: createConversation,
  variables: {
    input: createConversationInput,
    condition: {
      and: [
        { not: { members: { contains: userOneId } } },
        { not: { members: { contains: userTwoId } } },
      ],
    },
  },
  ...
})) as { data: CreateConversationMutation };
I run a program that sends data to DynamoDB using API Gateway and Lambdas.
All the data sent to the DB is small, and it is only sent from about 200 machines.
I'm still on the free tier, and sometimes, unexpectedly in the middle of the month, I start getting higher provisioned read/write capacity, and from that day on I pay a constant amount each day until the end of the month.
Can someone tell from the image below what happened on 03/13 that caused this spike in the charts and made the provisioned capacity rise from 50 to 65?
I can't tell what happened based on those charts alone, but some things to consider:
You may not be aware of the new "PAY_PER_REQUEST" billing mode option for DynamoDB tables which allows you to mostly forget about manually provisioning your throughput capacity: https://aws.amazon.com/blogs/aws/amazon-dynamodb-on-demand-no-capacity-planning-and-pay-per-request-pricing/
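For example, in a serverless.yml resources section, switching a table to on-demand is just a matter of setting BillingMode in place of ProvisionedThroughput (a sketch; the table name and key schema below are placeholders):

```yaml
resources:
  Resources:
    DynamoDbTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: ${self:custom.dynamodb_table_name}
        BillingMode: PAY_PER_REQUEST # no ProvisionedThroughput block needed
        AttributeDefinitions:
          - AttributeName: id # placeholder key
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH
```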
Also, this might not make sense for your use case, but for free-tier projects I've found it useful to proxy all writes to DynamoDB through an SQS queue (use the queue as an event source for a Lambda with a reserved concurrency that is compatible with your provisioned throughput). This is easy if your project is reasonably event-driven: build your DynamoDB request object/params, write it to SQS, and have the next step be a Lambda triggered from the DynamoDB stream, so you aren't expecting a synchronous response from the write operation in the first Lambda. Like this:
Example serverless config for SQS-triggered Lambda:
dynamodb_proxy:
  description: SQS event function to write to DynamoDB table '${self:custom.dynamodb_table_name}'
  handler: handlers/dynamodb_proxy.handler
  memorySize: 128
  reservedConcurrency: 95 # see custom.dynamodb_active_write_capacity_units
  environment:
    DYNAMODB_TABLE_NAME: ${self:custom.dynamodb_table_name}
  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:PutItem
      Resource:
        - Fn::GetAtt: [ DynamoDbTable, Arn ]
    - Effect: Allow
      Action:
        - sqs:ReceiveMessage
        - sqs:DeleteMessage
        - sqs:GetQueueAttributes
      Resource:
        - Fn::GetAtt: [ DynamoDbProxySqsQueue, Arn ]
  events:
    - sqs:
        batchSize: 1
        arn:
          Fn::GetAtt: [ DynamoDbProxySqsQueue, Arn ]
Example write to SQS:
await sqs.sendMessage({
  MessageBody: JSON.stringify({
    method: 'putItem',
    params: {
      TableName: DYNAMODB_TABLE_NAME,
      Item: {
        ...attributes,
        created_at: {
          S: createdAt.toString(),
        },
        created_ts: {
          N: createdAtTs.toString(),
        },
      },
      ...conditionExpression,
    },
  }),
  QueueUrl: SQS_QUEUE_URL_DYNAMODB_PROXY,
}).promise();
SQS-triggered Lambda:
import retry from 'async-retry';
import { dynamodb } from '../lib/aws-clients';

const { DYNAMODB_TABLE_NAME } = process.env;

export const handler = async (event) => {
  const message = JSON.parse(event.Records[0].body);
  if (message.params.TableName !== DYNAMODB_TABLE_NAME) {
    console.log(`DynamoDB proxy event table '${message.params.TableName}' does not match current table name '${DYNAMODB_TABLE_NAME}', skipping.`);
  } else if (message.method === 'putItem') {
    let attemptsTaken;
    await retry(async (bail, attempt) => {
      attemptsTaken = attempt;
      try {
        await dynamodb.putItem(message.params).promise();
      } catch (err) {
        if (err.code === 'ConditionalCheckFailedException') {
          // expected exception: the conditional write was rejected, nothing to do
        } else if (err.code === 'ProvisionedThroughputExceededException') {
          // throttled: rethrow so async-retry tries again
          throw err;
        } else {
          // unexpected error: stop retrying
          bail(err);
        }
      }
    }, {
      retries: 5,
      randomize: true,
    });
    if (attemptsTaken > 1) {
      console.log(`DynamoDB proxy event succeeded after ${attemptsTaken} attempts`);
    }
  } else {
    console.log(`Unsupported method ${message.method}, skipping.`);
  }
};
Is there a way to hide the details returned by Meteor.user() in the browser console in production mode?
Below is a snapshot of what I see when I deploy my code to production. This is very insecure as far as client details are concerned.
Just don't publish sensitive data to the client; keep your logic regarding user memberships on the server.
Don't save sensitive data in Meteor.user().
Instead, make another collection and associate it with the user through _id.
Use publish and subscribe carefully.
You do not want to use the profile field on a Meteor.users document, since that is always published to the client.
See here: https://guide.meteor.com/accounts.html#dont-use-profile
What I would suggest is to move all the sensitive data out of the profile field and into top-level keys of the users document.
if (Meteor.isServer) {
  // with document:
  // {
  //   _id: '123',
  //   services: { /* */ },
  //   profile: { /* */ },
  //   subscription: { /* */ }
  // }
  Meteor.publish('users.subscriptions', function(userId) {
    return Meteor.users.find({ _id: userId }, { fields: { subscription: 1 } })
  })
}

if (Meteor.isClient) {
  Template.home.onCreated(function() {
    this.autorun(() => {
      console.log(Meteor.user().subscription) // `undefined` at this point
      this.subscribe('users.subscriptions', Meteor.userId(), function() {
        console.log(Meteor.user().subscription) // returns the user's subscription
      })
    })
  })
}
You can make use of libraries such as percolate:migrations to migrate the data to top-level keys.
meteor add percolate:migrations
And then:
// server/migrations/1-move-all-profile-info-to-top-level.js
import { _ } from 'meteor/underscore'

Migrations.add({
  version: 1,
  up: function() {
    _.each(Meteor.users.find().fetch(), function(user) {
      Meteor.users.update(user._id, {
        $set: {
          subscription: user.profile.subscription,
          // other fields that need migrating
          profile: null // empty out the profile field
        }
      })
    })
  }
})

Meteor.startup(() => {
  Migrations.migrateTo('latest')
})
I'm using accounts-ui and accounts-google in Meteor v1.4.1. I can't get the user.services object to show up in client code. In particular, I need Google's profile picture.
I've configured the server-side code to authenticate with Google like so:
import { Meteor } from 'meteor/meteor';
import { ServiceConfiguration } from 'meteor/service-configuration';

const services = Meteor.settings.private.oauth;

for (let service of Object.keys(services)) {
  ServiceConfiguration.configurations.upsert({
    service
  }, {
    $set: {
      clientId: services[service].app_id,
      secret: services[service].secret,
      loginStyle: "popup"
    }
  });
}
...and the client side code to configure permissions like so:
Accounts.ui.config({
  requestPermissions: {
    google: ['email', 'profile']
  },
  forceApprovalPrompt: {
    google: true
  },
  passwordSignupFields: 'EMAIL_ONLY'
});
When users click the 'Sign-In with Google' button, a pop-up appears and they can authenticate. No prompt appears, however, despite forceApprovalPrompt being set to true for google.
The big issue is that when I execute this,
const user = Meteor.user();
console.log(user.services);
anywhere in client code, I do not see the expected user services information. I check my database and it is definitely there for the taking:
$ mongo localhost:27017
> db.users.find({})
> ... "services" : { "google" : { "accessToken" : ... } } ...
I'm curious what I'm missing. Should I explicitly define a publish function in order for the user services data to exist on the client?
The services property is intentionally hidden on the client side for security reasons. There are a couple of approaches here:
Suggestions
My preferred one would be to expose a Meteor method that returns the public keys and avatars you might need in the few places you'd need them.
On a successful login, you could record the data you need somewhere in the user object, but outside of the services property.
As you said, you could make a new publication which explicitly specifies which fields to retrieve and which ones to hide. You have to be careful what you publish, though.
Code Examples
Meteor methods:
// server
Meteor.methods({
  getProfilePicture() {
    const services = Meteor.user().services;
    // replace with the actual profile picture property
    return services.google && services.google.profilePicture;
  }
});

// client
Meteor.call('getProfilePicture', (err, profilePicture) => {
  console.log('profile picture url', profilePicture);
});
Update on successful user creation (you might want to have a login hook as well to reflect any avatar/picture changes in google):
// Configure what happens with profile data on user creation
Accounts.onCreateUser((options, user) => {
  if (!('profile' in options)) { options.profile = {}; }
  if (!('providers' in options.profile)) { options.profile.providers = {}; }

  // Define additional specific profile options here
  if (user.services.google) {
    options.profile.providers.google = {
      picture: user.services.google.picture
    };
  }

  user.profile = options.profile;
  return user;
});
Publish only select data...
// Server
Meteor.publish('userData', function () {
  if (this.userId) {
    return Meteor.users.find({ _id: this.userId }, {
      fields: { other: 1, things: 1 }
    });
  } else {
    this.ready();
  }
});

// Client
Meteor.subscribe('userData');
http://redux.js.org/docs/api/createStore.html
[preloadedState] (any): The initial state. You may optionally specify it to hydrate the state from the server in universal apps, or to restore a previously serialized user session.
I can successfully initialize store with the preloadedState parameter.
However there are times I need to change the structure of the state.
As an example, I initially had
{
  totalCount: 3,
  usedCount: 1
}
now I want to change it to
{
  totalCount: 3,
  unusedCount: 2
}
Then state stored in the first structure is no longer valid.
At the very least, I want to discard the old state and start afresh from the new initialState.
I'm storing the state on the server and using it as the preloadedState param.
Is there a way to discard the server-stored state when the state structure changes?
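For concreteness, the shape change described above amounts to a transform like this (migrateState is a hypothetical helper, named here only for illustration):

```javascript
// Hypothetical one-time transform from the old state shape to the new one:
// usedCount is replaced by unusedCount = totalCount - usedCount.
function migrateState(oldState) {
  const { totalCount, usedCount, ...rest } = oldState;
  return { ...rest, totalCount, unusedCount: totalCount - usedCount };
}

console.log(migrateState({ totalCount: 3, usedCount: 1 })); // { totalCount: 3, unusedCount: 2 }
```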
I'm sharing my solution.
I created a higher-order reducer (http://redux.js.org/docs/recipes/reducers/ReusingReducerLogic.html):
export function versionedReducer(reducerFunction) {
  return (state, action) => {
    const isInitializationCall = state === undefined
    if (isInitializationCall) {
      // let the wrapped reducer produce its initial state
      return reducerFunction(state, action)
    }
    const { version } = reducerFunction(undefined, { type: undefined })
    if (!version) {
      throw new Error('The wrapped reducer state should be versioned')
    }
    const shouldAcceptPreloadedState = state.version && state.version >= version
    if (!shouldAcceptPreloadedState) {
      state = undefined
    }
    return reducerFunction(state, action)
  }
}
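To show how the wrapper is meant to be used, here is a hypothetical counter reducer run through it (the wrapper is restated inline, with initialization delegating to the wrapped reducer, so the snippet runs on its own; counterReducer and its state are made up for illustration):

```javascript
// The higher-order reducer, restated so this snippet is standalone.
function versionedReducer(reducerFunction) {
  return (state, action) => {
    if (state === undefined) {
      return reducerFunction(state, action) // let the wrapped reducer build its initial state
    }
    const { version } = reducerFunction(undefined, { type: undefined })
    if (!version) {
      throw new Error('The wrapped reducer state should be versioned')
    }
    if (!(state.version && state.version >= version)) {
      state = undefined // discard stale preloaded state
    }
    return reducerFunction(state, action)
  }
}

// Hypothetical reducer whose state carries the current structure version.
const initialState = { version: 2, totalCount: 3, unusedCount: 3 }
function counterReducer(state = initialState, action) {
  switch (action.type) {
    case 'USE':
      return { ...state, unusedCount: state.unusedCount - 1 }
    default:
      return state
  }
}

const reducer = versionedReducer(counterReducer)

// Preloaded state in the old structure (version 1) is discarded:
console.log(reducer({ version: 1, totalCount: 3, usedCount: 1 }, { type: '@@INIT' }))
// → { version: 2, totalCount: 3, unusedCount: 3 }

// Preloaded state at the current version is kept as-is:
console.log(reducer({ version: 2, totalCount: 3, unusedCount: 2 }, { type: '@@INIT' }))
// → { version: 2, totalCount: 3, unusedCount: 2 }
```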