I keep going back and forth about choosing DynamoDB or RDS for my project. I understand they are 2 completely different kinds of DB systems, but I am not sure which one would be a better fit for my app. My app alerts users of certain events that happen VERY infrequently.
For instance, an employee may trigger an alert saying that there is an active shooter in the building, so my app needs to get the cell phone numbers of everyone in the company from the database and then use those numbers to send text messages. I just discovered that DynamoDB's BatchGetItem has a limit of 100 items per request, which is a problem for me because I may have to retrieve 200 or 300 or even more phone numbers as quickly as possible.
In addition to this, the database would not be queried regularly. It would be queried rarely, when someone needs to update a user's profile information. Of course, it would be queried for users' cell phone numbers in an emergency, and I need this to return the results as fast as possible.
It kind of sounds like DynamoDB may be overkill, but I am not 100% sure. On the other hand, it seems like a PERFECT fit since it can query things really quickly, but the limit of 100 items per request just kills me.
To me, there isn't a clear answer in terms of what database system to choose. Based on this use case, what is the best DB option?
You should use AWS Pinpoint for that. Pinpoint has endpoints and segments.
An endpoint is a destination such as an email address or phone number; one person in the company can have multiple endpoints.
A segment is a filtered list of endpoints. For example, you can filter endpoints by a person's title or by company.
You create a campaign based on segments, so each person in the selected segments gets an email, an SMS, or both.
Regarding your example, you can create a DynamoDB trigger which will create/update/delete Pinpoint endpoints.
The AWS approach is not to scan DynamoDB to send group emails or SMS. Instead, the approach is to create a segment and then create campaigns.
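As a rough illustration of the trigger idea, here is a minimal sketch assuming a Lambda function subscribed to the users table's DynamoDB Stream and the AWS SDK v3 Pinpoint client; the application ID and attribute names are placeholders, not your actual schema:

// Hypothetical Lambda attached to the users table's DynamoDB Stream.
// Attribute names (userId, phoneNumber) are assumptions.
const { PinpointClient, UpdateEndpointCommand } = require("@aws-sdk/client-pinpoint");

const pinpoint = new PinpointClient({});
const APPLICATION_ID = "YOUR_PINPOINT_APP_ID"; // placeholder

exports.handler = async (event) => {
  for (const record of event.Records) {
    if (record.eventName === "REMOVE") continue; // deletion handling omitted for brevity
    const image = record.dynamodb.NewImage;
    const userId = image.userId.S;
    const phone = image.phoneNumber.S;

    // Create or update the SMS endpoint for this user.
    await pinpoint.send(new UpdateEndpointCommand({
      ApplicationId: APPLICATION_ID,
      EndpointId: `sms-${userId}`,
      EndpointRequest: {
        ChannelType: "SMS",
        Address: phone,
        User: { UserId: userId },
      },
    }));
  }
};

A campaign (or a direct message to the segment) then handles the actual SMS fan-out, instead of your code scanning the table.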
Background: I have a relational db background and have never built anything for DynamoDB that wasn't just used for fast writes with very few reads. I am trying to learn DynamoDB patterns by migrating one of my help desk apps from MySQL to DynamoDB.
The application is a fairly simple one from a data storage perspective. A user submits a request and that request generates 1 or more tickets.
Setup: I have screens where people see initial requests and that request's tickets and search views that allow support to query on a bunch of attributes of a ticket (last name of user, status of ticket, use case of ticket, phone number of user, dept of user). This design in a SQL db is pretty straightforward but in Dynamo, I'm really being thrown for a loop on how to structure primary/sort keys and secondary indexes (if necessary).
I created one collection for requests and one collection for tickets. Each request has an array of the ticket ids that belong to it. The ticket item has an attribute that stores the request id so that I can search that way. But what I am hung up on is how to incorporate searching on a ticket's or request's attributes without having to do a full scan?
I read about composite keys and perhaps creating a composite sort key (several attributes concatenated with a # separator) so that I can search on each of those fields directly without having to know the primary key (ticket id).
Question: How do you design DynamoDB collections/tables that require querying a lot of different attribute values without relying on the primary key?
This is typically something that DynamoDB is not good at, though that's not to say it cannot be done. DynamoDB's strength and speed come from having well-known access patterns and designing your schema for those patterns. In general, if you don't know what your users will search for, or there are many different possible queries, it's better to look at something like RDS or a native SQL DB. That being said, a possible direction is to duplicate the data into additional items keyed by each of the searchable fields. This can all be done in the same table.
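To make the "duplicate the data in the same table" idea concrete, here is a rough sketch assuming generic pk/sk key attributes and the AWS SDK v2 DocumentClient; the table name, key shapes, and fields are placeholders, not a recommended final design:

// Illustrative only: write the ticket once, plus one "lookup" item per
// searchable field, all in the same table (assumed keys: pk, sk).
const AWS = require("aws-sdk");
const docClient = new AWS.DynamoDB.DocumentClient();

async function putTicket(ticket) {
  const ticketKey = `TICKET#${ticket.id}`;

  // The main ticket item.
  await docClient.put({
    TableName: "HelpDesk", // assumed table name
    Item: { pk: ticketKey, sk: ticketKey, ...ticket },
  }).promise();

  // Duplicate lookup items so each field can be queried by pk directly.
  const lookups = [
    `LASTNAME#${ticket.lastName}`,
    `STATUS#${ticket.status}`,
    `PHONE#${ticket.phone}`,
  ];
  for (const pk of lookups) {
    await docClient.put({
      TableName: "HelpDesk",
      Item: { pk, sk: ticketKey, ticketId: ticket.id },
    }).promise();
  }
}

// Later, "find tickets for last name Smith" becomes a Query on pk = "LASTNAME#Smith".

A global secondary index per searchable field is the other common way to get the same effect without writing the extra items yourself.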
I am setting up a Serverless application for a system and I am wondering the following:
Say that my table handles Companies. Each Company can have Invoices, and each company has roughly 6,000-8,000 invoices. Say that I have 14 Companies; that results in roughly 112,000 items in my table.
Is it "okay" to handle it this way? I will only pay for each Get request I make, and I can query a lot of items in the same request.
I will not fetch every single item each time I write or get items.
So, is there a recommendation for the maximum number of items I should have in a table? I could combine some items, but I mainly want a general recommendation.
There is no practical limit to the number of items you can have in a table. How many items each invoice becomes depends on your application's access patterns. You need to ask: what data does your app need, when does it need that data, how large is the data, and how often is the item updated? For example, if all the data in one item comes in under the 1 KB WCU and 4 KB RCU sizes, you do not write to it often, and when you read it you need all of the data in the item, then put it all in one item. If the data is larger, or part of it gets written to more often, then split it up.
An example might be a package tracking app. You have the initial information about the package, size, weight, source address, destination address, etc. That could be a lot of data. When that package enters a sorting facility it is checked in. Do you want to update that entire item you already wrote? Or do you just write an item that has the same PK (item collection), but a different SK and then the info that it made it to the sorting facility? When it leaves the sorting facility, you want to write to the DB that it left, which truck it was on, etc. Same questions.
Now when you need to present the shipping information by tracking ID number (the PK), you can run a query against DynamoDB and get the entire item collection for that tracking ID number. That way you get all items with that ID, which works well because your app presents much of that information on the tracking web site for the customer.
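A rough sketch of that query, assuming the AWS SDK v2 DocumentClient and placeholder table/attribute names (PackageTracking, trackingId):

// Fetch the whole item collection for one tracking ID.
const AWS = require("aws-sdk");
const docClient = new AWS.DynamoDB.DocumentClient();

async function getTrackingHistory(trackingId) {
  const result = await docClient.query({
    TableName: "PackageTracking",               // assumed table name
    KeyConditionExpression: "trackingId = :id", // assumed PK attribute
    ExpressionAttributeValues: { ":id": trackingId },
  }).promise();
  return result.Items; // package info item plus one item per checkpoint
}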
So again, it really depends on the app and your access patterns, but you want to TRY to only read and write the items your app needs, when you need them, how you need them, and no more...within reason (there is such a thing as over-slicing your data). That is how, in my opinion, you will make a NoSQL database like DynamoDB the most performant and most cost effective.
DynamoDB won't even notice 100K items...
As mentioned by LifeOfPi, individual items must be less than 400 KB.
The question indicates a distinct lack of understanding of what/why/how to use DDB. I suggest you do some more learning. The AWS re:Invent videos around DDB are quite useful.
In a standard RDBMS, you need to know the structure from the beginning. Accessing that data is then very flexible.
DDB is the opposite: you need to understand how you'll need to access your data; the structure is less important. You should end up with a list of your access patterns mapped to key designs, something like the sketch below:
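(Purely illustrative, using the companies/invoices example from the question; the key shapes are assumptions, not a prescribed design.)

Access pattern                       Example key condition
Get a company's profile              pk = COMPANY#<companyId> AND sk = PROFILE
List all invoices for a company      pk = COMPANY#<companyId> AND begins_with(sk, INVOICE#)
Get a single invoice                 pk = COMPANY#<companyId> AND sk = INVOICE#<invoiceId>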
For 100K items, and for most applications, you may find Aurora Serverless to be an easier fit for your needs, especially if you have complicated searching and/or sorting needs.
I'm working on a chat client using the Firebase Realtime Database as the database. The way it currently works is that it saves a chat log between two people in a chat collection, with each entry keyed in the format <uid>-<uid>. This works great: it just takes your uid and the uid of the person you want to chat with and sorts them, so the key is always in a consistent format; then it checks whether that entry exists in the chat collection and, if so, adds to it. Otherwise it creates a new one.
This works awesome. I'm trying to think ahead, though, in case we want to be able to have multiple people talk together like in Slack. I could just add 3 or even 4 people's uids to the key, but eventually it's going to be insanely long. The limit on a Firebase key is 768 bytes, which is apparently somewhere between 500 and 700 characters. I doubt the key will ever get that long, but if we can figure out a more scalable solution now that won't require us to fix our data later, I'd rather do that.
I was thinking that each chat entry could have a participants array with the uids of all the users in that chat. Then if someone wants to chat, we would need to query all chat entries and check the array in each of them for the current user's uid and the uid of the person(s) they want to chat with. That doesn't seem very efficient though.
Any thoughts on which implementation is better / more scalable / performant? Or perhaps a suggestion for another implementation?
How about simply using a hash of the concatenated UIDs?
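A minimal sketch of the hashing idea, assuming a Node environment and the same sorted-and-joined form you already build (SHA-1 is just an example choice):

// Hash the sorted, concatenated UIDs to get a fixed-length room key.
const crypto = require("crypto");

function roomKeyFor(uids) {
  const joined = [...uids].sort().join("-");   // same canonical ordering you use today
  return crypto.createHash("sha1").update(joined).digest("hex"); // always 40 characters
}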
Alternatively:
Come up with your own unique room key, e.g. using a push ID.
Create a new top-level node, chatroom-keys, and store the concatenated UIDs as the value there:
chatroom-keys
push-id1: uid1-uid2-uid3
push-id2: uid1-uid2-uid3-uid4-uid5-uid6
push-id3: uid3-uid4-uid5-uid6-uid7-uid8-uid9-uid10
In this structure you can look up the room key for a set of participants by:
firebase.database().ref("chatroom-keys").orderByValue().equalTo("uid1-uid2-uid3")
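Creating such a room might look roughly like this, in the same Firebase JS style as above (the "chats" node name is an assumption):

// Create a room with a push ID and index its participant list under chatroom-keys.
function createRoom(uids) {
  const participants = [...uids].sort().join("-");          // canonical, sorted form
  const roomRef = firebase.database().ref("chats").push();  // push ID becomes the room key
  return firebase.database()
    .ref("chatroom-keys/" + roomRef.key)
    .set(participants)
    .then(() => roomRef.key);
}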
I need an optimal way to store a lot of individual fields in Firestore. Here is the problem:
I get JSON data from some API. It contains a list of users. I need to tell whether those users are active, i.e. have been online in the past n days.
I cannot query each user in the list from the API against Firestore, because there could be hundreds of thousands of users in that list, and therefore hundreds of thousands of queries and reads, which is way too expensive.
There is no way to use a list as a map for querying in Firestore as far as I know, so that's not an option.
What I initially did was have a Cloud Function go through and find all the active users maybe once every hour, and place them in the Firebase Realtime Database in the structure:
activeUsers{
uid1: true
uid2: true
uid3: true
etc...
}
and every time I need to check which users are active, I get all fields under activeUsers (which is constrained to a maximum of 100,000 fields, approximately 3-5 MB).
Now I was going to use that as my final mechanism, but I just realised that Firebase charges for the amount of bandwidth used, not the number of reads. Therefore it could get very expensive doing this over and over whenever a user makes this request. And I cannot query every single result from the Firebase database because, while it does not charge per read (I think), it would be very slow to carry out hundreds of thousands of queries.
Now I have decided to use Cloud Firestore as my final hope, since it charges primarily for the number of reads and writes as opposed to data downloaded and uploaded. I am going to use Cloud Functions again to check the active users every hour, and I'm going to try to figure out the best way to store that data within a few documents. I was thinking 10,000 fields per document with all the active users; then when a user needs to get the active users, they fetch all the documents (would be 10 if there are 100,000 total active users) and map those client side to filter the active users.
So I really have two questions. 1) If I do it this way, what is the best way to store that data in Firestore, and is it the way I suggested? And 2) is there an all-around better way to perform this check of active users against the list returned from the API? Have I got it all wrong?
You could use Firebase Storage to store all the users in a text file, then download that text file every time?
Well this is three years old, but I'll answer here.
What you have done is not efficient and not a good approach. What I would do is as follows:
Make a separate collection for all active users,
and store each active user's unique field, such as their ID, there.
Then query that collection. Update that collection when needed.
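A minimal sketch of that approach, assuming the Firestore JS (v8-style) API and an activeUsers collection whose document IDs are the user IDs; the collection and function names are assumptions:

// Hourly job (e.g. a scheduled Cloud Function) writes one small doc per active user.
const db = firebase.firestore();

function markActive(userId) {
  return db.collection("activeUsers").doc(userId).set({ updatedAt: Date.now() });
}

// Clients read only the activeUsers docs and check membership locally.
async function getActiveUserIds() {
  const snapshot = await db.collection("activeUsers").get();
  return new Set(snapshot.docs.map((doc) => doc.id));
}

// Checking the API list then becomes: apiUsers.filter(u => activeIds.has(u.id))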
We are building a conversation system that will support messages between 2 users (and eventually between 3+ users). Each conversation will have a collection of users who can participate/view the conversation as well as a collection of messages. The UI will display the most recent 10 messages in a specific conversation with the ability to "page" (progressive scrolling?) the messages to view messages further back in time.
The plan is to store conversations and the participants in MSSQL and then only store the messages (which represents the data that has the potential to grow very large) in DynamoDB. The message table would use the conversation ID as the hash key and the message CreateDate as the range key. The conversation ID could be anything at this point (integer, GUID, etc) to ensure an even message distribution across the partitions.
In order to avoid hot partitions, one suggestion is to create separate tables for time series data, because typically only the most recent data will be accessed. Would this lead to issues when we need to pull back older messages for a user as they scroll/page, since we would have to query across multiple tables to piece together a batch of messages?
Is there a different/better approach for storing time series data that may be infrequently accessed, but available quickly?
I guess we can assume that there are many "active" conversations in parallel, right? Meaning - we're not dealing with the case where all the traffic is regarding a single conversation (or a few).
If that's the case, and you're using a random number/GUID as your HASH key, your objects will be evenly spread across the partitions and, as far as I know, you shouldn't be afraid of skewness. Since CreateDate is only the RANGE key, all messages for the same conversation will be stored in the same partition (based on their ConversationID), so it actually doesn't matter whether you query for the latest 5 records or the earliest 5; in both cases it's a query using the index on CreateDate.
I wouldn't break the data into multiple tables. I don't see what benefit it gives you (considering the previous section) and it will make your administrative life a nightmare (just imagine changing throughput for all tables, or backing them up, or creating a CloudFormation template to create your whole environment).
I would be concerned with the number of messages that will be returned when you pull the history. I guess you'll implement that with a query command using the ConversationID as the HASH key and ordering results by CreateDate descending. In that case, I'd return only the first page of results (I think it returns up to 1MB of data, so depending on the average message length it might or might not be enough) and only fetch the next page if the user keeps scrolling. Otherwise, you might use a lot of your throughput on really long conversations, and anyway the client doesn't really want to get stuck for a long time waiting for megabytes of data to appear on screen.
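A rough sketch of that paging pattern, assuming the AWS SDK v2 DocumentClient and a Messages table keyed by ConversationID/CreateDate as described above (the table name and page size are arbitrary choices):

// Fetch one page of the newest messages, plus a cursor for the next page.
const AWS = require("aws-sdk");
const docClient = new AWS.DynamoDB.DocumentClient();

async function getMessagesPage(conversationId, exclusiveStartKey) {
  const result = await docClient.query({
    TableName: "Messages",
    KeyConditionExpression: "ConversationID = :cid",
    ExpressionAttributeValues: { ":cid": conversationId },
    ScanIndexForward: false,              // newest first (descending CreateDate)
    Limit: 10,                            // one UI page
    ExclusiveStartKey: exclusiveStartKey, // undefined for the first page
  }).promise();
  // Pass result.LastEvaluatedKey back in when the user scrolls further.
  return { messages: result.Items, nextPageKey: result.LastEvaluatedKey };
}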
Hope this helps