:)
I recently started working with the AWS platform, and I'm having a hard time with one thing.
Basically I want to link two different DynamoDB tables (like a Company table and an Orders table), where one company can have many orders connected to it. The first thing I did was store a list of order IDs in the company's table, but the problem is that I cannot index an array, so it's awkward to find the orders for a given company, and pagination can't be done.
I can't figure out a better solution, so if any of you more experienced developers can point me toward a way to store this association, I would be very grateful.
Thanks
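For what it's worth, the usual DynamoDB answer is to flip the relationship around: instead of keeping a list of order IDs inside the company item, store each order as its own item under the company's partition key (the single-table / adjacency-list pattern mentioned in the related question below). Query then gives you pagination for free. A minimal sketch with boto3; the table name, key names, and attributes are invented for illustration:

import boto3
from boto3.dynamodb.conditions import Key

# "AppTable", "PK", and "SK" are hypothetical names for this sketch
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("AppTable")

# One item per order, stored under the company's partition key
table.put_item(Item={
    "PK": "COMPANY#42",
    "SK": "ORDER#1001",
    "total": 250,
})

# Fetch one page (20 items) of that company's orders
resp = table.query(
    KeyConditionExpression=Key("PK").eq("COMPANY#42")
        & Key("SK").begins_with("ORDER#"),
    Limit=20,
)
orders = resp["Items"]

# DynamoDB returns LastEvaluatedKey when there are more results;
# pass it back as ExclusiveStartKey to fetch the next page
next_key = resp.get("LastEvaluatedKey")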
Related
I've been doing some research on DynamoDB, and a common scenario I see is a sort key created in a form like ORDER#{orderId}, such as in the image below from the re:Invent 2019 talk about DynamoDB.
I have some confusion around this: where would a "unique" order ID like this come from? Everything I read says you won't be able to efficiently check all the orders in the system to guarantee the uniqueness of a new orderId. In a real scenario, would it be fine to just generate a UUID and use that here?
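For what it's worth, generating a UUID is the standard approach here. A version 4 UUID is effectively 122 random bits, so the chance of two independently generated IDs colliding is negligible, and no central uniqueness check is needed. A one-liner sketch in Python:

import uuid

order_id = str(uuid.uuid4())      # e.g. "0b0f7e1c-..."
sort_key = "ORDER#" + order_id    # unique without scanning existing orders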
This is a typical situation for 1-to-many relationships: a chat-group iOS app, with a group table recording all the group-chat information, like group ID, creation time, thread title, etc.
To record the participants I would, of course, assume there is another 1:m table. So I was rather surprised to see the app just added another column called "participants", with each participant separated by a delimiter (':' to be exact). The problems with that are quite obvious: it mixes application logic into what should be SQL data (e.g. there is no way, in SQL alone, to see how many groups a specific user is in), it violates 1NF/2NF, etc.
But they said they understood all those points, and countered:
Since this is a mobile app, you always access the SQLite tables through Objective-C code; you never use SQL alone, so it's not a "big deal" to mix them together.
Participants don't change often and are normally set when a group is created. With 100 participants, they would rather insert 1 record into the group table than 100 records into a separate group-participants table.
The participant data is only used when someone wants to see who is in the chat group (several taps into the menu) and when someone joins or leaves, which is assumed to happen rarely.
So my question is: in this particular situation, what advantage would I gain by using a separate 1:m table?
----- update -----
In addition to the answer I got, Renzo kindly pointed me to this discussion, which is also very helpful!
It's hard to respond to "is this design better/worse" style questions without understanding the full context. I'm going to make some assumptions based on your question.
You appear to be building a mobile application, supporting "many to many" user chat. I'm picturing something like Slack.
Your application design is using the SQLite database for local storage.
Your local SQLite database on the phone is some kind of subset of the overall application data: like a cache, only showing the data for the current user.
If all that is true, the question really comes down to style/maintainability on the one hand, and performance and scalability on the other.
From a "style" point of view, storing the data in a comma-separated value in a column is ugly. A new developer who joins the project, with a background in "regular" database design will consider it at best a hack. On the other hand, iOS developers may consider it perfectly normal.
From a performance point of view, it's probably not worth arguing about: parsing the delimited string is probably comparable in cost to reading the extra rows from the database.
From a scalability point of view, you may have a problem. If the application design ever needs to capture the order in which users joined the chat, or capture some kind of status (active/asleep, for instance), or provide a bit of history (user x exited at 21:20), you will almost certainly end up redesigning the database along the lines sketched below.
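To make that concrete, here is a rough sketch of what such a redesign might look like; the table and column names are invented, and Python's sqlite3 module just stands in for whatever data layer the app uses:

import sqlite3

conn = sqlite3.connect("chat.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS groups (
    group_id   INTEGER PRIMARY KEY,
    title      TEXT,
    created_at TEXT
);
CREATE TABLE IF NOT EXISTS group_participants (
    group_id  INTEGER REFERENCES groups(group_id),
    user_id   INTEGER,
    joined_at TEXT,   -- join order and history come for free
    status    TEXT,   -- e.g. 'active' or 'asleep'
    PRIMARY KEY (group_id, user_id)
);
""")

# The query the delimited column cannot answer in SQL alone:
# how many groups is user 42 in?
count = conn.execute(
    "SELECT COUNT(*) FROM group_participants WHERE user_id = ?", (42,)
).fetchone()[0]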
I am developing an app which presents a feed of posts and allows users to vote on these posts.
I want to prevent users from voting multiple times on a single post. To do that, I want to store a list of the IDs of posts already voted on, so that I can check it each time the user tries to vote.
What's the most efficient way of storing these post IDs if there's a chance of the user voting on up to thousands of posts within a year?
SQLite, Core Data, a plist, or NSUserDefaults?
Since you would presumably also like to know how many people voted, I would save this to a server (using SQLite there to store it).
Saving this only on the user's device seems redundant.
If you do want to store it locally, I would advise Core Data.
It is too much information for NSUserDefaults or plists; I don't know why, but they just don't seem like a good idea here, and Core Data is essentially a friendlier layer over SQLite (for Swift usage).
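Whichever store is chosen, the duplicate-vote check itself is easy to get right if the votes live in one table with a composite primary key. A minimal sketch using Python's sqlite3 (table and column names are illustrative):

import sqlite3

conn = sqlite3.connect("votes.db")
conn.execute("""
CREATE TABLE IF NOT EXISTS votes (
    user_id INTEGER NOT NULL,
    post_id INTEGER NOT NULL,
    PRIMARY KEY (user_id, post_id)  -- one vote per user per post
)
""")

def try_vote(user_id, post_id):
    try:
        with conn:  # commits on success, rolls back on error
            conn.execute(
                "INSERT INTO votes (user_id, post_id) VALUES (?, ?)",
                (user_id, post_id),
            )
        return True   # vote recorded
    except sqlite3.IntegrityError:
        return False  # this user already voted on this post

Because the primary key enforces uniqueness, there is no need to load the whole list of voted IDs into memory to check each new vote.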
I'm trying to build a Drupal site in which users can input records containing data about "customers", "employees" and "sales".
I would like to be able to create a form (or forms) that takes data about a sale/customer/employee and associates it with the record of a customer/employee (who made the sale)/sale.
I would also like to be able to display lists of sales, customers, or employees, where clicking one record opens a page displaying all the related data.
I'm new to development and am searching around like a headless chicken, lol. I was thinking of using content types for sales/employees/customers, with an individual node for each record, and then using something like Views to display filtered lists, but I am unsure if this is the best way to structure it (maybe I should use separate custom tables, or a separate database, and a custom module to fetch the data?). It would also be nice if some fields could populate other fields based on their input, and if some fields could offer a sort of autocomplete by grabbing data from other records, or is that asking way too much?
Thank you for any suggestions you might be able to give me.
I, for one, would certainly prefer using a separate custom database and leaving the Drupal database to its own devices; if you ever need to upgrade the site to a newer version of Drupal, it helps if you haven't modified it. Also consider using Webform (http://drupal.org/project/webform), as it makes development easier, both in components and hooks.
I'm looking at accepting a project that would require me to clean up an existing e-commerce website. It's been relatively successful and has over 100,000 individual products, loaded both by the client and its publishers.
The site wasn't originally designed for this many products and has become fairly disorganized.
So, the client has asked me to look at a more robust search option: filterable and so forth. I completely agree it needs to be improved, but after looking at the database, I can tell that there are dozens and dozens of categories and not everything is labeled correctly, etc.
Is there any database management software that could help me clean up 100,000 entries quickly? Make categories consistent, fix uppercase/lowercase problems, etc.?
Are there any companies out there that I can source just this particular part of the project to?
It's a massive amount of data entry. If I spent 2 minutes per product, that's over 3,300 hours, more than a year of full-time work, just to complete the database cleanup. I either need to get it down to a matter of seconds per product or find a company that specializes in this type of work.
I don't even know what to search for on Google.
Thanks guys!
--
Thanks everyone for your ideas! I have a lot of options now, so I feel a lot more comfortable heading into this project. Right now I think the direction we will go is to build a tool that allows the client to hire data-entry people who can update the data as necessary. Then I will work as a consultant, taking care of any UPDATE ... WHERE type fixes as necessary.
Thanks again!
If there are inconsistencies like you are describing, it sounds like the problem may be more an issue of a bad data model (i.e. lack of normalization) than just dirty data. If good normalization were in place, cleaning up categories would be as simple as updating a single record per category; but if the category name is stored as text instead of a foreign key, you will most likely need to perform a series of UPDATE ... WHERE statements to clean up the text.
You may want to look into an ETL (extract, transform, load) tool that can help with bulk data transformation. I'm not familiar with ETL tools for MySQL, but I'm sure they exist. SQL Server has a built-in service called SQL Server Integration Services (SSIS) that can extract data from an existing data source, perform bulk changes or transformations, and then reload the data into a destination database. Tools like this may help speed up the process of standardizing capitalization and punctuation, changing categories, etc.
Even so, don't overlook the possibility that the data model may need tweaking to help prevent this type of situation in the future.
Edit: Wikipedia has a list of open-source ETL products that you may want to investigate.
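As a toy illustration of the kind of bulk transform an ETL tool (or a short script) automates, here is a sketch that collapses case-variant duplicate categories; the table and column names are invented, and sqlite3 merely stands in for the site's real database:

import sqlite3

conn = sqlite3.connect("shop.db")

# Pick one canonical spelling per lowercased category name,
# e.g. 'Game', 'game', and 'GAME' all become 'Game'
rows = conn.execute("SELECT DISTINCT category FROM products").fetchall()
canonical = {}
for (name,) in rows:
    canonical.setdefault(name.strip().lower(), name.strip().title())

for key, fixed in canonical.items():
    conn.execute(
        "UPDATE products SET category = ? WHERE LOWER(TRIM(category)) = ?",
        (fixed, key),
    )
conn.commit()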
In any case, you'll probably need to do more than "clean the data", which means you'll need to build new normalized tables. So start there: build a new database that is fully normalized, and import the data "as is", with all the duplicate categories, etc.
For example, the new tables:

Items
    ItemID        int identity/auto number
    ItemName      string
    CategoryID    int
    ...

Categories
    CategoryID    int identity/auto number
    CategoryName  string
    ...
Import the bad data into the new system:

Items
    ItemID  ItemName  CategoryID
    1       thing A   1
    2       thing B   2
    3       thing C   3
    4       thing D   1

Categories
    CategoryID  CategoryName
    1           Game
    2           food
    3           games
Now you can consolidate the data using the primary keys:
-- fold the duplicate category "games" (ID 3) into "Game" (ID 1)
UPDATE Items
SET CategoryID = 1
WHERE CategoryID = 3;

-- then remove the now-unused duplicate category
DELETE FROM Categories
WHERE CategoryID = 3;
You might just write an application where the customer can do the consolidation themselves: let them select the duplicates on a screen and merge them into a chosen parent category, and have the application issue the merge SQL from above.
If there is a need for a clean cut-over date, create an application that generates a series of "map" tables, where you store CategoryNameOld="games" and CategoryNameNew="Game", and use these when you convert/load the bad data into the new system's tables, as in the sketch below.
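A sketch of that map-table idea, again using Python's sqlite3 as a stand-in ("StagedItems" is an invented name for wherever the raw import lands):

import sqlite3

conn = sqlite3.connect("shop.db")

# One row per bad spelling, paired with its replacement
conn.executescript("""
CREATE TABLE IF NOT EXISTS CategoryMap (
    CategoryNameOld TEXT PRIMARY KEY,
    CategoryNameNew TEXT NOT NULL
);
INSERT OR IGNORE INTO CategoryMap VALUES ('games', 'Game');
""")

# Apply the map while loading the old data into the new tables
conn.execute("""
    UPDATE StagedItems
    SET CategoryName = (
        SELECT CategoryNameNew FROM CategoryMap
        WHERE CategoryNameOld = StagedItems.CategoryName
    )
    WHERE CategoryName IN (SELECT CategoryNameOld FROM CategoryMap)
""")
conn.commit()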
I would implement the new search system or whatever, and build them a tool that lets them easily go through and clean up the listings, re-categorize, etc. This task requires domain knowledge, so they're the best ones to do it.
Do some number crunching so they can prioritize the list and clean it in order of importance.
Keep in mind that one of your options is to build a quick-and-dirty interface that somebody can use to edit records, hire half a dozen data-entry people from a temp agency, spend two days training them, and let them go to town.