I am populating 3 tables from my APEX application:
Customer
Order
CustomerOrder
A record is first inserted into the Customer table, then into the Order table, and then a record is created in CustomerOrder, linking the first two together.
So there are 3 inserts, one after another:
Insert into Customer …
If cust_id is not null then
  Insert into Order …
  If order_id is not null then
    Insert into CustomerOrder
  End If
End If
But what if an issue occurs while the record is being inserted into CustomerOrder? The record in the Order table will be left not linked to any customer, orphaned.
Can this be prevented? Meaning if an error occurs anywhere in the code, can the whole thing be rolled back like with the transactions in SQL?
I wonder why you have CustomerOrder at all. Can an Order belong to more than one Customer? If not, it seems that you could simply have a Customer ID column in Order.
That aside, the answer to your question depends on how you have the application laid out. If you have one page where a user enters all the order information, including what customer the order belongs to; and that page calls a PL/SQL block that does multiple INSERTs; and you don't explicitly COMMIT within that PL/SQL block; then all of that takes place in a single transaction. Apex will commit that transaction if it completes without errors, or roll it back if not.
If you are splitting the data entry across multiple pages, then each page submit is going to be a separately committed transaction.
It makes sense to me that you would have a separate page for entering customer information. But I see no issue with committing the Customer record before entering the order information.
It wouldn't make sense to me to have one page to enter the order and then another page to map the order to a customer. If you are selecting the customer on the order entry page, and inserting into Order and CustomerOrder in one PL/SQL block, then you should not have any orphaned orders.
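Outside of APEX you have to manage the commit and rollback yourself; a minimal standalone sketch of the same single-transaction idea (shown with the python-oracledb driver purely to illustrate the commit-or-rollback semantics; connection details and any column names other than cust_id and order_id are placeholders) would look like this:

import oracledb

conn = oracledb.connect(user="app", password="secret", dsn="dbhost/orclpdb1")
try:
    cur = conn.cursor()
    cust_id = cur.var(int)
    cur.execute(
        "INSERT INTO Customer (name) VALUES (:1) RETURNING cust_id INTO :2",
        ["Acme Ltd", cust_id])
    order_id = cur.var(int)
    cur.execute(
        # Order is a reserved word in Oracle, hence the quoting
        'INSERT INTO "Order" (order_date) VALUES (SYSDATE) RETURNING order_id INTO :1',
        [order_id])
    cur.execute(
        "INSERT INTO CustomerOrder (cust_id, order_id) VALUES (:1, :2)",
        [cust_id.getvalue()[0], order_id.getvalue()[0]])
    conn.commit()      # all three rows become visible together
except oracledb.DatabaseError:
    conn.rollback()    # any failure undoes all three inserts, so no orphaned Order row
    raise

Inside APEX, that commit-or-rollback at the end is exactly what the engine does for you around the page process.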
Related
I am creating a leave tracker app where I want to store the user ID along with the from date and to date. I am using Amazon's DynamoDB as the database, and the user enters a leave through a custom command.
Eg: apply-leave from-date to-date
I want to avoid duplicate entries in the database. For example, if a user has already applied for a leave from 06-10-2019 to 10-10-2019 and applies for a leave between the same dates again, they should get a message saying that this already exists, and a new record should not be created.
However, a user can apply for multiple leaves and two users can take a leave between the same dates.
I tried using a conditional statement as follows:
table.put_item(
    Item={
        'leave_id': leave_id,
        'user_id': user_id,
        'from_date': from_date,
        'to_date': to_date,
    },
    ConditionExpression='attribute_not_exists(user_id) AND attribute_not_exists(from_date) AND attribute_not_exists(to_date)'
)
where leave_id is the partition key. However, this does not work and a new row is added every time, even when the dates are the same. I have looked through similar questions, but haven't been able to understand how to get this configured correctly.
Any ideas on how I should go about this, or if there is a different design that I should follow?
If you are calling your code with a leave_id that doesn't yet exist in the table, the item will always be inserted. If you call your code with a leave_id that already exists in your table, you should be getting an "An error occurred (ConditionalCheckFailedException) when calling the PutItem operation: The conditional request failed" error message.
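If you want to turn that failure into the "already exists" message for the user, a small sketch (assuming boto3, the fields from the question, and a condition on the key itself) is to catch the exception:

from botocore.exceptions import ClientError

try:
    table.put_item(
        Item={
            'leave_id': leave_id,
            'user_id': user_id,
            'from_date': from_date,
            'to_date': to_date,
        },
        ConditionExpression='attribute_not_exists(leave_id)'
    )
except ClientError as e:
    if e.response['Error']['Code'] == 'ConditionalCheckFailedException':
        print('A leave with this id already exists')  # report back to the user
    else:
        raise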
I have two suggestions:
If you don't want to change your table, you can create a secondary index with user_id as the partition key and then query the index for all the items where the given user has some from_date and to_date attributes.
Like this:
from boto3.dynamodb.conditions import Key, Attr

table.query(
    IndexName='user_id-index',
    KeyConditionExpression=Key('user_id').eq(user_id),
    FilterExpression=Attr('from_date').exists() & Attr('to_date').exists()
)
Then you will need to check for overlapping leave requests, etc. (e.g. a leave request that starts before the one that is already in place finishes). After deciding that the leave request is a valid one, you will call put_item.
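A rough sketch of that overlap check (assuming the user_id-index above and dates stored in a sortable format such as YYYY-MM-DD; the DD-MM-YYYY strings from the question would need converting first):

from boto3.dynamodb.conditions import Key

existing = table.query(
    IndexName='user_id-index',
    KeyConditionExpression=Key('user_id').eq(user_id)
)['Items']

# Two ranges overlap when each starts on or before the other one ends.
overlaps = any(
    item['from_date'] <= to_date and from_date <= item['to_date']
    for item in existing
)

if not overlaps:
    table.put_item(Item={
        'leave_id': leave_id,
        'user_id': user_id,
        'from_date': from_date,
        'to_date': to_date,
    })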
Another suggestion, and probably a better one, would be to create a composite primary key on your table, with user_id as the partition key and leave_id as the sort key. That way you could query for all leave requests from a particular user without the need to create a secondary index.
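A sketch of that layout (the table name and attribute types are assumptions):

import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource('dynamodb')
table = dynamodb.create_table(
    TableName='leave_requests',
    KeySchema=[
        {'AttributeName': 'user_id', 'KeyType': 'HASH'},    # partition key
        {'AttributeName': 'leave_id', 'KeyType': 'RANGE'},  # sort key
    ],
    AttributeDefinitions=[
        {'AttributeName': 'user_id', 'AttributeType': 'S'},
        {'AttributeName': 'leave_id', 'AttributeType': 'S'},
    ],
    BillingMode='PAY_PER_REQUEST',
)
table.wait_until_exists()

# All leave requests for one user, no secondary index needed:
response = table.query(KeyConditionExpression=Key('user_id').eq(user_id))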
I have a Page with a Table whose datasource is a relation, and it needs to be sorted based on fields from another model:
Page
  Datasource = Indicators
  Table
    Datasource = Indicators [one] : MetadataText [many] (relation)
The Table needs to be sorted based on a field from another Model called MetadataField, which has a one to many relation with MetadataText.
I have the datasource of MetadataField sorted, but the content in the Table appears in random order. When I first access the application, the Table is sorted by the order in which the records were loaded. After viewing some records, the sorting of the records changes and keeps changing.
I am using Google Drive tables.
You can easily sort related records by one of the fields that belongs to the related record itself, but only once (you'll receive those records already sorted from the server).
But it seems that you want to sort related records by their related record. App Maker will not be your friend in this case... but JavaScript will be! Since App Maker loads all related records, you can safely sort them on the client using JavaScript:
indicatorsDatasource.load(function() {
  indicatorsDatasource.items.forEach(function(indicator) {
    indicator.MetadataTexts.sort(function(a, b) {
      return /* here goes your sorting logic */;
    });
  });
});
It will work in O(n * m * log(m)) time if you have n Indicators on the page and every Indicator has m associated MetadataTexts. If you want to let users sort the related records by clicking the table's header, you'll need to implement that logic on your own.
So... all this hassle leads us to an alternative solution! What if we decouple the related records and introduce a separate datasource for them? Having that, you'll be able to use the full power of App Maker's tables (sorting/paging) with almost no effort. You can take a look at an implementation sample on the ViewProject page of the Project Tracker template.
I have a SQL Server table that contains two columns, AppID and GroupID. The table is populated from an asp listbox. An AppID can have many GroupIDs associated with it.
It works fine for adding Groups for each App, and when a user wants to edit a record I can populate the listbox with the already selected items.
What I want to know is: when a user edits the items in the listbox, they can deselect existing items and select new ones, so what is the best way to update the table in the database? Would I be better off deleting all the records for the AppID, or is there a better way?
There isn't likely to be more than 12 Groups linked to any one App.
EDIT
Sorry, I should have said that the table is a link table between the Apps table and the Groups table. The IDs in the link table are the primary keys from those tables.
TIA
Programmatically it would be easier to delete everything for the AppID and add it all back in. However, your performance would take a huge hit using this method.
So your best method from a performance perspective would be to split the operation into 2 parts:
find records in your link table that are no longer in your ListBox &
delete them
find records in your ListBox that aren't in the table & insert them
This will only work well if you keep track of the data to insert and delete as the user clicks on the ListBox, then apply the changes all together, as doing the inserts & deletes one record at a time would mean many more transactions, which would be slower.
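The diff itself is just two set differences; a quick sketch in Python with made-up IDs, purely to illustrate:

# GroupIDs currently stored in the link table for this AppID
existing_group_ids = {2, 5, 7, 9}
# GroupIDs currently selected in the ListBox
selected_group_ids = {2, 7, 11}

to_delete = existing_group_ids - selected_group_ids   # {5, 9} -> DELETE these rows
to_insert = selected_group_ids - existing_group_ids   # {11}   -> INSERT this row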
EDIT
A further performance improvement would be to turn some of the deletes & inserts into updates, as an update is typically faster than a delete + insert. Albeit this would add complexity to your code, so it comes down to what is more important.
As before keep track of the deletes & inserts as the user makes changes to the ListBox
Count how many deletes & inserts you have
Take the lowest number and that is how many delete + inserts you can convert into an update. For example, if you need to delete GroupID 123 & insert GroupID 456, then instead just update GroupID 123 to 456.
If you have leftover deletes, then perform those; otherwise handle the leftover inserts.
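Continuing the sketch above, pairing deletes with inserts gives you the rows to convert into updates (again just an illustration, not the only way to pair them):

deletes = sorted(to_delete)               # [5, 9]
inserts = sorted(to_insert)               # [11]
updates = list(zip(deletes, inserts))     # [(5, 11)] -> UPDATE GroupID 5 to 11
n = len(updates)                          # one delete + insert pair became an update
leftover_deletes = deletes[n:]            # [9] -> still a DELETE
leftover_inserts = inserts[n:]            # []  -> nothing left to INSERT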
If one user is accessing record 1 out of 10 records in a table, and at the same time a 2nd user tries to access that same record, the 2nd user should not be shown that record; instead he should be shown the 2nd record. Because the first user will be holding the record for some time to process and update it, until then this record should not be shown to any other user, even when a SELECT query is fired from the second user's application. Is this possible using a row lock? Please provide an example of how to implement ROWLOCK and HOLDLOCK and how to release the lock using row-level locking. Apart from this, if you have any other suggestions please share them.
I am using SQL Server 2005 with ASP.NET.
Babu.M
I don't know if a row lock would stop it being selected, but could you not use an audit table? For example, when user one gets access to the record, store the ID for that record in an audit table; then when user two tries to use a record, the application should check whether the primary key for the record is in the audit table. If it is, the second user does not gain access; if not, the second user gains access. Once a user has finished with the record, you can either delete the row in the audit table, or you could keep it but set a flag to say it is no longer in use; that way you could also see who changed which record and at what time, if you add a date-time stamp.
Again, when running the SELECT, just make it so that if the primary key is present in the audit table you don't select the record.
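A rough sketch of that check-out idea (the SQL is wrapped in pyodbc here purely for illustration; the RecordLock table and all the names are assumptions, and a primary key on RecordId is what really stops two users claiming the same row):

import pyodbc

conn = pyodbc.connect('DSN=MyDb')    # connection details assumed
record_id, user_name = 1, 'babu'     # example values
cur = conn.cursor()

# Try to claim the record; the INSERT does nothing if someone already holds it.
cur.execute("""
    INSERT INTO RecordLock (RecordId, LockedBy, LockedAt)
    SELECT ?, ?, GETDATE()
    WHERE NOT EXISTS (SELECT 1 FROM RecordLock WHERE RecordId = ?)
""", record_id, user_name, record_id)
claimed = cur.rowcount == 1
conn.commit()

# Other users only ever see records that are not currently claimed.
cur.execute("""
    SELECT TOP 1 r.*
    FROM Records r
    WHERE NOT EXISTS (SELECT 1 FROM RecordLock l WHERE l.RecordId = r.RecordId)
    ORDER BY r.RecordId
""")

Releasing the hold is then just deleting the RecordLock row (or clearing the in-use flag) as described above.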
Hope this helps
I am currently working on an ASP.NET MVC 3 project in which I have to keep a record of changes to fields marked with certain attributes. Example:
public class MyModel
{
    public String PropertyOne { get; set; }

    // Need to keep track of these properties
    [RequiresSupervisorKey]
    public String PropertyTwo { get; set; }
}
As soon as one of the fields is changed, it requires a supervisor to approve of these field changes.
Until the changes have been approved the record will be in a pending state, and I somehow need to keep the old record and the new record until such time!
What are the best practices for storing these records? Should I have 2 records in the table in the database, or should I have an audit table that stores this data until it has been approved?
Thank you.
I'd save them in one table too. Use a combined key to identify a unique row: a row ID (auto-increment) as the first part and a datetime as the second part of the combined key, while a 3rd column stores the state. This gives you versioning as well. To get the field for display, you select by ID, ordered by datetime descending, where the state is approved, limit 1.
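As a rough sketch of that select (SQL Server flavour, with assumed table and column names, and TOP 1 standing in for limit 1):

latest_approved_sql = """
    SELECT TOP 1 *
    FROM MyModelVersions
    WHERE RecordId = ?          -- first part of the combined key
      AND State = 'approved'    -- the state column
    ORDER BY ChangedAt DESC     -- datetime part of the combined key
"""

Hope this helps ;-)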
This is my .02 from other projects, but I would add a version or state column to the table and keep n records in the table. I don't know if it's possible in your system for the record to be changed by two different users with different changes, but in situations like this that is usually the case. An audit table is an acceptable solution, but in general I prefer to keep things in one table.
I would keep both records in the table (old and new) with an extra field for status, such as active, pending, delete, disapproved (or whatever statuses you think you need).
Then I would create a view that shows only the active records (used for most purposes) and one that shows only the pending records (used for the supervisor approval page).
I would create a trigger on the table to ensure only one record was active at a time. So if a supervisor changed a record from pending to active, it would take the old record and change it to the delete status. If a supervisor disapproved a change, it would go to the disapproved status.
To keep the table nimble (you indicate no need to permanently store the old statuses), I would have a job that runs at night to delete all records in the delete or disapproved status.
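A rough SQL sketch of that setup, kept as plain strings here (all object names are assumptions; the first UPDATE in the approval step is what the trigger described above would perform automatically):

active_view_sql = """
    CREATE VIEW ActiveRecords AS
        SELECT * FROM ModelRecords WHERE Status = 'active';
"""

pending_view_sql = """
    CREATE VIEW PendingRecords AS
        SELECT * FROM ModelRecords WHERE Status = 'pending';
"""

approve_sql = """
    -- retire the previous active version of this entity, then promote the pending row
    UPDATE ModelRecords SET Status = 'delete'
    WHERE EntityId = ? AND Status = 'active';

    UPDATE ModelRecords SET Status = 'active'
    WHERE RecordId = ?;
"""

# Nightly job to keep the table nimble
cleanup_sql = "DELETE FROM ModelRecords WHERE Status IN ('delete', 'disapproved');"

Here EntityId would identify the logical record across versions and RecordId the individual row; both are assumptions about the schema.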