I have written an API in ASP.NET which uses Entity Framework 6. Here is the code:
cr = context.Responses.FirstOrDefault(s => s.RegistrationId ==registrationId);
if (cr == null)
{
    cr = new Responses()
    {
        Answer = answer,
        RegistrationId = registrationId,
        CreationTime = DateTime.Now
    };
    context.Responses.Add(cr);
}
else
{
    cr.Answer = answer;
}
context.SaveChanges();
While doing the database inserts, it often inserts the same data twice with the same creation time. Why is this so? What is the best way to avoid these duplicate inserts?
First of all, it should be
cr = context.Responses.FirstOrDefault(s => s.RegistrationId == registrationId);
It is possible this error originates from the UI. Suppose you are filling out responses via a form, and somebody presses submit twice; then you would have two lines in the DB representing the same response. The correct way to resolve this is to have the form (via JavaScript and such) generate a GUID, and immediately disable the submit button after the click. Another way is to declare in the database that the combination of result columns is unique, so no two "same" lines can exist by definition.
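For reference, here is a rough sketch of the unique-constraint route in EF6. It assumes RegistrationId alone should be unique on Responses; the index name, key property, and exact uniqueness rule are assumptions, so adjust them to your real schema:

using System;
using System.ComponentModel.DataAnnotations.Schema; // IndexAttribute (EF 6.1+)
using System.Data.Entity.Infrastructure;            // DbUpdateException

public class Responses
{
    public int Id { get; set; }

    // One response per registration: the database itself rejects a second insert.
    [Index("IX_Responses_RegistrationId", IsUnique = true)]
    public int RegistrationId { get; set; }

    public string Answer { get; set; }
    public DateTime CreationTime { get; set; }
}

// When two requests race past the FirstOrDefault check, the losing insert now fails:
try
{
    context.SaveChanges();
}
catch (DbUpdateException)
{
    // The row already exists (duplicate key). Reload it and update instead, or ignore it.
}

With the index in place, a double submit turns into a handled exception (or an update) instead of a second row.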
I recently upgraded my project to .NET Core 6 and now removing records from my lookup tables is not working. I have a Risk object that has a collection of Users. Removing users from the risk object no longer works. Any ideas what I'm doing wrong?
My lookup table is called RiskItemUser, and it has two columns, RiskItemId and UserId.
Code:
var postSavedRisk = _riskService.Queryable().Include(c => c.AssignedTo).Where(w => w.Id == riskitem.Id).FirstOrDefault();
List<User> usersToRemove = postSavedRisk.AssignedTo.Where(c => userNamesToRemove.Contains(c.UserName)).ToList();

using (var db = new ApplicationDbContext())
{
    var postSavedAssginedTo = db.RiskItemUser
        .Where(w => w.RiskItemId == riskitem.Id)
        .ToList();

    foreach (var userToRemove in usersToRemove)
    {
        foreach (var riskAssignedTo in postSavedAssginedTo)
        {
            if (userToRemove.Id == riskAssignedTo.UserId)
                db.RiskItemUser.Remove(riskAssignedTo);

            await db.SaveChangesAsync().ConfigureAwait(false);
        }
    }
}
The code, as you show it, looks like it should work, although some parts are hidden. Therefore, it's hard to tell how to make it work. But there's room for simplification, which should result in working code.
You want to remove users whose names are specified by userNamesToRemove from a risk that's specified by riskitem.Id. Assuming that there's a navigation property RiskItemUser.User, removing these data could be done by essentially one line of code:
db.RiskItemUser.RemoveRange(
    db.RiskItemUser.Where(ru => ru.RiskItemId == riskitem.Id
                             && userNamesToRemove.Contains(ru.User.UserName)));
await db.SaveChangesAsync().ConfigureAwait(false);
You tagged EFC 6, but as of EFC 7.0, there's support for bulk delete (and update) functions, allowing for single-statement deletion of multiple database records:
db.RiskItemUser
    .Where(ru => ru.RiskItemId == riskitem.Id
              && userNamesToRemove.Contains(ru.User.UserName))
    .ExecuteDelete();
This will execute one delete statement, whereas the previous method will execute one statement per row.
Note that this bulk method is like executing raw SQL. There's no communication with EF's change tracker and EF can't coordinate the correct order of statements. I think the general advice should be to not mix these bulk methods with regular SaveChanges calls.
I have this function in my application. If the insert of Phrase fails then can someone tell me if the Audit entry still gets added? If that's the case then is there a way that I can package these into a single transaction that could be rolled back.
Also if it fails can I catch this and then still have the procedure exit with an exception?
[Route("Post")]
[ValidateModel]
public async Task<IHttpActionResult> Post([FromBody]Phrase phrase)
{
    phrase.StatusId = (int)EStatus.Saved;
    UpdateHepburn(phrase);
    db.Phrases.Add(phrase);

    var audit = new Audit()
    {
        Entity = (int)EEntity.Phrase,
        Action = (int)EAudit.Insert,
        Note = phrase.English,
        UserId = userId,
        Date = DateTime.UtcNow,
        Id = phrase.PhraseId
    };
    db.Audits.Add(audit);

    await db.SaveChangesAsync();
    return Ok(phrase);
}
I have this function in my application. If the insert of Phrase fails
then can someone tell me if the Audit entry still gets added?
You have written your code the correct way by calling await db.SaveChangesAsync(); only once, after making all your modifications on the DbContext.
The answer to your question is: no, the Audit will not be added if the Phrase insert fails.
Because you call await db.SaveChangesAsync(); after doing all your work with your entities, Entity Framework will generate all the required SQL queries and execute them in a single SQL transaction, which makes the whole set of queries an atomic operation against your database. If one of the generated queries (e.g. the Audit insert) fails, the transaction is rolled back: every modification made to your database is undone, and Entity Framework leaves your database in a consistent state.
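If you ever do need several SaveChangesAsync calls that must succeed or fail together, EF6 also lets you open the transaction yourself. A rough sketch (the try/catch shape is illustrative, not from the original post):

using (var transaction = db.Database.BeginTransaction())
{
    try
    {
        db.Phrases.Add(phrase);
        await db.SaveChangesAsync();   // first batch of changes

        db.Audits.Add(audit);
        await db.SaveChangesAsync();   // second batch of changes

        transaction.Commit();          // both inserts become visible together
    }
    catch
    {
        transaction.Rollback();        // neither insert is persisted
        throw;                         // let the Web API pipeline see the failure
    }
}

Catching the exception, rolling back, and re-throwing also covers the second part of the question: the action still exits with an exception.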
I have one row in the database to count total user logins.
I have tried to increase the number by getting the row and adding +1 to it.
I'm also not sure about concurrency. After I tried this, the counter was increased by 1 and not by 2 as it "should" be (if many users log in at the same time):
using(var db = new Database()) {
    db.Settings.FirstOrDefault(x => x.Name == "Logins").Counter++;

    using(var db2 = new Database()) {
        db2.Settings.FirstOrDefault(x => x.Name == "Logins").Counter++;
        db2.SaveChanges();
    }

    db.SaveChanges();
}
Why not make a single table for storing the number of people who have logged in? Increment the field when someone logs in successfully and decrease it when the user logs out. For example, for login:
_Users = context.Users.FirstOrDefault(aa => aa.UserName.ToUpper() == _UserName.ToUpper() && aa.MDesktop == true);
if (_Users != null)
{
    context.LogEntry.FirstOrDefault().Counter++;
    context.SaveChanges();
}
This is old, but it is still a relevant discussion for new EF developers and it deserves an explanation.
OP's example uses two different DbContexts; effectively, OP has defined two different units of work, and importantly, neither of these is aware that the other exists at all.
Let's assume that the current value of the "Logins" setting is 5.
For the purposes of this walkthrough, let's save the two instances that are requested from Settings into variables outside the scope of the DB contexts in question:
Setting setting1 = null;
Setting setting2 = null;

using(var db = new Database()) {
    // DB: 5, Setting1: null, Setting2: null

    // Load the value of setting1 from the database
    setting1 = db.Settings.FirstOrDefault(x => x.Name == "Logins");
    // DB: 5, Setting1: 5, Setting2: null

    // Increment the value of setting1
    setting1.Counter++;
    // at this point, no changes have been saved yet, the DB still holds the original value for "Logins"
    // DB: 5, Setting1: 6, Setting2: null

    // Create a new context called db2
    using(var db2 = new Database()) {
        // load setting2 from the DB
        setting2 = db2.Settings.FirstOrDefault(x => x.Name == "Logins");
        // right now setting2 still has a value of 5, the previous change was not yet committed
        // DB: 5, Setting1: 6, Setting2: 5

        setting2.Counter++;
        // DB: 5, Setting1: 6, Setting2: 6

        // Save the value of setting2 back to the database
        db2.SaveChanges();
        // DB: 6, Setting1: 6, Setting2: 6
        // At this point setting1, setting2, and the DB all agree the value is 6.
    }

    // The context is only aware that we previously set the value of setting1 to 6,
    // so it issues an update to the DB
    db.SaveChanges();
    // ultimately this update would not actually change anything.
}
Entity Framework, Unit of Work, and Repository data access patterns all exhibit this behaviour: when you create a new DbContext, IRepository, or IUnitOfWork, it is created in isolation from any others that might exist at the same point in time. There is no difference between instantiating a new context in the same method, on a different thread, or even executing on entirely different servers. If you need to implement counters or incremental values, there is always a degree of uncertainty when we first cache the value of the field, then increment the value, and later write that value back to the database.
To minimise the potential for conflict, read the record and save it back immediately afterwards, and as a rule always re-query the value of this setting before you use it.
You can call .SaveChanges() multiple times in your logic; in this example, simply saving before instantiating the second context, or at least before the second context loads the record from the DB, would have been enough to see the value incremented twice:
using(var db = new Database()) {
    db.Settings.FirstOrDefault(x => x.Name == "Logins").Counter++;
    db.SaveChanges(); // save it back as soon as we've made the change

    using(var db2 = new Database()) {
        db2.Settings.FirstOrDefault(x => x.Name == "Logins").Counter++;
        db2.SaveChanges();
    }

    db.SaveChanges();
}
Where possible, you will find the code simpler if you can avoid a schema where an incrementing or counter field is required; instead you could turn the count logic into a query-based solution, as sketched below.
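For example, here is a minimal sketch of what that could look like, assuming a hypothetical LoginAudit entity where one row is written per successful login (neither the table nor the entity comes from the original post):

// Write one row per successful login instead of mutating a shared counter.
db.LoginAudits.Add(new LoginAudit { UserName = userName, Date = DateTime.UtcNow });
db.SaveChanges();

// The total is derived on demand, so concurrent logins can never "lose" an increment.
int totalLogins = db.LoginAudits.Count();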
Counters are of course a special case; you could always make direct SQL calls to the database, both for reads and increments, to ensure that we bypass any potential caching of the records through EF.
You could do this as a one-liner to increment the value:
dbContext.Database.ExecuteSqlCommand("UPDATE Setting SET [Counter] = IsNull([Counter],0) + 1 WHERE [Name] = 'Logins'");
Or if you want to inspect the new value:
int newCount = dbContext.Database.SqlQuery<int>(@"
    UPDATE Setting SET [Counter] = IsNull([Counter],0) + 1
    OUTPUT inserted.[Counter]
    WHERE [Name] = 'Logins'").First();
If you need to get the current value, and know that it is the most up to date, then you can simply query it from any context in the same way:
int logins = dbContext.Database.SqlQuery<int>(@"
    SELECT [Counter] FROM Setting
    WHERE [Name] = 'Logins'").First();
I hope this sheds some light on why your code only incremented the value once. It's not a fault in EF, just something that we need to be aware of: once EF has read values from the DB, they are potentially already stale or out of date. If optimistic concurrency is not appropriate for your use case, then you will need to think outside of the box a little bit ;)
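For completeness, here is a rough sketch of what the optimistic-concurrency route could look like in EF6, assuming the Setting entity gets a [Timestamp] row-version column; the property name and the retry loop are illustrative, not part of the original answer:

// Assumed addition to the entity (EF treats a [Timestamp] byte[] as a concurrency token):
// [Timestamp]
// public byte[] RowVersion { get; set; }

bool saved = false;
while (!saved)
{
    using (var db = new Database())
    {
        var setting = db.Settings.First(x => x.Name == "Logins");
        setting.Counter++;
        try
        {
            db.SaveChanges();   // fails if another context updated the row in the meantime
            saved = true;
        }
        catch (DbUpdateConcurrencyException)
        {
            // Someone else incremented the counter first; loop and retry
            // against a freshly loaded value.
        }
    }
}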
The easy approach? Then I'd suggest using a manual transaction in EF Core (see the EF Core transaction docs).
Be sure to add a unique constraint of some sort, e.g. (settings id + logins counter):
using (var transaction = _context.Database.BeginTransaction())
{
    try
    {
        var totalLoginsSetting = _context.Settings.FirstOrDefault(x => x.Name == "Logins");
        totalLoginsSetting.Counter += 1;
        await _context.SaveChangesAsync();
        transaction.Commit();
    }
    catch
    {
        transaction.Rollback();
    }
}
Should concurrency happen, the request will fail, because it would try to insert duplicate keys, which is not possible. I'd then HIGHLY recommend implementing a retry pattern, so people are still able to actually log in even when a number in your database didn't get updated on the first attempt.
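A minimal sketch of such a retry loop, assuming the transactional increment above is wrapped in a hypothetical TryIncrementLoginCounterAsync method that returns false when it loses the race:

const int maxAttempts = 3;

for (int attempt = 1; attempt <= maxAttempts; attempt++)
{
    // Hypothetical helper: runs the transaction shown above and reports success/failure.
    if (await TryIncrementLoginCounterAsync())
        break;

    if (attempt == maxAttempts)
        throw new InvalidOperationException("Could not update the login counter after several attempts.");

    // Brief backoff so concurrent requests don't collide again immediately.
    await Task.Delay(TimeSpan.FromMilliseconds(50 * attempt));
}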
When we receive orders from the web it creates a sales id and stores it. But if I receive an order from the web at the same time in two instances, it creates two sales orders for the same web order. So how can I stop it?
I set the index for the web order number to Allow Duplicates: No, but it still doesn't work. Any suggestions?
(Added as an answer a bit late, 'cause I'm slow that way :))
Send a unique identifier like a GUID from the web, save it in SalesTable, and check in insert whether it already exists - or make a unique index for the field. But you might want to log these attempted duplicates, and it's easier to code that yourself in insert or validateWrite.
This is because the user presses the submit button several times. You need to track the number of clicks on the button. For this you can use JavaScript:
var submit = 0;
function checkIsRepeat() {
    var isValid = Page_ClientValidate();
    if (isValid) {
        if (++submit > 1) {
            alert('Your message here');
            return false;
        }
    }
    return isValid;
}
I have a requirement to show the search results on the JSP with a max count of 10, and it should have pagination to traverse back and forward.
DynamoDB has a LastEvaluatedKey, but it doesn't help to go back to the previous page, though I can move to the next result set with it.
Can anybody please help with this?
I am using Java Spring and DynamoDB as the stack.
To enable forward/backward navigation, all you need is to keep
the first key, which is the hash key + sort key of the first record of the previously returned page (null if you are about to query the first page),
and
the last key of the retrieved page, which is the hash key + sort key of the last record of the previously returned page.
Then, to navigate forward or backward, you need to pass these parameters in the query request:
Forward: last key as the ExclusiveStartKey, order = ascending
Backward: first key as the ExclusiveStartKey, order = descending
I have achieved this in a project in 2016. DynamoDB might provide some similar convenient APIs now, but I'm not sure as I haven't checked DynamoDB for a long time.
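A rough sketch of that request shape, shown with the low-level AWS SDK for .NET simply to stay consistent with the other examples in this thread; the table name, key names, and the variables customerId, goingForward, and startKey are made up for illustration:

using System.Collections.Generic;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;

var client = new AmazonDynamoDBClient();

var request = new QueryRequest
{
    TableName = "Orders",                        // assumed table
    KeyConditionExpression = "CustomerId = :cid",
    ExpressionAttributeValues = new Dictionary<string, AttributeValue>
    {
        [":cid"] = new AttributeValue { S = customerId }
    },
    Limit = 10,                                  // page size
    ScanIndexForward = goingForward,             // true = next page, false = previous page
    ExclusiveStartKey = startKey                 // last key (forward) or first key (backward); null for page one
};

var response = await client.QueryAsync(request);
// response.Items is the page. When ScanIndexForward is false the items arrive in
// reverse order and usually need reversing before display. Remember the first and
// last key of the page so the caller can build the next forward/backward requests.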
Building on Ray's answer, here's what I did. sortId is the sort key.
// query a page of items and create prev and next cursor
// cursor idea from this article: https://hackernoon.com/guys-were-doing-pagination-wrong-f6c18a91b232
async function queryCursor(cursor) {
    const cursor1 = JSON.parse(JSON.stringify(cursor));
    const pageResult = await queryPage(cursor1.params, cursor1.pageItems);

    const result = {
        Items: pageResult.Items,
        Count: pageResult.Count
    };

    if (cursor.params.ScanIndexForward) {
        if (pageResult.LastEvaluatedKey) {
            result.nextCursor = JSON.parse(JSON.stringify(cursor));
            result.nextCursor.params.ExclusiveStartKey = pageResult.LastEvaluatedKey;
        }
        if (cursor.params.ExclusiveStartKey) {
            result.prevCursor = JSON.parse(JSON.stringify(cursor));
            result.prevCursor.params.ScanIndexForward = !cursor.params.ScanIndexForward;
            result.prevCursor.params.ExclusiveStartKey.sortId = pageResult.Items[0].sortId;
        }
    } else {
        if (pageResult.LastEvaluatedKey) {
            result.prevCursor = JSON.parse(JSON.stringify(cursor));
            result.prevCursor.params.ExclusiveStartKey = pageResult.LastEvaluatedKey;
        }
        if (cursor.params.ExclusiveStartKey) {
            result.nextCursor = JSON.parse(JSON.stringify(cursor));
            result.nextCursor.params.ScanIndexForward = !cursor.params.ScanIndexForward;
            result.nextCursor.params.ExclusiveStartKey.sortId = pageResult.Items[0].sortId;
        }
    }

    return result;
}
You will have to keep a record of the previous key in a session var, query string, or something similar you can access later, then execute the query using that key when you want to go backwards. Dynamo does not keep track of that for you.
For a simple stateless forward and reverse navigation with dynamodb check out my answer to a similar question: https://stackoverflow.com/a/64179187/93451.
In summary it returns the reverse navigation history in each response, allowing the user to explicitly move forward or back until either end.
GET /accounts -> first page
GET /accounts?next=A3r0ijKJ8 -> next page
GET /accounts?prev=R4tY69kUI -> previous page