I have a single row in the database that counts total user logins.
I have tried to increment the number by loading the row and adding 1 to it.
I'm not sure about concurrency: after I tried the code below, the counter was increased by 1 and not by 2 as it "should" be (which matters if many users log in at the same time).
using (var db = new Database()) {
    db.Settings.FirstOrDefault(x => x.Name == "Logins").Counter++;

    using (var db2 = new Database()) {
        db2.Settings.FirstOrDefault(x => x.Name == "Logins").Counter++;
        db2.SaveChanges();
    }

    db.SaveChanges();
}
Why not keep a single table that stores the number of people who have logged in: increment the field when someone logs in successfully and decrement it when the user logs out. For example, for login:
_Users = context.Users.FirstOrDefault(aa => aa.UserName.ToUpper() == _UserName.ToUpper() && aa.MDesktop == true);
if (_Users != null)
{
    context.LogEntry.FirstOrDefault().Counter++;
    context.SaveChanges();
}
This is old, but it is still a relevant discussion for new EF developers, and it deserves an explanation.
The OP's example uses two different DbContexts; effectively, the OP has defined two different units of work, and importantly, neither of these is aware that the other exists at all.
Let's assume that the current value of the "Logins" setting is 5.
For the purposes of this walkthrough, let's save the two instances that are requested from Settings into variables outside the scope of the DB contexts in question:
Setting setting1 = null;
Setting setting2 = null;

using (var db = new Database()) {
    // DB: 5, Setting1: null, Setting2: null

    // Load the value of setting1 from the database
    setting1 = db.Settings.FirstOrDefault(x => x.Name == "Logins");
    // DB: 5, Setting1: 5, Setting2: null

    // Increment the value of setting1
    setting1.Counter++;
    // At this point no changes have been saved yet; the DB still holds the original value for "Logins"
    // DB: 5, Setting1: 6, Setting2: null

    // Create a new context called db2
    using (var db2 = new Database()) {
        // Load setting2 from the DB
        setting2 = db2.Settings.FirstOrDefault(x => x.Name == "Logins");
        // Right now setting2 still has a value of 5; the previous change has not yet been committed
        // DB: 5, Setting1: 6, Setting2: 5

        setting2.Counter++;
        // DB: 5, Setting1: 6, Setting2: 6

        // Save the value of setting2 back to the database
        db2.SaveChanges();
        // DB: 6, Setting1: 6, Setting2: 6
        // At this point setting1, setting2, and the DB all agree the value is 6.
    }

    // The outer context is only aware that we previously set the value of setting1 to 6,
    // so it issues an update to the DB
    db.SaveChanges();
    // Ultimately this update does not actually change anything; the DB already holds 6.
}
Entity Framework, Unit of Work, and Repository data access patterns all exhibit this behaviour. When you create a new DbContext, IRepository, or IUnitOfWork, it is created in isolation from any others that might exist at the same point in time; there is no difference between instantiating a new context in the same method, on a different thread, or even on an entirely different server. If you need to implement counters or incremental values, there is always a degree of uncertainty when we first cache the value of the field, then increment it, and later write that value back to the database.
To minimise the potential for conflict, read the record and save it back immediately afterwards; then, as a rule, always re-query the value of this setting before you use it.
You can call .SaveChanges() multiple times in your logic. In this example, simply saving before instantiating the second context, or at least before the second context loads the record from the DB, would have been enough to see the value incremented twice:
using (var db = new Database()) {
    db.Settings.FirstOrDefault(x => x.Name == "Logins").Counter++;
    db.SaveChanges(); // save it back as soon as we've made the change

    using (var db2 = new Database()) {
        db2.Settings.FirstOrDefault(x => x.Name == "Logins").Counter++;
        db2.SaveChanges();
    }

    db.SaveChanges();
}
Where possible, you will find the code simpler if you can avoid a schema where an incrementing or counter field is required; instead, you could turn the count logic into a query-based solution.
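For example, here is a minimal sketch of a query-based approach, assuming a hypothetical LoginAudit table and a matching LoginAudits set on the context: each successful login appends a row, and the database derives the total on demand, so concurrent logins never contend for a single counter row.

// Hypothetical audit entity - one row per successful login.
public class LoginAudit
{
    public int Id { get; set; }
    public string UserName { get; set; }
    public DateTime LoggedInAtUtc { get; set; }
}

// On successful login: inserts are append-only, so there is no
// shared row for concurrent logins to fight over.
using (var db = new Database())
{
    db.LoginAudits.Add(new LoginAudit { UserName = userName, LoggedInAtUtc = DateTime.UtcNow });
    db.SaveChanges();
}

// Whenever the total is needed, let the database count the rows.
using (var db = new Database())
{
    int totalLogins = db.LoginAudits.Count();
}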
Counters are of course a special case. You could always make direct SQL calls to the database, both for the read and for the increment, to ensure that we bypass any potential caching of the records through EF.
You could do this as a one liner to increment the value:
dbContext.Database.ExecuteSqlCommand("UPDATE Setting SET [Counter] = IsNull([Counter], 0) + 1 WHERE [Name] = 'Logins'");
Or if you want to inspect the new value:
int newCount = dbContext.Database.SqlQuery<int>(@"
    UPDATE Setting SET [Counter] = IsNull([Counter], 0) + 1
    OUTPUT inserted.[Counter]
    WHERE [Name] = 'Logins'").First();
If you need to get the current value, and know that it is the most up to date, then you can simply query it from any context in the same way:
int logins = dbContext.Database.SqlQuery<int>(@"
    SELECT [Counter] FROM Setting
    WHERE [Name] = 'Logins'").First();
I hope this sheds some light on why your code only incremented the value once. It's not a fault in EF, just something we need to be aware of: once EF has read values from the DB, they are potentially already stale or out of date. If optimistic concurrency is not appropriate for your use case, then you will need to think outside the box a little bit ;)
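As a footnote, here is a minimal sketch of what the optimistic-concurrency route can look like in EF6, assuming Setting carries a rowversion concurrency token (for example a [Timestamp] byte[] RowVersion property); treat the shape as an assumption rather than this answer's own code:

// requires: using System.Data.Entity.Infrastructure; (for DbUpdateConcurrencyException)
bool saved = false;
while (!saved)
{
    using (var db = new Database())
    {
        // Re-read the current value on every attempt
        var setting = db.Settings.First(x => x.Name == "Logins");
        setting.Counter++;
        try
        {
            db.SaveChanges();
            saved = true;
        }
        catch (DbUpdateConcurrencyException)
        {
            // Another context changed the row between our read and write;
            // dispose this context and retry against the fresh value.
        }
    }
}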
The easy approach?
Then I'd suggest using a manual transaction in EF Core (see the EF Core transaction docs).
Be sure to add a unique constraint of some sort, e.g. on (settings id + logins counter).
using (var transaction = _context.Database.BeginTransaction())
{
    try
    {
        var setting = _context.Settings.FirstOrDefault(x => x.Name == "Logins");
        setting.Counter += 1;
        await _context.SaveChangesAsync();
        transaction.Commit();
    }
    catch
    {
        transaction.Rollback();
    }
}
Should concurrency happen, the request will fail, because it would try to insert duplicate keys, which is not possible. I then HIGHLY recommend that you implement a retry pattern, to avoid people being unable to log in just because a number in your database didn't get updated.
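A minimal sketch of such a retry loop, reusing the names from the snippet above; the exact exception type that surfaces the conflict depends on your provider, so treat DbUpdateException here as an assumption:

const int maxAttempts = 3;
for (int attempt = 1; attempt <= maxAttempts; attempt++)
{
    try
    {
        using (var transaction = _context.Database.BeginTransaction())
        {
            var setting = _context.Settings.FirstOrDefault(x => x.Name == "Logins");
            setting.Counter += 1;
            await _context.SaveChangesAsync();
            transaction.Commit();
            break; // success - stop retrying
        }
    }
    catch (DbUpdateException) when (attempt < maxAttempts)
    {
        // The constraint rejected a conflicting write; reload the tracked
        // entities so the next attempt starts from fresh values.
        foreach (var entry in _context.ChangeTracker.Entries().ToList())
        {
            await entry.ReloadAsync();
        }
    }
}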
I recently upgraded my project to .NET Core 6 and now removing records from my lookup tables is not working. I have a Risk object that has a collection of Users. Removing users from the risk object no longer works. Any ideas what I'm doing wrong?
My lookup table is called RiskItemUser, and it has two columns, RiskItemId and UserId.
Code:
var postSavedRisk = _riskService.Queryable()
    .Include(c => c.AssignedTo)
    .Where(w => w.Id == riskitem.Id)
    .FirstOrDefault();

List<User> usersToRemove = postSavedRisk.AssignedTo
    .Where(c => userNamesToRemove.Contains(c.UserName))
    .ToList();

using (var db = new ApplicationDbContext())
{
    var postSavedAssignedTo = db.RiskItemUser
        .Where(w => w.RiskItemId == riskitem.Id)
        .ToList();

    foreach (var userToRemove in usersToRemove)
    {
        foreach (var riskAssignedTo in postSavedAssignedTo)
        {
            if (userToRemove.Id == riskAssignedTo.UserId)
                db.RiskItemUser.Remove(riskAssignedTo);

            await db.SaveChangesAsync().ConfigureAwait(false);
        }
    }
}
The code, as you show it, looks like it should work, although some parts are hidden, so it's hard to tell exactly what's wrong. But there's room for simplification, which should result in working code.
You want to remove the users whose names are specified by userNamesToRemove from the risk specified by riskitem.Id. Assuming that there's a navigation property RiskItemUser.User, removing this data can be done with essentially one statement:
db.RiskItemUser.RemoveRange(
    db.RiskItemUser.Where(ru => ru.RiskItemId == riskitem.Id
        && userNamesToRemove.Contains(ru.User.UserName)));
await db.SaveChangesAsync().ConfigureAwait(false);
You tagged EFC 6, but as of EFC 7.0, there's support for bulk delete (and update) functions, allowing for single-statement deletion of multiple database records:
db.RiskItemUser
    .Where(ru => ru.RiskItemId == riskitem.Id
        && userNamesToRemove.Contains(ru.User.UserName))
    .ExecuteDelete();
This will execute one delete statement, whereas the previous method will execute one statement per row.
Note that this bulk method is like executing raw SQL. There's no communication with EF's change tracker and EF can't coordinate the correct order of statements. I think the general advice should be to not mix these bulk methods with regular SaveChanges calls.
I have written an API in ASP.NET which uses Entity Framework 6.
Here is the code
cr = context.Responses.FirstOrDefault(s => s.RegistrationId ==registrationId);
if (cr == null)
{
    cr = new Responses()
    {
        Answer = answer,
        RegistrationId = registrationId,
        CreationTime = DateTime.Now
    };
    context.Responses.Add(cr);
}
else
{
    cr.Answer = answer;
}
context.SaveChanges();
Here is my result in the database (screenshot omitted): the same data is often inserted twice, with the same creation time. Why is this so? What is the best way to avoid these duplicate inserts?
First of all, it should be
cr = context.Responses.FirstOrDefault(s => s.RegistrationId == registrationId );
It is possible that this error originates from the UI. Suppose you are filling out responses via a form and somebody presses submit twice: you would then have two rows in the db representing the same response. The correct way to resolve this is to have the form (via JavaScript and such) generate the GUID, and to immediately disable the submit button after the click. Another way is to declare in the database that the combination of result columns is unique, so that no two "same" rows can exist by definition.
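For the database-side option, here is a sketch of what the unique constraint can look like using EF6's index attribute; the entity shape is inferred from the question, and the one-response-per-registration rule is an assumption:

using System.ComponentModel.DataAnnotations.Schema; // IndexAttribute (EF 6.1+)

public class Responses
{
    public int Id { get; set; }

    // A unique index means a second insert for the same registration
    // throws instead of silently creating a duplicate row.
    [Index("IX_Responses_RegistrationId", IsUnique = true)]
    public int RegistrationId { get; set; }

    public string Answer { get; set; }
    public DateTime CreationTime { get; set; }
}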
I'm using Azure Mobile App with Xamarin.Forms to create an offline capable mobile app.
My solution is based on https://adrianhall.github.io/develop-mobile-apps-with-csharp-and-azure/chapter3/client/
Here is the code that I use for offline sync :
public class AzureDataSource
{
    // Assumed fields, initialised elsewhere (not shown in the original snippet)
    private MobileServiceClient client;
    private MobileServiceSQLiteStore store;

    private async Task InitializeAsync()
    {
        // Short circuit - local database is already initialized
        if (client.SyncContext.IsInitialized)
        {
            return;
        }

        // Define the database schema
        store.DefineTable<ArrayElement>();
        store.DefineTable<InputAnswer>();
        // Same thing with 16 other tables
        ...

        // Actually create the store and update the schema
        await client.SyncContext.InitializeAsync(store, new MobileServiceSyncHandler());
    }

    public async Task SyncOfflineCacheAsync()
    {
        await InitializeAsync();

        // Check if authenticated
        if (client.CurrentUser != null)
        {
            // Push the Operations Queue to the mobile backend
            await client.SyncContext.PushAsync();

            // Pull each sync table
            var arrayTable = await GetTableAsync<ArrayElement>();
            await arrayTable.PullAsync();

            var inputAnswerInstanceTable = await GetTableAsync<InputAnswer>();
            await inputAnswerInstanceTable.PullAsync();
            // Same thing with 16 other tables
            ...
        }
    }

    public async Task<IGenericTable<T>> GetTableAsync<T>() where T : TableData
    {
        await InitializeAsync();
        return new AzureCloudTable<T>(client);
    }
}

public class AzureCloudTable<T>
{
    private readonly MobileServiceClient client;
    private readonly IMobileServiceSyncTable<T> table;

    public AzureCloudTable(MobileServiceClient client)
    {
        this.client = client;
        this.table = client.GetSyncTable<T>();
    }

    public async Task PullAsync()
    {
        // Query name used for incremental pull
        string queryName = $"incsync_{typeof(T).Name}";
        await table.PullAsync(queryName, table.CreateQuery());
    }
}
The problem is that syncing takes a lot of time, even when there isn't anything to pull (8-9 seconds on Android devices, and more than 25 seconds to pull the whole database).
I looked at Fiddler to see how long the Mobile Apps backend takes to respond, and it is about 50 milliseconds per request, so the problem doesn't seem to come from there.
Does anyone have the same trouble? Is there something that I'm doing wrong, or tips to improve my sync performance?
Our particular issue was linked to our database migration. Every row in the database had the same updatedAt value. We ran an SQL script to modify these so that they were all unique.
This fix was actually for some other issue we had, where not all rows were being returned for some unknown reason, but we also saw a substantial speed improvement.
Also, another weird fix that improved loading times was the following.
After we had pulled all of the data the first time (which, understandably, takes some time), we did an UpdateAsync() on one of the rows that were returned, and we did not push it afterwards.
We've come to understand that the way offline sync works is that it will pull anything that has a date newer than the most recent updatedAt it has seen. There was a small speed improvement associated with this.
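For illustration, the "touch one row" trick looked roughly like this; the table name is borrowed from the question, and the exact query calls are an approximation:

// Locally update a single pulled row without pushing it, so the local
// store's newest updatedAt moves forward.
var table = client.GetSyncTable<InputAnswer>();
var rows = await table.CreateQuery().Take(1).ToListAsync();
var first = rows.FirstOrDefault();
if (first != null)
{
    await table.UpdateAsync(first); // local-only update; we never push it
}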
Finally, the last thing we did to improve speed was to not fetch the data again if the view already had a cached copy. This may not work for your use case, though.
public List<Foo> fooList = new List<Foo>();

public async Task DisplayAllFooAsync()
{
    if (fooList.Count == 0)
        fooList = await SyncClass.GetAllFoo();

    foreach (var foo in fooList)
    {
        Console.WriteLine(foo.bar);
    }
}
Edit 20th March 2019:
With these improvements in place, we are still seeing very slow sync operations when used in the same way as described in the OP, even with the improvements listed in this answer.
I encourage all to share their solutions or ideas on how this speed can be improved.
One of the reasons for a slow Pull() is when more than ~10 rows get the same UpdatedAt value. This happens when you update rows in bulk, for example by running an SQL command.
One way to overcome this is to modify the default trigger on the tables. To ensure every row gets a unique UpdatedAt, we did something like this:
ALTER TRIGGER [dbo].[TR_dbo_Items_InsertUpdateDelete] ON [dbo].[TableName]
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    DECLARE @InsertedAndDeleted TABLE
    (
        Id NVARCHAR(128)
    );
    DECLARE @Count INT,
            @Id NVARCHAR(128);

    INSERT INTO @InsertedAndDeleted
    SELECT Id
    FROM inserted;

    INSERT INTO @InsertedAndDeleted
    SELECT Id
    FROM deleted
    WHERE Id NOT IN
    (
        SELECT Id
        FROM @InsertedAndDeleted
    );

    --select * from @InsertedAndDeleted;

    SELECT @Count = Count(*)
    FROM @InsertedAndDeleted;

    -- ************************ UpdatedAt ************************
    -- while loop
    WHILE @Count > 0
    BEGIN
        -- selecting
        SELECT TOP (1) @Id = Id
        FROM @InsertedAndDeleted;

        -- updating
        UPDATE [dbo].[TableName]
        SET UpdatedAt = Convert(DATETIMEOFFSET, DateAdd(MILLISECOND, @Count, SysUtcDateTime()))
        WHERE Id = @Id;

        -- deleting
        DELETE FROM @InsertedAndDeleted
        WHERE id = @Id;

        -- counter
        SET @Count = @Count - 1;
    END;
END;
I have this ASP.NET code and I was getting an error when running it. The error was:
Server: Msg 272, Level 16, State 1, Line 1 Cannot update a timestamp
column.
Here's the mapping for this table that I already have:
Property(x => x.Version)
    .HasColumnName(@"Version")
    .IsOptional()
    .HasColumnType("timestamp")
    .HasDatabaseGeneratedOption(System.ComponentModel.DataAnnotations.Schema.DatabaseGeneratedOption.Computed);
Note that I do have a version column in my table.
public async Task<IHttpActionResult> Put([FromBody] WordForm wordForm)
{
    // SampleSentences -> s
    var oldsObj = db.SampleSentences
        .Where(w => w.WordFormId == wordForm.WordFormId)
        .AsNoTracking()
        .ToList();
    var newsObj = wordForm.SampleSentences.ToList();

    // There is other code here to modify SampleSentences
    //
    //

    // db.WordForms.Attach(wordForm);
    // db.Entry(wordForm).State = EntityState.Modified;
    wordForm.StatusId = (int)EStatus.Saved;
    await db.SaveChangesAsync(User, DateTime.UtcNow);

    return Ok(wordForm);
}
I was able to fix the error by commenting out those two lines in the method. But could someone explain why I get the error if I don't comment them out? Should I not be able to Attach the wordForm and mark it as Modified?
Your table probably has a rowversion or timestamp field which is used for optimistic concurrency. rowversion fields can't be set or updated at all. They are a value that gets incremented automatically each time a row is modified.
To avoid the problem, mark your rowversion property with the [Timestamp] attribute:
[Timestamp]
public byte[] RowVersion { get; set; }
In fact, timestamp is the deprecated name of the type, which causes a bit of confusion.
From the docs:
The timestamp syntax is deprecated. This feature will be removed in a future version of Microsoft SQL Server. Avoid using this feature in new development work, and plan to modify applications that currently use this feature.
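If you prefer fluent configuration over the attribute, here is a sketch of the equivalent EF6 mapping; IsRowVersion() marks the column as a database-generated concurrency token and replaces the IsOptional/HasColumnType/Computed combination shown in the question:

// Fluent alternative to [Timestamp] (EF6); rowversion columns are
// non-nullable and maintained entirely by the database.
Property(x => x.Version)
    .HasColumnName("Version")
    .IsRowVersion();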
I have a page that needs to run a query against a large dataset very often. To ease the burden on the database, I've set up a cache that refreshes itself every 5 minutes.
The logic is:
When a call is made, check whether there is data in the cache. If there is, run the query on the cache. If not, start a task that fetches all rows from the database, while running a query on my repository to get just the data needed for that call. When all rows have been fetched, put them in the cache so they can be accessed on the next call. The problem is that I sometimes get: "There is already an open DataReader associated with this Command which must be closed first." I guess this is because it runs two queries against the same repository at the same time (one for all rows and one for the query). I've got MARS enabled in my connection string.
My code
public IQueryable<TrackDto> TrackDtos([FromUri] int[] Ids)
{
    if (HttpContext.Current.Cache["Tracks"] != null && ((IQueryable<TrackDto>)HttpContext.Current.Cache["Tracks"]).Any())
    {
        var trackDtos = Ids.Length > 0
            ? ((IQueryable<TrackDto>)HttpContext.Current.Cache["Tracks"]).Where(trackDto => Ids.Contains(trackDto.Id)).AsQueryable()
            : ((IQueryable<TrackDto>)HttpContext.Current.Cache["Tracks"]).AsQueryable();
        return trackDtos;
    }
    else
    {
        UpdateTrackDtoCache(DateTime.Today);
        var trackDtos = Ids.Length > 0
            ? WebRepository.TrackDtos.Where(trackDto => trackDto.Date == DateTime.Today && Ids.Contains(trackDto.Id)).AsQueryable()
            : WebRepository.TrackDtos.Where(trackDto => trackDto.Date == DateTime.Today).AsQueryable();
        return trackDtos;
    }
}

private IQueryable<TrackDto> MapTrackDtosFromDb(DateTime date)
{
    return WebRepository.TrackDtos.Where(tdto => tdto.Date == date.Date);
}

private void UpdateTrackDtoCache(DateTime date)
{
    if (CacheIsUpdating)
        return;

    CacheIsUpdating = true;
    var task = Task.Factory.StartNew(
        state =>
        {
            var context = (HttpContext)state;
            context.Cache.Insert("Tracks", MapTrackDtosFromDb(date), null, Cache.NoAbsoluteExpiration,
                new TimeSpan(0, 5, 0));
            CacheIsUpdating = false;
        },
        HttpContext.Current);
}
I believe you are running DML or DDL SQL queries using the same active connection, and MARS does not allow that. You can execute multiple select statements or bulk inserts, but if you run multiple update or delete statements, your SQL execution will throw this kind of error. Even if you run an update query while a select statement is still executing on the same connection, you will get this error. For more info, read this:
http://msdn.microsoft.com/en-us/library/h32h3abf(v=vs.110).aspx
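As an aside, note that the code above caches an unmaterialized IQueryable, so every later read of the cache re-opens a reader on the shared context while other queries are running. One possible mitigation, sketched here reusing the question's names, is to materialize the rows before caching them:

// Execute the query immediately so the DataReader is opened, drained,
// and closed inside this call; the cache then holds plain objects.
private List<TrackDto> MapTrackDtosFromDb(DateTime date)
{
    return WebRepository.TrackDtos
        .Where(tdto => tdto.Date == date.Date)
        .ToList();
}

Cached callers can then filter the in-memory list (for example via AsQueryable()) without touching the connection at all.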