We are experiencing something very curious with Entity Framework that we are having a hard time debugging and understanding.
We have a service that debounces index updates, so that we update the index once all changes have been made rather than for every change. The code looks something like this:
var messages = await _debouncingDbContext.DebounceMessages
    .Where(message => message.ElapsedTime <= now)
    .Take(_internalDebouncingOptionsMonitor.CurrentValue.ReturnLimit)
    .ToListAsync(stoppingToken)
    .ConfigureAwait(false);

if (!messages.Any())
    return;

var tasks = messages.Select(m =>
        _messageSession.Send(m.ReplyTo, new HandleDebouncedMessage { Message = m.OriginalMessage }))
    .ToList();

try
{
    await Task.WhenAll(tasks).ConfigureAwait(false);
}
catch (Exception e)
{
    // Exception handling
}

_debouncingDbContext.DebounceMessages.RemoveRange(messages);
await _debouncingDbContext.SaveChangesAsync().ConfigureAwait(false);
While this is running, we have another thread that can update the ElapsedTime on the entries. This happens if a new event comes in before the debounce timer expires.
What we experience is that the await _debouncingDbContext.SaveChangesAsync().ConfigureAwait(false); line throws a DbUpdateConcurrencyException.
The result is that the entries are not deleted and are consequently queried out over and over again by the initial query. This leads to exponential growth in our index updates, where the same few items are updated over and over again. Eventually, the system dies.
The only fix we have for this right now, is restarting the service. Once that is done, the next iteration picks up the troublesome messages just fine and everything works again.
I am having a hard time understanding how this can happen. It appears that the DbContext thinks the entries are deleted when in fact they are not; somehow the DbContext gets decoupled from the database state.
I cannot understand how this can happen, when the only thing potentially being changed on the database entry itself is a timestamp, not the actual ID by which it is deleted.
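For illustration, this is the direction we could take to absorb the failure instead of restarting the service (a sketch, not what we currently run; in EF Core, ReloadAsync re-reads the row and detaches the entry when the row no longer exists, and EntityState comes from Microsoft.EntityFrameworkCore):

try
{
    _debouncingDbContext.DebounceMessages.RemoveRange(messages);
    await _debouncingDbContext.SaveChangesAsync(stoppingToken).ConfigureAwait(false);
}
catch (DbUpdateConcurrencyException ex)
{
    // Re-sync each failed entry with the database: if the row still exists,
    // mark it Deleted again so the retry removes it; if it is already gone,
    // ReloadAsync detaches it and there is nothing left to delete.
    foreach (var entry in ex.Entries)
    {
        await entry.ReloadAsync(stoppingToken).ConfigureAwait(false);
        if (entry.State != EntityState.Detached)
            entry.State = EntityState.Deleted;
    }

    await _debouncingDbContext.SaveChangesAsync(stoppingToken).ConfigureAwait(false);
}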
EDIT 18th of November.
Adding a little more context.
The database model looks like this:
public class DebounceMessageWrapper
{
    [DatabaseGenerated(DatabaseGeneratedOption.Identity)]
    public int Id { get; set; }

    public string Key { get; set; }
    public string OriginalMessage { get; set; }
    public string ReplyTo { get; set; }
    public DateTimeOffset ElapsedTime { get; set; }
}
The only things configured on the DbContext are two indexes:
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    base.OnModelCreating(modelBuilder);

    modelBuilder.Entity<DebounceMessageWrapper>()
        .HasIndex(m => m.Key);

    modelBuilder.Entity<DebounceMessageWrapper>()
        .HasIndex(m => m.ElapsedTime);
}
The flow is quite simple.
We have a .NET hosted service extending the abstract BackgroundService class from .NET Core. It runs a while (!stoppingToken.IsCancellationRequested) loop containing the initial code above plus a Task.Delay(Y) in each iteration. All the above code does is query the messages whose ElapsedTime exceeds the allowed timespan; for each of those messages it sends the message back to its ReplyTo and then deletes the corresponding database entries. It is this delete that fails.
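In outline, the hosted service looks like this (ProcessElapsedMessagesAsync and _delay are placeholder names for the code and the Task.Delay(Y) described above):

public class DebounceIndexService : BackgroundService
{
    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            // The query/send/delete code shown at the top of the question.
            await ProcessElapsedMessagesAsync(stoppingToken).ConfigureAwait(false);

            // The Task.Delay(Y) between iterations.
            await Task.Delay(_delay, stoppingToken).ConfigureAwait(false);
        }
    }
}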
Secondly, we have a MessageHandler listening for events on RabbitMQ. This spawns a thread per physical core on the host. Each of these threads receives messages and looks up entries based on the Key on the database model. If the message already exists, its ElapsedTime is updated; if not, the message is inserted into the database.
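The handler's upsert is roughly this (a sketch, since that code isn't shown above; incoming, dbContext and _debounceWindow are placeholder names):

var existing = await dbContext.DebounceMessages
    .FirstOrDefaultAsync(m => m.Key == incoming.Key, cancellationToken)
    .ConfigureAwait(false);

if (existing != null)
{
    // A new event arrived before the timer expired: push the deadline forward.
    existing.ElapsedTime = DateTimeOffset.UtcNow + _debounceWindow;
}
else
{
    dbContext.DebounceMessages.Add(new DebounceMessageWrapper
    {
        Key = incoming.Key,
        OriginalMessage = incoming.OriginalMessage,
        ReplyTo = incoming.ReplyTo,
        ElapsedTime = DateTimeOffset.UtcNow + _debounceWindow
    });
}

await dbContext.SaveChangesAsync(cancellationToken).ConfigureAwait(false);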
This gives us X+1 threads, where X equals the number of physical cores on the host, that can potentially alter the database. Each of these threads uses its own scope and thereby a unique instance of the DbContext.
The idea of this service is, as mentioned, to debounce index updates. The nature of our system makes these index updates come in bursts, and there is no reason to update the index for each change if it can be done with one index update once all the changes are finished.
Related
I have a Blazor Server app. This app is connected to a SQL database and is by now relatively complex. Since the main focus is usability, we ran into some problems when accessing the database directly (components not updating correctly, etc.).
Therefore, I am trying to create a StateService which basically acts as a sort of cache. Data is stored in it and components can access it without any loading times. During my research I ran into some questions the documentation couldn't answer for me.
The Problem
It should be possible for all components to always have the latest state of the data. This means that clients need to be automatically notified of any changes and automatically refresh their state. It should also be possible to serve ~1,000 concurrent users without having to upgrade to a high-end server (I know that this is very vague).
Possible Solutions
Singleton State
I basically have a service which holds the data as a property and exposes an OnChange event. Whenever the data property is set, the event is triggered. Components use this service to display data. When I add data to the database, the data is automatically loaded back into the state. I registered this service as a singleton, so there is only one instance during the server's runtime.
public class SharedStateService
{
public event Action OnChange;
private ICollection<MyData>? myData;
public ICollection<MyData>? MyData
{
get => this.myData;
set
{
this.myData = value;
this.OnChange?.Invoke();
}
}
}
public class MyDataService
{
private readonly SharedStateService sharedStateService;
private readonly TestDbContext context;
public MyDataService(TestDbContext context, SharedStateService sharedService)
{
this.context = context;
this.sharedStateService = sharedService;
}
public async Task<bool> DeleteData(MyData data)
{
try
{
this.context.Set<MyData>().Remove(data);
await this.context.SaveChangesAsync();
}
catch (Exception)
{
return false;
}
await this.ReloadData();
return true;
}
public async Task ReloadData()
{
this.sharedStateService.MyData =
await this.context.Set<MyData>().ToListAsync();
}
}
In my views, it is now possible to subscribe to the OnChange event and freely use the MyData property.
<table class="table">
<thead>
<tr>
<!-- ... -->
</tr>
</thead>
<tbody>
@foreach (var data in SharedStateService.MyData)
{
<tr>
<!-- ... -->
</tr>
}
</tbody>
</table>
@code {
public void Dispose()
{
SharedStateService.OnChange -= Refresh;
}
protected override void OnInitialized()
{
SharedStateService.OnChange += Refresh;
}
private async void Refresh()
{
await InvokeAsync(this.StateHasChanged);
}
}
The problem I see with this approach is that the entire data set is constantly held on the server. Might there be any problems? Am I overthinking it? What could the possible risks of such an approach be?
Singleton Event
It is similar to the singleton state, but I do not store the data anywhere. Instead of the state, I have a service which only provides an event that can be subscribed to. This service is, again, added as a singleton.
public class RefreshService
{
public event Action OnChange;
public void Refresh()
{
OnChange?.Invoke();
}
}
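For completeness, this is the registration I mean (a sketch; the exact place depends on your template, Program.cs or Startup.ConfigureServices):

// RefreshService is shared by all circuits; MyDataService stays scoped
// because it depends on the DbContext.
services.AddSingleton<RefreshService>();
services.AddScoped<MyDataService>();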
This service is then injected into the data providers and called when a change occurs.
I extend MyDataService with a new method.
public async Task<ICollection<MyData>> GetAll()
{
return await this.context.Set<MyData>().ToListAsync();
}
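The mutating methods would then raise the event instead of reloading a shared state. Sketched for DeleteData (assuming MyDataService now also gets the RefreshService injected as this.refreshService):

public async Task<bool> DeleteData(MyData data)
{
    try
    {
        this.context.Set<MyData>().Remove(data);
        await this.context.SaveChangesAsync();
    }
    catch (Exception)
    {
        return false;
    }

    // Tell subscribers to re-query instead of pushing data at them.
    this.refreshService.Refresh();
    return true;
}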
Afterwards, in my view, I add a property and adjust the Refresh method to load the data into this local property.
private async void Refresh()
{
this.MyData = await MyDataService.GetAll();
await InvokeAsync(this.StateHasChanged);
}
This approach is very similar to the first one, but I don't need to store the data constantly. Is this approach easier on the server? Could this lead to wrong data being displayed, since the data is held redundantly?
I know that this is a long read, but maybe someone knows which approach is generally preferable over the other.
Listening for data changes is not a bad idea; the only thing I would focus on is the way you delete and change data. First, I would use EFCore.BulkExtensions just for performance: if you will be updating/deleting data all the time, it is worth doing, mainly because your database will grow as time goes by.
What I think is the proper solution is the second one, the Singleton Event: it lets you avoid the possible error the first one could cause. Think of this scenario: you have 1,000 users, and most of them are probably interacting with the data at the same time. If you delete and then refresh, the shared data could become inconsistent; but if you raise a change event, you can use it as a flag that the data needs to be refreshed before the user interacts with it.
Finally, I think you could use the BulkInsertOrUpdateOrDelete method: if a row doesn't exist (by its id) it is inserted, if it has changed it is updated, and if an existing id is no longer in your list it is deleted, all with one optimized method from the bulk extensions. And in case you can't add another library, you can write your own add/update/delete methods!
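A sketch of that call with the EFCore.BulkExtensions package (MyData and the context come from the question; SyncData is an illustrative name):

using EFCore.BulkExtensions;

public async Task SyncData(List<MyData> allRows)
{
    // Rows with unknown ids are inserted, changed rows are updated, and rows
    // whose ids are missing from allRows are deleted, in one bulk operation.
    await this.context.BulkInsertOrUpdateOrDeleteAsync(allRows);
}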
I'm working on a subscription-based system developed using ASP.NET Core 3 MVC and SQL Server. The payment is handled externally and is not linked to the application in any way. All I need to do in the application is check the user's status, which is managed by an admin. When a user registers, the status will be Pending; when the admin approves the user, the approval date will be saved in the database and the status will change to Approved.
The tricky thing for me is that I want the application to wait 365 days before it changes the user status to Expired. I've no idea where to start with this part and would appreciate your help.
The simplest way I can think of, without using hosted services, is to add a check on user login that subtracts the approval date from today's date and checks whether the difference is equal to or greater than 365 days.
Something like this:
if ((DateTime.Now - user.ApprovalDate).TotalDays >= 365)
{
//Mark the user as expired...
}
You really shouldn't trigger a background thread from your main application code.
The correct way to do this is with a background worker process that has been designed specifically for this scenario.
ASP.NET Core 3 has a project type specifically for this; it will keep running in the background and can be used for all of your maintenance tasks. You can create a worker process using dotnet new worker -o YourProjectName or by selecting Worker Service from the project selection window in Visual Studio.
Within that service you can then create a routine that determines whether a user has expired. Encapsulate this logic in a class that makes testing easy.
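A sketch of what such a worker could look like (AppDbContext, the Users set, and the status values are assumptions, since the question doesn't show the data model):

public class SubscriptionExpiryWorker : BackgroundService
{
    private readonly IServiceScopeFactory _scopeFactory;

    public SubscriptionExpiryWorker(IServiceScopeFactory scopeFactory)
        => _scopeFactory = scopeFactory;

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            using (var scope = _scopeFactory.CreateScope())
            {
                var db = scope.ServiceProvider.GetRequiredService<AppDbContext>();
                var cutoff = DateTime.Today.AddDays(-365);

                // Expire every approved user whose approval date is 365+ days old.
                var expired = await db.Users
                    .Where(u => u.Status == "Approved" && u.ApprovalDate <= cutoff)
                    .ToListAsync(stoppingToken);

                expired.ForEach(u => u.Status = "Expired");
                await db.SaveChangesAsync(stoppingToken);
            }

            // Checking once a day is enough for a 365-day window.
            await Task.Delay(TimeSpan.FromHours(24), stoppingToken);
        }
    }
}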
A working repl has been posted here.
using System;

public class MainClass {
    public static void Main(string[] args) {
        var user = new User() { ApprovedDate = DateTime.Today };
        Console.WriteLine(UserHelper.IsUserExpired(user));
        // this should be false

        user = new User() { ApprovedDate = DateTime.Today.AddDays(-180) };
        Console.WriteLine(UserHelper.IsUserExpired(user));
        // this should be false

        user = new User() { ApprovedDate = DateTime.Today.AddDays(-365) };
        Console.WriteLine(UserHelper.IsUserExpired(user));
        // this should be true

        user = new User() { ApprovedDate = DateTime.Today.AddDays(-366) };
        Console.WriteLine(UserHelper.IsUserExpired(user));
        // this should also be true
    }
}

public class User {
    public DateTime ApprovedDate { get; set; }
}

public static class UserHelper
{
    public static bool IsUserExpired(User user) {
        // ... add all the respective logic in here that you need, for example:
        return (DateTime.Today - user.ApprovedDate.Date).TotalDays >= 365;
    }
}
Maybe I'm missing something really simple here, but I'm gonna ask anyway...
I am using Xamarin forms (.NET Standard project), MVVMLight, Realm DB and ZXing Barcode Scanner.
I have a RealmObject like so...
public class Participant : RealmObject
{
public string FirstName {get; set;}
public string LastName {get; set;}
public string Email {get; set;}
public string RegistrationCode {get; set;}
//More properties skipped out for brevity
}
I have the corresponding viewmodel as follows:
public class ParticipantViewModel
{
Realm RealmInstance;
public ParticipantViewModel()
{
RealmInstance = Realms.Realm.GetInstance();
RefreshParticipants();
}
private async Task RefreshParticipants()
{
//I have code here that GETS the list of Participants from an API and saves to the device.
//I am using the above-defined RealmInstance to save to IQueryable<Participant> Participants
}
}
All the above works fine and I have no issues with this. In the same viewmodel, I am also able to fire up the ZXing Scanner and scan a bar code representing a RegistrationCode.
This, in turn, populates the below property (also in the viewmodel) once scanned...
private ZXing.Result result;
public ZXing.Result Result
{
get { return result; }
set { Set(() => Result, ref result, value); }
}
and calls the below method (wired up via the ScanResultCommand) to fetch the participant bearing the scanned RegistrationCode.
private async Task ScanResults()
{
if (Result != null && !String.IsNullOrWhiteSpace(Result.Text))
{
string regCode = Result.Text;
await CloseScanner();
SelectedParticipant = Participants.FirstOrDefault(p => p.RegistrationCode.Equals(regCode, StringComparison.OrdinalIgnoreCase));
if (SelectedParticipant != null)
{
//Show details for the scanned Participant with regCode
}
else
{
//Display not found message
}
}
}
I keep getting the below error....
System.Exception: Realm accessed from incorrect thread.
generated by the line below....
SelectedParticipant = Participants.FirstOrDefault(p => p.RegistrationCode.Equals(regCode, StringComparison.OrdinalIgnoreCase));
I'm not sure how this is an incorrect thread, but any ideas on how I can fetch the scanned participant, either from the already populated IQueryable or directly from Realm, would be greatly appreciated.
Thanks
Yes, you're getting a realm instance in the constructor, and then using it from an async task (or thread). You can only access a realm from the thread in which you obtained the reference. Since you're only using a default instance, you should be able to simply obtain a local reference within the function (or thread) where you use it. Try using
Realm LocalInstance = Realms.Realm.GetInstance();
at the top of the function and use that. You'll need to recreate the Participants query so it uses the same instance as its source, too. This will be the case wherever you use async tasks (threads), so either change them all to get hold of the default instance on entry or reduce the number of threads that access the Realm.
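Applied to the ScanResults method from the question, it would look something like this (a sketch; only the Realm access changes):

private async Task ScanResults()
{
    if (Result != null && !String.IsNullOrWhiteSpace(Result.Text))
    {
        string regCode = Result.Text;
        await CloseScanner();

        // Open the Realm on the thread that actually runs this continuation,
        // and query through it rather than the constructor-created instance.
        var localInstance = Realms.Realm.GetInstance();
        SelectedParticipant = localInstance.All<Participant>()
            .Where(p => p.RegistrationCode.Equals(regCode, StringComparison.OrdinalIgnoreCase))
            .FirstOrDefault();

        // ...show details / not-found message as before
    }
}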
Incidentally, I'm surprised you don't get a similar access error from within RefreshParticipants(); maybe you're not actually accessing data via RealmInstance from there.
To set the context:
We have 4 tables in Cassandra. Out of those 4, one is a data table and the rest are search tables (let's assume DATA, SEARCH1, SEARCH2 and SEARCH3 are the tables).
We have an initial-load requirement of up to 15k rows in one request for the DATA table, and hence for the search tables, to keep them in sync.
We do it in batch inserts, with each batch containing 4 queries (one to each table) to keep consistency.
But for every batch we need to read the data first. If it exists, we update only the DATA table's lastUpdatedDate column; otherwise we insert into all 4 tables.
And below is a code snippet of how we are doing it:
public List<Item> loadData(List<Item> items) {
    CountDownLatch latch = new CountDownLatch(items.size());
    ForkJoinPool pool = new ForkJoinPool(6);

    pool.submit(() -> items.parallelStream().forEach(item -> {
        BatchStatement batch = prepareBatchForCreateOrUpdate(item);
        batch.setConsistencyLevel(ConsistencyLevel.LOCAL_ONE);
        ResultSetFuture future = getSession().executeAsync(batch);
        Futures.addCallback(future, new AsyncCallBack(latch), pool);
    }));

    try {
        latch.await();
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }

    // TODO Consider what to do with the failed items: retry, or remove them from the returned list?
    return items;
}

private BatchStatement prepareBatchForCreateOrUpdate(Item item) {
    BatchStatement batch = new BatchStatement();
    Item existingItem = getExisting(item); // synchronous read

    if (null != existingItem) {
        existingItem.setLastUpdatedDateTime(new Timestamp(System.currentTimeMillis()));
        batch.add(existingItem);
        return batch;
    }

    batch.add(item);
    batch.add(convertItemToSearch1(item));
    batch.add(convertItemToSearch2(item));
    batch.add(convertItemToSearch3(item));
    return batch;
}

class AsyncCallBack implements FutureCallback<ResultSet> {
    private final CountDownLatch latch;

    AsyncCallBack(CountDownLatch latch) {
        this.latch = latch;
    }

    // Count down the latch on both success and failure so the thread waiting
    // on latch.await() knows when all the async calls have completed.
    @Override
    public void onSuccess(ResultSet result) {
        latch.countDown();
    }

    @Override
    public void onFailure(Throwable t) {
        LOGGER.warn("Failed async query execution, Cause:{}:{}", t.getCause(), t.getMessage());
        latch.countDown();
    }
}
The execution takes about 1.5 to 2 minutes for 15k items, considering the network round trip between the application and the Cassandra cluster (both reside on the same DNS but in different pods on Kubernetes).
We have ideas to make the read call getExisting(item) async as well, but handling the failure cases is becoming complex.
Is there a better approach to data loads for Cassandra (considering only async writes through the DataStax Enterprise Java driver)?
First thing: batches in Cassandra are a different thing than in relational DBs, and by using them you're putting more load on the cluster.
Regarding making everything async, I thought about the following possibility:
make a query to the DB, obtain a Future, and add a listener to it that will be executed when the query finishes (override the onSuccess);
from that method, you can schedule the execution of the next actions based on the result obtained from Cassandra.
One thing that you need to make sure to check is that you don't issue too many simultaneous requests at the same time. In version 3 of the protocol, you can have up to 32k in-flight requests per connection, but in your case you may issue up to 60k (4 x 15k) requests. I'm using a wrapper around the Session class to limit the number of in-flight requests.
I developed and released an application on the market long ago. Now some users have reported crashes when keeping the application open for a long time. I have identified the reason for the crash: I am using a class with static variables and methods to store data (getters and setters). Now I want to replace the static approach with something else. From my study I got the following suggestions:
Shared preferences: I have to store more than 40 variables (strings, ints, and JSON arrays and objects), so I think using shared preferences is not a good idea.
SQLite: there are more than 40 fields and I don't need to keep more than one value at a time. I am getting the values for the fields from different activities (name from one activity, age from another activity, etc.), so using SQLite is also not a good idea, I think.
Application class: now I am thinking about using an application class to store this data. Will it lose the data, like static variables do, after holding the app for a long time?
For now I have replaced the static variables with an application class. Please let me know whether application data also becomes null after a long time.
It may be useful to somebody.
Even though I didn't get a solution for my problem, I found the reason why we shouldn't use application objects to hold data. Please check the link below:
Don't use application object to store data
Normally, if you have to keep something in case your Activity gets destroyed, you save all these things in onSaveInstanceState and restore them in onCreate or in onRestoreInstanceState:
public class MyActivity extends Activity {
    int myVariable;
    final String ARG_MY_VAR = "myvar";

    @Override
    public void onCreate(Bundle savedState) {
        super.onCreate(savedState);
        if (savedState != null) {
            myVariable = savedState.getInt(ARG_MY_VAR);
        } else {
            myVariable = someDefaultValue;
        }
    }

    @Override
    public void onSaveInstanceState(Bundle outState) {
        outState.putInt(ARG_MY_VAR, myVariable);
        super.onSaveInstanceState(outState);
    }
}
Here, if the Android OS destroys your Activity, onSaveInstanceState will be called and your important variable will be saved. Then, when the user returns to your app, the Android OS restores the activity and your variable will be correctly initialized.
This does not happen when you call finish() yourself, though; it happens only when Android destroys your activity for some reason (which is quite likely to happen at any time while your app is in the background).
First you should override the onSaveInstanceState and onRestoreInstanceState methods in your activity:
@Override
protected void onSaveInstanceState(Bundle outState) {
    outState.putString("myVariable", myVariable);
    // Store all your data inside the bundle
}

@Override
protected void onRestoreInstanceState(Bundle savedInstanceState) {
    if (savedInstanceState != null) {
        myVariable = savedInstanceState.getString("myVariable");
        // Restore all the variables
    }
}
Maybe try using a static variable inside the Application class?
public class YourApplication extends Application
{
    private static YourApplication singleton;
    public String str;

    @Override
    public void onCreate()
    {
        super.onCreate();
        // Without this assignment getInstance() would always return null.
        singleton = this;
    }

    public static YourApplication getInstance()
    {
        return singleton;
    }
}
And use the variable via:
YourApplication.getInstance().str = ...; // set variable
... = YourApplication.getInstance().str; // get variable
This variable will stay alive from when your app starts until all of its services and activities stop. It does not survive a crash of your app.