.NET Core DbContext blocks subsequent requests - .net-core

We have a web application written in .NET Core (currently v2.2), with Angular as the frontend.
If I make an AJAX call to one route in the backend which in turn opens a DbContext to perform a query, all subsequent AJAX calls to any other route are held up until the query of the first controller route is done. (No, it's not a DB lock in the SQL server; the queries hit different tables.)
Example of the code in the first route (which, for the purpose of the example, takes 20 seconds):
public IActionResult GetBusinessesByNaceAndAmount(int take)
{
    using (ConsumentContext consumentContext = new ConsumentContext())
    {
        var data = consumentContext.Businesses
            .AsNoTracking()
            .Where(b => b.Established_date != null)
            .GroupBy(b => new { Code = b.Business_code.Substring(0, 2) })
            .Select(b => new
            {
                BusinessName = b.First().Business_code.Substring(0, 2),
                Businesses = b.Where(bl => bl.Established_date != null)
                    .OrderBy(bl => bl.Established_date)
                    .Select(bl => new { BusinessName = bl.Name, Amount = 10 })
                    .Take(10).ToList(),
            })
            .Take(take)
            .ToList();
        return Ok(data);
    }
}
Then, one millisecond later, the frontend makes another call, to this endpoint:
public IActionResult GetCustomers()
{
    using (ConsumentContext consumentContext = new ConsumentContext())
    {
        var customers = consumentContext.Customers.AsNoTracking().Take(5).ToList();
        return Ok(customers);
    }
}
Even though the query of the second endpoint takes only a few milliseconds, its TTFB is held up until the first one is done.
I don't know whether it's relevant, but our backend is currently running in a Linux environment (a Docker container) and communicates with our MSSQL server via TCP/IP (yes, it's locked down in the firewall).

Your problem looks like either your server is running out of free threads to process the actions, or your Angular application is making the API calls sequentially rather than simultaneously.
To free up threads during a long-running DB call, you can try changing your first action to an async action, so the request thread is not blocked while the query runs, e.g.:
public async Task<IActionResult> GetBusinessesByNaceAndAmount(int take, CancellationToken token)
{
    using (ConsumentContext consumentContext = new ConsumentContext())
    {
        var data = await consumentContext.Businesses
            .AsNoTracking()
            .Where(b => b.Established_date != null)
            .GroupBy(b => new { Code = b.Business_code.Substring(0, 2) })
            .Select(b => new
            {
                BusinessName = b.First().Business_code.Substring(0, 2),
                Businesses = b.Where(bl => bl.Established_date != null)
                    .OrderBy(bl => bl.Established_date)
                    .Select(bl => new { BusinessName = bl.Name, Amount = 10 })
                    .Take(10).ToList(),
            })
            .Take(take)
            .ToListAsync(token);
        return Ok(data);
    }
}
This could help if your server is running out of threads to process the actions.
You should also verify your Angular code. If your application waits for the result of the first API call before issuing the next, the code above won't help; you should make all the calls simultaneously.

Related

Quartz.NET Runtime Scheduler will only get db connection string from app.config

I've created an ASP.NET Core web app with Quartz for my job scheduling. I'm writing the job blueprints into a DB at startup and then adding triggers and scheduling the job once I have all the info I need (user input).
Yesterday I had an issue where Quartz couldn't find the job in the job store when I tried to schedule it. Eventually I figured out that it was looking in the RAM job store for some reason.
I tried a number of different things and did some googling to make sure I had everything set up correctly. In the end, on a whim, I figured I'd try adding the connection strings in an app.config, and suddenly Quartz happily fetched them and scheduled my jobs. I added the app.config because I had initially used app.config for my connection strings when setting up the project, before figuring out that appsettings.json is the preferred method for a .NET Core project. So maybe there is some setting somewhere that still prompts Quartz to look for an app.config? I just have no clue where that could be.
Also, as I said, when setting up the jobs in my Program.cs it gets the strings from appsettings.json just as it is supposed to.
Program.cs, with most .AddJob calls omitted for brevity:
Log.Logger = new LoggerConfiguration().Enrich.FromLogContext().WriteTo.Console().CreateLogger();
builder.Services.AddQuartz(q =>
{
    q.UseMicrosoftDependencyInjectionJobFactory();
    var jobKey = new JobKey("HR First Contact", "DEFAULT");
    q.AddJob<MailHRNewEmployee>(jobKey, j => j
        .StoreDurably()
        .WithDescription("job blueprint"));
    q.UsePersistentStore(s =>
    {
        // s.PerformSchemaValidation = true;
        s.UseProperties = true;
        s.RetryInterval = TimeSpan.FromSeconds(15);
        s.UseMySqlConnector(MySql =>
        {
            MySql.ConnectionString = builder.Configuration.GetConnectionString("quartz.dataSource.default.connectionString");
            MySql.TablePrefix = "QRTZ_";
        });
        s.UseJsonSerializer();
    });
});
builder.Services.Configure<QuartzOptions>(builder.Configuration.GetSection("Quartz"));
builder.Services.Configure<QuartzOptions>(options =>
{
    options.Scheduling.IgnoreDuplicates = true;
    options.Scheduling.OverWriteExistingData = true;
});
builder.Services.AddQuartzHostedService(options =>
{
    options.WaitForJobsToComplete = true;
});
My scheduler, most of it omitted for brevity (I can add the rest, but it's just fetching data I need for the job data map):
public async Task StartAsync(CancellationToken cancellationToken)
{
    ISchedulerFactory schedFact = new StdSchedulerFactory();
    this.scheduler = await schedFact.GetScheduler();
    ITrigger jtriggz = CreateTriggerFirstContact();
    await scheduler.ScheduleJob(jtriggz, cancellationToken);
    await scheduler.Start(cancellationToken);
}
private ITrigger CreateTriggerFirstContact()
{
    return Quartz.TriggerBuilder.Create()
        .WithIdentity(_employee.FName + _employee.LName, "HR Contact")
        .UsingJobData("FName", _employee.FName)
        .UsingJobData("LName", _employee.LName)
        .UsingJobData("Gender", _employee.Gender)
        .UsingJobData("Id", Convert.ToString(_employee.Id))
        .UsingJobData("Empfaenger", recipient)
        .UsingJobData("EmpfaengerName", HRManager.Name)
        .UsingJobData("EmpfaengerGeschlecht", HRManager.Gender)
        .StartAt(DateTime.Now.AddSeconds(10))
        .ForJob("HR First Contact")
        .Build();
}
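One thing worth checking in the scheduler above: new StdSchedulerFactory() builds a scheduler from Quartz's classic configuration sources (the quartz section of app.config, or a quartz.config file) and knows nothing about the options registered via AddQuartz, which would explain both the RAM job store and why adding an app.config suddenly made things work. Below is a minimal sketch of resolving the DI-configured factory instead; the FirstContactScheduler class name and constructor injection are assumptions for illustration, not from the original code:
public class FirstContactScheduler
{
    // ISchedulerFactory is registered by builder.Services.AddQuartz(...)
    private readonly ISchedulerFactory _schedulerFactory;
    private IScheduler scheduler;

    public FirstContactScheduler(ISchedulerFactory schedulerFactory)
    {
        _schedulerFactory = schedulerFactory;
    }

    public async Task StartAsync(CancellationToken cancellationToken)
    {
        // Returns the scheduler built from the AddQuartz options
        // (persistent MySQL store), not one configured from app.config.
        this.scheduler = await _schedulerFactory.GetScheduler(cancellationToken);
        ITrigger jtriggz = CreateTriggerFirstContact(); // as in the original code
        await scheduler.ScheduleJob(jtriggz, cancellationToken);
        await scheduler.Start(cancellationToken);
    }
}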

What happens to changes on tracked entities if I do not call SaveChanges()?

I have an application using ASP.NET Core 6 and EF Core 6.0 with an endpoint like this:
public IActionResult Get() {
    var sales = _context.Sale.Include(s => s.Product).ToList();
    sales.ForEach(s => s.LastFetch = DateTime.UtcNow);
    _context.SaveChanges();
    // I only want to change the content of "ProductId" in the JSON serialisation here
    sales.ForEach(r => r.ProductId = r.Product.LocalId);
    return Ok(sales);
}
Is there any possible situation where the changes made by the second ForEach call would get persisted to the database?
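For what it's worth, EF Core only writes tracked changes when SaveChanges (or SaveChangesAsync) is called, so the second ForEach by itself persists nothing; the risk is a later SaveChanges on the same context instance (e.g. elsewhere in the request) picking the mutation up. One way to rule that out is to detach everything after saving. A minimal sketch under that assumption (ChangeTracker.Clear() is available from EF Core 5 onwards):
public IActionResult Get() {
    var sales = _context.Sale.Include(s => s.Product).ToList();
    sales.ForEach(s => s.LastFetch = DateTime.UtcNow);
    _context.SaveChanges(); // persists LastFetch

    // Detach all entities: the context no longer tracks them, so no later
    // SaveChanges call on this instance can pick up the mutation below.
    _context.ChangeTracker.Clear();

    sales.ForEach(s => s.ProductId = s.Product.LocalId); // serialization-only change
    return Ok(sales);
}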

Best practice for long-running SQL queries in ASP.NET MVC

I have an action method which needs to complete 15-52 long-running SQL queries (all of them similar, each taking more than 5 seconds to complete) according to user-selected dates.
After doing a lot of research, it seems the best way to do this without blocking ASP.NET threads is to use async/await task methods with the SQL queries:
[HttpPost]
public async Task<JsonResult> Action() {
    // initialization stuff

    // create tasks to run async SQL queries
    ConcurrentBag<Tuple<DateTime, List<long>>> weeklyObsIdBag =
        new ConcurrentBag<Tuple<DateTime, List<long>>>();
    Task[] taskList = new Task[reportDates.Count()];
    int idx = 0;
    foreach (var reportDate in reportDates) { // 15 <= reportDates.Count() <= 52
        var task = Task.Run(async () => {
            using (var sioDbContext = new SioDbContext()) {
                var historyEntryQueryable = sioDbContext.HistoryEntries
                    .AsNoTracking()
                    .AsQueryable<HistoryEntry>();
                var obsIdList = await getObsIdListAsync(
                    historyEntryQueryable,
                    reportDate
                );
                weeklyObsIdBag.Add(new Tuple<DateTime, List<long>>(reportDate, obsIdList));
            }
        });
        taskList[idx++] = task;
    }
    // await all the tasks to complete
    await Task.WhenAll(taskList);
    // consume the results from the long running SQL queries,
    // which are stored in weeklyObsIdBag
}
private async Task<List<long>> getObsIdListAsync(
    IQueryable<HistoryEntry> historyEntryQueryable,
    DateTime reportDate
) {
    // apply reportDate condition to historyEntryQueryable
    // run async query
    List<long> obsIdList = await historyEntryQueryable.Select(he => he.ObjectId)
        .Distinct()
        .ToListAsync()
        .ConfigureAwait(false);
    return obsIdList;
}
private async Task<List<long>> getObsIdListAsync(
IQueryable<HistoryEntry> historyEntryQueryable,
DateTime reportDate
) {
//apply reportDate condition to historyEntryQueryable
//run async query
List<long> obsIdList = await historyEntryQueryable.Select(he => he.ObjectId)
.Distinct()
.ToListAsync()
.ConfigureAwait(false);
return obsIdList;
}
After making this change, the time taken to complete this action is greatly reduced, since I can now execute the multiple (15-52) async SQL queries simultaneously and await their completion rather than running them sequentially. However, users have started to experience lots of timeout issues, such as:
(from Elmah error log)
"Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool.
This may have occurred because all pooled connections were in use and max pool size was
reached."
"The wait operation timed out"
Is it caused by thread starvation? I have a feeling that I might be using too many thread-pool threads to achieve what I want, but I thought that shouldn't be a problem because I used async/await to prevent the threads from being blocked.
If things don't work this way, what's the best practice for executing multiple long-running SQL queries?
Consider limiting the number of concurrent tasks being executed, for example:
int concurrentTasksLimit = 5;
List<Task> taskList = new List<Task>();
foreach (var reportDate in reportDates) { // 15 <= reportDates.Count() <= 52
    var task = Task.Run(async () => {
        using (var sioDbContext = new SioDbContext()) {
            var historyEntryQueryable = sioDbContext.HistoryEntries
                .AsNoTracking()
                .AsQueryable<HistoryEntry>();
            var obsIdList = await getObsIdListAsync(
                historyEntryQueryable,
                reportDate
            );
            weeklyObsIdBag.Add(new Tuple<DateTime, List<long>>(reportDate, obsIdList));
        }
    });
    taskList.Add(task);
    if (concurrentTasksLimit == taskList.Count)
    {
        await Task.WhenAll(taskList);
        // before clearing the list, you should get the results and store them
        // in memory (e.g. another list) for later use...
        taskList.Clear();
    }
}
// await all the remaining tasks to complete
if (taskList.Any())
    await Task.WhenAll(taskList);
Take note that I changed your taskList to an actual List<Task>; it just seems easier to use, since we need to add and remove tasks from the list.
Also, you should collect the results before clearing the taskList, since you are going to use them later.
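If the batch-by-batch Task.WhenAll feels too coarse (each batch of five waits for its slowest query before the next batch starts), a SemaphoreSlim can serve as an async throttle that keeps exactly five queries in flight at all times. A sketch under the same assumptions as the code above (SioDbContext, getObsIdListAsync and weeklyObsIdBag as in the question):
var throttle = new SemaphoreSlim(5); // at most 5 queries in flight at once
var tasks = reportDates.Select(async reportDate =>
{
    await throttle.WaitAsync();
    try
    {
        using (var sioDbContext = new SioDbContext())
        {
            var historyEntryQueryable = sioDbContext.HistoryEntries
                .AsNoTracking()
                .AsQueryable<HistoryEntry>();
            var obsIdList = await getObsIdListAsync(historyEntryQueryable, reportDate);
            weeklyObsIdBag.Add(new Tuple<DateTime, List<long>>(reportDate, obsIdList));
        }
    }
    finally
    {
        throttle.Release(); // free the slot as soon as this query completes
    }
}).ToList();
await Task.WhenAll(tasks);
Capping concurrency this way also bounds the number of pooled SQL connections in use at any moment, which speaks directly to the "max pool size was reached" error.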

Changing application server key in push manager subscription

I implemented web push notifications using a service worker. I collected user subscriptions with a particular application server key. Suppose we change the application server key: when we get the subscription using reg.pushManager.getSubscription(), we get the old subscription information, which was created with the old application server key. How do we handle this scenario and get a new subscription from the user?
Get the subscription using reg.pushManager.getSubscription() and check whether the current subscription uses the new application server key. If not, call unsubscribe() on the existing subscription and resubscribe.
After properly starting the service worker and getting the permissions, call navigator.serviceWorker.ready to get access to the pushManager object.
From this object we call another promise to get the pushSubscription object we actually care about.
If the user was never subscribed, pushSubscription will be null; otherwise we get the key from it and check whether it's different. If it is, we unsubscribe the user and subscribe them again.
var NEW_PUBLIC_KEY = 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX';
Notification.requestPermission(function (permissionResult) {
    if (permissionResult == 'granted') {
        subscribeUser();
    }
});
function subscribeUser() {
    navigator.serviceWorker.ready
        .then(registration => {
            registration.pushManager.getSubscription()
                .then(pushSubscription => {
                    if (!pushSubscription) {
                        // the user was never subscribed
                        subscribe(registration);
                    }
                    else {
                        // check if the user was subscribed with a different key
                        let json = pushSubscription.toJSON();
                        let public_key = json.keys.p256dh;
                        console.log(public_key);
                        if (public_key != NEW_PUBLIC_KEY) {
                            pushSubscription.unsubscribe().then(successful => {
                                // you've successfully unsubscribed
                                subscribe(registration);
                            }).catch(e => {
                                // unsubscription failed
                            });
                        }
                    }
                });
        });
}
function subscribe(registration) {
    registration.pushManager.subscribe({
        userVisibleOnly: true,
        applicationServerKey: urlBase64ToUint8Array(NEW_PUBLIC_KEY)
    })
    .then(pushSubscription => {
        // successfully subscribed to push
        // save it to your DB etc....
    });
}
function urlBase64ToUint8Array(base64String) {
    var padding = '='.repeat((4 - base64String.length % 4) % 4);
    var base64 = (base64String + padding)
        .replace(/\-/g, '+')
        .replace(/_/g, '/');
    var rawData = window.atob(base64);
    var outputArray = new Uint8Array(rawData.length);
    for (var i = 0; i < rawData.length; ++i) {
        outputArray[i] = rawData.charCodeAt(i);
    }
    return outputArray;
}
In my case, I managed to solve it by clearing the cache and cookies.
The key you get from calling sub.getKey('p256dh') (or sub.toJSON().keys.p256dh) is the client's public key; it will always be different from the server's public key. You need to compare the new public server key with sub.options.applicationServerKey.
Here, sub is the resolved promise from reg.pushManager.getSubscription().
Therefore:
1. Get the PushSubscription interface: reg.pushManager.getSubscription().then(sub => {...}). If sub is null, no subscription exists, so there is nothing to worry about; but if it is defined:
2. Inside the block, get the key currently in use: sub.options.applicationServerKey.
3. Convert it to a string, because you can't compare ArrayBuffers directly: const curKey = btoa(String.fromCharCode.apply(null, new Uint8Array(sub.options.applicationServerKey))).
4. Compare it with your new key. If the keys are different, call sub.unsubscribe() and then subscribe again by calling reg.pushManager.subscribe(subscribeOptions), where subscribeOptions uses your new key. Note that you call unsubscribe() on the PushSubscription, but subscribe() on the PushManager.

Is this a good implementation of StackExchange Redis pipelining?

I'm starting out with Redis and the StackExchange.Redis client. I'm wondering if I'm getting the best performance for getting multiple items at once from Redis.
Situation:
I have an ASP.NET MVC web application that shows a personal calendar on the user's dashboard. Because the dashboard is the landing page, it's heavily used.
To show the calendar items, I first get all calendar item IDs for that particular month:
RedisManager.RedisDb.StringGet("calendaritems_2016_8");
// this returns JSON Serialized List<int>
Then, for each calendar item ID, I build a list of corresponding cache keys:
"CalendarItemCache_1"
"CalendarItemCache_2"
"CalendarItemCache_3"
etc.
With this collection I reach out to Redis with a generic function:
var multipleItems = CacheHelper.GetMultiple<CalendarItemCache>(cacheKeys);
That's implemented like this:
public List<T> GetMultiple<T>(List<string> keys) where T : class
{
    var taskList = new List<Task>();
    var returnList = new ConcurrentBag<T>();
    foreach (var key in keys)
    {
        Task<T> stringGetAsync = RedisManager.RedisDb.StringGetAsync(key)
            .ContinueWith(task =>
            {
                if (!string.IsNullOrWhiteSpace(task.Result))
                {
                    var deserializeFromJson = CurrentSerializer.Serializer.DeserializeFromJson<T>(task.Result);
                    returnList.Add(deserializeFromJson);
                    return deserializeFromJson;
                }
                else
                {
                    return null;
                }
            });
        taskList.Add(stringGetAsync);
    }
    Task[] tasks = taskList.ToArray();
    Task.WaitAll(tasks);
    return returnList.ToList();
}
Am I implementing pipelining correctly? The Redis CLI monitor shows:
1472728336.718370 [0 127.0.0.1:50335] "GET" "CalendarItemCache_1"
1472728336.718389 [0 127.0.0.1:50335] "GET" "CalendarItemCache_2"
etc.
I'm expecting some kind of MGET command.
Many thanks in advance.
I noticed an overload of StringGet that accepts a RedisKey[]. Using this, I see an MGET command in the monitor.
public List<T> GetMultiple<T>(List<string> keys) where T : class
{
    List<RedisKey> list = new List<RedisKey>(keys.Count);
    foreach (var key in keys)
    {
        list.Add(key);
    }
    RedisValue[] result = RedisManager.RedisDb.StringGet(list.ToArray());
    var redisValues = result.Where(x => x.HasValue);
    var multiple = redisValues.Select(x => DeserializeFromJson<T>(x)).ToList();
    return multiple;
}
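As an aside, when a single MGET doesn't fit (for example, mixed command types against related keys), StackExchange.Redis can still pipeline explicitly via CreateBatch(): commands queued on the batch are buffered and flushed to the server together when Execute() is called. A minimal sketch, reusing the RedisManager and CurrentSerializer helpers from the question:
public async Task<List<T>> GetMultipleBatchedAsync<T>(List<string> keys) where T : class
{
    IBatch batch = RedisManager.RedisDb.CreateBatch();

    // Queue one GET per key; nothing is written to the socket yet.
    List<Task<RedisValue>> pending = keys.Select(key => batch.StringGetAsync(key)).ToList();

    // Flush all queued commands to the server as one contiguous pipeline.
    batch.Execute();

    RedisValue[] values = await Task.WhenAll(pending);
    return values.Where(v => v.HasValue)
                 .Select(v => CurrentSerializer.Serializer.DeserializeFromJson<T>(v))
                 .ToList();
}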
