I have a db where I need to write entries to a table.
I need to make sure that my table contains only 20 rows at any given time (I'm keeping it simple).
Of course, since this is a web app, I have several users at the same time.
This is what I plan to do:
I use an aspx page with an "AddRecord" button; when I click on it, I do this:
protected void AddRecord_Click(object sender, EventArgs e)
{
    object lockInstance = Application["lockObject"];
    if (lockInstance == null)
    {
        // Create the lock object and store it in application state.
        lockInstance = new object();
        Application["lockObject"] = lockInstance;
    }
    lock (lockInstance)
    {
        // Run a SELECT COUNT(*) query;
        // if count < 20 then INSERT...
    }
}
No triggers or stored procs (no, I'm not biased, the person I'm working for is :) )
Is there a better way than relying on application state?
Thank you
Your solution wouldn't work in a web garden or load-balanced web farm scenario. I suggest you use proper DB locks.
You can, for example, begin a transaction, execute a SELECT statement using the TABLOCKX hint (which locks the table exclusively), insert a row if there are fewer than 20, and finally commit the transaction.
See locking hints.
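For illustration, here is a minimal ADO.NET sketch of that approach (assuming SQL Server; the table name, column, and connection string are placeholders, and 20 is the row cap from the question):

using System.Data.SqlClient;

using (var conn = new SqlConnection(connectionString))
{
    conn.Open();
    using (var tx = conn.BeginTransaction())
    {
        // TABLOCKX takes an exclusive table lock that is held until the
        // transaction ends, so concurrent callers queue up behind us.
        var countCmd = new SqlCommand(
            "SELECT COUNT(*) FROM MyTable WITH (TABLOCKX, HOLDLOCK)", conn, tx);
        int count = (int)countCmd.ExecuteScalar();

        if (count < 20)
        {
            var insertCmd = new SqlCommand(
                "INSERT INTO MyTable (Name) VALUES (@name)", conn, tx);
            insertCmd.Parameters.AddWithValue("@name", "new row");
            insertCmd.ExecuteNonQuery();
        }

        tx.Commit();
    }
}

Because the lock lives in the database rather than in one process's memory, this works across worker processes and across machines in a farm.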
We appear to have a problem with MDriven generating the same ECO_ID for multiple objects. For the most part it seems to happen in conjunction with unexpected process shutdowns and/or server shutdowns, but it does also happen during normal activity.
Our system consists of one ASP.NET application and one WinForms application. The ASP.NET app is setup in IIS to use a single worker process. We have a mixture of WebForms and MVC, including ApiControllers. We're using a rather old version of the ECO packages: 7.0.0.10021. We're on VS 2017, target framework is 4.7.1.
We have it configured to use 64-bit integers for object ids. The database is Firebird. The SQL configuration is set to use ReadCommitted transaction isolation.
As far as I can tell we have configured EcoSpaceStrategyHandler with EcoSpaceStrategyHandler.SessionStateMode.Never, which should mean that EcoSpaces are not reused at all, right? (Why would I even use EcoSpaceStrategyHandler in this case, instead of just creating EcoSpace normally with the new keyword?)
We have created MasterController : Controller and MasterApiController : ApiController classes that we use for all our controllers. These have an EcoSpace property that simply does this:
if (ecoSpace == null)
{
    if (ecoSpaceStrategyHandler == null)
    {
        ecoSpaceStrategyHandler = new EcoSpaceStrategyHandler(
            EcoSpaceStrategyHandler.SessionStateMode.Never,
            typeof(DiamondsEcoSpace),
            null,
            false);
    }
    ecoSpace = (DiamondsEcoSpace)ecoSpaceStrategyHandler.GetEcoSpace();
}
return ecoSpace;
I.e., if no strategy handler has been created, create one specifying no pooling and no session-state persistence of EcoSpaces. Then, if no EcoSpace has been fetched yet, fetch one from the strategy handler and return it. Is this an acceptable approach? Why would it be better than simply doing this:
if (ecoSpace == null)
    ecoSpace = new DiamondsEcoSpace();
return ecoSpace;
In aspx we have a master page that has an EcoSpaceManager. It has been configured to use a pool but SessionStateMode is Never. It has EnableViewState set to true. Is this acceptable? Does it mean that EcoSpaces will be pooled but inactivated between round trips?
It is possible that we receive multiple incoming API calls in tight succession, so that one API call hasn't been completed before the next one comes in. I assume that this means that multiple instances of MasterApiController can execute simultaneously but in separate threads. There may of course also be MasterController instances executing MVC requests and also the WinForms app may be running some batch job or other.
But as far as I understand it, id reservation is made at the beginning of any UpdateDatabase call, in this way:
update "ECO_ID" set "BOLD_ID" = "BOLD_ID" + :N;
select "BOLD_ID" from "ECO_ID";
If the returned value is K, this will reserve N new ids ranging from K - N to K - 1. Using ReadCommitted transactions everywhere should ensure that the update locks the id data row, forcing any concurrent save operations to wait, then fetches the update result without interference from other transactions, then commits. At that point any other pending save operation can proceed with its own id reservation. I fail to see how this could result in the same id being used for multiple objects.
I should note that it does sometimes seem to produce duplicate ids within one single UpdateDatabase call, i.e. when saving a set of new related objects, some of them end up with the same id. I haven't really confirmed this, though.
Any ideas what might be going on here? What should I look for?
The issue is most likely that you use ReadCommitted isolation.
This allows two systems to start a transaction simultaneously, read the same current value, increase it by their batch size, and then save one after the other.
You must use Serializable isolation for key generation, i.e. only read values that are not currently part of another transaction's write operation.
MDriven uses two settings for isolation level: UpdateIsolationLevel and FetchIsolationLevel.
Set your UpdateIsolationLevel to Serializable.
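To illustrate, here is a rough sketch of the reservation using the Firebird ADO.NET provider directly (this is not MDriven's internal code; "connection" is an open FbConnection and "n" is the batch size):

using System.Data;
using FirebirdSql.Data.FirebirdClient;

// Reserve n new ids in one serializable transaction. No other transaction
// can slip in between the update and the select, so two concurrent
// reservations can never observe the same BOLD_ID.
using (var tx = connection.BeginTransaction(IsolationLevel.Serializable))
{
    var update = new FbCommand(
        "update \"ECO_ID\" set \"BOLD_ID\" = \"BOLD_ID\" + @n", connection, tx);
    update.Parameters.AddWithValue("@n", n);
    update.ExecuteNonQuery();

    var select = new FbCommand(
        "select \"BOLD_ID\" from \"ECO_ID\"", connection, tx);
    long k = Convert.ToInt64(select.ExecuteScalar());

    tx.Commit();
    // ids k - n .. k - 1 are now reserved for this process
}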
What would a Cosmos stored procedure look like that would set the PumperID field for every record to a default value?
We need to do this to repair some data, so the procedure would visit every record that has a PumperID field (not all docs have this) and set it to a default value.
Assuming a one-time data maintenance task, arguably the simplest solution is to create a single-purpose .NET Core console app and use the SDK to query for the items that require changes and perform the updates. I've used this approach to rename properties, for example. This works for any Cosmos database and doesn't require deploying any stored procs or anything else.
Ideally, it is designed to be idempotent so it can be run multiple times, in case several passes are required to catch new data coming in. If the item count is large, one could optionally use the SDK operations to scale up throughput on start and scale back down when finished. For performance, run it close to the endpoint, on an Azure Virtual Machine or Function.
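A rough sketch of that console-app approach with the v3 .NET SDK (the endpoint, key, database/container names, default value, and the Doc shape are all assumptions; the WHERE clause is what keeps reruns idempotent):

using System.Collections.Generic;
using Microsoft.Azure.Cosmos;
using Newtonsoft.Json;
using Newtonsoft.Json.Linq;

var client = new CosmosClient(endpoint, authKey);
var container = client.GetContainer("mydb", "mycontainer");

// Only items that have a PumperID and don't already hold the default.
var query = new QueryDefinition(
    "SELECT * FROM c WHERE IS_DEFINED(c.PumperID) AND c.PumperID != @def")
    .WithParameter("@def", "default-value");

using var feed = container.GetItemQueryIterator<Doc>(query);
while (feed.HasMoreResults)
{
    foreach (Doc item in await feed.ReadNextAsync())
    {
        item.PumperID = "default-value";
        // The SDK extracts the partition key from the item body.
        await container.ReplaceItemAsync(item, item.Id);
    }
}

public class Doc
{
    [JsonProperty("id")]
    public string Id { get; set; }
    public string PumperID { get; set; }
    // Preserves every other property of the document on the round trip.
    [JsonExtensionData]
    public IDictionary<string, JToken> Rest { get; set; }
}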
For scenarios where you want to iterate through every item in a container and update a property, the best means to accomplish this is to use the Change Feed Processor and run the operation in an Azure function or VM. See Change Feed Processor to learn more and examples to start with.
With Change Feed you will want to start it to read from the beginning of the container. To do this see Reading Change Feed from the beginning.
Then within your delegate you will read each item off the change feed, check its value, and call ReplaceItemAsync() to write it back if it needed to be updated.
static async Task HandleChangesAsync(IReadOnlyCollection<MyType> changes, CancellationToken cancellationToken)
{
    Console.WriteLine("Started handling changes...");
    foreach (MyType item in changes)
    {
        if (item.PumperID == null)
        {
            item.PumperID = "some value";
            // call ReplaceItemAsync() on the container to persist it, etc.
        }
    }
    Console.WriteLine("Finished handling changes.");
}
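For completeness, wiring the processor up so it reads from the beginning of the container looks roughly like this (the processor name, instance name, and container variables are placeholders):

ChangeFeedProcessor processor = monitoredContainer
    .GetChangeFeedProcessorBuilder<MyType>("pumperIdFixer", HandleChangesAsync)
    .WithInstanceName("consoleHost")
    .WithLeaseContainer(leaseContainer)
    // DateTime.MinValue in UTC tells the processor to start from the
    // beginning of the container instead of from "now".
    .WithStartTime(DateTime.MinValue.ToUniversalTime())
    .Build();

await processor.StartAsync();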
I just need a suggestion for this case. There is a PIN code field in my project, in an ASP.NET environment. I have stored around 50,000 PIN codes in a SQL Server database. When I run the project on localhost, it slows down, since I have a drop-down that gets its values from the database. I think it is because a huge amount of data is being rendered into the HTML: when I click View Source at run time, I can see all the PIN codes inside it.
Moreover, I have also done the same for selecting CITY and STATE from the database.
I would really appreciate any logic or technique to lessen this slowdown.
If you are using all the PIN codes on a single page, you have multiple options to mitigate this slowdown. If you are still in the initial phase, try a NoSQL store such as MongoDB, or go for Solr or Redis, which give fast access to the data. If you are not able to use these, you can optimize with eager loading and caching the data.
If it is not all on a single page, break it into batches by paginating the PIN codes.
This is a common problem with any website that deals with a large amount of data. To be frank, there is no pure code-level fix for this; you need to pick one of the following approaches for faster retrieval.
Caching:
Use Redis or Memcached. In simpler words, on the first request the cache manager will read your data from SQL Server and store it; for subsequent requests, data will be served from the cache.
Also, don't forget to make a provision to invalidate the cache when new PIN codes are added.
Edit: You can also use the object caching provided by the .NET Framework. Refer: object caching.
The code will be something like this:
if (Cache["key_pincodes"] == null)
{
// if No object is present in Cache, add it to the cache with expiry time of 10 minutes
// Read data to datatable or any object
DataTable pinCodeObject = GetPinCodesFromdatabase();
Cache.Insert("key_pincodes", pinCodeObject, null, DateTime.MaxValue, TimeSpan.FromMinutes(10));
}
else // If pinCodes are cached, dont make Database call and read it from cache
{
// This will get execute
DataTable pinCodeObject = (DataTable)Cache["key_pincodes"];
}
// bind it your dropdown
NoSQL database:
MongoDB, XML, or text files could be used to read the data. It will take much less time than a database hit.
I've got quite a lot of code on my site that looks like this:
Item item;
if (Cache["foo"] != null)
{
    item = (Item)Cache["foo"];
}
else
{
    item = database.getItemFromDatabase();
    Cache.Insert("foo", item, null, DateTime.Now.AddDays(1), ...
}
One such instance of this has a rather expensive getItemFromDatabase method (which is the main reason it's cached). The problem I have is that with every release or restart of the application the cache is cleared, and then an army of users comes online and hits the above code, which kills our database server.
What is the typical method of dealing with these sorts of scenarios?
You could hook into the Application OnStart event in the global.asax file and call a method to load the expensive database calls in a separate thread when the application starts.
It may also be an idea to use a specialised class for accessing these properties, using a locking pattern to avoid multiple database calls when the initial value is null, as sketched below.
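A minimal sketch of that locking pattern (the names are illustrative, not from the original code):

private static readonly object cacheLock = new object();

public Item GetItem()
{
    Item item = (Item)HttpRuntime.Cache["foo"];
    if (item == null)
    {
        lock (cacheLock)
        {
            // Re-check inside the lock: another thread may have filled
            // the cache while we were waiting to enter.
            item = (Item)HttpRuntime.Cache["foo"];
            if (item == null)
            {
                item = database.getItemFromDatabase();
                HttpRuntime.Cache.Insert("foo", item, null,
                    DateTime.Now.AddDays(1), Cache.NoSlidingExpiration);
            }
        }
    }
    return item;
}

This way, after a restart only the first caller pays for the expensive query; everyone else blocks briefly and then reads from the cache.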
I am having a problem with the speed of accessing an association property with a large number of records.
I have an XAF app with a parent class called MyParent.
There are 230 records in MyParent.
MyParent has a child class called MyChild.
There are 49,000 records in MyChild.
I have an association defined between MyParent and MyChild in the standard way:
In MyChild:
// MyChild (many) and MyParent (one)
[Association("MyChild-MyParent")]
public MyParent MyParent;
And in MyParent:
[Association("MyChild-MyParent", typeof(MyChild))]
public XPCollection<MyCHild> MyCHildren
{
get { return GetCollection<MyCHild>("MyCHildren"); }
}
There's a specific MyParent record called MyParent1.
For MyParent1, there are 630 MyChild records.
I have a DetailView for a class called MyUI.
The user chooses an item in one drop-down in the MyUI DetailView, and my code has to fill another drop-down with MyChild objects.
The user chooses MyParent1 in the first drop-down.
I created a property in MyUI to return the collection of MyChild objects for the selected value in the first drop-down.
Here is the code for the property:
[NonPersistent]
public XPCollection<MyChild> DisplayedValues
{
    get
    {
        // get the parent value selected in the first drop-down
        MyParent theParentValue = this.DropDownOne;
        if (theParentValue == null)
        {
            // nothing selected, so nothing to display
            return null;
        }
        // get and return the child values for the parent
        return theParentValue.MyChildren;
    }
}
I marked the DisplayedValues property as NonPersistent because it is only needed for the UI of the DetailView. I don't think persisting it would speed up the creation of the collection the first time, and after it's used to fill the drop-down I don't need it, so I don't want to spend time storing it.
The problem is that it takes 45 seconds to evaluate theParentValue = this.DropDownOne.
Specs:
Vista Business
8 GB of RAM
2.33 GHz E6550 processor
SQL Server Express 2005
This is too long for users to wait for one of many drop-downs in the DetailView.
I took the time to sketch out the business case because I have two questions:
How can I make the associated values load faster?
Is there another (simple) way to program the drop-downs and DetailView that runs much faster?
Yes, you can say that 630 is too many items to display in a drop-down, but this code is taking so long that I suspect the speed is proportional to the 49,000 and not to the 630. 100 items in the drop-down would not be too many for my app.
I need quite a few of these drop-downs in my app, so it's not appropriate to force the user to enter more complicated filtering criteria for each one. The user needs to pick one value and see the related values.
I would understand if finding a large number of records was slow, but finding a few hundred shouldn't take that long.
Firstly, you are right to be sceptical that this operation should take this long; on read operations XPO should add only between 30% and 70% overhead, and on this tiny amount of data we should be talking milliseconds, not seconds.
Some general perf tips are available in the DevExpress forums, and they centre around object caching, lazy vs. deep loads, etc., but I think in your case the issue is something else. Unfortunately it's very hard to second-guess what's going on from your question, except to say that it's highly unlikely to be a problem with XPO; it's much more likely to be something else. I would be inclined to look at your session creation (this also creates your object cache) and your SQL connection code (the IDataStore stuff). Connections are often slow if hosts cannot be resolved cleanly, and if you are not pooling / re-using connections this problem can be exacerbated.
I'm unsure why you would be doing it the way you are. If you've created an association like this:
public class A : XPObject
{
    [Association("a<b", typeof(B))]
    public XPCollection<B> Bs { get { return GetCollection<B>("Bs"); } }
}

public class B : XPObject
{
    [Association("a<b"), Persistent("Aid")]
    public A a { get; set; }
}
then when you want to populate a dropdown (like a LookUpEdit control):
A myA = GetSomeParticularA();
lupAsBs.Properties.DataSource = myA.Bs;
lupAsBs.Properties.DisplayMember = "WhateverPropertyName";
You don't have to load A's children; XPO will load them as they're needed, and there's no session management necessary for this at all.
Thanks for the answer. I created a separate solution and was able to get good performance, as you suggest.
My SQL connection is OK and works with other features in the app.
Given that I'm using XAF and not doing anything extra/fancy, aren't my sessions managed by XAF?
The session I use is read from the DetailView.
I'm not sure about your case; I just want to share some of my experiences with XAF.
The first time you click on a drop-down (lookup list) control in a detail view, two queries are sent to the database to populate the list. In my tests, sometimes the entire object is loaded into the source collection, not just the ID and Name properties as we had thought, so depending on your objects you may want to use lighter ones for lists. You can also turn on Server Mode for the list; then only 128 objects are loaded at a time.