I am getting this error:
Collection was modified; enumeration operation may not execute.
at System.Collections.Queue.QueueEnumerator.MoveNext()
Queue ReqQ = (Application["ReqQ"] != null) ? ((Queue)Application["ReqQ"]) :
new Queue(50);
if (ReqQ != null)
{
foreach (object OReq in ReqQ)
{
string mId = (string)OReq;
if (mId.Split('~')[1].Equals(reqUid.Split('~')[1]) && (DateTime.Parse(mId.Split('~')[0]).AddMinutes(1 * int.Parse(string.IsNullOrEmpty(delay) ? "0" : delay)) > DateTime.Now))
{
isSuccess = false;
break;
}
}
}
else
{
ReqQ = new Queue(10);
isSuccess = true;
}
if (isSuccess)
{
if (ReqQ.Count >= 10) //only keep last 10 messages in application cache
ReqQ.Dequeue();
ReqQ.Enqueue(reqUid);
Application["ReqQ"] = ReqQ;
}
It looks like you've got a single collection which you're reading and modifying from multiple threads (for different requests). To start with, that's not safe with Queue - and it's particularly unsafe if you're iterating through the collection in one thread while you modify it in another. (EDIT: I've just noticed you're not even using a generic collection. If you're using .NET 4, there's no reason to use the non-generic collections...)
It's unclear what you're trying to achieve - you may be able to get away with just changing to ConcurrentQueue<T> instead, but you need to be aware that by the time you've iterated over the collection, the values you read may already have been dequeued in another thread.
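For illustration, here is a minimal sketch of the same check built around ConcurrentQueue<string> (the names follow the question's code, and this assumes .NET 4's System.Collections.Concurrent). The queue's enumerator walks a moment-in-time snapshot, so the foreach is safe while other requests enqueue, though the 10-item trim is only best-effort under concurrency:
// requires: using System.Collections.Concurrent;
ConcurrentQueue<string> reqQ = Application["ReqQ"] as ConcurrentQueue<string>;
if (reqQ == null)
{
    reqQ = new ConcurrentQueue<string>();
}
bool isSuccess = true;
foreach (string mId in reqQ) // enumerates a snapshot; safe under concurrent writes
{
    if (mId.Split('~')[1].Equals(reqUid.Split('~')[1])
        && DateTime.Parse(mId.Split('~')[0]).AddMinutes(int.Parse(string.IsNullOrEmpty(delay) ? "0" : delay)) > DateTime.Now)
    {
        isSuccess = false;
        break;
    }
}
if (isSuccess)
{
    if (reqQ.Count >= 10) // keep roughly the last 10 messages; Count + TryDequeue is not atomic
    {
        string discarded;
        reqQ.TryDequeue(out discarded);
    }
    reqQ.Enqueue(reqUid);
    Application["ReqQ"] = reqQ;
}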
I need to update a collection with values like this:
{
"email" : "x#gmail.com",
"fullName" : "Mehr",
"locations" : ["sss","dsds","adsdsd"]
}
Locations needs to be an array. In Firebase, how can I do that? It should also check for duplicates.
I tried it like this:
const locations=[]
locations.push(id)
firebase.database().ref(`/users/${userId}`).push({ locations })
Since you need to check for duplicates, you'll need to first read the value of the array and then update it. In the Firebase Realtime Database that combination is done through a transaction. You can run the transaction on the locations node itself:
var locationsRef = firebase.database().ref(`/users/${userId}/locations`);
var newLocation = "xyz";
locationsRef.transaction(function(locations) {
if (locations) {
if (locations.indexOf(newLocation) === -1) {
locations.push(newLocation);
}
}
return locations;
});
As you can see, this loads the locations, ensures the new location is present once, and then writes it back to the database.
Note that Firebase recommends against using arrays for set-like data structures such as this. Consider using the more direct mapping of a mathematical set to JavaScript:
"locations" : {
"sss": true,
"dsds": true,
"adsdsd": true
}
One advantage of this structure is that adding a new value is an idempotent operation. Say that we have a location "sss". We add that to the node with:
locations["sss"] = true;
Now there are two options:
"sss" was not yet in the node, in which case this operation adds it.
"sss" was already in the node, in which case this operation does nothing.
For more on this, see best practices for arrays in Firebase.
You can simply push the items in a loop:
if (locations.length > 0) {
  var ref = firebase.database().ref(`/users/${userId}`).child('locations');
  for (var i = 0; i < locations.length; i++) {
    ref.push(locations[i]);
  }
}
This also creates unique keys for the items, instead of a numerical index (which tends to change).
You can use the update method rather than push. It would be much easier for you. Try it like below:
var locationsObj = {};
if (locations.length > 0) {
  for (var i = 0; i < locations.length; i++) {
    var key = firebase.database().ref(`/users/${userId}`).child('locations').push().key;
    locationsObj[`/users/${userId}/locations/` + key] = locations[i];
  }
  firebase.database().ref().update(locationsObj).then(function() { // update() returns a promise
    console.log("successfully updated");
  });
}
Note: the update method is used to update multiple paths at the same time, which is helpful in this case. If you use push in a loop, you have to wait for each push to return its own promise; with update, all the writes are committed together and you get a single success or error back.
Is it possible to select a document at a specific index?
I have a document import process where I get a page of items from my data source (250 items at once) and then import these into DocumentDB concurrently. If I get an error inserting these items into DocumentDB, I won't be sure which individual item or items failed. (I could work it out, but I don't want to.) It would be easier to just upsert all the items from the page again.
The items I'm inserting have an ascending id. So if I query DocumentDB (ordered by id) and select the id at position (count of all ids - page size), I can start importing from that point forward again.
I know SKIP is not implemented; I want to check if there is another option?
You could try a bulk import stored procedure. The sproc creation code below is from Azure's github repo. This sproc will report back the number of docs created in the batch and continue trying to create docs in multiple batches if the sproc times out.
Since the sproc is ACID, you will have to retry from the beginning (or from the last successful batch) if any exceptions are thrown; a client-side retry loop is sketched after the listing below.
You could also change the createDocument function to upsertDocument if you just want to retry the entire batch process whenever an exception is thrown.
{
id: "bulkImportSproc",
body: function bulkImport(docs) {
var collection = getContext().getCollection();
var collectionLink = collection.getSelfLink();
// The count of imported docs, also used as current doc index.
var count = 0;
// Validate input.
if (!docs) throw new Error("The array is undefined or null.");
var docsLength = docs.length;
if (docsLength == 0) {
getContext().getResponse().setBody(0);
return;
}
// Call the CRUD API to create a document.
tryCreate(docs[count], callback);
// Note that there are 2 exit conditions:
// 1) The createDocument request was not accepted.
// In this case the callback will not be called, we just call setBody and we are done.
// 2) The callback was called docs.length times.
// In this case all documents were created and we don't need to call tryCreate anymore. Just call setBody and we are done.
function tryCreate(doc, callback) {
var isAccepted = collection.createDocument(collectionLink, doc, callback);
// If the request was accepted, callback will be called.
// Otherwise report current count back to the client,
// which will call the script again with remaining set of docs.
// This condition will happen when this stored procedure has been running too long
// and is about to get cancelled by the server. This will allow the calling client
// to resume this batch from the point we got to before isAccepted was set to false
if (!isAccepted) getContext().getResponse().setBody(count);
}
// This is called when collection.createDocument is done and the document has been persisted.
function callback(err, doc, options) {
if (err) throw err;
// One more document has been inserted, increment the count.
count++;
if (count >= docsLength) {
// If we have created all documents, we are done. Just set the response.
getContext().getResponse().setBody(count);
} else {
// Create next document.
tryCreate(docs[count], callback);
}
}
}
}
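On the client side, the retry then amounts to resubmitting the tail of the document list until the returned count covers everything. A rough C# sketch, assuming the classic Microsoft.Azure.Documents SDK and that client, sprocLink and docs (a List of documents) already exist:
// Hypothetical driver loop; requires: using System.Linq;
int created = 0;
while (created < docs.Count)
{
    object[] remaining = docs.Skip(created).ToArray();
    // the sproc sets its response body to the number of docs it managed to create
    var response = await client.ExecuteStoredProcedureAsync<int>(
        sprocLink, new object[] { remaining });
    created += response.Response;
}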
I have one row in the database to count total user logins.
I have tried to increase the number by getting the row and adding +1 to it.
I'm not sure about concurrency; after I tried this, the counter was increased by 1 and not by 2 as it "should" be (if many users log in at the same time):
using(var db = new Database()) {
db.Settings.FirstOrDefault(x => x.Name == "Logins").Counter++;
using(var db2 = new Database()) {
db2.Settings.FirstOrDefault(x => x.Name == "Logins").Counter++;
db2.SaveChanges();
}
db.SaveChanges();
}
Why not make a single table for storing the number of people who have logged in? Increment the field when someone logs in successfully and decrement it when the user logs out. For example, for login:
_Users = context.Users.First(aa => aa.UserName.ToUpper() == _UserName.ToUpper() && aa.MDesktop == true);
if (_Users != null)
{
context.LogEntry.FirstOrDefault().Counter++;
context.SaveChanges();
}
This is old but it is still a relevant discussion for new EF developers and it deserves an explanation.
OP's example uses two different DbContexts; effectively OP has defined two different units of work, and importantly, neither of these is aware that the other exists at all.
Let's assume that the current value of the "Logins" setting is 5.
For the purposes of this walkthrough, let's save the two instances that are requested from Settings into variables outside of the scope of the DB contexts in question:
Setting setting1 = null;
Setting setting2 = null;
using(var db = new Database()) {
// DB: 5, Setting1: null, Setting2: null
// Load the value of setting1 from the database
setting1 = db.Settings.FirstOrDefault(x => x.Name == "Logins");
// DB: 5, Setting1: 5, Setting2: null
// Increment the value of setting1
setting1.Counter++;
// at this point, no changes have been saved yet, the DB still holds the original value for "Logins"
// DB: 5, Setting1: 6, Setting2: null
// Create a new context called DB2
using(var db2 = new Database()) {
// load setting2 from the DB
setting2 = db2.Settings.FirstOrDefault(x => x.Name == "Logins");
// right now setting2 still has a value of 5, the previous change was not yet committed
// DB: 5, Setting1: 6, Setting2: 5
setting2.Counter++;
// DB: 5, Setting1: 6, Setting2: 6
// Save the value of Setting2 back to the database
db2.SaveChanges();
// DB: 6, Setting1: 6, Setting2: 6
// At this point setting1, setting2, and the DB all agree the value is 6.
}
// The context is only aware that we previously set the value of setting1 to 6
// so it issues an update to the DB
db.SaveChanges();
// ultimately this update would not actually change anything.
}
Entity Framework, Unit of Work and Repository data access patterns all exhibit this behaviour: when you create a new DbContext, IRepository or IUnitOfWork, it is created in isolation from any others that might exist at the same point in time; there is no difference between instantiating a new context in the same method, on a different thread, or even on an entirely different server. If you need to implement counters or incremental values, there is always a degree of uncertainty when we first cache the value of the field, then increment the value and later write that value back to the database.
To minimise the potential conflict, read the record and save it immediately after, then as a rule always re-query the value of this setting before you use it.
You can call .SaveChanges() multiple times in your logic; in this example, simply saving before instantiating the second context, or at least before the second context loads the record from the DB, would have been enough to see the value incremented twice:
using(var db = new Database()) {
db.Settings.FirstOrDefault(x => x.Name == "Logins").Counter++;
db.SaveChanges(); // save it back as soon as we've made the change
using(var db2 = new Database()) {
db2.Settings.FirstOrDefault(x => x.Name == "Logins").Counter++;
db2.SaveChanges();
}
db.SaveChanges();
}
Where possible, you will find the code simpler if you can avoid a schema where an incrementing or counter field is required; instead you could turn the count logic into a query-based solution.
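For instance, a minimal sketch of that idea, assuming a hypothetical LoginAudit table that records one row per successful login:
// Hypothetical query-based alternative: no shared counter row to contend over.
// Each successful login appends an audit row; the total is derived on demand.
db.LoginAudits.Add(new LoginAudit { UserName = userName, LoggedInAt = DateTime.UtcNow });
db.SaveChanges();

int totalLogins = db.LoginAudits.Count(); // inserts never conflict, so no lost updates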
Counters are of course a special case; you could always make direct SQL calls to the database, both for reads and increments, to ensure that we bypass any potential caching that might occur with the records through EF.
You could do this as a one liner to increment the value:
dbContext.Database.ExecuteSqlCommand("UPDATE Setting SET [Counter] = IsNull([Counter],0) + 1 WHERE [Name] = 'Logins'");
Or if you want to inspect the new value:
int newCount = dbContext.Database.SqlQuery<int>(@"
    UPDATE Setting SET [Counter] = IsNull([Counter],0) + 1
    OUTPUT inserted.[Counter]
    WHERE [Name] = 'Logins'").First();
If you need to get the current value, and know that it is the most up-to-date, then you can simply query it from any context in the same way:
int logins = dbContext.Database.SqlQuery<int>(@"
    SELECT [Counter] FROM Setting
    WHERE [Name] = 'Logins'").First();
I hope this sheds some light on why your code only incremented the value once. It's not a fault in EF, just something that we need to be aware of: once EF has read values from the DB, they are potentially already stale or out of date. If optimistic concurrency is not appropriate for your use case, then you will need to think outside of the box a little bit ;)
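If you do want to lean on EF's optimistic concurrency instead, here is a minimal sketch, assuming the Setting entity carries a [Timestamp] RowVersion column so that EF can detect competing writes:
// Hypothetical retry loop; DbUpdateConcurrencyException lives in
// System.Data.Entity.Infrastructure (EF6) or Microsoft.EntityFrameworkCore.
while (true)
{
    var setting = db.Settings.First(x => x.Name == "Logins");
    setting.Counter++;
    try
    {
        db.SaveChanges();
        break; // saved without a conflict
    }
    catch (DbUpdateConcurrencyException)
    {
        db.Entry(setting).Reload(); // another context won the race; pick up its value and retry
    }
}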
The easy approach? Then I'd suggest using a manual transaction in EF Core:
ef core transaction docs
Be sure to add a unique constraint of some sort, e.g. (settings id + logins counter).
using (var transaction = _context.Database.BeginTransaction())
{
    try
    {
        var totalLogins = _context.Settings.FirstOrDefault(x => x.Name == "Logins");
        totalLogins.Counter += 1;
        await _context.SaveChangesAsync();
        transaction.Commit();
    }
    catch
    {
        transaction.Rollback();
    }
}
Should concurrency happen, the request will fail because it would try to insert duplicate keys, which is not possible. I'd then HIGHLY recommend implementing a retry pattern, to avoid people being unable to actually log in just because a number in your database didn't get updated.
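A minimal sketch of such a retry, assuming the transaction block above has been wrapped in a hypothetical TryIncrementLoginCounterAsync() helper that returns false when the save conflicts:
// Hypothetical retry wrapper: a failed counter update should not block the login itself.
const int maxAttempts = 3;
for (int attempt = 1; attempt <= maxAttempts; attempt++)
{
    if (await TryIncrementLoginCounterAsync()) // assumed helper wrapping the transaction above
        break;
    // brief, growing backoff so concurrent logins interleave instead of colliding again
    await Task.Delay(TimeSpan.FromMilliseconds(50 * attempt));
}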
I have a page that need to run a query against a large dataset very often. To ease the burden on the database, I've set up a cache that will refresh itself every 5 minutes.
The logic is:
When a call is made, check if there is data in the cache; if there is, run the query against the cache. If not, start a task that fetches all rows from the database, while running a query on my repository to get just the data needed for that call. When all rows have been fetched, put them in the cache so they can be accessed on the next call. The problem is that I sometimes get: "There is already an open DataReader associated with this Command which must be closed first." I guess this is because it runs two queries against the same repository at the same time (one for all rows and one for the scoped query). I have MARS enabled in my connection string.
My code
public IQueryable<TrackDto> TrackDtos([FromUri] int[] Ids)
{
if (HttpContext.Current.Cache["Tracks"] != null && ((IQueryable<TrackDto>)HttpContext.Current.Cache["Tracks"]).Any())
{
var trackDtos = Ids.Length > 0
? ((IQueryable<TrackDto>)HttpContext.Current.Cache["Tracks"]).Where(trackDto => Ids.Contains(trackDto.Id)).AsQueryable()
: ((IQueryable<TrackDto>)HttpContext.Current.Cache["Tracks"]).AsQueryable();
return trackDtos;
}
else
{
UpdateTrackDtoCache(DateTime.Today);
var trackDtos = Ids.Length > 0
? WebRepository.TrackDtos.Where(trackDto => trackDto.Date == DateTime.Today && Ids.Contains(trackDto.Id)).AsQueryable()
: WebRepository.TrackDtos.Where(trackDto => trackDto.Date == DateTime.Today).AsQueryable();
return trackDtos;
}
}
private IQueryable<TrackDto> MapTrackDtosFromDb(DateTime date)
{
return WebRepository.TrackDtos.Where(tdto => tdto.Date == date.Date);
}
private void UpdateTrackDtoCache(DateTime date)
{
if (CacheIsUpdating)
return;
CacheIsUpdating = true;
var task = Task.Factory.StartNew(
state =>
{
var context = (HttpContext)state;
context.Cache.Insert("Tracks", MapTrackDtosFromDb(date), null, Cache.NoAbsoluteExpiration,
new TimeSpan(0, 5, 0));
CacheIsUpdating = false;
},
HttpContext.Current);
}
I believe you are running DML or DDL SQL queries over the same active connection, and MARS does not allow that. You can execute multiple SELECT statements or bulk inserts, but if you run multiple UPDATE or DELETE statements, your SQL execution will throw this kind of error. Even if you run an UPDATE query while a SELECT statement is still executing on the same command, you will get this error. For more info read this:
http://msdn.microsoft.com/en-us/library/h32h3abf(v=vs.110).aspx
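Independent of MARS, one way to sidestep the error in the question's code is to materialize the rows before caching them, so the DataReader behind the lazy IQueryable is closed instead of being left open inside the cache. A sketch reusing the question's MapTrackDtosFromDb (caching an in-memory list here is an assumption about what you want cached):
// ToList() executes the query and closes its reader, so later queries
// on the same context no longer collide with the cached one.
var rows = MapTrackDtosFromDb(date).ToList();
context.Cache.Insert("Tracks", rows.AsQueryable(), null,
    Cache.NoAbsoluteExpiration, new TimeSpan(0, 5, 0));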
Is there a design pattern in meteor application to handle multiple clients inserting the same logical record 'simultaneously'?
Specifically I have a scoring type application, and multiple clients could create the initial, basically blank, Score record for an Entrant when the entrant is ready to start. The appearance of the record is then used to make it available on the page for editing by the officials, incrementing penalty counts and such.
Stages = new Meteor.Collection("contests");
Entrants = new Meteor.Collection("entrants");
Scores = new Meteor.Collection("scores");
// official picks the next entrant
Scores.insert({stage_id: xxxx, entrant_id: yyyy});
I am happy with the implications of conflict resolution for edits to the Score record once it is in the collection. I am not sure how to deal with multiple clients trying to insert the Score for the same stage_id/entrant_id pair.
In a synchronous app I would tend to use some form of interlocking, or a relational DB key constraint.
Well, according to this answer, the Meteor $upsert flag is still on the enhancement list and seems likely to be added to the stable branch after the 1.0 release.
So the first way, as it was said there, is to add a unique index:
All the implementation ways are listed here. I would recommend you to use native Mongo indexes, not a code implementation.
The optimistic-concurrency way is much more complicated, since there are no transactions in MongoDB.
Here is my implementation of it (be careful, it might be buggy):
var result_callback = function(_id) {
// callback to call on a successful insert
}
var $set = {stage_id: xxxx, entrant_id: xxxx};
var created_at = Date.now().toFixed();
var $insert = _.extend({}, $set, {created_at: created_at});
Scores.insert($insert, function(error, _id) {
if (error) {
//handle it
return;
}
var entries = Scores.find($set, {sort: {created_at: -1}}).fetch()
if (entries.length > 1) {
var duplicates = entries.splice(0, entries.length - 1);
var duplicate_ids = _.map(duplicates, function(entry) {
return entry._id;
});
Scores.remove({_id: {$in: duplicate_ids}})
Scores.update(entries[0]._id, {$set: $set}, function(error) {
if (error) {
// handle it
} else {
result_callback(entries[0]._id)
}
})
} else {
result_callback(_id);
}
});
Hope this will give you some good ideas)
Sorry, previous version of my answer was completely incorrect.