I'm creating Events and want to bundle them into consolidated objects matched by title, so I created an EventBundle repository which holds these objects; I register single events against it, matching them by title into the bundles.
Since I had a lot of trouble saving them, I already went so far as to cache the bundles locally, which helps somewhat, but it's still pretty bad.
public function registerEvent($event) {
    // We match on the title of the event, so we get that first.
    $title = $event->getEvTitle();
    // Look up the event bundle for this title, unless it is already in the
    // local cache; the lookup returns NULL if no bundle exists yet.
    if (!isset($this->aBundles[$title])) {
        $this->aBundles[$title] = $this->findEventBundleByTitle($title);
    }
    if ($this->aBundles[$title] !== NULL) {
        $this->aBundles[$title]->copyDetails($event);
        $this->aBundles[$title]->setEvTitle($title);
        $this->update($this->aBundles[$title]);
        print_r("Update: $title\n");
    }
    else {
        $objectManager = \TYPO3\CMS\Core\Utility\GeneralUtility::makeInstance('TYPO3\CMS\Extbase\Object\ObjectManager');
        $this->aBundles[$title] = $objectManager->get('Ext\MyEvents\Domain\Model\EventBundle');
        $this->aBundles[$title]->copyDetails($event);
        $this->aBundles[$title]->setEvTitle($title);
        $this->add($this->aBundles[$title]);
        print_r("Add: $title\n");
    }
}
public function findEventBundleByTitle($title) {
    $query = $this->createQuery();
    $query->getQuerySettings()->setRespectStoragePage(FALSE);
    $query->matching(
        $query->equals('ev_title', $title)
    );
    $res = $query->execute();
    // Return the first matching bundle, or NULL when there is none.
    $bundle = ($res->count() == 0 ? NULL : $res->getFirst());
    return $bundle;
}
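(Side note: Extbase's QueryResult::getFirst() already returns NULL on an empty result, so the method can be written a little tighter - a sketch, with setLimit(1) added so at most one row is fetched:)

public function findEventBundleByTitle($title) {
    $query = $this->createQuery();
    $query->getQuerySettings()->setRespectStoragePage(FALSE);
    $query->matching($query->equals('ev_title', $title));
    // Fetch at most one row; getFirst() returns NULL on an empty result.
    $query->setLimit(1);
    return $query->execute()->getFirst();
}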
Now running this I would expect to see one add for each title and then updates - which is true for the first run.
But on subsequent runs there are again some adds; it does not match some of the events to the title. With each subsequent run there are fewer and fewer adds, until there are only updates. But looking into the database now shows multiple records with the same title. A unique index causes errors on the second run too, as the lookup of the object fails, seemingly without any pattern.
Any idea why this might happen? I can see the entries in the database between the runs, so most likely the lookup fails for some reason. But I'm totally out of ideas why that might be the case, as it does work eventually - except that by then there are a lot more than just 1-2 entries in the database for some of the events.
Also confusing is the fact that after 5 runs all events match consistently, with some events being in the database 5 times by that point. But all matches go to the FIRST of those entries, so it's not that the query fails to match them; the entries created because the database lookup returned nothing are simply ignored from then on. Deleting them from the database by hand restarts the adding of spurious records again.
To answer it myself: I just found that within the copy function I copied over some properties of the model that I probably should not have copied, which confuses TYPO3 and breaks saving to the DB.
So if someone stumbles across this: make sure you only copy valid data and not all properties of the model, as some of the properties might break functionality.
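For illustration, a minimal sketch of a copy function that avoids this trap, assuming two hypothetical domain properties (evStartDate, evLocation) - the point is to whitelist the payload fields and never touch Extbase-internal state:

public function copyDetails($event) {
    // Copy only the domain fields we actually own.
    $this->setEvStartDate($event->getEvStartDate()); // hypothetical property
    $this->setEvLocation($event->getEvLocation());   // hypothetical property
    // Deliberately NOT copied: uid, pid and other Extbase-internal
    // properties - copying those makes Extbase treat the bundle as a
    // different (or already persisted) record and breaks add()/update().
}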
With Doctrine and Symfony, in my PHPUnit test method:
// Change username for user #1 (Sheriff Woody to Chuck Norris)
$form = $crawler->selectButton('Update')->form([
    'user[username]' => 'Chuck Norris',
]);
$client->submit($form);

// Find user #1
$user = $em->getRepository(User::class)->find(1);
dump($user); // Username = "Sheriff Woody"

$user = $em->createQueryBuilder()
    ->from(User::class, 'user')
    ->andWhere('user.id = :userId')
    ->setParameter('userId', 1)
    ->select('user')
    ->getQuery()
    ->getOneOrNullResult();
dump($user); // Username = "Chuck Norris"
Why do my two methods of fetching user #1 return different results?
diagnosis / explanation
I assume* you already created the User object you're editing via the crawler earlier in that function and checked that it is there. This makes it a managed entity.
It is in the nature of data not to sync itself magically with the database; some automatism must be in place, or some method must be executed, to sync it.
The find() method will always try to use the identity map cache (unless it is explicitly turned off; also see the side note). The query builder won't if you explicitly call getResult() (or one of its varieties), since you explicitly want a query to be executed. Executing a different query might lead to the cache not being hit, producing the result you see. (It should update the first user object, though ...) [updated, due to comment from Arno Hilke]
((( side note: Keeping objects in sync is hard. It's mainly about having consistency in the database, but all of ACID is wanted. Any process talking to the database should assume that it is working only with the state as of its first query and that it is the only user of the database - unless additional constraints must be met or inconsistent reads can occur, in which case isolation levels should be raised (see also: transactions, or more precisely: isolation). So automatic syncing is usually not wanted, and Doctrine makes certain assumptions for performance gains (mainly: isolation / locking is optimistic). In your particular case, however, none of those things is an actual concern... since you actually want a non-repeatable read. )))
(* otherwise, the behavior you're seeing would be really unexpected)
solution
One easy solution would be to actively and explicitly sync the data from the database, either by calling $em->refresh($user), or - before fetching the user again - by calling $em->clear(), which will detach all entities (clearing the identity map, which might have a noticeable performance impact) and allow you to call find() again with the proper results being returned.
Please note that detaching entities means that any object previously returned from the entity manager should be discarded and fetched again (not via refresh).
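A minimal sketch of both options inside the test, using the same $em and $user as above:

// Option 1: re-read this one entity's state from the database.
$em->refresh($user);
dump($user); // Username = "Chuck Norris"

// Option 2: detach everything, then fetch fresh.
$em->clear();
$user = $em->getRepository(User::class)->find(1); // hits the database again
dump($user); // Username = "Chuck Norris"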
alternate solution 1 - everything is requests
Instead of checking the database, you could do a different request, to a page that displays the user's name, and check there that it has changed.
alternate solution 2 - using only one entity manager
Using only one entity manager (that is: sharing the entity manager / database in the unit test with the server handling the request) may be a reasonable solution, but it comes with its own set of problems; mainly, omitted commits and flushes may escape detection.
alternate solution 3 - using multiple entity managers
Use one entity manager to set up the test. Since the server is using a new entity manager to perform its work, you should theoretically - to do this properly - create yet another entity manager to check the server's behavior.
Comment: alternate solutions 1, 2 and 3 would work with the highest isolation level; the initial solution probably wouldn't.
I have a Range Input that takes an array of AssignmentSearchTypes for its valueRange. My task is to conditionally limit the search types based on the user's role; in particular, we want to allow only certain roles to assign jobs directly to another user. I've done this successfully, but when I actually pull up the page in a browser the "User" SearchType is always included, even when it shouldn't be.
I've tried declaring an array that only contains the desired SearchType:
return new AssignmentSearchType[] { AssignmentSearchType.TC_GROUP }
but for some reason the TC_USER element still appears first in the rendered drop-down, in addition to TC_GROUP. I've stepped through the project line by line in the debugger, but it hasn't proved useful thus far.
Any ideas, Guidewire folks?
I have a SysOperation Framework process that creates a ReliableAsynchronous batch to post packing slips, and several get created at a time.
Depending on how quickly I click to create them, I get:
Cannot edit a record in LastValue (SysLastValue).
An update conflict occurred due to another user process deleting the record or changing one or more fields in the record.
And
Cannot create a record in LastValue (SysLastValue). User ID: t edit a, Class.
The record already exists.
On a couple of them in the BatchHistory. I have this.parmLoadFromSysLastValue(false); set, and I'm not sure how to prevent writing to the SysLastValue table.
Any idea what could be going on?
I get this exception a lot too, so I've made a habit of catching DuplicateKeyException in my service operation. When it is thrown, I catch it and retry (up to a default of 5 times).
The error occurs when a lot of processes run simultaneously, as yours are doing now.
DuplicateKeyException can be caught inside a transaction, so you could improve things by putting a try/catch around the code that does the insert into the SysLastValue table, if you can find that code.
As far as I can see, these are the only two occurrences where a record is inserted into this table (except maybe in the kernel):
InventUnusedDimCleanUp.serialize()
SysAutoSemaphore.autoSemaphore()
Put a breakpoint there and see if that code is executed. If so, you can add a try/catch with retry, as sketched below, and see if that "fixes" it.
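A minimal X++ sketch of the catch-and-retry idea; insertSysLastValue() is an illustrative placeholder for whatever code you find doing the insert, not actual kernel or application code:

int attempts;

try
{
    attempts++;
    this.insertSysLastValue(); // hypothetical: the insert that collides
}
catch (Exception::DuplicateKeyException)
{
    // DuplicateKeyException is one of the few exceptions that can be
    // caught inside a transaction, so this also works within tts blocks.
    if (attempts < 5)
    {
        retry; // re-executes the try block
    }
    else
    {
        throw Exception::Error;
    }
}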
You could also use the tracing cockpit and the trace parser to figure out where that record is inserted if it's not one of those two.
My theory about LoadFromSysLastValue: I believe setting this.parmLoadFromSysLastValue(false) does not work because it is only taken into account when the dialog is started, not when your operation is executed. When in batch, no SysLastValue should be used to initialize your data contract, as you want it to use the exact parameters you supplied in your data contract.
It's because of code calling SysOperationController.saveLast() while in batch. My solution is to set loadFromSysLastValue to false in SysOperationController.loadFromSysLastValue() as part of the in-batch check:
if (!this.isInBatch())
{
    .....
}
//Begin
else
{
    loadFromSysLastValue = false;
}
//End
I'm working on an ASP.NET MVC 4.5 site using EF. I have code that checks whether a record exists in the DB and creates a new record if it doesn't. As a result of a problem in my code, which I've since fixed, this code was being called twice within a fraction of a second, and a duplicate record was created. Here is my code:
Word wordToUpdate = context.Words.SingleOrDefault(w => w.Label == word);
if (wordToUpdate == null) // word doesn't exist yet so make a new one
{
    Word w = new Word() {
        // add new word stuff here
    };
    context.Words.Add(w);
}
else
{
    // word already exists so just add stuff to existing entry
    wordToUpdate.AnnotationGroups.Add(ag);
}
context.SaveChanges();
If the word doesn't already exist in the DB it is added twice. Here are the timestamps from the duplicate records:
CreatedOn
2014-03-11 06:52:35.743
2014-03-11 06:52:50.637
I've stepped through the code while watching the DB records and the new record is added during the first execution, so:
Why is context.Words.SingleOrDefault() returning null on the second execution when there is a matching record in the DB?
Duplicate records should never exist in this table. How can I improve my code to make sure it is impossible for that to happen?
EDIT
Let me add a few details I've observed while debugging this with a breakpoint at the beginning of the code snippet above:
The first time it's called, everything works as expected: since it's a new word, wordToUpdate is null and a new word is added.
I stopped the code at context.SaveChanges() and checked the DB; a new row shows up with the new word added.
The next call (this is an AJAX call from an Ajax.ActionLink link) fires with the same word.
wordToUpdate returns null even though the DB already contains that word, and thus a duplicate entry for that word is added (I'm not using the word as the primary key, and I'd rather handle this in code than handle errors thrown from the DB).
When context.SaveChanges is called again, another row is added to the DB.
So my question is: since this call is coming from the same client, is the code actually being executed synchronously? The way it steps through the code in debugging seems to suggest so, but this is where my knowledge of ASP.NET gets a little fuzzy.
Maybe your problem is with the predicate you are using: w => w.Label == word.
If you are comparing objects, then even though they might have the same contents, == just checks whether they are the same reference, which is the default implementation. You should override Equals in the Word class so the comparison uses key values or something like that.
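A minimal sketch of that override, assuming Word exposes a string Label; treat it as the shape of the suggestion rather than a guaranteed fix, since it only matters where the comparison actually runs through Equals:

public class Word
{
    public string Label { get; set; }

    // Two Word instances compare equal when their labels match.
    public override bool Equals(object obj)
    {
        return obj is Word other && Label == other.Label;
    }

    // Keep GetHashCode consistent with Equals.
    public override int GetHashCode()
    {
        return Label != null ? Label.GetHashCode() : 0;
    }
}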
Each task has a reference to the goal it is assigned to. When I try to delete the tasks, and then the goal, I get the error
"Store update, insert, or delete statement affected an unexpected number of rows (0). Entities may have been modified or deleted since entities were loaded. Refresh ObjectStateManager entries." on the line _goalRepository.Delete(goalId);
What am I doing wrong?
[HttpPost]
public void DeleteGoal(int goalId, bool deleteTasks)
{
    try
    {
        if (deleteTasks)
        {
            Goal goalWithTasks = _goalRepository.GetWithTasks(goalId);
            foreach (var task in goalWithTasks.Tasks)
            {
                _taskRepository.Delete(task.Id);
            }
            goalWithTasks.Tasks = null;
            _goalRepository.Update(goalWithTasks);
        }
        _goalRepository.Delete(goalId);
    }
    catch (Exception ex)
    {
        Exception deleteException = ex;
    }
}
Most likely the problem is that you're attempting to hold onto and reuse a context across page views. You should create a new context, do your work, and dispose of the context, atomically. This is called the Unit of Work pattern.
The main reason is that the context maintains state information about the database rows it has seen; if that state information becomes stale or out of date, you get exceptions like this one.
There are a lot of other reasons to use the Unit of Work pattern; I would suggest you do a web search and a little reading as an educational exercise.
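As a rough sketch of the pattern (AppDbContext and the Goals set are illustrative stand-ins for whatever your repositories wrap):

// One short-lived context per unit of work: create, use, dispose.
using (var context = new AppDbContext()) // hypothetical DbContext subclass
{
    var goal = context.Goals.Find(goalId); // null check omitted for brevity
    context.Goals.Remove(goal);
    context.SaveChanges(); // commit the whole unit at once
}
// No entity or tracking state from this context outlives the using block.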
This may have nothing to do with data access, though. You are removing items from a list as you are iterating it, which would cause problems if you were using a normal List. Without knowing much about the internals of EF, my guess is that your delete calls to the repository are changing the same list that you are iterating.
Try iterating the list in one pass and recording the Task ids you want to delete in a separate list. Then, when you have finished iterating, call delete on the Task list. For example:
var tasksToDelete = new List<int>();
foreach (var task in goalWithTasks.Tasks)
{
    tasksToDelete.Add(task.Id);
}
foreach (var id in tasksToDelete)
{
    _taskRepository.Delete(id);
}
This may not be the cause of your problem, but it is good practice never to change the collection you are iterating.
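An equivalent, more compact variant of the same idea is to snapshot the collection first, so the loop never touches the list being mutated:

// ToList() (System.Linq) copies the collection up front; deleting through
// the repository can then no longer affect what the loop iterates.
foreach (var task in goalWithTasks.Tasks.ToList())
{
    _taskRepository.Delete(task.Id);
}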
I ran across this issue at work (I am an intern): I was getting this error when trying to delete a piece of Equipment that was referenced in other data tables.
I was deleting all references before attempting to delete the Equipment, BUT the reference deletion was happening in another model which had its own database context, so the reference deletion was saved within that model's context.
The Equipment model's context did not know about the changes that had just happened in the other model's context, which is why, when I tried to delete the Equipment and then save the changes (e.g. db.SaveChanges()), the error occurred: the Equipment context still thought there were references to that equipment in other tables.
My solution for this was to re-allocate the context before attempting to delete the Equipment:
db = new DatabaseContext();
Now the newly allocated context has the latest snapshot of the database and is aware of all the changes made, and the deletion happens without issues.
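Put together, the fix looks roughly like this; the Equipment set, equipmentId and the Dispose call are illustrative, only the re-allocation line comes from the text above:

db.Dispose();               // drop the stale context and its cached state
db = new DatabaseContext(); // fresh context that sees the reference deletions

var equipment = db.Equipment.Find(equipmentId); // hypothetical set and key
db.Equipment.Remove(equipment);
db.SaveChanges();           // succeeds: no phantom references remain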
Hope my experience helps.