When I type quickly and then undo with Ctrl+Z, AvalonEdit's undo feature undoes one character at a time instead of grouping operations around pauses in typing.
I've tried starting undo groups from text-changed events, but I often run into exceptions due to nesting problems.
Is there a way to configure AvalonEdit to group undo operations?
Let's assume we have two long-lived process managers. Both sagas operate over 10 million items, for example. The first saga adds something to each item; the second saga removes it from each item. Given that both process managers need a few minutes to complete their job, I get into trouble if I run them simultaneously.
Some of those items would hold the value while the rest would not. The result is actually close to random and depends on the order of the commands that affect each particular item. I wondered if redispatching the "Remove" command in case of failure would solve the problem; I mean, if you try to remove a non-existent value, you should wait for the first saga to add the value. But while the process managers are working, someone else may dispatch a "Remove" or "Add" command, in which case my approach would fail.
How can I solve such a problem? :)
It seems that you would want the second saga to not run while the first saga is running (and presumably not until whatever depends on the first saga's additions has run). So the apparent solution would be to have a component (it could be a microservice, or a record in a strongly consistent datastore like zookeeper/etcd/consul) that gives permission for the sagas to start executing. An example protocol might look like the following (a rough sketch in code follows the list):
Saga sends a message to the component identifying the saga and conveying the intention to start
Component validates that no sagas might be running which would prevent this saga from running
Component responds with permission to start running
Subsequent saga attempts result in rejection until the running saga tells the component it's OK to run the other saga
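As a minimal sketch of that protocol (all names here, like SagaCoordinator, are illustrative, and a real deployment would keep this state in a strongly consistent store such as zookeeper/etcd/consul rather than in process memory):

    #include <mutex>
    #include <optional>
    #include <string>

    // Grants at most one saga permission to run at a time. This in-memory
    // version stands in for the durable component described above.
    class SagaCoordinator {
    public:
        // A saga identifies itself and conveys the intention to start.
        bool requestStart(const std::string &sagaId) {
            std::lock_guard<std::mutex> lock(mutex_);
            if (running_) return false; // reject: another saga holds permission
            running_ = sagaId;          // grant permission to start
            return true;
        }

        // The running saga reports completion so the other saga may start.
        void reportFinished(const std::string &sagaId) {
            std::lock_guard<std::mutex> lock(mutex_);
            if (running_ == sagaId) running_.reset();
        }

    private:
        std::mutex mutex_;
        std::optional<std::string> running_; // saga currently holding permission
    };

The failure mode discussed next corresponds to running_ never being cleared because reportFinished is never processed.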
Assuming that this component is reliably durable, the failure mode to worry about is that permission is granted but this component never processes the message that the saga finished (causes of this could include the permission message not getting delivered/processed or the saga crashing). No amount of acknowledgements or extra messages can solve this (it's basically the Two Generals' Problem).
A mitigation is to have this component (or something watching this component) alert if it seems that too much time has passed without saga completion. Whatever/whoever is responsible for ensuring liveness would then investigate to see if the saga is still running and if none is running, inform the component that it's OK to run the other saga. Note that this is not foolproof: it's quite possible for the decider in question to make what turns out to be the wrong decision.
I feel like I need more context. Whilst you don't say it explicitly, is the problem that the second saga tries to remove values that haven't been added by the first?
If that's the case, a simple solution would be to just use a third state.
What I mean by that is to define and declare item state more explicitly. You currently seem to have two states, with value and without value, but nothing to indicate whether an item is ready to be processed by the second saga because the first saga has already done its work on the item in question.
So all that needs to happen is that the second saga keeps looking for items where:
(with_value == true && ready_for_saga2 == true)
Call it ready_for_saga2 or "saga 1 processing complete", whatever seems more appropriate in your context.
I'd say that the solution would vary based on which actual problem we're trying to solve.
Say it's an inventory, where "Add" means items added to the inventory and "Remove" means items requested for delivery. Then the order of commands does not matter that much, because you could just process the request for delivery when new items are added to the inventory.
This would lead to an aggregate root with two collections: Items and PendingOrders.
One process manager adds new inventory to Items - if any orders are pending, it will complete these orders in the same transaction and remove both the item and the order from the collections.
If the other process manager adds an order (tries to remove an item), it will either do it right away, if there are any items left - or it will add the order to the pending orders to be processed when new items arrive (and maybe notify someone about the delay, while we're at it).
This way we end up with the same state regardless of the order of commands, but the actual real-world-problem has great influence on the model chosen.
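As a rough illustration of that model (type and method names are mine, not prescriptive), the aggregate might look like this:

    #include <deque>
    #include <string>

    struct Item  { std::string id; };
    struct Order { std::string id; };

    // Sketch of the aggregate root described above: adding inventory
    // completes a pending order in the same state transition if one is
    // waiting; removing either succeeds immediately or parks the order.
    class InventoryAggregate {
    public:
        void addItem(Item item) {
            if (!pendingOrders_.empty()) {
                // New stock immediately completes the oldest pending order;
                // both the item and the order leave the collections.
                pendingOrders_.pop_front();
                return;
            }
            items_.push_back(std::move(item));
        }

        void requestItem(Order order) {
            if (!items_.empty()) {
                items_.pop_front();                     // deliver right away
                return;
            }
            pendingOrders_.push_back(std::move(order)); // wait for new stock
        }

    private:
        std::deque<Item>  items_;
        std::deque<Order> pendingOrders_;
    };

Either order of addItem/requestItem calls converges to the same final state, which is the point of the design.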
If we have other real-world problems, we can model those too.
Let's say you have two users who each start a process that bulk-updates titles on inventory items. In this case you - and the users - have to decide how best to resolve this conflict - what will lead to the best real-world outcome.
If you want consistency across all the items - all or no items should be updated by a single bulk update - I would embed this knowledge in a new model. Let's call it UpdateTitlesProcesses. We have only one instance of this model in the system. The state is shared between processes. This model is effectively a command queue, and when a user initiates the bulk operation, it adds all the commands to the queue and starts processing each item one at a time.
When the second user initiates another title update, the business logic in our models will reject this, as there's already another update started. Or if the experts say that the last write should win, then we ditch the remaining commands from the first process and add the new ones (and similarly we should decide what should happen if a user issues a single title update, not bulk - should it be rejected, prioritized or put on hold?).
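A bare-bones sketch of that model with the "reject" policy (all names are illustrative) could look like this:

    #include <queue>
    #include <string>
    #include <vector>

    struct UpdateTitleCommand { std::string itemId; std::string newTitle; };

    // Single-instance model acting as a command queue: the first bulk
    // update fills the queue; a second bulk update is rejected while the
    // first is still being processed.
    class UpdateTitlesProcesses {
    public:
        bool startBulkUpdate(std::vector<UpdateTitleCommand> commands) {
            if (!queue_.empty()) return false;  // another update is running
            for (auto &c : commands) queue_.push(std::move(c));
            return true;
        }

        // Called repeatedly by the process manager, one item at a time.
        bool processNext() {
            if (queue_.empty()) return false;
            UpdateTitleCommand cmd = std::move(queue_.front());
            queue_.pop();
            // ... apply cmd.newTitle to the item identified by cmd.itemId ...
            return true;
        }

    private:
        std::queue<UpdateTitleCommand> queue_;
    };

A last-write-wins policy would instead clear queue_ before enqueuing the new commands.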
So in short I'd say:
Make it clear which real world problem we are solving - and thus which conflict resolution outcome is best (probably a trade off, often also something that requires user interaction or notification).
Model this explicitly (where processes, actions and conflict handling are also part of the model).
I have a process that opens a database using sqlite3_open and sets journal mode to WAL.
Another process uses sqlite3_open to open that same database. Everything seems to work, but the problem is that the second process does not seem to see changes made by the first process. I am trying to fetch a count, or rowids, and they stay the same.
I am sure that the database is being updated, because refreshing in SQLiteDatabaseBrowser shows the changes.
I tried multiple ways of opening databases, and multiple ways of querying, but no luck so far. What am I missing? Thanks!
Transactions are used to isolate connections from each other, especially to make changes visible only after a transaction has completed.
So for changes to be visible, the writing connection must end its transaction, and the reading connection must not have started its own transaction before that. (When using automatic transactions, ensure that statements are reset or finalized.)
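As a minimal illustration on the reading side (file and table names are made up, and error handling is omitted for brevity), note how ending the statement is what ends the implicit read transaction:

    #include <sqlite3.h>
    #include <cstdio>

    int main()
    {
        sqlite3 *db = nullptr;
        sqlite3_open("test.db", &db);

        sqlite3_stmt *stmt = nullptr;
        sqlite3_prepare_v2(db, "SELECT COUNT(*) FROM items", -1, &stmt, nullptr);

        if (sqlite3_step(stmt) == SQLITE_ROW)
            std::printf("count = %d\n", sqlite3_column_int(stmt, 0));

        // Resetting or finalizing the statement ends the implicit read
        // transaction it started. Until then the connection keeps reading
        // the same snapshot and never sees the other process's commits.
        sqlite3_finalize(stmt);

        sqlite3_close(db);
        return 0;
    }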
I figured out what the problem was, and as usual in cases where things make no sense, it was a mistake on my side. The problem is subtle, however, so it's worth mentioning.
I was calling sqlite3_reset on cached prepared statements lazily, that is, just before reusing the prepared statement rather than immediately after I was done executing it. The problem is that this pattern means there is always a prepared statement pending reset. Apparently, the reset is necessary to be able to see updates to the database (probably some mutex is being held).
Thanks for your help.
EDIT: after sleeping on it, this behavior actually makes sense. Updates should not become visible while a prepared statement is executing; otherwise its results might never be complete or accurate.
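A small sketch of the fix described above (the helper name is mine): reset each cached statement as soon as you finish stepping it, so no statement is ever left pending between uses:

    #include <sqlite3.h>

    // Run a cached prepared statement to completion, then reset it
    // immediately instead of lazily before the next reuse. This releases
    // the implicit read transaction right away, so the connection can see
    // updates committed by other processes in the meantime.
    void runCachedQuery(sqlite3_stmt *stmt)
    {
        while (sqlite3_step(stmt) == SQLITE_ROW) {
            // ... consume the row ...
        }
        sqlite3_reset(stmt); // eager reset: nothing stays pending
    }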
tl;dr
In a program that calls a function onEnterFrame on each frame, how do you store and mutate state? For instance, if you are making a level editor or a painting program, keeping track of state and making small incremental changes is tempting. What is the most performant way to handle such a thing with minimal global state mutation?
long version:
In an interactive program that accepts input from the user, like mouse clicks and keystrokes, we may need to keep track of the state of the data model. For instance:
Are some elements selected?
Is the mouse cursor hovering over an element, which one?
How long is the mouse button held down? Is this a click or a drag?
We also sometimes need to make small changes to a large model:
In a level editor, we may need to add one wall to an existing large set of prefabs. You don't want to recreate the set, no?
I have read Prof. Frisby's mostly-adequate-guide so far; there are many functional solutions to problems that involve extracting a piece of data from some source of input, performing a computation on that data, and passing the result to some output.
Sometimes an app lets the user interact and perform a sequence of mutations on data. For instance, what if a program lets the user draw on a canvas (like Paint) and we need to store the state of the painting as well as the actions that led to that state (for undo and logging/debugging purposes)?
What state is acceptable to store and what should we absolutely avoid?
Currently my conclusion is that we should never store state that we only need temporarily; we should pass it directly to the function that needs it.
But what if several functions need a specific computation? Take the case in which we check whether the mouse cursor is hovering over a specific area: why would we want to recompute that?
Are there ways to further minimize mutations of global state?
Storing state isn't the problem. It is mutating global state that is the problem. There are solutions to handling this. One that comes to mind is the State Monad. However, I am not sure this is ideal for undoing operations. But it is a place to start.
If you just want to look at the problem as an initial state and a set of operations then you can think of the operations as a List that can be traversed (with the head being the latest operation). Undoing a set of n operations could be accomplished by traversing the first n elements of the list and cons-ing the inverse of these operations to the list.
That way you don't modify global state at all.
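A small sketch of that idea (the names and the plain int state are purely illustrative): each operation carries its inverse, the current state is a fold over the list, and undo appends inverses instead of mutating anything:

    #include <cstddef>
    #include <functional>
    #include <vector>

    // One operation and its inverse over some state (here just an int).
    struct Op {
        std::function<int(int)> apply;
        std::function<int(int)> invert;
    };

    // The current state is a fold of all operations over the initial
    // state; nothing global is ever mutated.
    int currentState(int initial, const std::vector<Op> &ops)
    {
        int s = initial;
        for (const Op &op : ops) s = op.apply(s);
        return s;
    }

    // Undo the latest n operations by appending their inverses (newest
    // first). The existing history stays intact, which also gives you
    // logging and redo for free.
    std::vector<Op> undo(std::vector<Op> ops, std::size_t n)
    {
        const std::size_t len = ops.size();
        for (std::size_t i = 0; i < n && i < len; ++i) {
            Op inverse{ops[len - 1 - i].invert, ops[len - 1 - i].apply};
            ops.push_back(std::move(inverse));
        }
        return ops;
    }

For example, if one operation is "add 5" with inverse "subtract 5", undoing it simply appends the subtraction to the history.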
We have implemented 6-7 triggers on a table, and 4 of these are update triggers. All 4 triggers require long processing because of data manipulation and conditions. Whenever a trigger executes, all the pages on the website stop responding and hang for every other user, from different systems as well. Even when we execute an update statement on the trigger-holding table in SQL Server Management Studio, it also hangs. Can we resolve this hanging issue by moving the trigger code into a stored procedure and calling this stored procedure after the table's update statement?
I think the trigger blocks table access for other users while it executes. If not, can anyone suggest a solution?
Triggers are dangerous - they get fired whenever things happen, and you have no control over when and how often they fire.
You should definitely NOT do any time-consuming processing in a trigger! A trigger should be super fast, and lean.
If you need processing - let the trigger record the info needed into a separate "command" table, and have another process (e.g. a scheduled SQL Agent job) that checks that table for commands to be executed, and then executes those commands - separately, independently of the main application, in a separate execution path.
Don't block your main app by doing excessive data processing / manipulation in a trigger! That's the wrong place to do this!
Can we resolve this hanging issue by shifting this trigger code into the stored procedure
and call this stored procedures after update statement of the table?
You have a box that weighs a ton. Does it get lighter when you put it into some nice packaging?
A trigger is already compiled. Putting it into a stored procedure is just dressing it up differently.
Your problem is that you abuse triggers to do heavy processing - something they should not do by design. Change the design.
Because I think trigger block the table access to the other user at the time of execution.
Well, triggers do NO SUCH THING - so you think wrong.
A trigger does what it is told to do and an empty trigger sets zero locks (the locks are there from whatever triggers it). If you do set up a table wide lock - fire whoever did that and redesign.
Triggers should be fast, light and be over fast. NO heavy processing in them.
Without actually seeing the triggers it's impossible to diagnose this confidently but here goes...
The trigger won't set up a lock as such, but if it sets off other UPDATE statements, those will require locks, and if those UPDATE statements fire other triggers, you could have a chain reaction that produces the kind of grief you seem to be experiencing.
If that sounds like what might be happening, then removing the triggers and doing the processing explicitly by running a stored procedure at the end may fix it. If the stored procedure is rubbish, you'll still have problems, but at least they'll be easier to fix. Try to ensure that the stored procedure only updates the records that need updating.
The main problem with shifting the functionality to a stored procedure that you run after the update is ensuring that it is in fact run every time.
If your asp.net skills are stronger than your T-SQL skills then this should be a far easier problem to solve than untangling a web of SQL triggers.
The other issue is that between the update completing and the stored procedure completing, the records will be in an intermediate state, showing the initial change but not the remaining ones. This may or may not be a problem in your case.
In my mathematical application I am using timers to regularly perform certain actions. These actions can also be configured by my users. Now I don't want these actions to be executed if there is already another action busy.
E.g. if the user just started a complex calculation by selecting a menu entry, I don't want to execute the actions behind my timers.
Problem is that the user can execute an action via a lot of different ways (via the menu, by clicking somewhere, via popup menu, via drag-and-drop, ...). What I effectively want is to prevent the timers from going off if the application is currently not in the main event loop.
I will give a more concrete example to make it clearer:
At startup I create the timers
If a timer goes off, I execute some actions which, in practice, could access almost every bit of my application's data structure.
Now suppose the user starts a mathematical algorithm (via the menu, by clicking or by dragging elements on the screen, it doesn't matter how he started it).
The algorithm will perform lots of calculations (in the main thread). Since they are executed in the main thread, the timer events will not go off.
Now the algorithm shows a message box (could be a warning or a question).
While the message box is open, events are processed again, including my timer events, which could possibly perform incorrect calculations because there is already another algorithm running.
Reworking my application to move logic to a separate worker thread, or adding checks to all of my actions, isn't possible at this moment. So please don't suggest completely reworking my application.
What I tried so far is the following:
Using postEvent to send an event, hoping that this event would only be processed in the main event loop. Unfortunately, the message box's event loop also seems to process posted events.
Using the QEvent::WindowBlocked and QEvent::WindowUnblocked events to detect when a modal dialog is opened. In my timer-event logic I can check whether we are between a QEvent::WindowBlocked and a QEvent::WindowUnblocked or not. Unfortunately, these events only work for modal dialogs created by Qt itself, not for other dialogs (e.g. the Windows MessageBox, or the system's printer configuration dialog). Also, this trick would not help if there were other event loops created by subroutines.
What I actually need to solve my problem is a simple function that:
If the application is handling an event in the main event loop, it returns true
If the application is handling an event in another [sub] event loop, it returns false
An alternative could be to return a level that indicates the 'depth' of the handled event.
Any suggestions?
You could hook into the event loop of your main thread/application using QAbstractEventDispatcher, and conditionally filter out QTimer events based on your application state.
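A rough sketch of that kind of filtering, here using an application-wide event filter together with QThread::loopLevel() (available since Qt 5.5) to detect nested Qt event loops; the class and object names are illustrative:

    #include <QApplication>
    #include <QEvent>
    #include <QThread>
    #include <QTimer>

    // Swallows timer events whenever a nested Qt event loop is running.
    // loopLevel() is 1 while only app.exec() runs and 2+ inside e.g.
    // QMessageBox::exec(). Caveat: native dialogs (the Windows MessageBox,
    // printer dialogs) spin native loops that Qt does not count.
    class TimerGate : public QObject {
    protected:
        bool eventFilter(QObject *watched, QEvent *event) override
        {
            if (event->type() == QEvent::Timer &&
                QThread::currentThread()->loopLevel() > 1)
                return true; // drop the timer event inside nested loops
            return QObject::eventFilter(watched, event);
        }
    };

    int main(int argc, char *argv[])
    {
        QApplication app(argc, argv);

        TimerGate gate;
        app.installEventFilter(&gate); // sees every event before its target

        QTimer timer;
        QObject::connect(&timer, &QTimer::timeout, [] {
            // ... periodic action; only reached from the outermost loop ...
        });
        timer.start(1000);

        return app.exec();
    }

In a real application you would probably restrict the filter to your own timer objects (by checking watched) instead of dropping every QEvent::Timer, since Qt uses timer events internally as well.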