I'm using promoted properties (http://msdn.microsoft.com/en-us/library/ff642473.aspx) to store information needed to track a workflow.
During execution the values are correctly stored and I can query them using the view, but once the workflow is persisted the view becomes empty and I can't find the information anymore.
Can someone explain to me how to keep those values until the natural completion of the workflow?
Thanks
Update: a few more details.
I'm hosting the workflows in IIS.
I promote the values at the beginning of the workflow, and I would rather not promote them again at every persistence point (that was the first workaround I thought of).
Each time a workflow is persisted the complete state is persisted. There is no incremental addition. So by not adding the promoted properties on subsequent persists you are effectively removing them from the instance store.
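If it helps, the usual way to keep promoted values flowing into the store is a PersistenceParticipant extension, since the runtime calls its CollectValues on every persist. A minimal sketch, shown self-hosted for brevity; the class, namespace, and property names here are illustrative, not from the original post:

using System.Activities.DurableInstancing;
using System.Activities.Persistence;
using System.Collections.Generic;
using System.Xml.Linq;

public class TrackingDataParticipant : PersistenceParticipant
{
    static readonly XNamespace Ns = "http://example.com/workflow/properties";

    public int OrderId { get; set; }

    // The runtime calls this on *every* persist, so the promoted value
    // is re-supplied each time rather than only at workflow start.
    protected override void CollectValues(
        out IDictionary<XName, object> readWriteValues,
        out IDictionary<XName, object> writeOnlyValues)
    {
        readWriteValues = new Dictionary<XName, object>
        {
            { Ns.GetName("OrderId"), OrderId }
        };
        writeOnlyValues = null;
    }
}

// Registration: tell the store which properties to promote, then add the
// participant as an extension of the workflow host.
// var store = new SqlWorkflowInstanceStore(connectionString);
// store.Promote("TrackingData", new[] { Ns.GetName("OrderId") }, null);
// wfApp.InstanceStore = store;
// wfApp.Extensions.Add(new TrackingDataParticipant { OrderId = 42 });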
I finally found the problem.
In the web.config I was adding my extension after the instance store element, but order matters.
My configuration looks like this now:
..
Everything works fine now, and the promoted values are always available.
I'm reading the source of an ASP.NET Core example project (from MSDN) and trying to understand all of it.
There's an Edit Razor page which shows the values of an entity record in <input> fields, allowing the user to see and change different fields of a given record. There's this line:
Movie = await _context.Movie.FirstOrDefaultAsync(m => m.ID == id);
...
_context.Attach(Movie).State = EntityState.Modified;
I don't understand why it attaches the entity and changes its EntityState to Modified, instead of fetching the record, changing it, and then calling SaveChanges().
My guess is that their example loads the movie in one call and passes it to the view; then, in a separate update action, it passes the modified entity from the view back to the controller, which attaches it, sets its state to Modified, and calls SaveChanges().
IMHO this is extremely bad practice with EF, for a number of reasons, and I have no idea why Microsoft uses it in examples (other than that it makes CRUD look easy-peasy).
By serializing entities to the view, you typically serialize far more data to send across the wire than your view actually needs, and you give malicious or curious users far more information about your system than you should.
You are bound to run into serializer errors with bi-directional references ("A" has a reference to "B", which has a reference back to "A"). Serializers (such as JSON serializers) generally don't like these.
You are bound to run into performance issues with lazy-loading calls as the serializer "touches" references. When dealing with collections of results, the resulting lazy-load calls can blow performance completely out of the water.
Without lazy loading enabled, you can easily run into issues where referenced data comes through as null or as potentially incomplete collections, because the context may have some of the referenced data in its cache to associate with the entity, but not the complete set of child records.
By passing entities back to the controller, you expose your system to unintentional modifications: an attacker can modify the entity data in ways you do not intend, and when you attach it, set the state to Modified, and save, you overwrite the stored state. I.e. they can change FKs or otherwise alter data in ways that are not supported, or even displayed, by your UI.
You are bound to run into stale-data issues where the data has changed between the point you read the entity and the point it is saved. Attaching and saving takes a brutal last-in-wins approach to concurrent data.
My general advice is to:
1. Leverage Select or AutoMapper's ProjectTo to populate view model classes with just the fields your view needs. This avoids the risks of lazy loads, and minimizes the data passed to the client (faster, and it reveals nothing extra about your system).
2. Trust absolutely nothing coming back from the client. Validate the returned view model object; then, only after you're satisfied it is legit, load the entity (or entities) from the context and copy the applicable fields across. This also gives you the opportunity to use row versioning to handle concurrency issues. Both steps are sketched below.
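To make both points concrete, here is a minimal sketch assuming EF Core in a Razor Pages app. Movie, AppDbContext, MovieEditViewModel, and the RowVersion column are hypothetical stand-ins for your own types:

using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Mvc.RazorPages;
using Microsoft.EntityFrameworkCore;

public class MovieEditViewModel
{
    public int ID { get; set; }
    public string Title { get; set; }
    public byte[] RowVersion { get; set; }  // for the row-versioning check
}

public class EditModel : PageModel
{
    private readonly AppDbContext _context;
    public EditModel(AppDbContext context) { _context = context; }

    [BindProperty]
    public MovieEditViewModel Movie { get; set; }

    // 1. Project only the fields the view needs; no entity crosses the wire.
    public async Task<IActionResult> OnGetAsync(int id)
    {
        Movie = await _context.Movie
            .Where(m => m.ID == id)
            .Select(m => new MovieEditViewModel
            {
                ID = m.ID,
                Title = m.Title,
                RowVersion = m.RowVersion
            })
            .FirstOrDefaultAsync();
        if (Movie == null) return NotFound();
        return Page();
    }

    // 2. Trust nothing from the client: validate, re-load the entity,
    //    and copy across only the fields the UI is allowed to change.
    public async Task<IActionResult> OnPostAsync()
    {
        if (!ModelState.IsValid) return Page();

        var entity = await _context.Movie.FirstOrDefaultAsync(m => m.ID == Movie.ID);
        if (entity == null) return NotFound();

        // Row versioning: fail if the row changed since the user read it.
        _context.Entry(entity).Property(nameof(entity.RowVersion)).OriginalValue = Movie.RowVersion;
        entity.Title = Movie.Title;

        await _context.SaveChangesAsync(); // throws DbUpdateConcurrencyException if stale
        return RedirectToPage("./Index");
    }
}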
Disclaimer: you certainly can address most of the issues I've pointed out while still passing entities back and forth, but you definitely leave the door wide open to vulnerabilities and bugs creeping in when someone defaults to an attach-and-save, or lazy loads start creeping in.
With "allow_versions" set to "FALSE" or to "TRUE" (please consider both cases), how does Swift respond in the scenario where a file is being overwritten while a delete request comes in simultaneously (the overwrite starting first, then the deletion)?
Please share your thoughts.
Many thanks!
The timestamp assigned to a request as it comes in to the proxy is what ultimately decides which "last" write wins.
If you have a long-running upload and issue the delete during it, the tombstone will have a later timestamp and will eventually take precedence, even if the upload finishes after the delete.
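A toy illustration of that rule (not Swift code, just the decision it makes): the proxy stamps each request on arrival, and the replicas keep whichever operation carries the newest timestamp, regardless of which request finished last.

using System;

class LastWriteWinsDemo
{
    static void Main()
    {
        // The upload (PUT) arrives first and is stamped on arrival...
        var putStamp    = DateTimeOffset.Parse("2023-01-01T10:00:00Z");
        // ...the DELETE arrives while the upload is still transferring.
        var deleteStamp = DateTimeOffset.Parse("2023-01-01T10:00:05Z");

        // Even if the PUT *finishes* at 10:00:30, its stamp is still 10:00:00,
        // so the tombstone (DELETE) eventually takes precedence.
        Console.WriteLine(deleteStamp > putStamp ? "DELETE wins" : "PUT wins");
    }
}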
When the container versioning feature is in use, overwriting an object in a versioned container causes the current object data to be COPY'd off to the versions location before the PUT data is sent to the storage node with the assigned timestamp. For a delete in a versioned container, the "previous version" is discovered at the time the delete request is made (and is subject to eventual consistency in the container listing); it is only deleted once it has been copied back into the current location for the object.
More information about object versioning is available here:
http://docs.openstack.org/developer/swift/overview_object_versioning.html
Here is a quick summary. It is still a very high-level view, but I hope it helps in understanding how this works under the hood.
The diagram (linked below) sets the two simultaneous scenarios (A and B) against the enabling/disabling of the Swift object versioning feature. The outcome for each scenario is shown in the diagram.
Download the diagram.
Please share your thoughts if any.
I have asked a few questions today as I try to think through to the solution of a problem.
We have a complex data structure where all of the various entities are tightly interconnected, with almost all entities heavily dependent upon entities of other types.
The project is a website (MVC3, .NET 4), and all of the logic is implemented using LINQ-to-SQL (2008) in the business layer.
What we need to do is have a user "lock" the system while they make their changes (there are other, non-database-related reasons for this which I won't go into here). While this user is making their changes we want to be able to show them the original state of the entities they are updating, as well as a "preview" of the changes they have made. When finished, they need to be able to rollback/commit.
We have considered these options:
1. Holding open a transaction for the length of time a user takes to make multiple changes stinks, so that's out.
2. Holding a copy of all the data in memory (or cached to disk) is an option, but there is a heck of a lot of it, so that seems unreasonable.
3. Maintaining a set of secondary tables, or attempting to use session state to store the changes; but this is complex and difficult to maintain.
4. Using two databases, flipping between them by connection string, and using T-SQL to manage replication, putting them back in sync after commit/rollback (i.e. switching on/off, forcing snapshots, reversing direction, etc.).
We're a bit stumped for a solution that is relatively easy to maintain. Any suggestions?
Our solution to a similar problem is to use a locking table that holds locks per entity type in our system. When the client application wants to edit an entity, we do a "GetWithLock" which gets the client the most up-to-date version of the entity's data as well as obtaining a lock (a GUID that is stored in the lock table along with the entity type and the entity ID). This prevents other users from editing the same entity. When you commit your changes with an update, you release the lock by deleting the lock record from the lock table. Since stored procedures are the API we use for interacting with the database, this allows a very straightforward way to lock/unlock access to specific entities.
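For illustration, the lock-acquisition step might look roughly like this; the EntityLocks table and its columns are hypothetical, and in our system this logic actually lives in a stored procedure:

using System;
using System.Data.SqlClient;

public class LockRepository
{
    // Try to take the lock; 0 rows inserted means someone else already holds it.
    public Guid? TryAcquireLock(SqlConnection conn, string entityType, int entityId)
    {
        var token = Guid.NewGuid();
        using (var cmd = new SqlCommand(@"
            INSERT INTO EntityLocks (EntityType, EntityId, LockToken)
            SELECT @type, @id, @token
            WHERE NOT EXISTS (SELECT 1 FROM EntityLocks
                              WHERE EntityType = @type AND EntityId = @id);", conn))
        {
            cmd.Parameters.AddWithValue("@type", entityType);
            cmd.Parameters.AddWithValue("@id", entityId);
            cmd.Parameters.AddWithValue("@token", token);
            return cmd.ExecuteNonQuery() == 1 ? token : (Guid?)null;
        }
    }
}

// "GetWithLock" = TryAcquireLock plus reading the latest entity data if it
// succeeds; the matching update later deletes the lock row, keyed by the token.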
On the client side, we implement IEditableObject on the UI model classes. Our model classes hold a reference to the instance of the service entity that was retrieved on the service call. This allows the UI to do a Begin/End/Cancel Edit and do the commit or rollback as necessary. By holding the instance of the original service entity, we are able to see the original and current data, which would allow the user to get that "preview" you're looking for.
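For reference, the IEditableObject side might look something like this minimal sketch (the model and entity names are mine, not from our actual code):

using System.ComponentModel;

public class CustomerEntity          // what the service call returned
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class CustomerModel : IEditableObject
{
    private readonly CustomerEntity _original; // kept for the "preview" diff
    private string _backupName;
    private bool _editing;

    public CustomerModel(CustomerEntity original)
    {
        _original = original;
        Name = original.Name;
    }

    public string Name { get; set; }
    public string OriginalName => _original.Name; // original vs. current data

    public void BeginEdit()
    {
        if (_editing) return;
        _backupName = Name;          // snapshot for rollback
        _editing = true;
    }

    public void CancelEdit()         // rollback
    {
        if (!_editing) return;
        Name = _backupName;
        _editing = false;
    }

    public void EndEdit()            // commit; the update also releases the lock
    {
        _editing = false;
        // e.g. service.UpdateCustomer(_original.Id, Name, lockToken);
    }
}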
While our solution does not implement LINQ, I don't believe there's anything unique in our approach that would prevent you from using LINQ as well.
HTH
Consider this:
1. Long transactions make the system less scalable. If you issue an UPDATE command, the update locks last until commit/rollback, preventing other transactions from proceeding.
2. Secondary tables/databases can be modified by concurrent transactions, so you cannot rely on the data in those tables. The only way around that is to lock them => see no. 1.
3. Serializable transactions in some database engines use versions of the data in your tables. So after the first command is executed, the transaction sees exactly the data available at command-execution time. This might help you show the changes made by the user, but you have no guarantee of being able to save them back to storage.
4. DataSets contain old/new versions of the data, but that is unfortunately outside the technologies you're aiming at.
Use a set of secondary tables.
The problem is that your connection should see two versions of data while the other connections should see only one (or two, one of them being their own).
While it is theoretically possible, and is implemented in Oracle using flashback queries, SQL Server does not support it natively, since it has no means of querying previous versions of records.
You can issue a query like this:
SELECT  *
FROM    mytable
        AS OF TIMESTAMP TO_TIMESTAMP('2010-01-17', 'YYYY-MM-DD')
in Oracle but not in SQL Server.
This means that you need to implement this functionality yourself (placing the new versions of rows into your own tables).
Sounds like an ugly problem, and raises a whole lot of questions you won't be able to go into on SO. I got the following idea while reading your problem, and while it "smells" as bad as the others you list, it may help you work up an eventual solution.
First, have some kind of locking system, as described by @user580122, to flag/record the fact that one of these transactions is going on. (Be sure to include some kind of periodic automated check, to test for lost or abandoned transactions!)
Next, for every change you make to the database, log it somehow, either in the application or in a dedicated table somewhere. The idea is that, given a copy of the database at state X, you could re-run the steps submitted by the user at any time.
Next up is figuring out how to use database snapshots. Read up on these in BOL; the general idea is that you create a point-in-time snapshot of the database, do whatever you want with it, and eventually throw it away. (Only available in SQL Server 2005 and up, Enterprise edition only.)
So:
1. A user comes along and initiates one of these meta-transactions.
2. A flag is marked in the database showing what is going on, and a point-in-time snapshot of the database is taken (the statements for this are sketched after this list). A new meta-transaction cannot be started if one is already in process. (Again, check for lost transactions now and then!)
3. Every change made to the database is tracked and recorded in such a fashion that it could be repeated.
4. If the user decides to cancel the transaction, you just drop the snapshot, and nothing is changed.
5. If the user decides to keep the transaction, you drop the snapshot and then immediately re-apply the logged changes to the "real" database. This should work, since your requirements imply that, while someone is working on one of these, no one else can touch the related parts of the database.
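For what it's worth, the snapshot plumbing itself is only a couple of statements; a rough sketch, assuming SQL Server Enterprise, made-up database/file names, and an open SqlConnection conn to the master database:

using System.Data.SqlClient;

// Create the point-in-time snapshot when the meta-transaction starts.
new SqlCommand(@"
    CREATE DATABASE MyDb_Snapshot
    ON (NAME = MyDb_Data, FILENAME = 'C:\Snapshots\MyDb_Snapshot.ss')
    AS SNAPSHOT OF MyDb;", conn).ExecuteNonQuery();

// ... the user works; the snapshot provides the database's original state ...

// On cancel or keep, throw the snapshot away.
new SqlCommand("DROP DATABASE MyDb_Snapshot;", conn).ExecuteNonQuery();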
Yep, this sure smells, and it may not apply too well to your problem. Hopefully the ideas here help you work something out.
I've been quite impressed with dynamic data and how easy and quick it is to get a simple site up and running. I'm planning on using it for a simple internal HR admin site for registering people's skills/degrees/etc.
I've been watching the intro videos at www.asp.net/dynamicdata and one thing they never mention is how to handle concurrency control.
It seems that DD does not handle it right out of the box (unless there is some setting I haven't seen), as I manually generated a change-conflict exception and the app failed without any user-friendly message.
Anybody know if DD handles it out of the box? Or do you have to somehow build it into the site?
Concurrency is not handled out of the box by DD.
One approach would be to implement this on the database side, by adding a "last updated" timestamp column (or other unique stamp, such as a GUID) to each table.
You then create an update trigger for each table. For each row being updated, is the "last updated" stamp passed in the same as the one on the row in the database?
If so, update the row, but give it a new "last updated" stamp.
If not, raise a specific "Data is out of date" exception.
On the client side, for each row you update, you'd need to refresh the "last updated" stamp.
In the client code you watch for the "Data is out of date" exception and display a helpful message to the user, asking them to refresh the data and re-submit their change.
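On the client side that might look roughly like the following; the table, columns, and error text are hypothetical, the variables are assumed to hold the values previously read along with an open connection, and the update trigger itself (which compares the incoming stamp against the row's and either issues a new stamp or raises the error) is not shown:

using System.Data.SqlClient;

using (var cmd = new SqlCommand(
    "UPDATE Employees SET Skill = @skill, LastUpdated = @stamp WHERE Id = @id;",
    conn))
{
    cmd.Parameters.AddWithValue("@skill", newSkill);
    cmd.Parameters.AddWithValue("@stamp", originalStamp); // stamp read with the row
    cmd.Parameters.AddWithValue("@id", employeeId);
    try
    {
        cmd.ExecuteNonQuery();
    }
    catch (SqlException ex) when (ex.Message.Contains("Data is out of date"))
    {
        // Tell the user to refresh the data and re-submit their change.
    }
}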
Hope this helps.
It all depends on what you mean by "out of the box". Of course you have to write a fair amount of code to handle concurrency, but some features help us implement it.
My favorite model is optimistic concurrency based on the rowversion datatype of SQL Server. It is like a "last updated" timestamp, but you do not need an update trigger on each table. All updates of the corresponding "timestamp" column in your tables are made automatically by SQL Server on every update of the data in the table row. I described it in my old answer, Concurrency handling of Sql transaction. I hope it will be helpful for you.
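A minimal sketch of the mapping, using LINQ to SQL attributes on a made-up table:

using System.Data.Linq;
using System.Data.Linq.Mapping;

[Table(Name = "Employees")]
public class Employee
{
    [Column(IsPrimaryKey = true)]
    public int Id;

    [Column]
    public string Skill;

    // Maps to a SQL Server rowversion column: the server bumps it on every
    // update, so no trigger is needed; LINQ to SQL adds it to the UPDATE's
    // WHERE clause automatically.
    [Column(IsVersion = true, IsDbGenerated = true)]
    public Binary Version;
}

// try { db.SubmitChanges(); }
// catch (ChangeConflictException) { /* data was out of date: refresh, retry */ }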
I was of the impression that Dynamic Data does the update on the underlying data source. Maybe you can specify the concurrency model (pessimistic/optimistic) on the data meta model that gets registered in the App_Init section. But you would probably get an "unable to save changes" error, so by default it would be pessimistic: last in loses....
Sorry for the late reply. Yes, DD is very strong when it comes to fast development of a project. What's more, DD has been further enhanced and included in .NET 4.0.
DD mostly works on LINQ to SQL, so I suggest you have a look at that part.
In LINQ to SQL, when you go to the properties of a table column you will find a property that specifies whether to check the old value before updating to the new value. If you set that to true, I think your problem will be handled.
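If it helps, that designer setting ("Update Check") corresponds to the UpdateCheck value in the generated mapping, shown here on a hypothetical column:

// With UpdateCheck.Always, LINQ to SQL includes the old value of this
// column in the UPDATE's WHERE clause and throws ChangeConflictException
// if it no longer matches.
[Column(UpdateCheck = UpdateCheck.Always)]
public string Skill;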
Wish you the best of luck.
Let's learn from each other.
The solution given by Binary Worrier works, and it's widely used on platforms providing a GUI to merge the changes (e.g. source control programs, wiki engines, etc.). That way none of the users lose their changes. On the other hand, it requires a fair amount of code, or the use of external components or DLLs.
If you are not happy with that, another approach is just to lock the record that is being edited. Nobody else will be able to edit that record until the user commits the changes or his session expires. This has pros and cons, but it requires little code compared with the first option.
Until recently I had been using Cairngorm as a framework for Flex. However, on this latest project I have switched to Mate. It's still confusing me a little, as I had become used to leaving data in the model. I have a couple of components which rely on the same dataset (collection).
In the component, the creationComplete handler sends a 'GiveMeMyDataEvent', which is caught by one of the event maps. Now, in Cairngorm, in my command class I would have had a quick peek in the model to decide whether I needed to get the data from the server or not, and then either returned the data from the model or called the DB.
How would I do this in Mate? Or is there a better way to go about it? I'm trying to reuse the data that has already been received from the server, but I'm not sure whether I have loaded the data or not: if a component which uses that same data has already been instantiated then the answer is yes, otherwise no.
Any help/hints greatly appreciated.
Most things in Mate are indirect. You have managers that manage your data, and you set up injectors (which are bindings) between the managers and your views. The injectors make sure your views are synchronized with your managers. That way the views always have the latest data. Views don't get updated as a direct consequence of dispatching an event, but as an indirect consequence.
When you want to load new data you dispatch an event which is caught by an event map, which in turn calls some service, which loads data and returns it to the event map, and the event map sticks it into the appropriate manager.
When the manager gets updated the injectors make sure that the views are updated.
By using injectors you are guaranteed to always have the latest data in your views; so if the views have data, the data is loaded. The exception is when you need to update periodically, in which case it is up to you to determine whether the data is stale and dispatch an event that triggers a service call, which triggers an update, which triggers the injectors to push the new data into the views again, and round it goes.
So, in short the answer to your question is that you need to make sure you use injectors properly. If this is a too high-level answer for you I know you can get more help in the Mate forums.
I ran into a similar situation with the app I am working on at the moment, and found that it is easily implemented in Mate once you start thinking in terms of two events.
The first event would be something like DataEvent.REFRESH_MY_DATA. This event is handled by some DataManager, which can decide either to ignore it (since the data is already present in the client and considered up to date) or to dispatch an event like DataEvent.FETCH_MY_DATA.
The FETCH_MY_DATA event triggers a service call in the event map, which updates a value in the manager. This update is property-injected into the view, happy days :)
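Translated out of Mate's wiring, the manager's decision is basically this (sketched in C# purely for illustration; a real Mate app would write it in ActionScript, with the event map making the service call and the injectors pushing the result into the views):

using System;
using System.Collections.Generic;

public class MovieDataManager
{
    private List<string> _data;            // null until first load

    // Stands in for dispatching DataEvent.FETCH_MY_DATA to the event map.
    public event Action FetchRequested;

    // Handles DataEvent.REFRESH_MY_DATA from a view's creationComplete.
    public void OnRefreshRequested()
    {
        // Data already present and considered fresh: nothing to do; the
        // injectors have already pushed it into every interested view.
        if (_data != null) return;

        FetchRequested?.Invoke();          // ask the event map to hit the service
    }

    // Called (via the event map) when the service returns.
    public void SetData(List<string> data) { _data = data; }
}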