I used WF (Windows Workflow Foundation) on my web project. The project has several workflows: a visitor fills in a form, the form is posted to technical people to do their job, then the workflow moves on to other states, and so on.
When I change a workflow (create a new activity or state) and then run (continue) workflows that were persisted to the database before the change, they throw errors:
Server was unable to process request. ---> System.InvalidOperationException: Workflow with id "82b0cb6c-d6b7-43cd-9071-04a1078954ec" not found in state persistence store.
at System.Workflow.Runtime.Hosting.PersistenceDBAccessor.RetrieveInstanceState(Guid instanceStateId, Guid ownerId, DateTime timeout)
at System.Workflow.Runtime.Hosting.SqlWorkflowPersistenceService.LoadWorkflowInstanceState(Guid id)
at System.Workflow.Runtime.WorkflowRuntime.InitializeExecutor(Guid instanceId, CreationContext context, WorkflowExecutor executor, WorkflowInstance workflowInstance)
at System.Workflow.Runtime.WorkflowRuntime.Load(Guid key, CreationContext context, WorkflowInstance workflowInstance)
at System.Workflow.Runtime.WorkflowRuntime.GetWorkflow(Guid instanceId)
at System.Workflow.Activities.WorkflowWebService.Invoke(Type interfaceType, String methodName, Boolean isActivation, Object[] parameters)
How can I get the old, already-persisted workflows to continue running after such changes?
Thanks
Your error looks pretty specific there. The workflow with your particular GUID doesn't exist.
Changing a workflow definition does not change existing instances. They will continue to execute against the definition they were started with unless you initiate an actual change process on them. You'll want to dig into Dynamic Workflow Updates if this is what you want to accomplish.
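In WF 3.x, a dynamic update is applied to one specific running instance, roughly like this. This is only a minimal sketch: it assumes you can still load the instance (which the persistence error above may prevent), and the activity name is hypothetical.

```csharp
// Sketch of a WF 3.x dynamic update against a single running instance.
// Assumes "instance" is an already-loaded WorkflowInstance.
Activity root = instance.GetWorkflowDefinition();
WorkflowChanges changes = new WorkflowChanges(root);

// Modify the transient copy of the definition, e.g. append a new step
// ("newStep" is a placeholder name).
changes.TransientWorkflow.Activities.Add(new CodeActivity("newStep"));

ValidationErrorCollection errors = changes.Validate();
if (!errors.HasErrors)
{
    instance.ApplyWorkflowChanges(changes); // applies only to this instance
}
```

Note that this updates instances one at a time; there is no built-in way to bulk-migrate every persisted instance to a new definition.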
Related
I have an ASP.NET Core 3.1 app with AD authentication that works fine. We are migrating it to a new infra setup, with a new set of CI/CD pipelines that use terraform. After re-deployment it fails with an InvalidOperationException about missing authorization services (full stack trace below). Any idea what could be causing it?
It uses a containerized build so in theory everything should be the same, apart from the Azure components. As the error has to do with Authentication/Authorization, I have reviewed the AD app registration very closely. I believe I've got it as close as is possible with terraform. Could it be a peculiarity with the app registration that terraform is not supporting? I'm linking the two app manifests (stripped of company-specific data). I'm also doing a detailed comparison of all the differences, under the stack trace.
AD app manifest (old, which works): https://drive.google.com/file/d/187LFczxaReqZLxDrlv7SoWA8nEccUqOj/view?usp=sharing
AD app manifest (new one, broken): https://drive.google.com/file/d/1a0ygOo7MvBt6enro3J5DaBvj6lP9HzH6/view?usp=sharing
Additional info that may be relevant - the app uses swagger, which for some reason has its own app registration. I have yet to investigate that aspect in comparable detail
Review of the differences:
replyUrlsWithType - I'm guessing this is governed by the redirect_uris param in terraform, which doesn't let me change it for apps of type 'api', only for web, spa and public_client
signInUrl - this doesn't seem to exist in the terraform module at all; it's probably a copy of the above (the values are the same in the legacy app)
acceptMappedClaims - ends up as 'false' even if I hardcode null into the terraform script
optionalClaims - null vs. an object with 3 empty arrays; I can't seem to get terraform to spit out the latter
This seems to be it - the rest of the differences seem to be IDs and names. I can't think of anything else that's different between the two environments. It's on two different Azure subscriptions (same AD tenant) and all other dependencies are different (storage, sql db, azure service bus, azure search, etc.) but if there was an issue with these I would expect the error to indicate the same.
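For reference, the terraform shape I'm working with looks roughly like the fragment below. This is only a sketch: the resource and block names come from the azuread provider, but the display name and URI values are placeholders, not our real configuration.

```hcl
# Hypothetical fragment of the azuread provider configuration.
# redirect_uris can only be set inside a per-type block (web/spa/public_client),
# which is why I can't reproduce the 'api'-type reply URLs of the legacy app.
resource "azuread_application" "api" {
  display_name = "my-api" # placeholder

  web {
    redirect_uris = ["https://myapp.example.com/signin-oidc"] # placeholder

    implicit_grant {
      id_token_issuance_enabled = true
    }
  }
}
```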
Full stack trace:
Application startup exception: System.InvalidOperationException: Unable to find the required services. Please add all the required services by calling 'IServiceCollection.AddAuthorization' inside the call to 'ConfigureServices(...)' in the application startup code.
   at Microsoft.AspNetCore.Builder.AuthorizationAppBuilderExtensions.VerifyServicesRegistered(IApplicationBuilder app)
   at Microsoft.AspNetCore.Builder.AuthorizationAppBuilderExtensions.UseAuthorization(IApplicationBuilder app)
   at projectname.WebApi.Startup.DefaultHttpPipeline(IApplicationBuilder app) in /app/projectname.WebApi/Startup.cs:line 360
   at projectname.WebApi.Startup.ConfigureDevelopment(IApplicationBuilder app) in /app/projectname.WebApi/Startup.cs:line 317
   at System.RuntimeMethodHandle.InvokeMethod(Object target, Object[] arguments, Signature sig, Boolean constructor, Boolean wrapExceptions)
   at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture)
   at Microsoft.AspNetCore.Hosting.ConfigureBuilder.Invoke(Object instance, IApplicationBuilder builder)
   at Microsoft.AspNetCore.Hosting.ConfigureBuilder.<>c__DisplayClass4_0.b__0(IApplicationBuilder builder)
   at Microsoft.AspNetCore.Hosting.ConventionBasedStartup.Configure(IApplicationBuilder app)
   at Microsoft.AspNetCore.HostFilteringStartupFilter.<>c__DisplayClass0_0.b__0(IApplicationBuilder app)
   at Microsoft.AspNetCore.Hosting.WebHost.BuildApplication()
Next steps I've planned to try:
Get the Client ID & Client secret of the new app reg (provisioned via terraform) and stick them manually into the old app (it's the same AD tenant so there shouldn't be access issues). If we get the same error - this confirms the problem is with the app registration.
Comment out everything to do with swagger to make sure it's not it
Migrate the whole thing to .net 6 - it has to be done anyway, maybe we'll get a better error message
Finally figured it out: there was a wrong value for the swagger app registration's client ID.
We have a Spring Cloud Stream app using Kafka. The requirement is that, on the producer side, a list of messages needs to be put on a topic in a single transaction. There is no consumer for these messages in the same app. When I enabled transactions by setting spring.cloud.stream.kafka.binder.transaction.transaction-id-prefix, I got errors that there is no subscriber for the dispatcher and that the total number of partitions obtained from the topic is less than the transactions configured. The app is not able to obtain the partitions for the topic in transaction mode. Could you please tell me if I am missing anything? I will post detailed logs tomorrow.
Thanks
You need to show your code and configuration as well as the versions you are using.
Producer-only transactions are discussed in the documentation.
Enable transactions by setting spring.cloud.stream.kafka.binder.transaction.transactionIdPrefix to a non-empty value, e.g. tx-. When used in a processor application, the consumer starts the transaction; any records sent on the consumer thread participate in the same transaction. When the listener exits normally, the listener container will send the offset to the transaction and commit it. A common producer factory is used for all producer bindings configured using spring.cloud.stream.kafka.binder.transaction.producer.* properties; individual binding Kafka producer properties are ignored.
If you wish to use transactions in a source application, or from some arbitrary thread for producer-only transactions (e.g. a @Scheduled method), you must get a reference to the transactional producer factory and define a KafkaTransactionManager bean using it.
@Bean
public PlatformTransactionManager transactionManager(BinderFactory binders) {
    ProducerFactory<byte[], byte[]> pf = ((KafkaMessageChannelBinder) binders.getBinder(null,
            MessageChannel.class)).getTransactionalProducerFactory();
    return new KafkaTransactionManager<>(pf);
}
Notice that we get a reference to the binder using the BinderFactory; use null in the first argument when there is only one binder configured. If more than one binder is configured, use the binder name to get the reference. Once we have a reference to the binder, we can obtain a reference to the ProducerFactory and create a transaction manager.
Then you would just use normal Spring transaction support, e.g. TransactionTemplate or @Transactional, for example:
public static class Sender {

    @Transactional
    public void doInTransaction(MessageChannel output, List<String> stuffToSend) {
        stuffToSend.forEach(stuff -> output.send(new GenericMessage<>(stuff)));
    }

}
If you wish to synchronize producer-only transactions with those from some other transaction manager, use a ChainedTransactionManager.
Is there a way to programmatically determine from a DocumentClientException where StatusCode == HttpStatusCode.NotFound whether it was the document, the collection, or the database that was not found?
I'm trying to figure out whether I can implement on-demand collection provisioning and only call DocumentClient.CreateDocumentCollectionIfNotExistsAsync when I need to. I'm trying to avoid calling it before making every request (presumably this adds an extra network roundtrip to every request). Likewise, I'm trying to avoid calling it on error recovery when I know it won't help.
From experimentation with the local emulator, the only field I see varying in these three cases is DocumentClientException.Error.Message, and only when the database cannot be found. I generally try to avoid exception dispatching based on human-readable messages.
Wrong database name:
StatusCode: HttpStatusCode.NotFound
Error.Message: {\"Errors\":[\"Owner resource does not exist\"]}...
Correct database name, wrong collection name:
StatusCode: HttpStatusCode.NotFound
Error.Message: {\"Errors\":[\"Resource Not Found\"]}...
Correct database name, correct collection name, incorrect document ID:
StatusCode: HttpStatusCode.NotFound
Error.Message: {\"Errors\":[\"Resource Not Found\"]}...
I'm planning to use a database with its own offer. Since collections inside a database with its own offer are cheap, I'm trying to see whether I can segregate each tenant in my multi-tenant application into its own collection. Each tenant ends up having a different indexing and default TTL policy. The set of collections is not fixed and changes dynamically during runtime as new tenants sign up. I cannot predict when I will need to add a new collection. There's no new tenant notification: I just get a request that I need to handle by creating a document in a possibly non-existent collection. There's a process to garbage collect unused collections.
I'm using the NuGet package Microsoft.Azure.DocumentDB.Core Version 1.9.1 in a .NET Core 2.1 app targeting a SQL API Cosmos DB instance.
If you look at the Message property in detail, you should see one of the following strings, which tells you whether the 404 Not Found response was generated for a Document or a Collection:
ResourceType: Document
ResourceType: Collection
It's not ideal, but you can try to regex this information out of the error message.
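A minimal sketch of that idea, combined with your on-demand provisioning plan: classify the 404 and only create the collection when it was the collection that was missing. The method and variable names here are hypothetical, and the message format is an assumption based on the emulator output above, so it may differ across SDK versions or service updates.

```csharp
// Hypothetical helper: decide whether a 404 is recoverable by creating the collection.
static async Task<bool> TryRecoverNotFoundAsync(
    DocumentClient client, DocumentClientException ex,
    string databaseId, string collectionId)
{
    if (ex.StatusCode != HttpStatusCode.NotFound)
        return false;

    // The message may contain "ResourceType: Document" or "ResourceType: Collection".
    var match = Regex.Match(ex.Message, @"ResourceType:\s*(\w+)");
    if (match.Success && match.Groups[1].Value == "Collection")
    {
        // Only provision when the collection itself was missing.
        await client.CreateDocumentCollectionIfNotExistsAsync(
            UriFactory.CreateDatabaseUri(databaseId),
            new DocumentCollection { Id = collectionId });
        return true; // caller can retry the original request
    }

    return false; // document-level 404 (or missing database): let the caller decide
}
```

On a `true` return you would retry the original document operation once; anything else falls through to your normal error handling.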
We have a problem with Entity Framework. For example, if we do:
modelBuilder.Conventions.Remove<OneToManyCascadeDeleteConvention>();
and then try to delete an entity that has mapped child entities depending on it, it is logical that we get an error (the parent cannot be deleted while children in the database depend on it).
Afterwards, even with a new context instance, calling ParentEntity.ChildEntities.ToList() still fails!
A workaround is to restart the app pool, and the problem goes away.
We are using Autofac, and the lifetime of the context is set (and confirmed) to per HttpRequest, so the state must persist somewhere else. Any idea what can be done to avoid these errors?
Our guess is that an ObjectContext persists somewhere and still holds the child entities in EntityState.Deleted, which conflicts with the actual data received from the database on subsequent calls.
Update: A closer examination of the stack reveals that there is a lazy internal context:
[DbUpdateException: An error occurred while saving entities that do not expose foreign key properties for their relationships. The EntityEntries property will return null because a single entity cannot be identified as the source of the exception. Handling of exceptions while saving can be made easier by exposing foreign key properties in your entity types. See the InnerException for details.]
System.Data.Entity.Internal.InternalContext.SaveChanges() +200
System.Data.Entity.Internal.LazyInternalContext.SaveChanges() +33
System.Data.Entity.DbContext.SaveChanges() +20
Maybe if I were to somehow disable LazyInternalContext? Can this be done?
If you don't want to get the exceptions and would rather keep the database in a valid state yourself for some reason, you can do so by disabling validation:
context.Configuration.ValidateOnSaveEnabled = false; // you can put this in the constructor of your context;
context.SaveChanges();
I'm looking at migrating business processes into Windows Workflow, the client app will be ASP/MVC and the workflows are likely to be hosted via IIS.
I want to create a common 'simple task' activity which can be used across multiple workflows. Activity properties would look something like this:
Related customer
Assigned agent
Prompt ("Please review PO #12345")
Text for 'true' button ("Accept")
Text for 'false' button ("Reject")
Variable to store result in
Once the workflow hits this activity a task should be put into a db table. The web app will query the table and show the agent a list of tasks they need to complete. Once they hit accept / reject the workflow needs to resume.
It's the last bit that I'm stuck on. What do I need to store in the DB table to resume a workflow? Given that the tasks table will be used by multiple workflows, how would I instantiate the workflow to resume it? I've looked at bookmarks, but they assume that you know the type of workflow you're resuming. Do I need to use reflection, or is there a method in WF where I can pass a workflow ID and it will instantiate it?
You can use a workflow service and control it via the ControlEndpoint.
For more info about ControlEndpoint, see
http://msdn.microsoft.com/en-us/library/ee358723.aspx
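If you self-host instead of using an IIS workflow service, the bookmark approach from the question can work if the tasks table stores three things: the instance id, the bookmark name, and a key identifying the workflow type. The sketch below assumes exactly that; the table/property names and the definition lookup are hypothetical.

```csharp
// Hypothetical sketch: resume a persisted WF4 instance from a tasks-table row.
// "task" holds InstanceId (Guid), BookmarkName (string), WorkflowTypeKey (string),
// all written to the table when the SimpleTask activity created the task.
Activity definition = workflowDefinitions[task.WorkflowTypeKey]; // your own lookup,
                                                                 // no reflection needed
var app = new WorkflowApplication(definition)
{
    InstanceStore = new SqlWorkflowInstanceStore(connectionString)
};

app.Load(task.InstanceId);                       // rehydrate the persisted instance
app.ResumeBookmark(task.BookmarkName, accepted); // true = "Accept", false = "Reject"
```

The workflow-type key is what lets one generic tasks table serve many workflow definitions: the web app never needs to know the type, only the resume code does.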