How to set up Notifications and Alerts for Contracts in FSCM? - peoplesoft

Has anyone set up Notifications and Alerts for Contracts in FSCM? According to
PeopleBooks (https://docs.oracle.com/cd/F60972_01/fscm92pbr45/eng/fscm/fspr/CreatingSpendThresholds_Notifications_AndAlertsForContracts.html) it is set up under Procurement Contracts > Create Contract Alert Workflow.
When I get to the Run Control page to create a new run control, I am not able to see any Contracts under the Contract ID lookup option.
Am I missing any setup?
Basically, what we want is for the system to send an email when a contract is up for renewal.
Many thanks!

Related

Alfresco Process Services involved task notifications

I am working with Alfresco Process Services and came across the requirements below.
I want to send an email when I involve someone in a task. The involved person should get an email notification; right now, by default, the email notification does not go to the involved person.
Also, is there any way to make the attached documents and the task form read-only for everyone except the task claimer? I mean, only the task claimer should be able to edit the task, but all other users in the group should be able to view the task in read-only mode.
Any help will be appreciated...

What needs to be done for an action submission?

I am developing a Google smart home action. On the Google Home app, I can set up my test action, and I can use a Google Home Mini to turn on my devices. I used a C++ server as fulfillment (url: https://xxxx.xxx.com/google/smarthome). My server can receive and process the SYNC, QUERY and EXECUTE intents.
When I run the SMARTHOME-WASHER demo, it uses Firebase and HomeGraph. I can see the washer status data in the Firebase database, and I can see the functions in Firebase, such as fakeauth, faketoken, reportstate, requestsync and smarthome. Every time I turn the washer on or off, I see reportstate and other requests.
But when I test my own action, there is nothing in its Firebase: no status data, no request data.
When I use the Test suite for smart home, the WASHER-DEMO is OK, but my test action fails to get the device list from HomeGraph.
So I want to know:
1. Is Firebase necessary when I use my own fulfillment?
2. How do I report state? Is the request sent from the Google server to my server, or the other way around?
3. When I add my test action in the Google Home app, the Firebase database is empty. Is this an issue?
4. What do I need to do to submit my action?
I have been stuck on this for more than two months. Thanks for your help.
Firebase is not required. You can use any backend implementation you want. To simplify development, our codelab uses one type of implementation. If you want to use another host and database, then you can change how you handle the requests.
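To make that concrete, here is a minimal TypeScript sketch of a fulfillment handler that keeps device state in memory instead of Firebase. The intent names and JSON shapes come from the smart home protocol; the washer device, its state fields, and the in-memory store are illustrative assumptions (the same shapes apply to a C++ server).

```typescript
// Minimal fulfillment dispatcher: state lives in memory, no Firebase.
// Device IDs, traits, and the storage choice are illustrative assumptions.
type SmartHomeRequest = {
  requestId: string;
  inputs: { intent: string; payload?: any }[];
};

const deviceStates: Record<string, { on: boolean; online: boolean }> = {
  "washer-1": { on: false, online: true },
};

export function handleFulfillment(req: SmartHomeRequest): object {
  const { requestId } = req;
  const input = req.inputs[0];
  switch (input.intent) {
    case "action.devices.SYNC":
      return {
        requestId,
        payload: {
          agentUserId: "user-123", // must be stable for the linked account
          devices: [{
            id: "washer-1",
            type: "action.devices.types.WASHER",
            traits: ["action.devices.traits.OnOff"],
            name: { name: "Washer" },
            willReportState: true, // promises proactive Report State calls
          }],
        },
      };
    case "action.devices.QUERY": {
      // Google asks for specific device IDs; answer with their current state.
      const devices: Record<string, object> = {};
      for (const d of input.payload.devices) {
        devices[d.id] = { status: "SUCCESS", ...deviceStates[d.id] };
      }
      return { requestId, payload: { devices } };
    }
    case "action.devices.EXECUTE": {
      // Apply each command to each targeted device and echo the new state.
      const commands: object[] = [];
      for (const cmd of input.payload.commands) {
        for (const device of cmd.devices) {
          for (const exec of cmd.execution) {
            if (exec.command === "action.devices.commands.OnOff") {
              deviceStates[device.id].on = exec.params.on;
            }
          }
          commands.push({
            ids: [device.id],
            status: "SUCCESS",
            states: { ...deviceStates[device.id] },
          });
        }
      }
      return { requestId, payload: { commands } };
    }
    default:
      return { requestId, payload: { errorCode: "notSupported" } };
  }
}
```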
Report State is a command you send from your server to the Home Graph. It's proactive, meant to be sent when a device's state changes.
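As a hedged sketch (the endpoint and request shape are from the Home Graph REST API; the token handling and device names are assumptions), a Report State call from a Node/TypeScript backend could look like this:

```typescript
import { randomUUID } from "node:crypto";

// Sketch of a proactive Report State call to the Home Graph.
// Obtaining `accessToken` (a service-account token with the
// https://www.googleapis.com/auth/homegraph scope) is left out here.
async function reportState(
  accessToken: string,
  agentUserId: string,
  deviceId: string,
  states: Record<string, unknown>
): Promise<void> {
  const res = await fetch(
    "https://homegraph.googleapis.com/v1/devices:reportStateAndNotification",
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${accessToken}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        requestId: randomUUID(), // any unique ID for this report
        agentUserId,             // the same ID returned from SYNC
        payload: { devices: { states: { [deviceId]: states } } },
      }),
    }
  );
  if (!res.ok) throw new Error(`Report State failed: ${res.status}`);
}

// Call it whenever local device state changes, e.g.:
// await reportState(token, "user-123", "washer-1", { on: true, online: true });
```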
If you are not using Firebase for your test action, then you would not see any Firebase activity. This is fine.
To submit, you should follow this guide. Primarily, run the Test Suite and then submit your action through the Actions Console.

Understanding the Pact Broker Workflow

Knowing full well that there are many types of workflows for different ways of integrating Pact, I'm trying to visualize what a common workflow looks like. I developed this swimlane diagram of the Pact Broker workflow.
How do we run a Provider verification on an older Provider build?
How does this change with tags?
When does the webhook get created back to the Provider?
What if different Providers have different base URLs (i.e. different build systems)?
How does a new Provider build alert the Consumers if the Provider's verification fails?
Am I thinking about this flow correctly?
I've tried to collect my understanding from Webhooks, Using pact where the consumer team is different from the provider team, and Publishing verification results to a Pact Broker. Assuming I'm thinking about the problem the right way and haven't completely missed some documentation, I'd gladly write up a suggested workflow document for the community.
Your swimlane diagram is a good picture of the workflow, with the caveat that once everything is all set up, it's rare to manually start provider builds from the broker.
The provider doesn't ever notify the consumers about verification failure (or success) in the process. If it did, then you could end up with circular builds.
I think about it like this:
The consumer tests create a contract (the Pact file).
This step also verifies that the consumer can work with a provider that fulfils that contract (using the mock provider).
Then, the consumer gives this Pact file to the broker, if configured to do so (see the sketch after this list).
Now that there's a new pact, the broker (if configured) can trigger a provider build.
The provider's CI infrastructure builds the provider and runs the pact verification.
The provider's CI infrastructure (if configured) tells the broker about the verification result.
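For the first three steps (contract creation, mock verification, publishing), a hedged pact-js sketch might look like this; a Jest-style runner and Node's built-in fetch are assumed, and the service names, port, and endpoint are illustrative:

```typescript
import path from "path";
import { Pact } from "@pact-foundation/pact";

// The consumer test drives a mock provider and records the contract.
const provider = new Pact({
  consumer: "OrderWeb",   // illustrative names
  provider: "OrderApi",
  port: 8992,
  dir: path.resolve(process.cwd(), "pacts"), // where the pact file lands
});

describe("orders contract", () => {
  beforeAll(() => provider.setup());
  afterAll(() => provider.finalize()); // writes the pact file

  it("fetches an order", async () => {
    await provider.addInteraction({
      state: "order 42 exists",
      uponReceiving: "a request for order 42",
      withRequest: { method: "GET", path: "/orders/42" },
      willRespondWith: {
        status: 200,
        headers: { "Content-Type": "application/json" },
        body: { id: 42, status: "shipped" },
      },
    });

    // Real consumer code runs against the mock provider.
    const res = await fetch("http://localhost:8992/orders/42");
    const body = await res.json();
    expect(body.id).toBe(42);
    await provider.verify();
  });
});

// Handing the pact to the broker is typically a separate CI step,
// e.g. with the pact-broker CLI:
//   pact-broker publish ./pacts --consumer-app-version=$GIT_SHA \
//     --broker-base-url=https://your-broker.example.com
```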
The broker and the provider's build system are the only bits that know about the verification result - it isn't passed back to the consumer at the moment.
A consumer whose tests pass can say "I've written this communication contract and confirmed that I can hold up my side of it". Failure to verify the contract at the provider end doesn't change this statement.
However, if the verification succeeds, you may want to trigger a consumer deployment. As Beth Skurrie (one of the primary contributors to Pact) points out in the comments below:
Communicating the status of the verification back to the consumer is actually a highly important thing, as it tells the consumer whether or not they can be deployed safely. It is the missing part of the pact workflow at the moment, and I'm working away as fast as I can to rectify this.
Currently, since the verification status is information you might like to know about - especially if you're unable to see the provider's CI infrastructure - check out the pact build badges, which are a lighter-weight way of querying the broker.
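To make the provider-side verification-and-publish steps concrete, here is a hedged pact-js sketch; publishVerificationResult is what feeds the broker (and those badges), while the URLs, names, and tag are illustrative assumptions:

```typescript
import { Verifier } from "@pact-foundation/pact";

// Verify the running provider against pacts from the broker and
// publish the result back, so the broker (and the badges) know.
new Verifier({
  provider: "OrderApi",                     // illustrative name
  providerBaseUrl: "http://localhost:8080", // the provider under test
  pactBrokerUrl: "https://your-broker.example.com",
  consumerVersionTags: ["main"],            // if you use tags, pick pacts by tag
  publishVerificationResult: true,          // report back to the broker
  providerVersion: process.env.GIT_SHA,     // version the result is recorded against
})
  .verifyProvider()
  .then(() => console.log("verification published"));
```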

What's the recommended way to handle microservice processing bugs and new insights?

Before I get to my question, let me sketch out a sample set of microservices to illustrate my dilemma.
Scenario outline
Suppose I have 4 microservices:
An activation service where features supplied to our customers are (de)activated.
A registration service where members can be added and changed.
A secured key service that can generate secure keys (in a multi-step process) for members, to be used when communicating about them with the outside world.
A communication service that is used to communicate about our members with external vendors.
The secured key service, however, may only request secured keys if that feature is activated. Additionally, the communication service may only communicate about members that have a secured key AND if the communication feature itself is activated.
Because they are microservices, each of the services has its own datastore and is completely self-sufficient. That is, any data required from the other microservices is duplicated locally and kept in sync by means of asynchronous messages from those microservices.
The dilemma
I'm actually facing two main dilemmas. The first is (pretty obviously) data synchronization. When there are multiple data stores that need to be kept in sync, you have to account for messages getting lost or processed out of order. But there are plenty of out-of-the-box solutions for this, and when all else fails you could even fall back to some kind of ETL process to keep things in sync.
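(For concreteness, a minimal sketch of the kind of guard such solutions rely on: versioning each record so duplicated or out-of-order messages can be detected and dropped. The message shape and storage here are illustrative.)

```typescript
// Each message carries a per-record version assigned at the source service.
interface MemberMessage {
  memberId: string;
  version: number; // monotonically increasing per member
  data: { name: string };
}

const localMembers = new Map<string, MemberMessage>();

// Returns true only when the message actually changed local state,
// so follow-up logic fires once per real change.
function applyMemberMessage(msg: MemberMessage): boolean {
  const current = localMembers.get(msg.memberId);
  if (current && current.version >= msg.version) {
    return false; // duplicate or stale (out-of-order) message: ignore
  }
  localMembers.set(msg.memberId, msg);
  return true;
}
```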
The main issue I'm facing however is the actions that need to be performed. In the above example the secured key service must perform an action when it either
Receives a message from the registration service for a new member when it already knows that the secured keys feature is active in the activation service
Receives a message from the activation service that the secured keys feature is now active when it already knows about members from the registration service
In both cases this means that a message from the external system must lead to both an update in the local copy of the data as well as some logic that needs to be processed.
The question
Now to the actual question :)
What is the recommended way to cope with either bugs or new insights when it comes to handling those messages? Suppose there is a bug in the handler for messages from the activation service. The handler does update the internal data structure, but it fails to detect that there are already registered members, and thus never starts the secure key generation process. Alternatively, it could be that there's no bug, but we decide there is something else we want the handler to do.
The system will have no reason to resubmit or reprocess messages (as the message didn't fail), but there's no real way for us to re-trigger the behavior that's behind the message.
I hope it's clear what I'm asking (and I do apologize if it should be posted on any of the other 170 Stack... sites, I only really know of StackOverflow)
I don't know what the recommended way is, but I know how this is done in DDD, and maybe that can help you, as DDD and microservices are friends.
What you have is a long-running/multi-step process that involves information from multiple microservices. In DDD this can be implemented using a Saga/Process manager. The Saga maintains local state by subscribing to events from both the registration service and the activation service. As the events come in, the Saga checks whether it has all the information it needs to generate secure keys by submitting a CreateSecureKey command. The events may arrive in any order and may even be duplicated, but this is not a problem, as the Saga can compensate for that.
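A minimal TypeScript sketch of such a Saga, assuming illustrative event and command names (MemberRegistered, FeatureActivated, CreateSecureKey) and in-memory state:

```typescript
// Events the Saga subscribes to; names are illustrative, not from a framework.
type DomainEvent =
  | { type: "MemberRegistered"; memberId: string }
  | { type: "FeatureActivated"; feature: string };

interface SecureKeySagaState {
  secureKeysActive: boolean;
  members: Set<string>;
  keysRequested: Set<string>; // guards against duplicate commands
}

class SecureKeySaga {
  private state: SecureKeySagaState = {
    secureKeysActive: false,
    members: new Set(),
    keysRequested: new Set(),
  };

  constructor(
    private sendCommand: (cmd: { type: "CreateSecureKey"; memberId: string }) => void
  ) {}

  handle(event: DomainEvent): void {
    switch (event.type) {
      case "MemberRegistered":
        this.state.members.add(event.memberId); // idempotent: Set dedupes
        break;
      case "FeatureActivated":
        if (event.feature === "secure-keys") this.state.secureKeysActive = true;
        break;
    }
    this.dispatchPendingWork();
  }

  // Runs after every event, so the order of arrival no longer matters.
  private dispatchPendingWork(): void {
    if (!this.state.secureKeysActive) return;
    for (const memberId of this.state.members) {
      if (!this.state.keysRequested.has(memberId)) {
        this.state.keysRequested.add(memberId);
        this.sendCommand({ type: "CreateSecureKey", memberId });
      }
    }
  }
}
```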
In case of bugs or new features, you could create special scripts or other processes that search for a particular situation and handle it by submitting specific compensating commands, without reprocessing all the past events.
In the case of new features you may even have to process old events that are now interesting for your business process. You do this in the same way: query the event source for those newly interesting old events and send them to the newly updated Saga. After that import process, you subscribe the Saga to those events and it continues to function as usual.
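Building on the SecureKeySaga sketch above, the import step could look like this; the event store API here is hypothetical:

```typescript
// Replay old events that the updated Saga now cares about. Because the
// Saga's handlers are idempotent, replaying alongside live traffic is safe.
async function backfillSaga(
  saga: SecureKeySaga,
  eventStore: { readAll(types: string[]): AsyncIterable<DomainEvent> }
): Promise<void> {
  for await (const event of eventStore.readAll([
    "MemberRegistered",
    "FeatureActivated",
  ])) {
    saga.handle(event);
  }
  // After the import completes, subscribe the Saga to the live event feed.
}
```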

Workflow foundation 4.5 custom database for tracking

I am planning to use Windows Workflow Foundation 4.5.
I need to track information in a custom database.
Lists will be shown in a user interface (my tasks, all tasks).
What is the best way to have a generic system in which I don't need to add custom activities to a workflow? I want to track events such as:
workflow created
workflow ended
workflow terminated
receive activity started (log custom record in a table)
receive activity completed
--> bookmark events? I also need to correlate an activity instance ID with the record in the database.
...
Do I use a tracking participant for this, or can I tap into the events of the workflow service host?
You don't need to add custom activities to your workflow. You can implement a SQL tracking participant. Here is a sample: http://msdn.microsoft.com/en-us/library/ee622983.aspx
A tracking participant is the solution to choose here... it is really flexible. You can get the status of the workflow instance from the WorkflowInstanceRecord and the activity status from the ActivityStateRecord. Both records carry the InstanceId, so you can correlate them.
You can also emit custom tracking data from your code activities with the CustomTrackingRecord.
We have been using it for a long time, and the performance is quite good.
I hope it helps.
