How is an event triggered for a service in Oracle BRM?

Hello everyone. I have been going through the Oracle Communications Billing and Revenue Management (BRM) docs for some time now, but one thing I can't figure out is how the events associated with a service are actually triggered. Is it done by BRM or by the client application interacting with BRM? Somehow the docs don't mention it, or I may have missed it. It would be nice if someone could explain it to me. Note: the term service is used in the same context as in the BRM docs, i.e. something that customers subscribe to and pay for when they use it, e.g. a telephony service. The link to the docs: https://docs.oracle.com/cd/E51000_01/index.htm

Events are created by BRM:
- the logic is defined in the CM (Connection Manager) tier, inside the Facilities Modules (FMs)
- the actual SQL instructions are generated by the Oracle DM (dm_oracle)
For the sake of completeness, there is also another way in which events are created:
- Call Detail Records (CDRs) are rated by the batch rating engine (Pipeline Manager) and produce rated event records
- these are then loaded, via the Rated Event Loader (REL), directly into the database; under the hood, REL calls SQL*Loader and a stored procedure to update account balances and bill items
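For illustration only, here is a toy C# sketch of that call chain; the real interface is BRM's C-based PCM opcode API, and every type and name below is hypothetical. A client application calls an opcode on the CM, the FM logic inside the CM builds the event, and dm_oracle turns it into SQL:

using System;

class ClientApp
{
    static void Main()
    {
        // The client application never creates the event itself;
        // it only calls an opcode on the CM.
        var cm = new ConnectionManager();
        cm.CallOpcode("PCM_OP_ACT_USAGE", accountId: "account-123", service: "telephony");
    }
}

class ConnectionManager
{
    public void CallOpcode(string opcode, string accountId, string service)
    {
        // Facilities Module (FM) logic inside the CM decides that this
        // operation produces an event and builds the event record.
        var evt = $"/event/session [{opcode}] account={accountId} service={service}";
        new OracleDataManager().Persist(evt);
    }
}

class OracleDataManager
{
    public void Persist(string evt)
    {
        // dm_oracle is the tier that generates the actual SQL.
        Console.WriteLine($"INSERT INTO event_t ... -- {evt}");
    }
}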

Related

Best practice for bulk update in DocumentDB

We have a scenario where we need to repopulate a collection every hour with the latest data, whenever we receive the data file in blob storage from external sources, and at the same time we do not want to impact live users while updating the collection.
So we have done the following:
1. Created 2 databases, with collection 1 in both databases.
2. Created another collection in a different database (the configuration database) with Active and Passive properties, holding Database1 and Database2 as the values for those properties.
3. Our WebJob runs whenever it sees a file in blob storage, checks the configuration database to identify which database is active and which is passive, processes the XML file, and updates the collection in the passive database, as that one is not used by the live feed. Once done, it swaps the two: the passive database is marked active and the active one passive.
4. Our service always checks which database is active and fetches the data from it to show to the user.
Since the WebJob has to delete the old data and insert the new data, we wanted to know: is this the best design we could have come up with? Will deleting and inserting the data incur cost? Is there a better way to do bulk deletes and inserts, as we are doing them sequentially now?
Is this the best design we could have come up with?
As David Makogon said, with your solution you need to manage and pay for multiple databases. If possible, you could instead create the new documents in the same collection and control which set of documents is active in your program logic.
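A minimal sketch of that idea, using the DocumentDB .NET SDK (Microsoft.Azure.DocumentDB): a single config document records which dataset label is currently active, and the WebJob flips it once it has finished rebuilding the passive set. The ConfigDoc type, the "config" id and the dataset labels are assumptions, not anything prescribed by the SDK.

using System.Linq;
using System.Threading.Tasks;
using Microsoft.Azure.Documents.Client;

public class ConfigDoc
{
    public string id { get; set; } = "config";
    public string Active { get; set; }   // e.g. "datasetA"
    public string Passive { get; set; }  // e.g. "datasetB"
}

public static class DatasetSwitcher
{
    // Called by the WebJob after it has finished rebuilding the passive
    // dataset: swap the roles so readers start using the fresh data.
    public static async Task SwapAsync(DocumentClient client, string db, string coll)
    {
        var collUri = UriFactory.CreateDocumentCollectionUri(db, coll);
        var cfg = client.CreateDocumentQuery<ConfigDoc>(collUri)
                        .Where(c => c.id == "config")
                        .AsEnumerable()
                        .First();

        var tmp = cfg.Active;
        cfg.Active = cfg.Passive;
        cfg.Passive = tmp;

        var docUri = UriFactory.CreateDocumentUri(db, coll, cfg.id);
        await client.ReplaceDocumentAsync(docUri, cfg);
    }
}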
Will deleting and inserting the data incur cost?
Each operation/request consumes request units, which are billed. For details on request units and DocumentDB pricing, please refer to:
What is a Request Unit
DocumentDB pricing details
Is there a better way to do bulk deletes and inserts, as we are doing them sequentially now?
A stored procedure provides a way to group operations like inserts and submit them in bulk. You could create a stored procedure and then execute it from your WebJobs function.
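A minimal sketch of executing such a stored procedure from a WebJob with the DocumentDB .NET SDK. The sproc id "bulkImport" and its contract (accepts an array of documents, returns the number created) are assumptions; the sproc itself is JavaScript registered on the collection, and it must loop over the array calling createDocument for each item.

using System.Threading.Tasks;
using Microsoft.Azure.Documents.Client;

public static class BulkLoader
{
    public static async Task<int> ImportAsync(DocumentClient client, string db,
                                              string coll, object[] docs)
    {
        var sprocUri = UriFactory.CreateStoredProcedureUri(db, coll, "bulkImport");

        // One round trip submits the whole batch; everything the sproc does
        // runs in a single transaction scoped to its execution.
        var response = await client.ExecuteStoredProcedureAsync<int>(
            sprocUri, new dynamic[] { docs });

        return response.Response; // number of documents created (per the assumed contract)
    }
}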

Transaction Scope in Pub/Sub + Message Label in Rebus

Currently I'm using WCF as a service bus, but I want to switch to a more powerful one, and I chose Rebus.
I'm somewhat new to Rebus and have a few questions:
1) My data is persisted in a DB table. I want the publisher to read all persisted data every n seconds, publish it to subscribers, and then set a sent flag on the data in the DB.
Is there some timing mechanism for publishing?
Reading, publishing and changing the data (setting the flag) must be done in a transaction scope. Is there a defined solution for this in Rebus?
2) In the consumer, I want to save the published data in some table. Reading the message from the message queue and saving it to the DB (in my handler) must be done in a transaction scope. How does Rebus do this?
3) The message label for published messages is set to a random unique string. I want to set a custom label for the created MSMQ message. Is there any solution for this?
1) You are on your own when it comes to querying database tables at regular intervals – there's no built-in mechanism in Rebus that does this.
I can recommend you take a look at System.Timers.Timer or something similar.
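A rough sketch of that approach, which also covers the transaction requirement from question 1. The Outbox table, the column names and the DataRowPublished event type are placeholders, and whether the publish itself enlists in the ambient transaction depends on the transport you use.

using System.Data.SqlClient;
using System.Threading.Tasks;
using System.Transactions;
using Rebus.Bus;

public class DataRowPublished   // placeholder event type published to subscribers
{
    public int Id { get; set; }
}

public class OutboxPoller
{
    readonly System.Timers.Timer _timer = new System.Timers.Timer(10000); // every 10 s
    readonly IBus _bus;
    readonly string _connStr;

    public OutboxPoller(IBus bus, string connStr)
    {
        _bus = bus;
        _connStr = connStr;
        _timer.Elapsed += async (s, e) => await PublishPendingAsync();
        _timer.Start();
    }

    async Task PublishPendingAsync()
    {
        // One ambient transaction spans the read, the publishes and the flag
        // update, so a failure rolls everything back together.
        using (var scope = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled))
        using (var conn = new SqlConnection(_connStr))
        {
            conn.Open();

            using (var reader = new SqlCommand(
                "SELECT Id FROM Outbox WHERE Sent = 0", conn).ExecuteReader())
            {
                while (reader.Read())
                    await _bus.Publish(new DataRowPublished { Id = reader.GetInt32(0) });
            }

            new SqlCommand("UPDATE Outbox SET Sent = 1 WHERE Sent = 0", conn)
                .ExecuteNonQuery();

            scope.Complete();
        }
    }
}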
2) You can enable automatic transaction scopes in your Rebus handlers by using the Rebus.TransactionScopes package.
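A minimal configuration sketch, assuming the Rebus.TransactionScopes package and the MSMQ transport; the queue name is a placeholder. HandleMessagesInsideTransactionScope() wraps each handler invocation in a System.Transactions.TransactionScope, so the DB work in your handler and the message handling complete or fail together.

using System;
using Rebus.Activation;
using Rebus.Config;
using Rebus.TransactionScopes;

class Program
{
    static void Main()
    {
        using (var activator = new BuiltinHandlerActivator())
        {
            // register your handlers on the activator here...

            Configure.With(activator)
                .Transport(t => t.UseMsmq("consumer-input-queue"))
                .Options(o => o.HandleMessagesInsideTransactionScope())
                .Start();

            Console.ReadLine(); // keep the bus running until Enter is pressed
        }
    }
}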
3) Out of the box, it is not possible to specify the label to be used on the MSMQ message. It will be set by Rebus to a string consisting of the message type and ID as indicated by this extension method.

Simplest possible BAM Scenario

I’m trying to set up a very simple BAM scenario within BizTalk Server 2013R2 upon which to build, involving tracking just the time of all incoming messages processed by a port.
To this end I have:
1. Within Excel, created an Activity Definition (called SimpleReceiveTest) containing a single Item called ReceiveTime of type milestone (date/time), and a View Definition (also called SimpleReceiveTest) containing just this Activity Definition and Item.
2. Imported this BAM definition spreadsheet using bm.exe.
3. Added view rights to SimpleReceiveTest, again using bm.exe.
4. Launched the Tracking Profile Editor, imported the BAM Activity Definition, and mapped ActivityID = MessageID and ReceiveTime = PortStartTime by drag and drop from the Messaging Property Schema.
5. Set the Port Mappings for MessageID and PortStartTime to relate to a test Receive Port, ReceivePort1, that I am using for testing. This is using a pass-through pipeline.
6. Saved and applied the above Tracking Profile.
It is my understanding that for any messages received on port ReceivePort1 I should now get a tracking activity created. However this is not happening – there are no records in any of the BAM tables/views and no data is available within the BAM Portal.
I have tried restarting the hosts, and have verified that the TDDS_FailedTrackingData table is empty, that there is nothing relevant in the event log, that a tracking host is running, and that the SQL Agent jobs are running. I have also tried running these jobs manually.
Have I missed something, and am I correct in my expectation that this simple scenario should create tracked activities for any messages passing through the Receive Port? If so what can I try to further diagnose this?
Now fixed – it was actually a bug in vanilla BizTalk Server 2013 R2 when using a standard pipeline, which has been fixed in CU2:
FIX: BAM tracking doesn’t work when you use the XMLReceive or a custom pipeline in BizTalk Server

Microsoft AX Dynamics Process Integration through Outbound Ports

I would like to know the process integration steps through outbound ports.
Whenever an event occurs in AX Dynamics, we want to be notified of that event in the form of XML (process integration).
Examples: sales order creation, customer creation, purchase order creation.
Outbound ports are only useful for asynchronous communication.
See AX 2012 Export Data with Outbound ports for an example (using the file system).
The steps to initiate sending data are in AIF_SendCustomer.
As this is no lightweight operation, you may consider logging the records which need integration in a custom integration table, then doing the processing in batch.
The logging is done in the insert and/or update (and maybe delete) methods of the table.
Deletes require that you store the RecId field value in the external system, to be used for delete requests. The following does not cover this.
For the logged table, make the following method:
// Log a reference to the current record so the batch transfer job
// can pick up only changed records later.
void syncRecord()
{
    XXXRecordLog log;               // custom integration log table

    log.RefTableId = this.TableId;  // table of the changed record
    log.RefRecId   = this.RecId;    // RecId of the changed record
    log.insert();
}
Then call this.syncRecord() in the insert and update methods.
In the query of the outbound service, be sure to use an exists join between your table and the log table. This way only changed records are exported.
Make a batch job to do the transfer using the AIF_SendCustomer as a template.
After a synchronous (AifSendMode::Sync) transfer of the records, delete the log records (or mark them as transferred).
Finally, call AifOutboundProcessingService to flush the file:
new AifOutboundProcessingService().run();
Try to keep things simple. It might be simpler to do a comma-separated file export of the changed records!

Windows Workflow - Creating a reusable task list (bookmarks?)

I'm looking at migrating business processes into Windows Workflow; the client app will be ASP.NET MVC, and the workflows are likely to be hosted via IIS.
I want to create a common 'simple task' activity which can be used across multiple workflows. Activity properties would look something like this:
Related customer
Assigned agent
Prompt ("Please review PO #12345")
Text for 'true' button ("Accept")
Text for 'false' button ("Reject")
Variable to store result in
Once the workflow hits this activity, a task should be put into a DB table. The web app will query the table and show the agent a list of tasks they need to complete. Once they hit Accept/Reject, the workflow needs to resume.
It's the last bit that I'm stuck on. What do I need to store in the DB table to resume a workflow? Given that the tasks table will be used by multiple workflows, how would I instantiate the workflow in order to resume it? I've looked at bookmarks, but they assume that you know the type of the workflow you're resuming. Do I need to use reflection, or is there a method in WF where I can pass a workflow ID and it will instantiate it?
You can use a workflow service and control it via the ControlEndpoint.
For more info about the ControlEndpoint, refer to:
http://msdn.microsoft.com/en-us/library/ee358723.aspx
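For completeness, here is a sketch of the bookmark route mentioned in the question, as opposed to the control endpoint approach above. The idea is to persist instances with a SqlWorkflowInstanceStore and store the instance id, the bookmark name and a workflow-type identifier in the tasks table; all names below are illustrative.

using System;
using System.Activities;
using System.Activities.DurableInstancing;

static class TaskResumer
{
    // workflowDefinition must be recreated from the workflow-type identifier
    // stored with the task row (e.g. via a lookup dictionary or
    // Activator.CreateInstance), since rehydration needs the definition.
    public static void Resume(Guid instanceId, string bookmarkName,
                              Activity workflowDefinition, bool accepted)
    {
        var app = new WorkflowApplication(workflowDefinition)
        {
            InstanceStore = new SqlWorkflowInstanceStore(
                "...instance store connection string...")
        };

        app.Load(instanceId);                       // rehydrate persisted state
        app.ResumeBookmark(bookmarkName, accepted); // deliver the Accept/Reject result
    }
}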
