Is it possible to create and trigger email alerts for APIs in Sterling Order Management system? - ibm-sterling

Is there a way to create email alerts in Sterling Order Management System when a specific API is inactive (no activity) for a period of time? For example, if a custom API that reads messages from one queue and places them in another every 10 minutes goes inactive, or no activity is recorded, can the Sterling OMS product fire an email alert?
Thanks in advance

As far as I know, this is not currently possible in Sterling OMS. If the intention is to monitor an integration server and get an alert when messages are not being consumed from a queue, the alert can be created on the MQ queue manager instead. The queue manager can be monitored to trigger an alert when queue depth grows beyond a threshold, or when the last message in the queue is not consumed within a pre-configured interval.
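For illustration, here is a rough sketch of that kind of queue-depth check using the IBM MQ PCF classes from Java. The host, port, channel, queue name, and threshold are placeholders for your environment, and the email call is a stub, so treat this as a starting point rather than a finished monitor:

    import com.ibm.mq.constants.CMQC;
    import com.ibm.mq.constants.CMQCFC;
    import com.ibm.mq.headers.pcf.PCFMessage;
    import com.ibm.mq.headers.pcf.PCFMessageAgent;

    public class QueueDepthMonitor {
        public static void main(String[] args) throws Exception {
            // Placeholder host/port/channel -- point these at your queue manager.
            PCFMessageAgent agent = new PCFMessageAgent("mqhost", 1414, "SYSTEM.DEF.SVRCONN");

            // Ask the queue manager for the current depth of one queue.
            PCFMessage request = new PCFMessage(CMQCFC.MQCMD_INQUIRE_Q);
            request.addParameter(CMQC.MQCA_Q_NAME, "MY.INBOUND.QUEUE");
            request.addParameter(CMQCFC.MQIACF_Q_ATTRS, new int[] { CMQC.MQIA_CURRENT_Q_DEPTH });
            PCFMessage[] responses = agent.send(request);
            int depth = responses[0].getIntParameterValue(CMQC.MQIA_CURRENT_Q_DEPTH);
            agent.disconnect();

            // Depth above the threshold suggests the consumer (e.g. the custom API) is not running.
            if (depth > 100) {
                sendAlertEmail("Queue depth is " + depth + " on MY.INBOUND.QUEUE");
            }
        }

        static void sendAlertEmail(String body) {
            // Stub: plug in JavaMail or whatever notification mechanism you use.
            System.out.println("ALERT: " + body);
        }
    }

Run it from cron (or another scheduler) at whatever interval you care about; that is usually how queue-depth alerting gets wired up when the product itself can't do it.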

We can achieve this by running the Health Monitor agent in Sterling OMS. The Health Monitor agent can be configured to alert system administrators when problems occur, such as when an application server crashes or agent servers are not processing tasks.
To configure the Health Monitor agent, follow these steps:
From the Application Platform tree, choose System Administration > Health Monitor Rules. The Health Monitor Rules window displays in the work area.
Select the applicable option from the list.
To run the Health Monitor agent, run the startHealthMonitor.sh (or startHealthMonitor.cmd on Windows) script located in your /bin directory.

Related

Microservices client acknowledgement and Event Sourcing

Scenario
I am building a courier service system using microservices. I am not sure about a few things, and here is my scenario:
Booking API - this is where the customer places an order
Payment API - this is where we process the payment against a booking
Notification API - this service is responsible for sending notifications after everything is completed
The system uses an event-driven architecture. When a customer places a booking order, I commit a local transaction in the Booking API and publish an event. The Payment API and Notification API are subscribed to their respective events. Once done, the Payment and Notification APIs need to acknowledge back to the Booking API.
My questions are:
After publishing the event, my booking service can't block the call and returns to the client (front end). How will my client app check the status of the transaction, or know that the transaction is completed? Does it poll every couple of seconds? Since this is a distributed transaction, any service can go down and fail to acknowledge back. In that case, how would my client (front end) know, since it will keep waiting? I am considering sagas for the distributed transactions.
What's the best way to achieve all of this?
Event Sourcing
I want to implement event sourcing to keep a complete trail of the booking order. Do I have to implement this in my Booking API with an event store? Or is the event store shared between services, since I am supposed to capture all the events from the different services? What's the best way to implement this?
Many Thanks,
The way I visualize this is as follows (influenced by Martin Kleppmann's talk here and here).
The end user places an order. The order is written to a Kafka topic. Since Kafka has log-structured storage, the order details will be saved in the least possible time. It's an atomic operation ('A' in 'ACID') - all or nothing.
Now, as soon as the user places the order, the user would like to read it back (read-your-write). To achieve this we can write the order data to a distributed cache as well. Although a dual write is not usually a good idea, as it may cause partial failure (e.g. writing to Kafka succeeds but writing to the cache fails), we can mitigate this risk by ensuring that one of the Kafka consumers writes the data to a database. So, even in the rare scenario of a cache failure, the user can eventually read the data back from the DB.
The status of the order written to the cache at the time of order creation is "in progress".
One or more Kafka consumer groups are then used to handle the events: payment and notification are processed, and the final status is written back to the cache and database.
A separate Kafka consumer will then receive the responses from the payment and notification APIs and write the updates to the cache, the DB, and a websocket.
The websocket will then update the UI model, and the changes will be reflected in the UI through event sourcing.
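To make the first step concrete, here is a minimal sketch of the order write using the Kafka Java producer. The broker address, topic name, and JSON payload are assumptions, and the cache write is left as a comment since the cache technology isn't specified:

    import java.util.Properties;
    import java.util.UUID;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class OrderProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
            props.put("acks", "all");                         // durability over latency
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                String orderId = UUID.randomUUID().toString();
                // 1. Append the order to the log; block until the broker acknowledges.
                producer.send(new ProducerRecord<>("orders", orderId,
                        "{\"orderId\":\"" + orderId + "\",\"status\":\"IN_PROGRESS\"}")).get();
                // 2. Write "in progress" to the cache for read-your-write (e.g. Redis), omitted here.
            } catch (Exception e) {
                // Synchronous failure: return an error to the client so they can retry.
                e.printStackTrace();
            }
        }
    }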
Further clarifications based on comment
The basic idea here is that we build a cache using streaming for every service, with the data it needs. For example, the order service needs feedback from the payment and notification services. Therefore, we have these services write their responses to some Kafka topic, which has consumers that write the responses back to the order service's cache.
Based on the durability and atomicity guarantees of Kafka (or any similar technology), the message will never be lost once acknowledged; eventually we get all or nothing - that's the atomicity. If the order service fails to write the order, an error response is sent back to the client synchronously, and the user can retry after some time. If the order service is successful, the responses from the other services must eventually flow back to its cache. If one of the services is down for some time, the response will be delayed, but it will be sent eventually, when the service resumes.
The clients need not poll. The result will be propagated to them through streaming over a websocket. The UI page will listen on the websocket. As the consumer writes the feedback to the cache, it can also write to the websocket, which notifies the UI. Then, if you use something like Angular or ReactJS, the appropriate section of the UI can be refreshed with the value received on the websocket. Until that happens, the user keeps seeing the status "in progress", as written to the cache at the time of order creation. Even if the user refreshes the page, the same status is retrieved from the cache. If the cache value expires under an LRU mechanism, the same value will be fetched from the DB and written back to the cache to serve future requests. Once the feedback from the other services is available, the new result will be streamed over the websocket. On page refresh, the new status would be available from the cache or the DB.
You can pass an identifier back to the client once the booking is completed, and the client can use this identifier to query the status of the subsequent actions, if you can connect them on the back end. You can also send a notification back to the client when the other events are completed. You can do long polling, or you can do push notifications.
Thanks skjagini. Part of my question is to handle the case where the other microservices don't get back in time, or never do. Let's say the payment API finished and charged the client, but didn't notify my order service in time, or only after a very long time. How does my client wait? If we time out the client, the backend may have processed the request after the timeout.
In CQRS, you would separate the commands and the querying. That is, considering your scenario, you can implement all interactions with queues. (There are multiple implementations of CQRS with event sourcing, but in its simplest form):
Client sends a request --> Booking API receives the request --> validates the request (if validation fails, it throws an error back to the user) --> on successful validation, generates a GUID and writes the request message to the payment queue --> passes the GUID back to the user
Payment API subscribes to the payment queue --> after processing the request, writes to the order queue or any other queues
Order API subscribes to the order queue and processes the request.
The user has a GUID that can get them the data for all of the interactions.
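As a rough illustration of this flow, with in-memory stand-ins for the real queue and the query-side store (all names here are hypothetical):

    import java.util.Queue;
    import java.util.UUID;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentLinkedQueue;

    // Command side: validate, assign a GUID, enqueue, hand the GUID back.
    public class BookingCommandHandler {
        private final Queue<String> paymentQueue = new ConcurrentLinkedQueue<>();
        private final ConcurrentHashMap<String, String> statusStore = new ConcurrentHashMap<>();

        public String handle(String bookingRequestJson) {
            if (bookingRequestJson == null || bookingRequestJson.isEmpty()) {
                throw new IllegalArgumentException("invalid booking request"); // error back to the user
            }
            String correlationId = UUID.randomUUID().toString();
            statusStore.put(correlationId, "ACCEPTED");                  // query side serves this immediately
            paymentQueue.add(correlationId + ":" + bookingRequestJson);  // downstream services pick this up
            return correlationId;                                        // client queries with this id
        }

        // Query side (the "Q" in CQRS): downstream consumers update statusStore as they finish.
        public String status(String correlationId) {
            return statusStore.getOrDefault(correlationId, "UNKNOWN");
        }
    }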
If you use pub/sub as in Kafka instead of point-to-point queues, all subsequent systems can read from the same topic, and you don't need to write to each queue separately.
If any of the services fails to process, it should be able to pick up where it left off once it is restarted. If a service goes down in the middle of a transaction, then as long as it rolls back its respective changes, your system should stay in a stable condition.
I'm not 100% sure what you are asking, but it sounds like you should be using a messaging service. As @Saptarshi Basu mentioned, Kafka is good. I would really recommend NATS, although I'm biased because that's the one I work with.
With NATS you can create request-reply messages to interface between the client and the booking service. That's 1-1 communication.
If you have multiple instances of each of your services running, you can use queue groups to load-balance automatically: NATS will just randomly select one instance for you.
And then you can use pub-sub feeds for communication between all of your services.
This will give you a very resilient and scalable architecture, and NATS makes it all incredibly easy.
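For instance, a request-reply round trip with a queue group looks roughly like this with the jnats client (the server URL, subject, and queue group name are made up for illustration):

    import io.nats.client.Connection;
    import io.nats.client.Dispatcher;
    import io.nats.client.Message;
    import io.nats.client.Nats;
    import java.nio.charset.StandardCharsets;
    import java.time.Duration;

    public class NatsBookingExample {
        public static void main(String[] args) throws Exception {
            Connection nc = Nats.connect("nats://localhost:4222"); // assumed local server

            // Booking service: members of the "booking-workers" queue group share the load.
            Dispatcher d = nc.createDispatcher(msg -> {
                String order = new String(msg.getData(), StandardCharsets.UTF_8);
                nc.publish(msg.getReplyTo(), ("accepted:" + order).getBytes(StandardCharsets.UTF_8));
            });
            d.subscribe("booking.create", "booking-workers");

            // Client: 1-1 request-reply with a timeout (reply is null if nothing answers in time).
            Message reply = nc.request("booking.create",
                    "order-123".getBytes(StandardCharsets.UTF_8), Duration.ofSeconds(2));
            System.out.println(new String(reply.getData(), StandardCharsets.UTF_8));

            nc.close();
        }
    }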

Use GAE background thread to trigger SSE to multiple web clients

All,
I have completed the basic GAE "Guestbook" example which uses Google Cloud Endpoints and Google Cloud Messaging. I can successfully add a note to the guestbook and have it appear on all registered devices.
I've also used the super simple Server Sent Event (SSE) mechanism to have a web page initiate an event source and then update itself as events are received. But separate web pages appear to create their own distinct event sources (even if using the same URI to the event source) and thus get their own events at their own times.
The objective here is to create a bit of collaboration such that user actions can come from an Android device or a web page, and the effects of the received action are then pushed to all connected users/devices/web pages.
I have assumed I will need a background module and that both Endpoints and 'normal' web pages / queries would channel the received user action to that background module. I believe I can get that far. Next, I need the background module to trigger a push notification to all interested parties.
I believe I can trigger a Google Cloud Messaging event to registered Android devices from that background module.
But it isn't clear to me how a background module can be the source of an SSE, or how the background module can best communicate with a foreground module that already is the source of an SSE.
I've looked at the Google Queue API, but I have a feeling I'm making something quite easy much more difficult than it needs to be. If you were not going to 'poll' for changes from a web page... and you wanted to receive notifications from an SSE source when changes were made by other users, possibly using Android devices rather than a typical web page, and the deployed application is running on the Google Application Engine, what would you recommend?
Many thanks,
Randy
You are on the right track. I'm not really sure why you are using the background module, but from what I understood you need to:
Your front end module receives an update
You retrieve a list of all devices receiving that update
Use the Queue service to send the update via GCM to every single device
Why use queues? Because front-end instances have a 1-minute time limit per request, and you'll need to queue work in order to go beyond that limit to serve your (potentially) thousands of users.
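As a rough sketch of that fan-out with the App Engine Task Queue API (the queue name, worker URL, and parameter names are hypothetical; the worker servlet behind /tasks/send-gcm would do the actual GCM call):

    import com.google.appengine.api.taskqueue.Queue;
    import com.google.appengine.api.taskqueue.QueueFactory;
    import com.google.appengine.api.taskqueue.TaskOptions;
    import java.util.List;

    public class UpdateFanout {
        // Called from the front-end module when an update arrives.
        public void fanOut(String update, List<String> deviceIds) {
            Queue queue = QueueFactory.getQueue("gcm-pushes"); // hypothetical queue from queue.xml
            for (String deviceId : deviceIds) {
                // Each task becomes a short POST to a worker servlet, so no single
                // request has to fit inside the front-end 1-minute limit.
                queue.add(TaskOptions.Builder
                        .withUrl("/tasks/send-gcm")
                        .param("deviceId", deviceId)
                        .param("payload", update));
            }
        }
    }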
Now, if you already have a backend instance (which does not have the 1-minute limit), you could just iterate over the list and send all the messages in one request. I believe you have a 24-hour request limit, so you should be OK. But in this scenario you don't need the front-end module; you can just hit this server directly.

Delegating Tasks for Mission Critical Application

I'm working on a mission critical application.
The application fetches stock market data from different stock markets like NYSE, NASDAQ, etc. using a third-party service.
Customers can come to the application and add their portfolio (which companies' shares they hold).
They can then set alerts, e.g. notify me when the AAPL price goes above $xxx on NASDAQ, or when the MSFT price goes below $zzz on NYSE.
I have a cron job that fetches market data from the third-party service every minute for all the tickers users have added (AAPL, GOOG, MSFT, etc.).
After I get the data, I fetch all the alerts that users have created and send them notifications via email, SMS, Pushover, Twitter, Facebook message, etc. I also add each notification to the app's database so users can see it in the app when they log in.
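Roughly, that evaluation step looks like this (a simplified sketch with made-up types; the real system loads alerts and prices from the database):

    import java.math.BigDecimal;
    import java.util.List;
    import java.util.Map;

    public class AlertEvaluator {
        enum Direction { ABOVE, BELOW }
        record Alert(String userId, String ticker, Direction direction, BigDecimal threshold) {}

        public void evaluate(List<Alert> alerts, Map<String, BigDecimal> latestPrices) {
            for (Alert alert : alerts) {
                BigDecimal price = latestPrices.get(alert.ticker());
                if (price == null) continue; // no quote this cycle; worth flagging separately
                boolean triggered = alert.direction() == Direction.ABOVE
                        ? price.compareTo(alert.threshold()) > 0
                        : price.compareTo(alert.threshold()) < 0;
                if (triggered) {
                    enqueueNotification(alert, price); // workers fan out to email/SMS/etc.
                }
            }
        }

        void enqueueNotification(Alert alert, BigDecimal price) {
            // Stub: serialize and push to the notification queue here.
            System.out.println("notify " + alert.userId() + ": " + alert.ticker() + " at " + price);
        }
    }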
Now, since this is a time-sensitive application, failure to fetch data may result in a big loss, since customers are paying for time-critical data.
Currently, I'm pushing all of the notification sending to a queue; a worker (on my server) consumes the queue and sends the notifications.
Are there any better ways to delegate as much work as possible to third-party servers?
Would you recommend using an Iron.io worker so that it does the job of sending the notifications as well?
And maybe also fetching the data from the market?
Thanks!
Architecturally there are a number of approaches, but it sounds as if you're making the right choices. Using a queue to decouple the producer from the notification process makes sense. This enables a proper SOA architecture where you can change/update/evolve various parts of the app independently, without worrying too much about tightly coupled code.
That said, your question is specifically around offloading to third parties. There are third parties that can abstract the notification part out of your code. I'm not super familiar with them but there are many options: PubNub, Pusher, Twilio, SendGrid, Mailgun, AWS SNS, etc.
I work for Iron.io. We have many customers doing exactly what you're trying to accomplish: creating workers that become little mini-services and calling them from either push events, scheduled tasks, or on-demand. This frees you up from having to deal with the queuing, routing, scheduling, and worker/background server capacity.
We're happy to help you architect things right from the beginning, just reach out to support@iron.io.

Cloud service monitor changes

I have a cloud service, and in the cloud service (in the Azure view) there is a tab called Monitor where you can add settings (like key/value pairs). I want to track whether a user made changes to it, and when, like auditing. Is there a process I should invoke to track these changes, with user name and timestamps?
If I'm not mistaken, what you're looking for is the Operation Logs functionality. It is available in the Azure Portal (https://manage.windowsazure.com). Once you log in to the portal, click on MANAGEMENT SERVICES and then OPERATION LOGS. The operation you would want to track is Change Configuration (or something like that).
If you want to track it programmatically, the Service Management API operation you would want to invoke is List Subscription Operations.
I believe the management portal only makes available a finite list of performance counters for monitoring. And it sounds like perhaps you are trying to add a custom event, which you would not in turn be able to add to that list.
So the broader question becomes how to log in-app events in a way that can be monitored. To that end, I'd recommend first looking at Azure Diagnostics. This gives you a fairly low-impact way to capture telemetry about your application. You can then in turn scan the resulting logs and act on the events you need to capture.

Processing large numbers of emails from an asp.net site

The website I am developing will be sending tens of thousands of emails daily (and that number will be growing) - registration, notifications, alerts, etc. I will have a dedicated server box that will actually generate and send the emails on request from the ASP.NET application (the ASP.NET app calls a WCF method on the email box, providing various parameters for each email).
Now, I am trying to figure out the best way of queueing those email jobs on the email server. The call from the ASP.NET app has to be asynchronous so that the app doesn't wait for the email server to create and send the actual email.
Originally I was just creating a worker thread for each email job request, but the number of emails is going to be really high and I'm not sure creating hundreds of simultaneous threads is a good idea performance-wise. My next thought is to use MSMQ, but I'm not sure about its performance and scalability.
Any ideas/production examples?
Thanks!
At a previous job, we had to queue messages for delivery, much like you are explaining. We decided to create a database record representing each message. At message creation time, we created the mail message in .NET and then saved it into the database. A separate process (a Windows service built in .NET) would periodically check whether there were messages to be sent (delivery date in the past and status unsent). It would then re-create the mail message from the information it received from the stored procedure and send the message along its merry way.
The procedure that returned the messages ready for sending also performed throttling logic based on the day and time of the call (we allowed more of our bandwidth to be used at night and on weekends than during the day).
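The pattern is language-agnostic. A rough sketch of such a poller (here in Java against a hypothetical outbound_email table, though the original was a .NET Windows service calling a stored procedure):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.time.LocalTime;

    public class MailQueuePoller {
        // Stand-in for the throttling logic: bigger batches at night than during the day.
        static int batchSizeForNow() {
            int hour = LocalTime.now().getHour();
            return (hour < 7 || hour > 19) ? 500 : 100;
        }

        public static void main(String[] args) throws Exception {
            try (Connection db = DriverManager.getConnection(
                    "jdbc:postgresql://localhost/mail", "app", "secret")) { // placeholder credentials
                PreparedStatement ps = db.prepareStatement(
                        "SELECT id, recipient, subject, body FROM outbound_email " +
                        "WHERE status = 'UNSENT' AND deliver_after <= now() " +
                        "ORDER BY deliver_after LIMIT ?");
                ps.setInt(1, batchSizeForNow());
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        // Send via SMTP (e.g. JavaMail), then mark the row SENT so later
                        // events (bounce, open, click) can be joined back to this record.
                        System.out.println("sending to " + rs.getString("recipient"));
                    }
                }
            }
        }
    }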
We also needed to track bouncebacks, message opens, and click-throughs, which meant that having a database record representing each email was necessary, so we could relate events (bounce, open, click) to individual emails and recipients.
