WSO2 API Manager shuts down on its own after a few days

WSO2 API Manager shuts down on its own after a few days. Is there a configuration I need to tweak so that this doesn't happen in production?

Based on the provided log, someone has triggered the shutdown hook, possibly by mistake:
INFO {org.wso2.carbon.core.init.CarbonServerManager} - Shutdown hook triggered....
WSO2 API Manager normally won't shut down on its own. The JVM runs shutdown hooks when the process receives SIGTERM or SIGINT (for example, kill <pid> or Ctrl+C) or when the host goes down, so check for external causes such as an automated script, a host reboot, or the Linux OOM killer.
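If the server runs on Linux, a quick diagnostic sketch for finding what sent the signal (assuming syslog-style logging; log paths vary by distribution):

grep -i "killed process" /var/log/syslog    # OOM killer traces
last -x shutdown reboot                     # recent host shutdowns and reboots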

Related

Logs are not being sent to Stackdriver using the Stackdriver log4net integration

I'm trying to use the Google.Cloud.Logging.Log4Net library in a .NET web application to send logs to Google Stackdriver. I've followed the steps from the Stackdriver Logging how-to guide, Option 2: https://cloud.google.com/logging/docs/integrate/dotnet
It works from my local machine when I provide the service account credentials, but it does not send logs when I run it from a GCE instance.

SNS getEndpointAttributes Returns Old Data After EventEndpointUpdated Event

When attaching a topic to an SNS application's "Endpoint updated" configurable topic, I'm experiencing some unexpected behavior. Per AWS's documentation on SNS Application Events, I should receive an event on my configured topic when a platform endpoint has been disabled or its token has changed.
In my case, I have a Lambda function subscribed to this topic that retrieves the platform endpoint's attributes via a call to the AWS JavaScript SDK's SNS.getEndpointAttributes, so that I can check which attributes have changed and either delete the endpoint or update the associated token in my persistent storage. However, this call returns the endpoint's attributes with Enabled = true, which prevents me from taking the corrective action, even though the AWS SNS console shows the endpoint as disabled (Enabled = false).
Have others experienced similar inconsistencies, and if so, what's the best practice to get around them? Thanks for any input!
I was facing a similar problem when Amazon notified me of SNS application events via HTTP. To work around it, I delayed the execution of the code that syncs these endpoint updates with my database: I scheduled a job for my background queue worker and deferred its execution until 30 seconds after Amazon's HTTP notification arrived. I don't know whether this is a best practice, but it works in my scenario.
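A minimal TypeScript sketch of the same delayed re-check for the Lambda setup in the question, assuming an SQS delay queue between two functions (the queue URL, environment variable, and handler names are hypothetical; uses aws-sdk v2 and @types/aws-lambda):

import { SNS, SQS } from "aws-sdk";
import type { SNSEvent, SQSEvent } from "aws-lambda";

const sns = new SNS();
const sqs = new SQS();

// First function: subscribed to the "Endpoint updated" topic; defers the sync.
export async function onEndpointUpdated(event: SNSEvent): Promise<void> {
  for (const record of event.Records) {
    await sqs.sendMessage({
      QueueUrl: process.env.SYNC_QUEUE_URL!, // hypothetical delay queue
      MessageBody: record.Sns.Message,       // application event JSON, contains EndpointArn
      DelaySeconds: 30,                      // wait out the stale reads
    }).promise();
  }
}

// Second function: triggered by the queue ~30s later, when the read should be consistent.
export async function syncEndpoint(event: SQSEvent): Promise<void> {
  for (const record of event.Records) {
    const { EndpointArn } = JSON.parse(record.body);
    const { Attributes } = await sns.getEndpointAttributes({ EndpointArn }).promise();
    if (Attributes?.Enabled === "false") {
      // delete the endpoint or update the stored token in persistent storage here
    }
  }
}

Pushing the wait into an SQS delay queue keeps it out of Lambda execution time, and DelaySeconds can go up to 900 if 30 seconds turns out to be too short.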

ServiceDataPublisherAdmin not set in WSO2 API Manager gateway

I am setting up WSO2 API Manager 1.10.x with DAS 3.0.1 to publish API statistics using MySQL. My API Manager setup is clustered, with the gateway worker node on a separate VM. I followed the documentation to enable analytics for API Manager via the UI, and also this post to manually enable analytics for the gateway worker node: http://blog.rukspot.com/2016/05/configure-wso2-apim-analytics-using-xml.html After setup I restarted all servers and everything seemed fine, but when I make a request to a published API, the gateway does not publish any statistics to the DAS receiver, and there is no data in the DAS summary tables either.
By debugging the WSO2 gateway, I was able to narrow it down to the fact that
private static ServiceDataPublisherAdmin dataPublisherAdminService; inside org.wso2.carbon.apimgt.impl.internal.APIManagerComponent never gets set, so APIMgtUsageHandler does not do anything.
Any idea on what could cause this to happen?
Thanks.
Figured it out myself.
The bundle org.wso2.carbon.statistics_4.4.8 and two other statistics bundles are required for the gateway worker to publish statistics data to DAS, but the worker profile shipped with WSO2 API Manager 1.10.0 excludes them.
To work around it, start WSO2 on the worker node with -Dprofile=default.
You can use the OSGi console to confirm the activation of these bundles; once a bundle is activated, the classes inside it are instantiated and the gateway starts publishing statistics to DAS when you invoke a published API.
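For example, a sketch of the workaround on the worker node, assuming the stock wso2server.sh startup script (-DosgiConsole opens the Equinox console, where ss filters the bundle list by name):

sh bin/wso2server.sh -Dprofile=default -DosgiConsole
osgi> ss org.wso2.carbon.statistics

The statistics bundles should show as ACTIVE rather than absent or merely INSTALLED.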

Stackdriver logs not available for Cloud ML jobs since migration to V2

Since migrating to V2, logs from Cloud ML jobs are no longer accessible in the Stackdriver Logging console. The last log line displayed is
Waiting for Tensorflow to start.
The job is executed and completes successfully; I just can't access its output in the logs.
All Stackdriver APIs are enabled for the project.
There are no known issues with Cloud ML's Stackdriver logging. The fact that you see "Waiting for Tensorflow to start." indicates you are receiving log messages from Cloud ML.
If logs from your Python/TensorFlow program are missing, that usually indicates Cloud ML hasn't been authorized to send logs to Stackdriver Logging for your project. To check permissions, do the following (see the command sketch after this list):
Identify the Cloud ML service account by following these instructions
In the Cloud Console, select the IAM tab
Verify that the Cloud ML service account is listed and has the Logs Writer permission
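Equivalently, a gcloud sketch to list the roles bound to that service account (the cloud-ml-service filter matches the account name used later in this thread; substitute your own):

gcloud projects get-iam-policy <project-id> \
  --flatten="bindings[].members" \
  --filter="bindings.members:cloud-ml-service" \
  --format="table(bindings.role)"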
This problem also took me two weeks of frustrated searching online, until I came across this post. I did not go through the "migration to V2" the OP mentions, but I simply could not get any application logs in Stackdriver, only the system logs of job start and completion. Following Jeremy's reply solved the problem.
To make Jeremy's reply simpler to follow: essentially you add the ML service account
cloud-ml-service@<project-id>.iam.gserviceaccount.com
to your project's IAM members, with at least the "Logs Writer" role.
You can get "project-id" by:
gcloud config list project --format "value(core.project)"
I also assigned the Project -> Editor role to allow bucket access.
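The role can also be granted from the command line; a sketch assuming the service account address above:

gcloud projects add-iam-policy-binding <project-id> \
  --member "serviceAccount:cloud-ml-service@<project-id>.iam.gserviceaccount.com" \
  --role roles/logging.logWriter

roles/logging.logWriter is the identifier behind the "Logs Writer" name shown in the console.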

Workflow Foundation - Pending Timers on Server Restart

I have a workflow with a Delay activity that causes it to persist; after the delay expires, a notification is sent. The workflow is exposed via Workflow Services.
This works perfectly, except when the server restarts or is brought down for maintenance for a day or two and the timers have already expired. In that case, the notification is not sent until the first request related to the particular workflow arrives at the WCF endpoint.
I have to mention that the application pool is already set to alwaysRunning.
Is there anything else that has to be added for IIS/AppFabric to check pending timers that should have already been executed?
I'm using Workflow Foundation 4.5.
The issue was caused by AppFabric not being set up properly. AppFabric's Workflow Management Service is what polls the instance store and resumes workflows whose timers expired while the host was down; without it, a persisted workflow only wakes up when the next WCF message arrives.
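For reference, a minimal web.config sketch of the instance-store behavior that makes expired timers resumable (the connection string name is a placeholder; runnableInstancesDetectionPeriod controls how often runnable instances are picked up):

<serviceBehaviors>
  <behavior>
    <sqlWorkflowInstanceStore
        connectionStringName="WorkflowInstanceStore"
        runnableInstancesDetectionPeriod="00:00:05"
        instanceCompletionAction="DeleteAll" />
    <workflowIdle timeToUnload="00:00:00" />
  </behavior>
</serviceBehaviors>

With the Workflow Management Service running and the application pool set to alwaysRunning, instances with expired timers are detected and resumed without waiting for an incoming request.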
