I have recently deployed WSO2 API Manager (2.0.0) as a 2-instance all-in-one cluster (using the Hazelcast AWS scheme) with a MySQL datasource, as specified in this link.
Since I was not able to find a complete step-by-step installation guide for this setup, I would like to clarify a few areas that I am not too sure of.
DepSync via SVN - since this will be manager-to-manager nodes (instead of manager-to-worker nodes), both will have <AutoCommit>true</AutoCommit>. Should we have any concern about this?
DAS - Having DAS as a separate node, should both WSO2AM and WSO2DAS share the same WSO2AM_STATS_DB database?
Publisher - Can we use both publishers (i.e. one at a time)? We noticed that once we publish an API, it takes time for the other publisher to sync the state to Published (even though the new API appears almost immediately on the other publisher as Created).
Thank you.
1) If you enable <AutoCommit>true</AutoCommit> on both nodes, it can cause SVN conflicts when there is parallel publishing from the 2 nodes. Instead, you can publish to multiple gateways from the publisher. For that, you can configure multiple environments in the <Environments> section of api-manager.xml.
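As a rough sketch of what that section can look like (host names, credentials, and environment names below are placeholders, and the exact attributes may vary between APIM versions):

```xml
<Environments>
    <!-- One <Environment> entry per gateway the publisher should push to -->
    <Environment type="hybrid" api-console="true">
        <Name>Gateway Node 1</Name>
        <Description>First gateway (placeholder values)</Description>
        <ServerURL>https://gw1.example.com:9443/services/</ServerURL>
        <Username>admin</Username>
        <Password>admin</Password>
        <GatewayEndpoint>http://gw1.example.com:8280,https://gw1.example.com:8243</GatewayEndpoint>
    </Environment>
    <Environment type="hybrid" api-console="true">
        <Name>Gateway Node 2</Name>
        <Description>Second gateway (placeholder values)</Description>
        <ServerURL>https://gw2.example.com:9443/services/</ServerURL>
        <Username>admin</Username>
        <Password>admin</Password>
        <GatewayEndpoint>http://gw2.example.com:8280,https://gw2.example.com:8243</GatewayEndpoint>
    </Environment>
</Environments>
```

With multiple environments configured, the publisher UI lets you pick which gateways to publish an API to, avoiding the SVN-based DepSync conflict entirely.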
2) Yes, DAS writes summarized data to that DB, and APIM dashboards read data from the same DB.
3) All publisher/store nodes should be in the same cluster; only then can they communicate about API state changes, etc. To be in the same cluster, all of these nodes should have the same clustering domain. You can configure that in the clustering section of axis2.xml.
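For illustration, the relevant fragment of the clustering section in axis2.xml might look like the following (the domain value is a placeholder; what matters is that it is identical on every publisher/store node):

```xml
<clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent"
            enable="true">
    <!-- Matches the Hazelcast AWS scheme mentioned in the question -->
    <parameter name="membershipScheme">aws</parameter>
    <!-- Must be the same value on all publisher/store nodes -->
    <parameter name="domain">wso2.am.domain</parameter>
</clustering>
```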
I am using NebulaGraph Dashboard. Why can't I select the meta service when scaling?
The Metad service stores the metadata of the NebulaGraph database. If the Metad service fails, the entire cluster may break down. Besides, the amount of data processed by the Metad service is small, so scaling it is not recommended. For these reasons, Dashboard disables operations on the Metad service altogether, to prevent the cluster from becoming unavailable due to user misoperation.
I have a Docker instance of wso2-am running with published APIs, which are working fine. However, when the Docker instance is shut down and started up again, the published APIs together with the configurations are lost.
How can I persist the published APIs, so they are mapped and displayed accordingly when the wso2-am Docker instance is started again?
This is a basic issue with Docker: once a container is removed, all data stored inside it is lost as well.
In order to save the data, I had to use the docker commit command to save the previous working state.
APIM-related data is stored in the database (API-related metadata) and the filesystem (synapse APIs, throttling policies, etc.). By default, APIM uses an H2 database. To persist the data, you will have to point this to an RDBMS (MySQL, Oracle, etc.). See https://docs.wso2.com/display/AM260/Changing+the+Default+API-M+Databases
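As a sketch, pointing the APIM database to MySQL means editing the corresponding datasource in repository/conf/datasources/master-datasources.xml; the host, database name, and credentials below are placeholders:

```xml
<datasource>
    <name>WSO2AM_DB</name>
    <description>The datasource used for the API Manager database</description>
    <jndiConfig>
        <name>jdbc/WSO2AM_DB</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <!-- Placeholder host, schema, and credentials -->
            <url>jdbc:mysql://db-host:3306/apimgtdb?autoReconnect=true</url>
            <username>apimuser</username>
            <password>secret</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
            <maxActive>50</maxActive>
            <testOnBorrow>true</testOnBorrow>
            <validationQuery>SELECT 1</validationQuery>
        </configuration>
    </definition>
</datasource>
```

Because the data now lives in an external RDBMS, it survives container removal regardless of what happens to the APIM container itself.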
To persist API-related artifacts (synapse files, etc.), you have to preserve the content in the repository/deployment/server location. For this you could use an NFS mount.
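One way to preserve that location with plain Docker is a named volume; the image tag, container name, and in-container path below are placeholders that depend on your APIM version:

```shell
# Keep synapse artifacts outside the container so they survive
# container removal. Adjust the path to match your APIM version.
docker run -d --name wso2am \
  -v wso2am-server:/home/wso2carbon/wso2am-2.6.0/repository/deployment/server \
  wso2/wso2am:2.6.0
```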
Also, please refer to https://docs.wso2.com/display/AM260/Deploying+API+Manager+using+Single+Node+Instances for information about doing a single-node deployment.
I was previously using a Cognitive Search API key with no issues. Recently, it expired (I assume due to a migration to Azure, but it's unclear).
To get a new API key, I took the following steps:
- created an Azure account
- added the Cognitive Search APIs service (with image search, the service I'm interested in)
- selected the standard package (1k req/month at $3/month, if I recall)
- created the service
When I attempt to use the new API key, either through curl, my app, or the test console, I receive a 401. I recreated the service and the new API key fails as well.
Thanks.
It's been a few months since you asked this question, but having just had this difficulty myself, I thought I'd share the solution.
If you create an instance of the service in Azure, you can presently create it in a whole host of different regions, and it will create successfully and provide you a key. However, if you look at the Azure Services by Region page, you'll see that most of the Cognitive Services are only actually available in the West US region.
If you go back to the Azure portal, delete your instance and recreate it in the West US region, I expect you'll be more successful.
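As a quick way to verify a key once the instance is recreated, a curl call sends it in the Ocp-Apim-Subscription-Key header; the endpoint path below is an illustrative Bing Image Search v5 URL and the key is a placeholder, so adjust both to the service you actually provisioned:

```shell
# A key from the wrong region still returns 401; a valid one returns JSON.
curl -H "Ocp-Apim-Subscription-Key: YOUR_KEY" \
  "https://api.cognitive.microsoft.com/bing/v5.0/images/search?q=kittens"
```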
We'd like to create separate APIM stores in our internal network and DMZ. I've been going through the documentation, and I've seen you can publish to multiple stores (https://docs.wso2.com/display/AM200/Publish+to+Multiple+External+API+Stores), but this is not exactly what I'm looking for, since you need to visit the "main" store to subscribe to an API.
I'd like to have the option, from a single publisher instance, to choose to which stores an API must be published, much like the way you can decide to which API gateways you publish your APIs.
Any thoughts or help on this would be great.
Thanks,
Danny
Once an API is published in the publisher, the API artifacts are stored in the registry, which is shared between the store and the publisher. The API store gets the artifacts from this registry and displays them. So:
- When creating APIs, use tags to differentiate the artifacts, e.g. the tags DMZ and Internal.
- Modify the store to fetch and display artifacts based on those tags.
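The tag-based filtering described above can be sketched as follows; this is a minimal illustration with hypothetical artifact data, not the actual WSO2 registry API:

```python
# Hypothetical API artifacts, each carrying the tags set at creation time
artifacts = [
    {"name": "OrderAPI", "tags": ["DMZ"]},
    {"name": "PayrollAPI", "tags": ["Internal"]},
    {"name": "StatusAPI", "tags": ["DMZ", "Internal"]},
]

def apis_for_store(store_tag, artifacts):
    """Return only the artifacts tagged for the given store."""
    return [a for a in artifacts if store_tag in a["tags"]]

# The DMZ store would display only DMZ-tagged APIs
dmz_names = [a["name"] for a in apis_for_store("DMZ", artifacts)]
print(dmz_names)  # ['OrderAPI', 'StatusAPI']
```

An API tagged for both stores (like StatusAPI here) shows up in both, which matches the "publish to selected stores" behaviour the question asks for.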
Process App information is stored in the LSW_PROJECT table.
Human services and other "tasks" are designed in Process Designer into the Process App. I believe these are stored in LSW_PROCESS and LSW_PROCESS_ITEM.
How do I make a query associating a Process App to the services included in that App?
What is the significance of LSW_TASK table?
Is there any documentation which describes the tables used in IBM BPM?
Querying the product database tables is not a supported method in IBM BPM, and the database schema is not documented in the BPM Knowledge Center. There are REST API and JavaScript API methods to obtain information about the projects (which are stored in LSW_PROJECT). For the JS API, you could go through all process apps with getAllProcessApps; for the REST API, use GET /rest/bpm/wle/v1/processApps.
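As an illustration of the REST route (host, port, and credentials below are placeholders for your BPM environment):

```shell
# Lists the process apps visible to the authenticated user as JSON
curl -u admin:admin \
  "https://bpm-host:9443/rest/bpm/wle/v1/processApps"
```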
The LSW_TASK table holds information about the tasks which people (users) or the system process. These are created for each activity on a BPD diagram and are then deleted when the BPDProcessInstanceCleanup command is run.
If you describe what problem you are trying to solve, I can direct you to specific resources that may already exist.