Storing and Retrieving Published APIs in WSO2 AM

I have a docker instance of wso2-am running with published APIs, which are working fine. However, when the docker instance is shut down and started up again, the published APIs together with their configurations are lost.
How can I persist the published APIs so that they are mapped and displayed correctly once the wso2-am docker instance is started again?

This is the basic issue with Docker: once the container is removed or recreated (for example, by running the image again), all the data written inside it is lost with it.
In order to save the data, I had to use the docker commit command to save the previous working state.
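Roughly, that looked like the following (container and image names here are placeholders, and port mappings are omitted):
docker commit wso2am-container wso2am-snapshot:working
docker run -d --name wso2am-new wso2am-snapshot:working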

APIM-related data is stored in the database (API-related metadata) and on the filesystem (Synapse APIs, throttling policies, etc.). By default APIM uses an H2 database. To persist the data, you will have to point this to an RDBMS (MySQL, Oracle, etc.). See https://docs.wso2.com/display/AM260/Changing+the+Default+API-M+Databases
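As a rough illustration (not the complete set of settings), the WSO2AM_DB entry in repository/conf/datasources/master-datasources.xml would point to MySQL along these lines; the host, database name and credentials below are placeholders:
<datasource>
    <name>WSO2AM_DB</name>
    <description>The datasource used for the API Manager database</description>
    <jndiConfig>
        <name>jdbc/WSO2AM_DB</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:mysql://mysql-host:3306/apim_db?autoReconnect=true</url>
            <username>apim_user</username>
            <password>apim_password</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
            <maxActive>80</maxActive>
            <maxWait>60000</maxWait>
            <testOnBorrow>true</testOnBorrow>
            <validationQuery>SELECT 1</validationQuery>
        </configuration>
    </definition>
</datasource>
The MySQL JDBC driver jar also has to be copied to repository/components/lib, and the registry/user-management databases need the same treatment; the linked page covers the exact steps.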
To persist API-related artifacts (Synapse files, etc.), you have to preserve the content of the repository/deployment/server location. For this you could use an NFS mount or a Docker volume.
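A minimal sketch of running the container with that directory on a mounted volume (the image name, product version and container-side path are assumptions; adjust them to your image):
docker run -d --name wso2am \
  -p 9443:9443 -p 8243:8243 -p 8280:8280 \
  -v /mnt/nfs/wso2am/deployment:/home/wso2carbon/wso2am-2.6.0/repository/deployment/server \
  wso2/wso2am:2.6.0
The same idea applies to docker-compose volumes or a Kubernetes persistent volume.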
Also, refer to https://docs.wso2.com/display/AM260/Deploying+API+Manager+using+Single+Node+Instances for information about doing a single-node deployment.

Related

WSO2 API Manager as 2 instance all-in-one setup

I have recently deployed WSO2 API Manager (2.0.0) as a 2-instance all-in-one cluster (using the Hazelcast AWS scheme) with a MySQL datasource, as specified in this link.
Since I was not able to find a complete step-by-step installation guide for this setup, I would like to clarify a few areas that I am not too sure of.
Depsync via SVN - since this will be manager-to-manager nodes (instead of manager-to-worker nodes), both will have <AutoCommit>true</AutoCommit>. Should we have any concern about this?
DAS - having DAS as a separate node, should both WSO2 AM and WSO2 DAS share the same WSO2AM_STATS_DB database?
Publisher - can we use both publishers (i.e. one at a time)? We noticed that once we publish an API, it takes time for the other publisher to sync its state to Published (even though the new API appears almost immediately on the other publisher as Created).
Thank you.
1) If you enable <AutoCommit>true</AutoCommit> on both nodes, it can cause SVN conflicts if there is parallel publishing from the 2 nodes. Instead, you can publish to multiple gateways from the publisher. For that, you can configure multiple environments in the <Environments> section of api-manager.xml (see the sketch after this list).
2) Yes, DAS writes summarized data to that DB, and APIM dashboards read data from the same DB.
3) All publisher/store nodes should be in the same cluster; only then can they communicate about API state changes, etc. To be in the same cluster, all of these nodes should have the same clustering domain. You can configure that in the clustering section of axis2.xml (also sketched below).
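For point 1, a rough sketch of the <Environments> section of api-manager.xml with two gateway environments (the URLs and credentials are placeholders; verify the element names against your APIM version):
<Environments>
    <Environment type="production" api-console="true">
        <Name>Production Gateway 1</Name>
        <Description>Gateway on node 1</Description>
        <ServerURL>https://gw1.example.com:9443/services/</ServerURL>
        <Username>admin</Username>
        <Password>admin</Password>
        <GatewayEndpoint>http://gw1.example.com:8280,https://gw1.example.com:8243</GatewayEndpoint>
    </Environment>
    <Environment type="production" api-console="true">
        <Name>Production Gateway 2</Name>
        <Description>Gateway on node 2</Description>
        <ServerURL>https://gw2.example.com:9443/services/</ServerURL>
        <Username>admin</Username>
        <Password>admin</Password>
        <GatewayEndpoint>http://gw2.example.com:8280,https://gw2.example.com:8243</GatewayEndpoint>
    </Environment>
</Environments>
For point 3, the clustering domain lives in the clustering section of axis2.xml; a minimal sketch (the domain value is a placeholder, and the other Hazelcast parameters are omitted):
<clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent" enable="true">
    <parameter name="membershipScheme">aws</parameter>
    <!-- all publisher/store nodes must use the same domain value -->
    <parameter name="domain">wso2.am.domain</parameter>
</clustering>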

Do I need a Storage Controller for all my model classes in the app to use azure file sync?

Background:
Xamarin.Forms client app
Azure backend with .NET
Using Azure offline data sync
Trying to use Azure offline file sync
Related SO questions
There are 2 more questions I asked here which are somewhat related:
Getting a 404 while using Azure File Sync
Getting a 500 while using Azure File Sync
Solution
As stated in the first link above, I had to create a storage controller for the User entity to be able to log in successfully, even though I do not intend to use files for Users.
As I work further in the app, I am still getting more 404 errors, as I can see in Fiddler. These are similar calls which are trying to access an API like the one below:
GET /tables/{EntityName}/{Id}/MobileServiceFiles HTTP/1.1
My Question Now
Do I need a storage controller for every entity I have in my solution? Maybe every entity that inherits from EntityData?
Is there a way I can selectively tell the system which entities are going to work with files and have storage controllers only for them? For example, by marking them with some attribute?
Reference
I am using this blog post to implement Azure File Sync in my app.
To answer my own query (and it is not the answer I wanted to hear): YES. We need a storage controller for all entities, even if they don't have any files to be stored in the storage account. This is a limitation.
I found this info in the comments of the original blog I was following (I wish I had looked earlier). To quote the author:
Donna Malayeri [donnam#MSFT]:
It's a limitation of the current storage SDK that you can't specify which tables have files. See this GitHub issue: https://github.com/Azure/azure...
As a workaround, you have to make your own file sync trigger factory.
Here's a sample: https://github.com/azure-appse...
The reason the SDK calls Get/Delete for files in the storage controller is because the server manages the mapping from record to container or blob name. You wouldn't necessarily want to give the client access to the blob account to access arbitrary files or containers, for instance. In the case of delete, the server doesn't even need to give out a SAS token with delete permissions, since it can just authenticate the user and do the delete itself.

Separate APIM Stores in internal and DMZ network

We'd like to create separate APIM stores in our internal network and DMZ. I've been going through the documentation and I've seen that you can publish to multiple external stores (https://docs.wso2.com/display/AM200/Publish+to+Multiple+External+API+Stores), but this is not exactly what I'm looking for, since you need to visit the "main" store to subscribe to an API.
I'd like to have the option, from a single publisher instance, to choose to which stores an API must be published, much like the way you can decide to which API gateways you publish your APIs.
Any thoughts or help on this would be great.
Thanks,
Danny
Once an API is published in the publisher, the API artifacts are stored in the registry, which is shared between the store and the publisher. The API store gets the artifacts from this registry and displays them. So:
When creating APIs, use tags to differentiate the artifacts, e.g. tag them DMZ or Internal.
Modify the store to fetch artifacts based on those tags and display them.

Do I need to change to Redis Cache in Azure from Nov 30 2016?

I have a single instance of a website hosted in Azure, which uses the in-role session cache. This uses some very basic calls to pass data between pages, such as Session("MustChangePassword") = "True"
Microsoft have emailed Azure customers saying that the in-role and managed caches are going to be retired, and that Azure Redis cache should be used instead:
Azure Managed Cache Service and Azure In-Role Cache to be retired November 30, 2016
As a reminder, Azure Managed Cache Service and Azure In-Role Cache service will remain available for existing customers until November 30, 2016. After this date, Managed Cache Service will be shut down, and In-Role Cache service will no longer be supported. We recommend that you migrate to Azure Redis Cache. For more information on migrating, please visit the Migrate from Managed Cache Service to Azure Redis Cache documentation webpage. For more information about the retirement, please visit the Azure Blog.
Is this still going to affect cloud services that use just one instance, or will Session data completely break after this change is made if I don't do anything?
If I do have to change to Redis cache, I see from the supplied links that I can download it as a NuGet package and make changes to the web.config file. However, I am then unsure as to whether I'd need to make changes to the code, or whether the calls to Session("Whatever") would still work without any further changes needed.
So in summary:
1) Do I need to change to the new cache?
2) If so, what code changes do I need to make over and above configuring the new cache?
This announcement is at least one year old, if not older.
So in summary:
Do I need to change to the new cache?
If so, what code changes do I need to make over and above configuring the new cache?
To answer your questions:
YES
Check out the documentation links you quoted.
And by the way, you cannot download Azure Redis Cache as a NuGet package. What you download is a client SDK/API to work with Azure Redis Cache. Azure Redis Cache is a separate service in Azure, which is also billed separately.
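For what it's worth, if you do move session state to Azure Redis Cache, calls like Session("MustChangePassword") do not need to change; the provider is swapped in through web.config after installing the Microsoft.Web.RedisSessionStateProvider NuGet package. A rough sketch (the cache host name and access key are placeholders):
<system.web>
  <sessionState mode="Custom" customProvider="RedisSessionStateStore">
    <providers>
      <add name="RedisSessionStateStore"
           type="Microsoft.Web.Redis.RedisSessionStateProvider"
           host="yourcache.redis.cache.windows.net"
           port="6380"
           accessKey="your-access-key"
           ssl="true" />
    </providers>
  </sessionState>
</system.web>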
So it turns out that using a session call such as Session("MustChangePassword") = "True" is still absolutely fine in the case of a single-instance machine.
It may not be supported, but it still works, and I have not had to add any other kind of session management to this project.
Everything is working exactly as it was before the announcement, and it continues to work now that the deadline has passed.
So in summary:
1) Do I need to change to the new cache?
2) If so, what code changes do I need to make over and above configuring the new cache?
The answers to the above questions were 1) No, and 2) No changes needed.

How to access file from another application's directory on Bluemix?

This is my problem scenario:
1. Create 2 apps.
2. App1 continuously pulls tweets and stores the JSON file in its /data folder.
3. App2 picks up the latest file from the /data folder of App1 and uses it.
I have used R and its corresponding buildpack to deploy the apps on Bluemix.
How do I access /data/file1 in App1 from App2? I.e., can I do something like this in the App2 source file:
read.csv("App1/data/Filename.csv")
Will Bluemix understand what the App1 folder points to?
Bluemix is a Platform as a Service. This essentially means that there is no filesystem in the traditional sense. Yes, your application "lives" in a file structure on a type of VM, but if you were to restage or redeploy your application at any time, changes to the filesystem would be lost.
The "right" way to handle this data that you have is to store it in a NoSQL database and point each app to this DB. Bluemix offers several options, depending on your needs.
MongoDB is probably one of the easier and more straightforward DBs to use and understand. Cloudant is also very good and robust, but has a slightly higher learning curve.
Once you have this DB set up, you could poll it for new records periodically, or better yet, look into using WebSockets for pushing notifications from one app to the other in real-time.
Either way, click the Catalog link in the Bluemix main navigation and search for either of these services to provision and bind them to your app. You'll then need to reference them via the VCAP_SERVICES environment object, which you can learn more about here.
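For example, with the R buildpack mentioned in the question, both apps could read the credentials of a shared, bound Cloudant service from VCAP_SERVICES and talk to the same database over HTTP. This is only a sketch under assumptions (the service label cloudantNoSQLDB, a pre-created "tweets" database, and the jsonlite and httr packages being available in your buildpack):
library(jsonlite)
library(httr)
# Credentials of the bound Cloudant service (label assumed to be "cloudantNoSQLDB")
vcap <- fromJSON(Sys.getenv("VCAP_SERVICES"), simplifyVector = FALSE)
cred <- vcap[["cloudantNoSQLDB"]][[1]][["credentials"]]
db_url <- paste0(cred$url, "/tweets")  # "tweets" database created beforehand
# App1: store a tweet as a JSON document instead of writing to /data
POST(db_url, body = list(text = "hello from App1", ts = as.numeric(Sys.time())), encode = "json")
# App2: list documents from the shared database instead of reading App1's filesystem
resp <- GET(paste0(db_url, "/_all_docs"), query = list(include_docs = "true", limit = 10))
docs <- content(resp, as = "parsed")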
You can't access files from another app on Bluemix. You should use a database service like Cloudant to store your JSON. Bind the same service to both apps.
Using something like Cloudant or the Object Storage service would be a great way to share data between two apps. You can even bind the same service to 2 apps.
Another solution would be to create a microservice that acts as your persistence layer and stores your data for you. You could then create an API on top of it that both of your apps could call.
As stated above, storing information on disk is not a good idea for a cloud app. Go check out http://12factor.net; it describes the no-nos of writing a true cloud-based app.
