We'd like to create separate APIM stores in our internal network and DMZ. I've been going through the documentation, and I've seen you can publish to multiple stores (https://docs.wso2.com/display/AM200/Publish+to+Multiple+External+API+Stores), but this is not exactly what I'm looking for, since you still need to visit the "main" store to subscribe to an API.
I'd like to have the option, from a single publisher instance, to check off the stores to which an API must be published, much like the way you can decide to which API gateways you publish your APIs.
Any thoughts or help on this would be great.
Thanks,
Danny
Once an API is published through the Publisher, its artifacts are stored in the registry, which is shared between the Store and the Publisher. The API Store gets the artifacts from this registry and displays them. So:
When creating APIs, use tags to differentiate the artifacts, e.g. tag them DMZ or Internal.
Modify each store to fetch and display only the artifacts with the matching tag.
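To illustrate the second step, here is a minimal sketch (C#) of how a customized store, or any client, could fetch only the APIs carrying a given tag through the Store REST API. The v0.10 context path and the query=tag: syntax are based on APIM 2.0.0; treat the host, port, and version segment as assumptions to adjust for your environment, and note that authentication and certificate handling are omitted.

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    class TagFilteredStoreClient
    {
        // Assumed store host/port and REST API version; adjust per environment.
        const string StoreBase = "https://localhost:9443/api/am/store/v0.10";

        static async Task Main()
        {
            using var client = new HttpClient();
            // Ask the Store REST API for APIs tagged for this zone, e.g. DMZ or Internal.
            var response = await client.GetAsync($"{StoreBase}/apis?query=tag:DMZ");
            response.EnsureSuccessStatusCode();
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }

Each zone's store would then render only its own list, while the shared registry remains a single source of truth for the Publisher.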
How do I add an Azure custom policy for Azure Data Factory that enforces the use of Azure Key Vault for fetching data store credentials during linked service creation, instead of the credentials being entered directly in the ADF linked service? Please suggest ARM or PowerShell methods for implementing the policy.
As of yesterday, the Data Factory Azure Policy integration is available, which means you can now find some built-in policies that can be assigned to ADF.
One of those is exactly what you're asking for, as you can see in the image below. You can find more information here.
Edit: Based on your comment, I'm editing this answer with the info you want. When it comes to custom policies, it's pretty much up to you to come up with them and create what fits your needs. In your particular case, I've created one policy that does what you want, please see here.
This policy will audit your Data Factory linked services and check if they're using a self-hosted integration runtime. Currently, that check is only done for a few types of linked services (if you look at the policy, you can see 5 of them), which means that if you want to check more types of linked services, you'll need to add them to the list of allowed values and select them when assigning the policy definition.
Bear in mind that for some linked service types, such as Key Vault, that check won't make sense, since that service can't use a self-hosted IR.
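Since you asked for ARM or PowerShell methods: as a programmatic alternative, here is a hedged sketch using the Azure.ResourceManager .NET SDK instead, assigning a policy definition at subscription scope. The definition ID below is a placeholder, not the real ID of the built-in or custom policy, and the assignment name is illustrative.

    using Azure;
    using Azure.Identity;
    using Azure.ResourceManager;
    using Azure.ResourceManager.Resources;

    // Sketch: assign an existing Azure Policy definition at subscription scope.
    var client = new ArmClient(new DefaultAzureCredential());
    SubscriptionResource subscription = await client.GetDefaultSubscriptionAsync();

    var assignment = new PolicyAssignmentData
    {
        DisplayName = "Audit ADF linked services for Key Vault / self-hosted IR usage",
        // Placeholder: substitute the resource ID of the built-in or custom
        // policy definition you want to assign.
        PolicyDefinitionId = "/providers/Microsoft.Authorization/policyDefinitions/<definition-guid>"
    };

    await subscription.GetPolicyAssignments()
        .CreateOrUpdateAsync(WaitUntil.Completed, "adf-linked-service-audit", assignment);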
Background:
Xamarin Forms client app
Azure backend with Dot Net
Using Azure offline data sync
Trying to use Azure offline file sync
Related SO questions
There are two more questions I asked here which are somewhat related:
Getting a 404 while using Azure File Sync
Getting a 500 while using Azure File Sync
Solution
As stated in the first link above, I had to create a storage controller for the User entity to be able to log in successfully, even though I do not intend to use files for Users.
As I work further in the app, I am still getting more 404 errors, as I can see in Fiddler. These are similar calls, trying to access an API like the one below:
GET /tables/{EntityName}/{Id}/MobileServiceFiles HTTP/1.1
My Question Now
Do I need a storage controller for every entity I have in my solution? Maybe every entity that inherits from EntityData?
Is there a way I can selectively tell the system which entities are going to work with files and have storage controllers only for them? Like, maybe, marking them with some attribute?
Reference
I am using this blog post to implement Azure File Sync in my app.
To answer my own query (and it is not the answer I wanted to hear): YES. We need a storage controller for every entity, even those that don't have any files to be stored in the storage account. This is a limitation.
I found this info in the comments on the original blog post I was following (I wish I had looked there earlier). To quote the author:
Donna Malayeri [donnam#MSFT]:
It's a limitation of the current storage SDK that you can't specify which tables have files. See this GitHub issue: https://github.com/Azure/azure...
As a workaround, you have to make your own file sync trigger factory.
Here's a sample: https://github.com/azure-appse...
The reason the SDK calls Get/Delete for files in the storage controller is because the server manages the mapping from record to container or blob name. You wouldn't necessarily want to give the client access to the blob account to access arbitrary files or containers, for instance. In the case of delete, the server doesn't even need to give out a SAS token with delete permissions, since it can just authenticate the user and do the delete itself.
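For anyone else hitting this, the per-entity boilerplate is small but unavoidable. Below is a minimal sketch following the StorageController<T> pattern from the blog post referenced above; TodoItem is a stand-in for your own entity, and the method shapes follow the file-sync server samples, so verify them against the SDK version you're using.

    using System.Collections.Generic;
    using System.Net.Http;
    using System.Threading.Tasks;
    using System.Web.Http;
    using Microsoft.Azure.Mobile.Server.Files;
    using Microsoft.Azure.Mobile.Server.Files.Controllers;

    // One of these is needed per entity, even if that entity never stores files.
    public class TodoItemStorageController : StorageController<TodoItem>
    {
        [HttpPost]
        [Route("tables/TodoItem/{id}/StorageToken")]
        public async Task<HttpResponseMessage> PostStorageTokenRequest(string id, StorageTokenRequest value)
        {
            StorageToken token = await GetStorageTokenAsync(id, value);
            return Request.CreateResponse(token);
        }

        // Answers the GET /tables/{EntityName}/{Id}/MobileServiceFiles calls seen in Fiddler.
        [HttpGet]
        [Route("tables/TodoItem/{id}/MobileServiceFiles")]
        public async Task<HttpResponseMessage> GetFiles(string id)
        {
            IEnumerable<MobileServiceFile> files = await GetRecordFilesAsync(id);
            return Request.CreateResponse(files);
        }

        [HttpDelete]
        [Route("tables/TodoItem/{id}/MobileServiceFiles/{name}")]
        public Task Delete(string id, string name)
        {
            return DeleteFileAsync(id, name);
        }
    }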
I use Apigee for both API proxies and documentation, using a customized documentation site.
Following the recent Apigee outage this weekend, when I access my registered application list using my personal login on the documentation portal, I can no longer retrieve my application keys.
I get the error:
STATUS: 404 - Not Found; Communication with the Apigee endpoint is compromised. Cannot get API Products List.
The strange thing is that if I use my admin login at accounts.apigee.com, I can see two of my three applications listed... but one has disappeared. And more worryingly, this portal provides different application keys from the ones that were initially provided through the documentation portal.
I haven't been able to find any good documentation on this. How are these two sites linked together? Why are the keys different on both sites? What has caused my data to go missing?!
Tadhg -
This sounds like an issue that needs investigation by Apigee Global Support.
Would you please create an Apigee Support case? Please provide any applicable details, including your Organization name, the API call(s) you are making, the 3 applications you expect to see, and any other details you think might be helpful to diagnose.
Thanks!
I want to use the Webex API [www.webex.com] to create meetings from my site.
For that I need my own domain in the case of the URL API, used in this way:
"https://yourWebExHostedName.webex.com/yourWebExHostedName/".
And in the case of the XML API, I need a WebExID, SiteID, and PartnerID.
These are mentioned in this official Webex document:
https://developer.cisco.com/documents/4733862/4736679/URL+API+WBS+27+Ref+Guide.pdf
These parameters are available in the testing environment.
But I don't have my own domain to use this API in a production environment.
So I want to know whether it is possible to use this API in a production environment without owning a domain.
Do you have any idea? Have you faced such a problem? I need an urgent solution for this.
For the XML API, you can obtain those parameters from this page (you need to log in or register first to be able to see the form):
https://developer.cisco.com/site/webex-developer/develop-test/try-webex-apis/
To test the API, all requests should be made to the sandbox site https://apidemoeu.webex.com.
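To make that concrete, here is a hedged sketch (C#) of posting a minimal XML API request to the sandbox. The securityContext fields are the ones named above (WebExID, SiteID, PartnerID, plus a password); GetAPIVersion is used as a harmless test call, and all credential values are placeholders.

    using System;
    using System.Net.Http;
    using System.Text;
    using System.Threading.Tasks;

    class WebexXmlApiDemo
    {
        static async Task Main()
        {
            // Placeholders: use the values obtained from the Try WebEx APIs page.
            const string requestXml = @"<?xml version=""1.0"" encoding=""UTF-8""?>
    <serv:message xmlns:serv=""http://www.webex.com/schemas/2002/06/service"">
      <header>
        <securityContext>
          <webExID>YOUR_WEBEX_ID</webExID>
          <password>YOUR_PASSWORD</password>
          <siteID>YOUR_SITE_ID</siteID>
          <partnerID>YOUR_PARTNER_ID</partnerID>
        </securityContext>
      </header>
      <body>
        <bodyContent xsi:type=""java:com.webex.service.binding.ep.GetAPIVersion""
                     xmlns:xsi=""http://www.w3.org/2001/XMLSchema-instance"" />
      </body>
    </serv:message>";

            using var client = new HttpClient();
            var content = new StringContent(requestXml, Encoding.UTF8, "application/xml");
            var response = await client.PostAsync(
                "https://apidemoeu.webex.com/WBXService/XMLService", content);
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }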
No.
You can't go to production without a WebEx domain. Recordings, host users, and attendee users all take up space on the server; to store all of this data, you need your own WebEx-hosted site.
I'm working on a Java console application that needs to go through all the e-mail addresses in the frontend database in Tridion Outbound E-mail 2011 and change a certain extended field of each contact.
I've gone through the Subscription API documentation for clues on how to get a listing of all the e-mail addresses, but I'm getting stuck there. Is there any clean way to do this through the API, without resorting to database queries?
It is not possible to get a list of Contacts using the Subscription API. It is meant primarily for working with single Contacts, who update their profile on your website.
For bulk management of Contacts, you should use Tridion.AudienceManagement.API on your Content Management server instead. The changes will then be synchronized to all of your websites.
You should not change anything directly in the database, as you will get issues with synchronization.
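To make the bulk route concrete, a rough sketch is below. Loud caveat: the AddressBook constructor, GetContacts lookup, and ExtendedDetails accessor are assumptions from memory rather than verified signatures, so check them against the API documentation for your Outbound E-mail version.

    using Tridion.AudienceManagement.API;

    // Hedged sketch: runs on the Content Management server, updates one extended
    // field on every Contact in an address book, and relies on Outbound E-mail
    // to synchronize the change to the websites.
    public static class ContactBulkUpdater
    {
        public static void UpdateExtendedField(int addressBookId, string fieldName, string newValue)
        {
            var addressBook = new AddressBook(addressBookId);        // assumed constructor
            foreach (Contact contact in addressBook.GetContacts())   // assumed lookup
            {
                contact.ExtendedDetails[fieldName].Value = newValue; // assumed accessor
                contact.Save();
            }
        }
    }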