I was asked the following question in an interview:
I have a REST service developed using Spring Boot. It is running in production and is consumed by 500 other services. I want to change some attributes of this service for new consumers. How do I achieve this without impacting the existing 500 consumers?
Just add a new method with the new attributes for the new consumers and leave the old one as it is.
Add a new version of your web service (using URL-based versioning) and give the new consumers the URL of the new version.
REST versioning
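For illustration, URL-based versioning in Spring Boot could look roughly like the sketch below. The controller, paths and the CustomerV1/CustomerV2 records are hypothetical placeholders, not the actual service's types; the point is that the v1 endpoint and its response shape stay untouched while new consumers call v2.

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class CustomerController {

    // v1 response shape: unchanged, so the 500 existing consumers are unaffected
    record CustomerV1(long id, String name) {}

    // v2 response shape: adds the new attribute, used only by new consumers
    record CustomerV2(long id, String name, String email) {}

    // Existing consumers keep calling the v1 URL, which is left as it is
    @GetMapping("/api/v1/customers/{id}")
    public CustomerV1 getCustomerV1(@PathVariable("id") long id) {
        return new CustomerV1(id, "Jane Doe");
    }

    // New consumers are given the v2 URL, which exposes the new attribute
    @GetMapping("/api/v2/customers/{id}")
    public CustomerV2 getCustomerV2(@PathVariable("id") long id) {
        return new CustomerV2(id, "Jane Doe", "jane@example.com");
    }
}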
You can do this in two ways.
You must have provided API documentation to the consumers, e.g. on Confluence or in some other document. Update that document and share the updated version with the consumers.
You can also do it using an API definition (typically an OpenAPI/Swagger specification written in YAML). If you add a new attribute, update the API definition and share it with all the consumers.
I wonder how you can append a second API to an already registered API in Azure API Management via an ARM deployment.
If I use the same value for the name property in my Microsoft.ApiManagement/service/apis resource, it overwrites the whole API instead of appending to it. I can't find a property in the ARM reference docs to specify that I want to append to the API instead of overwriting it: https://learn.microsoft.com/en-us/azure/templates/microsoft.apimanagement/2019-01-01/service/apis
I want to accomplish the same result via ARM that I can achieve via the Azure portal's import menu.
This is also described in the docs: https://learn.microsoft.com/en-us/azure/api-management/add-api-manually#append-other-apis
That is not easily possible at the moment. The "append" logic is implemented in the UI, but it relies only on publicly available ARM calls. You could inspect the calls it makes to ARM to append one API to another and try to reproduce them "by hand".
I have a legacy BizTalk app that has about 10 orchestrations and 20 maps built on external web service schemas. Now this old web service will be removed and replaced with a new web service with similar (almost the same) schemas.
What would be the best strategy for replacing the old web service's schemas in all orchestrations and maps? I can go through every orchestration and replace all message types, ports and transformations manually.
Is there a better way?
Please advise.
ACK: I know that a more convenient way of building BizTalk applications is to create an internal type (XSD) and design all orchestrations and maps around that internal type, then create one map to transform from the external (web service) type to the internal one, so that if the web service changes, only that one map has to change.
Unfortunately, this is not the way the legacy app was built.
UPD:
The problem is that the old web service types are used in a lot of orchestrations and maps. If I pull the old web service out and import the new one, I will get errors in all of them, so I would have to manually change all of them to use the new types. I'm trying to find a way to cheat and not have to change them.
new web service with similar (almost the same) schemas.
If that is indeed the case, you probably don't have to replace much, if anything. Just update the existing BizTalk app with the 'minor' changes to accommodate the new service.
However, if the current schema is used in multiple places, you can just use a Map on the Receive Port to transform the new message into the old one. It's perfectly fine if the Root Element and Namespace are the same; all you would need to do is set the old schema explicitly in the XmlDisassembler. Maps always work on the .NET type only.
I want to create a service that accesses an external API, and I want to cache common requests to that API inside the service. It depends on 3 other services, but I want to give it its own cache instance; the MemoryDistributedCache might later be changed to something else.
// Register the API client as a singleton and hand it its own private cache
// instance instead of the shared IDistributedCache registration.
services.AddSingleton<ISomeApi, SomeApi>(provider => new SomeApi(
    Configuration.Get<Options>(),
    new MemoryDistributedCache(new MemoryCache(new MemoryCacheOptions())),
    provider.GetService<ILogger<SomeApi>>()
));
Now from my controllers I can access the API via DI. It works nicely, but I'm not sure if it's some sort of anti-pattern or if there are better ways of doing it.
I mean, the real problem is separating the internal cache: requesting IDistributedCache from one service would give me the same object as if I requested it from another service, and they must be separated.
This sounds like something you could use a proxy or decorator pattern for. The basic problem is that you have a service that does some data access, and another service responsible for caching the results of the first service. I realize you're not using a repository per se, but nonetheless the CachedRepository pattern should work for your needs. See here:
http://ardalis.com/introducing-the-cachedrepository-pattern
and
http://ardalis.com/building-a-cachedrepository-via-strategy-pattern
You can write your cached implementation such that it takes in the actual SomeApi type in its constructor if you don't need that part of the design to be flexible.
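The core of that idea is a caching decorator that implements the same interface as the real API client, keeps its own private cache, and delegates cache misses to the wrapped instance. A rough sketch, shown here in plain Java for brevity, with made-up SomeApi/HttpSomeApi/CachedSomeApi names rather than the poster's actual types:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

interface SomeApi {
    String getData(String key);
}

// The "real" implementation that talks to the external API.
class HttpSomeApi implements SomeApi {
    public String getData(String key) {
        // ... call the external service here ...
        return "payload-for-" + key;
    }
}

// Decorator: same interface, but it owns a private cache and only
// delegates to the wrapped instance on a cache miss.
class CachedSomeApi implements SomeApi {
    private final SomeApi inner;
    private final Map<String, String> cache = new ConcurrentHashMap<>();

    CachedSomeApi(SomeApi inner) {
        this.inner = inner;
    }

    public String getData(String key) {
        return cache.computeIfAbsent(key, inner::getData);
    }
}

You would then register the decorator, rather than the raw client, as the singleton and pass the real implementation into its constructor, which is what the paragraph above describes.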
According to the Evernote docs, a resource is created along with a note that contains and references it.
Say I have a note with a resource, and I want to create a new note with exactly the same resource.
Is there a way to create the new note and use the existing resource, instead of creating a new duplicate resource?
Reasons for that:
I get the existing note with its resources using the getNote API call, and I have no reason to pass with_resource_data=True, which would consume bandwidth.
I don't want to consume extra quota by uploading the resource data again with the new note.
I'm using the Python Evernote SDK.
No. There's no way to do that.
A resource is linked to a specific note and can't be shared amongst notes.
That's why there's a noteGuid attribute in the Resource object.
So you'll have to get the resource with the note, attach it to the new note and upload the new note.
It's not optimal but it's the only way...
I am trying to see if there is a way to get the ComponentPresentations by passing a list of component IDs in one single API call, instead of passing each one in a loop. In my case all the DCPs use the same template as well.
When I checked the API, I could not find any method that accepts a list of TCM IDs or anything along those lines. The use case I am trying to solve is getting all the DCPs in one single API call vs. looping through 10-15 (in my case) and getting each DCP independently, which is not efficient the first time we hit the broker database.
I was able to get the same result using the OData web service, but we are not yet ready to use OData. I'm not sure how OData and the broker API differ, but I could not find any documentation that explains the difference in their query capabilities.
Any help will be appreciated.
ENV: Tridion 2011 SP1, Java API.
OData and Broker API are very different. If you want information on OData I'd recommend checking here and here.
No, you can't do that operation through the Content Delivery API. With a properly configured cache you will be hitting the database only once per component presentation, so the impact is minimized...
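So the per-component loop stays. As a rough sketch of what it might look like with the Content Delivery Java API (the ComponentPresentationFactory class and the getComponentPresentation(componentId, templateId) method are my recollection of the 2011 SP1 API and should be checked against your javadoc; the IDs are placeholders):

import com.tridion.dcp.ComponentPresentation;
import com.tridion.dcp.ComponentPresentationFactory;

import java.util.ArrayList;
import java.util.List;

public class DcpFetcher {

    // Fetches each DCP individually; with a properly configured broker cache,
    // only the first lookup per component presentation should hit the database.
    public List<ComponentPresentation> fetchAll(int publicationId,
                                                int templateId,
                                                int[] componentIds) {
        ComponentPresentationFactory factory =
                new ComponentPresentationFactory(publicationId);
        List<ComponentPresentation> presentations =
                new ArrayList<ComponentPresentation>();
        for (int componentId : componentIds) {
            ComponentPresentation cp =
                    factory.getComponentPresentation(componentId, templateId);
            if (cp != null) {
                presentations.add(cp);
            }
        }
        return presentations;
    }
}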