Handle external API calls inside an API - api-design

I have a simple HTTP server where you can create and manage todos. You can also add plugins in order to, for example, send an email to the people who starred a todo when that todo has been completed. I currently check for all enabled plugins through a query to the database, and then query each API endpoint for the different plugins (Gmail, Notion, Trello, etc.). After this is finished, I send a response back to the user. This is a problem, because it means my response depends on the speed of the external APIs I am calling. If the Notion API is slow, then my endpoint is also slow.
Is there a way to first send a response after, for example, the server marks the todo as completed, but then send a different response after all the plugins have been queried (Gmail, Notion, Trello, etc.)? Would I have to use web sockets? Or is the way I currently handle external API queries the only way to do it?

You are right in thinking that you want to decouple client requests from the backend processing (reaching out to the other providers); WebSockets are one option for doing that. HTTP/2 streams are another. And, of course, polling is also an option (simple, but not very efficient).
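As a minimal sketch of the WebSocket variant, assuming an Express app and the ws package (completeTodo, getEnabledPlugins, and the sockets map are hypothetical stand-ins for your own persistence, plugin, and connection layers):

```ts
// Respond as soon as the todo is marked complete, then notify plugins in
// the background and push a follow-up event over the user's socket.
import express from "express";
import { WebSocket } from "ws";

// Hypothetical stubs standing in for your real persistence/plugin layer.
declare function completeTodo(id: string): Promise<{ id: string; ownerId: string }>;
declare function getEnabledPlugins(
  todoId: string
): Promise<Array<{ name: string; notify(todo: unknown): Promise<void> }>>;
declare const sockets: Map<string, WebSocket>; // ownerId -> open socket

const app = express();

app.post("/todos/:id/complete", async (req, res) => {
  // 1. Do the core work and answer immediately; 202 signals "more to come".
  const todo = await completeTodo(req.params.id);
  res.status(202).json({ todo, plugins: "pending" });

  // 2. Fan out to the plugins *after* the response has been sent.
  const plugins = await getEnabledPlugins(todo.id);
  const results = await Promise.allSettled(plugins.map((p) => p.notify(todo)));

  // 3. Push the outcome to the user over their socket, if one is open.
  sockets.get(todo.ownerId)?.send(
    JSON.stringify({
      type: "plugins-done",
      todoId: todo.id,
      failed: plugins.filter((_, i) => results[i].status === "rejected").map((p) => p.name),
    })
  );
});

app.listen(3000);
```

If you'd rather poll, skip the socket push and expose something like GET /todos/:id/plugin-status that the client re-checks until the plugin work is done.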

Related

Is there a way to prevent content caching or scraping from an API?

Imagine the following situation. I have an API, and a developer builds an application that retrieves new content from it on a daily basis. She stores this content and provides the data to all the instances of an app she developed. In this way these apps do not have to call the API directly.
Is there a way to prevent this and force the apps (and therefore the end users) to use the API directly, rather than only the application on the server?
I found many questions about how to cache API data but not how to prevent that. I am fairly new to this, so maybe I am overlooking something or maybe it is not possible to prevent this.
Thank you in advance!
Assuming you are using Apigee for API management, you have some options. First, consider the options available to you contractually, if this is that sort of business relationship: you may be able to impose certain API behavior on a business partner through a contract.
Separate from the legal side of things, remember that you control your API and the credentials you issue for use by your API clients. You cannot, though, practically control what a client developer does with the credentials you issue: she could promise to embed the credentials in the mobile apps' API client, but change her mind, use them centrally, and then design her mobile client to call into her central cache. If you really insist that only mobile app clients should be calling your API and not a hub/cache server, then you could consider applying constraint policies on your API (within the Apigee proxy, such as Access Control). For instance, you could blacklist your partner's hub/cache server IP address, although that is weak security at best. Or you could apply a constraint that only clients with certain identifying User-Agent strings (mobile OS, client) are allowed to connect to your API. Or use GeoIP filtering to allow only clients from certain regions, if that applies to your use case.
Finally, depending on the data model, you might be able to rate-limit such that a bulk cache becomes impractical: if your edge-client use case is to fetch a single record, but a cache would have to hold thousands of records, then you could impose a per-client rate limit (Quota policy) which is no bother to individual mobile clients, but makes the work of a hub/cache server untenable.
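For illustration, such a Quota policy might look like the following inside the Apigee proxy; the count, interval, and identifier variable are assumptions to adapt to your own traffic profile:

```xml
<!-- Hypothetical per-client quota: generous for a single mobile user,
     impractical for a hub trying to bulk-sync thousands of records. -->
<Quota name="PerClientQuota">
  <Identifier ref="client_id"/> <!-- populated by e.g. a VerifyAPIKey policy -->
  <Allow count="200"/>
  <Interval>1</Interval>
  <TimeUnit>day</TimeUnit>
</Quota>
```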

How to send an HTTP POST to a URL from VG-IDMS

Does anyone know a way of sending an HTTP POST to a URL from a VG-IDMS program?
We have a mainframe database with a few programs written on it, and we have so far successfully created functionality for it to work as a kind of web service provider, responding to HTTP GET requests with XML data. That way we've been able to create external (and modernized) apps that request data from it as if it were a modern server.
Now we'd like to do the opposite. We want certain VG programs to create and send HTTP POST requests to an external URL (most likely JSON formatted, although that is not important). The objective is to create some sort of notification, so that when certain events happen on the mainframe (say, data got updated) a VG program would notify an external web service (or web API) of it.
I've been trying to find documentation all over but failed so far.
EDIT: Replacing the old mainframe is not an option for now

How does it work to implement an API for Payments in separate ends of a project?

Alright, a friend and I are developing an app where I'm developing the back end and he is developing the front end. The project is separated into two repositories, the front-end and the back-end, and we need to implement a payment API.
Now, since we're following the REST API concept, we communicate between both ends through JSON data.
My question is, when we're making the connection to the payment API, who needs to execute that request? The front-end or the back-end?
I know it's a silly question, but first timer here.
The back end will obviously process the payment. I'm not sure which payment API you're going to use, and depending on the API you go with, the implementation will vary. But the actual processing of the payment will happen in the back end for sure.
It completely depends on the API.
In some cases, a payment can be accomplished via a secure web service call, which would be issued by your REST service. The front end will still need to collect data (e.g. payment amount and card number) and may also need to collect additional information to satisfy the API (e.g. IP address or browser signature, for risk management purposes).
In other cases, the payment is sent directly to the service from the browser. The role of your application would be to render an iFrame housing a page that is reached via SSO. The back end may need to call a service to retrieve an SSO token, or may have to compute an SSO token using a shared key.
You should probably refer to the payment API's documentation. Providers often have very specific guidance which you must follow carefully in order to achieve payment card (PCI-DSS) compliance. There is nothing special about "payments" in general that allows Stack Overflow users to guess anything about this particular API.
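To make the first pattern concrete, here is a minimal sketch of the back-end half, assuming Express and a provider that tokenizes cards in the browser. The provider URL, field names, and PAYMENT_SECRET_KEY are hypothetical; your provider's documentation is authoritative:

```ts
// The front end tokenizes the card and posts only the token; the back end
// holds the secret key and makes the authenticated call to the provider.
import express from "express";

const app = express();
app.use(express.json());

app.post("/api/checkout", async (req, res) => {
  const { cardToken, amountCents, currency } = req.body;

  // The secret key lives only on the server, never in front-end code.
  const providerRes = await fetch("https://api.example-payments.test/v1/charges", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.PAYMENT_SECRET_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ token: cardToken, amount: amountCents, currency }),
  });

  res.status(providerRes.ok ? 200 : 402).json(await providerRes.json());
});

app.listen(3000);
```

The design point is simply that the secret credential and the charge call stay on the back end; the front end only ever handles a short-lived token.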

Sharing particular data between realms

I am about to start working on the back-end for a mobile app (initially iOS/Android, later also a website) and I am wondering whether Realm could fulfill all my needs.
The basic idea is that there are two types of users - customers and service-providers. The customers send requests to the server once in a while and are subscribed (real-time) for any event that might occur in relation to this request in the future. Each service-provider is listening for specific requests from all customers and is the one who is going to trigger various events (send data) for each of those requests.
From the Realm docs, it is obvious that the real-time data sync is not going to be a problem. The thing I am concerned about is how to model the scenario (customer/service-provider) in the Realm 'world'. Based on what I read, it is preferred to have one realm per user. Therefore, I suppose the user will register and will be given a realm. Then whenever he makes a request, it is going to be stored in his realm. Now the question is how to model the service-provider. There are going to be various service-providers each responding (triggering various kinds of events up to one hour after request) to different kinds of requests. (Each user can send any request and therefore be served by any service-provider.)
I read that Realm supports data sharing among different realms, which could be a partial solution to this problem; however, I was not able to find out whether this 'sharing' can be limited to particular requests. (Meaning each service-provider would get only the requests intended for him.)
My question is whether this scenario is doable using Realm.
This sounds like a perfect fit for Realm's server-side event-handling. Put simply, Realm offers the ability through our Node SDK to listen for changes across Realms on the server.
So in your example, where each mobile user would have their own Realm, the URL for this would be /~/myRealm, in which the tilde represents the Realm user ID. The Node SDK event-handling API allows you to register a JS function that will run in response to changes represented by a regex pattern for Realm URLs. In this case you could use: ^/([0-9a-f]+)/myRealm so that any time any user's myRealm updates, the server can perform some logic.
In this manner, the server via the Node SDK is really a "super-user" or service-provider as you describe. When an event fires, the JS function that runs is provided the Realm that was updated and a list of indexes pertaining to the objects in the Realm that were inserted, deleted, or modified. You can then perform any logic in JS, such as using the changed data to call out to another API or opening the Realm in question or any other and writing changes which will get pushed back out to the respective clients.
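A rough sketch of such a listener follows. It is based on the Node SDK's event-handling API as described above, so treat the exact adminUser/addListener signatures and the "Request" class name as assumptions to check against your SDK version:

```ts
// Server-side "super-user" listener: reacts whenever any user's myRealm
// changes, per the regex from the answer above.
const Realm = require("realm");

const SERVER_URL = "realm://127.0.0.1:9080";
const adminUser = Realm.Sync.User.adminUser("ADMIN_REFRESH_TOKEN", SERVER_URL);

Realm.Sync.addListener(SERVER_URL, adminUser, "^/([0-9a-f]+)/myRealm", "change", (event: any) => {
  const realm = event.realm; // the Realm that just changed
  const changes = event.changes.Request; // change indexes per object class
  if (!changes) return;

  const requests = realm.objects("Request");
  for (const index of changes.insertions) {
    const request = requests[index];
    // Service-provider logic goes here: match the request, call another
    // API, or write an event object back so clients receive it in real time.
    console.log("new request inserted:", request);
  }
});
```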
The full server-side event handling is part of Realm Professional Edition, but we recently released another way to interact with this, called Realm Functions. This provides the ability, through the server's dashboard, to create the same JS functions that will run in response to changes across Realms. The Developer Edition supports 3 functions, so you can try it out immediately!

Is there any mail queue system in ASP.NET?

Is there any mail queue concept in ASP.NET?
I want to send thousands of different mails to thousands of users (i.e. each user will have a different mail). I want to send the mail at a particular time, so that each user receives it at a consistent time.
There really is no mail queue in the core framework. You can send individual messages synchronously or asynchronously, but you can't really send a bunch at once.
You can queue your messages by storing them to a database or file server and then kicking off a job to loop through your saved messages and send them off.
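That store-then-send pattern might look something like the sketch below. It is shown in Node/TypeScript terms because the shape is the same in any stack (in ASP.NET the scheduled job would use SmtpClient); the db object is a hypothetical stand-in for your real storage:

```ts
// Messages are written to an "outbox" first; a scheduled job drains
// whatever is due. nodemailer plays the role of SmtpClient here.
import nodemailer from "nodemailer";

interface OutboxRow {
  id: number;
  to: string;
  subject: string;
  body: string;
  sendAt: Date;
  sentAt: Date | null;
}

// Hypothetical storage layer; swap in your real database access.
declare const db: {
  insert(row: Omit<OutboxRow, "id" | "sentAt">): Promise<void>;
  due(now: Date): Promise<OutboxRow[]>; // rows with sendAt <= now, sentAt null
  markSent(id: number): Promise<void>;
};

const transporter = nodemailer.createTransport({ host: "localhost", port: 25 });

// 1. Queue each personalized message instead of sending it right away.
export async function enqueue(to: string, subject: string, body: string, sendAt: Date) {
  await db.insert({ to, subject, body, sendAt });
}

// 2. Run this from a scheduler (cron, hosted job) every minute or so.
export async function drainOutbox() {
  for (const msg of await db.due(new Date())) {
    await transporter.sendMail({
      from: "noreply@example.com",
      to: msg.to,
      subject: msg.subject,
      text: msg.body,
    });
    await db.markSent(msg.id); // mark only after a successful send
  }
}
```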
Also, not all of your users will receive the messages at the same time, even if you could send them at the same time. There are too many external variables and dependencies (network traffic, mail server loads, spam filters) to accurately predict when, or even if, your users will receive their messages.
There's no native MailQueue concept within the .NET Framework. You will have to implement the queue yourself. In your case, you would like the mails to reach all recipients at about the same time for each batch. Am I right?
Well, this is a bit tricky. You can use any SMTP server, localhost or an external one. But that also means that although you can dispatch to the SMTP server at a specific time, there's no guarantee the mail will reach the recipients immediately.
There is a whole bunch of stuff around mail delivery that is not exactly programming related (grey listing, spam filtering, etc.).
The alternative is to have full control over the sending and have your app deliver the mails directly to the recipients' mail servers. That is workable, and I suggest you use a commercial or a good open-source component for it. Anyhow, there are still a whole bunch of issues you need to deal with (e.g. some receiving mail servers, like Yahoo, might block the sending a few times and only let it through after a few retries).
I've posted a related question; take a look at the replies here.
