Revalidating Next.js within an existing API route

I'm confused about why so many of the examples for Next.js revalidation use a separate endpoint for revalidation. Is that because some folks set up polling to revalidate via that endpoint?
For my use case, I have a route, and when the user makes an update to it I want to call revalidate. But I would rather not call fetch and make a separate request; I'd prefer to just use the existing NextApiResponse object to revalidate the route. Is this bad practice for some reason?
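Concretely, the pattern I have in mind looks something like the sketch below (assuming the pages router and Next.js 12.2+, where res.revalidate() is available on NextApiResponse; updatePost and the /posts/[id] path are just placeholders):
export default async function handler(req, res) {
  if (req.method !== 'POST') {
    return res.status(405).end();
  }
  // Hypothetical update logic for this route.
  const post = await updatePost(req.body);
  // Revalidate the affected page with the response object that's already here,
  // instead of fetch()-ing a separate /api/revalidate endpoint.
  await res.revalidate(`/posts/${post.id}`);
  return res.json({ revalidated: true });
}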

Related

HonoJs: Best way to start a Twitter SDK connection in Hono with CloudFlare?

In an old-school server environment, you initialize an SDK (like the Twitter SDK) when the server starts up, using dotenv to read secrets and tokens from your .env file like so:
import dotenv from 'dotenv';
import { Client } from 'twitter-api-sdk';

dotenv.config();
const twitterClient = new Client(process.env.TWITTER_SECRET_INFO);
And then you would use the twitterClient object to get data in one of the route handlers.
What's the best practice for initializing something like the twitter client in Hono with Cloudflare?
In the old service worker framework, I could have treated the secret info as a global environment variable much like in Node/Express, but in the new module worker code you have to access the environment variables as a parameter passed to a function call. It looks like Hono manages this by passing contexts to methods like .use/.get/.post.
Ideally, though, I wouldn't reinitialize the twitter connection on every request, especially since I'm just getting public info with a token, not dealing with any user login/password info.
Is there any way to do this in Hono/Cloudflare, or do I have to initialize the Twitter client middleware on each request? I looked at the Hono class constructor, but from what I can tell, all it does is take a router config object.
And from what I can tell of the Cloudflare docs, module workers have the same issue. Whereas constants in a service worker were declared outside the route handler, it looks like everything in a module worker is declared inside a fetch handler. Is there any way to initialize once during the life of the worker and not on each request?
In principle you could initialize the client on the first request:
import { Client } from 'twitter-api-sdk';

let twitterClient = null;

export default {
  async fetch(req, env, ctx) {
    // Lazily create the client the first time this instance handles a request.
    if (!twitterClient) {
      twitterClient = new Client(env.TWITTER_SECRET_INFO);
    }
    // ... normal code ...
  }
}
That said, though, is creating a new client actually expensive?
Constructing the client does not "initialize a connection". The client presumably makes requests by calling fetch(). The fetch() API doesn't expose any way to control the underlying connections used; each fetch() operates effectively independently. However, the Workers Runtime will automatically reuse connections behind the scenes when possible. It could even reuse the same connection for two completely unrelated Workers, if they are contacting the same destination host. So even if you create a new client with every request, you may already be getting good connection reuse.
That said, perhaps the client has to do some sort of key exchange upfront, e.g. exchanging a long-lived refresh token for an access token. That is annoying to have to repeat on every request. So in that sense, maybe caching it in a global helps.
However, note that Workers creates LOTS of instances of your Worker around the world. You may find that if you curl your Worker several times in a row, each request lands on a different instance. So caching in global state may not actually have much impact unless you have a large amount of traffic.
Caching may be more effective if you use the Cache API to store cached values into the colo-wide cache. Unfortunately, client libraries designed for Node environments may not provide the right hooks to do this.
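If the library did allow it, a rough sketch of that idea (nothing here is part of the Twitter SDK; the token-exchange helper, the cache-key URL, and the TTL are all made up) could look like:
async function getAccessToken(env) {
  // Synthetic URL used purely as a cache key; it is never actually fetched.
  const cacheKey = new Request('https://cache.internal/twitter-access-token');
  const cache = caches.default;

  const cached = await cache.match(cacheKey);
  if (cached) {
    return cached.text();
  }

  // exchangeRefreshToken() is a hypothetical helper doing the upfront key exchange.
  const token = await exchangeRefreshToken(env);
  await cache.put(cacheKey, new Response(token, {
    headers: { 'Cache-Control': 'max-age=300' },
  }));
  return token;
}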
One final note: putting live resources (things that are not just plain data structures) into the global scope can be dangerous on Workers, because in general a Promise created on behalf of one incoming request cannot be awaited in the context of some other request. So if that Twitter client does some sort of upfront key exchange and tries to have all requests wait for that to complete, you may find that if you receive multiple requests at once before the initial key exchange finishes, all except the first request end up failing. To be honest, I would recommend creating a new client for every request unless you see a measurable performance problem from this.
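Following that recommendation, a per-request setup in Hono could look something like this (a sketch; the TWITTER_BEARER_TOKEN binding name and the findTweetById call are assumptions based on twitter-api-sdk's bearer-token client):
import { Hono } from 'hono';
import { Client } from 'twitter-api-sdk';

const app = new Hono();

app.get('/tweets/:id', async (c) => {
  // c.env carries the Worker bindings, so the secret is read per request.
  const twitterClient = new Client(c.env.TWITTER_BEARER_TOKEN);
  const tweet = await twitterClient.tweets.findTweetById(c.req.param('id'));
  return c.json(tweet);
});

export default app;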

Is it possible to invalidate cache from another microfrontend RTK-query

I have micro-frontend 1 (MF-1), which is using RTK Query, and micro-frontend 2 (MF-2), which is just a general React application. In theory, I would like MF-2 to make some API (POST/PUT) calls that should invalidate a query from MF-1. I'm wondering if this would actually work? Can a separate service invalidate the cache of another so they can be in sync?
It's possible to programmatically invalidate an API endpoint, if you have access to the Redux store and the API slice object and can dispatch actions:
store.dispatch(
  api.util.invalidateTags(['Post'])
)
So yes, if your MF-2 can get access to the store from MF-1 somehow, or you can expose some kind of method or event emitter from MF-1 that wraps that "invalidate" logic, this can work.
See https://redux-toolkit.js.org/rtk-query/api/created-api/api-slice-utils#invalidatetags
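For instance, MF-1 could expose a small wrapper around that dispatch (or listen for a custom DOM event), so MF-2 never needs direct access to the store; the names used below are made up for illustration:
// In MF-1, next to its store and api slice:
import { store } from './store';
import { api } from './apiSlice';

export function invalidatePosts() {
  store.dispatch(api.util.invalidateTags(['Post']));
}

// Or, if the micro-frontends only share the window object:
window.addEventListener('mf1:invalidate-posts', () => {
  store.dispatch(api.util.invalidateTags(['Post']));
});

// In MF-2, after its POST/PUT call succeeds:
// window.dispatchEvent(new CustomEvent('mf1:invalidate-posts'));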

How to access dependency injection container in Symfony 4 without actual injection?

I've got a project written in Symfony 4 (can update to the latest version if needed). In it I have a situation similar to this:
There is a controller which sends requests to an external system. It goes through records in the DB and sends a request for every row. To do that there is a MagicApiConnector class which connects to the external system, and for every request there is an XxxRequest class (like FooRequest, BarRequest, etc.).
So, something like this in general:
foreach ($allRows as $row) {
    $request = new FooRequest($row['a'], $row['b']);
    $connector->send($request);
}
Now in order to do all the parameter filling magic, the requests need to access a service which is defined in Symfony's DI. The controller itself neither knows nor cares about this service, but the requests need it.
How can my request classes access this service? I don't want to set it as a dependency of the controller - I could, but it kinda seems awkward, as the controller really doesn't care about it and would only pass it through. It's an implementation detail of the request, and I feel like it shouldn't burden the users of the request with this boilerplate requirement.
Then again, sometimes you need to make a sacrifice in the name of the greater good, so perhaps this is one of those cases? It feels like I'm "going against the grain" and haven't grasped some ideological concept.
Added: OK, the full gory details, no simplification.
This all is happening in the context of two homebrew systems. Let's call them OldApp and NewApp. Both are APIs and NewApp is calling into the OldApp. The APIs are simple REST/JSON style. OldApp is not built on Symfony (mostly even doesn't use a framework), the NewApp is. My question is about NewApp.
The authentication for OldApp APIs comes in three different flavors and might get more in the future if needed (it's not yet dead!). Different API calls use different authentication methods; sometimes even the same API call can be used with different methods (depending on who is calling it). All these authentication methods are also homebrew. One uses POST fields, another uses custom HTTP headers; I don't remember about the third.
Now, NewApp is being called by an Android app which is distributed to many users. Android app actually uses both NewApp and OldApp. When it calls NewApp it passes along extra HTTP headers with authentication data for OldApp (method 1). Thus NewApp can impersonate the Android app user for OldApp. In addition, NewApp also needs to use a special command of OldApp that users themselves cannot call (a question of privilege). Therefore it uses a different authentication mechanism (method 2) for that command. The parameters for that command are stored in local configuration (environment variables).
Before me, a colleague had created the scheme of an APIConnector and an APICommand, where you get the connector as a dependency and create command instances as needed. The connector actually performs the HTTP request; the commands tell it what POST fields and what headers to send. I wish to keep this scheme.
But now how do the different authentication mechanisms fit into this? Each command should be able to pass what it needs to the connector; and the mechanisms should be reusable for multiple commands. But one needs access to the incoming request, the other needs access to configuration parameters. And neither is instantiated through DI. How to do this elegantly?
This sounds like a job for factories.
function action(MyRequestFactory $requestFactory)
{
    foreach ($allRows as $row) {
        $request = $requestFactory->createFoo($row['a'], $row['b']);
        $connector->send($request);
    }
}
The factory itself is a service and is injected into the controller as part of the normal Symfony design. Whatever additional services are needed will be injected into the factory. The factory in turn can provide whatever services the individual requests happen to need as it creates them.

Add request interceptors to specific calls

I'm currently trying to figure out which options Retrofit offers to add an interceptor only to specific calls.
Background & use cases
I'm currently using retrofit 1.9
The use case is pretty simple. Imagine a user who needs to log in and get a session token. There is a call:
/**
 * Call the backend and request a session token
 */
@POST("auth_endpoint")
Observable<Session> login(...);
All other calls will require a token from the above session in the form of a request header. In other words, all subsequent calls will have a header which provides the session token to the backend.
My question
Is there a simple way of adding this header only to specific calls through interceptors?
What I've tried so far
Obviously the easiest approach was to add the @Header annotation to the specific calls and provide the token as a parameter.
I guess one can inspect the URL of the request inside the interceptor. Not very flexible.
Create different rest adapters with different interceptors. I heard you should avoid creating several instances of the rest adapter for performance reasons.
Additional info
I'm not committed to interceptors; I would use other solutions as well.
I've said I'm using Retrofit 1.9, but I'd also be interested in a way to do it with Retrofit 2.x.
Please note this is not an answer; the comment box was too small.
I've recently had this problem and I came up with the same possible solutions as you.
First of all, I put aside double adapters - that's a last resort.
The @Header field seems OK, because you explicitly define that this specific request needs authorization. However, it's kind of tedious to use.
URL inspection in the interceptor looks "ugly", but I've decided to go with that. I mean, if all requests to one specific endpoint need that authorization header, then what's the problem?
I had two other ideas:
Somehow dynamically replace/modify the OkHttpClient that is used with Retrofit. After some tests I figured out that it's not possible.
Maybe create some custom annotation like @AddAuthorizationHeader on the call definition, which would do everything for you, but I guess that wouldn't be possible either.
And in this matter Retrofit 2.x doesn't bring anything new.

Getting a list of target endpoints

Can I get a list of target endpoints in a JavaScript policy?
Let's say I have a proxy endpoint that connects to multiple target endpoints. Can I write a JavaScript policy so that if a request is made to a specific URL on that proxy, it will make a call to all the target endpoints and aggregate the results?
Yes, that's possible. The API proxy definition itself holds all the target endpoints defined for it.
For example:
curl -v https://api.enterprise.apigee.com/v1/o/{org}/apis/{api}/revisions/{rev}/targets
would give you the list of all targets.
Then you can get each target URL from the list by calling:
curl -v https://api.enterprise.apigee.com/v1/o/{org}/apis/{api}/revisions/{rev}/targets/{target}
You can parse out each URL in a for loop and then make a request to each of these URLs. If your requests are simple GET calls without any variation in the request object (headers, body, etc.), then a simple for loop would be good enough.
For example:
var geocoding = httpClient.get(URL);
context.session["geocoding"] = geocoding;
This piece of code can be called in a loop for all the target endpoints that you might have.
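For instance, that loop could look something like this (a sketch; it assumes the target URLs have already been placed in a targetUrls flow variable as a comma-separated list, e.g. by an earlier policy):
// Read the previously stored list of target URLs.
var urls = context.getVariable('targetUrls').split(',');

for (var i = 0; i < urls.length; i++) {
  // Fire the request and stash the exchange object for later.
  var exchange = httpClient.get(urls[i]);
  context.session['response_' + i] = exchange;
}
// A later JavaScript policy (or later code in this one) can call
// waitForComplete()/getResponse() on each stored exchange and aggregate the payloads.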
The only catch here is that, to get the target endpoints, you are making a management API call from the runtime layer. This means that if at any point the Apigee management layer is down or experiencing degraded service due to scheduled maintenance, your runtime calls would tend to fail. The other solution could be to isolate the work into two scripts:
Get the list of endpoints in one JavaScript policy and store the URLs in a cache (PopulateCache policy) or a key value map (given that the endpoint URLs won't change too often).
Read the list of endpoints from the cache or KVM and then trigger another JavaScript policy that makes calls to these endpoints and aggregates the responses.
There is no way to call a target endpoint from JavaScript. In fact, you can only call 0 or 1 target endpoints for a single call to the proxy, not multiple target endpoints.
You can make multiple HTTP requests from within JavaScript using httpClient, and aggregate the results, but those are not target endpoints. An example of this is found here.
