Are Callable Cloud Functions better than HTTP functions?

With the latest Firebase update, callable functions were introduced. My question is whether this new way is faster than the "old" HTTP triggers, and whether it is more secure.
I have no expertise in this field, but I think the HTTP vs. HTTPS aspect might make a difference.
This is interesting to me because if the callable functions are faster, they have that advantage, but their disadvantage is reduced flexibility: they cannot be reached from other sources.
If the callable functions have no advantages in terms of speed or security, I do not see a reason to switch.

Callable functions are exactly the same as HTTP functions, except that the provided SDKs do some extra work for you that you would otherwise have to do yourself. On the client, this includes:
Handling CORS with the request (for web clients)
Sending the authenticated user's token
Sending the device instance id
Serializing an input object that you pass on the client
Deserializing the response object in the client
And on the backend in the function:
Validating the user token and providing a user object from that
Deserializing the input object in the function
Serializing the response object in the function
This is all stated in the documentation. If you are OK with doing all this work yourself, then don't use callables. If you want this work done automatically, then callables are helpful.
If you need direct control over the details of the HTTP protocol (method, headers, content body), then don't use a callable, because it will hide all these details.
There are no security advantages to using callables. There are no speed improvements.
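To make the difference concrete, here is a minimal sketch of the two backend styles, assuming the first-generation firebase-functions Node API and a made-up function name (addMessage) and payload:

const functions = require('firebase-functions');

// Callable version: the SDK has already deserialized the payload and verified
// the caller's ID token by the time this handler runs.
exports.addMessage = functions.https.onCall((data, context) => {
  const uid = context.auth ? context.auth.uid : null; // populated from the verified token
  return { message: `Hello ${data.text}`, uid };      // return value is serialized for you
});

// Equivalent HTTP version: CORS headers, token verification and body handling
// are your responsibility (token verification omitted here for brevity).
exports.addMessageHttp = functions.https.onRequest((req, res) => {
  res.set('Access-Control-Allow-Origin', '*');
  res.json({ message: `Hello ${req.body.text}` });
});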

Related

Is using the Firebase Cloud Functions library better than a direct HTTP call to the same Cloud Function URL?

I have an Express server running in a Firebase Cloud Function.
Currently, I am calling this cloud function from my client (assume it's a web client) using a direct HTTP call to the URL that was provided to me to access the function.
However, there is another way to call this function, which is using the Firebase Cloud Functions library for the client.
My question is: what advantage do I get (in terms of speed) if I use the library instead of the direct HTTP call?
My assumption is that the library uses a web socket to access the function, whereas the direct call uses a plain HTTP request.
I couldn't find anywhere in the documentation which one is better.
Callable functions invoked using the Firebase SDK were not designed to provide any sort of increase in performance over a normal HTTP request. In fact, on average, you will hardly notice any difference between the two.
It's better to pick the one that meets your needs and preferences rather than thinking about performance. But if you really must choose the one that's faster, you should benchmark your choices yourself to see which one is better for your specific case.
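One detail worth adding: the library does not use a web socket; it sends a normal HTTPS POST under the hood. As a rough sketch of the two client-side call styles, assuming the v8-style Firebase web SDK with an app already initialized and a callable named addMessage (the name and URL are placeholders):

async function callBothWays() {
  // 1) Via the library: the payload is serialized and the signed-in user's
  //    ID token is attached for you.
  const addMessage = firebase.functions().httpsCallable('addMessage');
  const viaSdk = await addMessage({ text: 'hi' });

  // 2) Direct HTTP call to the function URL: you do that work yourself.
  const token = await firebase.auth().currentUser.getIdToken();
  const response = await fetch(
    'https://us-central1-<project-id>.cloudfunctions.net/addMessage', // placeholder URL
    {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        Authorization: 'Bearer ' + token,
      },
      body: JSON.stringify({ data: { text: 'hi' } }), // callable wire format wraps the payload in "data"
    }
  );
  return { viaSdk: viaSdk.data, viaHttp: await response.json() };
}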
See also: Are Callable Cloud Functions better than HTTP functions?

Adding custom validation logic to dart:io HttpClient

I am trying to create an HttpClient that can validate an SSL certificate after every TLS handshake and before any other data is fetched/sent.
So the flow would look like this:
Create an HttpClient
Execute a request
The client connects to the host via HTTPS
After the TLS handshake is done, the client knows the certificate
Pass the certificate to a callback; execute the actual request when the callback succeeds, abort the request otherwise
In case the callback was successful, proceed as usual (e.g. pass the response etc.)
I was looking into SecurityContext already. The problem is that it only validates against a fixed set of certificates, but I want to validate the certificate dynamically based on the certificate that was sent by the host.
Also, I saw that there is a badCertificateCallback on HttpClient, but this does not serve my use case well, as I want to validate every certificate, not just the invalid/bad ones.
I was wondering whether I could theoretically create a class that uses HttpClient as a superclass and modify its behaviour that way, but I am wondering whether there is a more elegant way that doesn't break as easily when the implementation of HttpClient changes.
Another idea of mine is to set a SecurityContext that rejects every single certificate by default. I could then use the badCertificateCallback to do the checks normally done by SecurityContext (check against a list of trusted certificates) and add my own validation on top of that. Is anyone aware of any drawbacks this might have? I became a little uncertain when reading about the limitations regarding iOS.
Has anyone here done similar things before and could give me a hint? :)
Thanks in advance!
For your use case, it is better to have your own version, say BetterHttpClient.
However, instead of BetterHttpClient inheriting from HttpClient, you can use composition: compose an HttpClient inside BetterHttpClient. This gives you more control over what you want to use or override from the existing implementation, and it is better guarded against any changes HttpClient goes through.
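A rough sketch of that composition approach (the class name, the validate callback and the choice of an empty SecurityContext are illustrative assumptions, not requirements): with no trusted roots, every certificate is handed to badCertificateCallback, where the custom validation runs.

import 'dart:io';

// Wraps HttpClient instead of subclassing it.
class BetterHttpClient {
  final HttpClient _inner;

  BetterHttpClient(
      bool Function(X509Certificate cert, String host, int port) validate)
      // No trusted roots: every certificate is treated as "bad" and routed
      // through badCertificateCallback, i.e. through our own validation.
      : _inner = HttpClient(context: SecurityContext(withTrustedRoots: false)) {
    _inner.badCertificateCallback = validate;
  }

  // Expose only what is needed and delegate to the wrapped client.
  Future<HttpClientResponse> getUrl(Uri url) async {
    final request = await _inner.getUrl(url);
    return request.close();
  }

  void close({bool force = false}) => _inner.close(force: force);
}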

Does gRPC resend messages

A question related to the idempotence of server-side code, or the necessity of it, either for gRPC in general or specifically for the Java implementation.
Is it possible that when we send a message once from a client, it is handled twice by our service implementation?
Maybe this would be related to retries when the service seems unavailable, or could it be configured by some policy?
Right now, when you send a message from a client it will be seen (at most) once by the server. If you have idempotent methods, you will soon be able to specify a policy for automatic retries (design doc) but these retry attempts will not be enabled by default. You do not have to worry about the gRPC library sending your RPCs more than once unless you have this configured in your retry policy.
According to grpc-java/5724, the retry logic has already been implemented. The OP there does it using a Map, which is not type-safe. A better way would be as follows:
ManagedChannel channel = NettyChannelBuilder.forAddress(host, port)
    .enableRetry()          // retries are disabled by default
    .maxRetryAttempts(3)    // upper bound on attempts permitted by the service config
    .build();
There are other retry configurations available on the NettyChannelBuilder.
There's also an example here, although it's pretty hard to find.

How should a synchronous public api be integrated with message-based services?

I've been reading about microservices, and have found a lot of interesting advice in Jonas Bonér's Reactive Microservices Architecture (available to download free here). He emphasises the need for asynchronous communication between microservices, but says that APIs for external clients sometimes need to be synchronous (often REST).
I've been trying to think how asynchronous response messages sent back from microservices should best be routed back to the waiting client. To me the most obvious way would be to record something like a request id in all messages sent when processing the request, and then copy this id into response messages sent by the services. The public API would block when processing the request, collecting all expected response messages which have the matching id, before finally sending the response to the client.
Am I on the right lines here? Are there better approaches? Do any frameworks take the work of doing this routing away from the developer (I'm looking at Spring Cloud Streams etc, but others would be interesting too)?
He emphasises the need for asynchronous communication between microservices, but says that APIs for external clients sometimes need to be synchronous (often REST).
When dealing with client-backend communications you can have a couple of types of operations, and they should be handled separately (look at the idea of CQS):
State-changing operations: these should be one-way, fire-and-forget, using messaging (this can be the client calling an HTTP API and the API dispatching the message)
Read operations: synchronous (request/response) operations using an HTTP API; this does not involve any messaging whatsoever
Does that make sense?
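A minimal Express sketch of that split (broker.publish and readModel.findOrder are made-up stand-ins for your own messaging layer and query store; the routes and payloads are illustrative):

const express = require('express');
const app = express();
app.use(express.json());

// Hypothetical stand-ins for the messaging infrastructure and the read side.
const broker = { publish: async (topic, message) => { /* hand off to queue/stream */ } };
const readModel = { findOrder: async (id) => ({ id, status: 'PENDING' }) };

// State-changing operation: fire-and-forget. The API only dispatches a command
// message and acknowledges it; the actual processing happens asynchronously.
app.post('/orders', async (req, res) => {
  const orderId = Date.now().toString(); // placeholder id generation
  await broker.publish('orders.create', { orderId, ...req.body });
  res.status(202).location(`/orders/${orderId}`).json({ orderId });
});

// Read operation: plain synchronous request/response, no messaging involved.
app.get('/orders/:id', async (req, res) => {
  const order = await readModel.findOrder(req.params.id);
  res.json(order);
});

app.listen(3000);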

node.js asynchronous initialization issue

I am creating a node.js module which communicates with a program through XML-RPC. The API for this program changed recently after a certain version. For this reason, when a client is created (createClient) I want to ask the program its version (through XML-RPC) and base my API definitions on that.
The problem with this is that, because I do the above asynchronously, there exists a possibility that the work has not finished before the client is actually used. In other words:
var client = program.createClient();
client.doSomething();
doSomething() will fail because the API definitions have not been set yet, presumably because the HTTP XML-RPC response has not yet returned from the program.
What are some ways to remedy this? I want to be able to have a variable named client and work with that, as later I will be calling methods on it to get information (which will be returned via a callback).
Set it up this way:
program.createClient(function (client) {
client.doSomething()
})
Any time there is IO, it must be async. Another approach to this would be with a promise/future/coroutine type thing, but imo, just learning to love the callback is best :)
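If you later prefer the promise route mentioned above, the same idea looks roughly like this (fetchVersion and buildApiFor are placeholders for whatever the module does internally):

// Promise-based factory: the client is only handed out once the version
// lookup, and therefore the API definitions, have completed.
async function createClient() {
  const version = await fetchVersion();   // XML-RPC call to the program
  return buildApiFor(version);            // choose API definitions per version
}

// Usage: nothing touches the client before initialization has finished.
createClient()
  .then((client) => client.doSomething())
  .catch((err) => console.error('failed to initialize client', err));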
