I am creating a node.js module which communicates with a program through XML-RPC. The API for this program changed recently after a certain version. For this reason, when a client is created (createClient) I want to ask the program its version (through XML-RPC) and base my API definitions on that.
The problem with this is that, because I do the above asynchronously, there exists a possibility that the work has not finished before the client is actually used. In other words:
var client = program.createClient();
client.doSomething();
doSomething() will fail because the API definitions have not been set yet, presumably because the XML-RPC HTTP response has not come back from the program.
What are some ways to remedy this? I want to be able to have a variable named client and work with that, as later I will be calling methods on it to get information (which will be returned via a callback).
Set it up this way:
program.createClient(function (client) {
client.doSomething()
})
Any time there is IO, it must be async. Another approach to this would be with a promise/future/coroutine type thing, but imo, just learning to love the callback is best :)
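For illustration, here is a rough sketch of what createClient could look like internally; the xmlrpcCall helper and the version-based switch are invented for the example and are not part of any real library:

function createClient(callback) {
  var client = {};
  // Hypothetical helper that performs the XML-RPC request.
  xmlrpcCall('getVersion', [], function (err, version) {
    if (err) throw err;
    // Pick the API definitions that match the program's version.
    if (version >= 2) {
      client.doSomething = function (cb) { /* new API call */ };
    } else {
      client.doSomething = function (cb) { /* old API call */ };
    }
    // Hand the client back only once it is fully set up.
    callback(client);
  });
}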
Related
In an old-school server environment, you initialize an SDK (like the Twitter SDK) when the server starts up, using dotenv to read secrets and tokens from your .env file like so:
import dotenv from 'dotenv';
import {Client} from 'twitter-api-sdk';
dotenv.config();
const twitterClient = new Client(process.env.TWITTER_SECRET_INFO);
And then you would use the twitterClient object to get data in one of the route handlers.
What's the best practice for initializing something like the twitter client in Hono with Cloudflare?
In the old service worker framework, I could have treated the secret info as a global environment variable much like in Node/Express, but in the new module worker code you have to access the environment variables as a parameter passed to a function call. It looks like Hono manages this by passing contexts to methods like .use/.get/.post.
Ideally, though, I wouldn't reinitialize the twitter connection on every request, especially since I'm just getting public info with a token, not dealing with any user login/password info.
Is there any way to do this in Hono/Cloudflare, or do I have to initialize the Twitter client middleware on each request? I looked at the Hono class constructor, but from what I can tell, all it does is take a router config object.
And from what I can tell of the Cloudflare docs, module workers have the same issue. Whereas constants in a service worker were declared outside the route handler, it looks like everything in a module worker is declared inside a fetch handler. Is there any way to initialize once during the life of the worker rather than on each request?
In principle you could initialize the client on the first request:
let twitterClient = null;

export default {
  async fetch(req, env, ctx) {
    if (!twitterClient) {
      twitterClient = new Client(env.TWITTER_SECRET_INFO);
    }
    // ... normal code ...
  }
}
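If you want the same lazy initialization expressed in Hono terms, a sketch might look like the following (the TWITTER_BEARER_TOKEN binding name and the route are just examples):

import { Hono } from 'hono';
import { Client } from 'twitter-api-sdk';

let twitterClient = null;

const app = new Hono();

// Initialize once per worker instance, then share the client via the context.
app.use('*', async (c, next) => {
  if (!twitterClient) {
    twitterClient = new Client(c.env.TWITTER_BEARER_TOKEN);
  }
  c.set('twitter', twitterClient);
  await next();
});

app.get('/tweets/:id', async (c) => {
  const twitter = c.get('twitter');
  // ... fetch public data with the client ...
  return c.json({ ok: true });
});

export default app;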
That said, though, is creating a new client actually expensive?
Constructing the client does not "initialize a connection". The client presumably makes requests by calling fetch(). The fetch() API doesn't expose any way to control the underlying connections used; each fetch() operates effectively independently. But the Workers Runtime will automatically reuse connections behind the scenes when possible. It could even reuse the same connection for two completely unrelated Workers, if they are contacting the same destination host. So even if you create a new client on every request, you may already be getting good connection reuse.
That said, perhaps the client has to do some sort of key exchange upfront, e.g. exchanging a long-lived refresh token for an access token. That is annoying to have to repeat on every request. So in that sense, maybe caching it in a global helps.
However, note that Workers creates LOTS of instances of your Worker around the world. You may find if you curl your Worker several times in a row, each request lands on a different instance. You may find that caching in global state does not actually have much impact unless you have a large amount of traffic.
Caching may be more effective if you use the Cache API to store cached values into the colo-wide cache. Unfortunately, client libraries designed for Node environments may not provide the right hooks to do this.
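As a sketch of what that could look like without the Node-oriented client, caching a raw API response in the colo-wide cache might go roughly like this (the URL, binding name, and TTL are just examples):

async function getTweetCached(id, env, ctx) {
  const cache = caches.default;
  const cacheKey = new Request('https://api.twitter.com/2/tweets/' + id);
  let response = await cache.match(cacheKey);
  if (!response) {
    response = await fetch(cacheKey, {
      headers: { Authorization: 'Bearer ' + env.TWITTER_BEARER_TOKEN },
    });
    // Copy the response so its headers are mutable, then mark it cacheable.
    response = new Response(response.body, response);
    response.headers.set('Cache-Control', 'public, max-age=60');
    ctx.waitUntil(cache.put(cacheKey, response.clone()));
  }
  return response;
}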
One final note: putting live resources (things that are not just plain data structures) into the global scope can be dangerous on Workers, because in general a Promise created on behalf of one incoming request cannot be awaited in the context of some other request. So if that Twitter client does do some sort of upfront key exchange and tries to have all requests wait for that to complete, you may find that if you receive multiple requests at once before the initial key exchange finishes, all except the first request end up failing. To be honest, I would recommend creating a new client for every request unless you see a measurable performance problem from this.
The question says it all really: I want a way to check whether a Meteor server is up and running. I can set up a page at a known location and make sure the HTTP request works, but that feels kinda hacky.
Meteor.status() is a client-side function which will help for individual clients within the Meteor context. Your best bet for a platform-agnostic check is a DDP client that lives outside your Meteor environment. (True, the Meteor folks proposed DDP, but the intention is that it serve as a protocol for real-time platforms, not as a tightly coupled part of the Meteor architecture.)
In practice, then, you'd need to wire up a DDP client and subscribe to a server-side publication, returning whatever defines 'running' in the context you require. This could be simply a collection of mongo documents, or some status checks based on more complex set and unset method calls within your publish function.
Here are a couple of DDP clients I've found:
DDPClient.NET
node-ddp-client
ruby-ddp-client
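As a rough example with node-ddp-client (the connection options and publication name are placeholders; check the package README for the exact API):

var DDPClient = require('ddp');

var ddpclient = new DDPClient({ host: 'example.meteor.com', port: 80 });

ddpclient.connect(function (error) {
  if (error) {
    console.log('Meteor server is NOT reachable:', error);
    return;
  }
  // Subscribe to whatever publication defines "running" for your app.
  ddpclient.subscribe('serverStatus', [], function () {
    console.log('Meteor server is up and publishing.');
    ddpclient.close();
  });
});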
Hope that helps.
Meteor's documentation states:
In Meteor, your server code runs in a single thread per request, not in the asynchronous callback style typical of Node
Do they actually mean:
A) the server is running multiple threads in parallel (which seems unusual within the Node.js ecosystem)
or
B) There is still only a single thread within an evented server and each request is processed sequentially, at least until it makes calls to resources outside the server (like the datastore), at which point the server itself handles the callbacks while it processes other requests, so you don't have to write/administer the callbacks yourself.
Brad, your B is correct.
Meteor uses fibers internally. As you said, there's only one thread inside an evented server, but when you do (eg) a database read, Fibers yields and control quickly gets back to the event loop. So your code looks like:
doc = MyCollection.findOne(id);
(with a hidden "yield to the event loop, come back when the doc is here") rather than
MyCollection.findOne(id, function (err, doc) {
  if (err)
    handle(err);
  process(doc);
});
Error handling in the fiber version also just uses standard JavaScript exceptions instead of needing to check an argument every time.
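For illustration, the fiber-style version with error handling is just an ordinary try/catch around the synchronous-looking call:

try {
  var doc = MyCollection.findOne(id);
  process(doc);
} catch (err) {
  handle(err);
}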
I think this leads to an easier style of code to read for business logic which wants to take a bunch of actions which depend on each other in series. However, most of Meteor's synchronous APIs optionally take callbacks and become asynchronous, if you'd like to use the async style.
I want to know a simple way of implementing a callback mechanism in Rserve for a Java client. According to the Rserve docs:
Rserve provides no callback functionality. Your application could implement callbacks via TCP/IP and the R sockets but it is not a part of Rserve.
This means my Java client can call functions on the remote session through an RConnection reference, but the remote session cannot call back the Java client which instantiated it. How can I develop such a mechanism? If it's through R sockets or a TCP/IP server, does that mean there will be a socket server open for every connection?
Ok, so this is how I think it is possible to implement reactive R.
Non-blocking calls from Java
You need to fork the Rserve Java client and split the request method into two parts at this line [1]. The first part writes the request to the socket and the second waits for the response. The waiting needs to be made optional, for example via a boolean flag.
Returning result from R
You will need some kind of active communication back to Java. One possibility is to use plain sockets or something at a higher level such as HTTP. I thought about the httpRequest package [2].
So the call from Java should look like:
connection.eval(s"""simplePostToHost(
  "192.168.12.12", "/listener/results/",
  try(eval(parse(text="$code")), silent=TRUE), port=8080)""")
Listening for result in Java
The request and response should share some kind of unique ID so we know which response belongs to which request. You should run some service that listens on the path /listener/results for incoming results and tells Java that a result is ready. It should also allow reusing the RConnection, which should previously have been marked as "busy".
I recommend using a Scala Promise[T] for this part.
Hope it helps somebody. I'm probably going to implement it once my company needs it.
[1] https://github.com/s-u/REngine/blob/a74e184c051c2d2e850430cd2d0526656d3a6c48/Rserve/protocol/RTalk.java#L211
[2] https://cran.r-project.org/web/packages/httpRequest/httpRequest.pdf
Here is the answer I found at http://statweb.stanford.edu/~lpekelis/13_datafest_cart/13_datafest_r_talk.pdf and at http://www.rforge.net/JRI/files/
Start with an instance of R:
Rengine re = new Rengine(args, false, new TextConsole());
The callback hooks come from the TextConsole class passed in above, which implements JRI's RMainLoopCallbacks interface; the full example code is in the linked JRI files.
Also, check the links for further reference. I couldn't find out who the author is, otherwise I would have mentioned them.
http://docs.meteor.com/#meteor_methods
I have tried it in publish.js in my server folder.
I am calling Meteor.apply from the client to invoke the server method, but I always get an undefined response.
Calling Meteor.methods on the server is correct. That will define remote methods that run in the privileged environment and return results to the client. To return a normal result, just call return from your method function with some JSON value. To signal an error, throw a Meteor.Error.
On the client, Meteor.apply always returns undefined, because the method call is asynchronous. If you want the return value of the method, the last argument to apply should be a callback, which will be passed two arguments: error and result, in the typical async callback style.
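Putting both halves together, a minimal sketch (the Players collection and getScore method are made-up names for the example):

// On the server:
Meteor.methods({
  getScore: function (playerId) {
    var player = Players.findOne(playerId);
    if (!player)
      throw new Meteor.Error(404, "Player not found");
    return player.score;   // plain JSON value returned to the client
  }
});

// On the client (somePlayerId is whatever id you already have):
Meteor.apply('getScore', [somePlayerId], function (error, result) {
  if (error)
    console.log('method failed:', error.reason);
  else
    console.log('score is', result);
});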
Is your server code actually getting called? You can check that by updating the DB in the method and seeing if the client's cache gets the new data, or calling console.log from inside the method body and looking at the output of the "meteor" process in your terminal.
There are several places I can define my Meteor.methods(), each with pros and cons:
On the server only - when the client calls the method, it'll have to wait for the server to respond before anything changes on the client-side
On the server, with a stub on the client - when the client calls the method, it will execute the stub method on the client-side, which can quickly return a (predicted) response. When the server comes back with the 'actual' response, it will replace the response generated by the stub and update other elements accordingly.
The same method on both client and server - commonly used for methods dealing with collections, where the method is actually a stub on the client-side, but this stub is the same as the server-side function and uses the client's cached collections instead of the server's. So it will still appear to update instantly, like the stub, but I guess it's a bit more accurate in its guessing (a short sketch of this follows the list).
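As a sketch of that third option (the collection and method names are invented for the example), a file loaded on both client and server might contain:

// Shared code, loaded on both client and server.
Posts = new Meteor.Collection('posts');

Meteor.methods({
  addPost: function (title) {
    // On the client this runs as a stub against the local cache;
    // on the server it writes to the real database.
    return Posts.insert({ title: title, createdAt: new Date() });
  }
});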
I've uploaded a short example here, should you need a working example of this: https://gist.github.com/2387816
I hope some will find this addition useful, and that it doesn't cloud the point that methods are primarily intended to run on the server, as debergalis has explained.
Using Meteor.methods() on the client is useful too. (Look for "stub" in the Meteor.call() section of the docs as well.)
This allows the client to (synchronously) simulate the expected effect of the server call.
As mentioned in the docs:
You use methods all the time, because the database mutators (insert, update, remove) are implemented as methods. (...)
A separate section explaining use of stubs on the client might ease the understanding of methods calls on the server.