How to use RedisPool to manage several Redis connections with Swoole websocket - swoole

I have a Swoole websocket server, and I would like to use a RedisPool to manage several Redis connections and pick one for each query when I receive a message. When I try to get a connection from my RedisPool, Swoole outputs the error "Error: Uncaught Swoole\Error: API must be called in the coroutine in #swoole-src/library/core/ConnectionPool.php:69", so I tried to wrap my query in Co\run:
Co\run(function () use ($key, $pool) {
    go(function () use ($key, $pool) {
        $redis = $pool->get();
        $result = $redis->get($key);
        $pool->put($redis);
    });
});
Then I get "Warning: Swoole\Coroutine\Scheduler::start(): eventLoop has already been created. unable to start Swoole\Coroutine\Scheduler"
I suppose the event loop has already been started by my websocket server. Is there a way to access the server's event loop, or another way to run a coroutine?
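For context, a minimal sketch of this kind of setup (host, port and key handling are placeholders). Note that with the Swoole 4.x default of enable_coroutine => true, the on('message') callback itself already runs inside a coroutine, so the pool can normally be used there directly, without Co\run() or go():

use Swoole\WebSocket\Server;
use Swoole\WebSocket\Frame;
use Swoole\Database\RedisConfig;
use Swoole\Database\RedisPool;

// In production you might prefer creating the pool in onWorkerStart, one per worker
$pool = new RedisPool((new RedisConfig())->withHost('127.0.0.1')->withPort(6379), 16);

$server = new Server('0.0.0.0', 9502);
$server->set(['enable_coroutine' => true]); // the default in Swoole 4.x

$server->on('message', function (Server $server, Frame $frame) use ($pool) {
    $redis  = $pool->get();               // safe here: the callback runs in a coroutine
    $result = $redis->get($frame->data);  // treat the frame payload as the key (placeholder)
    $pool->put($redis);                   // always hand the connection back to the pool
    $server->push($frame->fd, (string) $result);
});

$server->start();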

Related

How do I wait for successful connection using DDP in meteor (server -> server)

Continuing the discussion from DDP: how do I wait for a connection?:
Based on the thread above, we can leverage Tracker.autorun to wait for and confirm a successful connection between a client and the Meteor server.
I would like to do the same for a server-to-server connection.
Basically, I have a Meteor server (server1) that needs to "test" whether another Meteor server (server2) is available.
Each time I run DDP.connect(remoteUrl).status() within a Meteor method on server1, it always reports "connecting". I know it connects within the next second or two, but I'm unable to wait for and check the connection success flag.
How do I do this on the server?
Thanks
Reactivity doesn't exist in this form on the server, so something like Tracker is not an option. Fortunately, though, there is the onReconnect callback you can use. You can steal the required logic from my meteor-serversync package:
const connection = DDP.connect(URL);
connection.onReconnect = function () {
  console.log("(re)connected");
  if (!initialized) {
    // `initialized` and `options` come from the surrounding serversync code
    options.onConnect && options.onConnect();
  }
};

Detect BROWSERSTACK_ALL_PARALLELS_IN_USE error when instantiating a driver with WebdriverIO

I am creating a new Appium WebDriver session using the WebdriverIO API:
const options = {...};
let driver: Browser<"async">;
try {
    driver = await remote(options);
} catch (error) {
    console.log("Error:", error);
}
In BrowserStack there is a limit on the number of parallel tests that one can run. I am hitting that limit, so my call to remote errors and, in the catch, I get this inside the error variable:
Failed to create session.
This does not give me much info on why the call failed. On the other hand, the driver emits logs that do give the answer:
XXXX-XX-XXTXX:XX:XX.XXXZ ERROR webdriver: [BROWSERSTACK_ALL_PARALLELS_IN_USE] All parallel tests are currently in use, including the queued tests. Please wait to finish or upgrade your plan to add more sessions.: [BROWSERSTACK_ALL_PARALLELS_IN_USE] All parallel tests are currently in use, including the queued tests. Please wait to finish or upgrade your plan to add more sessions.
at Object.getErrorFromResponseBody (/home/vsts/work/1/s/node_modules/webdriverio/node_modules/webdriver/build/utils.js:189:12)
at /home/vsts/work/1/s/node_modules/webdriverio/node_modules/webdriver/build/request.js:168:31
at Generator.next (<anonymous>)
at asyncGeneratorStep (/home/vsts/work/1/s/node_modules/webdriverio/node_modules/webdriver/build/request.js:9:103)
at _next (/home/vsts/work/1/s/node_modules/webdriverio/node_modules/webdriver/build/request.js:11:194)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
Question
How can I get more information about the nature of the error as I am calling remote? I want to detect the occurrence of BROWSERSTACK_ALL_PARALLELS_IN_USE so that I can implement, in code, some strategies around this issue (like retrying after some randomized time).
The error BROWSERSTACK_ALL_PARALLELS_IN_USE indicates that all your running and queued threads are full and no more tests can be triggered.
Once you have maxed out your running + queued threads, any new sessions initiated are dropped with BROWSERSTACK_ALL_PARALLELS_IN_USE.
You will not be able to trigger any more tests until there are threads available.
You can take a look at https://webdriver.io/docs/organizingsuites/ and set maxInstances. This will allow you to limit the number of tests triggered at the same time.
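For reference, maxInstances lives in the WebdriverIO test runner configuration. A minimal sketch (the capability values and the limit of 2 are just placeholders to match a small BrowserStack plan):

// wdio.conf.ts
export const config = {
    // never start more than 2 sessions at a time, so the plan limit is not exceeded
    maxInstances: 2,
    capabilities: [{
        browserName: 'chrome',
        // the limit can also be set per capability
        maxInstances: 2,
    }],
    // ...the rest of the usual test runner options (specs, framework, services, ...)
};

Note that this only helps when the sessions are started by the wdio test runner; a standalone remote() call like the one in the question is not throttled by it.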
Actually the error is a proper Error object and has message and stack available. The log shown above is contained in the message, so the following is just fine:
try {
    driver = await remote(options);
} catch (error) {
    if (error.message.indexOf("BROWSERSTACK_ALL_PARALLELS_IN_USE") >= 0) {
        // Retry
    }
    // Do something else
}
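Building on that, here is a sketch of the retry-after-a-randomized-delay strategy mentioned in the question. createSessionWithRetry, the attempt count and the delay range are all illustrative choices, not part of any API:

import { remote } from 'webdriverio';

async function createSessionWithRetry(options: Parameters<typeof remote>[0], maxAttempts = 5) {
    for (let attempt = 1; attempt <= maxAttempts; attempt++) {
        try {
            return await remote(options);
        } catch (error) {
            const message = error instanceof Error ? error.message : String(error);
            const parallelsInUse = message.indexOf("BROWSERSTACK_ALL_PARALLELS_IN_USE") >= 0;
            if (!parallelsInUse || attempt === maxAttempts) {
                throw error; // unrelated failure, or out of attempts
            }
            // wait a randomized 30-90 seconds before trying again
            const delayMs = 30000 + Math.random() * 60000;
            await new Promise((resolve) => setTimeout(resolve, delayMs));
        }
    }
    throw new Error('unreachable');
}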

Restoring messages from database

I am thinking of a way to manage failed messages in Rebus.
In my second-level retry strategy I want to save the message and exception details into the database so that I can later review the error details and decide whether to resend the message to be reprocessed, or to ignore and delete it.
In the handler I am capturing details as follows:
public async Task Handle(IFailed<StudentCreated> failedMessage)
{
    // Logic to defer the message with rebus_defer_count not shown

    DictionarySerializer dictionarySerializer = new DictionarySerializer();
    ObjectSerializer objectSerializer = new ObjectSerializer();

    string headers = dictionarySerializer.SerializeToString(failedMessage.Headers);
    string message = objectSerializer.SerializeToString(failedMessage.Message);

    Exception lastException = failedMessage.Exceptions.Last();
    string exception = objectSerializer.SerializeToString(lastException);

    // Logic to save the message and error details in the database not shown
}
This will enable me to save the message and error details into the database, where I can build a dashboard to view the messages and resolve them as I wish, rather than doing so in the broker queue (e.g. RabbitMQ).
Now my question is: how can I return them to the handler where the error was raised, using the information provided in the headers?
What is the best way to do this with Rebus, given that I have all the details from the failed message as shown in my code snippet?
Regards
What you're trying to achieve will be much easier if you make a small change to your application. You see, Rebus already has a built-in service in place for handling failed messages called IErrorHandler.
You can register your own error handler like this:
Configure.With(...)
    .(...)
    .Options(o => o.Register<IErrorHandler>(c => new MyCustomErrorHandler()))
    .Start();
thus replacing the default error handler (which btw. is PoisonQueueErrorHandler)
The error handler gets to handle the message in the form of the raw TransportMessage (i.e. simply headers and a byte[]) when all retries have failed, so this is the perfect place to save the message to your database.
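To make that concrete, a rough sketch of such an error handler. Caveat: the exact IErrorHandler signature varies slightly between Rebus versions, and SaveToDatabaseAsync is a placeholder for your own persistence code:

using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Rebus.Messages;
using Rebus.Retry;
using Rebus.Transport;

class MyCustomErrorHandler : IErrorHandler
{
    public async Task HandlePoisonMessage(TransportMessage transportMessage,
        ITransactionContext transactionContext, Exception exception)
    {
        // transportMessage.Headers is the header dictionary and
        // transportMessage.Body is the raw byte[] of the message.
        // Like the default PoisonQueueErrorHandler, you may also want to record the
        // source queue here, so the message can be sent back to it later.
        await SaveToDatabaseAsync(transportMessage.Headers, transportMessage.Body, exception.ToString());
    }

    // Placeholder for your own persistence code
    Task SaveToDatabaseAsync(Dictionary<string, string> headers, byte[] body, string exceptionDetails)
        => Task.CompletedTask;
}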
If you then look here, you can see how Rebus' default error handler adds its own queue name as the rbs2-source-queue header, meaning that the message can later be sent back to that queue.
With this information, it should be fairly easy to write some code that inspects the message for its source queue and sends a RabbitMQ message to that queue.
This will only work if the re-delivery service has access to the RabbitMQ instance where all of your Rebus endpoints are running, of course. It's less straightforward if you want to implement this in a general way: e.g. if you were using Fleet Manager, each Rebus instance would use a long-polling protocol to query the server for commands, which enables Fleet Manager to tell any Rebus instance to send a previously failed message to any queue it has access to.

gRPC Java Client - hasNext during onNext?

I have a server-side streaming gRPC service that may have messages coming in very rapidly. A nice-to-have client feature would be to know whether more updates are already queued by the time this onNext execution is ready to display in the UI, as I would then simply display the next one instead.
StreamObserver<Info> streamObserver = new StreamObserver<Info>()
{
    @Override
    public void onNext( Info info )
    {
        doStuffForALittleWhile();
        if( !someHasNextFunction() )
            render();
    }
};
Is there some hasNext function or other method of detection I'm unaware of?
There's no API to determine if additional messages have been received, but not yet delivered to the application.
The client-side stub API (e.g., StreamObserver) is implemented using the more advanced ClientCall/ClientCall.Listener API. It does not provide any received-but-not-delivered hint.
Internally, gRPC processes messages lazily. gRPC waits until the application is ready for more messages (typically by returning from StreamObserver.onNext()) to try to decode another message. If it decodes another message then it will immediately begin delivering that message.
One way would be to keep a small buffer of messages from onNext. That would let you show the current message and then check whether another has arrived in the meantime, as in the sketch below.
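A rough sketch of that buffering approach (Info, doStuffForALittleWhile() and render() are the placeholders from the question; it needs java.util.concurrent.BlockingQueue/LinkedBlockingQueue and io.grpc.stub.StreamObserver): onNext() only enqueues and returns quickly, so gRPC keeps decoding and delivering, while a separate consumer thread uses its own queue as the "has next" signal.

BlockingQueue<Info> pending = new LinkedBlockingQueue<>();

StreamObserver<Info> streamObserver = new StreamObserver<Info>()
{
    @Override
    public void onNext( Info info )
    {
        pending.add( info );   // cheap: return immediately so more messages can arrive
    }

    @Override
    public void onError( Throwable t ) { /* handle the error */ }

    @Override
    public void onCompleted() { /* handle completion */ }
};

// Consumer thread: does the slow work and skips rendering when a newer update is already queued
Thread consumer = new Thread( () -> {
    try
    {
        while( true )
        {
            Info info = pending.take();
            doStuffForALittleWhile();
            if( pending.isEmpty() )   // our own "has next" check
                render();
        }
    }
    catch( InterruptedException e )
    {
        Thread.currentThread().interrupt();
    }
} );
consumer.start();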

How to check if Meteor.call() fails when server connection is down?

When the Meteor server connection is lost, how can I verify that Meteor.call() failed? Meteor.call() doesn't return any value. Basically, hit Ctrl+Z in the Meteor shell while your app is running, then do something in the app that triggers a Meteor.call, e.g. adding a new blog post:
Meteor.call('createPhrase', phrase, function(error) {
  console.log("This NEVER gets called if server is down.");
  if (error) {
    throwError(error.reason);
  }
});
I tried using Session vars, but the reactivity screws it up: the code below triggers an error in my template helper (which gets flashed to the browser briefly) as soon as isMyError is set to true, and then, when the Meteor.call succeeds, the error goes away again because isMyError is set back to false. It looks really sloppy.
Session.set("isMyError", true);
Meteor.call('createPhrase', phrase, function(error) {
console.log("This NEVER gets called if server is down.");
Session.set("isMyError", false);
if (error) {
throwError(error.reason);
}
});
Template.index.isMeteorStatus = function () {
  var myClientStatus = Meteor.status();
  if ((myClientStatus.connected === false) || (Session.get("isMyError") === true)) {
    return false;
  } else {
    return true;
  }
};
Meteor's calls are generally entered into a queue and sent to the server in the order in which they are called. If there is no connection, they stay in the queue until the server is connected once more.
This is why nothing is returned: Meteor hopes it can reconnect and send the call, and when it does, the call eventually returns a result.
If you want to check whether the server is connected at the point of the call, it's best to look at Meteor.status().connected (which is reactive), only run the Meteor.call if it is true, and throw an error otherwise:
if (Meteor.status().connected) {
  Meteor.call(/* ... */);
} else {
  throwError("Error - not connected");
}
You could also use navigator.onLine to check whether the network is connected.
The reason you can experience up to a 60-second delay before Meteor.status().connected reflects the true connection state is that there isn't really a way for a browser to check directly whether it is connected or not.
Meteor sends a periodic heartbeat, an 'h', on the websocket/long-polling wire to check that it is connected. Once it realizes it didn't get a heartbeat back from the other end, it marks the connection as disconnected.
However, it also marks the connection as disconnected if a Meteor.call or some other data is sent through and the socket isn't able to send it. If you issue a Meteor.call before checking Meteor.status().connected, it will realize much sooner that it is disconnected. I'm not sure it would realize it quickly enough to use them one line after the next, but you could use a Meteor.setTimeout of a second or two before doing the check.
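For example (just a sketch, reusing the createPhrase call from the question):

Meteor.call('createPhrase', phrase, function (error) {
  if (error) {
    throwError(error.reason);
  }
});

// Re-check the connection a second or two later; by then a dead socket
// has usually been noticed and Meteor.status().connected is accurate.
Meteor.setTimeout(function () {
  if (!Meteor.status().connected) {
    throwError("Error - not connected");
  }
}, 2000);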
Attempt to succeed:
Meteor is designed very well to attempt to succeed. Instead of 'attempting to fail' with an error stating that the network is not available, it's better to try and queue everything up until the connection is back.
The best thing to do is avoid telling the user the network is down, because usually they already know. The queued tasks ensure the user flow is unchanged as soon as the connection is back.
So it would be better to work with the queues that are built into the reconnection process rather than to avoid them.
