I'm using the hiredis C client library to interact with Redis in an async context.
At some point in my workflow I have to make a synchronous call to Redis, but I'm not able to get a successful response from Redis.
I'm not sure whether I can issue a sync command to Redis from an async context, but...
I have something like this:
redisAsyncContext * redis_ctx;
redisReply * reply;
// ...
reply = redisCommand(&(redis_ctx->c), COMMAND);
After the redisCommand call, my reply is NULL, which is documented as an error condition, and my redis_ctx->c looks like:
err = 0
errstr = '\000' <repeats 127 times>
fd = 11
flags = 2
obuf = "*5\r\n$4\r\nEVAL\r\n$215\r\n\"math.randomseed(tonumber(ARGV[1])) local keys = redis.call('hkeys',KEYS[1]) if #keys == 0 then return nil end local key = keys[math.random(#keys)] local value = redis.call('hget', KEYS[1], key) return {key, value}\"\r\n$1\r\n1\r\n$0\r\n\r\n$1\r\n1\r\n"
reader = 0x943730
I can't figure out whether the command was issued or not.
Hope it's not too late. I'm no Redis expert, but if you need to make a synchronous call to Redis, why would you use an async context?
If you just use redisCommand with a redisContext everything should be fine.
Assuming that variable ctx has been declared as
redisContext *ctx;
you can use redisCommand like this:
reply = (redisReply *)redisCommand(ctx, "HGET %s %s", hash, key);
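For completeness, here is a minimal synchronous sketch. It assumes a Redis instance on 127.0.0.1:6379 and uses hypothetical hash/key names; adjust for your setup:

```c
#include <stdio.h>
#include <stdlib.h>
#include <hiredis/hiredis.h>

int main(void) {
    /* Hypothetical host/port; adjust for your setup. */
    redisContext *ctx = redisConnect("127.0.0.1", 6379);
    if (ctx == NULL || ctx->err) {
        fprintf(stderr, "Connection error: %s\n",
                ctx ? ctx->errstr : "can't allocate context");
        exit(1);
    }

    /* "myhash"/"mykey" are placeholder names. */
    redisReply *reply = redisCommand(ctx, "HGET %s %s", "myhash", "mykey");
    if (reply == NULL) {
        /* A NULL reply means the context itself is in an error state. */
        fprintf(stderr, "Command error: %s\n", ctx->errstr);
    } else {
        if (reply->type == REDIS_REPLY_STRING)
            printf("value: %s\n", reply->str);
        freeReplyObject(reply);
    }
    redisFree(ctx);
    return 0;
}
```

Note that a NULL reply on a redisContext means you should check ctx->err / ctx->errstr for the cause, which is exactly the check missing in the async-context attempt above.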
In a gRPC unidirectional client-to-server stream, is it possible for the server to cancel the stream and return an error message to the client?
I've tried setting a trailer and returning a status message with .SendAndClose(), but neither is readable from the client. At the client, .Send returns an EOF error as expected, but .CloseAndRecv() does not return the status message sent by the server, and .Trailer() returns an empty map.
// protobuf:
service Foo {
rpc Eat (stream Food) returns (Status) {}
}
// Server:
var Retval pb.Status
Retval.Status = "something went wrong"
emap := make(map[string]string)
emap["error"] = "something went wrong"
MD := metadata.New(emap)
Stream.SetTrailer(MD)
Stream.SendAndClose(&Retval)
// Client:
err = Stream.Send(Stuff) // returns EOF
if err != nil {
Status, err := o.Stream.CloseAndRecv() //returns nil, EOF
MD := o.Stream.Trailer() // returns an empty map.
}
Is there a way to do this without a bidirectional stream or a separate RPC endpoint for the client to request status messages from the server?
First, you don't need to define the Status object in your proto file. You can, for example, return the Well-Known Type called Empty.
To do this:
import "google/protobuf/empty.proto";
service Foo {
rpc Eat (stream Food) returns (google.protobuf.Empty);
}
This is just a recommendation, because you don't need to provide your own Status object; gRPC already has one defined that you can use in your Go code.
Then, on the server side, you don't need to call SendAndClose when you want to return an error, because your Eat function has the following signature:
func (*Server) Eat(stream pb.Foo_EatServer) error
You can use something like the following code to send a status specifying that you had an error.
return status.Errorf(
	codes.Internal, // check for a suitable code
	"A description for the error",
)
Before returning, you can also set a trailer that the client can read on its side.
On the client side, you'll need to do something like:
trailer := stream.Trailer()
v, exist := trailer["error"]
if exist { // there is an error
fmt.Println("Error: ", v)
}
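As a complement to reading the trailer, the status returned by the server's Eat function also travels in the error that CloseAndRecv returns. A minimal sketch, assuming stream is the client stream obtained from the generated pb client, with fmt and google.golang.org/grpc/status imported:

```go
// The server's returned status is carried in the RPC error itself.
_, err := stream.CloseAndRecv()
if err != nil {
	if st, ok := status.FromError(err); ok {
		fmt.Println("code:", st.Code())       // e.g. Internal
		fmt.Println("message:", st.Message()) // "A description for the error"
	}
}
```

This avoids the empty-trailer problem entirely, since the code and message are part of the gRPC status rather than custom metadata.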
Let me know if you need more help with that.
Is there a way to notify a system's users in real time that the system is being deployed (published to production)? The purpose is to prevent them from starting atomic operations.
The system is ASP.NET-based and already has the SignalR DLLs, but I don't know exactly where in the application I can detect that a deployment is in progress.
This is highly dependent on your deployment process, but I achieved something similar in the following way:
I created a method in one of my controllers called AnnounceUpdate:
[HttpPost("announce-update")]
public async Task<IActionResult> AnnounceUpdate([FromQuery] int secondsUntilUpdate, string updateToken)
{
await _tenantService.AnnounceUpdate(secondsUntilUpdate, updateToken);
return Ok();
}
The controller method takes the number of seconds until the update, as well as a secret token to ensure that not just anyone can call this endpoint.
The idea is that we will call this controller just before we deploy, to announce the pending deployment. I make my deployments using Azure DevOps, so I was able to create a release task that automatically runs the following PowerShell code to call my endpoint:
$domain = $env:LOCALURL;
$updateToken = $env:UPDATETOKEN;
$minutesTillUpdate = 5;
$secondsUntilUpdate = $minutesTillUpdate * 60;
$len = $secondsUntilUpdate / 10;
#notify users every 10 seconds about update
for($num =1; $num -le $len; $num++)
{
$url = "$domain/api/v1/Tenant/announce-update?secondsUntilUpdate=$secondsUntilUpdate&updateToken=$updateToken";
$r = Invoke-WebRequest $url -Method Post -UseBasicParsing;
$minsLeft = [math]::Floor($secondsUntilUpdate/60);
$secsLeft = $secondsUntilUpdate - $minsLeft * 60;
$timeLeft = "";
if($minsLeft -eq 0){
$timeLeft = "$secsLeft seconds";
}else{
if($secsLeft -eq 0){
$timeLeft = "$minsLeft minute(s)";
}else{
$timeLeft = "$minsLeft minute(s) $secsLeft seconds";
}
};
$code = $r.StatusCode;
Write-Output "";
Write-Output "Notified users $num/$len times.";
Write-Output "Response: $code.";
Write-Output "$timeLeft remaining."
Write-Output "_________________________________"
Start-Sleep -Seconds 10;
$secondsUntilUpdate = $secondsUntilUpdate - 10;
}
Write-Output "Allowing users to log out.";
Write-Output "";
Start-Sleep -Seconds 1;
Write-Output "Users notified! Proceeding with update.";
As you can see, on the script I have set that the time till the update is 5 minutes. I then call my AnnounceUpdate endpoint every 10 seconds for the duration of the 5 minutes. I have done this because if I announce an update that will occur in 5 minutes, and then 2 minutes later someone connects, they will not see the update message. On the client side I set a variable called updatePending to true when the client receives the update notification, so that they do not keep on getting a message every 10 seconds. Only clients that have not yet seen the update message will get it.
In the tenant service I then have this code:
public async Task AnnounceUpdate(int secondsUntilUpdate, string updateToken)
{
if (updateToken != _apiSettings.UpdateToken) throw new ApiException("Invalid update token");
await _realTimeHubWrapper.AnnouncePendingUpdate(secondsUntilUpdate);
}
I simply check that the token is valid and then continue to call my hub wrapper.
The hub wrapper is an implementation around SignalR's hub context, which allows SignalR methods to be invoked from within our code. More info can be read here.
In the HUB wrapper, I have the following method:
public Task AnnouncePendingUpdate(int secondsUntilUpdate) =>
_hubContext.Clients.All.SendAsync("UpdatePending", secondsUntilUpdate);
On the client side I have set up this handler:
// When an update is on the way, clients will be notified every 10 seconds.
private listenForUpdateAnnouncements() {
this.hubConnection.on(
'UpdatePending', (secondsUntilUpdate: number) => {
if (!this.updatePending) {
const updateTime = currentTimeString(true, secondsUntilUpdate);
const msToUpdate = secondsUntilUpdate * 1000;
const message =
secondsUntilUpdate < 60
? `The LMS will update in ${secondsUntilUpdate} seconds.
\n\nPlease save your work and close this page to avoid any loss of data.`
: `The LMS is ready for an update.
\n\nThe update will start at ${updateTime}.
\n\nPlease save your work and close this page to avoid any loss of data.`;
this.toastService.showWarning(message, msToUpdate);
this.updatePending = true;
setTimeout(() => {
this.authService.logout(true, null, true);
this.stopConnection();
}, msToUpdate);
}
}
);
}
I show a toast message to the client, notifying them of the update. I then set a timeout (using the value of secondsUntilUpdate) which logs the user out and stops the connection. That part was specific to my use case; you can do whatever you want at this point.
To sum it up, the logical flow is:
PowerShell Script -> Controller -> Service -> Hub Wrapper -> Client
The main takeaway is that something still needs to trigger the call to the endpoint to announce the update. I am lucky enough to be able to have it run automatically during my release process. If you are manually publishing and copying the published code, perhaps you can just run the PowerShell script manually, and then deploy once it's done.
Goal
Jest-test a React Native application that uses Realm Sync as its database, completely locally, without using an internet connection or hitting the Realm Sync server.
My App Logic
Users log in with Realm user credentials and sync data from the device to the Realm server.
How I did it
First I rerouted the routine that opens the Realm connection, so that when the code runs in the Jest test environment it opens a local file Realm instead of a synced Realm.
Then, by mocking the realm-network-transport module, I intercept any request to the remote Realm server (MongoDB Stitch server), so it is answered from a designated static response I prepared.
This also applies to user function calls, since in the end a Realm function call makes an HTTP request through the realm-network-transport module.
The Problem
Everything worked fine prior to realm-js v10.1.2, but after upgrading to v10.1.2 only the authentication routine works; function calls throw an error with this workaround (things work normally in a normal run).
The error reported is
JS value must be of type 'object', got (undefined)
at func (.../node_modules/realm/lib/user.js:34:37)
...
the code pointed by the error is
callFunction(name, args, service = undefined) {
return promisify(cb => this._callFunction(name, this._cleanArgs(args), service, cb));
},
I tried to console.log this, name, args, and service; it yielded:
{} clientAvailableOutletPosid [...function argument...] undefined
which I guess leads to /node_modules/realm/src/js_user.hpp, line 354:
template<typename T>
void UserClass<T>::call_function(ContextType ctx, ObjectType this_object, Arguments& args, ReturnValue &) {
args.validate_count(4);
auto user = get_internal<T, UserClass<T>>(ctx, this_object);
auto name = Value::validated_to_string(ctx, args[0], "name");
auto call_args_js = Value::validated_to_array(ctx, args[1], "args");
auto service = Value::is_undefined(ctx, args[2])
? util::none
: util::Optional<std::string>(Value::validated_to_string(ctx, args[2], "service"));
auto callback = Value::validated_to_function(ctx, args[3], "callback");
auto call_args_bson = Value::to_bson(ctx, call_args_js);
user->m_app->call_function(
*user,
name,
call_args_bson.operator const bson::BsonArray&(),
service,
Function::wrap_callback_error_first(ctx, this_object, callback,
[] (ContextType ctx, const util::Optional<bson::Bson>& result) {
REALM_ASSERT_RELEASE(result);
return Value::from_bson(ctx, *result);
}));
}
Where to go from here?
I want to implement an http4s server that receives content from another service, processes it, and returns the response.
The original service uses redirects, so I added the FollowRedirect middleware. I also added the Logger middleware to check the logs produced.
The skeleton of the service is:
implicit val clientResource = BlazeClientBuilder[F](global).resource
val wikidataEntityUrl = "http://www.wikidata.org/entity/Q"
def routes(implicit timer: Timer[F]): HttpRoutes[F] = HttpRoutes.of[F] {
case GET -> Root / "e" / entity => {
val uri = uri"http://www.wikidata.org/entity/" / ("Q" + entity)
val req: Request[F] = Request(uri = uri)
clientResource.use { c => {
val req: Request[F] = Request(Method.GET, uri)
def cb(resp: Response[F]): F[Response[F]] = Ok(resp.bodyAsText)
val redirectClient = Logger(true,true,_ => false)(FollowRedirect[F](10, _ => true)(c))
redirectClient.fetch[Response[F]](req)(cb)
}}}}
When I try to access the service with curl as:
curl -v http://localhost:8080/e/33
The response contains the first part of the original content and finishes with:
transfer closed with outstanding read data remaining
* Closing connection 0
Looking at the logs, they contain the following line:
ERROR o.h.s.blaze.Http1ServerStage$$anon$1 - Error writing body
org.http4s.InvalidBodyException: Received premature EOF.
which suggests that the upstream body ended with a premature EOF.
I found a possible answer in this issue, but the answers there suggest using deprecated methods like toHttpService.
I think I would need to rewrite the code using streams, but I am not sure what the most idiomatic way to do it is. Any suggestions?
I received some help in the http4s Gitter channel: use the toHttpApp method instead of the fetch method.
It was also suggested that I pass the client as a parameter.
The resulting code is:
case GET -> Root / "s" / entity => {
val uri = uri"http://www.wikidata.org/entity/" / ("Q" + entity)
val req: Request[F] = Request(Method.GET, uri)
val redirectClient = Logger(true,true,_ => false)(FollowRedirect[F](10, _ => true)(client))
redirectClient.toHttpApp.run(req)
}
and now it works as expected.
The toHttpApp method is intended for use in proxy servers.
Are console.log/debug/warn/error in Node.js asynchronous? I mean, will JavaScript code execution halt until the output is printed on screen, or will it print at a later stage?
Also, I am interested in knowing if it is possible for a console.log to NOT display anything if the statement immediately after it crashes node.
Update: Starting with Node 0.6 this post is obsolete, since stdout is synchronous now.
Well let's see what console.log actually does.
First of all it's part of the console module:
exports.log = function() {
process.stdout.write(format.apply(this, arguments) + '\n');
};
So it simply does some formatting and writes to process.stdout, nothing asynchronous so far.
process.stdout is a getter defined on startup which is lazily initialized, I've added some comments to explain things:
.... code here...
process.__defineGetter__('stdout', function() {
if (stdout) return stdout; // only initialize it once
/// many requires here ...
if (binding.isatty(fd)) { // a terminal? great!
stdout = new tty.WriteStream(fd);
} else if (binding.isStdoutBlocking()) { // a file?
stdout = new fs.WriteStream(null, {fd: fd});
} else {
stdout = new net.Stream(fd); // a stream?
// For example: node foo.js > out.txt
stdout.readable = false;
}
return stdout;
});
In the case of a TTY on UNIX we end up with a tty.WriteStream, which inherits from Socket. So all that Node basically does is push the data onto the socket; the terminal takes care of the rest.
Let's test it!
var data = '111111111111111111111111111111111111111111111111111';
for(var i = 0, l = 12; i < l; i++) {
data += data; // warning! gets very large, very quick
}
var start = Date.now();
console.log(data);
console.log('wrote %d bytes in %dms', data.length, Date.now() - start);
Result
....a lot of ones....1111111111111111
wrote 208896 bytes in 17ms
real 0m0.969s
user 0m0.068s
sys 0m0.012s
The terminal needs around 1 seconds to print out the sockets content, but node only needs 17 milliseconds to push the data to the terminal.
The same goes for the stream case, and the file case is also handled asynchronously.
So yes, Node.js holds true to its non-blocking promises.
console.warn() and console.error() are blocking. They do not return until the underlying system calls have succeeded.
Yes, it is possible for a program to exit before everything written to stdout has been flushed. process.exit() will terminate node immediately, even if there are still queued writes to stdout. You should use console.warn to avoid this behavior.
My conclusion, after reading the Node.js 10.x docs (quoted below), is that you can use console.log for logging; console.log is synchronous and implemented in low-level C.
Although console.log is synchronous, it won't cause a performance issue as long as you are not logging huge amounts of data.
(The command-line example below demonstrates the case where stdout is asynchronous and stderr is synchronous.)
Based on Node.js Doc's
The console functions are synchronous when the destination is a terminal or a file (to avoid lost messages in case of premature exit) and asynchronous when it's a pipe (to avoid blocking for long periods of time).
That is, in the following example, stdout is non-blocking while stderr is blocking:
$ node script.js 2> error.log | tee info.log
In daily use, the blocking/non-blocking dichotomy is not something you should worry about unless you log huge amounts of data.
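You can check which of those destinations applies to your stdout directly from Node; a small sketch:

```javascript
// process.stdout.isTTY is true when stdout is attached to a terminal,
// and undefined when stdout is redirected to a pipe or a file.
const toTerminal = Boolean(process.stdout.isTTY);
console.log(toTerminal
  ? 'stdout is a terminal: console.log is synchronous'
  : 'stdout is a pipe (asynchronous) or a file (synchronous)');
```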
Hope it helps
console.log is asynchronous on Windows, while it is synchronous on Linux/macOS. To make console.log synchronous on Windows, write this line at the start of your code, probably in your index.js file. Any console.log after this statement will be synchronous.
if (process.stdout._handle) process.stdout._handle.setBlocking(true);
You can use this for synchronous logging:
const fs = require('fs')
fs.writeSync(1, 'Sync logging\n')