Offline Firestore : Stream closed with status: Status{code=UNAVAILABLE - firebase

I am getting these error messages when using my app offline (it works when I am online):
W/Firestore(23675): (24.0.1) [WatchStream]: (da69cba) Stream closed with status: Status{code=UNAVAILABLE, description=End of stream or IOException, cause=null}.
W/Firestore(23675): (24.0.1) [WriteStream]: (9780b0d) Stream closed with status: Status{code=UNAVAILABLE, description=End of stream or IOException, cause=null}.
W/Firestore(23675): (24.0.1) [WriteStream]: (9780b0d) Stream closed with status: Status{code=UNAVAILABLE, description=Unable to resolve host firestore.googleapis.com, cause=java.lang.RuntimeException: java.net.UnknownHostException: Unable to resolve host "firestore.googleapis.com": No address associated with hostname
Here is my main.dart:
await Firebase.initializeApp();
FirebaseFirestore.instance.settings = Settings(cacheSizeBytes: Settings.CACHE_SIZE_UNLIMITED);
FirebaseFirestore.instance.settings = Settings(persistenceEnabled: true);
What did I miss? (I have the latest version of Flutter and the cloud_firestore package.)
Is it because I am in debug mode using the Android Emulator?

It could be a server error, or it could be related to this issue on GitHub.
As you can see, the issue on GitHub is still open and under investigation; if you can, update to the latest version of Flutter.
For UNAVAILABLE, the Firebase ErrorCode documentation says:
UNAVAILABLE (HTTP error code = 503) The server is overloaded. The server couldn't process the request in time. Retry the same request, but you must:
- Honor the Retry-After header if it is included in the response from the FCM Connection Server.
- Implement exponential back-off in your retry mechanism (e.g. if you waited one second before the first retry, wait at least two seconds before the next one, then four seconds, and so on). If you're sending multiple messages, delay each one independently by an additional random amount to avoid issuing a new request for all messages at the same time. Senders that cause problems risk being denylisted.
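For illustration, here is a minimal sketch of that back-off pattern in Node.js; sendWithBackoff, sendRequest, the retry count and the base delay are hypothetical placeholders, not part of any Firebase API:

// Retry one request with exponential back-off plus random jitter.
// `sendRequest` is a hypothetical function that performs a single attempt
// and throws (rejects) on a retryable error such as UNAVAILABLE / HTTP 503.
async function sendWithBackoff(sendRequest, maxRetries = 5) {
  let delayMs = 1000; // wait about one second before the first retry
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await sendRequest();
    } catch (err) {
      if (attempt === maxRetries) throw err; // give up after the last retry
      // If the response carries a Retry-After header, prefer that value here.
      const jitter = Math.random() * 500;    // stagger concurrent senders
      await new Promise(resolve => setTimeout(resolve, delayMs + jitter));
      delayMs *= 2; // 1s, 2s, 4s, ...
    }
  }
}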
Check here for more about error codes.
Also, basic but worth a test: make sure that the device connection is stable, with no drops.

Related

gRPC client can't connect to server Failed parsing HTTP/2, only on my computer

I'm trying to connect to a server from my client using gRPC, but the connection always fails, and only on my PC (MacBook Pro). My teammate tried the exact same code, and it works perfectly fine. The following are the error messages from the client and the server. We are using protobuf 3 and Python 3.9. Can anyone give me a hint? Thank you.
Client Error Message
grpc._channel._MultiThreadedRendezvous: <_MultiThreadedRendezvous of RPC that terminated with:
status = StatusCode.UNAVAILABLE
details = "Failed parsing HTTP/2"
debug_error_string = "{"created":"@1626678822.089372000","description":"Error received from peer ipv4:10.113.66.145:9390", "file":"src/core/lib/surface/call.cc", "file_line":1067,"grpc_message":"Failed parsing HTTP/2","grpc_status":14}"
Server Error Message
[07/19 01:29:32 cctv_service]: Session Connected
E0719 01:29:32.744434791 14615 parsing.cc:302] Unknown frame type 71
I0719 01:29:32.744519637 14615 chttp2_transport.cc:812] W:0x7f3364002ae0 SERVER [ipv4:10.25.211.173:50662] state IDLE -> WRITING [CLOSE_FROM_API]
I0719 01:29:32.744554230 14615 chttp2_transport.cc:812] W:0x7f3364002ae0 SERVER [ipv4:10.25.211.173:50662] state WRITING -> WRITING [begin write in current thread]
Updated macOS 11.4 to 11.5; the problem never appeared again.

How to keep grpc-js client connection open (alive) during inactive times?

I have a gRPC server-streaming RPC that communicates with the client. The client throws an error when it does not receive communication from the server. The error is:
Error: 13 INTERNAL: Received RST_STREAM with code 2
at Object.callErrorFromStatus (/app/node_modules/@grpc/grpc-js/build/src/call.js:30:26)
at Object.onReceiveStatus (/app/node_modules/@grpc/grpc-js/build/src/client.js:328:49)
at Object.onReceiveStatus (/app/node_modules/@grpc/grpc-js/build/src/client-interceptors.js:304:181)
at Http2CallStream.outputStatus (/app/node_modules/@grpc/grpc-js/build/src/call-stream.js:116:74)
at Http2CallStream.maybeOutputStatus (/app/node_modules/@grpc/grpc-js/build/src/call-stream.js:155:22)
at Http2CallStream.endCall (/app/node_modules/@grpc/grpc-js/build/src/call-stream.js:141:18)
at ClientHttp2Stream.stream.on (/app/node_modules/@grpc/grpc-js/build/src/call-stream.js:410:22)
at ClientHttp2Stream.emit (events.js:198:13)
at emitCloseNT (internal/streams/destroy.js:68:8)
at emitErrorAndCloseNT (internal/streams/destroy.js:60:3)
Emitted 'error' event at:
at Object.onReceiveStatus (/app/node_modules/@grpc/grpc-js/build/src/client.js:328:28)
at Object.onReceiveStatus (/app/node_modules/@grpc/grpc-js/build/src/client-interceptors.js:304:181)
[... lines matching original stack trace ...]
at emitErrorAndCloseNT (internal/streams/destroy.js:60:3)
at process._tickCallback (internal/process/next_tick.js:63:19)
I have tried searching for solutions, and the only "solution" that kept the connection open was to manually ping the client every 10 seconds using setInterval. I read in this article that gRPC is supposed to keep the connection open for you and even reconnect if it is lost:
As mentioned above, KeepAlive provides a valuable benefit: periodically checking the health of the connection by sending an HTTP/2 PING to determine whether the connection is still alive.
The way I set up the client is below:
var grpc = require('@grpc/grpc-js');
require('dotenv').config({ path: '/app/.env' });
// packageDefinition comes from @grpc/proto-loader (loading omitted here)
var responderProto = grpc.loadPackageDefinition(packageDefinition).responder;
var client = new responderProto.ResponderService(
  process.env.GRPC_HOST_AND_PORT,
  grpc.credentials.createInsecure(),
  {
    "grpc.http2.max_pings_without_data": 0,
    "grpc.keepalive_time_ms": 10000,
    "grpc.keepalive_permit_without_calls": 1
  }
);
My understanding was that "grpc.keepalive_time_ms" is supposed to make the client ping the server every x ms.
Am I doing something wrong, or am I missing or misunderstanding something essential?
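One way to check whether the channel actually stays up during idle periods is to watch its connectivity state alongside those keepalive options. Below is a minimal sketch that continues from the client defined above; the hour-long watch window and the logging are arbitrary illustrative choices, not grpc-js requirements:

// Log every connectivity-state change on the client's channel, so that
// idle disconnects (READY -> IDLE or TRANSIENT_FAILURE) become visible.
var channel = client.getChannel();

function watchState(currentState) {
  var deadline = new Date(Date.now() + 60 * 60 * 1000); // watch for up to an hour
  channel.watchConnectivityState(currentState, deadline, function (err) {
    if (err) {
      return; // the deadline expired without a state change
    }
    var newState = channel.getConnectivityState(false);
    console.log('channel state changed to', newState); // numeric connectivityState enum value
    watchState(newState);
  });
}

watchState(channel.getConnectivityState(true)); // true = try to connect if idle

With "grpc.keepalive_time_ms" set, the client library itself sends the HTTP/2 PING frames, so a manual setInterval ping should not be needed to keep the transport alive.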

Using a KillSwitch in an akka http streaming request

I'm using Akka's HTTP client to make a connection to an infinitely streaming HTTP endpoint. I am having difficulty getting the client to close the upstream connection to the HTTP server.
Here's my code (StreamRequest().stream returns a Source[T, Any]. It's generated by Http().outgoingConnectionHttps and then a Flow[HttpResponse, T, NotUsed] to convert HttpResponse to a stream of T):
val (killSwitch, tFuture) = StreamRequest()
  .stream
  .takeWithin(timeToStreamFor)
  .take(toPull)
  .viaMat(KillSwitches.single)(Keep.right)
  .toMat(Sink.seq)(Keep.both)
  .run()
Then I have
tFuture.onComplete { _ =>
  info(s"Shutting down the connection")
  killSwitch.shutdown()
}
When I run the code I see the 'Shutting down the connection' log message but the server tells me that I'm still connected. It disconnects only when the JVM exits.
Any ideas what I'm doing wrong or what I should be doing differently here?
Thanks!
I suspect you should invoke Http().shutdownAllConnectionPools() when tFuture completes. The pool does not close connections, because they can be reused by different stream materialisations, so when the stream completes it does not close the pool. The shutdown you see in the log may be because the idle timeout has triggered for one of the connections.
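For illustration, a sketch of how that might look in the completion handler from the question; it assumes the implicit ActorSystem used to run the stream is in scope and uses the global ExecutionContext purely for the callbacks:

import akka.http.scaladsl.Http
import scala.concurrent.ExecutionContext.Implicits.global
import scala.util.{Failure, Success}

tFuture.onComplete { _ =>
  info(s"Shutting down the connection")
  killSwitch.shutdown()
  // Also tear down the cached host connection pools; otherwise the underlying
  // TCP connection can stay open for reuse until the JVM exits.
  Http().shutdownAllConnectionPools().onComplete {
    case Success(_)  => info("All connection pools shut down")
    case Failure(ex) => info(s"Pool shutdown failed: $ex")
  }
}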

Postfix handling Amazon SES Maximum Send Rate error

We have a Postfix server which we use to send emails. This server is used by many services. Thus, to use Amazon SES, I've integrated our Postfix server with the SES SMTP interface (following http://docs.aws.amazon.com/ses/latest/DeveloperGuide/postfix.html). The configuration is working fine and mails are getting delivered properly.
Now, there is a limit of 5 emails/sec imposed by SES, and it raises the error '454 Throttling failure: Maximum sending rate exceeded' if the limit is exceeded.
I'm a newbie to Postfix.
Kindly guide me to the Postfix configuration settings that make Postfix resend the mail when the error '454 Throttling failure: Maximum sending rate exceeded' occurs.
Also, how do I resend email when an occasional 'Connection timed out' error occurs with the relay server (Amazon SES)?
This is not the exact answer you asked for, but you can bypass the issue with it.
You can add these lines to the main.cf file:
default_destination_concurrency_limit=1
default_destination_rate_delay=10s
This will increase the delivery delay, but you won't get the error message.
You can also check this link to learn more about Postfix performance tuning.
I was wondering the same thing, so I tried it out on a fresh Postfix install. I found that no additional configuration was required: Postfix did indeed retry sending the messages about 5 minutes after the original throttling error was reported in the log file.

Service not available, closing transmission channel. The server response was: 4.4.2 Timeout while waiting for command

I'm trying to send a message, and we sometimes get this error:
Service not available, closing transmission channel. The server response was: 4.4.2 Timeout while waiting for command.
Does anyone know what to do about this? It only happens "sometimes" and apparently for no specific reason.
I saw many articles saying:
442 The server started to deliver the message but then the connection was broke (Source : http://www.sorkincomputer.net/SMTP%20errors.htm)
This is typically a server-side error (on the SMTP server you're delivering to) or a network connectivity error. There isn't anything you can do about it in your code; you would need to get the relevant IT staff involved to figure out why your connection is getting closed or interrupted.
