Completion queue configuration - grpc

Using an async service with a completion queue, is it possible to configure the size of the completion queue?
If not, are there any rules governing the completion queue lifecycle?
I'm using gRPC 1.13.1, and when I looked into the implementation I didn't find anything like that (neither in the server settings nor in the completion queue attributes).
Although the pending_tags setting for the GRPC_TRACE variable doesn't work on my version (I get the message: Unknown trace var: 'pending_tags'), I was able to display the size of the queue by enabling 'all' traces.
The question was already asked in this Google Groups thread: https://groups.google.com/forum/#!topic/grpc-io/LTxgMYBx0yk. Has anything changed since then?
Thank you

The CompletionQueue API does not expose the 'size' or the number of pending tags on the completion queue. As for the pending_tags trace, I believe it is a debug-only flag and only works if gRPC is compiled in debug mode.
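For what it's worth, this is roughly how that trace would be enabled from the command line (the server binary name is just a placeholder, and pending_tags only takes effect on a debug build):
GRPC_VERBOSITY=DEBUG GRPC_TRACE=pending_tags ./my_async_server   # debug build only
GRPC_VERBOSITY=DEBUG GRPC_TRACE=all ./my_async_server            # works on a release build, but very verbose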

Related

Asterisk AMI events sometimes missing

I have a Python service which connects to Asterisk via AMI and listens for events to detect when a call has begun.
This seems to work on most of the Asterisk servers I connect to. However, on a few of our servers we just don't see any of the AMI events (e.g. Newstate) when the call happens, though we do later see the Cdr event once the call has completed.
I've confirmed that this isn't specific to the library we're using to connect to AMI (py-Asterisk), because I see exactly the same thing when I connect manually, e.g.
$ openssl s_client -connect my-asterisk-server:5039
...
Asterisk Call Manager/2.10.3
Action: Login
Username: manager
Secret: ThisIsWhereITypedTheActualPassword
Response: Success
Message: Authentication accepted
Event: FullyBooted
Privilege: system,all
Status: Fully Booted
Action: Events
EventMask: on
Response: Success
Events: on
In the above block, I manually connected to AMI, logged in as the same administrator my Python code uses, and ensured that all events are turned on (though my Asterisk config should already be displaying all the events I care about by default).
After this point, on some of my Asterisk servers I see a series of expected events like Newstate, followed by an eventual Cdr event. On other servers, I see only the Cdr event, with nothing preceding it. This is completely consistent within each server, meaning a server either always sends all of the expected events or it never does.
I've checked the versions of Asterisk, the manager.conf config file, the extensions.conf dialplan, and the Asterisk console in verbose mode (i.e. connecting via asterisk -vvvr to the running process), and generally compared my config files to those of my actually-working Asterisk servers.
I'm stumped as to what could be causing this, or even what to try next. If it matters, here's what my manager.conf looks like:
[general]
tlsenable=yes
tlsbindaddr=10.0.0.123:5039
tlscertfile=/etc/pki/asterisk/ami.crt
tlsprivatekey=/etc/pki/asterisk/ami.key
[admin]
secret=TheActualPasswordIsOnThisLine
read=system,call,log,verbose,command,agent,user,originate,cdr
write=system,call,log,verbose,command,agent,user,originate,cdr
EDIT: After some further digging, it seems that the only events which are showing up are Cdr events, so even things like peer registration events don't show up. I've also confirmed that all of my Asterisk 13.19.0-1 servers are exhibiting this behavior, and the only working servers are running much older versions of Asterisk.
The weird thing is that calls do come through successfully, so the problem is not that I'm missing some necessary module. (Or maybe I am? Is there some "make events show up in AMI" module that I need to ensure is loaded?)
FURTHER EDIT: I was able to turn on CEL events (Channel Event Logging), and those events show up, but this is a different set of events than the standard Newchannel/Newstate/etc I'm looking for. In theory I could rewrite a large portion of my service to use the CEL events, but ideally I'd just turn on the standard Newstate/Newchannel/Hangup events.
It turns out the issue is that I was missing
enabled=yes
in my manager.conf. Hopefully this will be helpful to someone in the future: even if you're able to connect to AMI, log into AMI, receive some events from AMI, and send commands to AMI and get responses back, it might not be enabled, and this will suppress most of the core events like Newchannel, Newstate, etc.
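For reference, here's the [general] section from the manager.conf above with the missing line added:
[general]
enabled=yes               ; this was the missing line
tlsenable=yes
tlsbindaddr=10.0.0.123:5039
tlscertfile=/etc/pki/asterisk/ami.crt
tlsprivatekey=/etc/pki/asterisk/ami.key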
One clue was that running manager show settings in the asterisk console returned this:
Global Settings:
----------------
Manager (AMI) No
...
which is apparently how it indicates that it's not enabled.

grpc client completion queue not shutting down

My code performs the following:
1) Create a gRPC channel
2) Start monitoring the completion queue in a different thread
3) Issue Shutdown on the completion queue
After executing step 3, I expect cq.Next(&tag, &ok) to return false, as there are no pending events after the above three steps. But I observe that cq.Next(&tag, &ok) never returns false. Please let me know if I am missing something.
Thanks,
Ikshu
In order to get channel state notifications, a tag was being added to the queue, and that always posted new events, so cq.Next() never returned false. I fixed this by achieving the same functionality with the existing standard API for channel state, so I'm closing the bug.
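For anyone hitting the same symptom, here's a minimal sketch of the usual shutdown/drain pattern (assuming nothing else keeps re-adding tags, such as the channel-state notification described above):
#include <grpcpp/grpcpp.h>
#include <thread>

void DrainCompletionQueue(grpc::CompletionQueue& cq) {
  void* tag;
  bool ok;
  // Next() keeps returning true until every pending tag has been delivered;
  // only after Shutdown() *and* a fully drained queue does it return false.
  while (cq.Next(&tag, &ok)) {
    // process or discard the drained tag
  }
}

int main() {
  grpc::CompletionQueue cq;
  std::thread poller(DrainCompletionQueue, std::ref(cq));
  // ... start async operations that post tags to cq ...
  cq.Shutdown();   // no new work may be added after this point
  poller.join();   // joins once Next() has returned false
  return 0;
}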

propagate OpenTracing TraceIds from publisher to consumer using MassTransit.RabbitMQ

Using MassTransit.RabbitMQ v5.3.2 and OpenTracing.Contrib.NetCore v0.5.0.
I'm able to publish and consume events to RabbitMQ using MassTransit, and I've got OpenTracing working with Jaeger, but I haven't managed to get my OpenTracing TraceIds propagated from my message publisher to my message consumer - the publisher and consumer traces have different TraceIds.
I've configured MassTransit with the following filter:
cfg.UseDiagnosticsActivity(new DiagnosticListener("test"));
I'm not actually sure what the listener name should be, hence "test". The documentation doesn't have an example for OpenTracing. Anyway, this adds a 'Publishing Message' span to the active trace on the publish side, and automatically sets up a 'Consuming Message' trace on the consumer side; however, they're separate traces. How would I go about consolidating this into a single trace?
I could set a TraceId header using:
cfg.ConfigureSend(configurator => configurator.UseExecute(context => context.Headers.Set("TraceId", GlobalTracer.Instance.ActiveSpan.Context.TraceId)))
but then how would I configure my message consumer so that this is the root TraceId? Interested to see how I might do this, or if there's a different approach...
Thanks!
If anyone is interested: I ended up solving this by creating publish and consume MassTransit middleware that does the trace propagation via trace injection and extraction, respectively.
I've put the solution up on GitHub - https://github.com/yesmarket/MassTransit.OpenTracing
Still interested to hear if there's a better way of doing this...
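For anyone curious, the core idea looks roughly like this (a simplified sketch rather than the exact code in the repo; MyMessage and the span name are placeholders):
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using MassTransit;
using OpenTracing.Propagation;
using OpenTracing.Util;

public class MyMessage { public string Text { get; set; } }

public class MyMessageConsumer : IConsumer<MyMessage>
{
    // Consume side: rebuild the parent span context from the message headers
    public Task Consume(ConsumeContext<MyMessage> context)
    {
        var tracer = GlobalTracer.Instance;
        var carrier = context.Headers.GetAll()
            .ToDictionary(h => h.Key, h => h.Value?.ToString());
        var parentContext = tracer.Extract(BuiltinFormats.TextMap, new TextMapExtractAdapter(carrier));
        using (tracer.BuildSpan("Consuming Message").AsChildOf(parentContext).StartActive(true))
        {
            // handle the message here
            return Task.CompletedTask;
        }
    }
}

// Send side, registered inside the bus factory configuration (cfg):
// copy the active span context into the outgoing message headers
cfg.ConfigureSend(send => send.UseExecute(context =>
{
    var tracer = GlobalTracer.Instance;
    if (tracer.ActiveSpan == null) return;
    var carrier = new Dictionary<string, string>();
    tracer.Inject(tracer.ActiveSpan.Context, BuiltinFormats.TextMap, new TextMapInjectAdapter(carrier));
    foreach (var pair in carrier)
        context.Headers.Set(pair.Key, pair.Value);
}));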

Hitting 100 active connections limit in test env with only two users

I have a single web client and a few Lambda functions which use the Admin SDK. I've noticed recently that I've bumped into the 100 simultaneous connection limit, but I really shouldn't be anywhere near that limit. Also, it would appear that the connections established by my Lambda functions are not dropping off even after the function has completed.
Any idea on:
how I can prevent this run-up on connections from happening?
how I can release connections established by past Lambda scripts?
how I can monitor which processes/threads/stacks are holding connections?
Note: this is a testing environment I'm working in, so I'd prefer to keep this in the free tier; my workload should definitely not be running into the 100 active-connection limit. I am on a paid plan in prod.
I attempt to avoid calling initializeApp more than once by using the following connection code. In the example I'm talking about I only have a single database as a backend and so the default "name" of DEFAULT is used each time.
const runningApps = new Set(firebase.apps.map(i => i.name));
this.app = runningApps.has(name)
  ? firebase.app()
  : firebase.initializeApp({
      credential: firebase.credential.cert(serviceAccount),
      databaseURL: config.databaseUrl
    });
I'm now trying to explicitly close connections with goOffline, but that leads to another issue: on the second connection -- i.e., where the DEFAULT app is already set up and it just reuses the connection already established -- I get the following logging:
# Generated as result of `goOnline`
Connecting to Firebase: [https://xyz.firebaseio.com]
appears to be already connected
# Listening on ".info/connected" comes back as true, resulting in:
AbstractedAdmin: connected to [DEFAULT]
# but then I get this error
NotAllowed: You must first connect before using the database() API at Object._getFirebaseType
The fact that you have unexpected incoming connections to the database makes it seem like the stale instances keep an open connection.
The best I can think of is to call goOffline() in your function before it completes, to explicitly disconnect. That would probably also mean you have to call goOnline() at the start of the function, since it might be running on an instance that previously went offline. Both goOnline and goOffline are synchronous calls afaik, but there's definitely going to be some time between going online and the data becoming available in your app.
If Lambda has a way for you to detect life-cycle events of its instances, that would be the preferred place to call goOffline and goOnline.
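A rough sketch of what that might look like in a Lambda handler (the database path and payload are just placeholders; initializeApp still happens once, outside the handler, as in the snippet above):
const admin = require('firebase-admin');
// admin.initializeApp(...) runs once at module load, outside the handler

exports.handler = async (event) => {
  const db = admin.database();
  db.goOnline();                  // reconnect in case this instance went offline earlier
  try {
    await db.ref('some/path').set({ handledAt: Date.now() });
  } finally {
    db.goOffline();               // explicitly drop the connection before the function completes
  }
};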
admin.initializeApp should only get called once in your script/node app.
The Firebase SDKs talk HTTP/2 to the Firebase cloud system, so I'm not sure why you would encounter max-connection issues, as unique sockets are not stood up per call.
One thing to look out for is that calls to third-party APIs (such as SendGrid) are not supported on the free tier.

Could not create internal topics - Stream-thread exception

I am trying to execute a simple WordCount streams application, but I get the error "Could not create internal topics - Stream-thread exception".
I have seen a similar thread but that seems to be more of a network issue.
There is no security enabled on the Kafka broker.
Only one broker is configured, and I still hit this issue.
Can someone let me know how to fix this?
Clean up your temporary Kafka topics.
Run the --list command against Kafka to see all the topics starting with your application name and ending with -changelog or -repartition, and manually run a delete on them (see the example commands below).
This one worked for me.
Also, check your delete.topic.enable setting to make sure the deletion actually happens. It was not enabled by default until 1.0.0 - see https://issues.apache.org/jira/browse/KAFKA-5384
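For example (the topic name is a placeholder for whatever your application produced, and older brokers take --zookeeper rather than --bootstrap-server):
# list all topics and look for ones ending in -changelog or -repartition
bin/kafka-topics.sh --zookeeper localhost:2181 --list
# delete one of them (requires delete.topic.enable=true on the broker)
bin/kafka-topics.sh --zookeeper localhost:2181 --delete --topic wordcount-app-counts-store-changelog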
I connected to Kafka using Kafka Tool and deleted them manually.

Resources