How to generate events on serial control lines in Twisted

Is it possible to get an 'event' callback within Twisted when DCD or CTS lines change state?
Currently my app uses twisted.internet.serialport.SerialPort and a LoopingCall() to check for changes in these lines once a second. This works, but it isn't very "Twisted".
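For reference, a minimal sketch of the polling I'm doing (simplified here to a bare pyserial Serial object rather than the SerialPort wrapper; pyserial 3.x exposes the modem line states as the .cd and .cts properties, and the port name and interval are illustrative):

from serial import Serial
from twisted.internet import reactor
from twisted.internet.task import LoopingCall

ser = Serial('/dev/ttyS0')  # illustrative port name
last = {'dcd': ser.cd, 'cts': ser.cts}

def poll_control_lines():
    # Compare the current DCD/CTS state against the previous poll
    # and report any transitions.
    state = {'dcd': ser.cd, 'cts': ser.cts}
    for line, value in state.items():
        if value != last[line]:
            print('%s changed to %r' % (line, value))
    last.update(state)

LoopingCall(poll_control_lines).start(1.0)  # once a second
reactor.run()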
Thanks,

Related

How to create a Zabbix problem whenever a Cisco switch interface utilizes more than 80 Mbps (80% of its bandwidth)

I'm trying to create a trigger in Zabbix which will show me a problem and alert me by email whenever an interface on a Cisco switch (with SNMPv2) crosses 80% of its bandwidth (100 Mbps or 1000 Mbps), without hardcoding anything. I tried using this trigger expression:
{/switch name:net.if.out[ifHCOutOctets./switch interface].min(10)}>80000000
I would like to know how I can write this trigger expression so that it works without my having to apply it to every single interface item on every switch. I think that macros could help in these situations, but I found no explanation or guide about how to use them, or how to use low level discovery, which may be part of the solution for my need.
Thanks in advance.
You are correct: you want to use Low Level Discovery (LLD) to do this, as well as to discover all your interfaces. Low level discovery, at a high level, consists of two things. 1) You have to tell Zabbix how to go discover a bunch of dynamic things and assign an LLD macro to each of them; that is done at the top level of the Discovery rule. 2) You have to tell Zabbix what item prototypes, trigger prototypes, etc. to dynamically create as actual items and triggers every time the discovery rule runs.
Take a look at the Arista SNMPv2 template included with Zabbix as an example. There are a number of Discovery Rules included in that template, one of which is the Network Interfaces discovery rule. Within it, Zabbix basically does an SNMP walk, gets a list of all the interfaces, and assigns LLD (low level discovery) macros such as {#IFINDEX}, {#IFSTATUS}, etc. to each interface. The rule, like all LLD rules, then takes that output and uses it to dynamically create actual items on each host the template is applied to.
The next part to understand is the prototypes. Once Zabbix finds all the network interfaces, your question becomes: how do I get it to create new items on my host for each interface it finds, and how do I get it to create triggers for each interface it finds, dynamically, automatically and without user intervention? The answer is prototypes. Prototypes are child elements of a Low Level Discovery rule. They are what actually creates the new items and triggers for everything it discovered.
Take a look here for some examples and docs on low level discovery rules.
https://www.zabbix.com/documentation/4.2/manual/discovery/low_level_discovery#trigger_prototypes
Zabbix can create LLD rules via numerous discovery methods, including SNMP, all of which can be configured in the UI or the API; further custom discovery rules can be added through the use of user parameters, external checks, etc.
If your make and model of switch is already known to Zabbix, a template will already exist under "Templates/Network Devices" (at least I think that's the path), just like the Arista and Juniper ones.
You can create custom low level discovery rules as well for non-SNMP things. Basically, you write a script that finds the things you want to dynamically add to Zabbix and returns valid JSON output with the {#MACRONAMES} and values you want added. For example, a custom file system discovery rule (which shouldn't be needed, because those are already included if you're using the agent) would produce lines like the ones shown in the example in the official docs.
https://www.zabbix.com/documentation/4.2/manual/discovery/low_level_discovery#creating_custom_lld_rules
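For instance, a file system discovery script needs to emit JSON of roughly this shape (the macro names follow the agent's built-in vfs.fs.discovery key; the values are illustrative):

{
    "data": [
        { "{#FSNAME}": "/", "{#FSTYPE}": "rootfs" },
        { "{#FSNAME}": "/home", "{#FSTYPE}": "ext4" }
    ]
}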
In short, check whether a template already exists for your switch, with a discovery rule and the item prototypes to discover things the way you want. LLD basically allows Zabbix to walk a dynamic data structure from any source, as long as that data structure has a definition known to Zabbix, and you tell it which keys and values in the JSON you want to create as items, triggers, etc.
You should leverage the low level discovery feature of Zabbix.
After a standard setup you should have a "Template Module Interfaces SNMPv2", which is used by some other templates as a standard for interface discovery.
Within the template you will find a "Network Interfaces Discovery" discovery rule that runs every hour to:
query the target device for an interface list
create N items for each interface (bits in, bits out, speed, type etc), defined as Item Prototypes
create some triggers, defined as Trigger Prototypes
The speed item is the negotiated interface speed (i.e. 1000 for a gigabit switch port), which is your 100% limit.
You can add some definitions to this template to get the alert you need:
Create a calculated item prototype and set its formula to 100*(currentOutputBits/speed) (a concrete sketch follows this list)
Create a template macro to define your alert threshold, e.g. {$INTERFACE_OUTPUT_THRESHOLD}, and set it to 80
Create a trigger prototype that fires when the calculated item is greater than {$INTERFACE_OUTPUT_THRESHOLD} for N minutes
Optionally, do the same for the currentInputBits item
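As a sketch, the calculated item formula might look like this, using the template's interface item keys (the exact keys in your template version may differ):

100*last("net.if.out[ifHCOutOctets.{#SNMPINDEX}]")/last("net.if.speed[ifHighSpeed.{#SNMPINDEX}]")

and the trigger prototype might look like this, where net.if.util.out[{#SNMPINDEX}] is the hypothetical key you gave the calculated item:

{Template Module Interfaces SNMPv2:net.if.util.out[{#SNMPINDEX}].min(5m)}>{$INTERFACE_OUTPUT_THRESHOLD}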
This setup will create one additional item and one additional trigger for each physical or logical interface found on the target device: it makes sense to untick the "Create enabled" option on the trigger prototype and enable it only on specific ports of specific devices.
The threshold macro can be changed at template level, affecting any device linked to it, or at host level for a custom threshold.
Thanks for your replies. I checked the Template Module Interfaces SNMPv2 template again and saw that there is a trigger prototype that solves my question.
For anyone who wants to do the same with a switch:
Add to your switch (host) the "Template Module Interfaces SNMPv2" template.
Change the IF_UTIL_MAX macro to whatever value you want; the default is 90. (This is the macro responsible for the percentage of bandwidth that will trigger a problem. For example, if you change it to 60, then when a host's bandwidth utilization averages more than 60% on any interface for 15 minutes, a problem will be added to the Problems tab and the dashboard.)
If the time of 15 minutes isn't right for you, you can change it by going to Configuration -> Templates -> Template Module Interfaces SNMPv2 -> Discovery rules -> Network Interfaces Discovery -> Trigger prototypes, and searching for the trigger whose name contains "high bandwidth usage". In both the problem expression and the recovery expression, find the .avg() function and change the value inside it to whatever is right for you (for example, 15m = 15 minutes, 1s = 1 second, and so on).
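For example, the relevant fragment of such an expression looks something like this (illustrative; the full expression in your template will differ):

{Template Module Interfaces SNMPv2:net.if.out[ifHCOutOctets.{#SNMPINDEX}].avg(15m)}>...

Changing .avg(15m) to .avg(5m) would make the trigger react after 5 minutes of sustained usage instead of 15.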
I actually recommend cloning the trigger prototype, changing it, and then disabling the built-in one, instead of just changing the time inside it; this makes debugging errors easier in the long run. You can clone the trigger prototype by pressing Clone in the bottom left corner of the screen, then change its name and settings to whatever suits you best.
Press Add in the bottom left corner of the screen, and if you took my advice you also need to click the green "Yes" link of the built-in trigger in the trigger prototypes table to disable it.
You can also go ahead and try the other answers in this thread; I'm seriously thankful for them, but I don't have enough time to check whether they work, since I already figured it out after reading Simone Zabberoni's and helllordkb's answers and checking the built-in low level discovery in the "Template Module Interfaces SNMPv2" template.

How to respond to events caused by users differently from those caused by periodic callbacks?

More specifically, I have a slider (which I call clock) whose value I increase with a periodic callback:
from bokeh.models import Slider
from bokeh.plotting import curdoc

clock = Slider(start=0, end=1000, value=0, step=1, title='clock')

def increase_slider_value():
    clock.value = (clock.value + 1) % 1000

period_ms = 50
curdoc().add_periodic_callback(increase_slider_value, period_ms)
When the value of the clock changes the sources of some plots are updated:
clock.on_change('value', update_sources)
Updating the sources is expensive and fairly optimized, using patches and if statements for key values of the clock.
This works fine as long as the callback changes the clock. But if the user grabs the slider and moves it around, the patches no longer produce the desired sources, i.e. they create invalid states of the sources and rubbish plots.
The only half-elegant way I can see to fix this is to treat value changes that are caused by the user differently from value changes that are caused by the callback. User-caused updates would require a new computation of the sources while the periodic callback would continue to utilize patches.
A) Does this make sense to you / do you approve?
B) How can I get clock.on_change(...) to distinguish the event that triggered the change? Are there "slider drag" events?
There isn't any way to distinguish the events directly. I would suggest you use the slightly hacky but serviceable approach described in Throttling in Bokeh application. This serves two purposes:
You can specify a different callback on the "fake" data source than the clock.on_change(...) one.
You can actually throttle user updates from the slider to only "finished" states so the expensive callback does not get executed unnecessarily often.
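A sketch of that approach for Bokeh versions of this era, which support callback_policy="mouseup" and a CustomJS callback on the slider (rebuild_sources is a hypothetical stand-in for your expensive full recomputation):

from bokeh.models import ColumnDataSource, CustomJS, Slider
from bokeh.plotting import curdoc

clock = Slider(start=0, end=1000, value=0, step=1, title='clock')

# "Fake" data source: its only job is to relay user-driven slider changes.
user_source = ColumnDataSource(data=dict(value=[0]))

# Runs in the browser; with callback_policy="mouseup" it fires only when
# the user releases the slider, not on every intermediate drag position.
clock.callback_policy = "mouseup"
clock.callback = CustomJS(args=dict(source=user_source),
                          code="source.data = {value: [cb_obj.value]};")

def on_user_change(attr, old, new):
    # Full (expensive) recomputation, only for user-driven changes.
    rebuild_sources(new['value'][0])  # hypothetical helper

user_source.on_change('data', on_user_change)

def increase_slider_value():
    # Periodic updates keep going through clock.on_change('value', ...),
    # which continues to use the cheap patch-based updates.
    clock.value = (clock.value + 1) % 1000

curdoc().add_periodic_callback(increase_slider_value, 50)

Note that a user drag still fires clock.on_change as well, so the cheap patching callback needs to tolerate (or detect and skip) such values; the point of the dummy source is that the expensive rebuild runs only once, on mouse-up.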

Is it OK for a DirectShow filter to seek the filters upstream from itself?

Normally, seek commands are executed on the filter graph: they are called on the renderers in the graph and passed upstream by each filter until a filter that can handle the seek performs the actual seek operation.
Could an individual filter seek the upstream filters connected to one or more of its input pins in the same way, without affecting the downstream portion of the graph in unexpected ways? I wouldn't expect any graph state changes to be caused by calling IMediaSeeking.SetPositions upstream.
I'm assuming that all upstream filters are connected to the rest of the graph via this filter only.
Obviously the filter would need to be prepared to handle the resulting BeginFlush, EndFlush and NewSegment calls coming from upstream appropriately, and to distinguish samples that arrived before and after the seek operation. It would also need to set new time stamps on its output samples so that they have consistent presentation times. Any other issues?
It is perfectly feasible to do what you require. I used this approach to build video and audio mixer filters for a video editor. A full description of the code is given in BBC White Papers 129 and 138, available from http://www.bbc.co.uk/rd
A rather ancient version of the code can be found on www.SourceForge.net if you search for AAFEditPack. The code is written in Delphi using DSPack to get access to the DirectShow headers. I did this because it makes it easier to handle COM object lifetimes, by implementing smart pointers by default. It should be fairly straightforward to transfer the ideas to a C++ implementation if that is what you use.
The filters keep lists of the sub-graphs (sections of a graph running in the same FilterGraph as the mixers). The filters implement a custom version of TBCPosPassThru which knows about the output pins of the sub-graph for each media clip, and which handles passing on the seek commands so that each clip is ready for replay when its point in the timeline is reached. The mixers handle the BeginFlush, EndFlush, NewSegment and EndOfStream calls for each sub-graph so they are kept happy. The editor uses only one FilterGraph, which houses both the video and audio graphs. Seeking commands are made by the graph on both the video and audio renderers, and these commands are passed upstream to the mixers, which implement them.
Sub-graphs that are not currently active are blocked by the mixer holding references to the samples they have delivered. This does not cause any problems for the FilterGraph because, as Roman R says, downstream filters only care about getting a consecutive stream of samples and do not know about what happens upstream.
Some key points you need to make sure of to avoid wasted debugging time are:
Your decoder filters need to be able to cue to the exact media frame or audio time. This is not as easy as you might expect, especially with compressed formats such as MPEG-2, which was designed for transmission and has no frame index in its files. If you do not do this, the filter may wait indefinitely for a NewSegment call with the correct media times.
Your sub-graphs need to present a NewSegment time equal to the value you asked for in your seek command before delivering samples. Some decoders seek to the nearest key frame, which is a bit unhelpful, and some are a bit arbitrary about the timings of their NewSegment and the samples that follow.
The start and stop times of each clip need to be within the duration of the file. It's probably not a good idea to police this in the DirectShow filter, because you would probably want to be able to construct a timeline without needing to run the filter first. I did this in the component that manages the FilterGraph.
If you want to add sections from the same source file consecutively in the timeline, with effects that span the transition, you need two instances of the sub-graph for that file; and if you have more than one transition for the same source file, your list needs to alternate the graphs for successive clips. This is because each sub-graph should only play monotonically: issuing lots of SetPositions calls would waste CPU cycles and would not work well with compressed files.
The filter's output pins define the entire seeking behaviour of the graph. The output sample time stamps (IMediaSample.SetTime) are set by the filter, so you need to get them correct, with no missing time stamps. You can also set the MediaTime (IMediaSample.SetMediaTime) values if you like, although you have to be careful to get them right or the graph may drop samples or stall.
Good luck with your development. If you need any more information please contact me through StackOverflow or DTSMedia.co.uk

Detect silence while playing sound

I am developing a Java/Asterisk application that calls subscribers to deliver messages. At some moments during the call, I need to monitor whether the subscriber is talking or silent. I need to monitor that for a fairly long time (1-3 seconds) but don't want to interrupt the flow of the outgoing message.
The way I am doing it now is as below:
streamFile(*file A*);                    // play the first message
exec("WaitForSilence", "300,1,1");       // detect 300 ms of silence, 1 iteration, 1 s timeout
waitStatus = getVariable("WAITSTATUS");  // "SILENCE" or "TIMEOUT"
streamFile(*file B*);                    // play the next message
This works fine, but it is only a 300 ms detect with a 1 s timeout, so from the subscriber's point of view the silence between file A and file B is almost unnoticeable. But if I want to listen for longer (say 3 seconds, for example), the subscriber's experience will be ruined.
What I would need is a function similar to "WaitForSilence" but that:
runs in parallel to the script;
delivers its outcome in a channel variable with a name that I define (as there might be several calls to the function, and I need to get all the results)
I've been looking for more than a week now and couldn't find a way to do that. Any ideas?
The code you provided will wait first, and only after that do the playback.
There is no simple way to do both in one application.
Possible ways:
1) Create a C/C++ application for that (Asterisk guru skill required).
2) Create another channel, mix it in with ChanSpy, and do the silence detection in that channel. Complexity: Asterisk expert.
Neither is short (more than 2-3 screens of code), so they can't be described on this site.
You can also try the Background application, but I am afraid it will not work either.

Asterisk TDM410

This is not a programming question per se. I am trying to build a system which consists of the following:
User calls system using regular land line
Some processing is done by asterisk
Call is forwarded to an external number (another landline/mobile phone)
Now I would like at least 2 simultaneous lines on which users can call. I would like to know the following:
Will the TDM410 work for what I am trying to achieve?
Since I want call forwarding, do I need an extra line for that, or can I do it on the same line? E.g. for one user, do I need one incoming line and one outgoing line, or can I do both receiving and forwarding on the same line?
I have both Asterisk books but am still unclear as to which card to purchase. Is the TDM410 with 4 FXO modules the right one? I am thinking of the TDM410 because it has 4 lines, so that I can use two for incoming and two for outgoing. Am I right? Can someone please point me to a link/online store?
Thank you very much for your time.
PS- I do not wish to use SIP. I want to use POTS for all my calls (incoming and outgoing)
The TDM410 with 4 FXO modules will work for what you need.
When you receive a call on one line, that line is busy, and you must use a different line to forward the call (Asterisk will bridge the calls, but it needs 2 lines for that). You can buy the cards directly from Digium.
You need at least 3 landlines if you want to be able to receive calls on 2 lines; the third line will be the one you use to forward the calls, but only one call can be forwarded at a time and the other caller will have to wait. If you don't want that, then you need 4 landlines.