I am new to Asterisk and am working on it.
How can I capture the reverse signal in Asterisk?
Currently I receive the disposition NA (No Answer Auto Dial) for all unreached calls, where I should receive one of the following:
No Answer,
Busy,
Not Reachable,
Switched Off.
Currently using VICIdial 2.2.1.
VICIdial uses app_amd (the AMD application):
Synopsis
Attempts to detect answering machines.
Description
AMD([initialSilence][|greeting][|afterGreetingSilence][|totalAnalysisTime][|minimumWordLength][|betweenWordsSilence][|maximumNumberOfWords][|silenceThreshold])
This application attempts to detect answering machines at the beginning of outbound calls.
Simply call this application after the call has been answered (outbound only, of course).
When loaded, AMD reads amd.conf and uses the parameters specified as default values.
Those default values get overwritten when calling AMD with parameters.
'initialSilence' is the maximum silence duration before the greeting. If exceeded then MACHINE.
'greeting' is the maximum length of a greeting. If exceeded then MACHINE.
'afterGreetingSilence' is the silence after detecting a greeting. If exceeded then HUMAN.
'totalAnalysisTime' is the maximum time allowed for the algorithm to decide on a HUMAN or MACHINE.
'minimumWordLength' is the minimum duration of voice to be considered a word.
'betweenWordsSilence' is the minimum duration of silence after a word for the audio that follows to be considered a new word.
'maximumNumberOfWords' is the maximum number of words in the greeting. If exceeded then MACHINE.
'silenceThreshold' is the silence threshold.
This application sets the following channel variables upon completion:
AMDSTATUS - This is the status of the answering machine detection.
Possible values are:
MACHINE | HUMAN | NOTSURE | HANGUP
AMDCAUSE - Indicates the cause that led to the conclusion.
Possible values are:
TOOLONG-<%d total_time>
INITIALSILENCE-<%d silenceDuration>-<%d initialSilence>
HUMAN-<%d silenceDuration>-<%d afterGreetingSilence>
MAXWORDS-<%d wordsCount>-<%d maximumNumberOfWords>
LONGGREETING-<%d voiceDuration>-<%d greeting>
http://www.voip-info.org/wiki/view/Asterisk+cmd+AMD
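For reference, a minimal dialplan sketch of how AMD is typically called and how its result is checked; the context name and parameter values here are illustrative, not VICIdial's actual settings:

[outbound-amd]
exten => s,1,Answer()
; parameter order as documented above; values are examples only
exten => s,n,AMD(2500|1500|300|5000|100|50|3|256)
exten => s,n,GotoIf($["${AMDSTATUS}" = "MACHINE"]?machine:human)
exten => s,n(machine),NoOp(Machine detected, cause: ${AMDCAUSE})
exten => s,n,Hangup()
exten => s,n(human),NoOp(Human detected, cause: ${AMDCAUSE})

Note that the dispositions asked about above (Busy, Not Reachable, Switched Off) normally come back from the network as hangup cause codes rather than from AMD, so checking ${HANGUPCAUSE} (e.g. 17 for user busy) alongside AMDSTATUS is usually where such statuses are derived.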
Related
I am designing software that controls several serial ports; the operating system is OpenWrt. The device the application runs on is a single-core ARM9 @ 450 MHz. The protocol on the serial ports is Modbus.
The problem is with the Modbus slave implementation. I designed it in a real-time manner, looping on reads from the serial port (the port is opened in non-blocking mode, with 0 characters to wait for and no timeouts). The inter-frame timeout is about 4 milliseconds @ 9600/8/N/1 (3.5 character times, as the spec advises). The timeout is only checked when the application sees nothing in the buffer, so if the application is slower than the incoming character stream, the timeout mechanism never fires until all characters have been drained from the buffer and the bus is quiet.
But I see that the CPU switches between threads and this thread gets starved for about 40-60 milliseconds, which is far too long for measuring these timeouts. While I assume the serial port buffer still receives the data (how large is that buffer?), I am unable to assess how much time passed between characters, so I may treat the next message as a continuation of the previous one and miss a Modbus request targeted at my device.
Therefore, I guess, something must be redesigned (for the slave; the master is a different story). The first idea that comes to mind is to forget about timeouts and just parse the incoming data after synchronizing with the stream (by finding an initial gap). However, there are several problems: I must know everything about all types of Modbus messages to parse them correctly, to find where each ends and the next starts, and I see no way to differentiate a Modbus request from a Modbus response sent by another device. If the designers of the Modbus protocol had put a special bit in the function-code field identifying whether a message is a request or a response... but that is not the case, and I see no reliable way to tell whether an incoming message is a request or a response without reading the following bytes and checking the CRC16 at the possible frame lengths; that costs time while bytes keep arriving, and I may miss the window to respond to a request targeted at me.
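For what it's worth, sizing the common request types is mechanical; here is a sketch of a length lookup (function codes and lengths are from the public Modbus spec, the function name is mine). As noted, this does not resolve the request-vs-response ambiguity when other slaves' responses share the wire:

/* Sketch: expected RTU request frame length for common function codes.
 * Caller must have at least 7 bytes buffered before frame[6] is valid.
 * Returns 0 for unknown codes, where gap detection must decide. */
static int rtu_request_len(const unsigned char *frame)
{
    switch (frame[1]) {            /* function code follows the slave address */
    case 0x01: case 0x02:          /* read coils / discrete inputs */
    case 0x03: case 0x04:          /* read holding / input registers */
    case 0x05: case 0x06:          /* write single coil / single register */
        return 8;                  /* addr + fc + 4 data bytes + CRC16 */
    case 0x0F: case 0x10:          /* write multiple coils / registers */
        return 9 + frame[6];       /* fixed header + byte count + data + CRC16 */
    default:
        return 0;
    }
}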
Another idea would be to use a blocking read with the VTIME timeout setting, but VTIME can only be set in tenths of a second (so the minimum is 100 ms), which is too much given another +50 ms of possible thread-scheduling latency; I think something like a 10 ms timeout is needed here. It is also a good question whether VTIME is kept in hardware, or in software/the driver and therefore also subject to scheduling interruptions; and how many bytes the FIFO in the chip/driver can accumulate.
Any help/idea/advice will be greatly appreciated.
Update per @sawdust's comments:
non-blocking use turned out to be a poor approach, because I am not polling the hardware, I am polling software, so the whole design is again subject to CPU switching and execution scheduling;
"using the built-in event-based blocking mode" would work in blocking mode if I were able to configure the UART timeout (VTIME) in 100 µs steps, and if I could also be sure I am not preempted during execution; only then would I be able to get the timing right and restart the read in time to assess the next character's arrival properly.
Hi all,
Recently I have been debugging a problem on a Unix system using the command
netstat -s
and I got output containing:
$ netstat -s
// other fields
// other fields
TCPBacklogDrop: 368504
// other fields
// other fields
I have searched for a while to understand what this field means, and found mainly two different answers:
1. This means that your TCP data receive buffer is full, and some packets have overflowed.
2. This means your TCP accept queue is full, and some connection attempts have been dropped.
Which one is correct? Is there any official document to support it?
Interpretation #2 refers to the queue of sockets waiting to be accepted, presumably because its size is set (more or less) by the backlog argument passed to listen(). This interpretation, however, is not correct.
To understand why interpretation #1 is correct (although incomplete), we will need to consult the source. First note that the string "TCPBacklogDrop" is associated with the Linux identifier LINUX_MIB_TCPBACKLOGDROP (see, e.g., this). It is incremented here in tcp_add_backlog.
Roughly speaking, there are 3 queues associated with the receive side of an established TCP socket. If the application is blocked on a read when a packet arrives, it will generally be sent to the prequeue for processing in user space in the application process. If it can't be put on the prequeue, and the socket is not locked, it will be placed in the receive queue. However, if the socket is locked, it will be placed in the backlog queue for subsequent processing.
If you follow through the code you will see that the call to sk_add_backlog called from tcp_add_backlog will return -ENOBUFS if the receive queue is full (including that which is in the backlog queue) and the packet will be dropped and the counter incremented. I say this interpretation is incomplete because this is not the only place where a packet could be dropped when the "receive queue" is full (which we now understand to be not as straightforward as a single queue).
I wouldn't expect such drops to be frequent and/or problematic under normal operating conditions, as the sender's TCP stack should honor the receiver's advertised window and not send data exceeding the capacity of the receive queue (with the exception of zero-window probes, and of older kernel versions whose calculations could cause drops when the receive window was not actually full). If it is somehow indicative of a problem, I would start worrying about malicious clients (some form of DDoS, maybe) or some failure causing a socket's lock to be held for an extended period of time.
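For completeness: netstat -s reads these counters from /proc/net/netstat, where TCPBacklogDrop appears on the TcpExt lines. If you want to watch the counter directly, a minimal sketch such as this will do (the parsing is deliberately naive):

#include <stdio.h>
#include <string.h>

int main(void)
{
    /* /proc/net/netstat holds pairs of lines: a header line of field
     * names followed by a line with the corresponding values. */
    FILE *f = fopen("/proc/net/netstat", "r");
    char names[4096], values[4096];

    if (!f)
        return 1;
    while (fgets(names, sizeof names, f) && fgets(values, sizeof values, f)) {
        char *sn, *sv;
        char *n = strtok_r(names, " \n", &sn);
        char *v = strtok_r(values, " \n", &sv);
        while (n && v) {
            if (strcmp(n, "TCPBacklogDrop") == 0)
                printf("TCPBacklogDrop = %s\n", v);
            n = strtok_r(NULL, " \n", &sn);
            v = strtok_r(NULL, " \n", &sv);
        }
    }
    fclose(f);
    return 0;
}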
I am developing an application on the STM32 SPBTLE-1S module (BLE 4.2). The module connects to a Raspberry Pi.
When the connection quality is low, a disconnection will sometimes occur with error code 0x28 (Reason: Instant Passed) before the connection timeout is reached.
Current connection settings are:
Conn_Interval_Min: 10 (12.5 ms, in 1.25 ms units)
Conn_Interval_Max: 20 (25 ms)
Slave_latency: 5
Timeout_Multiplier: 3200 (32 s, in 10 ms units)
Reading more about this type of error, it seems to happen when "an LMP PDU or LL PDU that includes an instant cannot be performed because the instant when this would have occurred has passed." These packets are typically used for frequency hopping (channel map updates) or for connection parameter updates. In my case, they must be frequency-hopping packets.
Any idea on how to prevent these disconnections caused by "Instant Passed" errors? Or are they simply a consequence of the BLE technology?
Your question sounds similar to this one
In a nutshell, there are only two possible link-layer requests that can result in this type of disconnect (referred to as LL_CONNECTION_UPDATE_IND and LL_CHANNEL_MAP_IND in the latest v5.2 Bluetooth Core Spec).
If you have access to the low-level firmware for the Bluetooth stack on the embedded device, what I've done in the past is increase how many events in the future the switch "instant" is placed, so there is more time for the packet to get through in a noisy environment.
Otherwise, the best you can do is limit how often you change connection parameters, to make that style of disconnect less likely to occur. (The disconnect could still be triggered by a channel map change, but I haven't seen many BLE stacks expose much configuration over when those take place.)
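For illustration, the spec's rule for deciding that an instant has passed boils down to modulo-65536 arithmetic on the connection event counter; a sketch (the function name is mine, not from any particular stack):

#include <stdbool.h>
#include <stdint.h>

/* Core Spec rule: an instant is "in the past" when the modulo-65536
 * distance from the current connection event counter to the instant
 * is 32767 events or more. */
static bool instant_passed(uint16_t instant, uint16_t conn_event_counter)
{
    return (uint16_t)(instant - conn_event_counter) >= 32767;
}

A peer that keeps missing the update PDU in a noisy environment until this condition becomes true tears the link down with reason 0x28, which is why pushing the instant further into the future gives the packet more chances to get through.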
In talking to a MODBUS device, is there an upper bound on how long a device can take to respond before it's considered a timeout? I'm trying to work out what to set my read timeout to. Answers for both MODBUS RTU and TCP would be great.
In the MODBUS over serial line specification and implementation guide V1.0, section 2.5.2.1 (MODBUS Message ASCII Framing) suggests that inter-character delays of up to 5 seconds are reasonable in slow WAN configurations.
Section 2.6 (Error Checking Methods) indicates that timeouts are to be configured, without specifying any values.
The current Modicon Modbus Protocol Reference Guide PI–MBUS–300 Rev. J also provides no quantitative suggestions for these settings.
Your application's time sensitivity, along with the constraints your network imposes, will largely determine your choices.
If you identify the worst-case delay you can tolerate, take half that time to allow a single retransmission to fail, and subtract a reasonable transmission time for a message of maximal length, you should have a good candidate for a timeout. This lets you recover from a single error while not reporting errors unnecessarily often.
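As a worked example with made-up numbers: if you can tolerate 2 seconds end to end and the longest frame is 256 bytes at 9600/8/N/1, half the budget is 1 second per attempt, and the frame takes roughly 0.3 s on the wire, leaving a timeout of around 700 ms. In code form:

/* Sketch: derive a reply timeout from a worst-case budget.
 * All numbers are illustrative, not recommendations. */
double reply_timeout_s(double worst_case_s, int max_frame_bytes, long baud)
{
    double tx_time_s = max_frame_bytes * 11.0 / baud; /* 11 bits/char on RTU (8E1 or 8N2) */
    return worst_case_s / 2.0 - tx_time_s;            /* leave room for one retry */
}

/* Example: reply_timeout_s(2.0, 256, 9600) = 1.0 - 0.293 = ~0.71 s */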
Of course, the real problem is, what to do when the error occurs. Is it likely to be a transient problem, or is it the result of a permanent fault that requires attention?
Alexandre Vinçon's comment about ACKNOWLEDGE messages is also relevant. It may be that your device does not implement them, and the extended delays are intended.
The specification does not mention a particular value for the timeout, because it is not possible to normalize a timeout value for a wide range of MODBUS slaves.
However, it is a good assumption that you should receive a reply within a few hundred milliseconds.
I usually set my timeouts to 1 second with RTU and 500 ms with TCP.
Also, if the device takes a long time to reply, it is supposed to return an ACKNOWLEDGE message (exception code 05) to prevent the expiration of the timeout.
I have an app that is connected to a balance through the serial port. The balance is quite large, and pressing the PRINT button is not an option, so my app asks the balance to print programmatically upon a certain user action. The balance's interface allows it and defines a print command. Everything works for a while; then, after weighing a few items, the balance starts outputting the previous weight. I am baffled at this point, since only a few commands are defined and there are not many options for what can be done. I am already flushing the OUT buffer each time, so I don't know why it keeps giving me the old value.
Here is my code:
if (askedToPrint)
{
    _sp.DiscardOutBuffer();
    //ask the balance to print
    _sp.Write("P\r\n");
}
_sp is a SerialPort object.
I am using WinCE 6.0 and Compact Framework 2.0/C#.
if you are reading data from serial port using Readline() or Read() then there is a possibility that balance have sent multiple packets queued. So before reading you have to discard already pending packets. other way around is before writing request to print use ReadExisting() method to read all available data. So after sending command if your balance still sending old packets then there might be a problem with balance.