How do you write an Asterisk dialplan to suspend 180 Ringing until after 183 Progress in chan_sip?

I'm trying to write an Asterisk dialplan that suspends 180 Ringing until after 183 Progress. What needs to happen in the dialplan to prevent the premature 180?
Below you can see the trace: the chan_sip driver sends "180 Ringing" and then immediately relays a 503 when the upstream trunk (next hop) sends one, and that upsets the carrier. They don't want to see ringing unless progress can actually be made, so I want to suppress sending "180 Ringing" until after the upstream trunk sends a "183 Progress".
I looked at the Dial(SIP/117/X) function reference/flags but didn't see anything useful. This is chan_sip, but I would be open to chan_pjsip dialplan function calls or branch tests as well.
Ideas?
(FYI: this is a dialplan programming question, not a configuration question, so I think it belongs here on Stack Overflow rather than Super User, according to the SE asterisk tag rules.)
22:29:02.539741 IP (tos 0x68, ttl 57, id 36866, offset 0, flags [none], proto UDP (17), length 1405)
10.0.0.101.5062 > 10.0.0.201.5060: SIP, length: 1377
"INVITE sip:117#10.0.0.201 SIP/2.0"
22:29:02.541085 IP (tos 0x60, ttl 64, id 22099, offset 0, flags [none], proto UDP (17), length 538)
10.0.0.201.5060 > 10.0.0.101.5062: SIP, length: 510
"SIP/2.0 100 Trying"
22:29:02.546553 IP (tos 0x60, ttl 64, id 22102, offset 0, flags [none], proto UDP (17), length 554)
10.0.0.201.5060 > 10.0.0.101.5062: SIP, length: 526
"SIP/2.0 180 Ringing" # but this is too early!
22:29:02.770970 IP (tos 0x60, ttl 64, id 22111, offset 0, flags [none], proto UDP (17), length 526)
10.0.0.201.5060 > 10.0.0.101.5062: SIP, length: 498
"SIP/2.0 503 Service Unavailable"

Asterisk is a PBX and is not designed to give you channel-level control over call answering and progress; there are no hooks for this in chan_sip either, simply because chan_sip's design makes it nearly impossible.
You can use Kamailio or OpenSIPS (proxy type), which give you 100% control over SIP packets.
Otherwise, you can try playing with the following options (and that's all):
;prematuremedia=no    ; Some ISDN links send empty media frames before
                      ; the call is in ringing or progress state. The SIP
                      ; channel will then send 183 indicating early media
                      ; which will be empty - thus users get no ring signal.
                      ; Setting this to "yes" will stop any media before we
                      ; have call progress (meaning the SIP channel will not
                      ; send 183 Session Progress for early media). Default
                      ; is "yes". Also make sure that the SIP peer is
                      ; configured with progressinband=never.
                      ;
                      ; In order for "noanswer" applications to work, you
                      ; need to run the progress() application in the
                      ; priority before the app.
;progressinband=no    ; If we should generate in-band ringing. Always use
                      ; 'never' to never use in-band signalling, even in
                      ; cases where some buggy devices might not render it.
                      ; Valid values: yes, no, never. Default: no.
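For example, a minimal sip.conf sketch with both options set the way the comments above recommend (the peer section name and host are hypothetical; treat this as a sketch, not a drop-in config):

[upstream-trunk]        ; hypothetical peer name
type=peer
host=203.0.113.10       ; hypothetical upstream trunk address
prematuremedia=yes      ; no media before real call progress
progressinband=never    ; never generate in-band ringing ourselves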
Another option is just to rewrite chan_sip and hardcode the behaviour you need.

Related

How to stop tcpdump from truncating the HTTP payload?

I'm trying to do a capture and am seeing half the payload, with '[!http]' halfway through. Is there a way to make it show the whole payload?
I'm performing a REST call and want to see what the server is receiving:
<value>0x100000</value>
</attribute>
</greater-than-or-equals>
<less-than-or-equals>
<attribute id="0x129fa">
[!http]
16:34:06.549662 IP (tos 0x0, ttl 255, id 49564, offset 0, flags [DF], proto TCP (6), length 1301)
I'm using:
tcpdump -vvv -i ens192 tcp port 8080 and src 192.168.1.1
Any help would be appreciated.
Many Thanks
Frank
It is quite likely that the problem has less to do with what you are capturing and more to do with the payload being larger than a single packet.
When you run tcpdump these days, the default is to capture packets at a snapshot length that at least matches the MTU of your interface. If you are unsure, you can override this by specifying a snapshot length of zero:
tcpdump -s 0 -w captureFile.cap
Again, this is likely not the problem here; it is more likely that the rest of the data is in the next TCP segment. Unfortunately, tcpdump is not the ideal tool for extracting session data. I would suggest you look at Wireshark (or tshark), which will let you easily select a packet and then reassemble the data stream with all of the IP and TCP headers removed.
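As a sketch (interface and filter taken from the question; the follow-stream syntax assumes a reasonably recent tshark), you could capture everything and reassemble the stream offline:

tcpdump -s 0 -i ens192 -w capture.pcap 'tcp port 8080 and src 192.168.1.1'
tshark -r capture.pcap -q -z follow,tcp,ascii,0   # reassemble TCP stream 0 as text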

TCP keep-alive gets involved after TCP zero-window and closes the connection erroneously

We're seeing this pattern happen a lot between two RHEL 6 boxes that are transferring data via a TCP connection. The client issues a TCP Window Full, 0.2s later the client sends TCP Keep-Alives, to which the server responds with what look like correctly shaped responses. The client is unsatisfied by this however and continues sending TCP Keep-Alives until it finally closes the connection with an RST nearly 9s later.
This is despite the RHEL boxes having the default TCP Keep-Alive configuration:
net.ipv4.tcp_keepalive_time = 7200
net.ipv4.tcp_keepalive_probes = 9
net.ipv4.tcp_keepalive_intvl = 75
...which declares that this should only start after 2 hours of silence. Am I reading my PCAP wrong (relevant packets available on request)?
Below is a Wireshark screenshot of the pattern, with my own packet notes in the middle.
Actually, these "keep-alive" packets are not used for TCP keep-alive at all! They are used to detect window-size updates.
Wireshark flags them as keep-alives simply because they look like keep-alive packets:
a TCP keep-alive packet is simply an ACK with the sequence number set to one less than the current sequence number for the connection.
(Assume that IP 10.120.67.113 is host A and 10.120.67.132 is host B.) In packet No. 249511, A ACKs seq 24507484. In the next packet (No. 249512), B sends seq 24507483 (24507484 - 1).
So why are there so many of these "keep-alive" packets, and what are they used for?
A sends data to B, and B advertises a zero window to tell A that it temporarily cannot receive any more data. To make sure A learns when B can receive again, A sends these keep-alive-lookalike probes to B again and again on the persist timer, and B replies with its current window size (in our case, B's window size has stayed at zero).
The persist timer uses the normal TCP exponential backoff, which is why A sends its first probe after 0.2 s, the second after 0.4 s, the third after 0.8 s, the fourth after 1.6 s, and so on.
This phenomenon is TCP flow control at work.
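You can reproduce this pattern yourself with a minimal sketch (loopback address and port are arbitrary choices): a receiver that accepts a connection but never reads, so its buffer fills and it advertises a zero window, while the sender keeps writing. Run tcpdump -i lo -nn 'tcp port 5000' alongside it and look for "win 0" ACKs followed by periodic window probes.

#!/usr/bin/env python3
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 5000   # arbitrary values for the sketch

def receiver():
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((HOST, PORT))
    srv.listen(1)
    conn, _ = srv.accept()
    time.sleep(30)        # never recv(): the kernel receive buffer fills
    conn.close()          # and the stack starts advertising a zero window

threading.Thread(target=receiver, daemon=True).start()
time.sleep(0.2)

snd = socket.socket()
snd.connect((HOST, PORT))
snd.settimeout(20)
try:
    while True:
        snd.sendall(b"x" * 65536)   # blocks once both socket buffers are full
except socket.timeout:
    print("send stalled: peer is advertising a zero window")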
The source and destination IP addresses in the packets originating from the client do not match the destination and source IP addresses in the response packets, which indicates that there is some device between the boxes doing NAT. It is also important to understand where the packets were captured; a packet capture on the client itself will probably help in understanding the issue.
Note that the client can generate a TCP keepalive if it does not receive a data packet for two hours or more. As per RFC 1122, the client retries the keepalive if it does not receive a keepalive response from the peer, and it eventually disconnects after continuous retry failures.
NAT devices typically implement connection caches to maintain the state of ongoing connections. If the cache reaches its size limit, the NAT device drops old connections in order to service new ones. This could also lead to such a scenario.
The given packet capture indicates a high probability that packets are not reaching the client, so it will be helpful to capture packets on the client machine.
I read the trace slightly differently:
Sender sends more data than the receiver can handle and gets a zero-window response.
Sender sends window probes (not keepalives; it is way too soon for that), and the application gives up after 10 seconds with no progress and closes the connection. The reset indicates there is still data pending in the TCP send buffer.
If the application uses a large block size when writing to the socket, it may have seen no progress for even longer than the 10 seconds visible in the tcpdump.
If this is a straight connection (no proxies etc.), the most likely reason is that the receiving app stopped receiving (or is slower than the sender's data transmission).
It looks to me like packet number 249522 provoked the application on 10.120.67.113 to abort the connection. All the window probes get a zero-window response from .132 (with no payload), and then .132 sends the unsolicited packet 249522 with 63 bytes (still advertising a 0 window). The PSH flag suggests that these 63 bytes are the entire data written by the app on .132. Then, in the same millisecond, .113 responds with an RST. I can't think of any reason the TCP stack would send an RST immediately after receiving data (the sequence numbers are correct), so in my view it is almost certain that the app on .113 decided to give up based on the 63-byte message sent by .132.
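If you want to locate these events quickly in a capture, Wireshark's analysis flags also work as a tshark display filter (a sketch; the file name is a placeholder):

tshark -r capture.pcap -Y "tcp.analysis.zero_window or tcp.analysis.keep_alive or tcp.flags.reset == 1"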

RTP audio stream works only in one direction in a SIP call

I'm developing a software feature for a Session Border Controller (SBC).
I'm trying to establish a SIP call using two SIP clients and the SBC, with Asterisk as the soft-switch.
When I call, the SIP signalling works fine, but I get audio in only one direction. I captured RTP packets on all interfaces using Wireshark and observed that the RTP packets in one direction are being dropped by Asterisk.
Note: there is no 'sendonly' attribute in any of the SIP/SDP messages.
I would like to know if there are any settings in Asterisk that may cause this issue.
One more thing I would like to know: where does a SIP client get the RTP connection information from? The port is present in the media attribute
m=audio 16388 RTP/AVP 8 0 101
but where does the client get the transport IP address? Is it from the "o=" field or the "c=" field in the SDP, or from some other field in the SDP or SIP?
You should troubleshoot the problem by capturing the complete call with Wireshark, then look carefully at:
Client A's initial INVITE: which port is it expecting media on (m= line), and which address (c= line)?
The SBC's INVITE on Client A's side: if the SBC is anchoring the media (I assume so), check its m= / c= lines.
The SBC's INVITE on Client B's side: which port / IP (m= / c= lines) is the SBC's A-side expecting media on?
Client B's initial INVITE: which port / IP (m= / c= lines) is the SBC's B-side expecting media on?
Are all nodes in this direction sending media to the correct ports / IPs (look at the RTP streams in Wireshark)?
Then check the other direction, based on the SDP in the 183 or 200 (depending on your signalling flow).
Note: Wireshark has a nice feature which helps a lot: Telephony --> VoIP Calls, which shows you the call flow graphically.
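To answer the direct question about the fields: the RTP destination IP is taken from the "c=" (connection) line, either at session level or overridden at media level, and the port from the "m=" line; the "o=" (origin) line merely identifies the session and is not used for media routing. A minimal SDP body, with example addresses, looks like:

v=0
o=user1 2890844526 2890842807 IN IP4 203.0.113.5
s=-
c=IN IP4 203.0.113.5
t=0 0
m=audio 16388 RTP/AVP 8 0 101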

Regarding ICMP "Fragmentation needed, DF bit set" or ICMP packet too big message

I'm injecting an ICMP "Fragmentation needed, DF bit set" message towards the server, and ideally the server should start sending packets with the size given in the 'next-hop MTU' field of the ICMP message. But this is not working.
Here is the server code:
#!/usr/bin/env python
import socket                    # Import socket module
import time
import os

s = socket.socket()              # Create a socket object
host = '192.168.0.17'            # Local address to bind to
port = 12349                     # Reserve a port for your service.
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.bind((host, port))             # Bind to the port
rand_string = os.urandom(1600)   # 1600 bytes, above a 1500-byte MTU
s.listen(5)                      # Now wait for client connection.

while True:
    c, addr = s.accept()         # Establish connection with client.
    print 'Got connection from', addr
    for i in range(9):           # send 9 chunks of 1600 bytes
        c.sendall(rand_string)
        time.sleep(5)
    c.close()
Here is the client code:
#!/usr/bin/python
# This is client.py file
import socket                    # Import socket module

s = socket.socket()              # Create a socket object
host = '192.168.0.17'            # Server address
port = 12349                     # Must match the server's port.
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.connect((host, port))
while 1:
    print s.recv(1024)
s.close()
Scapy to inject ICMP:
###[ IP ]###
version= 4
ihl= None
tos= 0x0
len= None
id= 1
flags= DF
frag= 0
ttl= 64
proto= ip
chksum= None
src= 192.168.0.45
dst= 192.168.0.17
\options\
###[ ICMP ]###
type= dest-unreach
code= fragmentation-needed
chksum= None
unused= 1300
send(ip/icmp)
The unused field shows up as next-hop MTU in Wireshark. Is the server smart enough to notice that the DF bit was not set while it was communicating with the client, yet it is still receiving ICMP "Fragmentation needed, DF bit set" messages? If it is not, then why is the server not reducing its packet size from 1500 to 1300?
First of all, let's answer your first question (is ICMP sent over TCP?).
ICMP runs directly over IP, as specified in RFC 792:
ICMP messages are sent using the basic IP header.
This can be a bit confusing as ICMP is classified as a network layer protocol rather than a transport layer protocol but it makes sense when taking into account that it's merely an addition to IP to carry error, routing and control messages and data. Thus, it can't rely on the TCP layer to transfer itself since the TCP layer depends on the IP layer which ICMP helps to manage and troubleshoot.
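In Scapy terms (the tool the question already uses), this layering is literal; a minimal sketch:

from scapy.all import IP, ICMP
pkt = IP(dst='192.168.0.17') / ICMP()   # ICMP stacks directly on IP; no TCP/UDP layer in between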
Now, let's deal with your second question (How does TCP come to know about the MTU if ICMP isn't sent over TCP?). I've tried to answer this question to the best of my understanding, with reliance on official specifications, but perhaps the best approach would be to analyze some open source network stack implementation in order to see what's really going on...
The TCP layer may come to know the path's MTU value even though the ICMP message is not layered on top of TCP. It's up to the OS network-stack implementation to notify the TCP layer of the new MTU so it can update its MSS value accordingly.
RFC 1122 requires that the ICMP message includes the IP header as well as the first 8 bytes of the problematic datagram that triggered that ICMP message:
Every ICMP error message includes the Internet header and at least the first 8 data octets of the datagram that triggered the error; more than 8 octets MAY be sent; this header and data MUST be unchanged from the received datagram.
In those cases where the Internet layer is required to pass an ICMP error message to the transport layer, the IP protocol number MUST be extracted from the original header and used to select the appropriate transport protocol entity to handle the error.
This illustrates how the OS can pinpoint the TCP connection whose MSS should be updated, as these 8 bytes include the source and destination ports.
RFC 1122 also states that there MUST be a mechanism by which the transport layer can learn the maximum transport-layer message size that may be sent for a given {source, destination, TOS} triplet. Therefore, I assume that once an ICMP Fragmentation needed and DF set error message is received, the MTU value is somehow made available to the TCP layer that can use it to update its MSS value.
Furthermore, I think that the application layer that instantiated the TCP connection and is making use of it may handle such messages as well and fragment the packets at a higher level. The application may open a socket that expects ICMP messages and act accordingly when they are received. However, fragmenting packets at the application layer is totally transparent to the TCP & IP layers. Note that most applications would let the TCP & IP layers handle this situation by themselves.
However, once an ICMP Fragmentation needed and DF set error message is received by a host, the behavior the lower layers are required to take is not conclusively specified.
RFC 5927, section 2.2 refers to RFC 1122, section 4.2.3.9, which states that TCP should abort the connection when an ICMP Fragmentation needed and DF set error message is passed up from the IP layer, since it signifies a hard error condition. The RFC says the host should implement this behavior, but it is not a must (section 4.2.5). RFC 1122 also states, in section 3.2.2.1, that a received Destination Unreachable message MUST be reported to the TCP layer. Implementing both of these would destroy a TCP connection whenever an ICMP Fragmentation needed and DF set error message is received on it, which makes no sense and is clearly not the desired behavior.
On the other hand, RFC 1191 states this in regard to the required behavior:
RFC 1191 does not outline a specific behavior that is expected from the sending host, because different applications may have different requirements, and different implementation architectures may favor different strategies [this leaves room for this method - OA].
The only required behavior is that a host must attempt to avoid sending more messages with the same PMTU value in the near future. A host can either cease setting the Don't Fragment bit in the IP header (and allow fragmentation by the routers along the way) or reduce the datagram size. The better strategy would be to lower the message size, because fragmentation causes more traffic and consumes more Internet resources.
In conclusion, I think the specification is not definitive regarding the behavior required of a host upon receipt of an ICMP Fragmentation needed and DF set error message. My guess is that both layers (IP & TCP) are notified of the message in order to update their MTU & MSS values, respectively, and that one of them takes responsibility for retransmitting the problematic packet in smaller chunks.
Lastly, regarding your implementation: for full compliance with RFC 1122, you should update the ICMP message to include the IP header of the problematic packet as well as its first 8 data bytes (though you may include more than 8). Moreover, you should verify that the ICMP message arrives before the corresponding ACK for the packet to which it refers; in fact, just to be on the safe side, I would suppress that ACK altogether.
If sending the ICMP message as a response to one of the TCP packets fails, I suggest you try sending the ICMP message before even receiving the TCP packet to which it relates, in order to ensure it arrives before the ACK. Only if that fails as well, try suppressing the ACK altogether. Here is a sample of how the ICMP message should be built.
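A sketch with Scapy (the client's source port is hypothetical; depending on your Scapy version the next-hop MTU field is spelled nexthopmtu, or ends up in unused as in your dump):

#!/usr/bin/env python3
from scapy.all import IP, ICMP, TCP, send

# The "original" too-big datagram that this ICMP error quotes
# (server -> client, matching the question's addresses; the client's
# source port 54321 is made up for the sketch).
orig = IP(src='192.168.0.17', dst='192.168.0.45') / TCP(sport=12349, dport=54321)

# RFC 1122: embed the original IP header plus at least the first
# 8 bytes of its payload (20-byte IP header + 8 = 28 bytes).
quote = bytes(orig)[:28]

icmp = IP(src='192.168.0.45', dst='192.168.0.17') \
       / ICMP(type=3, code=4, nexthopmtu=1300) \
       / quote

send(icmp)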
The way I understand it, the host receives an "ICMP Fragmentation needed and DF set", but the message can come from an intermediate device (router) in the path, so the host can't directly match the ICMP response to a current session; the ICMP only contains the destination IP and the MTU limit.
The host then adds an entry to its routing table for that destination IP, recording the route and MTU with an expiry of 10 minutes.
This can be observed on Linux by asking for the specific route with ip route get x.x.x.x after doing a tracepath or ping that triggers the ICMP response.
$ ip route get 10.x.y.z
10.x.y.z via 10.a.b.1 dev eth0 src 10.a.b.100
cache expires 598sec mtu 1300

How does TCP PSH work?

PSH is a way to push data via TCP. Beyond that, I can find very little info on how to use it properly.
Here is what interests me:
Let's say the server window is 8000 bytes and I send two requests of 150 and 600 bytes. Do I get some sort of confirmation that the data has been received? Can I somehow trigger a confirmation?
I've seen some ACK packets which do not contain PSH but do contain some sort of payload data (Wireshark marks it as "TCP segment data"). Is this data passed on to the user, and if so, why do we need the PSH flag?
TCP PSH generally doesn't 'work' at all. Berkeley-derived TCP implementations completely ignore it.
Source: W.R. Stevens, TCP/IP Illustrated, vol I: 20.5 PUSH Flag.
@Arsen: answering the second part of your question, "why do we need the PSH flag?":
The PSH flag in the TCP header informs the receiving host that the data should be pushed up to the receiving application immediately.
We are using the PSH flag to exchange time-stamp values between two servers.
I am assuming that if we set the PSH flag, the packet won't wait in the receive buffer; it will be delivered directly to the receiver.
The data doesn't sit waiting in the receive buffer anyhow.
TCP apps must go out of their way to have the TCP layer bulk up a few packets and deliver full data buffers.
In fact it's somewhat frustrating to see applications allocate 64 KB buffers to receive data and then get a gazillion 1480/1472-byte messages.
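For completeness: the BSD sockets API gives applications no portable way to set PSH directly; the stack typically sets it on the last segment of each write. The closest application-level knob is TCP_NODELAY, which disables Nagle's batching so small writes leave (and are pushed) immediately. A minimal sketch with placeholder host/port:

#!/usr/bin/env python3
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Disable Nagle's algorithm: each small send() goes out in its own
# segment right away instead of being coalesced with later writes.
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
s.connect(('192.0.2.10', 8080))    # placeholder host/port
s.sendall(b'x' * 150)              # sent (and pushed) immediately
s.close()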
