We have configured Kannel, and the status looks like this:
SMS: received 0 (0 queued), sent 15133 (0 queued), store size 0
SMS: inbound (0.00,0.00,0.00) msg/sec, outbound (3.08,15.23,0.02) msg/sec
DLR: received 14232, sent 0
DLR: inbound (11.45,5.64,0.02) msg/sec, outbound (0.00,0.00,0.00) msg/sec
DLR: 980 queued, using internal storage
Box connections:
smsbox:vsmsc, IP 127.0.0.1 (0 queued), (on-line 8d 21h 38m 41s)
smsc1[smsc1] SMPP:xxxxx.xxxxx.com:2775/2775:xxxxx:SMPP (online 769120s, rcvd: sms 0 (0.00,0.00,0.00) / dlr 1 (0.00,0.00,0.00), sent: sms 1 (0.00,0.00,0.00) / dlr 0 (0.00,0.00,0.00), failed 0, queued 0 msgs)
In the DLR inbound line it is showing (11.45,5.64,0.02) msg/sec. There are three values inside the parentheses. What is the meaning of each?
Thanks.
The three numbers are load averages over different windows:
Average for the last minute;
Average for the last 5 minutes;
Average over the whole runtime.
So DLR inbound (11.45, 5.64, 0.02) means 11.45 DLRs/sec over the last minute, 5.64 DLRs/sec over the last 5 minutes, and only 0.02 DLRs/sec averaged since the gateway started.
So I can make calls, but I show as offline in the console. I don't get a notification in the console when I register. Why is that? Thanks.
pjsip.conf:
[transport-udp]
type=transport
protocol=udp
bind=0.0.0.0
[7000]
type=endpoint
context=from-internal
disallow=all
allow=g729
transport=transport-udp
auth=7000
aors=7000
[7000]
type=auth
auth_type=userpass
password=7000
username=7000
[7000]
type=aor
qualify_timeout=4.0
qualify_frequency=50
max_contacts=1
Cmd: pjsip show endpoints
Endpoint: 7000 Unavailable 0 of inf
InAuth: 7000/7000
Aor: 7000 1
Transport: transport-udp udp 0 0 0.0.0.0:5060
Cmd: pjsip show endpoint 7000
Endpoint: 7000 Unavailable 0 of inf
InAuth: 7000/7000
Aor: 7000 1
Transport: transport-udp udp 0 0 0.0.0.0:5060
I had to add allow_subscribe=yes ;)
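For reference, that means the [7000] endpoint section becomes (the same options as above, with only the new line added):
[7000]
type=endpoint
context=from-internal
disallow=all
allow=g729
transport=transport-udp
auth=7000
aors=7000
allow_subscribe=yes
After that the endpoint shows up with a registered contact: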
Endpoint: 7000 In use 1 of inf
InAuth: 7000/7000
Aor: 7000 1
Contact: 7000/sip:7000#127.0.0.1:62210;ob a891149c2b Avail 1.464
Transport: transport-udp udp 0 0 0.0.0.0:5060
Channel: PJSIP/7000-00000001/Playback Up 00:00:03
Exten: 999 CLCID: "" <>
You can make outbound calls by relying on the SIP "authentication required" response and authenticating after that.
For incoming calls you need registration: when your device registers, Asterisk records the IP/port pair to use for incoming calls. That pair can be anything if your device has no public IP (NAT).
I'm new to ns-3 and I was trying to extract the IP address of a packet from a QueueDiscItem.
When I have a Ptr<QueueDiscItem> item initialized and call:
item->Print(std::cout);
the output I get is
"tos 0x0 DSCP Default ECN Not-ECT ttl 63 id 265 protocol 6 offset (bytes) 0 flags [none] length: 76 10.1.4.2 > 10.1.2.1 0x7fffc67ec880 Dst addr 02-06-ff:ff:ff:ff:ff:ff proto 2048 txq"
but when I call:
Ipv4Header header;
item->GetPacket()->PeekHeader(header);
header.Print(std::cout);
the output I get is
"tos 0x0 DSCP Default ECN Not-ECT ttl 0 id 0 protocol 0 offset (bytes) 0 flags [none] length: 20 102.102.102.102 > 102.102.102.102"
How do I get the header data?
According to the list of TraceSources, the TraceSources associated with QueueDiscItems are for Queues. I'm guessing you were trying to attach to one of those TraceSources.
A QueueDiscItem encapsulates several things: a Ptr<Packet>, a MAC address, and a few other fields. Since you are using IPv4, the QueueDiscItem is actually an Ipv4QueueDiscItem (the latter is a subclass of the former). So, let's start by casting the QueueDiscItem to an Ipv4QueueDiscItem:
Ptr<const Ipv4QueueDiscItem> ipItem = DynamicCast<const Ipv4QueueDiscItem>(item);
Next, you need to know that at this point in the simulation, the Ipv4Header has not been added to the Ptr<Packet> yet. This is probably a design choice (that I don't understand). So, how can we get this information? Well, the Ipv4QueueDiscItem encapsulates the Ipv4Header, and at some point before passing the Ptr<Packet> to L2, the header is added to the packet. This Header can be retrieved by
const Ipv4Header ipHeader = ipItem->GetHeader();
So now we have the Ipv4Header of the packet you're interested in, and we can safely read the source and destination addresses from it:
ipHeader.GetSource();
ipHeader.GetDestination();
In summary, your TraceSource function should look something like this:
void
EnqueueTrace (Ptr<const QueueDiscItem> item) {
  Ptr<const Ipv4QueueDiscItem> ipItem = DynamicCast<const Ipv4QueueDiscItem>(item);
  const Ipv4Header ipHeader = ipItem->GetHeader();
  NS_LOG_UNCOND("Packet received at " << Simulator::Now() << " going from " << ipHeader.GetSource() << " to " << ipHeader.GetDestination());
}
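For completeness, here is a rough sketch of how that callback gets attached to a queue disc's "Enqueue" trace source (assuming the queue disc is installed with a TrafficControlHelper; devices is a placeholder for your NetDeviceContainer):
// In your simulation script, after installing the Internet stack:
TrafficControlHelper tch = TrafficControlHelper::Default ();
QueueDiscContainer qdiscs = tch.Install (devices);
// "Enqueue" is a trace source of ns3::QueueDisc; it passes the Ptr<const QueueDiscItem>
// to the callback, which is exactly what EnqueueTrace above expects.
qdiscs.Get (0)->TraceConnectWithoutContext ("Enqueue", MakeCallback (&EnqueueTrace));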
Why does item->Print(std::cout); work?
All of the above makes sense, but why does
item->Print(std::cout);
print the correct addresses? First, it is important to realize that here Print() is a function of the QueueDiscItem, not the Packet. If we go to the source of this function, we find that Print() just prints the Header if it has already been added.
I am getting a lot of errors like the ones below:
read tcp xx.xx.xx.xx:80: use of closed network connection
read tcp xx.xx.xx.xx:80: connection reset by peer
//function for HTTP connection
func GetResponseBytesByURL_raw(restUrl, connectionTimeOutStr, readTimeOutStr string) ([]byte, error) {
    connectionTimeOut, _ /*err*/ := time.ParseDuration(connectionTimeOutStr)
    readTimeOut, _ /*err*/ := time.ParseDuration(readTimeOutStr)
    timeout := connectionTimeOut + readTimeOut // time.Duration((strconv.Atoi(connectionTimeOutStr) + strconv.Atoi(readTimeOutStr)))
    //timeout = 200 * time.Millisecond

    client := http.Client{
        Timeout: timeout,
    }
    resp, err := client.Get(restUrl)
    if nil != err {
        logger.SetLog("Error GetResponseBytesByURL_raw |err: ", logs.LevelError, err)
        return make([]byte, 0), err
    }
    defer resp.Body.Close()

    body, err := ioutil.ReadAll(resp.Body)
    return body, err
}
Update (July 14):
Server: NumCPU=8, RAM=24GB, Go=go1.4.2.linux-amd64
I am getting these errors during periods of high traffic:
20000-30000 requests per minute, and I have a time frame of 500 ms to fetch the response from the third-party API.
netstat output from my server (using netstat -nat | awk '{print $6}' | sort | uniq -c | sort -n to get the frequency of each connection state):
1 established)
1 Foreign
9 LISTEN
33 FIN_WAIT1
338 ESTABLISHED
5530 SYN_SENT
32202 TIME_WAIT
sysctl -p
fs.file-max = 2097152
vm.swappiness = 10
vm.dirty_ratio = 60
vm.dirty_background_ratio = 2
net.ipv4.tcp_synack_retries = 2
net.ipv4.ip_local_port_range = 2000 65535
net.ipv4.tcp_rfc1337 = 1
net.ipv4.tcp_fin_timeout = 5
net.ipv4.tcp_keepalive_time = 300
net.ipv4.tcp_keepalive_probes = 5
net.ipv4.tcp_keepalive_intvl = 15
net.core.rmem_default = 31457280
net.core.rmem_max = 12582912
net.core.wmem_default = 31457280
net.core.wmem_max = 12582912
net.core.somaxconn = 65536
net.core.netdev_max_backlog = 65536
net.core.optmem_max = 25165824
net.ipv4.tcp_mem = 65536 131072 262144
net.ipv4.udp_mem = 65536 131072 262144
net.ipv4.tcp_rmem = 8192 87380 16777216
net.ipv4.udp_rmem_min = 16384
net.ipv4.tcp_wmem = 8192 65536 16777216
net.ipv4.udp_wmem_min = 16384
net.ipv4.tcp_max_tw_buckets = 1440000
net.ipv4.tcp_tw_recycle = 0
net.ipv4.tcp_tw_reuse = 1
net.ipv6.bindv6only = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.icmp_echo_ignore_broadcasts = 1
error: "net.ipv4.icmp_ignore_bogus_error_messages" is an unknown key
kernel.exec-shield = 1
kernel.randomize_va_space = 1
net.ipv4.conf.all.log_martians = 1
net.ipv4.conf.default.log_martians = 1
net.ipv4.icmp_ignore_bogus_error_responses = 1
net.ipv4.ip_forward = 0
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.all.secure_redirects = 0
net.ipv4.conf.default.secure_redirects = 0
When making connections at a high rate over the internet, it's very likely you're going to encounter some connection problems. You can't mitigate them completely, so you may want to add retry logic around the request. The actual error type at this point probably doesn't matter, but matching the error string for use of closed network connection or connection reset by peer is about the best you can do if you want to be specific. Make sure to limit the retries with a backoff, as some systems will drop or reset connections as a way to limit request rates, and you may get more errors the faster you reconnect.
Depending on the number of remote hosts you're communicating with, you will want to increase Transport.MaxIdleConnsPerHost (the default is only 2). The fewer hosts you talk to, the higher you can set this. This will decrease the number of new connections made, and speed up the requests overall.
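For example, here is a minimal sketch of a shared client with a larger per-host idle pool (the function name getBytes, the 500 ms timeout, and the value 64 are only illustrative; tune them to your host count and traffic):
package main

import (
    "fmt"
    "io/ioutil"
    "net/http"
    "time"
)

// One shared Client (and Transport) for all requests, so keep-alive
// connections are actually pooled and reused instead of re-dialed.
var client = &http.Client{
    Timeout: 500 * time.Millisecond, // overall per-request budget
    Transport: &http.Transport{
        MaxIdleConnsPerHost: 64, // default is 2; raise it when you hit few hosts often
    },
}

func getBytes(url string) ([]byte, error) {
    resp, err := client.Get(url)
    if err != nil {
        return nil, err
    }
    defer resp.Body.Close()
    return ioutil.ReadAll(resp.Body)
}

func main() {
    body, err := getBytes("http://example.com/")
    if err != nil {
        fmt.Println("request failed:", err)
        return
    }
    fmt.Println("got", len(body), "bytes")
}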
If you can, try the go1.5 beta. There have been a couple changes around keep-alive connections that may help reduce the number of errors you see.
I recommend implementing an exponential back-off or some other rate-limiting mechanism on your side of the wire. There's not really anything you can do about those errors, and using exponential back-off won't necessarily make you get the data any faster either. But it can ensure that you get all the data, and the API you're pulling from will surely appreciate the reduced traffic. Here's a link to one I found on GitHub: https://github.com/cenkalti/backoff
There was another popular option as well, though I haven't used either. Implementing one yourself isn't terribly difficult, and I could provide a sample of that on request. One thing I do recommend based on my experience: make sure you're using a retry function that has an abort channel. If you get to really long back-off times, you'll want some way for the caller to kill it.
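Here is a rough hand-rolled sketch of that idea (this is not the cenkalti/backoff API; retryWithBackoff, the attempt count, and the initial delay are placeholders to adapt):
package main

import (
    "errors"
    "fmt"
    "net/http"
    "time"
)

// retryWithBackoff calls doRequest until it succeeds, giving up after
// maxRetries attempts or as soon as the caller closes the abort channel.
func retryWithBackoff(doRequest func() error, maxRetries int, abort <-chan struct{}) error {
    delay := 100 * time.Millisecond // initial back-off, doubled after every failure
    var err error
    for attempt := 0; attempt < maxRetries; attempt++ {
        if err = doRequest(); err == nil {
            return nil
        }
        select {
        case <-time.After(delay):
            delay *= 2
        case <-abort:
            return errors.New("retry aborted by caller")
        }
    }
    return err // last error after exhausting all retries
}

func main() {
    abort := make(chan struct{}) // close(abort) from the caller to kill the retries
    err := retryWithBackoff(func() error {
        resp, err := http.Get("http://example.com/")
        if err != nil {
            return err
        }
        return resp.Body.Close()
    }, 5, abort)
    fmt.Println("result:", err)
}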
I am using a master-worker structure with the Message Passing Interface (MPI), but whenever I call the receive function, instead of receiving the messages in the order they were sent, I need to receive only the last message sent from the master to each processor and ignore the previous ones.
My question is: is there any way to access each processor's buffer and pick the last message in the queue?
No, you can't just peer into the queue; but you can test to see if more messages are present with MPI_Probe or MPI_Iprobe, and while there are more messages present, keep receiving and discarding the old data:
#!/usr/bin/env python
from mpi4py import MPI
import time

def waiter(comm, sendTask):
    # wait for messages to be present
    while not comm.Iprobe(source=sendTask, tag=1):
        time.sleep(1)

    # read all messages while more are available, discarding old
    lastMsg = None
    while comm.Iprobe(source=sendTask, tag=1):
        lastMsg = comm.recv(source=sendTask, tag=1)

    if lastMsg is None:
        print "No messages pending"
    else:
        print "Last message was ", lastMsg

    comm.Barrier()

def sender(comm, waitTask):
    for msgno in range(5):
        print "sending: ", msgno
        comm.send(msgno, dest=waitTask, tag=1)

    print "sending: ", -1
    comm.send(-1, dest=waitTask, tag=1)
    comm.Barrier()

if __name__ == "__main__":
    comm = MPI.COMM_WORLD
    sendTask = 1
    waitTask = 0

    if comm.rank == waitTask:
        waiter(comm, sendTask)
    elif comm.rank == sendTask:
        sender(comm, waitTask)
    else:
        comm.Barrier()
Running gives
$ mpirun -np 2 ./readall.py
sending: 0
sending: 1
sending: 2
sending: 3
sending: 4
sending: -1
Last message was -1
I need to send a bunch of IP packets that I'm sure will trigger an ICMP TTL-expired error message. How exactly can I associate each error message with the packet that generated it? What field in the ICMP header is used for this?
Should I rather use some custom ID number in the original IP header, so that I can tell which error message corresponds to which packet? If so, which field is most suitable for this?
The body of an ICMP TTL Expired message must include the IP header of the original packet plus the first 64 bits of the original payload beyond that header, which for TCP/UDP contain the source and destination ports.
Based on timing and that header information, you can derive which packet triggered the TTL-expired message.
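In practice that means you can parse the ICMP payload and read identifying fields from the embedded copy of the original packet, e.g. the IP Identification field (which you control when you send the probe) or the UDP/TCP ports. A rough sketch in Go (assuming you already have the raw ICMP message bytes from a raw socket or pcap capture; parseTTLExceeded is just an illustrative name):
package main

import (
    "encoding/binary"
    "fmt"
    "net"
)

// parseTTLExceeded pulls identifying fields out of the body of an ICMP
// Time Exceeded (type 11) message. icmp is the raw ICMP message, i.e.
// everything after the outer IP header.
func parseTTLExceeded(icmp []byte) error {
    // 8 bytes of ICMP header + at least a 20-byte embedded IP header + 64 bits of payload.
    if len(icmp) < 8+20+8 || icmp[0] != 11 {
        return fmt.Errorf("not a complete ICMP time-exceeded message")
    }
    orig := icmp[8:] // embedded copy of the original IP header + first 64 bits of its payload

    ihl := int(orig[0]&0x0f) * 4               // length of the embedded IP header
    ipID := binary.BigEndian.Uint16(orig[4:6]) // Identification field of the original packet
    proto := orig[9]
    src := net.IP(orig[12:16])
    dst := net.IP(orig[16:20])
    fmt.Printf("original packet: id=0x%04x proto=%d %s -> %s\n", ipID, proto, src, dst)

    // For UDP/TCP, the 64 bits following the embedded header start with the ports.
    if len(orig) >= ihl+4 {
        srcPort := binary.BigEndian.Uint16(orig[ihl:ihl+2])
        dstPort := binary.BigEndian.Uint16(orig[ihl+2:ihl+4])
        fmt.Printf("original ports: %d -> %d\n", srcPort, dstPort)
    }
    return nil
}

func main() {
    // raw would normally come from a raw ICMP socket or a pcap capture;
    // an empty slice is used here only so the sketch compiles and runs.
    var raw []byte
    if err := parseTTLExceeded(raw); err != nil {
        fmt.Println(err)
    }
}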
I am including a sample triggered by an NTP packet below...
See RFC 792 (Page 5) for more details.
ICMP TTL-Expired Message
Ethernet II, Src: JuniperN_c3:a0:00 (b0:c6:9a:c3:a0:00), Dst: 78:2b:cb:37:4c:7a (78:2b:cb:37:4c:7a)
Destination: 78:2b:cb:37:4c:7a (78:2b:cb:37:4c:7a)
Address: 78:2b:cb:37:4c:7a (78:2b:cb:37:4c:7a)
.... ...0 .... .... .... .... = IG bit: Individual address (unicast)
.... ..0. .... .... .... .... = LG bit: Globally unique address (factory default)
Source: JuniperN_c3:a0:00 (b0:c6:9a:c3:a0:00)
Address: JuniperN_c3:a0:00 (b0:c6:9a:c3:a0:00)
.... ...0 .... .... .... .... = IG bit: Individual address (unicast)
.... ..0. .... .... .... .... = LG bit: Globally unique address (factory default)
Type: IP (0x0800)
Internet Protocol, Src: 172.25.116.254 (172.25.116.254), Dst: 172.25.116.10 (172.25.116.10)
Version: 4
Header length: 20 bytes
Differentiated Services Field: 0x00 (DSCP 0x00: Default; ECN: 0x00)
0000 00.. = Differentiated Services Codepoint: Default (0x00)
.... ..0. = ECN-Capable Transport (ECT): 0
.... ...0 = ECN-CE: 0
Total Length: 56
Identification: 0x86d7 (34519)
Flags: 0x02 (Don't Fragment)
0.. = Reserved bit: Not Set
.1. = Don't fragment: Set
..0 = More fragments: Not Set
Fragment offset: 0
Time to live: 255
Protocol: ICMP (0x01)
Header checksum: 0xb3b1 [correct]
[Good: True]
[Bad : False]
Source: 172.25.116.254 (172.25.116.254)
Destination: 172.25.116.10 (172.25.116.10)
Internet Control Message Protocol
Type: 11 (Time-to-live exceeded)
Code: 0 (Time to live exceeded in transit)
Checksum: 0x4613 [correct]
Internet Protocol, Src: 172.25.116.10 (172.25.116.10), Dst: 172.25.0.11 (172.25.0.11)
Version: 4
Header length: 20 bytes
Differentiated Services Field: 0x00 (DSCP 0x00: Default; ECN: 0x00)
0000 00.. = Differentiated Services Codepoint: Default (0x00)
.... ..0. = ECN-Capable Transport (ECT): 0
.... ...0 = ECN-CE: 0
Total Length: 36
Identification: 0x0001 (1)
Flags: 0x00
0.. = Reserved bit: Not Set
.0. = Don't fragment: Not Set
..0 = More fragments: Not Set
Fragment offset: 0
Time to live: 0
[Expert Info (Note/Sequence): "Time To Live" only 0]
[Message: "Time To Live" only 0]
[Severity level: Note]
[Group: Sequence]
Protocol: UDP (0x11)
Header checksum: 0xee80 [correct]
[Good: True]
[Bad : False]
Source: 172.25.116.10 (172.25.116.10)
Destination: 172.25.0.11 (172.25.0.11)
User Datagram Protocol, Src Port: telindus (1728), Dst Port: ntp (123)
Source port: telindus (1728)
Destination port: ntp (123)
Length: 16
Checksum: 0xa7a1 [unchecked, not all data available]
[Good Checksum: False]
[Bad Checksum: False]