I am trying to connect to a PostgreSQL database (hosted on AWS RDS) via an SSH tunnel in R. So far, I have been able to connect using the following methods:
1.---------------------------
Opening the SSH tunnel in my terminal (macOS) using
ssh -i {key file path} -f -N -L 5432:{db host}:5432 {ssh user}@{ssh host} -v
and then connecting to the database using
psql -hlocalhost -U{db user} -p5432 -dpostgres
2.---------------------------
Opening the ssh tunnel in my terminal and then running the following code in R to connect
conn <- dbConnect(
  RPostgres::Postgres(),
  dbname = db_name,
  user = db_user,
  password = db_password,
  host = "127.0.0.1",
  port = db_port
)
3.---------------------------
This is where the issue occurs. I'm able to open the SSH tunnel from R (in a background R process) with
tunnel_process <- callr::r_bg(
  function(ssh_host, ssh_user, ssh_key, db_host, db_port) {
    session <- ssh::ssh_connect(
      host = glue::glue("{ ssh_user }@{ ssh_host }"),
      keyfile = ssh_key,
      verbose = 3
    )
    ssh::ssh_tunnel(
      session = session,
      port = db_port,
      target = glue::glue("{ db_host }:{ db_port }")
    )
  },
  args = list(ssh_host, ssh_user, ssh_key, db_host, db_port),
  stdout = nullfile(),
  stderr = nullfile()
)
But then I'm unable to use the same dbConnect code as above to connect. It just gives me the following error message:
Error: could not connect to server: Connection refused
Is the server running on host "127.0.0.1" and accepting
TCP/IP connections on port 5432?
I am, however, able to connect directly from the terminal using the connection command from part 1, but only if I run psql -hlocalhost -U{db user} -p5432 -dpostgres, then re-run the SSH tunnel code in R, and only THEN enter my password in the terminal. It would appear that every time I try to connect, the SSH tunnel closes, so I have to re-launch it before submitting my password.
Question---------------------------
From what I just detailed, it would appear that:
a. My database is reachable since I can easily connect through the terminal
b. My R code works since I'm able to use it to both successfully open the SSH tunnel AND connect to the database. I'm just unable to use both together for some reason.
c. The tunnel I open through R breaks any time I try to connect to the database. This is not the case for the tunnel I open directly in the terminal.
Since I want to be able to do everything directly from R, does anybody here have any ideas on what may be causing the issue?
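For completeness, this is roughly the shape of what I'm trying to end up with: a minimal, untested sketch that assumes the tunnel has to be re-opened after every client connection (which is what the behaviour above suggests):
tunnel_process <- callr::r_bg(
  function(ssh_host, ssh_user, ssh_key, db_host, db_port) {
    session <- ssh::ssh_connect(
      host = glue::glue("{ ssh_user }@{ ssh_host }"),
      keyfile = ssh_key
    )
    # assumption: ssh_tunnel() serves a single client connection and then
    # returns, so keep re-opening it for the next connection
    repeat {
      ssh::ssh_tunnel(
        session = session,
        port = db_port,
        target = glue::glue("{ db_host }:{ db_port }")
      )
    }
  },
  args = list(ssh_host, ssh_user, ssh_key, db_host, db_port)
)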
EDIT---------------------------
Here's the log I get in R when I try to connect to the database, just before it closes the tunnel:
> ssh::ssh_tunnel(session = session,
+ port = db_port,
+ target = glue::glue("{ db_host }:{ db_port }"))
\ Waiting for connetion on port 5432... client connected!
channel_open: Creating a channel 43 with 64000 window and 32768 max packet
ssh_socket_unbuffered_write: Enabling POLLOUT for socket
packet_send2: packet: wrote [len=124,padding=18,comp=105,payload=105]
channel_open: Sent a SSH_MSG_CHANNEL_OPEN type direct-tcpip for channel 43
ssh_packet_socket_callback: packet: read type 80 [len=492,padding=16,comp=475,payload=475]
ssh_packet_process: Dispatching handler for packet type 80
ssh_packet_global_request: Received SSH_MSG_GLOBAL_REQUEST packet
ssh_packet_global_request: UNKNOWN SSH_MSG_GLOBAL_REQUEST hostkeys-00@openssh.com 0
ssh_packet_process: Couldn't do anything with packet type 80
packet_send2: packet: wrote [len=12,padding=6,comp=5,payload=5]
ssh_socket_unbuffered_write: Enabling POLLOUT for socket
ssh_packet_socket_callback: packet: read type 91 [len=28,padding=10,comp=17,payload=17]
ssh_packet_process: Dispatching handler for packet type 91
ssh_packet_channel_open_conf: Received SSH2_MSG_CHANNEL_OPEN_CONFIRMATION
ssh_packet_channel_open_conf: Received a CHANNEL_OPEN_CONFIRMATION for channel 43:0
ssh_packet_channel_open_conf: Remote window : 2097152, maxpacket : 32768
| Tunneled -1 bytes...ssh_socket_unbuffered_write: Enabling POLLOUT for socket
packet_send2: packet: wrote [len=28,padding=10,comp=17,payload=17]
channel_write_common: channel_write wrote 8 bytes
| Tunneled 7 bytes...ssh_packet_socket_callback: packet: read type 94 [len=28,padding=17,comp=10,payload=10]
ssh_packet_process: Dispatching handler for packet type 94
channel_rcv_data: Channel receiving 1 bytes data in 0 (local win=64000 remote win=2097144)
channel_default_bufferize: placing 1 bytes into channel buffer (stderr=0)
channel_rcv_data: Channel windows are now (local win=63999 remote win=2097144)
ssh_socket_unbuffered_write: Enabling POLLOUT for socket
packet_send2: packet: wrote [len=28,padding=18,comp=9,payload=9]
grow_window: growing window (channel 43:0) to 1280000 bytes
ssh_channel_read_timeout: Read (1) buffered : 1 bytes. Window: 1280000
- Tunneled 8 bytes...ssh_socket_unbuffered_write: Enabling POLLOUT for socket
packet_send2: packet: wrote [len=316,padding=17,comp=298,payload=298]
channel_write_common: channel_write wrote 289 bytes
/ Tunneled 297 bytes...ssh_packet_socket_callback: packet: read type 94 [len=3964,padding=12,comp=3951,payload=3951]
ssh_packet_process: Dispatching handler for packet type 94
channel_rcv_data: Channel receiving 3942 bytes data in 0 (local win=1280000 remote win=2096855)
channel_default_bufferize: placing 3942 bytes into channel buffer (stderr=0)
channel_rcv_data: Channel windows are now (local win=1276058 remote win=2096855)
ssh_channel_read_timeout: Read (3942) buffered : 3942 bytes. Window: 1276058
\ Tunneled 4239 bytes...ssh_socket_unbuffered_write: Enabling POLLOUT for socket
packet_send2: packet: wrote [len=156,padding=8,comp=147,payload=147]
channel_write_common: channel_write wrote 138 bytes
- Tunneled 4377 bytes...ssh_packet_socket_callback: packet: read type 94 [len=76,padding=15,comp=60,payload=60]
ssh_packet_process: Dispatching handler for packet type 94
channel_rcv_data: Channel receiving 51 bytes data in 0 (local win=1276058 remote win=2096717)
channel_default_bufferize: placing 51 bytes into channel buffer (stderr=0)
channel_rcv_data: Channel windows are now (local win=1276007 remote win=2096717)
ssh_channel_read_timeout: Read (51) buffered : 51 bytes. Window: 1276007
| Tunneled 4428 bytes...ssh_socket_unbuffered_write: Enabling POLLOUT for socket
packet_send2: packet: wrote [len=140,padding=14,comp=125,payload=125]
channel_write_common: channel_write wrote 116 bytes
\ Tunneled 4544 bytes...ssh_packet_socket_callback: packet: read type 94 [len=60,padding=8,comp=51,payload=51]
ssh_packet_process: Dispatching handler for packet type 94
channel_rcv_data: Channel receiving 42 bytes data in 0 (local win=1276007 remote win=2096601)
channel_default_bufferize: placing 42 bytes into channel buffer (stderr=0)
channel_rcv_data: Channel windows are now (local win=1275965 remote win=2096601)
ssh_channel_read_timeout: Read (42) buffered : 42 bytes. Window: 1275965
/ Tunneled 4586 bytes...ssh_socket_unbuffered_write: Enabling POLLOUT for socket
packet_send2: packet: wrote [len=60,padding=19,comp=40,payload=40]
channel_write_common: channel_write wrote 31 bytes
- Tunneled 4617 bytes...packet_send2: packet: wrote [len=12,padding=6,comp=5,payload=5]
ssh_channel_send_eof: Sent a EOF on client channel (43:0)
ssh_socket_unbuffered_write: Enabling POLLOUT for socket
packet_send2: packet: wrote [len=12,padding=6,comp=5,payload=5]
ssh_channel_close: Sent a close on client channel (43:0)
ssh_socket_unbuffered_write: Enabling POLLOUT for socket
tunnel closed!
For reference, this is what the same log looks like when using the workaround detailed in 3. (re-running the ssh_tunnel right before submitting my password in the terminal):
> ssh::ssh_tunnel(session = session,
+ port = db_port,
+ target = glue::glue("{ db_host }:{ db_port }"))
\ Waiting for connetion on port 5432... client connected!
channel_open: Creating a channel 43 with 64000 window and 32768 max packet
ssh_socket_unbuffered_write: Enabling POLLOUT for socket
packet_send2: packet: wrote [len=124,padding=18,comp=105,payload=105]
channel_open: Sent a SSH_MSG_CHANNEL_OPEN type direct-tcpip for channel 43
ssh_packet_socket_callback: packet: read type 80 [len=492,padding=16,comp=475,payload=475]
ssh_packet_process: Dispatching handler for packet type 80
ssh_packet_global_request: Received SSH_MSG_GLOBAL_REQUEST packet
ssh_packet_global_request: UNKNOWN SSH_MSG_GLOBAL_REQUEST hostkeys-00@openssh.com 0
ssh_packet_process: Couldn't do anything with packet type 80
packet_send2: packet: wrote [len=12,padding=6,comp=5,payload=5]
ssh_socket_unbuffered_write: Enabling POLLOUT for socket
ssh_packet_socket_callback: packet: read type 91 [len=28,padding=10,comp=17,payload=17]
ssh_packet_process: Dispatching handler for packet type 91
ssh_packet_channel_open_conf: Received SSH2_MSG_CHANNEL_OPEN_CONFIRMATION
ssh_packet_channel_open_conf: Received a CHANNEL_OPEN_CONFIRMATION for channel 43:0
ssh_packet_channel_open_conf: Remote window : 2097152, maxpacket : 32768
| Tunneled -1 bytes...ssh_socket_unbuffered_write: Enabling POLLOUT for socket
packet_send2: packet: wrote [len=28,padding=10,comp=17,payload=17]
channel_write_common: channel_write wrote 8 bytes
| Tunneled 7 bytes...ssh_packet_socket_callback: packet: read type 94 [len=28,padding=17,comp=10,payload=10]
ssh_packet_process: Dispatching handler for packet type 94
channel_rcv_data: Channel receiving 1 bytes data in 0 (local win=64000 remote win=2097144)
channel_default_bufferize: placing 1 bytes into channel buffer (stderr=0)
channel_rcv_data: Channel windows are now (local win=63999 remote win=2097144)
ssh_socket_unbuffered_write: Enabling POLLOUT for socket
packet_send2: packet: wrote [len=28,padding=18,comp=9,payload=9]
grow_window: growing window (channel 43:0) to 1280000 bytes
ssh_channel_read_timeout: Read (1) buffered : 1 bytes. Window: 1280000
- Tunneled 8 bytes...ssh_socket_unbuffered_write: Enabling POLLOUT for socket
packet_send2: packet: wrote [len=316,padding=17,comp=298,payload=298]
channel_write_common: channel_write wrote 289 bytes
\ Tunneled 297 bytes...ssh_packet_socket_callback: packet: read type 94 [len=3964,padding=12,comp=3951,payload=3951]
ssh_packet_process: Dispatching handler for packet type 94
channel_rcv_data: Channel receiving 3942 bytes data in 0 (local win=1280000 remote win=2096855)
channel_default_bufferize: placing 3942 bytes into channel buffer (stderr=0)
channel_rcv_data: Channel windows are now (local win=1276058 remote win=2096855)
ssh_channel_read_timeout: Read (3942) buffered : 3942 bytes. Window: 1276058
/ Tunneled 4239 bytes...ssh_socket_unbuffered_write: Enabling POLLOUT for socket
packet_send2: packet: wrote [len=156,padding=8,comp=147,payload=147]
channel_write_common: channel_write wrote 138 bytes
| Tunneled 4377 bytes...ssh_packet_socket_callback: packet: read type 94 [len=76,padding=15,comp=60,payload=60]
ssh_packet_process: Dispatching handler for packet type 94
channel_rcv_data: Channel receiving 51 bytes data in 0 (local win=1276058 remote win=2096717)
channel_default_bufferize: placing 51 bytes into channel buffer (stderr=0)
channel_rcv_data: Channel windows are now (local win=1276007 remote win=2096717)
ssh_channel_read_timeout: Read (51) buffered : 51 bytes. Window: 1276007
- Tunneled 4428 bytes...ssh_socket_unbuffered_write: Enabling POLLOUT for socket
packet_send2: packet: wrote [len=140,padding=14,comp=125,payload=125]
channel_write_common: channel_write wrote 116 bytes
/ Tunneled 4544 bytes...ssh_packet_socket_callback: packet: read type 94 [len=60,padding=8,comp=51,payload=51]
ssh_packet_process: Dispatching handler for packet type 94
channel_rcv_data: Channel receiving 42 bytes data in 0 (local win=1276007 remote win=2096601)
channel_default_bufferize: placing 42 bytes into channel buffer (stderr=0)
channel_rcv_data: Channel windows are now (local win=1275965 remote win=2096601)
ssh_channel_read_timeout: Read (42) buffered : 42 bytes. Window: 1275965
\ Tunneled 4586 bytes...ssh_socket_unbuffered_write: Enabling POLLOUT for socket
packet_send2: packet: wrote [len=92,padding=12,comp=79,payload=79]
channel_write_common: channel_write wrote 70 bytes
- Tunneled 4656 bytes...ssh_packet_socket_callback: packet: read type 94 [len=380,padding=15,comp=364,payload=364]
ssh_packet_process: Dispatching handler for packet type 94
channel_rcv_data: Channel receiving 355 bytes data in 0 (local win=1275965 remote win=2096531)
channel_default_bufferize: placing 355 bytes into channel buffer (stderr=0)
channel_rcv_data: Channel windows are now (local win=1275610 remote win=2096531)
ssh_channel_read_timeout: Read (355) buffered : 355 bytes. Window: 1275610
| Tunneled 5011 bytes...
Finally, here's the log when running ssh_connect:
> session <- ssh::ssh_connect(host = glue::glue("{ ssh_user }@{ ssh_host }"),
+ keyfile = ssh_key,
+ verbose = 3)
ssh_pki_import_privkey_base64: Trying to decode privkey passphrase=false
ssh_connect: libssh 0.8.6 (c) 2003-2018 Aris Adamantiadis, Andreas Schneider and libssh contributors. Distributed under the LGPL, please refer to COPYING file for information about your rights, using threading threads_pthread
ssh_socket_connect: Nonblocking connection socket: 50
ssh_connect: Socket connecting, now waiting for the callbacks to work
ssh_connect: Actual timeout : 10000
ssh_socket_pollcallback: Received POLLOUT in connecting state
socket_callback_connected: Socket connection callback: 1 (0)
ssh_socket_unbuffered_write: Enabling POLLOUT for socket
callback_receive_banner: Received banner: SSH-2.0-OpenSSH_7.4
ssh_client_connection_callback: SSH server banner: SSH-2.0-OpenSSH_7.4
ssh_analyze_banner: Analyzing banner: SSH-2.0-OpenSSH_7.4
ssh_analyze_banner: We are talking to an OpenSSH client version: 7.4 (70400)
ssh_known_hosts_read_entries: Failed to open the known_hosts file '/etc/ssh/ssh_known_hosts': No such file or directory
ssh_client_select_hostkeys: Changing host key method to "ecdsa-sha2-nistp256,ssh-ed25519,ecdsa-sha2-nistp521,ecdsa-sha2-nistp384,ssh-rsa,ssh-dss"
ssh_socket_unbuffered_write: Enabling POLLOUT for socket
packet_send2: packet: wrote [len=644,padding=9,comp=634,payload=634]
ssh_packet_socket_callback: packet: read type 20 [len=1276,padding=10,comp=1265,payload=1265]
ssh_packet_process: Dispatching handler for packet type 20
ssh_kex_select_methods: Negotiated curve25519-sha256,ecdsa-sha2-nistp256,aes256-ctr,aes256-ctr,hmac-sha2-256,hmac-sha2-256,none,none,,
ssh_socket_unbuffered_write: Enabling POLLOUT for socket
packet_send2: packet: wrote [len=44,padding=6,comp=37,payload=37]
ssh_packet_socket_callback: packet: read type 31 [len=260,padding=11,comp=248,payload=248]
ssh_packet_process: Dispatching handler for packet type 31
ssh_packet_dh_reply: Received SSH_KEXDH_REPLY
ssh_socket_unbuffered_write: Enabling POLLOUT for socket
packet_send2: packet: wrote [len=12,padding=10,comp=1,payload=1]
ssh_client_curve25519_reply: SSH_MSG_NEWKEYS sent
ssh_packet_socket_callback: Processing 112 bytes left in socket buffer
ssh_packet_socket_callback: packet: read type 21 [len=12,padding=10,comp=1,payload=1]
ssh_packet_process: Dispatching handler for packet type 21
ssh_packet_newkeys: Received SSH_MSG_NEWKEYS
crypt_set_algorithms2: Set output algorithm to aes256-ctr
crypt_set_algorithms2: Set HMAC output algorithm to hmac-sha2-256
crypt_set_algorithms2: Set input algorithm to aes256-ctr
crypt_set_algorithms2: Set HMAC input algorithm to hmac-sha2-256
ssh_packet_newkeys: Signature verified and valid
ssh_packet_socket_callback: Processing 96 bytes left in socket buffer
ssh_packet_socket_callback: packet: read type 7 [len=60,padding=6,comp=53,payload=53]
ssh_packet_process: Dispatching handler for packet type 7
ssh_packet_ext_info: Received SSH_MSG_EXT_INFO
ssh_packet_ext_info: Follows 1 extensions
ssh_packet_ext_info: Extension: server-sig-algs=<rsa-sha2-256,rsa-sha2-512>
ssh_connect: current state : 7
packet_send2: packet: wrote [len=28,padding=10,comp=17,payload=17]
ssh_service_request: Sent SSH_MSG_SERVICE_REQUEST (service ssh-userauth)
ssh_socket_unbuffered_write: Enabling POLLOUT for socket
ssh_packet_socket_callback: packet: read type 6 [len=28,padding=10,comp=17,payload=17]
ssh_packet_process: Dispatching handler for packet type 6
ssh_packet_service_accept: Received SSH_MSG_SERVICE_ACCEPT
ssh_socket_unbuffered_write: Enabling POLLOUT for socket
packet_send2: packet: wrote [len=44,padding=4,comp=39,payload=39]
ssh_packet_socket_callback: packet: read type 51 [len=60,padding=15,comp=44,payload=44]
ssh_packet_process: Dispatching handler for packet type 51
ssh_packet_userauth_failure: Access denied for 'none'. Authentication that can continue: publickey,gssapi-keyex,gssapi-with-mic
ssh_packet_userauth_failure: Access denied for 'none'. Authentication that can continue: publickey,gssapi-keyex,gssapi-with-mic
ssh_key_algorithm_allowed: Checking rsa-sha2-512 with list <ssh-ed25519,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,ssh-rsa,rsa-sha2-512,rsa-sha2-256,ssh-dss>
ssh_key_algorithm_allowed: Checking rsa-sha2-512 with list <ssh-ed25519,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,ssh-rsa,rsa-sha2-512,rsa-sha2-256,ssh-dss>
ssh_key_algorithm_allowed: Checking rsa-sha2-512 with list <ssh-ed25519,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,ssh-rsa,rsa-sha2-512,rsa-sha2-256,ssh-dss>
ssh_socket_unbuffered_write: Enabling POLLOUT for socket
packet_send2: packet: wrote [len=636,padding=11,comp=624,payload=624]
ssh_packet_socket_callback: packet: read type 52 [len=12,padding=10,comp=1,payload=1]
ssh_packet_process: Dispatching handler for packet type 52
ssh_packet_userauth_success: Authentication successful
I am analyzing some events against DNS servers running Unbound. In the course of this investigation I am running into traffic involving queries to the DNS servers that, in some cases, are reported as having a source port between 1 and 1024. As far as my knowledge goes, these ports are reserved for services, so there should never be traffic originating / initiated from them towards a server.
Since I also know this is a convention that evolved over time rather than a law, there is no technical limitation preventing any number from being put in the source-port field of a packet. So my conclusion would be that these queries were generated by some tool in which the source port is filled with a random value (the frequency is roughly evenly distributed over 0-65535, except for a peak around 32768) and that this is a deliberate attack.
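(To illustrate that nothing stops a client from choosing its source port: assuming a standard BIND dig is installed, its -b option lets you pick an arbitrary source address and port for a query; binding to a port below 1024 requires root.)
$ sudo dig -b 0.0.0.0#1000 @<dns server> example.com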
Can someone confirm or deny the source-port theory and either vindicate my conclusion, or declare me a total idiot and explain why?
Thanks in advance.
Edit 1: adding more precise info to settle some disputes below that arose due to my incomplete reporting.
It's definitely not a port scan. It was traffic arriving on UDP port 53, and Unbound apparently accepted it as an (almost) valid DNS query while generating the following error messages for each packet:
notice: remote address is <ipaddress> port <sourceport>
notice: sendmsg failed: Invalid argument
$ cat raw_daemonlog.txt | egrep -c 'notice: remote address is'
256497
$ cat raw_daemonlog.txt | egrep 'notice: remote address is' | awk '{printf("%s\n",$NF)}' | sort -n | uniq -c > sourceportswithfrequency.txt
$ cat sourceportswithfrequency.txt | wc -l
56438
So 256497 messages, 56438 unique source ports used
$ cat sourceportswithfrequency.txt | head
5 4
3 5
5 6
So the lowest source port seen was 4, which was used 5 times.
$ cat sourceportswithfrequency.txt | tail
8 65524
2 65525
14 65526
1 65527
2 65528
4 65529
3 65530
3 65531
3 65532
4 65534
So the highest source port seen was 65534 and it was used 4 times.
$ cat sourceportswithfrequency.txt | sort -n | tail -n 25
55 32786
58 35850
60 32781
61 32785
66 32788
68 32793
71 32784
73 32783
88 32780
90 32791
91 32778
116 2050
123 32779
125 37637
129 7077
138 32774
160 32777
160 57349
162 32776
169 32775
349 32772
361 32773
465 32769
798 32771
1833 32768
So the peak around 32768 is real.
My original question still stands: does this traffic pattern suggest an attack, or is there a logical explanation, for instance, for the traffic with source ports < 1024?
As far as my knowledge goes these are reserved for services so there should never be traffic originating / initiated from those to a server.
It doesn't matter what the source port number is, as long as it's between 1 and 65,535. It's not like a source port of 53 means that there is a DNS server listening on the source machine.
The source port is just there to allow multiple connections / in-flight datagrams from one machine to another machine on the same destination port.
See also Wiki: Ephemeral port:
The Internet Assigned Numbers Authority (IANA) suggests the range 49152 to 65535 [...] for dynamic or private ports.[1]
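(Also, assuming the clients are Linux machines, the ephemeral range the kernel actually uses can be checked as below; on many Linux systems the default range starts at 32768, which may be relevant to the peak you see around that port.)
$ cat /proc/sys/net/ipv4/ip_local_port_range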
That sounds like a port scan.
There are 65536 distinct and usable port numbers. (ibid.)
FYI: The TCP and UDP port 32768 is registered and used by IBM FileNet TMS.
Here's my local setup: 3 VMs (using VirtualBox), with Kafka and ZooKeeper installed on all three. They are all talking to each other as well.
I am trying to use kafka-console-producer from my local machine, which requires a broker list. I am supplying the IPs of my VMs, but it doesn't seem to be working. I've also tried the advertised.host properties on the VMs, but that has no effect for me. Here's my server.properties from the three machines:
Server 1:
broker.id=4
port=9092
host.name=10.30.3.4
advertised.host.name=10.30.3.4
advertised.port=9092
zookeeper.connect=10.30.3.4:2181
zookeeper.connection.timeout.ms=6000
Server 2:
broker.id=3
port=9092
host.name=10.30.3.3
advertised.host.name=10.30.3.3
advertised.port=9092
zookeeper.connect=10.30.3.3:2181
zookeeper.connection.timeout.ms=6000
Server 3:
broker.id=2
port=9092
host.name=10.30.3.2
advertised.host.name=10.30.3.2
advertised.port=9092
zookeeper.connect=10.30.3.2:2181
zookeeper.connection.timeout.ms=6000
My VirtualBox also has port forwarding set up:
It's similar for the other two machines; only the ports are tweaked a bit.
I am able to connect to ZooKeeper just fine:
bin/zkCli.sh -server 127.0.0.1:9999
is able to connect to ZooKeeper on the VM. But if I try kafka-console-producer, it fails when I try sending messages:
bin/kafka-console-producer.sh --broker-list 127.0.0.1:9502 --topic partition2replica2 --timeout 3000
leads to:
[2016-02-17 15:06:36,943] WARN Property topic is not valid (kafka.utils.VerifiableProperties)
hi
there
[2016-02-17 15:07:23,699] WARN Failed to send producer request with correlation id 3 to broker 3 with data for partitions [partition2replica2,1] (kafka.producer.async.DefaultEventHandler)
java.nio.channels.ClosedChannelException
at kafka.network.BlockingChannel.send(BlockingChannel.scala:100)
at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:73)
at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:72)
at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SyncProducer.scala:103)
at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply(SyncProducer.scala:103)
at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply(SyncProducer.scala:103)
at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
at kafka.producer.SyncProducer$$anonfun$send$1.apply$mcV$sp(SyncProducer.scala:102)
at kafka.producer.SyncProducer$$anonfun$send$1.apply(SyncProducer.scala:102)
at kafka.producer.SyncProducer$$anonfun$send$1.apply(SyncProducer.scala:102)
at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
at kafka.producer.SyncProducer.send(SyncProducer.scala:101)
at kafka.producer.async.DefaultEventHandler.kafka$producer$async$DefaultEventHandler$$send(DefaultEventHandler.scala:255)
at kafka.producer.async.DefaultEventHandler$$anonfun$dispatchSerializedData$2.apply(DefaultEventHandler.scala:106)
at kafka.producer.async.DefaultEventHandler$$anonfun$dispatchSerializedData$2.apply(DefaultEventHandler.scala:100)
at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
at kafka.producer.async.DefaultEventHandler.dispatchSerializedData(DefaultEventHandler.scala:100)
at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:72)
at kafka.producer.async.ProducerSendThread.tryToHandle(ProducerSendThread.scala:105)
at kafka.producer.async.ProducerSendThread$$anonfun$processEvents$3.apply(ProducerSendThread.scala:88)
at kafka.producer.async.ProducerSendThread$$anonfun$processEvents$3.apply(ProducerSendThread.scala:68)
at scala.collection.immutable.Stream.foreach(Stream.scala:547)
at kafka.producer.async.ProducerSendThread.processEvents(ProducerSendThread.scala:67)
at kafka.producer.async.ProducerSendThread.run(ProducerSendThread.scala:45)
[2016-02-17 15:07:25,318] WARN Failed to send producer request with correlation id 7 to broker 3 with data for partitions [partition2replica2,1] (kafka.producer.async.DefaultEventHandler)
java.nio.channels.ClosedChannelException
at kafka.network.BlockingChannel.send(BlockingChannel.scala:100)
at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:73)
at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:72)
at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SyncProducer.scala:103)
at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply(SyncProducer.scala:103)
at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply(SyncProducer.scala:103)
at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
at kafka.producer.SyncProducer$$anonfun$send$1.apply$mcV$sp(SyncProducer.scala:102)
at kafka.producer.SyncProducer$$anonfun$send$1.apply(SyncProducer.scala:102)
at kafka.producer.SyncProducer$$anonfun$send$1.apply(SyncProducer.scala:102)
at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
at kafka.producer.SyncProducer.send(SyncProducer.scala:101)
at kafka.producer.async.DefaultEventHandler.kafka$producer$async
Not sure what I am doing wrong here. Any ideas? (I can provide ifconfig output if anyone wants.) Any help will be appreciated.
[Edit 1]: Adding the status output from the ZooKeeper nodes, which seem to be in quorum:
echo stat| nc 10.30.3.2 2181
Received: 81
Sent: 80
Connections: 1
Outstanding: 0
Mode: follower
Node count: 149
echo stat| nc 10.30.3.3 2181
Received: 660
Sent: 664
Connections: 1
Outstanding: 0
Zxid: 0x600000109
Mode: leader
Node count: 149
echo stat| nc 10.30.3.4 2181
Received: 293
Sent: 295
Connections: 1
Outstanding: 0
Zxid: 0x600000109
Mode: follower
Node count: 149
As far as I can understand your setup, the ZooKeeper instances on each node should also be in quorum with each other to support the 3 Kafka server instances as one cluster. You have provided only the Kafka config, so I cannot tell whether they are configured that way.
You can check by using the four-letter commands on each ZooKeeper node as below:
echo stat | nc <zk ip> <zk port>
echo mntr | nc <zk ip> <zk port>
One should be a "leader" and other two should be "followers".
I am not sure if they will work as expected if they are not configured to be in quorum.
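For reference, a quorum setup usually means each node's zoo.cfg lists all three servers. A rough sketch only, using the IPs from your question and the default ZooKeeper ports (each node also needs a myid file in dataDir containing its own server number):
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zookeeper
clientPort=2181
server.1=10.30.3.2:2888:3888
server.2=10.30.3.3:2888:3888
server.3=10.30.3.4:2888:3888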
I am using SMPPSim (the Selenium Software SMSC simulator) and I have connected it to Kannel 1.4.3. I am sending messages from the SMPPSim user interface using "inject message".
I have noticed that if I send long messages from SMPPSim to Kannel, I don't receive those long messages, and the UDH is always 0, for both short messages and long (multipart) messages.
As far as I know, long messages should be divided into multiple messages, and their UDH should have a value other than 0.
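(For reference, the concatenation information I'd expect normally travels in a 6-byte User Data Header prepended to each segment, something like:
05 00 03 RR TT SS
where 05 is the UDH length, 00 and 03 are the 8-bit concatenation information element and its length, RR is the message reference, TT the total number of segments and SS this segment's number.)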
Keeping in mind that:
1. I have been asked to use Kannel 1.4.3 by the client; they are still using 1.4.3 and have many applications built on this version of Kannel, so upgrading is not in my hands.
2. It is mentioned that SMPPSim supports multipart messages, but how, I really don't know, and that is what I am looking for!
Here is the configuration (by the way, I am sending via the SMPPSim user interface).
kannel.conf
group = core
admin-port = 17000
smsbox-port = 17001
admin-password = Dumb-bugger
log-file = "/var/log/kannel/kannel.log"
log-level = 0
box-deny-ip = "*.*.*.*"
box-allow-ip = "127.0.0.1;192.168.1.*"
admin-deny-ip = "*.*.*.*"
admin-allow-ip = "127.0.0.1;192.168.1.*"
#unified-prefix = "+91,0091,+0091;+,;"
access-log = "/var/log/kannel/access.log"
dlr-storage = mysql
group = smsbox
bearerbox-host = localhost
sendsms-port = 17017
bearerbox-port = 17010
log-level = 0
mo-recode=false
group = sendsms-user
username = test
password = test
user-allow-ip = "127.0.0.1;192.168.1.*"
concatenation = true
max-messages = 10
group = smsc
smsc = smpp
smsc-id = "SMPPSim"
host = localhost
port =2775
receive-port = 2775
system-type = ""
smsc-username = smppclient1
smsc-password = password
keepalive = 2
interface-version = 34
source-addr-ton = 5
source-addr-npi = 0
source-addr-autodetect = yes
dest-addr-ton = 1
dest-addr-npi = 1
address-range = ""
enquire-link-interval = 60
max-pending-submits = 10
reconnect-delay = 10
priority = 0
smppsim.props
# SMPP_PORT specified the port that SMPPSim will listen on for connections from SMPP
# clients. SMPP_CONNECTION_HANDLERS determines the maximum number of client connections
# that can be handled concurrently.
SMPP_PORT=2775
SMPP_CONNECTION_HANDLERS=10
# Specify the classes that imlement connection and protocol handling respectively here.
# Such classes *must* be subclasses of com.seleniumsoftware.SMPPSim.ConnectionHandler and com.seleniumsoftware.SMPPSim.SMPPProtocolHandler respectively
# Or those classes themselves for the default (good) behaviour
# Supply your own subclasses with particular methods overridden if you want to implement
# bad SMSC behaviours to see how your client application copes...
CONNECTION_HANDLER_CLASS=com.seleniumsoftware.SMPPSim.StandardConnectionHandler
PROTOCOL_HANDLER_CLASS=com.seleniumsoftware.SMPPSim.StandardProtocolHandler
# Specify the class that implements the message state life cycle simulation.
# Such classes must extend the default class, LifeCycleManager
LIFE_CYCLE_MANAGER=com.seleniumsoftware.SMPPSim.LifeCycleManager
#
# The Deterministic Lifecycle Manager sets message state according to the first character of the message destination address:
# 1=EXPIRED,2=DELETED,3=UNDELIVERABLE,4=ACCEPTED,5=REJECTED, other=DELIVERED
# LIFE_CYCLE_MANAGER=com.seleniumsoftware.SMPPSim.DeterministicLifeCycleManager
# LifeCycleManager parameters
#
# Check and possibly change the state of messages in the OutboundQueue every n milliseconds
MESSAGE_STATE_CHECK_FREQUENCY=5000
# Maximum time (in milliseconds) in the initial ENROUTE state
MAX_TIME_ENROUTE=10000
# The minimum time to wait before generating a delivery receipt (ms)
DELAY_DELIVERY_RECEIPTS_BY=0
# Percentage of messages that change state each time we check (excluding expiry or messages being completely discarded due to age)
# Requires an integer between 0 and 100
PERCENTAGE_THAT_TRANSITION=75
# State transition percentages. These parameters define the percentage of messages that
# transition from ENROUTE to the specified final state. The list of percentages should
# add up to 100 and must be integer values. SMPPSim will adjust the percentages if they do not.
# Percentage of messages that will transition from ENROUTE to DELIVERED
PERCENTAGE_DELIVERED=90
# Percentage of messages that will transition from ENROUTE to UNDELIVERABLE
PERCENTAGE_UNDELIVERABLE=6
# Percentage of messages that will transition from ENROUTE to ACCEPTED
PERCENTAGE_ACCEPTED=2
# Percentage of messages that will transition from ENROUTE to REJECTED
PERCENTAGE_REJECTED=2
# Time messages held in queue before being discarded, after a final state has been reached (milliseconds)
# For example, after transitioning to DELIVERED (a final state), state info about this message will be
# retained in the queue for a further (e.g.) 60000 milliseconds before being deleted.
DISCARD_FROM_QUEUE_AFTER=60000
# Web Management
HTTP_PORT=88
HTTP_THREADS=1
DOCROOT=www
AUTHORISED_FILES=/css/style.css,/index.htm,/inject_mo.htm,/favicon.ico,/images/logo.gif,/images/dots.gif,/user-guide.htm,/images/homepage.gif,/images/inject_mo.gif
INJECT_MO_PAGE=/inject_mo.htm
# Account details. Comma seperate. SystemID and Password provided in Binds will be validated against these credentials.
SYSTEM_IDS=smppclient1,smppclient2
#SYSTEM_IDS=smppclient
PASSWORDS=password,password
#PASSWORDS=password
OUTBIND_ENABLED=false
#OUTBIND_ENABLED=true
OUTBIND_ESME_IP_ADDRESS=127.0.0.1
OUTBIND_ESME_PORT=2776
#OUTBIND_ESME_PORT=2777
#OUTBIND_ESME_SYSTEMID=smppclient1
OUTBIND_ESME_SYSTEMID=smppclient
OUTBIND_ESME_PASSWORD=password
# MO SERVICE
DELIVERY_MESSAGES_PER_MINUTE=0
DELIVER_MESSAGES_FILE=deliver_messages.csv
# LOOPBACK
LOOPBACK=FALSE
# ESME to ESME routing
ESME_TO_ESME=true
# QUEUES
# Maximum size parameters are expressed as max number of objects the queue can hold
OUTBOUND_QUEUE_MAX_SIZE=1000
INBOUND_QUEUE_MAX_SIZE=1000
# The delayed inbound queue holds DELIVER_SM (MO) messages which could not be delivered to the selected ESME
# because it replied "queue full". Such messages get stored in the delayed inbound queue and delivery is attempted again
# periodically according to the following configuration.
#
# How many seconds to wait between passes through the delayed inbound queue. Recommend this is set to at least one minute.
DELAYED_INBOUND_QUEUE_PROCESSING_PERIOD=60
DELAYED_INBOUND_QUEUE_MAX_ATTEMPTS=100
# LOGGING
# See logging.properties for configuration of the logging system as a whole
#
# Set the following property to true to have each PDU logged in human readable
# format. Uses INFO level logging so the log level must be set accordingly for this
# output to appear.
DECODE_PDUS_IN_LOG=true
# PDU CAPTURE
# The following properties allow binary and/or decoded PDUs to be captured in files
# This is to allow the results of test runs (especially regression testing) to be
# checked with reference to these files
#
# Note that currently you must use the StandardConnectionHandler and StandardProtocolHandler classes for this
# feature to be available.
#
# _SME_ properties concern PDUs sent from the SME application to SMPPSim
# _SMPPSIM_ properties concern PDUs sent from SMPPSim to the SME application
#
CAPTURE_SME_BINARY=false
CAPTURE_SME_BINARY_TO_FILE=sme_binary.capture
CAPTURE_SMPPSIM_BINARY=false
CAPTURE_SMPPSIM_BINARY_TO_FILE=smppsim_binary.capture
CAPTURE_SME_DECODED=false
CAPTURE_SME_DECODED_TO_FILE=sme_decoded.capture
CAPTURE_SMPPSIM_DECODED=false
CAPTURE_SMPPSIM_DECODED_TO_FILE=smppsim_decoded.capture
# Byte Stream Callback
#
# This feature, if enabled, will cause SMPPSim to send PDUs received from the ESME or sent to it
# as byte streams over a couple of connections.
# This is intended to be useful in automated testing scenarios where you need to notify the test application
# with details of what was *actually* received by SMPPSim (or sent by it).
#
# Note that byte streams are prepended by the following fields:
#
# a 4 byte integer which indicates the length of the whole callback message
# a 1 byte indicator of the type of interaction giving rise to the callback,
# - where 0x01 means SMPPSim received a request PDU and
# 0x02 means SMPPSim sent a request PDU (e.g. a DeliverSM)
# a 4 byte fixed length identified, which identifies the SMPPSim instance that sent the bytes
#
# So the length of the SMPP pdu is the callback message length - 9.
#
# LENGTH(4) TYPE(1) ID(4) PDU (LENGTH)
CALLBACK=false
CALLBACK_ID=SIM1
CALLBACK_TARGET_HOST=localhost
CALLBACK_PORT=3333
# MISC
SMSCID=SMPPSim
smppsim logs
2012.11.06 10:19:05 958 INFO 22 00010139:36373731:32303030:38333600:
2012.11.06 10:19:05 958 INFO 22 01013939:30300000:00000000:00000000:
2012.11.06 10:19:05 959 INFO 22 00546865:20756E69:76657273:69747920:
2012.11.06 10:19:05 959 INFO 22 77617320:666F756E:64656420:696E204D:
2012.11.06 10:19:05 959 INFO 22 61726368:20332C20:31383634:20617320:
2012.11.06 10:19:05 959 INFO 22 74686520:436F6C6F:7261646F:2053656D:
2012.11.06 10:19:05 959 INFO 22 696E6172:79206279:204A6F68:6E204576:
2012.11.06 10:19:05 959 INFO 22 616E732C:20746865:20666F72:6D657220:
2012.11.06 10:19:05 959 INFO 22 476F7665:726E6F72:206F6620:436F6C6F:
2012.11.06 10:19:05 959 INFO 22 7261646F:20546572:7269746F:72792C20:
2012.11.06 10:19:05 959 INFO 22 77686F20:68616420:6265656E:20617070:
2012.11.06 10:19:05 960 INFO 22 6F696E74:65642062:79205072:65736964:
2012.11.06 10:19:05 960 INFO 22 656E7420:41627261:68616D20:4C696E63:
2012.11.06 10:19:05 960 INFO 22 6F6C6E2E:204A6F68:6E204576:616E732C:
2012.11.06 10:19:05 960 INFO 22 2077686F:20616C73:6F20666F:756E6465:
2012.11.06 10:19:05 960 INFO 22 64204E6F:72746877:65737465:726E2055:
2012.11.06 10:19:05 960 INFO 22 6E697665:72736974:79207072:696F7220:
2012.11.06 10:19:05 960 INFO 22 746F2066:6F756E64:696E6720:4455202E:
2012.11.06 10:19:05 960 INFO 22 20
2012.11.06 10:19:05 960 INFO 22 cmd_len=0,cmd_id=5,cmd_status=0,seq_no=4,service_type=,source_addr_ton=1
2012.11.06 10:19:05 961 INFO 22 source_addr_npi=1,source_addr=919894198941,dest_addr_ton=1,dest_addr_npi=1
2012.11.06 10:19:05 961 INFO 22 destination_addr=9900,esm_class=0,protocol_ID=0,priority_flag=0
2012.11.06 10:19:05 961 INFO 22 schedule_delivery_time=,validity_period=,registered_delivery_flag=0
2012.11.06 10:19:05 961 INFO 22 replace_if_present_flag=0,data_coding=0,sm_default_msg_id=0,sm_length=256
2012.11.06 10:19:05 961 INFO 22 short_message=The university was founded in March 3
2012.11.06 10:19:05 961 INFO 22 1864 as the Colorado Seminary by John Evans
2012.11.06 10:19:05 961 INFO 22 the former Governor of Colorado Territory
2012.11.06 10:19:05 961 INFO 22 who had been appointed by President Abraham Lincoln. John Evans
2012.11.06 10:19:05 961 INFO 22 who also founded Northwestern University prior to founding DU .
2012.11.06 10:19:05 961 INFO 22
2012.11.06 10:19:05 961 INFO 22 addressIsServicedByReceiver(9900)
2012.11.06 10:19:05 962 INFO 22 InboundQueue: empty - waiting
2012.11.06 10:19:05 962 INFO 16 : DELIVER_SM_RESP:
2012.11.06 10:19:05 962 INFO 16 Hex dump (17) bytes:
2012.11.06 10:19:05 962 INFO 16 00000011:80000005:00000000:00000004:
2012.11.06 10:19:05 962 INFO 16 00
2012.11.06 10:19:05 963 INFO 16 cmd_len=17,cmd_id=-2147483643,cmd_status=0,seq_no=4,system_id=
2012.11.06 10:19:05 963 INFO 16 DelayedInboundQueue: now contains 0 object(s)
2012.11.06 10:19:05 963 INFO 16
access.log
2012-11-06 10:19:05 Receive SMS [SMSC:SMPPSim] [SVC:] [ACT:] [BINF:] [FID:] [from:+919894198941] [to:+9900] [flags:-1:0:-1:0:-1] [msg:0:] [udh:0:]
Unfortunately, SMPPSim does not support long MO SMS (deliver_sm) as easily as just injecting them from the HTTP interface; it will send the PDU as-is to Kannel (or whatever your ESME is). You can check that by sniffing the SMPP traffic between Kannel and SMPPSim.
When you inject a long MO SMS, SMPPSim will instantiate a new com.seleniumsoftware.SMPPSim.pdu.DeliverSm object and then populate it with the arguments you manually entered on the HTTP interface, so if you entered a long text, it will simply put the whole text into short_message no matter its length.
You can send long MO SMS through that HTTP injection interface when you know how to use sar_total_segments, sar_segment_seqnum and sar_msg_ref_num; here's a good and simple tutorial.
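For example (values purely illustrative), a two-part message would be injected as two deliver_sm operations sharing the same reference number:
segment 1: sar_msg_ref_num=1, sar_total_segments=2, sar_segment_seqnum=1, short_message=<first part of the text>
segment 2: sar_msg_ref_num=1, sar_total_segments=2, sar_segment_seqnum=2, short_message=<second part of the text>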
I configured sar_total_segments, sar_segment_seqnum and sar_msg_ref_num, but the UDH is 0 when using com.cloudhopper.commons.gsm.GsmUtil.getShortMessageUserDataHeader.