I am trying to configure an OVS bridge to connect to a controller. I notice that it sends the OpenFlow HELLO but does not complete the connection. I see the following:
ovs-ofctl show br-flowmon
OFPT_FEATURES_REPLY (xid=0x2): dpid:deadbeefdeadbeef
n_tables:254, n_buffers:0
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst
1(patch-flowmon1): addr:26:1f:db:26:99:4a
config: 0
state: 0
speed: 0 Mbps now, 0 Mbps max
LOCAL(br-flowmon): addr:56:ea:36:94:4b:4e
config: PORT_DOWN
state: LINK_DOWN
speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0
I suspect that it does not complete the connection because the config state is PORT_DOWN. How can I "turn on" the port? Is there any other possible reason for this behavior?
(Is there an ovs-vsctl or ovs-ofctl command to do this?)
Thank you
ifconfig br-flowmon up
Fixed the issue but I still don't see a completed connection at the controller. Any clues?
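A hedged way to dig further, since OVS tracks the controller session state itself (br-flowmon is the bridge from above):
# Show the configured controller target for the bridge
ovs-vsctl get-controller br-flowmon
# Dump the Controller table; 'is_connected: true' means the control channel is established
ovs-vsctl list controller
If is_connected stays false, verify that the target (e.g. tcp:<controller-ip>:6633) matches where the controller is actually listening, and check for firewalls between the switch and the controller.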
In our OpenStack environment, we saw huge packet loss, and we found the packets were being dropped at Open vSwitch. Could someone give a clue how to improve the situation?
[bscuser@compute-4 ~]$ sudo ovs-ofctl dump-ports br-int vhub97ae049-a2
OFPST_PORT reply (xid=0x4): 1 ports
port "vhub97ae049-a2": rx pkts=12472105, bytes=1647807101, drop=0, errs=0, frame=?, over=?, crc=?
tx pkts=83585797, bytes=77693614917, drop=4643534, errs=?, coll=?
[root@compute1 ~]# ovs-ofctl show br-int
OFPT_FEATURES_REPLY (xid=0x2): dpid:0000c272e446ba49
n_tables:254, n_buffers:0
Also, n_buffers is 0; is that normal? I searched the web, and all the results seem to show 256, but I don't know how to change it.
Thank you in advance.
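A sketch of how one might narrow down where the drops occur (commands assume the br-int bridge and port name from the output above):
# Repeat the per-port counters to see which one is actually growing
watch -n 1 'sudo ovs-ofctl dump-ports br-int vhub97ae049-a2'
# Datapath-level stats; a growing 'lost' counter means packets were dropped on the way to userspace
sudo ovs-dpctl show
Since the drops shown above are on the tx side of the port, a guest that is not draining its receive queues fast enough is one common cause worth checking.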
I am trying to simulate a syslog Flume agent which should eventually put the data into HDFS.
My scenario is as follows:
The syslog Flume agent is running on physical server A; the configuration details follow:
===
syslog_agent.sources = syslog_source
syslog_agent.channels = MemChannel
syslog_agent.sinks = HDFS
# Describing/Configuring the source
syslog_agent.sources.syslog_source.type = syslogudp
#syslog_agent.sources.syslog_source.bind = 0.0.0.0
syslog_agent.sources.syslog_source.bind = localhost
syslog_agent.sources.syslog_source.port = 514
# Describing/Configuring the sink
syslog_agent.sinks.HDFS.type=hdfs
syslog_agent.sinks.HDFS.hdfs.path=hdfs://<IP_ADD_OF_NN>:8020/user/ec2-user/syslog
syslog_agent.sinks.HDFS.hdfs.fileType=DataStream
syslog_agent.sinks.HDFS.hdfs.writeFormat=Text
syslog_agent.sinks.HDFS.hdfs.batchSize=1000
syslog_agent.sinks.HDFS.hdfs.rollSize=0
syslog_agent.sinks.HDFS.hdfs.rollCount=10000
syslog_agent.sinks.HDFS.hdfs.rollInterval=600
# Describing/Configuring the channel
syslog_agent.channels.MemChannel.type=memory
syslog_agent.channels.MemChannel.capacity=10000
syslog_agent.channels.MemChannel.transactionCapacity=1000
#Bind sources and sinks to the channel
syslog_agent.sources.syslog_source.channels = MemChannel
syslog_agent.sinks.HDFS.channel = MemChannel
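For reference, a typical way to launch an agent with this configuration (the file name syslog_agent.conf and the --conf directory are assumptions; the agent name must match the syslog_agent prefix used above):
flume-ng agent --conf conf --conf-file syslog_agent.conf --name syslog_agent -Dflume.root.logger=INFO,console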
I am sending syslog logs from a different physical server B using the built-in utility logger, like this:
sudo logger --server <IP_Address_physical_server_A> --port 514 --udp
I do see the log messages arriving on physical server A in /var/log/messages.
But I don't see any messages going into HDFS; it seems the Flume agent isn't able to get any data, even though the messages are going from server B to server A.
Am I doing something wrong here? Can anyone help me figure out how to resolve this?
EDIT
The following is the output of the netstat command on server A, where the syslog daemon is running:
tcp 0 0 0.0.0.0:514 0.0.0.0:* LISTEN 573/rsyslogd
tcp6 0 0 :::514 :::* LISTEN 573/rsyslogd
udp 0 0 0.0.0.0:514 0.0.0.0:* 573/rsyslogd
udp6 0 0 :::514 :::* 573/rsyslogd
I'm not sure what logger --server gives you, but most examples I have seen use netcat.
In any case, you've set batchSize=1000, so until you send 1000 messages, Flume will not write to HDFS.
Keep in mind, HDFS is not a streaming platform, and prefers not to have small files.
If you're looking for log collection, look into Elasticsearch or Solr fronted by a Kafka topic.
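A quick end-to-end test is possible with netcat (a sketch; the placeholder IP is from the question, and <13> is a syslog priority header of the kind the syslogudp source parses):
echo '<13>test message from server B' | nc -u -w1 <IP_Address_physical_server_A> 514
One hedged observation from the netstat output above: rsyslogd, not Flume, is listening on 0.0.0.0:514, and the agent's source is bound to localhost, so datagrams arriving on server A's external interface would be delivered to rsyslog (hence /var/log/messages) rather than to the Flume source.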
Here's my local setup: 3 VMs (using VirtualBox), with Kafka and ZooKeeper installed on all three. They are all talking to each other as well.
I am trying to use kafka-console-producer from my local machine, which requires a broker list. I am supplying the IPs of my VMs, but it doesn't seem to be working. I've tried the advertised.host properties on the VMs too, but they have no effect for me. Here's my server.properties from the three machines:
Server 1:
broker.id=4
port=9092
host.name=10.30.3.4
advertised.host.name=10.30.3.4
advertised.port=9092
zookeeper.connect=10.30.3.4:2181
zookeeper.connection.timeout.ms=6000
Server 2:
broker.id=3
port=9092
host.name=10.30.3.3
advertised.host.name=10.30.3.3
advertised.port=9092
zookeeper.connect=10.30.3.3:2181
zookeeper.connection.timeout.ms=6000
Server 3:
broker.id=2
port=9092
host.name=10.30.3.2
advertised.host.name=10.30.3.2
advertised.port=9092
zookeeper.connect=10.30.3.2:2181
zookeeper.connection.timeout.ms=6000
My VirtualBox also has port forwarding set up (screenshot not shown); the other two machines are similar, with only the ports tweaked a bit.
I am able to connect to ZooKeeper just fine:
bin/zkCli.sh -server 127.0.0.1:9999
connects to ZooKeeper on the VM. But if I try kafka-console-producer, it fails when I try to send messages:
bin/kafka-console-producer.sh --broker-list 127.0.0.1:9502 --topic partition2replica2 --timeout 3000
leads to:
[2016-02-17 15:06:36,943] WARN Property topic is not valid (kafka.utils.VerifiableProperties)
hi
there
[2016-02-17 15:07:23,699] WARN Failed to send producer request with correlation id 3 to broker 3 with data for partitions [partition2replica2,1] (kafka.producer.async.DefaultEventHandler)
java.nio.channels.ClosedChannelException
at kafka.network.BlockingChannel.send(BlockingChannel.scala:100)
at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:73)
at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:72)
at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SyncProducer.scala:103)
at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply(SyncProducer.scala:103)
at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply(SyncProducer.scala:103)
at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
at kafka.producer.SyncProducer$$anonfun$send$1.apply$mcV$sp(SyncProducer.scala:102)
at kafka.producer.SyncProducer$$anonfun$send$1.apply(SyncProducer.scala:102)
at kafka.producer.SyncProducer$$anonfun$send$1.apply(SyncProducer.scala:102)
at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
at kafka.producer.SyncProducer.send(SyncProducer.scala:101)
at kafka.producer.async.DefaultEventHandler.kafka$producer$async$DefaultEventHandler$$send(DefaultEventHandler.scala:255)
at kafka.producer.async.DefaultEventHandler$$anonfun$dispatchSerializedData$2.apply(DefaultEventHandler.scala:106)
at kafka.producer.async.DefaultEventHandler$$anonfun$dispatchSerializedData$2.apply(DefaultEventHandler.scala:100)
at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
at kafka.producer.async.DefaultEventHandler.dispatchSerializedData(DefaultEventHandler.scala:100)
at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:72)
at kafka.producer.async.ProducerSendThread.tryToHandle(ProducerSendThread.scala:105)
at kafka.producer.async.ProducerSendThread$$anonfun$processEvents$3.apply(ProducerSendThread.scala:88)
at kafka.producer.async.ProducerSendThread$$anonfun$processEvents$3.apply(ProducerSendThread.scala:68)
at scala.collection.immutable.Stream.foreach(Stream.scala:547)
at kafka.producer.async.ProducerSendThread.processEvents(ProducerSendThread.scala:67)
at kafka.producer.async.ProducerSendThread.run(ProducerSendThread.scala:45)
[2016-02-17 15:07:25,318] WARN Failed to send producer request with correlation id 7 to broker 3 with data for partitions [partition2replica2,1] (kafka.producer.async.DefaultEventHandler)
java.nio.channels.ClosedChannelException
at kafka.network.BlockingChannel.send(BlockingChannel.scala:100)
at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:73)
at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:72)
at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SyncProducer.scala:103)
at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply(SyncProducer.scala:103)
at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply(SyncProducer.scala:103)
at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
at kafka.producer.SyncProducer$$anonfun$send$1.apply$mcV$sp(SyncProducer.scala:102)
at kafka.producer.SyncProducer$$anonfun$send$1.apply(SyncProducer.scala:102)
at kafka.producer.SyncProducer$$anonfun$send$1.apply(SyncProducer.scala:102)
at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
at kafka.producer.SyncProducer.send(SyncProducer.scala:101)
at kafka.producer.async.DefaultEventHandler.kafka$producer$async
Not sure what I am doing wrong here; any ideas? (I can provide ifconfig output if anyone wants.) Any help will be appreciated.
[Edit 1]: Adding output from the ZooKeeper nodes; they seem to be in quorum:
echo stat| nc 10.30.3.2 2181
Received: 81
Sent: 80
Connections: 1
Outstanding: 0
Mode: follower
Node count: 149
echo stat| nc 10.30.3.3 2181
Received: 660
Sent: 664
Connections: 1
Outstanding: 0
Zxid: 0x600000109
Mode: leader
Node count: 149
echo stat| nc 10.30.3.4 2181
Received: 293
Sent: 295
Connections: 1
Outstanding: 0
Zxid: 0x600000109
Mode: follower
Node count: 149
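One more check that can help here (hedged; it reuses the forwarded ZooKeeper port from above): each broker registers the host:port that clients must connect to under /brokers/ids/<id>, and the producer has to be able to reach exactly that address, regardless of what is passed in --broker-list:
bin/zkCli.sh -server 127.0.0.1:9999 get /brokers/ids/3
If the registered endpoint is 10.30.3.3:9092 and that address is not routable from the host, the producer's data connection will fail even though the bootstrap via the forwarded port worked.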
As far as I can understand your setup, the ZooKeepers on each node should also be in quorum with each other to support the 3 Kafka server instances as one cluster. You have provided the Kafka config only, so I cannot tell whether they are configured that way.
You can check by using the four-letter commands on each ZooKeeper node, like below:
echo stat | nc <zk ip> <zk port>
echo mntr | nc <zk ip> <zk port>
One should be a "leader" and the other two should be "followers".
I am not sure if they will work as expected if they are not configured to be in quorum.
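For reference, a minimal sketch of an in-quorum setup (server IDs and ports are assumptions based on the IPs above). Each node's zoo.cfg would list all three peers:
server.1=10.30.3.2:2888:3888
server.2=10.30.3.3:2888:3888
server.3=10.30.3.4:2888:3888
and each broker's server.properties would then point at the whole ensemble rather than at a single node:
zookeeper.connect=10.30.3.2:2181,10.30.3.3:2181,10.30.3.4:2181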
I have the following network configuration (topology diagram not shown):
I have tried to ping from 192.168.1.100 to 192.168.1.101 and it succeeds.
I have tried to ping from 192.168.50.100 to 192.168.50.101, which are on VLAN 50, and it fails.
The simulation diagram showed that ARP is not being forwarded from Switch1 to Switch2.
I have configured both sides of the switches as trunks.
I am just learning about VLANs and trunking.
Can anybody please explain what configuration I am missing?
If I remove Switch1 and connect Switch0 directly to Switch2, everything works fine.
EDIT
Switch0 VLAN configuration (screenshot).
Switch1 VLAN configuration (screenshot).
Switch2 VLAN configuration (screenshot).
You have to put the assigned ports into the VLAN on Switch0 and Switch2; in my case:
Switch0(config-if)#int fastEthernet0/2
Switch0(config-if)#switchport access vlan 50
Switch0(config-if)#switchport mode access
Switch2(config-if)#int fastEthernet0/3
Switch2(config-if)#switchport access vlan 50
Switch2(config-if)#switchport mode access
You may also need to add VLAN 50 on Switch1 (I don't know how you have it configured):
Switch1(config)#vlan 50
Switch1(config-vlan)#name VLAN0050
Switch1(config-vlan)#exit
Switch1(config)#
(These are the ports where the Ethernet cable connects from the PC to the switch.)
As you can see, PC0 reaches PC2 successfully and PC1 reaches PC3 successfully.
Write these commands on Switch 0 and Switch 2:
Switch#configure terminal
Switch(config)#vlan 50
Switch(config-vlan)#name test
Switch(config-vlan)#exit
Switch(config)#interface fastEthernet 0/2
Switch(config-if)#switchport mode access
Switch(config-if)#switchport access vlan 50
Switch(config-if)#exit
Switch(config)#interface fastEthernet 0/3
Switch(config-if)#switchport mode trunk
Switch(config-if)#exit
Switch(config)#exit
Switch#write memory
Switch 1:
On Switch 1 you must define the VLANs, or you can delete this switch and connect Switch 0 and Switch 2 directly with a trunk; another option is VTP mode.
Either way, write these commands on Switch 1:
Switch(config)#interface fastEthernet 0/1
Switch(config-if)#switchport mode trunk
Switch(config-if)#exit
Switch(config)#interface fastEthernet 0/2
Switch(config-if)#switchport mode trunk
Switch(config-if)#exit
Switch(config)#vlan 50
Switch(config-vlan)#name test
Switch(config-vlan)#exit
Switch(config)#exit
Switch#write memory
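To confirm the result, the standard IOS show commands can be run on each switch (port numbers will vary with your topology):
Switch#show vlan brief
Switch#show interfaces trunk
VLAN 50 should list the access ports on Switch 0 and Switch 2, and the trunk links should show VLAN 50 as allowed and active.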
I'm trying to write the Linux client script for a simple port-knocking setup. My server has iptables configured to require a certain sequence of TCP SYNs to certain ports before opening up access. I'm able to knock successfully using telnet or by manually invoking netcat (Ctrl-C right after running the command), but I'm failing to build an automated knock script.
My attempt at an automated port-knocking script consists simply of "nc -w 1 x.x.x.x 1234" commands, which connect to x.x.x.x port 1234 and time out after one second. The problem, however, seems to be the kernel(?) doing automatic SYN retries: most of the time more than one SYN is sent during the one second nc tries to connect. I've checked this with tcpdump.
So, does anyone know how to prevent the SYN retries and make netcat send only one SYN per connection/knock attempt? Other solutions which do the job are also welcome.
Yeah, I checked; you can use nc too:
$ nc -z example.net 1000 2000 3000; ssh example.net
The magic comes from -z (zero-I/O mode).
You may use nmap for port knocking (SYN). Just exec:
for p in 1000 2000 3000; do
nmap -Pn --max-retries 0 -p $p example.net;
done
try this (as root):
echo 1 > /proc/sys/net/ipv4/tcp_syn_retries
or this:
/* Linux-specific socket option from <netinet/tcp.h>: cap the number of SYN retransmits */
int sc = 1;
setsockopt(sock, IPPROTO_TCP, TCP_SYNCNT, &sc, sizeof(sc));
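A small shell sketch wrapping the sysctl approach so the change does not persist past the knock (host and port are placeholders; run as root):
old=$(cat /proc/sys/net/ipv4/tcp_syn_retries)
echo 1 > /proc/sys/net/ipv4/tcp_syn_retries
nc -z -w 1 example.net 1234   # the knock; now at most one retransmitted SYN
echo "$old" > /proc/sys/net/ipv4/tcp_syn_retries
Keep in mind the setting is system-wide while it is in effect, so it briefly affects every outgoing TCP connection on the machine.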
You can't prevent the TCP/IP stack from doing what it is expressly designed to do.