Matching TCP flows based on TCP ports with the Ryu Controller

I'm trying to redirect TCP flows to specific servers based on their TCP source port, using the Ryu SDN controller. This is my topology (kept simple for the first step):
host -- ovs1 -- ovs2 -- server
match rule for ovs1:
match = parse.OFPMatch(in_port=port,eth_type=0x0800, ipv4_dst=server_ip, tcp_src=tcp_pkt.src_port)
But I get the following error:
EventOFPErrorMsg received.
version=0x4, msg_type=0x1, msg_len=0x4c, xid=0x370bf1bf
`-- msg_type: OFPT_ERROR(1)
OFPErrorMsg(type=0x4, code=0x9, data=b'\x04\x0e\x00\x70\x37\x0b\xf1\xbf\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x03\xff\xff\xff\xff\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x28\x80\x00\x00\x04\x00\x00\x00\x01\x80\x00\x0a\x02')
|-- type: OFPET_BAD_MATCH(4)
|-- code: OFPBMC_BAD_PREREQ(9)
`-- data: version=0x4, msg_type=0xe, msg_len=0x70, xid=0x370bf1bf
`-- msg_type: OFPT_FLOW_MOD(14)
The point is, if I remove the tcp_src field, everything works fine, which is why I think the problem is related to how I'm passing the port.
Any ideas?
Thanks in advance!

OK, after spending a lot of time on this problem I found the answer. In order to match on TCP ports, all of the match prerequisites have to be satisfied, which in my case means adding the ip_proto field:
match = parse.OFPMatch(in_port=port,eth_type=0x0800, ip_proto=6, ipv4_dst=server_ip, tcp_src=tcp_pkt.src_port)
I found the answer here: OpenFlow Switch Specification
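In case it helps anyone hitting the same OFPBMC_BAD_PREREQ error, here is a minimal sketch of how that match might be installed from a Ryu app speaking OpenFlow 1.3 (version 0x4, as in the error dump above). The helper name, the priority, and the datapath/in_port/server_ip/tcp_src/out_port arguments are illustrative assumptions, not taken from the original code:

def install_tcp_src_match(datapath, in_port, server_ip, tcp_src, out_port):
    # ofproto/parser come from the connected switch (OpenFlow 1.3 here)
    ofproto = datapath.ofproto
    parser = datapath.ofproto_parser
    # tcp_src requires ip_proto=6, which in turn requires eth_type=0x0800
    match = parser.OFPMatch(in_port=in_port, eth_type=0x0800, ip_proto=6,
                            ipv4_dst=server_ip, tcp_src=tcp_src)
    actions = [parser.OFPActionOutput(out_port)]
    inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS, actions)]
    mod = parser.OFPFlowMod(datapath=datapath, priority=10, match=match, instructions=inst)
    datapath.send_msg(mod)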

How to control vhost_shared_traffic memory K8s nginx ingress?

Background
We run a Kubernetes cluster that handles several PHP/Lumen microservices. We started seeing the app's php-fpm/nginx report a 499 status code in its logs, and it seems to correspond with the client getting a blank response (curl returns curl: (52) Empty reply from server) while the applications log 499.
10.10.x.x - - [09/Mar/2020:18:26:46 +0000] "POST /some/path/ HTTP/1.1" 499 0 "-" "curl/7.65.3"
My understanding is that nginx returns the 499 code when the client socket is no longer open/available to return the content to. In this situation that appears to mean something before the nginx/application layer is terminating the connection. Our configuration currently is:
ELB -> k8s nginx ingress -> application
So my thoughts are that it's either the ELB or the ingress, since the application is the one with no socket left to return to. So I started digging into the ingress logs...
Potential core problem?
While looking through the ingress logs I'm seeing quite a few of these:
2020/03/06 17:40:01 [crit] 11006#11006: ngx_slab_alloc() failed: no memory in vhost_traffic_status_zone "vhost_traffic_status"
Potential Solution
I imagine that if I gave vhost_traffic_status_zone some more memory, at least that error would go away and I could move on to the next one, but I can't seem to find any configmap value or annotation that would let me control this. I've checked the docs:
https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/
https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/
Thanks in advance for any insight / suggestions / documentation I might be missing!
Here is the standard way to look up how to modify the nginx.conf in the ingress controller. After that, I'll link to some guidance on how much memory you should give the zone.
First, get the ingress controller version by checking the image on the deployment:
kubectl -n <namespace> get deployment <deployment-name> -o yaml | grep 'image:'
From there, you can retrieve the code for your version from the corresponding release page. In this example I will be using version 0.10.2:
https://github.com/kubernetes/ingress-nginx/releases/tag/nginx-0.10.2
The nginx.conf template can be found at rootfs/etc/nginx/template/nginx.tmpl in the code, or at /etc/nginx/template/nginx.tmpl on a pod. This can be grepped for the line of interest. In the example case, we find the following line in nginx.tmpl:
vhost_traffic_status_zone shared:vhost_traffic_status:{{ $cfg.VtsStatusZoneSize }};
This gives us the config variable to look up in the code. A grep for VtsStatusZoneSize leads us to these lines in internal/ingress/controller/config/config.go:
// Description: Sets parameters for a shared memory zone that will keep states for various keys. The cache is shared between all worker processes
// https://github.com/vozlt/nginx-module-vts#vhost_traffic_status_zone
// Default value is 10m
VtsStatusZoneSize string `json:"vts-status-zone-size,omitempty"`
This gives us the key "vts-status-zone-size" to be added to the configmap "ingress-nginx-ingress-controller". The current value can be found in the rendered nginx.conf template on a pod at /etc/nginx/nginx.conf.
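As a sketch (the namespace placeholder and the "32m" value are assumptions on my part; per the comment above, the default is 10m), the key could be added with a merge patch:
kubectl -n <namespace> patch configmap ingress-nginx-ingress-controller --type merge -p '{"data":{"vts-status-zone-size":"32m"}}'
The controller watches its configmap and reloads nginx when it changes, so the new zone size should show up in the rendered /etc/nginx/nginx.conf shortly afterwards.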
When it comes to what size you may want to set the zone to, the docs here suggest setting it to more than usedSize * 2:
If the message("ngx_slab_alloc() failed: no memory in vhost_traffic_status_zone") printed in error_log, increase to more than (usedSize * 2).
https://github.com/vozlt/nginx-module-vts#vhost_traffic_status_zone
"usedSize" can be found by hitting the stats page for nginx or through the JSON endpoint. Here is the request to get the JSON version of the stats and if you have jq the path to the value: curl http://localhost:18080/nginx_status/format/json 2> /dev/null | jq .sharedZones.usedSize
Hope this helps.

tcpprep: Command line arguments not allowed

I'm not sure why executing the command below in an Ubuntu terminal throws an error. The tcpprep syntax and options are as given in the help doc, yet it still throws an error.
root@test-vm:~# /usr/bin/tcpprep --cachefile='cachefile1' --pcap='/pcaps/http.pcap'
tcpprep: Command line arguments not allowed
tcpprep (tcpprep) - Create a tcpreplay cache cache file from a pcap file
root@test-vm:~# /usr/bin/tcpprep -V
tcpprep version: 3.4.4 (build 2450) (debug)
There are two problems with your command (and it doesn't help that tcpprep errors are vague or wrong).
Problem #1: Commands out of order
tcpprep requires that -i/--pcap come before -o/--cachefile. You can fix this as below, but then you get a different error:
bash$ /usr/bin/tcpprep --pcap='/pcaps/http.pcap' --cachefile='cachefile1'
Fatal Error in tcpprep_api.c:tcpprep_post_args() line 387:
Must specify a processing mode: -a, -c, -r, -p
Note that the error above is not even accurate! -e/--mac can also be used!
Problem #2: Processing mode must be specified
tcpprep is used to preprocess a capture file into client/server using a heuristic that you provide. Looking through the tcpprep manpage, there are 5 valid options (-acerp). Given this capture file as input.pcapng with server 192.168.122.201 and next hop mac 52:54:00:12:35:02,
-a/--auto
Let tcpprep determine based on one of 5 heuristics: bridge, router, client, server, first. Ex:
tcpprep --auto=first --pcap=input.pcapng --cachefile=input.cache
-c/--cidr
Specify the server by CIDR range. We see servers at 192.168.122.201, 192.168.122.202, and 192.168.3.40, so summarize with 192.168.0.0/16:
tcpprep --cidr=192.168.0.0/16 --pcap=input.pcapng --cachefile=input.cache
-e/--mac
This is not as useful here, as ALL traffic in this capture has a destination MAC of the next hop (52:54:00:12:35:02), broadcast (ff:ff:ff:ff:ff:ff), or multicast (33:33:00:01:00:02). Nonetheless, traffic from the next hop won't be client traffic, so this would look like:
tcpprep --mac=52:54:00:12:35:02 --pcap=input.pcapng --cachefile=input.cache
-r/--regex
This is for IP ranges, and is an alternative to summarizing subnets with --cidr. This would be more useful if you have several IPs like 10.0.20.1, 10.1.20.1, 10.2.20.1, ... where summarization won't work and regex will. This is one regex we could use to summarize the servers:
tcpprep --regex="192\.168\.(122|3).*" --pcap=input.pcapng --cachefile=input.cache
-p/--port
Looking at Wireshark > Statistics > Endpoints, we see that ports [135,139,445,1024]/tcp and [137,138]/udp are associated with the server IPs. 1024/tcp, used with dcerpc, is the only one that falls outside the well-known range 0-1023, so we have to specify it manually. Per services syntax, we'd represent this as 'dcerpc 1024/tcp'. In order to specify a port, we also need to specify a --services file. We can provide one inline as a temporary file descriptor with process substitution. Altogether:
tcpprep --port --services=<(echo "dcerpc 1024/tcp") --pcap=input.pcapng --cachefile=input.cache
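Whichever mode you choose, the resulting cache file is what tells tcpreplay which half of the conversation to send out of which interface. As a sketch (the interface names here are placeholders for your own):
tcpreplay --cachefile=input.cache --intf1=eth0 --intf2=eth1 input.pcapng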
Further Reading
For more examples and information, check out the online docs.

nmap wordpress script scan not return result

I am trying to use the following nmap scripts, http-wordpress-enum.nse and http-wordpress-plugins.nse, to scan a WordPress website.
To access this WordPress website you have to go to the following link: http://192.168.0.1/wp/
I am having trouble running these nmap scripts against that host. When I do
nmap -p80 --script http-wordpress-plugins.nse 192.168.0.1
no results are returned, even though I know there are plugins installed. Is that because the address nmap scans is http://192.168.0.1 rather than http://192.168.0.1/wp/, so nmap just sees that there is no actual WordPress site there and terminates the scan? Does anyone have a suggestion for how to fix this?
Thank you in advance
You should use the http-wordpress-plugins.root script argument to specify your "/wp/" path. In your case, something like:
nmap -p80 --script http-wordpress-plugins.nse --script-args http-wordpress-plugins.root="/wp/" 192.168.0.1
Quoting the source code of the http-wordpress-plugins.nse script (/usr/share/nmap/scripts/http-wordpress-plugins.nse):
description = [[
Tries to obtain a list of installed WordPress plugins by brute force
testing for known plugins.
The script will brute force the /wp-content/plugins/ folder with a dictionary
of 14K (and counting) known WP plugins. Anything but a 404 means that a given
plugin directory probably exists, so the plugin probably also does.
The available plugins for Wordpress is huge and despite the efforts of Nmap to
parallelize the queries, a whole search could take an hour or so. That's why
the plugin list is sorted by popularity and by default the script will only
check the first 100 ones. Users can tweak this with an option (see below).
]]
---
-- @args http-wordpress-plugins.root If set, points to the blog root directory on the website. If not, the script will try to find a WP directory installation or fall back to root.
-- @args http-wordpress-plugins.search As the plugins list contains tens of thousand of plugins, this script will only search the 100 most popular ones by default.
-- Use this option with a number or "all" as an argument for a more comprehensive brute force.
--
-- @usage
-- nmap --script=http-wordpress-plugins --script-args http-wordpress-plugins.root="/blog/",http-wordpress-plugins.search=500 <targets>
--
-- @output
-- Interesting ports on my.woot.blog (123.123.123.123):
-- PORT STATE SERVICE REASON
-- 80/tcp open http syn-ack
-- | http-wordpress-plugins:
-- | search amongst the 500 most popular plugins
-- | akismet
-- | wp-db-backup
-- | all-in-one-seo-pack
-- | stats
-- |_ wp-to-twitter
Be warned, though, that nmap does its best using a mix of heuristic methods, known vulnerabilities and brute force. A negative result does not mean "it's not there, 100% sure". It just means "nmap could not find it", possibly because the host is well protected (e.g. the service is wisely configured, a firewall, an IDS...).
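Since the question also mentions http-wordpress-enum.nse: that script takes an analogous root script argument (run nmap --script-help http-wordpress-enum to confirm the exact argument name for your version), so as a sketch the same idea would look like:
nmap -p80 --script http-wordpress-enum --script-args http-wordpress-enum.root="/wp/" 192.168.0.1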

Make a network with multiple switches in Mininet and add flows to the switches manually (with dpctl)

Can we make a network with multiple switches in Mininet without any controller?
I mean that we manually control the switches with "dpctl".
I made the topology like this:
         s2
        /  \
h1 -- s1    s3 -- h2
        \  /
         s4
and I want to send traffic from h1 to h2.
First of all I started the Floodlight controller and extracted the switch port numbers from there. After that I wanted to add the flows manually, but I ran into an error.
mininet@mininet-vm:~$ dpctl add-flow tcp:127.0.0.1:40566 in_port=1,actions=output:2
dpctl: failed to send packet to switch: Connection refused
How can I fix it?
Thanks a lot
dpctl add-flow tcp:127.0.0.1:6634 in_port=1,actions=output:2
The port number should be 6634, not 40566.
The controller listens on port 6633; the switches' passive listening ports for dpctl start at 6634.
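As a sketch, assuming the passive ports are assigned in switch order (s1 on 6634, s2 on 6635, s3 on 6636, s4 on 6637; confirm with sudo netstat -ntlp) and assuming port 1 faces the host and port 2 faces the neighbouring switch (your port numbers from Floodlight may differ), pushing h1's traffic over the s1-s2-s3 path needs a rule in each direction on every switch along it, for example on s1:
dpctl add-flow tcp:127.0.0.1:6634 in_port=1,actions=output:2
dpctl add-flow tcp:127.0.0.1:6634 in_port=2,actions=output:1
Repeat the pair on s2 (tcp:127.0.0.1:6635) and s3 (tcp:127.0.0.1:6636). Since these matches are on in_port only, they also forward ARP, so a plain h1 ping h2 can be used to verify.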

Node dev1@192.168.1.11 is not reachable

First, I followed "The Riak Fast Track" tutorial exactly to build a four-node Riak cluster.
Then I changed the 127.0.0.1 IP to 192.168.1.11 in the dev[1-4]/etc/app.config files and reinstalled the cluster (deleted dev[1-4], fresh install).
But Riak tells me:
Node dev1@192.168.1.11 is not reachable when I issue dev2/bin/riak-admin cluster join dev1@192.168.1.11
What's wrong?
+1 to what Brian Roach said in the comment.
Make sure to update the node name and IP address in both the app.config files AND the vm.args, before you start up the node.
Make sure node dev1 is up and reachable, before issuing a cluster join command to dev2.
Meaning, make sure dev1/bin/riak ping returns a 'pong', etc.
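As a sketch of what that looks like with the Fast Track devrel layout (repeat the name change for dev2-dev4 with their own node names; if a node had already started under the old 127.0.0.1 name, its ring directory may also need to be cleared, which your fresh install covers), the vm.args line is the one that matters for reachability:
# dev1/etc/vm.args
-name dev1@192.168.1.11
Then bring the nodes up and do the staged join:
dev1/bin/riak start
dev2/bin/riak start
dev1/bin/riak ping                                  # should answer pong before you try the join
dev2/bin/riak-admin cluster join dev1@192.168.1.11
dev2/bin/riak-admin cluster plan
dev2/bin/riak-admin cluster commit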
