Last week I was bitten by exceeding my port limit in OpenStack Kilo. I understand how to query and change it, and in the future I will set up my services to notify me as this quota is approached... but what is this quota actually limiting?
From the documentation: "(IntOpt) Number of ports allowed per tenant. A negative value means unlimited."
Is it limiting a number of virtual iSCSI ports? If so, is there a physical limit on my hardware that I might exceed if this becomes unlimited?
Or is it a number of IPs that can be allocated from a range? (If so, why is it referred to as ports?)
In my case, the following:
[root@_regionOne_ ~]# neutron quota-show --tenant-id _projectUUID_ -c port
+-------+-------+
| Field | Value |
+-------+-------+
| port  | 150   |
+-------+-------+
Was altered with:
[root@_regionOne_ ~]# neutron quota-show --tenant-id _projectUUID_ --port <new quota limit>
To solve the issue. But improving my understanding would be a much better solution!
Finally, I have an answer to that question, which was also interesting to me.
Port: A port in Neutron represents a virtual switch port on a logical
virtual switch. Virtual machine interfaces are mapped to Neutron
ports, and the ports define both the MAC address and the IP address
to be assigned to the interface plugged into them. Neutron port
definitions are stored in the Neutron database, which is then used by
the respective plugin agent to build and connect the virtual switching
infrastructure.
From the book Learning OpenStack Networking.
The update command should be:
neutron quota-update --tenant-id projectUUID --port <new quota limit>
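Putting the two together, a quick sketch of checking and then raising the quota (the tenant ID and the new limit are placeholders):
# Check the current port quota for the project
neutron quota-show --tenant-id <projectUUID> -c port
# Raise the port quota (300 is just an example value)
neutron quota-update --tenant-id <projectUUID> --port 300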
I am curious about how OpenStack handles IP configuration. I have a complete, working OpenStack dashboard with a static IP of 192.168.1.73/24 and I want to change it to something else. It is running as a VM using RHEL/Scientific Linux/CentOS 7.5 as the guest OS.
I am running OpenStack Queens (repo in /etc/yum.repos.d).
What I've tried, without success:
1. Changing the static IP in /etc/sysconfig/network-scripts/ifcfg-eth0.
2. Making sure /etc/resolv.conf reflects my new configuration.
3. Replacing the IP configuration in the packstack answer file for the compute node and the rest of the services I've configured.
What I have noted:
1. systemctl status -l redis.service fails when I change the IP configuration; it is active (running) with the initial configuration.
2. The virtualization daemon also fails during boot (running as KVM).
How "deep" does networking go in OpenStack, and how do I set a different IP and still have my dashboard up and running?
This was easy. What I had missed was simply re-running packstack with my answer file.
First, change the IP address on the machine in /etc/sysconfig/network-scripts/ifcfg-br-ex; that is, if you have already gone ahead and set up networking for your OpenStack environment.
If you made a backup of your ifcfg-eth0, revert to it and change it to the new IP configuration.
Second, replace the IP configuration in the packstack answer file for the compute node and the rest of the services configured.
Last but not least: this requires a steady Internet connection!
The last step is to re-run packstack with your answer file and the new IP configuration, as sketched below.
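Put together, a rough sketch of the sequence (file names follow a default packstack/CentOS layout and may differ in your environment; <new-ip> is a placeholder):
# 1. Update the static IP on the host (br-ex if external networking is already set up)
vi /etc/sysconfig/network-scripts/ifcfg-br-ex
systemctl restart network
# 2. Swap the old IP for the new one throughout the answer file
sed -i 's/192.168.1.73/<new-ip>/g' /root/packstack-answerfile.txt
# 3. Re-run packstack against the updated answer file
packstack --answer-file=/root/packstack-answerfile.txt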
I'm running Kubernetes with AWS EKS. I'm performing some load tests for a NodePort service and seeing a concurrent connection limit of ~16k-20k when hitting a node the pod is not running on. I'm wondering if there's some way to increase the number of concurrent connections.
So I'm running a NodePort service with only 1 pod, which is scheduled on node A. The load test I'm running tries to connect as many concurrent websocket connections as possible. The websockets just sleep and send heartbeats every 30s to keep the connection alive.
When I point the load tester (tsung) at node A, I can get upwards of 65k concurrent websockets before the pod gets OOMKilled, so memory is the limiting factor and that's fine. The real problem is that when I point the load tester at node B and kube-proxy's iptables rules forward the connection to node A, all of a sudden I can only get about 16k-20k concurrent websocket connections before the connections start stalling. According to netstat, they are getting stuck in the SYN_SENT state.
netstat -ant | awk '{print $6}' | sort | uniq -c | sort -n
...
20087 ESTABLISHED
30969 SYN_SENT
The only thing I can think of to check is my conntrack limit, and it looks to be fine. Here is what I get for node B.
net.netfilter.nf_conntrack_buckets = 16384
net.netfilter.nf_conntrack_max = 131072
net.nf_conntrack_max = 131072
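For reference, live usage can be compared against that maximum like this (the conntrack command comes from the conntrack-tools package):
# Current number of tracked connections vs. the configured maximum
sysctl net.netfilter.nf_conntrack_count
sysctl net.netfilter.nf_conntrack_max
# Or count the entries directly
sudo conntrack -C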
Here is the local port range. I'm not sure if it matters (whether DNAT and SNAT use up ports), but the range is well above 16k.
net.ipv4.ip_local_port_range = 32768 60999
The file descriptor limit and kernel TCP settings are the same for node A and node B, so I think that rules them out.
Is there anything else that could be limiting the number of concurrent connections forwarded through iptables/netfilter?
You are always going to get worse performance when hitting the NodePort on a node where your pod is not running. Essentially, your packets are going through extra hops (via iptables) to reach their final destination.
I'd recommend preserving the source IP for your NodePort service. Basically, patch your service with this:
$ kubectl patch svc <your-service> -p '{"spec":{"externalTrafficPolicy":"Local"}}'
Then let your load balancer forward traffic only to NodePorts that are serving traffic.
Alternatively, if you'd like to consider something better performing, you could look at kube-proxy in IPVS mode or something like BPF/Cilium for your overlay.
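As a rough illustration of the IPVS route (the exact mechanism varies by cluster; kubeadm-style clusters keep the kube-proxy mode in a ConfigMap, and EKS manages kube-proxy itself, so treat this as a sketch):
# Edit the kube-proxy ConfigMap and set mode: "ipvs" in the embedded config
kubectl -n kube-system edit configmap kube-proxy
# Restart the kube-proxy pods so they pick up the new mode
kubectl -n kube-system delete pods -l k8s-app=kube-proxy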
I want to know how OpenStack assigns IPs to virtual machines, and how to find out the ports and IPs used by a VM. Is it possible to find out the IP and ports being used by an application running inside the VM?
To assign an IP to your VM, you can use this command:
openstack floating ip create public
To associate the IP with your VM, use the command below:
openstack server add floating ip your-vm-name your-ip-number
To list all the ports used by applications, ssh to your instance and run:
sudo lsof -i
Assuming you know the VM name, do the following.
On the controller, run:
nova interface-list VM-NAME
It will give you the port ID, IP address, and MAC address of the VM's interface.
You can log in to the VM and run:
netstat -tlnp
to see which IPs and ports are being used by the applications running inside the VM.
As to how a VM gets an IP, it depends on your deployment. On a basic OpenStack deployment, when you create a network and a subnet under that network, you will see a DHCP namespace get created on the network node (run ip netns on the network node). The namespace name would be qdhcp-<network-id>. The dnsmasq process running inside the DHCP namespace allots IPs to VMs. This is just one of the many ways in which a VM gets an IP.
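A quick sketch of poking at that namespace on the network node (the network ID is a placeholder):
# List namespaces; expect one qdhcp-<network-id> per network with DHCP enabled
ip netns
# Look at the interfaces and addresses inside the DHCP namespace
ip netns exec qdhcp-<network-id> ip addr
# Confirm the dnsmasq process serving that network
ps aux | grep dnsmasq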
This particular End User page of the official documentation could be a good start:
"Each instance can have a private, or fixed, IP address and a public, or floating, one.
Private IP addresses are used for communication between instances, and public ones are used for communication with the outside world.
When you launch an instance, it is automatically assigned a private IP address that stays the same until you explicitly terminate the instance. Rebooting an instance has no effect on the private IP address.
A pool of floating IPs, configured by the cloud operator, is available in OpenStack Compute.
You can allocate a certain number of these to a project: The maximum number of floating IP addresses per project is defined by the quota.
You can add a floating IP address from this set to an instance of the project. Floating IP addresses can be dynamically disassociated and associated with other instances of the same project at any time.
Before you can assign a floating IP address to an instance, you first must allocate floating IPs to a project. After floating IP addresses have been allocated to the current project, you can assign them to running instances.
You can assign a floating IP address to one instance at a time."
There are, of course, deeper layers to look at in this section of the Admin Guide.
Regarding how to find out about ports and IPs, you have two options: command line interface or API.
For example, if you are using Neutron* and want to find out the IPs or networks in use with the API:
GET /v2.0/networks
And using the CLI:
$ neutron net-list
You can use similar commands for ports and subnets (sketched below); however, I haven't personally tested whether you can get information about the application running in the VM this way.
*Check out which OpenStack release you're running. If it's an old one, chances are it's using the Compute node (Nova) for networking.
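For ports and subnets specifically, the analogous CLI calls would be:
# List all ports (shows MAC and fixed IPs per port)
$ neutron port-list
# List all subnets (shows CIDRs and allocation pools)
$ neutron subnet-list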
I am trying to measure the latency between one of my machines and an EC2 instance. EC2 instances cannot be pinged, so I tried using application-level timestamps (using gettimeofday()). I send a TCP packet with a timestamp in the payload.
Upon receiving this packet, I take a timestamp on my machine and compute the difference. It always comes out negative. My guess was that the clocks on the two machines were skewed, so I used NTP to synchronize both machines, but the problem persists.
Can someone please help?
EC2 instances can be pinged, if configured to allow it. I set one up for this today while trying to track down packet drops in us-west-2. In the security group protecting the instance, you add a rule to permit "ICMP Echo Request" from the source address of the machine where you're originating the ping.
See the AWS FAQ for this quote.
Why can't I ping my instance? Ping uses ICMP ECHO, which by default is
blocked by your firewall. You'll need to grant ICMP access to your
instances by updating the firewall restrictions that are tied to your
security group.
ec2-authorize default -P icmp -t -1:-1 -s 0.0.0.0/0
Check out the latest developer guide for details. Section: Instance Addressing and Network Security -> Network Security -> Examples.
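That ec2-authorize syntax comes from the legacy EC2 API tools. A roughly equivalent rule with the modern aws CLI (assuming the default security group and opening all ICMP types, so treat it as a sketch) would be:
# Allow all ICMP (type/code -1 means all), including echo request, from anywhere
aws ec2 authorize-security-group-ingress --group-name default --protocol icmp --port -1 --cidr 0.0.0.0/0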
I am using bridging as a technique to connect 2 virtual interfaces together in Ubuntu 12.04.
One of the interfaces is a mininet interface (www.mininet.org).
I am getting a lot of TCP retransmission packets, and the connectivity is extremely slow.
Trying to debug this issue.
I have tried to enable STP on the bridge, but it doesn't take effect:
~$ brctl show
bridge name bridge id STP enabled interfaces
s1 0000.f643bed86249 no s1-eth1
s1-eth2
s1-eth3
s2 0000.caf874f68248 no s2-eth1
~$ sudo brctl stp s2 on
~$ brctl show
bridge name bridge id STP enabled interfaces
s1 0000.f643bed86249 no s1-eth1
s1-eth2
s1-eth3
s2 0000.caf874f68248 no s2-eth1
I am confused as to why this command does not work.
Also, auto-negotiation is off on these interfaces.
Does auto-negotiation matter for virtual interfaces?
Should I manually set auto-negotiation to 'on', or set the duplex and speed of the virtual interfaces?
Also, ping and DNS work perfectly fine. For HTTP traffic, the SYN, SYN-ACK, and ACK are as expected; however, the GET/POST request gets retransmitted 5-6 times immediately after the first GET/POST.
This is confusing to me, and any links/pointers/commands would be helpful.
Please direct me to the right forum if this is not a question for Stack Overflow. TIA.
STP was created to solve Layer 2 loops and the broadcast storms that such loops cause. It has nothing to do with TCP retransmission.
Maybe you can check the DNS resolve timeout in your case, and turn on the web server's debug log.
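As a debugging sketch (the interface and host are placeholders), a capture on the bridge can show whether the GET actually leaves the bridge and whether the server's reply ever comes back:
# Watch the HTTP exchange crossing the bridge; repeated GETs with no reply
# in between point at the forwarding path rather than the server
sudo tcpdump -i s1 -n host <server-ip> and tcp port 80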