Limit bandwidth for a cisco switch port - networking

I am using a Cisco Catalyst 2960-CG Series switch and want to limit the bandwidth of its ports to 100 kbps, but I have not been able to. I tried setting up a VLAN, but still cannot achieve the desired limit. Can someone help?

Try:
Switch(config)# interface GigabitEthernet0/1
Switch(config-if)# srr-queue bandwidth limit 90
Note that the 2960-CG has only Gigabit Ethernet ports, so the interface name is GigabitEthernet rather than FastEthernet. Also be aware that, per Cisco's command reference, the weight argument of srr-queue bandwidth limit is the percentage of the line rate the port is limited to, and it only accepts values from 10 to 90 - so the lowest outbound limit this command can impose on a gigabit port is roughly 100 Mbps. To get anywhere near 100 kbps you would need an actual policer (a police statement in a policy map), if your software supports it.
Source:
https://www.techrepublic.com/article/limit-bandwidth-on-a-cisco-catalyst-switch-port/
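The gap between what srr-queue can do and the asker's 100 kbps target is easy to quantify. A quick sketch, assuming a 1 Gbps port (as on the 2960-CG) and the documented 10-90 weight range:

```python
# Why `srr-queue bandwidth limit` cannot reach 100 kbps on a 2960-CG:
# the weight argument is a percentage of line rate, valid range 10-90.
port_speed_bps = 1_000_000_000   # 2960-CG ports are gigabit
min_weight_pct = 10              # lowest accepted weight
target_bps = 100_000             # desired limit: 100 kbps

min_limit_bps = port_speed_bps * min_weight_pct // 100
print(f"lowest srr-queue limit: {min_limit_bps / 1e6:.0f} Mbps")      # 100 Mbps
print(f"that is {min_limit_bps // target_bps}x the 100 kbps target")  # 1000x
```

In other words, the smallest limit the command can express on this hardware is three orders of magnitude above what the asker wants, which is why a policer is the only realistic route.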

Related

Diagnosing a 400Mbps cap on Network Speed With Cisco and Watchguard Devices

We are noticing that we max out our WAN port at 400 Mbps. We have a 1 Gbps connection with our provider, delivered over pure Ethernet (in a datacenter).
Here is an example of the max-out using crude MRTG:
We connect directly to our provider via Ethernet at 1 Gbps. This comes into our Cisco 2901 router, which then connects directly to our Watchguard Firebox at 1 Gbps Ethernet (in drop-in mode). All devices report 1 Gbps line speed with full duplex.
The Firebox then connects to our switch at 1 Gbps. We run a gigabit switch which connects directly to our servers (also at 1 Gbps to each server).
We can't seem to achieve anything over 400 Mbps through this setup. The Firebox X1250e we are running is rated at 1.5 Gbps throughput for raw packet forwarding (which is all we are doing - no proxying or fixup is performed on the data).
We have even fired up a command-line Speedtest (Ookla) on one of the servers, and it hits the 400 Mbps cap.
I know people are going to say the Cisco 2901 is the issue, but we are running full 1500-byte packets, and even at 400 Mbps over an extended period, this is an example of our CPU usage:
sh proc cpu
CPU utilization for five seconds: 18%/17%; one minute: 17%; five minutes: 17%
Also worth noting, we are not running any QoS on the Firebox - all QoS is disabled (the whole module unloaded).
The Cisco 2901 has CEF enabled.
We have the following configuration:
Does anybody know what may be causing this "cap"?
We would like some tips, advice and suggestions as to how we can diagnose this remotely (we can't easily go to the datacenter to perform tests).
Any help and advice are greatly appreciated; thank you in advance.
Depending on the 2901's configuration, it's quite possible that it's simply maxed out. Cisco documents a peak throughput of 3+ Gbit/s for the 2901, but recommends it for just 25 Mbit/s duplex throughput.
For testing, I'd connect a test device/machine in front of the 2901 and check whether it tops out at the same speed. The next step would be to connect the test machine behind the 2901 but in front of the Watchguard.

Detect faulty physical links with ping

I have a question about detecting physical problems on a link with ping.
If a fiber or cable has a problem and generates CRC errors on frames (visible in the switch or router interface statistics), it's possible that all pings pass anyway, because the default ICMP packet size is small and the statistical chance of hitting an error is low. First, can you confirm this?
Second, if I ping with a large size like 65000 bytes, one ping will generate approximately 65000 / 1500 (MTU) ≈ 43 frames as IP fragments. Since losing a single fragment normally loses the entire IP packet, the chance of seeing packet loss with a large ping should be clearly higher. Is this assumption true?
The overall question is: with large pings, can we detect a physical problem on a link more easily?
A link problem is a layer 1 or 2 problem, and ping is a layer 3 tool; if you use it for diagnosis you might get completely unexpected results. Port counters are much more precise for diagnosing link problems.
That said, it's quite possible that packet loss for small ping packets is low while real traffic is impacted more severely.
In addition to cable problems - which you'll need to repair - and statistically random packet loss, there are also configuration problems that can lead to CRC errors.
Most common in 10/100 Mbit networks is a duplex mismatch, where one side uses half-duplex (HDX) transmission with CSMA/CD while the other uses full-duplex (FDX). Once real data is transmitted, the HDX side will detect collisions, late collisions and possibly jabber, while the FDX side will detect FCS errors. Throughput is very low, but ping, with its low bandwidth, usually works.
Duplex mismatches happen most often when one side is forced to full duplex - thus deactivating auto-negotiation - and the other side falls back to half duplex.
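The asker's intuition about fragmented pings can be checked numerically. Assuming each Ethernet frame is lost independently with some probability p (the rate below is a made-up illustrative value), a packet of n fragments is lost whenever any single fragment is lost:

```python
# Probability that a ping is lost, assuming each frame is independently
# corrupted with probability p (p here is a hypothetical error rate).
def ping_loss_probability(p: float, fragments: int) -> float:
    # The IP packet survives only if every fragment survives.
    return 1 - (1 - p) ** fragments

p = 1e-4  # assumed per-frame error rate
small = ping_loss_probability(p, 1)    # default small ping: 1 frame
large = ping_loss_probability(p, 43)   # 65000-byte ping: ~43 fragments
print(f"small ping loss: {small:.4%}")
print(f"large ping loss: {large:.4%}")
```

For small p the large-ping loss rate is roughly 43 times the small-ping rate, confirming that jumbo pings are statistically better at exposing a marginal link - though port counters remain the more direct diagnostic.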

Get port number of a switch a pc is connected to

I would like to know the steps to find the exact port number of the switch my PC is connected to. I'm told SNMP can do this, but I have very little networking background. If it really can give me the port number, how do I do it? If possible, I would like detailed steps, including what I need to download beforehand, etc. And if there is a simpler way to get the port number through the command prompt only, any help would be very much appreciated. Thanks in advance!
Basically you should correlate data from different MIB tables:
MIB-2 interfaces (IF-MIB)
Ethernet MIB dot3Stats (EtherLike-MIB)
MIB-2 dot1dBridge (BRIDGE-MIB)
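The usual correlation chain walks the BRIDGE-MIB forwarding table (dot1dTpFdbPort) to map your PC's MAC to a bridge port, then dot1dBasePortIfIndex to map that to an ifIndex, and finally IF-MIB ifName for the human-readable port. A minimal sketch of that join - the dictionaries stand in for snmpwalk output, and every value in them is made up for illustration:

```python
# Correlating SNMP tables to find the switch port behind a known MAC.
# The three dicts stand in for snmpwalk output (values are invented):
#   dot1dTpFdbPort       (BRIDGE-MIB): learned MAC -> bridge port number
#   dot1dBasePortIfIndex (BRIDGE-MIB): bridge port -> ifIndex
#   ifName               (IF-MIB)    : ifIndex     -> interface name
dot1dTpFdbPort = {"00:11:22:33:44:55": 3, "66:77:88:99:aa:bb": 7}
dot1dBasePortIfIndex = {3: 10103, 7: 10107}
ifName = {10103: "Fa0/3", 10107: "Fa0/7"}

def port_for_mac(mac: str) -> str:
    bridge_port = dot1dTpFdbPort[mac]             # MAC -> bridge port
    if_index = dot1dBasePortIfIndex[bridge_port]  # bridge port -> ifIndex
    return ifName[if_index]                       # ifIndex -> port name

print(port_for_mac("00:11:22:33:44:55"))  # Fa0/3
```

In practice you would fill these tables with snmpwalk (or an SNMP library) against the switch, using your PC's MAC address from ipconfig /all or ip link as the lookup key.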

how can you count the number of packet losses in a file transfer?

One of my networking course projects deals with the 802.11 protocol.
My partner and I thought about exploring the "hidden terminal" problem by simulating it.
We've set up a private network. We have 2 wireless terminals that will attempt to send a file
to a 3rd terminal that is connected to the router via Ethernet. RTS/CTS will be disabled.
To compare results, we'd like to measure the number of packet collisions that occurred during the transfer, so as to conclude that they are due to RTS being disabled.
We've read that it is impossible to measure packet collisions directly, as a collision is basically noise. We'll have to make do with counting the packets that didn't receive an ACK - basically, the number of retransmissions.
How can we do that?
I suggested that instead of sending a file, we could make the 2 wireless terminals ping the 3rd terminal continually. The ping tool automatically counts the ping packets that didn't receive a reply. Do you think that's a viable approach?
Thank you very much.
No, you'll get incorrect results. Ping is an application, i.e. it works at the application (highest) layer of the network stack, while the 802.11 protocol operates at the MAC layer - there are at least 2 layers separating ping from 802.11. Whatever retransmissions happen at the MAC layer are hidden by the layers above it. You'll only see a failure in ping if all the retransmissions initiated by the lower layers have failed.
You need to work at the same layer you're investigating - in your case, the MAC layer. You can use a sniffer (google for it) to get the statistics you want.
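One concrete way to count retransmissions from a monitor-mode capture is to test the Retry bit of each frame's Frame Control field (in Wireshark/tshark this is the wlan.fc.retry filter). A sketch of the bit test on raw frame bytes, using synthetic frames since a real capture isn't available here:

```python
# Counting 802.11 retransmissions from raw frames by testing the Retry
# flag: mask 0x08 of the second Frame Control byte.
RETRY_MASK = 0x08

def is_retry(frame: bytes) -> bool:
    # frame[0:2] is the 2-byte 802.11 Frame Control field;
    # byte 1 holds the flags (ToDS, FromDS, MoreFrag, Retry, ...).
    return bool(frame[1] & RETRY_MASK)

def count_retries(frames) -> int:
    return sum(is_retry(f) for f in frames)

# Synthetic data frames (type 2, subtype 0 -> first FC byte 0x08):
frames = [
    bytes([0x08, 0x00]) + b"\x00" * 22,  # normal data frame
    bytes([0x08, 0x08]) + b"\x00" * 22,  # same frame, Retry set
    bytes([0x08, 0x09]) + b"\x00" * 22,  # Retry + ToDS set
]
print(count_retries(frames))  # 2
```

For the actual experiment, capture in monitor mode near the access point and feed the captured frames through the same test; the retry count with RTS/CTS disabled versus enabled is the comparison you want.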

Simulate high speed network connection

I have created a bandwidth meter application to measure total Internet traffic. I need to test the application with relatively high data transfer rates, such as 4 Mbps. I have a slow Internet connection, so I need a simulator to test my application to see the behavior with high throughput rates.
As an option, you can run an HTTP server in one virtual machine with a NAT'ed network adapter and test your bandwidth meter against it from the host system or a similar VM.
There are commercial packet generators that do this, and also a few freely available ones like PackETH and Bit-Twist.
There are also other creative solutions. For example, do the packets need to be IP packets for your purpose? If not, you could always get a "dumb" switch or hub (no spanning tree or other loop protection) and plug both ends of a crossover cable into it (a straight-through Ethernet cable would also work if the switch supports Auto-MDIX). The idea is that with a loop in your network, the hub/switch will flood the network to 100% for you, since it will continually re-forward the same packets.
If you try this, be sure yours is the only computer on the network, since this technique will effectively render it useless. ;-)
You could always send some IP broadcast packets to "seed" the loop. Otherwise, the first thing I think you'd likely see is broadcast ARP packets, which won't help if you're measuring layer 3 traffic only.
Lastly (and especially if this sounds like too much trouble), I recommend you read up on dependency injection and refactor your code so you can test it without needing a high-speed interface. Of course, you'll still need to test your code in a real high-speed environment, but doing this will give you much more confidence in your code.
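If the meter can observe loopback traffic, you can also generate controlled high-rate traffic entirely on one machine, with no fast Internet link or extra hardware. A self-contained sketch (all names and sizes here are made up for illustration): a TCP sender/receiver pair over 127.0.0.1 that pushes a known byte count and reports the achieved rate, which on loopback will comfortably exceed 4 Mbps.

```python
# Local high-rate traffic generator: TCP sender/receiver over loopback,
# useful for exercising a bandwidth meter without a fast uplink.
import socket
import threading
import time

PAYLOAD = b"\x00" * 65536
TOTAL_BYTES = 64 * 1024 * 1024  # push 64 MiB through loopback

def receiver(server: socket.socket, result: dict) -> None:
    conn, _ = server.accept()
    received = 0
    while True:
        chunk = conn.recv(65536)
        if not chunk:
            break
        received += len(chunk)
    conn.close()
    result["received"] = received

server = socket.socket()
server.bind(("127.0.0.1", 0))   # let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

result = {}
t = threading.Thread(target=receiver, args=(server, result))
t.start()

client = socket.socket()
client.connect(("127.0.0.1", port))
start = time.monotonic()
sent = 0
while sent < TOTAL_BYTES:
    client.sendall(PAYLOAD)
    sent += len(PAYLOAD)
client.close()
t.join()
elapsed = time.monotonic() - start

mbps = result["received"] * 8 / elapsed / 1e6
print(f"{result['received']} bytes in {elapsed:.2f} s = {mbps:.0f} Mbit/s")
```

This only exercises the meter against loopback traffic, so it complements rather than replaces a test on a real high-speed interface.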