eth1 disappears after using new kernel 5.6.0 | uSID | CentOS - networking

In order to test SRv6 uSID in Linux, I compiled the new 5.6.0 kernel from the following GitHub repository:
https://github.com/netgroup/srv6-usid-linux-kernel.git
After compiling and rebooting, my second network adapter port (eth1) disappeared. Both ports should be the same type, yet only eth0 was renamed to ens3, as follows:
[root@frank cisco]# uname -a
Linux frank 5.6.0+ #3 SMP Tue Jun 30 17:32:20 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
[root@frank cisco]# dmesg |grep eth
[ 2.311925] e1000 0000:00:03.0 eth0: (PCI:33MHz:32-bit) 5e:00:00:00:00:00
[ 2.314897] e1000 0000:00:03.0 eth0: Intel(R) PRO/1000 Network Connection
[ 2.770167] e1000 0000:00:04.0 eth1: (PCI:33MHz:32-bit) fa:16:3e:38:fd:91
[ 2.773194] e1000 0000:00:04.0 eth1: Intel(R) PRO/1000 Network Connection
[ 5.352825] e1000 0000:00:03.0 ens3: renamed from eth0
[root@frank cisco]#
[root@frank cisco]# lshw -class network -businfo
Bus info          Device  Class    Description
========================================================
pci@0000:00:03.0  ens3    network  82540EM Gigabit Ethernet Controller
pci@0000:00:04.0          network  82540EM Gigabit Ethernet Controller
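A quick way to see which driver claimed the port that has no interface is lspci; 00:04.0 is the bus address from the listing above, and the "Kernel driver in use" line is the field to look at:
lspci -nnk -s 00:04.0    # the "Kernel driver in use:" line names whatever grabbed the device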
Here is the dmesg output for the two ports:
[root@frank cisco]# dmesg |grep 00:03.0
[ 0.700489] pci 0000:00:03.0: [8086:100e] type 00 class 0x020000
[ 0.702057] pci 0000:00:03.0: reg 0x10: [mem 0xfeb80000-0xfeb9ffff]
[ 0.703921] pci 0000:00:03.0: reg 0x14: [io 0xc000-0xc03f]
[ 0.707532] pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb3ffff pref]
[ 2.311925] e1000 0000:00:03.0 eth0: (PCI:33MHz:32-bit) 5e:00:00:00:00:00
[ 2.314897] e1000 0000:00:03.0 eth0: Intel(R) PRO/1000 Network Connection
[ 5.352825] e1000 0000:00:03.0 ens3: renamed from eth0
[root@frank cisco]#
[root@frank cisco]# dmesg |grep 00:04.0
[ 0.708456] pci 0000:00:04.0: [8086:100e] type 00 class 0x020000
[ 0.710057] pci 0000:00:04.0: reg 0x10: [mem 0xfeba0000-0xfebbffff]
[ 0.711846] pci 0000:00:04.0: reg 0x14: [io 0xc040-0xc07f]
[ 0.715515] pci 0000:00:04.0: reg 0x30: [mem 0xfeb40000-0xfeb7ffff pref]
[ 2.770167] e1000 0000:00:04.0 eth1: (PCI:33MHz:32-bit) fa:16:3e:38:fd:91
[ 2.773194] e1000 0000:00:04.0 eth1: Intel(R) PRO/1000 Network Connection
Here is the lshw output; note "driver=uio_pci_generic" on the second port:
[root@frank v2.81]# lshw -c network
*-network:0
description: Ethernet interface
product: 82540EM Gigabit Ethernet Controller
vendor: Intel Corporation
physical id: 3
bus info: pci@0000:00:03.0
logical name: ens3
version: 03
serial: 5e:00:00:00:00:00
size: 1Gbit/s
capacity: 1Gbit/s
width: 32 bits
clock: 33MHz
capabilities: bus_master rom ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation
configuration: autonegotiation=on broadcast=yes driver=e1000 driverversion=7.3.21-k8-NAPI duplex=full ip=172.16.1.140 latency=0 link=yes multicast=yes port=twisted pair speed=1Gbit/s
resources: irq:10 memory:feb80000-feb9ffff ioport:c000(size=64) memory:feb00000-feb3ffff
*-network:1
description: Ethernet controller
product: 82540EM Gigabit Ethernet Controller
vendor: Intel Corporation
physical id: 4
bus info: pci@0000:00:04.0
version: 03
width: 32 bits
clock: 33MHz
capabilities: bus_master rom
configuration: driver=uio_pci_generic latency=0 <<<
resources: irq:11 memory:feba0000-febbffff ioport:c040(size=64) memory:feb40000-feb7ffff
And I found that the port had been bound by DPDK, even though I never configured any such binding...
[root@frank v2.81]# ./dpdk_setup_ports.py -s
Network devices using DPDK-compatible driver
============================================
0000:00:04.0 '82540EM Gigabit Ethernet Controller' drv=uio_pci_generic unused=e1000,igb_uio,vfio-pci <<<
Network devices using kernel driver
===================================
0000:00:03.0 '82540EM Gigabit Ethernet Controller' if=ens3 drv=e1000 unused=igb_uio,vfio-pci,uio_pci_generic
Other network devices
=====================
<none>
Does anyone know what is going on, and how to solve this problem?
Thanks a lot!
Frank

After discussing with colleagues, it seems the issue is related to this link:
https://www.kernel.org/doc/html/v4.12/driver-api/uio-howto.html
Following that guide I can work around the issue, but it comes back after every reboot...
[root@frank v2.81]# ls -l /sys/bus/pci/devices/0000:00:04.0/driver
lrwxrwxrwx. 1 root root 0 Jun 30 17:59 /sys/bus/pci/devices/0000:00:04.0/driver -> ../../../bus/pci/drivers/uio_pci_generic
[root@frank v2.81]# echo -n 0000:00:04.0 > /sys/bus/pci/drivers/uio_pci_generic/unbind
[root@frank v2.81]# echo -n 0000:00:04.0 > /sys/bus/pci/drivers/e1000/bind
[79965.358393] e1000 0000:00:04.0 eth0: (PCI:33MHz:32-bit) fa:16:3e:38:fd:91
[79965.360499] e1000 0000:00:04.0 eth0: Intel(R) PRO/1000 Network Connection
[root@frank v2.81]# ls -l /sys/bus/pci/devices/0000:00:04.0/driver
lrwxrwxrwx. 1 root root 0 Jul 1 16:12 /sys/bus/pci/devices/0000:00:04.0/driver -> ../../../bus/pci/drivers/e1000
[root@frank cisco]# ifconfig eth0 up
[ 221.792886] e1000: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX
[ 221.796553] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[root@frank cisco]# lshw -c network
*-network:0
description: Ethernet interface
product: 82540EM Gigabit Ethernet Controller
vendor: Intel Corporation
physical id: 3
bus info: pci@0000:00:03.0
logical name: ens3
version: 03
serial: 5e:00:00:00:00:00
size: 1Gbit/s
capacity: 1Gbit/s
width: 32 bits
clock: 33MHz
capabilities: bus_master rom ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation
configuration: autonegotiation=on broadcast=yes driver=e1000 driverversion=7.3.21-k8-NAPI duplex=full ip=172.16.1.140 latency=0 link=yes multicast=yes port=twisted pair speed=1Gbit/s
resources: irq:11 memory:feb80000-feb9ffff ioport:c000(size=64) memory:feb00000-feb3ffff
*-network:1
description: Ethernet interface
product: 82540EM Gigabit Ethernet Controller
vendor: Intel Corporation
physical id: 4
bus info: pci@0000:00:04.0
logical name: eth0
version: 03
serial: fa:16:3e:38:fd:91
size: 1Gbit/s
capacity: 1Gbit/s
width: 32 bits
clock: 33MHz
capabilities: bus_master rom ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation
configuration: autonegotiation=on broadcast=yes driver=e1000 driverversion=7.3.21-k8-NAPI duplex=full latency=0 link=yes multicast=yes port=twisted pair speed=1Gbit/s
resources: irq:11 memory:feba0000-febbffff ioport:c040(size=64) memory:feb40000-feb7ffff
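For anyone hitting the same thing: the unbind/bind workaround above can be replayed at boot so the port stays on e1000 across reboots. A minimal sketch, using a hypothetical unit name rebind-e1000.service and the same sysfs paths as in the commands above:

# /etc/systemd/system/rebind-e1000.service  (hypothetical unit name)
[Unit]
Description=Rebind 0000:00:04.0 from uio_pci_generic to e1000

[Service]
Type=oneshot
ExecStart=/bin/sh -c 'echo -n 0000:00:04.0 > /sys/bus/pci/drivers/uio_pci_generic/unbind || true'
ExecStart=/bin/sh -c 'echo -n 0000:00:04.0 > /sys/bus/pci/drivers/e1000/bind'

[Install]
WantedBy=multi-user.target

# enable it once:
systemctl daemon-reload && systemctl enable rebind-e1000.service

The cleaner fix is to find whatever is binding 00:04.0 to uio_pci_generic at boot in the first place (for example a leftover DPDK/TRex setup service) and disable it.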

Related

Upload speed very slow in Ubuntu 18.04.6 while download speed is normal

I bought a new 1 Gbps server from OneProvider and installed Ubuntu 18.04.6 on it.
The upload speed over SSH or FTP is very good, but the download speed is about 100 kb/s over SSH and FTP; I also tried installing Nginx and downloading from it, and that is also about 100 kb/s.
I tried from more than 5 devices in different locations, some of them from another server in the same network (using wget), but no attempt exceeded 150 kb/s.
This is the (ip a) output:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether d4:ae:52:ca:0f:6e brd ff:ff:ff:ff:ff:ff
inet (serverip)/24 brd 62.210.207.255 scope global eno1
valid_lft forever preferred_lft forever
inet6 fe80::d6ae:52ff:feca:f6e/64 scope link
valid_lft forever preferred_lft forever
3: eno2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether d4:ae:52:ca:0f:6f brd ff:ff:ff:ff:ff:ff
(ethtool eno1) output:
Settings for eno1:
Supported ports: [ TP ]
Supported link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Supported pause frame use: No
Supports auto-negotiation: Yes
Supported FEC modes: Not reported
Advertised link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Advertised pause frame use: No
Advertised auto-negotiation: Yes
Advertised FEC modes: Not reported
Speed: 1000Mb/s
Duplex: Full
Port: Twisted Pair
PHYAD: 1
Transceiver: internal
Auto-negotiation: on
MDI-X: on
Supports Wake-on: g
Wake-on: d
Link detected: yes
(ethtool -S eno1) output:
NIC statistics:
rx_bytes: 67518474469
rx_error_bytes: 0
tx_bytes: 1939892582744
tx_error_bytes: 0
rx_ucast_packets: 457688996
rx_mcast_packets: 1105671
rx_bcast_packets: 743858
tx_ucast_packets: 1341579130
tx_mcast_packets: 12
tx_bcast_packets: 4
tx_mac_errors: 0
tx_carrier_errors: 0
rx_crc_errors: 0
rx_align_errors: 0
tx_single_collisions: 0
tx_multi_collisions: 0
tx_deferred: 0
tx_excess_collisions: 0
tx_late_collisions: 0
tx_total_collisions: 0
rx_fragments: 0
rx_jabbers: 0
rx_undersize_packets: 0
rx_oversize_packets: 0
rx_64_byte_packets: 4346996
rx_65_to_127_byte_packets: 430360977
rx_128_to_255_byte_packets: 1072678
rx_256_to_511_byte_packets: 420201
rx_512_to_1023_byte_packets: 250311
rx_1024_to_1522_byte_packets: 23087362
rx_1523_to_9022_byte_packets: 0
tx_64_byte_packets: 899130
tx_65_to_127_byte_packets: 11634758
tx_128_to_255_byte_packets: 2699608
tx_256_to_511_byte_packets: 3443633
tx_512_to_1023_byte_packets: 7211982
tx_1024_to_1522_byte_packets: 1315690035
tx_1523_to_9022_byte_packets: 0
rx_xon_frames: 0
rx_xoff_frames: 0
tx_xon_frames: 0
tx_xoff_frames: 0
rx_mac_ctrl_frames: 0
rx_filtered_packets: 113311
rx_ftq_discards: 0
rx_discards: 0
rx_fw_discards: 0
(ifconfig eno1 | grep errors) output:
RX errors 0 dropped 93 overruns 0 frame 0
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
(lshw -C network) output:
*-network:0
description: Ethernet interface
product: NetXtreme II BCM5716 Gigabit Ethernet
vendor: Broadcom Inc. and subsidiaries
physical id: 0
bus info: pci@0000:01:00.0
logical name: eno1
version: 20
serial: d4:ae:52:ca:0f:6e
size: 1Gbit/s
capacity: 1Gbit/s
width: 64 bits
clock: 33MHz
capabilities: pm vpd msi msix pciexpress bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation
configuration: autonegotiation=on broadcast=yes driver=bnx2 driverversion=2.2.6 duplex=full firmware=7.4.8 bc 7.4.0 NCSI 2.0.11 ip=(serverip) latency=0 link=yes multicast=yes port=twisted pair speed=1Gbit/s
resources: irq:16 memory:c0000000-c1ffffff
*-network:1 DISABLED
description: Ethernet interface
product: NetXtreme II BCM5716 Gigabit Ethernet
vendor: Broadcom Inc. and subsidiaries
physical id: 0.1
bus info: pci@0000:01:00.1
logical name: eno2
version: 20
serial: d4:ae:52:ca:0f:6f
capacity: 1Gbit/s
width: 64 bits
clock: 33MHz
capabilities: pm vpd msi msix pciexpress bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation
configuration: autonegotiation=on broadcast=yes driver=bnx2 driverversion=2.2.6 duplex=half firmware=7.4.8 bc 7.4.0 NCSI 2.0.11 latency=0 link=no multicast=yes port=twisted pair
resources: irq:17 memory:c2000000-c3ffffff
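One way to separate the link itself from any particular service is a raw TCP test in both directions. A minimal sketch, assuming iperf3 can be installed on the server and on one of the remote machines:

# on the server
iperf3 -s
# on a remote client: client -> server traffic
iperf3 -c <server-ip>
# same connection, reversed: server -> client traffic
iperf3 -c <server-ip> -R

If only the server-to-client direction is stuck around 100-150 kb/s while the other direction runs near line rate, the problem is in the server's transmit path (negotiation, driver, or provider-side shaping) rather than in SSH, FTP or Nginx.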

Trace incoming multicast UDP messages with Wireshark

I have an application A that sends a multicast message to application B.
The log shows the following:
Sender: 48704 -> /239.6.7.8:46655
Receiver: /172.17.95.17:48704, Hello, world!
Sender: 48704 -> /239.6.7.8:46655
Receiver: /172.17.95.17:48704, Hello, world!
Sender: 48704 -> /239.6.7.8:46655
Receiver: /172.17.95.17:48704, Hello, world!
As you can see, I am able to connect, send and receive messages.
In tshark, I can see only what the sender is sending.
What do I need to do in order to see the incoming message?
[hudson@edg-perf09 ~]$ tshark -ni any | grep "46655"
Capturing on 'any'
0.114114866 172.17.95.17 -> 239.6.7.8 UDP 57 Source port: 48704 Destination port: 46655
1.115497174 172.17.95.17 -> 239.6.7.8 UDP 57 Source port: 48704 Destination port: 46655
2.116822371 172.17.95.17 -> 239.6.7.8 UDP 57 Source port: 48704 Destination port: 46655
3.118153942 172.17.95.17 -> 239.6.7.8 UDP 57 Source port: 48704 Destination port: 46655
4.119370365 172.17.95.17 -> 239.6.7.8 UDP 57 Source port: 48704 Destination port: 46655
5.120568524 172.17.95.17 -> 239.6.7.8 UDP 57 Source port: 48704 Destination port: 46655
6.121715504 172.17.95.17 -> 239.6.7.8 UDP 57 Source port: 48704 Destination port: 46655
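A capture sketch that may help (the interface name eth0 is an assumption; use whichever interface the receiver joined the group on): filter on the multicast group itself instead of grepping the text output, and check the group memberships first to see where the join actually landed:

# which interfaces have joined which multicast groups (the receiver's join should appear here)
ip maddr show
# capture all traffic for that group and port on that interface (eth0 is an assumed name)
tshark -ni eth0 -f "host 239.6.7.8 and udp port 46655"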

VNF does not forward packets sent from Client in OpenStack using VNFFG

I'm trying to ping from the Client to 8.8.8.8 via VNF1, so I use a VNFFG to force the Client's ICMP traffic through VNF1 before it goes out to the internet.
After I apply the VNFFG rule in OpenStack, VNF1 can see (with tcpdump) the MPLS packet that OpenStack builds by encapsulating the Client's ICMP packet, but the forwarding table of VNF1 never receives any packet to forward onwards.
This is the packet seen on VNF1:
09:15:12.161830 MPLS (label 13311, exp 0, [S], ttl 255) IP 12.0.0.58 > 8.8.8.8: ICMP echo request, id 10531, seq 15, length 64
I captured that packet and saw that its content is readable (no encryption) and that the src and dst MAC addresses belong to the Client and VNF1 respectively.
This is my VNFFG template:
tosca_definitions_version: tosca_simple_profile_for_nfv_1_0_0
description: Sample VNFFG template

topology_template:
  node_templates:
    Forwarding_path1:
      type: tosca.nodes.nfv.FP.TackerV2
      description: demo chain
      properties:
        id: 51
        policy:
          type: ACL
          criteria:
            - name: block_icmp
              classifier:
                network_src_port_id: 0304e8b5-6c37-4634-bde2-1351cdee5134 #CLIENT PORT ID
                ip_proto: 1
            - name: block_udp
              classifier:
                network_src_port_id: 0304e8b5-6c37-4634-bde2-1351cdee5134 #CLIENT PORT ID
                ip_proto: 17
        path:
          - forwarder: VNF1
            capability: CP1

  groups:
    VNFFG1:
      type: tosca.groups.nfv.VNFFG
      description: Traffic to server
      properties:
        vendor: tacker
        version: 1.0
        number_of_endpoints: 1
        dependent_virtual_link: [VL1]
        connection_point: [CP1]
        constituent_vnfs: [VNF1]
      members: [Forwarding_path1]
This is my VNF Descriptor:
tosca_definitions_version: tosca_simple_profile_for_nfv_1_0_0
description: Demo example

metadata:
  template_name: sample-tosca-vnfd

topology_template:
  node_templates:
    VDU1:
      type: tosca.nodes.nfv.VDU.Tacker
      capabilities:
        nfv_compute:
          properties:
            num_cpus: 1
            mem_size: 2 GB
            disk_size: 20 GB
      properties:
        image: VNF1
        availability_zone: nova
        mgmt_driver: noop
        key_name: my-key-pair
        config: |
          param0: key1
          param1: key2

    CP1:
      type: tosca.nodes.nfv.CP.Tacker
      properties:
        management: true
        order: 0
        anti_spoofing_protection: false
      requirements:
        - virtualLink:
            node: VL1
        - virtualBinding:
            node: VDU1

    VL1:
      type: tosca.nodes.nfv.VL
      properties:
        network_name: my-private-network
        vendor: Tacker

    FIP1:
      type: tosca.nodes.network.FloatingIP
      properties:
        floating_network: public
      requirements:
        - link:
            node: CP1
I used this command to deploy the VNFFG rule:
tacker vnffg-create --vnffgd-template vnffg_test.yaml forward_traffic
I do not know whether the problem comes from the key I defined for VNF1, because I do not know what param0 and param1 are used for or where they end up.
How can I make the VNF forward these packets?
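A diagnostic sketch for the VNF1 side (the interface name eth0 is an assumption; the label 13311 is taken from the tcpdump above): the packet arrives MPLS-encapsulated, so plain IP forwarding inside a Linux-based VNF never sees it unless MPLS input is enabled and the label is popped, which would explain why the forwarding table stays empty even though tcpdump sees traffic:

# inside VNF1, assuming a Linux guest with the chain's ingress interface named eth0
modprobe mpls_router
sysctl -w net.ipv4.ip_forward=1
sysctl -w net.mpls.platform_labels=100000
sysctl -w net.mpls.conf.eth0.input=1
# pop label 13311 and hand the inner IP packet back to normal routing
ip -f mpls route add 13311 dev lo

Whether this is the right long-term approach depends on how the chain is supposed to terminate the MPLS encapsulation, but it shows where the packets are currently being dropped.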

How to attach vhostuser port to VM: ports are not being shown in VM

I am struggling with attaching OVS-DPDK ports to my VM.
I am new to OpenStack and OVS-DPDK; here is my current setup:
I have created a VM with physnet ports, which are SR-IOV ports.
I have 2 other ports which will be associated with OVS-DPDK. OVS-DPDK is installed and I have done the steps below (ovs-vswitchd (Open vSwitch) 2.17.0, DPDK 21.11.0).
Binding the UIO driver to the NIC ports:
dpdk-devbind.py -b vfio-pci 08:00.0
dpdk-devbind.py -b vfio-pci 08:00.1
Binding these DPDK ports to OVS (as dpdk ports):
ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
ovs-vsctl add-port br0 dpdk-p0 -- set Interface dpdk-p0 type=dpdk ofport_request=1 options:dpdk-devargs=0000:08:00.0
ovs-vsctl add-port br0 dpdk-p1 -- set Interface dpdk-p1 type=dpdk ofport_request=2 options:dpdk-devargs=0000:08:00.1
/usr/libexec/qemu-kvm -name guest=instance-0000000c -chardev socket,id=char1,path=/var/run/dpdkvhostclient1,server -netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce -device virtio-net-pci,mac=00:00:00:00:00:01,netdev=mynet1 -object memory-backend-file,id=mem1,size=0x8000000,mem-path=/dev/hugepages,share=on -numa node,memdev=mem1 -mem-prealloc &
and
/usr/libexec/qemu-kvm -name guest=instance-0000000c -chardev socket,id=char1,path=/var/run/dpdkvhostclient2,server -netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce -device virtio-net-pci,mac=00:00:00:00:00:02,netdev=mynet1 -object memory-backend-file,id=mem1,size=0x8000000,mem-path=/dev/hugepages,share=on -numa node,memdev=mem1 -mem-prealloc &
Add a vhostuser port to OVS
ovs-vsctl add-port br0 dpdkvhostclient1 -- set Interface dpdkvhostclient1 type=dpdkvhostuserclient ofport_request=3 options:vhost-server-path=/var/run/dpdkvhostclient1
ovs-vsctl add-port br0 dpdkvhostclient2 -- set Interface dpdkvhostclient2 type=dpdkvhostuserclient ofport_request=4 options:vhost-server-path=/var/run/dpdkvhostclient2
Add flows that forward packets between the vhostuser ports and the dpdk ports:
ovs-ofctl del-flows br0
ovs-ofctl add-flow br0 in_port=1,actions=output:3
ovs-ofctl add-flow br0 in_port=2,actions=output:4
ovs-ofctl add-flow br0 in_port=3,actions=output:1
ovs-ofctl add-flow br0 in_port=4,actions=output:2
I logged into the VM and I don't see any of the DPDK ports shown in ifconfig -a either.
I am following https://docs.openvswitch.org/en/latest/topics/dpdk/vhost-user/#dpdk-vhost-user-client
I also tried putting the following in the XML of my VM instance:
<cpu mode='host-model' check='partial'>
  <model fallback='allow'/>
  <topology sockets='6' cores='1' threads='1'/>
  <numa>
    <cell id='0' cpus='0-5' memory='4096' unit='KiB' memAccess='shared'/>
  </numa>
</cpu>
<memoryBacking>
  <hugepages>
    <page size='2048' unit='G'/>
  </hugepages>
  <locked/>
  <source type='file'/>
  <access mode='shared'/>
  <allocation mode='immediate'/>
  <discard/>
</memoryBacking>
<interface type='vhostuser'>
  <mac address='0c:c4:7a:ea:4b:b2'/>
  <source type='unix' path='/var/run/dpdkvhostclient1' mode='server'/>
  <target dev='dpdkvhostclient1'/>
  <model type='virtio'/>
  <driver queues='2'>
    <host mrg_rxbuf='on'/>
  </driver>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
</interface>
<interface type='vhostuser'>
  <mac address='0c:c4:7a:ea:4b:b3'/>
  <source type='unix' path='/var/run/dpdkvhostclient2' mode='server'/>
  <target dev='dpdkvhostclient2'/>
  <model type='virtio'/>
  <driver queues='2'>
    <host mrg_rxbuf='on'/>
  </driver>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x10' function='0x0'/>
</interface>
The MAC addresses of these vhostuser ports are random, and the slots are ones that were not present in the XML. The NUMA block was added to the CPU section and memoryBacking was also added; I rebooted the instance, but the new interfaces still did not appear in the VM.
The vhostuser ports are shown as DOWN, as below:
ovs-ofctl show br0
OFPT_FEATURES_REPLY (xid=0x2): dpid:00001cfd0870760c
n_tables:254, n_buffers:0
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst
1(dpdk-p0): addr:1c:fd:08:70:76:0c
config: 0
state: 0
current: 1GB-FD AUTO_NEG
speed: 1000 Mbps now, 0 Mbps max
2(dpdk-p1): addr:1c:fd:08:70:76:0d
config: 0
state: 0
current: 1GB-FD AUTO_NEG
speed: 1000 Mbps now, 0 Mbps max
3(dpdkvhostclient): addr:00:00:00:00:00:00
config: 0
state: LINK_DOWN
speed: 0 Mbps now, 0 Mbps max
4(dpdkvhostclient): addr:00:00:00:00:00:00
config: 0
state: LINK_DOWN
speed: 0 Mbps now, 0 Mbps max
LOCAL(br0): addr:1c:fd:08:70:76:0c
config: PORT_DOWN
state: LINK_DOWN
current: 10MB-FD COPPER
speed: 10 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0
What am I missing?
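One check worth doing first (socket paths as in the commands above): with type=dpdkvhostuserclient, OVS has to connect to the sockets that QEMU creates in server mode, and the Interface status column in OVS shows whether that connection was ever established:

ovs-vsctl get Interface dpdkvhostclient1 status
ovs-vsctl get Interface dpdkvhostclient2 status
# connection errors typically also show up in the vswitchd log
grep -i vhost /var/log/openvswitch/ovs-vswitchd.log | tail

If the status map stays empty and the log shows connect failures, the socket path or server/client mode used by QEMU (or by the libvirt XML) does not match the vhost-server-path given to OVS, which would also explain the LINK_DOWN state shown above.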

Arch Linux and slow Wi-Fi speed/connection. Broadcom BCM4313

I'm trying to figure out the reason for such slow Wi-Fi speeds on Arch. I have Windows 8 installed alongside Arch, and I can say for sure that on Windows the speed is far higher. When I switch back to Arch and start surfing the Internet, it feels like my speed is cut in half! I have only been using Arch for a short while and there is a lot I don't know (especially about networking). Here are the steps I usually follow to connect to the internet using wpa_supplicant:
cat wpa.conf
network={
ssid="Home"
#psk="pass"
psk=05a9b845b68a55291d1d5b94e50b9b1811706b0746d89e67f581cc5f7b88b758
}
sudo wpa_supplicant -B -iwlp10s0b1 -cwpa.conf
sudo dhcpcd wlp10s0b1
Here are the card details:
iwconfig (after connecting to the internet):
wlp10s0b1 IEEE 802.11bgn ESSID:"Home"
Mode:Managed Frequency:2.412 GHz Access Point: E0:CB:4E:ED:8F:48
Bit Rate=28.9 Mb/s Tx-Power=19 dBm
Retry short limit:7 RTS thr:off Fragment thr:off
Power Management:off
Link Quality=70/70 Signal level=-39 dBm
Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0
Tx excessive retries:2168 Invalid misc:12 Missed beacon:0
lspci -k
...
0a:00.0 Network controller: Broadcom Corporation BCM4313 802.11bgn Wireless Network Adapter (rev 01)
Subsystem: Hewlett-Packard Company Device 1795
Kernel driver in use: bcma-pci-bridge
Kernel modules: bcma
The output of dmesg:
[ 12.123632] bcma: bus0: Found chip with id 0x4313, rev 0x01 and package 0x08
[ 12.123663] bcma: bus0: Core 0 found: ChipCommon (manuf 0x4BF, id 0x800, rev 0x24, class 0x0)
[ 12.123689] bcma: bus0: Core 1 found: IEEE 802.11 (manuf 0x4BF, id 0x812, rev 0x18, class 0x0)
[ 12.123741] bcma: bus0: Core 2 found: PCIe (manuf 0x4BF, id 0x820, rev 0x11, class 0x0)
[ 12.135277] bcma: bus0: Bus registered
[ 13.610907] b43: probe of bcma0:1 failed with error -524
[ 13.695776] brcmsmac bcma0:1: mfg 4bf core 812 rev 24 class 0 irq 19
[ 13.928846] brcmsmac bcma0:1 wlp10s0b1: renamed from wlan0
[ 104.737832] brcmsmac bcma0:1: brcms_ops_bss_info_changed: qos enabled: false (implement)
[ 104.737847] brcmsmac bcma0:1: brcms_ops_config: change power-save mode: false (implement)
[ 448.818973] brcmsmac bcma0:1: brcmsmac: brcms_ops_bss_info_changed: associated
[ 448.818982] brcmsmac bcma0:1: brcms_ops_bss_info_changed: qos enabled: true (implement)
[ 449.080594] brcmsmac bcma0:1: wl0: brcms_c_d11hdrs_mac80211: txop exceeded phylen 159/256 dur 1778/1504
[ 449.085704] brcmsmac bcma0:1: wl0: brcms_c_d11hdrs_mac80211: txop exceeded phylen 137/256 dur 1602/1504
[ 528.990763] brcmsmac bcma0:1: brcms_ops_bss_info_changed: arp filtering: 1 addresses (implement)
[ 688.637057] brcmsmac bcma0:1: brcms_ops_bss_info_changed: arp filtering: 0 addresses (implement)
[ 698.117592] brcmsmac bcma0:1: brcms_ops_bss_info_changed: arp filtering: 1 addresses (implement)
[ 792.648850] brcmsmac bcma0:1: START: tid 1 is not agg'able
[ 792.668819] brcmsmac bcma0:1: START: tid 1 is not agg'able
[ 792.825554] brcmsmac bcma0:1: START: tid 1 is not agg'able
[ 1114.888310] brcmsmac bcma0:1: START: tid 1 is not agg'able
[ 1114.918308] brcmsmac bcma0:1: START: tid 1 is not agg'able
[ 1240.631071] brcmsmac bcma0:1: START: tid 1 is not agg'able
[ 2282.871425] brcmsmac bcma0:1: START: tid 1 is not agg'able
[ 2282.884764] brcmsmac bcma0:1: START: tid 1 is not agg'able
A note in advance: please don't suggest checking the router settings (everything is set correctly and has been tested on different devices), so the problem is neither the router nor physical obstacles.
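Two things that are quick to rule out on the Arch side (interface name wlp10s0b1 as above): whether power saving gets re-enabled after association, and what rate the driver is actually negotiating; iwconfig above shows 28.9 Mb/s, which is well below what an 802.11n link should reach:

# current negotiated rate and signal as reported by the kernel
iw dev wlp10s0b1 link
# make sure power saving stays off (other tools can re-enable it after association)
iw dev wlp10s0b1 set power_save off
iw dev wlp10s0b1 get power_save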
