I am trying to measure TCP throughput between my Android device and an access point.
Setup:
Android device <------> AP <---wired (10 Gbps Ethernet)---> Windows/Linux laptop
iperf reports 5-8 Mbps higher throughput when I use the Linux laptop at the other end.
iperf versions used: 2.0.5 on Windows, 2.0.4 on Linux.
Does anyone have any idea about this behaviour?
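When two iperf builds of different versions disagree by a few Mbps, one sanity check is an independent measurement that doesn't depend on iperf at all. This is a minimal sketch using plain Python sockets (it runs over loopback as written; to test the real path, run the receiver on the laptop and point the sender at its IP - the function name and defaults here are my own, for illustration):

```python
import socket, threading, time

def tcp_throughput_mbps(total_bytes=20_000_000, chunk=65536):
    """Blast bytes over a local TCP connection and return the measured Mbit/s."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))          # OS picks a free port
    srv.listen(1)
    port = srv.getsockname()[1]

    def sender():
        s = socket.create_connection(("127.0.0.1", port))
        buf = b"\0" * chunk
        sent = 0
        while sent < total_bytes:
            s.sendall(buf)
            sent += chunk
        s.close()

    t = threading.Thread(target=sender)
    t.start()
    conn, _ = srv.accept()
    received = 0
    start = time.perf_counter()
    while received < total_bytes:
        data = conn.recv(chunk)
        if not data:
            break
        received += len(data)
    elapsed = time.perf_counter() - start
    t.join(); conn.close(); srv.close()
    return received * 8 / elapsed / 1e6

if __name__ == "__main__":
    print(f"{tcp_throughput_mbps():.1f} Mbit/s")
```

If this script shows the same asymmetry as iperf, the difference is in the OS TCP stacks (window scaling, autotuning defaults); if not, it points at the differing iperf versions themselves.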
I am building a CNC controller that has a 1 Gbps Ethernet interface for communication with a Windows desktop. Right now I am using Npcap to implement a custom protocol between Windows and the CNC microcontroller, aiming for a high-performance, low-latency link.
Problem: Npcap is extremely slow. I benchmarked both the microcontroller firmware and Npcap. My microcontroller can easily reach 700 Mbps in both transmit and receive, but Npcap gives me only 1 Mbps transmit and 56 kbps receive. Is Npcap itself slow, or am I doing something wrong?
I see that Microsoft Winsock also offers some limited capabilities for raw Ethernet. So far I haven't been able to get Winsock working in SOCK_RAW mode in my Windows application. But before I go down the rabbit hole of Winsock raw sockets, or give up and write my own network driver, I want to know: are Winsock transfer speeds in SOCK_RAW mode better than Npcap's, or are they similarly limited?
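One likely culprit is per-packet call overhead rather than Npcap's data path itself: each pcap_sendpacket involves a user-to-kernel transition, and at small frame sizes that per-call cost usually dominates long before the wire does. This is why the WinPcap/Npcap API provides the pcap_sendqueue_* functions (pcap_sendqueue_transmit in particular), which batch many frames into one kernel call. The general effect is easy to demonstrate even without Npcap; this hedged sketch pushes the same number of bytes through /dev/null in small versus large writes, one syscall per "frame" versus batched:

```python
import os, time

def write_all(fd, total, chunk):
    """Write `total` bytes to fd in `chunk`-sized pieces; return elapsed seconds."""
    buf = b"\0" * chunk
    start = time.perf_counter()
    written = 0
    while written < total:
        written += os.write(fd, buf)
    return time.perf_counter() - start

devnull = os.open(os.devnull, os.O_WRONLY)
total = 1_000_000
t_small = write_all(devnull, total, 64)      # one call per small "frame"
t_large = write_all(devnull, total, 65536)   # same bytes, batched into few calls
os.close(devnull)
print(f"64-byte writes: {t_small:.4f}s, 64 KiB writes: {t_large:.4f}s")
```

The same principle applies to the receive side (process packets in batches from the capture loop rather than one pcap call per frame), though 56 kbps receive is slow enough that a filter or buffering misconfiguration is also worth checking.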
The system is Ubuntu 20.04 with kernel 5.14, and there is a network card; the socat version is 1.7.3.3. The following command is used to bridge the network interface to the PTY device /dev/huu:
socat -d -d -d -d -b65535 INTERFACE:<interface-name>,type=2 PTY,rawer,link=/dev/huu &
Then the standard open/read/write library calls are used on /dev/huu to talk to the network card. However, for both read and write, the packet size is limited to 0xFFF (4095) bytes; any data beyond 4095 bytes is automatically carried over into the next packet. The network card driver's log shows no 4095-byte limitation, so it appears to be a limitation of socat. How can socat be configured to break this limit? Thank you very much.
Goal: read or write more than 4095 bytes in one packet.
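The 4095-byte boundary is almost certainly not socat but the kernel's PTY layer: the N_TTY line-discipline buffer is 4096 bytes on Linux, so no socat option will raise it. The usual approach is to treat the PTY as a byte stream and reassemble frames in the application (for example with a length prefix). This minimal sketch uses a standalone PTY pair rather than socat, and shows that single reads stay capped near 4 KiB while a read loop recovers the full 16 KiB "packet" intact:

```python
import os, tty

master, slave = os.openpty()
tty.setraw(slave)                 # raw mode, analogous to socat's `rawer`
os.set_blocking(master, False)    # so a full PTY buffer doesn't deadlock us

payload = bytes(range(256)) * 64  # one logical "packet" of 16384 bytes
sent = 0
received = bytearray()
chunks = []
while len(received) < len(payload):
    if sent < len(payload):
        try:
            sent += os.write(master, payload[sent:sent + 4096])
        except BlockingIOError:
            pass                  # PTY buffer full; the read below drains it
    data = os.read(slave, 65536)
    chunks.append(len(data))
    received += data

print(max(chunks))                      # never exceeds the line-discipline buffer
print(bytes(received) == payload)       # but looping reassembles the whole packet
```

So the fix belongs in the code that reads /dev/huu: accumulate reads until a full frame (by length prefix or delimiter) has arrived, rather than assuming one read() equals one packet.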
We are noticing that we max our WAN port out at 400 Mbps. We have a 1Gbps connection with our provider delivered over pure Ethernet (in a datacenter).
Here is an example of the max-out from our (crude) MRTG graphs: [graph omitted]
We connect directly to our provider via Ethernet at 1 Gbps. This comes into our Cisco 2901 router, which in turn connects directly to our Watchguard Firebox at 1 Gbps (in drop-in mode). All devices report a 1 Gbps line speed at full duplex.
The Firebox then connects to our gigabit switch, which connects directly to our servers (also at 1 Gbps each).
We can't seem to achieve anything over 400 Mbps through this setup. The Firebox X1250e we are running is rated at 1.5 Gbps throughput for raw packet forwarding (which is all we are doing - no proxying or fixup is performed on the data).
We have even run a command-line Ookla Speedtest on one of the servers, and it hits the same 400 Mbps cap.
I know people are going to say the Cisco 2901 is the issue, but we are running full 1500-byte packets, and even at 400 Mbps over an extended period, this is an example of our CPU usage:
sh proc cpu
CPU utilization for five seconds: 18%/17%; one minute: 17%; five minutes: 17%
Also worth noting, we are not running any QoS on the Firebox - all QoS is disabled (the whole module unloaded).
The Cisco 2901 has CEF enabled.
We have the following configuration:
Does anybody know what may be causing this "cap"?
We would like some tips, advice and suggestions as to how we can diagnose this remotely (we can't easily go to the datacenter to perform tests).
Any help and advice are greatly appreciated; thank you in advance.
Depending on the 2901's configuration, it's quite possible that it's simply maxed out. Cisco documents a peak throughput of 3+ Gbit/s, but recommends the 2901 for just 25 Mbit/s of duplex throughput. [1]
For testing, I'd connect a test device/machine in front of the 2901 and check whether it tops out at the same speed. The next step is to connect the test machine behind the 2901 but in front of the Watchguard.
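The gap between the 2901's marketing peak and its recommended rate is about sustained software forwarding, which is bounded by packets per second rather than bits per second. A back-of-the-envelope calculation shows that 400 Mbit/s of full-size frames is already a nontrivial pps load, and the same bit rate in small packets would be roughly 23x worse:

```python
def pps(rate_bps, frame_bytes):
    """Packets per second at a given line rate and frame size
    (L2 payload only; preamble/IFG overhead ignored for a rough estimate)."""
    return rate_bps / (frame_bytes * 8)

print(f"{pps(400e6, 1500):,.0f} pps at 1500-byte frames")   # ~33,333 pps
print(f"{pps(400e6, 64):,.0f} pps at 64-byte frames")       # 781,250 pps
```

This is also why the low CPU figure in `sh proc cpu` isn't conclusive on its own: on these platforms the forwarding bottleneck can show up as interrupt/switching load or buffer exhaustion before the five-minute CPU average looks alarming.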
Does anyone know why the seconds field would differ by about 30,000 between a Mac and an ESP32 (Arduino) synced to the same NTP server?
I have a group of ESP32 chips running NTP clients; they all sync from a local Windows 10 NTP server and do so correctly. The ESP32 chips all agree with each other, but the Mac does not.
I have a macOS Mojave machine also set to use that Windows 10 NTP server as its time server, and I have successfully requested updates with 'sntp -sS'.
My problem is that the gettimeofday values differ wildly - by about 30,000 seconds - between the ESP32 and Mac platforms.
Timezone doesn't seem to matter. On the Mac I am reading this value via time.time() in Python, which is supposed to call gettimeofday for me.
It turns out the "standard" NTP implementation on the ESP32 adds a fudge factor, combined with a timezone shift, and together those accounted for the difference. I modified the library not to add them, and it now works as expected.
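This resolution is consistent with the symptoms: 30,000 s is about 8 h 20 min, which is not a whole-hour timezone offset on its own, so a timezone shift plus an extra fudge term fits. It also matches why "timezone doesn't seem to matter" on the Mac: gettimeofday()/time.time() return seconds since the UTC epoch and are unaffected by the timezone setting, so any library that folds an offset into the epoch value itself will disagree with every conforming clock. A small demonstration (POSIX-only, since it relies on time.tzset()):

```python
import os, time

os.environ["TZ"] = "UTC"
time.tzset()
t_utc = time.time()

os.environ["TZ"] = "America/New_York"
time.tzset()
t_ny = time.time()

# The epoch value is identical; only the *rendered* local time changes.
print(abs(t_ny - t_utc) < 1.0)                                    # True
print(time.localtime(t_ny).tm_hour != time.gmtime(t_ny).tm_hour)  # True
```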
I'm making a multiplayer game and need to test it in a simulated environment with packet loss, high latency, packet reordering, etc. I'm using the Network Emulator for Windows Toolkit (NEWT) for this purpose. However, I can't get it to work for loopback packets. For instance, when I do "ping google.com" I see increased latency, but when I do "ping 127.0.0.1" the latency is under 1 ms, so I think NEWT is not intercepting these packets. Does anyone have an idea how I can make this work?
If you install VMware Player you can create a "remote" server with its own network interface. You probably aren't going to be able to intercept the loopback address reliably on Windows. More importantly, what you are trying to do is not a good test: it doesn't tell you how the game will run in a realistic setting, with two computers and two OSs interacting. With a VM and a network emulator you have something closer to reality.
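Besides the VM approach, another common workaround during development is to inject the impairments in the game's own socket layer, which sidesteps the loopback-interception problem entirely. A hedged sketch of such an application-level UDP shim (the name lossy_send and its defaults are made up for illustration):

```python
import random, socket, threading

def lossy_send(sock, data, addr, loss=0.05, delay_ms=80, jitter_ms=20):
    """Drop, delay, and jitter outgoing datagrams to emulate a bad link.
    Reordering falls out naturally: a packet given a shorter random delay
    can overtake one sent earlier with a longer delay."""
    if random.random() < loss:
        return                                    # simulated packet loss
    delay = max(0.0, delay_ms + random.uniform(-jitter_ms, jitter_ms)) / 1000
    threading.Timer(delay, sock.sendto, args=(data, addr)).start()

# Loopback round trip through the shim (loss=0 so the demo is deterministic).
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
rx.settimeout(2.0)
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
lossy_send(tx, b"hello", rx.getsockname(), loss=0.0, delay_ms=50, jitter_ms=0)
msg, _ = rx.recvfrom(1500)
print(msg)
rx.close(); tx.close()
```

The trade-off is the one raised above: a shim only exercises your own code's tolerance to loss and reordering, not the behaviour of two real OS network stacks, so it complements rather than replaces the VM-plus-emulator test.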