I have a couple of questions related to mobile network generations:
When comparing generations of mobile networks, people mention that 1G and 2G have to use the maximum bandwidth. What is bandwidth in this context, and why do they use the maximum bandwidth?
1G and 2G are narrowband networks. What is narrowband here?
3G and 4G are wideband networks. Don't they use the maximum bandwidth?
They use the maximum allowed bandwidth per channel to transmit analogue voice information. The available spectrum is limited, and there is a minimum bandwidth needed to make the voice recognizable on the other side, so they can't go below this fixed bandwidth.
Narrowband refers to the frequency span of the carrier: in a narrowband system it is usually just a few kilohertz, which makes the available bandwidth low (this also depends on the kind of modulation used).
They don't need the maximum available bandwidth for voice communications, since for acceptable voice quality we only need to transmit signals that encode a few kilohertz of analogue information.
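As a rough, illustrative example: AMPS, a typical 1G system, used 30 kHz analogue FM channels in a roughly 25 MHz allocation per direction, so the whole band supports only about 25 MHz / 30 kHz ≈ 833 simultaneous calls. Each call occupies its fixed 30 kHz channel completely for its whole duration, which is why these systems are described as always using the maximum (per-channel) bandwidth, even though each individual channel is narrow.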
The AltBeacon library is great at detection! One thing we noticed while testing is that if we have an app doing both transmitting and receiving, we sometimes fail to pick up the other phones (very sporadic). With real devices like iBeacons we are constantly able to pick them up.
My question is: how do we control the frequency of the transmitter vs. the frequency of the scanning (receiving), so that we can do both transmission and detection at the same time?
My goal is to achieve the best of both worlds, scanning and transmitting. Is that even possible?
https://altbeacon.github.io/android-beacon-library/beacon-transmitter.html
By default, the Android Beacon Library's BeaconTransmitter uses the highest power and frequency allowed by the underlying APIs in the Android operating system. Here are the settings, showing the defaults:
beaconTransmitter.setAdvertiseTxPowerLevel(
AdvertiseSettings.ADVERTISE_TX_POWER_HIGH);
beaconTransmitter.setAdvertiseMode(
AdvertiseSettings.ADVERTISE_MODE_LOW_LATENCY);
While the settings are configurable, presumably you already want the fastest and strongest advertising for your use case. And that is exactly what the library does with no extra configuration. (Note: there is very little reason to lower the transmit power or frequency, because tests show that transmitters use negligible battery. See my blog post here: http://www.davidgyoungtech.com/2015/11/12/battery-friendly-beacon-transmission)
If you are seeing that hardware beacons are detected reliably, but some phone models' transmitters are detected only intermittently, then the issue may be hardware limitations of those phones themselves. You may wish to characterize which ones are problematic.
I can confirm that I see very strong transmissions from the Pixel 3a, Moto G7, Samsung Galaxy S10 and Huawei P9 Lite I have handy.
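For reference, a minimal sketch of an app that transmits and ranges at the same time might look like the code below. It follows the library's documented API, but treat it as an illustration: the IDs are placeholders, and the ranging calls differ slightly between library versions (older versions use bind() plus startRangingBeaconsInRegion()). The transmitter is left at its default TX_POWER_HIGH / LOW_LATENCY settings, and the scan cadence is set explicitly:
import android.content.Context;
import java.util.Arrays;
import org.altbeacon.beacon.Beacon;
import org.altbeacon.beacon.BeaconManager;
import org.altbeacon.beacon.BeaconParser;
import org.altbeacon.beacon.BeaconTransmitter;
import org.altbeacon.beacon.Region;

public class TransmitAndScan {
    private static final String ALTBEACON_LAYOUT =
            "m:2-3=beac,i:4-19,i:20-21,i:22-23,p:24-24,d:25-25";

    public void start(Context context) {
        // Transmitter side: defaults are already ADVERTISE_TX_POWER_HIGH
        // and ADVERTISE_MODE_LOW_LATENCY, so nothing needs to be raised.
        Beacon beacon = new Beacon.Builder()
                .setId1("2f234454-cf6d-4a0f-adf2-f4911ba9ffa6") // placeholder UUID
                .setId2("1")
                .setId3("2")
                .setManufacturer(0x0118)
                .setTxPower(-59)
                .setDataFields(Arrays.asList(new Long[] {0L}))
                .build();
        BeaconParser parser = new BeaconParser().setBeaconLayout(ALTBEACON_LAYOUT);
        new BeaconTransmitter(context, parser).startAdvertising(beacon);

        // Scanner side: control how often scans run while transmitting.
        BeaconManager manager = BeaconManager.getInstanceForApplication(context);
        manager.getBeaconParsers().add(new BeaconParser().setBeaconLayout(ALTBEACON_LAYOUT));
        manager.setForegroundScanPeriod(1100L);     // scan for 1.1 s ...
        manager.setForegroundBetweenScanPeriod(0L); // ... with no pause between scans
        manager.addRangeNotifier((beacons, region) -> {
            for (Beacon b : beacons) {
                // handle each detected beacon here
            }
        });
        manager.startRangingBeacons(new Region("all-beacons", null, null, null));
    }
}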
Working on my thesis, I need to create a simulation of video transmission over a normal WLAN to determine how much the quality degrades depending on the number of devices or the quality of the originating transmission.
I was using NS-3 for this when someone suggested that I use my home devices instead (I have a number of computers, tablets, e-readers, video game consoles, etc.).
It seemed like a good idea, since my Wi-Fi is fast enough: I can use my Mac as the hotspot, connect all devices through it, sniff the packets with Wireshark, and limit the transfer speed using Network Link Conditioner. My question is: would limiting the transfer speed with Network Link Conditioner affect the devices using my computer as a hotspot, or does it only affect my own computer, in which case I need to find another way of limiting the speed to successfully simulate what I need?
I am not 100% sure what you are after, but seeing that you mentioned limiting bandwidth on your Mac, this may be of use:
Basically, you'll need a PC that can run FreeBSD and two network interfaces (e.g. a built-in NIC plus one other card). You then set up the box to bridge those two cards.
Check out this tutorial to see how: the Network Bandwidth, Latency and Delay Simulation Tutorial.
Once set up, you can control the parameters of that bridge using the ipfw command in FreeBSD, allowing you to change bit rates and latency and to simulate packet loss.
With this box in between your video sources (the internet?) and a Wi-Fi router on the other side to connect your devices to, you'd be able to simulate a variety of conditions.
[Note: credit for digging out this link goes to a colleague of mine, but I used this on a project once he'd set it up and it was very powerful.]
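For reference, the dummynet/ipfw commands to shape such a bridge typically look something like the following (the interface name, rule number and values are illustrative only; follow the tutorial above for the actual setup):

# send all traffic crossing the bridge through dummynet pipe 1
ipfw add 100 pipe 1 ip from any to any via bridge0

# shape pipe 1: 10 Mbit/s, 50 ms added latency, 1% packet loss
ipfw pipe 1 config bw 10Mbit/s delay 50ms plr 0.01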
I am trying to measure PCIe bandwidth on an ATI FirePro 8750. The AMD APP sample PCIeBandwidth in the SDK measures the bandwidth of transfers in both directions:
Host to device, using clEnqueueWriteBuffer().
Device to host, using clEnqueueReadBuffer().
On my system (Windows 7, Intel Core 2 Duo, 32-bit) the output looks like this:
Selected Platform Vendor : Advanced Micro Devices, Inc.
Device 0 : ATI RV770
Host to device : 0.412435 GB/s
Device to host : 0.792844 GB/s
This particular card has 2 GB of DRAM and a maximum clock frequency of 750 MHz.
1. Why is the bandwidth different in each direction?
2. Why is the bandwidth so small?
Also, I understand that this communication takes place through DMA, so the bandwidth should not be affected by the CPU.
This paper from Microsoft Research gives some inkling of why PCIe data transfer bandwidth between GPU and CPU is asymmetric. The paper describes performance metrics for FPGA-GPU data transfer bandwidth over PCIe, and it also includes metrics for CPU-GPU data transfer bandwidth over PCIe.
To quote the relevant section:
'it should also be noted that the GPU-CPU transfers themselves also show some degree of asymmetric behavior. In the case of a GPU to CPU transfer, where the GPU is initiating bus master writes, the GPU reaches a maximum of 6.18 GByte/Sec. In the opposite direction from CPU to GPU, the GPU is initiating bus master reads and the resulting bandwidth falls to 5.61 GByte/Sec. In our observations it is typically the case that bus master writes are more efficient than bus master reads for any PCIe implementation due to protocol overhead and the relative complexity of implementation. While a possible solution to this asymmetry would be to handle the CPU to GPU direction by using CPU initiated bus master writes, that hardware facility is not available in the PC architecture in general.'
The answer to the second question, on why the bandwidth is so small, likely comes down to the size of the individual data transfers.
See Figures 2, 3, 4 and 5 in that paper. I have also seen graphs like this at the first AMD Fusion conference. The explanation is that a PCIe data transfer carries overheads due to the protocol and the device latency. These overheads are more significant for small transfer sizes and become less significant for larger ones.
What levers do you have to control or improve performance?
Getting the right combination of CPU/motherboard and GPU is the hardware lever: chips with the maximum number of PCIe lanes are better, and a higher-spec PCIe generation is better (PCIe 3.0 beats PCIe 2.0), provided all components support the higher standard.
As a programmer, controlling the data transfer size is a very important lever.
Transfer sizes of 128-256 KB reach approximately 50% of the maximum bandwidth; transfers of 1-2 MB reach over 90% of it.
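A crude back-of-the-envelope model shows why. Assuming, purely for illustration, a fixed per-transfer overhead of about 25 µs and a peak link rate of 8 GB/s (neither value is taken from the question), the effective bandwidth as a function of transfer size falls out as follows:
public class PcieTransferModel {
    public static void main(String[] args) {
        // Illustrative assumptions, not measured values:
        double peakBandwidth = 8e9;          // bytes per second at the link's peak rate
        double perTransferOverhead = 25e-6;  // seconds of fixed setup/latency cost per transfer

        // effective bandwidth = size / (size / peak + overhead)
        long[] sizes = {4 << 10, 64 << 10, 256 << 10, 1 << 20, 4 << 20, 16 << 20};
        for (long size : sizes) {
            double seconds = size / peakBandwidth + perTransferOverhead;
            double effective = size / seconds;
            System.out.printf("%8d KB -> %5.2f GB/s (%.0f%% of peak)%n",
                    size / 1024, effective / 1e9, 100.0 * effective / peakBandwidth);
        }
    }
}
With these made-up numbers, small transfers sit well below half of peak while multi-megabyte transfers approach it, mirroring the shape of the curves in the paper.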
I have two adjacent computers, both running a recent version of Ubuntu. Both computers have:
Multiple USB 2.0 ports
RJ-45 connection
5400RPM hard drive
ExpressCard slot
PCMCIA Type II
I want to transfer as much data as possible in a set period of time.
What is the fastest physical medium to transfer data between the two computers without swapping hard drives?
What is the fastest protocol (not necessarily TCP/IP based) for transferring high-entropy data? If it is TCP/IP, what needs to be tweaked for optimal performance?
First of all, RJ-45 is not a medium, but just a connector type, so your Ethernet connection could be anything from 10BASE-T (10 Mbit/s) to 10GBASE-T (10 Gbit/s). With Ethernet, the link speed is defined by the fastest speed grade that both peers support in common.
USB Hi-Speed mode is specified at 480 Mbit/s (60 MByte/s), but the typical maximum is somewhere near 40 MByte/s due to protocol overhead. That figure applies to direct USB host-to-device connections; you have two USB hosts, so you need some kind of device in the middle to play the device role on each side, and I expect that will lower the achievable data rate further.
With Ethernet you get a simple plug-and-play technology with a well-known (socket) API. The transfer speed depends on the link type:
Max. TCP/IP data transfer rates (taken from here):
Fast Ethernet (100Mbit): 11.7 MByte/s
Gigabit Ethernet (1000Mbit): 117.6 MByte/s
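As a sanity check on the Fast Ethernet figure (assuming a standard 1500-byte MTU and TCP timestamps enabled, which is the usual case on Linux): each full-size frame carries 1448 bytes of TCP payload out of roughly 1538 bytes on the wire once the preamble, Ethernet header, FCS, inter-frame gap and IP/TCP headers are counted, so the usable rate is about 12.5 MByte/s × (1448 / 1538) ≈ 11.8 MByte/s, in line with the 11.7 MByte/s quoted above. The Gigabit figure scales the same way.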
The USB 2.0 specification results in a 480 Mbit/s rate, which is 60 MB/s.
Ethernet depends on the network cards (NIC) used and to a lesser degree the wiring used. If both NICs are 1Gbit/s they will both auto-negotiate to 1 Gbit/s translating to 125 MB/s. If one or both NICs only support 100 Mbit/s then they will auto-negotiate to 100 Mbit/s and your speed will be 12.5 MBytes/s.
Wireless is also an option with 802.11n supporting up to 600 Mb/s (75 MB/s) - faster than USB 2.0.
USB 3.0 is the latest USB spec supporting up to 5 Gb/s (625 MB/s).
Of course, actual throughput will differ and depends on many other factors, such as wiring, interference, latency, etc.
Choosing between TCP and UDP depends on the type of connection you need and your application's capacity to deal with dropped packets. TCP has a higher cost for building up the initial connection, but the transmission is reliable, and for long-running transfers it may turn out to be the fastest. UDP is cheaper for creating connections, but you may get dropped packets.
The Maximum Transmission Unit (MTU) is a parameter that can have a significant effect on an IP-based network. Picking the right MTU depends on several factors; the Internet has numerous articles on this.
Other tweaks are the basics, like closing known chatty apps and the NetBIOS service if you're on Windows, etc. (there are lots of hits on Google for speeding up TCP).
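If you want to verify what the link actually delivers without involving the disks, a minimal sketch along these lines measures raw TCP throughput between the two machines (the port number and transfer size are arbitrary choices for illustration; dedicated tools such as iperf do the same job more thoroughly):
import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class TcpThroughput {
    static final int PORT = 5001;
    static final long TOTAL_BYTES = 1L << 30; // 1 GiB of dummy data

    // Run with "server" on one machine, "client <server-ip>" on the other.
    public static void main(String[] args) throws Exception {
        if (args[0].equals("server")) {
            try (ServerSocket listener = new ServerSocket(PORT);
                 Socket s = listener.accept();
                 InputStream in = s.getInputStream()) {
                byte[] buf = new byte[64 * 1024];
                long received = 0, start = System.nanoTime();
                int n;
                while ((n = in.read(buf)) != -1) {
                    received += n;
                }
                double seconds = (System.nanoTime() - start) / 1e9;
                System.out.printf("Received %.0f MB in %.2f s = %.1f MB/s%n",
                        received / 1e6, seconds, received / 1e6 / seconds);
            }
        } else {
            try (Socket s = new Socket(args[1], PORT);
                 OutputStream out = s.getOutputStream()) {
                byte[] buf = new byte[64 * 1024]; // dummy payload
                for (long sent = 0; sent < TOTAL_BYTES; sent += buf.length) {
                    out.write(buf);
                }
            }
        }
    }
}
Comparing the measured figure against the table above tells you how much headroom is left for protocol or application tweaks.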
I have referred to different threads about reliable UDP vs. TCP for large file transfers. However, before deciding to choose UDP over TCP (and adding a reliability mechanism on top of UDP), I want to benchmark the performance of UDP and TCP. Is there any utility on Linux or Windows that can give me this performance benchmark?
I found that iperf is one such utility. But when I used iperf on two Linux machines to send data over both UDP and TCP, I found that TCP performs better than UDP for 10 MB of data. This was surprising to me, as it is widely believed that UDP performs better than TCP.
My questions are:
1. Does UDP always perform better than TCP, or is there a specific scenario where UDP is better than TCP?
2. Are there any published benchmarks validating this?
3. Is there any standard utility to measure TCP and UDP performance on a particular network?
Thanks in Advance
UDP is NOT always faster than TCP. There are many TCP performance tunings, including RSS/vRSS. For example, TCP on Linux on Hyper-V can reach 30 Gbit/s, and Linux on Azure can reach 20+ Gbit/s. (I think it is similar for Windows VMs; on other virtualization platforms such as Xen and KVM, TCP does even better.)
There are lots of tools to measure this: iPerf, NTttcp (Windows), ntttcp-for-linux, netperf, etc.:
iPerf3: https://github.com/esnet/iperf
Windows NTTTCP: https://gallery.technet.microsoft.com/NTttcp-Version-528-Now-f8b12769
ntttcp-for-Linux: https://github.com/Microsoft/ntttcp-for-linux
Netperf: http://www.netperf.org/netperf/
The difference has two sides, conceptual and practical. Much of the documentation regarding performance dates from the '90s, when CPUs were significantly faster than networks and network adapters were quite basic.
Consider that UDP can technically be faster because it has less overhead, but modern hardware is not fast enough to saturate even a 1 GigE link at the smallest packet size, and TCP is accelerated by pretty much any card, from checksumming to segmentation through to full offload.
Use UDP when you need multicast, i.e. distributing to more than a few recipients. Use UDP when TCP windowing and congestion control are not well suited to the link, such as high-latency, high-bandwidth WAN links: see UDT and WAN accelerators, for example.
Look at any performance documentation for 10 GigE NICs: the basic problem is that the host is not fast enough to saturate the NIC, so many vendors provide full TCP/IP stack offload. Also consider file servers, such as NetApp et al.: where the stack runs in software, you may see the MTU tweaked to larger sizes to reduce CPU overhead. This is popular with low-end equipment such as SOHO NAS devices from ReadyNAS, Synology, etc. With high-end equipment, if you offload the entire stack and the hardware is capable, better latency can be achieved with normal Ethernet MTU sizes, and jumbograms become obsolete.
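To make the multicast point concrete, here is a minimal sketch of a UDP multicast sender and receiver; the group address and port are arbitrary illustrative values, and any number of receivers that join the group will see the single datagram the sender puts on the wire, something TCP simply cannot do:
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.MulticastSocket;
import java.nio.charset.StandardCharsets;

public class MulticastDemo {
    static final String GROUP = "230.0.0.1"; // arbitrary multicast group address
    static final int PORT = 4446;

    // Run with "send" on one machine; run without arguments on each receiver.
    public static void main(String[] args) throws Exception {
        InetAddress group = InetAddress.getByName(GROUP);
        if (args.length > 0 && args[0].equals("send")) {
            try (DatagramSocket socket = new DatagramSocket()) {
                byte[] data = "hello, group".getBytes(StandardCharsets.UTF_8);
                socket.send(new DatagramPacket(data, data.length, group, PORT));
            }
        } else {
            try (MulticastSocket socket = new MulticastSocket(PORT)) {
                socket.joinGroup(group); // deprecated in newer JDKs but still functional
                byte[] buf = new byte[1500];
                DatagramPacket packet = new DatagramPacket(buf, buf.length);
                socket.receive(packet); // blocks until a datagram arrives
                System.out.println(new String(packet.getData(), 0,
                        packet.getLength(), StandardCharsets.UTF_8));
            }
        }
    }
}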
iperf is pretty much the go-to tool for network testing, but it is not always the best choice on Windows platforms. There you should look at Microsoft's own tool, NTttcp:
http://msdn.microsoft.com/en-us/windows/hardware/gg463264.aspx
Note that these tools test the network rather than application performance. Microsoft's tool goes to extremes: essentially a large memory-locked buffer is queued up waiting for the NIC, to be sent as fast as possible with no interactivity. The tool also includes a warm-up phase to make sure no mallocs are needed during the test period.