Instead of Carrier Aggregation, why don't carriers use the new frequency bandwidth as a separate channel to connect users directly?

Carrier aggregation combines the carrier's existing spectrum (say it previously had 20MHz in the area) with newly acquired spectrum of another 20MHz, to give a wider pipe, i.e. more bandwidth, for data flow between the mobile device & the base station tower.
My question is, why don't they just operate the new bandwidth as a separate pipe? So that there would be two pipes of 20MHz each, instead of one aggregated pipe of 40MHz?
Benefits:
Carriers won't have to deal with the complexity of Carrier Aggregation technology, as the two bands are totally separate (2300MHz & 1800MHz). End-users can be divided over the two frequencies. Theoretically this should halve the load on each channel, providing double the speed to connected users.
Many existing 4G devices use a single antenna for 4G operation. LTE-A needs MIMO support on both the mobile & the tower to work; essentially it needs 2 antennas on both the mobile & the tower to operate on 2 different frequencies, which only stresses the mobile device. Existing hardware cannot benefit from LTE-A, so its speeds will remain the same after the upgrade. In fact, they may slightly decrease after LTE-A is implemented, since newer LTE-A devices will share the load across both frequencies, while existing LTE users can only use one.
For those new to this, this simple image explains how Carrier Aggregation works: https://www.techtalkthai.com/wp-content/uploads/2014/12/qualcomm_carrier_aggregation.jpg

1) Assuming that the operator already has 2 bands, it is really not complex to enable and configure carrier aggregation. It is likely that they already have the ability as part of the latest LTE software upgrades and it is just a matter of configuring it and possibly paying for a license to use it.
The scenario you describe of using two separate pipes instead of a single CA pipe is not feasible (or may not be possible?). When a device establishes a connection in an LTE network, a default bearer is configured, which would not be able to simultaneously use two radio connections without CA or other similar features. Multiple bearers can certainly be established simultaneously; however, they serve different purposes (e.g. voice vs data). That said, CA really is using two different pipes, but they act as a single (logical) bearer. Another advantage of CA is that the control plane signaling takes place on only one of the component carriers, and therefore the other component carriers can be fully dedicated to user plane traffic.
2) I'll clear a few things up:
MIMO has nothing to do with Carrier Aggregation.
Most 4G devices today transmit on a single antenna and receive on two antennas. (They most likely have at least 2 tx and 2 rx antennas, and many have 4 tx and 4 rx antennas, though 4x4 MIMO has not been implemented by most operators.)
Existing devices are already taking advantage of LTE-A features and some operators are currently rolling out 3-carrier CA, 4x4 MIMO as well as 256QAM.
Here is a recent news article which discusses LTE-A features which have already been implemented: https://newsroom.t-mobile.com/news-and-blogs/lte-advanced.htm

Related

Is there any efficient solution for this?

A company named RT&T has a network of n switching stations connected by m high-speed communication links. Each customer’s phone is directly connected to one station in his or her area. The engineers of RT&T have developed a prototype video-phone system that allows two customers to see each other during a phone call. In order to have acceptable image quality, however, the number of links used to transmit video signals between the two parties cannot exceed 4. Suppose that RT&T’s network is represented by a graph. Design an efficient algorithm that computes, for each station, the set of stations it can reach using no more than 4 links.
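One common approach (not stated in this thread, just the usual answer to this kind of exercise) is to run a breadth-first search from every station and stop expanding at depth 4. Below is a minimal Python sketch; the adjacency-list dictionary and the function name are illustrative, not from the question.

    from collections import deque

    def stations_within_k_links(adj, k=4):
        # adj: dict mapping each station to a list of neighbouring stations.
        # One depth-limited BFS per station -> O(n * (n + m)) overall.
        result = {}
        for start in adj:
            seen = {start}
            frontier = deque([(start, 0)])
            while frontier:
                node, dist = frontier.popleft()
                if dist == k:
                    continue            # don't expand beyond k links
                for nxt in adj[node]:
                    if nxt not in seen:
                        seen.add(nxt)
                        frontier.append((nxt, dist + 1))
            seen.discard(start)         # a station trivially "reaches" itself
            result[start] = seen
        return result

    # toy example: a path a-b-c-d-e-f; 'a' reaches b, c, d, e within 4 links
    network = {'a': ['b'], 'b': ['a', 'c'], 'c': ['b', 'd'],
               'd': ['c', 'e'], 'e': ['d', 'f'], 'f': ['e']}
    print(stations_within_k_links(network)['a'])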

Zigbee mesh networking

I'm making an application for a running competition on a fixed track. I'm investigating what systems could be used and thought of using a stick containing a GPS/DGPS module and a Zigbee-enabled chip to communicate the location to a server.
I've researched the subject (on the internet) but I was wondering if anyone has practical advice/experience with using a Zigbee mesh/star topology in a dynamic environment which could apply to this use case. I'm also very interested in using a mesh topology where the data is transmitted (hopping) through the different Zigbee modules to the server.
Runners hold a stick, run around the track, and then pass the stick on to the next team member.
I am not particularly clear about your goal. But I'd like to say a few things.
First, using GPS/DGPS to measure which team reaches the finish line is inaccurate. Raw GPS data has poor accuracy (varying by roughly 1-10 meters), and the sampling rate of a GPS module is low (say once a second?). How do you determine exactly which team reaches the finish line first?
Second, using a mobile ZigBee chip to communicate in real time is hard. I assume your stick has a ZigBee end device. When it is moving (which in your case is pretty fast), it must dynamically find and associate with new parent routers, which takes time and, depending on the wireless environment, might involve several retries. So you can imagine a packet only being successfully delivered to the other end after 100 ms or even a second. This might not be a problem if your stick records the exact time when a team reaches the finish line; since you have a GPS module in the stick, there is no problem getting a very accurate timestamp.
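To illustrate that record-then-forward idea, here is a hypothetical sketch; on_finish_line_crossed, gps_time and zigbee_send are placeholders standing in for whatever the real GPS and ZigBee module APIs provide.

    import time

    event_queue = []    # finish-line events waiting to be delivered

    def on_finish_line_crossed(team_id, gps_time):
        # Timestamp the event locally with the (accurate) GPS clock, so the
        # ZigBee delivery latency no longer affects the measured result.
        event_queue.append({"team": team_id, "finish_time": gps_time})

    def flush_events(zigbee_send):
        # Deliver queued events whenever the mesh accepts them; retries are
        # fine because the timestamp was already taken at the finish line.
        while event_queue:
            if zigbee_send(event_queue[0]):   # assume True once a router ACKs
                event_queue.pop(0)
            else:
                time.sleep(0.1)               # back off and retry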

Cell (cell-id), BTS and BSS in GSM network

What is the relation between a BTS and a cell? I think one piece of BTS hardware can cover a few cells, and also that some cells could be covered by more than one BTS. Is that right?
Is the identification of the concrete BTS part of the information that the mobile receives from the GSM network, or does the mobile phone know only the cell ID?
Is the identification of the BSC part of the information that the mobile receives from the GSM network?
Ad 1: Typically one BTS can handle several cells. Common patterns are one BTS covering a circular area with one omnidirectional antenna, or a three-sector BTS which covers three cells with sector-radiating antennas. One cell can only be handled by one BTS at a time. Two or more BTSes are not possible since their radio communication would interfere with each other. Note that this is completely different in WCDMA/UMTS since there is no concept of cells.
Ad 2: Since one cell is covered by exactly one BTS, the cell ID uniquely identifies the concrete BTS.
Ad 3: Since the BTS does not contain any control logic, the mobile communicates directly with the BSC, e.g. about radio resources.
Edit after comment:
1/ The BTS is "dumb", to put it simply. It only does what the BSC instructs it to do, e.g. the BSC tells the BTS as well as the mobile which frequencies to use for the radio communication. A BTS does not route traffic, as it is hooked to exactly one BSC. It does not even route traffic to one of several mobiles attached to it, as this is done by the BSC. Think of the BTS as a Um-to-Abis physical-layer and protocol transcoder.
2/ Actually my earlier statement that UMTS has no cell concept is not exactly true, it's just different.
GSM is FDMA/TDMA (frequency and time division multiple access). The radio channel is shared by using different frequencies (per cell) and timeslots (per mobile). Since radio frequency is used to distinguish participants, great care must be taken that no two GSM participants use the same frequency at the same time at the same location. The solution to this is cells, where geographic areas have different frequencies assigned. Network planning must ensure that no two neighbouring cells use the same frequencies, as this may lead to interference since you cannot control exactly the size of a cell (e.g. due to absorption and reflection). In GSM, a BTS has a fixed number of radio transmission channels; the number depends on the BTS hardware configuration. If all channels are in use, the cell is full, independent of the location of a mobile in the cell.
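As a small illustration of that planning constraint (my example, not part of the answer), the sketch below flags neighbouring cells that share a frequency; the cell data and adjacency list are made up.

    # cell id -> set of frequencies (ARFCNs) assigned to it
    cell_freqs = {"A": {1, 5}, "B": {2, 6}, "C": {1, 7}}
    neighbours = [("A", "B"), ("B", "C"), ("A", "C")]   # from network planning

    def plan_conflicts(cell_freqs, neighbours):
        # return neighbouring cell pairs that would interfere
        conflicts = []
        for a, b in neighbours:
            shared = cell_freqs[a] & cell_freqs[b]
            if shared:
                conflicts.append((a, b, shared))
        return conflicts

    print(plan_conflicts(cell_freqs, neighbours))   # [('A', 'C', {1})]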
UMTS is CDMA (code division multiple access). The radio channel is shared by encoding the payload in a way that allows it to be decoded later even if several senders use the same frequency range. That requires coding schemes which are collision-free (all codes differ enough from each other that senders do not use too-similar codes) and a great deal of signal processing. As an analogy: at a party you can understand someone across the room, even if ten people are talking. The more senders communicate within the cell, the smaller the cell gets in order to allow the BTS/Node-B to distinguish between senders. Therefore, in UMTS a cell's size is not geographically fixed. The cell "breathes" depending on its load.
OK, this thread is quite old, but requires some further clarifications for next generations.
When talking about GSM physical network architecture, the term BTS (Base Transceiver Station) refers to the physical site itself - the 'small house with the tower' (although modern small BTSs are just boxes hung on walls or placed on rooftops).
Each such physical site can host one omni-directional cell, or several sector cells.
In GSM logical network architecture, there is some confusion.
The terms 'Cell' and 'Base Station' actually refer to the same physical entity (a set of transceiver units, each used to receive and transmit one of the paired UL/DL carrier frequencies allocated in the BA frequency set). Let's call this entity 'physical cell' just for clarification.
The term Base Station is used for radio resource management. A BSIC (BS Id Code, or BTS Id Code) is allocated for the 'physical cell' and is used in the radio-related conversations between the MS (Mobile Station) and the BSS (BTS and BSC), e.g. for measurement reports.
The BSIC is composed of 'local' parameters - Network Color Code (NCC) and BS Color Code (BCC), and is therefore unknown outside the network.
This is where the term Cell comes in:
The term Cell is used for Mobility Management. A Cell Identity (CI) is defined as a refinement of the Routing Area - one RA will include several cells in it.
The Global Cell Identifier (GCI) is composed of network, RA and CI, and is used for handovers inside and outside the network.
It is up to the BSC to convert the BSIC to the Cell Identity (the BSC may convert the BSIC directly to GCI, or the BSC converts to CI, and the MSC will convert it to GCI).
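To make the two identifier spaces concrete, here is a small sketch; the 3-bit NCC / 3-bit BCC layout of the BSIC is standard GSM, while the GCI composition below simply mirrors the description above (network part + RA + CI) and is illustrative only.

    def make_bsic(ncc, bcc):
        # BSIC is 6 bits: Network Colour Code (upper 3) + BS Colour Code (lower 3),
        # so it is only meaningful locally, within or near one network
        assert 0 <= ncc <= 7 and 0 <= bcc <= 7
        return (ncc << 3) | bcc

    def split_bsic(bsic):
        return (bsic >> 3) & 0b111, bsic & 0b111    # -> (ncc, bcc)

    def make_gci(network, ra, ci):
        # globally unique, so usable for handovers outside the network too
        return f"{network}-{ra}-{ci}"

    print(split_bsic(make_bsic(ncc=5, bcc=2)))         # (5, 2)
    print(make_gci(network="262-01", ra=0x1A2B, ci=0x3C4D))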
Hope that helps a bit.
BTS means different things in different places!
When the words MS, BTS, and BSC appear together, BTS means the element between your phone and the MSC.
Sometimes we call a site (a small house and a tower) a BTS.
In Nokia GSM equipment, a cell is called a segment. Every cell has at least one BTS, and different BTSs have different functions, e.g. BTS1 provides voice service and BTS2 provides EDGE service.
The phone gets the BCCH (frequency)/NCC/BCC to identify different cells, and decodes the information from the BCCH to get the CI, LAC, etc.

How to synchronize media playback over an unreliable network?

I wish I could play music or video on one computer, and have a second computer playing the same media, synchronized. As in, I can hear both computers' speakers at the same time, and it doesn't sound funny.
I want to do this over Wi-Fi, which is slightly unreliable.
Algorithmically, what's the best approach to this problem?
EDIT 1
Whether both computers "play" the same media, or one "plays" the media and streams it to the other, doesn't matter to me.
I am certain this is a tractable problem because I once saw a demo of Wi-Fi speakers. That was 5+ years ago, so I figure the technology should make it easier today.
(I myself was looking for an application which did this, hoping I wouldn't have to write one myself, when I stumbled upon this question.)
overview
You introduce a bit of buffer latency and use a network time-synchronization protocol to align the streams. That is, you split the stream up into packets, and timestamp each packet with "play later at time T", where T is for example 50-100ms in the future (or more if the network is glitchy). You send (or multicast) the packets on the local network, to all computers in the chorus. The computers will all play the sound at the same time because the application clock is synced.
Note that there may be other factors like OS/driver/soundcard latency which may have to be factored into the time-synchronization protocol. If you are not too discerning, the synchronization protocol may be as simple as one computer beeping every second -- plus you hitting a key on the other computer in beat. This has the advantage of accounting for any other source of lag at the OS/driver/soundcard layers, but has the disadvantage that manual intervention is needed if the clocks become desynchronized.
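A minimal sketch of the timestamped-packet idea above, assuming the machines' clocks are already synchronized (e.g. via NTP) and that the multicast group, port, and chunk source are placeholders.

    import socket, struct, time

    PLAYOUT_DELAY = 0.1                       # 100 ms buffer to ride out jitter
    MCAST_GRP, MCAST_PORT = "239.0.0.1", 5005

    def send_chunks(get_next_chunk):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        seq = 0
        for chunk in get_next_chunk():        # raw audio chunks from any source
            play_at = time.time() + PLAYOUT_DELAY          # "play later at time T"
            header = struct.pack("!Id", seq, play_at)      # seq number + timestamp
            sock.sendto(header + chunk, (MCAST_GRP, MCAST_PORT))
            seq += 1

    def play_packet(packet, play_chunk):
        seq, play_at = struct.unpack("!Id", packet[:12])
        delay = play_at - time.time()
        if delay > 0:
            time.sleep(delay)                 # wait for the agreed playout instant
        play_chunk(packet[12:])               # hand the audio to the sound card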
hybrid manual-network sync
One way to account for other sources of latency, without constant manual intervention, is to combine this approach with a standard network-clock synchronization protocol; the first time you run the protocol on new machines:
synchronize the machines with manual beat-style intervention
synchronize the machines with a network-clock sync protocol
for each machine in the chorus, take the difference of the two synchronizations; this is the OS/driver/soundcard latency of each machine, which they each keep track of
Now whenever the network backbone changes, all one needs to do is resync using the network-clock sync protocol (#2), and subtract out the OS/driver/soundcard latencies, obviating the need for manual intervention (unless you change the OS/drivers/soundcards).
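A small sketch of that bookkeeping; manual_offset and ntp_offset are assumed helpers that return each machine's clock offset (in seconds) as measured by the beat method and by the network protocol respectively.

    def calibrate(machines, manual_offset, ntp_offset):
        # one-time step: the difference between the two measurements is the
        # OS/driver/soundcard latency of each machine, which we store
        return {m: manual_offset(m) - ntp_offset(m) for m in machines}

    def resync(machines, ntp_offset, audio_latency):
        # later resyncs use only the network protocol, corrected by the
        # stored per-machine audio latency -- no manual beat needed
        return {m: ntp_offset(m) + audio_latency[m] for m in machines}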
nature-mimicking firefly sync
If you are doing this in a quiet room and all machines have microphones, you do not even need manual intervention (#1), because you can have them all follow a "firefly-style" synchronizing algorithm. Many species of fireflies in nature will all blink in unison. http://tinkerlog.com/2007/05/11/synchronizing-fireflies/ describes the algorithm these fireflies use: "If a firefly receives a flash of a neighbour firefly, it flashes slightly earlier." Flashes correspond to beeps or buzzes (through the soundcard, not the mobo piezo buzzer!), and seeing corresponds to listening through the microphone.
This may be a bit awkward over very large room distances due to the speed of sound, but I doubt it'll be an issue (if so, decrease rate of beeping).
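A toy simulation of that rule (a node advances its phase a little whenever it hears a neighbour beep); this is only an illustration of the idea, not the firmware from the linked article.

    import random

    PERIOD, STEP, ADVANCE = 1.0, 0.01, 0.1   # cycle length, tick size, nudge factor

    def simulate(n=5, ticks=3000):
        phases = [random.uniform(0, PERIOD) for _ in range(n)]
        for _ in range(ticks):
            fired = False
            for i in range(n):
                phases[i] += STEP
                if phases[i] >= PERIOD:      # this node beeps and resets
                    phases[i] -= PERIOD
                    fired = True
            if fired:
                # everyone who heard the beep jumps slightly ahead in its cycle
                phases = [min(p * (1 + ADVANCE), PERIOD) for p in phases]
        return sorted(phases)

    print(simulate())   # phases tend to cluster together as the run progresses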
The synchronization depends on the position of the listener relative to each speaker. I don't think the reliability of the network has as much to do with this synchronization as the content of the audio stream does. In order to synchronize, you need to find the distance between each speaker and the listener, then find the difference between each of those values and the value for the farthest speaker. For each 1.1 feet of difference, delay the closer speaker by 1 ms. This will ensure that the audio stream reaches the listener at the same time. This all assumes an open area, as anything in proximity will generate reflections of the audio waves and create destructive interference. Objects within the area may also transmit sound at a slower speed, resulting in delayed sound of their own.
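For reference, a quick check of the 1.1 feet per 1 ms figure and a helper that turns listener-to-speaker distances into per-speaker delays (free-field assumption, numbers illustrative).

    SPEED_OF_SOUND_FT_PER_S = 1125.0          # ~343 m/s at room temperature

    def per_speaker_delay_ms(distances_ft):
        # delay the closer speakers so all wavefronts reach the listener together
        farthest = max(distances_ft)
        return [(farthest - d) / SPEED_OF_SOUND_FT_PER_S * 1000.0 for d in distances_ft]

    print(1.1 / SPEED_OF_SOUND_FT_PER_S * 1000.0)   # ~0.98 ms, i.e. ~1 ms per 1.1 ft
    print(per_speaker_delay_ms([10.0, 14.4]))       # [~3.9 ms, 0.0]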

Why can't I get equal upload and download speeds on a symmetrical channel?

I'm assigned to a project where my code is supposed to perform uploads and downloads of some files on the same FTP or HTTP server simultaneously. The speed is measured and some conclusions are drawn from this.
Now, the problem is that on high-speed connections we're getting pretty much the expected results in terms of throughput, but on slow connections (think an ideal CDMA 1xRTT link) either download or upload wins at the expense of the opposite direction. I have a "higher body" who is convinced that a CDMA 1xRTT connection is symmetric and that we should therefore be able to transfer data at equivalent speeds (~100 kbps in each direction) on this link.
My measurements show that without heavy tweaking of the code in terms of buffer sizes and data-link throttling, it's not possible to get the same speeds under the aforementioned conditions. I tried both my multithreaded code and a simple batch file that automates Windows' ftp.exe to perform the transfer -- same result.
So, the question is: is it really possible to transfer data over a slow symmetrical link with equivalent speeds in both directions? Is the "higher body" right in their expectations? If so, do you have any suggestions on what I should do with my code in order to achieve such throughput?
PS.
I completely re-wrote the question, so it would be obvious it belongs to this site.
CDMA 1x consists of up to 15 channels of 9.6kbps traffic. This results in a total throughput of 144kbps.
Two channels are used for command and control signals (talking to base stations, associating/disassociating, SMS traffic, ring signals, etc).
That leaves you with up to 124.8kbps.
--> Each channel is one way. <--
They are dynamically switched and allocated depending on the need.
Generally you'll get more download than upload because that's the typical cell phone modem usage. But you'll never get more than 120kbps total aggregate bandwidth.
In practice, due to the overhead of 1xRTT encoding, error correction, resends, etc., you'll typically experience between 60kbps and 90kbps even if you have all the channels possible.
This means that you can probably only get 30kbps-60kbps of upload and download simultaneously.
Further, due to switching the channels dynamically (and the fact that the base station controls this more than your modem - they need to manage base station channels carefully to keep channels free for voice calls) you'll lose time when it switches channels - it's not an instantaneous process.
So - 1xRTT can, in theory, give you 124kbps one way, but due to overhead, switching times, base station capacity, or the phone company simply limiting such connections for other reasons, you can't depend on a symmetrical link.
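Spelling out the channel budget from this answer (a sketch of the arithmetic only):

    CHANNEL_RATE_KBPS = 9.6
    TOTAL_CHANNELS    = 15
    CONTROL_CHANNELS  = 2

    raw_total = TOTAL_CHANNELS * CHANNEL_RATE_KBPS                        # 144.0 kbps
    usable    = (TOTAL_CHANNELS - CONTROL_CHANNELS) * CHANNEL_RATE_KBPS   # 124.8 kbps

    # channels are one-way and allocated dynamically, so uplink and downlink
    # share this single budget rather than each getting ~125 kbps
    print(f"raw: {raw_total} kbps, usable: {usable} kbps")
    print(f"even split before overhead: {usable / 2} kbps per direction")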
NOTE:
This will vary to some degree based on the provider and the modem. For instance, some modems have 16 channels, and some providers support 16 channels. In some cases those modems and providers work well together and can provide a full 144kbps aggregate raw bandwidth to the application, with only one dedicated channel (which has to work pretty hard) to deal with control, switching, and other issues. Even then, though, with the overhead of the modem communications, then the overhead of PPP, then the overhead of IP, then the overhead of TCP, you're still looking at maybe 100-120kbps total bandwidth, both up and down.
Lastly, no provider yet supports transparent transfer of IP traffic. In other words, if your modem is moving, the modem will switch to a new base station, but you'll completely drop the PPP session and have to restart it, as well as all the TCP sessions and such. You typically won't get the same IP address, and so your TCP sessions will not recover gracefully.
The "fun" aspect to this twist is that this can happen even if you aren't moving. If one base station gets loaded down, you may be transferred to another base station if you are close enough - there are other things that may make your modem transfer even without you moving. So make sure you take this into account, since you seem to be keen on maintaining a full duplex, symmetric channel open. It's tough to write stuff that will recover gracefully, nevermind anticipate it and do it quickly. You would do well to work very closely with a modem manufacturer (such as Kyocera) on this - otherwise you won't get the documentation on how to control the modem chipset at the low level that you need.
-Adam
I think the whole drama about high, equal speeds in both directions is because my higher body thinks they have 144 kbps on the uplink AND 144 kbps on the DOWNLINK (== TWO pipes), whereas in reality we have 144 kbps in ONE pipe which switches direction as I transfer files.
Please comment on whether I'm right or wrong.
