[ Please only consider MIFARE Classic 1K cards in your answers. ]
I am hoping to gain a bit of industry knowledge in the realm of RFID, specifically with the MIFARE MAD (MIFARE Application Directory). I understand its architecture from reviewing the MIFARE MAD documentation published by NXP, located here:
https://www.nxp.com/docs/en/application-note/AN10787.pdf
However, from this document a few questions arise.
Who manages the initial MAD directory on a given card? I know that manufacturers may distribute cards with MAD 1, 2, or 3 pre-initialized in sector 0/16. What is the standard if a card is read on which the MAD has not yet been placed? Would it be appropriate to write my own MAD to the card in this case? Or is it more appropriate to force clients to purchase cards with the MAD preinstalled?
Given a standard MIFARE Classic 1K card, there are only two 16-byte blocks of sector 0 in which the MAD directory may reside. This provides only 32 bytes for the MAD directory. The list of registered AIDs seems to be much larger than those 32 bytes could hold. What process should I take if the AID I am looking for is not indexed in the MAD?
Given a specific AID from the list of AIDs located here (link is dead, PDF can still be found via Wayback Machine)
what is the general process for identifying which sector(s) the data resides in? So, picking a random AID from the list, say 0034, which is registered to Verifone, how do I identify which sectors the data is located in? How do I identify whether the data is located in multiple sectors?
What are MAD versions 1, 2, and 3?
MAD versions 1 and 2 are used with MIFARE Classic cards. MAD version 1 uses sector 0 of the card to assign the remaining sectors (sectors 1..15) to specific "applications" (each sector can be assigned to one application ID indicating the application that manages/uses that sector). MAD version 2 is an extension of MAD v1 that is used with MIFARE Classic 4K cards. MAD version 2 uses sector 16 as an additional directory to assign the 4K-specific sectors (sectors 17..39) to applications.
MAD version 3 is used with MIFARE DESFire (EV1) cards. Since you specifically asked for MIFARE Classic 1K, this is probably off-topic for your question.
Who manages the initial MAD directory on a given card?
Typically, the MAD would be managed by the card issuer. Thus, whoever issues the card would also initialize the MAD sector(s).
What is the standard if a card is read on which the MAD has not yet been placed?
If you happen to find a card that already contains data but does not use the MAD, you would typically consider this a single-application card. Since the applications that already use this card would probably not understand the MAD concept, you would not be able to introduce a MAD later on. (That's particularly the case if the application uses any of the MAD sectors (sector 0 or 16) for other application data.)
Would it be appropriate to write my own MAD to the card in this case?
See above. Usually it would not make sense to introduce a MAD later on. Also, if the card is already in use, you would probably not have the keys to write to the MAD sectors (or any of the other (used) sectors).
Or is it more appropriate to force clients to purchase cards with MAD preinstalled?
I'm not aware of any directory manager service where you could purchase empty cards with a pre-configured MAD and where users would be able to get their specific applications installed onto the cards later on by that manager.
In fact, the MAD is usually used in closed-loop application scenarios where one card issuer uses the cards for multiple applications within their domain (e.g. a university (right, I'm working for one) that uses these cards for an access control system, for a closed-loop payment system, etc.)
Given a standard MIFARE Classic 1K card, there are only two 16-byte blocks of sector 0 in which the MAD directory may reside. This provides only 32 bytes for the MAD directory. The list of registered AIDs seems to be much larger than those 32 bytes could hold.
In MAD v1 and v2, each AID is 16 bits (2 bytes). Since the MAD assigns sectors to applications, each sector has a two-byte slot within the MAD where the AID that the sector is assigned to is stored. See How to access a MIFARE Classic card that uses the MIFARE Application Directory structure?.
What process should I take if the AID I am looking for is not indexed in the MAD?
You could request NXP to register your application(s) and assign AIDs to them. See the appendix of the application note for the MIFARE Application Directory for the registration form. While the list suggests that NXP is still taking new registrations, you should keep in mind that MIFARE Classic security has been broken since 2008 and that there are newer products in the MIFARE product line that would be more suitable for new applications.
Given a specific AID, what is the general process for identifying which sector the data resides in?
See above and How to access a MIFARE Classic card that uses the MIFARE Application Directory structure?. Each slot in the MAD assigns an AID to one specific sector. Thus, you would read the MAD sectors and then browse them for occurrences of the AID; by accumulating all occurrences you get a list of all sectors assigned to that application.
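To make that concrete, here is a minimal C++ sketch of the lookup, assuming the MAD v1 layout as I read it from AN10787 (blocks 1 and 2 of sector 0 concatenated into 32 bytes: byte 0 CRC, byte 1 info byte, then one two-byte AID slot per data sector 1..15) and assuming the low byte of each slot holds the application code; the reader-side call to fill the buffer is left out, and reading sector 0 requires the public MAD key A (A0 A1 A2 A3 A4 A5). Verify the byte order against the application note before relying on it.

```cpp
// Rough lookup sketch based on my reading of the MAD v1 layout in AN10787:
// blocks 1 and 2 of sector 0 concatenated give 32 bytes; byte 0 is the CRC,
// byte 1 the info byte, and bytes 2..31 hold one two-byte AID slot per data
// sector (sector s at offset 2*s). The assumed byte order (application code
// in the low byte) should be double-checked against the spec.
#include <array>
#include <cstdint>
#include <iostream>
#include <vector>

using MadV1 = std::array<std::uint8_t, 32>;

std::vector<int> sectorsForAid(const MadV1 &mad, std::uint16_t aid)
{
    std::vector<int> sectors;
    for (int sector = 1; sector <= 15; ++sector) {
        std::uint16_t slot =
            static_cast<std::uint16_t>(mad[2 * sector]) |
            (static_cast<std::uint16_t>(mad[2 * sector + 1]) << 8);
        if (slot == aid)
            sectors.push_back(sector);
    }
    return sectors;
}

int main()
{
    MadV1 mad{}; // would be filled by reading sector 0 with the public MAD key A (A0A1A2A3A4A5)

    for (int s : sectorsForAid(mad, 0x0034)) // e.g. the Verifone AID from the question
        std::cout << "AID 0x0034 is assigned to sector " << s << '\n';
}
```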
My professor recently approved our research paper, which will also be used in our final-year thesis. Basically, our main purpose is to create a system for location tracking and attendance automation for students and staff. We would like to use the power of Bluetooth Low Energy modules for this project.
I have actually done quite a bit of research about this, but I am having trouble deciding which keywords to use in order to filter the right answers to my question. So instead, I'll just put all my questions here.
I have provided an image to help explain the concept I am talking about.
Basically, the broadcaster/advertisement-mode modules are for students and staff, while the observer-mode modules are initially installed in every room or space in our building/campus.
Broadcast and Observer mode
I would like to clarify first that the location tracking is only basic: it only detects which rooms the students and staff are located in.
Here are my questions:
What is the maximum number of advertisement/broadcaster modules that the observer module can detect at the same time?
Our target is about 50 students per room and 300 students in the cafeteria; will the observer module have a large amount of latency when scanning advertisement packets?
Do we have to use a different module for observer mode, or will the same module we use for broadcaster mode be just fine?
Since this is supposed to be embedded in school IDs, we would like to use a coin cell battery; how long will it last?
According to my research, BLE range is about 100 meters, but we will be using a coin cell battery; is it really possible to achieve 100 m for broadcasting and observing? If it is, can we perhaps decrease it programmatically?
My apologies for so many questions; this is actually our first time doing applied hardware work due to the pandemic. Most of our laboratories are basically Tinkercad-based. Face-to-face classes are allowed only for medical students for now.
A few answers (a rough observer-side scanning sketch follows this list):
BLE scanners can detect hundreds of distinct broadcasters at the same time. There is no hard limit, but the more broadcasters the longer it will take the scanners to detect each broadcaster.
Most BLE modules support both peripheral mode (broadcaster) and central mode (scanner) simultaneously.
Scanning 50 broadcasters in a single room will easily detect 90% of packets, so if an advertiser transmits at 1 Hz it will usually be detected within one second, though occasionally 2-3 seconds' worth of packets are missed.
The indoor range is closer to 40 meters with no walls obstructing the signal. Outdoors with clear line of sight the range is higher. Walls often block signals almost entirely, depending on materials.
A CR2032 coin cell can power a BLE broadcast at 1 Hz and maximum transmit power for about 30 days.
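As a rough illustration of the observer side, here is a minimal sketch using the ArduinoBLE library (assuming a board it supports, such as a Nano 33 BLE); it only logs each advertiser and its RSSI, and filtering for your own tags plus forwarding to a server is left out.

```cpp
// Minimal observer-side sketch using the ArduinoBLE library (e.g. on a
// Nano 33 BLE). It logs every advertiser it sees together with its RSSI.
#include <ArduinoBLE.h>

void setup() {
  Serial.begin(9600);
  while (!Serial);

  if (!BLE.begin()) {
    Serial.println("starting BLE failed");
    while (1);
  }
  BLE.scan(); // scan continuously for any advertiser
}

void loop() {
  BLEDevice peripheral = BLE.available();
  if (peripheral) {
    // In a real deployment you would match a known local name or
    // manufacturer data that identifies your student/staff tags.
    Serial.print(peripheral.address());
    Serial.print(" rssi=");
    Serial.println(peripheral.rssi());
  }
}
```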
Creating an embedded solution is cool and valid, but just remember that broadcasters already exist: each and every student carries a smartphone with BLE built in, and your observer can be any BLE-capable device, from a smartphone through a PC with a BLE dongle all the way to an Arduino and the like.
Your broadcasters (or BLE peripherals, as they should be called) will need an Android/iOS app, and you will have to deal with working in the background without the operating system stopping your app.
Your observer (or central, in BLE language) can be any stationary PC, if such a machine exists in the classroom, which can make development and deployment a lot easier.
Carrier aggregation combines the existing spectrum, say if the carrier previously had 20 MHz in the area, with the newly acquired spectrum of 20 MHz, to give a wider pipe or bandwidth for data flow between the mobile device and the base station tower.
My question is, why don't they just operate the new bandwidth as a separate pipe? So that there would be two pipes of 20 MHz each, instead of one aggregated pipe of 40 MHz?
Benefits:
Carriers won't have to deal with the complexity of Carrier Aggregation technology, as the two bands are totally separate (2300 MHz & 1800 MHz). End users can be divided over the two frequencies. Theoretically this should halve the load on one channel, providing double the speed to connected users.
Many existing 4G devices use a single antenna for 4G operation. LTE-A needs MIMO support on both the mobile and the tower to work. Essentially it needs two antennas on both the mobile and the tower to operate on two different frequencies, which only stresses the mobile device. Existing hardware cannot benefit from LTE-A, so speeds will remain the same after the upgrade. In fact, they may slightly decrease after the LTE-A rollout, since newer LTE-A devices will share the load across both frequencies, but existing LTE users can only use one.
For those new to this, this simple image explains how Carrier Aggregation works: https://www.techtalkthai.com/wp-content/uploads/2014/12/qualcomm_carrier_aggregation.jpg
1) Assuming that the operator already has 2 bands, it is really not complex to enable and configure carrier aggregation. It is likely that they already have the ability as part of the latest LTE software upgrades and it is just a matter of configuring it and possibly paying for a license to use it.
The scenario you describe of using two separate pipes instead of a single CA pipe is not feasible (and may not even be possible). When a device establishes a connection in an LTE network, a default bearer is configured which would not be able to simultaneously use two radio connections without CA or other similar features. Multiple bearers can certainly be established simultaneously; however, they serve different purposes (e.g. voice vs data). That said, CA really is using two different pipes, but they act as a single (logical) bearer. Another advantage of CA is that the control plane signaling takes place on only one of the component carriers, and therefore the other component carriers can be fully dedicated to user plane traffic.
2) I'll clear a few things up:
MIMO has nothing to do with Carrier Aggregation.
Most 4G devices today transmit on a single antenna and receive on two antennas. (They most likely have at least 2 TX and 2 RX antennas, and many have 4 TX and 4 RX antennas, although 4x4 MIMO has not been implemented by most operators.)
Existing devices are already taking advantage of LTE-A features and some operators are currently rolling out 3-carrier CA, 4x4 MIMO as well as 256QAM.
Here is a recent news article which discusses LTE-A features which have already been implemented: https://newsroom.t-mobile.com/news-and-blogs/lte-advanced.htm
I have a friend who is working on a project where they need to deploy a large number of devices across the Midwest. For simplicity, let's say these are temperature gauges: they read the current temperature and transmit that information to a server. The server would just need to know which device is reporting which temperature (412X|10c).
These devices will be in forests, near highways, in cities and swamps. All the other technology is prototyped and working (the ability to read the temperature, the hardware for the device); the open question they have right now is 'what is the cheapest way we can send this information to the primary server?'
I think they'll need to go with a wireless carrier (Verizon/Sprint/AT&T) and use something similar to mobile broadband. Is there really any other option?
You could do it with ham radio and something like APRS, assuming they don't care about encryption and don't have a pecuniary interest in the project.
You wouldn't need full mobile broadband, as your data would fit in a text message. You can get cellular shields for Arduino that would probably fit your needs.
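If they do go cellular, the device side could be as simple as the sketch below, which uses the Arduino GSM library (for the official GSM shield) to send one reading per SMS in the '412X|10c' style from the question; the SIM PIN, destination number and readTemperature() are placeholders for the real project values.

```cpp
// Device-side sketch using the Arduino GSM library (official GSM shield):
// one reading per SMS in the "<deviceId>|<temperature>c" format from the
// question. PIN, destination number and readTemperature() are placeholders.
#include <GSM.h>

#define PINNUMBER ""                   // SIM PIN, if the SIM has one

GSM gsmAccess;
GSM_SMS sms;

char SERVER_NUMBER[] = "+15555550100"; // hypothetical SMS gateway number
const char DEVICE_ID[] = "412X";

float readTemperature() {
  return 10.0;                         // stand-in for the real sensor code
}

void setup() {
  // Keep retrying until we are attached to the cellular network.
  while (gsmAccess.begin(PINNUMBER) != GSM_READY) {
    delay(1000);
  }
}

void loop() {
  sms.beginSMS(SERVER_NUMBER);
  sms.print(DEVICE_ID);
  sms.print("|");
  sms.print(readTemperature());
  sms.print("c");
  sms.endSMS();

  delay(15UL * 60UL * 1000UL);         // report every 15 minutes
}
```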
The following quote is from this link: http://apidocs.meego.com/1.2-preview/qtmobility/qgeopositioninfosource.html#createDefaultSource
Creates and returns a position source with the given parent that reads from the system's default sources of location data, or the plugin with the highest available priority.
What and where are the system's default "sources of location data"? Any examples?
And what do I need to read to understand these concepts?
The default source depends on the device.
As to the question regarding what the sources might be, here is an extract from the forum.nokia documentation regarding Symbian phones, although this is mostly true for other devices and platforms as well (a minimal usage sketch follows the extract):
GPS based: It can provide a location estimate with an accuracy of 10 to 30 meters. Depending on the actual technology and the state of the GPS module, time to first fix varies from several seconds to minutes. Time to next fix is normally 1 second. It may not work indoors. The GPS module, which computes the location estimate, may reside internally (e.g. integrated GPS) or externally (e.g. Bluetooth GPS) to the terminal.
Assisted GPS: Assisted GPS technology improves performance (i.e. time to first fix and sensitivity) by acquiring assistance data from an assistance server. The mobile phone receives the assistance data over the wireless network. Depending on the operator and subscription, the end user may have to pay for receiving assistance data.
Network based: It can provide a location estimate with an accuracy of a hundred meters to several kilometers. Time to first fix and time to next fix are normally within 10 seconds. It also works indoors. It normally requires support from the operator.
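For completeness, this is roughly how createDefaultSource() is used from Qt Mobility 1.2 code; whichever of the sources above the platform exposes as its default (or the highest-priority positioning plugin) ends up behind the returned object. A minimal, untested sketch:

```cpp
// Minimal sketch of QGeoPositionInfoSource usage with Qt Mobility 1.2.
// createDefaultSource() returns whatever the platform provides as its
// default positioning backend, or 0 if there is none.
#include <QCoreApplication>
#include <QDebug>
#include <QGeoCoordinate>
#include <QGeoPositionInfo>
#include <QGeoPositionInfoSource>

QTM_USE_NAMESPACE

class PositionPrinter : public QObject
{
    Q_OBJECT
public slots:
    void positionUpdated(const QGeoPositionInfo &info)
    {
        qDebug() << "lat:" << info.coordinate().latitude()
                 << "lon:" << info.coordinate().longitude();
    }
};

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);

    QGeoPositionInfoSource *source =
        QGeoPositionInfoSource::createDefaultSource(&app);
    if (!source) {
        qDebug() << "no default position source on this device";
        return 1;
    }

    PositionPrinter printer;
    QObject::connect(source, SIGNAL(positionUpdated(QGeoPositionInfo)),
                     &printer, SLOT(positionUpdated(QGeoPositionInfo)));

    source->setUpdateInterval(5000); // ask for an update every 5 seconds
    source->startUpdates();

    return app.exec();
}

#include "main.moc" // class declared in main.cpp, so include the moc output
```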
What is the relation between a BTS and a cell? I think one BTS can cover a few cells, and also that some cells could be covered by more than one BTS, couldn't they?
Is the identification of the concrete BTS part of the information that the mobile receives from the GSM network, or does the mobile phone only know the cell ID?
Is the identification of the BSC part of the information that the mobile receives from the GSM network?
Ad 1: Typically, one BTS can handle several cells. Common patterns are one BTS covering a circular area with one round-radiating antenna, or a three-sector BTS which covers three cells with sector-radiating antennas. One cell can only be handled by one BTS at a time. Two or more BTSes are not possible, since the radio transmissions would interfere with each other. Note that this is completely different in WCDMA/UMTS since there is no concept of cells.
Ad 2: Since one cell is covered by exactly one BTS, the cell ID uniquely identifies the concrete BTS.
Ad 3: Since the BTS does not contain any control logic, the mobile communicates directly with the BSC, e.g. about radio resources.
Edit after comment:
1/ The BTS is "dumb", to say it simply. It does only what the BSC instructs it to do, e.g. the BSC tells the BTS as well as the mobile which frequencies to use for the radio communication. A BTS does not route traffic, as it is hooked to exactly one BSC. It does not even route traffic to one of several mobiles attached to the BTS, as this is done by the BSC. Think of the BTS as a Um-to-Abis physical layer and protocol transcoder.
2/ Actually my earlier statement that UMTS has no cell concept is not exactly true, it's just different.
GSM is FTDMA (frequency and time division multiple access). The radio channel is shared by using different frequencies (per cell) and timeslots (per mobile). Since radio frequency is used to distinguish participants, great care must be taken that no two GSM participants use the same frequency at the same time at the same location. The solution to this is cells, where geographic areas have different frequencies assigned. Network planning must ensure that no two neighbouring cells use the same frequencies, as this may lead to interference since you cannot control the size of a cell exactly (e.g. due to absorption and reflection). In GSM, a BTS has a fixed number of radio transmission channels; the number depends on the BTS hardware configuration. If all channels are in use, the cell is full, independent of the location of a mobile in the cell.
UMTS is CDMA (code division multiple access). The radio channel is shared by encoding the payload in a way that allows it to be decoded later even if several senders use the same frequency range. That requires coding schemes which are collision free (all codes are sufficiently different from each other, to avoid senders using codes that are too similar) and a great deal of signal processing. As an analogy: at a party you can understand someone across the room, even if ten people are talking. The more senders communicate within the cell, the smaller the cell gets in order to allow the BTS/Node-B to distinguish between senders. Therefore, in UMTS a cell size is not geographically fixed. The cell "breathes" depending on its load.
OK, this thread is quite old, but it requires some further clarification for future readers.
When talking about the GSM physical network architecture, the term BTS (Base Transceiver Station) refers to the physical site itself: the 'small house with the tower' (although modern small BTSs are just boxes hung on walls or placed on rooftops).
Each such physical site can host one omni-directional cell, or several sector cells.
In GSM logical network architecture, there is some confusion.
The terms 'Cell' and 'Base Station' actually refer to the same physical entity (a set of transceiver units, each used to receive and transmit one of the paired UL/DL carrier frequencies allocated in the BA frequency set). Let's call this entity 'physical cell' just for clarification.
The term Base Station is used for radio resource management. A BSIC (BS Id Code, or BTS Id Code) is allocated for the 'physical cell' and is used in the radio-related conversations between the MS (Mobile Station) and the BSS (BTS and BSC), e.g. for measurement reports.
The BSIC is composed of 'local' parameters - Network Color Code (NCC) and BS Color Code (BCC), and is therefore unknown outside the network.
This is where the term Cell comes in:
The term Cell is used for Mobility Management. A Cell Identity (CI) is defined as a refinement of the Routing Area - one RA will include several cells in it.
The Global Cell Identifier (GCI) is composed of network, RA and CI, and is used for handovers inside and outside the network.
It is up to the BSC to convert the BSIC to the Cell Identity (the BSC may convert the BSIC directly to GCI, or the BSC converts to CI, and the MSC will convert it to GCI).
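To make the BSIC composition above concrete, here is a small illustrative sketch of how the 6-bit BSIC packs the NCC and BCC (3 bits each); the helper names are mine, not from any particular protocol stack.

```cpp
// Illustrative helpers for the 6-bit BSIC described above:
// upper 3 bits = Network Colour Code (NCC), lower 3 bits = BS Colour Code (BCC).
#include <cstdint>
#include <iostream>

std::uint8_t packBsic(std::uint8_t ncc, std::uint8_t bcc)
{
    return static_cast<std::uint8_t>(((ncc & 0x7) << 3) | (bcc & 0x7));
}

void unpackBsic(std::uint8_t bsic, std::uint8_t &ncc, std::uint8_t &bcc)
{
    ncc = (bsic >> 3) & 0x7;
    bcc = bsic & 0x7;
}

int main()
{
    std::uint8_t bsic = packBsic(5, 2); // NCC 5, BCC 2 -> BSIC 0b101010 = 42
    std::uint8_t ncc = 0, bcc = 0;
    unpackBsic(bsic, ncc, bcc);
    std::cout << "BSIC " << int(bsic) << " -> NCC " << int(ncc)
              << ", BCC " << int(bcc) << '\n';
}
```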
Hope that helps a bit.
BTS means different things in different places!
When the terms MS, BTS, and BSC appear together, BTS means the equipment between your phone and the MSC.
Sometimes we call a site (a small house and a tower) a BTS.
In Nokia GSM equipment, a cell is called a segment. Every cell has at least one BTS, and different BTSs have different functions, e.g. BTS1 provides voice service and BTS2 provides EDGE service.
The phone gets the BCCH (frequency)/NCC/BCC to identify different cells. It decodes the information from the BCCH to get the CI, LAC, etc.