The datasheet for the TDA93XX (an 80C51-based controller used in a TV) says on page 3 that it can have:
32 - 128Kx8-bit late programmed ROM
I wonder what this means. Is there any chance for me to reprogram it?
It means the ROM is programmed late in the manufacturing process, just before the die is encased, which can be done by a variety of methods. It does not mean the device is reprogrammable:
U.S. Pat. No. 4,295,209 discloses late programming an IGFET ROM by ion implantation through openings in an overlying phosphosilicate glass layer immediately before metallization. The ion implantation is done through a polycrystalline silicon gate electrode of selected IGFETs in the ROM. Small size in the ROM is preserved in U.S. Pat. No. 4,295,209 by incorporating a silicon nitride coating immediately beneath the phosphosilicate glass layer. Consequently, when implant openings are etched in the phosphosilicate glass layer, the polycrystalline silicon gate electrode is not exposed. Accordingly, metal lines can cross directly over the implant openings without contacting the gate electrodes. However, the silicon nitride coating is ordinarily thin and there can be a capacitive coupling which occurs between the drain lines and the polysilicon gate within the implant windows. In larger size ROMs this capacitive coupling can become significant enough to slow down the speed of the ROM.
In our above-mentioned concurrently filed Ser. No. 268,086, we disclose a different late programming process by which high operating speed can be retained with ROMs of the usual construction. We have now found a technique by which ion implantation can be used in substantially the same way as outlined in the aforementioned U.S. Pat. No. 4,295,209 but without a penalty of slower operating speed or expanded size in larger ROMs. We have discovered that the U.S. Pat. No. 4,295,209 late programming method is effective on a high density ROM of unique configuration. The unique configuration was previously known and is referred to as a hexagonal ROM. This unique configuration does not require metal lines to cross implant windows in the reflowable glass layer. Thus, capacitive coupling is minimized. Accordingly, both highest speed and maximum ROM density are retained.
My professor recently approved our research paper, which will also be used for our final-year thesis. Basically, our main purpose is to create a system for location tracking and attendance automation for students and staff. We would like to use the power of Bluetooth Low Energy (BLE) modules for this project.
I have actually done quite a bit of research on this, but I am having trouble choosing keywords that filter out the right answers to my question. So instead, I'll just put all my questions here.
I have provided an image to help explain the concept I am talking about.
Basically, the broadcaster/advertiser-mode modules are for students and staff, while the observer-mode modules are installed in every room or space in our building/campus.
Broadcast and Observer mode
I would like to clarify first that the location tracking is only basic: it only detects which room each student or staff member is in.
Here are my questions:
What is the maximum number of advertiser/broadcaster modules that an observer module can detect at the same time?
Our target is about 50 students per room and 300 students in the cafeteria. Will the observer module incur a large amount of latency when scanning advertisement packets?
Do we have to use a different module for observer mode, or will the same module we use for broadcaster mode be fine?
Since this is supposed to be embedded in school IDs, we would like to use a coin-cell battery. How long will it last?
According to my research, BLE range is about 100 meters, but we will be using a coin-cell battery. Is it really possible to achieve 100 m for broadcasting and observing? If it is, can we perhaps decrease the range in software?
My apologies for so many questions; this is actually our first time doing applied hardware work because of the pandemic. Most of our laboratory courses have been Tinkercad-based, and face-to-face classes are currently allowed only for medical students.
A few answers:
BLE scanners can detect hundreds of distinct broadcasters at the same time. There is no hard limit, but the more broadcasters there are, the longer it takes the scanner to detect each one.
Most BLE modules support both peripheral mode (broadcaster) and central mode (scanner) simultaneously.
Scanning 50 broadcasters in a single room will easily catch 90% of packets, so if an advertiser transmits at 1 Hz it will usually be detected within one second, though occasionally 2-3 seconds of packets are missed.
The indoor range is closer to 40 meters with no walls obstructing the signal. Outdoors with clear line of sight the range is higher. Walls often block signals almost entirely, depending on materials.
A CR2032 coin cell can power a BLE broadcast at 1 Hz and max power for about 30 days.
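To make the observer side concrete, here is a minimal scanning sketch in Python using the bleak library (my choice for illustration; any BLE scanning API works similarly, and AdvertisementData.rssi requires a recent bleak version). It records each advertiser heard during a scan window, which is the core of the room-level presence detection described above.

```python
import asyncio
from bleak import BleakScanner

seen = {}  # advertiser address -> last RSSI heard

def on_advertisement(device, advertisement_data):
    # device.address is the advertiser's MAC (a platform-specific ID on
    # macOS); RSSI gives a rough "is this badge in my room?" hint.
    seen[device.address] = advertisement_data.rssi

async def main():
    # Scan for a fixed window; tune its length against your latency target.
    async with BleakScanner(on_advertisement):
        await asyncio.sleep(5.0)
    for address, rssi in sorted(seen.items()):
        print(f"{address}: RSSI {rssi} dBm")

asyncio.run(main())
```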
Creating an embedded solution is cool and valid, but remember that broadcasters already exist: each and every student carries a smartphone with BLE built in, and your observer can be any BLE-capable device, from a smartphone through a PC with a BLE dongle all the way to an Arduino and the like.
Your broadcasters (or BLE peripherals, as they should be called) will need an Android/iOS app, and you will have to deal with keeping it working in the background without the operating system stopping your app.
Your observer (or central, in BLE language) can be any stationary PC, if one exists in the classroom, which can make development and deployment a lot easier.
Looking at:
OMNET++: How to obtain wireless signal power?
and
https://github.com/inet-framework/inet/blob/master/examples/wireless/scaling/omnetpp.ini
there seem to be no power-consumption-related settings for the packets that are sent by a UnitDiskRadio.
Is there a way of setting packet power consumption in a unit disk radio medium, or, conversely, communication range in ApskScalarRadioMedium?
UnitDiskRadio is a simplified model of a radio for when you are not interested in the details of transmission, propagation, attenuation, etc. You just want a clear-cut transmission distance: above it, transmission always fails; below it, transmission always succeeds. This is simple, fast, and suitable if you want to simulate high-level behavior such as the application or routing layers. In this case you really don't care how much your radio draws from a power grid (or battery).
On the other hand, if you are interested in low-level details, the whole radio transmission process should be modeled. In this case you model the transmission power and, based on that, whether each transmission succeeds; there is no clear-cut transmission range. Whether a transmission succeeds is a probabilistic outcome depending on power, antenna configuration, encoding, modulation, noise and a lot of other things, so you cannot set it as a simple "range".
TLDR: No, you cannot set both of them on the same radio.
PS: Make sure that you do not mix and match the various power parameters. The first question you linked is about getting the power of a received packet (i.e. how strong that signal was when it was received). The second link shows how to configure the transmission power (what goes out on the antenna), and in your question you refer to power consumption, which is a third thing: how much you draw from a battery to make the transmission. They are NOT the same thing.
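For reference, here is roughly how the two alternatives look in omnetpp.ini (a sketch from memory of INET 4.x parameter names; check them against your INET version):

```ini
[Config UnitDisk]
*.radioMedium.typename = "UnitDiskRadioMedium"
*.host*.wlan[0].radio.typename = "UnitDiskRadio"
# the only "range" knob; there is no power to configure
*.host*.wlan[0].radio.transmitter.communicationRange = 250m

[Config ApskScalar]
*.radioMedium.typename = "ApskScalarRadioMedium"
*.host*.wlan[0].radio.typename = "ApskScalarRadio"
# transmission power is configurable; range is emergent, not settable
*.host*.wlan[0].radio.transmitter.power = 1mW
```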
In my project I'll use the Modbus protocol for serial communication. There are more than 320 slaves, separated equally into 2 groups (see image). Every 16 slaves are powered from the same supply and galvanically isolated from the others (the master will be isolated from all the slaves).
My first question is: is there a problem with this design?
Secondly, I want to synchronise all the slaves via 10 ms period pulses derived from the master microcontroller. How can I achieve robust synchronisation (what type of bus, single-ended or differential signalling, where to isolate)?
Here is an alternative one:
see picture
Many things can go wrong here. For starters, it will take a looooooong time for you to poll each and every one of your slaves (see the rough estimate below). And your isolators will easily introduce delays beyond 2 µs into your sync signal.
Can you briefly tell us what you are trying to do specifically, e.g. synchronized motion control? There are other alternatives used in industrial solutions.
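To put a number on the polling time, here is a back-of-the-envelope sketch (my assumptions: 19200 baud, 11-bit characters, one small register read per slave, 5 ms slave turnaround):

```python
# Rough Modbus RTU poll-cycle estimate for 320 slaves.
BAUD = 19200
CHAR_BITS = 11                       # start + 8 data + parity + stop
char_time = CHAR_BITS / BAUD         # ~0.57 ms per character

request_chars = 8                    # read-holding-registers request frame
response_chars = 9                   # response carrying 2 registers
gap_chars = 2 * 3.5                  # inter-frame silent intervals
turnaround = 0.005                   # assumed slave response delay (s)

per_slave = (request_chars + response_chars + gap_chars) * char_time + turnaround
print(f"per transaction: {per_slave * 1e3:.1f} ms")       # ~18.8 ms
print(f"full cycle, 320 slaves: {per_slave * 320:.1f} s")  # ~6.0 s
```

Roughly six seconds between successive reads of the same slave is an eternity if you were hoping to coordinate 10 ms motion steps over the same bus.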
Most of the synchronized motion control used in industrial systems replaces mechanical cams and eccentric gears, and is thus usually called "electronic camming" in this field. Here's a list of techniques I came across in my last job:
A PLC which outputs multiple pulse trains, each commanding an individual servo/stepper motor driver. The PLC has to store all the motion profiles and do all the interpolation, so relatively simple drives can be used. But each actuator needs its own pulse-train lines, and there are just too many actuators in your system.
The motor drive stores the motion profile and does the interpolation, and the motion is advanced/reversed by an external pulse train. This technique is used in Delta Automation ASDA and Schneider Electric Lexium 23 industrial servo drives. The motion profiles are either burnt into the drive's EEPROM beforehand or written in through Modbus (see the sketch after this list). This is very close to what you are trying to do, but the difference here is that the external sync pulse train is on a separate wire.
Real-time Ethernet. The target positions are periodically written to each drive at a specific interval, which can be done very rapidly at 100 Mbps. As for the latency that occurs when writing to different drives, there is a built-in mechanism that measures the latency to each drive, which is then compensated for accordingly. Cool, eh? The one that I have seen, but never really used, is EtherCAT by Beckhoff.
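For the second technique, the "written in through Modbus" step might look something like this sketch using the pymodbus library (my choice for illustration; the register address and value are hypothetical, since every drive has its own register map):

```python
from pymodbus.client import ModbusSerialClient

# Hypothetical register map: assume the drive selects one of its stored
# motion profiles via holding register 0x0100. Only the Modbus calls are
# real pymodbus 3.x API; the address and value are invented.
PROFILE_SELECT_REG = 0x0100

client = ModbusSerialClient(port="/dev/ttyUSB0", baudrate=19200,
                            parity="E", stopbits=1, timeout=0.05)
client.connect()

# Arm profile #3 on drives 1..16 of one isolated group. The motion is
# then advanced by the separate sync pulse train, not over Modbus.
for slave_id in range(1, 17):
    client.write_register(PROFILE_SELECT_REG, 3, slave=slave_id)

client.close()
```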
I worked mostly with method 2 in the past, and from that experience your needs might not be so stringent. Here are my recommendations.
It is perfectly fine for your sync signal to be delayed a little, as long as your mechanism has no risk of collision when the timing is slightly off. But lost pulses cannot be tolerated, as one of the actuators will end up out of phase. Don't scrimp on your sync and communication cable quality; use shielded twisted pair if possible, and connect it properly.
If the communication line is not too long, isolators are not needed. I have worked with lines up to 8 meters without isolators or repeaters. Instead, I am more worried about the number of spur (branch) connections on your RS-485 bus. If possible, connect everything to your 2 main buses directly.
If this is a production system, there might be a problem: when the system is running in synchronized-motion mode, there is no way to monitor the actuators, because the communication lines are occupied. That would not be acceptable in a real-world application, but if this is just a proof-of-concept design, go for it.
What is the relation between a BTS and a cell? I think one BTS can cover a few cells, and some cells could also be covered by more than one BTS. Is that right?
Is the identity of the concrete BTS part of the information that a mobile receives from the GSM network, or does the mobile phone only know the cell ID?
Is the identity of the BSC part of the information that a mobile receives from the GSM network?
Ad 1: Typically one BTS can handle several cells. Common patterns are one BTS covering a circular area with one omnidirectional antenna, or a three-sector BTS which covers three cells with sector antennas. One cell can only be handled by one BTS at a time; two or more BTSes are not possible, since their radio communication would interfere with each other. Note that this is completely different in WCDMA/UMTS, since there is no concept of cells.
Ad 2: Since one cell is covered by exactly one BTS, the cell ID uniquely identifies the concrete BTS.
Ad 3: Since the BTS does not contain any control logic, the mobile communicates directly with the BSC, e.g. about radio resources.
Edit after comment:
1/ The BTS is "dumb", to put it simply. It does only what the BSC instructs it to do; e.g. the BSC tells the BTS, as well as the mobile, which frequencies to use for the radio communication. A BTS does not route traffic, as it is hooked to exactly one BSC. It does not even route traffic to one of the several mobiles attached to it, as this is done by the BSC. Think of the BTS as a Um-to-Abis physical-layer and protocol transcoder.
2/ Actually my earlier statement that UMTS has no cell concept is not exactly true, it's just different.
GSM is FTDMA (frequency and time division multiple access). The radio channel is shared by using different frequencies (per cell) and timeslots (per mobile). Since radio frequency is used to distinguish participants, great care must be taken that no two GSM participants use the same frequency at the same time in the same location. The solution to this is cells: geographic areas are assigned different frequencies. Network planning must ensure that no two neighbouring cells use the same frequencies, as this may lead to interference, since you cannot control the size of a cell exactly (e.g. due to absorption and reflection). In GSM, a BTS has a fixed number of radio transmission channels; the number depends on the BTS hardware configuration. If all channels are in use, the cell is full, independent of the location of a mobile within the cell.
UMTS is CDMA (code division multiple access). The radio channel is shared by encoding the payload in a way that allows it to be decoded later even if several senders use the same frequency range. That requires coding schemes which are collision-free (all codes are different enough from each other that no two senders use too-similar codes) and a great deal of signal processing. As an analogy: at a party you can understand someone across the room even if ten people are talking. The more senders communicate within the cell, the smaller the cell gets, in order to allow the BTS/Node-B to distinguish between senders. Therefore, in UMTS a cell's size is not geographically fixed: the cell "breathes" depending on its load.
OK, this thread is quite old, but it requires some further clarification for the next generations.
When talking about the GSM physical network architecture, the term BTS (Base Transceiver Station) refers to the physical site itself: the "small house with the tower" (although modern small BTSs are just boxes hung on walls or placed on rooftops).
Each such physical site can host one omni-directional cell, or several sector cells.
In GSM logical network architecture, there is some confusion.
The terms 'Cell' and 'Base Station' actually refer to the same physical entity (a set of transceiver units, each used to receive and transmit one of the paired UL/DL carrier frequencies allocated in the BA frequency set). Let's call this entity 'physical cell' just for clarification.
The term Base Station is used for radio resource management. A BSIC (BS Id Code, or BTS Id Code) is allocated for the 'physical cell' and is used in the radio-related conversations between the MS (Mobile Station) and the BSS (BTS and BSC), e.g. for measurement reports.
The BSIC is composed of 'local' parameters - Network Color Code (NCC) and BS Color Code (BCC), and is therefore unknown outside the network.
This is where the term Cell comes in:
The term Cell is used for Mobility Management. A Cell Identity (CI) is defined as a refinement of the Routing Area - one RA will include several cells in it.
The Global Cell Identifier (GCI) is composed of network, RA and CI, and is used for handovers inside and outside the network.
It is up to the BSC to convert the BSIC to the Cell Identity (the BSC may convert the BSIC directly to GCI, or the BSC converts to CI, and the MSC will convert it to GCI).
Hope that helps a bit.
BTS means different things in different places!
When MS, BTS and BSC appear together, BTS means the equipment between your phone and the MSC.
Sometimes we call a site (a small house and a tower) a BTS.
In Nokia GSM equipment, a cell is called a segment. Every cell has at least one BTS, and different BTSes have different functions, e.g. BTS1 provides voice service and BTS2 provides EDGE service.
The phone gets the BCCH (frequency)/NCC/BCC to identify different cells, and decodes the information on the BCCH to get the CI, LAC, etc.
In R. Kent Dybvig's paper "Three Implementation Models for Scheme" he speaks of "FFP languages" and "FFP machines". Apparently there is some correlation between FFP machines, and string-reduction on multiple processors.
Googling doesn't really uncover much in terms of explanations or examples.
Can anyone shed some light on this topic?
Thanks.
Kent Dybvig's advisor, Gyula A. Mago, published a detailed description in "The FFP Machine" (Technical Report 87-014, Mago and Stanat, 1987).
As of this writing, the PDF is freely available at:
http://www.cs.unc.edu/techreports/87-014.pdf
The FFP Machine is a very fine-grained parallel computer architecture: each processor holds a single symbol/atom/value. It uses a string-reduction model of computation in which innermost function applications are found and replaced by their equivalent result (eager evaluation). Where a result is used in several places, it tends to be re-evaluated instead of incurring the cost of accessing some global store (but see Mago's paper on "Copying Operands vs Copying Results", or better yet Mago's "Data Sharing in an FFP Machine" in the 1982 Functional Programming Languages and Computer Architecture conference).
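To illustrate the reduction model (not the machine itself), here is a toy sequential sketch in Python of innermost, eager rewriting; the FFP Machine performs all such rewrites in parallel, one cell per symbol:

```python
# Toy illustration of innermost (eager) string reduction: find
# applications whose operands are already values, rewrite each in place
# by its result, and repeat until no applications remain.
OPS = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}

def reduce_innermost(expr):
    """expr is a nested tuple (op, lhs, rhs) or an int."""
    if isinstance(expr, int):
        return expr
    op, lhs, rhs = expr
    # Innermost-first: both operands are fully reduced before the
    # application itself is replaced by its result.
    return OPS[op](reduce_innermost(lhs), reduce_innermost(rhs))

# The two inner products are independent reducible applications and
# could proceed in parallel -- exactly what the machine exploits.
print(reduce_innermost(("+", ("*", 2, 3), ("*", 4, 5))))  # -> 26
```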
The L cells holding the FFP expression being reduced communicate through a tree-structured arrangement of T cells. Note that ICs are basically two-dimensional, though with wiring, circuits can move toward being three-dimensional in physical space. Interconnection networks that occupy higher dimensions (such as the Hypercube, Omega, Banyan, Star, etc. networks) will eventually be unable to perform near their theoretical limit. This communication network is circuit-switched rather than packet-switched: data packets contain no addresses and need no routing. Packets from distinct reductions cannot meet, cannot conflict, and cannot experience congestion with each other.
The configuring activity (called "partitioning") is performed in a single upward sweep of the tree, using a handful of logic operations on 3-bit messages, leaving "area machines" in its wake, each created to advance at most a single reducible application. While it is technically logarithmic in time, the resulting area machines can begin communicating in a pipelined fashion behind the partitioning wave, so in practice it costs only a constant-time penalty. (The dismantling of area machines remains a logarithmic cost in time.)
Packets within a single reduction should, and must, meet, and thus provide an often-useful synchronization. Sequences of packets are sorted and combined as they rise within an area, to be broadcast from the root of the area machine. Parallel prefix and parallel suffix operations are provided to reduce area traffic, since there remains a potential bottleneck within an individual reducible application. This is accomplished without the need, exhibited in the Ultracomputer (Jack (Jacob?) Schwartz at NYU), for a separate logarithmic-sized cache memory in each communication node.
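For a feel of what a parallel prefix over a combining tree buys, here is a generic work-efficient scan (a Blelloch-style up-sweep/down-sweep, written sequentially in Python; my illustration, not Mago's exact hardware algorithm):

```python
# Exclusive prefix scan via up-sweep / down-sweep: the two passes mirror
# how a combining tree like the T cells can produce prefix results in
# O(log n) sweeps. Sequential code standing in for parallel hardware.
def prefix_scan(xs, op=lambda a, b: a + b, identity=0):
    n = len(xs)                   # assume n is a power of two
    t = xs[:]
    d = 1
    while d < n:                  # up-sweep: partial sums rise in the tree
        for i in range(2 * d - 1, n, 2 * d):
            t[i] = op(t[i - d], t[i])
        d *= 2
    t[n - 1] = identity
    d //= 2
    while d >= 1:                 # down-sweep: prefixes descend again
        for i in range(2 * d - 1, n, 2 * d):
            t[i - d], t[i] = t[i], op(t[i - d], t[i])
        d //= 2
    return t

print(prefix_scan([3, 1, 7, 0, 4, 1, 6, 3]))
# -> [0, 3, 4, 11, 11, 15, 16, 22]
```

Both sweeps are logarithmic in the number of leaves, matching the height of the tree.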
Each T cell (internal tree node) only needs a FIFO buffer (for efficiency) of size greater than the pipeline path to the top of the tree and back down. (This latter is a conjecture of mine, but it seems reasonable.) Since the tree maintains the left-to-right order of data (unlike some other combining networks), the system enables cells to rotate their data in logarithmic rather than linear time, avoiding the plausible congestion at the root of the area machine.
It's worth noting again that the parallelism within an area machine is independent of the simultaneous parallelism in other area machines, and each area machine has available to it a number of processors proportional to the quantity of data in the operand.
Have you come across this yet: "Compiling APL for parallel execution on an FFP machine"?
"Formal FP", which is similar to FP but with a regular, sugarless syntax intended for machine execution, is all I can offer you.
See Wikipedia's FP page.