I am attempting to reverse engineer a device which utilises a serial line to communicate.
I have not yet been able to determine the baud rate of the serial line. When I read the data at a given baud rate and convert it to hex, I can see that some of the digits change each time I take a reading.
I am certain that each time I take a reading the device is performing the same action, and should therefore be outputting the same signal.
If I receive different digits each time I take a measurement, could this be a symptom of selecting an incorrect baud rate?
I have three interfaces:
RS422
SPI
Ethernet
I’m looking to generate and transmit a variable-length message over the interfaces at rates of up to 2 kHz. I want to code checks to make sure that my output message length can be transmitted at the selected output rate for the settings of the transmission medium. I realise that this depends on the configuration of the different interfaces:
RS422: baud rate
SPI: clock
Ethernet: 10 Mbps or 100 Mbps
As well as:
Message length
Transmission rate
Can anyone suggest how I can check for these fault conditions?
Measure the bit error rate at various transmission rates and take this into consideration. You can transmit various bit patterns, collect a good sample and analyze the results to pick an ideal message size, transmission rate and integrity check/repair mechanism. Your ideal message size will likely be different for each interface.
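Before any of that, a rough back-of-the-envelope check is to compare the bits you need to move per second against the raw capacity of each link. Below is a minimal sketch in C; the framing overheads in the usage comment are illustrative guesses, not values taken from the question.

#include <stdbool.h>
#include <stdint.h>

/* Returns true if msg_len_bytes-long messages sent msgs_per_sec times per second
   fit within the raw link capacity (bits_per_sec), given a framing overhead
   expressed in bits per message. */
static bool fits_on_link(uint32_t msg_len_bytes, uint32_t msgs_per_sec,
                         uint64_t bits_per_sec, uint32_t overhead_bits)
{
    uint64_t bits_needed =
        (uint64_t)msgs_per_sec * ((uint64_t)msg_len_bytes * 8u + overhead_bits);
    return bits_needed <= bits_per_sec;
}

/* Example usage for a 100-byte message at 2000 messages/s:
   RS422 at 921600 baud, 10 bits on the wire per byte (2 extra bits per byte):
     fits_on_link(100, 2000, 921600, 100 * 2)
   Ethernet at 100 Mbps with roughly 38 bytes of framing per packet:
     fits_on_link(100, 2000, 100000000, 38 * 8) */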
I am using the ReadStream interface to sample at 100 Hz, and I have been able to integrate the interface into the Oscilloscope application. I am just unsure about the way I pass the buffer values on to the packet to be transmitted. Currently this is how I am doing it:
event void ReadStream.bufferDone(error_t result, uint16_t* buffer, uint16_t count)
{
    uint16_t i;
    /* Copy each sample from the completed buffer into the outgoing packet,
       stopping once the packet payload (NREADINGS entries) is full. */
    for (i = 0; i < count && reading < NREADINGS; i++) {
        local.readings[reading++] = buffer[i];
    }
}
I have defined a buffer size of 50, but I am not sure this is the right way to do it, as I am noticing just one sample per packet even though I have set Nreadings=2.
Also, the sampling rate does not seem to be 100 samples/second when I check. I must not be doing something right in the way I pass data to the packet to be transmitted.
I think I need to clarify a few things according to your questions and comments.
Reading a single sample from an accelerometer on micaZ motes works as follows:
Turn on the accelerometer.
Wait 17 milliseconds. According to the ADXL202E (the accelerometer) datasheet, the startup time is 16.3 ms. This is because this particular hardware cannot provide its first reading immediately after being powered on, but only after some delay. If you decrease this delay, you will likely get a wrong reading; however, the behavior is undefined, so you may sometimes get a correct reading, or the result may depend on environmental conditions, such as ambient temperature. Changing this 17-ms delay to a lower value is certainly a bad idea.
Read values (in two axes) from the Analog to Digital Converter (ADC), which is an MCU component that converts the analog output voltage of the accelerometer to a digital value (an integer). The speed at which the ADC can sample is independent of the parameters of the accelerometer: it is another piece of hardware.
Turn off the accelerometer.
This is what happens when you call Read.read() in your code. You can see that the maximum rate at which you can sample is once every 17 ms, that is, about 58 samples per second. It may be even a bit lower because of some overhead from the MCU or inaccuracy of the timers. This is true when you sample by calling Read.read() in a loop or at a fixed interval, because the call itself takes no less than 17 ms (I mean the delay between the command and the event).
What you may want to do is:
Turn on the accelerometer.
Wait 17 ms.
Perform a series of reads.
Turn off the accelerometer.
If you do so, you have one 17-ms delay for a whole set of samples instead of one such delay for each sample. Importantly, these steps have nothing to do with the interface you use for performing readings. You may call Read.read() multiple times in your application; however, it cannot be the same implementation of the read command that already exists for this accelerometer, because the existing implementation is responsible for turning the accelerometer on and off, and it waits 17 ms before reading each sample. For convenience, you may implement the ReadStream interface instead and call it once in your application.
Moreover, you wrote that ReadStream used a microsecond timer and is independent from the 17-ms settling time of the ADC. That sentence is completely wrong. First of all, you cannot say that an interface uses or does not use a timer. The interface is just a set of commands and events without their definitions. A particular implementation of the interface may use timers. The Read and ReadStream interfaces may be implemented multiple times on different platforms by various hardware components, such as accelerometers, thermometers, hygrometers, magnetometers, and so on. Secondly, the 17-ms settling time refers to the accelerometer, not the ADC. And no matter which interface you use, Read or ReadStream, and which timers a driver uses, milli- or microsecond, the 17-ms delay is always required after powering on the accelerometer. As I mentioned, you probably want to incur this delay once per multiple reads instead of once per read.
It seems that the TinyOS source code already contains an implementation of the accelerometer driver providing the ReadStream interface which allows you to sample continuously. Look at the AccelXStreamC and AccelYStreamC components (in tos/sensorboards/mts300/).
The ReadStream interface consists of two commands. postBuffer(val_t *buf, uint16_t count) is called to provide a buffer for samples; in the accelerometer driver, val_t is defined as uint16_t. You may post multiple buffers, one by one. This command does not yet start sampling and filling buffers. For that purpose, there is the read(uint32_t usPeriod) command, which directs the device to start filling buffers by sampling with the specified period (in microseconds). When a buffer is full, you get a bufferDone(error_t result, val_t *buf, uint16_t count) event and the component starts filling the next buffer, if any. If there are no buffers left, you additionally get a readDone(error_t result, uint32_t usActualPeriod) event, whose usActualPeriod parameter indicates the actual sampling period, which may differ from (in particular, be higher than) the period you requested when calling read, due to hardware constraints.
So the solution is to use the ReadStream interface provided by AccelXStreamC and AccelYStreamC (or maybe some higher-level components that use them) and pass the expected period in microseconds to the read command. If the actual period is higher than the one you expect, it means that sampling at a higher rate is impossible, either due to hardware constraints or because it was not implemented in the ADC driver. In the latter case, you may try to fix the driver, although that requires good knowledge of low-level programming. The ADC driver source code for this platform is located in tos/chips/atm128/adc.
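As a rough sketch of how these pieces fit together in a nesC module wired to AccelXStreamC, where the buffer size and the 10000-microsecond period are placeholder values rather than anything mandated by the driver:

enum { BUF_SIZE = 50 };              // illustrative buffer size
uint16_t accelBuf[2][BUF_SIZE];

void startSampling()
{
    // Post both buffers up front so the driver can alternate between them.
    call ReadStream.postBuffer(accelBuf[0], BUF_SIZE);
    call ReadStream.postBuffer(accelBuf[1], BUF_SIZE);
    // A 10000-us period gives 100 samples per second.
    call ReadStream.read(10000);
}

event void ReadStream.bufferDone(error_t result, uint16_t* buf, uint16_t count)
{
    // Copy the samples into your packet here, then re-post the buffer so
    // sampling can continue without running out of buffers.
    call ReadStream.postBuffer(buf, count);
}

event void ReadStream.readDone(error_t result, uint32_t usActualPeriod)
{
    // usActualPeriod reports the sampling period the driver actually achieved.
}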
I have a PIC24-based system equipped with a 24-bit, 8-channel ADC (google "MCP3914 Evaluation Board" for more details...).
I have got the board to sample all 8 channels, store the data in a 512x8 buffer and transmit the data to the PC using a USB module when the buffer is full (this is done by different interrupts).
The only problem is that when the MCU is transmitting data (the UART transmission interrupt has higher priority than the ADC reading interrupt), the ADC is not sampling data, hence there will be data loss (the sample rate is around 500 samples/sec).
Is there any way to prevent this data loss? Maybe some multitasking?
Simply transmit the information to the UART register without using interrupts, but by polling the TXIF bit:
while (PIR1.TXIF == 0);   /* wait until the transmit register is free */
TXREG = data_byte;        /* the byte you want to send */
The same applies to the ADC conversion: if you were using interrupts to start/stop a conversion, simply poll the required bits (GO/DONE to wait for the conversion, ADON to enable the module) and that's it.
The TX bits and AD bits may vary depending on your PIC.
That prevents the MCU from entering an interrupt service routine and losing 3-4 samples.
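On a PIC24 specifically, the equivalent polled transmit could look something like the sketch below; the register names assume UART1 on a typical PIC24 part with the XC16 toolchain, so check them against your device header:

#include <xc.h>

/* Send a buffer over UART1 by polling, leaving the interrupt system free
   for the ADC. UTXBF stays set while the transmit FIFO is full. */
static void uart1_send(const uint8_t *buf, uint16_t len)
{
    uint16_t i;
    for (i = 0; i < len; i++) {
        while (U1STAbits.UTXBF)   /* wait for room in the TX FIFO */
            ;
        U1TXREG = buf[i];
    }
}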
In PIC24 an interrupt can be assigned one of the 8 priorities. Take a look at the corresponding section in the "Family Reference Manual" -> http://ww1.microchip.com/downloads/en/DeviceDoc/70000600d.pdf
Alternatively you can use DMA channels, which are very handy. You can configure your ADC to use DMA, so sampling and filling the buffer won't use any CPU time; the same goes for the UART, I believe.
http://ww1.microchip.com/downloads/en/DeviceDoc/39742A.pdf
http://esca.atomki.hu/PIC24/code_examples/docs/manuallyCreated/Appendix_H_ADC_with_DMA.pdf
I'm currently designing a sensor network that will have small ATtiny85 probes that each have a temperature sensor, a barometer, and a humidity sensor. I think I will use these (http://goo.gl/TqaDjl) to communicate, as they are low cost and don't need much range. I'm not sure, though, how I will get the probes to communicate with the main controller, as the transmitter transmits digitally and I will have 20+ probes that all need to send data every minute without the signals overlapping or getting messed up. I think the easiest way would be to time the probes so that they don't overlap in transmission, but I'm not sure.
Questions:
-Is using RF the cheapest and best option for this system?
-How can I prevent communication overlapping?
-What is the easiest way to send data digitally from an Arduino (or ATtiny85)?
I guess I'm late to the party, but I'll offer some insight into collision control with a ton of chattering transmitters on one link, a la 802.11. This is somewhat packetized.
If two transmitters try to transmit at the same time, you're bound to get a mangled mess of rotten bacon at the receivers.
A simplified version of WiFi-style collision handling would be good. Basically, it uses preambles that can be detected, and for longer transmissions that have a higher chance of colliding, it can use short request-to-send/clear-to-send packets.
While this is likely overkill, I'd go for preambles. Start by transmitting a steady stream of something recognizable, like (in hex) 555533330f0f00ff, which is basically alternating 1s and 0s but with changing frequency (0101, then 0011, then 00001111, and so on), a readily recognizable pattern that is unlikely to be given off by stray radiation or noise.
This pattern could also be shifted, so there is a finite set of alternative preambles that are bitwise shifts of the original.
If a transmitter detects this preamble, it should STOP and wait. If you limit all packets to a certain temporal length, collisions should not occur if you wait a sufficient time between packets. If, during the time of one packet, a preamble is heard, then your station should wait for the full length of the transmission (listening to its length and other header fields so it knows how long to wait). Once the packet is done, your station can transmit its own preamble.
This is where the WiFi resemblance stops and simpler protocols take over.
Note that if two stations are waiting on a packet, they can start their preambles almost simultaneously. To resolve this, each station should have a different zero bit flipped in its preamble. If it detects a 1 for that bit, it sees that there's another station preambling, and should back off.
Each station should wait a certain delay (up to you) after each packet so other stations can start their transmissions.
A few sketches of the communication patterns show that this is sufficient for your needs.
Now, if it's a master-slave-style system, then as long as you only have one network it should be easier, since there should only be one outstanding request that would involve a slave transmitting.
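Going back to the peer-style scheme above, here is a very rough sketch of the listen-before-transmit idea in C, leaving out the preamble-bit collision resolution; radio_carrier_sensed(), radio_send(), delay_ms() and random_ms() are hypothetical placeholders for whatever your RF module's driver provides:

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical radio/timing primitives -- replace with your driver's calls. */
extern bool radio_carrier_sensed(void);                  /* preamble/carrier heard? */
extern void radio_send(const uint8_t *buf, uint8_t len);
extern void delay_ms(uint16_t ms);
extern uint16_t random_ms(uint16_t max);                 /* pseudo-random back-off */

#define MAX_PACKET_MS 50   /* illustrative upper bound on one packet's air time */

/* Wait until the channel is quiet, then transmit; back off if someone else
   starts first. */
static void send_with_backoff(const uint8_t *buf, uint8_t len)
{
    for (;;) {
        if (radio_carrier_sensed()) {
            /* Someone is transmitting: wait out the longest possible packet. */
            delay_ms(MAX_PACKET_MS);
            continue;
        }
        delay_ms(random_ms(20));          /* small random gap to avoid lock-step */
        if (!radio_carrier_sensed()) {
            radio_send(buf, len);
            return;
        }
    }
}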
Those will be by far the cheapest method. As for the best method, there is a variety of much better, but more expensive, choices. A network of XBee modules comes to mind, but those are much more expensive than $1.25 a pair.
Using the RF modules is very doable, however. To prevent communication overlapping, put an RF transmitter and receiver on each sensor node and on the main hub. The main hub can send "hey sensor1, give me your data", which gets broadcast to all of the sensors. However, only sensor1 will realize "hey, I am sensor1, here is my data", which the hub will listen for. Then the hub will go on and say "hey sensor2, send me your data", and so on and so forth.
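A sketch of that polling loop on the hub side, with hypothetical rf_send() and rf_receive() helpers standing in for whatever RF library you end up using:

#include <stdbool.h>
#include <stdint.h>

#define NUM_SENSORS 20

/* Hypothetical RF helpers -- replace with your RF library's calls. */
extern void rf_send(const uint8_t *buf, uint8_t len);
extern bool rf_receive(uint8_t *buf, uint8_t *len, uint16_t timeout_ms);

/* Poll each sensor in turn; only the addressed sensor answers, so
   transmissions never overlap. */
static void poll_all_sensors(void)
{
    uint8_t id;
    for (id = 1; id <= NUM_SENSORS; id++) {
        uint8_t request[2] = { 'R', id };   /* request addressed to one sensor */
        uint8_t reply[32];
        uint8_t reply_len = sizeof(reply);

        rf_send(request, sizeof(request));
        if (rf_receive(reply, &reply_len, 100)) {
            /* handle reply[0..reply_len-1] from sensor 'id' here */
        }
        /* else: the sensor did not answer in time; retry or skip it */
    }
}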
I think your original approach may be best. The approach of putting a Tx and Rx on every device may be affordable, but I question whether it will work. With 20 devices transmitting on the same frequency, which one will the receiver "hear"? Most importantly, how will a device receive any remote transmitter's signal when its own transmitter is very close? Keep in mind: these are AM radios and will "send" a carrier even if not sending any data. Get a small number of transmitters working before trying to go full scale.
To avoid the problem of receiving the one active transmitter among the soup of inactive transmitters, you want only one transmitter powered at a time. You would control Vcc to one transmitter, turn it on, send the burst of data, and then power it off.
-How can I prevent communication overlapping?
You can't -- you have to accept that there will be occasional overlaps. Add a CRC to the transmitted data so that the receiver can detect garbage.
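Something as simple as CRC-16-CCITT is enough to reject mangled packets. Here is a minimal bitwise implementation in C; the polynomial and initial value are just the common CCITT choices, nothing specific to these modules:

#include <stdint.h>
#include <stddef.h>

/* CRC-16-CCITT (polynomial 0x1021, initial value 0xFFFF), computed bit by bit.
   The transmitter appends the CRC to the packet; the receiver recomputes it
   and drops the packet on a mismatch. */
static uint16_t crc16_ccitt(const uint8_t *data, size_t len)
{
    uint16_t crc = 0xFFFF;
    size_t i;
    int bit;

    for (i = 0; i < len; i++) {
        crc ^= (uint16_t)data[i] << 8;
        for (bit = 0; bit < 8; bit++) {
            if (crc & 0x8000)
                crc = (uint16_t)((crc << 1) ^ 0x1021);
            else
                crc = (uint16_t)(crc << 1);
        }
    }
    return crc;
}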
The timing of the multiple transmitters is surely a project in itself. You don't want to run them all at the same transmission period. They may not collide at the beginning, but once two devices drifted together and started colliding, they would stay together and keep colliding for a long time, until the clocks drifted apart again.
I would start with something simple. For example, with three devices, run the transmissions at periods of 2000 ms, 2200 ms, and 2400 ms (use EEPROM to configure them). That way, if a pair happens to collide at one data point, their next transmissions will be 200 ms apart.
What is the potential maximum speed of an RS232 serial port on a modern PC? I know that the specification says it is 115200 bps, but I believe it can be faster. What influences the speed of the RS232 port? I believe it is the quartz resonator, but I am not sure.
This goes back to the original IBM PC. The engineers who designed it needed a cheap way to generate a stable frequency, and turned to crystals that were widely in use at the time, found in any color TV in the USA: a crystal made to run an oscillator circuit at the color burst frequency of the NTSC television standard, which is 315/88 = 3.579545 MHz. From there, the clock first went through a programmable divider, the one you change to set the baud rate. The UART itself then divides it by 16 to generate the sub-sampling clock for the data line.
So the highest baud rate you can get comes from setting the divider to its smallest value, 2, which produces 3579545 / 2 / 16 = 111861 baud, about a 2.9% error from the ideal 115200 baud. But that is close enough; the clock rate doesn't have to be exact. That is the point of asynchronous signalling, the A in UART: the start bit re-synchronizes the receiver on every character.
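If you want to sanity-check figures like that, a few lines of C will do it; the clock and divisor below are the ones used in the answer above:

#include <stdio.h>

int main(void)
{
    const double clock_hz = 3579545.0;   /* clock assumed in the answer above */
    const int divisor = 2;               /* smallest programmable divisor */
    const double target_baud = 115200.0;

    double actual_baud = clock_hz / divisor / 16.0;   /* UART divides by 16 */
    double error_pct = (target_baud - actual_baud) / target_baud * 100.0;

    printf("actual baud: %.0f, error vs %.0f baud: %.1f%%\n",
           actual_baud, target_baud, error_pct);
    return 0;
}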
Getting real RS-232 hardware running at 115200 baud reliably is a significant challenge. The electrical standard is very sensitive to noise; there is no attempt at canceling induced noise and no attempt at creating an impedance-matched transmission line. The maximum recommended cable length at 9600 baud is only 50 feet. At 115200 only very short cables will do in practice. To go further you need a different approach, like RS-422's differential signals.
This is all ancient history and doesn't exactly apply to modern hardware anymore. True serial hardware based on a UART chip like the 16550 has been disappearing rapidly, replaced by USB emulators, which use a custom driver to emulate a serial port. They do accept a baud rate selection but just ignore it for the USB bus itself; it only applies to the last half-inch in the dongle you plug into the device. Whether or not the driver accepts 115200 as the maximum value is a driver implementation detail; they usually accept higher values.
The maximum speed is limited by the specs of the UART hardware.
I believe the "classical" PC UART (the 16550) in modern implementations can handle at least 1.5 Mbps. If you use a USB-based serial adapter, there's no 16550 involved and the limit is instead set by the specific chip(s) used in the adapter, of course.
I regularly use an RS232 link running at 460,800 bps with a USB-based adapter.
In response to the comment about clocking (with a caveat: I'm a software guy): asynchronous serial communication doesn't transmit the clock (that's the asynchronous part right there) along with the data. Instead, transmitter and receiver are supposed to agree beforehand about which bitrate to use.
A start bit on the data line signals the start of each "character" (typically a byte, but with start/stop/parity bits framing it). The receiver then starts sampling the data line in order to determine if it's a 0 or a 1. This sampling is typically done at least 16 times faster than the actual bit rate, to make sure it's stable. So for a UART communicating at 460,800 bps like I mentioned above, the receiver will be sampling the RX signal at around 7.4 MHz. This means that even if you clock the actual UART with a raw frequency f, you can't expect it to reliably receive data at that rate. There is overhead.
Yes, it is possible to run at higher speeds, but the major limitation is the environment: in a noisy environment there will be more corrupted data, limiting the speed. Another limitation is the length of the cable between the devices; you may need to add a repeater or some other device to strengthen the signal.