I'm not sure whether this question fits here or at Electrical Engineering Stack Exchange. Nevertheless...
I have a SIM800L modem controlled through a serial port. The inserted SIM belongs to the O2 Czech Republic operator (230 02).
When I make a call from the device to my phone and then reject it, the modem does not detect the rejection. It just thinks that the other side is still ringing. Only about 20 s after the hang-up does the modem terminate the call, and that's a problem for my implementation: I need to know this (almost) immediately.
I tried everything from the SIM800 AT command documentation.
Namely:
CDRIND=0 (+CDRIND arrives ca. 20 s after the hang-up)
ATX4 (nor does any of ATX0..ATX4 shorten the interval)
ATS7 does not have any effect
ATS10 does not have any effect
COLP? returns 0,2. After setting COLP=1, the lag is the same and the result is BUSY (or NO CARRIER, depending on ATX, of course)
MORING=1 (does nothing worth mentioning)
Do you have any idea?
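For reference, a minimal sketch of the kind of setup involved (Arduino + SoftwareSerial; the pins and number are placeholders), dialing and then polling AT+CLCC so the reported call state can be watched while the remote side rejects:

    #include <SoftwareSerial.h>

    SoftwareSerial sim800(7, 8);  // RX, TX -- placeholder pins

    void setup() {
      Serial.begin(9600);
      sim800.begin(9600);
      sim800.println("ATD+420123456789;");  // placeholder number; ';' = voice call
    }

    void loop() {
      // AT+CLCC lists current calls as +CLCC: <id>,<dir>,<stat>,...
      // where <stat> 2 = dialing, 3 = alerting, 0 = active.
      sim800.println("AT+CLCC");
      unsigned long start = millis();
      while (millis() - start < 1000) {     // echo the reply for one second
        if (sim800.available()) Serial.write(sim800.read());
      }
    }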
I'm trying to pick up some serial comms for a new job I am starting. I have done some reading, which has helped a lot; however, much of it tells you about the specification of serial comms and what everything is, but not when it is best to use particular options.
My searches for this information so far only seem to pull in the spec; perhaps as a novice I am searching for the wrong terms.
My questions then!
Baud Rate - I have read this is signal changes per second and is often mislabelled as bits per second. Is this essentially bits per second including the frame data if asynchronous, and actually bits per second if synchronous?
Parity - Even/Odd. Is there any difference at all between the two? I'm thinking in terms of efficiency or similar. Does this only still exist for compatibility's sake?
Stop Bits - I have read so far you can have 1 or 2 stop bits. In C# there seems to be an option for 1.5 too. I can't find anything on why you would want/need more than 1.
If anyone can advise on these points, or point me to some recommended reading material I would be very grateful.
Thanks for reading.
You very rarely have a choice; you must match the settings that the device uses. If you don't know them, then you need to look in a manual or pick up a phone. Do keep in mind that it is increasingly rare to work with a real serial port device, one that uses a UART. Most commonly you actually talk to an emulated serial port, implemented by a USB or Bluetooth device driver. The settings you use don't matter in such a case, since the actual signaling is implemented by the underlying bus.
If you can configure the device then basic guidelines are:
Baudrate is directly related to the length of the cable and the amount of electrical interference that's present. You have to go slower when you get bit errors. The RS-232 spec only allows for a maximum of 50 ft at 9600 baud.
Parity ought to be used when you don't use an error-correcting protocol. It does not matter whether you pick Odd or Even. Odd people pick odd; it's their prerogative.
Stop bits is usually 1. Picking 1.5 or 2 helps a bit to relieve pressure on a device whose interrupt response times are poor, which you detect as data loss.
Databits is almost always 8, sometimes 7 if the device only handles ASCII codes.
Handshaking is an important setting that never stops causing trouble, since many programmers just overlook it. Modern computers are almost always fast enough not to need it, but that's not necessarily true for devices. The most basic stay-out-of-trouble configuration is to turn DTR on when you open the port and to tell the device driver to take care of RTS/CTS handshaking. Xon/Xoff handshaking is sometimes used; it depends on the device.
A good 90% of the battle is won by implementing solid error checking. It is almost always skimped on, which is a bad idea. It is very important for serial port devices, since they have no error-correcting capabilities themselves and only very weak error detection. Always make sure that you can detect and properly report overrun, parity, and framing errors. And test them by getting the settings intentionally wrong.
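To make that concrete, here's a minimal sketch of such a configuration on a POSIX system (the device path is a placeholder; C# exposes the same settings through SerialPort):

    #include <fcntl.h>
    #include <termios.h>
    #include <unistd.h>

    // Open a port (placeholder path) at 9600 8N1 with RTS/CTS handshaking.
    // On POSIX, DTR is normally asserted by open() itself.
    int open_serial() {
      int fd = open("/dev/ttyUSB0", O_RDWR | O_NOCTTY);
      if (fd < 0) return -1;

      termios tty{};
      if (tcgetattr(fd, &tty) != 0) { close(fd); return -1; }
      cfmakeraw(&tty);                 // raw mode: 8 data bits, no echo or line editing
      cfsetispeed(&tty, B9600);
      cfsetospeed(&tty, B9600);
      tty.c_cflag &= ~CSTOPB;          // 1 stop bit
      tty.c_cflag |= CRTSCTS;          // hardware RTS/CTS flow control
      tty.c_cflag |= CREAD | CLOCAL;   // enable the receiver
      tty.c_iflag |= INPCK | PARMRK;   // mark parity/framing errors in-band as \377 \0 <byte>
      if (tcsetattr(fd, TCSANOW, &tty) != 0) { close(fd); return -1; }
      return fd;
    }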
I am reading a book about networking, and it says that, in a circuit-switching environment, the number of links and switches a signal has to go through before reaching its destination does not affect the overall time it takes for the whole signal to be received. On the other hand, in a packet-switching scenario the number of links and switches does make a difference. I spent quite a bit of time trying to figure it out, but I can't seem to get it. Why is that?
To drastically over-simplify, a circuit-switched environment effectively has a direct line from the transmitter to the receiver once a connection has been established; imagine an old-fashioned phone call going through switchboards. Therefore the transmission time is the same regardless of hops (well, ignoring the physical time it takes the signal to move over the wire, which is very small since it's moving at nearly the speed of light).
In a packet-switched environment, there is no direct connection. A packet of data is sent from the transmitter to the first hop, which tries to calculate an open route to the destination. It then passes its data onto the next hop, which again has to calculate the next available hop, and so on. This takes time that increases linearly with the number of hops. Think of sending a letter through the US postal system. It has to go from your house to a post office, then from the post office to a local distribution center, then from the local distribution center to the national one, then from that to the recipient's local distribution center, then to the recipient's post office, then finally to their house.
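To put a formula on the packet-switched case: with store-and-forward, each hop must receive the whole packet before passing it on, so for a packet of L bits crossing N links of rate R bits per second (ignoring propagation and queueing delay):

    d_{\text{end-to-end}} = N \cdot \frac{L}{R}

The delay grows linearly with the number of links N, while the established circuit adds no per-hop store-and-forward time.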
The difference is that only one connection at a time can exist per circuit on a circuit-switched network; again, think of a phone line with someone using it to make a call. Whereas in a packet-switched network, many transmitters and receivers can be sending data at the same time; again, think of many people sending/receiving letters.
I'm currently designing a sensor network that will have small ATtiny85 probes that each have a temperature sensor, a barometer, and a humidity sensor. I think I will use these (http://goo.gl/TqaDjl) to communicate, as they are low cost and I don't need much range. I'm not sure, though, how I will get the probes to communicate with the main controller, as the transmitter transmits digitally and I will have 20+ probes that all need to send data every minute without signals overlapping or getting messed up. I think the easiest way would be to time the probes so that they don't overlap in transmission, but I'm not sure.
Questions:
-Is using RF the cheapest and best option for this system?
-How can I prevent communication overlapping?
-What is the easiest way to send data digitally from an Arduino (or ATtiny85)?
I guess I'm late to the party, but I'll offer some insight into collision control with a ton of chattering transmitters on one link, a la 802.11. This is somewhat packetized.
If two transmitters try to transmit at the same time, you're bound to get a mangled mess of rotten bacon at the receivers.
A simplified version of WiFi-style collision handling would be good. Basically, it uses preambles that can be detected, and for longer transmissions that have a higher chance of conflicting, it can use short request-to-send/clear-to-send packets.
While this is likely overkill, I'd go for preambles. Start by transmitting a steady stream of something recognizable, like, in hex, 555533330f0f00ff, which is basically alternating 1s and 0s but with changing frequency (0101, then 0011, then 00001111, and so on), a readily recognizable pattern that is unlikely to be given off by stray radiation or noise.
This pattern could undergo a shift so there's a finite set of other preambles that should be bitwise-shifted relative to the original.
If a transmitter detects this preamble, it should STOP and wait. If you limit all packets to a certain temporal length, collisions should not occur if you wait sufficient time between packets. If, during the time of one packet, a preamble is heard, then your station should wait for the full length of the transmission (listening to its length and other header fields so it knows how long to wait). Once the packet is done, your station can transmit its preamble.
This is where the WiFi resemblance stops and simpler protocols take over.
Note that if 2 stations are waiting on a packet, they can start their preambles almost simultaneously. To resolve this, each station should have a different zero bit flipped in its preamble. If it detects a 1 for that bit, it sees that there's another station preambling, and should back off.
Each station should wait a certain delay (up to you) after each packet so other stations can start their transmissions.
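A rough sketch of just the transmit side of this on an Arduino-class part, bit-banging the preamble from above (the pin and bit rate are arbitrary; the listen/back-off side is omitted):

    const int TX_PIN = 3;             // hypothetical transmitter data pin
    const unsigned BIT_US = 1000;     // 1 ms per bit, i.e. ~1 kbit/s

    void sendByte(uint8_t b) {
      for (int i = 7; i >= 0; --i) {  // MSB first
        digitalWrite(TX_PIN, (b >> i) & 1);
        delayMicroseconds(BIT_US);
      }
    }

    void sendFrame(const uint8_t *payload, size_t len) {
      const uint8_t preamble[] = {0x55, 0x55, 0x33, 0x33, 0x0F, 0x0F, 0x00, 0xFF};
      for (uint8_t p : preamble) sendByte(p);
      for (size_t i = 0; i < len; ++i) sendByte(payload[i]);
      digitalWrite(TX_PIN, LOW);      // idle low between frames
    }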
A few sketches of the communication patterns show that this is sufficient for your needs.
Now, if it's a master-slave-style system, then as long as you only have one network it should be easier, since there should only be one outstanding request that would involve a slave transmitting.
Those will be by far the cheapest method. As for the best method, there are a variety of much better, but more expensive, choices. A network of XBee modules comes to mind, but those are much more expensive than $1.25 a pair.
Using the RF modules is very doable, however. To prevent communication overlapping, put an RF transmitter and receiver on each sensor node and on the main hub. The main hub can send "hey sensor1, give me your data", which gets broadcast to all of the sensors. However, only sensor1 will realize "hey, I am sensor 1, here is my data", which the hub will listen for. Then the hub will go on and say "hey sensor2, send me your data", and so on and so forth.
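A sketch of the hub's side of that exchange, assuming the RadioHead RH_ASK driver for these cheap ASK modules (node count, message format, and timeout are all made up):

    #include <RH_ASK.h>
    #include <SPI.h>   // not used directly, but RadioHead needs it to compile

    RH_ASK radio;      // default pins: receiver on 11, transmitter on 12

    void setup() { radio.init(); }

    void loop() {
      for (int id = 1; id <= 20; ++id) {
        char poll[12];
        snprintf(poll, sizeof(poll), "REQ %d", id);     // "hey sensorN, send data"
        radio.send((uint8_t *)poll, strlen(poll));
        radio.waitPacketSent();

        uint8_t buf[32];
        uint8_t len = sizeof(buf);
        unsigned long start = millis();
        while (millis() - start < 200) {                // wait up to 200 ms for a reply
          if (radio.recv(buf, &len)) { /* record reading for node id */ break; }
        }
      }
    }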
I think your original approach may be best. The approach of putting a Tx and Rx on every device may be affordable, but I question whether it will work. With 20 devices transmitting on the same frequency, which one will the receiver "hear"? Most importantly, how will a device receive any remote transmitter's signal when its own transmitter is very close? Keep in mind: these are AM radios and will "send" a carrier even if not sending any data. Get a small number of transmitters working before trying to go full scale.
To avoid the problem of receiving the one active transmitter among the soup of inactive transmitters, you want only one transmitter powered at a time. You would control Vcc to one transmitter, turn it on, send the burst of data, and then power it off.
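A sketch of that power-gating approach (the power pin and settling delay are guesses; the send routine is hypothetical):

    const int TX_POWER_PIN = 4;   // hypothetical: transmitter Vcc fed from a GPIO pin

    void transmitBytes(const uint8_t *data, size_t len);  // hypothetical send routine

    void setup() {
      pinMode(TX_POWER_PIN, OUTPUT);
      digitalWrite(TX_POWER_PIN, LOW);    // every node starts with its transmitter off
    }

    void sendBurst(const uint8_t *data, size_t len) {
      digitalWrite(TX_POWER_PIN, HIGH);   // power the transmitter up
      delay(20);                          // settling time -- a guess, check the datasheet
      transmitBytes(data, len);
      digitalWrite(TX_POWER_PIN, LOW);    // and power it straight back down
    }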
-How can I prevent communication overlapping?
You can't -- you have to accept that there will be occasional overlaps. Add a CRC to the transmitted data so that the receiver can detect garbage.
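For example, the sender can append a one-byte CRC and the receiver can recompute it and discard mismatches; a minimal CRC-8 (polynomial 0x07) looks like this:

    uint8_t crc8(const uint8_t *data, size_t len) {
      uint8_t crc = 0;
      for (size_t i = 0; i < len; ++i) {
        crc ^= data[i];
        for (int b = 0; b < 8; ++b)
          crc = (crc & 0x80) ? (uint8_t)((crc << 1) ^ 0x07) : (uint8_t)(crc << 1);
      }
      return crc;
    }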
The timing of the multiple transmitters is surely a project in itself. You don't want to run them all at the same transmission period: they may not collide at the beginning, but once two devices drifted together and started colliding, they would stay together and collide for a long time, until the clocks drifted apart again.
I would start with something simple. For example, with three devices, run the transmissions at 2000 ms, 2200 ms, and 2400 ms periods (use EEPROM to configure). That way, if a pair happens to collide at one data point, the next transmissions of that pair will be 200 ms apart.
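A sketch of that, with the period read from EEPROM so each node can be configured individually (the send routine is hypothetical):

    #include <EEPROM.h>

    void transmitReading();   // hypothetical: sends one sensor record

    unsigned int periodMs;

    void setup() {
      // Node number stored in EEPROM byte 0: node 0 -> 2000 ms, node 1 -> 2200 ms, ...
      periodMs = 2000 + 200 * EEPROM.read(0);
    }

    void loop() {
      transmitReading();
      delay(periodMs);
    }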
I'm currently developing a small application for monitoring the power / current our solar collector is generating.
The array is connected to 3 inverters. Every inverter has an RS232 interface, transmitting one line of information (its current status) every 10 seconds.
Since I want to do the monitoring using a device only having one serial port, I need to come up with a way to be able to read the data from all of the inverters in parallel.
I don't need to send anything to any of the inverters!
Is it possible to just connect 3 RS232 wires in parallel to one serial port? Collisions will be pretty unlikely, since every inverter transmits only 64 bytes / 10 seconds ending with a newline, so I could check for variable line lengths to detect collisions.
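For what it's worth, a minimal sketch of the length check I have in mind (reading the port as a line stream; a collision that happens to yield a 64-byte line would still slip through):

    #include <iostream>
    #include <string>

    int main() {
      std::string line;
      while (std::getline(std::cin, line)) {   // serial port piped to stdin
        if (line.size() == 63)                 // 64 bytes minus the stripped '\n'
          std::cout << line << '\n';           // plausible record
        // anything else is discarded as a likely collision or fragment
      }
    }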
I'm sort of chuckling at the doomsday and wacky answers that so often pop up on Stack Overflow...
But anyway, in years gone by I have used paralleled RS-232 transmit lines with diodes, and it can work fine for situations where collisions are unlikely. In one particular application where I used this technique (a specialized security system), there were two input terminals from which a user could key in simple commands to control the system, and it was very unlikely that two people would be trying to control it at the same time from the two different terminals. Amazingly enough, there were no problems with voltage levels with most RS-232 receivers I tested at the time, and they tolerated the signal characteristics (no negative voltage) that result from the simple use of diodes in series with the TXD signals. However, if I had to do this again, I would likely add a simple pull-down resistor and capacitor to ground with a diode between RXD and the cap, in a sort of charge-pump configuration, or a pull-down to a negative-going handshake signal, to ensure the "OR'd" input signal goes truly negative, since the RS-232 spec defines +3 V to -3 V as invalid.
In any case, I would recommend not using this technique except in very specific, limited, and non-mission-critical cases, and I would not use it where multiple devices send information at a programmed interval (as in the OP's case) or where there is a software handshake.
It can be a simple solution to the problem of not having enough serial input ports, but only in a very limited set of environments.
No, you should NOT connect 3 serial output ports in parallel. If you do, you are probably going to break the RS232 output circuitry of your inverters.
You have 3 RS232 outputs, so you need 3 RS232 inputs; then you can manage these 3 inputs the way you like. Maybe you can buffer the data from each input and re-output the data on a single RS232 output, to be connected to your monitoring device... but you should add something to the data stream to differentiate the data coming from the 3 inverters.
Maybe you can use some kind of IC that does the job for you. I'm not sure, but an IC that multiplexes multiple RS232 inputs onto a single RS232 output may already exist.
Try this search: rs232 port input multiplexer on Google
Or, if the monitoring device is a Windows computer, you can use 3 serial-to-USB converters: that will create 3 virtual COM ports on your computer, and you can read data from them with any software.
Update
About the hypothesis of protecting the output circuitry by using diodes to block re-entering current: I don't think it's going to work...
Many years have passed since I last used an RS232 link at a low level (so maybe I'm wrong), but I think that there is some kind of handshake going on between the RS232 input and output ports (speed to use, parity, stop bits...).
Each RS232 port has input and output signals, both for data and for transmission control, so your multiple RS232 outputs do have some input signals, and your single RS232 input does have some outputs.
This means that your monitoring RS232 input port is going to try to handshake with 3 RS232 ports at the same time... and the 3 RS232 ports are probably going to respond at the same time... so I think it's not going to work.
Other than that, if you place diodes on your outputs, you are going to lose about 0.7 V. I don't remember the tolerance on RS232 signal levels, but maybe that 0.7 V is relevant.
What are the implications of using a half-duplex serial connection versus a full-duplex one? What happens if both sides try sending data at the same time? Do you end up with corrupt data arriving on both ends? Does flow-control help you with this?
On the line the data will be garbled, which may or may not lead to the devices receiving the garbled data. Sometimes this is used to detect that the transmission failed due to a collision.
Normally you wouldn't use half-duplex in the same way as full-duplex to send single characters in asynchronous mode. Rather you'd use some packet protocol which determines who has the right to send at which times, and which includes some checksum (usually a CRC) to detect corruption.
Flow control doesn't help much with this. Its purpose is to ensure that the receiver is not overrun by too much data. There is software flow control, which uses the ASCII characters XON and XOFF to start and stop transmission, and hardware flow control, which uses the RTS (Request To Send) and CTS (Clear To Send) control lines. XON/XOFF-style software flow control won't work with half duplex.
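As a side note, the two styles map to different POSIX termios flags; a hedged sketch (assuming a full-duplex port, for the reason just given):

    #include <termios.h>

    // Switch a configured port between hardware and software flow control.
    void setFlowControl(termios &tty, bool hardware) {
      if (hardware) {
        tty.c_cflag |= CRTSCTS;          // RTS/CTS handshake lines
        tty.c_iflag &= ~(IXON | IXOFF);
      } else {
        tty.c_cflag &= ~CRTSCTS;
        tty.c_iflag |= IXON | IXOFF;     // in-band XON/XOFF characters
      }
    }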
These days you don't see half duplex with ordinary RS-232 and modems (I used it with acoustic couplers in the eighties; it was rare even then). But it is common for RS-485, which is used in industrial control with various protocols. There are also many other data transmission standards that operate in a half-duplex way, mostly when there are more than two devices attached to the same line (ancient 10BASE2 Ethernet, CAN, LIN, FlexRay, I2C, ...).
My God, where did you find a half-duplex line in this day and age?
Anyway, the answer is that if both ends drive the line, it gets all confused. For this reason, there are specified ASCII characters like Clear to Send and Data Terminal Ready (CTS and DTR) that are used to make a handshake. See this tutorial for more.
Augh, I should have gone to bed. Tutorial right, me stoopid.