While reading the CCENT/CCNA ICND1 Official Exam Certification Guide, I came across the following steps describing how a hub creates an electrical bus:
Step 1 The network interface card (NIC) sends a frame.
Step 2 The NIC loops the sent frame onto its receive pair internally on the card.
Step 3 The hub receives the electrical signal, interpreting the signal as bits so
that it can clean up and repeat the signal.
Step 4 The hub’s internal wiring repeats the signal out all other ports, but not
back to the port from which the signal was received.
Step 5 The hub repeats the signal to each receive pair on all other devices.
However, I fail to understand the logic behind step 2, and I am unable to find resources that explain it. Can anyone kindly explain the purpose of step 2?
The NIC does that loopback in half-duplex mode (meaning it cannot TX and RX simultaneously); it is normally an old 10BASE-T / 100BASE-T implementation. The idea is to check whether the frame it sent came back correctly, in which case the NIC assumes that no collision occurred. Since today's switches all use full-duplex communication, collisions can't occur anyway.
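Conceptually (just a sketch of the idea described above, not how a real PHY is implemented), the half-duplex check boils down to comparing what comes back on the receive pair with what was just put on the transmit pair:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Sketch of the half-duplex loopback check: while transmitting, the
     * NIC also listens on its receive pair. If what it hears differs
     * from what it sent, another station must have been transmitting at
     * the same time -> collision. */
    bool collision_detected(const uint8_t *sent, const uint8_t *heard,
                            size_t len) {
        return memcmp(sent, heard, len) != 0;
    }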
I have been learning the nuts and bolts of BLE lately, because I intend to do some development work using a BLE stack. I have learned a lot from the online documentation and the spec, but there is one aspect that I cannot seem to find.
BLE uses frequency hopping for communication. Once two devices are connected (one master and one slave), it looks like all communication is then initiated via the master and the slave responds to each packet. My question involves loss of packets in the air. There are two major cases I am concerned with:
Master sends a packet that is received by the slave, and the slave sends a packet back to the master. The master doesn't receive the packet, or if it does, it is corrupt.
Master sends a packet that is not received by the slave.
Case 1 to me is a "don't care" (I think). Basically the master doesn't get a reply, but at the very least the slave got the packet and can "sync" to it. The master does whatever and tries transmitting the packet again at the next connection event.
Case 2 is the harder case. The slave doesn't receive the packet and therefore cannot "sync" its communication to the current frequency channel.
How exactly do devices synchronize the channel hopping sequence with each other when packets are lost in the air (specifically case 2)? Yes, there is a channel map, so the slave technically knows what frequency to jump to for the next connection event. However, the only way I can see all of this happening is via a "self-timed" mechanism based on the connection parameters. Is this good enough? I mean, given the clock drift, there will be slight differences in the amount of time the master and slave are transmitting and receiving on the same channel... and eventually they will be off by 1 channel, 2 channels, etc. Is this not really an issue, because for that to happen a lot of time needs to pass, given the 500 ppm clock spec? I understand there is a supervision timer that would declare the connection dead after no valid data is transferred for some time. However, I still wonder about the "hopping drift", which brings me to the next point.
How much "self-timing" is employed / mandated within the protocol? Do slave devices use a valid start of packet from the master every connection interval to re-synchronize the channel hopping? For example: if the connection interval plus some window elapses, hop to the next channel; or, if a packet is received, re-sync and restart the timeout timer. This would be a hop timer separate from the supervision timer.
I can't really find this information within the Core 5.2 spec. It's pretty dense at over 3000 pages... If somebody could point me to the relevant sections in the spec, or somewhere else, or even answer the questions, that would be great.
The slave knows the channel map. If one packet is not received from the master, it will listen again one connection interval later, on the next channel. If that is also not received, it adds one more connection interval and moves to the next channel again.
The slave also stores a timestamp (or event counter) for when the last packet from the master was detected, regardless of whether its CRC was correct. This is called the anchor point. It is not the same time point used for the supervision timeout.
The amount of time between the anchor point and the next expected packet is multiplied by the combined master + slave clock accuracy (for example, 500 ppm) to get a window-widening value, plus 16 microseconds. The slave then listens for this amount of time before and after the expected packet arrival time.
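As a worked example of that rule (assumed numbers: one second since the last anchor point, and the 500 ppm combined accuracy mentioned above):

    #include <stdio.h>

    /* Illustrative only: compute the receive-window widening a slave
     * must apply around the expected packet arrival time, per the rule
     * described above. */
    int main(void) {
        double t_since_anchor_us = 1000000.0; /* 1 s since last anchor point  */
        double combined_ppm      = 500.0;     /* master + slave clock accuracy */

        /* drift allowance grows with elapsed time; 16 us covers jitter */
        double widening_us = t_since_anchor_us * combined_ppm / 1e6 + 16.0;

        printf("Slave listens +/- %.0f us around the expected packet\n",
               widening_us);
        return 0;
    }

So after one silent second the slave opens its window 516 microseconds early and keeps listening 516 microseconds late, which is why the drift you describe does not accumulate into whole-channel offsets before the supervision timeout fires.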
I am developing a BLE android application where I have used RxAndroidBLe for BLE communication.
Everything works fine except for one issue: the application is not receiving every scan response broadcast by the BLE device (I am not sure whether this is actually an issue or not). The BLE device is set to broadcast a scan response every 1 second, on all three advertising channels 37, 38, and 39 in round-robin fashion. The application is intended to scan continuously until it is closed. But I have observed that the application is not receiving every scan response; that is, it does not receive a scan response from the device every second. Sometimes there is a gap of 2-3 seconds or more. Is there any solution to overcome this, or is this valid behavior?
Any suggestion would be appreciated. Thanks in advance.
Due to the nature of BLE scanning, it is not certain that you will receive every scan response broadcast. Basically, it depends mostly on the scan interval and scan window parameters (host side) and on the interval at which you are broadcasting responses.
You can try low-latency scan mode to improve your results.
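As a rough illustration of why responses get missed (numbers are assumed examples, not RxAndroidBle defaults): a scanner that listens for scan_window out of every scan_interval can at best catch that fraction of the advertising events on the channel it happens to be tuned to.

    #include <stdio.h>

    /* Rough illustration with assumed host-side parameters. */
    int main(void) {
        double scan_interval_ms = 100.0; /* hypothetical scan interval */
        double scan_window_ms   = 50.0;  /* hypothetical scan window   */

        double duty = scan_window_ms / scan_interval_ms;
        printf("Scanner listens %.0f%% of the time -> misses roughly "
               "%.0f%% of advertising events\n",
               duty * 100.0, (1.0 - duty) * 100.0);
        return 0;
    }

Low-latency scan mode helps precisely because it raises this duty cycle toward 100%.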
I'm using a LoRa 1276 module and Arduino to collect data from every node.
The example code I use is from here, and it works successfully!
As far as I know, LoRaWAN uses TDMA to distribute time slots to the nodes.
The gateway then polls the nodes to get the data. That keeps the nodes' power consumption low and lets the gateway act as the controlling master.
I found some information about the preamble at the front of the packet: after receiving it, a node decides whether to reply. If a node receives a preamble that does not match, it goes back to sleep.
Is there any sample code for polling mode?
Thanks.
LoRaWAN Class B devices do indeed use TDMA for scheduling periodic receive windows.
Here is sample code implementing Class B: https://github.com/Lora-net/LoRaMac-node/tree/master/src/apps/LoRaMac/classB
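If you just want the flavor of a poll/response scheme without pulling in the whole LoRaMac-node stack, here is a minimal sketch. The radio_* functions are hypothetical placeholders for whatever SX1276 driver your example code uses, and the one-byte address scheme is an assumption:

    #include <stdint.h>

    /* Hypothetical driver hooks -- replace with your SX1276 library calls. */
    void    radio_send(const uint8_t *buf, uint8_t len);
    uint8_t radio_receive(uint8_t *buf, uint8_t maxlen, uint32_t timeout_ms);

    #define MY_ADDRESS 0x02 /* this node's address (assumed scheme) */

    /* Node side: sleep until polled, answer only polls addressed to us. */
    void node_loop(void) {
        uint8_t rx[16];
        for (;;) {
            /* wake on preamble, then read the poll frame (1-byte address) */
            uint8_t n = radio_receive(rx, sizeof rx, 5000);
            if (n >= 1 && rx[0] == MY_ADDRESS) {
                uint8_t reply[4] = { MY_ADDRESS, 0, 0, 0 }; /* sensor data... */
                radio_send(reply, sizeof reply);
            }
            /* otherwise the poll was for another node -- back to sleep */
        }
    }

    /* Gateway side: poll each node address in turn, TDMA-style. */
    void gateway_poll_all(void) {
        uint8_t rx[16];
        for (uint8_t addr = 1; addr <= 3; addr++) {
            radio_send(&addr, 1);                    /* poll node `addr`   */
            (void)radio_receive(rx, sizeof rx, 200); /* wait for its reply */
        }
    }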
I want wireless data transmission between microcontrollers. Of the three microcontrollers A, B, and C, I need a one-to-many connection such that A has bidirectional communication with B and C, but B and C need not communicate with each other. Will RF transceivers be helpful?
Yes, RF transceivers will be helpful for implementing a wireless communications link. (I suppose you could use IR transceivers if your devices will have line of sight to each other.)
Are you asking how to direct the messages to the proper destination? If your data can be packetized, then the answer is the same as for any other multi-drop network medium: you add a header to the packet and put a destination address in the header. Nodes B and C may receive each other's transmissions, but they will check the destination address, see that the message is addressed to A, and ignore those messages. Another possibility is that B and C could use different radio frequencies; then they won't receive each other's transmissions. But in this case A will have to receive on two frequencies, or perhaps A could retune in order to communicate with the other node.
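As a sketch of the addressed-packet idea (field names and sizes are assumptions, not any particular radio library's format):

    #include <stdbool.h>
    #include <stdint.h>

    /* Assumed addressing scheme for illustration. */
    enum { ADDR_A = 0x01, ADDR_B = 0x02, ADDR_C = 0x03 };

    typedef struct {
        uint8_t dest;        /* destination node address */
        uint8_t src;         /* source node address      */
        uint8_t len;         /* payload length           */
        uint8_t payload[32];
    } packet_t;

    /* Receiver-side filter: each node keeps only frames addressed to it,
     * so B and C silently drop each other's traffic to A. */
    bool accept_packet(const packet_t *p, uint8_t my_addr) {
        return p->dest == my_addr;
    }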
Update: If two of your devices transmit at the same time, the transmissions may interfere and the receiver may not be able to receive either transmission. This issue is addressed with a channel access strategy. Here again, techniques used on wired networks also apply to wireless networks. One way to avoid collisions is for the transceiver to listen for a carrier signal or existing transmission before transmitting itself. This technique is called Carrier Sense Multiple Access (CSMA). If there is no signal, then it is OK to transmit; but if there is an existing signal on the channel, the transceiver holds off on its own transmission until the channel is clear. I'm familiar with the CC1101 transceiver, and this functionality is built into it (although it may need to be enabled via configuration).
Another way you could avoid collisions is with a master/slave or client/server communication strategy. For example, if B and C only transmit in response to requests from A, then A can manage when each node transmits and ensure that two nodes don't transmit simultaneously. Other ways to avoid collisions include Time Division Multiple Access (TDMA) and Frequency Division Multiple Access (FDMA).
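Here is a minimal listen-before-talk sketch of the CSMA idea; channel_is_busy() is a hypothetical stand-in for the transceiver's carrier-sense/RSSI check (built into the CC1101, for example):

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical driver hooks for whichever transceiver you use. */
    bool     channel_is_busy(void);            /* carrier sense / RSSI check */
    void     radio_transmit(const uint8_t *buf, uint8_t len);
    void     delay_ms(uint32_t ms);
    uint32_t random_backoff_ms(void);          /* small random delay */

    /* Listen before talk: wait for a clear channel, backing off by a
     * random delay so that two waiting nodes don't collide the moment
     * the channel frees up. */
    void csma_send(const uint8_t *buf, uint8_t len) {
        while (channel_is_busy()) {
            delay_ms(random_backoff_ms());
        }
        radio_transmit(buf, len);
    }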
I am using a 16-bit MCU, the PIC24HJ64GP504, to write a CAN-based application. Basically it is communication between my board and one other node, which continuously sends data to my board using CAN at 1 Mbit/s. I am configuring the ECAN module in my PIC24 to work at 1 Mbit/s. I have written the code in such a way that for the first 10 ms the ECAN module accepts all messages coming in from the other side, and after that I re-configure the ECAN module to accept only messages with ID 0x13.
Now here comes the issue... The other node and my board are powered up at the same instant. The other node starts transmitting messages 40 ms or so after power-up, but I am not able to receive any message from it on my board. If instead I power up my board first, give it some time to reconfigure the ECAN module with the new filters and settle down, and then power up the other node, everything works perfectly.
Now the strangest part... If I have a CAN bus analyzer connected between my board and the other node, then even if I power up both nodes at the same time, everything works fine; there is no need to power up my board first. I have tried this with three different bus analyzers from different manufacturers and got the same results.
To me it appears that the ECAN module takes some time to settle down during re-configuration, and that introducing the bus analyzer on the bus somehow cuts this time short so that everything works perfectly. But I am not sure what exactly the problem is.
The problem might be a missing ACK. The CAN analyzer might be acknowledging the frames, so the transmitting device does not switch to error passive.
I would hold off sending until the whole bus is initialized.
Also sounds like missing ACKs to me.
Are you seeing any error frames? (Get the scope to trigger off six consecutive dominant bits.) The Tx node might be going off the bus, or even into some application-error mode, if it doesn't get acknowledged enough.
You might be able to coax it back onto the bus by transmitting a dummy message.
I've found a Saleae Logic very useful in these circumstances (as well as a scope) - hang it off the Rx pin of your physical layer (or even wire up a standalone PHY that you can use to monitor the bus). The Saleae software will interpret the CAN and show you what's happening. Sometimes it's useful to use the scope trigger out to trigger the Logic.
CAN communication requires at least two active devices on the bus, because a CAN frame is not complete unless someone acknowledges it.
When you power up your board and the other node together, it seems your board is not ready within 40 ms. That leaves the other node as the only member on the bus, which violates the rule stated above. The other node gets a Tx error for every attempted frame; once its error count reaches 128, it goes into an error mode and stops sending messages. Hence you are not receiving anything.
When you power up your board first and give it time, your board is ready and ACKs every message sent by the other node. Hence communication is good!
When you add the CAN analyzer, even if your board is not powered up, there are two active nodes on the bus. Hence communication is good!
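You can see how quickly this happens from the standard CAN fault-confinement rule (an ACK error increments the transmit error counter by 8; error passive begins at 128):

    #include <stdio.h>

    /* Tiny illustration: with nobody ACKing, a transmitter's Transmit
     * Error Counter (TEC) climbs by 8 per failed frame, so it reaches
     * the error-passive threshold after only 16 frames -- a couple of
     * milliseconds of back-to-back attempts at 1 Mbit/s. */
    int main(void) {
        int tec = 0, attempts = 0;
        while (tec < 128) { /* error-active region */
            tec += 8;       /* ACK error: TEC += 8 */
            attempts++;
        }
        printf("Error passive after %d failed frames (TEC = %d)\n",
               attempts, tec);
        return 0;
    }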