I am using a 16-bit MCU, a PIC24HJ64GP504, to write a CAN-based application. Basically it is communication between my board and another node which continuously sends data to my board over CAN at 1 Mbit/s. I am configuring the ECAN module in my PIC24 to run at 1 Mbit/s. I have written the code so that for the first 10 ms the ECAN module accepts all incoming messages, and after that I reconfigure the ECAN module to accept only messages with ID 0x13.
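The reconfiguration step looks roughly like this (a simplified sketch; the register names follow the PIC24H/dsPIC33F ECAN family reference manual, so double-check the bit layout against the device datasheet):

    // Sketch: switch ECAN1 to accept only standard ID 0x13.
    // Register and bit names per the PIC24H ECAN reference manual.
    void ecan_accept_only_0x13(void)
    {
        C1CTRL1bits.REQOP = 4;               // request configuration mode
        while (C1CTRL1bits.OPMODE != 4);     // wait for the mode change

        C1CTRL1bits.WIN = 1;                 // map the filter window into SFR space
        C1RXF0SID = (0x13u << 5);            // SID<10:0> lives in bits <15:5>
        C1RXM0SID = (0x7FFu << 5);           // mask: all 11 SID bits must match
        C1FMSKSEL1bits.F0MSK = 0;            // filter 0 uses acceptance mask 0
        C1BUFPNT1bits.F0BP = 0xF;            // route filter 0 matches to the FIFO
        C1FEN1 = 0x0001;                     // enable filter 0 only
        C1CTRL1bits.WIN = 0;                 // back to the buffer window

        C1CTRL1bits.REQOP = 0;               // request normal operation
        while (C1CTRL1bits.OPMODE != 0);
    }

Note that while the module sits in configuration mode it does not participate in bus traffic at all.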
Now here comes the issue... The other node and my board are powered up at the same instant. The other node starts transmitting messages 40 ms or so after power-up, but I am not able to receive any of them on my board. However, if I power up my board first, give it some time to reconfigure the ECAN module with the new filters and settle down, and then power up the other node, everything works perfectly.
Now the strangest part: if I have a CAN bus analyzer connected between my board and the other node, everything works fine even when both nodes are powered up at the same time; there is no need to power up my board first. I have tried this with three different bus analyzers from different manufacturers and got the same results.
To me it appears that the ECAN module takes some time to settle during reconfiguration, and that introducing the bus analyzer onto the bus somehow cuts that time short so that everything works perfectly. But I am not sure what exactly the problem might be.
The problem might be a missing ACK. The CAN analyzer might be acknowledging the frames, so that the transmitting device does not switch to error passive.
I would hold off sending until the whole bus is initialized.
Also sounds like missing ACKs to me.
Are you seeing any error frames? (Get the scope to trigger off six consecutive dominant bits.) The TX node might be going off the bus, or even into some application-error mode, if it doesn't get acknowledged enough.
You might be able to coax it back onto the bus by transmitting a dummy message.
I've found a Saleae Logic very useful in these circumstances (as well as a scope) - hang it off the Rx pin of your physical layer (or even wire up a standalone PHY that you can use to monitor the bus). The Saleae software will interpret the CAN and show you what's happening. Sometimes it's useful to use the scope trigger out to trigger the Logic.
CAN communication requires at least two active devices on the bus for successful communication. This is because a CAN frame is not complete unless some other node acknowledges it.
When you power up your board and the other node together, it seems your board is not ready within 40 ms. If it is not ready, the other node is left as the only member on the bus, which violates the rule stated above. The other node gets a TX error for every unacknowledged frame, and once its transmit error counter reaches 128 (it increments by 8 per error, so after 16 failed frames) the node goes into the error-passive state and stops getting messages through. Hence you are not getting anything.
When you power up your board first and give it time, your board is ready and will ACK every message sent by the other node. Hence communication is good!
When you add the CAN analyzer, there are two active nodes on the bus even if your board is not powered up. Hence communication is good!
We have hardware in the field which communicates at a variety of different baud rates using RS-485/Modbus RTU (1200, 9600, through to 115200).
The firmware running on our device has a small bug: the Modbus RTU response delay was fixed, calculated for running at 115200 baud. The issue went unnoticed until recently, when one of our customers began using the 1200 baud rate. It appears the 115200 response delay was adequate for everything down to 9600.
At 1200 baud, though, the first byte of the response packet is being missed (I'm assuming due to the time it takes to switch over from sending to receiving at 1200 baud). If a large packet is being requested things are OK, because the time it takes the device to put the packet together makes up for the lack of delay, but most packets are corrupted.
Unfortunately, upgrading the firmware on the devices already in the field to use the correct/longer response delay is not an option. Does anyone have any ideas as to how we can retrieve the full packets at 1200 baud (with the incorrect response delay that is currently causing one byte to be missed)?
The only idea I could come up with was requesting an excessive number of registers with each request, to force the delay to increase.
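For what it's worth, a quick back-of-the-envelope check (assuming the standard 11-bit RTU character: start + 8 data + parity + stop, and the 3.5-character inter-frame gap) shows how dramatic the timing mismatch is:

    #include <stdio.h>

    /* Prints the Modbus RTU character time and the 3.5-character
     * inter-frame gap for each baud rate in use. */
    int main(void)
    {
        const double bits_per_char = 11.0;
        const double bauds[] = { 115200.0, 9600.0, 1200.0 };

        for (int i = 0; i < 3; i++) {
            double char_ms = 1000.0 * bits_per_char / bauds[i];
            printf("%6.0f baud: char = %6.3f ms, 3.5 chars = %6.3f ms\n",
                   bauds[i], char_ms, 3.5 * char_ms);
        }
        return 0;
    }

At 1200 baud a single character lasts about 9.2 ms and the inter-frame gap about 32 ms, so a response delay sized for 115200 baud (where a character lasts under 0.1 ms) is roughly a hundred times too short.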
If I understand your question correctly I had this same issue once.
I was called to troubleshoot a very unreliable Modbus link that was failing pretty frequently, though for short periods of time it worked normally.
After checking all other obvious parameters I hooked up my scope and this is what I saw:
As it turned out, there was an issue with the firmware on the slave: it was firing its Modbus answer immediately after receiving the stop bit of the query, so part of the answer was transmitted before the master had time to free the bus (yellow arrow in the picture).
At the time we were not happy with the prospect of updating the firmware on the slave, so we first explored other options. The best thing we came up with was a setting on the master (a PLC from Schneider Electric) that allowed us to tweak how long the bus was asserted by the master after its stop bit.
This is how it was defined in the manual:
I vaguely remember we were able to improve the situation, but a watchdog was triggering an alarm somewhere every time there was an error, so this solution was not considered acceptable and we had to update the firmware.
Somewhat related to your question: I once measured the time it takes to free the bus using hardware direction control versus a software solution. You can see some details here. If updating the firmware is a no-go for you, I guess messing with the transceiver won't be an option either... At the end of this question I linked a circuit that toggles the direction-control line of an RS-485 link automatically. That might be an (admittedly terrible) solution if your master is not able to toggle faster.
I'm using a LoRa 1276 and an Arduino to collect data from all my nodes.
The example code I use is from here, and it works successfully!
As far as I know, LoRaWAN uses TDMA to distribute time slots to the nodes.
The gateway then polls the nodes to get their data; that keeps the nodes' power consumption low and lets the gateway act as the controlling master.
I found some information about the preamble at the front of each packet: after receiving it, a node decides whether to reply. If a node receives a preamble that does not match, it goes back to sleep.
Is there any sample code for polling mode?
Thanks.
LoRaWAN Class B devices do indeed use TDMA for scheduling periodic receive windows.
Here is sample code implementing Class B. https://github.com/Lora-net/LoRaMac-node/tree/master/src/apps/LoRaMac/classB
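If you only need the bare polling idea rather than full Class B, a minimal sketch of a gateway-driven poll loop could look like the following; radio_send(), radio_recv_timeout() and sleep_ms() are hypothetical placeholders for whatever LoRa driver and platform you actually use:

    #include <stdint.h>
    #include <stdbool.h>

    /* Hypothetical driver hooks: replace with your LoRa library calls. */
    extern bool radio_send(const uint8_t *buf, uint8_t len);
    extern bool radio_recv_timeout(uint8_t *buf, uint8_t *len, uint32_t timeout_ms);
    extern void sleep_ms(uint32_t ms);

    #define NUM_NODES 4

    /* The gateway addresses one node at a time and waits briefly for a
     * reply, so each node only wakes when it sees its own address in the
     * poll and can sleep the rest of the time. */
    void gateway_poll_loop(void)
    {
        uint8_t reply[64];

        for (;;) {
            for (uint8_t node = 1; node <= NUM_NODES; node++) {
                uint8_t poll[2] = { node, 'P' };      /* addressed poll request */
                radio_send(poll, sizeof poll);

                uint8_t len = sizeof reply;
                if (radio_recv_timeout(reply, &len, 500) && reply[0] == node) {
                    /* got data from the addressed node; hand it off here */
                }
                /* else: the node missed its slot; move on to the next one */
            }
            sleep_ms(1000);                           /* pause between poll rounds */
        }
    }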
I've been trying to establish serial (UART) communication between a Raspberry Pi Model B Revision 2.0 (I checked the model as described on this page) and an Arduino Mega 2560. I made a service on the Pi that writes to the UART and then expects a message back, and a coworker programmed the Arduino with an echo program. While they were communicating, I had trouble receiving data: it arrived clustered in 8-byte pieces, and I had to introduce a timeout to wait between them (I was actually reading as much as was available and calling select() to wait for the next cluster, but it turned out to be 8 bytes per cluster, except maybe the last one). As explained in a question I found on this site, the programmer is the one who has to take care of the protocol and cannot rely on the whole message being ready to read at once (which is logical).
However, when I just connected the Pi's TXD and RXD pins together, no matter how many bytes I tried sending, they arrived in one go (I've gone up to a bit more than 256, which is more than enough for my purposes). I also measured a duration difference of around 50 milliseconds, directly from within the program using the gettimeofday() function.
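For reference, the read loop I described above is roughly of this shape (simplified; MSG_LEN is a hypothetical fixed message length, just for illustration):

    #include <stddef.h>
    #include <unistd.h>
    #include <sys/select.h>

    #define MSG_LEN 32   /* hypothetical fixed message length */

    /* Accumulates one full message across several small reads; in my case
     * the data tends to arrive in 8-byte clusters. Returns 0 on success,
     * -1 on timeout or error. */
    int read_message(int fd, unsigned char *msg)
    {
        size_t got = 0;

        while (got < MSG_LEN) {
            fd_set rfds;
            struct timeval tv = { .tv_sec = 0, .tv_usec = 100000 }; /* 100 ms */

            FD_ZERO(&rfds);
            FD_SET(fd, &rfds);
            if (select(fd + 1, &rfds, NULL, NULL, &tv) <= 0)
                return -1;                /* timeout or error: give up */

            ssize_t n = read(fd, msg + got, MSG_LEN - got);
            if (n <= 0)
                return -1;
            got += (size_t)n;             /* often advances 8 bytes at a time */
        }
        return 0;
    }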
So, could anybody clear things up for me:
Why is this happening?
Is this difference in behaviour expected?
Is there a potential problem with either of the devices (if that can even be concluded from the given information)?
Of course, any additional information is welcome, in case I forgot to ask something that is deemed important.
Why is this happening?
I tried communicating Arduino-to-Arduino and Arduino-to-Pi some time back and faced similar problems with UART communication. First, make sure you keep the same baud rate on both devices. On the Pi, you might need to trigger an event when data arrives from the Arduino. On the other side, if your code takes too long to run you might lose some data, i.e. your Arduino is busy doing something else while the Pi sends data over the UART.
Is this difference in behaviour expected?
Yes. The Arduino is a microcontroller-based device, while the Pi is microprocessor-based (it runs an OS).
Is there a potential problem with either of the devices (if that can even be concluded from the given information)?
I don't think there could be any hardware problem, unless a device is not functioning at all.
Also, because of these issues, I switched from UART to SPI communication. This solved my problem completely.
I have been trying to set up two XBees to communicate for the last three days. X-CTU seems to be the perfect option for doing so; however, it is a real menace when it comes to discovering XBees on serial ports.
I was able to detect one XBee just once, by luck, and the other one never showed up. I have even replaced both my XBees. I am trying to figure out an alternative, i.e. using a serial console to perform the operation. I haven't been able to receive an OK response from the device upon issuing +++.
Since I haven't had a good experience using a PC to communicate with ESP8266 devices in the past, I tried to figure out a workaround by using the second serial port of an Arduino to send the configuration messages and reading the response by printing it on the default serial console.
It also appears that configuration messages can differ depending on the mode of the device: if it's in API mode, the frame has to be generated in a specific format (I use the X-CTU frame generator for this purpose).
Why am I not able to receive a response from the XBee upon issuing a +++?
The devices are Series 1 XBees and the exact part number is XB24-AWI-001. Any help is highly appreciated.
Have you considered that the XBee might be in API mode? Maybe you should consider reflashing the device in AT mode to start playing with it.
To test if it's in API mode, you can refer to the guide, chapter 9 for the API mode structure:
http://eewiki.net/download/attachments/24313921/XBee_ZB_User_Guide.pdf?version=1&modificationDate=1380318639117&api=v2
Basically, a datagram in API mode starts with ~, and it's built as follows:
[0x7E|length(2B)|Command(1B)|Payload(length-1B)|Checksum(1B)]
As 0x7E is ~ on the ASCII table, you should try typing a bogus datagram in a serial terminal session like:
~ <C-d> AAAA
N.B.: <C-d> means Control-D under Unix, which is the EOF character.
Obviously such a message isn't likely to work, and you will receive a reply asking you to send the datagram again. That's because the EOF character is ASCII code 4, so the receiver takes the length of the datagram to be 4 bytes. You then send four bogus bytes, and the checksum ('A') is very unlikely to be correct, so the receiver will assume the transmission has been corrupted; it will therefore ask for the datagram to be sent again, meaning you will receive a datagram making that request.
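For comparison, here is a small sketch of how a well-formed frame gets assembled; the checksum is 0xFF minus the low byte of the sum of the frame-data bytes (everything between the length field and the checksum):

    #include <stddef.h>
    #include <stdint.h>

    /* Wraps the frame-data bytes (API command + payload) in an API frame:
     * [0x7E | length MSB | length LSB | frame data | checksum].
     * Returns the total number of bytes written to out. */
    size_t xbee_build_frame(const uint8_t *data, uint16_t len, uint8_t *out)
    {
        uint8_t sum = 0;

        out[0] = 0x7E;                     /* start delimiter, '~' in ASCII */
        out[1] = (uint8_t)(len >> 8);      /* length MSB */
        out[2] = (uint8_t)(len & 0xFF);    /* length LSB */

        for (uint16_t i = 0; i < len; i++) {
            out[3 + i] = data[i];
            sum += data[i];                /* checksum covers frame data only */
        }
        out[3 + len] = (uint8_t)(0xFF - sum);
        return (size_t)len + 4;
    }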
Though I can only advise you to consider running it only in API mode (it's more reliable and a better API, but you cannot play around with it and understand what's going on by tapping the line with a logic analyzer... though, given enough time, you'll start to read API datagrams like they're English ☺).
I wrote a page with a few resources to check on how to reflash the XBees:
https://github.com/hackable-devices/polluxnzcity/wiki/Flash-zigbee
and here is some other advice from another, totally unrelated project:
https://github.com/andrewrapp/xbee-api#documentation
And I also wrote a lib (aimed at beaglebones but you can tweak it for your use) that handles API mode 2 with XBees:
https://github.com/hackable-devices/polluxnzcity/blob/master/PolluxGateway/include/xbee/xbee_communicator.h
https://github.com/guyzmo/polluxnzcity/blob/master/PolluxGateway/src/xbee/xbee_communicator.C
but I bet that with a little Google search you can find more widely used libraries than those, and even some aimed at running on Arduinos (N.B.: that lib was originally written for Arduinos and then adapted to run on the BeagleBone, so reversing the operation shouldn't be hard).
I'm writing a Linux program (using Qt 4.8 and libusb 1.0) which will communicate with a custom USB device (currently being programmed by a co-worker).
Step 1 is to have a "heartbeat" going back and forth over USB at regular intervals.
I'm currently using asynchronous bulk transfer.
For testing, I've put my "Send_Heartbeat()" on a button click. If I click on the button a LOT and queue up a number of messages to send, as long as I keep my queue busy, the messages keep sending and my USB device keeps receiving them.
If I stop for a few seconds, then resume and add more messages to the queue, the USB device stops receiving them.
BUT, my program's Transfer Callback DOES return with a transfer status code of 0, indicating success, even though my USB device isn't receiving them.
My questions:
Why does the callback's transfer status indicate success if my USB device appears to have stopped receiving them?
Has anyone heard of this type of behaviour?
It's worth noting that if I disconnect the USB device, I get proper status codes returned in my callback indicating that the device has gone away.
If the USB Device is left connected and running, and I "Detatch" and then again "Attach" to force a re-connection and try sending more test heartbeats, it works! The USB device starts receiving messages again.
My "Detatch" is the following calls:
libusb_release_interface()
libusb_reset_device()
libusb_close()
Then my "Attach" is:
libusb_get_device_list()
libusb_get_device_descriptor()
libusb_open()
libusb_set_configuration()
libusb_claim_interface()
My next step is to narrow down which of the libusb commands is re-establishing the communication.
Meanwhile, I'm hoping someone recognizes these symptoms and has a suggestion.
As this is my first time programming USB communication, I'm wondering if there's something fundamental that I've missed.
Thanks!
The issue is here, I guess:
My "Detatch" is the following calls:
libusb_release_interface()
In your detach, after releasing the interface you should hand the device back to the kernel:
libusb_attach_kernel_driver()
libusb_reset_device()
libusb_close()
Then my "Attach" is:
libusb_get_device_list()
libusb_get_device_descriptor()
libusb_open()
libusb_set_configuration()
Here you need to check whether a kernel driver is active or not: check what libusb_kernel_driver_active() returns, and call libusb_detach_kernel_driver() if it is, before the final
libusb_claim_interface()
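Putting it together, a minimal sketch of the attach sequence with the kernel-driver check might look like this (the VID/PID values and interface number 0 are hypothetical placeholders):

    #include <libusb-1.0/libusb.h>

    /* Opens the device, detaches any bound kernel driver, and claims
     * interface 0. Returns 0 on success, -1 on failure. */
    int attach_device(libusb_context *ctx, libusb_device_handle **out)
    {
        libusb_device_handle *h =
            libusb_open_device_with_vid_pid(ctx, 0x1234, 0x5678);
        if (!h)
            return -1;

        /* If the kernel has a driver bound to the interface, detach it
         * before claiming, or the claim fails with LIBUSB_ERROR_BUSY. */
        if (libusb_kernel_driver_active(h, 0) == 1)
            libusb_detach_kernel_driver(h, 0);

        if (libusb_set_configuration(h, 1) < 0 ||
            libusb_claim_interface(h, 0) < 0) {
            libusb_close(h);
            return -1;
        }
        *out = h;
        return 0;
    }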