I have a project using MDB (Multi-Drop Bus) for a vending machine (VDM).
The VDM has an MDB-to-RS232 converter.
I'm not sure if it converts the 9-bit MDB frames to 8-bit (MDB-UART).
How do I read data from the VDM on my computer?
Thanks all
MDB (Multi-Drop Bus) uses 9-bit frames: after the standard 8 data bits (as in regular RS232 UART communication) there is a 9th bit called the "mode" bit.
(Wikipedia on MDB: "the mode bit differentiates between ADDRESS and DATA bytes.")
But you can read such data even with regular 8-bit RS232 interfaces, e.g. a plain standard USB-to-RS232 device for PC.
Here is how:
Use 9600 baud, 8 data bits, 1 stop bit, and the RS232 parity setting "Space". Make sure you still receive the original character value even when a parity error is indicated. Any MDB address byte from your VDM will then be received with a parity error (but still be displayed correctly); any data byte will be displayed without error.
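Here is a minimal sketch of that receive setup in Python with pyserial (the port name is a placeholder, and whether the driver reports parity errors per byte or simply passes the byte through depends on the OS and adapter):

```python
# Minimal sketch: reading MDB traffic through a standard 8-bit RS232 port.
# Assumptions: pyserial is installed and the adapter shows up as "COM3"
# (use "/dev/ttyUSB0" or similar on Linux) - the port name is a placeholder.
import serial

ser = serial.Serial(
    port="COM3",                    # placeholder - adjust to your adapter
    baudrate=9600,                  # MDB runs at 9600 baud
    bytesize=serial.EIGHTBITS,
    parity=serial.PARITY_SPACE,     # 9th (mode) bit is treated as the parity bit
    stopbits=serial.STOPBITS_ONE,
    timeout=1,
)

while True:
    b = ser.read(1)                 # returns b"" on timeout
    if b:
        # Address bytes arrive flagged as parity errors (mode bit = 1),
        # data bytes arrive clean (mode bit = 0); the value itself is intact
        # as long as the driver passes bytes through despite the error flag.
        print(f"received 0x{b[0]:02X}")
```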
For sending MDB ADDRESS and DATA bytes from a standard 8-bit RS232 port, you can apply temporary parity changes: switch the parity setting to "Mark" before sending an address byte, then switch back to "Space" before sending data bytes.
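A sketch of that transmit side under the same assumptions is below; pyserial allows changing the parity of an open port, which is what implements the Mark/Space switch. The address and data values are placeholders, and the strict MDB response timing mentioned further down may still be hard to meet from a PC:

```python
# Minimal sketch: sending an MDB block from the PC through an 8-bit UART.
# The address/data values below are placeholders, not taken from the answer.
import serial

ser = serial.Serial("COM3", 9600, bytesize=serial.EIGHTBITS,
                    parity=serial.PARITY_SPACE, stopbits=serial.STOPBITS_ONE,
                    timeout=1)

def send_mdb_block(ser, address_byte, data_bytes):
    """Send one address byte (mode bit = 1) followed by data bytes (mode bit = 0)."""
    ser.parity = serial.PARITY_MARK    # parity bit forced to 1 -> mode bit set
    ser.write(bytes([address_byte]))
    ser.flush()                        # make sure the byte has left before switching
    ser.parity = serial.PARITY_SPACE   # parity bit forced to 0 -> mode bit clear
    ser.write(bytes(data_bytes))
    ser.flush()

# Example with placeholder values: one address byte followed by one data byte.
send_mdb_block(ser, 0x08, [0x08])
```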
On Windows, you can do such tricks with our Docklight software (see Docklight and MDB). It's free for basic testing and there is also a related 9-bit example project.
On Linux / Raspberry Pi other users have successfully implemented the parity trick, too; see this stackexchange post about an MDB + Pi setup.
You should also be able to read the data with RealTerm, Teraterm, Termite, Bray, YAT or any other RS232 application, as long as it handles the "Space" and "Mark" parity settings correctly.
You'll need an adapter that does all the conversion on the fly and in real time. If you want to emulate the VMC (master), you'll need an MDB-UART master adapter. If you want to emulate an MDB peripheral device (such as a coin changer, bill validator, etc.), you'll need the corresponding peripheral (slave) adapter. For two-way "sniffing" of the MDB bus you'll need a combination of both devices.
A direct connection from the PC's RS-232 port to MDB will not work because of strict MDB timing (the delay between a VMC command and the peripheral's response must not exceed 5 ms; delays between POLL requests are generally 50-300 ms). By "work" I mean functioning reliably enough for commercial purposes.
Related
Please feel free to slap me and send a link if this question has already been answered; I just couldn't find it. I did search though.
I've been troubleshooting communication with a serial device. In looking over lots of documentation, I now understand what the settings for "baud rate," "data bits," "stop bit," and "parity" mean. But what I can't seem to understand is who (sender or receiver) determines these settings.
Say I have a serial device plugged into my computer. In my code, I open a connection to the serial port and specify something like 9600,8,E,1. When I specify these settings, do they get sent to the sending device itself, so that it knows how to send the data to my receiver? Or is it more common for a sender to expect a receiver to comply with strict settings?
The issue I'm having is that I attempted to use "Even" parity, and that resulted in tons of irregular transfer errors. When I use "Odd" parity, however, those errors go away. There is also a USB-to-serial adapter involved in my setup. There aren't any transfer errors with Even or Odd parity without the adapter in the middle. So I'm having a hard time understanding whether the device itself doesn't support sending with Even parity, or whether the adapter is the thing causing trouble, etc.
Thanks.
When I specify these settings, do they get sent to the sending device itself, so that it knows how to send the data to my receiver?
No.
To expand on the comment by Hans Passant, both sides of the serial port have to agree on the settings, otherwise they won't talk to each other. If they don't agree, you will get gibberish data on either side as the hardware will read the data at an incorrect time. The settings are normally documented in the manual for the device that you are attempting to communicate with. For example, to communicate with a Cisco router, you will generally use the following settings:
Bits per sec : 9600
Data bits : 8
Parity : none
Stop bits : 1
Flow control : none
When you set up the serial port on your side, you must use these same settings; there is no hardware-level handshake between the two devices that negotiates the speed at which they will communicate.
Sometimes the serial port settings are given in a shorthand form like the following:
9600,8,N,1
which is just shorthand for the settings quoted above (9600 baud, 8 data bits, no parity, 1 stop bit).
In my experience, most devices default to 9600,8,N,1; the next most common serial setting is 115200,8,N,1.
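For illustration, opening a port with the Cisco-style settings quoted above could look like this in Python with pyserial (the port name is a placeholder; "flow control: none" just means leaving both software and hardware flow control disabled):

```python
# Opening a serial port with the settings from the device's manual.
# "COM1" is a placeholder; on Linux this would be something like "/dev/ttyS0".
import serial

ser = serial.Serial(
    port="COM1",
    baudrate=9600,                 # Bits per sec : 9600
    bytesize=serial.EIGHTBITS,     # Data bits    : 8
    parity=serial.PARITY_NONE,     # Parity       : none
    stopbits=serial.STOPBITS_ONE,  # Stop bits    : 1
    xonxoff=False,                 # Flow control : none (no software flow control)
    rtscts=False,                  #                     (no hardware flow control)
    timeout=1,
)
print(ser.readline())              # nothing is negotiated - both ends simply
                                   # have to be configured identically
```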
I've been playing with the MSDOS API in assembler for some time and I'm trying to build an application to read/write from the serial port. I'm currently using VMware Workstation 11 + VSPE (http://www.eterlogic.com/Products.VSPE.html) to emulate the serial port communication.
One thing I noticed is that if I send, say, "asdfgh" into the serial port and then read it in MSDOS (using interrupt 21h function 03h, but I also tried interrupt 14h function 02h), it only returns the last character: "h".
According to some documentation I read, if an application sends data faster than I can process it, characters will be lost. That means either there is another way to make MSDOS buffer incoming bytes (controlling the flow), or I have to write a driver that does this (or maybe a TSR program that manages it, I don't know).
So the question is, do I have to write a driver or is there another way to do this?
I managed to write a program that turns on the 16550 FIFO buffer, and now it works using the DOS API.
So it was just the FIFO buffer.
No need to slow down the baud rate or anything like that. Nonetheless, if one happens to have an 8250, it's probably a choice between doing that and implementing a communications driver on top of it.
Also, this helped a lot:
https://en.wikibooks.org/wiki/Serial_Programming/8250_UART_Programming#UART_Registers
I am working on a project which requires data to be sent FROM the PC TO the FPGA, which processes the data and sends it BACK TO the PC.
The board I am using is the Atlys™ Spartan-6 FPGA Development Board.
The data is to be sent 1 byte at a time, because 1 byte is processed at each rising edge of the clock.
Could you please suggest ways of sending data to the FPGA?
Thanks
Pick some method of communication that you have access to IP (intellectual property) cores for. For example, if you can readily access a UDP/IP core for your FPGA, then use that. If you have to develop the HDL yourself, serial protocols (UART, I2C, etc) will be simpler blocks to write. In general, HDL takes longer to develop, debug, and test.
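If you go the UART route, the PC side can be a few lines; here is a sketch in Python with pyserial, where the port name and baud rate are placeholders and the FPGA side is assumed to have a matching UART core:

```python
# Minimal PC-side sketch for the UART option: send one byte, read one byte back.
# Assumes the FPGA's UART is wired to a USB-serial adapter visible as "COM4".
import serial

ser = serial.Serial("COM4", 115200, timeout=1)   # placeholder port and baud rate

for value in range(256):
    ser.write(bytes([value]))       # one byte per write, as the question requires
    result = ser.read(1)            # processed byte coming back from the FPGA
    if result:
        print(f"sent 0x{value:02X}, got 0x{result[0]:02X}")
    else:
        print(f"sent 0x{value:02X}, no reply (timeout)")
```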
UDP has some advantage because you can use tools like Wireshark to capture packets on the PC (once you get past the initial hurdle of actually getting packets to/from the FPGA). Plus, many people are familiar with UDP in various programming languages (C, C++).
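The PC side of the UDP option is similarly small. Below is a sketch using Python's standard socket module; the FPGA's IP address and UDP port are placeholders for whatever you configure in the UDP/IP core:

```python
# Minimal PC-side sketch for the UDP option: send a payload byte, wait for the
# processed byte to come back. 192.168.1.10:5000 is a placeholder address for
# whatever you configure in the FPGA's UDP/IP core.
import socket

FPGA_ADDR = ("192.168.1.10", 5000)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 5000))        # listen on the same port for the reply
sock.settimeout(1.0)

sock.sendto(b"\x2a", FPGA_ADDR)     # one byte of data for the FPGA to process
try:
    data, addr = sock.recvfrom(1500)
    print(f"got {data!r} back from {addr}")
except socket.timeout:
    print("no reply - time to get out Wireshark and the logic analyzer")
```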
In any case, you'll probably spend time with an oscilloscope and logic analyzer checking out signal levels and timing when data is sent to/from the FPGA.
I currently have an embedded device connected to a PC through a serial port. I am having trouble receiving data on the PC. When I use my PCI serial port card I am able to receive data right away (no delays). When I use my USB-to-serial plug or the motherboard's built-in serial port, I have to delay reading data (40 ms for 32-byte packets).
The only difference I can find between the hardware is the UART. The PCI card uses a 16650 and the plug/motherboard uses a standard 16550A. The PCI card is set to interrupt at 28 bytes and the plug is set to interrupt at 14 bytes.
I am connected at 56700 Baud (if this helps).
The delay becomes the majority of the duty cycle and really increases transfer time (a 10-minute transfer becomes a 1-hour transfer).
Does anyone have an explanation for why I have to use a delay with the plug/motherboard? Can anyone suggest a possible solution to minimizing or removing this delay?
Linux has an ASYNC_LOW_LATENCY flag for the serial driver that may help. Whatever driver you're using may have something similar.
However, latency shouldn't make a difference on a bulk transfer. It should add 40 ms at the very start of the transfer and that's it, which is why drivers don't worry about it in the first place. I would recommend refactoring your transfer protocol to use a sliding window protocol, with a window size of around 100 packets, if you are doing 32-byte packets at that baud rate and latency. In other words, you only want to stop transmitting if you haven't received an ACK for the packet you sent 100 packets ago.
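A sketch of the sliding-window idea is below (Python with pyserial); the one-byte sequence numbers, the one-byte cumulative ACKs and the port name are assumptions for illustration, and framing, CRC and retransmission are left out:

```python
# Sketch of a sliding-window sender: keep up to WINDOW unacknowledged packets
# in flight so per-packet latency no longer dominates the transfer time.
# The sequence numbers and cumulative one-byte ACKs are illustrative
# assumptions, not part of the original post.
import serial

WINDOW = 100        # packets allowed in flight before waiting for an ACK
PACKET_SIZE = 32

def send_with_window(ser, payload):
    packets = [payload[i:i + PACKET_SIZE] for i in range(0, len(payload), PACKET_SIZE)]
    next_to_send = 0
    last_acked = -1     # index of the newest packet known to be received

    while last_acked < len(packets) - 1:
        # Transmit as long as the window is not full.
        while next_to_send < len(packets) and next_to_send - last_acked <= WINDOW:
            seq = next_to_send & 0xFF
            ser.write(bytes([seq]) + packets[next_to_send])
            next_to_send += 1

        # Read one cumulative ACK byte; it carries the low 8 bits of the index
        # of the last packet received, which is unambiguous while WINDOW < 256.
        ack = ser.read(1)                   # blocks for up to ser.timeout
        if ack:
            last_acked += (ack[0] - (last_acked & 0xFF)) & 0xFF

if __name__ == "__main__":
    port = serial.Serial("COM3", 57600, timeout=1)   # placeholder port name
    send_with_window(port, b"\x00" * 65536)
```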
You'll probably find that different USB-Serial converters produce different results. We've found that the FTDI ones work well for talking with embedded devices. Some converters seem to buffer the data for a long time and/or fragment it.
I've never seen a problem with a motherboard connection - not sure what is going on there! Can you change the interrupt point for the motherboard serial port?
I have a serial-to-USB converter. When I hook it up to my breakout box and create a loopback, I am able to send/receive at close to 1 Mbps without problems. The serial port sends binary data that may be translated into ASCII data.
Using .NET, I set my software to fire an event on every byte (ReceivedBytesThreshold=1), though that doesn't mean it will.
I am designing software around an existing hardware product. I have full control of the communication protocol but I'm not sure how to facilitate device detection.
A device could have a range of possible configurations (i.e. baud rate, data bits, parity bits, stop bits) that must be detected at runtime. What is the easiest, most reliable way for the software to figure out what configuration it is using? Again, I have full control of the communication protocol so I can define any mechanism I wish.
Is this a full-duplex or half-duplex device? Can you control request-to-send and monitor clear-to-send on both ends of the serial line? Is the serial line point-to-point (like RS-232) or multi-drop (like RS-485)? It will make an (albeit small) difference if you are going to interfere with other already-connected devices while negotiating with a newly connected one.
Think of the handshake process like a modem negotiating a link-layer protocol: it uses a standard set of messages to describe the kind of communication it would like to have and waits for an "ack" from the other end. In your case, I recommend a standard "let's talk" message that your head end sends across the range of bit rates, waiting for an ack from the device.
I also recommend reducing the number of configuration options for the device. Forget about variable data bits, parity bits, and stop bits. The serial communications world is no longer as unstable as it was back in the 70's. Just use 8 data bits, no parity, one stop bit and vary the bit rate. A CRC on the end of messages provides plenty of error-checking.
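A sketch of such a probe loop on the head-end side, in Python with pyserial; the "let's talk" message contents, the expected ack byte and the port name are made-up illustrations:

```python
# Sketch: probe a fixed list of bit rates with a "let's talk" message until the
# device answers with an ack. The message contents, the ack value and the port
# name are illustrative assumptions, not part of the answer above.
import serial

CANDIDATE_RATES = [115200, 57600, 38400, 19200, 9600]
HELLO = b"\x55LETS_TALK\r\n"   # 0x55 has an alternating bit pattern, easy on auto-baud logic
ACK = b"\x06"                  # placeholder ack byte

def detect_bit_rate(port_name="COM3"):
    for rate in CANDIDATE_RATES:
        with serial.Serial(port_name, rate, bytesize=serial.EIGHTBITS,
                           parity=serial.PARITY_NONE, stopbits=serial.STOPBITS_ONE,
                           timeout=0.2) as ser:
            ser.reset_input_buffer()   # discard garbage from wrong-rate attempts
            ser.write(HELLO)
            if ser.read(1) == ACK:
                return rate            # both ends now agree on 8N1 at this rate
    return None

if __name__ == "__main__":":
    print("device speaks at", detect_bit_rate())
```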