Modbus TCP simulator for Windows/Linux

I am looking for a good Modbus-over-TCP simulator with a GUI to try on two separate PCs (one as master, the other as slave). I found only SimplyModbus, but it acts only as a master. Any recommendations?

In case future users land on this page looking for something cross-platform and open source: I faced a similar problem as the OP a while back.
I ended up creating a Java-based GUI for troubleshooting Modbus RTU and TCP, which is now known as ModbusMechanic.
It has both a master and a slave simulator, a TCP->RTU gateway, an RTU node scanner, an RTU bus sniffer, and can act as a bridge between Ethernet/IP and Modbus.
https://modbusmechanic.scifidryer.com

http://www.modbustools.com/download.html
They have both a Modbus Poll (master) and a Modbus Slave simulator, free to try.

You can try the Modbus protocol tester (master), which is a free download from here: https://www.rtipsonline.com/WebPages/download.html
 
For a slave, you can find a Modbus RTU and TCP slave implementation in 'C' source code form at https://www.rtipsonline.com
Since the implementation is in ANSI 'C', you can port it easily to any controller.
The slave supports the following 11 function codes:
Read Coils (0x01),
Read Discrete Inputs (0x02),
Read Holding Registers (0x03),
Read Input Registers (0x04),
Write Single Coil (0x05),
Write Multiple Coils (0x0F),
Write Multiple Registers (0x10),
Write Single Register (0x06),
Read Exception Status (0x07),
Report Slave ID (0x11),
Read/Write Multiple Registers (0x17)
They also provide porting services to get the library running on your board at minimal cost.
Using the Modbus protocol tester (master) and the Win32 (C source code) Modbus slave, you can establish client/server Modbus communication between two different PCs.

There's a free Modbus RTU/TCP slave simulator tool called Unslave.
Instead of a GUI it supports simple JSON-based configuration, where you set up slaves like this:
"slaves": {
  "1": {
    "isOnline": true,
    "registers": {
      "HR0": 1,
      "HR10": "0x0A",
      "C0": true,
      "C999": { "exception": 3 }
    }
  }
}
It also shows logs of all communications on the link to simplify debugging:
2017/06/06 17:21:54.310 - TRACE: Byte received: 1. Total: 1
2017/06/06 17:21:54.310 - TRACE: Byte received: 3. Total: 2
2017/06/06 17:21:54.310 - TRACE: Byte received: 0. Total: 3
2017/06/06 17:21:54.326 - TRACE: Byte received: 60. Total: 4
2017/06/06 17:21:54.326 - TRACE: Byte received: 0. Total: 5
2017/06/06 17:21:54.326 - TRACE: Byte received: 1. Total: 6
2017/06/06 17:21:54.342 - TRACE: Byte received: 68. Total: 7
2017/06/06 17:21:54.358 - TRACE: Byte received: 6. Total: 8
2017/06/06 17:21:54.363 - INFO: Modbus frame received: [1 3 0 60 0 1 68 6]
2017/06/06 17:21:54.363 - INFO: Reading value: 1HR60 = 14119
2017/06/06 17:21:54.363 - INFO: Modbus frame sent: [1 3 2 55 39 238 110]

Related

LE Set Data Length returns Unsupported Feature or Parameter Value

I set up a GATT server on a Samsung platform (using BlueZ 5.47). Upon client connect I want to configure the data length (this sets the link-layer packet length),
but it returns Unsupported Feature or Parameter Value.
The same command works when I set up a client that connects to a remote GATT server.
* According to the Bluetooth core spec 4.2 I should be able to do that:
"Both the master and slave can initiate this procedure at any time after entering the Connection State".
* I tried entering the default values of TX octets 27 and TX time 328; this does not work (which probably means this isn't a parameter-value issue).
Does anyone know why this is not working?
* Just to note: I want to set this in order to increase throughput; currently only the MTU and connection parameters are set.
< HCI Command: LE Set Data Length (0x08|0x0022) plen 6 #31973 [hci0] 5281.478803
Handle: 1894
TX octets: 251
TX time: 2120
HCI Event: Command Complete (0x0e) plen 6 #31974 [hci0] 5281.479176
LE Set Data Length (0x08|0x0022) ncmd 1
Status: Unsupported Feature or Parameter Value (0x11)
Handle: 1894
Turns out I had connected to an iPhone 6s, which does not support Bluetooth 4.2 (it supports Bluetooth 4.1). That was the reason for "Unsupported Feature".
This means both master and slave must support Bluetooth 4.2, since the Data Length Extension is a Bluetooth 4.2 feature.

DPDK MLX5 driver - QP creation failure

I am developing a DPDK program using a Mellanox ConnectX-5 100G.
My program starts N workers (one per core), and each worker deals with its own dedicated TX and RX queue, therefore I need to setup N TX and N RX queues.
I am using flow director and rte_flow APIs to send ingress traffic to the different queues.
For each RX queue I create a mbuf pool with:
n = 262144
cache size = 512
priv_size = 0
data_room_size = RTE_MBUF_DEFAULT_BUF_SIZE
For N<=4 everything works fine, but with N=8, rte_eth_dev_start returns:
Unknown error -12
and the following log message:
net_mlx5: port 0 Tx queue 0 QP creation failure
net_mlx5: port 0 Tx queue allocation failed: Cannot allocate memory
I tried:
incrementing the number of hugepages (up to 64x1G),
changing the pool size in different ways,
both DPDK 18.05 and 18.11,
changing the number of TX/RX descriptors from 32768 to 16384,
but with no success.
You can see my port_init function here (for DPDK 18.11).
Thanks for your help!
The issue is related to the TX inlining feature of the MLX5 driver, which is only enabled when the number of queues is >=8.
TX inlining uses DMA to send the packet directly to the host memory buffer.
With TX inlining, there are some checks that fail in the underlying verbs library (which is called from DPDK during QP Creation) if a large number of descriptors is used. So a workaround is to use fewer descriptors.
I was using 32768 descriptors, since the advertised value in dev_info.rx_desc_lim.nb_max is higher.
The issue is solved using 1024 descriptors.

Send bits over physical ethernet cable without any error correction like FCS or CRC

I would like to send some raw bits over an Ethernet cable between two computers. Errors that occur during transmission are normally caught by the Ethernet Frame Check Sequence (FCS, a CRC: cyclic redundancy check) and by further checks in the upper layers like TCP.
But I do not want any such error handling to be applied: I want to see the exact bits received, including the errors that occurred in transmission. I have seen some articles (for example http://hacked10bits.blogspot.in/2011/12/sending-raw-ethernet-frames-in-6-easy.html) on sending raw Ethernet frames, but I think those frames also undergo FCS/CRC checks. Is it possible to send data without any such error checking? Thanks.
Edit 1
I am connecting two computers directly end to end using an Ethernet cable (no switches or router in between).
The ethernet cable is "CAT 5E", labelled as "B Network Patch Cable CAT 5E 24AWG 4PR-ETL TIA/EIA-568B"
The output of lspci -v is (nearly same for both the computers):
Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 0c)
Subsystem: Lenovo RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller
Flags: bus master, fast devsel, latency 0, IRQ 28
I/O ports at e000 [size=256]
Memory at f7c04000 (64-bit, non-prefetchable) [size=4K]
Memory at f7c00000 (64-bit, prefetchable) [size=16K]
Capabilities: [40] Power Management version 3
Capabilities: [50] MSI: Enable+ Count=1/1 Maskable- 64bit+
Capabilities: [70] Express Endpoint, MSI 01
Capabilities: [b0] MSI-X: Enable- Count=4 Masked-
Capabilities: [d0] Vital Product Data
Capabilities: [100] Advanced Error Reporting
Capabilities: [140] Virtual Channel
Capabilities: [160] Device Serial Number 01-00-00-00-68-4c-e0-00
Capabilities: [170] Latency Tolerance Reporting
Kernel driver in use: r8169
Kernel modules: r8169
I used the following command to expose the FCS and not drop bad frames:
sudo ethtool -K eth0 rx-fcs on rx-all on
Still, I am not receiving any error/bad frames. I am sending 1000 bits of zeros in each frame and none of the received bits were 1s. Do I need to keep sending a lot of such frames in order to receive a bad one? (The bit error rate is probably very low for a Cat 5e cable.)
Also, can I implement my own LAN protocol with the same current NIC and ethernet cable?
Basically, I want to get as many errors as possible during the transmission and detect all of them.
While it's generally not possible to send Ethernet frames without appending a proper FCS, it is often possible to receive frames that don't have a correct FCS. Many network controllers support this, though it would likely require you to modify the native network device driver.
Many Intel NICs, for example, have a mode setting that causes framing errors, FCS errors, and other sorts of error frames to be discarded; the driver usually turns that feature on. This is generally desirable, because such frames are unlikely to be useful (they are known to be corrupted). However, for troubleshooting purposes, the NIC supports receiving all frames, including error frames; there's usually just no reason to expose that feature to users. After all, who wants to receive known-corrupted frames?
NICs do, however, typically count such frames, and many expose those counts as diagnostic counters. In Linux, you can often see them with ethtool -S <interface>.
For example, on my machine (note rx_crc_errors):
$ ethtool -S eth0
NIC statistics:
rx_packets: 1629186
tx_packets: 138121
rx_bytes: 747886491
tx_bytes: 12198820
rx_broadcast: 0
tx_broadcast: 0
rx_multicast: 0
tx_multicast: 0
rx_errors: 0
tx_errors: 0
tx_dropped: 0
multicast: 0
collisions: 0
rx_length_errors: 0
rx_over_errors: 0
rx_crc_errors: 0
rx_frame_errors: 0
rx_no_buffer_count: 0
rx_missed_errors: 0
tx_aborted_errors: 0
tx_carrier_errors: 0
tx_fifo_errors: 0
tx_heartbeat_errors: 0
tx_window_errors: 0
tx_abort_late_coll: 0
tx_deferred_ok: 0
tx_single_coll_ok: 0
tx_multi_coll_ok: 0
tx_timeout_count: 0
tx_restart_queue: 0
rx_long_length_errors: 0
rx_short_length_errors: 0
rx_align_errors: 0
tx_tcp_seg_good: 269
tx_tcp_seg_failed: 0
rx_flow_control_xon: 0
rx_flow_control_xoff: 0
tx_flow_control_xon: 0
tx_flow_control_xoff: 0
rx_long_byte_count: 747886491
rx_csum_offload_good: 1590047
rx_csum_offload_errors: 0
alloc_rx_buff_failed: 0
tx_smbus: 0
rx_smbus: 0
dropped_smbus
This is not possible. The FCS is mandatory for Ethernet frames (layer 2); it detects errors, but locating/correcting the error bits isn't possible. FEC is used with faster PHYs (layer 1) and isn't optional either.
When switching off FCS checking on a NIC you have to keep in mind that any switch you use also checks for FCS errors and drops ingress frames with bad FCS. Ethernet is explicitly designed not to propagate error frames.
Edit 1 question comments:
With decent cabling, error frames on GbE should be very rare. If you actually want errors(?), use a good length of Cat 3 cable or severely abuse the Cat 5 cable...
An Ethernet NIC speaks Ethernet. If you want your own protocol you'd need to build your own hardware.
Sending a wrong CRC/FCS is not possible with your NIC (Realtek). Your NIC appends the 4 FCS bytes to every packet you send, even for "hand-made" raw packets sent through an AF_PACKET socket.
As far as I know, the only standard NICs that support sending a wrong CRC/FCS are those covered by the following Intel drivers:
e1001, e1000, e100, ixgbe, i40e and igb.

AVR ATMega1284P USART Communication lockup

I am using the USART in synchronous mode to communicate from the host computer to the firmware (which resides in an ATMega1284P). My maximum buffer size on the firmware side is 20. If I send data continuously from the host to the firmware, with some replies from the firmware to the host computer, the communication somehow locks up. I suspect that the UDR register, which is shared by the Transmit Data Buffer (TXB) and the Receive Data Buffer (RXB) for moving data in and out of the firmware, gets locked, which causes the communication to cease. Any suggestions for this issue?
PS:
For transmission from firmware to host, the condition is:
UCSRA & (1 << UDRE) should be TRUE
For reception from host to firmware, the condition is:
UCSRA & (1 << RXC) should be TRUE
I am using hardware interrupt M_USARTx_RX_vect for checking the availability of the serial characters from host.
Update: Firmware - Initial Source : MarlinSerial.cpp : USART Definitions, Marlin_main.cpp : Program Flow
The UDR register is physically present twice at the same address in the AVR address space (a special I/O register mapping). There is no hardware locking between the RX and TX sides of the USART's UDR.
The conditions shown seem OK to me, but I have not checked the AVR datasheet.
Maybe you have a problem while writing/reading your cyclic 20-char buffer? Please show your code (shrunk to the minimum we need to understand it).

Xbee, Arduino & Processing Design Query

I have a Processing sketch that pumps out basic serial commands to an XBee.
Then I have two (soon to be 3, maybe 4) Arduinos with their own XBees that receive the data and do stuff.
The thing is, each Arduino has its own purpose, and therefore its own data packet.
So, to implement this: is there a way to send a message to a particular XBee? I.e., can I assign each XBee an index or channel of some sort, then get the broadcasting XBee to send data to whatever index or channel it needs to?
Or will this need to be implemented in the Arduino software?
I.e., Processing prefixes the data packet with an index/identifier, and each Arduino ignores incoming messages that don't carry its prefix?
Or is there another option entirely :P
Thanks in advance for your advice.
While not a specific answer to your question, with this type of communication some packet error checking would be beneficial. Send the data using a crc error checking algorithm. Packet structure could look something like:
0x7F 0x02 (Address Bytes) (Command Bytes) (CRC bytes) 0x7F 0x03
Where 0x7F is the DLE character used to indicate that either a start byte will follow, an end byte will follow, or a data byte with the value of DLE will follow. This means any DLE character that is part of the address or command should be preceded by a 'stuffed' DLE character. The CRC is calculated over the address and command bytes and used to check the integrity of the received data; the CRC check bytes are included in each packet.
This type of communication prevents packets that went to the wrong destination from being used, as well as packets that contain errors.
To read more on serial framing here is a good place to start: http://eli.thegreenplace.net/2009/08/12/framing-in-serial-communications/.
If I understood correctly, you want to be able to tell which XBee you are sending data to. You can do this by using IP addresses. If you have, for example, three XBees with the IPs:
Xbee1 - 192.168.80.50
Xbee2 - 192.168.80.51
Xbee3 - 192.168.80.52
you can send information between them by simply connecting the XBee that starts the communication to the XBee that receives it. If you want any kind of communication over the wireless network (or Ethernet), you must have an IP assigned to every XBee.
EDIT:
If you have a server on a computer that you have made yourself, for example in Java, you can connect the XBees to it and connect each of them to the computer server. Then you can set up the server to receive and send data to the different XBee clients.
I did something similar to this: Maintaining communication between Arduino and Java program, but I didn't use an XBee; I used the official WiFi shield.
Hope this helped!
-Kad
