Background:
I use UART for communication between two devices running different OSs, one Linux and one Windows. The Windows application acts as the master, sending commands; the Linux application waits for each command, performs the corresponding operation, and sends back a response.
Windows app: sends a command and waits for the response. If Linux has not responded within some timeout (say, 10 seconds), it stops waiting and notifies the user of a timeout error (and the user may then send the next command).
Linux app: waits for a command, processes it (for 5 seconds max, say), and then sends the response to Windows.
Problem: if, due to some error or issue, Linux responds only after the Windows app's timeout (say, at 15 seconds), the Windows application has already abandoned that command as timed out and sent the next one. The response to the first command is then treated as the response to the current one, which is incorrect.
Solution: I thought of prepending the command byte as the first byte of the Linux response, so the Windows application can verify whether the response belongs to the current command and ignore it if not. But this has a limitation: if two consecutive commands are the same, a stale response will still match.
What other logic can I implement to solve this?
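One common fix is to tag every command with a monotonically increasing sequence number and require the responder to echo it back: a late reply then carries a stale sequence number even when two consecutive commands are identical. A minimal sketch in Python (the frame layout `[seq][cmd][len][payload]` is hypothetical, not part of the question):

```python
import itertools

_seq = itertools.count(1)

def build_command(cmd: int, payload: bytes = b"") -> bytes:
    """Frame a command as [seq][cmd][len][payload] (hypothetical layout)."""
    seq = next(_seq) & 0xFF          # wraps at 255; fine if <255 commands in flight
    return bytes([seq, cmd, len(payload)]) + payload

def response_matches(command_frame: bytes, response_frame: bytes) -> bool:
    """A response is valid only if it echoes the command's sequence number."""
    return command_frame[0] == response_frame[0]

# Two identical commands still get different sequence numbers,
# so a late response to the first one is rejected for the second.
first = build_command(0x10)
second = build_command(0x10)
stale_reply = bytes([first[0], 0x10, 0x00])   # Linux echoed the 1st command's seq
print(response_matches(second, stale_reply))  # False: stale reply is ignored
print(response_matches(first, stale_reply))   # True
```

The sequence byte costs one byte per frame and removes the ambiguity entirely; the Windows side simply discards any response whose sequence number does not match the command it is currently waiting on.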
I want to create an application that works as a "man in the middle" to analyze a protocol (ISO 8583) sent over TCP/IP.
A client connects to the application and sends some binary data (average length 500 bytes).
The application receives the message and then sends it to the real server.
The server sends its response back to the application.
The application sends the response to the original client.
Some context: The main idea is to get the raw binary data and convert it to a string for parsing and decoding the protocol.
There are two parts to the project:
The gateway part (man in the middle).
Parsing and decoding of the data.
I am spending too much time on the first part. So, if there is a mock-up I can use to get started, that would be nice. It doesn't have to be with Indy, but I prefer C++Builder.
This is my first time with Indy, and although I have experience working with TCP/IP, I have always used it as something that is already there, never at the low-level implementation.
I am testing with Hercules, and so far I can see the connections.
When I connect to a server in Hercules, I can see that my application is connecting. But when my application disconnects, I don't see a message saying so, which makes me think my app is not disconnecting correctly (although I can reconnect as many times as I want).
I am sending data to my application using Hercules (a "Hello" string). It apparently works, but I am having a hard time getting at the actual data.
The documentation sometimes leads me to dead links, and there are either no samples or they are only available for Delphi.
I am working with the following:
Windows 11 Home
Embarcadero® C++Builder 10.4 Version 27.0.40680.4203
Delphi and C++ Builder 10.4 Update 2
Indy 10.6.2.0
Have a look at Indy's TIdMappedPortTCP component. It is a TCP server that acts as a MITM proxy between clients and a specified server, giving you events when either party sends raw data.
Use the Bindings collection, or DefaultPort property, to specify the local IP/Port(s) that you want the server to listen for clients on.
Use the MappedHost and MappedPort properties to specify the remote server that you want TIdMappedPortTCP to connect to.
The OnBeforeConnect event is fired when a client has connected to TIdMappedPortTCP.
The OnConnect event is fired just before TIdMappedPortTCP attempts to connect to the remote server.
The OnOutboundClientConnect event is fired when TIdMappedPortTCP has connected to the remote server.
The OnExecute event is fired when a client sends bytes to TIdMappedPortTCP, before the bytes are sent on to the remote server. The event can alter the bytes, if desired.
The OnOutboundData event is fired when the remote server sends bytes to TIdMappedPortTCP, before the bytes are sent on to the client. The event can alter the bytes, if desired.
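TIdMappedPortTCP essentially automates the relay loop described above. For readers who want to see the mechanics without Indy, here is a minimal sketch of the same pattern in plain Python; the port numbers and the upper-casing "tamper" step are purely illustrative (they play the role of the OnExecute hook), not anything from Indy:

```python
import socket
import threading
import time

def relay(src, dst, transform=None):
    """Copy bytes from src to dst until EOF, optionally altering them
    (this mirrors what OnExecute / OnOutboundData let you do)."""
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(transform(data) if transform else data)
    try:
        dst.shutdown(socket.SHUT_WR)
    except OSError:
        pass  # peer already closed

def run_echo_server(port):
    # Stands in for the "real server" behind the proxy.
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)
    conn, _ = srv.accept()
    while True:
        data = conn.recv(4096)
        if not data:
            break
        conn.sendall(data)
    conn.close()
    srv.close()

def run_proxy(listen_port, server_host, server_port):
    # Accept one client, connect upstream, relay in both directions.
    lsock = socket.socket()
    lsock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    lsock.bind(("127.0.0.1", listen_port))
    lsock.listen(1)
    client, _ = lsock.accept()
    upstream = socket.create_connection((server_host, server_port))
    threading.Thread(target=relay, args=(upstream, client), daemon=True).start()
    relay(client, upstream, transform=bytes.upper)  # tamper with client->server data

threading.Thread(target=run_echo_server, args=(9901,), daemon=True).start()
threading.Thread(target=run_proxy, args=(9900, "127.0.0.1", 9901), daemon=True).start()
time.sleep(0.2)  # give both listeners time to start

with socket.create_connection(("127.0.0.1", 9900)) as c:
    c.sendall(b"hello")
    received = c.recv(4096)
print(received)  # the proxy upper-cased the data on its way to the echo server
```

The two `relay` calls correspond to the two data events: client-to-server traffic (OnExecute) and server-to-client traffic (OnOutboundData). TIdMappedPortTCP handles the threading, multiple clients, and lifecycle events for you.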
An instance of a BizTalk send pipeline has started to run continuously. On 09/12/2021 an attempt was made to send a file via SFTP, which retried several times but ultimately failed due to a network issue. The error from the event logs is:
The adapter failed to transmit message going to send port "Deliver Outgoing - SFTP" with URL "sftp://xxx.xxxxxx.co.nz:22/To_****/%SourceFileName%". It will be retransmitted after the retry interval specified for this Send Port. Details:"WinSCP.SessionRemoteException: Network error: Software caused connection abort.
For some reason BizTalk made another send attempt at 1:49 pm on 10/12/2021, which succeeded, as confirmed by the administrator of the SFTP site. Despite this, BizTalk continued making intermittent send attempts, and the pipeline instance is still running. The same file has been sent 4 times to the SFTP server.
The pipeline instance in theory should have suspended at 9:47 pm on 09/12/2021. I have not been able to confirm definitively whether anybody resumed it, but it seems unlikely at this stage. In any case, after sending successfully the pipeline instance should have terminated and should not be re-executing intermittently.
Does anybody know what could account for this behaviour? This is occurring on BTS2020 with CU2 applied.
I've sent messages over SFTP where WinSCP's interpretation of the date-modified attribute doesn't work with a specific type of SFTP server.
In the WinSCP GUI a dialogue box appears and you can disregard this error, but that option isn't available through BizTalk. This error appears when a file with the same filename already exists on the server and is supposed to be overwritten.
My solution was to create a pipeline component that removed %SourceFileName% on the server; the pipeline component (just like the WinSCP GUI) can disregard the modified date.
We have an upcoming deploy for a system that processes a lot of messages through BizTalk. Since those messages are cumulative updates they need to be queued up during the deployment outage then processed in order when the deploy is finished. Since there may be a large number of them it’s difficult to do this manually.
One possible solution is to leave the send port stopped and let the messages suspend. We can then resume them in order when the deployment is completed.
Is it possible to run a SQL script (or a tool) against the BizTalk messagebox database that will resume suspended messages, for a specific port, in order of receipt?
If you have an ordered requirement (you either do or don't), then the Send Port should be marked for Ordered Delivery.
If so, then when you Start a Stopped Send Port, the messages will be processed in the same order they were submitted.
If you stop the port (but leave it subscribed) and start it again afterwards, it should resume the messages itself; if not, it is simple enough to go into the Administration Console and batch-resume them.
However, if the response messages of the send port are also subscribed to by running Orchestrations, you will not be able to un-deploy the Orchestrations until they have all completed, so stopping the send port would not work in this scenario.
One option, if the initiating port is a one-way receive, is to stop the receive location and let everything complete. You can then stop the application, redeploy and restart it, and the send port will pick up all the waiting messages.
If that is not possible, you may want to look at a side-by-side deployment, where you increment the version numbers of all the assemblies in the solution so both versions can be deployed at the same time; you can then let the old version finish running while the new version processes any new messages.
The better option is to send the messages to MSMQ; usually no extra coding is required for this. You can route messages to MSMQ using the MSMQ adapter and then, after deployment, receive them in order, since the MSMQ adapter supports ordered receive. Just make sure you do a small test in your QA environment before doing it in production.
I'm trying to maintain a persistent connection between a client and a remote server using Qt. My server side is fine. I'm writing the client side in Qt, using QNetworkAccessManager and requesting from the server with the get() method (which takes a QNetworkRequest). I am able to send requests and receive responses.
But after some time (approx. 2 minutes) the client informs the server that the connection has been closed, by automatically posting a request. I think QNetworkAccessManager is applying a timeout to the connection. I want to maintain a persistent connection between the two ends.
Is my approach correct? If not, can someone point me in the right direction?
This question is interesting, so let's do some research. I set up an nginx server with a big keep-alive timeout and wrote the simplest Qt application:
QApplication a(argc, argv);
QNetworkAccessManager manager;
QNetworkRequest r(QUrl("http://myserver/"));
manager.get(r);
return a.exec();
Also I used the following command (in Linux console) to monitor connections and check if the problem reproduces at all:
watch -n 1 netstat -n -A inet
I took quick look at Qt sources and found that it uses QTcpSocket and closes it in QHttpNetworkConnectionChannel::close. So I opened the debugger console (Window → Views → Debugger log in Qt Creator) and added a breakpoint while the process was paused:
bp QAbstractSocket::close
Note: this is for cdb (MS debugger), other debuggers require other commands. Another note: I use Qt with debug info, and this approach may not work without it.
After two minutes of waiting I got the backtrace of the close() call!
QAbstractSocket::close qabstractsocket.cpp 2587 0x13fe12600
QHttpNetworkConnectionPrivate::~QHttpNetworkConnectionPrivate qhttpnetworkconnection.cpp 110 0x13fe368c4
QHttpNetworkConnectionPrivate::`scalar deleting destructor' untitled 0x13fe3db27
QScopedPointerDeleter<QObjectData>::cleanup qscopedpointer.h 62 0x140356759
QScopedPointer<QObjectData,QScopedPointerDeleter<QObjectData>>::~QScopedPointer<QObjectData,QScopedPointerDeleter<QObjectData>> qscopedpointer.h 99 0x140355700
QObject::~QObject qobject.cpp 863 0x14034b04f
QHttpNetworkConnection::~QHttpNetworkConnection qhttpnetworkconnection.cpp 1148 0x13fe35fa2
QNetworkAccessCachedHttpConnection::~QNetworkAccessCachedHttpConnection untitled 0x13fe1e644
QNetworkAccessCachedHttpConnection::`scalar deleting destructor' untitled 0x13fe1e6e7
QNetworkAccessCachedHttpConnection::dispose qhttpthreaddelegate.cpp 170 0x13fe1e89e
QNetworkAccessCache::timerEvent qnetworkaccesscache.cpp 233 0x13fd99d07
(next lines are not interesting)
The class responsible for this action is QNetworkAccessCache. It sets up timers and makes sure that its objects are deleted when QNetworkAccessCache::Node::timestamp is in the past. And these objects are HTTP connections, FTP connections, and credentials.
Next, what is timestamp? When the object is released, its timestamp is calculated in the following way:
node->timestamp = QDateTime::currentDateTime().addSecs(ExpiryTime);
And ExpiryTime = 120 is hardcoded.
All involved classes are private, and I found no way to prevent this from happening. So it's far simpler to send keep-alive requests every minute (at least now you know that one minute is safe enough), as the alternative is to rewrite Qt code and compile a custom version.
I'd say that, by definition, a connection with a 2-minute timeout qualifies as persistent. If it weren't persistent, you'd have to reconnect on every request, and 2 minutes is quite generous compared to some other software out there. But it is set to eventually time out after a period of inactivity, and that's a good thing which should not be surprising. Some software allows the timeout period to be changed, but from Pavel's investigation it appears that in Qt's case the timeout is hardcoded.
Luckily the solution is simple: just rig a timer to send a heartbeat (a dummy request; not to be confused with network-layer heartbeats) every minute or so to keep the connection alive. Deactivate the timer before you use your connection, and restart it after you are done with the connection.
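In Qt this would be a QTimer wired to a dummy get(). The pattern itself is generic; here is a sketch in plain Python using threading.Timer, where the KeepAlive class, the interval, and the ping callback are all illustrative (in practice you would use roughly a 60-second interval, comfortably under the 120-second expiry found above):

```python
import threading
import time

class KeepAlive:
    """Restartable heartbeat: fires `ping` every `interval` seconds while idle.
    Call pause() before doing real work on the connection, resume() after."""
    def __init__(self, interval, ping):
        self.interval = interval
        self.ping = ping
        self._timer = None

    def _fire(self):
        self.ping()      # send the dummy request to keep the connection warm
        self.resume()    # re-arm for the next idle period

    def resume(self):
        self._timer = threading.Timer(self.interval, self._fire)
        self._timer.daemon = True
        self._timer.start()

    def pause(self):
        if self._timer:
            self._timer.cancel()

# Demo with a short interval (0.05 s) so it finishes quickly;
# the "ping" just records that a heartbeat fired.
pings = []
ka = KeepAlive(0.05, lambda: pings.append(1))
ka.resume()
time.sleep(0.18)   # stay idle long enough for several heartbeats
ka.pause()
print(len(pings) >= 2)
```

The pause/resume discipline matters: without it a heartbeat could be issued mid-request, and with it the timer only ticks while the connection is genuinely idle.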
I currently have a Java applet running on my web page that communicates with a display pole via COM1. However, since the Java update I can no longer run self-signed Java applets, and I figure it would be easier to send an AJAX request back to the server and have the server send a response to a TCP port on the computer; the computer would then need a TCP-to-COM virtual adapter. How do I install a virtual adapter that goes from a TCP port to COM1?
I've looked into com0com, and that is just confusing as hell to me; I don't see how to connect any of its ports to COM1. I've tried tcp2com, but it doesn't seem to install its service on Windows 7 x64. I've tried com2tcp, and the interface seems like it WOULD work (I haven't tested it), but I don't want an app running on the desktop; it needs to be a service that runs in the background.
So to summarize how it would work:
Web page on comp1 sends AJAX request to server
Server sends text response to comp1 on port 999
comp1 has virtual COM port listening on port 999, sends data to COM1
pole displays data
EDIT: I'm using Win 7 x64 and tcp2com doesn't work as a service. I tried using srvany, but I get an error stating that the application started and then stopped. If I use PowerShell and pass tcp2com as an argument, it doesn't quit, but it also doesn't run. So I nixed the whole 'service' deal and ran the command: powershell -windowstyle hidden "tcp2com --test tcp/999 com1", and it works... sort of. The characters that get sent are all garbled. I can run "echo WTF > COM1" on another computer that has COM2TCP (different vendor) and it comes up as a single block on the POS display pole. However, if I use COM2TCP on both the server and client machines, everything works fine... but that's only a trial version and it costs several hundred dollars! On another note, is there a way to send the raw text over IP without having to use another virtual COM > IP adapter on the other computer? Sort of like how curl works, but different...?
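On the last point: anything that can open a plain TCP socket can push raw bytes at the listening port, with no second virtual-COM adapter on the sending side. The listener half of such a bridge is small; here is a sketch in Python where the serial handle is stubbed with an in-memory buffer so it runs anywhere (in real use you would open COM1 instead, e.g. with the pyserial package; the port number 9990 is just for the demo):

```python
import io
import socket
import threading
import time

def tcp_to_com(port, com):
    """Listen on `port` for one connection and copy every received
    byte straight to the COM handle (raw, no interpretation)."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)
    conn, _ = srv.accept()
    while True:
        data = conn.recv(1024)
        if not data:
            break
        com.write(data)   # with pyserial this would be a serial.Serial("COM1") handle
    conn.close()
    srv.close()

com1 = io.BytesIO()   # stand-in for the real serial port
t = threading.Thread(target=tcp_to_com, args=(9990, com1))
t.start()
time.sleep(0.1)       # let the listener come up

# Any TCP client works as the sender; control bytes pass through untouched.
with socket.create_connection(("127.0.0.1", 9990)) as c:
    c.sendall(b"\x14Test This!")
t.join()
print(com1.getvalue())
```

Because the bytes are forwarded verbatim, control characters such as \x14 reach the device as single bytes rather than as the literal text "\14", which is the kind of mangling described above.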
After a somewhat exhaustive search, I came across a program called 'piracom'. It's a very simple app that lets you specify port settings for the express purpose of connecting a serial port to a listening port over the network. So this is IP > serial. For serial > IP I used HW-VSP3-Single, since even the piracom website says it's compatible. I've tested it and it works!
I just put a shortcut to piracom in the startup folder of my user account. The app runs off a .ini file that it updates every time you make a change, so if you run the server and hide it, on the next reboot of the PC it'll start up hidden with all prior settings. Easy.
Now it's a matter of installing HW-VSP3 on the server and writing a method in the Rails app that writes to the virtual COM port. The only issue I can see right now is that writing echo \14Test This! > COM3 actually prints the \14; if I do that in my Java applet, it sends the "go to beginning" signal.
Addendum 1: The \14 problem was fixed by using the serialport gem for RoR. I created a controller method that returns head :no_content and then sends the data to the COM port. Calls to this method are made via jQuery's $.ajax, using the HEAD HTTP method. I did, however, have to add the GET verb in the Rails routes, because the HEAD option isn't supported for some gimpy reason.
Addendum 2: Some garbage data was being sent to the display pole at the end of the string; it turns out I needed to turn off the "NVT" option in HW-VSP3. Also keep in mind that firewalls need to be modified to allow the communication.