I need to use the Wait processor to terminate an InvokeHTTP processor in case of failure. I have tried this - How to use Wait/Notify processors? - and added a DistributedMapCacheServer service used by a PutDistributedMapCache processor (I have set host localhost and port 4557 for both the DistributedMapCacheClientService and the server service), but both of them still throw an exception:
Here's a screenshot of the error message:
Should I change server port? Or configure it another way?
Here is my workflow:
An instance of a BizTalk send pipeline has started to run continuously. On 09/12/2021 an attempt was made to send a file via SFTP, which retried several times but ultimately failed due to a network issue. The error from the event logs is:
The adapter failed to transmit message going to send port "Deliver Outgoing - SFTP" with URL "sftp://xxx.xxxxxx.co.nz:22/To_****/%SourceFileName%". It will be retransmitted after the retry interval specified for this Send Port. Details:"WinSCP.SessionRemoteException: Network error: Software caused connection abort.
For some reason BizTalk made another send attempt at 1:49pm on 10/12/2021, which succeeded, as confirmed by the administrator of the SFTP site. Despite this, BizTalk continued making intermittent send attempts and the pipeline instance is still running. The same file has been sent 4 times to the SFTP server.
The pipeline instance in theory should have suspended at 9:47pm on 09/12/2021. I have not been able to confirm definitively whether anybody resumed it, but it seems unlikely at this stage. In any case, after sending successfully the pipeline instance should have terminated and should not be re-executing intermittently.
Does anybody know what could account for this behaviour? This is occurring on BTS2020 with CU2 applied.
I've sent messages over SFTP where the WinSCP interpretation of the date-modified attribute doesn't work with a specific type of SFTP server.
With the WinSCP GUI a dialogue box appears and you can disregard this error, but this option isn't available with BizTalk's GUI. This error appears when a file with the same filename already exists on the server and is supposed to be overwritten.
My solution was to create a pipeline component that removed %SourceFileName% on the server. The pipeline component (just like the WinSCP GUI) can disregard the modified-date.
I have a setup where I use Flask-MQTT to connect my Python Flask API to a Mosquitto broker. Whenever I run the Flask API with the development server all is well. But whenever I spin it up for production (using uWSGI + nginx), the connection with Mosquitto is made, but every time I try to publish something I get the following error:
Socket error on client <unknown>, disconnecting.
My app.ini has the processes configured to 1 (processes = 1)
My mosquitto.config has the allow_anonymous flag set to true (allow_anonymous true)
I can't really seem to figure out what I'm doing wrong here...
Update:
So what I think is happening is that the Flask-uWSGI application is trying to connect to Mosquitto more than once. There is a master process that connects to Mosquitto on initialisation. Then there is a second process that is used whenever input is given to the Flask app. I'm not sure, but I think Mosquitto only wants one connection at a time, therefore erroring on the second. So now I either need to:
A) Configure Mosquitto in a way that it accepts multiple connections from the same device
B) Configure Flask in a way that will only use one single process (configuring processes = 1 is not enough, it will still spawn two processes)
99% of the time, a "Socket error on client <unknown>" is an authentication error. I don't know Flask, so I don't know where to point you, but something in your code is either trying to pass a username/password that is not defined to Mosquitto, or it's trying a TLS connection with a cert that Mosquitto doesn't like.
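For reference, the broker-side settings that decide this live in mosquitto.conf; a minimal sketch looks something like the following (assuming Mosquitto 2.x, where anonymous access is off unless enabled; the paths are placeholders):

    # Minimal broker config for checking the authentication side
    listener 1883

    # Either allow anonymous clients...
    allow_anonymous true

    # ...or require credentials instead (set allow_anonymous false and define users):
    # password_file /etc/mosquitto/passwd

    # For TLS connections the broker needs certificates the client will accept:
    # cafile   /etc/mosquitto/certs/ca.crt
    # certfile /etc/mosquitto/certs/server.crt
    # keyfile  /etc/mosquitto/certs/server.key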
Alright, it turns out I could have read from the start that the whole multiple-processes setup wouldn't work, right in the official Flask-MQTT documentation. It says right there in thick letters:
Flask-MQTT is currently not suitable for the use with multiple worker
instances.
So I looked closely at my uWSGI app.ini file again, and actually the answer is quite simple. It turned out I had a line in there, master = true. After I removed that it works like a charm.
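For anyone hitting the same thing, an app.ini along these lines is the shape of the fix (a sketch, not my exact file; the module, socket and other options are placeholders to adapt):

    [uwsgi]
    # placeholder entry point: your Flask module and application object
    module = app:app
    socket = /tmp/flaskapp.sock
    chmod-socket = 660
    vacuum = true
    die-on-term = true
    # a single worker so only one MQTT connection is made
    processes = 1
    threads = 1
    # master = true   <- removed: with it enabled, a second process also connected to the broker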
I have an orchestration like the one in the screenshot below.
And I want additional logic to be executed when the service connected to wsp is failing or unavailable (timeout, service crash, server unavailable, etc.).
It looks like I will not be able to catch this in the orchestration in Scope_1.
How can I add logic to the orchestration if the service crashes?
I think you want the pattern described in this article or some variation:
BizTalk Server: Suspend and Resume an Orchestration on Two Way Port Error
This pattern captures the Port error, and puts the Orchestration in a Suspended/Resumable state.
You should still properly configure the Port Retries.
I have a problem with a send port and an application: The process cannot access the file because another process has locked a portion of the file.
I guess the problem is that while the BizTalk send port is writing the file, the application picks it up and starts processing it.
My scenario:
I have an orchestration with a file send port to write a file to a location.
After this port I have another send port to call an application that picks up the written file and processes it.
I think: while the file send port is writing and has not yet finished, the orchestration does not wait but continues to the next step, calling the application. And this leads to the above error.
Is my assumption correct?
And how can I solve this problem?
You are absolutely correct: your orchestration basically throws the message at your send port and continues. But you can change this behavior, and I'll give you a really simple solution. Here it is:
* Set your Logical send port like this
Now your orchestration will wait for delivery ACK
* To make things cleaner
Create a scope and catch the Microsoft.XLANGs.BaseTypes.DeliveryFailureException, which occurs when you don't get an ACK
* Also add a Suspend Orchestration shape in your catch block so you can resume your orchestration if your message doesn't get to its destination :)
This works with both the File and FTP protocols (I didn't test others)
The client uses an SSH login to start up a server on a remote machine, then the client creates a TCP connection to the server.
The server needs to exit when the client has exited normally, has crashed, or the network has dropped.
So the question is how to detect whether the client that the server is connected to has crashed.
My first try was using the error() signal and catching QAbstractSocket::NetworkError to determine that the network has dropped. But I don't receive the error() signal at all, even if I pull out the network cable.
My second try was using the socket state: I think that whenever the state is UnconnectedState, the client may have exited normally and the server should exit too. This works fine for the "normal exit" case, but I don't know how to deal with a crash or a dead network.
Help me, thanks!
I'd recommend using TCP keep-alive. It is not exposed through the public QTcpSocket interface, but you can use setsockopt with QAbstractSocket::socketDescriptor to activate the SO_KEEPALIVE feature.
EDIT: It appears that keep alive was added to QAbstractSocket at some point. So, simply call QAbstractSocket::setSocketOption with QAbstractSocket::KeepAliveOption.
You can find information about adjusting the timeout of keep alive request here: http://www.gnugk.org/keepalive.html
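To make that concrete, here is a minimal sketch of both approaches (Qt 5 on Linux; the idle/interval/count values are illustrative assumptions, not defaults, and the socket must already be connected):

    // Enable TCP keep-alive on an already-connected QTcpSocket.
    #include <QTcpSocket>
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    void enableKeepAlive(QTcpSocket *socket)
    {
        // Qt-level switch: sets SO_KEEPALIVE on the underlying descriptor.
        socket->setSocketOption(QAbstractSocket::KeepAliveOption, 1);

        // Optional OS-level tuning (Linux): start probing after 10 s idle,
        // probe every 5 s, declare the peer dead after 3 failed probes.
        int fd = static_cast<int>(socket->socketDescriptor());
        if (fd != -1) {
            int idle = 10, interval = 5, count = 3;
            setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE,  &idle,     sizeof(idle));
            setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &interval, sizeof(interval));
            setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT,   &count,    sizeof(count));
        }
    }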
Most of the time, the only way you will know there is a problem with a socket connection is when you try to read or write with it. There are some exceptions: Windows will change the state of sockets if the network cable is unplugged, Linux (in my experience) will not.
The most reliable way to detect connection problems is to have the client regularly send a small message at an agreed upon interval with the server. If the server does not see this message within a reasonable time, it should consider the client dead and drop the connection. This will also give both sides regular opportunities to detect a problem via reads and writes.
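As a rough sketch of the server side of that idea in Qt, something like the following could work (the 5 s check interval, 15 s timeout, and the watchClient name are assumptions for illustration):

    #include <QTcpSocket>
    #include <QTimer>
    #include <QElapsedTimer>

    // Watch a connected client socket and drop it if it stays silent too long.
    void watchClient(QTcpSocket *socket)
    {
        auto *lastSeen = new QElapsedTimer;
        lastSeen->start();

        // Any incoming bytes (application-level heartbeats included) count as proof of life.
        QObject::connect(socket, &QTcpSocket::readyRead, socket, [socket, lastSeen]() {
            lastSeen->restart();
            socket->readAll();            // hand real messages to your protocol code here
        });

        // Every 5 s, check whether the client has been quiet for more than 15 s.
        auto *timer = new QTimer(socket); // parented to the socket, stops when it is destroyed
        QObject::connect(timer, &QTimer::timeout, socket, [socket, lastSeen]() {
            if (lastSeen->elapsed() > 15000)
                socket->disconnectFromHost();   // consider the client dead
        });
        timer->start(5000);

        // Clean up the timestamp when the socket goes away.
        QObject::connect(socket, &QTcpSocket::destroyed, [lastSeen]() { delete lastSeen; });
    }

The client does the matching half: a timer that writes a small heartbeat message at the agreed interval.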