Qt Release QProcess Signal Slot Issue

Using Qt 5.6.0 & MSVC2015, I have an app which, upon start-up, uses QProcess and ssh/plink/cat to read the hostname of a remote server and capture the contents of a specific file on that server. This works when run from Qt Creator (the IDE), in either debug or release. If I run the same app outside of Qt Creator, either the signal is never emitted or the slot is never called.
Here's the pertinent code bits:
void MainWindow::Perform1stSSHcmd()
{
    QPalette palette;
    // hostname: THIS IS OUR 1st SSH COMMAND SO WE WANT TO DETERMINE IF THE KEYS ARE SET UP OK...
    QProcess* qp = new QProcess( this );
    QString userAndCommand = "root#";
    userAndCommand.append( m_cmsIp );
    userAndCommand.append( " hostname" );  // cmd we want to execute on the remote server
    qp->start( m_plinkPuttyCmd.arg( userAndCommand ));
    qp->waitForFinished( 16000 );  // I've tried various values for this
    QString output = qp->readAll();
    QString err = qp->readAllStandardError();
    // ... SNIP various error checking here ...

    // Now the system info
    m_sysInfoProc = new QProcess( this );
    qDebug() << "About to read systemInfo.xml... ";
    if( !connect( this->m_sysInfoProc, SIGNAL( readyReadStandardOutput() ), SLOT( readSystemInfoXML() ))) {
        qDebug() << "Connect Failed!!!!!";
    }
    userAndCommand = "root#";
    userAndCommand.append( m_cmsIp );
    qDebug() << "preparing cat of xml... ";
    userAndCommand.append( " cat /root/systemInfo.xml" );
    m_sysInfoProc->start( m_plinkPuttyCmd.arg( userAndCommand ));
    qDebug() << "->start issued... ";
    m_sysInfoProc->waitForFinished( 6000 );
    qDebug() << "after waitForFinished( 6000 )";
}
void MainWindow::readSystemInfoXML()
{
    qDebug() << "In readSystemInfoXML()";
    QProcess *systemInfoXml = qobject_cast<QProcess *>( sender() );
    if( !systemInfoXml ) {
        return;
    }
    QString res = systemInfoXml->readAllStandardOutput();
    qDebug() << "readSystemInfoXML() just read:" << res;
    if( !res.length() ) {
        return;
    }
    // . . . XML parsing of file contents . . . (not a concern, works fine)
}
Output (debug/release mode) when run from the IDE:
Wed Sep 28 15:36:06 2016 Debug: output: "Lanner"
Wed Sep 28 15:36:06 2016 Debug: err: ""
Wed Sep 28 15:36:06 2016 Debug: About to read systemInfo.xml...
Wed Sep 28 15:36:06 2016 Debug: preparing cat of xml...
Wed Sep 28 15:36:06 2016 Debug: ->start issued...
Wed Sep 28 15:36:06 2016 Debug: In readSystemInfoXML()
Wed Sep 28 15:36:06 2016 Debug: readSystemInfoXML() just read: "\n \n\t2.50-06-15\n 1.0\n \tSINA\n \tclass IC\n \tdoes something\n \n"
Wed Sep 28 15:36:06 2016 Debug: after waitForFinished( 6000 )
Wed Sep 28 15:36:06 2016 Debug: ICICIC
(tags are missing from the XML due to formatting; only the values appear)
Now the release...
Output when started from release folder on my Windows box:
Wed Sep 28 15:38:09 2016 Debug: output: ""
Wed Sep 28 15:38:09 2016 Debug: err: ""
Wed Sep 28 15:38:09 2016 Debug: About to read systemInfo.xml...
Wed Sep 28 15:38:09 2016 Debug: preparing cat of xml...
Wed Sep 28 15:38:09 2016 Debug: ->start issued...
Wed Sep 28 15:38:09 2016 Debug: after waitForFinished( 6000 )
Wed Sep 28 15:38:09 2016 Debug: ICICIC
I've searched for others who may have had similar issues; the closest was
Qt: some slots don't get executed in release mode
I've tried touching MainWindow.h, re-running qmake and build all, no luck.
Any advice would be greatly appreciated.
Ian

SOLUTION: As it turns out, it was related to the way ssh caches host keys, and more specifically to my app's processing of QProcess's output.
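For anyone hitting the same wall: plink keeps its host-key cache per user (in the Windows registry), so a key accepted in one environment may not be cached for the environment the release build runs under. A hedged sketch of pre-caching the key before the app ever runs; the host address is a placeholder, not taken from the question:

```shell
# Run once, interactively, and answer "y" to plink's host-key prompt:
plink -ssh root@<server-ip> exit

# Non-interactive variant (blindly trusts the key -- use with care):
echo y | plink -ssh root@<server-ip> exit
```

If the key is not cached, plink writes the "Store key in cache? (y/n)" prompt to stderr and blocks waiting for input, which matches the symptom of the slot never firing.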


Azure Pipeline run Qt application

Simple Qt app 'untitled.exe':
#include <QCoreApplication>
#include <QtCore>

void parseCmd(const QCoreApplication &app)
{
    QCommandLineParser parser;
    parser.addHelpOption();
    parser.addVersionOption();
    parser.process(app);
}

int main(int argc, char *argv[])
{
    QCoreApplication a(argc, argv);
    QCoreApplication::setApplicationVersion("1.0");
    parseCmd(a);
    return a.exec();
}
Output:
2020-12-15T19:06:08.1133300Z ##[section]Starting: Test code...
2020-12-15T19:06:08.1434531Z ==============================================================================
2020-12-15T19:06:08.1434873Z Task : Command line
2020-12-15T19:06:08.1435197Z Description : Run a command line script using Bash on Linux and macOS and cmd.exe on Windows
2020-12-15T19:06:08.1435509Z Version : 2.178.0
2020-12-15T19:06:08.1436101Z Author : Microsoft Corporation
2020-12-15T19:06:08.1436429Z Help : https://learn.microsoft.com/azure/devops/pipelines/tasks/utility/command-line
2020-12-15T19:06:08.1436801Z ==============================================================================
2020-12-15T19:06:09.5889046Z Generating script.
2020-12-15T19:06:09.6545863Z ========================== Starting Command Output ===========================
2020-12-15T19:06:09.7002067Z ##[command]"C:\windows\system32\cmd.exe" /D /E:ON /V:OFF /S /C "CALL "D:\a\_temp\cc51db57-c085-4347-873f-ed28f0e7af53.cmd""
2020-12-15T19:06:09.7240225Z 1 file(s) copied.
2020-12-15T19:06:09.7388204Z Volume in drive D is Temp
2020-12-15T19:06:09.7389142Z Volume Serial Number is 405E-826C
2020-12-15T19:06:09.7389470Z
2020-12-15T19:06:09.7389984Z Directory of D:\a\1\s\Release_WIN_x64
2020-12-15T19:06:09.7390270Z
2020-12-15T19:06:09.7390552Z 12/15/2020 07:06 PM <DIR> .
2020-12-15T19:06:09.7392444Z 12/15/2020 07:06 PM <DIR> ..
2020-12-15T19:06:09.7393989Z 12/15/2020 07:06 PM <DIR> bearer
2020-12-15T19:06:09.7394372Z 12/15/2020 07:06 PM <DIR> iconengines
2020-12-15T19:06:09.7394795Z 12/15/2020 07:06 PM <DIR> imageformats
2020-12-15T19:06:09.7395287Z 12/08/2020 02:33 PM 3,409,920 libcrypto-1_1-x64.dll
2020-12-15T19:06:09.7395702Z 12/08/2020 02:33 PM 682,496 libssl-1_1-x64.dll
2020-12-15T19:06:09.7399561Z 12/15/2020 07:06 PM 47,104 MainApp-1.0.0.exe
2020-12-15T19:06:09.7400169Z 12/15/2020 07:06 PM 594,944 MaintenanceTool-1.0.0.exe
2020-12-15T19:06:09.7400757Z 12/15/2020 07:06 PM <DIR> platforms
2020-12-15T19:06:09.7401416Z 05/11/2020 08:46 AM 5,998,712 Qt5Core.dll
2020-12-15T19:06:09.7401954Z 05/11/2020 08:47 AM 7,085,176 Qt5Gui.dll
2020-12-15T19:06:09.7404188Z 05/11/2020 08:47 AM 1,349,240 Qt5Network.dll
2020-12-15T19:06:09.7404743Z 05/11/2020 03:05 PM 329,848 Qt5Svg.dll
2020-12-15T19:06:09.7405277Z 05/11/2020 08:47 AM 5,516,920 Qt5Widgets.dll
2020-12-15T19:06:09.7405719Z 12/15/2020 07:06 PM <DIR> styles
2020-12-15T19:06:09.7406168Z 12/15/2020 07:05 PM 10,240 untitled.exe
2020-12-15T19:06:09.7406588Z 10 File(s) 25,024,600 bytes
2020-12-15T19:06:09.7407293Z 7 Dir(s) 11,920,207,872 bytes free
2020-12-15T19:06:09.8738006Z ##[error]Cmd.exe exited with code '-1073741701'.
2020-12-15T19:06:09.9214918Z ##[section]Finishing: Test code...
I try to run that in an Azure pipeline, but it fails to run with the -v parameter (which should show the version number).
All the DLLs are in place, yet the application crashes; other apps in the listing, like 'MainApp' and 'MaintenanceTool', use the same 'core' stuff and behave the same way.
When I run 'MainApp' or 'MaintenanceTool', Azure waits forever, and the only way to stop the pipeline is to cancel it.
The weird thing is that I'm able to compile, but not run :/
Found the problem; it is specific to Azure pipelines.
Qt's QCommandLineParser class uses 'qApp', which can be a core or a GUI app (in my case it was QApplication). When the app is run with -v (a default option of QCommandLineParser), it shows a MessageBox (a GUI pop-up) with the version.
So when we use that in a command-line pipeline, there is no way to click OK on that dialog.
Based on code from Can I use QCommandLineParser to determine GUI mode or CLI mode?
I made an example that just prints the version, as that was the requirement for me :)
#include <QApplication>
#include <QMessageBox>
#include <QtCore>
#include <iostream> // for std::cout

// Use any other flags from cmd ...
struct Options {
    bool version = false;
};

Options parseCmd()
{
    QCommandLineParser parser;
    // parser.addVersionOption(); DO NOT USE THIS!
    QCommandLineOption showVer("ver", "Show program version.");
    parser.addOption(showVer);
    // Process args
    parser.process(*qApp);
    // Save data
    Options opts = {};
    opts.version = parser.isSet(showVer);
    return opts;
}

int main(int argc, char *argv[])
{
    std::cout << "Create core app" << std::endl;
    QCoreApplication::setApplicationVersion("1.0");
    // Create base core app
    QScopedPointer<QCoreApplication> app(new QCoreApplication(argc, argv));

    // Parse cmd parameters
    std::cout << "Parse cmd params" << std::endl;
    Options opt = parseCmd();

    // If we only need to display the version
    if (opt.version)
    {
        std::cout << "Display version" << std::endl;
        std::cout << qApp->applicationVersion().toStdString() << std::endl;
    }
    else
    {
        std::cout << "Run in gui mode" << std::endl;
        // Destroy the core app, then recreate it in GUI mode
        app.reset();
        app.reset(new QApplication(argc, argv));
    }

    if (qobject_cast<QApplication*>(qApp))
    {
        std::cout << "Show QMessageBox" << std::endl;
        // Create any GUI stuff here
        QMessageBox::information(nullptr, "Hello", "Hello, World!");
    }
    else
    {
        std::cout << "Send quit message" << std::endl;
        // As we ran QCoreApplication only for parsing cmd params ...
        QMetaObject::invokeMethod(qApp, "quit", Qt::QueuedConnection);
    }

    return app->exec();
}
We also need an extra line in the project file, CONFIG += console:
QT += core gui widgets
CONFIG += console
TEMPLATE = app
SOURCES += main.cpp
And that's how we can 'show' the version on the command line ;)

Transfer Encoding chunked with okhttp only delivers full result

I am trying to get some insights into a chunked endpoint and therefore planned to print what the server sends me, chunk by chunk. I failed to do so, so I wrote a test to see if OkHttp/Retrofit work as I expect.
The following test should deliver some chunks to the console but all I get is the full response.
I am a bit lost as to what I am missing to even make OkHttp3's MockWebServer send me chunks.
I found this retrofit issue entry but the answer is a bit ambiguous for me: Chunked Transfer Encoding Response
class ChunkTest {
    @Rule
    @JvmField
    val rule = RxImmediateSchedulerRule() // custom rule to run Rx synchronously

    @Test
    fun `test Chunked Response`() {
        val mockWebServer = MockWebServer()
        mockWebServer.enqueue(getMockChunkedResponse())
        val retrofit = Retrofit.Builder()
            .baseUrl(mockWebServer.url("/"))
            .addCallAdapterFactory(RxJava2CallAdapterFactory.create())
            .client(OkHttpClient.Builder().build())
            .build()
        val chunkedApi = retrofit.create(ChunkedApi::class.java)
        chunkedApi.getChunked()
            .subscribeOn(Schedulers.io())
            .observeOn(AndroidSchedulers.mainThread())
            .subscribe({
                System.out.println(it.string())
            }, {
                System.out.println(it.message)
            })
        mockWebServer.shutdown()
    }

    private fun getMockChunkedResponse(): MockResponse {
        val mockResponse = MockResponse()
        mockResponse.setHeader("Transfer-Encoding", "chunked")
        mockResponse.setChunkedBody("THIS IS A CHUNKED RESPONSE!", 5)
        return mockResponse
    }
}

interface ChunkedApi {
    @Streaming
    @GET("/")
    fun getChunked(): Flowable<ResponseBody>
}
Test console output:
Nov 06, 2018 4:08:15 PM okhttp3.mockwebserver.MockWebServer$2 execute
INFO: MockWebServer[49293] starting to accept connections
Nov 06, 2018 4:08:15 PM okhttp3.mockwebserver.MockWebServer$3 processOneRequest
INFO: MockWebServer[49293] received request: GET / HTTP/1.1 and responded: HTTP/1.1 200 OK
THIS IS A CHUNKED RESPONSE!
Nov 06, 2018 4:08:15 PM okhttp3.mockwebserver.MockWebServer$2 acceptConnections
INFO: MockWebServer[49293] done accepting connections: Socket closed
I expected it to be more like this (body "cut" every 5 bytes):
Nov 06, 2018 4:08:15 PM okhttp3.mockwebserver.MockWebServer$2 execute
INFO: MockWebServer[49293] starting to accept connections
Nov 06, 2018 4:08:15 PM okhttp3.mockwebserver.MockWebServer$3 processOneRequest
INFO: MockWebServer[49293] received request: GET / HTTP/1.1 and responded: HTTP/1.1 200 OK
THIS
IS A
CHUNKE
D RESPO
NSE!
Nov 06, 2018 4:08:15 PM okhttp3.mockwebserver.MockWebServer$2 acceptConnections
INFO: MockWebServer[49293] done accepting connections: Socket closed
The OkHttp MockWebServer does chunk the data; however, it looks like the LoggingInterceptor waits until the whole chunk buffer is full and only then displays it.
From this nice summary about HTTP streaming:
The use of Transfer-Encoding: chunked is what allows streaming within a single request or response. This means that the data is transmitted in a chunked manner, and does not impact the representation of the content.
With that in mind, we are dealing with one "request/response", which means we have to retrieve the chunks before the entire response has arrived, pushing each chunk into our own buffer, all of that inside an OkHttp network interceptor.
Here is an example of said NetworkInterceptor:
class ChunksInterceptor : Interceptor {
    val Utf8Charset = Charset.forName("UTF-8")

    override fun intercept(chain: Interceptor.Chain): Response {
        val originalResponse = chain.proceed(chain.request())
        val responseBody = originalResponse.body()
        val source = responseBody!!.source()
        val buffer = Buffer() // We create our own Buffer

        // exhausted() returns true if there are no more bytes in this source
        while (!source.exhausted()) {
            val readBytes = source.read(buffer, Long.MAX_VALUE) // We read the whole buffer
            val data = buffer.readString(Utf8Charset)
            println("Read: $readBytes bytes")
            println("Content: \n $data \n")
        }

        return originalResponse
    }
}
Then of course we register this Network Interceptor on the OkHttp client.
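For completeness, the registration might look like this; a sketch only, reusing ChunksInterceptor from above and OkHttp's addNetworkInterceptor builder method (network interceptors, unlike application interceptors, see the body as it comes off the wire):

```kotlin
val client = OkHttpClient.Builder()
    .addNetworkInterceptor(ChunksInterceptor())
    .build()

val retrofit = Retrofit.Builder()
    .baseUrl(mockWebServer.url("/"))
    .addCallAdapterFactory(RxJava2CallAdapterFactory.create())
    .client(client)
    .build()
```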

Play Framework running Asynchronous Web Calls times out

I have to perform three HTTP requests from a Play Application. These calls are directed towards three subprojects of the main application. The architecture looks like this:
main app
|
--modules
|
--component1
|
--component2
|
--component3
Each component* is an individual sbt subproject.
The main class in main app runs this Action:
def service = Action.async(BodyParsers.parse.json) { implicit request =>
  val query = request.body
  val url1 = "http://localhost:9000/one"
  val url2 = "http://localhost:9000/two"
  val url3 = "http://localhost:9000/three"
  val sync_calls = for {
    a <- ws.url(url1).withRequestTimeout(Duration.Inf).withHeaders("Content-Type" -> "application/json")
      .withBody(query).get()
    b <- ws.url(url2).withRequestTimeout(Duration.Inf).withHeaders("Content-Type" -> "application/json")
      .withBody(a.body).get()
    c <- ws.url(url3).withRequestTimeout(Duration.Inf).withHeaders("Content-Type" -> "application/json")
      .withBody(b.body).get()
  } yield c.body
  sync_calls.map(x => Ok(x))
}
The components need to be activated one after the other, so the calls must form a sequence. Each of them runs a Spark job. However, when I call the service action I get this error:
[error] application -
! @71og96ol6 - Internal server error, for (GET) [/automatic] ->
play.api.http.HttpErrorHandlerExceptions$$anon$1: Execution exception[[TimeoutException: Read timeout to localhost/127.0.0.1:9000 after 120000 ms]] [TimeoutException: Read timeout to localhost/127.0.0.1:9000 after 120000 ms]
    at play.api.http.HttpErrorHandlerExceptions$.throwableToUsefulException(HttpErrorHandler.scala:280)
    at play.api.http.DefaultHttpErrorHandler.onServerError(HttpErrorHandler.scala:206)
    at play.api.GlobalSettings$class.onError(GlobalSettings.scala:160)
    at play.api.DefaultGlobal$.onError(GlobalSettings.scala:188)
    at play.api.http.GlobalSettingsHttpErrorHandler.onServerError(HttpErrorHandler.scala:98)
    at play.core.server.netty.PlayRequestHandler$$anonfun$2$$anonfun$apply$1.applyOrElse(PlayRequestHandler.scala:100)
    at play.core.server.netty.PlayRequestHandler$$anonfun$2$$anonfun$apply$1.applyOrElse(PlayRequestHandler.scala:99)
    at scala.concurrent.Future$$anonfun$recoverWith$1.apply(Future.scala:346)
    at scala.concurrent.Future$$anonfun$recoverWith$1.apply(Future.scala:345)
    at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
Caused by: java.util.concurrent.TimeoutException: Read timeout to localhost/127.0.0.1:9000 after 120000 ms
    at org.asynchttpclient.netty.timeout.TimeoutTimerTask.expire(TimeoutTimerTask.java:43)
    at org.asynchttpclient.netty.timeout.ReadTimeoutTimerTask.run(ReadTimeoutTimerTask.java:54)
    at io.netty.util.HashedWheelTimer$HashedWheelTimeout.expire(HashedWheelTimer.java:581)
    at io.netty.util.HashedWheelTimer$HashedWheelBucket.expireTimeouts(HashedWheelTimer.java:655)
    at io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:367)
    at java.lang.Thread.run(Thread.java:745)
and
HTTP/1.1 500 Internal Server Error
Content-Length: 4931
Content-Type: text/html; charset=utf-8
Date: Tue, 25 Oct 2016 21:06:17 GMT
<body id="play-error-page">
<p id="detail" class="pre">[TimeoutException: Read timeout to localhost/127.0.0.1:9000 after 120000 ms]</p>
</body>
I specifically set the timeout for each call to Duration.Inf for the purpose of avoiding timeouts. Why is this happening, and how do I fix it?

How to parse http headers in Go

I have http response headers shipped in logs from elsewhere. In my log file I have things like :-
Date: Fri, 21 Mar 2014 06:45:15 GMT\r\nContent-Encoding: gzip\r\nLast-Modified: Tue, 20 Aug 2013 15:45:41 GMT\r\nServer: nginx/0.8.54\r\nAge: 18884\r\nVary: Accept-Encoding\r\nContent-Type: text/html\r\nCache-Control: max-age=864000, public\r\nX-UA-Compatible: IE=Edge,chrome=1\r\nTiming-Allow-Origin: *\r\nContent-Length: 14888\r\nExpires: Mon, 31 Mar 2014 06:45:15 GMT\r\n
Given the above as a string, how do I parse it into a Header object as described in net/http? One way would be to split the string myself and map the keys and values... but I am looking to avoid doing that by hand and would rather use the standard (or a well-maintained 3rd-party) library to parse it... Any pointers?
The builtin parser is in textproto. You can either use this directly, or add a
fake HTTP request header and use ReadRequest in the http package. Either way
you need to wrap your data into a bufio.Reader, here I'm just assuming we're
starting with a string.
With textproto:
logEntry := "Content-Encoding: gzip\r\nLast-Modified: Tue, 20 Aug 2013 15:45:41 GMT\r\nServer: nginx/0.8.54\r\nAge: 18884\r\nVary: Accept-Encoding\r\nContent-Type: text/html\r\nCache-Control: max-age=864000, public\r\nX-UA-Compatible: IE=Edge,chrome=1\r\nTiming-Allow-Origin: *\r\nContent-Length: 14888\r\nExpires: Mon, 31 Mar 2014 06:45:15 GMT\r\n"
// don't forget to make certain the headers end with a second "\r\n"
reader := bufio.NewReader(strings.NewReader(logEntry + "\r\n"))
tp := textproto.NewReader(reader)
mimeHeader, err := tp.ReadMIMEHeader()
if err != nil {
    log.Fatal(err)
}
// http.Header and textproto.MIMEHeader are both just a map[string][]string
httpHeader := http.Header(mimeHeader)
log.Println(httpHeader)
and with http.ReadRequest:
logEntry := "Content-Encoding: gzip\r\nLast-Modified: Tue, 20 Aug 2013 15:45:41 GMT\r\nServer: nginx/0.8.54\r\nAge: 18884\r\nVary: Accept-Encoding\r\nContent-Type: text/html\r\nCache-Control: max-age=864000, public\r\nX-UA-Compatible: IE=Edge,chrome=1\r\nTiming-Allow-Origin: *\r\nContent-Length: 14888\r\nExpires: Mon, 31 Mar 2014 06:45:15 GMT\r\n"
// we need to make sure to add a fake HTTP header here to make a valid request.
reader := bufio.NewReader(strings.NewReader("GET / HTTP/1.1\r\n" + logEntry + "\r\n"))
logReq, err := http.ReadRequest(reader)
if err != nil {
    log.Fatal(err)
}
log.Println(logReq.Header)
https://golang.org/pkg/net/textproto

Using JSch ChannelSftp: How to read multiple files with dynamic names?

I have to read a bunch of .CSV files with dynamic file names from a SFTP server. These files get generated every 15 minutes.
I am using JSch's ChannelSftp, but there is no method which would give the exact filenames. I only see an .ls() method. This gives a Vector e.g.
[drwxr-xr-x 2 2019 2019 144 Aug 9 22:29 .,
drwx------ 6 2019 2019 176 Aug 27 2009 ..,
-rw-r--r-- 1 2019 2019 121 Aug 9 21:03 data_task1_2011_TEST.csv,
-rw-r--r-- 1 2019 2019 121 Aug 9 20:57 data_task1_20110809210007.csv]
Is there a simple way to read all the files in a directory and copy them to another location?
This code works for copying a single file:
JSch jsch = new JSch();
session = jsch.getSession(SFTPUSER,SFTPHOST,SFTPPORT);
session.setPassword(SFTPPASS);
java.util.Properties config = new java.util.Properties();
config.put("StrictHostKeyChecking", "no");
session.setConfig(config);
session.connect();
channel = session.openChannel("sftp");
channel.connect();
channelSftp = (ChannelSftp)channel;
channelSftp.cd(SFTPWORKINGDIR);
channelSftp.get("data_task1_20110809210007.csv","data_task1_20110809210007.csv");
The ls method is the one you need. It returns a Vector of LsEntry objects, each of which you can ask for its filename.
So, after your channelSftp.cd(SFTPWORKINGDIR);, you could do the following:
Vector<ChannelSftp.LsEntry> list = channelSftp.ls("*.csv");
for (ChannelSftp.LsEntry entry : list) {
    channelSftp.get(entry.getFilename(), destinationPath + entry.getFilename());
}
(This assumes destinationPath is a local directory name ending with / (or \ in Windows).)
Of course, if you don't want to download the same files again after 15 minutes, you might want to have a list of the local files, to compare them (use a HashSet or similar), or delete them from the server.
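The bookkeeping for that could be as simple as a HashSet of already-downloaded names. A minimal sketch using only plain java.util collections (the file names are hypothetical, mirroring the ls() output above; no JSch calls involved):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class DownloadFilter {
    // Returns only the remote names we have not downloaded yet.
    static List<String> newFiles(List<String> remote, Set<String> alreadyDownloaded) {
        List<String> fresh = new ArrayList<>();
        for (String name : remote) {
            if (!alreadyDownloaded.contains(name)) {
                fresh.add(name);
            }
        }
        return fresh;
    }

    public static void main(String[] args) {
        List<String> remote = Arrays.asList(
                "data_task1_2011_TEST.csv",
                "data_task1_20110809210007.csv");
        Set<String> local = new HashSet<>(Arrays.asList("data_task1_2011_TEST.csv"));
        // Only the not-yet-downloaded file remains
        System.out.println(newFiles(remote, local));
    }
}
```

Every 15 minutes you would re-run ls, pass the filenames through newFiles, download what is left, and add the new names to the set.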
Note that ls is case-sensitive. The following retrieves all CSV files, regardless of the case of the extension:
ArrayList<String> list = new ArrayList<String>();
Vector<LsEntry> entries = sftpChannel.ls("*.*");
for (LsEntry entry : entries) {
    if (entry.getFilename().toLowerCase().endsWith(".csv")) {
        list.add(entry.getFilename());
    }
}
