Catch2: how to execute another process as part of a test case?

I'm trying to implement a test case in Catch2 that tests the use of a FIFO between processes.
To test it, I want to run another process that creates and writes to the FIFO, while my test (using Catch2) reads from it.
Is there a way to run a process using Catch2, or should I just use the regular system APIs to execute another exe file (process) as part of the test?
Thanks

Related

read and copy buffer from kernel in CPU to kernel in FPGA with OpenCL

I'm trying to speed up the Ethash algorithm on a Xilinx U50 FPGA. My problem is not with the FPGA itself; it is about passing the DAG file, which is generated on the CPU, to the FPGA.
First, I'm using this code in my test, with a few changes to support the Intel OpenCL driver. If I use only the CPU to run the Ethash (in this case xleth) program, the whole process completes. In my case, though, I first generate the DAG file on the CPU, which takes 30 seconds on 4 cores for epoch number 0. After that I want to pass the DAG file (shown in the code as m_dag) to a new buffer (g_dag) to send it to the U50's HBMs.
I can't use only one context in this program, because I'm using two separate kernel files (.cl for the CPU and .xclbin for the FPGA), and when I try to build the program and kernel I get error -33 (CL_INVALID_DEVICE). So I created a separate context (named g_context).
Now I want to know: how can I send data from m_context to g_context? And is that OK for performance? (Suggest another solution if you have one.)
I've posted my code at this link, so if you can, please send a code solution.
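A cl_mem buffer belongs to the context it was created in, so a buffer from m_context cannot be used in g_context directly; the portable route is to stage the DAG through host memory. A rough, untested sketch of that pattern (the names m_queue, g_queue, and dag_bytes are assumptions mirroring the question; error checking omitted for brevity):

```cpp
#include <CL/cl.h>
#include <vector>

// Copy the DAG from a buffer in the CPU context (m_dag) to a buffer in the
// FPGA context (g_dag) by staging it through host memory. Each queue must
// belong to the same context as the buffer it touches.
void copy_dag_between_contexts(cl_command_queue m_queue, cl_mem m_dag,
                               cl_command_queue g_queue, cl_mem g_dag,
                               size_t dag_bytes) {
    std::vector<unsigned char> host(dag_bytes);
    // 1. Pull the generated DAG out of the CPU context into host memory.
    clEnqueueReadBuffer(m_queue, m_dag, CL_TRUE, 0, dag_bytes,
                        host.data(), 0, nullptr, nullptr);
    // 2. Push it into the buffer created in the FPGA context.
    clEnqueueWriteBuffer(g_queue, g_dag, CL_TRUE, 0, dag_bytes,
                         host.data(), 0, nullptr, nullptr);
    clFinish(g_queue);
}
```

If you keep the DAG in host memory anyway, creating g_dag with CL_MEM_COPY_HOST_PTR copies the data at buffer-creation time and saves the separate clEnqueueWriteBuffer call; either way, one host-side round trip per epoch is the unavoidable cost of using two contexts.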

Apache Flink - End to End testing how to terminate input source

I've used Apache Flink in batch processing for a while, but now we want to convert this batch job to a streaming job. The problem I run into is how to run end-to-end tests.
How it worked in a batch job
When using batch processing we created end-to-end tests using cucumber.
We would fill up the hbase table we read from
Run the batch job
Wait for it to finish
Verify the result
The problem in a streaming job
We would like to do something similar with the streaming job except the streaming job does not really finish.
So:
fill up the message queue we read from
Run the streaming job.
Wait for it to finish (how?)
Verify the result
We could just wait 5 seconds after every test and assume everything has been processed but that would slow everything down a lot.
Question:
What are some ways or best practices to run end-to-end tests on a streaming Flink job without forcibly terminating the Flink job after x seconds?
Most Flink DataStream sources, if they are reading from a finite input, will inject a watermark with value Long.MAX_VALUE when they reach the end, after which the job will be terminated.
The Flink training exercises illustrate one approach to doing end-to-end testing of Flink jobs. I suggest cloning the GitHub repo and looking at how the tests are set up. They use a custom source and sink and redirect the input and output for testing.
This topic is also discussed a bit in the documentation.

QSharedMemory for an externally running process?

I have a QApplication that calls an external executable. This executable will keep running indefinitely, passing data to the QApplication through stdout, unless the user manually exits it from the console. The process does not wait for stdin while it is running (it's a simple C++ program running as an executable with a while loop).
I want to be able to modify this executable's behavior at runtime by sending some form of signal from the QApplication to the external process. I read about Qt's IPC and I think QSharedMemory is the easiest way to achieve this. I cannot use any kind of pipes etc. since the process is not waiting for stdin.
Is it possible for a QSharedMemory segment to be shared by the QApplication as well as an externally running process that is not a Qt application? If yes, are there any examples someone can point me to? I tried to find some but couldn't. If not, what other options might work in my specific scenario?
Thanks in advance
The idea that you have to wait for any sort of I/O is mostly antiquated. You should design your code so that it is informed by the operating system as soon as an I/O request is fulfilled (new input data available, output data sent, etc.).
You should simply use standard input for your purposes. The process doesn't have to wait for standard input; it can check whether any input is available, and read it if so. You'd do it in the same place where you'd poll for changes to the shared memory segment.
For Unix systems, you could use QSocketNotifier to get notified when standard input is available.
On Windows, the simplest check is _kbhit(); for other solutions see this answer. QWinEventNotifier also works with a console handle on Windows.

Process stops getting network data

We have a process (written in C++/managed) which receives network data via TCP/IP.
After running the process for a while while tracking network load, the network seems to enter a frozen state and the process stops getting data; other processes on the system that use networking (same NIC) operate normally.
The process gets out of this frozen state by itself after several minutes.
Any idea what is happening?
Any counter I can track to see if my process is reaching some limit?
It is going to be very difficult to answer specifically:
-- without knowing what exactly your process/application is about,
-- whether it is a network chat application, or a file server/client, or ......
-- without other details about how your process is implemented and what libraries it uses, if relevant to the problem.
Also, you haven't mentioned what OS and environment you are running this process under, so there is very little anyone can help with. It could be anything: a busy-wait loop in your code, locking problems if it's multi-threaded code, ...
Nonetheless, here are some options to check.
If it's Linux, try the commands below to debug and monitor the behaviour of the process and see what the problem could be:
top
Check top to see how much of each resource (CPU, memory) your process is using and whether any values are abnormally high, especially CPU usage.
pstack
This should print the stack frames the process is executing at the time of the problem.
netstat
Run this with the necessary options (tcp/udp) to check the state of the network sockets opened by your process; a steadily growing Recv-Q on one of them means the kernel has data queued that your process is not reading.
gcore -s -c
This forces your process to dump core when the mentioned problem happens, so you can then analyze that core file using gdb.
gdb
Then use the command where at the gdb prompt to get a full backtrace of the process (which function it was executing last, and the previous function calls).

Advice/experience on testing MPI code with CppUnit

I've got a codebase where I have been using CppUnit for unit testing. I'm now adding some MPI code to the project and I'd like to unit test some abstractions I'm building on top of MPI. For example, I've written some code to manage a single-producer/multiple-consumer relationship, where consumers ask for work and the producer serializes the next bit of work to send to the consumer. I'd like to test just that interaction with a test that generates some fake work items in a producer and distributes them to consumers, which then send some kind of checksum back to the producer to make sure everything got distributed and nothing deadlocked, etc.
Does anyone have experience of what works best here? Some things I've been thinking about:
Is it reasonable to have all processes execute the test runner so that they all execute the test functions in the same order? Or is it better to have only the master run the test runner and have it send broadcasts to the slaves to tell them what to do next (presumably with some kind of lookup table to map commands to test functions)?
Is it sane in any way to use CPPUNIT_ASSERT inside the slaves, or should all information be sent back to the master for assertions? If slaves can assert, how should all the results be combined to get a single output log?
How should one handle test failures, such that an exception thrown in one process doesn't cause synchronization problems, e.g. another process waiting on an MPI_Recv for which the matching MPI_Send will now never happen?
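For the "only the master runs the test runner" variant in the first bullet, one workable shape is: rank 0 executes CppUnit normally, and each test first broadcasts a command id so that every rank enters the same collective code path; only rank 0 asserts. A rough sketch with made-up names (Command, run_distribution_test) and the checksum exchange reduced to an MPI_Reduce:

```cpp
#include <mpi.h>

enum Command { CMD_SHUTDOWN = 0, CMD_DIST_TEST = 1 };  // hypothetical ids

// Collective test body executed by every rank; only rank 0 checks the result.
void run_distribution_test(int rank, int size) {
    int local = rank;  // stand-in for a per-consumer work checksum
    int sum = 0;
    MPI_Reduce(&local, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0) {
        // In the real suite this would be a CPPUNIT_ASSERT_EQUAL on sum.
    }
}

// Ranks != 0 just wait for commands from the master.
void slave_loop(int rank, int size) {
    for (;;) {
        int cmd;
        MPI_Bcast(&cmd, 1, MPI_INT, 0, MPI_COMM_WORLD);
        if (cmd == CMD_SHUTDOWN) break;
        if (cmd == CMD_DIST_TEST) run_distribution_test(rank, size);
    }
}

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (rank != 0) {
        slave_loop(rank, size);
    } else {
        // Rank 0: the CppUnit runner would live here; each test broadcasts
        // its command id, then calls the same collective body as the slaves.
        int cmd = CMD_DIST_TEST;
        MPI_Bcast(&cmd, 1, MPI_INT, 0, MPI_COMM_WORLD);
        run_distribution_test(rank, size);
        cmd = CMD_SHUTDOWN;
        MPI_Bcast(&cmd, 1, MPI_INT, 0, MPI_COMM_WORLD);
    }
    MPI_Finalize();
    return 0;
}
```

This shape also bears on the failure question: if a CppUnit assertion throws on rank 0, catch it in the runner and still broadcast CMD_SHUTDOWN (or an "abort test" id) so the other ranks are never left blocking in a matching receive.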
