Connecting output of one pipe to input of one FIFO - unix

I am trying to write a client-server program in which there are three executables D1, D2 and D3 that produce data on their standard output. The clients request any one of these data sources and send their pid to the server through a common FIFO. The structure for sending this request is:
struct Request
{
    char p[10]; // the pid of the client program in string form
    int req;    // 1, 2 or 3 depending on whether D1, D2 or D3 is required
};
After getting a request, the server opens a FIFO whose pathname is the pid of the client, so it works as a client-specific FIFO:
mkfifo(pid, 0666);
int fd1 = open(pid, O_WRONLY);
Now, suppose the req field is 1. If this is the first request for D1, the server will run:
FILE* fp = popen("./D1", "r");
int fd = fileno(fp); // the file descriptor for the read end of the pipe connected to D1
Now I want my client to read from the pipe of D1. D1 contains a simple program like:
while (1)
{
    write(1, "Data from D1", 12);
    sleep(1);
}
I tried dup2(fd, fd1) but it did not work. Is there any way of connecting the two file descriptors fd and fd1?
Also, if another client requests D1, how do I connect the file descriptor of client2 to fd so that both clients receive the same message at the same time?

Instead of "connecting" two file descriptors, you can send the file descriptor to the client and let the client read:
The server listens on a UNIX stream socket.
The client connects to the socket and sends the request.
The server receives the request, does popen and obtains the file descriptor.
The server then sends the file descriptor to the client and closes the file descriptor.
The client receives the file descriptor and reads from it till EOF.
See man unix(7) for details about sending file descriptors between processes with SCM_RIGHTS.
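For illustration, here is a minimal sketch of the fd-passing step on the server side. It assumes sock is an already-connected AF_UNIX stream socket and fd is the descriptor returned by fileno(fp); send_fd is a made-up helper name and error handling is omitted:

#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Pass fd to the peer of the connected AF_UNIX socket sock via SCM_RIGHTS. */
static ssize_t send_fd(int sock, int fd)
{
    char dummy = 'x';                         /* at least one byte of ordinary data must accompany the fd */
    struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
    char cbuf[CMSG_SPACE(sizeof(int))];
    struct msghdr msg = {0};

    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = cbuf;
    msg.msg_controllen = sizeof(cbuf);

    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type = SCM_RIGHTS;
    cmsg->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

    return sendmsg(sock, &msg, 0);            /* the client retrieves the fd with recvmsg() */
}

The client does the mirror-image recvmsg() call and then reads from the received descriptor until EOF.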
Alternatively, instead of using popen:
The server forks. The child does mkfifo (the client passed the filename in the request), opens it for writing, and redirects its stdout to the named pipe's file descriptor.
The child execs the application. The application writes to stdout, and that output goes into the named pipe.
The client opens the named pipe and reads the output of the application. The client can unlink the pipe's filename after opening it.
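A rough sketch of that fork/exec path, assuming fifo_path came from the client's request and app is the program to run (e.g. "./D1"); serve_via_fifo is a made-up name and error handling is omitted:

#include <fcntl.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

static void serve_via_fifo(const char *fifo_path, const char *app)
{
    mkfifo(fifo_path, 0666);                  /* create the client-specific FIFO */
    if (fork() == 0) {                        /* child */
        int fd = open(fifo_path, O_WRONLY);   /* blocks until the client opens it for reading */
        dup2(fd, STDOUT_FILENO);              /* redirect the child's stdout into the FIFO */
        close(fd);
        execl(app, app, (char *)NULL);        /* whatever the app writes to stdout goes into the FIFO */
        _exit(127);                           /* reached only if exec fails */
    }
}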

Related

In TCP, how much data is buffered if the connection is not accepted by the server?

I wrote a simple server application. In that application, I created a server socket and put it into the listening state with a listen call.
After that, I did not write any code to accept the incoming connection request. I simply waited for termination with a pause call.
I want to figure out, practically, how many bytes are buffered on the server side if the connection is not accepted, and then validate that number against TCP theory.
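A minimal sketch of the kind of listener described (the original code is not shown, so the exact setup here is an assumption, and error handling is omitted):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_STREAM, 0);

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(45001);             /* the port mentioned below */

    bind(s, (struct sockaddr *)&addr, sizeof(addr));
    listen(s, 5);                             /* connections complete in the kernel ... */
    pause();                                  /* ... but are never accept()ed */
    return 0;
}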
To do that, I first started my server application. Then I used dd and netcat to send data from the client to the server. Here is the command:
$> dd if=/dev/zero count=1 bs=100000 | nc 127.0.0.1 45001
Then I opened Wireshark and waited for the zero-window message.
From the last properly acknowledged TCP frame, I can see that the client side successfully sent 64559 bytes of data to the server.
Then I executed the above dd/netcat command to create another client and send data again.
This time, from the last successfully acknowledged TCP frame in the Wireshark output, I can see that the client application successfully sent 72677 bytes to the server.
So it seems that the size of the relevant buffer can change at runtime, or I am misinterpreting the Wireshark output.
How can I determine the size of this receive buffer? What is the correct term for it? And how can I display its default size?
Note that the port number of the TCP server is 45001.
Thank you!

QLocalServer write blocked (full buffer): how to discard data?

I'm trying to understand how local sockets (Unix domain sockets) work, specifically how to use them with Qt.
It is my understanding that a Unix domain socket is always reliable and no data is lost. Looking at these examples, these are the steps (simplified), considering a server (producer) and a client (consumer):
A QLocalServer (on the server) creates a socket, binds it to a known location and listens for incoming connections.
A QLocalSocket (on the client) connects to the known location. On the server the connection is accepted.
The server can send data to the socket using the write method (of the QLocalSocket instance returned by qLocalServer->nextPendingConnection()).
When the socket buffer is full, any write by the server is blocked as soon as the client reads from the socket and frees the socket buffer. This is the part I don't understand: suppose the socket buffer is full and the server keeps writing. Where is all this data stored, how can I control the data structure holding it, and is there any way to discard it?
Imagine a server that produces data 20 times/second and a client that consumes it once/second; I want to drop all the data in between (in my use case, real-time data is better than ALL the data).
There's nothing unique about the Qt socket classes (for the purposes of this question).
When the socket buffer is full, any write by the server is blocked as soon as the client reads from the socket and frees the socket buffer.
No - the write is blocked until the client performs a read, freeing up some of the buffer. (Perhaps this is what you meant and we just have a minor language barrier.)
Once the server writes to the socket, the data can't be "un-written" so how best to handle this depends on your application, and whether you have more control over the programming of the server or the client.
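If the goal is to keep only the freshest data, one possible approach (sketched here at the plain POSIX socket level rather than through the Qt classes; make_nonblocking and send_or_drop are made-up names) is to put the server's end of the connection into non-blocking mode and skip any write that would block:

#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

/* Call once on the connected socket, after accepting the connection. */
static void make_nonblocking(int fd)
{
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);
}

/* Try to send one sample; if the socket buffer is full, drop it instead of blocking. */
static void send_or_drop(int fd, const void *buf, size_t len)
{
    /* For simplicity this ignores partial writes, which a real sender would have to handle. */
    if (write(fd, buf, len) < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
        /* socket buffer full: this sample is discarded; a fresher one will be sent later */
    }
}

That way the 20-samples-per-second producer never blocks, and the slow consumer only ever sees whatever fit into the socket buffer.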

How can I receive, at the destination, exactly the same application data packets on a TCP stream that were sent by the app at the source node?

As a TCP connection is a stream, data traveling through the network is fragmented, buffered, etc. during transmission, so there is no guarantee that the data will be delivered to the destination application in the same packets in which it was sent by the source. I want to receive the data at the destination in exactly the same chunks as they were sent by the source app. In short: how do I properly implement TCP message framing in INET?
The trick is that you have to create a buffer at the application side and put all received data chunks into that buffer. There is a special queue class in INET (ChunkQueue) that allows you to queue received data chunks and merge those chunks automatically into a bigger chunk that was originally sent by the application layer at the other side.
Practically, you feed all the received data into the queue, and you can ask the queue whether it contains enough data to rebuild (one or more) chunks that were sent by the application layer on the other side. You can use the queue's has() method to check whether at least one application-layer data chunk is present, and then get it out with the pop() method.
Even better, there is a special callback interface (TcpSocket::ReceiveQueueBasedCallback) that can automatically put all the received chunks into a queue for you.
If you implement that interface in your application, you just have to implement the socketDataArrived(TcpSocket *socket) method and check the queue content regularly (each time data arrives) to see whether enough data is present to be able to deliver the original chunks to the application. There is an example of this in the Ldp protocol implementation in INET:
void MyApp::socketDataArrived(TcpSocket *socket)
{
    auto queue = socket->getReceiveQueue();
    while (queue->has<MyAppMessage>()) {
        auto header = queue->pop<MyAppMessage>();
        processMyAppMessageFromTcp(header);
    }
}

When does a broken pipe occur in a TCP stream?

I am trying to write an echo server in Rust.
use std::net::{TcpStream, TcpListener};
use std::io::prelude::*;

fn main() {
    let listener = TcpListener::bind("0.0.0.0:8000").unwrap();
    for stream in listener.incoming() {
        let stream = stream.unwrap();
        println!("A connection established");
        handle_connection(stream);
    }
}

fn handle_connection(mut stream: TcpStream) {
    let mut buffer = [0; 512];
    stream.read(&mut buffer).unwrap();
    println!("Request: {}", String::from_utf8_lossy(&buffer[..]));
    stream.write(&buffer[..]).unwrap();
    stream.flush().unwrap();
}
The first request with nc localhost 8000 works as expected, but subsequent requests aren't answered. What am I doing wrong? Is the problem in how the server reads requests from clients? There is no error on the server side.
I am sending data by typing them on the terminal:
$ nc localhost 8000
hi
hi
hello
# no response
# on pressing enter
Ncat: Broken pipe.
A 'Broken pipe' message happens when you write to a stream where the other end has been closed. In your example, your handle_connection routine reads a single buffer from the client, copies that back to the client, and then returns, which will close the stream. When you run netcat from the terminal like that, the terminal defaults to line buffering, so each line you type will be sent to the server as a single write.
The first line is sent, read by the server, echoed back, and then the server closes the connection. Netcat gets a second line, writes that to the socket, and gets a 'Broken pipe' because the server has closed the connection.
If you want your server to read multiple messages, you need to have your handle_connection routine loop, reading from the stream until it gets an EOF.

BizTalk: Knowing when a send port has finished writing a file

I have a problem with a send port and an application: "The process cannot access the file because another process has locked a portion of the file."
I guess the problem is that while the BizTalk send port is writing a file, the application picks up this file and processes it.
My scenario:
I have an orchestration with a file send port that writes a file to a location.
After this port, I have another send port that calls an application to pick up the written file and process it.
I think that while the file send port is writing and has not yet finished, the orchestration does not wait but continues to the next step (calling the application), and this leads to the above error.
Is my assumption correct?
And how can I solve this problem?
You are absolutely correct: your orchestration basically throws the message at your send port and continues. But you can change this behavior, and here is a really simple solution:
* Set your logical send port to request delivery notification.
Now your orchestration will wait for the delivery ACK.
* To make things cleaner, create a scope and catch the Microsoft.XLANGs.BaseTypes.DeliveryFailureException, which occurs when you don't get an ACK.
* Also add a Suspend Orchestration shape in your catch block so you can resume your orchestration if your message doesn't reach its destination :)
This works with both the File and FTP protocols (I didn't test others).
