I have a problem receiving a raw data stream from the LAN connection of a time-of-flight camera (Mesa SR4500) on my PC via Simulink. The IP address and port of the Simulink TCP/IP Receive block seem to be chosen correctly (they are the IP address and port of my network card). Here are the remaining settings of the Simulink block:
remote address: 192.168.1.1
port : 139
data size: [25344 1] -> should receive an array of this size due to the image resolution
data type: uint16 -> each pixel of the camera is encoded with 16 bits
byte order: BigEndian
enable blocking mode is turned on
timeout: 10 (seconds)
block sample time: 0.1 (seconds) -> camera fps = 10
And this is the error message that I'm receiving:
Block error -> Error evaluating registered method 'outputs' of MATLAB
S-Function 'stciprb' in 'decoding/TCP/IP Receive'
Caused by:
The specified amount of data was not returned within the Timeout period.
Please ensure that data is being sent to the specified port or specify a greater timeout value.
I think a greater timeout wouldn't help, because the camera is already streaming at 10 fps, so the timeout should be long enough.
Have I misunderstood any of the setting options? Has anyone worked with a similar camera?
Since you're getting a timeout error, it's probably because Simulink doesn't see your camera, so you have assigned the wrong IP address or port in the TCP/IP Receive block. The IP address and port of your network card (by that I presume you mean the network card of your PC?) won't work; you need to assign the IP of the camera.
Open the TCP/IP Receive block and click Verify address and port connectivity; that will give you an output message telling you whether Simulink sees your camera's IP and port.
By the way, I see you have set your port to 139 (that's the port for NetBIOS session services). I don't know if your camera uses some special dedicated port (for example, some cameras with very low fps send pictures through FTP on port 20, but at 10 fps I doubt that's the case with your camera). Try to assign a free port (i.e. a number between 1024 and 65535).
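If you want to take Simulink out of the picture while debugging, you could also open a raw TCP connection to the camera yourself and see whether a frame's worth of bytes arrives at all. Below is a minimal sketch in C++ using plain BSD sockets; the camera IP and data port are placeholders you would replace with the values from the SR4500 documentation, and the 25344 uint16 pixels per frame comes from your settings above.

// tcp_probe.cpp - minimal sketch: connect to the camera and count incoming bytes.
// Camera IP and port below are assumptions; adjust them to your setup.
#include <arpa/inet.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>

int main() {
    const char*  camera_ip   = "192.168.1.42";  // assumption: your camera's address
    const int    camera_port = 50000;           // assumption: your camera's data port
    const size_t frame_bytes = 25344 * 2;       // 25344 uint16 pixels per frame

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port   = htons(camera_port);
    inet_pton(AF_INET, camera_ip, &addr.sin_addr);

    if (connect(fd, (sockaddr*)&addr, sizeof(addr)) != 0) {
        perror("connect");                      // if this fails, Simulink will fail too
        return 1;
    }

    char buf[4096];
    size_t total = 0;
    while (total < frame_bytes) {               // try to read one full frame
        ssize_t n = recv(fd, buf, sizeof(buf), 0);
        if (n <= 0) break;                      // connection closed or error
        total += (size_t)n;
    }
    printf("received %zu of %zu bytes\n", total, frame_bytes);
    close(fd);
    return 0;
}

If this probe also times out, the problem is on the camera/port side rather than in the Simulink block settings.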
I have an application that creates, listens on and writes to a tap interface. The software will read(tun_fd,...) and perform some action on that data, and it will return data to the system as UDP packets via write(tun_fd,...).
I assign an IP to the interface, 10.10.10.10/24, so that a socket application can bind to it and so that the kernel will pass any packets for the virtual subnet to the tap interface.
The software generates frames with IP/UDP packets with the destination IP being the one assigned to the interface, and a source IP in the same subnet. The source and destination MAC addresses match that of the tap device. Those frames are written back to the kernel with write(tun_fd,...).
If I open said tap interface in Wireshark I will see my frames/packets as I expect to: properly formatted, expected ports, expected MACs and IPs. But if I try to read those packets with netcat -lvu 0.0.0.0 ${MY_UDP_PORT} I don't see anything.
Is this expected behavior?
Update 1
INADDR_ANY is a red herring. I have the problem even when explicitly binding to an interface / port as in this pseudo code:
#> # make_tap_gen is a fake program that creates a tap interface and pushes UDP packets to 10.10.10.10:1234
#> ./make_tap_gen tun0
#> ip addr add dev tun0 10.10.10.10/24
#> netcat -lvu 10.10.10.10 1234
Update 2
I modified my code to be able to switch to a tun as opposed to a tap and I experience the same issue (well formatted packets in Wireshark but no data in socket applications).
Update 3
In the kernel documentation for tuntap it says
Let's say that you configured IPv6 on the tap0, then whenever
the kernel sends an IPv6 packet to tap0, it is passed to the application
(VTun for example). The application encrypts, compresses and sends it to
the other side over TCP or UDP. The application on the other side decompresses
and decrypts the data received and writes the packet to the TAP device,
the kernel handles the packet like it came from real physical device.
This implies to me that a write(tun_fd,...) where the packet was properly formatted and destined for an IP assigned to some interface on the system should be received by any application listening to 0.0.0.0:${MY_UDP_PORT}
Yes, data written into the tun/tap device via write(tun_fd,...) should get passed to the kernel protocol stack and distributed to listening sockets with matching packet information, just as if the frame had arrived over a wire attached to a physical Ethernet device.
It requires that the packets be properly formed (IP checksum is good, UDP checksum is good or 0). It requires that the kernel know how to handle the packet (is there an interface on the system with a matching destination IP?). If it's a tap device it may also require that your application is properly ARP'ing (although this might not be necessary for a 'received' packet from the perspective of a socket application listening to an address assigned to the tap device).
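For reference, the IP header checksum the kernel verifies is the standard RFC 1071 one's-complement sum. Here is a sketch of how you might fill it in before the write(tun_fd,...), assuming a plain 20-byte IPv4 header with no options:

#include <cstdint>
#include <cstddef>

// Standard one's-complement checksum over the IPv4 header (RFC 1071).
// The checksum field itself must be zeroed before calling this.
static uint16_t ip_checksum(const uint8_t* hdr, size_t len) {
    uint32_t sum = 0;
    for (size_t i = 0; i + 1 < len; i += 2)
        sum += (uint32_t)((hdr[i] << 8) | hdr[i + 1]);   // 16-bit big-endian words
    if (len & 1)
        sum += (uint32_t)(hdr[len - 1] << 8);            // pad a trailing odd byte
    while (sum >> 16)
        sum = (sum & 0xFFFF) + (sum >> 16);              // fold carries back in
    return (uint16_t)~sum;
}

// Usage sketch: zero bytes 10-11 of the header (the checksum field), compute
// the sum, write it back in network byte order, then write() the finished
// frame to the tun/tap fd.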
In my case the problem was silly. While I had turned on UDP checksum verification in Wireshark, I forgot to turn on IP header verification. An extra byte swap was breaking that checksum. After fixing that I was immediately able to see packets written into the TAP device in a socket application listening on the address assigned to that interface.
I am programming a server and client program to communicate between a Windows PC using the Boost libraries and a Linux ARM BeagleBoard using the standalone Asio library. I have had successful UDP communication between the two devices for a while, but now I want to recover the port from the endpoint the server discovers when the client connects. The way the client connects is via a query:
udp::resolver resolver(io_service);
udp::resolver::query query_tx(udp::v4(), hostIP, "43210");
udp::endpoint receiver_endpoint_tx = *resolver.resolve(query_tx);
where hostIP is a string, and this works fine. Upon debugging, though, I notice that when I check the value returned by:
receiver_endpoint_tx.port()
This returns 51880. Now don't jump the gun and yell out network byte order and host byte order. I AM AWARE. The strange part is that this number, 51880, sometimes is a different number, and when I check what the server has stored in its endpoint it is a completely different number: 21743. Now I know I must be doing something wrong with the byte orders, but I tried:
//unsigned long port_long = boost::asio::detail::socket_ops::host_to_network_long(receiver_endpoint_tx.port());
//unsigned long port_short = boost::asio::detail::socket_ops::host_to_network_short(receiver_endpoint_tx.port());
And they do not give me back my original port: 43210. Neither does network-to-host. So what am I missing, and how can I recover my 43210 port on both ends? Obviously it must be there somewhere because they are successfully communicating.
Thanks in advance, sorry if noob mistake :)
Firstly, UDP is connectionless; there is no connection.
I'm not sure if I understand you correctly, but it sounds to me like you want to bind to specific port numbers. If you want the client to send a packet from port x to port y on the server, and the server should respond from port y to port x, then you need to bind the sockets to the desired ports. Alternatively you can use the constructor to bind. Not doing so will result in the OS using ephemeral ports.
Further, to get the remote endpoint that a packet was received from, async_receive_from takes a sender_endpoint reference parameter. When the read handler is called, you can retrieve the host and port from it.
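A minimal sketch of both ideas, assuming the standalone Asio naming on your board maps one-to-one to Boost.Asio (the 43210 port number comes from your question):

#include <array>
#include <iostream>
#include <boost/asio.hpp>
using boost::asio::ip::udp;

int main() {
    boost::asio::io_service io_service;
    // Bind the server socket explicitly to 43210 instead of an ephemeral port.
    udp::socket socket(io_service, udp::endpoint(udp::v4(), 43210));

    std::array<char, 1024> buf;
    udp::endpoint sender_endpoint;   // filled in with the client's (ip, port) on receive

    socket.async_receive_from(
        boost::asio::buffer(buf), sender_endpoint,
        [&](const boost::system::error_code& ec, std::size_t /*bytes*/) {
            if (!ec) {
                // port() already returns host byte order, no manual byte swapping needed
                std::cout << "packet from " << sender_endpoint.address().to_string()
                          << ":" << sender_endpoint.port() << std::endl;
            }
        });
    io_service.run();
}

If you also want the client to send from a fixed port rather than an ephemeral one, construct its socket the same way with an explicit udp::endpoint instead of letting the OS pick.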
I'm creating a simple device that sends data to a Windows PC over serial COM ports.
I'd like the software to be able to scan the available COM ports until it recognizes the device. The problem is, if the PC tries to initiate the handshake with a device other than mine, that device may interpret the commands (wrongly, of course).
The only solution I see is for my device to periodically broadcast some sort of identifier, perhaps 5 times per second or so, so the application only needs to listen for that identifier rather than risk corrupting another device also connected to a COM port. When the application loads, it listens on each available COM port until the device is recognised. Does this sound reasonable?
Thanks
IMO, whichever direction you initiate the handshake from, the problem will be the same.
If you send your handshake from your device and another application on your PC is listening on the corresponding serial port, it also risks misinterpreting the data you are sending.
So I would say that the software on both sides should be protected against incoherent data received from the outside.
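For example, one way to protect the PC side is to frame the periodic identifier so it is only accepted when both a header and a checksum match. This is purely a sketch; the magic bytes, device id and checksum scheme below are made up for illustration, not from any real protocol:

#include <cstdint>
#include <cstddef>

// Hypothetical identifier frame: 2 magic bytes, 1 device-id byte, 1 checksum byte.
// The device would emit this a few times per second; the PC scans each COM port
// for it and ignores anything that doesn't validate.
static const uint8_t MAGIC0 = 0xA5, MAGIC1 = 0x5A, DEVICE_ID = 0x42;

bool is_identifier_frame(const uint8_t* buf, size_t len) {
    if (len < 4) return false;
    if (buf[0] != MAGIC0 || buf[1] != MAGIC1) return false;   // wrong header
    uint8_t checksum = (uint8_t)(buf[0] + buf[1] + buf[2]);   // simple additive checksum
    return buf[2] == DEVICE_ID && buf[3] == checksum;         // id and checksum must match
}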
Suppose two web browsers are running on the same computer and are accessing the same website (in other words, accessing the same IP address on the same port).
How does the operating system recognize which packets are from/for which program?
Does each program have a unique id field in the TCP header? If so, what is the field called?
The two programs are not actually accessing the "same port." For purposes of TCP, a connection is defined by the tuple (src_ip,src_port,dst_ip,dst_port).
The source port is usually ephemeral, which means it is randomly assigned by the OS. In other words:
Program A will have:
(my_ip, 10000, your_ip, 80)
Program B will have:
(my_ip, 10001, your_ip, 80)
Thus, the OS can see those are different "connections" and can push the packets to the correct socket objects.
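As a quick illustration, here's a sketch (Boost.Asio; example.com:80 is just a placeholder destination) that opens two connections from the same machine and prints the two different ephemeral source ports the OS picked:

#include <iostream>
#include <boost/asio.hpp>
using boost::asio::ip::tcp;

int main() {
    boost::asio::io_service io_service;
    tcp::resolver resolver(io_service);
    tcp::endpoint server = *resolver.resolve(tcp::resolver::query("example.com", "80"));

    tcp::socket a(io_service), b(io_service);
    a.connect(server);   // the OS assigns a different ephemeral source port to each socket
    b.connect(server);

    // Same destination (ip, port), different source ports -> two distinct connections.
    std::cout << "A: " << a.local_endpoint() << " -> " << a.remote_endpoint() << "\n"
              << "B: " << b.local_endpoint() << " -> " << b.remote_endpoint() << "\n";
}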
The source port number will be different even if the destination port number is the same. The kernel will associate the source port number with the process.
When the client opens a connection to destination port 80, it uses an arbitrary unused source port on the local machine, say 17824. The web server then responds to that client by sending packets to destination port 17824.
A second client will use a second unused port number, say 17825, and so the two sockets' packets will not be mixed up since they'll use different port numbers on the client machine.
Christopher's answer is partially correct.
Programs A and B actually have a handle to a socket descriptor stored in the underlying OS's socket implementation. Packets are delivered to this underlying socket, and then any process which has a handle to that socket resource can read or write it.
For example, say you are writing a simple server on a Unix like OS such as Linux or Mac OSX.
Your server accepts a connection, at which point a connection consisting of
( src IP, src Port, dest IP, dest Port )
comes into existence in the underlying OS socket layer. You then fork a process to handle the connection; at this point you have two processes with handles to the socket, both of which can read/write it.
Typically (always) the original server will close its handle to the socket and let the forked process handle it. There are many reasons for this, but the one that is not always obvious to people is that when the child process finishes its work and closes the socket, the socket will stay open and connected if the parent process still has an open handle to it.
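A bare-bones sketch of that pattern on a POSIX system (port 8080 is arbitrary, error handling omitted):

#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

int main() {
    int listener = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(8080);               // arbitrary example port
    bind(listener, (sockaddr*)&addr, sizeof(addr));
    listen(listener, 16);

    for (;;) {
        int conn = accept(listener, nullptr, nullptr); // new (src ip, src port, dst ip, dst port)
        if (fork() == 0) {                             // child: handles this connection
            close(listener);                           // child doesn't need the listening socket
            /* ... read/write conn here ... */
            close(conn);
            _exit(0);
        }
        close(conn);  // parent closes its handle; otherwise the connection would stay
                      // open even after the child has finished and closed its copy
    }
}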
By port number.
An IP address is used to identify a computer, and a port is used to identify a process (application) within the computer. When a port is used by one process, other processes can't use it anymore. So if any packet is sent to that port, only the owner of that port can handle it.
Connections are identified by a pair of endpoints.
– Endpoint means (ip, port)
The OS assigns a random number as the source port number, so when a packet travels to the receiving side it is treated as a different process's message, since the source port numbers are different.
I've been trying to do TCP communication using my Wavecom Fastrack modem. What I want to achieve is to make the modem connect to a specified TCP server port so that I can transfer data to and from the server. I found some information on that in the user's guide.
Based on the information you can find on page 66, I created an application that opens the serial port to which the modem is connected and writes the following AT commands:
AT+WIPCFG=1 //start IP stack
AT+WIPBR=1,6 //open GPRS bearer
AT+WIPBR=2,6,11,"APN" //set APN of GPRS bearer
AT+WIPBR=2,6,0 //username
AT+WIPBR=2,6,1 //password
AT+WIPBR=4,6,0 //start GPRS bearer
AT+WIPCREATE=2,1,"server_ip_address",server_port //create a TCP client on port "server_port"
AT+WIPDATA=2,1,1 //switch to data exchange mode
This is exactly what the user's guide says. After the last command is sent to the modem, the device switches to data exchange mode, and from then on everything that is written to the serial port opened by my application should be received by the server, and everything the server sends should appear in the input buffer of that port.
The thing is that I did not manage to maintain stable bidirectional communication between the server and my modem. When I write some data to the serial port (only a few bytes), it takes a lot of time before the data appears on the server's side and in many cases the data does not reach the server at all.
I performed a few tests writing about 100 bytes to the serial port at once. Logging the data received by my server application I noticed that the first piece of data (8-35 bytes) is received after a second or two. The rest of the data appears in 2-5 seconds (either as a whole or in pieces of the said size) or does not appear at all.
I do not know where to look for the reason for that behaviour. Did I use the wrong AT commands to switch the modem to TCP client mode? I can't believe the communication can be so slow and unstable.
Any advice will be appreciated. Thank you in advance.
What OS are you running? Windows does a pretty good job of hiding the messy details of communicating with a GPRS modem; all you have to do is create a new dial-up connection. To establish the connection you can call the Win32 RasDial function. Once connected, you can use standard sockets to transfer data on a TCP port.
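As a rough sketch of the "standard sockets" part, assuming the dial-up link is already up via RasDial (the server address and port below are placeholders):

// Minimal Winsock TCP client - assumes the dial-up connection is already established.
#include <winsock2.h>
#include <ws2tcpip.h>
#pragma comment(lib, "ws2_32.lib")

int main() {
    WSADATA wsa;
    WSAStartup(MAKEWORD(2, 2), &wsa);

    SOCKET s = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in server{};
    server.sin_family = AF_INET;
    server.sin_port   = htons(12345);                       // placeholder server port
    inet_pton(AF_INET, "203.0.113.10", &server.sin_addr);   // placeholder server IP

    if (connect(s, (sockaddr*)&server, sizeof(server)) == 0) {
        const char msg[] = "hello from the PC";
        send(s, msg, sizeof(msg) - 1, 0);                   // plain TCP send, no AT commands
    }
    closesocket(s);
    WSACleanup();
    return 0;
}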
I have been using a Wavecom modem for 2 years now. As far as I know from my experience, if you are able to send some of the data then you can send all of the data.
The problem might be in the listening application which receives the data on the server side. It could be that it is unable to deal with the amount of data you are trying to send.
Try sending the same data in smaller bursts with some delay in between them; then you might receive all the data intact.
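Something along these lines, for example (a sketch for Windows; the chunk size and delay are guesses you would tune against your modem's buffering):

#include <windows.h>

// Write a buffer to an already-opened serial port handle in small bursts,
// pausing between them so the modem's buffer is not overrun.
// Chunk size and delay are assumptions - tune them for your setup.
void write_in_bursts(HANDLE port, const char* data, DWORD len,
                     DWORD chunk = 16, DWORD delay_ms = 100) {
    DWORD offset = 0;
    while (offset < len) {
        DWORD to_write = (len - offset < chunk) ? (len - offset) : chunk;
        DWORD written  = 0;
        if (!WriteFile(port, data + offset, to_write, &written, NULL))
            break;                       // write error - bail out
        offset += written;
        Sleep(delay_ms);                 // give the modem time to flush the burst
    }
}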