I have several video servers connected to each other over 1-gigabit Ethernet.
Video clips can be sent from one server to another, or streams can be broadcast live.
What is the best protocol to use (UDP, TCP, ...) to achieve real-time transfer of HD video frames?
RTSP, or a newer combination like RTP over ZeroMQ?
I have a surveillance camera and I want to stream the live feed to multiple clients on the same network using WebRTC. To save compute resources and bandwidth, I want to multicast the WebRTC feed. Is this possible?
WebRTC doesn't support multicast. You might want to look into tools like webtorrent.
I'm trying to stream live video on a website and I already have a UDP MPEG-TS stream. I can't play this stream directly in HTML, so I want to convert it to an HTTP-based stream on the server and then send it to the clients. How can I do that using ffmpeg?
Any other solution is accepted too.
Thank you.
The key question is whether you need low-latency live streaming, or whether 15-30 seconds of latency is OK for you. If you don't need low latency, use ffmpeg to ingest your UDP MPEG-TS and output HLS.
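For example, something along these lines (a sketch only: the input address, segment length, and output path are placeholders, and -c copy assumes the TS already carries browser-friendly H.264/AAC, otherwise re-encode):

```
ffmpeg -i udp://0.0.0.0:1234 -c copy \
       -f hls -hls_time 4 -hls_list_size 5 -hls_flags delete_segments \
       /var/www/html/live/stream.m3u8
```

Serve the resulting .m3u8 playlist and .ts segments from any HTTP server; the browser then plays the playlist with an HLS player such as hls.js (or natively in Safari).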
For low-latency live streaming to web browsers, you will have to install media server software such as Wowza, Unreal Media Server, Red5 or similar.
The media server will ingest your UDP MPEG-TS and convert it to WebRTC streams playable by web browsers.
My question is regarding an ESP8266 board and the ESP-touch technology.
ESP-touch uses the length field of a UDP packet to broadcast the Wi-Fi SSID and password from a device (like a smartphone) to the chip (in my case the ESP8266).
I want to turn this around, more specifically:
I want the ESP8266 chip to broadcast UDP packets with some sort of identifier number in the length field of the UDP packet, without being connected to any Wi-Fi network. These UDP packets are then received by an app on a smartphone, so the identifier number can be extracted and used on the smartphone.
I am relatively new to this topic and do not know if this can work.
When I try to find information online, it all says that the first step is to connect the chip to a Wi-Fi network. But I don't want that. The smartphone and the chip don't know each other and are not connected in any way. So I want this type of "broadcasting" so that the smartphone can receive the packet without really being connected to the chip.
I guess there must be some way to make it work like I explained above, but I can't find out how.
I don't need the chip to send UDP packets specifically; it can be any type of packet. I took UDP packets as an example because the ESP-touch technology, which is more or less similar, already exists.
The important thing is that the packet I send has a field where I can put some identifier number (not encrypted), which can then be received by another device like a smartphone, where this identifier number is extracted.
For clarification: I don't need to use ESP-touch or anything related to it; I only mentioned this technology as an example. I just want to achieve the behavior described above and shown in the example picture. :)
No, it's not possible to send any packets without being connected to the network. ESP-touch, TI SmartConfig, and similar technologies use monitor mode. As the name suggests, in this mode one can listen for packets but can't send them.
ESP-NOW provides data flow between ESP devices without a connection via a router. It is another feature of the Espressif API. There are tutorials for the ESP8266 and ESP32:
https://randomnerdtutorials.com/esp-now-esp8266-nodemcu-arduino-ide/
https://www.instructables.com/ESP32-With-ESP-Now-Protocol/
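A minimal broadcast sketch for the ESP8266 Arduino core might look roughly like this (the identifier value, channel, and one-second interval are arbitrary examples; note that only other ESP-NOW capable ESP chips can receive it, not a phone):

```cpp
#include <ESP8266WiFi.h>
#include <espnow.h>

// Broadcast MAC address: every ESP-NOW receiver in range will see the packet.
uint8_t broadcastAddress[] = {0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF};

void setup() {
  Serial.begin(115200);
  WiFi.mode(WIFI_STA);                 // ESP-NOW works without joining an AP
  if (esp_now_init() != 0) {
    Serial.println("Error initializing ESP-NOW");
    return;
  }
  esp_now_set_self_role(ESP_NOW_ROLE_CONTROLLER);
  esp_now_add_peer(broadcastAddress, ESP_NOW_ROLE_SLAVE, 1 /* channel */, NULL, 0);
}

void loop() {
  uint32_t identifier = 12345;         // example identifier number to broadcast
  esp_now_send(broadcastAddress, (uint8_t *)&identifier, sizeof(identifier));
  delay(1000);
}
```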
I am developing a logic core to perform data transfer between an FPGA and a PC over Ethernet, using a LAN8710 PHY on my FPGA board.
I've managed to transfer some UDP data packets from the FPGA to the PC. It's a simple core that complies with the PHY's transfer requirements. It builds the UDP packet and transfers it to the PC.
To check reception on the PC I am using Wireshark and, as said above, I receive the packets correctly. I've also checked reception with a simple UDP receiver I wrote myself.
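For reference, the receiver is nothing more than a minimal blocking socket loop along these lines (a sketch using POSIX sockets; the port is an example and must match the destination port the FPGA core writes into its UDP header):

```cpp
#include <arpa/inet.h>
#include <cstdio>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

int main() {
    // Plain blocking UDP socket bound to all interfaces.
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    if (sock < 0) { perror("socket"); return 1; }

    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(5000);               // example port
    if (bind(sock, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) < 0) {
        perror("bind"); return 1;
    }

    char buf[2048];
    for (;;) {
        ssize_t n = recv(sock, buf, sizeof(buf), 0);
        if (n < 0) { perror("recv"); break; }
        std::printf("received %zd bytes\n", n);   // count/inspect incoming datagrams
    }
    close(sock);
    return 0;
}
```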
But I've noticed that I only receive these packets when Wireshark is running on the PC. That is, if Wireshark is on, my application receives the packets too, and the counter of received packets in the following picture increases. (This picture is not mine, just one from the internet.)
http://i.stack.imgur.com/wsChT.gif
If I close Wireshark, the PC stops receiving packets and the counter of received packets stops increasing. My application stops receiving too.
Although I'm a novice on networking topics, I suspect that this issue is on the PC side. It seems like Wireshark is "opening/closing" the Ethernet communication channel, or something like that. Does anyone know about this issue?
To build a functional core to transfer data between a PC and the FPGA, I've developed a core to transmit and receive UDP packets. The next step will be an ARP implementation (to let the PC identify my FPGA board, as I understand it). What protocols are necessary to perform full-duplex data transfer between these two devices?
Thank you very much in advance,
migue.
Check whether you get the appropriate receive interrupt at the Ethernet driver level on the PC side for a single packet transmitted by the FPGA. If you do not get the receive interrupt, check on the transmit side (FPGA) for the appropriate transmit interrupts for the packet being transmitted. This should help you corner the issue.
As far as I know, Wireshark is just a packet analyzer/sniffer. However, if Wireshark is suspected, one option is to try an alternative packet sniffer to rule out any such scenario.
A handy tool for diagnosing network problems and viewing network statistics is netstat. netstat -sp udp lists the statistics for UDP only. There are many other parameters that can be used with netstat for diagnosis.
After many months I solved it; I'm posting to help anyone stuck at the same point.
I finally figured out that Wireshark uses a capture library (libpcap on Linux, WinPcap/Npcap on Windows) to access the computer's network link layer. This library lets Wireshark sniff all incoming and outgoing packets on a specified network device. To do this, it first has to OPEN the network device, by default in promiscuous mode, so the NIC accepts frames it would otherwise filter out, and that's why my program only worked while Wireshark was open.
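If you want to reproduce what the capture library does without Wireshark, a rough sketch on Linux looks like this (AF_PACKET raw socket, run as root; "eth0" is just an example interface name):

```cpp
#include <arpa/inet.h>
#include <cstdio>
#include <linux/if_ether.h>
#include <net/if.h>
#include <netpacket/packet.h>
#include <sys/socket.h>
#include <unistd.h>

int main() {
    // Raw socket that sees every Ethernet frame, like a capture library does.
    int sock = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    if (sock < 0) { perror("socket"); return 1; }

    // Put the interface into promiscuous mode so the NIC stops filtering
    // frames whose destination MAC is not its own.
    packet_mreq mr{};
    mr.mr_ifindex = if_nametoindex("eth0");   // example interface name
    mr.mr_type = PACKET_MR_PROMISC;
    if (setsockopt(sock, SOL_PACKET, PACKET_ADD_MEMBERSHIP, &mr, sizeof(mr)) < 0) {
        perror("setsockopt"); return 1;
    }

    unsigned char frame[2048];
    ssize_t n = recv(sock, frame, sizeof(frame), 0);   // read one raw frame
    if (n > 0) std::printf("captured a %zd-byte frame\n", n);
    close(sock);
    return 0;
}
```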
Regards.
I'm using the lwIP stack on my embedded platform. I have connected the board to my PC via Ethernet. My application running on the board dumps image data out over Ethernet. The PC application waits for a header; after the header it decodes the data and displays the image.
This is for debugging purposes only. My images are 4 MB each and I receive 20 frames per second, so that is 80 MB of data per second.
Is it advisable to use TCP or UDP?
I tried using TCP, but my send buffers become full and it waits around 200 ms to receive an acknowledgement. Meanwhile I lose 5-6 images coming from the sensor. Can this be fixed if I use UDP?
Thanks,
Sathya
I suggest you apply some kind of compression to your images before sending them over the network.
That said, if you use UDP you may get a better transfer rate, but you do need receiving code that can handle lost packets (discard the image, request a resend, or pad the affected area).
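A minimal sketch of the kind of per-datagram header that makes loss handling possible (the field names and sizes are illustrative only, not anything lwIP provides): tag every datagram with a frame number and a chunk index so the receiver can tell which image a chunk belongs to and notice when one is missing.

```cpp
#include <cstdint>

// Example application-level header prepended to every UDP datagram.
struct ChunkHeader {
    uint32_t frame_id;      // which image this chunk belongs to
    uint16_t chunk_index;   // position of the chunk within the frame
    uint16_t chunk_count;   // total number of chunks in this frame
    uint16_t payload_len;   // bytes of image data following this header
};

// Receiver-side check: once the sender moves on to the next frame_id,
// decide whether the previous image arrived completely or must be
// discarded / re-requested / padded.
bool frame_complete(const bool received[], uint16_t chunk_count) {
    for (uint16_t i = 0; i < chunk_count; ++i)
        if (!received[i]) return false;   // at least one chunk was lost
    return true;
}
```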