omxplayer playing HLS stream, volume doesn't change immediately on Raspberry Pi

In Raspbian Jessie I am using omxplayer to play an HLS stream from the command line. While it is playing video, any volume command given from the keyboard/remote doesn't take effect immediately; instead, the volume only updates at the end of each .ts stream chunk, i.e. every 9-10 seconds.
Is there any way to make the volume change take effect immediately?

Related

read and copy buffer from kernel in CPU to kernel in FPGA with OpenCL

I'm trying to speed up the Ethash algorithm on a Xilinx U50 FPGA. My problem is not with the FPGA itself; it is about passing the DAG file that is generated on the CPU over to the FPGA.
I'm using this code as my test, with a few changes to support the Intel OpenCL driver. If I use only the CPU to run the Ethash (in this case xleth) program, the whole process completes. In my case, however, I first generate the DAG file on the CPU (using 4 cores it takes about 30 seconds to generate epoch number 0), and then I want to pass the DAG file (m_dag in the code) into a new buffer, g_dag, in order to send it to the U50's HBM.
I can't use only one context in this program, because I'm using two separate kernel files (.cl for the CPU and .xclbin for the FPGA), and when I try to create the program and kernel it returns error 33 (CL_INVALID_DEVICE). So I created a separate context (named g_context).
Now I want to know: how can I send data from m_context to g_context? And is that acceptable and efficient in terms of performance? (Please suggest another solution if you have one.)
My code is at this link, so if you can, please show me a code solution.
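For what it's worth, OpenCL buffers cannot be shared between two contexts, so the usual pattern is to read the buffer back into host memory from the first context and write it into a buffer created in the second one. Below is a rough sketch of that idea, written in Python with pyopencl purely for brevity (in a C/C++ host program the same two steps are clEnqueueReadBuffer followed by clEnqueueWriteBuffer); the platform indices, buffer size and variable names are placeholders, not taken from the linked code:

# Sketch: staging a buffer from one OpenCL context to another through host memory
import numpy as np
import pyopencl as cl

platforms = cl.get_platforms()
# Assumption: the CPU runtime and the FPGA (Xilinx) runtime appear as separate platforms
cpu_dev = platforms[0].get_devices(device_type=cl.device_type.CPU)[0]
fpga_dev = platforms[1].get_devices(device_type=cl.device_type.ACCELERATOR)[0]

m_context = cl.Context([cpu_dev])     # context in which the DAG is generated
g_context = cl.Context([fpga_dev])    # context that should receive it
m_queue = cl.CommandQueue(m_context)
g_queue = cl.CommandQueue(g_context)

dag_bytes = 1 << 20                   # placeholder size; the real DAG is far larger
m_dag = cl.Buffer(m_context, cl.mem_flags.READ_WRITE, size=dag_bytes)
g_dag = cl.Buffer(g_context, cl.mem_flags.READ_ONLY, size=dag_bytes)

# Stage through host memory, since the two contexts cannot see each other's buffers
host_dag = np.empty(dag_bytes, dtype=np.uint8)
cl.enqueue_copy(m_queue, host_dag, m_dag)   # device (CPU context) -> host
cl.enqueue_copy(g_queue, g_dag, host_dag)   # host -> device (FPGA context)
g_queue.finish()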

Intermittently getting connect error: Function not implemented (38) when connecting with gatttool

I'm working on a project where I need to get data from a BLE environmental sensor onto a Raspberry Pi and send it to a server at regular intervals. The more often I can send, the better. I found a script online that works with the particular type of sensor I'm working with, but it only reads the data once and doesn't update unless some device connects to and disconnects from the sensor.
So, for example, if I ran the script twice in a row it would return the same data, but if I ran the script once, then connected and disconnected from the sensor with my phone, then ran the script again, it would have new, updated data. Now, I'm trying to make this fully automated and don't want to have to keep connecting and disconnecting with my phone every time to get new data, and I've found that connecting with gatttool has the same effect as connecting and disconnecting with my phone. So I've come up with a somewhat clunky automated solution that all runs through crontab:
1. Run a script that connects to and immediately disconnects from the sensor using gatttool
2. Run the data-collection script and send the data to the server
3. Repeat as soon as possible
Step 3 is where the issue lies: I can't run this series as often as I want. The ideal interval is to collect and send data every 30 seconds, but for some reason I intermittently get an error from gatttool:
connect error: Function not implemented (38)
I get this error on every iteration of the cron schedule until I set the interval so that the scripts only run every 2 minutes, and even then I still get the error intermittently. I need the data to be consistent and definitely not as sparse as 2 minutes apart; 1 minute is the absolute maximum interval I can afford between sends.
How can I get rid of this error?
My script to connect and disconnect from the device:
import pexpect
import time

print(time.strftime("%Y-%m-%d %H:%M:%S", time.localtime()))

# Scan briefly for BLE advertisements, then stop the scan
scan = pexpect.spawn("sudo hcitool lescan")
time.sleep(5)
print(scan.terminate())

# Open an interactive gatttool session and connect to the sensor
child = pexpect.spawn("sudo gatttool -i hci0 -b E2:D8:9D:FF:72:A2 -I -t random")
child.sendline("connect")
child.expect("Connection successful", timeout=7)
print("connected!")

# Disconnect from the sensor and leave gatttool
child.sendline("disconnect")
child.sendline("quit")

# Reset the adapter from the shell (not inside the gatttool session)
pexpect.run("sudo hciconfig hci0 down")
pexpect.run("sudo hciconfig hci0 up")
print("done!")
The script that you linked to at the start of your question does not seem to be connecting to the sensor. My understanding of that script is that it scans for the advertising data from the sensor, which contains the measurement information. This is a common thing to do, and there are many different types of beacons that work this way.
I suspect that you are seeing more frequent measurements when you connect and disconnect because doing so resets the advertising, as the sensor will not be advertising while you are connected.
On the front page of the repo you linked to there is some information about how to change the measurement interval.
You said you wanted a reading every 30 seconds, so that would be a value of 1E (hex for 30) that you would need to write to that characteristic.
They suggest an app to do this with. I have used that app, and there is nothing specific about the app they point you towards. If you want an alternative, I find the nRF Connect app very good for these kinds of activities. If you have the Chrome or Chromium browser installed on your PC or Raspberry Pi, you can also do it from there by entering the URL:
chrome://bluetooth-internals/#devices
Press Start Scan -> Inspect the sensor device -> click on the 0C4C3010-7700-46F4-AA96-D5E974E32A54 service -> click on the 0C4C3011-7700-46F4-AA96-D5E974E32A54 characteristic -> enter the value (1E) -> press the Write button.
This should allow you to use their original script with the frequency of measurement you want.
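If you would rather make that one-off write from the Pi itself instead of using an app or the browser, here is a minimal sketch using the bluepy library (my suggestion, not something the repo or the steps above require); the sensor address is taken from your script and the characteristic UUID from the steps above, with standard UUID hyphenation assumed:

# Hypothetical sketch: write 0x1E (30 s) to the measurement-interval characteristic with bluepy
from bluepy import btle

SENSOR_MAC = "E2:D8:9D:FF:72:A2"                      # address from the pexpect script above
CHAR_UUID = "0c4c3011-7700-46f4-aa96-d5e974e32a54"    # interval characteristic (assumed hyphenation)

p = btle.Peripheral(SENSOR_MAC, btle.ADDR_TYPE_RANDOM)  # the sensor uses a random address
try:
    ch = p.getCharacteristics(uuid=CHAR_UUID)[0]        # look the characteristic up by UUID
    ch.write(bytes([0x1E]), withResponse=True)          # 0x1E = 30 second measurement interval
finally:
    p.disconnect()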

HLS stream from multiple FFMPEG-to-RTMP commands keeps repeating segments in VideoJS

I am building an on-demand video streaming application, driven by user interaction at the frontend, using FFMPEG and RTMP; the RTMP stream is then converted to HLS by nginx-rtmp-module with the hls_continuous flag set to true.
While running back-to-back FFMPEG commands against RTMP (i.e. once one FFMPEG command finishes executing on the RTMP stream, another FFMPEG command is executed on the same stream), I observe in the VideoJS player that some of the HLS segments keep repeating.
It would be a great help if someone could help me figure out the possible reason and how to fix it.
Thanks in advance.
Compare your code to this example
https://videojs.github.io/videojs-contrib-hls/

Error: "job pending on /dev/sda1" when plugging in USB device

On my Raspberry Pi running Raspbian (Unix-based, I think) I get the error "job pending on /dev/sda1" every time I plug in my USB flash drive. From my research it has something to do with mounting and unmounting the device, but I'm new to the command line and most other posts are over my head. What do I do to fix this error?
If it matters, I'm running a script on the Pi that writes to that flash drive.
Your desktop (LXDE for Raspberry Pi) has spotted that you have inserted a USB disc and has tried to mount it. You can enable or disable this behavior by editing the file ~/.config/pcmanfm/LXDE-pi/pcmanfm.conf and changing the line
mount_removable=1
to set '0' instead.
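(For orientation, on a stock Raspbian image that line normally sits in the [volume] section of pcmanfm.conf; after the edit the section would look roughly like this, with the other keys left at their usual defaults:)

[volume]
mount_on_startup=1
mount_removable=0
autorun=1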
You would get that error if there were more than one thing trying to mount your USB disc at the same time. The second to run will notice that the first has already started a job to mount the disc.
Are you running two copies of your desktop GUI? (You may not realize that you are, but if you have a VNC or XRDP service you probably will be.)
If that is the cause of your problem (and probably in any event), the message is harmless. Something will have mounted your USB disc (perhaps you can even see it in /media?). The only infelicity will be that your desktop won't show you the result (e.g. by displaying the drive in a file manager).

DVB Recording of a channel

I'm trying to record a DVB channel with a DVB-T tuner.
I've already done a lot of research on this topic, but I haven't found concrete information on what to do.
Basically, I'm already able to create my own graph with the default GraphEdit, make a tune request and watch a channel. Converting the graph to C# code with DirectShowLib, or to C++, isn't a big problem for me.
But what I don't know is the right approach to record the movie (without decoding it to MPEG/AVI and so on).
The most important parts of the graph are some tuning-related filters; they connect to the demultiplexer (demux), and the demux outputs a video and an audio stream.
The easiest way to get the MPEG stream is to put a filter before the demux, for example a sample grabber. There you will receive the complete transport stream as it is broadcast. That normally contains multiple programs multiplexed onto the same frequency, so if you only need one program, you have to filter the other programs out of the stream.
If you only need a single program, it is probably easier to connect the audio and video streams coming out of the demultiplexer directly to a multiplexer and write its output to a file. You need to make sure there is no decoder or any other filter between the demux and the mux. The problem is that you need to find a DirectShow multiplexer, as Windows does not include a standard one, and I don't know of any free multiplexer.
What you can also do is write the audio and video directly to files (again without decoding or anything else), and then use, for example, ffmpeg to join the audio and video into a single file.
C:\> ffmpeg -i input.m2v -i input.mp2 -vcodec copy -acodec copy output.mpg
You will probably also need to delay the audio or video stream to get them in sync.
One addition: of course you can also use ffmpeg to convert the multi-program transport stream to a single-program stream.
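For example, assuming your ffmpeg build supports mapping by program, something along these lines copies a single program out of a multi-program capture without re-encoding (the program number 1 and the file names are placeholders; check the real program IDs with ffprobe first):
C:\> ffmpeg -i capture.ts -map 0:p:1 -c copy single_program.ts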

Resources