omxplayer not loading a playlist file - pi

So I'm trying to make my Raspberry Pi play a bunch of video files on its own.
I got omxplayer set up so that it actually plays video on the screen the Pi is connected to (not in the remote terminal), and it's also working with the USB sound card.
(The Pi is connected over composite video, not HDMI, I couldn't get audio over GPIO working, and this is a Pi Zero W so there's no headphone jack. Yes, it's an odd setup.)
I also set up an sh script to play the video files one after another.
However, when one video finishes and the next starts, the Pi shows the terminal for a second while it loads the next file, and then plays it. I'd like to avoid launching a separate omxplayer command for each video like this.
A workaround hack would be to somehow blank the terminal display so I just get a second of black screen before the next video. But what I'd really like is to just make a playlist file and play that.
I tried M3U, M3U8, XSPF, and even HTML, but got nothing.
I saw a script that mentioned a .pls playlist, so I tried making one of those; it did seem to load something, but never actually played it.
I tried both an absolute file path and just the bare file name, and got the same result.
Temporary hacks would be somehow blanking the terminal display on the Pi itself, or using something like ffmpeg to combine all my videos into one huge file. But I think a playlist file will be better in the long run; I just have to figure out which playlist format omxplayer is happy with.

Related

Ableton will only play sound from library but not in workspace

I can only get sounds to play from the library in Ableton, but when I move something into the workspace it remains mute.
I have a Novation MIDI keyboard hooked up, so it does most things automatically from the keyboard. I tried the keyboard with Arturia and it worked, so I know the problem is somewhere in Ableton. I've tried both the ASIO and DirectX audio drivers, with output routed to the speaker on my graphics card, since my other outputs strangely stopped working.
Does anyone have any idea what the problem could be?
I think I understand where the confusion is. An .adv file is an instrument preset file. To use it in the Session view, after dragging it onto a MIDI track, you need to create a clip with notes that play the instrument.
I agree that it can be confusing that items in the Sounds section are previewed and presented as a small waveform when you click them in the Browser, but then need to be played with notes when you actually use them in the Session view.
I think the issue is that a driver is required for the MIDI device, which means the audio settings have to be on ASIO with that driver selected. Arturia just picks the MIDI up, so I figured Ableton would do the same. With Garritan, though, I had to install a driver. But the Novation MIDI keyboard designed for Ableton has to be connected to the computer through an audio interface, because for some reason it won't read the driver over the USB connection the way Garritan does.

Is it possible to combine 2 images together into one on watchOS?

I am upgrading my watch app from the first version of watchOS. In my first version I was placing UIImageViews on top of each other, flattening them to NSData with UIImagePNGRepresentation(), and transferring that across to the watch. As we know, there are limited layout options on the Apple Watch, so if you want cool blur effects behind images, or images on images, they have to be flattened offscreen.
Now that I've re-created my targets for watchOS 2, the images transferred as NSData through [[WCSession defaultSession] sendMessage:replyHandler:errorHandler:] come back with an error saying the payload is too large!
So as far as I can see, I either have to work out how to combine images strictly via the WatchKit libs, or use the transferFile option on WCSession and still render them on the iPhone. The transferFile option sounds really slow and clumsy, since I would have to render the image, save it to disk on the iPhone, transfer it to the watch, and load it into something I can set on a WatchKit component.
Does anyone know how to merge images on the watch? QuartzCore doesn't seem to be available as a dependency in watch land.
Instead of sendMessage, use transferFile. Note that the actual transfer will happen in a background thread at a time that the system determines to be best.
Sorry, but I have no experience with manipulating images on the watch.

Play video from an intermediate point

I am in the middle of an application that has a module to play videos from a directory on the same web server. Everything is fine except that, while a video is streaming, if I try to drag the playhead to an intermediate point, it either snaps back to where it was (in the Flex player), keeps loading until the video actually reaches that point (in JW Player or the HTML5 player), or does nothing (in some other players). My client wants to be able to play, or start buffering, from any desired point. I read that RTMP can be used for this, but wasn't able to find a direct guide on how to do it.
Help appreciated!
If you're talking about being able to load a video file from x seconds into the video, you should look into HTTP pseudo-streaming. Here's a link to the JW Player page about it: jwplayer pseudo-streaming

Decode QR-Code in scanned PDF

I'm trying to decode QR codes in a PDF (generated by Scan2Mail with a Canon iR scanner).
I know the quality is quite poor (see the attached image), but every single iOS QR-scanning app can decode it within milliseconds when I film the image below with the iPhone camera.
I tried scanning with zbarimg and ZXing, but neither worked. Any ideas what to do? Maybe enhance the image somehow with ImageMagick?
It seems counter-intuitive that an image of an image decodes, when the original image doesn't. An app is seeing a stream of slightly different versions of the image. The original is damaged and distorted. The blur that results when the camera captures it can actually help remove the noise. Yes, zxing decodes this fine -- from the Barcode Scanner app.
Try running this through a light blur filter first.

Apply a filter to a camera video stream published to a media server

I am trying to apply some information, like text and an image, as an overlay, creating an effect like a security camera feed with the time and date on the video, along with a PNG-based logo.
I can record video using Flex and FMS (or any other media server), but I want to save a modified version of the stream being uploaded.
Use CamTwist (Mac) or WebcamMax (PC) as your video input. Those programs capture the webcam and let you add whatever you want over it (text, date). Then use Flash Media Live Encoder, select CamTwist as your video source, and stream and save your recordings from FMLE.
Sorenson Squeeze is pretty awesome for this and is cross-platform. CamTwist is cool, but it's not really a 'pro' app (that makes me sound way snobbish; it's actually pretty fun and a good suggestion for a simple result), but I haven't found anything better than Sorenson Squeeze for the features and control it gives you.
