Is there a way to test video playback on a mobile app?

I am working on an automated testing project for a hybrid mobile app (React Native) using Robot Framework and Appium, and I'm looking for a way to test video streaming.
Is there a way to test whether video playback in the UI works as expected?

For video apps, if you know that a video is playing, try to capture the time elapsed / time remaining from the time bar displayed in the player. Comparing screenshots is not a real solution, just a workaround.
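A minimal sketch of that time-bar check, shown with the WebdriverIO/Appium JavaScript client rather than Robot Framework; the `~elapsedTime` accessibility id is a hypothetical locator, not something from the original answer:

```javascript
// Sketch: assert playback by watching the elapsed-time label advance.
// "~elapsedTime" is a hypothetical accessibility id -- use whatever
// locator your player's time bar actually exposes.
async function assertTimeAdvances(driver) {
  const label = await driver.$('~elapsedTime');
  const t1 = await label.getText();   // e.g. "0:05"
  await driver.pause(3000);           // let the video run
  const t2 = await label.getText();   // e.g. "0:08"
  if (t1 === t2) {
    throw new Error(`Time bar did not advance (stuck at ${t1})`);
  }
}
```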

The simplest solution is to use the UI tool to take a few screenshots spaced out in time, then compare them to each other, verifying that they differ enough.
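A rough sketch of that approach, again with the WebdriverIO/Appium JavaScript client plus the pngjs and pixelmatch packages; the 3-second interval and the 1% changed-pixel threshold are arbitrary assumptions to tune for your app:

```javascript
// Sketch: verify playback by checking that frames change over time.
const { PNG } = require('pngjs');
const pixelmatch = require('pixelmatch');

async function isVideoPlaying(driver, intervalMs = 3000) {
  const before = PNG.sync.read(Buffer.from(await driver.takeScreenshot(), 'base64'));
  await driver.pause(intervalMs);
  const after = PNG.sync.read(Buffer.from(await driver.takeScreenshot(), 'base64'));

  // Count pixels that differ beyond a small per-pixel tolerance.
  const { width, height } = before;
  const changed = pixelmatch(before.data, after.data, null, width, height, { threshold: 0.1 });

  // If a meaningful fraction of the frame changed, assume the video advanced.
  return changed / (width * height) > 0.01;
}
```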

Like what @Nithin said:
1. Use a video player to screenshot the video at the 10th minute.
2. Run the Robot test and capture a screenshot at the 10th minute.
3. Use a screenshot comparison tool like Applitools to compare #2 against #1. Both should be identical.
Just my ideas; I am not sure whether such a comparison tool is reliable for comparing video screenshots.
Let me know the outcome.
I am also researching how to automate video streaming testing.

Related

Audio unable to play in Aframe unless the inspector is opened

I have been working on adding background music to a Glitch project. I followed the Aframe.io documentation to create a sound component and set it to play automatically on startup; however, when I run the project, the sound does not play automatically on my laptop. I have to open the Aframe inspector and un-pause the project for the music to begin. For some reason, I have not encountered this issue when I tried the same project on my phone: the sound played as expected once the program was up and running. I am not sure what is causing the sound to malfunction, but any help is greatly appreciated. Here is the project I am working on: https://glitch.com/edit/#!/animation-animate?path=index.html%3A15%3A23
When working with sounds natively in A-Frame, I had the same problem. One recommendation I can give you is to play sounds with the HowlerJS sound library, which is compatible with most frameworks, including A-Frame.
Here are their page and documentation; this library is very easy to use and to integrate into a project:
https://howlerjs.com/
https://github.com/goldfire/howler.js#documentation
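A minimal Howler sketch for background music; the file name and volume are placeholders, and note that browser autoplay policies (see the next answer) still apply, so playback here is gated on a first click:

```javascript
// Sketch: background music via Howler instead of A-Frame's native sound.
const music = new Howl({
  src: ['background-music.mp3'],   // placeholder path
  loop: true,
  volume: 0.5,
});

// Start on the first user interaction to satisfy browser autoplay rules.
document.addEventListener('click', () => {
  if (!music.playing()) music.play();
}, { once: true });
```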
Most modern browsers require a user interaction before audio can autoplay. You can put a div in front of your scene that starts playing your background audio when clicked (see the sketch after the links below).
Check out a related SO question / answer for video autoplay that should help:
Autoplaying videosphere from A-frame is not working on any browser(Safari/Chrome)
Another user on A-Frame discord provided this example demo:
https://aframe-autoplay-background-music.glitch.me/
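A sketch of that click-to-start overlay idea using A-Frame's own sound component; the element ids and markup are assumptions, while `playSound()` is the sound component's documented method:

```javascript
// Sketch: unlock audio with a click-through overlay.
// Assumes markup like:
//   <div id="start-overlay">Tap to start</div>
//   <a-entity id="bg" sound="src: #music; loop: true"></a-entity>
// (ids are placeholders)
const overlay = document.querySelector('#start-overlay');
overlay.addEventListener('click', () => {
  document.querySelector('#bg').components.sound.playSound();
  overlay.remove();   // get the div out of the way of the scene
}, { once: true });
```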

Is there any way to record a video of website using headless Chrome?

I know there is a way to take screenshots using many available APIs, such as Puppeteer:
https://github.com/GoogleChrome/puppeteer
But what about recording videos? My goal is to capture a CSS animation into an mp4 file.
Or am I doing it all wrong? Are there better tools for that on the server side?
https://www.npmjs.com/package/puppeteer-recorder
Something like this can work: it takes lots of screenshots and stitches them together.
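A rough sketch of the screenshot-and-stitch idea with plain Puppeteer; the URL, frame count, and viewport are placeholders, and the ffmpeg invocation shown is one common way to do the stitching, not the only one:

```javascript
// Sketch: capture frames with Puppeteer, then stitch them offline, e.g.:
//   ffmpeg -framerate 25 -i frame-%04d.png -pix_fmt yuv420p animation.mp4
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.setViewport({ width: 1280, height: 720 });
  await page.goto('https://example.com/animation', { waitUntil: 'networkidle0' });

  for (let i = 0; i < 100; i++) {
    // Capture rate is limited by screenshot speed, so timing will only
    // approximate the real animation.
    await page.screenshot({ path: `frame-${String(i).padStart(4, '0')}.png` });
  }

  await browser.close();
})();
```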
For recording video with audio, try puppeteer-stream:
https://github.com/Flam3rboy/puppeteer-stream

Creating the stereo photosphere without Unity

I'm working on a Daydream app using the Google VR SDK/NDK. To submit the app to Google Play, I need a 360-degree stereo photosphere. I've seen directions for creating this with Unity, but is there any way to create this without Unity?
I've taken a screenshot of the app in stereo mode, but I don't think that will satisfy the requirement.
Google doesn't provide any tools to capture in-app photospheres in non-Unity apps at this time. Some devs produce photospheres in modeling apps like Maya and Blender.
You could always cheat and make your "Daydream 360 degree stereoscopic image" in Photoshop. Just use the same image twice, once on top and once on the bottom.
I think others have already done this, because I have noticed a few wrong-looking previews in the store where, if I close one eye, parts of the image disappear.
If you change your mind and make one with Unity, this plugin worked nicely for me: https://www.assetstore.unity3d.com/en/#!/content/38755

Can we use iOS technologies in an Apple Watch app?

I want to create a music app in which the Watch extension shows an audio wave, so my question is: can we use iOS technologies like OpenGL in a Watch app?
You can't run any code on the watch. You can only run code in a Watch extension in your iOS app and update a relatively static UI on the watch. You could generate images in your extension for the audio wave, put them together into an animation and then update the UI with that.
It would be possible to pass some information from your iOS app to the Watch extension running on the phone, which could then update a pre-defined interface on the Watch app. However, if you are wanting to provide a real-time audio waveform, I think this could face major problems regarding latency.
Note that, as Stephen Johnson states, you could only do this by rendering static images which would then be sent to the watch for display, or by having pre-installed images in your watch interface that you rapidly show or hide to give the impression of levels changing. The latter would be a much more promising approach from a latency standpoint, and given that Apple demonstrate a circular progress indicator made up of 360 images, it might even appear to animate smoothly. However, the key question is whether the peaks would appear on the Watch screen close enough to when they actually occur in the music for the user to see them as linked.
It might be possible to pre-process the audio and build in a delay to both the display of the peaks and the audio playback to manage the communication latency—but testing that would really only be possible once you had Watch hardware in your hand.

Apply a Filter to a Camera Video Stream Published to a Media Server

I am trying to apply some information, like text and an image, as an overlay: the effect would be like a security camera, with the time and date on the video along with a PNG-based logo.
I can record video using Flex and FMS or any other media server, but I want to save a modified version of the stream being uploaded.
Use CamTwist (Mac) or WebcamMax (PC) as your video input. That program takes the webcam feed and lets you add whatever you want over it (text, date). Then use Flash Media Live Encoder, select CamTwist as your video source, and stream and save your recordings with FMLE.
Sorenson Squeeze is pretty awesome for this and is cross-platform. CamTwist is cool, but it's not really a "pro" app. That makes me sound way snobbish; it's actually pretty fun and a good suggestion for a simple result, but I haven't found anything better than Sorenson Squeeze for the features and control it gives you.
