Short of recording a video, I want to save a series of frames from a VideoDisplay. I know how to grab a single photo from the VideoDisplay, but I'm more concerned with how to grab a series of BitmapData objects to form a small video clip. I can run a loop and capture a bunch of BitmapData objects, but is this the correct approach, or is there a better way? I'm afraid I might grind the app to a halt. I'm not really interested in recording an FLV.
Found some answers by reading the code of FLVRecorder (code.google).
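For other readers, here is a minimal sketch of the per-frame capture idea: draw the display object into a fresh BitmapData on each ENTER_FRAME rather than in a tight loop, so the app stays responsive. The names videoDisplay, frames and captureFrame are my own placeholders, not part of any library.

import flash.display.BitmapData;
import flash.events.Event;

// One BitmapData snapshot per frame, captured from the VideoDisplay.
private var frames:Vector.<BitmapData> = new Vector.<BitmapData>();
private var maxFrames:int = 150; // roughly 5 seconds at 30 fps

private function startCapture():void
{
    addEventListener(Event.ENTER_FRAME, captureFrame);
}

private function captureFrame(event:Event):void
{
    // One draw() per frame instead of a blocking loop.
    var snapshot:BitmapData = new BitmapData(int(videoDisplay.width), int(videoDisplay.height));
    snapshot.draw(videoDisplay);
    frames.push(snapshot);

    if (frames.length >= maxFrames)
    {
        removeEventListener(Event.ENTER_FRAME, captureFrame);
    }
}

Memory is the main constraint with this approach: 150 uncompressed 640x480 frames is already well over 150 MB, so keep clips short or downscale before storing.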
I have a 3D environment done in Unity which I want to have as an Item in Qt (QML). I've tried a few different paths, but none has proved efficient enough, or I've been unable to get it to work.
My current working solution is to do the following each frame:
In Unity, use ReadPixels on my RenderTexture (GPU) to get a regular Texture (RAM).
Encode it to JPG and send the byte array through a TCP socket.
In Qt, instantiate a QImage from the data and save it for later use.
In the render function of QQuickFramebufferObject::Renderer, use glTexImage2D to render the image to my active texture.
Obviously this is not an optimal solution. It runs at maybe 10 fps with a 128x64 texture (for testing). My understanding is that the bottleneck is transferring data from the GPU and back.
In my latest attempts I have tried to get the ID of the RenderTexture using renderTexture.GetNativeTexturePtr(). Then in Qt I'm trying to get the pixel data through glGetTexImage, but I keep getting zeros in the data. When I later use glDrawPixels, the Qt application crashes.
So my question now is, does anyone know if it's possible to share the texture between processes, and if so, how?
I'd like to take multiple video streams and display them one at a time, with the ability to swap between them. I was thinking about taking the video output from OBS and streaming it to a private server using RTMP and nginx. Then I'd write some code (C/C++ maybe) to swap which stream is being displayed.
My first question is, would this even work? Would I be able to process the video being streamed to the server using this method, or would I need to send it to the server a different way? (preferably still using OBS)
My second question is, what would be a good place to get started for processing the streams? Are there any tutorials or forums that could be helpful?
I've never done any sort of video processing, so if I'm missing a key component I apologize ahead of time.
If I understand you correctly, OBS gives you a couple of options to achieve this. You can create a different scene for each video input and select a scene to display the input of your choice. If you use the Studio version, it has built-in transition effects. Alternatively, you can add all the video sources to one scene and then move your desired source to the top. Using this method you can resize the sources and display more than one at a time.
Yes, the answer above is right. Or you can use mimoLive.app if you are a macOS user.
I have seen almost all the relevant threads on almost the whole internet, and I'm still confused.
I'm working on a drawing app (Flex / AIR):
1. The user loads an image file (BitmapData > Bitmap > MovieClip base layer).
2. Adds a layer (new Sprite > MovieClip, the second object in the display list).
3. Draws on this sprite (graphics.drawCircle, etc., plus lots of other details).
4. The user can add more layers, name layers, etc. All sprites are the same size as the bitmap.
Q1. Now I want to save this main MovieClip (part of a UIComponent) as a SWF file, so the user can open it again and continue working, almost like I do with Photoshop.
Comment: I know it sounds too generic to ask for such a detailed thing, so please be patient.
I have been fiddling around with ByteArray, the AIR File object, flash.net.FileReference, etc.
Q2. (main question) I don't want to convert the drawn sprites to BitmapData and then to a ByteArray; (in my mind) that will convert everything to pixels, which I don't want.
Q3. Is there some automagical line that will do everything (save the drawn vectors/shapes in the sprites as they are)? Unlikely, I know. If I can read the SWF back as a MovieClip, I can check the sprites for layers and everything else; the problem is writing this to disk as a SWF file.
Please share your thoughts and feel free to guide me in all possible directions.
Thanks in advance for your valuable time.
In an ideal world you would just write:
YourByteArray.writeObject(YourCanvasSprite);
That doesn't work, however, since serialization only works with data that is readable and writable (and public). So you need to make it possible to recreate the graphics. You might be able to simplify that by using IGraphicsData (http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/display/IGraphicsData.html), as those objects should be serializable. So if you subclass Sprite, store the "drawing commands" and their parameters, and give the object a restore function, it should be possible to save it to a ByteArray, as sketched below. It's far from a one-liner, but probably a little easier than writing and reading SVG.
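A rough sketch of that idea, storing plain command records rather than IGraphicsData objects to keep serialization simple. The class name DrawingLayer and the command format are my own assumptions, not an existing API:

package
{
    import flash.display.Sprite;
    import flash.utils.ByteArray;

    // Hypothetical layer that records every drawing call so it can be replayed later.
    public class DrawingLayer extends Sprite
    {
        // Each entry is a plain object such as { method: "circle", args: [x, y, r, color] }.
        public var commands:Array = [];

        public function drawCircleCmd(x:Number, y:Number, r:Number, color:uint):void
        {
            commands.push({ method: "circle", args: [x, y, r, color] });
            graphics.beginFill(color);
            graphics.drawCircle(x, y, r);
            graphics.endFill();
        }

        // Rebuild the vector graphics from the stored commands after deserialization.
        public function restore():void
        {
            graphics.clear();
            for each (var cmd:Object in commands)
            {
                if (cmd.method == "circle")
                {
                    graphics.beginFill(cmd.args[3]);
                    graphics.drawCircle(cmd.args[0], cmd.args[1], cmd.args[2]);
                    graphics.endFill();
                }
                // ...handle the other command types your app supports
            }
        }

        // Serialize only the command list, not the Sprite itself.
        public function save():ByteArray
        {
            var bytes:ByteArray = new ByteArray();
            bytes.writeObject(commands);
            return bytes;
        }

        public function load(bytes:ByteArray):void
        {
            bytes.position = 0;
            commands = bytes.readObject() as Array;
            restore();
        }
    }
}

The resulting ByteArray can be written to disk with an AIR FileStream; note the output is your own file format rather than a SWF, so only your app will be able to reopen it.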
I'm thinking the cleanest solution to this problem would be to use SVG (maybe EPS). I'm not familiar with any library that will convert drawn vector objects into SVG, but since SVG is simply an XML format, you could study the SVG specification and create your own writer: W3C - Scalable Vector Graphics (SVG) 1.1 (Second Edition). The following open-source project may help you write one, and can also be used to read SVG into Flash: New Flex and Flash SVGWeb Components.
A less clean, and perhaps more complicated, way to solve this problem would be to write your own SWFs, as you mentioned. To do so you would have to generate the .swf file at runtime, which you could accomplish by using NativeProcess to pass arguments (namely an ActionScript file containing your drawing information, child layers with names, etc.) to mxmlc from the Flex SDK, roughly as sketched below.
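A rough sketch of the NativeProcess part, assuming an AIR app with the extendedDesktop profile. The SDK path and the generated Drawing.as / Drawing.swf file names are hypothetical placeholders:

import flash.desktop.NativeProcess;
import flash.desktop.NativeProcessStartupInfo;
import flash.events.NativeProcessExitEvent;
import flash.filesystem.File;

// Assumed locations of the Flex SDK compiler and the generated source file.
var mxmlc:File = new File("/opt/flex_sdk/bin/mxmlc");
var generatedSource:File = File.applicationStorageDirectory.resolvePath("Drawing.as");

var startupInfo:NativeProcessStartupInfo = new NativeProcessStartupInfo();
startupInfo.executable = mxmlc;

var args:Vector.<String> = new Vector.<String>();
args.push("-output");
args.push(File.applicationStorageDirectory.resolvePath("Drawing.swf").nativePath);
args.push(generatedSource.nativePath);
startupInfo.arguments = args;

var process:NativeProcess = new NativeProcess();
process.addEventListener(NativeProcessExitEvent.EXIT, function(event:NativeProcessExitEvent):void
{
    trace("mxmlc finished with exit code " + event.exitCode);
});
process.start(startupInfo);

Note that this requires the Flex SDK to be installed on the user's machine, which is a heavy dependency for a drawing app.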
I found both answers useful, and with time I realized that in both cases it might be more work than I can handle. Meanwhile, I was testing other pixel-based approaches as well, so I finally decided to go the pixel route: the user can save layers as PNG files with transparency.
If other readers come to this post, I wanted to conclude quickly: I have found a really valuable AS3 drawing library named "Graffiti". It does about 80% of the things I was hoping to do, so I am using it and tweaking it for my needs. Huge thanks and respect to all the great people who replied and gave their valuable time.
Having had a quick look at the Flex docs, I can't seem to find any reference to providing audio content to be played from a custom (possibly encrypted; don't worry, it's not that evil) container format. Is this possible, and if so, could someone point me in the right direction?
Or, if that's not possible, some way to hook into the disk/network I/O (disk is much more important in this case) of the sound-playing mechanism, so I can provide a supported container in memory from a custom wrapper.
Since Flash Player 10, it's possible to write PCM / raw audio data to a Sound object.
Basically, you call play() on an "empty" Sound object and it will start periodically dispatching a SampleDataEvent, requesting data. You can then write to the audio stream through the data ByteArray exposed by the event object.
http://help.adobe.com/en_US/FlashPlatform//reference/actionscript/3/flash/events/SampleDataEvent.html?filter_flex=4
http://www.adobe.com/devnet/flash/articles/dynamic_sound_generation/index.html
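A minimal sketch of that mechanism, generating a 440 Hz sine tone; in your case the handler would instead write samples decoded from your custom container:

import flash.events.SampleDataEvent;
import flash.media.Sound;

var phase:Number = 0;
var sound:Sound = new Sound();

// Called whenever the player needs more audio; write 2048-8192 stereo samples per call.
sound.addEventListener(SampleDataEvent.SAMPLE_DATA, function(event:SampleDataEvent):void
{
    for (var i:int = 0; i < 4096; i++)
    {
        var sample:Number = Math.sin(phase) * 0.5;
        event.data.writeFloat(sample); // left channel
        event.data.writeFloat(sample); // right channel
        phase += 2 * Math.PI * 440 / 44100; // output is always 44.1 kHz
    }
});

sound.play();

Note that this only gives you an output path for raw PCM; you still need your own code to unwrap and decode the audio from the custom container before writing it here.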
Also, if you're interested in good articles and references for audio programming in ActionScript, you might want to check out Andre Michelle's stuff:
http://blog.andre-michelle.com/
http://lab.andre-michelle.com/
A flash.media.Sound must either be:
constructed/loaded with a URLRequest, or
given its data through embedding.
There is currently no provision for directly piping MP3 (or AAC, or video) data to any "media" object, such as Sound. You can only get the Sound object to download the data for itself. There are people who are upset about this, including myself; you are not alone!
I say "currently" because it's not unthinkable that Adobe will update the API to make this possible in a future version. For now, you're best to go with the decoding-to-a-dynamic-sound workaround mentioned by Juan, if you really need to be able to do this.
And post a feature request at Adobe's bug tracker, or vote on an existing one!
I am embedding an MP3 into my Flex project for use as a sound effect, but I am finding that every time I play it, there is a delay of about half a second from when I call .play() to when you can hear the sound. This makes it awkward because I want the sound effects to sync with game events. The MP3 itself is only about a fifth of a second long, so it isn't because of the contents of the MP3.
I'm embedding with
[Embed(source="assets/Tock.mp3")]
[Bindable]
public static var TockSound:Class;
public var tock_sound:SoundAsset;
and then playing with
if (tock_sound == null) {
tock_sound = new TockSound() as SoundAsset;
}
Alert.show("tock");
tock_sound.play();
I know there's a delay because the sound plays about a half second after the Alert displays. I did consider that maybe it was the initial loading time of constructing the TockSound, but the delay is there on all the subsequent calls as well.
How can I avoid this delay on playing a sound?
Update: It turns out this delay is only present when playing the SWF on Linux. I believe it is a Linux-specific flaw in Adobe's Flash Player.
Not sure about the reason, other than that Flash has always had some bad audio latency issues. Read Tinic's blog to stay on top of this stuff: http://www.kaourantin.net/
One thing that might help: make sure your MP3 is 44.1kHz or else Flash will need to resample it.
You can actually embed a WAV file; it just takes work. You embed it as a byte array and, in FP9, dynamically construct a SWF file on the fly. Pretty horrible, but doable. :-) In FP10 you can use the dynamic sound API, so it's easy; see the sketch below.
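A sketch of the FP10 route with an embedded WAV, assuming a Flex class context and a file that is 16-bit stereo PCM at 44.1 kHz with the standard 44-byte header; the asset name and the playTock helper are mine, and a real implementation should parse the header instead of assuming it:

import flash.events.SampleDataEvent;
import flash.media.Sound;
import flash.utils.ByteArray;
import flash.utils.Endian;

[Embed(source="assets/Tock.wav", mimeType="application/octet-stream")]
public static var TockWav:Class;

public function playTock():void
{
    var wav:ByteArray = new TockWav() as ByteArray;
    wav.endian = Endian.LITTLE_ENDIAN; // WAV sample data is little-endian
    wav.position = 44;                 // skip the (assumed) canonical header

    var sound:Sound = new Sound();
    sound.addEventListener(SampleDataEvent.SAMPLE_DATA, function(event:SampleDataEvent):void
    {
        // Feed up to 8192 sample frames per request; a short final batch ends playback.
        for (var i:int = 0; i < 8192 && wav.bytesAvailable >= 4; i++)
        {
            event.data.writeFloat(wav.readShort() / 32768); // left
            event.data.writeFloat(wav.readShort() / 32768); // right
        }
    });
    sound.play();
}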
Try StandingWave
http://code.google.com/p/standingwave/
It has the ability to "cache" the sound before playing, getting rid of those delays and clicks you normally hear.
I haven't worked with audio in Flash much, but it sounds like the half-second delay might be the Flash Player opening the file and reading it into memory. You could try doing a play() and a stop() when you load the application; that might push it into memory, as in the snippet below.
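Something like this silent warm-up at application startup might do it; tock_sound and TockSound are the names from the question, and the muted play is just a guess at forcing the decode early:

import flash.media.SoundChannel;
import flash.media.SoundTransform;

// Play the clip once, muted, at startup so it is decoded and cached before the first real use.
private function warmUpSound():void
{
    if (tock_sound == null)
    {
        tock_sound = new TockSound() as SoundAsset;
    }
    var channel:SoundChannel = tock_sound.play(0, 0, new SoundTransform(0)); // volume 0
    if (channel != null)
    {
        channel.stop();
    }
}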
The other option is using the StandingWave library, which was built by the guys at Noteflight. You can get some additional control over the audio files with that library, and hopefully it'll help your delay problem.
The problem is that all MP3s have a random amount of blank time at the beginning of the file that is put there during the compression process. Modern software jukeboxes (iTunes, Songbird, etc.) compensate for this by scanning the file before it's played and determining the song's actual starting point. Your best bet for sound effects is to use .wav files, as their format allows for instant playback, at the cost of a larger file size.
You might also try http://www.mptrim.com/; they claim to be able to trim the blank space off the MP3.