What does `scan` mean in CSS Media Queries?

What exactly do the scan media feature's values progressive and interlace do, in simple terms? And are these the only values available for the scan feature?

They have to do with how the device's screen draws its output. Per the spec, the scan feature "describes the scanning process of television output devices."
Source.
progressive and interlace are the only two possible values.
Progressive Scan
Progressive scanning (or noninterlaced scanning) is a way of displaying, storing, or transmitting moving images in which all the lines of each frame are drawn in sequence.
Source.
Interlaced Scan
Interlaced video is a technique of doubling the perceived frame rate of a video signal without consuming extra bandwidth. Since the interlaced signal contains the two fields of a video frame shot at two different times, it enhances motion perception to the viewer and reduces flicker by taking advantage of the persistence of vision effect.
Source.

It is used in style sheets for television. More info (book excerpt) here. If interlaced and progressive video interest you, you can read more about them here.
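To make that concrete, here is a minimal illustration of how the feature is used in a stylesheet (the rules inside the blocks are just placeholders):

    /* Interlaced TV output: thin horizontal lines can flicker, so bump sizes. */
    @media tv and (scan: interlace) {
        body { font-size: 120%; }
    }

    /* Progressive output: every line of each frame is drawn in sequence. */
    @media tv and (scan: progressive) {
        body { font-size: 100%; }
    }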

Related

Controlling trimming behavior of DirectShow's AVIMux?

When the DirectShow AVIMux is provided with two streams of data (e.g. audio and video) and one stream starts a bit before the other, is there any way to control how the AVIMux behaves? Namely, if the AVIMux gets a few video frames before the audio starts, it will actually omit those video frames from the output AVI. This contrasts with what it does when audio is missing at the end, when it includes the video frames anyway.
My sources for the audio and video are live streams (commercial capture filters I can't really improve/control), and I'd like to keep the video frames even though the audio starts a bit later.
Is there a nice way to do this? I can imagine wrapping the two filters into a custom filter with its own graph and inserting silence as necessary, but it would be awesome to not have to go to all of that trouble.
The question seems to rest on an incorrect assumption about the multiplexer dropping frames. The multiplexer looks at the video and audio data time stamps. If "a few frames before..." means that the time stamps are negative and the data is preroll data, then it is dropped and excluded from the resulting file. Otherwise it is included regardless of the actual order of the data at the input of the multiplexer, and the respective audio silence will be present at the beginning of the audio track.
That is, make sure the data is correctly time stamped and the multiplexer will get it written.
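To illustrate what "correctly time stamped" means in practice, here is a rough sketch of a push source pin stamping its samples in FillBuffer (DirectShow base classes; the class and member names are hypothetical). Samples stamped at non-negative times are written by the multiplexer; negative stamps mark preroll data, which is dropped:

    // Excerpt from a hypothetical CSourceStream-derived video pin.
    HRESULT CMyVideoPin::FillBuffer(IMediaSample *pSample)
    {
        // Frame times relative to stream start, in 100 ns units.
        REFERENCE_TIME rtStart = m_iFrame * m_rtFrameLength;
        REFERENCE_TIME rtStop  = rtStart + m_rtFrameLength;

        // ... copy the captured frame into the sample's buffer here ...

        pSample->SetTime(&rtStart, &rtStop);  // non-negative, so not preroll
        pSample->SetSyncPoint(TRUE);          // uncompressed frames are all key frames
        m_iFrame++;
        return S_OK;
    }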
tl;dr - For my use case, a frame I process not being present in the final AVI is a showstopper, and the AVI mux/demux process is complicated enough that I'm better off just assuming some small number of frames may be dropped at the beginning. So I'll likely settle on pushing a number of special frames at the beginning (identified with a GUID/counter pair encoded in the pixels) before I start processing frames. I can then look for these special frames after writing the AVI to verify that the frame where processing begins is present.
Everything I've seen leads me to believe what I originally asked for is effectively not possible. Based on file size, I think technically the video frames are written to the AVI file, but for most intents, they might as well not be.
That is, AVI players like VirtualDub and VLC, and even the DirectShow AVI Splitter, ignore/drop any video frames present before the audio starts. So I imagine you'd have to parse the AVI file with some other library to extract the pre-audio frames.
The reason I care about this is because I write a parallel data structure with an entry for each frame in the AVI file, and I need to know which data goes with which frame. If frames are dropped from the AVI, I can't match the frames and the data.
I had success with creating custom transform filters after the video/audio capture filters. These filters look at the timestamps and drop the video frames until an audio start time is established and the video frames are after that time. Then the filters downstream know that they can rely on the video frames they process being written. The drawback is that the audio filter actually delivers samples a bit delayed, so when audio starts at 100ms, I don't find out until I'm already handling the video frame at 250ms, meaning I've dropped 250ms of video data to ensure I know when video frames will have accompanying audio. Combine that with different AVI tools behaving differently when video starts more than 1 video sample duration after the audio starts, and my confidence in trying to control the AVIMux/Splitter starts to wane.
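A minimal sketch of such a gate filter, assuming a CTransInPlaceFilter-derived class and some shared variable (hypothetical here) through which the audio-side filter publishes the established audio start time; returning S_FALSE from Transform makes the base class discard the sample instead of delivering it:

    // Set by the audio-side filter once audio is established; -1 = not yet known.
    extern volatile REFERENCE_TIME g_rtAudioStart;

    HRESULT CVideoGate::Transform(IMediaSample *pSample)
    {
        REFERENCE_TIME rtStart = 0, rtStop = 0;
        if (FAILED(pSample->GetTime(&rtStart, &rtStop)))
            return S_FALSE;                  // unstamped sample: drop it

        if (g_rtAudioStart < 0 || rtStart < g_rtAudioStart)
            return S_FALSE;                  // audio not established yet: drop

        return S_OK;                         // deliver downstream unchanged
    }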
All of that leads me to just accept that the AVIMux and AVI Splitter are complicated enough to not make it worth trying to control them exactly.

High Resolution Capture and Encoding

I'm using two custom push filters to inject audio and video (uncompressed RGB) into a DirectShow graph. I'm making a video capture application, so I'd like to encode the frames as they come in and store them in a file.
Up until now, I've used the ASF Writer to encode the input to a WMV file, but it appears the renderer is too slow to process high resolution input (such as 1920x1200x32). At least, FillBuffer() seems to only be able to process around 6-15 FPS, which obviously isn't fast enough.
I've tried increasing the cBuffers count in DecideBufferSize(), but that only pushes the problem to a later point, of course.
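For reference, the kind of DecideBufferSize override meant here looks roughly like this (class name and sizes are illustrative; a larger cBuffers only adds queueing headroom between capture and encoding, which is why it merely delays the stall):

    HRESULT CMyPushPin::DecideBufferSize(IMemAllocator *pAlloc,
                                         ALLOCATOR_PROPERTIES *pRequest)
    {
        CheckPointer(pAlloc, E_POINTER);
        CheckPointer(pRequest, E_POINTER);

        pRequest->cBuffers = 8;                  // was 1; extra queueing only
        pRequest->cbBuffer = 1920 * 1200 * 4;    // one 32-bit RGB frame

        ALLOCATOR_PROPERTIES actual;
        HRESULT hr = pAlloc->SetProperties(pRequest, &actual);
        if (FAILED(hr))
            return hr;
        return (actual.cbBuffer < pRequest->cbBuffer) ? E_FAIL : S_OK;
    }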
What are my options to speed up the process? What's the right way to do live high res encoding via DirectShow? I eventually want to end up with a WMV video, but maybe that has to be a post-processing step.
There are good answers posted to your question here: High resolution capture and encoding too slow. The task is too complex for the CPU in your system, which is simply not fast enough to perform realtime video encoding in the configuration you have set it to work in.

Volume render DICOMDIR CT scan

I got a CD from the hospital that contains a head CT scan.
I am completely new to medical imaging. What I would like to do is perform a volume rendering of the CT scan.
It is in DICOMDIR format. How and where would I start?
From messing about with various tools I get the feeling that I need to extract each series into DICOM format. Is this correct and if so how would I do it?
Unless you were given the volume data, your rendering will be disappointing at best. Many institutions still acquire head CTs in separate "step-slices" rather than as volumes, so you will have a significant 'stepping' artifact.
Even if it was acquired with volume data, unless they transferred all the data to your CD, you will still be stuck with only the processed 'slab' or 'slice' images.
The best way to do a volume rendering is to actually have the volume data. "Slice image" data has most of the information dumbed down and removed. You are just getting 20 or 30 images in 256 x 256 x (8 or 16 bit greyscale) array data.
If you have a Mac, try OsiriX - it's free, open source and will do everything you need and more. If you don't, and this is a one-time thing, you could always sign up for a free demo of a commercial-grade DICOM viewer. Medical image viewing software is insanely expensive and would be impossible to sell without demos. Just claim to be working for a clinician and you'll have no problem getting working software.
I believe ImageJ will open any of the files in the DICOMDIR for you. I'm not entirely sure it can open the entire study from the DICOMDIR, but I'm fairly certain it will handle any individual files you need to open. It should also offer the option to export the images to various other formats. If you need more info, feel free to post a comment.
You can also try MeVisLab (http://www.mevislab.de/). It is free, but a bit more complex to use, and it may require two steps to get the rendering of your DICOM images.
Most probably you will have to use one of the modules they provide to convert the image, and then load the converted image and render it.
I have done this with ImageJ, but ImageJ did not support compressed DICOM files at the time; in that case you have to create your own logic to read compressed DICOM files.
Fiji and VolumeJ are also good options for volume rendering.
Try Real3d VolViCon, which is an advanced application for reconstruction of computed tomography (CT), magnetic resonance (MR), ultrasound, and X-ray images. It provides features for exporting 3D surfaces or volumes as triangular mesh files for creating physical models using 3D printing technologies. It also provides high-quality visualization, linear and angular measurement tools, and various types of markup. It takes a single raw volume file or a sequence of 2D (i.e., DICOM) files and reconstructs a 3D volume (voxel) and mesh (surface) model.
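Regarding the question of extracting each series out of the DICOMDIR: as an alternative to the GUI tools above, a short script can walk the DICOMDIR's directory records and copy out the referenced image files. This is a sketch using the pydicom library (directory names are illustrative, and compressed files may still need extra handling as noted above):

    import os
    import shutil

    import pydicom

    # A DICOMDIR is itself a DICOM file; its DirectoryRecordSequence indexes
    # the patient/study/series/image records on the disc.
    ds = pydicom.dcmread("DICOMDIR")
    base = os.path.dirname(os.path.abspath("DICOMDIR"))
    os.makedirs("extracted", exist_ok=True)

    for record in ds.DirectoryRecordSequence:
        if record.DirectoryRecordType == "IMAGE":
            # ReferencedFileID holds path components relative to the DICOMDIR.
            parts = record.ReferencedFileID
            if isinstance(parts, str):
                parts = [parts]
            src = os.path.join(base, *parts)
            shutil.copy(src, os.path.join("extracted", "_".join(parts)))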

How to split movie and play parts to look as a whole?

I'm writing software which demonstrates a video-on-demand service. One of the features is something similar to IIS Smooth Streaming - I want to adjust quality to the bandwidth of the client. My idea is to split a single movie into many, let's say 2-second, parts in different qualities and then send them to the client and play them. The point is that, for example, the first part can be in very high quality and the second in really poor quality (if the bandwidth seems to be poor). The question is - do you know any software that allows me to cut movies precisely? For example, ffmpeg splits movies in a way that makes the join visible and really annoying (whole seconds seem to be its measure of precision). I use Qt + Phonon as a player, if it matters. Or maybe you know a better way to provide such a feature, without splitting the movie into parts?
Are you sure ffmpeg's precision is in seconds? Here's an excerpt from the man page:
-t duration
Restrict the transcoded/captured video sequence to the duration specified in seconds. "hh:mm:ss[.xxx]" syntax is also supported.
-ss position
Seek to given time position in seconds. "hh:mm:ss[.xxx]" syntax is also supported.
-itsoffset offset
Set the input time offset in seconds. "[-]hh:mm:ss[.xxx]" syntax is also supported. This option affects all the input files that follow it. The offset is added to the timestamps of the input files. Specifying a positive offset means that the corresponding streams are delayed by 'offset' seconds.
Looks like it supports up to millisecond precision, and since most video is not 1000+ frames per second, this would be more than enough precision to accurately seek through any video stream.
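For example, a re-encoded 2-second cut starting at the 4-second mark could look like this (filenames illustrative). Note that stream copying (-vcodec copy) would snap the cut to the nearest keyframe, so frame-accurate splits generally require re-encoding as done here:

    ffmpeg -i movie.avi -ss 00:00:04.000 -t 00:00:02.000 part3.avi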
Are you sure this is a good idea? Checking the bandwidth and switching out clips every two seconds seems like it will only allow you to buffer two seconds into the future at any given point, and unless the client has some Godly connection, it will appear extremely jumpy.
And what about playback, if the user replays the video? Would it recalculate the quality as it replays, or do you build the video file while streaming?
I am not experienced in the field of streaming video, but it seems what I see most often is that the provider has several different quality versions of their video (from extremely low to HD), and they test the user's bandwidth and then stream at an appropriate quality.
(I apologize if I misunderstood the question.)

WMP in c# play rate

I am using WMP in my Windows application. I want to change the playback rate.
It is possible for some types of files, e.g. AVI, but it's not possible for others, e.g. WMV, MPEG etc. Is there any other way to change the rate? Please, it's urgent. Thanks in advance.
It's possible, but your choice of Windows Media Player will limit your options. Windows Media Player uses a very simple filter graph to control playback. This makes it impossible to change the rate for formats which require more complex filters. The general way to change the rate is to either repeat or drop frames in the video.
I am not sure about WMV, but if memory serves me right, WMV is just a container format like AVI, so the filter graph that is used varies from file to file.
MPEG has 3 kinds of frames; only the I-frames are complete. The P- and B-frames are not, so you can't really repeat or drop those frames easily.
I don't know how to help you further with this, but you will have more options if you use DirectShow directly, so that you can change the filter graph to duplicate/drop frames.
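For what it's worth, within WMP itself the rate is exposed on the player's settings object (WMPLib/AxWMPLib interop). A sketch of checking and setting it in C# follows, though as described above the underlying filters decide whether the change actually takes effect for a given file:

    // axWindowsMediaPlayer1 is an AxWMPLib.AxWindowsMediaPlayer control on a form.
    if (axWindowsMediaPlayer1.settings.get_isAvailable("Rate"))
    {
        axWindowsMediaPlayer1.settings.rate = 2.0;  // 2x speed; 0.5 would be half speed
    }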
