Why do different scanned images have the same size in file transfer mode? - twain

I am developing a scanner application in C++. Currently I am able to scan documents and get the images in file transfer mode. But all the scanned images have the same file size, even though the contents of the documents are different.
ICAP_IMAGEFILEFORMAT: TWFF_TIFF
ICAP_PIXELFLAVOR: TWPF_CHOCOLATE
ICAP_XRESOLUTION: 75
ICAP_YRESOLUTION: 75
ICAP_UNITS: TWUN_INCHES
ICAP_PIXELTYPE: TWPT_GRAY
ICAP_BRIGHTNESS: 0
ICAP_CONTRAST: 0
ICAP_BITDEPTH: 8
Every scanned image comes out at 327 KB. Why would this be?
Also, how can I set JPEG compression? Does file transfer mode support JPEG compression?

Probably your scanner/driver is writing uncompressed TIFF files, so the file size depends only on the dimensions of the image. If every image has the same width and height, the resulting files will be the same size.
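For example, assuming a fixed letter-size scan area: 8.5 × 75 = 638 pixels wide by 11 × 75 = 825 pixels high, at 8 bits per pixel, gives 638 × 825 × 1 byte ≈ 526 KB plus a small header, every time, regardless of what's on the page. Your 327 KB suggests a smaller fixed frame, but the principle is the same.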
All the file-transfer stuff in TWAIN is implemented by the driver (not TWAIN itself) and all the features are optional. So you need to check if your scanner/driver supports JPEG compression when transferring TIFF files. It might, it might not.
You can try setting ICAP_COMPRESSION to TWCP_JPEG after setting ICAP_IMAGEFILEFORMAT to TWFF_TIFF. If both succeed, you will probably get JPEG compression in your TIFFs, although it might be either "old-style" or "new-style" JPEG. If you don't know what that means, you probably should find out.
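If you want to try that, here is a minimal sketch of the negotiation, assuming you already have the application and source TW_IDENTITY structures and a DSM_Entry entry point resolved from the DSM; the helper name is mine and error handling is trimmed:

    #include <windows.h>
    #include "twain.h"   // plus a DSM_Entry resolved from twain_32.dll

    // Hypothetical helper: set one capability to a single TW_UINT16 value.
    static TW_UINT16 SetCapUInt16(TW_IDENTITY& app, TW_IDENTITY& src,
                                  TW_UINT16 cap, TW_UINT16 value)
    {
        TW_CAPABILITY twCap;
        twCap.Cap        = cap;
        twCap.ConType    = TWON_ONEVALUE;
        twCap.hContainer = GlobalAlloc(GHND, sizeof(TW_ONEVALUE));

        TW_ONEVALUE* pVal = (TW_ONEVALUE*)GlobalLock(twCap.hContainer);
        pVal->ItemType = TWTY_UINT16;
        pVal->Item     = value;
        GlobalUnlock(twCap.hContainer);

        TW_UINT16 rc = DSM_Entry(&app, &src, DG_CONTROL, DAT_CAPABILITY,
                                 MSG_SET, (TW_MEMREF)&twCap);
        GlobalFree(twCap.hContainer);
        return rc;
    }

    // Order matters: file format first, then compression. Anything other
    // than TWRC_SUCCESS means the driver refused the combination.
    static bool RequestJpegInTiff(TW_IDENTITY& app, TW_IDENTITY& src)
    {
        return SetCapUInt16(app, src, ICAP_IMAGEFILEFORMAT, TWFF_TIFF) == TWRC_SUCCESS
            && SetCapUInt16(app, src, ICAP_COMPRESSION,     TWCP_JPEG) == TWRC_SUCCESS;
    }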
I wrote a tool for this kind of experimenting years ago; it's still maintained and available free from Atalasoft: Twirl TWAIN Probe.
Caution: many scanners don't support File Transfer Mode (it is optional), and those that do may not support the TIFF file format (the only required file format is BMP!). If you need to support a wide variety of scanners, you'll have to use TWAIN's Native Transfer Mode or Memory Transfer Mode and write the images to file yourself, e.g. using LibTiff.
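For the write-it-yourself route, here is a sketch using libtiff, assuming an 8-bit grayscale buffer like the one your current settings would produce (the function name and the strip size are my choices, not requirements):

    #include <tiffio.h>

    // Write an 8-bit grayscale buffer as a JPEG-compressed TIFF.
    bool WriteGray8Tiff(const char* path, const unsigned char* pixels,
                        int width, int height)
    {
        TIFF* tif = TIFFOpen(path, "w");
        if (!tif) return false;

        TIFFSetField(tif, TIFFTAG_IMAGEWIDTH,      width);
        TIFFSetField(tif, TIFFTAG_IMAGELENGTH,     height);
        TIFFSetField(tif, TIFFTAG_BITSPERSAMPLE,   8);
        TIFFSetField(tif, TIFFTAG_SAMPLESPERPIXEL, 1);
        TIFFSetField(tif, TIFFTAG_PHOTOMETRIC,     PHOTOMETRIC_MINISBLACK);
        TIFFSetField(tif, TIFFTAG_ROWSPERSTRIP,    8);  // JPEG wants multiples of 8
        TIFFSetField(tif, TIFFTAG_COMPRESSION,     COMPRESSION_JPEG);

        for (int row = 0; row < height; ++row) {
            if (TIFFWriteScanline(tif, (void*)(pixels + row * width), row, 0) < 0) {
                TIFFClose(tif);
                return false;
            }
        }
        TIFFClose(tif);
        return true;
    }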

Related

Is there an indicator other than the file extension that indicates the file type?

I am trying to make a .txt file look like a .jpg file so it can be sent over Wi-Fi using an Eye-Fi SD card. The card only sends .jpg files, for several reasons. One reason is that the transmission path of a picture from the SD card to the computer looks like:
Camera writes pictures to the Eye-Fi SD card -> Eye-Fi connects to a local router -> local router uploads to the Eye-Fi servers -> Eye-Fi servers upload to your computer.
[Explanation]
There could be some filter on the server end, so I found some software that allows the user to bypass the Eye-Fi servers; now I know I am only dealing with the SD card. It's also nice to know that no one else is looking at my files. After some experimentation, I figured out that I can put .jpg files on the card and have them transmitted once a picture is taken. I also found that the pictures must be named in short format: a name no longer than 8 characters (excluding the file extension), which probably has to do with the fact that the card is formatted as FAT32 (the card can be reformatted and still works). I tried uploading a .txt file to the card, gave it a similarly formatted name, and renamed it as a .jpg file. It did transfer, which indicates to me that there is probably something other than the file extension that denotes how the file is formatted.
[Questions]
1) Is there some way I can spoof .txt files to make them look like .jpg files?
2) Is there some kind of program I can use (for Linux) to play around with values on the card so I can figure out what triggers an upload? Any ideas on what could trigger the upload?
1) Yes, there are hex values in the file that indicate it is a .jpg. If you open a .jpg file with a hex editor, you will notice header segments that carry a bunch of information about how the image was compressed, sometimes what made the image, some firmware information, etc. In the editor, you can find the bytes "FF D8"; this marker indicates the beginning of the image file. It is followed shortly by "FF C0", after which come a few bytes describing the size of the image, which (I am guessing) is used by whatever software displays the image. The end of a jpg file is denoted by the two bytes "FF D9". Fun fact: I played around with the jpg file I was using, and it seems you can put text after the "FF D9" and still have the jpg work. I thought this was neat. Source
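As a quick illustration, checking those two markers from code looks roughly like this (a sketch only; real JPEG detection should walk the segment list, and as noted above some files carry trailing data after FF D9, so the tail check is best-effort):

    #include <cstdio>

    // Rough check for the JPEG markers described above:
    // FF D8 at the start (start of image), FF D9 at the end (end of image).
    bool LooksLikeJpeg(const char* path)
    {
        FILE* f = std::fopen(path, "rb");
        if (!f) return false;

        unsigned char head[2] = {0}, tail[2] = {0};
        bool ok = std::fread(head, 1, 2, f) == 2 &&
                  std::fseek(f, -2, SEEK_END) == 0 &&
                  std::fread(tail, 1, 2, f) == 2;
        std::fclose(f);

        return ok && head[0] == 0xFF && head[1] == 0xD8
                  && tail[0] == 0xFF && tail[1] == 0xD9;
    }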
None of this was needed to get the Eye-Fi to upload the file, though. As I said in my question, the card needs the name of the file to be in short format (no more than 8 characters) with an acceptable file extension; in my case I used ".jpg". I wrote a text file and just saved it as "text.jpg". I found that there is a minimum file size required for the transfer to happen, which is strange.
My hex editor of choice for this was Bless; it is good for opening files, but I have yet to figure out whether it can open volumes. It doesn't seem like it can.

Encoding videos for use with Adobe Live Streaming

I have an original video coded at 20 Mbps, 1920x1080, 30 fps, and want to convert it down to 640x480, 30 fps, at a range of (3 different) bitrates for use by Adobe Live Streaming.
Should I use ffmpeg to resize and encode at the 3 bitrates, then use f4fpackager to create the f4m, f4f and f4x files, or just use ffmpeg to reduce the resolution and then f4fpackager to encode the relevant bitrates?
I've had several tries so far, but once encoded, the videos seem to play at a much higher bitrate than they were encoded at. For example, if I set up OSMF to play from my webserver, I'd expect my best encoded video to play at 1,500 kbps, but it's way above that.
Has anyone had any experience of encoding for use like this?
I'm using the following options with f4fpackager:
--bitrate=1428 --segment-duration 30 --fragment-duration 2
f4fpackager doesn't do any encoding; it does two things:
- fragment the mp4 files (mp4 -> f4f)
- generate a manifest (f4m) file referencing all your fragmented files (f4f)
So the process is:
- transcode your source file into all the sizes/bitrates that you want to provide (e.g. 1920x1080 at 4 Mbps, 1280x720 at 2 Mbps, etc.)
- use f4fpackager to convert the mp4 to f4f (this is the fragmentation step)
- use f4fpackager to generate the Manifest.f4m referencing the files that you generated in the previous step
The --bitrate option of f4fpackager should match the value that you used with ffmpeg; this parameter is used to generate the manifest file with the correct bitrate value for each quality.
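Putting it together, the pipeline looks something like this. The filenames and the choice of H.264/AAC are illustrative, and the f4fpackager option spellings are from memory of Adobe's docs, so check them against f4fpackager --help:

    ffmpeg -i source.mp4 -s 640x480 -r 30 -c:v libx264 -b:v 1500k -c:a aac out_1500k.mp4
    ffmpeg -i source.mp4 -s 640x480 -r 30 -c:v libx264 -b:v 1000k -c:a aac out_1000k.mp4
    ffmpeg -i source.mp4 -s 640x480 -r 30 -c:v libx264 -b:v 500k  -c:a aac out_500k.mp4

    f4fpackager --input-file=out_1500k.mp4 --bitrate=1500
    f4fpackager --input-file=out_1000k.mp4 --bitrate=1000
    f4fpackager --input-file=out_500k.mp4  --bitrate=500

Each f4fpackager run fragments one quality level and contributes its entry to the manifest, so every --bitrate value should match the -b:v value used for that file.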

High Resolution Capture and Encoding

I'm using two custom push filters to inject audio and video (uncompressed RGB) into a DirectShow graph. I'm making a video capture application, so I'd like to encode the frames as they come in and store them in a file.
Up until now, I've used the ASF Writer to encode the input to a WMV file, but it appears the renderer is too slow to process high-resolution input (such as 1920x1200x32). At least, FillBuffer() only seems to manage around 6-15 FPS, which obviously isn't fast enough.
I've tried increasing the cBuffers count in DecideBufferSize(), but that only pushes the problem to a later point, of course.
What are my options to speed up the process? What's the right way to do live high res encoding via DirectShow? I eventually want to end up with a WMV video, but maybe that has to be a post-processing step.
You have great answers posted to your question here: High resolution capture and encoding too slow. The task is too complex for the CPU in your system, which is simply not fast enough to perform real-time video encoding with the configuration you have set.

Volume render DICOMDIR CT scan

I got a CD from the hospital containing a head CT scan.
I am completely new to medical imaging. What I would like to do is perform a volume rendering of the CT scan.
It is in DICOMDIR format. How and where would I start?
From messing about with various tools I get the feeling that I need to extract each series into DICOM format. Is this correct and if so how would I do it?
Unless you were given the volume data, your rendering will be disappointing at best. Many institutions still acquire head CTs in separate "step-slices" rather than as volumes, so there you will have significant 'stepping' artifacts.
Even if it was acquired with volume data, unless they transferred all the data to your CD, you will still be stuck with only the processed 'slab' or 'slice' images.
The best way to do a volume rendering is to actually have the volume data. "Slice image" data has most of the information dumbed down and removed; you are just getting 20 or 30 images of 256 × 256 × (8- or 16-bit greyscale) array data.
If you have a Mac, try OsiriX - it's free, open source, and will do everything you need and more. If you don't, and this is a one-time thing, you could always sign up for a free demo of a commercial-grade DICOM viewer. Medical image viewing software is insanely expensive and would be impossible to sell without demos. Just claim to be working for a clinician and you'll have no problem getting working software.
I believe ImageJ will open any of the files in the DICOMDIR for you. I'm not entirely sure it can open the entire study from the DICOMDIR, but I'm fairly certain it will handle any individual files you need to open. It should also offer the option to export the images to various other formats. If you need more info, feel free to post a comment.
You can also try MeVisLab (http://www.mevislab.de/). It is free, but a bit more complex to use, and it may take two steps to get the rendering of your DICOM images: most probably you will have to use one of the widgets they provide to convert the image, and then load the converted image and render it.
I have done this with ImageJ, but ImageJ did not support compressed DICOM files at the time, so you have to write your own logic to read compressed DICOM files.
Fiji and VolumeJ are also good options for volume rendering.
Try Real3d VolViCon, an advanced application for reconstruction of computed tomography (CT), magnetic resonance (MR), ultrasound, and X-ray images. It offers features for exporting 3D surfaces or volumes as triangular mesh files for creating physical models with 3D printing technologies. It also provides high-quality visualization, linear and angular measurement tools, and various types of markup. It takes a single raw volume file or a sequence of 2D (i.e., DICOM) files and reconstructs 3D volume (voxel) and mesh (surface) models.

Get mp3 total track time using either javascript or ASP.NET

I am using the jQuery plugin below for playing MP3s:
www.happyworm.com/jquery/jplayer
However, there is a bug in Flash where the total play (track) time won't show up correctly UNTIL AFTER the whole MP3 is completely downloaded.
I wonder if there is a way to work around this and get the correct total time using either JavaScript, another Flash approach, or even a backend library in ASP.NET. Any suggestion helps. Thanks.
Are you sure that's a bug? Looking at the header definition for the MP3 format, I don't see any values for the length of the file. Generally, applications that play MP3s have to calculate the play time, and that may not be doable until the entire file is downloaded. So the behavior you're seeing from Flash might be expected.
Theoretically, if it's a fixed-bitrate file (as opposed to VBR), then knowing the bitrate (taken from the header) and the total size of the file should be enough to calculate the duration. However, the server would have to report the size of the file in the response headers (and that's not guaranteed to be accurate).
My guess is you'd need some service on the server that could calculate the length and report it to you in a separate request.
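If you go that route, the fixed-bitrate estimate above is simple to implement server-side. A minimal sketch in C++ (the function name is mine; it assumes a constant-bitrate MPEG-1 Layer III file with no leading ID3v2 tag, while VBR files would need a Xing/VBRI header or a full frame walk):

    #include <cstdio>

    // MPEG-1 Layer III bitrates in kbps, indexed by the 4-bit field
    // in the third header byte (0 = "free" format, 15 = invalid).
    static const int kBitrateKbps[16] = {0, 32, 40, 48, 56, 64, 80, 96,
                                         112, 128, 160, 192, 224, 256, 320, 0};

    // Returns the estimated duration in seconds, or -1 on failure.
    double EstimateMp3Seconds(const char* path)
    {
        FILE* f = std::fopen(path, "rb");
        if (!f) return -1;

        std::fseek(f, 0, SEEK_END);
        long size = std::ftell(f);          // total file size in bytes
        std::fseek(f, 0, SEEK_SET);

        unsigned char h[4];
        size_t n = std::fread(h, 1, 4, f);
        std::fclose(f);
        if (n != 4) return -1;

        // Frame sync is 11 set bits at the very start of the frame.
        if (h[0] != 0xFF || (h[1] & 0xE0) != 0xE0) return -1;

        int kbps = kBitrateKbps[h[2] >> 4];
        if (kbps == 0) return -1;

        // duration = total bits / bits per second
        return (size * 8.0) / (kbps * 1000.0);
    }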
