Why does dcm4che create huge files when we send DICOM from the CharruaSoft sendscu tool using a different transfer syntax? - dicom

I have a few 16-bit and 8-bit DICOM files which I am transferring to a dcm4che StoreSCP using the CharruaSoft sendscu tool.
It works fine for 16-bit files, but a simple 8-bit 2 MB file generates a huge 90 MB file.
I tried sending with StoreSCU from dcm4che itself and it works fine.
But from CharruaSoft SendSCU it creates huge files.
Steps to reproduce:
Download CharruaSoft SendSCU.
Set up the dcm4che StoreSCP tool's Maven project.
Run the main method with the proper arguments given in --help.
Use CharruaSoft SendSCU to send a 16-bit DICOM; it works fine.
Now send an 8-bit DICOM; it works but creates a huge file (in my case, 2 MB became 90 MB).
At first I thought it could be a CharruaSoft SendSCU problem, but CharruaSoft SendSCU is able to send to other SCPs (e.g. mymedicalimages.com) properly.
Has anyone faced similar issues?
Edit:
If I select JPEG lossy 8-bit compression in CharruaSoft sendscu, it works and doesn't create a huge 90 MB file.
But I have no control over the CharruaSoft SendSCU tool. I want the Java dcm4che SCP to handle that.
Edit 2:
It is also fine if I just override the transfer syntax with the correct one instead, so that the DICOM file is saved at its exact size.

I debugged your issue with SendSCU.
I got an image with JPEG 2000 Lossy compression, established a connection to my SCP, and pushed the image.
Following is the Associate log:
Implementation Version: CharruaSoft
Maximum PDU Size: 16384
Called AE Title: remote
Calling AE Title: local
Presentation Contexts: 1
Presentation Context: 1 [Proposed]
Abstract: CT Image Storage
Transfer: Explicit VR Little Endian
Transfer: JPEG 2000 Image Compression
Transfer: Implicit VR Little Endian: Default Transfer Syntax for DICOM
Note that SendSCU proposes just one presentation context (PC) with three transfer syntaxes in it. It is then up to the SCP which TS to accept. The good thing here is that the SCU auto-detects the original TS of the image to be sent.
a simple 8-bit 2 MB file generates a huge 90 MB file.
This is because your SCP accepts the first transfer syntax and sends an ASSOCIATE-ACCEPT back to SendSCU. SendSCU then (as expected) decompresses the image on the fly, hence the increase in size.
I tried sending with StoreSCU from dcm4che itself and it works fine.
I am sure StoreSCU must be proposing:
only one TS (the Lossy one), OR
multiple TS, each in a separate PC; the SCP accepts each PC and StoreSCU uses the best one (Lossy), OR
multiple TS with the Lossy TS at the top.
In any of the above cases, StoreSCU will not decompress the image and there will be no size issue. Maybe you should capture a similar log for it, as I did above.
CharruaSoft SendSCU is able to send to other SCPs (e.g. mymedicalimages.com) properly.
It is the SCP's decision which TS to accept when multiple TS are proposed in one PC. As the SCP you mentioned is hosted on the internet, it most probably prefers the Lossy TS (to improve performance and save bandwidth), hence the resultant file size is small. You should check their conformance statement. If you upload it here, I may be able to help out a bit.
If I select JPEG lossy 8-bit compression in CharruaSoft sendscu, it works and doesn't create a huge 90 MB file.
Following is the Associate log in that case:
Implementation Version: CharruaSoft
Maximum PDU Size: 16384
Called AE Title: remote
Calling AE Title: local
Presentation Contexts: 1
Presentation Context: 1 [Proposed]
Abstract: CT Image Storage
Transfer: JPEG 2000 Image Compression
Transfer: Implicit VR Little Endian: Default Transfer Syntax for DICOM
Note that JPEG 2000 is the first TS proposed here. The SCP accepts it and everything just works fine.
But I have no control over the CharruaSoft SendSCU tool. I want the Java dcm4che SCP to handle that.
I have never used the dcm4che toolkit myself, so I cannot help much here. Check the dcm4che documentation to see how you can configure which TS to accept among those proposed in the PCs. Hopefully there is a setting/switch to control that behavior. This is your only way to go if you want to handle this in the SCP on the fly.
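That said, judging from the dcm4che3 storescp sample, a rough sketch might look like the following (untested by me; the class names, port, and UIDs are assumptions to check against your dcm4che version). The idea is to register transfer capabilities that list only the transfer syntaxes your SCP should accept, so the compressed TS wins even when Explicit VR Little Endian is proposed first within the same PC:

import java.io.IOException;
import java.util.concurrent.Executors;
import org.dcm4che3.data.Attributes;
import org.dcm4che3.data.UID;
import org.dcm4che3.net.*;
import org.dcm4che3.net.pdu.PresentationContext;
import org.dcm4che3.net.service.BasicCEchoSCP;
import org.dcm4che3.net.service.BasicCStoreSCP;
import org.dcm4che3.net.service.DicomServiceRegistry;

public class RestrictedStoreScp {
    public static void main(String[] args) throws Exception {
        Device device = new Device("storescp");
        Connection conn = new Connection(null, "0.0.0.0", 11112); // port is illustrative
        ApplicationEntity ae = new ApplicationEntity("STORESCP");
        device.addConnection(conn);
        device.addApplicationEntity(ae);
        ae.addConnection(conn);

        // Accept CT Image Storage with JPEG 2000 only (no uncompressed TS),
        // so SendSCU is never forced to decompress on the fly.
        ae.addTransferCapability(new TransferCapability(null,
                UID.CTImageStorage, TransferCapability.Role.SCP,
                "1.2.840.10008.1.2.4.91",    // JPEG 2000 (lossy or lossless)
                "1.2.840.10008.1.2.4.90"));  // JPEG 2000 Lossless Only

        DicomServiceRegistry registry = new DicomServiceRegistry();
        registry.addDicomService(new BasicCEchoSCP());
        registry.addDicomService(new BasicCStoreSCP("*") {
            @Override
            protected void store(Association as, PresentationContext pc,
                    Attributes rq, PDVInputStream data, Attributes rsp)
                    throws IOException {
                // Persist 'data' to disk here; it arrives in the accepted
                // (still compressed) transfer syntax.
            }
        });
        device.setDimseRQHandler(registry);
        device.setExecutor(Executors.newCachedThreadPool());
        device.setScheduledExecutor(Executors.newSingleThreadScheduledExecutor());
        device.bindConnections();
    }
}

With no uncompressed TS in the transfer capability, the SCP should pick JPEG 2000 from SendSCU's single PC and the stored file should stay at roughly its original size.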
The other alternative is offline TS conversion with the -t switch, as explained here.
-t,--transfer-syntax <uid>
transcode sources to specified Transfer Syntax. At default use Explicit VR Little Endian
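A hypothetical invocation (the file names are placeholders; 1.2.840.10008.1.2.4.90 is JPEG 2000 Lossless Only, and encoding it requires the image codecs to be available in your installation):
dcm2dcm -t 1.2.840.10008.1.2.4.90 huge-uncompressed.dcm recompressed.dcm
This won't fix the transfer itself, but run offline over the stored files it gets them back to a sensible size.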

Related

How to combine audio and video from a stream mpd file?

I have downloaded DRM-protected audio and video files together with a stream.mpd file. The audio and video files are encrypted using a key that can be found in the stream.mpd file. So how can I decrypt them, combine the audio and video files, and make a playable mp4 file?
Just a quick check first: if the video and/or audio is protected by a standard DRM, it would not be normal for the key to be included in the mpd file, so I am guessing you are using Clearkey protection (https://github.com/Dash-Industry-Forum/ClearKey-Content-Protection).
Assuming this is the case, you can concatenate the segments into an mp4 file; see the example, and some discussion of limitations on Windows systems, here: https://stackoverflow.com/a/27017348/334402
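For illustration, with made-up segment names, the concatenation itself is just:
cat init.mp4 segment-001.m4s segment-002.m4s > combined.mp4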
You can use ffmpeg to decrypt, e.g.:
ffmpeg -decryption_key {key} -i {input-file} {output-file}
(https://ffmpeg.org/ffmpeg-formats.html#Options-1)
One thing to also be aware of is that most DASH videos will have multiple bitrate renditions, and your client will download whatever bitrate is appropriate for the device and network conditions at any point during the stream. For this reason you may end up with a mix of bitrates/resolutions, and hence quality, in the final video. If this is an issue, your client may allow you to select a particular bitrate for the entire video instead of switching.

Why is direct output to a network share much slower than intermediate buffering?

This is an Arch Linux system where I mounted a network device over SSHFS (SFTP) using GVFS, managed by the Nemo file manager. I'm using Handbrake to convert a video that lies on my SSD.
Observations:
If I encode the video using Handbrake and set the destination to a folder on the SSD, I get 100 FPS.
If I copy a file from the SSD to the network share (without Handbrake), I get 3 MB/s.
However, if I combine both (using Handbrake with the destination set to a folder on the network share), I get 15 FPS and 0.2 MB/s, both significantly lower than the available capacities.
I suppose this is a buffering problem. But where does it reside? Is it Handbrake's fault, or perhaps GVFS not caching enough? Long story short, how can the available capacities be fully used in this situation?
When accessing the file over SFTP, Handbrake will request small portions of the file rather than the entire thing, meaning it starts and finishes lots of transfers and adds that much more overhead.
Your best bet for solving this issue is to transfer the ENTIRE file to the SSD before performing the encoding. 3 MB/s is slower than direct access to an older, large-capacity mechanical drive and as such will not give you the performance you are looking for, so direct output to a network share is not recommended unless you can speed up those transfers significantly.

EDI Receive Pipeline performance issue

I have a file receive location with the EdiReceive pipeline configured to receive incoming HIPAA 5010 837 files.
The normal incoming file size is 4 to 6 megabytes, containing 3K to 5K records. The 837 schema deployed is the "multiple" version, which has subdocument_break="yes", so each file processed will generate 3K to 5K messages.
The pipeline works fine and can split the file into multiple messages as expected. For a single file, BizTalk takes less than 5 minutes to process it.
The problem is when more than 10 files are put into the incoming folder at the same time: BizTalk starts processing these files in parallel, but it takes hours to process them and the BizTalk host consumes more than 10 GB of memory.
Some other info:
The BizTalk host is a dedicated 64-bit receive host.
No file locks by other applications were found.
The batching settings in the file adapter are: number of messages in a batch = 1; maximum batch size = 10240000.
"Rename file while reading" is checked.
My question is: is this performance normal? How can I improve it?
You are correct: 5K messages are not really the issue; it's 5 batches of 5K messages at the same time that's causing the problem.
To serialize the debatching, you can use an Ordered Delivery Two-Way Send Port with a Loopback Adapter which debatches the EDI on the receive side. In this case, the initial Receive Location would be a PassThrough.
You can find several Loopback Adapters here: http://social.technet.microsoft.com/wiki/contents/articles/12824.biztalk-server-list-of-custom-adapters.aspx#jjj
BizTalk isn't really made to process multiple large files at once, and the file adapter doesn't have any built-in way to limit how many files it pulls at once.
There's a commercial solution available to help handle this (disclosure: I work for Tallan and work on this solution) called the T-Connect EDI Splitter (https://www.tallan.com/products/t-connect/edi-file-splitter/). The use case is splitting the files on a pipeline into more manageable chunks to be consumed elsewhere. This is not a trivial task, unfortunately.
If your files are small enough to process without splitting them before they hit the EDI receive pipeline (you don't need to split them further, you just need to process them one at a time), you'll have to come up with a more complicated messaging flow to deal with that: receive them using a PassThrough pipeline, send them somewhere that can just consume them, then poll them using a second receive location that offers more precise control over polling.
You could also just write your own adapter that offers polling and interval settings, but that's much more complicated and messy.
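If writing an adapter is too heavy, a small external drip-feeder can give you the same serialization from outside BizTalk. Here is a minimal sketch in Java (the folder paths, the *.edi glob, and the 5-minute pause are all made-up values to tune for your environment):

import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
import java.util.concurrent.TimeUnit;

// Hypothetical drip-feeder: releases one EDI file at a time into the
// BizTalk receive folder, so only one interchange is debatched at once.
public class DripFeeder {
    public static void main(String[] args) throws IOException, InterruptedException {
        Path staging = Paths.get("C:/edi/staging");     // where files arrive
        Path pickup  = Paths.get("C:/edi/biztalk-in");  // BizTalk receive location
        while (true) {
            boolean moved = false;
            try (DirectoryStream<Path> files = Files.newDirectoryStream(staging, "*.edi")) {
                for (Path f : files) {
                    // Same volume, so the move is atomic and BizTalk never
                    // sees a half-written file.
                    Files.move(f, pickup.resolve(f.getFileName()),
                            StandardCopyOption.ATOMIC_MOVE);
                    moved = true;
                    // Give BizTalk time to finish debatching this
                    // interchange before releasing the next one.
                    TimeUnit.MINUTES.sleep(5);
                }
            }
            if (!moved)
                TimeUnit.SECONDS.sleep(10); // nothing staged yet; poll again
        }
    }
}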

DVB Recording of a channel

I'm trying to record a DVB channel with a DVB-T tuner.
I have already done a lot of research on this topic, but I can't find concrete information on what to do.
Basically, I'm already able to create my own graph with the default GraphEdit, make a tune request, and watch a channel. Converting the graph to C# code with DirectShowLib, or to C++, isn't a big problem for me.
But what I don't know is the right approach to recording the movie (without decoding it to mpeg / avi and so on).
The most important parts of the graph are some tuning-related filters; they connect to the demultiplexer (demux), and the demux will output a video and an audio stream.
The easiest way to get the MPEG stream is putting a filter before the demux, for example a SampleGrabber. There you will receive the complete transport stream as it is broadcast. But that normally contains multiple programs which are multiplexed onto the same frequency, so if you only need one program, you need to filter the other programs out of the stream.
If you only need a single program, it is probably easier to directly connect the audio and video streams coming out of the demultiplexer to a multiplexer, and write its output to a file. You need to make sure there is no decoder or any other filter between the demux and the mux. The problem is that you need to find a DirectShow multiplexer, as Windows does not contain a standard one. I don't know of any free multiplexer.
What you can also do is write the audio and video directly to files (again without decoding or anything else), then use for example ffmpeg to join the audio and video into a single file.
C:\> ffmpeg -i input.m2v -i input.mp2 -vcodec copy -acodec copy output.mpg
You probably also need to delay the audio or video stream to get them in sync.
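For example, to delay the audio by half a second relative to the video (the offset value is just for illustration):
C:\> ffmpeg -itsoffset 0.5 -i input.mp2 -i input.m2v -vcodec copy -acodec copy output.mpg
-itsoffset applies to the input that follows it, here the audio file.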
One addition: of course you can also use ffmpeg to convert the multi-program transport stream to a single-program stream.
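Something along these lines should work (the program id 1 is illustrative; use ffprobe on the capture first to see which programs it contains):
C:\> ffmpeg -i capture.ts -map 0:p:1 -c copy single-program.ts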

How to pipe raw PCM data from /dev/ttyUSB0 to the soundcard?

I'm currently working on a small microphone, connected to the PC via an FPGA. The FPGA spits a raw data stream via UART/USB into my computer. I'm able to record, play, and analyze the data.
But I can't play the "live" audio stream directly.
What works is saving the data stream in raw PCM format with a custom-made C program and piping the content of the file into aplay. But that adds a 10-second lag to the data stream... not so nice for demoing or testing.
tail -f snd.raw | aplay -t raw -f S16_LE -r 9000
Does someone have another idea how to get the audio stream into my ears faster? And why does
cat /dev/ttyUSB0 | aplay
not work? (Nothing happens.)
Thanks so far
marvin
You need an API that lets you stream audio buffers directly to the soundcard. I haven't done it on Linux, but I've used FMOD for this purpose. You might find another API in this question; SDL seems popular.
The general idea is that you set up a streaming buffer, then your C program stuffs the incoming bytes into an array whose size is chosen to balance lag against jitter in the incoming stream. When the array is full, you pass it to the API and start filling another one while the first plays.
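To make the pattern concrete, here is a minimal sketch in Java (javax.sound.sampled rather than FMOD/SDL; it assumes the serial port has already been configured, e.g. with stty, so that reading /dev/ttyUSB0 yields the raw S16_LE 9000 Hz mono samples directly):

import java.io.FileInputStream;
import java.io.InputStream;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.SourceDataLine;

public class SerialToSoundcard {
    public static void main(String[] args) throws Exception {
        // 9000 Hz, 16-bit, mono, signed, little-endian: matches S16_LE -r 9000
        AudioFormat fmt = new AudioFormat(9000f, 16, 1, true, false);
        SourceDataLine line = AudioSystem.getSourceDataLine(fmt);
        line.open(fmt, 8192); // small device buffer: trades lag against jitter
        line.start();
        byte[] chunk = new byte[1024];
        try (InputStream in = new FileInputStream("/dev/ttyUSB0")) {
            int n;
            while ((n = in.read(chunk)) > 0) {
                line.write(chunk, 0, n); // blocks while the device buffer is full
            }
        }
        line.drain();
        line.close();
    }
}

The write() call blocks when the line's buffer is full, which gives you the double-buffering behavior described above without managing the arrays yourself.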
That would seem to be the domain of the alsaloop program. However, this program requires two ALSA devices to work with, and you can see from its options that it goes to considerable effort to match the data flow of the two devices, something that you would not necessarily want to do yourself.
This Stack Overflow topic talks about how to create a virtual userspace device available to ALSA: maybe that is a route worth pursuing.
