Load DICOM files

I am trying to view (uncompressed) DICOM files using XTK, but the browser does not show anything, although the files seem to load normally.
Does it matter that the slices in my DICOM files are horizontal? In Lesson 15 at https://github.com/xtk/X#readme the slices are vertical. The DICOM files come from http://www.osirix-viewer.com/datasets/ (BRAINIX dataset).
Thanks in advance!

Everything appears to work fine with the latest XTK Edge (https://github.com/xtk/X#readme). Could you point us to a non-working data set?
You might at some point see "white noise" instead of the actual image. That most likely means the DICOM is JPEG compressed, which XTK does not support yet. To work around it, you can decompress the DICOM with DCMTK's dcmdjpeg (http://support.dcmtk.org/docs/dcmdjpeg.html).
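For example, a typical invocation looks like this (a sketch; the filenames are placeholders):
$ dcmdjpeg compressed.dcm uncompressed.dcm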
Thanks

Since the OsiriX datasets are stored using the JPEG 2000 Transfer Syntax, you are pretty much required to use GDCM:
$ gdcmconv --raw osirix_dataset.dcm output_raw.dcm
See the gdcmconv man page.
DCMTK's JPEG 2000 module is not free.
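If you need to decompress a whole series, a small shell loop along these lines should work (a sketch; the filenames and flat directory layout are assumptions):
$ for f in *.dcm; do gdcmconv --raw "$f" "raw_$f"; done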

Merging/concatenating video and keeping sound in R using AV package

I am trying to merge/concatenate multiple videos with sound, sequentially, into one video using only R. (I don't want to work with ffmpeg on the command line, as the rest of the project doesn't require it and I would rather not bring it in at this stage.)
My code looks like the following:
library(av)

dir <- "C:/Users/Admin/Documents/r_programs/"
videos <- c(
  paste0(dir, "video_1.mp4"),
  paste0(dir, "video_2.mp4"),
  paste0(dir, "video_3.mp4")
)
# encode the three inputs into a single output
av_encode_video(
  videos,
  output = paste0(dir, "output.mp4"),
  framerate = 30,
  vfilter = "null",
  codec = "libx264rgb",
  audio = videos,
  verbose = TRUE
)
It almost works: the output file is an mp4 containing the 3 videos sequentially, one after the other, but the audio present is only from the first of the 3 videos, and then it cuts out.
It doesn't really matter what the videos are. I have reproduced this issue with the videos I was using, and even with 3 randomly downloaded 1080p 30fps videos from YouTube.
Any help is appreciated & thank you in advance.
The experienced behavior (only 1 audio source) is exactly how it is designed to work. In the C source code, you can see that encode_video only takes the first audio entry and ignores the rest. Overall, audio is poorly supported by ropensci/av at the moment, as its primary focus is turning R plots into videos. Perhaps you can file a feature request issue on GitHub.
Meanwhile, why not just use the base R system() function to call FFmpeg from R? Assuming the videos have identical formats, this will also likely speed up your process significantly by using the concat demuxer + stream copy (-c copy), which the av library does not support as far as I can tell. (If the formats differ, you need the concat filter instead, which the FFmpeg concatenation documentation also covers.) A sketch is below.
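For illustration, here is a minimal sketch using base R's system2() (assuming ffmpeg is on the PATH and all inputs share identical codecs and parameters, which stream copy requires; paths are taken from the question):
dir <- "C:/Users/Admin/Documents/r_programs/"
videos <- paste0(dir, c("video_1.mp4", "video_2.mp4", "video_3.mp4"))
# the concat demuxer reads a text file listing the inputs, one per line
list_file <- tempfile(fileext = ".txt")
writeLines(sprintf("file '%s'", videos), list_file)
system2("ffmpeg", c(
  "-f", "concat",   # use the concat demuxer
  "-safe", "0",     # allow absolute paths in the list file
  "-i", list_file,
  "-c", "copy",     # stream copy: no re-encoding
  paste0(dir, "output.mp4")
))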

Why does R raster::writeRaster() generate a picture that can't be displayed in Win10?

I read my hyperspectral (.raw) file and combined three bands into "gai_out_r". Then I wrote the output as follows:
writeRaster(gai_out_r, filepath, format = "GTiff")
Finally I got gai_out_r.tif.
But why can't Win10 display this small tif, when it can display the one I exported the same way from ENVI via Save Image As > TIFF?
[Screenshots: the two tiffs as displayed by Win10]
Default Windows image-viewing applications don't support hyperspectral images. Since you are just reading and combining 3 bands from your .raw file, the resulting image will be a hyperspectral image. You need dedicated software to view hypercubes, or you can view it using Spectral Python (SPy).
In SPy, using envi.save_image will save it as an ENVI-type file only. To save it as an RGB image file (readable in Windows) we need to use other methods (for example, SPy's save_rgb).
You are using writeRaster to write a GTiff (GeoTIFF) format file. To write a standard tif file you can use the tiff package instead. With writeRaster you could also write to a PNG:
writeRaster(gai_out_r, "gai.png")
Cause of the issue:
I had a similar issue and realised that the exported .tif files had a different bit depth than .tif images I could open. The images could not be displayed using common applications, although they were not broken: I could still open them in R or QGIS. Hence, the values were coded in a way Windows did not expect.
When you type ?writeRaster you will find that there are various options when it comes to saving a .tif (or other format) using the raster::writeRaster() function. Click through to the dataType {raster} help page and you'll find there are various integer types to choose from.
Solution (write a Windows-readable GeoTIFF):
I set the following options to make the resulting .tif file readable (note the datatype option):
writeRaster(raster, filename = "/path/to/your/output.tif",
            format = "GTiff", datatype = "INT1U")
Note:
I realised your post is from 2 and a half years ago... Anyways, may this answer help others who encounter this problem.

Why won't AutoCAD print a georeferenced jpeg that is reduced?

So, on my projects I require an aerial photo of the site. I usually use ones in the public record: the USGS high-res ortho photos located here: https://earthexplorer.usgs.gov/. I have them uploaded to my server; they are TIFs and have TFWs and XMLs associated with them (I am unsure what the XML is for). I can load these into AutoCAD fine, and print them just fine. The average file size of these appears to be in the 250,000 KB range.
On some of my projects, I need more detail, so I get privately flown aerial photos of the site. These come in JPG format and are georeferenced by a .jgw. These files are about 25,000 KB depending on the site (I did not notice this at first, as I was told they are very large relative to the TIFs). When these are loaded into AutoCAD and I try to plot, the whole system freezes and won't plot for about 15-20 mins. At first I thought this was a file size issue, so I did the following in R to try to reduce the size. My code is as follows:
library(jpeg)
library(tiff)
img <- readJPEG("ortho.jpg", native = TRUE)
writeJPEG(img, target = "ortho_small.jpg", quality = 0.2)
This got the file size down to about 9,000 KB. I loaded this into AutoCAD and it still would not plot, which leads me to believe that size is not the issue. With this in mind, what property of this photo could freeze AutoCAD, and how could I fix it in R or in AutoCAD?
First, I would check out the first and third causes listed here: https://knowledge.autodesk.com/support/autocad/troubleshooting/caas/sfdcarticles/sfdcarticles/Some-OLE-objects-do-not-plot.html and see if that fixes your issue.
Second, I would convert to a PNG (in my limited experience those seem to be the most stable in AutoCAD):
library(png)
writePNG(img, target = "ortho.png")
If you really need it in JPG, I would also try the solutions here: https://www.landfx.com/kb/autocad-fxcad/images/item/1926-raster-disappear.html
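One more thing worth checking (an assumption about world files; verify for your setup): if AutoCAD relies on the world file for positioning, the converted PNG needs a matching .pgw sidecar instead of the JPG's .jgw. The file contents use the same format, so copying is enough:
# copy the JPG world file to the PNG world-file extension (same format inside)
file.copy("ortho.jgw", "ortho.pgw")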

Font Awesome 5: Differences fa-svg-with-js.css / fontawesome.min.js / fontawesome-all.min.js

I've read the official documentation, but I didn't get much information regarding fa-svg-with-js.css.
Currently, I'm using fontawesome-all.min.js, and that's all I need to get going. However, the file size is ridiculously big (680 KB).
I noticed there was another folder with a file named fa-svg-with-js.css. What is this for? Do I need it?
I also noticed there's another file called fontawesome.min.js with a much smaller file size (27 KB). How do I use this, and why is it smaller than the svg one?
I don't want to use the webfont version. Any suggestions?

Converting .pdf files to Excel (.xls)

A friend of mine doing an internship asked me 2 hours ago if I could help him avoid manually converting 462 PDF files to .xls using free online software.
I thought of a shell script using unoconv, but I didn't find out how to use it properly, and I am not sure unoconv can solve this problem, since it mainly converts files to PDF, not the reverse.
Conversion from PDF to any other structured format is not always possible and not generally recommended.
Having said that, this does look like a one-off job and there's a fair few of them (462).
It's worth pursuing if you can reliably extract text from most of them and it's reasonably structured. It's a matter of trying to get regular text output across a sample of the PDFs that you can reliably parse into a table structure.
There are plenty of tools around that target either direct or OCR-based text extraction; just google around.
One I like is pstotext from the ghostscript suite; the -bboxes option lets me get the coordinates of each word and leaves it up to me to re-assemble the structure. Despite its name, it does work on input PDFs. The downside is that it can be a bit flaky: it works on some PDFs but not others.
If you get this far, you'd then most likely need to write a shell script or program to convert that to a CSV. You can either open this directly in a spreadsheet or look for tools to convert it to XLS.
PS If he hasn't already, get the intern to ask if there's any possible way of getting at the original data that was used to create the PDFs. It will save a lot of time and effort and lead to a far more accurate result.
Update: An alternative to pstotext is the renderpdf.pl command included in the Perl CAM::PDF module. It is more robust, but just reports each text item's (x,y) position, not bounding boxes.
Other responses on a linked question suggest Tabula, too:
https://github.com/tabulapdf/tabula
I tried it and it works very well.
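If the command line is preferable to Tabula's web UI, the companion tabula-java project ships a runnable jar that can batch all 462 files; a rough sketch (the jar path is a placeholder, and the -p/-f/-o flags are from tabula-java's documentation as I recall them, so treat this as an assumption to verify):
$ for f in *.pdf; do java -jar tabula.jar -p all -f CSV -o "${f%.pdf}.csv" "$f"; done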
