I am trying to open DICOM files in R using the following code:
library(oro.dicom)
dcmobject <- readDICOMFile(filename)
Some files open properly and I can display them. However, some files give errors of different types:
First error: For some, I get the error:
Error in file(con, "rb") : cannot open the connection
Second error: In others, I get the following error with this DICOM file: http://www.barre.nom.fr/medical/samples/files/OT-MONO2-8-hip.gz :
Error in readDICOMFile(filename) : DICM != DICM
Third error: This file gives the following error: http://www.barre.nom.fr/medical/samples/files/CT-MONO2-16-chest.gz
Error in parsePixelData(fraw[(132 + dcm$data.seek + 1):fsize], hdr, endian, :
Number of bytes in PixelData not specified
Fourth error: One DICOM file gives the following error:
Error in rawToChar(fraw[129:132]) : embedded nul in string: '\0\0\b'
How can I get rid of these errors and display these images in R?
EDIT:
This sample file gives the 'embedded nul in string...' error:
http://www.barre.nom.fr/medical/samples/files/CT-MONO2-12-lomb-an2.gz
> jj = readDICOMFile( "CT-MONO2-12-lomb-an2.dcm" )
Error in rawToChar(fraw[129:132]) : embedded nul in string: '3\0\020'
There are four different errors highlighted in this ticket:
Error in file(con, "rb") : cannot open the connection
This is not a problem with oro.dicom; the file path and/or name has simply been mis-specified.
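A quick way to rule this out is to check that the path actually resolves to a file before calling readDICOMFile (a minimal sketch; the file name is hypothetical):
library(oro.dicom)
filename <- "path/to/your/file.dcm"    # hypothetical path
if (!file.exists(filename)) {
  stop("File not found: ", filename)   # this is the situation that produces 'cannot open the connection'
}
dcm <- readDICOMFile(filename)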
Error in readDICOMFile(filename) : DICM != DICM
The file is not a valid DICOM file. That is, section 7.1 in Part 10 of the DICOM Standard (available at http://dicom.nema.org) specifies that there should be (a) the File Preamble of length 128 bytes and (b) the four-byte DICOM Prefix "DICM" at the beginning of a DICOM file. The file OT-MONO2-8-hip does not follow this standard. One can investigate this problem further using the debug=TRUE input parameter:
> dcm <- readDICOMFile("OT-MONO2-8-hip.dcm", debug=TRUE)
# First 128 bytes of DICOM header =
[1] 08 00 00 00 04 00 00 00 b0 00 00 00 08 00 08 00 2e 00 00 00 4f 52 49 47 49 4e 41 4c 5c 53 45
[32] 43 4f 4e 44 41 52 59 5c 4f 54 48 45 52 5c 41 52 43 5c 44 49 43 4f 4d 5c 56 41 4c 49 44 41 54
[63] 49 4f 4e 20 08 00 16 00 1a 00 00 00 31 2e 32 2e 38 34 30 2e 31 30 30 30 38 2e 35 2e 31 2e 34
[94] 2e 31 2e 31 2e 37 00 08 00 18 00 1a 00 00 00 31 2e 33 2e 34 36 2e 36 37 30 35 38 39 2e 31 37
[125] 2e 31 2e 37
Error in readDICOMFile("OT-MONO2-8-hip.dcm", debug = TRUE) : DICM != DICM
It is apparent that the first 128 bytes already contain information. One can now use the parameters skipFirst128=FALSE and DICM=FALSE to start reading information from the beginning of the file:
dcm <- readDICOMFile("OT-MONO2-8-hip.dcm", skipFirst128=FALSE, DICM=FALSE)
image(t(dcm$img), col=grey(0:64/64), axes=FALSE, xlab="", ylab="")
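More generally, one can check up front whether a file carries the 128-byte File Preamble followed by the "DICM" prefix by inspecting its first 132 bytes directly (a sketch, using the file from the question):
fraw <- readBin("OT-MONO2-8-hip.dcm", what = "raw", n = 132)
# bytes 1-128 are the File Preamble; bytes 129-132 should spell "DICM"
identical(fraw[129:132], charToRaw("DICM"))
# FALSE for this file, hence skipFirst128=FALSE and DICM=FALSE above
Comparing raw bytes (rather than calling rawToChar()) also sidesteps the 'embedded nul in string' error seen in the fourth case.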
Error in parsePixelData(fraw[(132 + dcm$data.seek + 1):fsize], hdr, endian, :
Number of bytes in PixelData not specified
The file CT-MONO2-16-chest.dcm is encoded using JPEG compression. The R package oro.dicom does not support compression.
Error in rawToChar(fraw[129:132]) : embedded nul in string: '\0\0\b'
I have to speculate, since the file is not available for direct interrogation. This problem is related to the check for the "DICM" characters required by the DICOM Standard. If that check failed, then one can assume the file is not a valid DICOM file. I will look into making this error more informative in future versions of oro.dicom.
EDIT: Thank you for providing a link to the appropriate file. The file is in "ACR-NEMA 2" format. The R package oro.dicom has not been designed to read such a file. I have modified the code to improve the error tracking.
I am trying to get a text file from a URL. From the browser, it's fairly simple: I just have to "save as" from the URL and I get the file I want. At first, I had some trouble logging in using rvest (see https://stackoverflow.com/questions/66352322/how-to-get-txt-file-from-password-protected-website-jsp-in-r, where I uploaded a couple of probably useful pictures). When I use the following code:
fileurl <- "http://www1.bolsadecaracas.com/esp/productos/dinamica/downloadDetail.jsp?symbol=BNC&dateFrom=20190101&dateTo=20210101&typePlazo=nada&typeReg=T"
session(fileurl)
I get the following (note how I am redirected to a different URL, as happens in the browser when you try to get to the fileurl without first logging in):
<session> http://www1.bolsadecaracas.com/esp/productos/dinamica/downloadDetail.jsp?symbol=BNC&dateFrom=20190101&dateTo=20210101&typePlazo=nada&typeReg=T
Status: 200
Type: text/html; charset=ISO-8859-1
Size: 84
I managed to log in using the following code:
#Define URLs
loginurl <- "http://www1.bolsadecaracas.com/esp/usuarios/customize.jsp"
fileurl <- "http://www1.bolsadecaracas.com/esp/productos/dinamica/downloadDetail.jsp?symbol=BNC&dateFrom=20190101&dateTo=20210101&typePlazo=nada&typeReg=T"
#Create session
pgsession <- session(loginurl)
pgform <- html_form(pgsession)[[1]]  # Get form
# Create a fake submit button as form does not have one
fake_submit_button <- list(name = NULL,
                           type = "submit",
                           value = NULL,
                           checked = NULL,
                           disabled = NULL,
                           readonly = NULL,
                           required = FALSE)
attr(fake_submit_button, "class") <- "input"
pgform[["fields"]][["submit"]] <- fake_submit_button
# Create and submit filled form
filled_form <- html_form_set(pgform, login = "******", passwd = "******")
session_submit(pgsession, filled_form)
#Jump to new url
loggedsession <- session_jump_to(pgsession, url = fileurl)
#Output
loggedsession
It seems to me that the login was successful, as the session output is the exact same size as the .txt file when I download it, and I am no longer redirected. See the output:
<session> http://www1.bolsadecaracas.com/esp/productos/dinamica/downloadDetail.jsp?symbol=BNC&dateFrom=20190101&dateTo=20210101&typePlazo=nada&typeReg=T
Status: 200
Type: text/plain; charset=ISO-8859-1
Size: 32193
However, whenever I try to extract the content of the session with read_html() or the like, I get the following error: "Error: Page doesn't appear to be html.". I don't know if it has anything to do with the "Type: text/plain" of the session.
When I run
loggedsession[["response"]][["content"]]
I get
[1] 0d 0a 0d 0a 0d 0a 0d 0a 0d 0a 7c 30 32 2f 30 31 2f 32 30 31 39 7c 52 7c 31 34 2c 39 30 7c 31 35 2c
[34] 30 30 7c 31 37 2c 38 33 7c 31 33 2c 35 30 7c 39 7c 31 33 2e 35 33 33 7c 32 30 33 2e 30 36 30 2c 31
[67] 39 7c 0a 7c 30 33 2f 30 31 2f 32 30 31 39 7c 52 7c 31 35 2c 30 30 7c 31 37 2c 39 38 7c 31 37 2c 39
Any help on how to extract the text file would be greatly appreciated.
PS:
At one point, just playing with functions, I managed to get something that would have worked with httr::GET(fileurl). That was after playing with rvest functions and managing to log in. However, after closing and reopening RStudio I was not able to get the same output with that function.
Because rvest uses the httr package internally, you can use httr and base R to save your file. The key to the solution is that your response (in terms of the httr package) is stored in the session object:
library(rvest)
library(httr)
httr::content(loggedsession$response, as = "text") %>%
  cat(file = "your_file.txt")
More importantly, if your file were binary (e.g. a zip archive), you would have to do:
library(rvest)
library(httr)
httr::content(loggedsession$response, as = "raw") %>%
  writeBin(con = 'your_file.zip')
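To sanity-check the result, the saved file can be read back with base R (file names as used in the snippets above):
file.size("your_file.txt")                       # should be close to the Size reported in the session output (32193)
head(readLines("your_file.txt", warn = FALSE))   # first few lines of the downloaded table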
I've been trying to use the AWS S3 storage option via R. I've been using the aws.s3 package to help do that.
Everything seems to work until I try to recall and use an .rds file I had saved on AWS.
By way of example:
library("aws.s3")
Sys.setenv("AWS_ACCESS_KEY_ID" = "mykey",
"AWS_SECRET_ACCESS_KEY" = "mysecretkey",
"AWS_DEFAULT_REGION" = "us-east-1",
"AWS_SESSION_TOKEN" = "mytoken")
#Create Dummy Data
testdata <- rep(1:3, 10)
#Save to AWS
s3saveRDS(testdata, object = "testdata.rds", bucket = "mybucket")
#Recall from AWS
newtestdata <- get_object("testdata.rds", bucket = "mybucket")
newtestdata comes back in a raw format but I can't find how to convert it into its original format. I've tried things such as rawToChar() but I get errors.
For info this is what the newtestdata file looks like in its raw form:
1f 8b 08 00 00 00 00 00 00 06 8b e0 62 60 60 60 62 60 66 61 64 60 62 06 32 19 78 81 58 0e 88 19 c1 e2 0c 0c cc f4 64 03 00 62 4b 7d f5 8e 00 00 00
What should I do to convert this file back to its original form?
You can try the snippet below, as mentioned in [1], to read the object back directly and check with identical() whether the round trip matches:
s3saveRDS(mtcars, object = "mtcars.rds", bucket = "myexamplebucket")
mtcars2 <- s3readRDS(object = "mtcars.rds", bucket = "myexamplebucket")
identical(mtcars, mtcars2)
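Alternatively, the raw vector already returned by get_object() in the question can be decoded in memory: the leading 1f 8b bytes in the dump above are the gzip magic number, and saveRDS()/s3saveRDS() compress with gzip by default. A sketch under that assumption:
# newtestdata is the raw vector returned by get_object() above
testdata2 <- unserialize(memDecompress(newtestdata, type = "gzip"))
identical(testdata, testdata2)   # should be TRUE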
I am trying to play a WAV file in the browser. This particular WAV file plays just fine in all browsers except IE and Edge. IE is to be expected -- it doesn't support WAV. However, it looks like it should work in Edge. I've studied the header of the file, and it looks like it is well-formed to me.
This error is printed to the console:
WEBAUDIO17014: Decoding error: The stream provided is corrupt or unsupported.
Here is the header:
52 49 46 46 : "RIFF"
24 30 0C 00 : file size (798,756 bytes)
57 41 56 45 : "WAVE"
66 6D 74 20 : "fmt "
10 00 00 00 : length of format (16 bytes)
01 00 : type of format
01 00 : Number of channels (1)
22 56 00 00 : Sample rate (22050)
88 58 01 00 : (Sample Rate * BitsPerSample * Channels) / 8.
02 00 : block align, (BitsPerSample * Channels) / 8. 1 = 8 bit mono, 2 = 8 bit stereo or 16 bit mono, 4 = 16 bit stereo
10 00 : bits per sample (16)
64 61 74 61 : "data"
00 30 0C 00 : size of data (798,720 bytes)
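For what it's worth, these header fields can also be read back programmatically to double-check the manual decoding above; here is a quick sketch in R using readBin (the file path is hypothetical):
con <- file("file.wav", "rb")                         # hypothetical path to the WAV in question
readBin(con, "raw", n = 4)                            # "RIFF"
readBin(con, "integer", size = 4, endian = "little")  # file size
readBin(con, "raw", n = 8)                            # "WAVE" + "fmt "
readBin(con, "integer", size = 4, endian = "little")  # length of format chunk (16)
fmt_type    <- readBin(con, "integer", size = 2, endian = "little")  # 1 = PCM
channels    <- readBin(con, "integer", size = 2, endian = "little")
sample_rate <- readBin(con, "integer", size = 4, endian = "little")
byte_rate   <- readBin(con, "integer", size = 4, endian = "little")
block_align <- readBin(con, "integer", size = 2, endian = "little")
bits        <- readBin(con, "integer", size = 2, endian = "little")
close(con)
c(channels = channels, sample_rate = sample_rate, byte_rate = byte_rate, bits = bits)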
So, the two unusual things I see are:
1) It uses a sample rate of 22050. Is that supported in Edge?
2) It is a mono file. Is that supported in Edge?
I tried to look up WEBAUDIO17014 but I didn't get any relevant answers. Is there a way to get a more specific error message about what is going wrong?
Edit:
By request, here is the HTML of the page:
<audio src="/path/to/file.wav" preload></audio>
I'm experimenting with creating a Bluetooth Low Energy peripheral on my Linux computer (the goal is to send data over Bluetooth from an iPhone). I'm currently using the tools hciconfig, hcitool and hcidump.
My current experiment is to advertise a service with a specific UUID that the iOS CoreBluetooth library will pick up. (Note: I'm not trying to create an iBeacon.)
Right now, it comes down to one single command that is bugging me:
hcitool -i hci0 cmd 0x08 0x0008 15 02 01 1a 11 07 41 42 43 44 45 46 47 48 49 4a 4b 4c 4d 4e 4f 50
What I think it should do is the following:
0x08: Setting Group to BLE
0x0008: Setting Command to HCI_LE_Set_Advertising_Data
0x15: Setting the Length of the Significant Bytes in the Header to 21 (3 bytes for the flags structure, 18 bytes for the service structure)
0x02: Setting the Length of the Flags structure to 2 Bytes
0x01: Setting the structure Type to AD Flags
0x1a: Flag Value:
bit 0 (OFF) LE Limited Discoverable Mode
bit 1 (ON) LE General Discoverable Mode
bit 2 (OFF) BR/EDR Not Supported
bit 3 (ON) Simultaneous LE and BR/EDR to Same Device Capable (controller)
bit 4 (ON) Simultaneous LE and BR/EDR to Same Device Capable (Host)
(End of Flag)
0x11 Setting the Length of Service Structure to 17 Bytes
0x07 Setting the Structure Type to 128 Bit Complete Service UUID List
0x41 ... 0x50 Setting the UUID of the Test Service to ABCDEFGHIJKLMNOP
As far as I can see with hcidump, it's executed properly and looks the way I wanted it to. But it's rejected with an error:
LE Set Advertising Data (0x08|0x0008) ncmd 1
status 0x12
Error: Invalid HCI Command Parameters
I have spent a whole day trying to get this right. Does someone more experienced see what I have done wrong? And is this the correct way to advertise a service?
(Context for the interested reader: I have successfully accomplished what I want to do using the Bleno library in Node.js. However, that will not fit into the bigger picture of our system. Using hcitool directly for advertising is just for experimentation; the real implementation will be written in Python later.)
The length of the HCI_LE_Set_Advertising_Data parameter block should be exactly 32 bytes (1 length byte plus 31 bytes of advertising data). Try zero-padding the command to reach 32 bytes:
hcitool -i hci0 cmd 0x08 0x0008 15 02 01 1a 11 07 41 42 43 44 45 46 47 48 49 4a 4b 4c 4d 4e 4f 50 00 00 00 00 00 00 00 00 00 00
You can gain some more insight using hcidump --raw.
Compare the output of the original command:
$hcidump --raw
HCI sniffer - Bluetooth packet analyzer ver 5.30
device: hci0 snap_len: 1500 filter: 0xffffffffffffffff
< 01 08 20 16 15 02 01 1A 11 07 41 42 43 44 45 46 47 48 49 4A
4B 4C 4D 4E 4F 50
> 04 0E 04 01 08 2
With the zero padded one:
HCI sniffer - Bluetooth packet analyzer ver 5.30
device: hci0 snap_len: 1500 filter: 0xffffffffffffffff
< 01 08 20 20 15 02 01 1A 11 07 41 42 43 44 45 46 47 48 49 4A
4B 4C 4D 4E 4F 50 00 00 00 00 00 00 00 00 00 00
> 04 0E 04 01 08 20 00
Another way to gain more insight is to run hciconfig hci0 leadv and use hcidump --raw to examine the payload of the SET_ADVERTISING_PARAMETERS command sent by hciconfig.
By the way, I've noticed that sometimes a non-zero-padded command also works; it might depend on the BlueZ version you are using.
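If you end up scripting the advertising setup instead of typing the command by hand, the zero-padded parameter list can also be generated programmatically. Here is an illustrative sketch in R (the same logic carries over to Python); the 16-character service "UUID" string is the one from the question:
uuid_chars <- "ABCDEFGHIJKLMNOP"                       # 16-byte service UUID used in the question
uuid_hex   <- sprintf("%02x", utf8ToInt(uuid_chars))   # 41 42 ... 50
adv_data   <- c("02", "01", "1a",                      # flags structure
                "11", "07", uuid_hex)                  # 128-bit complete service UUID list
payload    <- c(sprintf("%02x", length(adv_data)),     # significant-length byte (0x15)
                adv_data)
payload    <- c(payload, rep("00", 32 - length(payload)))   # zero-pad to the full 32-byte parameter block
cat("hcitool -i hci0 cmd 0x08 0x0008", payload, "\n")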
I'm trying to read in a complicated data file that has floating point values. Some C code has been supplied that handles this format (Met Office PP file), and it does a lot of bit twiddling and swapping. And it doesn't work. It gets a lot right, like the size of the data, but the numerical values in the returned matrix are nonsensical, with NaNs and values like 1e38 and -1e38 liberally sprinkled in.
However, I have a binary exe ("convsh") that can convert these to netCDF, and the netCDFs look fine - nice swirly maps of wind speed.
What I'm thinking is that the bytes of the PP file are being read in in the wrong order. If I could compare the bytes of the floats returned correctly in the netCDF data with the bytes in the floats returned wrongly from the C code, then I might figure out the correct swappage.
So is there a plain R function to dump the four (or eight?) bytes of a floating point number? Something like:
> as.bytes(pi)
[1] 23 54 163 73 99 00 12 45 # made up values
Searches for "bytes", "float" and "binary" haven't helped.
It's trivial in C; I could probably have written it in the time it took me to write this...
rdyncall might give you what you're looking for:
library(rdyncall)
as.floatraw(pi)
# [1] db 0f 49 40
# attr(,"class")
# [1] "floatraw"
Or maybe writeBin(pi, raw(8))?
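writeBin() does work here: when con is a raw vector it returns the bytes, and its endian argument is handy for testing the byte-swapping hypothesis (a quick sketch):
writeBin(pi, raw(), endian = "little")   # 18 2d 44 54 fb 21 09 40
writeBin(pi, raw(), endian = "big")      # 40 09 21 fb 54 44 2d 18 (XDR / network order)
# deliberately mis-reading little-endian bytes as big-endian yields a nonsense double,
# the kind of symptom described in the question
readBin(writeBin(pi, raw(), endian = "little"), "double", endian = "big")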
Yes, that must exist in the serialization code, because R merrily sends stuff across the wire, taking care of endianness too. Did you look at e.g. Rserve using it, or at how digest passes the character representation to the chosen hash functions?
After a quick glance at digest.R:
R> serialize(pi, connection=NULL, ascii=TRUE)
[1] 41 0a 32 0a 31 33 34 39 31 34 0a 31 33 31 38 34 30 0a
[19] 31 34 0a 31 0a 33 2e 31 34 31 35 39 32 36 35 33 35 38
[37] 39 37 39 33 0a
and
R> serialize(pi, connection=NULL, ascii=FALSE)
[1] 58 0a 00 00 00 02 00 02 0f 02 00 02 03 00 00 00 00 0e
[19] 00 00 00 01 40 09 21 fb 54 44 2d 18
R>
That might get you going.
Come to think of it, this includes header metadata.
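Judging from the binary output above, the value itself sits at the end of the stream, so for a single double the last eight bytes are its big-endian (XDR) representation:
tail(serialize(pi, connection = NULL, ascii = FALSE), 8)
# [1] 40 09 21 fb 54 44 2d 18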
The package mcga (machine-coded genetic algorithms) includes some functions for double-to-bytes and bytes-to-double conversions. For the bytes of pi, you can use DoubleToBytes like this:
> DoubleToBytes(pi)
[1] 24 45 68 84 251 33 9 64
For converting bytes to double again, BytesToDouble() can be used instead:
> BytesToDouble(c(24,45,68,84,251,33,9,64))
[1] 3.141593
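The decimal values above are simply the little-endian byte layout of pi; base R's writeBin() gives the same sequence without an extra package:
as.integer(writeBin(pi, raw(), endian = "little"))
# [1]  24  45  68  84 251  33   9  64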
Links:
CRAN page of mcga