OFFIS DICOM - dcmdump v3.6.0 - (0002,0010) Transfer Syntax UID

I use the OFFIS DICOM dcmdump tool to extract information from a DICOM image:
http://support.dcmtk.org/docs/dcmdump.html
I use dcmdump.exe -M -L +Qn to dump the DICOM information.
The output looks like this:
Dicom-File-Format
# Dicom-Meta-Information-Header
# Used TransferSyntax: Little Endian Explicit
(0002,0000) UL 164 # 4, 1 FileMetaInformationGroupLength
(0002,0001) OB 00\01 # 2, 1 FileMetaInformationVersion
(0002,0002) UI =DigitalXRayImageStorageForPresentation # 28, 1 MediaStorageSOPClassUID
(0002,0003) UI [1.2.826.0.1.3680043.2.876.8598.1.4.0.20160428091911.2.2] # 56, 1 MediaStorageSOPInstanceUID
(0002,0010) UI =JPEGLSLossless # 22, 1 TransferSyntaxUID
(0002,0012) UI [1.2.276.0.64] # 12, 1 ImplementationClassUID
Why did dcmdump translate (0002,0010) to the value JPEGLSLossless instead of 1.2.840.10008.1.2.4.80?
Is there a switch to make it print the plain UID instead?

dcmdump does this because, by default, it translates well-known UIDs into human-readable names.
The option you are looking for to change this behavior is -Un (--no-uid-names).
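For example (the file name here is just a placeholder):
dcmdump.exe -M -L +Qn -Un image.dcm
With -Un, the tag should then show the plain UID value:
(0002,0010) UI [1.2.840.10008.1.2.4.80] # 22, 1 TransferSyntaxUID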

Obtaining pixel value from 16-bit tiff images using magick package in R

Does anyone know how to obtain the pixel value for each channel (RGB) from 16-bit TIFF images using the magick package in R? Currently I am using Mathematica to perform this operation, because I could not find an equivalent way of doing it in R.
I have tried to read the pixel values with the image-magick package, and the result is of type raw (e.g. "ff"). I used the function rawToNum (package "pack") to convert the raw values to numeric, and the result is close to what I obtain using the ImageData function in Mathematica, but not exactly the same.
You can also access the pixels as a numeric array with the magick package. The example is based on this vignette from the package.
library(magick)
tiger <- image_read('http://jeroen.github.io/images/tiger.svg')
tiger_tiff <- image_convert(tiger, "tiff")
# Access data in raw format and convert to integer
tiger_array <- as.integer(tiger_tiff[[1]])
Then if you check the dimensions and type, you get:
dim(tiger_array)
[1] 900 900 4
is.numeric(tiger_array)
[1] TRUE
I don't know much about R at all, but I guess you can "shell out" and execute an external command using system() or some such.
If so, maybe you can use this. First, let's make a 16-bit TIFF file that is a gradient from red to blue, just 10 pixels wide and 1 pixel tall:
convert -size 10x1 gradient:red-blue image.tiff
Now we can dump the pixels to a file using ImageMagick:
convert image.tiff rgb:image.rgb
# Now check its length - yes, 60 bytes = 10 pixels with 2 bytes each for R, G and B
ls -l image.rgb
-rw-r--r-- 1 mark staff 60 11 Jul 10:32 image.rgb
We can also write the data to stdout like this:
convert image.tiff rgb:-
and also look at it with 1 pixel per line (6 bytes)
convert image.tiff rgb:- | xxd -g 3 -c 6
00000000: ffff00 000000 ...... # Full Red, no Green, no Blue
00000006: 8de300 00721c ....r. # Lots of Red, no Green, a little Blue
0000000c: 1cc700 00e338 .....8
00000012: aaaa00 005555 ....UU
00000018: 388e00 00c771 8....q
0000001e: c77100 00388e .q..8.
00000024: 555500 00aaaa UU....
0000002a: e33800 001cc7 .8....
00000030: 721c00 008de3 r.....
00000036: 000000 00ffff ...... # No Red, no Green, full Blue
I'm hoping you can do something like that in R, with:
system("convert image.tif rgb:-")
Another way of dumping the pixels might be with Perl to slurp the entire file and then unpack the contained unsigned shorts and print them one per line:
convert image.tiff rgb: | perl -e 'my $str=do{local $/; <STDIN>}; print join("\n",unpack("v*",$str)),"\n";'
Sample Output
65535 # Full Red
0 # No Green
0 # No Blue
58253 # Lots of Red
0 # No Green
7282 # A little Blue
50972 # Moderate Red
0
14563
43690
0
21845
36408
0
29127
29127
0
36408
21845
0
43690
14563
0
50972
7282
0 # No Green
58253 # Lots of Blue
0 # No Red
0 # No Green
65535 # Full Blue
Another way of seeing the data may be using od and awk like this:
convert image.tiff rgb: | od -An -tuS | awk '{for(i=1;i<=NF;i++){print $i}}'
65535
0
0
58253
0
7282
50972
0
14563
43690
0
21845
36408
0
29127
29127
0
36408
21845
0
43690
14563
0
50972
7282
0
58253
0
0
65535
where the -An suppresses printing of the address, and the -tuS says the type of the data is unsigned short.
Perhaps a slightly simpler way in ImageMagick would be to use the txt: output format.
Using Mark Setchell's image:
convert -size 10x1 gradient:red-blue image.tiff
Using txt: as follows:
convert image.tiff txt: | sed -n 's/^.*[(]\(.*\)[)].*[#].*$/\1/p'
Produces:
65535,0,0
58253,0,7282
50972,0,14563
43690,0,21845
36408,0,29127
29127,0,36408
21845,0,43690
14563,0,50972
7282,0,58253
0,0,65535
or using txt: to include the pixel coordinates:
convert image.tiff txt: | sed -n 's/^\(.*[)]\).*[#].*$/\1/p'
Produces:
0,0: (65535,0,0)
1,0: (58253,0,7282)
2,0: (50972,0,14563)
3,0: (43690,0,21845)
4,0: (36408,0,29127)
5,0: (29127,0,36408)
6,0: (21845,0,43690)
7,0: (14563,0,50972)
8,0: (7282,0,58253)
9,0: (0,0,65535)
Thank you all. The best answer I found was given by a student of mine, using the raster package:
library(raster)
img <- stack(filename)
x <- as.matrix(raster(img, 1)) # here we specify the layer 1, 2 or 3
The only issue is that the function as.matrix of the package raster may be confused with the one from the base package, so it may be necessary to specify raster::as.matrix.
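For example (img and filename being the objects from the snippet above):
x <- raster::as.matrix(raster(img, 1))  # fully qualified call avoids picking up base::as.matrix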
@Nate_A gave the answer, but three cents short of a dollar.
After
dim(tiger_array)
[1] 900 900 4
You get each color channel of the first pixel with:
tiger_array[1,1,1] # red
tiger_array[1,1,2] # green
tiger_array[1,1,3] # blue
Or, if you prefer values between 0 and 1 (as.integer returns values in 0-255):
tiger_array[1,1,1]/255
tiger_array[1,1,2]/255
tiger_array[1,1,3]/255

How to connect sqlite database to weka

I'm trying to import a database from sqlite3 into Weka, but the problem is that even after the database is loaded and displayed, when I click OK so I can start working with the database, the message "couldn't read from database: unknown data type: text" appears. I've tried modifying the DatabaseUtils.props file but nothing seems to work, so I'd really appreciate it if someone could tell me how to solve this issue. Thanks
I have read these instructions:
https://waikato.github.io/weka-wiki/databases/#configuration-files
This is my DatabaseUtils.props file; please change the jdbcURL entry to point at your database:
# Database settings for sqlite 3.x
#
# General information on database access can be found here:
# https://waikato.github.io/weka-wiki/databases
#
# url: http://www.sqlite.org/
# jdbc: http://www.zentus.com/sqlitejdbc/
# author: Fracpete (fracpete at waikato dot ac dot nz)
# version: $Revision: 5836 $
# JDBC driver (comma-separated list)
jdbcDriver=org.sqlite.JDBC,
# database URL
jdbcURL=jdbc:sqlite:/some/path/to/mydb.sqlite
# specific data types
# string, getString() = 0; --> nominal
# boolean, getBoolean() = 1; --> nominal
# double, getDouble() = 2; --> numeric
# byte, getByte() = 3; --> numeric
# short, getShort() = 4; --> numeric
# int, getInteger() = 5; --> numeric
# long, getLong() = 6; --> numeric
# float, getFloat() = 7; --> numeric
# date, getDate() = 8; --> date
# text, getString() = 9; --> string
# time, getTime() = 10; --> date
#SQLITE DATATYPES
#NULL. The value is a NULL value.
null=9
#INTEGER. The value is a signed integer, stored in 1, 2, 3, 4, 6, or 8 bytes depending on the magnitude of the value.
integer=5
#REAL. The value is a floating point value, stored as an 8-byte IEEE floating point number.
float=6
#TEXT. The value is a text string, stored using the database encoding (UTF-8, UTF-16BE or UTF-16LE).
TEXT=9
text=9
#BLOB. The value is a blob of data, stored exactly as it was input.
# other options
CREATE_DOUBLE=DOUBLE
CREATE_STRING=varchar(2000)
CREATE_STRING=TEXT
CREATE_INT=INT
CREATE_DATE=DATETIME
DateFormat=yyyy-MM-dd HH:mm:ss
checkUpperCaseNames=false
checkLowerCaseNames=false
checkForTable=true
# All the reserved keywords for this database
# Based on the keywords listed at the following URL (2009-04-13):
# http://www.sqlite.org/lang_keywords.html
Keywords=\
ABORT,\
ADD,\
AFTER,\
ALL,\
ALTER,\
ANALYZE,\
AND,\
AS,\
ASC,\
ATTACH,\
AUTOINCREMENT,\
BEFORE,\
BEGIN,\
BETWEEN,\
BY,\
CASCADE,\
CASE,\
CAST,\
CHECK,\
COLLATE,\
COLUMN,\
COMMIT,\
CONFLICT,\
CONSTRAINT,\
CREATE,\
CROSS,\
CURRENT_DATE,\
CURRENT_TIME,\
CURRENT_TIMESTAMP,\
DATABASE,\
DEFAULT,\
DEFERRABLE,\
DEFERRED,\
DELETE,\
DESC,\
DETACH,\
DISTINCT,\
DROP,\
EACH,\
ELSE,\
END,\
ESCAPE,\
EXCEPT,\
EXCLUSIVE,\
EXISTS,\
EXPLAIN,\
FAIL,\
FOR,\
FOREIGN,\
FROM,\
FULL,\
GLOB,\
GROUP,\
HAVING,\
IF,\
IGNORE,\
IMMEDIATE,\
IN,\
INDEX,\
INDEXED,\
INITIALLY,\
INNER,\
INSERT,\
INSTEAD,\
INTERSECT,\
INTO,\
IS,\
ISNULL,\
JOIN,\
KEY,\
LEFT,\
LIKE,\
LIMIT,\
MATCH,\
NATURAL,\
NOT,\
NOTNULL,\
NULL,\
OF,\
OFFSET,\
ON,\
OR,\
ORDER,\
OUTER,\
PLAN,\
PRAGMA,\
PRIMARY,\
QUERY,\
RAISE,\
REFERENCES,\
REGEXP,\
REINDEX,\
RELEASE,\
RENAME,\
REPLACE,\
RESTRICT,\
RIGHT,\
ROLLBACK,\
ROW,\
SAVEPOINT,\
SELECT,\
SET,\
TABLE,\
TEMP,\
TEMPORARY,\
THEN,\
TO,\
TRANSACTION,\
TRIGGER,\
UNION,\
UNIQUE,\
UPDATE,\
USING,\
VACUUM,\
VALUES,\
VIEW,\
VIRTUAL,\
WHEN,\
WHERE
# The character to append to attribute names to avoid exceptions due to
# clashes between keywords and attribute names
KeywordsMaskChar=_
#flags for loading and saving instances using DatabaseLoader/Saver
nominalToStringLimit=50
idColumn=auto_generated_id
Try putting the DatabaseUtils.props file in the Weka home directory. Also, in the file you should add something like TEXT=0 or TEXT=9 in the corresponding section.
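For instance, a minimal sketch of the relevant addition (9 maps to string, per the data-type comment block in the props file above; Weka also looks for the file in the current working directory):
# in DatabaseUtils.props
TEXT=9
text=9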

Disable/reset bold in xterm

With escape sequences, "\033[21m" is supposed to reset/remove bold/bright:
echo -e "Text\033[1mMore text\033[21mEnd"
should return:
TextMore textEnd
but I get
TextMore textE̲n̲d̲
As you can see, in xterm "\033[21m" switches to underline, and to reset bold we need to use "\033[0m". Why is this?
Is there a way to change this behavior? (Maybe by launching xterm with some parameter?)
According to XTerm Control Sequences, SGR 21 is "doubly-underlined":
CSI Pm m Character Attributes (SGR).
Ps = 2 1 -> Doubly-underlined (ISO 6429).
Ps = 2 2 -> Normal (neither bold nor faint).
Ps = 2 3 -> Not italicized (ISO 6429).
Ps = 2 4 -> Not underlined.
Ps = 2 5 -> Steady (not blinking).
Ps = 2 7 -> Positive (not inverse).
Ps = 2 8 -> Visible, i.e., not hidden (VT300).
Ps = 2 9 -> Not crossed-out (ISO 6429).
Perhaps you intended SGR 22.
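To verify, this should print "More text" in bold and then return "End" to normal weight in xterm:
echo -e "Text\033[1mMore text\033[22mEnd"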
The doubly-underlined feature was implemented in xterm patch #305:
minor reorganization to implement “filler” SGR features. There are no established applications which rely upon these; some people find them amusing.
separate bits used to manage drawing state from attribute-bits.
implement SGR codes 2, 3, 9, 21 and their corresponding resets.
add configure option --disable-wide-attrs to disable the feature.

Pyparsing - name not starting with a character

I am trying to use Pyparsing to identify a keyword which does not begin with $. So for the following input:
$abc = 5 # is not a valid one
abc123 = 10 # is a valid one
abc$ = 23 # is a valid one
I tried the following
var = Word(printables, excludeChars='$')
var.parseString('$abc')
But this doesn't allow any $ in var. How can I specify all printable characters other than $ in the first character position? Any help will be appreciated.
Thanks
Abhijit
You can use the method I used to define "all characters except X" before I added the excludeChars parameter to the Word class:
NOT_DOLLAR_SIGN = ''.join(c for c in printables if c != '$')
keyword_not_starting_with_dollar = Word(NOT_DOLLAR_SIGN, printables)
This should be a bit more efficient than building it up with a Combine and a NotAny. But this will match almost anything: integers, words, valid identifiers, invalid identifiers; so I'm skeptical of the value of this kind of expression in your parser.
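A quick check against the examples from the question (var here mirrors the expression above):
from pyparsing import Word, printables

NOT_DOLLAR_SIGN = ''.join(c for c in printables if c != '$')
var = Word(NOT_DOLLAR_SIGN, printables)

print(var.parseString('abc123'))  # ['abc123']
print(var.parseString('abc$'))    # ['abc$']
var.parseString('$abc')           # raises ParseException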

Exception importing data into neo4j using batch-import

I am running neo4j 1.8.2 on a remote unix box. I am using this jar (https://github.com/jexp/batch-import/downloads).
nodes.csv is the same as given in the example:
name age works_on
Michael 37 neo4j
Selina 14
Rana 6
Selma 4
rels.csv is like this:
start end type since counter:int
1 2 FATHER_OF 1998-07-10 1
1 3 FATHER_OF 2007-09-15 2
1 4 FATHER_OF 2008-05-03 3
3 4 SISTER_OF 2008-05-03 5
2 3 SISTER_OF 2007-09-15 7
But I am getting this exception:
Using Existing Configuration File
Total import time: 0 seconds
Exception in thread "main" java.util.NoSuchElementException
at java.util.StringTokenizer.nextToken(StringTokenizer.java:332)
at org.neo4j.batchimport.Importer$Data.split(Importer.java:156)
at org.neo4j.batchimport.Importer$Data.update(Importer.java:167)
at org.neo4j.batchimport.Importer.importNodes(Importer.java:226)
at org.neo4j.batchimport.Importer.main(Importer.java:83)
I am new to neo4j, and was checking whether this importer can save some coding effort.
It would be great if someone could point out the probable mistake.
Thanks for help!
--Edit:--
My nodes.csv
name dob city state s_id balance desc mgr_primary mgr_secondary mgr_tertiary mgr_name mgr_status
John Von 8/11/1928 Denver CO 1114-010 7.5 RA 0023-0990 0100-0110 Doozman Keith Active
my rels.csv
start end type since status f_type f_num
2 1 address_of
1 3 has_account 5 Active
4 3 f_of Primary 0111-0230
Hi, I had some issues in the past with the batch import script.
The formatting of your files must be very rigorous, which means:
no extra spaces where not expected, like the ones I see in the first line of your rels.csv before "start"
no multiple spaces in place of the tab. If your files are exactly like what you've copied here, you have 4 spaces instead of one tab, and that is not going to work, as the script uses a tokenizer looking for tabs! (One way to check is shown below.)
I had this issue because I always convert tabs to 4 spaces, and once I understood that, I stopped doing it for my CSV files!
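To check the separators, one option (assuming a GNU userland, where cat -A shows tabs as ^I and line ends as $) is:
cat -A rels.csv | head -2
which, for a correctly tab-separated file like the first rels.csv above, should look like:
start^Iend^Itype^Isince^Icounter:int$
1^I2^IFATHER_OF^I1998-07-10^I1$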
