DICOM CTDI Phantom Type Code Sequence Attribute (0018,9346) - Values?

I have checked a lot of DICOM images for the CTDI Phantom Type Code Sequence attribute (0018,9346). About 2000 times I found the value: 1)
What does this "1)" mean?
https://dicom.innolitics.com/ciods/ct-image/ct-image/00189346

You should probably ask the engineer who performed the studies, or the technician who installed the machine and set up the workstation.
I can only guess, and my best shot is that this sequence means "a phantom was used during the study".
You need to get used to the fact that the DICOM standard is treated more like a suggestion than a requirement. For example, when you get a series where (0008,1030) Study Description has the value "CTChest", you shouldn't assume that it is a CT of the chest, as it could be any type of examination.
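If you want to see what the sequence actually contains, rather than a viewer's summary such as "1)", you can dump its code items. A minimal pydicom sketch (the file name is hypothetical):

import pydicom

ds = pydicom.dcmread("ct_slice.dcm")  # hypothetical file name

# (0018,9346) CTDI Phantom Type Code Sequence; may be absent entirely
seq = ds.get("CTDIPhantomTypeCodeSequence")
if seq:
    for item in seq:
        # Each item is a code item: Code Value, Coding Scheme Designator, Code Meaning
        print(item.CodeValue, item.CodingSchemeDesignator, item.CodeMeaning)
else:
    print("No CTDI Phantom Type Code Sequence in this instance")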

Related

In what cases should a DICOM object be built with an SQ VR type attribute?

All, forgive me, I am a newbie to DICOM and am just learning the standard right now. I have just learned that there is a value representation named SQ (Sequence of Items) in the DICOM standard. Basically, it can be used to describe a DICOM object like a tree. I am just curious: in what particular cases should we use this kind of structure to build a DICOM object? Thanks.
A DICOM sequence is a kind of nested structure used to define complex attributes; it consists of a set of datasets, like a structured report.
Currently I'm working with ultrasound images, and I use a DICOM sequence to specify a region of an image. For example:
Regions are described by the tag (0018,6011) Sequence of Ultrasound Regions; region 'A' is one item of that sequence and carries nested tags like:
(0018,6018) Region Location Min x0
(0018,601A) Region Location Min y0
(0018,601C) Region Location Max x1
(0018,601E) Region Location Max y1
(0018,6024) Physical Units X Direction
(0018,6026) Physical Units Y Direction
These tags are used for one instance (item) of a region; region 'B', 'C' or whatever may carry the same tags in its own item. A minimal sketch of how such a sequence can be built follows below.
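For illustration, here is a minimal pydicom sketch of how such a sequence could be populated; the coordinate and unit values are made up:

from pydicom.dataset import Dataset
from pydicom.sequence import Sequence

def make_region(x0, y0, x1, y1):
    region = Dataset()
    region.RegionLocationMinX0 = x0      # (0018,6018)
    region.RegionLocationMinY0 = y0      # (0018,601A)
    region.RegionLocationMaxX1 = x1      # (0018,601C)
    region.RegionLocationMaxY1 = y1      # (0018,601E)
    region.PhysicalUnitsXDirection = 3   # (0018,6024), illustrative value
    region.PhysicalUnitsYDirection = 3   # (0018,6026), illustrative value
    return region

ds = Dataset()
# (0018,6011) Sequence of Ultrasound Regions: one item per region ('A', 'B', ...)
ds.SequenceOfUltrasoundRegions = Sequence([
    make_region(0, 0, 255, 255),    # region 'A'
    make_region(0, 256, 255, 511),  # region 'B'
])
print(ds)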
For more information, this link (http://dicom.nema.org/dicom/2013/output/chtml/part05/sect_7.5.html) describes the standard's rules for nesting structures, and this link (http://dicom.nema.org/medical/dicom/2014c/output/chtml/part03/sect_C.8.5.5.html) shows the specific use for ultrasound images, with an example.
Good luck in your DICOM studies!
One relevant thing is missing in the excellent answer by Gabriel, IMHO: it is not the implementor's choice when to use a sequence to encode data in DICOM. DICOM datasets are structured in modules, which are composed of attributes, so there is a list of attributes allowed for a particular type of DICOM object (ultrasound image, CT image, ...). Each attribute has a "type" (in DICOM terms: Value Representation, VR) - string, number, person name or sequence - and the items allowed inside a sequence are also well defined.
So the answer to "when to use a sequence" is: When the DICOM standard requires you to.
References:
DICOM Part 3 - which attributes are required/allowed for which type of DICOM object
DICOM Part 6 - which attributes are encoded with which value representation
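As a quick way to answer the "which VR does this attribute use" question without opening Part 6, pydicom ships the data dictionary; a small sketch:

from pydicom.datadict import dictionary_VR, dictionary_description

# The data dictionary mirrors DICOM Part 6: tag -> value representation
print(dictionary_description(0x00189346))  # CTDI Phantom Type Code Sequence
print(dictionary_VR(0x00189346))           # 'SQ' - the standard mandates a sequence here
print(dictionary_VR(0x00081030))           # 'LO' - Study Description is a plain string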

Weka Apriori No Large Itemset and Rules Found

I am trying to do Apriori association mining with Weka (I use 3.7) using a given database table.
So I exported two columns (orderLineNumber and productCode) and loaded them into Weka. So far I haven't had a successful attempt; it always ends with "No large itemsets and rules found!"
I also tried converting the CSV into an ARFF file first using the ARFF converter, and still get the same message;
I also tried using the database loader in Weka; the data loaded just fine but still gives the same result;
The only filter I've applied in preprocessing is the NumericToNominal filter;
What have I done wrong here? I suspect it is my ARFF format. Thank you.
Update
After further trials, I found out that I had exported the wrong column and was missing one filter step, Denormalize. I installed the plugin via the package manager and denormalized my data after converting it to nominal first;
I then compared the results with the "Supermarket" sample's results. The only differences are that my output comes with 'f' instead of 't' and the confidence value always seems to be 100%;
First of all, OrderLine is the wrong column.
Obviously, the position on the printed bill is not very important.
Secondly, the file format is not appropriate.
You want one line for every order and one column for every possible item in the #data section. To save memory, it may be helpful to use sparse formats (do not forget to set the flags appropriately).
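To illustrate the reshaping (it is essentially what Weka's Denormalize filter does), here is a small pandas sketch; the column names and product codes are hypothetical:

import pandas as pd

# Hypothetical export: one row per (order, product) pair, the shape Weka chokes on.
df = pd.DataFrame({
    "orderNumber": [1, 1, 2, 3, 3, 3],
    "productCode": ["S10_1678", "S10_2016", "S10_1678", "S18_1342", "S10_2016", "S24_2000"],
})

# One line per order, one column per possible item.
basket = pd.crosstab(df["orderNumber"], df["productCode"]) > 0
print(basket)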
Other tools like ELKI can process input formats like this, which may be easier to use (ELKI was also a lot faster than Weka):
apple banana
milk diapers beer
but last I checked, ELKI would "only" find the frequent itemsets (the harder part), not compute the association rules. I then used a tiny Python script to produce the actual association rules as desired.
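That post-processing script was roughly the following idea: for every frequent itemset, try each proper subset as the antecedent and keep the rules whose confidence clears a threshold. A sketch with made-up support counts:

from itertools import combinations

# Frequent itemsets with their support counts, e.g. from an Apriori/FP-growth run;
# the numbers here are invented for illustration only.
support = {
    frozenset(["milk"]): 40,
    frozenset(["diapers"]): 35,
    frozenset(["beer"]): 30,
    frozenset(["milk", "diapers"]): 25,
    frozenset(["diapers", "beer"]): 20,
}

MIN_CONFIDENCE = 0.6

for itemset, sup in support.items():
    if len(itemset) < 2:
        continue
    for size in range(1, len(itemset)):
        for antecedent in map(frozenset, combinations(itemset, size)):
            confidence = sup / support[antecedent]
            if confidence >= MIN_CONFIDENCE:
                consequent = itemset - antecedent
                print(f"{set(antecedent)} -> {set(consequent)}  conf={confidence:.2f}")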

Calculating the number of needed error correction words in QR code

I would like to encode QR codes. Therefore I need to know how many error correction words are needed for a specified version and correction level.
For QR version 1 in combination with ec-level Q there must be 13 error correction words and 13 data words.
I know there are tables (Tables 7, 8 and 9) in ISO/IEC 18004 where this information is stored, but I would like to know whether it is possible to calculate the number of needed error correction words.
Greets,
Raffi
Yes, you need ISO 18004. I suppose you could also look at the source code from zxing that calculates this. It happens around the method interleaveWithECBytes.
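In other words, there is no closed-form formula; the counts are table-driven. A minimal lookup sketch, filled in only for version 1 (the values the question quotes); the remaining entries would have to be copied from ISO/IEC 18004 Table 9 or from zxing's Version.java:

# Error correction codewords per (version, EC level). Only version 1 is filled in
# here (total 26 codewords); extend from the ISO table or zxing for other versions.
EC_CODEWORDS = {
    (1, "L"): 7,
    (1, "M"): 10,
    (1, "Q"): 13,
    (1, "H"): 17,
}

TOTAL_CODEWORDS = {1: 26}

def data_codewords(version, level):
    return TOTAL_CODEWORDS[version] - EC_CODEWORDS[(version, level)]

print(EC_CODEWORDS[(1, "Q")], data_codewords(1, "Q"))  # 13 13, as in the question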

dos date/time calculation

I am working on a project that involves converting data into DOS date and time. Using a hex editor (Hex Workshop) I have looked through the file manually and found the values I am looking for, but I am unsure how they are calculated. I am told that the int16 value 15430 corresponds to the date 06/02/2010, but I can see no correlation; also, the value 15430 corresponds to the time 07:34:12, and I am lost as to how it is calculated. Any help with these calculations would be very welcome.
You need to look at the bits in those numbers.
See here for details:
http://www.vsft.com/hal/dostime.htm
I know this post is very old, but a note on the time value: 15430 does decode to 07:34:12, since (7 << 11) | (34 << 5) | (12 / 2) = 15430; the low five bits store seconds divided by two. A value of 15436 would decode to 07:34:24.
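For reference, a small sketch of the bit layout described on that page: the date packs year/month/day and the time packs hour/minute/half-seconds into 16 bits each:

def decode_dos_date(value):
    day = value & 0x1F
    month = (value >> 5) & 0x0F
    year = ((value >> 9) & 0x7F) + 1980   # years are stored relative to 1980
    return year, month, day

def decode_dos_time(value):
    seconds = (value & 0x1F) * 2          # low 5 bits store seconds / 2
    minutes = (value >> 5) & 0x3F
    hours = (value >> 11) & 0x1F
    return hours, minutes, seconds

print(decode_dos_date(15430))  # (2010, 2, 6)  -> 06/02/2010
print(decode_dos_time(15430))  # (7, 34, 12)   -> 07:34:12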

Fuzzy matching of product names

I need to automatically match product names (cameras, laptops, tv-s etc) that come from different sources to a canonical name in the database.
For example "Canon PowerShot a20IS", "NEW powershot A20 IS from Canon" and "Digital Camera Canon PS A20IS"
should all match "Canon PowerShot A20 IS". I've worked with Levenshtein distance plus some added heuristics (removing obvious common words, assigning a higher cost to number changes, etc.), which works to some extent, but unfortunately not well enough.
The main problem is that even single-letter changes in relevant keywords can make a huge difference, but it's not easy to detect which are the relevant keywords. Consider for example three product names:
Lenovo T400
Lenovo R400
New Lenovo T-400, Core 2 Duo
The first two are ridiculously similar strings by any standard (OK, Soundex might help distinguish the T and R in this case, but the names might as well be 400T and 400R), while the first and the third are quite far from each other as strings yet are the same product.
Obviously, the matching algorithm cannot be 100% precise; my goal is to automatically match around 80% of the names with high confidence.
Any ideas or references are much appreciated.
I think this will boil down to distinguishing key words such as Lenovo from chaff such as New.
I would run some analysis over the database of names to identify key words. You could use code similar to that used to generate a word cloud.
Then I would hand-edit the list to remove anything obviously chaff, like maybe New is actually common but not key.
Then you will have a list of key words that can be used to help identify similarities. You would associate the "raw" name with its keywords, and use those keywords when comparing two or more raw names for similarities (literally, percentage of shared keywords).
Not a perfect solution by any stretch, but I don't think you are expecting one?
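A rough sketch of that pipeline: count token frequencies across the database (the word-cloud pass), hand-edit a chaff list, then score pairs by their shared keywords. The names, the chaff list and the scoring here are only illustrative:

from collections import Counter

raw_names = [
    "Lenovo T400",
    "New Lenovo T-400, Core 2 Duo",
    "Lenovo R400",
    "Canon PowerShot A20 IS",
]

# Rough "word cloud" pass: how often does each token occur across the database?
counts = Counter(word for name in raw_names for word in name.lower().split())
print(counts.most_common())  # review this list by hand to decide what is chaff

# Hand-edited chaff list (illustrative).
CHAFF = {"new", "core", "2", "duo"}

def keywords(name):
    return {w for w in name.lower().split() if w not in CHAFF}

def keyword_similarity(a, b):
    ka, kb = keywords(a), keywords(b)
    return len(ka & kb) / len(ka | kb)  # share of keywords in common

print(keyword_similarity("Lenovo T400", "New Lenovo T-400, Core 2 Duo"))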
The key understanding here is that you do have a proper distance metric. That is in fact not your problem at all. Your problem is in classification.
Let me give you an example. Say you have 20 entries for the Foo X1 and 20 for the Foo Y1. You can safely assume they are two groups. On the other hand, if you have 39 entries for the Bar X1 and 1 for the Bar Y1, you should treat them as a single group.
Now, the distance X1 <-> Y1 is the same in both examples, so why is there a difference in the classification? That is because Bar Y1 is an outlier, whereas Foo Y1 isn't.
The funny part is that you do not actually need to do a whole lot of work to determine these groups up front. You simply do a recursive classification: you start out with one node per entry (each its own group), and then add a supernode for the two closest nodes. In the supernode, store the best assumption, the size of its subtree and the variation within it. As many of your strings will be identical, you'll soon get large subtrees with identical entries. The recursion ends with a single supernode at the root of the tree.
Now map the canonical names against this tree. You'll quickly see that each will match an entire subtree. Now, use the distances between these trees to pick the distance cutoff for that entry. If you have both Foo X1 and Foo Y1 products in the database, the cut-off distance will need to be lower to reflect that.
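A sketch of that bottom-up tree, using SciPy's agglomerative clustering over pairwise string distances; the distance function and the cutoff value are placeholders for whatever metric and per-entry threshold you end up with:

from difflib import SequenceMatcher
from itertools import combinations
from scipy.cluster.hierarchy import fcluster, linkage

names = [
    "Lenovo T400", "Lenovo T400", "New Lenovo T-400, Core 2 Duo",
    "Lenovo R400", "Lenovo R400",
]

def distance(a, b):
    # Placeholder metric: 1 - difflib ratio; swap in whatever distance you trust.
    return 1.0 - SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Condensed pairwise distance vector, in the order SciPy expects.
condensed = [distance(a, b) for a, b in combinations(names, 2)]

# Build the tree of "supernodes" bottom-up; identical strings merge at distance 0.
tree = linkage(condensed, method="average")

# Cutting the tree at some distance yields the groups. Where to cut is the hard
# part described above: it depends on subtree size/variation and on which
# canonical names map to which subtrees. The 0.35 here is just an illustration.
print(fcluster(tree, t=0.35, criterion="distance"))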
edg's answer is in the right direction, I think - you need to distinguish key words from fluff.
Context matters. To take your example, Core 2 Duo is fluff when looking at two instances of a T400, but not when looking at a CPU OEM package.
If you can mark in your database which parts of the canonical form of a product name are more important and must appear in one form or another to identify a product, you should do that. Maybe through the use of some sort of semantic markup? Can you afford to have a human mark up the database?
You can try to define equivalency classes for things like "T-400", "T400", "T 400" etc. Maybe a set of rules that say "numbers bind more strongly than letters attached to those numbers."
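For the equivalence-class idea, a tiny normalization sketch; the rule is just an assumption about how model numbers tend to be written:

import re

def canonical_model(s):
    # Assumed rule: drop separators between a letter prefix and the digits that
    # follow, so "T-400", "T 400" and "T400" all map to the same token.
    return re.sub(r"(?<=[a-z])[\s\-_]+(?=\d)", "", s.lower())

assert canonical_model("T-400") == canonical_model("T 400") == canonical_model("T400")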
Breaking down into cases based on manufacturer, model number, etc. might be a good approach. I would recommend that you look at techniques for term spotting to try and accomplish that: http://www.worldcat.org/isbn/9780262100854
Designing everything in a flexible framework that's mostly rule driven, where the rules can be modified based on your needs and emerging bad patterns (read: things that break your algorithm) would be a good idea, as well. This way you'd be able to improve the system's performance based on real world data.
You might be able to make use of a trigram search for this. I must admit I've never seen the algorithm to implement an index, but have seen it working in pharmaceutical applications, where it copes very well indeed with badly misspelt drug names. You might be able to apply the same kind of logic to this problem.
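A quick sketch of trigram similarity, without the index part: compare two names by the Jaccard overlap of their character trigrams.

def trigrams(s):
    s = f"  {s.lower()} "  # pad so leading/trailing characters form trigrams too
    return {s[i:i + 3] for i in range(len(s) - 2)}

def trigram_similarity(a, b):
    ta, tb = trigrams(a), trigrams(b)
    return len(ta & tb) / len(ta | tb)  # Jaccard overlap of the trigram sets

print(trigram_similarity("Canon PowerShot A20 IS", "Digital Camera Canon PS A20IS"))
print(trigram_similarity("Canon PowerShot A20 IS", "Lenovo T400"))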
This is a problem of record linkage. The dedupe python library provides a complete implementation, but even if you don't use python, the documentation has a good overview of how to approach this problem.
Briefly, within the standard paradigm, this task is broken into three stages (a rough sketch follows the list):
Compare the fields, in this case just the name. You can use one or more comparators for this, for example an edit distance like the Levenshtein distance, or something like the cosine distance that compares the number of common words.
Turn the array of distance scores into a probability that a pair of records is truly about the same thing.
Cluster those pairwise probability scores into groups of records that likely all refer to the same thing.
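A very rough sketch of those three stages, using difflib's ratio as the single comparator, the ratio itself as a stand-in for the match probability, and union-find for the final grouping (the threshold and names are illustrative):

from difflib import SequenceMatcher
from itertools import combinations

names = [
    "Canon PowerShot A20 IS",
    "NEW powershot A20 IS from Canon",
    "Lenovo T400",
]

def similarity(a, b):
    # Stage 1: a single comparator (an edit-distance-like ratio on the names).
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Stage 2, crudely: treat the ratio as the match score and threshold it.
THRESHOLD = 0.6

# Stage 3: single-link grouping of matching pairs via union-find.
parent = list(range(len(names)))

def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

for i, j in combinations(range(len(names)), 2):
    if similarity(names[i], names[j]) >= THRESHOLD:
        parent[find(i)] = find(j)

clusters = {}
for i, name in enumerate(names):
    clusters.setdefault(find(i), []).append(name)
print(list(clusters.values()))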
You might want to create logic that ignores the letter/number combination of model numbers (since they're nigh always extremely similar).
I don't have any experience with this type of problem, but I think a very naive implementation would be to tokenize the search term and search for matches that happen to contain any of the tokens.
"Canon PowerShot A20 IS", for example, tokenizes into:
Canon
Powershot
A20
IS
which would match each of the other items you want to show up in the results. Of course, this strategy will likely produce a whole lot of false matches as well.
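A sketch of that naive token-overlap search (the catalog and query are just the names from the question):

catalog = [
    "Canon PowerShot A20 IS",
    "NEW powershot A20 IS from Canon",
    "Digital Camera Canon PS A20IS",
    "Lenovo T400",
]

def tokens(name):
    return set(name.lower().split())

def naive_search(query, items):
    q = tokens(query)
    # Return every item that shares at least one token with the query.
    return [item for item in items if q & tokens(item)]

print(naive_search("Canon PowerShot A20 IS", catalog))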
Another strategy would be to store "keywords" with each item, such as "camera", "canon", "digital camera", and searching based on items that have matching keywords. In addition, if you stored other attributes such as Maker, Brand, etc., you could search on each of these.
Spell checking algorithms come to mind.
Although I could not find a good sample implementation, I believe you can modify a basic spell-checking algorithm to come up with satisfactory results, i.e. working with words as the unit instead of characters.
The bits and pieces left in my memory:
Strip out all common words (a, an, the, new). What is "common" depends on context.
Take the first letter of each word and its length and make that a word key.
When a suspect word comes up, look for words with the same or similar word key (a sketch of this follows below).
It might not solve your problems directly... but you say you were looking for ideas, right?
:-)
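A sketch of that word-key recipe; the stop-word list and names are illustrative:

from collections import defaultdict

COMMON_WORDS = {"a", "an", "the", "new", "from"}  # what is "common" depends on context

def word_key(word):
    # First letter of the word plus its length, as in the recipe above.
    return f"{word[0]}{len(word)}"

def name_keys(name):
    return {word_key(w) for w in name.lower().split() if w not in COMMON_WORDS}

index = defaultdict(list)
for name in ["Canon PowerShot A20 IS", "Canon PowerShot A30"]:
    for key in name_keys(name):
        index[key].append(name)

# A "suspect" name is looked up by its word keys.
print({key: index[key] for key in name_keys("powershot a20")})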
That is exactly the problem I'm working on in my spare time. What I came up with is:
based on keywords narrow down the scope of search:
in this case you could have some hierarchy:
type --> company --> model
so that you'd match
"Digital Camera" for a type
"Canon" for company and there you'd be left with much narrower scope to search.
You could work this down even further by introducing product lines etc.
But the main point is, this probably has to be done iteratively.
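A minimal sketch of that kind of hierarchy, as a nested dictionary narrowed one level at a time (the categories and entries are made up):

# type -> company -> known models / product lines
catalog = {
    "digital camera": {
        "canon": ["PowerShot A20 IS", "PowerShot A30"],
        "nikon": ["Coolpix 885"],
    },
    "laptop": {
        "lenovo": ["T400", "R400"],
    },
}

def narrow(product_type, company):
    # Each keyword match narrows the scope of the subsequent fuzzy search.
    return catalog.get(product_type, {}).get(company, [])

print(narrow("digital camera", "canon"))  # only Canon cameras are left to match against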
We can use the Datadecision service for matching products.
It will allow you to automatically match your product data using statistical algorithms. This operation is done after defining a threshold score of confidence.
All data that cannot be automatically matched will have to be manually reviewed through a dedicated user interface.
The online service uses lookup tables to store synonyms as well as your manual matching history. This allows you to improve the data matching automation next time you import new data.
I worked on the exact same thing in the past. What I did was use an NLP method, a TF-IDF vectorizer, to assign a weight to each word. For example, in your case:
Canon PowerShot a20IS
Canon --> weight = 0.05 (not a very distinguishing word)
PowerShot --> weight = 0.37 (can be distinguishing)
a20IS --> weight = 0.96 (very distinguishing)
This will tell your model which words to care about and which to ignore. I had quite good matches thanks to TF-IDF.
But note this: a20IS will not be recognized as a20 IS; you may consider using some kind of regex to filter such cases.
After that, you can use a numeric calculation like cosine similarity.
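A sketch of that approach with scikit-learn; the letter/digit split is the regex workaround mentioned above, and the example names come from the question:

import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

names = [
    "Canon PowerShot a20IS",
    "NEW powershot A20 IS from Canon",
    "Digital Camera Canon PS A20IS",
]

def normalize(name):
    # Insert a space at letter/digit boundaries so "a20IS" and "A20 IS" end up
    # sharing tokens - the regex workaround mentioned above.
    return re.sub(r"(?<=[A-Za-z])(?=\d)|(?<=\d)(?=[A-Za-z])", " ", name).lower()

vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(normalize(n) for n in names)

# Rare tokens get high weights; ubiquitous ones (e.g. "canon") get low weights.
print(dict(zip(vectorizer.get_feature_names_out(), vectorizer.idf_)))
print(cosine_similarity(tfidf))  # pairwise similarity matrix of the three names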
