How to represent a "must have" OWL restriction in Protege?

I am trying to build an ontology using OWL. I have two classes: Test and Question. I have an object property hasQuestion with Domain (Test) and Range (Question).
If I create an individual Exam1 of the class Test and do not relate it to any Question individual, I would like the reasoner to raise an inconsistency. How can I do this in Protege?
For example:
Exam1 (Test)
Exam2 (Test) hasQuestion Quest1 (Question)
When I run the reasoner on Exam1, I should get an inconsistency since there are no questions related to it. However, Exam2 should not give an inconsistency since it has Quest1 related to it.

Is it bad or good practice to store info on an object as an attribute of that object?

I have written code with many data tables and other objects. When other people need to look into my code, they often have no idea what the difference is between the different data tables I have created. I was therefore wondering whether it is advised to simply store a description of a particular data table as an attribute of that data table.
For example:
library(data.table)
animal = c('rabbit', 'dog', 'cat')
food = c('carrot', 'bone', 'fish')
DT = data.table(animal, food)
# store a free-text description on the table itself
attr(DT, 'information') <- 'table that holds info on what different animals eat'
# read it back later
attributes(DT)$information
I suppose you could argue that this question is asking for opinion, but before anyone leaps in with that, let's look at the code that already endorses the use of attributes for this purpose. The comment function exists for both setting and getting such an attribute; its help page is entitled "Query or Set a "comment" Attribute".
Frank Harrell's Hmisc package also has a label function that attaches informative strings to dataframe columns, as well as a Label function that does the same for the dataframe itself.
Opinion supported by evidence: following Frank's lead in programming practice is "good".
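For what it's worth, a minimal sketch of both routes on the objects built in the question (the Hmisc line assumes that package is installed):
# the standard "comment" attribute; not printed when DT is displayed
comment(DT) <- 'table that holds info on what different animals eat'
comment(DT)
# per-column labels via Hmisc
library(Hmisc)
label(animal) <- 'type of animal'
label(animal)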

inverse of market basket analysis with R

I want to do an analysis of which items did not go well together in market basket analysis; basically, finding out which items together did not go out of the queue. I have a situation where a record (containing 13 attributes/columns) is incomplete because of various combinations of attributes.
For example: a1, a2, ..., a13.
Any of the above attributes may or may not have a value, but any attribute without a value makes the record incomplete.
With this situation, I need to see which combination of incomplete attributes occurs most often in my recordsets. Knowing this pattern will help my team prioritize the records which need the most attention.
I see that the Apriori algorithm takes only the values which are available, but I need to analyse the combinations that are not occurring. I am sure this problem has been solved in the past, but I don't see any hints in the forum.
Does anyone have any experience of this kind? Or do you suggest any other algorithm that I should use? I am using R for this analysis, and the total number of records is 218k.
If I grasp your situation right, you'd like to extract association rules from a dataset in which each item of a case either has a value or doesn't, restricted to the cases that have at least one item without a value, and within those cases only to the items that have no value. For this purpose the Apriori algorithm is just fine, and you don't even need to invert it. The solution lies in the formatting of the dataset: just get rid of the items with values and give each item without a value a token such as the name of that item, e.g. a12. Your dataset then contains only cases with at least one missing item and only the missing items, and those items can be identified by their values, i.e. their names. Now the Apriori algorithm can extract frequent itemsets from the formatted dataset and, subsequently, association rules. Concerning whether you should use another algorithm to extract association rules: yes, use FP-Growth. It is much faster than the Apriori algorithm.
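A minimal sketch of that recoding in R with the arules package (the data.frame name df, the column names a1..a13, and missing values being NA are all assumptions):
library(arules)
# TRUE where an attribute has no value
miss <- is.na(df[, paste0("a", 1:13)])
# per record, the names of the missing attributes; drop complete records
missing_items <- apply(miss, 1, function(r) names(r)[r])
missing_items <- missing_items[lengths(missing_items) > 0]
trans <- as(missing_items, "transactions")
# most frequent combinations of missing attributes
freq <- apriori(trans, parameter = list(target = "frequent itemsets", supp = 0.01))
inspect(sort(freq, by = "support")[1:10])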
Thanks, that answer helped. I need to analyse all the null items in each transaction and see which combination of nulls occurs most often across all the transactions.
I tried replacing all my null values with constants and did some tweaks to the Apriori algorithm to get those constants on the rhs. But I didn't understand how the FP-Growth algorithm could help with this; can you explain?

Creating a new variable based on text field in dataset (results to be "0" or "1")

I'm working on an exercise to create a support vector machine, but am stuck at an early step. The dataset I'm working with measures restaurant health violations, and may be found here: https://health.data.ny.gov/Health/Food-Service-Establishment-Last-Inspection/cnih-y5dw
This data has been imported into R-Studio. I need to look at the VIOLATIONS variable and create a variable (true/false, 0/1?) to be added to this dataset, which will be used later in the SVM portion. After a quick inspection, restaurants with no violations seem to contain the text string "No violations found." in the VIOLATIONS variable. So I'm thinking I need to set up a function to run through the thousands of records and compare entries against that text.
My guess is that I want to give restaurants with no violations a "0" or "FALSE" mark, whereas the restaurants with violations (any other text) would receive a "1" or "TRUE" mark. This needs to be processed for every entry in the dataset, and the resulting values need to be added to this dataset as a new variable (for later analysis).
I'm hoping somebody can provide hints or suggestions (or just help) on how to go about this, so I can move onto the SVM! Any ideas?
I wasn't sure of the best way to ask this, and I didn't see any good examples when I tried searching.
I called your data.frame df and the newly added column ANYVIOLATIONS.
As far as I can see from a brief glance at the provided data, VIOLATIONS always exactly matches "No violations found." if there were no violations. Thus the code to get a logical vector that meets your requirements should be quite simple:
df$ANYVIOLATIONS <- df$VIOLATIONS != "No violations found."
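If the SVM step later wants a numeric 0/1 or a factor rather than a logical, the conversion is one more line (a sketch, keeping the df and ANYVIOLATIONS names assumed above):
# 0 = "No violations found.", 1 = anything else
df$ANYVIOLATIONS <- as.integer(df$VIOLATIONS != "No violations found.")
# or, as a factor response for a classifier such as e1071::svm()
df$ANYVIOLATIONS <- factor(df$VIOLATIONS != "No violations found.",
                           levels = c(FALSE, TRUE), labels = c("0", "1"))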

R + Bioconductor : combining probesets in an ExpressionSet

First off, this may be the wrong Forum for this question, as it's pretty darn R+Bioconductor specific. Here's what I have:
library('GEOquery')
GDS = getGEO('GDS785')
cd4T = GDS2eSet(GDS)
cd4T <- cd4T[!fData(cd4T)$symbol == "",]
Now cd4T is an ExpressionSet object which wraps a big matrix with 19794 rows (probesets) and 15 columns (samples). The final line gets rid of all probesets that do not have corresponding gene symbols. Now the trouble is that most genes in this set are assigned to more than one probeset. You can see this by doing
gene_symbols = factor(fData(cd4T)$Gene.symbol)
length(gene_symbols)-length(levels(gene_symbols))
[1] 6897
So only 6897 of my 19794 probesets have unique probeset -> gene mappings. I'd like to somehow combine the expression levels of each probeset associated with each gene. I don't care much about the actual probe id for each probe. I'd like very much to end up with an ExpressionSet containing the merged information as all of my downstream analysis is designed to work with this class.
I think I can write some code that will do this by hand and make a new expression set from scratch. However, I'm assuming this can't be a new problem and that code exists to do it, using a statistically sound method to combine the gene expression levels. I'm guessing there's a proper name for this too, but my searching isn't turning up much of use. Can anyone help?
I'm not an expert, but from what I've seen over the years everyone has their own favorite way of combining probesets. The two methods that I've seen used the most on a large scale have been using only the probeset with the largest variance across the expression matrix, and taking the mean of the probesets to create a meta-probeset out of them. For smaller blocks of probesets I've seen people use more intensive methods involving looking at per-probeset plots to get a feel for what's going on ... generally what happens is that one probeset turns out to be the 'good' one and the rest aren't very good.
I haven't seen generalized code to do this - as an example we recently realized in my lab that a few of us have our own private functions to do this same thing.
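For what it's worth, a minimal sketch of the first approach (keep the highest-variance probeset per gene) on the cd4T object from the question; the Gene.symbol column name is taken from the code above:
library(Biobase)
ex  <- exprs(cd4T)
sym <- as.character(fData(cd4T)$Gene.symbol)
v   <- apply(ex, 1, var)                       # per-probeset variance across samples
# for each gene symbol, the row index of its highest-variance probeset
keep <- tapply(seq_along(v), sym, function(i) i[which.max(v[i])])
cd4T_collapsed <- cd4T[unlist(keep), ]         # still an ExpressionSet, one probeset per gene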
The function you are looking for is nsFilter in the R genefilter package. It does two major things: it keeps only probesets that map to an Entrez gene id (the rest of the probesets are filtered out), and when an Entrez id has multiple probesets, only one is retained (by default the most variable one) and the others are removed. Now you have a matrix mapped to unique Entrez gene ids. Hope this helps.
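A sketch of that call on the object from the question (it assumes the ExpressionSet carries an annotation package, which nsFilter needs for the Entrez mapping):
library(genefilter)
filtered <- nsFilter(cd4T, require.entrez = TRUE, remove.dupEntrez = TRUE)
cd4T_unique <- filtered$eset    # one probeset per Entrez gene id
filtered$filter.log             # how many probesets each filtering step removed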

Fuzzy matching of product names

I need to automatically match product names (cameras, laptops, tv-s etc) that come from different sources to a canonical name in the database.
For example "Canon PowerShot a20IS", "NEW powershot A20 IS from Canon" and "Digital Camera Canon PS A20IS"
should all match "Canon PowerShot A20 IS". I've worked with levenshtein distance with some added heuristics (removing obvious common words, assigning higher cost to number changes etc), which works to some extent, but not well enough unfortunately.
The main problem is that even single-letter changes in relevant keywords can make a huge difference, but it's not easy to detect which are the relevant keywords. Consider for example three product names:
Lenovo T400
Lenovo R400
New Lenovo T-400, Core 2 Duo
The first two are ridiculously similar strings by any standard (OK, soundex might help to distinguish the T and R in this case, but the names might as well be 400T and 400R); the first and the third are quite far from each other as strings, but are the same product.
Obviously, the matching algorithm cannot be 100% precise; my goal is to automatically match around 80% of the names with high confidence.
Any ideas or references are much appreciated.
I think this will boil down to distinguishing key words such as Lenovo from chaff such as New.
I would run some analysis over the database of names to identify key words. You could use code similar to that used to generate a word cloud.
Then I would hand-edit the list to remove anything obviously chaff, like maybe New is actually common but not key.
Then you will have a list of key words that can be used to help identify similarities. You would associate the "raw" name with its keywords, and use those keywords when comparing two or more raw names for similarities (literally, percentage of shared keywords).
Not a perfect solution by any stretch, but I don't think you are expecting one?
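A minimal sketch of that frequency pass in R (product_names is an assumed character vector of the raw names):
tokens <- unlist(strsplit(tolower(product_names), "[^a-z0-9]+"))
tokens <- tokens[tokens != ""]
freq   <- sort(table(tokens), decreasing = TRUE)
head(freq, 30)   # candidate key words (and chaff) to hand-edit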
The key understanding here is that you do have a proper distance metric. That is in fact not your problem at all. Your problem is in classification.
Let me give you an example. Say you have 20 entries for the Foo X1 and 20 for the Foo Y1. You can safely assume they are two groups. On the other hand, if you have 39 entries for the Bar X1 and 1 for the Bar Y1, you should treat them as a single group.
Now, the distance X1 <-> Y1 is the same in both examples, so why is there a difference in the classification? That is because Bar Y1 is an outlier, whereas Foo Y1 isn't.
The funny part is that you do not actually need to do a whole lot of work to determine these groups up front. You simply do a recursive classification: you start out with one node per group, and then add a supernode for the two closest nodes. In the supernode, store the best assumption, the size of its subtree and the variation in it. As many of your strings will be identical, you'll soon get large subtrees with identical entries. The recursion ends with a single supernode at the root of the tree.
Now map the canonical names against this tree. You'll quickly see that each will match an entire subtree. Now, use the distances between these trees to pick the distance cutoff for that entry. If you have both Foo X1 and Foo Y1 products in the database, the cut-off distance will need to be lower to reflect that.
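A rough sketch of that bottom-up grouping in R, using plain edit distance and hierarchical clustering; the cutoff height h is what you would tune per canonical entry as described above:
product_names <- c("Lenovo T400", "Lenovo R400", "New Lenovo T-400, Core 2 Duo")
d  <- adist(tolower(product_names))            # pairwise Levenshtein distances
hc <- hclust(as.dist(d), method = "average")   # the tree of supernodes
cutree(hc, h = 5)                              # group membership at a chosen cutoff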
edg's answer is in the right direction, I think - you need to distinguish key words from fluff.
Context matters. To take your example, Core 2 Duo is fluff when looking at two instances of a T400, but not when looking at a CPU OEM package.
If you can mark in your database which parts of the canonical form of a product name are more important and must appear in one form or another to identify a product, you should do that. Maybe through the use of some sort of semantic markup? Can you afford to have a human mark up the database?
You can try to define equivalency classes for things like "T-400", "T400", "T 400" etc. Maybe a set of rules that say "numbers bind more strongly than letters attached to those numbers."
Breaking down into cases based on manufacturer, model number, etc. might be a good approach. I would recommend that you look at techniques for term spotting to try and accomplish that: http://www.worldcat.org/isbn/9780262100854
Designing everything in a flexible framework that's mostly rule driven, where the rules can be modified based on your needs and emerging bad patterns (read: things that break your algorithm) would be a good idea, as well. This way you'd be able to improve the system's performance based on real world data.
You might be able to make use of a trigram search for this. I must admit I've never seen the algorithm used to implement such an index, but I have seen it working in pharmaceutical applications, where it copes very well indeed with badly misspelt drug names. You might be able to apply the same kind of logic to this problem.
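A toy illustration of trigram similarity in base R (purely a sketch; a production trigram index would live in the database layer, for example something like PostgreSQL's pg_trgm):
trigrams <- function(s) {
  s <- gsub("[^a-z0-9]", "", tolower(s))
  if (nchar(s) < 3) return(character(0))
  unique(substring(s, 1:(nchar(s) - 2), 3:nchar(s)))
}
# Jaccard similarity on the two names' trigram sets
trigram_sim <- function(a, b) {
  ta <- trigrams(a); tb <- trigrams(b)
  length(intersect(ta, tb)) / length(union(ta, tb))
}
trigram_sim("Canon PowerShot a20IS", "Digital Camera Canon PS A20IS")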
This is a problem of record linkage. The dedupe python library provides a complete implementation, but even if you don't use python, the documentation has a good overview of how to approach this problem.
Briefly, within the standard paradigm, this task is broken into three stages
Compare the fields, in this case just the name. You can use one or more comparators for this, for example an edit distance like the Levenshtein distance or something like the cosine distance that compares the number of common words.
Turn the array of distance scores into a probability that a pair of records is truly about the same thing.
Cluster those pairwise probability scores into groups of records that likely all refer to the same thing.
You might want to create logic that ignores the letter/number combination of model numbers (since they're nigh always extremely similar).
I don't have any experience with this type of problem, but I think a very naive implementation would be to tokenize the search term and search for matches that happen to contain any of the tokens.
"Canon PowerShot A20 IS", for example, tokenizes into:
Canon
Powershot
A20
IS
which would match each of the other items you want to show up in the results. Of course, this strategy will likely produce a whole lot of false matches as well.
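In R the tokenization itself is a one-liner (candidate_names is an assumed character vector of names to search; note how the short token "is" would match almost anything, which is exactly the false-match problem just mentioned):
tokens  <- strsplit(tolower("Canon PowerShot A20 IS"), "\\s+")[[1]]
# any candidate whose name contains at least one token
matches <- grepl(paste(tokens, collapse = "|"), tolower(candidate_names))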
Another strategy would be to store "keywords" with each item, such as "camera", "canon", "digital camera", and searching based on items that have matching keywords. In addition, if you stored other attributes such as Maker, Brand, etc., you could search on each of these.
Spell checking algorithms come to mind.
Although I could not find a good sample implementation, I believe you can modify a basic spell checking algorithm to come up with satisfactory results, i.e. working with words as the unit instead of characters.
The bits and pieces left in my memory:
Strip out all common words (a, an, the, new). What is "common" depends on context.
Take the first letter of each word and its length and make that a word key.
When a suspect word comes up, look for words with the same or similar word key (a toy version is sketched below).
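A toy version of that word key in R (a sketch only):
word_key <- function(w) paste0(tolower(substr(w, 1, 1)), nchar(w))
word_key(c("PowerShot", "Powershot", "PawerShot"))   # all map to "p9"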
It might not solve your problems directly... but you say you were looking for ideas, right?
:-)
That is exactly the problem I'm working on in my spare time. What I came up with is:
based on keywords narrow down the scope of search:
in this case you could have some hierarchy:
type --> company --> model
so that you'd match
"Digital Camera" for a type
"Canon" for company and there you'd be left with much narrower scope to search.
You could work this down even further by introducing product lines etc.
But the main point is, this probably has to be done iteratively.
We can use the Datadecision service for matching products.
It will allow you to automatically match your product data using statistical algorithms. This operation is done after defining a threshold score of confidence.
All data that cannot be automatically matched will have to be manually reviewed through a dedicated user interface.
The online service uses lookup tables to store synonyms as well as your manual matching history. This allows you to improve the data matching automation next time you import new data.
I worked on the exact same thing in the past. What I did was use an NLP method, a TF-IDF vectorizer, to assign weights to each word. For example, in your case:
Canon PowerShot a20IS
Canon --> weight = 0.05 (not a very distinguishing word)
PowerShot --> weight = 0.37 (can be distinguishing)
a20IS --> weight = 0.96 (very distinguishing)
This will tell your model which words to pay attention to and which words to ignore. I had quite good matches thanks to TF-IDF.
But note this: a20IS will not be recognized as the same token as a20 IS; you may want to use some kind of regex to normalize such cases.
After that, you can use a numeric calculation like cosine similarity.
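A rough, hand-rolled sketch of that weighting plus cosine similarity in base R (in practice you might reach for a package such as tm or text2vec, and the weights will not match the illustrative numbers above):
product_names <- c("Canon PowerShot a20IS",
                   "NEW powershot A20 IS from Canon",
                   "Digital Camera Canon PS A20IS")
docs  <- strsplit(tolower(product_names), "[^a-z0-9]+")
vocab <- sort(unique(unlist(docs)))
# term-frequency matrix: one row per name, one column per word
tf    <- t(sapply(docs, function(d) table(factor(d, levels = vocab))))
idf   <- log(length(docs) / colSums(tf > 0))    # rarer words get larger weights
tfidf <- sweep(tf, 2, idf, `*`)
cosine <- function(a, b) sum(a * b) / sqrt(sum(a^2) * sum(b^2))
cosine(tfidf[1, ], tfidf[2, ])                  # similarity of name 1 vs name 2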
