How are complex conditions represented in a decision table?

I am trying to model a decision table template.
I understand that a simple rule like
(x > 10 and y < 10) print "red"
can be represented in a decision table with one row, using two columns for the conditions and one column for the action:
+-----+-----+-------------+
| X   | Y   | Action      |
+-----+-----+-------------+
| >10 | <10 | Print "red" |
+-----+-----+-------------+
How are conditions like
((x > 10 and y < 10) or x > 1) or (z < 5 and y > 5) print "red"
represented in decision tables?
I assume such a compound condition is expanded into many rows, one for each combination of sub-conditions that makes it true, with the same action repeated in every row. Is there a method for reducing conditions like this to decision table form?
In that case, however, the action fires on multiple rows, whereas there is only one action. Is there a column for grouping?

One approach is to number the actions and reference them by number from the decision table. If an action has already fired during an evaluation run, subsequent firings of the same action are ignored.
Here is an example:
+-----+-----+-----+--------+
| X   | Y   | Z   | Action |
+-----+-----+-----+--------+
| >10 | >10 | -   | 1      |
+-----+-----+-----+--------+
| >10 | <10 | -   | 2      |
+-----+-----+-----+--------+
| >50 | -   | -   | 2      |
+-----+-----+-----+--------+
| -   | -   | >5  | 2      |
+-----+-----+-----+--------+
The action number corresponds to an action in this table:
+-----+--------------+
| #   | Action       |
+-----+--------------+
| 1   | Print "red"  |
+-----+--------------+
| 2   | Print "blue" |
+-----+--------------+
If action #2 is fired because x>10 AND y<10, it wouldn't fire again even if x>50 or z>5.
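Here is a rough sketch of that evaluation scheme (Python; the row and action encodings are invented for illustration, since the answer above does not prescribe an implementation):
# Each row pairs a condition over the inputs with an action number (hypothetical encoding)
rows = [
    (lambda x, y, z: x > 10 and y > 10, 1),
    (lambda x, y, z: x > 10 and y < 10, 2),
    (lambda x, y, z: x > 50,            2),
    (lambda x, y, z: z > 5,             2),
]

actions = {1: 'Print "red"', 2: 'Print "blue"'}

def evaluate(x, y, z):
    fired = set()  # remembers actions already fired during this run
    for condition, action in rows:
        if condition(x, y, z) and action not in fired:
            fired.add(action)
            print(actions[action])

evaluate(x=60, y=5, z=9)  # action 2 matches three rows but fires only once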

Count merged observations and calculate fraction

I merged two data sets using Stata and now I need to find the fraction and number of projects matched. To do this, I am assuming that I will need to calculate two counts.
How do I get both of the counts to display at the same time, and then divide one by the other?
Below is an example of my _merge variable:
4022. | master only (1) |
4023. |     matched (3) |
4024. |  using only (2) |
4025. |  using only (2) |
4026. |  using only (2) |
4027. |     matched (3) |
4028. |     matched (3) |
4029. |     matched (3) |
4030. |     matched (3) |
I would first like to count and store all of the observations under _merge, then count those that are not "master only", and then divide one count by the other.
For example:
count1 count2 fraction
6019 4020 .66 (4020/6019)
With count1 being everything under _merge, and count2 being everything that is not "master only".
Using the following toy example:
clear
webuse autosize
merge 1:1 make using http://www.stata-press.com/data/r14/autoexpense
First it is a good idea to confirm the value which corresponds to "master only":
list _merge
     +-----------------+
     |          _merge |
     |-----------------|
  1. |     matched (3) |
  2. |     matched (3) |
  3. |     matched (3) |
  4. | master only (1) |
  5. |     matched (3) |
     |-----------------|
  6. |     matched (3) |
     +-----------------+
list _merge, nolabel
     +--------+
     | _merge |
     |--------|
  1. |      3 |
  2. |      3 |
  3. |      3 |
  4. |      1 |
  5. |      3 |
     |--------|
  6. |      3 |
     +--------+
Then generate the three variables by first counting the relevant observations and dividing:
count                 // all observations
generate count1 = r(N)
count if _merge != 1  // everything except "master only"
generate count2 = r(N)
generate fraction = count2 / count1
display count1
6
display count2
5
display fraction
.83333333

Adding a calculated field in rpivotTable

I want to create a calculated field to use with the rpivotTable package, similar to the functionality available in Excel.
For instance, consider the following table:
+--------------+--------+---------+-------------+-----------------+
| Manufacturer | Vendor | Shipper | Total Units | Defective Units |
+--------------+--------+---------+-------------+-----------------+
| A            | P      | X       | 173247      | 34649           |
| A            | P      | Y       | 451598      | 225799          |
| A            | P      | Z       | 759695      | 463414          |
| A            | Q      | X       | 358040      | 225565          |
| A            | Q      | Y       | 102068      | 36744           |
| A            | Q      | Z       | 994961      | 228841          |
| A            | R      | X       | 454672      | 231883          |
| A            | R      | Y       | 275994      | 124197          |
| A            | R      | Z       | 691100      | 165864          |
| B            | P      | X       | 755594      | 302238          |
| .            | .      | .       | .           | .               |
| .            | .      | .       | .           | .               |
+--------------+--------+---------+-------------+-----------------+
(my actual table has many more columns, both dimensions and measures, time, etc. and I need to define multiple such "calculated columns")
If I want to calculate the defect rate (Defective Units / Total Units) aggregated by any of the first three columns, I'm not able to.
I tried assignment by reference (:=), but that still didn't work: the pivot table then summed the per-row defect rates (i.e., sum(Defective_Units/Total_Units)) instead of computing sum(Defective_Units)/sum(Total_Units):
myData[, Defect.Rate := Defective_Units / Total_Units]
This ended up giving me defect rates greater than 1. Is there a way to declare a calculated field that is simply a formula evaluated after aggregation?
You're in luck: the creator of pivottable.js foresaw cases like yours (and mine, earlier today) and implemented an aggregator called "Sum over Sum", along with a few similar ones; cf. https://github.com/nicolaskruchten/pivottable/blob/master/src/pivot.coffee#L111 and https://github.com/nicolaskruchten/pivottable/blob/master/src/pivot.coffee#L169.
So we'll pass "Sum over Sum" as the "aggregatorName" parameter, and the columns whose quotient we want as the "vals" parameter.
Here's a meaningless usage example from the mtcars data for reproducibility:
require(rpivotTable)
data(mtcars)
rpivotTable(mtcars, rows = "gear", cols = c("cyl", "carb"),
            aggregatorName = "Sum over Sum",
            vals = c("mpg", "disp"),
            width = "100%", height = "400px")
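For intuition on why the two aggregations differ, here is a small illustration (plain Python, not part of the rpivotTable API; the numbers are the first two data rows of the table in the question):
# Two rows from the example table: (total units, defective units)
rows = [(173247, 34649), (451598, 225799)]

# Summing per-row rates overstates the aggregate and can even exceed 1:
sum_of_rates = sum(d / t for t, d in rows)                        # ~0.70

# "Sum over Sum" computes the rate of the sums instead:
rate_of_sums = sum(d for _, d in rows) / sum(t for t, _ in rows)  # ~0.42

print(sum_of_rates, rate_of_sums)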

Get weight of words by occurrence

Maybe this is related to math.stackexchange, but I am afraid that I would just get a formula as an answer that I wouldn't understand.
I have products in our database, and products from different suppliers in another table.
What I want is to pair these suppliers' products with our products where possible, or at least show me a list where the match quality is high.
I iterated through all the suppliers' products, split each product name on spaces, and stored every word in a table together with its occurrence count.
The table looks like this:
+--------+-------------+---------------+-------+
| id     | word        | originalWord  | count |
+--------+-------------+---------------+-------+
| 220950 | Tracer      | Tracer        | 493   |
| 220951 | Destroyer   | Destroyer     | 3     |
| 220952 | Avago5050   | Avago5050     | 4     |
| 220953 | mouse       | mouse         | 2535  |
| 220954 | TRAMYS44916 | /TRAMYS44916/ | 2     |
| 220955 | GameZone    | GameZone      | 16    |
| 220956 | Enduro      | Enduro        | 3     |
| 220957 | AVAGO       | AVAGO         | 10    |
| 220958 | 5050        | 5050          | 4     |
| 220959 | optical     | optical       | 2370  |
| 220960 | USB         | USB           | 6160  |
+--------+-------------+---------------+-------+
and so on. Of course, in another table I stored which product id each word belongs to.
What I want is to determine the weight of a word by its occurrence count.
As you can see, the word TRAMYS44916 occurs only twice; it is almost certainly a part number, so it is the heaviest word. Its weight should be 1.
The most frequent word is USB, with 6160 occurrences, so its weight should be something like 0.01, I think.
What is the best way to get the weights of all the words?
There are other tables for other suppliers, so the distribution always changes.
This reminds me of Naive Bayes text classification. To determine which product a name belongs to, you can calculate the tf-idf of all the words.
Then, if you want to pair it with another product name, you can decompose that name into words again and select the product id with the highest term value; however, you should probably specify some threshold, because in some cases the match will not be clear.
tf-idf = ("number of word matches in product name" / "word count of product name") * log("number of products" / "number of products that contain the word")
You can see how it is done in the example here (in your case the document will be the full product name): https://en.wikipedia.org/wiki/Tf–idf#Example_of_tf.E2.80.93idf
Example implementation in Java: https://guendouz.wordpress.com/2015/02/17/implementation-of-tf-idf-in-java/
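As a minimal sketch of that formula (plain Python; the product names are hypothetical stand-ins for your supplier table, and each product name plays the role of a document):
import math
from collections import Counter

# Hypothetical product names standing in for the supplier table
products = [
    "Tracer Destroyer Avago5050 mouse",
    "Tracer GameZone Enduro mouse optical USB",
    "Tracer TRAMYS44916 mouse optical USB",
]

# Each product name is treated as one document
docs = [name.lower().split() for name in products]
n_docs = len(docs)

# Document frequency: in how many product names does each word appear?
df = Counter(word for doc in docs for word in set(doc))

# idf: rare words (e.g. part numbers) get high weight, ubiquitous words low weight
idf = {word: math.log(n_docs / count) for word, count in df.items()}

def tfidf(doc):
    tf = Counter(doc)
    return {w: (tf[w] / len(doc)) * idf[w] for w in tf}

for name, doc in zip(products, docs):
    print(name, "->", tfidf(doc))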

What database schema to use for storing survey answers

I'm required to design a survey system for our customer.
It's based on ASP.NET, and the database used is Oracle.
I have no experience here, so I'd like to ask for advice about what database schema to use for storing user answers; I'm afraid my current design is likely to have performance issues...
About the survey:
There'll be two or more surveys going on at the same time.
Surveys may be triggered once a year or more frequently, so I think I need a Survey Period table.
Surveys target different products, so there'll be a mapping between products and surveys.
Here is my current design:
Survey Category table
+------------+--------------+
| CategoryId | CategoryName |
+------------+--------------+
| 1          | cat1         |
| 2          | cat2         |
+------------+--------------+
Survey Category version table
+-----------+------------+--------------------+
| VersionId | CategoryId | VersionDescription |
+-----------+------------+--------------------+
| 1         | 1          | 'cat1 version1'    |
| 2         | 1          | 'cat1 version2'    |
| 3         | 2          | 'cat2 version1'    |
+-----------+------------+--------------------+
Survey Period table
+----------+-------------------+
| PeriodId | PeriodDescription |
+----------+-------------------+
| 1        | 'cat1 period2016' |
| 2        | 'cat1 period2017' |
| 3        | 'cat2 period2016' |
+----------+-------------------+
Survey Period-Version map table
+----------+-----------+
| PeriodId | VersionId |
+----------+-----------+
| 1        | 1         |
| 1        | 2         |
| 2        | 1         |
| 3        | 3         |
+----------+-----------+
A Version-Question map table
+-----------+------------+
| VersionId | QuestionId |
+-----------+------------+
| 1         | 1          |
| 1         | 2          |
| 1         | 3          |
| 2         | 1          |
| 2         | 2          |
| 3         | 1          |
+-----------+------------+
A Version-Product map table
+-----------+-----------+
| VersionId | ProductId |
+-----------+-----------+
| 1         | 'prodA'   |
| 1         | 'prodB'   |
| 1         | 'prodC'   |
| 2         | 'prodA'   |
+-----------+-----------+
And to store the survey result data, I have to put a lot of duplicated information across rows:
User Answer table
+----------+------------+----------+-----------+-----------+--------+-----------+
| AnswerId | QuestionId | PeriodId | UserId/Ip | ProductId | Answer | VersionId |
+----------+------------+----------+-----------+-----------+--------+-----------+
| 1        | 1          | 1        | 'adam'    | 'prodA'   | 'Yes'  | 2         |
| 2        | 2          | 1        | 'Joe'     | 'prodA'   | 'Yes'  | 2         |
| 3        | 1          | 2        | 'adam'    | 'prodB'   | 'A'    | 3         |
+----------+------------+----------+-----------+-----------+--------+-----------+
We're expecting tens of products and thousands of users for this system.
So assume 30 products, 5000 users, 50 questions per survey, and 4 surveys per year: in the current design, there'll be 5000 * 4 * 50 * 30 = 30 million records added to the User Answer table per year.
I'm really afraid it won't work properly, so are there any suggestions for optimizing?
Edit 1:
Added a VersionId column to the User Answer table, as suggested.
This looks like a case of premature optimization. You should probably worry more about correctness and flexibility than about performance.
30 million rows per year, especially in such skinny tables, is a small amount of data for any Oracle system. Don't worry too much about indexes and partitioning yet; those can be added later if necessary.
Your solution is similar to the Entity-Attribute-Value (EAV) model. It's worth knowing that term, since much has been written about it. There are two common problems with EAV models you want to avoid:
Avoid extremes. Don't use EAV for everything, but don't completely avoid it either. EAV is slow and inconvenient compared to a normal table structure, so it should not be used for every interesting column; otherwise you have created a database within a database. For example, if virtually every survey has fields like a username and a creation date, store those as regular columns, not in a generic column. It's OK to have a column that is only populated 99% of the time. On the other hand, it's also a bad idea to avoid EAV entirely and try to hack something together with 1,000-column tables or object-relational types.
Always use the correct type. Always, always, always store data as the correct type. Store numbers as numbers, dates as dates, and strings as strings. Your queries will be easier, faster, and safer if you have at least three columns for the data: ANSWER_NUMBER, ANSWER_STRING, ANSWER_DATE. (I explain the type-safety problem more in this answer.) Those extra columns may look bad in the model diagram, but they are a life-saver when you're querying the data.
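As a minimal sketch of that typed-column idea (plain Python; the routing function and its name are my illustration, only the ANSWER_* column names come from the advice above):
from datetime import date

def answer_row(answer):
    # Route an answer value into the typed column it belongs in
    row = {"ANSWER_NUMBER": None, "ANSWER_STRING": None, "ANSWER_DATE": None}
    if isinstance(answer, bool):           # check bool first: bool is a subclass of int
        row["ANSWER_STRING"] = "Yes" if answer else "No"
    elif isinstance(answer, (int, float)):
        row["ANSWER_NUMBER"] = answer
    elif isinstance(answer, date):
        row["ANSWER_DATE"] = answer
    else:
        row["ANSWER_STRING"] = str(answer)
    return row

print(answer_row(42))            # lands in ANSWER_NUMBER
print(answer_row("Yes"))         # lands in ANSWER_STRING
print(answer_row(date.today()))  # lands in ANSWER_DATE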

Combine DataFrame rows into a new column

I am wondering if there is a simple way to achieve this in Julia, besides iterating over the rows in a for loop.
I have a table with two columns that looks like this:
| Name | Interest |
|------|----------|
| AJ   | Football |
| CJ   | Running  |
| AJ   | Running  |
| CC   | Baseball |
| CC   | Football |
| KD   | Cricket  |
...
I'd like to create a table where each Name in the first column is matched with a combined Interest column, as follows:
| Name | Interest           |
|------|--------------------|
| AJ   | Football, Running  |
| CJ   | Running            |
| CC   | Baseball, Football |
| KD   | Cricket            |
...
How do I achieve this?
UPDATE: OK, so after trying a few things, including print_joint and groupby, I realized that the easiest way to do this would be the by() function. I'm 99% there.
by(myTable, :Name, df->DataFrame(Interest = string(df[:Interest])))
This gives me my :Interest column as "UTF8String[\"Running\"]", and I can't figure out which method I should use instead of string() (or where to typecast) to get the desired ASCIIString output.
