Extracting contents from a J48 decision tree in R

I have the following decision tree, created with the RWeka package by the command J48(NSP ~ ., data = training):
[[1]]
J48 pruned tree
------------------
MSTV <= 0.4
| MLTV <= 4.1: 3 -2
| MLTV > 4.1
| | ASTV <= 79
| | | b <= 1383:00:00 2 -18
| | | b > 1383
| | | | UC <= 05:00 1 -2
| | | | UC > 05:00 2 -2
| | ASTV > 79:00:00 3 -2
MSTV > 0.4
| DP <= 0
| | ALTV <= 09:00 1 (170.0/2.0)
| | ALTV > 9
| | | FM <= 7
| | | | LBE <= 142:00:00 1 (27.0/1.0)
| | | | LBE > 142
| | | | | AC <= 2
| | | | | | e <= 1058:00:00 1 -5
| | | | | | e > 1058
| | | | | | | DL <= 04:00 2 (9.0/1.0)
| | | | | | | DL > 04:00 1 -2
| | | | | AC > 02:00 1 -3
| | | FM > 07:00 2 -2
| DP > 0
| | DP <= 1
| | | UC <= 03:00 2 (4.0/1.0)
| | | UC > 3
| | | | MLTV <= 0.4: 3 -2
| | | | MLTV > 0.4: 1 -8
| | DP > 01:00 3 -8
Number of Leaves : 16
Size of the tree : 31
I would like to extract the nodes' values in two formats.
In the first format, I want only the name of the property, such as MSTV, MLTV, DP, etc. Each level of the tree should follow its parent, with '(' as the separator between levels, like this:
(MSTV (MLTV...) (DP...) )
In the second format I would like the nodes together with their split values, such as:
(MSTV 0.4 (MLTV 4.1 ....) (DP 0..... ) )
How can I extract the relevant information? To separate the node values, I think we could strip characters with something like gsub("[A-Z]:", "", string), but we also need to ignore the header and footer lines.
Thanks a lot for your help.
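As a starting point for the first format, here is a rough sketch (untested against real RWeka output; tree_lines below is a hypothetical stand-in for the captured print-out): strip the "|" indentation, then keep the leading alphabetic token of each line. The indentation depth (the number of "|" characters per line) gives the nesting level you would need in order to emit the parentheses.

```r
# Sketch: pull split-variable names from printed J48 tree lines.
# tree_lines is a hypothetical stand-in for readLines() on the captured tree.
tree_lines <- c("MSTV <= 0.4",
                "|   MLTV <= 4.1: 3 -2",
                "|   MLTV > 4.1",
                "|   |   ASTV <= 79")

# Drop the "|" indentation and spaces, then keep the token
# before the comparison operator.
node_names <- sub("^([A-Za-z]+).*", "\\1", gsub("[| ]+", "", tree_lines))
node_names
# "MSTV" "MLTV" "MLTV" "ASTV"
```

Pairing each name with the count of "|" characters on its line (e.g. via lengths(regmatches(tree_lines, gregexpr("\\|", tree_lines)))) would then let you walk the levels and insert the separators.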

Related

SQLite: count occurrences per year

So let's say I have a table in my SQLite database with some information about some files, with the following structure:
| id | file format | creation date |
----------------------------------------------------------
| 1 | Word | 2010:02:12 13:31:33+01:00 |
| 2 | PSD | 2021:02:23 15:44:51+01:00 |
| 3 | Word | 2019:02:13 14:18:11+01:00 |
| 4 | Word | 2010:02:12 13:31:20+01:00 |
| 5 | Word | 2003:05:25 18:55:10+02:00 |
| 6 | PSD | 2014:07:20 20:55:58+02:00 |
| 7 | Word | 2014:07:20 21:09:24+02:00 |
| 8 | TIFF | 2011:03:30 11:56:56+02:00 |
| 9 | PSD | 2015:07:15 14:34:36+02:00 |
| 10 | PSD | 2009:08:29 11:25:57+02:00 |
| 11 | Word | 2003:05:25 20:06:18+02:00 |
I would like results that show me a chronology of how many of each file format were created in a given year – something along the lines of this:
|Format| 2003 | 2009 | 2010 | 2011 | 2014 | 2015 | 2019 | 2021 |
----------------------------------------------------------------
| Word | 2 | 0 | 0 | 2 | 0 | 0 | 2 | 0 |
| PSD | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 1 |
| TIFF | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 |
I've gotten kinda close (I think) with this, but am stuck:
SELECT
file_format,
COUNT(CASE file_format WHEN creation_date LIKE '%2010%' THEN 1 ELSE 0 END),
COUNT(CASE file_format WHEN creation_date LIKE '%2011%' THEN 1 ELSE 0 END),
COUNT(CASE file_format WHEN creation_date LIKE '%2012%' THEN 1 ELSE 0 END)
FROM
fileinfo
GROUP BY
file_format;
When I do this I am getting unique amounts for each file format, but the same count for every year…
|Format| 2010 | 2011 | 2012 |
-----------------------------
| Word | 4 | 4 | 4 |
| PSD | 1 | 1 | 1 |
| TIFF | 6 | 6 | 6 |
Why am I getting that incorrect tally, and moreover, is there a smarter way of querying that doesn't rely on the year being statically searched for as a string for every single year? If it helps, the column headers and row headers could be switched – doesn't matter to me. Please help a n00b :(
Use the SUM() aggregate function for conditional aggregation:
SELECT file_format,
SUM(creation_date LIKE '2010%') AS `2010`,
SUM(creation_date LIKE '2011%') AS `2011`,
..........................................
FROM fileinfo
GROUP BY file_format;
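If you don't want to hard-code one SUM() per year, a sketch of an alternative (long format rather than pivoted, since SQLite has no PIVOT operator): group on the year prefix of creation_date. This assumes the dates always begin with a four-digit year, as in your sample.

```sql
-- One row per (format, year) instead of one column per year.
-- substr(creation_date, 1, 4) takes the leading "YYYY" of dates
-- like '2010:02:12 13:31:33+01:00'.
SELECT file_format,
       substr(creation_date, 1, 4) AS year,
       COUNT(*) AS n
FROM fileinfo
GROUP BY file_format, year
ORDER BY file_format, year;
```

Any new year in the data then shows up automatically; pivoting the result into one column per year can be done in the application layer.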

How to subset a dataframe using a column from another dataframe in r?

I have 2 dataframes
Dataframe1:
| Cue | Ass_word | Condition | Freq | Cue_Ass_word |
1 | ACCENDERE | ACCENDINO | A | 1 | ACCENDERE_ACCENDINO
2 | ACCENDERE | ALLETTARE | A | 0 | ACCENDERE_ALLETTARE
3 | ACCENDERE | APRIRE | A | 1 | ACCENDERE_APRIRE
4 | ACCENDERE | ASCENDERE | A | 1 | ACCENDERE_ASCENDERE
5 | ACCENDERE | ATTIVARE | A | 0 | ACCENDERE_ATTIVARE
6 | ACCENDERE | AUTO | A | 0 | ACCENDERE_AUTO
7 | ACCENDERE | ACCENDINO | B | 2 | ACCENDERE_ACCENDINO
8 | ACCENDERE| ALLETTARE | B | 3 | ACCENDERE_ALLETTARE
9 | ACCENDERE| ACCENDINO | C | 2 | ACCENDERE_ACCENDINO
10 | ACCENDERE| ALLETTARE | C | 0 | ACCENDERE_ALLETTARE
Dataframe2:
| Group.1 | x
1 | ACCENDERE_ACCENDINO | 5
13 | ACCENDERE_FUOCO | 22
16 | ACCENDERE_LUCE | 10
24 | ACCENDERE_SIGARETTA | 6
....
I want to exclude from Dataframe1 all the rows that contain words (Cue_Ass_word) that are not reported in the column Group.1 in Dataframe2.
In other words, how can I subset Dataframe1 using the strings reported in Dataframe2$Group.1?
It's not quite clear what you mean, but is this what you need?
Dataframe1[Dataframe1$Cue_Ass_word %in% Dataframe2$Group.1, ]
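A tiny self-contained illustration (toy stand-ins, not your real frames) of keeping only the rows whose Cue_Ass_word appears in Group.1:

```r
# Toy stand-ins for the two data frames
Dataframe1 <- data.frame(Cue_Ass_word = c("ACCENDERE_ACCENDINO",
                                          "ACCENDERE_ALLETTARE",
                                          "ACCENDERE_FUOCO"),
                         Freq = c(1, 0, 22),
                         stringsAsFactors = FALSE)
Dataframe2 <- data.frame(Group.1 = c("ACCENDERE_ACCENDINO", "ACCENDERE_FUOCO"),
                         stringsAsFactors = FALSE)

# Keep only the rows whose Cue_Ass_word is listed in Dataframe2$Group.1
kept <- Dataframe1[Dataframe1$Cue_Ass_word %in% Dataframe2$Group.1, ]
kept$Cue_Ass_word
# "ACCENDERE_ACCENDINO" "ACCENDERE_FUOCO"
```

Negating the condition with ! would instead keep the rows that are absent from Group.1.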

Extracting columns from text file

I load a text file (tree.txt) to R, with the below content (copy pasted from JWEKA - J48 command).
I use the following command to load the text file:
data3 <- read.table(file.choose(), header = FALSE, sep = ",")
I would like to put each column into a separate variable, named COL1, COL2, ..., COL8 (in this example, since we have 8 columns). If you load the file into Excel with delimited separation, each field ends up in its own column (this is the required result). Each COLn will contain the relevant characters of the tree at that position.
How can I split the text file into these columns automatically, while ignoring the header and footer content of the file?
Here is the text file content:
[[1]]
J48 pruned tree
------------------
MSTV <= 0.4
| MLTV <= 4.1: 3 -2
| MLTV > 4.1
| | ASTV <= 79
| | | b <= 1383:00:00 2 -18
| | | b > 1383
| | | | UC <= 05:00 1 -2
| | | | UC > 05:00 2 -2
| | ASTV > 79:00:00 3 -2
MSTV > 0.4
| DP <= 0
| | ALTV <= 09:00 1 (170.0/2.0)
| | ALTV > 9
| | | FM <= 7
| | | | LBE <= 142:00:00 1 (27.0/1.0)
| | | | LBE > 142
| | | | | AC <= 2
| | | | | | e <= 1058:00:00 1 -5
| | | | | | e > 1058
| | | | | | | DL <= 04:00 2 (9.0/1.0)
| | | | | | | DL > 04:00 1 -2
| | | | | AC > 02:00 1 -3
| | | FM > 07:00 2 -2
| DP > 0
| | DP <= 1
| | | UC <= 03:00 2 (4.0/1.0)
| | | UC > 3
| | | | MLTV <= 0.4: 3 -2
| | | | MLTV > 0.4: 1 -8
| | DP > 01:00 3 -8
Number of Leaves : 16
Size of the tree : 31
An example of the COL1 content will be:
MSTV
|
|
|
|
|
|
|
|
MSTV
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
COL2 content will be:
MLTV
MLTV
|
|
|
|
|
|
>
DP
|
|
|
|
|
|
|
|
|
|
|
|
DP
|
|
|
|
|
|
Try this:
cleaned.txt <- capture.output(cat(paste0(tail(head(readLines("FILE_LOCATION"), -4), -4),
                                         collapse = '\n'), sep = '\n'))
cleaned.df <- read.fwf(file = textConnection(cleaned.txt),
                       header = FALSE,
                       widths = rep.int(4, max(nchar(cleaned.txt) / 4)),
                       strip.white = TRUE)
cleaned.df <- cleaned.df[, colSums(is.na(cleaned.df)) < nrow(cleaned.df)]
For the cleaning process, I use a combination of head() and tail() to remove the 4 lines at the top and at the bottom. There is probably a more efficient way to do this outside of R, but this isn't so bad; generally, I'm just making the file readable to R.
Your file looks like a fixed-width file, so I use read.fwf(), with textConnection() pointing the function at the cleaned output.
Finally, I'm not sure how your data is actually structured, but when I copied it from Stack Overflow it pasted with a lot of whitespace at the end of each line. I'm using some tricks to guess how wide the file is, and then removing the extraneous columns:
widths = rep.int(4, max(nchar(cleaned.txt)/4))
cleaned.df <- cleaned.df[,colSums(is.na(cleaned.df))<nrow(cleaned.df)]
Next, I'm creating the data in the way you would like it structured.
for (i in colnames(cleaned.df)) {
assign(i, subset(cleaned.df, select=i))
assign(i, capture.output(cat(paste0(unlist(get(i)[get(i)!=""])),sep = ' ', fill = FALSE)))
}
rm(i)
rm(cleaned.df)
rm(cleaned.txt)
This loops over each column header in your data frame.
It uses assign() to put the data from each column into its own data frame; in your case, they are named V1 through V15.
Next, it uses a combination of cat() and paste0() with unlist() and capture.output() to collapse each of those data frames into a single character vector, so they are now character vectors instead of data frames.
Keep in mind that because you wanted the entries separated by a space, I'm using a space as the separator. But because this is a fixed-width file, some fields are completely blank, which I remove using
get(i)[get(i)!=""]
(Your question said you wanted COL2 to be: MLTV MLTV | | | | | | > DP | | | | | | | | | | | | DP | | | | | |).
If we just used get(i), there would be leading whitespace in the output.
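As a minimal check of the head()/tail() cleaning step (toy lines standing in for readLines() on your file; the offsets depend on how many header and footer lines your file actually has):

```r
# Toy stand-in: 3 header lines, 2 data lines, 2 footer lines
lines <- c("[[1]]", "J48 pruned tree", "------------------",
           "MSTV <= 0.4", "|   MLTV <= 4.1: 3 -2",
           "Number of Leaves : 16", "Size of the tree : 31")

# Drop the 2 footer lines with head(), then the 3 header lines with tail()
body <- tail(head(lines, -2), -3)
body
# "MSTV <= 0.4"  "|   MLTV <= 4.1: 3 -2"
```

The cleaned vector can then be handed to read.fwf() via textConnection() as in the answer above.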

How to get alphabet string out of dataframe factor

I have a dataframe (df) with n columns. Each column contains numbers, various symbols, and zero or more names (alphabetic strings). I would like to get the variable names out of each dataframe column (which is a factor). I used the following command to extract them from a specific column, but it only works partially:
str_extract(paste0(df$V3, collapse=""), perl("(?<=\\|)[A-Za-z]+(?=\\|)"))
Here is the example for some columns (df is the dataframe and V1...Vn are its columns).
> df$V1
[1] MSTV | | | | | | | | MSTV | | | | | | | | | | | | | |
[25] | | | | | |
Levels: | MSTV
> df$V2
[1] MLTV MLTV | | | | | | DP | | | | | | | | | | | | DP
[25] | | | | | |
Levels: | DP MLTV
> df$V3
[1] <= ASTV | | | | ASTV > <= ALTV ALTV | | | | | | | | | | >
[25] DP | | | | DP
Levels: | <= > ALTV ASTV DP
> cleaned.df$V4
[1] 0.4 <= > b b | | 0.4 0 FM | | | | | | | | FM 0 <= UC UC | | >
Levels: | <= > 0 0.4 b FM UC
For df$V1 I would like to get: MSTV
For df$V2 I would like to get: DP MLTV
For df$V3 I would like to get: ALTV ASTV DP
For df$V4 I would like to get: b FM UC
and so on...
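One way to get those lists (a sketch; it assumes the names are whole, purely alphabetic entries of the column, so operator tokens like "<=" and numbers are dropped; get_names is a hypothetical helper):

```r
# Hypothetical helper: unique, purely alphabetic tokens in a column
get_names <- function(col) {
  tokens <- unique(as.character(col))
  sort(tokens[grepl("^[A-Za-z]+$", tokens)])
}

# Toy stand-in for one factor column of the data frame
V2 <- factor(c("MLTV", "MLTV", "|", "|", "DP", "|", "DP"))
get_names(V2)
# "DP" "MLTV"
```

Applied to every column at once, lapply(df, get_names) would give one such vector per column.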

How do I present data from a text file to CrossTable in R?

I am having trouble importing data into R so that the CrossTable function (gmodels package) will run a simple chi-squared test. The test works fine when I enter the data manually, but not when I import it into a table - see below. Thank you for any tips on importing the data in the correct way. /OT
> library(gmodels)
> library(MASS)
> #When I enter the data manually there's no problem running a simple chi-squared:
> CA<-c(42,100,10,5)
> noCA<-c(20,0,140,40)
> regionalca<-cbind(CA,noCA)
> regionalca
CA noCA
[1,] 42 20
[2,] 100 0
[3,] 10 140
[4,] 5 40
> CrossTable(regionalca, fisher=FALSE, chisq=TRUE, expected=TRUE, , sresid=TRUE, format="SPSS")
Cell Contents
|-------------------------|
| Count |
| Expected Values |
| Chi-square contribution |
| Row Percent |
| Column Percent |
| Total Percent |
| Std Residual |
|-------------------------|
Total Observations in Table: 357
|
| CA | noCA | Row Total |
-------------|-----------|-----------|-----------|
[1,] | 42 | 20 | 62 |
| 27.266 | 34.734 | |
| 7.962 | 6.250 | |
| 67.742% | 32.258% | 17.367% |
| 26.752% | 10.000% | |
| 11.765% | 5.602% | |
| 2.822 | -2.500 | |
-------------|-----------|-----------|-----------|
[2,] | 100 | 0 | 100 |
| 43.978 | 56.022 | |
| 71.366 | 56.022 | |
| 100.000% | 0.000% | 28.011% |
| 63.694% | 0.000% | |
| 28.011% | 0.000% | |
| 8.448 | -7.485 | |
-------------|-----------|-----------|-----------|
[3,] | 10 | 140 | 150 |
| 65.966 | 84.034 | |
| 47.482 | 37.274 | |
| 6.667% | 93.333% | 42.017% |
| 6.369% | 70.000% | |
| 2.801% | 39.216% | |
| -6.891 | 6.105 | |
-------------|-----------|-----------|-----------|
[4,] | 5 | 40 | 45 |
| 19.790 | 25.210 | |
| 11.053 | 8.677 | |
| 11.111% | 88.889% | 12.605% |
| 3.185% | 20.000% | |
| 1.401% | 11.204% | |
| -3.325 | 2.946 | |
-------------|-----------|-----------|-----------|
Column Total | 157 | 200 | 357 |
| 43.978% | 56.022% | |
-------------|-----------|-----------|-----------|
Statistics for All Table Factors
Pearson's Chi-squared test
------------------------------------------------------------
Chi^2 = 246.0862 d.f. = 3 p = 4.595069e-53
Minimum expected frequency: 19.78992
> #But when I try to import the data from a .txt file, it becomes unacceptable:
> regionalca<-read.table(file="låtsas ca.txt", header=TRUE)
> regionalca
CA noCA
1 43 20
2 100 1
3 10 140
4 5 40
> CrossTable(regionalca, fisher=FALSE, chisq=TRUE, expected=TRUE, , sresid=TRUE, format="SPSS")
Error in margin.table(x, margin) : 'x' is not an array
> #I would really like to run the test on this table:
> regionalca<-read.table(file="låtsas ca.txt", header=TRUE)
> regionalca
region CA noCA
1 south 43 20
2 southwest 100 0
3 mid 10 140
4 north 5 40
> #Which ob
> CrossTable(regionalca, fisher=FALSE, chisq=TRUE, expected=TRUE, , sresid=TRUE, format="SPSS")
Error in if (any(x < 0) || any(is.na(x))) stop("all entries of x must be nonnegative and finite") :
missing value where TRUE/FALSE needed
In addition: Warning message:
In Ops.factor(left, right) : ‘<’ not meaningful for factors
>
The error is very explicit:
if (any(x < 0) || any(is.na(x)))
stop("all entries of x must be nonnegative and finite")
Your inputs are not eligible for CrossTable (gmodels package). I can reproduce the error with your data by introducing a negative value:
CA <- c(-1,100,10,5)  ## -1 as the first value
So you need to remove such values beforehand, or replace them with another value. For example:
regionalca <- regionalca[rowSums(!regionalca < 0) == ncol(regionalca) &
                         rowSums(!is.na(regionalca)) == ncol(regionalca), ]
The underlying problem is that read.table() creates a data.frame, yet what you need is a matrix. Note that cbind() returns a matrix by default, which is why your manual version worked. The earlier error also says as much: Error in margin.table(x, margin) : 'x' is not an array (a matrix is an array in this case). So to fix it, change your code as follows:
regionalca <- as.matrix(read.table(file="låtsas ca.txt", header=TRUE))
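A small illustration of the conversion (toy data; CrossTable itself is not called here): move the region column into the row names so that as.matrix() yields a purely numeric matrix rather than one polluted by the character column.

```r
# Toy stand-in for the data frame that read.table() returns
regionalca <- data.frame(region = c("south", "southwest", "mid", "north"),
                         CA     = c(43, 100, 10, 5),
                         noCA   = c(20, 0, 140, 40))

# Use the region column as row names, then convert the counts to a matrix
m <- as.matrix(regionalca[, c("CA", "noCA")])
rownames(m) <- regionalca$region
is.matrix(m)
# TRUE
```

Equivalently, reading the file with read.table(..., row.names = 1) before as.matrix() keeps the region labels out of the numeric data from the start.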
