I have a dataset and I need to filter out bad/invalid data before my computation. I tried a percentile cutoff, but that also filters out genuine values.
E.g.: 1, 2, 8, 10, 20, 25, 55, 100, 100000, 98, 99, 95
Here, 100000 is corrupted/bad. When I call the max() function, I expect 100 instead of 100000.
You can use series_outliers():
datatable(val:int)[1, 2, 8, 10, 20, 25, 55, 100, 100000, 98, 99, 95]
| summarize val = make_list(val)
| extend anomaly_score = series_outliers(val)
| mv-expand val to typeof(int), anomaly_score to typeof(real)
//| where anomaly_score between (-1.5 .. 1.5)
val      anomaly_score
1        -0.045592692382452886
2        -0.017097259643419842
8        0
10       0
20       0
25       0
55       0
100      0.0028495432739035469
100000   2846.6965801726751
98       0
99       0
95       0
I have the following data frame in R.
ID    | Year_Month | Amount
10001 | 2021-06    | 85
10001 | 2021-07    | 32.0
20032 | 2021-08    | 63
20032 | 2021-09    | 44.23
20033 | 2021-11    | 10.90
I would like to transform this data to look something like this:
ID    | 2021-06 | 2021-07 | 2021-08 | 2021-09 | 2021-11
10001 | 85      | 32      | 0       | 0       | 0
20032 | 0       | 0       | 63      | 44.23   | 0
20033 | 0       | 0       | 0       | 0       | 10.90
The Amount totals should be spread across columns based on the Year_Month column. Can someone help? I have tried using transpose, but it did not work.
You should check out the tidyverse package; it has some really good functions for data wrangling.
## Loading the required libraries
library(tidyverse)  # pivot_wider() comes from tidyr, loaded as part of the tidyverse

## Creating the dataframe
df <- data.frame(ID = c(10001, 10001, 20032, 20032, 20033),
                 Date = c('2021-06', '2021-07', '2021-08', '2021-09', '2021-11'),
                 Amount = c(85, 32, 63, 44.2, 10.9))

## Pivoting from long to wide
df_pivot <- df %>%
  pivot_wider(names_from = Date, values_from = Amount)

## Replacing NA with 0
df_pivot[is.na(df_pivot)] <- 0
df_pivot
# A tibble: 3 x 6
     ID `2021-06` `2021-07` `2021-08` `2021-09` `2021-11`
  <dbl>     <dbl>     <dbl>     <dbl>     <dbl>     <dbl>
1 10001        85        32         0         0         0
2 20032         0         0        63      44.2         0
3 20033         0         0         0         0      10.9
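As a side note, recent versions of tidyr (1.1.0 or later) let you skip the manual NA replacement by filling at pivot time. A minimal variant of the same pivot, assuming the df created above:

## values_fill supplies 0 for ID/Date combinations that have no row,
## so the separate NA-to-0 step is not needed
df_pivot <- df %>%
  pivot_wider(names_from = Date, values_from = Amount, values_fill = 0)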
I am wondering if the stem function in R is producing the stem and leaf plot correctly for this example. The code
X <- c(rep(1,1000),2:15)
stem(X,width = 20)
produces the output
The decimal point is at the |
1 | 00000000+980
2 | 0
3 | 0
4 | 0
5 | 0
6 | 0
7 | 0
8 | 0
9 | 0
10 | 0
11 | 0
12 | 0
13 | 0
14 | 0
15 | 0
There are 1000 ones in the data, but the output of the stem function seems to indicate that there are 988 ones (if you count the zeros in the first row and add 980). Instead of +980, I think it should display +992 at the end of the first row.
Is there an error in the stem function or am I not reading the output correctly?
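For what it's worth, the count is easy to check directly (same X as above):

X <- c(rep(1, 1000), 2:15)
sum(X == 1)   # 1000 ones in total
## the first row prints 8 leaves, so the trailing count should be +992, not +980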
I have a dataset like this:
ID  dum1  dum2  dum3  var1
1   0     1     .     hi
1   0     .     0     hi
2   1     .     .     bye
2   0     0     1     .
What I'm trying to do is fill in missing observations using information from other rows with the same ID. So my end product would be something like:
ID  dum1  dum2  dum3  var1
1   0     1     0     hi
1   0     1     0     hi
2   1     0     1     bye
2   0     0     1     bye
Is there any way I can do this in R or Stata?
This continues the discussion of Stata solutions. The solution by @Pearly Spencer looks backward and forward from observations with missing values, and so is fine for the example with just two observations per group, and possibly fine for some other situations.
An alternative approach makes use, as appropriate, of the community-contributed commands mipolate and stripolate from SSC, as explained also at https://www.statalist.org/forums/forum/general-stata-discussion/general/1308786-mipolate-now-available-from-ssc-new-program-for-interpolation
Examples first, then commentary:
clear
input ID dum1a dum2a dum3a str3 var1a
1 0 1 . "hi"
1 0 . 0 "hi"
2 1 . . "bye"
2 0 0 1 ""
2 0 1 . ""
end
gen long obsno = _n
foreach v of var dum*a {
    quietly count if missing(`v')
    if r(N) > 0 capture noisily mipolate `v' obsno, groupwise by(ID) generate(`v'_2)
}

foreach v of var var*a {
    quietly count if missing(`v')
    if r(N) > 0 capture noisily stripolate `v' obsno, groupwise by(ID) generate(`v'_2)
}
list
   +----------------------------------------------------------------+
   | ID   dum1a   dum2a   dum3a   var1a   obsno   dum3a_2   var1a_2 |
   |----------------------------------------------------------------|
1. |  1       0       1       .      hi       1         0        hi |
2. |  1       0       .       0      hi       2         0        hi |
3. |  2       1       .       .     bye       3         1       bye |
4. |  2       0       0       1               4         1       bye |
5. |  2       0       1       .               5         1       bye |
   +----------------------------------------------------------------+
Notes:
The groupwise option of mipolate and stripolate uses this rule: replace missing values within a group with the non-missing value in that group if and only if there is exactly one distinct non-missing value in that group. Thus, if the non-missing values in a group are all 1, or all 42, or whatever it is, then interpolation uses 1 or 42 or whatever it is. If the non-missing values in a group are, say, both 0 and 1, then no interpolation is done.
The variable obsno created here plays no role in that interpolation and is needed solely to match the general syntax of mipolate.
There is no assumption here that groups consist of just two observations or have the same number of observations. A common playground for these problems is data on families whenever some variables were recorded only for certain family members but it is desired to spread the values recorded to other family members. Naturally, in real data families often have more than two members and the number of family members will vary.
This question exposed a small bug in mipolate, groupwise and stripolate, groupwise: they do not exit as appropriate when there is nothing to do, as with dum1a, where there are no missing values. In the code above, this is trapped by asking for interpolation if and only if missing values are counted. At some future date, the bug will be fixed and the code in this answer simplified accordingly, or so I intend as program author.
mipolate, groupwise and stripolate, groupwise both exit with an error message if any group is found with two or more distinct non-missing values; no interpolation is then done for any groups, even if some groups are fine. That is the point of the code capture noisily: the error message for dum2a is not echoed above. As program author I am thinking of adding an option whereby such groups will be ignored but that interpolation will take place for groups with just one distinct non-missing value.
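For anyone who wants the same groupwise rule in R rather than Stata, it is short to write by hand. A base R sketch, assuming missings are coded as NA and the data live in a data frame df (both assumptions of mine, not part of the Stata code above):

## Fill NAs within a group only when the group has exactly one distinct
## non-missing value, mirroring the groupwise rule of mipolate/stripolate
fill_groupwise <- function(x) {
  u <- unique(x[!is.na(x)])
  if (length(u) == 1) x[is.na(x)] <- u
  x
}

df$dum3 <- ave(df$dum3, df$ID, FUN = fill_groupwise)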
Assuming your data is in df:
library(dplyr)

df %>%
  group_by(ID) %>%
  mutate(dum1 = dum1[dum1 != "."][1],
         dum2 = dum2[dum2 != "."][1],
         dum3 = dum3[dum3 != "."][1],
         var1 = var1[var1 != "."][1])
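If there are many columns, the same logic can be written once with across() (available in dplyr 1.0.0 and later); like the answer above, this assumes missing entries are stored as the string ".":

library(dplyr)

df %>%
  group_by(ID) %>%
  mutate(across(c(dum1, dum2, dum3, var1), ~ .x[.x != "."][1]))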
Using your toy example:
clear
input ID dum1a dum2a dum3a str3 var1a
1 0 1 . "hi"
1 0 . 0 "hi"
2 1 . . "bye"
2 0 0 1 "."
end
replace var1a = "" if var1a == "."
sort ID (dum2a)
list
   +------------------------------------+
   | ID   dum1a   dum2a   dum3a   var1a |
   |------------------------------------|
1. |  1       0       1       .      hi |
2. |  1       0       .       0      hi |
3. |  2       0       0       1         |
4. |  2       1       .       .     bye |
   +------------------------------------+
In Stata you can do the following:
ds ID, not
local varlist `r(varlist)'

foreach var of local varlist {
    generate `var'b = `var'
    bysort ID (`var'): replace `var'b = cond(!missing(`var'[_n-1]), `var'[_n-1], ///
        `var'[_n+1]) if missing(`var')
}
list ID dum?ab var?ab
   +----------------------------------------+
   | ID   dum1ab   dum2ab   dum3ab   var1ab |
   |----------------------------------------|
1. |  1        0        1        0       hi |
2. |  1        0        1        0       hi |
3. |  2        0        0        1      bye |
4. |  2        1        0        1      bye |
   +----------------------------------------+
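An R counterpart to this look-backward/look-forward logic is tidyr's fill(), which can fill downwards and then upwards within each group. A sketch assuming a data frame df with NA-coded missings:

library(dplyr)
library(tidyr)

df %>%
  group_by(ID) %>%
  fill(dum1, dum2, dum3, var1, .direction = "downup")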
I have one dataset which includes all the points of students, plus other variables.
I further have an adjacency matrix which records which student is a peer of which other student.
Now I would like to use this second (network) matrix to calculate the mean peer points for each student. Everyone can have a different number of peers.
To calculate the mean, I converted the simple 0/1 matrix into shares, where the denominator is the number of peers a student has.
The second matrix then would look something like this:
      ID1   ID2   ID3   ID4   ID5
ID1   0     0     0     0     1
ID2   0     0     0.5   0.5   0
ID3   0     0.5   0     0     0.5
ID4   0     0.5   0     0     0.5
ID5   0.33  0     0.33  0.33  0
The points of each student are a simple variable in another dataset, and I would like to add the peer-average points as a second variable:
ID    Points   Peers
ID1   45       11
ID2   42       33.5
ID3   25       26.5
ID4   60       26.5
ID5   11       43.33
Are there any commands in Stata for this problem? I am currently looking into the nwcommands suite for Stata, but I am unsure whether it can help. I could use solutions for either Stata or R.
Without getting too creative, you can accomplish what you are trying to do with reshape, collapse and a couple of merges in Stata. Generally speaking, data in long format is easier to work with for this type of exercise.
Below is an example which produces the desired result.
/* Set-up data for example */
clear
input int(id points)
1 45
2 42
3 25
4 60
5 11
end
tempfile points
save `points'
clear
input int(StudentId id1 id2 id3 id4 id5)
1 0 0 0 0 1
2 0 0 1 1 0
3 0 1 0 0 1
4 0 1 0 0 1
5 1 0 1 1 0
end
/* End data set-up */
* Reshape peers data to long form
reshape long id, i(StudentId) j(PeerId)
drop if id == 0 // drop if student is not a peer of `StudentId`
* create id variable to use in merge
replace id = PeerId
* Merge to points data to get peer points
merge m:1 id using `points', nogen
* collapse data to the student level, sum peer points
collapse (sum) PeerPoints = points (count) CountPeers = PeerId, by(StudentId)
* merge back to points data to get student points
rename StudentId id
merge 1:1 id using `points', nogen
gen peers = PeerPoints / CountPeers
li id points peers
     +---------------------------+
     | id   points       peers   |
     |---------------------------|
  1. |  1       45          11   |
  2. |  2       42        42.5   |
  3. |  3       25        26.5   |
  4. |  4       60        26.5   |
  5. |  5       11    43.33333   |
     +---------------------------+
In the above code, I reshape your peer data into long form and keep only student-peer pairs. I then merge this data to the points data to get the points of each student's peers. From there, I collapse the data back to the student level, totalling peer points and counting peers in the process. At this point, you have the total points of each student's peers and the number of peers each student has. Now you simply merge back to the points data to get the subject student's points, and divide total peer points (PeerPoints) by the number of peers (CountPeers) for average peer points.
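Since the question welcomed R as well: the same long-format logic (build student-peer pairs, then average by student) takes a few lines in base R. A sketch assuming a 0/1 matrix adj and a points vector ordered ID1..ID5 (these names are mine, not from the question):

## One row per student-peer pair, then average peer points by student
adj <- matrix(c(0, 0, 0, 0, 1,
                0, 0, 1, 1, 0,
                0, 1, 0, 0, 1,
                0, 1, 0, 0, 1,
                1, 0, 1, 1, 0), nrow = 5, byrow = TRUE)
points <- c(45, 42, 25, 60, 11)

pairs <- subset(expand.grid(student = 1:5, peer = 1:5),
                adj[cbind(student, peer)] == 1)
pairs$peer_points <- points[pairs$peer]
aggregate(peer_points ~ student, data = pairs, FUN = mean)
## student means: 11, 42.5, 26.5, 26.5, 43.33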
nwcommands is an outstanding package I have never used or studied, so I will just try the problem from first principles. This is all matrix algebra, but given a matrix and a variable, I would approach it like this in Stata.
clear
scalar third = 1/3
mat M = (0,0,0,0,1\0,0,0.5,0.5,0\0,0.5,0,0,0.5\0,0.5,0,0,0.5\third,0,third,third,0)
input ID Points Peers
1 45 11
2 42 33.5
3 25 26.5
4 60 26.5
5 11 43.33
end
gen Wanted = 0

quietly forval i = 1/5 {
    forval j = 1/5 {
        replace Wanted = Wanted + M[`i', `j'] * Points[`j'] in `i'
    }
}
list
   +--------------------------------+
   | ID   Points   Peers     Wanted |
   |--------------------------------|
1. |  1       45      11         11 |
2. |  2       42    33.5       42.5 |
3. |  3       25    26.5       26.5 |
4. |  4       60    26.5       26.5 |
5. |  5       11   43.33   43.33334 |
   +--------------------------------+
Small points: Using 0.33 for 1/3 doesn't give enough precision. You'll have similar problems for 1/6 and 1/7, for example.
Also, I get that the peers of 2 are 3 and 4 so their average is (25 + 60)/2 = 42.5, not 33.5.
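To see the precision point concretely (in R, since either language was welcome): with three peers, 0.33 weights shave off a visible amount.

sum(c(0.33, 0.33, 0.33))   # 0.99: the weights no longer sum to 1
0.33 * (45 + 25 + 60)      # 42.9, noticeably off
(1/3) * (45 + 25 + 60)     # 43.33333..., the intended mean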
EDIT: A similar approach starts with a data structure very like that imagined by @ander2ed:
clear
input int(id points id1 id2 id3 id4 id5)
1 45 0 0 0 0 1
2 42 0 0 1 1 0
3 25 0 1 0 0 1
4 60 0 1 0 0 1
5 11 1 0 1 1 0
end
gen wanted = 0

quietly forval i = 1/5 {
    forval j = 1/5 {
        replace wanted = wanted + id`j'[`i'] * points[`j'] in `i'
    }
}
egen count = rowtotal(id1-id5)
replace wanted = wanted/count
list
   +--------------------------------------------------------------+
   | id   points   id1   id2   id3   id4   id5     wanted   count |
   |--------------------------------------------------------------|
1. |  1       45     0     0     0     0     1         11       1 |
2. |  2       42     0     0     1     1     0       42.5       2 |
3. |  3       25     0     1     0     0     1       26.5       2 |
4. |  4       60     0     1     0     0     1       26.5       2 |
5. |  5       11     1     0     1     1     0   43.33333       3 |
   +--------------------------------------------------------------+
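And for the R half of the question: the whole computation is one matrix product. A sketch using the same 0/1 matrix and points as in the long-format example above; row-normalising in full precision also sidesteps the 0.33 issue:

adj <- matrix(c(0, 0, 0, 0, 1,
                0, 0, 1, 1, 0,
                0, 1, 0, 0, 1,
                0, 1, 0, 0, 1,
                1, 0, 1, 1, 0), nrow = 5, byrow = TRUE)
points <- c(45, 42, 25, 60, 11)

W <- adj / rowSums(adj)   # row-normalised weights, exact 1/3 rather than 0.33
as.vector(W %*% points)   # 11.0 42.5 26.5 26.5 43.33333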
I have a dataframe that contains 7 p-value variables.
I can't post it because it is private data but it looks like this:
> df
o           m          l            c          a          aa         ep
1.11E-09    4.43E-05   0.000001602  4.02E-88   1.10E-43   7.31E-05   0.00022168
8.57E-07    0.0005479  0.0001402    2.84E-44   4.97E-17   0.0008272  0.000443361
0.00001112  0.0005479  0.0007368    1.40E-39   3.17E-16   0.0008272  0.000665041
7.31E-05    0.0006228  0.0007368    4.59E-33   2.57E-13   0.0008272  0.000886721
8.17E-05    0.002307   0.0008453    4.58E-18   5.14E-12   0.0008336  0.001108402
Each column has values from 0-1.
I would like to subset the entire data frame by extracting, from each column, all the values less than 0.009 and making a new data frame. If I were to extract on this condition, the columns would have very different lengths: e.g. c has 290 values less than 0.009, o has 300, aa has 500, etc.
I've tried:
subset(df, c < 0.009 & a < 0.009 & l < 0.009 & m < 0.009 & aa < 0.009 & o < 0.009)
When I do this I just end up with a very small number of rows, which isn't what I want; I want all the values in each column that meet the subset criterion.
I then want to take this data frame and bin it into p-value range groups by using something like the summary(cut()) function, but I am not sure how to do it.
So essentially I would like to have a final data frame that includes the number of values in each p-value bin for each variable:
                 o#   m#  l#  c#  a#  aa#  ep#
0-0.000001       545  58  85  78  85  45   785
0.000001-0.001   54   77  57  57  74  56   58
0.001-0.002      54   7   5   5   98  7    865
An attempt:
sapply(df, function(x) table(cut(x[x < 0.009], c(0, 0.000001, 0.001, 0.002, Inf))))
#              o m l c a aa ep
#(0,1e-06]     2 0 0 5 5  0  0
#(1e-06,0.001] 3 4 5 0 0  5  4
#(0.001,0.002] 0 0 0 0 0  0  1
#(0.002,Inf]   0 1 0 0 0  0  0
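sapply() simplifies the per-column tables to a matrix here, which is safe because every column is cut with the same breaks. If you want a data frame with the bins as their own column, one more line does it:

res <- sapply(df, function(x) table(cut(x[x < 0.009], c(0, 0.000001, 0.001, 0.002, Inf))))
data.frame(bin = rownames(res), res, row.names = NULL)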