I'm new to R and still getting to grips with how it handles data (my background is spreadsheets and databases). The problem I have is as follows. My data looks like this (it is held in a CSV file):
RecNo Var1 Var2 Var3
41 800 201.8 Y
43 140 39 N
47 60 20.24 N
49 687 77 Y
54 570 135 Y
58 1250 467 N
61 211 52 N
64 96 117.3 N
68 687 77 Y
Column 1 (RecNo) is my observation number; while it is a number, it is not required for my analysis. Column 4 (Var3) is a Yes/No column which, again, I do not currently need for the analysis but will need later in the process to add information in the output.
I need to normalise the numeric data in my dataframe to values between 0 and 1 without losing the other information. I have the following function:
normalize <- function(x) {
  x <- sweep(x, 2, apply(x, 2, min))
  sweep(x, 2, apply(x, 2, max), "/")
}
However, when I apply it to my above data by calling
myResult <- normalize(myData)
it returns an error because of the text in Column 4. If I set the text in this column to binary values it runs fine, but then also normalises my case numbers, which I don't want.
So, my question is: How can I change my normalize function above to accept the names of the columns to transform, while outputting the full dataset (i.e. without losing columns)?
I could not get TUSHAr's suggestion to work, but I have found two solutions that work fine:
1. akrun's suggestion above:
myData2 <- myData %>% mutate_at(2:3, funs((.-min(.))/max(.-min(.))))
This produces the following:
RecNo Var1 Var2 Var3
1 41 0.62184874 0.40601834 Y
2 43 0.06722689 0.04195255 N
3 47 0.00000000 0.00000000 N
4 49 0.52689076 0.12693105 Y
5 54 0.42857143 0.25663508 Y
6 58 1.00000000 1.00000000 N
7 61 0.12689076 0.07102414 N
8 64 0.03025210 0.21718329 N
9 68 0.52689076 0.12693105 Y
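Note that funs() has since been deprecated in dplyr; on current versions (1.0 or later) the same transformation can be written with across(). A minimal sketch, assuming the same myData as above:
# equivalent of the mutate_at()/funs() call in current dplyr syntax
myData2 <- myData %>% mutate(across(2:3, ~ (.x - min(.x)) / (max(.x) - min(.x))))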
Alternatively, there is the package BBmisc, which allowed me the following after transforming my record numbers to factors:
> myData3 <- myData %>% mutate(RecNo = factor(RecNo))
> myNorm <- normalize(myData3, method = "range", range = c(0, 1), margin = 1)
> myNorm
RecNo Var1 Var2 Var3
1 41 0.62184874 0.40601834 Y
2 43 0.06722689 0.04195255 N
3 47 0.00000000 0.00000000 N
4 49 0.52689076 0.12693105 Y
5 54 0.42857143 0.25663508 Y
6 58 1.00000000 1.00000000 N
7 61 0.12689076 0.07102414 N
8 64 0.03025210 0.21718329 N
9 68 0.52689076 0.12693105 Y
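One caveat with this approach: a normalize function was already defined earlier in this question, and a function in the global environment masks a package function of the same name. Calling the package version with an explicit namespace avoids the ambiguity (a sketch, using the same myData3 as above):
library(BBmisc)
myNorm <- BBmisc::normalize(myData3, method = "range", range = c(0, 1), margin = 1)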
EDIT: For completeness I include TUSHAr's solution as well, showing, as always, that there are many ways around a single problem:
normalize <- function(x){
  minval <- apply(x[, c(2, 3)], 2, min)
  maxval <- apply(x[, c(2, 3)], 2, max)
  y <- sweep(x[, c(2, 3)], 2, minval)
  sweep(y, 2, (maxval - minval), "/")
}
df[, c(2, 3)] <- normalize(df)
Thank you for your help!
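For reference, here is a variant of the same base R idea that takes the names of the columns to transform as an argument, which is what the question originally asked for (normalize_cols is a hypothetical name, and it assumes the named columns are all numeric):
normalize_cols <- function(x, cols) {
  minval <- apply(x[, cols, drop = FALSE], 2, min)
  maxval <- apply(x[, cols, drop = FALSE], 2, max)
  x[, cols] <- sweep(sweep(x[, cols, drop = FALSE], 2, minval), 2, maxval - minval, "/")
  x
}
myResult <- normalize_cols(myData, c("Var1", "Var2"))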
I know how to find the minimum using the subset function. Is there any way to do it without subset?
Example dataset:
Month A B
J 67 89
F 48 69
M 78 89
A 54 90
M 54 75
So, let's say I need to write code to find the min value in Column B.
My Code: subset(df, B == min(B))
My question:
How can I use logical indexing and the min function for this dataset? I don't want to use subset.
You can use which to find the positions.
x <- c(2,1,3,1)
which(x == min(x))
#[1] 2 4
To get the first hit which.min could be used.
which.min(x)
#[1] 2
With the given data set.
x <- read.table(header=TRUE, text="Month A B
J 67 89
F 48 69
M 78 89
A 54 90
M 54 75")
which(x$B == min(x$B))
#[1] 2
which(x[2:3] == min(x[2:3]), TRUE)
# row col
#[1,] 2 1
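which gives you positions; to pull out the matching row itself, plain logical indexing on the data frame works directly, which is the subset-free form the question asks for:
x[x$B == min(x$B), ]
#  Month  A  B
#2     F 48 69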
I have the following data frame for a student with homework and exam scores.
> student1
UID Homework_1 Homework_2 Homework_3 Homework_4 Homework_5 Homework_6 Homework_7 Homework_8
10 582493224 59 99 88 10 66 90 50 80
Homework_9 Homework_10 Exam_1 Exam_2 Exam_3 Section
10 16 NA 41 61 11 A
The Homework_10 score is missing, and I need to create a function to impute the NA value with mean or median.
The function messy_impute should have the following arguments:
data : data frame or tibble to be imputed.
center : whether to impute using mean or median.
margin : whether to use the row or column to impute the value (1 = use row, 2 = use column).
For example,
messy_impute(student1,mean,1) should print out
> student1
UID Homework_1 Homework_2 Homework_3 Homework_4 Homework_5 Homework_6 Homework_7 Homework_8
10 582493224 59 99 88 10 66 90 50 80
Homework_9 Homework_10 Exam_1 Exam_2 Exam_3 Section
10 16 **62** 41 61 11 A
since the mean of the rest of the homework is 62.
And, if the mean of the columns (other students) in section A for homework 10 is 50, then
messy_impute(student1,mean,2) should print out
> student1
UID Homework_1 Homework_2 Homework_3 Homework_4 Homework_5 Homework_6 Homework_7 Homework_8
10 582493224 59 99 88 10 66 90 50 80
Homework_9 Homework_10 Exam_1 Exam_2 Exam_3 Section
10 16 **50** 41 61 11 A
since the mean of that column among section A students is 50.
Please note that if the margin is 2, then the calculation should be done within the same section.
I'm really stuck on defining this function.
Base R solution:
# Define function to impute a row-wise mean (assumes one observation per student):
row_wise_mean_impute <- function(df){
  grade_df <- df[, names(df) != "studid"]
  return(cbind(df[, c("studid"), drop = FALSE],
               replace(grade_df, is.na(grade_df), apply(grade_df, 1, mean, na.rm = TRUE))))
}
# Apply function:
row_wise_mean_impute(student1)
Data:
x <- c(rnorm(85, 50, 3), rnorm(15, 50, 15))
student1 <- cbind(studid = 1010101, data.frame(t(x)))
student1[, 10] <- NA_real_
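To get all the way to the messy_impute() interface described in the question, the same idea can be generalized. The following is only a sketch under stated assumptions: the score columns are named Homework_* / Exam_*, margin = 1 averages the student's other scores of the same type, and margin = 2 averages the same column across students in the same Section (any names not taken from the question are hypothetical):
messy_impute <- function(data, center = mean, margin = 1) {
  score_cols <- grep("^(Homework|Exam)_", names(data))
  for (i in seq_len(nrow(data))) {
    for (j in score_cols) {
      if (is.na(data[[j]][i])) {
        if (margin == 1) {
          # impute from the student's other scores of the same type
          prefix <- sub("_[0-9]+$", "", names(data)[j])
          family <- grep(paste0("^", prefix, "_"), names(data))
          data[[j]][i] <- center(unlist(data[i, family]), na.rm = TRUE)
        } else {
          # impute from the same column, restricted to the same section
          same_section <- data$Section == data$Section[i]
          data[[j]][i] <- center(data[[j]][same_section], na.rm = TRUE)
        }
      }
    }
  }
  data
}
# with the question's student1 (Homework_*/Exam_*/Section columns),
# margin = 1 fills Homework_10 with 62:
messy_impute(student1, mean, 1)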
I have a dataset with 3 columns: Default, Height and Weight.
I made a binning of the variables and stored it (I have to do it this way) in a list. Every bin has a woe (weight of evidence) value associated with it, but now I want to put those woes into the original data frame depending on which buckets my observations fall into:
For example, the data frame
df1 <- data.frame(default=sample(c(0,1), replace=TRUE, size=100, prob=c(0.9,0.1)),
height=sample(150:180, 100, replace=T),
weight=sample(50:80,100,replace=T))
> head(df1)
# default height weight
# 1 0 172 54
# 2 0 169 71
# 3 0 164 61
# 4 0 156 55
# 5 0 180 66
# 6 0 162 63
The bins (I will just show the first one)
bins <- lapply(c("height","weight"),
               function(x) woe.binning(df1, "default", x,
                                       min.perc.total = 0.05,
                                       min.perc.class = 0.05,
                                       event.class = 1,
                                       stop.limit = 0.05)[2])
# [[1]]
# [[1]][[1]]
# woe cutpoints.final cutpoints.final[-1] iv.total.final 0 1 col.perc.a col.perc.b iv.bins
# (-Inf,156] -46.58742 -Inf 156 0.1050725 21 5 0.24137931 0.38461538 0.0667299967
# (156,168] 23.91074 156 168 0.1050725 34 4 0.39080460 0.30769231 0.0198727638
# (168,169] -10.91993 168 169 0.1050725 6 1 0.06896552 0.07692308 0.0008689599
# (169, Inf] 25.85255 169 Inf 0.1050725 26 3 0.29885057 0.23076923 0.0176007627
# Missing NA Inf Missing 0.1050725 0 0 0.00000000 0.00000000
Now I want to see which bins my data fall into.
My desired output is something similar to this
# default height weight woe_height woe_weight
# 1 0 160 54 23.91074 -8.180032
# 2 0 140 71 -46.58742 -7.640947
Is there any way to do it? The main problem I see here is that the intervals (a,b] are strings. I was thinking about using substr() or something similar to split the strings into logical conditions, but I don't think that would work, and it's not very elegant.
Any help will be welcome, thanks in advance.
Does this work fine for you?
apply_woe_binning <- function(df, x){
  # woe binning
  w <- woe.binning(df, "default", x,
                   min.perc.total = 0.05,
                   min.perc.class = 0.05,
                   event.class = 1,
                   stop.limit = 0.05)[[2]]
  # create new column name
  new_col <- paste("woe", x, sep = "_")
  # define cuts
  cuts <- cut(df[[x]], w$cutpoints.final)
  # add new column
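  # cut() returns a factor whose integer codes index the rows of w in level
  # order (woe.binning lists its bins in the same ascending order, with the
  # Missing row last), so no parsing of the interval labels is needed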
  df[[new_col]] <- w[cuts, "woe", drop = TRUE]
  df
}
# one by one
df2 <- apply_woe_binning(df1, "height")
df2 <- apply_woe_binning(df2, "weight")
# in a functional
df2 <- Reduce(function(y, x) apply_woe_binning(df = y, x = x),
              c("height","weight"),
              init = df1)
I am trying to select random rows from a data frame with 1000 rows (and six columns) where the skewness of the row is larger than a given value (say Sk > 0.3).
I've generated the following data frame
df=data.frame(replicate(6,sample(10:100,1000,rep=TRUE)))
I can get row skewness from the fBasics package:
rowSkewness(df) gives:
[8] -0.2243295435 0.5306809351 0.0707122386 0.0341447417 0.3339384838 -0.3910593364 -0.6443905090
[15] 0.5603809206 0.4406091534 -0.3736108832 0.0397860038 0.9970040772 -0.7702547535 0.2065830354
But now I need to select, say, 10 rows of the df which have row skewness greater than, say, 0.1. Maybe with
for (a in 1:10) {
sample.data[a,] = sample(x=df[which(rowSkewness(df[sample(1:nrow(df),1)>0.1),], size = 1, replace = TRUE)
}
or something like this?
Any thoughts on this will be appreciated.
Thanks in advance.
You can use the sample_n() function or sample_frac() - this makes your version a little shorter:
library(dplyr)
library(fBasics)
df=data.frame(replicate(6,sample(10:100,1000,rep=TRUE)))
x=df %>% dplyr::filter(rowSkewness(df)>0.1) %>% dplyr::sample_n(10)
Got it:
x <- df %>% filter(rowSkewness(df) > 0.1)
sample.data <- x[sample(nrow(x), samplesize, replace = TRUE), ]
Just do a subset:
res1 <- DF[fBasics::rowSkewness(DF) > .1, ]
head(res1)
# X1 X2 X3 X4 X5 X6
# 7 56 28 21 93 74 24
# 8 33 56 23 44 10 12
# 12 29 19 29 38 94 95
# 13 35 51 54 98 66 10
# 14 12 51 24 23 36 68
# 15 50 37 81 22 55 97
Or with e1071::skewness:
res2 <- DF[apply(as.matrix(DF), 1, e1071::skewness) > .1, ]
stopifnot(all.equal(res1, res2))
Data
set.seed(42); DF <- data.frame(replicate(6, sample(10:100, 1000, rep=TRUE)))
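To draw the 10 random rows the question asks for, sample row indices from the subset (assuming at least 10 rows qualify):
res1[sample(nrow(res1), 10), ]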
I am having difficulty applying a function to a data frame in R. I have a data.frame with three columns: the ID of a point, its location on the x axis and its location on the y axis. All I need to do is to find, for a given point, the IDs of the points that lie in its neighborhood. I've made a function that shows whether a point lies within a circle whose center is the location of the observed point, and that returns the point's ID if true.
Here is my code:
point_id <- locationdata$point_id
x_loc <- locationdata$x_loc
y_loc <- locationdata$y_loc
locdata <- data.frame(point_id, x_loc, y_loc)
# radius set to 1 km (assuming coordinates in metres)
incircle3 <- function(x_loc, y_loc, center_x, center_y, pointid, r = 1000){
  dx <- x_loc - center_x
  dy <- y_loc - center_y
  if (dx^2 + dy^2 <= r^2){
    print(pointid)
  }
}
Unfortunately I don't know how to apply this function to the whole data frame, so that once I enter the location of an observed point it returns the IDs of all points that lie in its neighborhood. Ideally I would need to find this relation for all points automatically, so it would return the points that lie in the neighborhood of each point in the dataset. Previously I had been entering center_x and center_y manually.
Thank you very much in advance for your advice!
You can tackle this with R's dist function:
# set the random seed and create some dummy data
set.seed(101)
dummy <- data.frame(id=1:100, x=runif(100), y=runif(100))
> head(dummy)
id x y
1 1 0.37219838 0.12501937
2 2 0.04382482 0.02332669
3 3 0.70968402 0.39186128
4 4 0.65769040 0.85959857
5 5 0.24985572 0.71833452
6 6 0.30005483 0.33939503
Call the dist function, which returns a dist object. The default distance metric is Euclidean, which is what you have coded in your question.
dists <- dist(dummy[,2:3])
Loop over the distance matrix and return the indices for each id that are within some constant distance:
neighbors <- apply(as.matrix(dists), 1, function(x) which(x < 0.33))
> neighbors[[1]]
1 6 7 8 19 23 30 32 33 34 42 44 46 51 55 87 88 91 94 99
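Note that each point is included in its own neighborhood, since its distance to itself is 0; to drop it, exclude zero distances:
neighbors <- apply(as.matrix(dists), 1, function(x) which(x < 0.33 & x > 0))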
Here's a modification to handle ids that don't correspond to row positions:
set.seed(101)
dummy <- data.frame(id=sample(1:100, 100), x=runif(100), y=runif(100))
> head(dummy)
id x y
1 38 0.12501937 0.60567568
2 5 0.02332669 0.56259740
3 70 0.39186128 0.27685556
4 64 0.85959857 0.22614243
5 24 0.71833452 0.98355758
6 29 0.33939503 0.09838715
dists <- dist(dummy[,2:3])
neighbors <- apply(as.matrix(dists), 1, function(x) {
  dummy$id[which(x < 0.33)]
})
names(neighbors) <- dummy$id
> neighbors[['38']]
[1] 38 5 55 80 63 76 17 71 47 11 88 13 41 21 36 31 73 61 99 59 39 89 94 12 18 3
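For much larger datasets, bear in mind that as.matrix(dists) materializes the full n-by-n distance matrix. A kd-tree radius search scales better; here is a sketch assuming the RANN package is installed (nn2() and its arguments come from that package, not from anything above):
library(RANN)
# up to k neighbour indices within the radius per point; unused slots are 0
nn <- nn2(dummy[, c("x", "y")], k = 20, searchtype = "radius", radius = 0.33)
neighbors <- apply(nn$nn.idx, 1, function(idx) dummy$id[idx[idx > 0]])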