Error using IRMI imputation - r

I've been trying to use irmi() from the VIM package to replace NAs.
My data looks something like this:
> str(sub_mex)
'data.frame': 21 obs. of 83 variables:
$ pH : num 7.2 7.4 7.4 7.36 7.2 7.82 7.67 7.73 7.79 7.7 ...
$ Cond : num 1152 1078 1076 1076 1018 ...
$ CO3 : num NA NA NA NA NA ...
$ Mg : num 25.8 24.9 24.3 24.8 23.4 ...
$ NO3 : num 49.7 25.6 27.1 39.6 52.8 ...
$ Cd : num 0.0088 0.0104 0.0085 0.0092 0.0086 ...
$ As_H : num 0.006 0.0059 0.0056 0.0068 0.0073 ...
$ As_F : num 0.0056 0.0058 0.0057 0.0066 0.0065 0.004 0.004 0.004 0.0048 0.0078 ...
$ As_FC : num NA NA NA NA NA NA NA NA NA 0.0028 ...
$ Pb : num 0.0097 0.0096 0.0092 0.01 0.0093 0.0275 0.024 0.0255 0.031 0.024 ...
$ Fe : num 0.39 0.26 0.27 0.28 0.32 0.135 0.08 NA 0.13 NA ...
$ No_EPT : int 0 0 0 0 0 0 0 0 0 0 ...
I've subset my sub_mex dataset to analyze observations separately, so I have a sub_t dataset, which looks something like this:
> str(sub_t)
'data.frame': 5 obs. of 83 variables:
$ pH : num 7.82 7.67 7.73 7.79 7.7
$ CO3 : num 45 NA 37.2 41.9 40.3
$ Mg : num 41.3 51.4 47.7 51.8 53
$ NO3 : num 47.1 40.7 39.9 42.1 37.6
$ Cd : num 0.0173 0.0145 0.016 0.016 0.0154
$ As_H : num 0.00949 0.01009 0.00907 0.00972 0.00954
$ As_F : num 0.004 0.004 0.004 0.0048 0.0078
$ As_FC : num NA NA NA NA 0.0028
$ Pb : num 0.0275 0.024 0.0255 0.031 0.024
$ Fe : num 0.135 0.08 NA 0.13 NA
$ No_EPT : int 0 0 0 0 0
I impute the NAs of the sub_mex dataset using:
imp_mexi <- irmi(sub_mex)
which works fine.
However, when I try to impute the subset sub_t, I get the following error message:
> imp_t <- irmi(sub_t)
Error in indexNA2s[, variable[j]] : subscript out of bounds
Does anyone have an idea of how to solve this? I want to impute my data sub_t, and I don't want to use a subset of the imp_mexi imputed dataset.
Any help will be deeply appreciated.

I had a similar issue and discovered that one of the columns in my data frame was entirely missing, hence the out-of-bounds error.
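In that spirit, a quick way to check sub_t for columns that are entirely NA before calling irmi (a sketch, assuming the same cause applies here):

```r
# Flag columns of sub_t that contain nothing but NA
all_na <- sapply(sub_t, function(x) all(is.na(x)))
names(sub_t)[all_na]          # which columns are entirely missing

# Drop the all-NA columns before imputing with VIM::irmi
imp_t <- VIM::irmi(sub_t[, !all_na])
```

A column that is complete in sub_mex can easily become all-NA after subsetting down to 5 observations, which would explain why irmi(sub_mex) works while irmi(sub_t) does not.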

Related

Error when applying comb_CSR function in R

My aim is to estimate a complete subset regression using the function comb_CSR() in R.
My data set is the following:
str(df_na)
Classes ‘fredmd’ and 'data.frame': 360 obs. of 128 variables:
$ date : Date, format: "1992-03-01" "1992-04-01" "1992-05-01" "1992-06-01" ...
$ RPI : num 0.001653 0.00373 0.005329 0.004173 -0.000796 ...
$ W875RX1 : num 0.000812 0.002751 0.005493 0.004447 -0.001346 ...
$ DPCERA3M086SBEA: num 0.001824 0.000839 0.005146 0.002696 0.003342 ...
$ CMRMTSPLx : num 0.00402 0.00664 -0.00874 0.01049 0.0133 ...
$ RETAILx : num -0.003 0.00602 0.00547 0.0028 0.00708 ...
$ INDPRO : num 0.008279 0.007593 0.003221 0.000539 0.008911 ...
$ IPFPNSS : num 0.00851 0.00743 0.0055 -0.00244 0.00998 ...
$ IPFINAL : num 0.00899 0.0076 0.0058 -0.00309 0.01129 ...
$ IPCONGD : num 0.00911 0.00934 0.00648 -0.0049 0.01298 ...
$ IPDCONGD : num 0.0204 0.0185 0.0308 -0.0138 0.0257 ...
$ IPNCONGD : num 0.00518 0.00612 -0.00219 -0.00172 0.00843 ...
$ IPBUSEQ : num 0.01174 0.00958 0.00792 0.00247 0.01016 ...
$ IPMAT : num 0.007989 0.007794 0.000352 0.004296 0.007562 ...
$ IPDMAT : num 0.0113 0.00652 0.01044 0.00211 0.01118 ...
$ IPNMAT : num 0.014042 0.001707 -0.004866 0.010879 0.000204 ...
$ IPMANSICS : num 0.01014 0.00538 0.00579 0.00327 0.0089 ...
$ IPB51222S : num -0.00883 0.04244 -0.02427 -0.04027 0.00958 ...
$ IPFUELS : num 0.0048 0.00603 -0.00854 -0.00383 0.00329 ...
$ CUMFNS : num 0.6213 0.2372 0.2628 0.0569 0.5077 ...
$ HWI : num 140 -104 94 -36 -20 68 -91 55 98 3 ...
$ HWIURATIO : num 0.014611 -0.009559 -0.000538 -0.012463 0.003537 ...
$ CLF16OV : num 0.003116 0.001856 0.002172 0.00265 0.000809 ...
$ CE16OV : num 0.003315 0.002384 -0.000431 0.000372 0.00248 ...
$ UNRATE : num 0 0 0.2 0.2 -0.1 ...
$ UEMPMEAN : num 0.4 0.3 0.4 0.4 -0.1 ...
$ UEMPLT5 : num 0.0404 -0.0346 0.0361 0.0291 -0.023 ...
$ UEMP5TO14 : num -0.05285 0.00711 -0.01177 0.0075 0.00815 ...
$ UEMP15OV : num 0.00439 -0.02087 0.0956 0.08725 -0.03907 ...
$ UEMP15T26 : num -0.0365 -0.0321 0.0564 0.0966 -0.0762 ...
$ UEMP27OV : num 0.0386 -0.0119 0.1255 0.0804 -0.0122 ...
$ CLAIMSx : num -0.02914 -0.02654 -0.00203 0.00323 0.05573 ...
$ PAYEMS : num 0.000498 0.00142 0.001197 0.000607 0.000717 ...
$ USGOOD : num -0.000678 0.000226 0.000136 -0.001718 -0.001041 ...
$ CES1021000001 : num -0.00225 -0.00628 -0.00486 -0.01144 -0.00296 ...
$ USCONS : num 0.00195 -0.003903 0.000434 -0.004571 -0.003059 ...
$ MANEMP : num -0.001427 0.001546 0.000238 -0.000535 -0.000416 ...
$ DMANEMP : num -0.002104 0.000802 0.0002 -0.001304 -0.002009 ...
$ NDMANEMP : num -0.000439 0.00263 0.000292 0.000583 0.001893 ...
$ SRVPRD : num 0.0008 0.00173 0.00147 0.0012 0.00117 ...
$ USTPU : num 4.52e-05 4.97e-04 -6.78e-04 1.36e-04 -1.95e-03 ...
$ USWTRADE : num -0.000509 -0.002313 -0.002043 -0.001476 -0.003649 ...
$ USTRADE : num 1.56e-05 1.47e-03 -1.17e-04 5.22e-04 -1.68e-03 ...
$ USFIRE : num 0.000153 0.001379 0.001835 0.001222 -0.000153 ...
$ USGOVT : num 0.00139 0.001282 0.000747 0.00048 0.002927 ...
$ CES0600000007 : num 40.3 40.5 40.4 40.3 40.3 40.3 40.2 40.3 40.3 40.3 ...
$ AWOTMAN : num 0.1 0 0.2 -0.1 0 ...
$ AWHMAN : num 40.7 40.8 40.9 40.8 40.8 40.8 40.7 40.8 40.9 40.9 ...
$ HOUST : num 7.17 7 7.1 7.04 7.04 ...
$ HOUSTNE : num 4.92 4.81 4.82 4.74 4.81 ...
$ HOUSTMW : num 5.79 5.47 5.7 5.62 5.6 ...
$ HOUSTS : num 6.27 6.17 6.22 6.14 6.16 ...
$ HOUSTW : num 5.72 5.57 5.67 5.68 5.61 ...
$ PERMIT : num 6.99 6.96 6.96 6.96 6.99 ...
$ PERMITNE : num 4.81 4.77 4.83 4.8 4.86 ...
$ PERMITMW : num 5.58 5.49 5.55 5.49 5.53 ...
$ PERMITS : num 6.1 6.05 6.04 6.08 6.09 ...
$ PERMITW : num 5.52 5.59 5.54 5.55 5.59 ...
$ ACOGNO : num 0.04458 0.00165 0.02271 0.01092 -0.00382 ...
$ AMDMNOx : num 0.04682 0.03636 0.0108 -0.02403 -0.00199 ...
$ ANDENOx : num 0.0931 0.0104 0.0242 -0.0371 -0.0105 ...
$ AMDMUOx : num -0.00481 0.00194 0.00191 -0.00509 -0.00927 ...
$ BUSINVx : num 0.003853 0.003351 -0.000536 0.006642 0.005653 ...
$ ISRATIOx : num -0.03 0 -0.01 0 -0.01 ...
$ M1SL : num -0.003773 -0.004802 -0.000372 -0.003294 0.005502 ...
$ M2SL : num -0.004398 -0.002381 0.000911 -0.001208 0.001679 ...
$ M2REAL : num -0.00245 -0.0034 -0.00246 -0.00441 -0.00269 ...
$ BOGMBASE : num 0.01892 -0.02067 0.01167 0.00355 0.002 ...
$ TOTRESNS : num 0.0305 -0.1304 0.0784 0.0465 -0.0082 ...
$ NONBORRES : num 0.02896 -0.12317 0.06947 0.04605 -0.00826 ...
$ BUSLOANS : num 0.00237 -0.00104 0.00132 0.00173 -0.00106 ...
$ REALLN : num -0.00132 0.0058 -0.00663 -0.00338 0.00177 ...
$ NONREVSL : num -6.43e-05 -4.49e-03 3.65e-03 -2.72e-03 5.74e-03 ...
$ CONSPI : num -0.000498 -0.001173 -0.000825 -0.001014 -0.000115 ...
$ S&P 500 : num -0.012684 0.000123 0.018001 -0.015892 0.01647 ...
$ S&P: indust : num -0.01236 -0.000681 0.012694 -0.018013 0.010731 ...
$ S&P div yield : num 0.047815 -0.000371 -0.053946 0.047576 -0.04368 ...
$ S&P PE ratio : num 0.00689 0.02343 0.0377 -0.00193 0.02857 ...
$ FEDFUNDS : num -0.08 -0.25 0.09 -0.06 -0.51 ...
$ CP3Mx : num 0.19 -0.26 -0.16 0.04 -0.48 ...
$ TB3MS : num 0.2 -0.29 -0.12 0.03 -0.45 ...
$ TB6MS : num 0.25 -0.31 -0.12 0.02 -0.49 ...
$ GS1 : num 0.34 -0.33 -0.11 -0.02 -0.57 ...
$ GS5 : num 0.37 -0.17 -0.09 -0.21 -0.64 ...
$ GS10 : num 0.2 -0.06 -0.09 -0.13 -0.42 ...
$ AAA : num 0.06 -0.02 -0.05 -0.06 -0.15 ...
$ BAA : num 0.02 -0.04 -0.08 -0.08 -0.21 ...
$ COMPAPFFx : num 0.32 0.31 0.06 0.16 0.19 0.08 0.02 0.23 0.57 0.75 ...
$ TB3SMFFM : num 0.06 0.02 -0.19 -0.1 -0.04 -0.17 -0.31 -0.24 0.04 0.3 ...
$ TB6SMFFM : num 0.2 0.14 -0.07 0.01 0.03 -0.09 -0.26 -0.06 0.25 0.44 ...
$ T1YFFM : num 0.65 0.57 0.37 0.41 0.35 0.17 -0.04 0.2 0.59 0.79 ...
$ T5YFFM : num 2.97 3.05 2.87 2.72 2.59 2.3 2.16 2.5 2.95 3.16 ...
$ T10YFFM : num 3.56 3.75 3.57 3.5 3.59 3.29 3.2 3.49 3.78 3.85 ...
$ AAAFFM : num 4.37 4.6 4.46 4.46 4.82 4.65 4.7 4.89 5.01 5.06 ...
$ BAAFFM : num 5.27 5.48 5.31 5.29 5.59 5.35 5.4 5.74 5.87 5.89 ...
$ TWEXAFEGSMTHx : num 0.02529 -0.00399 -0.01238 -0.02224 -0.02363 ...
$ EXSZUSx : num 0.036 0.0066 -0.0191 -0.0451 -0.0655 ...
$ EXJPUSx : num 0.03964 0.00508 -0.02095 -0.03056 -0.00755 ...
$ EXUSUKx : num -0.0308 0.0188 0.0297 0.0249 0.0332 ...
[list output truncated]
- attr(*, "na.action")= 'omit' Named int [1:402] 1 2 3 4 5 6 7 8 9 10 ...
..- attr(*, "names")= chr [1:402] "2" "3" "4" "5" ...
The code is the following so far:
#define response variable
y <- df_na$PAYEMS
#define matrix of predictor variables
x <- data.matrix(df_na[, !names(df_na) %in% c("PAYEMS", "date")])
# break data into in-sample and out-of-sample
y.in = y[1:190]; y.out = y[-c(1:190)]
x.in = x[1:190, ]; x.out = x[-c(1:190), ]
trial <- foreccomb(y.in, x.in, y.out, x.out)
result <- comb_CSR(trial)
However, as soon as I run the last line, I get the following error:
> result <- comb_CSR(trial)
Error in matrix(0, ndiff_models, 4) :
invalid 'nrow' value (too large or NA)
In addition: Warning message:
In matrix(0, ndiff_models, 4) : NAs introduced by coercion to integer range
The data set does not have any NA values as I get rid of them beforehand. Unfortunately, I do not understand where the error comes from. Does anyone have an idea?
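One possibility worth checking (an assumption on my part, based on the "NAs introduced by coercion to integer range" warning rather than on the comb_CSR internals): complete subset regression enumerates every subset of the candidate columns, and with roughly 126 columns that count overflows R's integer range, so matrix(0, ndiff_models, 4) receives an NA nrow. A quick back-of-the-envelope check:

```r
# How many candidate subset models would a complete subset regression on
# all columns imply? (assumed interpretation of ndiff_models)
k <- ncol(x.in)                          # ~126 predictor/forecast columns here
n_models <- sum(choose(k, seq_len(k)))   # 2^k - 1, roughly 8.5e37 for k = 126
n_models > .Machine$integer.max          # TRUE -> matrix(0, n_models, 4) cannot be built
```

If that is indeed the cause, pre-screening the columns down to a much smaller set before calling foreccomb() would keep the model count representable.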

"'groups' must be a factor" on Shapiro-Wilk test on Rcmdr

I am trying to run a Shapiro-Wilk normality test in R (Rcmdr, to be more accurate) by going to "Statistics=>Summary=>Descriptive statistics", then selecting one of my dependent variables and choosing "summary by group".
Rcmdr automatically generates the following code:
normalityTest(Algometre.J0 ~ Modalite, test="shapiro.test",
data=Dataset)
And I am getting the following error message:
'groups' must be a factor.
I have already categorized my independent variable as a factor (I swear, I did!).
Any idea what's wrong?
Thanks in advance.
Here is what str(Dataset) shows :
'data.frame': 76 obs. of 11 variables:
$ Modalite : chr "C" "C" "C" "C" ...
$ Angle.J0 : num 20.1 20.5 21 22.5 19.1 ...
$ Angle.J1 : num 21.7 22.6 22.8 23.3 20.5 ...
$ Angle.J2 : num 22.3 23 23.9 24.2 21 ...
$ Epaisseur.J0: num 1.97 1.54 1.76 1.89 1.53 1.87 1.54 2 1.79 1.41 ...
$ Epaisseur.J1: num 2.07 1.49 1.87 1.91 1.54 1.9 1.51 2.03 1.71 1.48 ...
$ Epaisseur.J2: num 2.08 1.69 1.77 2 1.61 1.99 1.38 2.06 1.86 1.53 ...
$ Algometre.J0: num 45 40 105 165 66.3 ...
$ Algometre.J1: num 32.7 39.7 91.7 124 63.7 ...
$ Algometre.J2: num 51.3 58.7 101 138 60.3 ...
$ ObsNumber : int 1 2 3 4 5 6 7 8 9 10 ...
What does that mean?
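Judging from the str() output above, the likely cause is that Modalite is stored as chr rather than as a factor in this particular Dataset, so whatever conversion was done earlier did not carry over. A minimal sketch of the fix (assuming Modalite is the grouping variable Rcmdr is complaining about):

```r
# Convert the grouping variable to a factor, then re-run the generated call
Dataset$Modalite <- factor(Dataset$Modalite)
normalityTest(Algometre.J0 ~ Modalite, test = "shapiro.test", data = Dataset)
```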

How to normalize all variables in an R dataframe (except for the one variable that's a factor)

I'm having difficulty applying a max-min normalize function to the 30 predictor variables in my data frame while keeping the diagnosis variable (a factor, so the function doesn't apply to it) in the data frame.
```{r}
cancer_data <- as.data.frame(lapply(cancer_data, normalize))
```
This won't run because it throws an error referencing the factor column, but I don't want the new data frame to be created without that column. I would just like to apply the normalize function I created to the 30 predictor variables.
Here is the structure of my data frame if it provides helpful context at all:
str(cancer_data)
## 'data.frame': 569 obs. of 31 variables:
## $ diagnosis : Factor w/ 2 levels "Benign","Malignant": 1 1 1 1 1 1 1 2 1 1 ...
## $ radius_mean : num 12.3 10.6 11 11.3 15.2 ...
## $ texture_mean : num 12.4 18.9 16.8 13.4 13.2 ...
## $ perimeter_mean : num 78.8 69.3 70.9 73 97.7 ...
## $ area_mean : num 464 346 373 385 712 ...
## $ smoothness_mean : num 0.1028 0.0969 0.1077 0.1164 0.0796 ...
## $ compactness_mean : num 0.0698 0.1147 0.078 0.1136 0.0693 ...
## $ concavity_mean : num 0.0399 0.0639 0.0305 0.0464 0.0339 ...
## $ points_mean : num 0.037 0.0264 0.0248 0.048 0.0266 ...
## $ symmetry_mean : num 0.196 0.192 0.171 0.177 0.172 ...
## $ dimension_mean : num 0.0595 0.0649 0.0634 0.0607 0.0554 ...
## $ radius_se : num 0.236 0.451 0.197 0.338 0.178 ...
## $ texture_se : num 0.666 1.197 1.387 1.343 0.412 ...
## $ perimeter_se : num 1.67 3.43 1.34 1.85 1.34 ...
## $ area_se : num 17.4 27.1 13.5 26.3 17.7 ...
## $ smoothness_se : num 0.00805 0.00747 0.00516 0.01127 0.00501 ...
## $ compactness_se : num 0.0118 0.03581 0.00936 0.03498 0.01485 ...
## $ concavity_se : num 0.0168 0.0335 0.0106 0.0219 0.0155 ...
## $ points_se : num 0.01241 0.01365 0.00748 0.01965 0.00915 ...
## $ symmetry_se : num 0.0192 0.035 0.0172 0.0158 0.0165 ...
## $ dimension_se : num 0.00225 0.00332 0.0022 0.00344 0.00177 ...
## $ radius_worst : num 13.5 11.9 12.4 11.9 16.2 ...
## $ texture_worst : num 15.6 22.9 26.4 15.8 15.7 ...
## $ perimeter_worst : num 87 78.3 79.9 76.5 104.5 ...
## $ area_worst : num 549 425 471 434 819 ...
## $ smoothness_worst : num 0.139 0.121 0.137 0.137 0.113 ...
## $ compactness_worst: num 0.127 0.252 0.148 0.182 0.174 ...
## $ concavity_worst : num 0.1242 0.1916 0.1067 0.0867 0.1362 ...
## $ points_worst : num 0.0939 0.0793 0.0743 0.0861 0.0818 ...
## $ symmetry_worst : num 0.283 0.294 0.3 0.21 0.249 ...
## $ dimension_worst : num 0.0677 0.0759 0.0788 0.0678 0.0677 ...
Assuming you already have a normalize function in your environment, you can find the numeric variables in your data and apply the function to the selected columns using lapply:
cols <- sapply(cancer_data, is.numeric)
cancer_data[cols] <- lapply(cancer_data[cols], normalize)
Or, without creating cols:
cancer_data[] <- lapply(cancer_data, function(x)
  if (is.numeric(x)) normalize(x) else x)
If you want to exclude only the 1st column, you can also use:
cancer_data[-1] <- lapply(cancer_data[-1], normalize)
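All of the above assume normalize already exists in the workspace; for reference, a common max-min version looks like the sketch below (an assumption on my part, since the question's own function isn't shown):

```r
# Rescale a numeric vector to the [0, 1] range (hypothetical definition;
# the function created in the question may differ)
normalize <- function(x) (x - min(x)) / (max(x) - min(x))
```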
This should work, but do look into tidymodels
Thanks to akrun for the new shorter answer.
library(tidyverse)
cancer_data <-cancer_data %>% mutate_if(negate(is.factor), normalize)

How to create a loop to run two-variable generalized regression models?

I have 19 variables and I want to run 19 different regressions that consist of 2 independent variables from my dataset.
Update: this is my dataset's structure:
$ Failure_Response_Var_Yr: num 0 0 0 0 0 0 0 0 0 0 ...
$ exp_var_nocorr_2 : num 4.61 5.99 6.13 3.17 4.4 ...
$ exp_var_nocorr_3 : num 4.16 5.46 5.24 2.86 3.72 ...
$ exp_var_nocorr_4 : num 0.00191 2.23004 0.5613 1.07986 0.99836 ...
$ exp_var_nocorr_5 : num 0.709 2.79 6.846 15.478 11.418 ...
$ exp_var_nocorr_6 : num 0.724 0.497 1.782 0.156 2.525 ...
$ exp_var_nocorr_7 : num 0 168.17 92.041 0.584 265.338 ...
$ exp_var_nocorr_8 : num -38.64 4.89 1.5 24.8 16.56 ...
$ exp_var_nocorr_9 : num 116 88.3 56.4 60.6 57.6 ...
$ exp_var_nocorr_10 : num 0 10.3 0 93.7 0 ...
$ exp_var_nocorr_11 : num 1.02 1.23 1.31 2.06 1.33 ...
$ exp_var_nocorr_12 : num 60 140 124 275 203 ...
$ exp_var_nocorr_13 : num 10.835 5.175 1.838 0.347 0.783 ...
$ exp_var_nocorr_14 : num 59 60.2 87.2 42.2 84.2 ...
$ exp_var_nocorr_15 : num 61.9 68.3 99 50.2 103.9 ...
$ exp_var_nocorr_16 : num 4.4 11.24 8.23 6.9 8.84 ...
$ exp_var_nocorr_17 : num 6.43 18.62 10.72 15.62 10.35 ...
I wrote this code:
col17 <- names(my.sample)[-c(1:9,26:29)]
Now dput(col17) gives:
c("exp_var_nocorr_2", "exp_var_nocorr_3", "exp_var_nocorr_4", "exp_var_nocorr_5", "exp_var_nocorr_6", "exp_var_nocorr_7", "exp_var_nocorr_8", "exp_var_nocorr_9", "exp_var_nocorr_10", "exp_var_nocorr_11", "exp_var_nocorr_12", "exp_var_nocorr_13", "exp_var_nocorr_14", "exp_var_nocorr_15", "exp_var_nocorr_16", "exp_var_nocorr_17" )
logit.test2 <- vector("list", length(col17))
# start of loop
for(i in seq_along(col17)){
  for(k in seq_along(col17)){
    logit.test2[i] <- glm(reformulate(col17[i]+col17[k], "Failure_Response_Var_Yr"),
                          family=binomial(link='logit'), data=my.sample)
  }
}
# end of loop
but it produced this error:
"Error in col17[i] + col17[k] : non-numeric argument to binary operator"
Can anybody suggest code that fixes this problem?
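The error comes from col17[i] + col17[k]: + cannot be used to combine character strings in R. reformulate() takes a character vector of term labels instead, so a minimal sketch of a fix (keeping the nested-loop structure from the question) is:

```r
logit.test2 <- vector("list", length(col17)^2)
idx <- 0
for (i in seq_along(col17)) {
  for (k in seq_along(col17)) {
    idx <- idx + 1
    # pass both predictors as a character vector rather than "adding" them
    f <- reformulate(c(col17[i], col17[k]), response = "Failure_Response_Var_Yr")
    # use [[ ]] so each list element stores the whole glm object
    logit.test2[[idx]] <- glm(f, family = binomial(link = "logit"), data = my.sample)
  }
}
```

Note that this also fits models where i == k (the same predictor twice); iterating over combn(col17, 2) instead would give only the distinct two-variable pairs.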

R- get a single column from many columns

I have wavelengths from 350 to 2500, and each one has data:
x350 x351 x352 x353 x354 ...... x2500
0.18 0.17 0.17 0.17 0.16 ...... 0.3
0.16 0.15 0.15 0.15 0.15 ...... 0.47
0.14 0.14 0.13 0.13 0.13 ...... 0.35
I need to stack them into one column without the wavelength names and give this new column a name:
Wave
0.18
0.16
0.14
0.17
0.15
0.14
0.17
0.15
0.13
0.16
0.15
0.13
.
.
.
0.3
0.47
0.35
m is my data frame, and the wavelength columns run from column 17 to column 2167. I tried:
a <- list(m[1:16,17:2167])
but I get a list with the column names still in between:
list(structure(list(X350 = c(0.15723315, 0.138406682, 0.174909807,
0.143139974, 0.123193808, 0.154449448, 0.163255619, 0.126194713,
0.14327512, 0.066265248, 0.139851395, 0.158271497, 0.158060045,
0.145313933, 0.143890661), X351 = c(0.154324452, 0.135509959,
0.173350322, 0.139867145, 0.121439474, 0.15276091, 0.160391152,
0.125592826, 0.140349489, 0.065316491, 0.137927937, 0.158400317,
0.156211611, 0.142498763, 0.141353986), X352 = c(0.151243533....
How can I get just one column with one name from 2465 columns?
More info
str(m)
'data.frame': 16 obs. of 2167 variables:
$ pott : int 48 49 50 51 52 53 54 55 56 57 ...
$ b : chr "B1" "B1" "B1" "B1" ...
$ F : int 1 1 1 1 1 1 1 1 1 1 ...
$ G : chr "Sunstar" "Quarrion" "Nacozari" "W130114" ...
$ R : int 3 3 3 3 3 3 3 3 3 3 ...
$ D : int 80 80 81 80 81 80 82 82 82 82 ...
$ W: num 1.8 1.5 1.3 1.9 1.8 1.25 1.85 2.1 1.6 2.4 ...
$ S : num 43.4 35.7 44.7 48.6 45.3 35.5 49.2 49.1 46.8 41.5 ...
$ R : num -0.327 1.149 2.348 1.636 1.952 ...
$ V : num 76.4 49 118.9 108 114.5 ...
$ J : num 158 114 191 169 183 ...
$ P: num 19.9 10.6 24.1 21.1 23.6 ...
$ Ce : num 0.367 0.13 0.466 0.36 0.462 ...
$ Ci : num 273 246 280 263 272 ...
$ S : num 23.5 29 30.9 29.4 24.1 ...
$ L : num 42.5 34.4 32.4 34 41.4 ...
$ X350 : num 0.176 0.157 0.138 0.175 0.143 ...
$ X351 : num 0.172 0.154 0.136 0.173 0.14 ...
$ X352 : num 0.169 0.151 0.133 0.172 0.138 ...
$ X353 : num 0.167 0.147 0.132 0.17 0.137 ...
$ X354 : num 0.165 0.147 0.13 0.167 0.133 ...
$ X355 : num 0.162 0.146 0.127 0.166 0.13 ...
$ X356 : num 0.159 0.144 0.126 0.164 0.128 ...
$ X357 : num 0.158 0.14 0.125 0.161 0.125 ...
$ X358 : num 0.155 0.138 0.123 0.159 0.124 ...
$ X359 : num 0.153 0.137 0.121 0.157 0.123 ...
$ X360 : num 0.15 0.135 0.12 0.154 0.122 ...
... (columns continue through $ X2500)
I guess your data are in a text file
data <- read.table("your_file", header=T, quote="\"")
so, data will look like
structure(list(x350 = c(0.18, 0.16, 0.14), x351 = c(0.17, 0.15,
0.14), x352 = c(0.17, 0.15, 0.13), x353 = c(0.17, 0.15, 0.13)), .Names = c("x350",
"x351", "x352", "x353"), class = "data.frame", row.names = c(NA,
-3L))
and
result <- data.frame(Wave = unlist(data,use.names=FALSE))
will produce
Wave
1 0.18
2 0.16
3 0.14
4 0.17
5 0.15
6 0.14
7 0.17
8 0.15
9 0.13
10 0.17
11 0.15
12 0.13
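Applied to the m data frame from the question, where the wavelength columns run from column 17 to column 2167, the same idea would be (a sketch, assuming those column positions are right):

```r
# stack every wavelength column of m into one unnamed vector,
# then wrap it in a one-column data frame called Wave
wave_df <- data.frame(Wave = unlist(m[, 17:2167], use.names = FALSE))
```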
