Reshaping data of different time lengths in R - r

I want to perform several repeated-measures analyses on my data. I first need to reshape the dataframe from a wide to a long format to do that.
This is my dataframe:
ID Group x1 x2 x3 y1 y2 y3 z1 z2
144 1 566 613 597 563 549 562 599 469
167 2 697 638 756 682 695 693 718 439.5
247 4 643 698 730 669 656 669 698 514.5
317 4 633 646 641 520 543 586 559 405.5
344 3 651 678 708 589 608 615 667 514
352 2 578 702 671 536 594 579 591 467.5
382 1 678 690 693 555 565 534 521 457.5
447 3 668 672 718 663 689 751 784 506.5
464 2 760 704 763 514 554 520 564 486
628 1 762 789 783 618 610 645 625 536
As you might notice, I have measured variables x and y at three time points and variable z at only two. I was wondering whether it makes sense at all to try and reshape the data into long format, given that the variables have different numbers of time points.
I have not been able to do so. So first of all, does it even make sense to do it this way, or should I make two dataframes? Second, if it does make sense, how?
EDIT: I would expect something like:
ID Group Timex Timey Timez x y z
144 1 1 1 1 566 563 599
144 1 2 2 2 613 549 469
144 1 3 3 597 562
167 2 1 1 1 697 682 718
167 2 2 2 2 638 695 439.5
167 2 3 3 756 693
....
But I'm not even sure whether that makes sense at all, given these empty cells.

Here is one idea. dt_all is the final output. Notice that this example does not create Timex, Timey, and Timez, but I would argue that one column called Time is sufficient and individual Timex, Timey, and Timez are redundant.
# Load packages
library(dplyr)
library(tidyr)
# Process the data
dt_all <- dt %>%
  gather(Var, Value, -ID, -Group) %>%
  mutate(Time = sub("[a-z]", "", Var), Type = sub("[0-9]", "", Var)) %>%
  select(-Var) %>%
  spread(Type, Value)
Data Preparation
# Create example data frames
dt <- read.table(text = "ID Group x1 x2 x3 y1 y2 y3 z1 z2
144 1 566 613 597 563 549 562 599 469
167 2 697 638 756 682 695 693 718 439.5
247 4 643 698 730 669 656 669 698 514.5
317 4 633 646 641 520 543 586 559 405.5
344 3 651 678 708 589 608 615 667 514
352 2 578 702 671 536 594 579 591 467.5
382 1 678 690 693 555 565 534 521 457.5
447 3 668 672 718 663 689 751 784 506.5
464 2 760 704 763 514 554 520 564 486
628 1 762 789 783 618 610 645 625 536",
header = TRUE)
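For completeness, in newer versions of tidyr (1.0 or later, which is an assumption about your setup), gather() and spread() are superseded by pivot_longer() and pivot_wider(). A minimal sketch of the same reshape using the dt created above:
library(dplyr)
library(tidyr)
# split names like "x1" into a Type ("x") and a Time ("1") in one step,
# then spread Type back out into the x, y, z columns
dt_all <- dt %>%
  pivot_longer(cols = -c(ID, Group),
               names_to = c("Type", "Time"),
               names_pattern = "([a-z])([0-9])") %>%
  pivot_wider(names_from = Type, values_from = value)
As before, z is simply NA at Time 3, which is the usual way a long format represents the missing third measurement.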

Related

Calculating number combinations through R

I have the following data. I first want to find the most frequently occurring digit at every place value (obviously, each place can take one of 10 digits, 0 to 9). Then I want an option to choose the top 5, 6, 7, or 8 occurrences; for example, if I choose 5 the program should take the 5 most frequent digits, and if I choose 8 it should leave out the 2 least frequent digits and take all the others.
Data example:
076060
693022
585821
980575
438068
766214
051726
060417
822591
015507
635576
180231
212238
417651
631269
720767
348344
532148
748085
474026
380897
512421
749492
423616
950330
930079
097759
638901
319356
683308
818127
880675
256095
639187
339904
945437
799571
466063
428853
397799
782034
462486
739342
879023
419264
793319
603131
315791
351701
151747
365656
982700
348093
793392
946875
912108
070001
780515
222468
345439
234846
227112
757243
341747
480781
906624
868265
388572
947873
898895
452518
738580
217342
849951
437382
247068
743776
562584
636948
049434
139296
688436
443629
I want the option of choosing the top 5, 6, 7, or 8 occurrences and a 2-, 3-, or 4-number combination.
Expected results for 2-number combinations based on the top 8 occurrences:
01
02
03
04
05
06
08
09
21
22
23
24
25
26
28
29
31
32
33
34
35
36
38
39
41
42
43
44
45
46
48
49
61
62
63
64
65
66
68
69
71
72
73
74
75
76
78
79
81
82
83
84
85
86
88
89
91
92
93
94
95
96
98
99
Expected results for 3-number combinations based on the top 8 occurrences:
010
012
013
015
016
017
018
019
020
022
023
025
026
027
028
029
030
032
033
035
036
037
038
039
040
042
043
045
046
047
048
049
050
052
053
055
056
057
058
059
060
062
063
065
066
067
068
069
080
082
083
085
086
087
088
089
090
092
093
095
096
097
098
099
210
212
213
215
216
217
218
219
220
222
223
225
226
227
228
229
230
232
233
235
236
237
238
239
240
242
243
245
246
247
248
249
250
252
253
255
256
257
258
259
260
262
263
265
266
267
268
269
280
282
283
285
286
287
288
289
290
292
293
295
296
297
298
299
310
312
313
315
316
317
318
319
320
322
323
325
326
327
328
329
330
332
333
335
336
337
338
339
340
342
343
345
346
347
348
349
350
352
353
355
356
357
358
359
360
362
363
365
366
367
368
369
380
382
383
385
386
387
388
389
390
392
393
395
396
397
398
399
410
412
413
415
416
417
418
419
420
422
423
425
426
427
428
429
430
432
433
435
436
437
438
439
440
442
443
445
446
447
448
449
450
452
453
455
456
457
458
459
460
462
463
465
466
467
468
469
480
482
483
485
486
487
488
489
490
492
493
495
496
497
498
499
610
612
613
615
616
617
618
619
620
622
623
625
626
627
628
629
630
632
633
635
636
637
638
639
640
642
643
645
646
647
648
649
650
652
653
655
656
657
658
659
660
662
663
665
666
667
668
669
680
682
683
685
686
687
688
689
690
692
693
695
696
697
698
699
710
712
713
715
716
717
718
719
720
722
723
725
726
727
728
729
730
732
733
735
736
737
738
739
740
742
743
745
746
747
748
749
750
752
753
755
756
757
758
759
760
762
763
765
766
767
768
769
780
782
783
785
786
787
788
789
790
792
793
795
796
797
798
799
810
812
813
815
816
817
818
819
820
822
823
825
826
827
828
829
830
832
833
835
836
837
838
839
840
842
843
845
846
847
848
849
850
852
853
855
856
857
858
859
860
862
863
865
866
867
868
869
880
882
883
885
886
887
888
889
890
892
893
895
896
897
898
899
910
912
913
915
916
917
918
919
920
922
923
925
926
927
928
929
930
932
933
935
936
937
938
939
940
942
943
945
946
947
948
949
950
952
953
955
956
957
958
959
960
962
963
965
966
967
968
969
980
982
983
985
986
987
988
989
990
992
993
995
996
997
998
999
Code I have tried:
getwd()
setwd("C:/Users/aziq/Desktop")
library(xlsx)
x <- read.xlsx("numbers.xlsx","Sheet1")
generate_combinations <- function(x, pos, n) {
  # select the first pos characters from each string,
  # split each into single characters and create a matrix
  mat <- do.call(rbind, strsplit(substr(x, 1, pos), ''))
  # find the top n occurrences in each column of the matrix
  tmp <- apply(mat, 2, function(x) tail(names(sort(table(x))), n))
  # create all combinations of the top occurrences
  do.call(expand.grid, asplit(tmp, 2))
}
generate_combinations(x, 2, 8)
nrow(generate_combinations(x, 2, 8))
The error it shows:
Error in asplit(tmp, 2) : dim(x) must have a positive length
dput results:
> dput(x)
structure(list(X076060 = c("693022", "585821", "980575", "438068",
"766214", "051726", "060417", "822591", "015507", "635576", "180231",
"212238", "417651", "631269", "720767", "348344", "532148", "748085",
"474026", "380897", "512421", "749492", "423616", "950330", "930079",
"097759", "638901", "319356", "683308", "818127", "880675", "256095",
"639187", "339904", "945437", "799571", "466063", "428853", "397799",
"782034", "462486", "739342", "879023", "419264", "793319", "603131",
"315791", "351701", "151747", "365656", "982700", "348093", "793392",
"946875", "912108", "070001", "780515", "222468", "345439", "234846",
"227112", "757243", "341747", "480781", "906624", "868265", "388572",
"947873", "898895", "452518", "738580", "217342", "849951", "437382",
"247068", "743776", "562584", "636948", "049434", "139296", "688436",
"443629")), class = "data.frame", row.names = c(NA, -82L))
We can write a function:
generate_combinations <- function(x, pos, n) {
  if (pos == 1) {
    return(data.frame(Var1 = names(sort(table(substr(x, 1, pos)),
                                        decreasing = TRUE)[1:n])))
  }
  # select the first pos characters from each string,
  # split each into single characters and create a matrix
  mat <- do.call(rbind, strsplit(substr(x, 1, pos), ''))
  # find the top n occurrences in each column of the matrix
  tmp <- apply(mat, 2, function(x) tail(names(sort(table(x))), n))
  # create all combinations of the top occurrences
  do.call(expand.grid, asplit(tmp, 2))
}
generate_combinations(x, 2, 8)
# Var1 Var2
#1 0 2
#2 2 2
#3 8 2
#4 6 2
#5 9 2
#6 3 2
#7 4 2
#8 7 2
#9 0 5
#10 2 5
#...
#...
nrow(generate_combinations(x, 2, 8))
#[1] 64
nrow(generate_combinations(x, 3, 8))
#[1] 512
data
x <- c("076060", "693022", "585821", "980575", "438068", "766214",
"051726", "060417", "822591", "015507", "635576", "180231", "212238",
"417651", "631269", "720767", "348344", "532148", "748085", "474026",
"380897", "512421", "749492", "423616", "950330", "930079", "097759",
"638901", "319356", "683308", "818127", "880675", "256095", "639187",
"339904", "945437", "799571", "466063", "428853", "397799", "782034",
"462486", "739342", "879023", "419264", "793319", "603131", "315791",
"351701", "151747", "365656", "982700", "348093", "793392", "946875",
"912108", "070001", "780515", "222468", "345439", "234846", "227112",
"757243", "341747", "480781", "906624", "868265", "388572", "947873",
"898895", "452518", "738580", "217342", "849951", "437382", "247068",
"743776", "562584", "636948", "049434", "139296", "688436", "443629")

Is there a more efficient version of match for searching repeated permutations of numbers?

I have a dataset with 20 rows and n columns. I was originally working with n=10000, but have found that I need to use a much bigger number instead, probably over ten times as much. Each column of this dataset is generated independently of the others and contains a random but biased permutation of the whole numbers from 1 to 20. I wish to sum the locations of each number over the entire dataset. In other words, I want to know how many times the number a appeared in the bth position for every a and b (i.e. my final result is a 20*20 table of values).
I am confident that I already have code that achieves this goal. For example, my computer handles the entire n=10000 case in under two minutes (i.e. it gives me the count for every single a and b). However, both n=100000 and even the smaller n=50000 took so long that my patience ran out. Most of my code is extremely simple, and I am confident that the inefficiency lies in the use of match in the following lines (a, b, and n are as described above, and data is the dataset):
list <- c()
for (k in 1:n) {
  position <- match(a, data[, k])
  list <- c(list, position)
}
return(sum(list == b))
How can I improve this? match appears to be notoriously slow, but all of the solutions I've seen (example) are either not general or not applicable to this case.
If you wish to benchmark your solution, replicate(n, sample(20)) will generate a dataset similar to mine.
I think the major bottleneck is that you are growing the vector inside the loop. Initialize it to its full length before the loop and assign into it instead:
list_vec <- numeric(length = n)
for (k in 1:n) {
  list_vec[k] <- match(a, data[, k])
}
Or, if data is stored as a data frame (so that sapply iterates over its columns), using sapply:
sapply(data, function(x) match(a, x))
An option using data.table:
library(data.table)
DT <- data.table(ri=rep(1:20, n), v=as.vector(l))
dcast(DT, ri ~ v, length)
output:
ri 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
1: 1 499 506 481 507 434 498 537 493 495 474 504 506 545 499 492 467 510 527 507 519
2: 2 506 513 473 521 520 492 508 518 469 520 491 463 495 520 499 526 502 481 492 491
3: 3 481 499 510 480 506 499 493 522 512 507 516 484 516 482 536 476 509 477 500 495
4: 4 502 498 519 532 493 522 481 515 542 488 471 496 466 443 460 505 531 481 532 523
5: 5 497 468 523 492 475 430 502 491 526 514 490 528 460 498 471 557 488 547 521 522
6: 6 514 505 497 506 533 505 482 462 536 508 482 533 505 497 527 496 479 479 479 475
7: 7 525 522 511 476 502 536 508 486 495 452 493 506 507 498 530 498 475 478 498 504
8: 8 544 450 521 528 491 497 534 503 504 497 506 464 485 501 511 467 478 484 523 512
9: 9 442 515 515 507 496 515 460 537 528 510 490 500 526 510 499 508 497 517 465 463
10: 10 513 505 497 517 486 483 518 483 503 491 495 514 507 483 485 514 516 501 498 491
11: 11 480 530 491 486 503 507 517 487 467 499 504 497 496 521 499 444 525 511 500 536
12: 12 507 464 506 537 516 489 480 500 450 507 490 539 482 484 508 483 522 519 471 546
13: 13 501 527 521 443 510 527 507 507 492 547 486 465 515 544 504 472 502 529 456 445
14: 14 478 494 502 464 495 515 503 504 514 475 522 471 529 487 509 548 500 505 510 475
15: 15 489 513 488 505 532 487 506 525 438 530 534 497 494 475 491 494 468 499 544 491
16: 16 520 484 467 516 480 498 508 503 512 472 535 503 533 526 505 508 495 477 460 498
17: 17 512 465 491 514 516 469 487 485 491 465 522 550 494 514 506 542 508 476 490 503
18: 18 505 526 503 499 502 518 484 489 508 513 476 491 505 478 482 523 500 461 555 482
19: 19 528 508 492 488 513 513 493 474 500 510 467 474 463 543 482 495 523 522 505 507
20: 20 457 508 492 482 497 500 492 516 518 521 526 519 477 497 504 477 472 529 494 522
data:
set.seed(0L)
n <- 1e4
l <- replicate(n, sample(20))
This took about 1.4 seconds on my two-year-old Macbook Pro (although #chinsoon12's data.table solution is far faster--about 0.04 seconds on my machine):
library(tidyverse)
# Fake data matrix, 20 rows x 100,000 columns
n = 100000
set.seed(2)
d = replicate(n, sample(1:20))
# Convert to long data frame and count positions
d %>%
  as_tibble() %>%
  pivot_longer(cols = everything()) %>%
  arrange(name) %>%
  mutate(position = rep(1:20, n)) %>%
  group_by(value, position) %>%
  tally
value position n
<int> <int> <int>
1 1 1 4901
2 1 2 5031
3 1 3 4980
4 1 4 4997
5 1 5 4959
6 1 6 5004
7 1 7 4888
8 1 8 5021
9 1 9 4970
10 1 10 4986
# … with 390 more rows
If I understand correctly, this can be done fast, without any package:
n <- 10000
k <- 20
data <- replicate(n, sample(k))
## The result: a k times k array.
## E.g. result[1, 5] tells you how often
## 5 appears in row 1.
result <- array(NA, dim = c(k, k))
for (i in 1:k) {
  tmp <- data[seq(i, to = length(data), by = k)]
  for (j in 1:k)
    result[i, j] <- sum(tmp == j)
}
For a million samples (n == 1e6), it takes about 2 seconds or so.
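For reference, the same k-by-k count can also be obtained without the double loop, in a single vectorized call. A sketch reusing the data, n, and k objects defined above:
# as.vector() walks the k-by-n matrix column by column, so the row
# (position) index of every element is just 1..k repeated n times
pos <- rep(seq_len(k), n)
result2 <- table(position = pos, value = as.vector(data))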
Avoid growing objects in a loop, as well as the bookkeeping of initializing an object and then assigning into it. Consider sapply or, slightly faster, vapply (which verifies the type and length of the return value):
myVec <- sapply(seq(n), function(k) match(a, data[,k]))
sum(myVec==b)
myVec <- vapply(seq(n), function(k) match(a, data[,k]), integer(1))
sum(myVec==b)

R reshape by day, month, year [duplicate]

This question already has answers here:
How to reshape data from long to wide format
(14 answers)
Closed 5 years ago.
I have a simple table in the following format:
Date val
2005-01-01 15
2005-01-02 18
2005-01-03 20
...
And am trying to reshape it to the following "wide" column format:
Year Month day1 day2 day3 day4 ...day31
2005 01 day1val day2val day3val day4val ...day31val
2005 02 day1val day2val day3val day4val ...day31val
I've successfully split the date column into three separate d,m,y columns using
dates_separated <- data.frame(year = as.numeric(format(input_df$DATE, format = "%Y")),
                              month = as.numeric(format(input_df$DATE, format = "%m")),
                              day = as.numeric(format(input_df$DATE, format = "%d")))
output_df <- cbind(input_df, dates_separated)
I'm trying to use the reshape function to get this done, but am finding my output could be more complicated than it can handle. Is there another function I should be using here?
Edit: I don't believe this was a duplicate of what was suggested. markdly's answer below did exactly what I needed. Thanks!
For the sake of completeness, here is a solution using the dcast() function.
OP's input_df consists only of two columns Date and val. So, let's create a full year of sample data by
set.seed(1234L)
input_df <- data.frame(Date = as.Date("2005-01-01") + 0:364,
val = sample(100:999, 365L, TRUE))
The dcast() function is available from the reshape2 and the data.table packages. Here, data.table is used because of its handy year(), month(), and mday() functions:
library(data.table)
dcast(input_df, year(Date) + month(Date) ~ mday(Date))
year(Date) month(Date) 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31
1 2005 1 202 660 648 661 874 676 108 309 699 562 724 590 354 931 363 853 357 340 268 309 384 372 243 135 296 829 573 923 848 141 510
2 2005 2 338 374 556 262 783 281 332 992 826 598 681 380 659 396 551 709 536 319 788 166 378 745 554 237 553 544 776 257 NA NA NA
3 2005 3 863 878 137 385 112 315 735 377 557 146 608 209 903 113 804 180 567 445 163 388 701 933 524 228 589 276 908 450 379 244 906
4 2005 4 249 910 220 218 194 560 370 124 378 767 131 608 352 283 220 393 239 216 491 134 741 190 955 209 297 921 951 351 211 817 NA
5 2005 5 769 924 995 948 537 355 326 552 547 386 966 670 214 480 922 521 917 637 668 882 552 985 391 533 421 664 767 609 982 619 495
6 2005 6 305 173 865 311 989 641 998 438 599 486 618 489 302 176 673 487 165 822 392 781 625 737 484 409 783 481 604 204 372 530 NA
7 2005 7 410 640 168 960 119 857 669 379 768 675 993 215 894 829 839 851 759 984 675 694 575 385 791 573 759 376 463 283 987 609 352
8 2005 8 266 782 610 938 674 730 531 865 480 128 332 401 220 549 821 403 558 544 817 610 196 826 610 291 774 376 540 990 481 319 295
9 2005 9 720 982 529 796 616 969 817 578 636 337 351 158 606 336 102 630 568 860 126 639 341 208 190 773 114 144 772 421 783 438 NA
10 2005 10 819 123 555 839 590 340 410 432 486 926 805 764 352 511 358 726 838 689 472 956 318 647 782 724 203 672 378 417 982 584 499
11 2005 11 954 507 271 992 593 791 922 713 466 466 231 277 272 467 413 851 278 875 457 237 405 430 484 267 692 928 760 894 958 275 NA
12 2005 12 525 447 436 125 936 469 960 344 565 980 432 379 130 700 928 140 281 769 217 737 998 949 633 758 538 791 102 602 514 396 851
To prettify the result, Year and Month can be computed in advance:
dcast(setDT(input_df)[, Year := year(Date)][, Month := month(Date)],
Year + Month ~ sprintf("day%02i", mday(Date)), value.var = "val")
Year Month day01 day02 day03 day04 day05 day06 day07 day08 day09 day10 day11 day12 day13 day14 day15 day16 day17 day18 day19 day20 ...
1: 2005 1 202 660 648 661 874 676 108 309 699 562 724 590 354 931 363 853 357 340 268 309
2: 2005 2 338 374 556 262 783 281 332 992 826 598 681 380 659 396 551 709 536 319 788 166
3: 2005 3 863 878 137 385 112 315 735 377 557 146 608 209 903 113 804 180 567 445 163 388
4: 2005 4 249 910 220 218 194 560 370 124 378 767 131 608 352 283 220 393 239 216 491 134
5: 2005 5 769 924 995 948 537 355 326 552 547 386 966 670 214 480 922 521 917 637 668 882
6: 2005 6 305 173 865 311 989 641 998 438 599 486 618 489 302 176 673 487 165 822 392 781
7: 2005 7 410 640 168 960 119 857 669 379 768 675 993 215 894 829 839 851 759 984 675 694
8: 2005 8 266 782 610 938 674 730 531 865 480 128 332 401 220 549 821 403 558 544 817 610
9: 2005 9 720 982 529 796 616 969 817 578 636 337 351 158 606 336 102 630 568 860 126 639
10: 2005 10 819 123 555 839 590 340 410 432 486 926 805 764 352 511 358 726 838 689 472 956
11: 2005 11 954 507 271 992 593 791 922 713 466 466 231 277 272 467 413 851 278 875 457 237
12: 2005 12 525 447 436 125 936 469 960 344 565 980 432 379 130 700 928 140 281 769 217 737
Note that sprintf("day%02i", mday(Date)) is used here to keep the columns ordered. Using paste0("day", day) as in markdly's answer, the columns would appear in the wrong order:
day1 day10 day11 day12 day13 day14 day15 day16 day17 day18 day19 day2 day20 ...
If you can add actual data to your question it really helps others to post answers. For example, here's some data for 5 days in each month in 2015:
set.seed(123)
df <- expand.grid(year = 2015, month = 1:12, day = 1:5)
df$val <- sample.int(1000, nrow(df))
head(df)
#> year month day val
#> 1 2015 1 1 288
#> 2 2015 2 1 788
#> 3 2015 3 1 409
#> 4 2015 4 1 881
#> 5 2015 5 1 937
#> 6 2015 6 1 46
This can be converted to the desired format using tidyr::spread:
library(dplyr)
library(tidyr)
df %>%
  mutate(day = paste0("day", day)) %>%
  spread(day, val)
#> year month day1 day2 day3 day4 day5
#> 1 2015 1 288 670 640 732 254
#> 2 2015 2 788 566 691 209 816
#> 3 2015 3 409 102 530 307 44
#> 4 2015 4 881 993 579 223 420
#> 5 2015 5 937 243 282 138 758
#> 6 2015 6 46 42 143 398 116
#> 7 2015 7 525 323 935 397 531
#> 8 2015 8 887 996 875 353 196
#> 9 2015 9 548 872 669 146 121
#> 10 2015 10 453 679 770 133 711
#> 11 2015 11 948 627 24 961 844
#> 12 2015 12 449 972 462 445 957
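For what it's worth, in current tidyr (1.0 or later) spread() is superseded by pivot_wider(). A sketch of the same reshape with the sample df above; sprintf() is used instead of paste0() so that the day columns would still sort correctly past day9:
library(dplyr)
library(tidyr)
df %>%
  mutate(day = sprintf("day%02d", day)) %>%
  pivot_wider(names_from = day, values_from = val)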

In R, how can one detect consecutive data points within a column of data which have a value >y only when they appear more than x times consecutively?

I am a beginner R user and am trying to find a way to detect x consecutive values within a column of data which have a value >=y. Example: find all instances where 4 or more consecutive data points have a value >=2
The run length encoding rle() command looks promising for identifying these consecutive values:
rle(dataset>=2)
but I cannot find a way to further set the condition for the lengths to be >=4 and the values to be "TRUE".
Any suggestions?
res <- rle(dataset >= 2)
which(res$lengths >= 4 & res$values)
That will identify them in the compacted representation of the rle result and you will then need to expand that result and pick out starting points for the sequences.
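A sketch of that last step, assuming dataset is the numeric vector and res is the rle result above: the cumulative run lengths give the index at which each run starts.
run_starts <- cumsum(c(1, head(res$lengths, -1)))  # first index of every run
hits <- which(res$lengths >= 4 & res$values)       # runs of TRUE with length >= 4
run_starts[hits]                                   # where each qualifying run begins in dataset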
You can simply transform the vector, and use rle on that:
res = rle(runif(1000, 0, 4) >= 2)
res
Run Length Encoding
lengths: int [1:491] 2 2 2 2 3 1 3 2 7 1 ...
values : logi [1:491] TRUE FALSE TRUE FALSE TRUE FALSE ...
To get the indices where in the vector the runs are located, you can use this trick:
res$values = res$lengths >= 4 & res$values
inv_res = inverse.rle(res)
seq_along(inv_res)[inv_res]
[1] 3 4 5 6 7 8 9 10 11 12 13 37 38 39 40 41 42 74
[19] 75 76 77 78 79 80 81 82 83 84 85 108 109 110 111 112 142 143
[37] 144 145 146 147 148 221 222 223 224 225 226 235 236 237 238 239 240 241
[55] 278 279 280 281 282 305 306 307 308 309 310 311 312 313 314 315 316 317
[73] 318 319 342 343 344 345 346 347 414 415 416 417 418 419 430 431 432 433
[91] 434 435 449 450 451 452 453 472 473 474 475 476 477 478 523 524 525 526
[109] 527 545 546 547 548 549 561 562 563 564 565 566 567 568 569 607 608 609
[127] 610 611 612 613 625 626 627 628 629 630 646 647 648 649 650 651 652 699
[145] 700 701 702 703 765 766 767 768 769 770 771 772 773 789 790 791 792 793
[163] 794 795 800 801 802 803 804 810 811 812 813 814 850 851 852 853 854 855
[181] 869 870 871 872 873 879 880 881 882 883 904 905 906 907 908 909 919 920
[199] 921 922 923 949 950 951 952 953 954 955 956 957 958 959 960 961

R: kmeans on matrix (100x21)

I have a question about k-means in R. I have a dataframe like the variable data_file below. My question is: how can I perform k-means clustering on my data? If anyone has any suggestions, you are more than welcome. Thank you!
> data_file
WT_Sham WT_Sham.1 WT_Sham.2 WT_Sham.3 WT_Sham.4 WT_Sham.5 WT_CSD WT_CSD.1 WT_CSD.2 WT_CSD.3 RQ_Sham RQ_Sham.1 RQ_Sham.2 RQ_Sham.3 RQ_Sham.4 RQ_Sham.5 RQ_CSD RQ_CSD.1 RQ_CSD.2 RQ_CSD.3
ENSMUSG00000002012 581 1221 681 1789 376 787 1009 480 992 1004 582 896 1319 1200 663 1089 1003 821 807 696
ENSMUSG00000028182 2 11 3 8 2 8 1 3 12 3 1 5 35 13 0 1 8 13 5 1
ENSMUSG00000002017 382 698 555 1290 892 999 546 245 689 539 367 548 927 905 853 623 823 722 494 505
ENSMUSG00000028184 381 666 443 763 491 655 621 376 379 353 382 306 878 690 1787 257 776 636 240 564
ENSMUSG00000002015 402 956 533 1728 1224 1129 668 383 930 355 481 704 1611 1458 0 345 1199 1017 653 917
ENSMUSG00000028180 778 2158 1506 3606 2489 3128 1573 1030 1962 956 1093 1410 3702 3122 1 1433 2535 2125 1242 1825
Did you try the built-in function kmeans?
kmeans(USArrests, centers=3)
USArrests is just a data set that comes with R.
If you google R kmeans you will get more information.
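A minimal sketch for the data shown above, assuming data_file is a numeric data frame or matrix with genes as rows and samples as columns; the choice of 3 centers and the scaling step are illustrative assumptions, not requirements:
set.seed(1)                                   # kmeans starts from random centers
km <- kmeans(scale(data_file), centers = 3)   # scale first: the counts differ widely in magnitude
km$cluster                                    # cluster assignment for each row (gene)
km$centers                                    # cluster centers in the scaled space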
