Determining which R architectures are installed

How does one determine which architectures are supported by an installation of R? On a standard Windows install, one may look for the existence of R_HOME/bin/*/R.exe, where * is the architecture (typically i386 or x64). On a standard Mac install from CRAN, there are no such subdirectories.
I can query R for the default architecture using something like:
$ R --silent -e "sessionInfo()[[1]][[2]]"
> sessionInfo()[[1]][[2]]
[1] "x86_64"
but how do I know on Mac/Linux whether any sub-architectures are installed, and if so what they are?

R.version, R.Version(), R.version.string, and version provide detailed information about the version of R running.
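For example, a quick look at what these expose (the values shown in the comments are illustrative and will vary by machine):
R.version.string     # e.g. "R version 3.5.0 (2018-04-23)"
R.version$arch       # e.g. "x86_64"
R.Version()$platform # e.g. "x86_64-apple-darwin15.6.0"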
Update, based on a better understanding of the question. This isn't a complete solution, but it seems you can get fairly close via a combination of the following commands:
# get all the installed architectures
arch <- basename(list.dirs(R.home('bin'), recursive=FALSE))
# handle different operating systems
if (.Platform$OS.type == "unix") {
  # bin/exec is not an architecture directory, so strip it out
  arch <- gsub("exec", "", arch)
  if (length(arch) == 0 || all(arch == ""))  # no sub-architectures found
    arch <- R.version$arch
} else { # Windows
  # any special handling
}
Note that this won't work if you've built R from source and installed the different architectures in various different places. See 2.6 Sub-architectures of the R Installation and Administration manual for more details.
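Relatedly, .Platform$r_arch reports the sub-architecture of the running R build itself; it is the empty string on builds without sub-architectures (such as the CRAN macOS binaries):
.Platform$r_arch   # "" on a build without sub-architectures; e.g. "x64" on 64-bit Windows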

Sys.info() gives you a lot of information about your system. Maybe it can help here:
Sys.info()["machine"]
machine
"x86_64"
EDIT
One workaround to enumerate all the possible architectures is to download the log files from the RStudio CRAN mirror; the list isn't complete, but it's a good estimate of what you need.
start <- as.Date('2012-10-01')
today <- as.Date('2013-07-01')
all_days <- seq(start, today, by = 'day')
year <- as.POSIXlt(all_days)$year + 1900
urls <- paste0('http://cran-logs.rstudio.com/', year, '/', all_days, '.csv.gz')
files <- file.path("/tmp", basename(urls))
# the logs must be fetched before they can be read
Map(download.file, urls, files)
list_data <- lapply(files, read.csv, stringsAsFactors = FALSE)
data <- do.call(rbind, list_data)
str(data)
## 'data.frame': 10694506 obs. of 10 variables:
## $ date : chr "2012-10-01" "2012-10-01" "2012-10-01" "2012-10-01" ...
## $ time : chr "00:30:13" "00:30:15" "02:30:16" "02:30:16" ...
## $ size : int 35165 212967 167199 21164 11046 42294 435407 326143 119459 868695 ...
## $ r_version: chr "2.15.1" "2.15.1" "2.15.1" "2.15.1" ...
## $ r_arch : chr "i686" "i686" "x86_64" "x86_64" ...
## $ r_os : chr "linux-gnu" "linux-gnu" "linux-gnu" "linux-gnu" ...
## $ package : chr "quadprog" "lavaan" "formatR" "stringr" ...
## $ version : chr "1.5-4" "0.5-9" "0.6" "0.6.1" ...
## $ country : chr "AU" "AU" "US" "US" ...
## $ ip_id : int 1 1 2 2 2 2 2 1 1 3 ...
unique(data[["r_arch"]])
## [1] "i686" "x86_64" NA "i386" "i486"
## [6] "i586" "armv7l" "amd64" "000000" "powerpc64"
## [11] "armv6l" "sparc" "powerpc" "arm" "armv5tel"

Related

Split PDF files into multiple files every 2 pages in R

I have a PDF document with 300 pages. I need to split this file into 150 files, each containing 2 pages. For example, the 1st document would contain pages 1 & 2 of the original file, the 2nd document pages 3 & 4, and so on.
Maybe I can use the "pdftools" package, but I don't know how.
1) pdftools Assuming that the input PDF is in the current directory and the outputs are to go into the same directory, change the inputs below, then get the number of pages num, compute the st and en vectors of start and end page numbers, and repeatedly call pdf_subset. Note that the pdf_length and pdf_subset functions come from the qpdf R package but are also made available by the pdftools R package, which imports them and re-exports them.
library(pdftools)
# inputs
infile <- "a.pdf" # input pdf
prefix <- "out_" # output pdf's will begin with this prefix
num <- pdf_length(infile)
st <- seq(1, num, 2)
en <- pmin(st + 1, num)
for (i in seq_along(st)) {
  outfile <- sprintf("%s%0*d.pdf", prefix, nchar(num), i)
  pdf_subset(infile, pages = st[i]:en[i], output = outfile)
}
2) pdfbox The Apache pdfbox utility can split into files of 2 pages each. Download the .jar command-line utilities file from pdfbox and be sure you have Java installed. Then run this assuming that your input file is a.pdf and is in the current directory (or run the quoted part directly from the command line, without the quotes and without R). The jar file name below may need to be changed if a later version is to be used; the one named below is the latest at the time of writing (not counting alpha versions).
system("java -jar pdfbox-app-2.0.26.jar PDFSplit -split 2 a.pdf")
3) animation/pdftk Another option is to install the pdftk program, change the inputs at the top of the script below, and run it. This gets the number of pages in the input, num, using pdftk, then computes the start and end page numbers, st and en, and invokes pdftk repeatedly, once for each st/en pair, to extract those pages into a separate file.
library(animation)
# inputs
PDFTK <- "~/../bin/pdftk.exe" # path to pdftk
infile <- "a.pdf" # input pdf
prefix <- "out_" # output pdf's will begin with this prefix
ani.options(pdftk = Sys.glob(PDFTK))
tmp <- tempfile()
dump_data <- pdftk(infile, "dump_data", tmp)
g <- grep("NumberOfPages", readLines(tmp), value = TRUE)
num <- as.numeric(sub(".* ", "", g))
st <- seq(1, num, 2)
en <- pmin(st + 1, num)
for (i in seq_along(st)) {
  outfile <- sprintf("%s%0*d.pdf", prefix, nchar(num), i)
  pdftk(infile, sprintf("cat %d-%d", st[i], en[i]), outfile)
}
Neither pdftools nor qpdf (on which the former depends) supports splitting PDF files by anything other than "every page". You will likely need to rely on an external program; I'm confident you can get pdftk to do that by calling it once for each 2-page output.
I have a 36-page PDF here named quux.pdf in the current working directory.
str(pdftools::pdf_info("quux.pdf"))
# List of 11
# $ version : chr "1.5"
# $ pages : int 36
# $ encrypted : logi FALSE
# $ linearized : logi FALSE
# $ keys :List of 8
# ..$ Producer : chr "pdfTeX-1.40.24"
# ..$ Author : chr ""
# ..$ Title : chr ""
# ..$ Subject : chr ""
# ..$ Creator : chr "LaTeX via pandoc"
# ..$ Keywords : chr ""
# ..$ Trapped : chr ""
# ..$ PTEX.Fullbanner: chr "This is pdfTeX, Version 3.141592653-2.6-1.40.24 (TeX Live 2022) kpathsea version 6.3.4"
# $ created : POSIXct[1:1], format: "2022-05-17 22:54:40"
# $ modified : POSIXct[1:1], format: "2022-05-17 22:54:40"
# $ metadata : chr ""
# $ locked : logi FALSE
# $ attachments: logi FALSE
# $ layout : chr "no_layout"
I also have pdftk installed and available on the PATH:
Sys.which("pdftk")
# pdftk
# "C:\\PROGRA~2\\PDFtk Server\\bin\\pdftk.exe"
With this, I can run an external command to create 2-page PDFs:
list.files(pattern = "pdf$")
# [1] "quux.pdf"
pages <- seq(pdftools::pdf_info("quux.pdf")$pages)
pages <- split(pages, (pages - 1) %/% 2)
pages[1:3]
# $`0`
# [1] 1 2
# $`1`
# [1] 3 4
# $`2`
# [1] 5 6
for (pg in pages) {
  system(sprintf("pdftk quux.pdf cat %s-%s output out_%02i-%02i.pdf",
                 min(pg), max(pg), min(pg), max(pg)))
}
list.files(pattern = "pdf$")
# [1] "out_01-02.pdf" "out_03-04.pdf" "out_05-06.pdf" "out_07-08.pdf"
# [5] "out_09-10.pdf" "out_11-12.pdf" "out_13-14.pdf" "out_15-16.pdf"
# [9] "out_17-18.pdf" "out_19-20.pdf" "out_21-22.pdf" "out_23-24.pdf"
# [13] "out_25-26.pdf" "out_27-28.pdf" "out_29-30.pdf" "out_31-32.pdf"
# [17] "out_33-34.pdf" "out_35-36.pdf" "quux.pdf"
str(pdftools::pdf_info("out_01-02.pdf"))
# List of 11
# $ version : chr "1.5"
# $ pages : int 2
# $ encrypted : logi FALSE
# $ linearized : logi FALSE
# $ keys :List of 2
# ..$ Creator : chr "pdftk 2.02 - www.pdftk.com"
# ..$ Producer: chr "itext-paulo-155 (itextpdf.sf.net-lowagie.com)"
# $ created : POSIXct[1:1], format: "2022-05-18 09:37:56"
# $ modified : POSIXct[1:1], format: "2022-05-18 09:37:56"
# $ metadata : chr ""
# $ locked : logi FALSE
# $ attachments: logi FALSE
# $ layout : chr "no_layout"

Behaviour of fread for quoted character columns in version 1.11.0

With the recent changes in data.table version 1.11.0 (May 1, 2018), fread seems to have trouble recognizing the correct values of na.strings with quoted character columns.
The default behaviour in conjunction with fwrite works just fine as described in the NEWS and shown in the working example below (fread(fwrite(DT))==DT).
However, if files are written using fwrite(DT, quote = TRUE), mimicking the behaviour of write.csv, or written directly using write.csv, fread seems to have problems detecting the correct strings specified in na.strings (shown in the non-working example below).
working example
library(data.table)
dt <- data.table(A = c(1, 2, 3), B = c("a", "b", NA))
fwrite(dt, "testfile.csv")
# expected output
str(fread("testfile.csv", na.strings = c("a", "")))
Classes ‘data.table’ and 'data.frame': 3 obs. of 2 variables:
$ A: int 1 2 3
$ B: chr NA "b" NA
- attr(*, ".internal.selfref")=<externalptr>
Specification of na.strings seems to work just fine in this example using unquoted characters.
non-working example
Using the data.table dt from the example above:
fwrite(dt, "testfile_quoted.csv", quote = TRUE) # mimicking write.csv
Here, specifying na.strings = "" also gives the expected result:
str(fread("testfile_quoted.csv", na.strings = ""))
Classes ‘data.table’ and 'data.frame': 3 obs. of 2 variables:
$ A: int 1 2 3
$ B: chr "a" "b" NA
- attr(*, ".internal.selfref")=<externalptr>
However, trying to specify na.strings = c("a", "") as in the example above gives an unexpected result:
str(fread("testfile_quoted.csv", na.strings = c("a", "")))
Classes ‘data.table’ and 'data.frame': 3 obs. of 2 variables:
$ A: int 1 2 3
$ B: chr "a" "b" NA
- attr(*, ".internal.selfref")=<externalptr>
The expected result for column B should be c(NA, "b", NA) as in the working example above.
Specifying the quotes in na.strings directly also doesn't change the result for me.
str(fread("testfile_quoted.csv", na.strings = c("\"a\"", "")))
Classes ‘data.table’ and 'data.frame': 3 obs. of 2 variables:
$ A: int 1 2 3
$ B: chr "a" "b" NA
- attr(*, ".internal.selfref")=<externalptr>
Am I missing something here?
I did not have these problems prior to version 1.11.0. Is there any way to restore the old behaviour of fread such that the consistency with old files written with write.csv is maintained?
Session Info
sessionInfo()
R version 3.5.0 (2018-04-23)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 7 x64 (build 7601) Service Pack 1
Matrix products: default
locale:
[1] LC_COLLATE=German_Austria.1252 LC_CTYPE=German_Austria.1252
[3] LC_MONETARY=German_Austria.1252 LC_NUMERIC=C
[5] LC_TIME=German_Austria.1252
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] data.table_1.11.0
loaded via a namespace (and not attached):
[1] compiler_3.5.0 tools_3.5.0
[1]: https://github.com/Rdatatable/data.table/blob/master/NEWS.md
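Not an answer to the root cause, but one possible workaround (a sketch, assuming the testfile_quoted.csv from above) is to let na.strings handle only the empty strings and blank the remaining values in a post-processing step:
library(data.table)
dt2 <- fread("testfile_quoted.csv", na.strings = "")
dt2[B == "a", B := NA_character_]  # treat "a" as missing after reading
str(dt2)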

List available WFS layers and read them into a data frame with rgdal

I have the following problem: according to different sources, it should be possible to read WFS layers in R using rgdal.
dsn<-"WFS:http://geomap.reteunitaria.piemonte.it/ws/gsareprot/rp-01/areeprotwfs/wfs_gsareprot_1?service=WFS&request=getCapabilities"
ogrListLayers(dsn)
readOGR(dsn,"SIC")
The result of that code should be 1) a list of the available WFS layers and 2) a specific layer (SIC) read into R as a Spatial(Points)DataFrame.
I tried several other WFS servers, but it does not work.
I always get the warning:
Cannot open data source
Checking for the WFS driver i get the following result:
> "WFS" %in% ogrDrivers()$name
[1] FALSE
Well it looks like the WFS driver is not implemented in rgdal (anymore?)
Or why are there so many examples "claiming" the opposite?
I also tried the gdalUtils package, and it works, but it prints the whole console output of ogrinfo.exe rather than only the available layers. (I guess it "just" calls ogrinfo.exe and sends the result back to R, as with the shell or system command.)
Well, does anyone know what I'm doing wrong, or whether something like this is even possible with rgdal or a similar package?
You can combine the two packages to accomplish your task.
First, convert the layer you need into a local shapefile using gdalUtils. Then use rgdal as usual. NOTE: you'll see a warning message after the ogr2ogr call, but it performed the conversion fine for me. Also, ogr2ogr won't overwrite local files unless the overwrite parameter is TRUE (there are other parameters that may be of use as well).
library(gdalUtils)
library(rgdal)
dsn <- "WFS:http://geomap.reteunitaria.piemonte.it/ws/gsareprot/rp-01/areeprotwfs/wfs_gsareprot_1?service=WFS&request=getCapabilities"
ogrinfo(dsn, so=TRUE)
## [1] "Had to open data source read only."
## [2] "INFO: Open of `WFS:http://geomap.reteunitaria.piemonte.it/ws/gsareprot/rp-01/areeprotwfs/wfs_gsareprot_1?service=WFS&request=getCapabilities'"
## [3] " using driver `WFS' successful."
## [4] "1: AreeProtette"
## [5] "2: ZPS"
## [6] "3: SIC"
ogr2ogr(dsn, "sic.shp", "SIC")
sic <- readOGR("sic.shp", "sic", stringsAsFactors=FALSE)
## OGR data source with driver: ESRI Shapefile
## Source: "sic.shp", layer: "sic"
## with 128 features
## It has 23 fields
plot(sic)
str(sic@data)
## 'data.frame': 128 obs. of 23 variables:
## $ gml_id : chr "SIC.510" "SIC.472" "SIC.470" "SIC.508" ...
## $ objectid : chr "510" "472" "470" "508" ...
## $ inspire_id: chr NA NA NA NA ...
## $ codice : chr "IT1160026" "IT1160017" "IT1160018" "IT1160020" ...
## $ nome : chr "Faggete di Pamparato, Tana del Forno, Grotta delle Turbiglie e Grotte di Bossea" "Stazione di Linum narbonense" "Sorgenti del T.te Maira, Bosco di Saretto, Rocca Provenzale" "Bosco di Bagnasco" ...
## $ cod_tipo : chr "B" "B" "B" "B" ...
## $ tipo : chr "SIC" "SIC" "SIC" "SIC" ...
## $ cod_reg_bi: chr "1" "1" "1" "1" ...
## $ des_reg_bi: chr "Alpina" "Alpina" "Alpina" "Alpina" ...
## $ mese_istit: chr "11" "11" "11" "11" ...
## $ anno_istit: chr "1996" "1996" "1996" "1996" ...
## $ mese_ultmo: chr "2" NA NA NA ...
## $ anno_ultmo: chr "2002" NA NA NA ...
## $ sup_sito : chr "29396102.9972" "82819.1127" "7272687.002" "3797600.3563" ...
## $ perim_sito: chr "29261.8758" "1227.8846" "17650.289" "9081.4963" ...
## $ url1 : chr "http://gis.csi.it/parchi/schede/IT1160026.pdf" "http://gis.csi.it/parchi/schede/IT1160017.pdf" "http://gis.csi.it/parchi/schede/IT1160018.pdf" "http://gis.csi.it/parchi/schede/IT1160020.pdf" ...
## $ url2 : chr "http://gis.csi.it/parchi/carte/IT1160026.djvu" "http://gis.csi.it/parchi/carte/IT1160017.djvu" "http://gis.csi.it/parchi/carte/IT1160018.djvu" "http://gis.csi.it/parchi/carte/IT1160020.djvu" ...
## $ fk_ente : chr NA NA NA NA ...
## $ nome_ente : chr NA NA NA NA ...
## $ url3 : chr NA NA NA NA ...
## $ url4 : chr NA NA NA NA ...
## $ tipo_geome: chr "poligono" "poligono" "poligono" "poligono" ...
## $ schema : chr "Natura2000" "Natura2000" "Natura2000" "Natura2000" ...
Neither the questioner nor the answerer says how rgdal was installed. If it is a CRAN binary for Windows or OS X, it may well have a smaller set of drivers than the independent GDAL installation underlying gdalUtils. Always state your platform and whether rgdal was installed as a binary or from source, and always provide the messages displayed as rgdal loads, as well as the output of sessionInfo(), to show the platform on which you are running.
Given the possible difference in sets of drivers, the advice given seems reasonable.
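As a sketch of the kind of diagnostics worth attaching to such a question, both of these come straight from rgdal:
library(rgdal)
getGDALVersionInfo()     # the GDAL release this rgdal build links against
sort(ogrDrivers()$name)  # the vector drivers this build actually ships
sessionInfo()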

ASCII file in R: as.numeric integers are incorrect

I have read an ASCII (.spe) file into R. This file contains one column of, mostly, integers. However, R is interpreting these integers incorrectly, probably because I am not specifying the correct format or something like that. The file was generated by the Ortec Maestro software. Here is the code:
library(SDMTools)
strontium<-read.table("C:/Users/Hal 2/Desktop/beta_spec/strontium 90 spectrum.spe",header=F,skip=2)
str_spc<-vector(mode="numeric")
for (i in 1:2037) {
  str_spc[i] <- as.numeric(strontium$V1[i + 13])
}
Here, for example, strontium$V1[14] has the value 0, but R is interpreting it as a 10. I think I may have to convert the data to some other format, or something like that, but I'm not sure and I'm probably googling the wrong search terms.
Here are the first few lines from the file:
$SPEC_ID:
No sample description was entered.
$SPEC_REM:
DET# 1
DETDESC# MCB 129
AP# Maestro Version 6.08
$DATE_MEA:
10/14/2014 15:13:16
$MEAS_TIM:
1516 1540
$DATA:
0 2047
Here is a link to the file: https://www.dropbox.com/sh/y5x68jen487qnmt/AABBZyC6iXBY3e6XH0XZzc5ba?dl=0
Any help appreciated.
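For what it's worth, the likely culprit is that the header lines shown above are being consumed as data, so the skip offset is wrong. A minimal sketch of a quick fix, assuming the layout above (one count per line after the $DATA: marker, and 2048 channels as the "0 2047" line indicates):
lines <- readLines("strontium 90 spectrum.spe")
start <- grep("^\\$DATA:", lines) + 2   # skip the marker line and the "0 2047" range line
counts <- as.numeric(lines[seq(start, length.out = 2048)])  # stop before any trailing $ fields
head(counts)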
I saw that someone had made a parser for SPE spectra files in Python, and I can't let that stand without there being at least a minimally functioning R version, so here's one that parses some of the fields but gets you your data:
library(stringr)
library(gdata)
library(lubridate)
read.spe <- function(file) {
  tmp <- readLines(file)
  tmp <- paste(tmp, collapse="\n")
  records <- strsplit(tmp, "\\$")[[1]]
  records <- records[records != ""]
  spe <- list()
  spe[["SPEC_ID"]] <- str_match(records[which(startsWith(records, "SPEC_ID"))],
                                "^SPEC_ID:[[:space:]]*([[:print:]]+)[[:space:]]+")[2]
  spe[["SPEC_REM"]] <- strsplit(str_match(records[which(startsWith(records, "SPEC_REM"))],
                                          "^SPEC_REM:[[:space:]]*(.*)")[2], "\n")
  spe[["DATE_MEA"]] <- mdy_hms(str_match(records[which(startsWith(records, "DATE_MEA"))],
                                         "^DATE_MEA:[[:space:]]*(.*)[[:space:]]$")[2])
  spe[["MEAS_TIM"]] <- strsplit(str_match(records[which(startsWith(records, "MEAS_TIM"))],
                                          "^MEAS_TIM:[[:space:]]*(.*)[[:space:]]$")[2], "\n")[[1]]
  spe[["ROI"]] <- str_match(records[which(startsWith(records, "ROI"))],
                            "^ROI:[[:space:]]*(.*)[[:space:]]$")[2]
  spe[["PRESETS"]] <- strsplit(str_match(records[which(startsWith(records, "PRESETS"))],
                                         "^PRESETS:[[:space:]]*(.*)[[:space:]]$")[2], "\n")[[1]]
  spe[["ENER_FIT"]] <- strsplit(str_match(records[which(startsWith(records, "ENER_FIT"))],
                                          "^ENER_FIT:[[:space:]]*(.*)[[:space:]]$")[2], "\n")[[1]]
  spe[["MCA_CAL"]] <- strsplit(str_match(records[which(startsWith(records, "MCA_CAL"))],
                                         "^MCA_CAL:[[:space:]]*(.*)[[:space:]]$")[2], "\n")[[1]]
  spe[["SHAPE_CAL"]] <- str_match(records[which(startsWith(records, "SHAPE_CAL"))],
                                  "^SHAPE_CAL:[[:space:]]*(.*)[[:space:]]*$")[2]
  spe_dat <- strsplit(str_match(records[which(startsWith(records, "DATA"))],
                                "^DATA:[[:space:]]*(.*)[[:space:]]$")[2], "\n")[[1]]
  spe[["SPE_DAT"]] <- as.numeric(gsub("[[:space:]]", "", spe_dat)[-1])
  return(spe)
}
dat <- read.spe("strontium 90 spectrum.Spe")
str(dat)
## List of 10
## $ SPEC_ID : chr "No sample description was entered."
## $ SPEC_REM :List of 1
## ..$ : chr [1:3] "DET# 1" "DETDESC# MCB 129" "AP# Maestro Version 6.08"
## $ DATE_MEA : POSIXct[1:1], format: "2014-10-14 15:13:16"
## $ MEAS_TIM : chr "1516 1540"
## $ ROI : chr "0"
## $ PRESETS : chr [1:3] "None" "0" "0"
## $ ENER_FIT : chr "0.000000 0.002529"
## $ MCA_CAL : chr [1:2] "3" "0.000000E+000 2.529013E-003 0.000000E+000 keV"
## $ SHAPE_CAL: chr "3\n3.100262E+001 0.000000E+000 0.000000E+000"
## $ SPE_DAT : num [1:2048] 0 0 0 0 0 0 0 0 0 0 ...
head(dat$SPE_DAT)
## [1] 0 0 0 0 0 0
It needs some polish and there's absolutely no error checking (e.g. for missing fields), but there's no time today to deal with that. I'll finish the parsing and make a minimal package wrapper for it over the next couple of days.
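As a quick sanity check on the parsed counts, a one-liner plotting the spectrum channel by channel:
plot(dat$SPE_DAT, type = "h", xlab = "channel", ylab = "counts")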

searchTwitter timestamps

I am using the twitteR library in R and wondering if it is possible to get the timestamps associated with a search, or with a timeline for that matter. E.g., if searching #rstats using searchTwitter, I would like to know when the tweets were made. Are there additional parameters I need to pass in order to get that information?
Here is some example code...
library(twitteR)
searchTwitter("#rstats",n=10)
giving the following result
[[1]]
[1] "MinneAnalytics: #thomaswdinsmore RT #erikriverson: Some thoughts from an observer on the #Rstats track at #BigDataMN. http://t.co/i42PEQHz #R at #CSOM"
[[2]]
[1] "pentalibra: My package ggdendro to draw dendrograms with ggplot2 is back on CRAN. http://t.co/gMviOSnQ Wait a day or so for Windows binary/ #rstats"
[[3]]
[1] "Lachamadice: RT #freakonometrics: \"Regression tree using Gini's index\" http://t.co/tUplMqQj with #rstats"
[[4]]
[1] "Rbloggers: Tracking Number of Historical Clusters: \n(This article was first published on Systematic Investor » R,... http://t.co/jRnWUQ2Y #rstats"
[[5]]
[1] "Rbloggers: ggplot2 multiple boxplots with metadata: \n(This article was first published on mintgene » R, and kindl... http://t.co/re2gghTx #rstats"
[[6]]
[1] "Rbloggers: Learning R using a Chemical Reaction Engineering Book: Part 3: \n(This article was first published on N... http://t.co/agCJi9Rr #rstats"
[[7]]
[1] "Rbloggers: Learning R using a Chemical Reaction Engineering Book: Part 2: \n(This article was first published on N... http://t.co/2qqpgQrq #rstats"
[[8]]
[1] "Rbloggers: Waiting for an API request to complete: \n(This article was first published on Recology - R, and kindly... http://t.co/MZzxHVdw #rstats"
[[9]]
[1] "heidelqekhse3: RT #geospacedman: Just got an openlayers map working on an #rstats #shiny app at #nhshd but... meh."
[[10]]
[1] "jveik: Slides and replay of “Using R with Hadoop” webinar now available #rstats #hadoop | #scoopit http://t.co/Ar2F7We3"
After a Google search:
mytweet <- searchTwitter("#chocolate",n=10)
str(mytweet[[1]])
Reference class 'status' [package "twitteR"] with 10 fields
$ text : chr "The #chocolate part of the #croquette. #dumplings #truffles http://t.co/Imwt3tTP"
$ favorited : logi FALSE
$ replyToSN : chr(0)
$ created : POSIXct[1:1], format: "2013-01-27 16:26:03"
$ truncated : logi FALSE
$ replyToSID : chr(0)
$ id : chr "295568362526896128"
$ replyToUID : chr(0)
$ statusSource: chr "<a href=\"http://instagr.am\">Instagram</a>"
$ screenName : chr "tahiatmahboob"
and 33 methods, of which 22 are possibly relevant:
getCreated, getFavorited, getId, getReplyToSID, getReplyToSN, getReplyToUID, getScreenName, getStatusSource, getText, getTruncated,
initialize, setCreated, setFavorited, setId, setReplyToSID, setReplyToSN, setReplyToUID, setScreenName, setStatusSource, setText,
setTruncated, toDataFrame
So the timestamp is:
mytweet[[1]]$created
[1] "2013-01-27 16:26:03 UTC"
Never used twitteR until I read your question. Seems like something fun to do when bored.
One alternative to parsing the result (as in the answer above) is to use the arguments since and until.
For example you can do:
res <- searchTwitter("#rstats", n = 1000, since = '2013-01-24',
                     until = '2013-01-28')
searchTwitter is a wrapper around Twitter's JSON search API. Take a look here for more details on the arguments and examples of the JSON results.
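One more convenience worth knowing: twitteR ships twListToDF(), which flattens a list of status objects into a data frame with a created column, so you get all the timestamps at once:
res <- searchTwitter("#rstats", n = 100)
df <- twListToDF(res)
head(df[, c("created", "text")])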
