RCurl - Boolean Options

These Curl docs: http://curl.haxx.se/docs/manpage.html#-d list many boolean options.
How do I specify these options in a postForm call in RCurl? For example, how do I specify the --sslv3 flag?
I tried
postForm(url, .opts = list(sslv3=TRUE))
but received the error:
Warning message:
In mapCurlOptNames(names(.els), asNames = TRUE) :
Unrecognized CURL options: sslv3
Thanks in advance.
SOLUTION
Through some trial and error, I found that this works:
options(RCurlOptions = list(sslversion=3))
postForm(url)
If anyone could clarify how to translate the curl options to the RCurl options, it would be appreciated!

Curl stands for a few things (http://daniel.haxx.se/docs/curl-vs-libcurl.html). The problem here is that you are looking at what the curl command-line tool does, when you should instead be asking how the libcurl library implements it.
RCurl uses the libcurl library, which is accessed via an API. The "symbols" used in the API are listed at http://curl.haxx.se/libcurl/c/symbols-in-versions.html. We can compare them to the options listed by RCurl:
library(RCurl)
cInfo <- getURL("http://curl.haxx.se/libcurl/c/symbols-in-versions.html")
cInfo <- unlist(strsplit(cInfo, "\n"))
# keep only the lines naming a CURLOPT_ symbol
cInfo <- cInfo[grep("CURLOPT_", cInfo)]
# extract the first whitespace-delimited token (the symbol name)
cInfo <- gsub("^\\s*(\\S+)\\s.*", "\\1", cInfo)
cInfo <- gsub("CURLOPT_", "", cInfo)
cInfo <- tolower(gsub("_", ".", cInfo))
# any RCurl option names not derived from a libcurl symbol?
listCurlOptions()[!listCurlOptions() %in% cInfo]
From the above we can see that all RCurl options are derived from libcurl API symbols: the CURLOPT_ prefix is removed, each _ is replaced by ., and the letters are converted to lower case.
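In code form, the mapping rule is just this (a throwaway sketch; curloptToR is a made-up name):
curloptToR <- function(sym) tolower(gsub("_", ".", sub("^CURLOPT_", "", sym)))
curloptToR("CURLOPT_SSLVERSION")   # "sslversion"
curloptToR("CURLOPT_NETRC_FILE")   # "netrc.file"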
The question then arises as to what types the symbols represent. I usually just look at the PHP library documentation to discover this. http://php.net/manual/en/function.curl-setopt.php lists
CURLOPT_SSLVERSION The SSL version (2 or 3) to use. By default PHP will try to determine this itself, although in some cases this must be set manually.
as an integer type expecting the value 2 or 3.
Alternatively you can look at the curl_easy_setopt manual page http://curl.haxx.se/libcurl/c/curl_easy_setopt.html.
CURLOPT_SSLVERSION
Pass a long as parameter to control what version of SSL/TLS to attempt to use. The available options are:
CURL_SSLVERSION_DEFAULT
The default action. This will attempt to figure out the remote SSL protocol version, i.e. either SSLv3 or TLSv1 (but not SSLv2, which became disabled by default with 7.18.1).
CURL_SSLVERSION_TLSv1
Force TLSv1
CURL_SSLVERSION_SSLv2
Force SSLv2
CURL_SSLVERSION_SSLv3
Force SSLv3
It says we would need to pass a long with the value CURL_SSLVERSION_SSLv3 to stipulate SSLv3.
What is the value of CURL_SSLVERSION_SSLv3? We can examine RCurl:::SSLVERSION_SSLv3:
> c(RCurl:::SSLVERSION_DEFAULT, RCurl:::SSLVERSION_TLSv1, RCurl:::SSLVERSION_SSLv2, RCurl:::SSLVERSION_SSLv3)
[1] 0 1 2 3
So in fact the permissible values for sslversion are 0, 1, 2 or 3.
The confusion in this case arose because the curl command-line program, which itself uses the libcurl API, exposes this option as a boolean flag (--sslv3) rather than as an integer.
So the correct way in this case to use this option would be:
postForm(url, .opts = list(sslversion = 3))
or
postForm(url, .opts = list(sslv = 3))
You can use the shorter sslv because .opts is passed to mapCurlOptNames, which uses pmatch to resolve it to sslversion.
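For what it's worth, two quick checks (a minimal sketch; I'm assuming curlOptions expands partial names the same way, since it also goes through mapCurlOptNames):
library(RCurl)
# the internal constant equals 3, so this is equivalent to sslversion = 3
postForm(url, .opts = list(sslversion = RCurl:::SSLVERSION_SSLv3))
# curlOptions should expand the partial name via pmatch
opts <- curlOptions(sslv = 3)
names(opts)   # expected: "sslversion"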
To be fair to the author of RCurl, this is all explained in http://www.omegahat.org/RCurl/philosophy.html (also located at /RCurl/inst/doc/philosophy.html). An excerpt reads:
Each of these and what it controls is described in the libcurl
man(ual) page for curl_easy_setopt and that is the authoritative
documentation. Anything we provide here is merely repetition or
additional explanation.
The names of the options require a slight explanation. These
correspond to symbolic names in the C code of libcurl. For example,
the option url in R corresponds to CURLOPT_URL in C. Firstly,
uppercase letters are annoying to type and read, so we have mapped
them to lower case letters in R. We have also removed the prefix
"CURLOPT_" since we know the context in which they option names are
being used. And lastly, any option names that have a _ (after we have
removed the CURLOPT_ prefix) are changed to replace the '_' with a '.'
so we can type them in R without having to quote them. For example,
combining these three rules, "CURLOPT_URL" becomes url and
CURLOPT_NETRC_FILE becomes netrc.file. That is the mapping scheme.

Try this (after reviewing the examples in ?curlOptions, to which ?postForm refers):
myOpts = curlOptions(sslv3 = TRUE)
postForm(url, .opts = myOpts)
Although I admit I thought your code should work. You may also need to post your version numbers. There is also a curlSetOpt that might be more "assertive".

Related

libcurl function was given a bad argument CURLOPT_SSL_VERIFYHOST no longer supports 1 as value

While running the "PrepareAnnotationRefseq" function from the customProDB package in R, I ran into a problem due to a compatibility issue with the curl version. I am currently using curl version 4.3.2. The error report I got is:
PrepareAnnotationRefseq(genome='mm39',CDSfasta="geneseq.fasta",pepfasta="proteinseq.fasta", annotation_path, dbsnp = NULL, splice_matrix=FALSE, ClinVar=FALSE)
In curlSetOpt(..., .opts = .opts, curl = h, .encoding = .encoding) : Error setting the option for # 3 (status = 43) (enum = 81) (value = 0x55822c7f3b70): A libcurl function was given a bad argument CURLOPT_SSL_VERIFYHOST no longer supports 1 as value!
This may be a trivial problem for an expert in R; however, with my current skill set I have been unable to resolve it after looking for a solution on several forums and R groups. I would be very grateful if you could shed some light on this issue, perhaps with a patch that can fix the problem.
It's easy to read the manual. Why can't you do it?
If the verify value is set to 1:
From 7.28.1 to 7.65.3: setting it to 1 made curl_easy_setopt() return an error and leave the flag untouched.
Use 2.
When CURLOPT_SSL_VERIFYHOST is 2, that certificate must indicate that the server is the server to which you meant to connect, or the connection fails. Simply put, it means it has to have the same name in the certificate as is in the URL you operate against.
But why do you touch it? The default value for this option is 2 and is suitable for most cases of libcurl usage.
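For reference, a minimal RCurl sketch of that advice (ssl.verifyhost is RCurl's name for CURLOPT_SSL_VERIFYHOST):
library(RCurl)
# 2 is the strict (and default) host check; 0 disables it; 1 is no longer accepted
h <- getCurlHandle()
curlSetOpt(ssl.verifyhost = 2, curl = h)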

None of the keys entered are valid keys

I am trying to learn how to manipulate microarrays for differential expression analysis. While trying to add some annotation, I cannot find the keytype that matches my keys:
select(hugene10sttranscriptcluster.db,
       keys = my_keys,
       columns = c("GENENAME", "SYMBOL"),
       keytype = "PROBEID")
-------------------------------------------------------
Error in .testForValidKeys(x, keys, keytype, fks) :
None of the keys entered are valid keys for 'PROBEID'. Please use the keys method to see a listing of valid arguments.
The keys being:
my_keys
---------------------------------------------------------------------
[1] "16650045" "16650047" "16650049" "16650051" "16650053" "16650055" "16650057" "16650059"
I tried every possible type from keytypes(hugene10sttranscriptcluster.db) with no successful result:
"16650045" %in% keys(hugene10sttranscriptcluster.db, "GENEID")
------------------------------------------------------------------
[1] FALSE
Is there any documentation or alternative where I can find it? I have been looking through the documentation (ArrayExpress) but it did not help me. I am also not sure: is it possible that I require a different package than hugene10sttranscriptcluster.db?
In fact, I did have a problem with the package. If anyone has the same problem, just look up the annotation of the microarray in its documentation (pd.hugene.2.0.st in my case) to install and use the proper package (hugene20sttranscriptcluster.db).
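For anyone landing here, a minimal sketch of that check with the matching package (assuming the same my_keys as above):
library(hugene20sttranscriptcluster.db)
# list the supported keytypes, then confirm the probe IDs are valid PROBEIDs
keytypes(hugene20sttranscriptcluster.db)
"16650045" %in% keys(hugene20sttranscriptcluster.db, keytype = "PROBEID")
select(hugene20sttranscriptcluster.db,
       keys = my_keys,
       columns = c("GENENAME", "SYMBOL"),
       keytype = "PROBEID")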

python3 imaplib search function encoding

Can someone point out how to search properly using imaplib in Python? The email server is Microsoft Exchange, which seems to have problems, but I would want a solution from the Python/imaplib side.
https://github.com/barbushin/php-imap/issues/128
I so far use:
import imaplib
M = imaplib.IMAP4_SSL(host_name, port_name)
M.login(u, p)
M.select()
s_str = 'hello'
M.search(s_str)
And I get the following error:
>>> M.search(s_str)
('NO', [b'[BADCHARSET (US-ASCII)] The specified charset is not supported.'])
search takes two or more parameters: an encoding (charset) and the search criteria. You can pass None as the encoding to not specify one; 'hello' is not a valid charset.
You also need to specify what you are searching: IMAP has a complex search language, detailed in RFC 3501 §6.4.4, and imaplib does not provide a high-level interface for it.
So, with both of those in mind, you need to do something like:
M.search(None, 'BODY', '"HELLO"')
or
M.search(None, 'FROM', '"HELLO"')

Why does url.exists return FALSE when the URL does exist, using RCurl?

For example:
if(url.exists("http://www.google.com")) {
# Two ways to submit a query to google. Searching for RCurl
getURL("http://www.google.com/search?hl=en&lr=&ie=ISO-8859-1&q=RCurl&btnG=Search")
# Here we let getForm do the hard work of combining the names and values.
getForm("http://www.google.com/search", hl="en", lr="",ie="ISO-8859-1", q="RCurl", btnG="Search")
# And here if we already have the parameters as a list/vector.
getForm("http://www.google.com/search", .params = c(hl="en", lr="", ie="ISO-8859-1", q="RCurl", btnG="Search"))
}
This is an example from the RCurl package manual. However, it does not work:
> url.exists("http://www.google.com")
[1] FALSE
I found there is an answer to this here: Rcurl: url.exists returns false when url does exists. It said this is because the default user agent is not useful. But I do not understand what a user agent is or how to set one.
Also, this error happened when I was working at my company; I tried the same code at home, and it worked fine. So I am guessing this is because of a proxy, or some other reason I did not realize.
I need to use RCurl to send my queries to Google and then extract information such as titles and descriptions from the results. In this case, how do I use a user agent? Or can the package httr do this?
Thanks a lot for the help, guys. I think I just figured out how to do it. The important thing is the proxy. If I use:
> opts <- list(
    proxy = "http://*******",
    proxyusername = "*****",
    proxypassword = "*****",
    proxyport = 8080
  )
> url.exists("http://www.google.com", .opts = opts)
[1] TRUE
Then it's all done! You can find your proxy settings under System --> Proxy if you use Windows 10. At the same time:
> site <- getForm("http://www.google.com.au", hl="en",
lr="", q="r-project", btnG="Search",.opts = opts)
> htmlTreeParse(site)
$file
[1] "<buffer>"
.........
In getForm, .opts needs to be passed in as well. There are two posts here (RCurl default proxy settings and Proxy setting for R) answering the same question. I have not yet tried extracting information from the results.
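As for the user-agent part of the question, which the linked answer pointed to: a minimal sketch (the UA string here is an arbitrary example):
library(RCurl)
# useragent maps to CURLOPT_USERAGENT; some sites reject libcurl's default
url.exists("http://www.google.com",
           .opts = list(useragent = "Mozilla/5.0 (compatible; R RCurl)"))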

Logfile analysis in R?

I know there are other tools around like awstats or splunk, but I wonder whether there is any serious (web)server logfile analysis going on in R. R might not be the first thought for the job, but it has nice visualization capabilities and also nice spatial packages. Do you know of any? Or is there an R package or code that handles the most common log file formats that one could build on? Or is it simply a very bad idea?
In connection with a project to build an analytics toolbox for our Network Ops guys, I built one of these about two months ago. My employer has no problem if I open source it, so if anyone is interested I can put it up on my github repo. I assume it's most useful to this group if I build an R package. I won't be able to do that straight away, though, because I need to research the docs on package building with non-R code (it might be as simple as tossing the Python bytecode files in /exec along with a suitable Python runtime, but I have no idea).
I was actually surprised that I needed to undertake a project of this sort. There are at least several excellent open-source and free log file parsers/viewers (including the excellent Webalyzer and AWStats), but neither parses server error logs (parsing server access logs is the primary use case for both).
If you are not familiar with error logs or with the difference between them and access logs: in sum, Apache servers (likewise nginx and IIS) record two distinct logs and store them to disk by default, next to each other in the same directory. On Mac OS X, that directory is in /var, just below root:
$> pwd
/var/log/apache2
$> ls
access_log error_log
For network diagnostics, error logs are often far more useful than the access logs. They also happen to be significantly more difficult to process, because of the unstructured nature of the data in many of the fields and, more significantly, because the data file you are left with after parsing is an irregular time series: you might have multiple entries keyed to a single timestamp, then the next entry is three seconds later, and so forth.
I wanted an app into which I could toss raw error logs (of any size, but usually several hundred MB at a time) and have something useful come out the other end, which in this case had to be some pre-packaged analytics and also a data cube available inside R for command-line analytics. Given this, I coded the raw-log parser in Python, while the processor (e.g., gridding the parser output to create a regular time series) and all analytics and data visualization I coded in R.
I have been building analytics tools for a long time, but only in the past four years have I been using R. So my first impression, immediately upon parsing a raw log file and loading the data frame in R, is what a pleasure R is to work with and how well suited it is to tasks of this sort. A few welcome surprises:
Serialization. To persist working data in R is a single command (save). I knew this, but I didn't know how efficient this binary format is. The actual data: for every 50 MB of raw logfiles parsed, the .RData representation was about 500 KB, i.e. 100:1 compression. (Note: I pushed this down further to about 300:1 by using the data.table library and manually setting the compression level argument to the save function; a sketch follows this list);
IO. My data warehouse relies heavily on a lightweight data-structure server that resides entirely in RAM and writes to disk asynchronously, called redis. The project itself is only about two years old, yet there's already a redis client for R on CRAN (by B.W. Lewis, version 1.6.1 as of this post);
Primary Data Analysis. The purpose of this project was to build a library for our Network Ops guys to use. My goal was a "one command = one data view" type of interface. So, for instance, I used the excellent googleVis package to create professional-looking scrollable/paginated HTML tables with sortable columns, into which I loaded data frames of aggregated data (>5,000 lines). Just those few interactive elements, e.g., sorting a column, delivered useful descriptive analytics. Another example: I wrote a lot of thin wrappers over some basic data-juggling and table-like functions; each of these functions I would, for instance, bind to a clickable button on a tabbed web page. Again, this was a pleasure to do in R, in part because quite often the function required no wrapper; the single command with the arguments supplied was enough to generate a useful view of the data.
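As promised in the Serialization point above, the compression tweak in sketch form (logDF stands in for the parsed log data frame):
# compress = "xz" with a high level pushed my ratio well past the default
save(logDF, file = "logs.RData", compress = "xz", compression_level = 9)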
A couple of examples of the last bullet:
# what are the most common issues that cause an error to be logged?
err_order = function(df) {
  # tabulate error causes, then sort by frequency
  t0 = xtabs(~Issue_Descr, df)
  m = cbind(names(t0), t0)
  rownames(m) = NULL
  colnames(m) = c("Cause", "Count")
  ndx = order(as.numeric(m[, 2]), decreasing = TRUE)
  m = m[ndx, ]
  m1 = data.frame(Cause = m[, 1],
                  Count = as.numeric(m[, 2]),
                  CountAsProp = 100 * as.numeric(m[, 2]) / dim(df)[1])
  # keep only causes accounting for at least 1% of logged errors
  subset(m1, CountAsProp >= 1.0)
}
# calling this function, passing in a data frame, returns something like:
Cause Count CountAsProp
1 'connect to unix://var/ failed' 200 40.0
2 'object buffered to temp file' 185 37.0
3 'connection refused' 94 18.8
The primary data cube displayed for interactive analysis using googleVis:
[Figure: a contingency table, from an xtabs call, displayed using googleVis]
It is in fact an excellent idea. R also has very good date/time capabilities, can do cluster analysis or use any variety of machine-learning algorithms, and has three different regexp engines for parsing, etc.
And it may not be a novel idea. A few years ago I was in brief email contact with someone using R for proactive (rather than reactive) logfile analysis: Read the logs, (in their case) build time-series models, predict hot spots. That is so obviously a good idea. It was one of the Department of Energy labs but I no longer have a URL. Even outside of temporal patterns there is a lot one could do here.
I have used R to load and parse IIS log files with some success; here is my code.
Load IIS Log files
require(data.table)
setwd("Log File Directory")
# get a list of all the log files
log_files <- Sys.glob("*.log")
# read each log file and concatenate them
IIS <- do.call("rbind",
               lapply(log_files, read.csv, sep = " ", header = FALSE,
                      comment.char = "#", na.strings = "-"))
# add field names - copy the "Fields" header line from one of the log files
# (underscores used throughout so the names need no quoting)
colnames(IIS) <- c("date", "time", "s_ip", "cs_method", "cs_uri_stem",
                   "cs_uri_query", "s_port", "cs_username", "c_ip",
                   "cs_User_Agent", "sc_status", "sc_substatus",
                   "sc_win32_status", "sc_bytes", "cs_bytes", "time_taken")
# change it to a data.table
IIS <- data.table(IIS)
# query at will
IIS[, .N, by = list(sc_status, cs_username, cs_uri_stem, sc_win32_status)]
I did a logfile analysis recently using R. It was not a really complex thing, mostly descriptive tables. R's built-in functions were sufficient for this job.
The problem was the data storage, as my logfiles were about 10 GB. Revolution R does offer new methods to handle such big data, but in the end I decided to use a MySQL database as a backend (which in fact reduced the size to 2 GB through normalization).
That could also solve your problem in reading logfiles into R.
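To give an idea, a minimal sketch of the query side of such a backend (assuming the RMySQL package, and a hypothetical weblogs database with an access_log table):
library(RMySQL)
# connect to the backend holding the normalized log rows
con <- dbConnect(MySQL(), dbname = "weblogs",
                 user = "loguser", password = "secret")
# aggregate in the database and pull only the summary into R
status_counts <- dbGetQuery(con,
  "SELECT sc_status, COUNT(*) AS n FROM access_log GROUP BY sc_status")
dbDisconnect(con)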
#!/usr/bin/env python3
import argparse
import csv
import io

# two sample Apache access-log lines, used when no input file is supplied
sample = [
    '54.67.81.141 - - [01/Apr/2015:13:39:22 +0000] "GET / HTTP/1.1" 502 173 "-" "curl/7.41.0" "-"',
    '54.67.81.141 - - [01/Apr/2015:13:39:22 +0000] "GET / HTTP/1.1" 502 173 "-" "curl/7.41.0" "-"',
]

class OurDialect:
    # each log line is written as a single field; escaping the space
    # delimiter with a comma yields the ", "-separated output below
    escapechar = ','
    delimiter = ' '
    quoting = csv.QUOTE_NONE

parser = argparse.ArgumentParser()
parser.add_argument('-f', '--source', dest='source', default=None,
                    help='path to a raw access log (sample lines used if omitted)')
arguments = parser.parse_args()

if arguments.source:
    with open(arguments.source) as fin:
        lines = fin.readlines()
else:
    lines = sample

header = ['IP', 'Ident', 'User', 'Timestamp', 'Offset', 'HTTP Verb',
          'HTTP Endpoint', 'HTTP Version', 'HTTP Return code',
          'Size in bytes', 'User-Agent']
# strip trailing newlines and drop the [, ] and " delimiters
lines = [l.rstrip('\n').replace('[', '').replace(']', '').replace('"', '')
         for l in lines]

out = io.StringIO()
csv.writer(out).writerow(header)
writer = csv.writer(out, dialect=OurDialect)
writer.writerows([[l] for l in lines])
print(out.getvalue())
Demo output:
IP,Ident,User,Timestamp,Offset,HTTP Verb,HTTP Endpoint,HTTP Version,HTTP Return code,Size in bytes,User-Agent
54.67.81.141, -, -, 01/Apr/2015:13:39:22, +0000, GET, /, HTTP/1.1, 502, 173, -, curl/7.41.0, -
54.67.81.141, -, -, 01/Apr/2015:13:39:22, +0000, GET, /, HTTP/1.1, 502, 173, -, curl/7.41.0, -
This format can easily be read into R using read.csv, and it doesn't require any third-party libraries.
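For instance (a minimal sketch; parsed_access_log.csv is a hypothetical file holding the script's output, and I skip the header line because the demo rows carry a couple more fields than there are header names):
logs <- read.csv("parsed_access_log.csv", header = FALSE,
                 skip = 1, strip.white = TRUE)
head(logs)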
